NEW DELHI: At Amazon Web Services’ annual developer conference in November 2017, CEO Andy Jassy said that in a few years, few companies will have their own data centres, and those that do will have much smaller data centre footprints.
The logic was this: companies have no reason to invest in their own IT hardware and software, and to hire an army of engineers to manage these systems, when they can easily rent all of that from public clouds – the likes of AWS, Microsoft Azure and Google – paying only for what they use, and only for as long as they use it.
But the matter is probably not that simple. While public clouds are still growing strongly, many companies that moved to the cloud are said to be reconsidering at least part of that choice.
Tim Yeaton, chief marketing officer at Red Hat, says about 80% of their customers with workloads on public clouds are reassessing those deployments, and a high percentage of them are expected to bring one or more of those workloads back to their in-house data centres. Rajesh Shetty, managing director of enterprise sales at Cisco India, says many customers who went to the public cloud have realised that the cost penalty at scale is too high. In other words, beyond a certain scale, a public cloud can be less efficient than running your own infrastructure.
Flipkart is a good example. The e-commerce company says the scale and complexity of its operations are beyond what any public cloud service provider can handle. “At some point of time, you will hit a ceiling with any other service provider. Also, their mindset is to go with the least common denominator approach – whatever works for most of my consumers I will do, but for any customisation, you are on your own,” says a senior technologist in the company.
Flipkart’s push notifications go out to over 150 million devices in less than 7 minutes. During its Big Billion Day sale, the complexity is mind-boggling. Customisation is key to its ability to ensure reliability at this scale. “We require custom networking setups. The switches, routers, the layout, the chips, storage, all of that we handpick, and we configure to ensure network connectivity is optimal, and we are in control,” says the technologist.
NetApp CEO George Kurian makes the same point. He says that when data needs to be available with very low latency – trading-floor kinds of applications – the workloads don’t lend themselves to running in a shared public cloud. “For the cloud provider to build the cloud for those use cases makes the economics of the cloud awkward,” he says.
Dheeraj Pandey, founder & CEO of enterprise cloud computing company Nutanix, notes that data is exploding, making it difficult for networks to move it all to large, centralised data centres. And computing is increasingly happening at what is called the ‘edge’: life-critical systems run next to patients, and self-driving cars have to make inferences fast.
“These can’t wait for data to be sent to some far-away, rented cloud data centre for analysis and its results then sent back for the system to take action,” Pandey says.
The public cloud players will not agree, but Kurian and Pandey also argue that well-architected data centres are today comparable to, or better than, public cloud offerings. Their argument is that with much of the hardware in data centres turning into software, data centres are beginning to offer the flexibility, ease of use and cost structure of public clouds. Pandey says it is cheaper to own infrastructure when its usage is predictable.
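Pandey’s point about predictable usage can be illustrated with a simple break-even calculation. The sketch below uses entirely hypothetical figures (hardware price, lifetime, hourly rental rate – none of these come from any vendor) to show why a machine that runs flat-out around the clock tends to favour ownership, while bursty usage favours renting:

```python
# Illustrative own-vs-rent break-even sketch.
# All figures below are hypothetical assumptions, not vendor pricing.

def monthly_cost_owned(capex: float, lifetime_months: int, opex_per_month: float) -> float:
    """Amortised monthly cost of owned hardware: purchase price spread
    over its useful life, plus running costs (power, space, staff)."""
    return capex / lifetime_months + opex_per_month

def monthly_cost_rented(hours_used: float, rate_per_hour: float) -> float:
    """Pay-as-you-go cost for the same workload on a rented public cloud."""
    return hours_used * rate_per_hour

# A server kept busy 24x7 (predictable, steady usage):
owned = monthly_cost_owned(capex=12_000, lifetime_months=36, opex_per_month=150)
rented_steady = monthly_cost_rented(hours_used=730, rate_per_hour=1.0)
print(f"steady load: own ${owned:.0f}/mo vs rent ${rented_steady:.0f}/mo")

# The same workload needed only ~2 hours a day (bursty usage):
rented_bursty = monthly_cost_rented(hours_used=60, rate_per_hour=1.0)
print(f"bursty load: own ${owned:.0f}/mo vs rent ${rented_bursty:.0f}/mo")
```

With these made-up numbers, owning wins when the machine is always busy, and renting wins when it mostly sits idle – which is the crux of the "predictable usage" argument.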
That said, everyone agrees there are many use cases – especially development and testing, and smaller ventures that cannot afford the distraction of managing IT infrastructure – where the public cloud is enormously beneficial. Milind Borate, co-founder & CTO of Druva, the data protection venture that is valued at over $1 billion and keeps its data almost entirely on the public cloud, says public clouds are also getting better and better at handling scale.