By Lionel Snell
Editor, NetEvents
Think about the economies of scale… then just imagine the economies of hyperscale! It’s enough to make an average enterprise weep: what chance of ever being able to compete in a market dominated by the world’s hyperscale giants?
If that is what you think, then Brad Casemore, IDC’s Research Vice President for Datacenter Networks, has reassuring news. Yes, the hyperscalers have taken the initiative, but the vendor community is seizing the opportunity to repackage similar technology for a vastly greater, and ultimately more lucrative, enterprise market. The very vendors who helped the giants develop their advanced solutions are now offering versions of the same technology at enterprise-friendly prices. Datacenter greatness – simple on the outside, very smart within – can now be yours.
Well, that is nice to know. But, according to Brad, it is not just nice, it is absolutely critical: “It’s not an option these days. It’s something that [enterprises] must do. It has significant applications for their infrastructure in their data centre and their networks, because typically, they must modernise. They’re dealing with very traditional, outdated architectures; their operational models have become outdated. They need to modernise architecturally and they need to modernise the way they run their networks.”
He outlines some of the key differentiators. “The hyperscalers see cloud as not only a destination for workloads, but it has its own operating principles. They’ve been able to do things at unprecedented scale and with unprecedented agility.” For example, the hyperscalers have used comprehensive automation; they have pioneered Software-Defined Networking (SDN) principles; and they have made very efficient use of scale-out architectures, real-time analytics and streaming telemetry.
“This has allowed them to move from a reactive to a proactive approach in their networks. It’s not just about remediating problems, it’s about being able to plan for eventualities and be proactive. They’ve done part of this through having the right abstractions and through using disaggregation, so that they can innovate at speed on an overlay, on the underlay, in software, in hardware. They can decouple those and move very quickly.”
For Kevin Deierling, Senior VP of Marketing at Mellanox Technologies, enterprises too need to move towards a cloud model: “They have a protocol stack given by their vendors, and they do just a little bit of monitoring and automation. The cloud guys are the exact opposite. They have a really narrow protocol stack and they do a ton of automation and monitoring.”
Mansour Karam, CEO and Founder, Apstra, sounds a note of caution: “Enterprise environments are in some ways a lot more complex than your public cloud. The public cloud has a set of applications that’s quite constrained – they are only building a product for themselves. Whereas every enterprise environment is quite different, with mixes of modern, cloud-native applications and legacy environments. So the challenge is really to build a product that distils the benefits automation provided to public cloud providers. Distil those benefits, but deliver them in a package that takes into account the complexities of enterprise environments”.
That requires an extensible architecture, because for the enterprise you must be able to adapt to multiple domains, rather than one specific topology. The hyperscalers’ task is simpler, because they only have one specifically designed network rather than the variety of networks and domains in the enterprise environment.
Kyle Forster is the Founder of Big Switch Networks, a company that aims to deliver the full public cloud experience on premises. For example, he cites the surge in popularity of Amazon Virtual Private Clouds (VPCs) over Cisco Virtual LANs: “How do we present an Amazon experience or an Azure experience or a Google Cloud experience on-prem?… When somebody uses a CloudFormation template on AWS, they can use the exact same networking stanzas in that same CloudFormation template on-prem. This suddenly makes all cloud-native integration really easy.”
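To make the idea concrete, a CloudFormation “networking stanza” of the kind Forster describes looks like this – a minimal sketch declaring a VPC with one subnet (the resource names and CIDR ranges here are illustrative, not from Big Switch):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal illustrative VPC with a single subnet
Resources:
  AppVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16        # address space for the whole VPC
      EnableDnsSupport: true
      EnableDnsHostnames: true
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVPC            # subnet belongs to the VPC above
      CidrBlock: 10.0.1.0/24        # carve out one subnet
```

Big Switch’s pitch is that these same declarative stanzas can drive an on-premises fabric, so cloud-native tooling works unchanged.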
Mike Capuana, Chief Marketing Officer at Pluribus Networks, also emphasises the move from massive data centres towards a more distributed model that takes computing closer to the customer: “We’re seeing trends around distribution of data centres… to a much higher distribution of data centres in your neighbourhood. How do you build automation that is cost-effective and scalable across a more distributed environment?” A large data centre deploys external servers for management, for controlling software-defined network virtualisation and for analytics, but in a small data centre this is more of a problem. So companies like Edgecore, Dell EMC and Mellanox are incorporating powerful processors, RAM and flash drives in their leaf/spine switches. “Let’s leverage that and write some clever distributed code for the switch you have to deploy anyway: running your SDN, your network virtualisation and your network analytics functions. The other benefit is that I can pre-integrate and pre-test all that, so it just works out of the box.”
The disaggregation, or decoupling, of hardware and software is another key factor, making it possible for smaller companies to take advantage of available open-source code and avoid vendor lock-in. Mansour Karam offers one piece of advice to organisations transforming their infrastructures: “Don’t start by choosing as strategic partner the hardware vendor you’ve been working with for the last 20 years. That would restrict your choices dramatically. You’ll lock yourself into a fully-integrated solution that is not best of breed, one that will slow you down and not deliver your business needs. Gartner has shown that digital initiatives are three times more likely to fail if you fail in infrastructure transformation.” Kevin Deierling agrees: “It’s not like Cisco where you get the hardware and the software together. If somebody offers better hardware, you go buy it from them. If you want to use whichever software you want, you can do it. That whole notion of open network is really powerful.”
Brad Casemore draws attention to the importance of pervasive, real-time visibility across the network. Mellanox, for example, focuses strongly on the low-level infrastructure for analytic data, according to Deierling: “We have a tool called What Just Happened, WJH.” When there is a problem and users start complaining, WJH holds all the data, so you can be confident that it’s not the network but rather some specific node or database spewing out massive amounts of data. At Mellanox they talk about “reducing mean time to innocence.”
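Purely as an illustration of the kind of analysis such telemetry enables – this is not Mellanox’s actual WJH interface, and the record fields and function name below are invented for the sketch – aggregating per-node traffic from telemetry records is enough to show that one chatty node, not the fabric, is the culprit:

```python
from collections import Counter

def top_talkers(records, n=3):
    """Sum bytes per source node across telemetry records and
    return the n heaviest senders - the nodes to investigate first."""
    totals = Counter()
    for rec in records:
        totals[rec["src"]] += rec["bytes"]
    return totals.most_common(n)

# Hypothetical per-flow telemetry records.
records = [
    {"src": "db-7",  "bytes": 9_000_000},
    {"src": "web-1", "bytes": 40_000},
    {"src": "db-7",  "bytes": 6_000_000},
    {"src": "web-2", "bytes": 55_000},
]

print(top_talkers(records, n=2))
# db-7 dominates the totals: evidence that a specific node,
# not the network itself, is spewing out the data.
```

With this sort of evidence in hand, the network team can demonstrate its “innocence” quickly instead of chasing a fabric problem that isn’t there.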
In conclusion, the huge technology gap between hyperscale operators and the average enterprise is beginning to narrow. Vendors like VMware, Arista, Dell EMC, Mellanox, Pluribus, Big Switch and Apstra – some of whom were closely involved with the original development of hyperscale technology – are aware of the massive potential of the wider enterprise market. They are competing, and collaborating, to deliver the sort of network performance and reliability that is needed to survive in today’s business environment. Look at what is available, focus on open networking, avoid vendor lock-in, and expect accelerating changes in enterprise data centres in 2020 and beyond.