If you've read this blog for a while, it's no secret that I believe one effect of cloud computing is a dramatic drop in the cost of computing. While many discuss cloud computing's cost advantage in terms of better utilization via resource pooling and rapid elasticity, I believe there is a more fundamental shift going on as data centers are redesigned around scale, efficiency, and commodity components.
Put another way, the former cost advantage (utilization, etc.) relies on more efficient use of existing data center design patterns, while the latter relies on transforming the cost basis of data centers by creating new design patterns.
I wrote about this topic a few months ago in a post entitled "Are You Making Your Data Centers Cloud-Friendly?" In it I discussed trends on display at the San Francisco DatacenterDynamics conference: energy efficiency, raised operating temperatures, and "chicken coop" data center building designs.
A couple of developments this past week reinforced the perspective that data centers are rapidly evolving into mass-scale computing environments. Over the past decade, data center design has been treated as a collection of standard components plugged together. Each component has been designed to optimize its own efficiency, with the expectation that, taken as a whole, optimum efficiency would be achieved. That view is shifting to one in which the entire data center is viewed as an integrated system designed to run at the highest possible efficiency level, which requires custom-designing sub-components to ensure they contribute to the overall efficiency goal.
I identified aspects of this in the previous post. The "chicken coop" data center is designed as a long rectangle with one long side facing the prevailing wind, thereby allowing natural cooling. Facebook, in its Open Compute design, places air intakes and exhausts on the second floor so that cool air can enter the building and drop onto the machines, while hot air rises and is evacuated by large fans.
The two things that caught my eye this week related to server design and network equipment cost. The design item is about Facebook's custom server design and the implications for today's standardized blade or pizza box server economics. The network equipment item relates to Brocade's announcement that it will rent equipment for placement in cloud computing environments. Both of these align with the continuing shift of data centers to low-cost, high-scale environments, and both call into question the viability of established data center designs and economics.
Facebook's Custom-Designed Servers
The first article is a Bloomberg piece titled "Dell Loses Orders as Facebook Do-It-Yourself Servers Gain." Bloomberg notes that large cloud providers are designing their own servers and having them manufactured to order. The traditional server suppliers like HP and Dell (while the headline read "Dell," it was clear from the article that this phenomenon applies to all of the major server vendors) are faced with an unpleasant dilemma: manufacture these custom designs, but accept lower margins, or proffer higher-margin standard designs and lose orders.
The implication of the article was clear. The large cloud market is cost-focused and applies pressure on vendors to accept less profit in return for volume. Astonishingly, according to Gartner analyst Jeffrey Hewitt, quoted in the article, this type of server accounts for 20 percent of the server market (the article wasn't clear if this is 20 percent of shipments or revenue). From this stat, it's clear that this is a significant part of the market; one can expect that it will come to represent an even higher proportion of the overall market, increasing the margin compression for server vendors.
Brocade: Why Not Rent Instead of Buy?
Brocade, a network equipment manufacturer, has launched a new program aimed at cloud computing environments. Customers can acquire network equipment on a subscription basis: "Customers can subscribe for additional capacity to meet peaks and then step down capacity as those peaks diminish."
The model for how this works was not made clear in the article. Does Brocade install a large number of boxes and dynamically configure them to be on or off based on network load? Or does Brocade physically install and de-install boxes when a customer calls up and asks for more or less equipment? One presumes it is the former. This has the effect of shifting network equipment (from the cloud provider's perspective) from a capex to an opex financial commitment. Put another way, it shifts utilization risk from the provider to Brocade. Or, put yet another way, it has the effect of Brocade financing its customers' network equipment.
No matter how it's put, though, the implication is clear: Brocade will carry the asset-owning capital investment and let its customers pay for capacity as needed.
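The capex-versus-opex tradeoff described above can be made concrete with a toy calculation. All figures here are hypothetical illustrations, not drawn from the article: buying gear means provisioning for peak load up front and eating the cost of idle capacity, while subscribing means paying each month only for the capacity actually in use.

```python
# Toy comparison of buying network capacity for peak load (capex)
# versus renting only what each month requires (opex).
# All prices and load figures are hypothetical illustrations.

peak_units = 10                    # capacity units needed at the busiest month
monthly_load = [4, 5, 6, 8, 10, 7, 5, 4, 6, 9, 8, 5]  # units used each month

purchase_price_per_unit = 1200     # one-time cost to own a unit outright
rent_price_per_unit = 150          # monthly subscription cost per unit

# Buying: provision for the peak, pay once, carry the utilization risk.
capex_total = peak_units * purchase_price_per_unit

# Renting: pay each month only for units in use; the vendor carries
# the idle capacity between peaks.
opex_total = sum(units * rent_price_per_unit for units in monthly_load)

avg_utilization = sum(monthly_load) / (len(monthly_load) * peak_units)

print(f"capex (buy for peak): ${capex_total}")
print(f"opex (rent as needed): ${opex_total}")
print(f"average utilization when owning: {avg_utilization:.0%}")
```

Which side wins depends entirely on average utilization and the rent-to-buy price ratio; the point of the subscription model is that the utilization risk in that calculation moves from the provider to the equipment vendor.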
This is a really big deal. If you've heard people from AWS speak, like Werner Vogels (Amazon CTO) or James Hamilton (AWS data center guru), you'll know they cite network equipment as one of the highest-cost items in their data centers. Vogels, in particular, is quite vocal about how AWS views network equipment as too expensive for what it provides and speaks longingly about network equipment moving toward the commodity approach of server designs.
Taking these two stories together, it's clear that the entire cost structure of data centers is changing under the pressure of cloud computing and its move toward massive scale and efficiency. Every supplier, whether a server manufacturer or a network equipment provider, will need to respond to a high-volume, low-margin market.
The obvious question is, how will users with smaller data centers fare? Will they be able to ride the same economics (e.g., "you know those servers you just sent over to Facebook? Well, we'd like 50 of the same type"), or will their lower volumes and more heterogeneous environments force them to use higher-cost equipment?
I often use the auto industry and Henry Ford's pioneering of mass production as a proxy for what is happening in IT. His shift to the assembly line vastly reduced the price of individual cars but, crucially, made it vastly more expensive to be a manufacturer. Car companies unable to match the economies of scale of the mass manufacturers were inexorably forced out of business.
Will the same dynamic occur in cloud computing? Will "table stakes" be the ability to finance and operate a 100,000 square foot data center? Will those with lower requirements be forced to abjure data center ownership and rely on larger providers with their better economics?
One can expect that we are nowhere near the end of the data center design evolution. Commodity-scale industries are ruthless in seeking cost advantage, and woe to the competitor who cannot finance cost improvements.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.