I've gotten a lot of feedback on parts one and two of this three-part series on "The Three Revolutions of Cloud Computing." This series is based on my perspective that cloud computing represents the next major platform shift in computing, and will undoubtedly impose as much change as previous shifts like client/server or the rise of the Web. In parts one and two I focused on the changes cloud computing will cause in IT operations and application funding patterns. Now I'd like to turn to the changes cloud computing will cause in applications - and, to be blunt - those changes will be enormous.
If you look at most application architectures today, it's clear that these assumptions underlie their creation and deployment:
• Compute resources are expensive and difficult to obtain, so the number of applications must be limited, with only the most critical applications being deployed.
• Compute resources are static, so application architectures can assume a stable application topology, with compute resources rarely joining and almost never leaving an application deployment topology.
• The responsibility for provisioning and modifying compute infrastructure lies with IT operations, so application developers need only focus on functionality and rely on others to manage the infrastructure.
However, those assumptions are no longer appropriate in a cloud computing world. As I explored in the first part of this post, cloud computing will alter - even disrupt - IT operations practices. The "resources on-demand" nature of cloud computing means process re-engineering for IT operations. The new model is "resources when I want, as much as I want, how I want." It's important not to underestimate how liberating this will be for application groups. When there isn't a lot of organizational overhead in obtaining resources, app developers will - surprise - use more resources - a *lot* more resources. We haven't yet begun to understand the scale of resource demand that will emerge as application groups learn to work in an unconstrained environment.
In the second part of this series, I examined the change in IT costs that cloud computing's metered pricing and lower costs will cause. Put succinctly, the opex-based nature of cloud computing costs will encourage experimentation and allow many more applications to be deployed.
So the combined effect of these changes means that two very strong friction points for applications - resource provisioning and funding - will dwindle. What does this mean for applications?
Price elasticity means apps explode. While there is plenty of controversy on this topic, my viewpoint is clear - cloud computing is cheaper than what went before. And I expect classical economics to hold: consumption of a good rises as its price falls. The "pay by the drink" nature of cloud computing will only amplify this trend, since it costs mere dollars to get started on an application. Certainly this phenomenon has been our experience - every client we've worked with starts with one application and then starts finding others to do once the ease of getting an app going becomes clear. I would not be surprised to see a 10X increase in the number of applications running in most organizations. And the low cost and ease of scaling means that the deployed topology of applications will grow as well. Horizontally-scaled, multi-tier applications with 50 or 100 individual virtual machines won't be anomalies in the near future.
Apps explosion puts unprecedented pressure on IT infrastructure. Guess what? The infrastructure in place today was designed and built with outmoded assumptions about the number of applications that would be running, their resource consumption, and their agility. As applications are deployed that take advantage of cloud computing's unique characteristics, current infrastructures will groan under the load. Moving to server virtualization bought time for a lot of companies whose data centers were maxed out, but the growth of applications will outstrip the compression achieved by consolidation. In the future, expect to hear that "private clouds suffer resource constraints."
Apps take more responsibility for app management and IT operations. Everyone gets the vision of agile provisioning - or orchestration, as it's often referred to. Engineers writing apps self-provision computing resources by filling out a Web page, asking for X amount of processing power, Y amount of network capacity, and Z amount of disk storage. Less commonly recognized is that future apps won't be provision-and-leave-alone; they're going to be provision-and-adjust-resources in near-real time as load changes. Bypassing IT operations in the initial provision but consulting it for every subsequent change solves only half the problem; consequently, application groups will take on much more responsibility for monitoring the performance and stability of their applications. This is likely to be a troubled transition, as application groups today really have no responsibility for (or even visibility into) application performance and stability.
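To make the provision-and-adjust idea concrete, here is a minimal sketch of an application adjusting its own footprint as load changes. The `CloudProvisioner` class and its methods are purely illustrative assumptions standing in for a provider's self-service API, not any real library:

```python
class CloudProvisioner:
    """Illustrative stand-in for a cloud provider's provisioning API."""

    def __init__(self):
        self.instances = 2  # start with a small footprint

    def add_instance(self):
        self.instances += 1

    def remove_instance(self):
        if self.instances > 1:  # never scale below one instance
            self.instances -= 1


def adjust_capacity(provisioner, requests_per_instance, high=100, low=20):
    """Scale out when per-instance load is high; scale in when it is low."""
    if requests_per_instance > high:
        provisioner.add_instance()
    elif requests_per_instance < low:
        provisioner.remove_instance()


cloud = CloudProvisioner()
# Simulated per-instance request rates observed over successive intervals:
for load in [150, 180, 90, 10, 10]:
    adjust_capacity(cloud, load)
print(cloud.instances)  # back down to 2 after the load spike subsides
```

The point of the sketch is the shape of the loop, not the thresholds: the application itself observes load and requests or releases resources continuously, rather than filing a one-time provisioning request with IT operations.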
Apps create big data, which creates new apps. The very nature of applications is changing from inward-focused operations optimizers to outward-facing engagement enablers. Associated with this change is much greater flows of data along with deeper analysis and integration of the data. These applications will tie in new sources of data like sensors, generate much greater amounts of data, which, in turn, will provide opportunities to slice-and-dice the data in new applications. If you think you've seen a lot of data in the recent past, get ready for a real deluge.
App developers need new skills to build scalable apps. If there's one thing that comes through during our cloud computing workshops, it's that most developers don't understand how to build dynamic apps that can gracefully add or subtract compute resources on demand. I'd add that there is a difference between having a system management package that monitors application load and spins up new instances, and having an application that can successfully integrate new resources into an operating environment (or indeed, release unneeded resources - remember, the future of apps is responsive scalability, both up and down). Software developers creating the new, ever-larger number of applications will need to learn new skills to successfully build "cloud-ready" applications. This isn't really any different from past platform shifts, but every time a new platform comes out, people are surprised that there's a learning curve. Expect a big one with cloud computing.
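The distinction above - spinning up an instance versus actually integrating it into a running application - can be sketched as a work-routing pool that absorbs new workers and drains departing ones before they're released. This is a hypothetical illustration under assumed names (`WorkerPool`, `drain_and_release`), not a real framework's API:

```python
class Worker:
    """A unit of compute capacity the application can route work to."""

    def __init__(self, name):
        self.name = name
        self.in_flight = 0  # tasks currently being processed

    def start(self, task):
        self.in_flight += 1

    def finish(self):
        self.in_flight -= 1


class WorkerPool:
    """Routes tasks to workers; can grow or shrink without dropping work."""

    def __init__(self):
        self.workers = []

    def add_worker(self, worker):
        # Scale-up: a newly provisioned resource joins the rotation.
        self.workers.append(worker)

    def dispatch(self, task):
        # Route each task to the least-loaded worker.
        worker = min(self.workers, key=lambda w: w.in_flight)
        worker.start(task)
        return worker

    def drain_and_release(self, worker):
        # Scale-down: stop routing new work to the worker, and report
        # whether it is idle and therefore safe to deprovision now.
        self.workers.remove(worker)
        return worker.in_flight == 0


pool = WorkerPool()
a, b = Worker("a"), Worker("b")
pool.add_worker(a)
pool.add_worker(b)
w = pool.dispatch("task-1")
w.finish()
print(pool.drain_and_release(b))  # True: b is idle, safe to release
```

A monitoring package can decide *when* to add or remove an instance; only the application knows how to route work onto a new resource and how to drain one before giving it back. That application-side logic is the "cloud-ready" skill most developers haven't yet built.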
App developer shortages, with emphasis on locating talent and leveraging service providers. Something else one can expect is a shortage of software developers. The price elasticity of applications will drive up demand for complementary goods (or services, in the case of people). CIOs are going to be hounded by line-of-business groups demanding human resources to take advantage of the availability of compute resources. Expect to see significant shortages of software engineers, particularly those able to write cloud-ready applications. IT organizations will turn to outside service providers, but will find many of them subject to the same shortage of talent. I expect even the seemingly (or at least, typically assumed) inexhaustible pool of offshore talent will prove inadequate for how much software will be desired.
In conclusion, I'm sure it's easy to dismiss these three revolutionary developments of cloud computing as over-dramatic. However, as a species, we humans have very little temporal perspective. In 1995, who would have predicted the rise of a computing giant like Google? Computing at that time was underpowered PCs communicating with servers that looked like a slightly beefy desktop computer upended onto its side. Rack servers didn't even exist. We don't do a very good job of extrapolating current trends into the future, but it's clear that cloud computing is a trend that will overturn the practices and assumptions used in IT today. It's going to be a wild ride.
Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.