This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
As organizations build out internal clouds, a major goal is to provide internal customers with self-service access to capacity. Although simple in concept, this fundamental change can create challenges in practice. To explore these challenges and identify solutions, it is useful to frame the conversation in terms of supply and demand.
Internal customers, such as lines of business and application teams, create workload demands that must be met through the supply of appropriate compute, storage and network resources. In this sense, self-service models for accessing compute and storage capacity are poised to revolutionize the demand side of the equation just as virtualization revolutionized the supply side over the last decade. But this will bring challenges.
On the supply side, it has taken considerable time for the management of back-end capacity to catch up with virtualization technology. Although virtualization is not new, only now are more modern methods of managing and controlling capacity taking hold. By adopting analytics-based approaches to matching supply and demand, where workload placements are the primary focus, organizations are getting steadily better at forecasting, procuring hardware efficiently, and eliminating the rampant overprovisioning of the past.
Unfortunately, the demand side of the equation is about to go through similar gyrations as self-service models send organizations into unfamiliar territory. Although demand management has always taken a back seat to supply-side capacity management in IT organizations, there was always at least some level of control over inbound applications and user demands, if only as a by-product of the complex processes and lengthy procurement cycles that tended to slow them down. Self-service eliminates this overhead, and although that is a very good thing overall, it also threatens to turn demand management into a wild west of unfettered end-user activity.
Understanding the true goals of self-service can help avoid this disruption. Self-service models should streamline demand management by disintermediating IT staff from the process of requesting capacity, but they should not become an excuse to bypass the processes, controls and rigorous planning that deploying IT services requires.
Many view internal clouds, at least initially, as a sandbox for rapidly "spinning up" VMs, but the deployment of enterprise apps and critical business services requires diligence and advance planning, and those requirements don't simply evaporate when clouds are involved. Just because users can rapidly access compute and storage resources doesn't mean they can throw caution to the wind.
So a bit of rethinking is needed in order to fully understand self-service and how it can be safely and effectively used. A good place to start is to categorize cloud use cases into the amount of rigor and planning that must accompany the workloads that are being deployed.
For dev/test workloads, a very dynamic model tends to be best, where users can access capacity rapidly and without much planning. To use an analogy to hotels, this is similar to going on a road trip across the country, where the travelers simply stop at a roadside motel in whatever town they happen to be in, and no advanced reservations are required (or possible). This usage model is typically the first one targeted by organizations building internal clouds, where self-service consoles enable immediate access to capacity with little or no planning.
For enterprise workloads the analogy is quite different, and it tends to resemble an important business trip, where planning is critical and hotels are reserved in advance to ensure there is a place to stay. Considerable thought also goes into the hotel's amenities, as more business-class features may be required than a simple roadside motel offers, such as network access, meeting rooms and printers.
This is where a different self-service model is needed. Immediate access to capacity is far less important than reserving it in advance, and the requirements of the workloads must be assessed in detail against the capabilities of the hosting environments in order to ensure the workloads are routed to the right kind of capacity. From a self-service perspective, this is more like an online hotel reservation system, where end users can enter their specific requirements and dates, determine which hotels are best, and book space. In other words, self-service does not imply instant access, and internal clouds must also support proactive, detailed requests for more critical applications.
Given this, there are three simple questions IT should ask in order to ensure they are building enterprise-class internal clouds:
* Are my lines of business able to reserve capacity ahead of time for critical application deployments?
Enterprise consumers are very concerned with managing risk, and want guarantees that capacity will be available, even if they are early in the planning process. Without the ability to reserve capacity, many will request VMs well in advance of the go-live date and simply sit on them. These VMs become time bombs that can go off at any moment, frustrating infrastructure capacity management and, ironically, causing the very capacity shortfalls that being proactive was meant to prevent.
Some cloud front-ends claim they can reserve capacity, but do this by drawing down on a pool of allocated resources, not by analyzing actual utilization of infrastructure. This leads to a false sense of security, and can be more dangerous than not having reservations at all.
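To make the distinction concrete, here is a minimal sketch of a reservation check that tests bookings against forecast utilization of the physical infrastructure rather than drawing down an allocation pool. All class names, fields and figures are illustrative assumptions, not any particular product's API:

```python
# Hypothetical sketch: a reservation is only granted if forecast utilization
# plus existing reservations leave headroom on every day of the booking.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    physical_cpu_ghz: float            # measured physical capacity
    utilization: dict                  # ISO date -> forecast CPU GHz in use
    reservations: list = field(default_factory=list)  # (start, end, cpu_ghz)

def can_reserve(cluster, start, end, cpu_ghz):
    """True only if the new booking fits under physical capacity on
    every forecast day in [start, end], counting prior reservations."""
    for day, used in cluster.utilization.items():
        if not (start <= day <= end):
            continue
        already_reserved = sum(r[2] for r in cluster.reservations
                               if r[0] <= day <= r[1])
        if used + already_reserved + cpu_ghz > cluster.physical_cpu_ghz:
            return False
    return True

def reserve(cluster, start, end, cpu_ghz):
    if not can_reserve(cluster, start, end, cpu_ghz):
        raise ValueError("insufficient forecast headroom")
    cluster.reservations.append((start, end, cpu_ghz))
```

An allocation-pool approach would skip the `utilization` check entirely and subtract from a fixed quota, which is precisely how reservations can overcommit infrastructure that is already busier than the quota assumed.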
* Can operations groups scientifically route workloads to the right hosting environments?
There are many subtle and not-so-subtle requirements that must be met when hosting enterprise workloads. Applications may require specific licensed software, storage tiering, compliance levels, data protection, backup and snapshotting, redundancy, jurisdictional guarantees and other considerations. To accommodate this, organizations may have dozens of hosting environments spanning different geographies, platforms, configurations and cost levels. But the process of matching the two is still in the stone age in many organizations, often relying on spreadsheets and gut feel. Moving to cloud operating models requires this process to be automated and extremely accurate.
Be prepared for pushback when automating this, as manual processes can be very ingrained, and there is typically a vested interest in the status quo. Although spreadsheet-based approaches are being phased out, significant effort may have gone into their development.
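The matching process described above reduces to a constraint check plus a ranking. The sketch below is a deliberately simplified illustration of that idea, with hypothetical environment names, capability tags and costs:

```python
# Hypothetical sketch of automated workload routing: each hosting environment
# advertises a set of capabilities, and a workload is routed to the cheapest
# environment that satisfies every stated requirement.
ENVIRONMENTS = [
    {"name": "dev-sandbox", "capabilities": {"snapshotting"},
     "cost_per_vm": 10},
    {"name": "prod-us-east",
     "capabilities": {"snapshotting", "backup", "pci-compliant",
                      "tier1-storage"},
     "cost_per_vm": 90},
    {"name": "prod-eu",
     "capabilities": {"snapshotting", "backup", "eu-jurisdiction",
                      "tier1-storage"},
     "cost_per_vm": 95},
]

def route(requirements, environments=ENVIRONMENTS):
    """Return the lowest-cost environment meeting all requirements,
    or None if nothing qualifies."""
    candidates = [e for e in environments
                  if set(requirements) <= e["capabilities"]]
    return min(candidates, key=lambda e: e["cost_per_vm"], default=None)
```

A production-grade placement engine would also weigh available headroom, affinity rules and utilization forecasts, but even this toy version makes decisions repeatable and auditable in a way spreadsheets and gut feel cannot.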
* Can capacity managers analyze these inbound "bookings" in order to predict short and long-term capacity requirements?
From the infrastructure perspective, which applications a line of business deploys is not under your control; what you can control is how those workloads are placed in the infrastructure, how capacity is allocated to them, and how hardware is procured to meet upcoming demands. If reserving and routing are done properly, the result is a "pipeline" of inbound bookings that can be used for predictive analysis, enabling advanced capacity risk management and just-in-time purchasing. This moves organizations from a low-efficiency, high-risk situation to a high-efficiency, low-risk operational model, which should be a major goal of any cloud initiative.
Beyond compute and memory resources, it is important to forecast the impact of inbound bookings on storage capacity (and to predictively route workloads based on this). Failure to do so can lead to operational issues as virtual data stores become unexpectedly exhausted.
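The booking pipeline lends itself to simple predictive analysis: accumulate confirmed reservations by go-live month and flag the months where demand outruns planned capacity. The storage-focused sketch below is illustrative; the booking data and capacity figure are assumptions:

```python
# Hypothetical sketch: treat confirmed reservations as a demand pipeline and
# flag the months where cumulative booked storage exceeds planned capacity,
# giving procurement a just-in-time buying signal.
from collections import defaultdict

def forecast_shortfalls(bookings, capacity_tb):
    """bookings: list of (go_live_month, storage_tb) tuples.
    Returns the months, in order, where cumulative booked storage
    exceeds the planned capacity."""
    by_month = defaultdict(float)
    for month, tb in bookings:
        by_month[month] += tb
    cumulative, shortfalls = 0.0, []
    for month in sorted(by_month):
        cumulative += by_month[month]
        if cumulative > capacity_tb:
            shortfalls.append(month)
    return shortfalls
```

The same cumulative walk works for CPU, memory or datastore headroom; the point is that reservations, unlike raw utilization history, let you see the shortfall before the workloads arrive.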
These three areas represent what is needed to reach the next phase of cloud maturity, and if the answer is yes to all three then the outlook is good. It means that the internal clouds being built can go beyond being a sandbox for early adopters, and become true next-generation hosting platforms that are ready for real production workloads. It means all enterprise capacity can be put under a single management umbrella, allowing workloads to be matched with available capacity in a scientific manner. And it allows a painful period to be avoided where the dynamics of demand far outstrip the ability to manage that demand.
Demand management may not be top of mind when building internal clouds, but its fundamental principles should be. Serious organizations should demand nothing less.
CiRBA is a leading provider of Automated Capacity Control software.