Data center fabrics promise flatter, simpler networks
- 16 July, 2012 10:10
When the Government Employees Health Association (GEHA) overhauled its data center to build a fabric infrastructure, the process was "really straightforward," unlike that of many IT projects, says Brenden Bryan, senior manager of enterprise architecture. "We haven't had any gotchas or heartburn, with me looking back and saying, 'I wish I made that decision differently.' "
Based in Kansas City, Mo., GEHA is the nation's second largest health and dental plan, processing claims for more than 1 million people, including federal employees, retirees and their families. The main motive for switching to a fabric model, says Bryan, was to simplify and consolidate operations and move away from a legacy Fibre Channel SAN environment.
When he started working at GEHA in August 2010, Bryan says he inherited a fairly typical infrastructure: a patchwork of components from different vendors with multiple points of failure. The association also wanted to virtualize its mainframe environment and turn it into a distributed architecture. "We needed an infrastructure that was redundant and highly available," he explains. Once the new infrastructure was in place, the plan was to then move all of GEHA's Tier 2 and Tier 3 apps to it and then move the Tier 1 claims processing system.
GEHA deployed Ethernet switches and routers from Brocade, and now, more than a year after the six-month project was completed, Bryan says the association has a high-speed environment and a 20-to-1 ratio of virtual machines to blade hardware.
"I can keep the number of physical servers I have to buy to a minimum and get more utilization out of them," he says. "It enables me to drive the efficiencies out of my storage as well as my computing."
Implementing a data center fabric does require some planning, however. It means having to upgrade and replace old switches with new switching gear because of the different traffic configuration used in fabrics, explains Zeus Kerravala, principal analyst at ZK Research. "Then you have to re-architect your network and reconnect servers."
Moving Flat and Forward
A data center fabric is a flatter, simpler network that's optimized for horizontal traffic flows. In comparison, traditional networks are designed more for client/server setups that send traffic from the server to the core of the network and back out, Kerravala explains.
In a fabric model, the traffic moves horizontally across the network between virtual machines, "so it's more a concept of server-to-server connectivity," he says. Fabrics are flatter and have no more than two tiers, whereas legacy networks have three or more tiers. Storage networks have featured simpler designs for years, and now data networks need to migrate to that model, Kerravala says.
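The two-tier-versus-three-tier distinction can be made concrete by counting switch hops on a worst-case server-to-server path. The sketch below is an illustration of the general topologies described above, not a model of GEHA's or any vendor's specific network; the tier names and hop counts are textbook assumptions.

```python
# Illustrative comparison of east-west (server-to-server) path lengths in a
# classic three-tier design versus a flatter two-tier leaf-spine fabric.
# Hop counts are idealized assumptions, not measurements from the article.

def hops_three_tier(same_access_switch: bool, same_aggregation_pair: bool) -> int:
    """Switch hops between two servers in an access/aggregation/core design."""
    if same_access_switch:
        return 1  # both servers hang off one access switch
    if same_aggregation_pair:
        return 3  # access -> aggregation -> access
    return 5      # access -> aggregation -> core -> aggregation -> access

def hops_leaf_spine(same_leaf: bool) -> int:
    """Switch hops between two servers in a two-tier leaf-spine fabric."""
    return 1 if same_leaf else 3  # worst case: leaf -> spine -> leaf

# Worst case for each design: servers in different racks.
print(hops_three_tier(False, False))  # 5 hops through the core and back
print(hops_leaf_spine(False))         # 3 hops, uniform across the fabric
```

Every extra hop adds a queueing and forwarding delay, which is the "processing and latency" cost Laliberte describes for traffic that must "go out to the core and back."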
One factor driving the move to fabrics is that about half of all enterprise data center workloads in Fortune 2000 companies are virtualized, and when companies get to that point, they start seeing the need to reconfigure how their servers communicate with one another and with the network. "We look at it as an evolution in the architectural landscape of the data center network," says Bob Laliberte, an analyst at Enterprise Strategy Group. "What's driving this is more server-to-server connectivity . . . there are all these different pieces that need to talk to each other and go out to the core and back to communicate, and that adds a lot of processing and latency."
Virtualization adds another layer of complexity, he says, because it means dynamically moving things around, "so network vendors have been striving to simplify these complex environments."
When Data Centers Can't Scale
As home foreclosures spiked in 2006, Walz Group, which provides document management, fulfillment and regulatory compliance services to clients in multiple industries, found that its data center couldn't scale effectively to take on the growing workloads. "IT was impeding the business growth," says Bart Falzarano, chief information security officer at the Temecula, Calif.-based company.
The company hired additional in-house IT personnel to deal with disparate systems and management, as well as to build new servers, extend the network and add disaster recovery services, says Falzarano. "But it was difficult to manage the technology footprint, especially as we tried to move to a virtual environment," he says. The company also had some applications that couldn't be virtualized and would have to be managed differently. "There were different touch points in systems, storage and network. We were becoming counterproductive."
To reduce the complexity, in 2009 Walz Group deployed Cisco's Unified Data Center platform, a fabric architecture that combines compute, storage, network and management capabilities in a platform designed to automate IT as a service across physical and virtual environments. The platform is connected to NetApp SAN storage as part of a FlexPod configuration.
Previously, when the company was using HP technology, Falzarano recalls, one of the database nodes went down. This required getting the vendor on the phone, eventually taking out three of the four CPUs, and going through a troubleshooting process that took four hours. By the time the team had ordered and installed the necessary part and had restored normal operations, 14 hours had passed, says Falzarano.
"Now, for the same [type of failure], if we get a degraded blade server node, we un-associate that SQL application and re-associate the SQL app in about four minutes. And you can do the same for a hypervisor," he says.
Easy to Implement
Like the IT executives at Walz Group, IT team leaders at GEHA believed that deploying a fabric model would not only meet the organization's business requirements, but also reduce the complexity and cost of managing the data center. Bryan says the association was able to manage the all-Ethernet environment with a staff of two people; with a different setup, it would have needed additional personnel who were familiar with Fibre Channel.
"We didn't have anyone on our team who was an expert in Fibre Channel, and the only way to achieve getting the claims processing system to be redundant and highly available was to leverage the Ethernet fabric expertise, which we had on staff," he says.
Bryan says the association has been able to trim "probably a half million dollars of capital off the budget" since it didn't have to purchase any Fibre Channel switching, as well as a quarter of a million dollars in operating expenses since it didn't need staffers to manage Fibre Channel. "Since collapsing everything to an Ethernet fabric, I was able to eliminate a whole stack of equipment," says Bryan.
IT now also uses the fabric for its backup environment, running software from CommVault. Bryan says the association is seeing throughput of about a terabyte an hour on the network, "which is probably eight to 10 times greater than before" the fabric was in place.
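For a sense of scale, that backup rate can be converted to a sustained line rate. The arithmetic below is our back-of-the-envelope check, not a figure from GEHA; it assumes decimal terabytes and ignores protocol overhead.

```python
# Convert the quoted backup throughput ("about a terabyte an hour") to a
# sustained wire rate. Assumes decimal units and no protocol overhead.

TB = 10**12  # decimal terabyte, in bytes

def tb_per_hour_to_gbps(tb_per_hour: float) -> float:
    """Convert a TB/hour transfer rate to gigabits per second."""
    bits_per_second = tb_per_hour * TB * 8 / 3600
    return bits_per_second / 10**9

print(round(tb_per_hour_to_gbps(1.0), 2))  # ~2.22 Gbps sustained
```

Roughly 2.2 Gbps sustained is comfortably within a 10 Gigabit Ethernet fabric's capacity but well beyond what a single Gigabit Ethernet link could carry, which is consistent with the "eight to 10 times greater" improvement Bryan cites.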
Today, all of GEHA's production traffic is on the fabric, and Bryan says he couldn't be more pleased with the infrastructure. He notes that scaling it out isn't an issue; ease of scaling and speed are among the major advantages of a converged fabric. GEHA is also able to run a very dense workload of virtual machines on a single blade. "Instead of having to spend a lot of money on a lot of blades, you can increase the ROI on those blades without sacrificing performance," says Bryan.
Laliberte says he sees a long life ahead for data center fabrics, noting that this type of architecture "is just getting started."
"If you think about complexity and size, and you have thousands of servers in your environment and thousands of switches, any kind of architecture change isn't done lightly and takes time to evolve," he says. Just as it took time for a three-tier architecture to evolve, it will take time for three-tier to get broken down to two-tier. Flat fabric is the next logical step, Laliberte says, adding, "These things get announced and are available, but it still takes years to get widespread deployments."
Shein is a freelance writer and editor. She can be reached at email@example.com.
This version of this story was originally published in Computerworld's print edition. It was adapted from an article that appeared earlier on Computerworld.com.