Advanced Micro Devices is cutting costs and reducing the number of its data centers worldwide with the help of the cloud and hardware upgrades, an AMD executive said Thursday.
AMD will reduce its data centers to three by 2014, with two in North America and one in Asia, said Farid Dana, director of IT services at AMD, in an interview. AMD currently has 12 data centers, down from 18 in mid-2009 when the consolidation effort began.
The company's goal is to cut costs by shifting more tasks to the cloud, and by opening data centers in locations that have lower power costs and lower taxes, Dana said. AMD is moving away from high-cost-per-watt places like Boston and California and establishing data centers in places like Suwanee, Georgia.
"We've gained some tax efficiency from the location," Dana said. "One of the factors is also disaster recovery. That's why we have three data centers and not one, and we are geographically dispersed."
Dana has a list of 40 physical factors to take into account when deciding where to locate a data center, including proximity to transit, weather, water sources and available electricity. Choices have to be made carefully, as something as simple as a nearby rail line could cause vibrations that harm server operations, he said.
But in downsizing, Dana wants to ensure AMD's engineers have access to the resources needed to design chips. AMD is trying to consolidate servers and reduce expenses such as electric bills through higher utilization rates. The company is also reducing network latency so engineers get quicker access to servers.
AMD is operating a private cloud that makes key EDA (electronic design automation) applications accessible to engineers worldwide. The company's engineering tasks are executed in real time across a virtual grid of servers with 120,000 CPU cores. AMD tries to maintain close to a 100 percent utilization rate, and virtualization tools make all the cores appear as one "giant number-crunching machine," Dana said.
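The article doesn't describe how AMD's grid keeps utilization near 100 percent, but the idea can be sketched in a few lines of Python. This is a hypothetical greedy dispatcher, not AMD's actual tooling: jobs request a number of cores, each job starts as soon as enough cores are free, and the pool is kept as busy as possible.

```python
import heapq

def schedule(jobs, total_cores):
    """Hypothetical greedy dispatcher (illustration only, not AMD's
    scheduler). jobs is a list of (cores_needed, runtime) tuples,
    processed in order. Returns each job's start time, reusing cores
    as soon as earlier jobs finish."""
    free = total_cores  # cores currently idle
    running = []        # min-heap of (finish_time, cores_held)
    t = 0               # current simulated time
    starts = []
    for cores, runtime in jobs:
        # Wait for running jobs to finish until enough cores are free.
        while free < cores:
            finish, held = heapq.heappop(running)
            t = max(t, finish)
            free += held
        starts.append(t)
        free -= cores
        heapq.heappush(running, (t + runtime, cores))
    return starts

# On a 10-core pool, the third job (4 cores) fits alongside the second,
# so cores that would otherwise sit idle stay busy.
print(schedule([(6, 1), (6, 1), (4, 1)], 10))  # → [0, 1, 1]
```

A real grid scheduler adds priorities, preemption and fault tolerance, but the same packing principle is what keeps idle cores to a minimum.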
"We want to do compute anywhere -- it doesn't matter where the engineer sits as long as they get the performance they need," Dana said.
Putting applications in the cloud consolidates computing resources and centralizes infrastructure, Dana said. Data is more secure because it is stored in fewer locations.
Many companies, such as Amazon, offer cloud services, but AMD kept its cloud internal because it wanted stronger control over usage of its EDA tools. The company has deployed tools to track where resources need to be assigned, and cloud workloads vary by region as employees worldwide are assigned different tasks, Dana said.
"It's not cost-effective to do it externally," Dana said.
Closing data centers has produced huge savings over the past few years, Dana said. The company is retiring old data centers as contracts end and hardware reaches end of life, replacing them with new equipment, which requires the same level of investment as upgrading existing data centers.
Idle CPU cycles cost the company, and server upgrades have netted AMD millions in savings, Dana said. Socket compatibility provides a cost-effective way to upgrade to faster and more power-efficient chips without buying extra hardware.
"For socket upgrades you have to do your homework," Dana said. Upgrades could be done to cut costs or add performance, or to test out new chips, he said.
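Dana's point that idle cycles carry a real cost can be put into rough numbers. The sketch below is a back-of-the-envelope calculation using invented figures (the $0.01 per core-hour rate and the function name are illustrative assumptions, not AMD's data):

```python
def idle_cost(total_cores, utilization, cost_per_core_hour, hours):
    """Illustrative back-of-the-envelope estimate of dollars lost to
    idle cores over a period. All rates here are hypothetical."""
    idle_cores = total_cores * (1 - utilization)
    return idle_cores * cost_per_core_hour * hours

# 120,000 cores at 95% utilization, a made-up $0.01/core-hour, one day:
print(idle_cost(120_000, 0.95, 0.01, 24))  # roughly $1,440 per day
```

Even a 5 percent idle fraction adds up across a fleet that size, which is why pushing utilization toward 100 percent translates into the millions in savings Dana describes.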
But as servers move to the next generation, it's better to change the motherboard, Dana said. Hardware depreciation could range from three to five years.
"It's more cost-effective to change the board than to put the processor on top of it," Dana said.
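The three-to-five-year window Dana cites is commonly modeled with straight-line depreciation, which spreads a server's cost evenly over its service life. The dollar figures below are invented for illustration, not AMD's numbers:

```python
def straight_line(cost, salvage, years):
    """Straight-line annual depreciation: (cost - salvage) / years.
    All dollar figures in the example are made up for illustration."""
    return (cost - salvage) / years

# A hypothetical $5,000 server with a $500 residual value:
print(straight_line(5000, 500, 3))  # → 1500.0 per year on a 3-year schedule
print(straight_line(5000, 500, 5))  # → 900.0 per year on a 5-year schedule
```

A shorter schedule front-loads the annual expense, which is one reason the upgrade-versus-replace decision depends on where a server sits in its depreciation cycle.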