CIOs are under more pressure than ever to provide core applications and key services to the business as quickly as possible.
Everyone wants services delivered faster, yet many organisations today are running complex transactions against increasingly large datasets. IT departments are often caught short delivering the performance required when data centre equipment comes under heavy workloads.
IT leaders from various industries gathered for a luncheon roundtable in Sydney recently to discuss their strategies for building data centres, as well as the types of server and storage technologies helping them to improve performance and act faster. The roundtable was sponsored by Violin Memory.
Zodiac Marine and Pool has recently invested in three regional data centres around the world to support traditional interoffice ERP and computing services, says its APAC IT manager, John Khoury.
The data centres use flash-based, in-memory server technology, which enables the company to deploy new private cloud technologies to support real-time mobility transaction apps and BI analytics and reporting.
“The adoption of in-memory technologies ensures our data warehousing database is available to all services and apps in all regions and time zones. This has also allowed for fast and cost-effective capabilities to upscale and downscale in business acquisition and separation projects,” Khoury says.
The biggest benefits of flash and solid-state technologies have been the performance, the power savings, and the ability to extend storage capacity within SAN and RAID configurations, he continues.
“The biggest drawback at present is the cost factor for large enterprise volume storage. However, there are new solid-state disk hybrid technologies emerging that could counter the [higher] cost,” Khoury says.
One attendee, who oversees IT at a building and industrial systems company, says his organisation has a highly virtualised environment, leveraging large hosted servers in a load-balanced, high-availability architecture.
Servers in the company’s data centre share a tiered SAN, which employs a combination of fibre-connected flash and hard disks, as well as SATA-connected disk technologies.
“We have not employed in-memory computing as our data processing volumes do not yet demand the type of performance offered by this technology,” the IT director explains.
“Speed of this [in-memory computing] technology is an obvious benefit, but there is still a perception that the cost of this technology is high.”
Fred Lusk, CIO at the NSW Government’s Safety, Return to Work and Support (SRWS) division, says building a high-performance data centre is not a requirement for its low-volume, high-transaction environment. The department has adopted virtual servers and standard SAN storage within its data centres and is probably not yet in a position to take advantage of flash and solid-state disks, he says.
“Our data centre equipment is relatively new, and the data centre itself is purpose-built, meaning that a reduction in the number of servers will not have a significant cost savings impact,” Lusk says.
“Having said that, the NSW Government is in the process of converging our data centres. Government agencies should be able to significantly reduce their footprints within the new data centres. This will cut costs around both floor space leasing and software licensing.
“The ability of flash memory to increase system performance may make it an attractive option.”
Another attendee from a large security firm says his organisation reviewed flash storage but considered it too expensive. He says flash offers theoretical savings in power, data centre space, and cabling, but in a one- or two-data-centre environment these savings were negligible.
“We reviewed our current and foreseeable application performance requirements and worked out that we could satisfy our need for raw disk capacity best in the cloud and performance best in-house,” he says.
“We further determined that in-house performance needed to be better than the traditional SAN, and solid-state technology would be required. It was also 100 per cent clear SSD was going to be overkill for the majority of our apps so we selected a hybrid solution with SSDs and spindle disks in the array.
“Frequently accessed information is kept on the SSD by the operating system and infrequently used information on the spindles – which is an excellent price/performance compromise.”
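The hot/cold split he describes can be sketched as a simple promotion policy. The class, thresholds and block names below are hypothetical illustrations of the general technique, not the firm's actual array logic:

```python
from collections import Counter

class HybridTier:
    """Toy model of an SSD/HDD hybrid array: blocks whose access
    count crosses a threshold are promoted to the SSD tier, while
    everything else stays on spinning disk. Real arrays use far
    more sophisticated heat maps; this only shows the idea."""

    def __init__(self, ssd_capacity=2, promote_after=3):
        self.ssd = set()               # block ids currently on SSD
        self.hits = Counter()          # access counts per block
        self.ssd_capacity = ssd_capacity
        self.promote_after = promote_after

    def read(self, block):
        self.hits[block] += 1
        if block in self.ssd:
            return "ssd"               # fast path: already promoted
        # Promote hot blocks, evicting the coldest SSD resident if full.
        if self.hits[block] >= self.promote_after:
            if len(self.ssd) >= self.ssd_capacity:
                coldest = min(self.ssd, key=lambda b: self.hits[b])
                self.ssd.discard(coldest)
            self.ssd.add(block)
        return "hdd"                   # this access still came from disk

tier = HybridTier()
for _ in range(4):
    tier.read("db-index")              # hot block: promoted after 3 reads
tier.read("archive-2019")              # cold block stays on the spindles
```

Subsequent reads of the hot block are served from the SSD tier while the rarely touched block never leaves the disks, which is the price/performance compromise the attendee describes.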
Still, flash-based memory storage systems are helping to massively accelerate ‘latency-sensitive’ critical apps for some organisations, says Ross Lynch, Australia and New Zealand sales director at Violin Memory.
According to Lynch, the economics of the data centre are at a tipping point where flash systems can deliver significant savings in database licensing costs and reductions in power, floor space, and heat. He adds that flash can’t be judged on the old “dollars per gigabyte” metric, and that comparing flash with spinning disk in monolithic storage systems is like comparing apples and oranges.
“Flash must be compared using a full TCO model, taking into account the many areas of reductions and business improvements,” Lynch says. “Organisations used to buy flash for its performance advantages but this is now changing to the key driver being cost savings.”
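The TCO argument Lynch makes can be illustrated with back-of-envelope arithmetic. All figures below are hypothetical placeholders, not vendor pricing: the point is only that a higher up-front flash price can be offset by power, floor space and licensing reductions.

```python
def tco(capex, power_kw, rack_units, licence_cores, years=5,
        kwh_cost=0.25, ru_cost_per_year=500, licence_per_core=5000):
    """Back-of-envelope total cost of ownership over `years`.
    All rates are illustrative assumptions, not real figures."""
    power = power_kw * 24 * 365 * years * kwh_cost   # electricity over the period
    space = rack_units * ru_cost_per_year * years    # floor space / rack leasing
    licences = licence_cores * licence_per_core      # per-core database licensing
    return capex + power + space + licences

# Hypothetical comparison: the flash array is pricier up front but
# draws less power, occupies fewer rack units, and needs fewer
# database cores to hit the same throughput.
disk_tco  = tco(capex=200_000, power_kw=4.0, rack_units=20, licence_cores=32)
flash_tco = tco(capex=350_000, power_kw=1.0, rack_units=4,  licence_cores=16)
```

Under these assumed rates the flash configuration comes out slightly cheaper over five years despite nearly double the capital cost, which is the “tipping point” framing rather than a dollars-per-gigabyte comparison.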
Several attendees are also turning to cloud or IT-as-a-service models to improve service availability and increase business performance.
Coates Hire’s customer portal is being delivered under an infrastructure-as-a-service model, says the company’s manager, IT strategy and project portfolio, Aaron Zhang.
“This helped us overcome the [data] latency challenge due to a high level of back-end integration,” he says.
Zodiac Marine and Pool is using a mixture of private and public cloud technologies to improve overall customer service and performance, says Khoury.
“Examples of private cloud technologies have included field service, sales and mobility apps, B2B e-commerce, as well as BI analytics and reporting,” he says.
“Public SaaS cloud technologies have been deployed regionally and globally to ensure we remain customer-focused and flexible during business merger and acquisition activities.”
Another attendee says his organisation turned to cloud for the processor-driven requirements of video analytics, which is not a constant demand but a spiky one.
“And we have turned to the cloud for large volume storage, which is infrequently accessed,” he says.

SRWS has adopted a “value-driven” strategy when selecting technology services, says Lusk.
“As anticipated, we are finding that commodity ICT services are best delivered by external XaaS [anything-as-a-service] providers,” he says.
Organisations obviously trust third-party cloud and IaaS vendors to deliver the uptime they need, but do they care about the server and storage technologies their core apps are running on?
The cloud, by definition, removes the technology discussion by delivering an outcome based on service level, says Eric Herzog, senior vice-president of alliances at Violin Memory.
“Since both public cloud providers and private enterprise clouds must provide the best service level agreements [SLAs], flash is essential to delivering the appropriate SLAs for upper end applications,” he says.
“Both private and public cloud deployments depend on as little latency as possible within their data centres to compensate for normal network and transit latencies.
“Only with all-flash array solutions can the cloud provide optimal SLAs.”