Networking company Mellanox Technologies, along with Hewlett-Packard and Dell, is demonstrating a next-generation FDR InfiniBand network running at 56Gbps (bits per second) at the International Supercomputing Conference in Hamburg.
The company has set up a network connecting 10 booths, using its own InfiniBand adapters, switches and cables, and running Scalable Graphics' Remote Desktop over InfiniBand application.
The products will start shipping late in the third quarter, said John Monson, vice president of product marketing at Mellanox.
InfiniBand is used in high-end data centers to connect clusters of servers or storage systems, or to link servers to storage.
Mellanox, which develops its own chipset, will ship a range of products that can all handle data throughput at 56Gbps, including both modular and fixed switches from its SX-6000 family. One of them, the SX-6036, will come with 36 ports. It will also sell cables and adapters, which can be purchased separately or integrated directly into servers or storage devices.
Using copper cables, 56Gbps InfiniBand can travel up to five meters; over fiber it can go up to 50 meters. Copper is mainly used within a rack, while fiber connects different nodes in a data center. The long-term goal is to extend the reach over fiber to 100 meters, Monson said.
Besides better throughput, the move to FDR InfiniBand will also offer lower latency and improved reliability. The price will be about 35 percent to 40 percent higher per port than current QDR InfiniBand switches, according to Monson.
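The throughput gain over QDR is larger than the raw line rates alone suggest, because FDR also moves from 8b/10b to the more efficient 64b/66b line coding. As a rough back-of-the-envelope sketch (the lane rates and encodings below come from the published InfiniBand specifications, not from this article):

```python
# Rough comparison of effective 4x link throughput: QDR vs FDR InfiniBand.
# Lane rates and encodings are the published InfiniBand spec values;
# the 56Gbps figure in the article is the raw 4x FDR signaling rate.

def effective_gbps(lane_rate_gbps, lanes, payload_bits, coded_bits):
    """Usable data rate after subtracting line-coding overhead."""
    return lane_rate_gbps * lanes * payload_bits / coded_bits

qdr = effective_gbps(10.0, 4, 8, 10)      # 8b/10b encoding -> 32.0 Gbps
fdr = effective_gbps(14.0625, 4, 64, 66)  # 64b/66b encoding -> ~54.55 Gbps

print(f"QDR 4x effective: {qdr:.2f} Gbps")
print(f"FDR 4x effective: {fdr:.2f} Gbps ({fdr / qdr - 1:.0%} more than QDR)")
```

On these figures, a 4x FDR link delivers roughly 70 percent more usable bandwidth than 4x QDR, which puts the quoted 35 to 40 percent per-port price premium in context.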