Showing posts with label RDMA.

Wednesday, July 13, 2016

Mellanox Advances Remote Direct Memory Access over Ethernet

Mellanox Technologies released new software drivers for RoCE (RDMA over Converged Ethernet).

The new drivers simplify RDMA (Remote Direct Memory Access) deployments on Ethernet networks and enable high-end performance using RoCE, without requiring the network to be configured for lossless operation.

“RoCE has built-in error recovery mechanisms, and while a lossless network has never been a strict requirement, customers typically configure their networks to prevent packet loss and ensure the best performance,” said Michael Kagan, chief technology officer, Mellanox. “The new software supports our latest 10, 25, 40, 50, and 100 Gb/s Ethernet adapters and is completely compatible with the RoCE specification. This new software eliminates any special lossless network configuration requirements.”

“Microsoft deploys an advanced datacenter infrastructure delivering an enterprise-grade intelligent cloud platform to our customers,” said Albert Greenberg, Distinguished Engineer, Network Development, Microsoft Corp. “We’re encouraged to see the continued development and evolution of the RoCE technology. RoCE is an example of one of the advanced networking solutions that helps enable Microsoft Azure to deliver cloud services to our customers with superb flexibility, performance, and reliability.”

Tuesday, August 11, 2015

HGST Announces Persistent Memory Fabric Technology

HGST has developed a persistent memory fabric technology that promises low-power, DRAM-like performance and requires neither BIOS modification nor rewriting of applications. Memory-mapping remote PCM with the Remote Direct Memory Access (RDMA) protocol over standard networking infrastructures, such as Ethernet or InfiniBand, enables seamless, wide-scale deployment of in-memory computing. This network-based approach lets applications harness non-volatile PCM across multiple computers and scale out as needed.
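The key idea above is that applications access persistent memory through ordinary loads and stores against a memory mapping, rather than through read/write I/O calls. A real PCM-over-RDMA fabric needs RDMA hardware and registered memory regions, but the access model can be sketched with a purely local analogue in Python, using `mmap` against a file standing in for a persistent memory region (the file name and size here are illustrative, not from the announcement):

```python
import mmap
import os
import tempfile

# Local analogue of byte-addressable persistent memory: map a region and
# access it with ordinary loads/stores instead of read()/write() calls.
# A PCM-over-RDMA fabric would expose *remote* memory the same way, via
# RDMA-registered regions rather than a local file.
path = os.path.join(tempfile.mkdtemp(), "pcm.img")
with open(path, "wb") as f:
    f.truncate(4096)                      # carve out one 4 KB region

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:5] = b"hello"               # store directly into the mapping
        mem.flush()                       # persist, akin to a PMEM flush

with open(path, "rb") as f:
    print(f.read(5))                      # b'hello' survives the unmap
```

The point of the sketch is the programming model: once the region is mapped, no system call sits on the data path, which is what makes sub-microsecond media such as PCM usable at close to its native latency.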

At this week's Flash Memory Summit in Santa Clara, California, HGST, in collaboration with Mellanox Technologies, is showcasing the PCM-based, RDMA-enabled in-memory compute cluster architecture. The HGST/Mellanox demonstration achieves random access latency of less than two microseconds for 512 B reads, and throughput exceeding 3.5 GB/s for 2 KB block sizes, using RDMA over InfiniBand.
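A quick back-of-the-envelope check puts these figures in perspective (assuming decimal gigabytes; the announcement does not specify):

```python
# Implied operation rate from the quoted throughput and block size.
block = 2 * 1024           # 2 KB blocks
throughput = 3.5e9         # 3.5 GB/s, decimal units assumed
iops = throughput / block  # operations per second at that block size
print(f"{iops / 1e6:.2f} M ops/s")   # 1.71 M ops/s

# Ceiling for strictly serial 512 B reads at the quoted latency.
latency = 2e-6             # < 2 us round trip
print(f"{1 / latency / 1e6:.2f} M serial reads/s")  # 0.50 M serial reads/s
```

The gap between the two numbers suggests the throughput figure relies on keeping many requests in flight, which is exactly what RDMA queue pairs are designed to allow.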

"DRAM is expensive and consumes significant power, but today's alternatives lack sufficient density and are too slow to be a viable replacement," said Steve Campbell, HGST's chief technology officer. "Last year our Research arm demonstrated Phase Change Memory as a viable DRAM performance alternative at a new price and capacity tier bridging main memory and persistent storage.  To scale out this level of performance across the data center requires further innovation.  Our work with Mellanox proves that non-volatile main memory can be mapped across a network with latencies that fit inside the performance envelope of in-memory compute applications."

"Mellanox is excited to be working with HGST to drive persistent memory fabrics," said Kevin Deierling, vice president of marketing at Mellanox Technologies.  "To truly shake up the economics of the in-memory compute ecosystem will require a combination of networking and storage working together transparently to minimize latency and maximize scalability.  With this demonstration, we were able to leverage RDMA over InfiniBand to achieve record-breaking round-trip latencies under two microseconds.  In the future, our goal is to support PCM access using both InfiniBand and RDMA over Converged Ethernet (RoCE) to increase the scalability and lower the cost of in-memory applications."

Thursday, June 25, 2015

New RDMA over Converged Ethernet (RoCE) Initiative Gets Underway

A new RDMA over Converged Ethernet (RoCE) Initiative has been launched by the InfiniBand Trade Association (IBTA) to raise awareness about the benefits that RoCE delivers for cloud, storage, virtualization and hyper-converged infrastructures.

Remote Direct Memory Access (RDMA) enables faster movement of data between servers, and between servers and storage, with far less work done by the CPU. RoCE applies RDMA to Ethernet, enhancing infrastructure for hyper-converged data centers, cloud, storage, and virtualized environments. The technology delivers higher network utilization, lower latency, and improved CPU efficiency, and reduces overall cost by increasing server productivity while leveraging existing Ethernet technology. RoCE transports data across both Layer 2 and Layer 3 networks, providing better traffic isolation and enabling hyperscale data center deployments.
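The Layer 2 versus Layer 3 distinction maps to the two versions of the protocol: RoCE v1 frames carry their own Ethertype (0x8915) and cannot leave a Layer 2 broadcast domain, while RoCE v2 encapsulates the InfiniBand transport headers in UDP/IP (well-known destination port 4791), making the traffic routable. A minimal sketch of how a packet would be classified on the wire (the `classify` function is illustrative, not from any RoCE library):

```python
# How RoCE traffic is identified on the wire.
ROCE_V1_ETHERTYPE = 0x8915   # dedicated Ethertype, Layer 2 only
ROCE_V2_UDP_PORT = 4791      # UDP destination port, routable over Layer 3
IPV4_ETHERTYPE = 0x0800

def classify(ethertype, udp_dport=None):
    """Classify a frame as RoCE v1, RoCE v2, or neither."""
    if ethertype == ROCE_V1_ETHERTYPE:
        return "RoCE v1 (Layer 2 only)"
    if ethertype == IPV4_ETHERTYPE and udp_dport == ROCE_V2_UDP_PORT:
        return "RoCE v2 (routable over Layer 3)"
    return "not RoCE"

print(classify(0x8915))        # RoCE v1 (Layer 2 only)
print(classify(0x0800, 4791))  # RoCE v2 (routable over Layer 3)
```

It is the v2 encapsulation that enables the hyperscale deployments mentioned above, since routed IP fabrics are the norm in large data centers.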

“The RoCE Initiative will be the leading source for information on RDMA over Ethernet solutions,” said Barry Barnet, co-chair, IBTA Steering Committee. “The IBTA remains committed to furthering the InfiniBand specification, of which RoCE is a part. The RoCE Initiative expands our potential audience and will enable us to deliver solution information and resources to those requiring the highest performing Ethernet networks.”

Monday, January 12, 2015

Mellanox Supplies Open Ethernet to Monash University

Mellanox will supply its CloudX platform to Monash University in Melbourne, Australia.

The deployment, built on Mellanox’s SwitchX-2 SX1036 Open Ethernet switches, ConnectX-3 NICs, and LinkX cables, will provide the fabric for Monash's new cloud data center. The cloud uses Mellanox end-to-end 10, 40, and 56 Gb/s Ethernet solutions as part of a nationwide initiative to create an open and global cloud infrastructure.

Mellanox said the university selected its RDMA-capable Ethernet technology for its performance scalability and cloud efficiency improvements. The university’s cloud node, R@CMon, is part of The National eResearch Collaboration Tools and Resources (NeCTAR) Project. The fabric tightly integrates Ceph and Lustre storage with the cloud, meeting the needs of block, object, and application workloads on one converged fabric.

“The Mellanox CloudX platform enables an application-defined backplane, where researchers orchestrate all the various components – cloud, HPC and software – into their own 21st century microscope within our data center,” said Steve Quenette, deputy director of the Monash eResearch Centre. “Mellanox Open Ethernet solutions give us the flexibility and the freedom to optimize the interconnect infrastructure for our needs and to ensure that we will be able to cope with the increase in data, compute and storage requirements of our users.”
