Showing posts with label Mellanox. Show all posts

Tuesday, June 19, 2018

Mellanox supplies InfiniBand for Sandia's Arm supercomputer

Mellanox Technologies will supply an InfiniBand solution to accelerate the world’s top Arm-based supercomputer, to be deployed at Sandia National Laboratories in the second half of 2018.

The Astra supercomputer will include nearly 2,600 nodes and will leverage InfiniBand In-Network Computing acceleration engines. Astra is the first in a series of advanced-architecture platforms under the Vanguard program, supporting the US Department of Energy’s National Nuclear Security Administration (NNSA) missions.

“InfiniBand smart In-Network Computing acceleration engines will enable the highest performance and productivity for Astra, the first large scale Arm-based supercomputer,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “InfiniBand is the world-leading interconnect technology that supports the full range of compute and storage infrastructures, including X86, Power, Arm, GPUs, FPGAs and more. We are happy to support the Department of Energy’s efforts to explore new supercomputing platforms for their future needs.”

http://www.mellanox.com

Tuesday, May 29, 2018

Mellanox intros Hyper-scalable Enterprise Framework

Mellanox Technologies introduced its Hyper-scalable Enterprise Framework for private cloud and enterprise data centers.

The five key elements of the ‘Mellanox Hyper-scalable Enterprise Framework’ are:
  • High Performance Networks – Mellanox’s end-to-end suite of 25G, 50G, and 100G adapters, cables, and switches is proven within hyperscale data centers, which have adopted these solutions for a simple reason: an intelligent, high-performance network delivers total infrastructure efficiency
  • Open Networking – an open and fully disaggregated networking platform is key to scalability and flexibility as well as achieving operational efficiency
  • Converged Networks on an Ethernet Storage Fabric – a fully converged network supporting compute, communications, and storage on a single integrated fabric
  • Software Defined Everything and Virtual Network Acceleration – enables enterprises to enjoy the benefits realized by hyperscalers that have embraced software-defined networking, storage, and virtualization – or software-defined everything (SDX)
  • Cloud Software Integration – networking solutions that are fully integrated with the most popular cloud platforms such as OpenStack, vSphere, and Azure Stack and support for advanced software-defined storage solutions such as Ceph, Gluster, Storage Spaces Direct, and VSAN

“With the advent of open platforms and open networking it is now possible for even modestly sized organizations to build data centers like the hyperscalers do,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “We are confident and excited to release the Mellanox Hyper-scalable Enterprise Framework to the industry – and to provide an open, intelligent, high performance, accelerated and fully converged network to enable enterprise and private cloud architects to build a world-class data center.”

Thursday, May 17, 2018

Mellanox raises Q2 and full year 2018 outlook

Citing strength across all its product lines, including InfiniBand and Ethernet, Mellanox Technologies raised its second quarter and full year 2018 outlook.

Mellanox currently projects:

  • Quarterly revenues of $260 million to $270 million. Prior guidance provided on April 17, 2018 was quarterly revenues of $255 million to $265 million.
  • Full Year 2018 revenues of $1,050 million to $1,070 million. Prior guidance was revenues of $1,030 million to $1,050 million.

“We continue to see strength across all our product lines, including InfiniBand and Ethernet, and we are well-positioned for further growth as the adoption of 25 gigabit per second and above Ethernet adapters continues in 2018 and beyond,” said Eyal Waldman, Chief Executive Officer of Mellanox. “The team remains disciplined in its investments and committed to optimizing efficiencies and reducing expenses, without slowing down revenue growth. The Board and management team are confident that the successful execution of our strategy will continue to deliver enhanced value for all shareholders.”

Monday, April 2, 2018

Mellanox interconnects NVIDIA's new DGX-2 AI box

Mellanox Technologies confirmed that its InfiniBand and Ethernet are used in the new NVIDIA DGX-2 artificial intelligence (AI) system.

NVIDIA's DGX-2, which delivers 2 Petaflops of system performance, is powered by sixteen GPUs and eight Mellanox ConnectX adapters, supporting both EDR InfiniBand and 100 GigE connectivity.

The embedded Mellanox network adapters provide an overall 1,600 gigabit per second of bi-directional data throughput, which enables scaling up AI capabilities for building the largest deep learning compute systems.
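
The 1,600 Gb/s figure follows directly from the adapter count: eight adapters, each moving 100 Gb/s in each direction. A quick illustrative check:

```python
# Sanity check of the DGX-2 aggregate throughput claim.
adapters = 8          # Mellanox ConnectX adapters per DGX-2
per_port_gbps = 100   # EDR InfiniBand / 100 GbE line rate per adapter
directions = 2        # bi-directional: transmit + receive

aggregate_gbps = adapters * per_port_gbps * directions
print(aggregate_gbps)  # 1600
```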

"We are excited to collaborate with NVIDIA and to bring the performance advantages of our EDR InfiniBand and 100 gigabit Ethernet to the new DGX-2 Artificial Intelligence platform," said Gilad Shainer, vice president of marketing at Mellanox Technologies. "Doubling the network throughput as compared to previous systems to provide overall bi-directional 1600 gigabit per second data speed enables the DGX-2 platform to analyze growing amounts of data, and to dramatically improve Deep Learning application performance."

Wednesday, March 7, 2018

Mellanox milestone: one million 100G ports with LinkX optical transceivers and cables

Mellanox Technologies announced a major milestone: cumulative shipments of LinkX optical transceivers, active optical cables (AOCs) and direct attach copper cables (DACs) have surpassed one million 100Gb/s QSFP28 ports.

“Our early 100Gb/s sales were driven by US-based hyperscale companies who were the first to deploy 100G Ethernet,” said Amir Prescher, senior vice president of business development and general manager of the interconnect business at Mellanox. “Now, China Web 2.0, Cloud computing networks, and OEMs worldwide are moving to 100G. Customers select us because of our high-speed experience, our capacity to ship in volume, and the quality of our products.”

Tuesday, March 6, 2018

Mellanox debuts its Onyx Ethernet Network Operating System

Mellanox Technologies released a next-generation Ethernet network operating system for its Mellanox Spectrum Open Ethernet switches.

Mellanox Onyx OS provides a rich Layer-3 feature set with built-in dynamic routing protocols such as BGP/OSPF. It features a robust control plane and 64-way ECMP support, which Mellanox says can be used to construct large, scale-out L3 fabrics for data centers.
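
For context, ECMP (equal-cost multi-path) spreads traffic by hashing each flow's header fields onto one of the equal-cost next hops, so a 64-way ECMP fabric can balance flows across up to 64 paths. A minimal, purely illustrative sketch of the idea in software (the real hash runs in the switch ASIC; this function is hypothetical):

```python
# Illustrative ECMP path selection: hash the flow 5-tuple onto one of N
# equal-cost next hops, so all packets of a flow take the same path.
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, num_paths=64):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Packets of the same flow always map to the same next hop.
a = ecmp_next_hop("10.0.0.1", "10.0.1.1", 49152, 443, "tcp")
b = ecmp_next_hop("10.0.0.1", "10.0.1.1", 49152, 443, "tcp")
print(a == b, 0 <= a < 64)  # True True
```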

Additional capabilities include:

  • support for standard DevOps tools like Ansible and Puppet
  • smart hooks to automate network provisioning for high-performance workloads such as storage and artificial intelligence
  • support to run containerized applications on the switch system itself
  • buffer and link monitoring that leverage Mellanox Spectrum’s unique silicon level capabilities.

The Mellanox Onyx OS is aimed at cloud, hyperscale, enterprise, media & entertainment, storage, and high-performance Ethernet-based interconnect applications.

“Mellanox Onyx brings leading management, provisioning, automation and network visibility to data centers and cloud infrastructures, to deliver the best scalability, performance and overall return on investment,” said Yael Shenhav, vice president of products at Mellanox Technologies. “Mellanox Onyx also offers a mature Layer-3 feature-set, with integrated support for standard DevOps tools, allowing customers to run third party containerized applications with complete SDK access. By utilizing Mellanox Onyx’s leading capabilities, our customers can enjoy the benefits of an industry-standard Layer-2 and Layer-3 feature-set along with the ability to customize and optimize the network to their specific needs.”

Thursday, January 18, 2018

Mellanox posts record Q4 revenues of $237.6M, up 7% yoy

Mellanox Technologies reported revenues of $237.6 million for the fourth quarter and $863.9 million in fiscal year 2017.

GAAP operating loss was $(6.7) million, or (2.8) percent of revenue, in the fourth quarter, and was $(17.1) million, or (2.0) percent of revenue, in fiscal year 2017.
Non-GAAP operating income was $38.0 million, or 16.0 percent of revenue, in the fourth quarter, and $118.7 million, or 13.7 percent of revenue, in fiscal year 2017.

GAAP gross margins were 64.1 percent in the fourth quarter, and 65.2 percent in fiscal year 2017.
Non-GAAP gross margins were 68.8 percent in the fourth quarter, and 70.4 percent in fiscal year 2017.

“We are pleased to achieve record quarterly and full year revenues,” said Eyal Waldman, President and CEO of Mellanox Technologies. “2017 represented a year of investment and product transitions for Mellanox. Fourth quarter Ethernet revenues increased 11 percent sequentially, due to expanding customer adoption of our 25 gigabit per second and above Ethernet products across all geographies. We are encouraged by the acceleration of our 25 gigabit per second and above Ethernet switch business, which grew 41 percent sequentially, with broad based growth across OEM, hyperscale, tier-2, cloud, financial services and channel customers. During the fourth quarter, InfiniBand revenues grew 2 percent sequentially, driven by growth from our high-performance computing and artificial intelligence customers.”

Monday, December 18, 2017

Meituan data centers deploy Mellanox Ethernet switches, adapters

Meituan.com will deploy Mellanox Spectrum Ethernet switches, ConnectX adapters and LinkX cables to accelerate the multi-thousand servers in its artificial intelligence, big data analytics and cloud data centers. The installation will use Mellanox 25 Gigabit and 100 Gigabit smart interconnect solutions and RDMA technology.

Financial terms were not disclosed.

Meituan.com is the world’s leading online and on-demand delivery platform, supporting 280 million mobile users and 5 million merchants across 2,180 cities in China, and processing up to 21 million orders a day during peak times.

Tuesday, December 12, 2017

Tencent deploys Mellanox RDMA and Intelligent Interconnect Acceleration Engines

Tencent Cloud has adopted Mellanox interconnect solutions for its high-performance computing (HPC) and artificial intelligence (AI) public cloud offering.

Specifically, the Tencent Cloud infrastructure leverages Mellanox Ethernet and InfiniBand adapters, switches and cables to deliver advanced public cloud services. Tencent also employs Mellanox RDMA, in-network computing and other interconnect acceleration engines.

“Tencent Cloud is utilizing Mellanox interconnect and applications acceleration technology to help companies develop their next generation products and offer new and intelligent services,” said Wang Huixing, vice president of Tencent Cloud. “We are excited to work with Mellanox to integrate its world-leading interconnect technologies into our public cloud offerings, and plan to continue to scale our infrastructure product lines to meet the growing needs of our customers.”

Monday, November 13, 2017

Mellanox: InfiniBand powers most supercomputers

InfiniBand solutions accelerate 77 percent of the new high-performance computing systems on the TOP500 list deployed since the previous list (June 2017 to November 2017), according to Mellanox Technologies.

Some highlights:

  • Mellanox accelerates the fastest supercomputer on the list
  • InfiniBand connects 2 of top 5 systems - #1 (China) and #4 (Japan)
  • InfiniBand connects 6 times more new HPC systems versus proprietary interconnects (June’17 - Nov’17)
  • InfiniBand connects 15 times more new HPC systems versus Ethernet (June’17 - Nov’17)
  • InfiniBand connects 77% of new HPC systems (June’17 – Nov’17)
  • Mellanox connects 39 percent of overall TOP500 systems (192 systems, InfiniBand and Ethernet)
  • InfiniBand connects 33 percent of the total TOP500 systems (164 systems)
  • InfiniBand connects 60 percent of the HPC TOP500 systems
  • 25G Ethernet first appearance on the Nov’17 TOP500 list (China Hyperscale companies) - 19 systems
  • Mellanox connects all of 25G, 40G and 100G Ethernet systems on the Nov’17 TOP500 list
  • InfiniBand is the most used high-speed Interconnect on the TOP500
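
As a sanity check, the share figures above are consistent with a 500-system list (illustrative arithmetic; the release rounds 192/500 = 38.4 percent up to 39 percent):

```python
# Cross-check the TOP500 share figures against the 500-system list.
total_systems = 500
mellanox_systems = 192     # InfiniBand + Ethernet, per the bullets above
infiniband_systems = 164   # InfiniBand-connected systems

mellanox_share = 100 * mellanox_systems / total_systems
infiniband_share = 100 * infiniband_systems / total_systems
print(mellanox_share, infiniband_share)  # 38.4 32.8
```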

“Due to its smart acceleration and offload advantages, InfiniBand connects the vast majority of the new TOP500 high-performance computing and deep learning systems deployed in the last 6 months. Furthermore, InfiniBand accelerates the fastest supercomputer in the world and in China, the fastest supercomputer in Japan, and has been selected to connect the fastest supercomputers in Canada, and in the US. By delivering highest applications performance, scalability and robustness, InfiniBand enables users to maximize their data center return on investment and improve their total cost of ownership by 50 percent,” said Eyal Waldman, president and CEO of Mellanox Technologies.

“We are also happy to see our 25 Gigabit and above Ethernet solutions on the TOP500 list, representing the adoption of our Ethernet NICs, switches and cables in hyperscale and cloud companies, as they deliver highest efficiency to these platforms. Finally, we are excited to showcase our HDR 200Gb/s switch systems portfolio at the Supercomputing’17 conference, as we plan to release our HDR InfiniBand solutions in the first half of next year, further increasing the technology advantage of Mellanox in high-performance computing, cloud, Web2.0, database, deep learning and compute and storage platforms.”

Thursday, November 9, 2017

NGENIX deploys Mellanox Open Ethernet Switch

NGENIX, a subsidiary of leading Russian telecom provider Rostelecom, has deployed a 100Gb/s Ethernet Spectrum switch based on the Linux Switchdev driver to support its next-generation content distribution network service.

Mellanox said this is the first major deployment of an open Ethernet switch based on the Switchdev driver, which has been accepted upstream and is available as open source as part of the Linux kernel. The combination of Mellanox’s high-performance, field-proven Spectrum switch systems running an open, standard Linux distribution provides NGENIX with unified Linux interfaces across data center entities, servers and switches alike, with no compromise on performance.
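
Because the Switchdev driver exposes each front-panel port as a standard Linux netdev, the switch can be configured with ordinary iproute2 tooling rather than a vendor CLI. A minimal sketch, with hypothetical port names:

```shell
# Spectrum front-panel ports appear as ordinary netdevs (names hypothetical).
# Bridge two ports and add a VLAN using only standard Linux tools:
ip link add name br0 type bridge
ip link set dev sw1p1 master br0
ip link set dev sw1p2 master br0
bridge vlan add vid 100 dev sw1p1
bridge vlan add vid 100 dev sw1p2
ip link set dev br0 up
```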

“We were looking for a truly open solution to power our next generation 100GbE network,” said Dmitry Krikov, CTO at NGENIX. “The choice was clear. Not only was the Mellanox Spectrum-based switch the only truly open, Linux kernel-based solution, but also allows us to use a single infrastructure to manage, authorize and monitor our entire network. In addition, it’s proving to be very cost-effective in terms of price-performance.”

Monday, November 6, 2017

Mellanox intros Innova-2 FPGA-based adapter

Mellanox Technologies introduced its Innova-2 product family of FPGA-based smart network adapters for a range of applications including security, cloud, big data, deep learning, NFV and high-performance computing.

The Innova-2 adapters will be offered in multiple configurations, either open for customers’ specific applications or pre-programmed for security applications with encryption acceleration such as IPsec, TLS/SSL and more. The Innova-2 family of dual-port Ethernet and InfiniBand network adapters supports network speeds of 10, 25, 40, 50 and 100Gb/s, while the PCIe Gen4 and OpenCAPI (Coherent Accelerator Processor Interface) host connections offer low-latency and high-bandwidth.

Mellanox said this new line of Innova-2 adapters delivers 6X higher performance while reducing total cost of ownership by 10X when compared to alternative options. The new products combine the company's ConnectX-5 25/40/50/100Gb/s Ethernet and InfiniBand network adapters with a Xilinx UltraScale™ FPGA accelerator.

“The Innova-2 product line brings new levels of acceleration to Mellanox intelligent interconnect solutions,” said Gilad Shainer, vice president of marketing, Mellanox Technologies. “We are pleased to equip our customers with new capabilities to develop their own innovative ideas, whether related to security, big-data analytics, deep learning training and inferencing, cloud and other applications. The solution allows our customers to achieve unprecedented performance and flexibility for the most demanding market needs.”

Wednesday, October 25, 2017

Mellanox posts Q3 sales of $225.7 million, flat yoy

Mellanox Technologies reported Q3 2017 revenue of $225.7 million, up 0.7 percent compared to $224.2 million in the third quarter of 2016. GAAP gross margins were 65.7 percent, compared to 65.1 percent in the third quarter of 2016. GAAP net income was $3.4 million, compared to $12.0 million in the third quarter of 2016.

“We are pleased to achieve a record revenue quarter and resume our growth. Our third quarter Ethernet revenues achieved double digit sequential growth, driven by increasing deployments of our 25 gigabit per second and above products, which demonstrates our leadership position in these markets,” said Eyal Waldman, President and CEO of Mellanox Technologies. “During the third quarter, InfiniBand revenues declined seven percent sequentially mainly due to a large Department of Energy CORAL deployment in the second quarter. On a year-over-year basis, our InfiniBand high-performance computing and artificial intelligence revenues increased by double digit percentages."

Wednesday, October 4, 2017

Mellanox announces software-defined SmartNIC adapters based on ARM

Mellanox Technologies announced its BlueField family of software-defined SmartNIC adapters, designed for scale-out server and storage applications.

The new adapters leverage embedded ARM processor cores based on the company's BlueField system-on-chip processors and accelerators in the network interface card (NIC).

Key features of the BlueField intelligent adapters:

  • 2 network ports of Ethernet or InfiniBand: 10G/25G, 40G, 50G or 100Gb/s options
  • RDMA support for both InfiniBand and RoCE from the leader in RDMA technology
  • Accelerators for NVMe-over-Fabrics (NVMe-oF), RAID, crypto and packet processing
  • PCI Express Gen3 and Gen4, with either x8- or x16-lane configurations
  • Integrated low-latency PCIe switch with up to 8 external ports for flexible topologies
  • Up to 16 ARMv8 Cortex-A72 cores with 20MB of coherent cache
  • 8 – 32GB of on-board DDR4 DRAM
  • Comprehensive virtualization support with SR-IOV
  • Accelerated Switching and Packet Processing (ASAP2) OVS offloads
  • Multi-host and SocketDirect™ enabling a single adapter to support up to four CPU hosts
  • Multiple server form-factor options including half-height, half-length PCIe and other configurations


Mellanox said its new BlueField SmartNIC could be used for a range of applications, including Network Functions Virtualization (NFV), security and network traffic acceleration. The fully programmable environment and DPDK framework support a wide range of standard software packages running in the BlueField ARM subsystem. Examples include: Open vSwitch (OVS), Security packages such as L3/4 firewall, DDoS protection and Intrusion Prevention, encryption stacks (IPsec, SSL/TLS), traffic monitoring, telemetry and packet capture.

“Our BlueField adapters effectively place a Computer in Front of the Computer,” said Gilad Shainer, vice president marketing, Mellanox Technologies. “They provide the flexibility needed to adapt to new and emerging network protocols, and to implement complex networking and security functions in a distributed manner, right at the boundary of the server. This brings more scalability to the data center and enhances security by creating an isolated trust zone.”

Tuesday, August 22, 2017

Mellanox Supplies Network Adapters for Alibaba's 25G RoCE Ethernet Cloud

Mellanox Technologies confirmed that its 25GbE and 100GbE ConnectX®-4 EN family of Ethernet adapters has been deployed in Alibaba's data centers.

Key capabilities of the Mellanox ConnectX-4 Network Interface Cards (NICs):

  • RoCE (RDMA over Converged Ethernet) – RDMA (Remote Direct Memory Access) technology addresses the delay of server-side data processing in network transmission by enabling network adapters to access the application buffer directly, bypassing the kernel, the CPU, and the protocol stack, so the CPU can perform more useful tasks during I/O transport.
  • RoCE can be implemented on an existing open Ethernet network. With RoCE, there is no need to replace a data center’s legacy infrastructure, which allows companies to save on capital spending.
  • DPDK (Data Plane Development Kit) – provides a framework for fast packet processing in data plane applications, and its tool set allows developers to rapidly build new prototypes. Mellanox’s open source DPDK software enables industry-standard servers to deliver the performance needed for large-scale, efficient production deployments of Network Function Virtualization (NFV) solutions such as gateways, load balancers, and enhanced security solutions that help prevent denial-of-service attacks in the data center.
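
As a loose software analogy for the zero-copy idea behind RDMA, a Python memoryview references an application buffer directly instead of copying it, much as an RDMA-capable NIC reads and writes application memory without intermediate kernel copies (illustrative only; real RDMA uses verbs APIs and NIC hardware):

```python
# Loose analogy for RDMA's zero-copy idea: a memoryview references the
# application buffer directly instead of copying it, the way an RDMA NIC
# accesses application memory without staging copies in the kernel.
buf = bytearray(b"payload-in-application-buffer")

copied = bytes(buf)        # copy path: a second buffer now exists
view = memoryview(buf)     # zero-copy path: same underlying memory

buf[0:7] = b"PAYLOAD"      # mutate the application buffer in place
print(bytes(view[0:7]))    # b'PAYLOAD' - the view sees the change
print(copied[0:7])         # b'payload' - the copy does not
```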


http://www.mellanox.com

Friday, July 7, 2017

Mellanox intros Spectrum-2 200/400 GBE data centre switch

Mellanox Technologies announced the Spectrum-2, a scalable 200 and 400 Gbit/s Open Ethernet switch solution designed to enable increased data centre scalability and lower operational costs through improved power efficiency.

Spectrum-2 also provides enhanced programmability and optimised routing capabilities for building efficient Ethernet-based compute and storage infrastructures.

Mellanox's Spectrum-2 provides leading Ethernet connectivity for up to 16 ports of 400 Gigabit Ethernet, 32 ports of 200 Gigabit Ethernet, 64 ports of 100 Gigabit Ethernet and 128 ports of 50 and 25 Gigabit Ethernet, and offers enhancements including increased flexibility and port density for a range of switch platforms optimised for cloud, hyperscale, enterprise data centre, big data, artificial intelligence, financial and storage applications.

Spectrum-2 is designed to enable IT managers to optimise their network for specific customer requirements. The solution implements a complete set of the network protocols within the switch ASIC efficiently, providing users with the functionality required out-of-box. Additionally, Spectrum-2 includes a flexible parser and packet modifier which can be programmed to process new protocols as they emerge in the future.

Mellanox stated that Spectrum-2 is the first 400/200 Gigabit Ethernet switch to provide adaptive routing and load balancing while guaranteeing zero packet loss and unconditional port performance for predictable network operation. The solution also supports double the data capacity while providing latency of 300 nanoseconds, claimed to be 1.4 times less than alternative offerings. It is designed to provide the foundation for Ethernet storage fabrics for connecting the next generation of Flash based storage platforms.

Mellanox noted that Spectrum-2 extends the capabilities of its first generation Spectrum switch, which is now deployed in thousands of data centres. Spectrum enables IT managers to efficiently implement 10 Gbit/s and higher infrastructures and to economically migrate to 25, 50 and 100 Gbit/s speeds.


The new Spectrum-2 maintains the same API as Spectrum for porting software onto the ASIC via the Open SDK/SAI API or Linux upstream driver (Switchdev), and supports standard network operating systems and interfaces including Cumulus Linux, SONiC and standard Linux distributions. It also supports telemetry capabilities including the latest in-band network telemetry standard, enabling visibility into the network and monitoring, diagnosis and analysis of operations.


Monday, June 19, 2017

Mellanox forms strategic agreement with HPE

Mellanox Technologies announced a strategic collaboration with HPE covering high-performance computing and machine learning data centres, and also introduced SHIELD, an interconnect technology claimed to improve data centre fault recovery by 5,000 times by providing autonomous self-healing capabilities in the interconnect.

HPE collaboration

Mellanox’s collaboration with HPE is designed to enable efficient high-performance computing and machine learning data centres based on technologies from both parties. The joint solutions are intended to enable customers to leverage the InfiniBand and Gen-Z open standards to enhance return on investment for current and future data centres and applications.

Leveraging the intelligent In-Network Computing capabilities of Mellanox's ConnectX-5 InfiniBand adapters and Switch-IB 2 InfiniBand switches in the recently launched HPE SGI 8600 and Apollo 6000 Gen10 systems, the companies can offer scalable and efficient high-performance computing and machine learning fabric solutions.

The collaboration will enable both companies to develop technology integration and use the forthcoming HDR InfiniBand Quantum switches, ConnectX-6 adapters and future Gen-Z devices. In addition, joint development work with HPE's Advanced Development team will support the advance to Exascale computing.

SHIELD

The new SHIELD technology is enabled within Mellanox's 100 Gbit/s EDR and 200 Gbit/s HDR InfiniBand solutions, providing the ability for interconnect components to exchange real-time information and to make instant smart decisions to help overcome issues and optimise data flows. SHIELD is designed to enable greater reliability, more productive computation and optimised data centre operations.


Friday, June 9, 2017

HPE selects Mellanox for 25/50/100 GBE fabric switch

Mellanox Technologies, a supplier of high-performance, end-to-end smart interconnect solutions for data centre servers and storage systems, announced that its Spectrum Ethernet switch ASIC has been selected to power the first Hewlett Packard Enterprise (HPE) Synergy Switch Module, supporting native 25, 50 and 100 Gigabit Ethernet connectivity.

The Mellanox Spectrum switch module serves to connect the HPE Synergy compute module with an Ethernet switch fabric offering high performance and low latency, as demanded for cloud, financial services, telco and HPC environments.

Mellanox noted that the new switch module is designed to help HPE support the transition to the next generation of Ethernet performance by providing 25 Gbit/s connectivity options for the Synergy platform. The Mellanox SH2200 Synergy switch module enables 25 and 50 Gigabit Ethernet compute and storage connectivity while also enabling 100 Gbit/s uplinks.

The capabilities of the Mellanox switch allow the HPE Synergy fabric portfolio to deliver high-performance Ethernet connectivity for an expanded range of applications, for example financial trading and analytics, scientific computing, cloud and NFV (network function virtualisation), where line rate throughput, zero packet loss and 300 ns latency offer advantages.

The HPE Synergy offering features compute, storage and built-in management, as well as the new advanced Ethernet fabric option. The SH2200 HPE Synergy Fabric, based on Mellanox's Spectrum Ethernet switch, offers a key building block in making enterprise applications more efficient and enabling data centre operators to analyse data in real time. HPE Synergy compute modules with the Mellanox SH2200 Synergy switch are due to be available in the third quarter of 2017.


Regarding the solution, Paul Miller, vice president of marketing at HPE, said, "HPE Synergy is the first composable infrastructure, a new category of infrastructure designed to accelerate application and services delivery for both traditional and new cloud native and DevOps environments on the same infrastructure… with Mellanox, HPE can offer higher performance networking as an integrated component of the Synergy platform".

Friday, April 21, 2017

Mellanox Cites Momentum for its 25G Ethernet Adapters

Mellanox Technologies announced that its ConnectX-4 Lx 25 Gigabit OCP and PCIe Ethernet adapters for data centre applications, which deliver 2.5x the throughput of 10 Gbit/s solutions over the same infrastructure along with low-latency performance, have been adopted by a number of major ODMs (original design manufacturers).

Mellanox stated that it is currently shipping hundreds of thousands of Ethernet adapters each quarter in line with increasing demand for its Ethernet connectivity solutions.

Mellanox cited ODM customers for its ConnectX-4 Lx 25 Gigabit Ethernet adapters including:

1. Wiwynn, a cloud infrastructure provider offering computing and storage products, which is shipping its OCP server SV7221G2 products with the Mellanox ConnectX-4 Lx OCP mezzanine NICs and PCIe cards to major ISPs.

2. Inventec of Taiwan, which has qualified ConnectX-4 Lx 25 Gigabit Ethernet cards for its TB800G4, Balder and K800G3 platforms for supply to major cloud and Web 2.0 providers in China.

3. Acer, a Taiwanese hardware and electronics company, which has qualified the ConnectX-4 Lx PCIe adapters and plans to shortly launch its Altos R380 F3, R360 F3 and AW2000h F3 servers.

4. Mitac-TYAN, a supplier of servers and desktop motherboards based in Taiwan, which is shipping ConnectX-3 Pro 40 Gigabit Ethernet OCP mezzanine cards and recently added the ConnectX-4 Lx 25 Gigabit Ethernet OCP mezzanine cards to its GT86A-B7083 server offering.


Mellanox's ConnectX-4 Lx is a 10/25/40/50 Gigabit Ethernet adapter that allows data centres to transition from 10 Gbit/s to 25 Gbit/s and from 40 Gbit/s to 50 Gbit/s while delivering similar power consumption and cost and utilising the same infrastructure.

Wednesday, March 22, 2017

Mellanox Intros Silicon Photonics-based 200G Transceivers

Mellanox Technologies introduced a new line of 200 Gbit/s silicon photonics and VCSEL-based transceivers in a QSFP28 package, and 100 Gbit/s silicon photonics components, designed for hyperscale Web 2.0 and cloud optical interconnect applications.

200 Gbit/s transceivers

Mellanox's new 200 Gbit/s silicon photonics and VCSEL-based transceivers are offered in the same QSFP28 package as current 100 Gbit/s products and target hyperscale Web 2.0 and cloud 100 Gbit/s networks. Mellanox is also introducing 200 Gbit/s active optical cables (AOCs) and direct attach copper cables (DACs), including breakout cables, enabling end-to-end 200 Gbit/s Ethernet networks.

Specifically, Mellanox is introducing the following 200 Gbit/s products:

1.  1550 nm DR4 QSFP28 silicon photonics transceiver for reach up to 500 metres on single mode fibre.

2.  SR4 VCSEL QSFP28 transceiver for reach up to 100 metres on OM4 multi-mode fibre.

3.  QSFP28 AOC for reach up to 100 metres.

4.  QSFP28 DAC cable for reaches up to 3 metres.

5.  QSFP28 to 4 x 50 Gbit/s SFP28 copper splitter cables for connecting 50 Gbit/s servers to ToR switches.

Mellanox noted that the transceivers and DACs support the new IEEE 200GAUI electrical standard using 25 GBaud PAM4 signalling.
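
The nominal line rate follows from the signalling: PAM4 encodes 2 bits per symbol, so four lanes at 25 GBaud yield 200 Gb/s before FEC and encoding overhead. Illustrative arithmetic:

```python
# Nominal line rate of 4-lane 25 GBaud PAM4 (ignoring FEC/encoding overhead).
lanes = 4                 # 4-lane electrical interface
baud_gbd = 25             # symbols per second per lane (GBaud)
bits_per_symbol = 2       # PAM4: 4 amplitude levels -> 2 bits per symbol

print(lanes * baud_gbd * bits_per_symbol)  # 200 Gb/s
```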

100 Gbit/s silicon photonics components

Mellanox also unveiled new 100 Gbit/s silicon photonics components and announced the availability of the following solutions:

1.  PSM4 silicon photonics 1550 nm transmitter with flip-chip bonded DFB lasers and attached 1 metre fibre pigtail for reach up to 2 km.

2.  PSM4 silicon photonics 1550 nm transmitter with flip-chip bonded DFB lasers and attached fibre stub for connectorised transceivers with reach up to 2 km.

3.  Low-power 100 Gbit/s (4 x 25 Gbit/s) modulator driver IC.

4.  PSM4 silicon photonics 1310 and 1550 nm receiver array with 1 metre fibre pigtail.

5.  PSM4 silicon photonics 1310 and 1550 nm receiver array for connectorised transceivers.

6.  Low-power 100 Gbit/s (4 x 25 Gbit/s) trans-impedance amplifier IC.


At OFC Mellanox held live demonstrations of its end-to-end solutions, including: Spectrum 100 Gbit/s QSFP28/ SFP28 switches; ConnectX-4 and ConnectX-5 25/50/100 Gbit/s QSFP28/SFP28 network adapters; LinkX 25/50/100 Gbit/s DAC and AOC cables and 100 Gbit/s SR4 and PSM4 transceivers; Quantum switches with 40 ports of 200 Gbit/s QSFP28 in a 1 RU chassis; and ConnectX-6 adapters with 2 ports of 200 Gbit/s QSFP28.
