
Monday, November 13, 2017

Mellanox: Infiniband powers most Supercomputers

InfiniBand solutions accelerate 77 percent of the new high-performance computing systems on the TOP500 list deployed since the previous list (June 2017 to November 2017), according to Mellanox Technologies.

Some highlights:

  • Mellanox accelerates the fastest supercomputer on the list
  • InfiniBand connects 2 of the top 5 systems - #1 (China) and #4 (Japan)
  • InfiniBand connects 6 times more new HPC systems versus proprietary interconnects (June’17 - Nov’17)
  • InfiniBand connects 15 times more new HPC systems versus Ethernet (June’17 - Nov’17)
  • InfiniBand connects 77% of new HPC systems (June’17 – Nov’17)
  • Mellanox connects 39 percent of overall TOP500 systems (192 systems, InfiniBand and Ethernet)
  • InfiniBand connects 33 percent of the total TOP500 systems (164 systems)
  • InfiniBand connects 60 percent of the HPC TOP500 systems
  • 25G Ethernet first appearance on the Nov’17 TOP500 list (China Hyperscale companies) - 19 systems
  • Mellanox connects all of 25G, 40G and 100G Ethernet systems on the Nov’17 TOP500 list
  • InfiniBand is the most used high-speed Interconnect on the TOP500

“Due to its smart acceleration and offload advantages, InfiniBand connects the vast majority of the new TOP500 high-performance computing and deep learning systems deployed in the last 6 months. Furthermore, InfiniBand accelerates the fastest supercomputer in the world and in China, the fastest supercomputer in Japan, and has been selected to connect the fastest supercomputers in Canada, and in the US. By delivering highest applications performance, scalability and robustness, InfiniBand enables users to maximize their data center return on investment and improve their total cost of ownership by 50 percent,” said Eyal Waldman, president and CEO of Mellanox Technologies.

“We are also happy to see our 25 Gigabit and above Ethernet solutions on the TOP500 list, representing the adoption of our Ethernet NICs, switches and cables in hyperscale and cloud companies, as they deliver highest efficiency to these platforms. Finally, we are excited to showcase our HDR 200Gb/s switch systems portfolio at the Supercomputing’17 conference, as we plan to release our HDR InfiniBand solutions in the first half of next year, further increasing the technology advantage of Mellanox in high-performance computing, cloud, Web2.0, database, deep learning and compute and storage platforms.”

Thursday, November 9, 2017

NGENIX deploys Mellanox Open Ethernet Switch

NGENIX, a subsidiary of Rostelecom, a leading Russian telecom provider, has deployed a 100Gb/s Ethernet Spectrum switch based on the Linux Switchdev driver to support their next generation content distribution network service.

Mellanox said this is the first major deployment of an open Ethernet switch based on the Switchdev driver that has been accepted and is available as open source as part of the Linux kernel. The combination of Mellanox’s high performance and field-proven Spectrum switch systems running an open, standard Linux distribution provides NGENIX with unified Linux interfaces across data center entities, servers and switches alike, with no compromise on performance.
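The practical appeal of Switchdev is that switch front-panel ports show up as ordinary Linux netdevs, so the same iproute2 tooling used on servers configures the switch. A minimal illustrative sketch (port names such as `swp1` are assumptions following the common switchdev naming convention, not taken from the NGENIX deployment):

```shell
# With the switchdev driver, front-panel ports are plain Linux netdevs,
# so standard iproute2 commands configure the switch just like a server.
ip link set dev swp1 up                              # bring up a front-panel port
ip link add name br0 type bridge vlan_filtering 1    # create a VLAN-aware bridge
ip link set dev swp1 master br0                      # enslave the port (bridging is offloaded to the ASIC)
bridge vlan add dev swp1 vid 100                     # VLAN membership via the standard 'bridge' tool
```

The same commands, monitoring hooks and authorization mechanisms then apply uniformly across servers and switches, which is the "unified Linux interfaces" benefit NGENIX cites.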

“We were looking for a truly open solution to power our next generation 100GbE network,” said Dmitry Krikov, CTO at NGENIX. “The choice was clear. Not only is the Mellanox Spectrum-based switch the only truly open, Linux kernel-based solution, but it also allows us to use a single infrastructure to manage, authorize and monitor our entire network. In addition, it’s proving to be very cost-effective in terms of price-performance.”


Monday, November 6, 2017

Mellanox intros Innova-2 FPGA-based adapter

Mellanox Technologies introduced its Innova-2 product family of FPGA-based smart network adapters for a range of applications including security, cloud, Big Data, deep learning, NFV and high-performance computing.

The Innova-2 adapters will be offered in multiple configurations, either open for customers’ specific applications or pre-programmed for security applications with encryption acceleration such as IPsec, TLS/SSL and more. The Innova-2 family of dual-port Ethernet and InfiniBand network adapters supports network speeds of 10, 25, 40, 50 and 100Gb/s, while the PCIe Gen4 and OpenCAPI (Coherent Accelerator Processor Interface) host connections offer low-latency and high-bandwidth.

Mellanox said this new line of Innova-2 adapters delivers 6X higher performance while reducing total cost of ownership by 10X when compared to alternative options. The new products combine the company's ConnectX-5 25/40/50/100Gb/s Ethernet and InfiniBand network adapters with a Xilinx UltraScale™ FPGA accelerator.

“The Innova-2 product line brings new levels of acceleration to Mellanox intelligent interconnect solutions,” said Gilad Shainer, vice president of Marketing, Mellanox Technologies. “We are pleased to equip our customers with new capabilities to develop their own innovative ideas, whether related to security, big-data analytics, deep learning training and inferencing, cloud and other applications. The solution allows our customers to achieve unprecedented performance and flexibility for the most demanding market needs.”

Wednesday, October 25, 2017

Mellanox Posts Q3 sales of $225.7 million - flat yoy

Mellanox Technologies reported Q3 2017 revenue of $225.7 million, up 0.7 percent compared to $224.2 million in the third quarter of 2016. GAAP gross margins were 65.7 percent, compared to 65.1 percent in the third quarter of 2016. GAAP net income was $3.4 million, compared to $12.0 million in the third quarter of 2016.

“We are pleased to achieve a record revenue quarter and resume our growth. Our third quarter Ethernet revenues achieved double digit sequential growth, driven by increasing deployments of our 25 gigabit per second and above products, which demonstrates our leadership position in these markets,” said Eyal Waldman, President and CEO of Mellanox Technologies. “During the third quarter, InfiniBand revenues declined seven percent sequentially mainly due to a large Department of Energy CORAL deployment in the second quarter. On a year-over-year basis, our InfiniBand high-performance computing and artificial intelligence revenues increased by double digit percentages."

Wednesday, October 4, 2017

Mellanox announces software-defined SmartNIC adapters based on ARM

Mellanox Technologies announced its BlueField family of software-defined SmartNIC adapters, designed for scale-out server and storage applications.

The new adapters leverage embedded ARM processor cores based on the company's BlueField system-on-chip processors and accelerators in the network interface card (NIC).

Key features of the BlueField intelligent adapters:

  • 2 network ports of Ethernet or InfiniBand: 10G/25G, 40G, 50G or 100Gb/s options
  • RDMA support for both InfiniBand and RoCE from the leader in RDMA technology
  • Accelerators for NVMe-over-Fabrics (NVMe-oF), RAID, crypto and packet processing
  • PCI Express Gen3 and Gen4, with either x8- or x16-lane configurations
  • Integrated low-latency PCIe switch with up to 8 external ports for flexible topologies
  • Up to 16 ARMv8 Cortex A72 processors with 20MB of coherent cache
  • 8 – 32GB of on-board DDR4 DRAM
  • Comprehensive virtualization support with SR-IOV
  • Accelerated Switching and Packet Processing (ASAP2) OVS offloads
  • Multi-host and SocketDirect™ enabling a single adapter to support up to four CPU hosts
  • Multiple server form-factor options including half-height, half-length PCIe and other configurations


Mellanox said its new BlueField SmartNIC could be used for a range of applications, including Network Functions Virtualization (NFV), security and network traffic acceleration. The fully programmable environment and DPDK framework support a wide range of standard software packages running in the BlueField ARM subsystem. Examples include: Open vSwitch (OVS), Security packages such as L3/4 firewall, DDoS protection and Intrusion Prevention, encryption stacks (IPsec, SSL/TLS), traffic monitoring, telemetry and packet capture.

“Our BlueField adapters effectively place a Computer in Front of the Computer,” said Gilad Shainer, vice president marketing, Mellanox Technologies. “They provide the flexibility needed to adapt to new and emerging network protocols, and to implement complex networking and security functions in a distributed manner, right at the boundary of the server. This brings more scalability to the data center and enhances security by creating an isolated trust zone.”

Tuesday, August 22, 2017

Mellanox Supplies Network Adapters for Alibaba's 25G RoCE Ethernet Cloud

Mellanox Technologies confirmed that its 25GbE and 100GbE ConnectX®-4 EN family of Ethernet adapters has been deployed in Alibaba's data centers.

Key capabilities of the Mellanox ConnectX-4 Network Interface Cards (NICs):

  • RoCE (RDMA over Converged Ethernet) - RDMA (Remote Direct Memory Access) technology reduces server-side data processing delay in network transmission by enabling network adapters to access the application buffer directly, bypassing the kernel, the CPU, and the protocol stack, so the CPU can perform more useful tasks during the I/O transport.
  • Because RoCE runs over converged Ethernet, it can be implemented on an existing open Ethernet network. With RoCE, there is no need to convert a data center's legacy infrastructure, which allows companies to save on capital spending.
  • DPDK (Data Plane Development Kit) provides a framework for fast packet processing in data plane applications, and the tool set allows developers to rapidly build new prototypes. The Mellanox open source DPDK software runs on industry-standard servers and provides the performance needed for large-scale, efficient production deployments of Network Function Virtualization (NFV) solutions such as gateways, load balancers, and enhanced security solutions that help prevent denial-of-service attacks in the data center.
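The core of the RDMA benefit described above is avoiding per-transfer copies through the kernel. As a loose conceptual analogy only (plain Python, not an RDMA API), a zero-copy view of a buffer behaves like direct access, while the conventional path takes a detached snapshot:

```python
# Loose analogy (not RDMA code): direct zero-copy access vs a copying path.
buf = bytearray(1024)            # an application buffer
view = memoryview(buf)[64:128]   # zero-copy "direct access" window into the buffer
copy = bytes(buf[64:128])        # the copying path a conventional stack would take

view[0] = 0xAB                   # writing through the window...
print(buf[64] == view[0])        # True: the view aliases the buffer, nothing was copied
print(copy[0] == buf[64])        # False: the copy is a stale, detached snapshot
```

In real RoCE the "view" is the NIC's DMA access to a registered application buffer, which is why the CPU and kernel protocol stack stay out of the data path.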


http://www.mellanox.com

Friday, July 7, 2017

Mellanox intros Spectrum-2 200/400 GBE data centre switch

Mellanox Technologies announced the Spectrum-2, a scalable 200 and 400 Gbit/s Open Ethernet switch solution designed to enable increased data centre scalability and lower operational costs through improved power efficiency.

Spectrum-2 also provides enhanced programmability and optimised routing capabilities for building efficient Ethernet-based compute and storage infrastructures.

Mellanox's Spectrum-2 provides leading Ethernet connectivity for up to 16 ports of 400 Gigabit Ethernet, 32 ports of 200 Gigabit Ethernet, 64 ports of 100 Gigabit Ethernet and 128 ports of 50 and 25 Gigabit Ethernet, and offers enhancements including increased flexibility and port density for a range of switch platforms optimised for cloud, hyperscale, enterprise data centre, big data, artificial intelligence, financial and storage applications.
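The quoted port configurations are internally consistent: each speed/port-count pair implies the same aggregate switching capacity (a back-of-envelope sketch; the 128-port case uses the 50 GbE figure, since running those ports at 25 GbE leaves the switch below full capacity):

```python
# Each quoted Spectrum-2 port configuration implies the same aggregate
# capacity: port speed (Gb/s) x port count.
configs = {400: 16, 200: 32, 100: 64, 50: 128}
capacities = {speed * ports for speed, ports in configs.items()}
print(capacities)  # {6400} -> 6.4 Tb/s in every configuration
```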

Spectrum-2 is designed to enable IT managers to optimise their network for specific customer requirements. The solution implements a complete set of the network protocols within the switch ASIC efficiently, providing users with the functionality required out-of-box. Additionally, Spectrum-2 includes a flexible parser and packet modifier which can be programmed to process new protocols as they emerge in the future.

Mellanox stated that Spectrum-2 is the first 400/200 Gigabit Ethernet switch to provide adaptive routing and load balancing while guaranteeing zero packet loss and unconditional port performance for predictable network operation. The solution also supports double the data capacity while providing latency of 300 nanoseconds, claimed to be 1.4 times less than alternative offerings. It is designed to provide the foundation for Ethernet storage fabrics for connecting the next generation of Flash based storage platforms.

Mellanox noted that Spectrum-2 extends the capabilities of its first generation Spectrum switch, which is now deployed in thousands of data centres. Spectrum enables IT managers to efficiently implement 10 Gbit/s and higher infrastructures and to economically migrate to 25, 50 and 100 Gbit/s speeds.


The new Spectrum-2 maintains the same API as Spectrum for porting software onto the ASIC via the Open SDK/SAI API or Linux upstream driver (Switchdev), and supports standard network operating systems and interfaces including Cumulus Linux, SONIC and standard Linux distributions. It also supports telemetry capabilities including the latest in-band network telemetry standard, enabling visibility into the network and monitoring, diagnosis and analysis of operations.


Monday, June 19, 2017

Mellanox forms strategic agreement with HPE

Mellanox Technologies announced a strategic collaboration with HPE covering high-performance computing and machine learning data centres, and also introduced SHIELD, an interconnect technology that is claimed to improve data centre fault recovery by 5,000 times through providing interconnect autonomous self-healing capabilities.

HPE collaboration

Mellanox collaboration with HPE is designed to enable efficient high-performance computing and machine learning data centres based on technologies from both parties. The joint solutions are intended to enable customers to leverage the InfiniBand and Gen-Z open standards to enhance return on investment for current and future data centres and applications.

Leveraging the intelligent In-Network Computing capabilities of Mellanox's ConnectX-5 InfiniBand adapters and Switch-IB2 InfiniBand switches in the recently launched HPE SGI 8600 and Apollo 6000 Gen10 systems, the companies can offer scalable and efficient high-performance computing and machine learning fabric solutions.

The collaboration will enable both companies to develop technology integration and use the forthcoming HDR InfiniBand Quantum switches, ConnectX-6 adapters and future Gen-Z devices. In addition, joint development work with HPE's Advanced Development team will support the advance to Exascale computing.

SHIELD

The new SHIELD technology is enabled within Mellanox's 100 Gbit/s EDR and 200 Gbit/s HDR InfiniBand solutions, providing the ability for interconnect components to exchange real-time information and to make instant smart decisions to help overcome issues and optimise data flows. SHIELD is designed to enable greater reliability, more productive computation and optimised data centre operations.


Friday, June 9, 2017

HPE selects Mellanox for 25/50/100 GBE fabric switch

Mellanox Technologies, a supplier of high-performance, end-to-end smart interconnect solutions for data centre servers and storage systems, announced that its Spectrum Ethernet switch ASIC has been selected to power the first Hewlett Packard Enterprise (HPE) Synergy Switch Module, supporting native 25, 50 and 100 Gigabit Ethernet connectivity.

The Mellanox Spectrum switch module serves to connect the HPE Synergy compute module with an Ethernet switch fabric offering high performance and low latency, as demanded for cloud, financial services, telco and HPC environments.

Mellanox noted that the new switch module is designed to help HPE support the transition to the next generation of Ethernet performance by providing 25 Gbit/s connectivity options for the Synergy platform. The Mellanox SH2200 Synergy switch module enables 25 and 50 Gigabit Ethernet compute and storage connectivity while also enabling 100 Gbit/s uplinks.

The capabilities of the Mellanox switch allow the HPE Synergy fabric portfolio to deliver high-performance Ethernet connectivity for an expanded range of applications, for example financial trading and analytics, scientific computing, cloud and NFV (network function virtualisation), where line rate, zero packet loss and 300 ns latency offer advantages.

The HPE Synergy offering features compute, storage and built-in management, as well as the new advanced Ethernet fabric option. The SH2200 HPE Synergy Fabric, based on Mellanox's Spectrum Ethernet switch, offers a key building block in helping make enterprise applications more efficient and enabling data centre operators to analyse data in real time. HPE Synergy compute modules with the Mellanox SH2200 Synergy switch are due to be available in the third quarter of 2017.


Regarding the solution, Paul Miller, vice president of marketing at HPE, said, "HPE Synergy is the first composable infrastructure, a new category of infrastructure designed to accelerate application and services delivery for both traditional and new cloud native and DevOps environments on the same infrastructure… with Mellanox, HPE can offer higher performance networking as an integrated component of the Synergy platform".

Friday, April 21, 2017

Mellanox Cites Momentum for its 25G Ethernet Adapters

Mellanox Technologies announced that its ConnectX-4 Lx 25 Gigabit OCP and PCIe Ethernet adapters for data centre applications, which deliver 2.5x the throughput of 10 Gbit/s solutions over the same infrastructure along with low-latency performance, have been adopted by a number of major ODMs (original design manufacturers).

Mellanox stated that it is currently shipping hundreds of thousands of Ethernet adapters each quarter in line with increasing demand for its Ethernet connectivity solutions.

Mellanox cited ODM customers for its ConnectX-4 Lx 25 Gigabit Ethernet adapters including:

1.  Wiwynn, a cloud infrastructure provider offering computing and storage products, which is shipping its OCP server SV7221G2 products with the Mellanox ConnectX-4 Lx OCP Mezzanine NICs and PCIe cards to major ISPs.

2.  Inventec of Taiwan, which has qualified ConnectX-4 Lx 25 Gigabit Ethernet cards for TB800G4, Balder and K800G3 platforms for supply to major cloud and Web 2.0 providers in China.

3.  Acer, a Taiwanese hardware and electronics company, which has qualified the ConnectX-4 Lx PCIe adapters and plans to shortly launch its Altos R380 F3, R360 F3 and AW2000h F3 servers.

4.  Mitac-TYAN, a supplier of servers and desktop motherboards based in Taiwan, which is shipping ConnectX-3 Pro 40 Gigabit Ethernet OCP mezzanine cards and recently added the ConnectX-4 Lx 25 Gigabit Ethernet OCP mezzanine cards to its GT86A-B7083 server offering.


Mellanox's ConnectX-4 Lx is a 10/25/40/50 Gigabit Ethernet adapter that allows data centres to transition from 10 Gbit/s to 25 Gbit/s and from 40 Gbit/s to 50 Gbit/s while delivering similar power consumption and cost and utilising the same infrastructure.

Wednesday, March 22, 2017

Mellanox Intros Silicon Photonics-based 200G Transceivers

Mellanox Technologies introduced a new line of 200 Gbit/s silicon photonics and VCSEL-based transceivers in a QSFP28 package, along with 100 Gbit/s silicon photonics components, designed for hyperscale Web 2.0 and cloud optical interconnect applications.

200 Gbit/s transceivers

Mellanox's new 200 Gbit/s silicon photonics and VCSEL-based transceivers are offered in the same QSFP28 package as current 100 Gbit/s products and target hyperscale Web 2.0 and cloud 100 Gbit/s networks. Mellanox is also introducing 200 Gbit/s active optical cables (AOCs) and direct attach copper cables (DACs), including breakout cables, enabling end-to-end 200 Gbit/s Ethernet networks.

Specifically, Mellanox is introducing the following 200 Gbit/s products:

1.  1550 nm DR4 QSFP28 silicon photonics transceiver for reach up to 500 metres on single mode fibre.

2.  SR4 VCSEL QSFP28 transceiver for reach up to 100 metres on OM4 multi-mode fibre.

3.  QSFP28 AOC for reach up to 100 metres.

4.  QSFP28 DAC cable for reaches up to 3 metres.

5.  QSFP28 to 4 x 50 Gbit/s SFP28 copper splitter cables for connecting 50 Gbit/s servers to ToR switches.

Mellanox noted that the transceivers and DACs support the new IEEE 200GAUI electrical standard using 25 GBaud PAM4 signalling.
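The quoted figures line up arithmetically: PAM4 carries 2 bits per symbol, so four lanes at the stated 25 GBaud yield the 200 Gb/s aggregate (a back-of-envelope check of the nominal rate, ignoring line-coding and FEC overhead):

```python
# 4 lanes x 25 GBaud x 2 bits per PAM4 symbol = 200 Gb/s nominal aggregate.
lanes, gbaud_per_lane, bits_per_pam4_symbol = 4, 25, 2
aggregate_gbps = lanes * gbaud_per_lane * bits_per_pam4_symbol
print(aggregate_gbps)  # 200
```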

100 Gbit/s silicon photonics components

Mellanox also unveiled new 100 Gbit/s silicon photonics components and announced the availability of the following solutions:

1.  PSM4 silicon photonics 1550 nm transmitter with flip-chip bonded DFB lasers and attached 1 metre fibre pigtail for reach up to 2 km.

2.  PSM4 silicon photonics 1550 nm transmitter with flip-chip bonded DFB lasers and attached fibre stub for connectorised transceivers with reach up to 2 km.

3.  Low-power 100 Gbit/s (4 x 25 Gbit/s) modulator driver IC.

4.  PSM4 silicon photonics 1310 and 1550 nm receiver array with 1 metre fibre pigtail.

5.  PSM4 silicon photonics 1310 and 1550 nm receiver array for connectorised transceivers.

6.  Low-power 100 Gbit/s (4 x 25 Gbit/s) trans-impedance amplifier IC.


At OFC Mellanox held live demonstrations of its end-to-end solutions, including: Spectrum 100 Gbit/s QSFP28/ SFP28 switches; ConnectX-4 and ConnectX-5 25/50/100 Gbit/s QSFP28/SFP28 network adapters; LinkX 25/50/100 Gbit/s DAC and AOC cables and 100 Gbit/s SR4 and PSM4 transceivers; Quantum switches with 40 ports of 200 Gbit/s QSFP28 in a 1 RU chassis; and ConnectX-6 adapters with 2 ports of 200 Gbit/s QSFP28.

Thursday, March 16, 2017

Mellanox: 100G Transceiver Shipments Ramp Up Quickly

Mellanox Technologies reported that it has shipped more than 200,000 VCSEL and silicon photonics transceiver modules to serve the growing demand of hyperscale Web 2.0, cloud, and enterprise 100Gb/s networks. The modules are delivered in the QSFP28 form factor as Active Optical Cables (AOCs) or as standalone pluggable transceivers.

“The 100Gb/s optical transceivers market has ramped very quickly and we have ramped our optical manufacturing capabilities with it, and are shipping multiple product families in high volume,” said Amir Prescher, senior vice president of business development and general manager of the interconnect business at Mellanox. “Multi-mode optics are the most cost effective solution on the market today to connect 25G and 100G Ethernet servers and switches over shorter data center reaches. For longer reaches, customers selected our silicon photonics-based PSM4 transceivers as the most cost effective, highest configurable, single-mode transceiver available.”

Recently, Mellanox announced that it has shipped over 100,000 Direct Attach Copper (DAC) cables for 100Gb/s networks. DACs, typically less than 3 meters in length, are used to link servers and storage to Top-of-Rack switches. Transceivers and AOCs offer lengths up to 2km.

http://www.mellanox.com

Wednesday, March 8, 2017

Mellanox Enables PCIe Gen-4 OpenPOWER-Based Rackspace OCP Server

Mellanox Technologies is supplying its ConnectX-5 Open Compute Project (OCP) Ethernet adapter to enable the world’s first PCIe Gen-4 OpenPOWER/OCP-based Zaius, the open server platform from Google and Rackspace.

Mellanox’s ConnectX-5 supports both InfiniBand and Ethernet at 10/25/50/100 Gbps. It is also the first adapter to support PCI Express Gen 4.0 for full 200Gb/s data throughput to servers and storage platforms.

Mellanox said its ConnectX-5 also supports Multi-Host technology, which disaggregates the network and enables building new scale-out heterogeneous compute and storage racks with direct connectivity from multiple processors to a shared network controller. Mellanox Multi-Host technology is available today in the Mellanox portfolio of ConnectX-4 Lx, ConnectX-4, and ConnectX-5 adapters at speeds of 50 and 100Gb/s.

“We anticipate that Zaius and our Barreleye G2 server solution will bring new levels of performance and efficiency to our portfolio,” said Aaron Sullivan, Distinguished Engineer, Rackspace. “This platform combines IBM’s Power9 processor with PCI Express Gen4, and Mellanox ConnectX-5 network adapters. Leveraging these technologies, it is now possible to deliver hundreds of gigabits of bandwidth from a single network adapter.”

“IBM, the OpenPOWER Foundation and its members are fostering an open ecosystem for innovation to unleash the power of cognitive and AI computing platforms,” said Ken King, IBM general manager of OpenPOWER. “The combination of the POWER processor and Mellanox ConnectX-5 technology, using novel interfaces like CAPI and OpenCAPI, will dramatically increase system throughput for the next generation of advanced analytics, AI and cognitive applications.”

“Mellanox has been committed to OCP’s vision from its inception and we are excited to bring continued innovation to this growing community,” said Kevin Deierling, vice president marketing at Mellanox Technologies. “Through collaboration between IBM and Rackspace, we continue to push the boundaries of innovation, enable open platforms and unlock performance of compute and storage infrastructure.”

http://www.mellanox.com

Tuesday, February 28, 2017

Mellanox and ECI partner to deliver virtual CPE platform for NFV

Mellanox Technologies, a supplier of interconnect solutions for data centre servers and storage systems, and 'elastic networking' company ECI have announced the introduction of an advanced virtual CPE (vCPE) platform at the Mobile World Congress 2017.

The joint platform is designed to provide enhanced performance and efficiency and enable service providers to cost-effectively implement network function virtualisation (NFV) deployments. The solution is based on ECI's Mercury NFVi platform and uCPE solution, accelerated by Mellanox's Indigo network processor delivering over 400 Gbit/s of L2-7 packet processing. Compared to un-accelerated platforms, the ECI-Mellanox solution is claimed to deliver more than a 30x performance improvement for virtual router, L4-7 firewall and L7 QoS applications.

The ECI Mercury NFVi platform and uCPE technology converge multiple customer premises networking functions onto an elastic, software-configurable platform, with the uCPE solution enabling service providers to combine networking functions flexibly to create new value-added service mixes.

In addition, the advanced network processing capabilities of Mellanox Indigo constitute an integral part of the NFV Infrastructure to enhance the performance of virtualised network functions, particularly those requiring guaranteed throughput, packet rate, latency and jitter for SLAs.

The companies noted that compared to conventional platforms, the joint ECI-Mellanox solution can enable flexible service creation more cost-effectively together with enhanced infrastructure efficiency and in a more compact footprint.

Separately, Mellanox unveiled the IDG4400 6WIND Network Routing and IPsec platform based on the combination of Indigo and 6WIND's 6WINDGate packet processing software, which includes routing and security features such as IPsec VPNs. The IDG4400 6WIND 1 U platform supports 10/40/100 Gigabit Ethernet connectivity and can deliver sustained rates of up to 180 Gbit/s encryption/decryption while providing IPv4/v6 routing functions at rates up to 400 Gbit/s.

http://www.mellanox.com

Monday, February 13, 2017

Mellanox Demos Innova IPsec 40G Ethernet Network Adapter

Mellanox Technologies announced that its Innova IPsec Network Adapter demonstrated more than three times higher crypto throughput and more than four times better CPU utilization when compared to x86 software-based server offerings.

The Innova IPsec adapter addresses the growing need for security and “encryption by default” by combining Mellanox ConnectX advanced network adapter accelerations with IPsec offload capabilities to deliver end-to-end data protection in a low profile PCIe form factor. It offers support for RDMA over Converged Ethernet (RoCE), Ethernet stateless offload engines, Overlay Networks, etc.

“The Innova security adapter product line enables the use of secure communications in a cost effective and a performant manner,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Whether used within an appliance such as firewall or gateway, or as an intelligent adapter that ensures data-in-motion protection, Innova IPsec adapters are the ideal solution for cloud, Web 2.0, telecommunication, high-performance compute, storage systems and other applications.”

http://www.mellanox.com

Monday, February 6, 2017

Mellanox 100G Milestone: 100,000 Direct Attach Cables Shipped

Mellanox Technologies reached a significant milestone: more than 100,000 units of its Direct Attach Copper Cables (DACs) have now been shipped to serve the growing demand of hyperscale Web 2.0 and cloud 100Gb/s networks.

“Hyperscale customers are selecting Mellanox cables due to our advanced manufacturing automation technologies which enable us to achieve higher quality, lower costs and to deliver in high volume,” said Amir Prescher, senior vice president of business development and general manager of the interconnect business at Mellanox. “Copper cables are the most cost effective way to connect new 25G and 50G servers to TOR switches as they enable the entire new generation of 100Gb/s networks.”

Mellanox offers a full line of 10, 25, 40, 50 and 100 Gbps copper cabling for server and storage interconnect. The two most popular options are splitter cables, which feature a 100 Gbps connector at one end for plugging into a switch port and either two 50 Gbps connectors or four 25 Gbps connectors at the other end for connecting to 25G or 50G servers. Widely used by hyperscale customers to connect servers to the top of the rack (TOR) switch, DACs have lower cost and zero power consumption when compared to optical cables and transceivers. The superior performance and low 1E-15 BER eliminate the need for FEC, which would add latency to the critical server-TOR link.
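To see why a 1E-15 BER makes FEC unnecessary on this link, a rough arithmetic sketch (assuming a 100 Gb/s line rate, as in these networks):

```python
# Expected bit errors per second at the quoted BER on a 100 Gb/s link.
bit_rate_bps = 100e9   # 100 Gb/s
ber = 1e-15            # quoted bit error ratio
errors_per_second = bit_rate_bps * ber
print(f"{errors_per_second:.1e}")               # 1.0e-04 errors per second
print(round(1 / errors_per_second / 3600, 1))   # ~2.8 -> roughly one bit error every 2.8 hours
```

At that error rate, higher-layer retransmission handles the rare residual errors, so skipping FEC trades nothing meaningful for lower latency on the server-TOR hop.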

http://www.mellanox.com

Thursday, February 2, 2017

Mellanox Says Infiniband Continues to Grow

Mellanox Technologies reported Q4 revenue of $221.7 million and full fiscal year 2016 revenue of $857.5 million. GAAP gross margin was 66.8 percent in the fourth quarter and 64.8 percent for fiscal year 2016.

“During the fourth quarter we saw continued sequential growth in our InfiniBand business, driven by robust customer adoption of our 100 Gigabit EDR solutions into artificial intelligence, machine learning, high-performance computing, storage, database and more. Our quarterly, and full-year 2016 results, highlight InfiniBand’s continued leadership in high-performance interconnects,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Customer adoption of our 25, 50, and 100 gigabit Ethernet solutions continued to grow in the fourth quarter. Adoption of Spectrum Ethernet switches by customers worldwide generated positive momentum exiting 2016. Our fourth quarter and full-year 2016 results demonstrate Mellanox’s diversification, and leadership in both Ethernet and InfiniBand. We anticipate growth in 2017 from all Mellanox product lines.”

http://www.mellanox.com


Tuesday, January 10, 2017

Mellanox Supplies Ethernet Switches/Adapters for Baidu's Machine Learning

Baidu, the leading Chinese Internet search engine, has selected Mellanox Technologies' Spectrum Ethernet switches and ConnectX-4 100Gb/s Ethernet adapters for the Baidu Machine Learning platform. The Spectrum switches and RDMA-enabled ConnectX-4 adapters enable efficient data movement for machine learning workloads. Mellanox said its solutions enabled Baidu to demonstrate a 200 percent improvement in machine learning training times, resulting in faster decision making.

“We are pleased to continue working with Mellanox to enable the most efficient platforms for our applications,” said Mr. Liu Ning, system department deputy director at Baidu. “Mellanox Ethernet solutions with RDMA allow us to fully leverage our Machine Learning platform and work with various machine models while saving valuable CPU cycles and associated computing costs.”

“Machine Learning has become a critical predictive and computational tool for many businesses worldwide,” said Amir Prescher, senior vice president of business development, Mellanox Technologies. “Working with Baidu, the premier internet search provider in China, has enabled Mellanox to showcase the advantages and cost effectiveness of our Spectrum switches and ConnectX-4 100Gb/s adapter solutions to enable the most efficient machine learning platforms.”

http://www.mellanox.com/page/press_release_item?id=1837

Mellanox also announced driver support for ConnectX-4 Ethernet and RoCE (RDMA over Converged Ethernet) on the VMware vSphere virtualization platform.

The new vSphere software for ConnectX-4 delivers three critical new capabilities: increased Ethernet network speeds of 25/50 and 100 Gb/s, virtualized application communication over RoCE, and advanced network virtualization and SDN (Software Defined Networking) acceleration.

Tuesday, December 6, 2016

Mellanox NPS-400 Processor Delivers 400 Gbps

Mellanox Technologies reported unprecedented packet processing performance of its NPS-400 Network Processor when using its newly released Deep Packet Inspection and Stateful Packet Processing software libraries.

Mellanox said these new software libraries, coupled with the hardware acceleration capabilities of the NPS-400, enable Deep Packet Inspection processing for application recognition at record-breaking rates of up to 400 Gbps while handling 100 million flows with an average packet size of 400 bytes.
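To put the quoted figures in perspective (our own calculation, not Mellanox's), the claimed 400 Gbps at a 400-byte average packet size implies a packet rate of 125 million packets per second, leaving only about 8 ns of wall-clock time per packet:

```python
# Rough arithmetic behind the quoted figures: packet rate and
# per-packet time budget at 400 Gb/s with 400-byte average packets.
THROUGHPUT_BPS = 400e9      # 400 Gb/s aggregate DPI rate
AVG_PACKET_BYTES = 400

packets_per_second = THROUGHPUT_BPS / (AVG_PACKET_BYTES * 8)
time_budget_ns = 1e9 / packets_per_second

print(f"{packets_per_second / 1e6:.0f} Mpps, {time_budget_ns:.1f} ns per packet")
```

Note this counts only payload bits; real Ethernet framing overhead (preamble, inter-frame gap, FCS) would push the effective packet rate somewhat lower.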

These processing capabilities could be used for intrusion detection and intrusion prevention systems (IDS/IPS), and to accelerate packet processing in switch routers.

“Qosmos is very excited to collaborate with Mellanox providing a record breaking performance of Stateful Packet Processing and Deep Packet Inspection at 400Gb/s on the Mellanox NPS-400 solution,” said Thibaut Bechetoille, CEO of Qosmos. “Deep Packet Inspection drives L7 applications intelligence in the network and we expect further deployment of L7 services at more and more places in the network.”

http://www.mellanox.com/

Sunday, November 6, 2016

Mellanox Pursues Open Source Path for its NPUs

Mellanox Technologies is launching an open source software initiative to enable advanced open networking platforms such as routers, load balancers, and firewalls based on its network processors.

The company is releasing an open SDK for its most advanced network processor unit (NPU), the NPS-400, which delivers programmable packet processing at 600 million packets per second. The Mellanox OpenNPU is made available under either a GPL or BSD license. The SDK for the NPS-400 packet processor includes open source driver software, APIs, control and data path libraries, and a complete toolchain to program the NPS-400 network processor. In addition, the kit provides reference applications for switching, routing and IPsec processing. An accelerated Linux Virtual Server (LVS) is provided as an example of how OpenNPU can deliver significant hardware acceleration to the Linux kernel networking stack.
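A quick sanity check on what the quoted 600 Mpps means for software running on the part (our own sketch; the 256-core count below is an illustrative assumption, not a figure from the announcement):

```python
# Per-packet time budget implied by the quoted 600 Mpps figure.
# The core count is hypothetical, for illustration only.
PACKET_RATE = 600e6          # packets/s, from the release
ASSUMED_CORES = 256          # assumption; adjust for the real part

chip_budget_ns = 1e9 / PACKET_RATE                   # wall-clock time per packet
per_core_budget_ns = chip_budget_ns * ASSUMED_CORES  # time each core gets per packet

print(f"{chip_budget_ns:.2f} ns/packet chip-wide, "
      f"~{per_core_budget_ns:.0f} ns/packet per core with {ASSUMED_CORES} cores")
```

Chip-wide that is under 2 ns per packet, which is why a many-core run-to-completion design with hardware offloads, rather than a single fast pipeline, is the natural fit for this rate.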

“The market for networking devices is undergoing a major paradigm shift, moving away from closed proprietary OEM network equipment and migrating towards open platforms that are both flexible and configurable,” said Dror Goldenberg, vice president software architecture, Mellanox Technologies. “We are in full support of this movement as evidenced by our initiative to open source the entire suite of Mellanox software for our NPS family of network processors.”

http://www.mellanox.com/page/press_release_item?id=1808

Mellanox Samples its High-end Network Processor

Mellanox Technologies has begun sampling its next-generation NPS-400 network processor, which is capable of performing advanced deep packet processing for security and telecommunications applications at over 800 Gbps.


The NPS features:

  • programmable CPU cores highly optimized for packet processing, leveraging deep packet processing and applications experience
  • a traffic manager
  • hardware accelerators for security and DPI (Deep Packet Inspection) tailored for efficiency and performance
  • on-chip search engines, including TCAM with scaling through algorithmic extension to external low-cost, low-power DRAM memory
  • a multitude of network interfaces providing an aggregated bandwidth of 800 Gigabits per second, including 10-, 40- and 100-Gigabit Ethernet, Interlaken and PCI Express interfaces

It offers C-based programming, a standard toolset, support for the Linux operating system, large code space, and a run-to-completion or pipeline programming style. Mellanox supplies a library of source code for a variety of applications.
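The two programming styles mentioned above can be contrasted with a toy sketch (ours, not NPS code; the stage names are invented): run-to-completion carries each packet through every processing stage on one worker, while a pipeline hands batches from stage to stage.

```python
# Toy contrast of the two styles; stages and fields are hypothetical.
def parse(pkt): return {**pkt, "parsed": True}
def classify(pkt): return {**pkt, "cls": "ipsec" if pkt["proto"] == 50 else "fwd"}
def forward(pkt): return {**pkt, "done": True}

STAGES = [parse, classify, forward]

def run_to_completion(packets):
    """Each worker carries one packet through every stage."""
    out = []
    for pkt in packets:
        for stage in STAGES:
            pkt = stage(pkt)
        out.append(pkt)
    return out

def pipeline(packets):
    """Each stage processes the whole batch, then hands it on."""
    batch = list(packets)
    for stage in STAGES:
        batch = [stage(p) for p in batch]
    return batch

pkts = [{"proto": 6}, {"proto": 50}]
assert run_to_completion(pkts) == pipeline(pkts)  # same result, different scheduling
```

On real hardware the difference is in cache locality and load balancing rather than results: run-to-completion keeps a packet's state on one core, while a pipeline specializes cores per stage.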

ZTE plans to use the NPS-400 in its new line card designs for carrier-grade router platforms.

“We are pleased that the NPS network processing unit is now available and shipping to communication technology leaders such as ZTE,” said Marc Sultzbaugh, senior vice president, Mellanox Technologies. “Through the acquisition of EZchip we’ve gained tremendous expertise and advanced network processing capabilities. The NPS is a sixth generation network processor and has evolved to provide unmatched performance and flexibility that meets the needs of the most advanced data center and communications customers.”

http://www.mellanox.com


  • Last month, Mellanox completed its acquisition of EZchip at a total purchase price of approximately $811 million (approximately $606 million net of cash).
