
Thursday, November 14, 2019

Mellanox extends Infiniband to 40km

Mellanox Technologies introduced its Quantum LongReach series of long-distance InfiniBand switches for connecting remote InfiniBand data centers together, or to provide high-speed and full RDMA (remote direct memory access) connectivity between remote compute and storage infrastructures.

Based on the 200 gigabit HDR Mellanox Quantum InfiniBand switch, the LongReach solution provides up to two long-reach InfiniBand ports and eight local InfiniBand ports. The long reach ports can deliver up to 100 Gbps data throughput for distances of 10 and 40 kilometers.
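
For a sense of what those distances mean in practice, the dominant added cost is simple fiber propagation delay. The back-of-the-envelope figures below follow only from the speed of light in glass and the 10 km and 40 km reaches quoted above; they are illustrative, not Mellanox latency specifications.

```python
# Rough one-way propagation delay added by a long-reach fiber span.
# Assumes a refractive index of ~1.468 for standard single-mode fiber
# (about 4.9 us per km); this is fiber physics, not a product spec.
C_VACUUM_KM_PER_S = 299_792.458          # speed of light in vacuum, km/s
FIBER_INDEX = 1.468                      # typical single-mode fiber index

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay over a fiber span, in microseconds."""
    return distance_km / (C_VACUUM_KM_PER_S / FIBER_INDEX) * 1e6

for km in (10, 40):
    print(f"{km:>2} km span: ~{one_way_delay_us(km):.0f} us one-way, "
          f"~{2 * one_way_delay_us(km):.0f} us round-trip")
# Prints roughly 49/98 us for 10 km and 196/392 us for 40 km.
```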

Key capabilities

  • Connect remote InfiniBand based data centers together to create a single virtual data center, effectively combining the compute power of multiple distributed data centers for higher overall performance and scalability. With LongReach, users can leverage the In-Network Computing capabilities such as the Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™, congestion control, enhanced telemetry and more between the multiple remote data centers.
  • Seamlessly migrate user applications from one data center to another to support different user demands, to provide load balancing between InfiniBand data centers, or to provide continuous compute services in cases of data-center failures.
  • Enable fast and efficient connectivity between remote compute and storage infrastructures, enabling fast disaster recovery and more.

“The Mellanox Quantum LongReach appliance enables native InfiniBand connectivity between remote InfiniBand-based data centers, or between data center and remote storage infrastructure, allowing users to enjoy native RDMA, In-Network Computing acceleration engines, congestion control and other InfiniBand technology advantages globally,” said Gilad Shainer, senior vice president of marketing at Mellanox Technologies. “Our existing and new customers, who wish to expand their clusters seamlessly over local and distributed InfiniBand networks that are kilometers apart, will find Mellanox Quantum LongReach to be the best cost-effective and easily managed solution.”

The products will be available in the first half of 2020.

https://www.mellanox.com/page/longreach/?ls=pr&lsd=191114-LongReach-1

Wednesday, September 25, 2019

Mellanox adds SONiC to its Spectrum switches

Mellanox Technologies announced ASIC-to-Protocol (A2P) customer support solutions for the SONiC Network Operating System (NOS) on Mellanox Spectrum switches.

SONiC (Software for Open Networking in the Cloud) is a fully open-sourced NOS for Ethernet switches, first created by Microsoft to run Microsoft Azure and now a community project under the Open Compute Project (OCP). SONiC is built on the Switch Abstraction Interface API (SAI) and breaks down traditional monolithic switch software into agile, microservices-based containerized components. This model accelerates innovation within the NOS and the data center by breaking vendor lock-in and simplifying switch programmability, allowing network operators to choose the best-of-breed switching platforms. SONiC offers a full suite of network functionality—like BGP, ECMP, VXLAN, IPv6, and RDMA—that has been deployed and production-hardened in some of the largest data centers in the world.
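
As a rough illustration of that containerized, database-driven model, the sketch below builds a fragment in the style of SONiC's config_db.json, whose tables are consumed by the individual service containers (ports by swss, BGP sessions by the bgp/FRR container, and so on). The table names follow the open-source SONiC conventions, but the port, ASN and address values are placeholders, not taken from this announcement.

```python
import json

# Hypothetical fragment in the style of SONiC's Redis-backed config_db.json.
# Each top-level table is consumed by a containerized service (e.g. swss for
# ports, the bgp/FRR container for neighbors). All names, speeds, ASNs and
# addresses below are illustrative placeholders.
config_fragment = {
    "PORT": {
        "Ethernet0": {"admin_status": "up", "speed": "100000", "mtu": "9100"},
    },
    "VLAN": {
        "Vlan100": {"vlanid": "100"},
    },
    "BGP_NEIGHBOR": {
        "10.0.0.1": {"name": "spine-1", "asn": "65100", "admin_status": "up"},
    },
}

print(json.dumps(config_fragment, indent=2, sort_keys=True))
```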

Mellanox has been a major contributor to SONiC. Mellanox is now adding SONiC support for customers running large deployments of the SONiC NOS on Mellanox SN2000 and SN3000 switches.

“SONiC is an amazingly versatile and scalable NOS for the data center, and Open Ethernet is an incredibly powerful concept,” said Amit Katz, Vice President of Ethernet Switches, Mellanox Technologies. “Every week we hear from more customers who want to combine the power of SONiC with the best-in-class switch silicon in Mellanox Spectrum. Our unique support offering and vast SONiC experience make this easy for new and existing SONiC customers.”

Yousef Khalidi, Corporate Vice President, Azure Networking at Microsoft Corp., said, “SONiC delivers scalable and efficient cloud networking that offers one optimized NOS that runs on a variety of best-of-breed switches. Offering support for SONiC on their switches allows Mellanox to bring the benefits of SONiC to a larger customer segment.”




Wednesday, September 11, 2019

Mellanox on track to ship over 1 million ConnectX Adapters in Q3 2019

Mellanox Technologies announced it is on track to ship over one million ConnectX and BlueField Ethernet network adapters in Q3 2019, a new quarterly record.

The company says growth is driven by public and private clouds, telco operators and enterprise data centers seeking faster compute and storage platforms.

“We are thrilled to see ConnectX and Ethernet SmartNICs exceed the one million parts shipment mark in a single quarter. We expect this number to continue and grow in the coming quarters as more of the market is transitioning to 25 Gb/s Ethernet and faster speeds,” said Eyal Waldman, president and CEO of Mellanox Technologies.

The BlueField-2 IPU integrates all the advanced capabilities of ConnectX-6 Dx with an array of powerful Arm processor cores, high performance memory interfaces, and flexible processing capabilities in a single System-on-Chip (SoC), supporting both Ethernet and InfiniBand connectivity up to 200Gb/s.

Video - Mellanox's Michael Kagan on SmartNICs



Michael Kagan, CTO and co-founder of Mellanox Technologies, talks about the next step for SmartNICs and the company's newly released ConnectX-6 Dx product driven by its own silicon.

Tuesday, September 3, 2019

Mellanox intros Active DACs, QSA56 adapters and 400G DR4 transceivers

Mellanox Technologies announced new 400G DR4 500m transceivers and 400G DAC splitters and 100G SFP-DD DAC cables for server/storage interconnects.

The company is also introducing new 200G “active” DAC cables for HDR InfiniBand and 200GbE Ethernet to extend copper cable reach up to four meters.

Lastly, new QSA56 Port Adapters enable single-channel SFP cables and transceivers to be connected to 200G switch or network adapter ports. QSA56 supports cables and transceivers from 0.5m to 10km.

Mellanox is demonstrating these LinkX products, as well as showcasing its full line of 100/200/400G cables and transceivers, at the China International Optoelectronic Expo (CIOE) September 4th in Shenzhen, China, and at the European Conference on Optical Communication (ECOC) September 21st in Dublin, Ireland.

“We’ve had tremendous adoption of our full line of LinkX 25/50/100G cables and transceivers with web-scale, cloud computing, and OEM customers in China and worldwide,” said Steen Gundersen, vice president of LinkX interconnects, Mellanox Technologies. “We are just at the beginning of the transition to 200G, and 400G will soon follow. Customers select Mellanox because of our expertise in high-speed interconnects, our capacity to ship in volume, and the high quality of our products.”

Monday, August 26, 2019

Mellanox's latest SmartNICs deliver 200G I/O and Security

Mellanox introduced its latest generation ConnectX-6 Dx and BlueField-2 Secure Cloud SmartNICs for data center servers and storage systems.

The ConnectX-6 Dx SmartNICs provide up to two ports of 25, 50 or 100Gbps, or a single port of 200Gbps, Ethernet connectivity powered by 50Gbps PAM4 SerDes technology and PCIe 4.0 host connectivity.

Significantly, the new SmartNICs' hardware offload engines include IPsec and inline TLS data-in-motion cryptography, advanced network virtualization, RDMA over Converged Ethernet (RoCE), and NVMe over Fabrics (NVMe-oF) storage accelerations. ConnectX-6 Dx provides built-in cryptographic acceleration for IPsec, TLS and AES-XTS, along with a Hardware Root of Trust. In addition to these capabilities, BlueField-2 adds accelerated key management, integrated Regular Expression (RegEx) pattern detection, and secure hash computation.
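
To make the data-at-rest piece concrete, the sketch below performs in host software the same AES-XTS block encryption that ConnectX-6 Dx and BlueField-2 offload to NIC hardware. It relies on the third-party Python `cryptography` package purely for illustration, and the key, tweak and block size are arbitrary example values rather than anything specified by Mellanox.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# AES-256-XTS uses a 512-bit key (two 256-bit keys concatenated) and a
# 16-byte tweak, which storage stacks typically derive from the logical
# block number. All values here are arbitrary placeholders.
key = os.urandom(64)
tweak = (1234).to_bytes(16, "little")     # e.g. sector / block number
plaintext = b"A" * 4096                   # one 4 KiB block

encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```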

Mellanox said its BlueField-2 IPU integrates all the advanced capabilities of ConnectX-6 Dx with an array of powerful Arm processor cores, high performance memory interfaces, and flexible processing capabilities in a single System-on-Chip (SoC), supporting both Ethernet and InfiniBand connectivity up to 200Gbps. ConnectX-6 Dx and BlueField-2 also offer built-in SR-IOV, Open vSwitch (OVS), and VirtIO hardware accelerators. Mellanox is also introducing additional network virtualization offloads, enhanced programmability and extreme scale capabilities.

“We are excited to introduce the ConnectX-6 Dx and BlueField-2 architectures, providing groundbreaking acceleration engines for next-generation cloud data centers,” said Yael Shenhav, vice president, Ethernet NIC and SoC at Mellanox. “Built on the success of our award-winning ConnectX and BlueField product families, ConnectX-6 Dx and BlueField-2 set new records in high-performance networking, allowing our customers and partners to build highly secure and efficient compute and storage infrastructures to increase productivity and reduce total cost of ownership.”

“Baidu is an AI cloud giant tasked with delivering results at the speed of thought,” said Liu Ning, director of system department, Baidu. “Therefore, we have partnered with Mellanox, the leader in high-performance networking, whose high-speed connectivity solutions today support Baidu’s machine learning platforms. We look forward to this new release of Mellanox’s programmable cloud SmartNICs and IPUs to deliver best-in-class network performance for accelerating scalable AI-driven applications.”

“IBM’s enterprise server solutions are designed to deliver the best performance for the most demanding workloads, while providing cutting-edge security and reliability,” said Monica Aggarwal, vice president of Cognitive Systems Development at IBM. “We look forward to integrating the new Mellanox SmartNIC family into our product portfolio for building highly efficient secured cloud data centers.”

https://www.mellanox.com/products/bluefield2-overview/

Video - Mellanox's Michael Kagan on SmartNICs



Michael Kagan, CTO and co-founder of Mellanox Technologies, talks about the next step for SmartNICs and the company's newly released ConnectX-6 Dx product driven by its own silicon.

Wednesday, July 24, 2019

Mellanox posts record revenue of $310.3m, up 16% yoy

Mellanox Technologies reported record revenue of $310.3 million in the second quarter, an increase of 15.6 percent, compared to $268.5 million in the second quarter of 2018. GAAP gross margins were 64.5 percent, compared to 61.4 percent in the second quarter of 2018. Non-GAAP net income amounted to $83.9 million in the second quarter, compared to $66.6 million in the second quarter of 2018.

“Mellanox delivered record revenue in Q2, achieving 2 percent sequential growth and 16 percent year-over-year growth. We continue to demonstrate leadership with our Ethernet adapter solutions for data rates of 25 gigabit per second and above. The growth in our Ethernet business reflects strong demand from our cloud customers as well as expanding channel sales. We are pleased that we’ve begun shipping 200 gigabit per second Ethernet adapters, switches, and cables to our data center customers, and expect this to be a future revenue growth driver,” said Eyal Waldman, president and CEO of Mellanox Technologies.

“We continue to see strong demand for our InfiniBand products across the high performance computing, artificial intelligence, cloud, and storage market segments, driven by our highest throughput 200 gigabit HDR InfiniBand solutions. InfiniBand accelerates six of the top ten supercomputers in the world today, including the top three. We are proud that multiple HDR InfiniBand systems have entered the TOP500 supercomputers list, led by the Frontera TACC system, which is the fastest TOP500 supercomputer built in 2019 and premiered at #5 on the list.”

“We are pleased with our financial performance this quarter and the adoption of our latest 25, 50, and 100Gb/s Ethernet and 200Gb/s HDR InfiniBand products,” continued Waldman. “We expect to maintain and grow our leadership in these segments as we expand our footprint for both adapters and switches in the data center.”


  • On March 11, 2019, NVIDIA agreed to acquire all the issued and outstanding common shares of Mellanox for $125 per share in cash. The acquisition is pending.

Monday, July 8, 2019

Mellanox invests in CNEX Labs and Pliops

Mellanox Capital, which is the investment arm of Mellanox Technologies, has made equity investments in storage start-ups CNEX Labs and Pliops, both of which are pushing software defined and intelligent storage to the next level of performance, efficiency, and scalability.

CNEX Labs, which targets high-performance storage semiconductors, has developed Denali/Open-Channel NVMe Flash storage controllers.

Pliops is transforming data center infrastructure with a new class of storage processors that deliver massive scalability and lower the cost of data services.

“Mellanox is committed to enabling customers to harness the power of distributed compute and disaggregated storage to improve the performance and efficiency of analytics and AI applications,” said Nimrod Gindi, senior vice president of mergers and acquisitions and head of investments, Mellanox Technologies. “Optimizing datacenter solutions requires faster, smarter storage connected with faster, smarter networks, and our investments in innovative storage leaders such as CNEX Labs and Pliops will accelerate the deployment of scale-out storage and data-intensive analytics solutions. Our strategic partnerships with these innovative storage mavericks are transforming the ways that customers can bring compute closer to storage to access and monetize the business value of data.”


Tuesday, June 18, 2019

Mellanox supplies 200G InfiniBand for Lenovo’s liquid cooled servers

Mellanox Technologies has begun shipping liquid cooled HDR 200G Multi-Host InfiniBand adapters for the Lenovo ThinkSystem SD650 server platform, which features Lenovo's "Neptune" liquid cooling technologies.

“Our collaboration with Lenovo delivers a scalable and highly energy efficient platform that delivers nearly 90% heat removal efficiency and can reduce data center energy costs by nearly 40%, and takes full advantage of the best-of-breed capabilities from Mellanox InfiniBand, including the Mellanox smart acceleration engines, RDMA, GPUDirect, Multi-Host and more,” said Gilad Shainer, Senior Vice President of Marketing at Mellanox Technologies.

Monday, June 17, 2019

Mellanox cites supercomputing momentum for HDR 200G Infiniband

Mellanox Technologies reports that HDR 200G InfiniBand continues to gain traction with the next generation of supercomputers worldwide thanks to its high data throughput, extremely low latency, and smart In-Network Computing acceleration engines.

Mellanox's HDR 200G InfiniBand solutions include its ConnectX-6 adapters, Mellanox Quantum switches, LinkX cables and transceivers and software packages.

“We are proud to have our HDR InfiniBand solutions accelerate supercomputers around the world, enhance research and discoveries, and advance Exascale programs,” said Gilad Shainer, senior vice president of marketing at Mellanox Technologies. “InfiniBand continues to gain market share and to be selected by many research, educational and government institutes, weather and climate facilities, and commercial organizations. The technology advantages of InfiniBand make it the interconnect of choice for compute and storage infrastructures.”

Examples

  • The Texas Advanced Computing Center’s (TACC) Frontera supercomputer -- ranked #5 on the June 2019 TOP500 Supercomputers list, Frontera utilizes HDR InfiniBand, and in particular multiple 800-port HDR InfiniBand switches.
  • The new HDR InfiniBand-based Orion supercomputer located at the Mississippi State University High Performance Computing Collaboratory -- ranked #62 on the June 2019 TOP500 list, the 1800-node supercomputer leverages the performance advantages of HDR InfiniBand and its application acceleration engines to provide new levels of application performance and scalability.
  • CSC, the Finnish IT Center for Science, and the Finnish Meteorological Institute -- ranked #166 on the TOP500 list.
  • Cygnus -- the first HDR InfiniBand supercomputer in Japan and ranked #264 on the TOP500 list.
  • India's Center for Development of Advanced Computing (C-DAC) 

Monday, May 20, 2019

Mellanox debuts Ethernet Cloud Fabric for 400G

Mellanox Technologies introduced its data center Ethernet Cloud Fabric (ECF) technology based on its second generation, Spectrum-2 silicon, which can deliver up to 16 ports of 400GbE, 32 ports of 200GbE, 64 ports of 100GbE, or 128 ports of 50/25/10/1GbE. 
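
A quick consistency check shows why those port configurations are interchangeable: each one multiplies out to the same aggregate switching capacity. The 6.4 Tbps figure below is derived from the configurations listed above rather than quoted from Mellanox.

```python
# Every Spectrum-2 front-panel option listed above multiplies out to the same
# aggregate capacity, which is why the configurations are interchangeable.
configs = {
    "16 x 400GbE": 16 * 400,
    "32 x 200GbE": 32 * 200,
    "64 x 100GbE": 64 * 100,
    "128 x 50GbE": 128 * 50,   # the 128-port option at its top (50GbE) speed
}
for name, gbps in configs.items():
    print(f"{name:>12}: {gbps:,} Gbps = {gbps / 1000:.1f} Tbps")
# All four work out to 6.4 Tbps of aggregate bandwidth.
```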

Mellanox ECF combines three critical capabilities:

Packet forwarding data plane
  • 8.33 Billion Packets per second – Fastest in its class
  • 42MB Monolithic and fully shared packet buffer to provide high bandwidth and low-latency cut-through performance
  • Robust RoCE Datapath to enable hardware accelerated data movement for Ethernet Storage Fabric and Machine Learning applications
  • Half a million flexible forwarding entries to support large Layer-2 and Layer-3 networks
  • Up to 2 Million routes with external memory to address Internet Peering use cases
  • 128-way ECMP with support for flowlet-based Adaptive Routing (a generic path-selection sketch follows this list)
  • Hardware-based Network Address Translation
  • 500K+ Access Control List entries for micro-segmentation and cloud scale whitelist policies
  • 500K+ VXLAN Tunnels, 10K+ VXLAN VTEPs to provide caveat-free Network Virtualization
Flexible and fully programmable data pipeline
  • Support for VXLAN overlays including single pass VXLAN routing and bridging
  • Centralized VXLAN routing for brown field environments
  • Support for other overlay protocols including EVPN, VXLAN-GPE, MPLS-over-GRE/UDP, NSH, NVGRE, MPLS/IPv6 based Segment routing and more
  • Future-proofing with programmable pipeline that can support new, custom and emerging protocols
  • Hardware optimized stages that accelerate traditional as well as virtualized network functions
  • Advanced modular data plane and integrated container support enables extensibility and flexibility to add customized and application specific capabilities
Open and Actionable telemetry
  • 10X reduction in mean time to resolution by providing a rich set of contextual and actionable Layer 1-4 “What Just Happened” telemetry insights
  • Hardware based packet buffer tracking and data summarization using histograms
  • More than 500K flow tracking counters
  • Open and Extensible platform to facilitate integration and customization with 3rd party and open source visualization tools
  • Support for traditional visibility tools including sFlow, Streaming and In-band telemetry
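
The 128-way ECMP entry above is the easiest of these data-plane functions to sketch in software: a deterministic hash over a packet's 5-tuple selects one of the equal-cost next hops, so all packets of a flow stay on one path. The sketch below is a generic illustration of that idea, not Mellanox's hardware hash, and real ASICs can additionally rebalance per flowlet rather than strictly per flow.

```python
import hashlib
from typing import Tuple

Flow = Tuple[str, str, int, int, str]   # src IP, dst IP, src port, dst port, protocol

def ecmp_next_hop(flow: Flow, num_paths: int = 128) -> int:
    """Pick one of `num_paths` equal-cost next hops from a 5-tuple hash.

    Deterministic: every packet of the same flow lands on the same path.
    Generic illustration only; switch ASICs use their own hash functions.
    """
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

flow = ("10.0.0.1", "10.0.1.1", 49152, 4791, "udp")   # placeholder RoCEv2-style flow
print("flow pinned to path", ecmp_next_hop(flow))
```
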
Mellanox said its Ethernet Cloud Fabric incorporates Ethernet Storage Fabric (ESF) technology that seamlessly allows the network to serve as the ideal scale-out data plane for computing, storage, artificial intelligence, and communications traffic.

“The Spectrum-2 switch ASIC operates at speeds up to 400 Gigabit Ethernet, but goes beyond just raw performance by delivering the most advanced features of any switch in its class without compromising operational ability and simplicity,” said Amir Prescher, senior vice president of end user sales and business development at Mellanox Technologies. “Spectrum-2 enables a new era of Ethernet Cloud Fabrics designed to increase business continuity by delivering the most advanced visibility capabilities to detect and eliminate data center outages. This state-of-the-art visibility technology is combined with fair and predictable performance unmatched in the industry, which guarantees consistent application-level performance and in turn drives predictable business results for our customers. Spectrum-2 is at the heart of a new family of SN3000 switches that come in leaf, spine, and super-spine form factors.”

The Spectrum-2 based SN3000 family of switch systems with ECF technology will be available in Q3 2019.



Tuesday, April 16, 2019

Mellanox delivered record $305 million in revenue in Q1

Mellanox Technologies reported record revenue of $305.2 million in the first quarter, an increase of 21.6 percent compared to $251.0 million in the first quarter of 2018. GAAP gross margins were 64.6 percent in the first quarter, compared to 64.5 percent in the first quarter of 2018.

“Mellanox delivered record revenue in Q1, achieving 5 percent sequential growth and 22 percent year-over-year growth. All of our product lines grew sequentially, showing the benefits of our diversified data center strategy,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Our R&D execution has resulted in differentiated products, while at the same time we have generated operating margin of 14.6% on a GAAP basis and 28.3% on a non-GAAP basis. Additionally, we increased cash and short-term investments by $114 million during the quarter.”

“Across InfiniBand and Ethernet product lines, our innovations are driving continued market leadership. Our 200 gigabit HDR InfiniBand solutions are enabling the world’s fastest supercomputers and driving our overall InfiniBand growth. During Q1, HDR InfiniBand connected tens-of-thousands of compute and storage end-points across supercomputing, hyperscale, and cloud data centers around the globe to achieve breakthrough performance. Our Ethernet solutions continue to penetrate the market for both adapters and switches. Our market leadership in 25 gigabit per second Ethernet solutions is well established, and our 100 gigabit per second solutions are the fastest growing portion of our Ethernet adapter product line. We are also encouraged by the adoption of our BlueField System-on-a-Chip and SmartNIC technology. With further innovations to come, Mellanox is well-positioned to continue its growth trajectory,” Mr. Waldman concluded.

Highlights

  • Non-GAAP gross margins of 68.0 percent in the first quarter, compared to 69.0 percent in the first quarter of 2018.
  • GAAP operating income of $44.7 million in the first quarter, compared to $12.0 million in the first quarter of 2018.
  • Non-GAAP operating income of $86.3 million in the first quarter, or 28.3 percent of revenue, compared to $52.1 million, or 20.8 percent of revenue in the first quarter of 2018.
  • GAAP net income of $48.6 million in the first quarter, compared to $37.8 million in the first quarter of 2018.
  • Non-GAAP net income of $86.5 million in the first quarter, compared to $51.4 million in the first quarter of 2018.
  • GAAP net income per diluted share of $0.87 in the first quarter, compared to $0.71 in the first quarter of 2018.
  • Non-GAAP net income per diluted share of $1.59 in the first quarter, compared to $0.98 in the first quarter of 2018.


Monday, March 11, 2019

With Mellanox, NVIDIA targets full compute/network/storage stack

NVIDIA agreed to acquire Mellanox in a deal valued at approximately $6.9 billion.

The merger targets data centers in general and the high-performance computing (HPC) market in particular. Together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker. Mellanox pioneered the InfiniBand interconnect technology, which along with its high-speed Ethernet products is now used in over half of the world’s fastest supercomputers and in many leading hyperscale datacenters.

NVIDIA said the acquired assets will enable it to optimize data center-scale workloads across the entire computing, networking and storage stack, achieving higher performance, greater utilization and lower operating costs for customers.

“The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters,” said Jensen Huang, founder and CEO of NVIDIA. “Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine.”

“We share the same vision for accelerated computing as NVIDIA,” said Eyal Waldman, founder and CEO of Mellanox. “Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people.”

NVIDIA also promised to continue investing in Israel, where Mellanox is based.

The companies expect to close the deal by the end of 2019.




Tuesday, January 22, 2019

Mellanox supplies 200 Gigabit HDR InfiniBand to Finnish IT Center for Science

Mellanox Technologies will supply its 200 Gigabit HDR InfiniBand solutions to accelerate a multi-phase supercomputer system being built by CSC – the Finnish IT Center for Science. The new supercomputers, set to be deployed in 2019 and 2020, will serve Finnish researchers in universities and research institutes, supporting climate, renewable energy, astrophysics, nanomaterials and bioscience work, among a wide range of exploration activities. The Finnish Meteorological Institute (FMI) will have its own separate partition for diverse simulation tasks ranging from ocean fluxes to atmospheric modeling and space physics.

Mellanox said its HDR InfiniBand interconnect solution was selected for its fast data throughput, extremely low latency, smart In-Network Computing acceleration engines, and enhanced Dragonfly network topology.

Monday, January 7, 2019

Mellanox supplies 200 Gigabit HDR InfiniBand for supercomputing

Mellanox Technologies is supplying its 200 Gigabit HDR InfiniBand to accelerate a world-leading supercomputer at the High-Performance Computing Center Stuttgart (HLRS) of the University of Stuttgart. The 5,000-node supercomputer, named “Hawk”, will be built in 2019 and will provide 24 petaFLOPS of compute performance.

The mission of the HLRS Hawk supercomputer is to advance engineering development and research in the fields of energy, climate, health and more, and if built today, the new system would be the world's fastest supercomputer for industrial production.

Mellanox said its Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology enables the execution of data algorithms on the data as it is being transferred within the network, providing the highest application performance and scalability.
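
Conceptually, SHARP replaces a host-side reduction tree with one that runs inside the switches. The sketch below shows the host-side equivalent of such an aggregation tree; the node count, values and 40-way radix are illustrative assumptions (40 matching the port count of an HDR Quantum switch), and the real protocol executes in the switch ASICs rather than in Python.

```python
from functools import reduce
from operator import add
from typing import List, Sequence

def tree_reduce(values: Sequence[float], radix: int = 40) -> float:
    """Sum values level by level, the way an aggregation tree would.

    Each 'switch' level combines the partial sums of up to `radix` children,
    so the number of reduction steps grows logarithmically with node count.
    Conceptual sketch only; SHARP performs this inside the switch fabric.
    """
    level: List[float] = list(values)
    while len(level) > 1:
        level = [sum(level[i:i + radix]) for i in range(0, len(level), radix)]
    return level[0]

node_contributions = [1.0] * 5000         # e.g. one partial result per node
assert tree_reduce(node_contributions) == reduce(add, node_contributions)
print("reduced value:", tree_reduce(node_contributions))
```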

“HDR InfiniBand delivers the best performance and scalability for HPC and AI applications, providing our users with the capabilities to enhance research, discoveries and product development,” said Gilad Shainer, vice president of marketing at Mellanox Technologies.

Friday, December 7, 2018

Mellanox supplies RDMA over Ethernet 25 Gbps adapters to Alibaba

Mellanox Technologies confirmed that it is now shipping its RDMA over Ethernet (RoCE) 25Gbps ConnectX network adapters for deployment in Alibaba Infrastructure Services’ production network.

RDMA technology provides Remote Direct Memory Access from the memory of one host to the memory of another host without involving the operating system and CPU, therefore boosting network and host performance with low latency, low CPU load and high bandwidth.
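
As a point of comparison, the sketch below measures the CPU time consumed by an ordinary kernel TCP transfer over loopback, which is the copy-through-the-kernel work that RDMA removes from the host. It is only a software baseline using the Python standard library; exercising real RoCE verbs requires RDMA-capable NICs and the rdma-core user-space libraries, so no Mellanox-specific API is shown.

```python
import socket
import threading
import time

# Plain kernel TCP over loopback: every byte is copied through the socket
# layer by the CPU -- the work an RDMA-capable NIC performs in hardware.
PAYLOAD = b"x" * (1 << 20)        # 1 MiB per send
ITERATIONS = 256                  # ~256 MiB total

def sink(server: socket.socket) -> None:
    conn, _ = server.accept()
    with conn:
        while conn.recv(1 << 20):
            pass

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=sink, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(server.getsockname())
start = time.process_time()       # CPU time across the whole process
for _ in range(ITERATIONS):
    client.sendall(PAYLOAD)
client.close()
print(f"kernel TCP path burned ~{time.process_time() - start:.2f} s of CPU "
      f"moving {ITERATIONS} MiB over loopback")
```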

Mellanox cites the following advantages for its ConnectX network adapters:

  • Sub 1us point-to-point latency
  • Close to zero CPU utilization at full wire speed
  • Scalability to thousands of nodes
  • Outstanding performance on all types of fabrics – from lossless to lossy
  • Ease of deployment through automation

“High performance network transport technology is critical for Alibaba to achieve the throughput and latency required by our services. We are excited to collaborate with Mellanox to deploy its RoCE technology into our infrastructure,” said Dennis Cai, Chief Architect of Network Infrastructure under Alibaba Infrastructure Services.

“Mellanox has pioneered RoCE technology and is now shipping its 7th generation of RoCE capable ConnectX network adapters,” said Amir Prescher, senior vice president of business development at Mellanox Technologies. “Alibaba’s successful large-scale deployment of ConnectX RoCE adapters confirms again that RoCE is a proven technology to accelerate the most demanding workloads in a cost-effective manner. We are thrilled to work with Alibaba to achieve this.”

Monday, November 12, 2018

SC18: Mellanox connects 53% of overall TOP500 systems

Mellanox Technologies' InfiniBand and Ethernet solutions connect 53% of overall TOP500 platforms or 265 systems, demonstrating 38% growth within 12 months (Nov’17-Nov’18). Furthermore, InfiniBand accelerates the top three supercomputers on the TOP500 list: the fastest High-Performance Computing (HPC) and Artificial Intelligence (AI) supercomputer in the world deployed at the Oak Ridge National Laboratory, the second fastest supercomputer in the US deployed at the Lawrence Livermore National Laboratory, and the fastest supercomputer in China (ranked third).

“Mellanox InfiniBand and Ethernet solutions now connect the majority of systems on the TOP500 list, an increase of 38 percent over the last twelve-month period. InfiniBand In-Network Computing acceleration engines provide the highest performance and scalability for HPC and AI applications, and accelerate the top three supercomputers in the world. InfiniBand enables record performance in HPC and AI, enabling the advancement of academic and scientific research which is reshaping our world. We continue to win new opportunities and are proud to have deployed the first HDR InfiniBand supercomputer at the University of Michigan. We expect to see more HDR InfiniBand connected platforms this year,” said Eyal Waldman, president and CEO of Mellanox Technologies.

The TOP500 list has evolved in recent years to include more hyperscale, cloud, and enterprise platforms in addition to high-performance computing and machine learning systems. Nearly half of the systems on the November 2018 list can be categorized as non-HPC application platforms; most of these represent US, Chinese and other hyperscale infrastructures and are interconnected with Ethernet. Mellanox Ethernet solutions connect 130 systems, or 51% of the Ethernet-connected systems on the list.

Thursday, October 25, 2018

Mellanox hits record revenue of $279.2 million, up 24%

Mellanox Technologies reported record revenue of $279.2 million for Q3 2018, an increase of 23.7 percent compared to $225.7 million in the third quarter of 2017. GAAP gross margins were 65.8 percent in the third quarter, compared to 65.7 percent in the third quarter of 2017. GAAP net income was $37.1 million in the third quarter, compared to $3.4 million in the third quarter of 2017. Non-GAAP net income was $71.4 million in the third quarter, compared to $36.6 million in the third quarter of 2017.

“Mellanox continues to execute and gain momentum in the markets we participate in. We reported another record quarter in Q3, delivering 24% revenue growth and 90% non-GAAP operating income growth year-over-year. This resulted in a non-GAAP operating margin of 26.2%," said Eyal Waldman, President and CEO of Mellanox Technologies. "Our strong results reflect the differentiated and superior product technologies that Mellanox has to offer for data center infrastructure.”

“The innovations built into our high-speed Ethernet adapters, switches and cables are fueling demand for our Ethernet products. Leading hyperscale, cloud, enterprise data center and artificial intelligence customers continue to choose Mellanox to maximize the efficiency and utilization of their compute and storage investments. This has resulted in further market share gains across our high-speed Ethernet products and 59% year-over-year revenue growth in our Ethernet business."

Mellanox also announced that it has shipped more than 2.1 million Ethernet adapters during the first nine months of 2018.

The company said this milestone signals that high-performance Ethernet technology (25G and faster) has moved beyond the Super 7 cloud and web titans. The adoption of high-performance Ethernet technology has spread to enterprise data centers globally, including the next wave of cloud, telco/service providers, financial services and more.

Thursday, October 18, 2018

NTT ICT upgrades data centers with Mellanox 25G and 100G

NTT Communications ICT Solutions (NTT ICT) has selected Mellanox Technologies' 25G and 100G Ethernet to accelerate their multi-cloud data centers.

The upgrade includes: Spectrum-based switches running Cumulus Linux, ConnectX adapters, and LinkX cables and transceivers.

NTT ICT is a premium global IT provider delivering solutions to Australian enterprise and government clients.

Monday, September 17, 2018

Singapore's National Supercomputing Centre picks Mellanox

Singapore's National Supercomputing Centre (NSCC) has selected Mellanox 100 Gigabit Ethernet Spectrum-based switches, ConnectX adapters, cables and modules for its network.

"We are excited to collaborate with NSCC to interconnect the Singapore's research and educational facilities in the most efficient and scalable way," said Gilad Shainer, Vice President of Marketing at Mellanox Technologies. "The combination of our Ethernet RoCE technology, Spectrum switches, MetroX WDM long-haul switch, cables and software provide the highest data throughput, enabling users to be at the forefront of research and scientific discovery."

Mellanox ConnectX-5 with Virtual Protocol Interconnect supports two ports of InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and very high message rate, plus embedded PCIe switch and NVMe over Fabric offloads. It enables higher HPC performance with new Message Passing Interface (MPI) offloads, advanced dynamic routing, and new capabilities to perform various data algorithms.

Mellanox Spectrum, the eighth generation of Mellanox's switching IC family, delivers leading Ethernet performance, efficiency, throughput, low latency and scalability for data center Ethernet networks by integrating advanced networking functionality for Ethernet fabrics. Hyperscale, cloud, data-intensive, virtualized data center and storage environments drive the need for interconnect performance and throughput beyond 10 and 40GbE. Spectrum's flexibility enables solution companies to build any Ethernet switch system at speeds of 10, 25, 40, 50 and 100G, with leading port density, low latency, zero packet loss, and non-blocking traffic.

Mellanox's MetroX long-haul systems provide RDMA connectivity between data centers deployed across multiple geographically distributed sites, extending Mellanox's world-leading interconnect benefits beyond local data centers and storage clusters.
