
Monday, April 27, 2020

NVIDIA acquires Mellanox - focus on Next Gen Data Centers

NVIDIA completed its $7 billion acquisition of Mellanox Technologies. The deal was originally announced on March 11, 2019.

NVIDIA says that by combining its computing expertise with Mellanox’s high-performance networking technology, data center customers will achieve higher performance, greater utilization of computing resources and lower operating costs.

“The expanding use of AI and data science is reshaping computing and data center architectures,” said Jensen Huang, founder and CEO of NVIDIA. “With Mellanox, the new NVIDIA has end-to-end technologies from AI computing to networking, full-stack offerings from processors to software, and significant scale to advance next-generation data centers. Our combined expertise, supported by a rich ecosystem of partners, will meet the challenge of surging global demand for consumer internet services, and the application of AI and accelerated data science from cloud to edge to robotics.”

Eyal Waldman, founder and CEO of Mellanox, said: “This is a powerful, complementary combination of cultures, technology and ambitions. Our people are enormously enthusiastic about the many opportunities ahead. As Mellanox steps into the next exciting phase of its journey, we will continue to offer cutting-edge solutions and innovative products to our customers and partners. We look forward to bringing NVIDIA products and solutions into our markets, and to bringing Mellanox products and solutions into NVIDIA’s markets. Together, our technologies will provide leading solutions into compute and storage platforms wherever they are required.”

The acquisition is expected to be immediately accretive to NVIDIA’s non-GAAP gross margin, non-GAAP EPS and free cash flow, inclusive of incremental interest expense related to NVIDIA’s recent issuance of $5 billion of notes.




NVIDIA cites increasing GPU demand from data centers and gaming

NVIDIA reported fourth-quarter fiscal 2020 revenue of $3.11 billion, up 41 percent from $2.21 billion a year earlier, and up 3 percent from $3.01 billion in the previous quarter.

GAAP earnings per diluted share for the quarter were $1.53, up 66 percent from $0.92 a year ago, and up 6 percent from $1.45 in the previous quarter. Non-GAAP earnings per diluted share were $1.89, up 136 percent from $0.80 a year earlier, and up 6 percent from $1.78 in the previous quarter.

For fiscal 2020, revenue was $10.92 billion, down 7 percent from $11.72 billion a year earlier. GAAP earnings per diluted share were $4.52, down 32 percent from $6.63 a year earlier. Non-GAAP earnings per diluted share were $5.79, down 13 percent from $6.64 a year earlier.

“Adoption of NVIDIA accelerated computing drove excellent results, with record data center revenue,” said Jensen Huang, founder and CEO of NVIDIA. “Our initiatives are achieving great success.

“NVIDIA RTX ray tracing is reinventing computer graphics, driving powerful adoption across gaming, VR and design markets, while opening new opportunities in rendering and cloud gaming. NVIDIA AI is enabling breakthroughs in language understanding, conversational AI and recommendation engines ― the core algorithms that power the internet today. And new NVIDIA computing applications in 5G, genomics, robotics and autonomous vehicles enable us to continue important work that has great impact."


Thursday, April 23, 2020

Mellanox hits revenue of $429 million, up 40% yoy

Mellanox Technologies reported Q1 2020 revenue of $428.7 million, an increase of 40.5%, compared to $305.2 million in the first quarter of 2019.
GAAP gross margins were 66.8%, compared to 64.6% in the first quarter of 2019.

“Mellanox delivered record revenue and operating income in the first quarter of 2020. All our major product lines continued to grow. We are pleased to be shipping end-to-end solutions at speeds of 200 gigabits per second (Gbps) for both InfiniBand and Ethernet. In addition, we are shipping 400 Gbps Ethernet switches,” said Eyal Waldman, President and CEO of Mellanox Technologies.

“Sales of Ethernet adapter products increased 112% year-over-year. We expect our new ConnectX-6 Dx adapters and BlueField-2 I/O Processing Units (IPUs), the latest additions to our industry-leading family of Smart NICs, to bring unprecedented security and co-processing capabilities to enterprise and cloud data centers. These capabilities will be further strengthened by our recent acquisition of Titan IC, the leading developer of network intelligence and security technology to accelerate search and big data analytics across a broad range of applications in data centers worldwide. The product line revenue of our Spectrum ASIC based Ethernet switch business grew 66% year-over-year. We recently began shipping Spectrum-3 based switches, the world’s first 12.8 Tbps networking platforms optimized for cloud, storage, and artificial intelligence,” continued Waldman.

“We are experiencing very strong adoption of InfiniBand for hyperscale artificial intelligence and cloud environments, resulting in tens of thousands of compute nodes connected with InfiniBand, which demonstrates the superior performance and scalability of InfiniBand. We saw 27% year-over-year growth in InfiniBand, led by strong demand for our HDR 200 gigabit solutions. HDR InfiniBand has been selected to interconnect national Exascale programs, large scale artificial intelligence and cloud platforms, and enterprise compute and storage infrastructures. We are proud that our InfiniBand technology is being utilized by many of the supercomputers in the COVID-19 High-Performance Computing Consortium, which is helping to aggregate computing capabilities for researchers to execute complex computations to help fight the novel coronavirus,” continued Waldman. “We are excited to participate in such important global initiatives through the adoption of our industry-leading adapters, switches, cables, and software, while also delivering strong financial performance for the first quarter of 2020.”

Thursday, March 12, 2020

Mellanox ships its 12.8 Tbps Ethernet switch

Mellanox Technologies announced the first customer shipments of its new 12.8 Tbps Ethernet switch platform, which is optimized for Cloud, Ethernet Storage Fabric, and AI interconnect applications.

The Mellanox SN4000 family, which is powered by its Spectrum-3 ASIC, supports a combination of up to 32 ports of 400GbE, 64 ports of 200GbE and 128 ports of 100/50/25/10GbE. The SN4000 platforms complement the 200/400GbE SN3000 leaf switches to form an efficient and high bandwidth leaf/spine network.
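
All of the listed port configurations resolve to the same 12.8 Tbps of Spectrum-3 switching capacity, which is why they can be mixed and matched on a single platform. A quick Python sanity check of that arithmetic, using only the figures quoted in this article:

    # Each SN4000 port configuration saturates the same 12.8 Tbps ASIC,
    # which is why ports can be combined freely (figures from the article).
    CAPACITY_GBPS = 12_800

    configs = {"32 x 400GbE": 32 * 400,
               "64 x 200GbE": 64 * 200,
               "128 x 100GbE": 128 * 100}

    for name, total_gbps in configs.items():
        print(f"{name}: {total_gbps} Gbps = {total_gbps / CAPACITY_GBPS:.0%} of capacity")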

Mellanox said Spectrum-3 offers advanced tunneling and network virtualization capabilities through its FlexFlow packet processing technology, along with WJH (What Just Happened)™-based real-time telemetry.

Highlights

  • Up to 128 ports of 100GbE, 64 ports of 200GbE or 32 ports of 400GbE
  • Up to 200,000 NAT entries, and 1 Million on-chip routes
  • Fully shared packet buffer to maximize burst absorption and deliver fair bandwidth sharing
  • RoCE-Ready one-click configuration with hardware-accelerated, end-to-end congestion management to simplify networking for storage, AI, and big data workloads
  • FlexFlow™ programmable pipeline which delivers rich network processing capabilities at an unprecedented scale
  • WJH™ based granular telemetry to simplify network operations and dramatically reduce mean time to issue resolution
  • Simultaneous NRZ and PAM4 port speeds allowing flexible configurations
  • Dual-stack IPv4 and IPv6 protocol operation
  • Support for overlay protocols including EVPN, VXLAN-GPE, MPLS-over-GRE/UDP, NSH, NVGRE, MPLS/IPv6 based Segment routing and more
  • Flowlet-based adaptive routing maximizes performance and network utilization for layer-3 fabrics with high cross-sectional bandwidth (see the sketch following this list)
  • Support for customer-defined, on-switch, containerized microservices with complete SDK access to host management, orchestration, and telemetry applications.
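
To make the flowlet item above concrete: a flow is re-hashed onto a new (for example, least-loaded) path only when the gap since its previous packet exceeds a timeout, so packets within a burst stay in order while long-lived flows can still migrate off busy links. A minimal Python sketch; the timeout value and the load metric are illustrative assumptions, not published Mellanox behavior:

    # Toy flowlet-based adaptive routing: re-pick a path only when the
    # inter-packet gap exceeds FLOWLET_TIMEOUT, so in-flight bursts keep
    # their ordering. Timeout and load metric are assumed for illustration.
    FLOWLET_TIMEOUT = 50e-6            # seconds; not a product value
    path_load = [0.0, 0.0, 0.0, 0.0]   # bytes sent per candidate next-hop
    flow_state = {}                    # 5-tuple -> (path index, last seen)

    def route(flow, size, now):
        path, last_seen = flow_state.get(flow, (None, float("-inf")))
        if path is None or now - last_seen > FLOWLET_TIMEOUT:
            path = min(range(len(path_load)), key=path_load.__getitem__)
        path_load[path] += size
        flow_state[flow] = (path, now)
        return path

    flow = ("10.0.0.1", "10.0.0.2", 49152, 80, "tcp")
    print([route(flow, 1500, t * 1e-6) for t in range(3)])  # burst: same path
    print(route(flow, 1500, 1.0))  # after a long idle gap the flow may move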


“Mellanox Spectrum-3 offers better performance, more advanced features, and easier management than any other 12.8 terabit switch,” said Amit Katz, vice president of Ethernet switches at Mellanox. “Our VXLAN support features single-pass routing for more than 500,000 tunnels, making Mellanox Spectrum-3 the best switch not only for cloud data centers, but for any networking deployment supporting virtualization, containers, or microservices.”

Monday, February 24, 2020

Mellanox ships ConnectX-6 Dx SmartNICs

Mellanox Technologies has begun shipping its ConnectX-6 Dx SmartNICs, with the soon-to-be-released BlueField-2 I/O Processing Units (IPUs) to follow.

The ConnectX-6 Dx SmartNICs provide up to two ports of 25, 50 or 100Gbps, or a single port of 200Gbps, Ethernet connectivity powered by 50Gbps PAM4 SerDes technology and PCIe 4.0 host connectivity.

Significantly, the new SmartNICs' hardware offload engines include IPsec and inline TLS data-in-motion cryptography, advanced network virtualization, RDMA over Converged Ethernet (RoCE), and NVMe over Fabrics (NVMe-oF) storage accelerations. ConnectX-6 Dx provides IPsec, TLS, and AES-XTS built-in cryptographic acceleration, and Hardware Root of Trust. In addition to these capabilities, BlueField-2 adds accelerated key management, integrated Regular Expression (RegEx) pattern detection, secure hash computation, and more.
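
For context on the inline TLS engine: on Linux, applications typically reach this class of offload through kernel TLS (kTLS), which hands record encryption to the kernel and, where the driver supports it, to the NIC itself. A minimal sketch, assuming Python 3.12+ (which added ssl.OP_ENABLE_KTLS) and a kernel built with TLS support; whether the crypto actually lands in NIC hardware depends on the kernel and driver, and the host name is just a placeholder:

    import socket
    import ssl

    # Request kernel TLS: after the handshake, record crypto moves out of
    # userspace, and drivers for NICs with inline TLS engines can push it
    # into hardware. Falls back to ordinary userspace TLS when kTLS is
    # unavailable. example.com is a placeholder endpoint.
    ctx = ssl.create_default_context()
    ctx.options |= ssl.OP_ENABLE_KTLS  # requires Python 3.12+

    with socket.create_connection(("example.com", 443)) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls:
            tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            print(tls.recv(120))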

“Networking and security must converge to achieve consistent and predictable application performance, with all the necessary levels of data privacy, integrity and reliability. This vision is the core foundation on which we designed our ConnectX-6 Dx SmartNIC and BlueField-2 IPU products,” said Amit Krig, senior vice president, Ethernet NIC and IPU Product Line at Mellanox Technologies. “Today we are excited to ship production qualified ConnectX-6 Dx SmartNICs to our hyperscale customers, turning our vision into a reality.”

As an IPU, BlueField-2 provides even more in-hardware security capabilities, including agentless micro-segmentation, advanced malware detection, deep packet inspection and application recognition, that far outperform software-only solutions. Mellanox BlueField IPUs enable the best of both worlds – the speed and flexibility of software-defined solutions, with tighter security, accelerated performance and improved efficiency by processing data in the device hardware at the I/O path.



Michael Kagan, CTO and co-founder of Mellanox Technologies, talks about the next step for SmartNICs and the company's newly released ConnectX-6 Dx product driven by its own silicon.


Monday, January 13, 2020

Mellanox supplies 200G HDR InfiniBand to ECMWF

Mellanox Technologies will supply its 200 Gigabit HDR InfiniBand to the European Centre for Medium-Range Weather Forecasts (ECMWF) to accelerate their new world-leading supercomputer, which is based on Atos’ latest BullSequana XH2000 technology.

ECMWF's new supercomputer will be one of the world’s most powerful meteorological supercomputers, supporting weather forecasting and prediction researchers from over 30 countries across Europe. The new platform, utilizing HDR InfiniBand, will enable ECMWF to run probabilistic weather forecasts at nearly twice the current resolution in under an hour, improving the ability to monitor and predict increasingly severe weather phenomena and enabling European countries to take proactive precautions to protect lives and property.

“We are proud to have our 200 Gigabit HDR InfiniBand solutions accelerate one of the most powerful meteorological services supercomputers in the world, at the European Centre for Medium-Range Weather Forecasts,” said Gilad Shainer, senior vice president of marketing at Mellanox Technologies. “Climate and weather simulations are compute and data intensive, and require the most advanced interconnect technology to ensure fast and accurate results. HDR InfiniBand includes multiple data acceleration and analysis engines, making it the leading technology for such applications. We look forward to continuing work with ECMWF and Atos, to develop the supercomputing capabilities needed for even more accurate and complex simulations in the future.”

Thursday, November 14, 2019

Mellanox extends InfiniBand to 40km

Mellanox Technologies introduced its Quantum LongReach series of long-distance InfiniBand switches for connecting remote InfiniBand data centers together, or to provide high-speed and full RDMA (remote direct memory access) connectivity between remote compute and storage infrastructures.

Based on the 200 gigabit HDR Mellanox Quantum InfiniBand switch, the LongReach solution provides up to two long-reach InfiniBand ports and eight local InfiniBand ports. The long reach ports can deliver up to 100 Gbps data throughput for distances of 10 and 40 kilometers.
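
Why these distances call for a purpose-built switch: a lossless fabric must buffer, and grant link-level flow-control credits for, a full round trip of in-flight data. A rough Python calculation, assuming roughly 5 microseconds per kilometer of propagation in fiber:

    # In-flight data on a long-reach 100 Gbps link. A lossless fabric needs
    # buffering/credits covering the full round trip, which is beyond what
    # ordinary data-center switch ports are provisioned for.
    RATE_BPS = 100e9
    US_PER_KM = 5  # assumed fiber propagation delay (refractive index ~1.5)

    for km in (10, 40):
        rtt_s = 2 * km * US_PER_KM * 1e-6
        bdp_bytes = RATE_BPS * rtt_s / 8
        print(f"{km:>2} km: RTT {rtt_s * 1e6:.0f} us, "
              f"{bdp_bytes / 1e6:.2f} MB in flight at line rate")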

Key capabilities

  • Connect remote InfiniBand based data centers together to create a single virtual data center, effectively combining the compute power of multiple distributed data centers for higher overall performance and scalability. With LongReach, users can leverage the In-Network Computing capabilities such as the Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™, congestion control, enhanced telemetry and more between the multiple remote data centers.
  • Seamlessly migrate user applications from one data center to another to support different user demands, to provide load balancing between InfiniBand data centers, or to provide continuous compute services in cases of data-center failures.
  • Enable fast and efficient connectivity between remote compute and storage infrastructures, enabling fast disaster recovery and more.

“The Mellanox Quantum LongReach appliance enables native InfiniBand connectivity between remote InfiniBand-based data centers, or between data center and remote storage infrastructure, allowing users to enjoy native RDMA, In-Network Computing acceleration engines, congestion control and other InfiniBand technology advantages globally,” said Gilad Shainer, senior vice president of marketing at Mellanox Technologies. “Our existing and new customers, who wish to expand their clusters seamlessly over local and distributed InfiniBand networks that are kilometers apart, will find Mellanox Quantum LongReach to be the best cost-effective and easily managed solution.”

The products will be available in the first half of 2020.

https://www.mellanox.com/page/longreach/?ls=pr&lsd=191114-LongReach-1

Wednesday, September 25, 2019

Mellanox adds SONiC to its Spectrum switches

Mellanox Technologies announced ASIC-to-Protocol (A2P) customer support solutions for the SONiC Network Operating System (NOS) on Mellanox Spectrum switches.

SONiC (Software for Open Networking in the Cloud) is a fully open-sourced NOS for Ethernet switches, first created by Microsoft to run Microsoft Azure and now a community project under the Open Compute Project (OCP). SONiC is built on the Switch Abstraction Interface API (SAI) and breaks down traditional monolithic switch software into agile, microservices-based containerized components. This model accelerates innovation within the NOS and the data center by breaking vendor lock-in and simplifying switch programmability, allowing network operators to choose the best-of-breed switching platforms. SONiC offers a full suite of network functionality—like BGP, ECMP, VXLAN, IPv6, and RDMA—that has been deployed and production-hardened in some of the largest data centers in the world.
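
SONiC's configuration model reflects that containerized design: switch state lives in a Redis-backed CONFIG_DB, typically seeded from /etc/sonic/config_db.json at boot. The Python fragment below emits an illustrative file of that shape; the table and field names follow common SONiC examples and are not specific to Mellanox platforms or to this announcement:

    import json

    # Illustrative config_db.json fragment (table/field names follow common
    # SONiC examples; exact content varies by platform and release).
    config = {
        "PORT": {"Ethernet0": {"lanes": "0,1,2,3", "speed": "100000",
                               "admin_status": "up"}},
        "VLAN": {"Vlan100": {"vlanid": "100"}},
        "VLAN_MEMBER": {"Vlan100|Ethernet0": {"tagging_mode": "untagged"}},
        "BGP_NEIGHBOR": {"10.0.0.1": {"asn": "65100", "name": "spine1"}},
    }
    print(json.dumps(config, indent=2))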

Mellanox has been a major contributor to SONiC and is now adding dedicated support for customers running large deployments of the SONiC NOS on its SN2000 and SN3000 switches.

“SONiC is an amazingly versatile and scalable NOS for the data center, and Open Ethernet is an incredibly powerful concept,” said Amit Katz, Vice President of Ethernet Switches, Mellanox Technologies. “Every week we hear from more customers who want to combine the power of SONiC with the best-in-class switch silicon in Mellanox Spectrum. Our unique support offering and vast SONiC experience make this easy for new and existing SONiC customers.”

Yousef Khalidi, Corporate Vice President, Azure Networking at Microsoft Corp. said, “SONiC delivers scalable and efficient cloud networking that offers one optimized NOS that runs on a variety of best-of-breed switches. Offering support for SONiC on their switches allows Mellanox to bring the benefits of SONiC to a larger customer segment.”




Wednesday, September 11, 2019

Mellanox on track to ship over 1 million ConnectX Adapters in Q3 2019

Mellanox Technologies announced it is on track to ship over one million ConnectX and BlueField Ethernet network adapters in Q3 2019, a new quarterly record.

The company says growth is driven by public and private clouds, telco operators and enterprise data centers seeking faster compute and storage platforms.

“We are thrilled to see ConnectX and Ethernet SmartNICs exceed the one million parts shipment mark in a single quarter. We expect this number to continue to grow in the coming quarters as more of the market transitions to 25 Gb/s Ethernet and faster speeds,” said Eyal Waldman, president and CEO of Mellanox Technologies.

The BlueField-2 IPU integrates all the advanced capabilities of ConnectX-6 Dx with an array of powerful Arm processor cores, high performance memory interfaces, and flexible processing capabilities in a single System-on-Chip (SoC), supporting both Ethernet and InfiniBand connectivity up to 200Gb/s.


Tuesday, September 3, 2019

Mellanox intros Active DACs, QSA56 adapters and 400G DR4 transceivers

Mellanox Technologies announced new 400G DR4 500m transceivers, 400G DAC splitters, and 100G SFP-DD DAC cables for server/storage interconnects.

The company is also introducing new 200G “active” DAC cables for HDR InfiniBand and 200GbE Ethernet to extend copper cable reach up to four meters.

Lastly, new QSA56 Port Adapters enable single-channel SFP cables and transceivers to be connected to 200G switch or network adapter ports. QSA56 supports cables and transceivers from 0.5m to 10km.

Mellanox is demonstrating these LinkX products, as well as showcasing its full line of 100/200/400G cables and transceivers, at the China International Optoelectronic Expo (CIOE), September 4th in Shenzhen, China, and at the European Conference on Optical Communication (ECOC), September 21st in Dublin, Ireland.

“We’ve had tremendous adoption of our full line of LinkX 25/50/100G cables and transceivers with web-scale, cloud computing, and OEM customers in China and worldwide,” said Steen Gundersen, vice president of LinkX interconnects, Mellanox Technologies. “We are just at the beginning of the transition to 200G, and 400G will soon follow. Customers select Mellanox because of our expertise in high-speed interconnects, our capacity to ship in volume, and the high quality of our products.”

Monday, August 26, 2019

Mellanox's latest SmartNICs deliver 200G I/O and Security

Mellanox introduced its latest generation ConnectX-6 Dx and BlueField-2 Secure Cloud SmartNICs for data center servers and storage systems.

The ConnectX-6 Dx SmartNICs provide up to two ports of 25, 50 or 100Gbps, or a single port of 200Gbps, Ethernet connectivity powered by 50Gbps PAM4 SerDes technology and PCIe 4.0 host connectivity.
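
The port speeds follow from the 50 Gbps PAM4 lane rate: PAM4 carries two bits per symbol, and a port gangs one to four serdes lanes together. A rough sketch of the arithmetic; the lane mappings are the conventional ones and FEC/encoding overhead is ignored, so treat the numbers as illustrative:

    # Port speed ~= lanes x 50 Gbps PAM4 (2 bits/symbol, ~25 GBd per lane).
    # Lane mappings are the conventional ones, stated as an assumption here;
    # 25GbE runs its lane at legacy NRZ rates.
    LANE_GBPS = 50
    lane_map = {25: 1, 50: 1, 100: 2, 200: 4}

    for port_gbps, lanes in lane_map.items():
        print(f"{port_gbps:>3} GbE: {lanes} lane(s), "
              f"{lanes * LANE_GBPS} Gbps raw serdes capacity")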

Significantly, the new SmartNICs' hardware offload engines include IPsec and inline TLS data-in-motion cryptography, advanced network virtualization, RDMA over Converged Ethernet (RoCE), and NVMe over Fabrics (NVMe-oF) storage accelerations. ConnectX-6 Dx provides IPsec, TLS, and AES-XTS built-in cryptographic acceleration, and Hardware Root of Trust. In addition to these capabilities, BlueField-2 adds accelerated key management, integrated Regular Expression (RegEx) pattern detection, secure hash computation, and more.

Mellanox said its BlueField-2 IPU integrates all the advanced capabilities of ConnectX-6 Dx with an array of powerful Arm processor cores, high performance memory interfaces, and flexible processing capabilities in a single System-on-Chip (SoC), supporting both Ethernet and InfiniBand connectivity up to 200Gbps. ConnectX-6 Dx and BlueField-2 also offer built-in SR-IOV, Open vSwitch (OVS), and VirtIO hardware accelerators. Mellanox is also introducing additional network virtualization offloads, enhanced programmability and extreme scale capabilities.

“We are excited to introduce the ConnectX-6 Dx and BlueField-2 architectures, providing groundbreaking acceleration engines for next-generation cloud data centers,” said Yael Shenhav, vice president, Ethernet NIC and SoC at Mellanox. “Built on the success of our award-winning ConnectX and BlueField product families, ConnectX-6 Dx and BlueField-2 set new records in high-performance networking, allowing our customers and partners to build highly secure and efficient compute and storage infrastructures to increase productivity and reduce total cost of ownership.”

“Baidu is an AI cloud giant tasked with delivering results at the speed of thought,” said Liu Ning, director of system department, Baidu. “Therefore, we have partnered with Mellanox, the leader in high-performance networking, whose high-speed connectivity solutions today support Baidu’s machine learning platforms. We look forward to this new release of Mellanox’s programmable cloud SmartNICs and IPUs to deliver best-in-class network performance for accelerating scalable AI-driven applications.”

“IBM’s enterprise server solutions are designed to deliver the best performance for the most demanding workloads, while providing cutting-edge security and reliability,” said Monica Aggarwal, vice president of Cognitive Systems Development. “We look forward to integrating the new Mellanox SmartNIC family into our product portfolio for building highly efficient secured cloud data centers.”

https://www.mellanox.com/products/bluefield2-overview/


Wednesday, July 24, 2019

Mellanox posts record revenue of $310.3m, up 16% yoy

Mellanox Technologies reported record revenue of $310.3 million in the second quarter, an increase of 15.6 percent, compared to $268.5 million in the second quarter of 2018. GAAP gross margins were 64.5 percent, compared to 61.4 percent in the second quarter of 2018. Non-GAAP net income amounted to $83.9 million in the second quarter, compared to $66.6 million in the second quarter of 2018.

“Mellanox delivered record revenue in Q2, achieving 2 percent sequential growth and 16 percent year-over-year growth. We continue to demonstrate leadership with our Ethernet adapter solutions for data rates of 25 gigabit per second and above. The growth in our Ethernet business reflects strong demand from our cloud customers as well as expanding channel sales. We are pleased that we’ve begun shipping 200 gigabit per second Ethernet adapters, switches, and cables to our data center customers, and expect this to be a future revenue growth driver,” said Eyal Waldman, president and CEO of Mellanox Technologies.

“We continue to see strong demand for our InfiniBand products across the high performance computing, artificial intelligence, cloud, and storage market segments, driven by our highest throughput 200 gigabit HDR InfiniBand solutions. InfiniBand accelerates six of the top ten supercomputers in the world today, including the top three. We are proud that multiple HDR InfiniBand systems have entered the TOP500 supercomputers list, led by the Frontera TACC system, which is the fastest TOP500 supercomputer built in 2019 and premiered at #5 on the list.”

“We are pleased with our financial performance this quarter and the adoption of our latest 25, 50, and 100Gb/s Ethernet and 200Gb/s HDR InfiniBand products,” continued Waldman. “We expect to maintain and grow our leadership in these segments as we expand our footprint for both adapters and switches in the data center.”


  • On March 11, 2019, NVIDIA agreed to acquire all the issued and outstanding common shares of Mellanox for $125 per share in cash. The acquisition is pending.

Monday, July 8, 2019

Mellanox invests in CNEX Labs and Pliops

Mellanox Capital, which is the investment arm of Mellanox Technologies, has made equity investments in storage start-ups CNEX Labs and Pliops, both of which are pushing software defined and intelligent storage to the next level of performance, efficiency, and scalability.

CNEX Labs, which targets high-performance storage semiconductors, has developed Denali/Open-Channel NVMe Flash storage controllers.

Pliops is transforming data center infrastructure with a new class of storage processors that deliver massive scalability and lower the cost of data services.

“Mellanox is committed to enabling customers to harness the power of distributed compute and disaggregated storage to improve the performance and efficiency of analytics and AI applications,” said Nimrod Gindi, senior vice president of mergers and acquisitions and head of investments, Mellanox Technologies. “Optimizing datacenter solutions requires faster, smarter storage connected with faster, smarter networks, and our investments in innovative storage leaders such as CNEX Labs and Pliops will accelerate the deployment of scale-out storage and data-intensive analytics solutions. Our strategic partnerships with these innovative storage mavericks are transforming the ways that customers can bring compute closer to storage to access and monetize the business value of data.”


Tuesday, June 18, 2019

Mellanox supplies 200G InfiniBand for Lenovo’s liquid cooled servers

Mellanox Technologies has begun shipping liquid cooled HDR 200G Multi-Host InfiniBand adapters for the Lenovo ThinkSystem SD650 server platform, which features Lenovo's "Neptune" liquid cooling technologies.

“Our collaboration with Lenovo delivers a scalable and highly energy efficient platform that delivers nearly 90% heat removal efficiency and can reduce data center energy costs by nearly 40%, and takes full advantage of the best-of-breed capabilities from Mellanox InfiniBand, including the Mellanox smart acceleration engines, RDMA, GPUDirect, Multi-Host and more,” said Gilad Shainer, Senior Vice President of Marketing at Mellanox Technologies.

Monday, June 17, 2019

Mellanox cites supercomputing momentum for HDR 200G InfiniBand

Mellanox Technologies reports that HDR 200G InfiniBand continues to gain traction with the next generation of supercomputers worldwide, owing to its industry-leading data throughput, extremely low latency, and smart In-Network Computing acceleration engines.

Mellanox's HDR 200G InfiniBand solutions include its ConnectX-6 adapters, Mellanox Quantum switches, LinkX cables and transceivers, and software packages.

“We are proud to have our HDR InfiniBand solutions accelerate supercomputers around the world, enhance research and discoveries, and advance Exascale programs,” said Gilad Shainer, senior vice president of marketing at Mellanox Technologies. “InfiniBand continues to gain market share, and be selected by many research, educational and government institutes, weather and climate facilities, and commercial organizations. The technology advantages of InfiniBand make it the interconnect of choice for compute and storage infrastructures.”

Examples

  • The Texas Advanced Computing Center’s (TACC) Frontera supercomputer -- ranked #5 on the June 2019 TOP500 Supercomputers list, Frontera utilizes HDR InfiniBand, and in particular multiple 800-port HDR InfiniBand switches.
  • The new HDR InfiniBand-based Orion supercomputer located at the Mississippi State University High Performance Computing Collaboratory -- ranked #62 on the June 2019 TOP500 list, the 1800-node supercomputer leverages the performance advantages of HDR InfiniBand and its application acceleration engines to provide new levels of application performance and scalability.
  • CSC, the Finnish IT Center for Science, and the Finnish Meteorological Institute -- ranked #166 on the TOP500 list.
  • Cygnus -- the first HDR InfiniBand supercomputer in Japan and ranked #264 on the TOP500 list.
  • India's Center for Development of Advanced Computing (C-DAC) 

Monday, May 20, 2019

Mellanox debuts Ethernet Cloud Fabric for 400G

Mellanox Technologies introduced its data center Ethernet Cloud Fabric (ECF) technology based on its second-generation Spectrum-2 silicon, which can deliver up to 16 ports of 400GbE, 32 ports of 200GbE, 64 ports of 100GbE, or 128 ports of 50/25/10/1GbE.

Mellanox ECF combines three critical capabilities:

Packet forwarding data plane
  • 8.33 Billion Packets per second – Fastest in its class
  • 42MB Monolithic and fully shared packet buffer to provide high bandwidth and low-latency cut-through performance
  • Robust RoCE Datapath to enable hardware accelerated data movement for Ethernet Storage Fabric and Machine Learning applications
  • Half a million flexible forwarding entries to support large Layer-2 and Layer-3 networks
  • Up to 2 Million routes with external memory to address Internet Peering use cases
  • 128-way ECMP with support for flowlet based Adaptive Routing
  • Hardware-based Network Address Translation
  • 500K+ Access Control List entries for micro-segmentation and cloud scale whitelist policies
  • 500K+ VXLAN Tunnels, 10K+ VXLAN VTEPs to provide caveat-free Network Virtualization
Flexible and fully programmable data pipeline
  • Support for VXLAN overlays including single pass VXLAN routing and bridging
  • Centralized VXLAN routing for brownfield environments
  • Support for other overlay protocols including EVPN, VXLAN-GPE, MPLS-over-GRE/UDP, NSH, NVGRE, MPLS/IPv6 based Segment routing and more
  • Future-proofing with programmable pipeline that can support new, custom and emerging protocols
  • Hardware optimized stages that accelerate traditional as well as virtualized network functions
  • Advanced modular data plane and integrated container support enables extensibility and flexibility to add customized and application specific capabilities
Open and Actionable telemetry
  • 10X reduction in mean time to resolution by providing a rich set of contextual and actionable Layer 1-4 “What Just Happened” telemetry insights
  • Hardware-based packet buffer tracking and data summarization using histograms (a toy model follows this list)
  • More than 500K flow tracking counters
  • Open and Extensible platform to facilitate integration and customization with 3rd party and open source visualization tools
  • Support for traditional visibility tools including sFlow, Streaming and In-band telemetry
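
To make the histogram item above concrete: averaged buffer counters smooth microbursts away entirely, while even a coarse occupancy histogram exposes them. A toy Python model with synthetic numbers; the bin edges are illustrative, not Spectrum-2's actual telemetry bins:

    import bisect
    import random

    # Synthetic buffer-occupancy trace (KB): mostly idle, one microburst.
    random.seed(1)
    samples = [random.randint(0, 64) for _ in range(10_000)]
    samples[5000:5020] = [9000] * 20  # 20-sample microburst

    edges = [64, 256, 1024, 4096]     # illustrative bin upper bounds (KB)
    hist = [0] * (len(edges) + 1)
    for s in samples:
        hist[bisect.bisect_left(edges, s)] += 1

    print(f"mean occupancy: {sum(samples) / len(samples):.1f} KB")  # looks benign
    for label, count in zip(("<=64", "<=256", "<=1024", "<=4096", ">4096"), hist):
        print(f"  {label:>6} KB: {count}")  # the top bin betrays the burst
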
Mellanox said its Ethernet Cloud Fabric incorporates Ethernet Storage Fabric (ESF) technology that seamlessly allows the network to serve as the ideal scale-out data plane for computing, storage, artificial intelligence, and communications traffic.

“The Spectrum-2 switch ASIC operates at speeds up to 400 Gigabit Ethernet, but goes beyond just raw performance by delivering the most advanced features of any switch in its class without compromising operability and simplicity,” said Amir Prescher, senior vice president of end user sales and business development at Mellanox Technologies. “Spectrum-2 enables a new era of Ethernet Cloud Fabrics designed to increase business continuity by delivering the most advanced visibility capabilities to detect and eliminate data center outages. This state-of-the-art visibility technology is combined with fair and predictable performance unmatched in the industry, which guarantees consistent application-level performance and in turn drives predictable business results for our customers. Spectrum-2 is at the heart of a new family of SN3000 switches that come in leaf, spine, and super-spine form factors.”

The Spectrum-2 based SN3000 family of switch systems with ECF technology will be available in Q3 2019.



Tuesday, April 16, 2019

Mellanox delivered record $305 million in revenue in Q1

Mellanox Technologies reported record revenue of $305.2 million in the first quarter, an increase of 21.6 percent, compared to $251.0 million in the first quarter of 2018. GAAP gross margins were 64.6 percent, compared to 64.5 percent in the first quarter of 2018.

“Mellanox delivered record revenue in Q1, achieving 5 percent sequential growth and 22 percent year-over-year growth. All of our product lines grew sequentially, showing the benefits of our diversified data center strategy,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Our R&D execution has resulted in differentiated products, while at the same time we have generated operating margin of 14.6% on a GAAP basis and 28.3% on a non-GAAP basis. Additionally, we increased cash and short-term investments by $114 million during the quarter.”

“Across InfiniBand and Ethernet product lines, our innovations are driving continued market leadership. Our 200 gigabit HDR InfiniBand solutions are enabling the world’s fastest supercomputers and driving our overall InfiniBand growth. During Q1, HDR InfiniBand connected tens-of-thousands of compute and storage end-points across supercomputing, hyperscale, and cloud data centers around the globe to achieve breakthrough performance. Our Ethernet solutions continue to penetrate the market for both adapters and switches. Our market leadership in 25 gigabit per second Ethernet solutions is well established, and our 100 gigabit per second solutions are the fastest growing portion of our Ethernet adapter product line. We are also encouraged by the adoption of our BlueField System-on-a-Chip and SmartNIC technology. With further innovations to come, Mellanox is well-positioned to continue its growth trajectory,” Mr. Waldman concluded.

Highlights

  • Non-GAAP gross margins of 68.0 percent in the first quarter, compared to 69.0 percent in the first quarter of 2018.
  • GAAP operating income of $44.7 million in the first quarter, compared to $12.0 million in the first quarter of 2018.
  • Non-GAAP operating income of $86.3 million in the first quarter, or 28.3 percent of revenue, compared to $52.1 million, or 20.8 percent of revenue in the first quarter of 2018.
  • GAAP net income of $48.6 million in the first quarter, compared to $37.8 million in the first quarter of 2018.
  • Non-GAAP net income of $86.5 million in the first quarter, compared to $51.4 million in the first quarter of 2018.
  • GAAP net income per diluted share of $0.87 in the first quarter, compared to $0.71 in the first quarter of 2018.
  • Non-GAAP net income per diluted share of $1.59 in the first quarter, compared to $0.98 in the first quarter of 2018.


Monday, March 11, 2019

With Mellanox, NVIDIA targets full compute/network/storage stack

NVIDIA agreed to acquire Mellanox in a deal valued at approximately $6.9 billion.

The merger targets data centers in general and the high-performance computing (HPC) market in particular. Together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker. Mellanox pioneered the InfiniBand interconnect technology, which along with its high-speed Ethernet products is now used in over half of the world’s fastest supercomputers and in many leading hyperscale datacenters.

NVIDIA said the acquired assets will enable it to optimize data center-scale workloads across the entire computing, networking and storage stack, achieving higher performance, greater utilization and lower operating cost for customers.

“The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters,” said Jensen Huang, founder and CEO of NVIDIA. “Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine.

“We share the same vision for accelerated computing as NVIDIA,” said Eyal Waldman, founder and CEO of Mellanox. “Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people.”

NVIDIA also promised to continue investing in Israel, where Mellanox is based.

The companies expect to close the deal by the end of 2019.




Tuesday, January 22, 2019

Mellanox supplies 200 Gigabit HDR InfiniBand to Finnish IT Center for Science

Mellanox Technologies will supply its 200 Gigabit HDR InfiniBand solutions to accelerate a multi-phase supercomputer system at CSC – the Finnish IT Center for Science. The new supercomputers, set to be deployed in 2019 and 2020, will serve Finnish researchers in universities and research institutes, supporting work in climate science, renewable energy, astrophysics, nanomaterials and bioscience, among a wide range of other fields. The Finnish Meteorological Institute (FMI) will have its own separate partition for diverse simulation tasks ranging from ocean fluxes to atmospheric modeling and space physics.

Mellanox said its HDR InfiniBand interconnect solution was selected for its fast data throughput, extremely low latency, smart In-Network Computing acceleration engines, and enhanced Dragonfly network topology.
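
For readers unfamiliar with Dragonfly: routers are arranged in fully connected groups, and every pair of groups is linked directly, so very large node counts are reached in a handful of hops from modest switch radix. A sketch of the canonical sizing formula from the literature (Kim et al., ISCA 2008); the parameter values are illustrative, not CSC's actual configuration:

    # Canonical dragonfly sizing: p hosts/router, a routers/group,
    # h global links/router; at most a*h + 1 fully connected groups.
    # Balanced designs use a = 2p = 2h. Values below are illustrative.
    def dragonfly_terminals(p, a, h):
        groups = a * h + 1
        return a * p * groups

    # Router radix used: p + h + (a - 1) = 39, i.e. a radix-40 switch.
    print(dragonfly_terminals(p=10, a=20, h=10))  # -> 40,200 hosts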

Monday, January 7, 2019

Mellanox supplies 200 Gigabit HDR InfiniBand for supercomputing

Mellanox Technologies is supplying its 200 Gigabit HDR InfiniBand to accelerate a world-leading supercomputer at the High-Performance Computer Center of the University of Stuttgart (HLRS). The 5000-node supercomputer named “Hawk” will be built in 2019 and provide 24 petaFLOPs of compute performance.

The mission of the HLRS Hawk supercomputer is to advance engineering development and research in the fields of energy, climate, health and more; as specified, it would rank as the world's fastest supercomputer for industrial production if it were operating today.

Mellanox said its Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology enables aggregation algorithms to execute on data as it is transferred within the network, providing the highest application performance and scalability.
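
A toy step-count model shows what in-network reduction buys: a host-based ring allreduce needs on the order of 2(N-1) dependent communication steps, while aggregating partial results in a tree of radix-R switches needs roughly one trip up and one down, about 2*ceil(log_R(N)) steps. The Python sketch below is a latency illustration under those assumptions, not a SHARP benchmark:

    import math

    # Allreduce across N nodes: host-based ring vs. in-switch tree
    # aggregation (the idea behind SHARP). Bandwidth and overlap ignored.
    R = 40  # assumed switch radix, e.g. a 40-port HDR switch

    for n in (64, 512, 4096):
        ring_steps = 2 * (n - 1)
        tree_steps = 2 * math.ceil(math.log(n, R))
        print(f"N={n:>4}: ring {ring_steps:>4} steps, tree {tree_steps} steps")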

“HDR InfiniBand delivers the best performance and scalability for HPC and AI applications, providing our users with the capabilities to enhance research, discoveries and product development,” said Gilad Shainer, vice president of marketing at Mellanox Technologies.