
Monday, July 8, 2019

Mellanox invests in CNEX Labs and Pliops

Mellanox Capital, which is the investment arm of Mellanox Technologies, has made equity investments in storage start-ups CNEX Labs and Pliops, both of which are pushing software defined and intelligent storage to the next level of performance, efficiency, and scalability.

CNEX Labs, which targets high-performance storage semiconductors, has developed Denali/Open-Channel NVMe Flash storage controllers.

Pliops is transforming data center infrastructure with a new class of storage processors that deliver massive scalability and lower the cost of data services.

“Mellanox is committed to enabling customers to harness the power of distributed compute and disaggregated storage to improve the performance and efficiency of analytics and AI applications,” said Nimrod Gindi, senior vice president of mergers and acquisitions and head of investments, Mellanox Technologies. “Optimizing datacenter solutions requires faster, smarter storage connected with faster, smarter networks, and our investments in innovative storage leaders such as CNEX Labs and Pliops will accelerate the deployment of scale-out storage and data-intensive analytics solutions. Our strategic partnerships with these innovative storage mavericks are transforming the ways that customers can bring compute closer to storage to access and monetize the business value of data.”

Tuesday, June 18, 2019

Mellanox supplies 200G InfiniBand for Lenovo’s liquid cooled servers

Mellanox Technologies has begun shipping liquid cooled HDR 200G Multi-Host InfiniBand adapters for the Lenovo ThinkSystem SD650 server platform, which features Lenovo's "Neptune" liquid cooling technologies.

“Our collaboration with Lenovo delivers a scalable and highly energy-efficient platform that provides nearly 90% heat removal efficiency, can reduce data center energy costs by nearly 40%, and takes full advantage of the best-of-breed capabilities of Mellanox InfiniBand, including the Mellanox smart acceleration engines, RDMA, GPUDirect, Multi-Host and more,” said Gilad Shainer, Senior Vice President of Marketing at Mellanox Technologies.

Monday, June 17, 2019

Mellanox cites supercomputing momentum for HDR 200G InfiniBand

Mellanox Technologies reports that HDR 200G InfiniBand continues to gain traction with the next generation of supercomputers worldwide, owing to its high data throughput, extremely low latency, and smart In-Network Computing acceleration engines.

Mellanox's HDR 200G InfiniBand solutions include its ConnectX-6 adapters, Mellanox Quantum switches, LinkX cables and transceivers and software packages.

“We are proud to have our HDR InfiniBand solutions accelerate supercomputers around the world, enhance research and discoveries, and advance Exascale programs,” said Gilad Shainer, senior vice president of marketing at Mellanox Technologies. “InfiniBand continues to gain market share and to be selected by many research, educational and government institutes, weather and climate facilities, and commercial organizations. The technology advantages of InfiniBand make it the interconnect of choice for compute and storage infrastructures.”

Examples

  • The Texas Advanced Computing Center’s (TACC) Frontera supercomputer -- ranked #5 on the June 2019 TOP500 Supercomputers list, Frontera utilizes HDR InfiniBand, and in particular multiple 800-port HDR InfiniBand switches.
  • The new HDR InfiniBand-based Orion supercomputer located at the Mississippi State University High Performance Computing Collaboratory -- ranked #62 on the June 2019 TOP500 list, the 1800-node supercomputer leverages the performance advantages of HDR InfiniBand and its application acceleration engines to provide new levels of application performance and scalability.
  • CSC, the Finnish IT Center for Science, and the Finnish Meteorological Institute -- ranked #166 on the TOP500 list.
  • Cygnus -- the first HDR InfiniBand supercomputer in Japan and ranked #264 on the TOP500 list.
  • India's Center for Development of Advanced Computing (C-DAC) 

Monday, May 20, 2019

Mellanox debuts Ethernet Cloud Fabric for 400G

Mellanox Technologies introduced its data center Ethernet Cloud Fabric (ECF) technology based on its second generation, Spectrum-2 silicon, which can deliver up to 16 ports of 400GbE, 32 ports of 200GbE, 64 ports of 100GbE, or 128 ports of 50/25/10/1GbE. 
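Each of these configurations corresponds to the same 6.4 Tb/s of aggregate switching bandwidth (16 × 400 = 32 × 200 = 64 × 100 = 128 × 50 = 6,400 Gb/s).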

Mellanox ECF combines three critical capabilities:

Packet forwarding data plane
  • 8.33 Billion Packets per second – Fastest in its class
  • 42MB Monolithic and fully shared packet buffer to provide high bandwidth and low-latency cut-through performance
  • Robust RoCE Datapath to enable hardware accelerated data movement for Ethernet Storage Fabric and Machine Learning applications
  • Half a million flexible forwarding entries to support large Layer-2 and Layer-3 networks
  • Up to 2 Million routes with external memory to address Internet Peering use cases
  • 128-way ECMP with support for flowlet-based Adaptive Routing (see the sketch following this feature list)
  • Hardware-based Network Address Translation
  • 500K+ Access Control List entries for micro-segmentation and cloud scale whitelist policies
  • 500K+ VXLAN Tunnels, 10K+ VXLAN VTEPs to provide caveat-free Network Virtualization
Flexible and fully programmable data pipeline
  • Support for VXLAN overlays including single pass VXLAN routing and bridging
  • Centralized VXLAN routing for brown field environments
  • Support for other overlay protocols including EVPN, VXLAN-GPE, MPLS-over-GRE/UDP, NSH, NVGRE, MPLS/IPv6 based Segment routing and more
  • Future-proofing with programmable pipeline that can support new, custom and emerging protocols
  • Hardware optimized stages that accelerate traditional as well as virtualized network functions
  • Advanced modular data plane and integrated container support enables extensibility and flexibility to add customized and application specific capabilities
Open and Actionable telemetry
  • 10X reduction in mean time to resolution by providing a rich set of contextual and actionable Layer 1-4 “What Just Happened” telemetry insights
  • Hardware based packet buffer tracking and data summarization using histograms
  • More than 500K flow tracking counters
  • Open and Extensible platform to facilitate integration and customization with 3rd party and open source visualization tools
  • Support for traditional visibility tools including sFlow, Streaming and In-band telemetry
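
To make the flowlet-based adaptive routing item above more concrete, here is a minimal conceptual sketch in C. It models only the general idea (a flow may be reassigned to a less-loaded ECMP member once an idle gap is long enough that packet reordering cannot occur) and is not Mellanox's Spectrum-2 implementation; the table size, gap threshold, and per-port load metric are illustrative assumptions.

    /* Conceptual sketch of flowlet-based adaptive routing over ECMP.
     * Illustrative only; not the Spectrum-2 data plane. */
    #include <stdbool.h>
    #include <stdint.h>

    #define ECMP_WAYS      128      /* up to 128-way ECMP */
    #define FLOWLET_SLOTS  4096     /* assumed flowlet-state table size */
    #define FLOWLET_GAP_US 50       /* assumed idle gap that opens a new flowlet */

    struct flowlet_entry {
        uint64_t last_seen_us;      /* timestamp of the previous packet */
        uint16_t port;              /* ECMP member currently assigned   */
        bool     valid;
    };

    static struct flowlet_entry flowlets[FLOWLET_SLOTS];
    static uint32_t port_load[ECMP_WAYS];   /* e.g., per-port queue depth */

    /* Pick the least-loaded ECMP member for a new flowlet. */
    static uint16_t least_loaded_port(void)
    {
        uint16_t best = 0;
        for (uint16_t p = 1; p < ECMP_WAYS; p++)
            if (port_load[p] < port_load[best])
                best = p;
        return best;
    }

    /* Select the egress port for a packet identified by its 5-tuple hash. */
    uint16_t flowlet_route(uint32_t five_tuple_hash, uint64_t now_us)
    {
        struct flowlet_entry *e = &flowlets[five_tuple_hash % FLOWLET_SLOTS];

        /* A sufficiently long idle gap means earlier packets of the flow
         * have drained, so the flow can move to a new path without
         * reordering. */
        if (!e->valid || now_us - e->last_seen_us > FLOWLET_GAP_US)
            e->port = least_loaded_port();

        e->last_seen_us = now_us;
        e->valid = true;
        return e->port;
    }
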
Mellanox said its Ethernet Cloud Fabric incorporates Ethernet Storage Fabric (ESF) technology that seamlessly allows the network to serve as the ideal scale-out data plane for computing, storage, artificial intelligence, and communications traffic.

“The Spectrum-2 switch ASIC operates at speeds up to 400 Gigabit Ethernet, but goes beyond just raw performance by delivering the most advanced features of any switch in its class without compromising operational ability and simplicity,” said Amir Prescher, senior vice president of end user sales and business development at Mellanox Technologies. “Spectrum-2 enables a new era of Ethernet Cloud Fabrics designed to increase business continuity by delivering the most advanced visibility capabilities to detect and eliminate data center outages. This state-of-the-art visibility technology is combined with fair and predictable performance unmatched in the industry, which guarantees consistent application-level performance and in turn drives predictable business results for our customers. Spectrum-2 is at the heart of a new family of SN3000 switches that come in leaf, spine, and super-spine form factors.”

The Spectrum-2 based SN3000 family of switch systems with ECF technology will be available in Q3.


Tuesday, April 16, 2019

Mellanox delivered record $305 million in revenue in Q1

Mellanox Technologies reported record revenue of $305.2 million in the first quarter, an increase of 21.6 percent compared to $251.0 million in the first quarter of 2018. GAAP gross margins were 64.6 percent in the first quarter, compared to 64.5 percent in the first quarter of 2018.

“Mellanox delivered record revenue in Q1, achieving 5 percent sequential growth and 22 percent year-over-year growth. All of our product lines grew sequentially, showing the benefits of our diversified data center strategy,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Our R&D execution has resulted in differentiated products, while at the same time we have generated operating margin of 14.6% on a GAAP basis and 28.3% on a non-GAAP basis. Additionally, we increased cash and short-term investments by $114 million during the quarter.”

“Across InfiniBand and Ethernet product lines, our innovations are driving continued market leadership. Our 200 gigabit HDR InfiniBand solutions are enabling the world’s fastest supercomputers and driving our overall InfiniBand growth. During Q1, HDR InfiniBand connected tens-of-thousands of compute and storage end-points across supercomputing, hyperscale, and cloud data centers around the globe to achieve breakthrough performance. Our Ethernet solutions continue to penetrate the market for both adapters and switches. Our market leadership in 25 gigabit per second Ethernet solutions is well established, and our 100 gigabit per second solutions are the fastest growing portion of our Ethernet adapter product line. We are also encouraged by the adoption of our BlueField System-on-a-Chip and SmartNIC technology. With further innovations to come, Mellanox is well-positioned to continue its growth trajectory,” Mr. Waldman concluded.

Highlights

  • Non-GAAP gross margins of 68.0 percent in the first quarter, compared to 69.0 percent in the first quarter of 2018.
  • GAAP operating income of $44.7 million in the first quarter, compared to $12.0 million in the first quarter of 2018.
  • Non-GAAP operating income of $86.3 million in the first quarter, or 28.3 percent of revenue, compared to $52.1 million, or 20.8 percent of revenue in the first quarter of 2018.
  • GAAP net income of $48.6 million in the first quarter, compared to $37.8 million in the first quarter of 2018.
  • Non-GAAP net income of $86.5 million in the first quarter, compared to $51.4 million in the first quarter of 2018.
  • GAAP net income per diluted share of $0.87 in the first quarter, compared to $0.71 in the first quarter of 2018.
  • Non-GAAP net income per diluted share of $1.59 in the first quarter, compared to $0.98 in the first quarter of 2018.

Monday, March 11, 2019

With Mellanox, NVIDIA targets full compute/network/storage stack

NVIDIA agreed to acquire Mellanox in a deal valued at approximately $6.9 billion.

The merger targets data centers in general and the high-performance computing (HPC) market in particular. Together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers and have as customers every major cloud service provider and computer maker. Mellanox pioneered the InfiniBand interconnect technology, which along with its high-speed Ethernet products is now used in over half of the world’s fastest supercomputers and in many leading hyperscale datacenters.

NVIDIA said the acquired assets will enable it to optimize data center-scale workloads across the entire computing, networking and storage stack, achieving higher performance, greater utilization and lower operating cost for customers.

“The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters,” said Jensen Huang, founder and CEO of NVIDIA. “Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine.”

“We share the same vision for accelerated computing as NVIDIA,” said Eyal Waldman, founder and CEO of Mellanox. “Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people.”

NVIDIA also promised to continue investing in Israel, where Mellanox is based.

The companies expect to close the deal by the end of 2019.




Tuesday, January 22, 2019

Mellanox supplies 200 Gigabit HDR InfiniBand to Finnish IT Center for Science

Mellanox Technologies will supply its 200 Gigabit HDR InfiniBand solutions to accelerate a multi-phase supercomputer system by CSC – the Finnish IT Center for Science. The new supercomputers, set to be deployed in 2019 and 2020, will serve Finnish researchers in universities and research institutes, supporting research in climate, renewable energy, astrophysics, nanomaterials and bioscience, among a wide range of exploration activities. The Finnish Meteorological Institute (FMI) will have its own separate partition for diverse simulation tasks ranging from ocean fluxes to atmospheric modeling and space physics.

Mellanox said its HDR InfiniBand interconnect solution was selected for its fast data throughput, extremely low latency, smart In-Network Computing acceleration engines, and enhanced Dragonfly network topology.

Monday, January 7, 2019

Mellanox supplies 200 Gigabit HDR InfiniBand for supercomputing

Mellanox Technologies is supplying its 200 Gigabit HDR InfiniBand to accelerate a world-leading supercomputer at the High-Performance Computing Center of the University of Stuttgart (HLRS). The 5,000-node supercomputer, named “Hawk”, will be built in 2019 and provide 24 petaFLOPS of compute performance.

The mission of the HLRS Hawk supercomputer is to advance engineering development and research in the fields of energy, climate, health and more, and if built today, the new system would be the world's fastest supercomputer for industrial production.

Mellanox said its Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) technology enables the execution of algorithms on data as it is being transferred within the network, providing the highest application performance and scalability.
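
Collective operations, such as MPI reductions, are the kind of algorithm SHARP can execute in the network. The short program below is a generic, standard MPI example (not Mellanox-specific code) showing an MPI_Allreduce; on a SHARP-enabled InfiniBand fabric with a supporting MPI library, such a reduction can be performed by the switches as the data traverses the network rather than on the host CPUs.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank contributes one value; the fabric (with SHARP) or the
         * hosts (without it) reduce them to a single global sum. */
        double local = (double)rank, global = 0.0;
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks across %d processes = %.0f\n", size, global);

        MPI_Finalize();
        return 0;
    }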

“HDR InfiniBand delivers the best performance and scalability for HPC and AI applications, providing our users with the capabilities to enhance research, discoveries and product development,” said Gilad Shainer, vice president of marketing at Mellanox Technologies.

Friday, December 7, 2018

Mellanox supplies RDMA over Ethernet 25 Gbps adapters to Alibaba

Mellanox Technologies confirmed that it is now shipping its RDMA over Converged Ethernet (RoCE) 25 Gbps ConnectX network adapters for deployment in Alibaba Infrastructure Services’ production network.

RDMA technology provides Remote Direct Memory Access from the memory of one host to the memory of another host without involving the operating system and CPU, thereby boosting network and host performance with low latency, low CPU load and high bandwidth.
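
As a concrete illustration of the verbs programming model that RoCE adapters expose, the C sketch below posts a one-sided RDMA write into a peer's memory. This is a minimal sketch, assuming an already-connected queue pair and a registered memory region; device setup, connection establishment, and the out-of-band exchange of the peer's address and rkey are omitted, and the helper function name is illustrative rather than taken from any Mellanox example.

    /* Minimal sketch: post a one-sided RDMA write on an already-connected
     * queue pair using the ibverbs API.  Setup steps are omitted. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                        void *local_buf, uint32_t len,
                        uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,   /* local source buffer */
            .length = len,
            .lkey   = mr->lkey,               /* from ibv_reg_mr()   */
        };

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: remote CPU not involved */
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
        wr.wr.rdma.remote_addr = remote_addr;        /* learned out of band  */
        wr.wr.rdma.rkey        = rkey;

        return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
    }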

Mellanox cites the following advantages for its ConnectX network adapters:

  • Sub 1us point-to-point latency
  • Close to zero CPU utilization at full wire speed
  • Scalability to thousands of nodes
  • Outstanding performance on all types of fabrics – from lossless to lossy
  • Ease of deployment through automation

“High performance network transport technology is critical for Alibaba to achieve the throughput and latency required by our services. We are excited to collaborate with Mellanox to deploy its RoCE technology into our infrastructure,” said Dennis Cai, Chief Architect of Network Infrastructure under Alibaba Infrastructure Services.

“Mellanox has pioneered RoCE technology and is now shipping its 7th generation of RoCE capable ConnectX network adapters,” said Amir Prescher, senior vice president of business development at Mellanox Technologies. “Alibaba’s successful large-scale deployment of ConnectX RoCE adapters confirms again that RoCE is a proven technology to accelerate the most demanding workloads in a cost-effective manner. We are thrilled to work with Alibaba to achieve this.”

Monday, November 12, 2018

SC18: Mellanox connects 53% of overall TOP500 systems

Mellanox Technologies' InfiniBand and Ethernet solutions connect 53% of overall TOP500 platforms or 265 systems, demonstrating 38% growth within 12 months (Nov’17-Nov’18). Furthermore, InfiniBand accelerates the top three supercomputers on the TOP500 list: the fastest High-Performance Computing (HPC) and Artificial Intelligence (AI) supercomputer in the world deployed at the Oak Ridge National Laboratory, the second fastest supercomputer in the US deployed at the Lawrence Livermore National Laboratory, and the fastest supercomputer in China (ranked third).

“Mellanox InfiniBand and Ethernet solutions now connect the majority of systems on the TOP500 list, an increase of 38 percent over the last twelve-month period. InfiniBand In-Network Computing acceleration engines provide the highest performance and scalability for HPC and AI applications, and accelerate the top three supercomputers in the world. InfiniBand enables record performance in HPC and AI, enabling the advancement of academic and scientific research which is reshaping our world. We continue to win new opportunities and are proud to have deployed the first HDR InfiniBand supercomputer at the University of Michigan. We expect to see more HDR InfiniBand connected platforms this year,” said Eyal Waldman, president and CEO of Mellanox Technologies.

The TOP500 list has evolved in recent years to include more hyperscale, cloud, and enterprise platforms, in addition to high-performance computing and machine learning systems. Nearly half of the systems on the November 2018 list can be categorized as non-HPC application platforms; a large portion of these represent US, Chinese and other hyperscale infrastructures and are interconnected with Ethernet. Mellanox Ethernet solutions connect 130 systems, or 51% of the Ethernet-connected systems on the list.

Thursday, October 25, 2018

Mellanox hits record revenue of $279.2 million, up 24%

Mellanox Technologies reported record revenue of $279.2 million for Q3 2018, an increase of 23.7 percent compared to $225.7 million in the third quarter of 2017. GAAP gross margins were 65.8 percent in the third quarter, compared to 65.7 percent in the third quarter of 2017. GAAP net income was $37.1 million in the third quarter, compared to $3.4 million in the third quarter of 2017. Non-GAAP net income was $71.4 million in the third quarter, compared to $36.6 million in the third quarter of 2017.

“Mellanox continues to execute and gain momentum in the markets we participate in. We reported another record quarter in Q3, delivering 24% revenue growth and 90% non-GAAP operating income growth year-over-year. This resulted in a non-GAAP operating margin of 26.2%," said Eyal Waldman, President and CEO of Mellanox Technologies. "Our strong results reflect the differentiated and superior product technologies that Mellanox has to offer for data center infrastructure.”

“The innovations built into our high-speed Ethernet adapters, switches and cables are fueling demand for our Ethernet products. Leading hyperscale, cloud, enterprise data center and artificial intelligence customers continue to choose Mellanox to maximize the efficiency and utilization of their compute and storage investments. This has resulted in further market share gains across our high-speed Ethernet products and 59% year-over-year revenue growth in our Ethernet business."

Mellanox also announced that it has shipped more than 2.1 million Ethernet adapters during the first nine months of 2018.

The company said this milestone signals that high-performance Ethernet technology (25G and faster) has moved beyond the Super 7 cloud and web titans. The adoption of high-performance Ethernet technology has spread to enterprise data centers globally, including the next wave of cloud, telco/service providers, financial services and more.

Thursday, October 18, 2018

NTT ICT upgrades data centers with Mellanox 25G and 100G

NTT Communications ICT Solutions (NTT ICT) has selected Mellanox Technologies' 25G and 100G Ethernet solutions to accelerate its multi-cloud data centers.

The upgrade includes: Spectrum-based switches running Cumulus Linux, ConnectX adapters, and LinkX cables and transceivers.

NTT ICT is a premium global IT provider delivering solutions to Australian enterprise and government clients.

Monday, September 17, 2018

Singapore's National Supercomputing Centre picks Mellanox

Singapore's National Supercomputing Centre (NSCC) has selected Mellanox 100 Gigabit Ethernet Spectrum-based switches, ConnectX adapters, cables and modules for its network.

"We are excited to collaborate with NSCC to interconnect the Singapore's research and educational facilities in the most efficient and scalable way," said Gilad Shainer, Vice President of Marketing at Mellanox Technologies. "The combination of our Ethernet RoCE technology, Spectrum switches, MetroX WDM long-haul switch, cables and software provide the highest data throughput, enabling users to be at the forefront of research and scientific discovery."

Mellanox ConnectX-5 with Virtual Protocol Interconnect supports two ports of InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and very high message rate, plus embedded PCIe switch and NVMe over Fabric offloads. It enables higher HPC performance with new Message Passing Interface (MPI) offloads, advanced dynamic routing, and new capabilities to perform various data algorithms.

Mellanox Spectrum, the eighth generation of Mellanox's switching IC family, delivers leading Ethernet performance, efficiency, throughput, low latency and scalability for data center Ethernet networks by integrating advanced networking functionality for Ethernet fabrics. Hyperscale, cloud, data-intensive, virtualized data center and storage environments drive the need for interconnect performance and throughput beyond 10 and 40GbE. Spectrum's flexibility enables solution companies to build any Ethernet switch system at speeds of 10, 25, 40, 50 and 100GbE, with leading port density, low latency, zero packet loss, and non-blocking traffic.

Mellanox's MetroX RDMA long-haul systems enable connections between data centers deployed across multiple geographically distributed sites, extending Mellanox's world-leading interconnect benefits beyond local data centers and storage clusters.

Wednesday, August 29, 2018

Mellanox ships 200G LinkX Copper and Optical Cables

Mellanox Technologies is now shipping 200 Gigabit Ethernet and HDR InfiniBand LinkX optical transceivers, Active Optical Cables (AOCs) and Direct Attach Copper cables (DACs) for use in upcoming 200 Gbps systems.

The new LinkX 200 Gbps product line provides comprehensive options for switch, server, and storage network connectivity in HDR InfiniBand and 200/400GbE infrastructures. LinkX is part of the Mellanox “end-to-end” ecosystem, alongside Spectrum-2 200GbE and Quantum HDR switch systems and ConnectX-6 network adapters. The product line includes:

  • 200G SR4/HDR Transceiver: Designed and manufactured by Mellanox, the 4x50G PAM4 transceiver uses the QSFP56 form-factor and forms the basis for transceivers and AOC products for Mellanox’s upcoming 200G systems.
  • 200GbE and HDR DAC and AOC cables: Designed and manufactured by Mellanox, offered in both straight and Y-splitter 100GbE and HDR100 configurations.
  • 400GbE DAC Cables: Mellanox LinkX kicks off its 400GbE line by beginning shipments of 400G 8x50G PAM4 DAC cables in the QSFP-DD form-factor.
  • Live Demos: At ECOC, Mellanox will host a live demo with Keysight/Ixia showing 200Gb/s SR4 transceivers and 400Gb/s QSFP-DD DAC cables.
  • 400G SR8 Transceiver: Mellanox-designed, 8-channel parallel transceiver will be on display.
  • Low-Loss DAC Cables: Extending one of the industry’s largest offerings of interconnect products with new low-loss DAC cables that enable simplified or even FEC-less links for the Mellanox SN2000 series of 25/50/100G network switches and ConnectX network adapters. The new cables offer lengths up to 5 meters and support the IEEE CA-N and CA-L specifications, enabling considerable interconnect latency savings.

Mellanox also began shipping 400G QSFP-DD DAC cables for use in next-generation systems.

Tuesday, June 19, 2018

Mellanox supplies InfiniBand for Sandia's Arm supercomputer

Mellanox Technologies will supply an InfiniBand solution to accelerate the world’s top Arm-based supercomputer, to be deployed at Sandia National Laboratories in the second half of 2018.

The Astra supercomputer will include nearly 2600 nodes, and will leverage InfiniBand In-Network Computing acceleration engines. Astra is the first system in a series of the Vanguard program of advanced architecture platforms, supporting the US Department of Energy’s National Nuclear Security Administration (NNSA) missions.

“InfiniBand smart In-Network Computing acceleration engines will enable the highest performance and productivity for Astra, the first large scale Arm-based supercomputer,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “InfiniBand is the world-leading interconnect technology that supports the full range of compute and storage infrastructures, including X86, Power, Arm, GPUs, FPGAs and more. We are happy to support the Department of Energy’s efforts to explore new supercomputing platforms for their future needs.”

http://www.mellanox.com

Tuesday, May 29, 2018

Mellanox intros Hyper-scalable Enterprise Framework

Mellanox Technologies introduced its Hyper-scalable Enterprise Framework for private cloud and enterprise data centers.

The five key elements of the ‘Mellanox Hyper-scalable Enterprise Framework’ are:
  • High Performance Networks – Mellanox's end-to-end suite of 25G, 50G, and 100G adapters, cables, and switches is proven within hyperscale data centers, which have adopted these solutions for the simple reason that an intelligent and high-performance network delivers total infrastructure efficiency
  • Open Networking – an open and fully disaggregated networking platform is key to scalability and flexibility as well as achieving operational efficiency
  • Converged Networks on an Ethernet Storage Fabric – a fully converged network supporting compute, communications, and storage on a single integrated fabric
  • Software Defined Everything and Virtual Network Acceleration – enables enterprises to enjoy the benefits realized by hyperscalers that have embraced software-defined networking, storage, and virtualization – or software-defined everything (SDX)
  • Cloud Software Integration – networking solutions that are fully integrated with the most popular cloud platforms such as OpenStack, vSphere, and Azure Stack and support for advanced software-defined storage solutions such as Ceph, Gluster, Storage Spaces Direct, and VSAN

“With the advent of open platforms and open networking it is now possible for even modestly sized organizations to build data centers like the hyperscalers do,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “We are confident and excited to release the Mellanox Hyper-scalable Enterprise Framework to the industry – and to provide an open, intelligent, high performance, accelerated and fully converged network to enable enterprise and private cloud architects to build a world-class data center.”

Thursday, May 17, 2018

Mellanox raises Q2 and full year 2018 outlook

Citing strength across all its product lines, including InfiniBand and Ethernet, Mellanox Technologies raised its second quarter and full year 2018 outlook.

Mellanox currently projects:

  • Quarterly revenues of $260 million to $270 million. Prior guidance provided on April 17, 2018 was quarterly revenues of $255 million to $265 million.
  • Full Year 2018 revenues of $1,050 million to $1,070 million. Prior guidance was revenues of $1,030 million to $1,050 million.

“We continue to see strength across all our product lines, including InfiniBand and Ethernet, and we are well-positioned for further growth as the adoption of 25 gigabit per second and above Ethernet adapters continues in 2018 and beyond,” said Eyal Waldman, Chief Executive Officer of Mellanox. “The team remains disciplined in its investments and committed to optimizing efficiencies and reducing expenses, without slowing down revenue growth. The Board and management team are confident that the successful execution of our strategy will continue to deliver enhanced value for all shareholders.”

Monday, April 2, 2018

Mellanox interconnects NVIDIA's new DGX-2 AI box

Mellanox Technologies confirmed that its InfiniBand and Ethernet are used in the new NVIDIA DGX-2 artificial intelligence (AI) system.

NVIDIA's DGX-2, which delivers 2 Petaflops of system performance, is powered by sixteen GPUs and eight Mellanox ConnectX adapters, supporting both EDR InfiniBand and 100 GigE connectivity.

The embedded Mellanox network adapters provide 1,600 gigabits per second of overall bi-directional data throughput (eight adapters, each carrying 100 Gb/s in each direction), which enables scaling up AI capabilities for building the largest Deep Learning compute systems.

"We are excited to collaborate with NVIDIA and to bring the performance advantages of our EDR InfiniBand and 100 gigabit Ethernet to the new DGX-2 Artificial Intelligence platform," said Gilad Shainer, vice president of marketing at Mellanox Technologies. "Doubling the network throughput as compared to previous systems to provide overall bi-directional 1600 gigabit per second data speed enables the DGX-2 platform to analyze growing amounts of data, and to dramatically improve Deep Learning application performance."

Wednesday, March 7, 2018

Mellanox milestone: one Million 100G ports with LinkX optical transceivers and cables

Mellanox Technologies announced a major milestone: volume shipments of LinkX optical transceivers, Active Optical Cables (AOCs) and Direct Attach Copper Cables (DACs) have surpassed one million 100Gb/s QSFP28 ports.

“Our early 100Gb/s sales were driven by US-based hyperscale companies who were the first to deploy 100G Ethernet,” said Amir Prescher, senior vice president of business development and general manager of the interconnect business at Mellanox. “Now, China Web 2.0, Cloud computing networks, and OEMs worldwide are moving to 100G. Customers select us because of our high-speed experience, our capacity to ship in volume, and the quality of our products.”

Tuesday, March 6, 2018

Mellanox debuts its Onyx Ethernet Network Operating System

Mellanox Technologies released a next-generation Ethernet network operating system for its Mellanox Spectrum Open Ethernet switches.

Mellanox Onyx OS provides a rich Layer-3 feature set with built-in dynamic routing protocols such as BGP and OSPF. It features a robust control plane and 64-way ECMP support, which Mellanox says can be used to construct large, scale-out Layer-3 fabrics for data centers.

Additional capabilities include:

  • support for standard DevOps tools like Ansible and Puppet
  • smart hooks to automate network provisioning for high-performance workloads such as storage and artificial intelligence
  • support to run containerized applications on the switch system itself
  • buffer and link monitoring that leverage Mellanox Spectrum’s unique silicon level capabilities.

The Mellanox Onyx OS is aimed at cloud, hyperscale, enterprise, media & entertainment, storage, and high-performance Ethernet-based interconnect applications.

“Mellanox Onyx brings leading management, provisioning, automation and network visibility to data centers and cloud infrastructures, to deliver the best scalability, performance and overall return on investment,” said Yael Shenhav, vice president of products at Mellanox Technologies. “Mellanox Onyx also offers a mature Layer-3 feature-set, with integrated support for standard DevOps tools, allowing customers to run third party containerized applications with complete SDK access. By utilizing Mellanox Onyx’s leading capabilities, our customers can enjoy the benefits of an industry-standard Layer-2 and Layer-3 feature-set along with the ability to customize and optimize the network to their specific needs.”
