Saturday, November 22, 2014

RapidIO Delivers 16 Gbps Interconnect for NVIDIA-based Clusters

Integrated Device Technology (IDT) and Orange Silicon Valley have co-developed a compute architecture that builds massive, highly scalable, low-latency clusters of low-power NVIDIA Tegra K1 mobile processors, using IDT's RapidIO technology to interconnect nodes at up to 16 Gbps. The architecture can scale to more than 2,000 nodes in a rack and enables ultra-high Gflop density and energy efficiency not achievable with PCI Express or Ethernet technologies.

The design essentially interconnects a large number of low-power GPUs in a server rack, enabling tremendous computing horsepower with low latency and low energy consumption. It yields up to 23 Tflops per 1U server, or more than 800 Tflops of computing per rack.

IDT said the new architecture matches computing cores with a 16 Gbps data rate to each node, improving the compute-to-throughput balance that is one of the key limitations in the industry today. The compute-to-I/O ratio will continue to improve with IDT's 40 Gbps RapidIO 10xN technology.

The architecture allows for 60 nodes on a 19-inch 1U board and more than 2,000 nodes in a rack. Any node can communicate with any other node with only 400 ns of fabric latency; memory-to-memory latency is less than two microseconds. Each node consists of a Tsi721 PCIe-to-RapidIO NIC and a Tegra K1 mobile processor delivering 384 Gflops per 16 Gbps of data rate, or 24 floating-point operations per bit of I/O. This will be valuable at the rack level in data centers and at the individual analytics server level for wireless access networks.
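
The arithmetic behind these figures is easy to verify. Below is a minimal back-of-envelope sketch in Python; the 35-board rack fill is our assumption, chosen to exceed the quoted 2,000 nodes, not a published IDT configuration.

    # Sanity-check the quoted cluster figures (a sketch; the 35-board
    # rack fill is an assumption, not IDT's published configuration).
    NODES_PER_1U = 60        # Tegra K1 nodes per 19-inch 1U board
    GFLOPS_PER_NODE = 384    # peak Gflops per Tegra K1
    LINK_GBPS = 16           # RapidIO data rate per node

    print(NODES_PER_1U * GFLOPS_PER_NODE / 1000.0)   # 23.04 Tflops per 1U

    BOARDS_PER_RACK = 35     # assumed usable 1U slots per rack
    nodes = NODES_PER_1U * BOARDS_PER_RACK
    print(nodes)                                     # 2100 nodes per rack
    print(nodes * GFLOPS_PER_NODE / 1000.0)          # 806.4 Tflops per rack

    print(GFLOPS_PER_NODE / LINK_GBPS)               # 24.0 flops per bit of I/O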

The cluster was built with NVIDIA's Jetson TK1 development kit, which is powered by the NVIDIA Tegra K1 mobile processor.

“Leading innovators in the ‘Big Data’ arena are increasingly discovering the benefits RapidIO interconnect can bring to their applications,” said Sean Fan, vice president and general manager of IDT’s Interface and Connectivity Division. “Our work with Orange Silicon Valley—connecting massive numbers of low-power NVIDIA mobile processors via RapidIO—demonstrates a breakthrough approach to addressing the tradeoffs between total computing, power and balanced networking interconnect to feed the processors.”

http://www.IDT.com

Wednesday, November 19, 2014

Mellanox Intros Programmable Network Adapter with FPGA

Mellanox Technologies introduced its Programmable ConnectX-3 Pro adapter card with Virtual Protocol Interconnect (VPI) technology, aimed at modern data centers, public and private clouds, Web 2.0 infrastructures, telecommunications, and high-performance computing systems.

The new adapter uses an on-board integrated FPGA and memory to let users deploy their own customized applications, such as IPsec encryption, enhanced flow steering, Network Address Translation (NAT), overlay network bridging or routing, data inspection, and data compression or deduplication offloads, among others.

The programmable adapter card supports both InfiniBand and Ethernet protocols at bandwidths up to 56 Gb/s. In addition, the FPGA can be placed on the PCIe bus, on the network interface, or in both locations simultaneously, giving the card complete flexibility for HPC, cloud, Web 2.0 and enterprise data center environments.

“Data center administrators and application users have looked for simple, powerful and more flexible ways to run applications at their highest potential, and to enable their own innovations as a competitive advantage,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “The Programmable ConnectX-3 Pro adapter card with FPGA completely revitalizes a data center’s ability to boost application performance by enabling a flexible and efficient programmable capability as data enters the network interface or is sent out.”

http://www.mellanox.com

Tuesday, November 18, 2014

Intel's Next Xeon Phi Brings Omni-Path Fabric and 10nm Process

Intel's third-generation Intel Xeon Phi product family, code-named Knights Hill, will be built using Intel's 10nm process technology and integrate Intel Omni-Path Fabric technology. Knights Hill will follow the upcoming Knights Landing product, with first commercial systems based on Knights Landing expected to begin shipping next year.

Intel also disclosed that its Intel Omni-Path Architecture will achieve 100 Gbps line speed and up to 56 percent lower switch fabric latency than InfiniBand alternatives in medium-to-large clusters. The architecture targets a 48-port switch chip, compared with the current 36-port InfiniBand alternatives, which will reduce the number of switches required in HPC clusters.
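
To see why switch radix matters: in a non-blocking two-tier fat tree built from radix-k switches, roughly k²/2 hosts can be connected with 3k/2 switches. A minimal sketch follows (our own illustration, assuming a standard folded-Clos layout rather than any published Intel topology):

    # Compare switch counts for 36-port vs. 48-port switch chips in a
    # non-blocking two-tier fat tree (our illustration, not Intel's math).
    def two_tier_fat_tree(radix):
        """Max hosts and total switches for a two-level folded Clos."""
        leaves = radix                 # each spine port reaches one leaf
        spines = radix // 2            # half of each leaf's ports go up
        hosts = leaves * (radix // 2)  # half of each leaf's ports go down
        return hosts, leaves + spines

    for radix in (36, 48):
        hosts, switches = two_tier_fat_tree(radix)
        print(radix, hosts, switches, round(switches / hosts, 4))
    # 36 -> 648 hosts, 54 switches (0.0833 switches per host)
    # 48 -> 1152 hosts, 72 switches (0.0625 switches per host, ~25% fewer)

The higher radix also keeps larger clusters within fewer switch tiers, so paths traverse fewer hops, which is consistent with the latency claim above.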

"Intel is excited about the strong market momentum and customer investment in the development of HPC systems based on current and future Intel Xeon Phi processors and high-speed fabric technology," said Charles Wuischpard, vice president, Data Center Group, and general manager of Workstations and HPC at Intel. "The integration of these fundamental HPC building blocks, combined with an open standards-based programming model, will maximize HPC system performance, broaden accessibility and use, and serve as the on-ramp to exascale."

http://www.intel.com/xeonphi
http://www.intel.com/omnipath

Monday, November 17, 2014

CenturyLink and Infinera Deliver Terabit to SC14 in New Orleans

Infinera and CenturyLink are delivering one terabit per second (1 Tbps) of super-channel transmission capacity to support the SCinet network at this week's International Conference for High Performance Computing, Networking, Storage and Analysis (SC14) at the Ernest N. Morial Convention Center in New Orleans.

CenturyLink currently offers 100 GbE Optical Wavelength Service to research institutions and laboratories, including more than 150 U.S. Department of Defense locations under a contract for the Defense Research and Engineering Network, as well as to financial and educational institutions and Internet content providers. It also delivers these Ethernet services over dedicated network connections to enterprise customers across the U.S. and in select international cities.

"This terabit deployment demonstrates the scale, reliability and efficiency of our network and complements the 100 gigabit Ethernet services we have available to our customers today," said Pieter Poll, senior vice president of national and international network planning at CenturyLink. "The delivery of terabit capacity on SCinet demonstrates the rapid provisioning of services and the ability to turn up terabit capacity in minutes."

SCinet is one of the most powerful and advanced networks in the world, created each year for the Supercomputing Conference.

"This demonstration with CenturyLink illustrates the ability to rapidly scale bandwidth and provision services across an Intelligent Transport Network," said Bob Jandro, Infinera senior vice president, worldwide sales. "Working with CenturyLink to dynamically deliver terabit capacity over their backbone emphasizes the value of our solutions in powering one of the largest networks in the U.S."

http://www.infinera.com
http://www.centurylink.com
http://sc14.supercomputing.org