
Tuesday, August 15, 2017

Microsoft acquires Cycle Computing for cloud HPC

Microsoft has acquired Cycle Computing, a start-up specializing in cloud orchestration of High-Performance Computing (HPC) resources. Financial terms were not disclosed.

Cycle Computing describes its CycleCloud software suite as a cloud orchestration, provisioning, and data management platform for Big Compute, Big Data, and large technical computing applications running on any public, private, or internal environment. The tool suite helps manage cloud workflows.
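Cycle Computing's own templates and APIs are not detailed in the announcement, but the basic idea behind this kind of orchestration can be sketched in a few lines: watch the pending work and size the cluster to match it. The snippet below is a toy illustration only; the node size, quota ceiling and core-hour figures are assumptions, not CycleCloud parameters.

```python
# Illustrative sketch only -- not Cycle Computing's actual API. It shows the
# core idea of HPC cloud orchestration: size a cluster to the pending work.
import math

CORES_PER_NODE = 16      # assumed instance size
MAX_NODES = 1000         # assumed budget/quota ceiling

def desired_nodes(pending_core_hours: float, target_hours: float = 2.0) -> int:
    """Nodes needed to drain the queue within the target wall-clock time."""
    cores_needed = pending_core_hours / max(target_hours, 0.1)
    return min(MAX_NODES, math.ceil(cores_needed / CORES_PER_NODE))

def scaling_action(pending_core_hours: float, running_nodes: int) -> int:
    """Positive: nodes to add; negative: nodes to release; zero: hold."""
    return desired_nodes(pending_core_hours) - running_nodes

# e.g. 4,000 core-hours queued, 50 nodes running -> add 75 more nodes
print(scaling_action(4000, 50))
```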

Cycle Computing said it is joining Microsoft because Azure has a massive global footprint, powerful infrastructure, InfiniBand support for fast networking and state-of-the-art GPU capabilities.

Jason Zander, corporate vice president, Microsoft Azure, writes: "We’ve already seen explosive growth on Azure in the areas of artificial intelligence, the Internet of Things and deep learning. As customers continue to look for faster, more efficient ways to run their workloads, Cycle Computing’s depth and expertise around massively scalable applications make them a great fit to join our Microsoft team. Their technology will further enhance our support of Linux HPC workloads and make it easier to extend on-premise workloads to the cloud."

https://cyclecomputing.com/cycle-computing-joining-microsoft/

Wednesday, July 5, 2017

UCAR deploys ADVA FSP 3000 CloudConnect

ADVA Optical Networking announced that the University Corporation for Atmospheric Research (UCAR), based in Boulder, Colorado, has deployed its FSP 3000 CloudConnect data centre interconnect (DCI) solution to support ultra-high capacity connectivity to the Cheyenne supercomputer.

UCAR has deployed the ADVA DCI technology to enable the transport of scientific data over two 200 Gbit/s 16QAM connections between the NCAR-Wyoming Supercomputing Center in Cheyenne, Wyoming and the Front Range GigaPop in Denver, Colorado. By providing greater flexibility and more capacity, the new network is designed to help UCAR expand educational opportunities and collaboration.

As a leading institution for atmospheric research, UCAR will leverage the new capabilities and enhanced efficiency provided by the ADVA solution to offer the scientific community enhanced access to computing and data analysis platforms. As a result, the over 100 universities and research centres within the UCAR consortium will gain improved access to the Cheyenne supercomputing centre to support their research programs.

The ADVA FSP 3000 CloudConnect platform is designed to enable UCAR to maximise throughput at the optical layer, as well as offering scalability for the future. The ADVA solution features advanced technology but is designed to be simple to use, thereby helping UCAR to reduce operational complexity and costs.

ADVA noted that the FSP 3000 CloudConnect solution is an open DCI platform with no vendor lock-in or restrictions, able to address the research centre's density, security and energy requirements.

Regarding the project, John Scherzinger, SVP, sales, North America at ADVA, noted, "ADVA has developed a close relationship with UCAR over many years… the FSP 3000 CloudConnect… DCI solution will deliver UCAR significant savings in terms of price, power and space".



  • ADVA recently announced that the Poznań Supercomputing and Networking Center (PSNC) in Poland had deployed its FSP 3000 CloudConnect with QuadFlex 400Gbit/s technology into its PIONIER research network. The DCI solution supplied employs 16QAM modulation and provides a 96-channel network connecting supercomputing centres in Poznań and Warsaw.

Tuesday, November 15, 2016

Intel Cites Gains with its Omni-Path Architecture Systems

Intel cited growing momentum in the nine months since Intel Omni-Path Architecture (Intel OPA) began shipping. The company said OPA is becoming the standard fabric for 100 gigabit (Gb) systems, as it is now featured in 28 of the Top 500 most powerful supercomputers in the world announced at Supercomputing 2016. Intel believes OPA now holds 66 percent of the 100Gb market, with twice as many systems on the list as InfiniBand EDR.

Top500 designs include Oakforest-PACS, MIT Lincoln Lab and CINECA. The Intel OPA systems on the list deliver a combined 43.7 petaflops (Rmax), or 2.5 times the FLOPS of all InfiniBand EDR systems.
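A quick back-of-the-envelope check, using only the figures quoted above, shows the share and performance claims are consistent with one another:

```python
# Back-of-the-envelope check of the figures quoted above.
opa_systems = 28                 # Intel OPA systems on the November 2016 Top500
edr_systems = opa_systems / 2    # "twice as many systems... as InfiniBand EDR" -> ~14
share = opa_systems / (opa_systems + edr_systems)
print(f"Implied 100Gb share: {share:.0%}")          # ~67 percent, close to the quoted 66 percent

opa_rmax_pf = 43.7               # combined Rmax of the OPA systems, in petaflops
edr_rmax_pf = opa_rmax_pf / 2.5  # "2.5 times the FLOPS of all InfiniBand EDR systems"
print(f"Implied EDR total: {edr_rmax_pf:.1f} PF")   # ~17.5 petaflops
```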

Intel OPA is an end-to-end fabric solution that improves performance for HPC workloads on clusters of all sizes, delivering up to 9 percent higher application performance and up to 37 percent lower fabric costs on average compared with InfiniBand EDR.

https://newsroom.intel.com/newsroom/wp-content/uploads/sites/11/2016/11/supercomputing-2016-fact-sheet.pdf

Monday, November 14, 2016

SC16 Opens in Salt Lake City

SC16, the annual supercomputing conference, is underway this week, November 13-18, in Salt Lake City, with more than 12,000 exhibitors and attendees expected.

SCinet, the high-performance, experimental network built specifically for the conference, will deliver more than 5 Tbps of internal network bandwidth.

The Utah Education Network (UEN) and CenturyLink are partnering with SCinet to provide tens of 100 Gbps Ethernet circuits, bringing 3.15 Tbps of wide area network (WAN) bandwidth to the convention center. UEN guides this collaboration with national and international research and education networks and commodity Internet providers.
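As a rough sanity check, assuming the WAN capacity is carried entirely on 100 Gbps circuits:

```python
# Rough check: 3.15 Tbps of WAN capacity over 100 Gigabit Ethernet circuits.
wan_tbps = 3.15
per_circuit_gbps = 100
print(wan_tbps * 1000 / per_circuit_gbps)   # ~31.5 -> roughly 32 circuits
```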

For SC16, the SCinet team used more than $32 million in vendor-provided equipment and services. They also installed 56 miles of fiber optic cable and 200+ wireless access points that can support more than 10,000 simultaneous users on the conference Wi-Fi network. The wireless network supports eduroam, the worldwide education roaming service, which allows anyone from participating institutions to securely access the protected wireless network using their home organization’s login credentials.

“SCinet is more than hardware and software; its unprecedented scale is achieved by volunteers and vendors from around the world,” said Corby Schmitz, SCinet chair and manager of network communications operations and support at the Argonne National Laboratory. “SCinet hardware is supplied by vendors, and then volunteers collaborate to design a network architecture that exists nowhere else in the world.”

Platinum vendor contributors: CenturyLink, Ciena, Cisco, Coriant, Corsa, ESnet, Infinera, Internet2, Juniper, Zayo
Gold vendor contributors: Arista, Brocade Communication Systems, UEN/CloudLab
Silver vendor contributors: ADVA, ECI Telecom, Gigamon, InMon, Ixia, Metaflow, Nokia, Reservoir, Spirent, Splunk, Viavi
Bronze vendor contributors: Cablexpress, Commscope, Leverage, Palo Alto, Puppet Labs, RedSeal

http://www.supercomputing.org

Monday, June 20, 2016

Dell Launches New High Performance Computing Portfolio

Dell launched a new family of high-performance computing (HPC) systems tuned for specific science, manufacturing and analytics workloads with fully tested and validated building block systems, backed by a single point of hardware support and additional service options across the solution lifecycle. Dell's new HPC Systems feature an Intel Scalable System Framework configuration with the latest Xeon processors, support for Intel Omni-Path Architecture (Intel OPA) fabric, and software in the Dell HPC Lustre Storage and Dell HPC NFS Storage solutions.

Dell has instituted a customer early access program for development and testing ahead of its next server offering in the HPC portfolio, the Dell PowerEdge C6320p server with the Intel Xeon Phi processor (formerly code-named Knights Landing), which will be available in the second half of 2016.

http://www.dell.com

Wednesday, April 6, 2016

U-Michigan Collaborates with IBM on HPC

The University of Michigan is collaborating with IBM to develop and deliver “data-centric” supercomputing systems based on the OpenPOWER architecture.

Specifically, under a grant from the National Science Foundation,  U-M researchers have designed a computing resource called ConFlux to enable high performance computing clusters to communicate directly and at interactive speeds with data-intensive operations.

ConFlux incorporates IBM Power Systems LC servers and is also powered by the latest additions to the NVIDIA Tesla Accelerated Computing Platform: NVIDIA Tesla P100 GPU accelerators with the NVLink high-speed interconnect technology. Additional data-centric solutions U-M is using include IBM Elastic Storage Server, IBM Spectrum Scale software (scale-out, parallel access network attached storage), and IBM Platform Computing software.

An initial application for the high-performance system involves a simulation of turbulence around aircraft and rocket engines.

http://www-03.ibm.com/press/us/en/pressrelease/49477.wss
http://micde.umich.edu/tag/conflux/

Monday, July 13, 2015

Intel Shows its Omni-Path Architecture for HPC

Intel conducted the first public "powered-on" demonstration of its Omni-Path Architecture, a next-generation fabric technology for high performance computing (HPC) clusters.

The demonstration, conducted at the ISC 2015 show in Frankfurt, featured Intel Omni-Path Architecture (Intel OPA), an end-to-end solution including PCIe adapters, silicon, switches, cables, and management software that builds on the existing Intel True Scale Fabric and InfiniBand. Intel OPA was designed to address the challenge that processor capacity and memory bandwidth have been scaling faster than system I/O, and to accelerate message passing interface (MPI) rates in next-generation systems. Intel OPA also promises the ability to scale to tens of thousands of nodes, and eventually hundreds of thousands.
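For readers unfamiliar with MPI message rates, the sketch below is a minimal ping-pong microbenchmark of the kind used to measure fabric latency and small-message rates. It is generic MPI code (using mpi4py), not anything Intel OPA-specific, and assumes exactly two ranks and an installed MPI stack.

```python
# Minimal MPI ping-pong sketch (mpi4py): the kind of small-message exchange
# whose rate and latency HPC fabrics aim to improve.
# Run with e.g.: mpiexec -n 2 python pingpong.py
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
iters = 10000
msg = bytearray(8)            # 8-byte message: latency-bound regime

comm.Barrier()
start = time.perf_counter()
for _ in range(iters):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)
        comm.Recv(msg, source=1, tag=0)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=0)
        comm.Send(msg, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # one ping-pong = two messages; half the round-trip approximates one-way latency
    print(f"~{elapsed / iters / 2 * 1e6:.2f} us one-way latency, "
          f"{2 * iters / elapsed:,.0f} messages/s")
```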

Intel Omni-Path Architecture uses technologies acquired from both QLogic and Cray, as well as Intel-developed technologies. In the near future, Intel says it will integrate the Intel Omni-Path Host Fabric Interface onto future generations of Intel Xeon processors and Intel Xeon Phi processors.

Intel also announced new a collaboration with HP to develop purpose-built HP Apollo systems designed to expand the use of HPC solutions to enterprises of all sizes.  The purpose built HP Apollo compute platforms will utilize the Intel HPC scalable system framework, including next generation Intel Xeon processors, the Intel Xeon Phi product family, Intel Omni-Path Architecture and the Intel Enterprise Edition of Lustre software.

http://www.intel.com/content/www/us/en/high-performance-computing-fabrics/omni-path-architecture-fabric-overview.html


In April 2015, Intel and Cray were selected to build two next-generation high-performance computing (HPC) systems that will be five to seven times more powerful than today's fastest supercomputers.

Intel will serve as prime contractor to deliver the supercomputers for the U.S. Department of Energy’s (DOE) Argonne Leadership Computing Facility (ALCF). The Aurora system will be based on Intel’s HPC scalable system framework and will be a next-generation Cray “Shasta” supercomputer. Intel said the Aurora system will be delivered in 2018 and have a peak performance of 180 petaflops, making it the most powerful system announced to date. Aurora will use future generations of Intel Xeon Phi processors and the Intel Omni-Path Fabric high-speed interconnect technology, a new non-volatile memory architecture and advanced file system storage using Intel Lustre software.



In November 2014, Intel confirmed that its third-generation Intel Xeon Phi product family, code-named Knights Hill, will be built using 10nm process technology and that it will integrate Intel Omni-Path Fabric technology. Knights Hill will follow the upcoming Knights Landing product, with first commercial systems based on Knights Landing expected to begin shipping next year.

Intel also disclosed that its Intel Omni-Path Architecture will achieve 100 Gbps line speed and up to 56 percent lower switch fabric latency in medium-to-large clusters than InfiniBand alternatives. The architecture targets a 48 port switch chip compared to the current 36 port InfiniBand alternatives. This will reduce the number of switches required in HPC clusters.
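To see why a larger switch radix cuts the switch count, consider an idealized two-tier, non-blocking fat tree: the number of attachable nodes grows with the square of the port count. The sketch below uses that textbook topology as an illustration; the assumptions are generic, not Intel's published design.

```python
# Idealized two-tier (leaf-spine) non-blocking fat tree:
# each leaf uses half its ports for nodes and half for uplinks.
def two_tier_capacity(radix: int) -> int:
    leaves = radix                 # one leaf per spine port
    nodes_per_leaf = radix // 2    # half the ports face the nodes
    return leaves * nodes_per_leaf

print(two_tier_capacity(48))   # 1152 nodes with 48-port switches
print(two_tier_capacity(36))   # 648 nodes with 36-port switches
```

With the same two tiers of switches, the 48-port chip connects nearly twice as many nodes, which is where the reduction in switch count comes from.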

Intel Previews Enhanced Lustre File System for HPC

Intel previewed a number of new features coming in its Enterprise Edition for Lustre 2.3, including support for Multiple Metadata Targets in the Intel Manager for Lustre GUI. Lustre, which has been in use in the world’s largest data centers for over a decade and hardened in the harshest big data environments, leverages an object-based storage architecture that can scale to tens of thousands of clients and petabytes of data.

New capabilities will enable Lustre metadata to be distributed across servers. Intel Enterprise Edition for Lustre 2.3 supports remote directories, which allow each metadata target to serve a discrete sub-directory within the file system name space. This enables the size of the Lustre namespace and metadata throughput to scale with demand and provide dedicated metadata servers for projects, departments, or specific workloads.
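Lustre exposes this distributed namespace through its standard lfs tool. The sketch below, which assumes a client mount at /mnt/lustre, at least two metadata targets, and hypothetical project directory names, pins each directory to a specific MDT; the Intel Manager for Lustre GUI drives the same underlying mechanism.

```python
# Sketch: create Lustre "remote directories" pinned to specific metadata
# targets (MDTs) using the community lfs tool. Paths and MDT indices are
# illustrative assumptions; requires a Lustre client mount with DNE enabled.
import subprocess

LUSTRE_ROOT = "/mnt/lustre"    # assumed client mount point

def make_remote_dir(name: str, mdt_index: int) -> None:
    """Create a sub-directory whose metadata is served by the given MDT."""
    subprocess.run(
        ["lfs", "mkdir", "-i", str(mdt_index), f"{LUSTRE_ROOT}/{name}"],
        check=True,
    )

# e.g. give each department or project its own metadata server
make_remote_dir("physics", 0)
make_remote_dir("genomics", 1)
```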

Intel is also preparing to roll out new security, disaster recovery, and enhanced support features in Intel Cloud Edition for Lustre 1.2, which will arrive later this year. These enhancements include network encryption using IPsec, the ability to recover a complete file system from snapshots, new client mounting tools, updates to instance and target naming, and added network testing tools.

http://www.intel.com/content/www/us/en/software/intel-enterprise-edition-for-lustre-software.html

Wednesday, November 19, 2014

Mellanox Intros Programmable Network Adapter with FPGA

Mellanox Technologies introduced its Programmable ConnectX-3 Pro adapter card with Virtual Protocol Interconnect (VPI) technology, aimed at modern data centers, public and private clouds, Web 2.0 infrastructures, telecommunications, and high-performance computing systems.

The new adapter uses an on-board integrated FPGA and memory to enable users to bring their own customized applications such as IPsec encryption, enhanced flow steering and Network Address Translation (NAT), overlay network bridging or routing, data inspection, and data compression or deduplication offloads.

The programmable adapter card supports both InfiniBand and Ethernet protocols at bandwidths up to 56Gb/s. In addition, the FPGA can be engaged on the PCIe bus, the network interface, or in both locations simultaneously, giving the card complete flexibility for HPC, cloud, Web 2.0 and enterprise data center environments.

“Data center administrators and application users have looked for simple, powerful and more flexible ways to run applications at their highest potential, and to enable their own innovations as a competitive advantage,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “The Programmable ConnectX-3 Pro adapter card with FPGA completely revitalizes a data center’s ability to boost application performance by enabling a flexible and efficient programmable capability as data enters the network interface or is sent out.”

http://www.mellanox.com

Tuesday, November 18, 2014

Intel's Next Xeon Phi Brings Omni-Path Fabric and 10nm process

Intel's third-generation Intel Xeon Phi product family, code-named Knights Hill, will be built using Intel's 10nm process technology and integrate Intel Omni-Path Fabric technology. Knights Hill will follow the upcoming Knights Landing product, with first commercial systems based on Knights Landing expected to begin shipping next year.

Intel also disclosed that its Intel Omni-Path Architecture will achieve 100 Gbps line speed and up to 56 percent lower switch fabric latency in medium-to-large clusters than InfiniBand alternatives. The architecture targets a 48 port switch chip compared to the current 36 port InfiniBand alternatives. This will reduce the number of switches required in HPC clusters.

"Intel is excited about the strong market momentum and customer investment in the development of HPC systems based on current and future Intel Xeon Phi processors and high-speed fabric technology," said Charles Wuischpard, vice president, Data Center Group, and general manager of Workstations and HPC at Intel. "The integration of these fundamental HPC building blocks, combined with an open standards-based programming model, will maximize HPC system performance, broaden accessibility and use, and serve as the on-ramp to exascale."

http://www.intel.com/xeonphi
http://www.intel.com/omnipath

Wednesday, July 23, 2014

IBM adds InfiniBand to SoftLayer for HPC in Cloud

IBM will begin offering InfiniBand networking to connect bare metal servers in its SoftLayer data centers. The new option enables very high-speed throughput and low latency between servers in a high-performance computing (HPC) cluster.

IBM said the introduction of InfiniBand on SoftLayer will especially benefit customers who are leveraging fully supported, ready-to-run clusters complete with the code-named IBM Elastic Storage, IBM Platform LSF or Platform Symphony workload management.

“As more and more companies migrate their toughest workloads to the cloud, they’re now demanding that vendors provide high speed networking performance to keep up,” said SoftLayer CEO Lance Crosby. “Our InfiniBand support is helping to push the technological envelope while redefining how cloud computing can be used to solve complex business issues.”

InfiniBand delivers high transfer speeds of up to 56 Gbps.

http://www.ibm.com/cloud

IBM Chalks up Gains from Softlayer Acquisition

One year after acquiring SoftLayer for $2 billion, IBM reported continued gains in its cloud leadership position in terms of new enterprise customers and the launch of differentiated services.

IBM said the hybrid cloud model is catching on with new enterprise customers such as Macy’s, Whirlpool, Daimler subsidiary moovel GmbH, Sicoss Group and others. Clients can maintain on-premises control of key applications and data while moving other workloads (so-called systems of engagement with customers and partners) to the cloud for quick access to data, expansion of new services and cost reductions. The company says nearly half of its top 100 strategic outsourcing clients, who are among the world's largest enterprises, are already implementing cloud solutions with IBM as they transition to a hybrid cloud model.

IBM now has over 1,000 business partners signed on to offer their services on SoftLayer, ranging from leading global players such as Avnet, Arrow Electronics and Ingram Micro, to cloud-based services and solution providers like Mirantis, Assimil8, Silverstring, Clipcard, SilverSky, and Cnetric Enterprise Solutions.

SoftLayer is expected to play a pivotal role in delivering IBM's rich data and analytics portfolio. Already more than 300 services within the IBM cloud marketplace are based on SoftLayer.

In June, IBM opened the first of two new SoftLayer cloud services data centers designed and dedicated to U.S. government workloads and compliant with Federal Risk and Authorization Management Program (FedRAMP) and Federal Information Security Management Act (FISMA) requirements.

The first of the new data centers is located in Dallas, Texas. Later this year, IBM anticipates activating a companion center in Ashburn, Virginia. Each of the high-security data centers will have initial capacity for 30,000 servers. They will be connected by an isolated, robust private network with 2 Tbps of capacity.

Monday, June 23, 2014

Top 500 Supercomputer List for June 2014

A new list of the Top 500 Supercomputer Sites has just been published, and for the third consecutive time the most powerful designation goes to the Tianhe-2 supercomputer at China's National University of Defense Technology, with a performance of 33.86 petaflops delivered by its 3.12 million Intel Xeon and Xeon Phi cores.

A few highlights from the list:


  • Total combined performance of all 500 systems has grown to 274 Pflop/s, compared to 250 Pflop/s six months ago and 223 Pflop/s one year ago. This increase in installed performance also exhibits a noticeable slowdown in growth compared to the previous long-term trend (a quick calculation of these growth rates follows this list).
  • There are 37 systems with performance greater than a Pflop/s on the list, up from 31 six months ago.
  • The No. 1 system, Tianhe-2, and the No. 7 system, Stampede, use Intel Xeon Phi processors to speed up their computational rate. The No. 2 system, Titan, and the No. 6 system, Piz Daint, use NVIDIA GPUs to accelerate computation.
  • A total of 62 systems on the list are using accelerator/co-processor technology, up from 53 in November 2013. Forty-four of these use NVIDIA chips, two use ATI Radeon, and there are now 17 systems with Intel MIC technology (Xeon Phi). The average number of accelerator cores for these 62 systems is 78,127 cores/system.
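The growth rates implied by the quoted totals make the slowdown concrete:

```python
# Growth implied by the quoted aggregate Top500 performance figures.
now, six_months_ago, one_year_ago = 274, 250, 223   # Pflop/s
print(f"6-month growth: {(now / six_months_ago - 1):.1%}")   # ~9.6%
print(f"1-year growth:  {(now / one_year_ago - 1):.1%}")     # ~22.9%
```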


The full list is posted here:

http://www.top500.org/lists/2014/06/
