
Wednesday, September 27, 2017

NVIDIA secures server design wins with leading manufacturers

NVIDIA has secured design wins for its Volta architecture-based Tesla V100 GPU accelerators with the leading server manufacturers, including Dell EMC, Hewlett Packard Enterprise, IBM, Supermicro, Inspur, Lenovo and Huawei.

Each NVIDIA V100 GPU features over 21 billion transistors, 640 Tensor Cores, the latest NVLink high-speed interconnect technology, and 900 GB/sec of HBM2 memory bandwidth -- 50 percent more than previous-generation GPUs. NVIDIA says this enables 120 teraflops of deep learning performance.

V100-based systems announced include:

  • Dell EMC -- The PowerEdge R740 supporting up to three V100 GPUs for PCIe, the PowerEdge R740XD supporting up to three V100 GPUs for PCIe, and the PowerEdge C4130 supporting up to four V100 GPUs for PCIe or four V100 GPUs for NVIDIA NVLink™ interconnect technology in an SXM2 form factor.
  • HPE -- HPE Apollo 6500 supporting up to eight V100 GPUs for PCIe and HPE ProLiant DL380 systems supporting up to three V100 GPUs for PCIe.
  • IBM -- The next generation of IBM Power Systems servers based on the POWER9 processor will incorporate multiple V100 GPUs and take advantage of the latest generation NVLink interconnect technology -- featuring fast GPU-to-GPU interconnects and an industry-unique OpenPOWER CPU-to-GPU design for maximum throughput.
  • Supermicro -- Products supporting the new Volta GPUs include a 7048GR-TR workstation for all-around high-performance GPU computing, 4028GR-TXRT, 4028GR-TRT and 4028GR-TR2 servers designed to handle the most demanding deep learning applications, and 1028GQ-TRT servers built for applications such as advanced analytics.

Tuesday, September 26, 2017

NVIDIA sees big wins for data center GPUs in China

NVIDIA announced some big wins in China for its Volta GPUs - Alibaba Cloud, Baidu and Tencent are all incorporating the NVIDIA Tesla V100 GPU accelerators into their data centers and cloud-service infrastructures.
Specifically, the three cloud giants are shifting from NVIDIA Pascal architecture-based systems to Volta-based platforms, which offer performance gains for AI inferencing and training.

The NVIDIA V100 data center GPU packs 21 billion transistors and provides a 5x improvement over the preceding NVIDIA Pascal architecture P100 GPU accelerators.


NVIDIA also announced that China's leading original equipment manufacturers -- including Inspur, Lenovo and Huawei -- are using the NVIDIA HGX reference architecture to offer Volta-based accelerated systems for hyperscale data centers.

Saturday, August 19, 2017

NVIDIA adds virtualization software for GPU-accelerated servers

NVIDIA introduced new virtualization software capabilities for NVIDIA Tesla GPU-accelerated servers.

The company said its new Quadro Virtual Data Center Workstation Software (Quadro vDWS) enables enterprises to run both virtualized graphics and compute workloads on any virtual workstation or laptop from NVIDIA Tesla-accelerated data centers. This provides high-end performance to multiple enterprise users from the same GPU for lower cost of ownership.

When powered by NVIDIA Pascal architecture-based Tesla GPU accelerators, Quadro vDWS provides:

  • The ability to create complex 3D and photoreal designs - Up to 24GB of GPU memory for working with large, immersive models.
  • Increased productivity - Up to double the graphics performance of the previous NVIDIA GPU architecture lets users make better, faster decisions.
  • Unified graphics and compute workloads - Supports accelerated graphics and compute (CUDA and OpenCL) workflows to streamline design and computer-aided engineering simulation.
  • Better performance for Linux users - NVIDIA NVENC delivers better performance and user density by off-loading H.264 video encoding, a compute-intensive task, from the CPU for Linux virtual workstation users.

"The enterprise is transforming. Workflows are evolving to incorporate AI, photorealism, VR, and greater collaboration among employees. The Quadro visualization platform is evolving with the enterprise to provide the performance required," said Bob Pette, Vice President of Professional Visualization at NVIDIA. "With Quadro vDWS on Tesla-powered servers, businesses can tackle larger datasets, power the most demanding applications and meet the need for greater mobility."

http://www.nvidia.com

Friday, August 11, 2017

NVIDIA Cites Growth in Data Center, Auto

NVIDIA reported record revenue for its second quarter ended July 30, 2017, of $2.23 billion, up 56 percent from $1.43 billion a year earlier, and up 15 percent from $1.94 billion in the previous quarter. GAAP EPS was $0.92, up 124 percent from a year ago.
"Adoption of NVIDIA GPU computing is accelerating, driving growth across our businesses," said Jensen Huang, founder and chief executive officer of NVIDIA. "Datacenter revenue increased more than two and a half times. A growing number of car and robot-taxi companies are choosing our DRIVE PX self-driving computing platform. And in Gaming, increasingly the world's most popular form of entertainment, we power the fastest growing platforms - GeForce and Nintendo Switch.

"Nearly every industry and company is awakening to the power of AI. Our new Volta GPU, the most complex processor ever built, delivers a 100-fold speedup for deep learning beyond our best GPU of four years ago. This quarter, we shipped Volta in volume to leading AI customers. This is the era of AI, and the NVIDIA GPU has become its brain. We have incredible opportunities ahead of us," he said.

http://investor.nvidia.com/results.cfm


Sunday, February 5, 2017

NVIDIA Debuts Latest Quadro Pascal GPUs

NVIDIA introduced its latest line-up of Quadro GPUs products, all based on its Pascal architecture and designed for professional workflows in engineering, deep learning, VR, and many vertical applications.

"Professional workflows are now infused with artificial intelligence, virtual reality and photorealism, creating new challenges for our most demanding users," said Bob Pette, vice president of Professional Visualization at NVIDIA. "Our new Quadro lineup provides the graphics and compute performance required to address these challenges. And, by unifying compute and design, the Quadro GP100 transforms the average desktop workstation with the power of a supercomputer."

Some highlights of the new generation of Quadro Pascal-based GPUs:

  • The GP100 combines double precision performance with 16GB of high-bandwidth memory (HBM2) so users can conduct simulations during the design process and gather realistic multiphysics simulations faster than ever before. Customers can combine two GP100 GPUs with NVLink technology and scale to 32GB of HBM2 to create a massive visual computing solution on a single workstation.
  • The GP100 provides more than 20 TFLOPS of 16-bit floating point precision computing, making it an ideal development platform to enable deep learning in Windows and Linux environments.
  • The "VR Ready" Quadro GP100 and P4000 have the power to create detailed, lifelike, immersive environments. Larger, more complex designs can be experienced at scale.
  • Pascal-based Quadro GPUs can render photorealistic images more than 18 times faster than a CPU.
  • Users can visualize data in high resolution and HDR color on up to four 5K displays.
  • For digital signage, up to 32 4K displays can be configured through a single chassis by combining up to eight P4000 GPUs and two Quadro Sync II cards.

The new cards complete the entire NVIDIA Quadro Pascal lineup including the previously announced P6000, P5000 and mobile GPUs. The entire NVIDIA Quadro Pascal lineup supports the latest NVIDIA CUDA 8 compute platform providing developers access to powerful new Pascal features in developer tools, performance enhancements and new libraries including nvGraph.

http://www.nvidia.com

Tuesday, September 13, 2016

NVIDIA Advances its Pascal-based GPUs for AI

NVIDIA is expanding its Pascal™ architecture-based deep learning platform with the introduction of new Tesla P4 and P40 GPU accelerators and new software. The solution is aimed at accelerating inferencing production workloads for artificial intelligence services, such as voice-activated assistance, email spam filters, and movie and product recommendation engines.

NVIDIA said its GPUs are better at these tasks than current CPU-based technology, which isn't capable of delivering real-time responsiveness. The Tesla P4 and P40 are specifically designed for inferencing, which uses trained deep neural networks to recognize speech, images or text in response to queries from users and devices.

Based on the Pascal architecture, these GPUs feature specialized inference instructions based on 8-bit (INT8) operations, delivering 45x faster response than CPUs and a 4x improvement over GPU solutions launched less than a year ago.
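The INT8 approach can be sketched in a few lines: store values as 8-bit integers with a shared scale factor, then recover approximate floats on the way out. The sketch below is an illustrative example of symmetric quantization, not NVIDIA's actual implementation.

```python
def quantize_int8(values):
    """Symmetric INT8 quantization: one shared scale, codes clamped to [-128, 127]."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # fall back to 1.0 for all-zero input
    return [max(-128, min(127, round(v / scale))) for v in values], scale

def dequantize(codes, scale):
    """Recover approximate floats from the INT8 codes."""
    return [q * scale for q in codes]

weights = [0.5, -1.27, 0.02, 1.0]      # toy "network weights"
codes, scale = quantize_int8(weights)  # -> [50, -127, 2, 100]
restored = dequantize(codes, scale)    # close to the original floats
```

Each value now fits in one byte instead of four, which is what lets inference hardware trade precision for throughput.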

With 47 tera-operations per second (TOPS) of inference performance using INT8 instructions, a server with eight Tesla P40 accelerators can replace more than 140 CPU servers. At approximately $5,000 per CPU server, this results in savings of more than $650,000 in server acquisition cost.
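A quick back-of-envelope check of that savings claim. The release does not give the price of the 8-GPU server itself; the $50,000 figure below is a hypothetical placeholder consistent with the stated savings.

```python
CPU_SERVERS_REPLACED = 140   # figure cited in the release
CPU_SERVER_PRICE = 5_000     # USD per CPU server, as cited
GPU_SERVER_PRICE = 50_000    # USD, hypothetical cost of one 8x P40 server

cpu_fleet_cost = CPU_SERVERS_REPLACED * CPU_SERVER_PRICE  # $700,000
savings = cpu_fleet_cost - GPU_SERVER_PRICE               # $650,000
print(f"CPU fleet ${cpu_fleet_cost:,}, savings ${savings:,}")
```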

"With the Tesla P100 and now Tesla P4 and P40, NVIDIA offers the only end-to-end deep learning platform for the data center, unlocking the enormous power of AI for a broad range of industries," said Ian Buck, general manager of accelerated computing at NVIDIA. "They slash training time from days to hours. They enable insight to be extracted instantly. And they produce real-time responses for consumers from AI-powered services."

http://nvidianews.nvidia.com

Friday, August 12, 2016

NVIDIA Posts Record Q2 Revenue of $1.43 Billion, up 24% YoY

NVIDIA posted record revenue of $1.43 billion for its Q2, ended July 31, 2016, up 24 percent from $1.15 billion a year earlier, and up 9 percent from $1.30 billion in the previous quarter. GAAP earnings per diluted share for the quarter were $0.40, compared with $0.05 a year ago and up 21 percent from $0.33 in the previous quarter.

"Strong demand for our new Pascal-generation GPUs and surging interest in deep learning drove record results," said Jen-Hsun Huang, co-founder and chief executive officer, NVIDIA. "Our strategy to focus on creating the future where graphics, computer vision and artificial intelligence converge is fueling growth across our specialized platforms -- Gaming, Pro Visualization, Datacenter and Automotive."

"We are more excited than ever about the impact of deep learning and AI, which will touch every industry and market. We have made significant investments over the past five years to evolve our entire GPU computing stack for deep learning. Now, we are well positioned to partner with researchers and developers all over the world to democratize this powerful technology and invent its future," he said.

http://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-second-quarter-fiscal-2017



Friday, August 5, 2016

Microsoft Azure Previews Nvidia GPU VMs

Microsoft is getting ready to offer Azure N-Series Virtual Machines, billed as "the fastest GPUs in the public cloud," powered by NVIDIA’s GPUs. The service will enable users to run GPU-accelerated workloads and visualize them while paying on a per-minute of usage basis.

The Azure N-Series VMs are split into two categories: the NC-Series (compute-focused GPUs), for compute-intensive HPC workloads using CUDA or OpenCL and powered by Tesla K80 GPUs; and the NV-Series, for visualization of desktop-accelerated applications and powered by Tesla M60 GPUs.

Microsoft noted that its service, unlike other providers, will expose the GPUs through discrete device assignment (DDA), which results in close to bare-metal performance.

https://azure.microsoft.com/en-us/blog/azure-n-series-preview-availability/

Tuesday, April 5, 2016

NVIDIA Unveils GPU Accelerators for Deep Learning AI

NVIDIA unveiled its most advanced accelerator to date -- the Tesla P100 -- based on Pascal architecture and composed of an array of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. The Tesla P100, which is implemented in 16nm FinFET on a massive 610 mm² die, enables a new class of servers that can deliver the performance of hundreds of CPU server nodes.

NVIDIA said its accelerator brings five breakthroughs:

  • NVIDIA Pascal architecture for exponential performance leap -- a Pascal-based Tesla P100 solution delivers over a 12x increase in neural network training performance compared with a previous-generation NVIDIA Maxwell-based solution.
  • NVIDIA NVLink for maximum application scalability -- The NVIDIA NVLink high-speed GPU interconnect scales applications across multiple GPUs, delivering a 5x acceleration in bandwidth compared to today's best-in-class solution. Up to eight Tesla P100 GPUs can be interconnected with NVLink to maximize application performance in a single node, and IBM has implemented NVLink on its POWER8 CPUs for fast CPU-to-GPU communication.
  • 16nm FinFET for unprecedented energy efficiency -- with 15.3 billion transistors built on 16 nanometer FinFET fabrication technology, the Pascal GPU is the world's largest FinFET chip ever built.
  • CoWoS with HBM2 for big data workloads -- the Pascal architecture unifies processor and data into a single package to deliver unprecedented compute efficiency. An innovative approach to memory design, Chip on Wafer on Substrate (CoWoS) with HBM2, provides a 3x boost in memory bandwidth performance, or 720GB/sec, compared to the Maxwell architecture.
  • New AI algorithms for peak performance -- new half-precision instructions deliver more than 21 teraflops of peak performance for deep learning.
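The half-precision arithmetic behind that 21-teraflop figure trades numeric precision for throughput. The rounding involved can be demonstrated with the standard library's IEEE 754 half-precision ('e') struct format -- a sketch, not anything specific to Pascal hardware:

```python
import struct

def to_fp16(x):
    """Round a Python float to IEEE 754 half precision and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_fp16(1.0))  # representable exactly: 1.0
print(to_fp16(0.1))  # inexact: only ~3 decimal digits survive in 16 bits
```

With just 10 mantissa bits, FP16 keeps roughly three decimal digits of precision -- enough for neural network training, which is why "new AI algorithms" can exploit it for double the FLOPS.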

At its GPU Technology Conference in San Jose, NVIDIA also unveiled its DGX-1 deep learning supercomputer, a turnkey system that integrates eight Tesla P100 GPU accelerators and delivers the equivalent throughput of 250 x86 servers.

"Artificial intelligence is the most far-reaching technological advancement in our lifetime," said Jen-Hsun Huang, CEO and co-founder of NVIDIA. "It changes every industry, every company, everything. It will open up markets to benefit everyone. Data scientists and AI researchers today spend far too much time on home-brewed high performance computing solutions. The DGX-1 is easy to deploy and was created for one purpose: to unlock the powers of superhuman capabilities and apply them to problems that were once unsolvable."

"NVIDIA GPU is accelerating progress in AI. As neural nets become larger and larger, we not only need faster GPUs with larger and faster memory, but also much faster GPU-to-GPU communication, as well as hardware that can take advantage of reduced-precision arithmetic. This is precisely what Pascal delivers," said Yann LeCun, director of AI Research at Facebook.

http://nvidianews.nvidia.com

Monday, January 4, 2016

NVIDIA Develops Supercomputer for Self-Driving Cars

NVIDIA unveiled an artificial-intelligence supercomputer for self-driving cars.

In a pre-CES keynote in Las Vegas, NVIDIA's CEO Jen-Hsun Huang said the onboard processing needs of future automobiles far exceeds the silicon capabilities currently on the market.

NVIDIA's DRIVE PX 2 will pack the processing equivalent of 150 MacBook Pros -- 8 teraflops of power -- enough to process data from multiple sensors in real time, providing 360-degree detection of lanes, vehicles, pedestrians, signs, etc. The design will use the company's next-gen Tegra processors plus two discrete, Pascal-based GPUs. NVIDIA is also developing a suite of software tools, libraries and modules to accelerate the development and testing of autonomous vehicles.

Volvo will be the first company to deploy the DRIVE PX 2. A public test of 100 autonomous cars using this technology is planned for Gothenburg, Sweden.

http://nvidianews.nvidia.com/news/nvidia-boosts-iq-of-self-driving-cars-with-world-s-first-in-car-artificial-intelligence-supercomputer

Thursday, December 10, 2015

Facebook Shows GPU-based System for AI

Facebook will open source its design for a GPU-based system optimized for machine learning (ML) and artificial intelligence (AI).

The hardware system, code-named "Big Sur", was developed by Facebook's engineering team to run its software capable of answering questions based on ingested stories and articles. Big Sur is Open Rack-compatible and incorporates eight high-performance GPUs of up to 300 watts each, with the flexibility to configure between multiple PCI-e topologies. It uses NVIDIA's Tesla Accelerated Computing Platform.

Facebook said it plans to open-source Big Sur and will submit the design materials to the Open Compute Project (OCP).

https://code.facebook.com/posts/1687861518126048

Tuesday, November 10, 2015

NVIDIA's Jetson Wants to be the Brain of Autonomous Robots and Drones

NVIDIA unveiled a credit-card sized module named Jetson for a new generation of smart, autonomous machines that can learn.

The NVIDIA Jetson TX1 module is an embedded computer designed to process deep neural networks -- computer software that can learn to recognize objects or interpret information. The module brings 1 teraflops of processing performance to enable autonomous devices to recognize visual data or interpret images, process conversational speech or navigate real world environments.

"Jetson TX1 will enable a new generation of incredibly capable autonomous devices," said Deepu Talla, vice president and general manager of the Tegra business at NVIDIA. "They will navigate on their own, recognize objects and faces, and become increasingly intelligent through machine learning. It will enable developers to create industry-changing products."

Key features of Jetson TX1 include:

  • GPU: 1 teraflops, 256-core Maxwell architecture-based GPU offering best-in-class performance
  • CPU: 64-bit ARM A57 CPUs
  • Video: 4K video encode and decode
  • Camera: Support for 1400 megapixels/second
  • Memory: 4GB LPDDR4; 25.6 gigabits/second
  • Storage: 16GB eMMC
  • Wi-Fi/Bluetooth: 802.11ac 2x2, Bluetooth ready
  • Networking: 1 Gigabit Ethernet
  • OS Support: Linux for Tegra
  • Size: 50mm x 87mm, slightly smaller than a credit card

http://www.nvidia.com

Sunday, August 30, 2015

NVIDIA's GRID 2.0 Virtual Desktop Doubles Density to 128 Users per Server

NVIDIA introduced its GRID 2.0 virtual desktop solution featuring greater density and performance along with support from major server vendors, including Cisco, Dell, HP and Lenovo. NVIDIA has worked closely with Citrix and VMware to bring a rich graphics experience to end-users on the industry's leading virtualization platforms.

Highlights include:


  • Doubled user density: NVIDIA GRID 2.0 doubles user density over the previous version, introduced last year, allowing up to 128 users per server. This enables enterprises to scale more cost effectively, expanding service to more employees at a lower cost per user.
  • Doubled application performance: Using the latest version of NVIDIA’s award-winning Maxwell GPU architecture, NVIDIA GRID 2.0 delivers twice the application performance as before — exceeding the performance of many native clients.
  • Blade server support: Enterprises can now run GRID-enabled virtual desktops on blade servers — not simply rack servers — from Cisco, Dell, HP and others.
  • Linux support: No longer limited to the Windows operating system, NVIDIA GRID 2.0 now enables enterprises in industries that depend on Linux applications and workflows to take advantage of graphics-accelerated virtualization.


http://www.nvidia.com

Monday, March 16, 2015

Cavium Integrates NVIDIA Tesla GPU in ThunderX ARM processor

Cavium is adding support for NVIDIA Tesla GPU accelerators in its ThunderX ARM processor family, its 64-bit ARMv8 server processor for next generation data center and cloud applications.

The ThunderX family integrates up to 48 high-performance ARMv8-A custom cores, single- and dual-socket configurations, high memory bandwidth, large memory capacity, integrated hardware accelerators, feature-rich high-bandwidth network and storage IO, fully virtualized cores and IO, and a scalable high-bandwidth, low-latency Ethernet fabric. ThunderX enables best-in-class performance per dollar and performance per watt.

Cavium said many key application and market segments will benefit directly from the combination of the NVIDIA Tesla GPU accelerators and its ThunderX ARM processors, such as high performance computing (HPC) workloads that require high levels of double precision floating point compute performance, data analytics workloads, and the integration of compute and storage.

ThunderX processors with support for NVIDIA Tesla GPU accelerators are expected to be available in Q2'2015.

"Our collaboration with NVIDIA is yet another demonstration of the Workload Optimized focus that Cavium is driving in the server market with ThunderX," said Gopal Hegde, VP/GM, Data Center Processor Group at Cavium. "NVIDIA's leadership in high-performance computing solutions for the HPC and data analytics markets is well recognized and complements Cavium's continued innovation in processors for next generation data center and cloud applications.  Our partners and customers will benefit with this collaboration as we continue to drive application optimization, performance efficiency and TCO advantage with ThunderX."

http://www.cavium.com

Thursday, September 4, 2014

NVIDIA Sues Samsung and Qualcomm over GPUs

NVIDIA filed complaints against Samsung and Qualcomm at the International Trade Commission and in the U.S. District Court in Delaware, alleging that the companies are both infringing NVIDIA GPU patents covering technology including programmable shading, unified shaders and multithreaded parallel processing.

NVIDIA claims several Samsung products are in violation, including the Galaxy Note Edge, Galaxy Note 4, Galaxy S5, Galaxy Note 3 and Galaxy S4 mobile phones; and the Galaxy Tab S, Galaxy Note Pro and Galaxy Tab 2 computer tablets. Most of these devices incorporate Qualcomm mobile processors -- including the Snapdragon S4, 400, 600, 800, 801 and 805. Others are powered by Samsung Exynos mobile chips, which incorporate ARM's Mali and Imagination Technologies' PowerVR GPU cores.

http://www.nvidia.com

Monday, August 4, 2014

NTT Com Signs NVIDIA for Hong Kong Data Hub

NVIDIA is using NTT Communications' Hong Kong data centre facilities and Arcstar Global Leased line service as its data hub across the region.

The extended service provides NVIDIA with a high-speed point-to-point Ethernet leased line with up to 10 Gbps capability, delivered over an automatically re-routable cable network. This offers its regional hub in Hong Kong dedicated, secure and reliable connectivity with its offices in Asia Pacific and across the globe. The secure private network is also backed by the Asia Submarine-cable Express (ASE) and PC-1 for an ultra-low latency path to the rest of the world.

Tony Chang, NVIDIA APAC IT Infrastructure Manager said, “We have always had a longstanding relationship with NTT Communications that has grown over the last six years. Its high quality data centre has been instrumental in helping us house our data assets, while providing reliable support for business continuity across the region. In particular, we value the role the Professional Services team has played in guiding us through the relocation and migration of IT equipment.”

http://www.ntt.com.hk


Monday, June 23, 2014

NVIDIA and Partners Develop GPU-accelerated ARM64 Servers for HPC

NVIDIA is seeing progress in leveraging its GPU accelerators in supercomputers. Multiple server vendors are now developing 64-bit ARM development systems integrating NVIDIA GPU processors for high performance computing (HPC).

The new ARM64 servers feature Applied Micro X-Gene ARM64 CPUs and NVIDIA Tesla K20 GPU accelerators. The systems can run hundreds of existing CUDA-accelerated scientific and engineering HPC applications simply by recompiling them for ARM64.

The first GPU-accelerated ARM64 development platforms will be available in July from Cirrascale Corp. and E4 Computer Engineering, with production systems expected to ship later this year. The Eurotech Group also plans to ship production systems later this year. System details include:

  • Cirrascale RM1905D - High-density two-in-one 1U server with two Tesla K20 GPU accelerators; provides high-performance, low total cost of ownership for private cloud, public cloud, HPC, and enterprise applications.
  • E4 EK003 - Production-ready, low-power 3U, dual-motherboard server appliance with two Tesla K20 GPU accelerators, designed for seismic, signal and image processing, video analytics, track analysis, web applications and MapReduce processing. 
  • Eurotech - Ultra-high density, energy efficient and modular Aurora HPC server configuration, based on proprietary Brick Technology and featuring direct hot liquid cooling.

"We aim to leverage the latest technology advances, both within and beyond the HPC market, to move science forward in entirely new ways," said Pat McCormick, senior scientist at Los Alamos National Laboratory. "We are working with NVIDIA to explore how we can unite GPU acceleration with novel technologies like ARM to drive new levels of scientific discovery and innovation."

http://nvidianews.nvidia.com/News/NVIDIA-GPUs-Open-the-Door-to-ARM64-Entry-Into-High-Performance-Computing-b52.aspx

Monday, August 19, 2013

HP's New Graphic Server Blade Virtualizes NVIDIA GPUs

HP introduced a Graphics Server Blade powered by NVIDIA GRID GPUs.  This allows up to 8 simultaneous virtual desktops to benefit from the processing performance of NVIDIA graphics processing units (GPUs).

HP said its Gen8 server with NVIDIA GRID or Multi-GPU Carrier lowers the cost per user while delivering levels of graphics performance previously unavailable for desktop virtualization users. The NVIDIA GRID K1 and K2 GPU adapters also enable multiple media-rich PC or high-end graphics users per blade by providing graphics capability to each virtual machine.

http://www8.hp.com/us/en/hp-news/press-release.html?id=1461268#.UhKNApLlZ8G

http://blogs.nvidia.com/blog/2013/08/19/blade/

Tuesday, May 21, 2013

CTIA: NVIDIA Showcases Tegra 4i with LTE-Advanced Modem


At this week's CTIA in Las Vegas, NVIDIA is showcasing its Tegra 4i mobile processor for mainstream smartphones with an integrated NVIDIA i500 LTE modem performing at 150 Mbps in the downlink.

At Mobile World Congress in February, the same demo was operating at 100 Mbps. The performance boost is achieved with a software update for NVIDIA’s software-defined radio technology.

Tegra 4i’s modem is also multi-mode -- it delivers 4G LTE Advanced and is backward compatible with LTE Cat 3, 3G, and 2G.

http://www.nvidia.com
