
Monday, September 18, 2017

Intel Capital's AI Investments top $1 billion

Intel Capital has now invested over $1 billion in companies devoted to the advancement of artificial intelligence.

In a blog post, Intel's CEO Brian Krzanich said the company is fully committed to making its silicon the "platform of choice" for AI developers. Key areas of AI development inside Intel include:

  • Intel Xeon Scalable family of processors for evolving AI workloads. Intel also offers purpose-built silicon for deep learning training, code-named “Lake Crest”
  • Intel Mobileye vision technologies for specialized use cases such as active safety and autonomous driving
  • Intel FPGAs, which can serve as programmable accelerators for deep learning inference
  • Intel Movidius low-power vision technology, which provides machine learning at the edge.



Intel Nervana Aims for AI

Intel introduced its "Nervana" platform and outlined its broad strategy for artificial intelligence (AI), encompassing a range of new products, technologies and investments from the edge to the data center.

Intel currently powers 97 percent of data center servers running AI workloads on its existing Intel Xeon processors and Intel Xeon Phi processors, along with more workload-optimized accelerators, including FPGAs (field-programmable gate arrays).

Intel said the breakthrough technology acquired from Nervana earlier this summer will be integrated into its product roadmap. Intel will test first silicon (code-named “Lake Crest”) in the first half of 2017 and will make it available to key customers later in the year. In addition, Intel announced a new product (code-named “Knights Crest”) on the roadmap that tightly integrates best-in-class Intel Xeon processors with the technology from Nervana. Lake Crest is optimized specifically for neural networks to deliver the highest performance for deep learning and offers unprecedented compute density with a high-bandwidth interconnect.

Tuesday, August 29, 2017

Intel Xeon workstation processors

Intel unveiled its latest Xeon processors for workstations.

Intel said its new Xeon W processor delivers optimized performance for traditional workstation professionals by combining mainstream performance, enhanced memory capabilities, and hardware-enhanced security and reliability features. The Intel Xeon W processor features up to 18 cores and up to 36 threads, with an Intel Turbo Boost Technology frequency up to 4.5 GHz. Mainstream workstations will experience up to a 1.87x boost in performance compared to a 4-year-old system and up to 1.38x higher performance compared to the previous generation.

This builds on the new family of Intel Xeon Scalable processors, announced in July, that offer up to 56 cores, up to 112 threads and an Intel Turbo Boost Technology frequency up to 4.2 GHz.

https://newsroom.intel.com/news/intel-xeon-scalable-processors-accelerate-creation-innovation-next-generation-workstations/

Monday, August 21, 2017

Intel Launches 8th gen Core processors

Intel officially introduced its 8th Gen Intel Core processors, including a range of mobile processors designed specifically for sleek, thin-and-light notebooks and 2-in-1s.

The new mobile processors promise a performance boost of up to 40 percent gen over gen, or 2x the performance compared with a 5-year-old machine. The processors feature a new quad-core configuration, power-efficient microarchitecture, advanced process technology and a huge range of silicon optimizations.

  • Intel UHD Graphics are integrated into these next-generation processors. A media engine, with power-efficient VP9 and HEVC 10-bit hardware acceleration, means great battery life, even with 4K UHD viewing and content creation.
  • I/O in the 8th generation Intel Core processor U-series includes PCIe 3.0, delivering data transfer rates of 8 GT/s versus 5 GT/s with PCIe 2.0 (see the quick calculation after this list).
  • The latest Intel Rapid Storage Technology supports NVMe PCIe x4 Solid State Drives, and it is capable of utilizing PCIe 3.0 speed. 
  • Thunderbolt 3 technology (USB-C) supports up to 40 Gbps transfer speeds, two 4K 60 Hz displays, system charging up to 100W, external graphics, and Thunderbolt networking.
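For a rough sense of what those transfer rates mean in usable bandwidth, the sketch below applies the standard PCIe line encodings (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0) to the quoted 5 GT/s and 8 GT/s rates. The x4 lane count is an illustrative assumption matching a typical NVMe SSD link, not a figure from the announcement.

  # Rough usable throughput for PCIe 2.0 vs 3.0 after line-coding overhead.
  def pcie_throughput_gbps(gt_per_s, payload_bits, total_bits, lanes=1):
      return gt_per_s * (payload_bits / total_bits) * lanes

  # PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b.
  for name, rate, payload, total in [("PCIe 2.0", 5.0, 8, 10),
                                     ("PCIe 3.0", 8.0, 128, 130)]:
      per_lane = pcie_throughput_gbps(rate, payload, total)
      x4_link = pcie_throughput_gbps(rate, payload, total, lanes=4)
      print(f"{name}: {per_lane:.2f} Gbit/s per lane, {x4_link:.2f} Gbit/s on a x4 link")

On these assumptions a x4 PCIe 3.0 link carries roughly 31.5 Gbit/s of payload versus about 16 Gbit/s for PCIe 2.0, before protocol overheads.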

The first wave of 8th Gen Intel Core processor-powered devices featuring i5/i7 processors will come to market beginning in September.

http://www.intel.com


Thursday, July 20, 2017

Microsemi integrates Ethernet MAC to deliver FPGA 10 GbE

Microsemi, a provider of advanced semiconductor solutions, and Tamba Networks, a developer of connectivity intellectual property (IP) cores, have announced a collaboration through which they will incorporate Tamba Networks' Ethernet media access controller (MAC) into Microsemi's latest cost-optimised, mid-range PolarFire FPGA to offer a low power FPGA-based 10 Gigabit Ethernet solution.

Tamba Networks' Ethernet MAC is claimed to occupy half the area and to deliver twice the speed of competing Ethernet MACs, and can therefore offer Microsemi customers a lower cost solution based on its compact size combined with the security and advanced capabilities of PolarFire FPGAs.

As part of the collaboration with Tamba Networks, Microsemi has adopted the company's Interlaken and 10/40 Gigabit Ethernet MAC soft cores as key building blocks to evaluate and enhance PolarFire FPGAs' fabric architecture, with 10 and 40 Gbit/s datapaths running at 160 MHz and 320 MHz.

The Tamba Networks cores are designed to offer low gate count and latency along with flexibility. When combined with Microsemi's low power fabric and transceiver, the 10 Gigabit Ethernet soft core enables a 10 Gbit/s datapath that is claimed to offer 50% lower power consumption. Microsemi noted that the device is also available as a direct core from its IP library.

Microsemi stated that Tamba Networks was involved in the development of the PolarFire transceiver physical coding sublayer (PCS), providing the 64b66b/64b67b encoding modules used for Ethernet and Interlaken, and also helped modify the 64b66b encoder to operate with deterministic latency, providing support for common public radio interface (CPRI) options 7b, 8 and 9.

Microsemi's PolarFire FPGAs also target applications in the communications market, including access network, network edge, metro (1 to 40 Gbit/s), mobile infrastructure, wireless backhaul, smart optical modules and video broadcasting.


Microsemi noted that its PolarFire FPGAs are particularly suited to the access network infrastructure applications, where OEMs wish to deliver more bandwidth to customers while reducing costs.

Thursday, July 13, 2017

Intel debuts its Xeon Scalable platform - Part 2

Intel described the launch of its Xeon Scalable Platform as the biggest data centre announcement in the past 10 years. Wall Street's reaction was fairly muted, perhaps because Intel has already captured nearly the entire market for server CPUs and there was not much to suggest that any innovations in the chip architecture would significantly expand the overall market or the company's margins. However, a broad ecosystem of cloud providers, telecom carriers, server vendors, network equipment suppliers, storage specialists and systems integrators were lined-up for the big Xeon unveiling with press releases of their own. As an industry milestone, it is certain that the next wave of cloud data centre infrastructure will be built on Xeon Scalable processors.

Highlights of Intel's Xeon ecosystem momentum

Amazon Web Services

AWS has listed Intel as a strategic partner for over a decade. It is certainly a major customer. It’s been claimed that every day AWS adds enough servers to power a Fortune 500 enterprise. AWS launched a C5 instance family in November 2016 powered by a custom version of the Xeon Scalable Platform with hardware acceleration capability. Amazon EC2 C5 instances based on Xeon Scalable processors with AVX-512 now offer up to 72 vCPUs - twice that of previous generation compute-optimised instances - and 144 GB of memory. AWS also said it is working with Intel to optimise deep learning engines. AWS reports over a 100x boost in inference performance and is also using the new Xeon for high performance computing (HPC) clusters supporting thousands or tens of thousands of EC2 instances. AWS provided a video testimonial for the launch event.

It should also be noted that AWS is now offering NVIDIA GPU instances. Like the other cloud giants, AWS will also build its own data centre gear whenever this is the fastest or cheapest path to deployment. This includes routers based on custom Broadcom silicon and bespoke network interface cards based on an in-house Annapurna ASIC. At its scale, AWS would surely consider all silicon options for its core platform. Intel and AWS seem to be working well together.

AT&T

The guest of honour at the Xeon launch event was John Donovan, AT&T's chief strategy officer and group president, technology and operations. AT&T has been running the new Xeon processors for several months in its production network. Donovan reported a 25% boost in performance - good but maybe not overwhelmingly so. Still, AT&T is moving all its network functions into a cloud based on x86. AT&T said it has a strong collaborative relationship with Intel. Total cost of ownership for the entire network improves with each generation of Xeons.

Google

The first public cloud to deploy the new Xeon Scalable Platform processors is Google. End customers are reporting consistent performance improvements, in some cases of 30 to 50%. When the applications are tuned for the AVX-512 instructions, customers are reporting more than a 100% performance improvement.

Microsoft

The new Intel Xeon Scalable Platform processors will be the base for Microsoft Azure. Earlier this year at the Open Compute Project (OCP) Summit in San Jose, Microsoft announced Project Olympus, a next generation hyperscale cloud hardware design and a new model for open source hardware development with the OCP community. Rather than contributing a fully-completed design to OCP, with this new approach Microsoft will contribute its next generation cloud hardware designs when they are approximately 50% complete. The building blocks that Project Olympus will contribute consist of a new universal motherboard, high-availability power supply with included batteries, 1U/2U server chassis, high-density storage expansion, a new universal rack power distribution unit (PDU) for global data centre interoperability, and a standards compliant rack management card.

Although some saw this announcement as a potential opening for ARM processors in Azure, in a customer testimonial video this week Microsoft confirmed that Project Olympus is based on Xeon Scalable Platform processors and Intel FPGAs. Microsoft said this combination of Xeon processors, FPGAs and high-performance storage will be a powerful solution for AI. In fact, Azure anticipates the world's largest deployment of FPGAs to power the largest neural network to date.

Telefónica

In Spain, Intel has been collaborating with Telefónica since 2008. One big focus of the development work is network functions virtualisation (NFV) to simplify the network. Telefónica expects the Intel Xeon Scalable Platform processors to play a key role in its 5G network. This means that Telefónica is fully committed to x86 as the basis of its infrastructure. The new processors, which are currently in Telefónica’s labs, have been delivering a performance boost of approximately 67% over the previous Xeon E5 2600 chips.

6WIND

6WIND reports that its software running on Xeon Scalable Processors delivers a significant boost for IPsec. Specifically, 6WIND Turbo IPsec performance tests on Xeon Platinum servers demonstrate a 50% increase in processing power for common applications such as multi-site VPNs and backhaul security gateways.

Accton

Accton announced a combination server-switch hardware appliance based on dual-socket Intel Xeon Scalable processors, supporting up to 28 cores (56 threads) per socket. The switch system includes 48 SFP28 (25 GbE) and 6 QSFP28 (100 GbE) network ports, all contained within a single 1RU chassis form-factor. Accton said its Intel Xeon Purley platform increases CPU capacity and performance for virtual machine consolidation and density, as well as boosting memory bandwidth (six channels).

Advantech

Advantech has introduced two new platforms: a 2U dual socket network appliance and a single socket, short depth 1U server, both based on the new Intel Xeon Scalable Processors. The scalability of the dual socket appliance increases significantly, with up to 12 more cores per CPU than on the previous generation appliance. The company noted performance advances in the throughput of encrypted packets using the latest Intel QuickAssist Technology, now available in the chipset, to perform IPsec encryption and decryption. During tests at Intel Labs, a server configured with an Intel Xeon Platinum Processor 8160 showed up to 1.32x higher performance, demonstrating what both platforms will be able to deliver to help meet demands for higher encrypted data throughput and VPN density while freeing up slots for more I/O and offload.

Cisco

Cisco launched a new generation of servers and software based on Intel's latest Xeon Scalable Platform processors and a unique Cisco system-level vision for the future of IT. The Cisco Unified Computing System (Cisco UCS) M5 generation seeks to extend the power and simplicity of unified computing for data-intensive workloads, applications at the edge, and the next generation of distributed application architectures. The latest UCS Director 6.5 management software allows data centre professionals to complete 80% of operational tasks from a single console. A Workload Optimization Manager, powered by Turbonomic and which is deeply integrated into the UCS hardware, uses intent-based analytics to continuously match workload demand to infrastructure supply across on premise and multi-cloud environments. The company says the Cisco UCS can reduce administration and management costs by up to 63% while accelerating the delivery of new application services by up to 83%.

Dell EMC

Dell EMC launched the 14th generation of its PowerEdge servers featuring the new processors and a cyber-resilient architecture with a deep root of trust, including cryptographically trusted booting.

Ericsson

Highlighting the new Intel Xeon Scalable processors, Ericsson published a whitepaper 'Industrialising Network Functions Virtualisation with Software-Defined Infrastructure'. Topics discussed include Data Plane Development Kit (DPDK), which is a set of software libraries for accelerating packet processing workloads on commodity off-the-shelf hardware platforms.

The Fast Data Project

FD.io (pronounced "Fido"), a collaborative open source project that aims to establish a high-performance IO services framework for dynamic computing environments, announced significant performance gains reaching terabit levels at multimillion route scale. Architectural improvements in the latest Xeon Scalable processors, such as increased PCIe bandwidth, allow FD.io to double its performance at scale without modification to the software. FD.io said it is the only vSwitch for which performance scaling is IO bound rather than CPU bound.

Fujitsu

Fujitsu launched a multi-node server that combines the density of blade-like servers with the simplicity of rack-based systems. The newly-refreshed range of dual- and quad-socket PRIMERGY servers and octo-socket PRIMEQUEST business critical server systems are designed for the new Xeon Scalable processors. Technical features include enhanced DDR4 memory modules and up to 6 TB of memory capacity in the quad-socket PRIMERGY server. Fujitsu said its PRIMEQUEST server pushes the SAP HANA performance envelope, supporting in-memory databases of up to 12 TB.

Nokia

Nokia introduced a refreshed AirFrame Data Center solution based on the Xeon Scalable Processors. Nokia said it has worked closely with Intel over the past year during the Intel Xeon Scalable processor development process and has just completed its own benchmarking of the new design. The results show a performance improvement over the previous generation Intel Xeon processor E5-26xxv4, with an average gain of 40% in processor rate performance.

Radisys

Radisys announced support for the new Xeon Scalable processors in its DCEngine, which helps communication service providers to transform their central offices into hyperscale SDN-enabled virtualised data centres. Radisys said its DCEngine’s management software suite, delivered with Intel Rack Scale Design, simplifies data centre resource management by enabling an open management framework with dynamic resource allocation, intelligent policy profiling and real-time, granular insight into compute, storage and network resources. The company estimates that CSPs leveraging DCEngine in data centres can expect significant improvements in total cost of ownership through reduced real estate footprint by 55%, which can result in up to 35% cost savings over a period of three years, as well as substantial reduction in costs associated with power consumption, hardware and software support.

ZTE

ZTE has launched a 2-socket cloud application rack server R5300 G4, 4-socket high-reliability rack server R8500 G4, hyperconverged blade server E9000 and software-defined storage KS10000.

Tuesday, July 11, 2017

Intel Debuts its Xeon Scalable Platform

In what it called its “biggest data center launch in a decade”, Intel officially unveiled its Xeon Scalable platform, a new line of server CPUs based on the processor architecture codenamed Skylake and specifically designed for evolving data center and network infrastructure.

The new silicon, which Intel has been refining for the past five years, promises the highest core- and system-level performance yet, averaging 1.65x the performance of the prior generation. First shipments went out several months ago, and the processors are now in commercial use at over 30 customers worldwide, including AT&T, Amazon Web Services and Google. Intel says every aspect of Xeon has been improved or redesigned: brand new core, cache, on-die interconnects, memory controller and hardware accelerators.

Intel’s new processors scale up to 28 cores and will be offered in four classes: Platinum, Gold, Silver, and Bronze. The design boasts six memory channels, versus four in the previous generation, for memory-intensive workloads. Up to three Intel Ultra Path Interconnect (Intel UPI) channels increase the scalability of the platform to as many as eight sockets.
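As a rough illustration of what the extra memory channels mean, the sketch below computes theoretical peak memory bandwidth per socket. The six-versus-four channel counts are from the announcement; the DDR4-2666 transfer rate, and applying it to both generations, are assumptions made purely for a like-for-like comparison.

  # Back-of-the-envelope peak memory bandwidth per socket. Channel counts are
  # from the article; DDR4-2666 (2666 MT/s, 8 bytes per transfer) is assumed.
  def peak_mem_bw_gbs(channels, mt_per_s=2666, bytes_per_transfer=8):
      return channels * mt_per_s * bytes_per_transfer / 1000.0  # GB/s

  prev_gen = peak_mem_bw_gbs(channels=4)   # previous generation: 4 channels
  xeon_sp = peak_mem_bw_gbs(channels=6)    # Xeon Scalable: 6 channels
  print(f"4 channels: {prev_gen:.0f} GB/s; 6 channels: {xeon_sp:.0f} GB/s "
        f"({xeon_sp / prev_gen:.1f}x)")

On those assumptions the move from four to six channels is a straight 1.5x increase in peak bandwidth, before any gains from the redesigned memory controller.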

Intel claims 4.2X greater VM capacity than its previous generation and a 65% lower total cost of ownership compared with a 4-year-old server; potentially, only one quarter the number of servers would be needed. For communication service providers, the claim is that the new Xeon Gold will deliver a 2.7X performance boost for DPDK L3 forwarding applications over a 4-year-old server.



Key innovations in Xeon Scalable Platform

  • Intel Mesh on-chip interconnect topology provides direct data paths with lower latency and high bandwidth among additional cores, memory, and I/O controllers. The Mesh architecture, which replaces a previous ring interconnect design, aligns cores, on-chip cache banks, memory controllers, and I/O controllers, which are organized in rows and columns, with wires and switches connecting them at each intersection to allow for turns. Intel said this new design yields improved performance and greater energy efficiency.

    More specifically, in a 28-core Intel Xeon Scalable processor, the Last Level Cache (LLC), six memory channels, and 48 PCIe lanes are shared among all the cores, giving access to large resources across the entire die.
  • Intel Advanced Vector Extensions 512 (Intel AVX-512), which delivers ultra-wide vector processing capabilities to boost specific workload performance, now offers double the flops per clock cycle compared to the previous generation. Compared with Intel AVX2, Intel AVX-512 boosts performance and throughput for computational tasks such as modeling and simulation, data analytics and machine learning, data compression, visualization, and digital content creation (a rough peak-FLOPS calculation follows this list).
  • Intel Omni-Path Architecture (Intel OPA) is the high-bandwidth and low-latency fabric that Intel has been talking about for some time. It optimizes HPC clusters, and is available as an integrated extension for the Intel Xeon Scalable platform. Intel said Omni-Path now scales to tens of thousands of nodes. The processors can also be matched with the new Intel Optane SSDs.
  • Intel QuickAssist Technology (Intel QAT) provides hardware acceleration for compute-intensive workloads, such as cryptography and data compression, by offloading the functions to a specialized logic engine (integrated into the chipset). This frees the processor for other workload operations. Encryption can be applied to data at rest, in-flight, or data in use.  Intel claims that performance is degraded by under 1 percent when encryption is turned on. This function used to be off-chip.
  • Enhanced Intel Run Sure Technology, which aims to reduce server downtime, includes reliability, availability, and serviceability (RAS) features. New capabilities include Local Machine Check Exception based Recovery (or Enhanced Machine Check Architecture Recovery Gen 3) for protecting critical data.
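To make the "double the flops per clock" claim concrete, the sketch below computes peak double-precision throughput per core for AVX2 versus AVX-512. The vector widths are standard, but the assumption of two FMA units per core and the 2.5 GHz example clock are illustrative figures, not numbers from the announcement.

  # Rough peak double-precision FLOPS per core, AVX2 vs AVX-512.
  # Assumes 2 FMA units per core (an assumption matching top-end SKUs) and
  # counts a fused multiply-add as 2 FLOPs.
  def peak_dp_gflops(freq_ghz, vector_bits, fma_units=2):
      doubles_per_vector = vector_bits // 64
      flops_per_cycle = doubles_per_vector * 2 * fma_units  # FMA = 2 FLOPs
      return freq_ghz * flops_per_cycle

  freq = 2.5  # GHz, illustrative sustained clock under heavy vector load
  print(f"AVX2   : {peak_dp_gflops(freq, 256):.0f} GFLOPS per core")  # 40
  print(f"AVX-512: {peak_dp_gflops(freq, 512):.0f} GFLOPS per core")  # 80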

Aiming for the megatrends

In a webcast presentation, Navin Shenoy, Exec Vice President & General Manager, Intel’s Data Center Group, said that as traditional industries turn to technology to reinvent themselves, there are three megatrends that Intel is pursuing: Cloud, AI & Analytics, and 5G.  The new Xeon Scalable Platform addresses the performance, security and agility challenges for each of these megatrends.

AT&T’s John Donovan testifies, performance boost about 30%

During the big Xeon Scalable unveiling, Intel invited AT&T’s John Donovan on stage to talk about the new processors. AT&T gained access to the new processors a few months ago and has already deployed Xeon Scalable servers which are carrying production traffic. Donovan reported about a 30% performance boost for its applications over the previous Xeon generation. The net effect, he said, should be a 25% reduction in the number of servers it will need to deploy. Intel has been seeding the processors with other top customers as well.

This 30% performance boost is certainly good, but it is probably a stretch to call this upgrade “the biggest data center announcement in a decade.” For other applications, perhaps the claim is better justified. One such area is machine learning, which Intel identifies as one of the key megatrends for the industry. There are some interesting developments for Xeons in this domain.

A strong market position

Google Cloud Platform (GCP) is the first public cloud to put the Intel Xeon Scalable Platform into commercial operation. A partnership between Google and Intel was announced earlier this year at a Google event where the companies said they are collaborating in other areas as well, including hybrid cloud orchestration, security, machine and deep learning, and IoT edge-to-cloud solutions. Intel is also a backer of Google’s TensorFlow and Kubernetes open source initiatives.

In May 2016, Google announced the development of a custom ASIC for TensorFlow processing. These TPUs are already in service in Google data centres where they "deliver an order of magnitude better-optimized performance per watt for machine learning." For Intel, this poses a long-term strategic threat. With this announcement, Intel said Xeon’s onboard Intel Advanced Vector Extensions 512 (Intel AVX-512) can increase machine learning inference performance by over 100x – a huge boost for AI developers.

The data centre server market is currently dominated by Intel.  Over the years, there have been several attempts by ARM to gain at least a toe-hold of market share in data centre servers, but so far, the impact has been very limited.  AMD recently announced its EPYC processor for data centre servers, but no shipment date has been stated and the current market position is zero. NVIDIA has been gaining traction in AI applications as well as in public cloud acceleration for GPU intensive applications – but these are specialized use cases.

Monday, July 10, 2017

Baidu deploys Xilinx FPGAs for cloud acceleration

Xilinx announced that Baidu has deployed Xilinx FPGA-based application acceleration services into its public cloud, specifically for the Baidu FPGA Cloud Server, a new service that leverages Xilinx Kintex FPGAs, tools and the software required for hardware-accelerated data centre applications such as machine learning and data security.

The Baidu FPGA Cloud Server provides a complete FPGA-based hardware and software development environment, including hardware and software design examples, and is designed to help users quickly develop and migrate applications with reduced development costs.

The Baidu service is based on each FPGA instance serving as a dedicated acceleration platform that is not shared between instances or users. The design examples provided cover services including deep learning acceleration, encryption and decryption.

Xilinx claims that FPGA-enabled servers can deliver a 10x to 80x performance per watt advantage compared to CPU-only servers. In addition, as they are dynamically reconfigurable, Xilinx FPGAs can support a range of workloads, including machine learning, data analytics, security and video processing.



  • Separately, Baidu announced a partnership with Microsoft for its new open source autonomous driving platform, Apollo. Baidu unveiled Apollo in April, featuring cloud services, software and reference hardware/vehicle platforms, and expects the technology will be running on roads by late 2020.
  • In addition, Conexant, a provider of audio and voice technology solutions, announced it was collaborating with Baidu to release development kits and reference designs for device makers to develop far-field voice-enabled artificial intelligence (AI) devices running on Baidu's DuerOS platform. The development kits and reference designs will feature Conexant's CX20924 4-microphone and CX20921 2-microphone voice input processing solutions and DuerOS, a conversation-based AI system that enables access to a voice-activated digital assistant for mobile phones, TVs and other devices.

Friday, July 7, 2017

IDT unveils clock generator/jitter attenuator

Integrated Device Technology (IDT) announced a new highly-programmable clock generator and jitter attenuator IC offering less than 200 fs of phase noise and designed to provide system design margin for 10 Gbit/s interfaces in wireline and wireless communication networks.

IDT noted that the increased phase noise margin can lower system design constraints and help engineers to minimise bit error rates (BER) while also reducing system costs.

The new IDT 8T49N240 product is the latest member of IDT's third-generation Universal Frequency Translator (UFT) family. The clock generator and jitter attenuator device is able to produce most common output frequencies from almost any input frequency and targets 10 Gbit/s or multi-lane 40/100 Gbit/s timing applications where 300 fs of phase noise is typically the maximum acceptable level permitted at physical ports. The device is also suitable for 25/28 Gbit/s interfaces.

The 200 fs phase noise specification of the 8T49N240 product therefore provides noise margin to enable engineers to both simplify their clock tree designs and utilise lower cost PCBs.
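To put those femtosecond figures in context, the sketch below expresses them as a fraction of one unit interval (UI) at 10 Gbit/s. The 200 fs and 300 fs numbers come from the article; the rest is just unit conversion.

  # Express the quoted jitter figures as a fraction of a 10 Gbit/s unit interval.
  line_rate_bps = 10e9
  ui_seconds = 1.0 / line_rate_bps          # one UI = 100 ps at 10 Gbit/s

  for label, jitter_fs in [("8T49N240 spec", 200), ("typical port limit", 300)]:
      jitter_s = jitter_fs * 1e-15
      print(f"{label}: {jitter_fs} fs = {jitter_s / ui_seconds * 100:.2f}% of one UI")

Even the 300 fs port limit amounts to only 0.3% of the 100 ps unit interval, which is why the additional margin translates directly into relaxed constraints elsewhere in the clock tree.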

The 8T49N240 is complemented by IDT's proven Timing Commander software, a free, intuitive program designed to allow users to configure the device by clicking on blocks, entering desired values and sending the configuration to the device. IDT also offers a web-based tool that allows customers to quickly generate custom part numbers to match their specific configurations.


The 8T49N240 product features a 6 x 6 mm package footprint that requires less PCB area than other comparable solutions. The 8T49N240 and evaluation boards are available immediately.


Thursday, June 29, 2017

Cavium unveils FastLinQ 41000 10/25/40/50 GbE NIC

Cavium announced the introduction of the FastLinQ 41000 Series products, its low power, second-generation 10/25/40/50 Gigabit Ethernet NICs, claimed to be the only such adapters to feature Universal RDMA.

Cavium's FastLinQ 41000 Series devices are designed to deliver advanced networking for cloud and telco architectures; the products are available immediately from Cavium and shortly due to be available from Tier-1 OEMs/ODMs in standard, mezzanine, LOM and OCP form factors.

The FastLinQ QL41000 family of standards-compliant 25/50 Gigabit Ethernet NICs offer support for concurrent RoCE, RoCEv2 and iWARP - Universal RDMA. The FastLinQ adapters, coupled with server and networking platforms, are designed to enable enterprise data centres to optimise infrastructure costs and increase virtual machine density leveraging technologies such as concurrent SR-IOV and NIC Partitioning (NPAR) that provide acceleration and QoS for tenant workloads and infrastructure traffic.

The new FastLinQ adapters also support network function virtualisation with enhanced small packet performance via integration into DPDK and OpenStack, enabling cloud and telcos/NFV customers to deploy, manage and accelerate demanding artificial intelligence, big data, CDN and machine learning workloads.

For telco and NFV applications, the products provide improved small packet performance with line rate packets per second for 10/25 Gigabit Ethernet, MPLSoUDP offload and integration with DPDK and OpenStack using the Mirantis FUEL plug-in. This allows telcos and NFV application vendors to deploy, manage and accelerate demanding NFV workloads.

Additionally, integrated storage acceleration and offloads such as NVMe-oF, NVMe-Direct, iSCSI, iSER and FCoE enable upgrades from existing storage paradigms to next generation NVMe and persistent memory semantics.

The products also offer zero-touch automatic speed and FEC selection via Cavium's FastLinQ SmartAN technology, which is designed to significantly reduce interoperability challenges in physical layer networks.

Further features of the FastLinQ 41000 Series include:

1.         10/25/40/50 Gigabit Ethernet connectivity across standard and OCP form factors.

2.         Stateless offloads for VxLAN, NVGRE and GENEVE.

3.         SmartAN to provide seamless 10/25 Gigabit Ethernet interoperability.

4.         Storage protocol offloads for iSCSI, FCoE, iSER, NVMe-oF and SMB Direct.

5.         Management across heterogeneous platforms with QConvergeConsole GUI and CLI.


Regarding the new products, Martin Hull, senior director product management at Arista Networks, said, "Arista… has partnered with Cavium to ensure availability of tested and interoperable solutions for hyperscale data centres… Cavium's FastLinQ 41000 Series of NICs and Arista’s portfolio of 25 Gbit/s leaf and spine systems deliver backward compatibility and investment protection with standards compliance".


Friday, June 23, 2017

France-based Kalray raises $26m for Manycore Silicon

France-based Kalray, a fabless developer of high-performance, low-power 'manycore' microprocessors:

1.         Founded as a spin-off by technology investment firm CEA Investissement in 2008 and developer of the patented massively parallel manycore architecture, MPPA (massively parallel processor array).

2.         Offering manycore processors designed to enable high performance computing with low power consumption and low latency targeting embedded applications including autonomous vehicles and acceleration in data centres.
Has announced the completion of a new round of funding totalling $26 million, led by new investor Safran, with participation from Pengpai (another new investor, based in Asia), ACE Management, CEA Investissement, EUREKAP!, Héléa Financière and INOCAP Gestion. Kalray has raised a total of over $65 million in capital and public funding from investors including Bpifrance.

Kalray stated that the new funding round will be used to accelerate the commercial exploitation of its existing solutions and to begin the development of the MPPA Coolidge, its 3rd generation of microprocessors, which is scheduled to be released in 2018. It also plans to expand its team, in particular its engineering team in Grenoble, and to strengthen its commercial network internationally. Leveraging a fabless model, Kalray has partnered with major chip company TSMC for production of its solution.

The company noted that since its spin-off from CEA in 2008 it has developed the massively parallel manycore architecture for its microprocessors that is protected by over 20 international patents. The MPPA technology is designed to increase processors' real-time processing abilities while maintaining low power consumption.

Kalray's microprocessors are utilised in two key markets - critical embedded applications (such as aeronautics/defence and autonomous vehicles), and data centres (for storage acceleration and high-speed networking).


The company stated that it is expanding its international presence, and now has 65 employees, distributed across its home base in Grenoble, France and its North American operation in Los Altos, California. Kalray also has an office in Tokyo, Japan.


Wednesday, June 14, 2017

Nokia Unveils its Next Gen IP Routing Engine

Nokia unveiled its fourth-generation network processing silicon, designed to power the first petabit-class core IP routers.

The new FP4 silicon, which comes six years after the preceding FP3 chipset was announced, offers 2.4 Tb/s half-duplex capacity, or 6X more capacity than the current generation 400 Gb/s FP3 chipset. The FP4 will support full terabit IP flows. All conventional routing capabilities are included. Deep classification capabilities include enhanced packet intelligence and control, policy controls, telemetry, and security.


The FP4 could be used to provide an in-field upgrade to Nokia’s current line of core routers and carrier switches. It will also be used to power a new family of 7750 SR-s series routers designed for single-node, cloud scale density. In terms of specs, the SR-s boasts a 144 Tb/s configuration supporting port densities of up to 144 future Terabit links, 288 400G ports, or 1,440 100GE ports. Absolute capacity could be doubled for a maximum 288 Tb/s configuration. It runs the same software as the company’s widely-deployed systems. The first 7750 SR-s boxes are already running in Nokia’s labs. First commercial shipments are expected in Q4.

Nokia is also introducing a chassis extension option to push its router into petabit territory. Without using the switching shelf concept employed in the multi-chassis designs of its competitors, Nokia is offering the means to integrate up to six of its 7750 SR-s routers into a single system. This results in 576 Tb/s of capacity, enough for densities of up to 2,880 100GE ports or 720 400G ports.

https://networks.nokia.com/ip-networks-reimagined

Broadcom Brings Programmable Packet Processing to Trident 3 Switching Silicon

Broadcom unveiled a new generation of its widely-deployed Trident switching silicon for data center, enterprise, and service provider networks.

The new StrataXGS Trident 3 switch family, which is aimed at networks transitioning to high density 10/25/100G Ethernet, is manufactured in 16nm and designed to support fully programmable packet processing, while achieving significant cost and power efficiency. It builds on Broadcom's widely deployed StrataXGS Trident and Tomahawk switch products by offering fully programmable, line-rate switching. It supports new protocol parsing, processing, and editing for Service Function Chaining, Network Virtualization, and SDN. It offers programmable support for new switch instrumentation capabilities such as in-band and out-of-band network telemetry. The StrataXGS Trident 3 also retains complete functional compatibility with StrataXGS Trident 2 and Trident 2+ based networks, which were widely adopted by network equipment manufacturers.


"The innovation in our StrataXGS Trident 3 Series is in delivering a fully programmable switching pipeline while maintaining backwards compatibility to the existing install base of StrataXGS Trident and Trident 2 based networks," said Ram Velaga, senior vice president and general manager, Switch Products at Broadcom. "Rather than a blank slate, our customers want a scalable, bulletproof network data plane that is reprogrammable to address future requirements, while continuing to aggressively drive down Ethernet cost and power. With Trident 3, we’ve uniquely delivered that solution. Our customers can leverage a single development to yield a complete line of programmable switching platforms, with the same rich feature set extending all the way from the service provider edge, to the data center, converged campus core, and wiring closet.”

Broadcom said the FleXGS architecture in Trident 3 comprises new programmable parsing, lookup, and editing engines with associated reconfigurable databases. The engines are dimensioned and arrayed to maximize parallelism, performance, functional capacity and area/power efficiency to best address the diverse and concurrent needs of today’s evolving networks. The pipeline can be programmed to handle software-defined network virtualization and service chaining protocols, including VXLAN, GPE, NSH, Geneve, MPLS, MPLS over GRE, MPLS over UDP, GUE, Identifier Locator Addressing (ILA) and PPPoE, among others.
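As a plain-Python illustration of one of the overlay headers such a programmable parser has to recognise, the sketch below decodes the standard 8-byte VXLAN header (flags plus 24-bit VNI) defined in RFC 7348. It is purely illustrative and says nothing about how the Trident 3 pipeline itself is programmed, which is done through Broadcom's own tooling.

  import struct

  # Decode the 8-byte VXLAN header that follows the outer UDP header (RFC 7348).
  def parse_vxlan(header: bytes):
      flags, _reserved1, last_word = struct.unpack("!B3sI", header[:8])
      vni = last_word >> 8               # VNI occupies the top 24 bits of the last word
      vni_valid = bool(flags & 0x08)     # I flag: the VNI field is valid
      return {"vni_valid": vni_valid, "vni": vni}

  # Example: flags = 0x08, VNI = 5000
  sample = bytes([0x08, 0, 0, 0]) + (5000 << 8).to_bytes(4, "big")
  print(parse_vxlan(sample))             # {'vni_valid': True, 'vni': 5000}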

StrataXGS Trident 3 Switch Series Key Features

  • High density 1/2.5/5/10/25/40/50/100GbE port connectivity using best-in-class integrated 10/25Gbps NRZ SerDes
  • Example single-chip platforms and line cards include spine and converged campus core (32x100GbE), 25/100GbE Top-of-Rack (48x25GbE + 8x100GbE) and 10/100GbE Top-of-Rack (48x10GbE + 6x100GbE)
  • 32MB on-chip, 100% fully shared packet buffer delivers up to 8X higher network burst absorption and congestion avoidance compared to previous generations
  • Large, programmable on-chip forwarding databases for L2 switching, L3 routing, label switching, and overlay forwarding
  • 3X increased ACL scale to support evolving policy/security requirements
  • PCIe Gen3 x4 host CPU interface with on-chip accelerators improves control-plane update and boot performance by up to 5X
  • Programmable support for enhanced network telemetry, including per-packet timestamping, Flow Tracker, microburst detection, latency/drop monitor, Active-probe-based in-band network telemetry, and in-band OAM processing; integrated with open-source BroadView v2 telemetry agent and analytics software
  • Dynamic, State-Based Flow Distribution provides systematic and adaptive reduction in link congestion and traffic imbalances in large-scale Layer3/ECMP leaf-spine networks
  • Adaptive Routing for dynamic traffic engineering in non-Clos topologies
  • Full feature compatibility with previous generation Trident 2 and Trident 2+ devices

The first two members of the StrataXGS Trident 3 family are currently sampling: BCM56870 (3.2 Tbps) and BCM56873 (2.0 Tbps).

http://www.broadcom.com

Wednesday, June 7, 2017

Alibaba, AT&T, Baidu, Tencent adopt Barefoot forwarding plane

Barefoot Networks, a provider of advanced, high speed switching technology, announced significant market momentum driven by growing demand for its programmable forwarding plane technology.

Barefoot's 6.5 Tbit/s Tofino switch, which is claimed to be the fastest P4-programmable switch chip, has been sampling to customers since the fourth quarter of 2016. The company noted that its technology is being adopted by large enterprises and telecommunications providers to increase network performance and efficiency through leveraging programmable forwarding plane technology.

Barefoot stated that it has recently worked with AT&T and SnapRoute to deliver what it believes is the first real-time path and latency visualisation. Utilising Tofino and In-band Network Telemetry (INT), AT&T was able to gain deep insight into the network down to packet level for the first time, helping to address bottlenecks caused by path or latency variation.
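The sketch below illustrates only the arithmetic behind that kind of visualisation: each hop on the path contributes metadata (here, ingress and egress timestamps), and a collector turns the stack into per-hop and cumulative latency. The field names and values are invented for illustration and are not the INT wire format.

  # Toy model of turning per-hop INT metadata into latency figures.
  hops = [
      {"switch": "leaf-1",  "ingress_ns": 1_000,  "egress_ns": 1_850},
      {"switch": "spine-3", "ingress_ns": 9_400,  "egress_ns": 13_900},
      {"switch": "leaf-7",  "ingress_ns": 21_000, "egress_ns": 21_700},
  ]

  total_ns = 0
  for hop in hops:
      hop_ns = hop["egress_ns"] - hop["ingress_ns"]
      total_ns += hop_ns
      print(f'{hop["switch"]}: {hop_ns} ns spent in-switch')

  print(f"sum of in-switch latencies: {total_ns} ns (link propagation delay is separate)")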

Barefoot noted it took 6 weeks to develop the visualisation capability before it was deployed into AT&T's production environment carrying live customer traffic over a Washington DC to San Francisco link.

In addition, major Internet companies Alibaba, Baidu and Tencent have used Tofino and P4 to address challenges in their networks. Barefoot noted that the demands of mega-scale data centres are growing to support new applications and services, while legacy fixed-function switching technology is not sufficiently flexible, so these companies are using Barefoot's technology to develop custom forwarding planes. The companies are therefore able to adopt load balancing, DDoS protection and INT features without affecting performance.

Barefoot has also expanded its ecosystem via partnerships with equipment manufacturers based in Asia. To date, the company has announced go-to-market partnerships with Edgecore Networks, WNC, H3C, Ruijie and ZTE. These partnerships are designed to enable Barefoot to meet growing demand for programmable networking across a range of network environments.



  • Barefoot Networks, based in Palo Alto, California, exited stealth and unveiled its user-programmable Tofino switch chip in June last year. Founded in 2013, Barefoot is backed by investors including Andreessen Horowitz, Lightspeed Venture Partners and Sequoia Capital. The company has raised approximately $155 million in five funding rounds, most recently raising $23 million in November 2016 in a round led by Alibaba and Tencent.

Tuesday, May 23, 2017

Centec selects 100 Gbit/s QSFP28 active copper cable from Credo and FIT

Credo Semiconductor, a developer of mixed-signal ICs and IP for data centre and enterprise networking applications, and interconnect solutions supplier Foxconn Interconnect Technology (FIT) announced that they will demonstrate robust and error-free 100 Gbit/s QSFP28 active copper cable (ACC) connectivity solutions with reach up to 10 metres during the Computex Show in Taipei.

The new cable assembly is designed to enable server designers to transition to higher bandwidths utilising cost-effective copper connectivity as an alternative to implementing higher cost optical technology.

The companies stated that to enable lower cost high bandwidth solutions, Centec, an established provider of Ethernet switching solutions, plans to adopt ACC technology for its data centre solutions to help speed the transition to the technology in 100 Gbit/s intra-rack and inter-rack applications within the data centre.

The companies noted that with growing demand for bandwidth, maintaining copper interconnects between servers and top-of-rack switches would save significant capex in the transition from 10 to 25 Gbit/s single lane data rates. The new jointly developed 100 Gbit/s QSFP28 ACCs provide connectivity between standard QSFP ports, with a QSFP28 ACC capable of supporting 4x full-duplex lanes, with each lane transmitting at up to 25 Gbit/s in each direction, delivering aggregate bandwidth of up to 100 Gbit/s.


The ACC solution utilises Credo's mixed-signal processing technology to provide cost-effective intermediate-reach data centre interconnects that cannot be achieved with traditional passive copper cable (PCC). In addition, Credo's low power technology means that the 100 Gbit/s ACC consumes significantly less power than competing AOCs (active optical cables).



  • Earlier this year, Credo Semiconductor announced demonstrations of its low power 112 Gbit/s PAM4, 56 Gbit/s PAM4 LR and 56 Gbit/s NRZ LR SerDes technologies at DesignCon 2017. Demonstrations with Keysight and Amphenol showcased Credo's 112 Gbit/s PAM4-SR and 56 Gbit/s PAM4-LR technology, with Molex demonstrating Credo's 56 Gbit/s PAM4-LR and NRZ LR SerDes IP over copper cables and backplanes. Credo also demonstrated long-reach 28 Gbit/s technology for data centre connectivity with Leoni.

Wednesday, May 10, 2017

SiFive Raises $8.5m for RISC-V based custom chips

SiFive, based in San Francisco, a fabless provider of customised, open-source-enabled semiconductors:

a.         Co-founded in 2015 by Krste Asanovic, currently the company's chief architect, Yunsup Lee, currently CTO, and Andrew Waterman, currently chief engineer, the inventors of RISC-V technology.

b.         Established with the aim of providing access to custom silicon chip designs leveraging RISC-V technology.

Announced that it has raised $8.5 million in a Series B round led by Spark Capital, with the participation of Osage University Partners and existing investor Sutter Hill Ventures. The latest funding brings the total investment in SiFive to $13.5 million and comes as the company reports increasing demand for RISC-V IP. In conjunction with the new funding, Todd Dagres, general partner at Spark Capital, is to join the SiFive board of directors.

SiFive is a fabless provider of custom semiconductors based on the free and open RISC-V instruction set architecture. Founded by the inventors of RISC-V, SiFive stated that in the first six months of availability, more than 1,000 HiFive1 software development boards have been purchased and delivered to developers in over 40 countries.

SiFive launched its Freedom Everywhere platform, designed for micro-controller, embedded, IoT and wearable applications, and Freedom Unleashed platform for machine learning, storage and networking applications in July 2016. In November, it announced general availability of the Freedom Everywhere 310 (FE310) SoC and HiFive1 software development board.

The company noted that RISC-V has established an ecosystem of more than 60 companies that includes Google, HPE, Microsoft, IBM, Qualcomm, NVIDIA, Samsung and Microsemi, while member companies and third-party open-source contributors are helping build a suite of software and toolchains that includes GCC and binutils.


Tuesday, May 9, 2017

Flex Logix, developer of embedded FPGA technology, raises $5m

Flex Logix Technologies, headquartered in Mountain View, California, a supplier of embedded FPGA IP and software:

a.         Founded in March 2014 to develop solutions for reconfigurable RTL in chip and system designs employing embedded FPGA IP cores and software.

b.         Offering the EFLX technology platform designed to significantly reduce design and manufacturing risks, accelerate technology development and provide greater flexibility for customers' hardware.

c.         Which in October 2015 announced it had raised $7.4 million in a financing round led by dedicated hardware fund Eclipse Ventures (formerly the Formation 8 hardware fund), with participation from founding investors Lux Capital and the Tate Family Trust.

Announced it has secured $5 million in Series B equity financing in a round led by existing investors Lux Capital and Eclipse Ventures, with participation from the Tate Family Trust.

Flex Logix stated that new funding will be used to expand its sales, applications and engineering teams to meet the growing customer demand for its embedded FPGA platform in applications including networking, government, data centres and deep learning.

Targeting chips in multiple markets, the Flex Logix EFLX platform can be used with networking chips with reconfigurable protocols, data centre chips with reconfigurable accelerators, deep learning chips with real-time upgradeable algorithms, base stations chips with customisable features and MCU/IoT chips with flexible I/O and accelerators. The company noted that EFLX is currently available for popular process nodes and is being ported to further process nodes based on customer demand.

The Flex Logix technology offers high-density blocks of programmable RTL in any size together with the key features customers require. The solution allows designers to customise a single chip to address multiple markets and/or upgrade the chip while in the system to meet changing standards such as networking protocols. It also allows customers to update chips with new deep learning algorithms and implement their own versions of protocols in data centres.

Regarding the new funding, Peter Hebert, managing partner at Lux Capital, said, "I believe that Flex Logix's embedded FPGA has the potential to be as pervasive as ARM's embedded processors… the company's software and silicon are proven and in use at multiple customers, paving the way to become one of the most widely-used chip building blocks across many markets and for a range of applications".

While Pierre Lamond, partner at Eclipse Ventures, commented, "The Flex Logix platform is the… most scalable and flexible embedded FPGA solution on the market, delivering competitive advantages in time to market, engineering efficiency, minimum metal layers and high density… the patented technology combined with an experienced management team led by Geoff Tate, founding CEO of Rambus, position the company for rapid growth".


Tuesday, May 2, 2017

Iliad's Online launches cloud service based on Cavium ThunderX

Web hosting provider Online, a wholly-owned subsidiary of French telecom company Iliad Group, announced the commercial deployment of server platforms based on Cavium's ThunderX workload-optimised processors as part of its Scaleway cloud service offering.

Online offers a range of services to Internet customers worldwide including domain names, web hosting, dedicated servers and hosting in its data centre, and with several hundred thousand servers deployed is one of the largest web hosting providers in Europe.

For the deployment, Online is using dual socket, 96 core ThunderX based platforms as part of the Scaleway IaaS cloud offering. The Scaleway cloud platform is supported by Ubuntu 16.04 OS, including LAMP stack, Docker, Puppet, Juju, Hadoop and MAAS, and also provides support for standard features of the Scaleway cloud including flexible IPs, native IPv6, Snapshots and images.

Cavium's ThunderX products offer a 64-bit ARMv8-A based server processor designed for data centre and cloud applications. The devices feature custom cores, single and dual socket configurations, and high memory bandwidth and memory capacity. The products also include hardware accelerators, integrated high bandwidth network and storage IO, virtualised core and IO functionality and a scalable high bandwidth, low latency Ethernet fabric.

ThunderX products are compliant with ARMv8-A architecture specifications, as well as with ARM's SBSA and SBBR standards, and supported by major OS, hypervisor and software tool and application vendors.

Earlier in the year, Cavium announced it was collaborating with Microsoft to evaluate and enable a range of cloud workloads running on its flagship ThunderX2 ARMv8-A data centre processor for the Microsoft Azure cloud platform.


As part of the partnership, the companies demonstrated web services on a version of Windows Server developed for Microsoft's internal use running cloud services workloads on ThunderX2. The server platform was based on Microsoft's Project Olympus open source hyperscale cloud hardware design.

Friday, April 28, 2017

Dispute Intensifies between Apple and Qualcomm

Qualcomm reported that it has been informed by Apple that Apple is withholding payments to its contract manufacturers for the royalties those contract manufacturers owe under their licenses with Qualcomm for sales during the quarter ended March 31, 2017.

Qualcomm's statement:

“Apple is improperly interfering with Qualcomm’s long-standing agreements with Qualcomm’s licensees,” said Don Rosenberg, executive vice president and general counsel of Qualcomm.  “These license agreements remain valid and enforceable.  While Apple has acknowledged that payment is owed for the use of Qualcomm’s valuable intellectual property, it nevertheless continues to interfere with our contracts.  Apple has now unilaterally declared the contract terms unacceptable; the same terms that have applied to iPhones and cellular-enabled iPads for a decade.  Apple’s continued interference with Qualcomm’s agreements to which Apple is not a party is wrongful and the latest step in Apple's global attack on Qualcomm.  We will continue vigorously to defend our business model, and pursue our right to protect and receive fair value for our technological contributions to the industry.”

http://www.qualcomm.com

Analog Devices unveils 28 nm DA converter for 4G/5G

Analog Devices has introduced a 28 nm D/A converter as part of a new series of high speed digital-to-analogue converters, designed to address the requirements of gigahertz bandwidth applications and provide the spectral efficiency needed for 4G/5G multi-band base stations and 2 GHz E-band microwave point-to-point backhaul platforms.

Based on 28 nm CMOS technology, the new AD9172 device is claimed to set performance benchmarks in terms of dynamic range, signal bandwidth and low power consumption.

Analog Devices' dual 16-bit AD9172 product supports 12 GSPS and provides direct-to-RF synthesis up to 6 GHz, eliminating the IF-to-RF up-conversion stage and LO generation to simplify the overall RF signal chain and reducing system cost. The device is designed to maintain superior linearity and noise performance across the RF frequencies to allow a high level of configurability.

In addition, independent numerically controlled oscillator (NCO), digital gain control and a range of interpolation filter combinations per input channel provide a suite of signal processing options to allow flexible signal chain partitioning between the analogue and digital domains, enabling the development of software defined platforms. The AD9172 is complemented by the AD9208 28 nm analogue-to-digital converter.
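The numerically controlled oscillator follows the standard direct digital synthesis relation, f_out = FTW x f_DAC / 2^48 for a 48-bit NCO. The sketch below applies it using the 12 GSPS DAC rate quoted for the AD9172; the 3.6 GHz carrier is an arbitrary example, not a figure from the announcement.

  # Standard 48-bit NCO tuning-word arithmetic at the quoted 12 GSPS DAC rate.
  F_DAC = 12e9          # DAC update rate, samples per second
  NCO_BITS = 48

  def tuning_word(f_out_hz):
      return round(f_out_hz / F_DAC * 2**NCO_BITS)

  def synthesised_frequency(ftw):
      return ftw * F_DAC / 2**NCO_BITS

  ftw = tuning_word(3.6e9)                                 # example 3.6 GHz carrier
  print(f"FTW = {ftw:#x}")
  print(f"carrier = {synthesised_frequency(ftw) / 1e9:.9f} GHz")
  print(f"resolution = {F_DAC / 2**NCO_BITS * 1e3:.3f} mHz per LSB")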

Key features of Analog Devices' new AD9172 device, which is offered in a 10 x 10 mm 144-ball BGA-ED package, include:

1.         Support for single- and multi-band wireless applications with 3 bypassable complex data input channels per RF DAC at a maximum complex input data rate of 1.5 GSPS with independent NCO per input channel.

2.         Selectable interpolation filters enabling up to 8x configurable data channel interpolation and up to 12x configurable final interpolation to support a full set of input data rates.

3.         Final 48-bit NCO that operates at the DAC rate to support frequency synthesis up to 6 GHz.

4.         Flexible 8-lane 15 Gbit/s JESD204B (subclass 0 and 1) interface that supports 12-bit high density mode for enhanced data throughput.

5.         Low noise on-chip PLL clock multiplier that supports 12 GSPS DAC update rate and provides observation ADC clock driver with selectable divide ratios.

In conjunction with the AD9172, Analog Devices also introduced its dual 14-bit AD9208 product optimised to deliver wide input bandwidth, high sample rate, excellent linearity and low power in a small package. With sampling speeds up to 3.0 GSPS, the A/D converter is designed to facilitate direct RF signal processing architectures and enable high oversampling.

The device's full power bandwidth supports IF signal sampling up to 9 GHz (-3 dB point), with a programmable finite impulse response (FIR) filter block of up to 96 taps that can be configured for channel equalisation and/or quadrature error correction. In addition, four integrated wide-band decimation filters and 48-bit NCO blocks enable support for multi-band receivers.
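The sketch below shows, in plain numpy, what a decimation stage does in principle: low-pass filter, then keep every Nth sample so the output rate drops by the decimation factor. The filter, rates and test tone are illustrative only; the AD9208's on-chip decimation filters are fixed hardware blocks whose actual responses are not reproduced here.

  import numpy as np

  fs = 3.0e9                             # illustrative ADC sample rate, 3 GSPS
  decimation = 4
  t = np.arange(4096) / fs
  x = np.cos(2 * np.pi * 150e6 * t)      # 150 MHz test tone

  taps = np.hamming(63)                  # crude low-pass prototype filter
  taps /= taps.sum()                     # unity gain at DC

  filtered = np.convolve(x, taps, mode="same")
  decimated = filtered[::decimation]     # output rate is now fs / 4 = 750 MSPS

  print(f"{fs / 1e9:.1f} GSPS in -> {fs / decimation / 1e9:.2f} GSPS out, "
        f"{len(decimated)} of {len(x)} samples kept")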

Monday, April 17, 2017

Intel Developer Forum 2017 is Cancelled

Intel has canceled its annual Intel Developer Forum, which was scheduled for August in San Francisco.

In comments to AnandTech, Intel said the event had simply grown too large, especially given its move into autonomous driving and artificial intelligence.

IDF has been held annually since 1997.  A spring IDF 2017 in China was also canceled.

http://www.intel.com/content/www/us/en/intel-developer-forum-idf/san-francisco/2017/idf-2017-san-francisco.html
