
Friday, July 7, 2017

Mellanox intros Spectrum-2 200/400 GBE data centre switch

Mellanox Technologies announced the Spectrum-2, a scalable 200 and 400 Gbit/s Open Ethernet switch solution designed to enable increased data centre scalability and lower operational costs through improved power efficiency.

Spectrum-2 also provides enhanced programmability and optimised routing capabilities for building efficient Ethernet-based compute and storage infrastructures.

Mellanox's Spectrum-2 provides leading Ethernet connectivity for up to 16 ports of 400 Gigabit Ethernet, 32 ports of 200 Gigabit Ethernet, 64 ports of 100 Gigabit Ethernet and 128 ports of 50 or 25 Gigabit Ethernet. It offers enhancements including increased flexibility and port density for a range of switch platforms optimised for cloud, hyperscale, enterprise data centre, big data, artificial intelligence, financial and storage applications.

Spectrum-2 is designed to enable IT managers to optimise their network for specific customer requirements. The solution implements a complete set of network protocols efficiently within the switch ASIC, providing users with the functionality required out of the box. Additionally, Spectrum-2 includes a flexible parser and packet modifier that can be programmed to process new protocols as they emerge in the future.
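
To illustrate the flexible-parser concept, here is a minimal sketch of a table-driven parser whose protocol graph is data rather than hard-wired logic, so a new header type can be added without touching the parsing engine. This is a conceptual Python illustration only, not Mellanox's microcode or API:

    # Illustrative only: a table-driven packet parser whose protocol graph is data,
    # not hard-wired logic -- the idea behind a field-programmable parser.
    import struct

    # Parse graph: (current protocol, value of its 'next protocol' field) -> next protocol.
    # Supporting a new protocol means adding entries here, not changing the engine below.
    PARSE_GRAPH = {
        ("ethernet", 0x0800): "ipv4",
        ("ethernet", 0x86DD): "ipv6",
        ("ipv4", 6): "tcp",
        ("ipv4", 17): "udp",
    }

    HEADER_LAYOUT = {
        # protocol: (header length, offset and struct format of its 'next protocol' field)
        "ethernet": (14, 12, "!H"),
        "ipv4": (20, 9, "!B"),   # assumes no IP options, for simplicity
    }

    def parse(packet: bytes):
        """Walk the parse graph and return the recognised headers with their offsets."""
        proto, offset, headers = "ethernet", 0, []
        while proto in HEADER_LAYOUT:
            length, field_off, fmt = HEADER_LAYOUT[proto]
            (next_value,) = struct.unpack_from(fmt, packet, offset + field_off)
            headers.append((proto, offset))
            offset += length
            proto = PARSE_GRAPH.get((proto, next_value), "payload")
        headers.append((proto, offset))
        return headers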

Mellanox stated that Spectrum-2 is the first 400/200 Gigabit Ethernet switch to provide adaptive routing and load balancing while guaranteeing zero packet loss and unconditional port performance for predictable network operation. The solution also supports double the data capacity while providing latency of 300 nanoseconds, claimed to be 1.4 times lower than alternative offerings. It is designed to provide the foundation for Ethernet storage fabrics connecting the next generation of Flash-based storage platforms.

Mellanox noted that Spectrum-2 extends the capabilities of its first generation Spectrum switch, which is now deployed in thousands of data centres. Spectrum enables IT managers to efficiently implement 10 Gbit/s and higher infrastructures and to economically migrate to 25, 50 and 100 Gbit/s speeds.


The new Spectrum-2 maintains the same API as Spectrum for porting software onto the ASIC via the Open SDK/SAI API or the Linux upstream driver (Switchdev), and supports standard network operating systems and interfaces including Cumulus Linux, SONiC and standard Linux distributions. It also supports telemetry capabilities, including the latest in-band network telemetry standard, enabling visibility into the network and monitoring, diagnosis and analysis of operations.


Wednesday, June 21, 2017

Switch enters rapid growth phase for its SuperNAP data centres

Switch is the operation behind the massive SuperNAP in Las Vegas, also known by superlatives such as 'world’s densest data centre' or the first 'elite' data centre capable of exceeding Tier IV classification by the Uptime Institute. Switch currently has about 1.8 million sq feet of colocation data centre space powered up in Las Vegas, with plans to add a further 854,000 sq feet of space in this same market. Switch has also kicked off construction of a multi-billion dollar data centre campus in Reno, Nevada, as well as another marquee data centre in Grand Rapids, Michigan. An international expansion is also underway with its first data centre in Europe (Siziano, Italy) and Asia (Chonburi, Thailand). Last week, Switch unveiled its latest ambition - a data centre campus spanning more than one million sq feet in Atlanta.

Switch is a privately-held company founded in 2000 by Rob Roy, a young entrepreneur who seized upon the idea that the world's leading corporations and telecom operators would benefit from highly-secure, scalable and energy-efficient colocation space where their systems could be in close physical proximity to many other like-minded carriers and corporations. Many others had this same idea at the turn of the millennium, and thus we had the birth of top data centre operators whose names are still recognised today (Equinix, CoreSite, Telecity), along with others that have since disappeared.

The company really got started by acquiring an Enron Broadband Services building located on Las Vegas' east Sahara Boulevard that provided access to long-haul fibre routes from the national network operators. This facility was originally intended to be the operational centre of Enron's bandwidth arbitrage business. Following Enron's spectacular collapse, the property was acquired in a bankruptcy auction by Rob Roy, reportedly for only $930,000.

Rob Roy, who remains CEO and chairman of the business, had the counter-intuitive insight to build the world’s largest data centre in the desert city of Las Vegas. There are several reasons why Las Vegas could have been a bad choice. First, the geographic location is far away from the financial centres of North America - there are relatively few Fortune 500 headquarters in Las Vegas. Second, Las Vegas is unmistakably situated in a desert. During July, the average daytime high temperature is 40.1C (104F). It is commonly understood that air conditioning is one of the greatest costs in running a data centre, and for this reason hyperscale data centres have been built near the Arctic Circle. Why build one in the desert? Third, Las Vegas is known for gambling and entertainment, but not particularly for high-tech.  If you are looking for hotspots for tech talent, you might think of Silicon Valley, Seattle, Boston, Austin, Ann Arbor or many other locations before picking Las Vegas.

However, each of these objections turned out to be an advantage for Switch thanks to the persistence and innovation of its founder. Regarding its location, the Nevada desert is geographically isolated from most natural disasters. It is spared from the earthquakes of California, Oregon or Washington. It is not in tornado alley, nor is it in the path of any potential hurricane. The location has no possibility of suffering through a debilitating blizzard, flood or tsunami. The biggest enterprises with the tightest requirements will want to have at least one major data facility out of any potential danger zone. By scaling its data centre campus to an enormous size, the Switch SuperNAP becomes its own centre of gravity for attracting clients to the campus. According to the company's website, there are over 1,000 clients now, including big names such as Boeing, eBay, Dell EMC, Intel, JP Morgan Chase and many others.

As for the desert heat, Switch's innovations have enabled it to nail the energy efficiency challenge. The company's proprietary Thermal Separate Compartment in Facility (T-SCIF) design, which enables an unusually high density of power load per rack, does not use water cooling. Nor does it use conventional computer room air conditioning units. Key ingredients include a slab concrete floor, hot air containment chambers, high ceilings and a heat exchange system mounted above. HVAC cooling units sit outside the building. The company cites a PUE of 1.18 for its data centres in Las Vegas and an estimated 1.20 for its new facility in Reno, Nevada.
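
For context, PUE is simply total facility power divided by the power delivered to IT equipment, so those figures imply a very small cooling and distribution overhead. A quick back-of-the-envelope check in Python (the 10 MW IT load below is purely illustrative, not a Switch figure):

    # PUE = total facility power / IT equipment power.
    def facility_overhead_kw(it_load_kw: float, pue: float) -> float:
        """Power spent on cooling, distribution losses, etc. for a given PUE."""
        return it_load_kw * (pue - 1.0)

    it_load_kw = 10_000  # hypothetical 10 MW of IT load
    for pue in (1.18, 1.20, 1.7):  # Las Vegas, Reno, and a commonly cited industry average
        print(f"PUE {pue}: {facility_overhead_kw(it_load_kw, pue):,.0f} kW of overhead")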

Regarding technology innovation, Rob Roy now has 256 patents and patent-pending claims, many of which focus on his Wattage Density Modular Design (WDMD) data centre architecture. Talent attracts talent. Whereas some data centre operators describe themselves primarily as real estate investment trusts, Switch positions itself as a technology leader. One example is its proprietary building management system, which uses more than 10,000 sensors to gather millions of daily data points for dynamically optimising operations.

The Nevada desert enjoys abundant sunshine, and since January 2016 all of its data centres have operated on 100% renewable energy thanks to two nearby solar power stations operated by the company. These solar farms use PV panels to provide 180 MW of capacity. The focus on renewable power has earned the company an “A” listing on Greenpeace's Clean Company Scorecard, ahead of Apple, Facebook, Google, Salesforce, Microsoft, Equinix and all the others with large-scale data centre operations.

Below is an overview of major facilities and developments (data from the company website and other public sources):


In March 2017, Switch officially opened the first phase of the 1.8 million-square-foot data centre campus in Grand Rapids, Michigan. The iconic building, which is an adaptive reuse of the Steelcase Pyramid, is the centrepiece of what is intended to become the largest, most advanced data centre campus in the eastern U.S. The entire campus is powered by green energy.

In February 2017, Switch inaugurated its Citadel Campus in Reno, Nevada (near Tesla’s Gigafactory). The Citadel Campus, located on 2,000 acres of land, aims to be the largest colocation facility in the world when it is fully built. The first building has 1.3 million sq feet of space. It is connected to the Switch SUPERLOOP, a 500-mile fibre backbone built by the company to provide low-latency connectivity to its campus in Las Vegas as well as to the San Francisco Bay Area and Los Angeles.

In December 2016, SUPERNAP International officially opened the 'largest, most advanced' data centre in southern Europe. The new facility is built to the specifications of the company's flagship, Tier IV Gold-rated Switch Las Vegas multi-tenant/colocation data centre. The new facility is located near Milan and includes 42,000 sq meters of data centre space with four data halls.

In January 2016, construction began on a new $300 million SUPERNAP data centre in Thailand’s eastern province of Chonburi. The new SUPERNAP Thailand data centre, which is in the Hemmaraj Industrial Estate, will cover an area of nearly 12 hectares and will be strategically built outside the flood zone, 110 metres above sea level and only 27 km away from an international submarine cable landing station.

Wednesday, March 15, 2017

Barefoot Tests its Switch Performance with Ixia IxNetwork + Novus

Network testing solutions provider Ixia announced that Barefoot Networks, developer of high-performance, programmable switch solutions, has chosen its 100 Gigabit Ethernet test solutions to validate the Tofino series of programmable switches launched in June last year.

Utilising the Ixia solutions, Barefoot will be able to ensure that its Tofino switches deliver the performance network operators demand, as well as test the functionality enabled by its programmable packet processing pipeline, for example new or custom protocols, in-band network telemetry and load balancing.

Barefoot's Tofino switches are based on technology designed to enable a fully programmable Ethernet switch without a performance penalty to its 6.5 Tbit/s traffic processing capability. To validate the performance, scale and quality of these switches, Barefoot test engineers can use Ixia's IxNetwork and Novus 100 Gigabit Ethernet solutions to recreate real-life traffic patterns and load characteristics.

Ixia's IxNetwork offers a complete chip, device and network infrastructure test solution that can be used to validate Layer 2/3 performance, interoperability and functionality. Capable of analysing up to 4 million traffic flows simultaneously, IxNetwork is designed to provide enhanced real-time analysis and statistics. Featuring eight native QSFP28 100 Gigabit Ethernet ports, the Ixia Novus load modules can generate terabytes of data to simulate real-world network traffic and Layer 2/3 protocols.
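
For a sense of what recreating traffic patterns involves (at a vastly smaller scale than Ixia's hardware testers), the sketch below uses the open-source Scapy library to sweep a set of UDP flows at a chosen frame size; the interface name and addresses are hypothetical, and this is not Ixia's API:

    # Conceptual sketch only: commercial testers such as IxNetwork/Novus generate
    # line-rate traffic in hardware; this merely illustrates sweeping flows and
    # frame sizes in software when exercising a switch.
    from scapy.all import Ether, IP, UDP, Raw, sendp

    def burst(dst_ip: str, flows: int, frame_len: int, iface: str = "eth0"):
        """Send one frame per flow, varying the UDP source port to create distinct flows."""
        pad = b"\x00" * max(0, frame_len - 42)  # 42 = Ethernet + IPv4 + UDP header bytes
        frames = [Ether() / IP(dst=dst_ip) / UDP(sport=49152 + f, dport=5001) / Raw(load=pad)
                  for f in range(flows)]
        sendp(frames, iface=iface, verbose=False)

    # e.g. burst("10.0.0.2", flows=64, frame_len=256)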

In combination, the Ixia solutions provide a test platform designed to enable full line-rate 100 Gigabit Ethernet evaluation of application-specific integrated circuit (ASIC) designs, field-programmable gate arrays (FPGAs) and hardware switch fabrics.

Recently at MWC, Barefoot Networks partnered with Netronome, a provider of intelligent networking solutions, to demonstrate a solution combining the Agilio CX SmartNIC platform and its Tofino P4-programmable switch to deliver precise, real-time network telemetry data required to detect, root-cause and resolve network problems.

The joint solution showed how a DevOps approach can be used to triangulate performance issues to VMs and NICs in servers or network switches and thereby enable the detection of poorly performing virtual network functions (VNFs) in service chains.

https://www.ixiacom.com/


  • In January, Barefoot announced that it was sharing its Wedge 100B series switches, including the Wedge100BF-32X 3.2 Tbit/s 1 RU 32 x 100 Gigabit Ethernet and Wedge100BF-65X 6.5 Tbit/s 2 RU 65 x 100 Gigabit Ethernet switches that are based on its Tofino technology, with the OCP ecosystem.


Tuesday, March 14, 2017

Innovium Unveils 12.8Tbps Data Center Switching Silicon

Innovium, a start-up based in San Jose, California, introduced its TERALYNX scalable Ethernet silicon for data center switches.

Innovium said its TERALYNX will be the first single switching chip to break the 10 Tbps performance barrier, while also offering telemetry, line-rate programmability, the largest on-chip buffers and best-in-class low latency. The chip is expected to sample in Q3 2017.

TERALYNX includes broad support for 10/25/40/50/100/200/400GbE Ethernet standards. It will deliver 128 ports of 100GbE, 64 ports of 200GbE or 32 ports of 400GbE in a single device. The TERALYNX switch family includes software compatible options at 12.8Tbps, 9.6Tbps, 6.4Tbps and 3.2Tbps performance points, each delivering compelling benefits for switch system vendors and data center operators.
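
The quoted port counts all fall out of the same aggregate figure; a quick sanity check (illustrative Python):

    # Cross-check the quoted TERALYNX port configurations against the 12.8 Tbps aggregate.
    ASIC_TBPS = 12.8
    configs = [(128, 100), (64, 200), (32, 400)]  # (ports, Gbps per port)

    for ports, gbps in configs:
        total_tbps = ports * gbps / 1000
        print(f"{ports:3d} x {gbps}GbE = {total_tbps} Tbps")
        assert total_tbps == ASIC_TBPS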

Some highlights:

  • 12.8Tbps, 9.6Tbps, 6.4Tbps and 3.2Tbps single chip performance options at packet sizes of 300B or smaller 
  • Single flow performance of 400Gbps at 64B minimum packet size, 4x vs alternatives
  • 70MB of on-chip buffer for superior network quality, fewer packet drops and substantially lower latency compared to off-chip buffering options
  • Up to 128 ports of 100GbE, 64 ports of 200GbE or 32 ports of 400GbE, which enable flatter networks for lower Capex and fewer hops
  • Support for cut-through with best-in-class low latency of less than 350ns
  • Programmable, feature-rich INNOFLEX forwarding pipeline
  • Comprehensive layer 2/3 forwarding and flexible tunneling including MPLS
  • Large table resources with flexible allocation across L2, IPv4 and IPv6
  • Line-rate, standards-based programmability to add new/custom features and protocols
  • FLASHLIGHT telemetry and analytics to enable autonomous data center networks
  • Extensive visibility and telemetry capabilities such as sFlow, FlexMirroring along with highly customizable extra-wide counters
  • P4-INT in-band telemetry and extensions to dramatically simplify end to end analysis
  • Advanced analytics enable optimal resource monitoring, utilization and congestion control allowing predictive capabilities and network automation
  • SERDES I/Os for existing and upcoming networks
  • Industry-leading, proven SerDes supports 10G and 25G NRZ, as well as 50G PAM4, to provide customers a variety of connectivity choices, ranging from widely deployed 10/25/40/50/100G Ethernet to upcoming 200/400GbE
  • Up to 258 lanes of long-reach SerDes, each of which can be configured dynamically
  • Integrated GHz ARM CPU core along with PCIe Gen 3 host connectivity
  • ARM core enables development of differentiated real-time automation features
  • High speed host connectivity and DMA enhancements enable high performance packet, table and telemetry data transfers while minimizing CPU overhead
  • Two high-speed Ethernet ports for management or telemetry data


“Networking silicon solutions in the market today are generic, one-size-fits-all approaches and as a result, sub-optimal for data-centers. Innovium has used a unique, singular focus on data centers to deliver the strongest set of switch capabilities that dramatically advance the future of data centers,” said Rajiv Khemani, CEO & Co-founder of Innovium. “Equally important, we have assembled one of the strongest execution teams for rapid, high-volume deployments. We are excited to be working with leading switch system and data center customers as we execute on our mission.”

Innovium, in partnership with Inphi, also introduced a single switch chip based reference design for a platform supporting 12.8Tbps (128 X 100GbE) QSFP28 deployments. The reference design uses Innovium’s 12.8Tbps TERALYNX Ethernet switch silicon and Inphi’s 4-Level Pulse Amplitude Modulation (PAM4) chipset.

“As the pioneer of PAM based electronics for 40/50/100/200/400G, Inphi continues to enable a successful PAM4 ecosystem, leading the industry to the new world of terabit cloud optical interconnects. The 12.8Tbps reference design with Inphi’s PAM4 silicon in conjunction with Innovium’s new single switch chip is yet another major achievement in direct response to what data center operators require in the networking world,” said Siddharth Sheth, vice president of marketing, Networking Interconnect at Inphi.

In addition, Innovium announced $38.3 million in Series C funding from new lead investor, Redline Capital, new strategic investors and existing investors Greylock Partners, Walden Riverwood Ventures, Capricorn Investment Group, Qualcomm Ventures and S-Cubed Capital. This brings the company's total financing to $90 million.

Innovium also announced a board of advisors and investors consisting of networking industry luminaries: Yuval Bachar, Principal Engineer for Global Infrastructure Architecture at LinkedIn; Sachin Katti, Professor of EE & CS at Stanford University; Martin Lund, CEO of Metaswitch; Rajeev Madhavan, serial entrepreneur and General Partner of Clear Ventures; Pradeep Sindhu, Founder and Vice Chairman of Juniper Networks; Krishna Yarlagadda, President of Imagination Technologies; and Raj Yavatkar, VMware Fellow.

https://www.innovium.com/


Friday, March 10, 2017

Switch Opens its Massive Data Center in Michigan

Switch, which runs the SUPERNAP data centers in Nevada, officially opened the first phase of its 1.8 million-square-foot data center campus in Grand Rapids, Michigan.

The iconic building, which is an adaptive reuse of the Steelcase Pyramid, is the centerpiece of what is intended to become the largest, most advanced data center campus in the eastern U.S. The entire campus is powered by 100-percent green energy.

“Rob Roy’s vision has turned one of the most iconic buildings in the country into the foundation of what we believe will be the most advanced technology ecosystem campus in the eastern U.S.,” said Switch Executive Vice President for Strategy Adam Kramer.  “Since the announcement of Switch’s expansion into Michigan, the state has been attracting the tech world’s attention, defining the region and the state as an epicenter for technology that runs the internet of absolutely everything.”

https://www.switch.com/switch-grand-rapids-now-open-largest-advanced-data-center-campus-eastern-u-s/

Wednesday, January 25, 2017

Apstra Demos Wedge Switch Running its OS

Apstra, a start-up based in Menlo Park, California, released its Apstra Operating System (AOS) 1.1.1 and an integration with Wedge 100, Facebook’s second generation top-of-rack network switch.

Apstra said its distributed operating system for the data center network will disaggregate the operational plane from the underlying device operating systems and hardware. Sitting above both open and traditional vendor hardware, AOS provides the abstraction required to automatically translate a data center network architect’s intent into a closed loop, continuously validated infrastructure. The intent, network configurations, and telemetry are stored in a distributed, system-wide state repository.
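
The closed-loop idea, comparing declared intent against telemetry streamed back from the network and flagging any drift, can be sketched in a few lines. The data structures below are invented for illustration and are not AOS's actual schema:

    # Illustrative closed-loop validation: declared intent vs. observed telemetry.
    intent = {
        "leaf1:eth1": {"speed": "100g", "peer": "spine1:eth7", "state": "up"},
        "leaf1:eth2": {"speed": "100g", "peer": "spine2:eth7", "state": "up"},
    }

    telemetry = {
        "leaf1:eth1": {"speed": "100g", "peer": "spine1:eth7", "state": "up"},
        "leaf1:eth2": {"speed": "40g",  "peer": "spine2:eth7", "state": "down"},
    }

    def validate(intent, observed):
        """Yield (interface, field, expected, actual) for every deviation from intent."""
        for iface, expected in intent.items():
            actual = observed.get(iface, {})
            for field, want in expected.items():
                if actual.get(field) != want:
                    yield iface, field, want, actual.get(field)

    for anomaly in validate(intent, telemetry):
        print("drift detected:", anomaly)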

“At Apstra we believe in giving network engineers choice and control in operating their network and we are excited to be part of the network disaggregation movement,” said Mansour Karam, CEO and Founder of Apstra, Inc. “We are delighted to have been invited to demonstrate AOS integrated with Wedge 100 today. AOS provides network engineers with advanced operational control and situational awareness of network services, and enables them to design, deploy, and operate a truly Self-Operating Network™ (SON) without vendor lock-in.”

http://www.apstra.com

Tuesday, November 8, 2016

Facebook Deploys Backpack -- its 2nd Gen Data Center Switch

Facebook unveiled Backpack, its second-generation modular switch platform developed in house at Facebook for 100G data center infrastructure. It leverages Facebook's recently announced Wedge switch.

Backpack is designed with a clear separation of the data, control, and management planes. It uses simple building blocks called switch elements. The Backpack chassis is equivalent to a set of 12 Wedge 100 switches connected together. The orthogonal direct chassis architecture opens up more air channel space for better thermal performance in managing the heat from 100G ASICs and optics. Facebook will use the BGP routing protocol for the distribution of routes between the different line cards in the chassis.
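
As a rough mental model, the chassis behaves like a small internal Clos of identical switch elements. The 8 line-card / 4 fabric-card split below is an assumption for illustration; the post itself only states that the chassis is equivalent to 12 Wedge 100 switches:

    # Illustrative model of a modular chassis built from identical switch elements.
    LINE_CARDS = [f"lc{i}" for i in range(8)]    # port-facing elements (assumed count)
    FABRIC_CARDS = [f"fc{i}" for i in range(4)]  # internal fabric elements (assumed count)

    # Full mesh between line cards and fabric cards; BGP can then distribute routes
    # between the cards as if they were a small leaf-spine network.
    internal_links = [(lc, fc) for lc in LINE_CARDS for fc in FABRIC_CARDS]

    print(f"{len(LINE_CARDS) + len(FABRIC_CARDS)} switch elements, "
          f"{len(internal_links)} internal fabric links")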

The design has already entered production and deployment in Facebook data centers.  The company plans to submit the design to the Open Compute Project.

https://code.facebook.com/posts/864213503715814/introducing-backpack-our-second-generation-modular-open-switch/

Facebook Targets 32x100G Wedge Data Center Switch

Facebook confirmed that work is already underway on Wedge 100, a 32x100G switch for its hyperscale data centers.

Facebook is also adapting Wedge to a much bigger aggregation switch called 6-pack, which uses Wedge as its foundation and stacks 12 of these Wedges in a modular and nonblocking aggregation switch. FBOSS will be used as a software stack across the growing platform of network switches: Wedge, 6-pack, and now Wedge 100.

In an engineering blog post, Facebook said thousands of its initial Wedge top-of-rack network switches have already been deployed.  The design has been contributed to the Open Compute Project and is now commercially available from various vendors.

Facebook also revealed that its FBOSS software undergoes a weekly cadence of new features and bug fixes. Facebook is able to update thousands of switches seamlessly, without any traffic loss, to keep up with this pace.  Some new capabilities include detailed monitoring, non-stop forwarding, warm boot, etc.

https://code.facebook.com/posts/145488969140934/open-networking-advances-with-wedge-and-fboss/

Monday, October 24, 2016

Marvell Unveils its 25GbE Data Center Solution

Marvell introduced its 25 Gigabit Ethernet (GbE) end-to-end data center solution, comprising its Prestera 98CX84xx family of 25GbE switches and Alaska C 88X5123 and 88X5113 Ethernet transceivers, all fully compliant with the IEEE 25GbE and 100GbE standards.

The new Prestera 98CX84xx switch family is designed specifically for mainstream data center high-performance server applications and addresses the most common top of rack (ToR) port configurations.  The Prestera devices are integrated with 25GbE PHYs, enabling data centers to break the 1W per 25G port barrier for 25G ToR applications.

Marvell's Prestera 98CX84xx switches also include an abstraction layer which integrates seamlessly with Open Compute Project (OCP) switch abstraction interface (SAI) application program interfaces (APIs). Marvell provides an OpenSwitch driver plug-in that facilitates easy integration with the OpenSwitch application stack.

The Alaska C 88X5123 Ethernet transceiver enables customers to support the new IEEE 25GbE specifications on their existing switch ASICs without the expensive investment involved in new silicon development.

The Alaska C 88X5113 Ethernet transceiver, a 40G Ethernet to 25G Ethernet Gearbox device, enables a 40GbE stream to be translated to a 25G Ethernet stream. This device is purpose-built to enable existing 40GbE-capable server NIC controllers to support native 25GbE, hastening the availability of 25GbE-capable NICs.

"I believe Marvell's 25GbE-optimized devices are a significant contribution to the industry, helping drive the adoption of 25GbE server access to meet increasing bandwidth demands in data centers," said Michael Zimmerman, vice president and general manager, Connectivity, Storage and Infrastructure (CSI) Business Unit at Marvell Semiconductor, Inc. "Our newest 25G Ethernet switch devices, PHY and Gearbox devices extend Marvell's leadership of providing best-in-class networking solutions optimized for high performance, cost effective, and energy-efficient computing."

http://www.marvell.com/

Tuesday, October 11, 2016

Broadcom's Tomahawk II Switch Packs 64 Ports of 100GE

Broadcom announced its StrataXGS Tomahawk II switching chip -- the highest performance Ethernet switch to date, packing up to 64 ports of 100GE or 128 ports of 50GE with SDN-optimized packet switch engines operating at 6.4 Terabits per second.

The StrataXGS Tomahawk II integrates 256 SerDes running at over 25 Gbps, with large on-chip forwarding tables and packet buffer memory.  The chip supports Broadcom's next generation BroadView instrumentation and control capabilities. Tomahawk II also provides enhanced traffic load balancing, network visibility and control of traffic provisioning.
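
The headline numbers are internally consistent: 256 lanes at 25 Gbps give 6.4 Tbps, and each port speed is simply a grouping of lanes. A quick check:

    # Cross-check: 256 SerDes lanes x 25 Gbps = 6.4 Tbps aggregate.
    LANES, LANE_GBPS = 256, 25
    assert LANES * LANE_GBPS / 1000 == 6.4

    # Port speeds are lane groupings drawn from the same SerDes pool.
    for port_gbps, lanes_per_port in ((100, 4), (50, 2)):
        print(f"{LANES // lanes_per_port} ports of {port_gbps}GE "
              f"({lanes_per_port} x {LANE_GBPS}G lanes per port)")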

StrataXGS Tomahawk II Key Features

  • 6.4 Tbps Multilayer L2/L3/MPLS Ethernet switching
  • Support for 64 ports of 100GE, or 128 ports of 40GE/50GE
  • 256 x 25Gbps SERDES
  • Low Latency StrataXGS Pipeline Architecture
  • FleXGS flow processing with flexible match/action capabilities
  • Dynamic Flow Distribution for large scale Layer3/ECMP networks
  • Increased On-Chip Forwarding Databases
  • Higher Capacity Embedded Packet Buffer Memory
  • Enhanced BroadView network telemetry features
  • Comprehensive overlay and tunneling support including VXLAN, NVGRE, MPLS, SPB
  • Advanced OpenFlow support using Broadcom OF-DPA
  • 16nm process


"Our StrataXGS Tomahawk II comes just two years after Tomahawk with twice the bandwidth and resource capacity. We expect this to accelerate 100G adoption in the next wave of data centers, while delivering network scale and visibility for public/private cloud computing, storage and HPC fabrics," said Ram Velaga,  senior vice president and general manager, Switch Products at Broadcom. "Tomahawk II will leverage the 25G/100G infrastructure established with the highly successful Tomahawk line.”

http://www.broadcom.com

Wednesday, September 21, 2016

Centec Raises $47 Million for Ethernet Switching Silicon

Centec Networks, a developer of Ethernet switching silicon and SDN white box solutions, announced $47 million in Series E funding led by China Integrated Circuit Industry Investment Fund (CICF), a national fund established to help develop IC technology companies in China.

Centec, which was founded in 2005 and is based in Suzhou, China, said it is pursuing an open-networking strategy. The company has developed and released multiple product generations, including switch silicon for 1GE, 10GE, 40GE and 100GE applications, with OCP’s Switch Abstraction Interface (SAI) to ease integration into both commercial and open source Network Operating Systems (NOS). Centec has a history of success delivering productized switch platforms based on either a commercial or open source NOS for global vendors to integrate and customize for SDN and Open Networking applications.

The company has raised $30 million in previous funding rounds, developed four generations of switching silicon, and has achieved double-digit quarterly growth for the past three years.  Proceeds from its Series E funding round will be used to develop new products and to scale the company’s sales and marketing operations across the globe, especially in North America.  CICF is joined in the round by existing investor China Electronics Corporation (CEC) through its China Electronics Innovations Fund, which led the company’s prior funding round.

“This funding round further establishes Centec as one of the top global sources of Ethernet switching silicon, and will enable us to meet growing product demand, accelerate innovation, and launch a new product family for the rapidly growing SDN white box market,” said James Sun, CEO of Centec Networks.  “The CICF investment also reinforces our position as a leading innovator in the worldwide market for Ethernet switching silicon, and a valued partner for customers entering the Chinese market.”

“Centec has emerged as a leading competitor in a segment dominated worldwide by a single vendor and has developed impressive switch chips with competitive features using a fraction of the funding a U.S. company would require,” said Bob Wheeler, principal analyst at The Linley Group. “The additional funding led by CICF will further strengthen its technology and market leadership in China and fuel its major expansion beyond China into the global market.”

http://www.centecnetworks.com/en/About.asp?ID=23

Centec Debuts Fourth Generation 1.2 Tbps Switching Silicon

Centec Networks announced its fourth-generation GoldenGate switch silicon, a 1.2 Tbps chip designed to address SDN and virtualization in 10GE, 40GE and 100GE networks by increasing visibility in the forwarding plane.

The company said it was able to incorporate a number of unique features to solve SDN challenges for elephant flow detection, flow completion time, and flow visibility and control, while also supporting network virtualization with diverse overlay technology including the latest GENEVE protocol.

Tuesday, August 23, 2016

Arista Adds Telemetry Features with HPE, SAP, Veriflow and VMware

Arista Networks is rolling out new telemetry and analytics capabilities for cloud networks.

The new Arista EOS (Extensible Operating System) and CloudVision capabilities provide visibility into network workloads, workflows, and workstreams on a network-wide basis.

Key features include:

  • Instantaneous event-driven streaming of every state change, providing improved granularity compared to traditional polling models.
  • State visibility from all devices in the network, including configuration, counters, errors, statistics, tables, environmentals, buffer utilization, flow data, and much more.
  • CloudVision Analytics Engine for storing state history and performing trend analysis, event correlation, and automated alerts. The basis for both real-time monitoring and historical forensic event investigation.
  • New Telemetry Apps for the CloudVision Portal, including the Workstream Analytics Viewer, providing simplified visualization of network-wide state for faster time to resolution.
  • An open framework, built on standard RESTful APIs as well as OpenConfig-based infrastructure, providing a point for integration into a variety of partner solutions and customer-specific infrastructure.
  • Expansion of existing EOS Telemetry Tracer capabilities across device, topology, virtual machine, container, and application components.

“The automated network operations in today’s cloud networks are dependent on both a highly programmable software infrastructure as well as deeper visibility into what the network is doing. Legacy approaches to visibility fall short of these cloud requirements,” said Ken Duda, CTO and Senior Vice President, Software Engineering for Arista Networks. “The Arista state-streaming approach provides an open framework with unprecedented levels of completeness and granularity for network state information. Our CloudVision platform harnesses streamed network state to provide customers of all types with clearer real-time and historical visibility into their network.”

Arista said its partner ecosystem can leverage many benefits of this new telemetry solution via access to the network-wide state through common APIs at multiple integration points. Arista’s partners can access the network state either streamed directly from the devices or from the central state repository within CloudVision. The Arista CloudVision Telemetry solution is endorsed by Hewlett Packard Enterprise, SAP, Veriflow and VMware.
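
As a rough illustration of how a partner might consume streamed state, the snippet below reads from a hypothetical streaming endpoint and prints each state-change event. The URL and JSON shape are invented for illustration and do not represent Arista's actual API:

    # Hypothetical example of consuming a state-streaming telemetry endpoint.
    import json
    import requests

    def watch_state(url: str):
        """Print every state-change event pushed by a streaming endpoint (one JSON object per line)."""
        with requests.get(url, stream=True, timeout=60) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():
                if not line:
                    continue  # skip keep-alives
                event = json.loads(line)
                print(event.get("path"), "->", event.get("value"))

    # e.g. watch_state("https://cloudvision.example.net/api/v1/state/stream")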

https://www.arista.com/en/company/news/press-release/1463-pr-20160822

Tuesday, June 14, 2016

Barefoot Unveils 6.5 Tbps Tofino Switching Chip

Barefoot Networks, a start-up based in Palo Alto, California, emerged from stealth to unveil its "Tofino" switching chip and announce that it has raised $130 million, including a strategic investment from Google.

Dubbed "the fast switch every built", Barefoot’s programmable Tofino switch chip processes packets at 6.5 terabits per second, twice as fast as the previous record holder. While conventional programmable network devices such as NPUs have orders of magnitude slower than their fixed-function brethren, Barefoot said its Tofino silicon provides the first programmable forwarding plane while setting a new performance benchmark for performance, power, and price.

The silicon is designed for user programmability via the open-source P4 programming language, enabling precise control over packets and bringing entirely new features into the switch—for example, features that replace load balancers, features that replace firewalls, features that add packet-by-packet telemetry enabling rapid debug of distributed application behavior.

Barefoot said the open-source P4 language provides software developers with the compilers, tools, and applications they need to successfully program the fastest networking gear. This could eliminate "middle boxes" that add latency, complexity and cost to a data center network. Barefoot’s new compiler technology has taken P4 programs – written by customers – and converted them into blazing-fast running code executed on Tofino.  Barefoot will open an ecosystem of compilers, tools and P4 code to make P4 accessible.
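
P4 has its own syntax, but the underlying abstraction it expresses, match-action tables that the programmer populates and the pipeline executes at line rate, can be sketched conceptually in Python. This is an illustration of the model only, not P4 code and not Barefoot's toolchain:

    # Conceptual match-action table abstraction, the model P4 programs describe.
    class MatchActionTable:
        def __init__(self, default_action):
            self.entries = {}            # match key -> (action, params)
            self.default = default_action

        def add_entry(self, key, action, **params):
            self.entries[key] = (action, params)

        def apply(self, packet):
            action, params = self.entries.get(self.key(packet), (self.default, {}))
            return action(packet, **params)

    class Ipv4Forwarding(MatchActionTable):
        def key(self, packet):
            return packet["ipv4_dst"]    # exact match stands in for LPM here

    def forward(packet, port):
        packet["egress_port"] = port
        return packet

    def drop(packet):
        packet["egress_port"] = None
        return packet

    table = Ipv4Forwarding(default_action=drop)
    table.add_entry("10.0.0.2", forward, port=7)
    print(table.apply({"ipv4_dst": "10.0.0.2"}))  # forwarded out port 7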

Barefoot Networks also disclosed that it has recently closed a $57 million funding round led by Goldman Sachs Principal Strategic Investments and Google Inc. This brings total funding to more than $130 million to date.

“The basic fixed-function switch architecture was set in 1996 and has remained unchanged for twenty years,” noted Nick McKeown, co-founder and chief scientist at Barefoot Networks. “Yet everything else in the data center changed. We went from monolithic software to VM’s and then to containers and fully distributed applications. With the rise of the cloud, data center traffic patterns changed as did the role of the data center. How could a 1996 switching architecture be the right foundation for 2016’s applications? In all other parts of the data center we have moved to programmability. Tofino enables this move for networking. It empowers network owners and their infrastructure partners to design, optimize and innovate to their specific requirements.”

"Mega-scale data center operators greatly benefit from building their own networking equipment and writing the software that runs on it. The forwarding plane, though, has been off-limits to programmers because of the rigid nature of high-performance switching solutions,” noted Martin Izzard, co-founder and CEO, Barefoot Networks.  “With P4 and Barefoot, the landscape is changing; users can develop the P4 programs to define the innovative forwarding plane behavior, introducing new ways to monitor and analyze network traffic, making networks more reliable, scalable, efficient and secure."

http://www.barefootnetworks.com


  • Barefoot Networks was co-founded by Nick McKeown, a Stanford professor and co-founder of Nicira (acquired by VMware), Martin Izzard, Pat Bosshart, and Dan Lenoski, VP of Engineering.

Tuesday, January 26, 2016

Switch Commits to 100% Green for Michigan Data Center

Switch, which runs the SUPERNAP data centers in Nevada, has committed to 100% renewable energy for its upcoming SUPERNAP Michigan data center near Grand Rapids.

As part of this announcement and Switch’s continued commitment to run all of its data centers on green energy, Switch has also become a member of the WWF/WRI Renewable Energy Buyers’ Principles.

“Sustainably running the internet is one of the driving principles of Switch, which is why in our site selection process for an eastern U.S. SUPERNAP data center site, we had to find a local utility who could provide a pathway to 100 percent renewable power,” said Switch EVP of Strategy Adam Kramer. “When we first met with the team at WRI we knew that our goals aligned perfectly with the Buyers’ Principles, which is why Switch was excited to join.”

https://www.supernap.com/news/switch-plans-to-make-michigan-data-center-100-percent-green.html

Thursday, October 15, 2015

Accton Contributes Design of 100 Gigabit Ethernet Switch to OCP

Accton Technology Corp. will open source the design of its AS7500-32X 100 Gigabit Ethernet (GbE) open network switch through the Open Compute Project (OCP), making it the first switch design contributed to OCP based on the Cavium XPliant switch ASIC.

The AS7500-32X Cavium-based switch design uses the same physical switch packaging, including x86 CPU processor modules, power supplies, fans and enclosure, as the AS7700-32X 100GbE switch design which Accton contributed to the OCP in March.

Accton’s subsidiary, Edgecore Networks, is now offering prototype units of the AS7500-32X 100GbE open network switch for evaluation and software development. The switch has thirty-two QSFP28 ports in a 1U form factor, with each port supporting 100GbE, 2x50GbE, 40GbE, 4x25GbE or 4x10GbE connections. It supports the following OCP open source software:

  • Open Network Install Environment (ONIE), the universal Network Operating System (NOS) loader, which enables automated loading of compatible commercial and open-source NOS software;
  • Open Network Linux, an open-source reference OS platform for organizations developing and customizing switch software applications; and
  • Switch Abstraction Interface (SAI), a standard interface to ASICs from multiple vendors, allowing greater portability and faster introduction of NOS and application software for open switches.

“Cloud data center operators, telecommunications service providers and enterprises are all planning the deployment of next generation 25GbE and 100GbE infrastructures that can support increased capacity and services delivery with the automation, choice and control that open infrastructures provide,” said George Tchaparian, GM Data Center Networks at Accton Technology and CEO at Edgecore Networks. “Accton’s contribution of our second 100GbE switch design to OCP, and the industry’s first OCP contribution based on the Cavium XPliant switch ASIC, will further expand open network choices for use cases ranging from cloud data center fabrics and data center interconnect to central offices, Internet Exchanges, monitoring and analytics networks, and web-scale enterprises.”

http://www.accton.com
http://www.Edge-Core.com

Monday, August 31, 2015

NETGEAR Unveils 28-port 10-Gigabit Switch

NETGEAR unveiled its ProSAFE 28-port 10-Gigabit Smart Managed Switch - the industry's first 28-port (24 copper, 4 SFP+) 10-Gigabit Smart Managed Switch purpose-built for small to mid-sized businesses (SMBs), with 10GBASE-T connectivity and Advanced L2+/Layer 3 Lite features.

Key Features:

  • Advanced VLAN features such as Protocol-based VLAN, MAC-based VLAN and 802.1x Guest VLAN
  • Advanced QoS (Quality of Service) with L2/L3/L4 awareness and 8 priority queues including Q-in-Q
  • Static Routing (both IPv4 and IPv6)
  • Private VLAN
  • Dynamic VLAN assignment
  • IGMP and MLD snooping
  • Advanced Security
  • IPv6 support for management, QoS and ACL
  • Easy-to-use web-based management GUI or PC-based Smart Control Center application for multi-switch deployment


It has an MSRP in the U.S. of $4,624.00, although final pricing for end customers may vary depending on the reseller and bundled offerings.

http://www.netgear.com

Thursday, June 4, 2015

Dell'Oro: Cisco Gains Share in L2-3 Ethernet Switch Market

The Layer 2-3 Ethernet Switch market declined nearly $1 billion in the first quarter of 2015 to slightly more than $5.5 billion, according to a new report from Dell'Oro Group.

"Seasonality, especially since China has become a larger part of the market caused Ethernet Switch revenues to be down significantly in 1Q15. Despite the strong sequential market decline, Cisco gained revenue share year-over-year," said Alan Weckel, Vice President of Ethernet Switch market research at Dell'Oro Group. "Campus switching has begun an upgrade cycle to support next generation wireless LAN access points using new Multi-Gigabit technology as a catalyst. Campus switching will also get a boost from E-Rate during the summer. As we transition to the end of 2015, the data center will begin an upgrade cycle to 25 GE for server access with 100 GE starting to ramp to significant volumes. The market will also be absorbing both HP's announcement to divest H3C and Avago's announcement to acquire Broadcom. It has been almost a decade since we have seen so much vendor repositioning in the market," stated Weckel.

The report also indicates that Cisco Systems, Huawei Technologies, and Hewlett-Packard (H3C) were the top three vendors in revenue rank in China during the first quarter 2015.

http://www.delloro.com

Monday, June 1, 2015

Nokia Launches its AirFrame Data Center Servers and Switches

Nokia unveiled its AirFrame Data Center Solution for combining the benefits of cloud computing technologies with the stringent requirements of the core and radio in the telco world.


The Nokia AirFrame Data Center Solution is the company's foundation for meeting the latency and data processing requirements of the future, including 5G and distributed cloud applications. The portfolio is built around highly integrated, Intel-based servers and switches that are optimized for low latency, scalability, flexibility and business agility.

Key advantages of AirFrame:

  • Offers significant efficiency gains when running data-demanding telco applications like mobile network Virtual Network Functions (VNFs)
  • Fully compliant with IT standards and able to run the most common IT cloud applications in parallel to telco cloud
  • Enables operators to implement not only their NFV strategy, but also expand into new business models, such as renting data center capacity for customers' IT applications
  • Implements the most advanced telco cloud security practices, which have been tested and approved at the Nokia Security Center in Berlin
  • Adheres to Nokia Networks' open standards approach as well as complies with ETSI NFV, ensuring the success of telco cloud deployments
  • Ready for 5G, with an advanced cloud management solution to handle the telco cloud architecture (centralized / distributed), including security orchestration which automates and manages the lifecycle of security policies and security functions
  • Ready to support several Nokia VNFs, including OSS/CEM and the company's recently announced Radio Cloud architecture

AirFrame components include:

  • Nokia AirFrame Cloud Servers and Switches - Pre-integrated racks with ultra-dense servers, high performance switches and software defined storage, including Nokia Networks specific enhancements that make it more efficient than other solutions to run demanding VNFs
  • Data center services - AirFrame is complemented by a suite of professional services provided by the company's services experts and geared to implement, monitor and operate telco cloud data centers

"Nokia Networks is changing the game in telco cloud. We are taking on the IT-telco convergence with a new solution to challenge the traditional IT approach of the data center. From the beginning, Nokia Networks has been a forerunner in telco cloud innovation*. This newest solution brings telcos carrier-grade high availability, security-focused reliability as well as low latency, while leveraging the company's deep networks expertise and strong business with operators to address an increasingly cloud-focused market valued in the tens of billions of euros," stated Marc Rouanne, Executive Vice President, Mobile Broadband, Nokia Networks.

http://www.nokia.com

Monday, December 9, 2013

NTT Develops 10G SDN Software Switch for Carriers

NTT outlined details of a prototype, 10 Gbps high-performance SDN software switch that it developed as part of the "Research and Development of Network Virtualization Technology" program commissioned by Japan's Ministry of Internal Affairs and Communications.

NTT, which is already using SDN in its data centers for cloud services, said it developed this prototype SDN software switch to handle the large scale flow entries that will be required in wide area networks, such as those of telecommunications carriers and Internet providers.

The prototype switch still transfers large packets at a 10 Gbps wire rate even when 100K entries are in its flow tables and each packet header must be rewritten.  This makes it one of the highest performance SDN software switches to date, according to NTT.
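
Per packet, the work amounts to a flow-table lookup followed by header rewrites; the sketch below shows the shape of that operation. It is a pure illustration of the concept, whereas the NTT prototype performs it in optimised, parallelised native code to sustain a 10 Gbps wire rate across 100K entries:

    # Conceptual flow-table lookup plus header rewrite, the per-packet work of an SDN software switch.
    flow_table = {
        # match key (in_port, vlan, ipv4_dst) -> actions to apply
        (1, 100, "192.0.2.10"): {"set_vlan": 200,
                                 "set_eth_dst": "02:00:00:00:00:02",
                                 "output": 3},
    }

    def process(packet: dict) -> dict:
        key = (packet["in_port"], packet["vlan"], packet["ipv4_dst"])
        actions = flow_table.get(key)
        if actions is None:
            packet["output"] = "controller"    # table miss: punt to the SDN controller
            return packet
        for field in ("vlan", "eth_dst"):      # rewrite headers per the flow entry
            if f"set_{field}" in actions:
                packet[field] = actions[f"set_{field}"]
        packet["output"] = actions["output"]
        return packet

    print(process({"in_port": 1, "vlan": 100, "ipv4_dst": "192.0.2.10",
                   "eth_dst": "02:00:00:00:00:01"}))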

The work was carried out by NTT Network Innovation Laboratories (Yokosuka, Kanagawa) as part of research for SDN software nodes in the "O3 Project".

The switch features a "Flexible parallel Flow processing Framework."

http://www.ntt.co.jp/news2013/1312e/131209a.html



Earlier this year, leading Japanese companies, including Fujitsu, Hitachi, NEC, NTT Communications and NTT, launched the "Open Innovation over Network Platforms" research and development (R&D) project, also known as the "O3 Project".

The project, which is supported by Japan's Ministry of Internal Affairs and Communications, aims to develop network virtualization technology that enables multiple telecommunications carriers and service providers who share network resources to design and construct networks and manage their operations freely to suit their needs.

Specifically, a common SDN layer will be developed to integrate a multilayer infrastructure consisting of optical, wireless and packet communications platforms. Compared to existing SDN architectures, the O3 Project targets a common, multi-carrier framework that would enable any service provider to share resources.

In September 2013, NTT Com's Nationwide SDN for Uncompressed HDTV received the Innovation Award in Content Delivery at IBC 2013.  The service provides stable transmission of uncompressed high-definition video images between the 128 members of the Japan Commercial Broadcasters Association via a large-capacity and highly reliable network.

NTT Com said the value of SDN in this application is that it enables simplified functionality for broadcasters, such as adjusting the bandwidth requirements of each station according to their program schedule. By centrally managing the network, the technology flexibly and efficiently manages traffic to ensure the flawless and timely transmission of vast amounts of data between stations.


In June 2013, NTT showcased its Versatile Openflow vaLidaTor (VOLT) resource control system for software defined networking (SDN) at Interop Tokyo 2013.

NTT said VOLT, which was developed in partnership with Fujitsu, is able to duplicate the entire route information and configuration of OpenFlow.  This can be used to test a new network under the same conditions as the real environment.

The system consists of an MPLS edge router and controller.


Blueprint Tutorial: SDN and NFV for Optimizing Carrier Networks




This article provides some examples of how SDN and NFV can be applied to various segments of a carrier network, and how the functions of a traditional carrier network can be offloaded to a virtualized datacenter to improve end-to-end performance.


Sunday, September 8, 2013

Las Vegas' Switch SUPERNAPS Continues Rapid Growth

Switch, which operates the SUPERNAPS data centers in Las Vegas, sold a record-breaking 19 megawatts of power to 23 clients during the month of August.

Switch provides massive-scale colocation, connectivity, cloud and content solutions for more than 600 global clients. SUPERNAP 8 is built from Switch’s modular optimized design products, which include the Switch SHIELD (redundant weather-proof roofing structures), the TSC 1000 ROTOFLY (HVAC system), the T-SCIF (equipment heat containment pods), and the IRON BLACK FOREST (data center separation and containment environment).

http://www.switchlv.com/switch-expands-worlds-largest-data-center-with-nap8/

In June, Switch, which operates the massive SUPERNAP data center complex in Las Vegas, unveiled plans for its most advanced building yet.

The SUPERNAP 8 data center will be built with two independent, separate and individually rated 200 mph steel roof decks. The two roofs are located nine feet apart, are attached to the concrete and steel shell of the facility and contain zero roof penetrations. The new facility will use the Switch IRON BLACK FOREST temperature control system for heat containment and to mitigate the impact of any disaster event.  The building will also feature a patent-pending flywheel to support refrigeration and temperature controls during a power outage by delivering uninterruptible high-efficiency data center cooling.

In addition, SUPERNAP 8 will offer the same tri-redundant backup power systems and the enhanced security that Switch uses in its other buildings.

Switch notes that its Las Vegas location is extremely well connected by national fiber.  It is also located in "the safest desert on the planet, free from natural disasters and disturbances and built on the newest and greenest power grid in the United States."

Monday, August 19, 2013

Marvell's New Prestera Switching Processors Target Dynamic Access/Aggregation

Marvell introduced its Prestera DX4200 series of packet processors for the access and aggregation layers of fixed and mobile networks.

The new product family, which represents the eighth generation of Marvell switching silicon, is implemented in 28nm. The design integrates multi-core ARM v7 CPUs, a carrier-grade traffic manager and a flexible IPv6 packet processing pipeline to enable dynamic software defined networking and advanced service virtualization.

It supports CAPWAP, MPLS, VPLS, OAM, SPB and Bridge Port Extension, and offers synchronization features. An integrated Interlaken interface also enables the development of transport and circuit-switched solutions while leveraging the service enabling paradigms of the DX4200. The integrated traffic manager offers hierarchical, flow-based quality of service and massive external buffering, enabling tens of thousands of applications and users through unique queuing schemes that ensure no variance in user experience across different access models. Sampling begins in September.
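
The per-flow fairness that such a traffic manager enforces can be illustrated with a small deficit-round-robin sketch. This shows a single scheduling level only and is purely illustrative, not Marvell's hierarchical implementation:

    # Minimal deficit round robin over per-flow queues, one level of what a
    # hierarchical traffic manager stacks; packet lengths are in bytes.
    from collections import deque

    class DrrScheduler:
        def __init__(self):
            self.flows = {}  # flow id -> [queue of packet lengths, quantum, deficit]

        def add_flow(self, flow_id, quantum):
            self.flows[flow_id] = [deque(), quantum, 0]

        def enqueue(self, flow_id, pkt_len):
            self.flows[flow_id][0].append(pkt_len)

        def service_round(self):
            """Each flow may send bytes up to its accumulated deficit this round."""
            sent = []
            for flow_id, state in self.flows.items():
                queue, quantum, deficit = state
                deficit += quantum
                while queue and queue[0] <= deficit:
                    deficit -= queue.popleft()
                    sent.append(flow_id)
                state[2] = deficit if queue else 0  # idle flows do not hoard credit
            return sent

    sched = DrrScheduler()
    sched.add_flow("voice", quantum=400)   # smaller quantum: smaller bandwidth share
    sched.add_flow("bulk", quantum=1500)   # larger quantum: larger bandwidth share
    sched.enqueue("voice", 200); sched.enqueue("voice", 200); sched.enqueue("bulk", 1500)
    print(sched.service_round())           # ['voice', 'voice', 'bulk']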

"As demand for higher service density per watt increases, Marvell is uniquely positioned to offer platforms for the software defined storage, networking, mobile and compute clouds being designed today," said Ramesh Sivakolundu, vice president for the Connectivity, Services and Infrastructure Business Unit (CSIBU) at Marvell Semiconductor.

http://www.marvell.com/
