Sunday, September 23, 2012

The Case for the Big GEs... and IP-over-DWDM

by Sultan Dawood, Solutions Marketing Manager, SP Marketing Cisco Systems

No one knows better than the readers of this publication about the importance of Gigabit networks. 10 Gigabit (10G) links began appearing in big networks more than 10 years ago, and for a long time 10G seemed like enough capacity.

But that was then. In today’s optical conversations, talk tends to center on 40/100 Gig links, all the way up to Terabit advancements. Why? The volume of consumer and business bandwidth usage is astounding, on fixed and mobile networks alike – upwards of 50% compound annual growth on some portions of the “big” Internet, like last-mile access networks.

If history is any indicator, doomed is the man or woman who publicly wonders why on earth so much capacity is needed. In the 1960s, cable television providers wondered why they’d ever need to build for more than 12 (analog!) channels. Back in the early days of dial-up data connections, some wondered why we’d ever need to go beyond 56 kbps. We’ve seen this “I’ll eat my hat” scenario over and over, in the course of network expansion.

Because the majority of today’s transport networks are conveying data using 10 Gig networks, and at the same time are facing unprecedented volumes of usage, decisions about expansion tend to center on three known options:

1) Add more 10 Gig links
2) Go straight to 100 Gig
3) Find a stepping stone path to 100 Gig via 40 Gig
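As a back-of-envelope illustration of what those options mean in link counts, consider the following sketch. The 400 Gbps target demand is purely a hypothetical number chosen for illustration; real planning depends on traffic mix, protection schemes and route diversity.

```python
# Illustrative comparison of the three expansion options.
# The 400 Gbps aggregate demand is a hypothetical figure, not from the article.

def links_needed(target_gbps, link_gbps):
    """Number of parallel links required to carry target_gbps."""
    return -(-target_gbps // link_gbps)  # ceiling division

target = 400  # hypothetical aggregate demand on one route, in Gbps

for option, rate in [("1) more 10 Gig links", 10),
                     ("2) straight to 100 Gig", 100),
                     ("3) step through 40 Gig", 40)]:
    print(f"{option}: {links_needed(target, rate)} links of {rate}G")
```

The link-count gap (40 parallel 10G links versus 4 at 100G) is one intuition behind why the economics of the jump matter so much.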

What is perhaps less well known are the decision sets, and the resulting economics, of getting to 40/100 GigE using existing routers – without provisioning and maintaining electrical-to-optical-to-electrical conversion transponders, and without the operational expense of maintaining what are essentially two disparate transport networks.

If you remember one thing about this article, make it this: by converging the optical and IP layers of the network, capex and opex can be trimmed by 25-30%, according to our ongoing, live research with service providers. Path identification (traditionally handled within the “transport silo”) happens much more quickly; apps and services (handled within the “data services silo”) move more securely.

Consider: What if you could turn up a link to a customer in minutes -- not months?

IP over DWDM is an innovative option (we’d argue the option) that economically justifies 40/100 Gig adoption by eliminating unnecessary equipment and associated interfaces, including optics – thus lowering requirements for power, cooling and space. It’s been proven that integrating optical intelligence into a router makes it cognizant of any optical path degradation. That means routers can proactively ensure that apps and services in transit are protected from degradation or failure.

Why: The forward error correction (FEC) intelligence gained by integrating optics into the router provides the awareness to automatically switch to a secondary, safer path before any optical impairment affects app or service performance.

So we won’t venture into questions of whether 40/100 Gig networks are necessary. Instead we’ll look at what’s driving the world’s data capacity needs, then examine the options in getting into “the big GEs,” including the substantial economic benefits associated with converging the optical and IP layers.

The Capacity Drivers

At least three factors are driving the world’s explicit and implicit obsession with network capacity: Device proliferation, video as an app, and the data centers fueling cloud computing.

Capacity Driver #1: Devices

Think about the number of IP-connectable devices in your home or business 10 years ago, compared to now. All of them want Internet connectivity – some more so than others.

Plus, most gadgets in the device ecosystem are mobile. Not long ago, we connected to the Internet only at the office or at home, over fixed connections – desktop PCs, and laptops to some extent. The Internet wasn’t an option when outdoors, or when driving, to navigate via GPS, find a restaurant, or locate friends.

Our ongoing VNI (Visual Networking Index) research indicates that by 2016, there will be nearly 19 billion global network connections – about 2.5 per person worldwide.

Capacity Driver #2: Video

Driver number two dovetails with the first one: Video. With more and more powerful, HD video-capable screens fetching and tossing big streams of data in and out of whomever’s data cloud, the question of how and when to scale the network is more relevant than ever.

Beyond the 50+% compound annual growth in broadband usage – wired and wireless – new pressure points are arriving into the consumer and business marketplaces with alarming regularity.

Consider the spate of recent announcements from consumer electronics and PC makers about putting high-resolution screens into handhelds, tablets, laptops and televisions. High-rez screens mean high-rez streams.

Indeed, smart phones and tablets impact real-time network capacity in a big way, because most include still and video cameras, capturing images and sound in SD and HD. Video eats up capacity like nothing else (so far). Already, and again according to our ongoing VNI research, more video is streamed in HD than in SD.

At the high end of the video spectrum, the 2012 trade show scene is producing regular headlines about the pursuit of 4K resolution.

Even using the best compression on the market today (known variously as H.264, AVC and MPEG-4), a 4K stream “weighs” as much as 17 Mbps. Compare that to the 1-3 Mbps typical today for “regular” HD video compressed with H.264/AVC.

Yes, H.265 compression is on the way, and it promises to do for H.264 what H.264 did for MPEG-2 – roughly halve the bitrate for the same quality. But still: the point is that network bandwidth is under enormous strain right now, with no signs of easing up.
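To put those per-stream bitrates in link terms, a quick sketch using the figures cited above (17 Mbps for 4K, ~3 Mbps for H.264 HD) shows how fast a single link fills up. The script is illustrative only; real link budgets reserve headroom for other traffic.

```python
# Concurrent video streams that fit on one link, at the article's bitrates:
# ~17 Mbps for a 4K stream, ~3 Mbps for "regular" H.264 HD.

def max_streams(link_gbps, stream_mbps):
    """Concurrent streams that fit on a link at a given per-stream bitrate."""
    return (link_gbps * 1000) // stream_mbps

for link, gbps in [("10G", 10), ("40G", 40), ("100G", 100)]:
    print(f"{link}: ~{max_streams(gbps, 17)} 4K streams, "
          f"~{max_streams(gbps, 3)} H.264 HD streams")
```

A 10G link tops out at under 600 concurrent 4K streams, which is one way to see why 4K pushes operators toward 40/100G.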

Capacity Driver #3: Clouds and Data Centers

Consider: Networks used to move static web pages, or haul 64 kbps telephone conversations, or broadcast (in a one-to-many sense) SD video. These days, they do all that plus stream HD, (unicast and multicast) video to high resolution displays. They haul video phone conversations. They carry adaptive bit rate video, which by its nature behaves like a gas, filling all available space.

Plus, big networks traditionally were “silo’ed,” with transport and data departments and people operating largely independently of one another. Not so anymore. Why: The fetching of a web page is one thing – simple, asynchronous, not a huge strain on the network.

Today’s emerging applications are another story entirely, segueing into transport-heavy fare like the shipping and storing of enormous amounts of digital stuff – think digital pictures, videos, and cloud-based storage in general. Transport-heavy network needs require mutual and simultaneous attention from both “data” and “path/transport” departments in the organization.

“The cloud,” in all its iterations, then, is capacity driver number 3. Clouds, and the data centers that enable them, are both sourcing more network traffic -- and struggling under its weight. Anyone building a cloud designed to service big geographic areas will need multiple data centers that are interconnected – preferably intelligently.

Today’s data centers are connected via a combination of routers and transport networks. Connecting Router A in Data Center A to Router B in Data Center B, for instance, requires transport infrastructure. Traditionally and so far, that transport has been 10 Gig.

However, as bandwidth demands increase – with routers leveraging 100 Gigabit Ethernet interfaces and data centers moving large volumes of content – it makes sense to increase transport network capacity and transition from 10G to 100G DWDM. Further, it is possible to integrate optical interfaces directly into routers, an innovative and green approach, and a truly integrated solution, that justifies a faster and more cohesive transition to 40/100G links.

The Meaning and Importance of Coherence in Optical Transmission Systems

In optical terms, “coherence” refers to the ability of a lightwave to produce interference patterns that work in favor of the intended signal. Coherent optical communications products arrived in the marketplace roughly coincident with 40/100 Gig networks, because they are intrinsically suited to very long-haul links – upwards of 500 km.

Any time signals are distributed over very long distances, however, two things can happen that compromise performance. First, plant anomalies cause signal strength to lag, which necessitates amplification – but amplifiers boost both the intended signal and any noise that is present. Second, the signal spreads as it travels along the fiber, and dispersion compensation is then required to correct for that impairment.

These impairment compensation activities do not come for free – especially when the distance in question is measured in thousands of kilometers. That’s why service providers considering the shift to 40/100G seek ways to do so without adding additional equipment for signal impairment compensation.

Service providers seek the most economical yet best-performing signals on their networks as a way to control total cost of ownership: adding 40/100 Gig capability to existing routers, over existing infrastructure – even if the fiber plant is marginal in places.

This is where coherent optical systems really shine. We’ve seen (because we built it) vendor-independent 100 Gbps connectivity over a 3,000 km link, on top of existing 10 Gbps fiber infrastructure.

Here’s a real-world example: You’re running a video connection to a customer. It’s an MPLS tunnel mapped onto an optical wavelength. Let’s say that fiber degrades. With the forward error correction (FEC) techniques within IP-DWDM, thresholds can be set ahead of time, to default to another optical path.

Maybe the pre-FEC threshold is 10^-17, but at 10^-19 the router knows to switch the video connection to a cleaner path – proactively. Having ways to set thresholds and interact between layers ensures that the video connection stays solid – and your customer has no idea that a problem almost occurred.
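The threshold logic above can be sketched in a few lines. This is a conceptual model only, not router code: the threshold value, path names and the `choose_path` helper are all illustrative assumptions, and a real implementation monitors pre-FEC error rates in hardware.

```python
# Sketch of proactive protection: watch the pre-FEC bit error rate (BER)
# on the working path, and move traffic to a protection path once a
# configured threshold is crossed -- before errors become visible post-FEC.
# Threshold and path names are illustrative, not real router configuration.

SWITCH_THRESHOLD = 1e-17   # pre-FEC BER that triggers a proactive switch

def choose_path(pre_fec_ber, working="wavelength-A", protect="wavelength-B"):
    """Return the path traffic should ride, given the measured pre-FEC BER."""
    if pre_fec_ber > SWITCH_THRESHOLD:   # fiber degrading: higher BER is worse
        return protect
    return working

print(choose_path(1e-19))  # clean fiber: stays on the working wavelength
print(choose_path(1e-15))  # degrading:   moves to the protection wavelength
```

The key design point is that the switch decision is made while FEC is still correcting every error, so the application never sees a hit.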

Now What? Harmonizing Optical and Packet Transport

We’ve talked about capacity drivers, and the benefits of IP-DWDM as a way to get to 40/100 Gig without stranding prior investments in routers or optics. The fact is that the dominant type of traffic on broadband networks today is packet-based, and existing optical networks aren’t as well suited to packet-based delivery as they are to other types of traffic.

This is how IP-DWDM began, for what it’s worth. Service providers asked how to bulk up capacity without disrupting capital or operational spending. How to save money in a packet environment led to the need to do certain things differently, which led to the development of IP-DWDM.

Because of that, the drumbeat toward converging the optical and IP domains began, as a way to reduce capital and operational costs, as well as to get a better handle on network controls. Equally relevant: the ability to launch new services and apps more quickly, and more securely. IP over DWDM is one example of this convergence, delivering combined capital and operational savings of 25-35%.

Perhaps some day it will seem quaint, that at one time network architects were debating the convergence of the optical and IP layers in long-haul transport. But for now, the decision to go with IP-DWDM is still a bit maverick, for those going through it.

Why? Because getting there involves cutting across people and organizational domains. Never easy to do. Despite ongoing proof that a) 100 Gig gear exists that works over distances of 3,000 km, without need for signal compensation, and b) IP-DWDM is a cleaner solution, because it eliminates excess optical equipment and interfaces, and c) its pre-FEC can proactively re-route mission-critical data before signal paths become impaired, IP-DWDM is still in the “crawl” portion of any “crawl-walk-run” technological evolution.

We’ll end with this: Service providers will continue to sprint to keep up with capacity, and to compete with new, over-the-top providers. A more integrated and converged network lends itself better to the packet-based traffic loads of today. It scales for the future, and it saves capital and operational costs. Because one thing is certain: Data loads are not letting up anytime soon.

Sultan Dawood holds the position of Senior Marketing Manager for Core Routing and Transport Solutions at Cisco Systems. He has spent the last 18 years of his career focused on data networking and telecommunication systems working closely with both Enterprise and Service Provider customers. Prior to Cisco, Sultan held senior marketing and engineering positions at Hammerhead Systems, Motorola, 3COM, ADC Telecommunications and Litton Systems.

Sultan has a Bachelor of Science degree in Electrical Engineering from Old Dominion University in Norfolk, Virginia. He is also a Board member and the Vice President of Marketing for the Broadband Forum.

Researchers Demonstrate 1,000 Terabits per Second over 50km of Fiber

Researchers from NTT, Fujikura, Hokkaido University and Technical University of Denmark demonstrated ultra-large capacity transmission at 1 petabit (1,000 terabits) per second over a 52.4 km length of 12-core (12 light paths) optical fiber -- a new record for transmission over a single strand of fiber. One petabit per second is enough to carry 5,000 two-hour HDTV movies in a single second.

NTT said the breakthrough leverages spatial multiplexing optical communications and new multicore optical fiber (MCF). The two companies and two universities combined their expertise in multicore fiber design, fabrication techniques, and spectrally-efficient transmission technologies to carry out the experiment.

The experimental system used a new 12-core MCF structure with the cores arranged in a nearly concentric pattern. A novel fan-in/fan-out device, together with a digital coherent optical transmission scheme, was used to transmit DWDM signals in each core. The researchers said the 12-core MCF reduced signal leakage (crosstalk) between adjacent cores, which had been a problem with conventional MCF designs. The system achieved a transmission capacity of 84.5 terabits per second per core (380 Gbps per wavelength x 222 wavelength channels), for a total capacity of 1.01 petabits per second (12 x 84.5 terabits) through 52.4 km of fiber.
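The headline figures multiply out as reported, which a two-line check confirms:

```python
# Cross-checking the reported capacity: 380 Gbps per wavelength
# x 222 wavelengths per core x 12 cores ~= 1 petabit per second.

per_core_tbps = 380e9 * 222 / 1e12        # terabits per second, per core
total_pbps = per_core_tbps * 12 / 1000    # petabits per second, 12-core fiber

print(f"per core: {per_core_tbps:.1f} Tbps")  # ~84.4 Tbps
print(f"total:    {total_pbps:.2f} Pbps")     # ~1.01 Pbps
```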

The result was reported in a postdeadline paper at the European Conference and Exhibition on Optical Communications (ECOC 2012).

Dubai Builds UAE-IX Internet Exchange Modeled on Frankfurt's DE-CIX

DE-CIX, Frankfurt's massive Internet exchange, has provided know-how and support for UAE-IX, an Internet exchange in Dubai, United Arab Emirates (UAE), which goes live on October 1, 2012.

UAE-IX is a neutral Internet traffic exchange platform that interconnects global networks and, above all, network operators and content providers in the Gulf region. UAE-IX is using a fully redundant switching platform located in a neutral secure datacenter in Dubai. The new Internet Exchange will reduce latency times by up to 80 per cent and costs by up to 70 per cent for Gulf providers.

The companies noted that many Internet service providers in the region have had to exchange their traffic via Europe, Asia or North America, leading to high latency rates. Initiated by the UAE’s Telecommunication Regulatory Authority (TRA) and supported by DE-CIX, UAE-IX delivers a highly available local alternative for regional traffic exchange, localizing Internet content.

“Across continents, data traffic is on the rise,” says Harald A. Summa, Managing Director of DE-CIX Management GmbH in Frankfurt. “The Internet’s global infrastructure must grow with it so that data travel shorter distances to get to users. As the operator of the largest Internet exchange in the world, we have drawn on our long-standing expertise to help design UAE-IX. UAE-IX will turn the GCC into an independent international hub for the digital economy and will no doubt attract Internet service providers from Europe, Africa and even India, Pakistan and China.”

Frankfurt's DE-CIX Internet Exchange Hits 2 Tbps Peak

DE-CIX, the Internet exchange located in Frankfurt am Main (Germany), hit a new data throughput record last week as Internet traffic across its switching fabric exceeded the 2 Tbps (terabits per second) mark for the first time.

DE-CIX currently serves over 480 Internet service providers from over 50 countries. More than 12 petabytes of data are exchanged at DE-CIX per day.

"Although the traffic peak of over 2 Tbps marks a new high,” says Harald A. Summa, CEO at DE-CIX Management GmbH, “we do not see an end to data traffic growth on the horizon. We assume that Internet traffic will continue to grow by about 80 per cent per year in the future.” At DE-CIX, HD-TV, video and multimedia content, online gaming and cloud computing are considered the main drivers behind the continuing increase in data traffic.
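The 12-petabytes-per-day and 2 Tbps figures are mutually consistent, as a quick calculation shows, and the 80% growth assumption compounds fast:

```python
# 12 petabytes exchanged per day implies an average rate just over 1 Tbps,
# consistent with a 2 Tbps peak (traffic is bursty, so peak > average).

bytes_per_day = 12e15
avg_tbps = bytes_per_day * 8 / 86400 / 1e12
print(f"average throughput: {avg_tbps:.2f} Tbps")  # ~1.11 Tbps

# Projecting the stated 80% annual growth from a 2 Tbps peak:
peak = 2.0
for year in range(1, 4):
    peak *= 1.8
    print(f"year +{year}: ~{peak:.1f} Tbps peak")
```

At that rate the peak would pass 10 Tbps within three years, which is why the fabric's stated 40 Tbps headroom matters.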

The switching fabric of DE-CIX has the potential to scale to 40 Tbps, according to Arnold Nipper, Technical Manager at DE-CIX. "The DE-CIX peering infrastructure has a star-shaped topology and is spread out over a total of twelve data centers operated by different providers in the Frankfurt metropolitan area. The center of the DE-CIX peering star is composed of two redundant core switch clusters, one active and the other in hot standby mode. If there are any problems with the operative switch cluster, data traffic is immediately and automatically – in other words, within milliseconds – routed to the other switch cluster so that data streams can flow continually without interruption. The central core switch clusters are redundantly connected to 14 other switches which are in turn connected to the ISPs."

  • Equipment deployed in the DE-CIX distributed fabric includes Force10 Networks' Terascale platform.

Friday, September 21, 2012

Australia's NBN Co Opens National Contact Centre

Australia's NBN Co opened its Gold Coast National Contact Centre. The facility will handle queries from all over Australia as the company ramps up the construction of its fibre optic broadband network. The contact centre is forecast to eventually house more than 100 workers. NBN Co is already handling over 8,000 inbound and outbound calls, emails, letters and web enquiries about the NBN each month.

BlackBerry Outage Hits Europe and Africa

Research in Motion (RIM) suffered a widespread outage of its BlackBerry service for customers across Europe and Africa.  Messages were delayed by up to 3 hours.

Thorsten Heins, RIM President and CEO, apologized for the disruption and said an investigation is underway.

Thursday, September 20, 2012

T-Systems Wins EUR 400M Outsourcing Deal in Spain

Deutsche Telekom subsidiary T-Systems has been awarded an outsourcing contract worth more than EUR 400 million in total by the Catalan government.

 T-Systems will be responsible for operating workstation computers and applications, and providing user support. In addition, the Deutsche Telekom subsidiary is to network the public administration sites and provide telecommunications services and the data center infrastructure. 

"The deal with Catalonia is one of the biggest deals we have won outside Germany so far," said Deutsche Telekom Board of Management member and CEO of T-Systems Reinhard Clemens. "The economic situation in European countries is creating a favorable climate for big deals. When longer-term contracts come up for renewal, more and more large enterprises are looking to move partly over to cloud computing. And in Europe we are among the top players in this area."

CohesiveFT Joins ONF

CohesiveFT, a start-up providing enterprise application-to-cloud migration solutions, has joined the Open Networking Foundation (ONF).

CohesiveFT, which was founded in 2006 and is based in Chicago, supplies an application-controlled SDN product that provides control of addressing, topology, protocols, and encrypted communications for devices deployed to virtual infrastructure or cloud computing centers.

"Enabling enterprises to run business operations via the cloud is rapidly becoming an imperative for every organization. We have been working with OpenFlow and SDNs since 2008 and joined ONF to share our knowledge with the community of developers dedicated to creating the next generation enterprise network,” said CohesiveFT CEO Patrick Kerpan.

JDSU to Support RCS VoLTE Interoperability Test

JDSU announced its participation in the RCS VoLTE Interoperability Test (IOT) Event 2012 organized by the MultiService Forum (MSF), ETSI, and GSMA.

The event will be hosted by Telecom Slovenia Group in Sintesio, Kranj, Slovenia and by China Mobile in China Mobile Research Institute Laboratory in Beijing, China, from September 24 to October 12, 2012.

MSF will publish a whitepaper following the event to share the results and findings.

"JDSU is excited to again be part of such a pivotal series of interoperability test events critical to enabling the future of LTE and 4G technologies," said William Vink, a practice leader in JDSU’s Communications Test and Measurement business segment.

Zayo Says AboveNet National Network Integration Complete

Two months after completing its acquisition of AboveNet, Zayo reported that integration of the two networks is complete.  The resulting combined network nearly doubles Zayo’s national network reach, and provides connectivity that spans over 65,000 route miles in 45 states and 7 countries.

Some key points:

  • Previously separate DWDM (Wavelengths), Ethernet and IP systems have been linked with high bandwidth connectivity between markets.
  • The interconnection involved deploying new network equipment to link the fiber networks.
  • Ethernet and IP networks allow customers to access Zayo’s Tier 1 IP network by extending across and into Metro markets shared by Zayo and AboveNet.
  • Network management and customer support have been consolidated into a single Network Control Center. Customers will receive support on all services through a single point of contact that can access all their service records as well as related network elements across the combined network.
  • Customers can now access Zayo’s national and dense metro networks across points-of-presence in over 200 markets, spanning 18 of the 20 largest markets in the US as well as smaller markets.

The integration also brings access to major cities across Europe including London, Paris, Frankfurt and Amsterdam through Zayo’s transatlantic capacity and European fiber networks.

Web Giants Back New Lobby Group -- The Internet Association

Fourteen of the largest web companies have joined forces to create The Internet Association, a new public policy organization dedicated to strengthening and protecting a free and innovative Internet. The Internet Association said its goal is to ensure "that the Internet will always have a voice in Washington and a seat at the table."

Member companies include AOL, eBay, Expedia, Facebook, Google, IAC, LinkedIn, Monster Worldwide, Rackspace, TripAdvisor, Yahoo!, and Zynga.

“A free and innovative Internet is vital to our nation’s economic growth,” said Michael Beckerman, President and CEO of The Internet Association.  “These companies are all fierce competitors in the market place, but they recognize the Internet needs a unified voice in Washington.  They understand the future of the Internet is at stake and that we must work together to protect it.”

Ericsson Opens R&D Lab in Nanjing, China

Ericsson inaugurated a new Nanjing R&D Center building covering a total area of 11,700 square meters. The Nanjing R&D Center, one of five such facilities worldwide, currently employs about 500 R&D engineers working on software and hardware for various communication standards, including GSM, WCDMA, LTE FDD and TDD (TD-LTE). The radio network controllers and radio base stations developed by the center have been deployed by major operators in more than 100 networks around the world.

Ericsson also marked the 20th anniversary of Nanjing Ericsson Panda Communications Co. Ltd., which has grown to become Ericsson's largest production supply center.

Alcatel-Lucent Cites Deployment of 400G Photonic Service Engine Chip

SK Broadband has deployed Alcatel-Lucent’s 100G optical coherent technology to address fast-growing subscriber demand for IPTV, high-definition video, high-speed Internet access, Voice over IP, and advanced business services in Seoul and the neighboring GyungGi province.

The deployment uses the 100G optical coherent technology in the 1830 Photonic Service Switch (PSS), which employs Alcatel-Lucent's new 400G Photonic Service Engine (PSE) chip. The 1830 PSS can support a mixture of 10G, 40G and 100G channels on the same fiber pair.

Rajeev Singh-Molares, President, Asia Pacific for Alcatel-Lucent said: “SK Broadband is a top player in one of the world’s most competitive markets for communications services, serving a population that has very high expectations in terms of service quality. Our 100G technology, proven in the global market, will dramatically expand the capacity of SK Broadband’s network, helping ensure an excellent quality of experience for their subscribers. As importantly, we are providing SK Broadband with a flexible platform that they can use to expand their capacity to 400G down the road, with only a minimal investment.”

  • In March 2012, Alcatel-Lucent unveiled its Photonic Service Engine (PSE), a new chip for coherent optical networking that supports data rates of 400 Gbps.  Alcatel-Lucent said its 400G PSE chip can be deployed in a broad range of network configurations - from metro to regional to ultra-long haul - and transmit wavelengths over existing or new photonic lines. It is designed specifically for use in a family of line cards in the Alcatel-Lucent 1830 Photonic Service Switch (PSS). Specifically, the company is planning to use the PSE in a 100G muxponder card, a 100G transponder and a 100G backplane uplink. Alcatel-Lucent is also pushing ahead with a 400G line card for the 1830 Photonic Service Switch.
  • In June 2011, Alcatel-Lucent unveiled its 400 Gbps, "FP3" network processor for enabling the full stack of services over IP routers. The FP3 processor, which is scheduled to appear in Alcatel-Lucent's service router portfolio in 2012, supports 400 Gbps line rates, sufficient for handling 70,000 simultaneous High Definition video streams. It leverages 40nm process technology and represents the evolution of the company's 100 Gbps FP2 silicon, which was introduced in 2008 using 90nm process technology. It packs 288 RISC cores operating at 1 GHz. This compares with 112 cores at 840 MHz in the previous generation FP2 device. The new design uses 50% less power per bit than its predecessor.
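The FP3 sizing in the bullet above implies a per-stream budget that is easy to verify, and it comes out at a plausible rate for an H.264 HD stream:

```python
# 400 Gbps line rate handling 70,000 simultaneous HD streams implies
# roughly 5.7 Mbps per stream -- a typical H.264 HD bitrate.

line_rate_gbps = 400
streams = 70_000
per_stream_mbps = line_rate_gbps * 1000 / streams
print(f"~{per_stream_mbps:.1f} Mbps per HD stream")
```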

Wednesday, September 19, 2012

Arista's 7150 Switch Focuses on Low-Latency, Virtualized Data Centers

Arista Networks unveiled a new low-latency data center switch family designed for software-defined networking (SDN) in Big Data, Cloud Networks, Financial Trading, HPC and Web 2.0 environments. The new switches are designed to interoperate with SDN controllers for network-wide virtualization, virtual machine(VM) mobility and network services.

The new Arista 7150 Series offers up to 64 wire-speed 1/10 GbE ports or 16 40GbE ports.  Notably, the 7150 Series supports VXLAN tunnels at wire-speed, supporting workload mobility between physical and virtual machines.

In terms of performance, Arista achieves 40GbE port-to-port latency of 350 nanoseconds for Layer 2/3 forwarding. The switches support advanced network services, such as Network Address Translation, IEEE 1588 Precision Time Protocol, and congestion management.

Arista said the wire-speed Network Address Translation (NAT) capability eliminates hundreds of microseconds of forwarding delay in HPC and financial trading architectures. The advanced Latency Analyzer (LANZ+) functions provide application-level microburst detection, congestion monitoring and analysis essential to optimizing big data and other performance-sensitive applications. The flexible forwarding path enables new packet formats to be parsed and forwarded with deterministic performance, providing investment protection. Along with open EOS APIs, the 7150 Series offers monitoring, analysis and forensic capabilities for both coarse and fine-grained views of data flows and network activities, including stateless load balancing and network analyzer functionality. An additional AgilePort capability allows four individual 10Gb ports to be combined into a single 40Gb port for further scale and simple network migrations.

"We are excited to collaborate with Arista on the 7150 Series, designed with the Intel® Ethernet Switch FM6000 silicon for flexible forwarding. Together with Programmable Arista EOS, new SDN functionality can be achieved with a common platform. Such versatility is a breakthrough innovation in the industry and a testament to the world-class design achieved by Arista and Intel working together," said Diane Bryant, Vice President and General Manager, Datacenter Group at Intel.

"Arista has combined a flexible forwarding data path with the Arista EOS (Extensible Operating System) to deliver breakthrough latency, power, density and advanced SDN features in a compact 1U form-factor. The Arista 7150 is truly the first next generation SDN switch for virtualized data centers,” stated Andy Bechtolsheim, CDO and Chairman of Arista Networks.

Broadcom Powers Next Gen 10G EPON

Broadcom introduced a new family of high-density dual, quad and octal port single-chip 10G-EPON Optical Line Terminals (OLTs) that will pave the way for next-generation FTTX access networks.

The new BCM5553x family supports both 10 Gbps (10G) symmetric and asymmetric data rates combined with 1 Gbps (1G)-EPON co-existence on a single fiber, while enabling 3G/LTE mobile backhaul and enterprise business services on a common fiber access network.

Key Features
  • IEEE 802.3av, SIEPON P1904.1/D3.0, China Telecom 3.0, and CableLabs DPoE compliant
  • Simultaneous 10G symmetric and asymmetric operation with 1.25 Gbps/2.5 Gbps co-existence
  • 8x 10G IEEE 802.3av-compatible MACs with integrated 10G and 1G burst-mode OLT SerDes
  • Supports up to 512 subscribers per 10G port with up to four classes of service per subscriber
  • Network Synchronization support for Mobile Backhaul over PON (1588 v2, ToD/1PPS, SyncE)
  • Support for extended-reach PON up to 100 km
  • Support for DOCSIS Mediation Layer (DML) middleware for Cable MSO applications
The new BCM5553x 10G EPON OLT chips can be matched with Broadcom's BCM88350 family of integrated single-chip traffic manager/packet processors and BCM55030 optical network unit (ONU) SoCs for a complete end-to-end solution set.
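Some rough per-subscriber arithmetic puts the 512-subscribers-per-port figure in perspective. The calculation below is a simple average share, not a performance claim: real PON deployments rely on statistical multiplexing, so individual subscribers can burst far higher.

```python
# Average bandwidth share if a 10G PON port were fully and evenly loaded
# by 512 subscribers (illustrative; PONs statistically multiplex, so
# per-subscriber peak rates can be much higher than this average).

port_gbps = 10
subscribers = 512
avg_share_mbps = port_gbps * 1000 / subscribers
print(f"~{avg_share_mbps:.1f} Mbps average per subscriber")
```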

Broadcom Samples Universal Digital Front End for Base Stations

Broadcom introduced a Universal Digital Front End chip for wireless base stations and emerging HetNet radio applications.  

The BCM51030, which is now sampling, offers the full functionality of a digital front end (DFE) platform on a single chip, with up to 10x wider bandwidth support to address fragmented spectrum. The chip leverages Broadcom's VersaLine Digital Pre-Distortion (DPD) technology to adapt in real time to any combination of wireless protocols (2G, 3G and 4G), power amplifiers and frequency bands.

Cisco's Nexus 3548 Low Latency Switch Targets High Frequency Trading

Cisco introduced its lowest latency data center switch to date -- the Cisco Nexus 3548 with Algorithm Boost (Algo Boost). The Cisco Nexus 3548, which promises network-access performance as low as 190 nanoseconds (ns), is specifically optimized for high performance computing, high performance trading, and big data environments.

Specifically, the Nexus 3548 one-rack-unit (1RU) 10 Gigabit Ethernet switch running in "warp mode" offers latencies as low as 190 ns in environments with small to medium Layer 2 and Layer 3 scaling requirements. This makes it the fastest full-featured Ethernet switch on the market, according to the company.

Cisco's Algo Boost technology is integrated in the ASIC switching silicon to provide granular visibility into how the switch is performing. The switch leverages Precision Time Protocol (PTP) to keep the entire infrastructure highly synchronized, enabling trading firms to correlate network events and better achieve regulatory compliance and digital forensics. Active Buffer Monitoring watches buffer utilization and proactively alerts users to emerging congestion points before they degrade application performance.

Cisco is also supporting Intelligent Traffic Mirroring, which consists of filtering and nanosecond time-stamping of captured traffic. This can help traders gain greater visibility into why gapping, slippage and slow order situations occur, correlating these trends with analytics tools to help enable smarter trading decisions.
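The value of nanosecond time-stamping in mirrored captures is that events recorded at different points in the fabric can be ordered on a single PTP-synchronized timeline. A minimal sketch of that correlation step, using hypothetical capture data rather than any Cisco tool or API:

```python
# Hypothetical example: correlate timestamped events from two mirror
# points (ingress and egress of the fabric) that share a PTP clock,
# to measure per-order transit latency in nanoseconds.
ingress = [("order-1", 1_000_000_100), ("order-2", 1_000_000_450)]  # (id, ns)
egress  = [("order-1", 1_000_000_390), ("order-2", 1_000_000_910)]

egress_by_id = dict(egress)
for order_id, t_in in ingress:
    latency_ns = egress_by_id[order_id] - t_in
    print(f"{order_id}: {latency_ns} ns through the fabric")
```

Without a shared nanosecond-accurate clock, subtracting timestamps from two different devices like this would be meaningless, which is why PTP synchronization and time-stamped mirroring are paired features.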

Cisco said these capabilities in the new Nexus 3548 can give traders a competitive advantage in globally interconnected financial markets.

"Today, Cisco has leapfrogged our competitors in delivering a full featured switch that offers the lowest latency Ethernet in the networking industry. The Nexus 3548 with the unique Cisco Algo Boost technology implemented in ASIC provides a robust feature set to give financial traders more control over their sophisticated trading algorithms and respond more quickly to the changes in the market. In addition to the performance, this unique ultra-low-latency Ethernet technology is part of the unified data center fabric and offers strong total cost of ownership for commercial high-performance computing and big data environments as well as scale-out storage topologies," stated David Yen, senior vice president of Data Center Group at Cisco.

Huawei Looks to Bring SDN to IP Core, Aggregation and Access Infrastructure

Huawei has begun supporting Software Defined Network (SDN) capabilities on its Service Provider routers. The company said its SDN solution is based on its high-performance, large-capacity hardware platform and fully distributed "VRP" software, which enables a centralized control plane and a software-defined forwarding plane.

Huawei will support OpenFlow for decoupling hardware and software. By abstracting the control layer and the orchestration layer, Huawei said its SDN Enabled router lets carriers rapidly deploy new services without changing the forwarding hardware. Another advantage of SDN, according to Huawei, is that end-to-end path calculation based on routing policies becomes possible with a centralized controller. The SDN Enabled router is also expected to simplify IP network management.
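One way to picture "end-to-end path calculation with a centralized controller" is a controller that holds the full network topology, computes a path itself, and then installs flow rules hop by hop. A generic sketch under that assumption (the topology and code are illustrative, not Huawei's implementation):

```python
import heapq

# Toy topology as seen by the controller: node -> {neighbor: link cost}.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_path(graph, src, dst):
    """Dijkstra over the controller's global view of the network."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []

cost, path = shortest_path(topology, "A", "D")
# A real controller would now push OpenFlow rules to each switch in `path`.
print(cost, path)  # 4 ['A', 'B', 'C', 'D']
```

The contrast with traditional distributed routing is that policy lives in one place: changing the cost model here changes every path in the network, with no per-device reconfiguration.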

For IP core networks, Huawei said its SDN Enabled router can realize control plane distribution and virtualization across chassis and across devices while achieving >99.999% reliability. At the metro convergence layer, Huawei will offer an SDN Enabled broadband network gate (BNG), while at the access layer it will also support a series of network virtualization solutions.

Gai Gang, President of Huawei's Carrier IP Product Line, said: "Huawei’s SDN Enabled router can help carriers cut OPEX significantly and alleviate impacts on networks caused by service changes through network virtualization and programmability. In addition, the network openness enables carriers to continuously create value from existing resources. Huawei's NE40E high-end router already supports the OpenFlow 1.2 protocol, and has been tested for interoperability at the Open Networking Foundation (ONF) this year."

T-Mobile USA Appoints John Legere as CEO - former Global Crossing Exec

Deutsche Telekom has appointed John Legere as CEO of T-Mobile USA, replacing Jim Alling, who will return to his position of Chief Operating Officer.

Legere, a 32-year veteran of the U.S. and global telecommunications and technology industries, is the former CEO of Global Crossing, where he successfully transformed the company to become a leading provider of IP services worldwide. Prior to joining Global Crossing, Mr. Legere was CEO of Asia Global Crossing, a Microsoft, Softbank and Global Crossing joint venture. Before that, Mr. Legere served as Senior Vice President of Dell Computer Corporation, where he was President of the Company’s operations in Europe, the Middle East, Africa, as well as in the Asia-Pacific region.

  • In October 2011, Level 3 Communications completed its acquisition of Global Crossing Limited. The merger combines Level 3's U.S. and European footprint with Global Crossing's extensive international, intercity network. Level 3 will now operate a services platform for medium to large enterprise, wholesale, content and government customers, anchored by extensive owned fiber networks on three continents in more than 45 countries as well as substantial undersea facilities. The combined business had pro forma 2010 revenues of $6.2 billion and pro forma 2010 Adjusted EBITDA of $1.3 billion before synergies and $1.6 billion after expected synergies. Corporate headquarters will remain in Broomfield, Colorado. James Q. Crowe is CEO of the company.

Verizon and Union Reach Labor Agreement

Verizon Communications Inc. and the Communications Workers of America (CWA) and the International Brotherhood of Electrical Workers (IBEW) reached an agreement on new, three-year contracts covering about 43,000 wireline associates in the East. The deal will be submitted to union members for a vote.

"We believe this is a fair and balanced agreement that is good for our employees as well as for the future of the Wireline business," said Marc Reed, Verizon's chief administrative officer. "It provides competitive wages, valuable benefits and affordable quality health care while giving the company new flexibility to better serve customers and become more efficient."