
Wednesday, August 16, 2017

Facebook picks Ohio for its latest data center

Facebook has selected New Albany, Ohio as the location for its 10th major data center. New Albany is a town of about 8,500 people located in the geographic center of Ohio, about 20 miles to the northeast of Columbus, and at an elevation of 1,000 feet.

Like Facebook's other recent data center projects, this new facility will be powered 100% by renewable energy and it will use Open Compute Project architecture and principles, including direct evaporative cooling by outdoor air.

The New Albany data center will be 900,000 square feet in size and located on a 22 acre parcel. Media reports stated that Facebook plans to invest $750 million in the project. The New Albany data center is expected to go online in 2019.

https://www.facebook.com/NewAlbanyDataCenter/?fref=mentions

Locations of Facebook data centers:

  • Prineville, Oregon
  • Forest City, North Carolina
  • Luleå, Sweden
  • Altoona, Iowa
  • Fort Worth, Texas
  • Clonee, Ireland
  • Los Lunas, New Mexico
  • Odense, Denmark (announced Jan 2017)


Thursday, May 11, 2017

Facebook dreams of better network connectivity platforms – Part 2

Preamble

Facebook has a stated goal of reducing the cost of network connectivity by an order of magnitude. To achieve this, its labs are playing with millimetre wave wireless, free-space optics and drones in the stratosphere.

Project Aquila takes flight

At this year's F8 conference, Facebook gave an update on the Aquila drone aircraft, which is being assembled in California's Mojave Desert. The Aquila project is cool - pretty much everything about this initiative, from its name to its sleek design, has an aura that says 'this is cool'. Who wouldn't want to be developing a solar-powered drone with the wingspan of a Boeing 737? Using millimetre wave technology onboard Aquila, Facebook has achieved data transmission speeds of up to 36 Gbit/s over a 13 km distance, and using free-space optical links from the aircraft it has achieved speeds of 80 Gbit/s over the same distance.

Several media sources reported a technical setback last year (rumours of a cracked airframe), but that appears to be behind the project now. At F8, Facebook said Aquila has progressed and is now ready for field testing. However, here again, one element that seems to be missing is the business case: just where is this aircraft going to fly, and who will pay for it?

As described by Facebook, Aquila will serve regions of the planet with poor or no Internet access. Apparently, this would not include the oceans and seas, nor the polar regions, where such an aircraft might have to hover for months or years before serving even one customer. Satellites already cover much of our planet's surface, and for extremely remote locations this is likely to remain the only option for Internet access. New generations of satellites, including medium earth orbit (MEO) constellations, are coming with improved latency and throughput. So Facebook's Aquila must aim to be better than the satellites.

The aircraft is designed to soar and circle at altitudes of up to 90,000 feet during the day, slowly descending to 60,000 feet by the end of the night. The backhaul presumably will be a free-space laser link to a ground station below. At such a height, Aquila would be above the weather and above the jet stream. During the day, with an unobscured view of the sun, it would recharge the batteries needed to keep flying at night.

Apart from satellites, the alternative architecture for serving such regions would be conventional base stations mounted on tall masts and connected via fibre, microwave or satellite links. Many vendors already offer solar-powered versions of these base stations, and there are plenty of case studies of their successful use in parts of Africa. The advantages over a high-flying drone are obvious: mature technology, fixed towers, known costs, and no possibility of dangerous or embarrassing crashes.

One could imagine that the Facebook approach might bring new Internet access possibilities to areas such as the Sahara, the Atacama, or islands in the Indonesian archipelago. But it is not clear if Aquila's onboard radios would be powerful enough to penetrate dense forests, such as in the Amazon or Congo. So, if the best deployment scenario is a desert or island with some humans but insufficient Internet access, why is satellite service not a viable option? The likely answer again is economics. Perhaps the populations living in these regions simply have not had the money to purchase enough smartphones or laptops to make it worthwhile for a carrier to bring service.

A further consideration worth noting is that it may be difficult for an American company to secure permission to have a fleet of drone aircraft circling permanently over a sovereign nation. Intuitively, many people would not feel comfortable with a U.S. drone circling overhead, even if it were delivering faster social media.

Designing a communications platform for emergency deployments

Facebook's connectivity lab is also interested in disaster preparedness. At the F8 keynote, it unveiled Tether-tenna, a helicopter-drone that carries a base station and is connected to a mooring station by a tether carrying fibre and high-voltage power. The system is designed for rapid deployment after a natural disaster and could provide mobile communications over a wide area. But is this a complex technology that provides only minimal benefit (certainly not an order of magnitude) over existing solutions?

The closest equivalent in the real world is the cellular-on-wheels (COWs) segment, which is now commonly used by most mobile operators for extending or amplifying their coverage during special events such as outdoor concerts and football matches. A typical COW is really just a base station mounted on a truck or trailer. After being hauled to the desired location, the units can be put into operation in a matter of minutes, using on-board batteries, diesel generators or attachment to the electrical grid. The units often have telescoping masts that extend 4-5 metres in height.

In comparison to a COW, Facebook's Tether-tenna heli-drone will have a height advantage of perhaps 100 metres over the competition, enabling it to extend coverage over a greater range. However, the downsides are quite apparent too. Weight restrictions on the heli-drone, which must also carry the weight of the tether, will be more limiting than on a mast, meaning the Tether-tenna will not provide the density of coverage possible with a COW, thereby limiting its potential use cases.
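
The coverage gain from antenna height can be roughly quantified with the standard radio-horizon approximation. The short Python sketch below compares a COW's telescoping mast with a tethered drone; the 5-metre and 100-metre heights are illustrative assumptions, not figures quoted by Facebook or AT&T, and real cellular range also depends on power, frequency and terrain.

    # Rough radio-horizon comparison: COW mast vs. tethered heli-drone.
    # d_km ~ 3.57 * sqrt(h_m) is the standard line-of-sight horizon
    # approximation; treat the results as optimistic upper bounds.
    from math import sqrt

    def horizon_km(height_m: float) -> float:
        return 3.57 * sqrt(height_m)

    for label, h in [("COW mast (5 m)", 5), ("Tether-tenna (100 m)", 100)]:
        print(f"{label}: ~{horizon_km(h):.1f} km line-of-sight horizon")

At these assumed heights the horizon works out to roughly 8 km versus 36 km, which is the sense in which the heli-drone extends coverage over a greater range.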

In addition, a crashing heli-drone could do a lot of damage to people or property on the ground, and wind would be a major factor, as would lightning strikes. There is also the possibility of collisions with other drones, airplanes or birds. Therefore ensuring safety might require a human operator to be present when the drone is flying, and insurance costs inevitably would be higher than any of the many varieties of COWs that are already in service.

AT&T has a more elegant name for this gear, preferring to call them Cells on Light Trucks (COLTs). During the recent Coachella 2017 music festival in California, AT&T deployed four COLTs equipped with high-capacity drum set antennae, which offer 30x the capacity of a traditional, single-beam antenna. AT&T reported that the COLTs were instrumental in handling the 40 terabytes of data that traversed its network during the multi-day event - the equivalent of 113 million selfies. Data traffic from Coachella was up 37% over last year, according to AT&T. Would a heli-drone make sense for a week-long event such as this? Probably not, but it's still a cool concept.

All of this raises the question: is a potential business case even considered before a development project gets funded at Facebook?

In conclusion, Facebook is a young company with a big ambition to connect the unconnected. Company execs talk about a ten-year plan to advance these technologies, so they have the time and money to play with multiple approaches that could make a difference. A business case for these three projects may not be apparent now, but they could evolve serendipitously into something valuable.

Wednesday, May 10, 2017

Facebook dreams of better network connectivity platforms – Part 1


Facebook's decision to launch the Open Compute Project (OCP) six years ago was a good one. At the time, Facebook was in the process of opening its first data centre, having previously leased space in various third-party colocation facilities. As it constructed this first facility in Prineville, Oregon, the company realised that it would have to build faster, cheaper and smarter if this strategy were to succeed, and that to keep up with its phenomenal growth it would have to open massive data centres in multiple locations.

In 2016, Facebook kicked off the Telecom Infra Project (TIP) with a mission to take the principles of the Open Compute Project (OCP) model and apply them to software systems and components involved in access, backhaul and core networks. The first TIP design ideas look solid and have quickly gained industry support. Among these is Voyager, a 'white box' transponder and routing platform based on Open Packet DWDM. This open line system will include YANG software data models of each component in the system and an open northbound software interface (such as NETCONF or Thrift) to the control plane software, essentially allowing multiple applications to run on top of the open software layer. The DWDM transponder hardware includes DSP ASICs and complex optoelectronic components, and thus accounts for much of the cost of the system.
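
As a rough illustration of what such an open northbound interface could make possible, the sketch below uses the Python ncclient library to pull operational state from a hypothetical Voyager-style NETCONF agent. The host address, credentials and the OpenConfig-like subtree filter are placeholders for illustration, not details from any published TIP specification.

    # Minimal sketch: query a (hypothetical) Voyager-style NETCONF agent
    # for platform/transponder state over its open northbound interface.
    # Requires: pip install ncclient. Host, credentials and the YANG
    # subtree below are illustrative placeholders only.
    from ncclient import manager

    SUBTREE_FILTER = """
    <components xmlns="http://openconfig.net/yang/platform"/>
    """  # assumes an OpenConfig-style platform model is exposed

    with manager.connect(host="10.0.0.1", port=830,
                         username="admin", password="admin",
                         hostkey_verify=False) as m:
        reply = m.get(filter=("subtree", SUBTREE_FILTER))
        print(reply.xml)  # raw XML; a controller would parse this into
                          # its own data model or hand it to an SDN app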

The hardware design leverages technologies implemented in Wedge 100, Facebook's top-of-rack switch, including the same Broadcom Tomahawk switching ASIC. It also uses the DSP ASIC and optics module (AC400) from Acacia Communications for the DWDM line side with their open development environment. Several carriers and data centre operators have already begun testing Voyager platforms from multiple vendors.

In November 2016, Facebook outlined its next TIP plans, including Open Packet DWDM for metro and long-haul optical transport networks. This idea is intended to enable a clean separation of software and hardware based on open specifications. Again, there is early support for a platform with real-world possibilities, either within Facebook's global infrastructure or as an open source specification that is ultimately adopted by others.

What's cooking at Facebook's network connectivity labs

At its recent F8 Developer Conference in San Jose, Facebook highlighted several other telecom-related R&D projects from its network connectivity lab that seem to be more whimsical fancy than down-to-earth practicality. In the big picture, these applied research projects could be game-changers in the race to reach the billions of people worldwide currently without Internet access, all of them potential Facebook users of the future. Facebook said its goal here is to bring down the cost of connectivity by an 'order of magnitude', a pretty high bar considering the pace of improvement already seen in mobile networking technologies.

This article will focus on three projects mentioned at this year's F8 keynote, namely: Terragraph, a 60 GHz multi-node wireless system for dense urban areas that uses radios based on the WiGig standard; Aquila, a solar-powered drone for Internet delivery from the stratosphere; and Tether-tenna, a sort of helicopter drone carrying a base station. It is not clear whether these three projects will eventually become part of TIP, or even whether they will progress beyond lab trials.

Terragraph

Terragraph is Facebook's multi-node wireless system for delivering high-speed Internet connectivity in dense urban areas, capable of gigabit speeds to mobile handsets. The scheme, first announced at last year's F8 conference, calls for IPv6-only Terragraph nodes to be placed at 200-metre intervals. Terragraph will incorporate commercial off-the-shelf components and aim for high-volume, low-cost production. Facebook noted that up to 7 GHz of bandwidth is available in the unlicensed 60 GHz band in many countries, while U.S. regulators are considering expanding this to a total of 14 GHz. Terragraph will also leverage an SDN-like cloud compute controller and a new modular routing protocol that Facebook has optimised for fast route convergence and failure detection. The architecture also tweaks the MAC layer to address shortcomings of TCP/IP over a wireless link. The company says the TDMA-TDD MAC layer delivers up to a 6x improvement in network efficiency while being more predictable than the existing WiFi/WiGig standards.
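
The 200-metre node spacing makes it easy to estimate how much hardware a deployment implies. A back-of-envelope sketch, assuming a simple square grid (actual Terragraph deployments follow street geometry, so treat the result as a rough floor):

    # Back-of-envelope node count for a Terragraph-style 60 GHz mesh,
    # assuming nodes on a square grid at 200 m spacing. Real deployments
    # follow street geometry, so treat this as a rough floor.
    def nodes_for_area(area_km2: float, spacing_m: float = 200.0) -> int:
        nodes_per_km2 = (1000.0 / spacing_m) ** 2  # 25 nodes/km^2 at 200 m
        return round(area_km2 * nodes_per_km2)

    for city_km2 in (10, 50, 100):
        print(f"{city_km2} km^2 of dense urban area -> ~{nodes_for_area(city_km2)} nodes")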

At the 2017 F8 conference, Facebook talked about how Terragraph is being tested in downtown San Jose, California, a convenient location given that it is right next door to Facebook. Weather should not be a significant factor, since San Jose does not experience the rolling summer fog of nearby San Francisco, nor does it suffer torrential tropical downpours, whiteout blizzard conditions, scorching summer heat, or Beijing-style air pollution that could obscure line-of-sight.

While the trial location might be ideal, one should also consider in which cities Terragraph would be practical. First, there are plenty of WiFi hotspots throughout San Jose, smartphone penetration is pretty much universal, and nearly everyone has 4G service. Heavy data users have the option of unlimited plans from the major carriers. So maybe San Jose only serves as the technical trial, and the business case is more applicable to Mexico City, Manaus, Lagos, Nairobi or other such dense urban areas.

At the F8 conference, Facebook showed an AI system being used to optimise small cell placement from a 3D map of the city centre. The 3D map included data for the heights of buildings, trees and other obstacles. The company said this AI system alone could be a game changer simply by eliminating the many hours of human engineering that would be needed to scope out good locations for small cells. However, the real world is more complicated. Just because the software identifies a particular light pole as an ideal femtocell placement does not mean that the city will approve it. There are also factors such as neighbour objections, pole ownership, electrical connections, etc., that will stop the process from being fully automated. If this Terragraph system is aimed at second or third tier cities in developing countries, there is also the issue of chaotic development all around. In the shanty towns surrounding these big conurbations, legal niceties such as property boundaries and rights-of-way can be quite murky. Terragraph could be quite useful in bringing low-cost Internet into these areas, but it probably does not need fancy AI to optimise each small cell placement.
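
For flavour, the toy sketch below shows what a much cruder version of that placement optimisation might look like: a greedy search that picks candidate poles covering the most uncovered demand points within a fixed radius. This is not Facebook's method; their system reasons over a 3D city model with line-of-sight checks, which is not modelled here, and all the coordinates are invented.

    # Toy greedy placement: pick candidate poles that cover the most
    # uncovered demand points within radius r. Purely illustrative;
    # no 3D model or line-of-sight analysis is attempted.
    from math import hypot

    def greedy_placement(poles, demand, r=200.0, budget=5):
        uncovered, chosen = set(range(len(demand))), []
        for _ in range(budget):
            best, best_cov = None, set()
            for i, (px, py) in enumerate(poles):
                cov = {j for j in uncovered
                       if hypot(px - demand[j][0], py - demand[j][1]) <= r}
                if len(cov) > len(best_cov):
                    best, best_cov = i, cov
            if best is None or not best_cov:
                break
            chosen.append(best)
            uncovered -= best_cov
        return chosen, uncovered

    poles = [(0, 0), (250, 0), (500, 0)]           # candidate pole positions (m)
    demand = [(x, 50) for x in range(0, 600, 60)]  # points needing coverage
    sites, missed = greedy_placement(poles, demand)
    print("chosen poles:", sites, "| uncovered points:", len(missed))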

Generally speaking, 3G and now 4G services have arrived in most cities worldwide. The presumption is that Facebook is not seeking to become its own mobile carrier in developing countries but that it would partner with existing operators to augment their networks. Meanwhile, one suspects that the reason carriers have been slow to upgrade capacity in certain neighbourhoods or cities is more economic than technical. It is probably not a lack of spectrum that is holding them back, nor a lack of viable femtocell products or microwave backhaul, but simply a lack of financial capital, a weak return on investment, or red tape. One reason often cited for this is that over-the-top services, such as Facebook, suck all the value out of the network, leaving the mobile operator with very thin margins and little customer stickiness.


In Part 2 of this article we will look at Facebook's Aquila and Tether-tenna concepts.

Friday, May 5, 2017

Facebook's march to augmented reality

The big theme coming out of Facebook's recent F8 Developer Conference in San Jose, California was augmented reality (AR). Mark Zuckerberg told the audience that the human desire for community has been weakened over time, and said he believes that social media could play a role in strengthening these ties.

Augmented reality begins as an add-on to Facebook Stories, its answer to Snapchat. Users simply take a photo and then use the app to place an overlay on top of the image, such as a silly hat or a fake moustache, while funky filters keep users engaged and help them create a unique image. Over time, the filter suggestions become increasingly smart, adapting to the content in the photo - think of a perfect frame if the photo is of the Eiffel Tower. The idea is to make the messaging more fun. In addition, geo-location data might be carried to the FB data centre to enhance the intelligence of the application, but most of the processing can happen on the device.

Many observers saw Facebook's demos as simply a needed response to Snapchat. However, Facebook is serious about pushing this concept far beyond cute visual effects for photos and video. AR and VR are key principles for what Facebook believes is the future of communications and community building.

As a thought experiment, one can consider some of the networking implications of real-time AR. In the Facebook demonstration, a user turns on the video chat application on their smartphone. While the application parameters of this demonstration are not known, the latest smartphones can record in 4K at 30 frames per second, and will soon be even sharper and faster. Apple's FaceTime requires about 1 Mbit/s for HD resolution (720p video at 30 fps) and this has been common for several years. AR certainly will benefit from high resolution, so one can estimate that the video stream leaves the smartphone on a 4 Mbit/s link (this guesstimate is on the low end). The website www.livestream.com calculates a minimum of 5 Mbit/s of upstream bandwidth for launching a video stream at high to medium resolution. LTE-Advanced networks are capable of delivering 4 Mbit/s upstream with plenty of headroom, and WiFi networks are even better.
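
The per-user uplink estimate can be laid out explicitly. The figures below are simply the article's working assumptions restated in code, not measured values:

    # Per-user uplink figures used in the article (working assumptions,
    # not measured values).
    streams_mbps = {
        "720p/30fps video chat (FaceTime-class)": 1.0,
        "assumed AR uplink from the smartphone":  4.0,
        "livestream.com minimum for HD publish":  5.0,
    }
    lte_a_uplink_budget_mbps = 4.0  # conservative per-user figure from the text

    for name, rate in streams_mbps.items():
        verdict = "fits within" if rate <= lte_a_uplink_budget_mbps else "exceeds"
        print(f"{name}: {rate:.0f} Mbit/s ({verdict} the "
              f"{lte_a_uplink_budget_mbps:.0f} Mbit/s LTE-A uplink budget)")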

To identify people, places and things in the video, Facebook will have to perform sophisticated graphical processing with machine learning. Currently this cannot be done locally by the app on the smartphone and so will need to be done at a Facebook data centre. So the 4 Mbit/s stream will have to leave the carrier network and be routed to the nearest Facebook data centre.

It is known from previous Open Compute Project (OCP) announcements that Facebook is building its own AI-ready compute clusters. The first design, called Big Sur, is an Open Rack-compatible chassis that incorporates eight high-performance GPUs of up to 300 watts each, with the flexibility to configure between multiple PCIe topologies. It uses NVIDIA's Tesla accelerated computing platform. This design was announced in late 2015 and subsequently deployed in Facebook data centres to support its early work in AI. In March, Facebook unveiled Big Basin, its next-gen GPU server, capable of training machine learning models that are 30% bigger than those handled on Big Sur thanks to greater arithmetic throughput and a memory increase from 12 to 16 Gbytes. The new chassis also allows for disaggregation of CPU compute from the GPUs, something that Facebook calls JBOG (just a bunch of GPUs), which should bring the benefits of virtualisation when many streams need to be processed simultaneously. The engineers anticipated that increased PCIe bandwidth would be needed between the GPUs and the CPU head nodes, hence the new Tioga Pass server platform.

The Tioga Pass server features a dual-socket motherboard, with DIMMs on both sides of the PCB for maximum memory configuration. The PCIe slot budget has been upgraded from x24 to x32, which allows for two x16 slots, or one x16 slot and two x8 slots, making the server more flexible as the head node for the Big Basin JBOG. This new hardware will need to be deployed at scale in Facebook data centres. One can therefore envision the 4 Mbit/s video stream travelling from the user's smartphone and being routed via the mobile operator to the nearest Facebook data centre.
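
The move from x24 to x32 matters because of how the lanes can be carved up for the JBOG head-node role. A quick sketch of the aggregate PCIe 3.0 bandwidth under the slot splits mentioned above, assuming roughly 0.985 GB/s per Gen3 lane per direction (real throughput is lower after protocol overhead):

    # Rough aggregate PCIe 3.0 bandwidth for the Tioga Pass slot splits
    # mentioned above. Assumes ~0.985 GB/s per Gen3 lane per direction
    # (8 GT/s, 128b/130b encoding); real throughput is lower after
    # protocol overhead.
    GBPS_PER_GEN3_LANE = 0.985

    configs = {
        "previous x24 budget":       [24],
        "two x16 slots":             [16, 16],
        "one x16 plus two x8 slots": [16, 8, 8],
    }
    for name, lanes in configs.items():
        total = sum(lanes) * GBPS_PER_GEN3_LANE
        print(f"{name}: {sum(lanes)} lanes ~= {total:.1f} GB/s per direction")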

Machine learning processes running on the GPU servers perform what Facebook terms Simultaneous Localisation and Mapping (SLAM). The AI essentially identifies the three-dimensional space of the video and the objects or people within it. The demo showed a number of 3D effects being applied to the video stream, such as lighting and shading, or the placement of other objects or text. Once this processing has been completed, the output stream must continue to its destination, the other participants on the video call. Maybe further encoding has compressed the stream, but Facebook will still have to burn some amount of outbound bandwidth to hand the video stream over to another mobile operator for delivery via IP to the app on the recipient's smartphone. Most likely, the recipients of the call will have their video cameras turned on, and these streams will also need the same AR processing in the reverse direction. Therefore, we can foresee a two-way AR video call burning tens of megabits of WAN capacity to and from the Facebook data centre.

The question of scalability

Facebook does not charge users for accessing any of its services, which generally roll out across the entire platform in one go or in a rapid series of upgrade steps. Furthermore, Facebook often reminds us that it is now serving a billion users worldwide. So clearly, it must be thinking about AR on a massive scale. When Facebook first began serving videos from its own servers, the scalability question was also raised, but this test was passed successfully thanks to the power of caching and CDNs. When Facebook Live began rolling out, it also seemed like a stretch that it could work at global scale. Yet now there are very successful Facebook video services.

Mobile operators should be able to handle large numbers of Facebook users engaging in 4 Mbit/s upstream connections, but each of those 4 Mbit/s streams will have to make a visit to the FB data centre for processing. Fifty users will burn 200 Mbit/s of inbound capacity to the data centre, 500 users will eat up 2 Gbit/s, 5,000 users 20 Gbit/s, and 50,000 users 200 Gbit/s. For mobile operators, if AR chats prove to be popular, lots of traffic will be moving in and out of Facebook data centres, and one could easily envision a big carrier like Verizon or Sprint having more than 500,000 simultaneous users on Facebook AR. So this would present a challenge if 10 million users decide to try it out on a Sunday evening: that would demand a lot of bandwidth that network engineers would have to find a way to support. Another point is that, from experience with other chat applications, people are no longer accustomed to economising on the length of the call or the number of participants. One can expect many users to kick off a Facebook AR call with friends on another continent and keep the stream open for hours.
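
Those capacity figures follow directly from the 4 Mbit/s per-stream assumption, as the small sketch below shows; it is simple arithmetic, not a traffic model, and ignores any compression or regional caching:

    # Inbound capacity needed at a Facebook data centre if every active
    # AR caller uploads a 4 Mbit/s stream (the article's assumption).
    PER_STREAM_MBPS = 4

    for users in (50, 500, 5_000, 50_000, 500_000, 10_000_000):
        gbps = users * PER_STREAM_MBPS / 1_000
        print(f"{users:>10,} simultaneous users -> {gbps:,.1f} Gbit/s inbound")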

Of course, there could be clever compression algorithms in play so that the 4 Mbit/s at each end of the connection could be reduced, and if the participants do not move from where they are calling and nothing changes in the background, perhaps the AR can snooze, reducing the amount of processing needed and the bandwidth load. In addition, perhaps some of the AR processing can be done on next-gen smartphones. However, the opposite could also be true: AR performance could be enhanced by using 4K video, multiple cameras on the handset for better depth perception, and frame rates of 60 fps or higher.

Augmented reality is so new that it is not yet known whether it will take off quickly or be dismissed as a fad. Maybe it will only make sense in narrow applications. In addition, by the time AR calling is ready for mass deployment, Facebook will have more data centres in operation with a lot more DWDM to provide its massive optical transport – for example the MAREA submarine cable across the Atlantic Ocean between Virginia and Spain, which Facebook announced last year in partnership with Microsoft. The MAREA cable, which will be managed by Telxius, Telefónica’s new infrastructure company, will feature eight fibre pairs and an initial estimated design capacity of 160 Tbit/s. So what will fill all that bandwidth? Perhaps AR video calls, but the question then is, will metro and regional networks be ready?
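
As a rough answer to that last question, dividing MAREA's stated design capacity by the per-call uplink figure assumed earlier gives a sense of scale. This is purely illustrative: it ignores protocol overhead and assumes every stream crosses this single cable.

    # How many 4 Mbit/s AR streams would it take to fill MAREA's stated
    # 160 Tbit/s design capacity? Purely illustrative: ignores overhead
    # and assumes every stream crosses this one cable.
    MAREA_TBPS = 160
    PER_STREAM_MBPS = 4

    streams = MAREA_TBPS * 1_000_000 / PER_STREAM_MBPS
    print(f"~{streams / 1e6:.0f} million simultaneous 4 Mbit/s streams")  # ~40 million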

Wednesday, April 26, 2017

Orange Teams with Facebook on Start-up Accelerator

Global telco Orange announced that, as a member of the Telecom Infra Project (TIP) and together with Facebook, it is launching the Orange Fab France Telecom Track accelerator, designed to support start-ups focused on network infrastructure development.

Through the initiative, selected start-ups will be mentored by Orange and provided with access to its global resources, as well as support from TIP Ecosystem Accelerator Centres (TEAC) and Facebook.

As part of the initiative, Orange is working with TIP and Facebook to identify and support start-ups focused on network infrastructure technology with the launch of the new Telecom Track as part of its Orange Fab accelerator program in France. The partnership will aim to identify the best innovations and talent within the sector and provide start-ups with support and guidance from experts at Orange, TIP and Facebook, as well as facilitate collaboration and investment opportunities.

The project will be managed through Orange Fab France, Orange's established accelerator program for start-ups located at the Orange Gardens campus in Paris that is dedicated to R&D. The program also has the support of Orange Digital Ventures. By engaging with experts from Orange and its partners, start-ups will be provided with support in tackling network-related issues such as network management and access technologies.

Start-ups selected for the program will receive the benefits offered as part of the existing Orange Fab program, including the opportunity to participate in dedicated workshops, mentoring sessions with specialists and an optional Euro 15,000 in funding. They will also be provided with work space at the Orange Gardens, where the company's R&D teams are based. Start-ups will also have access to experts from the TIP community, TEAC and Facebook.

Orange has launched a call for projects to French start-ups that runs until May 14th; following evaluation of submissions, start-ups will be selected to join the acceleration program and can present at a launch event planned for June that will be attended by Orange, TIP and Facebook executives, as well as partners and venture capitalists.

Friday, March 24, 2017

Microsoft's Project Olympus provides an opening for ARM

A key observation from this year's Open Compute Summit is that the hyper-scale cloud vendors are indeed calling the shots in terms of hardware design for their data centres. This extends all the way from the chassis configurations to storage, networking, protocol stacks and now customised silicon.

To recap, Facebook's newly refreshed server line-up now has 7 models, each optimised for different workloads: Type 1 (Web); Type 2 - Flash (database); Type 3 – HDD (database); Type 4 (Hadoop); Type 5 (photos); Type 6 (multi-service); and Type 7 (cold storage). Racks of these servers are populated with a ToR switch followed by sleds containing either compute or storage resources.

In comparison, Microsoft, which was also a keynote presenter at this year's OCP Summit, is taking a slightly different approach with its Project Olympus universal server. Here the idea is also to reduce the cost and complexity of its Azure rollout in hyper-scale data centres around the world, but to do so using a universal server platform design. Project Olympus uses either a 1 RU or 2 RU chassis and a set of modules for adapting the server to different workloads or electrical inputs. Significantly, it is the first OCP server to support both Intel and ARM-based CPUs.

Not surprisingly, Intel is looking to continue its role as the mainstay CPU supplier for data centre servers. Project Olympus will use the next generation Intel Xeon processors, code-named Skylake, and with its new FPGA capability in-house, Intel is sure to supply more silicon accelerators for Azure data centres. Jason Waxman, GM of Intel's Data Center Group, showed off a prototype Project Olympus server integrating Arria 10 FPGAs. Meanwhile, in a keynote presentation, Microsoft Distinguished Engineer Leendert van Doorn confirmed that ARM processors are now part of Project Olympus.

Microsoft showed Olympus versions running Windows Server on Cavium's ThunderX2 and Qualcomm's 10 nm Centriq 2400, which offers 48 cores. AMD is another CPU partner for Olympus, although its server processor code-named Naples is an x86 design rather than ARM. In addition, there are other ARM licensees waiting in the wings with designs aimed at data centres, including MACOM (AppliedMicro's X-Gene 3 processor) and Nephos, a spin-out from MediaTek. For Cavium and Qualcomm, the case for ARM-powered servers comes down to optimised performance for certain workloads, and in OCP Summit presentations both companies cited web indexing and search as one of the first applications that Microsoft is using to test their processors.

Project Olympus is also putting forward an OCP design aimed at accelerating AI in its next-gen cloud infrastructure. Microsoft, together with NVIDIA and Ingrasys, is proposing a hyper-scale GPU accelerator chassis for AI. The design, code named HGX-1, will package eight of NVIDIA's latest Pascal GPUs connected via NVIDIA’s NVLink technology. The NVLink technology can scale to provide extremely high connectivity between as many as 32 GPUs - conceivably 4 HGX-1 boxes linked as one. A standardised AI chassis would enable Microsoft to rapidly rollout the same technology to all of its Azure data centres worldwide.

In tests published a few months ago, NVIDIA said its earlier DGX-1 server, which uses Pascal-powered Tesla P100 GPUs and an NVLink implementation, was delivering 170x the performance of standard Xeon E5 CPUs when running Microsoft's Cognitive Toolkit.

Meanwhile, Intel has introduced the second generation of its Rack Scale Design for OCP. This brings improvements in the management software for integrating OCP systems in a hyper-scale data centre and also adds open APIs to the Snap open source telemetry framework so that other partners can contribute to the management of each rack as an integrated system. This concept of easier data centre management was illustrated in an OCP keynote by Yahoo Japan, which amazingly delivers 62 billion page views per day to its users and remains the most popular website in that nation. The Yahoo Japan presentation focused on an OCP-compliant data centre it operates in the state of Washington, its only overseas data centre. The remote facility is staffed by only a skeleton crew which, thanks to streamlined OCP designs, is able to perform most hardware maintenance tasks, such as replacing a disk drive, memory module or CPU, in less than two minutes.

One further note on Intel’s OCP efforts relates to its 100 Gbit/s CWDM4 silicon photonics modules, which it states are ramping up in shipment volume. These are lower cost 100 Gbit/s optical interfaces that run over up to 2 km for cross data centre connectivity.

On the OCP-compliant storage front not everything is flash, with spinning HDDs still in play. Seagate has recently announced a 12 Tbyte 3.5-inch HDD engineered to accommodate workloads of 550 Tbytes annually. The company claims an MTBF (mean time between failures) of 2.5 million hours, and the drive is designed to operate 24/7 for five years. These 12 Tbyte drives enable a single 42U rack to hold over 10 Pbytes of storage, quite an amazing density considering how much bandwidth would be required to move this volume of data.
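
The rack-density claim is easy to sanity-check. The sketch below is only arithmetic on the quoted numbers; the actual chassis layout that reaches 10 Pbytes in 42U is not specified in the announcement.

    # Sanity check on the '10+ Pbytes per 42U rack' claim with 12 Tbyte
    # drives: how many drives does that imply? Arithmetic only; the
    # chassis layout is not specified in the announcement.
    DRIVE_TB = 12
    RACK_PB = 10
    RACK_UNITS = 42

    drives = RACK_PB * 1000 / DRIVE_TB
    print(f"~{drives:.0f} drives per rack, ~{drives / RACK_UNITS:.1f} drives per U")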


Google did not make a keynote appearance at this year’s OCP Summit, but had its own event underway in nearby San Francisco. The Google Cloud Next event gave the company an even bigger stage to present its vision for cloud services and the infrastructure needed to support it.

Thursday, March 23, 2017

Nokia and Facebook Test Trans-Atlantic Optimization

Nokia and Facebook announced they have collaborated on field trials of new optical digital signal processing technologies over a 5,500 km trans-Atlantic link between New York and Ireland.

To help address increasing demand for capacity on subsea fibre networks, Nokia and Facebook tested Nokia Bell Labs' new probabilistic constellation shaping (PCS) technology. The companies stated that the trial achieved an increase of almost 2.5x in capacity compared with the stated transmission capacity of the system, demonstrating the feasibility of using the technology across a real-world optical network.

PCS, a field of research at Nokia Bell Labs, is an advanced technique that utilises 'shaped' QAM formats to flexibly adjust transmission capacity to close to the physical limits, the Shannon limit, of a given fibre-optic link. Believed to be a first-of-its-kind experiment conducted on an installed submarine link, the test was conceived and planned by Facebook.

Nokia noted that PCS is based on 64QAM and, combined with digital nonlinearity compensation and low-linewidth lasers, enabled a claimed record spectral efficiency of 7.46 b/s/Hz, indicating the potential to upgrade this cable to 32 Tbit/s per fibre in the future. The test also included round-trip submarine transmission over 11,000 km using 'shaped' 64QAM with spectral efficiency of 5.68 b/s/Hz.
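
The relationship between the quoted spectral efficiency and the projected per-fibre capacity is simply capacity = spectral efficiency x usable optical bandwidth. The check below assumes roughly 4.3 THz of amplified bandwidth (about a C-band amplifier window); the cable's actual usable bandwidth was not stated by Nokia or Facebook.

    # Relating spectral efficiency to per-fibre capacity:
    #   capacity ~= spectral efficiency * usable optical bandwidth.
    # The ~4.3 THz figure below (roughly a C-band amplifier window) is
    # an assumption; the cable's actual usable bandwidth was not stated.
    spectral_efficiency_bps_per_hz = 7.46
    usable_bandwidth_thz = 4.3

    capacity_tbps = spectral_efficiency_bps_per_hz * usable_bandwidth_thz
    print(f"~{capacity_tbps:.0f} Tbit/s per fibre")  # ~32 Tbit/s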

During the trial, transmission tests based on the commercially available Nokia Photonic Service Engine 2 (PSE-2) validated the transmission of 8QAM wavelengths running at 200 Gbit/s and 16QAM wavelengths running at 250 Gbit/s, which is believed to be a first for trans-Atlantic transmission. The 200 Gbit/s 8QAM wavelengths supported a spectral efficiency of 4 b/s/Hz while also exhibiting sufficient performance margin for commercial operation.

Nokia and Facebook stated that the results of the trial will be presented in a post-deadline paper at OFC 2017.

http://www.nokia.com

Wednesday, March 22, 2017

Facebook shows its progress with Open Compute Project

The latest instalment of the annual Open Compute Project (OCP) Summit, which was held March 8-9 in Silicon Valley, brought new open source designs for next-generation data centres. It is six years since Facebook launched OCP and it has grown into quite an institution. Membership in the group has doubled over the past year to 195 companies and it is clear that OCP is having an impact in adjacent sectors such as enterprise storage and telecom infrastructure gear.

The OCP was never intended to be a traditional standards organisation, serving more as a public forum in which Facebook, Microsoft and potentially other big buyers of data centre equipment can share their engineering designs with the industry. The hyper-scale cloud market, which also includes Amazon Web Services, Google, Alibaba and potentially others such as IBM and Tencent, is where the growth is. IDC, in its Worldwide Quarterly Cloud IT Infrastructure Tracker, estimates total spending on IT infrastructure products (server, enterprise storage and Ethernet switches) for deployment in cloud environments will increase by 18% in 2017 to reach $44.2 billion. Of this, IDC estimates that 61% of spending will be by public cloud data centres, while off-premises private cloud environments constitute 15% of spending.

It is clear from previous disclosures that all Facebook data centres have adopted the OCP architecture, including its primary facilities in Prineville (Oregon), Forest City (North Carolina), Altoona (Iowa) and Luleå (Sweden). Meanwhile, the newest Facebook data centres, under construction in Fort Worth (Texas) and Clonee (Ireland), are pushing OCP boundaries even further in terms of energy efficiency.

Facebook's ambitions famously extend to connecting all people on the planet and it has already passed the billion monthly user milestone for both its mobile and web platforms. The latest metrics indicate that Facebook is delivering 100 million hours of video content every day to its users; 95+ million photos and videos are shared on Instagram on a daily basis; and 400 million people now use Messenger for voice and video chat on a routine basis.

At this year's OCP Summit, Facebook is rolling out refreshed designs for all of its 'vanity-free' servers, each optimised for a particular workload type, and Facebook engineers can choose to run their applications on any of the supported server types. Highlights of the new designs include:

  • Bryce Canyon, a very high-density storage server for photos and videos that features 20% higher hard disk drive density and a 4x increase in compute capability over its predecessor, Honey Badger.
  • Yosemite v2, a compute server that supports 'hot' service, meaning servers do not need to be powered down when the sled is pulled out of the chassis in order for components to be serviced.
  • Tioga Pass, a compute server with dual-socket motherboards and more IO bandwidth (i.e. more bandwidth to flash, network cards and GPUs) than its predecessor, Leopard, enabling larger memory configurations and faster compute times.
  • Big Basin, a server designed for artificial intelligence (AI) and machine learning, optimised for image processing and training neural networks. Compared with its predecessor, Big Basin can train machine learning models that are 30% larger, thanks to greater arithmetic throughput and an increase in memory from 12 to 16 Gbytes.

Facebook currently has web server capacity to deliver 7.5 quadrillion instructions per second. Its 10-year roadmap for data centre infrastructure, also highlighted at the OCP Summit, predicts that AI and machine learning will be applied to a wide range of applications hosted on the Facebook platform. Photos and videos uploaded to any of the Facebook services will routinely go through machine-based image recognition, and to handle this load Facebook is pursuing additional OCP designs that bring fast storage capabilities closer to its compute resources. It will leverage silicon photonics to provide fast connectivity between resources inside its hyper-scale data centres, and new open source models designed to speed innovation in both hardware and software.

Friday, March 17, 2017

Telia Carrier Tests TIP's Voyager Platform with Coriant

Telia Carrier, the wholesale carrier division of Sweden-based Telia, announced that the Telecom Infra Project (TIP) and Facebook have completed a trial of 100 and 200 Gbit/s transmission utilising Voyager equipment and technology developed by Coriant over its 1,089 km Stockholm to Hamburg route.

Telia Carrier stated that a key feature of the trial was the demonstration of the ability to effectively implement 16QAM signalling over long distances. Facebook has contributed Voyager, a whitebox transponder and routing solution that is being made available to the Telecom Infra Project (TIP) with the aim of enabling more open networks and a more connected world.

The company noted that the TIP aim of bringing a new approach to building and deploying telecom networks aligns with its own Carrier Declarations program, designed to help Telia Carrier expand and develop its network organically and be a leader in industry innovation.


For the trial, Telia Company, the parent company of Telia Carrier and a TIP member, supported Telia Carrier in conducting the test of the Voyager solution and has shared the results of the trial with other member companies. Carried out in early March, the trial demonstrated that decoupled DWDM transponder systems can offer a low cost, low power option, while also providing the necessary flexibility and accessibility to allow service providers to implement them within existing networks.

Following the trial, Telia Carrier is working with TIP, Facebook and Coriant towards the objective of disaggregating the hardware and software components of the network stack as part of the wider effort to make connectivity available to all.

Facebook unveiled Voyager at the TIP Summit last November and contributed the design to the TIP community. Recently, TIP cited forthcoming trial deployments of the solution by Telia, and noted that Orange was working with Facebook and the TIP Open Optical Packet Transport project group to evaluate the solution.


When it introduced the platform in November, TIP stated that Equinix had tested Voyager with Lumentum's open line system over 140 km of production fibre, and that MTN had shared the results of a test of Voyager over its production network in South Africa. In addition, it was noted that Facebook, Acacia Communications, Broadcom, Celestica, Lumentum and Snaproute would deliver a disaggregated hardware and software optical networking platform, while ADVA was providing support for the solution.

Wednesday, March 8, 2017

Facebook Refreshes its OCP Server Designs

At this year's Open Compute Summit in Santa Clara, California, Facebook unveiled a number of new server designs to power the wide variety of workloads it now handles.

Some updated Facebook metrics:

  • People watch 100 million hours of video every day on Facebook; 
  • 95M+ photos and videos are posted to Instagram every day; 
  • 400M people now use voice and video chat every month on Messenger. 

Highlights of the new servers:

  • Bryce Canyon is a storage server primarily used for high-density storage, including photos and videos. The server is designed with more powerful processors and increased memory, and provides increased efficiency and performance. Bryce Canyon has 20% higher hard disk drive density and a 4x increase in compute capability over its predecessor, Honey Badger.
  • Yosemite v2 is a compute server that provides the flexibility and power efficiency needed for scale-out data centers. The power design supports hot service, meaning servers don't need to be powered down when the sled is pulled out of the chassis in order for components to be serviced; these servers can continue to operate.
  • Tioga Pass is a compute server with dual-socket motherboards and more IO bandwidth (i.e. more bandwidth to flash, network cards, and GPUs) than its predecessor Leopard. This design enables larger memory configurations and speeds up compute time.
  • Big Basin is a server used to train neural networks, a technology that can do a number of research tasks including learning to identify images by examining enormous numbers of them. With Big Basin, Facebook can train machine learning models that are 30% larger (compared with its predecessor, Big Sur). They can do so thanks to the greater arithmetic throughput now available and an increase in memory from 12 GB to 16 GB. In tests with the image classification model Resnet-50, they reached almost 100% improvement in throughput compared with Big Sur.

http://www.opencompute.org/wiki/Files_and_Specs
https://www.facebook.com/Engineering/

Wednesday, March 1, 2017

BT and Cavium join Telecom Infra Project

BT and Cavium announced they have joined the Telecom Infra Project (TIP) founded by companies including Facebook, Deutsche Telekom and SK Telecom in 2016, and Keysight Technologies announced that it is expanding its commitment to the project.

BT

BT announced it is partnering with the TIP, which is seeking to transform how telecom networks are built, and Facebook to accelerate research into advanced telecoms technologies. As part of the initiative, BT, Facebook and TIP will work together in locations including BT Labs at Adastral Park in the UK and at London's Tech City. The relationship is intended to enable telecoms infrastructure start-ups to engage directly with experts from BT, the TIP community and Facebook.

TIP plans to establish a network of start-up acceleration centres in collaboration with major operators worldwide, and BT will host the first such centre in Europe. The UK acceleration centre joins a similar facility in South Korea, sponsored by SK Telecom. The initial focus of the UK acceleration centre will be in the areas of quantum computing, as applied to networks, and mission-critical communications.

Cavium

Cavium, a provider of semiconductor products for enterprise, data centre and wired and wireless networking, has joined TIP as part of its ongoing effort to support the development of telecom networks. Cavium will contribute wireless access technologies to TIP's OpenCellular project group, which is aiming to address the demand for faster, more agile networks.

Through the agreement, Cavium will specifically contribute and support the development of a production-ready hardware design for a single sector LTE 2T2R MIMO (64/128 user) for OpenCellular in the second half of 2017. The design will be based on its OCTEON Fusion CNF7130 baseband processor and will feature the associated end-to-end L1 to L3 software stack. Cavium will also offer open source access to certain APIs to enable enhanced capabilities for the LTE cell.

Keysight Technologies

Keysight, a supplier of electronic measurement solutions, is expanding its commitment to TIP, and as part of this partnership will co-chair a newly-formed sub-group focused on test automation within the OpenCellular project group. Keysight will contribute the open source code that supports automated testing for design verification and a low-cost manufacturing solution for OpenCellular base stations.

TIP was co-founded in February 2016 by Facebook, Deutsche Telekom, SK Telecom, EE, Nokia and Intel and now claims over 450 member organisations. The project is an engineering-focused initiative supported by telecom operators, infrastructure providers, system integrators and other technology companies that is aiming to transform how telecoms network infrastructure is built and deployed.

Monday, February 27, 2017

ADVA Advances Facebook-Designed Open Optical Packet Transport

ADVA Optical Networking reports that it is now working with nine customers on trials of the Facebook-designed Voyager solution, including Tier 1 service providers and large enterprises.  The open optical packet transport system is being tested in a range of proof of concept (POC) installations.

ADVA said that over the last nine months, Voyager has matured rapidly from blueprint to physical product. The 1RU DWDM unit features 12 x 100 Gbit/s QSFP28 client ports and 4 x 200 Gbit/s 16QAM interfaces on the line side.

ADVA also announced integration of Voyager into its FSP Network Manager.

“The two founding principles of the Voyager and Open Optical Packet Transport projects are openness and innovation and it’s these principles that have guided everything that our team has developed here,” said Niall Robinson, VP, global business development, ADVA Optical Networking. “Now that the Voyager system is complete, we’re focusing much of our time on further developing our services to ensure that customers have a turn-key solution. That’s why these POCs are so important. We’re able to see firsthand what resonates. Which services do the customers like? Which services do they need to see more of? Which services are truly critical? And what’s fascinating here is the breadth of applications these POCs cover. The lessons learnt from the next few months will be key as we move from trials to commercialization.”

“We’re excited by the progress we’ve made with Voyager by working with partners like ADVA Optical Networking. What was only an idea less than a year ago is now almost commercially ready for deployment,” said Hans-Juergen Schmidtke, co-chair, Open Optical Packet Transport project group, TIP, and director, engineering, Facebook. “We’re looking forward to collaborating with our partners to build the open optical packet transport solutions with Voyager into a complete package that will enable service providers and enterprises to deploy an open networking solution that delivers rapid results and enables continuous innovation.”



Thursday, February 2, 2017

Facebook Passes 1.2 Billion Daily Active Users, up 18% YoY

Facebook is now serving over 1.227 billion daily active users, up from 1.038 billion DAUs a year earlier.

Some other metrics:

  • Mobile DAUs – Mobile DAUs were 1.15 billion on average for December 2016, an increase of 23% year-over-year.
  • Monthly active users (MAUs) – MAUs were 1.86 billion as of December 31, 2016, an increase of 17% year-over-year.
  • Mobile MAUs – Mobile MAUs were 1.74 billion as of December 31, 2016, an increase of 21% year-over-year.
  • Capital expenditures for the full year 2016 were $4.49 billion, up from $2.25 billion for 2015.


http://www.facebook.com

Thursday, January 19, 2017

Facebook to build Next Data Center in Odense, Denmark

Facebook has selected Odense, Denmark as the location for its next data center, joining its Prineville (Oregon), Forest City (North Carolina), Luleå (Sweden), Altoona (Iowa), Fort Worth (Texas), Clonee (Ireland) and Los Lunas (New Mexico) facilities as one of the cornerstones of its global infrastructure.

The new data center will be built with Open Compute Project hardware designs and will be one of the most energy efficient to date.
The company said Denmark was chosen for its robust Nordic electric grid, access to fiber, access to renewable power, and a great set of collaborative community partners. Renewable energy is expected to account for 100% of electricity needs.

https://www.facebook.com/notes/odense-data-center/hello-odense/1104572789664613


Friday, December 23, 2016

Video - The Story of Facebook's Infrastructure



The Facebook Engineering team talks about building the company's leading-edge data center infrastructure. In just a few years, Facebook has gone from renting space in colocation facilities to operating a network of hyperscale data centers with global connections.



Monday, November 14, 2016

Facebook Expands Fort Worth Data Center to Five Buildings

Facebook revealed plans to expand its data center in Fort Worth, Texas from one building to five buildings on the same campus.

Construction on the first building is nearing completion. Facebook is now rolling racks of servers into the new building.

The facility will be powered by 100% renewable energy, thanks to the more than 200 MW of new wind power that Facebook worked with Citigroup Energy, Alterra Power Corporation and Starwood Energy Group to bring to the Texas grid.

https://www.facebook.com/notes/fort-worth-data-center/expanding-our-fort-worth-data-center-to-five-buildings/1187767507983650

Tuesday, November 8, 2016

Facebook Deploys Backpack -- its 2nd Gen Data Center Switch

Facebook unveiled Backpack, its second-generation modular switch platform developed in-house at Facebook for 100G data center infrastructure. It leverages Facebook's recently announced Wedge 100 switch.

Backpack is designed with a clear separation of the data, control, and management planes. It uses simple building blocks called switch elements. The Backpack chassis is equivalent to a set of 12 Wedge 100 switches connected together. The orthogonal direct chassis architecture opens up more air channel space for better thermal performance in managing the heat from 100G ASICs and optics. Facebook will use the BGP routing protocol for the distribution of routes between the different line cards in the chassis.
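
The 'set of 12 Wedge 100 switches' description implies a folded-Clos arithmetic along the following lines. The 8 line-card / 4 fabric-card split in this sketch is an illustrative assumption, not a published Backpack specification:

    # Folded-Clos arithmetic for a Backpack-style chassis built from
    # twelve 32x100G switch elements. The 8 line-card / 4 fabric-card
    # split below is an illustrative assumption, not a published spec.
    ASIC_PORTS = 32          # 100G ports per Wedge 100-class ASIC
    LINE_CARDS, FABRIC_CARDS = 8, 4

    external = LINE_CARDS * (ASIC_PORTS // 2)        # half of each line ASIC faces out
    fabric_needed = LINE_CARDS * (ASIC_PORTS // 2)   # other half faces the fabric
    fabric_avail = FABRIC_CARDS * ASIC_PORTS

    print(f"external 100G ports: {external}")
    print(f"fabric links needed {fabric_needed} vs available {fabric_avail}"
          f" -> {'nonblocking' if fabric_avail >= fabric_needed else 'oversubscribed'}")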

The design has already entered production and deployment in Facebook data centers.  The company plans to submit the design to the Open Compute Project.

https://code.facebook.com/posts/864213503715814/introducing-backpack-our-second-generation-modular-open-switch/

Facebook Targets 32x100G Wedge Data Center Switch

Facebook confirmed that work is already underway on Wedge 100, a 32x100G switch for its hyperscale data centers.

Facebook is also adapting Wedge into a much bigger aggregation switch called 6-pack, which uses Wedge as its foundation and stacks 12 of these Wedges in a modular, nonblocking design. FBOSS will be used as the software stack across the growing platform of network switches: Wedge, 6-pack, and now Wedge 100.

In an engineering blog post, Facebook said thousands of its initial Wedge top-of-rack network switches have already been deployed. The design has been contributed to the Open Compute Project and is now commercially available from various vendors.

Facebook also revealed that its FBOSS software undergoes a weekly cadence of new features and bug fixes. Facebook is able to update thousands of switches seamlessly, without any traffic loss, to keep up with this pace.  Some new capabilities include detailed monitoring, non-stop forwarding, warm boot, etc.

https://code.facebook.com/posts/145488969140934/open-networking-advances-with-wedge-and-fboss/

Wednesday, November 2, 2016

Facebook Hits 1.09 Billion Mobile Daily Users

Facebook reported Q3 2016 revenue of $7.011 billion, up 56% from the same period last year.

Highlights:

  • Daily active users (DAUs) – DAUs were 1.18 billion on average for September 2016, an increase of 17% year-over-year.
  • Mobile DAUs – Mobile DAUs were 1.09 billion on average for September 2016, an increase of 22% year-over-year.
  • Monthly active users (MAUs) – MAUs were 1.79 billion as of September 30, 2016, an increase of 16% year-over-year.
  • Mobile MAUs – Mobile MAUs were 1.66 billion as of September 30, 2016, an increase of 20% year-over-year.
  • Mobile advertising revenue represented approximately 84% of advertising revenue for the third quarter of 2016, up from approximately 78% of advertising revenue in the third quarter of 2015.
  • Capital expenditures for the third quarter of 2016 were $1.10 billion.
  • Cash and cash equivalents and marketable securities were $26.14 billion at the end of the third quarter of 2016.

http://www.facebook.com

Tuesday, November 1, 2016

Facebook to Contribute Open Packet DWDM to Telecom Infra Project

Facebook outlined its plan for Open Packet DWDM for metro and long-haul fiber optic transport networks.  The idea is to "enable a clean separation of software and hardware" based on open specifications.

Facebook had already developed a new "white box" transponder and routing platform called Voyager based on Open Packet DWDM, which it will contribute to the Telecom Infra Project.

Facebook said the Voyager open line system will include Yang software data models of each component in the system, and an open northbound software interface (NETCONF, Thrift, etc.) to the control plane software. This allows multiple applications to run on top of the open software layer, enabling software innovations in DWDM system control algorithms and network management systems.

The DWDM transponder hardware includes DSP ASICs and complex optoelectronic components, and thus accounts for much of the cost of the system. The hardware design leverages technologies implemented in Wedge 100, Facebook's top-of-rack switch, including the same Broadcom Tomahawk switching ASIC.  It also uses the DSP ASIC and optics module (AC400) from Acacia Communications for the DWDM line side with their open development environment.

Facebook worked with Lumentum to develop a terminal amplifier specification so that multiple applications can run on top of the open software layer to enable software innovations in DWDM system control algorithms and network management systems.

Some additional notes:

  • Equinix has successfully tested the Voyager solution and Lumentum’s open line system over 140km of production fiber. MTN also shared the results of their successful test of Voyager over their production fiber network in South Africa. Facebook, Acacia Communications, Broadcom, Celestica, Lumentum, and Snaproute are delivering a complete disaggregated hardware and software optical networking platform that is expected to significantly advance the industry.
  • ADVA Optical Networking is providing commercial support for Voyager, including all of the essential services and software support needed to make it a complete network solution that is ready for deployment.
  • Coriant is extending its networking software to enable engineering support for Voyager, providing routing and switching as well as DWDM transmission capabilities. The combination of DWDM and packet switching/routing opens up the potential for more open and more programmable network architectures.
  • The first TIP Ecosystem Acceleration Centers, sponsored by SK Telecom and Facebook, will open in Seoul in early 2017. Other TIP Ecosystem Acceleration Centers are being planned to encourage community participation. The latest companies to join TIP include Bell Canada, du (EITC), NBN, Orange, Telia, Telstra, Accenture, Amdocs, Canonical, Hewlett Packard Enterprise, and Toyota InfoTechnology Center.

https://code.facebook.com/posts/1977308282496021/an-open-approach-for-switching-routing-and-transport/


Infinera Validates Optical Line Systems with Lumentum's White Box

Infinera and Lumentum have validated Infinera’s portfolio of DWDM platforms over Lumentum’s white box optical line system.

The interoperability testing included Infinera’s XTM Series, Cloud Xpress Family and DTN-X Family (including XTC and XT Series) platforms. In addition, the companies successfully conducted interoperability testing of Infinera’s next generation Infinite Capacity Engine pilot hardware.

Specifically, the Infinera platforms interoperated with the Lumentum white box open line system, including the 20-port Transport ROADM. The test cases covered point-to-point metro fiber links carrying multiple modulation formats, including QPSK (quadrature phase shift keying), 8QAM (quadrature amplitude modulation) and 16QAM, with PIC-based super-channels over the Lumentum open line system. The test was able to fill the fiber to full capacity at QPSK, 8QAM and 16QAM data rates via 19 super-channels injected into a single-rack-unit 20-port ROADM. A fully loaded solution achieves up to 24 terabits of fiber capacity using the Infinite Capacity Engine at 16QAM. The testing successfully validated standard optical parameters including optical signal to noise ratio (OSNR) for seamless performance over metro distances.
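
The quoted figures also imply a per-super-channel rate, as the trivial calculation below shows; this is arithmetic on the numbers in the announcement, not a vendor specification:

    # Implied per-super-channel capacity from the figures quoted above
    # (24 Tbit/s total across 19 super-channels at 16QAM).
    total_tbps = 24
    super_channels = 19

    print(f"~{total_tbps / super_channels * 1000:.0f} Gbit/s per super-channel")  # ~1,263 Gbit/s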

The companies described the interoperability testing as the first multi-vendor driven demonstration of, and commitment to supporting, an open, interoperable and agile approach to the construction of transport networks, including for data center interconnect (DCI) and metro/edge WDM transport.

Infinera and Lumentum also noted that they are collaborating on open packet optical transport in the Telecom Infra Project (TIP), an industry initiative co-founded by Facebook.

“By validating the industry’s first white box open line system interoperability with Lumentum, Infinera has demonstrated our commitment to delivering the innovative and open optical solutions that our customers need,” said Tom Fallon, Infinera CEO. “Infinera’s leadership in technology innovation with large-scale photonic integration is changing telecom networks. This announcement is yet another step in the path to delivering on our vision of enabling an infinite pool of intelligent bandwidth that the next communications infrastructure is built upon.”

“Lumentum’s award-winning optical white boxes are designed for simplicity and scalability, with open interfaces to enable Software Defined Networking (SDN),” said Alan Lowe, Lumentum CEO. “The demonstration of successful interoperability with Infinera validates the open system approach that should deliver the simplicity and service innovation many network operators desire.”

https://www.infinera.com/infinera-validates-intelligent-transport-network-portfolio-over-lumentum-white-box-optical-line-system/
https://www.lumentum.com


