Monday, January 4, 2016

Blueprint: Four SDN Predictions for 2016

by Carolyn Raab, VP of Product Management at Corsa

In 2015, service providers, telcos and national research and engineering consortia went through a major transition as they began implementing software-defined networks (SDN) to deliver programmable high performance and massive scale in the WAN and at the data center edge. For network architects, operators and others involved in these next-generation networks, the hard work is just beginning: the pressure will be on in 2016 to ensure that these SDN deployments live up to, and exceed, the hype. As these deployments move forward, many architects will find themselves staring at a network completely different in size and shape from what they're accustomed to. Fortunately, several new trends will help ensure greater control and scale across these networks, and they compel us to make the following four predictions about the key developments that will benefit internet-scale programmable networks in 2016.

1) FPGAs grow up and play a much larger role 

Network engineers need flexible, open hardware to create policy-driven, self-tuning networks. Hardware vendors need design cycles that can keep pace with the network innovations and changes those engineers demand. FPGAs have advanced to the point where their underlying silicon process technology is in lock-step with ASICs, and users also benefit from the combined volume of everyone else building on the same platform. FPGAs now match the performance and affordability of ASICs while offering full flexibility and rapid design cycles. This shift to FPGAs will enable network architectures to evolve and scale more rapidly.

2) SDN will emerge from the hype cycle, based on real deployments

There are now confirmed, real deployments of SDN at service providers, Internet exchanges, ISPs, and data centers. One challenge they all share is that a top-to-bottom solution requires an involved integration of SDN orchestration, control and data plane elements. This clumsy stitching together of the various parts has delayed real deployments as much as the lack of performant, open controllers and real SDN hardware. However, with internet-scale programmable, open hardware now available and open source controllers gaining broad support, the missing pieces are in place. A top-to-bottom offering of interworking parts means real deployments will expand beyond the early, most sophisticated users to a broader base of networks of different shapes and sizes.

3) Re-programmable networks and real-time analytics will be hot topics for 2016

Because you program the network, you can make it better, creating an agile, self-tuning, automated network that creates value for providers and users alike. This requires a virtuous circle of real-time statistics feeding into real-time analytics tools that trigger changes, which are immediately programmed back into the network.

To date, these tools have existed, but in isolation from one another. Now we see the beginnings of offerings that create the linkages needed to close the circle. Whether through industry partnerships or as vertically integrated solutions from a single vendor, the ability to re-program the network on the fly is generating significant interest among numerous stakeholders, including service providers, broadcasters, municipalities, and enterprises. All of them share a common requirement: knowing what is going on in their networks so they can take the next appropriate action. Isolate? Allocate bandwidth? Add a new service? Look for much discussion and some innovative deployments of re-programmable networks.
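The virtuous circle of statistics feeding analytics feeding reprogramming can be sketched as a simple control loop. This is an illustrative sketch, not any vendor's implementation; the threshold, link names and function names are all hypothetical.

```python
# Minimal sketch of a closed-loop, self-tuning network: real-time
# statistics feed an analytics step, which triggers a reprogramming
# action. All names and thresholds here are hypothetical.

LINK_CAPACITY_MBPS = 10_000
CONGESTION_THRESHOLD = 0.8   # act when a link exceeds 80% utilization

def analyze(stats):
    """Return the congested links from a {link: mbps} utilization sample."""
    return [link for link, mbps in stats.items()
            if mbps / LINK_CAPACITY_MBPS > CONGESTION_THRESHOLD]

def reprogram(link):
    """Placeholder for pushing a new flow rule via an SDN controller."""
    return f"rerouted traffic away from {link}"

def control_loop(stats):
    # One turn of the virtuous circle: stats -> analytics -> reprogramming.
    return [reprogram(link) for link in analyze(stats)]

# Only link-a exceeds the 80% threshold in this sample.
print(control_loop({"link-a": 9_100, "link-b": 4_200}))
```

In a real deployment the `analyze` step would be an analytics platform and `reprogram` would issue controller API calls; the point is only that the loop runs continuously with no human in it.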

4) 2016: “The year of 100G SDN”

100G will begin to ramp up aggressively because both the data drivers and the underlying network have reached a critical juncture. Traffic growth continues to put pressure on network infrastructure, and that pressure will grow as 100G storage deployments add to the massive growth in video- and IoT-generated traffic. Operators will be able to answer with 100G SDN because of two key enablers:
  • Affordability – 100G SDN deployments are approaching a price point that is barely 3x what a 10G link would cost.
  • Flexible feeds and speeds – QSFP28 for 100G, SFP+ for 10G, and anything in between is possible with the same optics cage.
Programmable SDN hardware designed with these cages can deploy as 10G initially and then rapidly move from 10G to 100G with a soft upgrade, not a new hardware purchase, to immediately address the data demands.
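The affordability point above reduces to a cost-per-gigabit calculation: if a 100G link costs roughly 3x a 10G link, each gigabit of capacity costs roughly 70% less. A quick check (the 10G price is an arbitrary placeholder; only the ratio matters):

```python
# Cost-per-gigabit comparison for the "barely 3x" claim above.
price_10g = 1_000.0            # hypothetical price of a 10G link
price_100g = 3 * price_10g     # "barely 3x what a 10G link would cost"

cost_per_gbit_10g = price_10g / 10
cost_per_gbit_100g = price_100g / 100

# 3x the price for 10x the capacity -> 0.3x the cost per gigabit.
print(cost_per_gbit_100g / cost_per_gbit_10g)
```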

These and other trends highlight how large SDN deployments will require a more open and flexible approach at the software, firmware and hardware levels. It will be critical to ensure that networks can adapt and evolve as needed. We will be watching as networks take innovative new approaches to managing and orchestrating data in 2016.

About the Author

Carolyn Raab is VP of Product Management at Corsa.

About Corsa Technology 

Corsa Technology is a networking hardware company focused on performance Software Defined Networking (SDN). Corsa develops programmable, flexible, internet-scale switches that respond in real-time to network orchestration, directing and managing traffic for SDN and NFV deployments from the 100G SDN WAN edge to networks needing full subscriber awareness. For more information, please visit

Blueprint: 2016 and the Rise of NFV – Practicality Rules

by Martin Taylor, CTO, Metaswitch

With network function virtualization quickly moving into the mainstream and the proliferation of related technology offerings on the rise, clarity of purpose and ease of use are more critical than ever. Winning solutions in 2016 will combine purpose-built technology with turn-key simplicity, making it easy for network operators to understand, adopt and scale NFV deployments system-wide.

Here are some of my predictions for 2016:

1. Pragmatic network operators in 2016 will progress the fastest; those who deploy proven VNFs that are not too demanding on cloud, SDN, orchestration or OSS/BSS integration will usefully move the virtualization needle in 2016. Leading solutions will:

  • Deliver high availability on vanilla cloud infrastructure, rather than relying on a specially engineered cloud infrastructure to do so.
  • Require only basic IP connectivity from the NFV network fabric, vs. requiring a high degree of programmability to create service function chains. 
  • Have simple life-cycles and be able to deliver most of their value with little or no orchestration beyond initial deployment, vs. requiring sophisticated orchestration.
  • Have few and simple OSS / BSS touchpoints, rather than having complex configuration and management requirements and involving a lot of custom work to interface them to OSS and BSS. 

2. VoLTE and CPE will be the two most active areas of the network for NFV-based buildouts in the coming year.

  • VoLTE is a service that requires a number of network functions to be deployed including IMS, SBC, TAS and SCC-AS, all of which are available in virtualized form.
  • Many services offered by network operators require the deployment of multiple items of CPE, e.g. Metro Ethernet access device, firewall, WAN accelerator, intrusion detection system, enterprise SBC – each of which is currently deployed today as a separate physical appliance. NFV offers the opportunity to virtualize all these functions and deploy them as software in a generic CPE device based on a server, or in a service provider’s cloud in a metro data center, thus removing the need to ship and install a multiplicity of physical appliances on the customer premises.

3. While 2016 will see NFV cloud and orchestration solutions mature, OSS/BSS will emerge as the biggest brake on NFV progress.

  • There are two issues here. First, integration with OSS / BSS is usually the long pole in the tent when it comes to deploying any new network function. There are numerous backend systems that a network function needs to talk to for provisioning, configuration, alarms, performance reporting, etc., and integrating with a network function at each of these touchpoints often requires custom software work. This issue does not go away just because a network function is virtualized.
  • Secondly, traditional OSS / BSS is not well suited to managing virtualized network functions because its view of the world is appliance-centric and it doesn’t know how to handle shifting populations of different kinds of virtual machines that together do the work of a physical appliance. OSS / BSS needs to evolve very substantially to cope with the realities of NFV, and this will take time.

About the Author

Martin Taylor is chief technical officer of Metaswitch Networks. He joined the company in 2004, and headed up product management prior to becoming CTO. Previous roles have included founding CTO at CopperCom, a pioneer in Voice over DSL, where he led the ATM Forum standards initiative in Loop Emulation; VP of Network Architecture at Madge Networks, where he led the company’s successful strategy in Token Ring switching; and business general manager at GEC-Marconi, where he introduced key innovations in Passive Optical Networking. Martin has a degree in Engineering from the University of Cambridge. In January 2014, Martin was recognized by Light Reading as one of the top five industry “movers and shakers” in Network Functions Virtualization.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Blueprint: 2016 is the Year SDN Finds its Home, and its Name is NFV

by Peter Margaris, Head of Service Provider Product Marketing at F5 Networks

For the past few years, the testing and adoption of Software Defined Networking (SDN) has progressed incrementally, while at the same time Service Providers (SPs) have made measurable progress towards the commercialization of network functions virtualization (NFV). SDN and NFV have been viewed as separate but complementary initiatives, yet SPs are now coupling them with the goal of transforming their entire networks. They are accelerating the adoption of NFV and SDN because of the speed with which they must adapt to the next generations of advanced devices, and due to the pressure to offer new and differentiated consumer and enterprise services. While there was significant progress in 2015, the continued evolution of industry standards and APIs, as well as the successful commercialization of multiple NFV use cases, will lead Service Providers to expand their SDN and NFV initiatives significantly in 2016.

Continued Evolution of Industry Standards and APIs 

The evolution of standards and Application Programming Interfaces (APIs) among vendors in 2016 will be critical for SPs to drive forward their network transformations. Previously, the lack of standardization and integration among architecture components slowed the adoption of both SDN and NFV. There is no doubt that SPs are committed to SDN and NFV. This is evidenced by the trials, PoCs, and standards coalescing around the commercialization of L4-L7 service offerings. In the past 12 months, the community of vendors and operators have made great strides on this front. In particular, the collaborative efforts governed by OPNFV (the Open Platform for NFV) along with ETSI NFV have accelerated the evolution of the NFV reference platform.

Also in 2015, we saw greater collaboration between the Open Networking Foundation (ONF) on SDN standards and the European Telecommunications Standards Institute (ETSI) on NFV standards. The result is that SPs are incorporating SDN architectures alongside specific NFV use cases in both trials and commercial deployments. ETSI PoC #38 is an example in which multiple vendors collaborated with the Australian service provider Telstra to produce an ETSI-certified proof-of-concept around delivering customer premises equipment (CPE) to enterprise customers from the cloud, also referred to as virtual CPE (vCPE) services.1 Service providers are now in a better position to take advantage of the real gains that have been made, and the continued network transformation in 2016 will certainly drive a continued business transformation as well.

Entrance into New Markets

Opportunities for SPs to commercialize SDN/NFV architectures will expand in 2016 as more L4-L7 services are deployed with high-level NFV orchestration systems and SDN infrastructures. Because NFV enables them to deliver L4-L7 services on-demand through an automated and policy-driven process, markets that otherwise were not accessible will open to SPs. NFV Networks can flex on-demand to incorporate a wider range of virtual network functions (VNFs) into their architectures. SPs will look to expand their use of VNFs with rich sets of APIs that are more easily deployable to support different use case scenarios, customizable service chains for customers, and efficient delivery of network services.

This is still only the early stages of a long migration that ultimately will enable service providers to transform their networks and their businesses with the flexibility and agility that only these new network architectures can deliver.

About the Author

As Head of Service Provider Product Marketing at F5 Networks, Peter Margaris is responsible for the company’s overall solution messaging, positioning and market strategy directed at F5’s service provider business segment. With a diverse background and over 25 years of experience in telecommunications and mobile technologies, he has held business leadership roles at Motorola, Nokia, and Alcatel-Lucent, as well as wireless start-up companies in Silicon Valley.
1 HP press release regarding ETSI PoC #38.

ETSI NFV ISG Proof of Concept #38 (activation and orchestration of VNFs in carrier networks).


Blueprint: The (Near) Future of Enterprise Apps, Analytics, Big Data and The Cloud

by Derek Collison, Founder and CEO of Apcera

In 2016, technical innovation, combined with evolutionary trends, will bring rapid organizational changes and new competitive advantages to enterprises capable of adopting new technologies. Not surprisingly, however, the same dynamics will mean competitive risk for organizations that have not positioned themselves to easily absorb (and profit from) new technological changes. The following predictions touch on some of the areas in IT that I think will see the biggest evolutions in 2016 and beyond.
  1. Hadoop: old news in 24 months. Within the next two years, no one will be talking about big data and Apache Hadoop—at least, not as we think of the technology today. Machine Learning and AI will become so good and so fast that it will be possible to extract patterns, perform real-time predictions, and gain insight around causation and correlation without human intervention to model or prepare raw data. In order to function effectively, automated analytics typically need to be embedded in other systems that bring forth data. Next-generation AI-enabled machine learning systems (aka “big data,” even though this term will soon fade away), will be able to automatically assemble and deliver financial, marketing, scientific and other insights to managers, researchers, executive decision makers and consumers—giving them new levels of competitive advantage.
  2. Microservices will change how applications are developed. Containers will disrupt the industry by giving organizations the ability to build less and assemble more since the cost of the isolation context is so small, fast and cheap. While microservices are inherently complex, new platforms are emerging that will make it possible for IT organizations to innovate at speed without compromising security, or performing the undifferentiated heavy lifting to construct these micro-service systems in production. With robust auditing and logging tools, these platforms will be able to reason and decide how to effectively manage all IT resources, including containers, VMs and hybrids.
  3. The container ecosystem will continue to diversify and evolve. The coming year will see significant evolution in the container management space. Some container products will simply vanish from the market, while certain companies, not wanting to miss out on the hype, will simply acquire existing technology to claim a spot in the new ecosystem. This consolidation will shrink the size of the playing field, making viable container management choices easier for IT decision makers to identify. Over time, as container vendors seek to differentiate themselves, those that survive will be the ones that demonstrate the ability to orchestrate complex and blended workloads, in a manner that enterprises can manage with trust. The container will slowly become the most unimportant piece of the equation.
  4. True isolation and security will continue to push technology forward. Next year, look for creative advances in enabling technology, such as hybrid solutions, consisting of fast and lightweight virtual machines (VMs) that wrap containers, micro-task virtualization and unikernels. This is already beginning to happen. For example, Intel's Clear Containers (which are actually stripped-down VMs) use no more than 20 MB of memory each, making them look more like containers in terms of server overhead, and spin up in just 100-200 milliseconds. The goal here is to provide the isolation and security required by the enterprise, combined with the speed of the minimalist “Clear Linux OS.” Unikernels, another emerging technology, possess meaningful security benefits for organizations because they have an extremely small code footprint, which, by definition, reduces the size of the “attack surface.” In addition, unikernels feature low boot times, a performance characteristic always in favor with online customers who have dollars to spend and the burgeoning micro-services crowd.
This coming year is set to be a busy one. Technology is advancing at a pace that has never been seen before. The rise of machine learning in agile enterprises will truly transform the way information is gathered, analyzed and used. Microservices and containers are going to change the way software systems are designed and built, and we’ll see a lot of movement and acquisitions within the container ecosystem. And, as always, security will be a prominent concern; however, much of the new technology adopted next year will be built upon a foundation of isolation and security, not bolted on as an afterthought. Innovation that doesn’t compromise security will be a welcome change. 2016 is shaping up to be an exciting year.

About the Author

Derek Collison is CEO and founder of Apcera, provider of the trusted cloud platform for global 2000 companies. An industry veteran and pioneer in large-scale distributed systems and enterprise computing, Derek has held executive positions at TIBCO Software, Google and VMware. While at Google, he co-founded the AJAX APIs group and went on to VMware to design and architect the industry’s first open PaaS, Cloud Foundry. With numerous software patents and frequent speaking engagements, Derek is a recognized leader in distributed systems design and architecture and emerging cloud platforms.


Blueprint: 2016 Predictions - Death of Traditional ADC, Ubiquitous SSL

by Sonal Puri, CEO of Webscale Networks

With the end of the year comes the inevitable flood of predictions about how various technologies will evolve in the coming year. And while predictions about upcoming security threats and malware are likely to come in droves, one area is sure to be easily overlooked: what's going to change in companies' back-end systems in the New Year. Organizations' networks and infrastructures will see some changes in 2016, and those changes will come from surprising places.

The Death of the Traditional ADC

Everyone understands that the rise of cloud computing will mean the beginning of the end for on-premise datacenters and server rooms. But one technology that gets overlooked is the application delivery controller (ADC). Traditional ADCs are physical boxes that sit in server rooms and control the distribution of website visitor requests to different servers. But as more companies move their servers to the cloud, there will no longer be a need for a physical product to handle web traffic distribution. Furthermore, if companies have a hybrid on-premise-plus-cloud deployment, it won’t make sense for them to use a box in their server room in addition to a SaaS version in the cloud. A SaaS version can transition across different cloud providers, managed services, and the data center, but for obvious reasons a physical ADC can’t.

The end of traditional ADCs will also be a bad sign for the middleware companies putting these boxes together. It’s likely that vendors such as F5 and Citrix will announce plans for cloud expansion next year, but it’s also likely that these plans will fizzle out. It’s highly possible that ADC vendors will become part of the trend of server vendors working directly with SaaS vendors, and leaving appliance middleware creators out in the cold. In the business world, it’s almost impossible for the old players to take on the new, more nimble players and succeed. For example, look at what happened to Blockbuster when it tried to take on Netflix, or the fight between Amazon and Barnes & Noble.

SSL Encryption Becomes Ubiquitous

SEO has evolved from a buzzword into a business strategy, and everyone is looking for a boost in page ranking over their competitors. One of the lesser-known strategies for boosting page ranking is "forcing" SSL encryption. Last year, Google announced that it had started determining whether a website secures the entire user session with HTTPS – and, if it does, Google raises the site's page ranking. This news flew under the radar until recently, but Symantec just published a white paper outlining how the process works. As Google is practically synonymous with search, we'll see many more companies adopting SSL encryption in 2016. While the technology has been around for a while, its new marketing potential will prove to be much more of an impetus than its security benefits.
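"Forcing" SSL in practice means redirecting every plain-HTTP request to its HTTPS equivalent so the whole user session is encrypted. A minimal sketch of that rewrite logic, using only Python's standard library (the helper name `force_https` is ours, not a standard API):

```python
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Rewrite an http:// URL to https://, leaving other URLs untouched.

    This mirrors what a 301-redirect rule does at the web server: every
    plain-HTTP request is sent to the same path over TLS, so the entire
    user session stays encrypted (the signal Google rewards in ranking).
    """
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(force_https("http://example.com/cart?item=42"))
# https://example.com/cart?item=42
```

In production this lives in the web server or load balancer configuration as a permanent redirect, not in application code; the sketch just shows the transformation being applied.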

Slow is the New Down

One of the biggest problems for organizations' websites is visitors' growing impatience with website performance. Website applications can degrade, and companies won't realize it because their old, unsophisticated monitoring tools typically report only when a website has crashed completely. When it comes to user satisfaction, however, a slow and unresponsive website is just as damaging to your brand as a downed one. Research has revealed that 47% of consumers expect a web page to load in two seconds or less, and 40% will completely abandon a website that takes more than three seconds to load. As was evident during this year's Cyber Monday shopping, these experiences are not limited to small e-commerce businesses but extend to big players as well. With dynamic websites that require every user to see something different when visiting, infrastructure managers need to look at services that not only predict traffic surges and scale appropriately, but also self-heal in the event of a failure.
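A monitoring check that treats "slow" as a failure state can be sketched in a few lines. This is an illustrative sketch, not a real monitoring product; the thresholds follow the figures cited above (users expect pages within two seconds and abandon after three).

```python
# Classify page-load samples so that "slow" is surfaced as a problem,
# not just a complete crash. Thresholds mirror the cited research.

SLOW_SECONDS = 2.0   # users expect a load in two seconds or less
DOWN_SECONDS = 3.0   # users abandon a site slower than three seconds

def classify(load_time_seconds, responded=True):
    """Return 'ok', 'slow', or 'down' for one page-load sample."""
    if not responded or load_time_seconds > DOWN_SECONDS:
        return "down"          # unreachable, or so slow users abandon it
    if load_time_seconds > SLOW_SECONDS:
        return "slow"          # responsive, but damaging the brand
    return "ok"

for sample in (1.2, 2.6, 4.0):
    print(sample, classify(sample))
```

The design choice worth noting is that the "down" bucket includes both crashed and abandoned-slow responses, which is exactly the distinction the naive crash-only monitors miss.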

How to Prepare

What’s the best way to address these changes? More important than any particular solution is the need for organizations to be mindful of what users expect when visiting their websites. If an organization has cloud deployments or other infrastructure changes on the horizon, be sure those changes can scale to meet future as well as present needs. Consider, for example, how social media and new customer engagement tools have transformed how we reach our customers, and the speed of their response. Think also about how your team will manage this growing infrastructure as you expand into new territories, new countries and across multiple public, private and hybrid cloud environments. In the end, improving an organization's infrastructure is about meeting the needs of users and customers, and ensuring that if you do strike gold, with the right promotion at the right time, your website isn't going down while your sales are going up.

About the Author

Sonal Puri serves as the Chief Executive Officer of Webscale Networks (previously Lagrange Systems). Prior to Lagrange, she was the Chief Marketing Officer at Aryaka Networks and led sales, marketing and alliances for the pioneer in SaaS for global enterprise networks. Sonal has more than 18 years of experience with Internet Infrastructure in sales, marketing, corporate and business development and channels. Previously, Sonal headed business development and corporate strategy for the Application Acceleration business unit, and the Western US Corporate Development team at Akamai Technologies working on partnerships, mergers and acquisitions. Sonal also ran global business operations, channels, business development and the acquisition for Speedera Networks (AKAM) and held key management roles in sales, marketing and IT at Inktomi, CAS and Euclid. Sonal holds a Master’s degree in Building Science from the University of Southern California, and an undergraduate degree in Architecture from the University of Mumbai, India.


ARRIS Completes Acquisition of Pace

ARRIS International plc completed its previously announced $2.1B (£1.4B) acquisition of Pace plc.

In addition to CPE, the combination further establishes ARRIS as a global leader in HFC/Optics, complementing its established CMTS leadership position.

"ARRIS is investing in our industry's next stage of growth. This acquisition enables us to scale our leadership and innovation to transform global entertainment and communications for millions of people," said Bob Stanzione, Chairman and CEO of ARRIS. "Our combined organization unites two of the strongest leadership and engineering teams in the industry—giving us the scale, expertise, and technology to make ARRIS, more than ever before, the partner of choice for the world's leading service providers. Together with our customers, we're creating a world of connected, personalized entertainment and communications that blend seamlessly into our everyday lives."

ARRIS to Acquire Pace for $2.1 Billion

ARRIS Group agreed to acquire Pace plc, a supplier of networking equipment for cable operators, for US$2.1 billion (£1.4 billion) in stock and cash.

Under the agreed upon terms, Pace shareholders will receive £1.325 of cash and a fixed exchange ratio of 0.1455 New ARRIS shares for each Pace share, reflecting aggregate consideration as of April 21, 2015 of £4.265 per share, representing a 28% premium to the Pace closing share price as of April 21, 2015.
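The per-share consideration above can be checked directly: £1.325 in cash plus 0.1455 New ARRIS shares summing to £4.265 implies each New ARRIS share was valued at about £20.21 at announcement. A quick check of the arithmetic:

```python
# Deal terms from the announcement, all in GBP per Pace share.
cash_per_share = 1.325        # cash component
exchange_ratio = 0.1455       # New ARRIS shares per Pace share
total_per_share = 4.265       # aggregate consideration as of April 21, 2015

# Implied GBP value of one New ARRIS share at announcement:
# the stock component (total minus cash) divided by the exchange ratio.
implied_arris_value = (total_per_share - cash_per_share) / exchange_ratio
print(round(implied_arris_value, 2))  # 20.21
```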

The transaction will result in the formation of New ARRIS, which will be incorporated in the U.K., with operational and worldwide headquarters in Suwanee, GA, USA. New ARRIS is expected to be listed on the NASDAQ stock exchange under the ticker ARRS. In connection with the formation of New ARRIS, each current share of ARRIS will be exchanged for one share in New ARRIS.

Intel Completes Acquisition of Altera

Intel completed its previously announced acquisition of Altera, a provider of field-programmable gate array (FPGA) technology.

Altera will operate as a new Intel business unit called the Programmable Solutions Group (PSG), led by Altera veteran Dan McNamara. Intel said it is committed to a smooth transition for Altera customers and will continue the support and future product development of Altera's many products, including FPGA, ARM-based SoC and power products. In addition to strengthening the existing FPGA business, PSG will work closely with Intel's Data Center Group and IoT Group to deliver the next generation of highly customized, integrated products and solutions.

"Altera is now part of Intel, and together we will make the next generation of semiconductors not only better but able to do more," said Brian Krzanich, Intel CEO. "We will apply Moore's Law to grow today's FPGA business, and we'll invent new products that make amazing experiences of the future possible – experiences like autonomous driving and machine learning."

Intel to Acquire Altera for its Programmable Logic Devices

Intel agreed to acquire Altera for $54 per share in an all-cash transaction valued at approximately $16.7 billion.

Altera, which is based in San Jose, California, offers programmable logic, process technologies, IP cores and development tools. Its portfolio includes the Stratix series FPGAs with embedded memory, digital signal processing (DSP) blocks, high-speed transceivers, and high-speed I/O pins. Altera's Arria system-on-chip solutions integrate an ARM-based hard processor and memory interfaces with the FPGA fabric using a high-bandwidth interconnect. These devices include additional hard logic such as PCI Express Gen2, multiport memory controllers, error correction code (ECC), memory protection and high-speed serial transceivers.

Altera had 2014 revenue of $1.9 billion, of which 44% of sales were for telecom/wireless, 22% for industrial/military/automotive, and 16% for networking/computer/storage. Altera holds about 39% market share of the PLD segment compared to 49% for Xilinx. The company was founded in 1983 and has approximately 3,000 employees.

"Intel's growth strategy is to expand our core assets into profitable, complementary market segments," said Brian Krzanich, CEO of Intel. "With this acquisition, we will harness the power of Moore's Law to make the next generation of solutions not just better, but able to do more. Whether to enable new growth in the network, large cloud data centers or IoT segments, our customers expect better performance at lower costs. This is the promise of Moore's Law and it's the innovation enabled by Intel and Altera joining forces."

"Given our close partnership, we've seen firsthand the many benefits of our relationship with Intel—the world's largest semiconductor company and a proven technology leader, and look forward to the many opportunities we will have together," said John Daane, President, CEO and Chairman of Altera. "We believe that as part of Intel we will be able to develop innovative FPGAs and system-on-chips for our customers in all market segments."
  • In February 2013, Altera announced that its next generation FPGAs will be based on Intel’s 14 nm tri-gate transistor technology. These next-generation products target ultra high-performance systems for military, wireline communications, cloud networking, and compute and storage applications. Under a partnership deal announced by the firms, Altera’s next-generation products will now include 14 nm, in addition to previously announced 20 nm technologies.

Nokia Obtains Majority Share in Alcatel-Lucent

The French financial regulator, the Autorité des marchés financiers, confirmed that as a result of its ongoing public exchange offer, Nokia has now obtained over 71% of the share capital of Alcatel-Lucent, including over 76% of American depositary shares and over 89% of OCEANE convertible bonds issued by the company.

Philippe Camus, Chairman and interim CEO of Alcatel-Lucent stated: “With the Board of directors of Alcatel-Lucent, we are pleased that the combination of Nokia and Alcatel-Lucent has reached a decisive step, since Nokia obtained a large majority of the share capital on a fully diluted basis. We reaffirm our unanimous support to this industrial project which, by creating a global powerhouse in next-generation communications technologies and services, creates value for our shareholders, as well as for all our stakeholders. On behalf of the Board, I strongly encourage the investors in Alcatel-Lucent that have retained their securities to tender them into the re-opened offer in order to benefit from this creation of value and to fully participate in a major project for our industry.”

NVIDIA Develops Supercomputer for Self-Driving Cars

NVIDIA unveiled an artificial-intelligence supercomputer for self-driving cars.

In a pre-CES keynote in Las Vegas, NVIDIA CEO Jen-Hsun Huang said the onboard processing needs of future automobiles far exceed the silicon capabilities currently on the market.

NVIDIA's DRIVE PX 2 will pack the processing equivalent of 150 MacBook Pros -- 8 teraflops of power -- enough to process data from multiple sensors in real time, providing 360-degree detection of lanes, vehicles, pedestrians, signs, etc. The design will use the company's next gen Tegra processors plus two discrete, Pascal-based GPUs. NVIDIA is also developing a suite of software tools, libraries and modules to accelerate the development and testing of autonomous vehicles.

Volvo will be the first company to deploy the DRIVE PX 2. A public test of 100 autonomous cars using this technology is planned for Gothenburg, Sweden.

Zayo Completes Viatel Acquisition

Zayo completed its previously announced acquisition of Viatel for EUR 98.8 million.  The acquisition adds an 8,400 kilometer fiber network across eight countries to Zayo’s European footprint, including 12 new metro networks, seven data centers and connectivity to 81 on-net buildings.

“The acquisition of Viatel’s European network business strengthens our strategic position in Europe and provides customers with access to our fiber network and expanded connectivity to key international markets,” said Dan Caruso, Zayo chairman and CEO. “Because of the complementary nature of the acquisition, we will begin cross-selling our full suite of services to both Zayo and Viatel customers immediately.”

AT&T Introduces Family of LTE Modules For IOT

AT&T introduced a new family of LTE modules for Internet of Things (IoT) applications and optimized for battery life.

AT&T worked with Wistron NeWeb Corp. (WNC), a module and device manufacturer. The modules are expected to become available from WNC at prices planned as low as $14.99 each, plus applicable taxes, starting in the second quarter. Samples will be available for testing in the first quarter.

“Businesses depend on IoT solutions for gathering real-time information on assets across the world,” said Chris Penrose, senior vice president, Internet of Things, AT&T Mobility. “We’re pleased to be able to facilitate the availability of cost-effective modules so our customers can deploy IoT solutions over the AT&T 4G LTE network. The new LTE modules help the battery life of IoT devices last longer so businesses can better serve their customers.”