Monday, October 26, 2015

Oracle Intros Visual Analytics Cloud Service

Oracle unveiled a Data Visualization Cloud Service that lets users combine data from a variety of sources -- Oracle and other SaaS applications, on-premises systems, external sources and personal files -- and immediately gain new insights using visual analysis.

Oracle Data Visualization Cloud Service provides compatibility and integration with the full breadth of Oracle's analytics offerings.

"Oracle Data Visualization Cloud Service makes data visualization 100 percent self-service, empowering business users to go from raw data to actionable business insights in just a few minutes. This new service delivers rich visual analysis, rapid discovery of insights and secure collaboration, enabling fact-based decisions at every level of the organization," stated Hari Sankar, group vice president, Business Analytics, Oracle.

IBM Launches Apache Spark-as-a-Service

IBM is launching a Spark-as-a-Service offering on Bluemix following a successful 13-week beta program in which more than 4,600 developers used it to build intelligent business and consumer apps fueled by data.

IBM also confirmed that it has redesigned more than 15 of its core analytics and commerce solutions with Apache Spark.

Apache Spark was developed by the AMPLab at UC Berkeley as an open-source cluster computing framework. It offers in-memory processing and is known for its ease of use in creating algorithms that harness insight from complex data.
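Spark's appeal comes from expressing an analysis as a chain of transformations over in-memory data. As a rough illustration of that style (plain Python standing in for Spark's RDD API, not Spark itself):

```python
from functools import reduce

# In Spark, an RDD would be built with sc.parallelize(...); here a plain
# list stands in for the in-memory dataset.
events = [("error", 3), ("info", 1), ("error", 2), ("warn", 5)]

# Transformation pipeline in the Spark style: filter -> map -> reduce.
errors = filter(lambda kv: kv[0] == "error", events)
counts = map(lambda kv: kv[1], errors)
total = reduce(lambda a, b: a + b, counts)

print(total)  # sums the "error" values: 3 + 2 = 5
```

In Spark the same pipeline would run partitioned across a cluster, with the intermediate results kept in memory rather than spilled to disk as in classic MapReduce.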

“For data scientists and engineers who want to do more with their data, the power and appeal of open source innovation for technologies like Spark is undeniable,” said Rob Thomas, Vice President of Product Development, IBM Analytics. “IBM is committed to using Spark as the foundation for its industry-leading analytics platform, and by offering a fully managed Spark service on IBM Bluemix, data professionals can access and analyze their data faster than ever before, with significantly reduced complexity.”

Databricks: Apache Spark Outgrowing Hadoop

The number of standalone deployments of Spark eclipses those on YARN as more users run Spark independently of Hadoop, according to a newly published survey of Spark users conducted by Databricks, the company founded by the creators of Apache Spark. Databricks said that users running Spark in standalone mode (48 percent of respondents) exceed those running Spark on YARN (40 percent of respondents), alongside a majority of users running Spark in...

Google Cloud Dataproc Brings Fast Hadoop & Spark Cluster Provisioning

Google introduced new capabilities for managing Hadoop and Spark clusters. Google Cloud Dataproc, which is now in beta, is a managed Spark and Hadoop service that leverages open source data tools for batch processing, querying, streaming, and machine learning. The service can be used to create and manage clusters ranging in size from 3 to hundreds of nodes. Google said its Cloud Dataproc can create Spark and Hadoop clusters in 90 seconds...

IBM Backs Apache Spark for Cloud Data Processing

IBM is putting its weight behind Apache Spark, an open source engine for large-scale data processing that is compatible with Hadoop data. Apache Spark can run in Hadoop clusters through YARN or in Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and newer workloads such as streaming, interactive queries, and machine learning.

IBM Teams with Twitter and The Weather Company for Cloud Insights

IBM introduced Insight Cloud Services, a service that leverages data from Twitter and The Weather Company, as well as open data sets and business-owned data, to help turn streaming data into insights and change critical business outcomes.

IBM said its new cloud-based insight services use analytic models to take the complexity out of combining internal and external data.

IBM Insight Cloud Services are accessed in a variety of offerings, including:

  • IBM Insight APIs for Developers: Four new APIs that developers can access from IBM Bluemix, IBM's cloud platform, to incorporate historical and forecasted weather data from The Weather Company into web and mobile apps; and two APIs that allow developers to incorporate Twitter content, enriched with sentiment insights from IBM, from Decahose or PowerTrack streams into apps.
  • IBM Insight Data Packages for Weather: New bundled data sets from IBM and The Weather Company customized for key industries and available on the IBM Cloud. Built on a variety of weather data feeds that provide everything from real-time alerts for severe weather disasters to seasonal forecasts, the data packages can help insurers use weather data to alert policyholders ahead of hail storms that may cause property damage, help utilities forecast demand and identify likely service outages, help local governments to develop detailed emergency planning in advance of severe weather, and enable many industries such as retail to use data to help optimize their operations, reduce costs and uncover revenue opportunities ahead of changes in weather.
  • IBM Industry Analytics Solutions: A set of pre-built solutions that leverage IBM Insight Cloud Services cognitive techniques to help enable business users to tackle very specific industry challenges. This expands on a set of industry solutions IBM introduced in May 2015 that provide businesses with the ability to generate new types of insights based on customer behavior.
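As a sketch of how an app might consume one of the weather APIs above, the snippet below parses a forecast payload and flags days worth alerting on. The JSON field names here are illustrative assumptions, not The Weather Company's actual schema:

```python
import json

# Hypothetical response body from a forecast API; the real Weather Company
# schema on Bluemix differs -- these field names are illustrative only.
payload = '''{
  "forecasts": [
    {"day": "Mon", "high_f": 61, "precip_chance": 0.7},
    {"day": "Tue", "high_f": 64, "precip_chance": 0.1}
  ]
}'''

data = json.loads(payload)
# Flag days where rain is likely, e.g. to trigger an in-app alert.
rainy = [f["day"] for f in data["forecasts"] if f["precip_chance"] >= 0.5]
print(rainy)  # ['Mon']
```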

Akanda Releases Astara Liberty Orchestration Code

Akanda, a start-up offering an NFV orchestration platform, announced the Astara Liberty release for Layer 3-7 networking services.

The new version, which is now available to OpenStack operators, is the open source network orchestration platform's most substantial release since its initial launch, and is Astara's first since becoming an official OpenStack project.

Some highlights of Astara's Liberty release:

  • More configurability: A new load balancer driver allows OpenStack operators to configure the platform to load and manage only the resources they choose. Current implementations include NGINX and NGINX Plus.
  • Quicker provisioning: Neutron resources are now much more quickly provisioned onto appliance VMs via a new service that manages pools of hot-standby appliance VMs.
  • Cumulus Networks integration: Tight integration and support for Dynamic Lightweight Network Virtualization gives OpenStack operators a complete, OpenStack-ready stack.
  • Higher availability: Active-active high availability and scaling improvements.

"Astara's first release as an official OpenStack project is an exciting one for OpenStack operators," said Henrik Rosendahl, CEO, Akanda. "The goal of Astara is to make Networking and DevOps' lives easier. With tremendous community support and momentum for the platform throughout its first year, Astara is the answer for massively simplified OpenStack networking stack that can replace traditional -- and expensive -- single vendor lock-in."

Akanda has been incubated by DreamHost since 2012.

Radisys and InterDigital Demo M2M + LTE Core Interoperability

Radisys and InterDigital have demonstrated interoperability between a Radisys end-to-end LTE network, including the CellEngine eNodeB and Evolved Packet Core, with InterDigital’s oneMPOWER Platform.

The demo, which is being showcased at this week's IoT Korea International Conference in Seoul, Korea, includes Radisys’ 3GPP Release 12-compliant Trillium LTE CNE and CellEngine TOTALeNodeB LTE small cell software with certain feature modifications by InterDigital, along with Google Nexus 5 handsets.

“The HetNet revolution can only become a reality with the successful convergence of networks to manage the data traffic explosion,” said Dr. Byung K. Yi, executive vice president, InterDigital Labs, and chief technology officer of InterDigital. “Our oneMPOWER Platform is well positioned to support the growing Internet of Things. With Radisys’ CellEngine TOTALeNodeB software, along with Radisys’ Professional Services expertise to add an Evolved Packet Core emulator and integrated with InterDigital’s oneMPOWER Platform, we’ve demonstrated the ability of HetNets to address the mobile broadband and interoperability requirements for Internet of Things deployments.”

“Radisys is supporting numerous LTE deployments around the world, including more than 80 LTE small cell wins with our CellEngine TOTALeNodeB small cell software,” said Tom McQuade, general manager, CellEngine and Trillium Software, Radisys. “Critical to our success has been the support of our Professional Services team to help configure and integrate our software products at customers alongside our partners’ complementary networking technologies. This successful interoperability demonstration with InterDigital is not only a win for our CellEngine software products, but for our Professional Services team as well.”

Cisco Builds its Silicon Valley Start-up Incubator

Five new Silicon Valley start-ups have joined the roster at the Cisco Entrepreneurs in Residence incubation program. These are:

  • C3DNA delivers self-reliance and portability of any application on any cloud infrastructure at the app layer.
  • LISNR provides a new communication protocol that uses SmartTones, an inaudible technology that has the ability to connect mobile applications and devices.
  • Simularity applies real-time artificial intelligence to the Internet of Things, enabling incident prediction, preventative maintenance and anomaly detection.
  • Tagnos enhances the patient experience and hospital efficiency with real-time smart location solutions powered by the Internet of Things.
  • Zoomdata develops a visual analytics solution for Big Data that empowers business users to easily consume and interact with disparate data sources in real time.

Since the program’s launch in 2014, Cisco EIR has incubated and collaborated with 17 startups in Silicon Valley and Vienna, Austria.

Cisco to Acquire ParStream for Database Analytics

Cisco has agreed to acquire ParStream, a privately-held company based in Cologne, Germany that provides an analytics database allowing companies to store and analyze large amounts of data in near real time anywhere in the network. Financial terms were not disclosed.

Cisco said ParStream's highly specialized database is especially useful for IoT applications that generate large amounts of data at the edge that need to be processed in real time, with minimal infrastructure. ParStream uses compression and indexing to help customers access data faster and at scale, rapidly analyzing and filtering billions of records and getting information to the business in near real time. ParStream will be integrated into Cisco’s Analytics and Automation portfolio.

ParStream was part of the Cisco Entrepreneurs in Residence start-up program.

Qualcomm Intros LTE Modems for IoT

Qualcomm introduced its latest LTE modems (the MDM9207-1 and MDM9206) designed for Internet of Things (IoT) devices.

The MDM9207-1 is purpose-built for IoT applications like smart metering, security, asset tracking, wearables, point-of-sale and industrial automation – many of which require extremely reliable and power-efficient connections to cloud services. It offers Category 1 LTE connectivity with power and throughput optimizations, and other customizable features.

The MDM9206 will allow device manufacturers to build cost-optimized solutions for low data rate IoT applications that are more efficiently addressed by a narrowband modem, and adds enhancements for ultra-low power and extended range as part of Cat-M (eMTC) and narrowband IoT (NB-IOT).

Key customizable features for the MDM9207-1 include support for:

  • LTE Category 1 up to 10 Mbps downlink and 5 Mbps uplink speeds with LTE multimode or LTE Single mode capability and dual Rx or single Rx
  • Power Save Mode (PSM) enabling 10+ years battery life
  • Major cellular standards, including LTE FDD, LTE TDD, DC-HSPA, GSM and TD-SCDMA
  • Scalable software across chipset platforms
  • Advanced, built-in hardware and software security features
  • Integrated voice support for Circuit Switched Fall Back (CSFB) and VoLTE
  • Integrated Applications Processor with ARM Cortex A7 @ 1.2 GHz
  • Linux OS for application development
  • Integrated global positioning support for GPS, Beidou, Glonass, and Galileo
  • Small package at 28nm LP to allow for optimized IoT form factors
  • Pre-integrated support for Qualcomm VIVE™ Wi-Fi 1x1, 802.11ac featuring Qualcomm MU | EFX MU-MIMO technology and BT 4.1 BLE
  • Qualcomm RF360 Front End Solution
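The "10+ years battery life" claim for Power Save Mode comes down to duty-cycle arithmetic: the device spends nearly all its time in deep sleep and wakes only briefly to report. A back-of-the-envelope estimate, where every current and capacity figure is an illustrative assumption rather than a Qualcomm spec:

```python
# Back-of-the-envelope battery-life estimate for a PSM metering device.
# All figures below are illustrative assumptions, not Qualcomm specs.
battery_mah = 5200            # e.g. lithium primary cells (assumed)
sleep_ua = 8                  # PSM deep-sleep current in microamps (assumed)
active_ma = 120               # current during a reporting burst (assumed)
bursts_per_day = 2            # device wakes twice a day to report
burst_seconds = 10

# Average current draw in milliamps, dominated by deep sleep.
active_avg_ma = active_ma * (bursts_per_day * burst_seconds) / 86400
avg_ma = active_avg_ma + sleep_ua / 1000

years = battery_mah / avg_ma / 24 / 365
print(f"estimated battery life: {years:.1f} years")
```

With these assumptions the average draw is a few tens of microamps, which is how a modest battery can plausibly stretch past a decade.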

"Qualcomm Technologies continues to expand the capabilities of LTE to accelerate progress in IoT today with the MDM9207-1 and MDM9206," said Anthony Murray, senior vice president and general manager, IoE, Qualcomm Technologies International Ltd.  "These modems demonstrate our continued commitment to the expansion of existing LTE commercial device capability in addition to new standards-based Low Power Wide Area (LPWA) technologies that will lead to global cellular solutions with longer range, lower power and lower complexity, such as LTE Cat-M (eMTC) and NB-IOT."

PMC Posts Q3 Revenue of $134 Million

PMC-Sierra reported Q3 2015 revenue of $133.6 million, an increase of 7.1 percent compared to $124.8 million in the second quarter of 2015, and a decrease of 1.4 percent from $135.5 million in the third quarter of 2014.

GAAP net income in the third quarter of 2015 totaled $6.7 million or $0.03 per diluted share, compared to GAAP net loss in the second quarter of 2015 of $8.6 million or $0.04 per share, and to GAAP net income in the third quarter of 2014 of $5.5 million or $0.03 per diluted share. GAAP operating margin in the third quarter of 2015 was 8.3 percent, compared to GAAP operating margin in the third quarter of 2014 of 5.5 percent.

For Q3:

  • Storage product revenues reached a quarterly record of $97.6 million, or 73 percent of revenues, an increase of 12 percent compared to storage product revenues of $87.0 million in the second quarter of 2015. 
  • Optical product revenues in the third quarter of 2015 totaled $23.5 million, or 18 percent of revenues, a decrease of 7 percent compared to optical product revenues of $25.2 million in the second quarter of 2015. 
  • Mobile product revenues in the third quarter of 2015 totaled $12.5 million, or 9 percent of revenues, which was flat compared to mobile product revenues of $12.6 million in the second quarter of 2015.
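The quarter-over-quarter percentages above follow directly from the segment figures; a quick arithmetic check:

```python
# Segment revenues in $M, from the Q3/Q2 2015 figures above.
q3 = {"storage": 97.6, "optical": 23.5, "mobile": 12.5}
q2 = {"storage": 87.0, "optical": 25.2, "mobile": 12.6}
total_q3 = 133.6  # total Q3 2015 revenue in $M

for seg in q3:
    change = (q3[seg] - q2[seg]) / q2[seg] * 100   # QoQ change
    share = q3[seg] / total_q3 * 100               # share of Q3 revenue
    print(f"{seg}: {change:+.1f}% QoQ, {share:.0f}% of revenue")
```

This reproduces the reported figures: storage up about 12 percent at 73 percent of revenue, optical down about 7 percent at 18 percent, and mobile essentially flat at 9 percent.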

Marvell Confirms that PricewaterhouseCoopers Quits

In an SEC filing, Marvell Technology Group announced that PricewaterhouseCoopers has resigned as its outside auditor. PwC, which had served in that position for the past two fiscal years, resigned on October 20.

Marvell noted that PwC advised the company that it would need to expand the scope of the 2016 audit in several areas, including whether senior management’s operating style resulted in an open flow of information and communication to set an appropriate tone for an effective control environment.

Sunday, October 25, 2015

Blueprint: NFV - the New and Improved Network…with Some of the Same Old Baggage

by Douglas Tait, Director, Product Marketing, Oracle Communications

Network function virtualization (NFV) is in motion. Software functions are separating from hardware infrastructure and virtualizing, becoming capable of running on a wide array of suitable hardware. In fact, NFV adoption is so strong that, according to a statement made by Heavy Reading analyst Jim Hodges at the recent NFV Everywhere event in Dallas, the global NFV market will grow from $485.3 million in 2014 to $2.2 billion by the end of this year. The promise here, of course, is that communications service providers (CSPs) can reduce the operational and capital costs of updating, maintaining, or enhancing network functions by decoupling the software from the hardware. This gives CSPs more options to buy and deploy best-of-breed software components that run on best-of-breed hardware components.

If only it were that simple.

This article will cover why the path to NFV isn’t so clear cut, and some ideas for overcoming the complexity.  

The New and Improved Network

The NFV “divide and conquer” approach makes sense—the software lifecycle is completely different from the hardware lifecycle, and IT has made huge strides in developing and testing software virtualization technology. NFV provides the blueprint to virtualize software and deploy agile services when needed, or when upgrades are required, all without major expensive network re-deployments.  

This separation of software from hardware is a significant first step for the communications industry, creating new ways to manage network elements. Now, an open market for best-of-breed hardware is possible, which could drive down costs. Also possible is the encapsulation of software elements as “virtualized network functions” (VNFs), which allows CSPs to manage the software lifecycle separately so that upgrades and enhancements do not affect the hardware environment (except in the event of rare scaling and performance dependencies).

NFV is moving network technology in the right direction, and in many ways, it's similar to the cloud computing revolution in IT. And as with cloud computing, now that the deployment model has been revolutionized, the next step in NFV is to free up the hold on software functions. Specifically, NFV has matured to the point where CSPs can deploy on any suitable best-of-breed hardware. Now it's time for CSPs to have more choices to deploy best-of-breed software on that hardware to build the best possible network.

But here is the fly in the ointment, or the "same old baggage": while hardware components are interoperable, the network software components were never designed to be interoperable.

Same Old Baggage

For NFV software, interoperability is required on two levels: 1) between VNFs and 2) between Management and Network Orchestrators (MANOs) and the VNFs. Regardless of NFV, the same old baggage comes from taking existing network functions that do not interoperate and virtualizing them into VNFs—you still have network functions that do not interoperate. In many ways, NFV is compounding the problem, because currently there aren’t agreed-upon standards between the MANO and VNF. As a result, the various suppliers are creating the best interface for just the NFV products they own.

If this situation sounds familiar, it probably is: the problem of NFV interoperability is not new. There have already been several attempts to create hard standards for network functions to fix it -- TINA-C, JAIN, PARLAY, OneAPI. Each made a valiant effort at standardization, yet none fully achieved software interoperability in the communications network. Now, the NFV community is pursuing interoperability with an open source approach -- that is, creating an open source reference implementation model for NFV and hoping that the network equipment providers will follow. This open source model has had some success in the IT industry -- think Apache Foundation, Linux, and GNU. And for the communications industry, projects like OPNFV, OpenDaylight, ONF, OpenStack, and Open vSwitch offer an approach that would move the industry to a common software model, but without requiring NFV vendors to comply with a standard.

The original NFV whitepaper makes it clear that many of the largest and most influential CSPs want to allow VNFs to proliferate in an open market where network providers may mix, match, buy, and deploy best-of-breed VNFs that would automatically connect and run their networks. But to make this objective a reality, full interoperability between VNFs and MANOs is required. So what is the best way for the industry to move forward from this stalemate?

NFV: Path to Software Interoperability 

To overcome these obstacles and achieve the full potential of NFV, the industry should consider not just one solution, but rather an integrated and multi-step path to jumpstart the VNF market from a premise to a promise with a real plan. Here are a few things that the industry should consider:

  • Assemble a policing agency or an interoperability group that tests or runs the software and generates compliance reports.  As discussed, one of the major roadblocks to reaching NFV’s potential is that there is very little standardization enforcement across the communications industry. A standards body or policing agency could help by validating that vendors’ products and solutions meet defined specifications required to call themselves “certified NFV suppliers”—and therefore deemed trustworthy by customers.
  • Continue with the open source community offerings.  Although the open source communities do not have a charter to enforce interoperability, CSPs may use the reference implementation the communities produce as a model or means to test the VNFs. 
  • Define a standard API for VNFs.  While this approach does not completely solve the interoperability issues and does not enforce the standard between the VNFs into the MANO, it would provide a universal programming interface for all VNFs. VNF providers could produce their products despite not having their own MANO product.
  • Define a standard protocol that the industry could adopt as a universal standard, or that at least would be enforceable via something like the Java Community Process. This would enable CSPs to compare vendors, supporting a fair and free market—CSPs could buy the best product for their company without fear that the vendor is violating standards.
  • Provide an interface framework in the VNF manager.  In the absence of hard protocol standards, another way to accelerate the adoption of NFV is a VNF plugin framework. This would allow VNF suppliers to build and test executable plugins that interface with their products, yet run within the VNF manager—promoting technical interoperability between the VNF manager and the VNF, while opening the market for suppliers to work together. While a plugin framework does not solve the problem of interoperability between VNFs, VNF managers and various VNF suppliers would be able to rapidly integrate their products. And, when the industry finally advances and produces a standard, the only update required is the plugin; the VNF manager and the VNFs would require little change.  
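A VNF plugin framework of the kind described in the last bullet can be sketched as a small interface contract between the VNF manager and supplier-provided plugins. This is an illustrative Python sketch of the idea, not any vendor's actual MANO API:

```python
from abc import ABC, abstractmethod

class VNFPlugin(ABC):
    """Contract a VNF supplier implements so the VNF manager can drive
    lifecycle operations without knowing vendor internals (illustrative)."""

    @abstractmethod
    def deploy(self) -> str:
        """Instantiate the VNF and return an instance identifier."""

    @abstractmethod
    def health(self) -> bool:
        """Report whether the VNF is ready for traffic."""

class FirewallVNF(VNFPlugin):
    # A hypothetical supplier-provided plugin; the manager sees only
    # the VNFPlugin interface, never the vendor's internals.
    def deploy(self) -> str:
        return "firewall-instance-1"

    def health(self) -> bool:
        return True

class VNFManager:
    """Loads plugins and drives their lifecycle through the interface."""
    def __init__(self):
        self.plugins = []

    def register(self, plugin: VNFPlugin):
        self.plugins.append(plugin)

    def deploy_all(self):
        return [p.deploy() for p in self.plugins if p.health()]

mgr = VNFManager()
mgr.register(FirewallVNF())
print(mgr.deploy_all())  # ['firewall-instance-1']
```

The point of the design is the one made in the article: if a standard later emerges, only the plugin layer changes, while the manager and the VNFs themselves require little modification.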

If the industry can develop standards against which vendors can build NFV solutions, and employ a policing body to enforce these standards, VNF interoperability will move forward—driving unprecedented innovation to bring new services and new revenue streams to market quickly, with much lower risk. But the industry must continue to move forward in the meantime. So it must take action now to enable industry players to work together, promoting a culture of openness and innovation.

About the Author

Doug Tait is director of product marketing, Oracle Communications.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Oracle Partners with Intel to Take on IBM as it Pivots to Cloud

Oracle and Intel announced a special partnership aimed at migrating customers running Oracle databases on IBM Power Systems to a new platform based on Intel Xeon silicon. The companies said they are able to deliver database-aware software enhancements optimized for Intel Xeon.

At this week's Oracle OpenWorld, the company is rolling out enhancements to its partner network as it makes a strategic "pivot to the cloud." “Cloud is our top priority and we are aligning our resources to that strategic initiative,” said Shawn Price, senior vice president, Cloud, Oracle. “We will work with our partner ecosystem to pivot to the cloud and fully capitalize on the historic opportunity before us. We remain committed to expanding our partner community and providing all of its valued members the tools, technology and expertise they need to deliver excellence to our joint customers and succeed in the market.”

Oracle OpenWorld 2015 Expects 60,000 Attendees

This week's Oracle OpenWorld 2015 in San Francisco is expected to attract 60,000 in person attendees. The conference, which runs October 25-29, takes place at 18 locations throughout downtown San Francisco. It features 2,500 sessions, 3,000 speakers, and more than 400 Oracle demos, as well as partner and customer exhibitions.

Elton John will highlight the customer party at Treasure Island.

Friday, October 23, 2015

Australia's nbn Tests Alcatel-Lucent's G.fast

Australia’s nbn national broadband network completed a trial of Alcatel-Lucent's G.fast technology, which uses the copper infrastructure that extends the last few hundred meters into the premises to achieve speeds of close to 1 Gbps.

nbn's stated goal is to provide affordable ultra-broadband download speeds of at least 25 Mbps to eight million premises and at least 50 Mbps to 90 percent of premises with fixed-line access by 2020. nbn is using a mix of technologies to meet specific deployment needs, including a significant fiber-to-the-node (FTTN) and fiber-to-the-building (FTTB) component using VDSL Vectoring, as well as fiber-to-the-premises (FTTP), hybrid fiber coaxial (HFC), fixed wireless and satellite technologies.

The trial -- conducted over the past month -- demonstrated how G.fast can complement nbn’s existing multi-technology deployment toolkit, offering a range of opportunities to evolve its capabilities. Alcatel-Lucent’s technology uses the 7368 Intelligent Services Access Manager (ISAM) Optical Network Terminals (ONTs), 7368 ISAM CPE with integrated reverse power and the 5520 Access Management System to accelerate last mile ultra-broadband connectivity.

Ericsson Cites Slowdown in 4G Rollouts in China

Ericsson's overall sales for Q3 2015 increased by 3% YoY to reach SEK 59.2 billion (approximately US$6.98 billion). Sales, adjusted for comparable units and currency, decreased by 9% due to lower sales in Networks, where the company cited a slowdown of 4G deployments in mainland China. Network sales in North America have stabilized. This was partly offset by sales growth in Professional Services, where sales increased by 15% YoY.

Gross margin came in at 33.9%, down from 35.2% a year earlier. Operating expenses, excluding restructuring, decreased YoY to SEK 14.3 billion.

Ericsson said its global cost and efficiency program is on target to achieve annual net savings of SEK 9 billion during 2017.

Huawei Marine to Build Brazil-to-Cameroon Cable

Huawei Marine has been commissioned to construct the Cameroon-Brazil Cable System (CBCS), a 6,000-km undersea network linking Fortaleza (Brazil) and Kribi (Cameroon).

The cable system, which is sponsored by CamTel and China Unicom, will have an initial system capacity of 32 Tbps over 4 fiber pairs. Huawei Marine will deploy its 6fp submarine Repeater 1660, the industry’s first titanium repeater, which boasts a slim-line profile to allow direct lay and plough burial. The cable system is expected to come online in 2017.
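The headline capacity figure is easy to sanity-check: 32 Tbps over 4 fiber pairs works out to 8 Tbps per pair. One illustrative way to fill that (the actual channel plan is not disclosed in the announcement) is 80 wavelengths of 100 Gbps each per pair:

```python
# Sanity check on the CBCS capacity figures from the announcement.
total_tbps = 32
fiber_pairs = 4
per_pair_tbps = total_tbps / fiber_pairs   # capacity per fiber pair

# Illustrative channel plan only: how many 100G wavelengths would
# be needed to fill one pair.
waves_100g = per_pair_tbps * 1000 / 100
print(per_pair_tbps, waves_100g)  # 8.0 Tbps/pair, 80 x 100G waves
```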

Thursday, October 22, 2015

PMC Unveils 3rd Gen OTN/Ethernet Framer with Encryption for Routers

PMC introduced the META-240G, its third-generation, multi-rate OTN/Ethernet framer for routers. This next-gen converged OTN/Ethernet 10G/40G/100G router PHY enables 2X scaling of router line cards.

New capabilities in this META-240G generation include an integrated 100G gearbox for interfacing directly with optical modules, including the new high-density QSFP28 transceivers. PMC also added ultra-low latency OTN encryption, targeted at point-to-point secure data center interconnects and multipoint secure cloud connectivity. The new silicon also consumes 50 percent less power per port than the previous generation.

Key features include:
  • Support for OTN wrapping of 10GE/40GE/100GE to OTU2/OTU3/OTU4;
  • Industry compatible ITU-T 10G, 40G, and 100G FECs;
  • Integrated, protocol agnostic AES-256 datapath encryption engine with flexible key management for SDN control;
  • Support for IEEE 1588v2 PTP and Synchronous Ethernet timing protocols, enabling packet-based mobile backhaul and packet transport networks;
  • Integrated 100G Gearbox to connect directly to a wide range of 10G, 40G and 100G optical module types, including CFP2, CFP4 and QSFP28; and
  • Leverages common Software Development Kit across META and DIGI family code base.

“The convergence of packet and optical transport layers with OTN allows service providers to deploy more robust and manageable networks,” said Babak Samimi, vice president of marketing and applications for PMC’s Communications Business Unit. “With our DIGI products driving the migration to OTN in the optical transport network and the META family extending OTN to the router network, the industry is realizing this network vision built on PMC’s technology.”

Sampling is underway.

PMC's DIGI-G4 Processor Scales OTN Line Card Capacity by 4X

PMC-Sierra has commenced sampling of its new DIGI-G4 chip -- the industry's highest density 4x100G OTN processor, featuring 50 percent less power per port than the previous generation.

The DIGI-G4 OTN processor builds on the success of PMC’s DIGI-120G, enabling the transition to 400G line cards in packet optical transport platforms (P-OTP), ROADM/WDM and optimized data center interconnect platforms for OTN switched metro networks. It increases 10G, 40G and 100G line card port density by 4X with flexible client mapping of Ethernet, storage, IP/MPLS and SONET/SDH, while reducing power by 50 percent per port. DIGI-G4 builds on the IP from DIGI-120G, enabling customers to maintain their rich feature set and software investment, which reduces time to market by up to six months and lowers development costs.

Significantly, DIGI-G4 delivers multi-rate, sub-180ns latency OTN encryption, allowing cloud and communications service providers to ensure security without compromising performance. DIGI-G4 supports sub-wavelength OTN encryption and is compatible with OTN switched networks. PMC said these capabilities, combined with the densest 10G/40G/100G Ethernet ports, enable a new class of low-power, high-capacity transport platforms optimized specifically for the hyperscale data center WAN interconnect market (see whitepaper).
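To put the sub-180ns figure in context, the amount of data the encryption engine holds in flight at line rate is tiny; a quick calculation:

```python
# At 100 Gbps line rate, a 180 ns encryption latency corresponds to
# this much data held in flight by the encryption engine:
line_rate_bps = 100e9
latency_s = 180e-9
bits_in_flight = line_rate_bps * latency_s     # ~18,000 bits
print(round(bits_in_flight / 8), "bytes buffered")
```

Roughly 2 KB of buffering per 100G port, which is why the latency cost of in-line encryption at this scale is negligible compared to transit delay.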

“Without DIGI-G4 the industry would be challenged to transition to 400G OTN switched metro networks,” said Babak Samimi, vice president of marketing and applications for PMC’s Communications Business Unit. “The world’s leading OEMs are designing around DIGI-G4 because our solution allows them to scale their product portfolios to the highest capacities at half the power per port, while leveraging their existing DIGI software investment. Coupled with new capabilities like OTN encryption, we’re solidifying PMC’s position as the industry leader in OTN processors.”

DIGI-G4 Highlights

  • Industry’s first single-chip 4x100G solution for OTN switched line cards
  • Integrated 100G Gearbox for direct connect to CFP2, CFP4 and QSFP28 transceivers
  • Industry’s highest density 10G, 40G and 100G multi-service support, including Ethernet, storage, IP/MPLS and SONET/SDH
  • Industry’s first sub-wavelength OTN encryption solution to secure the cloud
  • Industry’s first 25G granularity flexible framer to DSP interface providing scalable line-rates to match the programmable modulation capabilities found in next-generation Coherent DSPs
  • Multi-chip Interlaken interconnect solutions for scalable compact chassis data center interconnect applications
  • High performance OTN-SDK with adapter layer software accelerates customer time to market

The PM5990 DIGI-G4 is sampling now.

Affirmed Networks Collaborates with HP on OpenStack Service Platform

Affirmed Networks is working with HP to deliver a carrier-grade OpenStack NFV service platform for mobile networks.

The platform combines Affirmed Networks' NFV-based mobile packet core solution with HP's OpenStack solution.

"Carriers worldwide are strongly motivated to move to virtualized networks running on OpenStack environments due to the many capital and operational savings this architecture provides," said Amit Tiwari, VP Strategic Alliances and Systems Engineering, Affirmed Networks.

"We continue to see strong interest in NFV from CSPs who want to gain greater agility against non-traditional competitors from a content delivery perspective," said Werner Schaefer, vice president, Network Functions Virtualization, HP. "We are committed to working strategically with industry leaders like Affirmed Networks who can help us provide open, carrier-grade solutions that allow our customers to drive top-line growth."

Pacific Wave Activates 100G Link to Asia

Pacific Wave activated the first 100G research and education (R&E) network link between Asia and the U.S., with related transit, peering, and exchange fabric.

Pacific Wave will provide this 100Gbps capability to the National Science Foundation (NSF) funded International Research Network Connections (IRNC) TransPAC4 project, led by Indiana University.

This integrated 100Gbps trans-Pacific Layer 1, 2 and 3 TransPAC – Pacific Wave network fabric incorporates:

  • A dedicated 100Gbps wavelength between the Pacific Wave national Research & Education (R&E) node in Seattle, U.S.A. and Tokyo, Japan
  • 100Gbps peering and routing fabrics – using Brocade MLX routers – in Tokyo and Seattle
  • Access and peering in Tokyo for Asian R&E networks at both the long-standing WIDE/T-REX/T-LEX Open Exchange Point, and at the newly-established Pacific Wave node at 3-8-21 Higashi-Shinagawa, Shinagawa-Ku.
  • The 100Gbps connection in the U.S. using Pacific Wave’s existing 100Gbps open, distributed, wide-area peering and exchange fabric, which is based on a distributed mesh of Brocade MLX routers, across the Pacific Wave backbone, and has primary points of presence in Seattle, Sunnyvale, and Los Angeles, as well as additional 100Gbps access and peering at StarLight in Chicago
  • On the U.S. side, the Pacific Wave fabric provides direct 100Gbps connectivity, over multiple 100Gbps interfaces, to Internet2’s Advanced Layer 2 and 3 Services (AL2S and AL3S), as well as 100Gbps connectivity to the U.S. Department of Energy’s ESnet, and 100Gbps and/or 10Gbps connections to nearly all the major Asia Pacific R&E networks, the U.S. National Oceanic and Atmospheric Administration’s N-Wave, and commercial cloud providers regularly used by national and international R&E communities
  • Interconnection of the U.S.-based Pacific Wave and the Japan-based WIDE/T-REX peering, exchange, interconnection and Science-DMZ facilities, creating the first intercontinental R&E open, distributed exchange and peering fabric
  • Extension of the new Pacific Wave experimental SDN and SDX fabrics across the Pacific Ocean to Asia, enabling direct interconnection with Asian R&E SDN and SDX projects, including those supported by WIDE and others. GENI, OpenFlow, and related projects will also be supported
  • Connectivity to Pacific Wave’s 100Gbps wide-area inter-institutional Science DMZ network, which has primary points of presence in Los Angeles, Seattle, and Sunnyvale, and which serves as the backplane for the new NSF-sponsored Pacific Research Platform

California's CENIC Wins Grant to Expand Pacific Wave Research Net

The Corporation for Education Network Initiatives in California (CENIC), along with the Pacific Northwest Gigapop (PNWGP), was awarded a grant of nearly $3.5 million from the National Science Foundation’s International Research Network Connections (IRNC) program to expand the Pacific Wave Software Defined Exchange (SDX) over a five-year period.

The grant enables the expansion of U.S.-Asia scientific research network collaboration.

The Pacific Wave SDX, which will be deployed in Seattle, Los Angeles, and the Bay Area, is an integral component of the international effort to interconnect research and education networks using Software Defined Networking (SDN). The Pacific Wave SDX joins several other IRNC awardees to support research, development, and experimental deployment of multi-domain SDXs. It will also serve as an innovation platform for next-generation networking, including enhanced connectivity to campus and wide-area “Science DMZ” infrastructures such as the Pacific Research Platform (PRP), which enables researchers to move data between labs and scientific instruments and collaborators’ sites, supercomputer centers, and data repositories without performance degradation.

Napatech: 100G Rollouts Will Drive Market for Network Appliances

By 2018, a steep rise in the penetration of 100G networks will drive the need for a new generation of accelerated, virtual network appliances.

A custom study conducted by Heavy Reading on behalf of Napatech has found that the traditional hardware appliance market has reached maturity and will be extended with 100G solutions and then, over time, replaced with virtualized appliances.  The study surveyed over 135 qualified service providers and vendors.  Here are some key findings:

Network appliances are undergoing a fundamental transition: Two drivers are responsible for this transition -- increased transport network throughput and the impact of virtualization.

100G transport networks to increase dramatically:
Increases in data network throughput are happening at all levels due to the deployment of 100G. The survey forecasts that the 100G transport network penetration will grow significantly by the end of 2018:

  • From 22 percent to 75 percent penetration in core transport networks
  • From 14 percent to 71 percent penetration in metro networks
  • From 9 percent to 58 percent penetration in access networks
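Taken together, the forecast figures imply roughly a three- to six-fold increase in 100G penetration across network tiers by the end of 2018. A quick sketch of that arithmetic (the penetration figures come from the survey summary above; the growth-multiple calculation is our own illustration, not part of the study):

```python
# 100G penetration (percent): current vs. forecast end-2018, per the survey.
penetration = {
    "core":   (22, 75),
    "metro":  (14, 71),
    "access": (9, 58),
}

# Compute the implied growth multiple for each network tier.
for tier, (now, forecast) in penetration.items():
    print(f"{tier}: {now}% -> {forecast}% (~{forecast / now:.1f}x)")
```

The access tier shows the steepest relative growth (roughly 6x), which is consistent with the report's conclusion that 100G deployment is happening at all levels of the network.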

Profound impact of SDN/NFV expected: The impact of SDN and NFV on network appliances will be profound but positive overall. With virtualized network appliance growth on the rise, both NEPs and CSPs see a continued need for network appliances in a 100G virtualized world.

“As data growth continues to accelerate, increases in data network throughput are occurring at all levels due to the deployment of 100G. We are committed to delivering solutions that assist our customers in analyzing data at maximum throughput without compromise. This report is a strong indicator that our best-of-breed accelerators are poised to provide our customers with the tools they need to meet the demands of today’s high-speed, physical and virtual network environments,” stated Henrik Brill Jensen, CEO, Napatech.