
Thursday, May 8, 2014

Nokia: Three Metrics of Disruptive Innovation


by David Letterman, Nokia

‘Disruptive innovation’ has been a favorite discussion topic for years. I am sure every industry, every company and every innovation team, has had rounds and rounds of discussion about what disruptive innovation means for them.

Rather than attempting another end-all, be-all definition for innovation, let’s focus on the passion it evokes and the permissions it enables. Disruptive innovation, as an internal charter, allows expansion beyond previous boundaries; it gives permission to go after new markets, new customers and new business models. If left unencumbered, it can guide the company to proactively find and validate big problems for which external partners, new products and new markets can be created. Disruptive innovation can be a source of otherwise unattainable revenue growth and market share.

Innovation is about converting ideas into something of value: making something better AND, hopefully, something our customers are willing to pay for. To keep the framework simple, let’s put it into two buckets and distinguish between incremental and disruptive innovation.

Most innovation in established companies is developed by corporate innovation engines, whose job is to continually improve their products and services. This continuous innovation delivers incremental advances in technology, process and business model. Specialized R&D teams can add value to these innovation engines by solving problems differently or having a specific charter to go after larger levels of improvement. Although the risks are higher, breakthrough innovation occurs when these teams achieve significantly better functionality or cost savings. This combination of corporate and specialized incremental innovation is absolutely necessary for companies to keep up with or get ahead of the competition, and it is something most successful companies are very good at.

Disruptive innovation, on the other hand, is much more difficult for the corporate machinery. Here, new product categories are created, new markets are addressed and new value chains are established.

There is no known baseline to refer to.

Disruption implies that someone is losing – being disrupted. So clearly you won’t find a product roadmap for it in the company catalog. And it’s not even necessarily solving the problems of the current customer base. This is an area where, with the right passion, permissions and charter, a specialized innovation team can take a lead role and create significant growth for the company.

Here is my take on three characteristics of teams chartered to pursue disruptive innovation:

  • Outside-in perspective - A strong outside-in perspective is crucial, not only for identifying the problem and validating the opportunity, but also for finding and creating a solution, and perhaps even taking it to market. Collaboration is everything when it comes to disruption.
  • Risk quotient - Arguably, all innovation contains some element of risk. But when proactively seeking disruption, we must allow for an even higher degree of risk. For most innovation teams, ‘Fail fast’, ‘Fail often’ and ‘Fail safe’ are the mantras. But in the case of disruptive innovation, when we are seeking new markets, perhaps based on new technologies, our probability of success is untested. And to the incumbents, this new solution is unacceptable, often something they have never considered or simply cannot deliver. If you are solving a really important problem, it justifies embracing the risk, revalidating the opportunity and digging deeper to create a solution. Redefine risk in the context of meaningful disruption – ‘Fail proud’ and keep on solving. Remember SpaceX?
  • How disruptive is disruptive - For a new entrant to eventually become disruptive it needs to be significantly better in functionality, performance and efficiency - or much cheaper - than the alternatives.  Although the benefits may initially only be noticed by early adopters, for the solution to disrupt a category it must be made available to, and eventually accepted by, the masses.

A simple example that illustrates these three characteristics is how the Personal Navigation Device (PND) market was disrupted by the smartphone.

In the early and mid 2000s, Garmin and TomTom had a lock on the personal navigation market. When Nokia and the other phone manufacturers began delivering GPS via phones, they were coming to the market via a totally new channel, embedding the functionality in a device that the consumer would carry with them at all times.


The incumbents may have acted unfazed, but in reality they couldn’t respond to the threat. The functionality may have been inferior to what they were selling, but the cost was perceived as free. It was totally unacceptable and the business model was “uncopiable.” What started as a feature in just select high-end phones would soon be adopted as standard functionality in every smartphone, and expected by end users by default. In just two years, there were five times as many people carrying GPS-enabled phones in their pockets as there were PNDs being sold.

Silicon Valley Open Innovation Challenge

There are many other characteristics you might consider to be the most important measurements for disruptive innovation. For me, these three are as good as any. It comes down to the simple questions of “Why does it matter?”, “What problem does this empower us to solve that was otherwise unmet?” and “How can we provide significantly positive impact for the company and for the people the innovation will serve?”

Nokia’s Technology Exploration and Disruption (TED) team is chartered to look at exactly these questions. In its search for the next disruption, it has launched the Silicon Valley Open Innovation Challenge.

This competition is an open call to Silicon Valley innovators to collaboratively discover and solve big problems with us, and to do so in ways that are significantly better, faster or cheaper than we could have done alone. We see Telco Cloud and colossal data analytics as the two major transformational areas for the wireless industry, opening up possibilities for disruption – and those are the focus themes for the Open Innovation Challenge. We’re willing to take the risk because we know the rewards of innovation are worth it.

Click here to submit your ideas and be part of something truly disruptive. Apply now!
The last date to apply is 19 May 2014.

http://nsn.com/OpenInnovationChallenge

David Letterman works in the Networks business of Nokia within its Innovation Center in the heart of Silicon Valley. Looking after Ecosystem Development Strategy for the Technology Exploration and Disruption global team, David is exploring how to create exponential value by pushing the boundaries of internal innovation. An important initiative is Nokia’s Silicon Valley Open Innovation Challenge, calling on the concentrated problem-solving intellect of the Valley, to solve two of the biggest transformations for Telco: Colossal data analytics and Telco Cloud. Prior to his current position, David worked for a top tier Product Design and Innovation Consultancy, and held various business development and marketing management roles during a previous 10-year tenure with Nokia.


Nokia invests in technologies important in a world where billions of devices are connected. We are focused on three businesses: network infrastructure software, hardware and services, which we offer through Networks; location intelligence, which we provide through HERE; and advanced technology development and licensing, which we pursue through Technologies. Each of these businesses is a leader in its respective field. Through Networks, Nokia is the world’s specialist in mobile broadband. From the first ever call on GSM, to the first call on LTE, we operate at the forefront of each generation of mobile technology. Our global experts invent the new capabilities our customers need in their networks. We provide the world’s most efficient mobile networks, the intelligence to maximize the value of those networks, and the services to make it all work seamlessly. 
http://www.nsn.com
http://company.nokia.com

Wednesday, March 12, 2014

Blueprint: SDN and the Future of Carrier Networks

by Dave Jameson, Principal Architect, Fujitsu Network Communications

The world has seen rapid changes in technology in the last ten to twenty years that are historically unparalleled, particularly as it relates to mobile communications. As an example, in 1995 there were approximately 5 million cell phone subscribers in the US, less than 2 percent of the population. By 2012, according to CTIA, there were more than 326 million subscribers. Of those, more than 123 million were smartphones. This paradigm shift has taken information from fixed devices, such as desktop computers, and made it available just about anywhere. With information available anywhere, in the hands of individual users, some have started to call this the "human centric network," as network demands are being driven by these individual, often mobile, users.

But this growth has also created greater bandwidth demands and in turn has taken its toll on the infrastructure that supports it. To meet these demands we’ve seen innovative approaches to extracting the most benefit from existing resources, extending their capabilities in real-time as needed.  Clouds, clusters and virtual machines are all forms of elastic compute platforms that have been used to support the ever growing human centric network.

But how does this virtualization of resources in the datacenter relate to SDN in the telecom carrier's network? Specifically, how does SDN, designed for virtual orchestration of disparate computational resources, apply to transport networks? I would suggest that SDN is not only applicable to transport networks but a necessity for them.

What is SDN?

The core concept behind SDN is that it decouples the control layer from the data layer. The control layer is the layer of the network that manages the network devices by means of signaling. The data layer, of course, is the layer where the actual traffic flows. By separating the two, the control layer can use a different distribution model than the data layer.

The real power of SDN can be summed up in a single word - abstraction. Instead of sending specific code to network devices, machines can talk to the controllers in generalized terms. Applications then run on top of the SDN network controller.

As seen in Figure 1, applications can be written and plugged in to the SDN network controller. Using an interface such as REST, the applications can make requests of the SDN controller, which will return the results. The controller understands the construct of the network and can communicate requests down to the various network elements that are connected to it.
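To make that northbound interaction concrete, here is a minimal sketch of an application querying a hypothetical SDN controller over REST using the Python requests library; the controller URL, endpoint paths and JSON fields are illustrative assumptions, not any specific controller's API.

```python
# Minimal sketch of a northbound REST interaction with a hypothetical SDN
# controller. Endpoint paths and JSON fields are illustrative only and do
# not correspond to any specific controller's API.
import requests

CONTROLLER = "http://sdn-controller.example.com:8181"

def get_topology():
    """Ask the controller for its abstracted view of the network."""
    resp = requests.get(f"{CONTROLLER}/api/topology", timeout=5)
    resp.raise_for_status()
    return resp.json()

def request_path(src_node, dst_node, bandwidth_mbps):
    """Request a path in generalized terms; the controller maps this
    onto the actual network elements it manages."""
    payload = {"src": src_node, "dst": dst_node, "bandwidth_mbps": bandwidth_mbps}
    resp = requests.post(f"{CONTROLLER}/api/paths", json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()   # e.g. {"path_id": "...", "hops": [...]}

if __name__ == "__main__":
    print(get_topology())
    print(request_path("dc-router-1", "cell-site-42", 150))
```

The point is that the application never touches device-specific syntax; it speaks in abstractions and lets the controller translate them southbound.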

The southbound interface handles all of the communications with the network elements themselves. It can take one of two forms. The first creates a more programmable network: instead of just sending commands that tell devices what to do, SDN can actually reprogram the device to function differently.

The second type of southbound interface is a more traditional one that uses existing communication protocols to manage devices that are currently being deployed with TL1 and SNMP interfaces. SDN has the ability to control disparate technologies, not just equipment from multiple vendors.

Networks are, of course, comprised of different devices that manage specific segments of the network. As seen in Figure 2, a wireless carrier will have wireless transmission equipment (including small cell fronthaul), with transport equipment to backhaul traffic to the data center. In the data center there will be routers, switches, servers and other devices.


Today, these segments are at best under "swivel chair management" and at worst overseen by multiple NOCs, each managing its respective segment. Not only does this add OpEx for staffing and equipment, it also makes provisioning difficult and time-consuming, since each network section must provision its part in a coordinated fashion.

In an SDN architecture, a layer called the orchestration layer can sit above the controller layer; its job is to talk to multiple controllers.

Why do carriers need SDN?

As an example of how SDN can greatly simplify the provisioning of the network, let's look at what it would take to modify the bandwidth shown in Figure 2. If there is an existing 100 Mbps Ethernet connection from the data center to the fronthaul and it is decided that the connection needs to be 150 Mbps, a coordinated effort needs to be put in place. One team must increase the bandwidth settings of the small cells, the transport team must increase bandwidth on the NEs, and routers and switches in the data center must be configured by yet another team.

Such adds, moves, and changes are time-consuming in an ever-changing world where dynamic bandwidth needs are no longer negotiable. What is truly needed is the ability to respond to this demand in real time, where the bandwidth can be provisioned by one individual using the power of abstraction. The infrastructure must be able to move at a pace closer to the one-click world we live in, and SDN provides the framework required to do so.
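As a rough illustration of that "one individual, one abstract request" idea, the sketch below shows an orchestrator fanning a single bandwidth change out to per-domain controllers; the class and method names are hypothetical, not a real product interface.

```python
# Illustrative sketch only: one abstract "change bandwidth" request fanned
# out by an orchestrator to per-domain controllers (radio, transport, data
# center). Names are hypothetical, not a real product API.

class DomainController:
    def __init__(self, name):
        self.name = name

    def set_bandwidth(self, service_id, mbps):
        # In practice this would translate into the domain-specific
        # southbound protocol (OpenFlow, TL1, SNMP, ...).
        print(f"[{self.name}] service {service_id} set to {mbps} Mbps")

class Orchestrator:
    def __init__(self, controllers):
        self.controllers = controllers

    def change_bandwidth(self, service_id, mbps):
        # One request from one operator; the orchestrator coordinates
        # every domain so no manual, per-team provisioning is needed.
        for ctrl in self.controllers:
            ctrl.set_bandwidth(service_id, mbps)

orchestrator = Orchestrator([
    DomainController("small-cell fronthaul"),
    DomainController("optical transport"),
    DomainController("data-center fabric"),
])
orchestrator.change_bandwidth("enet-svc-7", 150)   # 100 -> 150 Mbps in one step
```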

SDN Applications

No discussion of SDN would be complete without examining the capabilities that SDN can bring through the mechanism of applications. There are many applications that can be used in an SDN network. Figure 4 shows examples of applications, broken down by type. This list is by no means exhaustive.


One example of an application that specifically applies to carrier networks is path computation, or end-to-end provisioning. Over the years there have been many methods that have sought to provide a path computation engine (PCE), including embedding the PCE into the NEs, intermingling the control and data layers. But because the hardware on the NEs is limited, the scale of the domain each can manage is also limited. SDN overcomes this issue by the very nature of the hardware it runs on, specifically a server. Should the server become unable to manage the network due to size, capacity can be added by simply expanding the hardware (e.g., adding a blade or hard drive). SDN also addresses the fact that not all systems will share common signaling protocols. SDN mitigates this issue not only by working with disparate protocols, but by being able to manage systems that do not have embedded controllers.
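For illustration only, here is a toy path computation engine running over an abstracted topology held on a server, using Dijkstra's algorithm; the topology and link costs are invented.

```python
# Toy path computation engine (PCE): shortest path over an abstracted
# topology held on the server, using Dijkstra's algorithm.
import heapq

def shortest_path(graph, src, dst):
    """graph: {node: {neighbor: cost}}; returns (total_cost, [nodes]) or None."""
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None  # no path available

topology = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
print(shortest_path(topology, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

Because the computation lives on a server rather than in NE hardware, scaling the managed domain is a matter of adding server capacity, as the article notes.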

Protection and Restoration

Another application that can be built is protection and restoration. The PCE can find an alternative path dynamically based on failures in the network. In fact, it can even find restoration paths when there are multiple failed links. The system can systematically search for the best possible restoration paths even as new links are added to the existing network, finding the most efficient paths as they become available.
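Building on the toy PCE sketch above (it reuses the shortest_path helper and topology defined there), restoration can be illustrated as pruning failed links from the abstracted topology and recomputing the best remaining path.

```python
# Toy restoration on top of the same PCE idea: prune failed links from the
# abstracted topology, then recompute the best remaining path (reuses the
# shortest_path helper and topology from the PCE sketch above).
def restore_path(graph, src, dst, failed_links):
    """failed_links: set of (node_a, node_b) tuples known to be down."""
    pruned = {
        node: {nbr: cost for nbr, cost in nbrs.items()
               if (node, nbr) not in failed_links and (nbr, node) not in failed_links}
        for node, nbrs in graph.items()
    }
    return shortest_path(pruned, src, dst)

# Link B-C fails; the PCE finds the next-best path automatically.
print(restore_path(topology, "A", "D", {("B", "C")}))   # (5, ['A', 'C', 'D'])
```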

SDN and OTN Applications

A prime example of SDN being used to configure services can be seen when it is applied to OTN. OTN is a technology that allows users to densely and efficiently pack different service types into fewer DWDM wavelengths. OTN can greatly benefit the network by optimizing transport but it does add some complexity that can be simplified by the use of SDN.

Network Optimization  

Another area where SDN can improve utilization is network optimization, so that over time the network makes better use of its resources. Again, using the example of OTN, SDN can be used to reroute OTN paths to minimize latencies, to prepare for cutovers, and to respond to churn in demand.

NFV

In addition to applications, SDN becomes an enabler of Network Function Virtualization (NFV). NFV allows companies to provide services that currently run on dedicated hardware located on the end user's premises by moving the functionality to the network.

Conclusion

It is time for us to think of our network as being more than just a collection of transport hardware. We need to remember that we are building a human centric network that caters to a mobile generation who think nothing of going shopping while they are riding the bus to work or streaming a movie on the train.

SDN is capable of creating a programmable network by taking both next-generation systems and existing infrastructure and making them substantially more dynamic. It does this by taking disparate systems and technologies and bringing them together under a common management system that can utilize them to their full potential. By using abstraction, SDN can simplify the software needed to deliver services, improve use of the network and shorten delivery times, leading to greater revenue.

About the Author
Dave Jameson is Principal Architect, Network Management Solutions, at Fujitsu Network Communications, Inc.

Dave has more than 20 years of experience working in the telecommunications industry, most of which has been spent working on network management solutions. Dave joined Fujitsu Network Communications in February of 2001 as a product planner for NETSMART® 1500, Fujitsu’s network management tool, and has also served as its product manager. He currently works as a solutions architect specializing in network management. Prior to working for Fujitsu, Dave ran a network operations center for a local exchange carrier in the northeastern United States that deployed cutting-edge data services. Dave attended Cedarville University and holds a US patent related to network management.

About Fujitsu Network Communications Inc.

Fujitsu Network Communications Inc., headquartered in Richardson, Texas, is an innovator in Connection-Oriented Ethernet and optical transport technologies. A market leader in packet optical networking solutions, WDM and SONET, Fujitsu offers a broad portfolio of multivendor network services as well as end-to-end solutions for design, implementation, migration, support and management of optical networks. For seven consecutive years Fujitsu has been named the U.S. photonics patent leader, and is the only major optical networking vendor to manufacture its own equipment in North America. Fujitsu has over 500,000 network elements deployed by major North American carriers across the US, Canada, Europe, and Asia. For more information, please see: http://us.fujitsu.com/telecom



Wednesday, February 26, 2014

Blueprint Column: Impending ITU G.8273.2 to Simplify LTE Planning

By Martin Nuss, Vitesse Semiconductor

Fourth-generation wireless services based on long-term evolution (LTE) have new timing and synchronization requirements that will drive new capabilities in the network elements underlying a call or data session. For certain types of LTE networks, there is a maximum time error limit between adjacent cellsites of no more than 500 nanoseconds.

To enable network operators to meet the time error requirement in a predictable fashion, the International Telecommunication Union is set to ratify the ITU-T G.8273.2 standard, which sets stringent time error limits for network elements. By using equipment meeting this standard, network operators will be able to design networks that predictably comply with the 500-nanosecond maximum time error between cellsites.

In this article, we look at the factors driving timing and synchronization requirements in LTE and LTE-Advanced networks and how the new G.8273.2 standard will help network operators in meeting those requirements.

Types of Synchronization

Telecom networks rely on two basic types of synchronization. These include:
Frequency synchronization
Time-of-day synchronization, which includes phase synchronization

Different types of LTE require different types of synchronization. Frequency division duplexed LTE (FDD-LTE), the technology that was used in some of the earliest LTE deployments and continues to be deployed today, uses paired spectrum. One spectrum band is used for upstream traffic and the other is used for downstream traffic. Frequency synchronization is important for this type of LTE, but time-of-day synchronization isn’t required.

Time-division duplexed LTE (TD-LTE) does not require paired spectrum, but instead separates upstream and downstream traffic by timeslot. This saves on spectrum licensing costs and also allows bandwidth to be allocated more flexibly between the upstream and downstream directions, which could be valuable for video. Time-of-day synchronization is critical for this type of LTE. TD-LTE deployments have recently become more commonplace than they were initially, and the technology is expected to be widely deployed.

LTE-Advanced (LTE-A) is an upgrade to either TD-LTE or FDD-LTE that delivers greater bandwidth. It works by pooling multiple frequency bands and by enabling multiple base stations to simultaneously send data to a handset. Accordingly, adjacent base stations or small cells have to be aligned with one another – a requirement that drives the need for time-of-day synchronization. A few carriers, such as SK Telecom, Optus, and Unitel, have already made LTE-A deployments, and those numbers are expected to grow quickly.

Traditionally, wireless networks have relied on global positioning system (GPS) equipment installed at cell towers to provide synchronization. GPS can provide both frequency synchronization and time-of-day synchronization. But that approach becomes impractical as networks rely more heavily on femtocells and picocells to increase both network coverage (for example, indoors) and capacity. These devices may not be mounted high enough to have a line of sight to GPS satellites – and even if they could be, GPS capability would make them too costly. There is also increasing concern about the susceptibility of GPS to jamming and spoofing, and countries outside of the US are reluctant to rely exclusively on the US-operated GPS satellite system for their timing needs.

IEEE 1588

A more cost-effective alternative to GPS is to deploy equipment meeting timing and synchronization standards created by the Institute of Electrical and Electronics Engineers (IEEE).

The IEEE 1588 standards define a synchronization protocol known as precision time protocol (PTP) that originally was created for the test and automation industry. IEEE 1588 uses sync packets that are time stamped by a master clock and which traverse the network until they get to an ordinary clock, which uses the time stamps to produce a physical clock signal.

The 2008 version of the 1588 standard, also known as 1588v2, defines how PTP can be used to support frequency and time-of-day synchronization. For frequency delivery this can be a unidirectional flow. For time-of-day synchronization, a two-way mechanism is required.
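For readers who want the arithmetic behind that two-way mechanism: the standard 1588 exchange yields four timestamps, from which offset and mean path delay follow directly under the usual assumption of a symmetric path. The sketch below uses made-up nanosecond values.

```python
# The standard 1588 two-way exchange yields four timestamps:
#   t1: Sync sent by master      t2: Sync received by slave
#   t3: Delay_Req sent by slave  t4: Delay_Req received by master
# Assuming a symmetric path, offset and mean path delay follow directly.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0          # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Example with timestamps in nanoseconds (values are made up):
t1, t2, t3, t4 = 1_000_000, 1_000_350, 1_000_900, 1_001_150
print(ptp_offset_and_delay(t1, t2, t3, t4))          # (50.0, 300.0) ns
```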

Equipment developers must look outside the 1588 standards for details of how synchronization should be implemented to meet the needs of specific industries. The ITU is responsible for creating those specifications for the telecom industry.

How the telecom industry should implement frequency synchronization is described in the ITU-T G.826x series of standards, which were ratified previously. The ITU-T G.8273.2 standard for time-of-day synchronization was developed later and is expected to be ratified next month (March 2014).

Included in ITU-T G.8273.2 are stringent requirements for time error. This is an important aspect of the standard because wireless networks can’t tolerate time error greater than 500 nanoseconds between adjacent cellsites.

ITU-T G.8273.2 specifies standards for two different classes of equipment. These include:
Class A: maximum time error of 50 ns
Class B: maximum time error of 20 ns

Both constant and dynamic time errors contribute to the total time error of each network element, with both adding linearly after applying a 0.1 Hz low-pass filter. Network operators that use equipment complying with the G.8273.2 standard for all of the elements underlying a connection between two cell sites can simply add the maximum time errors of those elements to determine whether the connection will have an acceptable level of time error. Previously, network operators had no way of determining time error until after equipment was deployed in the network, yet operators need predictability in their network planning.
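As a back-of-the-envelope illustration of that budgeting exercise, the sketch below sums the per-element maximum time errors quoted above (Class A = 50 ns, Class B = 20 ns) along a hypothetical chain and checks the total against the 500 ns budget; real planning would also account for the grandmaster and link asymmetries.

```python
# Back-of-the-envelope budget check: sum the per-element maximum time
# errors listed in the article (Class A = 50 ns, Class B = 20 ns) along a
# chain between two cell sites and compare against the 500 ns limit.
MAX_TE_NS = {"A": 50, "B": 20}
BUDGET_NS = 500

def chain_time_error(element_classes):
    return sum(MAX_TE_NS[c] for c in element_classes)

chain = ["A", "A", "B", "B", "B", "A"]          # six boundary clocks in the path
total = chain_time_error(chain)
print(f"worst-case time error: {total} ns, within budget: {total <= BUDGET_NS}")
```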

Conforming to the new standard will be especially important as network operators rely more heavily on heterogeneous networks, also known as HetNets, which rely on a mixture of fiber and microwave devices, including small cells and femtocells. Equipment underlying HetNets is likely to come from multiple vendors, complicating the process of devising a solution in the event that the path between adjacent cell sites has an unacceptable time error level.

What Network Operators Should Do Now

Some equipment manufacturers already have begun shipping equipment capable of supporting ITU-T G.8273.2, as G.8273.2-compliant components are already available. As network operators make equipment decisions for the HetNets they are just beginning to deploy, they should take care to look for G.8273.2-compliant products.

As for equipment already deployed in wireless networks, over 1 million base stations currently support 1588 for frequency synchronization and can be upgraded to support time-of-day synchronization with a software or firmware upgrade.

Some previously deployed switches and routers may support 1588, while others may not. While 1588 may be supported by most switches and routers deployed within the last few years, it is unlikely that they meet the new ITU profiles for time and phase delivery. IEEE 1588 boundary or transparent clocks with distributed time stamping directly at the PHY level will be required to meet these new profiles, and only a few routers and switches have this capability today. Depending on where in the network a switch or router is installed, network operators may be able to continue to use GPS to provide synchronization, gradually upgrading routers by using 1588-compliant line cards for all new line card installations and swapping out non-compliant line cards where appropriate.

Wireless network operators should check with small cell, femtocell and switch and router vendors about support for 1588v2 and G.8273.2 if they haven’t already.

About the Author

Martin Nuss joined Vitesse in November 2007 and is the vice president of technology and strategy and the chief technology officer at Vitesse Semiconductor. With more than 20 years of technical and management experience, Mr. Nuss is a Fellow of the Optical Society of America and a member of IEEE. Mr. Nuss holds a doctorate in applied physics from the Technical University in Munich, Germany. He can be reached at nuss@vitesse.com.

About Vitesse
Vitesse (Nasdaq: VTSS) designs a diverse portfolio of high-performance semiconductor solutions for Carrier and Enterprise networks worldwide. Vitesse products enable the fastest-growing network infrastructure markets including Mobile Access/IP Edge, Cloud Computing and SMB/SME Enterprise Networking. Visit www.vitesse.com or follow us on Twitter @VitesseSemi.

Tuesday, February 25, 2014

Blueprint Column: Five Big Themes at RSA 2014

by John Trobough, president at Narus

Now that RSA is underway, I wanted to take some time to cover five key themes being talked about at the event.

Machine Learning

Machine Learning is at the top of my list.  As the frequency of attacks, the sophistication of the intrusions, and the number of new networked applications increase, analysts cannot keep up with the volume, velocity, and variety of data.

The use of machine learning is gaining critical mass fueled by the bring your own device (BYOD) and Internet of Things (IOT) trends. This technology can crunch large data sets, adapt with experience, and quickly generate insight or derive meaning from the data. With machine assistance, analysts spend less time on data-processing duties, and focus more time on problem solving and defense bolstering activities. Machine learning brings new insights to network activity and malicious behavior, and is accelerating the time to resolve cyber threats.
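As a minimal illustration of the kind of unsupervised analysis being described (and not any vendor's product), the sketch below flags anomalous flows in synthetic telemetry using scikit-learn's IsolationForest; the feature columns and data are invented.

```python
# Minimal illustration only: unsupervised anomaly detection over per-flow
# features using scikit-learn's IsolationForest. Synthetic data stands in
# for real network telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: bytes transferred, packets, distinct destination ports
normal_flows = rng.normal(loc=[50_000, 40, 3], scale=[10_000, 10, 1], size=(1000, 3))
odd_flows = np.array([[5_000_000, 2_000, 80],     # exfiltration-like flow
                      [200, 1, 60]])              # port-scan-like flow
flows = np.vstack([normal_flows, odd_flows])

model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = model.predict(flows)                      # -1 = anomaly, 1 = normal
print("flagged flow indices:", np.where(labels == -1)[0])
```

The value for an analyst is triage: the model surfaces a handful of suspicious flows out of thousands, so human attention goes to problem solving rather than raw data processing.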

Data Visualization

The historic and rudimentary approach of taking tabular data and presenting it in colorful pie charts and graphs does not deliver insight. According to ESG research, 44 percent of organizations classify their current security data collection as “big data” and another 44 percent expect to classify their data collection and analysis as “big data” within the next two years.  With the explosive growth of volume and variety of data, analysts are experiencing cognitive overload. Their brains cannot process information fast enough. The challenge is to display insight and conclusions from data analysis in a clear way to facilitate rapid response.

Symbolic representations, like visual threat fingerprints, will be required for quick interpretation and comparison before diving into details. Data visualization design will need to incorporate best practices including:
Context-aware controls that appear only when required
Seamless integration, providing flow from one task to the next without assumed knowledge about the source of the data
Human factor principles, to display data, analysis, and controls in ways that enhance clarity and usability.

Context

According to Gartner, the use of context-aware security helps security technologies become more accurate and enhance usability and adoption in response to cyber threats.

If we define context as the information required to answer the questions “what,” “how” and “why,” context will provide the understanding needed to better assess the threats and resolve them faster.

The advancements made in data visualization enable organizations to determine when something isn’t right on their network. Context takes this further by allowing organizations to determine what their network activity is supposed to look like and how data visualization and context fit together.

Internet of Things (IoT)

Connected devices have become a hot and desirable trend. ABI Research estimates there will be more than 30 billion wirelessly connected devices by 2020. This machine-to-machine (M2M) conversation offers new opportunities for innovation, generates a plethora of new data streams and also creates new threat vectors.

Today, there is a desire for deeper connectivity in the workplace and home. For the business, IoT provides a range of benefits, from increasing operational efficiency to better managing resources and expanding existing business models.  As for the consumer, IoT assists with safety, health, everyday planning and more.

However, all this connectivity compounds security challenges. It’s one thing for your refrigerator to tell you you’re out of milk, but it’s quite another for hackers to use refrigerators to access your network and steal your data or initiate attacks on other networks.

Consumerization of Security

It’s no longer just about the impact that weak security has on the enterprise but also how it is affecting consumers. More and more people are producing and storing their own data and creating their own private clouds, but are still in the dark about how to properly protect it.

According to cybersecurity expert Peter W. Singer, it’s not just weak passwords, such as “password” and “123456,” that cybercriminals are after. Usually, cybercriminals are after the ability to change a password using information acquired from public records (e.g., mother’s maiden name). With sophisticated threats looming all over the web, it’s only a matter of time before most consumers are faced with a stiff test in protecting their digital assets.

As consumers become more conscious of security and privacy issues, they will want to know how to prevent their identity from being stolen with just a click of a mouse. Many consumers will turn to the vendors, including retail and banking, for answers, and many vendors will turn to security providers.

Our Opportunities and Challenges

The security landscape faces a future of tremendous growth. More than ever, security is underlying all business practices. In a digital economy where connected devices are everything, security is critical and cannot be an afterthought. Security is not something that you layer on. Instead we should assume we will face a threat and be prepared to respond. While there will be many conversations happening at RSA on a multitude of other security topics, you can be sure these five themes will be heard loud and clear.

About the Author

John Trobough is president of Narus, Inc., a subsidiary of The Boeing Company (NYSE: BA). Trobough previously was president of Teleca USA, a leading supplier of software services to the mobile device communications industry and one of the largest global Android commercialization partners in the Open Handset Alliance (OHA). He also held executive positions at Openwave Systems, Sylantro Systems, AT&T and Qwest Communications.
About the Company


Narus, a wholly owned subsidiary of The Boeing Company (NYSE:BA), is a pioneer in cybersecurity data analytics. The company's patented advanced analytics help enterprises, carriers and government customers proactively identify and accelerate the resolution of cyber threats. Using incisive intelligence culled from visual interactive and underlying data analytics, Narus nSystem identifies, predicts and characterizes the most advanced security threats, giving executives the visibility and context they need to make the right security decisions, right now, by letting them know what’s happening, why, and what to do about it. And because Narus solutions are scalable and deployable to any network configuration or business process, Narus boosts the ROI from existing IT investments. Narus is a U.S.-based company, incorporated in Delaware and headquartered in Sunnyvale, Calif. (U.S.A.), with regional offices around the world.

Blueprint Column: Making 5G A Reality

By Alan Carlton, Senior Director Technology Planning for InterDigital

By now we’ve all heard many conversations around 5G, but it seems that everyone is pretty much echoing the same thing—it won’t be here until 2025ish. And I agree. But it also seems that no one is really addressing how it will be developed. What should we expect in the next decade? What needs to be done in order for 5G to be a reality? And which companies will set themselves apart from others as leaders in the space?  


I don’t think the future just suddenly happens like turning a corner and magically a next generation appears. There are always signs and trends along the way that provide directional indicators as to how the future will likely take shape. 5G will be no different than previous generations whose genesis was seeded in societal challenges and emerging technologies often conceived or identified decades earlier. 

5G wireless will be driven by more efficient network architectures to support an internet of everything, smarter and new approaches to spectrum usage, energy-centric designs and more intelligent strategies applied to the handling of content based upon context and user behaviors. From this perspective, technologies and trends like the Cloud, SDN, NFV, CDN (in the context of a greater move to Information Centric Networking), Cognitive Radio and Millimeter Wave all represent interesting first steps on the roadmap to 5G.

5G Requirements and Standards

The requirements of what makes a network 5G are still being discussed; however, the best first stab at such requirements is reflected in the good work of the 5GPPP (in Horizon 2020). Some of the requirements that have been suggested thus far include:

  • Providing 1000 times higher capacity and more varied rich services compared to 2010
  • Saving 90 percent energy per service provided
  • Orders of magnitude reductions in latency to support new applications
  • Reducing service creation time from 90 hours to 90 minutes
  • Secure, reliable and dependable: perceived zero downtime for services
  • User controlled privacy

But besides requirements, developing a standardization process for 5G will also have a significant impact in making 5G a reality. While the process has not yet begun, it is very reasonable to say that as an industry we are at the beginning of what might be described as a consensus building phase.

If we reflect on wireless history, seminal moments may mark where the next “G” began. The first GSM networks rolled out in the early 1990s, but their origins can be traced back as far as 1981 (and possibly earlier) to the formation of Groupe Spécial Mobile by CEPT. 3G and 4G share a similar history, where the lead time between conceptualization and realization has been roughly consistent at the 10-year mark. This makes the formation of 5G-focused industry and academic efforts such as the 5GPPP (in Horizon 2020) and the 5GIC (at the University of Surrey) in 2013/14 particularly interesting.

Assuming history repeats itself, these “events” may foretell when we might realistically expect to see 5G standards and, later, deployed 5G systems.

Components of 5G Technology

5G will bring profound changes to both the network and air interface components of the current wireless systems architecture. On the air interface we see three key tracks:

  • The first track might be called the spectrum sharing and energy efficiency track wherein a new, more sophisticated mechanism of dynamically sharing spectrum between players emerges. Within this new system paradigm and with the proliferation of IoT devices and services, it is quite reasonable to discuss new and more suitable waveforms. 
  • A second track that we see is the move to the leveraging of higher frequencies, so called mmW applications in the 60GHz bands and above. If 4G was the era of discussing the offloading of Cellular to WiFi, 5G may well be the time when we talk of offloading WiFi to mmW in new small cell and dynamic backhaul designs. 
  • A final air interface track that perhaps bridges both air interface and network might be called practical cross layer design. Context and sensor fusion are key emerging topics today and I believe that enormous performance improvements can be realized through tighter integration of this myriad of information with the operation of the protocols on the air interface. 

While truly infinite bandwidth to the end user may remain out of reach even in the 5G timeframe, through these mechanisms it may be possible to deliver a very real perception of infinite bandwidth to the user. By way of example, some R&D labs today have developed a technology called user adaptive video. This technology selectively chooses the best video streams to deliver to an end user based upon user behavior in front of the viewing screen. With this technology, bandwidth utilization has improved by 80 percent without any detectable change in the quality of experience perceived by the end user.

5G’s Impact on the Network

5G will be shaped by a mash-up (and evolution) of three key emerging technologies: Software Defined Networking, Network Function Virtualization and ever-deeper content caching in the network, as exemplified by the slow roll of CDN technology into GGSN equipment today (i.e. the edge of the access network!). This trend will continue deeper into the radio access network and, in conjunction with the other elements, create a perfect storm where an overhaul of the IP network becomes possible. Information Centric Networking is an approach that has been incubating in academia for many years, and its time may now be right within these shifting sands.

Overall, the network will flatten further, and a battle over whether intelligence resides in the cloud or at the network edges will play out, with the result likely being a compromise between the two. Device-to-device communications in a fully meshed virtual access resource fabric will become commonplace within this vision. The future may well be as much about the crowd as the cloud. If the cloud is about big data, then the crowd will be about small data, and the winners may well be the players who first recognize the value that lies here. Services in this new network will change. A compromise will be struck between the OTT and Carrier worlds, and any distinction between the two will disappear. Perhaps more than anything else, 5G must deliver in this key respect.

Benefits and Challenges of 5G

Even the most conservative traffic forecast projections through 2020 will challenge the basic capabilities and spectrum allocations of LTE-A and current-generation WiFi. Couple this with the recognition that energy requirements in wireless networks will spiral at the same rate as the traffic projections, add the chaos of the emergence of 50 or 100 billion devices - the so-called Internet of Everything - all connected to a common infrastructure, and the value of exploring a 5th generation should quickly become apparent.

The benefits of 5G at the highest level will simply be the sustaining of the wireless vision for our connected societies and economies in a cost effective and energy sustainable manner into the next decade and beyond.

However, 5G will likely roll out into a world of considerably changed business models compared to its predecessor generations, and this raises perhaps the greatest uncertainty and challenge. What will these business models look like? It is clear that today’s model, where Carriers finance huge infrastructure investments but reap less of the end-customer rewards, is unsustainable over the longer term. Some level of consolidation will inevitably happen, but 5G will also have to provide a solution for a more equitable sharing of infrastructure investment costs. Just how these new business models take shape, and how this new thinking might drive technological development, is perhaps the greatest uncertainty and challenge for 5G development.

While the conversations around 5G continue to grow, there is still a long way to go before reaching full-scale deployment. And while we may be looking farther down the line, the groundwork is already being laid: companies are starting research and development in areas that might be considered foundational in helping 5G prevail. WiFi in white space is an early embodiment of a new, more efficient spectrum utilization approach that is highly likely to be adopted in a more mainstream manner in 5G. More than this, companies are also exploring new waveforms (the proverbial four-letter acronyms that often characterize a technology generation) that outperform LTE’s OFDM in energy efficiency, in operation within emerging dynamic spectrum sharing paradigms, and in application to the challenges that the internet of things will bring.


About the Author 

Alan Carlton is the senior director of technology planning for InterDigital where he is responsible for the vision, technology roadmap and strategic planning in the areas of mobile devices, networking technologies, applications & software services. One of his primary focus areas is 5G technology research and development. Alan has over 20 years of experience in the wireless industry.

Thursday, January 9, 2014

Blueprint: Optimizing SSDs with Software Defined Flash Requires a Flexible Processor Architecture

By Rahul Advani, Director of Flash Products, Enterprise Storage Division, PMC

With the rise of big data applications, such as in-memory analytics and database processing where performance is a key consideration, enterprise Solid-State Drive (SSD) use is growing rapidly. IDC forecasts the enterprise SSD segment to be a $5.5 billion market by 2015 [1]. In many cases, SSDs are used as the highest tier of a multi-tier storage system, but there is also a trend towards all-SSD storage arrays as price-performance metrics, including dollar per IOP ($/IOP) and dollar per workload ($/workload), make them an attractive option.

Flash-based SSDs are not only growing as a percentage of all storage in the enterprise, but they are also almost always the critical storage component to ensure a superior end-user experience using caching or tiering of storage.  The one constant constraint to the further use of NAND-based SSDs is cost, so it makes sense that the SSD industry is focused on technology re-use as a means to deliver cost-effective solutions that meet customers’ needs and increase adoption.

If you take the Serial Attached SCSI (SAS) market as an example, there are three distinct SSD usage models, commonly measured in Random Fills Per Day (RFPD) over 5 years - that is, filling an entire drive a given number of times every day for 5 years. There are read-intensive workloads at 1-3 RFPD, mixed workloads at 5-10 RFPD and write-intensive workloads at 20+ RFPD. Furthermore, different customer bases, such as Enterprise and Hyperscale datacenters, have different requirements for application optimizations and for the scale at which SSDs are used in their infrastructure. These differences typically show up in the number of years of service required, performance, power and sensitivity to corner cases in validation. The dilemma for SSD makers is how to meet these disparate needs while still offering cost-effective solutions to end users.
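As a rough illustration of what those RFPD ratings imply, the sketch below converts them into total terabytes written over five years for a hypothetical 800 GB drive; the capacity and the exact RFPD points chosen are illustrative only.

```python
# Simple arithmetic: what an RFPD (random fills per day) rating implies in
# total writes over five years, for a hypothetical 800 GB drive.
def total_writes_tb(capacity_gb, rfpd, years=5):
    return capacity_gb * rfpd * 365 * years / 1000.0   # in TB

for workload, rfpd in [("read-intensive", 3), ("mixed", 10), ("write-intensive", 25)]:
    print(f"{workload:16s} {rfpd:3d} RFPD -> {total_writes_tb(800, rfpd):>9,.0f} TB written")
```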

In enterprise applications, software defined storage has many different definitions and interpretations, from virtualized pools of storage, to storage as a service.  For this article, we will stick to the application of software and firmware in flash-based storage SSDs to help address the varied applications from cold storage to high performance SSDs and caching cost effectively. There are a few key reasons why the industry prefers this approach:
  1. As the risk and cost associated with controller developments have risen, the concept of using software to generate optimizations is not only becoming popular, it’s a necessity.  Controller developments typically amount to several tens of millions of dollars for the silicon alone, and they often require several revisions to the silicon, which adds to the cost and risk of errors.
  2. The personnel skillset required for high-speed design and specific protocol optimizations (SAS or NVMe) is not easy to find. Thus, software-defined flash, using firmware that has traditionally been deployed to address bugs found in the silicon, is increasingly being used to optimize solutions for different usage models in the industry. For example, the firmware and configuration optimizations for PMC’s SAS flash controller described below cost around one-tenth of a full silicon development, and those savings are reflected in the final product cost.
  3. Product validation costs can also be substantial and cycles long for enterprise SSDs, so time-to-market solutions also leverage silicon and firmware re-use as extensively as feasible.

Supporting these disparate requirements, which span cold storage to high-performance SSDs for database applications, cost-effectively requires a well-planned, flexible silicon architecture that allows for software-defined solutions. These solutions need to support software optimizations based around (to name a few):

Different densities and NAND over-provisioning levels
Different types of NAND (SLC/MLC/TLC) at different nodes
Different power envelopes (9W and 11W typical for SAS, 25W for PCIe)
Different amounts of DRAM
Support for both Toggle and ONFI, in order to maintain flexibility of NAND use

The table below shows the many different configurations that PMC’s 12G SAS flash processor supports:



Using a flexibly architected controller, you can modify features including power, flash density, DRAM density, flash type and host interface bandwidth for purpose-built designs based on the same device. This allows you to span the gamut from cold storage (cost-effective but lower performance) to a caching adaptor (premium memory usage and higher performance) through different choices in firmware and memory. The key is that firmware and hardware be architected flexibly. Here are three common design challenges that can be solved with software-defined flash and a flexible SSD processor:

  • Protocol communication between the flash devices:  Not only does NAND from different vendors (ONFI and toggle protocols) differ, but even within each of these vendor’s offerings, there can be changes to the protocol.  Examples are changing from five to six bytes of addressing, or adding prefix commands to normal commands.  Having the protocol done by firmware allows the flexibility to adapt to these changes.  Additionally, having a firmware-defined protocol allows flash vendors to design in special access abilities.
  • Flash has inconsistent rules for order of programming and reading: A firmware-based solution can adapt to variable rules and use different variations of flash, even newer flash that might not have been available while developing the hardware.  By having both the low-level protocol handling, as well as control of the programming and reading all in firmware, it allows for a solution that is flexible enough to use many types and variations of flash.
  • Fine-tuning algorithms/product differentiation: Moving up to the higher level algorithms, like garbage collection and wear leveling, there are many intricacies in flash. Controlling everything from the low level up to these algorithms in firmware allows for fine-tuning of these higher level algorithms to work best with the different types of flash.  This takes advantage of the differences flash vendors put into their product so they can be best leveraged for diverse applications.
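As a toy illustration of the kind of firmware-tunable policy described in the last point, the sketch below shows a wear-leveling choice that simply allocates the least-worn free block; real controller firmware weighs far more factors, and the block names and counts here are invented.

```python
# Toy illustration of a firmware-tunable policy: static wear leveling that
# always allocates the free block with the fewest erases. Real controller
# firmware weighs many more factors (retention, read disturb, block health).
def pick_block(free_blocks):
    """free_blocks: {block_id: erase_count}; returns the least-worn block."""
    return min(free_blocks, key=free_blocks.get)

free_blocks = {"blk_12": 1450, "blk_87": 310, "blk_203": 990}
victim = pick_block(free_blocks)
print(f"allocate {victim} (erase count {free_blocks[victim]})")
```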

A flexible architecture that can support software-defined flash optimizations is the key to supporting many different usage models, types of NAND and configurations. It also helps reduce cost, which will accelerate deployment of NAND-based SSDs and ultimately enhance the end-user experience.

Source: 1. IDC Worldwide Solid State Drive 2013-2017 Forecast Update, doc #244353, November 2013.

About the Author

Rahul Advani has served as Director of Flash Products for PMC’s Enterprise Storage Division since July 2012. Prior to joining PMC, he was director of Enterprise Marketing at Micron Technology, director of Technology Planning at Intel, and a product manager with Silicon Graphics. He holds a BS in Electrical Engineering from Cornell University and he received his PhD in Engineering and management training from the Massachusetts Institute of Technology.

About PMC

PMC® (Nasdaq: PMCS) is the semiconductor innovator transforming networks that connect, move and store big data. Building on a track record of technology leadership, the company is driving innovation across storage, optical and mobile networks. PMC’s highly integrated solutions increase performance and enable next-generation services to accelerate the network transformation.

Tuesday, December 17, 2013

CTO Viewpoint: Top Predictions for 2014

By Martin Nuss, Vitesse Semiconductor

As 2013 draws to a close, it’s time to ponder what’s next. We know connections are growing, as previously unconnected devices are now joining smart phones and tablets in the network, but how will they be networked? Furthermore, how will networks handle these additional connections, which are only going to grow faster in 2014? And lastly, how will all of these links be secured? Many advanced technologies have been developed for these exact questions. Here’s what I see coming to the forefront in 2014.

The Internet of Things: The Next All-Ethernet IP Network

Today’s world is defined by networking – public, private, cloud, industrial, you name it. Eventually everything will be connected, and mostly connected wirelessly. According to Morgan Stanley projections, 75 billion devices will be connected to the Internet of Things (IoT) by 2020. Clearly all these devices will need to be networked, and must be securely accessible anywhere, anytime.

Proprietary communications and networking protocols have long dominated networking within Industrial applications. With higher bandwidth and increased networking demands in Industrial process control, Smart-Grid energy distribution, transportation, and automotive applications, Industrial networks are transitioning to standards-based Ethernet networking.

Networks within the broad-based Industrial applications realm will need many of the same capabilities developed for Carrier Ethernet, such as resiliency, high availability, accurate time synchronization, low power, security, and cloud connectivity. In 2014, we believe IoT will be the next network to move entirely to Ethernet-IP, following the path of Carrier Ethernet. We also believe security, timing, reliability and deterministic behavior will become important requirements for these connected networks.

Network Security Sets Sights On Authentication, Authorization, Accounting (AAA) and Encryption

There will be more than 10 billion mobile devices/connections by 2017, including more than 1.7 billion M2M connections, according to Cisco’s most recent Visual Networking Index projections. As the number of network connections increase, so do the vulnerabilities. Anything with an IP address is theoretically hackable, and networking these devices without physical security heightens risk.

Security has long been an important issue, and the continued strong growth in the number of mobile Internet connections will bring more challenges in 2014. Operators will need to rely on the most advanced technologies available. New mobile devices with bandwidth-hungry applications, and the small cell networks needed to support them, exponentially multiply the number of network elements required in mobile networks. Long gone are the days of network equipment residing solely in physically secure locations like a central office or a macro base station. The network edge is particularly vulnerable because it is part of the Carrier network, but not physically secure. New types of access points directly exposed to users pose the obvious security concern. The BYOD trend introduces a new layer of vulnerable access points for enterprises to protect. Small cells are also particularly susceptible to hackers, as they are often installed outdoors at street level or indoors in easy-to-reach locations. Strong encryption of these last mile links can provide the necessary confidentiality of data. Authentication, authorization, and the corresponding accounting trails will ensure both the users and the equipment remain uncompromised.

In 2014, we expect that encryption and AAA will become key topics as Carrier equipment migrates to lamp posts, utility poles, and traffic signals. Encryption directly at the L2 Ethernet layer makes the most sense, especially as service providers offer more Carrier Ethernet Layer 2 (L2) VPN services. Fortunately, new MACsec technologies make it a viable option for wired and wireless WAN security.

SDN Looks Promising, But Carriers’ 2014 Focus Will Be On NFV

Software Defined Networking (SDN) and Network Function Virtualization (NFV) are widely discussed, but realization in Carriers’ networks is still some time away. Unlike datacenters, where SDN can be rolled out relatively easily, Carriers must modernize their complex operational structures before implementing SDN.

SDN’s biggest potential benefit to Carrier networks is its ability to create multiple, virtual private networks on top of a single physical network, which distributes costs by allowing multiple customers and service providers to securely share the same network. However, the entire network needs to support SDN in order to do that. On the other hand, NFV is about testing and deploying new services at the IP Edge faster and with lower CapEx. How? It’s made possible by creating the service in software, rather than with dedicated hardware. As long as the equipment at the network edge is NFV-ready, Carriers can create new services in centralized and virtualized servers. This captures Carriers’ imagination, since NFV promises a faster path to revenue with less risk and investment required. One of the first NFV applications we will see is Deep Packet Inspection (DPI). Because SDN requires spending money in order to save money, expect to see more Carrier attention to NFV in 2014.

4G RAN Sharing Becomes Widespread, Later Followed by 5G

Many see 5G as the next big thing, but beyond ‘more bandwidth’ little is defined, and the business drivers aren’t as clear as they were for 4G/LTE. We anticipate 5G will not fully materialize until 2020. As with previous generations, operators will need to upgrade their networks for its deployment, and this might provide an opportunity to unify fixed, mobile, and nomadic network access.

In 2014, expect RAN sharing to become much more commonplace, with the financially strongest MNOs (Mobile Network Operators) installing the RAN infrastructure and leasing capacity back to other wireless service providers. This will allow participating operators to trade off CapEx and OpEx considerations. SDN (Software Defined Networking) will play a major role in slicing the RAN as needed to partition the shared infrastructure, while also virtualizing many aspects of the RAN.

About the Author

Martin Nuss, Ph.D. is Vice President, Technology and Strategy and Chief Technology Officer at Vitesse Semiconductor. Dr. Nuss has over 20 years of technical and management experience and is a recognized industry expert in timing and synchronization for communications networks. He serves on the board of directors of the Alliance for Telecommunications Industry Solutions (ATIS), is a Fellow of the Optical Society of America, and is a member of the IEEE. He holds a doctorate in applied physics from the Technical University of Munich, Germany.

About Vitesse

Vitesse (Nasdaq: VTSS) designs a diverse portfolio of high-performance semiconductor solutions for Carrier and Enterprise networks worldwide. Vitesse products enable the fastest-growing network infrastructure markets including Mobile Access/IP Edge, Cloud Computing and SMB/SME Enterprise Networking. Visit www.vitesse.com or follow us on Twitter @VitesseSemi.

Wednesday, October 23, 2013

Blueprint Column: Quality Counts When Managing Millions of Network Elements Worldwide

By Deepti Arora, Vice President of Quality and Executive Board Member, NSN


Complex network infrastructures, a shortage of engineers skilled in cutting-edge technologies, and demand for fast-paced service deployment, are making it increasingly appealing for network operators to tap into additional resources and talent through outsourced managed services. Yet, the moment an operator considers leveraging a global service delivery model, the issue of how to deliver quality becomes a concern.
“How do we ensure a consistent customer experience when service is delivered by people across so many time zones and organizations?” “How can we keep a lid on costs while managing so much complexity?” “How do we protect the privacy of our company’s operational and customer data with so many touch points across the globe?”
Best-in-class quality management is fundamental
When taking on these challenges, best-in-class quality management systems are important for achieving the outcomes operators and suppliers strive for. Adherence to global multi-site quality management (ISO 9001) ensures clear processes are defined and a regular rhythm of discipline is implemented for all employees. Environmental management systems (ISO 14001) are designed to help understand, manage and reduce environmental impact, and in many cases also lead to operational efficiencies. ISO 27001, an information security management system (ISMS) standard, addresses information systems with controls across 11 domains, such as information security policy, governance of information security, and physical and environmental security.
To raise the bar further, the QuEST (Quality Excellence for Suppliers of Telecommunications) Forum has created TL 9000 to define quality system requirements for the design, development, production, delivery, installation and maintenance of telecommunications products and services. Common processes and metrics within and between companies ensure consistency and provide a basis for benchmarking and continuous improvement.
Nokia Solutions and Networks (NSN) has made a strategic commitment to quality as a pillar of its transformation. Integral to these efforts is the commitment to Quality Management Systems to help drive improvement. This encompasses a customer-centric closed-loop approach to measuring quality and value, a rigorous focus on proactively preventing defects, and actively building quality competence and disciplines amongst employees and with suppliers. NSN is investing significant senior management attention and dedicated resources to raising the bar on quality end-to-end for operators and their subscribers.
Global delivery demands a high standard of quality management
In NSN’s primary Global Delivery Centers (GDCs) in Portugal and India, and in smaller hubs around the world, NSN supports more than 550 customers globally, including remote management of almost one million network elements and approximately 200 million subscribers annually. This represents a tremendous volume of data traffic, network operations data and subscriber information. Day-to-day performance management is a cornerstone of network operations’ business growth and efficiency. Relentless performance monitoring and the ability to take immediate action are imperative.
Quality at the delivery centers means implementing systems and processes that comply with the highest levels of accreditation and certification. Implementing such standards is a massive undertaking, involving the education and testing of every individual at the delivery center. Achieving a ‘zero non-conformity’ result in Bureau Veritas audits, as the GDC Portugal did recently, is an indicator that NSN team members have adopted the commitment to quality. Building awareness and training employees to adhere to processes and to protect information and related assets also goes a long way toward fostering customer confidence in network operations and service delivery. The certification has provided operators with another proof point that their networks and related information are in safe hands.
A common language for quality accelerates improvement
After introducing TL 9000, one NSN business line reduced problem reports by 82%. But accelerating alignment with operators has been one of the greatest benefits of TL 9000. “With one Asian operator, we were able to use a common TL 9000 metric to evaluate monthly alarms across different vendors. Together, we were able to implement changes that improved performance and reduced costs for both of our companies,” says Scott Schroepfer, Head of Quality for Small Cells/CDMA. A common language for quality with operators around the globe is allowing NSN to accelerate improvement actions and collaboration with its customers.
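As a toy illustration of what such a common metric enables (the figures are invented and this is not the formal TL 9000 measurement definition), normalizing monthly alarms per managed network element puts vendors of very different sizes on a comparable footing:

# Hypothetical illustration of the "common metric" idea: normalize monthly
# alarms per managed network element so different vendors can be compared on
# the same basis. Figures are invented; TL 9000 defines its own measurements.
monthly_alarms = {"vendor_a": 4200, "vendor_b": 1500, "vendor_c": 3100}
managed_elements = {"vendor_a": 12000, "vendor_b": 3000, "vendor_c": 10500}

alarm_rate = {
    vendor: monthly_alarms[vendor] / managed_elements[vendor]
    for vendor in monthly_alarms
}

for vendor, rate in sorted(alarm_rate.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{vendor}: {rate:.3f} alarms per element per month")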
Looking Forward:  Planning for Quality when tapping the Cloud
Beyond managed services, operators are increasingly exploring cloud-based technology offerings as a way to completely change their business model. The benefits are compelling: expanded on-demand network resources through virtualization, faster innovation cycles for top-line growth by leveraging a broader open ecosystem, and greater productivity and efficiency through automation.
But again, the issue of how to deliver quality becomes a real concern. Security, resiliency, availability, and stability can all be impacted negatively when managing and orchestrating across a myriad of virtual machines on different platforms, all in a multivendor environment. New complexities associated with the cloud paradigm will require the right set of tools and a commitment to plan for quality management from the start.
NSN is working closely on planning for the network quality requirements of cloud technology with major operators, with leading cloud stack vendors such as VMware Inc., and with industry forums such as the OpenStack Foundation, the ETSI Network Functions Virtualization (NFV) Industry Specification Group and the QuEST Forum. A series of proof-of-concept projects have provided the foundation for a viable telco cloud by demonstrating core network software running on top of virtualized network infrastructure. Further tests have shown end-to-end VoLTE deployment readiness in a telco cloud and have verified the automated deployment and elastic scaling of virtualized network elements, live migration of virtual machines from one server to another, and recovery from hardware failures.
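For readers who want a feel for what the live-migration test involves, the sketch below uses the libvirt Python bindings to move a running guest between two KVM hosts. The host URIs and domain name are hypothetical, and the actual proofs of concept do not necessarily use this tooling.

# Minimal sketch of VM live migration with the libvirt Python bindings.
# Host URIs and the domain name are hypothetical examples.
import libvirt

src = libvirt.open("qemu+ssh://host-a/system")     # source hypervisor
dst = libvirt.open("qemu+ssh://host-b/system")     # destination hypervisor

dom = src.lookupByName("vnf-mme-01")               # hypothetical VNF instance

# Live-migrate without interrupting the guest; shared storage is assumed here.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()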
The conclusion is this: when operators strive to scale and improve productivity through managed services or the cloud, quality cannot be an afterthought. The good news is that the tools, technologies, standards and expertise are increasingly available. Find out more about quality at NSN at http://nsn.com/about-us/company/quality. Learn about the QuEST Forum’s TL 9000 quality management system, built on ISO 9001 to improve supply chain effectiveness and efficiency, at http://www.questforum.org.
About the Author

With more than 25 years of international experience in the telecommunications industry, Deepti Arora is Vice President of Quality at Nokia Solutions and Networks. She has held various roles in quality, business operations, engineering, sales and general management, and has a reputation as a dynamic, results-oriented leader with a passion for customer focus and for challenging the status quo. Her strong technology and business expertise, along with her ability to build high-performing global teams, has made her a valued executive in driving organizational success.

Thursday, October 17, 2013

Blueprint Tutorial: SDN and NFV for Optimizing Carrier Networks

By Raghu Kondapalli, Director of Strategic Planning at LSI

The ongoing convergence of video and cloud-based applications, along with the exploding adoption of mobile devices and services, is having a profound impact on carrier networks. Carriers are under tremendous pressure to deploy new, value-added services to grow subscriber numbers and increase revenue per user, while simultaneously lowering capital and operational expenditures.

To help meet these challenges, some carriers are creating new services by more tightly integrating the traditionally separate data center and carrier networks. By extending the virtualization technologies that are already well established in data centers into the telecom network domain, overall network utilization and operational efficiency can be improved end-to-end, resulting in a substantially more versatile and cost-effective infrastructure.

This two-part article series explores the application of two virtualization techniques—software-defined networking (SDN) and network function virtualization (NFV)—to the emerging unified datacenter-carrier network infrastructure.

Drivers for virtualization of carrier networks in a unified datacenter-carrier network

In recent years, user expectations for “anywhere, anytime” access to business and entertainment applications and services are changing the service model needed by carrier network operators. For example, e-commerce applications are now adopting cloud technologies, as service providers continue incorporating new business applications into their service models. For entertainment, video streaming content now includes not only traditional movies and shows, but also user-created content and Internet video. The video delivery mechanism is evolving, as well, to include streaming onto a variety of fixed and mobile platforms. Feature-rich mobile devices now serve as e-commerce and entertainment platforms in addition to their traditional role as communication devices, fueling deployment of new applications, such as mobile TV, online gaming, Web 2.0 and personalized video.

Figures 1 and 2 show some pertinent trends affecting carrier networks. Worldwide services revenue is expected to reach $2.1 trillion in 2017, according to an Insight research report, while the global number of mobile subscribers is expected to reach 2.6 billion by 2016, according to Infonetics Research.



To remain profitable, carriers need to offer value-added services that increase the average revenue per user (ARPU), and to create these new services cost-effectively, they need to leverage the existing datacenter and network infrastructures. This is why the datacenters running these new services are becoming as critical as the networks delivering them when it comes to providing profitable services to subscribers.

Datacenter and carrier networks are quite different in their architectures and operational models, which can make unifying them potentially complex and costly. According to The Yankee Group, about 30 percent of the total operating expenditures (OpEx) of a service provider are due to network costs, as shown in Figure 3. To reduce OpEx and, over time, capital expenditures (CapEx), service providers are being pushed to find solutions that enable them to leverage a more unified datacenter-carrier network model as a means to optimize their network and improve overall resource utilization.

Virtualization of the network infrastructure is one strategy for achieving this cost-effectively. Virtualization is a proven technique that has been widely adopted in enterprise IT based on its ability to improve utilization and operational efficiency of datacenter server, storage and network resources. By extending the virtualization principles into the various segments of a carrier network, a unified datacenter-carrier network can be fully virtualized—end-to-end and top-to-bottom—making it far more scalable, adaptable and affordable than ever before.

Benefits of integrating datacenters into a carrier network

Leveraging the virtualized datacenter model to virtualize the carrier network has several benefits that can help address the challenges associated with a growing subscriber base and more demanding performance expectations, while simultaneously reducing CapEx and OpEx. The approach also enables carriers to seamlessly integrate new services for businesses and consumers, such as Software-as-a-Service (SaaS) or video acceleration. Google, Facebook and Amazon, for example, now use integrated datacenter models to store and analyze Big Data. Integration makes it possible to leverage datacenter virtualization architectures, such as multi-tenant compute or content delivery networks, to scale or deploy new services without requiring expensive hardware upgrades. Incorporating the datacenter model can also enable a carrier to centralize its business support system (BSS) and operations support system (OSS) stacks, doing away with distributed, heterogeneous network elements and consolidating those functions onto centralized servers. And by using commodity servers instead of proprietary network elements, carriers can further reduce both CapEx and OpEx.

Integrated datacenter-carrier virtualization technology trends

The benefits of virtualization derive from its ability to create a layer of abstraction above the physical resources. For example, the hypervisor software creates and manages multiple virtual machines (VMs) on a single physical server to improve overall utilization.
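A quick way to see this abstraction in practice is to ask the hypervisor what it is running. The sketch below, which assumes a local QEMU/KVM host and the libvirt Python bindings (an illustrative choice, not something prescribed by the article), lists the VMs sharing one physical server and the virtual CPUs they have been allocated.

# Minimal sketch: inspect the VMs a hypervisor is running on one physical
# server, using the libvirt Python bindings (assumes a local QEMU/KVM host).
import libvirt

conn = libvirt.open("qemu:///system")

total_vcpus = 0
for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    state, max_mem_kib, _, vcpus, _ = dom.info()
    total_vcpus += vcpus
    print(f"{dom.name()}: {vcpus} vCPUs, {max_mem_kib // 1024} MiB")

# Oversubscribing physical cores is one way overall utilization is improved.
print("vCPUs allocated:", total_vcpus, "physical CPUs:", conn.getInfo()[2])

conn.close()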

While the telecom industry has lagged behind the IT industry in virtualizing resources, most service providers are now aggressively working to adapt virtualization principles to their carrier networks. Network function virtualization (NFV), for example, is being developed by a collaboration of service providers as a standard means to decouple and virtualize carrier network functions from traditional network elements, and then distribute these functions across the network more cost-effectively. By enabling network functions to be consolidated onto VMs running on a homogenous hardware platform, NFV holds the potential to minimize both CapEx and OpEx in carrier networks.

Another trend in virtualized datacenters is the abstraction being made possible with software-defined networking, which is enabling datacenter networks to become more manageable and more open to innovation. SDN shifts the network paradigm by decoupling or abstracting the physical topology to present a logical or virtual view of the network. SDN technology is particularly applicable to carrier networks, which usually consist of disparate network segments based on heterogeneous hardware platforms.

Technical overview of network virtualization

Here is a brief overview of the two technologies currently being used in unified datacenter-carrier network infrastructures: SDN and NFV.

Software-Defined Networking

SDN is a network virtualization technique based on the logical separation and abstraction of both the control and data plane functions, as shown in Figure 4. Using SDN, network elements such as switches and routers can be implemented in software, virtualized, and executed anywhere in a network, including in the cloud.


SDN decouples the network functions from the underlying physical resources using OpenFlow®, the vendor-agnostic standard interface being developed by the Open Networking Foundation (ONF). With SDN, a network administrator can deploy a new network application by writing a program that simply manipulates the logical map for a “slice” of the network.
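A minimal sketch of such a program is shown below, written against the Ryu OpenFlow 1.3 controller framework; the choice of Ryu, and the VLAN 100 / port 2 slice definition, are assumptions for illustration rather than anything prescribed here. The application installs a single flow rule when a switch connects, steering that slice's traffic without touching the rest of the pipeline.

# A minimal sketch of "programming a slice" with an OpenFlow controller.
# Ryu and the VLAN-100/port-2 slice definition are illustrative assumptions.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class SliceSteering(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match frames tagged for the hypothetical slice (VLAN 100) and steer
        # them out port 2; all other traffic follows the existing pipeline.
        match = parser.OFPMatch(vlan_vid=(0x1000 | 100))
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))

Launched with ryu-manager against an OpenFlow 1.3 switch, an application like this manipulates only its own slice of the logical map, which is what makes incremental, service-by-service rollout practical.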

Because most carrier networks are implemented today with a mix of different platforms and protocols, SDN offers some substantial advantages in a unified datacenter-carrier network. It opens up the network to innovation. It makes it easier for network administrators to manage and control the network infrastructure. It reduces CapEx by facilitating the use of commodity servers and services, potentially by mixing and matching platforms from different vendors. In the datacenter, for example, network functions could be decoupled from network elements, such as line and control cards, and moved onto commodity servers. Compared to expensive proprietary networking solutions, commodity servers provide a far more affordable yet fully mature platform based on proven virtualization technologies and industry-standard processors and software.

To ensure robust security, always important in a carrier network, the OpenFlow architecture supports TLS-based authentication when the connections between switches and controllers are established, and operators can leverage this capability to augment existing security functions or add new ones. This is especially beneficial in carrier networks, where there is a need to support a variety of secure and non-secure applications, as well as third-party and user-defined APIs.

Network Function Virtualization

NFV is an initiative driven by network operators with the goal of reducing end-to-end network expenditures by applying virtualization techniques to telecom infrastructure. Like SDN, NFV decouples network functions from traditional network elements, such as switches, routers and appliances, enabling these task-based functions to be centralized or distributed on other (less expensive) network elements. With NFV, the various network functions are normally consolidated onto commodity servers, switches and storage systems to lower costs. Figure 5 illustrates a virtualized carrier network in which network functions, such as a mobility management entity (MME), run as VMs on a common hardware platform and an open-source hypervisor such as KVM.
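To ground this, the sketch below uses the libvirt Python bindings to register and boot a hypothetical VNF image (labelled as an MME here) as a KVM guest. The domain XML is heavily simplified and the image path and names are invented; a production NFV deployment would drive this through an orchestration layer rather than raw libvirt calls.

# Minimal sketch of bringing up a virtualized network function (e.g. an MME
# image) as a KVM guest via libvirt. The domain XML is heavily simplified and
# the image path and name are hypothetical.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>vnf-mme-01</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/mme.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'><source bridge='br0'/></interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(DOMAIN_XML)   # register the VNF with the hypervisor
dom.create()                       # boot it; scaling out means repeating this step
print(dom.name(), "running:", dom.isActive() == 1)
conn.close()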

NFV and SDN are complementary technologies that can be applied independently of each other. Or NFV can provide a foundation for SDN. By using an NFV foundation combined with SDN’s separation of the control and data planes, carrier network performance can be enhanced, its management can be simplified, and new services can be more easily deployed. 


***********************************

Raghu Kondapalli is director of technology focused on Strategic Planning and Solution Architecture for the Networking Solutions Group of LSI Corporation.

Kondapalli brings rich experience and deep knowledge of the cloud-based, service provider and enterprise networking businesses, specifically in packet processing, switching and SoC architectures.

Most recently he was a founder and CTO of cloud-based video services company Cloud Grapes Inc., where he was the chief architect for the cloud-based video-as-a-service solution.  Prior to Cloud Grapes, Kondapalli led technology and architecture teams at AppliedMicro, Marvell, Nokia and Nortel. Kondapalli has about 25 patent applications in process and has been a thought leader behind many technologies at the companies where he has worked.

Kondapalli received a bachelor’s degree in Electronics and Telecommunications from Osmania University in India and a master’s degree in Electrical Engineering from San Jose State University.
