
Monday, October 13, 2014

Blueprint: SDN's Impact on Data Center Power/Cooling Costs

by Jeff Klaus, General Manager of DCM Solutions, Intel

The growing interest in software-defined networking (SDN) is understandable. Compared to traditional static networking approaches, the inherent flexibility of SDN complements highly virtualized systems and environments that can expand or contract efficiently, in line with business needs. That said, flexibility is not the main driver behind SDN adoption. Early adopters and industry watchers cite cost as a primary motivation.

SDN certainly offers great potential for simplifying network configuration and management, and raising the overall level of automation. However, SDN will also introduce profound changes to the data center. Reconfiguring networks on the fly introduces fluid conditions within the data center.

How will the more dynamic infrastructures impact critical data center resources – power and cooling?

In the past, 20 to 40 percent of data center resources were typically idle at any given time and yet still drawing power and dissipating heat. As energy costs have risen over the years, data centers have had to pay more attention to this waste and look for ways to keep the utility bills within budget. For example, many data centers have bumped up the thermostat to save on cooling costs.

These types of easy fixes, however, quickly fall short in the data centers associated with highly dynamic infrastructures. As network configurations change, so do the workloads on the servers, and network optimization must therefore take into consideration the data center impact.

Modern energy management solutions equip data center managers to solve this problem. They make it possible to see the big picture for energy use in the data center, even in environments that are continuously changing.  Holistic in nature, the best-in-class solutions automate the real-time gathering of power levels throughout the data center as well as server inlet temperatures for fine-grained visibility of both energy and temperature. This information is provided by today’s data center equipment, and the energy management solutions make it possible to turn this information into cost-effective management practices.

The energy management solutions can also give IT intuitive, graphical views of both real-time and historical data. The visual maps make it easy to identify and understand the thermal zones and energy usage patterns for a row or group of racks within one or multiple data center sites.

Collecting and analyzing this information makes it possible to evolve very proactive practices for data center and infrastructure management. For example, hot spots can be identified early, before they damage equipment or disrupt services. Logged data can be used to optimize rack configurations and server provisioning in response to network changes or for capacity planning.
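
To make this concrete, here is a minimal sketch of how logged inlet-temperature data might be scanned for emerging hot spots. The rack names, readings and the 27-degree threshold are invented for illustration and are not tied to any particular energy management product.

    # Minimal sketch: flag potential hot spots from logged server inlet temperatures.
    # Rack names, readings, and the 27 C threshold are illustrative assumptions.

    INLET_LIMIT_C = 27.0  # example upper limit on acceptable inlet temperature

    # Hypothetical log: rack -> recent inlet-temperature samples (degrees C)
    inlet_log = {
        "rack-A1": [24.1, 24.8, 25.0, 24.6],
        "rack-A2": [26.9, 27.4, 27.8, 28.1],  # trending hot
        "rack-B1": [22.3, 22.5, 22.4, 22.6],
    }

    def hot_spots(log, limit):
        """Return racks whose average inlet temperature exceeds the limit."""
        flagged = {}
        for rack, samples in log.items():
            avg = sum(samples) / len(samples)
            if avg > limit:
                flagged[rack] = round(avg, 1)
        return flagged

    for rack, avg in hot_spots(inlet_log, INLET_LIMIT_C).items():
        print(f"{rack}: average inlet {avg} C exceeds {INLET_LIMIT_C} C - check airflow")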

Some of the same solutions that automate monitoring can also introduce control features. Server power capping can be introduced to ensure that any workload shifts do not result in harmful power spikes. Power thresholds make it possible to identify and adjust conditions to extend the life of the infrastructure.

To control server performance and quality of service, advanced energy management solutions also make it possible to balance power and server processor operating frequencies. The combination of power capping and frequency adjustments gives data center managers the ability to intelligently control and automate the allocation of server assets within a dynamic environment.
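
And here is an equally simple sketch of the kind of rack-level power-capping check such tools automate: if the rack's measured draw approaches its budget, caps are pushed down to its servers. The budget, readings, naive even-split policy and the set_power_cap() hook are all assumptions for illustration, not a vendor interface.

    # Minimal sketch of a rack-level power-capping check. Budgets, readings and
    # the set_power_cap() hook are illustrative assumptions, not the interface
    # of any specific energy management product.

    RACK_BUDGET_W = 8000   # hypothetical rack power budget (watts)
    CAP_TRIGGER = 0.95     # start capping at 95% of the budget

    # Hypothetical real-time power readings per server (watts)
    server_power = {"srv-01": 2100, "srv-02": 2050, "srv-03": 1980, "srv-04": 1900}

    def set_power_cap(server, cap_watts):
        # Placeholder for the call into a node manager / BMC that enforces the cap.
        print(f"capping {server} at {cap_watts} W")

    def enforce_rack_budget(readings, budget, trigger=CAP_TRIGGER):
        total = sum(readings.values())
        if total < trigger * budget:
            return  # plenty of headroom, no caps needed
        per_server_cap = budget // len(readings)  # naive even-split policy
        for server in readings:
            set_power_cap(server, per_server_cap)

    enforce_rack_budget(server_power, RACK_BUDGET_W)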

Early deployments are validating the potential for SDN, but data center managers should take time to consider the indirect and direct impacts of this or any disruptive technology so that expectations can be set accordingly. SDN is just one trend that puts more pressure on IT to be able to do more with less.

Management expects to see costs go down; users expect 100% uptime for the services they need to do their jobs. More than ever, IT teams need the right tools to oversee the resources they are being asked to deploy and configure ever more rapidly. They need to know the impact of any change on resource allocations within the data center.

IT teams planning for SDN must also consider the increasing regulations and availability restrictions relating to energy in various locations and regions. Some utility companies are already unable to meet the service levels required by some data centers, regardless of price. Over-provisioning can no longer be considered a practical safety net for new deployments.

Regular evaluations of the energy situation in the data center should be a standard practice for technology planning. Holistic energy management solutions give data center managers many affordable tools for those efforts. Today’s challenge is to accurately assess technology trends before any pilot testing begins, and leverage an energy management solution that can minimize the pain points of any new technology project such as SDN.

About the Author




Jeff Klaus is the general manager of Data Center Manager (DCM) Solutions at Intel Corporation, where he has managed various groups for more than 13 years. He leads a global team that is pioneering data center infrastructure management (DCIM) solutions. A graduate of Boston College, Klaus also holds an MBA from Boston University. For more information, visit www.intel.com/datacentermanager

Wednesday, October 8, 2014

Blueprint Column: Women in Engineering - Changing the Odds

by Scott McGregor, President and Chief Executive Officer, Broadcom Corporation

Some of the greatest successes in the modern economy come from finding multibillion-dollar industries that are ripe for disruption. Engineers spend a lot of time looking for such opportunities. And yet, one of the ripest targets for disruption is right before their eyes: the engineering industry itself.

Here’s how to disrupt it: Increase the number of women in the engineering profession.

Of all the science, technology, engineering, and mathematics (STEM) professions in which women are under-represented, the disparity is greatest in engineering. According to the U.S. Bureau of Labor Statistics, in 2012 women comprised 45 percent of scientists, 25 percent of mathematicians, and 22 percent of technology workers, but only 10 percent of engineers. Today, there are plenty of young women taking the required high school courses to pursue an engineering degree, including AP calculus and physics. Yet in the engineering and computer science professions, where workforce demand and salaries are among the highest, the percentage of women earning degrees continues to lag.

By the time young women get to college, only three percent will declare a major in engineering, and of those, they are least likely to choose electrical and electronic engineering, according to the Department of Education. Female representation declines further at the graduate level, and yet again in the transition to the workplace.

The fundamental issue to be addressed is gender bias. So where does it start and what can we do about it?

Middle school is a great place to start, because that is where girls’ achievements and interest are shaped by stereotypes (“boys are better at math and science than girls”), biases (“math is hard”) and cultural beliefs (“engineering is a profession for men”). Researchers have found that even subtle references to these gender stereotypes reduce girls’ interest in science and math.

In a landmark paper, “Why So Few? Women in Science, Technology, Engineering, and Mathematics,” researchers concluded “The answer lies in our perceptions and unconscious beliefs about gender in mathematics and science.” In other words, we can’t solve the gender disparity problem without first recognizing our own biases that are formed early and all too often persist into the workplace. The good news is that the negative impact of those faulty perceptions can be lessened, just by becoming aware of them.

We must celebrate the achievements of girls and women in STEM professions and provide young hopefuls with role models and mentors along the path from school to the workplace. We must encourage girls and young women to pursue engineering careers through interactive programs in our local schools and community.

We can enhance STEM curricula in schools, beginning in elementary school with hands-on projects that help girls build confidence in spatial skills, an area where girls underperform boys. Even playing with construction kits and toys can help girls build confidence in the spatial skills they will need later on to succeed at engineering. In middle school, 3-D computer games and digital sketching tools can reinforce that confidence. As their confidence increases, girls are less likely to fall back on the traditional stereotypes about gender, and more likely to feel like they “belong” in STEM courses.

Early exposure and encouragement in mathematics is also key. Studies have shown that girls who take calculus in high school are three times more likely to pursue STEM careers, including engineering.

Also crucial is creating a workplace culture that is welcoming and supportive of women. Enlightened work-life policies are important, but so too are active efforts to attract, promote, and retain women. Seminars, luncheons, peer coaching, scholarships, networking gatherings, and continuing education programs have been shown to be effective in turning talented women engineers into talented workplace leaders.

By spearheading women’s leadership programs and providing a range of professional development opportunities geared specifically for women, we can encourage women through all levels of their career. These women, in turn, can take that learning and inspiration back out to the community – which will help attract more young women into engineering as they see more female role models. It’s a virtuous cycle.

The “Why So Few?” report was funded by a number of organizations, including the National Science Foundation, and was published by the American Association of University Women. Here’s a link to the paper.

By the way, for anyone who thinks they can’t possibly have biases that impact the perception of women, or other stereotypes, check out this study at Harvard and take one or more of the tests.

About the Author

Scott McGregor serves as Broadcom's President and Chief Executive Officer. In this role, he is responsible for guiding the vision and direction for the company's corporate strategy. Since he joined Broadcom in 2005, the company has expanded from $2.40 billion in revenue to $8.31 billion in 2013. Broadcom's geographic footprint has grown from 13 countries in 2005 to 25, and its patent portfolio has expanded from 4,800 U.S. and foreign patents and applications to more than 20,850.

McGregor joined Broadcom from Philips Semiconductors (now NXP Semiconductors), where he served as President and CEO from 2001 to 2004. He joined Philips in 1998 and rose through a series of leadership positions.  McGregor received a B.A. in Psychology and an M.S. in Computer Science and Computer Engineering from Stanford University. He serves on the board of Ingram Micro, on the Engineering Advisory Council for Stanford University, and as President of the Broadcom Foundation. Most recently, McGregor received UCLA's 2013 IS Executive Leadership Award.

Thursday, September 18, 2014

Blueprint: Carriers Set Their Sights on 1Gbps Rollouts with G.fast

By Kourosh Amiri, VP Marketing, Ikanos, Inc.

Demand for high-speed broadband access by consumers has never been more intense than it is today.  Rapidly increasing numbers of connected devices inside the home and the adoption of higher-resolution (4K and 8K) television are just the tip of the iceberg.  Home automation, remote patient monitoring, and multi-player gaming – among countless other applications – are contributing to an Internet of Things phenomenon that promises to drive bandwidth demands through the roof.

And carriers and ISPs are lining up to get a share of the prize, wondering how their current broadband technologies can evolve to meet the increasing demand.  In the case of telcos, for example, even with the potential of vectored VDSL2 to deliver aggregate bandwidths of up to 300 Mbps to consumers, competitive pressure continues to mount to deliver quantum increases in bandwidth.  To that end, G.fast, with its promise of up to 1Gbps service to each household, could roll out in initial trials in as little as 12-18 months.  

G.fast, a concept proposed in 2012 that achieved consent from the ITU-T standards body in December 2013, represents the next performance node in the evolution of xDSL. G.fast is defined to support up to 1Gbps over short (i.e., less than 100 meter) copper loops, and is designed to address gigabit broadband connectivity on hybrid fiber-copper carrier networks.  Service deployments are targeted from fiber-fed equipment located at a distribution point, such as a telephone pole, a pedestal, or inside an MDU (Multi-Dwelling Unit), serving customers on drop wires that span a distance of up to 100 meters.

G.fast – in the same way as existing ADSL and VDSL – takes advantage of fiber already deployed to cabinets or other nodes.  The proliferation of G.fast will come as service providers push fiber closer to homes, where a single distribution point will serve, typically, from 8 to 16 homes.

Why the interest in G.fast?  Even with vectored VDSL2’s ability to deliver hundreds of Mbps for Fiber to the Node (FTTN) applications (150Mbps aggregate performance on a 500-meter loop demonstrated in many carrier lab trials worldwide by Ikanos, and 300Mbps for shorter loops), the explosion in the number of devices per home and in new services and applications is expected to drive strong interest in accelerating FTTdp with G.fast to market.  And, for carriers, the need to spur subscriber adoption of G.fast will play a lead role, keeping them in lockstep with competing services over cable and FTTH in the race to 1Gbps residential broadband connectivity.

Fortunately, much of the work in preparing the G.fast standard has already been completed, and chip suppliers have the consent of the ITU-T to start the development of G.fast chips, with some already making public announcements about their upcoming products.  (For example, Ikanos in October 2013 announced an architecture and development platform for G.fast.)

The infrastructure for G.fast is a variant of the infrastructure for VDSL2.  The primary difference is in the length of the copper pair that enters the residence.  G.fast will require shorter copper loops than those used with VDSL2 in order to accommodate the desired gigabit performance.  That, in turn, means that service providers must drive fiber closer to homes. In addition, carriers will need to ensure that the fiber-to-G.fast media converters at each distribution point will be backward-compatible with VDSL2.  Why?  To enable customers not yet subscribing to G.fast service (when it becomes available) to continue to receive VDSL2 service through G.fast transceivers in the network. This is a critical requirement for carriers looking to offer new services to their existing subscriber base.  And it is a practical consideration, as not all subscribers will choose to upgrade to these new services at the same time.  Without this VDSL2 backward-compatibility feature (also known as VDSL2 fallback), this transition to new services may end up creating many problems for carriers, including additional CAPEX and service disruptions.
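
That fallback behaviour can be pictured as a simple per-port decision at the distribution point unit, as in the sketch below. The port table, capability strings and mode names are assumptions made for illustration; they are not drawn from the G.fast standard or any vendor implementation.

    # Illustrative sketch of per-port VDSL2 fallback at a G.fast distribution
    # point unit. The port table, capability sets and mode names are assumptions
    # for illustration only.

    def select_line_mode(cpe_capabilities):
        """Pick the service mode for one drop wire based on what the CPE supports."""
        if "gfast" in cpe_capabilities:
            return "gfast"           # upgraded CPE -> gigabit-class service
        if "vdsl2" in cpe_capabilities:
            return "vdsl2_fallback"  # legacy CPE keeps working through the G.fast transceiver
        return "out_of_service"

    # Hypothetical distribution point serving a handful of homes
    ports = {
        1: {"gfast", "vdsl2"},
        2: {"vdsl2"},            # subscriber has not upgraded yet
        3: {"adsl2", "vdsl2"},
    }

    for port, capabilities in ports.items():
        print(f"port {port}: {select_line_mode(capabilities)}")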

As the adoption of G.fast nears, customer pre-qualification and self-installation are needed to ensure a smooth, cost-effective migration to G.fast.  Existing customers may be able to leverage new ADSL2 or VDSL2 CPE (or a software upgrade to their existing CPE) with advanced diagnostics to pre-qualify the line for G.fast service. Those diagnostics would be used to qualify the copper pair, check for RF noise, and advise whether any line conditioning actions would be required when installing the G.fast DPU.  What’s more, carriers expect that G.fast deployments will take place with virtually “no touch” installation and provisioning, setting the stage for more rapid adoption of the technology.

The G.fast standard is expected to be fully ratified later this year. Deployment costs are still a matter of conjecture and will vary by geography, carrier, and specific network topology, but expectations are that they will significantly undercut the costs of FTTH. And, with the current uncertainties in service providers’ plans and timing for broad rollouts of FTTH, the future for G.fast looks bright.

About the Author
Kourosh Amiri has more than 20 years of experience in the semiconductor industry. He has been responsible for the successful introduction of products targeting a range of applications in the networking, communications, and consumer segments. He joined Ikanos in February 2013 to lead the company’s global marketing and product strategy. Amiri was previously with Cavium, where he led marketing and business development for its emerging media processor group, and drove the strategy for turning Cavium into a leading supplier of wireless display media processors in multiple market segments, including smartphones and PC accessories. Prior to joining Cavium, Amiri held senior marketing and business development roles at Freescale and several venture-backed semiconductor start-ups, addressing a wide range of networking and media processing applications. Amiri has an MSEE from Stanford University and a BSEE from the University of California, Santa Barbara.

About Ikanos

Ikanos Communications, Inc. (NASDAQ:IKAN) is a leading provider of advanced broadband semiconductor and software products for the digital home. The company’s broadband DSL, communications processors and other offerings power access infrastructure and customer premises equipment for many of the world’s leading network equipment manufacturers and telecommunications service providers. For more information, visit www.ikanos.com.

Tuesday, September 9, 2014

Blueprint: What’s Next for Carrier Ethernet?

by Stan Hubbard and Members of the MEF

More than 1,000 service providers and network operators worldwide now rely on Carrier Ethernet (CE) to support high-performance Ethernet & Ethernet-enabled data services, to interconnect network-enabled cloud services, to underpin 4G/LTE mobile and consumer triple play services, and to meet internal networking needs. Tens of thousands of businesses and enterprises in every industry vertical have transitioned to these CE services in order to control communications costs, efficiently scale with traffic demand, improve business agility, and boost productivity.

As the dominant protocol of choice for affordable, scalable, high-bandwidth connectivity, Ethernet has overtaken TDM in the wide area network (WAN) and has emerged as the indispensable digital fuel for accelerating communications-related business transformation. According to Vertical Systems Group, global business Ethernet services bandwidth surpassed installed legacy services bandwidth in 2012 and is projected to exceed 75% of total global business bandwidth by 2017. In short, CE has transformed the WAN over the past decade.

CE’s key drivers

The twenty-first century’s accelerating bandwidth consumption paved the way for Carrier Ethernet and continues to drive demand. It was not just the size of the available bandwidth, however, but also the granular way it could be delivered.

Take mobile backhaul as an example: as pressure on the network increased, there was nothing to stop the operator from ordering further leased lines from the cell tower to the core, but each extra line meant a big jump in cost, took time and manual labour to install, and had to be justified in terms of expected future demand. With a CE connection, by contrast, bandwidth could be raised immediately in small increments as needed, without field installation, and it could just as easily be lowered if the demand boost turned out to be temporary. This moved business from CapEx towards a more flexible OpEx pricing model.

From a carrier perspective, CE’s flexibility gave similar benefits: enterprise customers could be attracted with “bandwidth on demand” services, and CE offered enormous scalability to accommodate new customers and rising demand.

Another advantage of Ethernet was that customers were already familiar with it in the LAN, so it was easier to understand and adapt to CE than to gather expertise in legacy WAN protocols like ATM and Frame Relay. The fact that enterprise customers typically wanted the carrier to link their Ethernet LANs also made CE attractive as an end-to-end Ethernet solution.

Beyond the drivers that helped launch the uptake of Carrier Ethernet, another type of driver then emerged: market momentum. As CE went mainstream, the relatively simple CE hardware (compared with legacy WAN systems) gained mass sales and became increasingly cost-effective. This further accelerated CE’s performance-price benefits.

The reason that cost was such a strong driver for CE was that its uptake coincided with a serious economic downturn, putting cost-efficiency high on the buying agenda for much of the first decade. But it was never the only factor: flexibility and simplicity are also very much in demand in times of high competition, and CE also majored on those benefits.

Lastly, sales exploded as standards were established – but that is a topic in itself.

Standards and the role of the MEF

Founded in 2002, the MEF is a global industry alliance whose aim is to accelerate the worldwide adoption of carrier-class Ethernet networks and services. The MEF develops Carrier Ethernet technical specifications and implementation agreements to promote interoperability and deployment of Carrier Ethernet worldwide.

As well as being responsible for the creation of Carrier Ethernet itself, MEF member companies have worked to define, develop, and encourage worldwide adoption of standardized CE services and technologies. During the 2008 recession, even with CE offering massive benefits in cost and flexibility, businesses would have been far more nervous about migrating to a relatively new technology had it not been for the MEF’s certification program, which certified first equipment to global CE standards, then services, and then professional expertise.

According to Marie Fiala Timlin, Director of Marketing, CENX: “Vendors have converged on common CE standards, advocated by the MEF, so the SP has multiple options of standards-compliant infrastructure equipment, which will interoperate cleanly in the network.  Those benefits get passed on to the end-user in the form of high quality Internet connections, supported by CE service performance attributes.” Timlin added: “Furthermore, the specifications are continuously updated as technology and experience evolves, hence ensuring that vendors and SPs can innovate and yet remain standardized.”

And Christopher Cullan, Director of Product Marketing, Business Services, InfoVista, explained: “We’re in the standardization phase today with defined best practices from the MEF. MEF 35 is available with the basic support of Carrier Ethernet network and service performance monitoring, and MEF 36 and MEF 39 provide two constructs to enable MEF 35 using SNMP and NETCONF respectively. Some leading vendors are already moving forward, with MEF 35 compliance, and MEF 36. These cut the integration effort for an Ethernet device to enable full, MEF-aligned performance monitoring – valuable to both the internal stakeholders like operations and engineering as well as for end customers.”

The current suite of standards has been labelled “CE 2.0”. As Zeev Draer, VP Strategic Marketing, MRV explained: “The combined effort to ratify CE 2.0 was paramount in CE’s adoption in wide area and global international networks. CE 2.0 provides the right toolkit for legacy network replacement based on multiple Classes of Service (Multi-CoS), interconnect and manageability.”

“Interconnect” refers to CE 2.0’s E-Access, as Madhan Panchaksharam, Senior Product Manager, VeryX, explained: “The wholesale interconnect process has been tremendously simplified by MEF E-Access. The combined effort has resulted in overcoming delays in the wholesale interconnect process. This has enabled bigger carriers to quickly expand across geographies, and provided business opportunities for many smaller carriers to interconnect with bigger players and maximize their revenues.”

Already, 26 service providers in 12 countries offer more than 74 CE 2.0-certified services, and many more are in the process of services certification and/or have been building out CE 2.0-compliant services.  Meanwhile, 34 network equipment companies now offer 145 devices that are CE 2.0-certified and thus capable of powering CE 2.0 services. More than 2,300 individuals from 257 organizations in 62 countries have now been recognized as MEF Carrier Ethernet Certified Professionals (MEF-CECP or MEF CECP 2.0) – a population that has nearly tripled in the past 12 months. With MEF-CECP certification, SPs can identify knowledgeable professionals to manage data network operations across a multi-vendor infrastructure.

MEF standards clearly help to harmonise the technical aspects, but they also make it easier to communicate between regions and business cultures, as Olga Havel, Head of Product Strategy and Planning, Amartus, explains: “We are creating the common industry language that specifies CE services, and therefore significantly reduces the cost of delivery for these services.” Christopher Cullan adds: “The more that CE standards are communicated to the buyer market (e.g. Enterprise), the greater the level of understanding, and hence adoption.”

Where next for CE adoption?

Zeev Draer says: “We are at a maturing stage in the networking industry... It’s no longer about big pipe connectivity, but more about application-driven intelligence with strong end-to-end multi-layer provisioning of services, performance monitoring across layers, and high elasticity of the network that should scale to millions of subscribers and services.”

Christopher Cullan agrees, adding: “Cheaper-than-TDM, is no longer good enough, it must be proven through simple, easily understood SLAs. As margins shrink with market maturity, over-provisioning cannot solve the needs of the enterprise business cost-effectively... Communication Service Providers need SLAs that align with the market and are standardized such that services are less bespoke and more cost effective.”

There is general agreement with these comments about the growing demands of cloud computing. Marie Fiala Timlin said: “CE is a means to connect enterprises to the cloud with guaranteed SLAs.  Within the data center itself, CE is the mechanism to provide quality exchange connectivity between tenants, and between the tenant enterprise to the cloud-based application server.  Also, CE serves to interconnect data center locations”.

For Zeev Draer: “The next step for Carrier Ethernet adoption will be highly focused on BSS and OSS integrations along with standardization of CE 2.0 APIs. This is the most critical area that will save OPEX and enable new services such as the ‘Internet of Things’ and services that didn’t exist up to now. Now that we see maturity in CE definitions and more stringent technology factors than required from any large service provider, the focus will be on automation and monetization of CE services.”

Olga Havel agrees: “Automation is the key word right now. Service Providers and the MEF must now focus on automation of the full Carrier Ethernet delivery lifecycle (Design->Provision->Operate) in order to monetize today’s networks and be ready to operate tomorrow’s virtualized networks. The next step for CE adoption is real-time OSS – service-centric orchestration platforms with open APIs that enable Software-Defined Service Orchestration.”

Towards agile, assured services orchestrated over efficient, interconnected networks

One way for SPs to compete is by reducing OPEX and increasing service lifecycle efficiency for interconnected, SLA-oriented networks. Customers will pay more for performance guarantees, especially in cloud access networks with SLA dependency. But this requires a rich set of OAM capabilities for end-to-end service visibility.

Carrier Ethernet needs to evolve further to accommodate and facilitate new services oriented towards business applications and needs. These require flexibility, agility, interconnectivity and security in networks. Achieving these will require new CE attributes, interface definitions and APIs to enable greater programmability and automation.

Madhan Panchaksharam believes: “There is an increasing need to articulate MEF’s vision to bring together various players in the eco-system such as enterprises, cloud service providers, carriers and infrastructure providers, to demonstrate how this agility and dynamic delivery models can be achieved”. He sees the convergence of Carrier Ethernet, NFV and SDN as carriers transition towards agile, on-demand and flexible service models especially for cloud-type applications: “Carrier Ethernet has inherently better capabilities that can enable these goals to be achieved without sacrificing the quality of experience for users.”

However, NFV sub networks or overlays add further complexity according to Marie Fiala Timlin, who sees a corresponding need for next generation service orchestration systems: “Today’s OSS are siloed by function: inventory, fault, provisioning, performance monitoring.  One needs a holistic view of the network for end-to-end service fulfilment and assurance.  Also APIs between technology domains, and between carriers, are needed to help automate workflows for service agility”.

For Olga Havel too: “What needs to happen next is standardization of MEF Service Orchestration APIs. This will open the way for MEF certification for CE service orchestration platforms and interfaces. These APIs would enable users, applications and OSSs to design, provision & operate MEF services over single and multiple operators’ networks. MEF Service Management Reference Architecture must take into account integration between multiple Operators, but also with NFV Orchestrators and Cloud Managers for providing delivery of end-to-end connectivity services between Carrier Ethernet and Data Centre VMs and/or VNFs”.
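
As a purely hypothetical illustration of the level of abstraction such an orchestration API might expose, the sketch below builds an order for an E-Line style Ethernet Virtual Connection. The endpoint, field names and values are invented and do not reflect any ratified MEF specification.

    # Hypothetical sketch of an order for an E-Line style Ethernet Virtual
    # Connection submitted to a service-orchestration API. The endpoint, field
    # names and values are invented and do not reflect any ratified MEF API.
    import json

    ORCHESTRATOR_URL = "https://orchestrator.example.net/api/evc"  # placeholder endpoint

    evc_order = {
        "serviceType": "E-Line",
        "uniA": "NYC-DC1-UNI-7",   # hypothetical UNI identifiers
        "uniZ": "LON-DC2-UNI-3",
        "cir_mbps": 200,           # committed information rate
        "cos": "Gold",             # one of multiple classes of service
        "slaProfile": "low-latency",
    }

    # In practice this payload would be POSTed to ORCHESTRATOR_URL and the OSS
    # would poll the returned order ID for activation status; printing it here
    # simply shows the level of abstraction such an API could expose.
    print(json.dumps(evc_order, indent=2))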

Conclusion

The MEF has a reputation for moving quickly to anticipate business needs and deliver solutions and standards at the right time. Recognising that the issues go beyond technology and tools, the MEF launched its Service Operations Committee (SOC) last year to define, streamline and standardize processes for buying, selling, delivering and operating MEF-defined services.

The SOC has established several projects to develop process flows, use cases and APIs to support all aspects of the ordering and provisioning of MEF-defined Ethernet services and accelerate delivery of MEF services to customers.

The MEF is also shaping a Vision and White Paper towards standardising delivery of dynamic connectivity services via physical or virtual network functions orchestrated over multiple operators’ networks, and is addressing the need for standardised service orchestration APIs. Later this year the MEF will announce more detail about its industry vision and various strategic initiatives.

The MEF Global Ethernet Networking 2014 (GEN14) event will be held on 17-20 November at the Gaylord National in Washington, DC.  GEN14 is a global gathering of the CE community defining the future of network-enabled cloud, data, and mobile services powered by the convergence of CE 2.0, SDN and virtualization technologies.

More information about GEN14 is available at www.gen14.com

About the Author

Stan Hubbard, Director of Communications & Research, MEF, is a veteran Carrier Ethernet analyst who was previously Senior Analyst at the independent research organization Heavy Reading for nine years.

Thursday, May 8, 2014

Nokia: Three Metrics of Disruptive Innovation


by David Letterman, Nokia

‘Disruptive innovation’ has been a favorite discussion topic for years. I am sure every industry, every company and every innovation team, has had rounds and rounds of discussion about what disruptive innovation means for them.

Rather than attempting another end-all, be-all definition for innovation, let’s focus on the passion it evokes and the permissions it enables. Disruptive innovation, as an internal charter, allows expansion beyond previous boundaries; it gives permission to go after new markets, new customers and new business models.  If left unencumbered, it can guide the company to proactively find and validate big problems for which external partners, new products and new markets can be created.  Disruptive innovation can be a source of otherwise unattainable revenue growth and market share.

Innovation is about converting ideas into something of value, making something better AND hopefully something that our customers are willing to pay for. For the purpose of putting the framework into two buckets, let’s distinguish between incremental and disruptive innovation.

Most innovation in established companies is developed by corporate innovation engines, whose job is to continually improve their products and services.  This continuous innovation delivers incremental advances in technology, process and business model.  Specialized R&D teams can add value to these innovation engines by solving problems differently or having a specific charter to go after larger levels of improvement.  Although the risks are higher, breakthrough innovation occurs when these teams achieve significantly better functionality or cost savings.  This combination of corporate and specialized incremental innovation is absolutely necessary for companies to keep up with or get ahead of the competition – and it is something most successful companies are very good at.

Disruptive innovation, on the other hand, is much more difficult for the corporate machinery. Here, new product categories are created, new markets are addressed and new value chains are established.

There is no known baseline to refer to.

Disruption implies that someone is losing – being disrupted. So clearly you won’t find a product roadmap for it in the company catalog. And it’s not even necessarily solving the problems of the current customer base. This is an area where, with the right passion, permissions and charter, a specialized innovation team can take a lead role and create significant growth for the company.

Here is my take on three characteristics of teams chartered to do disruptive innovation -

  • A strong outside-in perspective is crucial, for not only identifying the problem and validating the opportunity, but also for finding and creating a solution, and perhaps even taking it to market. Collaboration is everything when it comes to disruption.
  • Risk quotient - Arguably, all innovation contains some element of risk.  But in the case of proactively seeking disruption, we must allow for an even higher degree of risk. For most innovation teams, ‘Fail fast’, ‘Fail often’ and ‘Fail safe’ are the mantras.  But in the case of disruptive innovation, when we are seeking new markets, perhaps based on new technologies, our probability of success is untested. And to the incumbents, this new solution is unacceptable, often something they have never considered or simply cannot deliver.  If you are solving a really important problem, it justifies embracing the risk, revalidating the opportunity and digging deeper to create a solution.  Redefine risk in the context of meaningful disruption – ‘Fail proud’ and keep on solving.  Remember SpaceX?
  • How disruptive is disruptive - For a new entrant to eventually become disruptive it needs to be significantly better in functionality, performance and efficiency - or much cheaper - than the alternatives.  Although the benefits may initially only be noticed by early adopters, for the solution to disrupt a category it must be made available to, and eventually accepted by, the masses.

A simple example that addresses these three characteristics is how the Personal Navigation Device (PND) market was disrupted by the smartphone.

In the early and mid 2000s, Garmin and TomTom had a lock on the personal navigation market. When Nokia and the other phone manufacturers began delivering GPS via phones, they were coming to the market via a totally new channel, embedding the functionality in a device that the consumer would carry with them at all times.


The incumbents may have acted unfazed, but in reality they couldn’t respond to the threat.  The functionality may have been inferior to what they were selling, but the cost was perceived as free.  It was totally unacceptable and the business model was “uncopiable.” What started as a feature in just select high-end phones would soon be adopted as standard functionality in every smartphone, and expected by end users by default. In just two years, there were five times as many people carrying GPS-enabled phones in their pockets as there were PNDs being sold.

Silicon Valley Open Innovation Challenge

There are many other characteristics you might consider to be the most important measurements for disruptive innovation.  For me, these three are as good as any.  It comes down to the simple questions of “Why does it matter?”, “What problem does this empower us to solve that was otherwise unmet?” and “How can we provide significantly positive impact for the company and for the people the innovation will serve?”

Nokia’s Technology Exploration and Disruption (TED) team is chartered to look at exactly these questions. In its search for the next disruption, it has launched the Silicon Valley Open Innovation Challenge.

This competition is an open call to Silicon Valley innovators to collaboratively discover and solve big problems with us, and to do so in ways that are significantly better, faster or cheaper than we could have done alone. We see Telco Cloud and colossal data analytics as the two major transformational areas for the wireless industry, opening up possibilities for disruption – and those are the focus themes for the Open Innovation Challenge. We’re willing to take the risk because we know the rewards of innovation are worth it.

Click here to submit your ideas and be part of something truly disruptive. Apply now!
The deadline for submissions is 19 May 2014.

http://nsn.com/OpenInnovationChallenge

David Letterman works in the Networks business of Nokia within its Innovation Center in the heart of Silicon Valley. Looking after Ecosystem Development Strategy for the Technology Exploration and Disruption global team, David is exploring how to create exponential value by pushing the boundaries of internal innovation. An important initiative is Nokia’s Silicon Valley Open Innovation Challenge, calling on the concentrated problem-solving intellect of the Valley, to solve two of the biggest transformations for Telco: Colossal data analytics and Telco Cloud. Prior to his current position, David worked for a top tier Product Design and Innovation Consultancy, and held various business development and marketing management roles during a previous 10-year tenure with Nokia.


Nokia invests in technologies important in a world where billions of devices are connected. We are focused on three businesses: network infrastructure software, hardware and services, which we offer through Networks; location intelligence, which we provide through HERE; and advanced technology development and licensing, which we pursue through Technologies. Each of these businesses is a leader in its respective field. Through Networks, Nokia is the world’s specialist in mobile broadband. From the first ever call on GSM, to the first call on LTE, we operate at the forefront of each generation of mobile technology. Our global experts invent the new capabilities our customers need in their networks. We provide the world’s most efficient mobile networks, the intelligence to maximize the value of those networks, and the services to make it all work seamlessly. 
http://www.nsn.com
http://company.nokia.com

Wednesday, March 12, 2014

Blueprint: SDN and the Future of Carrier Networks

by Dave Jameson, Principal Architect, Fujitsu Network Communications

The world has seen historically unparalleled changes in technology over the last ten to twenty years, particularly in mobile communications. As an example, in 1995 there were approximately 5 million cell phone subscribers in the US, less than 2 percent of the population. By 2012, according to CTIA, there were more than 326 million subscribers, of which more than 123 million were smartphones. This paradigm shift has taken information from fixed devices, such as desktop computers, and made it available just about anywhere. With information available anywhere, in the hands of individual users, some have started to call this the "human centric network," as network demands are being driven by these individual, often mobile, users.

But this growth has also created greater bandwidth demands and in turn has taken its toll on the infrastructure that supports it. To meet these demands we’ve seen innovative approaches to extracting the most benefit from existing resources, extending their capabilities in real-time as needed.  Clouds, clusters and virtual machines are all forms of elastic compute platforms that have been used to support the ever growing human centric network.

But how does this virtualization of resources in the data center relate to SDN in the telecom carrier's network? Specifically, how does SDN, designed for virtual orchestration of disparate computational resources, apply to transport networks? I would suggest that SDN is not only applicable to transport networks but is a necessary requirement for them.

What is SDN?

The core concept behind SDN is that it decouples the control layer from the data layer. The control layer is the layer of the network that manages the network devices by means of signaling. The data layer, of course, is the layer where the actual traffic flows. By separating the two the control layer can use a different distribution model than the data layer.

The real power of SDN can be summed up in a single word - abstraction.  Instead of sending specific code to network devices, machines can talk to the controllers in generalized terms. And there are applications that run on top of the SDN network controller.

As seen in Figure 1, applications can be written and plugged in to the SDN network controller. Using an interface such as REST, the applications can make requests of the SDN controller, which will return the results. The controller understands the construct of the network and can communicate requests down to the various network elements connected to it.

The southbound interface handles all of the communications with the network elements themselves, and it can take one of two forms. The first creates a more programmable network: instead of just sending commands to tell devices what to do, SDN can actually reprogram a device to function differently.

The second type of southbound interface is a more traditional type that uses existing communication protocols to manage devices that are currently being deployed with TL1 and SNMP interfaces.

SDN has the ability to control disparate technologies, not just equipment from multiple vendors.

Networks are, of course, comprised of different devices to manage specific segments of the network. As seen in Figure 2, a wireless carrier will have wireless transmission equipment (including small cell fronthaul) with transport equipment to backhaul traffic to the data center. In the data center there will be routers, switches, servers and other devices.


Today, at best, these are under "swivel chair management" and at worst have multiple NOCs managing their respective segments. Not only does this add OpEx in terms of staffing and equipment costs, but it also makes provisioning difficult and time-consuming, as each network section must provision its part in a coordinated fashion.

In an SDN architecture, a layer called the orchestration layer can sit above the controller layer; its job is to talk to multiple controllers.

Why do carriers need SDN?

As an example of how SDN can greatly simplify network provisioning, let's look at what it would take to modify the bandwidth shown in Figure 2. If there is an existing 100 Mbps Ethernet connection from the data center to the fronthaul and it is decided that the connection needs to be 150 Mbps, a coordinated effort needs to be put in place. One team must increase the bandwidth settings of the small cells, the transport team must increase bandwidth on the NEs, and routers and switches in the data center must be configured by yet another team.

Such adds, moves, and changes are time-consuming in an ever-changing world where dynamic bandwidth needs are no longer negotiable. What is truly needed is the ability to respond to this demand in real time, where the bandwidth can be provisioned by one individual using the power of abstraction. The infrastructure must be enabled to move at a pace closer to the one-click world we live in, and SDN provides the framework required to do so.
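
A minimal sketch of that abstraction is shown below: one end-to-end bandwidth request is fanned out by an orchestration layer to per-domain controllers for the radio, transport and data center segments. The class and method names are assumptions for illustration only, not any particular controller's API.

    # Minimal sketch of the abstraction SDN provides: one bandwidth request,
    # fanned out by an orchestration layer to per-domain controllers. The class
    # and method names are assumptions for illustration only.

    class DomainController:
        def __init__(self, name):
            self.name = name

        def set_bandwidth(self, service_id, mbps):
            # Placeholder for the controller reprogramming its own devices
            # (small cells, transport NEs, or data center switches).
            print(f"[{self.name}] {service_id}: bandwidth set to {mbps} Mbps")

    class Orchestrator:
        """Sits above the controllers and exposes one end-to-end operation."""

        def __init__(self, controllers):
            self.controllers = controllers

        def set_end_to_end_bandwidth(self, service_id, mbps):
            for controller in self.controllers:
                controller.set_bandwidth(service_id, mbps)

    orchestrator = Orchestrator([
        DomainController("radio-fronthaul"),
        DomainController("transport-backhaul"),
        DomainController("data-center-fabric"),
    ])

    # The 100 Mbps -> 150 Mbps change from the text becomes a single call:
    orchestrator.set_end_to_end_bandwidth("backhaul-cellsite-42", 150)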

SDN Applications

No discussion of SDN would be complete without examining the capabilities that SDN can bring through the mechanism of applications. There are many applications that can be used in an SDN network. Figure 4 shows a list of examples of applications and is broken down based on the type of application. This list is by no means meant to be exhaustive.


One example of an application that specifically applies to carrier networks is path computation, or end-to-end provisioning. Over the years there have been many attempts to provide a path computation engine (PCE), including embedding the PCE into the NEs, which intermingles the control and data layers. But because the hardware on the NEs is limited, the scale of the domain the PCE can manage is also limited. SDN overcomes this issue by the very nature of the hardware it runs on, specifically a server: should the server become unable to manage the network due to size, additional capacity can be added by simply upgrading the hardware (e.g. adding a blade or hard drive). SDN also addresses the fact that not all systems share common signaling protocols, since it can not only work with disparate protocols but also manage systems that do not have embedded controllers.

Protection and Restoration

Another application that can be built is for protection and restoration. The PCE can find an alternative path dynamically based on failures in the network. In fact it can even find restoration paths when there are multiple failed links. The system can systematically search for the best possible restoration paths even as new links are added to the existing network. It can search and find the most efficient path as they become available.
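
The core of such a restoration application is an ordinary shortest-path search over the current topology with failed links excluded. The sketch below runs a plain Dijkstra search over a small, made-up latency-weighted topology; a real PCE would pull the topology and link state from the controller.

    # Minimal path-computation sketch: Dijkstra over a latency-weighted topology,
    # skipping failed links to find a restoration path. The topology and weights
    # are invented; a real PCE would pull them from the controller.
    import heapq

    # Hypothetical topology: node -> {neighbor: latency in ms}
    topology = {
        "A": {"B": 2, "C": 5},
        "B": {"A": 2, "C": 2, "D": 4},
        "C": {"A": 5, "B": 2, "D": 1},
        "D": {"B": 4, "C": 1},
    }

    def shortest_path(topo, src, dst, failed_links=frozenset()):
        """Return (total_latency, path) from src to dst, avoiding failed links."""
        queue = [(0, src, [src])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, latency in topo[node].items():
                if (node, neighbor) in failed_links or (neighbor, node) in failed_links:
                    continue  # link is down, route around it
                if neighbor not in visited:
                    heapq.heappush(queue, (cost + latency, neighbor, path + [neighbor]))
        return float("inf"), []

    print(shortest_path(topology, "A", "D"))                             # working path
    print(shortest_path(topology, "A", "D", failed_links={("C", "D")}))  # restoration path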

SDN and OTN Applications

A prime example of SDN being used to configure services can be seen when it is applied to OTN. OTN is a technology that allows users to densely and efficiently pack different service types into fewer DWDM wavelengths. OTN can greatly benefit the network by optimizing transport but it does add some complexity that can be simplified by the use of SDN.
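
As a toy illustration of the multiplexing gain involved, the sketch below packs a mixed set of client services into 100G ODU4 containers using simplified 1.25G tributary-slot counts (ODU0 = 1 slot, ODU2 = 8 slots, 80 slots per ODU4). The client mix is invented, and a real planner would also weigh latency, protection and growth.

    # Toy sketch of OTN multiplexing: first-fit packing of client services into
    # ODU4 containers, using simplified 1.25G tributary-slot counts
    # (ODU0 = 1 slot, ODU2 = 8 slots, 80 slots per ODU4 wavelength).
    # The client mix below is invented for illustration.

    SLOTS = {"ODU0": 1, "ODU2": 8}
    ODU4_SLOTS = 80

    clients = ["ODU2"] * 7 + ["ODU0"] * 30   # e.g. seven 10G and thirty 1G services

    wavelengths = []   # each entry is the number of free slots left on that ODU4
    for client in clients:
        need = SLOTS[client]
        for i, free in enumerate(wavelengths):
            if free >= need:
                wavelengths[i] -= need
                break
        else:
            wavelengths.append(ODU4_SLOTS - need)   # light up another wavelength

    print(f"{len(clients)} client services fit into {len(wavelengths)} ODU4 wavelength(s)")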

Network Optimization  

Another area where SDN can improve utilization is network optimization, so that over time the network makes better use of its resources. Again using the example of OTN, SDN can be used to reroute OTN paths to minimize latencies, to prepare for cutovers, and to respond to churn in demand.

NFV

In addition to applications, SDN becomes an enabler of Network Function Virtualization (NFV). NFV allows companies to provide services that currently run on dedicated hardware located on the end user's premises by moving the functionality to the network.

Conclusion

It is time for us to think of our network as being more than just a collection of transport hardware. We need to remember that we are building a human centric network that caters to a mobile generation who think nothing of going shopping while they are riding the bus to work or streaming a movie on the train.

SDN is capable of creating a programmable network by taking both next-generation systems and existing infrastructure and making them substantially more dynamic. It does this by taking disparate systems and technologies and bringing them together under a common management system that can utilize them to their full potential. By using abstraction, SDN can simplify the software needed to deliver services, improve use of the network, and shorten delivery times, leading to greater revenue.

About the Author
Dave Jameson is Principal Architect, Network Management Solutions, at Fujitsu Network Communications, Inc.

Dave has more than 20 years of experience in the telecommunications industry, most of which has been spent working on network management solutions. Dave joined Fujitsu Network Communications in February 2001 as a product planner for NETSMART® 1500, Fujitsu’s network management tool, and has also served as its product manager. He currently works as a solutions architect specializing in network management. Prior to working for Fujitsu, Dave ran a network operations center for a local exchange carrier in the northeastern United States that deployed cutting-edge data services. Dave attended Cedarville University and holds a US patent related to network management.

About Fujitsu Network Communications Inc.

Fujitsu Network Communications Inc., headquartered in Richardson, Texas, is an innovator in Connection-Oriented Ethernet and optical transport technologies. A market leader in packet optical networking solutions, WDM and SONET, Fujitsu offers a broad portfolio of multivendor network services as well as end-to-end solutions for design, implementation, migration, support and management of optical networks. For seven consecutive years Fujitsu has been named the U.S. photonics patent leader, and is the only major optical networking vendor to manufacture its own equipment in North America. Fujitsu has over 500,000 network elements deployed by major North American carriers across the US, Canada, Europe, and Asia. For more information, please see: http://us.fujitsu.com/telecom



Wednesday, February 26, 2014

Blueprint Column: Impending ITU G.8273.2 to Simplify LTE Planning

By Martin Nuss, Vitesse Semiconductor

Fourth-generation wireless services based on long-term evolution (LTE) have new timing and synchronization requirements that will drive new capabilities in the network elements underlying a call or data session. For certain types of LTE networks, there is a maximum time error limit between adjacent cellsites of no more than 500 nanoseconds.

To enable network operators to meet the time error requirement in a predictable fashion, the International Telecommunication Union is set to ratify the ITU-T G.8273.2 standard, which defines stringent time error limits for network elements. By using equipment meeting this standard, network operators will be able to design networks that predictably comply with the 500-nanosecond maximum time error between cellsites.

In this article, we look at the factors driving timing and synchronization requirements in LTE and LTE-Advanced networks and how the new G.8273.2 standard will help network operators in meeting those requirements.

Types of Synchronization

Telecom networks rely on two basic types of synchronization:
  • Frequency synchronization
  • Time-of-day synchronization, which includes phase synchronization

Different types of LTE require different types of synchronization. Frequency division duplexed LTE (FDD-LTE), the technology that was used in some of the earliest LTE deployments and continues to be deployed today, uses paired spectrum. One spectrum band is used for upstream traffic and the other is used for downstream traffic. Frequency synchronization is important for this type of LTE, but time-of-day synchronization isn’t required.

Time-division duplexed LTE (TD-LTE) does not require paired spectrum, but instead separates upstream and downstream traffic by timeslot. This saves on spectrum licensing costs and also allows bandwidth to be allocated more flexibly between the upstream and downstream directions, which could be valuable for video.  Time-of-day synchronization is critical for this type of LTE. TD-LTE deployments have recently become more commonplace than they were initially, and the technology is expected to be widely deployed.

LTE-Advanced (LTE-A) is an upgrade to either TD-LTE or FDD-LTE that delivers greater bandwidth. It works by pooling multiple frequency bands, and by enabling multiple base stations to simultaneously send data to a handset. Accordingly, adjacent base stations or small cells have to be aligned with one another – a requirement that drives the need for time-of-day synchronization. A few carriers, such as SK Telecom, Optus, and Unitel, have already made LTE-A deployments, and those numbers are expected to grow quickly moving forward.

Traditionally, wireless networks have relied on global positioning system (GPS) equipment installed at cell towers to provide synchronization. GPS can provide both frequency synchronization and time-of-day synchronization. But that approach becomes impractical as networks rely more and more heavily on femtocells and picocells to increase both network coverage (for example, indoors) and capacity. These devices may not be mounted high enough to have a line of sight to GPS satellites – and even if they could be, GPS capability would make them too costly.  There is also increasing concern about the susceptibility of GPS to jamming and spoofing, and countries outside the US are reluctant to rely exclusively on the US-operated GPS satellite system for their timing needs.

IEEE 1588

A more cost-effective alternative to GPS is to deploy equipment meeting timing and synchronization standards created by the Institute of Electrical and Electronics Engineers (IEEE).

The IEEE 1588 standards define a synchronization protocol known as precision time protocol (PTP) that originally was created for the test and automation industry. IEEE 1588 uses sync packets that are time stamped by a master clock and which traverse the network until they get to an ordinary clock, which uses the time stamps to produce a physical clock signal.

The 2008 version of the 1588 standard, also known as 1588v2, defines how PTP can be used to support frequency and time-of-day synchronization. For frequency delivery this can be a unidirectional flow. For time-of-day synchronization, a two-way mechanism is required.
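
That two-way exchange boils down to the textbook offset and delay calculation over the four PTP timestamps, sketched below with made-up nanosecond values.

    # Textbook two-way PTP calculation from the four timestamps:
    #   t1 = Sync sent by master, t2 = Sync received by slave,
    #   t3 = Delay_Req sent by slave, t4 = Delay_Req received by master.
    # The example values (in nanoseconds) are invented for illustration.

    t1, t2, t3, t4 = 1_000_000, 1_000_650, 1_010_000, 1_010_550

    offset_ns = ((t2 - t1) - (t4 - t3)) / 2  # slave clock error relative to the master
    delay_ns = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay, assumed symmetric

    print(f"estimated offset: {offset_ns} ns, path delay: {delay_ns} ns")
    # A slave steers its clock by the measured offset; any path asymmetry shows up
    # directly as time error, which is why on-path support matters for tight phase targets.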

Equipment developers must look outside the 1588 standards for details of how synchronization should be implemented to meet the needs of specific industries. The ITU is responsible for creating those specifications for the telecom industry.

How the telecom industry should implement frequency synchronization is described in the ITU-T G.826x series of standards, which were ratified previously. The ITU-T G.8273.2 standard for time-of-day synchronization was developed later and is expected to be ratified next month (March 2014).

Included in ITU-T G.8273.2 are stringent requirements for time error. This is an important aspect of the standard because wireless networks can’t tolerate time error greater than 500 nanoseconds between adjacent cellsites.

ITU-T G.8273.2 specifies standards for two different classes of equipment:
  • Class A – maximum time error of 50 ns
  • Class B – maximum time error of 20 ns

Both constant and dynamic time errors will contribute to the total time error of each network element, with both adding linearly after applying a 0.1Hz low-pass filter. Network operators that use equipment complying with the G.8273.2 standard for all of the elements underlying a network connection between two cell sites can simply add the maximum time error of all of the elements to determine if the connection will have an acceptable level of time error. Previously, network operators had no way of determining time error until after equipment was deployed in the network, and the operators need predictability in their network planning.
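
Under that assumption, the budgeting exercise is simple arithmetic. The sketch below sums the per-element limits for a hypothetical chain of Class A and Class B clocks and checks the total against the 500-nanosecond cellsite-to-cellsite limit; the chain itself is invented for illustration, and a full budget would also account for dynamic time error and the links between nodes.

    # Minimal time-error budgeting sketch using the G.8273.2 class limits quoted
    # above (Class A: 50 ns, Class B: 20 ns). The chain of clocks between the
    # two cellsites is a hypothetical example.

    MAX_TE_NS = {"A": 50, "B": 20}   # maximum time error per equipment class
    BUDGET_NS = 500                  # limit between adjacent cellsites

    chain = ["A", "A", "B", "B", "B", "A"]   # hypothetical grandmaster-to-cellsite chain

    worst_case_ns = sum(MAX_TE_NS[clock_class] for clock_class in chain)
    print(f"worst-case accumulated time error: {worst_case_ns} ns")
    print("within budget" if worst_case_ns <= BUDGET_NS else "exceeds the 500 ns budget")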

Conforming to the new standard will be especially important as network operators lean more heavily on heterogeneous networks, also known as HetNets, which combine fiber and microwave devices, including small cells and femtocells. Equipment underlying HetNets is likely to come from multiple vendors, complicating the process of devising a fix if the path between adjacent cell sites has an unacceptable time error level.

What Network Operators Should Do Now

Some equipment manufacturers have already begun shipping equipment capable of supporting ITU-T G.8273.2, because compliant components are now available. As network operators make equipment decisions for the HetNets they are just beginning to deploy, they should take care to look for G.8273.2-compliant products.

As for equipment already deployed in wireless networks, more than 1 million base stations currently support 1588 for frequency synchronization and can add time-of-day synchronization with a software or firmware update.

Some previously deployed switches and routers may support 1588, while others may not. And while 1588 may be supported by most switches and routers deployed within the last few years, they are unlikely to meet the new ITU profiles for time and phase delivery. Meeting those profiles requires IEEE 1588 boundary or transparent clocks with distributed timestamping directly at the PHY level, and only a few routers and switches have that capability today. Depending on where in the network a switch or router is installed, network operators may be able to continue using GPS for synchronization, gradually upgrading routers by using 1588-capable line cards for all new line card installations and swapping out non-compliant line cards where appropriate.

Wireless network operators should check with small cell, femtocell and switch and router vendors about support for 1588v2 and G.8273.2 if they haven’t already.

About the Author

Martin Nuss joined Vitesse in November 2007 and is the vice president of technology and strategy and the chief technology officer at Vitesse Semiconductor. With more than 20 years of technical and management experience, Mr. Nuss is a Fellow of the Optical Society of America and a member of IEEE. Mr. Nuss holds a doctorate in applied physics from the Technical University in Munich, Germany. He can be reached at nuss@vitesse.com.

About Vitesse
Vitesse (Nasdaq: VTSS) designs a diverse portfolio of high-performance semiconductor solutions for Carrier and Enterprise networks worldwide. Vitesse products enable the fastest-growing network infrastructure markets including Mobile Access/IP Edge, Cloud Computing and SMB/SME Enterprise Networking. Visit www.vitesse.com or follow us on Twitter @VitesseSemi.

Tuesday, February 25, 2014

Blueprint Column: Five Big Themes at RSA 2014

by John Trobough, president at Narus

Now that RSA is underway, I wanted to take some time to cover five key themes being talked about at the event.

Machine Learning

Machine learning is at the top of my list. As the frequency of attacks, the sophistication of intrusions, and the number of new networked applications increase, analysts cannot keep up with the volume, velocity, and variety of data.

The use of machine learning is gaining critical mass, fueled by the bring-your-own-device (BYOD) and Internet of Things (IoT) trends. This technology can crunch large data sets, adapt with experience, and quickly generate insight or derive meaning from the data. With machine assistance, analysts spend less time on data-processing duties and more time on problem solving and bolstering defenses. Machine learning brings new insight to network activity and malicious behavior, and it is shortening the time it takes to resolve cyber threats.
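
As a purely illustrative example of this kind of machine-assisted triage, the sketch below runs an off-the-shelf unsupervised anomaly detector (scikit-learn's IsolationForest) over synthetic network-flow features so that only flagged flows reach an analyst. The features, data, and thresholds are invented; a real deployment would involve far richer feature engineering and model tuning.

```python
# A minimal sketch of machine-assisted triage on network flow records, using an
# unsupervised anomaly detector so analysts only review flows flagged as unusual.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Feature columns: bytes transferred, flow duration (s), distinct destination ports.
normal = np.column_stack([
    rng.normal(20_000, 5_000, 5_000),   # typical transfer sizes
    rng.normal(2.0, 0.5, 5_000),        # typical durations
    rng.integers(1, 4, 5_000),          # few ports per flow
])
suspicious = np.array([[900_000, 0.2, 120]])  # large, fast, port-scanning flow
flows = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.001, random_state=0).fit(flows)
flags = model.predict(flows)            # -1 = anomalous, 1 = normal
print("flows flagged for analyst review:", int((flags == -1).sum()))
```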

Data Visualization

The historic, rudimentary approach of taking tabular data and presenting it in colorful pie charts and graphs does not deliver insight. According to ESG research, 44 percent of organizations classify their current security data collection as “big data,” and another 44 percent expect to do so within the next two years. With the explosive growth in the volume and variety of data, analysts are experiencing cognitive overload: their brains cannot process information fast enough. The challenge is to display insight and conclusions from data analysis clearly enough to facilitate rapid response.

Symbolic representations, like visual threat fingerprints, will be required for quick interpretation and comparison before diving into details (a simple sketch of such a fingerprint follows the list below). Data visualization design will need to incorporate best practices, including:
• Context-aware controls that appear only when required
• Seamless integration, providing flow from one task to the next without assumed knowledge about the source of the data
• Human-factor principles, to display data, analysis, and controls in ways that enhance clarity and usability.
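
As one purely hypothetical illustration of a visual threat fingerprint, the sketch below renders a few invented threat attributes as a radar chart with matplotlib. The attribute names and scores are made up; a production tool would derive them from real analytics.

```python
# A hypothetical "visual threat fingerprint": a handful of threat attributes
# rendered as a radar chart so analysts can compare incidents at a glance.
import numpy as np
import matplotlib.pyplot as plt

attributes = ["Lateral movement", "Data exfiltration", "Persistence",
              "Privilege escalation", "C2 traffic"]
scores = [0.8, 0.3, 0.6, 0.9, 0.4]          # normalized 0..1 severity scores

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(attributes), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(attributes, fontsize=8)
ax.set_ylim(0, 1)
ax.set_title("Threat fingerprint: incident #1042")
plt.savefig("fingerprint.png", dpi=150, bbox_inches="tight")
```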

Context

According to Gartner, context-aware security makes security technologies more accurate and enhances their usability and adoption in responding to cyber threats.

If we define context as the information required to answer the questions “what,” “how” and “why,” then context provides the understanding needed to better assess threats and resolve them faster.

The advancements made in data visualization enable organizations to determine when something isn’t right on their network. Context takes this further by helping organizations determine what their network activity is supposed to look like, which is how data visualization and context fit together.

Internet of Things (IoT)

Connected devices have become a hot and desirable trend. ABI Research estimates there will be more than 30 billion wirelessly connected devices by 2020. This machine-to-machine (M2M) conversation offers new opportunities for innovation, generates a plethora of new data streams and also creates new threat vectors.

Today, there is a desire for deeper connectivity in the workplace and home. For the business, IoT provides a range of benefits, from increasing operational efficiency to better managing resources and expanding existing business models.  As for the consumer, IoT assists with safety, health, everyday planning and more.

However, all this connectivity compounds security challenges. It’s one thing for your refrigerator to tell you you’re out of milk, but it’s quite another for hackers to use refrigerators to access your network and steal your data or initiate attacks on other networks.

Consumerization of Security

It’s no longer just about the impact that weak security has on the enterprise but also how it is affecting consumers. More and more people are producing and storing their own data and creating their own private clouds, but are still in the dark about how to properly protect it.

According to cybersecurity expert Peter W. Singer, it’s not just weak passwords, such as “password” and “123456,” that cybercriminals are after. Usually, cybercriminals are after the ability to change a password using information acquired from public records (e.g., a mother’s maiden name). With sophisticated threats looming all over the web, it’s only a matter of time before most consumers face a stiff test in protecting their digital assets.

As consumers become more conscious of security and privacy issues, they will want to know how to prevent their identity from being stolen with just a click of a mouse. Many consumers will turn to the vendors, including retail and banking, for answers, and many vendors will turn to security providers.

Our Opportunities and Challenges

The security landscape faces a future of tremendous growth. More than ever, security underlies all business practices. In a digital economy where connected devices are everything, security is critical and cannot be an afterthought. Security is not something you layer on; instead, we should assume we will face threats and be prepared to respond. While there will be many conversations at RSA on a multitude of other security topics, you can be sure these five themes will be heard loud and clear.

About the Author



John Trobough is president of Narus, Inc., a subsidiary of The Boeing Company (NYSE: BA).  Trobough previously was president of Teleca USA, a leading supplier of software services to the mobile device communications industry and one of the largest global Android commercialization partners in the Open Handset Alliance (OHA). He also held executive positions at Openwave Systems, Sylantro Systems, AT&T and Qwest Communications.

About the Company

Narus, a wholly owned subsidiary of The Boeing Company (NYSE:BA), is a pioneer in cybersecurity data analytics. The company's patented advanced analytics help enterprises, carriers and government customers proactively identify and accelerate the resolution of cyber threats. Using incisive intelligence culled from visual interactive and underlying data analytics, Narus nSystem identifies, predicts and characterizes the most advanced security threats, giving executives the visibility and context they need to make the right security decisions, right now, by letting them know what’s happening, why, and what to do about it. And because Narus solutions are scalable and deployable to any network configuration or business process, Narus boosts the ROI from existing IT investments. Narus is a U.S.-based company, incorporated in Delaware and headquartered in Sunnyvale, Calif. (U.S.A.), with regional offices around the world.

Blueprint Column: Making 5G A Reality

By Alan Carlton, Senior Director Technology Planning for InterDigital

By now we’ve all heard many conversations around 5G, but it seems that everyone is pretty much echoing the same thing—it won’t be here until 2025ish. And I agree. But it also seems that no one is really addressing how it will be developed. What should we expect in the next decade? What needs to be done in order for 5G to be a reality? And which companies will set themselves apart from others as leaders in the space?  


I don’t think the future just suddenly happens, as if we turn a corner and a next generation magically appears. There are always signs and trends along the way that provide directional indicators as to how the future will likely take shape. 5G will be no different from previous generations, whose genesis was seeded in societal challenges and emerging technologies often conceived or identified decades earlier.

5G wireless will be driven by more efficient network architectures to support an Internet of Everything, smarter and newer approaches to spectrum usage, energy-centric designs, and more intelligent strategies for handling content based upon context and user behavior. From this perspective, technologies and trends like the cloud, SDN, NFV, CDN (in the context of a greater move to Information Centric Networking), cognitive radio and millimeter wave all represent interesting first steps on the roadmap to 5G.

5G Requirements and Standards

The requirements of what makes a network 5G are still being discussed; however, the best first stab at such requirements is reflected in the work of the 5GPPP (in Horizon 2020). Some of the requirements suggested thus far include:

  • Providing 1000 times higher capacity and more varied rich services compared to 2010
  • Saving 90 percent energy per service provided
  • Orders of magnitude reductions in latency to support new applications
  • Reducing service creation time from 90 hours to 90 minutes
  • Secure, reliable and dependable: perceived zero downtime for services
  • User controlled privacy

But besides requirements, developing a standardization process for 5G will also have a significant impact in making 5G a reality. While the process has not yet begun, it is very reasonable to say that as an industry we are at the beginning of what might be described as a consensus building phase.

If we reflect on seminal moments in wireless history, they suggest where the next “G” may begin. The first GSM networks rolled out in the early 1990s, but GSM’s origins can be traced back as far as 1981 (and possibly earlier) to the formation of the Groupe Spécial Mobile by CEPT. 3G and 4G share a similar history, with the lead time between conceptualization and realization roughly consistent at the ten-year mark. This makes the formation of 5G-focused industry and academic efforts such as the 5GPPP (in Horizon 2020) and the 5GIC (at the University of Surrey) in 2013/14 particularly interesting.

Assuming history repeats itself, these “events” may foretell when we can realistically expect to see 5G standards and, later, deployed 5G systems.

Components of 5G Technology

5G will bring profound changes to both the network and air interface components of the current wireless systems architecture. On the air interface we see three key tracks:

  • The first track might be called the spectrum sharing and energy efficiency track wherein a new, more sophisticated mechanism of dynamically sharing spectrum between players emerges. Within this new system paradigm and with the proliferation of IoT devices and services, it is quite reasonable to discuss new and more suitable waveforms. 
  • A second track that we see is the move to leveraging higher frequencies, so-called mmW applications in the 60 GHz bands and above. If 4G was the era of discussing the offloading of cellular to WiFi, 5G may well be the time when we talk of offloading WiFi to mmW in new small cell and dynamic backhaul designs.
  • A final air interface track, which perhaps bridges both air interface and network, might be called practical cross-layer design. Context and sensor fusion are key emerging topics today, and I believe enormous performance improvements can be realized through tighter integration of this wealth of information with the operation of the protocols on the air interface.

While truly infinite bandwidth to the end user may remain out of reach even in the 5G timeframe, these mechanisms may make it possible to deliver the perception of infinite bandwidth to the user. By way of example, some R&D labs today have developed a technology called user-adaptive video, which selectively chooses the video streams that should be delivered to an end user based upon the user's behavior in front of the viewing screen. With this technology, bandwidth utilization has improved by 80 percent without any detectable change in the quality of experience perceived by the end user.
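
The sketch below is a hypothetical illustration of that idea: streams the viewer is not focused on are delivered at a reduced bitrate. The attention signal, stream names, and bitrates are invented, and the savings printed will differ from the 80 percent figure quoted above, which comes from the lab system described.

```python
# A hypothetical sketch of "user adaptive video": streams the viewer is not
# focused on are delivered at a reduced bitrate, cutting aggregate bandwidth.
# The attention signal, bitrates, and savings here are invented for illustration.

BITRATES_KBPS = {"high": 4000, "low": 600}

def allocate_bitrates(streams, focused_stream):
    """Give the focused stream full quality; deliver the rest at low bitrate."""
    return {s: BITRATES_KBPS["high" if s == focused_stream else "low"]
            for s in streams}

streams = ["camera_1", "camera_2", "camera_3", "camera_4"]
plan = allocate_bitrates(streams, focused_stream="camera_2")
baseline = len(streams) * BITRATES_KBPS["high"]
adaptive = sum(plan.values())
print(plan)
print(f"bandwidth saved: {100 * (1 - adaptive / baseline):.0f}%")
```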

5G’s Impact on the Network

5G will be shaped by a mash-up (and evolution) of three key emerging technologies: Software Defined Networking, Network Function Virtualization and ever deeper content caching in the network, as exemplified by the slow roll of CDN technology into GGSN equipment today (i.e., the edge of the access network!). This trend will continue deeper into the radio access network and, in conjunction with the other elements, create a perfect storm in which an overhaul of the IP network becomes possible. Information Centric Networking is an approach that has been incubating in academia for many years, and its time may now have come amid these shifting sands.

Overall, the network will flatten further, and a battle over where the intelligence resides, in the cloud or at the network edges, will play out, with the result likely being a compromise between the two. Device-to-device communications in a fully meshed virtual access resource fabric will become commonplace within this vision. The future may well be as much about the crowd as the cloud. If the cloud is about big data, then the crowd will be about small data, and the winners may well be the players who first recognize the value that lies there. Services in this new network will change. A compromise will be struck between the OTT and carrier worlds, and any distinction between the two will disappear. Perhaps more than anything else, 5G must deliver in this key respect.

Benefits and Challenges of 5G

Even the most conservative traffic forecasts through 2020 will challenge the basic capabilities and spectrum allocations of LTE-A and current-generation WiFi. Couple this with the recognition that energy requirements in wireless networks will spiral at the same rate as traffic, add the emergence of 50 or 100 billion devices – the so-called Internet of Everything – all connected to a common infrastructure, and the value of exploring a fifth generation quickly becomes apparent.

The benefits of 5G, at the highest level, will simply be sustaining the wireless vision for our connected societies and economies in a cost-effective and energy-sustainable manner into the next decade and beyond.

However, 5G will likely roll out into a world of considerably changed business models compared with its predecessor generations, and this raises perhaps the greatest uncertainty and challenge. What will these business models look like? It is clear that today’s model, in which carriers finance huge infrastructure investments but reap less of the end-customer rewards, is unsustainable over the longer term. Some level of consolidation will inevitably happen, but 5G will also have to provide a solution for a more equitable sharing of infrastructure investment costs. Just how these new business models take shape, and how this new thinking might drive technological development, remains the biggest open question for 5G.

While the conversations around 5G continue to grow, there is still a long way to go before full-scale deployment. Even so, companies are already doing research and development in areas that might be considered foundational to helping 5G prevail. WiFi in white space is an early embodiment of a new, more efficient approach to spectrum utilization that is highly likely to be adopted in a more mainstream manner in 5G. Beyond this, companies are also exploring new waveforms (the proverbial four-letter acronyms that often characterize a technology generation) that outperform LTE’s OFDM in energy efficiency, in operation within emerging dynamic spectrum-sharing paradigms, and in application to the challenges that the Internet of Things will bring.


About the Author 

Alan Carlton is the senior director of technology planning for InterDigital where he is responsible for the vision, technology roadmap and strategic planning in the areas of mobile devices, networking technologies, applications & software services. One of his primary focus areas is 5G technology research and development. Alan has over 20 years of experience in the wireless industry.
