Showing posts with label Blueprint Column. Show all posts

Tuesday, February 1, 2022

Blueprint: An insider’s view on the do’s, don’ts and deal breakers of SaaS

Mark Bunn, Senior Vice President, SaaS Business Operations, Cloud and Network Services, Nokia

Launching a new business model isn’t for the faint of heart, particularly when the change disrupts the status quo. Software-as-a-Service for communication service providers promises to change the very foundation of how our industry does business today. Moving from a legacy of customized, on-premises technology to a cloud-native environment where everything is managed by the software vendor is not only a change in mindset; it also changes the way CSPs have managed their businesses since the industry began in the 1800s.

The tipping point where only the strong survive 

Are CSPs ready to step into another chapter of telecommunications history? It’s only a matter of time before we reach a tipping point. SaaS will transform the way CSPs consume software. 

What’s to gain? Faster time to market, faster deployment of systems and new capabilities. Done right, SaaS eliminates risky, cumbersome upgrades, delivers significant savings on total cost of ownership (TCO) and reduces worry. Adopting SaaS will vastly improve the time-to-value that CSPs can realize by having on-demand access to services. 

Software-as-a-Service for CSPs can usher in a new era, reducing business friction to a level that makes mass adoption and value creation possible. Of course, any major shift brings cultural, operational and technology changes and it’s important to pay attention to lessons we can learn along the way.

Five critical assertions

1. Security and compliance are non-negotiable. 

Because security breaches can be devastating, it is critical to take every reasonable action while providing a fully functional and highly available service. Security compliance is table stakes, and it takes significant time, effort and cost to achieve.

2. Architecture drives profitability.

In a cloud native environment, consider scalability at both ends of the spectrum, where cost control becomes most challenging. Keep a rein on technical debt incurred as a byproduct of time-to-market priority decisions. Automate, automate and automate again, relying on infrastructure-as-code instead of manually applying production changes. It keeps the costs of SaaS operations flat while growing the SaaS subscriber base. Finally, and most important, diligently manage in software the reliability associated with the deployment architecture.
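As a loose illustration of the infrastructure-as-code idea (the resource names here are invented for the example, not Nokia's actual tooling), a declarative reconciler compares desired state held in version control against observed state and computes only the actions needed to converge, instead of an operator applying changes by hand:

```python
# Minimal sketch of declarative infrastructure-as-code reconciliation.
# Desired state lives in version control; a reconciler derives the diff
# rather than an operator making manual production changes.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the list of actions needed to converge observed -> desired."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"api-gateway": {"replicas": 3}, "billing": {"replicas": 2}}
observed = {"api-gateway": {"replicas": 1}, "legacy-batch": {"replicas": 1}}
print(reconcile(desired, observed))
```

Because the same reconciliation runs for one tenant or a thousand, operations cost stays flat as the subscriber base grows.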

A key difference between SaaS and other forms of hosted services is that the software, not human beings, is responsible for managing the SaaS services. For example, if we had production SaaS customers on our SaaS Delivery Framework today, we would have expected little measurable service impact from the AWS outage that occurred the week of December 6. The combination of our SaaS Delivery Framework architecture and Site Reliability Engineering early detection system provides a shield against this type of service disruption. In our future end-state, the SaaS Delivery Framework will enable us to move workloads between hyperscaler platforms to mitigate cloud outages like this one.

3. Embrace the fact that we are the IT department.

The buck stops with the SaaS delivery team, as the responsibility for operations, administration and management moves to the SaaS service provider. The SaaS delivery team provides the equivalent of a public utility service with responsibility for infrastructure, security, patching, updates, and data management including backup, archival and recovery. 

4. The commercial risk is distributed. 

For a mature SaaS service, there is no upfront cost for the buyer and no upfront revenue for the seller. On- and off-boarding is expected to be easy. An exceptional offering and ongoing engagement with the customer are critical for retention.

The SaaS business model is cost-effective. The customer can reduce IT expenses related to the management of personnel, hardware, and software. With a pay-as-you-go, pay-as-you-grow subscription, costs for the buyer and recurring revenue for the seller are better managed by providing commercial scaling based on actual need.

Updates to customers are provided automatically and new features can be accessed immediately. In short, buyer and seller alike reap efficiency and financial benefits from SaaS. 

5. The customer can no longer “always be right”.

With SaaS offerings, we manage customers as a group, not as individuals. SaaS at commercial scale requires the SaaS service provider to maintain full control of the lifecycle of the service. As a result, customers don’t dictate release and upgrade schedules.

Are we there yet?

Now that we’ve laid the foundation with lessons learned, let’s look at clues for SaaS buyers that the service offering has yet to reach a mature state. 

Hosted private cloud versus SaaS

A SaaS buyer would expect that the installation process is fully automated. If professional services with fees are essential to get started or the time between confirming an order and deployment is measured in weeks, it’s likely a hosted private cloud and not SaaS. A SaaS offering includes standard support with the subscription price. Support (or “CARE”) isn’t sold as a separate add-on to the SaaS service. 

Extensibility is measured by the ability to tailor a system and the level of effort needed to implement and maintain the extension. High extensibility leading to extreme customization and, subsequently, increased security vulnerability risk, is inconsistent with a SaaS model. These characteristics are commonplace in hosted private cloud offerings.

Signs it might not be cloud native 

Forced downtime and long, scheduled maintenance windows indicate software that isn’t cloud native. Applications that don’t auto-recover are not mature cloud native applications even though they may have incorporated cloud native elements.

It’s closer to an on-premises model

More than a handful of product codes per service, or overly complex pricing, indicates an on-premises commercial model. The absence of proactive security penetration testing is also a tell-tale sign. Simplified pricing models and sophisticated security validation are fundamental characteristics of SaaS.

Walk this way to full maturity

Delivering SaaS successfully depends on building a strong foundation for entering the marketplace. While a true SaaS offering needs many ingredients before it’s considered fully mature, that doesn’t mean a provider can’t be in position and ready to sell in the meantime.

You can offer direction on standard industry security compliance and increasingly provide self-service capabilities to tenants, including ordering, billing, care, pay-as-you-go pricing, and service health dashboards.

There’s a lot more than meets the eye to a true SaaS offering. Getting from where we are today to maturity promises to be the journey of a lifetime. 

Sunday, December 5, 2021

Blueprint: To Unlock the 5G Future, Service Providers Need to Get Active

by Steve Douglas, Head of Market Strategy, Spirent Communications

Imagine a big storm just passed through your neighborhood, and you’re worried it might have damaged your roof. Do you:

a. Go investigate how the roof is holding up 

b. Wait until the next storm to see if water starts dripping into your bedroom

Most of us would choose option A. It just makes sense to try to determine if something is broken ahead of time, before it fails and creates a bigger, more expensive problem. And yet, that’s not the approach most service providers use today when monitoring their networks.

Modern active monitoring technologies let operators poke and prod their networks in a variety of ways to spot potential problems before they affect customers. But only a fraction of service providers actually use them across the end-to-end network. Instead, most rely on the same techniques they’ve used for years: passively collecting telemetry data, analyzing it over time, detecting many problems only when someone calls to complain. 

This has never been an ideal situation, but in the near future, it will be an impossible one. As service providers progress with 5G rollouts, passive monitoring strategies fall apart. That means active testing and assurance is no longer optional. It’s becoming a mission-critical requirement.

 Problems with Traditional Testing

Historically, most operators have relied on passive monitoring to assess network health, isolate faults, and ensure they live up to their service-level agreements (SLAs). That is, they deploy passive probes throughout their environment to capture network traffic data, dump that data into huge data lakes, and run analytics on it to identify anomalies. Active monitoring takes a more proactive approach. Instead of waiting for statistical analysis to reveal issues over weeks or months, it continually injects synthetic traffic into the network to measure performance in real time. 
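As a rough sketch of the active approach (illustrative only; real agents generate full voice, video, and web-browsing sessions), an agent injects a synthetic transaction, times it, and evaluates the result against a threshold immediately, rather than waiting for passive statistics to accumulate:

```python
import time

# Sketch of an active test agent: inject a synthetic transaction,
# time it, and flag an SLA breach in real time instead of mining
# passive telemetry after the fact.

def run_probe(transaction, sla_ms: float) -> dict:
    """Execute one synthetic transaction and evaluate it against the SLA."""
    start = time.monotonic()
    ok = transaction()                 # e.g. a test call, DNS lookup, HTTP GET
    latency_ms = (time.monotonic() - start) * 1000.0
    return {
        "ok": ok,
        "latency_ms": latency_ms,
        "sla_breach": (not ok) or latency_ms > sla_ms,
    }

# A stand-in transaction; a real agent would emulate a call or web session.
result = run_probe(lambda: True, sla_ms=150.0)
print(result["sla_breach"])
```

The key property is that the measurement exists the moment the probe runs; there is no waiting for organic traffic to exercise the path.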

Active monitoring is not a new concept. Many operators use it today in transport networks, where they’ve been seeking to introduce self-healing and automation capabilities. In the heart of most networks though, the passive approach still dominates. Now, that’s starting to change in response to five big trends:

  • Cloudification: To enable more agility and automation, operators are implementing more of the network as software, hosted in cloud environments. As a result, network elements are no longer static, rigid functions. They’re dynamic pieces of software that can be continually spun up and moved across cloud environments.  
  • Openness: The 5G specification mandates open interfaces. This allows operators to work with new vendors and open-source technologies in ways that weren’t possible before. But it also means that, instead of getting software updates a couple times a year from one or two well-known suppliers, you can now expect constant updates from dozens of vendors. 
  • Automation: Legacy manual approaches can’t keep up with the volume and velocity of change in cloudified networks. As complexity and costs grow, operators need to automate more of their operations and enable self-healing, self-optimizing networks. 
  • Artificial intelligence (AI): To enable true “self-driving networks,” you can’t stop and wait for human beings to make decisions. So, AI is playing a larger role in network operations. 
  • Shift to work from home: Businesses were already seeing their workforces get more distributed, but COVID-19 kicked this trend into overdrive. Suddenly, operators need to deliver business-quality network experiences anywhere and everywhere.

As these trends converge, network traffic patterns become incredibly dynamic, elastic, and hard to predict. Just understanding what’s happening out there, much less isolating the source of issues, gets enormously difficult—especially if you’re relying on passive probes in static locations. 

Getting Active 

To navigate these issues and position themselves to succeed in the 5G marketplace, service providers are now extending active testing and assurance across more of their networks. Active monitoring involves three basic components: 

  • Active test agents—lightweight software probes that can run on any cloud compute platform and be spun up anywhere in the network, even on end-user devices
  • Large testing libraries to cover a variety of simulations—voice calls, video sessions, web browsing, low-latency services, and more
  • Intelligent automation, so the environment can not only run tests in the background continuously but can make smart decisions about which tests to run and where, without human input 

By adopting active testing and assurance, you can:

  • Monitor more proactively: With active testing always working in the background, you can continually probe your environment and spot most problems before they affect customers or SLAs. 
  • Accelerate change management: Active testing can become a default step when provisioning new services or network functions (NFs), immediately validating their performance as soon as they’re deployed. But it’s also valuable for contending with nonstop multivendor software updates. Now, you can rapidly test and validate updates in the live network, instead of having to wait weeks or months for lab testing. 
  • Assure SLAs: A growing number of services use hybrid environments, where parts of the service depend on cloud providers or other third parties. How do you guarantee that enterprise customers get the performance they’re paying for when you don’t fully own the service delivery infrastructure? The only way is to continually test the end-to-end service. 
  • Reduce mean time to repair (MTTR): If you’re relying on passive monitoring, you have to capture enough statistical data to feel confident that an anomaly signifies a real problem. Getting to that point takes time—especially if you’re waiting for organic traffic to recreate the conditions that caused the issue. Too often, while you’re waiting, customers are already calling to complain. With active monitoring, you can recreate any network conditions synthetically. And when you identify issues, you can isolate their source more quickly. 

In early active testing deployments, we’ve seen operators reduce MTTR by close to 75% through rapid fault isolation. Just as important, they’re seeing trouble tickets fall by nearly 90% through proactive monitoring—meaning they’re fixing most issues before they ever impact customers.

Preparing for the Future

Active testing can be enormously useful in today’s telecommunications networks. But if you want to achieve your business objectives in the coming years for 5G, it’s absolutely essential. Whether you’re embracing DevOps software methodologies to accelerate innovation, offering low-latency enterprise services under SLAs, or driving down costs and complexity with self-driving networks, you can’t do any of it with passive monitoring. It’s time for active assurance. 

Thursday, August 27, 2020

Perspective: 3 Ways COVID will shape our networks for years to come

by Julius Francis, Juniper Networks

The COVID-19 pandemic shifted our physical world into a virtual one nearly overnight, and every day we are reminded of the critical role the network plays in maintaining continuity in our lives. In today’s hyperconnected world, where a strong internet connection is your lifeline to the world outside, the importance of the network has never been clearer. As enterprises continue to double down on efforts to manage a remote workforce, service providers are being tasked with the enormous burden of delivering real-time computing at peak traffic levels while also ensuring a positive experience for the end user.

This traffic isn’t occurring just in random spikes but in sustained plateaus, meaning the onus is on service providers to invest in network architectures that can pass this pressure test. And while it’s likely that traffic patterns will return to normal once we emerge from the current situation, the pandemic will spur long-term changes in how networks are built and managed moving forward.

Digital Transformation Must Also Apply to the Network

Legacy systems and processes proved to be a major impediment to business continuity when we first shifted to remote work. As it stands, much of IT process and knowledge is kept in people’s minds and shared only among a handful of individuals within a company, rather than being tracked in a formalized process. That is no longer acceptable in this new world of remote work. There’s been a lot of chatter about how COVID has made digital transformation a business imperative, and that also applies to network management. And while networks aren’t top of mind when investing in digital transformation, delivering seamless connectivity cannot be an afterthought; it’s more important now than ever.

 The rapid disruption in business and IT continuity has forced service providers to audit their processes and overall approaches to network management, revealing the need for more autonomous operations and highly agile networks. Because networks were never designed or prepared for the rigors of fully digital operations, service providers should take this as an opportunity to overhaul and optimize each end of the network with automated operations and intelligent monitoring tools.

AI and Machine Learning Can’t Be an Afterthought

This is a watershed moment for companies to become more data-driven. While data-driven networking has been happening for quite some time, the current situation is accelerating this shift. And with this heightened emphasis on data, we can expect AI, automation, and machine learning technologies to take on an even greater role in service provider network management.

For example, since the shift to remote work, AI has proven to be an important tool for service providers to analyze data and proactively identify network issues before they reach the end user. Service providers can also overhaul operations with automation to make networks ‘zero-touch’ and deliver an uninterrupted experience for end users by identifying problems before they begin to disrupt connectivity.

Shifting to End-to-End Network Security

The pandemic has proven that bad actors will take advantage of any crisis for their own gain. That’s why it’s more important than ever for service providers to deploy automation technologies that work faster and more efficiently than humans to secure thousands of endpoints across the network. After all, for service provider networks, security must be ingrained everywhere – in the protocols, the systems, the elements, the provisioning, and in the business surrounding the network – to ensure each point of entry is secure.

 To do this, service providers must take a holistic, end-to-end security approach, layering on encryption and automation, to ensure that networks are protected all the way through.

A Future Centered Around Innovation

Rather than hunkering down to uphold the status quo, service providers should be looking ahead and using this crisis to improve their network architectures and prepare for the next unpredictable problem. For many, preparing for the unknown means finally embracing the shift to Cloud, 5G, and AI-driven networks – real business value can be derived when service providers deploy all three of these technologies collectively.

While each provider’s transformation journey into the new era of cloud, 5G and AI is different, success with all three hinges on investments in network architecture, operational economics, and services. This approach will usher in the next generation of services in this highly uncertain time – delivering massive speeds, deploying autonomous operations to manage huge data sets and keeping networks secure end-to-end. Together, cloud, 5G, and AI will enable a quantum leap forward in scale and performance.

Julius Francis is Senior Director of Product Marketing at Juniper Networks. 

Thursday, October 24, 2019

Blueprint column: Stop the intruders at the door!

by Prayson Pate, CTO, Edge Cloud, ADVA

Security is one of the biggest concerns about cloud computing. And securing the cloud means stopping intruders at the door by securing its onramp – the edge. How can edge cloud be securely deployed, automatically, at scale, over the public internet?

The bad news is that it’s impossible to be 100% secure, especially when you bring internet threats into the mix.

The good news is that we can make it so difficult for intruders that they move on to easier targets. And we can ensure that we contain and limit the damage if they do get in.

To achieve that requires an automated and layered approach. Automation ensures that policies are up to date, passwords and keys are rotated, and patches and updates are applied. Layering means that breaching one barrier does not give the intruder the keys to the kingdom. Finally, security must be designed in – not tacked on as an afterthought.
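The "passwords and keys are rotated" part of that automation can be sketched as a scheduled policy check (the rotation window, key names, and function are illustrative, not a specific product's behavior): credentials carry an issue date, and a job flags anything past its window instead of relying on an operator to remember.

```python
from datetime import datetime, timedelta

# Sketch of automated key-rotation policy: a scheduled job flags any
# credential older than the rotation window so it gets rotated
# mechanically, not from memory.

ROTATION_WINDOW = timedelta(days=90)   # illustrative policy

def keys_due_for_rotation(keys: dict, now: datetime) -> list:
    """Return the names of keys older than the rotation window."""
    return [name for name, issued in keys.items()
            if now - issued >= ROTATION_WINDOW]

now = datetime(2019, 10, 24)
keys = {
    "mgmt-tunnel": datetime(2019, 5, 1),   # ~176 days old -> rotate
    "ztp-server":  datetime(2019, 9, 1),   # ~53 days old  -> keep
}
print(keys_due_for_rotation(keys, now))
```

The same pattern applies to patch levels and policy versions: encode the rule once, then let the automation enforce it everywhere.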

Let’s take a closer look at what edge cloud is, and how we can build and deliver it, securely and at scale.

Defining and building the edge cloud

Before we continue with the security discussion, let’s talk about what we mean by edge cloud.

Edge cloud is the delivery of cloud resources (compute, networking, and storage) to the perimeter of the network and the usage of those resources both for standard compute loads (micro-cloud) and for communications infrastructure (uCPE, SD-WAN, MEC, etc.).

For maximum utility, we must build edge cloud in a manner consistent with public cloud. For many applications that means using standard open source components such as Linux, KVM and OpenStack, and supporting both virtual machines and containers.

One of the knocks against OpenStack is its heavy footprint. A standard data center deployment for OpenStack includes one or more servers for the OpenStack controller, with OpenStack agents running on each of the managed nodes.

It’s possible to optimize this model for edge cloud by slimming down the OpenStack controller and running it on the same node as the managed resources. In this model, all the cloud resources – compute, storage, networking and control – reside in the same physical device. In other words, it’s a “cloud in a box.” This is a great model for edge cloud, and it gives us the benefits of a standard cloud model in a small footprint.

Security out of the box

Security at an edge cloud starts when the hosting device or server is installed and initialized. We believe that the best way to accomplish this is with secure zero-touch provisioning (ZTP) of the device over public IP.

The process starts when an unconfigured server is delivered to an end user. Separately, the service provider sends a digital key to the end user. The end user powers up the server and enters the digital key. The edge cloud software builds a secure tunnel from the customer site to the ZTP server, and delivers the security key to identify and authenticate the edge cloud deployment. This step is essential to prevent unauthorized access if the hosting server is delivered to the wrong location. At that point, the site-specific configuration can be applied using the secure tunnel.
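The authentication step in that flow can be sketched as a keyed-digest check (a simplified stand-in: real ZTP typically uses certificate-based tunnels, and the secret, site IDs, and function names here are invented for illustration). The ZTP server derives the expected key for a site, and the device must present a match before any site-specific configuration is released:

```python
import hmac
import hashlib

# Simplified sketch of the ZTP authentication step: the server derives
# the expected digest for a site from a shared secret, and the edge
# device must present a matching key before configuration is applied.

SECRET = b"provider-provisioning-secret"   # illustrative; never hard-code in practice

def expected_key(site_id: str) -> str:
    return hmac.new(SECRET, site_id.encode(), hashlib.sha256).hexdigest()

def authenticate(site_id: str, presented_key: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(expected_key(site_id), presented_key)

good = authenticate("site-042", expected_key("site-042"))
bad = authenticate("site-042", expected_key("site-999"))
print(good, bad)
```

A key bound to the site is what prevents a server delivered to the wrong location from coming up with someone else's configuration.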

The secure tunnel doesn’t go away once the ZTP process completes. The management and orchestration (MANO) software uses the management channel for ongoing control and monitoring of the edge cloud. This approach provides security even when the connectivity is over public IP.

Security on the edge cloud

One possible drawback to the distributed compute resources and interfaces in an edge cloud model is an increased attack surface for hackers. We must defend edge cloud nodes with layered security at the device, including:
• Application layer – software-based encryption of data plane traffic at Layers 2, 3, or 4 as part of platform, with the addition of third-party firewall/UTM as a part of the service chain
• Management layer – two-factor authentication at customer site with encryption of management and user tunnels
• Virtualization layer – safeguard against VM escape (protecting one VM from another, and prevention of rogue management system connectivity to hypervisor) and VNF attestation via checksum validation
• Network layer – Modern encryption along with Layer 2 and Layer 3 protocols and micro-segmentation to separate management traffic from user traffic, and to protect both

Security of the management software

Effective automation of edge cloud deployments requires sophisticated MANO software, including the ZTP machinery. All of this software must be able to communicate with the managed edge cloud nodes, and do so securely. This means the use of modern security gateways to both protect the MANO software, as well as to provide the secure management tunnels for connectivity.

But that’s not enough. The MANO software should support scalable deployments and tenancy. Scalability should be built using modern techniques so that tools like load balancers can be used to support scaleout. Tenancy is a useful tool to separate customers or regions and to contain security breaches.

Security is an ongoing process

Hackers aren’t standing still, and neither can we. We must perform ongoing security scans of the software to ensure that vulnerabilities are not introduced. We must also monitor the open source distributions and apply patches as needed. A complete model would include:
• Automated source code verification by tools such as Protecode and Black Duck
• Automated functional verification by tools such as Nessus and OpenSCAP
• Monitoring of vulnerabilities within open source components such as Linux and OpenStack
• Following recommendations from the OpenStack Security Group (OSSG) to identify security vulnerabilities and required patches
• Application of patches and updates as needed

Build out the cloud, but secure it

The move to the cloud means embracing multi-cloud models, and that should include edge cloud deployments to optimize application placement. But ensuring security at those distributed edge cloud nodes means applying security in an automated, layered approach. There are tools and methods to realize this approach, but it takes discipline and dedication to do so.

Monday, October 13, 2014

Blueprint: SDN's Impact on Data Center Power/Cooling Costs

by Jeff Klaus, General Manager of DCM Solutions, Intel

The growing interest in software-defined networking (SDN) is understandable. Compared to traditional static networking approaches, the inherent flexibility of SDN complements highly virtualized systems and environments that can expand or contract in an efficient, business-oriented way. That said, flexibility is not the main driver behind SDN adoption. Early adopters and industry watchers cite cost as a primary motivation.

SDN certainly offers great potential for simplifying network configuration and management, and raising the overall level of automation. However, SDN will also introduce profound changes to the data center. Reconfiguring networks on the fly introduces fluid conditions within the data center.

How will the more dynamic infrastructures impact critical data center resources – power and cooling?

In the past, 20 to 40 percent of data center resources were typically idle at any given time and yet still drawing power and dissipating heat. As energy costs have risen over the years, data centers have had to pay more attention to this waste and look for ways to keep the utility bills within budget. For example, many data centers have bumped up the thermostat to save on cooling costs.

These types of easy fixes, however, quickly fall short in the data centers associated with highly dynamic infrastructures. As network configurations change, so do the workloads on the servers, and network optimization must therefore take into consideration the data center impact.

Modern energy management solutions equip data center managers to solve this problem. They make it possible to see the big picture for energy use in the data center, even in environments that are continuously changing. Holistic in nature, the best-in-class solutions automate the real-time gathering of power levels throughout the data center as well as server inlet temperatures for fine-grained visibility of both energy and temperature. This information is provided by today’s data center equipment, and the energy management solutions make it possible to turn this information into cost-effective management practices.

The energy management solutions can also give IT intuitive, graphical views of both real-time and historical data. The visual maps make it easy to identify and understand the thermal zones and energy usage patterns for a row or group of racks within one or multiple data center sites.

Collecting and analyzing this information makes it possible to evolve very proactive practices for data center and infrastructure management. For example, hot spots can be identified early, before they damage equipment or disrupt services. Logged data can be used to optimize rack configurations and server provisioning in response to network changes or for capacity planning.
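As a toy illustration of that early-warning logic (the threshold and rack names are invented for the example; real tools work from the telemetry described above), flagging a hot spot is a comparison of each rack's inlet temperature against an alert threshold set well below the damage point:

```python
# Toy sketch of early hot-spot detection: compare per-rack inlet
# temperatures, as gathered by energy management tooling, against an
# alert threshold well below the level that damages equipment.

ALERT_C = 27.0   # illustrative inlet-temperature ceiling

def hot_spots(inlet_temps_c: dict) -> list:
    """Return racks whose inlet temperature exceeds the alert threshold."""
    return sorted(rack for rack, t in inlet_temps_c.items() if t > ALERT_C)

samples = {"rack-a1": 24.5, "rack-a2": 28.1, "rack-b1": 26.9, "rack-b2": 29.3}
print(hot_spots(samples))
```

Run continuously against logged data, the same check also reveals which racks trend hot over time, which feeds capacity planning.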

Some of the same solutions that automate monitoring can also introduce control features. Server power capping can be introduced to ensure that any workload shifts do not result in harmful power spikes. Power thresholds make it possible to identify and adjust conditions to extend the life of the infrastructure.

To control server performance and quality of service, advanced energy management solutions also make it possible to balance power and server processor operating frequencies. The combination of power capping and frequency adjustments gives data center managers the ability to intelligently control and automate the allocation of server assets within a dynamic environment.
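A minimal sketch of that control loop (the numbers and frequency ladder are made up; real solutions act through the platform's management interfaces) steps the allowed processor frequency down when measured power exceeds the cap, and back up when there is headroom:

```python
# Minimal sketch of a power-capping control loop: throttle processor
# frequency when measured power exceeds the cap, restore it when there
# is comfortable headroom below the cap.

FREQ_STEPS_MHZ = [1200, 1600, 2000, 2400]   # illustrative frequency ladder

def next_frequency(current_mhz: int, measured_w: float, cap_w: float) -> int:
    i = FREQ_STEPS_MHZ.index(current_mhz)
    if measured_w > cap_w and i > 0:
        return FREQ_STEPS_MHZ[i - 1]        # throttle to shed power
    if measured_w < 0.9 * cap_w and i < len(FREQ_STEPS_MHZ) - 1:
        return FREQ_STEPS_MHZ[i + 1]        # restore performance headroom
    return current_mhz

print(next_frequency(2400, measured_w=460.0, cap_w=450.0))  # over cap: step down
print(next_frequency(1200, measured_w=300.0, cap_w=450.0))  # headroom: step up
```

The dead band between 90 percent of the cap and the cap itself keeps the controller from oscillating when power hovers near the limit.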

Early deployments are validating the potential for SDN, but data center managers should take time to consider the indirect and direct impacts of this or any disruptive technology so that expectations can be set accordingly. SDN is just one trend that puts more pressure on IT to be able to do more with less.

Management expects to see costs go down; users expect to see 100% uptime for the services they need to do their jobs. More than ever, IT needs the right tools to oversee the resources they are being asked to deploy and configure more rapidly. They need to know the impacts of any change on the resource allocations within the data center.

IT teams planning for SDN must also consider the increasing regulations and availability restrictions relating to energy in various locations and regions. Some utility companies are already unable to meet the service levels required by some data centers, regardless of price. Over-provisioning can no longer be considered a practical safety net for new deployments.

Regular evaluations of the energy situation in the data center should be a standard practice for technology planning. Holistic energy management solutions give data center managers many affordable tools for those efforts. Today’s challenge is to accurately assess technology trends before any pilot testing begins, and leverage an energy management solution that can minimize the pain points of any new technology project such as SDN.

About the Author

Jeff Klaus is the general manager of Data Center Manager (DCM) Solutions at Intel Corporation, where he has managed various groups for more than 13 years. He leads a global team that is pioneering data center infrastructure management (DCIM) solutions. A graduate of Boston College, Klaus also holds an MBA from Boston University. For more information, visit

Wednesday, October 8, 2014

Blueprint Column: Women in Engineering - Changing the Odds

by Scott McGregor, President and Chief Executive Officer, Broadcom Corporation

Some of the greatest successes in the modern economy come from finding multibillion-dollar industries that are ripe for disruption. Engineers spend a lot of time looking for such opportunities. And yet, one of the ripest targets for disruption is right before their eyes: the engineering industry itself.

Here’s how to disrupt it: Increase the number of women in the engineering profession.

Of all the science, technology, engineering, and applied mathematics (STEM) professions in which women are under-represented, the disparity is greatest in engineering. According to the U.S. Bureau of Labor Statistics, in 2012 women comprised 45 percent of scientists, 25 percent of mathematicians, 22 percent of technology workers, but only 10 percent of engineers. Today, there are plenty of young women taking the required high school courses to pursue an engineering degree, including AP calculus and physics. Yet in engineering and computer sciences professions, where workforce demand and salaries are among the highest, the percentages of women earning degrees continues to lag.

By the time young women get to college, only three percent of them will declare a major in engineering, and those who do are least likely to choose electrical and electronic engineering, according to the Department of Education. Female representation declines further at the graduate level and yet again in the transition to the workplace.

The fundamental issue to be addressed is gender bias. So where does it start and what can we do about it?

Middle school is a great place to start, where girls’ achievements and interest are shaped by stereotypes (“boys are better at math and science than girls”), biases (“math is hard”) and cultural beliefs (“engineering is a profession for men”). Researchers discovered that even subtle references to these gender stereotypes have been found to reduce girls’ interest in science and math.

In a landmark paper, “Why So Few? Women in Science, Technology, Engineering, and Mathematics,” researchers concluded “The answer lies in our perceptions and unconscious beliefs about gender in mathematics and science.” In other words, we can’t solve the gender disparity problem without first recognizing our own biases that are formed early and all too often persist into the workplace. The good news is that the negative impact of those faulty perceptions can be lessened, just by becoming aware of them.

We must celebrate the achievements of girls and women in STEM professions and provide young hopefuls with role models and mentors along the path from school to the workplace. We must encourage girls and young women to pursue engineering careers through interactive programs in our local schools and community.

We can enhance STEM curricula in schools, beginning in elementary school with hands-on projects that help girls build confidence in spatial skills, an area where girls underperform boys. Even playing with construction kits and toys can help girls build confidence in the spatial skills they will need later on to succeed at engineering. In middle school, 3-D computer games and digital sketching tools can reinforce that confidence. As their confidence increases, girls are less likely to fall back on the traditional stereotypes about gender, and more likely to feel like they “belong” in STEM courses.

Early exposure and encouragement in mathematics is also key. Studies have shown that girls who take calculus in high school are three times more likely to pursue STEM careers, including engineering.

Also crucial is creating a workplace culture that is welcoming and supportive of women. Enlightened work-life policies are important, but so too are active efforts to attract, promote, and retain women. Seminars, luncheons, peer coaching, scholarships, networking gatherings, and continuing education programs have been shown to be effective in turning talented women engineers into talented workplace leaders.

By spearheading women’s leadership programs and providing a range of professional development opportunities geared specifically for women, we can encourage women through all levels of their career. These women, in turn, can take that learning and inspiration back out to the community – which will help attract more young women into engineering as they see more female role models. It’s a virtuous cycle.

The “Why So Few?” report was funded by a number of organizations, including the National Science Foundation, and was published by the American Association of University Women. Here’s a link to the paper.

By the way, for anyone who thinks they can’t possibly have biases that impact the perception of women, or other stereotypes, check out this study at Harvard and take one or more of the tests.

About the Author

Scott McGregor serves as Broadcom's President and Chief Executive Officer. In this role, he is responsible for guiding the vision and direction for the company's corporate strategy. Since he joined Broadcom in 2005, the company has expanded from $2.40 billion in revenue to $8.31 billion in 2013 revenue. Broadcom's geographic footprint has grown from 13 countries in 2005 to 25, and its patent portfolio has expanded from 4,800 U.S. and foreign patents and applications to more than 20,850.

McGregor joined Broadcom from Philips Semiconductors (now NXP Semiconductors), where he served as President and CEO from 2001 to 2004. He joined Philips in 1998 and rose through a series of leadership positions. McGregor received a B.A. in Psychology and an M.S. in Computer Science and Computer Engineering from Stanford University. He serves on the board of Ingram Micro and on the Engineering Advisory Council for Stanford University, and is President of the Broadcom Foundation. Most recently, McGregor received UCLA's 2013 IS Executive Leadership Award.

Thursday, September 18, 2014

Blueprint: Carriers Set Their Sights on 1Gbps Rollouts with G.fast

By Kourosh Amiri, VP Marketing, Ikanos, Inc.

Demand for high-speed broadband access by consumers has never been more intense than it is today.  Rapidly increasing numbers of connected devices inside the home and the adoption of higher-resolution (4K and 8K) television are just the tip of the iceberg.  Home automation, remote patient monitoring, and multi-player gaming – among countless other applications – are contributing to an Internet of Things phenomenon that promises to drive bandwidth demands through the roof.

And carriers and ISPs are lining up to get a share of the prize, wondering how their current broadband technologies can evolve to meet the increasing demand. In the case of telcos, for example, even with the potential of vectored VDSL2 to deliver aggregate bandwidths of up to 300 Mbps to consumers, competitive pressure continues to mount to deliver quantum increases in bandwidth. To that end, G.fast, with its promise of up to 1Gbps service to each household, could roll out in initial trials in as little as 12-18 months.

G.fast, a concept proposed in 2012 that achieved consent by the ITU-T standards body in December 2013, represents the next performance node in the evolution of xDSL. G.fast is defined to support up to 1Gbps over short (i.e., less than 100-meter) copper loops, and is designed to address gigabit broadband connectivity on hybrid fiber-copper carrier networks. Service deployments are targeted from fiber-fed equipment located at a distribution point, such as a telephone pole, a pedestal, or inside an MDU (Multi-Dwelling Unit), serving customers on drop wires that span a distance of up to 100 meters.

G.fast – in the same way as existing ADSL and VDSL – takes advantage of fiber already deployed to cabinets or other nodes. The proliferation of G.fast will come as service providers push fiber closer to homes, where a single distribution point will typically serve from 8 to 16 homes.

Why the interest in G.fast? Even with vectored VDSL2’s ability to deliver hundreds of Mbps for Fiber to the Node (FTTN) applications (150Mbps aggregate performance on a 500-meter loop demonstrated in many carrier lab trials worldwide by Ikanos, and 300Mbps for shorter loops), the explosion in the number of devices per home and in new services and applications is expected to drive strong interest in accelerating FTTdp with G.fast to market. And, for carriers, the need to spur adoption of G.fast by their subscribers will play a lead role, keeping them in lockstep with competing services over cable and FTTH in the race to 1Gbps residential broadband connectivity.
Fortunately, much of the work in preparing the G.fast standard has already been completed, and chip suppliers have the consent of the ITU-T to start the development of G.fast chips, with some already making public announcements about their upcoming products. (For example, Ikanos in October 2013 announced an architecture and development platform for G.fast.)

The infrastructure for G.fast is a variant of the infrastructure for VDSL2. The primary difference is the length of the copper pair that enters the residence. G.fast will require shorter copper loops than those used with VDSL2 in order to achieve the desired gigabit performance. That, in turn, means that service providers must drive fiber closer to homes. In addition, carriers will need to ensure that the media converters at each distribution point are backward-compatible with VDSL2. Why? To enable customers not yet subscribing to G.fast service (when it becomes available) to continue to receive VDSL2 service through the G.fast transceivers in the network. This is a critical requirement for carriers looking to offer new services to their existing subscriber base, and a practical consideration, as not all subscribers will choose to upgrade to these new services at the same time. Without this VDSL2 backward-compatibility feature (also known as VDSL2 fallback), the transition to new services may create significant problems for carriers, including additional CAPEX and service disruptions.
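The VDSL2 fallback requirement can be sketched in a few lines of code. Everything below – the `Mode` enum, `select_mode`, and the 100-meter default – is a hypothetical illustration rather than any vendor or ITU-T API; the point is simply that a distribution point unit must keep serving VDSL2 to subscribers who have not yet upgraded:

```python
# Illustrative sketch of per-port mode selection at a fiber-fed
# distribution point unit (DPU). Names and defaults are hypothetical.

from enum import Enum

class Mode(Enum):
    GFAST = "G.fast"
    VDSL2 = "VDSL2"

def select_mode(cpe_supports_gfast: bool, loop_length_m: float,
                max_gfast_loop_m: float = 100.0) -> Mode:
    """Choose the line mode for one copper drop.

    G.fast is used only when the customer's CPE supports it AND the
    drop is within the short-loop reach G.fast is designed for;
    otherwise the port falls back to VDSL2 so existing service
    continues uninterrupted.
    """
    if cpe_supports_gfast and loop_length_m <= max_gfast_loop_m:
        return Mode.GFAST
    return Mode.VDSL2

# A subscriber still on a VDSL2 modem keeps working:
assert select_mode(False, 60.0) is Mode.VDSL2
# An upgraded subscriber on a 60 m drop gets G.fast:
assert select_mode(True, 60.0) is Mode.GFAST
```

Because fallback is decided per port, neighbors on the same DPU can migrate at different times – the practical consideration the article highlights.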

As the adoption of G.fast nears, customer pre-qualification and self-installation are needed to ensure a smooth, cost-effective migration to G.fast. Existing customers may be able to leverage new ADSL2 or VDSL2 CPE (or a software upgrade to their existing CPE) with advanced diagnostics to pre-qualify the line for G.fast service. Those diagnostics would be used to qualify the copper pair, check for RF noise, and advise whether any line-conditioning actions would be required when installing the G.fast DPU (distribution point unit). What’s more, carriers expect that G.fast deployments will take place with virtually “no touch” installation and provisioning, setting the stage for more rapid adoption of the technology.
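The pre-qualification step might look something like the sketch below. The function name, thresholds, and field names are invented for illustration and are not values from the standard; in practice the measurements would come from the CPE's line-diagnostics firmware:

```python
# Hypothetical line pre-qualification sketch. Thresholds are
# illustrative placeholders, not values from any G.fast specification.

def prequalify_line(loop_length_m: float, rf_noise_dbm: float,
                    max_loop_m: float = 100.0,
                    noise_limit_dbm: float = -90.0) -> tuple[bool, list[str]]:
    """Return (qualified, actions): whether the copper pair qualifies
    for gigabit service, and any line-conditioning actions required."""
    actions = []
    if loop_length_m > max_loop_m:
        actions.append("loop exceeds short-loop reach; push fiber closer")
    if rf_noise_dbm > noise_limit_dbm:
        actions.append("RF ingress detected; condition or shield the line")
    return (not actions, actions)

# A short, quiet loop qualifies with no conditioning needed:
ok, todo = prequalify_line(loop_length_m=80.0, rf_noise_dbm=-120.0)
assert ok and todo == []
# An over-length loop fails and reports the required action:
ok, todo = prequalify_line(loop_length_m=140.0, rf_noise_dbm=-120.0)
assert not ok and len(todo) == 1
```

Returning a list of remediation actions, rather than a bare pass/fail, is what lets a carrier turn pre-qualification into the "no touch" self-install flow described above.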

With the G.fast standard expected to be fully ratified later this year, and while deployment costs are still a matter of conjecture and will vary by geography, carrier, and specific network topology, expectations are that they will significantly undercut the costs of FTTH. And, with the current uncertainties in service providers’ plans and timing for broad rollouts of FTTH, the future for G.fast looks bright.

About the Author
Kourosh Amiri has more than 20 years of experience in the semiconductor industry. He has been responsible for the successful introduction of products targeting a range of applications in the networking, communications, and consumer segments. He joined Ikanos in February 2013 to lead the company’s global marketing and product strategy. Amiri was previously with Cavium, where he led marketing and business development for its emerging media processor group, and drove the strategy for turning Cavium into a leading supplier of wireless display media processors in multiple market segments, including smartphones and PC accessories. Prior to joining Cavium, Amiri held senior marketing and business development roles at Freescale and several venture-backed semiconductor start-ups, addressing a wide range of networking and media processing applications. Amiri has an MSEE from Stanford University and a BSEE from the University of California, Santa Barbara.

About Ikanos

Ikanos Communications, Inc. (NASDAQ:IKAN) is a leading provider of advanced broadband semiconductor and software products for the digital home. The company’s broadband DSL, communications processors and other offerings power access infrastructure and customer premises equipment for many of the world’s leading network equipment manufacturers and telecommunications service providers. For more information, visit

Tuesday, September 9, 2014

Blueprint: What’s Next for Carrier Ethernet?

by Stan Hubbard and Members of the MEF

More than 1,000 service providers and network operators worldwide now rely on Carrier Ethernet (CE) to support high-performance Ethernet & Ethernet-enabled data services, to interconnect network-enabled cloud services, to underpin 4G/LTE mobile and consumer triple play services, and to meet internal networking needs. Tens of thousands of businesses and enterprises in every industry vertical have transitioned to these CE services in order to control communications costs, efficiently scale with traffic demand, improve business agility, and boost productivity.

As the dominant protocol of choice for affordable, scalable, high-bandwidth connectivity, Ethernet has overtaken TDM in the wide area network (WAN) and has emerged as the indispensable digital fuel for accelerating communications-related business transformation. According to Vertical Systems Group, global business Ethernet services bandwidth surpassed installed legacy services bandwidth in 2012 and is projected to exceed 75% of total global business bandwidth by 2017. In short, CE has transformed the WAN network over the past decade.

CE’s key drivers

The twenty-first century’s accelerating bandwidth consumption paved the way for Carrier Ethernet and continues to drive demand. It was not just the size of the available bandwidth, however, but also the granular way it could be delivered.

Taking mobile backhaul as an example: as pressure on the network increased, there was nothing to stop the operator from ordering further leased lines from the cell tower to the core, but each extra line meant a big jump in cost, it took time and manual labour to install, and the effort needed to be justified in terms of future expected demand. With a CE connection, by contrast, bandwidth could be raised immediately in small increments as needed, without field installation, and it could just as easily be lowered if the demand boost turned out to be temporary. This moved the business from CapEx towards a more flexible OpEx pricing model.
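The economics described above can be made concrete with toy numbers (all prices and capacities below are invented for illustration, not market figures): leased-line capacity grows in coarse, expensive steps, while CE bandwidth scales in small increments.

```python
import math

# Toy backhaul cost comparison; every figure here is hypothetical.
LEASED_LINE_MBPS = 155        # e.g. one circuit's capacity per order
LEASED_LINE_MONTHLY = 5000.0  # flat cost per circuit
CE_PER_MBPS_MONTHLY = 25.0    # incremental CE bandwidth price

def leased_monthly(demand_mbps: float) -> float:
    """Leased-line cost rises in big steps: a whole new circuit per jump."""
    return math.ceil(demand_mbps / LEASED_LINE_MBPS) * LEASED_LINE_MONTHLY

def ce_monthly(demand_mbps: float) -> float:
    """CE cost scales smoothly with the bandwidth actually provisioned."""
    return demand_mbps * CE_PER_MBPS_MONTHLY

# A small demand bump past one circuit's capacity doubles the leased cost...
assert leased_monthly(160) == 2 * leased_monthly(150)
# ...while the CE bill grows only in proportion to the extra megabits.
assert ce_monthly(160) / ce_monthly(150) < 1.1
```

The step function is the CapEx story; the smooth line is the OpEx story. The same asymmetry works in reverse when a temporary demand spike subsides, since CE bandwidth can be dialed back down.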

From a carrier perspective, CE’s flexibility gave similar benefits: enterprise customers could be attracted with “bandwidth on demand” services, and CE offered enormous scalability to accommodate new customers and rising demand.

Another advantage of Ethernet was that customers were already familiar with it in the LAN, so it was easier to understand and adapt to CE than to gather expertise in legacy WAN protocols such as ATM and Frame Relay. The fact that enterprise customers typically wanted the carrier to link their Ethernet LANs also made CE attractive as an end-to-end Ethernet solution.

Beyond the drivers that launched the uptake of Carrier Ethernet, another kind of driver then emerged: market momentum. As CE went mainstream, the relatively simple CE hardware (compared with legacy WAN systems) gained mass sales and became increasingly cost-effective, further accelerating CE’s performance-price benefits.

The reason that cost was such a strong driver for CE was that its uptake coincided with a serious economic downturn, putting cost-efficiency high on the buying agenda for much of the first decade. But it was never the only factor: flexibility and simplicity are also very much in demand in times of high competition, and CE also majored on those benefits.

Lastly, sales exploded as standards were established – but that is a topic in itself.

Standards and the role of the MEF

Founded in 2002, the MEF is a global industry alliance with the aim to accelerate the worldwide adoption of Carrier-class Ethernet networks and services. The MEF develops Carrier Ethernet technical specifications and implementation agreements to promote interoperability and deployment of Carrier Ethernet worldwide.

As well as being responsible for the creation of Carrier Ethernet, the MEF’s member companies have worked together to define, develop, and encourage worldwide adoption of standardized CE services and technologies. During the 2008 recession, even with CE offering massive benefits in cost and flexibility, businesses would have been far more nervous about migrating to a relatively new technology had it not been for the MEF’s certification program, which first certified equipment to global CE standards, then services, and then professional expertise.

According to Marie Fiala Timlin, Director of Marketing, CENX: “Vendors have converged on common CE standards, advocated by the MEF, so the SP has multiple options of standards-compliant infrastructure equipment, which will interoperate cleanly in the network.  Those benefits get passed on to the end-user in the form of high quality Internet connections, supported by CE service performance attributes.” Timlin added: “Furthermore, the specifications are continuously updated as technology and experience evolves, hence ensuring that vendors and SPs can innovate and yet remain standardized.”

And Christopher Cullan, Director of Product Marketing, Business Services, InfoVista, explained: “We’re in the standardization phase today with defined best practices from the MEF. MEF 35 is available with the basic support of Carrier Ethernet network and service performance monitoring, and MEF 36 and MEF 39 provide two constructs to enable MEF 35 using SNMP and NETCONF respectively. Some leading vendors are already moving forward with MEF 35 compliance, and MEF 36. These cut the integration effort for an Ethernet device to enable full, MEF-aligned performance monitoring – valuable to internal stakeholders like operations and engineering as well as to end customers.”

The current suite of standards has been labelled “CE 2.0”. As Zeev Draer, VP Strategic Marketing, MRV explained: “The combined effort to ratify CE 2.0 was paramount in CE’s adoption in wide area and global international networks. CE 2.0 provides the right toolkit for legacy network replacement based on multiple Classes of Service (Multi-CoS), interconnect and manageability.”

“Interconnect” refers to CE 2.0’s E-Access, as Madhan Panchaksharam, Senior Product Manager, VeryX, explained: “The wholesale interconnect process has been tremendously simplified by MEF E-Access. The combined effort has resulted in overcoming delays in the wholesale interconnect process. This has enabled bigger carriers to quickly expand across geographies, and provided business opportunities for many smaller carriers to interconnect with bigger players and maximize their revenues.”

Already 26 service providers in 12 countries now offer more than 74 CE 2.0-certified services, and many more are in the process of services certification and/or have been building out CE 2.0-compliant services.  Meanwhile, 34 network equipment companies now offer 145 devices that are CE 2.0-certified and thus capable of powering CE 2.0 services. More than 2,300 individuals from 257 organizations in 62 countries have now been recognized as MEF Carrier Ethernet Certified Professionals (MEF-CECP or MEF CECP 2.0) – a population that has nearly tripled in the past 12 months. With MEF-CP standards, SPs can identify knowledgeable professionals to manage data network operations across a multi-vendor infrastructure.

MEF standards clearly help to harmonise the technical aspects, but they also make it easier to communicate between regions and business cultures, as Olga Havel, Head of Product Strategy and Planning, Amartus, explains: “We are creating the common industry language that specifies CE services, and therefore significantly reduces the cost of delivery for these services.” Christopher Cullan also says: “The more that CE standards are communicated to the buyer market (e.g. Enterprise), the greater the level of understanding, and hence adoption.”

Where next for CE adoption?

Zeev Draer says: “We are at a maturing stage in the networking industry... It’s no longer about big pipe connectivity, but more about application-driven intelligence with strong end-to-end multi-layer provisioning of services, performance monitoring across layers, and high elasticity of the network that should scale to millions of subscribers and services.”

Christopher Cullan agrees, adding: “Cheaper-than-TDM is no longer good enough; it must be proven through simple, easily understood SLAs. As margins shrink with market maturity, over-provisioning cannot solve the needs of the enterprise business cost-effectively... Communication Service Providers need SLAs that align with the market and are standardized such that services are less bespoke and more cost effective.”

There is general agreement with these comments about the growing demands of cloud computing. Marie Fiala Timlin said: “CE is a means to connect enterprises to the cloud with guaranteed SLAs.  Within the data center itself, CE is the mechanism to provide quality exchange connectivity between tenants, and between the tenant enterprise to the cloud-based application server.  Also, CE serves to interconnect data center locations”.

For Zeev Draer: “The next step for Carrier Ethernet adoption will be highly focused on BSS and OSS integrations alongside standardization of CE 2.0 APIs. This is the most critical area that will save OPEX and enable new services such as the "Internet of Things" and services that didn't exist up to now. Now that we see maturity in CE definitions and more stringent technology factors than required from any large service provider, the focus will be on automation and monetization of CE services.”

Olga Havel agrees: “Automation is the key word right now, Service Providers and MEF must now focus on automation of full Carrier Ethernet delivery lifecycle (Design->Provision->Operate) in order to monetize their today’s networks and be ready to operate tomorrow’s virtualized networks. The next step for CE adoption is real-time OSS – service-centric orchestration platforms with open APIs that enable Software-Defined Service Orchestration.”

Towards agile, assured services orchestrated over efficient, interconnected networks
One way for SPs to compete is by reducing OPEX and increasing service lifecycle efficiency for interconnected, SLA-oriented networks. Customers will pay more for performance guarantees, especially in cloud access networks with SLA dependency, but delivering those guarantees requires a rich set of OAM capabilities for end-to-end service visibility.

Carrier Ethernet needs to evolve further to accommodate and facilitate new services oriented towards business applications and needs. These require flexibility, agility, inter-connectivity and security in networks. Achieving these will require new CE attributes, interface definitions and APIs to enable greater programmability and automation.

Madhan Panchaksharam believes: “There is an increasing need to articulate MEF’s vision to bring together various players in the eco-system such as enterprises, cloud service providers, carriers and infrastructure providers, to demonstrate how this agility and dynamic delivery models can be achieved”. He sees the convergence of Carrier Ethernet, NFV and SDN as carriers transition towards agile, on-demand and flexible service models especially for cloud-type applications: “Carrier Ethernet has inherently better capabilities that can enable these goals to be achieved without sacrificing the quality of experience for users.”

However, NFV sub networks or overlays add further complexity according to Marie Fiala Timlin, who sees a corresponding need for next generation service orchestration systems: “Today’s OSS are siloed by function: inventory, fault, provisioning, performance monitoring.  One needs a holistic view of the network for end-to-end service fulfilment and assurance.  Also APIs between technology domains, and between carriers, are needed to help automate workflows for service agility”.

For Olga Havel too: “What needs to happen next is standardization of MEF Service Orchestration APIs. This will open the way for MEF certification for CE service orchestration platforms and interfaces. These APIs would enable users, applications and OSSs to design, provision & operate MEF services over single and multiple operators’ networks. MEF Service Management Reference Architecture must take into account integration between multiple Operators, but also with NFV Orchestrators and Cloud Managers for providing delivery of end-to-end connectivity services between Carrier Ethernet and Data Centre VMs and/or VNFs”.


The MEF has a reputation for moving quickly to anticipate business needs and deliver solutions and standards at the right time. Recognising that the issues go beyond technology and tools, the MEF launched its Service Operations Committee (SOC) last year to define, streamline and standardize processes for buying, selling, delivering and operating MEF-defined services.

The SOC has established several projects to develop process flows, use cases and APIs to support all aspects of the ordering and provisioning of MEF-defined Ethernet services and accelerate delivery of MEF services to customers.

The MEF is also shaping a vision and white paper on standardising the delivery of dynamic connectivity services via physical or virtual network functions orchestrated over multiple operators’ networks, and is addressing the need for standardised service orchestration APIs. Later this year the MEF will announce more detail about its industry vision and various strategic initiatives.

The MEF Global Ethernet Networking 2014 (GEN14) event will be held on 17-20 November at the Gaylord National in Washington, DC.  GEN14 is a global gathering of the CE community defining the future of network-enabled cloud, data, and mobile services powered by the convergence of CE 2.0, SDN and virtualization technologies.

More information about GEN14 is available at

About the Author

Stan Hubbard, Director of Communications & Research, MEF, is a veteran Carrier Ethernet analyst who was previously Senior Analyst at independent research organization Heavy Reading for nine years.

Thursday, May 8, 2014

Nokia: Three Metrics of Disruptive Innovation

by David Letterman, Nokia

‘Disruptive innovation’ has been a favorite discussion topic for years. I am sure every industry, every company and every innovation team, has had rounds and rounds of discussion about what disruptive innovation means for them.

Rather than attempting another end-all, be-all definition of innovation, let’s focus on the passion it evokes and the permissions it enables. Disruptive innovation, as an internal charter, allows expansion beyond previous boundaries; it gives permission to go after new markets, new customers and new business models. If left unencumbered, it can guide the company to proactively find and validate big problems for which external partners, new products and new markets can be created. Disruptive innovation can be a source of otherwise unattainable revenue growth and market share.

Innovation is about converting ideas into something of value, making something better AND hopefully something that our customers are willing to pay for. For the purpose of putting the framework into two buckets, let’s distinguish between incremental and disruptive innovation.

Most innovation in established companies is developed by corporate innovation engines, whose job is to continually improve their products and services. This continuous innovation delivers incremental advances in technology, process and business model. Specialized R&D teams can add value to these innovation engines by solving problems differently or having a specific charter to go after larger levels of improvement. Although the risks are higher, breakthrough innovation occurs when these teams achieve significantly better functionality or cost savings. This combination of corporate and specialized incremental innovation is absolutely necessary for companies to keep up with or get ahead of the competition – and it is something most successful companies are very good at.

Disruptive innovation, on the other hand, is much more difficult for the corporate machinery. Here, new product categories are created, new markets are addressed and new value chains are established.

There is no known baseline to refer to.

Disruption implies that someone is losing – being disrupted. So clearly you won’t find a product roadmap for it in the company catalog. And it’s not even necessarily solving the problems of the current customer base. This is an area where, with the right passion, permissions and charter, a specialized innovation team can take a lead role and create significant growth for the company.

Here is my take on three characteristics of teams chartered to do disruptive innovation -

  • A strong outside-in perspective is crucial, for not only identifying the problem and validating the opportunity, but also for finding and creating a solution, and perhaps even taking it to market. Collaboration is everything when it comes to disruption.
  • Risk quotient - Arguably, all innovation contains some element of risk. But in this case of proactively seeking disruption, we must allow for an even higher degree of risk. For most innovation teams, ‘Fail fast’, ‘Fail often’ and ‘Fail safe’ are the mantras. But in the case of disruptive innovation, when we are seeking new markets, perhaps based on new technologies, our probability of success is untested. And to the incumbents, this new solution is unacceptable, often something they have never considered or simply cannot deliver. If you are solving a really important problem, it justifies embracing the risk, revalidating the opportunity and digging deeper to create a solution. Redefine risk in the context of meaningful disruption – ‘Fail proud’ – and keep on solving. Remember SpaceX?
  • How disruptive is disruptive - For a new entrant to eventually become disruptive it needs to be significantly better in functionality, performance and efficiency - or much cheaper - than the alternatives.  Although the benefits may initially only be noticed by early adopters, for the solution to disrupt a category it must be made available to, and eventually accepted by, the masses.

A simple example that illustrates these three characteristics is how the personal navigation device (PND) market was disrupted by the smartphone.

In the early and mid 2000s, Garmin and TomTom had a lock on the personal navigation market. When Nokia and the other phone manufacturers began delivering GPS via phones, they were coming to the market via a totally new channel, embedding the functionality in a device that the consumer would carry with them at all times.

The incumbents may have acted unfazed.  But in reality, they couldn’t respond to the threat.  The functionality may have been inferior to what they were selling but the cost was perceived as free.  It was totally unacceptable and the business model was “uncopiable.” What started as a feature in just select high-end phones would soon be adopted as a standard functionality in every smartphone, and expected by end users by default. In just two years, there were five times as many people carrying GPS enabled phones in their pockets as there were PNDs being sold.

Silicon Valley Open Innovation Challenge

There are many other characteristics you might consider to be the most important measurements for disruptive innovation.  For me, these three are as good as any.  It comes down to the simple questions of “Why does it matter?”  “What problem does this empower us to solve that was otherwise unmet?” and “How can we provide significantly positive impact for the company and for the people to whom the innovation will serve?”

Nokia’s Technology Exploration and Disruption (TED) team is chartered to look at exactly these questions. In its search for the next disruption, it has launched the Silicon Valley Open Innovation Challenge.

This competition is an open call to Silicon Valley innovators to collaboratively discover and solve big problems with us, and to do so in ways that are significantly better, faster or cheaper than we could have done alone. We see Telco Cloud and colossal data analytics as the two major transformational areas for the wireless industry, opening up possibilities for disruption – and those are the focus themes for the Open Innovation Challenge. We’re willing to take the risk because we know the rewards of innovation are worth it.

Click here to submit your ideas and be part of something truly disruptive. Apply now! The deadline for submissions is 19 May 2014.

David Letterman works in the Networks business of Nokia within its Innovation Center in the heart of Silicon Valley. Looking after Ecosystem Development Strategy for the Technology Exploration and Disruption global team, David is exploring how to create exponential value by pushing the boundaries of internal innovation. An important initiative is Nokia’s Silicon Valley Open Innovation Challenge, calling on the concentrated problem-solving intellect of the Valley, to solve two of the biggest transformations for Telco: Colossal data analytics and Telco Cloud. Prior to his current position, David worked for a top tier Product Design and Innovation Consultancy, and held various business development and marketing management roles during a previous 10-year tenure with Nokia.

Nokia invests in technologies important in a world where billions of devices are connected. We are focused on three businesses: network infrastructure software, hardware and services, which we offer through Networks; location intelligence, which we provide through HERE; and advanced technology development and licensing, which we pursue through Technologies. Each of these businesses is a leader in its respective field. Through Networks, Nokia is the world’s specialist in mobile broadband. From the first ever call on GSM, to the first call on LTE, we operate at the forefront of each generation of mobile technology. Our global experts invent the new capabilities our customers need in their networks. We provide the world’s most efficient mobile networks, the intelligence to maximize the value of those networks, and the services to make it all work seamlessly.

Wednesday, March 12, 2014

Blueprint: SDN and the Future of Carrier Networks

by Dave Jameson, Principal Architect, Fujitsu Network Communications

The world has seen rapid changes in technology in the last ten to twenty years that are historically unparalleled, particularly as it relates to mobile communications. As an example, in 1995 there were approximately 5 million cell phone subscribers in the US, less than 2 percent of the population. By 2012, according to CTIA, there were more than 326 million subscribers, of which more than 123 million were smartphones. This paradigm shift has taken information from fixed devices, such as desktop computers, and made it available just about anywhere. With information available anywhere, in the hands of individual users, some have started to call this the "human centric network," as network demands are driven by these individual, often mobile, users.

But this growth has also created greater bandwidth demands and, in turn, has taken its toll on the supporting infrastructure. To meet these demands we’ve seen innovative approaches to extracting the most benefit from existing resources, extending their capabilities in real time as needed. Clouds, clusters and virtual machines are all forms of elastic compute platforms that have been used to support the ever-growing human centric network.

But how does this virtualization of resources in the datacenter relate to SDN in the telecom carrier's network? Specifically, how does SDN, designed for virtual orchestration of disparate computational resources, apply to transport networks? I would suggest that SDN is not only applicable to transport networks but a necessity for them.

What is SDN?

The core concept behind SDN is that it decouples the control layer from the data layer. The control layer is the layer of the network that manages the network devices by means of signaling. The data layer, of course, is the layer where the actual traffic flows. By separating the two the control layer can use a different distribution model than the data layer.

The real power of SDN can be summed up in a single word: abstraction. Instead of sending device-specific code to network elements, machines can talk to the controllers in generalized terms, and applications run on top of the SDN network controller.

As seen in Figure 1, applications can be written and plugged in to the SDN network controller. Using an interface such as REST, the applications can make requests of the SDN controller, which returns the results. The controller understands the construct of the network and can communicate requests down to the various network elements connected to it.
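To make the northbound abstraction concrete, here is a minimal sketch of the kind of request an application might issue. The endpoint path and JSON field names are hypothetical, not any real controller's API; a real application would POST the body over REST (for example, with the `requests` library).

```python
import json

# Hypothetical northbound REST payload: the application asks the controller
# for a service in abstract terms (endpoints, bandwidth), not device commands.
# The field names and endpoint identifiers below are illustrative only.
def build_service_request(src, dst, bandwidth_mbps):
    """Build the JSON body an application might POST to a controller."""
    return json.dumps({
        "service": {
            "endpoints": [src, dst],
            "bandwidth-mbps": bandwidth_mbps,
            "protection": "unprotected",
        }
    })

body = build_service_request("node-a:eth1", "node-b:eth3", 150)
# An application would then send it with something like:
#   requests.post("https://controller/services", data=body, ...)
print(body)
```

Note that nothing in the request names a vendor, a protocol, or a device model; translating the abstract intent into element-level configuration is the controller's job.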

The southbound interface handles all of the communication with the network elements themselves, and it can take one of two forms. The first creates a more programmable network: instead of just sending commands that tell devices what to do, SDN can actually reprogram a device to function differently.

The second type of southbound interface is a more traditional one that uses existing communication protocols to manage devices currently deployed with TL1 and SNMP interfaces. Either way, SDN has the ability to control disparate technologies, not just equipment from multiple vendors.

Networks are, of course, composed of different devices that manage specific segments of the network. As seen in Figure 2, a wireless carrier will have wireless transmission equipment (including small cell fronthaul), with transport equipment to backhaul traffic to the data center. In the data center there will be routers, switches, servers and other devices.

Today these segments are, at best, under "swivel chair management" and, at worst, overseen by multiple NOCs, each managing its respective segment. Not only does this add OpEx for staffing and equipment, it also makes provisioning difficult and time consuming, as each network section must provision its part in a coordinated fashion.

In an SDN architecture there is a layer that can sit above the controller layer, called the orchestration layer, whose job is to talk to multiple controllers.

Why do carriers need SDN?

As an example of how SDN can greatly simplify provisioning, consider what it would take to modify the bandwidth shown in Figure 2. If there is an existing 100 Mbps Ethernet connection from the data center to the fronthaul and it is decided that the connection needs to be 150 Mbps, a coordinated effort must be mounted: one team must increase the bandwidth settings of the small cells, the transport team must increase bandwidth on the NEs, and yet another team must configure the routers and switches in the data center.

Such adds, moves, and changes are time consuming in a world where dynamic bandwidth needs are no longer negotiable. What is truly needed is the ability to respond to demand in real time, with bandwidth provisioned by one individual using the power of abstraction. The infrastructure must move at a pace closer to the one-click world we live in, and SDN provides the framework to do so.
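The single-operator provisioning flow described above can be sketched in a few lines. The controller and orchestrator classes below are illustrative stand-ins, not a real product API; the point is that one request fans out to every segment's controller, replacing three separately coordinated manual changes.

```python
# Hypothetical orchestration sketch: one provision() call updates the
# small-cell, transport, and data center segments together.
class SegmentController:
    """Stand-in for a per-segment SDN controller."""
    def __init__(self, name):
        self.name = name
        self.links = {}          # service_id -> provisioned Mbps

    def update_bandwidth(self, service_id, mbps):
        self.links[service_id] = mbps
        return True

class Orchestrator:
    """Fans one abstract request out to every segment controller."""
    def __init__(self, controllers):
        self.controllers = controllers

    def provision(self, service_id, mbps):
        done = []
        for c in self.controllers:
            if not c.update_bandwidth(service_id, mbps):
                # Crude rollback: zero the service on segments already
                # touched (a real system would restore the prior state).
                for d in done:
                    d.update_bandwidth(service_id, 0)
                return False
            done.append(c)
        return True

ctrls = [SegmentController(n) for n in ("small-cell", "transport", "datacenter")]
orch = Orchestrator(ctrls)
ok = orch.provision("svc-42", 150)
print(ok, ctrls[1].links)  # → True {'svc-42': 150}
```

One request, one operator, three segments updated atomically: that is the abstraction at work.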

SDN Applications

No discussion of SDN would be complete without examining the capabilities that SDN can bring through the mechanism of applications. Many applications can be used in an SDN network; Figure 4 shows examples, broken down by application type. The list is by no means exhaustive.

One example of an application that specifically applies to carrier networks is path computation, or end-to-end provisioning. Over the years many methods have sought to provide a path computation engine (PCE), including embedding the PCE in the NEs, intermingling the control and data layers. But because the hardware on the NEs is limited, the scale of the domain such a PCE can manage is also limited. SDN overcomes this issue by the very nature of the hardware it runs on: a server. Should the server become unable to manage the network due to its size, capacity can be added by simply expanding the hardware (e.g., adding a blade or hard drive). SDN also addresses the fact that not all systems share common signaling protocols: it can not only work with disparate protocols but also manage systems that have no embedded controller.
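As a sketch of the kind of computation a server-hosted PCE performs, here is a minimal Dijkstra shortest-path search over a link-cost graph. The topology is illustrative; a real PCE would also apply constraints such as bandwidth, latency, and shared-risk groups.

```python
import heapq

# Minimal path computation sketch: Dijkstra over a link-cost graph, the kind
# of search a server-hosted PCE can run across an entire multi-vendor domain.
def shortest_path(graph, src, dst):
    """graph: {node: {neighbor: cost}}. Returns (cost, [nodes]) or None."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None  # no path exists

# Illustrative four-node topology with directed link costs.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
print(shortest_path(topology, "A", "D"))  # → (3, ['A', 'B', 'C', 'D'])
```

Because the search runs on commodity server hardware, the domain size is bounded by server capacity rather than by NE memory and CPU.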

Protection and Restoration

Another application that can be built is protection and restoration. The PCE can dynamically find an alternative path based on failures in the network; in fact, it can find restoration paths even when multiple links have failed. The system can systematically search for the best possible restoration paths, even as new links are added to the network, and select the most efficient path as it becomes available.
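A minimal restoration sketch, under the assumption that the controller knows the full topology and the set of failed links: remove the failed links and search again for any surviving route (fewest hops here, for brevity). The topology and failure set are illustrative.

```python
from collections import deque

# Restoration sketch: drop failed links from the topology, then BFS for a
# surviving route. Links are undirected (a, b) pairs; all data is illustrative.
def restore_path(links, src, dst, failed):
    """Return a fewest-hop path avoiding failed links, or None."""
    alive = {(a, b) for (a, b) in links
             if (a, b) not in failed and (b, a) not in failed}
    adj = {}
    for a, b in alive:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no surviving route

links = {("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")}
print(restore_path(links, "A", "D", failed={("B", "D")}))  # → ['A', 'C', 'D']
```

The same search handles multiple simultaneous failures: each additional failed link simply shrinks the surviving graph before the search runs.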

SDN and OTN Applications

A prime example of SDN being used to configure services can be seen when it is applied to OTN. OTN is a technology that allows users to densely and efficiently pack different service types into fewer DWDM wavelengths. OTN can greatly benefit the network by optimizing transport but it does add some complexity that can be simplified by the use of SDN.

Network Optimization  

Another area where SDN can improve utilization is network optimization, so that over time the network makes better use of its resources. Again using the example of OTN, SDN can reroute OTN paths to minimize latency, to prepare for cutovers, or to respond to churn in demand.


In addition to applications, SDN becomes an enabler of Network Function Virtualization (NFV). NFV allows companies to provide services that currently run on dedicated hardware located on the end user's premises by moving the functionality to the network.


It is time for us to think of our network as being more than just a collection of transport hardware. We need to remember that we are building a human centric network that caters to a mobile generation who think nothing of going shopping while they are riding the bus to work or streaming a movie on the train.

SDN is capable of creating a programmable network by taking both next-generation systems and existing infrastructure and making them substantially more dynamic. It does this by bringing disparate systems and technologies together under a common management system that can utilize them to their full potential. By using abstraction, SDN can simplify the software needed to deliver services, improve the use of the network, and shorten delivery times, leading to greater revenue.

About the Author
Dave Jameson is Principal Architect, Network Management Solutions, at Fujitsu Network Communications, Inc.

Dave has more than 20 years' experience in the telecommunications industry, most of it spent working on network management solutions. Dave joined Fujitsu Network Communications in February 2001 as a product planner for NETSMART® 1500, Fujitsu’s network management tool, and has also served as its product manager. He currently works as a solutions architect specializing in network management. Prior to Fujitsu, Dave ran a network operations center for a local exchange carrier in the northeastern United States that deployed cutting-edge data services. Dave attended Cedarville University and holds a US patent related to network management.

About Fujitsu Network Communications Inc.

Fujitsu Network Communications Inc., headquartered in Richardson, Texas, is an innovator in Connection-Oriented Ethernet and optical transport technologies. A market leader in packet optical networking solutions, WDM and SONET, Fujitsu offers a broad portfolio of multivendor network services as well as end-to-end solutions for design, implementation, migration, support and management of optical networks. For seven consecutive years Fujitsu has been named the U.S. photonics patent leader, and it is the only major optical networking vendor to manufacture its own equipment in North America. Fujitsu has over 500,000 network elements deployed by major North American carriers across the US, Canada, Europe, and Asia.


Wednesday, February 26, 2014

Blueprint: Impending ITU G.8273.2 to Simplify LTE Planning

By Martin Nuss, Vitesse Semiconductor

Fourth-generation wireless services based on long-term evolution (LTE) have new timing and synchronization requirements that will drive new capabilities in the network elements underlying a call or data session. For certain types of LTE networks, there is a maximum time error limit between adjacent cellsites of no more than 500 nanoseconds.

To enable network operators to meet the time error requirement in a predictable fashion, the International Telecommunication Union is set to ratify the ITU-T G.8273.2 standard, which sets stringent time error limits for network elements. By using equipment meeting this standard, network operators will be able to design networks that predictably comply with the 500-nanosecond maximum time error between cellsites.

In this article, we look at the factors driving timing and synchronization requirements in LTE and LTE-Advanced networks and how the new G.8273.2 standard will help network operators in meeting those requirements.

Types of Synchronization

Telecom networks rely on two basic types of synchronization. These include:
Frequency synchronization
Time-of-day synchronization, which includes phase synchronization

Different types of LTE require different types of synchronization. Frequency division duplexed LTE (FDD-LTE), the technology that was used in some of the earliest LTE deployments and continues to be deployed today, uses paired spectrum. One spectrum band is used for upstream traffic and the other is used for downstream traffic. Frequency synchronization is important for this type of LTE, but time-of-day synchronization isn’t required.

Time-division duplexed LTE (TD-LTE) does not require paired spectrum, but instead separates upstream and downstream traffic by timeslot. This saves on spectrum licensing costs and also allows bandwidth to be allocated more flexibly between the upstream and downstream directions, which can be valuable for video. Time-of-day synchronization is critical for this type of LTE. TD-LTE deployments have recently become more commonplace than they were initially, and the technology is expected to be widely deployed.

LTE-Advanced (LTE-A) is an upgrade to either TD-LTE or FDD-LTE that delivers greater bandwidth. It works by pooling multiple frequency bands and by enabling multiple base stations to send data to a handset simultaneously. Accordingly, adjacent base stations or small cells have to be aligned with one another, a requirement that drives the need for time-of-day synchronization. A few carriers, such as SK Telecom, Optus, and Unitel, have already deployed LTE-A, and those numbers are expected to grow quickly.

Traditionally, wireless networks have relied on global positioning system (GPS) equipment installed at cell towers to provide synchronization. GPS can provide both frequency and time-of-day synchronization. But that approach becomes impractical as networks rely more and more heavily on femtocells and picocells to increase both network coverage (for example, indoors) and capacity. These devices may not be mounted high enough to have a line of sight to GPS satellites, and even if they could be, GPS capability would make them too costly. There is also increasing concern about the susceptibility of GPS to jamming and spoofing, and countries outside of the US are reluctant to rely exclusively on the US-operated GPS satellite system for their timing needs.

IEEE 1588

A more cost-effective alternative to GPS is to deploy equipment meeting timing and synchronization standards created by the Institute of Electrical and Electronics Engineers (IEEE).

The IEEE 1588 standards define a synchronization protocol known as precision time protocol (PTP) that originally was created for the test and automation industry. IEEE 1588 uses sync packets that are time stamped by a master clock and which traverse the network until they get to an ordinary clock, which uses the time stamps to produce a physical clock signal.

The 2008 version of the 1588 standard, also known as 1588v2, defines how PTP can be used to support frequency and time-of-day synchronization. For frequency delivery this can be a unidirectional flow. For time-of-day synchronization, a two-way mechanism is required.
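The two-way mechanism works from the four PTP timestamps: t1 (master sends Sync), t2 (slave receives it), t3 (slave sends Delay_Req), and t4 (master receives it). Assuming a symmetric path, the slave can compute its clock offset and the mean path delay; the timestamp values below are illustrative.

```python
# Two-way time transfer per IEEE 1588: the slave derives its offset from the
# master using the four timestamps of the delay request-response exchange.
# Assumes a symmetric path; asymmetry shows up directly as offset error.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """All times in nanoseconds. Returns (slave_offset, mean_path_delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Illustrative exchange: slave clock runs 100 ns ahead; one-way delay 500 ns.
t1 = 1_000            # master sends Sync
t2 = t1 + 500 + 100   # slave receives it (delay + slave's offset)
t3 = 2_000            # slave sends Delay_Req
t4 = t3 + 500 - 100   # master receives it (delay - slave's offset)
print(ptp_offset_and_delay(t1, t2, t3, t4))  # → (100.0, 500.0)
```

Note that a frequency-only deployment can ignore the return leg entirely, which is why unidirectional flows suffice there while time-of-day delivery needs both directions.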

Equipment developers must look outside the 1588 standards for details of how synchronization should be implemented to meet the needs of specific industries. The ITU is responsible for creating those specifications for the telecom industry.

How the telecom industry should implement frequency synchronization is described in the ITU-T G.826x series of standards, which were ratified previously. The ITU-T G.8273.2 standard for time-of-day synchronization was developed later and is expected to be ratified next month (March 2014).

Included in ITU-T G.8273.2 are stringent requirements for time error. This is an important aspect of the standard because wireless networks can’t tolerate time error greater than 500 nanoseconds between adjacent cellsites.

ITU-T G.8273.2 specifies standards for two different classes of equipment:
Class A: maximum time error of 50 ns
Class B: maximum time error of 20 ns

Both constant and dynamic time errors contribute to the total time error of each network element, with both adding linearly after applying a 0.1 Hz low-pass filter. Network operators that use G.8273.2-compliant equipment for all of the elements underlying a connection between two cell sites can simply add the maximum time error of each element to determine whether the connection will have an acceptable level of time error. Previously, network operators had no way of determining time error until after equipment was deployed in the network, yet operators need predictability in their network planning.
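This additive property is what makes pre-deployment budgeting possible. The sketch below sums worst-case per-node errors for a chain of clocks and checks the result against the 500 ns cell-to-cell limit; the chain length and the allowance reserved for the reference clock and links are illustrative numbers, not taken from the standard.

```python
# Time error budgeting sketch enabled by G.8273.2: per-class worst-case node
# errors add linearly, so a planner can verify a chain before deployment.
CLASS_MAX_TE_NS = {"A": 50, "B": 20}   # maximum time error per node class

def chain_time_error(node_classes, other_allowances_ns=0):
    """Sum worst-case node time errors plus any other budget items (ns)."""
    return sum(CLASS_MAX_TE_NS[c] for c in node_classes) + other_allowances_ns

# Illustrative budget: six Class A boundary clocks in the path, plus 100 ns
# reserved for the timing reference and link asymmetries (assumed figure).
total = chain_time_error("AAAAAA", other_allowances_ns=100)
print(total, total <= 500)  # → 400 True
```

Swapping in Class B nodes (20 ns each) shows the trade-off directly: the same budget supports a much longer chain, or leaves more headroom for link asymmetry.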

Conforming to the new standard will be especially important as network operators rely more heavily on heterogeneous networks, also known as HetNets, which rely on a mixture of fiber and microwave devices, including small cells and femtocells. Equipment underlying HetNets is likely to come from multiple vendors, complicating the process of devising a solution in the event that the path between adjacent cell sites has an unacceptable time error level.

What Network Operators Should Do Now

Some equipment manufacturers already have begun shipping equipment capable of supporting ITU-T G.8273.2, as G.8273.2-compliant components are already available. As network operators make equipment decisions for the HetNets they are just beginning to deploy, they should take care to look for G.8273.2-compliant products.

As for equipment already deployed in wireless networks, over 1 million base stations currently support 1588 for frequency synchronization and can be upgraded to support time-of-day synchronization with a software or firmware upgrade.

Some previously deployed switches and routers may support 1588, while others may not. Even where 1588 is supported by switches and routers deployed within the last few years, it is unlikely that they meet the new ITU profiles for time and phase delivery. IEEE 1588 boundary or transparent clocks with distributed timestamping directly at the PHY level will be required to meet these new profiles, and only a few routers and switches have this capability today. Depending on where in the network a switch or router is installed, network operators may be able to continue to use GPS for synchronization, gradually upgrading routers by using 1588-compliant line cards for all new line card installations and swapping out non-compliant line cards where appropriate.

Wireless network operators should check with small cell, femtocell and switch and router vendors about support for 1588v2 and G.8273.2 if they haven’t already.

About the Author

Martin Nuss joined Vitesse in November 2007 and is the vice president of technology and strategy and the chief technology officer at Vitesse Semiconductor. With more than 20 years of technical and management experience, Mr. Nuss is a Fellow of the Optical Society of America and a member of IEEE. Mr. Nuss holds a doctorate in applied physics from the Technical University in Munich, Germany. He can be reached at

About Vitesse
Vitesse (Nasdaq: VTSS) designs a diverse portfolio of high-performance semiconductor solutions for Carrier and Enterprise networks worldwide. Vitesse products enable the fastest-growing network infrastructure markets including Mobile Access/IP Edge, Cloud Computing and SMB/SME Enterprise Networking. Visit or follow us on Twitter @VitesseSemi.