
Tuesday, February 25, 2014

Blueprint Column: Making 5G A Reality

By Alan Carlton, Senior Director Technology Planning for InterDigital

By now we’ve all heard many conversations around 5G, but it seems that everyone is pretty much echoing the same thing—it won’t be here until 2025ish. And I agree. But it also seems that no one is really addressing how it will be developed. What should we expect in the next decade? What needs to be done in order for 5G to be a reality? And which companies will set themselves apart from others as leaders in the space?  


I don’t think the future just suddenly happens like turning a corner and magically a next generation appears. There are always signs and trends along the way that provide directional indicators as to how the future will likely take shape. 5G will be no different than previous generations whose genesis was seeded in societal challenges and emerging technologies often conceived or identified decades earlier. 

5G wireless will be driven by more efficient network architectures to support an Internet of Everything, smarter and new approaches to spectrum usage, energy-centric designs and more intelligent strategies applied to the handling of content based upon context and user behaviors. From this perspective, technologies and trends like the Cloud, SDN, NFV, CDN (in the context of a greater move to Information Centric Networking), Cognitive Radio and Millimeter Wave all represent interesting first steps on the roadmap to 5G.

5G Requirements and Standards

The requirements of what makes a network 5G are still being discussed; however, the best first stab at such requirements is reflected in the work of the 5GPPP (in Horizon 2020). Some of the requirements suggested thus far include:

  • Providing 1000 times higher capacity and more varied rich services compared to 2010
  • Saving 90 percent energy per service provided
  • Orders of magnitude reductions in latency to support new applications
  • Reducing service creation time from 90 hours to 90 minutes
  • Secure, reliable and dependable: perceived zero downtime for services
  • User controlled privacy

But besides requirements, developing a standardization process for 5G will also have a significant impact in making 5G a reality. While the process has not yet begun, it is very reasonable to say that as an industry we are at the beginning of what might be described as a consensus building phase.

If we reflect on wireless history, seminal moments may mark where each next “G” began. The first GSM networks rolled out in the early 1990s, but GSM’s origins can be traced back as far as 1981 (and possibly earlier) to the formation of the Groupe Spécial Mobile by CEPT. 3G and 4G share a similar history, where the lead time between conceptualization and realization has been roughly consistent at the 10-year mark. This makes the formation of 5G-focused industry and academic efforts such as the 5GPPP (in Horizon 2020) and the 5GIC (at the University of Surrey) in 2013/14 particularly interesting.

Assuming history repeats itself, these “events” may foretell when we might realistically expect to see 5G standards and, later, deployed 5G systems.

Components of 5G Technology

5G will bring profound changes to both the network and air interface components of the current wireless systems architecture. On the air interface we see three key tracks:

  • The first track might be called the spectrum sharing and energy efficiency track wherein a new, more sophisticated mechanism of dynamically sharing spectrum between players emerges. Within this new system paradigm and with the proliferation of IoT devices and services, it is quite reasonable to discuss new and more suitable waveforms. 
  • A second track that we see is the move to the leveraging of higher frequencies, so called mmW applications in the 60GHz bands and above. If 4G was the era of discussing the offloading of Cellular to WiFi, 5G may well be the time when we talk of offloading WiFi to mmW in new small cell and dynamic backhaul designs. 
  • A final air interface track that perhaps bridges both air interface and network might be called practical cross layer design. Context and sensor fusion are key emerging topics today and I believe that enormous performance improvements can be realized through tighter integration of this myriad of information with the operation of the protocols on the air interface. 

While truly infinite bandwidth to the end user may remain out of reach even in the 5G timeframe, through these mechanisms it may be possible to deliver the perception of infinite bandwidth in a very real sense. By way of example, some R&D labs today have developed a technology called user adaptive video, which selectively chooses the video streams that should be delivered to an end user based upon user behavior in front of the viewing screen. With this technology, bandwidth utilization has improved by 80 percent without any detectable change in the quality of experience perceived by the end user.
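
To make the idea concrete, here is a minimal sketch (illustrative only, not InterDigital's implementation) of how a player might pick one of several pre-encoded bitrates from a crude estimate of viewer presence, attention and distance; the rendition list and thresholds are invented for illustration.

# Hypothetical sketch of user-adaptive video stream selection.
# Assumes pre-encoded renditions and a sensor-derived viewer context.

RENDITIONS_KBPS = [1200, 2500, 5000, 8000]  # available encodings, lowest to highest

def select_bitrate(viewer_present: bool, attention: float, distance_m: float) -> int:
    """Pick a rendition based on a crude model of perceived quality needs.

    attention runs from 0.0 (looking away) to 1.0 (watching closely);
    distance_m is the viewer's distance from the screen in metres.
    """
    if not viewer_present or attention < 0.2:
        return RENDITIONS_KBPS[0]   # nobody is really watching: send the cheapest stream
    if distance_m > 3.0 or attention < 0.6:
        return RENDITIONS_KBPS[1]   # casual viewing: mid quality is perceptually enough
    if distance_m > 1.5:
        return RENDITIONS_KBPS[2]
    return RENDITIONS_KBPS[-1]      # close, attentive viewer: full quality

if __name__ == "__main__":
    # Example: an attentive viewer two metres away gets the 5 Mbps rendition.
    print(select_bitrate(viewer_present=True, attention=0.9, distance_m=2.0))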

5G’s Impact on the Network

5G will be shaped by a mash-up (and evolution) of three key emerging technologies: Software Defined Networking, Network Function Virtualization and ever deeper content caching in the network, as exemplified by the slow roll of CDN technology into GGSN equipment today (i.e., the edge of the access network!). This trend will continue deeper into the radio access network and, in conjunction with the other elements, create a perfect storm in which an overhaul of the IP network becomes possible. Information Centric Networking is an approach that has been incubating in academia for many years, and its time may now be right within these shifting sands.

Overall, the network will flatten further, and a battle over where the intelligence resides, in the cloud or at the network edges, will play out, with the result likely being a compromise between the two. Device-to-Device communications in a fully meshed virtual access resource fabric will become commonplace within this vision. The future may well be as much about the crowd as the cloud. If the cloud is about big data, then the crowd will be about small data, and the winners may well be the players who first recognize the value that lies here. Services in this new network will change. A compromise will be struck between the OTT and Carrier worlds and any distinction between the two will disappear. Perhaps more than anything else, 5G must deliver in this key respect.

Benefits and Challenges of 5G

Even the most conservative traffic forecast projections through 2020 will challenge the basic capabilities and spectrum allocations of LTE-A and current-generation WiFi. Couple this with the recognition that energy requirements in wireless networks will spiral at the same rate as traffic, add the chaos of 50 or 100 billion devices - the so-called Internet of Everything - all connected to a common infrastructure, and the value of exploring a 5th Generation quickly becomes apparent.

The benefits of 5G at the highest level will simply be the sustaining of the wireless vision for our connected societies and economies in a cost effective and energy sustainable manner into the next decade and beyond.

However, 5G will likely roll out into a world of business models considerably changed from those of its predecessor generations. What will these business models look like? It is clear that today’s model, where Carriers finance huge infrastructure investments but reap less of the end-customer rewards, is unsustainable over the longer term. Some level of consolidation will inevitably happen, but 5G will also have to provide a solution for a more equitable sharing of infrastructure investment costs. Just how these new business models take shape, and how this new thinking might drive technological development, is perhaps the greatest uncertainty and challenge for 5G development.

While the conversations around 5G continue to grow, there is still a long way to go before full-scale deployment. Even so, companies are already doing research and development in areas that might be considered foundational in helping 5G prevail. WiFi in white space is an early embodiment of a new, more efficient spectrum utilization approach that is highly likely to be adopted in a more mainstream manner in 5G. Beyond this, companies are also exploring new waveforms (the proverbial four-letter acronyms that often characterize a technology generation) that outperform LTE’s OFDM in energy efficiency, in operation under emerging dynamic spectrum sharing paradigms, and in addressing the challenges that the Internet of Things will bring.


About the Author 

Alan Carlton is the senior director of technology planning for InterDigital where he is responsible for the vision, technology roadmap and strategic planning in the areas of mobile devices, networking technologies, applications & software services. One of his primary focus areas is 5G technology research and development. Alan has over 20 years of experience in the wireless industry.

Thursday, January 9, 2014

Blueprint: Optimizing SSDs with Software Defined Flash Requires a Flexible Processor Architecture

By Rahul Advani, Director of Flash Products, Enterprise Storage Division, PMC

With the rise of big data applications, such as in-memory analytics and database processing where performance is a key consideration, enterprise Solid-State Drive (SSD) use is growing rapidly. IDC forecasts the enterprise SSD segment to be a $5.5 billion market by 2015 [1]. In many cases, SSDs are used as the highest level of a multi-tier storage system, but there is also a trend towards all-SSD storage arrays as price-performance metrics, including dollar per IOP ($/IOP) and dollar per workload ($/workload), make it an attractive option.

Flash-based SSDs are not only growing as a percentage of all storage in the enterprise, but they are also almost always the critical storage component to ensure a superior end-user experience using caching or tiering of storage.  The one constant constraint to the further use of NAND-based SSDs is cost, so it makes sense that the SSD industry is focused on technology re-use as a means to deliver cost-effective solutions that meet customers’ needs and increase adoption.

If you take the Serial Attached SCSI (SAS) market as an example, there are three distinct SSD usage models, commonly measured in Random Fills Per Day (RFPD) over 5 years - that is, how many times an entire drive is filled every day for 5 years. There are read-intensive workloads at 1-3 RFPD, mixed workloads at 5-10 RFPD and write-intensive workloads at 20+ RFPD. Furthermore, different customer bases, such as Enterprise and Hyperscale datacenters, have different requirements for the application optimizations and scale for which SSDs are used in their infrastructure. These differences typically show up in the number of years of service required, performance, power and sensitivity to corner cases in validation. The dilemma for SSD makers is how to meet these disparate needs and yet offer cost-effective solutions to end users.
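
To put the endurance arithmetic in concrete terms, the short example below (with an illustrative capacity figure, not any specific product's specification) shows how a fills-per-day rating translates into total data written over the five-year service life.

# Illustrative endurance arithmetic for the RFPD classes mentioned above.
# Capacity and ratings are example values, not any specific product's spec.

def total_writes_tb(capacity_gb: float, fills_per_day: float, years: int = 5) -> float:
    """Total data written (in terabytes) if the whole drive is filled
    'fills_per_day' times every day for 'years' years."""
    days = 365 * years
    return capacity_gb * fills_per_day * days / 1000.0

if __name__ == "__main__":
    for label, rfpd in [("read-intensive", 3), ("mixed-workload", 10), ("write-intensive", 20)]:
        # An 800 GB drive at 20 fills/day must absorb roughly 29,200 TB over 5 years.
        print(f"{label}: {total_writes_tb(800, rfpd):,.0f} TB written over 5 years")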

In enterprise applications, software defined storage has many different definitions and interpretations, from virtualized pools of storage, to storage as a service.  For this article, we will stick to the application of software and firmware in flash-based storage SSDs to help address the varied applications from cold storage to high performance SSDs and caching cost effectively. There are a few key reasons why the industry prefers this approach:
  1. As the risk and cost associated with controller developments have risen, the concept of using software to generate optimizations is not only becoming popular, it’s a necessity.  Controller developments typically amount to several tens of millions of dollars for the silicon alone, and they often require several revisions to the silicon, which adds to the cost and risk of errors.
  2. The personnel skillset required for high-speed design and specific protocol optimizations (SAS or NVMe) are not easy to find.  Thus, software-defined flash, using firmware that has traditionally been deployed to address bugs found in the silicon, is increasingly being used to optimize solutions for different usage models in the industry.  For example, firmware and configuration optimizations for PMC’s SAS flash controller described below cost around 1/10th of the silicon development and the benefits of that are seen at the final product cost.
  3. Product validation costs can also be substantial and cycles long for enterprise SSDs, so time-to-market solutions also leverage silicon and firmware re-use as extensively as feasible.
Supporting these disparate requirements that span cold storage to high-performance SSDs for database applications cost-effectively requires a well-planned, flexible silicon architecture that will allow for software defined solutions.  These solutions need to support software optimizations based around (to name a few):

  • Different densities and over-provisioning NAND levels
  • Different types of NAND (SLC/MLC/TLC) at different nodes
  • Different power envelopes (9W and 11W typical for SAS, 25W for PCIe)
  • Different amounts of DRAM
  • Support for both Toggle and ONFI, in order to maintain flexibility of NAND use

The table below shows the many different configurations that PMC’s 12G SAS flash processor supports:



Using a flexibly architected controller, you can modify features including power, flash density, DRAM density, flash type and host interface bandwidth for purpose-built designs based on the same device. And this allows you to span the gamut from cold storage (cost-effective but lower performance) to a caching adaptor (premium memory usage and higher performance) through different choices in firmware and memory. The key is that firmware and hardware be architected flexibly.  Here are three common design challenges that can be solved with software defined flash and a flexible SSD processor:

  • Protocol communication between the flash devices:  Not only does NAND from different vendors (ONFI and toggle protocols) differ, but even within each of these vendors’ offerings, there can be changes to the protocol.  Examples are changing from five to six bytes of addressing, or adding prefix commands to normal commands.  Having the protocol done by firmware allows the flexibility to adapt to these changes.  Additionally, having a firmware-defined protocol allows flash vendors to design in special access abilities.
  • Flash has inconsistent rules for order of programming and reading: A firmware-based solution can adapt to variable rules and use different variations of flash, even newer flash that might not have been available while developing the hardware.  By having both the low-level protocol handling, as well as control of the programming and reading all in firmware, it allows for a solution that is flexible enough to use many types and variations of flash.
  • Fine-tuning algorithms/product differentiation: Moving up to the higher level algorithms, like garbage collection and wear leveling, there are many intricacies in flash. Controlling everything from the low level up to these algorithms in firmware allows for fine-tuning of these higher level algorithms to work best with the different types of flash.  This takes advantage of the differences flash vendors put into their product so they can be best leveraged for diverse applications.

A flexible architecture that can support software defined flash optimizations is the key to supporting many different usage models, types of NAND and configurations. It also helps reduce cost, which will accelerate deployment of NAND-based SSDs and ultimately enhance the end-user experience.
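
As a rough sketch of the configuration-driven flexibility described above (the field names and values are hypothetical, not PMC's firmware interface), the same controller abstraction might be parameterized per product roughly as follows.

# Hypothetical configuration layer for a flexible SSD processor.
# Field names and values are illustrative only.

from dataclasses import dataclass

@dataclass
class SsdConfig:
    nand_type: str          # "SLC", "MLC" or "TLC"
    nand_interface: str     # "ONFI" or "Toggle"
    raw_capacity_gb: int
    overprovision_pct: int  # spare area reserved for wear leveling / garbage collection
    dram_mb: int
    power_budget_w: float   # e.g. 9 or 11 W for SAS, 25 W for PCIe
    host_interface: str     # "SAS" or "PCIe"

    def usable_capacity_gb(self) -> float:
        """User-visible capacity after over-provisioning is set aside."""
        return self.raw_capacity_gb * (1 - self.overprovision_pct / 100)

# Two purpose-built products from the same silicon, differing only in firmware/configuration.
cold_storage = SsdConfig("TLC", "ONFI", 2048, 7, 512, 9.0, "SAS")
caching_adaptor = SsdConfig("MLC", "Toggle", 1024, 28, 2048, 25.0, "PCIe")

if __name__ == "__main__":
    for cfg in (cold_storage, caching_adaptor):
        print(cfg.host_interface, cfg.nand_type, f"{cfg.usable_capacity_gb():.0f} GB usable")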

Source: [1] IDC Worldwide Solid State Drive 2013-2017 Forecast Update, doc #244353, November 2013.

About the Author

Rahul Advani has served as Director of Flash Products for PMC’s Enterprise Storage Division since July 2012. Prior to joining PMC, he was director of Enterprise Marketing at Micron Technology, director of Technology Planning at Intel, and a product manager with Silicon Graphics. He holds a BS in Electrical Engineering from Cornell University and he received his PhD in Engineering and management training from the Massachusetts Institute of Technology.

About PMC

PMC® (Nasdaq: PMCS) is the semiconductor innovator transforming networks that connect, move and store big data. Building on a track record of technology leadership, the company is driving innovation across storage, optical and mobile networks. PMC’s highly integrated solutions increase performance and enable next-generation services to accelerate the network transformation.

Tuesday, December 17, 2013

CTO Viewpoint: Top Predictions for 2014

By Martin Nuss, Vitesse Semiconductor

As 2013 draws to a close, it’s time to ponder what’s next. We know connections are growing, as previously unconnected devices are now joining smart phones and tablets in the network, but how will they be networked? Furthermore, how will networks handle these additional connections, which are only going to grow faster in 2014? And lastly, how will all of these links be secured? Many advanced technologies have been developed for these exact questions. Here’s what I see coming to the forefront in 2014.

The Internet of Things: The Next All-Ethernet IP Network

Today’s world is defined by networking – public, private, cloud, industrial, you name it. Eventually everything will be connected, and mostly connected wirelessly. According to Morgan Stanley projections, 75 billion devices will be connected to the Internet of Things (IoT) by 2020. Clearly all these devices will need to be networked, and must be securely accessible anywhere, anytime.

Proprietary communications and networking protocols have long dominated networking within Industrial applications. With higher bandwidth and increased networking demands in Industrial process control, Smart-Grid Energy Distribution, Transportation, and Automotive applications, Industrial networks are transitioning to standards-based Ethernet networking.

Networks within the broad-based Industrial applications realm will need many of the same capabilities developed for Carrier Ethernet, such as resiliency, high availability, accurate time synchronization, low power, security, and cloud connectivity. In 2014, we believe IoT will be the next network to move entirely to Ethernet-IP based networking, building on work done in the Carrier Ethernet space. We also believe security, timing, reliability and deterministic behavior will become important requirements for these connected networks.

Network Security Sets Sights On Authentication, Authorization, Accounting (AAA) and Encryption

There will be more than 10 billion mobile devices/connections by 2017, including more than 1.7 billion M2M connections, according to Cisco’s most recent Visual Networking Index projections. As the number of network connections increase, so do the vulnerabilities. Anything with an IP address is theoretically hackable, and networking these devices without physical security heightens risk.

Security has long been an important issue, and the continued strong growth in the number of mobile Internet connections will bring more challenges in 2014. Operators will need to rely on the most advanced technologies available. New mobile devices with bandwidth-hungry applications, and the small cell networks needed to support them, exponentially multiply the number of network elements required in mobile networks. Long gone are the days of network equipment residing solely in physically secure locations like a central office or a macro base station. The network edge is particularly vulnerable because it is part of the Carrier network, but not physically secure. New types of access points directly exposed to users pose the obvious security concern. The BYOD trend introduces a new layer of vulnerable access points for enterprises to protect. Small cells are also particularly susceptible to hackers, as they are often installed outdoors at street level or indoors in easy-to-reach locations. Strong encryption of these last mile links can provide the necessary confidentiality of data. Authentication, authorization, and the corresponding accounting trails will ensure both the users and the equipment remain uncompromised.

In 2014, we expect that encryption and AAA will become key topics as Carrier equipment migrates to lamp posts, utility poles, and traffic signals. Encryption directly at the L2 Ethernet layer makes the most sense, especially as service providers offer more Carrier Ethernet Layer 2 (L2) VPN services. Fortunately, new MACsec technologies make it a viable option for wired and wireless WAN security.

SDN Looks Promising, But Carriers’ 2014 Focus Will Be On NFV

Software Defined Networking (SDN) and Network Function Virtualization (NFV) are widely discussed, but realization in Carriers’ networks is still some time away. Unlike datacenters, where SDN can be rolled out relatively easily, Carriers must modernize their complex operational structures before implementing SDN.

SDN’s biggest potential benefit to Carrier networks is its ability to create multiple, virtual private networks on top of a single physical network, which distributes costs by allowing multiple customers and service providers to securely share the same network. However, the entire network needs to support SDN in order to do that. On the other hand, NFV is about testing and deploying new services at the IP Edge faster and with lower CapEx. How? It’s made possible by creating the service in software, rather than with dedicated hardware. As long as the equipment at the network edge is NFV-ready, Carriers can create new services in centralized and virtualized servers. This captures Carriers’ imagination, since NFV promises a faster path to revenue with less risk and investment required. One of the first NFV applications we will see is Deep Packet Inspection (DPI). Because SDN requires spending money in order to save money, expect to see more Carrier attention to NFV in 2014.

4G RAN Sharing Becomes Widespread, Later Followed by 5G

Many see 5G as the next big thing, but beyond ‘more bandwidth’ little is defined, and the business drivers aren’t as clear as they were for 4G/LTE. We anticipate 5G will not fully materialize until 2020. Again, operators will need to upgrade networks for its deployment, and this might provide an opportunity to unify fixed, mobile, and nomadic network access.

In 2014, expect RAN sharing to become much more commonplace, with the financially strongest MNOs (Mobile Network Operators) installing the RAN infrastructure and leasing capacity back to other wireless service providers. This will allow participating operators to trade off CapEx and OpEx considerations. SDN (Software Defined Networking) will play a major role in slicing the RANs any way possible to partition the network infrastructure, while also virtualizing many aspects of the RAN.

About the Author

Martin Nuss, Ph.D. is Vice President, Technology and Strategy and Chief Technology Officer at Vitesse Semiconductor. Dr. Nuss has over 20 years of technical and management experience. He is a recognized industry expert in timing and synchronization for communications networks. Dr. Nuss serves on the board of directors for the Alliance for Telecommunications Industry Solutions (ATIS), is a fellow of the Optical Society of America and is a member of the IEEE. He holds a doctorate in applied physics from the Technical University in Munich, Germany.

About Vitesse

Vitesse (Nasdaq: VTSS) designs a diverse portfolio of high-performance semiconductor solutions for Carrier and Enterprise networks worldwide. Vitesse products enable the fastest-growing network infrastructure markets including Mobile Access/IP Edge, Cloud Computing and SMB/SME Enterprise Networking. Visit www.vitesse.com or follow us on Twitter @VitesseSemi.

Wednesday, October 23, 2013

Blueprint Column: Quality Counts When Managing Millions of Network Elements Worldwide

By Deepti Arora, Vice President of Quality and Executive Board Member, NSN


Complex network infrastructures, a shortage of engineers skilled in cutting-edge technologies, and demand for fast-paced service deployment, are making it increasingly appealing for network operators to tap into additional resources and talent through outsourced managed services. Yet, the moment an operator considers leveraging a global service delivery model, the issue of how to deliver quality becomes a concern.
“How do we ensure a consistent customer experience when it is delivered by people across so many time zones and organizations? How can we keep a lid on costs while managing so much complexity? How do we protect the privacy of our company’s operational and customer data with so many touch points across the globe?”

Best-in-class quality management is fundamental

When taking on these challenges, best-in-class quality management systems are important for achieving the outcomes operators and suppliers strive for. Adherence to global multi-site quality management (ISO 9001) ensures clear processes are defined and a regular rhythm of discipline is implemented for all employees. Environmental systems management (ISO 14001) is designed to help understand, manage and reduce environmental impact, and also leads to operational efficiencies in many cases. ISO 27001, an information security management system (ISMS) standard, addresses information systems with controls across 11 domains such as information security policy, governance of information security, and physical and environment security.
To raise the bar further, the QuEST (Quality Excellence for Suppliers of Telecommunications) Forum has created TL 9000 to define quality system requirements for the design, development, production, delivery, installation and maintenance of telecom products and services. Common processes and metrics within and between companies ensure consistency and a basis for benchmarking and continuous improvement.
Nokia Solutions and Networks (NSN) has made a strategic commitment to quality as a pillar of its transformation. Integral to these efforts is the commitment to Quality Management Systems to help drive improvement. This encompasses a customer-centric closed-loop approach to measuring quality and value, a rigorous focus on proactively preventing defects, and actively building quality competence and disciplines amongst employees and with suppliers. NSN is investing significant senior management attention and dedicated resources to raising the bar on quality end-to-end for operators and their subscribers.

Global delivery demands a high standard of quality management

In NSN’s primary Global Delivery Centers (GDCs) in Portugal and India, and in smaller hubs around the world, NSN supports more than 550 customers globally, including remote management of almost one million network elements and approximately 200 million subscribers annually. This means a tremendous volume of data traffic and network operations and subscriber information. Day-to-day performance management is a cornerstone for network operations’ business growth and efficiency. Relentless performance monitoring and the ability to take immediate action are imperative.
Quality at the delivery centers means implementing systems and processes to comply with the highest level of accreditations and certification. Implementation of such standards is a massive undertaking, involving the education and testing of every individual at the delivery center. Achieving a ‘zero non-conformity’ audit result from Bureau Veritas audits, as the GDC Portugal did recently, is an indicator that NSN team members have adopted the commitment to quality. Building awareness and training employees within the organization to adhere to processes and protect information and related assets also goes a long way to foster customer confidence in network operations and service delivery. The certification has provided operators with another proof point that their networks and related information are in safe hands.

A common language for quality accelerates improvement

After introducing TL 9000, one NSN business line reduced problem reports by 82%. But accelerating alignment with operators has been one of the greatest benefits of TL 9000. “With one Asian operator, we were able to use a common TL 9000 metric to evaluate monthly alarms across different vendors. Together, we were able to implement changes that improved performance and reduced costs for both of our companies,” says Scott Schroepfer, Head of Quality for Small Cells/CDMA. A common language for quality with operators around the globe is allowing NSN to accelerate improvement actions and collaboration with its customers.

Looking Forward: Planning for Quality when tapping the Cloud

Beyond managed services, operators are increasingly exploring cloud-based technology offerings as a way to completely change their business model. The benefits are compelling: expanded on-demand network resources through virtualization, faster innovation cycles for top line growth through leveraging a broader open eco-system, and a greater level of productivity and efficiencies through automation. 
But again, the issue of how to deliver quality becomes a real concern. Security, resiliency, availability, and stability can all be impacted negatively when managing and orchestrating across a myriad of virtual machines on different platforms, all in a multivendor environment. New complexities associated with the cloud paradigm will require the right set of tools and a commitment to plan for quality management from the start.
NSN is working closely on planning for the network quality requirements of cloud technology with major operators, leading cloud stack vendors such as VMware Inc., and industry forums such as the OpenStack Foundation, the ETSI Network Functions Virtualization (NFV) Industry Specification Group and the QuEST Forum. A series of proof-of-concept projects has provided the foundation for a viable telco cloud by demonstrating the running of core network software on top of virtualized network infrastructure. Further tests have shown end-to-end VoLTE deployment readiness in a telco cloud and verified the automated deployment and elastic scaling of virtualized network elements, live migration of virtual machines from one server to another, and recovery from hardware failures.
The conclusion is this: When operators strive to scale and improve productivity, through managed services or the cloud, quality cannot be an afterthought. The good news is the tools, technologies, standards and expertise are increasingly available.  Find out more about quality at NSN at http://nsn.com/about-us/company/quality. Learn about the QuEST Forum’s TL 9000 platform based on ISO 9001 to improve supply chain management effectiveness and efficiency at http://www.questforum.org.

About the Author

With more than 25 years of international experience in the telecommunications industry, Deepti Arora is the Vice President of Quality at Nokia Solutions and Networks.  Deepti has held various roles in quality, business operations, engineering, sales and general management. She has a reputation of being a dynamic, results oriented leader with a passion for customer focus, and continually challenging the status quo. Her strong technology and business expertise, along with an ability to build high performing global teams has made Deepti a valued executive in driving organizational success.

Thursday, October 17, 2013

Blueprint Tutorial: SDN and NFV for Optimizing Carrier Networks

By Raghu Kondapalli, Director of Strategic Planning at LSI

The ongoing convergence of video and cloud-based applications, along with the exploding adoption of mobile devices and services, are having a profound impact on carrier networks. Carriers are under tremendous pressure to deploy new, value-added services to grow subscriber numbers and increase revenue per user, while simultaneously lowering capital and operational expenditures.

To help meet these challenges, some carriers are creating some of these new services by more tightly integrating the traditionally separate data center and carrier networks. By extending the virtualization technologies that are already well-established in data centers into the telecom network domain, overall network utilization and operational efficiencies can be improved end-to-end, resulting in a substantially more versatile and cost-effective infrastructure.

This two-part article series explores the application of two virtualization techniques—software-defined networking (SDN) and network function virtualization (NFV)—to the emerging unified datacenter-carrier network infrastructure.

Drivers for virtualization of carrier networks in a unified datacenter-carrier network

In recent years, user expectations for “anywhere, anytime” access to business and entertainment applications and services are changing the service model needed by carrier network operators. For example, e-commerce applications are now adopting cloud technologies, as service providers continue incorporating new business applications into their service models. For entertainment, video streaming content now includes not only traditional movies and shows, but also user-created content and Internet video. The video delivery mechanism is evolving, as well, to include streaming onto a variety of fixed and mobile platforms. Feature-rich mobile devices now serve as e-commerce and entertainment platforms in addition to their traditional role as communication devices, fueling deployment of new applications, such as mobile TV, online gaming, Web 2.0 and personalized video.

Figures 1 and 2 show some pertinent trends affecting carrier networks. Worldwide services revenue is expected to reach $2.1 trillion in 2017, according to an Insight research report, while the global number of mobile subscribers is expected to reach 2.6 billion by 2016, according to Infonetics Research.



To remain profitable, carriers need to offer value-added services that increase the average revenue per user (ARPU), and to create these new services cost-effectively, they need to leverage the existing datacenter and network infrastructures. This is why the datacenters running these new services are becoming as critical as the networks delivering them when it comes to providing profitable services to subscribers.

Datacenter and carrier networks are quite different in their architectures and operational models, which can make unifying them potentially complex and costly. According to The Yankee Group, about 30 percent of the total operating expenditures (OpEx) of a service provider are due to network costs, as shown in Figure 3. To reduce OpEx and, over time, capital expenditures (CapEx), service providers are being pushed to find solutions that enable them to leverage a more unified datacenter-carrier network model as a means to optimize their network and improve overall resource utilization.

Virtualization of the network infrastructure is one strategy for achieving this cost-effectively. Virtualization is a proven technique that has been widely adopted in enterprise IT based on its ability to improve utilization and operational efficiency of datacenter server, storage and network resources. By extending the virtualization principles into the various segments of a carrier network, a unified datacenter-carrier network can be fully virtualized—end-to-end and top-to-bottom—making it far more scalable, adaptable and affordable than ever before.

Benefits of integrating datacenters into a carrier network

Leveraging the virtualized datacenter model to virtualize the carrier network has several benefits that can help address the challenges associated with a growing subscriber base and more demanding performance expectations, while simultaneously reducing CapEx and OpEx. The approach also enables carriers to seamlessly integrate new services for businesses and consumers, such as Software-as-a-Service (SaaS) or video acceleration. Google, Facebook and Amazon, for example, now use integrated datacenter models to store and analyze Big Data. Integration makes it possible to leverage datacenter virtualization architectures, such as multi-tenant compute or content delivery networks, to scale or deploy new services without requiring expensive hardware upgrades. Incorporating the datacenter model can also enable a carrier to centralize its business support system (BSS) and operation support system (OSS) stacks, thereby doing away with distributed, heterogeneous network elements and consolidating them onto centralized servers. And by using commodity servers instead of proprietary network elements, carriers are able to further reduce both CapEx and OpEx.

Integrated datacenter-carrier virtualization technology trends

The benefits of virtualization derive from its ability to create a layer of abstraction above the physical resources. For example, the hypervisor software creates and manages multiple virtual machines (VMs) on a single physical server to improve overall utilization.

While the telecom industry has lagged behind the IT industry in virtualizing resources, most service providers are now aggressively working to adapt virtualization principles in their carrier networks. Network function virtualization (NFV), for example, is being developed by a collaboration of service providers as a standard means to decouple and virtualize carrier network functions from traditional network elements, and then distribute these functions across the network more cost-effectively. By enabling network functions to be consolidated onto VMs running on a homogenous hardware platform, NFV holds the potential to minimize both CapEx and OpEx in carrier networks.

Another trend in virtualized datacenters is the abstraction being made possible with software-defined networking, which is enabling datacenter networks to become more manageable and more open to innovation. SDN shifts the network paradigm by decoupling or abstracting the physical topology to present a logical or virtual view of the network. SDN technology is particularly applicable to carrier networks, which usually consist of disparate network segments based on heterogeneous hardware platforms.

Technical overview of network virtualization

Here is a brief overview of the two technologies currently being used in unified datacenter-carrier network infrastructures: SDN and NFV.

Software-Defined Networking

SDN is a network virtualization technique based on the logical separation and abstraction of both the control and data plane functions, as shown in Figure 4. Using SDN, the network elements, such as switches, routers, etc., can be implemented in software, virtualized as shown, and executed anywhere in a network, including in the cloud.


SDN decouples the network functions from the underlying physical resources using OpenFlow®, the vendor-agnostic standard interface being developed by the Open Networking Foundation (ONF). With SDN, a network administrator can deploy a new network application by writing a program that simply manipulates the logical map for a “slice” of the network.
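
The toy sketch below illustrates that abstraction only; it is not the OpenFlow protocol or any real controller API. A "slice" is modeled as a logical table of match/action rules that a small program edits, independent of the switches that would eventually enforce them.

# Toy model of the SDN control-plane abstraction: a program edits a logical
# flow table (match -> action) for its slice; a separate step would push
# the rules to physical switches. Not real OpenFlow, illustration only.

from typing import Dict, Tuple

# A match is (source subnet, destination TCP port); an action is a named behavior.
FlowTable = Dict[Tuple[str, int], str]

def add_rule(table: FlowTable, src_subnet: str, dst_port: int, action: str) -> None:
    """Add or overwrite a rule in the logical view of this network slice."""
    table[(src_subnet, dst_port)] = action

def lookup(table: FlowTable, src_subnet: str, dst_port: int) -> str:
    """What the data plane should do with a packet matching these fields."""
    return table.get((src_subnet, dst_port), "drop")   # default-deny for unmatched traffic

if __name__ == "__main__":
    video_slice: FlowTable = {}
    add_rule(video_slice, "10.1.0.0/16", 554, "forward:cdn-cache")  # steer streaming to a cache
    add_rule(video_slice, "10.1.0.0/16", 80, "forward:core")
    print(lookup(video_slice, "10.1.0.0/16", 554))  # -> forward:cdn-cache
    print(lookup(video_slice, "10.2.0.0/16", 22))   # -> drop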

Because most carrier networks are implemented today with a mix of different platforms and protocols, SDN offers some substantial advantages in a unified datacenter-carrier network. It opens up the network to innovation. It makes it easier for network administrators to manage and control the network infrastructure. It reduces CapEx by facilitating the use of commodity servers and services, potentially by mixing and matching platforms from different vendors. In the datacenter, for example, network functions could be decoupled from the network elements, like line and control cards, and moved onto commodity servers. Compared to expensive proprietary networking solutions, commodity servers provide a far more affordable yet fully mature platform based on proven virtualization technologies, and industry-standard processors and software.

To ensure robust security—always important in a carrier network—the OpenFlow architecture requires authentication when establishing connections between end-stations, and operators can leverage this capability to augment existing security functions or add new ones. This is especially beneficial in carrier networks where there is a need to support a variety of secure and non-secure applications, and third-party and user-defined APIs.

Network Function Virtualization

NFV is an initiative being driven by network operators with the goal of reducing end-to-end network expenditures by applying virtualization techniques to telecom infrastructures. Like SDN, NFV decouples network functions from traditional network elements, like switches, routers and appliances, enabling these task-based functions to then be centralized or distributed on other (less expensive) network elements. With NFV, the various network functions are normally consolidated onto commodity servers, switches and storage systems to lower costs. Figure 5 illustrates a virtualized carrier network in which network functions, such as a mobility management entity (MME), run on VMs on a common hardware platform and an open source hypervisor, such as KVM.
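
As a loose illustration of that consolidation (the function names, VM sizes and server capacities below are invented), virtualized network functions such as an MME can be treated as resource requests packed onto a pool of commodity servers rather than dedicated appliances.

# Illustrative-only sketch of NFV-style consolidation: network functions
# become VM resource requests placed onto commodity servers (first fit).

from typing import List, Dict

SERVERS = [{"name": "server-1", "free_vcpus": 16}, {"name": "server-2", "free_vcpus": 16}]

VNF_CATALOG: Dict[str, int] = {   # vCPUs each virtualized function needs (example values)
    "MME": 8,
    "SGW": 6,
    "firewall": 4,
    "DPI": 6,
}

def place(vnfs: List[str], servers: List[dict]) -> Dict[str, str]:
    """Assign each VNF to the first server with enough spare vCPUs."""
    placement = {}
    for vnf in vnfs:
        need = VNF_CATALOG[vnf]
        for srv in servers:
            if srv["free_vcpus"] >= need:
                srv["free_vcpus"] -= need
                placement[vnf] = srv["name"]
                break
        else:
            placement[vnf] = "unplaced"   # would trigger scale-out of the server pool
    return placement

if __name__ == "__main__":
    print(place(["MME", "SGW", "DPI", "firewall"], SERVERS))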

NFV and SDN are complementary technologies that can be applied independently of each other. Or NFV can provide a foundation for SDN. By using an NFV foundation combined with SDN’s separation of the control and data planes, carrier network performance can be enhanced, its management can be simplified, and new services can be more easily deployed. 


***********************************

 Raghu Kondapalli is director of technology focused on Strategic Planning and Solution Architecture for the Networking Solutions Group of LSI Corporation.

Kondapalli brings rich experience and deep knowledge of the cloud, service provider and enterprise networking businesses, specifically in packet processing, switching and SoC architectures.

Most recently he was a founder and CTO of cloud-based video services company Cloud Grapes Inc., where he was the chief architect for the cloud-based video-as-a-service solution.  Prior to Cloud Grapes, Kondapalli led technology and architecture teams at AppliedMicro, Marvell, Nokia and Nortel. Kondapalli has about 25 patent applications in process and has been a thought leader behind many technologies at the companies where he has worked.

Kondapalli received a bachelor’s degree in Electronics and Telecommunications from Osmania University in India and a master’s degree in Electrical Engineering from San Jose State University.

Monday, April 1, 2013

Cyber 3.0 - Where the Semantic Web and Cyber Meet

by John Trobough, President, Narus

The term “Cyber 3.0” has been used mostly in reference to the strategy described by U.S. Deputy Defense Secretary William Lynn at an RSA conference. In his Cyber 3.0 strategy, Lynn stresses a five-part plan as a comprehensive approach to protect critical assets. The plan involves equipping military networks with active defenses, ensuring civilian networks are adequately protected, and marshaling the nation’s technological and human resources to maintain its status in cyberspace.

Cyber 3.0 technologies will be the key to enabling such protection. Cyber 3.0 is achieved when the semantic Web’s automated, continuous machine learning is applied to cybersecurity and surveillance.

Cyber 3.0 will be the foundation for a future in which machines drive decision-making. But Cyber 3.0’s ability to deliver greater visibility, control and context has far-reaching implications in our current, hyper-connected environment, where massive amounts of information move easily and quickly across people, locations, time, devices and networks. It is a world where human intervention and intelligence alone simply can’t sift through and analyze information fast enough. Indeed, arming cybersecurity organizations with the incisive intelligence afforded by this machine learning means cybersecurity incidents are identified and security policies are enforced before critical assets are compromised.

THE PERFECT STORM: CONFLUENCE OF HYPER-CONNECTIVITY, MOBILITY AND BIG DATA

To grasp the full weight of what Cyber 3.0 means, it is important to first put the state of our networked world into perspective. We can start by stating categorically that the Internet is changing: access, content, and application creation and consumption are growing exponentially.

From narrowband to broadband, from kilobits to gigabits, from talking people to talking things, our networked world is changing forever. Today, the Internet is hyper-connecting people who are now enjoying super-fast connectivity anywhere, anytime and via any device. They are always on and always on the move, roaming seamlessly from network to network. Mobile platforms and applications only extend this behavior. As people use a growing collection of devices to stay connected (i.e., laptops, tablets, smartphones, televisions), they change the way they work and collaborate, the way they socialize, the way they communicate, and the way they conduct business.

Add to this the sheer enormity of digital information and devices that now connect us: Cisco estimates that by 2015, the amount of data crossing the Internet every five minutes will be equivalent to the total size of all movies ever made, and that annual Internet traffic will reach a zettabyte — roughly 200 times the total size of all words ever spoken by humans [2]. On a similar note, the number of connected devices will explode in the next few years, reaching an astonishing 50 billion by 2020 [3]. By this time, connected devices could even outnumber connected people by a ratio of 6-to-1 [4]. This interconnectedness indeed presents a level of productivity and convenience never before seen, but it also tempts fate: the variety and number of endpoints — so difficult to manage and secure — invite cyber breaches, and their hyper-connectivity guarantees the spread of cyber incidents as well as a safe hiding place for malicious machines and individuals engaged in illegal, dangerous or otherwise unsavory activities.

CYBER 3.0

Cyber is nonetheless integral to our everyday lives. Anything we do in the cyber world can be effortlessly shifted across people, locations, devices and time. While on one hand, cyber is positioned to dramatically facilitate the process of knowledge discovery and sharing among people (increasing performance and productivity and enabling faster interaction), on the other, companies of all sizes must now secure terabytes and petabytes of data. That data enters and leaves enterprises at unprecedented rates, and is often stored and accessed from a range of locations, such as from smartphones and tablets, virtual servers, or the cloud.
On top of all this, all the aforementioned endpoints have their own security needs, and the cybersecurity challenge today lies in how to control, manage and secure large volumes of data in increasingly vulnerable and open environments. Specifically, cybersecurity organizations need answers to how they can:

• Ensure visibility by keeping pace with the unprecedented and unpredictable progression of new applications running in their networks

• Retain control by staying ahead of the bad guys (for a change), who breach cybersecurity perimeters to steal invaluable corporate information or harm critical assets

• Position themselves to better define and enforce security policies across every aspect of their network (elements, content and users) to ensure they are aligned with their mission and gain situational awareness

• Understand context and slash the investigation time and time-to-resolution of a security problem or cyber incident

Unfortunately, cybersecurity organizations are impeded from realizing any of these. This is because their current solutions require human intervention to manually correlate growing, disparate data and identify and manage all cyber threats. And human beings just don’t scale.

CYBER 3.0: THE ANSWER TO A NEW GENERATION OF CYBER CHALLENGES

Indeed, given the great velocity, volume and variety of data generated now, the cyber technologies that rely on manual processes and human intervention — which worked well in the past — no longer suffice to address cybersecurity organizations’ current and future pain points, which correlate directly with the aforementioned confluence of hyper-connectivity, mobility and big data. Rather, next-generation cyber technology that can deliver visibility, control and context despite this confluence is the only answer. This technology is achieved by applying machine learning to cybersecurity and surveillance, and is called Cyber 3.0.

In using Cyber 3.0, human intervention is largely removed from the operational lifecycle, and processes, including decision-making, are tackled by automation: Data is automatically captured, contextualized and fused at an atomic granularity by smart machines, which then automatically connect devices to information (extracted from data) and information to people, and then execute end-to-end operational workflows. Workflows are executed faster than ever, and results are more accurate than ever. More and more facts are presented to analysts, who will be called on only to make a final decision, rather than to sift through massive piles of data in search of hidden or counter-intuitive answers. And analysts are relieved from taking part in very lengthy investigation processes to understand the after-the-fact root cause.

In the future, semantic analysis and sentiment analysis will be implanted into high-powered machines to:

• Dissect and analyze data across disparate networks

• Extract information across distinct dimensions within those networks

• Fuse knowledge and provide contextualized and definite answers

• Continuously learn the dynamics of the data to ensure that analytics and data models are promptly refined in an automated fashion

• Compound previously captured information with new information to dynamically enrich models with discovered knowledge

Ultimately, cybersecurity organizations are able to better control their networks via situational awareness gained through a complete understanding of network activity and user behavior. This level of understanding is achieved by integrating data from three different planes: the network plane, the semantic plane and the user plane. The network plane mines traditional network elements like applications and protocols; the semantic plane extracts the content and relationships; and the user plane establishes information about the users. By applying machine learning and analytics to the dimensions extracted across these three planes, cybersecurity organizations have the visibility, context and control required to fulfill their missions and business objectives.

Visibility: Full situational awareness across hosts, services, applications, protocols and ports, traffic, content, relationships, and users to determine baselines and detect anomalies

Control: Alignment of networks, content and users with enterprise goals, ensuring information security and intellectual property protection

Context: Identification of relationships and connectivity among network elements, content and end users
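
As a simplified, purely illustrative sketch (the feature names and weights are invented, not Narus's analytics), fusing indicators from the three planes into a single score might look like this:

# Simplified illustration of fusing network-, semantic- and user-plane
# features into one anomaly score. Features and weights are invented.

def anomaly_score(network: dict, semantic: dict, user: dict) -> float:
    """Weighted combination of per-plane indicators, each expected in [0, 1]."""
    score = 0.0
    score += 0.4 * network.get("unusual_port_activity", 0.0)
    score += 0.3 * semantic.get("sensitive_content_ratio", 0.0)
    score += 0.3 * user.get("off_hours_access", 0.0)
    return score

def classify(score: float, threshold: float = 0.6) -> str:
    """Escalate to an analyst only when the fused score crosses a threshold."""
    return "escalate" if score >= threshold else "baseline"

if __name__ == "__main__":
    s = anomaly_score({"unusual_port_activity": 0.9},
                      {"sensitive_content_ratio": 0.5},
                      {"off_hours_access": 0.8})
    print(round(s, 2), classify(s))   # 0.75 escalate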

Clearly, these three attributes are essential to keeping critical assets safe from cybersecurity incidents or breaches in security policy. However, achieving them in the face of constantly changing data that is spread across countless sources, networks and applications is no small task — and definitely out of reach for any principles or practices that rely even partly on human interference. Moreover, without visibility, control and context, one can never be sure what type of action to take.

Cyber 3.0 is not a mythical direction of what “could” happen. It’s the reality we will face as the Web grows, as new technologies are put into practice, and as access to more and more devices continues to grow. The future is obvious. The question is: How will we respond?

By virtue of machine learning capabilities, Cyber 3.0 is the only approach that can rise to these challenges and deliver the incisive intelligence required to protect our critical assets and communities now and into the future.

About the Author



John Trobough is president of Narus, Inc., a subsidiary of The Boeing Company (NYSE: BA).  Trobough previously was president of Teleca USA, a leading supplier of software services to the mobile device communications industry and one of the largest global Android commercialization partners in the Open Handset Alliance (OHA). He also held executive positions at Openwave Systems, Sylantro Systems, AT&T and Qwest Communications.







About the Company


Narus, a wholly owned subsidiary of The Boeing Company (NYSE:BA), is a pioneer in cybersecurity.  Narus is one of the first companies to apply patented advanced analytics to proactively identify cyber threats from insiders and outside intruders. The innovative Narus nSystem of products and applications is based on the principles of Cyber 3.0, where the semantic Web and cyber intersect. Using incisive intelligence culled from big data analytics, Narus nSystem identifies, predicts and characterizes the most advanced security threats, empowering organizations to better protect their critical assets. Narus counts governments, carriers and enterprises around the world among its growing customer base. The company is based in the heart of Silicon Valley, in Sunnyvale, California.

Thursday, January 3, 2013

Understanding the Full Scope of the BYOD Opportunity for Carriers

by Ray Greenan, Global Marketing Director, Symantec Communications Service Providers


Over the past couple of years mobility has become one of the most important business and IT strategy topics, and the focus on it is only going to increase in 2013. It has also become increasingly difficult to have a discussion about business mobility without including the bring-your-own-device, or BYOD, trend.

In fact, recent research by analyst firm Ovum indicates that by 2017, there will be 443,939,000 BYOD mobile connections worldwide. This number is impressive on its own, but is even more striking when it is compared to Ovum’s estimate that there will be 532,778,000 corporate-liable mobile connections worldwide by 2017 as well. Thus, in just five years from now there will be nearly as many employee-liable devices moving in and out of corporate networks as there will be corporate-liable devices.

At first glance, the concept of BYOD is quite simple: Allow employees to supply their own devices, thereby increasing employee satisfaction and hopefully reducing capital – and perhaps even operational – expenditures. Generally, and especially for the purposes of this article, the “device” in BYOD refers to mobile devices, particularly carrier network-connected smartphones and tablets.

However, for all its potential benefits BYOD also creates security and management challenges. After all, at the end of the day, BYOD involves IT relinquishing at least some control over the devices connecting to corporate networks, resources and data. As always, there is some risk when relinquishing any such control.

Because of this, impressions of BYOD range from company to company, with some embracing it wholeheartedly, some remaining cautiously optimistic and some still approaching the topic with outright reproach. That said, there is also a common belief among nearly all of these organizations: BYOD in some form or another is largely inevitable.

Because companies realize that BYOD is going to happen whether they promote it or not, many are coming to the conclusion that they can at least make it happen on their own terms. This involves efficiently enabling employee-liable devices, establishing strong policies for their acceptable use, and utilizing technology to enforce those policies and secure mobile devices against a myriad of threats, from loss or theft to malware.

When all of this is taken into account, there are many companies that simply either cannot or do not want to assume this burden. Some of these are enterprise-class organizations that are finding it more financially viable to outsource the management of their mobile infrastructure, while many others are small- to medium-sized companies who do not have the resources. After all, SMBs often already have their hands full with managing the demands of their traditional IT infrastructure and endpoints. Add mobility and BYOD to the mix and often overtaxed IT staffs become spread even thinner.

This is all excellent news for the wireless telecommunications industry. Why? Because herein lies a tremendous opportunity for carriers that have built trusted networks to step into the role of managed service provider for companies such as those described here. And while some are already doing this to a degree, there is much more opportunity than first meets the eye. In fact, there are five specific areas of business mobility that carriers should seek to address on behalf of their customers.

These include:

App and Data Protection
Business data must be protected at all times. This is a primary objective of any IT organization, and the reason that most IT technologies exist in the first place.  Mobile apps are the primary method to access, view, store and transmit that data, so both apps and data must have controls and protection appropriate to the company and industry.

User and App Access
At all times, the people, apps and devices connecting to and accessing business assets must be identified and validated as authorized business participants. Identity is the first and most important component of any IT strategy, especially where mobility is involved, because access from devices and the cloud is not inherently as tightly controlled.

Device Management
Devices that access business assets and connect to company networks must be managed and secured according to applicable company policies and industry regulations. Every company should establish appropriate mobile policies, and those policies should be applied to all managed devices (a simple sketch of such a policy check follows these five areas).

Threat Protection
With the incredible growth of mobile devices, they are rapidly becoming a key target for cyber criminals. Protecting devices, and the apps and data on them, is paramount to securing business data. Good threat protection should guard against external attacks, rogue apps, unsafe browsing and theft.

Secure File Sharing
Although file access, storage and sharing are not challenges unique to mobile, a mobile device is typically just one of several devices a user may have, which makes the cloud the obvious and simple solution for distributing and synchronizing information across devices. Businesses should have full administrative control over the distribution of, and access to, business documents on any network, and especially in the cloud.
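
To make these areas more concrete, here is a minimal sketch, in Python, of how a carrier acting as a managed service provider might encode a company's mobile policy and check a managed device against it, along the lines of the Device Management and Threat Protection areas above. The class names, fields and rules are hypothetical illustrations rather than any vendor's API; in a real deployment the device snapshot would come from an MDM agent, and non-compliance would feed access decisions such as quarantining the device or blocking corporate email.

# A minimal sketch (not a vendor API) of encoding and evaluating a mobile
# device policy of the kind described above. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MobilePolicy:
    """Company policy applied to every managed device."""
    min_os_version: tuple = (6, 0)          # require OS version 6.0 or later
    require_encryption: bool = True
    require_passcode: bool = True
    blocked_apps: List[str] = field(default_factory=lambda: ["rogue.fileshare"])


@dataclass
class DeviceProfile:
    """Snapshot of a managed device as reported by an MDM agent."""
    device_id: str
    os_version: tuple
    encrypted: bool
    passcode_set: bool
    installed_apps: List[str]


def compliance_violations(device: DeviceProfile, policy: MobilePolicy) -> List[str]:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    if device.os_version < policy.min_os_version:
        violations.append("OS version below minimum")
    if policy.require_encryption and not device.encrypted:
        violations.append("storage not encrypted")
    if policy.require_passcode and not device.passcode_set:
        violations.append("no passcode set")
    for app in device.installed_apps:
        if app in policy.blocked_apps:
            violations.append(f"blocked app installed: {app}")
    return violations


if __name__ == "__main__":
    device = DeviceProfile("byod-001", (5, 1), encrypted=True,
                           passcode_set=False,
                           installed_apps=["mail", "rogue.fileshare"])
    print(compliance_violations(device, MobilePolicy()))
    # ['OS version below minimum', 'no passcode set',
    #  'blocked app installed: rogue.fileshare']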

While some carriers have begun to step into the role of managed service provider in some of the areas above, none have addressed all five; they are not yet offering a holistic managed security and management experience. Carriers would do well to expand their service offerings, and the mobile security and management market has matured to the point where effective, scalable solutions are available to help them do so. Doing so will benefit not only their customers but their bottom lines as well.

About the Author


Ray Greenan, Global Marketing Director, Symantec Communications Service Providers, is responsible for strategy and implementation of Symantec marketing solutions designed to help Communication Service Providers transform their networks and businesses to deliver new applications and services to their customers in a secure and reliable way.

Prior to his current position, Greenan spent 14 years at IBM, where he held multiple positions, including Global Marketing Executive, Service Delivery Platforms, which focused on IBM's Service Provider Delivery Environment (SPDE), and Marketing Program Director for the IBM Green Data Center group, which focused on green technology and sustainability for the utility market and its customers. He also held the position of Power Marketing Program Director, responsible for strategy and execution of marketing programs for the Greater China Region for IBM's Power Architecture and IBM's membership within Power.org.

Greenan was awarded a Masters in Business Administration in Management Information Systems from Manhattan College, where he also received his Bachelor of Arts in English. He has also earned certification from NJIT for Sustainable Design and Green Technology.

About Symantec


Symantec protects the world's information, and is a global leader in security, backup and availability solutions. Our innovative products and services protect people and information in any environment - from the smallest mobile device, to the enterprise data center, to cloud-based systems. Our world-renowned expertise in protecting data, identities and interactions gives our customers confidence in a connected world. More information is available at www.symantec.com or by connecting with Symantec at: go.symantec.com/socialmedia.

Thursday, December 20, 2012

Blueprint 2013: OSS/BSS Adapts for Complex Services

Customers are tough to get and easy to lose. The good news is that communications service providers (CSPs) have more and more options for attracting and retaining customers. Here are nine strategies that will play out in 2013.

  1. Policy will evolve from an isolated defensive capability to a business-integral offensive measure. CSPs – particularly mobile operators – currently use policy largely to protect their networks. Increasingly, they'll use policy to differentiate their offerings and services based also on customers' personal preferences and purchase and usage history. This will require an integrated solution between the PCRF and the OCS that enables a common product offer creation environment for both voice and all data product definitions (WCDMA, Wi-Fi, fixed broadband).
  2. CSPs will transform over-the-top (OTT) services from a problem into an opportunity. Consumers want ubiquitous communications services. The only way CSPs can meet that demand is by accepting that, for part of the time, they'll have to serve their customers over someone else's network. CSPs also have to accept that other CSPs and OTT providers will use their network to serve their own customers. Why? Because no one owns the customer. If a CSP won't meet their needs, customers will turn to one that can.
Instead of viewing OTT services as a problem, CSPs will increasingly look at them as a business opportunity. For example, a mobile operator could provide a certain amount of bandwidth and prioritization to a video OTT provider that agrees to share revenue because the QoS would help differentiate its service.

  3. CSPs will optimize their OSS/BSS infrastructure to accommodate increasingly complex services. Billing and service assurance will become more important for delivering an optimal mobile customer experience. As more services are introduced, and as the underlying network technologies become more complex, CSPs will focus on their OSS/BSS infrastructure as the centerpiece for ensuring a great customer experience.
  4. Tailored pricing and packaging will become a market differentiator. One size doesn't fit all. Not every mobile customer, for example, needs or can afford 5 GB per month or 20 Mbps. Tailored packaging of, say, social media services at a low weekly cost will be one way to lower the entry barrier and grow in new segments. With any technology, differentiated pricing appeals to a wider range of needs and budgets, enabling CSPs to cater to all demographics while ensuring profitability.
  5. Smartphone growth won't plateau anytime soon. Sure, smartphone penetration is already above 50 percent in markets such as the United States. But globally, it's only 15 percent. That leaves a lot of room to grow, and the growth will happen in customer segments with different needs and wallets compared with the first wave of smartphone customers. Serving them requires differentiated packaging and pricing and other innovative rate plans and service bundles, which in turn require highly flexible OSS/BSS platforms.
  6. Mobile broadband subscriptions will continue to grow. In 2012 there were approximately 1.5 billion mobile broadband subscriptions in the world: a big figure, yet still representing less than 25 percent of all mobile subscriptions. That number is expected to grow to 6.5 billion mobile broadband subscriptions globally by 2018, an uptake of over 70 percent of all mobile subscriptions. So how can operators win this new battle for market share while avoiding severe price erosion in broadband data pricing? One key requirement will be highly flexible OSS/BSS platforms that can create innovative and differentiated offerings that appeal to new segments while avoiding price erosion among existing customers.
  7. Customer experience will matter more than ever. A reputation for poor service is expensive to overcome. That is true today and will only become more important, because the pool of customers that operators are fighting over is not growing; the growth is in data usage and in the number of devices and subscriptions per customer. To grow the business, existing customers must be retained, especially since the cost of acquiring new customers is very high. Upselling new services and subscriptions is difficult with unhappy customers, while happy customers are likely to buy more, speak well of you and thereby help create growth. So make sure you can deliver on your promise. For example, a CSP that wants to offer business customers a premium experience at a premium price must first have the tools in place to ensure that experience at every step: the purchase process, activation and actual service usage.
  8. CSPs will analyze customer behavior so they can capitalize on it. Simply providing a voice-and-data pipe to a customer and collecting a fee is no longer a viable business model. Savvy CSPs realize this. They are deploying OSS/BSS solutions that enable them to analyze how all their customers (prepaid, postpaid and hybrid) are using their services and then create tailored promotions and tariffs that leverage each customer's or customer group's habits and preferences.
  9. CSPs will turn customer disgust about being blindsided into a business opportunity and market differentiator. The global backlash against bill shock is just one example. CSPs suffer financially, too, when surprised customers become former customers or share their anguish with their social networks. CSPs also bear the cost of fielding all those billing inquiries. Those are among the reasons why mobile operators, MSOs and other CSPs will increasingly provide customers with real-time control of the minutes, messages and megabytes they use. This information is particularly important for customers on shared, multi-device plans, such as a family or small business.
Providing this type of granular information in a timely manner requires an OSS/BSS solution capable of tracking, controlling and aggregating it. The same investment also enables CSPs to create specialized offers, such as giving customers who are approaching their monthly allotment the option of buying another block of minutes or messages at a special rate; a simple sketch of this kind of usage-threshold trigger follows the list. This proactive outreach benefits the CSP's reputation because customers now perceive it as being sensitive to their budgets rather than trying to nickel-and-dime them.
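
As a concrete illustration of the last strategy, below is a minimal sketch, in Python, of the usage-threshold logic described above: aggregate usage across a shared plan, detect when the plan approaches its monthly allotment, and generate a proactive top-up offer. All names, thresholds and offer wording are hypothetical and do not reflect any particular OSS/BSS product.

# A minimal sketch (not a production OSS/BSS component) of real-time usage
# control: track per-device usage on a shared plan, flag when the plan
# approaches its monthly allotment, and propose a top-up offer.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class SharedPlan:
    plan_id: str
    monthly_allotment_mb: int
    warn_at_fraction: float = 0.8                              # notify at 80% of allotment
    usage_mb: Dict[str, float] = field(default_factory=dict)   # usage per device

    def record_usage(self, device_id: str, mb: float) -> None:
        """Add a usage record for one device on the plan."""
        self.usage_mb[device_id] = self.usage_mb.get(device_id, 0.0) + mb

    def total_usage_mb(self) -> float:
        """Aggregate usage across all devices on the plan."""
        return sum(self.usage_mb.values())

    def top_up_offer(self) -> Optional[str]:
        """Return a proactive offer once the warning threshold is crossed."""
        used = self.total_usage_mb()
        if used >= self.warn_at_fraction * self.monthly_allotment_mb:
            remaining = max(self.monthly_allotment_mb - used, 0)
            return (f"Plan {self.plan_id}: {used:.0f} MB of "
                    f"{self.monthly_allotment_mb} MB used ({remaining:.0f} MB left). "
                    "Add a 1 GB block at a special rate?")
        return None


if __name__ == "__main__":
    plan = SharedPlan("family-42", monthly_allotment_mb=5000)
    plan.record_usage("dad-phone", 2100)
    plan.record_usage("teen-tablet", 2150)
    print(plan.top_up_offer())   # threshold crossed at 4000 MB, so an offer is printed

In practice the usage records would stream in from the charging system in real time, and the offer would be delivered through the CSP's notification and self-care channels.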


About the Author

Niclas Melin, Director of OSS and BSS Marketing, joined Ericsson in 1995. He specializes in understanding how real-time capabilities in OSS/BSS can improve the customer experience and create value, and has developed a deep understanding of operators’ challenges and opportunities through operator workshops, discussions with industry analysts and his former role as chairman of the Ericsson Charging User Group.

About Ericsson

Ericsson is the world's leading provider of communications technology and services. We are enabling the Networked Society with efficient real-time solutions that allow us all to study, work and live our lives more freely, in sustainable societies around the world.
Our offering comprises services, software and infrastructure within Information and Communications Technology for telecom operators and other industries. Today more than 40 percent of the world's mobile traffic goes through Ericsson networks and we support customers' networks servicing more than 2.5 billion subscribers.