
Sunday, July 18, 2021

Blueprint: Green network quality makes service operators happy

by Stefan Vallin, PhD in Network Management, Senior PLM, Juniper Networks

As we have all been hunkered down in our homes throughout the pandemic, many of us have been trying to make the best of a horrible situation, and some of us have taken on new pursuits to keep our minds active. Where possible, we are trying to live as actively as we can during these hard times; perhaps we have reconnected with the outdoors and nature. Whether it is walking, hiking, cycling or gardening, there is something about fresh air and the greenery of the world that has comforted many of us. For me, it has been an active interest in my greenhouse, for both gardening and contemplation. There is something about it that leaves me feeling free and focused on something that brings a little happiness in an otherwise chaotic world of networking and service assurance. (Watch my video below on this topic from my greenhouse.)

Having studied service assurance for my doctorate and worked across many industry roles with this focus, I have recently reflected on how this anxiety-reducing feeling of “green” comfort applies to service operators. Green network quality makes service operators and their telecom customers happy. It’s just a fact!

But why are telecom customers not happy? Because when it comes to network quality, services have been far from green. Our research at Juniper Networks shows that in the service provider industry, the telecom Net Promoter Score (NPS) is roughly half that of any other industry.

So lately I’ve been thinking about this, and about how things are actually getting worse. First, there are today’s drastically increased requirements on network quality, from both businesses and home end users who cannot live without a high-performing network. Second, there is tomorrow’s promise of 5G, which revolves around high-quality services that are ultra-reliable with ultra-low latencies. 5G is very much tied to networks becoming performance-critical, which in turn is tightly coupled with the classic concept of Service Level Agreements (SLAs). This may sound like an outdated topic that has been around for decades, but in the cloud and 5G era it is time to revisit SLAs as a key focal point of what we do in the networking industry!

The revenge of SLAs

If we look at SLAs, and at what is specifically sold to both broadband and business customers, all communications service providers tout high quality of experience and bandwidth guarantees, along with network performance that delivers extremely fast response times while minimizing loss, latency and jitter. However, when we monitor these services in the service operations center, what we mostly see is device health being monitored: alarms and performance counters from infrastructure that are not specifically related to individual customer services. This is a very device-centric approach. And although this information may make services appear green, it does not really show service operators that they are indeed meeting SLAs and keeping customers happy.

So how are service operators showing customers that they are meeting contracted SLAs? Many operators track ticket resolution times for fixing outages, as well as outage hours. However, poor performance over time and intermittent issues are a bigger problem than blackouts: they are harder to detect and they impact customers over a long period. In fact, in today’s hyper-competitive landscape, studies show that 95% of dissatisfied customers leave without even complaining. Worse, they also tell others and boast about the new deal they got with your competitor!

The cause of this poor customer experience is actually known: untested network changes that are not caught in time cost dearly, to the tune of billions, with a dramatic negative impact on reputation and customer retention.

It is such a high cost, when the change the industry needs is fairly simple and comes at a low cost of ownership. The shift is as easy as moving away from the device-centric approach and taking a service-centric approach by actively testing end-to-end network quality. To achieve true SLA guarantees, we need to start monitoring network quality key performance indicators (KPIs). These need to be measured on the data plane, whereas most monitoring solutions look to the management plane for insights. It is exactly this missing element that service operators need to make services truly green and deliver experiences that delight customers.
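To make the “green or not” judgment concrete, here is a minimal sketch of evaluating measured data-plane KPIs against contracted SLA thresholds. The thresholds and sample values are illustrative assumptions, not figures from any particular SLA or product.

```python
# Minimal sketch: evaluate measured data-plane KPIs against contracted SLA
# thresholds. The thresholds and sample values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SlaThresholds:
    max_latency_ms: float = 30.0   # per the contract
    max_jitter_ms: float = 5.0
    max_loss_pct: float = 0.1

def service_status(latency_ms: float, jitter_ms: float, loss_pct: float,
                   sla: SlaThresholds) -> str:
    """Return 'green' only when every measured KPI meets its SLA threshold."""
    violations = []
    if latency_ms > sla.max_latency_ms:
        violations.append(f"latency {latency_ms:.1f} ms > {sla.max_latency_ms} ms")
    if jitter_ms > sla.max_jitter_ms:
        violations.append(f"jitter {jitter_ms:.1f} ms > {sla.max_jitter_ms} ms")
    if loss_pct > sla.max_loss_pct:
        violations.append(f"loss {loss_pct:.2f}% > {sla.max_loss_pct}%")
    return "green" if not violations else "red: " + "; ".join(violations)

if __name__ == "__main__":
    print(service_status(latency_ms=22.4, jitter_ms=1.8, loss_pct=0.02,
                         sla=SlaThresholds()))
```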

Active Assurance is the missing piece for improving service operations 

Looking at the typical assurance stack, most operations rely on a mix of solutions. Typically, a fault and event management system presents volumes of alarms that show whether any devices are broken, answering questions such as “Is the interface up or down?” Important, but it does not tell you service health. Second, we have performance monitoring systems that look at overall network health, answering questions such as “How are my links utilized?” We also have passive probes that give a centralized understanding of traffic flows in the network and what protocols are enabled, answering questions such as “What types of traffic are in my network and how does this traffic flow?”

All of these solutions are needed within service operations, but they fail to deliver the service-centric approach needed to truly measure and guarantee end-to-end network quality for your services.

The missing piece is called “Active Assurance”.

Meet Active Assurance

Active assurance is a straightforward approach that can deliver immediate results, whether or not you already have a modern service assurance framework. It works by measuring end-to-end service quality, actively sending a small amount of traffic on the data plane to simulate an end user.
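Conceptually, an active test boils down to sending timestamped probe packets across the service and computing loss, latency and jitter from what comes back. The sketch below illustrates that idea with a self-contained UDP echo probe; it is an illustration of the principle, not Paragon Active Assurance, and the reflector would normally sit at the far end of the service rather than on localhost.

```python
# Minimal sketch of an active-assurance style probe: send a small stream of
# timestamped UDP packets on the data plane and compute loss, round-trip
# latency and jitter from the echoes. A local echo responder is started so the
# example runs standalone; in practice the reflector sits at the service edge.

import socket, struct, threading, time

def echo_server(sock):
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"stop":
            return
        sock.sendto(data, addr)          # reflect the probe unchanged

def run_probe(target, count=20, interval=0.05, timeout=0.5):
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.settimeout(timeout)
    rtts = []
    for seq in range(count):
        payload = struct.pack("!Id", seq, time.monotonic())
        tx.sendto(payload, target)
        try:
            data, _ = tx.recvfrom(64)
            _, sent = struct.unpack("!Id", data)
            rtts.append((time.monotonic() - sent) * 1000.0)  # ms
        except socket.timeout:
            pass                                             # counts as loss
        time.sleep(interval)
    loss_pct = 100.0 * (count - len(rtts)) / count
    avg = sum(rtts) / len(rtts) if rtts else float("nan")
    # jitter as the mean absolute delta between consecutive RTT samples
    jitter = (sum(abs(a - b) for a, b in zip(rtts, rtts[1:])) / (len(rtts) - 1)
              if len(rtts) > 1 else 0.0)
    return loss_pct, avg, jitter

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    threading.Thread(target=echo_server, args=(rx,), daemon=True).start()
    loss, avg_rtt, jitter = run_probe(rx.getsockname())
    print(f"loss={loss:.1f}%  avg_rtt={avg_rtt:.2f} ms  jitter={jitter:.2f} ms")
```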

With active assurance, you can easily and cost-effectively deploy a solution that will enable you to automate proactive testing and monitoring on the real-time data plane and locate emerging issues before customers are impacted.  When your services are actively assured, you will be able to guarantee service quality for your services.  This service-centric approach will also enable your service and network operations teams to ensure that all network changes are made right the first time and right all the time.  

So make your service operators and customers happy by delivering truly green network quality with active assurance!  Read our Juniper Networks Paragon Active Assurance white paper on “Service assurance in the 5G and cloud era” to learn more about how you can use it to achieve a proactive, service-centric operations model that puts your customers in focus.  


Monday, July 5, 2021

Blueprint: Leveraging automation to accelerate assured service

By: Julius Francis, Head of Product Marketing & Strategy, Juniper Networks 

Despite the world’s transition to remote operations nearly overnight, customers have maintained high expectations that their experiences would remain seamless throughout the COVID-19 pandemic. To keep pace with these expectations, service providers have had to shift their priorities to focus on automation in order to deliver reliable, efficient and scalable network operations despite surging traffic patterns and challenges posed by new disparate workforce models.

In fact, a study by Ernst & Young found that the main driver of automation adoption in telecommunications is the optimization of customer experience. Nearly four-fifths of respondents cited optimizing the customer experience as the key reason for their adoption of artificial intelligence (AI). This concept is known as ‘Experience-First Networking,’ which requires a high level of automation to ensure large-scale networks are run reliably and efficiently.

However, the benefits of network automation extend not only to customer experience, but also to other critical areas where the network plays a foundational role, including Smart Cities, 5G growth, the introduction of AI for new applications and more.

The Vital Role of Networks in Emerging Applications and Smart Cities

There’s no doubt automated networking will play a vital role in the emergence of smart cities. A recent report by ESI ThoughtLab found that North American cities, including 40 in the United States, have more advanced digital services and digital infrastructure than their international counterparts. According to the report, cities in North America are the most prepared to deliver government services built on AI, the Internet of Things (IoT) and cloud-based software.

Beyond the United States, we’re seeing the emergence of smart cities around the world. In Singapore, one of the highest-ranked smart cities in the world, there are impressive advances in the city’s technology infrastructure and ongoing digital initiatives. The smart city’s efforts were especially helpful in returning its citizens to normalcy during the COVID-19 pandemic – the government used smart facility management, IoT and surveillance to create advanced, safe and livable urban environments.

With more than two-thirds of the world’s population expected to live in cities by 2050, networks are poised to play a foundational role in supporting the future of smart cities. As planning takes shape, connectivity will need to incorporate a blend of emerging applications and technologies that will require a strong network infrastructure centered around a secure, scalable and automated multi-cloud environment. Leveraging automation and AI will be extremely important in gathering and analyzing the large amounts of data these cities produce, while allowing networks to make real-time decisions for an overall assured service experience.  

The Growth of 5G

While 5G rollouts slowed in the past year due to the pandemic, we’re now seeing rollouts quickly progressing around the world. This is crucial, as the right 5G infrastructure is required to deliver the next generation of services and experiences to consumers, enterprises and government. 

It’s important to note that AI and automation are required to reduce the operational costs and complexities of 5G networks by automating complex network functionalities and effectively using data to make decisions and solve problems. With massive speeds, huge connection densities and ultra-low-latency experiences, automation will be a critical aspect of 5G rollouts.

Further, with the expansion of 5G, we’re sure to see progress in new consumer applications (e.g., gaming and augmented/virtual/mixed reality), as well as 5G for industry verticals, consumer broadband, enterprise broadband, cloud-managed services and more. As such, service providers should expect to invest in AI and automation to manage 5G networks, ensuring they are optimized to provide assured service experiences to customers.

Tapping AI for Greater User Experience

While network operators have long relied on manual processes for managing connectivity and fixing issues that arise, this approach has consistently introduced human error into the process. The past year made it more evident that connectivity will always be in demand; therefore, it’s no surprise to see service providers continue to invest in open, agile network architectures that enable them to respond, innovate and scale smartly – driven by the expansion of 5G networks, smart cities and other emerging technologies.

AI-powered automation has the power to transform the way networks are designed, built and run by taking the guesswork out of network operations, removing human error and improving the decision-making process. This will allow service providers to deliver a consistent and assured service experience for both operations teams and customers.

As emerging technologies make networks more complex and workforces become more dispersed, service providers must embrace the concept of ‘Experience-First Networking’ and formally place automation at the forefront of their customer experience investment priorities.



Sunday, May 2, 2021

Changing the Rules of the Road with Wireless Wireline Convergence

 by Sally Bament, Vice President of Cloud and Service Provider Marketing, Juniper Networks

Imagine that network infrastructure is a highway with two parallel roads going in the same direction: one for wireless and the other for wireline traffic. But there’s a big concrete barricade between them, and no one in one lane can see what is happening in the other. Now, let’s say there’s an event that changes the rules of the road, such as the pandemic. Virtually overnight, traffic patterns change wildly. There’s less commuting traffic, as rush hour has virtually disappeared. There’s more big-rig traffic, as consumers switched to a fully digital way of life. And while the big-rig traffic could really benefit from more lanes, jumping the concrete barricade simply is not an option.

When transportation systems are rigid and traffic becomes more complicated and dynamic, what are the options? A rebuild of the physical roadway is one option, but it’s expensive, disruptive and takes far too long. Worse, the same result could occur without the ability to adapt to future traffic patterns. 

Fortunately, when it comes to improving network infrastructure, there’s an easier choice – Wireless Wireline Convergence – a set of standards that turns constrained, siloed systems into a unified stack for service delivery. Put simply, Wireless Wireline Convergence (known as WWC) doesn’t break down siloes; instead, it rewires traffic to rise above them. In other words, WWC supports co-existence, interworking and interoperability – for service providers, that means they finally have flexibility in how, where and when they move toward convergence.

Service Providers Take a Different Road 

The shift to WWC is coming at the right time as demands on service providers have reached an all-time high, requiring them to deliver seamless connectivity to subscribers as traffic patterns shifted and hit peak levels literally overnight.

After all, Fixed Mobile Convergence (WWC’s predecessor) has its limitations. Although it was designed to bridge services across siloed wireless and wireline infrastructures, Fixed Mobile Convergence failed to gain traction because software was tightly integrated with existing siloed platforms. But now, today’s leading service providers are already working to implement different aspects of WWC. Beyond the obvious advantage of a converged network with respect to operational costs, WWC has the ability to deliver new, differentiated service experiences. 

For example, it unlocks superior connectivity by ushering in a consistent access-agnostic service experience, meaning customers get consistent features across multi-access networks and different customer premises equipment. WWC also delivers improved application experiences by making it possible to aggregate available wireless and wireline bandwidth into one logical link that can improve speed, quality of service and reliability. 
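As a toy illustration of the bonding idea, the sketch below steers flows across a wireline and a wireless path so that the two behave as one logical link. The capacities and the steering rule are illustrative assumptions; real hybrid access relies on standardized WWC mechanisms rather than this simplified logic.

```python
# Toy sketch of aggregating a wireline and a wireless access path into one
# logical link: each new flow is steered to whichever path currently has the
# largest share of its (assumed, illustrative) capacity still free.

paths = {
    "wireline": {"capacity_mbps": 300.0, "load_mbps": 0.0},
    "wireless": {"capacity_mbps": 100.0, "load_mbps": 0.0},
}

def aggregate_capacity():
    return sum(p["capacity_mbps"] for p in paths.values())

def place_flow(flow_mbps):
    """Steer a flow to the path with the most spare relative capacity."""
    name = max(paths, key=lambda n: (paths[n]["capacity_mbps"] - paths[n]["load_mbps"])
                                     / paths[n]["capacity_mbps"])
    paths[name]["load_mbps"] += flow_mbps
    return name

if __name__ == "__main__":
    print(f"logical link capacity: {aggregate_capacity():.0f} Mbps")
    for mbps in [50, 25, 80, 10, 40]:
        print(f"{mbps:>3} Mbps flow -> {place_flow(mbps)}")
```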

And with so much bandwidth being consumed at the network edge from subscribers, devices and applications, the demand to turn up services even faster at the edge has never been so high. Service providers have responded quickly to manage the surge in traffic while avoiding lagging, downgraded quality, and slower speeds, but this on its own isn’t enough – now WWC helps them offer edge services at an even faster rate. 

All Roads Lead to WWC

It’s an extraordinary time for service providers around the globe as the industry undergoes long-term changes in how networks are built, designed and managed. With these changes, service providers are finally seeing a wide-open road for the convergence of wireless and wireline services in a single service stack. By taking advantage of WWC, service providers can finally break down the walls separating yesterday’s siloed architectures and build a more versatile, powerful network for the future.

WWC will play an increasingly important role in the evolution of network access and the edge. It will help enable an exhilarating degree of freedom in planning and executing business strategies, including distributing network resources where and when they are needed. As major service providers look to incorporate WWC into their strategies, they’ll soon deliver the perfect “road” to meet their changing traffic needs.


Sunday, February 21, 2021

Blueprint column: Rise of cognitive self-organizing networks

by Yatin Dharamshi, Head of Digital Operations - Orchestration and Fulfilment Engineering, Nokia

For many communications service providers (CSPs), self-organizing networks (SON) have been the golden key to efficiently configuring and optimizing booming mobile networks with closed-loop automation. SON has brought tremendous value for earlier-generation mobile technologies. And if it was not already essential back then, it is undoubtedly going to be crucial for CSPs to manage the complexity that comes with the adoption of 5G technology.

So why does 5G make self-organizing networks more exciting in the next decade compared to the last? To start, 5G is a wireless technology that promises to cut the cord – untethering people and objects from certain locations or places. To achieve this promise, 5G has introduced a slew of new capabilities and deployment options to wireless networks. These emerging technologies include network slicing, dynamic spectrum sharing, beamforming, edge cloud, orchestration and more.

On their own, these technologies already pose difficulties for humans to manage individually. Combined, they create challenges that are even harder to control and manage with the current toolset at our disposal. Further, 5G offers multiple frequency bands, which, combined with network slicing, unleash a myriad of use cases. But with a growing number of use cases (and technologies), complexity increases. This is where closed-loop automation comes in, and where SON will shine. Now more than ever, self-organizing networks will be a critical element in the shift toward autonomous operations, which will push current-generation SON to its limits.

A snapshot of SON and its capabilities

Most self-organizing networks today have five key abilities. First, SON drives automation and reduces the reliance on manual operations, from network configuration during rollout to keeping networks optimized thereafter.

Second, SON can conduct rapid, real-time detection of cell outages or degraded performance. This is crucial for network operators and CSPs to ensure that their networks can efficiently cope with unprecedented loads, as well as calibrate nearby cells to balance out the lost coverage.

The third capability of SON is that it allows for seamless connectivity, which helps network operators achieve optimal performance and overcome challenges around insufficient capacity or coverage and mobility robustness.

Fourth, self-organizing networks can carry out ongoing network monitoring and healing. A typical example is the identification of “sleeping cells,” or cells performing sub-optimally, and automatically resetting them to improve network reliability.

Lastly, SON enables network operators to manage and control costs, for example by optimizing a network’s energy consumption. This is done through active monitoring of cell loads so that cells can be switched on or off automatically when needed.
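As a rough illustration of that energy-saving loop, the sketch below applies a simple rule: power a capacity cell down when its traffic can be absorbed elsewhere, and wake it again when the coverage cell gets congested. The thresholds and the two-cell model are assumptions made purely for illustration.

```python
# Minimal sketch of a SON-style energy-saving closed loop: switch a capacity
# cell off when its load can be absorbed by the coverage cell, and back on
# when the coverage cell becomes congested. Thresholds are illustrative.

OFF_THRESHOLD = 0.15   # capacity cell load below 15% -> candidate for sleep
ON_THRESHOLD = 0.80    # coverage cell above 80% -> wake the capacity cell

def energy_saving_step(capacity_cell, coverage_cell):
    """One iteration of the loop; returns the action taken (if any)."""
    if (capacity_cell["active"]
            and capacity_cell["load"] < OFF_THRESHOLD
            and coverage_cell["load"] + capacity_cell["load"] < ON_THRESHOLD):
        # traffic can be absorbed by the coverage cell, so power down
        coverage_cell["load"] += capacity_cell["load"]
        capacity_cell.update(active=False, load=0.0)
        return "capacity cell switched off"
    if not capacity_cell["active"] and coverage_cell["load"] > ON_THRESHOLD:
        # offload roughly half the traffic back to the re-activated cell
        moved = coverage_cell["load"] / 2
        coverage_cell["load"] -= moved
        capacity_cell.update(active=True, load=moved)
        return "capacity cell switched on"
    return "no change"

if __name__ == "__main__":
    cap = {"active": True, "load": 0.05}
    cov = {"active": True, "load": 0.40}
    print(energy_saving_step(cap, cov))   # low load at night -> switched off
    cov["load"] = 0.90                    # morning peak builds up
    print(energy_saving_step(cap, cov))   # -> switched on again
```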

Upgrading with cognitive SON

Today, many higher-order self-organizing network functions require a human expert touch. By this, I mean the involvement by experienced — and at times, hard to come by — optimization engineers in the following tasks:

  • Identify and set network performance objectives
  • Evaluate network conditions across individual regions and parameters, such as rural vs. urban, high-volume vs. low-volume, and so on
  • Analyze and correct problems, while also determining if those corrections were effective

While having the eyes of a human expert on SON functions is great, the dependency can also create bottlenecks in the dynamic and radically complex 5G environment. Besides, as humans, we are naturally prone to errors. Thus, shifting from human-led automation to fully machine-led autonomous operations is key.

This is where the next generation of self-organizing networks with cognitive abilities comes into play. Cognitive SON is ideal because it brings in machine learning to take over manually driven SON functions. To do this, a mobile operator simply sets the objectives, and cognitive SON does the rest: understanding network context, identifying problems, applying and orchestrating the right actions, and evaluating their efficacy. Machine learning is truly the secret ingredient to cognitive SON’s effectiveness. Its intelligence allows for predictive analytics, so cognitive SON can characterize networks, label cells based on the deployment area and the problems present, and automatically invoke the right algorithms to reach an objective – all without the need for human intervention.
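As a very small illustration of the kind of pattern detection involved, the sketch below flags a “sleeping cell” when its throughput KPI collapses relative to its own recent history. A real cognitive SON applies trained models across many KPIs; this uses a plain z-score on synthetic data purely to show the idea.

```python
# Illustrative sketch (not any vendor's cognitive SON): flag "sleeping cell"
# behaviour by detecting a collapse in a cell's throughput KPI relative to its
# own recent history, using a simple z-score on synthetic data.

from statistics import mean, stdev

def is_sleeping(history_mbps, latest_mbps, z_threshold=-3.0):
    """True when the latest sample sits far below the cell's normal range."""
    mu, sigma = mean(history_mbps), stdev(history_mbps)
    if sigma == 0:
        return latest_mbps < mu
    z = (latest_mbps - mu) / sigma
    return z < z_threshold

if __name__ == "__main__":
    normal_hours = [118, 125, 130, 122, 127, 121, 133, 126]  # Mbps, synthetic
    print(is_sleeping(normal_hours, latest_mbps=124))   # False: within range
    print(is_sleeping(normal_hours, latest_mbps=3))     # True: throughput collapse
```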

To take your cognitive SON to the next level, moving certain functions to the edge cloud will be key as it reduces latencies. Mobile operators understand that common networks today require a long time to collect data, and it’s often not real-time or near-real-time. But on the edge cloud, real-time data collection becomes a reality, which allows for faster changes in controls or functions, thus achieving swifter reaction times to problems that may arise. Combine this capability with the predictive analytics brought forth by machine learning, and cognitive SON becomes an extremely powerful tool.

Take a leap of faith in cognitive SON

The benefits of cognitive self-organizing networks are abundant. Their intelligent automation capabilities deliver improved, more consistent customer experiences, while also ensuring timely and automatic problem detection so issues can be mitigated in near real time.

It’s understandable that not every CSP will be ready or comfortable to leap from their current human-led network optimization operations to trusting a fully machine-led autonomous system overnight. So, to ease CSPs into this new mode of working, cognitive SON offers extensive visibility and open controls so that experts can have a strong hand in influencing its operation’s journey while gradually adopting a fully autonomous system.

As we move deeper into the 5G era and CSPs continue on their journey to digitization, cognitive SON will be the key step for enabling autonomous intelligent closed-loop systems, and therefore achieving successful 5G operations.


Monday, December 21, 2020

Network Predictions 2021: TelcoDR's Danielle Royston

 by Danielle Royston, Founder, TelcoDR

A telco will figure out how to really use the public cloud and save 50% on its IT costs – or more

How will it happen? It'll move a ton of software to the cloud and prove: 1) it works; 2) it’ll save a ton of money (the company that embraces the software of the public cloud will see a 50% savings on IT costs); 3) life is sweet! (And way sweeter than it ever was before. I’m talking about taking the oldest, suckiest, super unsexy legacy applications and refactoring them for 90% savings.)

Who’ll be the bold telco? Definitely not a company in the US. Sorry America. It’ll likely be based in Asia, which has moved on from dumb private cloud, and we’ve already seen examples of successful moves to public cloud in this region (take a bow, M1). 

In 2021 we might be going back to 1981-style boldness, but it’ll be a huge move forward for modernizing the telco industry. A bold telco will successfully transition to the public cloud and show everyone else how it’s done. Note to everyone else: be prepared, this change will require all hands on deck.

Telcos will take the wrong approach – and fail

Alongside public cloud success, we’ll also witness public cloud failure in 2021. Without a proper understanding of the cloud ecosystem – and what ‘cloud native’ means: see my 2020 round-up above – telcos will make some spectacular fuck-ups. On that note: if you want to avoid being that telco, look for my blog in January where I’ll clarify cloud language and explain how each part of the telco business can benefit.

Back to those failures though. It’s common sense to move to the public cloud, but there are still so many misconceptions that telcos will get caught up in. It’s not just about infrastructure and IT, for instance. It requires a top-down, organization-wide cultural change. It requires clear communication.

Wrong moves will result in failure. Or, if not complete failure, then a load of back-tracking, additional costs and tails between legs. No one wants to hear ‘I told you so.’ Bank of America probably didn’t. For almost a decade, the institution was adamant that ignoring public cloud and obsessing about its vanity project (aka, building its own private cloud) was the way to go. It wasn’t. In 2019, Brian Moynihan, BofA chairman of the board and CEO, admitted that although it had been pursuing private cloud – and spending on private cloud – third-party cloud providers are 25-30% "cheaper.” It then teamed with IBM to develop a public-cloud computing service for banks.

There’s also the cautionary tale of Verizon, a company that thought it was a great idea to spend $1.4 billion on data center provider Terremark. It later realized it couldn’t compete with the might of the hyperscalers and dumped the business on Equinix.

People will fall for IBM’s #fakecloud

You thought the claws of Oracle were bad? In 2021, you’ll see it’s IBM that has the real talons.

In November IBM launched its cloud-for-telco play. Unfortunately for telco – and bad luck for buyers – Big Blue launched a big crock of shit. This is not cloud. It was fake news. It’s #fakecloud. In 2021 we’ll see the results from the poor suckers who’ve invested and we’ll hopefully see a greater realization that a hybrid strategy and a half-assed move to the cloud will never work.

At launch, IBM tried to persuade telco to keep things on-premise. If you do move to the BFCs, then IBM can manage it all for you. What they didn’t mention was that this would happen at a cost, and it’d be a massive waste of time. Telcos that fell for this trap last year will be adding five more years to their public cloud journey, by which time they’ll be way behind competitors that saved time and money, and whose customers love the service they offer. 

Be wary of IBM, my telco children. Do not fall for the trap!

OpenRAN will explode

The tail end of 2020 saw OpenRAN start to bubble rapidly to the surface of telco conversations. In 2021, it’s gonna explode. Vendors: be afraid, be very afraid. Ericsson’s revenue will slip even further through its fingers – something it already admitted last year, when CEO Börje Ekholm said he expected OpenRAN market developments to “impact revenues” from 2023 onwards. 

Other vendors will hemorrhage revenue as telcos realize that there is (finally!) an alternative to overpriced infrastructure and vendor lock-in. They’ll get choice, at last, picking and choosing best-of-breed elements from whomever the hell they want! More features will be driven into software. Networks will be easier and cheaper to maintain, easier and cheaper to upgrade. Spend on RAN will go from historic levels of around 90% of total spend to 50%. It might not be next year, but the development and industry excitement around disaggregated network components will certainly define the trajectory of telcos’ decision making next year.

Pioneers like Rakuten will gain column inches and market share next year. It’s no wonder: Rakuten claims operators can reduce capex by 40% with its telco-in-a-box network. Vodafone has also been staking its claim in the OpenRAN space: last November it announced it would be deploying OpenRAN technology at 2,600 mobile sites across Wales and the South West of England.

Experimentation is the name of the game here. There might be failures along the way, but telcos will be less afraid of dipping their toe in the OpenRAN water. This will gear them up for taking a plunge in the public cloud ocean down the line.

There’ll always be another G

You can’t move nowadays without being bombarded with something about a ‘G.’ Clearly people believe the hype – 5G networks will cover an estimated one billion people by the end of the year, attracting 220 million subscriptions, according to Ericsson. And it’s not all about faster speeds and greater capacity … research suggests 5G is 90% more energy efficient than legacy mobile infrastructure.

Telcos are set to ramp up 5G investment in 2021, according to Fitch Ratings, which has warned there will be increased pressure on credit metrics for most telcos worldwide. Free cash flow, it says, will be constrained over the next three years. But telcos that believe they can monetize all that 5G capex simply by boosting customer experience are kidding themselves. Instead, they should focus on bringing new ways of living into reality with the help of 5G – I’m thinking best-in-class remote work, e-learning and virtual services.

That capex pressure will only increase with demands for more connections, higher speeds, greater capacity. Telcos simply can’t afford NOT to move to the public cloud, helping them to further enrich their offerings, as well as cut time and costs with reduced latency. Only the foolish would add to that capex pressure by building their own cloud – remove that headache by using the BFCs!

Sunday, December 20, 2020

Network predictions 2021: Ciena's Steve Alexander

by Steve Alexander, CTO, Ciena

2021 will take investment to the edge

5G networks are primed to deliver faster web browsing and video streaming with reduced latency, both very appealing for consumers. But 5G can do so much more once networks have matured. Advanced 5G services like rich AR and VR, cloud gaming, telemedicine, and Industry 4.0 (the connected manufacturing revolution), all require highly reliable networks that can deliver low latency as well as higher bandwidth – but also high levels of intelligence.

For these services to take off, networks must continue to get faster, closer and smarter, utilizing automation, intelligence and software to deliver on the hype of these exciting services. Part of building faster, closer and smarter networks is building out the edge, where we need up to five times more data centers than are available today.

There is already heavy investment in building out edge data center sites to bring the cloud closer to users, and this investment will continue at pace in 2021. Carriers know they need to keep focusing on building out their edge infrastructure in these smaller data center sites, leveraging edge cloud capabilities so that services can be processed closer to users, improving user experience and delivering on the bold promises of 5G.

Hitting new network requirements will become automatic

Carriers know the demands we are placing on networks show no signs of slowing as our lives become more digital and distributed. That means network rollout will continue at pace, but networks must now be built to adapt on their own. Carriers have already taken steps to make this happen, but in 2021, we will start to see even more use of software and analytics to improve the way optical networks function.

Advanced software capabilities will redefine how network providers engineer, operate and monetize their optical networks. These software solutions were originally focused on extracting more value from existing network assets. In 2021, we will see these software solutions play a key role in new network builds, giving CSPs the ability to fine-tune, control and dynamically adjust optical connectivity and capacity.

Software will also give greater visibility into the health of the network via real-time link performance metrics and increased, end-to-end photonic layer automation. By utilizing the latest advanced software solutions, providers can monitor and mine all available network assets to be able to instantly respond to new and unexpected bandwidth demands and allocate capacity across any path in real time – a function which will become increasingly important year-on-year.
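The sketch below illustrates the kind of decision such software can automate from link telemetry: when a link runs hot, recommend shifting demand toward the path with the most headroom. Link names, capacities and utilization figures are illustrative assumptions, not data from any real network.

```python
# Minimal sketch of a telemetry-driven capacity decision: when a link's
# utilization crosses a threshold, recommend moving demand to the
# least-loaded alternative. All names and numbers are illustrative.

HOT_THRESHOLD = 0.85   # fraction of capacity

links = {
    "metro-east":  {"capacity_gbps": 400, "used_gbps": 372},
    "metro-west":  {"capacity_gbps": 400, "used_gbps": 180},
    "express-dci": {"capacity_gbps": 800, "used_gbps": 310},
}

def utilization(link):
    return link["used_gbps"] / link["capacity_gbps"]

def rebalance_recommendations():
    recs = []
    for name, link in links.items():
        if utilization(link) > HOT_THRESHOLD:
            # pick the alternative with the most spare relative capacity
            alt = min((n for n in links if n != name),
                      key=lambda n: utilization(links[n]))
            recs.append(f"{name} at {utilization(link):.0%}: shift demand toward {alt}")
    return recs

if __name__ == "__main__":
    for rec in rebalance_recommendations():
        print(rec)
```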

Increasing Digital Inclusion will be key to continued remote working

This year has demonstrated how important connectivity is for people to stay in touch, shop and work remotely to keep our economy moving.  It has also proven crucial to the continued education of students. There is a growing desire to maintain this flexibility even once Covid restrictions are lifted, but this is only possible if you have the connectivity and capacity.

In 2021, we’ll see rural connectivity and digital inclusion initiatives move higher up the political agenda, and solutions like low-orbit satellite connectivity will come to greater prominence. The solution that maximizes ultimate capacity is still scaling fiber-based broadband, but we know this can be a challenge in rural areas, so it will require a nudge from policy makers to get things moving.

If countries want to stay at the forefront of the digital economy, they must break down the barriers to rural connectivity and invest in fixing the last-mile problem. They must also continue supporting digital inclusion programmes that grant students access to technology and tools. Incentives and initiatives from the government, and an ongoing review to ensure that networks are using the most effective equipment suppliers, are certainly ways to help.

Enhanced reality will step forward as the first killer use case for 5G 

Almost as soon as talk of 5G networks first started, so too did questions about what the killer app for the new standard will be. 2021 might not be the year we get the definitive answer to that question, but it will be the year in which enhanced reality (AR and VR) applications take a step forward. However, it may not be consumer-centric services that light the path, but instead, enterprise use cases could lead the way. 

I think it’s safe to say that all of us have grown weary of online team meetings this year, and ‘Zoom fatigue’ has become a very real thing. Next year I predict we will see more instances of AR and VR being used as collaboration tools, helping remote teams regain some of the ‘live’ element of working together. These services will initially need to run over combinations of home broadband, in-building Wi-Fi, 4G and 5G networks. They will ultimately open the door to more commercial AR and VR services over 5G networks and Wi-Fi 6 further down the road. The quality of those networks will take these enhanced reality applications beyond a fun, short-term gimmick into being a viable and valuable service offering.

WebScalers and telcos expand their collaborations to improve our cloud experience

One of the biggest trends of 2020 has been the partnerships forged between telecom carriers and some of the hyperscalers. There’s no doubt this will continue and grow well beyond 2021, but as networks become increasingly software-centric, there is an opportunity to improve the delivery of new services and applications to users.

From the perspective of a WebScale operator, service provider networks often appear to be a patchwork quilt of various vendors and technologies. The suite of Internet protocols allows this complexity to be abstracted up to a set of globally uniform IP addresses, and this has served us fantastically well. At the same time, service provider networks look largely opaque to the cloud, and consequently it is hard to guarantee a user the desired cloud experience. Delivering next-generation services requires more collaboration between cloud and network. Making the network adaptive through the use of intelligent software allows coordination between service provider networks and the cloud, and will enable a generation of AR- and VR-based immersive services and applications.

Steve Alexander is Ciena’s Senior Vice President and Chief Technology Officer. He has held a number of positions since joining the Company in 1994, including General Manager of Ciena's Transport & Switching and Data Networking business units, Vice President of Transport Products and Director of Lightwave Systems.

Sunday, December 6, 2020

2021 Foresight: Predictions for Service Providers

by Sally Bament, VP of Cloud & Service Provider Marketing, Juniper Networks

COVID’s Impact

COVID puts the spotlight on preparing networks for the unknown; AI/ML will be a big focus

The COVID-19 pandemic shifted our world from physical to virtual literally overnight, placing enormous responsibility on service providers to deliver seamless real-time and near real-time experiences at peak traffic levels. Traffic patterns are shifting from mobility toward Wi-Fi and broadband networks, and as work continues to shift to the home, the lines between consumer and enterprise users continue to blur. This implies long-term changes in how service providers architect and manage their networks, particularly for enterprise customers – which, by extension, now means the home as well. Next year, we will see more focus on ensuring networks are ready for the “unknowns.” We will see accelerated investments in open, agile network architectures built on cloud principles, elastic on-demand capacity, and automation and security for an assured service experience. And with a heightened focus on service experience, we can expect automation, service assurance, AI/ML and orchestration technologies to take on an even more significant role in service provider network operations, guaranteeing service quality and simplifying operations as networks get bigger, more dynamic and more complex.

COVID accelerates the value of the edge

Networks have never been more critical than they are right now. Business, education, telemedicine and social interaction have all moved from engaging in person to engaging virtually, and multi-participant interactive video calls have become fundamental to our daily lives. We have seen massive consumption of streaming media (largely video-based), and similarly an all-time high in online gaming, each driving CDN growth. Service providers have responded fast to manage the surge in traffic while avoiding lag, downgraded quality and slower speeds. Next year, we’ll see service providers double down on investments in edge cloud, moving applications and data closer to users and connected devices to enhance the user and application experience, support new emerging low-latency applications, and make more efficient use of network transit capacity.

COVID drives network security

While security has often taken a back seat to make way for faster network speeds, the pandemic has proven that bad actors will take advantage of crises for their own gain. Next year, we’ll see service providers take a holistic, end-to-end security approach that combines network, application and end-user security to deliver a secure and assured service experience. This is especially important as we’re approaching a second wave of lockdowns and working from home becomes the new normal – which presents an enticing attack surface to attackers. In 2021, we’ll see companies investing more in Enterprise-at-Home solutions with security at the forefront, ensuring that all endpoints in the networks are secure, wherever they are.

5G

5G hype fades as monetization opportunities skyrocket

Despite the pandemic shifting operational priorities and causing some 5G rollouts to slow down, service providers have still been heavily investing in and deploying 5G networks. With over 100 commercial networks launched across the globe, and many more expected in 2021, 5G is now real, bringing new monetization opportunities for operators. With massive speeds, huge connection densities and ultra-low-latency experiences, we expect to see progress in new consumer applications (e.g. gaming, AR/VR/MR), 5G for industry verticals, consumer broadband with content bundling, enterprise broadband and cloud-managed services, and fixed wireless access services in 2021.

400G

400G deployments ramp up beyond the cloud data center

As commercial solutions become more viable to support the relentless growth in bandwidth demand, we will continue to see momentum build for 400G in 2021. While large cloud providers are driving the first wave in the data center and the wide area network, expect to see 400G ramp up in service provider networks in 2021, as well as across data center interconnect, core, peering, and CDN gateway use cases, among others. We will see large-scale rollouts of 400G in the WAN, especially in the second-half of the year, driven by the availability of lower-cost optics, lower operating expense potential with fewer ports to manage, and pay-as-you-go pricing models that will allow operators to smoothly navigate the upgrades. Looking beyond 2021, we will see 400G appear in metro aggregation nodes as 5G buildouts drive even more traffic and network densification.

Open RAN

Open Architectures remain a top theme, Open RAN is here to stay

The service provider industry’s drive towards Open Architectures will continue to gain momentum in all areas from Open Access (including Open RAN, Open OLT), Open Broadband, Open IP/Optical and Open Core. Open RAN is no longer a question of IF, but WHEN. We will see accelerated momentum in Open RAN globally with RFPs, trials and early deployments as many operators commit to democratize their radio access domain primarily to drive vendor diversity and best-of-breed innovation. While commercial widescale deployments of Open RAN are a few years out, we will see a strengthened Open RAN ecosystem, greater technology maturity and new kinds of partnerships that will fundamentally change how radio networks will be deployed, managed and leveraged for value creation in the future.



The Role Operators can play at the Edge

Over 50 billion devices are expected to come online next year, driving the need for edge-located control points to manage these devices in real time and near real time. For service providers, this makes edge compute a critical and strategic area of focus. Sally Bament, VP of Marketing at Juniper Networks, discusses the role operators can play in the edge value chain.

Thursday, October 15, 2020

Perspective: Growth occurs at the cloud edge

by Hitendra “Sonny” Soni, senior vice president worldwide sales and marketing, Kaloom

Elvis Presley sang the song “If I can dream” in ‘68, inspired by the turmoil a growing nation was going through. In today’s pandemic reality, connectivity has become more important than ever, but innovation needs to take place at multiple levels to get us where we need to be.

When our startup was founded, the assumptions that SDN and NFV would deliver programmability, automation, drive down costs and disrupt vendor lock-in had not yet materialized despite years of effort from the networking community. 

While SDN promised the Net Ops engineer’s dream of a truly programmable network, it initially enabled just a limited amount of additional software control and flexibility. Without programmability, the hardware could only perform the functions it was created with and networking would continue to lag behind the rapid advances made in other cloud technologies such as storage, compute and application development. 

Gartner’s comments about SDN’s “Plateau of Productivity” on the analyst firm’s famous hype-cycle curve have led to multiple pundit headlines such as “SDN is dead, long live SDN” and my personal favorite, “SDN has left the building.” However, these are not just naming nuances. They represent the true pitfalls of SDN as it was originally intended – specifically, taking so long to mature, being difficult to operationalize and not delivering on lowering networking’s costs.

Whatever you were doing, or wanted to do, in software, you couldn’t change what the non-programmable chip/hardware was capable of. This meant that the much-anticipated rapid pace of software development and open-source collaboration that was supposed to accelerate networking capabilities was still hamstrung by years-long hardware product cycles. If I could dream of a truly cloud-native programmable fabric, here are five characteristics that cloud-native edge solutions would have.

1. Open Source 

The real vision of SDN and NFV is built on community-based, open-source standards such as those from the IETF, ONF, The Broadband Forum, The Linux Foundation, and many others.

Recent years have seen an entire ecosystem of truly open-source, collaborative communities geared towards solving the challenges created by SDN’s initial vision. In fact, there are so many “.orgs” working on this that it can be confusing for service providers to decide which one to use to address each of their various needs. Today, many of these have joined, merged, or collaborated with the IEEE, Open Networking Foundation (ONF), Apache, Linux and – in the case of Kubernetes – its Cloud Native Computing Foundation (CNCF), among others.

2. Live Truly on the Edge

New 5G-enabled apps require extremely low latency, which demands a distributed edge architecture that puts applications close to their data source and end users. We can’t have autonomous vehicles or other mission-critical manufacturing apps experiencing loss of signal, network interruptions or increased latency. The fragility of their connection to the network means it must be automatically prioritized. Workloads need to be managed and decisions made with edge-level precision, which requires an end-to-end latency below 10 milliseconds. Much of our public cloud infrastructure today is not yet set up for this.

For example, the latency from New York to Amazon Web Services or Microsoft Azure in Northern Virginia is greater than 20 milliseconds. Simply not good enough. The Linux Foundation’s State of the Edge (SOTE) 2020 report demonstrates the importance of low latency in supporting next-gen applications.
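A quick back-of-the-envelope calculation shows why distance alone eats into a sub-10 millisecond budget, before any queuing, routing or processing delay is added. The distances below are rough, assumed figures chosen only to illustrate the point.

```python
# Back-of-the-envelope sketch: round-trip propagation delay over fiber as a
# function of distance, ignoring queuing, routing and processing (which add
# more). Distances are rough, illustrative figures.

SPEED_IN_FIBER_KM_PER_MS = 200.0   # roughly two-thirds of the speed of light

def rtt_ms(distance_km):
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

if __name__ == "__main__":
    for label, km in [("metro edge site", 30), ("regional data center", 400),
                      ("distant cloud region", 2000)]:
        print(f"{label:>22} (~{km:>4} km): propagation RTT ≈ {rtt_ms(km):.1f} ms")
```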

3. Make Real Business ‘Cents’

At the moment, the 5G business case is simply not justified, and carriers will not deploy true nationwide 5G because there is no demand for it yet and there needs to be an opportunity to monetize. For example, service providers’ revenues from smartphone users running 3G/4G were about $50 per month. However, connected cars will only generate about $1 or $2 per month, and installing 5G requires an extreme amount of upfront investment, not only in antennas but also in the backend servers, storage and networking switches required to support these apps. The reality is that 5G will require a 10x reduction in total cost of ownership for the infrastructure deployments to be profitable.

To succeed, any new technology must deliver significant economic disruption. One example of this is network slicing, or partitioning network architectures into virtual data centers while using the same shared physical infrastructure. With 5G-enabled secure, end-to-end, fully isolated network slicing, it’s conceivable that different service providers – for example MVNOs – could share the same physical network resources while maintaining different SLAs and offering differentiated services. They could also share a half- or full rack, depending on how many servers their apps require. This could enable initial 5G service rollouts while minimizing costs and risks via shared infrastructure.
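A minimal sketch of the slicing economics described above: tenants request isolated slices with guaranteed bandwidth on shared physical capacity, and a new slice is admitted only if the existing guarantees still fit. The capacity figure and tenant names are made up for illustration.

```python
# Illustrative sketch of slice admission on shared physical capacity: each
# tenant gets a guaranteed, isolated share, and a slice is only admitted if
# the guarantees still fit. Numbers and tenant names are made up.

PHYSICAL_CAPACITY_GBPS = 100.0
slices = {}   # name -> guaranteed Gbps

def admit_slice(name, guaranteed_gbps):
    committed = sum(slices.values())
    if committed + guaranteed_gbps > PHYSICAL_CAPACITY_GBPS:
        return f"reject {name}: only {PHYSICAL_CAPACITY_GBPS - committed:.0f} Gbps left"
    slices[name] = guaranteed_gbps
    return f"admit {name}: {guaranteed_gbps:.0f} Gbps guaranteed, isolated from other slices"

if __name__ == "__main__":
    for tenant, gbps in [("mvno-a", 40), ("mvno-b", 35), ("iot-slice", 30)]:
        print(admit_slice(tenant, gbps))
```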

4. Be Green

If there is anything this pandemic has taught us, it is that efficiency is king. We all stopped and realized just how much we needed to get by and how much was wasted. As the global manufacturing economy came to an abrupt halt in early 2020, we turned our focus to critical infrastructure. In that light, many local central offices (COs) are nearly maxed out in terms of available space, power and cooling, leaving little room to support additional rack units (RUs).

In fact, large regional cloud facilities were not built for the new distributed edge paradigm and service providers’ legacy Central Office (CO) architectures are even more ill-suited for the shift. Containing an odd mishmash of old and new equipment from each decade going back at least 50 years, these facilities are also typically near the limit of their space, power and cooling requirements.

This major build-out of 5G-supporting edge infrastructure will have an extremely negative impact on the environment in terms of energy consumption. According to the Linux Foundation’s State of the Edge 2020 report, edge infrastructure will consume 102,000 megawatts of power by 2028, and over $700 billion in cumulative CAPEX will be spent on edge IT infrastructure and data center facilities within the next decade. We need technology that can dramatically and rapidly reduce the power required to provide these new 5G services and apps to consumers and enterprises. The SOTE 2020 report also shows the massive need for more power to support 5G and its next-gen apps.


5. Built on Collaboration

The “edge” is complicated, and in many cases cloud players and telcos have yet to fully comprehend how to manage distributed edge locations and the next generation of applications they will run. To truly succeed, this dream takes a village to build, with carriers, network operators and cloud-native solution providers working together. Because if we dream it, we can build it. We truly believe in our calling and we may yet get to that promised land. OK, that is the last Elvis pun for this post. Thank you very much.


Hitendra “Sonny” Soni is the Senior Vice President of Worldwide Sales and Marketing at Kaloom. With over 25 years of experience in sales, business development and marketing across data center and cloud networking, converged infrastructure and management solutions, Sonny is a passionate entrepreneur who has spent his entire career bringing innovative technology to market.


Thursday, June 4, 2020

Blueprint 400GbE: The Next Era of Connectivity

by Ben Baker, Senior Director of Cloud and SP Marketing, Juniper Networks

The network lies at the heart of everything we do, and it is more important now than ever as we shift toward an increasingly remote work and lifestyle. Network traffic is at a record high and shows no signs of leveling off any time soon. Having 400GbE network capacity is critical to maintaining speed and ushering in the next era of connectivity.

400GbE offers massive increases in capacity, density and power efficiency, and the demand for these capabilities is only growing. Many of the new technology trends we are excited about will require this level of capacity, such as new cloud applications, emerging 5G networks and the recent shift to remote work. 400GbE has been a long time coming, but now that we’re entering the 5G era with increased demands for high-bandwidth applications, we’re seeing a clearer path forward.

Higher speeds can unlock new service capabilities, which pose greater revenue opportunities for service providers, cloud operators and enterprises alike. 400GbE offers more density, scale and power efficiency at a low cost, making it the perfect choice for data center networks, data center interconnect (DCI), and service provider core and transport networks. So why don’t we see more of it out there?

A Growing Demand
To understand the story of 400GbE, we have to start with the companies clamoring for it most urgently: cloud hyperscalers. Companies such as Google, Amazon and Facebook are experiencing explosive traffic growth across their data centers. Facebook, for example, generates four petabytes of new data every day. Google’s requirements for its data center networks double every 12-15 months. Trying to meet those demands with 100GbE links is like trying to pump a firehose’s worth of water through a handful of drinking straws. It’s doable, but time-intensive.

400GbE requires a lengthy time investment for equipment vendors to align around Multi-Source Agreements (MSAs) on optical standards, and therefore the fully realized potential of the technology has been delayed. These optical standards then require exhaustive testing, which can be a long, complex process. That said, the end result is hugely beneficial to the entire ecosystem and will be critical as we evolve into the next era of high-speed technology. 

As we move closer to large-scale commercial viability of 400GbE, expect to see major optical advances this year. Ongoing development of silicon photonics, for example, will further the convergence of optical transport with routing and switching, delivering pluggable transceiver capabilities on fixed configuration platforms or directly on line cards for modular platforms. 

Solving for the Next Wave of Challenges
As we prepare for these advances and wait for 400GbE economics to catch up with demand, here are some considerations to think about:

  • Think open and interoperable. Pursue platforms based on open, standards-based technologies, with multi-vendor interoperability and zero lock-in. This will save on capital expenditure in the long run and preserve operational flexibility.
  • Get 400GbE-ready. As you pursue network refresh cycles, look for fixed configuration or modular solutions that support QSFP56-DD interfaces for 400GbE services, so you can make the transition quickly and easily—such as by swapping out one pluggable with another. 
  • Prioritize inline security. For many organizations, any traffic that leaves the data center must be encrypted. Look for 400GbE solutions that can provide MACsec encryption inline, so you don’t have to use separate components that increase power consumption and costs while sapping performance. 
  • Achieve telemetry at scale. To manage and monitor your network as you scale up, you need networking equipment that can scale telemetry capabilities as well. That means supporting millions of counters and many millions of filter operations per second; a minimal example of turning such counters into utilization figures is sketched after this list. 
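As referenced in the telemetry bullet above, here is a minimal sketch of what each streamed counter ultimately feeds: turning two successive octet-counter samples into a utilization figure. The interface speed and sample values are illustrative assumptions, and counter wrap handling is omitted for brevity.

```python
# Minimal sketch of per-counter telemetry math: convert two successive octet
# counter samples into a utilization percentage. A real collector would
# stream millions of such counters; values here are synthetic.

def utilization_pct(octets_t0, octets_t1, interval_s, speed_bps):
    """Utilization from two octet-counter samples taken interval_s apart."""
    bits = (octets_t1 - octets_t0) * 8
    return 100.0 * bits / (interval_s * speed_bps)

if __name__ == "__main__":
    SPEED_400G = 400e9
    # two samples of an interface octet counter, 10 seconds apart (synthetic)
    t0, t1 = 9_120_000_000_000, 9_480_000_000_000
    print(f"utilization ≈ {utilization_pct(t0, t1, 10, SPEED_400G):.1f}%")
```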
The Bottom Line
Internet traffic is growing exponentially, and operators are trying to navigate the path forward to keep up. Service providers and cloud operators are fast approaching the tipping point where commercial solutions become viable. Delivering the next generation of digital services and applications—or even just supporting current ones more efficiently—requires a strong foundation powered by 400GbE.

Tuesday, February 11, 2020

Blueprint column: End users are now demanding virtualized services

by Prayson Pate, CTO, Edge Cloud, ADVA

I used to ask the question: Why do customers of managed services care about NFV?

My answer was: They don’t. But they do care about the benefits of NFV, such as choice, services on demand, and new commercial models such as pay-as-you-go and try-before-you-buy.

But the situation has changed. Now, customers looking at managed services are asking for virtualized solutions. Our sources show that half of end-user tenders for managed services call for universal CPE (uCPE) by name. They want the benefits of a managed service, combined with the benefits of virtualization, without the headaches of doing it themselves.

And, in case you forgot, uCPE is the replacement of a stack of communications devices (e.g., router, firewall, SD-WAN endpoint, etc.) with software applications running on a standard server.
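As a toy sketch of that idea, the service chain on a uCPE box can be treated as data plus software images on a standard server, so changing a function is a software operation rather than a hardware swap. The roles and image names below are hypothetical placeholders, not real products.

```python
# Toy sketch of the uCPE idea: the site's service chain is just data plus
# software images on a standard server, so swapping a function (for example
# the SD-WAN vendor) is a config change rather than a truck roll.
# VNF names and images are hypothetical placeholders.

site_service_chain = [
    {"role": "router",   "vnf": "vrouter-image:3.2"},
    {"role": "firewall", "vnf": "fw-image:7.1"},
    {"role": "sd-wan",   "vnf": "sdwan-vendor-a:5.0"},
]

def replace_vnf(chain, role, new_image):
    """Swap the software image that fulfils a role, leaving the hardware alone."""
    for entry in chain:
        if entry["role"] == role:
            old = entry["vnf"]
            entry["vnf"] = new_image
            return f"{role}: {old} -> {new_image} (same uCPE server)"
    return f"no {role} function in this chain"

if __name__ == "__main__":
    print(replace_vnf(site_service_chain, "sd-wan", "sdwan-vendor-b:1.4"))
```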

Why are end-users asking for uCPE? 

End-user reasons for virtualized services and uCPE

Here are some of the top reasons that end users are asking for virtualized solutions delivered using uCPE. These reasons apply whether the end-user is consuming a managed service or they are operating their own overlay network services.

Dynamic services delivered on-demand. This is probably the biggest reason. End-users want to be able to choose and change their services in real-time. They know if a service is delivered using a stack of dedicated appliances, then every service change means changing appliances – at every site. This is no longer acceptable, as it is costly, slow and it does not scale.

Usage-based services. End users can consume cloud resources on a pay-as-you-go basis, with no commitments. They want to be able to consume their managed communications services in the same way.

Try-before-you-buy services. Almost every paid service on the internet has a free trial period. End users expect the same with their communications services. Once a site is served by a uCPE hosting device, any service can be offered on a trial basis. This is great for end-users, but why would the service provider and VNF supplier support this model? Because their incremental cost is zero, and the acceptance rate is high. Try-before-you-buy is a win-win for all parties.

User-managed applications. Enterprises want to take advantage of multi-cloud hosting. That includes on-premises hosting to meet requirements for latency, security and bandwidth. They want those benefits, but without having to manage their own hardware. They see managed edge cloud hosting on uCPE as the answer.

Decouple hardware from software and break vendor lock-in. This one is standard for service providers, but it may surprise you to learn that it affects enterprises as well. I recently talked to an enterprise that operates its own SD-WAN network. Its favorite SD-WAN supplier was acquired by one of the big guys. As a result, pricing went up and the availability of the endpoint devices got much worse. Making a change would have meant ripping and replacing every endpoint. They do not want to be in that situation again. By moving to uCPE, they enable a future change of SD-WAN supplier – without changing the installed hardware.

Self-operated network versus managed services

Before we go on, I would like to comment on the eternal debate about whether to run your own network or use managed services. This topic has been well-hashed, but the advent of virtualized services on uCPE changes the equation. It provides more benefits than an appliance-based approach. But it introduces the complexity of a multi-vendor system. The complexity is going to be acceptable for some larger enterprises. But many others will find that a managed and virtualized service gives them all the advantages without the drawbacks (as described here).

Real-world example: before uCPE and with uCPE

Let’s take a look at how the advantages of a virtualized service delivered with uCPE can benefit an end-user. Assume that you are opening a new store or branch office, and you need internet connectivity, VPN, and managed security. Here is a step-by-step comparison of the end-user experience.


I don’t know about you, but I like the “with uCPE” model a lot better!

The cloud is spreading to telecom

End users are increasingly moving their applications to the cloud, and they understand the benefits of doing so. End users expect the same cloud benefits of flexibility, speed and software-centric development in their communications services. NFV and uCPE are how we bring the power of the cloud to communications services – and to end-users.

Sunday, January 5, 2020

Blueprint: The Power of Intent-Based Segmentation

by Peter Newton, senior director of products and solutions, Fortinet

Time-to-market pressures are driving digital transformation (DX) at organizations. This is not only putting pressure on the organization to adapt to a more agile business model, but it is also creating significant challenges for IT teams. In addition to having to build out new public and private cloud networks, update WAN connectivity to branch offices, adopt aggressive application development strategies to meet evolving consumer demands, and support a growing number of IoT and privately-owned end-user devices, those same overburdened IT workers need to secure that entire extended network, from core to cloud.

Of course, that’s easier said than done.

Too many organizations have fallen down the rabbit hole of building one security environment after another to secure the DX project du jour. The result is an often slapdash collection of isolated security tools that actually diminish visibility and restrict control across the distributed network. What’s needed is a comprehensively integrated security architecture and a security-driven networking strategy that ensures no device, virtual or physical, is deployed without a security strategy in place to protect it. What’s more, those security devices need to be seamlessly integrated into a holistic security fabric that can be centrally managed and orchestrated.

The Limits of Traditional Segmentation Strategies

Of course, this is fine for new projects that will expand the potential attack surface. But how do you retroactively go back and secure your existing networked environments and the potentially thousands of IoT and other devices already deployed there? CISOs who understand the dynamics of modern network evolution are insisting that their teams move beyond perimeter security. Their aim is to respond more assertively to attack surfaces that are expanding on all fronts across the enterprise.

Typically, this involves segmenting the network and infrastructure and providing defense in depth by leveraging multiple forms of security. Unfortunately, traditional segmentation methods have proven insufficient to meet DX security and compliance demands, and too complicated to be sustainable. Traditional network segmentation suffers from three key challenges:

  1. A limited ability to adapt to business and compliance requirements – especially in environments where the infrastructure is constantly adapting to shifting business demands.
  2. Unnecessary risk due to static or implicit trust – especially when data can move and devices can be repurposed on demand.
  3. Poor security visibility and enforcement – especially when the attack surface is in a state of constant flux.

The Power of Intent-based Segmentation

To address these concerns, organizations are instead transitioning to Intent-based Segmentation to establish and maintain a security-driven networking strategy because it addresses the shortcomings of traditional segmentation in the following ways:

  • Intent-based Segmentation uses business needs, rather than the network architecture alone, to establish the logic by which users, devices, and applications are segmented, grouped, and isolated.
  • It provides finely tunable access controls and uses those to achieve continuous, adaptive trust.
  • It uses high-performance, advanced Layer 7 (application-level) security across the network.
  • It performs comprehensive content inspection and shares that information centrally to attain full visibility and thwart attacks.

By using business intent to drive the segmentation of the network, and establishing access controls through continuous trust assessments, intent-based segmentation provides comprehensive visibility of everything flowing across the network, enabling real-time access control tuning and threat mitigation.

Intent-based Segmentation and the Challenges of IoT

One of the most challenging elements of DX from a security perspective has been the rapid adoption and deployment of IoT devices. As most are aware, IoT devices are not only highly vulnerable to cyberattacks, but most are also headless, meaning they cannot be updated or patched. To protect the network from the potential of an IoT device becoming part of a botnet or delivering malicious code to other devices or places in the network, intent-based segmentation must be a fundamental element of any security strategy.

To begin, the three most important aspects of any IoT security strategy are device identification, proper network segmentation, and network traffic analytics. First, the network needs to be able to identify any device being connected to it. By combining intent-based segmentation with Network Access Control (NAC), devices can be identified, their proper roles and functions can be determined, and they can then be dynamically assigned to a segment of the network based on who they belong to, their function, where they are located, and other contextual criteria. The network can then monitor those IoT devices based on those criteria. That way, if a digital camera, for example, stops transmitting data and instead starts requesting it, the network knows it has been compromised and can pull it out of production.
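
As an illustration of this flow (and only as an illustration, not a description of any vendor's product), the following Python sketch assigns a device to a segment based on its NAC-identified role and flags behavior that contradicts that role. The device profiles, segment names and thresholds are hypothetical.

    # Minimal sketch of intent-based segment assignment and behavior monitoring
    # for IoT devices. Illustrative only; profiles, segments and thresholds are
    # hypothetical.
    from dataclasses import dataclass

    # Business intent expressed per device role: which segment it belongs in
    # and what traffic pattern is considered normal.
    INTENT_POLICY = {
        "ip-camera":     {"segment": "video-surveillance", "expected_direction": "upload"},
        "infusion-pump": {"segment": "clinical-devices",   "expected_direction": "upload"},
        "hvac-sensor":   {"segment": "building-ot",        "expected_direction": "upload"},
    }

    @dataclass
    class Device:
        mac: str
        role: str        # as identified by NAC profiling
        owner: str
        location: str

    def assign_segment(device: Device) -> str:
        """Place the device in a segment based on its role (business intent),
        not on where it happens to be plugged into the network."""
        policy = INTENT_POLICY.get(device.role)
        return policy["segment"] if policy else "quarantine"

    def check_behavior(device: Device, bytes_up: int, bytes_down: int) -> str:
        """Flag devices whose traffic direction contradicts their intended role,
        e.g. a camera that starts requesting data instead of transmitting it."""
        policy = INTENT_POLICY.get(device.role)
        if policy and policy["expected_direction"] == "upload" and bytes_down > bytes_up:
            return "quarantine"      # pull the device out of production
        return "ok"

    cam = Device(mac="00:11:22:33:44:55", role="ip-camera", owner="facilities", location="lobby")
    print(assign_segment(cam))                                        # video-surveillance
    print(check_behavior(cam, bytes_up=10_000, bytes_down=900_000))   # quarantine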

The trick is in understanding the business intent of each device and building that into the formula for keeping it secured. IT teams that rely heavily on IoT security best practices, such as those developed by the National Institute of Standards and Technology (NIST), may wind up developing highly restrictive network segmentation rules that lead to operational disruptions. If an IoT device is deployed in an unexpected way, for example, standard segmentation may block some essential service it provides, while intent-based segmentation can secure it in a different way, such as tying it to a specific application or workflow rather than the sort of simple binary rules IT teams traditionally rely on. Such is the case with wireless infusion pumps, heart monitors and other critical-care devices in hospitals. When medical staff suddenly cannot access these devices over the network because of certain rigidities in the VLAN-based segmentation design, patients’ lives may be at risk. With Intent-based Segmentation, these devices would be tagged according to their medical use, regardless of their location on the network. Access permissions would then be tailored to those devices.

Adding Trust to the Mix

Of course, the opposite is true as well. Allowing implicit or static trust based on some pre-configured segmentation standard could expose critical resources should a section of the network become compromised. To determine the appropriate level of access for every user, device, or application, an Intent-based Segmentation solution must also assess their level of trustworthiness. Various trust databases exist that provide this information.

Trust, however, is not an attribute that is set once and forgotten. Trusted employees and contractors can go rogue and inflict extensive damage before they are discovered, as several large corporate breaches have proven. IoT devices are especially prone to compromise and can be manipulated for attacks, data exfiltration, and takeovers. And common attacks against business-critical applications – especially those used by suppliers, customers, and other players in the supply chain – can inflict damage far and wide if their trust status is only sporadically updated. Trust needs to be continually updated through an integrated security strategy. Behavioral analysis baselines and monitors the behavior of users. Web application firewalls inspect applications during development and validate transactions once they are in production. And the trustworthiness of devices is maintained not only by strict access control and continuous monitoring of their data and traffic, but also by preventing them from performing functions outside of their intended purpose.
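
One way to picture "continually updated trust" is as a score that every new signal moves and that every access decision re-checks. The sketch below is a generic illustration under that assumption; the signal names, weights and thresholds are hypothetical.

    # Minimal sketch of continuously updated trust: a score recomputed from
    # recent signals rather than set once at onboarding. Signal names and
    # weights are hypothetical.
    import time

    SIGNAL_WEIGHTS = {
        "failed_auth": -10,
        "anomalous_traffic": -25,
        "policy_violation": -40,
        "clean_interval": +5,     # trust recovers slowly with good behavior
    }

    class TrustScore:
        def __init__(self, initial=70):
            self.score = initial
            self.updated = time.time()

        def record(self, signal: str):
            # Clamp the score to 0..100 after applying the signal's weight.
            self.score = max(0, min(100, self.score + SIGNAL_WEIGHTS.get(signal, 0)))
            self.updated = time.time()

        def allowed(self, required: int) -> bool:
            """Gate each access decision on the current score, not the one
            computed when the user or device was first admitted."""
            return self.score >= required

    device = TrustScore()
    device.record("anomalous_traffic")
    print(device.score, device.allowed(required=60))   # 45 False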

Ironically, one of the most effective strategies for establishing and maintaining trust is to create a zero-trust network where all access needs to be authenticated, all traffic and transactions are monitored, and all access is restricted by dynamic intent-based segmentation.

Securing Digital Transformation with a Single Security Fabric

Finally, the entire distributed network needs to be wrapped in a single cocoon of integrated security solutions that span and see across the entire network. And that entire security fabric should enable granular control of any element of the network – whether physical or virtual, local or remote, static or mobile, or in the core or in the cloud – in a consistent fashion through a single management console. By combining verifiable trustworthiness, intent-based segmentation, and integrated security tools into a single solution, organizations can establish a trustworthy, security-driven networking strategy that can dynamically adapt to meet all of the security demands of the rapidly evolving digital marketplace.

About the author

Peter Newton is senior director of products and solutions – IoT and OT at Fortinet. He has more than 20 years of experience in the enterprise networking and security industry and serves as Fortinet’s products and solutions lead for IoT and operational technology solutions, including ICS and SCADA.

Thursday, October 24, 2019

Blueprint column: Stop the intruders at the door!

by Prayson Pate, CTO, Edge Cloud, ADVA

Security is one of the biggest concerns about cloud computing. And securing the cloud means stopping intruders at the door by securing its onramp – the edge. How can edge cloud be securely deployed, automatically, at scale, over the public internet?

The bad news is that it’s impossible to be 100% secure, especially when you bring internet threats into the mix.

The good news is that we can make it so difficult for intruders that they move on to easier targets. And we can ensure that we contain and limit the damage if they do get in.

To achieve that requires an automated and layered approach. Automation ensures that policies are up to date, passwords and keys are rotated, and patches and updates are applied. Layering means that breaching one barrier does not give the intruder the keys to the kingdom. Finally, security must be designed in – not tacked on as an afterthought.

Let’s take a closer look at what edge cloud is, and how we can build and deliver it, securely and at scale.

Defining and building the edge cloud

Before we continue with the security discussion, let’s talk about what we mean by edge cloud.

Edge cloud is the delivery of cloud resources (compute, networking, and storage) to the perimeter of the network and the usage of those resources for both standard compute loads (micro-cloud) as well as for communications infrastructure (uCPE, SD-WAN, MEC, etc.), as shown below.

For maximum utility, we must build edge cloud in a manner consistent with public cloud. For many applications that means using standard open source components such as Linux, KVM and OpenStack, and supporting both virtual machines and containers.

One of the knocks against OpenStack is its heavy footprint. A standard data center deployment for OpenStack includes one or more servers for the OpenStack controller, with OpenStack agents running on each of the managed nodes.

It’s possible to optimize this model for edge cloud by slimming down the OpenStack controller and running it on the same node as the managed resources. In this model, all the cloud resources – compute, storage, networking and control – reside in the same physical device. In other words, it’s a “cloud in a box.” This is a great model for edge cloud, and it gives us the benefits of a standard cloud model in a small footprint.
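
Because the slimmed-down node still exposes standard OpenStack APIs, workloads can be launched on it the same way they would be in a data-center cloud. Here is a minimal sketch using openstacksdk; the cloud entry, image, flavor and network names are hypothetical.

    # Minimal sketch: the "cloud in a box" speaks the same OpenStack APIs as a
    # data-center cloud, so a VNF can be launched with standard openstacksdk
    # calls. Cloud, image, flavor and network names are hypothetical.
    import openstack

    conn = openstack.connect(cloud="edge-node-01")     # entry from clouds.yaml

    image = conn.compute.find_image("vfirewall-1.2")
    flavor = conn.compute.find_flavor("edge.small")
    mgmt_net = conn.network.find_network("mgmt")

    server = conn.compute.create_server(
        name="vfirewall-site-42",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": mgmt_net.id}],
    )
    server = conn.compute.wait_for_server(server)      # block until ACTIVE
    print(server.status)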

Security out of the box

Security at an edge cloud starts when the hosting device or server is installed and initialized. We believe that the best way to accomplish this is with secure zero-touch provisioning (ZTP) of the device over public IP.

The process starts when an unconfigured server is delivered to an end user. Separately, the service provider sends a digital key to the end user. The end user powers up the server and enters the digital key. The edge cloud software builds a secure tunnel from the customer site to the ZTP server, and delivers the security key to identify and authenticate the edge cloud deployment. This step is essential to prevent unauthorized access if the hosting server is delivered to the wrong location. At that point, the site-specific configuration can be applied using the secure tunnel.
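
A minimal sketch of the call-home step might look like the following, using only the Python standard library. The endpoint URL, payload fields and response format are hypothetical stand-ins for whatever a real ZTP implementation defines.

    # Minimal sketch of the call-home step in secure ZTP: the freshly powered-up
    # device opens a TLS connection to the provisioning server, presents the
    # digital key the end user entered, and receives its site-specific
    # configuration. URL, payload fields and response format are hypothetical.
    import json
    import ssl
    import urllib.request

    ZTP_SERVER = "https://ztp.example-provider.net/activate"   # hypothetical endpoint
    ACTIVATION_KEY = "entered-by-end-user"                      # delivered out of band

    context = ssl.create_default_context()          # verify the server's certificate
    request = urllib.request.Request(
        ZTP_SERVER,
        data=json.dumps({"activation_key": ACTIVATION_KEY}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    with urllib.request.urlopen(request, context=context) as response:
        site_config = json.load(response)   # applied only after authentication succeeds

    print(site_config.get("hostname"), site_config.get("mgmt_tunnel"))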

The secure tunnel doesn’t go away once the ZTP process completes. The management and orchestration (MANO) software uses the management channel for ongoing control and monitoring of the edge cloud. This approach provides security even when the connectivity is over public IP.

Security on the edge cloud

One possible drawback to the distributed compute resources and interfaces in an edge cloud model is an increased attack surface for hackers. We must defend edge cloud nodes with layered security at the device, including:
  • Application layer – software-based encryption of data plane traffic at Layers 2, 3, or 4 as part of the platform, with the addition of a third-party firewall/UTM as part of the service chain
  • Management layer – two-factor authentication at the customer site, with encryption of management and user tunnels
  • Virtualization layer – safeguards against VM escape (protecting one VM from another, and preventing rogue management systems from connecting to the hypervisor) and VNF attestation via checksum validation
  • Network layer – modern encryption along with Layer 2 and Layer 3 protocols and micro-segmentation to separate management traffic from user traffic, and to protect both

Security of the management software

Effective automation of edge cloud deployments requires sophisticated MANO software, including the ZTP machinery. All of this software must be able to communicate with the managed edge cloud nodes, and do so securely. This means using modern security gateways both to protect the MANO software and to provide the secure management tunnels for connectivity.

But that’s not enough. The MANO software should support scalable deployments and tenancy. Scalability should be built in using modern techniques so that tools like load balancers can be used to support scale-out. Tenancy is a useful tool to separate customers or regions and to contain security breaches.

Security is an ongoing process

Hackers aren’t standing still, and neither can we. We must perform ongoing security scans of the software to ensure that vulnerabilities are not introduced. We must also monitor the open source distributions and apply patches as needed. A complete model would include:
  • Automated source code verification by tools such as Protecode and Black Duck
  • Automated functional verification by tools such as Nessus and OpenSCAP
  • Monitoring of vulnerabilities within open source components such as Linux and OpenStack
  • Following recommendations from the OpenStack Security Group (OSSG) to identify security vulnerabilities and required patches
  • Application of patches and updates as needed

Build out the cloud, but secure it

The move to the cloud means embracing multi-cloud models, and that should include edge cloud deployments to optimize application deployment. But ensuring security at those distributed edge cloud nodes means applying security in an automated and layered way. There are tools and methods to realize this approach, but it takes discipline and dedication to do so.

Sunday, August 25, 2019

Blueprint: Kubernetes is the End Game for NFVI

by Martin Taylor, Chief Technical Officer, Metaswitch

In October 2012, when a group of 13 network operators launched their white paper describing Network Functions Virtualization, the world of cloud computing technology looked very different than it does today.  As cloud computing has evolved, and as telcos have developed a deeper understanding of it, so the vision for NFV has evolved and changed out of all recognition.

The early vision of NFV focused on moving away from proprietary hardware to software running on commercial off-the-shelf servers.  This was described in terms of “software appliances”.  And in describing the compute environment in which those software appliances would run, the NFV pioneers took their inspiration from enterprise IT practices of that era, which focused on consolidating servers with the aid of hypervisors that essentially virtualized the physical host environment.

Meanwhile, hyperscale Web players such as Netflix and Facebook were developing cloud-based system architectures that support massive scalability with a high degree of resilience, which can be evolved very rapidly through incremental software enhancements, and which can be operated very cost-effectively with the aid of a high degree of operations automation.  The set of practices developed by these players has come to be known as “cloud-native”, which can be summarized as dynamically orchestratable micro-services architectures, often based on stateless processing elements working with separate state storage micro-services, all deployed in Linux containers.

It’s been clear to most network operators for at least a couple of years that cloud-native is the right way to do NFV, for the following reasons:

  • Microservices-based architectures promote rapid evolution of software capabilities to enable enhancement of services and operations, unlike legacy monolithic software architectures with their 9-18 month upgrade cycles and their costly and complicated roll-out procedures.
  • Microservices-based architectures enable independent and dynamic scaling of different functional elements of the system with active-active N+k redundancy, which minimizes the hardware resources required to deliver any given service.
  • Software packaged in containers is inherently more portable than VMs and does much to eliminate the problem of complex dependencies between VMs and the underlying infrastructure which has been a major issue for NFV deployments to date.
  • The cloud-native ecosystem includes some outstandingly useful open source projects, foremost among which is Kubernetes – of which more later.  Other key open source projects in the cloud-native ecosystem include Helm, a Kubernetes application deployment manager, service meshes such as Istio and Linkerd, and telemetry/logging solutions including Prometheus, Fluentd and Grafana.  All of these combine to simplify, accelerate and lower the cost of developing, deploying and operating cloud-native network functions.

5G is the first new generation of mobile technology since the advent of the NFV era, and as such it represents a great opportunity to do NFV right – that is, the cloud-native way.  The 3GPP standards for 5G are designed to promote a cloud-native approach to the 5G core – but they don’t actually guarantee that 5G core products will be recognisably cloud-native.  It’s perfectly possible to build a standards-compliant 5G core that is resolutely legacy in its software architecture, and we believe that some vendors will go down that path.  But some, at least, are stepping up to the plate and building genuinely cloud native solutions for the 5G core.

Cloud-native today is almost synonymous with containers orchestrated by Kubernetes.  It wasn’t always thus: when we started developing our cloud-native IMS solution in 2012, these technologies were not around.  It’s perfectly possible to build something that is cloud-native in all respects other than running in containers – i.e. dynamically orchestratable stateless microservices running in VMs – and production deployments of our cloud native IMS have demonstrated many of the benefits that cloud-native brings, particularly with regard to simple, rapid scaling of the system and the automation of lifecycle management operations such as software upgrade.  But there’s no question that building cloud-native systems with containers is far better, not least because you can then take advantage of Kubernetes, and the rich orchestration and management ecosystem around it.

The rise to prominence of Kubernetes is almost unprecedented among open source projects.  Originally released by Google as recently as July 2015, Kubernetes became the seed project of the Cloud Native Computing Foundation (CNCF), and rapidly eclipsed all the other container orchestration solutions that were out there at the time.  It is now available in multiple mature distros including Red Hat OpenShift and Pivotal Container Services, and is also offered as a service by all the major public cloud operators.  It’s the only game in town when it comes to deploying and managing cloud native applications.  And, for the first time, we have a genuinely common platform for running cloud applications across both private and public clouds.  This is hugely helpful to telcos who are starting to explore the possibility of hybrid clouds for NFV.

So what exactly is Kubernetes?  It’s a container orchestration system for automating application deployment, scaling and management.   For those who are familiar with the ETSI NFV architecture, it essentially covers the Virtual Infrastructure Manager (VIM) and VNF Manager (VNFM) roles.

In its VIM role, Kubernetes schedules container-based workloads and manages their network connectivity.  In OpenStack terms, those are covered by Nova and Neutron respectively.  Kubernetes includes a kind of Load Balancer as a Service, making it easy to deploy scale-out microservices.

In its VNFM role, Kubernetes can monitor the health of each container instance and restart any failed instance.  It can also monitor the relative load on a set of container instances that are providing some specific micro-service and can scale out (or scale in) by spinning up new containers or spinning down existing ones.  In this sense, Kubernetes acts as a Generic VNFM.  For some types of workloads, especially stateful ones such as databases or state stores, Kubernetes native functionality for lifecycle management is not sufficient.  For those cases, Kubernetes has an extension called the Operator Framework which provides a means to encapsulate any application-specific lifecycle management logic.  In NFV terms, this is a standardized way of building Specific VNFMs.
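
To make the VNFM role concrete, here is a minimal sketch using the official Kubernetes Python client: a Deployment gives per-instance health monitoring and restart, and a HorizontalPodAutoscaler scales the microservice with load. The namespace, labels and container image are hypothetical, and a production VNF would of course carry much more configuration.

    # Minimal sketch of Kubernetes in the "Generic VNFM" role: a Deployment for
    # health monitoring and restart, plus a HorizontalPodAutoscaler for
    # load-driven scale-out/in. Names, namespace and image are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="sip-worker"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "sip-worker"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "sip-worker"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="sip-worker",
                                       image="example.registry/sip-worker:1.0"),
                ]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="vnf", body=deployment)

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="sip-worker"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="sip-worker"),
            min_replicas=3,
            max_replicas=20,
            target_cpu_utilization_percentage=60,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="vnf", body=hpa)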

But Kubernetes goes way beyond the simple application lifecycle management envisaged by the ETSI NFV effort.  Kubernetes itself, together with a growing ecosystem of open source projects that surround it, is at the heart of a movement towards a declarative, version-controlled approach to defining both software infrastructure and applications.  The vision here is for all aspects of a complex cloud native system, including cluster infrastructure and application configuration, to be described in a set of documents that are under version control, typically in a Git repository, which maintains a complete history of every change.  These documents describe the desired state of the system, and a set of software agents act so as to ensure that the actual state of the system is automatically aligned with the desired state.  With the aid of a service mesh such as Istio, changes to system configuration or software version can be automatically “canary” tested on a small proportion of traffic prior to being rolled out fully across the deployment.  If any issues are detected, the change can simply be rolled back.  The high degree of automation and control offered by this kind of approach has enabled Web-scale companies such as Netflix to reduce software release cycles from months to minutes.
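
The core of that model is the reconciliation loop: desired state is read from version control, actual state is observed, and an agent closes the gap. The sketch below is a deliberately generic illustration of that pattern, not any particular GitOps tool; the state model and actions are hypothetical.

    # Minimal sketch of the declarative pattern described above: desired state
    # lives in version control and an agent continuously reconciles the actual
    # state toward it. The state model and actions are hypothetical.
    import time

    def read_desired_state():
        # In practice this would be pulled from a Git repository by a GitOps
        # agent; here it is a hard-coded stand-in.
        return {"sip-worker": {"replicas": 5, "version": "1.1"}}

    def read_actual_state():
        # In practice this would be queried from the cluster API.
        return {"sip-worker": {"replicas": 3, "version": "1.0"}}

    def reconcile(desired, actual):
        """Compute and apply the actions needed to align actual with desired."""
        for name, want in desired.items():
            have = actual.get(name, {})
            if have.get("version") != want["version"]:
                print(f"rolling {name} to version {want['version']}")  # canary first, then fleet
            if have.get("replicas") != want["replicas"]:
                print(f"scaling {name} to {want['replicas']} replicas")

    while True:
        reconcile(read_desired_state(), read_actual_state())
        time.sleep(30)   # the loop, not a human operator, keeps the system converged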

Many of the network operators we talk to have a pretty good understanding of the benefits of cloud native NFV, and the technicalities of containers and Kubernetes.  But we’ve also detected a substantial level of concern about how we get there from here.  “Here” means today’s NFV infrastructure built on a hypervisor-based virtualization environment supporting VNFs deployed as virtual machines, where the VIM is either OpenStack or VMware.  The conventional wisdom seems to be that you run Kubernetes on top of your existing VIM.  And this is certainly possible: you just provision a number of VMs and treat these as hosts for the purposes of installing a Kubernetes cluster.  But then you end up with a two-tier environment in which you have to deploy and orchestrate services across some mix of cloud native network functions in containers and VM-based VNFs, where orchestration is driving some mix of Kubernetes, OpenStack or VMware APIs and where Kubernetes needs to coexist with proprietary VNFMs for life-cycle management.  It doesn’t sound very pretty, and indeed it isn’t.

In our work with cloud-native VNFs, containers and Kubernetes, we’ve seen just how much easier it is to deploy and manage large scale applications using this approach compared with traditional hypervisor-based approaches.  The difference is huge.  We firmly believe that adopting this approach is the key to unlocking the massive potential of NFV to simplify operations and accelerate the pace of innovation in services.  But at the same time, we understand why some network operators would baulk at introducing further complexity into what is already a very complex NFV infrastructure.

That’s why we think the right approach is to level everything up to Kubernetes.  And there’s an emerging open source project that makes that possible: KubeVirt.

KubeVirt provides a way to take an existing Virtual Machine and run it inside a container.  From the point of view of the VM, it thinks it’s running on a hypervisor.  From the point of view of Kubernetes, it sees just another container workload.  So with KubeVirt, you can deploy and manage applications that comprise any arbitrary mix of native container workloads and VM workloads using Kubernetes.
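
In practice, that means a VM-based VNF can be created through the same Kubernetes API used for container workloads. The sketch below, using the official Kubernetes Python client, creates a KubeVirt VirtualMachine custom resource; the manifest is abbreviated and the names and disk image are hypothetical.

    # Minimal sketch: a KubeVirt VirtualMachine is just another Kubernetes
    # resource, so a VM-based VNF can be created through the same API used for
    # containers. The manifest is abbreviated; names and image are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    legacy_vnf_vm = {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": "legacy-vrouter"},
        "spec": {
            "running": True,
            "template": {
                "spec": {
                    "domain": {
                        "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                        "resources": {"requests": {"memory": "2Gi"}},
                    },
                    "volumes": [{
                        "name": "rootdisk",
                        # containerDisk wraps the VM disk image in a container image
                        "containerDisk": {"image": "example.registry/legacy-vrouter-disk:1.0"},
                    }],
                },
            },
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kubevirt.io", version="v1", namespace="vnf",
        plural="virtualmachines", body=legacy_vnf_vm,
    )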

In our view, KubeVirt could open the way to adopting Kubernetes as a “level playing field” and de facto standard environment across all types of cloud infrastructure, supporting highly automated deployment and management of true cloud native VNFs and legacy VM-based VNFs alike.  The underlying infrastructure can be OpenStack, VMware, bare metal – or any of the main public clouds including Azure, AWS or Google.  This grand unified vision of NFV seems to us to be truly compelling.  We think network operators should ratchet up the pressure on their vendors to deliver genuinely cloud native, container-based VNFs, and get serious about Kubernetes as an integral part of their NFV infrastructure.  Without any question, that is where the future lies.