
Tuesday, February 11, 2020

Blueprint column: End users are now demanding virtualized services

by Prayson Pate, CTO, Edge Cloud, ADVA

I used to ask the question: Why do customers of managed services care about NFV?

My answer was: They don’t. But they do care about the benefits of NFV, such as choice, services on demand, and new commercial models such as pay-as-you-go and try-before-you-buy.

But the situation has changed. Now, customers looking at managed services are asking for virtualized solutions. Our sources show that half of end-user tenders for managed services call for universal CPE (uCPE) by name. They want the benefits of a managed service, combined with the benefits of virtualization, without the headaches of doing it themselves.

And, in case you forgot, uCPE is the replacement of a stack of communications devices (e.g., router, firewall, SD-WAN endpoint, etc.) with software applications running on a standard server.

Why are end-users asking for uCPE? 

End-user reasons for virtualized services and uCPE

Here are some of the top reasons that end users are asking for virtualized solutions delivered using uCPE. These reasons apply whether the end user is consuming a managed service or operating their own overlay network services.

Dynamic services delivered on-demand. This is probably the biggest reason. End-users want to be able to choose and change their services in real-time. They know that if a service is delivered using a stack of dedicated appliances, then every service change means changing appliances – at every site. This is no longer acceptable, as it is costly, slow and does not scale.

Usage-based services. End users can consume cloud resources on a pay-as-you-go basis, with no commitments. They want to be able to consume their managed communications services in the same way.

Try-before-you-buy services. Almost every paid service on the internet has a free trial period. End users expect the same with their communications services. Once a site is served by a uCPE hosting device, any service can be offered on a trial basis. This is great for end-users, but why would the service provider and VNF supplier support this model? Because their incremental cost is zero, and the acceptance rate is high. Try-before-you-buy is a win-win for all parties.

User-managed applications. Enterprises want to take advantage of multi-cloud hosting. That includes on-premises hosting to meet requirements for latency, security and bandwidth. They want those benefits, but without having to manage their own hardware. They see managed edge cloud hosting on uCPE as the answer.

Decouple hardware from software and break vendor lock-in. This one is standard for service providers, but it may surprise you to learn that it affects enterprises also. I recently talked to an enterprise that is operating their own SD-WAN network. Their favorite SD-WAN supplier was acquired by one of the big guys. As a result, their pricing went up, and the availability of the endpoint devices got much worse. To make a change meant ripping and replacing every endpoint. They do not want to be in this situation again. By moving to uCPE, they enable a future change of SD-WAN supplier – without changing the installed hardware.

Self-operated network versus managed services

Before we go on, I would like to comment on the eternal debate about whether to run your own network or use managed services. This topic has been well-hashed, but the advent of virtualized services on uCPE changes the equation. It provides more benefits than an appliance-based approach. But it introduces the complexity of a multi-vendor system. The complexity is going to be acceptable for some larger enterprises. But many others will find that a managed and virtualized service gives them all the advantages without the drawbacks (as described here).

Real-world example: before uCPE and with uCPE

Let’s take a look at how the advantages of a virtualized service delivered with uCPE can benefit an end-user. Assume that you are opening a new store or branch office, and you need internet connectivity, VPN, and managed security. Here is a step-by-step comparison of the end-user experience.


I don’t know about you, but I like the “with uCPE” model a lot better!

The cloud is spreading to telecom

End users are increasingly moving their applications to the cloud, and they understand the benefits of doing so. End users expect the same cloud benefits of flexibility, speed and software-centric development in their communications services. NFV and uCPE are how we bring the power of the cloud to communications services – and to end-users.

Sunday, January 5, 2020

Blueprint: The Power of Intent-Based Segmentation

by Peter Newton, senior director of products and solutions, Fortinet

Time-to-market pressures are driving digital transformation (DX) at organizations. This is not only putting pressure on the organization to adapt to a more agile business model, but it is also creating significant challenges for IT teams. In addition to having to build out new public and private cloud networks, update WAN connectivity to branch offices, adopt aggressive application development strategies to meet evolving consumer demands, and support a growing number of IoT and privately-owned end-user devices, those same overburdened IT workers need to secure that entire extended network, from core to cloud.

Of course, that’s easier said than done.

Too many organizations have fallen down the rabbit hole of building one security environment after another to secure the DX project du jour. The result is an often slapdash collection of isolated security tools that actually diminish visibility and restrict control across the entire distributed network. What’s needed is a comprehensively integrated security architecture and security-driven networking strategy that ensures that not a single device, virtual or physical, is deployed without a security strategy in place to protect it. What’s more, those security devices need to be seamlessly integrated into a holistic security fabric that can be centrally managed and orchestrated.

The Limits of Traditional Segmentation Strategies

Of course, this is fine for new projects that will expand the potential attack surface. But how do you retroactively go back and secure your existing networked environments and the potentially thousands of IoT and other devices already deployed there? CISOs who understand the dynamics of modern network evolution are insisting that their teams move beyond perimeter security. Their aim is to respond more assertively to attack surfaces that are expanding on all fronts across the enterprise.
Typically, this involves segmenting the network and infrastructure and providing defense in depth by leveraging multiple forms of security. Unfortunately, traditional segmentation methods have proven to be insufficient in meeting DX security and compliance demands, and too complicated to be sustainable. Traditional network segmentation suffers from three key challenges:

  1. A limited ability to adapt to business and compliance requirements – especially in environments where the infrastructure is constantly adapting to shifting business demands.
  2. Unnecessary risk due to static or implicit trust – especially when data can move and devices can be repurposed on demand.
  3. Poor security visibility and enforcement – especially when the attack surface is in a state of constant flux.

The Power of Intent-based Segmentation

To address these concerns, organizations are instead transitioning to Intent-based Segmentation to establish and maintain a security-driven networking strategy because it addresses the shortcomings of traditional segmentation in the following ways:

  • Intent-based Segmentation uses business needs, rather than the network architecture alone, to establish the logic by which users, devices, and applications are segmented, grouped, and isolated.
  • It provides finely tunable access controls and uses those to achieve continuous, adaptive trust.
  • It uses high-performance, advanced Layer 7 (application-level) security across the network.
  • It performs comprehensive content inspection and shares that information centrally to attain full visibility and thwart attacks.

By using business intent to drive the segmentation of the network, and establishing access controls using continuous trust assessments, intent-based segmentation provides comprehensive visibility of everything flowing across the network, enabling real-time access control tuning and threat mitigation.

Intent-based Segmentation and the Challenges of IoT

One of the most challenging elements of DX from a security perspective has been the rapid adoption and deployment of IoT devices. As most are aware, IoT devices are not only highly vulnerable to cyberattacks, but most are also headless, meaning they cannot be updated or patched. To protect the network from the potential of an IoT device becoming part of a botnet or delivering malicious code to other devices or places in the network, intent-based segmentation must be a fundamental element of any security strategy.

To begin, the three most important aspects of any IoT security strategy are device identification, proper network segmentation, and network traffic analytics. First, the network needs to be able to identify any devices being connected to the network. By combining intent-based segmentation with Network Access Control (NAC), devices can be identified, their proper roles and functions can be determined, and they can then be dynamically assigned to a segment of the network based on who they belong to, their function, where they are located, and other contextual criteria. The network can then monitor those IoT devices based on those criteria. That way, if a digital camera, for example, stops transmitting data and instead starts requesting it, the network knows it has been compromised and can pull it out of production.
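
Here is a minimal Python sketch of how that kind of intent-based assignment and anomaly check might be expressed. The device attributes, segment names and traffic heuristic are illustrative assumptions for this article, not any vendor’s API.

```python
# Illustrative sketch: assign an IoT device to a segment from contextual
# attributes, then flag behavior that contradicts its intended role.
# Device fields, segment names and thresholds are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Device:
    mac: str
    device_type: str   # e.g. "ip_camera", "infusion_pump"
    owner: str         # business unit the device belongs to
    location: str

def assign_segment(device: Device) -> str:
    """Map a device to a network segment based on business intent."""
    if device.device_type == "ip_camera":
        return f"video-surveillance-{device.location}"
    if device.device_type == "infusion_pump":
        return "clinical-critical"          # tagged by medical use, not location
    return f"{device.owner}-general-iot"

def is_anomalous(device: Device, bytes_sent: int, bytes_requested: int) -> bool:
    """A camera should mostly transmit; heavy inbound requests suggest compromise."""
    if device.device_type == "ip_camera":
        return bytes_requested > bytes_sent
    return False

cam = Device(mac="00:11:22:33:44:55", device_type="ip_camera",
             owner="facilities", location="store-042")
print(assign_segment(cam))                                            # video-surveillance-store-042
print(is_anomalous(cam, bytes_sent=10_000, bytes_requested=250_000))  # True -> quarantine
```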

The trick is in understanding the business intent of each device and building that into the formula for keeping it secured. IT teams that rely heavily on IoT security best practices, such as those developed by the National Institute of Standards and Technology (NIST), may wind up developing highly restrictive network segmentation rules that lead to operational disruptions. If an IoT device is deployed in an unexpected way, for example, standard segmentation may block some essential service it provides, while intent-based segmentation can secure it in a different way, such as tying it to a specific application or workflow rather than the sort of simple binary rules IT teams traditionally rely on. Such is the case with wireless infusion pumps, heart monitors and other critical-care devices in hospitals. When medical staff suddenly cannot access these devices over the network because of certain rigidities in the VLAN-based segmentation design, patients’ lives may be at risk. With Intent-based Segmentation, these devices would be tagged according to their medical use, regardless of their location on the network. Access permissions would then be tailored to those devices.

Adding Trust to the Mix

Of course, the opposite is true as well. Allowing implicit or static trust based on some pre-configured segmentation standard could expose critical resources to compromise should a section of the network become compromised. To determine the appropriate level of access for every user, device, or application, an Intent-based Segmentation solution must also assess their level of trustworthiness. Various trust databases exist that provide this information.

Trust, however, is not an attribute that is set once and forgotten. Trusted employees and contractors can go rogue and inflict extensive damage before they are discovered, as several large corporate breaches have proven. IoT devices are especially prone to compromise and can be manipulated for attacks, data exfiltration, and takeovers. And common attacks against business-critical applications – especially those used by suppliers, customers, and other players in the supply chain – can inflict damage far and wide if their trust status is only sporadically updated. Trust needs to be continually updated through an integrated security strategy. Behavioral analysis baselines and monitors the behaviors of users. Web application firewalls inspect applications during development and validate transactions once they are in production. And the trustworthiness of devices is maintained not only by strict access control and continuous monitoring of their data and traffic, but also by preventing them from performing functions outside of their intended purpose.
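
To make “continuous trust” a little more concrete, here is a toy Python sketch in which every access decision recomputes a trust score from recent behavioral signals rather than relying on a static flag. The signal names, weights and threshold are invented for the illustration.

```python
# Toy model of continuously updated trust: every access decision re-evaluates
# the entity's score from recent behavioral signals instead of a static flag.
# Signal names, weights and the threshold are invented for illustration.

def trust_score(signals: dict) -> float:
    score = 1.0
    if signals.get("failed_logins", 0) > 3:
        score -= 0.4
    if signals.get("off_hours_access"):
        score -= 0.2
    if signals.get("unusual_destinations", 0) > 0:
        score -= 0.3
    return max(score, 0.0)

def allow_access(signals: dict, threshold: float = 0.6) -> bool:
    return trust_score(signals) >= threshold

print(allow_access({"failed_logins": 1}))                             # True
print(allow_access({"failed_logins": 5, "off_hours_access": True}))   # False
```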

Ironically, one of the most effective strategies for establishing and maintaining trust is creating a zero-trust network, where all access needs to be authenticated, all traffic and transactions are monitored, and all access is restricted by dynamic intent-based segmentation.

Securing Digital Transformation with a Single Security Fabric

Finally, the entire distributed network needs to be wrapped in a single cocoon of integrated security solutions that span and see across the entire network. And that entire security fabric should enable granular control of any element of the network – whether physical or virtual, local or remote, static or mobile, in the core or in the cloud – in a consistent fashion through a single management console. By combining verifiable trustworthiness, intent-based segmentation, and integrated security tools into a single solution, organizations can establish a trustworthy, security-driven networking strategy that can dynamically adapt to meet all of the security demands of the rapidly evolving digital marketplace.

About the author

Peter Newton is senior director of products and solutions – IoT and OT at Fortinet. He has more than 20 years of experience in the enterprise networking and security industry and serves as Fortinet’s products and solutions lead for IoT and operational technology solutions, including ICS and SCADA.

Thursday, October 24, 2019

Blueprint column: Stop the intruders at the door!

by Prayson Pate, CTO, Edge Cloud, ADVA

Security is one of the biggest concerns about cloud computing. And securing the cloud means stopping intruders at the door by securing its onramp – the edge. How can edge cloud be securely deployed, automatically, at scale, over the public internet?

The bad news is that it’s impossible to be 100% secure, especially when you bring internet threats into the mix.

The good news is that we can make it so difficult for intruders that they move on to easier targets. And we can ensure that we contain and limit the damage if they do get in.

To achieve that requires an automated and layered approach. Automation ensures that policies are up to date, passwords and keys are rotated, and patches and updates are applied. Layering means that breaching one barrier does not give the intruder the keys to the kingdom. Finally, security must be designed in – not tacked on as an afterthought.

Let’s take a closer look at what edge cloud is, and how we can build and deliver it, securely and at scale.

Defining and building the edge cloud

Before we continue with the security discussion, let’s talk about what we mean by edge cloud.

Edge cloud is the delivery of cloud resources (compute, networking, and storage) to the perimeter of the network and the usage of those resources both for standard compute loads (micro-cloud) and for communications infrastructure (uCPE, SD-WAN, MEC, etc.), as shown below.

For maximum utility, we must build edge cloud in a manner consistent with public cloud. For many applications that means using standard open source components such as Linux, KVM and OpenStack, and supporting both virtual machines and containers.

One of the knocks against OpenStack is its heavy footprint. A standard data center deployment for OpenStack includes one or more servers for the OpenStack controller, with OpenStack agents running on each of the managed nodes.

It’s possible to optimize this model for edge cloud by slimming down the OpenStack controller and running it on the same node as the managed resources. In this model, all the cloud resources – compute, storage, networking and control – reside in the same physical device. In other words, it’s a “cloud in a box.” This is a great model for edge cloud, and it gives us the benefits of a standard cloud model in a small footprint.
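
Because the slimmed-down node still exposes standard OpenStack APIs, the same tooling used against a data-center cloud applies to the edge device. Below is a minimal sketch using the openstacksdk library; the cloud profile name and the image, flavor and network identifiers are placeholders.

```python
# Minimal sketch: booting a VNF on a single-node "cloud in a box" uses the same
# OpenStack API as a data-center cloud. The cloud profile name and the image,
# flavor and network IDs below are illustrative placeholders.

import openstack

conn = openstack.connect(cloud="edge-node")   # clouds.yaml entry for the uCPE device

server = conn.compute.create_server(
    name="vfirewall-site-042",
    image_id="<firewall-vnf-image-id>",
    flavor_id="<2vcpu-4gb-flavor-id>",
    networks=[{"uuid": "<management-network-id>"}],
)
server = conn.compute.wait_for_server(server)
print(server.status)   # ACTIVE once the VNF is up
```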

Security out of the box

Security at an edge cloud starts when the hosting device or server is installed and initialized. We believe that the best way to accomplish this is with secure zero-touch provisioning (ZTP) of the device over public IP.

The process starts when an unconfigured server is delivered to an end user. Separately, the service provider sends a digital key to the end user. The end user powers up the server and enters the digital key. The edge cloud software builds a secure tunnel from the customer site to the ZTP server, and delivers the security key to identify and authenticate the edge cloud deployment. This step is essential to prevent unauthorized access if the hosting server is delivered to the wrong location. At that point, the site-specific configuration can be applied using the secure tunnel.
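
The flow just described can be summarized in a short Python sketch. The ZTP endpoint, payload fields and responses below are assumptions for illustration, not ADVA’s actual interface.

```python
# Simplified sketch of secure ZTP as described above: the device presents the
# digital key over an authenticated TLS channel and receives its site-specific
# configuration. The URL, payload fields and responses are assumptions.

import requests

ZTP_SERVER = "https://ztp.example-provider.net/api/v1"

def zero_touch_provision(device_serial: str, digital_key: str) -> dict:
    # 1. Authenticate the device to the ZTP server over TLS using the
    #    one-time digital key delivered out of band to the end user.
    resp = requests.post(
        f"{ZTP_SERVER}/authenticate",
        json={"serial": device_serial, "key": digital_key},
        timeout=30,
    )
    resp.raise_for_status()
    token = resp.json()["session_token"]

    # 2. Pull the site-specific configuration over the now-established
    #    secure channel; the same channel persists for ongoing MANO control.
    cfg = requests.get(
        f"{ZTP_SERVER}/config/{device_serial}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    cfg.raise_for_status()
    return cfg.json()

# config = zero_touch_provision("UCPE-0042", digital_key="<key from provider>")
```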

The secure tunnel doesn’t go away once the ZTP process completes. The management and orchestration (MANO) software uses the management channel for ongoing control and monitoring of the edge cloud. This approach provides security even when the connectivity is over public IP.

Security on the edge cloud

One possible drawback to the distributed compute resources and interfaces in an edge cloud model is an increased attack surface for hackers. We must defend edge cloud nodes with layered security at the device, including:
• Application layer – software-based encryption of data plane traffic at Layers 2, 3, or 4 as part of the platform, with the addition of a third-party firewall/UTM as a part of the service chain
• Management layer – two-factor authentication at customer site with encryption of management and user tunnels
• Virtualization layer – safeguard against VM escape (protecting one VM from another, and prevention of rogue management system connectivity to hypervisor) and VNF attestation via checksum validation
• Network layer – modern encryption along with Layer 2 and Layer 3 protocols and micro-segmentation to separate management traffic from user traffic, and to protect both

Security of the management software

Effective automation of edge cloud deployments requires sophisticated MANO software, including the ZTP machinery. All of this software must be able to communicate with the managed edge cloud nodes, and do so securely. This means the use of modern security gateways to both protect the MANO software, as well as to provide the secure management tunnels for connectivity.

But that’s not enough. The MANO software should support scalable deployments and tenancy. Scalability should be built using modern techniques so that tools like load balancers can be used to support scaleout. Tenancy is a useful tool to separate customers or regions and to contain security breaches.

Security is an ongoing process

Hackers aren’t standing still, and neither can we. We must perform ongoing security scans of the software to ensure that vulnerabilities are not introduced. We must also monitor the open source distributions and apply patches as needed. A complete model would include:
• Automated source code verification by tools such as Protecode and Black Duck
• Automated functional verification by tools such as Nessus and OpenSCAP
• Monitoring of vulnerabilities within open source components such as Linux and OpenStack
• Following recommendations from the OpenStack Security Group (OSSG) to identify security vulnerabilities and required patches
• Application of patches and updates as needed

Build out the cloud, but secure it

The move to the cloud means embracing multi-cloud models, and that should include edge cloud deployments for optimization of application deployment. But ensuring security at those distributed edge cloud nodes means applying security in an automated and layered approach. There are tools and methods to realize this approach, but it takes discipline and dedication to do so.

Sunday, August 25, 2019

Blueprint: Kubernetes is the End Game for NFVI

by Martin Taylor, Chief Technical Officer, Metaswitch

In October 2012, when a group of 13 network operators launched their white paper describing Network Functions Virtualization, the world of cloud computing technology looked very different than it does today.  As cloud computing has evolved, and as telcos have developed a deeper understanding of it, so the vision for NFV has evolved and changed out of all recognition.
The early vision of NFV focused on moving away from proprietary hardware to software running on commercial off-the-shelf servers.  This was described in terms of “software appliances”.  And in describing the compute environment in which those software appliances would run, the NFV pioneers took their inspiration from enterprise IT practices of that era, which focused on consolidating servers with the aid of hypervisors that essentially virtualized the physical host environment.

Meanwhile, hyperscale Web players such as Netflix and Facebook were developing cloud-based system architectures that support massive scalability with a high degree of resilience, which can be evolved very rapidly through incremental software enhancements, and which can be operated very cost-effectively with the aid of a high degree of operations automation.  The set of practices developed by these players has come to be known as “cloud-native”, which can be summarized as dynamically orchestratable micro-services architectures, often based on stateless processing elements working with separate state storage micro-services, all deployed in Linux containers.

It’s been clear to most network operators for at least a couple of years that cloud-native is the right way to do NFV, for the following reasons:

  • Microservices-based architectures promote rapid evolution of software capabilities to enable enhancement of services and operations, unlike legacy monolithic software architectures with their 9-18 month upgrade cycles and their costly and complicated roll-out procedures.
  • Microservices-based architectures enable independent and dynamic scaling of different functional elements of the system with active-active N+k redundancy, which minimizes the hardware resources required to deliver any given service.
  • Software packaged in containers is inherently more portable than VMs and does much to eliminate the problem of complex dependencies between VMs and the underlying infrastructure which has been a major issue for NFV deployments to date.
  • The cloud-native ecosystem includes some outstandingly useful open source projects, foremost among which is Kubernetes – of which more later.  Other key open source projects in the cloud-native ecosystem include Helm, a Kubernetes application deployment manager, service meshes such as Istio and Linkerd, and telemetry/logging solutions including Prometheus, Fluentd and Grafana.  All of these combine to simplify, accelerate and lower the cost of developing, deploying and operating cloud-native network functions.

5G is the first new generation of mobile technology since the advent of the NFV era, and as such it represents a great opportunity to do NFV right – that is, the cloud-native way.  The 3GPP standards for 5G are designed to promote a cloud-native approach to the 5G core – but they don’t actually guarantee that 5G core products will be recognisably cloud-native.  It’s perfectly possible to build a standards-compliant 5G core that is resolutely legacy in its software architecture, and we believe that some vendors will go down that path.  But some, at least, are stepping up to the plate and building genuinely cloud native solutions for the 5G core.

Cloud-native today is almost synonymous with containers orchestrated by Kubernetes.  It wasn’t always thus: when we started developing our cloud-native IMS solution in 2012, these technologies were not around.  It’s perfectly possible to build something that is cloud-native in all respects other than running in containers – i.e. dynamically orchestratable stateless microservices running in VMs – and production deployments of our cloud native IMS have demonstrated many of the benefits that cloud-native brings, particularly with regard to simple, rapid scaling of the system and the automation of lifecycle management operations such as software upgrade.  But there’s no question that building cloud-native systems with containers is far better, not least because you can then take advantage of Kubernetes, and the rich orchestration and management ecosystem around it.

The rise to prominence of Kubernetes is almost unprecedented among open source projects.  Originally released by Google as recently as July 2015, Kubernetes became the seed project of the Cloud Native Computing Foundation (CNCF), and rapidly eclipsed all the other container orchestration solutions that were out there at the time.  It is now available in multiple mature distros including Red Hat OpenShift and Pivotal Container Services, and is also offered as a service by all the major public cloud operators.  It’s the only game in town when it comes to deploying and managing cloud native applications.  And, for the first time, we have a genuinely common platform for running cloud applications across both private and public clouds.  This is hugely helpful to telcos who are starting to explore the possibility of hybrid clouds for NFV.

So what exactly is Kubernetes?  It’s a container orchestration system for automating application deployment, scaling and management.   For those who are familiar with the ETSI NFV architecture, it essentially covers the Virtual Infrastructure Manager (VIM) and VNF Manager (VNFM) roles.

In its VIM role, Kubernetes schedules container-based workloads and manages their network connectivity.  In OpenStack terms, those are covered by Nova and Neutron respectively.  Kubernetes includes a kind of Load Balancer as a Service, making it easy to deploy scale-out microservices.
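
For readers who haven’t used it, the sketch below shows roughly what that looks like with the Kubernetes Python client: a scale-out microservice plus a load-balanced front end in a handful of calls. The image name, labels and port are placeholders.

```python
# Sketch: Kubernetes scheduling a scale-out microservice (Nova-like role) and
# exposing it behind a load balancer (Neutron/LBaaS-like role).
# The image name, labels and port are placeholders.

from kubernetes import client, config

config.load_kube_config()
labels = {"app": "sip-proxy"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="sip-proxy"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="sip-proxy",
                                   image="registry.example.com/sip-proxy:1.0",
                                   ports=[client.V1ContainerPort(container_port=5060)])
            ]),
        ),
    ),
)
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="sip-proxy"),
    spec=client.V1ServiceSpec(type="LoadBalancer", selector=labels,
                              ports=[client.V1ServicePort(port=5060, target_port=5060)]),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```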

In its VNFM role, Kubernetes can monitor the health of each container instance and restart any failed instance.  It can also monitor the relative load on a set of container instances that are providing some specific micro-service and can scale out (or scale in) by spinning up new containers or spinning down existing ones.  In this sense, Kubernetes acts as a Generic VNFM.  For some types of workloads, especially stateful ones such as databases or state stores, Kubernetes native functionality for lifecycle management is not sufficient.  For those cases, Kubernetes has an extension called the Operator Framework which provides a means to encapsulate any application-specific lifecycle management logic.  In NFV terms, a standardized way of building Specific VNFMs.
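
As a concrete example of that scale-out/scale-in behavior, here is a sketch that attaches a CPU-based autoscaler to the deployment from the previous sketch; the replica bounds and threshold are illustrative.

```python
# Sketch: Kubernetes acting as a generic VNFM, scaling the sip-proxy deployment
# between 2 and 10 replicas based on CPU load. Thresholds are illustrative.

from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="sip-proxy"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="sip-proxy"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```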

But Kubernetes goes way beyond the simple application lifecycle management envisaged by the ETSI NFV effort.  Kubernetes itself, together with a growing ecosystem of open source projects that surround it, is at the heart of a movement towards a declarative, version-controlled approach to defining both software infrastructure and applications.  The vision here is for all aspects of a complex cloud native system, including cluster infrastructure and application configuration, to be described in a set of documents that are under version control, typically in a Git repository, which maintains a complete history of every change.  These documents describe the desired state of the system, and a set of software agents act so as to ensure that the actual state of the system is automatically aligned with the desired state.  With the aid of a service mesh such as Istio, changes to system configuration or software version can be automatically “canary” tested on a small proportion of traffic prior to being rolled out fully across the deployment.  If any issues are detected, the change can simply be rolled back.  The high degree of automation and control offered by this kind of approach has enabled Web-scale companies such as Netflix to reduce software release cycles from months to minutes.

Many of the network operators we talk to have a pretty good understanding of the benefits of cloud native NFV, and the technicalities of containers and Kubernetes.  But we’ve also detected a substantial level of concern about how we get there from here.  “Here” means today’s NFV infrastructure built on a hypervisor-based virtualization environment supporting VNFs deployed as virtual machines, where the VIM is either OpenStack or VMware.  The conventional wisdom seems to be that you run Kubernetes on top of your existing VIM.  And this is certainly possible: you just provision a number of VMs and treat these as hosts for the purposes of installing a Kubernetes cluster.  But then you end up with a two-tier environment in which you have to deploy and orchestrate services across some mix of cloud native network functions in containers and VM-based VNFs, where orchestration is driving some mix of Kubernetes, OpenStack or VMware APIs and where Kubernetes needs to coexist with proprietary VNFMs for life-cycle management.  It doesn’t sound very pretty, and indeed it isn’t.

In our work with cloud-native VNFs, containers and Kubernetes, we’ve seen just how much easier it is to deploy and manage large scale applications using this approach compared with traditional hypervisor-based approaches.  The difference is huge.  We firmly believe that adopting this approach is the key to unlocking the massive potential of NFV to simplify operations and accelerate the pace of innovation in services.  But at the same time, we understand why some network operators would baulk at introducing further complexity into what is already a very complex NFV infrastructure.
That’s why we think the right approach is to level everything up to Kubernetes.  And there’s an emerging open source project that makes that possible: KubeVirt.

KubeVirt provides a way to take an existing Virtual Machine and run it inside a container.  From the point of view of the VM, it thinks it’s running on a hypervisor.  From the point of view of Kubernetes, it sees just another container workload.  So with KubeVirt, you can deploy and manage applications that comprise any arbitrary mix of native container workloads and VM workloads using Kubernetes.
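
In practice that means creating a VirtualMachine custom resource through the ordinary Kubernetes API. The sketch below is trimmed to the essentials, and the containerDisk image and sizing are placeholders.

```python
# Sketch: running a legacy VM-based VNF under Kubernetes via KubeVirt by
# creating a VirtualMachine custom resource. The disk image and sizing are
# placeholders; the manifest is trimmed to the essentials.

from kubernetes import client, config

config.load_kube_config()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-vnf"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk",
                                           "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [{"name": "rootdisk",
                             "containerDisk": {
                                 "image": "registry.example.com/legacy-vnf-disk:1.0"}}],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm)
```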

In our view, KubeVirt could open the way to adopting Kubernetes as a “level playing field” and de facto standard environment across all types of cloud infrastructure, supporting highly automated deployment and management of true cloud native VNFs and legacy VM-based VNFs alike.  The underlying infrastructure can be OpenStack, VMware, bare metal – or any of the main public clouds including Azure, AWS or Google.  This grand unified vision of NFV seems to us to be truly compelling.  We think network operators should ratchet up the pressure on their vendors to deliver genuinely cloud native, container-based VNFs, and get serious about Kubernetes as an integral part of their NFV infrastructure.  Without any question, that is where the future lies.

Wednesday, August 14, 2019

Blueprint: Turn Your Data Center into an Elastic Bare-Metal Cloud

by Denise Shiffman, Chief Product Officer, DriveScale

What if you could create an automated, elastic, cloud-like experience in your own data center for a fraction of the cost of the public cloud? Today, high-performance, data-oriented and containerized applications are commonly deployed on bare metal, which is keeping them on premises. But the hardware deployed is static, leaving IT with overprovisioned, underutilized, siloed clusters.

Throughout the evolution of data center IT infrastructure, one thing has remained constant. Once deployed, compute, storage and networking systems remain fixed and inflexible. The move to virtual machines better utilized the resources on the host system they were tied to, but virtual machines didn’t make data center hardware more dynamic or adaptable.

In the era of advanced analytics, machine learning and cloud-native applications, IT needs to find ways to quickly adapt to new workloads and ever-growing data. This has many people talking about software-defined solutions. When software is pulled out of proprietary hardware, whether it’s compute, storage or networking hardware, then flexibility is increased, and costs are reduced. With next-generation, composable infrastructure, software-defined takes on new meaning. For the first time, IT can create and recreate logical hardware through software, making the hardware infrastructure fully programmable. And the benefits are enormous.

Composable Infrastructure can also support the move to more flexible and speedy deployments through DevOps with an automated and dynamic solution integrated with Kubernetes and containers. When deploying data-intensive, scale-out workloads, IT now has the opportunity to shift compute and storage infrastructures away from static, fixed resources. Modern database and application deployments require modern infrastructure, driving the emergence of Composable Infrastructure – and it promises to address the exact problems that traditional data centers cannot. In fact, for the first time, using Composable Infrastructure, any data center can become an elastic bare-metal cloud. But what exactly is Composable Infrastructure and how do you implement it?

Elastic and Fully-Automated Infrastructure

Composable Infrastructure begins with disaggregating compute nodes from storage, essentially moving the drives to simple storage systems on a standard Ethernet network. Through a REST API, GUI or template, users choose the instances of compute and the instances of storage required by an application or workload, and the cluster of resources is created on the fly, ready for application deployment. Similar to the way users choose instances in the public cloud and the cloud provider stitches that solution together, composable provides the ability to flexibly create, adapt, deploy and redeploy compute and storage resources instantly using pools of heterogeneous, commodity compute, storage and network fabric.
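
As a rough illustration of what that request/response pattern looks like, the sketch below composes a logical cluster from pooled resources over a REST API. The endpoint, payload fields and response format are entirely hypothetical and are not DriveScale’s actual API.

```python
# Hypothetical sketch of composing a logical cluster through a composable
# infrastructure REST API. The endpoint, fields and response format are
# invented for illustration and do not represent any vendor's actual API.

import requests

COMPOSER_API = "https://composer.example.local/api/v1"

cluster_spec = {
    "name": "analytics-cluster-01",
    "compute": {"instances": 8, "cpu_cores": 32, "memory_gb": 256},
    "storage": {"instances": 8, "capacity_tb": 10, "media": "nvme"},
    "network": {"fabric": "25GbE"},
}

resp = requests.post(f"{COMPOSER_API}/clusters", json=cluster_spec, timeout=60)
resp.raise_for_status()
print(resp.json()["cluster_id"])   # logical cluster is now ready for deployment
```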

Composable gives you cloud agility and scale, and fundamentally different economics.
  • Eliminate Wasted Spend: With local storage inside the server, fixed configurations of compute and storage resources end up trapped inside the box and left unused. Composable Infrastructure enables the ability to independently scale processing and storage and make adjustments to deployments on the fly. Composable eliminates overprovisioning and stranded resources and enables the acquisition of lower cost hardware.
  • Low Cost, Automated Infrastructure: Providing automated infrastructure on premises, composable enables the flexibility and agility of cloud architectures, and creates independent lifecycles for compute and storage lowering costs and eliminating the noisy neighbors problem in the cloud.
  • Performance and Scale: With today’s high-speed standard Ethernet networks, Composable provides equivalent performance to local drives, while eliminating the need for specialized storage networks. Critical too, composable solutions can scale seamlessly to thousands of nodes while maintaining high performance and high availability.

The Local Storage Conundrum

Drive technology continues to advance with larger drives and with NVMe™ flash. Trapping these drives inside a server limits the ability to gain full utilization of these valuable resources. With machine learning and advanced analytics, storage needs to be shared with an ever-larger number of servers, and users need to be able to expand and contract capacity on demand. Composable NVMe puts NVMe on a fabric, whether that’s a TCP, RDMA or iSCSI fabric (often referred to as NVMe over fabrics), and users gain significant advantages:

  • Elastic storage: By disaggregating compute and storage, NVMe drives or slices of drives can be attached to almost any number of servers. The amount of storage can be expanded or reduced on demand. And a single building block vendor SKU can be used across a wide variety of configurations and use cases eliminating operational complexity. 
  • Increased storage utilization:  Historically, flash utilization has been a significant concern. Composable NVMe over fabrics enables the ability to gain full utilization of the drives and the storage system. Resources from storage systems are allocated to servers in a simple and fully automated way – and very high IOPS and low latency comparable to local drives are maintained. 

The Elastic Bare Metal Cloud Data Center

Deploying Kubernetes containerized applications on bare metal with Composable Infrastructure enables optimized resource utilization and application, data and hardware availability. The combination of Kubernetes with programmable bare-metal resources turns any data center into a cloud.

Composable data centers eradicate static infrastructure and impose a model where hardware is redefined as a flexible, adaptable set of resources composed and re-composed at will as applications require – making infrastructure as code a reality. Hardware elasticity and cost-efficiencies can be achieved by using disaggregated, heterogeneous building blocks, requiring just a single diskless server SKU and a single eBOD (Ethernet-attached Box of Drives) SKU or JBOD (Just a Box of Drives) SKU to create an enormous array of logical server designs. Failed drives or compute nodes can be replaced through software, and compute and storage are scaled or upgraded independently. And with the ability to quickly and easily determine optimal resource requirements and adapt ratios of resources for deployed applications, composable data centers won’t leave resources stranded or underutilized.

Getting Started with Composable Infrastructure

Composable Infrastructure is built to meet the scale, performance and high availability demands of data-intensive and cloud-native applications while dramatically lowering the cost of deployment. Moving from static to fluid infrastructure may sound like a big jump, but composable doesn’t require a forklift upgrade. Composable Infrastructure can be easily added to a current cluster and used for the expansion of that cluster. It’s a seamless way to get started and to see cost-savings on day one.

Deploying applications in a composable data center will make it easier for IT to meet the needs of the business, while increasing speed to deployment and lowering infrastructure costs. Once you experience the power and control provided by Composable Infrastructure, you’ll wonder how you ever lived without it.

About DriveScale  
DriveScale instantly turns any data center into an elastic bare-metal cloud with on-demand instances of compute, GPU and storage, including native NVMe over Fabrics, to deliver the exact resources a workload needs, and to expand, reduce or replace resources on the fly. With DriveScale, high-performance bare-metal or Kubernetes clusters deploy in seconds for machine learning, advanced analytics and cloud-native applications at a fraction of the cost of the public cloud. www.drivescale.com

Tuesday, June 4, 2019

Blueprint column: The importance of Gi-LAN in 5G

by Takahiro Mitsuhata, Sr. Manager, Technical Marketing at A10 Networks 

Today's 4G networks support mobile broadband services (e.g., video conferencing, high-definition content streaming, etc.) across millions of smart devices, such as smartphones, laptops, tablets and IoT devices. The number of connected devices is on the rise, growing 15 percent or more year-over-year and projected to be 28.5 billion devices by 2022 according to Cisco's VNI forecast.

Adding networking nodes to scale out capacity is a relatively easy change. Meanwhile, it's essential for service providers to keep offering innovative value-added services to differentiate the service experience and monetize new services. These services include parental control, URL filtering, content protection and endpoint device protection from malware and ID theft, to name a few. Service providers, however, are now facing new challenges of operational complexity and extra network latency coming from those services. Such challenges will become even more significant when it comes to 5G, as this will drive even more rapid proliferation of mobile and IoT devices. It will be critical to minimize latency to ensure there are no interruptions to emerging mission-critical services that are expected to dramatically increase with 5G networks.

Gi-LAN Network Overview

In a mobile network, there are two segments between the radio network and the Internet: the evolved packet core (EPC) and the Gi/SGi-LAN. The EPC is a packet-based mobile core running both voice and data on 4G/ LTE networks. The Gi-LAN is the network where service providers typically provide various homegrown and value-added services using unique capabilities through a combination of IP-based service functions, such as firewall, carrier-grade NAT (CGNAT), deep packet inspection (DPI), policy control and traffic and content optimization. And these services are generally provided by a wide variety of vendors. Service providers need to steer the traffic and direct it to specific service functions, which may be chained, only when necessary, in order to meet specific policy enforcement and service-level agreements for each subscriber.

The Gi-LAN network is an essential segment that enables enhanced security and value-added service offerings to differentiate and monetize services. Therefore, it's crucial to have an efficient Gi-LAN architecture to deliver a high-quality service experience.

 Figure: Gi-LAN with multiple service functions in the mobile network

Challenges in Gi-LAN Segment

In today's 4G/ LTE world, a typical mobile service provider has an ADC, a DPI, a CGNAT and a firewall device as part of its Gi-LAN service components. They are mainly deployed as independent network functions on dedicated physical devices from a wide range of vendors. This makes the Gi-LAN complex and inflexible from an operational and management perspective. This type of architecture, also known as a monolithic architecture, is reaching its limits and does not scale to meet the needs of the rising data traffic in 4G and 4G+ architectures. This will continue to be an issue in 5G infrastructure deployments. The two most serious issues are:

1. Increased latency
2. Significantly higher total cost of ownership

Latency is becoming a significant concern since, even today, lower latency is required by online gaming and video streaming services. With the transition to 5G, ultra-reliable low-latency connectivity targets latencies of less than 1ms for use cases such as real-time interactive AR/ VR, tactile Internet, industrial automation, mission/life-critical services like remote surgery, self-driving cars and many more. The architecture with individual service functions on different hardware has a major impact on this promise of lower latency. Multiple service functions are usually chained, and every hop a data packet traverses between service functions adds latency, degrading the overall service.

Managing each solution independently is also a burden. The network operator must invest in monitoring, management and deployment services for all devices from various vendors individually, resulting in large operational expenses.

Solution – Consolidating Service Functions in Gi-LAN

There are a few approaches to overcoming these issues. A Service-Based Architecture (SBA) or microservices architecture addresses the operational concerns, since such an architecture brings higher flexibility, automation and significant cost reduction. However, it is less likely to address the network latency concern, because each service function, whether VNF or microservice, still contributes to the overall latency as long as it is deployed as an individual VM or microservice.

So, what if multiple service functions are consolidated into one instance? For example, CGNAT and Gi firewall are fundamental components in the mobile network, and some subscribers may choose to use additional services such as DPI and URL filtering. Such consolidation is feasible only if the product/ solution supports flexible traffic steering and service chaining capabilities along with those service functions.
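
A toy Python sketch of the subscriber-aware steering this implies is shown below: the consolidated instance applies only the service functions named in a subscriber's policy, in order. The policy names and function behavior are illustrative assumptions.

```python
# Toy sketch of subscriber-aware service chaining inside a consolidated
# Gi-LAN instance: only the functions named in the subscriber's policy are
# applied, in order. Policy names and function behavior are illustrative.

def cgnat(pkt):        return {**pkt, "nat": True}
def gi_firewall(pkt):  return {**pkt, "fw_checked": True}
def dpi(pkt):          return {**pkt, "classified": True}
def url_filter(pkt):   return {**pkt, "url_checked": True}

SERVICE_FUNCTIONS = {"cgnat": cgnat, "firewall": gi_firewall,
                     "dpi": dpi, "url_filter": url_filter}

SUBSCRIBER_POLICIES = {
    "basic":  ["cgnat", "firewall"],
    "family": ["cgnat", "firewall", "dpi", "url_filter"],
}

def steer(packet: dict, subscriber_plan: str) -> dict:
    """Chain only the service functions required by the subscriber's plan."""
    for fn_name in SUBSCRIBER_POLICIES[subscriber_plan]:
        packet = SERVICE_FUNCTIONS[fn_name](packet)
    return packet

print(steer({"src": "10.0.0.5", "dst": "203.0.113.9"}, "family"))
```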

Consolidating Gi-LAN service functions into one instance/ appliance helps to drastically reduce the extra latency and simplify network design and operation. Such concepts are not new but there aren't many vendors who can provide consolidated Gi-LAN service functions at scale.

Therefore, when building an efficient Gi-LAN network, service providers need to consider a solution that can offer:
  • Multiple network and service functions on a single instance/ appliance
  • Flexible service chaining support
  • Subscriber awareness and DPI capability supported for granular traffic steering
  • Variety of form-factor options - physical (PNF) and virtual (VNF) appliances
  • High performance and capacity with scale-out capability
  • Easy integration and transition to SDN/NFV deployment
About the author

Takahiro Mitsuhata, Sr. Manager, Technical Marketing at A10 Networks

About A10

A10 Networks (NYSE: ATEN) provides Reliable Security Always™, with a range of high-performance application networking solutions that help organizations ensure that their data center applications and networks remain highly available, accelerated and secure. Founded in 2004, A10 Networks is based in San Jose, Calif., and serves customers globally with offices worldwide. For more information, visit: www.a10networks.com and @A10Networks

Wednesday, January 3, 2018

Vodafone: IoT Trends for 2018

by Ludovico Fassati, Head of IoT for Vodafone Americas

IoT will drive business transformation

Companies that have adopted IoT see the technology as mission critical to their business. These companies are leading the way when it comes to digital transformation initiatives. According to Vodafone’s 2017/18 IoT Barometer, 74% of companies that have adopted IoT agree that digital transformation is impossible without it. The businesses that implement IoT solutions in the next year will have a clear advantage over competitors when it comes to evolving their digital capabilities.

LP-WAN solutions will open up the IoT market

IoT adopters have great expectations for the future of the technology, and new connectivity options like Low-Power Wide Area Networks (LP-WAN) are making innovation possible. LP-WAN technologies, like Narrowband IoT (NB-IoT), allow for increased network coverage over a wide area at a low cost, making them an ideal solution for adding connectivity in hard-to-reach places. According to the analyst firm Analysys Mason, once there is greater awareness and understanding of LP-WAN, there will be a new wave of growth in this area. LP-WAN technologies will begin to open the IoT market to applications that have not previously benefitted from connectivity.

IoT will become central to enterprise IT functions

Today, most major enterprises have already integrated IoT into their core systems and initiatives to drive digital businesses. We will continue to see connectivity become part of the enterprise IT fabric – in fact, within five years, IoT will be core to millions of business processes. In the future, companies may even take for granted that devices and appliances like vehicles and HVAC systems can be controlled and monitored remotely, thanks to IoT connectivity.

Companies will be increasingly confident in IoT security solutions

As with any new technology, security remains a top concern when it comes to IoT. However, businesses with large IoT implementations are becoming more confident, given that they have the expertise and resources necessary to tackle security concerns. These organizations will begin to see these security measures as enablers that give them the confidence to push business forward. As the technology matures, trust in IoT-enabled applications and devices will only continue to grow.

Businesses will see unexpected benefits from IoT adoption

Companies that integrate IoT solutions will see a number of benefits from the technology. The benefits go way beyond just enabling better data collection and business insights. IoT will be seen as a driver of improvements across businesses – organizations are already using IoT to reduce risk, cut costs, create new revenue streams, improve employee productivity, enhance customer experience and more. Businesses are likely to see even more benefits as they implement the technology across operations.

Tuesday, June 20, 2017

The Evolution of VNFs within the SD-WAN Ecosystem

As the WAN quickly solidifies its role as the performance bottleneck for cloud services of all kinds, the SD-WAN market will continue to grow and evolve. This evolution will happen in lock step with the move to software-defined everything in data centers for both the enterprise and the service provider, with a focus on Virtual Network Functions (VNFs) and how they could be used to create specialized services based on custom WANs on demand. Although SD-WANs provide multiple benefits in terms of cost, ease-of-management, improved security, and improved telemetry, application performance and reliability remain paramount as the primary goals for the vast majority of SD-WAN deployments. When this is taken into consideration, the role of VNFs in extending and improving application performance becomes clear. Just as importantly, growing use of VNFs within SD-WANs extends an organization’s software-defined architecture throughout the broader network and sets the stage for the insertion of even more intelligence down the road.

What exactly do we mean by the term VNF? 

Before we get started, let’s define what we mean by VNF, since, similar to SD-WAN, this term can be used to describe multiple things. For some, VNFs are primarily a means of replicating legacy capabilities, such as firewall, DHCP and DNS, on a local appliance (physical or virtual) by means of software-defined architectures. However, restricting one’s scope to legacy services alone limits the potential high-value benefits that can be realized from a software-defined approach for more advanced features. Our definition of a VNF is therefore a superset of the localized VNF and is really about the creation of software-defined functions with more advanced capabilities, such as application-aware VPNs, flow-based load balancing, self-healing overlay tunnels, etc. What’s more, many advanced SD-WAN vendors provide their customers with the ability to customize these VNF applications to apply exclusively to their own WAN and/or their specific network requirements to enable unique WAN services.

What do we need VNFs for? 

SD-WAN’s enormous growth this year, as well as its predicted continued growth in the years to come, follows in the footsteps of the paradigm shift data centers are currently undergoing. That is, from a manually configured set of servers and storage appliances to a software-defined architecture, where the servers and storage appliances (virtual or physical) can be managed and operated via a software-defined architecture. This means fewer manual errors, lower cost and a more efficient way to operate the data center.

As an industry, as we implement some of the data-center approaches in the WAN (Wide Area Network), we must note that there is a big difference between data-center networks and WANs. Namely, data-center LANs (Local Area Networks) have ample capacity and bandwidth and, unless they are misconfigured, are never the bottleneck for performance. With WANs, however, whether done in-house by the enterprise or delivered as a service by a telecom or other MSP, the branch offices are connected to the Internet through WAN connections (MPLS, DSL, Cable, Fiber, T1, 3G/4G/LTE, etc.). As a result, the performance choke point is almost always the WAN. This is why SD-WANs became popular so quickly: they provide immediate relief for this issue.

However, as WANs continue to grow in complexity, with enterprises operating multiple clouds and/or cloud models simultaneously, there is a growing need to add automation and programmability into the software-defined WAN in order to ensure performance and reliability. Therefore VNFs that can address this WAN performance bottleneck have the opportunity to transform how enterprises connect to their private, public and hybrid clouds. VNFs that extend beyond a single location, but can cover WAN networks, will have the ability to add programmability to the WAN. In a way, the “software defined” nature of the data center will be stretched out all the way to the branch office, including the WAN connectivity between them.

Defining SD-WAN VNFs

So what does a VNF that is programmable and addresses the WAN bottlenecks look like? These VNFs are overlay tunnels that can perform certain flow logic and therefore can work around network problems on a packet-by-packet basis per flow. They have problem diagnosis, problem alerting and, most importantly, problem resolution baked in. In the days before SD-WAN, an IT manager would face an urgent support ticket whenever a network problem occurred. With VNF-based SD-WANs, the network becomes smart enough to solve the problem proactively, in most cases before it even affects the applications, services and the user experience.

This increase in specific VNFs for the SD-WAN will start with the most immediate need, which is often latency and jitter sensitive applications such as voice, video, UC and other chatty applications. Even now, VNFs are being used to solve these issues. For example, a CIO can have a VNF that dynamically and automatically steers VOIP/SIP traffic around network problems caused by high latency, jitter and packet loss, and in parallel have another VNF to support cross-traffic and latency optimization for “chatty” applications.

In another example, a VNF can be built in minutes that steers non-real-time traffic away from a costly WAN link and applies header compression for real-time traffic only when packet loss or latency crosses a specific threshold during certain times of the day, all while updating syslog with telemetry data. With this level of flexibility and advanced capabilities, VNFs are poised to become the go-to solutions for issues related to the WAN.
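
A simplified Python sketch of the kind of per-flow decision logic such a VNF encodes follows; the link metrics, thresholds and business-hours window are assumptions for the example.

```python
# Simplified sketch of per-flow steering logic inside an SD-WAN VNF: real-time
# flows avoid links whose measured loss/latency crosses a threshold, while
# bulk traffic is steered off the costly link during business hours.
# Link metrics, thresholds and the time window are illustrative assumptions.

from datetime import datetime

LINK_METRICS = {
    "mpls":  {"latency_ms": 18, "loss_pct": 0.1, "cost": "high"},
    "cable": {"latency_ms": 45, "loss_pct": 1.8, "cost": "low"},
}

def pick_link(flow: dict, now: datetime) -> str:
    real_time = flow["app"] in ("voip", "video")
    business_hours = 8 <= now.hour < 18

    if real_time:
        # Prefer a link that currently meets the latency/loss targets.
        good = [name for name, m in LINK_METRICS.items()
                if m["latency_ms"] < 30 and m["loss_pct"] < 1.0]
        return good[0] if good else "mpls"

    # Keep bulk/non-real-time traffic off the costly link during the day.
    if business_hours:
        return "cable"
    return min(LINK_METRICS, key=lambda n: LINK_METRICS[n]["latency_ms"])

print(pick_link({"app": "voip"}, datetime.now()))     # "mpls" with the metrics above
print(pick_link({"app": "backup"}, datetime.now()))   # "cable" during business hours
```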

A VNF load balancer is another such overlay, with the ability to load balance traffic over the WAN links. Since the VNF load balancer is in essence software code that can be deployed onto an SD-WAN appliance, it has the power to take advantage of various types of intelligence and adaptability to optimize WAN performance. A VNF load balancer should also work with standard routing so that you can inject it into your network, say between the WAN modems and your firewall/router, seamlessly.

Clearly, VNFs are part and parcel of SD-WAN’s next wave of evolution, bringing intelligence and agility to the enterprise WAN. As 2017 ramps up, we’ll see more and more innovation on this front, fully extending software-defined architecture from the data center throughout the network.

About the author

Dr. Cahit Jay Akin is the CEO and co-founder of Mushroom Networks, a long-time supplier of SD-WAN infrastructure for enterprises and service providers. Prior to Mushroom Networks, Dr. Akin spent many years as a successful venture capitalist. Dr. Akin received his Ph.D. and M.S.E. degree in Electrical Engineering and M.S. in Mathematics from the University of Michigan at Ann Arbor. He holds a B.S. degree in Electrical Engineering from Bilkent University, Turkey. Dr. Akin has worked on technical and research aspects of communications for over 15 years including authoring several patents and many publications. Dr. Akin was a nominee for the Most Admired CEO award by San Diego Business Journal. 

Sunday, April 30, 2017

Blueprint: Five Considerations for a Successful Cloud Migration in 2017

by Jay Smith, CTO, Webscale Networks

Forrester Research predicts that 2017 will see a dramatic increase in application migration to the cloud.  With more than 90 percent of businesses using the cloud in some form, the question facing IT leaders and web application owners is not when to move to the cloud but how to do it.

The complexities of application integration and migration to the cloud are ever-changing. Migration has its pitfalls: the risk of falling out of compliance with industry regulations or standards, security breaches, loss of control over applications and infrastructure, and issues with application portability, availability and reliability. There are additional complexities to weigh as well, but some are simply obstacles to overcome, while others are outright deal-breakers: factors that cause organizations to halt plans to move apps to the cloud, or even to bring cloud-based apps back on premises.

As I see it, these are the deal-breakers in the minds of decision makers:

Regulatory and Compliance

Many industries, including healthcare and finance, require compliance with multiple regulations or standards. Additionally, in today’s global economy, companies need to understand the laws and regulations of their respective industries as well as of the countries in which their customers reside. With a migration, first and foremost, you need to know whether the type of cloud you are migrating to supports the compliance requirements and regulations your company faces. A cloud migration does not automatically make applications compliant, so a knowledgeable cloud service provider should ensure that you maintain compliance, and do so at the lowest possible cost. In parallel, your cloud service provider needs to consider the security policies required to ensure compliance.

Data Security

To date, data security remains the biggest barrier preventing companies from realizing the benefits of the cloud. According to the Interop ITX 2017 State of the Cloud Report, more than half of respondents (51 percent) cited security as the biggest challenge in moving to the cloud. Although security risks are real, they are manageable. During a migration, you first need to ensure the secure transport of your data to and from the cloud. Once your data is in the cloud, you need to know not only your provider’s SLAs regarding data breaches, but also how the provider will remediate or contain any breaches that do occur. A comprehensive security plan, coupled with the provider’s ability to create purpose-built security solutions, can instill confidence that the provider is up to the task.

Loss of Control

When moving apps to the cloud, many companies assume that they will lose control over app performance and availability. This is an acute concern for companies that need to store production data in the cloud. However, solutions are born from such concerns, and the solution here is as much in the company’s hands as in the provider’s: make sure that performance and availability are addressed front and center in the provider’s SLA. That’s how you maintain control.

Application Portability

With application portability, two issues need to be considered. First, IT organizations often view the hybrid cloud (for example, a combination of public and private clouds) as their architecture of choice – and that choice invites concerns about moving between clouds. Clouds differ in their architecture, OS support, security, and other factors. Second, IT organizations want choice and do not want to be locked into a single cloud or cloud vendor, but the process of porting apps to a new cloud is complex and not for the faint of heart. If the perceived complexity is too great, IT will opt to keep its applications on premises.

App Availability and Infrastructure Reliability

Availability and reliability can become issues if a cloud migration is not successful. To ensure its success, first make sure the applications you are migrating are architected with the cloud in mind or can be adapted to cloud principles. Second, to ensure app availability and infrastructure reliability after the migration, consider any potential issues that may cause downtime, including server performance, network design, and configurations. Business continuity after a cloud migration is ensured through proper planning.

The great migration is here, and to ensure your company’s success in moving to the cloud, it is important to find a partner that has the technology, people, processes and security capabilities in place to handle any challenges. Your partner must be experienced in architecture and deployment across private, public and hybrid clouds. A successful migration will help you achieve cost savings and peace of mind while leveraging the benefits and innovation of the cloud.

About the Author

Jay Smith founded Webscale in 2012 and currently serves as the Chief Technology Officer of the Company. Jay received his Ph.D. in Electrical and Computer Engineering from Colorado State University in 2008. Jay has co-authored over 30 peer-reviewed articles in parallel and distributed computing systems.

In addition to his academic publications, while at IBM, Jay received over 20 patents and numerous corporate awards for the quality of those patents. Jay left IBM as a Master Inventor in 2008 to focus on High Performance Computing at DigitalGlobe. There, Jay pioneered the application of GPGPU processing within DigitalGlobe.

Monday, January 9, 2017

Forecast for 2017? Cloudy

by Lori MacVittie, Technology Evangelist, F5 Networks

In 2016, IT professionals saw major shifts in the cloud computing industry, from developing more sophisticated approaches to application delivery to discovering the vulnerabilities of connected IoT devices. Enterprises continue to face increasing and entirely new security threats and availability challenges as they migrate to private, public and multi-cloud systems, which is causing organizations to rethink their infrastructures. As we inch toward the end of the year, F5 Networks predicts the key changes we can expect to see in the cloud computing landscape in 2017.

IT’s MVP of 2017? Cloud architects

With more enterprises adopting diverse cloud solutions, the role of cloud architects will become increasingly important. The IT professionals that will hold the most valuable positions in an IT organization are those with skills to define criteria for and manage complex cloud architectures.

Multi-cloud is the new normal in 2017

Over the next year, enterprises will continue to seek ways to avoid public cloud lock-in, relying on multi-cloud strategies to do so. They will aim to regain leverage over cloud providers, moving toward a model where they can pick and choose the services from multiple providers that are best suited to their business needs.

Organizations will finally realize the full potential of the cloud

Companies are now realizing they can use the cloud for more than just finding efficiency and cost savings within their existing strategies and ways of doing business. 2017 will be a tipping point, with companies investing in the cloud to enable entirely new scenarios, spurred by technologies like big data and machine learning that will transform how they do business in the future.

The increasing sophistication of cyber attacks will put more emphasis on private cloud

While enterprises trust public cloud providers to host many of their apps, the lack of visibility into the data generated by those apps causes concerns about security. This means more enterprises will look to private cloud solutions. Public cloud deployments won’t be able to truly accelerate until companies feel comfortable with the consistency of security policy and identity management.

More devices, more problems: in 2017, public cloud will become too expensive for IoT

Businesses typically think of public cloud as the cheaper solution for their data center needs, yet they often forget that things like bandwidth and security services come at an extra cost. IoT devices generate vast amounts of data, and as sensors are installed in more and more places, this data will continue to grow exponentially. This year, enterprises will put more IoT applications in their private clouds, at least until public cloud providers develop economical solutions to manage the huge amounts of data these apps produce.

The conversation around apps will finally go beyond the “where?”

IT professionals constantly underestimate the cost, time and pain of stretching solutions up or down the stack. We’ve seen this with OpenStack, and we’ll see it with Docker. This year, cloud migration and containers will reach a point where customers won’t be able to think only about where they want to move apps; they’ll also need to think about the identity tools needed for secure authentication and authorization, how to protect against and prevent data loss from microservices and SaaS apps, and how to collect and analyze data across all infrastructure services quickly.

A new standard for cloud providers is in motion, and this year will see major developments not only in reconsidering the value of enterprise cloud, but also in modifying cloud strategy to fully extend enterprise offerings and data security. Evaluating the risks of cloud migration and management has never been as vital to a company’s stability as it is now. Over the course of the year, IT leaders who embrace and adapt to these industry shifts will be the ones to reap the benefits of a secure, cost-effective and reliable cloud.

About the Author

Lori MacVittie is Technology Evangelist at F5 Networks.  She is a subject matter expert on emerging technology responsible for outbound evangelism across F5’s entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations, in addition to network and systems administration expertise. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine where she evaluated and tested application-focused technologies including app security and encryption-related solutions. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University, and is an O’Reilly author.

MacVittie is a member of the Board of Regents for the DevOps Institute, and an Advisory Board Member for CloudNOW.

Friday, January 6, 2017

Wi-Fi Trends Take Center Stage in 2017

by Shane Buckley, CEO, Xirrus 

From an unprecedented DNS outage that temporarily paralyzed the entire internet, to the evolution of federated identity for simple, secure access to Wi-Fi and applications, 2016 had its mix of growing pains and innovative steps forward.

Here’s why 2017 will shape up into an interesting year for Wi-Fi technology.

IoT will create continued security issues on global networks

In 2017, the growth of IoT will put enormous pressure on Wi-Fi networks. While vendors must address the complexity of onboarding these devices onto their networks, security can’t be left behind. The proliferation of IoT devices will push high density into almost all locations, from coffee shops to living rooms, prompting more performance and security concerns. Whether it’s Wi-Fi-connected alarms or smart refrigerators, the security of our homes will be scrutinized and will become a key concern in 2017. Mass production of IoT devices will make them more susceptible to hacking, as they will not be equipped with proper built-in security.

The recent IoT-based attack on DNS provider Dyn opened the floodgates, and estimates show the IoT market reaching 10 billion devices by 2020. The event foreshadows the power hackers hold when they invade these IoT systems. Taking down a significant portion of the internet grows ever more damaging, yet all too plausible these days. Because of increased security concerns, vendors will equip devices with the ability to connect to the IoT server only over pre-designed ports and protocols. If IoT vendors don’t start putting security at the forefront of product development, we can only expect more large-scale cyberattacks in 2017.
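
As a rough illustration of that kind of pre-designed egress policy, here is a minimal Python sketch of an allowlist check a device or home gateway might apply to outbound connections. The hostnames and ports are made up for the example; a real device would enforce this in firmware or in the gateway’s firewall rather than in application code.

```python
# Illustrative egress allowlist: the device may only reach its own backend
# over pre-approved ports and protocols (hostnames here are made up).
ALLOWED_DESTINATIONS = {
    ("telemetry.example-iot.com", 8883, "tcp"),  # MQTT over TLS
    ("firmware.example-iot.com", 443, "tcp"),    # HTTPS firmware updates
}


def egress_allowed(host, port, proto):
    """Permit only connections that match the pre-designed allowlist."""
    return (host, port, proto.lower()) in ALLOWED_DESTINATIONS


# Anything else, such as an outbound telnet session opened by botnet
# malware, is simply dropped.
assert egress_allowed("telemetry.example-iot.com", 8883, "TCP")
assert not egress_allowed("198.51.100.7", 23, "tcp")
```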

LTE networks won’t impact Wi-Fi usage

Don’t expect LTE networks to replace Wi-Fi. The cost of deploying LTE networks is ten times greater, and LTE is less adaptable to indoor environments than Wi-Fi. Properly deployed Wi-Fi will remain the lowest-cost technology available, with similar or superior performance to LTE, and therefore will not be replaced by it. When people have access to Wi-Fi, they’ll connect; data plan limitations remain too common.

Additionally, the FCC and other government agencies around the world have opened up 5GHz spectrum so that Wi-Fi access can remain free and unmetered. After all, we don’t want carriers grabbing free spectrum and charging us for every byte we send, now do we?

LTE and Wi-Fi will co-exist as they do today, with LTE working well outdoors and well-designed Wi-Fi working consistently throughout indoor spaces.

The push toward federated identity will continue in 2017

Today, there remains a huge number of disparate Wi-Fi networks, all with different authentication requirements. This marks an opportunity for Wi-Fi vendors. In the coming year, we will see federated identity become a primary differentiator. By implementing federated identity, vendors simplify and secure the login process. Consumers can auto-connect to any public Wi-Fi network with their existing credentials – whether Google, Microsoft or Facebook – thus enjoying a seamless onboarding experience. It’s the next step for single sign-on (SSO), and one that will set Wi-Fi vendors apart in 2017.

This coming year, the repercussions of IoT, the coexistence of LTE and Wi-Fi, and the demand for simple, secure access to Wi-Fi will take center stage. The onus falls on company leaders, who must adapt their business strategies to keep pace with the fast and ever-changing Wi-Fi landscape. 2017 will have plenty in store.

About the Author

Shane Buckley is CEO of Xirrus. Most recently, Mr. Buckley was General Manager and Senior Vice President at NETGEAR, where he led the commercial business unit to 50 percent revenue growth over two years, reaching $330 million in 2011, and played a prime role in growing corporate revenues over 30 percent. Prior to that, Mr. Buckley was President and CEO of Rohati Systems, a leader in cloud-based access management solutions, and Chief Operating Officer of Nevis Networks, a leader in secure switching and access control. He has also held the positions of Vice President of Worldwide Enterprise at Juniper Networks, President International at Peribit Networks, a leader in WAN optimization, and EMEA Vice President at 3Com Corp. Mr. Buckley holds an engineering degree from the Cork Institute of Technology in Ireland.

Sunday, December 18, 2016

Perspectives 2017: Financial Agility for Digital Transformation

by Andrew Blacklock, Senior Director, Strategy and Products, Cisco Capital

Ten years ago, companies like Uber and Airbnb were ideas waiting for technology to catch up. Now, these two brands represent a shift in the global economy in what’s known as digital transformation. This evolution towards digital-everything is constantly accelerating, leaving non-digital companies scrambling for a means to kickstart their digitization projects.

According to Gartner, there are 125,000 enterprises in the U.S. alone that are currently launching digital transformation projects. These companies are of all sizes, from nimble startups to global conglomerates. Despite the strong drive to a digital future, 40% of businesses will be unsuccessful in their digital transformation, according to Cisco’s Digital Vortex study.

Many attribute the difficulties associated with the digital transition to the significant costs of restructuring an organization’s technological backbone. Because of these challenges, many companies opt for an agile approach to financial restructuring.

Financial agility allows companies to evolve and meet the rapidly changing demands of digital business through liquid, scalable options that won’t break the bank. While it is not always possible to predict changes in the business environment, agile financing allows companies to acquire the proper technology and tools necessary to plan, work and expand their businesses.

Financial agility isn’t just another buzzword – it’s a characteristic that organizations of all sizes in all industries need to champion in order to drive efficiencies and competitive advantages. It’s a way that companies can acquire the technologies needed to shift their business without having to “go all in.” This allows companies to avoid large up-front capital investment, help with cash flow by spreading costs over time and preserve existing sources of capital to allocate to other areas of the transformation.

Organizations now need to decide how they can best adjust to the transformation and transition for the next stage of digital business. With financial options that enable organizations to acquire technology and scale quickly, companies can pivot with agility to meet the constantly-evolving demands of our digital age.

Looking at the bigger picture, financial agility is a crucial piece of an organization’s overall digital transformation puzzle. While the digital landscape might be constantly changing, flexible financing helps set an organization up for a successful transformation to the future of digital business.

About the Author

Andrew Blacklock is Senior Director, Strategy and Financial Product Development at Cisco Capital. As director of strategy & business operations, Andrew is responsible for strategy, program management and business operations. He has been with Cisco Capital for 17 years with more than 20 years of experience in captive financing. He is a graduate of Michigan State University and the Thunderbird School of Global Management.

Wednesday, December 14, 2016

Ten Cybersecurity Predictions for 2017

by Dr. Chase Cunningham, ECSA, LPT 
Director of Cyber Operations, A10 Networks 

The cyber landscape changes dramatically year after year. If you blink, you may miss something; whether that’s a noteworthy hack, a new attack vector or new solutions to protect your business. Sound cyber security means trying to stay one step ahead of threat actors. Before the end of 2016 comes around, I wanted to grab my crystal ball and take my best guess at what will be the big story lines in cyber security in 2017.

1. IoT continues to pose a major threat. In late 2016, all eyes were on IoT-borne attacks. Threat actors were using Internet of Things devices to build botnets and launch massive distributed denial of service (DDoS) attacks. In two instances, these botnets were built from unsecured “smart” cameras. As IoT devices proliferate, and everything gets a Web connection — refrigerators, medical devices, cameras, cars, tires, you name it — this problem will continue to grow unless proper precautions like two-factor authentication and strong password protection are taken.

Device manufacturers must also change their behavior. They must scrap default passwords and either assign unique credentials to each device or apply modern password configuration techniques for the end user during setup.
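
A minimal sketch of the first option, assigning a unique credential to each device at provisioning time, might look like the following Python snippet. The function name and credential format are illustrative; a real manufacturing flow would store only a hash server-side and deliver the secret to the buyer out of band (for example, printed on the device).

```python
import secrets
import string


def provision_device_credential(device_serial):
    """Generate a unique, random initial password for one device."""
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    return {"device_id": device_serial, "initial_password": password}


# Each unit leaving the factory gets its own credential, so a leaked
# default password can no longer unlock an entire product line.
print(provision_device_credential("CAM-0012345"))
```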

2. DDoS attacks get even bigger. We recently saw some of the largest DDoS attacks on record, in some instances topping 1 Tbps. That’s absolutely massive, and it shows no sign of slowing. Through 2015, the largest attacks on record were in the 65 Gbps range. Going into 2017, we can expect to see DDoS attacks grow in size, further fueling the need for solutions tailored to protect against and mitigate these colossal attacks.

3. Predictive analytics gains ground. Math, machine learning and artificial intelligence will be baked further into security solutions. Security solutions will learn from the past and essentially predict attack vectors and behavior based on that historical data. This means security solutions will be able to more accurately and intelligently identify and predict attacks by using event data and marrying it to real-world attacks.
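
As a toy illustration of learning from historical event data, the Python sketch below builds a simple statistical baseline from past log volumes and flags hours that deviate sharply from it. The numbers are invented and the model deliberately simplistic; a real product would use far richer features and algorithms, but the shape of the approach is the same.

```python
import statistics

# Invented history: failed-login counts per hour pulled from event logs.
history = [12, 9, 15, 11, 10, 14, 13, 8, 12, 11, 10, 13]

baseline = statistics.mean(history)
spread = statistics.stdev(history)
THRESHOLD = baseline + 3 * spread  # flag anything far outside the learned norm


def looks_anomalous(current_count):
    """Flag event volumes that deviate sharply from historical behavior."""
    return current_count > THRESHOLD


print(looks_anomalous(14))   # a normal hour -> False
print(looks_anomalous(250))  # a credential-stuffing burst -> True
```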

4. Attack attempts on industrial control systems. As with the IoT attacks, it’s only a matter of time until we see major industrial control system (ICS) attacks. Attacks on ecommerce stores, social media platforms and others have become so commonplace that we’ve almost grown numb to them. Bad guys will move on to bigger targets — dams, water treatment facilities and other critical systems — to gain recognition.

5. Upstream providers become targets. The DDoS attack launched against DNS provider Dyn, which knocked out many major sites that use Dyn for DNS services, made headlines because it highlighted what can happen when threat actors target a service provider rather than just the end customers. These types of attacks on upstream providers cause a ripple effect that interrupts service not only for the provider, but for all of its customers and users. The attack on Dyn set a dangerous precedent and will likely be emulated several times over in the coming year.

6. Physical security grows in importance. Cyber security is just one part of the puzzle; strong physical security is also necessary. In 2017, companies will take notice and implement stronger physical security measures and policies to protect against internal threats, theft, and unwanted devices coming in and infecting systems.

7. Automobiles become a target. With autonomous vehicles on the way and the massive success of sophisticated electric cars like Teslas, the automobile industry will become a much more attractive target for attackers. Taking control of an automobile isn’t fantasy, and it could be a real threat next year.

8. Point solutions no longer do the job. The days of Frankensteining together a set of security solutions have to end. Instead of buying a single solution for each issue, businesses must trust security solutions from best-of-breed vendors and partnerships that answer a number of security needs. Why have 12 solutions when you can have three? In 2017, your security footprint will get smaller, but it will be much more powerful.

9. The threat of ransomware grows. Ransomware was one of the fastest-growing online threats in 2016, and it will become more serious and more frequent in 2017. We’ve seen businesses and individuals pay thousands of dollars to free their data from the grip of threat actors. The growth of ransomware means we must be more diligent about protecting against it, starting with not clicking on anything suspicious. Remember: if it sounds too good to be true, it probably is.

10. Security teams are 24/7. The days of security teams working 9-to-5 are long gone. Now is the dawn of the 24/7 security team. As more security solutions become services-based, consumers and businesses will demand the security teams and their vendors be available around the clock. While monitoring tools do some of the work, threats don’t stop just because it’s midnight, and security teams need to be ready to do battle all day, every day.

About the Author

Dr. Chase Cunningham (CPO, USN Ret.) is A10 Networks’ Director of Cyber Operations. He is an industry authority on advanced threat intelligence and cyberattack tactics. Cunningham is a former US Navy chief cryptologic technician who supported US Special Forces and Navy SEALs during three tours of Iraq. During this time, he also supported the NSA and acted as lead computer network exploitation expert for the US Joint Cryptologic Analysis Course. Prior to joining A10 Networks, Cunningham was the director of cyber threat research and innovation at Armor, a provider of cloud-based cyber defense solutions.

