
Thursday, October 24, 2019

Blueprint column: Stop the intruders at the door!

by Prayson Pate, CTO, Edge Cloud, ADVA

Security is one of the biggest concerns about cloud computing. And securing the cloud means stopping intruders at the door by securing its onramp – the edge. How can edge cloud be securely deployed, automatically, at scale, over the public internet?

The bad news is that it’s impossible to be 100% secure, especially when you bring internet threats into the mix.

The good news is that we can make it so difficult for intruders that they move on to easier targets. And we can ensure that we contain and limit the damage if they do get in.

To achieve that requires an automated and layered approach. Automation ensures that policies are up to date, passwords and keys are rotated, and patches and updates are applied. Layering means that breaching one barrier does not give the intruder the keys to the kingdom. Finally, security must be designed in – not tacked on as an afterthought.

Let’s take a closer look at what edge cloud is, and how we can build and deliver it, securely and at scale.

Defining and building the edge cloud

Before we continue with the security discussion, let’s talk about what we mean by edge cloud.

Edge cloud is the delivery of cloud resources (compute, networking, and storage) to the perimeter of the network and the usage of those resources for both standard compute loads (micro-cloud) as well as for communications infrastructure (uCPE, SD-WAN, MEC, etc.), as shown below.
For maximum utility, we must build edge cloud in a manner consistent with public cloud. For many applications that means using standard open source components such as Linux, KVM and OpenStack, and supporting both virtual machines and containers.

One of the knocks against OpenStack is its heavy footprint. A standard data center deployment for OpenStack includes one or more servers for the OpenStack controller, with OpenStack agents running on each of the managed nodes.

It’s possible to optimize this model for edge cloud by slimming down the OpenStack controller and running it on the same node as the managed resources. In this model, all the cloud resources – compute, storage, networking and control – reside in the same physical device. In other words, it’s a “cloud in a box.” This is a great model for edge cloud, and it gives us the benefits of a standard cloud model in a small footprint.

Security out of the box

Security at an edge cloud starts when the hosting device or server is installed and initialized. We believe that the best way to accomplish this is with secure zero-touch provisioning (ZTP) of the device over public IP.

The process starts when an unconfigured server is delivered to an end user. Separately, the service provider sends a digital key to the end user. The end user powers up the server and enters the digital key. The edge cloud software builds a secure tunnel from the customer site to the ZTP server, and delivers the security key to identify and authenticate the edge cloud deployment. This step is essential to prevent unauthorized access if the hosting server is delivered to the wrong location. At that point, the site-specific configuration can be applied using the secure tunnel.
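To make the sequence concrete, here is a minimal sketch of the key-based activation step that secure ZTP relies on, written in Python. The endpoint, payload and key format are illustrative assumptions, not ADVA’s actual API; the point is that the device authenticates over a pinned TLS tunnel before any configuration is delivered.

    # Illustrative sketch of the ZTP activation step described above.
    # The server URL, payload and key format are hypothetical, not ADVA's API.
    import json
    import requests

    ZTP_SERVER = "https://ztp.example-provider.net/api/v1/activate"  # hypothetical endpoint
    CA_BUNDLE = "/etc/ztp/provider-ca.pem"  # pinned provider CA shipped with the device

    def activate(serial_number: str, activation_key: str) -> dict:
        """Authenticate this edge node and fetch its site-specific configuration."""
        payload = {"serial": serial_number, "key": activation_key}
        # TLS against the pinned CA provides the secure tunnel; the digital key
        # proves the device is at the intended site before any config is handed over.
        resp = requests.post(ZTP_SERVER, json=payload, verify=CA_BUNDLE, timeout=30)
        resp.raise_for_status()
        return resp.json()  # site-specific configuration to apply locally

    if __name__ == "__main__":
        config = activate("EDGE-1234-5678", "digital-key-entered-by-end-user")
        print(json.dumps(config, indent=2))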

The secure tunnel doesn’t go away once the ZTP process completes. The management and orchestration (MANO) software uses the management channel for ongoing control and monitoring of the edge cloud. This approach provides security even when the connectivity is over public IP.

Security on the edge cloud

One possible drawback to the distributed compute resources and interfaces in an edge cloud model is an increased attack surface for hackers. We must defend edge cloud nodes with layered security at the device, including:
• Application layer – software-based encryption of data plane traffic at Layers 2, 3, or 4 as part of platform, with the addition of third-party firewall/UTM as a part of the service chain
• Management layer – two-factor authentication at customer site with encryption of management and user tunnels
• Virtualization layer – safeguard against VM escape (protecting one VM from another, and prevention of rogue management system connectivity to the hypervisor) and VNF attestation via checksum validation (see the sketch after this list)
• Network layer – modern encryption along with Layer 2 and Layer 3 protocols and micro-segmentation to separate management traffic from user traffic, and to protect both
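VNF attestation via checksum validation, mentioned in the virtualization-layer bullet above, can be illustrated with a short sketch: compute the digest of the VNF image and refuse to onboard it if the digest does not match the vendor’s manifest. The file path and expected hash below are placeholders.

    # Minimal sketch of VNF image attestation by checksum; the path and expected
    # digest are placeholders for whatever the vendor's signed manifest provides.
    import hashlib

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream the image file and return its SHA-256 digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def attest(image_path: str, expected_sha256: str) -> bool:
        """Only onboard a VNF image whose checksum matches the manifest."""
        return sha256_of(image_path) == expected_sha256.lower()

    if __name__ == "__main__":
        ok = attest("/var/lib/vnf/firewall-vnf.qcow2",
                    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
        print("attestation passed" if ok else "attestation FAILED - do not deploy")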

Security of the management software

Effective automation of edge cloud deployments requires sophisticated MANO software, including the ZTP machinery. All of this software must be able to communicate with the managed edge cloud nodes, and do so securely. This means using modern security gateways both to protect the MANO software and to provide the secure management tunnels for connectivity.

But that’s not enough. The MANO software should support scalable deployments and tenancy. Scalability should be built using modern techniques so that tools like load balancers can be used to support scale-out. Tenancy is a useful tool to separate customers or regions and to contain security breaches.

Security is an ongoing process

Hackers aren’t standing still, and neither can we. We must perform ongoing security scans of the software to ensure that vulnerabilities are not introduced. We must also monitor the open source distributions and apply patches as needed. A complete model would include:
• Automated source code verification by tools such as Protecode and Black Duck
• Automated functional verification by tools such as Nessus and OpenSCAP
• Monitoring of vulnerabilities within open source components such as Linux and OpenStack
• Following recommendations from the OpenStack Security Group (OSSG) to identify security vulnerabilities and required patches
• Application of patches and updates as needed

Build out the cloud, but secure it

The move to the cloud means embracing multi-cloud models, and that should include edge cloud deployments for optimized application deployment. But ensuring security at those distributed edge cloud nodes means taking an automated and layered approach. There are tools and methods to realize this approach, but it takes discipline and dedication to do so.

Sunday, August 25, 2019

Blueprint: Kubernetes is the End Game for NFVI

by Martin Taylor, Chief Technical Officer, Metaswitch

In October 2012, when a group of 13 network operators launched their white paper describing Network Functions Virtualization, the world of cloud computing technology looked very different than it does today.  As cloud computing has evolved, and as telcos have developed a deeper understanding of it, so the vision for NFV has evolved and changed out of all recognition.
The early vision of NFV focused on moving away from proprietary hardware to software running on commercial off-the-shelf servers.  This was described in terms of “software appliances”.  And in describing the compute environment in which those software appliances would run, the NFV pioneers took their inspiration from enterprise IT practices of that era, which focused on consolidating servers with the aid of hypervisors that essentially virtualized the physical host environment.

Meanwhile, hyperscale Web players such as Netflix and Facebook were developing cloud-based system architectures that support massive scalability with a high degree of resilience, which can be evolved very rapidly through incremental software enhancements, and which can be operated very cost-effectively with the aid of a high degree of operations automation.  The set of practices developed by these players has come to be known as “cloud-native”, which can be summarized as dynamically orchestratable micro-services architectures, often based on stateless processing elements working with separate state storage micro-services, all deployed in Linux containers.

It’s been clear to most network operators for at least a couple of years that cloud-native is the right way to do NFV, for the following reasons:

  • Microservices-based architectures promote rapid evolution of software capabilities to enable enhancement of services and operations, unlike legacy monolithic software architectures with their 9-18 month upgrade cycles and their costly and complicated roll-out procedures.
  • Microservices-based architectures enable independent and dynamic scaling of different functional elements of the system with active-active N+k redundancy, which minimizes the hardware resources required to deliver any given service.
  • Software packaged in containers is inherently more portable than VMs and does much to eliminate the problem of complex dependencies between VMs and the underlying infrastructure which has been a major issue for NFV deployments to date.
  • The cloud-native ecosystem includes some outstandingly useful open source projects, foremost among which is Kubernetes – of which more later.  Other key open source projects in the cloud-native ecosystem include Helm, a Kubernetes application deployment manager, service meshes such as Istio and Linkerd, and telemetry/logging solutions including Prometheus, Fluentd and Grafana.  All of these combine to simplify, accelerate and lower the cost of developing, deploying and operating cloud-native network functions.

5G is the first new generation of mobile technology since the advent of the NFV era, and as such it represents a great opportunity to do NFV right – that is, the cloud-native way.  The 3GPP standards for 5G are designed to promote a cloud-native approach to the 5G core – but they don’t actually guarantee that 5G core products will be recognisably cloud-native.  It’s perfectly possible to build a standards-compliant 5G core that is resolutely legacy in its software architecture, and we believe that some vendors will go down that path.  But some, at least, are stepping up to the plate and building genuinely cloud native solutions for the 5G core.

Cloud-native today is almost synonymous with containers orchestrated by Kubernetes.  It wasn’t always thus: when we started developing our cloud-native IMS solution in 2012, these technologies were not around.  It’s perfectly possible to build something that is cloud-native in all respects other than running in containers – i.e. dynamically orchestratable stateless microservices running in VMs – and production deployments of our cloud native IMS have demonstrated many of the benefits that cloud-native brings, particularly with regard to simple, rapid scaling of the system and the automation of lifecycle management operations such as software upgrade.  But there’s no question that building cloud-native systems with containers is far better, not least because you can then take advantage of Kubernetes, and the rich orchestration and management ecosystem around it.

The rise to prominence of Kubernetes is almost unprecedented among open source projects.  Originally released by Google as recently as July 2015, Kubernetes became the seed project of the Cloud Native Computing Foundation (CNCF), and rapidly eclipsed all the other container orchestration solutions that were out there at the time.  It is now available in multiple mature distros including Red Hat OpenShift and Pivotal Container Services, and is also offered as a service by all the major public cloud operators.  It’s the only game in town when it comes to deploying and managing cloud native applications.  And, for the first time, we have a genuinely common platform for running cloud applications across both private and public clouds.  This is hugely helpful to telcos who are starting to explore the possibility of hybrid clouds for NFV.

So what exactly is Kubernetes?  It’s a container orchestration system for automating application deployment, scaling and management.   For those who are familiar with the ETSI NFV architecture, it essentially covers the Virtual Infrastructure Manager (VIM) and VNF Manager (VNFM) roles.

In its VIM role, Kubernetes schedules container-based workloads and manages their network connectivity.  In OpenStack terms, those are covered by Nova and Neutron respectively.  Kubernetes includes a kind of Load Balancer as a Service, making it easy to deploy scale-out microservices.

In its VNFM role, Kubernetes can monitor the health of each container instance and restart any failed instance.  It can also monitor the relative load on a set of container instances that are providing some specific micro-service and can scale out (or scale in) by spinning up new containers or spinning down existing ones.  In this sense, Kubernetes acts as a Generic VNFM.  For some types of workloads, especially stateful ones such as databases or state stores, Kubernetes native functionality for lifecycle management is not sufficient.  For those cases, Kubernetes has an extension called the Operator Framework which provides a means to encapsulate any application-specific lifecycle management logic.  In NFV terms, a standardized way of building Specific VNFMs.
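As an illustration of this VNFM-like behaviour, the sketch below uses the Kubernetes Python client to declare a deployment with a liveness probe (so failed instances are restarted) and a HorizontalPodAutoscaler (so the microservice scales out and in with load). The container image, namespace and thresholds are placeholders, not a reference to any particular vendor’s CNF.

    # Sketch: Kubernetes acting as a generic VNFM - restart-on-failure via a
    # liveness probe and load-driven scaling via an HPA. Names are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in a pod

    container = client.V1Container(
        name="media-proxy",
        image="registry.example.com/cnf/media-proxy:1.0",   # hypothetical CNF image
        ports=[client.V1ContainerPort(container_port=8080)],
        liveness_probe=client.V1Probe(                       # failed pods are restarted
            http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
            period_seconds=5, failure_threshold=3),
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m"}, limits={"cpu": "1"}),
    )

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="media-proxy"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "media-proxy"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "media-proxy"}),
                spec=client.V1PodSpec(containers=[container]))))

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="media-proxy"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="media-proxy"),
            min_replicas=3, max_replicas=20,
            target_cpu_utilization_percentage=60))           # scale out above 60% CPU

    client.AppsV1Api().create_namespaced_deployment(namespace="cnf", body=deployment)
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="cnf", body=hpa)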

But Kubernetes goes way beyond the simple application lifecycle management envisaged by the ETSI NFV effort.  Kubernetes itself, together with a growing ecosystem of open source projects that surround it, is at the heart of a movement towards a declarative, version-controlled approach to defining both software infrastructure and applications.  The vision here is for all aspects of a complex cloud native system, including cluster infrastructure and application configuration, to be described in a set of documents that are under version control, typically in a Git repository, which maintains a complete history of every change.  These documents describe the desired state of the system, and a set of software agents act so as to ensure that the actual state of the system is automatically aligned with the desired state.  With the aid of a service mesh such as Istio, changes to system configuration or software version can be automatically “canary” tested on a small proportion of traffic prior to being rolled out fully across the deployment.  If any issues are detected, the change can simply be rolled back.  The high degree of automation and control offered by this kind of approach has enabled Web-scale companies such as Netflix to reduce software release cycles from months to minutes.
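The reconciliation loop those software agents run can be sketched generically in a few lines of Python. The functions for reading desired state (from Git), reading actual state (from the cluster) and applying changes are hypothetical stand-ins; the point is the pattern of continuously converging actual state towards the version-controlled desired state.

    # Generic reconciliation loop of the kind GitOps agents run; the functions for
    # reading desired/actual state and applying changes are hypothetical stand-ins.
    import time

    def reconcile_forever(read_desired_state, read_actual_state, apply_change,
                          interval_seconds: int = 30):
        """Continuously drive actual state towards the version-controlled desired state."""
        while True:
            desired = read_desired_state()      # e.g. manifests checked out from Git
            actual = read_actual_state()        # e.g. objects reported by the cluster API
            for name, spec in desired.items():
                if actual.get(name) != spec:    # drift or new object: converge it
                    apply_change(name, spec)
            for name in set(actual) - set(desired):
                apply_change(name, None)        # object removed from Git: delete it
            time.sleep(interval_seconds)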

Many of the network operators we talk to have a pretty good understanding of the benefits of cloud native NFV, and the technicalities of containers and Kubernetes.  But we’ve also detected a substantial level of concern about how we get there from here.  “Here” means today’s NFV infrastructure built on a hypervisor-based virtualization environment supporting VNFs deployed as virtual machines, where the VIM is either OpenStack or VMware.  The conventional wisdom seems to be that you run Kubernetes on top of your existing VIM.  And this is certainly possible: you just provision a number of VMs and treat these as hosts for the purposes of installing a Kubernetes cluster.  But then you end up with a two-tier environment in which you have to deploy and orchestrate services across some mix of cloud native network functions in containers and VM-based VNFs, where orchestration is driving some mix of Kubernetes, OpenStack or VMware APIs and where Kubernetes needs to coexist with proprietary VNFMs for life-cycle management.  It doesn’t sound very pretty, and indeed it isn’t.

In our work with cloud-native VNFs, containers and Kubernetes, we’ve seen just how much easier it is to deploy and manage large scale applications using this approach compared with traditional hypervisor-based approaches.  The difference is huge.  We firmly believe that adopting this approach is the key to unlocking the massive potential of NFV to simplify operations and accelerate the pace of innovation in services.  But at the same time, we understand why some network operators would baulk at introducing further complexity into what is already a very complex NFV infrastructure.

That’s why we think the right approach is to level everything up to Kubernetes.  And there’s an emerging open source project that makes that possible: KubeVirt.

KubeVirt provides a way to take an existing Virtual Machine and run it inside a container.  From the point of view of the VM, it thinks it’s running on a hypervisor.  From the point of view of Kubernetes, it sees just another container workload.  So with KubeVirt, you can deploy and manage applications that comprise any arbitrary mix of native container workloads and VM workloads using Kubernetes.
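As a rough illustration of what that looks like in practice, the sketch below hands a VM workload to Kubernetes as a KubeVirt VirtualMachine custom resource using the Kubernetes Python client. The manifest is abridged, the disk image is a placeholder, and the API version should be checked against the KubeVirt release actually deployed.

    # Sketch: handing a legacy VM to Kubernetes through KubeVirt's VirtualMachine
    # custom resource. The manifest is abridged and the disk image is a placeholder;
    # check the API version of the KubeVirt release deployed in your cluster.
    from kubernetes import client, config

    config.load_kube_config()

    vm = {
        "apiVersion": "kubevirt.io/v1alpha3",
        "kind": "VirtualMachine",
        "metadata": {"name": "legacy-vnf"},
        "spec": {
            "running": True,
            "template": {
                "spec": {
                    "domain": {
                        "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                        "resources": {"requests": {"memory": "2Gi"}},
                    },
                    "volumes": [{
                        "name": "rootdisk",
                        # containerDisk wraps an existing VM image in a container image
                        "containerDisk": {"image": "registry.example.com/vnf/legacy-vnf-disk:1.0"},
                    }],
                }
            },
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kubevirt.io", version="v1alpha3", namespace="vnf",
        plural="virtualmachines", body=vm)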

In our view, KubeVirt could open the way to adopting Kubernetes as a “level playing field” and de facto standard environment across all types of cloud infrastructure, supporting highly automated deployment and management of true cloud native VNFs and legacy VM-based VNFs alike.  The underlying infrastructure can be OpenStack, VMware, bare metal – or any of the main public clouds including Azure, AWS or Google.  This grand unified vision of NFV seems to us to be truly compelling.  We think network operators should ratchet up the pressure on their vendors to deliver genuinely cloud native, container-based VNFs, and get serious about Kubernetes as an integral part of their NFV infrastructure.  Without any question, that is where the future lies.

Wednesday, August 14, 2019

Blueprint: Turn Your Data Center into an Elastic Bare-Metal Cloud

by Denise Shiffman, Chief Product Officer, DriveScale

What if you could create an automated, elastic, cloud-like experience in your own data center for a fraction of the cost of the public cloud? Today, high-performance, data-oriented and containerized applications are commonly deployed on bare metal, which keeps them on premises. But the hardware deployed is static, costing IT in overprovisioned, underutilized, siloed clusters.

Throughout the evolution of data center IT infrastructure, one thing has remained constant. Once deployed, compute, storage and networking systems remain fixed and inflexible. The move to virtual machines better utilized the resources on the host system they were tied to, but virtual machines didn’t make data center hardware more dynamic or adaptable.

In the era of advanced analytics, machine learning and cloud-native applications, IT needs to find ways to quickly adapt to new workloads and ever-growing data. This has many people talking about software-defined solutions. When software is pulled out of proprietary hardware, whether it’s compute, storage or networking hardware, then flexibility is increased, and costs are reduced. With next-generation, composable infrastructure, software-defined takes on new meaning. For the first time, IT can create and recreate logical hardware through software, making the hardware infrastructure fully programmable. And the benefits are enormous.

Composable Infrastructure can also support the move to more flexible and speedy deployments through DevOps with an automated and dynamic solution integrated with Kubernetes and containers. When deploying data-intensive, scale-out workloads, IT now has the opportunity to shift compute and storage infrastructures away from static, fixed resources. Modern database and application deployments require modern infrastructure driving the emergence of Composable Infrastructure – and it promises to address the exact problems that traditional data centers cannot. In fact, for the first time, using Composable Infrastructure, any data center can become an elastic bare-metal cloud. But what exactly is Composable Infrastructure and how do you implement it?

Elastic and Fully-Automated Infrastructure

Composable Infrastructure begins with disaggregating compute nodes from storage, essentially moving the drives to simple storage systems on a standard Ethernet network. Through a REST API, GUI or template, users choose the instances of compute and the instances of storage required by an application or workload, and the cluster of resources is created on the fly, ready for application deployment. Similar to the way a user chooses instances in the public cloud and the cloud provider stitches that solution together, composable provides the ability to flexibly create, adapt, deploy and redeploy compute and storage resources instantly using pools of heterogeneous, commodity compute, storage and network fabric.
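In practice, that composition step is an API call that binds compute instances to storage instances. The sketch below shows what such a call might look like against a hypothetical REST endpoint; the URL and payload are illustrative assumptions, not DriveScale’s actual API.

    # Hypothetical example of composing a cluster via a REST API; the endpoint and
    # payload shown here are illustrative, not DriveScale's actual interface.
    import requests

    COMPOSER = "https://composer.example.net/api/v1/clusters"   # hypothetical endpoint

    cluster_spec = {
        "name": "analytics-01",
        "compute": {"count": 8, "cpu_cores": 32, "memory_gb": 256},   # diskless servers
        "storage": {"capacity_tb": 96, "media": "nvme", "per_node": True},
        "network": {"fabric": "ethernet-25g"},
    }

    resp = requests.post(COMPOSER, json=cluster_spec,
                         auth=("admin", "secret"), timeout=60)
    resp.raise_for_status()
    print("cluster composed:", resp.json().get("id"))
    # Later, the same resources can be re-composed: scale storage up, return unused
    # compute to the pool, or replace a failed node entirely through the API.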

Composable gives you cloud agility and scale, and fundamentally different economics.
  • Eliminate Wasted Spend: With local storage inside the server, fixed configurations of compute and storage resources end up trapped inside the box and left unused. Composable Infrastructure enables the ability to independently scale processing and storage and make adjustments to deployments on the fly. Composable eliminates overprovisioning and stranded resources and enables the acquisition of lower cost hardware.
  • Low Cost, Automated Infrastructure: Providing automated infrastructure on premises, composable enables the flexibility and agility of cloud architectures, and creates independent lifecycles for compute and storage, lowering costs and eliminating the noisy neighbors problem found in the cloud.
  • Performance and Scale: With today’s high-speed standard Ethernet networks, Composable provides equivalent performance to local drives, while eliminating the need for specialized storage networks. Critical too, composable solutions can scale seamlessly to thousands of nodes while maintaining high performance and high availability.

The Local Storage Conundrum

Drive technology continues to advance with larger drives and with NVMe™ flash. Trapping these drives inside a server limits the ability to gain full utilization of these valuable resources. With machine learning and advanced analytics, storage needs to be shared with an ever-larger number of servers, and users need to be able to expand and contract capacity on demand. Composable NVMe puts NVMe on a fabric, whether that's a TCP, RDMA or iSCSI fabric (often referred to as NVMe over fabrics), and users gain significant advantages:

  • Elastic storage: By disaggregating compute and storage, NVMe drives or slices of drives can be attached to almost any number of servers. The amount of storage can be expanded or reduced on demand. And a single building block vendor SKU can be used across a wide variety of configurations and use cases eliminating operational complexity. 
  • Increased storage utilization:  Historically, flash utilization has been a significant concern. Composable NVMe over fabrics enables the ability to gain full utilization of the drives and the storage system. Resources from storage systems are allocated to servers in a simple and fully-automated way – and very high IOPS and low-latency comparable to local drives is maintained. 

The Elastic Bare Metal Cloud Data Center

Deploying containerized Kubernetes applications on bare metal with Composable Infrastructure enables optimized resource utilization and application, data and hardware availability. The combination of Kubernetes with programmable bare-metal resources turns any data center into a cloud.

Composable data centers eradicate static infrastructure and impose a model where hardware is redefined as a flexible, adaptable set of resources composed and re-composed at will as applications require – making infrastructure as code a reality. Hardware elasticity and cost-efficiencies can be achieved by using disaggregated, heterogeneous building blocks, requiring just a single diskless server SKU and a single eBOD (Ethernet-attached Box of Drives) SKU or JBOD (Just a Box of Drives) SKU to create an enormous array of logical server designs. Failed drives or compute nodes can be replaced through software, and compute and storage are scaled or upgraded independently. And with the ability to quickly and easily determine optimal resource requirements and adapt ratios of resources for deployed applications, composable data centers won’t leave resources stranded or underutilized.

Getting Started with Composable Infrastructure

Composable Infrastructure is built to meet the scale, performance and high availability demands of data-intensive and cloud-native applications while dramatically lowering the cost of deployment. Moving from static to fluid infrastructure may sound like a big jump, but composable doesn’t require a forklift upgrade. Composable Infrastructure can be easily added to a current cluster and used for the expansion of that cluster. It’s a seamless way to get started and to see cost-savings on day one.

Deploying applications in a composable data center will make it easier for IT to meet the needs of the business, while increasing speed to deployment and lowering infrastructure costs. Once you experience the power and control provided by Composable Infrastructure, you’ll wonder how you ever lived without it.

About DriveScale  
DriveScale instantly turns any data center into an elastic bare-metal cloud with on-demand instances of compute, GPU and storage, including native NVMe over Fabrics, to deliver the exact resources a workload needs, and to expand, reduce or replace resources on the fly. With DriveScale, high-performance bare-metal or Kubernetes clusters deploy in seconds for machine learning, advanced analytics and cloud-native applications at a fraction of the cost of the public cloud. www.drivescale.com

Tuesday, June 4, 2019

Blueprint column: The importance of Gi-LAN in 5G

by Takahiro Mitsuhata, Sr. Manager, Technical Marketing at A10 Networks 

Today's 4G networks support mobile broadband services (e.g., video conferencing, high-definition content streaming, etc.) across millions of smart devices, such as smartphones, laptops, tablets and IoT devices. The number of connected devices is on the rise, growing 15 percent or more year-over-year and projected to be 28.5 billion devices by 2022 according to Cisco's VNI forecast.

Adding networking nodes to scale out capacity is a relatively easy change. Meanwhile, it's essential for service providers to keep offering innovative value-added services to differentiate the service experience and monetize new services. These services include parental control, URL filtering, content protection and endpoint device protection from malware and ID theft, to name a few. Service providers, however, are now facing new challenges of operational complexity and extra network latency coming from those services. Such challenges will become even more significant when it comes to 5G, as this will drive even more rapid proliferation of mobile and IoT devices. It will be critical to minimize latency to ensure there are no interruptions to emerging mission-critical services that are expected to dramatically increase with 5G networks.

Gi-LAN Network Overview

In a mobile network, there are two segments between the radio network and the Internet: the evolved packet core (EPC) and the Gi/SGi-LAN. The EPC is a packet-based mobile core running both voice and data on 4G/ LTE networks. The Gi-LAN is the network where service providers typically deliver various homegrown and value-added services through a combination of IP-based service functions, such as firewall, carrier-grade NAT (CGNAT), deep packet inspection (DPI), policy control and traffic and content optimization. These services are generally provided by a wide variety of vendors. Service providers need to steer the traffic and direct it to specific service functions, which may be chained, only when necessary, in order to meet specific policy enforcement and service-level agreements for each subscriber.

The Gi-LAN network is an essential segment that enables enhanced security and value-added service offerings to differentiate and monetize services. Therefore, it's crucial to have an efficient Gi-LAN architecture to deliver a high-quality service experience.

 Figure: Gi-LAN with multiple service functions in the mobile network

Challenges in Gi-LAN Segment

In today's 4G/ LTE world, a typical mobile service provider has an ADC, a DPI, a CGNAT and a firewall device as part of the Gi-LAN service components. They are mainly deployed as independent network functions on dedicated physical devices from a wide range of vendors. This makes the Gi-LAN complex and inflexible from an operational and management perspective. Thus, this type of architecture, also known as a monolithic architecture, is reaching its limits and does not scale to meet the needs of rising data traffic in 4G and 4G+ architectures. This will continue to be an issue in 5G infrastructure deployments. The two most serious issues are:

1. Increased latency
2. Significantly higher total cost of ownership

Latency is becoming a significant concern since, even today, lower latency is required by online gaming and video streaming services. With the transition to 5G, ultra-reliable low-latency connectivity targets latencies of less than 1ms for use cases such as real-time interactive AR/ VR, the tactile Internet, industrial automation, mission- and life-critical services like remote surgery, self-driving cars and many more. An architecture with individual service functions on different hardware has a major impact on this promise of lower latency. Multiple service functions are usually chained, and every hop a data packet traverses between service functions adds latency, causing overall service degradation.

The overhead of managing each solution independently is also a burden. The network operator must invest in monitoring, management and deployment services for all devices from various vendors individually, resulting in large operational expenses.

Solution – Consolidating Service Functions in Gi-LAN

In order to overcome these issues, there are a few approaches you can take. A Service-Based Architecture (SBA) or microservices architecture addresses operational concerns, since leveraging such an architecture leads to higher flexibility, more automation and significant cost reduction. However, it is less likely to address the network latency concern, because each service function, whether VNF or microservice, still contributes to the overall latency as long as it is deployed as an individual VM or microservice.

So, what if multiple service functions are consolidated into one instance? For example, CGNAT and a Gi firewall are fundamental components in the mobile network, and some subscribers may choose to use additional services such as DPI or URL filtering. Such consolidation is feasible only if the product/ solution supports flexible traffic steering and service chaining capabilities along with those service functions.

Consolidating Gi-LAN service functions into one instance/ appliance helps to drastically reduce the extra latency and simplify network design and operation. Such concepts are not new but there aren't many vendors who can provide consolidated Gi-LAN service functions at scale.
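To make the idea concrete, here is a simplified, hypothetical sketch of subscriber-aware service chaining inside one consolidated instance: the baseline functions run for everyone, and each flow passes only through the optional functions its subscriber policy calls for, so an extra service is a function call rather than a traversal of a separate appliance. Function bodies and policy names are placeholders.

    # Simplified, hypothetical illustration of subscriber-aware service chaining
    # inside a single consolidated Gi-LAN instance. Function bodies are stubs.
    def cgnat(pkt):        return pkt   # always-on address translation (stub)
    def gi_firewall(pkt):  return pkt   # always-on stateful firewall (stub)
    def dpi(pkt):          return pkt   # optional deep packet inspection (stub)
    def url_filter(pkt):   return pkt   # optional parental control / URL filtering (stub)

    # Per-subscriber policies decide which optional functions are chained in.
    SUBSCRIBER_POLICIES = {
        "sub-1001": ["dpi", "url_filter"],   # e.g. parental-control plan
        "sub-1002": [],                      # basic plan: CGNAT + firewall only
    }

    OPTIONAL_FUNCTIONS = {"dpi": dpi, "url_filter": url_filter}

    def process(pkt, subscriber_id):
        """Steer a packet through the baseline chain plus the subscriber's options."""
        chain = [cgnat, gi_firewall] + [
            OPTIONAL_FUNCTIONS[name] for name in SUBSCRIBER_POLICIES.get(subscriber_id, [])
        ]
        for fn in chain:          # every extra service here is a function call,
            pkt = fn(pkt)         # not a hop to a separate appliance
        return pkt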

Therefore, when building an efficient Gi-LAN network, service providers need to consider a solution that can offer:
  • Multiple network and service functions on a single instance/ appliance
  • Flexible service chaining support
  • Subscriber awareness and DPI capability supported for granular traffic steering
  • Variety of form-factor options - physical (PNF) and virtual (VNF) appliances
  • High performance and capacity with scale-out capability
  • Easy integration and transition to SDN/NFV deployment

About the author

Takahiro Mitsuhata, Sr. Manager, Technical Marketing at A10 Networks

About A10

A10 Networks (NYSE: ATEN) provides Reliable Security Always™, with a range of high-performance application networking solutions that help organizations ensure that their data center applications and networks remain highly available, accelerated and secure. Founded in 2004, A10 Networks is based in San Jose, Calif., and serves customers globally with offices worldwide. For more information, visit: www.a10networks.com and @A10Networks

Wednesday, January 3, 2018

Vodafone: IoT Trends for 2018

by Ludovico Fassati, Head of IoT for Vodafone Americas

IoT will drive business transformation

Companies that have adopted IoT see the technology as mission critical to their business. These companies are leading the way when it comes to digital transformation initiatives. According to Vodafone’s 2017/18 IoT Barometer, 74% of companies that have adopted IoT agree that digital transformation is impossible without it. The businesses that implement IoT solutions in the next year will have a clear advantage over competitors when it comes to evolving their digital capabilities.

LP-WAN solutions will open up the IoT market

IoT adopters have great expectations for the future of the technology, and new connectivity options like Low-Power Wide Area Networks (LP-WAN) are making innovation possible. LP-WAN technologies, like Narrowband IoT (NB-IoT), allow for increased network coverage over a wide area at a low cost, making them an ideal solution for adding connectivity in hard-to-reach places. According to the analyst firm Analysys Mason, once there is greater awareness and understanding of LP-WAN, there will be a new wave of growth in this area. LP-WAN technologies will begin to open the IoT market to applications that have not previously benefitted from connectivity.

IoT will become central to enterprise IT functions

Today, most major enterprises have already integrated IoT into their core systems and initiatives to drive digital businesses. We will continue to see connectivity become part of the enterprise IT fabric – in fact, within five years, IoT will be core to millions of business processes. In the future, companies may even take for granted that devices and appliances like vehicles and HVAC systems can be controlled and monitored remotely, thanks to IoT connectivity.

Companies will be increasingly confident in IoT security solutions

As with any new technology, security remains a top concern when it comes to IoT. However, businesses with large IoT implementations are becoming more confident, given that they have the expertise and resources necessary to tackle security concerns. These organizations will begin to see these security measures as enablers that give them the confidence to push business forward. As the technology matures, trust in IoT-enabled applications and devices will only continue to grow.

Businesses will see unexpected benefits from IoT adoption

Companies that integrate IoT solutions will see a number of benefits from the technology. The benefits go way beyond just enabling better data collection and business insights. IoT will be seen as a driver of improvements across businesses – organizations are already using IoT to reduce risk, cut costs, create new revenue streams, improve employee productivity, enhance customer experience and more. Businesses are likely to see even more benefits as they implement the technology across operations.

Tuesday, June 20, 2017

The Evolution of VNFs within the SD-WAN Ecosystem

As the WAN quickly solidifies its role as the performance bottleneck for cloud services of all kinds, the SD-WAN market will continue to grow and evolve. This evolution will happen in lock step with the move to software-defined everything in data centers for both the enterprise and the service provider, with a focus on Virtual Network Functions (VNFs) and how they could be used to create specialized services based on custom WANs on demand. Although SD-WANs provide multiple benefits in terms of cost, ease-of-management, improved security, and improved telemetry, application performance and reliability remain paramount as the primary goals for the vast majority of SD-WAN deployments. When this is taken into consideration, the role of VNFs in extending and improving application performance becomes clear. Just as importantly, growing use of VNFs within SD-WANs extends an organization’s software-defined architecture throughout the broader network and sets the stage for the insertion of even more intelligence down the road.

What exactly do we mean by the term VNF? 

Before we get started, let’s define what we mean by VNF, since, similar to SD-WAN, this term can be used to describe multiple things. For some, VNFs are primarily a means of replicating legacy capabilities such as firewall, DHCP and DNS on a local appliance (physical or virtual) by means of software-defined architectures. However, restricting one’s scope to legacy services alone limits the potential high-value benefits that can be realized from a software-defined approach for more advanced features. Our definition of a VNF is therefore a superset of the localized VNF and is really about the creation of software-defined functions with more advanced capabilities, such as application-aware VPNs, flow-based load balancing and self-healing overlay tunnels. What’s more, many advanced SD-WAN vendors provide their customers with the ability to customize these VNF applications to apply exclusively to their own WAN and/or their specific network requirements to enable unique WAN services.

What do we need VNFs for? 

SD-WAN’s enormous growth this year, as well as its predicted continued growth in the years to come, follows in the footsteps of the paradigm shift data centers are currently undergoing. That is, from a manually configured set of servers and storage appliances to a software-defined architecture, where the servers and storage appliances (virtual or physical) can be managed and operated via a software-defined architecture. This means fewer manual errors, lower cost and a more efficient way to operate the data center.

As an industry, as we implement some of these data-center approaches in the WAN (Wide Area Network), one must note that there is a big difference between datacenter networks and WAN networks. Namely, datacenter LANs (Local Area Networks) have ample capacity and bandwidth and, unless they are misconfigured, are never the bottleneck for performance. However, with WANs, whether managed in-house by the enterprise or delivered as a service by a telecom or other MSP, the branch offices are connected to the Internet through WAN connections (MPLS, DSL, Cable, Fiber, T1, 3G/4G/LTE, etc.). As a result, the choke point for performance is almost always the WAN. This is why SD-WANs became so popular so quickly: they provide immediate relief for this issue.

However, as WANs continue to grow in complexity, with enterprises operating multiple clouds and/or cloud models simultaneously, there is a growing need to add automation and programmability into the software-defined WAN in order to ensure performance and reliability. Therefore VNFs that can address this WAN performance bottleneck have the opportunity to transform how enterprises connect to their private, public and hybrid clouds. VNFs that extend beyond a single location, but can cover WAN networks, will have the ability to add programmability to the WAN. In a way, the “software defined” nature of the data center will be stretched out all the way to the branch office, including the WAN connectivity between them.

Defining SD-WAN VNFs

So what does a VNF that is programmable and addresses the WAN bottlenecks look like? These VNFs are overlay tunnels that can perform certain flow logic and therefore can work around network problems on a packet-by-packet basis per flow. These VNFs are smart enough to have problem diagnosis, problem alerting and, most importantly, problem resolution all baked into the VNF. In other words, unlike the days before SD-WAN, when an IT manager would face an urgent support ticket whenever a network problem occurred, with VNF-based SD-WANs the networks are becoming smart enough to solve the problem proactively, in most cases before it even affects the applications, services and the user experience.

This increase in specific VNFs for the SD-WAN will start with the most immediate need, which is often latency and jitter sensitive applications such as voice, video, UC and other chatty applications. Even now, VNFs are being used to solve these issues. For example, a CIO can have a VNF that dynamically and automatically steers VOIP/SIP traffic around network problems caused by high latency, jitter and packet loss, and in parallel have another VNF to support cross-traffic and latency optimization for “chatty” applications.

In another example, a VNF can be built in minutes designed to steer non-real-time traffic away from a costly WAN link and apply header compression for real-time traffic only in situations where packet loss or latency crosses a specific threshold during certain times of the day, all the while updating syslog with telemetry data. With this level of flexibility and advanced capabilities, VNFs are poised to become the go-to solutions for issues related to the WAN.
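A hypothetical sketch of that kind of flow logic is shown below: measure each WAN link, then pick a path per flow class based on latency, jitter and loss thresholds. The thresholds, link names and traffic classes are illustrative only, not any vendor’s actual policy.

    # Hypothetical sketch of per-flow steering logic; thresholds, link names and
    # traffic classes are illustrative only.
    REALTIME_LIMITS = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0}

    def link_is_healthy(stats, limits=REALTIME_LIMITS):
        return (stats["latency_ms"] <= limits["latency_ms"]
                and stats["jitter_ms"] <= limits["jitter_ms"]
                and stats["loss_pct"] <= limits["loss_pct"])

    def choose_link(flow_class, link_stats):
        """Pick a WAN link for a flow based on live measurements and flow class."""
        healthy = [name for name, stats in link_stats.items() if link_is_healthy(stats)]
        if flow_class == "realtime":   # VOIP/SIP, video: lowest-latency healthy link
            candidates = healthy or list(link_stats)
            return min(candidates, key=lambda n: link_stats[n]["latency_ms"])
        # bulk / non-real-time traffic: prefer the cheaper link when it is usable
        cheap = [n for n in link_stats if link_stats[n]["cost"] == "low"]
        usable_cheap = [n for n in cheap if link_stats[n]["loss_pct"] < 5.0]
        return (usable_cheap or healthy or list(link_stats))[0]

    if __name__ == "__main__":
        stats = {
            "mpls":  {"latency_ms": 40, "jitter_ms": 5,  "loss_pct": 0.1, "cost": "high"},
            "cable": {"latency_ms": 90, "jitter_ms": 25, "loss_pct": 0.8, "cost": "low"},
        }
        print(choose_link("realtime", stats))   # -> mpls
        print(choose_link("bulk", stats))       # -> cable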

A VNF load balancer is another such overlay that has the ability to load balance the traffic over the WAN links. Since the VNF load balancer is in essence a software code that can be deployed onto an SD-WAN appliance, it has the power of taking advantage of various types of intelligence and adaptability to optimize the WAN performance. VNF load balancers should also work with standard routing so that you can inject it in your network, say between the WAN modems and your firewall/router seamlessly.

Clearly, VNFs are part and parcel of SD-WAN’s next wave of evolution, bringing intelligence and agility to the enterprise WAN. As 2017 ramps up, we’ll see more and more innovation on this front, fully extending software-defined architecture from the data center throughout the network.

About the author

Dr. Cahit Jay Akin is the CEO and co-founder of Mushroom Networks, a long-time supplier of SD-WAN infrastructure for enterprises and service providers. Prior to Mushroom Networks, Dr. Akin spent many years as a successful venture capitalist. Dr. Akin received his Ph.D. and M.S.E. degree in Electrical Engineering and M.S. in Mathematics from the University of Michigan at Ann Arbor. He holds a B.S. degree in Electrical Engineering from Bilkent University, Turkey. Dr. Akin has worked on technical and research aspects of communications for over 15 years including authoring several patents and many publications. Dr. Akin was a nominee for the Most Admired CEO award by San Diego Business Journal. 

Sunday, April 30, 2017

Blueprint: Five Considerations for a Successful Cloud Migration in 2017

by Jay Smith, CTO, Webscale Networks

Forrester Research predicts that 2017 will see a dramatic increase in application migration to the cloud.  With more than 90 percent of businesses using the cloud in some form, the question facing IT leaders and web application owners is not when to move to the cloud but how to do it.

The complexities of application integration and migration to the cloud are ever-changing. Migration has its pitfalls: the risk of becoming non-compliant with regulations or industry standards, security breaches, loss of control over applications and infrastructure, and issues with application portability, availability and reliability. Sure, there are additional complexities to be considered, but some are simply obstacles to overcome, while others are outright deal-breakers: factors that cause organizations to halt plans to move apps to the cloud, or even to bring cloud-based apps back on premise.

As I see it, these are the deal-breakers in the minds of the decision maker:

Regulatory and Compliance

Many industries, including healthcare and finance, require compliance with multiple regulations or standards. Additionally, due to today’s global economy, companies need to understand the laws and regulations of their respective industries as well as of the countries in which their customers reside. With a migration, first and foremost, you need to know if the type of cloud you are migrating to supports the compliance and regulations your company requires. Because a cloud migration does not automatically make applications compliant, a knowledgeable cloud service provider can ensure that you maintain compliance, and do so at the lowest possible cost. In parallel, your cloud service provider needs to consider the security policies required to ensure compliance.

Data Security

To date, data security is still the biggest barrier preventing companies from realizing the benefits of the cloud. According to the Interop ITX 2017 State of the Cloud Report, more than half of respondents (51 percent) cited security as the biggest challenge in moving to the cloud. Although security risks are real, they are manageable. During a migration, you need to first ensure the secure transport of your data to and from the cloud. Once your data is in the cloud, you need to know your provider’s SLAs regarding data breaches, but also how the provider will remediate or contain any breaches that do occur. A comprehensive security plan, coupled with the provider’s ability to create purpose-built security solutions, can instill confidence that the provider is up to the task.

Loss of Control

When moving apps to the cloud, many companies assume that they will lose control of app performance and availability. This becomes an acute concern for companies that need to store production data in the cloud. However, from concern, solutions are born, and the solution is as much in the company’s hands as in the provider’s. Make sure that performance and availability are addressed front and center in the provider’s SLA.  That’s how you maintain control.  

Application Portability

With application portability, two issues need to be considered: first, IT organizations often view the hybrid cloud (for example, using a combination of public and private clouds) as their architecture of choice – and that choice invites concerns about moving between clouds. Clouds differ in their architecture, OS support, security, and other factors. Second, IT organizations want choice and do not want to be locked into a single cloud or cloud vendor, but the process of porting apps to a new cloud is complex and not for the faint of heart. If the perception of complexity is too great, IT will opt for keeping their applications on premise.

App Availability and Infrastructure Reliability

Availability and reliability can become issues if a cloud migration is not successful. To ensure its success, first, be sure the applications you are migrating are architected with the cloud in mind or can be adapted to cloud principles. Second, to ensure app availability and infrastructure reliability after the migration, consider any potential issues that may cause downtime, including server performance, network design and configurations. Business continuity after a cloud migration is ensured through proper planning.

The great migration is here, and to ensure your company’s success in moving to the cloud, it is important to find a partner that has the technology, people, processes and security capabilities in place to handle any challenges. Your partner must be experienced in architecture and deployment across private, public and hybrid clouds. A successful migration will help you achieve cost savings and peace of mind while leveraging the benefits and innovation of the cloud.

About the Author

Jay Smith founded Webscale in 2012 and currently serves as the Chief Technology Officer of the Company. Jay received his Ph.D. in Electrical and Computer Engineering from Colorado State University in 2008. Jay has co-authored over 30 peer-reviewed articles in parallel and distributed computing systems.

In addition to his academic publications, while at IBM, Jay received over 20 patents and numerous corporate awards for the quality of those patents. Jay left IBM as a Master Inventor in 2008 to focus on High Performance Computing at DigitalGlobe. There, Jay pioneered the application of GPGPU processing within DigitalGlobe.

Monday, January 9, 2017

Forecast for 2017? Cloudy

by Lori MacVittie, Technology Evangelist, F5 Networks

In 2016, IT professionals saw major shifts in the cloud computing industry, from developing more sophisticated approaches to application delivery to discovering the vulnerabilities of connected IoT devices. Enterprises continue to face increasing and entirely new security threats and availability challenges as they migrate to private, public and multi-cloud systems, which is causing organizations to rethink their infrastructures. As we inch toward the end of the year, F5 Networks predicts the key changes we can expect to see in the cloud computing landscape in 2017.

IT’s  MVP of 2017? Cloud architects 

With more enterprises adopting diverse cloud solutions, the role of cloud architects will become increasingly important. The IT professionals that will hold the most valuable positions in an IT organization are those with skills to define criteria for and manage complex cloud architectures.

Multi-cloud is the new normal in 2017

Over the next year, enterprises will continue to seek ways to avoid public cloud lock-in, relying on multi-cloud strategies to do so. They will aim to regain leverage over cloud providers, moving toward a model where they can pick and choose various services from multiple providers that are most optimal to their business needs.

Organizations will finally realize the full potential of the cloud

Companies are now understanding they can use the cloud for more than just finding efficiency and cost savings as part of their existing strategies and ways of doing business. 2017 will provide a tipping point for companies to invest in the cloud to enable entirely new scenarios, spurred by things like big data and machine learning that will transform how they do business in the future.

The increasing sophistication of cyber attacks will put more emphasis on private cloud

While enterprises trust public cloud providers to host many of their apps, the lack of visibility into the data generated by those apps causes concerns about security. This means more enterprises will look to private cloud solutions. Public cloud deployments won’t be able to truly accelerate until companies feel comfortable enough with consistency of security policy and identity management.

More devices – More problems: In 2017,  public cloud will become too expensive for IoT 

Businesses typically think of public cloud as the cheaper business solution for their data center needs, yet they often forget that things like bandwidth and security services come at an extra cost. IoT devices generate vast amounts of data and as sensors are installed into more and more places, this data will continue to grow exponentially. This year, enterprises will put more IoT applications in their private clouds, that is, until public cloud providers develop economical solutions to manage the huge amounts of data these apps produce.

The conversation around apps will finally go beyond the “where?”

IT professionals constantly underestimate the cost, time and pain of stretching solutions up or down the stack. We’ve seen this with OpenStack, and we’ll see it with Docker. This year, cloud migration and containers will reach a point where customers won’t be able to just think about where they want to move apps; they’ll need to think about the identity tools needed for secure authentication and authorization, how to protect against and prevent data loss from microservices and SaaS apps, and how to collect and analyze data across all infrastructure services quickly.

A new standard for cloud providers is in motion and this year will see major developments in not only reconsidering the value of enterprise cloud, but also modifying cloud strategy to fully extend enterprise offerings and data security. Evaluating the risks of cloud migration and management has never been as vital to a company’s stability as it is now. Over the course of the year, IT leaders who embrace and adapt to these industry shifts will be the ones to reap the benefits of a secure, cost-effective and reliable cloud.

About the Author

Lori MacVittie is Technology Evangelist at F5 Networks.  She is a subject matter expert on emerging technology responsible for outbound evangelism across F5’s entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations, in addition to network and systems administration expertise. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine where she evaluated and tested application-focused technologies including app security and encryption-related solutions. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University, and is an O’Reilly author.

MacVittie is a member of the Board of Regents for the DevOps Institute, and an Advisory Board Member for CloudNOW.

Friday, January 6, 2017

Wi-Fi Trends Take Center Stage in 2017

by Shane Buckley, CEO, Xirrus 

From an unprecedented DNS outage that temporarily paralyzed the entire internet, to the evolution of federated identity for simple, secure access to Wi-Fi and applications, 2016 had its mix of growing pains and innovative steps forward.

Here’s why 2017 will shape up into an interesting year for Wi-Fi technology.

IoT will create continued security issues on global networks

In 2017, the growth of IoT will put enormous pressure on Wi-Fi networks. While vendors must address the complexity of onboarding these devices onto their networks, security can’t get left behind. The proliferation of IoT devices will propel high density into almost all locations – from coffee shops to living rooms – prompting more performance and security concerns. Whether Wi-Fi connected alarms or smart refrigerators, the security of our homes will be scrutinized and will become a key concern in 2017. Mass production of IoT devices will make them more susceptible to hacking, as they will not be equipped with the proper built-in security.

The recent IoT-based attack on DNS provider Dyn opened the floodgates, as estimates show the IoT market reaching 10 billion devices by 2020. The event foreshadows the power hackers hold when invading these IoT systems. Taking down a significant portion of the internet grows more detrimental, yet all too plausible these days. Because of increased security concerns, vendors will equip devices with the ability to only connect to the IoT server over pre-designed ports and protocols. If IoT vendors don’t start putting security at the forefront of product development, we can only expect more large-scale cyberattacks in 2017.

LTE networks won’t impact Wi-Fi usage

Don’t expect LTE networks to replace Wi-Fi. The cost of deploying LTE networks is ten times greater and LTE is less adaptable for indoor environments than Wi-Fi. Wi-Fi will remain the lowest cost technology available with similar or superior performance to LTE when deployed properly and therefore will not be replaced by LTE. When people have access to Wi-Fi, they’ll connect. Data plan limitations remain too common.

Additionally, the FCC and other international government agencies began licensing the 5GHz spectrum to offer free and uncharted access to Wi-Fi. But, we don’t want carriers grabbing free spectrum and charging us for every byte we send, now do we?

LTE and Wi-Fi will co-exist as they do today, with LTE working well outdoors and Wi-Fi designed to work consistently throughout indoor spaces.

The push toward federated identity will continue in 2017

Today, there remains a disparate number of Wi-Fi networks, all with different authentication requirements. This marks an opportunity for Wi-Fi vendors. In the coming year, we will see federated identity become a primary differentiator. By implementing federated identity, vendors simplify and secure the login process. Consumers can auto-connect to any public Wi-Fi network with their existing credentials – whether Google, Microsoft or Facebook – thus providing them with a seamless onboarding experience. It’s the next step for Single Sign-On (SSO), and one that will set Wi-Fi vendors apart in 2017.

This coming year, the repercussions of IoT, coexistence of LTE and Wi-Fi, and demand for simple, secure access to Wi-Fi, will take center stage. The onus falls on company leaders, who must adapt their business strategies so they can keep pace with the fast and ever-changing Wi-Fi landscape. 2017 will have plenty in store.

About the Author

Shane Buckley is CEO of Xirrus. Most recently, Mr. Buckley was the General Manager and Senior Vice President at NETGEAR, where he led the growth of NETGEAR’s commercial business unit to 50 percent revenue growth over 2 years, reaching $330 million in 2011 – and played a prime role in growing corporate revenues over 30 percent. Prior to that, Mr. Buckley was President & CEO of Rohati Systems, a leader in Cloud-based access management solutions, and Chief Operating Officer of Nevis Networks, a leader in secure switching and access control. He has also held the positions of Vice President WW Enterprise at Juniper Networks, President International at Peribit Networks, a leader in WAN Optimization, and EMEA vice president at 3Com Corp. Mr. Buckley is a graduate in engineering from the Cork Institute of Technology in Ireland.

Sunday, December 18, 2016

Perspectives 2017: Financial Agility for Digital Transformation

by Andrew Blacklock, Senior Director, Strategy and Products, Cisco Capital

Ten years ago, companies like Uber and Airbnb were ideas waiting for technology to catch up. Now, these two brands represent a shift in the global economy in what’s known as digital transformation. This evolution towards digital-everything is constantly accelerating, leaving non-digital companies scrambling for a means to kickstart their digitization projects.

According to Gartner, there are 125,000 enterprises in the U.S. alone that are currently launching digital transformation projects. These companies are of all sizes, from nimble startups to global conglomerates. Despite the strong drive to a digital future, 40% of businesses will be unsuccessful in their digital transformation, according to Cisco’s Digital Vortex study.

Many attribute the difficulties associated with the digital transition to the significant costs of restructuring an organization’s technological backbone. Because of these challenges, many companies opt for an agile approach to financial restructuring.

Financial agility allows companies to evolve and meet the rapidly changing demands of digital business through liquid, scalable options that won’t break the bank. While it is not always possible to predict changes in the business environment, agile financing allows companies to acquire the proper technology and tools necessary to plan, work and expand their businesses.

Financial agility isn’t just another buzzword – it’s a characteristic that organizations of all sizes in all industries need to champion in order to drive efficiencies and competitive advantages. It’s a way for companies to acquire the technologies needed to shift their business without having to “go all in.” This allows companies to avoid large up-front capital investment, smooth cash flow by spreading costs over time and preserve existing sources of capital to allocate to other areas of the transformation, as the simple example below illustrates.
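As a back-of-the-envelope illustration of the cash-flow point – with made-up numbers, not Cisco Capital terms – the short sketch below compares paying for a technology refresh up front with spreading the same cost over a financing term using the standard level-payment formula.

    def level_payment(principal: float, annual_rate: float, months: int) -> float:
        """Level monthly payment for a given principal, annual rate and term."""
        r = annual_rate / 12.0
        return principal * r / (1 - (1 + r) ** -months)

    upfront_cost = 500_000.0   # hypothetical cost of a refresh project
    monthly = level_payment(upfront_cost, annual_rate=0.05, months=36)
    print(f"Pay ${upfront_cost:,.0f} today, or roughly ${monthly:,.0f} per month for 3 years")

Spreading the outlay this way keeps capital free for other parts of the transformation, which is the core of the financial-agility argument.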

Organizations now need to decide how they can best adjust to the transformation and transition for the next stage of digital business. With financial options that enable organizations to acquire technology and scale quickly, companies can pivot with agility to meet the constantly-evolving demands of our digital age.

Looking at the bigger picture, financial agility is a crucial piece of an organization’s overall digital transformation puzzle. While the digital landscape might be constantly changing, flexible financing helps set an organization up for a successful transformation to the future of digital business.

About the Author

Andrew Blacklock is Senior Director, Strategy and Financial Product Development at Cisco Capital. As director of strategy & business operations, Andrew is responsible for strategy, program management and business operations. He has been with Cisco Capital for 17 years with more than 20 years of experience in captive financing. He is a graduate of Michigan State University and the Thunderbird School of Global Management.

Wednesday, December 14, 2016

Ten Cybersecurity Predictions for 2017

by Dr. Chase Cunningham, ECSA, LPT 
Director of Cyber Operations, A10 Networks 

The cyber landscape changes dramatically year after year. If you blink, you may miss something; whether that’s a noteworthy hack, a new attack vector or new solutions to protect your business. Sound cyber security means trying to stay one step ahead of threat actors. Before the end of 2016 comes around, I wanted to grab my crystal ball and take my best guess at what will be the big story lines in cyber security in 2017.

1. IoT continues to pose a major threat. In late 2016, all eyes were on IoT-borne attacks. Threat actors were using Internet of Things devices to build botnets to launch massive distributed denial of service (DDoS) attacks. In two instances, these botnets were assembled from unsecured “smart” cameras. As IoT devices proliferate, and everything has a Web connection — refrigerators, medical devices, cameras, cars, tires, you name it — this problem will continue to grow unless proper precautions like two-factor authentication, strong password protection and others are taken.

Device manufacturers must also change behavior. They must scrap default passwords and either assign unique credentials to each device or apply modern password configuration techniques for the end user during setup, along the lines of the provisioning sketch shown after this list.

2. DDoS attacks get even bigger. We recently saw some of the largest DDoS attacks on record, in some instances topping 1 Tbps. That’s absolutely massive, and it shows no sign of slowing. Through 2015, the largest attacks on record were in the 65 Gbps range. Going into 2017, we can expect to see DDoS attacks grow in size, further fueling the need for solutions tailored to protect against and mitigate these colossal attacks.

3. Predictive analytics gains ground. Math, machine learning and artificial intelligence will be baked more deeply into security solutions. Security solutions will learn from the past and essentially predict attack vectors and behavior based on that historical data. This means security solutions will be able to more accurately and intelligently identify and predict attacks by marrying event data to real-world attack patterns. A toy illustration of the idea appears after this list.

4. Attack attempts on industrial control systems. Similar to the IoT attacks, it’s only a matter of time until we see major industrial control system (ICS) attacks. Attacks on ecommerce stores, social media platforms and others have become so commonplace that we’ve almost grown numb to them. Bad guys will move on to bigger targets: dams, water treatment facilities and other critical systems to gain recognition.

5. Upstream providers become targets. The DDoS attack launched against DNS provider Dyn, which knocked out many major sites that use Dyn for DNS services, made headlines because it highlighted what can happen when threat actors target a service provider rather than just the end customers. These types of attacks on upstream providers cause a ripple effect that interrupts service not only for the provider, but for all of its customers and users. The attack on Dyn set a dangerous precedent and will likely be emulated several times over in the coming year.

6. Physical security grows in importance. Cyber security is just one part of the puzzle. Strong physical security is also necessary. In 2017, companies will take notice and implement stronger physical security measures and policies to protect against internal threats, theft, and unwanted devices coming in and infecting systems.

7. Automobiles become a target. With autonomous vehicles on the way and the massive success of sophisticated electric cars like Teslas, the automobile industry will become a much more attractive target for attackers. Taking control of an automobile isn’t fantasy, and it could be a real threat next year.

8. Point solutions no longer do the job. The days of Frankensteining together a set of security solutions are over. Instead of buying a single solution for each issue, businesses must trust security solutions from best-of-breed vendors and partnerships that answer a number of security needs. Why have 12 solutions when you can have three? In 2017, your security footprint will get smaller, but it will be much more powerful.

9. The threat of ransomware grows. Ransomware was one of the fastest-growing online threats in 2016, and it will become more serious and more frequent in 2017. We’ve seen businesses and individuals pay thousands of dollars to free their data from the grip of threat actors. The growth of ransomware means we must be more diligent to protect against it by not clicking on anything suspicious. Remember: if it sounds too good to be true, it probably is.

10. Security teams are 24/7. The days of security teams working 9-to-5 are long gone. Now is the dawn of the 24/7 security team. As more security solutions become services-based, consumers and businesses will demand the security teams and their vendors be available around the clock. While monitoring tools do some of the work, threats don’t stop just because it’s midnight, and security teams need to be ready to do battle all day, every day.
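Returning to prediction 1: for illustration only – the field names and workflow below are hypothetical, not any manufacturer’s actual process – here is a minimal sketch of provisioning a unique, randomly generated credential for each unit on the production line, so that only a salted hash ships in firmware instead of a shared default password.

    import secrets
    import hashlib

    def provision_device(serial_number: str) -> dict:
        """Generate a unique credential for one device and return what gets stored or printed."""
        password = secrets.token_urlsafe(18)    # unique random credential per unit
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return {
            "serial": serial_number,
            "password_for_label": password,     # printed on the device label, never reused
            "stored_hash": digest.hex(),        # only the salted hash ships in firmware
            "salt": salt.hex(),
        }

    print(provision_device("SN-000123"))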
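And to make prediction 3 concrete, here is a toy sketch – not any vendor’s product – that learns a baseline from synthetic historical event features and flags new traffic that deviates from it, using scikit-learn’s IsolationForest.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Synthetic historical data: [requests per minute, bytes per request]
    baseline = rng.normal(loc=[50, 800], scale=[10, 150], size=(1000, 2))

    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    new_events = np.array([
        [52, 790],     # looks like normal traffic
        [4000, 60],    # flood-like burst of tiny requests
    ])
    print(model.predict(new_events))   # 1 = normal, -1 = flagged as anomalous

Real products combine many more signals, but the principle is the same: model historical behavior, then score live events against it.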

About the Author

Dr. Chase Cunningham (CPO USN Ret.) is A10 Networks' Director of Cyber Operations. He is an industry authority on advanced threat intelligence and cyberattack tactics. Cunningham is a former US Navy chief cryptologic technician who supported US Special Forces and Navy SEALs during three tours of Iraq. During this time, he also supported the NSA and acted as lead computer network exploitation expert for the US Joint Cryptologic Analysis Course. Prior to joining A10 Networks, Cunningham was the director of cyber threat research and innovation at Armor, a provider of cloud-based cyber defense solutions.


Tuesday, July 12, 2016

Blueprint: An Out-of-This-World Shift in Data Storage

by Scott Sobhani, CEO and co-founder, Cloud Constellation’s SpaceBelt

In light of ongoing, massive data breaches across all sectors and the consequent responsibility laid at executives’ and board members’ feet, the safe storing and transporting of sensitive data has become a critical priority. Cloud storage is a relatively new option, and both businesses and government entities have been flocking to it. Synergy Research Group reports that the worldwide cloud computing market grew 28 percent to $110B in revenues in 2015. In a similar vein, Technology Business Research projects that global public cloud revenue will increase from $80B in 2015 to $167B in 2020.

By moving to the Cloud, organizations use shared hosting facilities, which carries the risk of exposing critical data to surreptitious actors – not to mention the challenges associated with jurisdictional hazards. Organizations of all sizes are subject to leaky Internet and leased lines. As the world shifts away from legacy systems to more agile software solutions, it is becoming clear that the time is now for a paradigm shift in how we store, access and archive sensitive data.

The Need for a New Storage Model

Enterprises and government agencies need a better way to securely store and transport their sensitive data. What if there was a way to bypass the Internet and leased lines entirely to mitigate exposure and secure sensitive data from hijacking, theft and espionage, while reducing costs both from an infrastructure and risk perspective?

Though it may sound like science fiction to some, such an option is possible, and it’s become necessary for two main reasons:

  • Threatening Clouds – Cloud environments currently run on hybrid public and private networks using IT controls that are not protective enough to stay ahead of real-time cyber security threats. Enterprise data is maliciously targeted, searchable or stolen. Sensitive data can be subjected to government agency monitoring and exposed to acts of industrial espionage through unauthorized access to enterprise computers, passwords and cloud storage on public and private networks.
  • Questions of Jurisdiction – Due to government regulations, critical information could be restricted or exposed, especially when it has regularly been replicated or backed up to an undesirable jurisdiction at a cloud service provider’s data center. Diplomatic privacy rules are under review by governments intent on restricting cross-jurisdictional access and transfer of the personal and corporate data belonging to their citizens. This has created the requirement for enterprises to operate separate data centers in each jurisdiction – financially prohibitive for many medium-sized enterprises.

Storage Among the Stars

What government and private organizations need is an independent cloud infrastructure platform, entirely isolating and protecting sensitive data from the outside world. A neutral, space-based cloud storage network could provide this. Enterprise data can be stored and distributed to a private data vault designed to enable secure cloud storage networking without any exposure to the Internet and/or leased lines. Resistant to natural disasters and force majeure events, its architecture would provide a truly revolutionary way of reliably and redundantly storing data, liberating organizations from risk of cyberattack, hijacking, theft, espionage, sabotage and jurisdictional exposures.

A storage solution of this type might at first seem prohibitively expensive, but it would cost the same or less to build, operate and maintain as terrestrial networks. Further, it would serve as a key market differentiator for cloud service providers looking for solutions that physically protect their customers’ critical information, because such a system would include its own telecom backbone infrastructure and so be entirely secure. While that is extremely expensive to accomplish on the ground, it need not be the case if properly architected as a space-based storage platform.

Sooner than many might think, governments and enterprises will begin to use satellites for the centralized storage and distribution of sensitive or classified material, the storage and protection of video and audio feeds from authorized personnel in remote locations, or the distribution of video and audio gathered by drones.

Escaping Earth’s Orbit

Cyber criminals don’t seem to be slowing their assault on the network, which means data breaches of Earth-based storage solutions will continue. Organizations need to think outside the Cloud in order to keep their critical data secure, both while being stored and in transit. The technology exists today to make satellite storage a reality, and for those who are working hard to stay ahead of malicious actors, it can’t arrive soon enough.

About the author

Scott Sobhani, CEO and cofounder of Cloud Constellation Corporation and the SpaceBelt Information Ultra-Highway, is an experienced telecom executive with over 25 years in executive management positions, most recently as VP for business development and commercial affairs at International Telecom Advisory Group (ITAG). Previous positions include CEO of TalkBox, VP & GM at Lockheed Martin, and VP, GM & senior economist at Hughes Electronics Corporation.

Mr. Sobhani was responsible for closing over $2.3 billion in competitive new business orders for satellite spacecraft systems, mobile network equipment and rocket launch vehicles. He co-authored “Sky Cloud Autonomous Electronic Data Storage and Information Delivery Network System”, “Space-Based Electronic Data Storage and Network System” and “Intermediary Satellite Network for Cross-Strapping and Local Network Decongestion” (each of which is patent pending). He holds an MBA from the University of Southern California and a bachelor’s degree from the University of California, Los Angeles.


Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.



Thursday, June 30, 2016

Blueprint: LSO Hackathons Bring Open Standards, Open Source

Open standards and open source projects are both essential ingredients for advancing the cause of interoperable next-generation carrier networks.

When a standards developing organization (SDO), like MEF, creates standards, those written documents themselves aren’t the end goal. Sure, the specifications look good on paper, but it takes a lot of work to turn those words and diagrams into hardware, software and services. And if there are any ambiguities in those specifications, or misinterpretations by vendors building out their products and services, interoperability could be problematic at best.

By contrast, when an open-source project is formed, the team’s job is obvious: to create software and solutions. All too often, though, the members of the project are focused on reaching a particular objective. Working in a vacuum, they might write code that works great but can’t be abstracted to solve a more general problem. In those cases, interoperability may also be a huge issue.

The answer is clear: bring together SDOs and open-source teams to write open-source code that’s aligned with open specifications. That’s what is happening at the LSO (Lifecycle Service Orchestration) Hackathons hosted by MEF: open source teams come together to work on evolving specifications, and the end result is not only solid code but also effective feedback to MEF about its specs and architecture. Another benefit: networking experts from across the communications industry work together with software developers from the IT world face-to-face, fostering mutual understanding of the constraints of their peers in ways that lead to more effective interaction in their day jobs.


MEF recently completed its Euro16 LSO Hackathon, held in Rome, Italy, on April 27-29, 2016. This followed the debut LSO Hackathon at MEF’s GEN15 conference in Dallas in November 2015. (See “The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code,” published in Telecom Ramblings.)

“The Euro16 LSO Hackathon built on what we started in the first Hackathon at GEN15,” said Daniel Bar-Lev, Director of Certification and Strategic Programs at MEF and one of the architects of the LSO Hackathon series.

One big change: not everything had to be physically present in Rome, which expanded both the technology platform and the pool of participants. “We enabled work to be done remotely,” said Bar-Lev. “While most of our participants were in Rome, we had people engaged from all over the United States. We also didn’t need to bring the networking equipment to Rome. Most of it remained installed and configured in the San Francisco Bay area. Instead of shipping racks of equipment, we set up remote access and were able to position the hardware and software in the optimal places to get development done.”

Lifecycle Service Orchestration and the Third Network Vision

Why “Lifecycle Service Orchestration” for the MEF-hosted LSO Hackathons? Bar-Lev explained that it ties into MEF’s broad vision for Third Network services that combine the ubiquity and flexibility of the public Internet with the quality and assurance of private connectivity services such as CE 2.0.

“When we think of traditional CE 2.0 services, we tend to think of them as ‘static’ — often taking weeks or months to provision or change a service,” said Bar-Lev. “With the Third Network vision, we are driving specifications for services like CE 2.0 that can be created and modified in minutes instead of months and also be orchestrated over multiple provider networks.”

As Bar-Lev explained, the real work of MEF today is to formally define Third Network services and all the related services required to implement flexible inter-network communications. “End-to-end LSO is essential for that,” he continued, “along with SDN and NFV.”

That’s where open standards and open source projects converge, with MEF initiatives like OpenLSO (Open Lifecycle Service Orchestration) and OpenCS (Open Connectivity Services). “It’s all about creating and trying out building blocks, so we can give service providers reference designs from which they can develop their offerings more quickly. They don’t have to define those services themselves from scratch; rather they can access them at MEF, which gives them a valuable and time-saving starting point,” Bar-Lev said.

Indeed, the OpenLSO and OpenCS projects describe a wide range of L1-L7 services that service providers need in order to implement Third Network services. MEF is defining these services, and developers work on evolving elements of the reference designs during LSO Hackathons.

A Broad Array of Projects and Participants at Euro16 LSO Hackathon

According to MEF, the OpenLSO scenarios worked on at the Euro16 LSO Hackathon were OpenLSO Inter-Carrier Ordering and OpenLSO Service Function Chaining. The OpenCS use cases were OpenCS Packet WAN and OpenCS Data Center. The primary objectives of the Euro16 LSO Hackathon included:

  • Accelerate the development of comprehensive OpenLSO scenarios and OpenCS use cases as part of MEF's Open Initiative for the benefit of the open source communities and the industry as a whole.
  • Provide feedback to ongoing MEF projects in support of MEF's Agile Standards Development approach to specification development.
  • Facilitate discussion, collaboration, and the development of ideas, sample code, and solutions that can be used for the benefit of service providers and technology providers.
  • Encourage interdepartmental collaboration and communications within MEF member companies, especially between BSS/OSS/service orchestration professionals and networking service/infrastructure professionals.

Strong Industry Participation at Euro16

Around 45 people participated in the Euro16 LSO Hackathon – the majority in Rome, with the remainder participating remotely, including the AT&T Remote Team in Plano, Texas, and other participants elsewhere in the United States.

“We brought people together with widely divergent backgrounds,” said MEF’s Bar-Lev. “We had software developers with no networking expertise, and network experts with no software skills. The core group worked in the same room in Rome for three days, with additional folks working independently and syncing up with the Rome teams when appropriate.”

The Euro16 LSO Hackathon included participants from Amartus, Amdocs, AT&T, CableLabs, CenturyLink, Ciena, Cisco, Edge Core Networks, Ericsson, Gigaspaces, HPE, Huawei, Infinera, Iometrix, Microsemi, NEC, Netcracker, NTT, ON.Lab, Telecom Italia Sparkle and ZTE. The whole process was managed by Bar-Lev and Charles Eckel, Open Source Developer Evangelist at Cisco DevNet.

“What is most important about the LSO Hackathon is that it takes the specifications that are being defined and transforms them into code,” said Eckel. “It moves that process forward dramatically. The way standards have traditionally been done is a very long process in which people spend months and sometimes years getting the details of documents figured out, and then it can turn out that the specification is almost non-implementable. With the LSO Hackathon we create code based on early versions of the specifications. This helps the process move forward because we identify what’s wrong, what’s missing, and what’s unclear, then we update the specs accordingly. This is an important reason for doing the LSO Hackathon.”

Eckel continued, “Equally important is the positive impact on the participating open source projects and open source communities. Usability issues and gaps in functionality are identified and addressed. The code implemented during the Hackathon is contributed back upstream, making those projects better suited to address the requirements mapped out by the specifications.”

Dawn Kaplan, Solution Architect, Ericsson, added: “The Euro16 LSO Hackathon aimed to solve a very crucial inter-carrier business problem that will change our industry when solved. The ordering project in the LSO Hackathon is focused on implementing the inter-carrier ordering process between service providers. At the Hackathon we built upon the defined use case, information model, and a sample API to enable service providers to order from one another in a completely automated fashion. With the code and practices developed at the Euro16 LSO Hackathon we will come much closer to tackling this very real issue.”
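To give a flavor of what such automated inter-carrier ordering might look like, here is a purely illustrative sketch. The endpoint, field names and payload below are hypothetical placeholders, not the MEF-defined information model or the sample API used at the Hackathon.

    import json
    import urllib.request

    # Hypothetical order from buyer "carrier-a" to seller "carrier-b"
    order = {
        "buyerId": "carrier-a",
        "sellerId": "carrier-b",
        "product": "carrier-ethernet-evc",
        "siteA": {"address": "1 Example Plaza, Rome"},
        "siteZ": {"address": "99 Sample Street, Dallas"},
        "bandwidthMbps": 100,
    }

    req = urllib.request.Request(
        "https://api.carrier-b.example/serviceOrdering/v1/serviceOrder",  # assumed URL
        data=json.dumps(order).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req) would submit the order in a real integration
    print(json.dumps(order, indent=2))

The point is that once the use case, information model and API are agreed, one provider’s OSS can place an order with another’s without any manual steps.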

“We are a new participant in the LSO Hackathon and find this initiative very important on a community level,” explained Shay Naeh, Solution Architect for NFV/SDN Projects at Cloudify by GigaSpaces. “Through the Euro16 LSO Hackathon, we are learning how to contribute our own open source code solutions and combine them alongside closed source solutions to make the whole ecosystem work. Open source is very important to us, and we are excited to see telcos coming around to the open source model as well. By having a close relationship with open source communities, the telcos influence those projects to take into account their operational requirements while reducing the chances of being locked into relationships with specific technology providers. You can mix and match vendor components and avoid having a vertical or silo solution. What is very important to telcos is to introduce new business services with a click of a button and this is definitely achievable.”

MEF Euro16 LSO Hackathon Report

MEF has published a new report spotlighting recent advances in development of LSO capabilities and APIs that are key to enabling agile, assured, and orchestrated Third Network services over multiple provider networks. The report describes objectives, achievements, and recommendations from multiple teams of professionals who participated in the Euro16 LSO Hackathon.

Coming Next:  MEF16 LSO Hackathon, November 2016

The next MEF LSO Hackathon will take place at the upcoming MEF16 global networking conference in Baltimore, November 7-10, 2016. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases.

“We will have different teams working on Third Network services,” said MEF’s Bar-Lev. “The work will accelerate the delivery of descriptions of how to create Third Network services, such as Layer 2 and Layer 3 services. Participants will get hands-on experience and involvement in identifying the different pieces of technology needed to develop those projects.”

About the Author
Alan Zeichick is founder, president and principal analyst, Camden Associates. 
Follow Alan on Twitter @zeichick

See also