Tuesday, March 1, 2016

Cisco Ups its Nexus Data Center Game

Cisco is rolling out its next generation of Nexus data center switches, including a new Nexus 9000 model based on its custom ASIC and a Nexus 3000 model based on Broadcom's Tomahawk silicon.

The refreshed portfolio aims to transition the market from 10G and 40G ports to 25/50/100 Gbps at the same density of interfaces as existing systems. Cisco said it will now be able to deliver 25G at the previous price of 10G, and 100G at the previous price of 40G -- effectively 2.5x bandwidth at the same price.

In terms of network architecture and software programmability, Cisco is supporting three choices: its own, full-bore Application Centric Infrastructure (ACI) architecture, running on the APIC controller with its Nexus 7K and flagship 9K switches; a Programmable Fabric option, which can also be supported on the new Nexus 3000 switches with Broadcom silicon; and a lighter Programmable Network architecture, running on any of the Nexus switches and featuring NX-OS enhancements for DevOps, automation and segment routing.
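
The NX-OS programmability mentioned here is exposed through interfaces such as NX-API, which accepts CLI commands wrapped in JSON-RPC over HTTP(S) and returns structured JSON. As a hedged sketch (the switch address and credentials below are placeholders, not values from the announcement), a DevOps script might build a request like this:

```python
import json

NXAPI_URL = "https://192.0.2.1/ins"   # placeholder switch address

def build_nxapi_request(command: str, msg_id: int = 1):
    """Build an NX-API JSON-RPC request for a single CLI command.

    NX-API (available on Nexus 9000/3000 switches) wraps CLI commands
    in JSON-RPC 2.0; the switch executes them and returns JSON.
    """
    headers = {"Content-Type": "application/json-rpc"}
    payload = [{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": command, "version": 1},
        "id": msg_id,
    }]
    return NXAPI_URL, headers, json.dumps(payload)

# Sending the request requires network access and real credentials:
#   import requests
#   url, headers, body = build_nxapi_request("show version")
#   resp = requests.post(url, data=body, headers=headers,
#                        auth=("admin", "password"))
```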

With the new Nexus 9000 switches, Cisco said it will achieve industry-leading 100 Gbps performance: 25 percent more non-blocking performance at 50 percent of the cost of comparable solutions, plus greater reliability and lower power consumption. The new Nexus 9K switches will support real-time network telemetry at 100 Gbps wire rate, enabling network security with pervasive NetFlow and fabric-wide troubleshooting. They will also scale to 10 times as many IP addresses and endpoints, and support over a million containers per rack. The new Nexus 9K is also designed to drive unique cloud services with adaptive capacity and congestion control, allowing customers to support lossless traffic for IP storage, hyperconverged and converged infrastructure on a single unified fabric, which Cisco says enables application completion times 50 percent faster than traditional competitive platforms.

In addition, Cisco is announcing a new Cisco Nexus Fabric Manager that automates the complete fabric lifecycle management with a point-and-click web interface, and offers automated configuration snapshots and rollbacks. Nexus Fabric Manager builds and self-manages a VXLAN-based fabric, dynamically configuring switches based on simplified user-based actions. An IT manager can fully deploy a VXLAN-based fabric in just three steps, complete with zero touch provisioning, and can upgrade all fabric switches to a new software release in "only four mouse clicks."

Cisco also announced several new ACI ecosystem pioneers: Infoblox, which automates network configuration and change; N3N, which extends ACI visibility beyond the network to the entire data center; Tufin, which provides visibility, control and security change orchestration across heterogeneous environments; vArmour, which provides application-aware micro-segmentation with advanced security analytics; and Veritas, which collects, protects, analyzes and optimizes customers’ global data.


Cisco's New HyperFlex Systems Take Aim at Hyperconvergence

Cisco is renewing its focus on hyperconverged infrastructure with the introduction of new HyperFlex Systems built on its UCS compute platform.

Compared with existing hyperconverged platforms, the new Cisco HyperFlex Systems aim to simplify policy-based automation across network, compute and storage for a wide set of enterprise applications. The systems promise plug-and-play setup within minutes, not days, with flexible, adaptive and independent scaling of compute, network and storage capacity. Cisco is offering data management capabilities, such as rapid clones and non-intrusive snapshots with always-on inline deduplication and inline compression, yielding up to an 80% reduction in the data footprint. The rollout includes:

  • Cisco HyperFlex HX220c M4 Node, 1 rack unit, up to 7 TB of storage, up to 2 processors per node
  • Cisco HyperFlex HX240c M4 Node, 2 rack units, up to 29 TB of storage, up to 2 processors per node
  • Cisco HyperFlex HX240c M4 Node with Cisco UCS B200 M4 Blade-Series Servers, max capacity and up to twice the compute, 2 rack units + 6 rack units, up to 29 TB of storage, and up to 4 processors per node
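
The always-on inline deduplication and compression described above follow a familiar storage pattern: split incoming data into fixed-size blocks, store each unique block once (keyed by a content hash), and compress what is actually written. A minimal, illustrative sketch of the idea, not Cisco's implementation:

```python
import hashlib
import zlib

BLOCK_SIZE = 4096

def dedupe_and_compress(data: bytes, store: dict) -> list:
    """Write `data` into `store`, returning the list of block references.

    Each unique block is compressed and stored once; repeated blocks
    only add a reference, which is where the footprint savings come from.
    """
    refs = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:              # new content: compress and keep it
            store[digest] = zlib.compress(block)
        refs.append(digest)                  # duplicates add a reference only
    return refs

store = {}
refs = dedupe_and_compress(b"A" * BLOCK_SIZE * 10, store)
# Ten identical logical blocks result in a single stored unique block.
```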


AT&T Selected by Defense Information Systems Agency

AT&T confirmed that it has been awarded a position on the Defense Information Systems Agency’s (DISA) Global Networking Services (GNS) contract. AT&T will provide services such as global and regional transport, software-defined networking, Ethernet and network integration. DISA provides enterprise network and IT infrastructure to support the requirements of the Department of Defense’s more than 40 military services, combatant commands and support organizations worldwide.

DISA plans to spend $4.3 billion over 10 years to build a single global network that will work 10x faster than today’s.

“This award lets us do our part to help defend the nation,” said Kay Kapoor, president, AT&T Global Business - Public Sector Solutions. “It aligns with our long-standing commitment to use our innovative technology solutions to support our troops, national safety and freedom.”


IBM Cloud Receives DISA Impact Level 5 Authorization

The U.S. Defense Information Systems Agency (DISA) has authorized IBM Cloud services at the highest security level – known as Impact Level 5 – for Controlled Unclassified Information as defined by the Department of Defense (DoD). This paves the way for DoD agencies seeking to take advantage of cloud innovation while managing sensitive data. DISA has granted the company a Conditional Authority to Operate IBM Cloud services hosted at the...

Level 3 Wins Spot on DISA's Global Network Services Contract

The Defense Information Systems Agency (DISA) selected Level 3 Communications for a multiple-year competitive contract known as Global Network Services (GNS). The contract allows Level 3 to bid on and provide communications services to support DISA's goal of providing a single global network including both wired and wireless technologies by 2020. The contract has a five-year base period and five one-year option periods. Level 3 is Authorized to...

NTT Com Rolls Out Enhanced Enterprise Cloud Services

NTT Communications is rolling out a number of enhancements to its Enterprise Cloud portfolio, first in Japan, followed by the UK, Singapore, the US, Australia, Hong Kong and Germany later this year.

Some highlights of the new capabilities:

  1. Hosted Private Cloud for Traditional ICT - dedicated bare-metal servers with the automation, pay-as-you-go flexibility and options of a multi-hypervisor environment, including VMware vSphere and Microsoft Hyper-V. It also provides a highly flexible internal network, which facilitates the re-use of on-premises network architecture in the cloud. 
  2. Enterprise-class Multi-tenant Cloud for Cloud-Native ICT - a secure and highly reliable, enterprise-class multi-tenant cloud based on the OpenStack architecture, giving customers an industry-standard open API to control the Enterprise Cloud in an automated manner. In addition, by incorporating Cloud Foundry, an open source Platform-as-a-Service (PaaS) software, the Enterprise Cloud provides PaaS for agile application development and operational efficiency. 
  3. Seamless Hybrid Cloud Environment - the enhanced Enterprise Cloud's multi-tenant cloud and hosted private cloud are connected at Layer 2 with SDN, which gives customers the power to flexibly and seamlessly configure network components (virtual servers, bare-metal servers, firewalls, load balancers) running in a complex on-premises environment within the same network segment. As a result, customers can reduce by approximately 30% the workload required for the network and server re-configuration that usually comes with cloud migration.
  4. Free and Seamless Connection between Cloud Platforms - the enhanced Enterprise Cloud platform is connected with a 10Gbps best-effort closed network, which is free of charge. In addition, connectivity between Enterprise Cloud platforms and data centers is provided at competitive prices globally, utilizing NTT Com's infrastructure as a telecom carrier. 
  5. Cloud Management Platform (CMP) for Full Visibility and IT Governance - providing efficient management and unified control of both Enterprise Cloud and third-party providers' clouds, including Amazon Web Services (AWS) and Microsoft Azure. CMP also enables customer management and operation of hybrid clouds and meets the IT governance, cost control, security management and automated ICT operational requirements needed to successfully execute a cloud-first digital transformation strategy companywide.
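
The Layer 2 stitching in item 3 relies on VXLAN (RFC 7348), which tunnels Ethernet frames over UDP using an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), so a single L2 segment can span clouds and data centers. A minimal sketch of that header:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Byte 0 sets the I flag (VNI valid); three reserved bytes follow,
    then the 24-bit VNI and a final reserved byte.
    """
    assert 0 <= vni < 2**24, "VNI is a 24-bit field"
    return struct.pack("!BBBB", 0x08, 0, 0, 0) + struct.pack("!I", vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8
```

Two endpoints configured with the same VNI see one shared Ethernet segment, which is what lets on-premises network architecture be reused in the cloud.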

NTT Com said it will continue to enhance its capabilities in areas such as SDx-enabled unification of cloud, world-class data centers and global network assets. The OpenStack-based micro service architecture of Enterprise Cloud enables NTT Com to incorporate the latest technology advancements coming from both open source communities and its technology partners. Some of the new functions NTT Com plans to launch for enterprises include SAP HANA, virtual private PaaS and enhanced cloud management capabilities, to further support the digital transformation journeys of its enterprise customers.

"As enterprise businesses move towards digitalization, globalization and cloud, NTT Com will continue to help customers innovate business processes and create new business models with Enterprise Cloud, which has been enhanced to meet requirements of both secure and reliable ICT and flexible and agile ICT," said Motoo Tanaka, Senior Vice President of Cloud Services at NTT Com.


Cisco to Acquire CliQr for Cloud Orchestration

Cisco agreed to acquire CliQr Technologies, a start-up offering an application-defined cloud orchestration platform to model, deploy and manage applications across bare metal, virtualized and container environments. Cisco will pay $260 million in cash and assumed equity awards, plus retention-based incentives.

The CliQr platform provides a broad variety of application profile templates and an integrated service library to make it easier to model new or existing, simple or complex applications in a way that is cloud agnostic. CliQr already integrates with Cisco ACI to enable application portability for on-premises and cloud environments. The company cites several key benefits:

  • Profile once, deploy anywhere: CliQr’s solution allows customers to create a single application profile that is simple and secure to deploy across any data center, public or private cloud.
  • Ensure consistent policies: CliQr automatically applies a customer’s access control and security policies to an application, and then ensures that those policies move with the application.
  • Optimize applications across hybrid cloud environments: CliQr will measure both price and performance of applications on any cloud environment, helping users to make informed decisions about the best place for their application on any data center or cloud.
  • Manage with one-click: CliQr provides a single management interface to give customers complete visibility and control across applications, cloud environments and users. 
Cisco said it will integrate CliQr across its data center portfolio. The CliQr team will join Cisco’s Insieme Business Unit reporting to Prem Jain, senior vice president and general manager. The goal is to make it simpler for customers to automate and manage application policies across the entire data center stack.

“Customers today have to manage a massive number of complex and different applications across many clouds,” said Rob Salvagno, vice president, Cisco Corporate Development. “With CliQr, Cisco will be able to help our customers realize the promise of the cloud and easily manage the lifecycle of their applications on any hybrid cloud environment.”


  • CliQr was founded by Gaurav Manglik and Tenry Fu, both previously from VMware. 
  • Investors in CliQr included Foundation Capital, Google Ventures, Translink Capital, and Polaris Partners.

MEF Appoints CTO, Launches Layer 3 IP Project

The MEF has appointed Pascal Menezes as its first-ever Chief Technology Officer. Menezes formerly was Principal at Microsoft Skype for Business. In this new role, he will lead the MEF’s technical strategy and align key work programs internally and with open source and SDO partners.

The MEF also announced an expansion of the scope of its work to accelerate the transition to agile, assured, and orchestrated Third Network services.

The new Layer 3 IP project will create a standard set of service attributes that can be used to define IP services delivered over a single provider network or over multiple interconnected networks. Having standardized service attributes – consistent and compatible with the globally adopted CE 2.0 services framework – will allow Layer 3 providers to leverage LSO definitions work and the new LSO Reference Architecture to create agile, assured, and orchestrated IP services.

"The new MEF project on IP Service Attributes is the first step in incorporating IP Services into the MEF LSO framework,” said David Ward, CTO of Engineering and Chief Architect, Cisco. “It enables the definition of standard LSO APIs and Yang modules at the service layer for managing and monitoring IP Services - including inter-provider services -  that are consistent with the LSO Reference Architecture used for Carrier Ethernet services.  Enabling automation of inter-provider IP services is a key pain point for our Service Provider customers today, which this work will help address."

The MEF now has more than 30 ongoing projects and initiatives; the Layer 3 IP project is a key step toward enabling multi-operator orchestration of IP services using MEF LSO (Lifecycle Service Orchestration).


Splunk Leads Adaptive Response Initiative

Splunk is spearheading an Adaptive Response Initiative to connect with a community of best-of-breed security vendors to improve cyber defense strategies and security operations. The idea is to combine alert and threat information from multiple security domains and technologies.

Splunk said this collective insight enables security teams to make better-informed decisions across the entire kill chain, especially when validating threats and applying analytics-driven response directives to their security environment.
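
The cross-domain correlation described here can be sketched simply: collect alerts from multiple tools, group them by the entity involved, and escalate when independent security domains agree. An illustrative sketch (the alert schema, hosts and thresholds below are hypothetical, not from the initiative itself):

```python
from collections import defaultdict

def correlate(alerts):
    """Flag hosts reported by two or more distinct security domains.

    Each alert is a dict like {"host": ..., "domain": ..., "detail": ...}.
    Agreement across domains (endpoint, network, identity, ...) is a much
    stronger signal than any single tool's alert on its own.
    """
    by_host = defaultdict(set)
    for alert in alerts:
        by_host[alert["host"]].add(alert["domain"])
    return {host for host, domains in by_host.items() if len(domains) >= 2}

alerts = [
    {"host": "db01", "domain": "endpoint", "detail": "suspicious binary"},
    {"host": "db01", "domain": "network",  "detail": "beaconing traffic"},
    {"host": "web02", "domain": "network", "detail": "port scan"},
]
# Only db01 is flagged: two independent domains agree on it.
```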

Founding participants of the Adaptive Response Initiative include Carbon Black, CyberArk, Fortinet, Palo Alto Networks, Phantom, Splunk, Tanium, ThreatConnect and Ziften.

“The mission of the Adaptive Response Initiative is to bring together the best technologies across the security industry to help organizations combat advanced attacks,” said Haiyan Song, senior vice president of security markets, Splunk. “Modern cyber threats are dynamic, and attackers are constantly finding new ways to get in and exploit networks and systems. This new challenge goes well beyond preventing individual stages of an attack. Adaptive Response aims to more effectively connect intelligence across best-of-breed technologies to help organizations improve their security posture, quickly validate threats, and systematically disrupt the kill chain.”


SanDisk and IBM Team on Software-Defined All-Flash Storage

SanDisk and IBM announced a collaboration focused on next-generation, software-defined, all-flash storage solutions for the data center.

The partnership brings together SanDisk’s InfiniFlash System, a high-capacity, extreme-performance flash-based software-defined storage system, with the IBM Spectrum Scale filesystem, a software-defined, distributed, parallel filesystem for high-performance, large-scale workloads on-premises or in the cloud. Featuring the ability to automatically tier data based on application needs, the Spectrum Scale software-defined storage solution provides file, object, and integrated data analytics for:

  • Compute clusters (technical computing)
  • Big data and analytics
  • Cognitive computing
  • Hadoop Distributed File System (HDFS)
  • Private cloud
  • Content repositories

“Our initiative with IBM brings the best of both worlds to data centers: breakthrough economics compared to traditional all-flash array deployments, and dramatically higher performance, improved reliability and lower power consumption compared to hard disk drive-based arrays,” said Ravi Swaminathan, vice president and general manager, System and Software Solutions, SanDisk. “These offerings enable customers to economically deploy flash at petabyte-scale, which drive business growth through new services and offerings for their end-customers.”


Monday, February 29, 2016

Top Twelve Cloud Computing Threats

The Cloud Security Alliance (CSA) Top Threats Working Group published a report listing The Treacherous 12: Cloud Computing Top Threats in 2016:

  • Data Breaches
  • Weak Identity, Credential and Access Management
  • Insecure APIs
  • System and Application Vulnerabilities
  • Account Hijacking
  • Malicious Insiders
  • Advanced Persistent Threats (APTs)
  • Data Loss
  • Insufficient Due Diligence
  • Abuse and Nefarious Use of Cloud Services
  • Denial of Service
  • Shared Technology Issues

"Our last Top Threats report highlighted developers and IT departments rolling out their own self-service Shadow IT projects, and the bypassing of organizational security requirements. A lot has changed since that time and what we are seeing in 2016 is that the cloud may be effectively aligned with the Executive strategies to maximize shareholder value," said Jon-Michael Brook, co-Chair of the Top Threats Working Group. "The 'always on' nature of cloud computing impacts factors that may skew external perceptions and, in turn, company valuations."


Gigamon Revs Metadata Engine for Contextual Security Analytics

Gigamon unveiled a Metadata Engine for its GigaSECURE Security Delivery Platform (SDP).

The solution centrally generates and aggregates contextual information about network traffic. It sends that metadata to the security analytics devices that can leverage the information.

The company said its Metadata Engine will ‘super-charge’ security information and event management (SIEM) systems, enabling forensics solutions and user behavioral analytics products to connect to its GigaSECURE Security Delivery Platform and receive the Metadata Engine's output, which includes:
  • NetFlow/IPFIX records
  • URL/URI information
  • SIP request information
  • HTTP response codes
  • DNS queries
  • DHCP queries (future)
  • Certificate information (future)
  • Custom data (future)
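
NetFlow records like those listed above have a simple wire format; for example, the widely deployed NetFlow v5 export packet begins with a fixed 24-byte header. A sketch of parsing it, which is roughly what a consuming analytics tool does first:

```python
import struct

# NetFlow v5 header fields: version, count, sys_uptime, unix_secs,
# unix_nsecs, flow_sequence, engine_type, engine_id, sampling (24 bytes).
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(packet: bytes) -> dict:
    """Parse the fixed header of a NetFlow v5 export packet."""
    (version, count, sys_uptime, unix_secs, _unix_nsecs,
     flow_sequence, _engine_type, _engine_id, _sampling) = V5_HEADER.unpack(
        packet[:V5_HEADER.size])
    if version != 5:
        raise ValueError("not a NetFlow v5 packet")
    return {"version": version, "count": count, "sys_uptime": sys_uptime,
            "unix_secs": unix_secs, "flow_sequence": flow_sequence}
```

The `count` field tells the collector how many 48-byte flow records follow the header in the same packet.
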
Gigamon also announced a number of ecosystem partners supporting this approach: FlowTraq, Lancope (now a Cisco company), LogRhythm, Niara, Plixer and SevOne.

“We want to enable our customers to drastically improve their security posture by taking advantage of the latest trends in security analytics,” said Shehzad Merchant, CTO, Gigamon. “By enabling both context and packet based security analytics, Gigamon’s customers benefit by improving their ability to uncover intruder threats faster.”


Gigamon Launches Security Visibility Platform for Advanced Persistent Threats

Gigamon introduced its "GigaSECURE" Security Delivery Platform for providing pervasive visibility of network traffic, users, applications and suspicious activity, and then delivering it to multiple security devices simultaneously without impacting network availability.

The idea is to counter Advanced Persistent Threats (APTs) by leveraging a traffic visibility fabric to extract scalable metadata across a network, including cloud and virtual environments, and thereby empower third-party security applications. This enables improved forensics and the isolation of applications for targeted inspection. The company also said its solution can deliver visibility into encrypted traffic for threat detection. The architecture supports inline and out-of-band security device deployments.

Gigamon's GigaSECURE comprises scalable hardware and software elements:

  • Infrastructure-wide reach via GigaVUE-VM and GigaVUE nodes;
  • High-fidelity, un-sampled Netflow/IPFIX generation;
  • Application Session Filtering;
  • SSL decryption; and
  • Inline bypass capabilities.
Gigamon also highlighted its Application Session Filtering (ASF), a new, patent-pending GigaSMART application that can identify applications based on signatures or patterns that appear within a packet or packets. Once an application is positively identified, ASF extracts the entire session corresponding to the matched flow, from the initial packet to the last, even if the match occurs well after the first packet. This allows an administrator to forward specific “traffic of interest” to security appliances, thereby optimizing their operational efficiency and improving overall performance.
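
The ASF behavior described above can be sketched as: buffer packets per flow, and once a signature matches anywhere in the flow, emit the entire buffered session. A simplified illustration (the real GigaSMART implementation operates on live traffic at line rate; flow keys and payloads here are synthetic):

```python
from collections import defaultdict

def extract_sessions(packets, signature: bytes) -> dict:
    """Return complete sessions whose payload matches `signature`.

    Each packet is a (flow_key, payload) pair. All packets of a flow are
    buffered, so the whole session is forwarded even if the match occurs
    after the first packet.
    """
    flows = defaultdict(list)
    matched = set()
    for flow_key, payload in packets:
        flows[flow_key].append(payload)
        if signature in payload:
            matched.add(flow_key)
    return {key: flows[key] for key in matched}

packets = [
    ("10.0.0.1:443", b"hello"),
    ("10.0.0.1:443", b"...GET /app..."),   # match arrives mid-flow
    ("10.0.0.2:22",  b"ssh-noise"),
]
sessions = extract_sessions(packets, b"GET /app")
# The full two-packet session for 10.0.0.1:443 is returned, including
# the packet that preceded the match; the other flow is dropped.
```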

The GigaSECURE platform already supports a broad ecosystem of security partners and their respective security functions, including:

Advanced Malware Protection: Check Point, Cisco, Cyphort, FireEye and Lastline;
Behavior Analytics: Damballa, Lancope, LightCyber and Niara;
Forensics/Analytics: ExtraHop, PinDrop, RSA and Savvius;
IPS: Check Point and Cisco;
NGFW: Check Point, Cisco, Fortinet and Palo Alto Networks;
Secure Email Gateways: Cisco;
SIEMs: LogRhythm and RSA;
WAFs: Imperva.


Pica8 Cranks Up OpenFlow Switching by 1000x

Pica8 is introducing a Table Type Patterns (TTP) functionality in its PicOS network operating system that overcomes limitations in OpenFlow scaling for very large data centers.

The company said TTP enables its PicOS to scale to 2 million flows with Cavium’s XPliant switch ASIC, and to 256,000 flows with Broadcom’s StrataXGS Tomahawk switch ASIC. Typical TCAM flow capacity in the top-of-rack installed base today is between 1,000 and 2,000 flows, and with Pica8’s TTP implementation, production networks can scale 1,000 times more.

TTP defines how tables are set up in a switch, which an SDN controller can program via the OpenFlow switch protocol. The development of a TTP-based approach has been motivated by several factors, including: to maximize the available capacity, to better accommodate heterogeneity of existing hardware switches, to enable future innovation in hardware switches through more seamless SDN application development, and to enable granular and automated communication between application / controller developers and switch vendors.
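
In practice, a TTP describes the switch's table pipeline (table names, match fields, sizes) so a controller knows what it can safely program. As an illustrative sketch only (the real ONF TTP format is a richer JSON schema, and the table names and capacities below are hypothetical), a controller might check a planned flow load against the advertised capacities:

```python
# Hypothetical table-type pattern: the switch advertises per-table
# capacities so the controller can place flows where they fit.
TTP = {
    "l2_table":   {"match": ["eth_dst"],          "capacity": 2_000_000},
    "tcam_table": {"match": ["ip_src", "ip_dst"], "capacity": 2_000},
}

def fits(plan: dict, ttp: dict) -> bool:
    """Check whether a planned per-table flow count fits the TTP capacities."""
    return all(count <= ttp[table]["capacity"] for table, count in plan.items())

# 1.5M exact-match L2 flows fit the large table, but the same load
# would overflow the 2,000-entry TCAM table.
```

This is the gap TTP closes: without such a declared pipeline, controllers had to assume the lowest-common-denominator TCAM capacity.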

“TTP and our own abstraction technology – vASIC -- unlock custom ASICs to bring choice, programmability and scale to application developers,” said Dan Tuchler, vice president of product management at Pica8. “Application developers no longer have to worry about the limitations or differences between ASICs when delivering their solutions to the market.”

TTP is in early release and will be generally available with PicOS in March.


IBM to Acquire Resilient Systems for Security Incident Response

IBM agreed to acquire Resilient Systems, a leader in security incident response solutions, based in Cambridge, Mass. Financial terms were not disclosed.

Resilient Systems' incident response platform technology enables clients to respond to security breaches faster and with greater precision and coordination, allowing orchestration of response process across functions (security, HR, finance, government relations, etc.) and across security systems (those monitoring data, applications, end points, networks, etc.). It also helps clients to respond to increasing regulation.

"We are excited to be joining IBM Security, the industry's fastest-growing enterprise security company," said John Bruce, Resilient Systems Co-Founder and CEO. "By combining, the market now has access to the leading prevention, detection and response technologies available in the same portfolio – the security trifecta."

A major benefit will be the planned combination of Resilient Systems' Incident Response Platform with IBM QRadar Security Intelligence Incident Forensics, BigFix, IBM X-Force Exchange and IBM Incident Response Services, which can enable an orchestrated process for addressing security incidents.

IBM also launched new X-Force Incident Response Services, which include consulting and managed security services to help clients manage all aspects of responding to a cyber breach. IBM X-Force security experts will help clients develop response strategies, including Computer Incident Response Team playbooks, and a means to more effectively discover, track, respond to and report on security incidents.  These new capabilities will be further enhanced through the planned acquisition of Resilient Systems.


Trend Micro Cloud App Security Integrates Box, Dropbox and Google Drive

Trend Micro has extended its Cloud App Security solution to Box, Dropbox and Google Drive.

Cloud App Security capabilities include:

  • Guards against advanced threats with sandbox malware analysis
  • Uses DLP to provide visibility into sensitive data use with cloud file sharing
  • Detects malware hidden in office files using document exploit detection
  • Supports all user functionality and devices with simple API integration in the cloud
  • Integrates with Trend Micro Control Manager for central visibility of threat and DLP events across hybrid Exchange environments as well as endpoint, web, mobile, and server security layers.
  • Cloud App Security has been integrated with a number of leading marketplaces and cloud commerce platforms to give partners and customers a complete end-to-end purchasing and provision experience.


Check Point and IBM Form Threat Intelligence Alliance

Check Point Software Technologies and IBM announced an expanded alliance, including the sharing of threat intelligence, as the security industry moves to a more collaborative approach to defend against cybercrime.

The new alliance includes four main areas of collaboration:

  • Shared threat intelligence. An open approach to collaborative defense in the security industry is needed to effectively protect against new and evolving threats. IBM X-Force and Check Point’s security research team will directly collaborate through the bi-directional sharing of threat identification and analysis using IBM X-Force Exchange (XFE), IBM’s threat intelligence sharing platform. This collective threat intelligence may be integrated into each company’s threat intelligence products, to help deliver proactive threat protection to customers of both companies.
  • Integrated event management. Sharing capabilities across the security management platforms deployed by customers accelerates the company’s collective response to threat activity, and extends the value of security technology investments for clients. Check Point will be launching a new SmartConsole application in the IBM Security App Exchange for integration with the IBM Security QRadar Intelligence Platform. The app will deliver network data and security events from Check Point devices to QRadar to enable operators to view threat information in real-time directly from the QRadar console for faster incident response.
  • Advanced mobile protection. Integration within IBM Maas360 enterprise mobility management (EMM) will allow customers to easily deploy and manage Check Point Mobile Threat Prevention to limit compromised devices from accessing enterprise networks and data, based on real-time insights. The combination of these capabilities provides automated protection against advanced threats across mobile devices, apps and networks, while significantly simplifying the implementation and ongoing monitoring of mobile security technology across the enterprise.
  • Managed security services. IBM Managed Security Services (MSS) will continue to deepen its expertise in delivering and managing Check Point solutions for IBM customers. The deployment and management of a broader range of Check Point network security offerings will be supported through new lab equipment and ongoing training of IBM SOC analysts and solution architects, providing customers with cost-effective access to resources and expertise as their security requirements evolve.

“Today’s business environment is more connected and more innovative than ever before, requiring equally innovative ways to help customers keep a step ahead of possible threats,” said Avi Rembaum, vice president of security solutions, Check Point. “Both Check Point and IBM Security take a prevent-first approach to security. Through intelligence sharing and technology integration we aim to help improve our customers' security programs and create a new model for industry cooperation.”


Hibernia Networks Opens PoP in Dubai

Hibernia Networks has established a Point of Presence (PoP) in Dubai, UAE in one of the city's major telco hubs. The Ethernet-based connectivity service leverages the unmatched latency performance of the Hibernia Express cable across the Atlantic, which connects Europe and North America.

“With its strategic location, Dubai is a major international hub for financial markets as well as media and content distribution throughout the region and beyond,” states Omar Altaji, CCO of Hibernia Networks. “Hibernia Networks’ presence in Dubai confirms our commitment to strategically expanding our global network reach into new geographic locations in order to provide customers with the high-speed, high-quality connections they require around the globe for applications such as split second financial transactions and live broadcast feeds. We look forward to continued growth in key local and regional markets to better serve increasing global demand for secure and diverse low latency connectivity solutions.”


Krish Prabhu joins University of Texas (Arlington) Faculty

Krish Prabhu, president of AT&T Labs and chief technology officer, has been named to The University of Texas at Arlington's Engineering Hall of Achievement and appointed a research professor in the Department of Computer Science and Engineering. He will remain in his current role at AT&T while serving as a resource for UTA as it bolsters its research and teaching in his areas of expertise.

Prabhu previously was chief executive officer of Tellabs. Before that, he served as chief operating officer of Alcatel in Paris from 1991 to 2001.


Sunday, February 28, 2016

Blueprint: Enterprise Cloud is the “New Normal,” Now It’s About Making It Work

by Boris Goldberg, co-founder of Cloudyn

Predictions from 2014 suggested that enterprises would flock to the cloud, and that is exactly what happened. The spectacular growth AWS has enjoyed, along with estimates that its revenues would reach upwards of $7 billion by the end of 2015, shows that this market has evolved into something much bigger than anticipated. Microsoft Azure, while still in second place, has also seen significant growth, at a faster rate than AWS, and its enterprise enrollment offering has extended its range of services to support enterprises in the cloud. Taking a closer look at both AWS and Azure, the impact they have had on traditional enterprise infrastructure budgets is astonishing. So what can we expect to see as the year unfolds?

1. Multi-Cloud Plans
Today, many IT organizations and enterprises want to migrate to the cloud, and when they do, they usually stick with one IaaS provider (generally AWS, the market leader). However, they also recognize the need to reduce vendor lock-in and to continue cultivating previously arranged business engagements. For example, if an organization has worked with Microsoft for the past ten years through an Enterprise Agreement covering its MSDN or Office packages, the CIO will still want to leverage these long-term agreements to support the organization’s cloud initiatives.

So how is it even possible to bring two, three, or even four different vendors together in one place where you can control them all from a single pane of glass? Coping with this challenge in 2016 will mean adopting cloud management and governance solutions that streamline cost control and even compliance across the different environments.
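The "single pane of glass" idea above boils down to normalizing each vendor's billing data into one common shape before aggregating it. A minimal sketch of that normalization step is below; the field names and sample records are purely illustrative and do not reflect any vendor's actual billing schema.

```python
# Hypothetical sketch: normalizing billing records from multiple IaaS
# providers into one common cost view. Field names ("unblended_cost",
# "pretax_cost", etc.) and the sample data are illustrative only.

def normalize(provider, record):
    """Map a provider-specific billing record to a common shape."""
    if provider == "aws":
        return {"provider": "aws", "service": record["product"],
                "cost_usd": record["unblended_cost"]}
    if provider == "azure":
        return {"provider": "azure", "service": record["meter_category"],
                "cost_usd": record["pretax_cost"]}
    raise ValueError("unknown provider: %s" % provider)

def cost_by_provider(records):
    """Aggregate normalized records into total cost per provider."""
    totals = {}
    for provider, record in records:
        row = normalize(provider, record)
        totals[row["provider"]] = totals.get(row["provider"], 0.0) + row["cost_usd"]
    return totals

bills = [
    ("aws", {"product": "EC2", "unblended_cost": 1200.0}),
    ("aws", {"product": "S3", "unblended_cost": 300.0}),
    ("azure", {"meter_category": "Virtual Machines", "pretax_cost": 800.0}),
]
print(cost_by_provider(bills))  # {'aws': 1500.0, 'azure': 800.0}
```

In practice a management product does this against each provider's real billing export, but the design point is the same: one canonical record format, with per-vendor adapters at the edge.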

2. Administrator Job FUD
Provisioning resources online with just a credit card, rather than through long, cumbersome IT procurement and provisioning processes, has forced admins to acquire new skills to navigate and leverage the world of IaaS. And as more enterprises implement the DevOps methodology, they will continue deepening their agile adoption and applying more automation to testing and to delivering full application stacks in the cloud.

It is up to IT organization leaders to give their administrator teams the job security needed to quell the fear, uncertainty and doubt that is likely to surface. They must also offer ongoing guidance and the tools required to make the shift to the cloud and to keep cloud environments healthy and well controlled.

3. Sharpened Cloud Tool Belts
In 2015, we saw production workloads from some of the largest enterprises in the world move to AWS. Today, enterprise IT leaders understand the public cloud’s economies of scale and recognize the providers’ ability to invest far more in areas such as security. However, using a robust cloud offering does not mean that an environment will be robust. Management tools are still needed to ensure that cloud environments follow performance and security best practices. While a well-formed infrastructure is critical, enterprises also need the right tools and skills to deploy and manage this highly dynamic and complex environment.

Throughout 2016, expect to see IT admins gain a deeper understanding of cloud implementation and best practices, as well as look for the right management tools in order to fulfill their side of the cloud’s shared responsibility model.

4. Cloud Financial Management
Users today are aware of the need to control their cloud costs; however, cloud financial management is not only about costs. The value the cloud drives, such as delivery speed, flexibility, and even actual business revenue, should also be recognized. This can be done by merging usage, performance and cost monitoring under one umbrella, which is where a cloud financial management solution comes into play: it should be able to show how your cloud purchases have improved business performance.
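One simple way to express "cost in terms of value" rather than raw spend is a unit-economics metric, such as cloud cost per business transaction. The sketch below illustrates the idea; the numbers and the "transactions" metric are hypothetical, not drawn from any real deployment.

```python
# Illustrative sketch: relating cloud spend to business output, so that
# cost reports show value delivered rather than raw dollars.
# All figures below are hypothetical.

def unit_cost(monthly_cost_usd, business_units):
    """Cost per unit of business output (e.g. per transaction served)."""
    if business_units <= 0:
        raise ValueError("business_units must be positive")
    return monthly_cost_usd / business_units

# Month-over-month comparison: total spend rose 20 percent, but the
# cost per transaction actually fell, a net win for the business.
jan = unit_cost(50_000.0, 10_000_000)   # $0.0050 per transaction
feb = unit_cost(60_000.0, 15_000_000)   # $0.0040 per transaction
print(f"Jan: ${jan:.4f}/txn, Feb: ${feb:.4f}/txn")
```

Framed this way, a rising cloud bill can be good news, which is exactly the shift from cost accounting to financial management the column describes.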

2016 will see this shift in thinking take hold. IT leaders will recognize the great financial opportunity the cloud has put in their hands and start making implementation plans for comprehensive financial management solutions.

5. The Hybrid Cloud has Landed
This year, we should expect to see hybrid clouds gain even more prominence, specifically in the discussion of financial and security management, with the focus of the year being how to manage and confront complex hybrid environments.      


Now that it’s clear the business sector has embraced cloud computing, with companies either adopting public cloud services or building their own environments, it’s time to gain a deeper understanding. That means shifting the focus to enhancing your own skills and your team’s skills. With more sophisticated cloud operations, you will be able to align more closely with your enterprise goals and ultimately allow IT to have a bigger impact on the business than ever before.

About the Author

Boris Goldberg is a co-founder of Cloudyn and serves as its Chief Technology Officer, responsible for its technology and architectural direction. Boris has more than 25 years of hands-on experience in configuration discovery, configuration management and business service management solutions. He served as Chief Architect for Evolven, and also served as Senior R&D Manager and Sr. Architect for BMC Performance Monitoring Solutions.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Saturday, February 27, 2016

Preview #ONS2016 with Guru Parulkar

This year's Open Networking Summit is going to bring together everything in open source networking, says Guru Parulkar, Program Chair of ONS. Expect to see mini-summits of all the big open source projects underway. The event will also bring together builders, enterprise users and Service Providers at the forefront of SDN and NFV.


YouTube link: https://youtu.be/D2u1VhhAdZY

Open Networking Summit - March 14-16, 2016
Santa Clara Convention Center
Santa Clara, California

See also