
Tuesday, June 20, 2017

The Evolution of VNFs within the SD-WAN Ecosystem

As the WAN quickly solidifies its role as the performance bottleneck for cloud services of all kinds, the SD-WAN market will continue to grow and evolve. This evolution will happen in lock step with the move to software-defined everything in data centers for both the enterprise and the service provider, with a focus on Virtual Network Functions (VNFs) and how they could be used to create specialized services based on custom WANs on demand. Although SD-WANs provide multiple benefits in terms of cost, ease-of-management, improved security, and improved telemetry, application performance and reliability remain paramount as the primary goals for the vast majority of SD-WAN deployments. When this is taken into consideration, the role of VNFs in extending and improving application performance becomes clear. Just as importantly, growing use of VNFs within SD-WANs extends an organization’s software-defined architecture throughout the broader network and sets the stage for the insertion of even more intelligence down the road.

What exactly do we mean by the term VNF? 

Before we get started, let's define what we mean by VNF since, similar to SD-WAN, the term can describe multiple things. For some, VNFs are primarily a means of replicating legacy capabilities, such as firewall, DHCP and DNS, on a local appliance (physical or virtual) by means of a software-defined architecture. However, restricting one's scope to legacy services alone limits the high-value benefits that a software-defined approach can deliver for more advanced features. Our definition of a VNF is therefore a superset of the localized VNF: the creation of software-defined functions with more advanced capabilities, such as application-aware VPNs, flow-based load balancing and self-healing overlay tunnels. What's more, many advanced SD-WAN vendors give their customers the ability to customize these VNF applications for their own WAN and/or their specific network requirements, enabling unique WAN services.

What do we need VNFs for? 

SD-WAN's enormous growth this year, as well as its predicted continued growth in the years to come, follows in the footsteps of the paradigm shift data centers are currently undergoing: from a manually configured set of servers and storage appliances to a software-defined architecture, where the servers and storage appliances (virtual or physical) are managed and operated through software. This means fewer manual errors, lower costs and a more efficient way to operate the data center.

As an industry, as we apply some of these data center approaches to the WAN (Wide Area Network), we must note a big difference between data center networks and WANs. Data center LANs (Local Area Networks) have ample capacity and bandwidth and, unless they are misconfigured, are never the performance bottleneck. With WANs, however, whether built in-house by the enterprise or delivered as a service by a telecom or other MSP, branch offices connect to the Internet over WAN links (MPLS, DSL, cable, fiber, T1, 3G/4G/LTE, etc.). As a result, the performance choke point is almost always the WAN. This is why SD-WANs became so popular so quickly: they provide immediate relief for this issue.

However, as WANs continue to grow in complexity, with enterprises operating multiple clouds and/or cloud models simultaneously, there is a growing need to add automation and programmability to the software-defined WAN in order to ensure performance and reliability. VNFs that can address this WAN performance bottleneck therefore have the opportunity to transform how enterprises connect to their private, public and hybrid clouds. VNFs that extend beyond a single location to cover the entire WAN can add programmability to the WAN itself. In effect, the "software-defined" nature of the data center is stretched all the way to the branch office, including the WAN connectivity between them.

Defining SD-WAN VNFs

So what does a VNF that is programmable and addresses the WAN bottleneck look like? These VNFs are overlay tunnels that can apply flow logic and therefore work around network problems on a packet-by-packet basis, per flow. Problem diagnosis, problem alerting and, most importantly, problem resolution are all baked into the VNF. In other words, unlike the days before SD-WAN, when an IT manager would face an urgent support ticket whenever a network problem occurred, with VNF-based SD-WANs the network becomes smart enough to solve the problem proactively, in most cases before it even affects the applications, services and the user experience.

This increase in specific VNFs for the SD-WAN will start with the most immediate need, which is often latency- and jitter-sensitive applications such as voice, video, UC and other chatty applications. Even now, VNFs are being used to solve these issues. For example, a CIO can deploy a VNF that dynamically and automatically steers VoIP/SIP traffic around network problems caused by high latency, jitter and packet loss, and in parallel run another VNF that optimizes cross-traffic and latency for chatty applications.

In another example, a VNF can be built in minutes to steer non-real-time traffic away from a costly WAN link and apply header compression to real-time traffic only when packet loss or latency crosses a specific threshold during certain times of the day, all the while updating syslog with telemetry data. With this level of flexibility and advanced capability, VNFs are poised to become the go-to solution for WAN-related issues.
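To make the flow logic described in the two examples above concrete, here is a minimal, runnable Python sketch of the kind of policy such a VNF might apply. The link names, thresholds, time windows and the syslog stand-in are all hypothetical and are not taken from any particular SD-WAN product.

```python
import logging
import time
from dataclasses import dataclass

logging.basicConfig(format="%(asctime)s %(message)s")
telemetry = logging.getLogger("vnf-telemetry")  # stand-in for a real syslog sender

@dataclass
class LinkStats:
    name: str
    latency_ms: float
    loss_pct: float
    cost_per_gb: float

def pick_link(flow_class: str, links: list[LinkStats]) -> LinkStats:
    """Steer real-time flows to the lowest-latency link; push bulk traffic
    to the cheapest link whose loss is still acceptable (hypothetical policy)."""
    if flow_class == "realtime":
        return min(links, key=lambda l: l.latency_ms)
    usable = [l for l in links if l.loss_pct < 2.0]
    return min(usable or links, key=lambda l: l.cost_per_gb)

def needs_header_compression(link: LinkStats, hour: int) -> bool:
    """Compress real-time traffic only when the link degrades during
    business hours (thresholds are illustrative)."""
    return 8 <= hour < 18 and (link.loss_pct > 1.0 or link.latency_ms > 150)

links = [LinkStats("mpls", 35.0, 0.1, 8.00), LinkStats("broadband", 60.0, 1.5, 0.50)]
chosen = pick_link("bulk", links)
telemetry.warning("bulk flow steered to %s (loss=%.1f%%)", chosen.name, chosen.loss_pct)
if needs_header_compression(pick_link("realtime", links), time.localtime().tm_hour):
    telemetry.warning("enabling header compression for real-time flows")
```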

A VNF load balancer is another such overlay, one that balances traffic across the WAN links. Because the VNF load balancer is essentially software deployed onto an SD-WAN appliance, it can take advantage of various types of intelligence and adaptability to optimize WAN performance. A VNF load balancer should also interoperate with standard routing, so that it can be inserted seamlessly into the network, say between the WAN modems and the firewall/router.
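As a rough illustration of flow-based load balancing, the sketch below hashes each flow's 5-tuple and maps it onto WAN links in proportion to assumed link weights, so packets of one flow always take the same path. The links, weights and flow tuple are hypothetical.

```python
import hashlib
from itertools import accumulate

# Hypothetical WAN links and relative weights (e.g. derived from measured capacity).
LINKS = [("mpls", 1), ("cable", 4), ("lte", 1)]

def pick_wan_link(src: str, dst: str, sport: int, dport: int, proto: str) -> str:
    """Pin a flow to one link by hashing its 5-tuple, weighted by link capacity,
    so packets of the same flow stay on the same path and avoid reordering."""
    total = sum(w for _, w in LINKS)
    digest = hashlib.sha256(f"{src}{dst}{sport}{dport}{proto}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % total
    for (name, _), edge in zip(LINKS, accumulate(w for _, w in LINKS)):
        if bucket < edge:
            return name
    return LINKS[-1][0]

print(pick_wan_link("10.0.0.5", "52.1.2.3", 51514, 443, "tcp"))
```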

Clearly, VNFs are part and parcel of SD-WAN's next wave of evolution, bringing intelligence and agility to the enterprise WAN. As 2017 ramps up, we'll see more and more innovation on this front, fully extending software-defined architecture from the data center throughout the network.

About the author

Dr. Cahit Jay Akin is the CEO and co-founder of Mushroom Networks, a long-time supplier of SD-WAN infrastructure for enterprises and service providers. Prior to Mushroom Networks, Dr. Akin spent many years as a successful venture capitalist. Dr. Akin received his Ph.D. and M.S.E. degree in Electrical Engineering and M.S. in Mathematics from the University of Michigan at Ann Arbor. He holds a B.S. degree in Electrical Engineering from Bilkent University, Turkey. Dr. Akin has worked on technical and research aspects of communications for over 15 years including authoring several patents and many publications. Dr. Akin was a nominee for the Most Admired CEO award by San Diego Business Journal. 

Sunday, April 30, 2017

Blueprint: Five Considerations for a Successful Cloud Migration in 2017

by Jay Smith, CTO, Webscale Networks

Forrester Research predicts that 2017 will see a dramatic increase in application migration to the cloud.  With more than 90 percent of businesses using the cloud in some form, the question facing IT leaders and web application owners is not when to move to the cloud but how to do it.

The complexities of application integration and migration to the cloud are ever-changing. Migration has its pitfalls: the risk of becoming non-compliant with regulations or industry standards, security breaches, loss of control over applications and infrastructure, and issues with application portability, availability and reliability. There are additional complexities to be considered, but some are simply obstacles to overcome, while others are outright deal-breakers: factors that cause organizations to halt plans to move apps to the cloud, or even to bring cloud-based apps back on premise.

As I see it, these are the deal-breakers in the minds of the decision maker:

Regulatory and Compliance Requirements

Many industries, including healthcare and finance, require compliance with multiple regulations or standards. Additionally, due to today’s global economy, companies need to understand the laws and regulations of their respective industries as well as of the countries in which their customers reside. With a migration, first and foremost, you need to know if the type of cloud you are migrating to supports the compliance and regulations your company requires. Because a cloud migration does not automatically make applications compliant, a knowledgeable cloud service provider can ensure that you maintain compliance, and do so at the lowest possible cost. In parallel, your cloud service provider needs to consider the security policies required to ensure compliance.

Data Security

To date, data security is still the biggest barrier preventing companies from realizing the benefits of the cloud. According to the Interop ITX 2017 State of the Cloud Report, more than half of respondents (51 percent) cited security as the biggest challenge in moving to the cloud. Although security risks are real, they are manageable. During a migration, you need to first ensure the secure transport of your data to and from the cloud. Once your data is in the cloud, you need to know your provider’s SLAs regarding data breaches, but also how the provider will remediate or contain any breaches that do occur. A comprehensive security plan, coupled with the provider’s ability to create purpose-built security solutions, can instill confidence that the provider is up to the task.

Loss of Control

When moving apps to the cloud, many companies assume that they will lose control of app performance and availability. This becomes an acute concern for companies that need to store production data in the cloud. However, concern breeds solutions, and the solution lies as much in the company's hands as in the provider's. Make sure that performance and availability are addressed front and center in the provider's SLA. That is how you maintain control.

Application Portability

With application portability, two issues need to be considered. First, IT organizations often view the hybrid cloud (for example, a combination of public and private clouds) as their architecture of choice, and that choice invites concerns about moving between clouds, which differ in their architecture, OS support, security and other factors. Second, IT organizations want choice and do not want to be locked into a single cloud or cloud vendor, yet the process of porting apps to a new cloud is complex and not for the faint of heart. If the perceived complexity is too great, IT will opt to keep applications on premise.

App Availability and Infrastructure Reliability

Availability and reliability can become issues if a cloud migration is not successful. To ensure success, first make sure the applications you are migrating are architected with the cloud in mind or can be adapted to cloud principles. Second, to ensure app availability and infrastructure reliability after the migration, consider any potential issues that may cause downtime, including server performance, network design and configurations. Business continuity after a cloud migration is ensured through proper planning.

The great migration is here, and to ensure your company’s success in moving to the cloud, it is important to find a partner that has the technology, people, processes and security capabilities in place to handle any challenges. Your partner must be experienced in architecture and deployment across private, public and hybrid clouds. A successful migration will help you achieve cost savings and peace of mind while leveraging the benefits and innovation of the cloud.

About the Author

Jay Smith founded Webscale in 2012 and currently serves as the Chief Technology Officer of the Company. Jay received his Ph.D. in Electrical and Computer Engineering from Colorado State University in 2008. Jay has co-authored over 30 peer-reviewed articles in parallel and distributed computing systems.

In addition to his academic publications, while at IBM, Jay received over 20 patents and numerous corporate awards for the quality of those patents. Jay left IBM as a Master Inventor in 2008 to focus on High Performance Computing at DigitalGlobe. There, Jay pioneered the application of GPGPU processing within DigitalGlobe.

Monday, January 9, 2017

Forecast for 2017? Cloudy

by Lori MacVittie, Technology Evangelist, F5 Networks

In 2016, IT professionals saw major shifts in the cloud computing industry, from developing more sophisticated approaches to application delivery to discovering the vulnerabilities of connected IoT devices. Enterprises continue to face increasing and entirely new security threats and availability challenges as they migrate to private, public and multi-cloud systems, which is causing organizations to rethink their infrastructures. As we inch toward the end of the year, F5 Networks predicts the key changes we can expect to see in the cloud computing landscape in 2017.

IT's MVP of 2017? Cloud architects

With more enterprises adopting diverse cloud solutions, the role of cloud architects will become increasingly important. The IT professionals that will hold the most valuable positions in an IT organization are those with skills to define criteria for and manage complex cloud architectures.

Multi-cloud is the new normal in 2017

Over the next year, enterprises will continue to seek ways to avoid public cloud lock-in, relying on multi-cloud strategies to do so. They will aim to regain leverage over cloud providers, moving toward a model where they can pick and choose various services from multiple providers that are most optimal to their business needs.

Organizations will finally realize the full potential of the cloud

Companies are now understanding they can use the cloud for more than just finding efficiency and cost savings as part of their existing strategies and ways of doing business. 2017 will provide a tipping point for companies to invest in the cloud to enable entirely new scenarios, spurred by things like big data and machine learning that will transform how they do business in the future.

The increasing sophistication of cyber attacks will put more emphasis on private cloud

While enterprises trust public cloud providers to host many of their apps, the lack of visibility into the data generated by those apps causes concerns about security. This means more enterprises will look to private cloud solutions. Public cloud deployments won't be able to truly accelerate until companies feel comfortable enough with consistency of security policy and identity management.

More devices, more problems: In 2017, public cloud will become too expensive for IoT

Businesses typically think of public cloud as the cheaper business solution for their data center needs, yet they often forget that things like bandwidth and security services come at an extra cost. IoT devices generate vast amounts of data and as sensors are installed into more and more places, this data will continue to grow exponentially. This year, enterprises will put more IoT applications in their private clouds, that is, until public cloud providers develop economical solutions to manage the huge amounts of data these apps produce.
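A rough back-of-the-envelope calculation shows how quickly per-byte charges add up. Every number below (fleet size, sample size, reporting interval, per-GB rate) is an assumption for illustration only, not a quote from any provider.

```python
# Hypothetical fleet: 500,000 sensors, each sending a 4 KB reading every 10 seconds.
devices = 500_000
reading_bytes = 4_096
readings_per_day = 24 * 60 * 60 // 10

daily_gb = devices * reading_bytes * readings_per_day / 1e9
price_per_gb = 0.09                 # assumed per-GB data transfer/service charge, USD
monthly_cost = daily_gb * 30 * price_per_gb

print(f"~{daily_gb:,.0f} GB of sensor data generated per day")
print(f"~${monthly_cost:,.0f}/month in per-GB charges alone")
```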

The conversation around apps will finally go beyond the “where?”

IT professionals constantly underestimate the cost, time and pain of stretching solutions up or down the stack. We've seen this with OpenStack, and we'll see it with Docker. This year, cloud migration and containers will reach a point where customers won't be able to think only about where they want to move apps; they'll need to think about the identity tools needed for secure authentication and authorization, how to protect against data loss from microservices and SaaS apps, and how to collect and analyze data across all infrastructure services quickly.

A new standard for cloud providers is in motion and this year will see major developments in not only reconsidering the value of enterprise cloud, but also modifying cloud strategy to fully extend enterprise offerings and data security. Evaluating the risks of cloud migration and management has never been as vital to a company’s stability as it is now. Over the course of the year, IT leaders who embrace and adapt to these industry shifts will be the ones to reap the benefits of a secure, cost-effective and reliable cloud.

About the Author

Lori MacVittie is Technology Evangelist at F5 Networks.  She is a subject matter expert on emerging technology responsible for outbound evangelism across F5’s entire product suite. MacVittie has extensive development and technical architecture experience in both high-tech and enterprise organizations, in addition to network and systems administration expertise. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine where she evaluated and tested application-focused technologies including app security and encryption-related solutions. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University, and is an O’Reilly author.

MacVittie is a member of the Board of Regents for the DevOps Institute, and an Advisory Board Member for CloudNOW.

Friday, January 6, 2017

Wi-Fi Trends Take Center Stage in 2017

by Shane Buckley, CEO, Xirrus 

From an unprecedented DNS outage that temporarily paralyzed the entire internet, to the evolution of federated identity for simple, secure access to Wi-Fi and applications, 2016 had its mix of growing pains and innovative steps forward.

Here’s why 2017 will shape up into an interesting year for Wi-Fi technology.

IoT will create continued security issues on global networks

In 2017, the growth of IoT will put enormous pressure on Wi-Fi networks. While vendors must address the complexity of onboarding these devices onto their networks, security can't be left behind. The proliferation of IoT devices will push high density into almost all locations, from coffee shops to living rooms, prompting more performance and security concerns. Whether it is Wi-Fi connected alarms or smart refrigerators, the security of our homes will be scrutinized and will become a key concern in 2017. Mass production of IoT devices will make them more susceptible to hacking, as many will not be equipped with proper built-in security.

The recent IoT-based attack on DNS provider Dyn opened the floodgates, and estimates show the IoT market reaching 10 billion devices by 2020. The event foreshadows the power hackers hold when they invade these IoT systems. Taking down a significant portion of the internet grows more detrimental, yet all too plausible, these days. Because of increased security concerns, vendors will equip devices with the ability to connect to the IoT server only over pre-designed ports and protocols. If IoT vendors don't start putting security at the forefront of product development, we can only expect more large-scale cyberattacks in 2017.
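One way to read "connect to the IoT server only over pre-designed ports and protocols" is a deny-by-default egress allowlist enforced on the device or gateway. The sketch below shows the idea with a hypothetical allowlist; a real product would enforce this in firmware or in the network rather than in application Python.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    host: str
    port: int
    proto: str

# Hypothetical pre-designed destinations baked in at manufacture time.
ALLOWED = {
    Rule("telemetry.vendor-iot.example", 8883, "tcp"),   # MQTT over TLS
    Rule("firmware.vendor-iot.example", 443, "tcp"),     # signed firmware updates
}

def egress_permitted(host: str, port: int, proto: str) -> bool:
    """Deny by default: the device may only talk to its pre-designed endpoints."""
    return Rule(host, port, proto) in ALLOWED

assert egress_permitted("telemetry.vendor-iot.example", 8883, "tcp")
assert not egress_permitted("198.51.100.23", 23, "tcp")   # telnet probe: blocked
```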

LTE networks won’t impact Wi-Fi usage

Don't expect LTE networks to replace Wi-Fi. The cost of deploying LTE is roughly ten times greater, and LTE is less adaptable to indoor environments than Wi-Fi. Wi-Fi will remain the lowest-cost technology with similar or superior performance to LTE when deployed properly, and therefore will not be replaced by it. When people have access to Wi-Fi, they'll connect; data plan limitations remain too common.

Additionally, the FCC and other international government agencies have opened up spectrum in the 5 GHz band for free, unlicensed Wi-Fi access. But we don't want carriers grabbing free spectrum and charging us for every byte we send, now do we?

LTE and Wi-Fi will co-exist as they do today, with LTE working well outdoors and well-designed Wi-Fi working consistently throughout indoor spaces.

The push toward federated identity will continue in 2017

Today there is a vast number of disparate Wi-Fi networks, all with different authentication requirements. This marks an opportunity for Wi-Fi vendors. In the coming year, we will see federated identity become a primary differentiator. By implementing federated identity, vendors simplify and secure the login process: consumers can auto-connect to any public Wi-Fi network with their existing credentials, whether Google, Microsoft or Facebook, giving them a seamless onboarding experience. It's the next step for Single Sign-On (SSO), and one that will set Wi-Fi vendors apart in 2017.
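Conceptually, federated onboarding means the Wi-Fi access controller trusts an identity assertion issued elsewhere instead of keeping its own credential store. The toy sketch below illustrates only that trust check: an HMAC-signed assertion stands in for what would really be an OAuth/OpenID Connect or Passpoint exchange with Google, Microsoft or Facebook, and the names and shared secret are hypothetical.

```python
import hashlib
import hmac
import json
import time

# In a real federated deployment the controller validates the identity provider's
# signature (e.g. a JWT); a shared-secret HMAC stands in for that here.
IDP_SECRET = b"hypothetical-idp-signing-key"

def issue_assertion(user: str, idp: str, ttl_s: int = 300) -> str:
    claims = json.dumps({"sub": user, "idp": idp, "exp": time.time() + ttl_s})
    sig = hmac.new(IDP_SECRET, claims.encode(), hashlib.sha256).hexdigest()
    return f"{claims}|{sig}"

def grant_wifi_access(assertion: str) -> bool:
    """Controller-side check: valid signature and not expired means the client gets on."""
    claims, sig = assertion.rsplit("|", 1)
    expected = hmac.new(IDP_SECRET, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(claims)["exp"] > time.time()

token = issue_assertion("alice@example.com", "google")
print("access granted" if grant_wifi_access(token) else "access denied")
```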

This coming year, the repercussions of IoT, coexistence of LTE and Wi-Fi, and demand for simple, secure access to Wi-Fi, will take center stage. The onus falls on company leaders, who must adapt their business strategies so they can keep pace with the fast and ever-changing Wi-Fi landscape. 2017 will have plenty in store.

About the Author

Shane Buckley is CEO of Xirrus. Most recently, Mr. Buckley was General Manager and Senior Vice President at NETGEAR, where he led NETGEAR's commercial business unit to 50 percent revenue growth over two years, reaching $330 million in 2011, and played a prime role in growing corporate revenues over 30 percent. Prior to that, Mr. Buckley was President & CEO of Rohati Systems, a leader in cloud-based access management solutions, and Chief Operating Officer of Nevis Networks, a leader in secure switching and access control. He has also held the positions of Vice President WW Enterprise at Juniper Networks, President International at Peribit Networks, a leader in WAN optimization, and EMEA Vice President at 3Com Corp. Mr. Buckley is a graduate in engineering of the Cork Institute of Technology in Ireland.

Sunday, December 18, 2016

Perspectives 2017: Financial Agility for Digital Transformation

by Andrew Blacklock, Senior Director, Strategy and Products, Cisco Capital

Ten years ago, companies like Uber and Airbnb were ideas waiting for technology to catch up. Now, these two brands represent a shift in the global economy in what’s known as digital transformation. This evolution towards digital-everything is constantly accelerating, leaving non-digital companies scrambling for a means to kickstart their digitization projects.

According to Gartner, there are 125,000 enterprises in the U.S. alone that are currently launching digital transformation projects. These companies are of all sizes, from nimble startups to global conglomerates. Despite the strong drive to a digital future, 40% of businesses will be unsuccessful in their digital transformation, according to Cisco’s Digital Vortex study.

Many attribute the difficulties associated with the digital transition to the significant costs of restructuring an organization’s technological backbone. Because of these challenges, many companies opt for an agile approach to financial restructuring.

Financial agility allows companies to evolve and meet the rapidly changing demands of digital business through liquid, scalable options that won’t break the bank. While it is not always possible to predict changes in the business environment, agile financing allows companies to acquire the proper technology and tools necessary to plan, work and expand their businesses.

Financial agility isn’t just another buzzword – it’s a characteristic that organizations of all sizes in all industries need to champion in order to drive efficiencies and competitive advantages. It’s a way that companies can acquire the technologies needed to shift their business without having to “go all in.” This allows companies to avoid large up-front capital investment, help with cash flow by spreading costs over time and preserve existing sources of capital to allocate to other areas of the transformation.

Organizations now need to decide how they can best adjust to the transformation and transition for the next stage of digital business. With financial options that enable organizations to acquire technology and scale quickly, companies can pivot with agility to meet the constantly-evolving demands of our digital age.

Looking at the bigger picture, financial agility is a crucial piece of an organization’s overall digital transformation puzzle. While the digital landscape might be constantly changing, flexible financing helps set an organization up for a successful transformation to the future of digital business.

About the Author

Andrew Blacklock is Senior Director, Strategy and Financial Product Development at Cisco Capital. As director of strategy & business operations, Andrew is responsible for strategy, program management and business operations. He has been with Cisco Capital for 17 years with more than 20 years of experience in captive financing. He is a graduate of Michigan State University and the Thunderbird School of Global Management.

Wednesday, December 14, 2016

Ten Cybersecurity Predictions for 2017

by Dr. Chase Cunningham, ECSA, LPT 
Director of Cyber Operations, A10 Networks 

The cyber landscape changes dramatically year after year. If you blink, you may miss something; whether that’s a noteworthy hack, a new attack vector or new solutions to protect your business. Sound cyber security means trying to stay one step ahead of threat actors. Before the end of 2016 comes around, I wanted to grab my crystal ball and take my best guess at what will be the big story lines in cyber security in 2017.

1. IoT continues to pose a major threat. In late 2016, all eyes were on IoT-borne attacks. Threat actors were using Internet of Things devices to build botnets to launch massive distributed denial of service (DDoS) attacks. In two instances, these botnets were built from unsecured "smart" cameras. As IoT devices proliferate and everything gets a Web connection (refrigerators, medical devices, cameras, cars, tires, you name it), this problem will continue to grow unless proper precautions, such as two-factor authentication and strong password protection, are taken.

Device manufacturers must also change their behavior. They must scrap default passwords and either assign unique credentials to each device or require the end user to apply modern password configuration techniques during setup.
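A minimal sketch of the "unique credentials per device" idea: at manufacture time each unit gets its own randomly generated secret instead of a shared default password. The provisioning flow and field names below are hypothetical.

```python
import secrets
import uuid

def provision_device() -> dict:
    """Generate per-unit credentials at manufacture time instead of a shared
    default password (values would be burned into the device's secure storage)."""
    return {
        "device_id": str(uuid.uuid4()),
        "initial_password": secrets.token_urlsafe(16),   # unique per unit
        "must_change_on_first_boot": True,               # force user setup
    }

batch = [provision_device() for _ in range(3)]
assert len({d["initial_password"] for d in batch}) == len(batch)  # no two alike
for d in batch:
    print(d["device_id"], d["initial_password"])
```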

2. DDoS attacks get even bigger. We recently saw some of the largest DDoS attacks on record, in some instances topping 1 Tbps. That’s absolutely massive, and it shows no sign of slowing. Through 2015, the largest attacks on record were in the 65 Gbps range. Going into 2017, we can expect to see DDoS attacks grow in size, further fueling the need for solutions tailored to protect against and mitigate these colossal attacks.

3. Predictive analytics gains ground. Math, machine learning and artificial intelligence will be baked more deeply into security solutions. Security solutions will learn from the past and essentially predict attack vectors and behavior based on that historical data. This means security solutions will be able to more accurately and intelligently identify and predict attacks by using event data and marrying it to real-world attacks.
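As a simplified illustration of "learning from the past" to flag suspicious behavior, the sketch below builds a baseline from historical event counts and scores new observations against it. The data and threshold are invented, and production systems would use far richer features and models.

```python
from statistics import mean, stdev

# Hypothetical history: login failures per hour observed over recent days.
history = [3, 5, 4, 6, 2, 5, 4, 3, 6, 5, 4, 7]

baseline_mean = mean(history)
baseline_std = stdev(history)

def anomaly_score(observed: int) -> float:
    """Z-score of a new observation against the learned baseline."""
    return (observed - baseline_mean) / baseline_std

for hour, failures in [("14:00", 6), ("15:00", 48)]:
    score = anomaly_score(failures)
    flag = "ALERT" if score > 3 else "ok"
    print(f"{hour}: {failures} failures, score={score:.1f} -> {flag}")
```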

4. Attack attempts on industrial control systems. Similar to the IoT attacks, it's only a matter of time until we see major industrial control system (ICS) attacks. Attacks on e-commerce stores, social media platforms and others have become so commonplace that we've almost grown numb to them. Bad guys will move on to bigger targets, such as dams, water treatment facilities and other critical systems, to gain recognition.

5. Upstream providers become targets. The DDoS attack launched against DNS provider Dyn, which knocked out many major sites that use Dyn for DNS services, made headlines because it highlighted what can happen when threat actors target a service provider rather than just the end customers. These types of attacks on upstream providers cause a ripple effect that interrupts service not only for the provider, but for all of its customers and users. The attack on Dyn set a dangerous precedent and will likely be emulated several times over in the coming year.

6. Physical security grows in importance. Cyber security is just one part of the puzzle; strong physical security is also necessary. In 2017, companies will take notice and implement stronger physical security measures and policies to protect against internal threats, theft, and unwanted devices coming in and infecting systems.

7. Automobiles become a target. With autonomous vehicles on the way and the massive success of sophisticated electric cars like Teslas, the automobile industry will become a much more attractive target for attackers. Taking control of an automobile isn’t fantasy, and it could be a real threat next year.

8. Point solutions no longer do the job. The days of Frankensteining together a set of point security solutions must end. Instead of buying a single solution for each issue, businesses should look to security solutions from best-of-breed vendors and partnerships that address a number of security needs at once. Why have 12 solutions when you can have three? In 2017, your security footprint will get smaller, but it will be much more powerful.

9. The threat of ransomware grows. Ransomware was one of the fastest growing online threats in 2016, and it will become more serious and more frequent in 2017. We've seen businesses and individuals pay thousands of dollars to free their data from the grip of threat actors. The growth of ransomware means we must be more diligent in protecting against it, starting with not clicking on anything suspicious. Remember: if it sounds too good to be true, it probably is.

10. Security teams are 24/7. The days of security teams working 9-to-5 are long gone. Now is the dawn of the 24/7 security team. As more security solutions become services-based, consumers and businesses will demand the security teams and their vendors be available around the clock. While monitoring tools do some of the work, threats don’t stop just because it’s midnight, and security teams need to be ready to do battle all day, every day.

About the Author

Dr. Chase Cunningham (CPO USN Ret.) is A10 Networks' Director of Cyber Operations. He is an industry authority on advanced threat intelligence and cyberattack tactics. Cunningham is a former US Navy chief cryptologic technician who supported US Special Forces and Navy SEALs during three tours of Iraq. During this time, he also supported the NSA and acted as lead computer network exploitation expert for the US Joint Cryptologic Analysis Course. Prior to joining A10 Networks, Cunningham was the director of cyber threat research and innovation at Armor, a provider of cloud-based cyber defense solutions.


Tuesday, July 12, 2016

Blueprint: An Out-of-This-World Shift in Data Storage

by Scott Sobhani, CEO and co-founder, Cloud Constellation’s SpaceBelt

In light of ongoing, massive data breaches across all sectors and the consequent responsibility laid at executives’ and board members’ feet, the safe storing and transporting of sensitive data has become a critical priority. Cloud storage is a relatively new option, and both businesses and government entities have been flocking to it. Synergy Research Group reports that the worldwide cloud computing market grew 28 percent to $110B in revenues in 2015. In a similar vein, Technology Business Research projects that global public cloud revenue will increase from $80B in 2015 to $167B in 2020.

By moving to the cloud, organizations are using shared hosting facilities, which carries with it the risk of exposing critical data to surreptitious actors, not to mention the challenges associated with jurisdictional hazards. Organizations of all sizes are exposed to leaky Internet connections and leased lines. As the world shifts away from legacy systems to more agile software solutions, it is becoming clear that the time is now for a paradigm shift in how to store, access and archive sensitive data.

The Need for a New Storage Model

Enterprises and government agencies need a better way to securely store and transport their sensitive data. What if there was a way to bypass the Internet and leased lines entirely to mitigate exposure and secure sensitive data from hijacking, theft and espionage, while reducing costs both from an infrastructure and risk perspective?

Though it may sound like science fiction to some, such an option is possible, and it’s become necessary for two main reasons:

  • Threatening Clouds – Cloud environments currently run on hybrid public and private networks using IT controls that are not protective enough to stay ahead of real-time cyber security threats. Enterprise data is maliciously targeted, searchable or stolen. Sensitive data can be subjected to government agency monitoring and exposed to acts of industrial espionage through unauthorized access to enterprise computers, passwords and cloud storage on public and private networks.
  • Questions of Jurisdiction – Due to government regulations, critical information could be restricted or exposed, especially when it has regularly been replicated or backed up to an undesirable jurisdiction at a cloud service provider’s data center. Diplomatic privacy rules are under review by governments intent on restricting cross-jurisdictional access and transfer of the personal and corporate data belonging to their citizens. This has created the requirement for enterprises to operate separate data centers in each jurisdiction – financially prohibitive for many medium-sized enterprises.

Storage Among the Stars

What government and private organizations need is an independent cloud infrastructure platform, entirely isolating and protecting sensitive data from the outside world. A neutral, space-based cloud storage network could provide this. Enterprise data can be stored and distributed to a private data vault designed to enable secure cloud storage networking without any exposure to the Internet and/or leased lines. Resistant to natural disasters and force majeure events, its architecture would provide a truly revolutionary way of reliably and redundantly storing data, liberating organizations from risk of cyberattack, hijacking, theft, espionage, sabotage and jurisdictional exposures.

A storage solution of this type might at first seem prohibitively expensive, but it would cost the same as or less than terrestrial networks to build, operate and maintain. Further, it would serve as a key market differentiator for cloud service providers looking for solutions that physically protect their customers' critical information, because such a system would need to include its own telecom backbone infrastructure to be entirely secure. While that is extremely expensive to accomplish on the ground, it need not be the case when properly architected as a space-based storage platform.

Sooner than many might think, governments and enterprises will begin to use satellites for the centralized storage and distribution of sensitive or classified material, the storage and protection of video and audio feeds from authorized personnel in remote locations, or the distribution of video and audio gathered by drones.

Escaping Earth’s Orbit

Cyber criminals don’t seem to be slowing their assault on the network, which means data breaches of Earth-based storage solutions will continue. Organizations need to think outside the Cloud in order to keep their critical data secure, both while being stored and in transit. The technology exists today to make satellite storage a reality, and for those who are working hard to stay ahead of malicious actors, it can’t arrive soon enough.

About the author

Scott Sobhani, CEO and cofounder of Cloud Constellation Corporation and the SpaceBelt Information Ultra-Highway, is an experienced telecom executive with over 25 years in executive management positions, most recently as VP for business development and commercial affairs at the International Telecom Advisory Group (ITAG). Previous positions include CEO of TalkBox, VP & GM at Lockheed Martin, and VP, GM & senior economist at Hughes Electronics Corporation.

Mr. Sobhani was responsible for closing over $2.3 billion in competitive new business orders for satellite spacecraft systems, mobile network equipment and rocket launch vehicles. He co-authored "Sky Cloud Autonomous Electronic Data Storage and Information Delivery Network System", "Space-Based Electronic Data Storage and Network System" and "Intermediary Satellite Network for Cross-Strapping and Local Network Decongestion" (each of which is patent pending). He has an MBA from the University of Southern California and a bachelor's degree from the University of California, Los Angeles.





Thursday, June 30, 2016

Blueprint: LSO Hackathons Bring Open Standards, Open Source

Open standards and open source projects are both essential ingredients for advancing the cause of interoperable next-generation carrier networks.

When a standards developing organization (SDO), like MEF, creates standards, those written documents themselves aren’t the end goal. Sure, the specifications look good on paper, but it takes a lot of work to turn those words and diagrams into hardware, software and services. And if there are any ambiguities in those specifications, or misinterpretations by vendors building out their products and services, interoperability could be problematic at best.

By contrast, when an open-source project is formed, the team's job is obvious: to create software and solutions. All too often, though, the members of the project are focused on reaching a particular objective. In those cases they are working in a vacuum, and might write code that works great but can't be generalized to solve a broader problem. There, too, interoperability may be a huge issue.

The answer is clear: bring together SDOs and open-source teams to write open-source code that’s aligned with open specifications. That’s what is happening at the LSO (Lifecycle Service Orchestration) Hackathons hosted by MEF: open source teams come together to work on evolving specifications, and the end result is not only solid code but also effective feedback to MEF about its specs and architecture. Another benefit: networking experts from across the communications industry work together with software developers from the IT world face-to-face, fostering mutual understanding of the constraints of their peers in ways that lead to more effective interaction in their day jobs.


MEF recently completed its Euro16 LSO Hackathon held in Rome, Italy during April 27-29, 2016. This followed the debut LSO Hackathon at MEF’s GEN15 conference in Dallas in November 2015. (See “The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code,” published in Telecom Ramblings.)

“The Euro16 LSO Hackathon built on what we started in the first Hackathon at GEN15,” said Daniel Bar-Lev, Director of Certification and Strategic Programs, MEF and one of the architects of the LSO Hackathon series.

One big change: not everything had to be physically present in Rome, which expanded both the technology platform and the pool of participants. "We enabled work to be done remotely," said Bar-Lev. "While most of our participants were in Rome, we had people engaged from all over the United States. We also didn't need to bring the networking equipment to Rome. Most of it remained installed and configured in the San Francisco Bay Area. Instead of shipping racks of equipment, we set up remote access and were able to position the hardware and software in the optimal places to get development done."

Lifecycle Service Orchestration and the Third Network Vision

Why “Lifecycle Service Orchestration” for the MEF-hosted LSO Hackathons? Bar-Lev explained that it ties into MEF’s broad vision for Third Network services that combine the ubiquity and flexibility of the public Internet with the quality and assurance of private connectivity services such as CE 2.0.

"When we think of traditional CE 2.0 services, we tend to think of them as 'static,' often taking weeks or months to provision or change a service," said Bar-Lev. "With the Third Network vision, we are driving specifications for services like CE 2.0 that can be created and modified in minutes instead of months and also be orchestrated over multiple provider networks."

As Bar-Lev explained, the real work of MEF today is to formally define Third Network services and all the related services required to implement flexible inter-network communications. “End-to-end LSO is essential for that,” he continued, “along with SDN and NFV.”

That’s where open standards and open source projects converge, with MEF initiatives like OpenLSO (Open Lifecycle Service Orchestration) and OpenCS (Open Connectivity Services). “It’s all about creating and trying out building blocks, so we can give service providers reference designs from which they can develop their offerings more quickly. They don’t have to define those services themselves from scratch; rather they can access them at MEF, which gives them a valuable and time-saving starting point,” Bar-Lev said.

Indeed, the OpenLSO and OpenCS projects describe a wide range of L1-L7 services that service providers need in order to implement Third Network services. MEF is defining these services, and developers work on evolving elements of the reference designs during LSO Hackathons.

A Broad Array of Projects and Participants at Euro16 LSO Hackathon

According to MEF, the OpenLSO scenarios worked upon at Euro16 LSO Hackathon were OpenLSO Inter-Carrier Ordering and OpenLSO Service Function Chaining. The OpenCS use cases were OpenCS Packet WAN and OpenCS Data Center. The primary objectives of the Euro16 LSO Hackathon included:

  • Accelerate the development of comprehensive OpenLSO scenarios and OpenCS use cases as part of MEF's Open Initiative, for the benefit of the open source communities and the industry as a whole.
  • Provide feedback to ongoing MEF projects in support of MEF's Agile Standards Development approach to specification development.
  • Facilitate discussion, collaboration, and the development of ideas, sample code, and solutions that can be used for the benefit of service providers and technology providers.
  • Encourage interdepartmental collaboration and communications within MEF member companies, especially between BSS/OSS/service orchestration professionals and networking service/infrastructure professionals.

Strong Industry Participation at Euro16

Around 45 people participated in the Euro16 LSO Hackathon – the majority in Rome and the remainder being the AT&T Remote Team in Plano, Texas as well as other participants attending remotely from other parts of the United States.

“We brought people together with widely divergent backgrounds,” said MEF’s Bar-Lev. “We had software developers with no networking expertise, and network experts with no software skills. The core group worked in the same room in Rome for three days, with additional folks working independently and syncing up with the Rome teams when appropriate.”

The Euro16 LSO Hackathon included participants from Amartus, Amdocs, AT&T, CableLabs, CenturyLink, Ciena, Cisco, Edge Core Networks, Ericsson, Gigaspaces, HPE, Huawei, Infinera, Iometrix, Microsemi, NEC, Netcracker, NTT, ON.Lab, Telecom Italia Sparkle and ZTE. The whole process was managed by Bar-Lev and Charles Eckel, Open Source Developer Evangelist at Cisco DevNet.

“What is most important about the LSO Hackathon is that it takes the specifications that are being defined and transforms them into code”, said Eckel. “It moves that process forward dramatically. The way standards have traditionally been done is a very long process in which people spend months and sometimes years getting the details of documents figured out, and then it can turn out that the specification is almost non-implementable. With the LSO Hackathon we create code based on early versions of the specifications. This helps the process move forward because we identify what’s wrong, what’s missing, and what’s unclear, then we update the specs accordingly. This is an important reason for doing the LSO Hackathon.”

Eckel continued, "Equally important is the positive impact on the participating open source projects and open source communities. Usability issues and gaps in functionality are identified and addressed. The code implemented during the Hackathon is contributed back upstream, making those projects better suited to address the requirements mapped out by the specifications."

Dawn Kaplan, Solution Architect, Ericsson, added: “The Euro16 LSO Hackathon aimed to solve a very crucial inter-carrier business problem that will change our industry when solved.  The ordering project in the LSO Hackathon is focused on implementing the inter-carrier ordering process between service providers. At the Hackathon we built upon the defined use case, information model, and a sample API to enable service providers to order from one another in a completely automated fashion. With the code and practices developed at the Euro16 LSO Hackathon we will come much closer to tackling this very real issue.”
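To give a feel for what fully automated inter-carrier ordering looks like in code, here is a hypothetical sketch that builds an order payload and prepares it for submission to a partner's ordering endpoint. The field names, URL and payload structure are invented for illustration and are not the MEF-defined information model or API.

```python
import json
from urllib import request

# Hypothetical order: buyer requests an access circuit from a partner provider.
order = {
    "externalId": "PO-2016-0042",
    "productType": "carrier-ethernet-access",
    "siteA": {"address": "Via Nazionale 1, Rome", "bandwidthMbps": 100},
    "siteZ": {"interconnect": "partner-pop-fra-1"},
    "requestedCompletionDate": "2016-07-15",
}

req = request.Request(
    url="https://partner.example.net/lso/ordering/v1/orders",  # hypothetical endpoint
    data=json.dumps(order).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.method, req.full_url)
print(req.data.decode())
# request.urlopen(req) would submit the order; omitted here since the endpoint is fictional.
```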

"We are a new participant in the LSO Hackathon and find this initiative very important on a community level," explained Shay Naeh, Solution Architect for NFV/SDN Projects at Cloudify by GigaSpaces. "Through the Euro16 LSO Hackathon, we are learning how to contribute our own open source code solutions and combine them alongside closed source solutions to make the whole ecosystem work. Open source is very important to us, and we are excited to see telcos coming around to the open source model as well. By having a close relationship with open source communities, the telcos influence those projects to take into account their operational requirements while reducing the chances of being locked into relationships with specific technology providers. You can mix and match vendor components and avoid having a vertical or silo solution. What is very important to telcos is to introduce new business services with a click of a button and this is definitely achievable."

MEF Euro16 LSO Hackathon Report

MEF has published a new report spotlighting recent advances in development of LSO capabilities and APIs that are key to enabling agile, assured, and orchestrated Third Network services over multiple provider networks. The report describes objectives, achievements, and recommendations from multiple teams of professionals who participated in the Euro16 LSO Hackathon.

Coming Next:  MEF16 LSO Hackathon, November 2016

The next MEF LSO Hackathon will be held at the upcoming MEF16 global networking conference in Baltimore, November 7-10, 2016. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases.

"We will have different teams working on Third Network services," said MEF's Bar-Lev. "The work will accelerate the delivery of descriptions of how to create Third Network services, such as Layer 2 and Layer 3 services. Participants will get hands-on experience and involvement in identifying the different pieces of technology needed to develop those projects."

About the Author
Alan Zeichick is founder, president and principal analyst of Camden Associates.
Follow Alan on Twitter @zeichick.

Sunday, June 26, 2016

Blueprint: Why SD-WAN Cannot Solve for the MPLS Conundrum

by Gur Shatz, Co-Founder and CTO, Cato Networks

Software-defined infrastructure has firmly gained traction in public and private data centers and clouds, because of its game-changing nature: It has virtualized the server, giving it scalable capacity on demand at a fraction of the cost of its hardware counterpart. And what software-defined did for the server and storage markets, it is bound to do for the network, too.

Initial advances in software-defined networking include SD-WAN, which is poised to grow from $225 million in 2015 to $6 billion by 2020, according to IDC. Yet, SD-WAN has not fully cracked the network performance and security conundrum. SD-WAN still relies on MPLS links to ensure low-latency connectivity, and the use of the Internet is mostly for WAN backhauling and doesn’t fully address the need for secure Internet and cloud access.  This points to the need for a new software-defined approach that firmly binds network and security as one, and which frees up valuable networking resources.

Why SD-WAN Is Not Enough

The promise of SD-WAN lies in providing standard, low-cost Internet connections to supplement managed, low-latency, yet expensive MPLS with its guaranteed capacity. However, a survey of network security professionals found that one-third cited latency between locations as their biggest network security challenge, and a quarter cited direct Internet access from remote locations.[1]

SD-WAN, while it takes out some of the network performance issues and costs, cannot fully provide the game-changing impact of true software-defined infrastructure; it is primarily a networking technology, not a security solution. For SD-WAN to be a viable solution for today's hybrid networks, it needs to be secured in a way MPLS is not. Because MPLS is a private network, companies did not need to encrypt MPLS traffic; SD-WAN cannot forego encryption, which is a new problem for most network teams. Furthermore, SD-WAN has no impact on enabling direct internet access, for example at the branch level, without adding third-party security solutions. It requires investment in core security capabilities, such as app control, URL filtering, next-generation firewalls and cloud access control (among others), all of which add costs and management complexity right back into the enterprise.

SD-WAN++

SD-WAN tackles the legacy enterprise WAN: branches and data centers. It adds Internet links to the MPLS-based WAN, but must continue to rely on MPLS for low-latency connectivity, which limits its impact. A contemporary WAN design should integrate, in addition to physical locations, mobile users and public cloud infrastructure. It should enable low-latency connectivity on a global basis to ensure a consistent user experience, even where MPLS is not used. And it should include an integrated security stack that protects WAN and Internet-bound traffic to public cloud applications (SaaS) for all network users. To truly evolve the network, today's IT leaders need a simple, scalable and secure solution that binds a global network with built-in security. Such a unified, software-defined solution could enforce policies for all users and locations, with access to all data, in a way that reduces complexity and management overhead.
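A minimal sketch of what "one policy for all users and locations" could look like in such a unified, software-defined service. The rule set, attributes and actions below are hypothetical and stand in for a far richer policy engine.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    user: str
    location: str          # "branch", "mobile", "datacenter", ...
    destination: str       # "salesforce.com", "internal-erp", ...
    app_category: str      # "saas", "internal", "web"

# One ordered rule table applied everywhere, instead of per-appliance configurations.
POLICY = [
    (lambda f: f.app_category == "internal" and f.location == "mobile", "require_vpn+mfa"),
    (lambda f: f.app_category == "saas", "inspect_tls+allow"),
    (lambda f: f.destination.endswith(".example-blocked.com"), "block"),
]

def enforce(flow: Flow) -> str:
    """Evaluate the same ordered rule set for every user, site and cloud."""
    for matches, action in POLICY:
        if matches(flow):
            return action
    return "allow"

print(enforce(Flow("bob", "mobile", "internal-erp", "internal")))   # require_vpn+mfa
print(enforce(Flow("ann", "branch", "salesforce.com", "saas")))     # inspect_tls+allow
```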

Effectively, such a system becomes the real solution to the MPLS conundrum: it optimizes performance/latency and enables enterprise-grade security, creating the true hybrid network of the future - today. 

About the Author

Gur is co-founder and CTO of Cato Networks. Prior to Cato Networks, he was the co-founder and CEO of Incapsula Inc., a cloud-based web application security and acceleration company. Before Incapsula, Gur was Director of Product Development, Vice President of Engineering and Vice President of Products at Imperva, a web application security and data security company.
Gur holds a BSc in Computer Science from Tel Aviv College.

About Cato Networks

Cato Networks is rethinking network security from the ground up and into the cloud. Cato has developed a new Network Security as a Service (NSaaS) platform that is changing the way network security is delivered, managed, and evolved for the distributed, cloud-centric, and mobile-first enterprise. Based in Tel Aviv, Israel, Cato Networks was founded in 2015 by cybersecurity luminary Shlomo Kramer, who previously cofounded Check Point Software Technologies and Imperva, and Gur Shatz, who previously cofounded Incapsula. Cato Networks is backed by Aspect Ventures and U.S. Venture Partners. For more information, visit http://www.catonetworks.com/.




[1] Based on feedback from 70+ network professionals who took part in “MPLS, SD-WAN and Cloud Networks: The path to a better, secure and more affordable WAN," May 18, 2016.


Sunday, May 22, 2016

Blueprint: Evolving Security for Evolving Threats in Payments

by Jose Diaz, Director, Payment Strategy, Thales e-Security

At this point in the history of cyber security, it seems like the eternal optimism of “it couldn’t happen to me” is the only reason consumers by the millions haven’t abandoned the digital life and gone back to cash-only transactions. Huge-scale data breaches persist, snatching more and more personal data. Retailers certainly want to protect their customers and their reputation, but are they really doing all they can?

There’s a reason why we are still experiencing huge breaches, and it’s not a lack of technology. Solutions that provide increased protection for cardholder data, while maintaining the highest levels of performance—up to millions of transactions per day—were defined and developed after the highly publicized breaches in 2009. The Payment Card Industry (PCI) released solution requirements for Point-to-Point Encryption to assist merchants in protecting cardholder data and reducing the scope of their environment for PCI DSS assessments. However, these approaches still seem to be a concept rather than common practice.

This is a critical issue in need of a thorough solution. Reducing the risk of payment data breaches requires encrypting sensitive data at the point of swipe (or dip in the case of EMV cards) in the payment device and only decrypting it at the processor. Direct attacks on devices in the payment acceptance process have become increasingly common and highly sophisticated, but strongly encrypted cardholder data is useless to cyber criminals. To understand the approaches, and the benefits, of implementing sensitive data protection, let’s focus on two key areas: traditional payment acceptance terminals and mobile.

Accepting Payment at the Terminal

Transaction speed is important to both customers and merchants; electronic POS solution providers need to maximize security for payment card transactions without slowing performance. Their solutions need to encrypt cardholder data from the precise moment of acceptance on through to the point of processing, where transactions can be decrypted and sent to the payment networks. By deploying point-to-point encryption (P2PE), intermediate systems that sit between the POI (point of interaction – the point of swipe) device and the point of decryption at the processor are removed from the scope of most PCI-DSS compliance requirements, since the sensitive data passing through them is encrypted.

Not all encryption is the same. There's a difference between encrypting the data in the point-of-swipe device and encrypting the data in the POS system, more specifically the retail terminal. POI devices are subject to a PCI certification process, thereby providing high-assurance cryptography and key management functionality. Retail terminals, on the other hand, are typically PC- or tablet-based devices that in most cases only offer software-based encryption and do not have the security controls of PCI-certified devices.

Data decryption takes place at the point of processing using HSMs for secure key management, as required by PCI-P2PE requirements. HSMs perform secure key exchanges and, in most applications, key management that produces a unique key to protect each and every payment transaction. Taking advantage of these security capabilities, solution providers can build high-capacity and redundant secure systems so that multiple servers and multiple HSMs, deployed at multiple data centers, can combine seamlessly to service high transaction volumes with automated load balancing and failover.
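To illustrate the "unique key per transaction" idea in simplified form, the sketch below derives a fresh key for each transaction from an injected base key and encrypts the cardholder data with it. This is an illustration using the Python `cryptography` package, not actual DUKPT or a PCI-P2PE implementation, and the key material shown is obviously fake.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

BASE_KEY = bytes(32)  # stand-in for the key injected into the POI device by the HSM

def per_transaction_key(counter: int) -> bytes:
    """Derive a one-off key per transaction (real P2PE devices use DUKPT)."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=f"txn-{counter}".encode(),
    ).derive(BASE_KEY)

def encrypt_at_poi(cardholder_data: bytes, counter: int) -> tuple[bytes, bytes]:
    """Encrypt at the point of swipe; only the processor side can re-derive the key."""
    key = per_transaction_key(counter)
    nonce = os.urandom(12)
    return nonce, AESGCM(key).encrypt(nonce, cardholder_data, None)

nonce, blob = encrypt_at_poi(b"4111111111111111=2212101", counter=1)
print(f"ciphertext crosses the POS and network opaque to malware: {blob.hex()[:32]}...")
```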

With a distinctive combination of strong security and risk mitigation against malicious capture of cardholder data, Verifone—a provider of secure payment acceptance solutions—is one example of a P2PE solution provider that follows this approach. At the same time, this approach ensures performance and availability for transactions – a win-win for retailers. The Verifone VeriShield solution was specifically designed to enable retailers to implement Best Practices for Data Field Encryption, providing security that helps reduce the scope of PCI-DSS audits.

Accepting Payments on the Fly

Thanks to the mobile revolution, smaller merchants can now afford on-the-go payment acceptance. However, with the increasing availability of mobile payment acceptance options, small merchants and mobile businesses need to take a moment to consider the security of their customers’ payment data.

Mobile devices equipped with an economical card reader “dongle” enable mobile point-of-sale, or mPOS. A mobile phone or tablet can accept payments from both EMV and magnetic stripe payment cards in this way. As with traditional POS, it is critical that the card reader encrypt the sensitive payment data it receives.

It can be challenging to secure mPOS solutions. CreditCall and ROYAL GATE, two payment services providers, overcame this challenge by using point-to-point encryption (P2PE) to protect the sensitive payment data from their mobile acceptance offerings. They integrated HSMs with their processing application as a critical component to manage keys and secure customer data following PCI P2PE solution requirements. The use of HSMs enables them to defend against external data extraction threats and to protect against compromise by a malicious insider.

Securing Payment Credentials

There are several options on the market that allow mobile devices to make payments, but Host Card Emulation (HCE) has distinct market advantages. Because the security of the payment data and transaction is not dependent on hardware embedded in the phone, it has much broader applicability; any smartphone could use the HCE approach by loading payment credentials on the device and using it in place of a physical card.

Mobile devices have an NFC (near-field communication) controller, which HCE-based applications use to interact with a contactless payment terminal. However, since the application cannot rely on secure hardware embedded in the phone to protect the payment credentials, alternative approaches to protecting sensitive data and securing transactions have to be used. These include tokenizing payment credential numbers as well as actively managing and rotating the keys used for transaction authorization. This lets issuers manage the risk introduced by holding payment credential data in a less secure mobile environment.
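
A rough sketch of that model, with hypothetical names and standard-library primitives only: the phone stores a surrogate token instead of the real card number, plus a short-lived limited-use key issued by the issuer’s HSM-backed service, and the transaction cryptogram is computed with that key, so a stolen token or an expired key has little value.

```python
# Hypothetical sketch of HCE-style protection: a token instead of the real PAN,
# plus a rotating limited-use key (LUK) for computing transaction cryptograms.
import hashlib
import hmac
import secrets
import time

class IssuerService:
    """Stands in for the issuer's HSM-backed tokenization and key service."""

    def __init__(self):
        self.vault = {}                       # token -> real PAN, kept server-side only

    def provision(self, pan: str):
        token = secrets.token_hex(8)          # surrogate value stored on the phone
        self.vault[token] = pan
        return token, self.issue_luk()

    def issue_luk(self):
        # Limited-use key: rotated frequently and/or capped to a few transactions.
        return {"key": secrets.token_bytes(32), "expires": time.time() + 3600}

def sign_transaction(luk: dict, token: str, amount_cents: int) -> str:
    """Computed on the phone: a cryptogram binding the token to this transaction."""
    message = f"{token}:{amount_cents}".encode()
    return hmac.new(luk["key"], message, hashlib.sha256).hexdigest()

issuer = IssuerService()
token, luk = issuer.provision("4111111111111111")
print(sign_transaction(luk, token, 1999))     # sent with the contactless transaction
```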

The approaches that protect this data are based on HSMs in the issuer environment, which not only create the rotating keys but also send them securely to the mobile device. The HSMs are also a critical part of the tokenization and transaction authorization process. The HCE infrastructure does not introduce any new security processes or procedures for retailers and processors; it simply enables issuers to combine their existing strong security practices, spanning key generation and distribution, data encryption and message authentication, into a cohesive offering that enables payments with mobile devices.

Protecting What’s Yours

The sophistication and determination of malicious actors have built a global, multi-billion-dollar industry. The real possibility of huge financial reward spurs cyber criminals to evolve their methods, including attacks on the payment devices themselves. But the reality is that retailers and their acquirers can reduce their risk, and their fear, if the sensitive cardholder data in their possession is useless to hackers. This is why P2PE is so critical in the fight to reduce fraud.

In addition to using P2PE and PCI-certified devices to keep card data safe, merchants are using HSMs in the processing environment to protect critical data-protection and transaction keys. These steps also create a trusted environment that complies with PCI requirements and reduces risk for payment acceptance and HCE-based credentials. Following these best practices will help merchants and their acquirers safeguard the lifeblood of their business, protecting their bottom line and their good name.

About the Author

Jose Diaz has worked with the Thales group for over 35 years and is currently responsible for payment product strategy at Thales e-Security. He has worked with payment application providers in developing solutions and roadmaps for securing the payments ecosystem. During his tenure at Thales, Jose has worked in Product Development, Systems Design, Sales in Latin America and the Caribbean, as well as Business Development.


Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.  See our guidelines.

Wednesday, May 11, 2016

Blueprint: The Rise of the Network Monitoring Engineer

by Patrick Hubbard, Head Geek, SolarWinds

Today’s network engineers face tremendous complexity, in part due to increasing demand, but also because of the diversity of protocols and the high number of multi-tier applications that are often outside of their control. Combined with improved automated failover, this has made it impossible (except in the largest of organizations) for network administrators to be highly specialized, meaning the days of being a router jockey are gone.

Network administrators today are stuck between everyday tasks of change management, hardware refreshes and strategic changes required to support new business initiatives, and the on-demand troubleshooting work they are asked to do. On top of this, automation encourages IT managers to streamline their teams, so as network complexity increases, paradoxically the number of people available to help address these tasks is actually decreasing.

But this doesn’t mean the future of network administration is bleak. There are a number of ways network engineers can improve their skills and remain relevant to their organizations, especially at a time when hybrid IT is taking center stage. According to the most recent SolarWinds IT Trends Report, just nine percent of North American organizations have not migrated at least some infrastructure to the cloud and nearly all IT professionals say adopting cloud technologies is important to their organizations’ long-term business success.

Networking in a Hybrid Environment

In such complex environments, network administrators need the ability to view performance, traffic and configuration details of devices both within and outside their traditional networks. However, hybrid IT means network administrators have far less visibility into, and often an outright lack of control over, the cloud resources they need to manage and monitor.

Because end users expect IT to assure service delivery whether the service runs on-premises or in the cloud, this can be frustrating. It’s exacerbated by cloud service providers whose bundled monitoring and management tools are proprietary rather than vendor-agnostic. Those tools actually create extra work for administrators, who must flip between multiple dashboards without the benefit of a holistic view that would allow them to troubleshoot quickly.

Often, such tools also spew alerts without indicating what might be causing the issue. For example, for an application running in the data center, network administrators have visibility into every network layer required to host the hypervisor. When that application moves into the cloud, however, they lose the administrative authority needed to monitor it easily, and they require a new way to monitor if they are to keep the same rich visibility they had on-premises.
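
In practice, that new way usually means polling the provider’s metrics API instead of the device itself. As a hedged example, assuming an AWS workload and the boto3 SDK, a monitoring script might pull network throughput for an instance the administrator can no longer reach at the hypervisor layer (the region and instance ID below are placeholders):

```python
# Sketch: pulling network metrics for a cloud instance through the provider's
# API (CloudWatch via boto3 here), since there is no hypervisor-level access.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")     # placeholder region

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="NetworkIn",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Average"]), "bytes per 5-minute period")
```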

Administrators still need to monitor interface performance, as well as identify service delivery issues along the path connecting the service to the end user. New technologies have become available that reveal the physical connectivity of the service components and the end users who might be experiencing poor performance.
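
For on-premises gear, interface counters are still typically pulled over SNMP. A minimal sketch, assuming the pysnmp library and SNMPv2c (the host, community string and interface index below are placeholders), samples ifInOctets twice and turns the delta into a rough throughput figure:

```python
# Sketch: estimating inbound throughput on one interface by sampling the
# IF-MIB ifInOctets counter twice over SNMPv2c (pysnmp).
import time

from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, getCmd)

OID_IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10.1"    # ifInOctets for ifIndex 1 (placeholder)

def read_counter(host: str) -> int:
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),               # placeholder community string
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(OID_IF_IN_OCTETS)),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status.prettyPrint()))
    return int(var_binds[0][1])

first = read_counter("192.0.2.1")              # placeholder router address
time.sleep(60)
second = read_counter("192.0.2.1")
print(f"~{(second - first) * 8 / 60:.0f} bit/s inbound")   # ignores 32-bit counter wrap
```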

So while using disparate vendor-provided tools may be cost-effective in the short term, having a large number of disparate solutions is its own kind of trouble—it doesn’t lend itself to a coherent, integrated alerting and notification strategy that allows administrators to stay on top of performance, ultimately costing time and money in the long term.
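
One pragmatic answer to that dashboard sprawl is a thin normalization layer that folds alerts from each tool into a single schema and a single queue. The sketch below is purely illustrative; the field names and tool payloads are assumptions, not any particular vendor’s API.

```python
# Hypothetical sketch: normalizing alerts from disparate tools into one schema
# so a single notification and escalation policy can be applied.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # which tool raised it
    resource: str    # device, interface, VM, service, ...
    severity: str    # normalized: "info" | "warning" | "critical"
    message: str

def from_cloud_alarm(raw: dict) -> Alert:
    # Field names here are assumptions standing in for a cloud provider's payload.
    severity = "critical" if raw["StateValue"] == "ALARM" else "info"
    return Alert("cloud", raw["AlarmName"], severity, raw.get("StateReason", ""))

def from_snmp_trap(raw: dict) -> Alert:
    return Alert("snmp", raw["device"], raw["level"], raw["text"])

queue = [
    from_cloud_alarm({"AlarmName": "edge-vpn-1", "StateValue": "ALARM",
                      "StateReason": "packet loss > 2%"}),
    from_snmp_trap({"device": "core-sw-2", "level": "warning",
                    "text": "ifOperStatus down on Gi0/1"}),
]
for alert in sorted(queue, key=lambda a: a.severity != "critical"):
    print(f"[{alert.severity}] {alert.source}/{alert.resource}: {alert.message}")
```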

The Rise of the Dedicated Monitoring Engineer

Hybrid IT is drawing attention to the need for a new approach to monitoring and management essentials. Enter monitoring as a discipline, which differs from simple monitoring in that it is the defined role of one or more individuals within an organization. A designated monitoring engineer is able to work across systems and environments, removing network and data center silos and gaining the ability to turn the data points generated by monitoring tools into actionable insights for the business.

Hiring a monitoring engineer, or better yet a team of monitoring engineers, should be considered a critical investment in services and business success. It’s one thing to say that companies need a certain headcount to maintain the business and keep the lights on; it’s another thing entirely in IT, which is largely viewed as a cost center and where most departments exceed their budgets every year. However, enlightened companies are beginning to view monitoring as a cost-effective way to achieve greater IT ROI. Instead of purchasing ad hoc tools to keep an eye on their technology, progressive companies have brought discipline and structure to their monitoring practices through staffing and resources. For the right organization, this would be a team of monitoring engineers, each with their own specialization (network monitoring, systems monitoring, etc.) but working in lockstep from a “single point of truth” on overall infrastructure performance.

How to Make the Business Case for a Monitoring Engineer

With accelerating IT complexity in mind, it’s important that IT management begin to instill monitoring-as-a-discipline principles within the business. IT professionals are already strapped for time and resources, so management needs to step in to evangelize internally, offer examples and best practices, and put budget for new tools and technologies behind these efforts in order to realize the full benefit of monitoring as a discipline. Management must make a strong business case that the monitoring engineer or engineers will deliver ROI not only for the IT department, but for the organization as a whole.

Critical Monitoring Engineer Skills

Although monitoring engineers must possess basic network engineering skills, there are a few particular skillsets in addition that are necessary to be truly successful in the role. These include:
  • A programmer’s eye towards customization and a willingness to improve – Often we buy technology and use it right out of the box, but the most successful monitoring engineers keep turning their eye towards customizing and improving it.
  • An analyst’s eye for data – Instead of simply poring over endless numbers in a spreadsheet, a monitoring engineer should be able to take a step back, look at the bigger picture and ask themselves what their “customers” will be using their monitoring reports for and how they should be visualized. And they must remember, less is more. 
  • A commitment to ongoing study – On top of cultivating skills through experience, the best way to hone them is to learn on the fly and to spend more than a few lunch breaks and evenings testing new technologies and processes in a lab environment.
Our networks are growing in complexity as they become further tied to all elements of the IT environment, extending all the way to the cloud. IT management should seize this opportunity to extract as much value as possible from existing technology by hiring a monitoring engineer, or a monitoring team with at least one individual focused on the network, who works in tandem with existing teams to holistically monitor the performance of the entire IT infrastructure. Whether on-premises or in the cloud, these resources maintain an eye towards improving existing systems, delivering promised ROI and driving repeatable progress for the business.

About the Author

Patrick Hubbard is a head geek and senior technical product marketing manager at SolarWinds. With 20 years of technical expertise and IT customer perspective, his networking management experience includes work with campus, data center, storage networks, VoIP and virtualization, with a focus on application and service delivery in both Fortune 500 companies and startups in high tech, transportation, financial services and telecom industries.

About SolarWinds

SolarWinds (NYSE: SWI) provides powerful and affordable hybrid IT infrastructure management software to customers worldwide from Fortune 500® enterprises to small businesses, government agencies and educational institutions. We are committed to focusing exclusively on IT Pros, and strive to eliminate the complexity that they have been forced to accept from traditional enterprise software vendors. Regardless of where the IT asset or user sits, SolarWinds delivers products that are easy to find, buy, use, maintain and scale while providing the power to address all key areas of the infrastructure from on premises to the cloud. Our solutions are rooted in our deep connection to our user base, which interacts in our thwack online community to solve problems, share technology and best practices, and directly participate in our product development process. Learn more today at www.solarwinds.com.



Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.
