Showing posts with label Monitoring. Show all posts

Thursday, July 7, 2016

Ixia Integrates ControlTower with Cisco Nexus Switches

Ixia announced integration of its ControlTower network visibility solution with Cisco’s Nexus 3000 switches.

Ixia’s ControlTower solution, a key component of Ixia’s IxVision Architecture, provides visibility within physical, virtual, and software-defined networks (SDN). The distributed architecture provides network administrators access to monitoring and diagnostic tools from any point in the network.

The integration takes advantage of the common programmatic interface exposed by Cisco Nexus switches.

At Cisco Live 2016, Ixia will demonstrate how Ixia ControlTower enables network administrators to dynamically repartition Cisco switch ports between production switching and visibility enablement.

"We have expanded the functionality of ControlTower to now provide a single view over both Ixia supplied network packet brokers and Cisco Nexus 3000 switches, acting as an aggregation layer in large network visibility deployments,” said Dennis Cox, Ixia’s Chief Product Officer. “Ixia is the only visibility vendor to provide an integrated solution using our own equipment combined with Cisco Nexus 3000 switches.”

http://www.ixiacom.com

Wednesday, June 15, 2016

Cisco Tetration Targets Pervasive Visibility with Forensics

Cisco unveiled its new Tetration Analytics solution for deep, real-time visibility into packet flows across a data center -- every packet, every flow, every speed.

Cisco Tetration Analytics gathers telemetry data from the ASICs on board Cisco Nexus 9000 switches and/or from low-overhead software sensors in servers. It then applies machine learning techniques, using analytics software running on Cisco UCS C220 servers, to address critical data center operations such as policy compliance, application forensics, and whitelist security. REST APIs drive a web GUI.

Cisco Tetration Analytics can continuously monitor application flows in real time, sending out instant alerts when flows deviate from established behavior. The solution also tracks and analyzes historical flows, providing forensic analysis of what happened on the network at specific points in time. Cisco claims tens of billions of events are searchable in seconds. The big picture is to deliver a "single pane of glass" for all data center activity.
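As a rough illustration of the kind of baseline-deviation alerting described above (this is not Cisco's algorithm; the flow keys, rates and thresholds are invented), a minimal Python sketch might look like this:

from statistics import mean, pstdev

# Hypothetical flow records: (src, dst, dst_port) -> packets-per-minute
# samples collected during a "learning" window.
baseline_samples = {
    ("10.0.0.5", "10.0.1.9", 443): [1200, 1150, 1300, 1250],
    ("10.0.0.7", "10.0.2.4", 5432): [300, 320, 280, 310],
}

def build_baseline(samples):
    """Learn a mean/stddev baseline per flow key."""
    return {k: (mean(v), pstdev(v)) for k, v in samples.items()}

def check_deviation(baseline, flow_key, observed_rate, n_sigma=3.0):
    """Alert when the observed rate deviates from established behavior."""
    if flow_key not in baseline:
        return f"ALERT: new/unknown flow {flow_key}"
    mu, sigma = baseline[flow_key]
    if abs(observed_rate - mu) > n_sigma * max(sigma, 1.0):
        return f"ALERT: {flow_key} rate {observed_rate} deviates from baseline {mu:.0f}"
    return None

baseline = build_baseline(baseline_samples)
for key, rate in [(("10.0.0.5", "10.0.1.9", 443), 5200),
                  (("10.0.0.7", "10.0.2.4", 5432), 305)]:
    alert = check_deviation(baseline, key, rate)
    if alert:
        print(alert)

A production system would of course learn baselines continuously and use far richer flow features than a simple rate.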

Cisco said continuous monitoring and analysis are key to simplifying operations and ensuring zero-trust operations in automated data centers. Tetration Analytics can be used in conjunction with Cisco's Application Centric Infrastructure (ACI) architecture to automate policy enforcement through a Cisco APIC. It can also be used in brownfield data centers with the software sensors in the servers.

“Gaining much deeper visibility into the data center and automating actionable analysis across a company’s infrastructure marks a critical technology advancement in building secure digital business models like cloud, mobile and IoT,” said David Goeckeler, senior vice president and general manager of Cisco’s Networking and Security Business Group. “We believe the insights we gain from  applications and the data center overall will enhance existing software solutions and drive the future development of new advanced software that will improve business operations, efficiency and customer experiences.”

Cisco has implemented Tetration in its own network, collecting billions of data points over a short period of time. The company said it has been able to reduce operational expenses by 70% while also gaining the ability to untangle application dependencies on hardware infrastructure.

http://www.cisco.com


Thursday, June 9, 2016

Fortinet to Acquire AccelOps for Security Monitoring

Fortinet agreed to acquire AccelOps, a start-up based in Santa Clara, California, that specializes in network security monitoring and analytics solutions. Financial terms were not disclosed.

AccelOps’s virtual appliance software monitors security, performance and compliance in local and virtualized infrastructures, resulting in a unified view of the environment. The software discovers, analyzes and automates IT issues across multi-tenant or single-tenant networks, spanning servers, devices, storage, networks, security, applications and users.

Fortinet said the acquisition extends its recently announced Security Fabric strategy by enhancing network security visibility, security data analytics and threat intelligence across multi-vendor solutions. AccelOps solutions will be rebranded as FortiSIEM and become part of the Fortinet Security Fabric, providing customers with greater visibility across both Fortinet and multi-vendor security solutions.

“Fortinet and AccelOps share a common vision of providing holistic, actionable security intelligence across the entire IT infrastructure. Our mission has always been to help our customers make security and compliance management as effortless and effective as possible. The synergies between AccelOps’s solutions and Fortinet’s Security Fabric vision and thought leadership will ensure that our customers are protected with the most scalable and proven global threat intelligence, security and performance analytics and compliance and control across all types of network environments with multiple security and networking vendor products,” stated Partha Bhattacharya, founder and chief technology officer, AccelOps.

http://www.fortinet.com
http://www.accelops.com/

Blueprint: Endpoint Visibility in the IoT



A Five-Step Action Plan for Securing the Network in the Age of IoT

by Tom Kelly, CEO, AccelOps

A report from BI Intelligence projects that Internet of Things (IoT) deployments will create $421 billion in economic value for cities worldwide in 2019. Cities will enjoy benefits such as improved traffic flow, a reduction in air pollution and better public safety. This is just one example of the advancements the IoT will bring to all sectors. However,...


Blueprint: Three Predictions for Network Monitoring in 2016



by Tom Kelly, CEO, AccelOps

Why do armies set up look-outs all around their camps? Why do people read their horoscopes and shake magic eight-balls? Simple: they want to see what’s coming. In business, it’s incredibly helpful to be able to accurately forecast needs and set strategy. In the network security and performance arena of the business, it’s table stakes. While there’s no crystal ball that can tell us everything, one thing is certain:...


AccelOps Builds Threat Intelligence into its Actionable Security Platform


AccelOps, a start-up based in Santa Clara, California, introduced threat intelligence capabilities for its integrated IT and operational visibility platform. The existing AccelOps virtual appliance software monitors security, performance and compliance in cloud and virtualized infrastructures on a single screen. It automatically discovers, analyzes and automates IT issues in machine and big data across organizations’ data centers and cloud resources,...

Wednesday, June 8, 2016

SolarWinds Brings Visibility to Hybrid IT

SolarWinds released a major update to its flagship Network Performance Monitor (NPM) tool that now gives network administrators the hybrid IT visual insights and analysis needed for visibility into the performance of services across not only the networks they own, but also those of their service providers and cloud vendors.

Specifically, SolarWinds NPM 12 now features NetPath and Network Insight, providing the ability to view the performance details of and pinpoint bottlenecks on all the networks connecting critical services and applications, whether they be on-premises or in the cloud.

The SolarWinds NPM 12 network monitoring software visually maps hybrid network paths alongside on-premises data. For example, NetPath gives IT professionals whose organizations use cloud-based applications such as Salesforce® the ability to identify the exact location of a performance issue—whether on the internal LAN, with a WAN provider or on the cloud application vendor’s own network—and provides actionable insights for resolution. Specific NetPath features include:

  • Dynamic and visual hop-by-hop analysis of critical paths and devices along the entire network delivery path—on-premises, in the cloud or across hybrid IT environments.
  • Specific, actionable information to resolve network issues regardless of network ownership.
  • Automatic and dynamic thresholds to identify unhealthy critical paths and network nodes, providing an estimated 50 percent faster time to resolution.
  • Visualized critical path performance over time for historical views of network latency.
  • Identification of device configuration changes along critical paths when integrated with SolarWinds Network Configuration Manager.
  • Immediate insight into the traffic travelling across flow-enabled devices impacting network performance when integrated with SolarWinds NetFlow Traffic Analyzer.
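To make the hop-by-hop idea above concrete, here is a minimal, vendor-neutral Python sketch (not NetPath itself) that converts cumulative round-trip times measured along a delivery path into per-segment latency contributions and flags segments crossing a threshold; the hostnames, RTT values and threshold are invented for illustration.

# Hypothetical cumulative RTTs (ms) measured traceroute-style at each hop,
# from the monitoring host out to a SaaS endpoint.
path = [
    ("edge-router.corp.local", 1.2),
    ("isp-gw.example.net", 9.8),
    ("isp-core.example.net", 11.0),
    ("saas-frontend.example.com", 182.5),
]

def per_hop_contribution(path):
    """Convert cumulative RTTs into per-segment latency deltas."""
    deltas, prev = [], 0.0
    for name, rtt in path:
        deltas.append((name, max(rtt - prev, 0.0)))
        prev = rtt
    return deltas

def find_bottlenecks(deltas, threshold_ms=50.0):
    """Flag segments whose added latency crosses a (static) threshold.
    A production tool would use dynamic baselines instead."""
    return [(name, d) for name, d in deltas if d > threshold_ms]

for name, added in find_bottlenecks(per_hop_contribution(path)):
    print(f"Possible bottleneck entering {name}: +{added:.1f} ms")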


“Applications, whether on-premises or in the cloud, are the heart of business, but they’re useless without the networks that connect them to end users,” said Christoph Pfister, executive vice president of products, SolarWinds. “Until now, the capabilities to monitor the performance of those networks regardless of ownership were impossible within a single tool—traceroute tools are typically blocked from accessing service provider networks, and cloud monitoring tools don’t have adequate visibility into on-premises infrastructure performance. With NetPath and Network Insight, SolarWinds NPM 12 is delivering the dynamic visibility and actionable insights IT professionals need to effectively manage all the networks impacting the applications their organizations rely on in a hybrid IT world—every node, every path, every network.”
http://www.solarwinds.com

Tuesday, May 17, 2016

Riverbed Extends its Visibility in the Cloud

Riverbed is rolling out a series of enhancements to its SteelCentral platform to improve monitoring across the cloud and virtualized environments. These include:

  • extending monitoring capabilities into the cloud with Microsoft Azure and AWS
  • support for Platform-as-a-Service (PaaS) and containerized environments
  • large-scale virtualized network performance monitoring
  • expanded unified communications (UC) monitoring with new support for Skype for Business
  • next-generation diagnostics and troubleshooting.

“This release of SteelCentral delivers significant enhancements in cloud-based performance monitoring, along with new capabilities that will help accelerate business execution and boost productivity,” said Mike Sargent, SVP, General Manager of SteelCentral at Riverbed. “As more enterprises embrace digital and cloud services, SteelCentral provides the high definition visibility that is critical to enabling and assuring the success of these transformational initiatives. SteelCentral is the only performance management solution that can deliver comprehensive insight into end user experience, application, network, and infrastructure performance in a unified, central view. It truly is the command center for application performance for the digital enterprise.”

http://www.riverbed.com
http://www.riverbed.com/products/steelcentral/index.html

Wednesday, May 11, 2016

Blueprint: The Rise of the Network Monitoring Engineer

by Patrick Hubbard, Head Geek, SolarWinds

Today’s network engineers face tremendous complexity, in part due to increasing demand, but also given the diversity of protocols and the high number of multi-tier applications that are often outside of their control. Combined with improved automated failover, it’s become impossible (except in the largest of organizations) for network administrators to be highly specialized, meaning the days of being a router jockey are gone.

Network administrators today are stuck between everyday tasks of change management, hardware refreshes and strategic changes required to support new business initiatives, and the on-demand troubleshooting work they are asked to do. On top of this, automation encourages IT managers to streamline their teams, so as network complexity increases, paradoxically the number of people available to help address these tasks is actually decreasing.

But this doesn’t mean the future of network administration is bleak. There are a number of ways network engineers can improve their skills and remain relevant to their organizations, especially at a time when hybrid IT is taking center stage. According to the most recent SolarWinds IT Trends Report, just nine percent of North American organizations have not migrated at least some infrastructure to the cloud and nearly all IT professionals say adopting cloud technologies is important to their organizations’ long-term business success.

Networking in a Hybrid Environment

In such complex environments, network administrators need the ability to view performance, traffic and configuration details of devices both within and outside their traditional networks. However, hybrid IT means network administrators have far less visibility into, and often no direct control over, the cloud resources they need to manage and monitor.

Because the end user expectation that IT be able to assure delivery of services is the same for on-premises and cloud, this can be frustrating. It’s exacerbated by cloud service providers whose bundled monitoring and management tools are proprietary rather than vendor-agnostic. These tools actually create extra work for administrators, who must flip between multiple dashboards without the benefit of a holistic view that would allow them to troubleshoot quickly.

Often, such tools also spew alerts without indicating what might be causing the issue. For example, for an application running in the data center, network administrators have visibility into every network layer required to host the hypervisor. However, when that application is moved into the cloud, network administrators lose the administrative authority needed to monitor it easily. They require a new way to monitor in order to keep the same rich visibility they had when it was on-premises.

Administrators still need to monitor interface performance, as well as identify service delivery issues along the path connecting the service to the end user. New technologies have become available that reveal the physical connectivity of the service components and the end users who might be experiencing poor performance.

So while using disparate vendor-provided tools may be cost-effective in the short term, having a large number of disparate solutions is its own kind of trouble—it doesn’t lend itself to a coherent, integrated alerting and notification strategy that allows administrators to stay on top of performance, ultimately costing time and money in the long term.

The Rise of the Dedicated Monitoring Engineer

Hybrid IT is drawing attention to the need for a new approach to monitoring and management essentials. Enter monitoring as a discipline, which differs from simple monitoring in that it is the defined role of one or more individuals within an organization. A designated monitoring engineer is able to work across systems and environments, thereby removing network and data center silos and gaining the ability to turn data points generated by monitoring tools into actionable insights for the business.

Hiring a monitoring engineer, or better yet a team of monitoring engineers, should be considered a critical investment in services and business success. It’s one thing to say that companies need a certain headcount in order to maintain a business and keep the lights on, but another thing entirely when it comes to IT, which is largely viewed as a cost center and where most departments exceed their budgets every year. However, enlightened companies are beginning to view monitoring as a cost-effective way to achieve greater IT ROI. Instead of purchasing ad hoc tools to keep an eye on their technology, progressive companies have figured out a way to bring discipline and structure to their monitoring practices via staffing and resources. For the right organization, this would be a team of monitoring engineers, each with their own specialization—network monitoring, systems monitoring, etc.—but who work in lockstep from a “single point of truth” when it comes to overall infrastructure performance.

How to Make the Business Case for a Monitoring Engineer

With accelerating IT complexity in mind, it’s important that IT management begin to instill monitoring-as-a-discipline principles within the business. IT professionals are already strapped for time and resources, and management needs to step in to help evangelize internally, offer examples and best practices, and put budget for new tools and technologies behind these efforts in order to achieve the full benefit of monitoring as a discipline. Management must make a strong business case that the monitoring engineer or engineers will achieve ROI not only for the IT department, but for the organization as a whole.

Critical Monitoring Engineer Skills

Although monitoring engineers must possess basic network engineering skills, there are a few particular skillsets in addition that are necessary to be truly successful in the role. These include:
  • A programmer’s eye towards customization and a willingness to improve – Often, we buy technology that’s ready-made and use it right out of the box. But the most successful monitoring engineer will turn their eye towards improving it all the time.
  • An analyst’s eye for data – Instead of simply poring over endless numbers in a spreadsheet, a monitoring engineer should be able to take a step back, look at the bigger picture and ask themselves what their “customers” will be using their monitoring reports for and how they should be visualized. And they must remember, less is more. 
  • On top of cultivating their skills with experience, studying is key – The best way to hone skills is to learn on the fly, as well as spend more than a few lunch breaks and evenings testing new technologies and processes in a lab environment. 
Our networks are growing in complexity as they become further tied to all elements of the IT environment, extending all the way to the cloud. IT management should seize this opportunity to return as much value as possible from existing technology by hiring a monitoring engineer, or a monitoring team with at least one individual focused on the network, who works in tandem with existing teams to holistically monitor the performance of the entire IT infrastructure. Whether on-premises or in the cloud, these resources maintain an eye towards improving existing systems, delivering promised ROI and driving repeatable progress for the business.

About the Author

Patrick Hubbard is a head geek and senior technical product marketing manager at SolarWinds. With 20 years of technical expertise and IT customer perspective, his networking management experience includes work with campus, data center, storage networks, VoIP and virtualization, with a focus on application and service delivery in both Fortune 500 companies and startups in high tech, transportation, financial services and telecom industries.

About SolarWinds

SolarWinds (NYSE: SWI) provides powerful and affordable hybrid IT infrastructure management software to customers worldwide from Fortune 500® enterprises to small businesses, government agencies and educational institutions. We are committed to focusing exclusively on IT Pros, and strive to eliminate the complexity that they have been forced to accept from traditional enterprise software vendors. Regardless of where the IT asset or user sits, SolarWinds delivers products that are easy to find, buy, use, maintain and scale while providing the power to address all key areas of the infrastructure from on premises to the cloud. Our solutions are rooted in our deep connection to our user base, which interacts in our thwack online community to solve problems, share technology and best practices, and directly participate in our product development process. Learn more today at www.solarwinds.com.



Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Weaveworks Raises $15 Million for Container Monitoring

Weaveworks, a start-up offering networking and monitoring for containers and microservices, secured $15 million in Series B funding led by new investor GV (formerly Google Ventures).

Weaveworks’ Weave provides a simple and consistent way to connect and manage containers and microservices. It delivers simple and reliable networking across development, test and production environments and any mix of data centers and public clouds. Weave also provides a unique console to visualize and interact with container and microservice deployments in development and production.

The new funding round also included participation from existing investor Accel.

“We’re delighted to welcome GV as a new investor, and Accel as a repeat investor in Weaveworks,” said Alexis Richardson, co-founder and CEO of Weaveworks and chair of the Cloud Native Computing Foundation’s (CNCF) Technical Oversight Committee. “Both share our vision of simplifying microservice-based application development by minimizing the connectivity and deployment complexities, and providing unique visual ways to understand and manage cloud-native applications.”

http://weave.works/

Weave Enhances its Networking + Monitoring for Docker

Weaveworks, a start-up with offices in San Francisco and London, announced the 1.4 release of its networking and monitoring software for Docker deployments.

Weave Net 1.4 is a Docker networking plug-in that eliminates the requirement to run and manage an external cluster store (database). The plug-in simplifies and accelerates the deployment of Docker containers by removing the requirement to deploy, manage and maintain a centralized database in development and production. It builds on Docker’s core networking capabilities, running a “micro router” on each Docker host that works just like an Internet router: it provides IP addresses to local containers, shares peer-to-peer updates with other micro routers, and learns from their updates. It also responds to DNS requests from containers looking to find other containers by name, also known as service discovery. Features include:

  • A simple overlay networking approach for connecting containers across Docker hosts
  • Fast, standards-based VXLAN encapsulation for the network traffic
  • An application-centric micro-network
  • Built-in service discovery
Weaveworks developed “micro router” technology to make Docker container networking fast, easy and “invisible”.
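As a loose, conceptual illustration of the peer-to-peer updates and name-based service discovery described above (this is not Weave's actual implementation; the class, IP addresses and container names are hypothetical), consider the following Python sketch:

class MicroRouter:
    """Toy model of a per-host router that assigns local container IPs
    and merges peer-to-peer updates from other routers."""

    def __init__(self, host):
        self.host = host
        self.records = {}  # container name -> (ip, version)

    def register_local(self, name, ip):
        ver = self.records.get(name, (None, 0))[1] + 1
        self.records[name] = (ip, ver)

    def merge_update(self, peer_records):
        # Keep whichever record has the newer version (last-writer-wins).
        for name, (ip, ver) in peer_records.items():
            if ver > self.records.get(name, (None, 0))[1]:
                self.records[name] = (ip, ver)

    def resolve(self, name):
        # Service discovery: answer a lookup by container name.
        rec = self.records.get(name)
        return rec[0] if rec else None

r1, r2 = MicroRouter("host-a"), MicroRouter("host-b")
r1.register_local("web", "10.32.0.2")
r2.register_local("db", "10.40.0.3")
r2.merge_update(r1.records)      # peers exchange what they know
print(r2.resolve("web"))         # -> 10.32.0.2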

“Removing the dependency on a cluster store makes it faster, easier and simpler to build, ship and run Docker containers,” said Mathew Lodge, COO of Weaveworks. “Weave Net 1.4 embodies Weaveworks’ commitment to making simple, easy to use products that accelerate the deployment of microservices and cloud native applications on containerized infrastructure.”

http://weave.works/

Monday, May 2, 2016

Pluribus Updates its Insight Analytics

Pluribus Networks announced a major update to its network monitoring and business analytics solution.

Pluribus VCF Insight Analytics (VCF-IA) now provides automated metadata tagging, allowing businesses to get a contextualized view of their data center consumption. In addition, VCF Insight Analytics is now available as a premier monitoring solution from Dell as part of its software-defined open networking portfolio.

Pluribus said VCF-IA 1.5 can be deployed in any data center network – either with or without other Pluribus Networks switches – to enable advanced monitoring and business analytics across a wide range of networking topologies.
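The general idea behind metadata tagging for contextualized consumption reporting can be sketched as follows; this is a conceptual illustration rather than Pluribus's implementation, and the flow records, subnet-to-application tags and field names are hypothetical:

from collections import Counter

# Hypothetical raw flow records exported by the fabric.
flows = [
    {"src": "10.1.1.10", "dst": "10.1.2.20", "bytes": 4_000_000},
    {"src": "10.1.1.11", "dst": "10.1.2.20", "bytes": 1_500_000},
    {"src": "10.1.3.30", "dst": "10.1.2.20", "bytes": 9_000_000},
]

# Hypothetical metadata: which business application owns each subnet.
app_tags = {"10.1.1.": "web-frontend", "10.1.3.": "analytics"}

def tag(ip):
    for prefix, app in app_tags.items():
        if ip.startswith(prefix):
            return app
    return "untagged"

usage = Counter()
for f in flows:
    usage[tag(f["src"])] += f["bytes"]

for app, total in usage.most_common():
    print(f"{app}: {total / 1e6:.1f} MB consumed")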

“For years, enterprises have been searching for an affordable way to gain deep insight into the business operation of their data center IT systems from the network perspective, but their efforts have been severely constrained by the significant investment required,” said Mark Harris, VP of Corporate Marketing at Pluribus Networks. “Working with Dell, we have proven that customers can deploy our switching solutions to transform their data centers and realize the industry’s most cost-effective SDN.”

http://www.pluribusnetworks.com

Thursday, March 3, 2016

Blueprint: Monitoring as a Discipline and the Network Administrator

by Leon Adato, Head Geek, SolarWinds

As IT professionals, we know our way around data centers like the backs of our hands. But what consistently surprises me when I speak with other admins is the general lack of knowledge about and resources put towards what we at SolarWinds call monitoring as a discipline, especially as it pertains to monitoring networks.

Evolution of the network

The network is a complex thing, and it has evolved considerably over the past decade.

For example, the network used to be defined by a mostly wired, physical entity controlled by routers and switches. Business connections were based on T1 and ISDN, and Internet connectivity was always backhauled through the data center. Each network device was a piece of company-owned hardware, and applications operated on well-defined ports and protocols. VoIP was used infrequently, and anywhere connectivity—if even a thing—was provided by the low-quality bandwidth of cell-based Internet access.

Today, however, wireless is becoming ubiquitous—it’s even overtaking wired networks in many instances—and the number of devices wirelessly connecting to the network is exploding (think Internet of Things). It doesn’t end there, though—networks are growing in all directions. Some network devices are even virtualized, resulting in a complex amalgam of the physical, the virtual and the Internet. Business connections are DSL/cable and Ethernet services. BYOD, BYOA, tablets and smartphones are prevalent and are creating bandwidth capacity and security issues. Application visibility based on port and protocol is largely impossible due to applications tunneling via HTTP/HTTPS. VoIP is common, also imposing higher demands on network bandwidth, and LTE provides high-quality anywhere connectivity.

And the future isn’t looking any simpler. The Internet of Things (IoT); software defined networking (SDN); and hybrid IT, with its accompanying challenge of ensuring acceptable quality of service to meet the business performance needs for any given service delivered via a cloud provider, are all cresting the horizon.

What’s my point? These trends, challenges and complexities underscore a new set of monitoring and management essentials.

Enter monitoring as a discipline

What is monitoring as a discipline?

Monitoring as a discipline differs from simple monitoring in that it is an actual role, the defined job of one or more individuals within an organization, not just something “everyone kind of does when it’s needed.” The most important benefit of such a dedicated role is the ability to turn data points from various monitoring tools and utilities into more actionable insights for the business by looking at all of them from a holistic vantage point, rather than each disparately.

Although such a monitoring-dedicated individual or team is realistically only likely at larger organizations at this point in time, small- and medium-sized businesses may want to take note. Their infrastructures, all of which rely on the backbone known as the network, are only going to get more complex, bringing the need for even them to create such a role into sharp focus. Don’t believe me? Think about how common hiring a dedicated information security professional was ten years ago—nearly unheard of. But today, many organizations of almost every size consider this to be a necessity given the constant specter of security breaches.

Now reflect on how IT environments, not just the network, have grown, both in size and complexity, being distributed across geographies more than ever. In turn, monitoring them has equally grown in complexity. In fact, due to hybrid IT, it has become extremely difficult to pinpoint the root cause of issues—whether they lie with the cloud services provider or the organization’s internal network itself.

Thus, the “old way” of monitoring, where network admins, server admins and storage admins, etc. each operate in silos, monitoring only within their specific realm without much if any cross-silo oversight, is no longer really a viable option. Employing an expert who treats monitoring as a specific discipline across all of the traditional silos provides a cohesive view across an organization’s IT spectrum, making root cause analysis much more efficient and accurate and reducing costs in the process.

Expanding monitoring skillsets

All that said, given budget constraints, the reality for IT departments at many small- and medium-sized businesses will be one without such a dedicated monitoring expert for at least the near future. If having a dedicated monitoring expert is not in the cards for now, the next step is to expand your current IT team’s monitoring skillset. At minimum, your team should be able to effectively monitor:
  • Hardware
  • Networks (e.g., NetFlow and syslog)
  • Applications
  • Virtualization
  • Configurations
Configuration monitoring is especially important because when it comes to configs, knowing what changed, as well as the exact moment the change was made, is critical to both the security and stability of entire environments. In fact, 80 percent of all corporate outages are caused by unexpected or uncontrolled config changes. And, in all honesty, in the absence of a dedicated monitoring expert, we generalist network admins are perhaps best positioned to step in and corral all this monitoring data into one cohesive set of actionable insights.
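In the absence of a dedicated tool, the essence of configuration monitoring, knowing what changed and exactly when, can be sketched in a few lines of Python; the record_config function and the sample configs below are hypothetical placeholders for a real collection mechanism such as SSH or a device API:

import difflib
import hashlib
from datetime import datetime, timezone

last_seen = {}  # device -> (sha256 digest, config text)

def record_config(device, config):
    """Detect and report a change; in practice `config` would be pulled
    from the device over SSH or an API on a polling schedule."""
    digest = hashlib.sha256(config.encode()).hexdigest()
    prev = last_seen.get(device)
    if prev and prev[0] != digest:
        when = datetime.now(timezone.utc).isoformat()
        diff = "\n".join(difflib.unified_diff(
            prev[1].splitlines(), config.splitlines(),
            fromfile="previous", tofile="current", lineterm=""))
        print(f"{device}: configuration changed at {when}\n{diff}")
    last_seen[device] = (digest, config)

record_config("core-sw1", "interface Gi0/1\n description uplink\n")
record_config("core-sw1", "interface Gi0/1\n description uplink\n shutdown\n")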

In conclusion

As the network becomes more complex and expands in nearly every direction, monitoring as a discipline will become more critical to business success. In summary, companies of all sizes should consider:
  • Adding a dedicated monitoring expert or experts who can provide a holistic view of the organization’s infrastructure performance, turning seemingly disparate data points gathered by monitoring tools into valuable, actionable insights.
  • If a dedicated expert is not possible, ensuring the current IT team understands the nuances of monitoring hardware, networks, applications, virtualization and configurations and has a comprehensive, but not necessarily expensive, suite of monitoring tools available.
  • Putting network admins in charge of corralling all this monitoring data.
About the Author


Leon Adato is a Head Geek and technical evangelist at SolarWinds, and is a Cisco Certified Network Associate (CCNA), MCSE and SolarWinds Certified Professional (he was once a customer, after all). Before he was a SolarWinds Head Geek, Adato was a SolarWinds® user for over a decade. His expertise in IT began in 1989 and has led him through roles as a classroom instructor, courseware designer, desktop support tech, server support engineer, and software distribution expert. His career includes key roles at Rockwell Automation®, Nestle, PNC, and Cardinal Health, providing server standardization, support, and network management and monitoring.




Hibernia Employs Accedian on Transatlantic Cable

Hibernia Networks is using Accedian to provide tiered termination aggregation services on its new transatlantic cable, Hibernia Express, which offers round-trip latency of under 58.95 milliseconds between New York and London.

The Accedian platform enables Hibernia Networks to provide flexible Ethernet speeds for its portfolio of low latency connectivity solutions, ensuring a more precisely scaled service that meets the individual requirements of its customers.

Accedian said its network solution is capable of providing monitoring and measurement functions on a one-way transatlantic transmission with sub-microsecond accuracy—enabling Hibernia Networks to provide more proactive performance assurance support for low latency services, from its network operations center. The performance monitoring metrics include latency, packet loss, and jitter.
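For readers less familiar with these metrics, the following simplified Python sketch shows how latency, loss and jitter can be derived from timestamped, sequence-numbered test packets; it is an illustration of the general math, not Accedian's implementation, and the probe values are invented:

# Hypothetical probe results: sequence number -> (sent_ts, received_ts) in ms.
# A missing sequence number counts as a lost packet.
probes = {1: (0.0, 29.4), 2: (10.0, 39.6), 3: (20.0, 49.3), 5: (40.0, 69.9)}
expected = 5

delays = [rx - tx for tx, rx in probes.values()]          # one-way delay per probe
latency_ms = sum(delays) / len(delays)
loss_pct = 100.0 * (expected - len(probes)) / expected

# Jitter as mean absolute variation between consecutive delays
# (a simplification of the RFC 3550 interarrival jitter estimator).
jitter_ms = sum(abs(a - b) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)

print(f"latency {latency_ms:.2f} ms, loss {loss_pct:.1f}%, jitter {jitter_ms:.2f} ms")

One-way measurements like these also require tightly synchronized clocks at both ends, which is part of what makes sub-microsecond accuracy notable.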

“We’re delighted to be part of Hibernia Networks’ state-of-the-art transatlantic cable system,” said Patrick Ostiguy, CEO, Accedian. “Our ability to guarantee lowest possible latency through technology innovation and best practice continues to provide the hallmark for our ongoing expansion as we serve an increasing number of service providers spanning wholesale, mobile, and cable networks.”

http://www.accedian.com

Tuesday, February 16, 2016

NETSCOUT Intros Flagship Packet Flow Switch for Carriers

NETSCOUT introduced its new flagship blade-and-chassis Packet Flow Switch -- the PFS 6010 -- designed for service providers that have migrated to 100G networks and require monitoring visibility infrastructure that scales with their networks and allows them to leverage investments in existing 1G or 10G monitoring tools.

The PFS 6010 features a 6Tbps non-blocking fabric architecture with port density from 60 to 600 ports in a single chassis. Advanced packet conditioning capabilities include packet slicing, header stripping and de-duplication to ensure monitored networks are managed efficiently.
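As a rough illustration of one of these conditioning functions, de-duplication can be thought of as hashing an invariant portion of each packet and dropping repeats seen within a sliding window; the Python sketch below is conceptual, not NETSCOUT's implementation, and real packet brokers typically mask volatile header fields (TTL, checksums) before hashing:

import hashlib
from collections import OrderedDict

class Deduplicator:
    """Drop packets whose hash was seen within the last `window` packets."""

    def __init__(self, window=10_000):
        self.window = window
        self.seen = OrderedDict()  # digest -> None, ordered by arrival

    def accept(self, packet_bytes):
        digest = hashlib.sha1(packet_bytes).digest()
        if digest in self.seen:
            return False            # duplicate: do not forward to tools
        self.seen[digest] = None
        if len(self.seen) > self.window:
            self.seen.popitem(last=False)  # evict the oldest entry
        return True

dedup = Deduplicator()
print(dedup.accept(b"\x45\x00...payload"))  # True, first copy forwarded
print(dedup.accept(b"\x45\x00...payload"))  # False, duplicate dropped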

NETSCOUT said its new PFS 6010 solves a critical challenge that organizations managing large networks encounter: scaling access to network traffic with the growing need for service assurance, performance management, application management and additional security systems.

“This is a strategic expansion of our portfolio, enabling packet flow switches to scale to the needs of the world’s largest and most innovative service providers who need application visibility,” said Brian McCann, vice president and general manager of NETSCOUT’s Packet Flow Switch Business Unit. “With 6Tbps performance and up to 600 available ports, the PFS 6010 offers the industry’s most scalable packet flow switch solution, supporting advanced traffic optimization, filtering and flow balancing functionality in a single system that supports speeds from 1G to 100G.”

Key features and capabilities of the PFS 6010 include:

  • Massive scalability supports the largest data center and mobile network deployments, with up to 600 ports of 10G or 60 ports of 100G in a single chassis
  • Carrier class reliability and application performance with a non-blocking 6Tbps line rate fabric and fully redundant architecture
  • Advanced flexibility easily accommodates various traffic types and requirements, from 1G to 10G to 40G to 100G, with aggregation, replication, load balancing and advanced packet conditioning functionality
  • Highly modular 15RU 10-slot chassis, where each blade can be selected and deployed based on speeds, port density and functionality required


http://www.netscout.com/product/service-provider/ngenius-6000-series-packet-flow-switch-service-provider/

Tuesday, January 5, 2016

Blueprint: Three Predictions for Network Monitoring in 2016

by Tom Kelly, CEO, AccelOps

Why do armies set up look-outs all around their camps? Why do people read their horoscopes and shake magic eight-balls? Simple: they want to see what’s coming. In business, it’s incredibly helpful to be able to accurately forecast needs and set strategy. In the network security and performance arena of the business, it’s table stakes.

While there’s no crystal ball that can tell us everything, one thing is certain: organizations will need to fundamentally change the way they identify and manage threats. Below are my three predictions on this topic for the new year.

  1. It’s time to outsource security. With the unprecedented benefits and growth of the Internet of Things (IoT) and the vast number of touch points connecting to the network, new challenges and unknown risks associated with these tools will continue to multiply. Unknown risks include network and resource utilization, performance expectations and resource needs, interoperability with current systems and tools and, above all else, security risks and challenges to an organization’s livelihood. As IT budgets and the pool of technical personnel both shrink, organizations will increasingly look outside their silos to managed security service providers (MSSPs) for expert help.
  2. Organizations will map the customer journey. Consumers today have access to nearly infinite sources of information through the click of a mouse, resulting in a higher level of expectation for rapid answers from a variety of engagement channels. From websites to social media to mobile and multi-media, organizations are tasked with keeping up with customer demands from an ever-increasing set of “touch-points.” To that end, organizations will turn to tools that map and analyze a “360 view” of their customers’ journey and the respective “touch-points” throughout their organizations. As this integrated security and performance management requirement transitions from a tactical IT expenditure-driven initiative to a mission-critical, strategic business initiative, the era of CIOs and CISOs reporting to CFOs will shift to stronger oversight by boards of directors and CEOs.
  3. Business intelligence sources will converge. Proprietary customer and financial data and intellectual property are high-value targets for hackers. The challenge in protecting these targets will continue to grow as organizations become more reliant on business intelligence and analytics (Big Data) to dissect their various channels of customer engagement and their worker, network and application productivity. As organizations store this valuable data in onsite and offsite locations (or a combination of both), Big Data is seen as a big target. These rich and proprietary sources of corporate analytics will spawn new and additional targets for hackers. Current silo-based approaches will need to converge with other business intelligence initiatives to provide more rapid identification and mitigation of risks.
Today’s dynamic, data-driven businesses have never been more reliant on the performance of their networks in managing risk and in the pursuit of their strategic initiatives. These same networks have never been more at risk from security breaches and their network performance impacts. With digital transformation in full swing, the pace of change is rapidly accelerating, and an organization’s ability to see into the network, through solutions that provide a holistic, real-time view and correlation of the various elements in their network, is becoming more critical than ever.

About the Author

Tom Kelly is CEO of AccelOps and a technology industry veteran who has led companies through founding, growth, IPO and strategic acquisition. He has served as a CEO, COO or CFO at Cadence Design Systems, Frame Technology, Cirrus Logic, Epicor Software and Blaze Software. Tom led successful turnarounds at Bluestar Solutions, MonteVista Software and Moxie Software, having served as CEO in repositioning and rebranding the companies in advance of their new growth. He serves on the Boards of Directors of FEI, Fabrinet, and ReadyPulse. Tom is a graduate of Santa Clara University, where he is a member of the University’s Board of Regents.


Tuesday, December 15, 2015

HPE Delivers Visibility into the Virtual Switching Fabric

Hewlett Packard Enterprise (HPE) announced network management and visualization software for virtual networks.

The new Network Node Manager i (NNMi) software provides visibility into network topologies, helping companies maintain control and extend their insight into the virtual switching fabric of their entire network. The software gives network managers visibility into virtualized devices and topologies to help ensure that their devices are connected, configured and performing as expected. When a device fails, NNMi can analyze events associated with the failure and help recommend action. NNMi also provides predictive information that helps identify potential failures before they occur.
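To illustrate the general idea of topology-aware event analysis (a simplified sketch, not HPE's algorithm; the device names and dependency map are hypothetical): when a device fails, alarms from devices reachable only through it can be treated as symptoms, so the operator sees the likely root cause.

# Hypothetical topology: child device -> upstream device it depends on.
upstream = {
    "access-sw-12": "dist-sw-2",
    "access-sw-13": "dist-sw-2",
    "dist-sw-2": "core-rtr-1",
}

def root_cause(down_devices):
    """Return devices that are down but whose upstream is still up;
    alarms for the rest are treated as symptoms and suppressed."""
    down = set(down_devices)
    return [d for d in down if upstream.get(d) not in down]

alarms = ["access-sw-12", "access-sw-13", "dist-sw-2"]
print("probable root cause:", root_cause(alarms))   # -> ['dist-sw-2']
print("suppressed symptoms:", [d for d in alarms if d not in root_cause(alarms)])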

“Network managers today are unable to see a complete picture of their virtualized devices in real time, which limits their ability to ensure compliance and engineer the network for optimal performance,” said Balaji Venkatraman, PhD, Director of Product Line Management, Hewlett Packard Enterprise. “We are the first in the industry to provide a broad suite of network management system software with integrated fault, performance, configuration and compliance management capability that enables customers to optimize workloads to maintain application performance and resilience.”

https://www.hpe.com

Tuesday, July 21, 2015

Gigamon Launches Security Visibility Platform for Advanced Persistent Threats

Gigamon introduced its "GigaSECURE" Security Delivery Platform for providing pervasive visibility of network traffic, users, applications and suspicious activity, and then delivering it to multiple security devices simultaneously without impacting network availability.

The idea is to counter Advanced Persistent Threats (APTs) by leveraging a traffic visibility fabric to extract scalable metadata across a network, including cloud and virtual environments, and thereby empower third-party security applications. This enables improved forensics and the isolation of applications for targeted inspection. The company also said its solution can deliver visibility into encrypted traffic for threat detection. The architecture supports inline and out-of-band security device deployments.

Gigamon's GigaSECURE comprises scalable hardware and software elements:

  • Infrastructure-wide reach via GigaVUE-VM and GigaVUE nodes;
  • High-fidelity, unsampled NetFlow/IPFIX generation;
  • Application Session Filtering;
  • SSL decryption; and
  • Inline bypass capabilities.

Gigamon also highlighted its Application Session Filtering (ASF), a new, patent-pending GigaSMART application that can identify applications based on signatures or patterns that appear within a packet or packets. Once an application is positively identified, ASF extracts the entire session corresponding to the matched application flow, from the initial packet to the last packet of the flow, even if the match occurs well after the first packet. This allows an administrator to forward specific “traffic of interest” to security appliances, thereby optimizing their operational efficiency and improving overall performance.
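Conceptually, session filtering of this kind buffers packets per flow and, once a signature matches anywhere in the flow, forwards the entire buffered session plus everything that follows. The Python sketch below illustrates that buffering-and-replay idea only; it is not Gigamon's implementation, and the signature, five-tuple and payloads are invented:

from collections import defaultdict

SIGNATURE = b"BitTorrent protocol"   # example byte pattern to match

flows = defaultdict(lambda: {"buffer": [], "matched": False})

def on_packet(five_tuple, payload, forward):
    """Buffer packets per flow; once the signature is seen, flush the
    buffered packets and forward everything that follows."""
    flow = flows[five_tuple]
    if flow["matched"]:
        forward(payload)
        return
    flow["buffer"].append(payload)
    if SIGNATURE in payload:
        flow["matched"] = True
        for earlier in flow["buffer"]:   # replay the session from packet 1
            forward(earlier)
        flow["buffer"].clear()

sent = []
key = ("10.0.0.1", 51000, "10.0.0.2", 6881, "tcp")
on_packet(key, b"handshake...", sent.append)               # buffered, nothing sent yet
on_packet(key, b"...BitTorrent protocol...", sent.append)  # match: both packets forwarded
print(len(sent))  # -> 2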

The GigaSECURE platform already supports a broad ecosystem of security partners and their respective security functions, including:

  • Advanced Malware Protection: Check Point, Cisco, Cyphort, FireEye and Lastline
  • Behavior Analytics: Damballa, Lancope, LightCyber and Niara
  • Forensics/Analytics: ExtraHop, PinDrop, RSA and Savvius
  • IPS: Check Point and Cisco
  • NGFW: Check Point, Cisco, Fortinet and Palo Alto Networks
  • Secure Email Gateways: Cisco
  • SIEMs: LogRhythm and RSA
  • WAFs: Imperva

https://www.gigamon.com/

Gigamon's Shehzad Merchant: Intersection of Open and Security


The open networking movement is here to stay. It's not just about open source software, says Shehzad Merchant, CTO of Gigamon, but really about taking a vertically-integrated networking stack and disaggregating it. With various components of the networking stack supplied by different vendors, maintaining visibility across every layer of that stack becomes critical.

By disaggregating the networking stack, you are, in principle, opening up new attack vectors across multiple surfaces. On the other hand, there will be a much broader ecosystem moving much quicker to address vulnerabilities.

This 9-minute sponsored video covers: (1) whether the many open networking projects help or hurt the case for better network security; (2) the overlapping trends of virtualization and higher networking speeds; (3) security as the use case for SDN; and (4) redefining security boundaries with SDN.

http://open.convergedigest.com/2015/05/gigamon.html

Automating Visibility inside the Cisco Live Network with Gigamon and JDSU

The Cisco Live Network and its state-of-the-art network operations center serve all of the attendees of Cisco's big annual event. Equipment must be deployed rapidly. As soon as the show begins, the network supports tens of thousands of clients and pushes terabytes of data to the Internet.

This video takes a look at the Cisco Live Network and the use of Gigamon's new software-defined visibility, which leverages APIs to make real-time changes in the types of data under analysis. Software-defined visibility allows the NOC to change the nature of the visibility fabric to provide only the type of data needed by the testing tools in real time. In addition, the video features a live use case presented by JDSU covering software-defined visibility and their tools.

Presented by Andy Huckridge, Director of Service Provider Solutions at Gigamon; Joe Clarke, Distinguished Engineer at Cisco; and Charles Thompson, Senior Director, Product Line Management, at JDSU.

See video:  https://youtu.be/giYXwy2thlQ

Tuesday, July 14, 2015

SK Telecom Deploys Accedian's Performance Monitoring

Accedian Networks announced the deployment of its performance monitoring solution across the SK Telecom mobile network -- encompassing 12,000 locations across the six largest cities in South Korea.

Accedian's virtualized solutions collect precise network performance indicators in real time from SK Telecom's existing network infrastructure, as well as from Accedian's NFV-based (network function virtualization) performance assurance modules. SK Telecom operates a multi-vendor network and ensures each vendor's equipment supports the RFC 5357 Two-Way Active Measurement Protocol (TWAMP), a vendor-independent, global performance monitoring standard. Accedian's monitoring solution acts as a uniform instrumentation layer, centralizing a real-time view of network performance throughout the network. Where access equipment lacks TWAMP support, SK Telecom installs Accedian's Nano smart SFP modules to ensure ubiquitous end-to-end network coverage.
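As a highly simplified illustration of the two-way measurement idea behind TWAMP, the Python sketch below uses four timestamps (sender transmit, reflector receive, reflector transmit, sender receive) to compute delay while excluding reflector processing time; it ignores the actual RFC 5357 packet formats, control protocol and clock synchronization concerns, and the function names are hypothetical:

import time

def reflector(t1):
    """Session reflector: record receive and transmit timestamps."""
    t2 = time.monotonic()          # test packet received
    # ... reflector-side processing would happen here ...
    t3 = time.monotonic()          # reflected packet sent
    return t1, t2, t3

def sender_measure():
    """Session sender: compute round-trip time minus reflector dwell time."""
    t1 = time.monotonic()          # test packet sent
    t1, t2, t3 = reflector(t1)     # local call stands in for the network round trip
    t4 = time.monotonic()          # reflected packet received
    rtt = (t4 - t1) - (t3 - t2)    # exclude time spent inside the reflector
    return rtt * 1000.0            # milliseconds

print(f"two-way delay: {sender_measure():.3f} ms")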

Accedian said SK Telecom plans to extend coverage to all sites serving over 28 million subscribers nationwide.

"At SK Telecom, best-possible quality of customer service and experience is at the heart of our reputation and our business," stated Choi, Seung-won, Senior Vice President and Head of Network Solution Office, SK Telecom. "Partnering with Accedian helps us ensure the highest levels of QoS and QoE, which is particularly important as we continue to extend our network towards 5G, and to expand coverage with small cells, making the need for 24x7 end to end network visibility critical. Accedian's performance monitoring solutions make this possible. SK Telecom selected Accedian's TWAMP performance assurance solution specifically as the quality indicator and standard for LTE business-to-business networks."

"SK Telecom is a premier example of a mobile service provider pushing network performance and technology to the limit," stated Accedian Founder, President, and CEO, Patrick Ostiguy. "They are early adopters and innovators, and their efforts result in the extremely high levels of service and experience quality their customers have come to expect. Accedian is honored to help support their performance assurance objectives through our comprehensive solutions, built with providers like SK Telecom in mind."

http://www.accedian.com
http://www.sktelecom.com




Blueprint: 5G and the Need For SDN Flow Optimization


by Scott Sumner, VP Solutions Development and Marketing, Accedian Networks

As more subscribers run bandwidth-intensive applications from a variety of devices, mobile access networks are increasingly strained to maintain quality. According to Ericsson, annual mobile traffic throughput is predicted to increase from 58 exabytes in 2013 to roughly 335 exabytes by 2020. It’s clear that brute-force bandwidth over-provisioning is no longer an economically...