
Thursday, September 19, 2019

Datadog surges 39% in IPO

Shares in Datadog (Nasdaq: DDOG) surged 39% above the IPO price to close at $37.55 on the first day of trading.

Datadog is a monitoring service for cloud-scale applications, covering servers, databases, tools, and services through a SaaS-based data analytics platform.

In its S-1 filing, the company disclosed revenue of $153.4 million for the first half of 2019.

Datadog was founded in 2010 and is based in New York.


Sunday, August 18, 2019

VMware to acquire Veriflow for network visibility/assurance

VMware announced its planned acquisition of Veriflow, a start-up offering tools for network verification, assurance, and troubleshooting. Financial terms were not disclosed.

Specifically, Veriflow provides:

  • Network modeling in software;
  • Verification of network connectivity and application availability, as well as segmentation assurance; and,
  • Preflight modeling and What-If capabilities to analyze proposed network changes, thus reducing network outages and maintenance windows.

Earlier this year, Veriflow introduced CloudPredict, a SaaS offering that provides visibility and assurance across public cloud network deployments. The SaaS offering is built on the Veriflow verification and analytics platform.

Veriflow is backed by New Enterprise Associates (NEA), Menlo Ventures, the National Science Foundation and the U.S. Department of Defense.

https://www.veriflow.net/

Sunday, December 17, 2017

EXFO to acquire Astellia for mobile subscriber awareness

EXFO has launched an all-cash voluntary tender offer to acquire all of the outstanding shares of Astellia, a provider of network and subscriber intelligence solutions for mobile operators. EXFO already holds 33.1% of Astellia's equity.

The offer is proposed at a price of EUR 10 per Astellia share, valuing the entirety of Astellia's equity (on a fully diluted basis) at approximately €25.9 million.

Astellia's real-time monitoring and troubleshooting solution optimizes networks end-to-end, from radio to core.  The company is based in France with significant operations in Spain and a strong presence in Canada, Lebanon, Morocco and South Africa.

"We aim to combine the two companies and create a global leader in the service assurance and analytics industry", said Germain Lamonde, EXFO's founder and Executive Chairman of the Board. "Combining our complementary base of customers, technologies and competencies, as well as our similar corporate cultures, will enable the development of game-changing solutions and services within a large market in rapid transition—all this in the best interests of our customers, employees and shareholders."

Tuesday, October 3, 2017

100G - Challenges for Performance and Security Visibility Monitoring

by Brendan O’Flaherty, CEO, cPacket Networks

The 100Gbps Ethernet transport network is here, and the use cases for transport at 100Gbps are multiplying. The previous leap in network bandwidths was from 10Gbps to 40Gbps, and 40Gbps networks are prevalent today. However, while 40Gbps networks are meeting bandwidth and performance requirements in many enterprises, the “need for speed” to handle data growth in the enterprise simply cannot be tamed.

As companies continue to grow in scale, and as their data needs become more complex, 100Gbps (“100G”) offers the bandwidth and efficiency they desperately need. In addition, 100G better utilizes existing fiber installations and increases density, which significantly improves overall data center efficiency.

A pattern of growth in networks is emerging, and it seems to reflect the hypergrowth of data on corporate networks over just the last five years. In fact, the now-famous Gilder’s Law states that backbone bandwidth on a single cable is now a thousand times greater than the average monthly traffic exchanged across the entire global communications infrastructure five years ago.

A look at the numbers tells the story well. IDC says that 10G Ethernet switches are set to lose share, while 100G switches are set to double. Crehan Research (see Figure 1) says that 100G port shipments will pass 40G shipments in 2017, and will pass 10G shipments in 2021.



Figure 1: 100G Port Shipments Reaching Critical Mass, as 40G and 10G Shipments Decline

100 Gigabit by the Numbers

The increase in available link speeds and utilization creates new challenges, both for the architectures upon which traditional network monitoring solutions are based and for the resolution required to view network behavior accurately. Let’s look at some numbers:

100G Capture to Disk

Traditional network monitoring architectures depend on the ability to bring network traffic to a NIC and write that data to disk for post-analysis. Let’s look at the volume of data involved at 100G:

100 Gbps = 10^11 bits/second    (1)

10^11 bits/second ÷ 8 bits/byte = 1.25 × 10^10 bytes/second = 12.5 GB/second    (2)

1 TB ÷ 12.5 GB/second = 80 seconds    (3)

By equation (3), at 100 Gbps on one link, in one direction, a one-terabyte disk will be filled in 80 seconds. Extending this calculation for one day, in order to store the amount of data generated on one 100 Gbps link in only one direction, 0.96 petabytes of storage is required:

12.5 GB/second × 86,400 seconds/day ≈ 1.08 × 10^15 bytes ≈ 0.96 PiB per day    (4)

Not only is this a lot of data (0.96 petabytes is about 1,000 terabytes, equivalent to 125 8TB desktop hard drives), but as of this writing (Aug 2017), a high-capacity network performance solution from a leading vendor can store approximately 300 terabytes, or only eight hours of network data from one highly utilized link.
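
A minimal Python sketch of the capture-to-disk arithmetic above (the 1 TB disk and the single-direction 100G link are the worked example's assumptions, not parameters of any particular product):

    # Back-of-the-envelope capture-to-disk math for one direction of one 100G link
    LINK_BPS = 100e9                       # 100 Gbps = 10^11 bits/second
    BYTES_PER_SEC = LINK_BPS / 8           # 1.25e10 bytes/s = 12.5 GB/s, per equation (2)

    DISK_BYTES = 1e12                      # one 1 TB disk (decimal terabyte)
    print(DISK_BYTES / BYTES_PER_SEC)      # -> 80.0 seconds to fill, per equation (3)

    PER_DAY_BYTES = BYTES_PER_SEC * 86_400   # one day of one-direction traffic
    print(PER_DAY_BYTES / 2**50)             # -> ~0.96 binary petabytes, per equation (4)

Note that the daily figure comes out at 0.96 only in binary petabytes (2^50 bytes); in decimal units the same day of traffic is about 1.08 × 10^15 bytes.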

100G in the Network – What is a Burst, and What is Its Impact?

A microburst can be defined as a period during which traffic is transmitted over the network at line rate (the maximum capacity of the link). Microbursts in the datacenter are quite common – often by design of the applications running in the network. Three common reasons are:

  • Traffic from two (or more) sources to one destination. This scenario is sometimes considered uncommon because the source traffic shows low average utilization, but that impression is an artifact of coarse measurement, as we’ll see when we look at the amount of data in a one-millisecond burst.
  • Throughput maximizations. Many common operating system optimizations to reduce the overhead of disk operations or NIC offloading of interrupts will cause trains of packets to occur on the wire.
  • YouTube/Netflix ON/OFF buffer loading. Common to these two applications but frequently used with other video streaming applications is buffer loading from 64KB to 2MB – once again, this ON/OFF transmission of traffic inherently gives rise to bursty behavior in the network.

The equations below translate 100 gigabits per second (10^11 bits/second) into bytes per millisecond:

100 Gbps = 10^11 bits/second ÷ 8 bits/byte = 1.25 × 10^10 bytes/second    (5)

1.25 × 10^10 bytes/second × 10^-3 seconds = 1.25 × 10^7 bytes = 12.5 MB per millisecond    (6)

The amount of data in a one-millisecond spike is greater than the total amount of (shared) memory available in a standard switch. This means that a single one-millisecond spike can cause packet drops in the network. For protocols such as TCP, the data will be retransmitted; however, the exponential backoff mechanisms will result in degraded performance. For UDP, the lost packets will translate to choppy voice or video, or gaps in market data feeds for algorithmic trading platforms. In both cases, since the packet drops cannot be predicted in advance, and the spikes and bursts will go undetected without millisecond monitoring resolution, the result will be intermittent behavior that is difficult to troubleshoot.
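
To make the buffer pressure concrete, here is a rough Python sketch; the 12 MB shared-buffer figure is an assumed value for a typical data center switch, not a number taken from the article:

    # Bytes arriving in a single 1 ms line-rate burst at 100 Gbps
    LINE_RATE_BPS = 100e9
    burst_bytes = (LINE_RATE_BPS / 8) * 1e-3    # 1.25e7 bytes = 12.5 MB, per equation (6)
    SWITCH_BUFFER_BYTES = 12e6                  # assumed ~12 MB shared packet buffer
    overflow = max(burst_bytes - SWITCH_BUFFER_BYTES, 0.0)
    print(burst_bytes / 1e6, overflow / 1e6)    # -> 12.5 MB burst, 0.5 MB beyond the buffer

Unless the buffer drains as fast as it fills, that excess is dropped, which is exactly the TCP backoff and UDP loss behavior described above.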

Network Monitoring Architecture


Typical Monitoring Stack

The typical network monitoring stack is shown in Figure 2. At the bottom is the infrastructure – the switches, routers and firewalls that make up the network. Next, in the blue layer, are TAPs and SPAN ports – TAPs are widely deployed due to their low cost, and most infrastructure devices provide some number of SPAN ports. The traffic from these TAPs and SPANs is then taken to an aggregation device (or matrix switch or “packet broker”) – at this point, a large number of monitored feeds, typically 96 × 10G, is funneled into a small number of tool ports, usually four 10G ports (a standard high-performance configuration). At the top are the network tools – these take the traffic fed to them from the aggregation layer and produce the graphs, analytics and drill-downs that form dashboards and visualizations.


Figure 2: Typical Network Monitoring Stack

Scalability of Network Monitoring Stack

Let’s now evaluate how this typical monitoring stack scales in high-speed environments.

  • Infrastructure: As evidenced by the transition to 100G, the infrastructure layer appears to be scaling well.

  • TAP/SPAN: TAPs are readily available and match the speeds found in the infrastructure layer. SPANs can be oversubscribed or alter timing, leading to loss of visibility and inaccurate assumptions about production traffic behavior.

  • Aggregation: The aggregation layer is where scaling becomes problematic. As in the previous example, if 48 links are monitored by four 10G tool ports, the ratio of “traffic in” to monitoring capacity is 96:4 (96 being 48 links in two directions) or, reducing, an oversubscription ratio of 24:1. Packet drops due to oversubscription mean that network traffic is not reaching the tools – many links, or large volumes of traffic, are simply not being monitored.

  • Tools: The tools layer depends on data acquisition and data storage, which translates to the dual technical hurdles of capturing all the data at the NIC and writing this data to disk for analysis. Continuing the example, with 96 × 10G monitored feeds and 4 × 10G tool ports, the percentage of traffic measured (assuming fully utilized links) is 4×10G / 96×10G, or 4.2%. As the network moves to 100G (but the performance of monitoring tools does not), the percentage of traffic monitored drops further to 4×10G / 96×100G, or 0.42% (the arithmetic is sketched after this list).
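
A minimal Python sketch of the oversubscription arithmetic in the two bullets above (the link counts and speeds are the running example's, chosen purely for illustration):

    # Oversubscription from the aggregation layer down to the tool ports
    links, directions, link_gbps = 48, 2, 10   # 48 bidirectional 10G links -> 96 x 10G feeds
    tool_ports, tool_gbps = 4, 10              # 4 x 10G tool ports

    feeds = links * directions                                   # 96 monitored feeds
    print((feeds * link_gbps) / (tool_ports * tool_gbps))        # -> 24.0, the 24:1 ratio
    print(100 * tool_ports * tool_gbps / (feeds * link_gbps))    # -> ~4.2% monitored at 10G
    print(100 * tool_ports * tool_gbps / (feeds * 100))          # -> ~0.42% monitored at 100G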

It is difficult to provide actionable insights into network behavior when only 0.42% of network traffic is monitored, especially during levels of high activity or security attacks.

Figure 3: Scalability of Network Monitoring Stack

Current Challenges with Traditional Monitoring Environments

Monitoring Requirements in the Datacenter

Modern datacenter monitoring has a number of requirements if it is to be comprehensive:
  • Monitoring Must Be Always-On. Always-on network performance monitoring means being able to see all of the traffic and to drill down to packets of interest without the delay incurred by activating and connecting a network tool only after an issue has been reported (a delay that leads to reactive customer support rather than the proactive awareness needed to address issues before customers are affected). Always-on KPIs at high resolution provide a constant stream of information for efficient network operations.
  • Monitoring Must Inspect All Packets. To be comprehensive, NPM must inspect every packet and every bit at all speeds, without being affected by high traffic rates or minimum-sized packets. NPM solutions that drop packets (or monitor only 0.42% of them) as data rates increase do not, by definition, provide the accuracy to understand network behavior when accuracy is most needed – when the network is about to fail due to high load or a security attack.
  • High Resolution is Critical. Resolution down to 1ms was not mandatory in the days when 10Gbps networks prevailed. But there’s no alternative today: 1ms resolution is required for detecting problems such as transients, bursts and spikes at 100Gbps.
  • Convergence of Security and Performance Monitoring (NOC/SOC Integration). Security teams and network performance teams are often looking for the same data, with the goal of interpreting it based on their area of focus. Spikes and microbursts might represent a capacity issue to a performance engineer but early signs of probing by an attacker to a security engineer. Growing response time may reflect server load to a performance engineer or indicate a reflection attack to the infosec team. Providing the tools to correlate these events, given the same data, is essential to efficient security and performance engineering.

A Look Ahead

100G is just the latest leap in Ethernet-based transport in the enterprise. With 100G port shipments growing at the expense of 40G and 10G, the technology is on a trajectory to become the dominant data center speed by 2021. According to Light Reading, “We are seeing huge demand for 100G in the data center and elsewhere and expect the 100G optical module market to become very competitive through 2018, as the cost of modules is reduced and production volumes grow to meet the demand. The first solutions for 200G and 400G are already available. The industry is now working on cost-reduced 100G, higher-density 400G, and possible solutions for 800G and 1.6 Tbit/s.”


Tuesday, April 25, 2017

Gigamon Delivers Security Intelligence at 10/40/100G

Gigamon introduced its GigaVUE-HC3, a high-performance appliance to enable pervasive visibility and security intelligence at scale in 10Gb, 40Gb and 100Gb networks.

The new GigaVUE-HC3 extends Gigamon's Visibility Platform and GigaSMART technologies with higher compute and throughput performance, offering:

  • A total of 800Gbps of GigaSMART traffic intelligence per node, scaling to 25Tbps in clustered configurations
  • Up to 3.2Tbps of processing per node that scales to over 100Tbps per cluster.

“Organizations that deal with large volumes of network traffic are increasingly concerned about the attack surface posed by high-speed, distributed infrastructure and the ensuing challenges created for network security teams,” said Ananda Rajagopal, vice president of products at Gigamon. “Ensuring visibility and control in such environments is not just about tapping network traffic but also rapidly finding the proverbial needle in the haystack. The GigaVUE-HC3 is the first platform in the industry to provide intelligent visibility at scale.”

GigaVUE-HC3 will be generally available in May 2017.

http://www.gigamon.com

Wednesday, February 22, 2017

cPacket Supports Distributed Wire-speed Monitoring and Analytics at 100G

cPacket Networks, a provider of next-generation network performance monitoring and analytics solutions, announced a new network monitoring solution that provides wire-speed network monitoring at 100 Gbit/s under any traffic condition.

The new cPacket Cx4100 solution is claimed to be the first to monitor the network at the wire, almost eliminating the risk of false positives, false negatives and dropped packets – issues that can affect legacy systems such as packet brokers and packet capture NICs, which require backhauling of network traffic for analysis. The solution features cPacket's distributed architecture design, which enables monitoring at multiple points throughout the network.

The Cx4100 solution is designed to support monitoring for compliance, capacity planning and troubleshooting on high-speed networks and allows network engineers to monitor 100 Gbit/s links with millisecond resolution to enable capacity planning and provide wire-speed packet KPIs, as well as address regulatory monitoring requirements. The Cx4100 also allows scalable capture of selected packets to help speed troubleshooting and time-to-resolution.

cPacket's new Cx4100 solution also features:

1. High-performance spike detection (millisecond speeds at up to a thousand feeds) at 100 Gbit/s through hardware analytics.

2. Four 100 Gbit/s interfaces, with all interfaces supporting monitoring at wire-speed.

3. Nanosecond-precision timestamping, a key requirement for regulated environments.

4. Automated load balancing at rates from 100 Gbit/s down to 10 Gbit/s.

The Cx4100 is the latest member of cPacket's Integrated Monitoring Fabric, which includes cStor forensic storage for online storage and analytics, and the SPIFEE management dashboard, which provides a unified view for monitoring and managing 1 Gbit/s to 100 Gbit/s networks. The Cx4100 additionally incorporates RESTful APIs to facilitate integration with third-party automation solutions.

Separately, cPacket announced a next-generation NPM platform targeting 40 and 100 Gbit/s networks for monitoring and reporting network and application-related issues at wire-speed with nanosecond timestamping and millisecond accuracy. The solution can scale from 1 up to 100 Gbit/s and provides multi-hop analysis for multiple KPIs within the same context or a single KPI within multiple contexts.

https://www.cpacket.com/

Wednesday, February 1, 2017

Network Visibility 2017 - AT&T



Carriers around the globe are embracing SDN and NFV to enhance the experience of their customers while also increasing their organizational agility and driving down operational costs, says Paul Hooper, CEO of Gigamon.

AT&T really started with the customer experience, says Josh Goodell, VP, Network on Demand, AT&T. Customers want control and flexibility in the network. He discusses how AT&T has reengineered the core fabric of its network to enhance the customer experience.

Series produced by Converge! Network Digest and sponsored by Gigamon.

https://youtu.be/rWw5ZV6gztA


Monday, January 30, 2017

Network Visibility 2017 - Verizon



Network operators are in the early days of a significant transformation as they provide visibility and insight into the performance of applications running across their infrastructure. Paul Hooper, CEO of Gigamon, provides perspective on the power of visibility solutions to transform networks.

Customers want better visibility and Verizon is now providing end-to-end metrics of network services, says Greg Harris, Associate Director of Ethernet Services, Verizon.

Series produced by Converge! Network Digest and sponsored by Gigamon.

https://youtu.be/AEVdhDlmb6c

Sunday, January 29, 2017

Network Visibility 2017 - Level 3 Communications



Managing the customer experience has to be the highest priority for Service Providers, says Andy Huckridge, Director of Service Provider Solutions at Gigamon.  The next wave of software-defined services has started to roll.

Level 3 is keenly focused on the customer experience, says Adam Saenger, VP Product Development and Management, Level 3. The customer experience extends across a continuum of shop, buy, get, use, pay and renew.  Each step needs to be assessed.

Series produced by Converge! Network Digest and sponsored by Gigamon.




Thursday, November 3, 2016

Gigamon Enhances FlowVUE Feature with Overlapping Flow Samples

Gigamon introduced a subscriber-based IP sampling paradigm that helps service providers turn big data into manageable data, providing greater visibility and insight.

The new capabilities in Gigamon’s FlowVUE application enable users to send overlapping flow samples to multiple analytic tools at the same time.

Gigamon said that by expanding tool rail depth of analysis, operators can gain a multi-dimensional understanding of subscriber behavior and deeper traffic insight for improved and actionable intelligence.

FlowVUE also increases tool performance by scaling the traffic to fit the attached tool processing throughput capacity as the network traffic volumes shift during high and low peak times. Service providers gain new operational efficiencies by avoiding the unnecessary traffic and tool dimensioning of over- and under-subscription across a wide set of tools. The optimized throughput processing allows operators to dimension their tool rail for a specific percentage of the typical busy hour and apply cost savings elsewhere on their network.


“For the first time, service providers can experience higher levels of subscriber-aware visibility and intelligence to make real-time decisions and improved customer experiences,” said Andy Huckridge, director of service provider solutions at Gigamon. “By centralizing and correlating previously siloed data samples with Gigamon’s FlowVUE, service providers can now offer expanded services to their high-value subscribers, create new revenue streams, de-risk new technology rollouts and gain an operational advantage in the process.”

http://www.gigamon.com

See 1 Minute video: https://youtu.be/9BXVMfGM7SA



Tuesday, October 18, 2016

Gigamon Extends Visibility Coverage for Small Enterprises and Remote Sites

Gigamon is extending the visibility coverage of its GigaSECURE Security Delivery Platform from large data centers to small enterprises and remote sites in distributed enterprises with the introduction of its GigaVUE-HC1 appliance.

GigaVUE-HC1 is a modular one rack unit visibility appliance with twelve 10 gigabit (Gb) interfaces, four 1Gb interfaces, two modular slots for expansion and GigaSMART hardware processing built into the base system. It uses the same GigaVUE-OS operating system that powers the Gigamon portfolio, supporting capabilities such as Flow Mapping, inline bypass and multiple GigaSMART traffic intelligence applications.

The new GigaVUE-HC1 appliance provides a centralized way to administer and manage all tools, whether inline or out-of-band, making it well suited for remote sites with independent tools, such as call centers. Gigamon said enterprises and service providers can also use the GigaVUE-HC1 to selectively backhaul traffic flows and metadata of interest to centralized tools, enabling consolidation of tools at central sites.

“Distributed and small enterprises are an often overlooked, yet an important market segment with specific visibility needs. GigaVUE-HC1 expands the scope of our successful GigaSECURE platform, enabling organizations to see more and increase the scope of security coverage,” said Ananda Rajagopal, vice president of products at Gigamon. “GigaVUE-HC1 delivers a rich combination of metadata and traffic intelligence with the flexibility to deploy both inline and out-of-band security tools, thereby reducing the time to deploy and maintain security tools at smaller sites.”

http://www.gigamon.com

Wednesday, September 7, 2016

NETSCOUT Debuts its InfiniStreamNG Visibility Platform

NETSCOUT SYSTEMS released its next-generation, real-time information platform called the InfiniStreamNG featuring multiple form factors and deployment options: virtual, software and hardware appliances.

InfiniStreamNG, which provides end-to-end visibility across data center, cloud, and hybrid infrastructures for both enterprise and service provider customers, is the first proof point of the combined assets and technologies from NETSCOUT’s strategic acquisition of Danaher Corporation’s Communications Business in mid-July 2015, which included Tektronix Communications, Arbor Networks and the enterprise portions of Fluke Networks.

The company said the new architecture of InfiniStreamNG “mines” IP traffic intelligence in real time, to deliver timely, accurate and actionable information to service assurance, cybersecurity, and business intelligence applications. It leverages an advanced and extended version of NETSCOUT’s patented Adaptive Service Intelligence™ (ASI) technology, now termed ASI Plus, to seamlessly incorporate technologies from the Danaher Communications acquisition, and facilitate support for a wider range of new analytic software from NETSCOUT, from strategic partners and from other third parties. As a result, the InfiniStreamNG is positioned to serve as the industry’s most versatile real-time metadata technology.

“Today’s most successful and innovative companies, both in the enterprise and the carrier sector, realize that the flawless delivery of digital services, agile deployment, and cost-effective operations require real-time, actionable intelligence,” said Anil Singhal, co-founder, president and chief executive officer, NETSCOUT. “Three years ago, we saw the opportunity to leverage IP convergence to provide scalable, real-time access to the mission-critical data needed to drive our customer’s digital initiatives. Since then, we have made significant investments in both organic innovation and strategic acquisitions to capitalize on this massive opportunity.”

The InfiniStreamNG has been shipping to a number of service provider customers for several quarters and will be available to all customers this month.

http://www.netscout.com/press-release/netscout-unveils-industrys-first-real-time-information-platform-for-service-assurance-cybersecurity-and-big-data/


NetScout Acquires Danaher’s Communications Business

NetScout Systems completed its acquisition of Danaher Corporation’s Communications Business. The deal was valued at $2.3 billion and involved the issuance of 62.5 million shares of NetScout common stock at $36.89 per share to Danaher’s shareholders.

The acquisition includes Tektronix Communications, Arbor Networks and parts of the Fluke Networks businesses, all of which were owned by Danaher Corp.  The deal was first announced in October 2014.

Danaher’s Communications business generated revenue (unaudited) of approximately $836 million for the year ended December 31, 2013.

Danaher’s Communications business, which has over 2,000 employees worldwide, includes: 

Tektronix Communications, based in Plano, Texas, which provides a comprehensive set of assurance, intelligence and test solutions and services support for a range of architectures and applications such as LTE, HSPA, 3G, IMS, mobile broadband, VoIP, video and triple play. Also included are VSS Monitoring and Newfield Wireless.

Arbor Networks, based in Burlington, Massachusetts, which provides solutions that help secure the world’s largest enterprise and service provider networks from DDoS attacks and advanced threats.

Fluke Networks, based in Everett, Washington, which delivers network monitoring solutions that speed the deployment and improve the performance of networks and applications. The data cabling tools business and carrier service provider (CSP) tools business within Fluke Networks are not included in this transaction.

“This acquisition represents an important milestone for NetScout that enhances our ability to drive value for customers, stockholders, employees and other stakeholders,” stated Anil Singhal, president and CEO.   “With a broader range of market-leading capabilities and technologies, as well as more extensive, global go-to-market and distribution resources, NetScout will be better positioned to capitalize on the many exciting opportunities we see to further expand our customer relationships around the world.  We welcome over 2,000 new colleagues to NetScout and collectively, we are looking forward to realizing the Company’s potential in the marketplace.”

NetScout also announced today that it has secured a new five-year, $800 million senior secured revolving credit facility that replaces its previous revolving credit facility of $250 million.

http://www.netscout.com

Danaher acquired Tektronix in 2007 for $1.1 billion.

Danaher acquired Arbor Networks in 2010.

Tuesday, August 23, 2016

Arista Adds Telemetry Features with HPE, SAP, Veriflow and VMware

Arista Networks is rolling out new telemetry and analytics capabilities for cloud networks.

Together, the new Arista EOS (Extensible Operating System) and CloudVision provide visibility into network workloads, workflows, and workstreams on a network-wide basis.

Key features include:

  • Instantaneous event-driven streaming of every state change, providing improved granularity compared to traditional polling models.
  • State visibility from all devices in the network, including configuration, counters, errors, statistics, tables, environmentals, buffer utilization, flow data, and much more.
  • CloudVision Analytics Engine for storing state history and performing trend analysis, event correlation, and automated alerts – the basis for both real-time monitoring and historical forensic event investigation.
  • New Telemetry Apps for the CloudVision Portal, including the Workstream Analytics Viewer, providing simplified visualization of network-wide state for faster time to resolution.
  • An open framework, built on standard RESTful APIs as well as OpenConfig-based infrastructure, providing a point for integration into a variety of partner solutions and customer-specific infrastructure.
  • Expansion of existing EOS Telemetry Tracer capabilities across device, topology, virtual machine, container, and application components.

“The automated network operations in today’s cloud networks are dependent on both a highly programmable software infrastructure as well as deeper visibility into what the network is doing. Legacy approaches to visibility fall short of these cloud requirements,” said Ken Duda, CTO and Senior Vice President, Software Engineering for Arista Networks. “The Arista state-streaming approach provides an open framework with unprecedented levels of completeness and granularity for network state information. Our CloudVision platform harnesses streamed network state to provide customers of all types with clearer real-time and historical visibility into their network.”

Arista said its partner ecosystem can leverage the benefits of this new telemetry solution via access to network-wide state through common APIs at multiple integration points. Arista’s partners can access the network state either streamed directly from the devices or from the central state repository within CloudVision. The Arista CloudVision Telemetry solution is endorsed by Hewlett Packard Enterprise, SAP, Veriflow and VMware.

https://www.arista.com/en/company/news/press-release/1463-pr-20160822

Thursday, August 11, 2016

Ixia Adds High-density, Inline Bypass Switch

Ixia introduced a modular, high-density, inline bypass switch designed to help ensure that security tools have continuous visibility into network traffic.

The new iBypass VHD switch joins the company’s expanding portfolio of bypass switches.

Ixia said its iBypass VHD is the first bypass switch to support both active-standby and active-active network/security architectures. Active-standby offers improved security resilience with a primary and secondary tool that helps ensure ongoing network monitoring. Active-active enables a diverse deployment of security tools to maximize existing and new IT investments.

It supports up to twelve 10G bypass switches in one unit, three times more than previous versions. iBypass VHD is currently available via authorized Ixia channel partners at a starting US List price of $21,000.

Ixia iBypass VHD offers centralized and simplified management, leveraging Indigo Pro -- a central management tool for bypass switches that simplifies and speeds configuration and management of tens to hundreds of devices.

http://www.ixiacom.com

Wednesday, August 3, 2016

Riverbed: Olympic Games Could Strain Enterprise Networks

The majority of global enterprise network managers expect to be monitoring performance trends next week as the summer Olympics are underway in Brazil, according to a survey of 403 companies commissioned by Riverbed.

The survey found that network managers are wary of employees accessing Olympic video content during business hours, potentially impacting the performance of mission-critical applications.

Riverbed said 85% of surveyed companies reported that they were likely to more closely monitor the performance of their applications and networks, including Wi-Fi, specifically because of potential strain due to employees accessing Olympic content, with 42% of these same companies being very likely to monitor more closely. Only 2% stated that they were very unlikely to monitor any differently during the Olympics.

"As athletes prepare for the games, IT organizations need to prepare for the significant increase in network traffic that will occur as a result of employees accessing and streaming online content and applications, and the related increase in volatility of that network demand,” said Mike Sargent, Senior Vice President and General Manager, SteelCentral at Riverbed.

http://www.riverbed.com

Thursday, July 28, 2016

Riverbed to Acquire Aternity for End User Experience Monitoring

Riverbed Technology agreed to acquire Aternity, a provider of End User Experience (EUE) and application performance monitoring solutions.

Aternity helps enterprises see the entire user experience for any application running on any device, providing a user-centric vantage point on application performance. Aternity said it currently monitors more than 1.7 million mobile, virtual and desktop workforce endpoints. The company is based in Westborough, MA.

Riverbed said the acquisition of the privately held company will expand its SteelCentral performance monitoring solutions with an end user experience offering, and provide Riverbed customers and partners with an end-to-end visibility solution spanning network, application and end user experience performance management.

“Aternity is another exciting and strategic acquisition for Riverbed. Their innovative end user experience monitoring offering perfectly complements and extends our SteelCentral solutions,” said Jerry M. Kennelly, Riverbed Chairman and Chief Executive Officer. “With the increased use of mobile devices, virtual desktop environments and the cloud, the ability to manage end user experience has become more important and complex for IT organizations. With this acquisition, Riverbed and our partners are now uniquely positioned to provide CIOs and businesses with a complete view across networks, applications and end users, all in one solution.”

The acquisition also follows Riverbed’s purchase of leading SD-WAN provider Ocedo in January 2016, which enabled Riverbed to bring its application-defined SD-WAN (software-defined wide area network) solution, SteelConnect, to market in April. Additionally, Riverbed offers a comprehensive Application Performance Platform that delivers end-to-end visibility, optimization and control.

http://www.riverbed.com/solutions/end-user-experience-monitoring.html

Tuesday, July 19, 2016

Cedexis Measures Latencies of Major U.S. Public Clouds

Cedexis, which specializes in Internet performance monitoring and optimization, released data measuring the performance of major clouds in the U.S. during March/April 2016.

The data includes Real End User Measurements for:

  • AWS
  • Azure
  • SoftLayer
  • Rackspace
  • Google

The report specifically includes findings on the following:

  • Worst to First: Latency of 17 US based Clouds (in Milliseconds)
  • Regional comparison of US based Clouds - US Latency in Milliseconds
  • Regional comparison - 8 best clouds with regard to US Latency, March/April 2016
  • Regional Comparison of top 4 Clouds with regard to Latency
  • US Availability of Clouds – Regional Breakout
  • Southwest latency over a 48 hour period – abnormal diurnal congestion
  • Southwest Availability over same 48 hour period – Google wins!

Cedexis noted that there are significant diurnal patterns indicating congestion at major peering points, and that this congestion affects the different clouds to varying degrees.

http://www.cedexis.com/

NETSCOUT Announces nGenius for Flows

NETSCOUT SYSTEMS announced its "nGenius for Flows" solution for extending its Adaptive Service Intelligence analysis to flow-based data sources.

Specifically, nGenius for Flows, an integrated extension to nGeniusONE, adds NetFlow and other flow data to the core packet flow. All of these data sources are converted to proprietary Adaptive Service Intelligence® (ASI) data for business assurance analytics.

“The digital transformation pace today requires enterprises to have real-time visibility into the health and dependencies of their key digital initiatives and into the infrastructure supporting them so that they can accelerate time to deployment and reduce risk to service continuity and quality,” explained Michael Szabados, chief operating officer at NETSCOUT. “With the introduction of nGenius for Flows, NETSCOUT offers the most extensive and most scalable service monitoring capability and enables the largest global enterprises and government agencies to deploy major new initiatives with confidence.”

http://www.netscout.com/press-release/netscout-strengthens-its-powerful-business-assurance-lineup-for-the-enterprise-with-ngenius-for-flows/

Monday, July 11, 2016

Gigamon Brings Automated Data Center Topology Visualization

Gigamon announced an automated network topology visualization capability for managing visibility infrastructure at scale in large data centers.

The new capability provides end-to-end visualization of Gigamon's Visibility Fabric components, the associated production networks and interfaces from which traffic is sourced, and the connected security and monitoring tools that analyze this traffic. It provides automated discovery of the attached networks using the Link Layer Discovery Protocol (LLDP) or Cisco Discovery Protocol (CDP).

Gigamon said that when security and monitoring tools attached to its Security Delivery Platform detect abnormal behavior, an administrator can quickly trace back to the network interfaces at the source of the abnormality, significantly reducing the mean time to resolution (MTTR).

“We remain committed to offering vendor-agnostic visibility and are uniquely able to meet the needs of customers with large data centers and clouds,” said Sesh Sayani, Director of Product Management, Gigamon. “We recognize that the Visibility Fabric is rapidly becoming essential data center infrastructure and we will continue to be customer-driven in the capabilities that make our market-leading solution easy to use for troubleshooting and security.”

http://www.gigamon.com

Pluribus Advances its Network Monitoring Solution

Pluribus Networks introduced its VCFcenter -- a single pane of glass that combines a big data approach to network visibility with web-scale analytics to offer a business-level network analytics solution.

VCFcenter, which can be deployed in any new or existing campus, branch or data center, is an analytics platform that provides a wide range of foundational services, including secure user access, a common user interface and a shared data repository, to all of the applications hosted within its framework.

Pluribus said its VCFcenter allows organizations to collect and analyze contextual information about business service application flows, and can scale into the billions of flows for web-scale applications. VCFcenter and its applications provide performance metrics associated with the use of any business service, from the packet, to the network flow or even the application level.

“The Modern Enterprise is rapidly adopting software-defined and hyperconverged compute and storage solutions due to the simplicity and value they offer the end user. They are also adding big data, mobility and even IoT applications to their IT strategies,” said Tom Burns, VP and GM of Networking Products at Dell. “The network itself and the business-level analytics which can be derived from it have once again become a critical success factor for our customers. Working with Pluribus enables Dell and its partners to offer an affordable and complementary business-management solution at the network layer.”

http://www.pluribus.com
