
Wednesday, June 12, 2019

Edgewise raises $11 million for microsegmentation

Edgewise Networks, a start-up based in Burlington, Massachusetts, announced $11 million in funding for its microsegmentation platform based on software identity.

The funding round was led by existing investors .406 Ventures and Accomplice, with additional participation from Pillar.

Edgewise reduces the network attack surface in cloud and data center environments. Edgewise said it automatically protects application workloads in seconds, adding provable security to hybrid cloud environments. Machine learning and advanced analytics enable the rapid discovery of application communication topology and attack pathways. This real-time visibility allows security teams to microsegment environments with a single click. Policies are enforced no matter where the application resides — on premises, in the cloud, or in a container — and remain in effect even as the underlying network changes.
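To illustrate the general concept of identity-based microsegmentation (a hypothetical sketch, not Edgewise's actual implementation; the identities and policy table are invented), a policy keyed on software identity rather than network addresses continues to hold as workloads move between hosts, clouds, and containers:

```python
# Hypothetical sketch of identity-based microsegmentation: the policy keys on
# WHAT is communicating (a software identity), not WHERE it runs (an IP
# address), so it survives IP changes as workloads move across hosts,
# clouds, or containers. All names and fingerprints are invented.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkloadIdentity:
    binary_sha256: str    # fingerprint of the executable
    service_account: str  # account the process runs under

def identity_of(binary_path: str, account: str) -> WorkloadIdentity:
    """Derive an identity from the running software itself."""
    with open(binary_path, "rb") as f:
        return WorkloadIdentity(hashlib.sha256(f.read()).hexdigest(), account)

# Invented policy: only the web service may talk to the database service.
WEB = WorkloadIdentity("9f2a" + "0" * 60, "svc-web")
DB = WorkloadIdentity("4c1b" + "0" * 60, "svc-db")
ALLOWED_FLOWS = {(WEB, DB)}

def allow_connection(src: WorkloadIdentity, dst: WorkloadIdentity) -> bool:
    # No IP address, VLAN, or subnet is consulted in the decision.
    return (src, dst) in ALLOWED_FLOWS
```

Because the decision never references an address, the policy needs no update when the underlying network changes, which is the property the paragraph above describes.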

“Our innovative, patented approach makes microsegmentation — one of the hardest problems in cybersecurity — incredibly simple to implement,” said Peter Smith, co-founder and chief executive officer at Edgewise Networks. “With Edgewise, companies can operate their applications in hybrid cloud and container environments with peace of mind, knowing that they are protected. This strong support from our investors will enable us to expand to meet the demand for automated microsegmentation.”

https://www.edgewise.net

Thursday, January 17, 2019

Scalyr appoints Christine Heckart as CEO

Scalyr, a start-up offering a "blazing-fast" log management solution, appointed Christine Heckart as its new CEO. Steve Newman, founding CEO of Scalyr, will assume the role of chairman and founder to focus on advancing the company’s product vision and technology.

Scalyr is a log management platform designed for modern development and deployment. Unlike traditional log management tools designed for the legacy data center, Scalyr is built for next-generation approaches to software development, including microservices and containers. The company is based in San Mateo, California.

Scalyr reported more than 100 percent growth in 2018, adding marquee names such as Cisco, Palo Alto Unified School District, Vanderbilt University, and Worldpay to its customer list.

Heckart most recently served as Senior Vice President at Cisco and as Executive Vice President at Brocade. In addition, Heckart has held multiple executive and C-level positions at global technology brands, including NetApp, Microsoft, and Juniper Networks. Heckart serves on the board of directors at Lam Research Corporation and 6sense.

“As digital customer experiences become increasingly immersive, their underlying systems and code have grown more complex, as have the challenges and bugs. The Scalyr platform helps engineers build and troubleshoot software within modern IT and application environments,” Heckart said. “We have an awesome product that developers use daily, an impressively diverse employee base, and a network-effect built into the architecture itself. A query that takes other companies ten minutes takes Scalyr one second, and it will only get faster and more affordable as we grow.”

Thursday, October 25, 2018

Arctic Wolf raises $45 million for Cyber Security Ops Center service

Arctic Wolf Networks, a start-up based in Sunnyvale, California, with offices in Ontario, Canada, raised $45 million in Series C funding for its security operations center (SOC)-as-a-service.

Arctic Wolf will use the new funding to accelerate company growth and meet the soaring demand for its SOC-as-a-service offering.

The Arctic Wolf service provides a cloud-based security information and event management (SIEM) application combined with a team of expert security engineers committed to the client's operational requirements.

The new funding was led by Future Fund with participation from new investors Adams Street and Unusual Ventures, as well as existing investors Lightspeed Venture Partners, Redpoint Ventures, Sonae Investment Management and Knollwood Investment Advisory LLC. To date, Arctic Wolf has raised $91.2 million.

“Our growing team of security engineers is redefining the economics of security to protect companies of all sizes,” said Brian NeSmith, CEO and co-founder of Arctic Wolf. “In addition to supporting continued company growth, the funding will accelerate expansion of our service offering, as we continue to scale and expand to meet our customers’ individualized needs. We look forward to continuing our momentum and building out our internal vulnerability assessment and endpoint detection and response capabilities, in particular.”

  • Arctic Wolf is headed by Brian NeSmith, who previously was CEO of Blue Coat Systems. Before that, he was the CEO of Ipsilon Networks (acquired by Nokia). 

Monday, July 30, 2018

Cloudify appoints Ariel Dan as CEO

Cloudify, which specializes in IT operations automation technology, named Ariel Dan as its new CEO, replacing Zeev Bikowsky, who served as Chief Executive Officer for nearly a decade.

Prior to joining Cloudify, Dan led two companies to M&A and has extensive experience in building sustainable cloud and SaaS operations.

While leading Cloudify, Bikowsky was also the driving force behind establishing GigaSpaces.

Cloudify is based in Herzliya, Israel, and is funded by Intel Capital, Claridge Israel, BRM Group, FTV Capital, and Formula Vision, as well as additional private investors.

Sunday, January 14, 2018

DENSO invests in ActiveScaler for AI-powered fleet management

DENSO, one of the world’s largest automotive suppliers, has made a significant seed investment in ActiveScaler, a start-up based in Milpitas, California, that is developing Managed MaaS (Mobility-as-a-Service) systems powered by artificial intelligence. Financial terms were not disclosed.

ActiveScaler's website says its FleetFactor AI-powered software leverages thousands of data points collected from a variety of sources such as internal vehicle data, in-vehicle computers, sensors, driver behavior, CRM/ERP, finance, dispatch and other systems.

"DENSO’s focus is to develop technologies that advance the future of mobility, and enable connected and automated driving," said Yoshifumi Kato, Senior Executive Director at DENSO Corporation. "These technologies directly influence the development of MaaS systems, which will disrupt the future of urban mobility for people and goods by making transportation solutions more seamless and accessible."

"We want to be the engine behind the future of MaaS – hence the term “Managed MaaS”, which will transform current fleet businesses to provide next generation mobility services," said Abhay Jain, CEO of ActiveScaler. "Traditional fleet management services and systems are quickly becoming obsolete because of issues like high upfront software and hardware costs, poor ecosystem integration, and lack of flexibility, which are limiting the type and quality of services that can be offered.

Tuesday, September 19, 2017

Minio raises $20m for Multi-Cloud Object Storage

Minio, a start-up based in Palo Alto, California, raised $20 million in Series A funding for open source object storage for cloud-native and containerized applications.

Minio has developed an object storage server that enables developers to store unstructured data on any public or private cloud infrastructure, including multi-cloud deployments. The solution lets users build their own Amazon S3-compatible object storage on bare metal, public cloud or existing SAN/NAS storage infrastructure.
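Because the server exposes an S3-compatible API, any standard S3 client can point at it. A minimal sketch using the boto3 library, with a placeholder endpoint and credentials:

```python
# Minimal sketch: using a generic S3 client (boto3) against a Minio server.
# The endpoint URL and credentials are placeholders for illustration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # the Minio server, not AWS
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object storage")
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read())  # b'hello, object storage'
```

The same code runs unchanged against Amazon S3 by dropping the endpoint_url override, which is the practical meaning of S3 compatibility.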

Minio reports over 10 million downloads since its general availability in January 2017.

The Series A funding round was jointly led by Dell Technologies Capital, General Catalyst Partners and Nexus Venture Partners, with participation by Intel Capital, AME Cloud and Steve Singh.


Thursday, July 13, 2017

FogHorn Targets Edge Intelligence Software at IIoT

FogHorn Systems, a start-up based in Mountain View, California, released its Lightning ML edge intelligence software for the Industrial Internet of Things (IIoT).

The company said its Lightning ML brings the power of machine learning to the edge in three ways:
  • Leverages existing models and algorithms: can execute proprietary algorithms and machine learning models on live data streams produced by physical assets and industrial control systems
  • Makes machine learning OT-accessible: offers tools for operations teams to generate machine learning insights
  • Runs in a tiny software footprint: the Lightning ML platform requires less than 256 MB of memory

Lightning ML supports all x86-based IIoT gateways and OT systems, as well as ARM32 OT control systems (such as PLCs and DCSs). It also supports the newest generation of small-footprint Raspberry Pi-derivative IIoT gateways. The FogHorn Lightning ML software platform can run entirely on premises or connect to any private or public cloud environment.
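As a generic illustration of the kind of small-footprint stream analytics an edge engine performs (a conceptual sketch only, not FogHorn's Lightning ML API), the following scores live sensor readings on the device itself, holding only a tiny rolling window in memory:

```python
# Conceptual sketch of edge stream analytics (not FogHorn's actual API):
# flag anomalous sensor readings on the device itself, keeping only a
# small rolling window in memory rather than shipping raw data to a cloud.
from collections import deque

WINDOW = 50        # samples retained (keeps the memory footprint tiny)
THRESHOLD = 3.0    # z-score beyond which a reading counts as anomalous

window = deque(maxlen=WINDOW)

def is_anomalous(reading: float) -> bool:
    """Score a live reading against recent history, entirely at the edge."""
    window.append(reading)
    if len(window) < WINDOW:
        return False  # not enough history yet
    mean = sum(window) / len(window)
    std = (sum((x - mean) ** 2 for x in window) / len(window)) ** 0.5
    return std > 0 and abs(reading - mean) / std > THRESHOLD
```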

"In the initial launch of FogHorn’s Lightning platform, we successfully miniaturized the massive computing capabilities previously available only in the cloud. This allows customers to run powerful big data analytics directly on operations technology (OT) and IIoT devices right at the edge through our complex event processing (CEP) analytics engine. With the introduction of Lightning ML, we now offer customers the game changing combination of real-time streaming analytics and advanced machine learning capabilities powered by our high-performance CEP engine,” said said FogHorn CEO David C. King.

http://www.foghorn.io


  • In May 2017, FogHorn Systems announced that it had raised additional Series A funding from Dell Technologies Capital and Saudi Aramco Energy Ventures (SAEV). The extended funding brings FogHorn’s total Series A round to $15 million, excluding the conversion of $2.5 million in seed funding. Dell Technologies Capital added to its initial Series A investment. Saudi Aramco Energy Ventures is a new investor in the company.

Tuesday, May 9, 2017

Flex Logix, developer of embedded FPGA technology, raises $5m

Flex Logix Technologies, a supplier of embedded FPGA IP and software headquartered in Mountain View, California, announced it has secured $5 million in Series B equity financing in a round led by existing investors Lux Capital and Eclipse Ventures, with participation from the Tate Family Trust.

Flex Logix was founded in March 2014 to develop solutions for reconfigurable RTL in chip and system designs employing embedded FPGA IP cores and software. Its EFLX technology platform is designed to significantly reduce design and manufacturing risks, accelerate technology development and provide greater flexibility for customers' hardware. In October 2015, the company announced it had raised $7.4 million in a financing round led by dedicated hardware fund Eclipse Ventures (formerly the Formation 8 hardware fund), with participation from founding investors Lux Capital and the Tate Family Trust.

Flex Logix stated that the new funding will be used to expand its sales, applications and engineering teams to meet growing customer demand for its embedded FPGA platform in applications including networking, government, data centres and deep learning.

Targeting chips in multiple markets, the Flex Logix EFLX platform can be used with networking chips with reconfigurable protocols, data centre chips with reconfigurable accelerators, deep learning chips with real-time upgradeable algorithms, base station chips with customisable features and MCU/IoT chips with flexible I/O and accelerators. The company noted that EFLX is currently available for popular process nodes and is being ported to further process nodes based on customer demand.

The Flex Logix technology offers high-density blocks of programmable RTL in any size together with the key features customers require. The solution allows designers to customise a single chip to address multiple markets and/or upgrade the chip while in the system to meet changing standards such as networking protocols. It also allows customers to update chips with new deep learning algorithms and implement their own versions of protocols in data centres.

Regarding the new funding, Peter Hebert, managing partner at Lux Capital, said, "I believe that Flex Logix's embedded FPGA has the potential to be as pervasive as ARM's embedded processors… the company's software and silicon are proven and in use at multiple customers, paving the way to become one of the most widely-used chip building blocks across many markets and for a range of applications".

Pierre Lamond, partner at Eclipse Ventures, commented, "The Flex Logix platform is the… most scalable and flexible embedded FPGA solution on the market, delivering competitive advantages in time to market, engineering efficiency, minimum metal layers and high density… the patented technology combined with an experienced management team led by Geoff Tate, founding CEO of Rambus, position the company for rapid growth".


Wednesday, January 25, 2017

Apstra Demos Wedge Switch Running its OS

Apstra, a start-up based in Menlo Park, California, released its Apstra Operating System (AOS) 1.1.1 and an integration with Wedge 100, Facebook’s second generation top-of-rack network switch.

Apstra said its distributed operating system for the data center network will disaggregate the operational plane from the underlying device operating systems and hardware. Sitting above both open and traditional vendor hardware, AOS provides the abstraction required to automatically translate a data center network architect’s intent into a closed loop, continuously validated infrastructure. The intent, network configurations, and telemetry are stored in a distributed, system-wide state repository.
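The closed-loop idea can be sketched generically: the architect's declared intent sits in the state repository, live telemetry is continuously compared against it, and any drift is flagged for remediation. A minimal illustration (hypothetical, not actual AOS code; the device names and attributes are invented):

```python
# Hypothetical sketch of closed-loop intent validation (not actual AOS code).
# Declared intent and observed telemetry are both plain state; validation is
# a continuous comparison that flags drift between the two.

INTENT = {"leaf1": {"bgp_sessions_up": 4, "uplinks_up": 2}}      # architect's intent
TELEMETRY = {"leaf1": {"bgp_sessions_up": 3, "uplinks_up": 2}}   # observed state

def validate(intent, observed):
    """Return human-readable anomalies wherever telemetry deviates from intent."""
    anomalies = []
    for device, expected in intent.items():
        actual = observed.get(device, {})
        for attr, want in expected.items():
            got = actual.get(attr)
            if got != want:
                anomalies.append(f"{device}: {attr} expected {want}, observed {got}")
    return anomalies

print(validate(INTENT, TELEMETRY))
# ['leaf1: bgp_sessions_up expected 4, observed 3']
```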

“At Apstra we believe in giving network engineers choice and control in operating their network and we are excited to be part of the network disaggregation movement,” said Mansour Karam, CEO and Founder of Apstra, Inc. “We are delighted to have been invited to demonstrate AOS integrated with Wedge 100 today. AOS provides network engineers with advanced operational control and situational awareness of network services, and enables them to design, deploy, and operate a truly Self-Operating Network™ (SON) without vendor lock-in.”

http://www.apstra.com

Facebook Deploys Backpack -- its 2nd Gen Data Center Switch

Facebook unveiled Backpack, its second-generation modular switch platform, developed in-house for 100G data center infrastructure. It builds on Facebook's recently announced Wedge 100 switch.

Backpack is designed with a clear separation of the data, control, and management planes. It uses simple building blocks called switch elements. The Backpack chassis is equivalent to a set of 12 Wedge 100 switches connected together. The orthogonal direct chassis architecture opens up more air channel space for better thermal performance, helping to manage the heat from 100G ASICs and optics. Facebook will use the BGP routing protocol for the distribution of routes between the different line cards in the chassis.

The design has already entered production and deployment in Facebook data centers.  The company plans to submit the design to the Open Compute Project.

https://code.facebook.com/posts/864213503715814/introducing-backpack-our-second-generation-modular-open-switch/

Thursday, April 28, 2016

Innovium Raises $15M, Settles Broadcom Litigation

Innovium, a pre-launch start-up targeting infrastructure solutions, announced the settlement of all litigation with Broadcom.  Financial terms were not disclosed.

Innovium also announced $15 million in Series A funding from Capricorn, Walden Riverwood and other venture capital investors.

The company, which is based in San Jose, California, was founded by Rajiv Khemani (previously Cavium), Puneet Agarwal (previously Broadcom), and Mohammad Issa (previously Broadcom).

Thursday, January 14, 2016

Blueprint: What’s in Store for the Database in 2016?

by Roger Levy, VP of Product at MariaDB

In 2015, CIOs focused on DevOps and similar technologies such as containers as a way to improve time to value. During 2016, greater attention will be focused on data analytics and data management as a way to improve the timeliness and quality of business decisions. How best to store and manage data is on the minds of most CIOs as they kick off the New Year. It’s exciting to see that databases, which underlie every app and enterprise on the planet, are now back in the spotlight. Here’s what organizations anticipate for next year.

Securing your data at multiple layers
2015 saw every type of organization, from global retailers to the Catholic Church, experience financial losses and reputation damage from data breaches. Security has long been a concern of CIOs, but the growing frequency of high-profile attacks and new regulations make data protection a critical 2016 priority for businesses, governments, and non-profit organizations.

Organizations can no longer rely on just a firewall to protect their data. Amidst a myriad of threats, a robust security regimen requires multiple levels of protection including network access, firewalls, disk-level encryption, identity management, anti-phishing education, and so forth. Ultimately, hackers want access to the contents of an enterprise's database, so securing the database itself must be a core component of every organization’s IT strategy.


Prudent software development teams will use database technology with native encryption to protect data as it resides in the database, and SSL encryption to protect data as it moves between applications. They will also control access to the database with stronger password validation and a variety of access authorization levels based on a user’s role. Of course, organizations can’t kick back and rely on software alone; they still have to hold themselves accountable via regular audits and testing.
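For example, enforcing encryption in transit from the application side can be as simple as requiring a verified TLS connection to the database. A generic sketch using the PyMySQL driver; the host, credentials, and certificate path are placeholders:

```python
# Generic sketch: encrypting data in motion between an application and its
# database over TLS, under a least-privilege account. Host, credentials,
# and certificate path are placeholders, not a real deployment.
import pymysql

conn = pymysql.connect(
    host="db.example.internal",
    user="app_reader",                        # least-privilege role, not root
    password="fetch-me-from-a-secrets-vault",
    database="orders",
    ssl={"ca": "/etc/ssl/certs/db-ca.pem"},   # verify the server certificate
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")
        print(cur.fetchone())
finally:
    conn.close()
```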

Migrating to the cloud 
With the recent revenue announcements by public cloud providers such as Amazon AWS and Microsoft Azure, it is clear that adoption of public cloud services is becoming mainstream. But public clouds may never fully replace on-premises data storage. While the cloud offers greater scalability and flexibility, better business continuity and disaster recovery, and capital cost savings, for economic and security reasons companies continue to operate a mix of public cloud, private cloud and traditional on-premises data management solutions.

Managing data across multiple environments also presents challenges. Navigating the myriad of data privacy regulations across the globe, integrating applications and data across private and public infrastructures, and managing latency issues are a few of the challenges organizations face in their migration to the cloud. Enter the hybrid cloud, where IT organizations combine the benefits of traditional data storage, private cloud and public cloud.

In 2016, we’ll likely see hybrid clouds surge in popularity as an alternative to either a purely public or purely private cloud solution. Greater focus will be applied to developing solutions that make migration to hybrid cloud infrastructures more secure and efficient, and that support use cases such as cloud bursting when bandwidth demand spikes, or disaster recovery by replicating databases to the cloud as backups.

Multi-model databases 
The variety, velocity and volume of data are exploding. Every minute we send over 200 million emails and over 300 thousand tweets. By 2013, 90% of the world's data had been created in the preceding two years. But size is not everything. Not only have the volume and velocity of data increased, there is also an increasing variety of formats of data that organizations are collecting, storing and processing.

While different data models have different needs in terms of insert and read rates, query rates and data set size, companies are tiring of the complexity of juggling different databases. Next year will kick off an increased trend toward data platforms that offer “polyglot persistence” – the ability to handle multiple data models within a single database. The demand for multi-model databases is exploding as Structured Query Language (SQL) relational data from existing applications and connected devices must be processed alongside JavaScript Object Notation (JSON) documents, unstructured data, graph data, geospatial data and other forms of data generated in social media, customer interactions, and machine-to-machine communications.
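A toy illustration of this side-by-side handling, using SQLite's built-in JSON functions purely to keep the example self-contained (MariaDB and other SQL databases expose similar JSON functions; the schema and values are invented):

```python
# Toy sketch of polyglot persistence: relational columns and a JSON document
# column in one table, queried together in a single SQL statement. Uses
# SQLite's JSON functions (available in recent builds) for portability;
# the schema and data are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user TEXT, payload TEXT)")
db.execute(
    "INSERT INTO events (user, payload) VALUES (?, json(?))",
    ("alice", '{"device": "phone", "geo": {"lat": 48.85, "lon": 2.35}}'),
)

# One query spans the relational column and the document stored inside it.
row = db.execute(
    "SELECT user, json_extract(payload, '$.geo.lat') FROM events"
).fetchone()
print(row)  # ('alice', 48.85)
```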

Growth in applying machine learning
With the rapid growth in the type and volume of data being created and collected comes the opportunity for enterprises to mine that data for valuable information and insights into their business and their customers. As IT recruiters know well, more and more companies are employing specialist “data scientists” to introduce and implement machine learning technologies. But the number of experts in this field simply isn’t growing fast enough, and this rarity makes hiring a data scientist cost-prohibitive for most companies. In fact, the US alone faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the analysis of big data, according to McKinsey & Company. In response, organizations are turning to machine learning tools that enable all of their employees to derive insights without needing to rely on specialists. Just as crucial as collecting data is the need to understand what lies in a company’s database and how it can be turned into valuable insights.

Recently the major public cloud vendors have introduced a variety of machine learning services, including offerings such as Azure ML Studio from Microsoft, the Google Prediction API, Amazon Machine Learning and IBM’s Watson Analytics. We can expect that 2016 will be a year when additional solutions appear and mature, and are recognized as a critical, possibly required, piece of enterprise IT operations. The growth of machine learning will place new demands on databases, which store and manage the data “fuel” for such applications. In 2016, look for a focus on database capabilities that facilitate real-time analytical processing of large data sets.

What can IT personnel do?
With the recent rise of the Chief Data Officer, the widespread adoption of new database technologies, and the acute need for better IT security, the database is back in the spotlight. A CIO’s best bet for staying on top of these trends in 2016 will be the same strategy as in years past: laying down clear policies for who can access data and what it gets used for, while staying on top of new technologies and new threats targeting the integrity of a company’s data.

About the Author

Roger Levy brings extensive international, engineering and business leadership experience to his role as VP, Products, at MariaDB. He has a proven track record of growing businesses, transforming organizations and fostering innovation in the areas of data networking, security, enterprise software, cloud computing and mobile communications solutions, resulting in on-time, high-quality and cost-effective products and services. Previous roles include VP and GM of HP Public Cloud at Hewlett-Packard, SVP of Products at Engine Yard, as well as founding R.P. Levy Consulting LLC.



Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Wednesday, July 16, 2014

Nitero Unveils 60 GHz 802.11ad Silicon

Nitero, a start-up based in Austin, Texas with a design studio in Melbourne, Australia, unveiled its 60 GHz chip implemented in CMOS and based on the IEEE 802.11ad standard. The device uses Samsung’s advanced 28nm RF process based on 28nm HKMG LPP technology.

Nitero said its design is up to 10x more power efficient than 802.11ad solutions designed for the PC, while bringing cost and form-factor in-line with existing 802.11ac Wi-Fi solutions.  Nitero’s NT4600 supports low-latency 4K display and peer-to-peer wireless connectivity at USB 3.0 data rates.  In addition, while single-antenna 802.11ad solutions sacrifice in-room performance to get to low power, the NT4600 supports transmit and receive beamforming to provide full coverage throughout the office, living room or conference room.

“802.11ad, the next generation of Wi-Fi, is the missing link to allow for the long-awaited convergence of PC, gaming and entertainment platforms onto a single mobile device. 802.11ad solutions built for the PC and slimmed down for mobile simply can’t meet the power, performance, and form-factor requirements of Tier 1 mobile customers,” said Pat Kelly, CEO of Nitero. “At Nitero, we targeted the smartphone from day one. The result is 60G.”

Key features of Nitero’s 60G:

  • Samsung 28nm RF CMOS process technology
  • Power reduced by up to 10x over PC 802.11ad solutions
  • Industry-leading output power, noise figure and sensitivity to maximize performance
  • Transmit and receive beamforming to support non-line-of-sight conditions
  • Low-latency 4K wireless display support in living-room, desktop, and conference-room environments
  • Peer-to-peer wireless connectivity using 16-QAM modulation at up to 4.6 Gbps
  • PCI Express host interface to support the latest mobile applications processors while minimizing software overhead
  • Fully-compliant to the ratified IEEE 802.11ad standard
  • Android driver support

Nitero is currently demonstrating the NT4600 to select partners and customers and will start production shipments in 2015.

http://www.nitero.com

  • Nitero is funded by leading venture capitalists with deep semiconductor roots – Austin Ventures, Southern Cross Venture Partners and Trailblazer Capital. 

Thursday, June 6, 2013

Panzura Raises $25 Million for Cloud Storage

Panzura, a start-up based in San Jose, California, raised $25 million in Series D funding for its cloud storage solution.

Panzura offers a cloud-integrated storage system for enterprises with network attached storage (NAS) functionality, native cloud support, a globally distributed file system, built-in FIPS 140-2 certified security and data protection, as well as high speed data transfer rates to and from the cloud.
The company said its customer base grew by 700 percent in 2012 and that it now had the largest number of petabytes under management.

The latest funding round was led by Meritech Capital Partners with participation from its existing investors Matrix Partners, Khosla Ventures, Opus Capital and Chevron Technology Ventures.

"Panzura is delighted to be adding the expertise of our new partners at Meritech, and particularly late-stage funding guru Paul Madera to our board, and we are encouraged by the ongoing support of our existing longtime partners,” said Randy Chou, CEO and co-founder of Panzura. “Panzura provides the only viable cloud-based solution for the enterprise, directly enabling high-value business objectives – such as improved cost, scale, management, availability and global access – while fitting seamlessly into existing IT infrastructures."

http://www.panzura.com

Wednesday, April 3, 2013

Interview: Nuage on Automating Data Centers for Cloud Services and MPLS VPNs


A redacted interview between Jim Carroll, Editor of Converge! Network Digest and Manish Gulyani, VP of Product Marketing, Alcatel-Lucent / Nuage Networks.

Converge! Digest:  How do you describe the Nuage Networks' solution?

Manish Gulyani: The Nuage Networks Virtualized Services platform is a software-only solution for fully automating and virtualizing data center networks. That’s our main value proposition.  As you know, today’s data center networks are very fragile, they use old technology, and they are very cumbersome to operate.  When we looked at cloud services, we found that storage and compute resources had been virtualized quite nicely, but the network really wasn’t there.  We saw a great opportunity to apply the lessons that we have learned in wide area networking along with SDN.  The idea is that if you want to sell cloud services, you need to support thousands of tenants.  And you want each tenant to think that they own their piece of the pie.  It has to feel like the experience of a private network, with full control, full security, full performance of a private network but with the cost advantages of a cloud solution, which is a shared infrastructure.  That’s what we’re bringing to the table with the Nuage solution.

Converge! Digest: So is the Nuage solution aimed specifically at those who want to sell cloud services?

Manish Gulyani: It is designed for anybody who runs a large enough data center that needs automation. For instance, the University of Pittsburgh Medical Center, which is one of our trial customers, does not sell cloud services, but it has enough internal users and external tenants that want full control over a particular cloud resource. If you can’t give them full control and automation, then the cloud resource is of no use. You have to be able to turn up the cloud service as fast as the user turns up a VM, otherwise the cloud service doesn't work. Whether it is a large enterprise, a web-scale company or a cloud service provider, all can benefit from the Nuage solution.

Converge! Digest: What are the strategic differentiators versus other SDN controllers out there?

Manish Gulyani: Some initial SDN solutions have come out in the last two years for data centers. They took the approach of virtualizing primarily at Layer 2, which was a good first step beyond the VLAN architectures. But in our view, this isn't sufficient to go beyond the basic applications. If you are limited to just Layer 2, you are not able to get the application design done the right way. For example, if you want to do a three-tier application, you need to use routing, load balancing, firewalls – and all those elements in a real architecture are very hard to coordinate in current SDN solutions. So that is the first obstacle Nuage overcomes: we give you full Layer 2 to Layer 4 virtualization as a base requirement. Once we’ve done that, the next issue is how do you make it scale? You can’t restrict a cloud service to one data center.

If you have ambitions of being a cloud services provider and you run multiple data centers, you want the power to freely move server workloads between data centers. If you cannot connect the data centers in a seamless fashion, then you haven’t satisfied the demand. So our solution scales to multiple data centers and provides seamless connectivity. The third obstacle we overcome is this: now that the cloud services are running, how can people on a corporate VPN get access to these resources? How can they securely connect to a resource that has just been turned up in a data center?

We provide full, seamless connectivity to a VPN service. We extend from Layer 2 to Layer 4, we make it seamless across data centers, and then we extend it across the wide area network by seamlessly integrating with MPLS VPNs. So those are our virtual connectivity layers.

We also automate it and make it easy to use. A lot of our energy has gone into the policy layer, which lets the user define a service without knowing any network-speak. It’s just IT-speak and no network-speak. It might seem strange for a networking company to say that its customers do not need to learn about VLANs or subnets or IP addresses – just zones and domains and an application connectivity language. When a workload shifts from one data center to another, all of the IP addressing and subnetting has to change, but real users can’t figure this out because it is too hard to do. If this function can just happen in the background, they’re good with that. The final thing we said is that it has to be totally touchless.

The reason people are excited about the cloud is that it is quick. In fact, IT departments worry that users sign up for public cloud services because the internal IT guys can’t deliver quickly enough. If you need 10 new servers or VMs of capacity, why wait 3-4 weeks for your IT department to purchase and install the equipment, when you can log onto Amazon Web Services today and activate this capacity immediately with a credit card? The Nuage policy-driven architecture basically says “turn up the VM, look up the policy, set up the connection” – nobody actually touches the network. That’s our innovation.

Converge! Digest:  Since it is a software suite, what type of hardware do you run on?

Manish Gulyani: Nuage runs on virtual machines. It runs on general-purpose compute. Our Services Directory is a virtual machine on any compute platform. Our Services Controller runs on a VM. And our virtual routing and switching Open vSwitch implementation is essentially an augmentation of what runs today on a hypervisor. You can’t go into a cloud world and propose new hardware because it is a virtualized environment. We have no constraints on the type of compute platform. The whole idea is to apply web-scale technologies. We also offer horizontal scaling, where many instances run in parallel and can be federated.

Converge! Digest:  Alcatel-Lucent is especially known for IP MPLS, and yet Nuage is largely a data center play.  What technologies does Nuage inherit from Alcatel-Lucent that give it an edge over other SDN start-ups?

Manish Gulyani: At Alcatel-Lucent, we learned a lot about building very large networks with IP MPLS. That is a baseline technology deployed globally to offer multi-tenancy with VPNs on shared WAN infrastructure. Why not use similar techniques inside the data center to provide the massive scale and virtualization needed for cloud services? We took our Service Router operating system, which is the software running on all our IP platforms, took the elements that we needed, and then virtualized them. This enables them to run in virtual machines instead of dedicated hardware. This gives us the techniques and protocols for providing virtualization. Then we applied more SDN capabilities, such as a simplified forwarding plane that’s controlled by OpenFlow, which lives in the server and enables us to quickly configure the forwarding tables. Because of the way that we use IP protocols in wide area networks, we can support federation of our controllers. That’s how we link data centers together. They talk standard IP protocols – BGP – to create the topology of the service, and in the same way they extend to MPLS VPNs. As I said, the key requirement for enterprises is to connect to data center cloud services using the MPLS VPNs they are familiar with today. This same SDN controller can now easily talk to the WAN edge router running MPLS VPNs. We seamlessly stitch the data center virtualization all the way to the MPLS VPN in the wide area network and provide end-to-end connectivity.

Converge! Digest:  Two of the four trial customers for Nuage announced so far are Service Providers (SFR and TELUS), presumably Alcatel-Lucent MPLS customers as well, and of course many operators are trying to get into cloud services.  So, is that a design approach of Nuage?  Build off of the MPLS deployments of Alcatel-Lucent?

Manish Gulyani: It doesn't have to be. At Nuage, we don’t need Alcatel-Lucent to be the incumbent supplier to sell this solution. But of course it helps if they already know us and already trust us in running highly scalable networks. So when we talk about scalability of data centers, we have a lot of credibility built in. Both SFR and TELUS have the ambition to offer cloud services. I think they recognize that they must move to virtualization in the data center network and that the connectivity must be extended all the way to the enterprise. Nuage can deliver a solution unlike anything from anybody else today. Existing SDN approaches only deliver virtualization in some subset of the data center; they can’t cross the boundary. Carriers want to have multiple cloud data centers, but they cannot connect their resources easily to MPLS VPNs today. We give them that solution.

Converge! Digest:  In cloud services, it’s becoming clear that a few players are running away with the market.  You might say Amazon Web Services, followed by Microsoft Azure, Rackspace, Equinix and maybe soon Google, are capturing all the momentum.  One thing these guys have in common is a desire to be carrier neutral, so they are not tied to a particular MPLS service or footprint. Will Nuage appeal to these cloud guys too?

Manish Gulyani: We do. In fact, we are talking to some of these guys. As I said, Nuage is not designed only for telecom operators. It is designed for people who want to sell cloud services and who run very large data centers. Carriers with multiple data centers, like Equinix, will need the automation. Until you virtualize and automate the data center, forget about selling cloud services. Step 1 is creating the automation inside the data center. Connecting to MPLS VPNs is step 2. Amazon has been among the first ones, but they had to develop all of this themselves. There was no solution on the market. They built that step 1 automation themselves. We now know that Amazon found it quite cumbersome to get secure connectivity between clouds. They are also experiencing how hard it is to connect a corporate VPN into the Amazon cloud. It can be tedious. If others are going to offer services like Amazon, and they don’t have the size and wherewithal to figure it out themselves, then Nuage will get them there.

Converge! Digest:  On this question of data center interconnect (DCI), Alcatel-Lucent also has expertise at the optical transport layer, especially with your photonic switch. Will Nuage extend this SDN vision to the optical transport layer?

Manish Gulyani: We sell a lot of data center interconnect at both the optical layer and the MPLS layer, such as DWDM hitting the data center and also MPLS in an edge router. We sell a lot of 100G on our optical transport systems because that really is the capacity needed for DCI. So that’s the physical connectivity. The logical connectivity is what you need to move a virtual machine from one data center to another. Even though secure, physical connectivity exists between these data centers, the logical connectivity just is not there today. Nuage gives you that overlay on top of the physical infrastructure to deliver a per-tenant slice with the policy you want.

Converge! Digest:  How big is Nuage as a company in terms of number of employees?

Manish Gulyani:  We haven’t talked publicly about the size of the company or head count.

Converge! Digest:  About this term “spin-in” that is being used to describe Nuage… what does it mean to call Nuage a spin-in of Alcatel-Lucent?  How is the company organized?

Manish Gulyani: Spin-in means that we are an internal start-up inside of Alcatel-Lucent. There is a very good reason Alcatel-Lucent structured this as an internal start-up instead of an external start-up. Nuage leverages so much existing Alcatel-Lucent intellectual property that there was no way it could let that intellectual property out of the company for others to have. We would essentially have had to put out our Service Routing operating system so that others could value and control the intellectual property and associate equity investments with it. This would have been too complicated. Others have tried to spin out a new start-up with third-party investors, only to find that they must acquire it back because they did not want their intellectual property to fall into the hands of others. Still, Nuage has full freedom to develop its solution and the right atmosphere to pull in the right talent. We need a good mix of networking people and IT people. We've been able to bring in guys who did Web 2.0 scaled-out IT solutions.

Converge! Digest: So Nuage is not a separate legal entity that can offer stock options to attract talent?

Manish Gulyani: No, Nuage is a fully funded internal start-up that is not a separate legal entity.

The start-up identity separate from Alcatel-Lucent also enables us to sell into the new cloud market, which is a different space from what Alcatel-Lucent has traditionally pursued. So we can go after a different market and attract new talent while still leveraging the existing intellectual property that is essential to really get a good solution to market. This structure gives us freedom in multiple dimensions.

Tuesday, April 2, 2013

NetSocket Raises $9.2 Million for Expansion into SDN

NetSocket, a start-up based in Plano, Texas, raised $9.2 million in Series B funding for expansion of its network assurance expertise into SDN.

NetSocket currently offers a Cloud Experience Manager (CEM) that provides insight into network issues.


The funding round was led by new investor Venture Investors, with participation by existing investors Sevin Rosen Funds, Silver Creek Ventures and Trail Blazer Capital.

“NetSocket has been innovating in the Unified Communications (UC) service assurance solutions space, as evidenced by the traction generated from our recently announced expanded collaboration with Microsoft. NetSocket’s Cloud Experience Manager (CEM) now optimizes Microsoft Lync UC service management and users’ experience,” said John White, president and CEO of NetSocket. “We plan to apply that same innovation and focused vision to the SDN market, which we expect to experience explosive growth this year.”

http://www.netsocket.com

Tuesday, March 5, 2013

Big Switch Signs Itochu and Net One for Japan

Big Switch Networks announced distribution partnerships with ITOCHU Techno-Solutions Corporation (CTC) and Net One Systems for the Japanese market.  Big Switch Networks’ Open SDN product suite includes an SDN controller that serves as a network application platform for a variety of applications, including network monitoring and data center network virtualization.

Last week, Big Switch announced the addition of Tony Bates to its Board of Directors.  Bates was a long time senior executive at Cisco Systems, and is currently the President of the Skype division of Microsoft.  He joins industry veterans Mike Volpi, Shirish Sathaye, Mark Leslie, Bill Meehan and co-founders Guido Appenzeller and Kyle Forster on the company’s Board.

http://www.bigswitch.com
http://www.netone.co.jp


Anaplan Raises $30 Million for Cloud-based Planning Service

Anaplan, a start-up offering cloud-based modeling and planning solutions for finance and operations, closed $30 million in new venture financing. Using cloud resources, the San Francisco-based company helps corporate customers to dynamically test their operational plans, manage complex multi-dimensional models, collaborate across functions and regions, and share insights and content.

Meritech Capital led the round, along with existing investors Granite Ventures and Shasta Ventures. Anaplan recently opened offices in the UK, France, Sweden and Singapore.

“The enterprise planning and modeling market has been under-innovated for 10 years and is ready to be profoundly disrupted. Anaplan is that long-awaited innovation,” said Anaplan CEO Fred Laluyaux.

http://www.anaplan.com

Tuesday, January 8, 2013

Panaya Raises $16 Million for SaaS Testing


Panaya, a start-up based in Israel, raised $16 million in Series D funding for its ERP testing and SaaS automation solutions.

Panaya is a Software as a Service (SaaS) company that facilitates ERP upgrades and maintenance by providing visibility and control over business application changes during the system's life-cycle. Panaya simulates upcoming upgrades to SAP or Oracle, automatically pinpointing which custom programs will break as a result of the upgrade and automatically fixing most of these problems.


The company said its value proposition is its ability to reduce the time SAP and Oracle users spend on upgrades, testing and maintenance, saving significant money, testing risk and effort.

Panaya claims over 850 customers, most of which are SAP customers. Since introducing its solution for Oracle E-Business Suite (EBS) upgrades last year, the company has surpassed the 100 Oracle customer mark.

Panaya recently opened regional offices in Saddle Brook, New Jersey; Karlsruhe, Germany; and Tokyo, Japan.

The new funding came from Panaya's existing investors, led by Battery Ventures. Also participating in the round were Benchmark Capital and Hasso Plattner Ventures.

"This latest round of investment clearly demonstrates the strong confidence our investors have in the future of Panaya," said Yossi Cohen, Panaya's founder and CEO. "Despite the very strong interest of additional parties to invest in Panaya, we were very happy for the strong endorsement and validation signified by the fact that this round of funding was limited to our existing and satisfied group of investors."

http://www.panaya.com

Thursday, December 20, 2012

Pluribus Raises $23M for Hardware Accelerated Network Virtualization

Pluribus Networks, a start-up based in Palo Alto, California, raised $23 million in series C funding for its hardware-accelerated network virtualization platform for private and public cloud data centers.

Pluribus provides a platform for fabric-based computing that enables applications to move into the network, and to serve both physical and virtual network infrastructure. The solution includes highly optimized Server-Switch hardware along with a programmable, distributed network operating system (Netvisor). The goal is zero-touch provisioning of virtual machines and network services. Pluribus also provides the ability to store full data flows and sessions in each of its F64 Server-Switches. The company has said that across a fabric of F64 Server-Switches, tens of gigabytes per second of real-time analytics can be captured and processed.

The new funding round was led by Menlo Ventures with the participation of existing investors New Enterprise Associates, Mohr Davidow Ventures, and others.

http://www.pluribusnetworks.com


  • Pluribus Networks was founded in 2010.
  • The founders of Pluribus include Sunay Tripathi, previously a Senior Distinguished Engineer at Sun Microsystems, where he was Chief Architect for Kernel/Network Virtualization in the Core Solaris OS; Robert Drost, previously a Sr. Distinguished Engineer and Director of Advanced Hardware at Sun Microsystems; and C.K. Ken Yang, a tenured professor of EE at UCLA focused on high-performance communication.
  • In September, Pluribus announced a partnership with TIBCO Software Inc. to deliver TIBCO Enterprise Message Service Appliance and TIBCO FTL Message Switch.

Monday, July 2, 2012

Ixia Acquires BreakingPoint Systems for Security Stress Testing

Ixia agreed to acquire BreakingPoint Systems, which specializes in security testing, for $160 million in cash.

BreakingPoint's Actionable Security Intelligence (ASI) provides global visibility into emerging threats, and actionable insight to harden and maintain resilient defenses. BreakingPoint's FireStorm appliances simulate hundreds of stateful applications and provide the application control necessary to stress deep packet inspection (DPI) devices. The 3-slot BreakingPoint FireStorm can create 90 million simultaneous wired and wireless users at live network speeds of up to 120 Gbps.

The company's platforms are kept current via an intelligence subscription service that regularly pushes newly discovered attacks, malware, and other intelligence aggregated from proprietary research, strategic customer relationships, and carrier feeds.

BreakingPoint is based in Austin, Texas.

Ixia said the acquisition enables it to provide an end-to-end solution that monitors, tests, and optimizes converged networks.

The company noted that BreakingPoint grew revenue over 40 percent in calendar 2011 to $33.5 million while generating gross margin of 87 percent for the year. For calendar 2012, Ixia expects BreakingPoint’s revenue to again grow by more than 40 percent, and anticipates that the BreakingPoint transaction will be accretive to non-GAAP earnings in the first full quarter of operations after the acquisition closes. Non-GAAP earnings exclude stock-based compensation, amortization of acquired intangible assets, and other non-recurring charges, net of the applicable tax effects.

"As a leader in cyber security research, BreakingPoint has built a library of more than 34,000 attacks, exploits, malware, and more,” said Dennis Cox, co-founder and Chief Technology Officer of BreakingPoint. “Joining forces with Ixia creates a powerful platform in security and application testing – one with an extensive global sales reach into enterprises, service providers, and government agencies."

Ixia also updated its revenue guidance for the second quarter of 2012 to a range of $87 million to $89 million for its core business. This compares to the previously stated guidance of $86 million to $89 million. In addition to this amount, Ixia expects its recent Anue Systems, Inc. acquisition to add approximately $3 million to $4 million in additional revenue in the second quarter for the period from the June 1, 2012 acquisition closing date to June 30, 2012.
Some other recent acquisitions of testing firms:

  • In May 2012, Ixia agreed to acquire Anue Systems, which offers network monitoring solutions, for $145 million in cash. Anue Systems offers a Net Tool Optimizer that provides traffic visibility for service providers and enterprises by enabling test tools to access taps across the network. Anue aggregates and filters network traffic to optimize network monitoring tool usage. Anue also can replicate network traffic from a single tap or SPAN port and send it to multiple monitoring tools simultaneously. The company was founded in 2002 and is based in Austin, Texas.
  • In April 2012, Danaher Corporation agreed to acquire VSS Monitoring, a privately-held company based in San Mateo, California, for its distributed traffic capture system for network and security monitoring. The company's Distributed Traffic Capture Systems provide an intelligent and robust platform for centralized monitoring, tool optimization, and scalability for the network monitoring and security infrastructure. VSS Monitoring optimizes the way data is extracted from high speed networks and then intelligently groomed and distributed to the eco-system of tools that require this data. Financial terms were not disclosed.
  • In April 2012, Spirent Communications agreed to acquire Mu Dynamics, which offers network security and application performance testing tools, for $40.0 million in cash. Mu Studio, the company's flagship product line, enables performance and security testing of cloud infrastructure, including network security systems, deep packet inspection (DPI) solutions, and LTE networks. Its Blitz is a self-service load and performance testing solution for cloud applications. Mu TestCloud compiles thousands of ready-to-run tests, covering hundreds of applications.
  • In 2011, Ixia acquired VeriWave, a performance testing company for wireless LAN (WLAN) and Wi-Fi enabled smart devices, for an undisclosed sum. VeriWave wireless test solutions validate Wi-Fi networks, smart devices, and applications by benchmarking and measuring speed, quality, interoperability, compliance, and other pivotal aspects of wireless performance.
