Showing posts with label Start-ups.

Wednesday, June 13, 2018

Aviatrix offers cloud networking as a service for AWS, Azure and Google Cloud

Aviatrix, a start-up based in Palo Alto, California, announced a hosted service to build and manage virtual private cloud (VPC) networks in Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) public cloud environments.

The Aviatrix Hosted Service provides a centralized console for building and managing all secure connectivity. The company said its software-defined router goes well beyond what existing instance-based virtual routers offer. The solution consists of the Aviatrix Controller, now available via the Hosted Service, and Aviatrix Gateways, which are deployed in VPCs to support cloud networking use cases that include AWS global transit networks, remote user VPN and VPC egress security.
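For a sense of what such a controller abstracts away, here is a minimal sketch of stitching just two VPCs together with the raw AWS APIs via boto3; the VPC IDs, route-table IDs and CIDRs are placeholders and this is not Aviatrix code. Multiply these steps by hundreds of VPCs, plus reverse routes, security rules and encryption, and the case for a central console becomes clear.

# Illustrative only: manual VPC peering with the raw AWS API (boto3).
# All IDs and CIDRs are placeholders; an Aviatrix-style controller automates
# these steps (plus reverse routes, security policies, etc.) at scale.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Request a peering connection between two VPCs.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",      # requester VPC (placeholder)
    PeerVpcId="vpc-22222222",  # accepter VPC (placeholder)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# 2. Accept the request (same account and region in this simplified case).
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 3. Point each VPC's route table at the peering connection.
for rtb_id, dest_cidr in [("rtb-aaaaaaaa", "10.2.0.0/16"),
                          ("rtb-bbbbbbbb", "10.1.0.0/16")]:
    ec2.create_route(RouteTableId=rtb_id,
                     DestinationCidrBlock=dest_cidr,
                     VpcPeeringConnectionId=pcx_id)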

“Even using a public cloud vendor’s console—which makes it straightforward to build compute and storage in the public cloud—VPC networking has remained complex, especially as the number of VPCs grow from single digits to many hundreds across the globe,” said Steven Mih, CEO of Aviatrix. “The Aviatrix Hosted Service—the first cloud networking-as-a-service option—provides the easiest way to build out VPC networks in the public cloud. Using our hosted service, it takes less than 10 minutes, and requires no serious networking expertise, to deploy and securely connect a large number of VPCs. It’s your central console for all things networking.”

Key use cases include:

  • Next-gen global transit network. Create VPCs in seconds, scale and migrate workloads from on-premises sites, and manage growing numbers of VPCs with ease from a software-defined, centrally managed controller.
  • VPC egress security. Control VPC traffic outbound to the internet with powerful Layer 7 filtering that enables organizations to allow or deny access based on policies using high-availability, in-line gateways.
  • Remote user VPN. Provide secure remote access to VPCs and cloud services for developers, employees and partners—using the cloud-native Aviatrix solution, based on OpenVPN® technologies.
  • Multicloud peering. Simplify networking among AWS, Azure and GCP public cloud infrastructures. Use Aviatrix’s native, API-based approach to centrally manage connectivity and eliminate complexity for implementations spanning multiple cloud services.
  • Encrypted peering. Meet corporate and regulatory compliance requirements by encrypting data in motion. Use IPsec between any two VPCs to centrally manage secure peering across accounts and clouds.
  • Site-to-cloud VPN. Quickly create secure connections from on-premises data centers, sites or branch locations to cloud resources. Use existing on-prem hardware and internet infrastructure to minimize costs.


Monday, June 11, 2018

Cohesity pulls in $250 million for its hyperconverged secondary storage

Cohesity, a start-up based in San Jose, California, raised $250 million in an oversubscribed Series D funding round led by the SoftBank Vision Fund with strong participation from strategic investors Cisco Investments, Hewlett Packard Enterprise (HPE), and Morgan Stanley Expansion Capital, along with early investor Sequoia Capital and others.

Cohesity specializes in hyperconverged secondary storage. Its hyperconverged appliance consolidates all secondary data and associated management functions on one unified solution, including backups, files, objects, test/dev copies, and analytics.

“My vision has always been to provide enterprises with cloud-like simplicity for their many fragmented applications and data – backup, test and development, analytics, and more,” said Cohesity CEO and Founder Mohit Aron. “Cohesity has built significant momentum and market share during the last 12 months and we are just getting started. We succeed because our customers are some of the world’s brightest and most fanatical IT organizations and are an extension of our development efforts.”

Cohesity said its annual revenues surged 600% from 2016 to 2017. In the last two quarters, over 200 new enterprise customers selected Cohesity, including Air Bud Entertainment, AutoNation, BC Oil and Gas Commission, Bungie, Harris Teeter, Hyatt, Kelly Services, LendingClub, Piedmont Healthcare, Schneider Electric, the San Francisco Giants, TCF Bank, the U.S. Department of Energy, the U.S. Air Force, and WestLotto.

The latest $250 million round brings total funding in Cohesity to $410 million.

“We backed Mohit at his previous firm, Nutanix, and are proud to support him again at Cohesity. The company has a smart and timely vision for radically simplifying secondary data for large customers, which is part of a broader move by enterprises toward hybrid-cloud infrastructure. We believe Cohesity is armed with the right team and product to attack this large and growing market,” said Neeraj Agrawal, general partner, Battery Ventures.

Foundry.ai raises $67 million for enterprise AI

Foundry.ai, a start-up based in Washington, D.C., announced $67 million in funding for its artificial intelligence (AI) software solutions for large enterprises.
Foundry has launched four businesses to date:

  • Vizual.ai, which provides image optimization to web publishers and e-commerce businesses;
  • Supplier.ai, which allows enterprises to improve procurement economics through improved vendor selection, pricing and risk management;
  • HUD.ai, which empowers go-to-market professionals selling large enterprise solutions to improve the quality and quantity of personalized, high-impact outreach; and
  • Curia.ai, which provides advanced decision optimization tools to healthcare provider networks.

"In every global 2000 C-suite and boardroom, someone is asking the question, 'How will AI impact our business?'," said Ned Brody, co-founder of Foundry.ai. "Foundry's new funding will allow us to build a significantly greater number of Practical AI businesses, creating AI solutions that focus on replicable, every-day decision improvements that drive immediate profitability increases."

https://www.foundry.ai/

  • Foundry.ai was founded by Jim Manzi, who previously was founder and CEO of Applied Predictive Technologies.

Wednesday, June 6, 2018

Paul Jacobs sets up a new company to pursue next-gen mobile

Paul Jacobs, the former CEO and executive chairman of Qualcomm, has founded a new company called XCOM to focus on next-generation mobile technologies.

Jacobs is joined in the effort by Derek Aberle, previously president of Qualcomm from March 2014 to January 2018, and Matt Grob, previously CTO of Qualcomm from 2011 to 2017.

https://xcom-tech.com

Avi pulls in $60 million including an investment from Cisco

Avi Networks, a start-up based in Santa Clara, California, announced $60 million in new funding including investments from Cisco Investments along with DAG Ventures, Greylock Partners, Lightspeed Venture Partners, and Menlo Ventures.

Cisco resells the Avi Vantage Platform in markets around the world, and Avi closely integrates with Cisco ACI, Cisco’s intent-based networking and automation solution for the data center.

Avi Networks offers an application delivery controller (ADC) with a Software Load Balancer, an Intelligent Web Application Firewall, and an Elastic Service Mesh for container-based applications. The company says that as businesses shift their operations to clouds such as Azure and AWS, its intent-based software offers easier management, faster performance, greater elasticity, deeper analytics, and more powerful automation than legacy ADC vendors.

Avi also reports that it has tripled its bookings over the past year, with significant adoption by the Global 2000 and 20% of the Fortune 50.

This latest round brings Avi’s total funding to $115 million.

“Modern applications are driving a new urgency with which enterprises are automating their networks and application delivery systems,” said Amit Pandey, CEO of Avi Networks. “Cisco software and infrastructure are a cornerstone in this transformation. I am thrilled about this strategic investment from Cisco and our continued joint efforts to deliver the elasticity, intelligence, and multi-cloud capabilities that enterprises need.”


  • Avi Networks is headed by Amit Pandey, who joined the company as CEO in 2015. Previously, Pandey spent nearly a decade at NetApp in a wide range of executive positions, followed by two successful stints at startups - first as CEO of Terracotta, which was acquired by the European software giant Software AG, and then as CEO of Zenprise, which was acquired by Citrix.
  • Avi Networks was co-founded in November 2012 by Umesh Mahajan, who previously was VP/GM of Data Center Switching at Cisco; Murali Basavaiah, who previously was VP of Engineering at Cisco for NX-OS software and the Nexus 7000/MDS products; and Ranga Rajagopalan, who previously was Sr. Director of Engineering at Cisco, responsible for NX-OS systems/platform software for the Cisco Nexus 7000.

Monday, May 28, 2018

Start-up profile: Rancher Labs, building container orchestration on Kubernetes

Rancher Labs, a start-up based in Cupertino, California, offers a container management platform that has racked up over four million downloads. The company recently released a major update to its container management system. I sat down with company co-founders Sheng Liang (CEO) and Shannon Williams (VP of Sales) to talk about Kubernetes, the open source container orchestration system that was originally developed by Google. Kubernetes was initially released in 2014, about the time that Rancher Labs was getting underway.

Jim Carroll, OND: So where does Kubernetes stand today?

Sheng Liang, Rancher Labs: Kubernetes has come a long way. When we started three years ago, Kubernetes was also just getting started. It had a lot of promise, but people were talking about orchestration wars and stuff. Kubernetes had not yet won but, more importantly, it wasn't really useful. In the early days, we couldn't even bring ourselves to say that we were going to focus exclusively on Kubernetes. It was not that we did not believe in Kubernetes, but it just didn't work for a lot of users. Kubernetes was almost seen as an end unto itself. Even standing up Kubernetes was such a challenge back then that just getting it to run became an end goal. A lot of people in those days were experimenting with it, and the goal was simply to prove - hey - you've got a Kubernetes cluster. Success was getting a few simple apps running. And it's come a long way in three years.


A lot of things have changed. First, Kubernetes is now really established as the de facto container orchestration platform. We used to support Mesosphere, we used to support Swarm, and we used to build our own container orchestration platform, which we called Cattle. We stopped doing all of that to focus entirely on Kubernetes. Luckily, the way we developed Cattle was closely modeled on Kubernetes, sort of an easy-to-use version of Kubernetes. So we were able to bring a lot of our experience to run on top of Kubernetes. And now it turns out that we don't have to support all of those other frameworks. Kubernetes has settled that. It is now a common tool that everyone can use.

JC: The Big Three cloud companies are now fully behind Kubernetes, right?

Sheng Liang: Right. I think that for the longest time a lot of vendors were looking for opportunities to install and run Kubernetes. That kept us alive for a while. Some of the early Kubernetes deals that we closed were about installing Kubernetes. These projects then turned into operations contracts, because people thought they were going to need help with upgrading or just maintaining the health of the cluster. This got blown out of the water last year when all of the big cloud providers started to offer Kubernetes as a service.

If you are on the cloud already, there is really no reason to stand up your own Kubernetes cluster.

Well, we're really not quite there yet. Even though Amazon announced EKS in November, it is not even GA yet; it is still in closed beta. But later this year Kubernetes as a service should become a commercial reality. And there are other benefits too.

I'm not sure about Amazon, but both Google and Microsoft have decided not to charge for the management plane, so you don't really pay for the resources used to run the database and the control plane nodes. I suspect they must have a very efficient way of running these on some shared infrastructure, which allows them to amortize that cost across what they charge for the worker nodes.

The way people set up Kubernetes clusters in the early days was actually very wasteful. You would use three nodes for etcd and two nodes for the control plane, and then when setting it up people would throw in two more nodes for workers. So they were using five nodes to manage two nodes, while paying for seven.

With cloud services, you don't have to do that. I think this makes Kubernetes table stakes. It is not just limited to the cloud.  I think it's really wherever you can get infrastructure. Enterprise customers, for instance, are still getting infrastructure from VMware. Or they get it from Nutanix.

All of the cloud companies have announced, or will shortly announce, support for Kubernetes out of the box. Kubernetes then will equate to infrastructure, just like virtual machines or virtual SANs.
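As a back-of-envelope illustration of the control-plane overhead Sheng describes above (the per-node price below is a placeholder assumption, not a quoted rate):

# Hypothetical numbers, only to illustrate the overhead described above.
node_hourly_cost = 0.10          # assumed price per node-hour (placeholder)
self_managed_nodes = 3 + 2 + 2   # etcd + control-plane + worker nodes
worker_nodes = 2                 # nodes actually running workloads

self_managed_cost = self_managed_nodes * node_hourly_cost   # pay for all seven
managed_cost = worker_nodes * node_hourly_cost              # managed service: workers only
print(f"self-managed: ${self_managed_cost:.2f}/hr, managed: ${managed_cost:.2f}/hr "
      f"({self_managed_cost / managed_cost:.1f}x difference)")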

JC: So, how is Kubernetes actually being used now? Is it a one-way bridge or a two-way bridge for moving workloads? Are people actually moving workloads on a consistent basis, or is it basically a one-time move to a new server or cloud?

Shannon Williams, Rancher Labs: Portability is actually less important than other features. It may be the sexy part of Kubernetes to say that you can move clusters of containers. The reality is that Kubernetes is just a really good way to run containers reliably.

The vast majority of people who are running containers are not using Kubernetes for the purpose of moving containers between clouds.  The vast majority of people running Kubernetes are doing so because it is more reliable than running containers directly on VMs. It is easier to use Kubernetes from an operational perspective. It is easier from a development perspective. It is easier from a testing perspective. So if you think of the value prop that Kubernetes represents, it comes down to faster development cycles, better operations. The portability is kind of the cherry on top of the Sundae.

It is interesting that people are excited about the portability enabled by Kubernetes, and I think it will become really important over the long term, but it is just as important that I can run it on my laptop as that I can run it on one Kubernetes cluster versus another.

Sheng Liang: I think that is a very important point. The vast majority of the accounts we are familiar with run Kubernetes in just one place. That really tells you something about the power of Kubernetes. The fact that people are using it in just one place really tells you that portability is not the primary motivator. The primary benefit is that Kubernetes is really a rock-solid way to run containers.

JC: What is the reason that Kubernetes is not being used so much for portability today? Is the use case weak for container transport? I would guess that a lot of companies would want to move jobs up to the cloud and back again.

Sheng Liang:  I just don't think that portability is the No.1 requirement for companies using containers today. Procurement teams are excited about this capability but operations people just don't need it right now.

Shannon Williams: From the procurement side, knowing that your containers could be moved to another cloud gives you the assurance that you won't be locked in.

But portability in itself is a complex problem. Even Kubernetes does not solve all the issues of porting an application from one system to another. For instance, I may be running Kubernetes on AWS but I may also be running an Amazon Relational Database (RDS) service as well.  Kubernetes is not going to magically support both of these in migrating to another cloud. There is going to be work required. I think we are still a ways away from ubiquitous computing but we are heading into a world where Kubernetes is how you run containers and containers are going to be the way that all microservices and next-gen applications are built. It may even be how I run my legacy applications. So, having Kubernetes everywhere means that the engineers can quickly understand all of these different infrastructure platforms without having to go through a heavy learning curve. With Kubernetes they will have already learned how to run containers reliably wherever it happens to be running.

JC: So how are people using Kubernetes? Where are the big use cases?

Shannon Williams: I think with Kubernetes we are seeing the same adoption pattern as with Amazon. The initial consumers of Kubernetes were people who were building early containerized applications, predominantly microservices, cloud-native Web apps, mobile apps, gaming, etc. One of the first good use cases was Pokemon Go. It needed massively-scalable systems and ran on Google Cloud. It needed to have systems that could handle rapid upgrades and changes. The adoption of Kubernetes moved from there to more traditional Web applications, to the more traditional applications.

Every business is trying to adopt an innovative stance with their IT department.  We have a bunch of insurance companies as customers. We have media companies as customers. We have many government agencies as customers, such as the USDA -- they run containers to be able to deliver websites. They have lots of constituencies that they need to build durable web services for.  These have to run consistently. Kubernetes and containers give them a lot of HA (high availability).

A year or so ago we were in Phase 0 with this movement. Now I would say we are entering Phase 1 with many new use cases. Any organization that is forward-looking in their IT strategy is probably adopting containers and Kubernetes. This is the best architecture for building applications.

JC: Is there a physical limit to how far you can scale with Kubernetes?

Shannon Williams:  It is pretty darn big. You're talking about spanning maybe 5,000 servers.

Sheng Liang: I don't think there is a theoretical limit to how big you can go, but in practice, there is a database that eventually will bottleneck. That might be the limiting factor.

 I think some deployments have hit 5,000 nodes and each node these days could actually be a one terabyte machine. So that is actually a lot of resources. I think it could be made bigger, but so far that seems to be enough.

Shannon Williams: The pressure to hit that maximum size of 5,000 nodes or more in a cluster really is not applicable to the vast majority of the market.

Sheng Liang: And you could always manage multiple clusters with load balancing. It is probably not a good practice anyway to put everything in one superbig cluster.

Generally, we are not seeing people create huge clusters across multiple data centers or multiple regions.

Shannon Williams: In fact, I would say that we are seeing the trend move in the opposite direction: the number of clusters in an organization is increasing faster than the size of any one cluster. What we see is that any application that is running probably has at least two clusters available -- one for testing and one for production. There are often many divisions inside a company that push this requirement forward. For instance, a large media company has more than 150 Kubernetes clusters -- all deployed by different employees in different regions and often running different versions of their software. They even have multiple cloud providers. I think we are heading in that direction, rather than one massive Kubernetes cluster to rule them all.

Sheng Liang: This is not what some of the web companies initially envisioned for Kubernetes. When Google originally developed Kubernetes, they were used to the model where you have a very big pool of resources with bare metal servers. Their challenge was how to schedule all the workloads inside of that pool. When enterprises started adopting Kubernetes, one thing that immediately changed was that they really don't have the operational maturity to put all their eggs in one basket and make that really resilient. Second, all of them were using some form of virtualization -- either VMware or a cloud -- so essentially the cost of making small clusters has come down. There is not a lot of overhead. You can have a lot of clusters without having to dedicate whole servers to these clusters.

JC: Is there an opportunity then for the infrastructure provider, or the cloud provider, to add their own special sauce on top of Kubernetes?

Sheng Liang: The cloud guys are all starting to do that. Over time, I think they will do more. It is still early today. Amazon, for instance, has not yet commercially launched the service to the public. And DigitalOcean just announced it. But Google has been offering Kubernetes as a service for three years, and Microsoft has been doing it for probably over a year. Google's Kubernetes service, which is probably the most advanced, now includes more management dashboards and UIs, but nothing really fancy yet.

What I would expect them to do -- and this would be really great from my perspective -- is to bring their entire service suite, including their databases, AI and ML capabilities, and make them available inside of Kubernetes.

Shannon Williams: Yeah, they will want to integrate their entire cloud ecosystems. That's one of the appealing things about cloud providers offering Kubernetes -- there will be some level of standardization but they will have the opportunity to differentiate for local requirements and flavors.

That kind of leads to the challenge we are addressing.

There are three big things that most organizations face. (1) You want to be able to run Kubernetes on-prem. Some teams may run it on VMware, some may wish to run it on bare metal. They would like to be able to run it on-prem in a way that is reliable, consistent and supported. For IT groups, there is a growing requirement to offer Kubernetes as a service in the same way they offer VMs, and to do so they must standardize Kubernetes. (2) There is another desire to manage all of these clusters in a way that complies with your organization's policies. There will be questions like "how do I manage multiple clusters in a centralized way even if some are on-prem and some are in the cloud?" This is a distro-level problem for Kubernetes. (3) Then there is a compliance and security concern with how to configure Kubernetes to enforce all of my access control policies, security policies, monitoring policies, etc. Those are the challenges that we are taking on with Rancher 2.0.
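To make the second challenge concrete, here is a minimal sketch of polling several clusters by kubeconfig context with the Kubernetes Python client; the context names are placeholders, and a multi-cluster manager of the kind described here would layer policy, access control and lifecycle management on top of this basic visibility.

# Minimal sketch: report node readiness across several clusters, one per
# kubeconfig context. Context names are placeholders; real tooling adds
# authentication, policy enforcement and cluster lifecycle management.
from kubernetes import client, config

for context in ["onprem-vmware", "aws-prod", "gke-dev"]:   # hypothetical contexts
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=context))
    nodes = api.list_node().items
    ready = sum(1 for n in nodes
                for c in (n.status.conditions or [])
                if c.type == "Ready" and c.status == "True")
    print(f"{context}: {ready}/{len(nodes)} nodes Ready")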

Jim Carroll, OND: Where does Rancher Labs fit in?

Shannon Williams, Rancher Labs: The challenge we are taking on is how to manage multiple Kubernetes clusters, including how to manage users and policies across multiple clusters in an organization.

Kubernetes is now available as a supported, enterprise-grade service for anybody in your company. At this scale, Kubernetes really becomes appealing to organizations as a standardization approach, not just so that workloads can easily move between places but so that workloads can be deployed to lots of places. For instance, I might want some workloads to run on Alibaba Cloud for a project we are doing in China, or I might want to run some workloads on T-Systems' cloud for a project in Germany, where I have to comply with the new data privacy laws. I can now do those things with Kubernetes without having to understand the parameters, benefits or limitations of any specific cloud. Kubernetes normalizes this experience. Rancher Labs makes it happen in a consistent way. That is a large part of what we are working on at Rancher Labs -- consistent distribution and consistent management of any cluster. We will manage the lifecycle of Amazon Kubernetes or Google Kubernetes, our Kubernetes, or new Kubernetes coming out of a dev lab.

JC: So the goal is to have the Rancher Labs experience running both on-prem and in the public cloud?

Shannon Williams, Rancher Labs: Exactly. So think about it like this. We have a distro of Kubernetes and we can use it to implement Kubernetes for you on bare metal, or on VMware, or in the cloud, if you prefer, so you can build exactly the version of Kubernetes that suits you. That is the first piece of value -- we'll give you Kubernetes wherever you need it. The second piece is that we will manage all of the Kubernetes clusters for you, including where you requested Kubernetes from Amazon or Google. You have the option of consuming from the cloud as you wish or staying on-prem. There is one other piece that we are working on. It is one thing to provide this normalized service. The additional layer is about engaging users.

What you are seeing with Kubernetes is similar to the cloud. Early adopters move in quickly and have no hesitancy in consuming it -- but they represent maybe 1% or 2% of the users. The challenge for the IT department is to make this the preferred way to deliver resources. At this point, you want to encourage adoption and that means developing a positive experience.

JC: Is your goal to have all app developers aware of the Kubernetes layer? Or is Kubernetes management really the responsibility of the IT managers who thus far are also responsible for running the network, running the storage, running the firewalls..?

Shannon Williams, Rancher Labs: Great question, because Kubernetes is actually part of the infrastructure, but it is also part of the application resiliency layer. It deals with how an application handles a physical infrastructure failure, for example. Do I spin up another container? Do I wait to let a user decide what to do? How do I connect these parts of an application and how do I manage the secrets that are deployed around it? How do I perform system monitoring and alerting of application status? Kubernetes is blurring the line.

Sheng Liang, Rancher Labs: It is not really something the coders will be interested in. The interest in Kubernetes starts with DevOps and stops just before you get to storage and networking infrastructure management.

Shannon Williams, Rancher Labs: Kubernetes is becoming of interest to system architects -- the people who are designing how an application is going to be delivered. They are very aware that the app is going to be containerized and running in the cloud. The cloud-native architecture is pulling in developers. So I think it is a little more blurred than whether or not coders get to this level.

Sheng Liang, Rancher Labs: For instance, the Netflix guys used to talk a lot about how they developed applications. Most developers don't spend a lot of time worrying about how their applications are running; they have to spend most of their time worrying about the outcome. But they are highly aware of the architecture. Kubernetes is well regarded as the best way to develop such applications. Scalable, resilient, secure -- that is what's driving the acceptance of Kubernetes.

Shannon Williams, Rancher Labs:  I would add one more to the list -- quick to improve. There is a continuous pace of improvement with Kubernetes. I saw a great quote about containerization from a CIO, who said "I don't care about Docker or any other containers or Kubernetes. All I care about is continuous delivery. I care that we can improve our application continuously and it so happens that containers give us the best way to do that." The point is -- get more applications to your users in a safe, secure, and scalable process.

The Cloud-Native Computing Foundation (CNCF) aims to build next-generation systems that are more reliable, more secure, more scalable and Kubernetes is a big part of this effort.  That's why I've said the value of workload portability is often exaggerated.

Jim Carroll, OND:  Tell me about the Rancher Labs value proposition.

Shannon Williams, Rancher Labs: Our value proposition is centered on the idea that Kubernetes will become the common platform for cloud-native architecture. It is going to be really important for organizations to deliver that as a service reliably. It is going to be really important for them to understand how to secure it and how to enforce company policies. Mostly, it will enable people to run their applications in a standardized way. That's our focus.

As an open source software company that means we build the tooling that thousands of companies are going to use to adopt Kubernetes. Rancher has 10,000 organizations using our platform today with our version 1.0 product. I expect our version 2.0 product to be even more popular because it is built around this exploding market for Kubernetes.

JC:  What is the customer profile? When does it make sense to go from Kubernetes to Kubernetes plus Rancher?

Shannon Williams, Rancher Labs: Anywhere that Kubernetes and containers are being adopted, really. Our customers talk about the D-K-R stack: Docker-Kubernetes-Rancher.

JC: Is there a particular threshold or requirement that drives the need for Rancher?

Shannon Williams, Rancher Labs: Rancher is often something that users discover early in their exploration of Docker or Kubernetes. Once they have a cluster deployed, they start to wonder how they are going to manage it on an ongoing basis. This often occurs right at the beginning of a container deployment program - day 1, day 2 or day 3.

Like any other open source software company, we let users download our software for free. The point when a Rancher user becomes a Rancher customer usually happens when the deployment has moved to a mission-critical level. When their business actually runs on the Kubernetes cluster, that's when we are asked to step in to provide support. We end up establishing a business relationship to support them with everything we build.

JC: And how does the business model work in a world of open source, container management? 

Shannon Williams, Rancher Labs: Customers purchase support subscriptions on an annual basis.

JC: Are you charging based on the number of clusters or nodes? 

Shannon Williams, Rancher Labs: Yes, based on the number of clusters and hosts. A team that is running their critical business systems on Kubernetes will get a lot of benefit from knowing that everything from the lowest level up -- including the container runtime, the Kubernetes engine, the management platform, logging and monitoring -- is covered by our unified support.

JC: Does support mean that you actually run the clusters on behalf of the clients? 

Shannon Williams, Rancher Labs: Well, no, they're running it on their systems or in the cloud. Like other open source software developers, we can provide incident response for issues like "why is this running differently in Amazon than on-prem?" We also provide training for their teams and collaboration on the technology evolution.

JC: What about the company itself? What are the big milestones for Rancher Labs?

Shannon Williams, Rancher Labs: We're growing really fast and now have about 85 employees around the world. We have offices around the world, including in Australia, Japan and the UK, and are expanding. We have about 170 customer accounts worldwide. We have over 10,000 organizations using the product and over 4 million downloads to date. The big goals are rolling out Version 2.0, which is now in commercial release, and driving adoption of Kubernetes across the board. We're hoping to get lots of feedback as version 2.0 gets rolled out. So much of the opportunity now concerns the workload management layer. How do we make it easier for customers to deploy containerized applications? How can we smooth the rollout of containerized databases in a Kubernetes world? How do we solve the storage portability challenge? There are enormous opportunities to innovate in these areas. It is really exciting.

JC: What is needed to scale your company to the next level?

Shannon Williams, Rancher Labs: Right now we are in a good spot. We benefit from the magic of open source. We were able to grow this fast just on our Series B funding round because thousands of people downloaded our software and loved it. This has given us inroads with companies that often are the biggest in their industries. Lots of the Fortune 500 are now using Rancher to run critical business functions for their teams. We get to work with the most innovative parts of most organizations.

Sheng Liang, Rancher Labs: There is a lot of excitement. We just have to make sure that we keep our quality high and that we make our customers successful. I feel the market is still in its early days. There is a lot more work to make Kubernetes really the next big thing.

Shannon Williams, Rancher Labs: We're still a tiny minority inside of IT. It will be a ten-year journey but the pieces are coming together.


Thursday, May 24, 2018

Lumina raises $10 million for its OpenDaylight-powered SDN controller

Lumina Networks, a start-up offering an SDN controller powered by OpenDaylight, announced $10 million in Series A financing, including $8 million in new funding led by Verizon Ventures. Other new investors included AT&T and Rahi Systems.

Lumina was formed in August 2017 as a spin-off from Brocade.

"This investment by both Verizon and AT&T demonstrates the strategic importance of open source networking to the automation and digitization of their networks,” said Andrew Coward, Founder and CEO of Lumina Networks. “We understand the value of our mission to take open source networking out of the labs of our customers and into production deployment. This funding will enable us to reach a wider customer base and realize the industry vision of easily deployable open source software-defined networking (SDN)."


“SDN has emerged as a key architectural model in delivering the promised goals of next generation wireless networks such as 5G by enabling high speeds and low latency at lower cost points,” said Alexander Khalin, Director at Verizon Ventures. “Open source is instrumental to Verizon’s digital transformation, and the team at Lumina Networks has built world-class, carrier grade products and solutions in this space and truly understands how to effectively work with network operators on their transformational journey.  We look forward to their continued success in this field."


“SDN is at the heart of our network transformation, and we’ve committed to virtualizing and software-controlling 75% of our core network functions by 2020,” said Chris Rice, Senior Vice President, AT&T Labs, Domain 2.0 Architecture and Design.  “Lumina’s leadership and work in OpenDaylight is important to creating a scalable software-defined network. Their open source business model is what our industry is moving to. Much of our future network will be powered by open source software, such as our white box initiative, and we’re excited to help drive innovation and collaboration in this space.”

Wednesday, May 23, 2018

Platform.sh raises $34M for enterprise cloud

Platform.sh, a start-up based in Paris with offices in San Francisco, raised $34 million in a Series C funding for its "idea-to-cloud" application platform.

Platform.sh simplifies deployments for enterprises by combining an automated cloud with its rapid cloning technology, which can spin up and deploy exact clones of entire live web applications in less than 60 seconds, allowing development teams to ensure that new features do not break in production. The product can be used to develop, test, deploy and run cloud-based web applications with speed and confidence. The company claims more than 650 enterprise customers across the globe are currently using its platform and says sales have grown 110 percent this year.

The funding round was led by U.S.-based Partech and included Idinvest Partners, Benhamou Global Ventures (BGV), SNCF Digital Ventures and returning investor Hi Inov.

“The customer traction and organic growth we’ve seen over the past 12 months – especially in North America – made it clear that we are ready to scale on a global level,” said Frederic Plais, CEO of Platform.sh. “The productivity gains that our platform delivers are beyond anything offered by managed hosting solutions, or DIY approaches with cloud infrastructures. The recent years have seen an explosion of incredibly strong tools that help implement novel cloud architectures, but the mainline approach is patchwork and piecemeal. Platform.sh proposes a unified model that transcends categories, not only solving difficult cluster orchestration and continuous delivery problems, but also improving testing and quality assurance of applications.”

https://platform.sh

Sunday, May 20, 2018

Oracle to acquire DataScience for centralized model development

Oracle agreed to acquire DataScience.com, whose platform centralizes data science tools, projects and infrastructure in a fully-governed workspace. Financial terms were not disclosed.

DataScience, which is based in Culver City, California, helps enterprises to organize work, easily access data and computing resources, and execute end-to-end model development workflows.

Oracle said it will use DataScience to provide customers with a single data science platform that leverages Oracle Cloud Infrastructure and the breadth of Oracle's integrated SaaS and PaaS offerings to help them realize the full potential of machine learning.

“Every organization is now exploring data science and machine learning as a key way to proactively develop competitive advantage, but the lack of comprehensive tooling and integrated machine learning capabilities can cause these projects to fall short,” said Amit Zavery, Executive Vice President of Oracle Cloud Platform, Oracle. “With the combination of Oracle and DataScience.com, customers will be able to harness a single data science platform to more effectively leverage machine learning and big data for predictive analysis and improved business results.”

“Data science requires a comprehensive platform to simplify operations and deliver value at scale,” said Ian Swanson, CEO of DataScience.com. “With DataScience.com, customers leverage a robust, easy-to-use platform that removes barriers to deploying valuable machine learning models in production. We are extremely enthusiastic about joining forces with Oracle’s leading cloud platform so customers can realize the benefits of their investments in data science.”

The founders of DataScience include Ian Swanson (previously founder and CEO of Sometrics, a virtual currency monetization platform acquired by American Express); Colin Schmidt (who previously served as vice president of engineering at online student loan management service Tuition.io, and as engineering lead at Sometrics); and Jonathan Beckhardt (who previously led product management and analytics at Tuition.io and developed big data strategy at American Express).

Investors in Datascience included TenOneTen, Greycroft, Crosscut Ventures, and Pelion Venture Partners.

Thursday, May 17, 2018

Google Cloud acquires Cask for big data ingestion on-ramp

Google Cloud will acquire Cask Data Inc., a start-up based in Palo Alto, California, that offers a big data platform for enterprises. Financial terms were not disclosed.

The open source Cask Data Application Platform (CDAP) provides a data ingestion service that simplifies and automates the task of building, running, and managing data pipelines. Cask says it cuts down the time to production for data applications and data lakes by 80%. The idea is to provide a standardization and simplification layer that allows data portability across diverse environments, usability across diverse groups of users, and the security and governance needed in the enterprise.

Google said it plans to continue to develop and release new versions of the open source Cask Data Application Platform (CDAP).
“We’re thrilled to welcome the talented Cask team to Google Cloud, and are excited to work together to help make developers more productive with our data processing services both in the cloud and on-premise. We are committed to open source, and look forward to driving the CDAP project’s growth within the broader developer community,” stated William Vambenepe, Group Product Manager, Google Cloud.

The Cask team added: “Over the past 6+ years, we have invested heavily in the open source CDAP available today and have deployed our technology with some of the largest enterprises in the world. We accomplished great things as a team, had tons of fun and learned so much over the years. We are extremely proud of what we’ve achieved with CDAP to date, and couldn’t be more excited about its future.”

Cask was founded by Jonathan Gray and Nitin Motgi.


Tachyum announces its Universal Processor Platform

Tachyum, a start-up based in San Jose, California with offices in Slovakia, unveiled its new processor family – codenamed “Prodigy” – which combines the advantages of CPUs, GP-GPUs and specialized AI chips in a single universal processor platform. The company says its processor architecture attains ten times the processing power per watt of conventional designs.

A key innovation of the design is the ability to connect very fast transistors with very slow wires, but technical details on the device physics have not yet been disclosed.

The universal processor promises programming ease comparable to a CPU, with performance and efficiency comparable to a GP-GPU. It is designed to handle hyperscale workloads, AI, HPC and other demanding applications.

Tachyum claims its Prodigy universal processor will enable a super-computational system for real-time full capacity human brain neural network simulation by 2020. One target application would be the real-time Human Brain Project, where there’s a need for more than 10^19 flops (10,000,000,000,000,000,000 floating-point operations per second, or 10 exaflops).

“Rather than build separate infrastructures for AI, HPC and conventional compute, the Prodigy chip will deliver all within one unified simplified environment, so for example AI or HPC algorithms can run while a machine is otherwise idle or underutilized,” said Tachyum CEO Dr. Radoslav ‘Rado’ Danilak. “Instead of supercomputers with a price tag in the hundreds of millions, Tachyum will make it possible to empower hyperscale datacenters to produce more work in a radically more efficient and powerful format, at a lower cost.”

“Despite efficiency gains from virtualization, cloud computing, and parallelism, there are still critical problems with datacenter resource utilization particularly at a size and scale of hundreds of thousands of servers,” said Christos Kozyrakis, professor of electrical engineering and computer science at Stanford, who leads the university’s Multiscale Architecture & Systems Team (MAST), a research group for cloud computing, energy-efficient hardware, and operating systems. “Tachyum’s breakthrough processor architecture will deliver unprecedented performance and productivity.” Kozyrakis is a corporate advisor to Tachyum.

http://tachyum.com


  • Tachyum is headed by Dr. Radoslav ‘Rado’ Danilak, who previously was founder and CEO of Skyera, a supplier of ultra-dense solid-state storage systems, acquired by WD in 2014. He also was cofounder and CTO of SandForce, which was acquired by LSI in 2011 for $377M. Its cofounders include Rodney Mullendore (previously Sandforce, Nishan Systems, Sandia National Labs); Igor Shevlyakov (previously Skyera); Ken Wagner (previously Wave Computing, Silicon Analystics and Theseus Logic).
  • Tachyum is funded by IPM Growth, the venture capital division of InfraPartners Management LLP.

Vesper raises $23M for its MEMS piezoelectric microphones

Vesper, a start-up based in Boston, announced $23 million in Series B funding for its MEMS-based piezoelectric sensors.

Vesper said its MEMS microphones represent a radical shift from capacitive MEMS microphones. Its piezoelectric design is suited for far-field applications such as microphone arrays used in voice-interface devices. The design is waterproof, dustproof, particle-resistant and shockproof.

"Our vision is for Alexa to be everywhere, and that means devices need to be built with durable, high-quality components that stand up to the demands of many different environments, especially on-the-go scenarios that require better power efficiency," said Paul Bernard, director of the Amazon Alexa Fund. "Vesper has become further embedded in the Alexa community through its integrations with various development kits and integrated solutions for Amazon AVS, and this follow-on investment is a testament to their continued momentum."

The funding round was led by American Family Ventures, a venture capital fund focused on seed to growth stage rounds. Vesper also received investments from Accomplice, Amazon Alexa Fund, Baidu, Bose Ventures, Hyperplane, Sands Capital, Shure, Synaptics, ZZ Capital and other undisclosed investors. The round brings Vesper's total funding to date to $40 million.

http://www.vespermems.com

Friday, May 11, 2018

Mesosphere raises $125M for its hybrid cloud

Mesosphere, a start-up based in San Francisco, announced $125 million in Series D funding for its hybrid cloud platform.

Mesosphere DC/OS automates operations: the idea is to automate workload-specific operating procedures so that anything from Kubernetes to data services can be offered “as-a-Service,” while optimizing workload density to achieve the highest utilization with resource guarantees.

Mesosphere said it has nearly tripled revenue year-over-year.

The series D funding was co-led by funds and accounts advised by T. Rowe Price Associates, Inc. and Koch Disruptive Technologies (KDT) with participation from ZWC Ventures, Qatar Investment Authority (QIA) and Disruptive Technology Advisers (DTA). The round also features participation from existing investors Andreessen Horowitz, Two Sigma Ventures, Khosla Ventures, Hewlett Packard Enterprise, SV Angel, Fuel Capital, and Triangle Peak Partners.

"We make world-changing technology, like Kubernetes, Tensorflow and more, available at the click of a button, enabling business impact faster because DC/OS automates operations of more than one hundred complex technologies," said Florian Leibert, CEO and co-founder at Mesosphere. "This investment will help us to arm the enterprise with leading edge technology, like containers, machine learning, and IoT applications, allowing them to reclaim their competitive edge and reinvent the customer experience."

Tuesday, May 8, 2018

Intel Capital announces 12 start-up bets in AI, Cloud, IoT and Silicon

Intel Capital announced investments totaling $72 million in a dozen start-up companies focused on AI, cloud, IoT and silicon technologies.

This week the company is hosting an Intel Capital Global Summit in Palm Desert, California to bring together startup entrepreneurs, venture capitalists and tech industry executives.

The 12 startups joining Intel Capital’s portfolio are:

Artificial Intelligence

Avaamo (Los Altos, California) is a deep learning software company that specializes in conversational interfaces to solve specific, high-impact problems in the enterprise. Avaamo is building fundamental AI technology across a broad area of neural networks, speech synthesis and deep learning to make conversational computing for the enterprise a reality. http://www.avaamo.com/

Fictiv (San Francisco, California) is democratizing access to manufacturing, transforming how hardware teams design, develop and deliver physical products. Its virtual manufacturing platform pairs intelligent workflow and collaboration software with Fictiv’s global network of highly vetted manufacturers. From prototype to production, Fictiv helps hardware teams work efficiently and bring products to market faster. https://www.fictiv.com/


Gamalon (Cambridge, Massachusetts) is leading the next wave in machine learning with an AI platform that teaches computers actual ideas. Gamalon’s Idea Learning technology provides accurate, editable and explainable processing of customer messages and other free-form data. Gamalon’s system learns faster, is easily extendable to specific domains, is completely auditable, and understands complexity and nuance. It can be used to structure free-form text such as surveys, chat transcripts, trouble tickets and more. https://gamalon.com/

Reconova (Xiamen, China) is a leading AI company providing cutting-edge visual perception solutions. Dedicated to the research of innovative computer vision and machine learning technologies, Reconova possesses a significant amount of core technologies in those fields. The company has achieved scale production and application across the smart retail, smart home and intelligent security segments. http://www.reconova.com/en/index.html

Syntiant (Irvine, California) is an AI semiconductor company that is accelerating the transition of machine learning from the cloud to edge devices. The company’s neural decision processors merge deep learning with semiconductor design to produce highly efficient ultralow-power analog neural computation for always-on applications in battery-powered devices, including mobile phones, wearable devices, smart sensors and drones. https://www.syntiant.com/

Cloud and IoT

Alauda (Beijing, China) is a container-based cloud services provider empowering enterprise IT with its enterprise platform-as-a-service offering and other strategic services. It delivers cloud-native capabilities and DevOps best practices to help enterprises modernize application architecture, maximize developer productivity and achieve operational excellence. Alauda serves organizations undergoing digital transformation across a number of industries, including financial services, manufacturing, aviation, energy and automotive. http://www.alauda.cn/?lang=EN

CloudGenix (San Jose, California) is a software-defined wide-area network (SD-WAN) leader, transforming legacy hardware WANs into a software-based, application-defined fabric. Using CloudGenix software, customers deploy cloud, unified communications and data center applications to remote offices over broadband networks with high performance and security. CloudGenix customers experience up to 70 percent WAN costs savings, an improved user experience for their applications, and more than 10x improvements in application and network uptime. https://www.cloudgenix.com/

Espressif Systems (Shanghai, China) is a multinational, fabless semiconductor company that leverages wireless computing to create high-performance IoT solutions that are more intelligent, versatile and cost-effective. The company’s all-in-one system-on-chips (SoCs) provide dual-mode connectivity (Wi-Fi+BT/BLE) to a wide range of IoT products – including tablets, cameras, wearables and smart home devices – at competitive prices. https://www.espressif.com/

VenueNext (Santa Clara, California) transforms the way guests experience every kind of venue, from arenas and concert halls to hotels and hospitals. Its smart-venue platform connects a facility’s siloed operational systems to give guests seamless access to services via their smartphones, and provides real-time analytics and insights that transform business outcomes. Customers include Levi’s Stadium, Yankee Stadium, U.S. Bank Stadium, Amway Center, Churchill Downs and St. Luke’s Health Systems. http://www.venuenext.com/#welcome

Silicon

Lyncean Technologies (Fremont, California) was founded in 2001 to develop the Compact Light Source (CLS), a miniature synchrotron X-ray source. Enabling a reduction in scale by a factor of 200, the CLS shrinks a machine capable of synchrotron quality experiments from stadium-sized to room-sized. Lyncean’s newest development is a novel EUV source based on coherent photon generation in a compact electron storage ring, specifically designed for high-volume manufacturing semiconductor lithography. http://lynceantech.com/

Movellus (San Jose, California) develops semiconductor technologies that enable digital tools to automatically create and implement functionality previously achievable only with custom analog design. Using digital design, Movellus improves the efficiency of creating and laying out analog circuits for SoCs – resulting in faster design time, faster time to yield, smaller die size and lower failure rates. Movellus’ customers include semiconductor and systems companies in the AI, networking and FPGA segments. https://www.movellus.com/

SiFive (San Mateo, California) is the leading provider of market-ready processor core IP based on the RISC-V instruction set architecture. Founded by the inventors of RISC-V and led by a team of industry veterans, SiFive helps system-on-chip designers reduce time to market and increase cost savings by enabling system designers to produce customized, open-architecture processor cores. https://www.sifive.com/

ThoughtSpot raises $145 million for its enterprise AI-driven analytics

ThoughtSpot, a start-up based in Palo Alto, California, announced $145 million in Series D funding for its work in search and AI-driven analytics for the enterprise.

The round included existing investors Lightspeed Ventures, Future Fund, Khosla Ventures and General Catalyst, alongside new participant Sapphire Ventures and other global investors. Since its founding in 2012, ThoughtSpot has raised $306 million in total funding.

“In the few short years since founding ThoughtSpot, we have disrupted the analytics market and seen global enterprises adopt our search and AI-driven analytics due to its simplicity for business people and enterprise-grade scale and governance for today’s CIOs and CDOs,” said Ajeet Singh, founder and CEO, ThoughtSpot. “We see a world where your analytics platform serves up insights to you before you can even articulate a question. With the new funding, we’ll continue to push the boundaries of what’s possible with self-service analytics for our customers, partners, and the industry at large.”

Monday, May 7, 2018

Nokia acquires SpaceTime for IoT software

Nokia has acquired SpaceTime Insight, a start-up based in San Mateo, California with offices in Canada, UK, India, and Japan, specializing in IoT analytics. Financial terms were not disclosed.

SpaceTime Insight provides machine learning-powered analytics and IoT applications. Its machine learning models and other advanced analytics, designed specifically for asset-intensive industries, predict asset health with a high degree of accuracy and optimize related operations. The company said customers include some of the world's largest transportation, energy and utilities organizations, including Entergy, FedEx, NextEra Energy, Singapore Power and Union Pacific Railroad.
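For illustration only (this is not SpaceTime Insight's model or data), a minimal sketch of the general approach -- predicting asset health from sensor features with a standard machine learning classifier -- might look like this in Python with scikit-learn, using synthetic placeholder data:

# Generic illustration with synthetic data; not SpaceTime Insight's models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic sensor features for 1,000 assets: vibration, temperature, age.
X = rng.normal(size=(1000, 3))
# Synthetic label: assets with high vibration and temperature tend to fail.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))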

Nokia said the acquisition expands its Internet of Things (IoT) portfolio and IoT analytics capabilities, and accelerates the development of new IoT applications for key vertical markets.

Bhaskar Gorti, president of Nokia Software, said: "Adding SpaceTime to Nokia Software is a strong step forward in our strategy, and will help us deliver a new class of intelligent solutions to meet the demands of an increasingly interconnected world. Together, we can empower customers to realize the full value of their people, processes and assets, and enable them to deliver rich, world-class digital experiences."


Wednesday, May 2, 2018

Start-up profile: TidalScale, building an inverse hypervisor for scale-up servers

TidalScale, a start-up based in Campbell, California, is on a mission to build the world's largest virtual servers based on Intel x86 commodity hardware.

The company's "inverse" hypervisor combines multiple physical servers (including their associated CPUs, memory, storage and network) into one or more large software-defined virtual servers. This is the inverse of VMware's model, because a rack of physical servers is virtualized as though it were a single machine. The concept is to scale up a virtual server instance to handle Big Data workloads without making changes to applications or operating systems.

Why use another hypervisor to create a bigger server? Doesn’t Moore’s Law already deliver more powerful processors over time? And why not just provision a large number of individual servers from a cloud IaaS vendor? The answers here would be: (1) very large in-memory datasets; (2) Moore’s Law is not keeping pace with rising workload demands; and (3) it is too costly and too limiting, especially since public cloud operators tend to limit the memory size of bare metal servers to 2TB, and because in load balancing a workload there is a tendency to provision more resources than necessary.

The TidalScale story

TidalScale was founded in 2012 by Dr. Ike Nassi, an Adjunct Professor of Computer Science at UC Santa Cruz, whose track record includes serving as Chief Scientist at SAP when the category of in-memory databases was established. He was also involved in three previous start-ups: Encore Computer, a pioneer in symmetric multiprocessors; InfoGear Technology, which developed Internet appliances and services; and Firetide, a wireless mesh networking company.

The technical team also includes Dr. David Reed as Chief Scientist, who holds many patents along with four degrees from MIT in EE and CS, including his PhD. Reed's contributions to networking include work on the original Internet protocol design team; his architectural contributions included the UDP protocol design, the "slash" in TCP/IP, and the formulation of the End-to-End Argument as its primary protocol design principle. He later went on to become Chief Scientist at Lotus Development Corporation, an HP Fellow, and an SVP at SAP Research.

On the management side, TidalScale is headed by Gary Smerdon, who previously was the EVP & Chief Strategy Officer of Fusion-io, the developer of flash-based PCIe hardware and software solutions that was ultimately acquired by SanDisk in 2014 for $1.3 billion. Before that, Smerdon was SVP and GM of the Accelerated Solutions Division at LSI, an internal startup that he founded. Smerdon also held executive positions at Greenfield Networks (acquired by Cisco), Tarari (acquired by LSI), Marvell, and AMD.

TidalScale, which first began shipping in 2016, aggregates all the resources (memory, cores, storage and bandwidth) of low-cost, high-performance, 2-socket Intel x86 servers into one or more Software-Defined Servers. This is accomplished by running a TidalScale HyperKernel on each physical server, with a "WaveRunner" control plane and management console orchestrating the spinning up and spinning down of virtualized servers. The HyperKernel instance on each physical server communicates with the other HyperKernels over the Ethernet network, which essentially functions as a combined memory and I/O bus. Memory performance is therefore determined by the latency and throughput of the Ethernet connection. Still, for applications such as very large in-memory databases, a TidalScale software-defined server consisting of five physical nodes, each with 128GB of DRAM, will outperform a single server with 128GB of DRAM whenever the working set exceeds 128GB and would otherwise spill over to a secondary SSD. This is because DRAM performance is roughly 1000x that of flash memory.
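The trade-off can be sketched with rough, assumed latency figures (generic order-of-magnitude numbers, not TidalScale measurements, and the model deliberately ignores locality optimizations such as page migration):

```python
# Rough latency comparison behind the 128GB example above (all figures assumed).
LOCAL_DRAM_NS  = 100        # local DRAM access
REMOTE_DRAM_NS = 3_000      # DRAM on another node over a low-latency Ethernet fabric
FLASH_SSD_NS   = 100_000    # NVMe flash read, roughly 1000x slower than DRAM

working_set_gb = 400        # in-memory database working set (assumed)
local_dram_gb  = 128        # DRAM in a single node

# Fraction of accesses that miss local DRAM once the working set exceeds it
miss_fraction = max(0.0, 1 - local_dram_gb / working_set_gb)

avg_single_node = (1 - miss_fraction) * LOCAL_DRAM_NS + miss_fraction * FLASH_SSD_NS
avg_aggregated  = (1 - miss_fraction) * LOCAL_DRAM_NS + miss_fraction * REMOTE_DRAM_NS

print(f"single node + SSD spillover : ~{avg_single_node:,.0f} ns per access")
print(f"aggregated DRAM over fabric : ~{avg_aggregated:,.0f} ns per access")
```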

Software-Defined Servers can be configured with dozens or even hundreds of processor cores, tens of terabytes of memory, and as much storage and networking I/O as needed. The configuration of servers can be automatically right-sized to the workload. TidalScale allows Docker containers and container management platforms (Kubernetes) to run on top. For instance, TidalScale could be used to deploy a single Linux instance with 15TB of DRAM and up to 400 cores by leveraging dozens of servers in a cloud data centre.
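The "no changes to applications" point can be illustrated with ordinary Linux introspection; the snippet below assumes nothing TidalScale-specific and would simply report whatever aggregated core and memory counts the guest OS exposes:

```python
# On a Linux guest, an unmodified application discovers its resources through
# standard OS interfaces; on a software-defined server these would reflect the
# aggregated hardware rather than any single physical node.
import os

def visible_resources():
    cores = os.cpu_count()
    with open("/proc/meminfo") as f:
        mem_kb = int(f.readline().split()[1])   # first line: "MemTotal: <kB> kB"
    return cores, mem_kb / (1024 ** 2)          # (cores, GiB)

cores, mem_gib = visible_resources()
print(f"This OS instance sees {cores} cores and ~{mem_gib:,.0f} GiB of memory")
```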

As mentioned above, TidalScale's scale-up paradigm on commodity servers should be especially relevant to in-memory databases, such as SAP HANA. The company says it can configure up to 64TB of in-memory capacity on 2-socket Intel x86 servers. Currently, cloud customers can run TidalScale on standard servers available on IBM BlueMix, OrionVM’s Wholesale Cloud Platform, and Oracle Cloud Infrastructure, with virtual systems ranging from dozens to hundreds of cores and featuring 30TB or more of memory. Natural allies would therefore include any company in that database ecosystem. Because TidalScale was exhibiting at the Open Compute Project Summit, it is reasonable to assume that it also sees the hyperscale cloud companies as potential customers.

TidalScale has received a number of awards, including being named a Gartner Cool Vendor, an IDC Innovator for 2017, and a Red Herring Top 100 North America recipient for 2017. Another milestone occurred in November 2016, when Infosys made an equity investment in TidalScale; financial terms were not disclosed. Crunchbase indicates TidalScale has gone through several rounds of venture funding, raising at least $11.8 million and possibly more.

In the broader context of software-defined data centres, the need for scale-up servers will be just as important as the need for scale-up storage. Many start-ups have pursued the JBOF (just a bunch of flash) storage array opportunity; some of those companies were acquired at nice premiums, and others completed IPOs. The software-defined server space likely won't attract as many start-up entrants, giving this company a better chance at driving its inverse hypervisor paradigm forward.

Tuesday, May 1, 2018

Cisco to acquire Accompany for $270 million

Cisco agreed to acquire Accompany, a start-up developing an AI-driven relationship intelligence platform, for $270 million in cash.

Accompany, which is based in Los Altos, California, offers business insights for finding new prospects, navigating the selling process, and strengthening relationships. Accompany founder and CEO Amy Chang will join Cisco as senior vice president in charge of the Collaboration Technology Group. Chang, who had served on Cisco's Board of Directors since October 2016, has resigned from the board in conjunction with the transaction.

Cisco said the acquisition will enable it to take collaboration to the next level with even more intelligence. Accompany's AI technology and talent will help Cisco accelerate priority areas across its collaboration portfolio, such as providing user and company profile data in Webex meetings. Together, Cisco and Accompany will continue to power the future of work in a smarter way to enhance customer experiences.

"Amy has proven to be an effective and innovative leader through her years as an entrepreneur, an engineer, and CEO, and I couldn't be more pleased to have her and the Accompany team join Cisco," said Chuck Robbins, Cisco chairman and CEO. "Together, we have a tremendous opportunity to further enhance AI and machine learning capabilities in our collaboration portfolio and continue to create amazing collaboration experiences for customers."

"I am thrilled with the opportunity to join Cisco and the industry's leading collaboration team," said Amy Chang, Accompany founder and CEO. "Enterprise applications are rapidly becoming more intelligent and augmented with data and pertinent information in real-time. By combining Accompany's relationship intelligence capability with Cisco's award-winning collaboration product portfolio, customers will be able to more intelligently collaborate with employees, customers and partners."

In addition, Cisco announced that Rowan Trollope, current senior vice president and general manager of the Collaboration Technology Group, is leaving Cisco to become CEO at another company effective May 3.

In December 2016, Accompany raised $20 million in a funding round led by Ignition Partners with participation from CRV, bringing its total funding to $40 million.

Monday, April 30, 2018

Dell Technologies Capital: One third of new bets focused on AI/ML

Since emerging from stealth a year ago, Dell Technologies Capital, the venture investment practice for Dell Technologies, has completed 24 new and follow-on investments as part of its $100 million average annual investment run rate.

The company reports that a third of its new investments are focused on artificial intelligence (AI) and machine learning (ML), with the remaining investments focused on security, next-gen infrastructure and other technology areas strategic to the Dell Technologies family of companies.

Some other notes:

  • Dell Technologies Capital had 11 exits in the past year, including three portfolio companies that went public in the past seven months.
  • Dell Technologies Capital was the first institutional investor in Zscaler (NASDAQ: ZS), a pioneer in transforming network security for the cloud era; the start-up went public in March 2018.
  • Dell Technologies Capital invested in MongoDB (NASDAQ: MDB), which went public in October 2017.
  • Dell Technologies Capital invested in DocuSign (NASDAQ: DOCU), which also went public recently.
  • Dell Technologies Capital's portfolio includes several startups currently growing at rates of more than 100%, and several exceeding $50 million in revenue.

"Since coming out of stealth at Dell EMC World last year, we've had a very busy, and very successful, year," said Scott Darling, president of Dell Technologies Capital. "We are delighted with our continued strong performance and the market reception to the DocuSign, MongoDB and Zscaler IPOs. The real value we bring to Dell Technologies and our startup portfolio companies is through our joint work, which allows us to deliver best-of-breed solutions for our customers faster, especially in emerging tech areas."


https://www.delltechnologies.com/en-us/capital/ventures/portfolio.htm

Wednesday, April 25, 2018

Innovium raises $77M in Series D for its Switching Silicon

Innovium, a start-up based in San Jose, California, announced $77 million in Series D funding for its high-performance switching silicon for data centers.

The new funding round included investment from Greylock Partners, Walden Everbright, Walden Riverwood Ventures, Paxion Capital, Capricorn Investment Group, Redline Capital, S-Cubed Capital and Qualcomm Ventures. This brings total funding in the company to over $160 million.

“Data center networks are experiencing dramatic traffic growth and face new requirements, driven by public and hybrid cloud, machine learning, analytics, storage and video. Innovium’s ground-up innovations have enabled a revolutionary platform for a family of products, delivering the industry’s next generation of performance, programmability, cost/bit and robust features. We are excited to significantly increase our investment in Innovium, to help the company accelerate its production, roadmap, and go-to-market efforts,” said Asheem Chandna, Partner at Greylock Partners.

Innovium Unveils 12.8Tbps Data Center Switching Silicon

Innovium, a start-up based in San Jose, California, introduced its TERALYNX scalable Ethernet silicon for data center switches.

Innovium said TERALYNX will be the first single switching chip to break the 10 Tbps performance barrier, while also offering telemetry, line-rate programmability, the largest on-chip buffers and best-in-class low latency. The chip is expected to sample in Q3 2017.

TERALYNX includes broad support for 10/25/40/50/100/200/400GbE Ethernet standards. It will deliver 128 ports of 100GbE, 64 ports of 200GbE or 32 ports of 400GbE in a single device. The TERALYNX switch family includes software compatible options at 12.8Tbps, 9.6Tbps, 6.4Tbps and 3.2Tbps performance points, each delivering compelling benefits for switch system vendors and data center operators.
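As a quick sanity check, each of the quoted port configurations works out to the same 12.8 Tbps aggregate for the top-end part (simple arithmetic, not vendor data):

```python
# Map the quoted port configurations to aggregate throughput.
configs = {
    "128 x 100GbE": 128 * 100,
    "64 x 200GbE":  64 * 200,
    "32 x 400GbE":  32 * 400,
}
for name, gbps in configs.items():
    print(f"{name}: {gbps} Gbps = {gbps / 1000} Tbps")
# Each option totals 12,800 Gbps, i.e. the 12.8 Tbps performance point.
```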

Some highlights:

  • 12.8Tbps, 9.6Tbps, 6.4Tbps and 3.2Tbps single chip performance options at packet sizes of 300B or smaller 
  • Single flow performance of 400Gbps at 64B minimum packet size, 4x vs alternatives
  • 70MB of on-chip buffer for superior network quality, fewer packet drops and substantially lower latency compared to off-chip buffering options
  • Up to 128 ports of 100GbE, 64 ports of 200GbE or 32 ports of 400GbE, which enable flatter networks for lower Capex and fewer hops
  • Support for cut-through with best-in-class low latency of less than 350ns
  • Programmable, feature-rich INNOFLEX forwarding pipeline
  • Comprehensive layer 2/3 forwarding and flexible tunneling including MPLS
  • Large table resources with flexible allocation across L2, IPv4 and IPv6
  • Line-rate, standards-based programmability to add new/custom features and protocols
  • FLASHLIGHT telemetry and analytics to enable autonomous data center networks
  • Extensive visibility and telemetry capabilities such as sFlow, FlexMirroring along with highly customizable extra-wide counters
  • P4-INT in-band telemetry and extensions to dramatically simplify end to end analysis
  • Advanced analytics enable optimal resource monitoring, utilization and congestion control allowing predictive capabilities and network automation
  • SERDES I/Os for existing and upcoming networks
  • Industry-leading, proven SerDes supports 10G and 25G NRZ, as well as 50G PAM4, to provide customers a variety of connectivity choices, ranging from widely deployed 10/25/40/50/100G Ethernet to upcoming 200/400GbE
  • Up to 258 lanes of long-reach SerDes, each of which can be configured dynamically
  • Integrated GHz ARM CPU core along with PCIe Gen 3 host connectivity
  • ARM core enables development of differentiated real-time automation features
  • High speed host connectivity and DMA enhancements enable high performance packet, table and telemetry data transfers while minimizing CPU overhead
  • Two high-speed Ethernet ports for management or telemetry data

See also