Showing posts with label Start-ups. Show all posts

Monday, September 10, 2018

Intel acquires NetSpeed Systems for interconnect fabric expertise

Intel has acquired NetSpeed Systems, a start-up based in San Jose, California, for its system-on-chip (SoC) design tools and interconnect fabric intellectual property (IP). Financial terms were not disclosed.

Intel said NetSpeed’s highly configurable and synthesizable offerings will help it more quickly and cost-effectively design, develop and test new SoCs with an ever-increasing set of IP.

NetSpeed provides scalable, coherent, network-on-chip (NoC) IP to SoC designers. NetSpeed’s NoC tool automates SoC front-end design and generates programmable, synthesizable high-performance and efficient interconnect fabrics. The company was founded in 2011.

The NetSpeed team is joining Intel’s Silicon Engineering Group (SEG) led by Jim Keller. NetSpeed co-founder and CEO, Sundari Mitra, will continue to lead her team as an Intel vice president reporting to Keller.

“Intel is designing more products with more specialized features than ever before, which is incredibly exciting for Intel architects and for our customers. The challenge is synthesizing a broader set of IP blocks for optimal performance while reining in design time and cost. NetSpeed’s proven network-on-chip technology addresses this challenge, and we’re excited to now have their IP and expertise in-house,” stated Jim Keller, senior vice president and general manager of the Silicon Engineering Group at Intel.

Monday, September 3, 2018

DOCOMO tests edge computing for video processing

NTT DOCOMO has commenced a proof-of-concept (PoC) video IoT solution that will enable the interpretation and analysis of video data sourced from surveillance cameras using edge computing. DOCOMO will test the effectiveness of using edge computing to interpret and analyze video data. The edge computing will supplement processing performed in the cloud. As a first step, the PoC will test and evaluate the sourcing of data from surveillance cameras, aiming to develop a solution that uses existing cameras, requires no wired connectivity and does not involve the transmission of large quantities of data.

DOCOMO also confirmed a strategic investment in Cloudian, a Silicon Valley-based leader in enterprise object storage systems and developer of the Cloudian AI Box, a compact, high-speed AI data processing device equipped with camera connectivity and LTE / Wi-Fi capabilities, facilitating edge AI computing with both indoor and outdoor communications.

DOCOMO said the transfer and processing of large volumes of video data to the cloud have been a lengthy process involving significant delays and placing a considerable burden on cloud infrastructure and communication networks. Edge computing could help deal with these shortcomings and herald a new era of high-speed image recognition.



Cloudian raises $94 million for hyperscale data fabric

Cloudian, a start-up offering a hyperscale data fabric for enterprises, raised $94 million in a Series E funding, bringing the company’s total funding to $173 million.

“Cloudian redefines enterprise storage with a global data fabric that integrates both private and public clouds — spanning across sites and around the globe — at an unprecedented scale that creates new opportunities for businesses to derive value from data,” said Cloudian CEO Michael Tso. “Cloudian’s unique architecture offers the limitless scalability, simplicity, and cloud integration needed to enable the next generation of computing driven by advances such as IoT and machine learning technologies.”

The funding round included participation from investors Digital Alpha, Eight Roads Ventures, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures, Inc. and WS (Wilson Sonsini) Investments.

“Computing now operates without physical boundaries, and customers need storage solutions that also span from the data center to the edge,” said Takayuki Inagawa, president & CEO of NTT DOCOMO Ventures. “Cloudian’s geo-distributed architecture creates a global fabric of storage assets that support the next generation of connected devices.”

Cloudian brings its S3 API interface to Azure Blob Storage

Cloudian, a start-up based in San Mateo, California, is extending its hybrid cloud object storage system into Microsoft Azure.

Cloudian HyperCloud for Microsoft Azure leverages the company's S3 API interface to Azure Blob Storage. Cloudian said the world's largest Industrial Internet enterprise is using Cloudian HyperCloud for Azure to connect its Industrial Internet of Things solution to Azure Blob Storage.

"Cloudian HyperCloud for Azure is a game-changer for public cloud storage, enabling true bi-modal data storage across multiple cloud environments," said Michael Tso, Cloudian CEO and co-founder. "For the first time, customers have a fully supported, enterprise-ready solution to access their choice of cloud platforms from their S3-compliant applications. Customers can be up and running in minutes by launching HyperCloud from the Microsoft Azure Marketplace."

Thursday, August 30, 2018

Database for the Instant Experience -- a profile of Redis Labs

The user experience is the ultimate test of network performance. For many applications, this often comes down to the lag after clicking and before the screen refreshes. We can trace the packets back from the user's handset, through the RAN, mobile core, metro transport, and perhaps long-haul optical backbone to a cloud data center. However, even if this path traverses the very latest generation of infrastructure, if it ends up triggering a search in an archaic database, the delayed response time will be more harmful to the user experience than the network latency. Some databases are optimized for performance. Redis, an open source, in-memory, high-performance database, claims to be the fastest -- a database for the Instant Experience. I recently sat down with Ofer Bengal to discuss Redis, Redis Labs and the implications for networking and hyperscale clouds.



Jim Carroll:  The database market has been dominated by a few large players for a very long time. When did this space start to open up, and what inspired Redis Labs to jump into this business?

Ofer Bengal: The database segment of the software market had been on a stable trajectory for decades. If you had asked me ten years ago if it made sense to create a new database company, I would have said that it would be insane to try. But cracks started to open when large Internet companies such as Amazon and Facebook, which generated huge amounts of data and had very stringent performance requirements, realized that the relational databases provided by market leaders like Oracle were not good enough for their modern use cases. With a relational database, when the amount of data grows beyond the size of a single server, it becomes very complex to cluster and performance drops dramatically.

About fifteen years ago, a number of Internet companies started to develop internal solutions to these problems. Later on, the open source community stepped in to address these challenges and a new breed of databases was born, which today is broadly categorized under “unstructured" or "NoSQL" databases.

Redis Labs was started in a bit of an unusual way, and not as a database company. The original idea was to improve application performance, because we, the founders, came from that space. We always knew that databases were the main bottleneck in app performance and looked for ways to improve that. So, we started with database caching. At that time, Memcached was a very popular open source caching system for accelerating database performance. We decided to improve it and make it more robust and enterprise-ready. And that's how we started the company.

In 2011, when we started to develop the product, we discovered a fairly new open source project by the name "Redis" (which stands for "Remote Dictionary Server"), started by Salvatore Sanfilippo, an Italian developer who lives in Sicily to this day. He essentially created his own in-memory database for a project he worked on and released it as open source. We decided to adopt it as the engine under the hood for what we were doing. However, shortly thereafter we started to see the amazing adoption of this open source database. After a while, it was clear we were in the wrong business, and so we decided to focus on Redis as our main product and became a Redis company. Salvatore Sanfilippo later joined the company and continues to lead the development of the open source project with a group of developers. A much larger R&D team develops Redis Enterprise, our commercial offering.

Jim Carroll: To be clear, there is an open source Redis community and there's a company called Redis Labs, right?

Ofer Bengal:  Yes. Both the open source Redis and Redis Enterprise are developed by Redis Labs, but by two separate development teams. This is because a different mindset is required for developing open source code and an end-to-end solution suitable for enterprise deployment.
 
Jim Carroll: Tell us more about Redis Labs, the company.

Ofer Bengal: We have a monumental number of open source Redis downloads. Its adoption has spread so widely that today you find it in most companies in the world. Our mission at Redis Labs is to help our customers unlock answers from their data. As a result, we invest equally in both open source Redis and our enterprise-grade offering, Redis Enterprise, and deliver disruptive capabilities that help our customers find answers to their challenges and deliver the best applications and services for their customers. We are passionate about our customers, community, people and our product. We're seeing a noticeable trend where enterprises that adopt OSS Redis mature their implementations with Redis Enterprise, to better handle scale, high availability, durability and data persistence. We have customers from all industry verticals, including six of the Fortune 10 and about 40% of the Fortune 100. To give you a few examples of our customers: AMEX, Walmart, DreamWorks, Intuit, Vodafone, Microsoft, TD Bank, C.H. Robinson, Home Depot, Kohl's, Atlassian, eHarmony – I could go on.

Redis Labs now has over 220 employees across our Mountain View, CA headquarters, R&D center in Israel, London sales office and other locations around the world. We’ve completed a few investment rounds, totaling $80 million, from Bain Capital Ventures, Goldman Sachs, Viola Ventures (Israel) and Dell Technologies Capital.

Jim Carroll: So, how can you grow and profit in an open source market as a software company?

Ofer Bengal:  The market for databases has changed a lot. Twenty years ago, if a company adopted Oracle, for example, any software development project carried out in that company had to be built with this database. This is not the case anymore. Digital transformation and cloud adoption are disrupting this very traditional model and driving the modernization of applications. New-age developers now have the flexibility to select their preferred solutions and tools for their specific problem at hand or use cases. They are looking for the best-of-breed database to meet each use case of their application. With the evolution of microservices, which is the modern way of building apps, this is even more evident. Each microservice may use a different database, so you end up with multiple databases for the same application. A simple smartphone application, for instance, may use four, five or even six different databases. These technological evolutions opened the market to database innovations.

In the past, most databases were relational, where the data is modeled in tables, and tables are associated with one another. This structure, while still relevant for some use cases, does not satisfy the needs of today’s modern applications.

Today, there are many flavors of unstructured NoSQL databases, starting with simple key-value databases like DynamoDB, document-based databases like MongoDB, column-based databases like Cassandra, graph databases like Neo4j, and others. Each one is good for certain use cases. There is also a new trend called multi-model databases, which means that a single database can support different data modeling techniques, such as relational, document, graph, etc. The current race in the database world is about becoming the optimal multi-model database.

Tying it all together, how do we expect to grow as an organization and profit in an open source market? We have never settled for the status quo. We looked at today’s environments and the challenges that come with them and figured out a way to deliver Redis as a multi-model database. We continually strive to lead and disrupt this market. With the introduction of modules, customers can now use Redis Enterprise as a key-value store, document store, graph database, for search and much more. As a result, Redis Enterprise is a best-of-breed database suited to the needs of modern-day applications. In addition, Redis Enterprise delivers the simplicity, ease of scale and high availability large enterprises desire. This has helped us become a well-loved database and a profitable business.

Jim Carroll: What makes Redis different from the others?

Ofer Bengal: Redis is by far the fastest and most powerful database. It was built from day one for optimal performance: besides processing entirely in RAM (or any of the new memory technologies), everything is written in C, a low-level programming language. All the commands, data types, etc., are optimized for performance. All this makes Redis super-fast. For example, from a single, average-size cloud instance on Amazon, you can easily generate 1.5 million transactions per second at sub-millisecond latency. Can you imagine that? This means that the average latency of those 1.5 million transactions will be less than one millisecond. No database comes even close to this performance. You may ask, what is the importance of this? Well, the speed of the database is by far the major factor influencing application performance, and Redis can guarantee instant application response.
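The sub-millisecond claim follows directly from keeping the working set in RAM. As a rough illustration (an ordinary in-process hash table standing in for Redis, not Redis itself, and without Redis's network hop), a minimal Python sketch shows that RAM-resident key lookups sit orders of magnitude below the one-millisecond mark:

```python
import time

# Toy stand-in for an in-memory key-value store: a plain hash table.
# Reads never touch disk, which is the point being illustrated.
store = {f"user:{i}": {"visits": i} for i in range(100_000)}

N = 100_000
start = time.perf_counter()
for i in range(N):
    _ = store[f"user:{i}"]          # pure in-memory lookup
elapsed = time.perf_counter() - start

avg_ms = elapsed / N * 1000         # average latency per read, in ms
print(f"avg in-memory read latency: {avg_ms:.6f} ms")
```

A real Redis call adds command parsing and a network round trip on top of this, but the storage access itself stays in this microsecond regime, which is what leaves room for sub-millisecond end-to-end responses.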

Jim Carroll: How are you tracking the popularity of Redis?

Ofer Bengal: If you look at DockerHub, which is the marketplace for Docker containers, you can see the stats on how many containers of each type were launched there. The last time I checked, over 882 million Redis containers had been launched on DockerHub, compared to about 418 million MySQL containers and 642 million MongoDB containers. So Redis is far more popular than both MongoDB and MySQL. And we have many other similar data points confirming the popularity of Redis.

Jim Carroll: If Redis puts everything in RAM, how do you scale? RAM is an expensive resource, and aren’t you limited by the amount that you can fit in one system?

Ofer Bengal: We developed very advanced clustering technology which enables Redis Enterprise to scale infinitely. We have customers with tens of terabytes of data in RAM. The notion that RAM is tiny and used only for very special purposes is no longer true, and as I said, we see many customers with extremely large datasets in RAM. Furthermore, we developed a technology for running Redis on Flash, with near-RAM performance at 20% of the server cost. The intelligent data tiering that Redis on Flash delivers allows our customers to keep their most-used data in RAM while moving less-utilized data onto cheaper flash storage. This has organizations such as Whitepages saving over 80% of their infrastructure costs, with little compromise to performance.
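The tiering idea can be sketched in a few lines. This is a toy model of the general hot/cold pattern, not Redis on Flash's actual design: a small, LRU-ordered "RAM" tier holds the hottest keys and demotes everything else to a dict standing in for SSD-backed storage:

```python
from collections import OrderedDict

class TieredStore:
    """Toy RAM/flash tiering sketch (illustrative only): hot keys stay
    in a capacity-limited LRU tier; cold keys are demoted to a slower
    tier, represented here by a plain dict."""

    def __init__(self, ram_capacity):
        self.ram_capacity = ram_capacity
        self.ram = OrderedDict()   # hot tier, ordered least- to most-recent
        self.flash = {}            # stand-in for SSD-backed storage

    def set(self, key, value):
        self.ram[key] = value
        self.ram.move_to_end(key)  # newest write is hottest
        self._evict()

    def get(self, key):
        if key in self.ram:
            self.ram.move_to_end(key)      # refresh recency
            return self.ram[key]
        value = self.flash.pop(key)        # cold hit: promote back to RAM
        self.set(key, value)
        return value

    def _evict(self):
        # Demote least-recently-used keys once the RAM tier overflows.
        while len(self.ram) > self.ram_capacity:
            cold_key, cold_val = self.ram.popitem(last=False)
            self.flash[cold_key] = cold_val

s = TieredStore(ram_capacity=2)
s.set("a", 1); s.set("b", 2); s.set("c", 3)   # "a" is demoted to flash
print(sorted(s.ram), sorted(s.flash))
```

The production system adds the hard parts this sketch omits, such as concurrent access, asynchronous I/O to real flash devices, and value-size-aware placement.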

In addition to that, we’re working very closely with Intel on their Optane™ DC persistent memory based on 3D XPoint™. As this technology becomes mainstream, the majority of the database market will have to move to being in-memory.


Jim Carroll: What about the resiliency challenge? How does Redis deal with outages?

Ofer Bengal: Normally with memory-based systems, if something goes wrong with a node or a cluster, there is a risk of losing data. This is not the case with Redis Enterprise, because it is redundant and persistent. You can write everything to disk without slowing down database operations. This is important to note because persisting to disk is a major technological challenge due to the bottleneck of writing to disk. We developed a persistence technology that preserves Redis' super-fast performance while still writing everything to disk. In case of memory failure, you can read everything back from disk. On top of that, the entire dataset is replicated in memory. Each database can have multiple such replicas, so if one node fails, we instantly fail over to a replica. With this and some other provisions, we provide several layers of resiliency.

We have been running our database-as-a-service for five years now, with thousands of customers, and never lost a customer's data, even when cloud nodes failed.

Jim Carroll: So how is the market for in-memory databases developing? Can you give some examples of applications that run best in memory?

Ofer Bengal: Any customer-facing application today needs to be fast. The new generation of end users expects an instant experience from all their apps and is not tolerant of slow responses, whether caused by the application or by the network.

You may ask, "How is 'instant experience' defined?" Let’s take an everyday example to illustrate what ‘instant’ really means. When browsing on your mobile device, how long are you willing to wait before your information is served to you? What we have found is that the expected time from tapping your smartphone or clicking on your laptop until you get the response should not be more than 100 milliseconds. As end consumers, we are all dissatisfied with waiting and expect information to be served instantly. What really happens behind the scenes, however, is that once you tap your phone, a query goes over the Internet to a remote application server, which processes the request and may generate several database queries. The response is then transmitted back over the Internet to your phone.

Now, the round trip over the Internet (on a "good" Internet day) is at least 50 milliseconds, and the app server needs at least 50 milliseconds to process your request. This means that at the database layer, the response time should be sub-millisecond, or you’re exceeding what is considered the acceptable standard wait time of 100 milliseconds. At a time of increasing digitization, consumers expect instant access to the service, and anything less will directly impact the bottom line. And, as I already mentioned, Redis is the only database that can respond in less than one millisecond under almost any load of transactions.
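The arithmetic behind that budget is worth making explicit. Using the figures from the discussion above (all values in milliseconds):

```python
# Back-of-the-envelope latency budget from the interview's figures.
instant_threshold = 100   # what users perceive as "instant"
network_round_trip = 50   # Internet RTT on a good day
app_processing = 50       # application-server work per request

remaining_for_db = instant_threshold - network_round_trip - app_processing
print(remaining_for_db)   # → 0
```

The network and the app server consume essentially the entire 100 ms allowance, leaving the database layer effectively nothing, which is why each query must complete in well under a millisecond, and why a request that fans out into several queries makes the constraint even tighter.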

Let me give you some use case examples. Companies in the finance industry (banks, financial institutions) have used relational databases for years. Any change, such as replacing an Oracle database, is analogous to open-heart surgery. But when it comes to new customer-facing banking applications, such as checking your account status or transferring funds, they would like to deliver an instant experience. Many banks are now moving these applications to other databases, and Redis is often chosen for its blazing-fast performance.

As I mentioned earlier, the world is moving to microservices. Redis Enterprise fits the needs of this architecture quite nicely as a multi-model database. In addition, Redis is very popular for messaging, queuing and time series capabilities. It is also strong when you need fast data ingest, for example, when massive amounts of data are coming in from IoT devices, or in other cases where huge amounts of data need to be ingested into your system. What started off as a solution for caching has, over the course of the last few years, evolved into an enterprise data platform.

Jim Carroll: You mentioned microservices, and that word is almost becoming synonymous with containers. And when you mention containers, everybody wants to talk about Kubernetes, and managing clusters of containers in the cloud. How does this align with Redis?

Ofer Bengal: Redis Enterprise maintains a unified deployment across all Kubernetes environments, such as Red Hat OpenShift, Pivotal Container Service (PKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Container Service for Kubernetes (EKS) and vanilla Kubernetes. It guarantees that each Redis Enterprise node (with one or more open source Redis servers) resides on a pod that is hosted on a different VM or physical server. And by using the latest Kubernetes primitives, Redis Enterprise can now be run as a stateful service across these environments.

We use a layered architecture that splits responsibilities among tasks that Kubernetes does efficiently, such as node auto-healing and node scaling; tasks that the Redis Enterprise cluster is good at, such as failover, shard-level scaling, configuration and Redis monitoring functions; and tasks that both can orchestrate together, such as service discovery and rolling upgrades with zero downtime.

Jim Carroll: How are the public cloud providers supporting Redis?

Ofer Bengal:  Most cloud providers, such as AWS, Azure and Google, have launched their own versions of Redis database-as-a-service, based on open source Redis, although they hardly contribute to it.

Redis Labs, the major contributor to open source Redis, has launched services on all those clouds, based on Redis Enterprise.  There is a very big difference between open source Redis and Redis Enterprise, especially if you need enterprise-level robustness.

Jim Carroll: So what is the secret sauce that you add on top of open source Redis?

Ofer Bengal: Redis Enterprise brings many additional capabilities to open source Redis. For example, as I mentioned earlier, sometimes an installation requires terabytes of RAM, which can get quite expensive. We have built-in capabilities in Redis Enterprise that allow our customers to run Redis on SSDs with almost the same performance as RAM. This is great for reducing the customer's total cost of ownership. By providing this capability, we can cut the underlying infrastructure costs by up to 80%. For the past few years, we’ve been working with most vendors of advanced memory technologies, such as NVMe and Intel’s 3D XPoint. We will be one of the first database vendors to take advantage of these new memory technologies as they become more and more popular. Databases like Oracle, which were designed to write to disk, will have to undergo a major facelift in order to take advantage of these new memory technologies.

Another big advantage Redis Enterprise delivers is high availability. With Redis Enterprise, you can create multiple replicas in the same data center, across data centers, across regions, and across clouds. You can also replicate between cloud and on-premises servers. Our single-digit-seconds failover mechanism guarantees service continuity.

Another differentiator is our active-active global distribution capability. If you would like to deploy an application in both the U.S. and Europe, for example, you will have application servers in a European data center and in a US data center. But what about the database? Would it be a single database for those two locations? While this helps avoid data inconsistency, it’s terrible when it comes to performance for at least one of those two data centers. If you have a separate database in each data center, performance may improve, but at the risk of consistency. Let’s assume that you and your wife share the same bank account, and that you are in the U.S. and she is traveling in Europe. What if both of you withdraw funds at an ATM at about the same time? If the app servers in the US and Europe are linked to the same database, there is no problem, but if the bank's app uses two databases (one in the US and one in Europe), how would it prevent an overdraft? Having a globally distributed database in full sync is a major challenge. If you try to do conflict resolution over the Internet between Europe and the U.S., database operation slows down dramatically, which is a no-go for the instant experience end users demand. So, we developed a unique technology for Redis Enterprise based on the mathematically proven CRDT concept developed in academia. Today, with Redis Enterprise, our customers can deploy a global database in multiple data centers around the world while assuring local latency and strong eventual consistency. Each database works as if it is fully independent, but behind the scenes we ensure they are all in sync.
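The CRDT idea can be illustrated with the simplest member of the family, a grow-only counter. This is a toy sketch of the general concept, not Redis Enterprise's actual conflict-resolution types (which also handle decrements, strings, sets and richer structures): each replica increments only its own slot, and merging takes the per-slot maximum, so replicas converge to the same value regardless of the order in which updates and syncs arrive:

```python
class GCounter:
    """Toy grow-only counter CRDT (illustrative only). Each replica
    owns one slot in a vector of counts; merge is an element-wise max,
    which is commutative, associative and idempotent, so all replicas
    converge without any cross-site locking."""

    def __init__(self, replica_id, n_replicas):
        self.replica_id = replica_id
        self.counts = [0] * n_replicas

    def increment(self, amount=1):
        self.counts[self.replica_id] += amount   # local, instant write

    def merge(self, other):
        # Conflict-free: max per slot, no coordination required.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self):
        return sum(self.counts)

# Two replicas (say, US and Europe) take writes independently...
us, eu = GCounter(0, 2), GCounter(1, 2)
us.increment(3)
eu.increment(2)
# ...then sync in the background; both converge to the same total.
us.merge(eu)
eu.merge(us)
print(us.value(), eu.value())  # → 5 5
```

Note that a shared bank balance needs more than this (a counter that can decrease plus an invariant against overdraft), which is exactly why production CRDT databases layer additional machinery on top of primitives like this one.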

Jim Carroll: What is the ultimate ambition of this company?

Ofer Bengal: We have the opportunity to build a very big software company. I’m not a kid anymore and I do not live on fantasies. Look at the database market – it’s huge! It is projected to grow to $50–$60 billion in sales in 2020 (depending on which analyst firm you ask). It is the largest segment in the software business, twice the size of the security/cyber market. The crack in the database market that opened up with NoSQL will represent 10% of this market in the near term. However, the border between SQL and NoSQL is blurring, as companies such as Oracle add NoSQL capabilities and NoSQL vendors add SQL capabilities. I think that over time, it will become a single large market. Redis Labs provides a true multi-model database. We support key-value with multiple data structures, graph, search, and JSON (document-based), all with built-in functionality, not just APIs. We constantly increase the use-case coverage of our database, and that is ultimately the name of the game in this business. Couple all that with Redis' blazing-fast performance, the massive adoption of open source Redis and the fact that it is the "most loved database" (according to Stack Overflow), and you would agree that we have a once-in-a-lifetime opportunity!





Tuesday, August 28, 2018

Gremlin offers Failure-as-a-Service for Docker

Gremlin, a start-up based in San Francisco, has developed a "failure injection platform" that allows developers to stress test Docker environments to better prepare for real-world disasters by simulating compounding issues.

The company said its Failure-as-a-Service platform aims to make containerized infrastructure more resilient.

In December 2017, Gremlin launched the first iteration of its platform alongside a $7.5 million Series A funding round, recreating common failure states within hybrid cloud infrastructure.

“The concept of purposefully injecting failure into systems is still new for many companies, but chaos engineering has been practiced at places like Netflix and Amazon for over a decade,” said Matthew Fornaciari, CTO and Co-Founder of Gremlin. “We like to use the vaccine analogy: injecting small amounts of harm can build immunity that proactively avoids disasters. With today’s updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production.”
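The "injecting small amounts of harm" idea generalizes beyond any one product. The following is a hypothetical, generic illustration of failure injection, not Gremlin's product or API: a wrapper makes a controlled fraction of calls fail, so a team can verify that the caller's fallback path actually works before a real outage does the testing for them:

```python
import random

def failure_injector(failure_rate, rng=None):
    """Toy chaos-engineering decorator (illustrative only): a controlled
    fraction of wrapped calls raise ConnectionError."""
    rng = rng or random.Random()

    def decorate(func):
        def wrapper(*args, **kwargs):
            if rng.random() < failure_rate:
                raise ConnectionError("injected failure")
            return func(*args, **kwargs)
        return wrapper
    return decorate

# Inject failures into ~30% of calls, seeded so the demo is repeatable:
@failure_injector(0.3, rng=random.Random(42))
def fetch_inventory():
    return {"sku-1": 12}

def fetch_with_fallback():
    try:
        return fetch_inventory()
    except ConnectionError:
        return {}   # degraded but alive: serve an empty result

results = [fetch_with_fallback() for _ in range(100)]
failures = sum(1 for r in results if r == {})
print(failures)  # number of calls where the fallback path was exercised
```

Real failure-injection platforms operate at the infrastructure layer (killing containers, adding network latency, exhausting CPU) rather than inside application code, but the verification goal is the same.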

http://www.gremlin.com


  • The Series A funding came from Index Ventures and Amplify Partners.


Monday, August 27, 2018

NEC invests in Tascent for its iris biometric system

NEC announced an equity investment in Tascent, a start-up based in Los Gatos, California, that offers biometric identification based on iris scanning.

Tascent's technologies include optical control technology to remotely capture an accurate, high-quality iris image at high speed, and a user interface (UI) technology that smoothly guides users in support of capturing accurate biometric information. The technology is embedded in security systems used at airports, government agencies and enterprises. The company was founded in 2015.

NEC said its investment and partnership will enable the two companies to jointly enhance the capacity of iris recognition, using Tascent’s optical control and UI technologies and NEC’s advanced biometric engines, and create a next generation iris authentication offering for the public safety market.

Tuesday, August 21, 2018

Slack rakes in $427 million in Series H funding

Slack, the San Francisco-based start-up offering collaboration apps and services, announced $427 million in a Series H funding round. The company previously raised $827 million across earlier rounds. Slack says the new investment reflects a post-money valuation of more than $7.1 billion.

Slack claims more than 8 million Daily Active Users (DAUs) and more than 70,000 paid teams.

The Series H equity round was led by Dragoneer Investment Group and General Atlantic, joined by funds and accounts advised by T. Rowe Price Associates, Inc. and funds advised by Wellington Management, and Baillie Gifford and Sands Capital, as well as existing investors.


  • Slack Technologies was founded in 2009 in Vancouver, British Columbia, Canada, by a team drawn from the founders of Ludicorp, the company that created Flickr.
  • Amazon Web Services (AWS) has previously published a case study about how Slack leverages its cloud infrastructure to enable its collaboration services.  https://aws.amazon.com/solutions/case-studies/slack/



Sunday, August 19, 2018

BT backs Telecom Infra Project's start-up accelerator programme

For the second year in a row, BT will host a start-up competition at the TIP Ecosystem Acceleration Centre (TEAC) at the BT Innovation Labs in Martlesham, Suffolk and in London’s Tech City.

The competition seeks start-ups in the Intent-Based Networking and Mobile fields. Entries will be judged by a panel of senior network and technology leaders from BT, Facebook, and TIP. Shortlisted companies will be invited to a final pitch event at BT Tower on Friday 12th October, where the winners will be chosen.

Last year's winners included Unmanned Life, Zeetta Networks and KETS Quantum Security.

Howard Watson, CTIO of BT, and a member of the TIP Board, said: “TIP was created to help tackle some of the big challenges in Telecoms, boosting global connectivity by supporting big ideas. We’re particularly interested in start-ups with innovative ideas on how to deploy mobile networks cost-effectively in rural areas, as we look ahead to the roll-out of 5G services.”

The Telecom Infra Project is a global community that includes more than 500 member companies, including operators, infrastructure providers, system integrators, and other technology companies working together to transform the traditional approach to building and deploying telecom network infrastructure.

Interested companies should apply by 24th September 2018 via the TEAC UK website: https://www.btplc.com/Innovation/TEAC/index.htm

Tuesday, July 31, 2018

Qadium secures $37M from U.S. Navy's Space and Warfare Command

Qadium, a start-up based in San Francisco, has been awarded a $37.6 million contract by the U.S. Department of Defense for its cybersecurity solution.

Qadium provides real-time monitoring of the entire global Internet for customers' assets.

The company said the contract was awarded by the U.S. Navy's Space and Warfare Command after the Department of Defense validated Qadium's commercial software.  Qadium has done prior work for Defense Department entities including U.S. Cyber Command, the Defense Information Systems Agency, Fleet Cyber Command, Army Cyber Command and the DoD CIO office.

"The Defense Department used to love to build its own IT, often poorly and at high cost to taxpayers," said Qadium CEO and CIA veteran Tim Junio.  "The times are finally changing.  In the face of the greatest cybersecurity challenges in our nation's history, we're seeing the government and private tech companies coming together, making both sides better off."

Investors in Qadium include New Enterprise Associates, TPG, Institutional Venture Partners and Founders Fund.

http://www.qadium.com

Monday, July 30, 2018

Serverless raises $10M

Serverless, a start-up based in San Francisco, announced $10 million in Series A funding for its open source Serverless Framework.

The company's mission is to provide a single toolkit offering everything teams and enterprises need to operationalize serverless deployments.

The company said it takes a vendor-agnostic approach across major platforms and cloud providers, including AWS, Azure, Google Cloud Functions and Kubernetes.
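As a rough illustration of the vendor-agnostic pattern such a toolkit builds on, business logic can live in a plain function while thin, provider-specific entry points adapt each platform's event shape. This is a hypothetical sketch; the names below are invented for the example, not part of the Serverless Framework.

```python
# Hypothetical sketch of provider-agnostic serverless code: keep the business
# logic in a plain function, and let thin adapters translate each cloud
# platform's event format into it.

def handler(event):
    """Business logic, independent of any cloud provider's event format."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

def aws_lambda_entry(event, context):
    """AWS Lambda-style entry point (context unused in this sketch)."""
    return handler(event)

print(aws_lambda_entry({"name": "serverless"}, None)["body"])
```

Frameworks in this space then handle packaging, deployment and wiring of such entry points per provider, which is the operational burden the toolkit aims to absorb.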

The funding was led by Lightspeed Venture Partners with additional participation by Trinity Ventures.

https://serverless.com/

Thursday, July 19, 2018

Benu Networks raises $17.5 million in funding

Benu Networks, which offers a carrier-class Virtual Service Edge software platform, has closed a total of $17.5 million in two funding rounds over the past 12 months. The most recent round of $10 million, which closed in July 2018, was led by new investor Spring Lake Equity Partners, a Boston-based private equity firm.

The earlier funding round of $7.5 million was led by long-time investors Sutter Hill Ventures and Liberty Global Ventures, a global investment fund owned by Liberty Global, the world’s largest international cable company.

Benu Networks’ Virtual Service Edge platform enables network operators to rapidly create new business opportunities for Mobile Wi-Fi, Managed Business Networking, Managed Home Networking, Managed Security, and Managed Internet of Things (IoT).

Benu Networks is based in Billerica, Mass.

https://benunetworks.com/


Wednesday, July 18, 2018

SWIM.AI raises $10 million for edge intelligence

SWIM.AI, a start-up based in San Jose, California, announced $10 million in Series B funding for its edge intelligence software.

SWIM.AI combines local data processing/analytics, edge computing and machine learning to efficiently deliver real-time business insights from edge data on edge devices. The goal is to help customers analyze high volumes of streaming edge data and deliver real-time insights that can easily be shared and visualized.
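A minimal sketch of the edge-reduction idea: a node summarizes each window of raw readings locally and forwards only the compact summary, rather than shipping every reading to the cloud. This is illustrative only, assuming invented data; it is not SWIM.AI's EDX software.

```python
# Hypothetical sketch of edge-side data reduction: raw readings stay on the
# device, and only one small summary record per window leaves it.

def summarize_window(readings):
    """Reduce a window of raw sensor readings to a small summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# A window of raw readings collected locally at the edge.
window = [21.0, 21.5, 22.0, 35.0, 21.2]
summary = summarize_window(window)

# Only the summary is transmitted; the anomaly (35.0) is still visible in "max".
print(summary["count"], summary["max"])
```

Five readings collapse to one record here; at production volumes the same ratio is what keeps bandwidth and latency down.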

The company said the funding will be used to launch a Cambridge UK based AI R&D center.

The funding round was led by Cambridge Innovation Capital plc (CIC), the Cambridge-based builder of technology and healthcare companies, with a strategic investment from Arm, and further participation from existing investors Silver Creek Ventures and Harris Barton Asset Management.

“Demand for the EDX software is rapidly increasing, driven by our software’s unique ability to analyze and reduce data, and share new insights instantly peer-to-peer, locally at the ‘edge’ on existing equipment. Efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organization,” said Rusty Cumpston, co-founder and CEO of SWIM.AI. “We are thrilled to partner with our new and continuing investors who share our vision and look forward to shaping the future of real-time analytics at the edge.”

Tuesday, July 17, 2018

Verodin raises $21 million for cyber security instrumentation

Verodin, a start-up based in Tysons, Virginia, announced $21 million in Series B funding for its Security Instrumentation Platform (SIP), which continuously executes tests and analyzes the results to proactively alert on drift from a known-good security baseline. The system validates and optimizes control configurations and provides evidence demonstrating whether the controls purchased and deployed are actually delivering the desired business outcomes.
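The drift-detection idea can be sketched as follows, assuming a hypothetical baseline of expected control outcomes. This illustrates the concept only, not Verodin's implementation; the control names and results are invented.

```python
# Hedged sketch of baseline drift detection: compare the latest test results
# against a known-good baseline and report any control that has drifted.

def detect_drift(baseline, latest):
    """Return the controls whose latest result differs from the baseline."""
    return sorted(
        control for control, expected in baseline.items()
        if latest.get(control) != expected
    )

# Known-good outcomes from a prior validation run (invented examples).
baseline = {"egress-filtering": "blocked", "av-detection": "detected", "dlp": "blocked"}
# Latest test run: the antivirus control has silently stopped detecting.
latest = {"egress-filtering": "blocked", "av-detection": "missed", "dlp": "blocked"}

print(detect_drift(baseline, latest))  # → ['av-detection']
```

Running such a comparison continuously, rather than at audit time, is what turns configuration drift into an alert instead of a breach post-mortem.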

The new funding was led by TenEleven Ventures and Bessemer Venture Partners (BVP). Capital One Growth Ventures, Citi Ventures and all existing investors participated in the round.

Monday, July 16, 2018

Arrcus builds Network OS for white box data center infrastructure

Arrcus, a start-up based in San Jose, California, emerged from stealth to unveil its software-driven, hardware-agnostic network operating system for white boxes.

Arrcus said it sees an opportunity to help enterprises transform the way they manage their networks, liberating them from vertically integrated proprietary solutions and opening the door to horizontally diversified choices of best-in-class silicon and hardware systems.

The company's new ArcOS networking operating system has been ported to both Broadcom’s StrataDNX Jericho+ and StrataXGS Trident 3 switching silicon platforms.

ArcOS is built on a modular micro-services paradigm and offers advanced Layer 3 routing capabilities. Key elements include a high-performance, resilient control plane; an intelligent, programmable Dataplane Adaptation Layer (DPAL); model-driven telemetry for the control plane, data plane and environmentals; and consistent YANG/OpenConfig APIs for easy programmatic access. These capabilities, in conjunction with Broadcom’s StrataDNX Jericho+ platform, enable support for the full BGP Internet routing table.
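As a rough illustration of what model-driven, YANG/OpenConfig-style access enables, operational state is addressed by structured paths and returned as nested data, so it can be queried programmatically. The sketch below uses invented sample data and a toy path walker; real tooling would speak gNMI or NETCONF to the device.

```python
# Hypothetical sketch of path-addressed, model-driven state access.
# The telemetry payload below is invented sample data in an
# OpenConfig-like shape, not actual ArcOS output.
import json

sample_telemetry = json.loads("""
{
  "network-instances": {
    "network-instance": {
      "default": {
        "protocols": {"bgp": {"neighbors": {"192.0.2.1": {"session-state": "ESTABLISHED"}}}}
      }
    }
  }
}
""")

def get_by_path(tree, path):
    """Walk a nested dict using an OpenConfig-style '/'-separated path."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

path = "/network-instances/network-instance/default/protocols/bgp/neighbors/192.0.2.1/session-state"
print(get_by_path(sample_telemetry, path))  # → ESTABLISHED
```

The point of the model-driven approach is exactly this uniformity: any state a YANG model describes can be fetched or streamed by path, without screen-scraping CLI output.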

Arrcus cites the following use cases:

  • Spine-Leaf Clos for Datacenter workloads
  • Internet Peering for CDN providers and ISPs
  • Resilient Routing to the Host
  • Massively Scalable Route-Reflector clusters in physical/container form-factors

Arrcus also announced $15 million in Series A funding from General Catalyst and Clear Ventures. Advisors include Pankaj Patel, former EVP and CDO of Cisco; Amarjit Gill, a serial entrepreneur who founded and sold companies to Apple, Broadcom, Cisco, EMC, Facebook, Google, and Intel; Farzad Nazem, former CTO of Yahoo; Randy Bush, Internet Hall of Fame inductee and founder of Verio (the basis of NTT’s data center business); Fred Baker, former Cisco Fellow, IETF Chair and co-chair of the IPv6 Working Group; Nancy Lee, former VP of People at Google; and Shawn Zandi, Director of Network Engineering at LinkedIn.

“We use ‘network different’ as our fundamental approach to enable the freedom of choice through our product innovation and challenging the status quo.  Arrcus has assembled the world’s best networking technologists, is bringing new capabilities, and changing the business model to make it easier to design, deploy, and manage large scale networking solutions for our customers,” stated Arrcus co-founder and CEO Devesh Garg.

  • Arrcus is headed by Devesh Garg, who previously was president of EZchip (acquired by Mellanox) and founding CEO of Tilera (acquired by EZchip). He also served at Bessemer Venture Partners and Broadcom. Other Arrcus co-founders include Keyur Patel, previously a Distinguished Engineer at Cisco, and Derek Yeung, a former Principal Engineer at Cisco.

Sunday, July 15, 2018

Oasis Labs plans cloud platform based on blockchain

Oasis Labs, a start-up based in Berkeley, California, is reported to have raised $45 million in a private token sale for its "privacy-first" public cloud platform based on blockchain. The idea is to ensure that privacy is built into each layer of the stack, from the application all the way down to the hardware. The system promises codified and self-enforceable privacy protection without relying on any central party.

Oasis Labs is headed by Dr. Dawn Song (CEO) who is Professor of Computer Science at University of California, Berkeley, and a MacArthur Fellow.

https://www.oasislabs.com


Wednesday, July 11, 2018

AT&T takes equity stake in Magic Leap

AT&T has made an equity investment in Magic Leap. Financial terms were not disclosed.

Magic Leap is the high-profile start-up developing an enhanced reality computing and visualization platform. The company is based in Plantation, Florida with offices in Silicon Valley, Seattle, Austin, Dallas, the UK, New Zealand and Israel.

Tuesday, July 10, 2018

AT&T to acquire AlienVault for cyber threat intelligence

AT&T agreed to acquire AlienVault, a privately held company based in San Mateo, California that specializes in enterprise-grade security solutions for small and medium-sized businesses. Financial terms were not disclosed.

AT&T said it intends to integrate AlienVault’s threat intelligence with its cybersecurity solutions portfolio.

“Regardless of size or industry, businesses today need cyber threat detection and response technologies and services,” said Thaddeus Arroyo, CEO, AT&T Business. “The current threat landscape has shifted this from a luxury for some, to a requirement for all. AlienVault’s expertise in threat intelligence will improve our ability to help organizations detect and respond to cybersecurity attacks. Together, with our enterprise-grade detection, response and remediation capabilities, we’re providing scalable, intelligent, affordable security for business customers of all sizes.”

“We’re thrilled to join forces with AT&T. They bring a robust cybersecurity portfolio with an industry-leading technology ecosystem,” said Barmak Meftah, president and CEO, AlienVault. “This deal accelerates our ability to deliver on the AlienVault mission, which is to democratize threat detection and response to companies of all sizes.”



Saturday, June 30, 2018

Veridium raises $16.5 million for biometric authentication

Veridium, a start-up based in Quincy, Mass., announced $16.5 million in Series B funding for its biometric authentication solutions.

Veridium offers a software-only biometrics platform that enables users to replace passwords, tokens, OTPs or swipe cards with multiple biometrics from their smartphone. The solutions include native device sensors, such as face and fingerprint, and Veridium’s 4 Fingers TouchlessID. The result is increased security and an improved, more convenient user experience, all while reducing fraud at a lower total cost of ownership than traditional multi-factor authentication (MFA) solutions.

The investment round was led by UK entrepreneur and philanthropist, Michael Spencer, with participation from Citrix Systems, Inc. and financial services executive and investor Michael Powell.

“In today’s digital age, global organizations are challenged to secure their most critical assets against advanced threats in a way that’s both convenient and secure,” said Michael Spencer. “Veridium is unique in the industry because it provides organizations an enterprise-ready authentication solution to address those problems with the adoption of biometrics – while increasing security and convenience.”

https://www.veridiumID.com


  • In May, Veridium announced that it had been selected by a multinational Swiss bank to replace passwords, tokens and swipe cards, validating the need for stronger, more user-friendly authentication processes.

Thursday, June 28, 2018

Silver Peak adds $90 million in funding for SD-WAN

Silver Peak, which specializes in broadband and hybrid WAN solutions, announced a $90 million strategic investment from TCV.

"It’s rare that we see an opportunity to disrupt a massive, entrenched $100 billion technology category supporting mission-critical applications and communication,” said Tim McAdam, general partner at TCV. “After researching all the players in the multi-billion-dollar SD-WAN market and speaking with enterprise CIOs, it is clear that Silver Peak has the most complete solution, clear market differentiation and traction, and a unique vision for the future of the new WAN edge. We look forward to working with the team to rapidly grow the business."

"With more than $100 billion spent on the WAN every year by enterprises, much of it on technology that pre-dates the cloud, Silver Peak has an enormous opportunity as we deliver disruptive new WAN edge solutions for enterprises,” said Silver Peak Founder and CEO, David Hughes. “TCV has a proven track record for identifying high-growth markets and investing in those innovative companies with the right solution and the right team in place to achieve market leadership. Our partnership with TCV will help accelerate our growth trajectory, increase our competitive advantages and extend our market leadership. We are excited to work with the TCV team."

Silver Peak is based in Santa Clara, California.


Monday, June 25, 2018

Orange Fab launches Fab Connect(ai)

Orange Fab, which is an Orange Silicon Valley initiative for connecting startups to corporations for proof-of-concept projects, distribution, or investment opportunities, is kicking off an accelerator program called Fab Connect(ai) focused on artificial intelligence.

Fab Connect(ai) will run in partnership with a group of top-tier investors led by Cathay Innovation, Iris Capital, Michelin Ventures, Total Energy Ventures and Homebrew. It is being launched in collaboration with prominent partners, including Google Cloud’s Startup Program, NVIDIA’s Inception Program, Microsoft IoT & AI Insider Labs, LAB IX Flex Ventures, Publicis Groupe, Groupe Seb, Michelin, Valeo, Ping An Technology and Lumi.

Startups that participate in Fab Connect(ai) will have access to a network of seed-stage investors and corporations providing technical resources and real-world business challenges.

“Fab Connect(ai) is one of the first accelerator programs to align capital with global growth opportunities sustained by such a network of partners,” noted Georges Nahon, CEO of Orange Silicon Valley, the home of Orange Fab. “Our goal with Fab Connect(ai) is to identify the most promising startups in AI & IoT, a domain of growing importance for Orange and our Fab Force members.”

“Fab Connect(ai) is the only global initiative designed to empower AI with the benefits of smart connectivity, from a collaborative and global perspective, and with an extensive network of influential partners,” said Denis Barrier, co-founder and CEO of Cathay Innovation. “As a global venture capital fund deeply committed in the fourth industrial revolution, Cathay Innovation believes that this approach can be helpful for the rise of a Super AI able to foster the next wave of digital transformation.”

http://www.orangefab.com/connect

Wednesday, June 13, 2018

Aviatrix offers cloud networking as a service for AWS, Azure and Google Cloud

Aviatrix, a start-up based in Palo Alto, California, announced a hosted service to build and manage virtual private cloud (VPC) networks in Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) public cloud environments.

The Aviatrix Hosted Service provides a centralized console for building and managing all secure connectivity. The company said its software-defined router goes well beyond what existing instance-based virtual routers offer. The solution consists of the Aviatrix Controller, now available via the Hosted Service, and Aviatrix Gateways, which are deployed in VPCs to support cloud networking use cases that include AWS global transit networks, remote user VPN and VPC egress security.

“Even using a public cloud vendor’s console—which makes it straightforward to build compute and storage in the public cloud—VPC networking has remained complex, especially as the number of VPCs grow from single digits to many hundreds across the globe,” said Steven Mih, CEO of Aviatrix. “The Aviatrix Hosted Service—the first cloud networking-as-a-service option—provides the easiest way to build out VPC networks in the public cloud. Using our hosted service, it takes less than 10 minutes, and requires no serious networking expertise, to deploy and securely connect a large number of VPCs. It’s your central console for all things networking.”

Key use cases include:

  • Next-gen global transit network. Create VPCs in seconds, scale and migrate workloads from on-premises sites, and manage growing numbers of VPCs with ease from a software-defined, centrally managed controller.
  • VPC egress security. Control VPC traffic outbound to the internet with powerful Layer 7 filtering that enables organizations to allow or deny access based on policies using high-availability, in-line gateways.
  • Remote user VPN. Provide secure remote access to VPCs and cloud services for developers, employees and partners—using the cloud-native Aviatrix solution, based on OpenVPN® technologies.
  • Multicloud peering. Simplify networking among AWS, Azure and GCP public cloud infrastructures. Use Aviatrix’s native, API-based approach to centrally manage connectivity and eliminate complexity for implementations spanning multiple cloud services.
  • Encrypted peering. Meet corporate and regulatory compliance requirements by encrypting data in motion. Use IPsec between any two VPCs to centrally manage secure peering across accounts and clouds.
  • Site-to-cloud VPN. Quickly create secure connections from on-premises data centers, sites or branch locations to cloud resources. Use existing on-prem hardware and internet infrastructure to minimize costs.
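The use cases above all hang off the same API-driven, centrally managed model. As an illustration, a client might compose a request like the following for the encrypted-peering case; the field names and payload shape here are invented for the sketch and are not Aviatrix's actual API.

```python
# Hypothetical sketch of centralized, API-based cloud networking: build the
# payload a controller might accept to create IPsec peering between two VPCs.
# All endpoint and field names are illustrative, not Aviatrix's real API.

def encrypted_peering_request(vpc_a, vpc_b):
    """Compose a controller API payload for IPsec peering between two VPCs."""
    return {
        "action": "create_encrypted_peering",
        "source": {"cloud": vpc_a["cloud"], "vpc_id": vpc_a["vpc_id"]},
        "destination": {"cloud": vpc_b["cloud"], "vpc_id": vpc_b["vpc_id"]},
        "tunnel": "ipsec",  # encrypt data in motion between the two VPCs
    }

# Peer an AWS VPC with a GCP network (invented identifiers).
req = encrypted_peering_request(
    {"cloud": "aws", "vpc_id": "vpc-0a1b"},
    {"cloud": "gcp", "vpc_id": "my-gcp-network"},
)
print(req["tunnel"], req["source"]["cloud"], req["destination"]["cloud"])
```

The appeal of the centralized model is that the same kind of declarative request works across accounts and across clouds, instead of per-cloud console clicks.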


See also