
Thursday, November 15, 2018

Habana Labs raises $75M for AI processors, including Intel investment

Habana Labs, a start-up based in Tel Aviv, Israel, has raised $75 million in an oversubscribed Series B round to fund its development of AI processors.

Habana Labs is currently in production with its first product, a deep learning inference processor named Goya that, according to the company, delivers more than two orders of magnitude better throughput and power efficiency than commonly deployed CPUs. Habana is now offering a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate AI inferencing workloads such as image recognition, neural machine translation, sentiment analysis and recommender systems. A PCIe card based on its Goya HL-1000 processor delivers 15,000 images/second throughput on the ResNet-50 inference benchmark, with 1.3 milliseconds latency, while consuming only 100 watts of power. The Goya solution consists of a complete hardware and software stack, including a high-performance graph compiler, hundreds of kernel libraries, and tools.
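For a rough sense of the efficiency claim, the quoted numbers work out to about 150 images per second per watt. A back-of-the-envelope sketch (the CPU baseline is an assumption for illustration, not a Habana figure):

```python
# Sanity-check the quoted Goya figures; the CPU baseline is assumed.
goya_throughput = 15_000        # images/sec on ResNet-50 (quoted)
goya_power_w = 100              # watts (quoted)

cpu_throughput = 1_000          # images/sec -- assumed CPU baseline
cpu_power_w = 400               # watts -- assumed

goya_eff = goya_throughput / goya_power_w   # 150 img/s per watt
cpu_eff = cpu_throughput / cpu_power_w      # 2.5 img/s per watt
print(f"Goya: {goya_eff:.1f} img/s/W vs CPU: {cpu_eff:.1f} img/s/W "
      f"({goya_eff / cpu_eff:.0f}x under these assumptions)")
```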

Habana Labs expects to launch a training processor, codenamed Gaudi, in the second quarter of 2019.

The funding round was led by Intel Capital and joined by WRV Capital, Bessemer Venture Partners, Battery Ventures and others, including existing investors. This brings total funding to $120 million. The company was founded in 2016.

“We are fortunate to have attracted some of the world’s most professional investors, including the world’s leading semiconductor company, Intel,” said David Dahan, Chief Executive Officer of Habana Labs. “The funding will be used to execute on our product roadmap for inference and training solutions, including our next generation 7nm AI processors, to scale our sales and customer support teams, and it only increases our resolve to become the undisputed leader of the nascent AI processor market.”

“Among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor,” said Lip-Bu Tan, Founding Partner of WRV Capital, a leading international venture firm focusing on semiconductors and related hardware, systems, and software. “We are delighted to partner with Intel in backing Habana Labs’ products and its extraordinary team.”

https://habana.ai/

Kaloom raises $10 million for its software-defined fabric for whiteboxes

Kaloom, a start-up based in Montreal with offices in Santa Clara, California, announced $10 million in Series A1 funding for its Software Defined Fabric (SDF) for automating and optimizing data center networks based on open networking white box switches.

The latest financing was led by the Fonds de solidarité FTQ and Somel Investments, with participation from MBUZZ Investments. This cash infusion brings Kaloom’s total investments to $20.7 million.

“We see a strong need among current beta and other potential customers to do something ‘bottom up’ with the networking fabric, where the switches self-discover and self-provision themselves automatically in a network that ultimately supports programmability,” said Laurent Marchand, CEO and founder of Kaloom. “The latest funding round is validation of Kaloom’s approach and where we believe the industry is moving; enabling us to grow faster than planned and respond to growing customer demand.”

“In a very short period, Kaloom has developed world class software for data centers. We are excited to see such strong interest in the company and its solution and see a bright future with Kaloom. After our initial investment for Kaloom’s launch in 2017, we are pleased to continue to support the company’s growth,” said Gaétan Morin, President and CEO of the Fonds de solidarité FTQ.

Kaloom also announced the appointment of Mike Rymkiewicz as its new vice president of sales, and Thomas Eklund as Kaloom’s vice president of marketing.

  • Kaloom's SDF, which is designed to virtualize the data center, leverages P4-based programming capabilities, initially in switching silicon from Barefoot Networks. A physical data center can be partitioned into multiple independent and fully isolated virtual data centers (vDCs); a conceptual sketch follows the list below. Each vDC operates with its own Virtual Fabric (vFabric), which can host millions of IPv4- or IPv6-based tenant networks. The software-defined fabric offers interfaces to standard orchestration systems and SDN controllers such as OpenStack (ML2), the Kubernetes Container Networking Interface (CNI) and OpenDaylight (NETCONF). Initially supported white boxes, designed for hyperscale and distributed data centers, include models from Accton, Delta and Foxconn. The SDF features self-forming and self-discovery capabilities, as well as zero-touch provisioning of the virtual network and virtual components with automated software upgrades.
  • The Kaloom Software Defined Product Family consists of the following components:
    Kaloom Software Defined Fabric
    Kaloom vRouter
    Kaloom vSwitch
    Kaloom vGW (virtual gateway)
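As referenced in the list above, here is a minimal sketch of the vDC/vFabric partitioning model. The class and field names are illustrative, not Kaloom's actual API:

```python
# Conceptual model: one physical data center carved into fully isolated
# virtual data centers (vDCs), each with its own vFabric of tenant networks.
from dataclasses import dataclass, field

@dataclass
class VFabric:
    name: str
    tenant_networks: list = field(default_factory=list)  # IPv4/IPv6 CIDRs

@dataclass
class VirtualDataCenter:
    name: str
    fabric: VFabric

vdcs = [
    VirtualDataCenter("tenant-a", VFabric("vfabric-a")),
    VirtualDataCenter("tenant-b", VFabric("vfabric-b")),
]
vdcs[0].fabric.tenant_networks.append("10.1.0.0/16")    # IPv4 tenant net
vdcs[1].fabric.tenant_networks.append("fd00:b::/48")    # IPv6 tenant net

for vdc in vdcs:
    print(vdc.name, "->", vdc.fabric.name, vdc.fabric.tenant_networks)
```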


Wednesday, October 10, 2018

CNEX Labs raises $23M for SSD Controller architecture

CNEX Labs, a start-up developing a transformative architecture for solid state drive (SSD) controllers, raised over $23 million in Series D venture capital.

CNEX said its patented, ground-up redesign of traditional SSD controller architecture, plus its turn-key SSD design capability, allows customers to procure SSDs customized for their own needs while reducing their exposure to the cyclical swings in SSD supply that have constrained business growth. The SSD controller technology includes a highly programmable interface to NAND flash memory, allowing the same controller to work with multiple types of NAND; flexible Flash Translation Layer (FTL) control (either drive- or host-based), allowing easier optimization for different types of workloads; and proprietary hardware acceleration supporting key functions typically run in slower firmware. The company is based in San Jose, California.
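The host-based FTL option is the architecturally interesting piece: keeping the logical-to-physical page map on the host lets it be tuned per workload. A toy sketch of the idea (illustrative only, not CNEX's implementation):

```python
# Minimal host-based Flash Translation Layer: the host owns the
# logical-to-physical page mapping, so placement policy is workload-tunable.
class HostFTL:
    def __init__(self):
        self.l2p = {}          # logical page number -> physical page number
        self.next_free = 0     # trivial log-structured allocator

    def write(self, lpn: int, data: bytes) -> int:
        ppn = self.next_free   # always append to a fresh physical page
        self.next_free += 1
        self.l2p[lpn] = ppn    # old mapping (if any) becomes stale
        # A real FTL would program NAND here and garbage-collect stale pages.
        return ppn

    def read(self, lpn: int) -> int:
        return self.l2p[lpn]   # physical page to fetch

ftl = HostFTL()
ftl.write(42, b"v1")
ftl.write(42, b"v2")           # rewrite lands on a new physical page
print(ftl.read(42))            # -> 1
```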

The new funding was led by early investor Dell Technologies Capital (which also led CNEX’s Series A round). Strategic investors also include M12, Microsoft’s venture fund (which led CNEX’s Series C round), major semiconductor foundries, large storage and networking semiconductor companies and other new and existing strategic investors. Additional investors in this round include Sierra Ventures, Walden Venture Investments, Brightstone Venture Capital and others.

“CNEX Labs technology relieves customers from the mercy of a commoditized market and puts them back in control of their own destinies,” said CNEX Labs CEO and Co-Founder Alan Armstrong. “We are proud to have achieved such strong backing and validation from industry partners and investors.”

http://www.cnexlabs.com

Shasta Ventures adds execs from Symantec, Salesforce, InterWest

Shasta Ventures, an early-stage investor based in Menlo Park, California with more than $1 billion under management, announced three additions to its team: former Symantec General Manager Balaji Yelamanchili, Salesforce Chief Information Security Officer (CISO) Izak Mutlu, and InterWest Board Partner Drew Harman.

“Balaji, Izak, and Drew are the dream team, joining us at a period of rapid growth,” said Shasta Managing Director and Partner Jason Pressman. “With the promotion of three partners and the addition of two new associates all within the last year, these new team members will be instrumental in helping us build our portfolio of SaaS, next-gen infrastructure, data intelligence, and security investments into world-class companies.”

Current Shasta enterprise software investments include Forbes 2018 Cloud 100 companies Anaplan and Canva, as well as SaaS 1000 Top Companies Highspot, LeanData, Leanplum, LiveIntent, Lucidworks, and Spiceworks and high-growth start-ups Scalyr and Sendbird. Earlier investments include Apptio (NASDAQ: APTI), the business management system of record for hybrid IT; Glint (acquired by LinkedIn), the people success platform; and Zuora (NYSE: ZUO), the leading cloud-based subscription management platform provider. Shasta’s security portfolio features Airspace Systems, Mocana, Stealth Security and Valimail, among others.

https://shastaventures.com

Tuesday, October 9, 2018

TidalScale raises $24 million for software-defined servers

TidalScale, a start-up offering software-defined servers, announced $24 million in Series B funding. TidalScale enables organizations to build a virtual server of any size from standard commodity physical servers in minutes. And once it’s up and running, TidalScale’s real-time machine learning layer continuously optimizes system performance.
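Conceptually, a software-defined server pools the resources of its member nodes, as in the sketch below, which simply sums them into one logical machine. The hard part, transparently migrating vCPUs and memory pages between nodes, is what TidalScale's hypervisor and machine-learning layer handle and is not modeled here:

```python
# Toy model of aggregating commodity nodes into one software-defined server.
from dataclasses import dataclass

@dataclass
class PhysicalNode:
    hostname: str
    cores: int
    ram_gb: int

def build_virtual_server(nodes):
    """Size the single logical server by summing member-node resources."""
    return {
        "cores": sum(n.cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "members": [n.hostname for n in nodes],
    }

cluster = [PhysicalNode(f"node{i}", cores=32, ram_gb=512) for i in range(8)]
print(build_virtual_server(cluster))  # one 256-core, 4 TB logical server
```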

The new funding comes from a strong investment syndicate that includes Bain Capital Ventures, Hummer Winblad, Sapphire Ventures, Infosys, SK Hynix, and a leading server OEM, as well as other undisclosed investors.

“TidalScale helps organizations sharpen their competitive advantage by making in-memory computing accessible with data sets that exceed the capabilities of even the largest traditional servers, with linear cost. Our breakthrough Software-Defined Server technology amplifies the value of modern data centers by enabling organizations to build a virtual server of any size—the right size—in just minutes. For our customers, TidalScale Software-Defined Servers have proven to be game-changing. We’re honored that so many respected investors and partners recognize the value and promise of TidalScale,” stated Gary Smerdon, President & CEO at TidalScale.

TidalScale is based in Campbell, California.

Start-up profile: TidalScale, building an inverse hypervisor for scale-up servers


TidalScale, a start-up based in Campbell, California, is on a mission to build the world's largest virtual servers based on Intel x86 commodity hardware. The company's "inverse" hypervisor combines multiple physical servers (including their associated CPUs, memory, storage and network) into one or more large software-defined virtual servers. This is the inverse of VMware's model: a rack of physical servers is virtualized as though it were...


Monday, September 10, 2018

Intel acquires NetSpeed Systems for interconnect fabric expertise

Intel has acquired NetSpeed Systems, a start-up based in San Jose, California, for its system-on-chip (SoC) design tools and interconnect fabric intellectual property (IP). Financial terms were not disclosed.

Intel said NetSpeed’s highly configurable and synthesizable offerings will help it more quickly and cost-effectively design, develop and test new SoCs with an ever-increasing set of IP.

NetSpeed provides scalable, coherent, network-on-chip (NoC) IP to SoC designers. NetSpeed’s NoC tool automates SoC front-end design and generates programmable, synthesizable high-performance and efficient interconnect fabrics. The company was founded in 2011.

The NetSpeed team is joining Intel’s Silicon Engineering Group (SEG) led by Jim Keller. NetSpeed co-founder and CEO, Sundari Mitra, will continue to lead her team as an Intel vice president reporting to Keller.

“Intel is designing more products with more specialized features than ever before, which is incredibly exciting for Intel architects and for our customers. The challenge is synthesizing a broader set of IP blocks for optimal performance while reining in design time and cost. NetSpeed’s proven network-on-chip technology addresses this challenge, and we’re excited to now have their IP and expertise in-house,” stated Jim Keller, senior vice president and general manager of the Silicon Engineering Group at Intel.

Monday, September 3, 2018

DOCOMO tests edge computing for video processing

NTT DOCOMO has commenced a proof-of-concept (PoC) of a video IoT solution that uses edge computing to interpret and analyze video data sourced from surveillance cameras, supplementing processing performed in the cloud. As a first step, the PoC will test and evaluate the sourcing of data from surveillance cameras, aiming to develop a solution that uses existing cameras, requires no wired connectivity and does not involve the transmission of large quantities of data.

DOCOMO also confirmed a strategic investment in Cloudian, a Silicon Valley-based leader in enterprise object storage systems and developer of the Cloudian AI Box, a compact, high-speed AI data processing device equipped with camera connectivity and LTE / Wi-Fi capabilities, facilitating edge AI computing with both indoor and outdoor communications.

DOCOMO said the transfer and processing of large volumes of video data to the cloud have been a lengthy process involving significant delays and placing a considerable burden on cloud infrastructure and communication networks. Edge computing could help deal with these shortcomings and herald a new era of high-speed image recognition.



Cloudian raises $94 million for hyperscale data fabric

Cloudian, a start-up offering a hyperscale data fabric for enterprises, raised $94 million in a Series E funding, bringing the company’s total funding to $173 million.

“Cloudian redefines enterprise storage with a global data fabric that integrates both private and public clouds — spanning across sites and around the globe — at an unprecedented scale that creates new opportunities for businesses to derive value from data,” said Cloudian CEO Michael Tso. “Cloudian’s unique architecture offers the limitless scalability, simplicity, and cloud integration needed to enable the next generation of computing driven by advances such as IoT and machine learning technologies.”

The funding round included participation from investors Digital Alpha, Eight Roads Ventures, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures, Inc. and WS (Wilson Sonsini) Investments.

“Computing now operates without physical boundaries, and customers need storage solutions that also span from the data center to the edge,” said Takayuki Inagawa, president & CEO of NTT DOCOMO Ventures. “Cloudian’s geo-distributed architecture creates a global fabric of storage assets that support the next generation of connected devices.”

Cloudian brings its S3 API interface to Azure Blob Storage

Cloudian, a start-up based in San Mateo, California, is extending its hybrid cloud object storage system into Microsoft Azure.

Cloudian HyperCloud for Microsoft Azure leverages the company's S3 API interface to Azure Blob Storage. Cloudian said the world's largest Industrial Internet enterprise is using Cloudian HyperCloud for Azure to connect its Industrial Internet of Things solution to Azure Blob Storage.
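Because the gateway speaks the S3 API, an existing application can simply point its S3 client at the HyperCloud endpoint. A sketch using boto3 (the endpoint URL and credentials below are placeholders, not Cloudian values):

```python
# S3-compatible access through a gateway: same SDK, different endpoint.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://hypercloud.example.com",  # hypothetical gateway
    aws_access_key_id="ACCESS_KEY",                 # placeholder
    aws_secret_access_key="SECRET_KEY",             # placeholder
)

# Standard S3 calls land on the gateway, which maps them to Azure Blob.
s3.put_object(Bucket="iot-telemetry", Key="device-123/reading.json",
              Body=b'{"temp": 21.5}')
obj = s3.get_object(Bucket="iot-telemetry", Key="device-123/reading.json")
print(obj["Body"].read())
```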

"Cloudian HyperCloud for Azure is a game-changer for public cloud storage, enabling true bi-modal data storage across multiple cloud environments," said Michael Tso, Cloudian CEO and co-founder. "For the first time, customers have a fully supported, enterprise-ready solution to access their choice of cloud platforms from their S3-compliant applications. Customers can be up and running in minutes by launching HyperCloud from the Microsoft Azure Marketplace."

Thursday, August 30, 2018

Database for the Instant Experience -- a profile of Redis Labs

The user experience is the ultimate test of network performance. For many applications, this often comes down to the lag after clicking and before the screen refreshes. We can trace the packets back from the user's handset, through the RAN, mobile core, metro transport, and perhaps long-haul optical backbone to a cloud data center. However, even if this path traverses the very latest generation infrastructure, if it ends up triggering a search in an archaic database, the delayed response time will be more harmful to the user experience than the network latency. Some databases are optimized for performance. Redis, an open source, in-memory, high-performance database, claims to be the fastest -- a database for the Instant Experience. I recently sat down with Ofer Bengal to discuss Redis, Redis Labs and the implication for networking and hyperscale clouds.



Jim Carroll:  The database market has been dominated by a few large players for a very long time. When did this space start to open up, and what inspired Redis Labs to jump into this business?

Ofer Bengal: The database segment of the software market had been on a stable trajectory for decades. If you had asked me ten years ago if it made sense to create a new database company, I would have said that it would be insane to try. But cracks started to open when large Internet companies such as Amazon and Facebook, which generated huge amounts of data and had very stringent performance requirements, realized that the relational databases provided by market leaders like Oracle were not good enough for their modern use cases. With a relational database, when the amount of data grows beyond the size of a single server, clustering becomes very complex and performance drops dramatically.

About fifteen years ago, a number of Internet companies started to develop internal solutions to these problems. Later on, the open source community stepped in to address these challenges and a new breed of databases was born, which today is broadly categorized under “unstructured" or "NoSQL" databases.

Redis Labs was started in a bit of an unusual way, and not as a database company. The original idea was to improve application performance, because we, the founders, came from that space. We always knew that databases were the main bottleneck in app performance and looked for ways to improve that. So, we started with database caching. At that time, Memcached was a very popular open source caching system for accelerating database performance. We decided to improve it and make it more robust and enterprise-ready. And that's how we started the company.

In 2011, when we started to develop the product, we discovered a fairly new open source project by the name "Redis" (which stands for "Remote Dictionary Server"), which was started by Salvatore Sanfilippo, an Italian developer who lives in Sicily to this very day. He essentially created his own in-memory database for a certain project he worked on and released it as open source. We decided to adopt it as the engine under the hood for what we were doing. However, shortly thereafter we started to see the amazing adoption of this open source database. After a while, it was clear we were in the wrong business, so we decided to focus on Redis as our main product and became a Redis company. Salvatore Sanfilippo later joined the company and continues to lead the development of the open source project with a group of developers. A much larger R&D team develops Redis Enterprise, our commercial offering.

Jim Carroll: To be clear, there is an open source Redis community and there's a company called Redis Labs, right?

Ofer Bengal:  Yes. Both the open source Redis and Redis Enterprise are developed by Redis Labs, but by two separate development teams. This is because a different mindset is required for developing open source code and an end-to-end solution suitable for enterprise deployment.
 
Jim Carroll: Tell us more about Redis Labs, the company.

Ofer Bengal: We have a monumental number of open source Redis downloads. Its adoption has spread so widely that today you find it in most companies in the world. Our mission at Redis Labs is to help our customers unlock answers from their data. As a result, we invest equally in both open source Redis and our enterprise-grade offering, Redis Enterprise, and deliver disruptive capabilities that help our customers find answers to their challenges and deliver the best applications and services for their own customers. We are passionate about our customers, community, people and product. We're seeing a noticeable trend where enterprises that adopt OSS Redis mature their implementations with Redis Enterprise, to better handle scale, high availability, durability and data persistence. We have customers from all industry verticals, including six of the Fortune 10 and about 40% of the Fortune 100. To give you a few examples, we have AMEX, Walmart, DreamWorks, Intuit, Vodafone, Microsoft, TD Bank, C.H. Robinson, Home Depot, Kohl's, Atlassian, eHarmony – I could go on.

Redis Labs now has over 220 employees across our Mountain View, CA headquarters, our R&D center in Israel, our London sales office and other locations around the world. We’ve completed a few investment rounds, totaling $80 million, from Bain Capital Ventures, Goldman Sachs, Viola Ventures (Israel) and Dell Technologies Capital.

Jim Carroll: So, how can you grow and profit in an open source market as a software company?

Ofer Bengal: The market for databases has changed a lot. Twenty years ago, if a company adopted Oracle, for example, any software development project carried out in that company had to be built with this database. This is not the case anymore. Digital transformation and cloud adoption are disrupting this very traditional model and driving the modernization of applications. New-age developers now have the flexibility to select their preferred solutions and tools for the specific problem or use case at hand. They are looking for the best-of-breed database to meet each use case of their application. With the evolution of microservices, which is the modern way of building apps, this is even more evident. Each microservice may use a different database, so you end up with multiple databases for the same application. A simple smartphone application, for instance, may use four, five or even six different databases. These technological evolutions opened the market to database innovations.

In the past, most databases were relational, where the data is modeled in tables, and tables are associated with one another. This structure, while still relevant for some use cases, does not satisfy the needs of today’s modern applications.

Today, there are many flavors of unstructured NoSQL databases: simple key-value databases like DynamoDB, document-based databases like MongoDB, column-based databases like Cassandra, graph databases like Neo4j, and others. Each one is good for certain use cases. There is also a new trend called multi-model databases, which means that a single database can support different data modeling techniques, such as relational, document, graph, etc. The current race in the database world is about becoming the optimal multi-model database.

Tying it all together, how do we expect to grow as an organization and profit in an open source market? We have never settled for the status quo. We looked at today’s environments and the challenges that come with them and figured out a way to deliver Redis as a multi-model database. We continually strive to lead and disrupt this market. With the introduction of modules, customers can now use Redis Enterprise as a key-value store, document store, graph database, search engine and much more. As a result, Redis Enterprise is the best-of-breed database suited to the needs of modern applications. In addition, Redis Enterprise delivers the simplicity, ease of scale and high availability large enterprises desire. This has helped us become a well-loved database and a profitable business.

Jim Carroll: What makes Redis different from the others?

Ofer Bengal: Redis is by far the fastest and most powerful database. It was built from day one for optimal performance: besides processing entirely in RAM (or any of the new memory technologies), everything is written in C, a low-level programming language. All the commands, data types, etc., are optimized for performance. All this makes Redis super-fast. For example, from a single, average-size cloud instance on Amazon, you can easily generate 1.5 million transactions per second at sub-millisecond latency. Can you imagine that? This means that the average latency of those 1.5 million transactions will be less than one millisecond. There is no database that comes even close to this performance. You may ask, what is the importance of this? Well, the speed of the database is by far the major factor influencing application performance, and Redis can guarantee instant application response.
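Readers can sanity-check latency against a plain open source Redis instance with a few lines of redis-py. This is a sketch assuming a local server; a single unpipelined client will not reproduce the parallel, pipelined conditions behind a 1.5 million ops/sec benchmark:

```python
# Measure average SET latency against a local Redis (pip install redis).
import time
import redis

r = redis.Redis(host="localhost", port=6379)

N = 10_000
start = time.perf_counter()
for i in range(N):
    r.set(f"key:{i}", i)
elapsed = time.perf_counter() - start
print(f"avg SET round trip: {elapsed / N * 1000:.3f} ms")
```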

Jim Carroll: How are you tracking the popularity of Redis?

Ofer Bengal: If you look at DockerHub, which is the marketplace for Docker containers, you can see the stats on how many containers of each type were launched there. The last time I checked, over 882 million Redis containers had been launched on DockerHub, compared to about 418 million for MySQL and 642 million for MongoDB. So, Redis is way more popular than both MongoDB and MySQL. And we have many other similar data points confirming the popularity of Redis.

Jim Carroll: If Redis puts everything in RAM, how do you scale? RAM is an expensive resource, and aren’t you limited by the amount that you can fit in one system?

Ofer Bengal: We developed very advanced clustering technology which enables Redis Enterprise to scale infinitely. We have customers with tens of terabytes of data in RAM. The notion that RAM is tiny and used only for very special purposes is no longer true, and as I said, we see many customers with extremely large datasets in RAM. Furthermore, we developed a technology for running Redis on Flash, with near-RAM performance at 20% of the server cost. The intelligent data tiering that Redis on Flash delivers allows our customers to keep their most used data in RAM while moving the less utilized data onto cheaper flash storage. This has organizations such as Whitepages saving over 80% of their infrastructure costs, with little compromise to performance.

In addition to that, we’re working very closely with Intel on their Optane™ DC persistent memory based on 3D Xpoint™. As this technology becomes mainstream, the majority of the database market will have to move to being in-memory.


Jim Carroll: What about the resiliency challenge? How does Redis deal with outages?

Ofer Bengal: Normally with memory-based systems, if something goes wrong with a node or a cluster, there is a risk of losing data. This is not the case with Redis Enterprise, because it is redundant and persistent. You can write everything to disk without slowing down database operations. This is important to note because persisting to disk is a major technological challenge due to the bottleneck of writing to disk. We developed a persistence technology that preserves Redis' super-fast performance while still writing everything to disk. In case of memory failures, you can read everything from disk. On top of that, the entire dataset is replicated in memory. Each database can have multiple such replicas, so if one node fails, we instantly fail over to a replica. With this and some other provisions, we provide several layers of resiliency.
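In open source Redis, the durability trade-offs described here surface as standard configuration directives; a sketch of setting them via redis-py (Redis Enterprise manages persistence through its own control plane rather than these knobs):

```python
# Standard open source Redis persistence directives, set via redis-py.
import redis

r = redis.Redis()

r.config_set("appendonly", "yes")        # log every write to the AOF
r.config_set("appendfsync", "everysec")  # fsync once per second: at most
                                         # one second of writes at risk
print(r.config_get("appendonly"), r.config_get("appendfsync"))
```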

We have been running our database-as-a-service for five years now, with thousands of customers, and never lost a customer's data, even when cloud nodes failed.

Jim Carroll: So how is the market for in-memory databases developing? Can you give some examples of applications that run best in memory?

Ofer Bengal: Any customer-facing application today needs to be fast. The new generation of end users expects an instant experience from all their apps and is intolerant of slow responses, whether caused by the application or by the network.

You may ask "how is 'instant experience' defined?"  Let’s take an everyday example to illustrate what ‘instant’ really means., When browsing on your mobile device, how long are you willing to wait before your information is served to you? What we have found is that the expected time from tapping your smartphone or clicking on your laptop until you get the response, should not be more than 100 milliseconds. As an end consumer, we are all dissatisfied with waiting and we expect information to be served instantly. What really happens behind the scenes, however, is once you tap your phone, a query goes over the Internet to a remote application server, which processes the request and may generate several database queries. The response is then transmitted back over the Internet to your phone.

Now, the round trip over the Internet (on a "good" Internet day) is at least 50 milliseconds, and the app server needs at least 50 milliseconds to process your request. This means that at the database layer, the response time must be sub-millisecond, or you’re pretty much exceeding what is considered the acceptable standard wait time of 100 milliseconds. At a time of increasing digitization, consumers expect instant access to the service, and anything less will directly impact the bottom line. And, as I already mentioned, Redis is the only database that can respond in less than one millisecond under almost any transaction load.
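Making that budget explicit with the numbers quoted above:

```python
# The 100 ms "instant experience" budget, spent before the database runs.
total_budget_ms = 100
network_rtt_ms = 50     # Internet round trip on a "good" day (at least)
app_server_ms = 50      # application-tier processing (at least)

headroom_ms = total_budget_ms - network_rtt_ms - app_server_ms
print(f"headroom for the database tier: {headroom_ms} ms")  # -> 0 ms
# Every millisecond of database latency lands on the user's perceived wait,
# which is why sub-millisecond queries matter.
```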

Let me give you some use case examples. Companies in the finance industry (banks, financial institutions) have been using relational databases for years. Any change, such as replacing an Oracle database, is analogous to open-heart surgery. But when it comes to new customer-facing banking applications, such as checking your account status or transferring funds, they want to deliver an instant experience. Many banks are now moving these types of applications to other databases, and Redis is often chosen for its blazing-fast performance.

As I mentioned earlier, the world is moving to microservices. Redis Enterprise fits the needs of this architecture quite nicely as a multi-model database. In addition, Redis is very popular for messaging, queuing and time series capabilities. It is also strong when you need fast data ingest, for example, when massive amounts of data are coming in from IoT devices, or in other cases where you have huge amounts of data that needs to be ingested in your system. What started off as a solution for caching has, over the course of the last few years, evolved into an enterprise data platform.
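As a concrete example of the fast-ingest pattern, open source Redis offers the Streams data type; a minimal redis-py sketch, assuming a local Redis 5+ server:

```python
# Fast ingest with Redis Streams: producers append, consumers read back.
import redis

r = redis.Redis()

# Devices append readings; Redis sustains very high append rates.
r.xadd("sensor:readings", {"device": "42", "temp": "21.5"})
r.xadd("sensor:readings", {"device": "43", "temp": "19.8"})

# Read the stream from the beginning ("0" = everything).
for entry_id, fields in r.xread({"sensor:readings": "0"})[0][1]:
    print(entry_id, fields)
```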

Jim Carroll: You mentioned microservices, and that word is almost becoming synonymous with containers. And when you mention containers, everybody wants to talk about Kubernetes, and managing clusters of containers in the cloud. How does this align with Redis?

Ofer Bengal: Redis Enterprise maintains a unified deployment across all Kubernetes environments, such as Red Hat OpenShift, Pivotal Container Service (PKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Container Service for Kubernetes (EKS) and vanilla Kubernetes. It guarantees that each Redis Enterprise node (with one or more open source servers) resides on a pod that is hosted on a different VM or physical server. And by using the latest Kubernetes primitives, Redis Enterprise can now run as a stateful service across these environments.

We use a layered architecture that splits responsibilities between tasks that Kubernetes does efficiently, such as node auto-healing and node scaling; tasks that the Redis Enterprise cluster is good at, such as failover, shard-level scaling, configuration and Redis monitoring; and tasks that both can orchestrate together, such as service discovery and rolling upgrades with zero downtime.

Jim Carroll: How are the public cloud providers supporting Redis?

Ofer Bengal:  Most cloud providers, such as AWS, Azure and Google, have launched their own versions of Redis database-as-a-service, based on open source Redis, although they hardly contribute to it.

Redis Labs, the major contributor to open source Redis, has launched services on all those clouds, based on Redis Enterprise.  There is a very big difference between open source Redis and Redis Enterprise, especially if you need enterprise-level robustness.

Jim Carroll: So what is the secret sauce that you add on top of open source Redis?

Ofer Bengal: Redis Enterprise brings many additional capabilities to open source Redis. For example, as I mentioned earlier, sometimes an installation requires terabytes of RAM, which can get quite expensive. Redis Enterprise has built-in capabilities that allow our customers to run Redis on SSDs with almost the same performance as RAM. This is great for reducing the customer's total cost of ownership. By providing this capability, we can cut the underlying infrastructure costs by up to 80%. For the past few years, we’ve been working with most vendors of advanced memory technologies such as NVMe and Intel’s 3D Xpoint. We will be one of the first database vendors to take advantage of these new memory technologies as they become more and more popular. Databases like Oracle, which were designed to write to disk, will have to undergo a major facelift in order to take advantage of these new memory technologies.

Another big advantage Redis Enterprise delivers is high availability. With Redis Enterprise, you can create multiple replicas in the same data center, across data centers, across regions, and across clouds. You can also replicate between cloud and on-premises servers. Our single-digit-seconds failover mechanism guarantees service continuity.

Another differentiator is our active-active global distribution capability. If you would like to deploy an application in both the U.S. and Europe, for example, you will have application servers in a European data center and in a U.S. data center. But what about the database? Would it be a single database for those two locations? While that helps avoid data inconsistency, it's terrible for performance, at least for one of the two data centers. If you have a separate database in each data center, performance may improve, but at the risk of consistency. Let’s assume that you and your wife share the same bank account, and that you are in the U.S. and she is traveling in Europe. What if both of you withdraw funds at an ATM at about the same time? If the app servers in the U.S. and Europe are linked to the same database, there is no problem, but if the bank's app uses two databases (one in the U.S. and one in Europe), how would they prevent an overdraft? Having a globally distributed database with full sync is a major challenge. If you try to do conflict resolution over the Internet between Europe and the U.S., database operation will slow down dramatically, which is a no-go for the instant experience end users demand. So, we developed a unique technology for Redis Enterprise based on the mathematically proven CRDT concept developed in universities. Today, with Redis Enterprise, our customers can deploy a global database in multiple data centers around the world while assuring local latency and strong eventual consistency. Each one works as if it is fully independent, but behind the scenes we ensure they are all in sync.
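The conflict-free convergence described here can be illustrated with the simplest CRDT, a counter whose merges commute, so sites update independently and still agree. A toy sketch (Redis Enterprise's CRDTs cover full Redis data types, not just counters):

```python
# PN-counter CRDT: per-site increment/decrement totals, merged by maximum.
class PNCounter:
    def __init__(self, site, sites):
        self.site = site
        self.incs = {s: 0 for s in sites}   # per-site increments
        self.decs = {s: 0 for s in sites}   # per-site decrements

    def add(self, n):
        if n >= 0:
            self.incs[self.site] += n
        else:
            self.decs[self.site] -= n

    def merge(self, other):
        """Take per-site maxima; merge order never changes the result."""
        for s in self.incs:
            self.incs[s] = max(self.incs[s], other.incs[s])
            self.decs[s] = max(self.decs[s], other.decs[s])

    def value(self):
        return sum(self.incs.values()) - sum(self.decs.values())

us = PNCounter("us", ["us", "eu"])
eu = PNCounter("eu", ["us", "eu"])
us.add(100)                      # deposit recorded in the U.S.
eu.add(-40)                      # concurrent withdrawal in Europe
us.merge(eu); eu.merge(us)
print(us.value(), eu.value())    # both sites converge to 60
```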

Jim Carroll: What is the ultimate ambition of this company?

Ofer Bengal: We have the opportunity to build a very big software company. I’m not a kid anymore and I do not live on fantasies. Look at the database market – it’s huge! It is projected to grow to $50–$60 billion in sales in 2020 (depending on which analyst firm you ask). It is the largest segment in the software business, twice the size of the security/cyber market. The crack in the database market that opened up with NoSQL will represent 10% of this market in the near term. However, the border between SQL and NoSQL is becoming blurred, as companies such as Oracle add NoSQL capabilities and NoSQL vendors add SQL capabilities. I think that over time, it will become a single large market. Redis Labs provides a true multi-model database. We support key-value with multiple data structures, graph, search and JSON (document-based), all with built-in functionality, not just APIs. We constantly increase the use case coverage of our database, and that is ultimately the name of the game in this business. Couple all that with Redis' blazing-fast performance, the massive adoption of open source Redis and the fact that it is the "most loved database" (according to Stack Overflow), and you would agree that we have a once-in-a-lifetime opportunity!





Tuesday, August 28, 2018

Gremlin offers Failure-as-a-Service for Docker

Gremlin, a start-up based in San Francisco, has developed a "failure injection platform" that allows developers to stress test Docker environments to better prepare for real-world disasters by simulating compounding issues.

The company said its Failure-as-a-Service platform aims to make containerized infrastructure more resilient.

In December 2017, Gremlin launched the first iteration of its platform alongside a $7.5 million Series A funding round, recreating common failure states within hybrid cloud infrastructure.

“The concept of purposefully injecting failure into systems is still new for many companies, but chaos engineering has been practiced at places like Netflix and Amazon for over a decade,” said Matthew Fornaciari, CTO and Co-Founder of Gremlin. “We like to use the vaccine analogy: injecting small amounts of harm can build immunity that proactively avoids disasters. With today’s updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production.”
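A toy illustration of the failure-injection idea: stop a random Docker container and see whether the deployment self-heals. This sketches the concept only; Gremlin's product adds safety controls, scheduling and blast-radius limits:

```python
# Chaos-style experiment: kill one running container at random.
import random
import subprocess

def running_containers():
    out = subprocess.run(["docker", "ps", "-q"], capture_output=True,
                         text=True, check=True)
    return out.stdout.split()

def inject_failure():
    containers = running_containers()
    if not containers:
        print("nothing to break")
        return
    victim = random.choice(containers)
    print(f"stopping container {victim}")
    subprocess.run(["docker", "stop", victim], check=True)
    # A resilient setup (restart policies, an orchestrator) should
    # replace the container without operator intervention.

if __name__ == "__main__":
    inject_failure()
```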

http://www.gremlin.com


  • The Series A funding came from Index Ventures and Amplify Partners.


Monday, August 27, 2018

NEC invests in Tascent for its iris biometric system

NEC announced an equity investment in Tascent, a start-up based in Los Gatos, California, that offers biometric identification based on iris scanning.

Tascent's technologies include optical control technology to remotely capture an accurate, high-quality iris image at high speed, and a user interface (UI) technology that smoothly guides users in support of capturing accurate biometric information. The technology is embedded in security systems used at airports, government agencies and enterprises. The company was founded in 2015.

NEC said its investment and partnership will enable the two companies to jointly enhance the capacity of iris recognition, using Tascent’s optical control and UI technologies and NEC’s advanced biometric engines, and create a next generation iris authentication offering for the public safety market.

Tuesday, August 21, 2018

Slack rakes in $427 million in series H funding

Slack, the San Francisco-based start-up offering collaboration apps and services, announced $427 million in a Series H funding round, having previously raised $827 million across earlier rounds. The company says the new investment reflects a post-money valuation of more than $7.1 billion.

Slack claims more than 8 million Daily Active Users (DAUs) and more than 70,000 paid teams.

The Series H equity round was led by Dragoneer Investment Group and General Atlantic, joined by funds and accounts advised by T. Rowe Price Associates, Inc., funds advised by Wellington Management, Baillie Gifford and Sands Capital, as well as existing investors.


  • Slack Technologies was founded in 2009 in Vancouver, British Columbia, Canada, by a team drawn from the founders of Ludicorp, the company that created Flickr.
  • Amazon Web Services (AWS) has previously published a case study about how Slack leverages its cloud infrastructure to enable its collaboration services.  https://aws.amazon.com/solutions/case-studies/slack/



Sunday, August 19, 2018

BT backs Telecom Infra Project's start-up accelerator programme

For the second year in a row, BT will host a start-up competition at the TIP Ecosystem Acceleration Centre (TEAC) at the BT Innovation Labs in Martlesham, Suffolk and in London’s Tech City.

The competition seeks start-ups in the Intent-Based Networking and Mobile fields. Entries will be judged by a panel of senior network and technology leaders from BT, Facebook and TIP. Shortlisted companies will be invited to a final pitch event at BT Tower on Friday 12th October, where the winners will be chosen.

Last year's winners included Unmanned Life, Zeetta Networks and KETS Quantum Security.

Howard Watson, CTIO of BT, and a member of the TIP Board, said: “TIP was created to help tackle some of the big challenges in Telecoms, boosting global connectivity by supporting big ideas. We’re particularly interested in start-ups with innovative ideas on how to deploy mobile networks cost-effectively in rural areas, as we look ahead to the roll-out of 5G services.”

The Telecom Infra Project is a global community that includes more than 500 member companies, including operators, infrastructure providers, system integrators, and other technology companies working together to transform the traditional approach to building and deploying telecom network infrastructure.

Interested companies should apply by 24th September 2018 via the TEAC UK website: https://www.btplc.com/Innovation/TEAC/index.htm

Tuesday, July 31, 2018

Qadium secures $37M contract from U.S. Navy's Space and Warfare Command

Qadium, a start-up based in San Francisco, has been awarded a $37.6 million contract by the U.S. Department of Defense for its cybersecurity solution.

Qadium provides real-time monitoring of the entire global Internet for customers' assets.

The company said the contract was awarded by the U.S. Navy's Space and Warfare Command after the Department of Defense validated Qadium's commercial software.  Qadium has done prior work for Defense Department entities including U.S. Cyber Command, the Defense Information Systems Agency, Fleet Cyber Command, Army Cyber Command and the DoD CIO office.

"The Defense Department used to love to build its own IT, often poorly and at high cost to taxpayers," said Qadium CEO and CIA veteran Tim Junio.  "The times are finally changing.  In the face of the greatest cybersecurity challenges in our nation's history, we're seeing the government and private tech companies coming together, making both sides better off."

Investors in Qadium include New Enterprise Associates, TPG, Institutional Venture Partners and Founders Fund.

http://www.qadium.com

Monday, July 30, 2018

Serverless raises $10M

Serverless, a start-up based in San Francisco, announced $10 million in Series A funding for its open source Serverless Framework.

The company's mission is to provide a single toolkit offering everything teams and enterprises need to operationalize serverless deployments.

The company said it takes a vendor-agnostic approach across major platforms and cloud providers such as AWS, Azure, Google Cloud Functions, Kubernetes, etc.
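For a sense of what gets operationalized, here is a minimal function in AWS Lambda's Python handler convention; the framework's configuration file (not shown) maps events to handlers like this one. A generic sketch, not taken from Serverless's documentation:

```python
# Minimal HTTP-triggered function in the AWS Lambda handler convention.
import json

def handler(event, context):
    """Return a JSON greeting; query parameter 'name' is optional."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```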

The funding was led by Lightspeed Venture Partners with additional participation by Trinity Ventures.

https://serverless.com/

Thursday, July 19, 2018

Benu Networks raises $17.5 million in funding

Benu Networks, which offers a carrier-class Virtual Service Edge software platform, has closed a total of $17.5 million in two funding rounds over the past 12 months. The most recent round of $10 million, which closed in July 2018, was led by new investor Spring Lake Equity Partners, a Boston-based private equity firm.

The earlier funding round of $7.5 million was led by long-time investors Sutter Hill Ventures and Liberty Global Ventures, a global investment fund owned by Liberty Global, the world’s largest international cable company.

Benu Networks’ Virtual Service Edge platform enables network operators to rapidly create new business opportunities for Mobile Wi-Fi, Managed Business Networking, Managed Home Networking, Managed Security, and Managed Internet of Things (IoT).

Benu Networks is based in Billerica, Mass.

https://benunetworks.com/


Wednesday, July 18, 2018

SWIM.AI raises $10m for edge intelligence

SWIM.AI, a start-up based in San Jose, California, announced $10 million in Series B funding for its edge intelligence software.

SWIM.AI combines local data processing/analytics, edge computing and machine learning to efficiently deliver real-time business insights from edge data on edge devices. The goal is to help customers analyze high volumes of streaming edge data and deliver real-time insights that can easily be shared and visualized.
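The core idea, reducing a raw high-rate stream to compact insights before anything leaves the device, can be sketched in a few lines (illustrative only, not SWIM.AI's EDX API):

```python
# Edge reduction: keep a rolling window locally, forward only summaries.
from collections import deque

class EdgeReducer:
    def __init__(self, window=100):
        self.window = deque(maxlen=window)   # recent samples stay on-device

    def ingest(self, value):
        self.window.append(value)

    def summary(self):
        """What actually leaves the device: a few numbers, not raw data."""
        n = len(self.window)
        return {"count": n, "mean": sum(self.window) / n,
                "min": min(self.window), "max": max(self.window)}

reducer = EdgeReducer()
for reading in (20.1, 20.4, 35.9, 20.2):     # raw stream, processed locally
    reducer.ingest(reading)
print(reducer.summary())                     # compact insight sent upstream
```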

The company said the funding will be used to launch an AI R&D center in Cambridge, UK.

The funding round was led by Cambridge Innovation Capital plc (CIC), the Cambridge-based builder of technology and healthcare companies, with a strategic investment from Arm, and further participation from existing investors Silver Creek Ventures and Harris Barton Asset Management.

“Demand for the EDX software is rapidly increasing, driven by our software’s unique ability to analyze and reduce data, and share new insights instantly peer-to-peer, locally at the ‘edge’ on existing equipment. Efficiently processing edge data and enabling insights to be easily created and delivered with the lowest latency are critical needs for any organization,” said Rusty Cumpston, co-founder and CEO of SWIM.AI. “We are thrilled to partner with our new and continuing investors who share our vision and look forward to shaping the future of real-time analytics at the edge.”

Tuesday, July 17, 2018

Verodin raises $21 million for cyber security instrumentation

Verodin, a start-up based in Tysons, Virginia, announced $21 million in Series B funding for its Security Instrumentation Platform (SIP), which continuously executes tests and analyzes the results to proactively alert on drift from a known-good security baseline. The system validates and optimizes control configurations and provides evidence demonstrating whether the controls purchased and deployed are actually delivering the desired business outcomes.
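The continuous-validation loop amounts to running the same test battery on a schedule and diffing results against a known-good baseline; a toy sketch (illustrative, not Verodin's SIP API):

```python
# Alert when security-control test results drift from the baseline.
BASELINE = {"egress-blocked": True, "ids-alert-fires": True}

def run_security_tests():
    # In a real platform each entry is an executed attack simulation;
    # results are stubbed here for illustration.
    return {"egress-blocked": True, "ids-alert-fires": False}

def check_drift(baseline, results):
    return [test for test, expected in baseline.items()
            if results.get(test) != expected]

drift = check_drift(BASELINE, run_security_tests())
if drift:
    print("ALERT: controls drifting from baseline:", drift)
```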

The new funding was led by TenEleven Ventures and Bessemer Venture Partners (BVP). Capital One Growth Ventures, Citi Ventures and all existing investors participated in the round.

Monday, July 16, 2018

Arrcus builds Network OS for white box data center infrastructure

Arrcus, a start-up based in San Jose, California, emerged from stealth to unveil its software-driven, hardware-agnostic network operating system for white boxes.

Arrcus said it sees an opportunity to help enterprises transform the way they manage their networks, liberating them from vertically integrated proprietary solutions and opening the door to horizontally diversified choices of best-in-class silicon and hardware systems.

The company's new ArcOS networking operating system has been ported to both Broadcom’s StrataDNX Jericho+ and StrataXGS Trident 3 switching silicon platforms.

ArcOS is built on a modular microservices paradigm and offers advanced Layer 3 routing capabilities. Key elements include a high-performance, resilient control plane; an intelligent, programmable Dataplane Adaptation Layer (DPAL); model-driven telemetry for the control plane, data plane and environment; and consistent YANG/OpenConfig APIs for easy programmatic access. These capabilities, in conjunction with Broadcom’s StrataDNX Jericho+ platform, enable support for the full BGP Internet routing table.

Arrcus cites the following use cases:

  • Spine-Leaf Clos for Datacenter workloads
  • Internet Peering for CDN providers and ISPs
  • Resilient Routing to the Host
  • Massively Scalable Route-Reflector clusters in physical/container form-factors

Arrcus also announced $15 million in Series A funding from General Catalyst and Clear Ventures. Advisors include Pankaj Patel, former EVP and CDO of Cisco; Amarjit Gill, serial entrepreneur who founded and sold companies to Apple, Broadcom, Cisco, EMC, Facebook, Google, and Intel; Farzad Nazem, former CTO of Yahoo; Randy Bush, Internet Hall of Fame inductee and founder of Verio (the basis of NTT’s data center business); Fred Baker, former Cisco Fellow, IETF Chair and Co-Chair of the IPv6 Working Group; Nancy Lee, ex-VP of People at Google; and Shawn Zandi, Director of Network Engineering at LinkedIn.

“We use ‘network different’ as our fundamental approach to enable the freedom of choice through our product innovation and challenging the status quo.  Arrcus has assembled the world’s best networking technologists, is bringing new capabilities, and changing the business model to make it easier to design, deploy, and manage large scale networking solutions for our customers,” stated Arrcus co-founder and CEO Devesh Garg.

  • Arrcus is headed by Devesh Garg, who previously was president of EZchip and founding CEO of Tilera (acquired by EZchip). He also served at Bessemer Venture Partners and Broadcom. Other Arrcus co-founders include Keyur Patel, who was a Distinguished Engineer at Cisco, and Derek Yeung, a former Principal Engineer at Cisco.

Sunday, July 15, 2018

Oasis Labs plans cloud platform based on blockchain

Oasis Labs, a start-up based in Berkeley, California, is reported to have raised $45 million in a private token sale for its "privacy-first public cloud platform based on blockchain." The idea is to ensure that privacy is built into each layer of the stack, from the application all the way down to the hardware. The system promises codified and self-enforceable privacy protection without relying on any central party.

Oasis Labs is headed by Dr. Dawn Song (CEO) who is Professor of Computer Science at University of California, Berkeley, and a MacArthur Fellow.

https://www.oasislabs.com


Wednesday, July 11, 2018

AT&T takes equity stake in Magic Leap

AT&T has made an equity investment in Magic Leap. Financial terms were not disclosed.

Magic Leap is the high-profile start-up developing an enhanced reality computing and visualization platform. The company is based in Plantation, Florida with offices in Silicon Valley, Seattle, Austin, Dallas, the UK, New Zealand and Israel.
