Showing posts with label Start-up. Show all posts

Wednesday, May 19, 2021

Ampere makes inroads with Microsoft, Tencent, Oracle

Ampere Computing hosted a technology update event at which it said it is on track to deliver new CPU designs next year, with cores designed in-house and optimized for cloud-native workloads.

The company's current-generation 80-core Altra and 128-core Altra Max processors, which are based on the Arm Neoverse N1 architecture, are now shipping to customers including Microsoft, Oracle, Tencent, ByteDance and others.

Ampere also announced a collaboration with Microsoft to bring new cloud-scale processing solutions to the market.

Sunday, April 4, 2021

Yugabyte raises $48 million for database-as-a-service

Yugabyte, a start-up based in Sunnyvale, California, secured $48 million in venture funding for its open source distributed SQL database for Internet-scale operations.

The funding will also be used to further accelerate enterprise adoption of Yugabyte's commercial products. Both Yugabyte Platform, a self-managed, private database-as-a-service offering that runs on any public, private, or hybrid cloud or Kubernetes infrastructure, and Yugabyte Cloud, a fully managed database service currently available on AWS and Google Cloud, have seen broad adoption over the past 12 months. Yugabyte also recently announced YugabyteDB 2.4, a major update that includes hardened enterprise-grade security features, enhanced multi-region deployment capabilities, and significant performance improvements.

The funding round was led by Lightspeed Venture Partners with additional participation by Greenspring Associates, Dell Technologies Capital, Wipro Ventures and 8VC. 

“Today’s business environment demands flexibility and elasticity from database solutions, and distributed SQL is now critical for any organization where developer productivity and application uptime are top priorities. YugabyteDB makes something as fundamental and feature rich as PostgreSQL truly cloud native, resilient, elastic, and distributed,” said Kannan Muthukkaruppan, Co-Founder and President, Yugabyte. “With companies of all kinds accelerating their digital transformation initiatives, technologies that help them accelerate, like YugabyteDB, are in high demand. This new round of funding will position Yugabyte to meet this increased enterprise demand and power our global expansion into key markets.”

Sunday, March 14, 2021

HyperLight claims breakthrough with its lithium niobate optical modulator

HyperLight, a start-up based in Cambridge, MA developing thin-film lithium niobate (LN) photonic integrated circuits (PICs), announced breakthrough voltage-bandwidth performances in integrated electro-optic modulators. 

HyperLight says its electro-optic PIC could reduce energy consumption by orders of magnitude in next-generation optical networking.

Current electro-optic modulators require extremely high radio-frequency (RF) driving voltages (> 5 V) as the analog bandwidth of Ethernet ports approaches 100 GHz for future terabit-per-second transceivers. By comparison, a typical CMOS RF modulator driver delivers less than 0.5 V at such frequencies. Compound semiconductor modulator drivers can deliver voltages > 1 V, at significantly increased cost and energy consumption, but still fall short of the optimum driving voltage. The limited voltage-bandwidth performance of electro-optic modulators poses a serious challenge to meeting the tight power consumption requirements set by network builders.

HyperLight's integrated electro-optic modulator is capable of a 3-dB bandwidth > 100 GHz at low drive voltage, a previously unattainable voltage-bandwidth combination. The results are described in a manuscript entitled “Breaking voltage-bandwidth limits in integrated lithium niobate modulators using micro-structured electrodes,” published in Optica on March 8th, 2021.

“We believe the significantly improved electro-optic modulation performance in our integrated LN platform will lead to a paradigm shift for both analog and digital ultra-high speed RF links,” said Mian Zhang, author, CEO of HyperLight. “For example, using sub-volt modulators for digital applications, high speed electronic drivers may have largely reduced gain-bandwidth requirements or possibly be completely bypassed with modulators directly driven from electronic processors. This would save building and running costs for network operators. For RF links, the low-voltage, high bandwidth and excellent optical power handling ability could enable sensitive and low noise millimeter wave (mmWave) photonic links in ultrahigh-frequency bands.”

Sunday, November 1, 2020

Intel to acquire SigOpt for AI model optimization software

Intel agreed to acquire SigOpt, a start-up based in San Francisco focused on optimizing artificial intelligence (AI) software models at scale. Financial terms were not disclosed.

SigOpt provides a standardized, scalable, enterprise-grade optimization platform and API. The company was founded by Patrick Hayes and Scott Clark, who is credited with building the open source Metric Optimization Engine at Yelp.

Intel plans to use SigOpt’s software technologies across Intel’s AI hardware products to help accelerate, amplify and scale Intel’s AI software solution offerings to developers. 

Monday, August 17, 2020

Lightmatter is developing a photonic processor

Lightmatter, a start-up based in Boston, unveiled plans for an artificial intelligence (AI) photonic processor.

Lightmatter said its general-purpose AI inference accelerator will use light to compute and transport data. The 3D-stacked chip package contains over a billion FinFET transistors, tens of thousands of photonic arithmetic units, and hundreds of record-setting data converters. Lightmatter’s photonic processor runs standard machine learning frameworks including PyTorch and TensorFlow, enabling state-of-the-art AI algorithms.

“The Department of Energy estimates that by 2030, computing and communications technology will consume more than 8 percent of the world’s power. Transistors, the workhorse of traditional processors, aren’t improving; they’re simply too hot. Building larger and larger datacenters is a dead end path along the road of computational progress,” said Nicholas Harris, PhD, founder and CEO at Lightmatter. “We need a new computing paradigm. Lightmatter’s optical processors are dramatically faster and more energy efficient than traditional processors. We’re simultaneously enabling the growth of computing and reducing its impact on our planet.”

On August 18th, Lightmatter’s VP of Engineering, Carl Ramey, will present the photonic processor architecture at Hot Chips 32.

Thursday, January 23, 2020

CloudKnox raises $12M for identity authorization for cloud

CloudKnox Security, a start-up based in Sunnyvale, California, raised $12 million for its work in identity authorization for hybrid and multi-cloud environments.

CloudKnox recently added new privilege-on-demand, auto remediation and anomaly detection capabilities, integration with AWS IAM Access Analyzer and support for VMware Cloud on AWS. The company was also recently awarded two patents: the first for activity-based access control in heterogeneous environments; and the second for a method and system to detect discrepancy in infrastructure security configurations.

The funding round was led by Sorenson Ventures with participation from early investors, including ClearSky Security, Dell Technologies Capital and Foundation Capital. This brings total funding raised to date to $22.75M.

CloudKnox also announced several key additions to the company’s board and executive team. Stephen Ward, CISO at The Home Depot; Ken Elefant, managing partner at Sorenson Ventures and Suresh Batchu, co-founder and CTO at MobileIron, joined the company’s Board of Directors. The company also appointed John Donnelly as vice president of sales. John has more than 30 years of experience as a sales leader, including roles as VP of sales for MobileIron, Vontu and, most recently, as a sales advisor for ClearSky Security and Wing Venture Capital.

“We’ve seen exceptional growth from customers and prospects looking to address the number one risk in their cloud infrastructure,” said Balaji Parimi, CEO and founder at CloudKnox Security. “This positioned us to pre-emptively secure another round of funding to leverage strong market adoption and accelerate our customer expansion. We’re delighted to have Sorenson Ventures join our current investors, who continue to show their commitment to our success, welcome John to our team, and Stephen and Suresh to our board.”

Wednesday, June 12, 2019

Edgewise raises $11 million for microsegmentation

Edgewise Networks, a start-up based in Burlington, Massachusetts, announced $11 million in funding for its microsegmentation platform based on software identity.

The funding round was led by existing investors .406 Ventures and Accomplice, with additional participation from Pillar.

Edgewise reduces the network attack surface in cloud and data center environments. Edgewise said it automatically protects application workloads in seconds, adding provable security to hybrid cloud environments. Machine learning and advanced analytics enable the rapid discovery of application communication topology and attack pathways. This real-time visibility allows security teams to microsegment environments with a single click. Policies are enforced no matter where the application resides — on premises, in the cloud, or in a container — and remain in effect even as the underlying network changes.
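To make the identity-based model concrete, here is a minimal, hypothetical sketch (not Edgewise's actual API or policy format) of a connection check keyed on software identity rather than network address:

```python
# Illustrative identity-based microsegmentation check: a connection is
# allowed only if the (source, destination) software-identity pair is
# whitelisted, regardless of IP address or where the workload runs.
# Identities and policy below are invented for illustration.
ALLOWED = {
    ("web-frontend", "orders-api"),
    ("orders-api", "orders-db"),
}

def may_connect(src_identity, dst_identity):
    """Return True only for explicitly permitted identity pairs."""
    return (src_identity, dst_identity) in ALLOWED

print(may_connect("web-frontend", "orders-api"))  # → True
print(may_connect("web-frontend", "orders-db"))   # → False (lateral move blocked)
```

Because policy is attached to identities rather than addresses, it keeps working when the workload moves between on-premises hosts, cloud VMs, or containers.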

“Our innovative, patented approach makes microsegmentation — one of the hardest problems in cybersecurity — incredibly simple to implement,” said Peter Smith, co-founder and chief executive officer at Edgewise Networks. “With Edgewise, companies can operate their applications in hybrid cloud and container environments with peace of mind, knowing that they are protected. This strong support from our investors will enable us to expand to meet the demand for automated microsegmentation.”

Thursday, January 17, 2019

Scalyr appoints Christine Heckart as CEO

Scalyr, a start-up offering a "blazing-fast" log management solution, appointed Christine Heckart as its new CEO. Steve Newman, founding CEO of Scalyr, will assume the role of chairman and founder to focus on advancing the company’s product vision and technology.

Scalyr is a log management platform designed for modern development and deployment. Unlike traditional log management tools built for the legacy data center, Scalyr is designed for next-generation approaches to software development, including microservices and containers. The company is based in San Mateo, California.

Scalyr reported more than 100 percent growth in 2018, adding marquee names such as Cisco, Palo Alto Unified School District, Vanderbilt University, and Worldpay to its customer list.

Heckart most recently served as Senior Vice President at Cisco and as Executive Vice President at Brocade. In addition, Heckart has held multiple executive and C-level positions at global technology brands, including NetApp, Microsoft, and Juniper Networks. Heckart serves on the board of directors at Lam Research Corporation and 6sense.

“As digital customer experiences become increasingly immersive, their underlying systems and code have grown more complex, as have the challenges and bugs. The Scalyr platform helps engineers build and troubleshoot software within modern IT and application environments,” Heckart said. “We have an awesome product that developers use daily, an impressively diverse employee base, and a network-effect built into the architecture itself. A query that takes other companies ten minutes takes Scalyr one second, and it will only get faster and more affordable as we grow.”

Thursday, October 25, 2018

Arctic Wolf raises $45 million for Cyber Security Ops Center service

Arctic Wolf Networks, a start-up based in Sunnyvale, California with offices in Ontario, Canada, raised $45 million in series C funding for its security operations center (SOC)-as-a-service.

Arctic Wolf will use the new funding to accelerate company growth and meet the soaring demand for its SOC-as-a-service offering.

The Arctic Wolf service provides a cloud-based security incident and event management (SIEM) application combined with a team of expert security engineers committed to the client's operational requirements.

The new funding was led by Future Fund with participation from new investors Adams Street and Unusual Ventures, as well as existing investors Lightspeed Venture Partners, Redpoint Ventures, Sonae Investment Management and Knollwood Investment Advisory LLC. To date, Arctic Wolf has raised $91.2 million.

“Our growing team of security engineers is redefining the economics of security to protect companies of all sizes,” said Brian NeSmith, CEO and co-founder of Arctic Wolf. “In addition to supporting continued company growth, the funding will accelerate expansion of our service offering, as we continue to scale and expand to meet our customers’ individualized needs. We look forward to continuing our momentum and building out our internal vulnerability assessment and endpoint detection and response capabilities, in particular.”

  • Arctic Wolf is headed by Brian NeSmith, who previously was CEO of Blue Coat Systems. Before that, he was the CEO of Ipsilon Networks (acquired by Nokia). 

Monday, July 30, 2018

Cloudify appoints Ariel Dan as CEO

Cloudify, which specializes in IT operations automation technology, named Ariel Dan as its new CEO, replacing Zeev Bikowsky, who has been serving as Chief Executive Officer for nearly a decade.

Prior to Cloudify, Dan led two companies to M&A and has extensive experience building sustainable cloud and SaaS operations.

While leading Cloudify, Bikowsky was also the driving force behind establishing GigaSpaces.

Cloudify is based in Herzliya, Israel and funded by Intel Capital, Claridge Israel, BRM Group, FTV Capital, and Formula Vision, as well as additional private investors.

Sunday, January 14, 2018

DENSO invests in ActiveScaler for #AI-powered fleet management

DENSO, one of the world’s largest automotive suppliers, has made a significant seed investment in ActiveScaler, a start-up based in Milpitas, California that is developing Managed MaaS (Mobility-as-a-Service) systems powered by artificial intelligence. Financial terms were not disclosed.

ActiveScaler's website says its FleetFactor AI-powered software leverages thousands of data points collected from a variety of sources such as internal vehicle data, in-vehicle computers, sensors, driver behavior, CRM/ERP, finance, dispatch and other systems.

"DENSO’s focus is to develop technologies that advance the future of mobility, and enable connected and automated driving," said Yoshifumi Kato, Senior Executive Director at DENSO Corporation. "These technologies directly influence the development of MaaS systems, which will disrupt the future of urban mobility for people and goods by making transportation solutions more seamless and accessible."

"We want to be the engine behind the future of MaaS – hence the term “Managed MaaS”, which will transform current fleet businesses to provide next generation mobility services," said Abhay Jain, CEO of ActiveScaler. "Traditional fleet management services and systems are quickly becoming obsolete because of issues like high upfront software and hardware costs, poor ecosystem integration, and lack of flexibility, which are limiting the type and quality of services that can be offered."

Tuesday, September 19, 2017

Minio raises $20m for Multi-Cloud Object Storage

Minio, a start-up based in Palo Alto, California, raised $20 million in Series A funding for open source object storage for cloud-native and containerized applications.

Minio has developed an object storage server that enables developers to store unstructured data on any public or private cloud infrastructure, including multi-cloud deployments. The solution lets users build their own Amazon S3-compatible object storage on bare metal, public cloud or existing SAN/NAS storage infrastructure.

Minio reports over 10 million downloads since its general availability in January 2017.

The Series A funding round was jointly led by Dell Technologies Capital, General Catalyst Partners and Nexus Venture Partners, with participation by Intel Capital, AME Cloud and Steve Singh.

Thursday, July 13, 2017

FogHorn Targets Edge Intelligence Software at IIoT

FogHorn Systems, a start-up based in Mountain View, California, released its Lightning ML edge intelligence software for the Industrial Internet of Things (IIoT).

The company said Lightning ML brings the power of machine learning to the edge in three ways:
  • Leverages existing models and algorithms: executes proprietary algorithms and machine learning models on live data streams produced by physical assets and industrial control systems
  • Makes machine learning OT-accessible: offers tools for operations technology (OT) staff to generate machine learning insights
  • Runs in a tiny software footprint: the Lightning ML platform requires less than 256 MB of memory.

Lightning ML supports all x86-based IIoT gateways and OT systems, as well as ARM32 OT control systems (such as PLCs and DCSs). It also supports the newest generation of small-footprint Raspberry Pi-derivative IIoT gateways. The FogHorn Lightning ML software platform can run entirely on premises or connect to any private or public cloud environment.
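As a rough illustration of what a streaming analytics rule at the edge does (a purely hypothetical sketch, not FogHorn's CEP engine or API), consider a rule that flags a machine whenever the rolling average of its last few sensor readings crosses a threshold:

```python
from collections import deque

def detect(stream, window=3, limit=90.0):
    """Toy complex-event-processing rule: return the indices of readings
    at which the rolling mean over the last `window` samples exceeds
    `limit`. Window size and threshold are invented for illustration."""
    recent = deque(maxlen=window)   # fixed-size buffer of recent readings
    alerts = []
    for i, reading in enumerate(stream):
        recent.append(reading)
        if len(recent) == window and sum(recent) / window > limit:
            alerts.append(i)
    return alerts

# Simulated temperature readings from one machine:
readings = [85, 88, 87, 93, 95, 96, 89]
print(detect(readings))  # → [4, 5, 6]
```

Running such rules on the gateway itself, rather than shipping every raw reading to the cloud, is the essence of the small-footprint edge analytics FogHorn describes.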

"In the initial launch of FogHorn’s Lightning platform, we successfully miniaturized the massive computing capabilities previously available only in the cloud. This allows customers to run powerful big data analytics directly on operations technology (OT) and IIoT devices right at the edge through our complex event processing (CEP) analytics engine. With the introduction of Lightning ML, we now offer customers the game changing combination of real-time streaming analytics and advanced machine learning capabilities powered by our high-performance CEP engine,” said FogHorn CEO David C. King.

  • In May 2017, FogHorn Systems announced that it had raised additional Series A funding from Dell Technologies Capital and Saudi Aramco Energy Ventures (SAEV). The extended funding brings FogHorn’s total Series A round to $15 million, excluding the conversion of $2.5 million in seed funding. Dell Technologies Capital added to its initial Series A investment. Saudi Aramco Energy Ventures is a new investor in the company.

Tuesday, May 9, 2017

Flex Logix, developer of embedded FPGA technology, raises $5m

Flex Logix Technologies, a supplier of embedded FPGA IP and software headquartered in Mountain View, California, announced that it has secured $5 million in Series B equity financing in a round led by existing investors Lux Capital and Eclipse Ventures, with participation from the Tate Family Trust.

The company was founded in March 2014 to develop solutions for reconfigurable RTL in chip and system designs employing embedded FPGA IP cores and software. Its EFLX technology platform is designed to significantly reduce design and manufacturing risks, accelerate technology development and provide greater flexibility for customers' hardware. In October 2015, Flex Logix announced it had raised $7.4 million in a financing round led by dedicated hardware fund Eclipse Ventures (formerly the Formation 8 hardware fund), with participation from founding investors Lux Capital and the Tate Family Trust.

Flex Logix stated that new funding will be used to expand its sales, applications and engineering teams to meet the growing customer demand for its embedded FPGA platform in applications including networking, government, data centres and deep learning.

Targeting chips in multiple markets, the Flex Logix EFLX platform can be used with networking chips with reconfigurable protocols, data centre chips with reconfigurable accelerators, deep learning chips with real-time upgradeable algorithms, base stations chips with customisable features and MCU/IoT chips with flexible I/O and accelerators. The company noted that EFLX is currently available for popular process nodes and is being ported to further process nodes based on customer demand.

The Flex Logix technology offers high-density blocks of programmable RTL in any size together with the key features customers require. The solution allows designers to customise a single chip to address multiple markets and/or upgrade the chip while in the system to meet changing standards such as networking protocols. It also allows customers to update chips with new deep learning algorithms and implement their own versions of protocols in data centres.

Regarding the new funding, Peter Hebert, managing partner at Lux Capital, said, "I believe that Flex Logix's embedded FPGA has the potential to be as pervasive as ARM's embedded processors… the company's software and silicon are proven and in use at multiple customers, paving the way to become one of the most widely-used chip building blocks across many markets and for a range of applications".

Pierre Lamond, partner at Eclipse Ventures, commented, "The Flex Logix platform is the… most scalable and flexible embedded FPGA solution on the market, delivering competitive advantages in time to market, engineering efficiency, minimum metal layers and high density… the patented technology combined with an experienced management team led by Geoff Tate, founding CEO of Rambus, position the company for rapid growth".

Wednesday, January 25, 2017

Apstra Demos Wedge Switch Running its OS

Apstra, a start-up based in Menlo Park, California, released its Apstra Operating System (AOS) 1.1.1 and an integration with Wedge 100, Facebook’s second generation top-of-rack network switch.

Apstra said its distributed operating system for the data center network disaggregates the operational plane from the underlying device operating systems and hardware. Sitting above both open and traditional vendor hardware, AOS provides the abstraction required to automatically translate a data center network architect’s intent into a closed loop, continuously validated infrastructure. The intent, network configurations, and telemetry are stored in a distributed, system-wide state repository.

“At Apstra we believe in giving network engineers choice and control in operating their network and we are excited to be part of the network disaggregation movement,” said Mansour Karam, CEO and Founder of Apstra, Inc. “We are delighted to have been invited to demonstrate AOS integrated with Wedge 100 today. AOS provides network engineers with advanced operational control and situational awareness of network services, and enables them to design, deploy, and operate a truly Self-Operating Network™ (SON) without vendor lock-in.”

Facebook Deploys Backpack -- its 2nd Gen Data Center Switch

Facebook unveiled Backpack, its second-generation modular switch platform, developed in-house for 100G data center infrastructure. It builds on Facebook's recently announced Wedge 100 switch.

Backpack is designed with a clear separation of the data, control, and management planes. It uses simple building blocks called switch elements. The Backpack chassis is equivalent to a set of 12 Wedge 100 switches connected together. The orthogonal direct chassis architecture opens up more air channel space for a better thermal performance for managing the heat from 100G ASICs and optics.  Facebook will use the BGP routing protocol for the distribution of routes between the different line cards in the chassis.

The design has already entered production and deployment in Facebook data centers.  The company plans to submit the design to the Open Compute Project.

Thursday, April 28, 2016

Innovium Raises $15M, Settles Broadcom Litigation

Innovium, a pre-launch start-up targeting infrastructure solutions, announced the settlement of all litigation with Broadcom.  Financial terms were not disclosed.

Innovium also announced $15 million in Series A funding from Capricorn, Walden Riverwood and other venture capital investors.

The company, which is based in San Jose, California, was founded by Rajiv Khemani (previously Cavium), Puneet Agarwal (previously Broadcom), and Mohammad Issa (previously Broadcom).

Thursday, January 14, 2016

Blueprint: What’s in Store for the Database in 2016?

by Roger Levy, VP of Product at MariaDB

In 2015, CIOs focused on DevOps and similar technologies such as containers as a way to improve time to value. During 2016, greater attention will be focused on data analytics and data management as a way to improve the timeliness and quality of business decisions. How best to store and manage data is on the minds of most CIOs as they kick off the New Year. It’s exciting to see that databases, which underlie every app and enterprise on the planet, are now back in the spotlight. Here’s what organizations anticipate for next year.

Securing your data at multiple layers
2015 saw every type of organization, from global retailers to the Catholic Church, experience financial losses and reputation damage from data breaches. Security has long been a concern of CIOs, but the growing frequency of high-profile attacks and new regulations make data protection a critical 2016 priority for businesses, governments, and non-profit organizations.

Organizations can no longer rely on just a firewall to protect their data. Amidst a myriad of threats, a robust security regimen requires multiple levels of protection including network access, firewalls, disk-level encryption, identity management, anti-phishing education, and so forth. Ultimately, hackers want access to the contents of an enterprise's database, so securing the database itself must be a core component of every organization’s IT strategy.

Prudent software development teams will use database technology with native encryption to protect data as it resides in the database, and SSL/TLS encryption to protect data as it moves between applications. They will also control access to the database with stronger password validation and a range of authorization levels based on a user’s role. Of course, organizations can’t kick back and rely on software alone; they still have to hold themselves accountable via regular audits and testing.
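Two of these points can be sketched with nothing but Python's standard library. This is a hedged illustration only: real database drivers expose TLS settings under their own parameter names, and the password and iteration count below are arbitrary examples.

```python
import hashlib
import os
import ssl

# Data in motion: a TLS client context such as a database driver might use.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocol versions
# The defaults already enforce hostname checking and CERT_REQUIRED,
# so an unverified server certificate would be rejected.

# Credential storage: keep a salted, slow hash, never the plaintext password.
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

salt, digest = hash_password("s3cret")
# Verification recomputes the hash with the stored salt and compares digests.
print(hash_password("s3cret", salt)[1] == digest)  # → True
```

Disk-level encryption, auditing and role-based grants sit outside what a short snippet can show, but the same layered principle applies to each.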

Migrating to the cloud 
With the recent revenue announcements by public cloud providers such as Amazon AWS and Microsoft Azure, it is clear that adoption of public cloud services is becoming mainstream. But public cloud may never fully replace on-premises data storage. While the cloud offers greater scalability and flexibility, better business continuity and disaster recovery, and capital cost savings, for economic and security reasons companies continue to optimize a mix of public cloud, private cloud and traditional on-premises data management solutions.

Managing data across multiple environments also presents challenges. Navigating the myriad of data privacy regulations across the globe, integrating applications and data across private and public infrastructures, and managing latency issues are a few of the challenges organizations face in their migration to the cloud. Enter the hybrid cloud, in which IT organizations combine the benefits of today’s options: traditional data storage, private cloud and public cloud.

In 2016, we’ll likely see hybrid clouds experiencing a surge in popularity as an alternative to either a public or a private cloud solution. Greater focus will be applied to developing solutions that improve migration to hybrid cloud infrastructures for overall security and efficiency, as well as instances such as cloud bursting when bandwidth demand spikes or disaster recovery by replicating databases in the cloud as backups.

Multi-model databases 
The variety, velocity and volume of data are exploding. Every minute we send over 200 million emails and over 300 thousand tweets. By 2013, 90% of the world's data had been created in the preceding two years. But size is not everything: not only have the volume and velocity of data increased, organizations are also collecting, storing and processing an increasing variety of data formats.

While different data models have different needs in terms of insert and read rates, query rates and data set size, companies are growing tired of the complexity of juggling multiple databases. Next year will kick off an increased trend toward data platforms which offer “polyglot persistence” – the ability to handle multiple data models within a single database. The demand for multi-model databases is exploding as Structured Query Language (SQL) relational data from existing applications and connected devices must be processed alongside JavaScript Object Notation (JSON) documents, unstructured data, graph data, geospatial data and other forms of data generated in social media, customer interactions, and machine-to-machine communications.
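A toy sketch of the polyglot idea, using SQLite purely for illustration (the schema and data are hypothetical, not any vendor's product): relational rows and JSON documents live in one store and are queried together:

```python
import json
import sqlite3

# One store for both models: structured columns plus a JSON document column.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user TEXT, payload TEXT)")
db.execute(
    "INSERT INTO events (user, payload) VALUES (?, ?)",
    ("alice", json.dumps({"action": "click", "geo": {"lat": 37.4, "lon": -122.1}})),
)

# Relational query over the structured column...
row = db.execute("SELECT payload FROM events WHERE user = ?", ("alice",)).fetchone()
# ...followed by document-style access into the JSON payload.
doc = json.loads(row[0])
print(doc["geo"]["lat"])  # → 37.4
```

A true multi-model database pushes the document (or graph, or geospatial) access down into the query engine itself rather than into application code, but the appeal is the same: one system, several data models.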

Growth in applying machine learning
With the rapid growth in the type and volume of data being created and collected comes the opportunity for enterprises to mine that data for valuable information and insights into their business and their customers. As IT recruiters know well, more and more companies are employing specialist “data scientists” to introduce and implement machine learning technologies. But the number of experts in this field simply isn’t growing fast enough, and this rarity makes hiring a data scientist cost-prohibitive for most companies. In fact, the US alone faces a shortage of 140,000 to 190,000 people with analytical expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the analysis of big data, according to McKinsey & Company. In response, organizations are turning to machine learning tools that enable all of their employees to derive insights without needing to rely on specialists. Just as crucial as collecting data is the need to understand what lies in a company’s database and how it can be turned into valuable insights.

Recently the major public cloud vendors have introduced a variety of machine learning services, including offerings such as Azure ML Studio from Microsoft, the Google Prediction API, Amazon Machine Learning and IBM’s Watson Analytics. We can expect that 2016 will be a year when additional solutions appear and mature, and are recognized as a critical, possibly required, piece of enterprise IT operations. The growth of machine learning will place new demands on the databases which store and manage the data “fuel” for such applications. In 2016, look for a focus on database capabilities that facilitate real-time analytical processing of large data sets.
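At its core, much of this tooling automates fitting models to data. As a self-contained toy example (far simpler than any of the services above), here is ordinary least squares fitting a line y = a·x + b to four points:

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single feature: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Example data lying exactly on y = 2x + 1:
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # → 2.0 1.0
```

Cloud ML services wrap far richer models, but the database's role is the same in every case: feeding clean, queryable data into the fitting step, which is why analytical query performance matters so much here.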

What can IT personnel do?
With the recent rise of the Chief Data Officer, the widespread adoption of new database technologies, and the acute need for better IT security, the database is back in the spotlight. A CIO’s best bet for staying on top of these new trends in 2016 will be the same strategy as in years past: laying down clear policies for who can access data and what it gets used for, while staying on top of new technologies and new threats targeting the integrity of a company’s data.

About the Author

Roger Levy brings extensive international, engineering and business leadership experience to his role as VP, Products, at MariaDB. He has a proven track record of growing businesses, transforming organizations and fostering innovation in the areas of data networking, security, enterprise software, cloud computing and mobile communications solutions, resulting in on-time, high-quality and cost-effective products and services. Previous roles include VP and GM of HP Public Cloud at Hewlett-Packard, SVP of Products at Engine Yard, as well as founding R.P. Levy Consulting LLC.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Wednesday, July 16, 2014

Nitero Unveils 60 GHz 802.11ad Silicon

Nitero, a start-up based in Austin, Texas with a design studio in Melbourne, Australia, unveiled its 60 GHz chip, implemented in CMOS and based on the IEEE 802.11ad standard. The device uses Samsung's advanced 28nm RF process based on 28nm HKMG LPP technology.

Nitero said its design is up to 10x more power efficient than 802.11ad solutions designed for the PC, while bringing cost and form factor in line with existing 802.11ac Wi-Fi solutions. Nitero's NT4600 supports low-latency 4K display and peer-to-peer wireless connectivity at USB 3.0 data rates. In addition, while single-antenna 802.11ad solutions sacrifice in-room performance to get to low power, the NT4600 supports transmit and receive beamforming to provide full coverage throughout the office, living room or conference room.

“802.11ad, the next generation of Wi-Fi, is the missing link to allow for the long-awaited convergence of PC, gaming and entertainment platforms onto a single mobile device. 802.11ad solutions built for the PC and slimmed down for mobile simply can’t meet the power, performance, and form-factor requirements of Tier 1 mobile customers,” said Pat Kelly, CEO of Nitero. “At Nitero, we targeted the smartphone from day one. The result is 60G.”

Key features of Nitero’s 60G:

  • Samsung 28nm RF CMOS process technology
  • Power reduced by up to 10x over PC 802.11ad solutions
  • Industry-leading output power, noise figure and sensitivity to maximize performance
  • Transmit and receive beamforming to support non-line-of-sight conditions
  • Low-latency 4K wireless display support in living-room, desktop, and conference-room environments
  • Peer-to-peer wireless connectivity using 16-QAM modulation at up to 4.6 Gbps
  • PCI Express host interface to support the latest mobile applications processors while minimizing software overhead
  • Fully-compliant to the ratified IEEE 802.11ad standard
  • Android driver support
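The 4.6 Gbps figure in the list above is consistent with the 802.11ad single-carrier PHY at its top 16-QAM rate (MCS12); a back-of-the-envelope check, assuming the standard's 1.76 GHz chip rate, 512-chip blocks with a 64-chip guard interval, 4 bits per 16-QAM symbol and rate-3/4 LDPC coding:

```python
# Rough 802.11ad single-carrier PHY rate for 16-QAM (assumed MCS12 parameters).
chip_rate = 1.76e9            # single-carrier chip rate, chips/s
data_chips_per_block = 448    # 512-chip block minus 64-chip guard interval
block_size = 512
bits_per_symbol = 4           # 16-QAM
code_rate = 3 / 4             # LDPC code rate

phy_rate = chip_rate * (data_chips_per_block / block_size) * bits_per_symbol * code_rate
print(f"{phy_rate / 1e9:.2f} Gbps")  # 4.62 Gbps, matching the quoted "up to 4.6 Gbps"
```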

Nitero is currently demonstrating the NT4600 to select partners and customers and will start production shipments in 2015.

Nitero is funded by leading venture capitalists with deep semiconductor roots: Austin Ventures, Southern Cross Venture Partners and Trailblazer Capital.

Thursday, June 6, 2013

Panzura Raises $25 Million for Cloud Storage

Panzura, a start-up based in San Jose, California, raised $25 million in Series D funding for its cloud storage solution.

Panzura offers a cloud-integrated storage system for enterprises with network attached storage (NAS) functionality, native cloud support, a globally distributed file system, built-in FIPS 140-2 certified security and data protection, as well as high speed data transfer rates to and from the cloud.
The company said its customer base grew by 700 percent in 2012 and claimed the largest number of petabytes under management.

The latest funding round was led by Meritech Capital Partners with participation from its existing investors Matrix Partners, Khosla Ventures, Opus Capital and Chevron Technology Ventures.

"Panzura is delighted to be adding the expertise of our new partners at Meritech, and particularly late-stage funding guru Paul Madera to our board, and we are encouraged by the ongoing support of our existing longtime partners,” said Randy Chou, CEO and co-founder of Panzura. “Panzura provides the only viable cloud-based solution for the enterprise, directly enabling high-value business objectives – such as improved cost, scale, management, availability and global access – while fitting seamlessly into existing IT infrastructures."

Wednesday, April 3, 2013

Interview: Nuage on Automating Data Centers for Cloud Services and MPLS VPNs

A redacted interview between Jim Carroll, Editor of Converge! Network Digest and Manish Gulyani, VP of Product Marketing, Alcatel-Lucent / Nuage Networks.

Converge! Digest:  How do you describe the Nuage Networks' solution?

Manish Gulyani: The Nuage Networks Virtualized Services platform is a software-only solution for fully automating and virtualizing data center networks. That’s our main value proposition.  As you know, today’s data center networks are very fragile, they use old technology, and they are very cumbersome to operate.  When we looked at cloud services, we found that storage and compute resources had been virtualized quite nicely, but the network really wasn’t there.  We saw a great opportunity to apply the lessons that we have learned in wide area networking along with SDN.  The idea is that if you want to sell cloud services, you need to support thousands of tenants.  And you want each tenant to think that they own their piece of the pie.  It has to feel like the experience of a private network, with full control, full security, full performance of a private network but with the cost advantages of a cloud solution, which is a shared infrastructure.  That’s what we’re bringing to the table with the Nuage solution.

Converge! Digest: So is the Nuage solution aimed specifically at those who want to sell cloud services?

Manish Gulyani: It is designed for anybody who runs a data center large enough to need automation. For instance, the University of Pittsburgh Medical Center, one of our trial customers, does not sell cloud services, but it has enough internal users and external tenants who want full control over a particular cloud resource. If you can't give them full control and automation, then the cloud resource is of no use. You have to be able to turn up the cloud service as fast as the user turns up a VM, otherwise the cloud service doesn't work. Whether it is a large enterprise, a web-scale company or a cloud service provider, all can benefit from the Nuage solution.

Converge! Digest: What are the strategic differentiators versus other SDN controllers out there?

Manish Gulyani: Some initial SDN solutions for data centers have come out in the last two years. They took the approach of virtualizing primarily at Layer 2, which was a good first step beyond VLAN architectures. But in our view, this isn't sufficient to go beyond the basic applications. If you are limited to just Layer 2, you cannot get the application design done the right way. For example, a three-tier application needs routing, load balancing and firewalls, and all those elements in a real architecture are very hard to coordinate in current SDN solutions. So the first obstacle Nuage overcomes is this: we give you full Layer 2 to Layer 4 virtualization as a base requirement. Once we've done that, the next issue is how do you make it scale? You can't restrict a cloud service to one data center.

If you have ambitions of being a cloud services provider and you run multiple data centers, you want the power to move server workloads freely between data centers. If you cannot connect the data centers in a seamless fashion, then you haven't satisfied the demand. So our solution scales to multiple data centers and provides seamless connectivity. The third obstacle we overcome is this: now that the cloud services are running, how can people on a corporate VPN get access to these resources? How can they securely connect to a resource that has just been turned up in a data center?

We provide full, seamless connectivity to a VPN service. We extend from Layer 2 to Layer 4, we make it seamless across data centers, and then we extend it across the wide area network by seamlessly integrating with MPLS VPNs. So those are our virtual connectivity layers.

We also automate it and make it easy to use. A lot of our energy has gone into the policy layer, which lets the user define a service without knowing any network-speak. It's all IT-speak and no network-speak. It might seem strange for a networking company to say that its customers do not need to learn about VLANs, subnets or IP addresses, just zones, domains and an application connectivity language. When a workload shifts from one data center to another, all of the IP addresses and subnetting have to change, but real users can't figure this out because it is too hard to do. If this function can just happen in the background, they're good with that. The final thing we said is that it has to be totally touchless.

The reason people are excited about the cloud is that it is quick. In fact, IT departments worry that users sign up for public cloud services because the internal IT team can't deliver quickly enough. If you need 10 new servers or VMs of capacity, why wait 3-4 weeks for your IT department to purchase and install the equipment when you can log onto Amazon Web Services today and activate that capacity immediately with a credit card? The Nuage policy-driven architecture basically says "turn up the VM, look up the policy, set up the connection"; nobody actually touches the network. That's our innovation.
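The "turn up the VM, look up the policy, set up the connection" flow can be caricatured in a few lines; a minimal sketch with hypothetical zone and tier names (the real Nuage policy model is far richer than a lookup table):

```python
# Hypothetical policy table: IT-speak (zones, tiers) on the left,
# network plumbing derived automatically on the right.
POLICIES = {
    ("finance", "web-tier"): {"domain": "finance", "acl": "web-to-app-only"},
    ("finance", "app-tier"): {"domain": "finance", "acl": "app-to-db-only"},
}

def vm_turned_up(zone, tier):
    """Called when a hypervisor reports a new VM: look up the policy
    and return the connection to set up -- no human touches the network."""
    policy = POLICIES.get((zone, tier))
    if policy is None:
        raise LookupError(f"no policy defined for {zone}/{tier}")
    # A real controller would now program the virtual switch (e.g. via
    # OpenFlow); here we just return the derived connection parameters.
    return {"tenant_domain": policy["domain"], "acl": policy["acl"]}

print(vm_turned_up("finance", "web-tier"))
```

The point of the design is that the event (a VM appearing) triggers the lookup, not a human; the network configuration is a pure function of the declared policy.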

Converge! Digest:  Since it is a software suite, what type of hardware do you run on?

Manish Gulyani: Nuage runs on virtual machines, on general-purpose compute. Our Services Directory is a virtual machine on any compute platform. Our Services Controller runs on a VM. And our virtual routing and switching implementation, based on Open vSwitch, is essentially an augmentation of what runs today on a hypervisor. You can't go into a cloud world and propose new hardware, because it is a virtualized environment. We have no constraints on the type of compute platform. The whole idea is to apply web-scale technologies. We also offer horizontal scaling, where many instances run in parallel and can be federated.

Converge! Digest:  Alcatel-Lucent is especially known for IP MPLS, and yet Nuage is largely a data center play.  What technologies does Nuage inherit from Alcatel-Lucent that give it an edge over other SDN start-ups?

Manish Gulyani: At Alcatel-Lucent, we learned a lot about building very large networks with IP MPLS. That is a baseline technology deployed globally to offer multi-tenancy with VPNs on shared WAN infrastructure. Why not use similar techniques inside the data center to provide the massive scale and virtualization needed for cloud services? We took our Service Router operating system, which is the software running on all our IP platforms, extracted the elements we needed and virtualized them so they run in virtual machines instead of dedicated hardware. This gives us the techniques and protocols for providing virtualization. Then we applied more SDN capabilities, such as a simplified forwarding plane controlled by OpenFlow, which lives in the server and enables us to quickly configure the forwarding tables. Because of the way we use IP protocols in wide area networks, we can support federation of our controllers. That's how we link data centers together: they talk standard IP protocols, namely BGP, to create the topology of the service, and in the same way they extend to MPLS VPNs. As I said, the key requirement for enterprises is to connect to data center cloud services using the MPLS VPNs they are familiar with today. This same SDN controller can easily talk to the WAN edge router running MPLS VPNs. We seamlessly stitch the data center virtualization all the way to the MPLS VPN in the wide area network and provide end-to-end connectivity.

Converge! Digest:  Two of the four trial customers for Nuage announced so far are Service Providers (SFR and TELUS), presumably Alcatel-Lucent MPLS customers as well, and of course many operators are trying to get into cloud services.  So, is that a design approach of Nuage?  Build off of the MPLS deployments of Alcatel-Lucent?

Manish Gulyani: It doesn't have to be. At Nuage, we don't need Alcatel-Lucent to be the incumbent supplier to sell this solution. But of course it helps if customers already know us and trust us in running highly scalable networks, so when we talk about the scalability of data centers, we have a lot of credibility built in. Both SFR and TELUS have the ambition to offer cloud services. I think they recognize that they must move to virtualization in the data center network and that the connectivity must be extended all the way to the enterprise. Nuage can deliver a solution unlike anything from anybody else today. Existing SDN approaches only deliver virtualization in some subset of the data center; they can't cross the boundary. Carriers want to have multiple cloud data centers, but they cannot easily connect those resources to MPLS VPNs today. We give them that solution.

Converge! Digest:  In cloud services, it’s becoming clear that a few players are running away with the market.  You might say Amazon Web Services, followed by Microsoft Azure, Rackspace, Equinix and maybe soon Google, are capturing all the momentum.  One thing these guys have in common is a desire to be carrier neutral, so they are not tied to a particular MPLS service or footprint. Will Nuage appeal to these cloud guys too?

Manish Gulyani: We do. In fact, we are talking to some of these guys. As I said, Nuage is not designed only for telecom operators. It is designed for people who want to sell cloud services and who run very large data centers. Carriers with multiple data centers, like Equinix, will need the automation. Until you virtualize and automate the data center, forget about selling cloud services. Step 1 is creating the automation inside the data center. Connecting to MPLS VPNs is step 2. Amazon was among the first, but it had to develop all of this itself; there was no solution on the market, so Amazon built that step 1 automation on its own. We now know that Amazon found it quite cumbersome to get secure connectivity between clouds. It is also experiencing how hard it is to connect a corporate VPN into the Amazon cloud; it can be tedious. If others are going to offer services like Amazon's, and they don't have the size and wherewithal to figure it out themselves, then Nuage will get them there.

Converge! Digest:  On this question of data center interconnect (DCI), Alcatel-Lucent also has expertise at the optical transport layer, especially with your photonic switch. Will Nuage extend this SDN vision to the optical transport layer?

Manish Gulyani: We sell a lot of data center interconnect at both the optical layer and the MPLS layer, such as DWDM hitting the data center and also MPLS in an edge router. We sell a lot of 100G on our optical transport systems because that really is the capacity needed for DCI. So that's the physical connectivity. The logical connectivity is what you need to move a virtual machine from one data center to another. Even though secure, physical connectivity exists between these data centers, the logical connectivity just is not there today. Nuage gives you that overlay on top of the physical infrastructure to deliver a per-tenant slice with the policy you want.

Converge! Digest:  How big is Nuage as a company in terms of number of employees?

Manish Gulyani:  We haven’t talked publicly about the size of the company or head count.

Converge! Digest:  About this term “spin-in” that is being used to describe Nuage… what does it mean to call Nuage a spin-in of Alcatel-Lucent?  How is the company organized?

Manish Gulyani: Spin-in means that we are an internal start-up inside of Alcatel-Lucent. There is a very good reason Alcatel-Lucent structured this as an internal start-up rather than an external one. Nuage leverages so much existing Alcatel-Lucent intellectual property that there was no way the company could let it outside for others to have. We would essentially have had to put out our Service Router operating system, let others value and control that intellectual property, and associate equity investments with it. This would have been too complicated. Others have tried to spin out a new start-up with third-party investors, only to find that they must acquire it back because they did not want their intellectual property to fall into the hands of others. Still, Nuage has full freedom to develop its solution and the right atmosphere to pull in the right talent. We need a good mix of networking people and IT people, and we've been able to bring in people who built scaled-out Web 2.0 IT solutions.

Converge! Digest: So Nuage is not a separate legal entity that can offer stock options to attract talent?

Manish Gulyani: No, Nuage is a fully funded internal start-up that is not a separate legal entity.

The start-up identity, separate from Alcatel-Lucent, also enables us to sell into the new cloud market, which is a different space from what Alcatel-Lucent has traditionally pursued. So we can go after a different market and attract new talent while still leveraging the existing intellectual property that is essential to getting a good solution to market. This structure gives us freedom in multiple dimensions.