
Tuesday, October 19, 2021

Prosimo enhances App Transit for multi-cloud environments

Prosimo, a start-up that offers an Application eXperience Infrastructure (AXI) platform for secure application delivery in multi-cloud environments, released new advanced App Transit features that build on the platform's four core capabilities: Transit, Application Performance, Secure Access, and Observability.

App Transit’s latest features include:

  • Fastlane — dynamically optimizes performance for latency-sensitive and business-critical applications and lets teams deploy new Edge points of presence (PoPs) and apply optimization techniques for specific applications in minutes. 
  • Autonomous Transit — dynamically adapts to performance issues seen in app-to-app networks in your infrastructure and recommends infrastructure expansion, performance, or latency improvements, all driven through data.
  • Dynamic Compliance — automatically inserts additional user measures based on dynamic & behavioral risk profiles to ensure industry-specific, local, and international compliance requirements are met.
  • Seamless connectivity for cloud-native services — including AWS S3, RedShift, Azure blob, Google Big Query, and more, providing scalable, secure, and repeatable connectivity between workloads and cloud services. 

“The AXI Platform’s initial focus was to deliver the outcomes that enterprises care about by combining networking, performance, security, observability, and cost-management into a single integrated infrastructure stack. The latest release takes this strategy much further to include Fastlane and Autonomous decision-making, all driven by high-quality data. This enables our customers to accelerate their path towards fully achieving Autonomous Multi-Cloud Networking,” stated Ramesh Prabagaran, co-founder and CEO at Prosimo.

https://prosimo.io/

Prosimo enters partnership with Google Cloud

Prosimo, which offers an Application eXperience Infrastructure (AXI) platform for secure application delivery in multi-cloud environments, announced a partnership with Google Cloud. Specifically, Prosimo will deliver edge networking with a built-in Zero Trust security and performance stack on Google Cloud with Anthos, and will leverage Google Cloud capabilities like artificial intelligence (AI) and machine learning (ML), as well as Google's and its...

Video: The Importance of Application Experience

It’s all about the application experience. In this video, Mehul Patel, Head of Marketing at Prosimo, discusses the important factors in delivering application experience with today’s infrastructure, including application security, speed, and the overall cost.

https://youtu.be/H8cuTyjI6KQ

Prosimo targets secure app delivery with multi-cloud networking

Prosimo, a start-up based in Santa Clara, California, emerged from stealth to unveil its Application eXperience Infrastructure (AXI) platform for secure application delivery in multi-cloud environments. The platform is positioned for multi-cloud networking, Zero Trust with Identity Aware Proxy, app micro-segmentation, access to lift-and-shift VMware on AWS, Azure or GCP, and app-infrastructure modernization for Kubernetes and service mesh apps.

Who is Prosimo? CTO presentation

https://youtu.be/58YMebmBJwM

Prosimo, a start-up based in Santa Clara, California, has launched its Application eXperience Infrastructure (AXI) platform for ensuring secure application experiences across multi-cloud environments. In this 40-minute presentation, Nehal Bhau, Co-founder & CTO, Prosimo, shares his insights on the SD-WAN market, Prosimo's vertically integrated stack, and its approach to leveraging the global infrastructure of the top...

Cato Networks raises $200 million for its SASE

Cato Networks, a start-up based in Tel Aviv, Israel, raised $200 million in new funding at a market valuation of $2.5 billion for its SASE solutions. The Cato SASE Cloud is distributed across more than 65 PoPs worldwide.

The new funding round was led by Lightspeed Venture Partners with the participation of existing investors Greylock, Aspect Ventures / Acrew Capital, Coatue, Singtel Innov8, and Shlomo Kramer. 

“Cato is at the forefront of SASE transformation,” said Shlomo Kramer, CEO and co-founder of Cato Networks. “Large enterprises are deploying Cato as their global network to reap the operational and business benefits of Cato’s proven and mature SASE platform. Cato is rapidly expanding its service capabilities, global footprint, and sales and marketing teams, while preserving our unique DNA of agility, simplicity, and ease of doing business that is so valued by customers and partners.” 

https://www.catonetworks.com

Monday, October 18, 2021

TelcoDR announces $1 billion Telco Transformation Fund

TelcoDR announced a $1 billion Telco Transformation Fund to support the strategic acquisition, development and cloudification of software products for the telco market. Acquired assets will be overseen and operated by Skyvera, a subsidiary of TelcoDR, led by industry veteran Matt Taylor.

In addition, TelcoDR has announced the acquisition of a number of assets from telco software company Zephyrtel. These assets will be integrated into TelcoDR’s growing portfolio of telco software products and will be operated by Skyvera. Skyvera will provide customers of the acquired companies with ongoing support, product innovation, and a path to leverage public cloud-native software, allowing them to unlock the benefits this technology provides.

“With my Telco Transformation Fund, I’m building a library of software products purpose-built for the public cloud,” added Danielle Royston, TelcoDR founder and CEO, and telecom’s leading public cloud evangelist. “I’m thrilled to welcome the Zephyrtel customers to Skyvera, and I’m eager to continue to help accelerate CSPs’ move to the public cloud.”

https://www.telcodr.com/

Wednesday, October 13, 2021

Hailo raises $136 million for its edge AI processors

Hailo, a start-up based in Tel Aviv, raised $136 million in a Series C round of funding for its edge processor designed for AI workloads.

The Hailo-8 edge AI processor delivers up to 26 tera-operations per second (TOPS) of performance, is capable of processing a full HD (FHD) video stream in real time, and has a typical power consumption of 2.5W, according to the company.
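
For context, those two figures work out to roughly 10 TOPS per watt. A quick back-of-the-envelope check (plain arithmetic on the numbers quoted above, not an additional vendor claim):

```python
# Back-of-the-envelope efficiency check using the figures quoted above
# (illustrative arithmetic only; not a vendor-published efficiency number).
tops = 26.0            # peak tera-operations per second cited for the Hailo-8
typical_power_w = 2.5  # typical power consumption in watts, per the company

tops_per_watt = tops / typical_power_w
print(f"~{tops_per_watt:.1f} TOPS/W")  # prints ~10.4 TOPS/W
```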

The funding round was led by Poalim Equity and Gil Agmon with participation from existing investors including Hailo Chairman Zohar Zisapel, ABB Technology Ventures (ATV), Latitude Ventures and OurCrowd; and new investors Carasso Motors, Comasco, Shlomo Group, Talcar Corporation Ltd. and Automotive Equipment (AEV).

Hailo was established in Israel in 2017 by members of the Israel Defense Forces’ elite technology unit.

https://hailo.ai/

Thursday, September 23, 2021

EFFECT Photonics to open Boston office

EFFECT Photonics, which is a spin-off from the Eindhoven University of Technology in the Netherlands, announced plans to open an office in the Boston area. 

EFFECT says its US-based team will be a customer-focused expansion of its engineering, product testing and verification, customer support, and marketing capacity, complementing its European teams.

"Photonics is a global industry and we have the goal to be a global company. We are a fast growing organisation, offering compelling and innovative integrated photonic solutions. Expanding our geographic presence beyond The Netherlands, the UK and Taiwan, offers us the opportunity to be at the heart of one of our largest markets and to be closer to our customers and partners in the region,” states James Regan, CEO.

https://effectphotonics.com/effect-photonics-facility-in-the-us/


EFFECT Photonics raises $43 million for system-on-chip

EFFECT Photonics, a developer of DWDM components based on its optical System-on-Chip technology, announced $43 million in Series-C funding.

EFFECT Photonics, which is a spin-off from the Eindhoven University of Technology in the Netherlands, has developed a photonic chip in which light signals can be generated, modulated, filtered, and detected. 

The first close of the investment round was co-led by Smile Invest and existing investor Innovation Industries Fund, exactly one year after the company announced the tape-out of its Manta full photonic integration coherent PIC. Smile Invest is joined by existing investors including Innovation Industries Fund, Photon Delta, btov Partners and Brabant Development Agency (BOM), as well as individual investors. The new funding will be used to further expand the current product line of optical transceivers, including for 5G networks, and to scale up production capacity. In addition, R&D activities for the next generation of optical chips, with capacities of more than 400 gigabits per second, will be ramped up.

Boudewijn Docter, one of the founders and President of EFFECT Photonics: “As a company, we have come a long way to make the photonics technology market-ready. We are pleased that Invest-NL is joining our other investors in helping us scale up our production and enabling us to bring additional products to market quicker”.

Ruud Zandvliet, Senior Investment Manager at Invest-NL: “The Netherlands has a unique ecosystem for photonics technologies. EFFECT Photonics is a leading player in this field, capable of developing complex, fully integrated photonic chips. This offers the company the opportunity to be a leading player as a manufacturer of the next generation of transceivers. By joining this investment round, Invest-NL contributes to the availability of financing for upscaling and future R&D investments. This is a good example of the role Invest-NL plays in increasing the strength of scale-ups and is in line with our objective of making the Netherlands more sustainable and innovative.”

https://effectphotonics.com/

Thursday, September 16, 2021

GigaIO raises $14.7 million for its Universal Composable Fabric

GigaIO, a start-up based in San Diego, raised $14.7 million in Series B funding for its data center rack-scale architecture for artificial intelligence (AI) and high-performance computing (HPC) solutions.

GigaIO's Universal Composable Fabric, FabreX, orchestrates workloads by configuring any HPC and AI resource on the fly and integrating networking, storage, memory, and specialized accelerators into a single-system cluster fabric. 
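
To make the composability idea concrete, here is a minimal, hypothetical sketch of carving a workload-specific node out of a shared resource pool. The class names and API below are invented for illustration; they are not GigaIO's actual FabreX interfaces.

```python
# Hypothetical illustration of rack-scale composability: assembling a logical
# node from pooled resources over a fabric. Names and structures are invented
# for illustration; this is not GigaIO's FabreX API.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    gpus: int = 8
    nvme_drives: int = 16

@dataclass
class ComposedNode:
    name: str
    gpus: int
    nvme_drives: int

def compose_node(pool: ResourcePool, name: str, gpus: int, nvme_drives: int) -> ComposedNode:
    """Assemble a workload-specific node from the shared pool, on the fly."""
    if gpus > pool.gpus or nvme_drives > pool.nvme_drives:
        raise ValueError("requested resources exceed what the pool can supply")
    pool.gpus -= gpus
    pool.nvme_drives -= nvme_drives
    return ComposedNode(name, gpus, nvme_drives)

pool = ResourcePool()
training_node = compose_node(pool, "ai-training", gpus=4, nvme_drives=8)
print(training_node)  # ComposedNode(name='ai-training', gpus=4, nvme_drives=8)
print(pool)           # ResourcePool(gpus=4, nvme_drives=8)
```

Releasing a node would return its resources to the pool; the point of the sketch is simply that resources are allocated to workloads dynamically rather than being fixed per server.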

Impact Venture Capital led the funding round, which was oversubscribed by 50% and included participation from Mark IV Capital, Lagomaj Capital, SK Hynix, and Four Palms Ventures.

“We have a tremendous technology and a development team with incredible expertise gained through years of working on some of the highest performing interconnects at companies such as Cray, Sun Microsystems, Cisco, Emulex, and QLogic,” said Alan Benjamin, President and CEO of GigaIO. “Today, by completing this funding round, we are better positioned to get the technology into the hands of more customers and channel partners and to increase traction among commercial and other customers.”

https://gigaio.com



Wednesday, September 15, 2021

Arrcus resets with new CEO, vision, and strategic partners

Arrcus, a hyperscale networking software start-up based in San Jose, California, named Shekar Ayyar as its CEO and chairman of the board. The company also announced a strategic shift toward edge-native, large-scale distributed and disaggregated networking opportunities. In addition, Arrcus announced the infusion of new capital from its strategic partners - Liberty Global, SoftBank Corp and Samsung Next. The new investors join the existing investors, Clear Ventures, General Catalyst Partners, and Lightspeed Venture Partners. 

Ayyar joins Arrcus from VMware where he was the executive vice president and general manager of the Telco and Edge Cloud business. He also led VMware’s strategy and corporate development efforts in enterprise software and communications while overseeing over 60 M&A transactions and investments including the acquisitions of Nicira, AirWatch, and VeloCloud.

Regarding its new focus, Arrcus said 5G and edge computing are accelerating the network infrastructure transformation requiring programmable and massively scalable, multi-cloud optimized, cost-effective connectivity for the next generation of distributed applications including remote work environments, industrial automation, and content distribution. 

The Arrcus Connected Edge (ACE) platform targets the convergence of communications and compute infrastructure at the edge, delivering massive network scale and performance, and enabling service providers and enterprises to create new services and deliver superior end-user experiences at a low total cost of ownership (TCO).


“Shekar is a passionate technology leader with a proven track record of translating strategy into execution and corralling high-performing teams to deliver results. On behalf of Arrcus employees and the board, I am excited to have him join us as our CEO at this important time,” said Keyur Patel, Founder and CTO, Arrcus. “We are confident in Shekar’s leadership to take advantage of the opportunities ahead of us and drive Arrcus forward as we enter the next phase of product innovation and growth.”

“I am delighted to be joining Arrcus as CEO at this inflection point in the industry when customers are seeking edge-native, large-scale distributed and disaggregated networking to support their 5G and edge deployments. The Arrcus Connected Edge (ACE) is uniquely positioned to provide network performance at scale, while delivering efficiently low TCO for enterprises and service providers,” said Shekar Ayyar, CEO, Arrcus.

“Service providers are under relentless pressure to modernize their networks to deliver new, innovative services at the edge while meeting stringent requirements for high bandwidth and low latency,” said Ram Velaga, senior vice president and general manager, Core Switching Group, Broadcom. “At the time when carrier networks are going through this transformation, we are excited to see the Arrcus Connected Edge infrastructure leverage our StrataDNX family of switch SOCs, including Jericho2, Qumran2C, Jericho2C, and Qumran2A merchant silicon platforms to offer a flexible, high-performance, cost-optimized, low-power solution and address the challenging needs of the most demanding telco workloads.”



SambaNova announces a key hire for its AI team

SambaNova Systems, a Silicon Valley start-up developing AI-driven Dataflow-as-a-Service, announced the appointment of Poonacha Kongetira as Vice President of Hardware.

Kongetira hails from Google where he led a team of engineers developing custom silicon accelerators for machine learning, video transcode, and datacenter infrastructure. He also led NVIDIA’s Bangalore design center for mobile and GPU chips with a team of more than 800 engineers. Kongetira was an early member of the leadership team developing the first Niagara processor at Afara which was acquired by Sun Microsystems and eventually Oracle.

“I am thrilled to join SambaNova working with the best in the industry and to rejoin the familiar faces of a talented team which led the transition from single to multicore processors with Niagara,” said Poonacha Kongetira, VP of Hardware at SambaNova. “Armed with SambaNova’s Reconfigurable Dataflow Architecture and Dataflow-as-a-Service, we have the unique opportunity to establish this unrivaled technology as the industry standard for machine learning and scientific applications just like we did with Niagara and apply the vision of machine learning and AI for enterprises.”

SambaNova raises $676 million for its AI platform

SambaNova Systems, a start-up based in Palo Alto, California, announced $676 million in Series D funding for its software, hardware and services to run AI applications.

SambaNova’s flagship offering is Dataflow-as-a-Service (DaaS), a subscription-based, extensible AI services platform designed to jump-start enterprise-level AI initiatives. It augments organizations’ AI capabilities and accelerates the work of existing data centers, allowing the organization to focus on its business objectives instead of infrastructure.

At the core of DaaS is SambaNova’s DataScale, an integrated software and hardware systems platform with optimized algorithms and next-generation processors delivering unmatched capabilities and efficiency across applications for training, inference, data analytics, and high-performance computing. SambaNova’s software-defined-hardware approach has set world records in AI performance, accuracy, scale, and ease of use.

The funding round was led by SoftBank Vision Fund 2, and included additional new investors Temasek and GIC, plus existing backers including funds and accounts managed by BlackRock, Intel Capital, GV (formerly Google Ventures), Walden International and WRVI. This Series D brings SambaNova’s total funding to more than $1 billion and rockets its valuation to more than $5 billion.

SambaNova says it is working "to shatter the computational limits of AI hardware and software currently on the market — all while making AI solutions for private and public sectors more accessible."

“We’re here to revolutionize the AI market, and this round greatly accelerates that mission,” said Rodrigo Liang, SambaNova co-founder and CEO. “Traditional CPU and GPU architectures have reached their computational limits. To truly unleash AI’s potential to solve humanity’s greatest technology challenges, a new approach is needed. We’ve figured out that approach, and it’s exciting to see a wealth of prudent investors validate that.”

Stanford Professors Kunle Olukotun and Chris Ré, along with Liang, founded SambaNova in 2017; the company came out of stealth in December 2020. Olukotun is known as the “father of the multi-core processor” and the leader of the Stanford Hydra Chip Multiprocessor (CMP) research project. Ré is an associate professor in the Department of Computer Science at Stanford University. He is a MacArthur Genius Award recipient, and is affiliated with the Statistical Machine Learning Group, Pervasive Parallelism Lab, and Stanford AI Lab.

http://www.sambanova.ai

Monday, September 13, 2021

Avicena demos multi-Tbps LED interconnect

At this week's European Conference for Optical Communications (ECOC) 2021, Avicena Tech Corp. is demonstrating its LightBundle multi-Tbps LED-based chip-to-chip interconnect technology.

The Avicena LightBundle achieves order-of-magnitude improvements in power dissipation and density over any other interconnect technology up to a reach of 10 meters. LightBundle™ is purpose-built for multi-Tbps chip-to-chip interconnects in distributed computing, processor-to-memory disaggregation, and other advanced computing applications. Avicena is based in Mountain View, California. 

Avicena's LightBundle is based on arrays of novel GaN high-speed micro-emitters with a reach of up to 10m. The technology leverages the microLED display manufacturing ecosystem and is fully compatible with high-performance CMOS ICs. The company’s CROMEs (Cavity-Reinforced Optical Micro-Emitters) are about an order of magnitude faster than current state-of-the-art LEDs. At 200 parallel lanes, this extrapolates to an aggregate link bandwidth of 2 Tbps with a bandwidth density of 10 Tbps/mm2.
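
The 2 Tbps figure follows directly from the 10 Gbps-per-lane rate Avicena cites (see the quote below); a one-line check of the arithmetic:

```python
# Quick check of the aggregate-bandwidth extrapolation mentioned above
# (arithmetic only; lane count and per-lane rate are the company's figures).
lanes = 200
gbps_per_lane = 10

aggregate_tbps = lanes * gbps_per_lane / 1000
print(f"{aggregate_tbps:.0f} Tbps aggregate")  # 2 Tbps
```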

“All of this is changing with the recent progress in optical emitter technology driven by advances in the display industry,” says Bardia Pezeshki, founder and CEO of Avicena. “We have developed super-efficient, high-density optical transmitters based on emitter technology from the display industry. These innovative devices would have been impractical just a few years ago. Our optimized devices and materials support 10Gbps links per lane over -40°C to +125°C temperature with excellent reliability. We refer to our new optical sources as Cavity-Reinforced Optical Micro-Emitters or CROMEs. We connect CROME arrays with CMOS compatible PDs using multi-core fiber bundles to create massively parallel interconnects with 100s of parallel lanes with a density of 10Tbps/mm2 over a reach of up to 10m. We call this new class of optical interconnect the Avicena LightBundle.”

https://avicena.tech/

Thursday, September 9, 2021

HyperLight advances its thin-film lithium niobate PICs

HyperLight, a start-up based in Cambridge, Mass., cited progress in its development of thin-film lithium niobate photonic integrated circuits for next-generation 800 Gbps and beyond 1.6 Tbps optical networking systems.

HyperLight claims its thin-film lithium niobate integrated photonics solution combines the proven superior material properties of lithium niobate with an established and scalable integration process. The company has developed a thin-film lithium niobate modulator that reaches sub-volt driving voltages while maintaining >100 GHz bandwidth. An earlier generation of the devices achieved a 700.5 Gbps line rate and a 538.8 Gbps net rate over 10.2 km of single-mode fiber using intensity-modulated, direct-detection (IM-DD) signals, in a demonstration conducted with Nokia Bell Labs.

Dr. Mian Zhang, CEO and co-founder of HyperLight, will present the company at the upcoming 2021 European Conference on Optical Communications (ECOC) next week in France.

https://hyperlightcorp.com/

HyperLight claims breakthrough with its lithium niobate optical modulator

HyperLight, a start-up based in Cambridge, MA developing thin-film lithium niobate (LN) photonic integrated circuits (PICs), announced breakthrough voltage-bandwidth performances in integrated electro-optic modulators. 

HyperLight says its electro-optic PIC could lead to orders of magnitude energy consumption reduction for next generation optical networking.

Current electro-optic modulators require extremely high radio-frequency (RF) driving voltages (> 5 V) as the analog bandwidth in Ethernet ports approaches 100 GHz for future terabit-per-second transceivers. In comparison, a typical CMOS RF modulator driver delivers less than 0.5 V at such frequencies. Compound semiconductor modulator drivers can deliver voltages > 1 V at significantly increased cost and energy consumption, but still fall short of the optimum driving voltage. The limited voltage-bandwidth performance of electro-optic modulators poses a serious challenge to meeting the tight power consumption requirements of network builders.
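
As a rough numeric illustration of that mismatch (the voltage figures are the ones quoted above; the ratios are a simplification, not HyperLight's analysis):

```python
# Rough illustration of the drive-voltage gap described above.
modulator_drive_v = 5.0   # > 5 V required by current electro-optic modulators
cmos_driver_v = 0.5       # < 0.5 V from a typical CMOS RF driver at ~100 GHz
compound_driver_v = 1.0   # > 1 V from compound-semiconductor drivers, at higher cost and power

print(f"CMOS driver shortfall:     ~{modulator_drive_v / cmos_driver_v:.0f}x")      # ~10x
print(f"Compound driver shortfall: ~{modulator_drive_v / compound_driver_v:.0f}x")  # ~5x
```

Closing that gap from the modulator side, by pushing the required drive voltage below one volt, is the crux of HyperLight's claimed breakthrough.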

HyperLight's integrated electro-optic modulator is capable of 3-dB bandwidth > 100 GHz, a previously impossible voltage-bandwidth achievement. The results are described in a manuscript entitled “Breaking voltage-bandwidth limits in integrated lithium niobate modulators using micro-structured electrodes,” published in Optica on March 8th, 2021.

“We believe the significantly improved electro-optic modulation performance in our integrated LN platform will lead to a paradigm shift for both analog and digital ultra-high speed RF links,” said Mian Zhang, author, CEO of HyperLight. “For example, using sub-volt modulators for digital applications, high speed electronic drivers may have largely reduced gain-bandwidth requirements or possibly be completely bypassed with modulators directly driven from electronic processors. This would save building and running costs for network operators. For RF links, the low-voltage, high bandwidth and excellent optical power handling ability could enable sensitive and low noise millimeter wave (mmWave) photonic links in ultrahigh-frequency bands.”

http://www.hyperlightcorp.com

Tuesday, August 24, 2021

Cerebras advances its "Brain-scale AI"

Cerebras Systems disclosed progress in its mission to deliver a "brain-scale" AI solution capable of supporting neural network models of over 120 trillion parameters in size. 

Cerebras’ new technology portfolio contains four innovations: Cerebras Weight Streaming, a new software execution architecture; Cerebras MemoryX, a memory extension technology; Cerebras SwarmX, a high-performance interconnect fabric technology; and Selectable Sparsity, a dynamic sparsity harvesting technology.

  • Cerebras Weight Streaming makes it possible to store model parameters off-chip while delivering the same training and inference performance as if they were on chip. This new execution model disaggregates compute and parameter storage (allowing researchers to flexibly scale size and speed independently) and eliminates the latency and memory bandwidth issues that challenge large clusters of small processors. It is designed to scale from a single CS-2 to up to 192 CS-2s with no software changes (a conceptual sketch of the streaming idea follows this list).
  • Cerebras MemoryX is a memory extension technology. MemoryX will provide the second-generation Cerebras Wafer Scale Engine (WSE-2) up to 2.4 Petabytes of high performance memory, all of which behaves as if it were on-chip. With MemoryX, CS-2 can support models with up to 120 trillion parameters.
  • Cerebras SwarmX is a high-performance, AI-optimized communication fabric that extends the Cerebras Swarm on-chip fabric to off-chip. SwarmX is designed to enable Cerebras to connect up to 163 million AI optimized cores across up to 192 CS-2s, working in concert to train a single neural network.
  • Selectable Sparsity enables users to select the level of weight sparsity in their model and provides a direct reduction in FLOPs and time-to-solution. Weight sparsity is an exciting area of ML research that has been challenging to study as it is extremely inefficient on graphics processing units. Selectable sparsity enables the CS-2 to accelerate work and use every available type of sparsity—including unstructured and dynamic weight sparsity—to produce answers in less time.
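
Referring to the Weight Streaming item above, the following is a minimal, hypothetical sketch of the general execution idea: parameters live off-chip and are streamed to the device one layer at a time, so only one layer's weights occupy device memory at once. The names and interfaces are invented for illustration; this is not Cerebras' software stack.

```python
# Hypothetical sketch of a weight-streaming execution model. Invented for
# illustration; not Cerebras' actual software.

class ExternalWeightStore:
    """Stands in for an off-chip parameter store (a MemoryX-like service)."""
    def __init__(self, layer_weights):
        self.layer_weights = layer_weights  # dict: layer name -> weights

    def stream(self, layer_name):
        return self.layer_weights[layer_name]

def forward_pass(activations, layer_order, store, compute_layer):
    """Run a forward pass while holding only one layer's weights on the device."""
    for name in layer_order:
        weights = store.stream(name)                       # weights arrive just in time
        activations = compute_layer(activations, weights)  # on-device compute
        # this layer's weights can now be dropped from device memory
    return activations

# Tiny usage example with scalar "tensors" just to show the flow:
store = ExternalWeightStore({"dense1": 2.0, "dense2": 3.0})
out = forward_pass(1.0, ["dense1", "dense2"], store, compute_layer=lambda x, w: x * w)
print(out)  # 6.0
```

Because parameters never need to fit in device memory all at once, model size and cluster size can, in principle, be scaled independently, which is the disaggregation point the list above makes.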

“Today, Cerebras moved the industry forward by increasing the size of the largest networks possible by 100 times,” said Andrew Feldman, CEO and co-founder of Cerebras. “Larger networks, such as GPT-3, have already transformed the natural language processing (NLP) landscape, making possible what was previously unimaginable. The industry is moving past 1 trillion parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120 trillion parameters.”

https://cerebras.net/news/cerebras-systems-announces-worlds-first-brain-scale-artificial-intelligence-solution/


Cerebras unveils 2nd-gen, 7nm Wafer Scale Engine chip

Cerebras Systems introduced its Wafer Scale Engine 2 (WSE-2) AI processor, boasting 2.6 trillion transistors and 850,000 AI optimized cores.

The wafer-sized processor, which is manufactured by TSMC on its 7nm node, more than doubles every performance characteristic of the chip (transistor count, core count, memory, memory bandwidth and fabric bandwidth) over the first-generation WSE.

“Less than two years ago, Cerebras revolutionized the industry with the introduction of WSE, the world’s first wafer scale processor,” said Dhiraj Mallik, Vice President Hardware Engineering, Cerebras Systems. “In AI compute, big chips are king, as they process information more quickly, producing answers in less time – and time is the enemy of progress in AI. The WSE-2 solves this major challenge as the industry’s fastest and largest AI processor ever made.”


The processor powers the Cerebras CS-2 system, which the company says delivers hundreds or thousands of times more performance than legacy alternatives, replacing clusters of hundreds or thousands of graphics processing units (GPUs) that consume dozens of racks, use hundreds of kilowatts of power, and take months to configure and program. The CS-2 fits in one-third of a standard data center rack.

Early deployment sites for the first generation Cerebras WSE and CS-1 included Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer, EPCC, the supercomputing centre at the University of Edinburgh, pharmaceutical leader GlaxoSmithKline, and Tokyo Electron Devices, amongst others.

“At GSK we are applying machine learning to make better predictions in drug discovery, so we are amassing data – faster than ever before – to help better understand disease and increase success rates,” said Kim Branson, SVP, AI/ML, GlaxoSmithKline. “Last year we generated more data in three months than in our entire 300-year history. With the Cerebras CS-1, we have been able to increase the complexity of the encoder models that we can generate, while decreasing their training time by 80x. We eagerly await the delivery of the CS-2 with its improved capabilities so we can further accelerate our AI efforts and, ultimately, help more patients.”

“As an early customer of Cerebras solutions, we have experienced performance gains that have greatly accelerated our scientific and medical AI research,” said Rick Stevens, Argonne National Laboratory Associate Laboratory Director for Computing, Environment and Life Sciences. “The CS-1 allowed us to reduce the experiment turnaround time on our cancer prediction models by 300x over initial estimates, ultimately enabling us to explore questions that previously would have taken years, in mere months. We look forward to seeing what the CS-2 will be able to do with more than double that performance.”

https://cerebras.net/product

Wednesday, August 18, 2021

EdgeQ samples its RISC-V 5G base station-on-a-Chip

EdgeQ, a start-up based in Santa Clara, California, is now sampling its RISC-V based, 5G Base Station-on-a-Chip to customers developing enterprise-grade 5G access points, Open-Radio Access Network (O-RAN) based Radio Unit (RU) and Distributed Unit (DU).

The EdgeQ platform combines highly integrated silicon with up to 50 RISC-V cores and 5G PHY software for processing all key functionalities and critical algorithms of the radio access network such as beamforming, channel estimation, massive MIMO and interference cancellation. The design is programmable and provides an open framework for L2/L3 software partners. 
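
As a generic illustration of one of the PHY functions named above, the snippet below computes textbook maximum-ratio (conjugate) beamforming weights from a channel estimate. This is a standard textbook formulation, not EdgeQ's implementation.

```python
# Textbook maximum-ratio (conjugate) beamforming: weight each antenna by the
# conjugate of its channel estimate, normalized. Generic illustration only;
# not EdgeQ's PHY code.
def mrt_beamforming_weights(channel):
    """channel: complex channel estimates, one per antenna element."""
    norm = sum(abs(h) ** 2 for h in channel) ** 0.5
    return [h.conjugate() / norm for h in channel]

h = [1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j]   # example per-antenna channel estimates
w = mrt_beamforming_weights(h)

# With this choice, the effective channel sum(w_i * h_i) is real and maximized.
effective_gain = sum(wi * hi for wi, hi in zip(w, h))
print(round(abs(effective_gain), 3))
```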

EdgeQ, which has been developing its platform for the past three years, said traditional merchant silicon vendors offer the PHY as reference software, placing the development burden on customers, who must invest years of effort to operationalize it into production. By removing this friction with a total platform solution that includes production-ready 5G PHY software, EdgeQ frees customers from the substantial investment, resources and time typically associated with productizing the 4G/5G PHY stack.

“Since day one, EdgeQ has been relentless about redefining the consumption and deployment model of 5G with its RISC-V based open architecture that converges connectivity, networking, and compute. How we elegantly club the hardware, the deployable RAN software, and an innovative chipset-as-a-service business model all together is what crystallizes the vision in a disruptively compelling way,” said Vinay Ravuri, CEO and Founder, EdgeQ. “Our sampling announcement today signifies that all this is a market reality.”

http://www.edgeq.io

EdgeQ pursues a feature subscription model for 5G basestation chip

EdgeQ, a start-up offering a 5G system-on-a-chip, introduced a 5G chipset-as-a-service model in which customers can scale 5G and AI features as a function of subscription payments. The service-oriented model would enable customers to scale from nominal to advanced 5G features such as ultra-reliable low-latency communications, geo-location services, massive MIMO and fine-grained network slicing, as well as extending compatibility to other legacy wireless protocols.

The company says its new service model is the very first in the chip industry to scale price, performance, and features as a function of need and use.  The potential is to elevate 5G Open Radio Access network (O-RAN) to an even more configurable, elastic, open wireless infrastructure. Enterprise network, telco, and cloud providers might also use the EdgeQ model to virtualize network resources.
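
A hypothetical sketch of what subscription-gated feature entitlement could look like in practice follows; the tier names and feature keys are invented for illustration, and EdgeQ has not published such an API.

```python
# Hypothetical feature-entitlement check for a subscription-gated basestation SoC.
# Tier names and feature keys are invented for illustration; not an EdgeQ API.
SUBSCRIPTION_TIERS = {
    "basic": {"5g_nr_connectivity"},
    "advanced": {"5g_nr_connectivity", "urllc", "geolocation",
                 "massive_mimo", "network_slicing"},
}

def feature_enabled(tier: str, feature: str) -> bool:
    """True if the active subscription tier unlocks the requested feature."""
    return feature in SUBSCRIPTION_TIERS.get(tier, set())

print(feature_enabled("basic", "massive_mimo"))     # False until the tier is upgraded
print(feature_enabled("advanced", "massive_mimo"))  # True
```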

“Our vision at EdgeQ has always been about implementing 5G in a format that is accessible, consumable, and intuitive for our customers. EdgeQ is not only the first company to converge both 5G and AI on a single chip for wireless infrastructure, but we are also able to make those capabilities available in a SaaS model.  This fundamentally reduces the initial capex investment required for 5G, thereby removing both technical and economic barriers of 5G adaptation at greenfield enterprises,” said Vinay Ravuri, CEO and Founder, EdgeQ. “This pay-as-you-go model ensures that the evolving demands of the market can leverage the full fluidity and elasticity of EdgeQ’s 5G-as-a-Service product.”



https://edgeq.io/



Video: New Silicon and Open Software Driving a New Ecosystem

There’s a plethora of interest in the 5G space from operators to enterprises and cloud service providers. In this video, Vinay Ravuri, CEO and Founder of EdgeQ, talks about how box makers are transforming with the emerging 5G market and how open software is driving a new set of markets.

https://youtu.be/KyRgSlZBpQQ

Tuesday, August 17, 2021

Microsoft invests in Rubrik for Zero Trust data protection

Microsoft will make an equity investment in Rubrik, a start-up based in Palo Alto, California, that offers a Cloud Data Management platform with data protection, search and analytics, archiving and compliance, and copy data management capabilities for hybrid cloud enterprises.

The companies also agreed to co-engineer projects to deliver integrated Zero Trust data protection solutions built on Microsoft Azure to address rising customer needs to protect against surging ransomware attacks, which are growing 150% annually. Together, Rubrik and Microsoft will provide Microsoft 365 and hybrid cloud data protection and integrated cloud services on Microsoft Azure.

Rubrik said its data management was designed for the cloud from day one. As part of this collaboration, customers and partners gain additional data protection, so that critical Microsoft 365 data is secure, easily discoverable, and always accessible in the case of a malicious attack, ransomware attack, accidental deletion, or corruption. Rubrik also offers additional support and protection for Microsoft 365 including instant search and restore and policy-based management at scale. Additionally, Rubrik and Microsoft provide long-term archival of Microsoft 365 data for the purposes of regulatory compliance. 

Rubrik takes a Zero Trust approach to data management, following the NIST principles of Zero Trust for everyone interacting with data. This means operating with the assumption that no person, application, or device is trustworthy. To meet this standard, data must be natively immutable so that it cannot be modified, encrypted, or deleted by ransomware.
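
A generic sketch of the "natively immutable" idea: backup records are written append-only and verified against a content hash, so any later tampering is detectable. This illustrates the principle described above, not Rubrik's implementation.

```python
# Generic illustration of append-only, hash-verified backup records.
# Sketches the "natively immutable" principle; not Rubrik's implementation.
import hashlib

class ImmutableBackupStore:
    def __init__(self):
        self._records = []  # append-only list of (digest, payload); no update or delete API

    def write(self, payload: bytes) -> str:
        digest = hashlib.sha256(payload).hexdigest()
        self._records.append((digest, payload))
        return digest

    def verify(self, digest: str) -> bool:
        """Confirm a stored record still matches the content hash issued at write time."""
        for stored_digest, payload in self._records:
            if stored_digest == digest:
                return hashlib.sha256(payload).hexdigest() == digest
        return False

store = ImmutableBackupStore()
ref = store.write(b"snapshot of critical Microsoft 365 data")
print(store.verify(ref))  # True as long as the record is untouched
```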

“As the pioneer of Zero Trust Data Management, Rubrik is helping the world’s leading organizations manage their data and recover from ransomware,” said Bipul Sinha, Co-founder and CEO of Rubrik. “Together with Microsoft, we are delivering tightly integrated data protection while accelerating and simplifying our customer’s journey to the cloud.”

“Customers, across industries, are migrating to the cloud to drive business transformation and realize growth,” said Nick Parker, Corporate Vice President, Global Partner Solutions, Microsoft. “End-to-end application and data management is critical to business success, and we believe that integrating Rubrik's Zero Trust Data Management solutions with Microsoft Azure and Microsoft 365 will make it easy for customers to advance their Zero Trust journey and increase their digital resilience.”    


Tuesday, August 3, 2021

Nozomi raises $100 million for OT and IoT security

Nozomi Networks, a start-up based in San Francisco, announced a $100 million pre-IPO funding round to help accelerate its OT and IoT security solutions.

The company said it plans to grow its sales, marketing and partner enablement efforts, and enhance its products to address new challenges in both the operational technology (OT) and internet of things (IoT) visibility and security markets. 

The Series D funding was led by Triangle Peak Partners and included Forward Investments, Honeywell Ventures, In-Q-Tel, Keysight Technologies, Porsche Ventures, and Telefónica Ventures.

“As we began the fund-raising process, many of the largest ecosystem partners in the world along with our customers recognized Nozomi Networks as the industry leader and requested the opportunity to invest in the company,” said Edgard Capdevielle, President and CEO of Nozomi Networks. “It’s the ultimate endorsement when not only a prestigious firm such as Triangle Peak Partners leads the investment, but customers and partners embrace Nozomi Networks and further validate our market leadership.”

“With the OT and IoT security market on the verge of explosive growth, Nozomi Networks has not only risen to the top but is strongly positioned to continue to outpace the market,” said Dain F. DeGroff, Co-founding Partner and President, Triangle Peak Partners. “The company’s consistently strong performance in combination with an impressive R&D model and its ability to scale quickly set itself apart. We’re excited to be a part of Nozomi Networks’ future.”


Tuesday, July 27, 2021

Blaize raises $71 million in Series D for edge AI silicon

Blaize, a start-up based in El Dorado Hills, California, announced $71 million in Series D funding for its edge AI computing solutions in the automotive, mobility, smart retail, security, industrial and metro market sectors.

The funding round was led by Franklin Templeton, a new investor, and Temasek, an existing investor, with participation from DENSO and other new and existing investors.

“Blaize System on Chip (“SoCs”) for automotive edge and central compute functions are accelerating electric vehicles and future architectural ambitions of automotive OEMs,” said Tony Cannestra, Director of Corporate Ventures, DENSO. “With substantial power advantages making EVs more efficient and economical, Blaize SoCs offer best in class performance with lower power across in-cabin, out of vehicle, and autonomous operations, enabling a streamlined architectural evolution to centralize compute.”

http://www.blaize.com

Wednesday, July 21, 2021

Fungible appoints Eric Hayes as CEO

Fungible has appointed Eric Hayes as its new CEO and a member of Fungible’s Board of Directors, succeeding Pradeep Sindhu, who has served as Executive Chairman and CEO since Fungible’s inception. Sindhu will continue in his role as Executive Chairman and assume the role of Chief Development Officer, where he will lead the engineering teams responsible for the company’s products and solutions.

Hayes most recently served as the Senior Vice President and General Manager of the High-Speed Connectivity business unit at Inphi where he led the company’s multi-hundred-million-dollar PAM4 DSP business. Prior to joining Inphi he held multiple senior leadership positions in marketing and general management at Marvell, Cavium and Broadcom. 


“It has been a great privilege for me to lead this extraordinarily talented group of dedicated individuals to invent the DPU and bring to market industry leading products that exploit its unique capabilities. The DPU is a new category of microprocessor destined to become a key building block of data centers as the industry embraces data-centric computing. My new role allows me to focus on technology: taking the learnings from our first generation of DPUs and applying them to the next and further enhancing our already industry leading products,” said Pradeep Sindhu, Co-Founder, Executive Chairman and Chief Development Officer of Fungible. 

“There are tremendous opportunities for Fungible to radically transform the global data center industry in the coming years, thanks to the great work of Pradeep and the team,” said Eric Hayes, CEO of Fungible. “I can’t express how truly inspired I am to join Fungible at this pivotal time. While many other companies continue to invest in faster CPUs and GPUs, the real bottleneck to achieving performance at scale remains the inability to efficiently disaggregate CPUs, GPUs and storage over a high performance standards-based network. The market is ripe for disruption, and Fungible’s DPU is the only technology capable of solving this problem.”


Atom Computing raises $15M for its quantum system

Atom Computing, a start-up based in Berkeley, California, announced $15 million in Series A funding for its first-generation quantum computing system.

Atom Computing is building nuclear-spin qubits out of an alkaline earth element. The company's first-generation quantum computing system, Phoenix, is currently capable of trapping 100 atoms in a vacuum chamber with optical tweezers. Phoenix is able to rearrange and manipulate their quantum states with lasers. The company said its design demonstrates exceptionally stable qubits at scale, with coherence times that are orders of magnitude greater than ever reported.

Atom Computing also announced the appointment of Rob Hays as CEO, President and member of Atom Computing's Board of Directors. Hays was most recently Vice President and Chief Strategy Officer for Lenovo's Infrastructure Solutions Group. He also served at Intel for more than 20 years, where he was Vice President and General Manager responsible for leading Intel's Xeon processor roadmaps. Company co-founder and CTO, Ben Bloom, Ph.D., will continue leading Atom Computing's engineering team.

"Quantum computing has accelerated to a point where it is no longer 10 years out. The scalability and stability of our systems gives us confidence that we will be able to lead the industry to true quantum advantage," said Rob Hays, CEO and President, Atom Computing. "We will be able to solve complex problems that have not been practical to address with classical computing, even with the exponential performance gains of Moore's Law and massively-scalable cluster architectures."

The funding round includes investment from Venrock, Innovation Endeavors and Prelude Ventures. In addition, the National Science Foundation awarded the company three grants.

"Atom Computing has a deep focus on scalable platforms compatible with error correction," said Ben Bloom, Co-founder and CTO, Atom Computing. "We've been able to focus on building a one-of-a-kind system that exists nowhere else in the world. Even within the first few months of Phoenix's operation, we have measured performance levels never before reported in any scalable quantum system." 

https://www.atom-computing.com/


BT makes equity investment in SAFE Security

BT announced a multi-million pound investment in Safe Security, a cyber risk management firm based in Palo Alto, California.

The company's Security Assessment Framework for Enterprises ('SAFE') platform allows organisations to take a health check of their existing defences and understand their likelihood of suffering a major cyber attack.

Philip Jansen, Chief Executive of BT, said: "Cyber security is now at the top of the agenda for businesses and governments, who need to be able to trust that they're protected against increasing levels of attack. Adding SAFE to BT's proactive, predictive security services will give customers an enhanced view of their threat level, and rapidly pinpoint specific actions needed to strengthen their defences. Already one of the world's leading providers in a highly fragmented security market, this investment is a clear sign of BT's ambition to grow further."

Saket Modi, Co-founder and CEO of Safe Security, said: "We're delighted to be working with a proven global security leader in BT. Their investment and strategic partnership with Safe Security will further accelerate our vision of making SAFE scores the industry standard for measuring and mitigating cyber risks. By aligning BT's global reach and capabilities with SAFE's ability to provide real-time visibility on cyber risk posture, we are going to fundamentally change how cyber security is measured and managed across the globe."    

https://www.safe.security/

Tuesday, July 20, 2021

Untether AI raises $125 million for high-performance silicon

Untether AI, a start-up based in Toronto, announced an oversubscribed $125 million funding round for its at-memory computation and AI inference acceleration silicon.

The latest funding was led by an affiliate of Tracker Capital Management and by Intel Capital, with participation from new investor Canada Pension Plan Investment Board (“CPP Investments”) and existing investor Radical Ventures.

As part of the funding round, Tracker Capital Senior Advisor Dr. Shaygan Kheradpir will join Untether AI’s Board of Directors. Previously, Dr. Kheradpir served as Verizon’s Group CIO, Barclays Bank Group COO, and CEO of Coriant and Juniper Networks.

“I am pleased to add Tracker Capital to our prestigious group of investors and welcome Shaygan to our Board,” said Arun Iyengar, CEO, Untether AI. “Tracker Capital’s unmatched experience and relationships across sectors will help speed our engagements in multiple high-value markets, including telecom, technology, financial services, retail, and defense. I am also thrilled to welcome CPP Investments to the Untether AI family. With the new funding round and partnerships, we will be able to expand our current product reach and accelerate the development of our next generation products."

Saf Yeboah-Amankwah, Intel Chief Strategy Officer added: “We have been an investor in Untether AI since the seed round. During that time Untether AI has assembled a world-class management team, developed and launched an exceptional product, and is now poised for growth in the burgeoning AI inference acceleration space.”



The dramatic increase in the usage of AI, along with its heavy computational requirements, is overwhelming traditional compute architectures, and drastically increasing power consumption in datacenters. 


Untether AI said its at-memory compute architecture and tsunAImi accelerator cards can achieve record-breaking energy efficiency and compute density for inference acceleration. 


www.untether.ai

Tuesday, July 13, 2021

Lightbits awarded U.S. patent for NVMe/TCP overprovisioning

Lightbits Labs, a start-up based in San Jose, California focused on NVMe over TCP (NVMe/TCP) software-defined storage, has been assigned a patent (11,036,626) for “a method and system to determine an optimal over-provisioning ratio.”

The abstract of the patent (11,036,626) published by the U.S. Patent and Trademark Office states: A system and a method of managing over-provisioning (OP) on non-volatile memory (NVM) computer storage media including at least one NVM storage device, by at least one processor, may include: receiving a value of one or more run-time performance parameters pertaining to data access requests to one or more physical block addresses (PBAs) of the storage media; receiving at least one of a target performance parameter value and a system-inherent parameter value; analyzing the received at least one run-time performance parameter value, to determine an optimal OP ratio of at least one NVM storage device in view of the received at least one of a target performance parameter value and system-inherent parameter value; and limiting storage of data objects on the at least one NVM storage device according to the determined OP ratio.
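
To make the abstract's flow easier to follow, here is a highly simplified, hypothetical sketch: observe a run-time performance parameter, compare it against a target, select an over-provisioning ratio within bounds, and cap the capacity exposed to the host accordingly. The heuristic and numbers below are invented for illustration and are not the patented method.

```python
# Highly simplified, hypothetical sketch of the flow described in the abstract:
# derive an over-provisioning (OP) ratio from observed vs. target performance,
# then limit the storage exposed on the device. The toy model is invented for
# illustration and is NOT the patented method.

def choose_op_ratio(observed_write_amp, target_write_amp,
                    min_op=0.07, max_op=0.28, step=0.01):
    """Raise OP until the modelled write amplification meets the target, within bounds."""
    op, wa = min_op, observed_write_amp
    while wa > target_write_amp and op < max_op:
        op += step
        wa *= 0.97  # toy assumption: each extra point of OP trims write amplification a little
    return round(op, 2)

def usable_capacity_bytes(raw_capacity_bytes, op_ratio):
    """Limit the capacity exposed to the host according to the chosen OP ratio."""
    return int(raw_capacity_bytes * (1 - op_ratio))

op = choose_op_ratio(observed_write_amp=4.0, target_write_amp=2.5)
print(op, usable_capacity_bytes(4_000_000_000_000, op))  # e.g. 0.23 and ~3.08 TB usable
```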