Showing posts with label AI. Show all posts

Wednesday, June 9, 2021

Xilinx debuts Versal AI Edge series processors

Xilinx introduced the Versal AI Edge series processors, boasting 4X the AI performance-per-watt versus GPUs and 10X greater compute density versus previous-generation adaptive SoCs.

Xilinx is positioning the new Versal AI Edge adaptive compute acceleration platforms (ACAPs) for a range of applications including automated driving with the highest levels of functional safety, collaborative robotics, predictive factory and healthcare systems, and multi-mission payloads for the aerospace and defense markets. The portfolio features AI Engine-ML to deliver 4X machine learning compute compared to the previous AI Engine architecture and integrates new accelerator RAM with an enhanced memory hierarchy for evolving AI algorithms. These architectural innovations deliver up to 4X AI performance-per-watt versus GPUs and lower latency, resulting in far more capable devices at the edge.

"Edge computing applications require an architecture that can evolve to address new requirements and scenarios with a blend of flexible compute processing within tight thermal and latency constraints,” said Sumit Shah, senior director, Product Management and Marketing at Xilinx. “The Versal AI Edge series delivers these key attributes for a wide range of applications requiring greater intelligence, making it a critical addition to the Versal portfolio with devices that scale from intelligent edge sensors to CPU accelerators.”

The Versal AI Edge series takes the production-proven 7nm Versal architecture and miniaturizes it for AI compute at low latency, all with power efficiency as low as six watts and the safety and security measures required in edge applications. As a heterogeneous platform with diverse processors, the Versal AI Edge series matches the engine to the algorithm, with Scalar Engines for embedded compute, Adaptable Engines for sensor fusion and hardware adaptability, and Intelligent Engines for AI inference that scales up to 479 TOPS (INT4)—unmatched by ASSPs and GPUs targeting edge applications—and for advanced signal processing workloads for vision, radar, LiDAR, and software-defined radio.

Sampling is available to early access customers, with shipments expected during the first half of 2022.

https://www.xilinx.com/versal-ai-edge

Thursday, April 29, 2021

Vectra AI raises $130 million for automated threat detection/response

Vectra AI, a start-up based in San Jose, California, announced $130 million in new funding for its work in automated cyber threat detection and response. The company's mission is "to see and stop threats before they become breaches."

“Over the past year, we have witnessed a continuous series of the most impactful and widespread cyberattacks in history. To protect their employees and digital assets, our customers require security solutions that are smarter than today’s adversaries and provide coverage for cloud, data centers and SaaS applications,” said Hitesh Sheth, president and chief executive officer at Vectra. “As we look to the future, Blackstone’s global presence, operational resources, and in-house technology expertise will help us achieve our mission to become one of the dominant cybersecurity companies in the world.”

The new $130 million funding round was led by funds managed by Blackstone Growth. This brings Vectra's total funding since inception to more than $350 million at a post-money valuation of $1.2 billion.

Viral Patel, a Senior Managing Director at Blackstone, said: “Vectra has a proven ability to stop in-progress attacks in the cloud, on corporate networks, and in private data centers for some of the top organizations in the world. The company has experienced extraordinary success through its commitment to combining innovative AI technology, first-class customer service, and top talent, and Blackstone is excited to become part of the Vectra team.”

For 2020, Vectra reported a compound annual growth rate (CAGR) exceeding 100 percent, while sales of its Cognito Detect product for Microsoft Office 365 have grown at a rate of over 700 percent.

http://www.vectra.ai

  • Vectra AI is headed by Hitesh Sheth (president and CEO), who previously was chief operating officer at Aruba Networks. Hitesh joined Aruba from Juniper Networks, where he was EVP/GM for its switching business and before that, SVP for the Service Layer Technologies group, which included security. Prior to Juniper, Hitesh held a number of senior management positions at Cisco.

Thursday, April 22, 2021

Expedera develops deep-learning accelerator (DLA) for AI silicon

Expedera, a start-up based in Santa Clara, California, emerged from stealth to unveil its Origin neural engine intellectual property (IP) for edge system silicon.

Expedera, which plans to license its deep-learning accelerator (DLA) technology to SoC designers, is targeting low-power edge devices such as smartphones, tablets, computers, edge servers, and automotive systems. The company says its DLA provides up to 18 TOPS/W at 7nm, up to ten times more than competitive offerings, while minimizing memory requirements. Origin accelerates the performance of neural network models such as object detection, recognition, segmentation, super-resolution, and natural language processing.
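As a rough illustration of what the claimed efficiency gap implies, the arithmetic below uses the 18 TOPS/W figure from the announcement; the 10x-lower competitor baseline and the workload size are assumptions inferred from the "up to ten times" claim, not Expedera data:

```python
# Rough power-budget comparison at a fixed inference workload.
# Expedera claims up to 18 TOPS/W; "up to ten times more than
# competitive offerings" implies a baseline around 1.8 TOPS/W.
EXPEDERA_TOPS_PER_WATT = 18.0   # claimed, at 7nm
COMPETITOR_TOPS_PER_WATT = 1.8  # implied 10x-lower baseline (assumption)

workload_tops = 18.0  # hypothetical sustained inference load

expedera_watts = workload_tops / EXPEDERA_TOPS_PER_WATT
competitor_watts = workload_tops / COMPETITOR_TOPS_PER_WATT

print(f"Expedera:   {expedera_watts:.1f} W")    # 1.0 W
print(f"Competitor: {competitor_watts:.1f} W")  # 10.0 W
```

At the single-digit-watt envelopes typical of edge devices, that order-of-magnitude difference is the whole argument for a dedicated DLA block.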

Expedera's co-founders include Da Chuang (CEO), who previously was cofounder and COO of Memoir Systems (an optimized memory IP startup acquired by Cisco); Siyad Ma (VP Engineering), who previously led Algorithmic TCAM ASIC and IP teams for Cisco Nexus7k, MDS, Cat4k/6k; and Sharad Chole (Chief Scientist), who previously was an architect at Cisco, Memoir Systems (Cisco), and Microsoft. 

https://www.expedera.com/

Tuesday, April 13, 2021

SambaNova raises $676 million for its AI platform

SambaNova Systems, a start-up based in Palo Alto, California, announced $676 million in Series D funding for its software, hardware and services to run AI applications.

SambaNova’s flagship offering is Dataflow-as-a-Service (DaaS), a subscription-based, extensible AI services platform designed to jump-start enterprise-level AI initiatives. The service augments an organization’s AI capabilities and accelerates the work of existing data centers, allowing the organization to focus on its business objectives instead of infrastructure.

At the core of DaaS is SambaNova’s DataScale, an integrated software and hardware systems platform with optimized algorithms and next-generation processors delivering unmatched capabilities and efficiency across applications for training, inference, data analytics, and high-performance computing. SambaNova’s software-defined-hardware approach has set world records in AI performance, accuracy, scale, and ease of use.

The funding round was led by SoftBank Vision Fund 2, and included additional new investors Temasek and GIC, plus existing backers including funds and accounts managed by BlackRock, Intel Capital, GV (formerly Google Ventures), Walden International and WRVI. This Series D brings SambaNova’s total funding to more than $1 billion and rockets its valuation to more than $5 billion.

SambaNova says it is working "to shatter the computational limits of AI hardware and software currently on the market — all while making AI solutions for private and public sectors more accessible."

“We’re here to revolutionize the AI market, and this round greatly accelerates that mission,” said Rodrigo Liang, SambaNova co-founder and CEO. “Traditional CPU and GPU architectures have reached their computational limits. To truly unleash AI’s potential to solve humanity’s greatest technology challenges, a new approach is needed. We’ve figured out that approach, and it’s exciting to see a wealth of prudent investors validate that.”

Stanford Professors Kunle Olukotun and Chris Ré, along with Liang, founded SambaNova in 2017 and came out of stealth in December 2020. Olukotun is known as the “father of the multi-core processor” and the leader of the Stanford Hydra Chip Multiprocessor (CMP) research project. Ré is an associate professor in the Department of Computer Science at Stanford University. He is a MacArthur Genius Award recipient, and is affiliated with the Statistical Machine Learning Group, Pervasive Parallelism Lab, and Stanford AI Lab.

http://www.sambanova.ai


Cerebras appoints CMO as it continues to grow

Cerebras Systems, a start-up developing the Wafer Scale Engine (WSE), a chip that contains 1.2 trillion transistors, covers 46,225 square millimeters of silicon and contains 400,000 AI-optimized compute cores, announced the appointment of Rupal Shah Hollenbeck as Vice President and Chief Marketing Officer (CMO).

Prior to Cerebras, Hollenbeck served as senior vice president and CMO at Oracle, where she led the marketing transformation strategy for the company, while overseeing global brand and demand generation for all product areas. Previously, she held various senior leadership positions at Intel for more than two decades, most recently serving as Corporate Vice President and General Manager for Sales & Marketing in Intel’s Data Center division. Hollenbeck serves as an Independent Director of Check Point Software Technologies and is a Founding Limited Partner in Neythri Futures Fund, a venture fund dedicated to bringing diversity to the investment ecosystem.

“I am thrilled to join Cerebras’ industry-leading team as they tackle some of society’s most urgent and challenging problems with their groundbreaking CS-1 AI supercomputer,” said Hollenbeck. “I’ve been impressed with Cerebras’ customer traction over the past year, and I look forward to further accelerating this momentum with new global partnerships and customer deployments.”

Over the past year, Cerebras opened new offices in Tokyo, Japan and Toronto, Canada. The company also announced a series of wins for its flagship product, the CS-1, including deployments at Argonne National Laboratory, Lawrence Livermore National Laboratory, Pittsburgh Supercomputing Center (PSC) for its groundbreaking Neocortex AI supercomputer, EPCC, the supercomputing centre at the University of Edinburgh, and pharmaceutical leader GlaxoSmithKline.

Wednesday, February 24, 2021

Juniper integrates Mist AI with SD-WAN and Secure Branch Gateway

Less than two months after completing its acquisition of 128 Technology, Juniper announced several new products that integrate 128 Technology and further its vision of end-to-end AI-driven automation, insights and actions from client-to-cloud. The releases combine Mist AI with 128 Technology’s Session Smart SD-WAN technology to deliver integrated AIOps, security and troubleshooting across the WLAN, LAN and WAN.

The latest additions to Juniper’s AI-driven enterprise include:

    • WAN Assurance and Marvis™ Virtual Network Assistant (VNA) for Session Smart SD-WAN. Ingesting telemetry data from Session Smart Routers into the Mist AI engine enables customers to set, monitor and enforce service levels across the WAN, proactively detect anomalies and gain enhanced insight into WAN conditions to assure optimal user experiences.

    • Enhanced SRX operations driven by Mist AI. The SRX Series of secure branch gateways can now be onboarded and configured using Mist AI and the cloud. With zero touch provisioning (ZTP) and automated workflows, Juniper simplifies deployment of these devices.
    • New EX4400 secure access switch. The latest addition to the EX Series portfolio is optimized for the cloud with best-in-class security and AIOps.

    “Juniper is consistently recognized for our experience-first approach to networking, where AI-driven automation, insight and actions simplify operator experiences and optimize end-user experiences from client-to-cloud,” said Jeff Aaron, VP Enterprise Product Marketing. “These latest product enhancements underscore our sustained commitment to executing on this vision, as well as our unique ability to rapidly deliver new solutions that drive real value to both customers and partners.”

    https://newsroom.juniper.net/news/news-details/2021/Juniper-Networks-Combines-Mist-AI-with-Session-Smart-SD-WAN-and-SRX-Secure-Branch-Gateway-for-Optimal-User-Experiences-from-Client-to-Cloud/default.asp

     Juniper to Acquire 128 Technology - focus on AI-driven WANs

Juniper Networks agreed to acquire 128 Technology, a software-based networking company based in Burlington, Mass., for $450 million in cash and the assumption of outstanding equity awards. Juniper has also coordinated for 128 Technology to issue retention-focused restricted stock units, which will be assumed by Juniper.

128 Technology’s session-smart networking enables enterprise customers and service providers to create a user experience-centric fabric for WAN connectivity. Routing decisions are based on real-time user sessions and agile business policies instead of static network policies configured on a per-tunnel basis. The company, which was founded in 2014 and launched in 2016, is headed by Andy Ory, co-founder and CEO.

     Juniper said the deal will enhance its AI-driven enterprise network portfolio by uniting 128 Technology’s Session Smart networking with Juniper’s campus and branch solutions driven by Mist AI. 128 Technology will be integrated with Juniper’s AI-Driven Enterprise business unit, which includes wired and wireless access and SD-WAN, all driven by Mist AI. The combined portfolio will give customers a unified platform for optimized user experiences from client-to-cloud.

    “The acquisition of 128 Technology will enable Juniper to accelerate in a key area where we are seeing enormous success – the AI-driven enterprise,” said Rami Rahim, CEO of Juniper Networks. “Both companies share a common vision of putting user experiences above all else and leveraging automation with proactive actions to simplify IT operations. With 128 Technology, we are adding a highly differentiated technology into our award-winning arsenal of campus and branch solutions driven by Mist AI to deliver even more customer value while further accelerating Juniper’s continued growth in the enterprise.”

    “128 Technology has brought to market a groundbreaking session-based routing solution that gives rise to experience-based networking. This allows our customers to realign their network with the requirements of a digital future that includes cloud, mobility and virtualization,” said Andy Ory, Co-Founder and CEO of 128 Technology. “The combination of our Session Smart Router with Juniper’s AI-driven enterprise portfolio, expansive channel and world-class support will dramatically accelerate our vision to transform networking and make a big impact on a very large, yet still highly under-served, WAN-Edge market.”

    Tuesday, December 1, 2020

    Qualcomm's Snapdragon 888 brings 3rd gen 5G modem, 6th gen processor

    Qualcomm previewed its new flagship - the Snapdragon 888 - featuring its 3rd generation X60 5G Modem-RF System with global band coverage and a 6th generation AI Engine operating at an astonishing 26 tera operations per second (TOPS).

The 5G modem operates in mmWave and sub-6 GHz across all major bands worldwide, and it brings support for 5G carrier aggregation, global multi-SIM, standalone and non-standalone modes, and Dynamic Spectrum Sharing.

The new 6th generation Qualcomm AI Engine features a completely re-engineered Qualcomm Hexagon processor that improves performance and power efficiency.

    “Creating premium experiences takes a relentless focus on innovation. It takes long term commitment, even in the face of immense uncertainty,” said Cristiano Amon, president, Qualcomm Incorporated. “It takes an organization that’s focused on tomorrow, to continue to deliver the technologies that redefine premium experiences.”



    https://www.qualcomm.com/news/releases/2020/12/01/qualcomm-redefines-premium-snapdragon-tech-summit-digital-2020


    AWS to deploy Intel's Gaudi AI accelerators in EC2 instances

    AWS will begin offering EC2 instances with up to eight of Intel's Habana Gaudi accelerators for machine learning workloads.

    Gaudi accelerators are specifically designed for training deep learning models for workloads that include natural language processing, object detection and machine learning training, classification, recommendation and personalization.

    “We are proud that AWS has chosen Habana Gaudi processors for its forthcoming EC2 training instances. The Habana team looks forward to our continued collaboration with AWS to deliver on a roadmap that will provide customers with continuity and advances over time,” states David Dahan, chief executive officer at Habana Labs, an Intel Company.

Intel acquires Habana Labs for $2 billion - AI chipset

    Intel has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center, for approximately $2 billion.

    Habana’s Gaudi AI Training Processor is currently sampling with select hyperscale customers. Large-node training systems based on Gaudi are expected to deliver up to a 4x increase in throughput versus systems built with the equivalent number of GPUs. Gaudi is designed for efficient and flexible system scale-up and scale-out.

    Additionally, Habana’s Goya AI Inference Processor, which is commercially available, has demonstrated excellent inference performance including throughput and real-time latency in a highly competitive power envelope. Gaudi for training and Goya for inference offer a rich, easy-to-program development environment to help customers deploy and differentiate their solutions as AI workloads continue to evolve with growing demands on compute, memory and connectivity.

    Habana will remain an independent business unit and will continue to be led by its current management team. Habana will report to Intel’s Data Platforms Group, home to Intel’s broad portfolio of data center class AI technologies.

    “This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center,” said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”

    Habana Labs chairman Avigdor Willenz will serve as a senior adviser to the business unit as well as to Intel Corporation after Intel’s purchase of Habana.

    “We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said David Dahan, CEO of Habana. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”


    Interview: Habana Labs targets AI processors



Habana Labs, a start-up based in Israel with offices in Silicon Valley, emerged from stealth to unveil its first AI processor. The company claims its deep learning inference processor, named Goya, delivers more than two orders of magnitude better throughput and power efficiency than commonly deployed CPUs. The company will offer a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate various AI inferencing workloads...



    Monday, November 30, 2020

    SK Telecom designs its own AI chip

    SK Telecom unveiled its own artificial intelligence (AI) chip and announced plans to enter the AI semiconductor business.

The South Korean telecoms operator said its new "SAPEON X220" chip is optimized for processing large amounts of data in parallel. Its deep learning computation speed is 6.7 kilo-frames per second, which is 1.5 times faster than that of the Graphics Processing Units (GPUs) widely used for inference by AI-service companies. At the same time, it uses 20 percent less power than a GPU, consuming 60 watts of energy, and is about half the price of a GPU.
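Taking the announced figures at face value, the implied GPU baseline can be derived with simple arithmetic (a sanity-check sketch using only numbers from the announcement, not SKT data):

```python
# SAPEON X220 figures from the announcement.
sapeon_kfps = 6.7    # kilo-frames per second
sapeon_watts = 60.0  # stated power draw

# "1.5 times faster than GPUs" and "20% less power than GPU"
# imply the following baseline for the comparison GPU:
gpu_kfps = sapeon_kfps / 1.5          # ~4.47 kfps
gpu_watts = sapeon_watts / (1 - 0.2)  # 75 W

# Resulting efficiency in frames per joule (kfps * 1000 / watts):
sapeon_fpj = sapeon_kfps * 1000 / sapeon_watts
gpu_fpj = gpu_kfps * 1000 / gpu_watts

# The combined claim works out to 1.5 / 0.8 = 1.875x frames per joule.
print(f"Efficiency gain: {sapeon_fpj / gpu_fpj:.3f}x")  # Efficiency gain: 1.875x
```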

    SKT plans to use the chip for its own AI-powered services, including for voice recognition. The aim is to generate synergies by combining AI semiconductor chips and 5G edge cloud. 

    SAPEON X220 will also be utilized by SKT’s affiliate companies. For instance, ADT Caps will apply the chip to enhance the performance of its AI-based video monitoring service named T View. In addition, SAPEON X220 will be applied to the cloud server of the next-generation media platform of Cast.era, a joint venture of SKT and Sinclair Broadcast Group.

    SKT also announced a plan to enter the AI as a Service (AIaaS) business. It will offer a complete solution package as a service by combining its AI chip and AI software, including diverse AI algorithms for features like content recommendation, voice recognition, video recognition and media upscaling, along with Application Programming Interfaces (APIs).

    https://www.sktelecom.com/en/press/press_detail.do?page.page=1&idx=1492&page.type=all&page.keyword=



     

    Thursday, November 19, 2020

    IBM to acquire Instana for AIOps application monitoring

IBM will acquire Instana, an application performance monitoring and observability company based in Chicago with a development center in Germany. Financial terms were not disclosed.

    Instana provides businesses with capabilities to manage the performance of complex and modern cloud-native applications no matter where they reside – on mobile devices, public and private clouds and on-premises, including IBM Z.  Instana's enterprise observability platform automatically builds a deep contextual understanding of cloud applications and provides actionable insights to indicate how to best prevent and remedy IT issues that could damage the business or reduce customer satisfaction -- such as slow response times, services that aren't working or infrastructure that is down.

    Once Instana's capabilities are integrated into IBM, companies will be able to feed these insights into Watson AIOps. The information could then be compared to a baseline of a normal operating application, with AI triggering alerts to resolve issues quickly before negative impacts to that transaction or activity. This can help eliminate the need for IT staff to manually monitor and manage applications, freeing these employees to focus on innovation and higher value work. 
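The baseline-comparison idea described above can be sketched in a few lines. This is a generic illustration of deviation-from-baseline alerting, not IBM's Watson AIOps implementation; the metric values and threshold are invented for the example:

```python
from statistics import mean, stdev

# Learn a baseline from response times observed during normal operation
# (milliseconds; illustrative values).
baseline_ms = [102, 98, 105, 99, 101, 103, 97, 100]
mu, sigma = mean(baseline_ms), stdev(baseline_ms)

def is_anomalous(sample_ms: float, threshold: float = 3.0) -> bool:
    """Flag observations that deviate more than threshold * sigma from baseline."""
    return abs(sample_ms - mu) > threshold * sigma

print(is_anomalous(101))  # False: within the normal operating range
print(is_anomalous(450))  # True: likely a slow-response incident
```

A production AIOps system would learn baselines per service and per time-of-day rather than use a single static window, but the alerting principle is the same.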

    "With the added responsibility of ensuring the build and run quality of the software they develop, DevOps teams need a new generation of application performance monitoring and observability capabilities to succeed," said Mirko Novakovic, co-founder and CEO, Instana. "Instana's observability capabilities combined with IBM's AI-powered automation capabilities across hybrid cloud environments will give clients a full view of their application performance to best optimize operations."

    https://www.ibm.com/cloud/blog/ibm-and-instana

    Thursday, November 12, 2020

    Nokia touts AVA Quality of Experience at the Edge

    Nokia announced the AVA QoE at the Edge service, which enables automated actions to fix customer issues instantly.

    Nokia says deployment of its AVA algorithms on traditional network architectures has achieved a 59 percent reduction in Netflix buffering and 15 percent fewer YouTube sessions that suffer from long playback. 

    Nokia AVA QoE at the Edge brings “code to where the data is”, deploying Machine Learning (ML) algorithms at the network edge to enable real-time automated actions. The solution also eases the data burden on CSPs, with an exponential reduction in the volume of user plane data required to feed ML models. 

    Dennis Lorenzin, Head of Network Cognitive Services, Global Services, Nokia, said: “Today, many CSPs are keen to launch new low latency services to their customers. With Nokia’s AVA QoE at the Edge, we bring AI to the edge, so CSPs can deliver personalized 5G experiences and guaranteed performance.”

    Nokia introduces “AI-as-a-service” for telcos

    Nokia introduced a set of AI capabilities for helping service providers to automate their network with cloud scalability. This framework provides an end-to-end service view with near real-time impact correlation for better visibility and control, supported by Nokia’s extensive library of AI use cases.

    The new Nokia AVA 5G Cognitive Operations offering anticipates network and service failures with a high level of precision and accuracy up to seven days in advance. If failures arise, Nokia 5G Cognitive Operations can solve them up to 50 percent faster and accurately assess the impact on customers and services. The insights provided will help support CSPs with their slice creation, with an intelligent provisioning system identifying network resources, what SLAs can be committed and where new revenue opportunities can be found. Future capability will also enable CSPs to customize slice creation, providing different SLA levels based on unique user requirements.

    Nokia is currently hosting the new capabilities in Microsoft Azure but says other public and private cloud options are possible.

    “Operators face a perfect storm of rising traffic and consumer expectations, so it is crucial to be able to predict and prevent service degradations at an earlier stage, while solving issues that arise significantly faster. Nokia AVA 5G Cognitive Operations enables CSPs to operate and assure latency for 5G use cases through AI, ultimately delivering an enhanced customer experience for consumers and enterprises,” states Dennis Lorenzin, Head of Network Cognitive Service unit at Nokia.

    Nokia claims that CSPs trialing the service have seen a 20 percent reduction in customer complaints and a 10 percent reduction in costly site visits.

    Monday, November 9, 2020

    Cellwize raises $32 million for its 5G automation and orchestration

    Cellwize, a start-up headquartered in Singapore with R&D in Israel, announced a $32 million Series B funding round for its mobile network automation and orchestration solution.

    The new funding round was led by Intel Capital and Qualcomm Ventures LLC with participation from Verizon Ventures, Samsung Next, and existing shareholders.

    Cellwize offers a RAN automation and orchestration platform for 5G rollouts. Cellwize CHIME enables operators to accelerate their 5G deployment by automating key business processes in the RAN domain. The company leverages artificial intelligence and machine learning for zero-touch 5G deployments, for automating 2G/3G/4G/5G network optimization, and for delivering mobile network assurance. 

    "We are delighted to have been selected by these leading VCs for their strategic investments to accelerate 5G in a way that is open and disaggregated," said Ofir Zemer, CEO of Cellwize. "This is a clear reflection of the trust they have in Cellwize and in the cutting-edge capabilities of CHIME for enabling the 5G revolution. "

    "Intel Capital is a lead investor in Cellwize because we're excited about the opportunity Cellwize has to help operators transform their networks to accelerate the 5G revolution," said David Flanagan, vice president and senior managing director at Intel Capital. "Cellwize and Intel Capital are aligned in their vision that Cellwize's cloud-native platform, which includes AI-based automation technology, will help customers deploy complex 5G networks in a more efficient, scalable, and flexible way. 

    "Qualcomm is at the forefront of 5G expansion, creating a robust ecosystem of technologies that will usher in the new era of connectivity," said Merav Weinryb, Senior Director of Qualcomm Israel Ltd. and Managing Director of Qualcomm Ventures Israel and Europe. "As a leader in RAN automation and orchestration, Cellwize plays an important role in 5G deployment. We are excited to support Cellwize through the Qualcomm Ventures' 5G global ecosystem fund as they scale and expedite 5G adoption worldwide." 

    https://www.cellwize.com/

    Sunday, November 1, 2020

    Intel to acquire SigOpt for AI model optimization software

    Intel agreed to acquire SigOpt, a start-up based in San Francisco that is focusing on the optimization of artificial intelligence (AI) software models at scale. Financial terms were not disclosed.

SigOpt provides a standardized, scalable, enterprise-grade optimization platform and API. The company was founded by Patrick Hayes and Scott Clark, who is credited with building the open-source Metric Optimization Engine (MOE) at Yelp.

    Intel plans to use SigOpt’s software technologies across Intel’s AI hardware products to help accelerate, amplify and scale Intel’s AI software solution offerings to developers.

    https://sigopt.com/ 

    Thursday, October 29, 2020

    Untether AI leverages at-memory computation for inference processing

Untether AI, a start-up based in Toronto, introduced its "tsunAImi" accelerator cards, which are powered by four of its own runAI200 processors. The processors feature a unique at-memory compute architecture that aims to rethink how computation for machine learning is accomplished. The company says that 90 percent of the energy for AI workloads in current processing architectures is consumed by data movement, transferring the weights and activations between external memory, on-chip caches, and finally to the computing element itself.

    Untether AI says it is able to deliver two PetaOperations per second (POPs) in its new standard PCI-Express cards -- more than two times any currently announced PCIe cards, which translates into over 80,000 frames per second of ResNet-50 v 1.5 throughput at batch=1, three times the throughput of its nearest competitor. For natural language processing, tsunAImi accelerator cards are rated at more than 12,000 queries per second (qps) of BERT-base, four times faster than any announced product.
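The headline numbers imply a per-inference compute budget that can be checked with straightforward arithmetic (an illustrative sketch using only figures from the announcement):

```python
# tsunAImi headline figures.
card_pops = 2.0        # PetaOperations per second per card
resnet50_fps = 80_000  # claimed ResNet-50 v1.5 frames/s at batch=1

# Implied operations available per frame at full throughput:
ops_per_frame = card_pops * 1e15 / resnet50_fps
print(f"{ops_per_frame / 1e9:.0f} GOPs per frame")  # 25 GOPs per frame
```

That per-frame budget comfortably covers a ResNet-50 inference pass, which is consistent with the claim that the card sustains this rate even without batching.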

    “For AI inference in cloud and datacenters, compute density is king. Untether AI is ushering in the PetaOps era to accelerate AI inference workloads at scale with unprecedented efficiency,” said Arun Iyengar, CEO of Untether AI.

    “When we founded Untether AI, our laser focus was unlocking the potential of scalable AI, by delivering more efficient neural network compute,” said Martin Snelgrove, co-founder and CTO of Untether AI. “We are gratified to see our technology come to fruition.”

    The imAIgine SDK is currently in Early Access (EA) with select customers and partners. The tsunAImi accelerator card is sampling now and will be commercially available in 1Q2021.

    Untether AI is funded by Radical Ventures and Intel Capital. 

    http://www.untether.ai

    Monday, July 13, 2020

Verizon pilots Google Cloud Contact Center

    Verizon is testing Google Cloud Contact Center Artificial Intelligence to deliver more intuitive customer support through natural-language recognition, faster processing, and real-time customer service agent assistance.

    The Google Cloud Contact Center AI solution aims to deliver shorter call times, quicker resolutions, and improved outcomes for customer satisfaction.

    “Verizon’s commitment to innovation extends to all aspects of the customer experience,” said Shankar Arumugavelu, global CIO & SVP, Verizon. “These customer service enhancements, powered by the Verizon collaboration with Google Cloud, offer a faster and more personalized digital experience for our customers while empowering our customer support agents to provide a higher level of service.”

    “We’re proud to work with Verizon to help enable its digital transformation strategy,” said Thomas Kurian, CEO of Google Cloud. “By helping Verizon reimagine the customer experience through our AI and ML expertise, we can create an experience that not only delights consumers, but also helps differentiate Verizon in the market.”

    Verizon to deliver Google Stadia gaming

    Verizon will deliver Google Stadia gaming over its Fios network.

    Starting January 29, new Fios Gigabit internet customers will receive Stadia Premiere Edition at no additional cost. Stadia Premiere Edition includes a controller, a free three-month Stadia Pro subscription for access to games in up to 4K/60fps, and a Google Chromecast Ultra for streaming on an existing TV.

    “Fios has long been known as the leading Internet service for console gaming and streaming entertainment,” said Brian Higgins, vice president, consumer device marketing and products, Verizon. “With the recent surge in adoption of cloud gaming, led by Stadia, Fios will continue to serve as the backbone for the best cloud gaming services.”

    “Google working with Verizon to deliver incredible cloud gaming experiences is a great step forward for the industry,” said Brennan Mullin, vice president, Devices and Services Partnerships, Google. “Verizon’s commitment to delivering fast, reliable Fios internet matches perfectly with Stadia’s exciting new cloud gaming, delivering an unmatched gamer experience.”

    Google unveiled its Stadia platform in March 2019.

    Tuesday, June 16, 2020

    CommScope rolls out AI-enabled, cloud network management

    CommScope introduced an AI-enabled network management as-a-service platform that enables enterprise IT and managed service providers (MSPs) to manage a converged wired and wireless network.

    RUCKUS Cloud offers single-pane management with network visibility and service assurance. Key features:

    • Unified wired and wireless management - Intent-based workflows expedite provisioning, management, and control from large venues to hundreds or thousands of sites.
    • ML and AI - Analytics tools enable IT to react quickly to issues and stop network anomalies from rising to the service-affecting level.
    • RESTful APIs - OpenAPI-compliant APIs allow IT to automate any network function, create custom dashboards and reports, and easily integrate RUCKUS Cloud into existing enterprise systems.
    • MSP dashboard - Allows MSPs to offer branded services and manage multiple end customers.
    • Network health monitoring - IT teams can define and measure performance against service level agreements (SLAs) that best reflect the requirements of their users.
    • Remote client troubleshooting - Remote access to connection history and clearly identified points of failure facilitate a rapid response to user-reported network issues, regardless of client location.
    • Planning and reporting - 12 months of included historical device- and element-level data helps IT make well-informed network planning decisions.
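To illustrate the OpenAPI-compliant REST integration point, here is a hypothetical sketch of building an authenticated request against such an API. The host, path, and token below are placeholders, not the real RUCKUS Cloud endpoints:

```python
# Hypothetical sketch of driving an OpenAPI-compliant REST API of the kind
# RUCKUS Cloud describes. Base URL, resource path, and token are illustrative.
import urllib.request

BASE = "https://ruckus.cloud.example/api/v1"   # placeholder base URL
TOKEN = "example-token"                        # obtained via the vendor's auth flow

def build_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request for one API resource."""
    return urllib.request.Request(
        f"{BASE}/{path}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/json",
        },
    )

req = build_request("venues")
print(req.full_url)   # urllib.request.urlopen(req) would send the request
```

Because the API is OpenAPI-compliant, the same pattern extends to any documented resource, which is what makes custom dashboards and enterprise-system integration straightforward.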

    “Networks are changing rapidly, with accelerating growth in users, network elements, devices and device diversity that’s driving a new level of network complexity, making it difficult for IT organizations to keep up,” said Matthias Machowinski, Omdia senior research director, enterprise networks. “Modern cloud-managed networking and ML/AI-based assurance tools provide automation and in-depth network insights, promising to give control back to the IT organization and deliver greater efficiency.”

    Tuesday, June 9, 2020

    Aruba turns up AI with its new Network Edge platform

    Aruba introduced an AI-powered, cloud-native platform that continuously analyzes data across enterprise infrastructure in order to predict and solve problems at the network edge before they happen.

    The new Aruba ESP (Edge Services Platform) uses AI to identify traffic while seeing and securing unknown devices on the network. Aruba ESP is a full-stack, cloud-native platform for wired, wireless and SD-WAN environments that unifies multiple network elements for centralized management and control. Aruba ESP will be sold either as a service in the cloud or on-premises, as a managed service delivered through Aruba partners, or via network as a service through HPE GreenLake.

    Aruba says its AIOps can identify exact root causes with greater than 95% accuracy, auto-remediate network issues, proactively monitor the user experience, tune the network to prevent problems before they occur, and use peer benchmarking and prescriptive recommendations to continuously optimize and secure the network.

    “The Intelligent Edge is the catalyst that will spark limitless possibilities for organizations and enterprises that want to accelerate transformation and ensure business continuity by leveraging their technology investments as their greatest asset,” said Keerti Melkote, president of Aruba, a Hewlett Packard Enterprise company. “Built upon Aruba’s guiding principles of connect, protect, analyze, and act, Aruba ESP is the culmination of years of innovation, R&D, Aruba ingenuity and, most importantly, input from our valued customers whose honest feedback and insightful perspective has helped to make this platform a network that knows.”

    Highlights for Aruba ESP:

    • Cloud-native management for any size enterprise – Aruba Central currently runs mission-critical networks for more than 65,000 customers. With new ArubaOS services, it is the industry’s only controllerless, cloud-based platform to provide full-stack management and operations for wired, wireless, and SD-WAN infrastructure of any size, across campus, data center, branch, and remote worker locations, consumed on-premises or in the cloud.
    • Simplified daily operations with unified infrastructure – With access to a common data lake via Aruba ESP, the latest version of Aruba Central has been enhanced with simplified navigation, advanced search, and contextual views to present multiple dimensions of information through a single point-of-control, virtually eliminating the need for disparate tools to collect and correlate information across numerous domains and locations.
    • Reduced resolution time with AI and automation – Based on modeling data from over one million network devices generating over 1.5B data points per day, Aruba’s new AI Insights reduces troubleshooting time by identifying hard-to-see network configuration issues and providing root-cause, prescriptive recommendations and automated remediation to continuously optimize network operations.
    • AI-powered IT efficiencies – Aruba Central now offers AI Search, a natural language processing data discovery service. IT teams can use simple, English-language queries to extract comprehensive user and device information from Aruba ESP’s common data lake, eliminating “swivel chair” investigations and presenting relevant information in context to quickly resolve a problem.
    • Granular visibility across applications, devices and the network – Enhancements to Aruba Central enable user-centric analytics from User Experience Insight to identify client, application, and network performance issues faster.
    • Extension of next-gen switching to distributed and mid-size enterprises – The new CX 6200 switch series brings built-in analytics and automation to every network edge where user and device connectivity occurs, generating insights that inform better business outcomes. It further expands Aruba’s end-to-end CX switching portfolio, enabling customers to run a single operating model from the enterprise campus and branch access layer to the data center.
    • Ongoing innovation with new Developer Hub – Aruba is introducing a resource for developers that includes Aruba APIs and documentation to streamline the development of innovative, next-generation edge applications leveraging the open Aruba ESP platform.
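As a toy illustration of the AI Search idea above, a plain-English query can be mapped to a filter over a device inventory. This is a generic sketch of the concept, not Aruba's actual implementation, and the inventory fields are invented:

```python
# Toy "natural language" search over a device inventory, in the spirit of
# Aruba's AI Search. Data and query handling are illustrative only.
import re

DEVICES = [
    {"user": "alice", "device": "laptop-01", "ap": "AP-3F-West"},
    {"user": "bob",   "device": "phone-07",  "ap": "AP-2F-East"},
]

def search(query: str):
    """Pull a user name out of a query like 'show devices for alice'."""
    m = re.search(r"for (\w+)", query.lower())
    if not m:
        return []
    user = m.group(1)
    return [d for d in DEVICES if d["user"] == user]

print(search("Show devices for Alice"))   # matches alice's laptop-01
```

A production service would use far richer language understanding, but the payoff is the same: one query surfaces user and device context without hopping between tools.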

    Thursday, May 21, 2020

    Juniper's Mist AI-powered Wi-Fi offers contact tracing

    Mist Systems' AI-powered enterprise Wi-Fi now supports key workplace safety and business continuity tasks, such as proximity tracing, journey mapping, and hot zone alerting, as part of contact tracing and social distancing initiatives.

    Juniper said that by using Mist access points and cloud services in conjunction with Wi-Fi and/or BLE-enabled devices such as phones and badges, enterprises can now support:

    • Proximity tracing - If an individual identifies as COVID-19 positive (or is experiencing symptoms), enterprises can quickly identify and notify other employees, guests or customers that may have been in close proximity to that individual while onsite. 
    • Journey mapping - Customers can view historical traffic patterns and dwell times for employees who have reported testing positive for COVID-19 – from the moment they were onsite to departure. Journey mapping can identify high-traffic hot zones so customers can take safety measures such as reconfiguring workspaces and deploying additional cleaning efforts.  
    • Hot zone alerting - By looking at the quantity of devices and locations in specific areas, enterprises can disperse or divert traffic away from congested areas with real-time, location-based alerting. They can also view trends over time to identify certain areas for proactive measures.
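The proximity tracing task described above reduces to a simple query over location logs: flag anyone who was within a given radius of an index case at the same time. The sketch below is a minimal illustration of that idea, not Mist's implementation, and the log format is invented:

```python
# Toy proximity tracing over (person, x, y, time) location logs.
# Coordinates in meters, time in minute buckets; all values illustrative.
from math import hypot

logs = [
    ("index", 10.0, 10.0, 0),
    ("ana",   11.0, 10.5, 0),    # ~1.1 m away during the same minute
    ("ben",   40.0,  5.0, 0),    # same minute, but far away
    ("cho",   10.2,  9.8, 30),   # close, but a different time bucket
]

def close_contacts(logs, case="index", radius=2.0):
    """Return people seen within `radius` meters of the case at the same time."""
    case_points = [(x, y, t) for p, x, y, t in logs if p == case]
    hits = set()
    for p, x, y, t in logs:
        if p == case:
            continue
        if any(t == ct and hypot(x - cx, y - cy) <= radius
               for cx, cy, ct in case_points):
            hits.add(p)
    return sorted(hits)

print(close_contacts(logs))   # only 'ana' was both close and concurrent
```

Real deployments refine this with RSSI-based distance estimates and dwell-time thresholds, but the core join of location, distance, and time is the same.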

    “Employee health and wellness have always been a key part of business continuity planning, but now more than ever enterprises are looking to IT for help complying with OSHA, ADA, CDC and other guidelines,” said Sudheer Matta, VP of Products at Mist. “The Mist architecture provides unique value by combining Wi-Fi with patented virtual BLE technology, supported by a 16 antenna array that is bi-directional and minimizes the need for extra infrastructure hardware like battery-powered beacons. In addition, Mist recently launched a Premium Analytics service that provides unique insight from a variety of data sources to optimize end-user/client experiences and identify trends that can assist customers with workplace safety.”

    Monday, April 20, 2020

    LeapMind unveils ultra low-power AI inference accelerator

    Tokyo-based LeapMind introduced its "Efficiera" ultra-low power AI inference accelerator IP for companies that design ASIC and FPGA circuits, and other related products.

    "Efficiera" is an ultra-low power AI Inference Accelerator IP specialized for Convolutional Neural Network (CNN)(1) inference calculation processing; it functions as a circuit in an FPGA or ASIC device. Its "extreme low bit quantization" technology, which minimizes the number of quantized bits to 1–2 bits, does not require cutting-edge semiconductor manufacturing processes or the use of specialized cell libraries to maximize the power and space efficiency associated with convolution operations, which account for a majority of inference processing.

    LeapMind is simultaneously launching several related products and services: "Efficiera SDK," a software development tool providing a dedicated learning and development environment for Efficiera, the "Efficiera Deep Learning Model" for efficient training of deep learning models, and "Efficiera Professional Services," an application-specific semi-custom model building service based on LeapMind's expertise that enables customers to build extreme low bit quantized deep learning models applicable to their own unique requirements.

    Thursday, April 9, 2020

    Intel and Georgia Tech to lead DARPA project

    Intel and the Georgia Institute of Technology have been selected to lead a Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD) program team for the Defense Advanced Research Projects Agency (DARPA).

    The goal of the GARD program is to establish theoretical ML system foundations that will not only identify system vulnerabilities and characterize properties to enhance system robustness, but also promote the creation of effective defenses. Through these program elements, GARD aims to create deception-resistant ML technologies with stringent criteria for evaluating their effectiveness.

    The first phase of GARD will focus on enhancing object detection technologies through spatial, temporal and semantic coherence for both still images and videos.
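The kind of deception attack GARD targets can be shown on a toy model: a tiny, structured perturbation of the input flips a classifier's decision. The linear model and FGSM-style step below are a minimal sketch of the attack class, not anything from the GARD program itself:

```python
# Toy adversarial example: a small input nudge flips a linear classifier.
def predict(w, x, b=0.0):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

w = [0.5, -0.3, 0.8]
x = [1.0, 2.0, 0.1]     # score = 0.5 - 0.6 + 0.08 = -0.02  -> class 0

# FGSM-style step: move each input slightly in the direction that
# increases the score (the sign of the gradient, here sign(w)).
eps = 0.1
x_adv = [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(predict(w, x), predict(w, x_adv))   # the tiny change flips the label
```

Defenses of the kind GARD pursues aim to make models robust to exactly these small, deliberately chosen perturbations, including in the object detection setting the first phase targets.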

    Intel is the prime contractor in this four-year, multimillion-dollar joint effort to improve cybersecurity defenses against deception attacks on machine learning (ML) models.

    “Intel and Georgia Tech are working together to advance the ecosystem’s collective understanding of and ability to mitigate against AI and ML vulnerabilities. Through innovative research in coherence techniques, we are collaborating on an approach to enhance object detection and to improve the ability for AI and ML to respond to adversarial attacks,” said Jason Martin, principal engineer at Intel Labs and principal investigator for the DARPA GARD program from Intel.