Showing posts with label AI. Show all posts

Thursday, May 21, 2020

Juniper's Mist AI-powered Wi-Fi offers contact tracing

Mist Systems' AI-powered enterprise Wi-Fi is now supporting key workplace business continuity safety tasks like proximity tracing, journey mapping and hot zone alerting as part of strategic contact tracing and social distancing initiatives.

Juniper said that by using Mist access points and cloud services in conjunction with Wi-Fi and/or BLE-enabled devices such as phones and badges, enterprises can now support:

  • Proximity tracing - If an individual identifies as COVID-19 positive (or is experiencing symptoms), enterprises can quickly identify and notify other employees, guests or customers that may have been in close proximity to that individual while onsite. 
  • Journey mapping - Customers can view historical traffic patterns and dwell times for employees who have reported testing positive for COVID-19 – from the moment they were onsite to departure. Journey mapping can identify high-traffic hot zones so customers can take safety measures such as reconfiguring workspaces and deploying additional cleaning efforts.  
  • Hot zone alerting - By looking at the quantity of devices and locations in specific areas, enterprises can disperse or divert traffic away from congested areas with real-time, location-based alerting. They can also view trends over time to identify certain areas for proactive measures.

“Employee health and wellness have always been a key part of business continuity planning, but now more than ever enterprises are looking to IT for help complying with OSHA, ADA, CDC and other guidelines,” said Sudheer Matta, VP of Products at Mist. “The Mist architecture provides unique value by combining Wi-Fi with patented virtual BLE technology, supported by a 16 antenna array that is bi-directional and minimizes the need for extra infrastructure hardware like battery-powered beacons. In addition, Mist recently launched a Premium Analytics service that provides unique insight from a variety of data sources to optimize end-user/client experiences and identify trends that can assist customers with workplace safety.”

Monday, April 20, 2020

LeapMind unveils ultra low-power AI inference accelerator

Tokyo-based LeapMind introduced its "Efficiera" ultra-low power AI inference accelerator IP for companies that design ASIC and FPGA circuits, and other related products.

"Efficiera" is an ultra-low power AI Inference Accelerator IP specialized for Convolutional Neural Network (CNN)(1) inference calculation processing; it functions as a circuit in an FPGA or ASIC device. Its "extreme low bit quantization" technology, which minimizes the number of quantized bits to 1–2 bits, does not require cutting-edge semiconductor manufacturing processes or the use of specialized cell libraries to maximize the power and space efficiency associated with convolution operations, which account for a majority of inference processing.

LeapMind is simultaneously launching several related products and services: "Efficiera SDK," a software development tool providing a dedicated learning and development environment for Efficiera, the "Efficiera Deep Learning Model" for efficient training of deep learning models, and "Efficiera Professional Services," an application-specific semi-custom model building service based on LeapMind's expertise that enables customers to build extreme low bit quantized deep learning models applicable to their own unique requirements.

Thursday, April 9, 2020

Intel and Georgia Tech to lead DARPA project

Intel and the Georgia Institute of Technology have been selected to lead a Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD) program team for the Defense Advanced Research Projects Agency (DARPA).

The goal of the GARD program is to establish theoretical ML system foundations that will not only identify system vulnerabilities and characterize properties to enhance system robustness, but also promote the creation of effective defenses. Through these program elements, GARD aims to create deception-resistant ML technologies with stringent criteria for evaluating their effectiveness.

The first phase of GARD will focus on enhancing object detection technologies through spatial, temporal and semantic coherence for both still images and videos.

Intel is the prime contractor in this four-year, multimillion-dollar joint effort to improve cybersecurity defenses against deception attacks on machine learning (ML) models.

“Intel and Georgia Tech are working together to advance the ecosystem’s collective understanding of and ability to mitigate against AI and ML vulnerabilities. Through innovative research in coherence techniques, we are collaborating on an approach to enhance object detection and to improve the ability for AI and ML to respond to adversarial attacks,” states Jason Martin, principal engineer at Intel Labs and principal investigator for the DARPA GARD program from Intel.

Monday, February 24, 2020

Conversational AI for Telco Service Providers



Customer support may well be the first really solid business case for AI in Service Provider networks. Imagine the ROI if conversational AI and automation could trim the time needed for millions of customer support calls.

In this video, Umesh Sachdev, CEO of Uniphore, discusses the use case for conversational AI by telcos.

More thought leadership videos on network automation may be found here:
https://nginfrastructure.com/network-automation/


Tuesday, January 28, 2020

ServiceNow to acquire Passage AI

ServiceNow agreed to acquire Passage AI, a start-up based in Mountain View, California, for its conversational AI platform. Financial terms were not disclosed.

ServiceNow said the acquisition will enhance its deep learning AI capabilities and accelerate its vision of supporting all major languages across the company’s Now Platform and products, including ServiceNow Virtual Agent, Service Portal, Workspaces and emerging interfaces.

“Work flows more smoothly when people can get things done in their native language,” said Debu Chatterjee, senior director of AI Engineering at ServiceNow. “Building deep learning, conversational AI capabilities into the Now Platform will enable a work request initiated in German or a customer inquiry initiated in Japanese to be solved by Virtual Agent. Passage AI’s technology will enable us to accelerate our vision of empowering great employee and customer experiences by delivering great workflow experiences. ServiceNow believes in making work flow more smoothly across the enterprise, in all major languages.”

Passage AI was founded in 2016 by CEO Ravi N. Raj, CTO Madhu Mathihalli and CTO Mitul Tiwari.

Monday, January 27, 2020

Iguazio raises $24M for its data science platform

Iguazio, a start-up based in Herzliya, Israel, raised $24 million in funding for its data science platform for real time machine learning applications.

The Iguazio data science platform helps data scientists create real-time AI applications while working within their chosen machine learning stack.

The funding was led by INCapital Ventures, with participation from existing and new investors, including Pitango, Verizon Ventures, Magma Venture Partners, Samsung SDS, Kensington Capital Partners, Plaza Ventures and Silverton Capital Ventures.

“This is a pivotal time for AI. Our platform helps data scientists push the limits of their real-time AI applications and see their impact in real business environments,” said Asaf Somekh, co-founder and CEO of Iguazio. “With support from INCapital, Kensington Capital Partners, and our other investors, we are ready to expand our international team and reach our ambitious goals.”

http://www.iguazio.com

Wednesday, January 15, 2020

Turkcell establishes AI principles

Turkcell has announced a set of AI Principles that commit to the ethical and responsible use of artificial intelligence technologies.

During the press conference held at Turkcell HQ, the company shared its following principles:

  1. We are human and environment centric
  2. We are professionally responsible
  3. We respect data privacy
  4. We are transparent
  5. We are security-based
  6. We are fair
  7. We share and collaborate for a better future

“AI should be raised like children and we commit to teach better as responsible parents,” says Omer Barbaros Yis, Turkcell CMO. “Today we share our principles and our commitment to help AI have socially beneficial impacts for our customers and society at large. We are proud to become the first company to contribute to AI ethics in Turkey. The field will continuously expand and we will witness its transformative impacts in our daily lives. Backed by our experience in digital transformation and creating next-generation technologies, we will continue to drive a positive direction towards its advancement and help overcome public concerns about the field.”

Zinier raises $90M for AI-driven automation

Zinier, a start-up based in San Mateo, California, raised $90 million in Series C funding for its efforts to transform field service workforces with AI-driven automation.

Zinier said its intelligent field service automation platform, called ISAC, helps organizations work smarter—from the back office to the field—to solve problems more quickly, fix things before they break, and maintain the services that we rely on every day.

“Services that we rely on every day - electricity, transportation, communication - are getting by on centuries-old infrastructure that requires a major upgrade for the next generation of users,” said Arka Dhar, co-founder and CEO of Zinier. “A field service workforce powered by both people and automation is necessary to execute the massive amount of work required to not only maintain these critical human infrastructures, but to also prepare for growth. Our team is focused on enabling this transformation across industries through intelligent field service automation.”

New investor ICONIQ Capital led the round with new participation from Tiger Global Management, and return investors Accel, Founders Fund, Nokia-backed NGP Capital, France-based Newfund Capital and Qualcomm Ventures LLC. The funding will support global customer adoption and expansion of Zinier’s AI-driven field service automation platform, ISAC.

http://www.zinier.com

Monday, December 16, 2019

Intel acquires Habana Labs for $2 billion - AI chipset

Intel has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center, for approximately $2 billion.

Habana’s Gaudi AI Training Processor is currently sampling with select hyperscale customers. Large-node training systems based on Gaudi are expected to deliver up to a 4x increase in throughput versus systems built with the equivalent number of GPUs. Gaudi is designed for efficient and flexible system scale-up and scale-out.

Additionally, Habana’s Goya AI Inference Processor, which is commercially available, has demonstrated excellent inference performance including throughput and real-time latency in a highly competitive power envelope. Gaudi for training and Goya for inference offer a rich, easy-to-program development environment to help customers deploy and differentiate their solutions as AI workloads continue to evolve with growing demands on compute, memory and connectivity.

Habana will remain an independent business unit and will continue to be led by its current management team. Habana will report to Intel’s Data Platforms Group, home to Intel’s broad portfolio of data center class AI technologies.

“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center,” said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”

Habana Labs chairman Avigdor Willenz will serve as a senior adviser to the business unit as well as to Intel Corporation after Intel’s purchase of Habana.

“We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said David Dahan, CEO of Habana. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”



Habana Labs, a start-up based in Tel-Aviv, Israel, raised $75 million in an oversubscribed series B funding for its development of AI processors.

Habana Labs is currently in production with its first product, a deep learning inference processor, named Goya, that is >2 orders of magnitude better in throughput & power than commonly deployed CPUs, according to the company. Habana is now offering a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate various AI inferencing workloads, such as image recognition, neural machine translation, sentiment analysis and recommender systems. A PCIe card based on its Goya HL-1000 processor delivers 15,000 images/second throughput on the ResNet-50 inference benchmark, with 1.3 milliseconds latency, while consuming only 100 watts of power. The Goya solution consists of a complete hardware and software stack, including a high-performance graph compiler, hundreds of kernel libraries, and tools.
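A quick back-of-envelope check puts the quoted figures in perspective (using only the numbers stated above):

```python
# ResNet-50 figures quoted by Habana for the Goya HL-1000 card
throughput = 15000   # images per second
power = 100          # watts (joules per second)

# images processed per joule of energy
print(throughput / power)  # 150.0
```

At 150 images per joule, energy efficiency rather than raw throughput is the headline claim against CPU-based inference.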

Habana Labs expects to launch a training processor - codenamed Gaudi - in the second quarter of 2019.

The funding round was led by Intel Capital and joined by WRV Capital, Bessemer Venture Partners, Battery Ventures and others, including existing investors. This brings total funding to $120 million. The company was founded in 2016.

“We are fortunate to have attracted some of the world’s most professional investors, including the world’s leading semiconductor company, Intel,” said David Dahan, Chief Executive Officer of Habana Labs. “The funding will be used to execute on our product roadmap for inference and training solutions, including our next generation 7nm AI processors, to scale our sales and customer support teams, and it only increases our resolve to become the undisputed leader of the nascent AI processor market.”

“Among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor,” said Lip-Bu Tan, Founding Partner of WRV Capital, a leading international venture firm focusing on semiconductors and related hardware, systems, and software. “We are delighted to partner with Intel in backing Habana Labs’ products and its extraordinary team.”

https://habana.ai/

Intel ships its Nervana Neural Network Processors

Intel announced the commercial production of its Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000).

The new devices are Intel’s first purpose-built ASICs for complex deep learning for cloud and data center customers. Intel said its Nervana NNP-T strikes the right balance between computing, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers. Both products were developed for the AI processing needs of leading-edge AI customers like Baidu and Facebook.

Intel also revealed its next-generation Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications. Scheduled to be available in the first half of 2020, the new VPU incorporates unique, highly efficient architectural advances that are expected to deliver leading performance: more than 10 times the inference performance of the previous generation, with up to six times the power efficiency of competing processors.

“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge,” stated Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group.

“We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I,” said Misha Smelyanskiy, director, AI System Co-Design at Facebook.



Monday, December 9, 2019

AI and the Customer Experience



How can AI and automation help to redefine the customer support experience?

Umesh Sachdev, co-founder and CEO of Uniphore, briefly shares his 2020 vision for AI and customer service.

Uniphore, a start-up based in Palo Alto, California, targets Conversational Service Automation. The company was incubated in 2008 in IIT Madras, the premier research institute in India, and has grown to over 150 employees located in the U.S., India, and Singapore. Earlier this year, Uniphore raised $51 million in Series C funding led by March Capital Partners, with participation from Chiratae Ventures (formerly IDG Ventures), Sistema Asia, CXO Fund, ITP, Iron Pillar, Patni Family, plus other investors. John Chambers, CEO and founder of JC2 Ventures and former CEO of Cisco, is an active advisor to Uniphore and holds a 10% stake in Uniphore.

https://youtu.be/5GkxfqTyt04

See our full series of Thought Leadership videos at https://nginfrastructure.com/network-...


Wednesday, November 6, 2019

Huawei to invest 100 million euros in European AI partners

Huawei announced an investment of 100 million euros over the next 5 years in European AI companies.

According to Jiang Tao, VP of Intelligent Computing BU, "Huawei is committed to investing in the AI computing industry in Europe, enabling enterprises and individual developers to leverage the Ascend AI series products for technological and business innovation. Over the next 5 years, Huawei plans to invest 100 million euros in the AI Ecosystem Program in Europe, helping industry organizations, 200,000 developers, 500 ISV partners, and 50 universities and research institutes to boost innovation."

At HUAWEI CONNECT 2019 (Shanghai), Huawei launched a broad product portfolio based on its Ascend 910 AI training processor. The products include the Atlas 300 AI training card, Atlas 800 training server, and Atlas 900 AI training cluster. The Atlas series products support all scenarios across device-edge-cloud, accelerating intelligent transformation of industries with ultimate computing power for training.

Tuesday, October 29, 2019

Next Gen Network Automation - AI inside China Mobile



AI is key for next-gen network transformation. Dr. Junlan Feng, General Manager of the AI and Intelligent Operation R&D Center & Chief Scientist at China Mobile, discusses the significance of AI and how AI coupled with automation is key to future networks.

https://nginfrastructure.com/network-automation/


Monday, October 21, 2019

ModelOp targets AI-based model operations in large enterprises

ModelOp, formerly Open Data Group (ODG), has relaunched with a focus on AI-based model operations in large enterprises. The idea is to integrate data science and machine learning with enterprise applications that use the models’ predictions to automate and improve decisions like credit scoring, fraud detection, bond trading, customer retention, manufacturing operations, supply chain optimization, ecommerce sales, ad conversions, etc.

The company was founded in 2016 by Pete Foley and Stu Bailey, who previously worked together at Infoblox, where they pioneered a new category of IT infrastructure, leading to a successful IPO in 2011 and the acquisition of the company by Vista Equity for $1.6B in 2016. Prior to co-founding Infoblox, Bailey worked closely with the pioneers of data science as the technical lead at the National Center for Data Mining. Foley previously served as CEO at Port Authority (acquired by Websense) and RingCube (acquired by Citrix), and as chairman of Graphite Systems (acquired by EMC).

The company describes DS/ML models as a new kind of software because they are probabilistic and must be “trained” using large quantities of data.  Over time, DS/ML models decay, producing degraded results unless they’re re-trained or re-written. The practice of Model Operations must address the release, activation, monitoring, performance tracking, management, reuse, maintenance and governance of the AI and ML models. 
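One piece of the Model Operations practice described above, monitoring for decay, can be sketched in a few lines. This is a hypothetical check, not ModelOp's product logic, and the `needs_retraining` name and tolerance value are illustrative assumptions:

```python
import numpy as np

def needs_retraining(recent_accuracy, baseline_accuracy, tolerance=0.05):
    """Flag a deployed model whose live accuracy has decayed past a tolerance.

    Hypothetical monitoring check; a real Model Operations platform also
    covers release, governance, reuse, and the rest of the lifecycle.
    """
    return baseline_accuracy - np.mean(recent_accuracy) > tolerance

# Accuracy measured on recently labeled production samples has drifted
# below the 0.92 accuracy recorded at deployment time.
print(needs_retraining([0.85, 0.84, 0.86], baseline_accuracy=0.92))  # True
```

In practice such a check runs continuously against held-out labeled samples, triggering the re-training or re-writing the paragraph above describes.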

“We got ahead of the market when we started this venture, convinced that the unique characteristics of data science and machine learning (DS/ML) models would require new organizational and technical approaches in order to realize their value, especially within large enterprises,” said Pete Foley, CEO of ModelOp. Foley continued, “Our expanding customer base, and the level of activity we are seeing in the industry overall strongly validates our vision and makes us extremely excited for this next phase of growth.” ModelOp’s customers include five of the top 10 largest financial institutions, as well as Fortune 500 manufacturers, insurers, and credit bureaus.

ModelOp has offices in Chicago, IL and San Jose, CA.

Wednesday, October 2, 2019

ETSI begins work on Securing Artificial Intelligence

ETSI has formed a new Industry Specification Group on Securing Artificial Intelligence (ISG SAI) to develop technical specifications to mitigate threats arising from the deployment of AI throughout multiple ICT-related industries. This includes threats to artificial intelligence systems from both conventional sources and other AIs.

The intent of the ISG SAI is to address 3 aspects of artificial intelligence in the standards domain:

  • Securing AI from attack e.g. where AI is a component in the system that needs defending
  • Mitigating against AI e.g. where AI is the ‘problem’ or is used to improve and enhance other more conventional attack vectors
  • Using AI to enhance security measures against attack from other things e.g. AI is part of the ‘solution’ or is used to improve and enhance more conventional countermeasures.

Securing AI Problem Statement

This specification will be modelled on the ETSI GS NFV-SEC 001 “Security Problem Statement” which has been highly influential in guiding the scope of ETSI NFV and enabling “security by design” for NFV infrastructures. It will define and prioritize potential AI threats along with recommended actions. The recommendations contained in this specification will be used to define the scope and timescales for the follow-up work.

The founding members of the new ETSI group include BT, Cadzow Communications, Huawei Technologies, NCSC and Telefónica.

The first meeting of ISG SAI will be held in Sophia Antipolis on 23 October.

https://www.etsi.org/newsroom/press-releases/1650-2019-10-etsi-launches-specification-group-on-securing-artificial-intelligence

Sunday, August 25, 2019

Huawei advances its AI with Ascend 910 processor and MindSpore

Huawei officially launched its Ascend 910 AI processor as well as its "MindSpore" AI framework.

The Ascend 910, which is designed for AI model training, delivers 256 TeraFLOPS for half-precision floating point (FP16), and 512 TeraOPS for integer precision calculations (INT8). Its max power consumption is only 310W.  All of these are new industry benchmarks, according to the company.

Huawei claims its MindSpore AI framework is adaptable to all devices, edge, and cloud environments. It helps ensure user privacy because it only deals with gradient and model information that has already been processed. It doesn't process the data itself, so private user data can be effectively protected even in cross-scenario environments.

"We have been making steady progress since we announced our AI strategy in October last year," said Eric Xu, Huawei's Rotating Chairman. "Everything is moving forward according to plan, from R&D to product launch. We promised a full-stack, all-scenario AI portfolio. And today we delivered, with the release of Ascend 910 and MindSpore. This also marks a new stage in Huawei's AI strategy."

Xu also outlined ten areas where Huawei wants to drive change for AI:

  1. Provide stronger computing power to increase the speed of complex model training from days and months to minutes – even seconds.
  2. Provide more affordable and abundant computing power. Right now, computing power is both costly and scarce, which limits AI development.
  3. Offer an all-scenario AI portfolio, meeting the different needs of businesses while ensuring that user privacy is well protected. This portfolio will allow AI to be deployed in any scenario, not just public cloud.
  4. Invest in basic AI algorithms. Algorithms of the future should be data-efficient, meaning they can deliver the same results with less data. They should also be energy-efficient, producing the same results with less computing power and less energy.
  5. Use MindSpore and ModelArts to help automate AI development, reducing reliance on human effort.
  6. Continue to improve model algorithms to produce industrial-grade AI that performs well in the real world, not just in tests.
  7. Develop a real-time, closed-loop system for model updates, making sure that enterprise AI applications continue to operate in their most optimal state.
  8. Maximize the value of AI by driving synergy with other technologies like cloud, IoT, edge computing, blockchain, big data, and databases.
  9. With a one-stop development platform of the full-stack AI portfolio, help AI become a basic skill for all application developers and ICT workers. Today only highly-skilled experts can work with AI.
  10. Invest more in an open AI ecosystem and build the next generation of AI talent to meet the growing demand for people with AI capabilities.

At a press event in Shenzhen, Xu also told reporters that the company is working to replace design tools from Cadence and Synopsys, and that being placed on the U.S. entity list will not impact Huawei's AI ambitions.

Wednesday, August 21, 2019

TeraPixel tapes out its 5nm AI chip

TeraPixel Technologies announced that it recently taped out its 5nm "Extrixa Processor".

The deep-learning AI processor is the world's first chip to use TSMC's 5nm process, according to the company.

TeraPixel Technologies expects to tape out a second version with the same functions as the product version, adding SRAM and other functional blocks. The company is targeting commercial release in 2021.

Monday, August 19, 2019

Cerebras Wafer Scale Engine packs 1.2 trillion transistors

Cerebras, a start-up based in Los Altos, California, unveiled its Wafer Scale Engine - a record-setting AI processor boasting a die size of 46,225 square millimeters and containing more than 1.2 trillion transistors. The chip is 56X larger than the largest graphics processing unit and contains 3,000X more on-chip memory.

Key specs

  • 400,000 Sparse Linear Algebra Compute (SLAC) cores
  • 18 GB of on-chip SRAM, all accessible within a single clock cycle, providing 9 PB/s of memory bandwidth
  • 100 Pb/s interconnect bandwidth in a 2D mesh
  • Manufactured by TSMC on its 16nm process technology

The SLAC cores are flexible, programmable, and optimized for the sparse linear algebra that underpins neural network computation. The cores are linked by Swarm, a fine-grained, all-hardware, on-chip mesh-connected communication network.
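The value of sparsity-optimized cores can be illustrated with a toy example (a conceptual sketch, not Cerebras's architecture): ReLU networks produce many zero activations, and hardware that never multiplies by zero saves both cycles and energy.

```python
def sparse_dot(activations, weights):
    """Multiply-accumulate that skips zero activations.

    Toy illustration of sparsity exploitation; real sparse linear
    algebra hardware does this skipping in silicon, not software.
    """
    return sum(a * w for a, w in zip(activations, weights) if a != 0.0)

acts = [0.0, 1.5, 0.0, 2.0]   # post-ReLU activations, 50% zeros
wts  = [0.3, 0.2, -0.1, 0.5]
print(sparse_dot(acts, wts))  # only two multiplies performed
```

A dense engine would perform all four multiplications regardless; a sparsity-aware one performs only the two with nonzero inputs.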

“Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size—such as cross-reticle connectivity, yield, power delivery, and packaging,” said Andrew Feldman, founder and CEO of Cerebras Systems. “Every architectural decision was made to optimize performance for AI work. The result is that the Cerebras WSE delivers, depending on workload, hundreds or thousands of times the performance of existing solutions at a tiny fraction of the power draw and space.”

The Cerebras product unveiling occurred at this week's Hot Chips Conference at Stanford University.

http://www.cerebras.net

Tuesday, July 2, 2019

Intel names Baidu as an AI development partner

Intel confirmed that it is collaborating with Baidu on development of the new Intel Nervana Neural Network Processor for Training (NNP-T). The collaboration involves the hardware and software designs of the new custom accelerator with one purpose – training deep learning models at lightning speed.

Since 2016, Intel has been optimizing Baidu’s PaddlePaddle deep learning framework for Intel® Xeon Scalable processors. Now, the companies give data scientists more hardware choice by optimizing the NNP-T for PaddlePaddle.

"The next few years will see an explosion in the complexity of AI models and the need for massive deep learning compute at scale. Intel and Baidu are focusing their decade-long collaboration on building radical new hardware, co-designed with enabling software, that will evolve with this new reality – something we call ‘AI 2.0,"  stated Naveen Rao, Intel corporate vice president and general manager of the AI Products Group.

Thursday, June 13, 2019

Renesas develops Processing-In-Memory Technology for AI chips

Renesas Electronics has developed an AI accelerator that performs CNN (convolutional neural network) processing at high speed and low power, a step towards the next generation of Renesas embedded AI (e-AI), which will accelerate the increased intelligence of endpoint devices.

The company says its first test chip featuring this accelerator has achieved a power efficiency of 8.8 TOPS/W.

Renesas developed the following three technologies for the new AI accelerator: a ternary-valued (-1, 0, 1) SRAM-structure PIM technology that can perform large-scale CNN computations; an SRAM circuit with comparators that can read out memory data at low power; and a technology that prevents calculation errors caused by manufacturing process variations.
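To illustrate what ternary-valued weights look like (an illustrative sketch; the threshold and mapping are assumptions, not Renesas's actual training procedure), each weight is mapped to one of the three values the SRAM cells store:

```python
import numpy as np

def quantize_ternary(w, threshold=0.3):
    """Map weights to {-1, 0, +1}, the value set a ternary SRAM cell holds.

    Hypothetical example: small weights snap to 0, so the corresponding
    multiply-accumulates can be skipped entirely inside the memory array.
    """
    return np.where(w > threshold, 1, np.where(w < -threshold, -1, 0))

w = np.array([0.8, -0.1, 0.25, -0.6])
print(quantize_ternary(w))  # [ 1  0  0 -1]
```

Because the arithmetic reduces to additions, subtractions, and skips, it can be performed inside the memory array itself, which is the essence of processing-in-memory.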

https://www.renesas.com/us/en/about/press-center/news/2019/news20190613.html

Wednesday, May 1, 2019

Liqid delivers software-defined fabric for AI workloads on Dell

Liqid, which has developed a software-defined composable infrastructure platform, announced an OEM relationship with Dell Technologies OEM & IoT Solutions.

Liqid's software-defined fabric can be coupled with the Dell EMC PowerEdge portfolio to deliver low-latency resource allocation to pools of disaggregated GPUs, FPGAs, CPUs, NVMe storage and Intel Optane memory extension technologies. This enables users to orchestrate balanced systems for each AI phase of data ingest, cleaning/tagging, training, and inference, while minimizing the data center footprint.

Liqid is based in Broomfield, Colorado.

“AI workloads represent a highly uneven series of compute processes in which data is moved from one system to the next depending on the task, with resources sitting idle much of the time. The cost associated with these architectural inefficiencies can make AI, edge computing, and other economy-driving technologies unsustainable for many users,” said Sumit Puri, CEO and Cofounder, Liqid. “We are proud to work with Dell Technologies OEM & IoT Solutions to provide solutions based on our respective, award-winning technologies, delivering composable infrastructure that permit users to utilize a single, comprehensive, adaptive platform to increase utilization by at least 2-3X and reduce the data center footprint for high-value applications.”

“Many organizations are looking for ways to integrate AI and machine learning into their IT infrastructure while avoiding the hardware sprawl and inefficiencies that often come with it,” said Ron Pugh, Vice President, Dell Technologies OEM & IoT Solutions. “Liqid now can provide its composable solutions for AI, based on trusted Dell EMC PowerEdge infrastructure and support, for IT users seeking to improve utilization and efficiency for data-intensive applications.”

http://www.liqid.com


See also