
Wednesday, October 10, 2018

Micron to invest $100 million in AI start-ups

Micron announced plans to invest up to $100 million in startups focused on artificial intelligence (AI), with twenty percent aimed at startups led by women and other underrepresented groups. The company has been investing in tech start-ups in its sector since 2006 via its Micron Ventures arm.

Micron will also offer a $1 million grant for universities and non-profit organizations to conduct research on AI.

"We are pleased to bring together the industry's brightest thinkers, researchers, innovators and technologists to discuss AI, machine learning and deep learning," said Micron President and CEO Sanjay Mehrotra. "These trends are at the heart of the biggest opportunities in front of us, and increasingly require memory and storage technologies to turn vast amounts of data into insights that accelerate intelligence."

The announcements were made at the inaugural Micron Insight 2018 event in San Francisco, which included leaders from Amazon, BMW, Google, Qualcomm, Microsoft, NVIDIA, and Visteon, along with author, cosmic explorer and MIT professor of physics, Max Tegmark.

http://bit.ly/MicronFoundation

Monday, October 1, 2018

IEEE launches Ethics Certification Program for AI

IEEE and the IEEE Standards Association (IEEE-SA) are launching the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS).

The idea is to offer a process and define a series of marks by which organizations can seek certification for their A/IS products, systems and services.

“It becomes more and more evident that consumers and citizens of the world are expecting technology to conform with ethical principles beyond functionality,” said Konstantinos Karachalios, managing director of IEEE-SA. “IEEE is one of the first global organizations to recognize the importance of certified accountability, transparency and reduction of algorithmic bias as being a critical enabler for A/IS value realization. This is also why the formation of ECPAIS complements the series of our IEEE P7000™ standard projects, along with all our A/IS Ethics work.”

“Today’s technology ecosystem calls for solutions that secure fair and transparent A/IS development, and Finland is at the forefront of key global efforts to move ethical A/IS from principles to pragmatism through close public-private partnership,” noted Meeri Haataja, chair of Ethics Working Group in Finland’s AI Program, and chair of ECPAIS. “Moving forward, A/IS need certifiable processes supported by a trusted organization that establishes easily identifiable marks, in order to signal high levels of reliability and safety to the general public. As chair of this groundbreaking IEEE program, I am honored to more broadly share and further incentivize Finland’s, and Europe’s, forward-thinking push to secure certifiably ethical A/IS.”

Tuesday, September 25, 2018

Lattice Semiconductor pushes ahead with AI software stack for edge

Lattice Semiconductor is boosting the capabilities of its "sensAI" stack designed for machine learning inferencing in consumer and industrial IoT applications.

Specifically, Lattice is releasing new IP cores, reference designs, demos and hardware development kits that provide scalable performance and power for always-on, on-device artificial intelligence (AI) applications. The release includes an updated neural network compiler tool with improved ease-of-use and both Caffe and TensorFlow support for iCE40 UltraPlus FPGAs.

“Flexible, low-power, always-on, on-device AI is increasingly a requirement in edge devices that are battery operated or have thermal constraints. The new features of the sensAI stack are optimized to address this challenge, delivering improved accuracy, scalable performance, and ease-of-use, while still consuming only a few milliwatts of power,” said Deepak Boppana, Senior Director, Product and Segment Marketing, Lattice Semiconductor. “With these enhancements, sensAI solutions can now support a variety of low-power, flexible system architectures for always-on, on-device AI.”

Examples of the architectural choices that sensAI solutions enable include:

• Stand-alone iCE40 UltraPlus / ECP5 FPGA based always-on, integrated solutions, with latency, security, and form factor benefits.
• Solutions utilizing iCE40 UltraPlus as an always-on processor that detects key-phrases or objects, and wakes-up a high performance AP SoC / ASIC for further analytics only when required, reducing overall system power consumption.
• Solutions utilizing the scalable performance/power benefits of ECP5 for neural network acceleration, along with IO flexibility to seamlessly interface to on-board legacy devices including sensors and low-end MCUs for system control.

Wednesday, September 19, 2018

Cadence intros deep neural-network accelerator AI processor IP

Cadence Design Systems introduced its deep neural-network accelerator (DNA) AI processor intellectual property for developers of artificial intelligence semiconductors for use in applications spanning autonomous vehicles (AVs), ADAS, surveillance, robotics, drones, augmented reality (AR)/virtual reality (VR), smartphones, smart home and IoT.

The Cadence Tensilica DNA 100 Processor IP targets high performance and power efficiency across a full range of compute from 0.5 TeraMAC (TMAC) to 100s of TMACs. The company said processors based on this IP could deliver up to 4.7X better performance and up to 2.3X more performance per watt compared to other solutions with similar multiplier-accumulator (MAC) array sizes. Compatibility with the latest version of the Tensilica Neural Network Compiler enables support for advanced AI frameworks including Caffe, TensorFlow, TensorFlow Lite, and a broad spectrum of neural networks including convolution and recurrent networks. This makes the DNA 100 processor well suited for on-device inferencing across vision, speech, radar and lidar applications.

“The applications for AI processors are growing rapidly, but running the latest neural network models can strain available power budgets,” said Mike Demler, senior analyst at the Linley Group. “Meeting the demands for AI capabilities in devices ranging from small, battery-operated IoT sensors to self-driving cars will require more efficient architectures. The innovative sparse compute engine in Cadence’s new Tensilica DNA 100 processor addresses these limitations and packs a lot of performance for any power budget.”


Tuesday, September 18, 2018

Gyrfalcon ships its AI ASIC for edge applications

Gyrfalcon Technology Inc. (GTI), a start-up based in Milpitas, California, emerged from stealth to unveil its low-power AI processing chip for edge equipment.

GTI said it has taken an "edge-first" approach to its IoT technology. Its "Lightspeeur" 2801S is a 7mm x 7mm 28nm ASIC that uses just 300mW of power to deliver 9.3 TOPS/W for processing audio and video input. Between two and 32 chips can be combined on one board for heavy compute loads or separate task handling.
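The published figures imply a per-chip throughput that is easy to sanity-check: at 9.3 TOPS/W and 300 mW, one Lightspeeur 2801S works out to roughly 2.8 TOPS. A back-of-the-envelope sketch, using only the numbers quoted above and assuming naive linear scaling across chips on a board (ignoring any interconnect overhead):

```python
# Back-of-the-envelope throughput for the Lightspeeur 2801S,
# using the efficiency and power figures quoted in the announcement.
efficiency_tops_per_watt = 9.3   # 9.3 TOPS/W
power_watts = 0.300              # 300 mW per chip

tops_per_chip = efficiency_tops_per_watt * power_watts
print(f"Per chip: {tops_per_chip:.2f} TOPS")  # 2.79 TOPS

# Boards carry between 2 and 32 chips; assume throughput and power
# scale linearly with chip count (interconnect overhead ignored).
for chips in (2, 32):
    print(f"{chips:2d} chips: {tops_per_chip * chips:.1f} TOPS "
          f"at {power_watts * chips * 1000:.0f} mW")
```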

The company said it is now shipping the device to 10 customers, including LG, Fujitsu, and Samsung. GTI is also shipping a USB 3.0 dongle with an embedded Lightspeeur 2801S to customers developing edge AI solutions, accelerating their development of next-generation AI-enabled equipment and devices.

"Balancing the cost-performance-energy equation has been a challenge for developers looking to bring AI-enabled equipment to market at scale," said Dr. Lin Yang, chief scientist, GTI. "The GTI founding team has been watching the industry struggle with this challenge for decades, and believe that our AI Processing in Memory and Matrix Processing Engine provide an elegant solution to avoid having to make trade-offs. By deploying APiM and MPE on a standard, commoditized ASIC, GTI is enabling our customers to bring innovative, AI-enabled devices to the masses."

"We are paving the way for the next wave of AI products to make it to market," said Kimble Dong, CEO of GTI. "We recognized that device makers were compromising on essential design variables in AI-enabled equipment and have sought to solve this over the past few decades. Our offering marries our "edge-first" approach with ultra-fast AI data processing technology, low power consumption and a small chip design to enable the best AI experience and performance at a low cost, within any AI use case, physical fit and deployment."

http://www.gyrfalcontech.com


Tuesday, September 4, 2018

Intel announces AI collaboration with Baidu Cloud

Baidu and Intel outlined new artificial intelligence (AI) collaborations showcasing applications ranging from financial services and shipping to video content detection.

Specifically, Baidu Cloud is leveraging Intel Xeon Scalable processors and the Intel Math Kernel Library-Deep Neural Network (Intel® MKL-DNN) as part of a new financial services solution for leading China banks; the Intel OpenVINO toolkit in new AI edge distribution and video solutions; and Intel Optane™ technology and Intel QLC NAND SSD technology for enhanced object storage.

“Intel is collaborating with Baidu Cloud to deliver end-to-end AI solutions. Adopting a new chip or optimizing a single framework is not enough to meet the demands of new AI workloads. What’s required is systems-level integration with software optimization, and Intel is enabling this through our expertise and extensive portfolio of AI technologies – all in the name of helping our customers achieve their AI goals,” stated Raejeanne Skillern, Intel vice president, Data Center Group, and general manager, Cloud Service Provider Platform Group.

Wednesday, July 4, 2018

Baidu develops its own AI chip, rolls out first autonomous bus

At its second annual developer conference in Beijing this week, Baidu unveiled its "Kunlun" processor for AI applications.


Technical details on the new Kunlun silicon were scarce, but the company said its cloud-to-edge AI chip is built to accommodate high-performance requirements of a wide variety of AI scenarios, including deep learning and facial recognition.

Baidu has been known to be developing FPGA-based designs for a number of years.

Baidu also announced volume production of China’s first commercially deployed fully autonomous bus. The first 100 "Apolong" buses are ready for the road.


Tuesday, May 29, 2018

Supermicro unveils 2 PetaFLOP SuperServer based on New NVIDIA HGX-2

Super Micro Computer is using the new NVIDIA HGX-2 cloud server platform to develop a 2 PetaFLOP "SuperServer" aimed at artificial intelligence (AI) and high-performance computing (HPC) applications.

"To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform that will deliver more than double the performance," said Charles Liang, president and CEO of Supermicro. "The HGX-2 system will enable efficient training of complex models. It combines 16 Tesla V100 32GB SXM3 GPUs connected via NVLink and NVSwitch to work as a unified 2 PetaFlop accelerator with half a terabyte of aggregate memory to deliver unmatched compute power."

The design packs over 80,000 CUDA cores.
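Both headline figures follow directly from NVIDIA's published per-GPU Tesla V100 specs: each V100 has 5,120 CUDA cores and, in the SXM3 configuration used here, 32 GB of HBM2. A quick check of the aggregate claims (per-GPU numbers taken from the V100 spec, not from this announcement):

```python
# Aggregate figures for the 16-GPU HGX-2-based SuperServer,
# derived from per-GPU Tesla V100 specifications.
gpus = 16
cuda_cores_per_gpu = 5120   # CUDA cores per Tesla V100
memory_gb_per_gpu = 32      # HBM2 per V100 32GB SXM3

total_cores = gpus * cuda_cores_per_gpu
total_memory_gb = gpus * memory_gb_per_gpu

print(f"CUDA cores: {total_cores}")              # 81920 -> "over 80,000"
print(f"Aggregate memory: {total_memory_gb} GB") # 512 GB, i.e. half a terabyte
```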

Monday, April 30, 2018

Intel intros two AI software applications

Intel introduced two artificial intelligence (AI)-powered software applications that use associative memory learning and reasoning to facilitate faster issue resolution. Target applications include issue resolution for manufacturing, software and aerospace.

The Intel Saffron AI Quality and Maintenance Decision Support Suite comprises:

• Similarity Advisor – finds the closest match to the issue under review, across both resolved and open cases, identifying paths to resolution from previous cases and surfacing duplicates to reduce backlogs.
• Classification Advisor – automatically classifies work issues into pre-set categories, whether regulator-mandated or self-defined, speeding up reporting, increasing its accuracy, and improving operations planning.

“Testing is transforming into quality engineering where applied intelligence is at the core of driving productivity and agility,” said Kishore Durg, senior managing director, Growth and Strategy and Global Testing Services Lead for Accenture. “The Accenture Touchless Testing Platform is augmented with artificial intelligence technology from Intel Saffron AI that brings in analytics and visualization capabilities. These support rapid decision-making and help reduce over-engineering efforts that can save anywhere from 30 to 50 percent of time and effort.”

Intel Nervana Aims for AI

Intel introduced its "Nervana" platform and outlined its broad strategy for artificial intelligence (AI), encompassing a range of new products, technologies and investments from the edge to the data center.

Intel currently powers 97 percent of data center servers running AI workloads on its existing Intel Xeon processors and Intel Xeon Phi processors, along with more workload-optimized accelerators, including FPGAs (field-programmable gate arrays).

Intel said the breakthrough technology acquired from Nervana earlier this summer will be integrated into its product roadmap. Intel will test first silicon (code-named “Lake Crest”) in the first half of 2017 and will make it available to key customers later in the year. In addition, Intel announced a new product (code-named “Knights Crest”) on the roadmap that tightly integrates best-in-class Intel Xeon processors with the technology from Nervana. Lake Crest is optimized specifically for neural networks to deliver the highest performance for deep learning and offers unprecedented compute density with a high-bandwidth interconnect.

Monday, March 26, 2018

The Acumos AI Project moves to the Linux Foundation

The Linux Foundation launched the Acumos AI Project, a federated platform for managing artificial intelligence (AI) and machine learning (ML) applications and sharing AI models.

AT&T and Tech Mahindra contributed the initial Acumos code.

"An open and federated AI platform like the Acumos platform allows developers and companies to take advantage of the latest AI technologies and to more easily share proven models and expertise," said Jim Zemlin, executive director at The Linux Foundation. "Acumos will benefit developers and data scientists across numerous industries and fields, from network and video analytics to content curation, threat prediction, and more."

Acumos, which is now freely available for download, provides users with a visual workflow to design AI and ML applications, as well as a marketplace for freely sharing AI solutions and data models. The Acumos framework is user-centric and simple to explore. The Acumos Marketplace packages various components as microservices and allows users to export ready-to-launch AI applications as containers to run in public clouds or private environments.

In addition, The Linux Foundation has formed an umbrella organization called the LF Deep Learning Foundation. Its mission is "to support and sustain open source innovation in artificial intelligence, machine learning, and deep learning while striving to make these critical new technologies available to developers and data scientists everywhere."

Founding members of LF Deep Learning include Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa, and ZTE. With LF Deep Learning, members are working to create a neutral space where makers and sustainers of tools and infrastructure can interact, harmonize their efforts, and accelerate the broad adoption of deep learning technologies.

https://www.acumos.org

Wednesday, December 13, 2017

Google opens AI Research Center in Beijing

Google is opening an AI China Center to focus on basic research. Ms. Fei-Fei Li, who is Chief Scientist for AI/ML at Google, notes that many of the world's top experts in AI are Chinese.

Google also has AI research groups located in New York, Toronto, London and Zurich.

Sunday, November 26, 2017

Amazon ML Solutions Lab Opens

Amazon Web Services is launching a program that connects machine learning experts from across Amazon with AWS customers "to help identify practical uses of machine learning inside customers’ businesses, and guide them in developing new machine learning-enabled features, products, and processes."

The Amazon ML Solutions Lab combines hands-on educational workshops with brainstorming sessions. The idea is to help customers “work backwards from business challenges, and then go step-by-step through the process of developing machine learning-based solutions.”

Machine learning workshops cover topics such as how to prepare data, build and train models, and put models into production.

https://aws.amazon.com/ml-solutions-lab


NVIDIA extends AI healthcare partnership with GE

NVIDIA is working with GE to bring the most sophisticated artificial intelligence (AI) to GE Healthcare’s 500,000 imaging devices globally.

The partnership, which was detailed at the 103rd annual meeting of the Radiological Society of North America (RSNA), includes a new NVIDIA-powered Revolution Frontier CT, advancements to the Vivid E95 4D Ultrasound and development of GE Healthcare’s Applied Intelligence analytics platform.

NVIDIA said its AI computing platform accelerates image processing in the new CT system by 2X.

NVIDIA notes that the average hospital generates 50 petabytes of data annually, through medical images, clinical charts and sensors, as well as operational and financial sources, providing many opportunities to accelerate data processing flows.

Tuesday, November 14, 2017

Didi Chuxing opens Silicon Valley Lab

Didi Chuxing, the leading ride sharing service in China, opened a U.S. research facility in Mountain View, California.  The offices encompass 36,000 square feet and offer capacity for more than 200 employees.

DiDi Labs, which was officially launched in March 2017, focuses on AI-based security and intelligent driving technologies.

Bob Zhang, CTO of Didi Chuxing, said at the campus opening, “It’s been an exciting year for DiDi Labs. Our talented team is growing fast and making important contributions across our key tech areas, from smart-city transportation management, AI ride-matching, to security and new product innovation. ”

Didi Chuxing now has over 450 million users and is handling over 25 million daily rides.

DiDi acquired Uber China in August 2016.

Thursday, October 12, 2017

AWS and Microsoft collaborate on deep learning library

Amazon Web Services (AWS) and Microsoft announced a new deep learning library, called Gluon, for prototyping, building, training and deploying sophisticated machine learning models for the cloud, devices at the edge and mobile apps.

"The potential of machine learning can only be realized if it is accessible to all developers. Today’s reality is that building and training machine learning models requires a great deal of heavy lifting and specialized expertise,” said Swami Sivasubramanian, VP of Amazon AI. “We created the Gluon interface so building neural networks and training models can be as easy as building an app. We look forward to our collaboration with Microsoft on continuing to evolve the Gluon interface for developers interested in making machine learning easier to use.”

“We believe it is important for the industry to work together and pool resources to build technology that benefits the broader community,” said Eric Boyd, Corporate Vice President of Microsoft AI and Research. “This is why Microsoft has collaborated with AWS to create the Gluon interface and enable an open AI ecosystem where developers have freedom of choice. Machine learning has the ability to transform the way we work, interact and communicate. To make this happen we need to put the right tools in the right hands, and the Gluon interface is a step in this direction.”

Key facts on the Gluon interface:

  • Provides an easy-to-understand programming interface that enables developers to quickly prototype and experiment with neural network models
  • Can be used to create neural networks on the fly, and to change their size and shape dynamically
  • Currently works with Apache MXNet and will support Microsoft Cognitive Toolkit (CNTK) in an upcoming release
  • Can be used to build machine learning models using a simple Python API and a range of pre-built, optimized neural network components


https://github.com/gluon-api/gluon-api/

Tuesday, October 10, 2017

Intel joins Open Neural Network Exchange

Intel has joined the Open Neural Network Exchange (ONNX), which was first announced last month by Microsoft and Facebook to give users more choice in AI frameworks.

Currently, the ONNX format is supported by Microsoft Cognitive Toolkit, Caffe2 and PyTorch. Microsoft’s FPGA-based Project Brainwave will also support ONNX.

Intel said it is participating in the project to provide greater flexibility to the developer community by giving access to the most suitable tools for each unique AI project and the ability to easily switch between frameworks and tools.

Monday, September 18, 2017

Intel Capital's AI Investments top $1 billion

Intel Capital has now invested over $1 billion in companies devoted to the advancement of artificial intelligence.

In a blog post, Intel's CEO Brian Krzanich said the company is fully committed to making its silicon the "platform of choice" for AI developers. Key areas of AI development inside Intel include:

  • Intel Xeon Scalable family of processors for evolving AI workloads. Intel also offers purpose-built silicon for deep learning training, code-named “Lake Crest”
  • Intel Mobileye vision technologies for specialized use cases such as active safety and autonomous driving
  • Intel FPGAs, which can serve as programmable accelerators for deep learning inference
  • Intel Movidius low-power vision technology, which provides machine learning at the edge.




Saturday, September 9, 2017

IBM and MIT to open Artificial Intelligence lab

IBM announced a 10-year, $240 million investment to create the MIT–IBM Watson AI Lab in partnership with MIT.

The MIT–IBM Watson AI Lab aims to advance AI hardware, software and algorithms related to deep learning and other areas, increase AI’s impact on industries, such as health care and cybersecurity, and explore the economic and ethical implications of AI on society.

The lab will be co-chaired by IBM Research VP of AI and IBM Q, Dario Gil, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering.

"The field of artificial intelligence has experienced incredible growth and progress over the past decade. Yet today’s AI systems, as remarkable as they are, will require new innovations to tackle increasingly difficult real-world problems to improve our work and lives,” said Dr. John Kelly III, IBM senior vice president, Cognitive Solutions and Research. “The extremely broad and deep technical capabilities and talent at MIT and IBM are unmatched, and will lead the field of AI for at least the next decade."

“I am delighted by this new collaboration,” says MIT President L. Rafael Reif. “True breakthroughs are often the result of fresh thinking inspired by new kinds of research teams. The combined MIT and IBM talent dedicated to this new effort will bring formidable power to a field with staggering potential to advance knowledge and help solve important challenges.”

http://www-03.ibm.com/press/us/en/pressrelease/53091.wss

Thursday, September 7, 2017

John Deere acquires Blue River Technology for AI

Deere & Company has acquired Blue River Technology, a start-up based in Sunnyvale, California, that is applying machine learning to agriculture. The deal was valued at $305 million.

Blue River is developing computer vision and machine learning technology that will enable growers to reduce the use of herbicides by spraying only where weeds are present, optimizing the use of inputs in farming – a key objective of precision agriculture.

"We welcome the opportunity to work with a Blue River Technology team that is highly skilled and intensely dedicated to rapidly advancing the implementation of machine learning in agriculture," said John May, President, Agricultural Solutions, and Chief Information Officer at Deere. "As a leader in precision agriculture, John Deere recognizes the importance of technology to our customers. Machine learning is an important capability for Deere's future."

"Blue River is advancing precision agriculture by moving farm management decisions from the field level to the plant level," said Jorge Heraud, co-founder and CEO of Blue River Technology. "We are using computer vision, robotics, and machine learning to help smart machines detect, identify, and make management decisions about every single plant in the field."

http://www.JohnDeere.com
http://www.BlueRiverTechnology.com

  • Investors in Blue River included Khosla Ventures, Pontifax AgTech, Innovation Endeavors, and Data Collective Venture Capital.


Thursday, May 11, 2017

Cisco to Acquire MindMeld for AI Expertise

Cisco agreed to acquire MindMeld, a start-up based in San Francisco that is developing a conversational platform based on natural language understanding (NLU). The deal was valued at $125 million in cash and assumed equity awards. The acquisition is expected to close in Cisco's fourth quarter of fiscal year 2017.

The MindMeld platform can be used for building intelligent conversational interfaces for companies to interact with their customers across almost any device or application. MindMeld is able to ingest customer data and create a highly accurate and customized natural language model, tailored to each company’s industry and requirements. MindMeld also delivers a dialog manager that enables a computer to respond to user requests through chat and voice applications in a human-like fashion.

MindMeld was founded in 2011 by Tim Tuttle, a former AI researcher from MIT and Bell Labs.

http://www.mindmeld.com
