
Thursday, November 29, 2018

HPE to acquire BlueData for containerized ML software

Hewlett Packard Enterprise agreed to acquire BlueData, a software developer focused on artificial intelligence and big data analytics. Financial terms were not disclosed.

BlueData, which is based in Santa Clara, California, uses container technology to simplify the deployment of large-scale machine learning and big data analytics applications.

“BlueData has developed an innovative and effective solution to address the pain points all companies face when contemplating, implementing, and deploying AI/ML and big data analytics. Adding BlueData’s complementary software platform to HPE’s market-leading Apollo Systems and professional services is consistent with HPE’s data-first strategy and enables our customers to extract insights from data – whether on-premises, in the cloud, or in a hybrid architecture,” said Milan Shetti, SVP and GM, Storage and Big Data Global Business Unit at HPE. “We are excited about the significant value we can deliver for our customers by working with the talented team at BlueData.”


Wednesday, November 28, 2018

Qualcomm Launches $100M AI Investment Fund

Qualcomm announced a $100 million venture fund focused on AI start-ups.

Specifically, the Qualcomm Ventures AI Fund will focus on startups that share the vision of on-device AI becoming more powerful and widespread, with an emphasis on those developing new technology for autonomous cars, robotics and machine learning platforms.

“At Qualcomm, we invent breakthrough technologies that transform how the world connects, computes, and communicates,” said Steve Mollenkopf, CEO of Qualcomm Incorporated. “For over a decade, Qualcomm has been investing in the future of machine learning. As a pioneer of on-device AI, we strongly believe intelligence is moving from the cloud to the edge. Qualcomm’s AI strategy couples leading 5G connectivity with our R&D, fueling AI to transform industries, business models and experiences.”

As part of the AI Fund, Qualcomm Ventures LLC participated in a Series A funding round for AnyVision, a world-leading face, body, and object recognition startup.


Thursday, November 15, 2018

Habana Labs raises $75M for AI processors, including Intel investment

Habana Labs, a start-up based in Tel-Aviv, Israel, raised $75 million in an oversubscribed Series B funding round for its development of AI processors.

Habana Labs is currently in production with its first product, a deep learning inference processor named Goya, which the company says delivers more than two orders of magnitude better throughput and power efficiency than commonly deployed CPUs. Habana is now offering a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate various AI inferencing workloads, such as image recognition, neural machine translation, sentiment analysis and recommender systems. A PCIe card based on the Goya HL-1000 delivers 15,000 images/second throughput on the ResNet-50 inference benchmark, with 1.3 milliseconds latency, while consuming only 100 watts of power. The Goya solution consists of a complete hardware and software stack, including a high-performance graph compiler, hundreds of kernel libraries, and tools.
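
Throughput per watt follows directly from those quoted figures; a quick back-of-envelope check (not an additional benchmark):

    # Sanity check of the quoted Goya ResNet-50 figures.
    images_per_sec = 15000
    power_watts = 100
    latency_ms = 1.3

    print(images_per_sec / power_watts)  # 150 images/sec per watt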

Habana Labs expects to launch a training processor, codenamed Gaudi, in the second quarter of 2019.

The funding round was led by Intel Capital and joined by WRV Capital, Bessemer Venture Partners, Battery Ventures and others, including existing investors. This brings total funding to $120 million. The company was founded in 2016.

“We are fortunate to have attracted some of the world’s most professional investors, including the world’s leading semiconductor company, Intel,” said David Dahan, Chief Executive Officer of Habana Labs. “The funding will be used to execute on our product roadmap for inference and training solutions, including our next generation 7nm AI processors, to scale our sales and customer support teams, and it only increases our resolve to become the undisputed leader of the nascent AI processor market.”

“Among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor,” said Lip-Bu Tan, Founding Partner of WRV Capital, a leading international venture firm focusing on semiconductors and related hardware, systems, and software. “We are delighted to partner with Intel in backing Habana Labs’ products and its extraordinary team.”

https://habana.ai/

Wednesday, November 14, 2018

The Linux Foundation releases first Acumos AI software, Athena

The LF Deep Learning Foundation, a project of The Linux Foundation, announced the first software release of the Acumos AI Project - Athena.

Acumos AI standardizes the infrastructure stack and components required to run an out-of-the-box general AI environment.

Major highlights of the Athena release include:

  • One-click deployment of the platform utilizing Docker or Kubernetes;
  • The ability to deploy models into a public or private cloud infrastructure or in a Kubernetes environment on users’ own hardware including servers and virtual machines;
  • A design studio, which is a graphical interface for chaining together multiple models, data translation tools, filters and output adapters into a full end-to-end solution;
  • Use of a security token to allow simple onboarding of models from an external toolkit directly to an Acumos AI repository (see the sketch after this list);
  • Decoupling of microservices generation from the model onboarding process to easily repurpose models for different environments and hardware; and
  • An advanced user portal that lets users personalize the marketplace view by theme, view data on model authorship, and share models privately or publicly, along with user experience upgrades.
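
The token-based onboarding above can be scripted. Below is a minimal sketch using the Acumos Python client library (pip install acumos); the endpoint URLs are placeholders and the toy function stands in for a real trained model:

    # Hedged sketch: onboarding a Python model to an Acumos repository.
    # The endpoint URLs are placeholders for a real Acumos instance.
    from acumos.modeling import Model
    from acumos.session import AcumosSession

    def classify(x: float) -> int:
        # Toy stand-in for a trained model; Acumos derives the I/O
        # schema from the type annotations.
        return int(x > 0.5)

    model = Model(classify=classify)

    session = AcumosSession(
        push_api="https://acumos.example.org/onboarding-app/v2/models",
        auth_api="https://acumos.example.org/onboarding-app/v2/auth")
    session.push(model, 'toy-classifier')  # authenticates, then uploads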

The next Acumos AI release, which is expected in mid-2019, will add model training as well as data extraction pipelines to make models more flexible. Additionally, the next release will include updates to assist closed-source model developers, including secure and reliable licensing components to provide execution control and performance feedback across the community.

“The Acumos Athena release represents a significant step forward in making AI models more accessible for builders of AI applications and models along with users and trainers of those models and applications,” said Scott Nicholas, senior director of strategic planning at The Linux Foundation. “This furthers the goal of LF Deep Learning and the Acumos project of accelerating overall AI innovation.”

“Orange has been actively involved in Acumos since April 2018 through a Project Team Leader for the model onboarding module to manage and drive evolutions in the onboarding capabilities. This involvement shows the willingness of Orange to take part in and promote the AI ecosystem in the telecom domain. Orange also considered the coherency of integrating Acumos in the continuity of all the work performed in the network automation LFN/ONAP project. Acumos is seen as a common platform that can bridge existing AI technologies and new ones through its openness. It can also favor cross-business AI-based developments through its federative approach and thanks to its marketplace.” - François Jezequel, Head of ITsation, Procurement and Operators, Orange

https://wiki.acumos.org/display/REL/Athena+Release
https://www.deeplearningfoundation.org


Wednesday, October 31, 2018

Wave Computing outlines its AI-enabled MIPS strategy for network edge

Wave Computing outlined its strategy for an AI-enabled MIPS offering and ecosystem. The company announced a new AI-enabled licensing roadmap and a broader 3rd party ecosystem.

The company said it will pursue a multi-pronged strategy to enable new use cases for AI based on MIPS architecture.

“Wave Computing’s MIPS technology is a key component of the Data Processing Unit we are developing as a fundamental new building block of next-generation data centers. We expect Fungible’s solution to be pivotal in powering modern data-centric applications such as AI and analytics,” said Pradeep Sindhu, CEO of Fungible, Inc.

“We are well underway in executing on our strategy to bring AI to All. This means delivering AI computing systems for datacenter and customer premise applications, licensable solutions for next-generation SoCs and AI application software for end customers in multiple markets. Since acquiring MIPS in June, we have been blown away by the overwhelmingly positive responses by customers and partners. This underscores the tremendous market opportunity for a common, AI-based development environment and associated suite of solutions. We’ve never been more optimistic on the value MIPS offers, and look forward to extending the market share of MIPS-based designs by enabling native AI performance for edge SoCs,” said Derek Meyer, CEO of Wave Computing.

Under the plan, the company is expanding its IP team globally, including hardware, software, marketing and sales staff.  It continues to invest in its 64-bit, scalable multi-threaded MIPS technology roadmap for embedded applications.

Wave Computing will offer new solutions ranging from CPU cores for edge applications to more robust implementations for emerging AI training and inferencing applications. Wave is also addressing the future of functional safety in autonomous vehicles by building on its ISO26262 certification and introducing advanced lock-step functionality for its MIPS cores.

  • In August, Wave Computing announced a strategic partnership with Broadcom to bring its dataflow processing unit (DPU) to market at the leading-edge 7nm process node. Specifically, Wave will leverage Broadcom’s design platform, productization skills, and 7nm 56Gbps and 112Gbps SerDes. The device will be fabbed using TSMC's 7nm process.

  • In June, Wave Computing acquired MIPS Tech, Inc. (formerly MIPS Technologies), a global leader in RISC processor Intellectual Property (IP) and licensable CPU cores.


Wednesday, October 10, 2018

Micron to invest $100 million in AI start-ups

Micron announced plans to invest up to $100 million in startups focused on artificial intelligence (AI), with twenty percent aimed at startups led by women and other underrepresented groups. The company has been investing in tech start-ups in its sector since 2006 via its Micron Ventures arm.

Micron will also offer a $1 million grant for universities and non-profit organizations to conduct research on AI.

"We are pleased to bring together the industry's brightest thinkers, researchers, innovators and technologists to discuss AI, machine learning and deep learning," said Micron President and CEO Sanjay Mehrotra. "These trends are at the heart of the biggest opportunities in front of us, and increasingly require memory and storage technologies to turn vast amounts of data into insights that accelerate intelligence."

The announcements were made at the inaugural Micron Insight 2018 event in San Francisco, which included leaders from Amazon, BMW, Google, Qualcomm, Microsoft, NVIDIA, and Visteon, along with author, cosmic explorer and MIT professor of physics, Max Tegmark.

http://bit.ly/MicronFoundation

Monday, October 1, 2018

IEEE launches Ethics Certification Program for AI

IEEE and the IEEE Standards Association (IEEE-SA) are launching the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS).

The idea is to offer a process and define a series of marks by which organizations can seek certification for their A/IS products, systems and services.

“It becomes more and more evident that consumers and citizens of the world are expecting technology to conform with ethical principles beyond functionality,” said Konstantinos Karachalios, managing director of IEEE-SA. “IEEE is one of the first global organizations to recognize the importance of certified accountability, transparency and reduction of algorithmic bias as being a critical enabler for A/IS value realization. This is also why the formation of ECPAIS complements the series of our IEEE P7000™ standard projects, along with all our A/IS Ethics work.”

“Today’s technology ecosystem calls for solutions that secure fair and transparent A/IS development, and Finland is at the forefront of key global efforts to move ethical A/IS from principles to pragmatism through close public-private partnership,” noted Meeri Haataja, chair of Ethics Working Group in Finland’s AI Program, and chair of ECPAIS. “Moving forward, A/IS need certifiable processes supported by a trusted organization that establishes easily identifiable marks, in order to signal high levels of reliability and safety to the general public. As chair of this groundbreaking IEEE program, I am honored to more broadly share and further incentivize Finland’s, and Europe’s, forward-thinking push to secure certifiably ethical A/IS.”

Tuesday, September 25, 2018

Lattice Semiconductor pushes ahead with AI software stack for edge

Lattice Semiconductor is boosting the capabilities of its "sensAI" stack designed for machine learning inferencing in consumer and industrial IoT applications.

Specifically, Lattice is releasing new IP cores, reference designs, demos and hardware development kits that provide scalable performance and power for always-on, on-device artificial intelligence (AI) applications. The release includes an updated neural network compiler tool with improved ease-of-use and both Caffe and TensorFlow support for iCE40 UltraPlus FPGAs.

“Flexible, low-power, always-on, on-device AI is increasingly a requirement in edge devices that are battery operated or have thermal constraints. The new features of the sensAI stack are optimized to address this challenge, delivering improved accuracy, scalable performance, and ease-of-use, while still consuming only a few milliwatts of power,” said Deepak Boppana, Senior Director, Product and Segment Marketing, Lattice Semiconductor. “With these enhancements, sensAI solutions can now support a variety of low-power, flexible system architectures for always-on, on-device AI.”

Examples of the architectural choices that sensAI solutions enable include:

• Stand-alone iCE40 UltraPlus / ECP5 FPGA-based always-on, integrated solutions, with latency, security, and form factor benefits.
• Solutions utilizing iCE40 UltraPlus as an always-on processor that detects key phrases or objects, and wakes up a high-performance AP SoC / ASIC for further analytics only when required, reducing overall system power consumption (sketched below).
• Solutions utilizing the scalable performance/power benefits of ECP5 for neural network acceleration, along with I/O flexibility to seamlessly interface to on-board legacy devices including sensors and low-end MCUs for system control.
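
The second option above is essentially a two-stage pipeline. The sketch below illustrates the control flow in Python; the function names are hypothetical, not a Lattice API:

    # Hypothetical two-stage always-on pipeline: a tiny always-on model on the
    # iCE40 UltraPlus screens every frame, and the power-hungry AP SoC wakes
    # only on a detection. All function names are illustrative.
    def always_on_loop(frames, fpga_detect, wake_ap, run_full_analytics):
        for frame in frames:
            if fpga_detect(frame):               # milliwatt-class key-phrase/object check
                wake_ap()                        # power up the high-performance SoC
                yield run_full_analytics(frame)  # heavy inference only when needed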

Wednesday, September 19, 2018

Cadence intros deep neural-network accelerator AI processor IP

Cadence Design Systems introduced its deep neural-network accelerator (DNA) AI processor intellectual property for developers of artificial intelligence semiconductors for use in applications spanning autonomous vehicles (AVs), ADAS, surveillance, robotics, drones, augmented reality (AR)/virtual reality (VR), smartphones, smart home and IoT.

The Cadence Tensilica DNA 100 Processor IP targets high performance and power efficiency across a full range of compute from 0.5 TeraMAC (TMAC) to 100s of TMACs. The company said processors based on this IP could deliver up to 4.7X better performance and up to 2.3X more performance per watt compared to other solutions with similar multiplier-accumulator (MAC) array sizes. Compatibility with the latest version of the Tensilica Neural Network Compiler enables support for advanced AI frameworks including Caffe, TensorFlow, TensorFlow Lite, and a broad spectrum of neural networks including convolution and recurrent networks. This makes the DNA 100 processor an ideal candidate for on-device inferencing across vision, speech, radar and lidar applications.
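
For context on the TMAC figures, the peak throughput of a MAC array follows from its size and clock. The numbers below are illustrative only; actual DNA 100 configurations and clock rates vary:

    # Illustrative: peak ops/s of a MAC array, counting multiply + add as 2 ops.
    def peak_tops(num_macs: int, clock_ghz: float) -> float:
        return 2 * num_macs * clock_ghz / 1000  # tera-ops per second

    print(peak_tops(4096, 1.0))  # a 4,096-MAC engine at 1 GHz: ~8.2 TOPS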

“The applications for AI processors are growing rapidly, but running the latest neural network models can strain available power budgets,” said Mike Demler, senior analyst at the Linley Group. “Meeting the demands for AI capabilities in devices ranging from small, battery-operated IoT sensors to self-driving cars will require more efficient architectures. The innovative sparse compute engine in Cadence’s new Tensilica DNA 100 processor addresses these limitations and packs a lot of performance for any power budget.”


Tuesday, September 18, 2018

Gyrfalcon ships its AI ASIC for edge applications

Gyrfalcon Technology Inc. (GTI), a start-up based in Milpitas, California, emerged from stealth to unveil its low-power AI processing chip for edge equipment.

GTI said it has taken an "edge-first" approach to its IoT technology. Its "Lightspeeur" 2801S is a 7mm x 7mm 28nm ASIC that uses just 300mW of power to deliver 9.3 TOPS/W for processing audio and video input. Between two and 32 chips can be combined on one board for heavy compute loads or separate task handling.
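
The implied per-chip compute follows from those two figures, assuming the TOPS/W rating holds at the quoted power draw and that chips scale linearly when combined:

    # Implied compute from the quoted Gyrfalcon figures (illustrative arithmetic).
    power_w = 0.3                            # 300 mW
    tops_per_watt = 9.3
    tops_per_chip = power_w * tops_per_watt  # ~2.8 TOPS per chip
    max_board_tops = 32 * tops_per_chip      # ~89 TOPS if 32 chips scale linearly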

The company said it is now shipping the device to 10 customers, including LG, Fujitsu, and Samsung. GTI is also shipping a USB 3.0 dongle with an embedded Lightspeeur 2801S to customers developing edge AI solutions, accelerating their development of next-generation AI-enabled equipment and devices.

"Balancing the cost-performance-energy equation has been a challenge for developers looking to bring AI-enabled equipment to market at scale," said Dr. Lin Yang, chief scientist, GTI. "The GTI founding team has been watching the industry struggle with this challenge for decades, and believe that our AI Processing in Memory and Matrix Processing Engine provide an elegant solution to avoid having to make trade-offs. By deploying APiM and MPE on a standard, commoditized ASIC, GTI is enabling our customers to bring innovative, AI-enabled devices to the masses."

"We are paving the way for the next wave of AI products to make it to market," said Kimble Dong, CEO of GTI. "We recognized that device makers were compromising on essential design variables in AI-enabled equipment and have sought to solve this over the past few decades. Our offering marries our "edge-first" approach with ultra-fast AI data processing technology, low power consumption and a small chip design to enable the best AI experience and performance at a low cost, within any AI use case, physical fit and deployment."

http://www.gyrfalcontech.com


Tuesday, September 4, 2018

Intel announces AI collaboration with Baidu Cloud

Baidu and Intel outlined new artificial intelligence (AI) collaborations showcasing applications ranging from financial services and shipping to video content detection.

Specifically, Baidu Cloud is leveraging Intel Xeon Scalable processors and the Intel Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) as part of a new financial services solution for leading banks in China; the Intel OpenVINO toolkit in new AI edge distribution and video solutions; and Intel Optane™ technology and Intel QLC NAND SSD technology for enhanced object storage.
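
To give a sense of the OpenVINO piece, the snippet below sketches loading and running a model with OpenVINO's Python API. Note this uses the current API, which postdates the 2018 release described here, and the model path is a placeholder:

    # Sketch of OpenVINO inference on a CPU target (current Python API).
    import numpy as np
    import openvino as ov

    core = ov.Core()
    model = core.read_model("model.xml")         # placeholder IR model path
    compiled = core.compile_model(model, "CPU")  # e.g., a Xeon target

    request = compiled.create_infer_request()
    request.infer({0: np.random.rand(1, 3, 224, 224).astype(np.float32)})
    output = request.get_output_tensor(0).data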

“Intel is collaborating with Baidu Cloud to deliver end-to-end AI solutions. Adopting a new chip or optimizing a single framework is not enough to meet the demands of new AI workloads. What’s required is systems-level integration with software optimization, and Intel is enabling this through our expertise and extensive portfolio of AI technologies – all in the name of helping our customers achieve their AI goals,” stated Raejeanne Skillern, Intel vice president, Data Center Group, and general manager, Cloud Service Provider Platform Group.

Wednesday, July 4, 2018

Baidu develops its own AI chip, rolls out first autonomous bus

At its second annual developer conference in Beijing this week, Baidu unveiled its "Kunlun" processor for AI applications.


Technical details on the new Kunlun silicon were scarce, but the company said its cloud-to-edge AI chip is built to accommodate high-performance requirements of a wide variety of AI scenarios, including deep learning and facial recognition.

Baidu has been developing FPGA-based designs for a number of years.

Baidu also announced volume production of China’s first commercially deployed fully autonomous bus. The first 100 "Apolong" buses are ready for the road.


Tuesday, May 29, 2018

Supermicro unveils 2 PetaFLOP SuperServer based on New NVIDIA HGX-2

Super Micro Computer is using the new NVIDIA HGX-2 cloud server platform to develop a 2 PetaFLOP "SuperServer" aimed at artificial intelligence (AI) and high-performance computing (HPC) applications.

"To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform that will deliver more than double the performance," said Charles Liang, president and CEO of Supermicro. "The HGX-2 system will enable efficient training of complex models. It combines 16 Tesla V100 32GB SXM3 GPUs connected via NVLink and NVSwitch to work as a unified 2 PetaFlop accelerator with half a terabyte of aggregate memory to deliver unmatched compute power."

The design packs over 80,000 CUDA cores.
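
That figure follows directly from the GPU configuration quoted above:

    # 16 Tesla V100 GPUs x 5,120 CUDA cores each
    print(16 * 5120)  # 81,920, i.e. "over 80,000 CUDA cores"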

Monday, April 30, 2018

Intel intros two AI software applications

Intel introduced two artificial intelligence (AI)-powered software applications that use associative memory learning and reasoning to facilitate faster issue resolution. Target applications include issue resolution for manufacturing, software and aerospace.

The Intel Saffron AI Quality and Maintenance Decision Support Suite comprises:

  • Similarity Advisor, which finds the closest match to the issue under review, across both resolved and open cases, identifying paths to resolution from previous cases and surfacing duplicates to reduce backlogs.

  • Classification Advisor, which automatically classifies work issues into pre-set categories, whether regulator-mandated or self-defined, speeding up reporting, improving its accuracy and aiding operations planning.

“Testing is transforming into quality engineering where applied intelligence is at the core of driving productivity and agility,” said Kishore Durg, senior managing director, Growth and Strategy and Global Testing Services Lead for Accenture. “The Accenture Touchless Testing Platform is augmented with artificial intelligence technology from Intel Saffron AI that brings in analytics and visualization capabilities. These support rapid decision-making and help reduce over-engineering efforts that can save anywhere from 30 to 50 percent of time and effort.”

Intel Nervana Aims for AI

Intel introduced its "Nervana" platform and outlined its broad strategy for artificial intelligence (AI), encompassing a range of new products, technologies and investments from the edge to the data center.

Intel currently powers 97 percent of data center servers running AI workloads on its existing Intel Xeon processors and Intel Xeon Phi processors, along with more workload-optimized accelerators, including FPGAs (field-programmable gate arrays).

Intel said the breakthrough technology acquired from Nervana earlier this summer will be integrated into its product roadmap. Intel will test first silicon (code-named “Lake Crest”) in the first half of 2017 and will make it available to key customers later in the year. In addition, Intel announced a new product (code-named “Knights Crest”) on the roadmap that tightly integrates best-in-class Intel Xeon processors with the technology from Nervana. Lake Crest is optimized specifically for neural networks to deliver the highest performance for deep learning and offers unprecedented compute density with a high-bandwidth interconnect.

Monday, March 26, 2018

The Acumos AI Project moves to the Linux Foundation

The Linux Foundation launched the Acumos AI Project, a federated platform for managing artificial intelligence (AI) and machine learning (ML) applications and sharing AI models.

AT&T and Tech Mahindra contributed the initial Acumos code.

"An open and federated AI platform like the Acumos platform allows developers and companies to take advantage of the latest AI technologies and to more easily share proven models and expertise," said Jim Zemlin, executive director at The Linux Foundation. "Acumos will benefit developers and data scientists across numerous industries and fields, from network and video analytics to content curation, threat prediction, and more."

Acumos, which is now freely available for download, provides users with a visual workflow to design AI and ML applications, as well as a marketplace for freely sharing AI solutions and data models. The Acumos framework is user-centric and simple to explore. The Acumos Marketplace packages various components as microservices and allows users to export ready-to-launch AI applications as containers to run in public clouds or private environments.

In addition, The Linux Foundation has formed an umbrella organization called the LF Deep Learning Foundation. Its mission is "to support and sustain open source innovation in artificial intelligence, machine learning, and deep learning while striving to make these critical new technologies available to developers and data scientists everywhere."

Founding members of LF Deep Learning include Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa, and ZTE. With LF Deep Learning, members are working to create a neutral space where makers and sustainers of tools and infrastructure can interact and harmonize their efforts and accelerate the broad adoption of deep learning technologies.

https://www.acumos.org

Wednesday, December 13, 2017

Google opens AI Research Center in Beijing

Google is opening an AI China Center to focus on basic research. Fei-Fei Li, Chief Scientist of AI/ML at Google, notes that many of the world's top experts in AI are Chinese.

Google also has AI research groups located in New York, Toronto, London and Zurich.

Sunday, November 26, 2017

Amazon ML Solutions Lab Opens

Amazon Web Services is launching a program that connects machine learning experts from across Amazon with AWS customers "to help identify practical uses of machine learning inside customers’ businesses, and guide them in developing new machine learning-enabled features, products, and processes."

The Amazon ML Solutions Lab combines hands-on educational workshops with brainstorming sessions. The idea is to help customers “work backwards from business challenges, and then go step-by-step through the process of developing machine learning-based solutions.”

Machine learning workshops cover topics such as how to prepare data, build and train models, and put models into production.

https://aws.amazon.com/ml-solutions-lab


NVIDIA extends AI healthcare partnership with GE

NVIDIA is working with GE to bring the most sophisticated artificial intelligence (AI) to GE Healthcare’s 500,000 imaging devices globally.

The partnership, which was detailed at the 103rd annual meeting of the Radiological Society of North America (RSNA), includes a new NVIDIA-powered Revolution Frontier CT, advancements to the Vivid E95 4D Ultrasound and development of GE Healthcare’s Applied Intelligence analytics platform.

NVIDIA said its AI computing platform accelerates image processing in the new CT system by 2X.

NVIDIA notes that the average hospital generates 50 petabytes of data annually, through medical images, clinical charts and sensors, as well as operational and financial sources, providing many opportunities to accelerate data processing flows.

Tuesday, November 14, 2017

Didi Chuxing opens Silicon Valley Lab

Didi Chuxing, the leading ride sharing service in China, opened a U.S. research facility in Mountain View, California.  The offices encompass 36,000 square feet and offer capacity for more than 200 employees.

DiDi Labs, which was officially launched in March 2017, focuses on AI-based security and intelligent driving technologies.

Bob Zhang, CTO of Didi Chuxing, said at the campus opening, “It’s been an exciting year for DiDi Labs. Our talented team is growing fast and making important contributions across our key tech areas, from smart-city transportation management and AI ride-matching to security and new product innovation.”

Didi Chuxing now has over 450 million users and is handling over 25 million daily rides.

DiDi acquired Uber China in August 2016.

Thursday, October 12, 2017

AWS and Microsoft collaborate on deep learning library

Amazon Web Services (AWS) and Microsoft announced a new deep learning library, called Gluon, for prototyping, building, training and deploying sophisticated machine learning models for the cloud, devices at the edge and mobile apps.

"The potential of machine learning can only be realized if it is accessible to all developers. Today’s reality is that building and training machine learning models requires a great deal of heavy lifting and specialized expertise,” said Swami Sivasubramanian, VP of Amazon AI. “We created the Gluon interface so building neural networks and training models can be as easy as building an app. We look forward to our collaboration with Microsoft on continuing to evolve the Gluon interface for developers interested in making machine learning easier to use.”

“We believe it is important for the industry to work together and pool resources to build technology that benefits the broader community,” said Eric Boyd, Corporate Vice President of Microsoft AI and Research. “This is why Microsoft has collaborated with AWS to create the Gluon interface and enable an open AI ecosystem where developers have freedom of choice. Machine learning has the ability to transform the way we work, interact and communicate. To make this happen we need to put the right tools in the right hands, and the Gluon interface is a step in this direction.”

Key facts on the Gluon interface:

  • Provides an easy-to-understand programming interface that enables developers to quickly prototype and experiment with neural network models.
  • Can be used to create neural networks on the fly, and to change their size and shape dynamically.
  • Currently works with Apache MXNet and will support Microsoft Cognitive Toolkit (CNTK) in an upcoming release.
  • Can be used to build machine learning models using a simple Python API and a range of pre-built, optimized neural network components (see the sketch below).
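
As an illustration of the "easy as building an app" claim, here is a minimal Gluon-style training step with Apache MXNet; random data stands in for a real dataset:

    # Minimal Gluon example: define, initialize and train a small network.
    import mxnet as mx
    from mxnet import gluon, autograd, nd

    net = gluon.nn.Sequential()
    net.add(gluon.nn.Dense(64, activation='relu'))
    net.add(gluon.nn.Dense(10))
    net.initialize(mx.init.Xavier())

    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

    x = nd.random.uniform(shape=(32, 784))     # stand-in batch of inputs
    y = nd.random.randint(0, 10, shape=(32,))  # stand-in class labels

    with autograd.record():                    # record ops for autodiff
        loss = loss_fn(net(x), y)
    loss.backward()                            # per-sample losses; implicit sum
    trainer.step(batch_size=32)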


https://github.com/gluon-api/gluon-api/
