
Tuesday, December 1, 2020

Qualcomm's Snapdragon 888 brings 3rd gen 5G modem, 6th gen AI Engine

Qualcomm previewed its new flagship - the Snapdragon 888 - featuring its 3rd generation X60 5G Modem-RF System with global band coverage and a 6th generation AI Engine operating at an astonishing 26 tera operations per second (TOPS).

The 5G modem operates in mmWave and sub-6 GHz across all major bands worldwide, and it brings support for 5G carrier aggregation, global multi-SIM, standalone (SA) and non-standalone (NSA) modes, and Dynamic Spectrum Sharing.

The new 6th generation Qualcomm AI Engine features a completely re-engineered Qualcomm Hexagon processor that improves performance and power efficiency.

“Creating premium experiences takes a relentless focus on innovation. It takes long term commitment, even in the face of immense uncertainty,” said Cristiano Amon, president, Qualcomm Incorporated. “It takes an organization that’s focused on tomorrow, to continue to deliver the technologies that redefine premium experiences.”



https://www.qualcomm.com/news/releases/2020/12/01/qualcomm-redefines-premium-snapdragon-tech-summit-digital-2020


AWS to deploy Intel's Gaudi AI accelerators in EC2 instances

AWS will begin offering EC2 instances with up to eight of Intel's Habana Gaudi accelerators for machine learning workloads.

Gaudi accelerators are specifically designed for training deep learning models for workloads that include natural language processing, object detection and machine learning training, classification, recommendation and personalization.

“We are proud that AWS has chosen Habana Gaudi processors for its forthcoming EC2 training instances. The Habana team looks forward to our continued collaboration with AWS to deliver on a roadmap that will provide customers with continuity and advances over time,” states David Dahan, chief executive officer at Habana Labs, an Intel Company.

Intel acquires Habana Labs for $2 billion - AI chipset

Intel has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center, for approximately $2 billion.

Habana’s Gaudi AI Training Processor is currently sampling with select hyperscale customers. Large-node training systems based on Gaudi are expected to deliver up to a 4x increase in throughput versus systems built with the equivalent number of GPUs. Gaudi is designed for efficient and flexible system scale-up and scale-out.

Additionally, Habana’s Goya AI Inference Processor, which is commercially available, has demonstrated excellent inference performance including throughput and real-time latency in a highly competitive power envelope. Gaudi for training and Goya for inference offer a rich, easy-to-program development environment to help customers deploy and differentiate their solutions as AI workloads continue to evolve with growing demands on compute, memory and connectivity.

Habana will remain an independent business unit and will continue to be led by its current management team. Habana will report to Intel’s Data Platforms Group, home to Intel’s broad portfolio of data center class AI technologies.

“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center,” said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”

Habana Labs chairman Avigdor Willenz will serve as a senior adviser to the business unit as well as to Intel Corporation after Intel’s purchase of Habana.

“We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said David Dahan, CEO of Habana. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”


Interview: Habana Labs targets AI processors



Habana Labs, a start-up based in Israel with offices in Silicon Valley, emerged from stealth to unveil its first AI processor. According to the company, its deep learning inference processor, named Goya, delivers more than two orders of magnitude better throughput and power efficiency than commonly deployed CPUs. The company will offer a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate various AI inferencing workloads,...



Monday, November 30, 2020

SK Telecom designs its own AI chip

SK Telecom unveiled its own artificial intelligence (AI) chip and announced plans to enter the AI semiconductor business.

The South Korean telecoms operator said its new "SAPEON X220" chip is optimized for processing large amounts of data in parallel. Its deep learning computation speed is 6.7 kilo-frames per second, 1.5 times faster than the inference GPUs widely used by AI-service companies. At the same time, it consumes 60 watts of energy, 20% less power than a comparable GPU, and costs about half as much.
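The stated ratios imply a GPU baseline that can be derived with simple arithmetic. A quick back-of-envelope check in Python (the GPU-side numbers below are inferred from the announced figures, not given in the announcement):

```python
# Back-of-envelope check of the SAPEON X220 figures reported above.
# The GPU baseline values are implied by the stated ratios.

sapeon_kfps = 6.7      # deep-learning throughput, kilo-frames/s
speedup = 1.5          # claimed speedup vs. a typical inference GPU
sapeon_watts = 60      # stated power draw
power_saving = 0.20    # "20% less power than a GPU"

implied_gpu_kfps = sapeon_kfps / speedup               # ~4.5 kilo-frames/s
implied_gpu_watts = sapeon_watts / (1 - power_saving)  # 75 W

# Frames per joule, a common efficiency metric for inference silicon
sapeon_fpj = sapeon_kfps * 1000 / sapeon_watts             # ~112 frames/J
gpu_fpj = implied_gpu_kfps * 1000 / implied_gpu_watts      # ~60 frames/J

print(f"Implied GPU baseline: {implied_gpu_kfps:.1f} kfps at {implied_gpu_watts:.0f} W")
print(f"Efficiency: SAPEON {sapeon_fpj:.0f} vs GPU {gpu_fpj:.0f} frames/joule")
```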

SKT plans to use the chip for its own AI-powered services, including for voice recognition. The aim is to generate synergies by combining AI semiconductor chips and 5G edge cloud. 

SAPEON X220 will also be utilized by SKT’s affiliate companies. For instance, ADT Caps will apply the chip to enhance the performance of its AI-based video monitoring service named T View. In addition, SAPEON X220 will be applied to the cloud server of the next-generation media platform of Cast.era, a joint venture of SKT and Sinclair Broadcast Group.

SKT also announced a plan to enter the AI as a Service (AIaaS) business. It will offer a complete solution package as a service by combining its AI chip and AI software, including diverse AI algorithms for features like content recommendation, voice recognition, video recognition and media upscaling, along with Application Programming Interfaces (APIs).

https://www.sktelecom.com/en/press/press_detail.do?page.page=1&idx=1492&page.type=all&page.keyword=



 

Thursday, November 19, 2020

IBM to acquire Instana for AIOps application monitoring

IBM will acquire Instana, an application performance monitoring and observability company based in Chicago and with a development center in Germany. Financial terms were not disclosed.

Instana provides businesses with capabilities to manage the performance of complex and modern cloud-native applications no matter where they reside – on mobile devices, public and private clouds and on-premises, including IBM Z.  Instana's enterprise observability platform automatically builds a deep contextual understanding of cloud applications and provides actionable insights to indicate how to best prevent and remedy IT issues that could damage the business or reduce customer satisfaction -- such as slow response times, services that aren't working or infrastructure that is down.

Once Instana's capabilities are integrated into IBM, companies will be able to feed these insights into Watson AIOps. The information could then be compared to a baseline of a normal operating application, with AI triggering alerts to resolve issues quickly before negative impacts to that transaction or activity. This can help eliminate the need for IT staff to manually monitor and manage applications, freeing these employees to focus on innovation and higher value work. 
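The baseline comparison described above is, at its core, an anomaly-detection pattern. A minimal sketch of the idea using a z-score test on response times (illustrative only; Instana and Watson AIOps use far richer models, and the function name and threshold here are invented):

```python
import statistics

def baseline_alert(history, current, z_threshold=3.0):
    """Flag `current` if it deviates from the historical baseline.

    `history` is a list of past response times (ms) for a healthy
    application; `current` is the latest observation. This z-score
    test is a stand-in for the richer models an AIOps platform uses.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    z = abs(current - mean) / stdev
    return z > z_threshold

# Healthy baseline around 120 ms; a 480 ms response trips the alert.
normal = [118, 122, 119, 121, 120, 123, 117, 120]
print(baseline_alert(normal, 480))  # anomalous response time -> True
print(baseline_alert(normal, 121))  # within normal variation -> False
```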

"With the added responsibility of ensuring the build and run quality of the software they develop, DevOps teams need a new generation of application performance monitoring and observability capabilities to succeed," said Mirko Novakovic, co-founder and CEO, Instana. "Instana's observability capabilities combined with IBM's AI-powered automation capabilities across hybrid cloud environments will give clients a full view of their application performance to best optimize operations."

https://www.ibm.com/cloud/blog/ibm-and-instana

Thursday, November 12, 2020

Nokia touts AVA Quality of Experience at the Edge

Nokia announced the AVA QoE at the Edge service, which enables automated actions to fix customer issues instantly.

Nokia says deployment of its AVA algorithms on traditional network architectures has achieved a 59 percent reduction in Netflix buffering and 15 percent fewer YouTube sessions suffering from long playback delays.

Nokia AVA QoE at the Edge brings “code to where the data is”, deploying Machine Learning (ML) algorithms at the network edge to enable real-time automated actions. The solution also eases the data burden on CSPs, with an exponential reduction in the volume of user plane data required to feed ML models. 

Dennis Lorenzin, Head of Network Cognitive Services, Global Services, Nokia, said: “Today, many CSPs are keen to launch new low latency services to their customers. With Nokia’s AVA QoE at the Edge, we bring AI to the edge, so CSPs can deliver personalized 5G experiences and guaranteed performance.”

Nokia introduces “AI-as-a-service” for telcos

Nokia introduced a set of AI capabilities for helping service providers to automate their network with cloud scalability. This framework provides an end-to-end service view with near real-time impact correlation for better visibility and control, supported by Nokia’s extensive library of AI use cases.

The new Nokia AVA 5G Cognitive Operations offering anticipates network and service failures with a high level of precision and accuracy up to seven days in advance. If failures arise, Nokia 5G Cognitive Operations can solve them up to 50 percent faster and accurately assess the impact on customers and services. The insights provided will help support CSPs with their slice creation, with an intelligent provisioning system identifying network resources, what SLAs can be committed and where new revenue opportunities can be found. Future capability will also enable CSPs to customize slice creation, providing different SLA levels based on unique user requirements.

Nokia is currently hosting the new capabilities in Microsoft Azure but says other public and private cloud options are possible.

“Operators face a perfect storm of rising traffic and consumer expectations, so it is crucial to be able to predict and prevent service degradations at an earlier stage, while solving issues that arise significantly faster. Nokia AVA 5G Cognitive Operations enables CSPs to operate and assure latency for 5G use cases through AI, ultimately delivering an enhanced customer experience for consumers and enterprises,” states Dennis Lorenzin, Head of Network Cognitive Service unit at Nokia.

Nokia claims that CSPs trialing the service have seen a 20 percent reduction in customer complaints and a 10 percent reduction in costly site visits.

Monday, November 9, 2020

Cellwize raises $32 million for its 5G automation and orchestration

Cellwize, a start-up headquartered in Singapore with R&D in Israel, announced a $32 million Series B funding round for its mobile network automation and orchestration solution.

The new funding round was led by Intel Capital and Qualcomm Ventures LLC with participation from Verizon Ventures, Samsung Next, and existing shareholders.

Cellwize offers a RAN automation and orchestration platform for 5G rollouts. Cellwize CHIME enables operators to accelerate their 5G deployment by automating key business processes in the RAN domain. The company leverages artificial intelligence and machine learning for zero-touch 5G deployments, for automating 2G/3G/4G/5G network optimization, and for delivering mobile network assurance. 

"We are delighted to have been selected by these leading VCs for their strategic investments to accelerate 5G in a way that is open and disaggregated," said Ofir Zemer, CEO of Cellwize. "This is a clear reflection of the trust they have in Cellwize and in the cutting-edge capabilities of CHIME for enabling the 5G revolution. "

"Intel Capital is a lead investor in Cellwize because we're excited about the opportunity Cellwize has to help operators transform their networks to accelerate the 5G revolution," said David Flanagan, vice president and senior managing director at Intel Capital. "Cellwize and Intel Capital are aligned in their vision that Cellwize's cloud-native platform, which includes AI-based automation technology, will help customers deploy complex 5G networks in a more efficient, scalable, and flexible way."

"Qualcomm is at the forefront of 5G expansion, creating a robust ecosystem of technologies that will usher in the new era of connectivity," said Merav Weinryb, Senior Director of Qualcomm Israel Ltd. and Managing Director of Qualcomm Ventures Israel and Europe. "As a leader in RAN automation and orchestration, Cellwize plays an important role in 5G deployment. We are excited to support Cellwize through the Qualcomm Ventures' 5G global ecosystem fund as they scale and expedite 5G adoption worldwide." 

https://www.cellwize.com/

Sunday, November 1, 2020

Intel to acquire SigOpt for AI model optimization software

Intel agreed to acquire SigOpt, a start-up based in San Francisco that is focusing on the optimization of artificial intelligence (AI) software models at scale. Financial terms were not disclosed.

SigOpt provides a standardized, scalable, enterprise-grade optimization platform and API. The company was founded by Patrick Hayes and Scott Clark, who is credited with building the open source Metric Optimization Engine (MOE) at Yelp.

Intel plans to use SigOpt’s software technologies across Intel’s AI hardware products to help accelerate, amplify and scale Intel’s AI software solution offerings to developers.

https://sigopt.com/ 

Thursday, October 29, 2020

Untether AI leverages at-memory computation for inference processing

Untether AI, a start-up based in Toronto, introduced its "tsunAImi" accelerator cards, which are powered by four of its own runAI200 processors featuring a unique at-memory compute architecture that aims to rethink how computation for machine learning is accomplished. The company says that 90 percent of the energy for AI workloads in current processing architectures is consumed by data movement, transferring the weights and activations between external memory, on-chip caches, and finally to the computing element itself.

Untether AI says it is able to deliver two PetaOperations per second (POPs) in its new standard PCI-Express cards -- more than two times any currently announced PCIe cards, which translates into over 80,000 frames per second of ResNet-50 v 1.5 throughput at batch=1, three times the throughput of its nearest competitor. For natural language processing, tsunAImi accelerator cards are rated at more than 12,000 queries per second (qps) of BERT-base, four times faster than any announced product.
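The stated figures can be sanity-checked against each other: dividing the 2 POPs peak rate by the 80,000 frames/s throughput gives the compute budget per frame, which can be compared with ResNet-50's operation count (the ~8 GOPs-per-inference figure is a commonly cited estimate, not from the announcement):

```python
peak_ops = 2e15          # 2 PetaOps/s claimed per tsunAImi card
fps = 80_000             # stated ResNet-50 v1.5 throughput at batch=1

ops_budget_per_frame = peak_ops / fps        # 25 GOPs available per frame

# ResNet-50 needs roughly 8 GOPs per 224x224 inference (commonly
# cited estimate; not from the announcement).
resnet50_ops = 8e9
implied_utilization = resnet50_ops / ops_budget_per_frame

print(f"{ops_budget_per_frame / 1e9:.0f} GOPs available per frame")
print(f"Implied compute utilization: {implied_utilization:.0%}")
```

On these rough numbers the card would be running at around a third of its peak rate to hit the quoted throughput, which is plausible for real inference workloads.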

“For AI inference in cloud and datacenters, compute density is king. Untether AI is ushering in the PetaOps era to accelerate AI inference workloads at scale with unprecedented efficiency,” said Arun Iyengar, CEO of Untether AI.

“When we founded Untether AI, our laser focus was unlocking the potential of scalable AI, by delivering more efficient neural network compute,” said Martin Snelgrove, co-founder and CTO of Untether AI. “We are gratified to see our technology come to fruition.”

The imAIgine SDK is currently in Early Access (EA) with select customers and partners. The tsunAImi accelerator card is sampling now and will be commercially available in 1Q2021.

Untether AI is funded by Radical Ventures and Intel Capital. 

http://www.untether.ai

Monday, July 13, 2020

Verizon pilots Google Cloud Contact Center AI

Verizon is testing Google Cloud Contact Center Artificial Intelligence to deliver more intuitive customer support through natural-language recognition, faster processing, and real-time customer service agent assistance.

The Google Cloud Contact Center AI solution aims to deliver shorter call times, quicker resolutions, and improved outcomes for customer satisfaction.

“Verizon’s commitment to innovation extends to all aspects of the customer experience,” said Shankar Arumugavelu, global CIO & SVP, Verizon. “These customer service enhancements, powered by the Verizon collaboration with Google Cloud, offer a faster and more personalized digital experience for our customers while empowering our customer support agents to provide a higher level of service.”

“We’re proud to work with Verizon to help enable its digital transformation strategy,” said Thomas Kurian, CEO of Google Cloud. “By helping Verizon reimagine the customer experience through our AI and ML expertise, we can create an experience that not only delights consumers, but also helps differentiate Verizon in the market.”

Verizon to deliver Google Stadia gaming

Verizon will deliver Google Stadia gaming over its Fios network.

Starting January 29, new Fios Gigabit internet customers will get a Stadia Premiere Edition free of charge. Stadia Premiere Edition includes a controller, a free three-month Stadia Pro subscription for access to games in up to 4K/60fps, and a Google Chromecast Ultra for streaming on an existing TV.

“Fios has long been known as the leading Internet service for console gaming and streaming entertainment,” said Brian Higgins, vice president, consumer device marketing and products, Verizon. “With the recent surge in adoption of cloud gaming, led by Stadia, Fios will continue to serve as the backbone for the best cloud gaming services.”

“Google working with Verizon to deliver incredible cloud gaming experiences is a great step forward for the industry,” said Brennan Mullin, vice president, Devices and Services Partnerships, Google. “Verizon’s commitment to delivering fast, reliable Fios internet matches perfectly with Stadia’s exciting new cloud gaming, delivering an unmatched gamer experience.”

Google unveiled its Stadia platform in March 2019.

Tuesday, June 16, 2020

CommScope rolls out AI-enabled, cloud network management

CommScope introduced an AI-enabled network management as-a-service platform that enables enterprise IT and managed service providers (MSPs) to manage a converged wired and wireless network.

RUCKUS Cloud offers single-pane management with network visibility and service assurance.  Key features:

  • Unified wired and wireless management - Intent-based workflows expedite provisioning, management, and control from large venues to hundreds or thousands of sites.
  • ML and AI - Analytics tools enable IT to react quickly to issues and stop network anomalies from rising to the service-affecting level.
  • RESTful APIs - OpenAPI-compliant APIs allow IT to automate any network function, create custom dashboards and reports, and easily integrate RUCKUS Cloud into existing enterprise systems.
  • MSP dashboard - Allows MSPs to offer branded services and manage multiple end customers.
  • Network health monitoring - IT teams can define and measure performance against service level agreements (SLAs) that best reflect the requirements of their users.
  • Remote client troubleshooting - Remote access to connection history and clearly identified points of failure facilitate a rapid response to user-reported network issues, regardless of client location.
  • Planning and reporting - 12 months of included historical device- and element-level data helps IT make well-informed network planning decisions.

“Networks are changing rapidly, with accelerating growth in users, network elements, devices and device diversity that’s driving a new level of network complexity, making it difficult for IT organizations to keep up,” said Matthias Machowinski, Omdia senior research director, enterprise networks. “Modern cloud-managed networking and ML/AI-based assurance tools provide automation and in-depth network insights, promising to give control back to the IT organization and deliver greater efficiency.”

Tuesday, June 9, 2020

Aruba turns up AI with its new Network Edge platform

Aruba introduced an AI-powered, cloud-native platform that continuously analyzes data across enterprise infrastructure in order to predict and solve problems at the network edge before they happen.

The new Aruba ESP (Edge Services Platform) uses AI to identify traffic while seeing and securing unknown devices on the network. Aruba ESP is a full-stack, cloud-native platform for wired, wireless and SD-WAN environments that unifies multiple network elements for centralized management and control. Aruba ESP will be sold either as a service in the cloud or on-premises, as a managed service delivered through Aruba partners, or via network as a service through HPE GreenLake.

Aruba says its AIOps can identify exact root causes with greater than 95% accuracy, auto-remediate network issues, proactively monitor the user experience, tune the network to prevent problems before they occur, and use peer benchmarking and prescriptive recommendations to continuously optimize and secure the network.

“The Intelligent Edge is the catalyst that will spark limitless possibilities for organizations and enterprises that want to accelerate transformation and ensure business continuity by leveraging their technology investments as their greatest asset,” said Keerti Melkote, president of Aruba, a Hewlett Packard Enterprise company. “Built upon Aruba’s guiding principles of connect, protect, analyze, and act, Aruba ESP is the culmination of years of innovation, R&D, Aruba ingenuity and, most importantly, input from our valued customers whose honest feedback and insightful perspective has helped to make this platform a network that knows.”

Highlights for Aruba ESP:

  • Cloud-native management for any size enterprise – Aruba Central currently runs mission critical networks for over 65,000 customers and now with new ArubaOS services, it is the industry’s only controllerless, cloud-based platform to provide full-stack management and operations for wired, wireless and SD-WAN infrastructure of any size across campus, data center, branch, and remote worker locations to be consumed on-premises or in the cloud.
  • Simplified daily operations with unified infrastructure – With access to a common data lake via Aruba ESP, the latest version of Aruba Central has been enhanced with simplified navigation, advanced search, and contextual views to present multiple dimensions of information through a single point-of-control, virtually eliminating the need for disparate tools to collect and correlate information across numerous domains and locations.
  • Reduced resolution time with AI and automation – Based on modeling data from over one million network devices generating over 1.5B data points per day, Aruba’s new AI Insights reduces troubleshooting time by identifying hard-to-see network configuration issues and providing root-cause, prescriptive recommendations and automated remediation to continuously optimize network operations.
  • AI-powered IT efficiencies – Aruba Central now offers AI Search, a Natural Language Processing data discovery service that enables IT teams to eliminate “swivel chair” investigations by using simple, English language queries to extract comprehensive user and device information from Aruba ESP’s common data lake to present relevant information in context to quickly resolve a problem. 
  • Granular visibility across applications, devices and the network – Enhancements to Aruba Central enable user-centric analytics from User Experience Insight to identify client, application, and network performance issues faster.
  • Extension of next-gen switching to distributed and mid-size enterprises – This new series brings built-in analytics and automation capabilities to every network edge where user and device connectivity occurs, generating insights that can be applied to informing better business outcomes. The CX 6200 switch series further expands Aruba’s end-to-end CX switching portfolio, enabling customers to run a single operating model from the enterprise campus and branch access layer to the data center.
  • Ongoing innovation with new Developer Hub – Aruba is introducing a resource for developers that includes Aruba APIs and documentation to streamline the development of innovative, next-generation edge applications leveraging the open Aruba ESP platform.




Thursday, May 21, 2020

Juniper's Mist AI-powered Wi-Fi offers contact tracing

Mist Systems' AI-powered enterprise Wi-Fi is now supporting key workplace business continuity safety tasks like proximity tracing, journey mapping and hot zone alerting as part of strategic contact tracing and social distancing initiatives.

Juniper said that by using Mist access points and cloud services in conjunction with Wi-Fi and/or BLE-enabled devices such as phones and badges, enterprises can now support:

  • Proximity tracing - If an individual identifies as COVID-19 positive (or is experiencing symptoms), enterprises can quickly identify and notify other employees, guests or customers that may have been in close proximity to that individual while onsite. 
  • Journey mapping - Customers can view historical traffic patterns and dwell times for employees who have reported testing positive for COVID-19 – from the moment they were onsite to departure. Journey mapping can identify high-traffic hot zones so customers can take safety measures such as reconfiguring workspaces and deploying additional cleaning efforts.  
  • Hot zone alerting - By looking at the quantity of devices and locations in specific areas, enterprises can disperse or divert traffic away from congested areas with real-time, location-based alerting. They can also view trends over time to identify certain areas for proactive measures.
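Proximity tracing of the kind described typically rests on estimating distance from BLE signal strength. A minimal sketch of that idea (the log-distance path-loss constants and the 2 m threshold are generic textbook assumptions, not Mist's actual model, and the badge IDs are invented):

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Rough distance estimate from BLE RSSI using a log-distance
    path-loss model; tx_power_dbm is the expected RSSI at 1 metre."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def close_contacts(readings, threshold_m=2.0):
    """Return badge IDs whose estimated distance is within threshold."""
    return [badge for badge, rssi in readings
            if estimate_distance_m(rssi) <= threshold_m]

# RSSI samples (dBm) from nearby badges as seen by one access point
readings = [("badge-17", -62), ("badge-42", -80), ("badge-73", -59)]
print(close_contacts(readings))  # ['badge-17', 'badge-73']
```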

“Employee health and wellness have always been a key part of business continuity planning, but now more than ever enterprises are looking to IT for help complying with OSHA, ADA, CDC and other guidelines,” said Sudheer Matta, VP of Products at Mist. “The Mist architecture provides unique value by combining Wi-Fi with patented virtual BLE technology, supported by a 16 antenna array that is bi-directional and minimizes the need for extra infrastructure hardware like battery-powered beacons. In addition, Mist recently launched a Premium Analytics service that provides unique insight from a variety of data sources to optimize end-user/client experiences and identify trends that can assist customers with workplace safety.”

Monday, April 20, 2020

LeapMind unveils ultra low-power AI inference accelerator

Tokyo-based LeapMind introduced its "Efficiera" ultra-low power AI inference accelerator IP for companies that design ASIC and FPGA circuits, and other related products.

"Efficiera" is an ultra-low power AI Inference Accelerator IP specialized for Convolutional Neural Network (CNN) inference calculation processing; it functions as a circuit in an FPGA or ASIC device. Its "extreme low bit quantization" technology, which minimizes the number of quantized bits to 1–2 bits, does not require cutting-edge semiconductor manufacturing processes or the use of specialized cell libraries to maximize the power and space efficiency associated with convolution operations, which account for a majority of inference processing.
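To illustrate what extreme low bit quantization means in practice, here is a minimal NumPy sketch of 1-bit weight quantization in the style of BinaryConnect-type schemes (purely illustrative; LeapMind's actual quantization method is proprietary and not described in the announcement):

```python
import numpy as np

def binarize(weights):
    """1-bit quantization: keep only the sign of each weight plus a
    single per-tensor scale, so a 32-bit float tensor shrinks ~32x."""
    scale = np.abs(weights).mean()      # one float for the whole tensor
    return np.sign(weights), scale

def binary_matmul(x, sign_w, scale):
    """Matmul/convolution against binarized weights reduces to
    additions and subtractions plus one multiply by the scale."""
    return (x @ sign_w) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3)).astype(np.float32)
x = rng.normal(size=(2, 4)).astype(np.float32)

sign_w, scale = binarize(w)
approx = binary_matmul(x, sign_w, scale)
exact = x @ w
print("max abs error:", np.abs(approx - exact).max())
```

The hardware payoff, as the announcement suggests, is that multipliers largely disappear from the datapath, which is why such schemes fit small FPGA and ASIC power budgets.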

LeapMind is simultaneously launching several related products and services: "Efficiera SDK," a software development tool providing a dedicated learning and development environment for Efficiera, the "Efficiera Deep Learning Model" for efficient training of deep learning models, and "Efficiera Professional Services," an application-specific semi-custom model building service based on LeapMind's expertise that enables customers to build extreme low bit quantized deep learning models applicable to their own unique requirements.

Thursday, April 9, 2020

Intel and Georgia Tech to lead DARPA project

Intel and the Georgia Institute of Technology have been selected to lead a Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD) program team for the Defense Advanced Research Projects Agency (DARPA).

The goal of the GARD program is to establish theoretical ML system foundations that will not only identify system vulnerabilities and characterize properties to enhance system robustness, but also promote the creation of effective defenses. Through these program elements, GARD aims to create deception-resistant ML technologies with stringent criteria for evaluating their effectiveness.

The first phase of GARD will focus on enhancing object detection technologies through spatial, temporal and semantic coherence for both still images and videos.

Intel is the prime contractor in this four-year, multimillion-dollar joint effort to improve cybersecurity defenses against deception attacks on machine learning (ML) models.
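The deception attacks GARD targets typically add small, crafted perturbations to model inputs. A minimal example of the attack class, a Fast Gradient Sign Method (FGSM) step against a toy logistic classifier (illustrative of the threat, not of GARD's defenses; all values are invented):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.25):
    """Fast Gradient Sign Method against a logistic classifier:
    nudge each input feature by eps in the direction that increases
    the loss, i.e. the sign of the loss gradient w.r.t. the input."""
    p = 1 / (1 + np.exp(-(x @ w + b)))   # predicted P(y=1)
    grad_x = (p - y) * w                 # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])     # clean score 1.5 -> confidently class 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y)
print("clean score:", x @ w + b, "adversarial score:", x_adv @ w + b)
```

Even this tiny perturbation halves the classifier's confidence; deception-resistant ML of the kind GARD pursues aims to bound how much damage such input changes can do.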

“Intel and Georgia Tech are working together to advance the ecosystem’s collective understanding of and ability to mitigate against AI and ML vulnerabilities. Through innovative research in coherence techniques, we are collaborating on an approach to enhance object detection and to improve the ability for AI and ML to respond to adversarial attacks,” states Jason Martin, principal engineer at Intel Labs and principal investigator for the DARPA GARD program from Intel.

Monday, February 24, 2020

Conversational AI for Telco Service Providers



Customer support may well be the first really solid business case for AI in Service Provider networks. Imagine the ROI if conversational AI and automation could trim the time needed for millions of customer support calls.

In this video, Umesh Sachdev, CEO of Uniphore, discusses the use case for conversational AI by telcos.

More thought leadership videos on network automation may be found here:
https://nginfrastructure.com/network-automation/


Tuesday, January 28, 2020

ServiceNow to acquire Passage AI

ServiceNow agreed to acquire Passage AI, a start-up based in Mountain View, California, for its conversational AI platform. Financial terms were not disclosed.

ServiceNow said the acquisition will enhance its deep learning AI capabilities and accelerate its vision of supporting all major languages across the company’s Now Platform and products, including ServiceNow Virtual Agent, Service Portal, Workspaces and emerging interfaces.

“Work flows more smoothly when people can get things done in their native language,” said Debu Chatterjee, senior director of AI Engineering at ServiceNow. “Building deep learning, conversational AI capabilities into the Now Platform will enable a work request initiated in German or a customer inquiry initiated in Japanese to be solved by Virtual Agent. Passage AI’s technology will enable us to accelerate our vision of empowering great employee and customer experiences by delivering great workflow experiences. ServiceNow believes in making work flow more smoothly across the enterprise, in all major languages.”

Passage AI was founded in 2016 by CEO Ravi N. Raj, CTO Madhu Mathihalli and CTO Mitul Tiwari.

Monday, January 27, 2020

Iguazio raises $24M for its data science platform

Iguazio, a start-up based in Herzliya, Israel, raised $24 million in funding for its data science platform for real time machine learning applications.

The Iguazio data science platform helps data scientists create real-time AI applications while working within their chosen machine learning stack.

The funding round was led by INCapital Ventures, with participation from existing and new investors, including Pitango, Verizon Ventures, Magma Venture Partners, Samsung SDS, Kensington Capital Partners, Plaza Ventures and Silverton Capital Ventures.

“This is a pivotal time for AI. Our platform helps data scientists push the limits of their real-time AI applications and see their impact in real business environments,” said Asaf Somekh, co-founder and CEO of Iguazio. “With support from INCapital, Kensington Capital Partners, and our other investors, we are ready to expand our international team and reach our ambitious goals.”

http://www.iguazio.com

Wednesday, January 15, 2020

Turkcell establishes AI principles

Turkcell has announced a set of AI Principles that commit to the ethical and responsible use of artificial intelligence technologies.

During the press conference held at Turkcell HQ, the company shared its following principles:

  1. We are human- and environment-centric
  2. We are professionally responsible
  3. We respect data privacy
  4. We are transparent
  5. We are security-based
  6. We are fair
  7. We share and collaborate for a better future

“AI should be raised like children and we commit to teach better as responsible parents,” says Omer Barbaros Yis, Turkcell CMO. “Today we share our principles and our commitment to help AI have socially beneficial impacts for our customers and society at large. We are proud to become the first company to contribute to AI ethics in Turkey. The field will continuously expand and we will witness its transformative impacts in our daily lives. Backed by our experience in digital transformation and creating next-generation technologies, we will continue to drive a positive direction towards its advancement and help overcome public concerns about the field.”

Zinier raises $90M for AI-driven automation

Zinier, a start-up based in San Mateo, California, raised $90 million in Series C funding for its efforts to transform field service workforces with AI-driven automation.

Zinier said its intelligent field service automation platform, called ISAC, helps organizations work smarter—from the back office to the field—to solve problems more quickly, fix things before they break, and maintain the services that we rely on every day.

“Services that we rely on every day - electricity, transportation, communication - are getting by on centuries-old infrastructure that requires a major upgrade for the next generation of users,” said Arka Dhar, co-founder and CEO of Zinier. “A field service workforce powered by both people and automation is necessary to execute the massive amount of work required to not only maintain these critical human infrastructures, but to also prepare for growth. Our team is focused on enabling this transformation across industries through intelligent field service automation.”

New investor ICONIQ Capital led the round with new participation from Tiger Global Management, and return investors Accel, Founders Fund, Nokia-backed NGP Capital, France-based Newfund Capital and Qualcomm Ventures LLC. The funding will support global customer adoption and expansion of Zinier’s AI-driven field service automation platform, ISAC.

http://www.zinier.com

Monday, December 16, 2019

Intel acquires Habana Labs for $2 billion - AI chipset

Intel has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center, for approximately $2 billion.

Habana’s Gaudi AI Training Processor is currently sampling with select hyperscale customers. Large-node training systems based on Gaudi are expected to deliver up to a 4x increase in throughput versus systems built with the equivalent number of GPUs. Gaudi is designed for efficient and flexible system scale-up and scale-out.

Additionally, Habana’s Goya AI Inference Processor, which is commercially available, has demonstrated excellent inference performance including throughput and real-time latency in a highly competitive power envelope. Gaudi for training and Goya for inference offer a rich, easy-to-program development environment to help customers deploy and differentiate their solutions as AI workloads continue to evolve with growing demands on compute, memory and connectivity.

Habana will remain an independent business unit and will continue to be led by its current management team. Habana will report to Intel’s Data Platforms Group, home to Intel’s broad portfolio of data center class AI technologies.

“This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data center,” said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. “More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads.”

Habana Labs chairman Avigdor Willenz will serve as a senior adviser to the business unit as well as to Intel Corporation after Intel’s purchase of Habana.

“We have been fortunate to get to know and collaborate with Intel given its investment in Habana, and we’re thrilled to be officially joining the team,” said David Dahan, CEO of Habana. “Intel has created a world-class AI team and capability. We are excited to partner with Intel to accelerate and scale our business. Together, we will deliver our customers more AI innovation, faster.”


Interview: Habana Labs targets AI processors

Habana Labs, a start-up based in Tel-Aviv, Israel, raised $75 million in an oversubscribed series B funding for its development of AI processors.

Habana Labs is currently in production with its first product, a deep learning inference processor named Goya, which the company claims delivers throughput and power efficiency more than two orders of magnitude better than commonly deployed CPUs. Habana is now offering a PCIe 4.0 card that incorporates a single Goya HL-1000 processor and is designed to accelerate a range of AI inference workloads, such as image recognition, neural machine translation, sentiment analysis, and recommender systems. The card delivers 15,000 images/second throughput on the ResNet-50 inference benchmark, with 1.3 milliseconds latency, while consuming only 100 watts of power. The Goya solution consists of a complete hardware and software stack, including a high-performance graph compiler, hundreds of kernel libraries, and tools.
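As a back-of-the-envelope check on those figures (a reader's sketch, not vendor data), the quoted throughput, latency and power imply both an energy-efficiency figure and a degree of batching/parallelism on the card:

```python
# Figures quoted for the Goya HL-1000 PCIe card on ResNet-50 inference.
throughput_ips = 15_000   # images per second
latency_ms = 1.3          # reported latency, milliseconds
power_w = 100             # card power draw, watts

# Energy efficiency: images processed per joule of energy.
images_per_joule = throughput_ips / power_w
print(f"Efficiency: {images_per_joule:.0f} images per joule")

# A strictly serial pipeline at 1.3 ms per image would cap out around
# 769 images/s, so the quoted 15,000 images/s implies roughly this many
# images in flight at once (Little's law: throughput x latency):
in_flight = throughput_ips * (latency_ms / 1000)
print(f"Implied in-flight images: ~{in_flight:.0f}")
```

This kind of sanity check is useful when comparing inference accelerators, since vendors may quote throughput at large batch sizes where latency per request is much higher.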

Habana Labs expects to launch a training processor, codenamed Gaudi, in the second quarter of 2019.

The funding round was led by Intel Capital and joined by WRV Capital, Bessemer Venture Partners, Battery Ventures and others, including existing investors. This brings total funding to $120 million. The company was founded in 2016.

“We are fortunate to have attracted some of the world’s most professional investors, including the world’s leading semiconductor company, Intel,” said David Dahan, Chief Executive Officer of Habana Labs. “The funding will be used to execute on our product roadmap for inference and training solutions, including our next generation 7nm AI processors, to scale our sales and customer support teams, and it only increases our resolve to become the undisputed leader of the nascent AI processor market.”

“Among all AI semiconductor startups, Habana Labs is the first, and still the only one, which introduced a production-ready AI processor,” said Lip-Bu Tan, Founding Partner of WRV Capital, a leading international venture firm focusing on semiconductors and related hardware, systems, and software. “We are delighted to partner with Intel in backing Habana Labs’ products and its extraordinary team.”

https://habana.ai/

Intel ships its Nervana Neural Network Processors

Intel announced the commercial production of its Nervana Neural Network Processors (NNP) for training (NNP-T1000) and inference (NNP-I1000).

The new devices are Intel’s first purpose-built ASICs for complex deep learning for cloud and data center customers. Intel said its Nervana NNP-T strikes the right balance between computing, communication and memory, allowing near-linear, energy-efficient scaling from small clusters up to the largest pod supercomputers. Both products were developed for the AI processing needs of leading-edge AI customers like Baidu and Facebook.

Intel also revealed its next-generation Movidius Myriad Vision Processing Unit (VPU) for edge media, computer vision and inference applications. Scheduled to be available in the first half of 2020, the new VPU incorporates highly efficient architectural advances that are expected to deliver leading performance: more than 10 times the inference performance of the previous generation, with up to six times the power efficiency of competitor processors.

“With this next phase of AI, we’re reaching a breaking point in terms of computational hardware and memory. Purpose-built hardware like Intel Nervana NNPs and Movidius Myriad VPUs are necessary to continue the incredible progress in AI. Using more advanced forms of system-level AI will help us move from the conversion of data into information toward the transformation of information into knowledge,” stated Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group.

“We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I,” said Misha Smelyanskiy, director, AI System Co-Design at Facebook.