
Monday, May 21, 2018

Google Cloud releases Kubernetes Engine

Google Kubernetes Engine 1.10 has now entered general availability.

In parallel with the GA of Kubernetes Engine 1.10, Google Cloud is introducing new features to support enterprise use cases:

  • Shared Virtual Private Cloud (VPC) for better control of network resources
  • Regional Persistent Disks and Regional Clusters for higher availability and stronger SLAs
  • Node Auto-Repair GA, and Custom Horizontal Pod Autoscaler for greater automation

Google also outlined several upcoming features for its Kubernetes Engine, including the ability for
teams within large organizations to share physical resources while maintaining logical separation of resources between departments. Workloads can be deployed in Google’s global Virtual Private Cloud (VPC) in a Shared VPC model.

Google's Kubernetes Engine will also gain Regional Persistent Disk (Regional PD) support. This will ensure that network-attached block storage has synchronous replication of data between two zones in a region.

https://cloudplatform.googleblog.com/

Tuesday, May 15, 2018

NEC and Google test subsea modulation using probabilistic shaping

NEC and Google have tested probabilistic shaping techniques to adjust the modulation of optical transmission across the 11,000-km FASTER subsea cable linking the U.S. and Japan.

The companies have demonstrated that the FASTER open subsea cable can be upgraded to a spectral efficiency of 6 bits per second per hertz (b/s/Hz) over an 11,000 km segment -- representing a capacity of more than 26 Tbps in the C-band, over 2.5X the capacity originally planned for the cable, with no additional wet plant capital expenditure. The achievement represents a spectral efficiency-distance product record of 66,102 b/s/Hz·km.
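As a quick sanity check on those figures, here is the back-of-the-envelope arithmetic in Python. The usable C-band width of roughly 4.4 THz is an assumed typical value, not a number given in the announcement.

```python
# Back-of-the-envelope check of the FASTER upgrade figures.
# Assumption: a usable C-band width of roughly 4.4 THz (a typical value for
# modern submarine line systems; not a figure quoted by NEC or Google).

spectral_efficiency = 6.0        # b/s/Hz demonstrated in the trial
c_band_width_hz = 4.4e12         # assumed usable C-band width
segment_length_km = 11_000       # length of the trial segment

capacity_bps = spectral_efficiency * c_band_width_hz
print(f"C-band capacity: {capacity_bps / 1e12:.1f} Tbps")   # ~26.4 Tbps, in line with ">26 Tbps"

se_distance_product = spectral_efficiency * segment_length_km
print(f"SE-distance product: {se_distance_product:,.0f} b/s/Hz*km")  # ~66,000, consistent with the 66,102 record
```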

The field trial was performed with live traffic on neighboring channels.

The companies said the test used near-Shannon probabilistic shaping of 64QAM modulation, and, for the first time on a live cable, artificial intelligence (AI) was used to analyze data for nonlinearity compensation (NLC). NEC developed an NLC algorithm based on data-driven deep neural networks (DNN) to accurately and efficiently estimate the signal nonlinearity.

"Other approaches to NLC have attempted to solve the nonlinear Schrodinger equation, which requires the use of very complex algorithms," said NEC's Mr. Toru Kawauchi, General Manager, Submarine Network Division. "This approach sets aside those deterministic models of nonlinear propagation, in favor of a low-complexity black-box model of the fiber, generated by machine learning algorithms. The results demonstrate both an improvement in transmission performance and a reduction in implementation complexity. Furthermore, since the black-box model is built up from live transmission data, it does not require advance knowledge of the cable parameters. This allows the model to be used on any cable without prior modeling or characterization, which shows the potential application of AI technology to open subsea cable systems, on which terminal equipment from multiple vendors may be readily installed."

Transpacific FASTER Cable Enters Service with 60 Tbps Capacity

The world's highest capacity undersea cable system has entered commercial service -- six fiber pairs capable of delivering 60 Terabits per second (Tbps) of bandwidth across the Pacific.

FASTER is a 9,000km trans-Pacific cable connecting Oregon and two landing sites in Japan (Chiba and Mie prefectures). The system has extended connections to major hubs on the West Coast of the U.S. covering Los Angeles, the San Francisco Bay Area, Portland and Seattle. The design features extremely low-loss fiber, without a dispersion compensation section, and the latest digital signal processing technology.

Google will have sole access to a dedicated fiber pair. This enables Google to carry 10 Tbps of traffic (100 wavelengths at 100 Gbps). In addition to greater capacity, the FASTER Cable System brings much needed diversity to East Asia, writes Alan Chin-Lun Cheung, Google Submarine Networking Infrastructure.

Construction of the system was announced in August 2014 by the FASTER consortium, consisting of China Mobile International, China Telecom Global, Global Transit, Google, KDDI and Singtel.

Tuesday, March 13, 2018

Google plans free public Wi-Fi in Mexico

Google is launching free public Wi-Fi at locations across Mexico.

Initially, Google Station will be available in 60+ high-traffic venues across Mexico City and nationwide, including airports, shopping malls and public transit stations. Google plans to reach 100+ locations before the end of the year.

Google already provides public Wi-Fi in India and Indonesia.

Monday, February 19, 2018

Google to acquire Xively for enterprise IoT

Google agreed to acquire Xively, a division of LogMeIn, for an undisclosed sum.

Xively offers an enterprise-ready IoT platform with advanced device management, messaging, and dashboard capabilities.

Xively, which was formerly known as Cosm and Pachube, is built on LogMeIn's cloud platform Gravity, which handles over 255 million devices, users, and customers across 7 datacenters worldwide.

Google said the acquisition will be paired with the security and scale of Google Cloud. The solution will also be augmented with Google Cloud’s data analytics and machine learning.

Wednesday, February 14, 2018

Will mobile networks be ready for Waymo's driverless ride-hailing service?

by James E. Carroll

Fiat Chrysler Automobiles (FCA) has confirmed an order for several thousand Pacifica Hybrid minivans to be delivered to Waymo, the autonomous car subsidiary of Alphabet (Google's parent company), this calendar year for deployment in several U.S. cities. Although the actual size of the order was not disclosed, it is believed to be between 3,000 and 10,000 vehicles. You may have already seen driverless Waymo minivans on the streets in live testing. Last year, FCA delivered 500 Pacifica minivans, adapted for self-driving, to Waymo for the test fleet. An earlier batch of 500 Pacificas was delivered in late 2016. The vehicles have racked up over 4 million miles (6.4 million kilometres) of testing on U.S. streets so far.

With this order for thousands of self-driving Waymos, the prospect of a commercial launch is in sight. For mobile network operators, this could be a golden opportunity. The question is whether mobile operators are bidding for this business.

"With the world's first fleet of fully self-driving vehicles on the road, we've moved from research and development, to operations and deployment," said John Krafcik, CEO of Waymo. "The Pacifica Hybrid minivans offer a versatile interior and a comfortable ride experience, and these additional vehicles will help us scale."

Although we do not know which mobile operator(s) Waymo has been working with, we do know that the connection from vehicle to network is at best LTE, since none of the big four operators has a 5G trial network in place for this level of testing.

Two conclusions can be drawn: (1) autonomous vehicle R&D programs are ahead of the 5G rollout, and (2) the first generation of autonomous vehicles may not require 5G at all.
In many ways, 5G networks promise to be an ideal platform for autonomous vehicle fleets. Consider:
(a) reduced network latency
(b) dense small cell deployments ideally near street level in urban cores
(c) high bandwidth throughput
(d) network slicing
(e) enhanced security

Autonomous vehicle fleets would also be the ideal 5G customer for mobile operators. Suppose a fleet operator such as Waymo procures and deploys 5,000 vehicles. The connectivity requirement will be 24/7. These vehicles are described as "data centers on wheels," and some estimates say each autonomous vehicle could generate 4TB of data daily. Of course, only a percentage of that data would need to be offloaded in real time, if at all, but clearly the aggregate demand on mobile networks would be substantial. A rough sizing exercise is sketched below.
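The fleet size and per-vehicle data volume below come from the figures above; the offload fractions are purely illustrative assumptions.

```python
# Rough sizing of the connectivity demand from an autonomous fleet.
# Fleet size and per-vehicle data volume are the figures cited above;
# the offload fractions are illustrative assumptions only.

fleet_size = 5_000
tb_per_vehicle_per_day = 4.0
total_tb_per_day = fleet_size * tb_per_vehicle_per_day
print(f"Raw data generated: {total_tb_per_day / 1000:.0f} PB/day")   # 20 PB/day across the fleet

for offload_fraction in (0.01, 0.05):
    offload_tb = total_tb_per_day * offload_fraction
    # Average uplink rate if that share were streamed evenly over 24 hours
    avg_gbps = offload_tb * 8e12 / 86_400 / 1e9
    print(f"Offloading {offload_fraction:.0%} over cellular: "
          f"{offload_tb:.0f} TB/day, ~{avg_gbps:.0f} Gbps sustained across the fleet")
```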

Background on Waymo

Waymo began developing its self-driving platform in 2009. At the time it was known as Google's Self-Driving Car project and was led by Sebastian Thrun, former director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View.  The Waymo identity was adopted in December 2016. The company remains based at the Google campus in Mountain View, California.

Late last year, Waymo began test driving the Pacifica minivans in the Phoenix metro region without anyone in the driver's seat, and it has now been doing so for some months. The test program has been expanding rapidly since then. Just after the New Year, Waymo announced that Atlanta would be its 35th test city.

In its nine years of development, Waymo has worked on every aspect of its forthcoming Transportation-as-a-Service platform. Its software is perhaps the key differentiator that will set it apart from the many fast followers. It is also the subject of the ongoing lawsuit launched by Waymo against Uber regarding purportedly stolen intellectual property.

Besides Fiat Chrysler, we know that Waymo is working with a few other technology suppliers. Waymo’s cloud service provider, of course, is Google. On the hardware side, Intel has disclosed that it supplied sensor processing, general compute and connectivity technologies for Waymo's test fleet of Pacifica minivans, including Xeon processors, Arria FPGAs, and Gigabit Ethernet and XMM LTE modems. The partnership between Intel and Waymo was cited in a blog post by Intel CEO Brian Krzanich in September.

Collecting mapping and other data from the fleet

At CES 2018, Intel disclosed that its Mobileye next-generation aftermarket collision avoidance system is capable of "collecting dynamic data to make cities smarter, safer and Autonomous Ready."
The idea is to harvest valuable information on city streets and infrastructure to create high-definition crowdsourced maps. Mobileye is developing a Road Experience Management (REM) system to make this easier. Many companies, as well as government authorities, will see value in harvesting this data from the vehicle. Collecting this data need not require an autonomous vehicle. Plenty of regular buses, taxis, and trucks criss-cross cities every day on established routes. Retrofitting these vehicles for mass-scale data gathering can be as simple as installing a single camera and sensor, along with a mobile broadband connection. In fact, Mobileye has announced a number of players who are already moving in this direction:

  • The city of Dusseldorf, Germany is expected to equip 750 vehicles with Mobileye 8 Connect to investigate the suitability of Dusseldorf’s existing infrastructure for autonomous vehicles and connected driving. The project is funded by the German federal government.
  • London black cabs will be fitted with Mobileye 8 Connect to create an HD map of the city. Gett, a start-up working on mobility solutions, will equip approximately 500 London black cabs this year.
  • New York City will also get an HD map based on Mobileye crowdsourced data. Buggy TLC Leasing, which provides leasing of vehicles for ride-sharing services such as Uber, is expected to outfit approximately 2,000 New York City-based vehicles with Mobileye Aftermarket.
  • Berkshire Hathaway GUARD Insurance will equip approximately 1,000 to 2,000 trucks with Mobileye 8 Connect to generate an HD map of where these vehicles operate.


Tuesday, January 16, 2018

Google commissions own subsea cable from CA to Chile

TE Subcom has been awarded a contract by Alphabet, the parent company of Google, to build a subsea cable from California to Chile. A ready-for-service date is expected in 2019.

The Curie Submarine Cable will be a four-fiber-pair subsea system spanning over 10,000 km from Los Angeles to Valparaiso. It will include a branching unit for future connectivity to Panama.

The project is believed to be the first subsea cable to land in Chile in 20 years.

“We’re proud to provide comprehensive services to Google on this project. Leveraging existing TE SubCom infrastructure through our SubCom Global Services (SGS) options put us in position to be a true partner to them. Our role in the continued growth of global connectivity and information sharing is a point of substantial pride for the TE SubCom team,” said Sanjay Chowbey, president of TE SubCom.

Google joins Havfrue and HK-G subsea cable projects

Google announced its participation in the HAVFRUE subsea cable project across the north Atlantic and in the Hong Kong to Guam cable system, both of which are expected to enter service in 2019.

In addition, Google confirmed that it is on-track to open cloud regions (data centers) in the Netherlands and Montreal this calendar quarter, followed by Los Angeles, Finland and Hong Kong.

HAVFRUE is a newly announced subsea cable project that will link New Jersey to the Jutland Peninsula of Denmark, with a branch landing in County Mayo, Ireland. Optional branch extensions to Northern and Southern Norway are also included in the design. The HAVFRUE system will be owned and operated by multiple parties, including Aqua Comms, Bulk Infrastructure, Facebook, Google and others. Aqua Comms, the Irish cable owner/operator and carriers’ carrier, will serve as the system operator and landing party in the U.S.A., Ireland, and Denmark. Bulk Infrastructure of Norway will be the owner and landing party for the Norwegian branch options.

The HAVFRUE subsea cable system will be optimized for coherent transmission and will offer a cross-sectional cable capacity of 108 Tbps, scalable to higher capacities utilizing future-generation SLTE technology. SubCom will incorporate its Wavelength Selective Switching Reconfigurable Optical Add Drop Multiplexer (WSS-ROADM) for flexible wavelength allocation over the system design life. It is the first new cable system in almost two decades to traverse the North Atlantic connecting mainland Northern Europe to the U.S.A. TE SubCom is the system supplier.

The 3,900 kilometer Hong Kong - Guam Cable system (HK-G) will offer 48 Tbps of design capacity when it comes into service in late 2019. It features 100G optical transmission capabilities and is being built by RTI Connectivity Pte. Ltd. (RTI-C) and NEC Corporation with capital from the Fund Corporation for the Overseas Development of Japan's ICT and Postal Services Inc. (Japan ICT Fund), along with syndicated loans from Japanese institutions including NEC Capital Solutions Limited, among others. In Hong Kong, the cable is slated to land in Tseung Kwan O (TKO) and will land in Piti, Guam at the recently completed Teleguam Holdings LLC (GTA) cable landing station. HK-G will land in the same facility as the Southeast Asia - United States Cable System (SEA-US).

Google also noted its direct investment in 11 cables, including those planned or under construction:

Cable       Year in service    Landings
Curie       2019               US, Chile
Havfrue     2019               US, IE, DK
HK-G        2019               HK, GU
Indigo      2019               SG, ID, AU
PLCN        2019               HK, LA
Tannat      2018               BR, UY
Junior      2018               Rio, Santos
Monet       2017               US, BR
FASTER      2016               US, JP, TW
SJC         2013               JP, HK, SG
Unity       2010               US, JP


Wednesday, December 13, 2017

Google opens AI Research Center in Beijing

Google is opening an AI China Center to focus on basic research. Fei-Fei Li, Chief Scientist of Google Cloud AI/ML, notes that many of the world's top experts in AI are Chinese.

Google also has AI research groups located in New York, Toronto, London and Zurich.

Tuesday, November 28, 2017

Google plans cloud data center in Hong Kong

Google Cloud Platform will open a new data center region in Hong Kong in 2018.

The GCP Hong Kong region is being designed for high availability, launching with three zones to protect against service disruptions.

Hong Kong will be the sixth GCP region in Asia Pacific, joining the recently launched Mumbai, Sydney, and Singapore regions, as well as Taiwan and Tokyo.

Google says it has other Asia Pacific investments in the works.

Tuesday, November 14, 2017

Google launches Cloud Spanner Multi-Region with Five 9s SLA

Google announced the general availability of its Cloud Spanner Multi-Region configurations, which enable application developers to achieve synchronous replication of transactions across regions and continents.

Google describes Cloud Spanner as "the first and only enterprise-grade, globally distributed and strongly consistent database service built specifically for the cloud that combines the benefits and familiarity of relational database semantics with non-relational scale and performance."

So, regardless of where requests originate, Cloud Spanner can read and write up-to-date (strongly consistent) data globally, and do so with minimal latency for end users. Google is promising a 99.999% availability SLA with no planned downtime.

Google Cloud Spanner also ensures database resiliency in the event of a regional failure.
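As a minimal sketch of what a strongly consistent read looks like from the client side, using the google-cloud-spanner Python library (the instance, database and table names below are hypothetical):

```python
# Minimal sketch of a strongly consistent read against Cloud Spanner.
# Instance, database and table names are hypothetical.
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("my-instance")
database = instance.database("my-database")

# A snapshot without a timestamp bound performs a "strong" read: it observes
# every transaction committed before the read began, in any region.
with database.snapshot() as snapshot:
    results = snapshot.execute_sql("SELECT OrderId, Status FROM Orders LIMIT 10")
    for row in results:
        print(row)
```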

Sunday, November 12, 2017

Google chops latency of its Andromeda SDN stack by 40%

Google released a new edition of its Andromeda SDN stack that reduces network latency between Compute Engine VMs by 40% over the previous version.

Andromeda 2.1, which underpins all of Google Cloud Platform (GCP), introduces a hypervisor bypass that builds on virtio, the Linux paravirtualization standard for device drivers. This enables the Compute Engine guest VM and the Andromeda software switch to communicate directly via shared memory network queues, bypassing the hypervisor completely for performance-sensitive per-packet operations.

Google noted that it has cut the latency of its SDN stack by nearly a factor of 8 since it first launched Andromeda in 2014.

https://cloudplatform.googleblog.com/2017/11/Andromeda-2-1-reduces-GCPs-intra-zone-latency-by-40-percent.html

Wednesday, October 25, 2017

Cisco and Google Partner on New Hybrid Cloud Solution

Cisco and Google Cloud have formed a partnership to deliver a hybrid cloud solution that enables applications and services to be deployed, managed and secured across on-premises environments and Google Cloud Platform. Pilot implementations are expected to launch early next year, with commercial rollout later in 2018.

The main idea is to deliver a consistent Kubernetes environment for both on-premises Cisco Private Cloud Infrastructure and Google’s managed Kubernetes service, Google Container Engine.

The companies said their open hybrid cloud offering will provide enterprises with a way to run, secure and monitor workloads, enabling them to optimize their existing investments, plan their cloud migration at their own pace and avoid vendor lock-in.

Cisco and Google Cloud hybrid solution highlights:


  • Orchestration and Management – Policy-based Kubernetes orchestration and lifecycle management of resources, applications and services across hybrid environments
  • Networking – Extend network policy and configurations to multiple on-premises and cloud environments
  • Security – Extend security policy and monitor application behavior
  • Visibility and Control – Real-time network and application performance monitoring and automation
  • Cloud-ready Infrastructure – Hyperconverged platform supporting existing application and cloud-native Kubernetes environments
  • Service Management with Istio – Open-source solution provides a uniform way to connect, secure, manage and monitor microservices
  • API Management – Google's Apigee enterprise-class API management enables legacy workloads running on premises to connect to the cloud through APIs
  • Developer Ready – Cisco's DevNet Developer Center provides tools and resources for cloud and enterprise developers to code in hybrid environments
  • Support – Joint coordinated technical support for the solution

"Our partnership with Google gives our customers the very best cloud has to offer— agility and scale, coupled with enterprise-class security and support," said Chuck Robbins, chief executive officer, Cisco. "We share a common vision of a hybrid cloud world that delivers the speed of innovation in an open and secure environment to bring the right solutions to our customers."

"This joint solution from Google and Cisco facilitates an easy and incremental approach to tapping the benefits of the Cloud. This is what we hear customers asking for," said Diane Greene, CEO, Google Cloud.

Thursday, October 12, 2017

IBM and Google collaborate on container security API

IBM is joining forces with Google on Grafeas, an open source initiative to define a uniform way to audit and govern the modern software supply chain.

Grafeas (“scribe” in Greek) provides a central source of truth for tracking and enforcing policies across an ever growing set of software development teams and pipelines. The idea is to provide a central store that other applications and tools can query to retrieve metadata on software components of all kinds.

IBM is also working on Kritis, a component which allows organizations to set Kubernetes governance policies based on metadata stored in Grafeas.
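To illustrate the "central metadata store that tools can query" idea, here is a rough sketch of a pipeline tool polling a Grafeas server for the occurrences recorded against a project. The server address, project name and the v1alpha1 resource path are assumptions drawn from the early Grafeas spec, not verified endpoints.

```python
# Rough sketch of querying a Grafeas metadata server for the occurrences
# (e.g. vulnerability findings, build details) recorded against a project.
# The server URL, project name and v1alpha1 resource path are assumptions
# based on the early Grafeas spec, not verified endpoints.
import requests

GRAFEAS_URL = "http://localhost:8080"      # assumed local Grafeas server
PROJECT = "projects/example-project"       # hypothetical project resource name

resp = requests.get(f"{GRAFEAS_URL}/v1alpha1/{PROJECT}/occurrences")
resp.raise_for_status()

for occurrence in resp.json().get("occurrences", []):
    # Each occurrence ties a piece of metadata (a "note") to a specific artifact.
    print(occurrence.get("noteName"), "->", occurrence.get("resourceUrl"))
```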

Wednesday, September 27, 2017

Google Cloud IoT Core released to beta testing

Google Cloud IoT Core, a fully-managed service on Google Cloud Platform (GCP) for securely connecting and managing IoT devices at scale, has now entered public beta testing.

Google has designed its Cloud IoT Core to enable organizations to connect and centrally manage millions of globally dispersed IoT devices. The company says it is working with many customers across industries such as transportation, oil and gas, utilities, healthcare and ride-sharing.

The Google Cloud IoT solution ingests IoT data and then can connect to other Google analytics services including Google Cloud Pub/Sub, Google Cloud Dataflow, Google Cloud Bigtable, Google BigQuery, and Google Cloud Machine Learning Engine.
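To illustrate the device side of that flow, here is a minimal sketch of connecting to the Cloud IoT Core MQTT bridge and publishing one telemetry message, following the pattern in Google's published samples. The project, registry and device names, plus the key and certificate files, are placeholders.

```python
# Minimal device-side sketch: connect to the Cloud IoT Core MQTT bridge and
# publish one telemetry message. Project/registry/device names and key files
# are placeholders; the bridge address and JWT-as-password scheme follow
# Google's published Cloud IoT Core samples.
import datetime
import ssl

import jwt                      # PyJWT
import paho.mqtt.client as mqtt

project_id, cloud_region = "my-project", "us-central1"
registry_id, device_id = "my-registry", "my-device"

# Devices authenticate with a short-lived JWT signed by their private key.
token = jwt.encode(
    {"iat": datetime.datetime.utcnow(),
     "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=60),
     "aud": project_id},
    open("rsa_private.pem").read(),
    algorithm="RS256",
)

client = mqtt.Client(client_id=(
    f"projects/{project_id}/locations/{cloud_region}/"
    f"registries/{registry_id}/devices/{device_id}"))
client.username_pw_set(username="unused", password=token)
client.tls_set(ca_certs="roots.pem", tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect("mqtt.googleapis.com", 8883)

# Messages published to the events topic land in the Cloud Pub/Sub topic
# attached to the device registry, ready for Dataflow, BigQuery, etc.
client.publish(f"/devices/{device_id}/events", '{"temperature": 21.5}', qos=1)
client.disconnect()
```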

Tuesday, September 26, 2017

Google acquires Bitium for identity and access management

Google has acquired Bitium, a start-up based in Santa Monica, California, that specializes in identity and access management. Financial terms were not disclosed.

Bitium, which was founded in 2012, provides tools to support single sign-on, password management and analytics across cloud services for small, medium and enterprise businesses. Bitium helps enterprises manage access to a range of web-based applications, including Google Apps and Microsoft Office 365, as well as social networks and CRM, collaboration and marketing tools.

Bitium will now become part of the Google Cloud team.

Wednesday, September 20, 2017

Google acquires HTC's hardware team

Google will hire a team of hardware engineers from HTC, the Taiwan-based mobile and consumer electronics firm.

The HTC team has been working closely with Google on its Pixel smartphone line.

The deal, reportedly worth US$1.1 billion, also includes a non-exclusive license for HTC intellectual property.

Google is planning to unveil its latest line of hardware products on October 4th.

  • In 2012, Google acquired Motorola Mobility in a deal valued at US$12.5 billion. The acquisition included an extensive intellectual property portfolio. Google later sold the Motorola handset business to Lenovo for $2.9 billion.

Saturday, September 9, 2017

Network service tiers become part of the public cloud discussion

An interesting development has just come out of the Google Cloud Platform.

Until now, we’ve seen Google's internal global network grow by leaps and bounds. There is a public-facing network as well, with peering points all over the globe, that seemingly must always keep ahead of the growth of the Internet overall. The internal network, however, which handles all the traffic between Google services and its data centers, is said to grow much faster. It is this unmatched, private backbone that is one of the distinguishing features of Google Compute Engine (GCE), the company’s public cloud service.

GCE provides on-demand virtual machines (VMs) running on the same class of servers and in the same data centers as Google’s own applications, including Search, Gmail, Maps, YouTube and Docs. The service delivers global scaling using Google’s load balancing over the private, worldwide fiber backbone. This service continues and is now known as the “Premium Tier.”

The new option, called the “Standard Tier,” directs the user’s outbound application traffic to exit the Google network at the nearest IP peering point. From there, the traffic traverses ISP network(s) all the way to its destination. Google says this option will offer lower performance at lower cost. For Google, the savings come from not having to use as much long-haul bandwidth to carry the traffic and from consuming fewer resources in load-balancing traffic across regions or failing over to other regions in the event of an anomaly. In a similar way, inbound traffic travels over ISP networks until it reaches the region of the Google data center where the application is hosted. At that point, ingress occurs onto the Google network.

Google has already conducted performance tests comparing its Premium Tier to the Standard Tier. The tests, conducted by Cedexis, found that Google’s own network delivers higher throughput and lower latency than the Standard Tier, which takes more routing hops and rides over third-party network(s). Test data from the US Central region from mid-August indicate that the Standard Tier was delivering around 3,051 kbps of throughput while the Premium Tier was delivering around 5,303 kbps, roughly a 73% boost in throughput. For latency in the US Central region, the Standard Tier was measured at 90 ms at the 50th percentile, while the Premium Tier was measured at 77 ms, roughly a 17% performance advantage.
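Working backwards from the Cedexis figures, the quoted percentages check out:

```python
# Sanity check of the quoted Cedexis comparison for the US Central region.
standard_kbps, premium_kbps = 3_051, 5_303
standard_ms, premium_ms = 90, 77

throughput_gain = premium_kbps / standard_kbps - 1
latency_gain = (standard_ms - premium_ms) / premium_ms

print(f"Throughput advantage: {throughput_gain:.0%}")   # ~74%, i.e. the "roughly 73%" figure
print(f"Latency advantage:    {latency_gain:.0%}")      # ~17% at the 50th percentile
```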

Looking at the pricing differential

The portal for the Google Cloud platform shows a 17% savings for Standard Tier for North America to North America traffic.

Some implications of network service tiering

The first observation is that with these new Network Service Tiers, Google is recognizing that its own backbone is not a resource with infinite capacity and zero cost that can be used to carry all traffic indiscriminately. If the Google infrastructure is transporting packets with greater throughput and lower latency from one side of the planet to another, why shouldn't it charge more for this service?

The second observation is that network transport becomes a more important consideration and comparison point for public cloud services in general.

Third, it could be advantageous for other public clouds to assemble their own Network Service Tiers in partnership with carriers. The other hyperscale public cloud companies also operate global-scale, private transport networks that outperform the hop-by-hop routing of the Internet.  Some of these companies are building private transatlantic and transpacific subsea cables, but building a private, global transport network at Google scale is costly.  Network service tiering should bring many opportunities for partnerships with carriers.

Saturday, August 19, 2017

Box integrates Google Cloud Vision for image recognition

Box is integrating Google Cloud Vision into its cloud storage service to provide its enterprise customers with advanced image recognition.

The capability, which is currently in private beta, leverages machine learning to help enterprises improve workflows and drive efficiencies through more accurate discovery and deeper insights into unstructured content stored in Box.
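For a sense of what the underlying Vision API returns, here is a minimal label-detection sketch using the google-cloud-vision Python client. The image file name is a placeholder, Box's own integration runs server-side within its service, and the exact client surface varies between library versions.

```python
# Minimal label-detection sketch against the Cloud Vision API.
# The image file is a placeholder; Box's integration runs server-side inside
# its own service, and the exact client surface varies by library version.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("scanned_contract.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```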

“Organizations today have no way to extract insights from the massive amounts of unstructured data that are essential to their business, missing a huge opportunity to drive innovation, efficiency, and cost savings,” said Aaron Levie, cofounder and CEO, Box. “By combining the machine learning capabilities of Google Cloud with the critical data businesses manage and secure in Box, we are enabling our customers – for the first time – to unlock tremendous new value from their content, digitize manual workflows, and accelerate business processes.”

“Box’s application of Google Cloud’s machine learning APIs brings to life the potential of AI in the enterprise,” said Fei-Fei Li, Chief Scientist, Google Cloud AI and Professor of Computer Science, Stanford University. “Understanding images remains a challenge for businesses and Box’s application of the Vision API demonstrates how the accessibility of machine learning models can unlock potential within a business’s own data. Ultimately it will democratize AI for more people and businesses.”

http://www.box.com

Wednesday, August 2, 2017

Google picks up the pace in cloud computing

When it comes to the cloud, Google certainly isn't taking a summer holiday. Over the past few weeks there has been a string of cloud-related developments from Google, showing that it is very focused on delivering innovative services and perhaps narrowing the considerable market share gap between itself and rivals IBM, Microsoft Azure and Amazon Web Services. There is a new Google cloud data centre in London, a new data transfer service, a new transfer appliance and a new offering for computational drug discovery. And this week came word from Bloomberg that Google is gearing up to launch its first quantum computing cloud services. While the company declined to comment directly on the Bloomberg story, it is understood that quantum computing is an area of keen interest for Google.

New London data centre

Customers of Google Cloud Platform (GCP) can use the new region in London (europe-west2) to run applications. Google noted that London is its tenth region, joining the existing European region in Belgium. Future European regions include Frankfurt, the Netherlands and Finland. Google also stated that it is working diligently to address EU data protection requirements. Most recently, Google announced a commitment to GDPR compliance across GCP.

Introducing Google Transfer Appliance

This is a pre-configured solution that offers up to 480TB in 4U or 100TB in 2U of raw data capacity in a single rackmount device. Essentially, it is a high-capacity storage server that a customer can install in a corporate data centre. Once the server is full, the customer simply ships the appliance back to Google, which transfers the data to Google Cloud Storage. It offers a capacity of up to one petabyte of compressed data.

The Google Transfer Appliance is a very practical solution even when massive bandwidth connections are available at both ends. For instance, for customers fortunate enough to possess a 10 Gbit/s connection, a 100TB data store would still take around 30 hours to transfer electronically. A 1PB data library would take over 12 days using the same 10 Gbit/s connection, and that is assuming no drops in connectivity performance. Google is now offering a 100TB model priced at $300, plus shipping via FedEx (approximately $500), and a 480TB model priced at $1,800, plus shipping (approximately $900). Amazon offers a similar Snowball Edge data migration appliance for migrating large volumes of data to its cloud the old-fashioned way.
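The line-rate arithmetic behind those estimates looks roughly like this. It is idealized, ignoring protocol overhead and throughput dips, which is why the quoted figures above come out somewhat higher.

```python
# Idealized transfer times at a sustained 10 Gbit/s, ignoring protocol overhead
# and throughput variation (which push real-world estimates higher).
def transfer_time_hours(terabytes: float, gbit_per_s: float = 10.0) -> float:
    bits = terabytes * 1e12 * 8
    return bits / (gbit_per_s * 1e9) / 3600

print(f"100 TB: {transfer_time_hours(100):.0f} hours")       # ~22 hours at line rate
print(f"1 PB:   {transfer_time_hours(1000) / 24:.1f} days")  # ~9 days at line rate
```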

Partnership for computational medicine

Under a partnership with Boston-based Silicon Therapeutics, Google recently deployed the company's INSITE Screening platform on Google Cloud Platform (GCP) to analyse over 10 million commercially available molecular compounds as potential starting materials for next-generation medicines. In one week, the platform performed over 500 million docking computations to evaluate how a protein responds to a given molecule. Each computation involved a docking program that predicted the preferred orientation of a small molecule relative to a protein and the associated energetics, so it could assess whether the molecule would bind and alter the function of the target protein.

With a combination of Google Compute Engine standard and Preemptible VMs, the partners used up to 16,000 cores, for a total of 3 million core-hours and a cost of about $30,000. Google noted that a final stage of the calculations delivered all-atom molecular dynamics (MD) simulations on the top 1,000 molecules to determine which ones to purchase and experimentally assay for activity.
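The compute economics roughly check out. The per-core-hour price below is simply implied by the reported totals; it is shown only to make the arithmetic explicit.

```python
# Rough check of the virtual screening run's scale and cost, using only the
# figures reported above.
core_hours = 3_000_000
peak_cores = 16_000
reported_cost = 30_000          # USD, as reported
docking_jobs = 500_000_000

print(f"Wall-clock at peak: ~{core_hours / peak_cores / 24:.1f} days")          # ~7.8 days, i.e. about a week
print(f"Implied price: ~${reported_cost / core_hours:.3f} per core-hour")       # ~$0.01, consistent with preemptible rates
print(f"Compute per docking job: ~{core_hours * 3600 / docking_jobs:.0f} core-seconds")
```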

Pushing ahead with Kubernetes

The recent open source release of Kubernetes 1.7 is now available on Container Engine, Google Cloud Platform’s (GCP) managed container service. The end result is better workload isolation within a cluster, which is a frequently requested security feature in Kubernetes. Google also announced that its Container Engine, which saw more than 10x growth last year, is now available from the following GCP regions:

  • Sydney (australia-southeast1)
  • Singapore (asia-southeast1)
  • Oregon (us-west1)
  • London (europe-west2)

Container Engine clusters are already up and running at locations from Iowa to Belgium and Taiwan.

New strategic partnership with Nutanix

Google has formed a strategic partnership with Nutanix to help remove friction from hybrid cloud deployments for enterprises.

Reimagining virtual public clouds at global scale

Integrating cloud resources from different areas of the world no longer requires negotiating and installing a VPN solution from one or more service providers. Google can do it for you using its own global backbone. VPC is private, and with Google VPC customers can get private access to Google services such as storage, big data, analytics or machine learning, without having to give the service a public IP address. Global VPCs are divided into regional subnets that use Google’s private backbone to communicate as needed.

VPC, formerly known as GCP Virtual Networks, offers a privately administered space within Google Cloud Platform (GCP). This means global connectivity across locations and regions, and the elimination of silos across projects and teams.

Further information on Google Cloud Platform is available on the company's blog: https://cloudplatform.googleblog.com/

Thursday, May 25, 2017

Google to Rearchitect its Data Centres as it Shifts to AI First Strategy

AI is driving Google to rethink all of its products and services, said Sundar Pichai, Google CEO, at the company's Google I/O event in Mountain View, California. Big strides have recently pushed AI past human-level performance on some real-world image recognition benchmarks, while speech recognition is now widely deployed in smartphone applications to provide a better input interface for users. However, it is one thing for AI to win at chess or a game of Go; it is a significantly greater task to get AI to work at scale. Everything at Google is big, and here are some recent numbers:


  • Seven services, now with over a billion monthly users: Search, Android, Chrome, YouTube, Maps, Play, Gmail
  • Users on YouTube watch over a billion hours of video per day.
  • Every day, Google Maps helps users navigate 1 billion km.
  • Google Drive now has over 800 million active users; every week 3 billion objects are uploaded to Google Drive.
  • Google Photos has over 500 million active users, with over 1.2 billion photos uploaded every day.
  • Android is now running on over 2 billion active devices.

Google is already pushing AI into many of these applications. Examples of this include Google Maps, which is now using machine learning to identify street signs and storefronts. YouTube is applying AI for generating better suggestions on what to watch next. Google Photos is using machine learning to deliver better search results and Android has smarter predictive typing.

Another way that AI is entering the user experience is through voice as an input for many products. Pichai said the improvements in voice recognition over the past year have been amazing. For instance, the Google Home speaker uses only two onboard microphones to assess direction and to identify the user in order to better serve customers. The company also claims that its image recognition algorithms have now surpassed humans (although it is not clear what the metrics are). Another one of the big announcements at this year's Google I/O event concerned Google Lens, a new in-app capability to layer augmented reality on top of images and videos. The company demoed taking a photo of a flower and then asking Google to identify it.

Data centres will have to support the enormous processing challenges of an AI-first world and the network will have to deliver the I/O bandwidth for users to the data centres and for connecting specialised AI servers within and between Google data centres. Pichai said this involves nothing less than rearchitecting the data centre for AI.

Google's AI-first data centres will be packed with Tensor processing units (TPUs) for machine learning. TPUs, which Google launched last year, are custom ASICs that are about 30-50 times faster and more power efficient than general purpose CPUs or GPUs for certain machine learning functions. TPUs are tailored for TensorFlow, an open source software library for machine learning that was developed at Google. TensorFlow was originally developed by the Google Brain team and released under the Apache 2.0 open source license in November 2015. At the time, Google said TensorFlow would be able to run on multiple CPUs and GPUs. Parenthetically, it should also be noted that in November 2016 Intel and Google announced a strategic alliance to help enterprise IT deliver an open, flexible and secure multi-cloud infrastructure for their businesses. One area of focus is the optimisation of the open source TensorFlow library to benefit from the performance of Intel architecture. The companies said their joint work will provide software developers an optimised machine learning library to drive the next wave of AI innovation across a range of models including convolutional and recurrent neural networks.

A quick note about the technology: Google describes TensorFlow as an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
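A minimal example of that graph model, written against the TensorFlow 1.x-style API of the period: the matmul node is an operation, and the tensors flowing into and out of it are the graph edges.

```python
# Minimal TensorFlow 1.x-style data flow graph: tf.matmul defines a node in the
# graph, and the tensors a, b and c are the edges carrying multidimensional arrays.
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])          # 1x2 tensor
b = tf.constant([[3.0], [4.0]])        # 2x1 tensor
c = tf.matmul(a, b)                    # graph node; nothing is computed yet

with tf.Session() as sess:             # running the graph evaluates the node
    print(sess.run(c))                 # [[11.]]
```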

Here come TPUs on the Google Compute Engine

At its recent I/O event, Google introduced the next generation of Cloud TPUs, which are optimised for both neural network training and inference on large data sets. The TPU chips are packaged four to a board, and each board is capable of 180 trillion floating point operations per second (180 teraflops). Boards are then assembled into pods, each capable of 11.5 petaflops. This is an important advance for data centre infrastructure design for the AI era, said Pichai, and it is already being used within the Google Compute Engine service. This means that the same racks of TPU hardware can now be purchased online like any other Google cloud service.
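The pod figure follows directly from the per-board number; the 64-boards-per-pod count below is the configuration Google described for the second-generation TPU pod.

```python
# How the pod figure follows from the per-board figure. The 64-board pod size
# is the configuration Google described for the second-generation TPU pod.
teraflops_per_board = 180
boards_per_pod = 64

pod_petaflops = teraflops_per_board * boards_per_pod / 1000
print(f"Pod performance: {pod_petaflops:.2f} petaflops")   # ~11.52, quoted as 11.5 petaflops
```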

There is an interesting photo that Sundar Pichai shared with the audience, taken inside a Google data centre, which shows four racks of equipment packed with TensorFlow TPU pods. Bundles of fibre pairs linking the four racks can be seen. Apparently, there is no top-of-rack switch, but that is not certain. It will be interesting to see whether these new designs will soon fill a meaningful portion of the data centre floor.
