
Wednesday, February 14, 2018

Will mobile networks be ready for Waymo's driverless ride-hailing service?

by James E. Carroll

Fiat Chrysler Automobiles (FCA) has confirmed an order for several thousand Pacifica Hybrid minivans to be delivered this calendar year to Waymo, the autonomous car subsidiary of Alphabet (Google's parent company), for deployment in several U.S. cities. Although the actual size of the order was not disclosed, it is believed to be between 3,000 and 10,000 autonomous vehicles. You may have already seen driverless Waymo minivans on the streets in live testing. Last year, FCA delivered 500 of the Pacifica minivans, adapted for self-driving, to Waymo for the test fleet. An earlier batch of 500 Pacificas was delivered in late 2016. The vehicles have racked up over 4 million miles (6.4 million kilometres) of testing on U.S. streets so far.

With this order for thousands of self-driving Waymos, the prospect of a commercial launch is in sight. For mobile network operators, this could be a golden opportunity. The question is whether mobile operators are bidding for this business.

"With the world's first fleet of fully self-driving vehicles on the road, we've moved from research and development, to operations and deployment," said John Krafcik, CEO of Waymo. "The Pacifica Hybrid minivans offer a versatile interior and a comfortable ride experience, and these additional vehicles will help us scale."

Although we do not know which mobile operator(s) Waymo has been working with, we do know that the connection from vehicle to network can be no better than LTE, as none of the big four U.S. operators has a 5G trial network in place for this level of testing.

Two conclusions can be drawn: (1) autonomous vehicle R&D programs are ahead of the 5G rollout, and (2) the first generation of autonomous vehicles may not require 5G at all.
In many ways, 5G networks promise to be an ideal platform for autonomous vehicle fleets. Consider:
(a) reduced network latency
(b) dense small cell deployments ideally near street level in urban cores
(c) high bandwidth throughput
(d) network slicing
(e) enhanced security

Autonomous vehicle fleets would also be the ideal 5G customer for mobile operators. Let's say a company such as Waymo procures and deploys a fleet of 5,000 vehicles. The connectivity requirement will be 24/7. These vehicles are described as "data centers on wheels." Some estimates say each autonomous vehicle could generate 4TB of data daily. Of course, only a percentage of that data would need to be offloaded in real time, if at all, but clearly the aggregate connectivity requirement would be substantial.
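
To put rough numbers on the opportunity, here is a back-of-the-envelope sketch in Python. The 5,000-vehicle fleet size and the 4TB-per-day figure come from the discussion above; the 1% real-time offload fraction is purely an assumption for illustration.

# Back-of-the-envelope estimate of the uplink demand a driverless ride-hailing
# fleet could place on a mobile network. Fleet size and per-vehicle data volume
# are taken from the article; the offload fraction is a hypothetical assumption.
FLEET_SIZE = 5_000            # vehicles
DATA_PER_VEHICLE_TB = 4.0     # TB generated per vehicle per day
OFFLOAD_FRACTION = 0.01       # assumed share sent over the mobile network
SECONDS_PER_DAY = 24 * 3600

daily_offload_tb = FLEET_SIZE * DATA_PER_VEHICLE_TB * OFFLOAD_FRACTION
# Convert terabytes per day into an average sustained rate in gigabits per second.
avg_rate_gbps = daily_offload_tb * 8 * 1000 / SECONDS_PER_DAY

print(f"Fleet offload per day: {daily_offload_tb:,.0f} TB")
print(f"Average sustained uplink: {avg_rate_gbps:.1f} Gbps across the fleet")
print(f"Per vehicle average: {avg_rate_gbps * 1000 / FLEET_SIZE:.2f} Mbps")

Even at a 1% offload fraction, that works out to roughly 200 TB per day and a few Mbps per vehicle sustained around the clock, exactly the kind of predictable, always-on demand operators like to plan around.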

Background on Waymo

Waymo began developing its self-driving platform in 2009. At the time it was known as Google's Self-Driving Car project and was led by Sebastian Thrun, former director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View.  The Waymo identity was adopted in December 2016. The company remains based at the Google campus in Mountain View, California.

For some months now, Waymo has been test driving the Pacifica minivans in the Phoenix metro region without anyone in the driver's seat, and the test program has been expanding rapidly. Just after the New Year, Waymo announced that Atlanta would be its 35th test city.
In its nine years of development, Waymo has worked on every aspect of its forthcoming Transportation-as-a-Service platform. Its software is perhaps the key differentiator that will set it apart from the many fast followers. It is also the subject of the ongoing lawsuit launched by Waymo against Uber regarding purportedly stolen intellectual property.

Besides Fiat Chrysler, we know that Waymo is working with a few other technology suppliers. Waymo’s cloud service provider, of course, is Google. On the hardware side, Intel has disclosed that it supplied sensor processing, general compute and connectivity technologies for Waymo's test fleet of Pacifica minivans. This includes Xeon processors, Arria FPGAs, and Gigabit Ethernet and XMM LTE modems. The partnership between Intel and Waymo was cited in a September blog post by Intel CEO Brian Krzanich.

Collecting mapping and other data from the fleet

At CES 2018, Intel disclosed that its Mobileye next-generation aftermarket collision avoidance system is capable of "collecting dynamic data to make cities smarter, safer and Autonomous Ready."
The idea is to harvest valuable information on city streets and infrastructure to create high-definition crowdsourced maps. Mobileye is developing Road Experience Management (REM) technology to make this easier. Many companies, as well as government authorities, will see value in harvesting this data from vehicles. Collecting this data need not require an autonomous vehicle. Plenty of regular buses, taxis, and trucks criss-cross cities every day on established routes. Retrofitting these vehicles for mass-scale data gathering can be as simple as installing a single camera and sensor, along with a mobile broadband connection. In fact, Mobileye has announced a number of players who are already moving in this direction:

  • The city of Dusseldorf, Germany, is expected to equip 750 vehicles with Mobileye 8 Connect to investigate the suitability of Dusseldorf’s existing infrastructure for autonomous vehicles and connected driving. The project is funded by the German federal government.
  • London black cabs will be fitted with Mobileye 8 Connect to create an HD map of the city. Gett, a start-up working on mobility solutions, will equip approximately 500 London black cabs this year.
  • New York City will also get an HD map based on Mobileye crowdsourced data. Buggy TLC Leasing, which provides leasing of vehicles for ride-sharing services such as Uber, is expected to outfit approximately 2,000 New York City-based vehicles with Mobileye Aftermarket.
  • Berkshire Hathaway GUARD Insurance will equip approximately 1,000 to 2,000 trucks with Mobileye 8 Connect to generate an HD map of where these vehicles operate.


Tuesday, January 16, 2018

Google commissions own subsea cable from CA to Chile

TE Subcom has been awarded a contract by Alphabet, the parent company of Google, to build a subsea cable from California to Chile. A ready-for-service date is expected in 2019.

The Curie Submarine Cable will be a four fiber-pair subsea system spanning over 10,000 km from Los Angeles to Valparaiso. It will include a branching unit for future connectivity to Panama.

The project is believed to be the first subsea cable to land in Chile in 20 years.

“We’re proud to provide comprehensive services to Google on this project. Leveraging existing TE SubCom infrastructure through our SubCom Global Services (SGS) options put us in position to be a true partner to them. Our role in the continued growth of global connectivity and information sharing is a point of substantial pride for the TE SubCom team,” said Sanjay Chowbey, president of TE SubCom.

Google joins Havfrue and HK-G subsea cable projects

Google announced its participation in the HAVFRUE subsea cable project across the north Atlantic and in the Hong Kong to Guam cable system, both of which are expected to enter service in 2019.

In addition, Google confirmed that it is on-track to open cloud regions (data centers) in the Netherlands and Montreal this calendar quarter, followed by Los Angeles, Finland and Hong Kong.

HAVFRUE is a newly announced subsea cable project that will link New Jersey to the Jutland Peninsula of Denmark, with a branch landing in County Mayo, Ireland. Optional branch extensions to Northern and Southern Norway are also included in the design. The HAVFRUE system will be owned and operated by multiple parties, including Aqua Comms, Bulk Infrastructure, Facebook, Google and others. Aqua Comms, the Irish cable owner/operator and carriers’ carrier, will serve as the system operator and landing party in the U.S.A., Ireland, and Denmark. Bulk Infrastructure of Norway will be the owner and landing party for the Norwegian branch options. The HAVFRUE subsea cable system will be optimized for coherent transmission and will offer a cross-sectional cable capacity of 108 Tbps, scalable to higher capacities using future generations of SLTE technology. SubCom will incorporate its Wavelength Selective Switching Reconfigurable Optical Add/Drop Multiplexer (WSS-ROADM) for flexible wavelength allocation over the system design life. It is the first new cable system in almost two decades to traverse the North Atlantic connecting mainland Northern Europe to the U.S.A. TE SubCom is the system supplier.

The 3,900 kilometer Hong Kong - Guam Cable system (HK-G) will offer 48 Tbps of design capacity when it comes into service in late 2019. It features 100G optical transmission capabilities and is being built by RTI Connectivity Pte. Ltd. (RTI-C) and NEC Corporation with capital from the Fund Corporation for the Overseas Development of Japan's ICT and Postal Services Inc. (Japan ICT Fund), along with syndicated loans from Japanese institutions including NEC Capital Solutions Limited, among others. In Hong Kong, the cable is slated to land in Tseung Kwan O (TKO) and will land in Piti, Guam at the recently completed Teleguam Holdings LLC (GTA) cable landing station. HK-G will land in the same facility as the Southeast Asia - United States Cable System (SEA-US).

Google also noted its direct investment in 11 cables, including those planned or under construction:

Cable        Year in service    Landings
Curie        2019               US, Chile
Havfrue      2019               US, IE, DK
HK-G         2019               HK, GU
Indigo       2019               SG, ID, AU
PLCN         2019               HK, LA
Tannat       2018               BR, UY
Junior       2018               Rio, Santos
Monet        2017               US, BR
FASTER       2016               US, JP, TW
SJC          2013               JP, HK, SG
Unity        2010               US, JP


Wednesday, December 13, 2017

Google opens AI Research Center in Beijing

Google is opening an AI China Center to focus on basic research. Fei-Fei Li, Chief Scientist for AI/ML at Google Cloud, notes that many of the world's top experts in AI are Chinese.

Google also has AI research groups located in New York, Toronto, London and Zurich.

Tuesday, November 28, 2017

Google plans cloud data center in Hong Kong

Google Cloud Platform will open a new data center region in Hong Kong in 2018.

The GCP Hong Kong region is being designed for high availability, launching with three zones to protect against service disruptions.

Hong Kong will be the sixth GCP region in Asia Pacific, joining the recently launched Mumbai, Sydney, and Singapore regions, as well as Taiwan and Tokyo.

Google says it has other Asia Pacific investments in the works.

Tuesday, November 14, 2017

Google launches Cloud Spanner Multi-Region with Five 9s SLA

Google announced the general availability of its Cloud Spanner Multi-Region configurations, which enable application developers to achieve synchronous replication of transactions across regions and continents.

Google describes Cloud Spanner as "the first and only enterprise-grade, globally distributed and strongly consistent database service built specifically for the cloud that combines the benefits and familiarity of relational database semantics with non-relational scale and performance."

Regardless of location, applications can read and write up-to-date (strongly consistent) data globally, with minimal latency for end users. Google is promising a 99.999% availability SLA with no planned downtime.

Google Cloud Spanner also ensures database resiliency in the event of a regional failure.
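
For developers, the multi-region configurations do not change the programming model: the same client calls return strongly consistent results wherever they are issued from. Below is a minimal sketch using the google-cloud-spanner Python client; the instance ID, database ID, table and column names are placeholders, and the snippet assumes the library is installed and application credentials are configured.

# Minimal sketch of a write and a strongly consistent read against Cloud Spanner.
# Instance, database, table and column names are placeholders.
from google.cloud import spanner

client = spanner.Client()
instance = client.instance("example-instance")      # placeholder instance ID
database = instance.database("example-database")    # placeholder database ID

def insert_ride(transaction):
    # DML executed in a transaction is synchronously replicated across the
    # replicas of a multi-region configuration before the commit returns.
    transaction.execute_update(
        "INSERT INTO Rides (RideId, City) VALUES (@id, @city)",
        params={"id": 1, "city": "Phoenix"},
        param_types={"id": spanner.param_types.INT64,
                     "city": spanner.param_types.STRING},
    )

database.run_in_transaction(insert_ride)

# A snapshot read returns up-to-date, strongly consistent data regardless of
# which region happens to serve the request.
with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT RideId, City FROM Rides"):
        print(row)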

Sunday, November 12, 2017

Google chops latency of its Andromeda SDN stack by 40%

Google released a new edition of its Andromeda SDN stack that reduces network latency between Compute Engine VMs by 40% over the previous version.

Andromeda 2.1, which underpins all of Google Cloud Platform (GCP), introduces a hypervisor bypass that builds on virtio, the Linux paravirtualization standard for device drivers. This enables the Compute Engine guest VM and the Andromeda software switch to communicate directly via shared memory network queues, bypassing the hypervisor completely for performance-sensitive per-packet operations.

Google noted that it has cut the latency of its SDN stack by nearly a factor of 8 since it first launched Andromeda in 2014.
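
The headline numbers come from Google's own benchmarks, but it is easy to get a feel for VM-to-VM latency yourself. The sketch below is not Google's methodology, just a simple UDP round-trip timer to run between two Compute Engine VMs in the same zone; the port number and command-line arguments are arbitrary choices.

# Simple UDP round-trip timer for eyeballing VM-to-VM latency inside a zone.
# Run "python rtt.py server" on one VM and "python rtt.py client <peer-ip>" on
# another. Not a rigorous benchmark, just a quick sanity check.
import socket, sys, time

PORT = 50007
SAMPLES = 1000

def server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    while True:
        data, addr = sock.recvfrom(64)
        sock.sendto(data, addr)                     # echo the probe straight back

def client(peer_ip):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    rtts = []
    for _ in range(SAMPLES):
        t0 = time.perf_counter()
        sock.sendto(b"ping", (peer_ip, PORT))
        sock.recvfrom(64)
        rtts.append((time.perf_counter() - t0) * 1e6)   # microseconds
    rtts.sort()
    print(f"median RTT: {rtts[SAMPLES // 2]:.1f} us, "
          f"p99: {rtts[int(SAMPLES * 0.99)]:.1f} us")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])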

https://cloudplatform.googleblog.com/2017/11/Andromeda-2-1-reduces-GCPs-intra-zone-latency-by-40-percent.html

Wednesday, October 25, 2017

Cisco and Google Partner on New Hybrid Cloud Solution

Cisco and Google Cloud have formed a partnership to deliver a hybrid cloud solution that enables applications and services to be deployed, managed and secured across on-premises environments and Google Cloud Platform. Pilot implementations are expected to launch early next year, with commercial rollout later in 2018.

The main idea is to deliver a consistent Kubernetes environment for both on-premises Cisco Private Cloud Infrastructure and Google’s managed Kubernetes service, Google Container Engine.

The companies said their open hybrid cloud offering will provide enterprises with a way to run, secure and monitor workloads, thus enabling them to optimize their existing investments, plan their cloud migration at their own pace and avoid vendor lock in.

Cisco and Google Cloud hybrid solution highlights:


  • Orchestration and Management – Policy-based Kubernetes orchestration and lifecycle management of resources, applications and services across hybrid environments
  • Networking – Extend network policy and configurations to multiple on-premises and cloud environments
  • Security – Extend security policy and monitor application behavior
  • Visibility and Control – Real-time network and application performance monitoring and automation
  • Cloud-ready Infrastructure – Hyperconverged platform supporting existing application and cloud-native Kubernetes environments
  • Service Management with Istio – Open-source solution provides a uniform way to connect, secure, manage and monitor microservices
  • API Management – Google's Apigee enterprise-class API management enables legacy workloads running on premises to connect to the cloud through APIs
  • Developer Ready – Cisco's DevNet Developer Center provides tools and resources for cloud and enterprise developers to code in hybrid environments
  • Support – Joint coordinated technical support for the solution
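
The practical appeal of a consistent Kubernetes environment is that the same tooling and API calls work against the on-premises cluster and against Google Container Engine. The sketch below uses the official Kubernetes Python client to inventory deployments in both places; the kubeconfig context names are hypothetical, and the snippet assumes the kubernetes package is installed and a kubeconfig has both clusters registered.

# One script, two clusters: list the deployments running on an on-premises
# cluster and on a Container Engine cluster through the same Kubernetes API.
# The context names below are hypothetical.
from kubernetes import client, config

CONTEXTS = ["onprem-cisco-cluster", "gke-cloud-cluster"]   # assumed context names

for ctx in CONTEXTS:
    config.load_kube_config(context=ctx)        # point the client at this cluster
    apps = client.AppsV1Api()
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{ctx}: {len(deployments.items)} deployments")
    for d in deployments.items:
        print(f"  {d.metadata.namespace}/{d.metadata.name} replicas={d.spec.replicas}")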

"Our partnership with Google gives our customers the very best cloud has to offer— agility and scale, coupled with enterprise-class security and support," said Chuck Robbins, chief executive officer, Cisco. "We share a common vision of a hybrid cloud world that delivers the speed of innovation in an open and secure environment to bring the right solutions to our customers."

"This joint solution from Google and Cisco facilitates an easy and incremental approach to tapping the benefits of the Cloud. This is what we hear customers asking for," said Diane Greene, CEO, Google Cloud.

Thursday, October 12, 2017

IBM and Google collaborate on container security API

IBM is joining forces with Google to create and open source Grafeas, a project that defines a uniform way to audit and govern the modern software supply chain.

Grafeas (“scribe” in Greek) provides a central source of truth for tracking and enforcing policies across an ever growing set of software development teams and pipelines. The idea is to provide a central store that other applications and tools can query to retrieve metadata on software components of all kinds.

IBM is also working on Kritis, a component which allows organizations to set Kubernetes governance policies based on metadata stored in Grafeas.
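
Grafeas itself is exposed as a simple metadata API: "notes" describe kinds of metadata (a vulnerability, a build, an attestation) and "occurrences" attach them to specific artifacts. The sketch below queries a Grafeas server for a project's occurrences using the requests library; the server URL, project name and the v1beta1 path are assumptions for illustration and may differ on a given deployment.

# Sketch of listing the occurrences a Grafeas server has recorded for a project.
# Server URL, project name and API version are illustrative assumptions.
import requests

GRAFEAS_URL = "http://grafeas.example.internal:8080"   # hypothetical server
PROJECT = "projects/demo-project"                      # hypothetical project

resp = requests.get(f"{GRAFEAS_URL}/v1beta1/{PROJECT}/occurrences", timeout=10)
resp.raise_for_status()

for occ in resp.json().get("occurrences", []):
    # Each occurrence ties a note (the metadata definition) to a specific
    # artifact, giving tools one place to look up supply-chain facts.
    print(occ.get("name"), "->", occ.get("noteName"), occ.get("kind"))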

Wednesday, September 27, 2017

Google Cloud IoT Core released to beta testing

Google Cloud IoT Core, a fully-managed service on Google Cloud Platform (GCP) that helps securely connect and manage IoT devices at scale, has now entered public beta testing.

Google has designed its Cloud IoT Core to enable organizations to connect and centrally manage millions of globally dispersed IoT devices. The company says it is working with many customers across industries such as transportation, oil and gas, utilities, healthcare and ride-sharing.

The Google Cloud IoT solution ingests IoT data and then can connect to other Google analytics services including Google Cloud Pub/Sub, Google Cloud Dataflow, Google Cloud Bigtable, Google BigQuery, and Google Cloud Machine Learning Engine.
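
On the device side, Cloud IoT Core exposes an MQTT bridge, with each device authenticating via a JSON Web Token signed by its private key rather than a long-lived password. Below is a minimal sketch using the paho-mqtt and PyJWT libraries; the project, region, registry and device IDs and the key file path are placeholders.

# Minimal sketch of a device publishing telemetry to Cloud IoT Core over MQTT.
# Project, region, registry, device IDs and the private key path are placeholders.
import datetime
import jwt                          # PyJWT
import paho.mqtt.client as mqtt

PROJECT, REGION = "my-project", "us-central1"
REGISTRY, DEVICE = "my-registry", "my-device"

def make_jwt():
    # Short-lived token signed with the device's private key; the audience is
    # the GCP project ID.
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": PROJECT}
    with open("rsa_private.pem") as f:
        return jwt.encode(claims, f.read(), algorithm="RS256")

client_id = (f"projects/{PROJECT}/locations/{REGION}"
             f"/registries/{REGISTRY}/devices/{DEVICE}")
# paho-mqtt 1.x constructor; 2.x additionally requires a callback API version argument.
client = mqtt.Client(client_id=client_id)
client.username_pw_set(username="unused", password=make_jwt())  # JWT goes in the password field
client.tls_set()                                                # TLS is required
client.connect("mqtt.googleapis.com", 8883)

# Telemetry published to the device's events topic is ingested by IoT Core and
# handed to Cloud Pub/Sub for downstream processing.
client.publish(f"/devices/{DEVICE}/events", '{"temp": 21.5}', qos=1)
client.loop(2)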

Tuesday, September 26, 2017

Google acquires Bitium for identity and access management

Google has acquired Bitium, a start-up based in Santa Monica, California, that specializes in identity and access management. Financial terms were not disclosed.

Bitium, which was founded in 2012, provides tools to support single sign-on, password management and analytics across cloud services for small, medium and enterprise businesses. Bitium helps enterprises manage access to a range of web-based applications, including Google Apps and Microsoft Office 365, as well as social networks and CRM, collaboration and marketing tools.

Bitium will now become part of the Google Cloud team.

Wednesday, September 20, 2017

Google acquires HTC's hardware team

Google will hire a team of hardware engineers from HTC, the Taiwan-based mobile and consumer electronics firm.

The HTC team has been working closely with Google on its Pixel smartphone line.

The deal, reportedly worth US$1.1 billion, also includes a non-exclusive license for HTC intellectual property.

Google is planning to unveil its latest line of hardware products on October 4th.

  • In 2012, Google acquired Motorola Mobility in a deal valued at US$12.5 billion. The acquisition included an extensive intellectual property portfolio. Google later sold the Motorola handset business to Lenovo for $2.9 billion.

Saturday, September 9, 2017

Network service tiers become part of the public cloud discussion

An interesting development has just come out of the Google Cloud Platform.

Until now, we’ve seen Google’s internal global network growing by leaps and bounds. There is a public-facing network as well, with peering points all over the globe, that seemingly must always keep ahead of the growth of the Internet overall. The internal network, however, which handles all the traffic between Google services and its data centers, is said to grow much faster still. It is this unmatched, private backbone that is one of the distinguishing features of Google Compute Engine (GCE), the company’s public cloud service.

GCE provides on-demand virtual machines (VMs) running on the same class of servers and in the same data centers as Google’s own applications, including Search, Gmail, Maps, YouTube and Docs. The service delivers global scaling using Google’s load balancing over the private, worldwide fiber backbone. This existing service continues and becomes known as the “Premium Tier”.

The new option, called the “Standard Tier,” directs the user’s outbound application traffic to exit the Google network at the nearest IP peering point. From there, the traffic traverses ISP network(s) all the way to its destination. Google says this option will deliver lower performance at lower cost. For Google, the cost savings come from using less long-haul bandwidth to carry the traffic and consuming fewer resources for load-balancing traffic across regions, or for failing over to other regions in the event of an anomaly. In a similar way, inbound traffic travels over ISP networks until it reaches the region of the Google data center where the application is hosted. At that point, ingress occurs on the Google network.




Google has already conducted performance tests of how its Premium Tier measures up to the Standard Tier. The tests, which were conducted by Cedexis, found that Google’s own network delivers higher throughput and lower latency than the Standard Tier, which takes more routing hops and operates over third-party network(s). Test data from the US Central region from mid-August indicate that the Standard Tier was delivering around 3,051 kbps of throughput while the Premium Tier was delivering around 5,303 kbps, roughly a 73% boost in throughput. For latency in the US Central region, the Standard Tier was measured at 90 ms at the 50th percentile, while the Premium Tier was measured at 77 ms, roughly a 17% advantage.

Looking at the pricing differential

The portal for the Google Cloud platform shows a 17% savings for Standard Tier for North America to North America traffic.



Some implications of network service tiering

The first observation is that with these new Network Service Tiers, Google is recognizing that its own backbone is not a resource with infinite capacity and zero cost that can be used to carry all traffic indiscriminately. If the Google infrastructure is transporting packets with greater throughput and lower latency from one side of the planet to the other, why shouldn’t it charge more for this service?

The second observation is that network transport becomes a more important consideration and comparison point for public cloud services in general.

Third, it could be advantageous for other public clouds to assemble their own Network Service Tiers in partnership with carriers. The other hyperscale public cloud companies also operate global-scale, private transport networks that outperform the hop-by-hop routing of the Internet.  Some of these companies are building private transatlantic and transpacific subsea cables, but building a private, global transport network at Google scale is costly.  Network service tiering should bring many opportunities for partnerships with carriers.

Saturday, August 19, 2017

Box integrates Google Cloud Vision for image recognition

Box is integrating Google Cloud Vision into its cloud storage service to provide its enterprise customers with advanced image recognition.

The capability, which is currently in private beta, leverages machine learning to help enterprises improve workflows and drive efficiencies through more accurate discovery and deeper insights into unstructured content stored in Box.
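
Under the hood this is the Cloud Vision API doing the heavy lifting. As a rough illustration of the kind of call involved, the sketch below runs label detection on a single local image using the google-cloud-vision Python client (recent versions of the library); the file name is a placeholder and credentials are assumed to be configured.

# Sketch of Cloud Vision label detection on one image; the file is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("warehouse_photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each label carries a confidence score between 0 and 1.
    print(f"{label.description}: {label.score:.2f}")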

“Organizations today have no way to extract insights from the massive amounts of unstructured data that are essential to their business, missing a huge opportunity to drive innovation, efficiency, and cost savings,” said Aaron Levie, cofounder and CEO, Box. “By combining the machine learning capabilities of Google Cloud with the critical data businesses manage and secure in Box, we are enabling our customers – for the first time – to unlock tremendous new value from their content, digitize manual workflows, and accelerate business processes.”

“Box’s application of Google Cloud’s machine learning APIs brings to life the potential of AI in the enterprise,” said Fei-Fei Li, Chief Scientist, Google Cloud AI and Professor of Computer Science, Stanford University. “Understanding images remains a challenge for businesses and Box’s application of the Vision API demonstrates how the accessibility of machine learning models can unlock potential within a business’s own data. Ultimately it will democratize AI for more people and businesses.”

http://www.box.com

Wednesday, August 2, 2017

Google picks up the pace in cloud computing

When it comes to the cloud, Google certainly isn't taking a summer holiday. Over the past weeks there has been a string of cloud-related developments from Google showing that it is very focused, delivering innovative services and perhaps narrowing the considerable market share gap between itself and rivals IBM, Microsoft Azure and Amazon Web Services. There is a new Google cloud data centre in London, a new data transfer service, a new transfer appliance and a new offering for computational drug discovery. And this week came word from Bloomberg that Google is gearing up to launch its first quantum computing cloud services. While the company declined to comment directly on the Bloomberg story, it is understood that quantum computing is an area of keen interest for Google.

New London data centre

Customers of Google Cloud Platform (GCP) can use the new region in London (europe-west2) to run applications. Google noted that London is its tenth region, joining the existing European region in Belgium. Future European regions include Frankfurt, the Netherlands and Finland. Google also stated that it is working diligently to address EU data protection requirements. Most recently, Google announced a commitment to GDPR compliance across GCP.

Introducing Google Transfer Appliance

This is a pre-configured solution that offers up to 480TB in 4U, or 100TB in 2U, of raw data capacity in a single rackmount device. Essentially, it is a high-capacity storage server that a customer can install in a corporate data centre. Once the server is full, the customer simply ships the appliance back to Google for transferring the data to Google Cloud Storage. It offers a capacity of up to one petabyte compressed.

The Google Transfer Appliance is a very practical solution even when massive bandwidth connections are available at both ends. For instance, for customers fortunate enough to possess a 10 Gbit/s connection, a 100TB data store would still take around 30 hours to transfer electronically. A 1PB data library would take over 12 days using the same 10 Gbit/s connection, and that is assuming no drops in connectivity performance. Google is now offering a 100TB model priced at $300, plus shipping via FedEx (approximately $500), and a 480TB model priced at $1,800, plus shipping (approximately $900). Amazon offers a similar Snowball Edge data migration appliance for migrating large volumes of data to its cloud the old-fashioned way.
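
The transfer-time arithmetic is easy to sanity-check. The sketch below computes the raw line-rate lower bound for moving a given volume over a dedicated link; the 30-hour and 12-day figures above are higher because they allow for protocol overhead and throughput below line rate.

# Lower-bound transfer time for bulk data over a dedicated link, ignoring
# protocol overhead and any drops in connectivity performance.
def transfer_time_hours(terabytes: float, gbps: float) -> float:
    bits = terabytes * 8 * 1000**4          # TB -> bits (decimal units)
    seconds = bits / (gbps * 1000**3)       # divide by line rate in bits/second
    return seconds / 3600

print(f"100 TB at 10 Gbit/s: {transfer_time_hours(100, 10):.1f} hours")
print(f"  1 PB at 10 Gbit/s: {transfer_time_hours(1000, 10) / 24:.1f} days")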

Partnership for computational medicine

Under a partnership with Google, Boston-based Silicon Therapeutics recently deployed its INSITE Screening platform on Google Cloud Platform (GCP) to analyse over 10 million commercially available molecular compounds as potential starting materials for next-generation medicines. In one week, it performed over 500 million docking computations to evaluate how a protein responds to a given molecule. Each computation involved a docking program that predicted the preferred orientation of a small molecule to a protein and the associated energetics, so it could assess whether the molecule will bind and alter the function of the target protein.

With a combination of Google Compute Engine standard and Preemptible VMs, the partners used up to 16,000 cores, for a total of 3 million core-hours and a cost of about $30,000. Google noted that a final stage of the calculations delivered all-atom molecular dynamics (MD) simulations on the top 1,000 molecules to determine which ones to purchase and experimentally assay for activity.

Pushing ahead with Kubernetes

The recent open source release of Kubernetes 1.7 is now available on Container Engine, Google Cloud Platform’s (GCP) managed container service. The end result is better workload isolation within a cluster, which is a frequently requested security feature in Kubernetes. Google also announced that its Container Engine, which saw more than 10x growth last year, is now available from the following GCP regions:

  • Sydney (australia-southeast1)
  • Singapore (asia-southeast1)
  • Oregon (us-west1)
  • London (europe-west2)

Container Engine clusters are already up and running at locations from Iowa to Belgium and Taiwan.

New strategic partnership with Nutanix

Google has formed a strategic partnership with Nutanix to help remove friction from hybrid cloud deployments for enterprises.

Reimagining virtual private clouds at global scale

Integrating cloud resources from different areas of the world no longer requires negotiating and installing a VPN solution from one or more service providers. Google can do it for you using its own global backbone. VPC is private, and with Google VPC customers can get private access to Google services such as storage, big data, analytics or machine learning, without having to give the service a public IP address. Global VPCs are divided into regional subnets that use Google’s private backbone to communicate as needed.

VPC, formerly known as GCP Virtual Networks, offers a privately administered space within Google Cloud Platform (GCP). This means global connectivity across locations and regions, and the elimination of silos across projects and teams.

Further information is available on the Google Cloud Platform blog.

Thursday, May 25, 2017

Google to Rearchitect its Data Centres as it Shifts to AI First Strategy

AI is driving Google to rethink all its products and services, said Sundar Pichai, Google CEO, at the company's Google IO event in Mountain View, California. Big strides have recently pushed AI to surpass human vision in terms of real world image recognition, while speech recognition is now widely deployed in many smartphone applications to provide a better input interface for users. However, it is one thing for AI to win at chess or a game of Go, but it is a significantly greater task to get AI to work at scale. Everything at Google is big, and here are some recent numbers:


  • Seven services, now with over a billion monthly users: Search, Android, Chrome, YouTube, Maps, Play, Gmail
  • Users on YouTube watch over a billion hours of video per day.
  • Every day, Google Maps helps users navigate 1 billion km.
  • Google Drive now has over 800 million active users; every week 3 billion objects are uploaded to Google Drive.
  • Google Photos has over 500 million active users, with over 1.2 billion photos uploaded every day.
  • Android is now running on over 2 billion active devices.

Google is already pushing AI into many of these applications. Examples of this include Google Maps, which is now using machine learning to identify street signs and storefronts. YouTube is applying AI for generating better suggestions on what to watch next. Google Photos is using machine learning to deliver better search results and Android has smarter predictive typing.

Another way that AI is entering the user experience is by using voice as an input for many products. Pichai said the improvements in voice recognition over the past year have been amazing. For instance, the Google Home speaker uses only two onboard microphones to assess the direction of the voice and to identify the user in order to better serve customers. The company also claims that its image recognition algorithms have now surpassed humans (although it is not clear what the metrics are). Another of the big announcements at this year's Google IO event concerned Google Lens, a new in-app capability to layer augmented reality on top of images and videos. The company demoed taking a photo of a flower and then asking Google to identify it.

Data centres will have to support the enormous processing challenges of an AI-first world and the network will have to deliver the I/O bandwidth for users to the data centres and for connecting specialised AI servers within and between Google data centres. Pichai said this involves nothing less than rearchitecting the data centre for AI.

Google's AI-first data centres will be packed with Tensor processing units (TPUs) for machine learning. TPUs, which Google launched last year, are custom ASICs that are about 30-50 times faster and more power efficient than general purpose CPUs or GPUs for certain machine learning functions. TPUs are tailored for TensorFlow, an open source software library for machine learning that was developed at Google. TensorFlow was originally developed by the Google Brain team and released under the Apache 2.0 open source license in November 2015. At the time, Google said TensorFlow would be able to run on multiple CPUs and GPUs. Parenthetically, it should also be noted that in November 2016 Intel and Google announced a strategic alliance to help enterprise IT deliver an open, flexible and secure multi-cloud infrastructure for their businesses. One area of focus is the optimisation of the open source TensorFlow library to benefit from the performance of Intel architecture. The companies said their joint work will provide software developers an optimised machine learning library to drive the next wave of AI innovation across a range of models including convolutional and recurrent neural networks.

A quick note about the technology: Google describes TensorFlow as an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
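
A tiny example makes the dataflow idea concrete: the operations are nodes in a graph and the tensors flowing along its edges are multidimensional arrays. The snippet below uses the graph-and-session style of the TensorFlow 1.x releases that were current when this was written.

# Two constant tensors flow into a matmul node and then an addition node;
# running the session executes the graph.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # 2x2 tensor (an edge in the graph)
b = tf.constant([[1.0], [1.0]])             # 2x1 tensor
c = tf.matmul(a, b)                         # node: matrix multiplication
d = c + 1.0                                 # node: element-wise addition

with tf.Session() as sess:
    print(sess.run(d))                      # -> [[4.], [8.]]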

Here come TPUs on the Google Compute Engine

At its recent I/O event, Google introduced the next generation of Cloud TPUs, which are optimised for both neural network training and inference on large data sets. TPU chips are mounted four to a board, and each board (a Cloud TPU) is capable of 180 trillion floating point operations per second. Boards are then combined into pods of 64, each pod capable of 11.5 petaflops (64 x 180 teraflops). This is an important advance in data centre infrastructure design for the AI era, said Pichai, and it is already being used within the Google Compute Engine service, meaning that time on the same TPU hardware can now be purchased online like any other Google cloud service.

There is an interesting photo that Sundar Pichai shared with the audience, taken inside a Google data centre, which shows four racks of equipment packed with TensorFlow TPU pods. Bundles of fibre pairs linking the four racks can be seen. Apparently, there is no top-of-rack switch, but that is not certain. It will be interesting to know whether these new designs will soon fill a meaningful portion of the data centre floor.

Tuesday, April 11, 2017

Google Joins INDIGO Undersea Cable Project

A consortium comprising AARNet, Google, Indosat Ooredoo, Singtel, SubPartners and Telstra announced that they have entered into an agreement with Alcatel Submarine Networks (ASN) for the construction of a new subsea cable system.

On completion, the new INDIGO cable system (previously known as APX West & Central) will expand connectivity between Australia and South East Asia markets, and will enable higher speed services and improved reliability.

The INDIGO cable system will span approximately 9,000 km between Singapore and Perth, Australia, and onwards to Sydney. The system will land at existing facilities in Singapore, Australia and Indonesia, providing connections between Singapore and Jakarta.

The system will feature a two-fibre-pair 'open cable' design and spectrum-sharing technology. This design will allow consortium members to share ownership of spectrum resources provided by the cable and allow them to independently leverage technology advances and implement future upgrades as required.

Utilising coherent optical technology, each of the two fibre pairs will provide a minimum capacity of 18 Tbit/s, with the option to further increase this capacity in the future.

In addition to Google, the INDIGO consortium is made up of: AARNet, a provider of national and international telecom infrastructure to Australia's research and education sector; Indosat Ooredoo, an Ooredoo Group company providing telecom services in Indonesia; Singtel of Singapore, with a presence in Asia, Australia and Africa; and SubPartners based in Brisbane, Australia, which focuses on delivering major telecoms infrastructure projects in partnership with other companies.


ASN will undertake construction of the subsea cable system, which is expected to be completed by mid-2019.

Tuesday, April 4, 2017

Google Builds Espresso SDN for the Public Internet

"SDN is how we do networking… so what’s next?" asked Amin Vahdat, Fellow and Technical Lead for Networking at Google, at the opening of his keynote address at the Open Networking Summit in Santa Clara, California.

Google has been using SDN for years to manage its private B4 global WAN (interconnects data centers) and its Jupiter networks (inside its data centers, which have up to 100,000 servers and up to 1 Pb/s of aggregate bandwidth). Traffic on its private networks exceeds that on its public Internet backbone, and the growth rate is faster too. Google also relies on its Andromeda network functions virtualization platform as one of its pillars of SDN.

Google's next step is to introduce "Espresso" SDN for its public Internet backbone. Espresso has been running on the Google network for the past two years and is currently routing about 20% of traffic to the Internet.

Essentially, Google Espresso SDN Peering takes a metro and global view across many routers to find the optimal route for traffic delivery. Whereas traditional, router-centric protocols take a connectivity-first approach with only local information, Vahdat said Espresso is able to optimize route selection in real time by leveraging application signals across the metro or global infrastructure. One effect is to remove the logic and control of traffic management from individual boxes and make it network-centric.
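
To make the contrast concrete, here is a toy illustration (not Google's implementation) of choosing an egress peering point from end-to-end, application-level measurements rather than from the purely local view a single router has; the peer names and metrics are invented.

# Toy route selection: pick the egress peer with the best application-level
# score toward a destination, instead of whatever the local routing table says.
candidates = {
    "peer-lax-1": {"latency_ms": 42.0, "loss": 0.001},
    "peer-sjc-2": {"latency_ms": 38.0, "loss": 0.020},
    "peer-sea-1": {"latency_ms": 55.0, "loss": 0.000},
}

def score(metrics, loss_penalty_ms=500.0):
    # Lower is better: latency plus a heavy penalty for observed packet loss.
    return metrics["latency_ms"] + loss_penalty_ms * metrics["loss"]

best = min(candidates, key=lambda peer: score(candidates[peer]))
print("egress choice:", best)   # peer-lax-1 wins despite not having the lowest latency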

[Architectural view of Google Espresso SDN not shown.]

Espresso sets the stage for the next decade in networking, said Vahdat, where the Internet must deliver improvements in scale, agility, jitter, isolation and availability. Google sees each of these attributes as necessary for the next wave of serverless computing.

Monday, March 27, 2017

Expanding horizons for Google Cloud Platform

The most compelling case for adopting the Google Cloud Platform is that it is the same infrastructure that powers Google's own services, which attract well over a billion users daily. This was the case presented by company executives at last week's Google Next event in San Francisco – "get on the Cloud… now", said Eric Schmidt, Executive Chairman of Alphabet, Google's parent; "Cloud is the most exciting thing happening in IT", said Diane Greene, SVP of Google Cloud.

Direct revenue comparisons between leading companies are a bit tricky, but many analysts place the Google Cloud Platform at No. 4 behind Amazon Web Services, Microsoft Azure and IBM in the U.S. market. Over the past three years, Google has invested $29.4 billion in its infrastructure, according to Urs Hölzle, SVP of Technical Infrastructure for Google Cloud, covering everything from efficient data centres to customised servers, customised networking gear and specialised ASICs for machine learning.

Google operates what it believes to be the largest global network, carrying anywhere from 25% to 40% of all Internet traffic. Google's backbone interconnects directly with nearly every ISP and its private network has a point of presence in 182 countries, while the company is investing heavily in ultra-high-capacity submarine cables.

The argument goes that by moving to the Google Cloud Platform (GCP), enterprise customers move directly into the fast lane of the Internet, putting their applications one hop away from any end-user ISP they need to reach, with less latency and fewer hand-offs. Two examples of satisfied GCP customers that Google likes to cite are Pokemon Go and Snapchat, both of which took a compelling application and brought it to global scale by riding the Google infrastructure.

One question is, does the Google global network give the Google Cloud Platform a decisive edge over its rivals? Clearly all the big players are racing to scale out their infrastructure as quickly as possible, but Google is striving to take it one step further: to develop core technologies in hardware and software that other companies later follow. Examples include containers, NoSQL, serverless, Kubernetes, MapReduce, TensorFlow, and more recently its horizontally scalable, globally synchronised Cloud Spanner database service, which uses atomic clocks running in every Google data centre.

Highlights of Google's initiatives include:

  • New data centres: three new GCP regions - California, Montreal and the Netherlands - bringing the total number of Google Cloud regions to six, and the company anticipates more than 17 locations in the future. The new regions will feature a minimum of three zones, benefit from Google's global, private fibre network and offer a complement of GCP services.
  • Intel Skylake: GCP is the first public cloud provider to run Intel Skylake, a custom Xeon chip for compute-heavy workloads, with a larger range of VM memory and CPU options. GCP is doubling the number of vCPUs that can run in an instance from 32 to 64 and offering up to 416 Gbytes of memory. GCP is also adding GPU instances. Google and Intel are collaborating in other areas as well, including hybrid cloud orchestration, security, machine and deep learning, and IoT edge-to-cloud solutions; Intel is also a backer of Google’s TensorFlow and Kubernetes open source initiatives.
  • Google Cloud Functions: a completely serverless environment and the smallest unit of compute offered by GCP; it is able to spin up a single function and spin it back down instantly, so billing occurs only while the function is executing, metered to the nearest one hundred milliseconds.
  • Free services: a new free tier to GCP that provides limited access to Google Compute Engine (1 f1-micro instance per month in U.S. regions and 30 Gbyte-months HDD), Google Cloud Storage (5 Gbytes a month), Google Pub/Sub (10 Gbytes of messages a month), and Google Cloud Functions (two million invocations per month).
  • Lower prices for GCE: 5% price drop in the U.S., 4.9% drop in Europe and 8% drop in Japan.
  • BigQuery: enhancements to the Google BigQuery data analytics service, including automated data movement from select Google applications, such as Adsense, DoubleClick and YouTube, directly into BigQuery.
  • Titan: a custom security chip that operates at the BIOS level and authenticates the hardware and services running on each server; Intel is also introducing new security tools to keep customer data secure.
  • Project Kubo toolset: a joint effort with Pivotal for packaging and managing software in a Kubernetes environment.
  • Engineering support plans ranging from $100 per user per month to $1,500 per user per month with a 15-minute response time.
  • Data loss prevention API to guard information inside the Google Cloud.

The Google Next event provided a number of sessions for looking over the horizon. In a 'fireside chat', Marc Andreessen and Vint Cerf speculated on the arrival of quantum computing and neural networking/machine learning services on the public clouds. Both possibilities are likely to augment current public cloud computing models rather than replace them. The types of applications could vary. For instance, a cloud-based quantum computing service might be employed to finally secure communications.

Google is also betting big that the cloud is the ideal platform for AI. Fei-Fei Li, Chief Scientist for Cloud AI and ML at Google, observed that even a few self-driving cars can put considerable data into the cloud. What happens when there are millions of such vehicles? Building on-ramps for AI is the next step, with APIs and SDKs that draw new applications onto Google's TensorFlow platform. The company discusses this in terms of 'democratising' AI, which means making sure its algorithms and cloud analytic systems become widely available before others move into this space.


A final differentiator for GCP is that Google is the largest corporate purchaser of renewable energy. In 2017, the company is on track to reach 100% renewable power for its global data centres and offices. One hopes that others will catch up soon.

Thursday, March 9, 2017

Google Builds out its Cloud Portfolio

Google rolled out significant enhancements to its cloud platform. Here are some highlights:

  • Google has invested an estimated $26 billion over the past 3 years in its infrastructure, including data centers and networks
  • Google has built a custom security chip codenamed Titan that operates at the BIOS level. Intel is also introducing new security tools to keep customer data secure. 
  • Three new Google Cloud Platform Regions are coming online -- The Netherlands, Montreal (Canada), and California, in addition to other data center construction underway in Northern Virginia, São Paulo, London, Finland, Frankfurt, Mumbai, Singapore, and Sydney.
  • Google is already deploying Intel's latest Skylake processors in its servers
  • New developer tools and data analytic services that will help enterprises build apps in the cloud and find new value from their data. 
  • Google Cloud Platform now offers an expanded environment for Google App Engine and welcomes all developers to use the public beta of Cloud Functions, including its new integration with Firebase. 
  • BigQuery, Google’s data warehouse, now features new pipelines that make it easy for customers to analyze their data from Google Adwords, DoubleClick Campaign Manager, DoubleClick for Publishers, and YouTube Analytics. This enables marketers and advertisers to gain a single view of their customer experience (see the query sketch after this list). 
  • Google Cloud launched a new Commercial Dataset program that enables users to access licensed datasets within BigQuery. This will offer the opportunity for businesses to access more data and find more robust insights and more easily build machine learning models.
  • Cloud Dataprep is a new data preparation tool that is capable of automatically detecting types of data, enabling data analysts and scientists to find insights at a faster rate.
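
Once a marketing dataset has landed in BigQuery via these pipelines, analysing it is an ordinary SQL query. The sketch below uses the google-cloud-bigquery Python client; the project, dataset, table and column names are placeholders.

# Minimal BigQuery query against a (placeholder) AdWords export table.
from google.cloud import bigquery

client = bigquery.Client(project="my-marketing-project")   # placeholder project

query = """
    SELECT campaign_id, SUM(clicks) AS total_clicks
    FROM `my-marketing-project.adwords_export.daily_stats`
    GROUP BY campaign_id
    ORDER BY total_clicks DESC
    LIMIT 10
"""

for row in client.query(query).result():    # run the job and iterate the rows
    print(row.campaign_id, row.total_clicks)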


