Showing posts with label Google.

Thursday, October 12, 2017

IBM and Google collaborate on container security API

IBM is joining forces with Google to create and open source the Grafeas project, an initiative to define a uniform way of auditing and governing the modern software supply chain.

Grafeas (“scribe” in Greek) provides a central source of truth for tracking and enforcing policies across an ever-growing set of software development teams and pipelines. The idea is to provide a central store that other applications and tools can query to retrieve metadata on software components of all kinds.
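As a rough illustration of that "central store" idea, the sketch below queries a Grafeas server for vulnerability metadata attached to a container image. The server URL, project name, API version path and filter syntax are assumptions for illustration, not a definitive rendering of the Grafeas API.

```python
# Illustrative sketch only: query a Grafeas server for vulnerability
# occurrences recorded against a container image. The endpoint, project
# name and filter syntax are assumptions, not the exact production API.
import requests

GRAFEAS_URL = "http://grafeas.example.com"   # hypothetical Grafeas server
PROJECT = "projects/build-infra"             # hypothetical project resource

def list_vulnerability_occurrences(image_url):
    """Return metadata occurrences recorded for a given container image."""
    resp = requests.get(
        f"{GRAFEAS_URL}/v1alpha1/{PROJECT}/occurrences",
        params={"filter": f'resourceUrl="{image_url}" AND kind="PACKAGE_VULNERABILITY"'},
    )
    resp.raise_for_status()
    return resp.json().get("occurrences", [])

if __name__ == "__main__":
    for occ in list_vulnerability_occurrences("https://gcr.io/example/app"):
        print(occ.get("noteName"), occ.get("vulnerabilityDetails", {}).get("severity"))
```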

IBM is also working on Kritis, a component which allows organizations to set Kubernetes governance policies based on metadata stored in Grafeas.

Wednesday, September 27, 2017

Google Cloud IoT Core released to beta testing

Google Cloud IoT Core, a fully managed service on Google Cloud Platform (GCP) for securely connecting and managing IoT devices at scale, has now entered public beta testing.

Google has designed its Cloud IoT Core to enable organizations to connect and centrally manage millions of globally dispersed IoT devices. The company says it is working with many customers across industries such as transportation, oil and gas, utilities, healthcare and ride-sharing.

The Google Cloud IoT solution ingests IoT data and then can connect to other Google analytics services including Google Cloud Pub/Sub, Google Cloud Dataflow, Google Cloud Bigtable, Google BigQuery, and Google Cloud Machine Learning Engine.
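Because ingested telemetry lands on a Cloud Pub/Sub topic, downstream services typically start by pulling from a subscription. A minimal sketch using the google-cloud-pubsub Python client, with a hypothetical project and subscription name:

```python
# Minimal sketch: pull device telemetry that Cloud IoT Core has published
# to a Pub/Sub topic. Project and subscription names are hypothetical.
from google.cloud import pubsub_v1

project_id = "my-iot-project"           # assumed project
subscription_name = "device-telemetry"  # assumed subscription fed by IoT Core

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, subscription_name)

def handle_message(message):
    # message.attributes carries device metadata (e.g. deviceId, registry)
    print(message.attributes, message.data)
    message.ack()

future = subscriber.subscribe(subscription_path, callback=handle_message)
future.result()  # block and stream messages until interrupted
```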

Tuesday, September 26, 2017

Google acquires Bitium for identity and access management

Google has acquired Bitium, a start-up based in Santa Monica, California, that specializes in identity and access management. Financial terms were not disclosed.

Bitium, which was founded in 2012, provides tools to support single sign-on, password management and analytics across cloud services for small, medium and enterprise businesses. Bitium helps enterprises manage access to a range of web-based applications — including Google Apps and Microsoft Office 365, as well as social networks and CRM, collaboration and marketing tools.

Bitium will now become part of the Google Cloud team.

Wednesday, September 20, 2017

Google acquires HTC's hardware team

Google will hire a team of hardware engineers from HTC, the Taiwan-based mobile and consumer electronics firm.

The HTC team has been working closely with Google on its Pixel smartphone line.

The deal, reportedly worth US$1.1 billion, also includes a non-exclusive license for HTC intellectual property.

Google is planning to unveil its latest line of hardware products on October 4th.

  • In 2012, Google acquired Motorola in a deal valued at US$12.5 billion. The acquisition included an extensive intellectual property portfolio. Google later sold the Motorola handset business to Lenovo for $2.9 billion.

Saturday, September 9, 2017

Network service tiers become part of the public cloud discussion

An interesting development has just come out of the Google Cloud Platform.

Until now, we’ve seen Google’s internal global network growing by leaps and bounds. There is a public-facing network as well, with peering points all over the globe, that seemingly must always keep ahead of the growth of the Internet overall. The internal network, which handles all the traffic between Google services and its data centers, was said to grow even faster. It is this unmatched private backbone that is one of the distinguishing features of Google Compute Engine (GCE), the company’s public cloud service.

GCE provides on-demand virtual machines (VMs) running on the same class of servers and in the same data centers as Google’s own applications, including Search, Gmail, Maps, YouTube, Docs and others. The service delivers global scaling using Google’s load balancing over the private, worldwide fiber backbone. This service continues and is now known as the “Premium Tier.”

The new option, called the “Standard Tier,” directs the user’s outbound application traffic to exit the Google network at the nearest IP peering point. From there, the traffic traverses ISP network(s) all the way to its destination. Google says this option will offer lower performance but at lower cost. For Google, the cost savings come from not having to carry the traffic over as much long-haul bandwidth and from consuming fewer resources in load-balancing traffic across regions or failing over to other regions in the event of an anomaly. In a similar way, inbound traffic travels over ISP networks until it reaches the region of the Google data center where the application is hosted. At that point, ingress occurs onto the Google network.




Google has already conducted performance tests of how its Premium Tier measures up to the Standard Tier. The tests, which were conducted by Cedexis, found that Google’s own network delivers higher throughput and lower latency than the Standard Tier, which takes more routing hops over third-party network(s). Test data from the US Central region from mid-August indicate that the Standard Tier was delivering around 3,051 kbps of throughput while the Premium Tier was delivering around 5,303 kbps, roughly a 73% boost in throughput. For latency in the US Central region, the Standard Tier was measured at 90 ms at the 50th percentile, while the Premium Tier was measured at 77 ms, roughly a 17% advantage.
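For reference, the quoted percentages follow directly from the Cedexis figures:

```python
# Reproducing the percentage comparisons from the Cedexis measurements above.
std_tput, prem_tput = 3051, 5303        # kbps, US Central, mid-August
std_lat, prem_lat = 90, 77              # ms, 50th percentile

tput_gain = (prem_tput - std_tput) / std_tput * 100   # ~73.8% higher throughput
lat_gain = (std_lat - prem_lat) / prem_lat * 100      # ~16.9% latency advantage
print(f"throughput gain: {tput_gain:.1f}%, latency advantage: {lat_gain:.1f}%")
```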

Looking at the pricing differential

The portal for the Google Cloud platform shows a 17% savings for Standard Tier for North America to North America traffic.



Some implications of network service tiering

The first observation is that with these new Network Service Tiers, Google is recognizing that its own backbone is not a resource with infinite capacity and zero cost that can be used to carry all traffic indiscriminately. If the Google infrastructure is transporting packets with greater throughput and lower latency from one side of the planet to another, why shouldn’t they charge more for this service?
The second observation is that network transport becomes a more important consideration and comparison point for public cloud services in general.

Third, it could be advantageous for other public clouds to assemble their own Network Service Tiers in partnership with carriers. The other hyperscale public cloud companies also operate global-scale, private transport networks that outperform the hop-by-hop routing of the Internet.  Some of these companies are building private transatlantic and transpacific subsea cables, but building a private, global transport network at Google scale is costly.  Network service tiering should bring many opportunities for partnerships with carriers.

Saturday, August 19, 2017

Box integrates Google Cloud Vision for image recognition

Box is integrating Google Cloud Vision into its cloud storage service to provide its enterprise customers with advanced image recognition.

The capability, which is currently in private beta, leverages machine learning to help enterprises improve workflows and drive efficiencies through more accurate discovery and deeper insights into unstructured content stored in Box.
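The underlying capability is the Cloud Vision API's annotation calls. A minimal, generic sketch of label detection with the Python client follows; this is not Box's integration code, and the file name is hypothetical.

```python
# Minimal sketch of Cloud Vision label detection, the kind of capability
# Box is integrating. Generic Vision API usage, not Box's code; the image
# file path is hypothetical.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("contract_scan.jpg", "rb") as f:   # hypothetical image stored in Box
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```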

“Organizations today have no way to extract insights from the massive amounts of unstructured data that are essential to their business, missing a huge opportunity to drive innovation, efficiency, and cost savings,” said Aaron Levie, cofounder and CEO, Box. “By combining the machine learning capabilities of Google Cloud with the critical data businesses manage and secure in Box, we are enabling our customers – for the first time – to unlock tremendous new value from their content, digitize manual workflows, and accelerate business processes.”

“Box’s application of Google Cloud’s machine learning APIs brings to life the potential of AI in the enterprise,” said Fei-Fei Li, Chief Scientist, Google Cloud AI and Professor of Computer Science, Stanford University. “Understanding images remains a challenge for businesses and Box’s application of the Vision API demonstrates how the accessibility of machine learning models can unlock potential within a business’s own data. Ultimately it will democratize AI for more people and businesses.”

http://www.box.com

Wednesday, August 2, 2017

Google picks up the pace in cloud computing

When it comes to the cloud, Google certainly isn't taking a summer holiday. Over the past weeks there has been a string of cloud-related developments from Google showing that it is very focused, delivering innovative services and perhaps narrowing the considerable market share gap between itself and rivals IBM, Microsoft Azure and Amazon Web Services. There is a new Google cloud data centre in London, a new data transfer service, a new transfer appliance and a new offering for computational drug discovery. And this week came word from Bloomberg that Google is gearing up to launch its first quantum computing cloud services. While the company declined to comment directly on the Bloomberg story, it is understood that quantum computing is an area of keen interest for Google.

New London data centre

Customers of Google Cloud Platform (GCP) can use the new region in London (europe-west2) to run applications. Google noted that London is its tenth region, joining the existing European region in Belgium. Future European regions include Frankfurt, the Netherlands and Finland. Google also stated that it is working diligently to address EU data protection requirements. Most recently, Google announced a commitment to GDPR compliance across GCP.

Introducing Google Transfer Appliance

This is a pre-configured solution that offers up to 480TB in 4U or 100TB in 2U of raw data capacity in a single rackmount device. Essentially, it is a high-capacity storage server that a customer can install in a corporate data centre. Once the server is full, the customer simply ships the appliance back to Google, which transfers the data to Google Cloud Storage. It offers a capacity of up to one petabyte of compressed data.

The Google Transfer Appliance is a very practical solution even when massive bandwidth connections are available at both ends. For instance, for customers fortunate enough to possess a 10 Gbit/s connection, a 100TB data store would still take around 30 hours to transfer electronically. A 1PB data library would take over 12 days using the same 10 Gbit/s connection, and that is assuming no drops in connectivity performance. Google is now offering a 100TB model priced at $300, plus shipping via FedEx (approximately $500), and a 480TB model priced at $1,800, plus shipping (approximately $900). Amazon offers a similar Snowball Edge data migration appliance for migrating large volumes of data to its cloud the old-fashioned way.
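The back-of-the-envelope arithmetic behind those transfer-time figures, assuming decimal units and roughly 75% sustained link utilization (an assumption chosen to match the quoted numbers):

```python
# Back-of-the-envelope transfer times for the figures quoted above.
# Assumes decimal units (1 TB = 1e12 bytes) and ~75% sustained link
# utilization, an assumption made to approximate the article's numbers.
LINK_BPS = 10e9          # 10 Gbit/s
UTILIZATION = 0.75       # assumed effective utilization

def transfer_time_hours(terabytes):
    bits = terabytes * 1e12 * 8
    return bits / (LINK_BPS * UTILIZATION) / 3600

print(f"100 TB: {transfer_time_hours(100):.0f} hours")       # ~30 hours
print(f"1 PB:   {transfer_time_hours(1000) / 24:.1f} days")  # ~12 days
```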

Partnership for computational medicine

Under a partnership with Google, Boston-based Silicon Therapeutics recently deployed its INSITE Screening platform on Google Cloud Platform (GCP) to analyse over 10 million commercially available molecular compounds as potential starting materials for next-generation medicines. In one week, the platform performed over 500 million docking computations to evaluate how a protein responds to a given molecule. Each computation involved a docking program that predicted the preferred orientation of a small molecule relative to a protein and the associated energetics, so as to assess whether the molecule would bind and alter the function of the target protein.

With a combination of Google Compute Engine standard and Preemptible VMs, the partners used up to 16,000 cores, for a total of 3 million core-hours and a cost of about $30,000. Google noted that a final stage of the calculations delivered all-atom molecular dynamics (MD) simulations on the top 1,000 molecules to determine which ones to purchase and experimentally assay for activity.
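A quick sanity check on those figures suggests roughly a cent per core-hour and about eight days of wall-clock time if the cores ran fully in parallel:

```python
# Rough arithmetic behind the Silicon Therapeutics run described above.
cores = 16_000
core_hours = 3_000_000
total_cost = 30_000          # USD, as reported
docking_jobs = 500_000_000

wall_clock_hours = core_hours / cores          # ~187.5 h (~8 days) if fully parallel
cost_per_core_hour = total_cost / core_hours   # ~$0.01 per core-hour
cost_per_million_dockings = total_cost / (docking_jobs / 1_000_000)  # ~$60

print(wall_clock_hours, cost_per_core_hour, cost_per_million_dockings)
```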

Pushing ahead with Kubernetes

The recent open source release of Kubernetes 1.7 is now available on Container Engine, Google Cloud Platform’s (GCP) managed container service. The end result is better workload isolation within a cluster, which is a frequently requested security feature in Kubernetes. Google also announced that its Container Engine, which saw more than 10x growth last year, is now available from the following GCP regions:

•   Sydney (australia-southeast1).

•   Singapore (asia-southeast1).

•   Oregon (us-west1).

•   London (europe-west2).

Container Engine clusters are already up and running at locations from Iowa to Belgium and Taiwan.

New strategic partnership with Nutanix

Google has formed a strategic partnership with Nutanix to help remove friction from hybrid cloud deployments for enterprises.

Reimagining virtual public clouds at global scale

Integrating cloud resources from different areas of the world no longer requires negotiating and installing a VPN solution from one or more service providers. Google can do it for you using its own global backbone. VPC is private, and with Google VPC customers can get private access to Google services such as storage, big data, analytics or machine learning, without having to give the service a public IP address. Global VPCs are divided into regional subnets that use Google’s private backbone to communicate as needed.

VPC, formerly known as GCP Virtual Networks, offers a privately administered space within Google Cloud Platform (GCP). This means global connectivity across locations and regions, and the elimination of silos across projects and teams.

Further information is available on the Google Cloud Platform blog.

Thursday, May 25, 2017

Google to Rearchitect its Data Centres as it Shifts to AI First Strategy

AI is driving Google to rethink all its products and services, said Sundar Pichai, Google CEO, at the company's Google IO event in Mountain View, California. Big strides have recently pushed AI to surpass human vision in terms of real world image recognition, while speech recognition is now widely deployed in many smartphone applications to provide a better input interface for users. However, it is one thing for AI to win at chess or a game of Go, but it is a significantly greater task to get AI to work at scale. Everything at Google is big, and here are some recent numbers:


  • Seven services, now with over a billion monthly users: Search, Android, Chrome, YouTube, Maps, Play, Gmail
  • Users on YouTube watch over a billion hours of video per day.
  • Every day, Google Maps helps users navigate 1 billion km.
  • Google Drive now has over 800 million active users; every week 3 billion objects are uploaded to Google Drive.
  • Google Photos has over 500 million active users, with over 1.2 billion photos uploaded every day.
  • Android is now running on over 2 billion active devices.

Google is already pushing AI into many of these applications. Examples of this include Google Maps, which is now using machine learning to identify street signs and storefronts. YouTube is applying AI for generating better suggestions on what to watch next. Google Photos is using machine learning to deliver better search results and Android has smarter predictive typing.

Another way that AI is entering the user experience is through voice as an input for many products. Pichai said the improvements in voice recognition over the past year have been remarkable. For instance, the Google Home speaker uses only two onboard microphones to assess direction and to identify the user in order to better serve customers. The company also claims that its image recognition algorithms have now surpassed humans (although it is not clear what the metrics are). Another of the big announcements at this year's Google IO event concerned Google Lens, a new in-app capability to layer augmented reality on top of images and videos. The company demoed taking a photo of a flower and then asking Google to identify it.

Data centres will have to support the enormous processing challenges of an AI-first world and the network will have to deliver the I/O bandwidth for users to the data centres and for connecting specialised AI servers within and between Google data centres. Pichai said this involves nothing less than rearchitecting the data centre for AI.

Google's AI-first data centres will be packed with Tensor processing units (TPUs) for machine learning. TPUs, which Google launched last year, are custom ASICs that are about 30-50 times faster and more power efficient than general purpose CPUs or GPUs for certain machine learning functions. TPUs are tailored for TensorFlow, an open source software library for machine learning that was developed at Google. TensorFlow was originally developed by the Google Brain team and released under the Apache 2.0 open source license in November 2015. At the time, Google said TensorFlow would be able to run on multiple CPUs and GPUs. Parenthetically, it should also be noted that in November 2016 Intel and Google announced a strategic alliance to help enterprise IT deliver an open, flexible and secure multi-cloud infrastructure for their businesses. One area of focus is the optimisation of the open source TensorFlow library to benefit from the performance of Intel architecture. The companies said their joint work will provide software developers an optimised machine learning library to drive the next wave of AI innovation across a range of models including convolutional and recurrent neural networks.

A quick note about the technology: Google describes TensorFlow as an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
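In the graph-based TensorFlow API of that era (the 1.x style), the split between building the data flow graph and running it is explicit, as in this minimal sketch:

```python
# Era-appropriate TensorFlow 1.x sketch: nodes are operations, edges are
# tensors, and nothing executes until the graph is run in a session.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # node producing a 2x2 tensor
b = tf.constant([[1.0, 0.0], [0.0, 1.0]])
c = tf.matmul(a, b)                          # node consuming two tensor edges

with tf.Session() as sess:                   # execution happens only here
    print(sess.run(c))
```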

Here come TPUs on the Google Compute Engine

At its recent I/O event, Google introduced the next generation of Cloud TPUs, which are optimised for both neural network training and inference on large data sets. TPUs are mounted four to a board, and each board is capable of 180 trillion floating point operations per second. Boards are then stacked into pods, each capable of 11.5 petaflops. This is an important advance for data centre infrastructure design for the AI era, said Pichai, and it is already being used within the Google Compute Engine service. This means that the same racks of TPU hardware can now be purchased online like any other Google cloud service.
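The quoted board and pod numbers are mutually consistent, implying roughly 64 boards per pod:

```python
# Consistency check on the Cloud TPU figures quoted above.
board_tflops = 180            # teraflops per four-chip TPU board
pod_pflops = 11.5             # petaflops per pod

boards_per_pod = pod_pflops * 1000 / board_tflops
print(boards_per_pod)         # ~64 boards (256 chips) per pod
```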

There is an interesting photo that Sundar Pichai shared with the audience, taken inside a Google data centre, which shows four racks of equipment packed with TensorFlow TPU pods. Bundles of fibre pairs linking the four racks can be seen. Apparently, there is no top-of-rack switch, but that is not certain. It will be interesting to see whether these new designs will soon fill a meaningful portion of the data centre floor.

Tuesday, April 11, 2017

Google Joins INDIGO Undersea Cable Project

A consortium comprising AARNet, Google, Indosat Ooredoo, Singtel, SubPartners and Telstra announced that they have entered into an agreement with Alcatel Submarine Networks (ASN) for the construction of a new subsea cable system.

On completion, the new INDIGO cable system (previously known as APX West & Central) will expand connectivity between Australia and South East Asia markets, and will enable higher speed services and improved reliability.

The INDIGO cable system will span approximately 9,000 km between Singapore and Perth, Australia, and onwards to Sydney. The system will land at existing facilities in Singapore, Australia and Indonesia, providing connections between Singapore and Jakarta.

The system will feature a two-fibre pair 'open cable' design and spectrum-sharing technology. This design will allow consortium members to share ownership of spectrum resources provided by the cable and allow them to independently leverage technology advances and implement future upgrades as required.

Utilising coherent optical technology, each of the two fibre pairs will provide a minimum capacity of 18 Tbit/s, with the option to further increase this capacity in the future.

In addition to Google, the INDIGO consortium is made up of: AARNet, a provider of national and international telecom infrastructure to Australia's research and education sector; Indosat Ooredoo, an Ooredoo Group company providing telecom services in Indonesia; Singtel of Singapore, with a presence in Asia, Australia and Africa; and SubPartners based in Brisbane, Australia, which focuses on delivering major telecoms infrastructure projects in partnership with other companies.


ASN will undertake construction of the subsea cable system, which is expected to be completed by mid-2019.

Tuesday, April 4, 2017

Google Builds Espresso SDN for Public Internet

"SDN is how we do networking… so what’s next?" asked Amin Vahdat, Fellow and Technical Lead for Networking at Google, at the opening of his keynote address at the Open Networking Summit in Santa Clara, California.

Google has been using SDN for years to manage its private B4 global WAN (which interconnects its data centers) and its Jupiter networks (inside its data centers, which have up to 100,000 servers and up to 1 Pb/s of aggregate bandwidth). Traffic on its private networks exceeds that on its public Internet backbone, and the growth rate is faster too. Google also relies on its Andromeda network functions virtualization platform as one of its pillars of SDN.

Google's next step is to introduce "Espresso" SDN for its public Internet backbone. Espresso has been running on the Google network for the past two years and is currently routing about 20% of traffic to the Internet.

Essentially, Google's Espresso SDN peering takes a metro and global view across many routers to find the optimal route for traffic delivery. Whereas traditional, router-centric protocols take a connectivity-first approach with only local information, Vahdat said Espresso is able to optimize route selection in real time by leveraging application signals across the metro or global infrastructure. One effect is to remove the logic and control of traffic management from individual boxes and make it network-centric.
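As a purely conceptual toy (not Google's implementation), application-signal-driven egress selection can be thought of as scoring candidate peering exits on end-to-end metrics rather than accepting a purely local, connectivity-first decision; every name, metric and weight below is invented:

```python
# Toy illustration of application-signal-driven egress selection, in the
# spirit of what Espresso is described as doing. All names, metrics and
# weights are invented; this is not Google's implementation.
candidate_exits = {
    # peering point: observed application-level metrics toward the destination
    "metro-a-peer-1": {"rtt_ms": 42, "loss_pct": 0.1, "goodput_mbps": 480},
    "metro-a-peer-2": {"rtt_ms": 55, "loss_pct": 0.0, "goodput_mbps": 610},
    "metro-b-peer-1": {"rtt_ms": 38, "loss_pct": 1.2, "goodput_mbps": 300},
}

def score(m):
    # Lower is better: penalize latency and loss, reward goodput.
    return m["rtt_ms"] + 100 * m["loss_pct"] - 0.05 * m["goodput_mbps"]

best_exit = min(candidate_exits, key=lambda name: score(candidate_exits[name]))
print("selected egress:", best_exit)
```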

[Diagram: architectural view of Google's Espresso SDN]
Espresso sets the stage for the next decade in networking, said Vahdat, where the Internet must deliver improvements in scale, agility, jitter, isolation and availability. Google sees each of these attributes as necessary for the next wave of serverless computing.

Monday, March 27, 2017

Expanding horizons for Google Cloud Platform

The most compelling case for adopting the Google Cloud Platform is that it is the same infrastructure that powers Google's own services, which attract well over a billion users daily. This was the case presented by company executives at last week's Google Next event in San Francisco – "get on the Cloud… now", said Eric Schmidt, Executive Chairman of Alphabet, Google's parent; "Cloud is the most exciting thing happening in IT", said Diane Greene, SVP of Google Cloud.

Direct revenue comparisons between leading companies are a bit tricky, but many analysts place the Google Cloud Platform at No. 4 behind Amazon Web Services, Microsoft Azure and IBM in the U.S. market. Over the past three years, Google has invested $29.4 billion in its infrastructure, according to Urs Hölzle, SVP of Technical Infrastructure for Google Cloud, spending on everything from efficient data centres to customised servers, customised networking gear and specialised ASICs for machine learning.

Google operates what it believes to be the largest global network, carrying anywhere from 25% to 40% of all Internet traffic. Google's backbone interconnects directly with nearly every ISP and its private network has a point of presence in 182 countries, while the company is investing heavily in ultra-high-capacity submarine cables.

The argument goes that by moving to the Google Cloud Platform (GCP), enterprise customers move directly into the fast lane of the Internet, putting their applications one hop away from any end-user ISP they need to reach, with less latency and fewer hand-offs. Two examples of satisfied GCP customers that Google likes to cite are Pokémon Go and Snapchat, both of which took a compelling application and brought it to global scale by riding the Google infrastructure.

One question is: does the Google global network give the Google Cloud Platform a decisive edge over its rivals? Clearly all the big players are racing to scale out their infrastructure as quickly as possible, but Google is striving to take it one step further – to develop core technologies in hardware and software that other companies later follow. Examples include containers, NoSQL, serverless, Kubernetes, MapReduce, TensorFlow and, more recently, its horizontally-scalable Cloud Spanner database service, which relies on atomic clocks running in every Google data centre.

Highlights of Google's initiatives include:

·         New data centres: three new GCP regions - California, Montreal and the Netherlands - bringing the total number of Google Cloud regions to six, and the company anticipates more than 17 locations in the future. The new regions will feature a minimum of three zones, benefit from Google's global, private fibre network and offer a complement of GCP services.

·         GCP is the first public cloud provider to run Intel Skylake, a custom Xeon chip for compute-heavy workloads, and offers a larger range of VM memory and CPU options. GCP is doubling the number of vCPUs that can run in an instance from 32 to 64 and offering up to 416 Gbytes of memory. GCP is also adding GPU instances. Google and Intel are collaborating in other areas as well, including hybrid cloud orchestration, security, machine and deep learning, and IoT edge-to-cloud solutions; Intel is also a backer of Google’s TensorFlow and Kubernetes open source initiatives.

·         Google Cloud Functions: a completely serverless environment and the smallest unit of compute offered by GCP; it is able to spin up a single function and spin it back down instantly, so billing occurs only while the function is executing, metered to the nearest one hundred milliseconds.

·         Free services: a new free tier to the GCP that provides limited access to Google Compute Engine (1 f1-micro instance per month in U.S. regions and 30 Gbyte-months HDD), Google Cloud Storage (5 Gbytes a month), Google Pub/Sub (10 Gbytes of messages a month), and Google Cloud Functions (two million invocations per month).

·         Lower prices for GCE: 5% price drop in the U.S., 4.9% drop in Europe; 8% drop in Japan.

·         Enhancements to the Google BigQuery data analytics service, including automated data movement from select Google applications, such as AdSense, DoubleClick and YouTube, directly into BigQuery (a query sketch follows this list).

·         Titan: a custom security chip that operates at the BIOS level, authenticating the hardware and services running on each server; Intel is also introducing new security tools to keep customer data secure.

·         Project Kubo toolset: a joint effort with Pivotal for packaging and managing software in a Kubernetes environment.

·         Engineering support plans ranging from $100 per user per month to $1,500 per user per month with a 15-minute response time.

·         Data loss prevention API to guard information inside the Google Cloud.
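Once application data has been moved into BigQuery (see the BigQuery item above), it is queried with standard SQL through the client libraries. A minimal sketch with hypothetical project, dataset and table names:

```python
# Minimal sketch of querying data that has landed in BigQuery.
# Project, dataset and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT video_id, SUM(views) AS total_views
    FROM `my-project.youtube_analytics.daily_views`
    GROUP BY video_id
    ORDER BY total_views DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.video_id, row.total_views)
```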

The Google Next event provided a number of sessions for looking over the horizon. In a 'fireside chat', Marc Andreessen and Vint Cerf speculated on the arrival of quantum computing and neural networking/machine learning services on the public clouds. Both possibilities are likely to augment current public cloud computing models rather than replace them. The types of applications could vary. For instance, a cloud-based quantum computing service might be employed to finally secure communications.

Google is also betting big that the cloud is the ideal platform for AI. Fei-Fei Li, Chief Scientist for Cloud AI and ML at Google, observed that even a few self-driving cars can put considerable data into the cloud. What happens when there are millions of such vehicles? Building on-ramps for AI is the next step, with APIs and SDKs that draw new applications onto Google's TensorFlow platform. The company discusses this in terms of 'democratising' AI, which means making sure its algorithms and cloud analytics systems become widely available before others move into this space.


A final differentiator for GCP is that Google is the largest corporate purchaser of renewable energy. In 2017, the company is on track to reach 100% renewable power for its global data centres and offices. One hopes that others will catch up soon.

Thursday, March 9, 2017

Google Builds out its Cloud Portfolio

Google rolled out significant enhancements to its cloud platform. Here are some highlights:

  • Google has invested an estimated $26 billion over the past 3 years in its infrastructure, including data centers and networks
  • Google has built a custom security chip codenamed Titan that operates at the BIOS level. Intel is also introducing new security tools to keep customer data secure. 
  • Three new Google Cloud Platform Regions are coming online -- The Netherlands, Montreal (Canada), and California, in addition to other data center construction underway in Northern Virginia, São Paulo, London, Finland, Frankfurt, Mumbai, Singapore, and Sydney.
  • Google is already deploying Intel's latest Skylake processors in its servers
  • New developer tools and data analytic services that will help enterprises build apps in the cloud and find new value from their data. 
  • Google Cloud Platform now offers an expanded environment for Google App Engine and welcomes all developers to use the public beta of Cloud Functions, including its new integration with Firebase. 
  • BigQuery, Google’s data warehouse, now features new pipelines that make it easy for customers to analyze their data from Google Adwords, DoubleClick Campaign Manager, DoubleClick for Publishers, and YouTube Analytics. This enables marketers and advertisers to gain a single view of their customer experience. 
  • Google Cloud launched a new Commercial Dataset program that enables users to access licensed datasets within BigQuery. This will offer the opportunity for businesses to access more data and find more robust insights and more easily build machine learning models.
  • Cloud Dataprep is a new data preparation tool that is capable of automatically detecting types of data, enabling data analysts and scientists to find insights at a faster rate.



Video: Google Cloud Next - Day 1



Here is the archived livestream of Day 1 of the Google Cloud Next event in San Francisco, March 8, 2017.

The 3 hour 25 minute video includes the presentations by Diane Greene, SVP of Google Cloud; Sundar Pichai, CEO of Google; Eric Schmidt, Chairman of Alphabet and Fei-Fei Li, Chief Scientist for Google Cloud Machine Learning and AI and Professor of Computer Science at Stanford.

See video: https://youtu.be/j_K1YoMHpbk


Monday, February 27, 2017

Qualcomm Adds Support for Android Things OS on 4G LTE Processors

Qualcomm plans to add support for the Android Things operating system (OS) on its Snapdragon 210 processors with X5 LTE modems.

The Android Things OS is a new vertical of Android designed for Internet of Things (IoT) devices, and Snapdragon processors are expected to be the world’s first commercial System-on-Chip (SoC) solutions to offer integrated 4G LTE support for this OS.

This combination is designed to support a new class of IoT applications requiring robust, security-focused and managed connectivity including electronic signage, remote video monitoring, asset tracking, payment and vending machines and manufacturing, as well as consumer devices such as smart assistants and home appliances.

Snapdragon 210 processors running Android Things OS will also allow manufacturers and developers to harness the power of the Google Cloud Platform and Google services over 4G LTE in their IoT solutions. Additionally, support for Google

http://www.qualcomm.com

Sunday, December 18, 2016

Google Joins Cloud Foundry

The Google Cloud Platform (GCP) is now part of Cloud Foundry.

Google noted that it has been very active this year with the Cloud Foundry community, including the delivery of the BOSH Google CPI release, enabling the deployment of Cloud Foundry on GCP, and the recent release of the Open Service Broker API. These efforts have led to custom service brokers for eight GCP services (a catalog-query sketch follows the list):

  • Google BigQuery
  • Google Cloud Storage
  • Google Cloud SQL
  • Google Cloud Pub/Sub
  • Google Cloud Vision API
  • Google Cloud Speech API
  • Google Cloud Natural Language API
  • Google Translation API
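Service brokers such as these implement the Open Service Broker API, so a platform can discover what a broker offers by fetching its catalog. A hedged sketch: the broker URL and credentials are hypothetical, while the /v2/catalog path and version header follow the published OSB specification.

```python
# Sketch of how a Cloud Foundry platform queries a service broker's catalog
# via the Open Service Broker API. Broker URL and credentials are hypothetical.
import requests

BROKER_URL = "https://gcp-service-broker.example.com"   # hypothetical broker

resp = requests.get(
    f"{BROKER_URL}/v2/catalog",
    headers={"X-Broker-API-Version": "2.13"},
    auth=("broker-user", "broker-password"),             # hypothetical credentials
)
resp.raise_for_status()
for service in resp.json()["services"]:
    print(service["name"], [plan["name"] for plan in service["plans"]])
```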


https://cloudplatform.googleblog.com/

Tuesday, December 13, 2016

Google Relaunches Self-driving Car Project as Waymo

The Google self-driving car project has been relaunched as an independent company called Waymo.

Waymo's focus is on fully self-driving cars that operate without steering wheels or human guidance. Its test fleet currently includes modified Lexus SUVs and custom-built prototype vehicles. The company plans to add modified Chrysler Pacifica minivans soon.

https://waymo.com/

Chrysler Pacifica Minivan Joins Google's Self-Driving Car Test Fleet

Fiat Chrysler Automobiles (FCA) will integrate Google's self-driving technology into all-new 2017 Chrysler Pacifica Hybrid minivans to expand Google's existing self-driving test program.

This is the first time that Google has worked directly with an automaker to integrate its self-driving system, including its sensors and software, into a passenger vehicle.

By late this year, around 100 Pacifica minivans will be built to carry Google's self-driving technology. Google will integrate the suite of sensors and computers that the vehicles rely on to navigate roads autonomously.

Google, which is testing its self-driving cars in four U.S. cities, said the self-driving Chrysler Pacifica Hybrid minivans will be tested on its private test track in California prior to operating on public roads.

http://www.fcanorthamerica.com
http://www.google.com

Monday, December 12, 2016

Google Opens Massive Data Center in the Netherlands

Google announced the opening of its newest data center in Eemshaven, the Netherlands. The facility is 100% powered by renewable energy.

https://plus.google.com/+alphabetir/posts/HpBA1UYwV9h



  • In September 2014, Google broke ground for the data center in Eemshaven, saying it planned to invest EUR 600 million in the facility.  The target date for opening was late 2016.

Tuesday, December 6, 2016

Google to Reach 100% Renewable Target in 2017

Google expects to be using 100% renewable energy for its global operations, including data centers and offices, during 2017.

With current purchase commitments reaching 2.6 gigawatts of wind and solar energy, Google now ranks as the world's largest corporate buyer of renewable power.

Google's newly-published, 72-page Environmental Report is here:

https://static.googleusercontent.com/media/www.google.com/en//green/pdf/google-2016-environmental-report.pdf

Sunday, December 4, 2016

IBM’s Software Catalog Now Runs on Google Compute Engine

Google Cloud Platform is now officially an IBM Eligible Public Cloud, enabling users to run a wide range of IBM software SKUs on Google Compute Engine with existing licenses.

In a blog post, Chuck Coulson, Global Technology Partnerships at Google, explains that the majority of IBM's vast catalog of software -- everything from middleware and DevOps products (WebSphere, MQ Series, DataPower, Tivoli) to data and analytics offerings (DB2, Informix, Cloudant, Cognos, BigInsights) -- can run on Google Compute Engine.

https://cloudplatform.googleblog.com/2016/12/IBMs-software-catalog-now-eligible-to-run-on-Google-Cloud.html


Tuesday, October 25, 2016

Google Fiber Expansion Officially Halted, Craig Barratt Steps Down

The Google Fiber project officially confirmed that it is pausing operations and laying off workers in many cities where it had once anticipated deploying an FTTH network. The company is now considering new technology options and may resume discussions with potential partners in the future.

Google Fiber will continue the rollout in cities where the service has already launched or where construction is underway.

Craig Barratt, SVP, Alphabet and CEO of Access, is stepping down.

http://googlefiberblog.blogspot.com/2016/10/advancing-our-amazing-bet.html
