Showing posts with label Google Cloud. Show all posts

Wednesday, April 10, 2019

Google Cloud offers ice cold archive storage

Google Cloud announced new archive class of Cloud Storage designed for long-term data retention as an alternative to tape storage.

Pricing starts at $0.0012 per GB per month ($1.23 per TB per month).
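
As a sanity check, the per-TB figure follows directly from the per-GB rate (using 1 TB = 1,024 GB); a quick sketch:

```python
# Archive-class storage cost estimate from the announced rate.
RATE_PER_GB_MONTH = 0.0012  # USD per GB per month

def monthly_cost(gb: float) -> float:
    """Return the monthly storage cost in USD for `gb` gigabytes."""
    return gb * RATE_PER_GB_MONTH

print(round(monthly_cost(1024), 2))     # 1 TB -> 1.23 USD/month
print(round(monthly_cost(1024**2), 2))  # 1 PB -> 1258.29 USD/month
```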

Access and management are performed via the same APIs used by Google's other storage classes, with full integration into object lifecycle management. Data in Cloud Storage is always stored redundantly across availability zones, with eleven nines (99.999999999%) annual durability.

Google Cloud also announced the general availability of Cloud Filestore, a managed file storage system that’s built for high performance. Cloud Filestore’s premium instances will now provide increased read performance up to 1.2 GB/s throughput and 60k IOPS.

https://cloud.google.com/blog/products/storage-data-transfer/whats-cooler-than-being-cool-ice-cold-archive-storage

Tuesday, April 9, 2019

Google Cloud teams with partners for "Open Cloud"

Google Cloud committed to bring open source to the next level by announcing strategic partnerships with the following companies:

Confluent: Founded by the team that built Apache Kafka, Confluent builds an event streaming platform that lets companies easily access data as real-time streams.

DataStax: DataStax powers enterprises with its always-on, distributed cloud database built on Apache Cassandra and designed for hybrid cloud.

Elastic: As the creators of the Elastic Stack, built on Elasticsearch, Elastic builds self-managed and SaaS offerings that make data usable in real time and at scale for search use cases, like logging, security, and analytics.

InfluxData: InfluxData’s time series platform can instrument, observe, learn and automate any system, application and business process across a variety of use cases. InfluxDB (developed by InfluxData) is an open-source time series database optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, IoT sensor data, and real-time analytics.

MongoDB: MongoDB is a modern, general-purpose database platform that brings software and data to developers and the applications they build, with a flexible model and control over data location.

Neo4j: Neo4j is a native graph database platform specifically optimized to map, store, and traverse networks of highly connected data to reveal invisible contexts and hidden relationships. By analyzing data points and the connections between them, Neo4j powers real-time applications.

Redis Labs: Redis Labs is the home of Redis, the world’s most popular in-memory database, and commercial provider of Redis Enterprise. It offers performance, reliability, and flexibility for personalization, machine learning, IoT, search, e-commerce, social, and metering solutions worldwide.

https://cloud.google.com/blog/products/open-source/bringing-the-best-of-open-source-to-google-cloud-customers

Thursday, April 4, 2019

CoreSite offers 10G access to Google Cloud’s Dedicated Interconnect

CoreSite now supports Google Cloud's Dedicated Interconnect product in both its Los Angeles and Denver markets. This service from Google Cloud allows enterprises and network service providers colocated with CoreSite to connect directly to Google Cloud Platform through 10 Gbps fiber interconnects.

Dedicated Interconnect carries a guaranteed uptime SLA of 99.99%.

“CoreSite is pleased to announce the availability of direct fiber interconnection to Google Cloud Platform, providing our customers with a dedicated, flexible and high-performance solution to optimize their evolving cloud and connectivity requirements,” said Maile Kaiser, Senior Vice President of Sales at CoreSite.

In addition to Dedicated Interconnect in CoreSite’s Los Angeles and Denver markets, CoreSite customers are able to connect directly to Google Cloud Platform via the CoreSite Any2Exchange in Chicago, Denver, and Los Angeles. Additionally, customers may access Google Cloud Platform from all of CoreSite’s markets via its network-rich ecosystem of providers or through its inter-site connectivity in select markets. CoreSite offers numerous inter-site connectivity options including lit transport solutions and dedicated dark fiber.

CoreSite’s Any2Exchange is the second-largest Internet exchange in the United States and is the largest Internet exchange on the West Coast. With CoreSite’s Any2Exchange, customers can make secure, SLA-backed, low-latency connections over one port and at a variety of speeds (including 1Gbps, 10Gbps, and 100Gbps) with direct peering to Google Cloud Platform. Customers connecting directly to Google Cloud Platform benefit from reduced congestion and routing issues, reduction in transit costs, lower latency and reduced complexity, all while having access to connect and peer with over 400 participating members.

Sunday, February 10, 2019

Google encrypts Kubernetes secrets with Cloud KMS

Google Cloud, which was already encrypting data at rest by default, including data in Google Kubernetes Engine (GKE), is adding application-layer secrets encryption using the same keys in its hosted Cloud Key Management Service (KMS).

Application-layer secrets encryption, which is now in beta in GKE, protects secrets with envelope encryption: secrets are encrypted locally in AES-CBC mode with a local data encryption key, and the data encryption key is encrypted with a key encryption key managed in Cloud KMS as the root of trust.
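
The envelope-encryption pattern described above can be illustrated with a stdlib-only toy. Note the hedges: a real deployment uses AES-CBC locally and Cloud KMS for the key-encryption key; the hash-based XOR stream cipher and in-process "KMS" below are stand-ins for illustration only.

```python
import os, hashlib

def _stream(key: bytes, n: int) -> bytes:
    """Toy keystream derived from a key (stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, _stream(key, len(data))))

# "KMS": holds the key-encryption key (KEK) as the root of trust.
KEK = os.urandom(32)

def encrypt_secret(plaintext: bytes):
    dek = os.urandom(32)                      # local data encryption key
    ciphertext = xor_encrypt(dek, plaintext)  # encrypt the secret with the DEK
    wrapped_dek = xor_encrypt(KEK, dek)       # wrap the DEK with the KEK ("KMS" call)
    return wrapped_dek, ciphertext

def decrypt_secret(wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek = xor_encrypt(KEK, wrapped_dek)       # unwrap the DEK via the KEK
    return xor_encrypt(dek, ciphertext)

wrapped, ct = encrypt_secret(b"db-password")
assert decrypt_secret(wrapped, ct) == b"db-password"
```

The key point of the pattern: the secret never travels to KMS, only the small DEK does, and rotating the KEK does not require re-encrypting the data itself.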

Google Cloud said the new capability provides flexibility for specific security models.

https://cloud.google.com/blog/products/containers-kubernetes/exploring-container-security-encrypting-kubernetes-secrets-with-cloud-kms


Sunday, November 18, 2018

Thomas Kurian to replace Diane Greene at Google Cloud

Thomas Kurian will succeed Diane Greene as head of Google Cloud beginning in January.

Kurian currently serves as President of Product Development at Oracle. He has an MBA from Stanford and a BS in Electrical Engineering from Princeton.

Greene has led the division since December 2015. In a Google blog post, Greene says she will now dedicate time to mentoring women entrepreneurs and focusing on education.

Thursday, October 11, 2018

Google Cloud Platform expands its enterprise networking

Google Cloud Platform introduced several additions to its enterprise networking portfolio.

Cloud NAT (beta) - a Google-managed Network Address Translation service that lets enterprises provision application instances without public IP addresses while still allowing them to access the Internet for updates, patching, and configuration management in a controlled and efficient manner. Outside resources cannot directly reach any of the private instances behind the Cloud NAT gateway, helping to keep Google Cloud VPCs isolated and secure.

Firewall Rules Logging (beta) - allows enterprises to audit, verify, and analyze the effects of firewall rules. For example, it provides visibility into connection attempts blocked by a given firewall rule, and can confirm that no unauthorized connections were allowed into an application. Firewall log records of allowed or denied connections are reported every five seconds and can be exported to Stackdriver Logging, Cloud Pub/Sub, or BigQuery.

Managed TLS Certificates for HTTPS load balancers (beta) - customers can now deploy HTTPS load balancers with Google provisioning root-trusted TLS certificates and managing their lifecycle, including renewals and revocation.

https://cloud.google.com/blog/products/networking/simplifying-cloud-networking-for-enterprises-announcing-cloud-nat-and-more

Google Cloud Storage adds regional replication

Google Cloud Platform (GCP) introduced more data storage replication options within regions and between regions. This allows customers to move large data sets across regions using Google infrastructure to ensure availability and business continuity.

Some highlights:

  • A new dual-regional option for replicating data, now available in beta, which is especially beneficial for analytics and big data workloads. Customers can write to a single dual-regional bucket without manually copying data between primary and secondary locations. 
  • Nearline and Coldline data are now geo-redundant in multi-regional locations, raising the availability of archival data stored this way. 
  • A new look coming for Cloud Storage choices.
  • A new Cloud Storage C++ library so developers can customize their applications.



Thursday, August 16, 2018

Google Cloud Platform: Building a Hybrid Render Farm

Google Cloud Platform posted an online guide for building a render farm using its scalable architecture.

Render farms, which content creators need for outputting digital productions, can be very expensive to build on-premises and require a lot of electricity to run.

GCP now enables render applications to scale from 2- or 4-core VMs all the way to a 160-core VM with up to 3,844 GB of RAM. Up to eight GPUs may be attached to any VM to create a GPU farm.

https://cloud.google.com/blog/products/gcp/building-a-hybrid-render-farm-on-gcp-new-guide-available


Monday, August 6, 2018

Google Cloud expands its virtual workstations with NVIDIA Tesla GPUs

Google Cloud Platform began offering virtual workstations optimized for graphics-intensive applications and machine learning inference based on the NVIDIA Tesla P4 GPU.

The new support enables users to turn any instance with one or more GPUs into a high-end workstation. P4s offer 8 GB of GDDR5 memory.

https://cloud.google.com/


Wednesday, July 25, 2018

Google expands its cloud database capabilities

Google Cloud Platform (GCP) is expanding its portfolio of managed database services and announcing new cloud partnerships. Here are the highlights:

  • Oracle workloads are now supported in GCP
  • SAP HANA workloads can run on GCP persistent-memory VMs
  • Cloud Firestore launching for all users developing cloud-native apps. Cloud Firestore, which is a serverless, NoSQL document database, brings the ability to store and sync app data at global scale. 
  • Regional replication, visualization tool is now available for Cloud Bigtable, which is a high-throughput, low-latency, and massively scalable NoSQL database.
  • Cloud Spanner updates, by popular demand
  • GCP is partnering with Intel and SAP to offer Compute Engine virtual machines backed by the upcoming Intel Optane DC Persistent Memory for SAP HANA workloads. GCP is scaling up its instance size roadmap for SAP HANA production workloads from a max of 4TB currently to 12TB of memory by next summer, and 18TB of memory by the end of 2019.
  • Google Compute Engine gains additional resource-based pricing options. With resource-based pricing, Google will add up all the resources you use across all your machines into a single total and then apply a usage discount. 
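
The aggregation idea behind resource-based pricing can be sketched as follows. The rates and the flat 30% discount below are invented for illustration; real sustained-use discounts depend on Google's published rate tables.

```python
# Sketch of resource-based pricing: sum vCPUs and memory across all VMs,
# then price the aggregate totals rather than each machine shape individually.
VCPU_RATE = 0.033  # USD per vCPU-hour (hypothetical)
MEM_RATE = 0.0045  # USD per GB-hour (hypothetical)

def monthly_bill(vms, hours=730, discount=0.30):
    """Aggregate resources across the fleet, then apply one usage discount."""
    total_vcpus = sum(v["vcpus"] for v in vms)
    total_mem_gb = sum(v["mem_gb"] for v in vms)
    gross = (total_vcpus * VCPU_RATE + total_mem_gb * MEM_RATE) * hours
    return gross * (1 - discount)

fleet = [{"vcpus": 4, "mem_gb": 16}, {"vcpus": 8, "mem_gb": 32}]
print(round(monthly_bill(fleet), 2))
```

The design point is that two 4-vCPU machines bill the same as one 8-vCPU machine, so customers no longer have to shape instances around discount tiers.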

Tuesday, July 24, 2018

Google builds its Cloud Services Platform

Google outlined its vision for integrated cloud services running over on-premises equipment as well as its own massively scalable cloud infrastructure. At its Google Cloud Next conference in San Francisco, Google presented its Cloud Services Platform, which is architecturally aligned with the joint hybrid cloud products being developed with Cisco.

So far, the Cloud Services Platform includes the following:


  • Service mesh: Google is announcing an Istio service for managing services within a Kubernetes Engine cluster. Google is releasing Istio 1.0 in open source, Managed Istio, and Apigee API Management for Istio.
  • Hybrid computing: Google Kubernetes Engine (GKE) On-Prem will provide multi-cluster management
  • Policy enforcement: GKE Policy Management, to take control of Kubernetes workloads
  • Ops tooling: Stackdriver Service Monitoring
  • Serverless computing: GKE Serverless add-on and Knative, an open source serverless framework. Essentially, Google is rolling out serverless containers, which allow customers to run container-based workloads in a fully managed environment and to pay only for actual usage. 
  • Developer tools: Cloud Build, a fully managed CI/CD platform
Google is also rolling out Cloud Functions, an event-driven compute service with an SLA, support for Python 3.7 and Node.js 8, and networking and security controls. Cloud Functions can be tied into more than 20 GCP services such as BigQuery, Cloud Pub/Sub, machine learning APIs, G Suite, and Google Assistant.
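
An event-driven function follows a simple request-in, response-out shape. The sketch below shows the general pattern for a Python 3.7 HTTP-triggered function; in the real runtime the `request` argument is a Flask request object, so a plain dict stands in here to keep the example self-contained.

```python
# Minimal sketch of an HTTP-style function entry point.
# In Cloud Functions the handler receives a Flask request; a dict
# stands in here so the example runs without dependencies.
def hello_http(request: dict) -> str:
    """Return a greeting based on an optional 'name' field."""
    name = request.get("name", "World")
    return f"Hello, {name}!"

print(hello_http({"name": "GCP"}))  # Hello, GCP!
print(hello_http({}))               # Hello, World!
```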

Google is introducing its third-generation Cloud Tensor Processing Units (TPUs).

Google is rapidly expanding its Cloud AutoML (machine learning) capabilities, which are now in public beta.


Cisco and Google Partner on New Hybrid Cloud Solution

Cisco and Google Cloud have formed a partnership to deliver a hybrid cloud solution that enables applications and services to be deployed, managed, and secured across on-premises environments and Google Cloud Platform. Pilot implementations are expected to launch early next year, with commercial rollout later in 2018.

The main idea is to deliver a consistent Kubernetes environment for both on-premises Cisco Private Cloud Infrastructure and Google’s managed Kubernetes service, Google Container Engine.

The companies said their open hybrid cloud offering will provide enterprises with a way to run, secure and monitor workloads, thus enabling them to optimize their existing investments, plan their cloud migration at their own pace and avoid vendor lock in.

Cisco and Google Cloud hybrid solution highlights:


  • Orchestration and Management – Policy-based Kubernetes orchestration and lifecycle management of resources, applications and services across hybrid environments
  • Networking – Extend network policy and configurations to multiple on-premises and cloud environments
  • Security – Extend security policy and monitor application behavior
  • Visibility and Control – Real-time network and application performance monitoring and automation
  • Cloud-ready Infrastructure – Hyperconverged platform supporting existing application and cloud-native Kubernetes environments
  • Service Management with Istio – Open-source solution provides a uniform way to connect, secure, manage and monitor microservices
  • API Management – Google's Apigee enterprise-class API management enables legacy workloads running on premises to connect to the cloud through APIs
  • Developer Ready – Cisco's DevNet Developer Center provides tools and resources for cloud and enterprise developers to code in hybrid environments
  • Support – Joint coordinated technical support for the solution

Tuesday, July 3, 2018

Diane Bryant departs Google Cloud

Diane Bryant has stepped down as Chief Operating Officer for Google Cloud, where she reported to Diane Greene. Bryant joined Google Cloud in December 2017.

Bryant was formerly Group President at Intel, known for her leadership of Intel's Data Center Group (DCG) as general manager and executive vice president. Intel's DCG generated $17 billion in revenue in 2016.

Tuesday, June 26, 2018

Google Cloud readies launch in Los Angeles

Google Cloud Platform is ready to launch a new Los Angeles cloud region next month, joining its current regions of Oregon, Iowa, South Carolina, and northern Virginia.

The new Los Angeles cloud region will target the aerospace, music, media and entertainment industries of southern California.

"Los Angeles is a global hub for fashion, music, entertainment, aerospace, and more—and technology is essential to strengthening our status as a center of invention and creativity,” said Los Angeles Mayor Eric Garcetti. “We are excited that Google Cloud has chosen Los Angeles to provide infrastructure and technology solutions to our businesses and entrepreneurs.”

https://cloudplatform.googleblog.com/

Google Cloud adds high-performance file storage

Google Cloud Platform (GCP) announced a new high-performance storage service for users who need to create, read and write large files with low latency.

The new Cloud Filestore is managed file storage for applications that require a file system interface and a shared file system. It gives users a simple, integrated, native experience for standing up fully managed network-attached storage (NAS) with their Google Compute Engine and Kubernetes Engine instances.

Two tiers are offered: the high-performance Premium tier at $0.30 per GB per month, and the midrange-performance Standard tier at $0.20 per GB per month in us-east1, us-central1, and us-west1 (pricing in other regions varies).

https://cloudplatform.googleblog.com/

Tuesday, May 29, 2018

AT&T NetBond brings direct connect to Google Cloud Platform

AT&T and Google Cloud announced two areas of collaboration.

First, business customers will be able to use AT&T NetBond for Cloud to connect in a highly secure manner to Google Cloud Platform. Google's Partner Interconnect offers organizations private connectivity to GCP and allows data centers geographically distant from a Google Cloud region or point of presence to connect at up to 10 Gbps. Google has joined more than 20 leading cloud providers in the NetBond® for Cloud ecosystem, which gives access to more than 130 different cloud solutions.

Second, G Suite, which is Google's cloud-based productivity suite for business including Gmail, Docs and Drive, is now available through AT&T Collaborate, a hosted voice and collaboration solution for businesses.

"We're committed to helping businesses transform through our edge-to-edge capabilities. This collaboration with Google Cloud gives businesses access to a full suite of productivity tools and a highly secure, private network connection to the Google Cloud Platform," said Roman Pacewicz, chief product officer, AT&T Business. "Together, Google Cloud and AT&T are helping businesses streamline productivity and connectivity in a simple, efficient way."

"AT&T provides organizations globally with secure, smart solutions, and our work to bring Google Cloud's portfolio of products, services and tools to every layer of its customers' business helps serve this mission," said Paul Ferrand, President Global Customer Operations, Google Cloud. "Our alliance allows businesses to seamlessly communicate and collaborate from virtually anywhere and connect their networks to our highly-scalable and reliable infrastructure."

Thursday, May 17, 2018

Google Cloud acquires Cask for big data ingestion on-ramp

Google Cloud will acquire Cask Data Inc., a start-up based in Palo Alto, California, that offers a big data platform for enterprises. Financial terms were not disclosed.

The open source Cask Data Application Platform (CDAP) provides a data ingestion service that simplifies and automates the task of building, running, and managing data pipelines. Cask says it cuts down the time to production for data applications and data lakes by 80%. The idea is to provide a standardization and simplification layer that allows data portability across diverse environments, usability across diverse groups of users, and the security and governance needed in the enterprise.
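
The pipeline abstraction CDAP provides can be pictured as composable stages. The toy below is not the CDAP API (which is Java-based); it only sketches the source → transform → sink shape that such pipelines automate.

```python
# Toy data pipeline: a source feeding transforms feeding a sink.
# This illustrates the pattern, not the CDAP API itself.
def run_pipeline(source, transforms, sink):
    results = []
    for record in source:
        for t in transforms:
            record = t(record)
            if record is None:      # a transform may drop a record
                break
        else:
            results.append(record)
    sink(results)
    return results

raw = [" alice,3 ", "bob,7", "carol,oops"]

def parse(line):
    """Parse 'name,value' lines; drop malformed records."""
    name, _, val = line.strip().partition(",")
    try:
        return {"name": name, "value": int(val)}
    except ValueError:
        return None

out = run_pipeline(raw, [parse], sink=lambda rows: None)
print(out)  # [{'name': 'alice', 'value': 3}, {'name': 'bob', 'value': 7}]
```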

Google said it plans to continue to develop and release new versions of the open source Cask Data Application Platform (CDAP).
“We’re thrilled to welcome the talented Cask team to Google Cloud, and are excited to work together to help make developers more productive with our data processing services both in the cloud and on-premise. We are committed to open source, and look forward to driving the CDAP project’s growth within the broader developer community,” stated William Vambenepe, Group Product Manager, Google Cloud

The Cask team added: "Over the past 6+ years, we have invested heavily in the open source CDAP available today and have deployed our technology with some of the largest enterprises in the world. We accomplished great things as a team, had tons of fun and learned so much over the years. We are extremely proud of what we've achieved with CDAP to date, and couldn't be more excited about its future."

Cask was founded by Jonathan Gray and Nitin Motgi.


Wednesday, May 9, 2018

Google Cloud intros managed in-memory data store for Redis

Google launched a public beta of a fully managed in-memory data store service for Redis.

The company says its Cloud Memorystore provides a scalable, more secure and highly available Redis service that is fully compatible with open source Redis, letting you migrate your applications to Google Cloud Platform (GCP) with zero code changes.

Redis is an open-source in-memory database project implementing a distributed, in-memory key-value store. It supports data structures and features like persistence, replication and pub-sub.
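
The core model — an in-memory key-value store with publish/subscribe — can be sketched in a few lines. This is a toy illustrating the concepts, not Redis's actual implementation or wire protocol (and it omits persistence, replication, and networking entirely).

```python
# Toy in-memory key-value store with pub/sub, illustrating the model
# that Redis implements.
class MiniStore:
    def __init__(self):
        self._data = {}
        self._subscribers = {}  # channel -> list of callbacks

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def subscribe(self, channel, callback):
        self._subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # Deliver the message to every subscriber of the channel.
        for cb in self._subscribers.get(channel, []):
            cb(message)

store = MiniStore()
store.set("session:42", "alice")

received = []
store.subscribe("alerts", received.append)
store.publish("alerts", "cache warmed")

print(store.get("session:42"), received)  # alice ['cache warmed']
```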

Tuesday, May 8, 2018

Google Cloud Platform to open region in Zurich

Google Cloud Platform (GCP) is building a new region in Zürich, Switzerland. The facility is expected to be online in the first half of 2019.

This will be GCP's sixth region in Europe, joining existing regions in the Netherlands, Belgium, Germany, and the United Kingdom, and a facility under development in Finland. With Zürich, GCP will have 20 regions in service or announced.




Tuesday, April 24, 2018

Megaport's SDN now extends to Google Cloud's global network

Megaport has added support for Google Cloud's Partner Interconnect, a service from Google Cloud that allows customers to privately connect to Google Cloud Platform.

Google Cloud's Partner Interconnect is a new product in the Google Cloud Interconnect family. Last September, Google announced Dedicated Interconnect, which provides higher-speed and lower-cost connectivity than VPN, and has become the go-to solution to connect on-premises data centres with the cloud.

Megaport said it is now providing connectivity to the nearest Google edge Point of Presence at a variety of sub-rate interface speeds varying from 50 Mbps to 10 Gbps.

"Partner Interconnect gives Google Cloud customers even more connectivity choices for hybrid environments," said John Veizades, Product Manager, Google Cloud. "Together with Megaport, we are making it easier for customers to extend their on-prem infrastructure to the Google Cloud Platform."

"Scalable connectivity to Google Cloud Platform ensures that cloud-enabled applications perform to meet mission-critical business requirements," said Vincent English, CEO of Megaport. "Google Cloud brings tremendous value to the Megaport Ecosystem and empowers our customers to address a wide variety of business needs. We have been working with Google Cloud since our inception and we are excited to grow and evolve our integration to ensure the next generation of business growth."

Wednesday, April 4, 2018

Google Cloud adopts P4Runtime as foundational for network programming

Google Cloud will use P4Runtime as the foundation for its next generation of data centers and wide area network control-plane programming, according to a new blog posting from Jim Wanderer, Engineering Director at Google and Amin Vahdat, Google Fellow.

P4 is a programming language designed to be target-independent (i.e., a program written in P4 can be compiled, without modification, to run on a variety of targets, such as ASICs, FPGAs, CPUs, NPUs, and GPUs) and protocol-independent (i.e., a P4 program can describe existing standard protocols or specify innovative, new, customized forwarding behaviors). P4 can be used for programmable and fixed-function devices alike. For example, it is used to capture the switch pipeline behavior under the Switch Abstraction Interface (SAI) APIs. P4 is also used by the ONF Stratum project to describe forwarding behavior across a variety of fixed and programmable devices. The P4 Language Consortium (P4.org), creator of the P4 programming language, recently transitioned to become a project of the Open Networking Foundation (ONF) and part of the Linux Foundation portfolio.

Google, which designs and builds its own hardware switches and software, sees the P4Runtime as a new way for control plane software to program the forwarding path of a switch as it provides a well-defined API to specify the switch forwarding pipelines. The vision is that the P4Runtime will be used to control any forwarding plane, from a fixed-function ASIC to a fully programmable network switch.

Google is working with the Open Networking Foundation (ONF) on Stratum, an open source project to implement an open reference platform for a truly "software-defined" data plane, designed and built around the P4Runtime.


https://cloudplatform.googleblog.com/
