
Wednesday, December 12, 2018

Amazon activates AWS Europe (Stockholm) Region

Amazon announced the opening of the AWS Europe (Stockholm) Region with three Availability Zones.

AWS Regions are comprised of Availability Zones, which are technology infrastructure in separate and distinct geographic locations with enough distance to significantly reduce the risk of a single event impacting business continuity, yet near enough to provide low latency for high availability applications. Each Availability Zone has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks.

With Stockholm online, AWS now provides 60 Availability Zones across 20 infrastructure regions globally, with another 12 Availability Zones and four regions in Bahrain, Hong Kong SAR, Italy, and South Africa all coming online by the first half of 2020.

The AWS Europe (Stockholm) Region is AWS’s fifth in Europe, joining existing regions in France, Germany, Ireland, and the UK.

“Since the early days of AWS, Nordic organizations have been using AWS’s cloud technologies to help reinvent entire industries, such as Supercell and Rovio in gaming, Scania and Volvo in automotive, and Nokia and Telenor in telecommunications,” said Andy Jassy, Chief Executive Officer, Amazon Web Services. “Tens of thousands of Nordic customers have been using AWS from regions around the world, but many have shared that they also wanted an AWS Region in the Nordics so they can easily operate their most latency-sensitive workloads for end-users in the Nordics while meeting any data sovereignty requirements. We’re excited to deliver our AWS Stockholm Region today to meet these customer requests.”

https://aws.amazon.com/local/nordics/

Wednesday, November 28, 2018

AWS re:Invent: Highlights from Day 3

In perhaps its most significant announcement at this week's re:Invent conference in Las Vegas, Amazon Web Services unveiled plans to offer pre-configured compute & storage hardware racks for deployment in customers' private data centers. The hardware will be fully managed by AWS, allowing customers to run compute and storage on-premises while seamlessly connecting to the rest of AWS's broad array of services in the cloud. The service is currently in private preview and AWS expects it will be widely available in the second half of 2019.


AWS Outposts come in two variants: the first is an extension of the fast-growing VMware Cloud on AWS service that runs on AWS Outposts; the second allows customers to run compute and storage on-premises using the same native AWS APIs used in the AWS cloud.

VMware Cloud on AWS Outposts delivers the entire VMware Software-Defined Data Center (SDDC) - compute, storage, and networking infrastructure - to run on-premises on AWS Outposts, managed as a service from the same console as VMware Cloud on AWS, enabling customers to take advantage of its ease of management and integration with AWS services.

The AWS native variant of AWS Outposts is aimed at customers who prefer the same exact APIs and control plane they’re used to running in AWS’s cloud, but on-premises. This variant allows customers to run other software with native AWS Outposts, starting with a new integrated offering from VMware called VMware Cloud Foundation for EC2, which will feature VMware technologies and services that work across VMware and Amazon EC2 environments, like NSX (to help bridge AWS Outposts to local data center networks), VMware AppDefense (to protect known good applications), and VMware vRealize Automation (for workload provisioning).

“Customers are telling us that they don’t want a hybrid experience that attempts to recreate a stunted version of a cloud on-premises, because it’s perpetually out of sync with the cloud version and requires a lot of heavy lifting, managing custom hardware, different control planes, different tooling, and manual software updates. There just isn’t a lot of value in that type of on-premises offering and that’s why these solutions aren’t getting much traction,” said Andy Jassy, CEO of AWS. “So we started with what our customers were asking for and worked backwards. They told us they want an extension of their AWS or VMware Cloud on AWS environment on-premises, using the same hardware we’re using, the same interfaces, the same APIs, the same instant access to the latest AWS capabilities the minute they become available, and they don’t want to manage hardware or software. So, we tried to reimagine what customers really wanted when running in hybrid mode, and developed AWS Outposts.”

“VMware Cloud on AWS broke the barriers between the data center and the cloud by combining the best of the private cloud and public cloud in the AWS cloud,” said Pat Gelsinger, chief executive officer, VMware. “Today we expand our strategic collaboration with AWS to provide our mutual enterprise customers with more choice and options as they extend their hybrid cloud environments to drive agility, simplicity, security, and full infrastructure interoperability.”

Other highlights from Day 3:

  • AWS announced 13 new machine learning capabilities and services, across all layers in the machine learning stack.
  • 85% of TensorFlow workloads are already on AWS
  • AWS has significantly improved the way in which TensorFlow distributes training tasks across GPUs, enabling close to linear scalability when training multiple types of neural networks (90 percent efficiency across 256 GPUs, compared to the prior norm of 65 percent).
  • AWS announced its own high-performance machine learning inference chip, AWS Inferentia, developed by the Annapurna team. The new inference processor provides hundreds of teraflops per chip and thousands of teraflops per Amazon EC2 instance, supporting multiple frameworks (including TensorFlow, Apache MXNet, and PyTorch) and multiple data types (including INT-8 and mixed-precision FP-16 and bfloat16).
  • Hundreds of companies are standardizing their machine learning workloads on AWS SageMaker. Customers include Adobe, BMW, Cathay Pacific, Dow Jones, Expedia, Formula 1, GE Healthcare, HERE, Intuit, Johnson & Johnson, Kia Motors, Lionbridge, Major League Baseball, NASA JPL, Politico.eu, Ryanair, Shell, Tinder, United Nations, Vonage, the World Bank, and Zillow.  

  • Amazon SageMaker Neo (generally available now) is a new deep learning model compiler that supports hardware platforms from NVIDIA, Intel, Xilinx, Cadence, and Arm, and popular frameworks such as TensorFlow, Apache MXNet, and PyTorch. AWS will also make Neo available as an open source project.
  • AWS introduced an AWS Marketplace for machine learning -- third-party algorithms and tools that plug into SageMaker.
  • The Amazon Aurora database is the fastest-growing service in the history of AWS and now has tens of thousands of customers.
  • AWS announced two new purpose-built database services: Amazon Timestream, a fast, scalable, and fully managed time series database for IoT and operational applications, and Amazon Quantum Ledger Database (QLDB), a highly scalable, immutable, and cryptographically verifiable ledger.
  • A new AWS Control Tower gives customers an automated “landing zone” for setting up their multi-account environment and continuously governing their AWS workloads with rules for security, operations, and compliance.
  • A new AWS Security Hub will provide centralized management for security and compliance. It will be open to 3rd-party add-ons.
  • A new AWS Lake Formation service will make it easier to set up a secure data lake.
  • New Amazon S3 Intelligent-Tiering optimizes storage costs by automatically selecting the most cost-effective storage tier based on usage patterns.
  • Amazon S3 Glacier Deep Archive is a new storage class priced at just $0.00099 per GB-month (less than one-tenth of one cent, or $1 per TB-month)
  • Guardian Life Insurance has gone all-in with public cloud and closed its last data center. Guardian named AWS as its preferred cloud provider.
  • Open Bank, a subsidiary of Santander Group with 1.3 million customers in Spain, has gone all-in on AWS.
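The Deep Archive price point quoted above is easy to sanity-check. A quick sketch (assuming decimal units, 1 TB = 1,000 GB, which is how AWS prices S3 storage):

```python
# Sketch: monthly storage cost at the S3 Glacier Deep Archive rate quoted above.
# Assumes decimal units (1 TB = 1,000 GB), which is how AWS prices S3 storage.
PRICE_PER_GB_MONTH = 0.00099  # USD, as announced

def monthly_cost_usd(terabytes: float) -> float:
    """Monthly Deep Archive storage cost for the given number of TB."""
    return terabytes * 1_000 * PRICE_PER_GB_MONTH

print(round(monthly_cost_usd(1), 2))    # 0.99 -- i.e. roughly $1 per TB-month
print(round(monthly_cost_usd(500), 2))  # 495.0 for a 500 TB archive
```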



Tuesday, November 27, 2018

AWS re:Invent: Highlights from Day 2

AWS is launching a new Ground Station service allowing customers to purchase capacity at a satellite ground station on a pay-as-you-go basis.

The company says that while the cost of launching a satellite into low-earth orbit has come down, capturing data from the satellite and transmitting it to the cloud is more difficult and expensive than it should be.

The new AWS Ground Station service begins operations with a pair of ground stations, and plans to have 12 in operation by mid-2019. Each ground station is associated with a particular AWS Region; the raw analog data from the satellite is processed by AWS's modem digitizer into a data stream (formally, VITA 49 baseband or VITA 49 RF over IP data streams) and routed to an EC2 instance that is responsible for doing the signal processing to turn it into a byte stream.

This means that customers don’t need to build or maintain antennas in order to capture data from satellite resources.



Amazon also announced a strategic collaboration to integrate the new AWS Ground Station service with Lockheed Martin’s new Verge antenna service, which is a distributed network of low-cost receivers. Each Verge antenna costs under $20,000 and uses common, readily-available parts (COTS hardware). Under the partnership, Lockheed Martin Verge customers benefit from being able to upload satellite commands and data through AWS Ground Station and to quickly download large amounts of data over the high-speed AWS Ground Station network.

“Together, AWS and Lockheed Martin are providing satellite operators increased flexibility, resiliency, and scale in a complete connectivity solution, ground architecture, and cloud environment for integrated satellite and data management operations,” said Rick Ambrose, Executive Vice President of Lockheed Martin Space. “Our collaboration with AWS allows us to deliver robust ground communications that will unlock new benefits for environmental research, scientific studies, security operations, and real-time news media. In time, with satellites built to take full advantage of the distributed Verge network, AWS and Lockheed Martin expect to see customers develop surprising new capabilities using the service.”

https://aws.amazon.com/blogs/aws/aws-ground-station-ingest-and-process-data-from-orbiting-satellites/

Amazon is launching a private edition of AWS Marketplace that lets enterprises create a custom digital catalog of pre-approved software products for their employees. AWS Marketplace currently offers 35 categories and more than 4,500 software listings from more than 1,400 Independent Software Vendors (ISVs). AWS says its customers use over 650 million hours a month of Amazon EC2 for products in AWS Marketplace and have more than 950,000 active software subscriptions. The new Private Marketplace makes it easier for enterprises to deliver licensed software to their employees.

Amazon Comprehend Medical is a new natural language processing service that uses real-time APIs for language detection, entity categorization, sentiment analysis, and key phrase extraction. The service could be used by medical organizations to extract actionable data from patient records.

AWS is launching more powerful machine learning instances based on NVIDIA GPUs - Amazon EC2 P3 instances now offer up to 8 NVIDIA Tesla V100 GPUs and up to 100 Gbps of networking throughput.

There are now 180 container software products on offer in the AWS Marketplace in categories such as high performance computing, security, and developer tools.

Amgen, one of the world’s leading biotechnology companies, has selected AWS for the vast majority of its cloud infrastructure. Amgen uses AWS’s compute, storage, database, analytics, and machine learning services, to support the development of new applications and to automate processes in the cloud.

Korean Air Lines Co. is going "all-in" on AWS and plans to shut down its private data centers over the next three years. Korean Air plans to leverage Amazon Simple Storage Service (Amazon S3) and AWS data warehousing and analytics services, such as Amazon Redshift and Amazon Athena, for its data lake project. As part of Korean Air’s all-in journey to AWS, the airline is migrating production workloads including its website, loyalty program, flight operations, and other mission-critical operations to AWS.

AWS IoT Events is a new, fully managed IoT service for detecting and responding to events from IoT sensors and applications. AWS says its service can detect events across thousands of IoT sensors sending telemetry data, such as temperature from a freezer, humidity from respiratory equipment, or belt speed on a motor.
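Conceptually, this kind of event detection is a small state machine evaluated over incoming telemetry. A toy illustration in plain Python (not the AWS IoT Events API, which expresses this logic as a managed "detector model"; the threshold and sensor names below are invented):

```python
# Toy event detector: raise an alarm when a sensor reading crosses a threshold,
# and clear it when the reading returns to normal. Illustrative only; AWS IoT
# Events expresses this logic as a managed "detector model" instead.
def detect_events(readings, threshold):
    """Yield (sensor, value) alarm events for readings above threshold."""
    in_alarm = set()
    for sensor, value in readings:
        if value > threshold and sensor not in in_alarm:
            in_alarm.add(sensor)
            yield (sensor, value)
        elif value <= threshold:
            in_alarm.discard(sensor)

# Hypothetical freezer telemetry: freezer-1 warms up past the -10 C threshold.
telemetry = [("freezer-1", -18.0), ("freezer-1", -2.5), ("freezer-2", -19.0)]
print(list(detect_events(telemetry, threshold=-10.0)))  # [('freezer-1', -2.5)]
```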

AWS IoT Greengrass, which enables local compute, messaging, data caching, sync, and ML inference capabilities on edge devices, now supports connectors to third-party applications and AWS services, hardware root of trust private key storage, and isolation and permission settings.

Monday, November 26, 2018

AWS re:Invent: Highlights from Day 1

AWS re:Invent 2018 kicks off this week in Las Vegas. Once again the event is sold out and seat reservations are required for popular sessions. Here are the highlights from Day 1:

Introducing AWS Global Accelerator, a network service that enables organizations to seamlessly route traffic to multiple regions and improve availability and performance for their end users. AWS Global Accelerator uses AWS’s global network to direct internet traffic from end users to applications running in AWS regions. AWS says its global network is highly-available and largely congestion-free compared with the public Internet. Clients route to the optimal region based on client location, health-checks, and configured weights. No changes are needed at the client-side. AWS Global Accelerator supports both TCP and UDP protocols. It provides health checking of target endpoints and then will route traffic away from unhealthy applications or congested regions. Pricing is based on gigabytes of data transferred over the AWS network.
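The routing behavior described above, picking among healthy endpoints according to configured weights, can be sketched as follows (a simplified illustration; real Global Accelerator routing also factors in client location via its anycast addresses, and the endpoint entries here are made up):

```python
import random

# Simplified sketch of weight-based endpoint selection: traffic is spread across
# healthy regional endpoints in proportion to their configured weights, and an
# endpoint that fails health checks receives no traffic. The entries below are
# made up; real Global Accelerator routing also considers client location.
endpoints = [
    {"region": "us-east-1",  "weight": 100, "healthy": True},
    {"region": "eu-west-1",  "weight": 50,  "healthy": True},
    {"region": "ap-south-1", "weight": 100, "healthy": False},  # failed check
]

def pick_endpoint(endpoints, rng=random):
    """Choose a healthy endpoint, weighted by its configured weight."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return rng.choices(healthy, weights=[e["weight"] for e in healthy])[0]

print(pick_endpoint(endpoints)["region"])  # us-east-1 or eu-west-1, never ap-south-1
```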

A new AWS Transit Gateway will let enterprises build a hub-and-spoke network topology on AWS infrastructure, enabling the interconnection of existing VPCs, data centers, remote offices, and remote gateways. The customer gets full control over network routing and security. Connected resources can span multiple AWS accounts, including VPCs, Active Directories, and shared services. The new AWS Transit Gateway may also be used to consolidate existing edge connectivity and route it through a single ingress/egress point. Pricing is based on a per-hour rate along with a per-GB data processing fee.
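The hub-and-spoke model can be pictured as a central route table that maps destination prefixes to attachments, with longest-prefix match choosing the next hop, as in any router. A schematic sketch in plain Python (the attachment names and CIDRs are illustrative, and this is not the AWS API):

```python
import ipaddress

# Schematic transit gateway: one central route table maps destination prefixes
# to attachments (VPCs, a VPN back to the data center, etc.). Longest-prefix
# match picks the attachment. Names and CIDRs are illustrative, not an AWS API.
routes = {
    "10.0.0.0/8":  "vpn-attachment-datacenter",  # catch-all for corporate space
    "10.1.0.0/16": "vpc-attachment-app",
    "10.2.0.0/16": "vpc-attachment-data",
}

def next_hop(dest_ip: str) -> str:
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), att)
               for cidr, att in routes.items()
               if ip in ipaddress.ip_network(cidr)]
    if not matches:
        raise LookupError(f"no route to {dest_ip}")
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("10.1.4.7"))  # vpc-attachment-app (the /16 beats the /8)
print(next_hop("10.9.0.1"))  # vpn-attachment-datacenter
```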

AWS introduced its first cloud instance with up to 100 Gbps of network bandwidth. Use cases are expected to include in-memory caches, data lakes, and other communication-intensive applications. AWS said its new C5n instances incorporate 4th gen custom Nitro hardware. The Elastic Network Interface (ENI) on the C5n uses up to 32 queues (in comparison to 8 on the C5 and C5d), allowing the packet processing workload to be better distributed across all available vCPUs. The ability to push more packets per second will make these instances a great fit for network appliances such as firewalls, routers, and 5G cellular infrastructure.



AWS is launching its first cloud instances based on its own Arm-based AWS Graviton Processors. The new processors are the result of the acquisition of Annapurna Labs in 2015.  AWS said its Graviton processors are optimized for performance and cost, making them a fit for scale-out workloads where you can share the load across a group of smaller instances. This includes containerized microservices, web servers, development environments, and caching fleets. The new A1 instances are available now in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions in On-Demand, Reserved Instance, Spot, Dedicated Instance, and Dedicated Host form.

In addition to processors from strategic partner Intel and its own AWS Graviton processors, AWS is offering cloud instances powered by AMD at a 10% discount.

AWS introduced Firecracker, a new virtualization technology for containers -- think microVMs with fast startup times (125 ms). The company says Firecracker uses multiple levels of isolation and protection and exposes a minimal attack surface for better security. Firecracker is expected to improve the efficiency of AWS infrastructure. It is also being released as open source. Firecracker is already powering multiple high-volume AWS services including AWS Lambda and AWS Fargate.

AWS has developed its own Scalable Reliable Datagram (SRD) protocol for high-performance computing clusters as an alternative to TCP/IP.



Epic Games, creator of Fortnite, is running on AWS.


https://reinvent.awsevents.com

AWS re:Invent - More companies designate AWS as preferred cloud

Amazon Web Services announced two more high-profile companies going "all in" or designating it as their preferred public cloud provider.

Ellie Mae is going "all in" by moving its infrastructure to AWS while rebuilding its core applications and creating new digital products for the evolving needs of homebuyers. Ellie Mae will use the breadth and depth of AWS services, including compute, storage, database, serverless, and containers, to develop new ways of delivering the digital mortgage and simplifying the loan process for its customers and partners. Ellie Mae has already built a company-wide data lake on AWS using Amazon Simple Storage Service (Amazon S3) to better understand, personalize, and further automate digital lending.

“We process more than a third of mortgage applications in the United States, and use AWS to help us deliver on our mission of the true digital mortgage, so lenders can achieve compliance, quality, and efficiency,” said Satheesh Ravala, Senior Vice President, Cloud Engineering and Operations at Ellie Mae. “AWS gives us an unmatched set of cloud services and a highly reliable infrastructure to work with as we continue to build solutions that provide borrowers and lenders with the best digital loan experiences. As a result of early successes on AWS, we are confident that their services will continue to give us what we need to be nimble, innovate, achieve results, and cut costs while we grow and expand our business well into the future.”

Mobileye, an Intel company, chose AWS as its preferred public cloud provider for its autonomous vehicle business. Mobileye is running core workloads on AWS for greater speed, agility, and compute power. Mobileye is using AWS’s broad portfolio, including compute, storage, database, analytics, machine learning, and edge computing, to supply automakers with the most advanced self-driving applications. As Mobileye grows workloads on AWS, the organization will build a data lake on Amazon Simple Storage Service (Amazon S3) to ingest, process, and analyze hundreds of petabytes of vehicle data gathered from sensors, images, and video feeds.

“AWS gives us the most comprehensive set of services and the best performance so that we can provide our teams with the cloud capabilities required to deliver autonomous vehicles,” said Professor Amnon Shashua, President and Chief Executive Officer at Mobileye. “Making AWS our preferred cloud provider aligns with our overall technical strategy and desired pace of innovation. We are becoming a more agile organization on AWS, and continuing our 18-year history of leveraging the most advanced machine learning and deep learning technologies available.”

AWS re:Invent - Dynatrace extends visibility with AWS CloudTrail

Dynatrace announced the extension of its platform’s cloud visibility and contextual data ingestion from Amazon Web Services (AWS) with Amazon CloudWatch (CloudWatch) and AWS CloudTrail (CloudTrail).

The company says the addition of AWS metrics and events from the two services enriches the high-fidelity data that it processes. This enhances its contextual problem identification and root cause analysis.

AWS CloudTrail allows businesses to monitor and log account activity related to actions across their AWS infrastructure. CloudTrail log ingestion extends Dynatrace AI’s automated root cause analysis and problem detection to include AWS account-initiated activity. This data provides ops teams with insights into not just what caused a problem, but also which user or account made service-impacting changes.

“Enterprises are rapidly expanding their cloud footprint to support the development of cloud-native applications and the modernization of IT operations,” said Steve Tack, SVP Product Management at Dynatrace. “Dynatrace was purpose-built to deal with the scale and complexity of the enterprise cloud, providing teams with intelligence to manage their cloud operations with a single platform. Ultimately, with CloudWatch, our customers can gain additional context, which when combined with our full-stack, AI powered monitoring capabilities, allows for faster and more precise answers.”

https://www.dynatrace.com/technologies/aws-monitoring

AWS re:Invent - Cylance brings its AI-powered security to AWS

Cylance announced support for Amazon Web Services (AWS) with its CylancePROTECT for the cloud.

“We are excited to make our AI-driven, prevention-first security solutions available to cloud computing environments,” said Stuart McClure, founder and chief executive at Cylance. “By approaching security with sophisticated machine learning techniques and offering scalable threat detection, response, root cause analysis, and threat hunting, Cylance helps prevent data breaches that impact the security of an organization’s data in the cloud.”

Cylance simplifies cloud security by utilizing an agent with a small footprint and with no configuration or signature-update needs.

http://www.cylance.com

BlackBerry to acquire Cylance for AI-powered cybersecurity



BlackBerry Limited agreed to acquire Cylance, a privately held developer of cybersecurity solutions, for US $1.4 billion in cash, plus the assumption of unvested employee incentive awards. “Cylance’s leadership in artificial intelligence and cybersecurity will immediately complement our entire portfolio, UEM and QNX in particular. We are very excited to onboard their team and leverage our newly combined expertise,” said John Chen, Executive Chairman...

Monday, November 12, 2018

AWS launches 2nd GovCloud Region (US-East)

Amazon Web Services announced the launch of the AWS GovCloud (US-East) Region, its second GovCloud infrastructure region in the United States.

The AWS GovCloud is an isolated infrastructure region designed to meet the stringent requirements of the public sector and highly regulated industries. It is operated on US soil by US citizens and is accessible only to vetted US entities and root account holders who must confirm they are US persons.

The first AWS GovCloud (US-West) Region opened in 2011. Like AWS GovCloud (US-West), AWS GovCloud (US-East) offers three Availability Zones. AWS Regions are comprised of multiple Availability Zones, which refer to technology infrastructure in separate and distinct geographic locations with enough distance to significantly reduce the risk of a single event impacting business continuity, yet near enough to provide low-latency for high availability applications. Each Availability Zone has independent power, cooling, physical security, and is connected via redundant, ultra-low-latency networks. AWS customers focused on high availability can design their applications to run in multiple Availability Zones to achieve even greater fault tolerance.

AWS now provides 57 Availability Zones across 19 geographic regions globally with another 12 Availability Zones and four regions coming online in Bahrain, Hong Kong SAR, South Africa, and Sweden between the end of 2018 and the first half of 2020.

“For more than seven years, government customers and those in highly regulated industries have been using AWS GovCloud (US-West) to run workloads that must meet the most stringent security and compliance requirements,” said Teresa Carlson, Vice President, Worldwide Public Sector at AWS. “Based on the growth of GovCloud (US-West) and high customer demand for a second region in the eastern part of the US, we’ve opened a second GovCloud Region so that AWS customers can support their mission-critical programs with even lower latency to the East Coast and have the ability to implement cross-region disaster recovery.”

https://aws.amazon.com/govcloud-us/

Thursday, November 8, 2018

Cisco intros Hybrid Solution for Kubernetes on AWS

Cisco introduced a Hybrid Solution for Kubernetes on AWS, making it easier to run containerized applications across on-premises environments and the AWS cloud. The solution configures on-premises Kubernetes environments to be consistent with Amazon Elastic Container Service for Kubernetes (Amazon EKS) while leveraging Cisco's networking, security, management and monitoring software.

Cisco said its implementation reduces complexity and costs for IT operations teams. The management of on-premises Kubernetes infrastructure is simplified with a common set of tools on-premises and on AWS. Cisco's enterprise support covers all parts of the solution.

"Today, most customers are forced to choose between developing applications on-premises or in the cloud. This can create a complex mix of environments, technologies, teams and vendors. But they shouldn't have to make a choice," said Kip Compton, senior vice president, Cloud Platform and Solutions at Cisco. "Now, developers can use existing investments to build new cloud-scale applications that fuel business innovation. This makes it easier to deploy and manage hybrid applications, no matter where they run. This allows customers to get the best out of both cloud and their on-premises environments with a single solution."

"More customers run containers on AWS and Kubernetes on AWS than anywhere else," said Terry Wise, Global Vice President of Channels & Alliances, Amazon Web Services, Inc. "Our customers want solutions that are designed for the cloud and Cisco's integration with Amazon EKS will make it easier for them to rapidly deploy and run containerized applications across both Cisco-based on-premises environments and the AWS cloud."

The Cisco Hybrid Solution for Kubernetes on AWS will be provided either as a software-only solution requiring only the Cisco Container Platform, or as a hardware/software solution with the Cisco Container Platform running on Cisco HyperFlex. The software is licensed in one-, three- and five-year subscriptions. Pricing for software-only subscriptions will start at approximately $65,000 per year for a typical entry-level configuration. On AWS, customers pay $0.20 per hour for each Amazon EKS cluster that they create, in addition to the AWS resources (e.g. Amazon EC2 instances or Amazon Elastic Block Store volumes) they create to run Kubernetes worker nodes.
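At the quoted cluster rate, the Amazon EKS control-plane charge works out as follows (a quick sketch using a 730-hour average month; worker-node EC2 and EBS costs are billed separately and not included):

```python
# Quick estimate of the Amazon EKS control-plane charge at the rate quoted
# above. Uses a 730-hour average month; worker-node costs (EC2, EBS) are
# billed separately and deliberately excluded here.
EKS_CLUSTER_RATE_USD_PER_HOUR = 0.20
HOURS_PER_MONTH = 730  # average month

def monthly_control_plane_cost(clusters: int) -> float:
    return clusters * EKS_CLUSTER_RATE_USD_PER_HOUR * HOURS_PER_MONTH

print(round(monthly_control_plane_cost(1), 2))  # 146.0 USD per cluster-month
print(round(monthly_control_plane_cost(5), 2))  # 730.0 USD for five clusters
```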

Cisco and Google Partner on New Hybrid Cloud Solution

Cisco and Google Cloud have formed a partnership to deliver a hybrid cloud solution that enables applications and services to be deployed, managed and secured across on-premises environments and Google Cloud Platform. The pilot implementations are expected to be launched early next year, with commercial rollout later in 2018.

The main idea is to deliver a consistent Kubernetes environment for both on-premises Cisco Private Cloud Infrastructure and Google’s managed Kubernetes service, Google Container Engine.

The companies said their open hybrid cloud offering will provide enterprises with a way to run, secure and monitor workloads, thus enabling them to optimize their existing investments, plan their cloud migration at their own pace and avoid vendor lock in.

Cisco and Google Cloud hybrid solution highlights:


  • Orchestration and Management – Policy-based Kubernetes orchestration and lifecycle management of resources, applications and services across hybrid environments
  • Networking – Extend network policy and configurations to multiple on-premises and cloud environments
  • Security – Extend security policy and monitor application behavior
  • Visibility and Control – Real-time network and application performance monitoring and automation
  • Cloud-ready Infrastructure – Hyperconverged platform supporting existing application and cloud-native Kubernetes environments
  • Service Management with Istio – Open-source solution provides a uniform way to connect, secure, manage and monitor microservices
  • API Management – Google's Apigee enterprise-class API management enables legacy workloads running on premises to connect to the cloud through APIs
  • Developer Ready – Cisco's DevNet Developer Center provides tools and resources for cloud and enterprise developers to code in hybrid environments
  • Support – Joint coordinated technical support for the solution

"Our partnership with Google gives our customers the very best cloud has to offer— agility and scale, coupled with enterprise-class security and support," said Chuck Robbins, chief executive officer, Cisco. "We share a common vision of a hybrid cloud world that delivers the speed of innovation in an open and secure environment to bring the right solutions to our customers."

Tuesday, November 6, 2018

AWS launches AMD EPYC cloud instances at lower cost

Amazon Web Services (AWS) began offering cloud instances based on AMD EPYC processors.

The new general purpose (M5 and T3) and memory-optimized (R5) instance variants with AMD EPYC processors are 10% less expensive than the current M5, T3, and R5 instances.

AWS said the AMD-based instances provide additional options for customers who are looking to achieve cost savings on their Amazon EC2 compute environment for a variety of workloads, such as microservices, low-latency interactive applications, small and medium databases, virtual desktops, development and test environments, code repositories, and business applications.

https://aws.amazon.com/ec2/

Thursday, September 27, 2018

AWS intros EC2 12 TB instances for large in-memory databases

Amazon Web Services (AWS) announced the availability of new High Memory instances for Amazon Elastic Compute Cloud (Amazon EC2) for running large in-memory databases, including production deployments of SAP HANA.

Amazon EC2 High Memory instances deliver 6 TB, 9 TB, and 12 TB of memory today, with 18 TB and 24 TB instances coming in 2019.

These are the largest memory sizes available in the cloud, according to AWS.

“Amazon EC2 provides the most comprehensive selection of instances by far, giving customers the flexibility to select the right instance for the right workload today and into the future,” said Matt Garman, Vice President of Compute Services at AWS. “We have memory-optimized instances today, and they’ve proven quite popular with customers who want to run memory-intensive applications, including in-memory databases. With 12 TB instances available in AWS, and 24 TB instances coming next year, Amazon EC2 High Memory instances give our customers the ability to scale their in-memory database with predictable performance in the same VPC as their other AWS services. Customers can grow their in-memory database and easily connect it to their storage, networking, analytics, IoT, and machine learning services – helping them make faster and better business decisions.”

Thursday, September 13, 2018

Affirmed Networks launches virtualized EPC on AWS

Affirmed Networks is introducing a virtualized Evolved Packet Core (vEPC) on Amazon Web Services (AWS).

The company says its "Mobile Network as a Service" enables mobile operators to quickly and economically deliver both 4G and 5G services over a scalable cloud infrastructure without requiring excessive capital investments. Mobile operators could use the vEPC to deliver differentiated services, such as IoT/M2M services, enterprise data services, MVNO wholesale services, and 4G and 5G services, without costly data center or network infrastructure investments.

"The industry has realized that to keep pace with the explosion of data growth a new approach is required," said Amit Tiwari, Affirmed Networks' Vice President, Strategic Alliances and Systems Engineering. "Cloud-based mobile network architectures are providing operators with unprecedented network flexibility. With the ability to now access AWS as part of a "Mobile Network as a Service" solution, operators gain the ability to easily and cost-effectively scale their networks and their business across geographies and networks that were previously out of reach. With this evolution in networking, they can provide new, innovative, and extremely cost-effective services for end customers."

Tuesday, August 21, 2018

Amazon EC2 launches burstable T3 instances for microservices

Amazon Web Services announced commercial availability of T3 instances, the next generation of burstable general-purpose instances for Amazon Elastic Compute Cloud (Amazon EC2), providing up to 30% better price performance than previous-generation T2 instances.

The new T3 instances are designed for applications with variable CPU usage that experience occasional spikes in demand – such as microservices, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business critical applications. T3 instances feature Intel Xeon Scalable processors and support up to 5 Gbps in peak network bandwidth.
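The burstable model works like a bank account: an instance earns CPU credits at a fixed rate and spends one credit per vCPU-minute of full utilization, so variable workloads ride on accrued credits during spikes. The toy simulator below illustrates that accounting; the earn rate and balance cap are illustrative parameters, not official T3 figures.

```python
# Minimal model of burstable-instance CPU credits: credits accrue at a
# constant hourly rate and are spent at one credit per vCPU-minute of
# full utilization. Rates below are assumptions for illustration.
def simulate_credits(usage_vcpu_minutes, earn_per_hour=24.0, cap=576.0, start=0.0):
    """Walk a per-minute utilization trace (vCPU-minutes consumed each
    minute) and return the credit balance after each minute. A negative
    balance is what an 'unlimited' burst mode would bill for."""
    balance = start
    history = []
    for used in usage_vcpu_minutes:
        balance += earn_per_hour / 60.0   # credits accrued this minute
        balance -= used                   # credits spent this minute
        balance = min(balance, cap)       # balance caps at a fixed maximum
        history.append(round(balance, 3))
    return history

# Idle for two minutes, then one minute of full burst on 2 vCPUs:
print(simulate_credits([0.0, 0.0, 2.0]))
```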

“Since T2 instances ‘burst’ on the scene in 2014, they’ve been wildly popular as they’ve helped customers optimize the cost and performance for applications that have variable CPU demands,” said Matt Garman, Vice President, Compute Services, AWS. “We think customers are going to be pretty excited by the launch of our third generation burstable instance (T3) as it’s both 30% more cost effective on a price-to-performance basis than the T2 and enables, by default, the unmatched capability of unlimited burst for customers’ applications.”

Sunday, August 12, 2018

Portworx expands container data management options for AWS

Portworx, a start-up based in Los Altos, California, announced that its PX-Enterprise can now be integrated with Amazon Elastic Container Service (ECS), enabling mission-critical stateful workloads to run in Docker containers with dynamic provisioning, cross-Availability Zone high availability, application-consistent snapshots, auto-scaling, and encryption.

Portworx can also be integrated with Amazon Elastic Container Service for Kubernetes (EKS).

"Enterprise container adoption is skyrocketing as companies recognize the value that container technologies provide on the path to digital transformation," said Murli Thirumale, co-founder and CEO of Portworx. "Amazon Web Services integration with Portworx for both EKS and now ECS is evidence of a sea change happening in the industry: enterprises running on Amazon need flexible cloud native storage solutions that play well with containers. By giving enterprises these two options for container data management, we're radically simplifying operations of containerized stateful services running on Amazon."

Key benefits of Amazon ECS with Portworx's cloud native storage include:

  • Multi-AZ EBS for Containers – run Docker containers within and across Availability Zones based on business needs. Portworx not only replicates each container's volume data among ECS nodes and across Availability Zones, but also adds EBS drives as capacity thresholds are reached.
  • Daemon Scheduling on ECS – automatically run a daemon task on every one of a selected set of instances in an ECS cluster. This ensures that as ECS adds new nodes, every server can consume and access Portworx storage volumes.
  • Auto-scaling groups for stateful applications – dynamic creation of EBS volumes for an Auto Scaling group, so if a pod is rescheduled after a host failure, the pre-existing EBS volume is reused, cutting failover time roughly threefold.
  • Hyperconverged compute and storage for ultra-high performance databases – ECS can reschedule the pod to another host in the cluster where Portworx has placed an up-to-date replica. This ensures hyperconvergence is maintained even across reschedules.
  • Application-aware snapshots – ECS administrators can define groups of volumes that constitute their application state and consistently snapshot them directly via Docker. These group snapshots can be backed up to S3 or moved to another AWS Region in case of a disaster.
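In Docker terms, the provisioning above boils down to declaring a named volume backed by Portworx's volume driver. The fragment below is a hypothetical docker-compose sketch following Portworx's volume-plugin conventions (`pxd` driver, `size` and `repl` options); the service, volume name, and option values are illustrative, not from the announcement.

```yaml
# Hypothetical compose fragment: a database whose data volume is
# provisioned by the Portworx (pxd) Docker volume driver with three
# synchronous replicas spread across nodes/Availability Zones.
version: "3"
services:
  db:
    image: postgres:10
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: pxd
    driver_opts:
      size: "20"   # volume size in GiB
      repl: "3"    # number of synchronous replicas
```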


Thursday, August 9, 2018

Amazon Aurora Serverless offers auto-scaling, per-second pricing

AWS introduced a new deployment option for Amazon Aurora that automatically starts, scales, and shuts down database capacity with per-second billing for applications with less predictable usage patterns.

Amazon Aurora Serverless is a MySQL-compatible database built for the cloud.  AWS said it is best suited for applications with intermittent or cyclical usage patterns. Customers will not need to manage the database servers.
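What distinguishes a serverless cluster at the API level is the engine mode plus a scaling configuration (capacity range and auto-pause). The sketch below builds the boto3 `create_db_cluster` request an application might send; the identifier, credentials, and capacity numbers are illustrative assumptions.

```python
# Request parameters for an Aurora Serverless cluster via boto3.
# EngineMode="serverless" plus ScalingConfiguration is what enables
# the automatic start/scale/pause behavior described above.
params = {
    "DBClusterIdentifier": "demo-serverless",   # illustrative name
    "Engine": "aurora",                         # MySQL-compatible Aurora
    "EngineMode": "serverless",
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me",
    "ScalingConfiguration": {
        "MinCapacity": 2,               # Aurora capacity units (ACUs)
        "MaxCapacity": 16,
        "AutoPause": True,              # pause compute when idle...
        "SecondsUntilAutoPause": 300,   # ...after 5 idle minutes
    },
}

# With AWS credentials configured, the call would be:
#   import boto3
#   boto3.client("rds").create_db_cluster(**params)
print(params["ScalingConfiguration"]["MaxCapacity"])
```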

“More and more customers are moving production applications and databases from Oracle and SQL Server to Amazon Aurora because it's a highly available, highly durable, built-for-the-cloud database at one tenth the cost of the older guard database offerings,” said Raju Gulabani, Vice President, Databases, Analytics, and Machine Learning, at Amazon Web Services.

Beta customers included NTT DOCOMO, Cognizant, Pagely, CB Insights, California Polytechnic State University, Currencycloud, and CourseStorm.

https://aws.amazon.com/aurora/serverless

Wednesday, August 8, 2018

Samsung Heavy Industries picks AWS to develop autonomous shipping

Samsung Heavy Industries selected AWS as its preferred cloud provider.

Samsung Heavy Industries is developing an autonomous smart shipping system to enable the self-piloting of large container ships, LNG carriers, and floating production systems. The company will use the breadth of AWS’s services, including machine learning, augmented reality and virtual reality, analytics, databases, compute, and storage to develop this platform. This includes Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), Amazon Simple Storage Service (Amazon S3), AWS Key Management Service (KMS), and AWS CloudTrail to create integrated systems for all vessel-related data collected from land to sea.

“We’re digitizing our shipping fleet by using the most advanced technologies in the world to enhance our approaches to shipbuilding, operations, and delivery, and chose AWS as our preferred cloud provider to help us quickly transform Samsung Heavy Industries into a cloud-first maritime business,” said Dongyeon Lee, Director of Ship & Offshore Performance Research Center at Samsung Heavy Industries. “By leveraging AWS, we’ve successfully released several smart shipping systems so that our customers can manage their ships and fleets more efficiently, and we continue to test new capabilities for ocean-bound vessel navigation and automation. AWS delivers a highly flexible environment, with the broadest and deepest portfolio of cloud services, that is ideal for accelerating research and development across the company, and it has enabled our developers and data scientists to bring new ideas to market at an unprecedented pace.”

Wednesday, July 18, 2018

Intuit sells its data center, goes all-in with AWS

Intuit sold its data center in Quincy, Washington to H5 Data Centers, one of the leading privately-owned data center operators. Financial terms were not announced but Intuit said the sale is expected to result in a GAAP operating loss of $75 to $85 million.

Intuit said the move is part of its strategy to move operations to AWS.

“We chose to move to Amazon Web Services (AWS) to accelerate developer productivity and innovation for our customers, and to accommodate spikes in customer usage through the tax season,” said H. Tayloe Stansbury, Intuit Executive Vice President and Chief Technology Officer. “Our TurboTax Online customers were served entirely from AWS during the latter part of this tax season, and we expect to finish transitioning QuickBooks Online this year. Now that most of our core applications are in AWS, the time is right to transition the ownership and operation of this data center to a team who will expertly manage the infrastructure through the remainder of this transition.”

Tuesday, July 17, 2018

AWS lands Major League Baseball and 21st Century Fox

Major League Baseball (MLB) has chosen AWS as its official provider for machine learning, artificial intelligence, and deep learning workloads. Specifically, MLB will use AWS machine learning services to continue development of Statcast—the tracking technology that runs on AWS to analyze player performance for every game—and develop new technologies to support MLB Clubs.

AWS said its cloud-based machine learning services will enable MLB to eliminate the manual, time-intensive processes associated with record keeping and statistics, such as scorekeeping, capturing game notes, and classifying pitches.

“Incorporating machine learning into our systems and practices is a great way to take understanding of the game to a whole new level for our fans and the 30 clubs,” said Jason Gaedtke, Chief Technology Officer at Major League Baseball. “We chose AWS because of their strength, depth, and proven expertise in delivering machine learning services and are looking forward to working with the Amazon ML Solutions Lab on a number of exciting projects, including detecting and automating key events, as well as creating new opportunities to share never-before-seen metrics.”

AWS also announced that 21st Century Fox has selected it for the vast majority of its key platforms and workloads. The media company is also leveraging AWS’s machine learning and data analytics services to create a consistent set of digital media capabilities across its brands. The companies said 21st Century Fox has already reduced its data center needs by 50 percent and moved over 30 million assets—approximately 10 petabytes of content—to Amazon Glacier and Amazon Simple Storage Service (Amazon S3). Continuing this transformation, 21st Century Fox will use AWS as the primary platform to deliver over 90,000 titles on demand for key brands such as FOX, FX, National Geographic, 20th Century Fox Television, 20th Century Fox Film, and FOX Sports. 21st Century Fox has also implemented a company-wide approach to data collection, processing, and instrumentation using AWS’s technologies.

Sunday, July 8, 2018

NEC selected for Bay to Bay Express subsea cable system

NEC has been selected to build a high-performance submarine cable connecting Singapore, Hong Kong and the United States.

A consortium composed of China Mobile International, Facebook and Amazon Web Services is backing the Bay to Bay Express Cable System (BtoBE).

Construction of the nearly 16,000-kilometer optical submarine cable is expected to be completed by the fourth quarter of 2020.

NEC said the BtoBE system will utilize multiple pairs of optical fiber and achieve round trip latency of less than 130 milliseconds.

"NEC is honored to be selected by the BtoBE consortium as the turn-key system supplier for this world record-breaking optical fiber submarine cable system that covers the longest distance without regeneration. The BtoBE, landing at three locations spanning across the Pacific Ocean, is designed so that once completed, it can carry at least 18 Tbps of capacity per fiber pair," said Mr. Toru Kawauchi, General Manager of the Submarine Network Division at NEC Corporation. "The BtoBE will provide seamless connectivity and network diversity, while serving to complement other Asia-Pacific submarine cables, among others."

https://www.nec.com/en/press/201807/global_20180709_03.html



Sunday, July 1, 2018

Formula One Group goes all-in with AWS

Formula One Group (Formula 1) is moving the vast majority of its infrastructure from on-premises data centers to AWS, and standardizing on AWS’s machine learning and data analytics services.

AWS is working with Formula 1 to enhance its race strategies, data tracking systems, and digital broadcasts through a range of AWS services. These include Amazon SageMaker, a fully managed machine learning service; AWS Lambda, AWS's event-driven serverless computing service; and AWS analytics services. Formula 1 has also selected AWS Elemental Media Services to power its video asset workflows, enhancing the viewing experience for its international fan base.

“For our needs, AWS outperforms all other cloud providers, in speed, scalability, reliability, global reach, partner community, and breadth and depth of cloud services available,” said Pete Samara, Director of Innovation and Digital Technology at Formula 1. “By leveraging Amazon SageMaker and AWS’s machine learning services, we are now able to deliver these powerful insights and predictions to fans in real time. We are also excited that the Formula 1 Motorsports division will run High Performance Compute workloads in a scalable environment on AWS. This will significantly increase the number and quality of the simulations our aerodynamics team can run as we work to develop the new car design rules for Formula 1.”

“Leveraging the cornucopia of services offered by the world’s leading cloud, Formula 1 will engage with its growing global fan base in unique ways,” said Mike Clayville, Vice President, Worldwide Commercial Sales at AWS.
