
Monday, November 15, 2021

American Tower to buy CoreSite for $10B

American Tower agreed to acquire CoreSite for $170.00 per share in cash, an enterprise value of $10.1 billion when including the assumption and/or repayment of CoreSite’s existing debt.

CoreSite, which as of Q3 2021 consisted of 25 data centers, 21 cloud on-ramps and over 32,000 interconnections in eight major U.S. markets, generated annualized revenue and Adjusted EBITDA of $655 million and $343 million, respectively, in Q3 2021. CoreSite has averaged double-digit annual revenue growth over the past five years.

American Tower said it expects to leverage its strong financial position to further accelerate CoreSite’s attractive development pipeline in the U.S., while also evaluating the potential for international expansion in the data center space. The acquisition is also expected to be transformative for American Tower’s mobile edge compute business in advance of the proliferation of 5G low-latency applications throughout the cloud, enterprise and network ecosystems, establishing a converged communications and computing infrastructure offering with distributed points of presence across multiple edge layers. 

Tom Bartlett, American Tower’s Chief Executive Officer stated, “We are in the early stages of a cloud-based, connected and globally distributed digital transformation that will evolve over the next decade and beyond. We expect the combination of our leading global distributed real estate portfolio and CoreSite’s high quality, interconnection-focused data center business to help position American Tower to lead in the 5G world. As the convergence of wireless and wireline networks accelerates and classes of communications infrastructure further align, we anticipate the emergence of attractive value creation opportunities within the digital infrastructure ecosystem. We look forward to welcoming CoreSite’s talented team to American Tower and working together to capitalize on those opportunities to drive enhanced long-term value creation for our customers and shareholders as we continue to connect billions of people across the globe.”

CoreSite’s Chief Executive Officer, Paul Szurek, stated, “We are excited to partner with American Tower to expand its communications infrastructure ecosystem and accelerate its edge computing strategy through the addition of CoreSite’s differentiated portfolio of U.S. metro data center campuses. The combined company will be ideally positioned to address the growing need for convergence between mobile network providers, cloud service providers, and other digital platforms as 5G deployments emerge and evolve. In addition, we expect the enhanced scale and further geographic reach to provide a platform for the combined company to accelerate its growth trajectory and expand into additional U.S. metro areas, as well as internationally, leveraging American Tower’s extensive presence across the globe. CoreSite’s outstanding team, interconnection platform and data center campus portfolio are a highly complementary fit with American Tower’s existing communications sites, and we believe this partnership delivers significant value to CoreSite’s stockholders and will create an exciting new chapter for our customers, employees and partners.”

American Tower intends to finance the transaction in a manner consistent with maintaining its investment grade credit rating and has obtained committed financing from J.P. Morgan. 




Monday, October 11, 2021

Crehan: 400 GbE port shipments to exceed 10 million this year

Shipments of 400 gigabit Ethernet (GbE) data center switch ports are on track to exceed 10 million this year, more than triple the prior year's shipments, according to an upcoming report from Crehan Research Inc. 

In correlation with this strong 400GbE data center switch ramp, 50Gbps SerDes shipments are surpassing 25Gbps SerDes shipments to become the most widely deployed data center switch SerDes lane speed.

“The 400GbE data center switch transition is well under way, with volumes now starting to move toward the tens of millions in conjunction with broader customer adoption,” said Seamus Crehan, president of Crehan Research. “Newer applications and architectures such as machine learning and disaggregated computing, on top of existing high-bandwidth drivers including video, machine-to-machine traffic and data analytics, are driving the need for faster switch speeds with better bandwidth economics.”

These results reflect the data center switch silicon transition from 3.2Tbps to 12.8Tbps devices, built on 25Gbps and 50Gbps SerDes respectively. Crehan’s report further points out that the 50Gbps SerDes transition, at approximately three years, was faster than the preceding 25Gbps SerDes transition.

"These large deployments of 12.8Tbps data center switches, with underlying 50Gbps SerDes technology, are laying the groundwork for the deployments of next-generation 25.6Tbps data center switches with 100Gbps SerDes technology, which we expect to start ramping next year," Crehan said.

https://crehanresearch.com/

Wednesday, September 29, 2021

Fungible enhances its Storage Cluster for NVMe over TCP

Fungible, a start-up pursuing DPU-accelerated data center computing, announced new capabilities enabling its Fungible Storage Initiator (SI) cards, installed in standard servers, to access remote storage via NVMe over TCP (NVMe/TCP).

Fungible claims its accelerators deliver the world’s fastest and most efficient implementation of NVMe/TCP. The enhancements bring security and usability capabilities for the entire data platform.


The Fungible Storage Initiator solution is delivered on Fungible’s FC200, FC100 and FC50 cards. Each of these cards is powered by the S1 Fungible DPU, and a single FC200 card is capable of delivering a record-breaking 2.5 million IOPS to its host. These cards, and the Fungible Storage Cluster, are managed by Fungible Composer, which orchestrates the composition of disaggregated data center resources on demand.
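For readers unfamiliar with NVMe/TCP: it lets a host attach remote NVMe namespaces over ordinary IP networks, which is what allows the Storage Initiator cards to present disaggregated flash as local devices. The sketch below is generic rather than Fungible-specific; it shows how a Linux host could discover and connect to an NVMe/TCP target with the standard nvme-cli tool (the target address and NQN are placeholders):

  # Generic NVMe/TCP attach from a Linux host using nvme-cli (not Fungible-specific).
  # The target address and subsystem NQN below are placeholders.
  import subprocess

  TARGET_ADDR = "192.0.2.10"                          # hypothetical storage target
  TARGET_PORT = "4420"                                # standard NVMe-oF port
  SUBSYS_NQN = "nqn.2021-09.example:storage-cluster"  # placeholder NQN

  # Discover the subsystems the target exports over TCP
  subprocess.run(["nvme", "discover", "-t", "tcp",
                  "-a", TARGET_ADDR, "-s", TARGET_PORT], check=True)

  # Connect; the remote namespaces then appear as local /dev/nvmeXnY block devices
  subprocess.run(["nvme", "connect", "-t", "tcp",
                  "-a", TARGET_ADDR, "-s", TARGET_PORT,
                  "-n", SUBSYS_NQN], check=True)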

“With our high-performance and low-latency implementation, Fungible’s disaggregated NVMe/TCP solution becomes a game changer. Over the last five years, we have designed our products to support NVMe/TCP natively to revolutionize the economics of deploying flash storage in scale-out implementations,” said Eric Hayes, CEO of Fungible. “In addition to industry leading performance, our solutions offer more value and the highest levels of security, compression, efficiency, durability and ease of use. At Fungible, we continue to disrupt the traditional rigid models by disaggregating compute and storage using available industry standards like NVMe/TCP.”

https://www.fungible.com/

Fungible ships its disaggregated NVMe storage platform powered by DPUs

Fungible, a start-up based in Santa Clara, California, unveiled a disaggregated data storage platform powered by its own Fungible Data Processing Unit (DPU).

The new Fungible Storage Cluster delivers 15M IOPS in a 2RU form factor, scaling linearly to 300M IOPS in a single 40RU rack, and extending further to many racks. The company says its high-performance design improves $/IOPS by at least 3x compared to existing software-defined storage solutions by consolidating workloads and increasing utilization of storage media.

The Fungible Storage Cluster comprises a cluster of Fungible FS1600 storage target nodes connected over a standards-based IP network and the Fungible Composer software. The FS1600s implement the data path for storage while the Fungible Composer performs control and management functions. This clean separation of functions results in higher performance, better scalability and better reliability. Each FS1600 storage target node is powered by two Fungible F1 DPUs and packs 24 standard NVMe SSDs delivering an aggregate of 15M IOPS in a 2RU form factor.
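The scaling claim is straightforward arithmetic on the figures above; a quick sketch, using only the numbers in the announcement:

  # Scaling implied by the announcement's own figures.
  NODE_IOPS = 15_000_000   # per 2RU FS1600 node
  NODE_RU = 2
  RACK_RU = 40
  SSDS_PER_NODE = 24

  nodes_per_rack = RACK_RU // NODE_RU            # 20 nodes per rack
  rack_iops = nodes_per_rack * NODE_IOPS         # 300,000,000 IOPS per rack
  avg_iops_per_ssd = NODE_IOPS // SSDS_PER_NODE  # ~625,000 IOPS averaged per SSD
  print(nodes_per_rack, rack_iops, avg_iops_per_ssd)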

Notably, the Fungible Storage Cluster has been validated with IBM Spectrum Scale, delivering more than 80M read IOPS/PB.

“Today, we demonstrate how the breakthrough value of the Fungible DPU is realized in a storage product,” said Pradeep Sindhu, CEO and Co-Founder of Fungible. “The Fungible Storage Cluster is not only the fastest storage platform in the market today, it is also the most cost-effective, reliable, secure and easy to use. This is truly a significant milestone on our journey to realize the vision of Fungible Data Centers — where compute and storage resources are hyperdisaggregated and then composed on-demand to dynamically serve application requirements.”

“Innovations in data center infrastructure have occurred largely within the silos of compute, storage and networking,” said Raj Yavatkar, CTO at Juniper Networks. “Fungible has broken down these silos delivering end-to-end value with Fungible DPU enabled servers interconnected by TrueFabric, a truly ground-breaking networking technology, and software composable for on-demand provisioning. This approach will serve as a blueprint for future data centers from core to edge.”



Sunday, January 31, 2021

Inspur contributes rack management spec to OCP

Inspur has contributed a specification to the Open Compute Project that clarifies the scope of information collection, data presentation modes and hardware deployment options of collection modules in rack management.

Inspur said its specification provides a reference architecture for centralized rack management and lays the foundations for smarter operation of data centers.

Dozens of server nodes, fans, power supplies and other components are integrated into a rack. These components are centrally managed and monitored by an RMC (Rack Management Controller). The OpenRMC design specification addresses the challenges of centralized management across different racks. In addition, it meets a range of needs among small and medium-sized data centers, such as enhancing automated operations capability, improving system availability, and reducing overall energy consumption.

OpenRMC Design Specification v1.0, contributed by Inspur, unifies the format and parameters of read data by defining the northbound and southbound specifications, allowing users to manage all racks in one interface. In terms of the northbound data presentation, OpenRMC is integrated with Redfish, the next-generation data center management standard, allowing all kinds of server data to be presented through a browser, an approach that is more user-friendly than a binary display mode. Meanwhile, firmware can be flashed remotely, making it more convenient for operations personnel to control.
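Because the northbound interface is Redfish, racks managed this way can be queried with any standard Redfish client. The fragment below is illustrative only: the RMC address and credentials are placeholders, and it uses only the generic Redfish service root and chassis collection rather than anything specific to the OpenRMC specification:

  # Illustrative Redfish query against a rack management controller (RMC).
  # Address and credentials are placeholders; only standard Redfish endpoints are used.
  import requests

  RMC = "https://192.0.2.20"      # hypothetical RMC address
  AUTH = ("admin", "password")    # placeholder credentials
  VERIFY = False                  # lab setting only; use proper TLS in production

  root = requests.get(f"{RMC}/redfish/v1/", auth=AUTH, verify=VERIFY).json()
  print(root.get("RedfishVersion"))

  # Enumerate the chassis (rack, nodes, power shelves) the controller exposes
  chassis = requests.get(f"{RMC}/redfish/v1/Chassis", auth=AUTH, verify=VERIFY).json()
  for member in chassis.get("Members", []):
      print(member["@odata.id"])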

“The OpenRMC Sub-project aims at an open source-based rack management solution, which is fundamental to efficient, flexible, and open data center management. As the project lead of the OpenRMC project, Inspur’s Rack Management Specification offers a significant push to the fulfillment of that goal,” said Rajeev Sharma, Director of Software & Technologies at the OCP Foundation. “We are confident that Inspur and participating companies will yield more designs and specifications that apply Open Compute technologies to solving data center operation challenges.”

Wilson Guo, Inspur's senior technology director, said that Inspur has always been an active advocate of open source technologies ranging from Linux to OCP to OpenStack and is currently a member of three leading global open computing standards organizations. “Inspur has been involved in many open source communities in the hardware and software fields,” said Guo. “To stay ahead of the future transformation of cloud data centers, Inspur has been driving the development of converged data centers and smart computing to advance the integration of open computing technologies and help build a truly open ecosystem.”


Tuesday, September 22, 2020

Arm expands its Neoverse data center server CPU portfolio

Arm is unveiling two new platforms in its Neoverse silicon portfolio for data center CPUs: 

  • the Arm Neoverse V1 platform, delivering a single-threaded performance uplift of more than 50% over N1 and aimed at applications more reliant on CPU performance and bandwidth. Neoverse V1 supports Scalable Vector Extensions (SVE), which enables execution of single-instruction, multiple-data (SIMD) integer, bfloat16, or floating-point instructions on wider vector units using a software programming model that’s agnostic to the width of the unit. Arm says SVE will ensure portability and longevity of the software code, along with efficient execution. Potential markets include high-performance cloud, HPC, and machine learning.
  • the Neoverse N2, which is the second-generation N-series platform, aimed at the scale-out performance needs of applications across a range of use cases, from cloud to SmartNICs and enterprise networking, to power-constrained edge devices. Neoverse N2 offers 40% higher single-threaded performance compared to Neoverse N1 and retains the same level of power and area efficiency as Neoverse N1.
Arm's Neoverse roadmap now extends from 7nm devices currently in production, to 5nm designs in 2021 and 3nm in 2022.


Arm cited growing momentum for its Neoverse silicon across a range of data center applications. Major operating systems, the Xen and KVM hypervisors, Docker containers and, increasingly, Kubernetes have all announced support for Arm. 

http://www.arm.com



Sunday, August 30, 2020

Crehan: Data center Ethernet switch shipments up 12% in 1H2020

Despite COVID-related supply and demand disruptions, customers deployed more data center Ethernet switches in the first half of 2020 than they did in the same year-ago period, according to a recent report from Crehan Research Inc. Port shipments increased by 12% year-over-year, resulting in a new record high.

Hyperscale cloud service providers and China were significant contributors to the market’s growth, according to the report. The hyperscale cloud service providers’ contribution was reflected in the especially strong growth of 100 gigabit Ethernet (GbE) and 25GbE – a preferred data center networking architecture within this customer segment. In fact, 100GbE and 25GbE combined had a 40% year-over-year increase, comprising a majority of total data center switch port shipments.

“This robust shipment growth, even in the face of COVID disruptions, is a reflection of the critical nature of data center networks in delivering needed services to businesses, homes and governments,” said Seamus Crehan, president of Crehan Research.

Other noteworthy results from Crehan’s data center switch report:

  • Cisco accounted for the majority of data center switch shipments and saw stable year-over-year market share.
  • As a result of its strong presence in the hyperscale cloud service provider segment, Arista was a key driver of the 100GbE switch growth, holding the top share position in this segment.
  • In correlation with the strong growth in China, H3C and Huawei gained additional market share.
  • Nvidia, through its Mellanox acquisition, saw a doubling of its data center switch shipments, on the strength of its Spectrum-based 100GbE switches.
  • “Back in January 2017, we forecast that combined shipments of 100GbE and 25GbE would comprise over half of all data center Ethernet switch shipments by 2021,” Crehan said. “These recent results show that the transition to higher networking speeds that underpin modern data center architectures is happening even faster than expected.”

Thursday, June 27, 2019

NTT Com builds mega data center in Indonesia

NTT Communications will develop a new data center campus at Bekasi, Indonesia.

The new campus will be known as “Indonesia Jakarta 3 Data Center” (JKT3) and, once fully developed, will offer up to 18,000 sqm of IT space (7,800 racks) and 45MW of IT load. The four-story building will be located in a large industrial area 30km east of central Jakarta. Because JKT3 lets customers customize their server rooms as well as providing colocation space by the rack, it will meet a wide range of customer requirements, in particular from OTTs and financial institutions that require flexible facility design.

The project is described as the largest data center in Indonesia.

Monday, April 29, 2019

Dell Technologies Cloud targets data center as a service

Dell Technologies Cloud is a new offering that gives enterprises a consistent operating model for private, public, and hybrid cloud operations.

Unveiled at this week's Dell Tech World conference in Las Vegas, Dell Technologies Cloud Platforms promises to be an operational hub for hybrid cloud environments, reducing total cost of ownership by up to 47% compared to native public cloud. Company officials said the new framework will "control the chaos" of managing hybrid cloud environments. It works across more than 4,200 VMware Cloud Provider Program providers and hyperscalers, including the newly added Microsoft Azure.

The Dell Technologies Cloud portfolio consists of the new Dell Technologies Cloud Platforms and the new Data Center-as-a-Service offering, VMware Cloud on Dell EMC.

Dell Technologies Cloud Data Center-as-a-Service, delivered as VMware Cloud on Dell EMC with VxRail, currently is available in beta deployments with limited customer availability planned for the second half of 2019.

“For many organizations, the increasingly diverse cloud landscape is resulting in an enormous amount of IT complexity, and no one is more qualified or capable to help customers solve this challenge than Dell Technologies,” said Jeff Clarke, vice chairman of products and operations, Dell Technologies.

Thursday, March 7, 2019

Source Photonics expands data center and routing portfolio

Source Photonics has expanded its portfolio of single-mode products for datacenter and routing applications. The new products leverage the company’s multi-year investment in 28Gbaud and 53Gbaud PAM4 developments and support applications such as the 400G DR4, FR4 and LR8, 100G DR/FR, and 50G LR and ER.
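As a rough guide to how those baud rates map to the module speeds (ignoring FEC and encoding overhead), PAM4 carries two bits per symbol, so the arithmetic works out as follows:

  # Rough mapping from PAM4 baud rate to lane and module rates (overhead ignored).
  def lane_gbps(gbaud, bits_per_symbol=2):   # PAM4 = 2 bits per symbol
      return gbaud * bits_per_symbol

  print(lane_gbps(28))       # ~56 Gb/s raw  -> a "50G" lane
  print(lane_gbps(53))       # ~106 Gb/s raw -> a "100G" lane
  print(4 * lane_gbps(53))   # 400G DR4/FR4: four ~100G lanes
  print(8 * lane_gbps(28))   # 400G LR8: eight ~50G lanes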

In addition, Source Photonics also announces the availability of its second-generation 100G QSFP28 CWDM4.

“Source Photonics led the market deployment of small form-factor 10 km products for several generations,” says Ed Ulrichs, Director of PLM at Source Photonics. “The performance of our advanced 50G, 100G and 400G PAM4-based products proves our technical and production capabilities to lead scaling of the entire 500m to 40 km portfolio.”

Source Photonics’ portfolio of Datacenter and Routing products include:

  • 400G LR8 supporting 400GE links up to 10 km; the platform is expandable to support 400G-ER8 for 400G links up to 40 km by 2020
  • 400G DR4 supporting 400GE links for 500m as well as enhanced reach of up to 2 km; the platform supports breakout into 100G-DR/FR as well as high-density 100GE links
  • 400G FR4 supporting 400GE links up to 2 km over duplex single-mode fiber
  • 400G SR8 supporting 400GE links up to 100m (OM4) over multi-mode fiber
  • 100G DR/FR supporting single-channel 100G connectivity and supporting breakout from 400G DR4
  • 100G LR4 supporting 100GE links up to 10 km with support for Ethernet only and OTU4 as well as enhanced performance for E-temp and I-temp applications
  • 100G 4WDM-40 supporting 100GE links up to 40 km with Ethernet only and OTU4 support
  • 100G CWDM4 supporting 100GE links up to 2 km over duplex single-mode fiber
  • 100G SR4 supporting 100GE links up to 100m (OM4) over multi-mode fiber

http://sourcephotonics.com

Tuesday, January 15, 2019

AvidThink: Huawei’s CloudEngine 16800 switch based on Broadcom

Huawei's newly-announced CloudEngine 16800 data center switch is based on Broadcom’s merchant silicon, according to a published report from AvidThink's Roy Chua.

In terms of capacity, the Huawei CloudEngine 16800 data center switch boasts the industry’s highest density 48-port 400GE line card per slot, yielding an overall 768-port 400GE switching capacity.

AvidThink notes that the current top-of-the-line Broadcom Tomahawk 3 is capable of 32 400GbE ports for 12.8 Tbps of switching capacity, less than the 48 400GbE ports on a single Huawei 16800 line card. The report also reveals that Huawei's 16800 will be available for early field trials in Q2 2019.

In addition, Huawei will incorporate its own AI silicon into the switching platform's design for fine-tuned traffic optimization.

https://avidthink.com/analysis/huawei-cloudengine-16800-ascend-ai/

  • AvidThink was formed in the fall of 2018 as an independent research and analysis company focused on technology infrastructure. Prior to that, AvidThink had operated as SDxCentral Research, part of SDxCentral.com, a leading technology media publication.


Thursday, January 10, 2019

Huawei's CloudEngine 16800 data center switch boasts 768-port 400GE

Huawei unveiled its CloudEngine 16800 data center switch built for the Artificial Intelligence (AI) era.

The platform has three defining characteristics making it suitable for the AI era: an embedded AI chip, capacity for 48-port 400GE line cards per slot, and the capability to evolve to an autonomous driving network.

Huawei said its embedded, high-performance AI chip will apply an innovative iLossless algorithm for the auto-sensing and auto-optimization of the traffic model. It promises lower latency and higher throughput based on zero packet loss. The company estimates its optimization will increase AI computing power from 50 percent to 100 percent compared to traditional Ethernet, while improving data storage Input/Output Operations Per Second (IOPS) by 30 percent. The CloudEngine 16800’s local intelligence and the centralized network analyzer FabricInsight create a distributed AI O&M architecture capable of identifying faults -- a key goal of an autonomous driving network.

In terms of capacity, the Huawei CloudEngine 16800 data center switch boasts the industry’s highest density 48-port 400GE line card per slot, yielding an overall 768-port 400GE switching capacity. Huawei says power consumption per bit is reduced by 50% with the massive configuration. The company also claims to have overcome multiple technical challenges such as high-speed signal transmission, heat dissipation, and power supply.
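For scale, the headline figures work out as follows (simple arithmetic on the numbers quoted above):

  # Simple arithmetic on the quoted figures.
  TOTAL_400GE_PORTS = 768
  PORTS_PER_LINE_CARD = 48

  line_card_slots = TOTAL_400GE_PORTS // PORTS_PER_LINE_CARD   # 16 line-card slots
  aggregate_tbps = TOTAL_400GE_PORTS * 400 / 1000.0            # 307.2 Tbps of 400GE ports
  print(line_card_slots, aggregate_tbps)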

http://e.huawei.com/topic/cloud-engine2019/en/index.html?ic_medium=hwdc&ic_source=ebg_banner_EEBGHQ179Q19W



Thursday, December 13, 2018

Data center land in Hong Kong sells for US$697 million

The last remaining plot of land designated for development into a data centre sold for a higher-than-expected HK$5.45 billion (US$697 million), according to the South China Morning Post. There were nine bidders for the 295,405 square foot property. The buyer is listed as Sunevision Holdings.

https://www.scmp.com/business/article/2177665/hong-kongs-biggest-and-last-data-centre-site-fetches-hk545-billion


Tuesday, November 20, 2018

Google plans EUR 600 million data center in Denmark

Google confirmed plans for a new data center in western Denmark, just outside Fredericia.

The new facility represents an investment of EUR 600 million. Google is securing Power Purchase Agreements with renewable energy sources in Denmark. Construction is expected to be completed in late 2021. This will be Google's fifth data center in Europe, joining sites in Ireland, Finland, the Netherlands and Belgium.

https://www.blog.google/inside-google/infrastructure/breaking-ground-googles-first-data-center-denmark/

Thursday, September 6, 2018

Vantage Data Centers completes Santa Clara data center -- 75MW

Vantage Data Centers completed construction of its final data center at its Santa Clara, California campus, known as V5. The new four-story addition adds 15MW of critical IT load, bringing the campus total to 75MW. This campus is the largest in Silicon Valley. The addition also features a new cooling system that uses a combination of outside air and a chilled water loop, which uses recycled water and modular chiller and dry-cooler technology designed to minimize water usage while maintaining ultra-low PUEs. The water loop also utilizes non-potable grey water, further reducing impact on local water resources. The facility also supports both traditional and high-density data center designs.
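For reference, PUE (power usage effectiveness) is simply total facility power divided by IT load; the closer to 1.0, the less energy is spent on cooling and power conversion. The figures in this sketch are placeholders to show the calculation, not Vantage's numbers:

  # PUE = total facility power / IT load (placeholder numbers, not Vantage's figures).
  def pue(total_facility_kw, it_load_kw):
      return total_facility_kw / it_load_kw

  print(round(pue(total_facility_kw=17_250, it_load_kw=15_000), 2))   # 1.15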

“Silicon Valley continues to be a vital and strategic market for our customers,” said Vantage President and Chief Executive Officer Sureel Choksi. “With the final facility on our first Santa Clara campus complete, and construction of our 69MW Matthew Street campus also in Santa Clara well underway, Vantage can support the growth of enterprises, cloud and hyperscale customers well into the future.”

Vantage is also building a new 42-acre, 108MW campus in Ashburn, Virginia. The first 24MW building, which will provide 6MW of initial capacity on this completely new campus, is scheduled to be completed in early 2019.

The new expansion contains several features designed to enhance sustainability while maintaining a low total cost of ownership.

Monday, July 16, 2018

Arrcus builds Network OS for white box data center infrastructure

Arrcus, a start-up based in San Jose, California, emerged from stealth to unveil its software-driven, hardware agnostic network operating system for white boxes.

Arrcus said it sees an opportunity to help enterprises transform the way they manage their networks, liberating them from vertically integrated proprietary solutions and opening the door to horizontally diversified choices of best-in-class silicon and hardware systems.

The company's new ArcOS networking operating system has been ported to both Broadcom’s StrataDNX Jericho+ and StrataXGS Trident 3 switching silicon platforms.

ArcOS is built on a modular micro-services paradigm and offers advanced Layer 3 routing capabilities. Key elements include a hyper-performance, resilient control plane; an intelligent, programmable Dataplane Adaptation Layer (DPAL); data-model-driven telemetry for the control plane, data plane and environmentals; and consistent YANG/OpenConfig APIs for easy programmatic access. These capabilities, in conjunction with Broadcom’s StrataDNX Jericho+ platform, enable support for the full BGP Internet routing table.

Arrcus cites the following use cases:

  • Spine-Leaf Clos for Datacenter workloads
  • Internet Peering for CDN providers and ISPs
  • Resilient Routing to the Host
  • Massively Scalable Route-Reflector clusters in physical/container form-factors

Arrcus also announced $15 million in Series A funding from General Catalyst and Clear Ventures. Advisors include Pankaj Patel, former EVP and CDO of Cisco; Amarjit Gill, a serial entrepreneur who founded and sold companies to Apple, Broadcom, Cisco, EMC, Facebook, Google, and Intel; Farzad Nazem, former CTO of Yahoo; Randy Bush, Internet Hall of Fame inductee and founder of Verio (the basis of NTT’s data center business); Fred Baker, former Cisco Fellow, IETF Chair and Co-Chair of the IPv6 Working Group; Nancy Lee, ex-VP of People at Google; and Shawn Zandi, Director of Network Engineering at LinkedIn.

“We use ‘network different’ as our fundamental approach to enable the freedom of choice through our product innovation and challenging the status quo.  Arrcus has assembled the world’s best networking technologists, is bringing new capabilities, and changing the business model to make it easier to design, deploy, and manage large scale networking solutions for our customers,” stated Arrcus co-founder and CEO Devesh Garg.

  • Arrcus is headed by Devesh Garg, who previously was president of EZchip and founding CEO of Tilera (acquired by EZchip). He also served at Bessemer Venture Partners and Broadcom. Other Arrcus co-founders include Keyur Patel, who was a Distinguished Engineer at Cisco, and Derek Yeung, a former Principal Engineer at Cisco.

Thursday, August 24, 2017

Apple picks Iowa for $1.3 billion data center

Apple will build a 400,000-square-foot, state-of-the-art data center in Waukee, Iowa, a town with a population of about 14,000 located in the center of the state, near Des Moines.

The investment is valued at $1.3 billion. Construction is expected to start early next year and be completed in 2020.

The new facility will run entirely on renewable energy from day one.

“At Apple, we’re always looking at ways to deliver even better experiences for our customers. Our new data center in Iowa will help serve millions of people across North America who use Siri, iMessage, Apple Music and other Apple services — all powered by renewable energy,” said Tim Cook, Apple’s CEO. “Apple is responsible for 2 million jobs in all 50 states and we’re proud today’s investment will add to the more than 10,000 jobs we already support across Iowa, providing even more economic opportunity for the community.”


https://www.apple.com/newsroom/2017/08/apples-next-us-data-center-will-be-built-in-iowa/


  • Google operates a major data center in Council Bluffs, Iowa.


Facebook Plans 4th Expansion of Iowa Data Center


Facebook announced the fourth major expansion of its hyper-scale data center campus in Altoona, Iowa. Specifically, Facebook will add cold storage capabilities to the complex.  The expansion will add more than 100,000 square feet to building 3. Cold storage of Facebook photos and other archival media is currently done at Facebook data centers in Prineville, Oregon and Forest City, North Carolina. 

Wednesday, March 8, 2017

Radisys Announces DCEngine Release 1.0 Management Software

Radisys announced its DCEngine Management Software Release 1.0 for optimizing resources in hyperscale data centers.

The software is now available and shipping integrated with Radisys’ DCEngine product line, an open hardware platform based on the Open Compute Project (OCP) CG-OpenRack-19 specification. The specification defines a scalable, carrier-grade, rack-level system that integrates high-performance compute, storage and networking in a standard 19-inch rack. Future DCEngine Management Software releases will extend these capabilities with a focus on facilitating the deployment and integration of the DCEngine hyperscale data center solution into existing SDN-enabled ecosystems.

Highlights of DCEngine Management Software Release 1.0

  • Intel Rack Scale Design APIs to enable dynamic composition of resources based on workload specific demands
  • Modules for leading data center orchestration frameworks, such as Ansible, to make firmware updates easy and convenient
  • Redfish Interface 1.0 protocol support


“CSPs are evolving their virtualization strategies and deploying data center infrastructure to support high availability applications such as virtualized network functions and real-time data analytics,” said Bryan Sadowski, vice president, FlowEngine and DCEngine, Radisys. “Our new management software for DCEngine delivers essential hardware resource management capabilities that are increasingly needed in this new ecosystem. We’ve reduced the operational pain points for rack scale deployments and operations by building a suite of tools that enable automated and convenient configuration as well as resource management to meet CSPs’ evolving requirements.”

http://www.radisys.com

Tuesday, November 1, 2016

Cisco Brings on Roland Acra as SVP/GM, Data Center Business Group

Cisco announced the appointment of Roland Acra as SVP/GM, Data Center Business Group. He will be responsible for defining the next phase of Cisco data center strategy and driving development for its data center portfolio, including all data center switching products – NX2/4/5/6K, NX3K, NX7K, and NX9K, UCS, SAN, and associated products and programs.

As a long-standing industry expert in Internet routing, software engineering and communication protocol development, Roland fits right in – once again.
Acra is a Cisco veteran, having held several general management and executive leadership positions from 1991 to 2003. In 2010, he returned to Cisco as Vice President in the Smart Grid Business Unit following the acquisition of Arch Rock, a developer of IPv6-based wireless sensor networks where he served as President and CEO. Prior to Arch Rock, he was President and CEO of Procket Networks.

http://www.cisco.com

Wednesday, June 8, 2016

HPE Expands its Cloud Portfolio

Hewlett Packard Enterprise rolled out a set of major updates to its cloud portfolio, including:

  • HPE Helion Cloud Suite, a new software suite enabling customers to deliver and manage their full spectrum of applications -- traditional, virtualized, cloud-native and containerized -- across a broad range of infrastructure environments. HPE Helion Cloud Suite includes full-stack automation to enable rapid delivery of IT services and applications, and provides a common, simplified, self-service storefront for IT and developers. It also includes a complete development environment supporting DevOps processes for traditional and cloud-native applications.
  • HPE Helion CloudSystem 10, an engineered hardware and software solution to build and rapidly deploy an enterprise-grade cloud environment for a full range of workloads, offering deep integration with HPE OneView 3.0 for automatic provisioning of cloud resources from bare metal infrastructure. It delivers hosting, automation and orchestration of traditional and cloud-native workloads. This pre-integrated solution brings together HPE hardware, storage, networking, software and services into one package.
  • HPE Helion Stackato 4.0, a complete and open application development platform-as-a-service (PaaS) solution, powered by Cloud Foundry, designed to speed the delivery of cloud native applications.
  • HPE Cloudline 3100 Server, offering industry leading economics for service providers. The CL3100 meets cloud service providers' dense storage requirements for Hadoop/Cassandra/compute workloads. The CL3100 (1U) storage server is 75 percent smaller than the CL5200 storage server (4U), requiring a smaller footprint.
  • New HPE Technology Services offerings for IT transformation and workload migration

"No enterprise has a one size fits all approach to cloud -- every customer wants solutions that help them drive their business faster and cut costs across their full spectrum of applications," said Bill Hilf, senior vice president and general manager, HPE Cloud. "Organizations require their own right mix of traditional IT, private, managed and public clouds and the flexibility to support applications spanning different technologies, architectures and delivery models. The HPE Helion portfolio gives customers a simple, powerful set of options, offering the breadth and coverage they need."

http://www.hpe.com

Sunday, June 5, 2016

Cavium's 64-Bit ARM ThunderX2 Packs up to 54 Cores

Cavium unveiled its 64-bit ARM-based ThunderX2 processor for servers in cloud data centers used for workloads such as compute, security, storage, data analytics, network function virtualization (NFV) and distributed databases.

The second-generation ARM processor from Cavium, which offers a number of on-board accelerators and advanced capabilities, packs up to 54 cores, enabling it to deliver two to three times the performance across a wide range of standard benchmarks and applications compared to ThunderX. It is built in a 14nm FinFET process and is compliant with the ARMv8.2 architecture as well as ARM's Server Base System Architecture (SBSA) standard.

Key ThunderX2 features will include:

  • 2nd generation of full custom Cavium ARM core: 2.4 to 2.8GHz in normal mode, Up to 3 GHz in Turbo mode; > 2X single thread performance compared to ThunderX.
  • Up to 54 cores per socket delivering 2-3X socket level performance compared to ThunderX.
  • Cache: 40K I-Cache and 64K D-cache, highly associative; 32MB shared Last Level Cache (LLC).
  • Single and dual socket configuration support using 2nd generation of Cavium Coherent Interconnect with > 2.5X coherent bandwidth compared to ThunderX.
  • System Memory: 6 DDR4 memory controllers per socket; Dual DIMM per memory controller, for a total of 12 DIMMs per socket.
  • Full system virtualization for low latency from virtual machine to IO enabled through Cavium virtSOC technology.
  • Integrated 10/25/40/50/100GbE network connectivity.
  • Multiple integrated SATAv3 interfaces.
  • Integrated PCIe Gen3 interfaces, x1, x4, x8 and x16 support.
  • Integrated Hardware Accelerators: OCTEON style packet parsing, shaping, lookup, QoS and forwarding; Virtual Switch (vSwitch) offload; Virtualization, storage and NITROX V security.

Four versions of the ThunderX2 will be offered:

  • ThunderX2_CP:  Optimized for cloud compute workloads such as private and public clouds, web serving, web caching, web search, commercial HPC workloads such as computational fluid dynamics (CFD) and reservoir modeling. This family supports multiple 10/25/40/50/100 GbE network Interfaces and PCIe Gen3 interfaces. It also includes accelerators for virtualization and vSwitch offload.
  • ThunderX2_ST: Optimized for big data, cloud storage, massively parallel processing (MPP) databases and data warehousing workloads. This family supports multiple 10/25/40/50/100 GbE network interfaces, PCIe Gen3 interfaces and SATAv3 interfaces. It also includes hardware accelerators for data protection/integrity/security and efficient user-to-user data movement.
  • ThunderX2_SC:  Optimized for secure web front-end, security appliances and cloud RAN type workloads. This family supports multiple 10/25/40/50/100 GbE interfaces and PCIe Gen3 interfaces. Integrated hardware accelerators include Cavium’s industry leading, 5th generation NITROX security technology with acceleration for IPSec, RSA and SSL.
  • ThunderX2_NT: Optimized for media servers, scale-out embedded applications and NFV type workloads. This family supports multiple 10/25/40/50/100 GbE interfaces. It also includes OCTEON style hardware accelerators for packet parsing, shaping, lookup, QoS and forwarding.

"ThunderX2 combines our next generation core that will deliver significantly higher single thread performance with next generation IO and hardware accelerators to provide a compelling value proposition for the server market and greatly expand the serviceable server TAM," said Syed Ali, President and CEO of Cavium. "ThunderX2 will enable flexible, scalable and fully optimizable servers for next generation software defined data centers."

http://www.cavium.com


Cavium Unleashes 64-bit ARM-based OCTEON TX

Cavium unveiled its OCTEON TX family, a line of 64-bit ARM based SOCs for control plane and data plane applications in networking, security, and storage.

The OCTEON TX combines Cavium's data plane architecture with its optimized ARMv8.1 cores (the company continues to produce its OCTEON III processors, which are based on MIPS).

The new processors expand the addressability of Cavium’s embedded products into control plane application areas within enterprise, service provider, data center networking and storage that need support for an extensive software ecosystem and virtualization features. This product line is also optimized to run multiple concurrent data and control planes simultaneously for security and router appliances, NFV and SDN infrastructure, service provider CPE, wireless transport, NAS, storage controllers, IoT gateways, printer and industrial applications.

Cavium said the control planes of next-gen platforms will need to run commercial software distributions and operating systems (e.g., RHEL, Canonical and Java SE), support open source applications (e.g., OpenStack, OpenFlow and Quagga), launch services dynamically and run customer-specific services. Multiple types of high-performance data plane applications also need to be supported for firewall, content delivery, routing, and traffic management. While current OCTEON SOCs are used in both data plane and control plane applications with embedded software, control plane applications requiring a wider software ecosystem and broader support have traditionally been addressed by the x86 architecture. The ARM architecture is now able to serve these critical needs.