Tuesday, April 7, 2015

AT&T Extends its Switched Ethernet with Network on Demand

AT&T's Switched Ethernet Network on Demand service is now available in more than 100 U.S. cities.

The service offers a customer portal that lets business customers order more ports, instantly add or change services, scale bandwidth to meet their changing needs and manage their network in near real time.

"Businesses want fast, versatile network service tailored to their needs," said Josh Goodell, vice president, Network on Demand. "The rapid expansion of Network on Demand for AT&T Switched Ethernet Servicesm will provide customers improved network control and help them work more efficiently. We look forward to adding more services in 2015."

AT&T highlights:

  • Speed – order service significantly faster and change it in near real time
  • Simplicity – order and manage service through the user-friendly AT&T Business Center web portal
  • Flexibility – scale service up or down as needed
  • Reliability – enjoy the reliability and security of the AT&T network

http://www.business.att.com/enterprise/Family/network-services/ethernet/

Cisco Offers Threat Grid via Cloud or On-prem Appliance

Cisco announced a number of new network security capabilities and services, including a new AMP Threat Grid in the Cisco Advanced Malware Protection (AMP) portfolio:

AMP Threat Grid provides dynamic malware analytics and threat intelligence.  These advanced capabilities are available as a standalone cloud service or via new UCS-based on-premises appliances.  AMP Threat Grid analytics engines provide security teams with breach detection against advanced malware, allowing them to quickly scope and recover from a breach by providing context-rich, actionable threat intelligence.

Vulnerability visibility and prioritization: AMP for Endpoints brings additional visibility to the extended network by providing a list of hosts that contain vulnerable software, a list of the vulnerable software on each host, and the hosts most likely to be compromised. Powered by Cisco threat intelligence and security analytics, AMP identifies vulnerable software being targeted by malware and the potential exploit, and provides customers with a prioritized list of hosts to patch.
Cisco also cited enhanced retrospective security capabilities in the AMP portfolio.

Cisco also announced new models of its ASA with FirePOWER Services -- a threat-focused next-generation firewall (NGFW) aimed at midsize companies, branch offices and industrial environments with the same advanced malware protection and threat detection capabilities deployed by large enterprise organizations. The Cisco ASA with FirePOWER Services combines stateful firewall, application visibility and control (AVC), advanced malware protection (AMP), and next-generation intrusion prevention capabilities (NGIPS) into a single device. Pricing starts at US $995 including a Cisco ASA with FirePOWER Services appliance and management.

 "Every day organizations are faced with advanced threats that infiltrate and persist in company environments for months before they are discovered.  We believe that the most effective way to address these real-world challenges is continuous threat protection against these attacks. Further enhancements like advanced correlation of indicators of compromise, vulnerability mapping and expanded retrospective security further differentiate Cisco AMP and strengthen security teams' responses before, during and after an attack," stated Marty Roesch, Vice President, Chief Architect, Cisco Security Business Group.

http://newsroom.cisco.com/press-release-content?type=webcontent&articleId=1615794

Blueprint: Converged Infrastructure in the Age of Cloud

by Scott Geng, CTO of Egenera

The past decade has been a time of serious change in the technology industry with the advent of virtualization and cloud computing. Virtualization, by itself, has been a significant agent for change in every IT data center around the world. It’s driven a massive push for server consolidation that has made IT more efficient with their compute resources by dramatically driving up utilization rates and fundamentally changing the processes used for application deployment. It has also driven the development of many new tools to help administrators get their job done.  As an example, tools for live migration provide more sophisticated ways to deal with hardware changes and load distribution than was previously possible.

While virtualization was aggressively being adopted in data centers across the globe, another major agent of change came along on its heels – cloud computing. Cloud computing, which was really built on top of virtualization, gives IT even more flexibility in how to deploy applications. It enables user self-service capabilities that allow traditional IT workflow processes to be managed by end users or other third parties, and allows those processes to be fully automated to ease the delivery of services. Cloud technology has also driven the availability of public services like IaaS, PaaS and SaaS. These concepts have introduced a new paradigm of service delivery that is causing many organizations to redesign how they deliver their business value.

Cloud Has Major Impact on Reseller Channels

Virtualization and cloud computing have also resulted in a massive amount of change across the industry - including the big server vendors, indirect channels like resellers and VARs, service providers, colos and enterprises.

Let’s start with the big server vendors. Although they have seen modest growth in the face of virtualization and cloud trends, their slice of the pie is getting substantially smaller. Baird Equity Research Technology estimates that for every dollar spent on Amazon public cloud resources, at least $3 to $4 is not spent on traditional IT. The reason is twofold. First, the massive consolidation effort spawned by virtualization has reduced the number of servers IT needs to run the business. Second, huge companies like Amazon and Google are not buying their servers from the big hardware vendors, but are building the hardware themselves to control the cost of their infrastructure. It’s hard to get a definitive count of how many servers that represents, but rough estimates for Amazon and Google put the figure close to 3 million servers, or roughly 10% of the big server vendors’ market share.

VARs and resellers are feeling the pinch even more. If you are an IBM reseller, for example, you are feeling the impact of the smaller number of opportunities to sell hardware and add value on top of it. It will continue to be an uphill battle for these vendors to remain relevant in this new cloud economy.

The Public Cloud model is also impacting Service Providers. As businesses move a larger percentage of their services into the cloud, the traditional service providers are finding it harder to prevent customers from abandoning ship to the big public cloud vendors – mostly because the price points from these vendors are so attractive. Take Amazon pricing for example. They have lowered prices over 40 times since 2008. That constant price pressure makes it extremely difficult for these businesses to compete. The best path to success for these vendors is to provide specialized / differentiated services to avoid the infrastructure price wars that will otherwise crush them.

Enterprises face their own set of challenges as they struggle with the fundamental questions of what to move to the cloud and the best way to get there. Easy access to public cloud resources leaves many IT organizations hard pressed to get their arms around which business departments are already using these resources. That is a real security concern, given the ubiquitous access the cloud enables, and it also makes costs harder to identify and control.

Now, let’s look at the impact on the data center and IT organizations. Most IT organizations see the power of these new technologies and are working hard to take advantage of the capabilities they provide. However, these new capabilities and processes come at a price in the form of management complexity. From a process perspective, managing virtualized solutions is an added burden for IT: virtualization does not replace the pre-existing processes used to manage physical servers. That work still has to be done, and it’s not just a matter of hardware deployment, since hardware has an operational life cycle that IT must manage too, including provisioning, firmware management, break-fix, next-generation refreshes and more. So from a practical point of view, virtualization has added another layer of management complexity to IT’s day-to-day operations.

The same can be said about cloud computing. While cloud computing has expanded IT’s toolbox by enabling user self-service and access to multiple service deployment models, it has added another layer of choice and management complexity.  This is especially true for organizations starting to adopt hybrid cloud environments where IT has the challenge of managing multiple disparate environments that include a mix of hardware vendors, hypervisors and/or multiple public cloud solutions.

To reduce complexity, IT is sometimes forced to limit those choices, which is problematic because it locks organizations into solutions that ultimately limit their ability to adapt to future changes. It is also fair to say that most cloud solutions today are still relatively immature, especially with respect to integration, or the lack thereof, into the business processes of the company. Organizations are left trying to piece together the integration into their existing processes, which is typically hard to do.

Another important point is that most cloud management solutions today assume the infrastructure (the hardware platforms, the hypervisors and the management software) is already deployed and setup. The services on today’s market don’t help IT deal with moving to a new generation of hardware or changing hardware vendors entirely. The reality is that there are very few examples of solutions that integrate the concept of self-service for the actual physical infrastructure itself or that make it easy to react to infrastructure changes that happen naturally over time. This leaves IT with no choice but to support separate processes for setting up and managing their infrastructure. And that spells complexity.

Given these challenges, can converged infrastructure help address some of these complexities?  

As the name implies, converged infrastructure is the consolidation/integration of data center resources (compute, network and storage) into a single solution that can be centrally managed. There are also a few important related concepts - stateless computing and converged fabrics. As it turns out, both of these technologies can really help in the fight against operational complexity. Stateless computing refers to servers that do not store any unique software configuration or “state” within them when they are powered off. The value of this approach is that servers become anonymous resources that can be used to run any operating system, hypervisor and application at any time. Converged fabric solutions are another example of consolidation but down at the network/fabric layer - essentially sending network, storage and management traffic over a single wire. This is important in the drive for simplification because it reduces the number of physical components. Fewer components mean fewer things to manage, fewer things that can fail, lower costs and better utilization - all things every IT director is striving for.
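
To make the stateless idea concrete, here is a minimal sketch, assuming a hypothetical management API (none of the names below come from any vendor's actual product), of a server whose identity lives entirely in software and can be bound to any anonymous node:

```python
# Hypothetical sketch of "stateless computing": the server's identity
# (compute sizing, network attachments, boot storage) lives in a software
# profile, not on the physical machine, so any anonymous node can assume it.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServerProfile:
    """Software definition of a server; no state is stored on the hardware."""
    name: str
    cpu_cores: int
    memory_gb: int
    networks: List[str]                  # logical network / VLAN attachments
    boot_lun: str                        # SAN volume the profile boots from
    assigned_node: Optional[str] = None  # anonymous hardware currently running it

def assign(profile: ServerProfile, free_nodes: List[str]) -> ServerProfile:
    """Bind the profile to any available anonymous node."""
    if not free_nodes:
        raise RuntimeError("no spare hardware available")
    profile.assigned_node = free_nodes.pop(0)
    # A real converged-infrastructure manager would also program the converged
    # fabric (network and storage paths) here and then power on the node.
    return profile

web01 = ServerProfile("web01", cpu_cores=8, memory_gb=32,
                      networks=["prod-vlan10"], boot_lun="san:/vol/web01")
assign(web01, free_nodes=["blade-3", "blade-7"])
print(web01.assigned_node)  # "blade-3"; if blade-3 fails, re-assign to blade-7
```

Because the profile carries everything that makes the server unique, replacing failed hardware or moving a workload is reduced to re-running the assignment against a different node.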

In my view, converged infrastructure with stateless computing and converged fabrics is ultimately what is needed to address the complexity of physical, virtual and hybrid clouds. Let’s examine why.

By combining compute, storage and network with converged infrastructure management you get an integrated solution that provides a single pane of glass for managing the disparate parts of your infrastructure. This certainly addresses one of the major pain points with today’s data center. The complexity of different management interfaces for each subsystem is a serious headache for IT and having a single user interface to provision and manage all of these resources is just what the doctor ordered.

The integration of these technologies lends itself to a simpler environment and a significant increase in automation. The traditional workflow for deploying a server is complex, because of the manual breaks in the workflow that naturally happen as IT moves between the various boundaries of compute, network and storage. These subsystems often require special expertise to orchestrate the infrastructure. With converged infrastructure solutions, these complex workflows become simple automated activities that are driven by software. As always, automation is king in terms of simplifying and streamlining IT operations.

While a converged infrastructure solution addresses some of the key pressure points that IT admins experience today, when combined with stateless computing and a converged fabric it delivers the ultimate in simplicity, flexibility and automation for IT:

1. Enables provisioning of bare metal in the same way as provisioning virtual servers. You can now create your physical server by defining your compute resources, network interfaces and storage connectivity all via a simple software interface. This allows IT to get back to a single process for deploying and managing their infrastructure (regardless of whether it's virtual or physical) - a real impact on IT operations.

2. Higher service levels. A stateless computing approach enables automated hardware failover that radically simplifies the delivery of highly available services. In fact, it becomes so easy with this model that IT administrators can make any operating system and application highly available with the click of a button, and even pool failover resources between applications – driving incredible efficiency and flexibility.

3. Flexibility that improves utilization. A converged infrastructure model allows you to take full advantage of your compute resources by moving compute power to where you need it most, when you need it. For application developers, it ensures the ability to right-size applications both before and after production.

4. Simplified Disaster Recovery (DR)

One of the key ingredients of a simplified DR approach is to create a software definition for a physical server. Once you have this model in place, it is easy to copy those definitions to different locations around the world. Of course, you have to copy the server’s data too. The key benefit is that this creates a digital blueprint of data center resources, including the server definitions, network and storage connectivity, and any policies. In the case of a disaster, the entire environment (servers, network and storage connectivity) can be reconstituted on a completely different set of physical resources. This is a powerful enabler for IT to simplify and protect the business in a way that increases the reliability and effectiveness of IT.
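
As a rough illustration of that idea, the sketch below (hypothetical field names, not any vendor's actual blueprint format) copies a site's server definitions to a DR location and re-points the boot storage, so the environment could be reconstituted on different hardware:

```python
# Hypothetical sketch of DR via software-defined server definitions: the
# "digital blueprint" (server, network and storage definitions plus policies)
# is copied to a second site; data replication itself is handled separately.
import copy
import json

primary_blueprint = {
    "site": "boston",
    "servers": [
        {"name": "app01", "cpu_cores": 16, "memory_gb": 64,
         "networks": ["app-vlan20"], "boot_lun": "san-bos:/vol/app01"},
    ],
    "policies": {"failover_pool": ["spare-1", "spare-2"]},
}

def replicate_to_dr(blueprint: dict, dr_site: str, dr_san_prefix: str) -> dict:
    """Copy the blueprint to the DR site and re-point boot volumes."""
    dr = copy.deepcopy(blueprint)
    dr["site"] = dr_site
    for server in dr["servers"]:
        server["boot_lun"] = server["boot_lun"].replace("san-bos:", dr_san_prefix)
    return dr

dr_blueprint = replicate_to_dr(primary_blueprint, "denver", "san-den:")
print(json.dumps(dr_blueprint, indent=2))
# In a disaster, the DR site instantiates these definitions on its own pool of
# anonymous hardware, reconstituting servers plus network and storage paths.
```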

So, what value does this all bring from the business point of view?
  • Faster delivery of business services
  • Better service levels for business services
  • Lower capital and operational costs, as well as reduced software license costs
  • Enhanced availability and business continuity insurance
  • Flexibility to react to change

I think it’s clear that while virtualization and cloud computing have brought fantastic benefits to IT, those trends have also caused serious disruption across the industry. A converged infrastructure approach can ensure you get the benefits you are striving for, without the headaches and complexity you always want to avoid.

About the Author

Scott Geng is Chief Technology Officer and Executive Vice President of Engineering at Egenera. Previously, Geng managed the development of leading-edge operating systems and middleware products for Hitachi Computer Products, including development of the operating system for the world’s then-fastest commercial supercomputer and the first industry-compliant Enterprise JavaBeans Server. Geng has also held senior technical positions at IBM, where he served as advisory programmer and team leader in bringing UNIX to the mainframe; The Open Software Foundation, where he served as consulting engineer for the OSF/1 1.3 micro-kernel release; and Wang Laboratories, as principal engineer for the base kernel and member of the architectural review board.

About Egenera

Converge. Unify. Simplify. That’s how Egenera brings confidence to the cloud. The company’s industry leading cloud and data center infrastructure management software, Egenera PAN Cloud Director™ and PAN Manager® software, provide a simple yet powerful way to quickly design, deploy and manage IT services while guaranteeing those cloud services automatically meet the security, performance and availability levels required by the business. Headquartered in Boxborough, Mass., Egenera has thousands of production installations globally, including premier enterprise data centers, service providers and government agencies. For more information on the company, please visit egenera.com



Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Egenera Launches Wholesale Multi-Cloud Platform Leveraging AWS and Equinix

Egenera launched a wholesale, multi-cloud management platform that enables managed service providers and VARs to offer hybrid cloud services while leveraging the resources of Amazon Web Services and the Equinix Cloud Exchange.

Egenera said its Xterity platform enables its resale partners to quickly enter the cloud services market with their own branded service without up-front capital costs or ongoing management costs, and with the margins needed to ensure a profitable cloud services business. Xterity provides resellers with the ability to design, price and manage complex multi-tier and multi-cloud application environments for their end user customers. Xterity also includes business and service management features.

Egenera's Xterity ties into the Equinix Cloud Exchange.  Its APIs currently support Amazon Web Services, and Egenera is looking to add support for Microsoft Azure. It also integrates with private cloud implementations.

The company cited business continuity and disaster recovery services as growth drivers for its wholesale partners.  The service has been running in Ireland for a number of months and is now available globally.

“A global wholesale service like this makes it simple for partners to deliver a fully managed cloud service and deploy mission critical applications – all without having to build, manage or own their own infrastructure.  This frees resellers to focus on their customer relationships, create custom solutions and enhance their brand with their end users,” said Pete Manca, CEO of Egenera.

http://www.egenera.com


  • Egenera, which provides physical, virtual and cloud management, has acquired Fort Technologies, a cloud lifecycle software provider based in Dublin, Ireland. Fort’s cloud management capabilities were added to Egenera's PAN Cloud Director software for enterprises.  The deal also expands Egenera’s sales footprint, partner network and customer base in EMEA. Financial terms were not disclosed.

Brocade Fills Out its Campus Switching Portfolio

Brocade introduced a new campus LAN switch boasting the industry's highest 10 GbE port density for any switch in its class.  The Brocade ICX 7250 delivers 50 percent greater stacking density than comparable switches, consolidating up to 576 1 GbE ports into a virtual chassis and single management touchpoint.

Brocade also unveiled Switch Port Extender, a new "HyperEdge" Architecture technology to simplify network deployment and ongoing maintenance. Through added automation, this technology enables shared network services and management between Brocade ICX 7250, 7450, and 7750 switches distributed across the campus.

Brocade is also extending OpenFlow 1.3 support to its ICX 7450 and 7750 switches. The Brocade ICX switch family is certified with the OpenDaylight-based Brocade Vyatta Controller and will support other OpenDaylight-compliant controllers.

"Today's organizations require a high-performance, scalable campus LAN infrastructure to address the proliferation of mobile devices, rich media, and insatiable user expectations," said Jason Nolet, senior vice president of the Switching, Routing, and Analytics Products Group at Brocade. "In addition, customers are seeking networking solutions that will help to increase IT agility and lower operating expenses through automation and management consolidation."

http://www.brocade.com

IIX Secures $20 Million, Acquires IX Reach

IIX, a software-defined interconnection company based in Santa Clara, California, announced $20 million in funding from TriplePoint Capital.  The company’s SDI platform enables programmable interconnection between networks, allowing customers to gain more control, improve security, reduce the costs associated with IP transit delivery, optimize network performance and extend network reach across the globe.

IIX also announced its acquisition of IX Reach Limited, a global network solutions provider and partner to leading Internet Exchange Points around the world. Financial terms were not disclosed.  IIX said the acquisition gives it a combined interconnection footprint into more than 150 Points of Presence (PoP) across various regions in North America, Europe, the Middle East and Asia. The expansion will also bring the company’s software-defined interconnection platform into more markets across the globe, enabling simple, secure and programmable direct network connections between content providers, cloud application providers and other enterprises.

Stephen Wilcox, IX Reach’s Founder and CEO, has been appointed IIX’s President of EMEA and Chief of Global Networks. He brings more than 17 years of management experience in the technology sector to the company. Stephen led the global expansion of IX Reach into multiple regions, including Europe, North America, the Middle East and Asia. In addition to founding IX Reach in 2007, Stephen previously served in various senior roles with such companies as Google, Renesys, Telecomplete and U-NET. He was also a 10-year member of the board of directors for the London Internet Exchange.

Monday, April 6, 2015

Juniper Debuts Accelerated 40GbE Switch with Xeon and FPGA for Programmability

Juniper Networks introduced an application acceleration switch and a new packet flow accelerator module designed to deliver lower latency for financial networks.

The new QFX5100-AA application acceleration switch and QFX-PFA packet flow accelerator module, which build on Juniper's QFX product family, leverage Maxeler Technologies’ customizable software logic to significantly accelerate business-critical applications in latency-sensitive computing environments.

The new QFX5100-AA switch combines the Intel Xeon processor E3-1125C v2 with Broadcom switching silicon, while the QFX-PFA module is based on an Altera multi-100G field-programmable gate array (FPGA). The switch is configured with 24 ports of 40GbE, with two expansion slots for 4x40 modules or one double-wide slot for the FPGA-based module. Customers can use Java to program the module for compute-intensive applications. Integration with Junos Space Network Director ensures automated and simple data center management from a single point of control.

“The possibilities of compute-integrated networking are transformative across a broad variety of financial services, including equities and commodities exchanges, market data providers, high-frequency trading, and credit processing. We see great potential in other sectors for this technology, including energy, research, education and large enterprises. With the introduction of Juniper Networks QFX5100-AA and QFX-PFA, we are delivering new computing capabilities within the data center network to ensure that our customers can make the most informed split-second investment decisions,” stated Andrew Bach, chief architect for Financial Services Team, Juniper Networks.

“Juniper Networks’ new switch, powered by the Intel Xeon processor E3-1125C v2, provides the performance and added intelligence needed to improve the ease of application integration. This solution, which embeds applications directly into the switch and utilizes the power of distributed computing, allows customers to take advantage of greatly improved performance,” said Sandra Rivera, vice president & general manager, Network Platforms Group, Intel.

http://www.juniper.net

CoreOS Announces "Tectonic" Kubernetes Platform, Google Investment

CoreOS, a San Francisco start-up building a new Linux distribution for modern infrastructure stacks, introduced Tectonic, its commercial Kubernetes platform.

Tectonic, which combines Kubernetes and the CoreOS stack, pre-packages all of the components required to build "Google-style infrastructure."  CoreOS said it adds a number of commercial features to the mix, such as a management console for workflows and dashboards, an integrated registry to build and share Linux containers, and additional tools to automate deployment and customize rolling updates.

In addition, CoreOS announced a new $12 million round of funding led by Google Ventures, with additional investment from Kleiner Perkins Caufield & Byers (KPCB), Fuel Capital and Accel Partners, bringing its total funding to $20 million.

"When we started CoreOS, we set out to build and deliver Google's infrastructure to everyone else," said Alex Polvi, CEO of CoreOS. "Today, this goal is becoming a reality with Tectonic, which allows enterprises across the world to securely run containers in a distributed environment, similar to how Google runs their infrastructure internally."

"We see a broader industry trend where enterprise computing is shifting to mirror the infrastructure of large-scale software companies," said Dave Munichiello, Partner at Google Ventures. "With a focus on security, reliability, and ease of deployment CoreOS delivers a comprehensive platform for global enterprises to deliver services at scale. We are excited to be working with the team."

https://coreos.com/blog/announcing-tectonic/


In December 2014, CoreOS released Rocket, a new portable container format, as an alternative to the Docker runtime. The idea is to provide a “standard container” that can be used for moving workloads between multiple servers and environments. The company said its container-optimized OS consumes 40% less RAM on boot than an average Linux installation and features an active/passive dual-partition scheme to update the OS as a single unit instead of package by package. Applications on CoreOS can run as Docker containers.  Up until this announcement, CoreOS had been a big supporter of Docker. In a blog posting, the company said the Docker company has strayed from its early principle of building a simple, composable container unit that could be used in a variety of systems and supported by everyone. So CoreOS is now developing Rocket around the App Container specification and promoting it as a new set of simple and open specifications for a portable container format.

Akamai Acquires Octoshape for OTT Optimization

Akamai Technologies has acquired Octoshape, a cloud OTT IPTV service provider that focuses on delivering broadcast, enterprise, and carrier solutions. Financial terms were not disclosed.

Octoshape's services are designed to help optimize the quality of video streams for over-the-top (OTT) content and to enable Internet Protocol television (IPTV) solutions. Octoshape uses a combination of patented video and network optimization technologies for the delivery of video streams across the Internet using standard media formats and players. The privately-held company has approximately 40 employees.

"As more video gets consumed over the Internet, and on devices that can display higher-quality resolution, it is important for us to develop new ways to acquire, transform and distribute the highest-quality media for broadcast-size audiences," said Tom Leighton, CEO of Akamai. "We are working to continue to extend our platform to accommodate video throughput increases that come from the adoption of 4K, and to support a potential 100-1000X increase in network traffic in the future."

http://www.octoshape.com
http://www.akamai.net

100G and Beyond - @Huawei Comments at #OFC2015

What changes will we see as network transport evolves to 100G and beyond? Peter Ashwood-Smith, Technical VP of Optical Product Line at Huawei, breaks it down into a discussion of the control layer and the physical interface.  He sees 100G as the "workhorse" of optical transport for the next 3-5 years. We'll see improvements in density and the adoption of pluggable formats in 100G interfaces. Another factor for 100G is silicon photonics.

See video:  https://youtu.be/1xLUwwwhwiA


Comcast Rolls out 2 Gbps Residential Service

Comcast is launching a symmetrical, 2 Gbps residential broadband service starting next month in the Atlanta metro area.


The Comcast Gigabit Pro service will be delivered via fiber-to-the-home. Comcast says the service will be available to any home within close proximity of its fiber network and will require installation of professional-grade CPE.

Comcast also said it plans to expand the 2 Gbps service to other markets across the country.  The goal is to have the service available to 18 million homes by the end of the year.

"Our approach is to offer the most comprehensive rollout of multi-gigabit service to the most homes as quickly as possible, not just to certain neighborhoods," said Doug Guthrie, SVP of Comcast Cable’s South Region.

To date, Comcast has built out more than 145,000 route miles of fiber across its service area.

http://corporate.comcast.com/news-information/news-feed/comcast-begins-rollout-of-residential-2-gig-service-in-atlanta-metro-area

Dell Intros X-Series Smart-Managed 1GbE and 10GbE Switches for SMBs

Dell introduced a new X-Series family of smart-managed 1GbE and 10GbE switches for small and medium-sized businesses. The new products provide workflow management, traffic visibility and real-time control to optimize cloud and onsite network applications.  Dell is also expanding its N-Series family of fully managed 1GbE switches with Layer 2/3 capability designed for smaller networks. The switches offer a comprehensive enterprise-class Layer 2/3 feature set, a common command-line interface (CLI) for consistent management, and standard 10GbE SFP+ transceivers and cables for stacking, providing up to 200 1GbE ports in a 4-unit stack.

http://www.dell.com/learn/us/en/uscorp1/press-releases/2015-04-06-dell-networking-smb

Thursday, April 2, 2015

ONOS Blackbird Focuses on SDN Control Plane Performance and Scale

A new version of the Open Network Operating System (ONOS), named Blackbird, has been released (the first version of ONOS was out in December 2014).

ONOS provides a highly available, scalable SDN control plane with open northbound and southbound APIs and paradigms for a diversity of management, control, and service applications across mission-critical networks. It is architected as a distributed but logically centralized control plane to achieve high performance, scale-out and high availability. ONOS' high availability characteristics include full recovery from events such as switch and link failure, node failure, entire ONOS cluster failure, single node cluster failure, cluster partitioning and device-node communication failure.

The ONOS Blackbird release defines the following set of metrics to effectively measure performance and other carrier-grade attributes of the SDN control plane.

Performance Metrics
Topology – link change latency
Topology – switch change latency
Flow operations throughput
Intent (Northbound) install latency
Intent (Northbound) withdraw latency
Intent (Northbound) reroute latency
Intent (Northbound) throughput

Scalability
Ability to scale control plane by adding capacity

High Availability
Uninterrupted operation in the wake of failures, maintenance and upgrades

ONOS aims to achieve extremely high target numbers of 1,000,000 flow operations per second and less than 100 ms (and ideally under 10 ms) latency. Most of ONOS Blackbird release's measurements meet these targets; the ones that do not will continue to be optimized in the coming releases and in conjunction with use case and deployment requirements.

The Blackbird release also addresses the challenge of effectively determining "the carrier-grade quotient" of the SDN control plane. Metrics currently used to measure performance, including simplistic ones such as "Cbench," do not provide a complete or accurate view of SDN control plane capabilities, thereby highlighting the need for a more indicative set of measurements.

"Achieving the high availability required to deliver network resilience at the necessary scale without compromising performance as you add controller instances has been an elusive goal for open source SDN solutions and a barrier to adoption—until now," said Guru Parulkar, Executive Director for ON.Lab.  0"Architected as a distributed system, ONOS is the first open source SDN solution to achieve linear scale-out while maintaining high performance and availability. As the size of your network grows, ONOS instances can be added to scale the SDN control plane, and seamlessly deliver the needed throughput. This ability not only breaks down barriers to real-world deployment but also future-proofs your network."

A comprehensive explanation of these metrics and Blackbird performance assessment using these metrics is published on the ONOS wiki at http://bit.ly/1GhIr3X

http://onosproject.org/


Video: Guru Parulkar on the Strategic Vision of ONOS

The strategic vision of ONOS is simple - to build a scalable, highly available, high-performance network operating system for Service Provider networks, says Guru Parulkar, Executive Director of ON.Lab. Here he gives an update on how this open community effort fits in with the ambitions of network operators.

See video:  https://youtu.be/ilKSkCK91U8

 

Video: Evolving Transport Networks for Clouds

TeliaSonera International Carrier is already seeing a major impact from cloud traffic, says Mattias Fridström, Vice President, Technology.  Much of it is driven by the enormous flows between the mega data centers of the big cloud providers.

Topics in this interview include:

1:04 - Does traffic from cloud services tend to be aggregated in big hubs?
1:44 - Has the market for 100G transport developed as expected?
2:39 - Who is buying 100G interfaces today?
3:13 - Do you provide wavelength or dark fiber as well?
4:03 - Hot trends at #OFC2015

See video:  https://youtu.be/lGfowCF6gvI

FCC to Consider Spectrum Sharing in 3.5 GHz Band

The FCC's upcoming open meeting on April 17 will consider ways to leverage spectrum sharing technologies to make 150 megahertz of contiguous spectrum available in the 3550-3700 MHz band for wireless broadband and other uses, as part of a Citizens Broadband Radio Service.

http://www.fcc.gov/document/fcc-announces-tentative-agenda-april-open-meeting-2

Dell'Oro: Strong Dollar to Have Big Impact on Telecom CAPEX

Telecom operators around the world invested heavily in their fiber and LTE networks during 2014, resulting in a fourth consecutive year of Capex growth as increases in mobile-related spending offset declining wireline investments. However, the strength of the dollar could wipe out $20 billion in telecom Capex in 2015, according to a newly published Carrier Economics report by Dell’Oro Group.

“We have not made any major changes to our constant currency Capex projections for 2015 and continue to expect the market will grow at a low-single-digit pace in 2015 driven primarily by China and Europe,” said Stefan Pongratz, Dell’Oro Group Carrier Analyst. “But in U.S. Dollar terms, assuming rates remain at current levels, the strengthening U.S. Dollar will unequivocally impact Telecom Capex, and we have revised our 2015 Capex in U.S. Dollar terms downward rather significantly to adjust for currency fluctuations,” continued Pongratz.

http://www.DellOro.com

Telstra to Sell IBM's SoftLayer Infrastructure-as-a-Service

Telstra will sell IBM's SoftLayer Infrastructure-as-a-Service (IaaS) platform. Under the agreement, Telstra customers will have access to SoftLayer’s highly secure and agile cloud infrastructure.

IBM recently opened new data centres in Melbourne and Sydney.

“Telstra customers will be able to access IBM’s hourly and monthly compute services on the SoftLayer platform, a network of virtual data centres and global points-of-presence (PoPs), all of which are increasingly important as enterprises look to run their applications on the cloud. SoftLayer is a platform that lets businesses quickly migrate, build, test, and deploy their applications and innovations,” said Erez Yarkoni, Telstra’s Chief Information Officer and Executive Director of Cloud.

http://www.telstra.com.au/aboutus/media/media-releases/ibm-and-telstra-join-forces-to-offer-softlayer-cloud-platform.xml

CyrusOne Buys New Data Center in Austin

CyrusOne will purchase an additional powered shell in Austin’s Met Center, creating what is expected to be its largest facility in Austin at 172,000 total square feet of shell and offering 120,000 colocation square feet (CSF), with up to 12 megawatts of power, and over 25,000 square feet of class A office space at full build.

CyrusOne’s new Austin III data center will use the company's "Massively Modular" design engineering approach to optimize materials sourcing and enable delivery of industry-leading energy optimization and just-in-time data hall inventory to meet customer demand. The first phase of construction includes up to 60,000 square feet of CSF and 6 megawatts of critical load.

“Based on current and projected customer demand, it was essential to expand in this market. We’ve been extremely successful and have seen a tremendous amount of growth in Austin,” said John Hatem, senior vice president, data center design and construction, CyrusOne. “Once this facility is complete, enterprise-level companies will be able to utilize our Massively Modular design capabilities to scale rapidly and efficiently while taking advantage of CyrusOne’s exceptional uptime delivered by redundant power, cooling, and connectivity infrastructure.”

http://www.cyrusone.com/



Wednesday, April 1, 2015

Ethernet Roadmap Envisions Terabit Interfaces in 2020s

The Ethernet Alliance has published a 2015 Ethernet Roadmap that outlines the ongoing development and evolution of Ethernet through the end of the decade, while envisioning terabit speed interfaces scaling up to 10 Tbps rates by 2030.

Four new speeds – 2.5 Gigabit per second (Gb/s); 5 Gb/s; 25 Gb/s; and 400 Gb/s Ethernet – are currently in development by the IEEE, while the industry is also considering 50Gb/s and 200 Gb/s Ethernet.

The 2015 Ethernet Roadmap looks at Ethernet’s accelerating evolution and expansion in four key areas: consumer and residential; enterprise and campus; hyperscale data centers; and service providers. The roadmap provides visibility into the underlying technologies, including electrical and optical infrastructures. It further highlights the rate progressions in the different areas, while emphasizing the changing dynamics and challenges within the Ethernet ecosystem, which includes support for wireless technologies such as 802.11ac.

“Ethernet is constantly evolving and diversifying into new markets and application spaces. Such expansion is successful when there is greater visibility about a technology’s future. The 2015 Ethernet Roadmap will allow the industry to peer into Ethernet’s future,” said Scott Kipp, president, Ethernet Alliance; and principal technologist, Brocade. “The roadmap, developed by our members, will help users understand where Ethernet is going. Such insight will heighten confidence to the market that Ethernet has a clear path forward and help further drive adoption of Ethernet solutions.”

http://www.ethernetalliance.org/roadmap