Thursday, December 31, 2015

History Channel: History of Transatlantic Cable

Sunday, December 27, 2015

Comcast Installs first DOCSIS 3.1 Modem

Comcast announced an important step toward delivering residential gigabit Internet speeds over its existing cable plant by installing what it claims is the world’s first DOCSIS 3.1 modem on a customer-facing network.

The deployment last month at a home in the Philadelphia area used standard Comcast cable connections, along with a new modem and a software upgrade to the device that serves that neighborhood.

Comcast said it plans to introduce a gigabit speed choice in several U.S. markets before the end of 2016.

http://corporate.comcast.com/comcast-voices/worlds-first-live-docsis-3-1-gigabit-class-modem-goes-online-in-philadelphia



  • In May 2015, Comcast unveiled its first DOCSIS 3.1 modem capable of delivering speeds greater than 1 Gbps. The Gigabit Home Gateway will be the company’s first product to integrate software that Comcast acquired in its 2014 purchase of PowerCloud. It also uses open-sourced RDK-B software, architected by Comcast with contributions from many in the RDK community, which Comcast says will help it introduce new features faster and address issues more efficiently, similar to what has been done with X1.

Acacia Communications Files for IPO

Acacia Communications, a start-up based in Maynard, MA, filed a registration statement with the SEC for an initial public offering of its shares.

The company is seeking to list its shares under the symbol ACIA on the Nasdaq Global Market.

Acacia, which was founded in 2009, develops high-speed coherent optical interconnect products, including a series of low-power coherent DSP ASICs and silicon PICs. The company has integrated these into families of optical interconnect modules with transmission speeds ranging from 40 to 400 Gbps for use in the long-haul, metro and inter-data center markets. Acacia’s coherent DSP ASICs and silicon PICs are manufactured using CMOS and CMOS-compatible processes. Using CMOS to siliconize optical interconnect technology enables Acacia to continue to integrate increasing functionality into its products, benefit from the higher yields and reliability associated with CMOS, and capitalize on regular improvements in CMOS performance, density and cost.

In its S-1 statement, Acacia said it had 20 network equipment manufacturers as customers for the year ending September 30, 2015. Acacia's revenue for 2014 was $146.2 million, an 88.3% increase from $77.7 million of revenue in 2013. Its revenue for the nine months ended September 30, 2015 was $170.5 million, a 62.0% increase from $105.2 million of revenue in the nine months ended September 30, 2014. In 2014, the company generated net income of $13.5 million and adjusted EBITDA of $20.4 million, compared to a net loss of $1.2 million and adjusted EBITDA of $3.6 million in 2013. For the nine months ended September 30, 2015, Acacia generated net income of $17.9 million and adjusted EBITDA of $31.7 million, compared to net income of $11.0 million and adjusted EBITDA of $16.0 million for the nine months ended September 30, 2014.

http://www.sec.gov/Archives/edgar/data/1651235/000119312515409344/d46988ds1.htm
http://acacia-inc.com


Wednesday, December 23, 2015

Nutanix Files for IPO

Nutanix has filed a registration statement with the U.S. Securities and Exchange Commission (SEC) for a proposed initial public offering of its Class A common stock.

Nutanix is seeking to list its Class A common stock on The NASDAQ Global Select Market under the ticker symbol "NTNX."

Goldman, Sachs & Co. and Morgan Stanley & Co. LLC will act as lead book-running managers, and J.P. Morgan Securities LLC and Credit Suisse Securities (USA) LLC will act as book-running managers for the proposed offering. Robert W. Baird & Co. Incorporated; Needham & Company LLC; Oppenheimer & Co. Inc.; Pacific Crest Securities, a division of KeyBanc Capital Markets Inc.; Piper Jaffray & Co.; Raymond James; Stifel; and William Blair & Company, L.L.C. will act as co-managers.

http://www.nutanix.com

Nutanix Raises $140 Million for Converged Data Center Solutions

Nutanix, a start-up based in San Jose, California, announced a $140 million Series E funding round at over a $2 billion valuation.

Nutanix offers a Virtual Computing Platform, which integrates compute and storage into a single solution for the data center. Its web-scale software runs on all popular virtualization hypervisors, including VMware vSphere, Microsoft Hyper-V and open source KVM, and is uniquely able to span multiple hypervisors in the same environment.

The latest round brings Nutanix's total funding to $312 million.

Nutanix reports annualized bookings exceeding a run rate of $200 million.  The company has over 800 customers, including 29 customers who have purchased more than $1 million in aggregate products and services.  Nutanix's growing list of customers includes Airbus, China Merchant Bank, Honda, ConocoPhillips, Total SA, Toyota, US Navy and Yahoo! Japan.

"The convergence of servers, storage and networking in the datacenter has created one of the largest business opportunities in enterprise technology, and Nutanix is at the epicenter of this transformation," said Dheeraj Pandey, co-founder and CEO, Nutanix. "We are proud of the progress we have made, and are confident in capitalizing on the enormous opportunity that lies ahead of us. We recognize the importance of building relationships with leading public market investors, and are honored to welcome them as partners in driving the long-term success of our Company."

http://www.nutanix.com


  • In June 2014, Nutanix announced an OEM deal under which Dell will offer a new family of converged infrastructure appliances based on Nutanix web-scale technology. The companies said the combination of Nutanix’s software running on Dell’s servers delivers a flexible, scale-out platform that brings IT simplicity to modern data centers.
    Specifically, the new Dell XC Series of Web-scale Converged Appliances will be built with Nutanix software running on Dell PowerEdge servers, and will be available in multiple variants to meet a wide range of price and performance options. The appliances will deliver high-performance converged infrastructure ideal for powering a broad spectrum of popular enterprise use cases, including virtual desktop infrastructure (VDI), virtualized business applications, multi-hypervisor environments and more.

Tuesday, December 22, 2015

Blueprint: One Box or Two? New Options in “Hyper” Storage

by Stefan Bernbo, founder and CEO of Compuverde

Cisco’s latest Visual Networking Index: Global Mobile Data Traffic Forecast offers just one example of what enterprises are facing on the storage front. The report predicts that global mobile data traffic will grow at a compound annual growth rate of 57 percent from 2014 to 2019. That’s a ten-fold increase in just five years.

How will organizations scale to meet these massive new storage demands? Hardware costs make rapid scaling prohibitive for most businesses, yet a solution is needed quickly. Enterprises today need flexible, scalable storage approaches if they hope to keep up with rising data demands.

Such flexibility can be found in software-defined storage (SDS). Because the storage and compute needs of organizations are varied, two SDS options have arisen: hyperconverged and hyperscale. Each approach has its distinctive features and benefits, which are discussed below – and which resellers should be versed in.

Storage Then and Now

Before next-gen storage was “hyper,” it was merely “converged.” Converged storage combines storage and computing hardware to speed delivery and minimize the physical space required in virtualized and cloud-based environments. This was an improvement over the traditional storage approach, where storage and compute functions were housed in separate hardware. The goal was to improve data storage and retrieval and to speed the delivery of applications to and from clients.

Converged storage is not centrally managed and does not run on hypervisors; the storage is attached directly to the physical servers. It uses a hardware-based approach composed of discrete components, each of which can be used on its own for its original purpose in a “building block” model.

In contrast, hyperconverged storage infrastructure is software-defined. All components are converged at the software level and cannot be separated out. This model is centrally managed and virtual machine-based. The storage controller and array are deployed on the same server, and compute and storage are scaled together. Each node has compute and storage capabilities. Data can be stored locally or on another server, depending on how often that data is needed.

Flexibility and agility are increased, and that is exactly what enterprise IT admins need to manage today’s data demands effectively and efficiently. Hyperconverged storage also promotes cost savings: organizations can use commodity servers, since software-defined storage works by taking features typically found in hardware and moving them to the software layer. Organizations that need “1:1” scaling of compute and storage, such as those deploying VDI environments, are the natural fit for the hyperconverged approach. The hyperconverged model is storage’s version of a Swiss Army knife; it is useful in many business scenarios. The end result is one building block that works exactly the same everywhere; it’s just a question of how many building blocks a data center needs.

Start Small, Scale as Needed

The hyperconverged approach seems like just what the storage doctor ordered, but hyperscale is also worth exploring. Hyperscale computing is a distributed computing environment in which the storage controller and array are separated. As its name implies, hyperscale is the ability of an architecture to scale quickly as greater demands are made on the system. This kind of scalability is required in order to build big data or cloud systems; it’s what Internet giants like Amazon and Google use to meet their vast storage demands. However, software-defined storage now enables many enterprises to enjoy the benefits of hyperscale.

Lower total cost of ownership is a major benefit. Commodity off-the-shelf (COTS) servers are typically used in the hyperscale approach, and a data center can have millions of virtual servers without the added expense that this many physical servers would require. Data center managers want to get rid of refrigerator-sized disk shelves running NAS and SAN solutions, which are difficult to scale and very expensive. With hyper solutions, it is easy to start small and scale up as needed. Using standard servers in a hyper setup creates a flattened architecture that requires less, and less expensive, hardware. Hyperscale enables organizations to buy commodity hardware; hyperconverged goes one step further by running both elements, compute and storage, on the same commodity hardware. It becomes a question of how many servers are necessary.
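The scaling difference between the two models can be sketched with a toy capacity model. This is purely illustrative: the per-node core and terabyte figures below are assumptions for the example, not vendor specifications.

```python
class Capacity:
    """Aggregate compute and storage available to a cluster."""
    def __init__(self, compute_cores: int, storage_tb: int):
        self.compute_cores = compute_cores
        self.storage_tb = storage_tb

def hyperconverged(n_nodes: int) -> Capacity:
    """One building block: every node added brings compute AND storage together."""
    return Capacity(compute_cores=32 * n_nodes, storage_tb=20 * n_nodes)

def hyperscale(compute_nodes: int, storage_nodes: int) -> Capacity:
    """Two sets of boxes: compute and storage scale independently."""
    return Capacity(compute_cores=32 * compute_nodes, storage_tb=40 * storage_nodes)

# A storage-heavy workload: hyperscale reaches the same storage capacity
# while buying far fewer compute cores.
hc = hyperconverged(10)   # 320 cores, 200 TB
hs = hyperscale(4, 5)     # 128 cores, 200 TB
print(hc.storage_tb, hs.storage_tb)
```

The sketch shows why workloads whose storage and compute needs grow at different rates tend toward hyperscale, while “1:1” workloads fit the hyperconverged building block.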

The Best of Both Worlds

Here’s an easy way to look at the two approaches. Hyperconverged storage is like having one box with everything in it; hyperscale has two sets of boxes, one set of storage boxes and one set of compute boxes. It just depends what the architect wants to do, according to the needs of the business. A software-defined storage solution could take over all the hardware and turn it into a type of appliance, or it could be run as a VM, which would make it a hyperconverged configuration.

Perhaps the best news of all, as enterprises scramble to reconfigure current storage architectures, is that data center architects can employ a combination of hyperconverged and hyperscale infrastructures to meet their needs. Enterprises will appreciate the flexibility of these software-defined solutions, as storage needs are sure to change. Savvy resellers will be ready to explain how having this kind of agile infrastructure will help enterprises to future-proof their storage and save money at the same time.

About the Author

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise scale data storage solutions designed to be cost effective for storing huge data sets. From 2004 to 2010 Stefan worked within this field for Storegate, the wide-reaching Internet based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan has worked with system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.

Monday, December 21, 2015

Blueprint: 2016 and Beyond

by Cam Cullen, Vice President of Global Marketing at Procera Networks

I recently attended the Light Reading Vision Executive 2020 Summit in Dublin, and the event was a great peek into the thought process of some of the largest network operators in the world. Light Reading and Heavy Reading presented a number of different perspectives on what the future holds for telecom operators, some of which were quite compelling. One report they presented covered the New IP Agency interoperability testing, a first-of-its-kind Network Functions Virtualization (NFV) test that brought 12 vendors together to show that NFV solutions from various vendors could work together. This test was a big step forward for NFV, because it showed that the industry has stepped beyond mere virtualization and is moving toward true NFV.

The event inspired me to put down on the blog my own thoughts on the trends we will see in 2016 and beyond. So… here we go…

Virtualization and NFV get some big wins and deployments: Most operators have already implemented a few projects using virtualization, often for their internal IT or control-plane deployments. More and more operators are making vendor decisions based upon virtualization products, and I fully expect to see a few significant data plane deployments in 2016.

Orchestration gets real: One point made at the Vision 2020 conference that is often glossed over is that orchestration is really about automation. There are a lot of vendors that have designed their solutions to be very friendly to APIs and automation, and there have already been some ETSI POCs that demonstrate this in real-world scenarios. The New IP Agency intends to run an orchestration interoperability test in 2016, and that test will shine a light on the real state of NFV orchestration; I also expect orchestration to reach beyond NFV in 2016.

4K video begins to appear in the wild: I have a 4K TV in my house, and the picture is stunning, even with simple upscaling of existing HD video. It makes recorded TV shows look like they are almost live, which can be a bit disconcerting at times because the picture is so clear. Interestingly, the easiest way to get 4K streams right now is directly on your smart TV, since most streaming devices do not yet support 4K, but that will change in 2016.

Video bandwidth continues to increase: It is a bit of a no-brainer to say that video bandwidth will increase, but the 4K prediction above is the biggest factor that will accelerate video bandwidth consumption. Netflix recommends 3 Mbps for SD, 5 Mbps for HD, and 25 Mbps for UHD, so users may go from 3 or 5 Mbps to 25 Mbps for some UHD-quality content. With video already consuming 60-70% of downstream bandwidth on our customers' networks, it will only get worse.
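A quick back-of-the-envelope conversion shows what those recommended bitrates mean in raw data terms. This is a simple sketch using the Netflix recommendations quoted above (and decimal gigabytes):

```python
def gb_per_hour(mbps: float) -> float:
    """Convert a sustained streaming bitrate in Mbps to data consumed per hour in GB.

    bits/s * seconds/hour, divided by 8 bits/byte and 1e9 bytes/GB.
    """
    return mbps * 1e6 * 3600 / 8 / 1e9

for label, rate in [("SD", 3), ("HD", 5), ("UHD", 25)]:
    print(f"{label} at {rate} Mbps: {gb_per_hour(rate):.2f} GB/hour")
```

At 25 Mbps a single UHD stream consumes about 11 GB per viewing hour, roughly eight times an SD stream, which is why a shift to 4K moves the needle on aggregate network load so quickly.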

A new game-changing app will appear: Every year a new app appears that has the potential to change consumer consumption patterns. In 2015, Popcorn Time and Periscope were notable new additions to the landscape (fortunately for Hollywood, Popcorn Time hasn’t taken off yet). Periscope is interesting because it can turn every device into a live video stream and has the support of Twitter; it won the App Store “Best of the Year” in recognition of this potential. In August, Periscope claimed 10M users, and I expect that number to keep growing. What will the new app be in 2016? The beauty of it is that we don’t know, and that uncertainty is actually awesome and a testament to the creativity enabled by the Internet.

Streaming-only cord cutters get enabled: Amazon has started an aggregation offering for streaming services that includes Showtime, Starz, and other services (as a start). CBS announced an online-only Star Trek series (although it will debut in January 2017, not 2016). The biggest advantage that Pay-TV has today is bundling convenience, and Amazon’s offering is the first of many such offers I expect to see. Cord cutters today will end up paying more if they want to watch a line-up similar to cable services, and they have to manage a lot of different apps. Aggregation offerings may change that equation going forward.

2016 will be an interesting year for consumer broadband, and I look forward to seeing “What’s Next” in 2016.

About the Author

Mr. Cullen is the Vice President of Global Marketing at Procera Networks. Mr. Cullen is responsible for Procera's overall global marketing and product management, and is an active evangelist for Procera's solutions and general market trends, as well as an active blogger for Procera. He joined Procera as VP of Product Management to execute on product strategy and to expand the company's product offering. Prior to Procera, Mr. Cullen held senior Product Management and Marketing roles at Allot and Quarry Technologies/Reef Point Systems, where he was VP of Product Management and Marketing, and held various roles in business development, marketing, and sales at 3Com. Mr. Cullen was a captain in the US Air Force, where he worked at the National Security Agency and the Air Force Information Warfare Center, and holds a Bachelor of Science in Electrical Engineering from the University of Alabama.


Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Oracle Acquires StackEngine for Container Management

Oracle has acquired StackEngine, a start-up specializing in container operations management.  Financial terms were not disclosed.

StackEngine, which is based in Austin, offers software to manage and automate Docker applications, giving organizations the power to compose, deploy, and automate resilient container-native applications. Its flagship product, Container Application Center, is an end-to-end container application management solution for developers, DevOps and IT operations teams that brings users through the entire container application lifecycle, from development to deployment.

All StackEngine employees will be joining Oracle as part of Oracle Public Cloud.

http://www.stackengine.com/
https://www.oracle.com/corporate/acquisitions/stackengine/index.html

Pivotal Acquires CloudCredo for Cloud Foundry Expertise

Pivotal has acquired CloudCredo, a privately-held software developer based in London, along with CloudCredo's subsidiary stayUp, a log analysis technology company for Cloud Foundry.

CloudCredo has a highly-regarded team of Cloud Foundry experts. Pivotal said the acquisition will better enable enterprise adoption of Pivotal Cloud Foundry.

Pivotal is a spin-out and joint venture of EMC Corporation and its subsidiary VMware. The Pivotal Cloud Native Platform offers integrated application framework, runtime and infrastructure automation capabilities.

“CloudCredo enhances Pivotal’s powerful next-generation portfolio of products and services by bringing extensive knowledge of deploying, running and customizing Cloud Foundry for some of the world’s largest and most admired brands,” said Rob Mee, CEO of Pivotal. “With this expertise, we can better help our customers transform their enterprises by embracing and leveraging Pivotal’s Cloud Native platform more quickly.“

“When we started CloudCredo, we were profoundly influenced by The Pivotal Way. It shaped our approach to modern software development, our culture promoting openness and doing things the right way, and our passion for delivering differentiated value to our customers,” said Colin Humphreys, CloudCredo co-founder and CEO. “Joining Pivotal allows us to operate at a global scale, overnight, and help the world's largest and most admired brands use software to transform their businesses and make an impact on the world.”

http://pivotal.io/platform

NetApp to Acquire SolidFire for All-Flash Data Center Arrays

NetApp agreed to acquire SolidFire for $870 million in cash.

SolidFire specializes in all-flash storage systems for next-generation data centers.

NetApp said the SolidFire acquisition extends its portfolio to include all-flash offerings that address each of the three largest All-Flash Array market segments. For the traditional enterprise infrastructure buyer, the NetApp All Flash FAS (AFF) product line delivers enterprise-grade features across flash, disk and cloud resources. For the application owner, the NetApp EF Series product line offers world-class SPC-1 benchmarks with consistent low-latency performance and proven 6x9’s reliability. For the next-generation infrastructure buyer, SolidFire’s distributed, self-healing, webscale architecture delivers seamless scalability, white box economics, and radically simple management. This enables customers to accelerate third platform use cases and webscale economics. SolidFire is an active leader in the cloud community with extensive integrated storage management capabilities with OpenStack, VMware, and other cloud frameworks.

“This acquisition will benefit current and future customers looking to gain the benefits of webscale cloud providers for their own data centers,” said George Kurian, chief executive officer of NetApp. “SolidFire combines the performance and economics of all-flash storage with a webscale architecture that radically simplifies data center operations and enables rapid deployments of new applications. We look forward to extending NetApp’s flash leadership with the SolidFire team, products and partner ecosystem, and to accelerating flash adoption through NetApp’s large partner and customer base.”

http://www.netapp.com/
http://www.solidfire.com

SolidFire Raises $82 Million for Flash Storage

SolidFire, a start-up based in Boulder, Colorado, closed $82 million in Series D funding for its all-flash storage systems.

SolidFire said its revenue grew over 700 percent in 2013 and has increased over 50 percent quarter over quarter in 2014. Its customer base is split approximately 50:50 between service providers and enterprises.

SolidFire also announced the expansion of its flagship SF Series product line, unveiling two new storage nodes that represent the third generation of SolidFire hardware to be released since the platform became generally available in November 2012.

The latest investments bring the company's total funding to $150 million. New investor Greenspring Associates led the round along with a major sovereign wealth fund, with participation from current investors NEA, Novak Biddle, Samsung Ventures and Valhalla Partners.

Ericsson and Apple Settle Patent Dispute

Ericsson and Apple have signed a global patent license agreement, ending a long-running legal dispute in multiple jurisdictions and a case before the U.S. International Trade Commission. As part of the seven-year agreement, Apple will make an initial payment to Ericsson and, thereafter, will pay ongoing royalties. Financial terms were not disclosed.

The deal includes a cross license that covers patents relating to both companies' standard-essential patents (including the GSM, UMTS and LTE cellular standards), and grants certain other patent rights. In addition, the agreement includes releases that resolve all pending patent-infringement litigation between the companies.

Ericsson noted that the positive effects of the settlement, together with its ongoing IPR business with all other licensees, will bring its estimated IPR revenues to SEK 13-14 billion.

"We are pleased with this new agreement with Apple, which clears the way for both companies to continue to focus on bringing new technology to the global market, and opens up for more joint business opportunities in the future," said Kasim Alfalahi, Chief Intellectual Property Officer at Ericsson.

http://www.ericsson.com/news/1974964

TeliaSonera Sells its Stake in Nepal's Ncell

TeliaSonera will sell its 60.4 percent ownership in the Nepalese operator Ncell to Axiata, one of Asia’s largest telecommunication groups, for US$1,030 million on a cash and debt free basis. At the same time, TeliaSonera will dissolve its economic interests in the 20 percent local ownership and will receive approximately US$48 million. The transactions are conditional on each other.

Axiata has more than 260 million customers and 25,000 employees. Axiata said Ncell will complement its portfolio of Asian telecommunications assets, which includes operations in Malaysia, Indonesia, Sri Lanka, Bangladesh, Cambodia, India, Singapore and Pakistan. Axiata, which is listed on the Malaysian stock exchange, is a reputable company with a strong focus and expertise in South Asia and is also a long-term investor contributing to development and advancements of the countries it operates in.

“In September we announced our ambition to reduce our presence in our seven Eurasian markets and focus on our operations in the Nordics and Baltics, within the strategy of creating the new TeliaSonera. Today, I am very pleased to announce a first step and proof point in this reshaping of TeliaSonera. I am also glad to see Axiata as a new owner. That gives me comfort that our dedicated employees are in good hands when taking Ncell to the next level,” says Johan Dennelind, TeliaSonera’s President and CEO.
 
http://www.teliasonera.com/en/newsroom/press-releases/2015/12/teliasonera-divests-its-holding-in-ncell/
http://www.axiata.com/

MTS and Ericsson to Showcase 5G at 2018 World Cup in Russia

Russia's Mobile TeleSystems (MTS) and Ericsson signed an MOU on 5G research and deployment in Russia, including spectrum studies of the next generation network and the building of a test system. The project will support a dialog with government regulators concerning the bands being targeted for 5G and requirements for next generation systems.

The companies said their partnership will also lead to implementation of 5G-related technologies in MTS' network, such as Ericsson Lean Carrier and the use of unlicensed spectrum, and other technologies that use the design concepts Ericsson is developing for 5G.

http://www.ericsson.com/news/1975090


Russia's MTS Picks Ericsson for LTE


Russia's Mobile TeleSystems (MTS) selected Ericsson to deploy its LTE network in four regions covering more than half of Russia, thus becoming MTS's main vendor. Financial terms were not disclosed. Under the three-year agreement and starting in Q2, 2013, Ericsson will roll out LTE in the Siberian, Ural, Volga, and Southern Federal Districts of Russia. In the first stage of the project, Ericsson will supply no less than 10,000 new-generation products...

Friday, December 18, 2015

AT&T Transfers Managed App and Hosting Business to IBM

AT&T agreed to transition its managed application and managed hosting services unit to IBM. IBM will then align these managed service capabilities with the IBM Cloud portfolio. IBM will also acquire equipment and access to floor space in AT&T data centers currently supporting the applications and managed hosting operations.

Financial terms were not disclosed.

AT&T will continue to provide other managed networking services, including security, cloud networking and mobility offerings.

The companies said the deal builds on their long-term partnership.

“Today’s announcement represents an expansion of our strategic relationship with AT&T and continuing collaboration to deliver new innovative solutions,” said Philip Guido, IBM General Manager of Global Technology Services for North America. “Working with AT&T, we will deliver a robust set of IBM Cloud and managed services that can continuously evolve to meet clients’ business objectives.”

http://www-03.ibm.com/press/us/en/pressrelease/48500.wss
http://about.att.com/story/att_and_ibm_expand_strategic_relationship.html

AT&T and IBM Team Up on Mobile Cloud Security


AT&T and IBM announced a partnership focused on delivering a scalable mobile cloud solution to help protect corporate data and apps. 

The reference architecture includes:
  • IBM MobileFirst Protect: helps organizations manage and control mobile devices, apps and documents.
  • AT&T NetBond: provides a highly secure, scalable network connection to IBM's Cloud infrastructure services, SoftLayer.
  • IBM Cloud: the SoftLayer infrastructure secures public and private clouds for applications and data storage.
  • AT&T Work Platform: enables separate billing of business and personal charges for voice, messaging and data use on an employee's personal handset.
"Balancing employees' need for convenience with security has become a challenge for CISOs and CIOs across the world," said Caleb Barlow, vice president, IBM Security. "To help protect organizations, employees and data, IBM Security and AT&T are delivering a tested and easy to deploy set of complementary tools. We're giving enterprise mobile device users stable, private access to data and apps in the cloud."

Juniper Discloses Unauthorized Code in ScreenOS

Juniper Networks disclosed the discovery of unauthorized code in its ScreenOS that could allow a knowledgeable attacker to gain administrative access to NetScreen devices and to decrypt VPN connections.

It is not known who inserted the code into the OS nor how long it has been there.

All NetScreen devices using ScreenOS 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20 are affected by these issues and require patching.
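For operators auditing a fleet of devices, the affected ranges are simple to encode. The sketch below is illustrative only; the `is_affected` helper and its version-string parsing are assumptions for the example, not a Juniper-provided tool.

```python
import re

# Affected ranges per the advisory quoted above:
# ScreenOS 6.2.0r15 through 6.2.0r18, and 6.3.0r12 through 6.3.0r20.
AFFECTED = {
    (6, 2, 0): range(15, 19),
    (6, 3, 0): range(12, 21),
}

def is_affected(version: str) -> bool:
    """Check whether a ScreenOS version string like '6.3.0r14' is in an affected range."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)r(\d+)", version)
    if not m:
        raise ValueError(f"unrecognized version string: {version}")
    major, minor, patch, rev = map(int, m.groups())
    return rev in AFFECTED.get((major, minor, patch), range(0))

print(is_affected("6.3.0r14"))  # True  -- inside 6.3.0r12-r20
print(is_affected("6.2.0r14"))  # False -- one revision before the 6.2.0 range
```

Running such a check across an inventory export is one quick way to prioritize which NetScreen devices need the patched releases first.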

The urgent security bulletin urges customers to update their systems and apply the patched releases with the highest priority.

http://forums.juniper.net/t5/Security-Incident-Response/Important-Announcement-about-ScreenOS/ba-p/285554

Box Extends its Strategic Sales Partnership with IBM

Box announced a new agreement to expand its strategic partnership with IBM.

The companies will undertake stronger go-to-market and sales commitments. With a new term that has the potential to last for a decade or more, the partnership underscores the companies’ long-term commitment to delivering modern enterprise content management and collaboration solutions.

“IBM and Box are committed to delivering world-class solutions that transform how businesses work,” said Aaron Levie, co-founder and CEO of Box. “We are thrilled to extend our partnership with IBM, further expanding our product capabilities and creating new go-to-market channels.”

Additionally, the companies announced today the general availability of two new product integrations for IBM Case Manager and IBM Datacap. The availability of these new solutions complements the previously announced product integrations with IBM Content Navigator and IBM StoredIQ.

http://www.box.com

Linux Foundation Backs Blockchain for Transaction Verification

The Linux Foundation announced a new collaborative effort to advance blockchain technology.

Blockchain is a digital technology for recording and verifying transactions. The distributed ledger is a permanent, secure tool that makes it easier to create cost-efficient business networks without requiring a centralized point of control. With distributed ledgers, virtually anything of value can be tracked and traded. The application of this emerging technology is showing great promise in the enterprise. For example, it allows securities to be settled in minutes instead of days. It can be used to help companies manage the flow of goods and related payments or enable manufacturers to share production logs with OEMs and regulators to reduce product recalls.
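The tamper-evidence that makes a distributed ledger a “permanent, secure tool” comes from hash-chaining: each block records the hash of its predecessor, so altering history breaks the chain. Here is a minimal sketch of that idea (not the API of any actual ledger framework):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    """The ledger is valid only if every block still points at its predecessor's hash."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"from": "A", "to": "B", "amount": 10}])
append_block(chain, [{"from": "B", "to": "C", "amount": 4}])
print(verify(chain))  # True
chain[0]["transactions"][0]["amount"] = 999  # tamper with recorded history
print(verify(chain))  # False
```

A real distributed ledger adds consensus across many parties on top of this structure, which is what removes the need for a centralized point of control.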

The project will develop an enterprise grade, open source distributed ledger framework and free developers to focus on building robust, industry-specific applications, platforms and hardware systems to support business transactions.

Early commitments to this work come from Accenture, ANZ Bank, Cisco, CLS, Credits, Deutsche Börse, Digital Asset Holdings, DTCC, Fujitsu Limited, IC3, IBM, Intel, J.P. Morgan, London Stock Exchange Group, Mitsubishi UFJ Financial Group (MUFG), R3, State Street, SWIFT, VMware and Wells Fargo.

“Distributed ledgers are poised to transform a wide range of industries from banking and shipping to the Internet of Things, among others,” said Jim Zemlin, executive director at The Linux Foundation. “As with any early-stage, highly-complex technology that demonstrates the ability to change the way we live our lives and conduct business, blockchain demands a cross-industry, open source collaboration to advance the technology for all.”

https://blockchain.linuxfoundation.org/

Spirent Debuts First 50Gb Ethernet Test System

Spirent unveiled the world's first 50 Gbps Higher Speed Ethernet test solution.

Spirent's 50GbE Boost system tests Layer 1 to Layer 3 quality and verifies error-free performance across different combinations of streams, frame lengths and rates. It delivers per-port and per-stream statistics, such as latency, frames out of sequence, and frame counts and rates, along with L1 PRBS capability and Layer 1 statistics to help debug physical link problems.

"According to Dell'Oro Group 2015 data, 50Gbps servers will represent more than 30 percent of server shipments in 2019, up from essentially zero percent today," says Abhitesh Kastuar, General Manager of Spirent's Cloud and IP business. "With virtualized applications driving server performance it makes sense that access speeds are increasing from 10 to 25 and further to 50Gbps.  I believe the rapid adoption of 25G and 50G server links will drive the move to 100GbE and beyond for the uplinks from the leaf nodes to the spine nodes."

http://www.spirent.com

Thursday, December 17, 2015

Equinix Unveils Interconnection Oriented Architecture

Equinix unveiled its Interconnection Oriented Architecture (IOA) -- a blueprint for becoming an interconnected enterprise.

Equinix said the goal of IOA is to help companies understand, deploy and benefit from interconnection.

Equinix said it developed IOA as a repeatable engagement model that both enterprises and solution providers can leverage to directly and securely connect people, locations, clouds and data.  The goal is to shift the fundamental IT delivery architecture from siloed and centralized to interconnected and distributed, and only the Equinix Interconnection Platform provides the critical building blocks to implement this architecture – a global footprint, dense cloud and service provider ecosystems, and the ability to integrate data and analytics at the edge.

Equinix operates a global footprint of more than 100 data centers, complemented by its Performance Hub solution, which combines data center elements, networking infrastructure and cloud access via the Equinix Cloud Exchange.

"Results of our recent Enterprise of the Future survey, together with the experience we've gained working with our top global enterprise and service provider customers, indicate that we are on the precipice of a massive interconnection-led reinvention of enterprise IT.  We believe an Interconnection Oriented Architecture is a valuable, repeatable strategy that will guide enterprises on their journey to becoming a truly interconnected enterprise, leveraging Equinix's distinct interconnection and colocation capabilities," said Tony Bishop, vice president, vertical marketing, Equinix.

http://www.equinix.com/ioa/

Cloud Native Computing Foundation Opens for Technical Contributions

The Cloud Native Computing Foundation, which was launched earlier this year to drive the alignment of container technologies, announced its formal open governance structure and new details about its technology stack.

Cloud native applications are container-packaged, dynamically scheduled and microservices-oriented. The Cloud Native Computing Foundation focuses on the development of open source technologies, reference architectures and a common format for cloud native applications and services. This work provides the necessary infrastructure for Internet companies and enterprises to scale their businesses.

The foundation's open governance model includes a Technical Oversight Committee (TOC) that will direct technical decisions, oversee working group projects and manage contributions to the code base. Nominations are currently open for TOC members and everyone is encouraged to get involved; see the cncf-toc mailing list for more information. There is also an End User Advisory Board and a Board of Directors to guide business decisions and ensure alignment between the technical and end user communities.

The Foundation said it is looking at open source at the orchestration level, followed by the integration of hosts and services by defining APIs and standards through a code-first approach to advance the state of the art of container-packaged application infrastructure. The organization is also working with other Linux Foundation Collaborative Projects, such as the Open Container Initiative, which is establishing the industry standard for container image specification and runtime, and the Cloud Foundry Foundation.

“As cloud computing matures, we’re finding there are a variety of ways to approach application development in this environment. Among some of the world’s largest users, such as Google and Facebook, cloud native development allows unprecedented scale and efficiency,” said Jim Zemlin, executive director at The Linux Foundation. “With the level of investment from across the industry and technical contributions from some of the developer community’s best talent, the Cloud Native Computing Foundation is poised to advance the state of the art of application development at Internet scale.”

https://cncf.io/

Cloud Native Computing Foundation Seeks Alignment Among Container Technologies

A new Cloud Native Computing Foundation is being launched to drive the alignment of container technologies.  Founding organizations include AT&T, Box, Cisco, Cloud Foundry Foundation, CoreOS, Cycle Computing, Docker, eBay, Goldman Sachs, Google, Huawei, IBM, Intel, Joyent, Kismatic, Mesosphere, Red Hat, Switch SUPERNAP, Twitter, Univa, VMware and Weaveworks.

The Cloud Native Computing Foundation, which is a project managed by The Linux Foundation, aims to advance the state of the art for building cloud native applications and services, allowing developers to take full advantage of existing and to-be-developed open source technologies. Cloud native refers to applications or services that are container-packaged, dynamically scheduled and microservices-oriented.

Specifically, the Foundation will look at open source at the orchestration level, followed by the integration of hosts and services by defining APIs and standards through a code-first approach to advance the state of the art of container-packaged application infrastructure. The organization will also work with the recently announced Open Container Initiative on its container image specification. Beyond orchestration and the image specification, the Cloud Native Computing Foundation aims to assemble components to address a comprehensive set of container application infrastructure needs.

“The Cloud Native Computing Foundation will help facilitate collaboration among developers and operators on common technologies for deploying cloud native applications and services,” said Jim Zemlin, executive director at The Linux Foundation. “By bringing together the open source community’s very best talent and code in a neutral and collaborative forum, the Cloud Native Computing Foundation aims to advance the state of the art of application development at Internet scale.”

https://cncf.io

New MulteFire Alliance Promotes LTE in Unlicensed Spectrum

A new MulteFire Alliance has been formed with the backing of Nokia, Qualcomm, Ericsson and Intel to promote MulteFire, an LTE-based technology for small cells operating solely in unlicensed spectrum, such as the global 5 GHz unlicensed band.

MulteFire will utilize the robust radio link, ease of management and self-organizing characteristics of LTE and its 3GPP standard evolution to deliver enhanced performance in local area network deployments.

“By bringing the benefits of LTE technologies to unlicensed spectrum, MulteFire helps provide enhanced coverage, capacity and mobility. It can also improve the Quality of Experience and security in private network deployments” said Stephan Litjens, Vice President, Portfolio Strategy & Analytics, Mobile  Broadband, Nokia, and MulteFire Alliance board chair.  “This technology is also aimed to deliver value to existing mobile networks and private customers such as building owners. MulteFire can act as a “neutral host” with the ability to serve users from multiple operators, especially in hard to reach places such as indoor locations, venues and enterprises.”

“With MulteFire, consumers and network providers will enjoy the combination of 4G-LTE like performance with Wi-Fi-like deployment simplicity in local-area deployments,” said Ed Tiedemann, Senior Vice President, Engineering, Qualcomm Technologies, Inc., and MulteFire Alliance board member.  “Users will benefit from an enhanced connectivity experience when moving across spaces such as shopping malls and corporate offices thanks to MulteFire’s mobility features and optional integration with wide-area networks.”

http://www.MulteFire.org

3GPP Release 13 to Include Low Power Wide Area Specs

The GSMA hailed new 3GPP standards for the emerging Low Power Wide Area (LPWA) connections, a market forecast to grow to US$589 billion by 2020, or approximately 47 percent of the machine-to-machine (M2M) market, according to Machina Research.

The new standards, which will be part of 3GPP Release 13, include: Narrow Band IoT (NB-IoT), Extended Coverage GPRS (EC-GPRS) and LTE Machine Type Communication (LTE-MTC).

“This is an important step in enabling operators to deliver industry standard solutions by extending their existing high-quality managed networks, service platforms and world-class customer management capabilities,” said Alex Sinclair, Acting Director General and Chief Technology Officer, GSMA. “The Low Power Wide Area market is a high-growth area of the Internet of Things and represents a huge opportunity in its development. A common and global vision will remove fragmentation, accelerate the availability of industry standard solutions and help the market to fulfil its potential.”

In addition, the members of the recently announced NB-IoT Forum have agreed that this will now be part of the GSMA’s Mobile IoT Initiative and will focus on fostering a global ecosystem for NB-IoT technology. A key element of the forum will be the creation of ‘Open IoT Labs’ that will be available to any operator, module vendor or application provider and are designed to develop and accelerate the commercial availability of LPWA technology as well as encourage organisations to create NB-IoT enabled devices and applications for a variety of different verticals. They will also provide an opportunity for end-to-end and interoperability testing.

http://www.gsma.com/newsroom/press-release/gsma-welcomes-mobile-industry-agreement-on-technology-standards/

IDC Estimates 2 Billion Mobile Internet Users in 2016

International Data Corporation (IDC) estimates that 3.2 billion people, or 44% of the world's population, will have access to the Internet in 2016. Of this number, more than 2 billion will be using mobile devices to do so.

"Over the next five years global growth in the number of people accessing the Internet exclusively through mobile devices will grow by more than 25% per year while the amount of time we spend on them continues to grow. This change in the way we access the Internet is fueling explosive growth in mobile commerce and mobile advertising," said Scott Strawn, Program Director, Strategic Advisory Service.

IDC said the big gains in new users are coming from China, India, and Indonesia, which together will account for almost half of the gains in access globally over the course of the next five years. The combination of lower-cost devices and inexpensive wireless networks are making accessibility easier in countries with populations that could not previously afford them.

The total number of mobile Internet users is forecast to rise at a pace of 2% annually through 2020 unless significant new methods of accessing the Internet are introduced. Efforts by Google, SpaceX, and Facebook among others to make the Internet available to the remaining 4 billion people via high altitude planes, balloons, and satellites are underway. However, it remains unclear how successful these endeavors will be and when they will be operational at scale.

http://www.idc.com

Weave Enhances its Networking + Monitoring for Docker

Weaveworks, a start-up with offices in San Francisco and London, announced the 1.4 release of its networking and monitoring software for Docker deployments.

Weave Net 1.4 is a Docker networking plug-in that eliminates the requirement to run and manage an external cluster store (database). The plug-in simplifies and accelerates the deployment of Docker containers by removing the requirement to deploy, manage and maintain a centralized database in development and production. It builds on Docker’s core networking capabilities. It runs a “micro router” on each Docker host that works just like an Internet router, providing IP addresses to local containers and sharing peer-to-peer updates with other micro-routers, and learning from their updates. It also responds to DNS requests by containers looking to find other containers by name, also known as Service Discovery. Features include:


  • A simple overlay networking approach for connecting containers across Docker hosts
  • Fast, standards-based VXLAN encapsulation for the network traffic
  • An application-centric micro-network
  • Built-in service discovery

Weaveworks developed “micro router” technology to make Docker container networking fast, easy and “invisible”.

“Removing the dependency on a cluster store makes it faster, easier and simpler to build, ship and run Docker containers,” said Mathew Lodge, COO of Weaveworks. “Weave Net 1.4 embodies Weaveworks’ commitment to making simple, easy to use products that accelerate the deployment of microservices and cloud native applications on containerized infrastructure.”

http://weave.works/

Canada's Shaw Communications Diversifies with Bid for WIND Mobile

Shaw Communications agreed to acquire WIND Mobile Corp. for approximately CAD$1.6 billion.

WIND is Canada's largest non-incumbent wireless services provider, serving approximately 940,000 subscribers across Ontario, British Columbia and Alberta with 50 MHz of spectrum in each of these regions. The company reports a 47% increase in its subscriber base over the past two years, which has translated into strong growth in revenue and EBITDA. In calendar year 2015, WIND is expected to generate $485 million in revenue and $65 million in EBITDA. An upgrade to 4G LTE services is planned by 2017.

Shaw Communications, which is headquartered in Calgary, is a diversified cable, communications and media company, serving 3.2 million customers mainly in  British Columbia and Alberta, with smaller systems in Saskatchewan, Manitoba, and Northern Ontario.

“The global telecom landscape is quickly evolving towards 'mobile-first' product offerings as consumers demand ubiquitous connectivity from their service providers. The acquisition of WIND provides Shaw with a unique platform in the wireless sector which will allow us to offer a converged network solution to our customers that leverages our full portfolio of best-in-class telecom services, including fibre, cable, WiFi, and now wireless," said Chief Executive Officer, Brad Shaw.

http://newsroom.shaw.ca/

Wednesday, December 16, 2015

Ciena to Offer Hardened Version of ONOS Open-Source Software

Ciena announced plans for a commercial version of the Open Networking Operating System (ONOS). Blue Planet ONOS will extend Ciena's Blue Planet network orchestration software to enable highly scalable, flow-based control of data center networks.

Ciena said its hardened version of ONOS, to be marketed as Blue Planet ONOS, will give service providers the ability to take advantage of the cumulative software expertise of the open source community combined with the level of assurance and support that users require for commercial deployment. Any enhancements Ciena makes to ONOS for customer engagements will be fed back into the open source community. Ciena’s Blue Planet ONOS, which aligns with the Falcon release of ONOS, is projected to be available in the first calendar quarter of 2016.

“Our mission at the ONOS project has been to produce an open source network operating system that enables service providers and vendors to build real software-defined networks. We are excited that Ciena, a recognized leader and advocate for open, programmable networks, will bring this vision to market with Blue Planet ONOS and really appreciate its commitment to contributing bug fixes and enhancements back to open source. In one short year, we have gone from inception to commercial maturity – this is a validation of the power of open source innovation,” stated Guru Parulkar, Co-Founder and Executive Director of ON.Lab and Chairman of the ONOS Board.

“As with all Blue Planet software development and support, Ciena embarked on this effort with the LINUX Foundation and ON.Lab as a result of customer demand and engagements. Ciena’s support of ONOS re-affirms our commitment to open source technology and reflects our position as a leader in the global, software-based network transformation movement," said Mike Hatfield, Senior Vice President & General Manager, Blue Planet, Ciena and Director on the ONOS Project Board of Directors.

http://www.ciena.com/about/newsroom/press-releases/Cienas-Blue-Planet-Division-Collaborates-with-ONLab-to-Offer-First-Hardened-Version-of-ONOS-Open-Source-Software-for-Commercial-Use.html

ONOS Enters 5th Release for Carrier-Grade SDN

ONOS, the open source SDN networking operating system for Service Provider networks, released its fifth generation platform.

"When we initially released ONOS, our goal was to provide a solid platform that would act as a base on which ON.Lab, its partners and the community could rapidly develop a number of SDN applications," said Thomas Vachuska, Chief Architect at ON.Lab's ONOS project. "ONOS' growing list of SDN and NFV use cases and solutions is a testament to the robustness of its initial distributed architecture design. Even as we add more features, it continues to provide high availability, scalability, performance, and the rich north- and southbound abstractions required for service provider and mission critical networks."

The Emu release, which keeps to a quarterly update cycle, brings improvements to the platform such as IP Multicast and SDN-IP and key use cases including Central Office Re-Architected as a Data Center (CORD), Packet/Optical, service function chaining (SFC) and support for the Open Platform for NFV Project (OPNFV) and OpenStack.

IHS: 5-Year Small Cell Backhaul Market Worth $6.5 Billion

IHS is forecasting that a cumulative $6.5 billion will be spent worldwide on outdoor small cell backhaul equipment between 2015 and 2019.

The latest IHS Infonetics Small Cell Mobile Backhaul Equipment report tracks equipment used for transporting traffic from outdoor small cell sites, such as those attached to light poles, utility poles, and the sides and tops of buildings. Some highlights:

  • Much of the activity in the small cell market continues to be around indoor small cells, as in-building spaces in public venues often have problematic macro-based mobile network coverage
  • In 2015, the global small cell mobile backhaul equipment market is forecast to grow 143 percent from the previous year
  • Around 75,000 outdoor small cell backhaul connections are projected to be deployed in 2015, rising to 960,000 in 2019
  • Point-to-point (P2P) microwave is anticipated to account for just under a third of total small cell backhaul equipment revenue in 2015, the highest of any technology

“As outdoor small cell deployments scale up, we look for the small cell backhaul equipment market to kick into higher gear in 2017, when it will reach over $1.2 billion. This year we started to see the early ramp-up beginning, while trials are still ongoing for many operators,” said Richard Webb, research director for mobile backhaul and small cells at IHS.

http://www.infonetics.com/pr/2015/1H15-Small-Cell-Mobile-Backhaul-Market.asp

Ekinops Supplies 100G DWDM in Kazakhstan

Transtelecom JSC, one of the largest service providers in the Republic of Kazakhstan, has deployed 100G DWDM optical transport equipment from Ekinops across its country-wide optical network and to the border of China.

Transtelecom owns more than thirteen thousand kilometers of fiber-optic cables throughout Kazakhstan, primarily along the railway lines. Transtelecom wanted to increase its backbone capacity in Kazakhstan and its ability to carry high-capacity traffic from China across to Russia. A major requirement for serving customers in China is low latency, and the Ekinops solution was able to meet this and other requirements.

http://www.ekinops.net/en/press-releases/337-transtelecom-in-kazakhstan-chooses-ekinops-for-its-100g-dwdm-network-to-china

Huawei Marine Builds 100G Cable for Maldives

Huawei Marine is working with Ooredoo Maldives to deploy a nationwide fiber-optic submarine cable in the Maldives.

The nationwide submarine cable, which will span 1,200 km, will use Huawei Marine’s 100G technology. The cable will address the country’s increasing communication needs across developing islands and new resort locations.

“As the internet continues to bring remarkable value to individuals, families and businesses around the world, we firmly believe that it is the right of each and every individual to be connected to the abundant benefits of the internet,” said Vikram Sinha, CEO of Ooredoo. “With this is mind, we are proud to take this revolutionary step that will enable us to connect our communities to a fast and reliable broadband internet and support their digital lifestyles from wherever they are in the Maldives.”

Ooredoo’s Nationwide Submarine Cable is expected to be completed by the end of 2016.

http://www.huawei.com

US Internet Readies 2.5G and 5G Residential Service in Minneapolis

US Internet is preparing to launch 2.5 and 5 Gbps fiber Internet service in Q1 2016 via its "Active Ethernet" fiber platform for residents and businesses of the City of Minneapolis.

USI launched a 10 Gbps service in Minneapolis a year ago; the new tiers bridge the speed gap below it. Plans now include 1, 2.5, 5 and 10 Gbps at $65/mo., $99/mo., $199/mo. and $399/mo., respectively.

Travis Carter, CTO and Co-Founder of US Internet, said: "Our top local goal is to connect with the community and make the City of Minneapolis the most connected city in the U.S. Through this expanded high-speed Internet initiative, USI continues to empower Minneapolis residents and businesses with leading edge and extremely cost-effective technology solutions to meet their ever-growing digital needs without bottlenecks, slowdowns or hidden fees."

http://fiber.usinternet.com

IBM Signs Vodafone Netherlands for MobileFirst Portfolio

Vodafone Netherlands will offer IBM MobileFirst solutions to enterprise clients.

IBM MobileFirst will extend Vodafone Ready Business, which includes communications solutions aimed at helping businesses become more flexible, innovative, collaborative and responsive to their customers and employees. The IBM and Vodafone collaboration will help further drive mobile adoption among enterprise organizations in the Netherlands.

“Vodafone has always been at the forefront of enterprise mobility. In collaborating with IBM, we are drawing on the enterprise expertise of both companies, combining Vodafone’s strengths in enterprise mobility management and 4G connectivity with IBM’s industry, mobile and integration knowledge,” said Alexander Saul, Director Enterprise Business Unit Vodafone Netherlands. “Together, we are empowering our enterprise clients with enhanced mobile capabilities that simplify business processes, increase employee productivity, enable deeper customer engagement and drive overall growth.”

https://www-03.ibm.com/press/us/en/pressrelease/48499.wss

IBM Opens Global HQ for Watson IoT in Munich

IBM inaugurated its global headquarters for Watson Internet of Things (IoT) in Munich, launching a series of new offerings, capabilities and ecosystem partners designed to extend the power of cognitive computing to the billions of connected devices, sensors and systems that comprise the IoT. These new offerings will be available through the IBM Watson IoT Cloud, the company’s global platform for IoT business and developers.

IBM said its campus in Munich will bring together 1,000 IBM developers, consultants, researchers and designers to drive deeper engagement with clients and partners, and will also serve as an innovation lab for data scientists, engineers and programmers building a new class of connected solutions at the intersection of cognitive computing and the IoT.

http://www.ibm.com

Tuesday, December 15, 2015

Blueprint: Network Capacity Planning and Bandwidth Management in the IoT Era

by Leon Adato, Head Geek, SolarWinds

Enterprise network architecture has certainly evolved: from flat networks where everything was interconnected, to hierarchical models with enhanced security, and now to a borderless world. But the one network metric that has remained a priority despite these changes is bandwidth, and by extension the individual traffic flows that comprise it.

Many enterprises have treated bandwidth as the elephant in the room—they have awareness of it and the tools to monitor it, but don’t have nearly enough insight into its usage. They don’t have visibility into usage history. They can’t correlate past performance to future trends. They lack any way to get a breakdown from the “big number” (bandwidth usage) to the root of the problem.

This has to change because, quite frankly, bandwidth is running out, and has been for some time now. What’s more, additional bandwidth-hogging trends are starting to crest the horizon.

Where We’re At

Already, BYOD and cloud have added extra layers of complexity when it comes to managing the network. With users connecting externally obtained devices such as cell phones, tablets, smart watches and even personal computers to corporate environments on top of corporate-provided devices, we’ve seen an exponential increase in bandwidth consumption. Furthermore, as more infrastructure is moved to the cloud, the network connections needed for that offsite infrastructure have also grown in both number and criticality. As a result, we network administrators have been tasked with redesigning networking schemes to adapt to these changes.

For those of us who have been through this and lived to tell the tale, we know that the key to success in the era of network complexity is preparedness in the form of network capacity planning and bandwidth management. Having a plan to manage current bandwidth issues and regularly analyze utilization information will best set us up to stay ahead of future issues that may arise.
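The regular utilization analysis described above can start very simply: fit a trend line to bandwidth history and project when a link saturates. A minimal sketch (the sample data and figures are invented for illustration, not from any particular monitoring product):

```python
def project_saturation(samples, capacity_mbps):
    """Least-squares linear trend over (day, avg_mbps) samples;
    returns estimated days until the trend line reaches capacity,
    or None if utilization is flat or falling."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    if slope <= 0:
        return None  # no growth trend, no saturation estimate
    intercept = mean_y - slope * mean_x
    return (capacity_mbps - intercept) / slope

# 30 days of hypothetical samples trending from ~400 toward 600 Mbps
# on a 1 Gbps link
history = [(day, 400 + 7 * day) for day in range(30)]
days_to_full = project_saturation(history, 1000)
```

Real capacity planning would use percentile utilization rather than averages and account for seasonality, but even a crude projection like this turns "the big number" into an actionable lead time for upgrades.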

Where We’re Going

Speaking of which, what might be considered the second—yet much more challenging—installment of the bandwidth-hogging BYOD trend is fast approaching. Enter another now (in)famous acronym—IoT, or the Internet of Things.

Yes, it’s true that soon even your company’s toaster oven may be connected to the network, along with a host of other devices and appliances. It won’t just be a swarm of HVAC, lighting and security controls or intelligent shop-floor tools that expect Internet access; delivery trucks, trailers, shipping containers, smart pallets with onboard GPS, inventory management routing, sort and delivery elements, and scanners and sensors of every variety will become Internet “things” using network protocols and bandwidth in unexpected ways. With more network devices in play than ever before, there will be an explosion of network traffic to accommodate the massive data volume, making the network harder to regulate. While some of the bandwidth being used will of course remain strictly internal, at the end of the day it’s all competing traffic, and competing traffic at a volume we haven’t had to account for in the past.

On top of that, because IoT will fundamentally change the way we humans interact with our environments, the ensuing complexities won’t just be about device and bandwidth entitlement added to the fully burdened cost of each employee. Environments will respond to the presence of humans, and user context—person, authentication, location, traffic, application—will all need to flow seamlessly as people move across traditional IT boundaries. So, as another consequence of IoT, IT departments will need to work closer than ever with the CIO, as well as legal, HR and other business departments.

And of course, it will be left up to us network engineers to sort all of this out.

Getting a Grip on IoT Network Capacity Planning and Bandwidth Management

If you’re like me, it probably seems like you’ve just barely gotten BYOD under control. The good news is that we learned some valuable lessons during that (long) process that are very applicable to getting a grip on IoT network capacity planning and bandwidth issues, too.

First and foremost is the need to closely monitor traffic—and not just the raw volume of network traffic, but application traffic, too. When it comes to IoT, traditional approaches like NetFlow will still be valuable, but IoT traffic will be more about application awareness than simple traffic monitoring and management. Quality-of-service monitoring is also very important, keeping in mind that IoT device responsiveness will matter more than traditional bandwidth-consuming things like email. Paradoxically, latency and reachability will take priority over simply limiting traffic.
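The application-aware flow analysis described here amounts to attributing bytes to applications rather than just counting them. A minimal sketch over hypothetical NetFlow-style records (the field layout and values are invented for illustration):

```python
from collections import defaultdict

def top_talkers(flow_records, n=3):
    """Aggregate (src, dst, app, bytes) flow records by application,
    so bandwidth can be attributed to traffic classes instead of
    appearing only as one big utilization number."""
    by_app = defaultdict(int)
    for src, dst, app, nbytes in flow_records:
        by_app[app] += nbytes
    # Largest consumers first
    return sorted(by_app.items(), key=lambda kv: kv[1], reverse=True)[:n]

flows = [
    ("10.0.0.5", "10.0.1.9", "video", 9_000_000),
    ("10.0.0.7", "10.0.1.9", "email", 120_000),
    ("10.0.0.5", "10.0.2.2", "video", 4_000_000),
    ("10.0.0.8", "10.0.1.3", "iot-telemetry", 60_000),
]
```

The same aggregation keyed by source address instead of application yields per-host top talkers, which is the drill-down from "big number" to root cause that the column calls for.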

And it won’t be enough to simply be a data collection platform or even a metrics dashboard solution. This monitoring will need to analyze more data than ever before, and transform it into concise, useful information to help us troubleshoot bandwidth-related network performance problems. Providing the breadth and depth of information needed to support devices, applications and networks in the era of IoT can only be done with an end-to-end, comprehensive monitoring solution.

As part of capacity planning for IoT, it will also be important to get IP address management under control and get gear ready for IPv6, which is what most IoT devices will prefer. Traditionally, many of us have managed IP address infrastructure with manual processes, which are labor-intensive, time-consuming and error-prone. In addition, they lead to decentralized, fragmented and outdated data. A simple request for a single new IP assignment can result in many hours of work, complex coordination and the likelihood of errors and conflicts, which in turn can lead to a plethora of network problems. Just imagine what this will all look like when innumerable IoT devices start requiring their own addresses.

Finally, automate, automate, automate. In the IoT era, automating as much network management as possible will be more important than ever. At a time when there will be more devices accessing the network than you can shake a stick at, automation solutions will help to more quickly correct issues as they arise and will provide immediate remediation to reduce response times, significantly reducing potential network downtime due to any number of IoT-related capacity and bandwidth issues.

While it may seem crazy to say so, network capacity planning and bandwidth management in the IoT era really does not need to be a daunting task—we’ve been down this road before with BYOD. It’s simply a matter of remembering what worked then, being aware of the subtle differences we’ll experience with IoT and planning for them.

About the Author 
Leon Adato is a Head Geek and technical evangelist at SolarWinds, and is a Cisco Certified Network Associate (CCNA), MCSE and SolarWinds Certified Professional (he was once a customer, after all). Before he was a SolarWinds Head Geek, Adato was a SolarWinds® user for over a decade. His expertise in IT began in 1989 and has led him through roles as a classroom instructor, courseware designer, desktop support tech, server support engineer, and software distribution expert. His career includes key roles at Rockwell Automation®, Nestle, PNC, and Cardinal Health providing server standardization, support, and network management and monitoring.

About SolarWinds 
SolarWinds (NYSE: SWI) provides powerful and affordable hybrid IT infrastructure management software to customers worldwide from Fortune 500® enterprises to small businesses, government agencies and educational institutions. We are committed to focusing exclusively on IT Pros, and strive to eliminate the complexity that they have been forced to accept from traditional enterprise software vendors. Regardless of where the IT asset or user sits, SolarWinds delivers products that are easy to find, buy, use, maintain and scale while providing the power to address all key areas of the infrastructure from on premises to the cloud. Our solutions are rooted in our deep connection to our user base, which interacts in our thwack online community to solve problems, share technology and best practices, and directly participate in our product development process. Learn more today at www.solarwinds.com.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.
