Monday, January 4, 2016

NVIDIA Develops Supercomputer for Self-Driving Cars

NVIDIA unveiled an artificial-intelligence supercomputer for self-driving cars.

In a pre-CES keynote in Las Vegas, NVIDIA's CEO Jen-Hsun Huang said the onboard processing needs of future automobiles far exceed the silicon capabilities currently on the market.

NVIDIA's DRIVE PX 2 will pack the processing equivalent of 150 MacBook Pros -- 8 teraflops of power -- enough to process data from multiple sensors in real time, providing 360-degree detection of lanes, vehicles, pedestrians, signs, etc. The design will use the company's next-generation Tegra processors plus two discrete, Pascal-based GPUs. NVIDIA is also developing a suite of software tools, libraries and modules to accelerate the development and testing of autonomous vehicles.

Volvo will be the first company to deploy the DRIVE PX 2. A public test of 100 autonomous cars using this technology is planned for Gothenburg, Sweden.

Zayo Completes Viatel Acquisition

Zayo completed its previously announced acquisition of Viatel for EUR 98.8 million.  The acquisition adds an 8,400 kilometer fiber network across eight countries to Zayo’s European footprint, including 12 new metro networks, seven data centers and connectivity to 81 on-net buildings.

“The acquisition of Viatel’s European network business strengthens our strategic position in Europe and provides customers with access to our fiber network and expanded connectivity to key international markets,” said Dan Caruso, Zayo chairman and CEO. “Because of the complementary nature of the acquisition, we will begin cross-selling our full suite of services to both Zayo and Viatel customers immediately.”

AT&T Introduces Family of LTE Modules for IoT

AT&T introduced a new family of LTE modules for Internet of Things (IoT) applications, optimized for battery life.

AT&T worked with Wistron NeWeb Corp. (WNC), a module and device manufacturer. The modules are expected to become available from WNC at prices planned as low as $14.99 each, plus applicable taxes, starting in the second quarter. Samples will be available for testing in the first quarter.

“Businesses depend on IoT solutions for gathering real-time information on assets across the world,” said Chris Penrose, senior vice president, Internet of Things, AT&T Mobility. “We’re pleased to be able to facilitate the availability of cost-effective modules so our customers can deploy IoT solutions over the AT&T 4G LTE network. The new LTE modules help the battery life of IoT devices last longer so businesses can better serve their customers.”

Thursday, December 31, 2015

History Channel: History of Transatlantic Cable

Sunday, December 27, 2015

Comcast Installs First DOCSIS 3.1 Modem

Comcast announced an important step toward delivering residential gigabit Internet speeds over its existing plant by installing what it claims is the world’s first DOCSIS 3.1 modem on a customer-facing network.

The deployment last month at a home in the Philadelphia area used standard Comcast cable connections along with a new modem and a software upgrade to the device that serves that neighborhood.

Comcast said it plans to introduce a gigabit speed choice in several U.S. markets before the end of 2016.

  • In May 2015, Comcast unveiled its first DOCSIS 3.1 modem capable of delivering speeds greater than 1 Gbps. The Gigabit Home Gateway will be the company’s first product to integrate software that Comcast acquired in its 2014 purchase of PowerCloud. It also uses open-source RDK-B software, architected by Comcast with contributions from many in the RDK community, which Comcast says will help it introduce new features faster and address issues more efficiently, similar to what has been done with X1.

Acacia Communications Files for IPO

Acacia Communications, a start-up based in Maynard, MA, filed a registration statement with the SEC for an initial public offering of its shares.

The company is seeking to list its shares under the symbol ACIA on the Nasdaq Global Market.

Acacia, which was founded in 2009, develops high-speed coherent optical interconnect products, including a series of low-power coherent DSP ASICs and silicon PICs. The company has integrated these into families of optical interconnect modules with transmission speeds ranging from 40 to 400 Gbps for use in long-haul, metro and inter-data center markets. Acacia’s coherent DSP ASICs and silicon PICs are manufactured using CMOS and CMOS-compatible processes. Using CMOS to siliconize optical interconnect technology enables Acacia to continue to integrate increasing functionality into its products, benefit from the higher yields and reliability associated with CMOS, and capitalize on regular improvements in CMOS performance, density and cost.

In its S-1 statement, Acacia said it had 20 network equipment manufacturers as customers for the year ending September 30, 2015. Acacia's revenue for 2014 was $146.2 million, an 88.3% increase from $77.7 million of revenue in 2013. Its revenue for the nine months ended September 30, 2015 was $170.5 million, a 62.0% increase from $105.2 million of revenue in the nine months ended September 30, 2014. In 2014, the company generated net income of $13.5 million and its adjusted EBITDA was $20.4 million, compared to a net loss of $1.2 million and adjusted EBITDA of $3.6 million in 2013.  For the nine months ended September 30, 2015, Acacia generated net income of $17.9 million and its adjusted EBITDA was $31.7 million, compared to net income of $11.0 million and adjusted EBITDA of $16.0 million for the nine months ended September 30, 2014.

Wednesday, December 23, 2015

Nutanix Files for IPO

Nutanix has filed a registration statement with the U.S. Securities and Exchange Commission (SEC) for a proposed initial public offering of its Class A common stock.

Nutanix is seeking to list its Class A common stock on The NASDAQ Global Select Market under the ticker symbol "NTNX."

Goldman, Sachs & Co. and Morgan Stanley & Co. LLC will act as lead book-running managers, while J.P. Morgan Securities LLC and Credit Suisse Securities (USA) LLC will act as book-running managers for the proposed offering. Robert W. Baird & Co. Incorporated; Needham & Company LLC; Oppenheimer & Co. Inc.; Pacific Crest Securities, a division of KeyBanc Capital Markets Inc.; Piper Jaffray & Co.; Raymond James; Stifel; and William Blair & Company, L.L.C. will act as co-managers.

Nutanix Raises $140 Million for Converged Data Center Solutions

Nutanix, a start-up based in San Jose, California, announced a $140 million Series E funding round at over a $2 billion valuation.

Nutanix offers a Virtual Computing Platform, which integrates compute and storage into a single solution for the data center. Its web-scale software runs on all popular virtualization hypervisors, including VMware vSphere, Microsoft Hyper-V and open source KVM, and is uniquely able to span multiple hypervisors in the same environment.

The latest round brings Nutanix's total funding to $312 million.

Nutanix reports annualized bookings exceeding a run rate of $200 million.  The company has over 800 customers, including 29 customers who have purchased more than $1 million in aggregate products and services.  Nutanix's growing list of customers includes Airbus, China Merchant Bank, Honda, ConocoPhillips, Total SA, Toyota, US Navy and Yahoo! Japan.

"The convergence of servers, storage and networking in the datacenter has created one of the largest business opportunities in enterprise technology, and Nutanix is at the epicenter of this transformation," said Dheeraj Pandey, co-founder and CEO, Nutanix. "We are proud of the progress we have made, and are confident in capitalizing on the enormous opportunity that lies ahead of us. We recognize the importance of building relationships with leading public market investors, and are honored to welcome them as partners in driving the long-term success of our Company."

  • In June 2014, Nutanix announced an OEM deal under which Dell will offer a new family of converged infrastructure appliances based on Nutanix web-scale technology. The companies said the combination of Nutanix’s software running on Dell’s servers delivers a flexible, scale-out platform that brings IT simplicity to modern data centers.
    Specifically, the new Dell XC Series of Web-scale Converged Appliances will be built with Nutanix software running on Dell PowerEdge servers, and will be available in multiple variants to meet a wide range of price and performance options. The appliances will deliver high-performance converged infrastructure ideal for powering a broad spectrum of popular enterprise use cases, including virtual desktop infrastructure (VDI), virtualized business applications, multi-hypervisor environments and more.

Tuesday, December 22, 2015

Blueprint: One Box or Two? New Options in “Hyper” Storage

by Stefan Bernbo, founder and CEO of Compuverde

Cisco’s latest Visual Networking Index: Global Mobile Data Traffic Forecast offers just one example of what enterprises are facing on the storage front. The report predicts that global mobile data traffic will grow at a compound annual growth rate of 57 percent from 2014 to 2019. That’s a ten-fold increase in just five years.
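As a rough sanity check (my arithmetic, not a figure from the Cisco report), compounding a 57 percent annual growth rate over the five-year forecast window does land close to a ten-fold increase:

```python
# Compound a 57% CAGR over the five years from 2014 to 2019.
cagr = 0.57
growth = (1 + cagr) ** 5
print(round(growth, 1))  # ~9.5x, i.e. roughly ten-fold
```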

How will organizations scale to meet these massive new storage demands? Hardware costs make rapid scaling prohibitive for most businesses, yet a solution is needed quickly. Enterprises today need flexible, scalable storage approaches if they hope to keep up with rising data demands.

Such flexibility can be found in software-defined storage (SDS). Because the storage and compute needs of organizations are varied, two SDS options have arisen: hyperconverged and hyperscale. Each approach has its distinctive features and benefits, which are discussed below – and which resellers should be versed in.

Storage Then and Now

Before next-gen storage was “hyper,” it was merely “converged.” Converged storage combines storage and computing hardware to improve delivery times and minimize the physical space required in virtualized and cloud-based environments. This was an improvement over the traditional storage approach, where storage and compute functions were housed in separate hardware. The goal was to improve data storage and retrieval and to speed the delivery of applications to and from clients.

Converged storage is not centrally managed and does not run on hypervisors; the storage is attached directly to the physical servers. It uses a hardware-based approach composed of discrete components, each of which can be used on its own for its original purpose in a “building block” model.

In contrast, hyperconverged storage infrastructure is software-defined. All components are converged at the software level and cannot be separated out. This model is centrally managed and virtual machine-based. The storage controller and array are deployed on the same server, and compute and storage are scaled together. Each node has compute and storage capabilities. Data can be stored locally or on another server, depending on how often that data is needed.

Flexibility and agility are increased, and that is exactly what enterprise IT admins need to effectively and efficiently manage today’s data demands. Hyperconverged storage also promotes cost savings. Organizations are able to use commodity servers, since software-defined storage works by taking features typically found in hardware and moving them to the software layer. Organizations that need “1:1” scaling of compute and storage, as well as those that deploy VDI environments, are well served by the hyperconverged approach. The hyperconverged model is storage’s version of a Swiss Army knife; it is useful in many business scenarios. The end result is one building block that works exactly the same everywhere; it’s just a question of how many building blocks a data center needs.

Start Small, Scale as Needed

The hyperconverged approach seems like just what the storage doctor ordered, but hyperscale is also worth exploring. Hyperscale computing is a distributed computing environment in which the storage controller and array are separated. As its name implies, hyperscale is the ability of an architecture to scale quickly as greater demands are made on the system. This kind of scalability is required in order to build big data or cloud systems; it’s what Internet giants like Amazon and Google use to meet their vast storage demands. However, software-defined storage now enables many enterprises to enjoy the benefits of hyperscale.

Lower total cost of ownership is a major benefit. Commodity off-the-shelf (COTS) servers are typically used in the hyperscale approach, so a data center can have millions of virtual servers without the added expense that as many physical servers would require. Data center managers want to get rid of refrigerator-sized disk shelves and the NAS and SAN solutions behind them, which are difficult to scale and very expensive. With hyper solutions, it is easy to start small and scale up as needed. Using standard servers in a hyper setup creates a flattened architecture that requires less hardware and costs less. Hyperscale enables organizations to buy commodity hardware; hyperconverged goes one step further by running both elements, compute and storage, on the same commodity hardware. It becomes a question of how many servers are necessary.

The Best of Both Worlds

Here’s an easy way to look at the two approaches. Hyperconverged storage is like having one box with everything in it; hyperscale has two sets of boxes, one set of storage boxes and one set of compute boxes. It just depends on what the architect wants to do, according to the needs of the business. A software-defined storage solution could take over all the hardware and turn it into a type of appliance, or it could be run as a VM, which would make it a hyperconverged configuration.

Perhaps the best news of all, as enterprises scramble to reconfigure current storage architectures, is that data center architects can employ a combination of hyperconverged and hyperscale infrastructures to meet their needs. Enterprises will appreciate the flexibility of these software-defined solutions, as storage needs are sure to change. Savvy resellers will be ready to explain how having this kind of agile infrastructure will help enterprises to future-proof their storage and save money at the same time.

About the Author

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise scale data storage solutions designed to be cost effective for storing huge data sets. From 2004 to 2010 Stefan worked within this field for Storegate, the wide-reaching Internet based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan has worked with system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.

Monday, December 21, 2015

Blueprint: 2016 and Beyond

by Cam Cullen, Vice President of Global Marketing at Procera Networks

I recently attended the Light Reading Vision Executive 2020 Summit in Dublin, and the event was a great peek into the thought process of some of the largest network operators in the world. Light Reading and Heavy Reading presented a number of different perspectives on what the future holds for telecom operators, some of which were quite compelling. One report they presented covered the New IP Agency interoperability testing, a first-of-its-kind Network Functions Virtualization test that brought 12 vendors together to show that NFV solutions from various vendors could work together. This test was a big step forward for NFV, because it showed that the industry has stepped beyond mere virtualization and is moving toward true NFV.

The event inspired me to put down on the blog my own thoughts on the new trends we will see in 2016 and beyond. So… here we go…

Virtualization and NFV get some big wins and deployments: Most operators have already implemented a few projects using virtualization, often in their internal IT or control plane deployments. More and more operators are making vendor decisions based upon virtualization products, and I fully expect to see a few significant data plane deployments in 2016.

Orchestration gets real: One point made at the Vision 2020 conference that is often glossed over is that orchestration is really about automation. There are a lot of vendors that have designed their solutions to be very friendly to APIs and automation, and there have already been some ETSI POCs that demonstrate this in real-world scenarios. The New IP Agency intends to do an orchestration interoperability test in 2016, and that test will shine a light on the real state of NFV orchestration, but I expect orchestration to reach out beyond NFV in 2016.

4K video begins to appear in the wild: I have a 4K TV in my house, and the picture is stunning, even with simple upscaling of existing HD video. It makes recorded TV shows look like they are almost live, which can be a bit disconcerting at times because the picture is so clear. Interestingly enough, the easiest way to get 4K streams now is directly on your smart TV, since most streaming devices do not yet support 4K, but that will change in 2016.

Video bandwidth continues to increase: It is a bit of a no-brainer to say that video bandwidth will increase, but the 4K prediction above is the biggest thing that will accelerate video bandwidth consumption. Netflix recommends 3 Mbps for SD, 5 Mbps for HD, and 25 Mbps for UHD, so users may go from 3 or 5 Mbps to 25 Mbps for some UHD-quality content. With video already consuming 60-70% of downstream bandwidth on our customers’ networks, it will only get worse.
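To put those Netflix bitrate recommendations in perspective, a back-of-the-envelope conversion (illustrative arithmetic, not from the column) shows how much data one hour of streaming moves at each tier:

```python
def gb_per_hour(mbps: float) -> float:
    # bits/second * seconds/hour / bits-per-byte / bytes-per-GB
    return mbps * 1e6 * 3600 / 8 / 1e9

for label, rate in [("SD", 3), ("HD", 5), ("UHD", 25)]:
    print(f"{label} at {rate} Mbps: {gb_per_hour(rate):.2f} GB/hour")
```

An hour of UHD at 25 Mbps works out to about 11 GB, five times the HD figure, which is why a shift to 4K moves the needle on aggregate network load.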

A new game-changing app will appear: Every year a new app appears that has the potential to change consumer consumption patterns. In 2015, Popcorn Time and Periscope were notable new additions to the landscape (fortunately for Hollywood, Popcorn Time hasn’t taken off yet). Periscope is interesting because it can turn every device into a live video stream and has the support of Twitter, and it won the App Store “Best of the Year” in recognition of this potential. In August, Periscope claimed 10M users, and I expect that number to keep growing. What will the new app be in 2016? The beauty of it is that we don’t know, and that uncertainty is actually awesome and a testament to the creativity enabled by the Internet.

Streaming-only cord cutters get enabled: Amazon has started an aggregation offering for streaming services that includes Showtime, Starz, and other services (as a start). CBS announced (although it will begin in January of 2017, not 2016) a new Star Trek series available only online. The biggest advantage that Pay-TV has today is bundling convenience, and Amazon’s offering is the first of many such offers I expect to see. Cord cutters today will end up paying more if they want to watch a line-up similar to cable services, and they have to manage a lot of different apps. Aggregation offerings may change that equation going forward.

2016 will be an interesting year for consumer broadband, and I look forward to seeing “What’s Next” in 2016.

About the Author

Mr. Cullen is the Vice President of Global Marketing at Procera Networks. Mr. Cullen is responsible for Procera's overall global marketing and product management, and is an active evangelist for Procera's solution and general market trends as well as an active blogger for Procera. He joined Procera as VP of Product Management to execute on product strategy and to expand the company's product offering. Prior to Procera, Mr. Cullen held senior Product Management and Marketing roles at Allot and Quarry Technologies/Reef Point Systems, where he was VP of Product Management and Marketing, and held various roles in business development, marketing, and sales at 3Com. Mr. Cullen was a captain in the US Air Force, where he worked at the National Security Agency and the Air Force Information Warfare Center, and holds a Bachelor of Science in Electrical Engineering from the University of Alabama.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Oracle Acquires StackEngine for Container Management

Oracle has acquired StackEngine, a start-up specializing in container operations management.  Financial terms were not disclosed.

StackEngine, which is based in Austin, offers software to manage and automate Docker applications, giving organizations the power to compose, deploy, and automate resilient container-native applications. Its flagship product, Container Application Center, is an end-to-end container application management solution for developers, DevOps and IT operations teams that brings users through the entire container application lifecycle, from development to deployment.

All StackEngine employees will be joining Oracle as part of Oracle Public Cloud.

Pivotal Acquires CloudCredo for Cloud Foundry Expertise

Pivotal has acquired CloudCredo, a privately held software developer based in London, along with CloudCredo’s subsidiary stayUp, a log analysis technology company for Cloud Foundry.

CloudCredo has a highly regarded team of Cloud Foundry experts.  Pivotal said the acquisition will better enable enterprise adoption of Pivotal Cloud Foundry.

Pivotal is a spin-out and joint venture of EMC Corporation and its subsidiary VMware. The Pivotal Cloud Native Platform offers integrated application framework, runtime and infrastructure automation capabilities.

“CloudCredo enhances Pivotal’s powerful next-generation portfolio of products and services by bringing extensive knowledge of deploying, running and customizing Cloud Foundry for some of the world’s largest and most admired brands,” said Rob Mee, CEO of Pivotal. “With this expertise, we can better help our customers transform their enterprises by embracing and leveraging Pivotal’s Cloud Native platform more quickly.“

“When we started CloudCredo, we were profoundly influenced by The Pivotal Way. It shaped our approach to modern software development, our culture promoting openness and doing things the right way, and our passion for delivering differentiated value to our customers,” said Colin Humphreys, CloudCredo co-founder and CEO. “Joining Pivotal allows us to operate at a global scale, overnight, and help the world's largest and most admired brands use software to transform their businesses and make an impact on the world.”

NetApp to Acquire SolidFire for All-Flash Data Center Arrays

NetApp agreed to acquire SolidFire for $870 million in cash.

SolidFire specializes in all-flash storage systems for next-generation data centers.

NetApp said the SolidFire acquisition extends its portfolio to include all-flash offerings that address each of the three largest All-Flash Array market segments. For the traditional enterprise infrastructure buyer, the NetApp All Flash FAS (AFF) product line delivers enterprise-grade features across flash, disk and cloud resources. For the application owner, the NetApp EF Series product line offers world-class SPC-1 benchmarks with consistent low-latency performance and proven 6x9’s reliability. For the next-generation infrastructure buyer, SolidFire’s distributed, self-healing, webscale architecture delivers seamless scalability, white box economics, and radically simple management. This enables customers to accelerate third platform use cases and webscale economics. SolidFire is an active leader in the cloud community with extensive integrated storage management capabilities with OpenStack, VMware, and other cloud frameworks.

“This acquisition will benefit current and future customers looking to gain the benefits of webscale cloud providers for their own data centers,” said George Kurian, chief executive officer of NetApp. “SolidFire combines the performance and economics of all-flash storage with a webscale architecture that radically simplifies data center operations and enables rapid deployments of new applications. We look forward to extending NetApp’s flash leadership with the SolidFire team, products and partner ecosystem, and to accelerating flash adoption through NetApp’s large partner and customer base.”

SolidFire Raises $82 Million for Flash Storage

SolidFire, a start-up based in Boulder, Colorado, closed $82 million in Series D funding for its all-flash storage systems.

SolidFire said its revenue grew over 700 percent in 2013 and has increased over 50 percent quarter over quarter in 2014. Its customer base is split approximately 50:50 between service providers and enterprises.

SolidFire also announced the expansion of its flagship SF Series product line, unveiling two new storage nodes that represent the third generation of SolidFire hardware to be released since the platform became generally available in November 2012.

The latest investments bring the company's total funding to $150 million. New investor Greenspring Associates led the round along with a major sovereign wealth fund, with participation from current investors NEA, Novak Biddle, Samsung Ventures and Valhalla Partners.

Ericsson and Apple Settle Patent Dispute

Ericsson and Apple have signed a global patent license agreement, ending a long-running legal dispute spanning multiple jurisdictions and a case before the U.S. International Trade Commission.  As part of the seven-year agreement, Apple will make an initial payment to Ericsson and, thereafter, will pay ongoing royalties. Financial terms were not disclosed.

The deal includes a cross license that covers patents relating to both companies' standard-essential patents (including the GSM, UMTS and LTE cellular standards), and grants certain other patent rights. In addition, the agreement includes releases that resolve all pending patent-infringement litigation between the companies.

Ericsson noted that the positive effects of the settlement, alongside its ongoing IPR business with all other licensees, will bring its estimated IPR revenues for the year to SEK 13-14 billion.

"We are pleased with this new agreement with Apple, which clears the way for both companies to continue to focus on bringing new technology to the global market, and opens up for more joint business opportunities in the future," said Kasim Alfalahi, Chief Intellectual Property Officer at Ericsson.

TeliaSonera Sells its Stake in Nepal's Ncell

TeliaSonera will sell its 60.4 percent ownership in the Nepalese operator Ncell to Axiata, one of Asia’s largest telecommunication groups, for US$1,030 million on a cash and debt free basis. At the same time, TeliaSonera will dissolve its economic interests in the 20 percent local ownership and receives approximately US$48 million. The transactions are conditional on each other.

Axiata has more than 260 million customers and 25,000 employees. Axiata said Ncell will complement its portfolio of Asian telecommunications assets, which includes operations in Malaysia, Indonesia, Sri Lanka, Bangladesh, Cambodia, India, Singapore and Pakistan. Axiata, which is listed on the Malaysian stock exchange, is a reputable company with a strong focus and expertise in South Asia and is also a long-term investor contributing to development and advancements of the countries it operates in.

“In September we announced our ambition to reduce our presence in our seven Eurasian markets and focus on our operations in the Nordics and Baltics, within the strategy of creating the new TeliaSonera. Today, I am very pleased to announce a first step and proof point in this reshaping of TeliaSonera. I am also glad to see Axiata as a new owner. That gives me comfort that our dedicated employees are in good hands when taking Ncell to the next level,” says Johan Dennelind, TeliaSonera’s President and CEO.

MTS and Ericsson to Showcase 5G at 2018 World Cup in Russia

Russia's Mobile TeleSystems (MTS) and Ericsson signed an MOU on 5G research and deployment in Russia, including spectrum studies of the next generation network and the building of a test system. The project will support a dialog with government regulators concerning the bands being targeted for 5G and requirements for next generation systems.

The companies said their partnership will also lead to implementation of 5G-related technologies in MTS' network, such as Ericsson Lean Carrier and the use of unlicensed spectrum, and other technologies that use the design concepts Ericsson is developing for 5G.

Russia's MTS Picks Ericsson for LTE

Russia's Mobile TeleSystems (MTS) selected Ericsson to deploy its LTE network in four regions covering more than half of Russia, thus becoming MTS's main vendor. Financial terms were not disclosed. Under the three-year agreement and starting in Q2, 2013, Ericsson will roll out LTE in the Siberian, Ural, Volga, and Southern Federal Districts of Russia. In the first stage of the project, Ericsson will supply no less than 10,000 new-generation products...

Friday, December 18, 2015

AT&T Transfers Managed App and Hosting Business to IBM

AT&T agreed to transition its managed application and managed hosting services unit to IBM. IBM will then align these managed service capabilities with the IBM Cloud portfolio. IBM will also acquire equipment and access to floor space in AT&T data centers currently supporting the applications and managed hosting operations.

Financial terms were not disclosed.

AT&T will continue to provide other managed networking services, including security, cloud networking and mobility offerings.

The companies said the deal builds on their long-term partnership.

“Today’s announcement represents an expansion of our strategic relationship with AT&T and continuing collaboration to deliver new innovative solutions,” said Philip Guido, IBM General Manager of Global Technology Services for North America. “Working with AT&T, we will deliver a robust set of IBM Cloud and managed services that can continuously evolve to meet clients’ business objectives.”

AT&T and IBM Team Up on Mobile Cloud Security

AT&T and IBM announced a partnership focused on delivering a scalable mobile cloud solution to help protect corporate data and apps. 

The reference architecture includes:
  • IBM MobileFirst Protect: helps organizations manage and control mobile devices, apps and documents.
  • AT&T NetBond: provides a highly secure, scalable network connection to IBM's Cloud infrastructure services, SoftLayer.
  • IBM Cloud: the SoftLayer infrastructure secures public and private clouds for applications and data storage.
  • AT&T Work Platform: enables separate billing of business and personal charges for voice, messaging and data use on an employee's personal handset.
"Balancing employees' need for convenience with security has become a challenge for CISOs and CIOs across the world," said Caleb Barlow, vice president, IBM Security. "To help protect organizations, employees and data, IBM Security and AT&T are delivering a tested and easy to deploy set of complementary tools. We're giving enterprise mobile device users stable, private access to data and apps in the cloud."

Juniper Discloses Unauthorized Code in ScreenOS

Juniper Networks disclosed the discovery of unauthorized code in its ScreenOS that could allow a knowledgeable attacker to gain administrative access to NetScreen devices and to decrypt VPN connections.

It is not known who inserted the code into the OS or how long it has been there.

All NetScreen devices using ScreenOS 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20 are affected by these issues and require patching.
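For illustration only (this is not a Juniper-supplied tool), a small helper can check whether a ScreenOS version string in the bulletin's "6.2.0rNN" style falls inside the affected ranges listed above:

```python
# Affected ScreenOS releases per Juniper's bulletin:
# 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20.
AFFECTED = {"6.2.0": range(15, 19), "6.3.0": range(12, 21)}

def is_affected(version: str) -> bool:
    # Split "6.2.0r17" into the base release and the "r" revision number.
    base, _, rev = version.partition("r")
    return rev.isdigit() and int(rev) in AFFECTED.get(base, range(0))

print(is_affected("6.2.0r17"))  # True
print(is_affected("6.3.0r21"))  # False
```

Operators would of course rely on Juniper's own advisory and tooling; this sketch just makes the affected ranges concrete.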

The urgent security bulletin urges customers to update their systems and apply the patched releases with the highest priority.

Box Extends its Strategic Sales Partnership with IBM

Box announced a new agreement to expand its strategic partnership with IBM.

The companies will undertake stronger go-to-market and sales commitments. With a new term that has the potential to last for a decade or more, the partnership underscores the companies’ long-term commitment to delivering modern enterprise content management and collaboration solutions.

“IBM and Box are committed to delivering world-class solutions that transform how businesses work,” said Aaron Levie, co-founder and CEO of Box. “We are thrilled to extend our partnership with IBM, further expanding our product capabilities and creating new go-to-market channels.”

Additionally, the companies announced today the general availability of two new product integrations for IBM Case Manager and IBM Datacap. The availability of these new solutions complements the previously announced product integrations with IBM Content Navigator and IBM StoredIQ.

Linux Foundation Backs Blockchain for Transaction Verification

The Linux Foundation announced a new collaborative effort to advance blockchain technology.

Blockchain is a digital technology for recording and verifying transactions. The distributed ledger is a permanent, secure tool that makes it easier to create cost-efficient business networks without requiring a centralized point of control. With distributed ledgers, virtually anything of value can be tracked and traded. The application of this emerging technology is showing great promise in the enterprise. For example, it allows securities to be settled in minutes instead of days. It can be used to help companies manage the flow of goods and related payments or enable manufacturers to share production logs with OEMs and regulators to reduce product recalls.
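The tamper-evidence that makes a distributed ledger "permanent and secure" comes from hash chaining: each record's hash commits to the previous one, so altering any entry invalidates everything after it. A minimal sketch (illustrative Python, not any particular blockchain implementation):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash covers its payload plus the previous block's hash.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis value
    for data in records:
        h = block_hash(prev, data)
        chain.append((data, h))
        prev = h
    return chain

def verify(chain) -> bool:
    # Recompute every link; any altered record breaks the chain.
    prev = "0" * 64
    for data, h in chain:
        if block_hash(prev, data) != h:
            return False
        prev = h
    return True

ledger = build_chain(["A pays B 10", "B pays C 4"])
print(verify(ledger))  # True
ledger[0] = ("A pays B 99", ledger[0][1])  # tamper with an early record
print(verify(ledger))  # False
```

Real systems layer consensus, signatures and distribution on top, but this is the core property that lets parties trade without a centralized point of control.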

The project will develop an enterprise grade, open source distributed ledger framework and free developers to focus on building robust, industry-specific applications, platforms and hardware systems to support business transactions.

Early commitments to this work come from Accenture, ANZ Bank, Cisco, CLS, Credits, Deutsche Börse, Digital Asset Holdings, DTCC, Fujitsu Limited, IC3, IBM, Intel, J.P. Morgan, London Stock Exchange Group, Mitsubishi UFJ Financial Group (MUFG), R3, State Street, SWIFT, VMware and Wells Fargo.

“Distributed ledgers are poised to transform a wide range of industries from banking and shipping to the Internet of Things, among others,” said Jim Zemlin, executive director at The Linux Foundation. “As with any early-stage, highly-complex technology that demonstrates the ability to change the way we live our lives and conduct business, blockchain demands a cross-industry, open source collaboration to advance the technology for all.”

Spirent Debuts First 50Gb Ethernet Test System

Spirent unveiled the world's first 50 Gbps Higher Speed Ethernet test solution.

Spirent's 50GbE Boost system tests Layer 1 to Layer 3 quality and highlights error-free performance with different combinations of streams, frame lengths and rates. It delivers per-port and per-stream statistics such as latency, frames out of sequence, frame counts and rates, Layer 1 PRBS capability and Layer 1 statistics to help debug physical link problems.

"According to Dell'Oro Group 2015 data, 50Gbps servers will represent more than 30 percent of server shipments in 2019, up from essentially zero percent today," says Abhitesh Kastuar, General Manager of Spirent's Cloud and IP business. "With virtualized applications driving server performance it makes sense that access speeds are increasing from 10 to 25 and further to 50Gbps.  I believe the rapid adoption of 25G and 50G server links will drive the move to 100GbE and beyond for the uplinks from the leaf nodes to the spine nodes."