Friday, April 8, 2016

Orange County K-12 Gets 100G Internet Connection

CENIC, the Orange County Department of Education (OCDE) and the California Department of Education have activated the world’s first 100-Gigabit per second K-12 connection.

The Orange County 100Gb dark fiber connection is the first of a number of planned 100Gb connections for K-12 sites in California. CENIC is working to complete 100Gb connections for Riverside and San Diego County Offices of Education within the next few months. An additional ten 100Gb connections for K-12 sites are included in CENIC's consortial E-Rate filing and planned for production after July 1, 2016, coinciding with the current FCC E-Rate cycle.

“Our new 100-Gigabit ethernet connection will directly benefit more than half a million students and 20,000-plus teachers across 27 school districts. In doing so, it reflects our commitment to the vision that Orange County students will lead the nation in college and career readiness and success,” said Orange County Superintendent of Schools Dr. Al Mijares.

CENIC is the nonprofit organization that operates the California Research & Education Network (CalREN), a high-capacity network designed to meet the unique requirements of over 20 million users, including the vast majority of K-20 students together with educators, researchers, and other vital public-serving institutions.

Thursday, April 7, 2016

Blueprint: Top 10 Best Practices for Planning and Conducting an Endpoint PoC

by Paul Morville, Founder and VP of Products, Confer

Few things are more disappointing or costly than deploying a product that fails to live up to the vendor’s claims or doesn’t meet the team’s expectations. More often than not, there is a very large grey area where it’s difficult to discern what the PowerPoint slides promise versus what the product will actually deliver. A well-structured Proof of Concept (PoC) can be extremely useful in turning this grey area into black and white. But, these PoCs can be complicated and costly to run, sapping security operations center and security analyst resources that are already spread too thin.

For endpoint security, planning and conducting a good PoC is even more important than usual because security’s reputation is on the line. While improving endpoint security is essential in today’s environment, endpoint deployments can be risky. They are highly visible across the company, and a failed deployment will land the security team in hot water with its end users.

By designing a solid and comprehensive PoC, you can vastly improve your chances of managing the gaggle of vendors vying for your business, make the right decision, and ultimately ensure a smooth rollout and a successful project.

Our Top 10 Do’s and Don’ts:

1: Don’t delegate the scoping and planning process

Senior security team members are typically at maximum capacity, so it’s tempting to delegate the task of planning a PoC to a more junior staff member. Don’t. The PoC is the chance to define what the organization wants from an endpoint security solution in terms of technical, operational and business requirements. In forward-thinking organizations, an experienced CISO is engaged in the upfront planning to ensure the requirements are well-defined.

2: Do ask yourself, “Will it flatten the stack?”

When testing a product, ask yourself whether it will help you flatten the endpoint security stack, thereby reducing management cost and complexity. How many items can you check off on your requirement list? How many endpoint agents can you retire?

The PoC should thoroughly evaluate every function the product claims to offer. For example, if the product blocks attacks – what kind? If the product supports incident response, does it give full visibility into the details and impact on the endpoint?

3: Do adopt the mindset of the adversary

The PoC test should serve as a proxy for the determined adversaries the organization faces. By adopting the mindset of the adversary, the CISO can emulate the types of attacker behaviors they are likely to face.

Skilled attackers can easily penetrate most networks, so the test scenarios should not focus solely on breach prevention. It’s also critical to evaluate the level of damage the attackers can do once they are inside the network, and how readily their behavior can be detected and thwarted.

4: Do form Red and Blue Teams

Conducting a PoC that most accurately reflects a real-world scenario in a specific organization requires selecting members of the security staff to mimic the attackers who are constantly trying to compromise employees’ devices and steal valuable data. These employees become the Red Team. On the flip side, staff members chosen to mimic the defenders, those who work to mitigate all threats facing the organization, become the Blue Team. If everyone knows their roles, the PoC will be as close to reality as possible.

5: Do allow those teams to work together

Often, the Red Team launches an attack and then, a month later, writes a report that says, “We got in, and here are the vulnerabilities we found.” The PoC will be far more useful if one or two key members of the Blue Team are sitting alongside the Red Team and interacting with them. The Blue Team can watch how an attack unfolds, analyze how the defenses react, and evaluate what kind of information is generated by the product being tested. In turn, this gives them a better sense of how the product can actually be used, and how it will perform in a real-world environment.

6: Do test in both the lab and the real world

A typical medium enterprise will have over 5 million executables in its environment and will see upwards of 5,000 new executables enter the environment every day. Every one of these executables has the potential to generate a false positive, but that’s impossible to simulate in a lab. Therefore, a well-designed PoC will strike a balance between bench-testing live malware in a virtual-lab setting, and testing a subset of the real-world production environment under the conditions of an actual attack. An effective PoC should include deployment on at least 20 devices from the general population to provide the real-world perspective.
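To put those numbers in perspective, even a tiny false-positive rate becomes a real triage workload at enterprise scale. A quick back-of-the-envelope calculation (the 0.1% rate below is an illustrative assumption, not a figure from any product or from the article):

```python
# Rough estimate of daily false-positive triage load during a PoC.
# The executable volume comes from the article; the false-positive
# rate is an assumed figure for illustration only.
NEW_EXECUTABLES_PER_DAY = 5_000
FALSE_POSITIVE_RATE = 0.001  # 0.1% -- assumption, varies widely by product

daily_false_positives = NEW_EXECUTABLES_PER_DAY * FALSE_POSITIVE_RATE
monthly_false_positives = daily_false_positives * 30

print(f"~{daily_false_positives:.0f} new false positives per day")
print(f"~{monthly_false_positives:.0f} over a 30-day PoC window")
```

Even at that optimistic rate, a 30-day PoC on the full population would surface roughly 150 alerts needing analyst time, which is exactly the kind of operational cost a lab-only test never reveals.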

7: Do use a representative set of attacks

Organizations are most likely to be attacked by the same actors who have attacked them in the past, using methods that were previously successful. The goal, therefore, is not to test against the most obscure or exotic malware, but rather to focus on threats the organization has already faced. Maintaining a repository of malware samples from past incidents is a good start. Also, include malwareless attacks – such as document-based or PowerShell scripts. They are common in today’s enterprise and just as damaging as a malware-based attack.

8: Don’t blindly accept tests from your vendors

If a CISO relies on the vendor to provide malware test samples, it is very important to ask questions about how those samples were derived. Vendors sometimes skew PoC results by repackaging known malware so it evades their competitors’ products but is detected by their own engine (not a big surprise, since they generated it). Ask questions and use a mixture of sources.

9: Don’t test malware on a live network

At the risk of stating the obvious, it is never wise to test live malware in a production environment. Inexperienced security personnel have actually done this, triggering a full-scale outbreak in the environment. For live malware testing, the best case is to use a segregated network consisting of virtual machines that are immediately reimaged after infection so as to avoid an actual attack.
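The reimage-after-infection workflow described above can be automated so a dirty VM never lingers. Below is a minimal sketch assuming VirtualBox on a segregated host-only network, with a test VM that has a known-good snapshot named "clean"; the VM and snapshot names, and the omitted sample-transfer step, are all illustrative assumptions rather than anything prescribed by the article.

```python
import subprocess

def restore_clean(vm: str, snapshot: str = "clean") -> list:
    """Build the VBoxManage command that rolls a lab VM back to its
    known-good snapshot after a malware sample has been detonated."""
    return ["VBoxManage", "snapshot", vm, "restore", snapshot]

def detonate_and_reimage(vm: str, sample_path: str) -> None:
    # 1. Power on the isolated VM headlessly.
    subprocess.run(["VBoxManage", "startvm", vm, "--type", "headless"],
                   check=True)
    # 2. Transfer sample_path into the VM, run it, and record how the
    #    product under test responds (details omitted in this sketch).
    # 3. Power off and immediately restore the clean snapshot so no
    #    infected state survives between test runs.
    subprocess.run(["VBoxManage", "controlvm", vm, "poweroff"], check=True)
    subprocess.run(restore_clean(vm), check=True)
```

The key design point is that the restore happens unconditionally after every detonation, so a sample that "succeeds" can never persist into the next test case.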

10: Don’t test on a suspect endpoint

When conducting a PoC, it can be tempting to “kill two birds with one stone” by including real devices that are suspected of already having been compromised. This approach is not advised because it presents an incomplete picture. If the attacker has already come and gone, you often have very little to go on. Unless you plan to install the product exclusively post-incident, try to simulate the whole attack lifecycle.

Following these 10 best practices will help test how well a product addresses specific endpoint security requirements in the only environment that truly matters – yours.

About the Author

Paul Morville has been working in information security for more than 15 years. Prior to founding Confer, Paul held numerous roles at Arbor Networks, including VP Product Management and VP Corporate Business Development. Paul was an early employee at Arbor and helped take the company from pre-revenue to more than $100M in annual sales, establishing it as the leader in DDoS detection and prevention.

While there, Paul developed and launched Arbor’s flagship enterprise network security product line, established partnerships with ISS/IBM, Cisco and Alcatel-Lucent; managed Arbor’s Security Engineering & Response research team; acquired a company; and ultimately managed Arbor’s sale to Danaher Corporation in 2010.

Prior to entering the security industry, Paul worked for several other startups. He holds an MBA with Distinction from Michigan’s Ross School of Business.

About Confer

Confer offers a fundamentally different approach to endpoint security through a Converged Endpoint Security Platform, an adaptive defense that integrates prevention, detection and incident response for endpoints, servers and cloud workloads. The patented technology disrupts most attacks while collecting a rich history of endpoint behavior to support post-incident response and remediation. Confer automates this approach to secure millions of devices, regardless of where they are, allowing security teams to focus on more important activities.

Rackspace Offers Hosted OpenStack Private Clouds

Rackspace is now offering its fully-managed OpenStack services in any data center -- including a private enterprise data center, a third-party data center of the customer's choosing, a Rackspace-supported third-party colocation facility, or a Rackspace data center.

Rackspace will fully manage the underlying OpenStack software and hardware, including all compute, network and storage. The company promises "Fanatical Support."

The company said this new approach enables customers to run OpenStack private clouds without the high cost, risk and operational burden of doing it themselves. And companies can free up money and resources by moving their IT infrastructure from a capital expense to an operating expense model.

“Companies realize they can free up money and resources for more strategic business investments when they turn their IT capital expenses into operating expenses,” said Darrin Hanson, GM and VP of OpenStack Private Cloud at Rackspace. “When OpenStack is consumed as a managed service, businesses can remove non-core operations, reduce software licensing, and minimize infrastructure acquisition and IT operations costs.”

Unwired Planet to Sell Patent and Trademark Assets

Unwired Planet, an intellectual property company focused exclusively on the mobile industry, will sell its patent and trademark assets to Optis UP Holdings for $30 million in cash, plus up to an additional $10 million in cash payable on the second anniversary of the closing of the transaction.

Unwired Planet holds approximately 2,500 issued and pending US and foreign patents covering technologies that allow mobile devices to connect to the Internet and enable mobile communications. The portfolio includes patents related to key mobile technologies, including baseband mobile communications, mobile browsers, mobile advertising, push notification technology, maps and location-based services, mobile application stores, social networking, mobile gaming, and mobile search.

Intel Acquires YOGITECH for ADAS

Intel is acquiring YOGITECH S.p.A., which specializes in semiconductor functional safety and related standards. Financial terms were not disclosed.

YOGITECH's work focuses on functional safety (including Advanced Driver Assistance Systems or ADAS) of transportation and factory systems. One of the fastest-growing segments in automotive electronics, ADAS makes features like assisted parking possible and paves the way for fully autonomous vehicles in the not-so-distant future.

The YOGITECH team, based in Italy, will join Intel’s Internet of Things Group.

Electric Imp Raises $21 Million for IoT Platform

Electric Imp, a start-up based in Los Altos, California with offices in Cambridge, UK, raised $21 million in Series C funding for its IoT platform that securely connects devices to advanced cloud computing resources.

Electric Imp's solution includes fully integrated hardware, OS, security, APIs and cloud services.

London-based Rampart Capital led the funding round alongside company insiders and returning venture capital firm Redpoint Ventures. This brings total funding to $43 million.

"This funding is a natural step in Electric Imp’s ongoing expansion and validates our approach with large commercial and industrial customers including Pitney Bowes and other yet to be announced global enterprises,” said Hugo Fiennes, CEO and co-founder of Electric Imp. "Our company is strategically positioned to maximize the potential of our industry-leading technology platform where proven security and scalability are critical to commercial and industrial enterprises.”

“In 2014, we proved the reliability and usability of our scalable platform in the consumer market, and partnered with Murata to design and build our hardware modules, enabling our customers to connect their devices quickly, easily, and securely,” continued Fiennes. “In 2015, we launched our enterprise cloud offerings, which allow customers to build on top of our class-leading platform, accelerating their company-wide IoT strategies. Our continued focus on enterprise services has helped us with key customer wins, and has enabled our customers to get their devices connected in record time without sacrificing security.”

Puppet Refreshes its Brand

Puppet Labs officially shortened its name to "Puppet" as part of a corporate rebranding aimed at the $200 billion software infrastructure market that is emerging as a result of mass migration to the cloud.

“Software powers everything around us, from the devices on our wrists and our walls to the work we do, the fun we have, and everything in between. Modern cars are powered by millions of lines of code, our financial world is entirely mediated by software to enable speed and throughput, and it’s critical to delivery of core functions like medicine, utilities, and food. Nevertheless, most businesses take weeks, months and even years to deliver everything from simple upgrades to the latest innovations, and too much of this software is out of date, insecure, and thus a barrier to progress rather than an enabler of it,” said Luke Kanies, Puppet founder and CEO.

Puppet also announced today new leadership, product updates, integrations, resources and branding.

Sanjay Mirchandani was named president and COO -- the first executive to hold this position at Puppet. He previously served as a senior vice president of VMware.

Project Blueshift and Puppet Enterprise 2016.1 – Blueshift represents Puppet's engagement with leading-edge technologies and their communities — technologies like Docker, Mesos and Kubernetes — and Puppet's commitment to giving organizations the tools to build and operate constantly modern software. The new Puppet Enterprise 2016.1 gives customers direct control of — and real-time visibility into — the changes they need to push out, whether to an app running in a Kubernetes cluster or a fleet of VMs running in AWS.

Atlassian HipChat integration – This new integration makes it possible for DevOps teams to direct change with the Puppet Orchestrator, see change as it occurs, then discuss and collaborate on changes in process — all right in HipChat.

Splunk integration – Proactive monitoring of infrastructure and applications is a key DevOps practice, enabling continuous improvement. The Puppet Enterprise App for Splunk now extends the Splunk platform to Puppet customers to diagnose issues and solve problems faster, so they can deploy critical changes with confidence.

Molex Acquires Interconnect Systems

Molex has acquired Interconnect Systems, which specializes in the design and manufacture of high density silicon packaging with advanced interconnect technologies.

Interconnect Systems, which is based in Camarillo, California, delivers advanced packaging and interconnect solutions to top-tier OEMs in a wide range of industries and technology markets, including aerospace & defense, industrial, data storage and networking, telecom, and high performance computing.

Molex said the acquisition enables it to offer a wider range of fully integrated solutions to customers worldwide.

“We are thrilled to join forces with Molex. By combining respective strengths and leveraging their global manufacturing footprint, we can more efficiently and effectively provide customers with advanced technology platforms and top-notch support services, while scaling up to higher volume production,” said Bill Miller, president, ISI.

Wednesday, April 6, 2016

OpenPOWER Advances in HyperScale Data Center Race

Since its founding two years ago, the OpenPOWER Foundation, which is an open development alliance based on IBM's POWER microprocessor architecture, has grown to more than 200 participating companies and organizations. The goal is to build advanced server, networking, storage and GPU-acceleration technology for next-generation, hyperscale data centers.

At the second annual OpenPOWER Summit, held in San Jose this week, more than 50 new infrastructure and software innovations spanning the entire system stack, including systems, boards, cards and accelerators, were showcased.

Some highlights:

  • New Servers for High Performance Computing and Cloud Deployments – Foundation members introduced more than 10 new OpenPOWER servers, offering expanded services for high performance computing and server virtualization.
  • Google is developing a next-generation OpenPOWER and Open Compute Project form factor server. Google is working with Rackspace to co-develop an open server specification based on the new POWER9 architecture, and the two companies will submit a candidate server design to the Open Compute Project.
  • Rackspace announced that “Barreleye” has moved from the lab to the data center. Rackspace anticipates “Barreleye” will move into broader availability throughout the rest of the year, with the first applications on the Rackspace Public Cloud powered by OpenStack. Rackspace and IBM collectively contributed the “Barreleye” specifications to the Open Compute Project in January 2016.
  • IBM, in collaboration with NVIDIA and Wistron, plans to release its second-generation OpenPOWER server, which includes support for the NVIDIA Tesla Accelerated Computing platform. The server will leverage POWER8 processors connected directly to the new NVIDIA Tesla P100 GPU accelerators via the NVIDIA NVLink high-speed interconnect technology. Early systems will be available in Q4 2016. Additionally, IBM and NVIDIA plan to create global acceleration labs to help developers and ISVs port applications on the POWER8 and NVIDIA NVLink-based platform.
  • Expanded use of CAPI for Acceleration Technology – Foundation members, including Bittware, IBM, Mellanox and Xilinx, unveiled more than a dozen new accelerator solutions based on the Coherent Accelerator Processor Interface (CAPI). Alpha Data also unveiled a Xilinx FPGA-based CAPI hardware card at the Summit. These new accelerator technologies leverage CAPI to provide performance, cost and power benefits when compared to application programs running on a core or custom acceleration implementation attached via non-coherent interfaces. This is a key differentiator in building infrastructure to accelerate computation of big data and analytics workloads on the POWER architecture.
  • A Continued Commitment to Genomics Research – Following successful collaborations with LSU and tranSMART, OpenPOWER Foundation members continue to develop new advancements for genomics research. Edico Genome announced the DRAGEN Genomics Platform, a new appliance that enables ultra-rapid analysis of genomic data, reducing the time to analyze an entire genome from hours to just minutes, allowing healthcare providers to identify patients at higher risk for cancer before conditions worsen.
“To meet the demands of today’s data centers, businesses need open system design that provides greater flexibility and speed at a lower cost,” said Calista Redmond, President of the OpenPOWER Foundation and Director of OpenPOWER Global Alliances, IBM. “The innovations introduced today demonstrate OpenPOWER members’ commitment to building technology infrastructures that provide customers with more choice, allowing them to leverage increased data workloads and analytics to drive better business outcomes.”

U-Michigan Collaborates with IBM on HPC

The University of Michigan is collaborating with IBM to develop and deliver “data-centric” supercomputing systems based on OpenPower architecture.

Specifically, under a grant from the National Science Foundation, U-M researchers have designed a computing resource called ConFlux to enable high performance computing clusters to communicate directly and at interactive speeds with data-intensive operations.

ConFlux incorporates IBM Power Systems LC servers and is also powered by the latest additions to the NVIDIA Tesla Accelerated Computing Platform: NVIDIA Tesla P100 GPU accelerators with the NVLink high-speed interconnect technology. Additional data-centric solutions U-M is using include IBM Elastic Storage Server, IBM Spectrum Scale software (scale-out, parallel access network attached storage), and IBM Platform Computing software.

An initial application for the high-performance system involves a simulation of turbulence around aircraft and rocket engines.

Nokia Begins Post-Merger Job Cuts

Nokia announced global job cuts related to its recent acquisition of Alcatel-Lucent.

Nokia is targeting EUR 900 million of operating cost synergies to be achieved in full year 2018.

The headcount reductions are expected to take place between now and the end of 2018, consistent with the company's synergy target timeline.

The company said personnel reductions will come largely in areas where there are overlaps, such as research and development, regional and sales organizations as well as corporate functions. Consultations are beginning with the company's two European Works Councils. Similar meetings and consultations with employee representatives are taking place in almost 30 countries in the coming weeks. Processes and timelines will vary from one country to another.

"These actions are designed to ensure that Nokia remains a strong industry leader," said Nokia President and CEO Rajeev Suri. "When we announced the acquisition of Alcatel-Lucent we made a commitment to deliver EUR 900 million in synergies - and that commitment has not changed. We also know that our actions will have real human consequences and, given this, we will proceed in a way that is consistent with our company values and provide transition and other support to the impacted employees."

Alliance for Open Media Targets Open Source Video Codec

The Alliance for Open Media, which was launched last year with the goal of developing an open standard for video compression and delivery over the web, has just added AMD, ARM and NVIDIA to its ranks.

The AOMedia consortium already included Amazon, Cisco, Google, Intel, Microsoft, Mozilla and Netflix.

The Alliance is also announcing public availability of its AOMedia Video source code as an open source project, and is welcoming contribution from the broader developer community.

"The open source availability of our AOMedia Video project with active contributions from industry leaders marks the beginning of a new era of openness and interoperability for Internet video," said Gabe Frost, Executive Director, the Alliance for Open Media. "We’re delighted to welcome AMD, ARM, and NVIDIA to the Alliance for Open Media, reflecting the importance of hardware support to achieve broad industry adoption.”

Baidu's Silicon Valley R&D Team Demos Deep Speech

Baidu Research demonstrated its Deep Speech technology integrated into Peel's AI-based platform for home control to create next-generation voice-enabled smart home products.

Peel Smart Remote is a universal remote app that puts content discovery and device control onto a smartphone. The app has more than 150 million users in 200 countries and handles 10 billion remote commands monthly.

Deep Speech is a state-of-the-art speech recognition system developed using "end-to-end deep learning" by Baidu Research's Silicon Valley AI Lab (SVAIL).

The Peel demo uses speech recognition to access live TV, DVR and streaming content seamlessly across devices. For example, using voice commands, a user can switch between House of Cards on Roku and Game of Thrones on cable TV, or ask to see a line-up of comedy shows or programs about the US presidential election.

Baidu's Adam Coates, who leads the SVAIL team, said: "Speech recognition is at an inflection point. In the future, it will be as easy to talk to your devices as it is to talk to the person next to you. We are excited about the potential of the collaboration with Peel to bring that experience to users."

Peel Co-founder and Chief Product Officer Bala Krishnan added: "This collaboration is opening up new exciting possibilities for our users. Voice command and artificial intelligence are the foundation for the next generation of Peel universal home control."

Vasona Raises $14.6 Million for Mobile Edge Platform

Vasona Networks, a start-up based in San Jose, California with R&D in Tel Aviv, secured $14.6 million in series C funding for its platform for mobile network capacity, resource management and edge intelligence.

Key Vasona Networks offerings include the SmartAIR edge application controller and SmartVISION analysis suite. These platforms leverage edge locations between core and radio access networks, which is a key position for precise traffic management and monitoring at scale. SmartAIR assesses each cell’s conditions and acts on congestion in real time, taking into account the cause and the subscriber’s location. SmartVISION gives unprecedented live and historical visibility into networks at the level of individual cell performance.

The company said it is deployed by several tier-one mobile network operators in four of the world’s largest cities, including announced use by Telefónica UK in its London O2 system.

Participants in the funding round include Bessemer Venture Partners, New Venture Partners and NexStar Partners.  This brings total funds raised by the company to $48 million.

“We are working with the world’s top mobile network operators on pressing and emerging needs, including the constant pursuit of better mobile experiences,” says Vasona Networks CEO Biren Sood. “As operators turn their focus to edge-based traffic management for the most value, flexibility and control, our capabilities best meet business and network demands in any market.”

Coriant Launches Multi-layer Network Optimization and Planning

Coriant introduced a Multi-layer Network Optimization and Migration Planning Service to help network operators optimize multi-layer (Layer 0-3) network resources while reducing the operational cost and complexity associated with service deployment and time-to-revenue.

Key benefits of the vendor-agnostic service include:

  • Improve multi-layer network efficiency by optimizing network design based on traffic patterns
  • Reclaim network resources through advanced multi-layer network defragmentation capabilities and capacity analysis
  • Identify potential network capacity bottlenecks to eliminate service delays and inefficient use of resources
  • Simplify operations by dramatically reducing the effort required to plan new services for the short and the long term
  • Expedite business case development (including potential space and power savings) for migration from legacy infrastructure to the latest and most suitable technology

“As video and cloud services drive unprecedented data growth and more dynamic traffic workloads, it is increasingly important to ensure that IP and optical transport networking resources are utilized in the most efficient way,” said Tarcisio Ribeiro, Executive Vice President, Global Sales and Services. “This new service leverages Coriant’s expertise in IP and optical networking to facilitate OpEx- and CapEx-optimized planning and implementation as networks evolve to support a new generation of cloud-centric services and applications.”

Germany's Energy Grid Deploys Ciena

Open Grid Europe, Germany's leading natural gas transmission network operator, is working with Ciena, in consultation with Kapsch CarrierCom, to deploy a dedicated communications network backbone.

The programmable network will enable Open Grid Europe’s intelligent coordination of gas transmission and sustainable energy supply. The deployment includes Ciena’s 6500 Packet-Optical Platform and unified management capability.

Nokia Offers Passive Optical Alternative to Ethernet

Nokia is launching a Passive Optical LAN (POL) solution positioned as an alternative to copper-based Ethernet LANs for operators, enterprises, governments, healthcare and hospitality providers, and higher education institutions.

Nokia said its POL solution, which leverages existing GPON technology, is less costly and requires, on average, 50 percent less space to deploy and power to run than traditional Ethernet-based LANs.

To better serve customers around the world, Nokia is collaborating on the launch with global system integrators, resellers and distributors including IBM and KDDI.

Nokia's POL solution includes:

  • 7360 ISAM FX high-capacity fiber platform
  • 7368 ISAM ONT fiber termination points
  • 5571 POL command center (PCC) intuitive management system

Federico Guillen, president of Nokia's Fixed Networks business group, said: "New solutions for enterprise LAN are needed due to growing capacity needs, management complexity, network maintenance and high upgrade costs.  Passive optical LAN provides a viable, simple and cost-effective alternative, and will accommodate the evolving connectivity needs of organizations today and in the future."

ZTE's Revenue Tops RMB100.1 Billion in 2015

ZTE's full-year operating revenue for 2015 topped RMB100.19 billion (US$15.62 billion), while operating cash flow and cash dividend hit historic highs, as sales of 4G products, smart city and emerging ICT technologies pushed operating revenue up 23% compared to 2014.

The ZTE Group reported net profit attributable to holders of ordinary shares of the listed company of RMB3.21 billion for 2015, with basic earnings per share amounting to RMB0.78. The Group’s operating cash flow of RMB7.405 billion, calculated according to PRC ASBEs, was an all-time record for the Group, while the cash dividend of RMB1.038 billion was the highest in the Group's history.

2015 highlights:

In 2015, domestic carriers expanded the scale of construction of planned 4G networks and their ancillary facilities following the issuance of FDD-LTE permits. Carriers were increasingly concerned with and increased their investments in Cloud Computing, Internet of Things, Big Data, Smart City and high-end routers, although investments in equipment remained focused on wireless, transmission, access and broadband sectors.

The trend of growing equipment investment in the global telecommunications industry continued in 2015, with enormous opportunities for innovation provided by Industry 4.0, Smart City, informatisation of the medical sector, informatisation of the education sector, mobile e-commerce, agricultural modernisation and other developments.

Domestically, the Group reported operating revenue of RMB53.11 billion, accounting for 53% of the Group’s overall operating revenue. The issuance of FDD-LTE permits, “Internet+” and the rush for optical fibre upgrades drove further growth in investment in 4G equipment and broadband networks. Meanwhile, the Group also vigorously expanded its cloud computing and Big Data services, Smart City projects and high-end router business.

In the international market, the Group reported operating revenue of RMB47.08 billion, accounting for 47.0% of the Group’s operating revenue. The Group continued to implement its strategy of focusing on populous nations and mainstream carriers, and to deepen partnerships with mainstream carriers in the global market to help those carriers add value through Internet applications. The Group’s global operator strategy targeting the top 125 carriers generated over 25.2% revenue growth from the largest providers in Europe and the US. In Europe, significant partnerships and initiatives were forged with Spain’s Telefonica, Hutchison Drei Austria, Telenor in Norway and Dutch carrier VimpelCom.

During the year, the Group reported operating revenue of RMB57.22 billion for carrier networks. Operating revenue for the government and corporate business amounted to RMB10.50 billion, and operating revenue for the consumer business amounted to RMB32.47 billion.

Carrier networks

In carrier networks and wireless products, the Group continued to prioritize and break into the high-end markets of Europe and North America with cutting-edge innovations in Massive MIMO, Cloud Radio, QCell, UBR and Magic Radio technologies. The Group also made significant breakthroughs in the research of key 5G technologies and successfully deployed its commercially ready Pre-5G technology. ZTE has partnered with China Mobile, Softbank, Deutsche Telekom, Orange, KT Corporation and U Mobile on 5G and Pre-5G initiatives.

In wireline and optical communications products, the Group sustained stable growth thanks to dedicated efforts in product innovation and solution operations. The growth also stemmed from ongoing improvements in product competitiveness, leadership in new technology categories, optimised global market development and the reinforced strategy of focusing on populous nations and mainstream carriers.

Government and corporate business

In cloud computing and emerging ICT technologies, the Group strengthened its R&D and investment in RCS, Big Video, cloud computing, Big Data and the Internet of Things as core elements of its M-ICT master strategy. The establishment of data centres in domestic and international markets was expedited, and breakthrough achievements were made in cutting-edge areas such as Big Video solutions and smart Q&A systems.

Seizing opportunities afforded by global digitalization, Smart City construction, “Internet+” and autonomous and controllable information security, ZTE achieved rapid growth in sectors such as finance, Internet, new energy, transport and Smart City.

Consumer business

The Group continued its shift of focus to consumer products and an Internet-driven mentality. ZTE smartphone shipments totaled over 56 million units, growing 16% year on year, with overseas shipments growing more than 70% and establishing market leadership in several major countries. In 2015, ZTE sold 15 million smartphones in the US, growing over 30% year on year and breaking into the top four providers. ZTE ranked number one in the paid market in Australia, third in Russia (10% market share), and among the top four or five in Turkey, Mexico and South Africa.

In handset terminals, the Group continued to deepen its global strategic setup, with a strong focus on major nations, refined products, channel enhancement, branding, product quality control and team building, while paying close attention to customer experience and vigorously driving transformation.