Showing posts with label Interview.

Sunday, May 13, 2018

Interview - Disaggregating and Virtualizing the RAN

The xRAN Forum is a carrier-led initiative aiming to apply the principles of virtualization, openness and standardization to one area of networking that has remained stubbornly closed and proprietary -- the radio access network (RAN) and, in particular, the critical segment that connects a base station unit to the antennas. Recently, I sat down with Dr. Sachin Katti, Professor in the Electrical Engineering and Computer Science departments at Stanford University and Director of the xRAN Forum, to find out what this is all about.

Jim Carroll, OND: Welcome Professor Katti. So let's talk about xRAN. It's a new initiative. Could you introduce it for us?

Dr. Sachin Katti, Director of xRAN Forum: Sure. xRAN is a little less than two years old. It was founded in late 2016 by me along with AT&T, Deutsche Telekom and SK Telecom, and it has grown significantly since then. We are now up to around ten operators and at least 20 vendor companies, so it has been growing quite a bit over the last year and a half.

JC: So why did xRAN come about?

SK: Some history on how this all happened: in my role as a faculty member here at Stanford, I was collaborating with both AT&T and Deutsche Telekom on something we called SoftRAN, which stood for software-defined radio access network. The research was really around how you take radio access networks, which historically have been very tightly integrated and coupled with hardware, and make them more virtualized -- disaggregating the infrastructure so that you have more modular components, and also defining interfaces between the different common components. I think we all realized at that point that to really have an impact, we needed to take this out of the research lab and get the industry, the cross-industry ecosystem, to join forces and make this happen in reality.

That's the context behind how xRAN was born. The focus is on how we define a disaggregated architecture for the RAN. Specifically, how do you take what's called the eNodeB base station and deconstruct the software running on it into modular components with open interfaces between them, which allows for interoperability so that you could truly have a multi-vendor deployment. And two, it also gains a lot more programmability, so that an operator can customize it for their own needs and enable new applications and new services much more easily, without having to go through a vendor every single time. It was really meant to address all of those aspects, and that's how it got started.

JC: Okay. Is there a short mission statement?  

SK: Sure. The mission of xRAN is to build an open, virtualized, disaggregated radio access network architecture with standardized interfaces between all of these components, and to be able to build those components in a virtualized fashion on commodity hardware wherever possible.

JC:  In terms of the use cases, why would carriers need to virtualize their RAN, especially when they have other network slicing paradigms under development?

SK: It's great that you bring up network slicing, actually. Network slicing is one of the driving use cases, and the way to think about it is that, in the future, everyone expects to have network slices with very different connectivity needs enabling different kinds of applications. So you might have a slice for cars that has very different bandwidth and latency characteristics compared to a slice for IoT traffic, which is a bit more delay tolerant, for example.

JC: And those are slices in a virtual EPC? Is that right?

SK: Those are slices that need to be end-to-end. It can't just be the EPC, because the SLAs you can offer for the kind of connectivity you deliver are ultimately dictated by what happens on the access. So, eventually, a slice has to be end-to-end, and the challenge is: if an operator wants to define new slices, how do they program the radio access network to deliver the SLA, the connectivity, that each slice needs?

In the EPC there has been a lot of progress on the interfaces that enable such slicing, but similar progress has not happened in the RAN. How do you program the base station, and the access network itself, to deliver that slicing capability? That has been one of the driving use cases since the start of xRAN. Another big use case -- I'm not sure whether we should call it a use case, but it is certainly a need -- is multi-vendor deployment. Historically, radio access network deployments have been single vendor. If you take a U.S. operator, for example, they literally divide up their markets into an Ericsson market or a Nokia market and so on, and the understanding is that everything in that market, from the base station to the antenna to the backhaul, comes from one vendor. They really cannot mix and match components from different vendors because there haven't been interoperable interfaces. So the other big requirement coming out of all this is interoperability in the multi-vendor environment that operators want to get to.
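To make the slicing idea concrete, here is a minimal illustrative sketch in Python of how per-slice connectivity requirements might be captured so that a RAN controller could act on them. The slice names and numbers are hypothetical and are not taken from any xRAN specification.

    from dataclasses import dataclass

    @dataclass
    class SliceRequirement:
        """Connectivity targets a RAN scheduler would have to honor."""
        name: str
        max_latency_ms: float       # latency budget assumed for this slice
        min_throughput_mbps: float  # guaranteed bit rate assumed for this slice
        delay_tolerant: bool        # whether traffic can be buffered

    # Hypothetical slices mirroring the examples in the interview.
    slices = [
        SliceRequirement("vehicles", max_latency_ms=5.0,
                         min_throughput_mbps=50.0, delay_tolerant=False),
        SliceRequirement("iot-metering", max_latency_ms=500.0,
                         min_throughput_mbps=0.1, delay_tolerant=True),
    ]

    for s in slices:
        profile = "low-latency" if s.max_latency_ms <= 10 else "best-effort"
        print(f"{s.name}: schedule as {profile}")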

JC: How about infrastructure sharing? We see that the tower companies are growing by leaps and bounds, and many carriers are thinking that it may no longer be strategically important to own the tower, so they share the tower, and they might share the backhaul as well.

SK: It will actually help. It enables that kind of sharing at an even deeper level, because if the infrastructure is virtualized and running on commodity hardware, it becomes easier for a tower company to set up the compute substrate and the underlying backhaul substrate and then provide virtual infrastructure slices for each operator to operate on top of. Right now operators are basically renting physical space on the tower; if instead you could share the same underlying compute substrate, the same backhaul infrastructure and the fronthaul infrastructure, virtually slice it, and run multiple networks on top, it becomes possible to share the infrastructure even more. So virtualization is almost a prerequisite to any of this infrastructure sharing.

JC: Tell us about the newly released xRAN fronthaul specification version 1.0. What body of work does it build on?

SK: Sure, let me step back and talk about all of the standardization efforts, and then I'll answer the question. xRAN has three main working groups. One is around fronthaul, which refers to the link between the radio head and the baseband unit. This is the transport that carries data between the baseband unit and the radio for transmission and, in the reverse direction, whatever is received from the mobile device. The second is around control plane and user plane separation in the base station. Historically, the control plane and the user plane have been tightly coupled, and a significant working group effort in xRAN right now is how to decouple them and define standardized interfaces between a control plane and a user plane. And the last working group is defining the interfaces between the control plane of the radio access network and orchestration systems like ONAP. So those are the three main focus areas.

Our first specification, which describes the fronthaul interfaces, was released this month. So, what went on there? The problem we solved concerns closed interfaces. Today, if you buy a base station, you also have to buy the antenna from the same vendor. For example, if you bought an Ericsson base station, you have to buy the antenna from Ericsson as well. There are very few compatible antenna systems, but with 5G, and even with 4G, there has been a lot of innovation on the antenna side. There are innovators developing massive MIMO systems, which have lots of antennas and can significantly increase the capacity of the RAN. Many start-ups are trying to do this, but they are struggling to get traction because they cannot sell their antennas and connect them to an existing vendor's baseband unit. So a critical requirement that operators pushed was to make this fronthaul specification truly interoperable, making it possible to mix and match: you could take a small vendor's radio head and antenna and connect it with an established vendor's baseband unit -- that was the underlying requirement. What the new fronthaul work is trying to accomplish is to make sure this interface is very clearly specified so that you do not need tight integration between the baseband unit and the radio head unit.
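The mix-and-match goal is, at heart, an interface contract. As a rough illustration only (the class and method names below are invented for this sketch, not taken from the xRAN fronthaul specification), the idea is that a baseband unit programs against a common radio-head interface rather than a vendor-specific one:

    from abc import ABC, abstractmethod

    class RadioHead(ABC):
        """Common contract any vendor's radio head would implement (hypothetical)."""
        @abstractmethod
        def send_iq_samples(self, samples: bytes) -> None: ...
        @abstractmethod
        def report_status(self) -> dict: ...

    class StartupMassiveMimoHead(RadioHead):
        def send_iq_samples(self, samples: bytes) -> None:
            print(f"transmitting {len(samples)} bytes over 64 antenna ports")
        def report_status(self) -> dict:
            return {"vendor": "startup", "ports": 64}

    class BasebandUnit:
        """Works with any RadioHead, regardless of vendor."""
        def __init__(self, radio: RadioHead) -> None:
            self.radio = radio
        def transmit(self, payload: bytes) -> None:
            self.radio.send_iq_samples(payload)

    bbu = BasebandUnit(StartupMassiveMimoHead())
    bbu.transmit(b"\x00" * 1024)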

This fronthaul work came about initially with Verizon, AT&T and Deutsche Telekom driving it. Over the past year, multiple operators have joined the initiative, including NTT DOCOMO, along with several vendors they brought along, including Nokia, Samsung, Mavenir and a number of other companies, all coming together to write the specification and contribute IP towards it.

JC: Interesting, so you have support from those existing vendors who would seem to have a lot to lose if this disaggregation played out unfavorably for them.

SK: Yes, we do. Current xRAN members include all of the bigger vendors, such as Nokia and Samsung, especially on the radio side. Cisco is a member, more on the orchestration side, and there are several other big vendors that are part of this effort. And yes, they have been quite supportive.

The xRAN Forum is an operator-driven body. The way we set up a new working group or project is that operators come in and tell us what their needs and use cases are, and if we see enough consistency, when multiple operators share the same need or use case, that leads to the start of a new working group. The operators often end up bringing their vendors along by saying, "We need this, we are going to drive it through the xRAN consortium, and we need you to come and participate, otherwise you'll be left out." That's typically how vendors are persuaded to open up.

JC: Okay, interesting, so let's talk a little bit about the timelines and how this could play out. You talked about plugging into an existing baseband unit or base station unit so I guess there is a backward compatibility aspect?

SK: No, we are not expecting operators to build entirely new networks. The first fronthaul specification is meant for both 4G and 5G. The fronthaul is actually independent of the underlying air interface, so it can work with 4G networks. On the baseband side, it does require a software update. It does require these systems to adhere to the spec in terms of how to talk to the radio head, and if they do, the expectation is that someone should be able to plug in a new radio head and make that system work. That being said, where we are right now is that we have released a public specification. We believe it's interoperable, but the next stage is to do interoperability testing. We expect that to happen later this year. Once interoperability testing happens, we will know which sets of systems are compatible. Then we will have, if you will, a certificate saying that these are compliant.

JC: And would that certification be just for the fronthaul component or would that be for the control plane and data plane separation as well?

SK: Our working groups are progressing at different cadences. The fronthaul specification is already out, and we expect to do the interoperability testing later this year; that will be only for the fronthaul. As and when we release the first specification for control plane and user plane separation, we will have a corresponding timeline. But one thing to realize is that these are not coupled. You could use the fronthaul specification on its own without the rest of the architecture. You could take existing infrastructure, implement just the fronthaul specification, and realize the benefits of interoperability without necessarily having a control plane that's decoupled from the user plane. The effort is structured so that each of those working groups can act independently. We didn't want to couple them, because that would mean it would take a long time before anything happens.

JC: Wouldn't some of the xRAN work naturally have fit into 3GPP or ETSI's carrier virtualization efforts? Why have a new forum?

SK: Definitely. 3GPP is a big intersection point. The way we look at it is that we are trying to work on areas that 3GPP elected not to. If it has anything to do with the air interface, for example how the infrastructure talks to the phone itself, we are not trying to work in that space. If it has anything to do with how the base station talks to the core network, we are not trying to specify that interface. But there are things that 3GPP elected not to work on, for whatever reason, and vendor incentives may come into play; perhaps vendors discouraged 3GPP from working on interoperable fronthaul interfaces. We don't know why 3GPP chose that path. You can see that this is also operator driven: operators want certain things to happen but have not been successful in getting 3GPP to do them. So xRAN is a venue for them to come in, specify what they want to accomplish, and get appropriately incentivized vendors to come together. So it is complementary in terms of the work effort, but I could see a scenario where the fronthaul specification we come out with, this one and the next one, eventually forms the basis for a 3GPP standardized specification. That's not necessarily a conflict -- that might actually be how things eventually get fully standardized.

JC: There are other virtualization ideas that have sprung up from this same lab and elsewhere in the Bay Area. How does this work in collaboration with CORD and M-CORD?

SK: Historically, I think virtualization has infected, if you will, the rest of the networking domain but has struggled to make headway in the RAN. If you look at the rest of the network, there has been a lot of success with virtualization; the RAN has traditionally been quite hard to do. I think there are multiple reasons for that. One is that the workload -- the things you want to do in the RAN -- is much more stressful and demanding in terms of processing than the rest of the network. The hardware is now catching up to the point where you can take off-the-shelf hardware and run virtualized instances of the RAN on top. That's been one reason.

Second, the RAN is also a little bit harder to disaggregate because many of the control plane decisions occur on a very fast timescale. For example, how should I schedule a particular user's traffic to be sent over the air? That's a decision the base station makes every millisecond, and at that timescale it's really hard to run that logic remotely. Having a separate piece of logic make the decision, communicate it to the data plane, and then have the data plane implement it -- which is classically how we think about SDN -- is not going to work when the round-trip latency you can tolerate is on the order of one millisecond; that is too stringent. We need to figure out how to deconstruct the problem, pull out the right amount of control logic, but still leave the very latency-sensitive pieces in the underlying data plane of the infrastructure itself. That's still a work in progress, and there are hard technical challenges there.
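A back-of-the-envelope check makes the timing argument concrete. The numbers below are illustrative assumptions, not measurements, but they show why a classic remote-controller SDN loop struggles with per-millisecond scheduling:

    # RAN schedulers make a decision roughly every transmission time interval.
    tti_ms = 1.0                  # scheduling decision interval (assumed 1 ms)
    controller_rtt_ms = 2.5       # assumed round trip to a remote control plane
    local_processing_ms = 0.2     # assumed time to compute the schedule itself

    remote_loop_ms = controller_rtt_ms + local_processing_ms
    print(f"remote control loop: {remote_loop_ms:.1f} ms per decision")
    print(f"budget per decision : {tti_ms:.1f} ms")
    if remote_loop_ms > tti_ms:
        # The latency-critical scheduling logic therefore has to stay in the
        # data plane; only slower-timescale policy moves to a remote controller.
        print("remote loop misses the deadline -> keep the scheduler local")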

JC: Okay, talking about inspiration, one last thing: is there an application that you have in mind that inspires this work?

SK: Sure. A pretty compelling example is network slicing. Look at these very demanding applications -- virtual reality and augmented reality, or self-driving cars -- there are very strict requirements on how that traffic should be handled in the network. If a self-driving car wants to offload some of its mapping and sensing capabilities to the edge cloud, the interaction loop between the car and the edge cloud has very strict requirements. You want the application to be able to come to the network and say, this is the kind of connectivity I need for my traffic, and you want the network to be programmable enough that the operator can program the underlying infrastructure to deliver that kind of connectivity to the self-driving car application.

I think those two classes of applications are characterized by latency sensitivity and bandwidth intensity. You don't get any leeway on either dimension. Right now, the people developing those applications do not trust the network. If you think about current prototypes of self-driving cars, the developers cannot assume that the network will be there, so they currently must build very complex systems to make the vehicle completely autonomous. If we truly want to build things where the cloud can actually play a role in controlling some systems, then we need this programmable network to enable such a world.
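As a sketch of what "the application tells the network what it needs" could look like, here is a toy request-and-admission check. The request format, the function name and the 5 ms network floor are assumptions made purely for illustration; they are not part of any published xRAN interface:

    def request_connectivity(app: str, latency_ms: float, bandwidth_mbps: float,
                             available_mbps: float = 200.0) -> bool:
        """Toy admission check: assume the RAN can deliver 5 ms at best,
        so requests tighter than that (or above spare capacity) are rejected."""
        can_serve = latency_ms >= 5.0 and bandwidth_mbps <= available_mbps
        verdict = "granted" if can_serve else "rejected"
        print(f"{app}: {latency_ms} ms / {bandwidth_mbps} Mbps -> {verdict}")
        return can_serve

    request_connectivity("self-driving-car-offload", latency_ms=10.0, bandwidth_mbps=80.0)
    request_connectivity("vr-streaming", latency_ms=2.0, bandwidth_mbps=150.0)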

JC: Excellent. Well, thank you very much and good luck!


Wednesday, April 18, 2018

Interview: Lee Chen, CEO of A10 Networks

A10 Networks, sometimes called the best-kept secret in networking thanks to its high-performance security and load balancing product lines and its quiet public-company profile, was founded by Lee Chen in 2007. Chen is a veteran of Silicon Valley networking start-ups, having served in key technical roles at Centillion Networks and later at Foundry Networks. Centillion Networks was a switching pioneer of the 1990s that was later acquired by Bay Networks. Foundry Networks was a follow-up company that was first to ship a Gigabit Ethernet switch, and it had the good fortune of completing its IPO near the peak of the Internet bubble in 1999. Years later, Foundry was acquired by Brocade. A10 Networks, based in San Jose, California, completed its IPO in March 2014.

For its most recently reported fiscal quarter (3Q2017), the company's revenue grew 12 percent year-over-year to $61.4 million. Service provider sales were about 53% of total revenue and enterprise sales were 47%. Total gross margin was 78.3%. A10 Networks postponed its next financial report, which had been expected in February, citing an internal investigation concerning a violation of its insider trading policy by a mid-level employee within its finance department.

Question: As yet another year of the annual RSA Conference gets underway, it's clear that the really critical cybersecurity issues have not gone away. The daily news is filled with stories of attacks on critical infrastructure, major cryptocurrency heists, interference by one country in the electoral process of others, and most recently the revelation from Facebook that possibly all of its nearly 2 billion users may have had profile data scraped by bad actors. What's your overall assessment of cybersecurity?

Lee Chen: My assessment of cybersecurity is that attacks will become more frequent and more sophisticated. Will they ever go away? It’s not impossible, but it’s not likely in the near term. I just don’t see them going away in the next 10-20 years. It’s a real part of our lives. That’s why it’s very important for every telecom operator, IT organization and enterprise customer to have a security policy in place.

Q: Let's talk about DDoS. Over time, the number, duration, and volume of attacks always go higher.  What are your observations?

Lee Chen: The number of DDoS attacks is increasing, the duration is getting longer, and the volume is getting higher. At the same time, vendor solutions are getting more sophisticated, with much higher performance, and they’re becoming automated and easier to deploy. These solutions are getting better over time. It’s like any technology – it never stands still, and you always have a new era of attacks and solutions. Users need to make sure they keep up to date with the latest and greatest technology from the industry’s best vendors.

Q: The rise of crypto currencies tells us that a lot of electronic money is moving from the well-defended infrastructure of major banks to smaller platforms that may exist in less secure environments, perhaps making them more vulnerable to DDoS attacks. Does that mean that a greater amount of the money supply or capital will exist in a more vulnerable environment?

Lee Chen: I do believe that cryptocurrency is here to stay. Many people believe it’s a blip, but I believe it’s here to stay. I'm not sure cryptocurrency is necessarily more vulnerable because one of the things about the use of blockchain with cryptocurrency is that blockchain is more secure and provides more privacy. Just like any new technology, it will constantly be the target of cyberattacks, but for cryptocurrency to evolve, it will need a significant investment in cybersecurity.

Q: We're starting to see really substantial numbers of IoT devices coming online, some with better security controls than others. In some cases, it is the enterprise that is deploying IoT in volume to track their own assets. How significant is the security threat?

Lee Chen: IoT’s threat to the enterprise is not significant today, but it will be in the future. IoT devices have the widest variety of use cases: some are about convenience, some about life and death, some about cost control, some about energy consumption. Counting on IoT devices to be secure is not realistic – IoT devices will never be fully protected, because attackers will always figure out a way to get through an IoT device’s security.

Just like in any security scenario, it always comes down to policy. You need to have a well-designed security policy in place to make sure the application and IoT devices are protected from malware and DDoS attacks, and also from other network and application attacks.

Q: The gaming industry is becoming the latest professional sport. Players have a lot on the line to win their competition, but here again, there is a need for a very clean network.  How is this segment developing?

Lee Chen: Gaming is a very interesting and very challenging industry. It’s one of the most demanding DDoS protection environments, as no dirty traffic is allowed in the industry. One significant difference for a gaming environment is that the network needs to be super clean. Because of the time-sensitive nature of a gaming environment, you can’t have any lag, and you can’t have any latency due to dirty traffic on the network. The gaming industry needs a device that is really sophisticated, because any dirty traffic will cause one side to lose, and the stakes are very high. You need a device that can detect attacks instantly and will never allow volumetric attacks to happen to the network.
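For illustration, a toy volumetric-attack check of the kind Chen describes might look like the following. The thresholds and traffic numbers are invented for this sketch; real mitigation appliances are far more sophisticated and work at line rate in hardware:

    from collections import Counter

    # Packets per second observed per source IP in the last measurement window
    # (made-up sample data).
    pps_by_source = Counter({
        "203.0.113.7": 1_200_000,   # looks like a volumetric flood
        "198.51.100.4": 300,        # normal player traffic
        "192.0.2.9": 450,
    })

    VOLUMETRIC_THRESHOLD_PPS = 100_000  # assumed per-source rate limit

    for src, pps in pps_by_source.items():
        if pps > VOLUMETRIC_THRESHOLD_PPS:
            print(f"drop/divert {src}: {pps} pps exceeds threshold")
        else:
            print(f"allow {src}: {pps} pps")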

Q: Cloud migration. The move to public cloud services is another megatrend. Many companies, of course, are pursuing a hybrid public/private cloud strategy and this changes their security posture. How do you think about security when traditional network boundaries are changing?

Lee Chen: Most enterprises are moving from traditional networks to the cloud, and all corporations will have some data in the cloud and some on their corporate networks. Cloud is a great opportunity for companies to invest in a hybrid cloud strategy – as a matter of fact, one of the largest public cloud providers is one of A10’s marquee customers and does just that, with 40 data centers and 45 TB of data protected globally by A10’s DDoS mitigation solutions.

Q: Carrier network virtualization - SDN and NFV are bringing the benefits of virtualization to carrier networks.  As they deploy x86-based infrastructure instead of proprietary systems, is this opening up new security vulnerabilities?

Lee Chen: Virtualized networks such as SDN and NFV do provide network efficiency, agility and flexibility, which is a must when it comes to providing good analytics and orchestration – all without vendor lock-in. And there are quite a few options when it comes to implementation: you have different versions of OpenStack, different vendors with their own versions, and different integrations, so there are actually more integration opportunities. In the longer term I can see significant advantages, and in the near term I see a lot of opportunities to integrate with the different vendors. Virtual solutions do have some challenges, because virtual management is a big issue. Overall, visibility and control are a must, and there is a good opportunity for application intelligence and analytics companies to provide good solutions for virtualized networks.

Q: We are starting to see the rise of autonomous vehicles as companies like Waymo, Uber, Lyft, Maven and others talk about deploying tens of thousands of vehicles.  These future businesses will rely heavily on low-latency, mobile networks, presumably 5G. Could they also be vulnerable to DDoS attacks?

Lee Chen: Similar to the gaming industry and gaming networks, autonomous vehicles need super-clean connectivity, because now we're talking about life and death. With 5G networks, the opportunity to update the software in autonomous cars is really great. I think autonomous car usage will become popular, although I don't know when. These cars absolutely need protection, because if the network is somehow compromised, the risk is very high. As with gaming, the DDoS protection needs to be very sophisticated and able to keep any volumetric attacks from entering the network. This DDoS detection and mitigation needs to be fast and driven by intelligent automation.


Monday, September 29, 2014

Interview: Cisco's Intercloud Meets the Equinix Cloud Exchange

Equinix will deploy Cisco's Intercloud capabilities in the Equinix Cloud Exchange, enabling native connectivity between the Cisco ecosystem and all of the third-party cloud services available via Equinix, including Amazon Web Services and Microsoft Azure.  Cisco Cloud Services will also become available in the Equinix Cloud Exchange.

In this interview, Ihab Tarazi discusses:

0:05 - About the Equinix Cloud Exchange
0:57 - Highlights of the new partnership between Equinix and Cisco
1:51 - New capabilities enabled by Intercloud
2:27 - Is the deal exclusive?
3:04 - Significance for the rest of the networking industry
3:35 - How does this impact Equinix customers and partners?
04:14 - Will this change the way Equinix designs and develops its own data centers?
5:01 - Will the Intercloud deployments be enabled across Equinix metro markets?

See video:  http://youtu.be/syio-23t2xU


Wednesday, September 24, 2014

Broadcom Interview: What's Next for Data Center Switching

In this 9-minute video interview, Nick Ilyadis, VP/CTO of the Networking Products Group at Broadcom, discusses the evolution of data center switching.

Key topics covered include:

00:08 - How are data center traffic patterns changing due to Big Data apps and scale-out clouds?

01:05 - How will data center networking evolve to accommodate cloud services across the WAN?

02:48 - What is the case for 25G/50G Ethernet?

04:10 - How do you see the NFV ecosystem emerging?

07:08 - What does the current trajectory of cloud services and hyper-scale data centers tell you about the future of switching?

08:28 - What's next for Broadcom?


http://youtu.be/F5Xn4dZVA0U


Wednesday, April 3, 2013

Interview: Nuage on Automating Data Centers for Cloud Services and MPLS VPNs


An edited interview between Jim Carroll, Editor of Converge! Network Digest, and Manish Gulyani, VP of Product Marketing, Alcatel-Lucent / Nuage Networks.

Converge! Digest:  How do you describe the Nuage Networks' solution?

Manish Gulyani: The Nuage Networks Virtualized Services platform is a software-only solution for fully automating and virtualizing data center networks. That’s our main value proposition.  As you know, today’s data center networks are very fragile, they use old technology, and they are very cumbersome to operate.  When we looked at cloud services, we found that storage and compute resources had been virtualized quite nicely, but the network really wasn’t there.  We saw a great opportunity to apply the lessons that we have learned in wide area networking along with SDN.  The idea is that if you want to sell cloud services, you need to support thousands of tenants.  And you want each tenant to think that they own their piece of the pie.  It has to feel like the experience of a private network, with full control, full security, full performance of a private network but with the cost advantages of a cloud solution, which is a shared infrastructure.  That’s what we’re bringing to the table with the Nuage solution.

Converge! Digest: So is the Nuage solution aimed specifically at those who want to sell cloud services?

Manish Gulyani: It is designed for anybody who runs a large enough data center that needs automation. For instance, the University of Pittsburgh Medical Center, which is one of our trial customers, does not sell cloud services but they have enough internal users and external tenants that want full control over a particular cloud resource.  If you can’t give them full control and automation, then the cloud resource is of no use.  You have to be able to turn up the cloud service as fast as the user turns up a VM, otherwise the cloud service doesn't work.  Whether it is a large enterprise, a web-scale company or a cloud service provider, all can benefit from the Nuage solution.

Converge! Digest: What are the strategic differentiators versus other SDN controllers out there?

Manish Gulyani: Some initial SDN solutions for data centers have come out in the last two years.  They took the approach of virtualizing primarily at Layer 2, which was a good first step beyond the VLAN architectures. But in our view, this isn't sufficient to go beyond the basic applications.  If you are limited to just Layer 2, you are not able to get the application design done the right way.  For example, if you want to do a three-tier application, you need routing, load balancing and firewalls – and all of those elements in a real architecture are very hard to coordinate in current SDN solutions.  So that is the first obstacle Nuage overcomes: we give you full Layer 2 to Layer 4 virtualization as a base requirement.  Once we’ve done that, the next issue is how do you make it scale?  You can’t restrict a cloud service to one data center.

If you have ambitions of being a cloud services provider and you run multiple data centers, you want the power to freely move server workloads between data centers.  If you cannot connect the data centers in a seamless fashion, then you haven’t satisfied the demand. So our solution scales to multiple data centers and provides seamless connectivity.  The third obstacle we overcome is this:  now that the cloud services are running, how can people on a corporate VPN get access to these resources?  How can they securely connect to a resource that has just been turned up in a data center?

We provide full, seamless connectivity to a VPN service.  We extend from Layer 2 to Layer 4, we make it seamless across data centers, and then we extend it across the wide area network by seamlessly integrating with MPLS VPNs. So those are our virtual connectivity layers.

We also automate it and make it easy to use.  A lot of our energy has gone into the policy layer, which lets the user define a service without knowing any network-speak.  It’s just IT-speak and no network-speak.  It might seem strange for a networking company to say that its customers do not need to learn about VLANs or subnets or IP addresses – just zones and domains and an application connectivity language.  When a workload shifts from one data center to another, all of the IP addressing and subnetting has to change, but real users can’t manage that because it is too hard to do. If it can just happen in the background, they’re good with that.  The final thing we said is that it has to be totally touchless.

The reason people are excited about the cloud is that it is quick. In fact, IT departments worry that users sign up for public cloud services because the internal IT team can’t deliver quickly enough.  If you need 10 new servers or VMs of capacity, why wait 3-4 weeks for your IT department to purchase and install the equipment, when you can log onto Amazon Web Services today and activate that capacity immediately with a credit card?  The Nuage policy-driven architecture basically says “turn up the VM, look up the policy, set up the connection” – nobody actually touches the network.  That’s our innovation.
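A minimal sketch of that "turn up the VM, look up the policy, set up the connection" flow is shown below. The zone names, policy fields and function names are hypothetical, invented for this illustration rather than taken from Nuage's actual policy language:

    # Policies are keyed by zone and domain, in "IT speak" rather than VLANs or subnets.
    policies = {
        ("web-zone", "prod-domain"): {"allow_to": ["app-zone"], "ports": [443]},
        ("app-zone", "prod-domain"): {"allow_to": ["db-zone"], "ports": [5432]},
    }

    def on_vm_turn_up(vm_name: str, zone: str, domain: str) -> None:
        """Triggered when the hypervisor reports a new workload; no manual steps."""
        policy = policies.get((zone, domain))
        if policy is None:
            print(f"{vm_name}: no policy for {zone}/{domain}, leaving it isolated")
            return
        for peer_zone in policy["allow_to"]:
            # In a real system this step would program the virtual switch via the
            # controller; here we only show the decision being derived from policy.
            print(f"{vm_name}: connect to {peer_zone} on ports {policy['ports']}")

    on_vm_turn_up("web-01", "web-zone", "prod-domain")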

Converge! Digest:  Since it is a software suite, what type of hardware do you run on?

Manish Gulyani:  Nuage runs on virtual machines.  It runs on general-purpose compute.  Our Services Directory is a virtual machine on any compute platform. Our Services Controller runs on a VM. And our virtual routing and switching component, an Open vSwitch-based implementation, is essentially an augmentation of what runs today on a hypervisor.  You can’t go into a cloud world and propose new hardware, because it is a virtualized environment.  We have no constraints on the type of compute platform.  The whole idea is to apply web-scale technologies.  We also offer horizontal scaling, where many instances run in parallel and can be federated.

Converge! Digest:  Alcatel-Lucent is especially known for IP MPLS, and yet Nuage is largely a data center play.  What technologies does Nuage inherit from Alcatel-Lucent that give it an edge over other SDN start-ups?

Manish Gulyani:  At Alcatel-Lucent, we learned a lot about building very large networks with IP MPLS.  That is a baseline technology deployed globally to offer multi-tenancy with VPNs on shared WAN infrastructure.  Why not use similar techniques inside the data center to provide the massive scale and virtualization needed for cloud services?  We took our Service Router operating system, which is the software running on all our IP platforms, took the elements that we needed, and virtualized them.  This enables them to run in virtual machines instead of dedicated hardware, and it gives us the techniques and protocols for providing virtualization. Then we applied more SDN capabilities, such as a simplified forwarding plane that is controlled by OpenFlow, lives in the server, and enables us to quickly configure the forwarding tables. Because of the way we use IP protocols in wide area networks, we can support federation of our controllers.  That’s how we link data centers together: they talk standard IP protocols -- BGP -- to create the topology of the service, and in the same way they extend to MPLS VPNs.  As I said, the key requirement for enterprises is to connect to data center cloud services using the MPLS VPNs they are familiar with today.  This same SDN controller can now easily talk to the WAN edge router running MPLS VPNs.  We seamlessly stitch the data center virtualization all the way to the MPLS VPN in the wide area network and provide end-to-end connectivity.

Converge! Digest:  Two of the four trial customers for Nuage announced so far are Service Providers (SFR and TELUS), presumably Alcatel-Lucent MPLS customers as well, and of course many operators are trying to get into cloud services.  So, is that a design approach of Nuage?  Build off of the MPLS deployments of Alcatel-Lucent?

Manish Gulyani:  It doesn’t have to be.  At Nuage, we don’t need Alcatel-Lucent to be the incumbent supplier in order to sell this solution.  But of course it helps if customers already know us and already trust us to run highly scalable networks, so when we talk about the scalability of data centers, we have a lot of credibility built in. Both SFR and TELUS have the ambition to offer cloud services.  I think they recognize that they must move to virtualization in the data center network and that the connectivity must be extended all the way to the enterprise.  Nuage can deliver a solution unlike anything from anybody else today.  Existing SDN approaches only deliver virtualization in some subset of the data center; they can’t cross that boundary.  Carriers want to have multiple cloud data centers, but they cannot connect those resources easily to MPLS VPNs today. We give them that solution.

Converge! Digest:  In cloud services, it’s becoming clear that a few players are running away with the market.  You might say Amazon Web Services, followed by Microsoft Azure, Rackspace, Equinix and maybe soon Google, are capturing all the momentum.  One thing these guys have in common is a desire to be carrier neutral, so they are not tied to a particular MPLS service or footprint. Will Nuage appeal to these cloud guys too?

Manish Gulyani:  It will.  In fact, we are talking to some of these companies. As I said, Nuage is not designed only for telecom operators; it is designed for anyone who wants to sell cloud services and runs very large data centers.  Providers with multiple data centers, like Equinix, will need the automation.  Until you virtualize and automate the data center, forget about selling cloud services.  Step 1 is creating the automation inside the data center; connecting to MPLS VPNs is step 2.  Amazon was among the first, but they had to develop all of this themselves -- there was no solution on the market, so they built that step 1 automation themselves. We now know that Amazon found it quite cumbersome to get secure connectivity between clouds, and they are also experiencing how hard it is to connect a corporate VPN into the Amazon cloud; it can be tedious.  If others are going to offer services like Amazon’s, and they don’t have the size and wherewithal to figure it out themselves, then Nuage will get them there.

Converge! Digest:  On this question of data center interconnect (DCI), Alcatel-Lucent also has expertise at the optical transport layer, especially with your photonic switch. Will Nuage extend this SDN vision to the optical transport layer?

Manish Gulyani: We sell a lot of data center interconnect at both the optical layer and the MPLS layer, such as DWDM hitting the data center and MPLS in an edge router.  We sell a lot of 100G on our optical transport systems because that really is the capacity needed for DCI. So that’s the physical connectivity.  The logical connectivity is what you need to move a virtual machine from one data center to another.  Even though secure, physical connectivity exists between these data centers, the logical connectivity just is not there today. Nuage gives you that overlay on top of the physical infrastructure to deliver a per-tenant slice with the policy you want.

Converge! Digest:  How big is Nuage as a company in terms of number of employees?

Manish Gulyani:  We haven’t talked publicly about the size of the company or head count.

Converge! Digest:  About this term “spin-in” that is being used to describe Nuage… what does it mean to call Nuage a spin-in of Alcatel-Lucent?  How is the company organized?

Manish Gulyani:  Spin-in means that we are an internal start-up inside Alcatel-Lucent.  There is a very good reason Alcatel-Lucent structured this as an internal start-up rather than an external one.  Nuage leverages so much existing Alcatel-Lucent intellectual property that there was no way the company could let it outside for others to have.  We would essentially have had to put out our Service Router operating system for others to value and control the intellectual property and associate equity investments with it, which would have been too complicated.  Others have tried to spin out a new start-up with third-party investors, only to find that they had to acquire it back because they did not want their intellectual property to fall into the hands of others. Still, Nuage has full freedom to develop its solution and the right atmosphere to pull in the right talent.  We need a good mix of networking people and IT people, and we've been able to bring in people who built Web 2.0 scaled-out IT solutions.

Converge! Digest: So Nuage is not a separate legal entity that can offer stock options to attract talent?

Manish Gulyani: No, Nuage is a fully funded internal start-up that is not a separate legal entity.

A start-up identity separate from Alcatel-Lucent also enables us to sell into the new cloud market, which is a different space from what Alcatel-Lucent has traditionally pursued. So we can go after a different market and attract new talent, but still leverage the existing intellectual property that is essential to getting a good solution to market. This structure gives us freedom in multiple dimensions.
