Cisco ONE Controller – SDN Startup Killer?

Military nations demonstrate their power by testing nuclear weapons. Pure-play networking vendors display theirs in the SDN ecosystem by releasing controllers. ~Anonymous

I sat in today on Cisco's webcast on OpenFlow and the ONE Controller. Cisco's CTO of Engineering and Chief Architect, David Ward, spoke at length about the announcement. Ward also chairs the Technical Advisory Group of the Open Networking Foundation (ONF). The webcast featured two use cases – one in the Enterprise (Indiana University) and one in the Service Provider (NTT Communications) arena.

OpenFlow Model

A typical OpenFlow controller, or switch as defined by the standards, interfaces with the data plane via two protocols: OF-Config, the OpenFlow Configuration Protocol (whose settings persist across reboots), and the OpenFlow protocol itself (the mechanism for adding and deleting flows). But OpenFlow is just a part of SDN.

In a classical router or switch, the fast packet forwarding (data path) and the high level routing decisions (control path) occur on the same device. An OpenFlow Switch separates these two functions. The data path portion still resides on the switch, while high-level routing decisions are moved to a separate controller, typically a standard server. The OpenFlow Switch and Controller communicate via the OpenFlow protocol, which defines messages, such as packet-received, send-packet-out, modify-forwarding-table, and get-stats. – ONF Website
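To make the ONF's description concrete, here is a toy Python sketch of the division of labor – a learning-switch control plane living apart from a bare flow table. This is not real OpenFlow (no wire format, and names like `Match` and `packet_received` are invented to echo the ONF's message names), just the controller/switch split under those assumptions:

```python
from dataclasses import dataclass, field

# Toy model of the split described above: the switch keeps only a flow
# table (data path); the controller decides what goes into it (control
# path). Method names mirror the ONF message examples.

@dataclass(frozen=True)
class Match:
    in_port: int
    dst_mac: str

@dataclass
class FlowTable:
    """The 'switch': nothing but match -> output-port entries."""
    entries: dict = field(default_factory=dict)

    def lookup(self, match):
        return self.entries.get(match)

class Controller:
    """Toy control plane: learns MAC locations, installs flows."""
    def __init__(self):
        self.mac_to_port = {}

    def packet_received(self, switch, in_port, src_mac, dst_mac):
        # packet-received: switch punts an unmatched packet to us
        self.mac_to_port[src_mac] = in_port
        out_port = self.mac_to_port.get(dst_mac)
        if out_port is not None:
            # modify-forwarding-table: push a flow entry to the switch
            switch.entries[Match(in_port, dst_mac)] = out_port
        return out_port  # None means flood (send-packet-out everywhere)

switch = FlowTable()
ctl = Controller()
ctl.packet_received(switch, in_port=1, src_mac="aa", dst_mac="bb")  # unknown dst: flood
out = ctl.packet_received(switch, in_port=2, src_mac="bb", dst_mac="aa")
# subsequent "bb" -> "aa" traffic now matches on the switch itself,
# without ever reaching the controller
```

The point of the sketch is the asymmetry: after two punted packets, the fast path is entirely in the switch's table, and the controller only sees exceptions.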

Cisco ONE Controller Model

The goal of Cisco's ONE Software Controller is to enable flexible, application-driven customization of network infrastructure. It includes the onePK toolkit – an SDK for developers to write custom applications that solve their business needs. So a ONE Controller could speak to other vendors' devices via the OpenFlow standard, or to Cisco devices via the onePK southbound API. At least that is what the diagram shows – onePK and OpenFlow side by side. However, during the webcast Q&A it was stated that onePK is an infrastructure that includes support for multiple abstraction protocols, OpenFlow among them. This is probably a semantic distinction.
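The side-by-side arrangement in the diagram amounts to a driver abstraction. Here is a hypothetical sketch of that idea – the class and method names are invented, and neither the real onePK API nor the OpenFlow wire format appears; it only illustrates a controller core staying agnostic while a per-device plugin translates intent into the device's protocol:

```python
# Hypothetical southbound-plugin sketch: the controller core doesn't
# care which protocol a device speaks; a driver registered per device
# does the translation. All names here are invented for illustration.

class OpenFlowDriver:
    def install_flow(self, device, match, action):
        return f"OFPT_FLOW_MOD to {device}: {match} -> {action}"

class OnePKDriver:
    def install_flow(self, device, match, action):
        return f"onePK policy call on {device}: {match} -> {action}"

class Controller:
    def __init__(self):
        self.drivers = {"openflow": OpenFlowDriver(), "onepk": OnePKDriver()}
        self.inventory = {}  # device name -> southbound protocol

    def add_device(self, name, southbound):
        self.inventory[name] = southbound

    def install_flow(self, device, match, action):
        # same northbound intent, different southbound rendering
        driver = self.drivers[self.inventory[device]]
        return driver.install_flow(device, match, action)

ctl = Controller()
ctl.add_device("vendor-x-switch", "openflow")
ctl.add_device("cisco-switch", "onepk")
```

Whether onePK sits beside OpenFlow (the diagram) or wraps it (the Q&A answer) changes only where the `drivers` dictionary lives, which is why the distinction is arguably semantic.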

One of the features described is network slicing. It is intended to provide more than just L2 or L3 segmentation – it is closer to a form of multi-tenancy. As described on the call, instead of making decisions based on shortest path alone, network slicing lets the controller differentiate among lowest-cost, highest-bandwidth, and lowest-latency paths. In a demo at Cisco Live in London, latency was tweaked and the controller computed a different path accordingly.
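The per-slice path computation can be sketched with a standard Dijkstra run over edges annotated with more than one metric – the same topology yields different paths depending on which metric the slice optimizes. The topology and numbers below are made up for illustration; this is not Cisco's algorithm, just the general idea:

```python
import heapq

def shortest_path(graph, src, dst, metric):
    """Dijkstra over edges carrying multiple metrics; 'metric' picks
    which one this slice minimizes."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, metrics in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + metrics[metric], nbr, path + [nbr]))
    return None

# Invented topology: A-B-D is as short in hops as A-C-D,
# but A-C-D is far better on latency.
topo = {
    "A": {"B": {"hops": 1, "latency_ms": 10}, "C": {"hops": 1, "latency_ms": 1}},
    "B": {"D": {"hops": 1, "latency_ms": 10}},
    "C": {"D": {"hops": 1, "latency_ms": 1}},
    "D": {},
}
_, by_hops = shortest_path(topo, "A", "D", "hops")
_, by_latency = shortest_path(topo, "A", "D", "latency_ms")
```

Tweaking a link's `latency_ms` – as in the Cisco Live demo – is enough to flip the latency-optimized slice onto a different path while the hop-count slice stays put.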

Another feature Cisco presented in the ONE Controller is hybrid-mode SDN, in which network operators can use SDN for specific flows while traditional integrated control-plane/data-plane devices (i.e., classical routers and switches) handle the remaining traffic.
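A hypothetical illustration of that hybrid split: consult the controller-installed flow table first, and only fall back to a traditional, locally computed routing table when nothing matches. The tables, prefixes, and the "WAAS appliance" action are all invented for the example:

```python
import ipaddress

# Controller-installed exceptions: (dst prefix, dst port) -> action.
sdn_flows = {
    (ipaddress.ip_network("10.1.0.0/16"), 80): "steer-to-waas-appliance",
}

# Traditional control plane (e.g. what OSPF would have computed).
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",
}

def forward(dst_ip, dst_port):
    dst = ipaddress.ip_address(dst_ip)
    # 1. SDN handles the specific flows it has claimed...
    for (prefix, port), action in sdn_flows.items():
        if dst in prefix and port == dst_port:
            return action
    # 2. ...everything else takes the classical longest-prefix match.
    matches = [p for p in routing_table if dst in p]
    return routing_table[max(matches, key=lambda p: p.prefixlen)]
```

Only HTTP traffic to the claimed prefix is steered by the controller; all other destinations, including other ports on the same prefix, follow the routes the box computed for itself.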

What are the ramifications of this release for the SDN ecosystem? Although the new open-source consortium Daylight supposedly does not include Cisco onePK on day one, it is very likely to be included within about six months. Cisco has announced platform support roadmaps for the platform APIs (onePK platforms), controller agents, and overlay networks such as the VXLAN gateway. Some of these won't be available until Q3 of this year – just about the right time for a vendor to provide an end-to-end solution for Daylight. If a pure-play hardware networking vendor such as Cisco can provide a free open-source controller, it will be able to kill the competition from many SDN startups. Take Floodlight, for example – the open-source OpenFlow controller developed by Big Switch and sold on a freemium licensing model. If the ONE Controller is given away for free, why would a customer use Floodlight?

In other words, in Daylight there is no need for Floodlights!

What is PLUMgrid up to?

PLUMgrid is a Silicon Valley-based SDN startup that raised $10.7 million in Series A funding in August 2012. They are still in stealth mode, and even SDNCentral hasn't covered much on them yet. However, things are brewing at this startup, which was mentioned last year in The Economist and featured in Network World. Details are still thin, but when a company publishes five teaser blog posts in two weeks (they started their blog in August 2012), signs point to a larger announcement around the corner.

Their blog posts are unlikely to win a Pulitzer Prize. If anything, they have enhanced their mystique by keeping things vague. PLUMgrid shuns the term Network Virtualization because it encompasses legacy technologies such as VLANs, VRFs, and VPNs. Instead, they refer to Virtual Networking Infrastructure (VNI) as the means (i.e., the abstraction) to provide a Virtual Network Domain (VND), which they define as an administrative boundary established by the data center operator within which the VND operator can define, instantiate, and manage its own network needs. PLUMgrid defines SDN as networking technologies that create value in the VNI. Moreover, they wrote of consumption models in a manner that would make unicorns cry. A more concrete set of definitions would have helped.

PLUMgrid wrote of Distributed Virtual Switches:

Datacenter admins are quickly realizing that as appealing as DVS may be, Virtual Broadcast Domains (aka Distributed VLANs) are restrictive when designing proper networking solutions. To overcome the limited capabilities of DVS, multiple vendors are providing solutions pointing towards two different vectors.

I would have liked them to elaborate further on this point. How exactly do data center admins find DVSes restrictive? Is it the lack of a control plane? Is it the presence of multicast state in the core of the network? I want to know what PLUMgrid's solution offers that, say, Nicira's STT doesn't. They have refrained from describing the problems in their posts.

One point PLUMgrid made in their blog series that caught my attention was this, written of VNI:

VNI platforms should be open to a rich ecosystems of 3rd party network functions: Prevent vendor lock-in.

To me that sounds like a plug-in to OpenStack. In fact, I wouldn't be surprised to see PLUMgrid's name listed as a member of the new open-source SDN consortium called Daylight, which will be announced at the ONS summit in April 2013.

Are they the secret company that will come out of stealth mode and present at Networking Field Day 5 (March 6–8)? Only time will tell.

Edit: I listened to the Open Networking User Group (ONUG) Lippis Report podcast this morning, and PLUMgrid's CEO, Awais Nemat, reiterated the need to define new vocabulary for the SDN ecosystem, such as VNI. He said that the concept of Cloud only started making sense to people once SaaS, IaaS, and PaaS were clearly defined. Based on conversations it has held with customers, PLUMgrid has its eyes set on consumption models, as opposed to the build models the rest of the industry has been focusing on (northbound/southbound APIs, CP/DP separation, etc.).

South Bay OpenStack Meetup – Where are the Networkers?

Just a quick note before the weekend. Yesterday I attended a South Bay OpenStack Meetup organized by Mirantis, a provider of OpenStack services. Over 100 people had RSVP’d for the event, which was held on Yahoo!’s campus. About 50 attended.

The event featured an introductory presentation by Mirantis on the OpenStack architecture, with excellent coverage of the messaging between the various components and APIs. Co-presenting was Lee Xie, Senior Technical Engagement Manager at Mirantis, who had earlier in the day published a detailed, albeit subjective, comparison of OpenStack versus VMware.

What struck me as amazing was the lack of questions about, and familiarity with, Quantum, the networking component of OpenStack that has been out since the Folsom release in September 2012. I had never expected to be the only person asking questions about Quantum at an OpenStack event! OpenStack itself has been around since 2010 or so, and it is possible that most of the attendees had server and storage backgrounds. JSON, REST, Puppet, and Rabbit were the more fluid topics of discussion. I drew puzzled looks when I broached the subject of Floodlight and OpenFlow.

The meetups are scheduled for every other week with Beginners and Intermediate tracks held at the same time in different rooms. Maybe I’ll attend the Intermediate track next time.

Plethora of Cisco Cloud Announcements – February 2013

I’m writing this post the week after Cisco Live was held in London. I did not attend Cisco Live, but this morning I attended a Cisco event titled Fabric Innovations for the World of Many Clouds. It was kicked off by Cisco’s Chief Strategy Officer, Padmasree Warrior, who outlined the company’s current fabric vision, summarized in the figure below.

February 4, 2013 Cisco Announcement

The Nexus 6000 is a new product line with very high 10/40 Gbps port density and roughly 1.2-microsecond port-to-port latency. Available today, the 4RU Nexus 6004 has 48x40Gbps ports plus 4 expansion modules, allowing a total of up to 96x40Gbps ports. Also announced, but available in Q2, is the Nexus 6001 – a 1RU switch with 48x1G/10G ports and 4x10G/40G uplinks. David Yen, Senior VP of Cisco’s Data Center Business Unit, said that Cisco could have used merchant silicon but still backed its own custom silicon to deliver lower port-to-port latencies, as seen in its Algo Boost technology. To give you an idea of how low 1.2 microseconds is in the industry, Arista has been boasting low-latency switches as low as 350 nanoseconds port-to-port for several years. But Cisco already has an answer to Arista’s ultra-low-latency switches – the Nexus 3548, which boasts port-to-port latencies as low as 190 nanoseconds. These are better suited for financial exchanges, where low switching latencies are critical for conducting electronic trades.

Cisco claims it can sustain the Nexus 6004’s 1.2-microsecond latency across as many as 1,500 10G ports. That figure is attained when the Nexus 6004 is combined with another new product, the Nexus 2248PQ Fabric Extender, which brings the fabric to 1,500 GE or 10GE server ports through Cisco’s FEX technology. Assuming 50 VMs per server, those 1,500 FEX ports can support up to 75,000 VMs – an impressive number that shows the scalability of the Nexus 6000 platform.

The Network Analysis Module (NAM) has now formally made its foray into the Nexus offering. I worked a lot with the first two generations of the NAM in 2004 and was impressed by its robustness (one of the few products at the time to be built on Linux) and ease of use. Of course, that was on the Catalyst 6500 platform, which was defibrillated a couple of years ago with the Supervisor 2T. It seems that Cisco is now finally bringing service modules onto the Nexus platform.

The second major announcement was the Nexus 1000V InterCloud, for connecting enterprise clouds to provider clouds in a secure manner. The highlight is making application migrations simple: no converting VM formats, creating templates, deploying site-to-site tunnels between clouds, or re-configuring network policies. The Nexus 1000V InterCloud is intended to automate all these steps and support all hypervisors. It is managed by Virtual Network Management Center (VNMC) InterCloud, which – most interestingly to me – hooks into cloud orchestration systems like Cloupia (Cisco’s recent acquisition) and Cisco’s own Intelligent Automation for Cloud (IAC) via a northbound API. Hybrid cloud deployment solutions are a relatively new area, and I will be following how this pans out with great interest.

I was most keen on the third announcement: Cisco’s ONE Controller. Last year Cisco announced onePK, but there was no product. Now, finally, there is the Controller. It features northbound APIs, such as REST and OSGi, and southbound APIs, such as OpenFlow and Cisco’s own onePK. Cisco also announced a roadmap for the ONE Controller’s compatibility with its existing Nexus and Catalyst product lines.

More information is available from the following links:

Introducing Nexus 6000 Series
Cisco Launches Nexus 1000V InterCloud Part I
Cisco Launches Nexus 1000V InterCloud Part II