Category Archives: OpenStack

What does it mean to be ‘OpenStack Compatible’?

Recently, Big Switch Networks earned bragging rights as the first networking vendor to attain the OpenStack Compatible certification. The requirements for this status differ for hardware and software products. Big Switch demonstrated compatibility with both Nova and Neutron networking environments. There are more details on the Big Switch and Mirantis sites. Big Switch distinguishes between the two environments as follows:

  • In a Neutron implementation, the Big Cloud Fabric leverages the BSN ML2 driver, enabling automation and orchestration of the bare-metal, SDN-based Big Cloud Fabric by the OpenStack controller (a configuration sketch follows this list).
  • In a Nova implementation, Big Cloud Fabric has optimized configurations and performance enhancements that let it serve as a multi-path leaf/spine CLOS fabric delivering 4K VLANs to every edge port. Unlike traditional spanning-tree-based switching designs, full cross-sectional bandwidth can be achieved with no performance penalty.
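For context, plugging a mechanism driver like Big Switch's into Neutron is typically a matter of ML2 configuration. Below is a minimal sketch of what that looks like; the driver, section, and option names are my assumptions based on common ML2 conventions, not taken from Big Switch's documentation:

    # /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative sketch only)
    [ml2]
    type_drivers = vlan,vxlan
    tenant_network_types = vlan
    # Hypothetical driver entry point; check the vendor docs for the real name.
    mechanism_drivers = openvswitch,bsn_ml2

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:4094

    # Vendor drivers of this sort usually point Neutron at the fabric
    # controllers; this section is an assumption for illustration.
    [restproxy]
    servers = 10.0.0.10:8443,10.0.0.11:8443
    server_auth = admin:password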

My question: somebody obviously has to be first, but why aren't there more products and vendors listed? Specifically, how soon will it be before we see HP, the leading contributor to OpenStack, on that list?


Live Blog of OpenStack Silicon Valley 2014 Community Event, Part 2

I attended a variety of track sessions in the afternoon of the OpenStack Silicon Valley Community Event. Part 1 can be viewed here.

From the Networking track:

Popular Options for SDN Deployment in OpenStack

  • Jonathan LaCour | DreamHost
  • Peter Balland | VMware
  • Mike Dvorkin | Cisco
  • Azmir Mohamed | PLUMgrid

Mohamed: PLUMgrid’s baseline is on Neutron; it is our entry into OpenStack. One problem is the L2 underlay, because it doesn’t scale. A second problem is the overlay. Upstreaming code is a challenge.

LaCour: We are trying to build a public cloud at DreamHost. DreamHost has always been built on open source. L2 virtualization is provided by VMware; L3 and above is implemented internally. Most of the 300,000 customers are end users, not enterprises. Today Neutron isn’t up to speed with IPv6, though Juno is getting there. Intelligent overlays on top of cheap, fast, dumb underlays are great for us.

Balland: Working on high-level abstractions, such as Congress. VMware has had full-time developers dedicated to Neutron, and to Quantum before that. Geneve will replace GRE.

Next, under Emerging Trends, was NFV in OpenStack, featuring

  • Martin Taylor | Metaswitch / Calico
  • Alan Kavanagh | Ericsson
  • Kyle MacDonald | OpenNile
  • Jennifer Lin | Juniper
  • Chris Wright | Red Hat

MacDonald: ETSI is the European Telecommunications Standards Institute, which wrote a manifesto calling on all of its vendors to deliver their network gear as software instead of aggregated hardware systems. This started in 2012. When telcos have the agility of virtualization, they can achieve a lot. There have been lots of POCs so far.

Taylor: A VNF implements the functions of a telco in software. Traditionally, telcos demanded standards and would wait 5 years or so. They no longer have that luxury and need to get stuff done now. ETSI has defined a framework where vendors and network operators can get together and demonstrate capabilities to drive the industry forward. We already have a virtualized session border controller in Europe. The challenge for networking people is to convince carriers that the solution is carrier grade. Carriers feel that OpenStack has some challenges, such as packet processing. Telcos feel that their ability to innovate is hindered by the very small number of vendors they work with; in the ETSI manifesto, they say that they can source workloads from a much broader ecosystem of vendors, who are more nimble. Telcos have taken the lesson of WhatsApp, which offered free text messaging and then got bought for $19B, as an example of the competitors they now face. They feel they need to get up to speed.

Wright: Service chaining is the ability to steer packets through a set of logical services (firewalls, load balancers, IDS, etc.) in the proper order. SDN steers the packets; service chaining is how an SP packages these services for customers. Service providers want a general-purpose compute platform, and that is a cloud problem. ODL has OpenStack integration that can help NFV, and there is also a service chaining project. Overall, the ETSI NFV effort identifies a suite of open source technologies, such as DPDK and ODL. Elastic, flexible infrastructure is the biggest gain in the shift from CapEx to OpEx.
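To make the ordering idea concrete, here is a tiny Python sketch of a service chain; the services and packet fields are toy illustrations of the concept Wright describes, not any vendor's implementation:

    def firewall(packet):
        # Toy rule: only web traffic is allowed through.
        return packet if packet["dst_port"] in (80, 443) else None

    def load_balancer(packet):
        # Toy rule: spread flows across two hypothetical backends.
        backends = ["10.0.0.11", "10.0.0.12"]
        packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
        return packet

    def ids(packet):
        # A real IDS would inspect and flag suspicious flows; here it just passes.
        return packet

    SERVICE_CHAIN = [firewall, load_balancer, ids]  # the order matters

    def apply_chain(packet):
        for service in SERVICE_CHAIN:
            packet = service(packet)
            if packet is None:  # a service dropped the packet
                return None
        return packet

    print(apply_chain({"src_ip": "192.0.2.1", "dst_ip": "10.0.0.1", "dst_port": 80}))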

Kavanagh: Telco vendors must be able to pick and choose various services on vendor boxes and chain them together, a la service chaining. Ericsson is virtualizing its entire product portfolio. The challenge is time to market; NFV will shrink lead times tremendously. ODL is a good complement to Neutron; we will see more advanced services like BGP being added. Telcos are looking to NFV to reduce lead times and virtualize their key services to compete with Amazon. They also see OpenStack as a good fit because it reduces costs.

Lin: The difference between SDN and NFV: SDN is about automating and orchestrating a software-driven platform. ETSI has unique applications, like stateful apps: how to tie into OSS systems, and how to handle latency-sensitive apps that cannot exist in a stateless form. We should not reinvent the wheel, e.g. IP VPN; Contrail technology is based on a standard that was co-authored by AT&T and Verizon. NFV is more about service orchestration and service delivery than about how we transport packets. The challenge is how we expose this as a service and improve the orchestration.

The final panel in the afternoon track I attended was Four SDN Use Cases, with

  • Lane Patterson | Symantec
  • Jonathan LaCour | DreamHost
  • Adam Johnson | Midokura
  • Ben Lin | VMware

Lin: VMware has been running its internal cloud on OpenStack since 2012; it is used for development, test, demos, pretty much everything. eBay is one of their first customers. NSX is multi-hypervisor, hence OpenStack. The main pane of glass is vSphere. With so many moving parts in OpenStack, it is important to understand the amount of scale you’re looking for. Out of the box, they get great performance with STT and are investigating DPDK. They can send traffic directly from hypervisor to hypervisor, which reduces traffic a lot.

LaCour: He spoke of DreamCompute, a DreamHost OpenStack-based mass-market public cloud offering that has IPv6 dual-stack support for customers as well as back-end systems, provides L2 isolation for every tenant, and drives cost down. They went with white-box switches running Cumulus Linux so that their operations team could be generalists: a spine-and-leaf design with 40G links between spine and leaf, 10G access switches, and 10G storage networks. They believe that overlays enable simpler physical networks. Their overlays are built with VMware NSX; they were one of the first Nicira customers and chose it because it had the most solid L2 support. They don’t use its L3 solutions, which weren’t supported at the time, so they implemented their own using Akanda, a router virtual appliance that has Neutron extensions. Their L3 service appliance uses iptables in Linux. Storage networks are separate from data networks, with minimal overhead. Each tenant gets a single router appliance that is booted independently and ephemerally and can be as small as 256MB, or 512MB with more features. They expect DreamCompute to go GA next year.

Johnson: Midokura does network virtualization overlays for OpenStack. One customer is KVH, a member of the Fidelity group, which provides managed private cloud for financial services customers. MidoNet is an OpenStack overlay for isolated private cloud environments that provides distributed features (load balancing, firewall, routing), not just distributed L2 services. It programs the Linux kernel to do packet processing and uses VXLAN. They saturate their Mellanox 40G uplinks at 37G.

Patterson: Symantec grew by acquisition, with several BUs and 58 DCs around the world; they consolidated many of them via OpenStack. They offer services such as object storage, batch processing, and authentication, i.e. more than just IaaS. They use bare metal for their Hadoop and Cassandra + OpenStack Nova environment. For the SDN layer, they needed something to supplement Neutron and ended up going with Contrail for its MPLS-over-GRE capability, though they are moving to VXLAN. On the bare-metal side, they used a Docker-like layer so that each tenant on the compute virtualization side has to authenticate before talking to the Big Data side. In one leaf-spine environment they use Cisco and in another Arista, with only generic, vendor-agnostic features being implemented. TORs are 4x40G into the spine; the spine is a 2x, moving to a 4x. When they tried generic OVS, every vendor outperformed OVS in the bakeoffs. They saturated the uplinks from the TORs to the spines.

Interestingly, none of the 4 panelists used OpenFlow at all in their clouds.

Live Blog of OpenStack Silicon Valley 2014 Community Event, Part 1

This is a live blog of the inaugural OpenStack Silicon Valley Community Event, held at the Computer History Museum in Mountain View, California on September 16, 2014. It was put together by Mirantis, one of the biggest proponents of OpenStack.

The Agenda and Megathemes for the Day is given by Alex Freedland | Mirantis Co-founder and Board Member of the OpenStack Foundation.

Freedland states that, as someone who has been involved with OpenStack since the beginning four years ago, he has noticed a distinct qualitative change in OpenStack. We no longer have to prove that OpenStack will win the open cloud game; it is here to stay and will be the open cloud platform of the future. The question we have to ask is: what will it look like when it matures, and how will it appear to enterprises, service providers, etc.? Freedland feels we are entering OpenStack 2.0, and this event is about OpenStack 2.0. Now that we are in 2.0 territory, we are seeing a lot of huge deals, with some large companies entering at massive scale. The usage patterns we see are about agility and management; a Software Defined Economy needs to detect change quicker. In just four years, OpenStack has grown to be at least as large as Linux.

Next up is a keynote by Martin Fink, EVP and CTO, HP, and a late inclusion, Marten Mickos, CEO, Eucalyptus. The title was Looking ahead: OpenStack, Eucalyptus, Open Source, Enterprise and the cloud?

With HP having recently announced plans to acquire Eucalyptus, Martin Fink and Marten Mickos presented a look ahead at what this holds for open source, the cloud, the enterprise, and OpenStack. Eucalyptus is open source private cloud software that supports the industry-standard AWS APIs and creates cloud resources for compute, network, and storage.


Martin Fink: “Beginning Nov/Dec last year, there was a massive swing of HP contributions toward OpenStack. Helion launched in May 2014. Today, with Juno, HP is the No. 1 contributor to OpenStack. When we started working on open cloud, we felt we had to do it in the true spirit of open source, staying very close to trunk. When I was running the open source division at HP between 1999 and 2005, the funnest part was being in a room with competitors, like IBM, and working toward the overall benefit of the open community. I made some friends along the way. We were not confused about when it was time to compete. HP and Mirantis compete in some areas, but today we are together. We’re not just delivering a distribution or a piece of software. We’re delivering the actual distro, the hardware along with it, and we deliver it any way you want: private, VPC, hybrid, public, you name it.”

Marten Mickos then came on, noting that the acquisition has not yet been finalized. “Open source will win in the cloud by being modular, by having competitors collaborate, by letting anyone scrutinize and improve quality, through Darwinism (good stuff survives), and by having less lock-in. Open source will have challenges too: for example, there has to be someone who says no, and members compete with each other.” Also: “OpenStack and Eucalyptus is where nimble (Eucalyptus) meets massive (OpenStack). Eucalyptus is a tiny group. It is also hybrid, with AWS design patterns in the open. In public cloud, the AWS API is private; in private cloud, the AWS API is public. It is critical in cloud for the core pieces to remain hardened. OpenStack can and will have components, add-ons, adjuncts, and alternative projects (Ceph, RiakCS, MidoNet), and the aim of Eucalyptus is to become one.”

Next is a lightning talk by Ken Ross | Director of Product Management, Brocade

Ross says Brocade has been involved with OpenStack for 3 years and has increased its level of investment. Brocade’s contributions include FibreChannel SAN support for Cinder, multitenant DC-to-DC connectivity via MPLS, and ETSI NFV POCs including scheduling, FWaaS, and VPNaaS. 80% of NFV discussions are centered around OpenStack. Challenges: Neutron maturity, as its functionality keeps evolving across customer engagements (a big spectrum of customer asks across the Folsom, Grizzly, Havana, and Icehouse releases, which makes it extremely difficult to be agile in development). Another challenge is for the SDN community and the OpenDaylight community to understand each other better.

This is followed by a keynote on The Software Defined Economy by Jonathan Bryce | Executive Director, OpenStack Foundation

No matter what size your organization is, it’s not moving fast enough. Software innovation is make-or-break. If infrastructure isn’t part of the solution, it’s part of the problem.

Bryce says “Every company is competing with a startup. E.g. in banking you have to go up against Stripe, Square, PayPal, etc. In Big Media, you have Netflix (which won an Emmy) and Zynga. In Automotive, Tesla, Uber, Lyft, and SpaceX (which forces the US Air Force to compete with a startup). This is the Software Defined Economy. It is the ability to change easily and quickly from one vendor to another. The old model was passive consumption: we bought what our vendors sold us and upgraded when they told us to upgrade, and multi-year product cycles were OK. The new model is: I want what I want, now (for example, mix and match in a single DC, release early and release often, deploy directly to production, be agile, BYOD). Technology decisions are moving out to the edges of the business. Cloud is being driven from the edges of the business. It removes barriers and allows innovation. Quote from Disney: “In an IT department, you have to think like a product company.” A top-10 car company used OpenStack to harness Big Data from dealer reports, insurance filings, and car sensors, and to generate reports for various departments (R&D, Sales, Marketing).

Time for the keynote from Martin Casado | CTO of Networking, VMware, on Policy for the Cloud Frontier. He comes to the stage with extreme energy, as if he had just sprinted 100 meters.

Casado: Automation is great. Everyone loves the promise of all cloud operations codified as running programs tirelessly doing the work of dozens of operators. Unfortunately, today we can only automate infrastructure and applications, not policies: ideals for cloud behavior that are more concerned with business logic, regulations, security, risk management, and cost optimization than with infrastructure and applications. As a result, the policy layer presents a new challenge in our quest for cloud automation. In this talk, I will discuss the policy problem and why it is an emerging area we should focus on. I will then discuss Congress, an OpenStack effort to create an open policy framework to help us as a community step into this new frontier.


Casado: “Automation does not remove the human. Automation is necessary but insufficient for removing humans from the cloud control loop; humans still need to interact with the cloud to make it obey business policies. Policy is the Holy Grail of IT. The policy problem is as follows: humans have ideas, humans can document ideas, but systems don’t understand human languages. Non-technical people with non-technical ideas want to dictate how the system works, and somehow we’re supposed to get this to work on the backend. How OpenStack can crack the problem is as follows: computer scientists will want to write a declarative language, which will need a compiler to implement it in the system. But such policy systems have always existed and have their flaws. The traditional barriers are 1. device canonicalization (finding the lowest common denominator across Cisco, Juniper, Brocade, Arista, etc.), which fails because of interoperability; 2. distributed state management (e.g. at 5 am person XYZ is probably not in a proper state of mind, so don’t give him access); and 3. topology independence (if you choose a language that is independent of topology, you require a mapping from physical topology to logical, which is very difficult). Many of these problems have been solved by OpenStack because of its abstractions: there is no more device canonicalization problem, because the software abstraction has been canonicalized. The primary value of a CMS is to present a consistent view that can be manipulated. A policy compiler can now sit on top of this level of abstraction.”

He then defines Congress: an open policy framework for automated IT infrastructure. Every component (networking, storage, security, compute) has a policy layer, but each can only be used for that one component; a high-level framework has to unify them, because translating between them is an enormous hurdle to adoption. For example, the application developer, the cloud operator, and the compliance officer have separate ideas, but they should be bound by the laws of one language. That’s what Congress aims to achieve, and that is how it is the Holy Grail.
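For a flavor of what such a unified policy looks like, here is a Datalog-style rule adapted from the sort of examples the Congress project has shown (the table and column names are illustrative, not an exact excerpt). It flags any VM attached to a non-public network owned by a different group, a single declarative statement that the developer, the operator, and the compliance officer can all read:

    error(vm) :- nova:virtual_machine(vm),
                 nova:network(vm, network),
                 not neutron:public_network(network),
                 neutron:owner(network, network_owner),
                 nova:owner(vm, vm_owner),
                 not same_group(network_owner, vm_owner)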

Randy Bias | CEO and Founder, Cloudscaling, then spoke on The lie of the Benevolent Dictator; the truth of a working democratic meritocracy

Bias spoke of groups with a longer-term vision (strategic business direction) versus tactical teams with a shorter-term focus (the release development lifecycle). OpenStack doesn’t have a vision or product strategy: there are so many new projects coming up that they change the meaning of OpenStack, but there is no shared vision or ownership at the top. For product strategy in an open source project, we need product leadership, not a benevolent dictator. Some requirements include: managing it like a product (even though it isn’t one); focusing on end-user needs and requirements; having a long-term vision and long-term prioritization and planning; corporate independence; closer collaboration between the Board and the TC; and architectural oversight and leadership. He respects AWS in the sense that it has a small architectural review board and a product management function (a team of PMs per product line). He suggests something similar: an Architecture Review Board elected for 2-4 years with a wide set of domain expertise, and a PM function with specific domain expertise.

This was followed by a lightning talk by Chris Kemp | Founder and Chief Strategy Officer, Nebula

Kemp said that interoperability between products and services is what will make OpenStack successful in the long run. The goal should be zero consultants and zero additional headcount. Movie studios, biotech companies, and space agencies are using the Nebula solution.

The final keynote was by Adrian Ionel | CEO, Mirantis on OpenStack 2016: Boom or Bust?

Ionel spoke of the growing adoption of OpenStack. He said that the measure of success is actual workloads: AWS is a $6B business, and collectively OpenStack doesn’t even scratch the surface of that number, while Docker has had 20M downloads over the past 4 months. In the end, developers win; they are the vanguard. They don’t care about the deployment choice of monitoring software, which hypervisor is used underneath, what the scalability of the network layer is, or which storage system is used for volumes. What they do care about is API quality and ease of use, feature velocity, and portability. He suggests focusing on the APIs as the key to adoption, investing in ease of use over even more flexible plumbing, not moving up the stack (e.g. XaaS) but partnering instead, reshaping upstream engineering (i.e. the technical committees) to foster open competition (vs. central plumbing), and enabling workload mobility to other platforms. Mirantis signs two new customers a week; however, it is early days yet, and OpenStack needs to be able to scale appropriately.

Three fireside chats followed. The first was Is open source cloud a winner-take-all game?

  • Gary Chen (Moderator) | Research Manager, IDC
  • Marten Mickos | CEO, Eucalyptus
  • Steve Wilson | VP & Product Unit Manager, Cloud Platforms, Citrix (CloudStack)
  • Boris Renski | Mirantis

Mickos: In a digital world with exponential development, you do see winner-take-all examples, e.g. Linux and MySQL. But exceptions exist. He quoted Linus Torvalds: “If Linux kills Microsoft, it will be an unintended consequence.” Customers just want value. Not many companies have the depth and breadth of skill that HP has. HP believes in hybrid clouds; that’s why it stands out.

Wilson: Innovation isn’t a zero-sum game; there are different solutions for different people. If you declare victory at this point, you’re living in a bubble and ignoring AWS and VMware. The market is heterogeneous by definition. CloudStack is the most active project in the Apache community. It doesn’t make sense to have a winner-take-all game, and I don’t think OpenStack and CloudStack compete with each other. CloudStack is very easy to use and works at scale; thousands of CloudStack deployments exist around the world. The NetScaler and Citrix teams already contribute to the OpenStack world, and Xen contributes to the hypervisor component of OpenStack. We will work with whatever cloud infrastructure our customers use.

Renski: There is an opportunity for only one standard open source cloud; there is only space for one winner. Enterprises and technology-centric organizations invest a lot in their infrastructure, and OpenStack is the only way to solve their problems. The sheer number of vendors who have come together to solve problems with minimal disruption to tenants is the core advantage of OpenStack.

The next Fireside chat, titled, If OpenStack is so awesome, why doesn’t every enterprise run on it?, featured:

  • Chris Kemp | Founder and Chief Strategy Officer, Nebula
  • Peter ffoulkes | Research Director, Servers and Virtualization and Cloud Computing
  • Jo Maitland | Analyst for Google Cloud Platform
  • Alessandro Perilli | VP & GM, Open Hybrid Cloud, Red Hat

ffoulkes: Enterprises are moving slowly and don’t like lock-in, while OpenStack is moving fast. There will be mergers and acquisitions. Customers have different challenges; e.g. in one country a deployment can be on-premises, while in another it must be off-premises. It is costly and complex for enterprises to build DCs of their own or run private clouds. It is going to be a slow journey for customers in non-IT verticals to migrate to public clouds.

Maitland: OpenStack hasn’t lived up to the promise of workload portability, but containers will help, as they are super efficient and fast. The response to Docker is encouraging. As soon as there is a single semantic deployment model for OpenStack, the floodgates will open on the enterprise on-premises side. But it will be a gradual migration.

Perilli: There is a massive gap between expectations and reality. Customers ask whether OpenStack is a cheap version of the virtualization layer they can get from other vendors; that is a misperception. Vendors are not preaching OpenStack properly; they are confusing and scaring customers by telling them this is the new world and that they need to transition from a scale-up model to a scale-out model. You need something on top that glues the scale-out together with the scale-up, and enterprises generally take a long time to make such decisions. To increase adoption, what’s missing from OpenStack is the capability to think beyond what OpenStack itself can offer: it needs to be coupled with other layers that merge with OpenStack in a seamless way to enforce the policies that large enterprises need. We’re still looking at the foundation.

The Final Fireside chat of the morning session, titled Open Source Business Models: Open Core, Open Everything or…, featured:

  • Jonathan Bryce (Moderator) | Executive Director, OpenStack Foundation
  • Jonathan Donaldson | General Manager, Software Defined Infrastructure, Intel
  • Brian Gentile | VP & GM, Tibco Jaspersoft
  • Nati Shalom | CTO & Founder, Gigaspaces

Shalom: There are different reasons why open source projects succeed and fail. E.g. MySQL succeeded because it entered a proprietary space with a cheap solution that delivered, while the alternatives were expensive. Docker went from disruption to commodity in 6 months or less; it entered a very small space that few people were addressing, and the timing was right. For OpenStack, Rackspace realized that it couldn’t compete with AWS for long on its own, so it built a coalition around OpenStack; you go into coalitions because there is no other option. Open source changes the dynamics in that things that are commodities should be free, and things that add value should be paid for. With Android, Google created the gravity that allows open source developers to contribute. It is not healthy for companies to argue for equal rights.

Gentile: Open source needs to address an unmet need. The development and distribution methodologies of open source are superior to the proprietary ways. If all the physicists of the world held tight to their discoveries and work, where would we be?

Donaldson: Mass adoption from a community perspective must come with acceleration. Enterprise customers often like a licensed model, which might work; somebody has to pay the bills, though.

This concluded the morning session. I will cover the afternoon session in Part 2.

Deconstructing Nuage Networks at NFD8

I enjoy Tech Field Day events for the independence and sheer nerdiness that they bring out. Networking Field Day events are held twice a year; I had the privilege of presenting the demo for Infineta Systems at NFD3 and made it through unscathed. There is absolutely no room for ‘marketecture’. When you have sharp people like Ivan Pepelnjak of ipSpace fame and Greg Ferro of Packet Pushers fame questioning you across the protocol stack, you have to be on your toes.

I recently watched the videos for NFD8. This blog post is about the presentation made by Nuage Networks. An Alcatel-Lucent venture, Nuage focuses on building an open SDN ecosystem based on best-of-breed components. They had also presented last year at NFD6.

To recap what they do, Nuage’s key solution is Virtualized Services Platform (VSP), which is based on the following three virtualized components:

  • The Virtualized Services Directory (VSD) is a policy server for high-level primitives from cloud services. It gets service policies from VMware, OpenStack, and CloudStack, and also has a built-in business logic and analytics engine based on Hadoop.
  • The Virtualized Services Controller (VSC) is the control plane. It is based on the ALU Service Router OS, which was originally developed 12-13 years ago and is deployed in 300,000 routers, now stripped down to serve as an SDN controller. The scope of a controller is a domain, but it can be extended to multiple domains or data centers via MP-BGP federation, thereby supporting IP mobility. A single-availability domain has a single data center zone; high-availability domains have two data center zones. A VSC is a 4-core VM with 4 GB of memory. VSCs act as clients of BGP route reflectors in order to extend network services.
  • The Virtual Routing and Switching module (VRS) is the data path agent that does L2-L4 switching, routing, and policies. It integrates with VMware via ESXi, Xen via XAPI, and KVM via libvirt; the libvirt API exposes all the resources needed to manage VMs. (As an aside, you can see how it comes into play in this primer on OVS 1.4.0 installation I wrote a while back, and in the short libvirt sketch after this list.) The VRS gets the full profile of each VM from the hypervisor and reports it to the VSC. The VSC then downloads the policy from the VSD and implements it; this could be L2 FIBs, L3 RIBs/ACLs, and/or L4 distributed firewall rules. For VMware, VRS is implemented as a VM with some hooks because ESXi has a limitation of 1M pps.
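To illustrate the kind of hypervisor integration described above, here is a minimal Python sketch that uses the libvirt bindings to enumerate VMs on a KVM host, roughly the visibility a data path agent like VRS builds on. It is an illustration of the API, not Nuage's actual agent code:

    import libvirt  # pip install libvirt-python; needs a running libvirt daemon

    # Connect to the local KVM/QEMU hypervisor, as a host-resident agent would.
    conn = libvirt.open("qemu:///system")

    for dom in conn.listAllDomains():
        # An agent can learn each VM's identity and state this way, report it
        # upstream, and then apply the per-VM policy handed back to it.
        state = "active" if dom.isActive() else "inactive"
        print(dom.name(), dom.UUIDString(), state)

    conn.close()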

At NFD8, Nuage discussed a recent customer win that demonstrates its ability to segment clouds. The customer was the Cloud Service Provider (CSP) OVH, which has deployed 300,000 servers in its Canadian DCs. As a beta service offering, OVH’s customers can launch their own clouds. In other words, it is akin to Cloud-as-a-Service with the Nuage SDN solution underneath: a wholesaler of cloud services whereby multiple CSPs or businesses could run their own OpenStack cloud without building it themselves. Every customer of this OVH offering would be running independent Nuage services. Pretty cool.

Next came some demos that address the following 4 questions about SDN:

  1. Is proprietary HW needed? The short answer is NO. The demo showed how to achieve hardware VTEP integration. In the early days of SDN, overlay gateways proved to be a challenge because they were needed to go from the NV domain to the IP domain; as a result, VLANs had to be manually configured between server-based SW gateways and the DC routers, a most cumbersome process. The Nuage solution solves that problem by speaking the language of routing: it uses standard RFC 4797 (GRE encapsulation) on its dedicated TOR gateway to tunnel VXLAN to routers. As covered at NFD6, Nuage has three approaches to VTEP gateways:
    1. Software-based – for small DCs with up to 10 Gbps
    2. White box-based – for larger DCs, based on the standard L2 OVSDB schema. At NFD8, two partner gateways were introduced, Arista and the HP 5930. Both are L2-only at this point but will get to L3 at some point.
    3. High performance-based (7850 VSG) – 1 Tbps L3 gateway using merchant silicon, and attaining L3 connectivity via MP-BGP
  2. How well can SDN scale?
    The Scaling and Performance demo explained how scaling in network virtualization is far more difficult than scaling in server virtualization. For example, the number of ACLs needed grows quadratically as the number of web servers or database servers increases linearly (see the sketch after this list). The Nuage solution breaks ACLs down into abstractions, or policies. I liken this to an Access Control Group: individual Access Control Entries belong to a list (for example, an ACL for all web servers or an ACL for all database servers), so the ACL stays manageable. Any time a new VM is added, it is just a new ACE. Policies are pushed, rather than individual Access Control Entries, which scales much better. Individual VMs are identified by tagging routes, which is accomplished by, you guessed it, BGP communities (these Nuage folks sure love BGP!).
  3. Can it natively support any workload? The demo showed multiple workloads, including containers in their natural environments, i.e. on bare metal rather than inside VMs. Nuage ran their scalability demo on AWS with 40 servers, but instead of VMs, they used Docker containers. Recently, there has been a lot of buzz around Linux containers, especially Docker. The advantages containers hold over VMs are that they have much lower overhead (by sharing portions of the host kernel and operating system instance), allow a single OS to be managed (albeit Linux on Linux), have better hardware utilization, and have quicker launch times. Scott Lowe has a good series of write-ups on containers and Docker on his blog, and Greg Ferro has a pretty detailed first pass on Docker. Nuage CTO Dimitri Stiliadis explained how containers are changing the game as short-lived application workloads become increasingly prevalent; the advantage Docker brings, as he explained, is moving the processing to the data rather than the other way round. Whereas typically you’d see no more than 40-50 VMs on a physical server, the Nuage demo had 500 Docker container instances per server, for 20,000 container instances in total. They showed how to bring them all up, along with 7 ACLs per container instance (140K ACLs in total), in just 8 minutes: roughly 40 containers or VMs per second! For reference, the demo used an AWS c3.4xlarge instance (which has 30GB of memory) for the VSD, a c3.2xlarge for the VSC, and 40 c3.xlarge instances for the ‘hypervisors’ where the VRS agents ran. The Nuage solution was able to successfully respond to the rapid and dynamic connectivity requirements of containers. Moreover, since the VRS agent operates at the process level (instead of at the host level, as with VMs), it can implement policies at a very fine granularity. A really impressive demo.
  4. How easily can applications be designed?
    The Application Designer demo showed how to bridge the gap between app developers and infrastructure teams by means of high-level policies that make application deployment really easy. In Packet Pushers Show 203, Martin Casado and Tim Hinrichs discussed their work on OpenStack Congress, which attempts to formalize policy-based networking so that a policy controller can take high-level, human-readable primitives (HIPAA, PCI, or SOX, for example) and express them in a language an SDN controller understands. Nuage confirmed that they contribute to Congress. The Nuage demo defined application tiers and showed how to deploy a WordPress container application along with a backend database in seconds; another demo integrated OpenStack Neutron with extensions. You can create templates to have multiple instantiations of applications. Another truly remarkable demo.
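To make demo 2's scaling point concrete, here is a small Python sketch of my own (not Nuage code) contrasting per-VM ACL entries, whose count is the product of the tier sizes, with a group-based policy that stays constant as VMs are added:

    # Per-VM rules: every web VM needs an entry for every DB VM, so the rule
    # count grows as web x db: quadratic when both tiers grow linearly.
    def pairwise_rules(web_vms, db_vms):
        return [(w, d, "allow tcp/3306") for w in web_vms for d in db_vms]

    # Group-based policy: one rule between abstract groups, regardless of size.
    GROUP_POLICY = [("web-group", "db-group", "allow tcp/3306")]

    web = ["web%d" % i for i in range(100)]
    db = ["db%d" % i for i in range(100)]

    print(len(pairwise_rules(web, db)))  # 10000 entries to push and keep updated
    print(len(GROUP_POLICY))             # 1 policy; a new VM just joins a group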

To summarize, the Nuage solution seems pretty solid and embraces open standards, not as lip service but to solve actual customer problems.

Collaborating with the Networking Community in the Age of Information Overload

2012 was one of the hottest years in recorded history, though there will always be climate change deniers. From the perspective of networking, too, it was a very hot year. Dozens of vendors are battling it out to claim their share of the SDN pie, a market that IDC expects to grow to $3 billion by 2016. With IaaS/cloud finally living up to the hype it generated five years or so ago, we are truly in a golden age of innovation in networking. Greg Ferro often says that the last time networking saw such excitement was when MPLS was introduced. However, MPLS was always a service provider solution and essentially a direct replacement for Frame Relay and ATM; if you ran a mid-size enterprise network or an SMB, the chances are that you didn’t need to worry about MPLS. Some have argued that MPLS can be run in the data center, but such implementations are quite rare. More importantly, MPLS paid no attention to the type of applications it was transporting. SDN, on the other hand, with its northbound API, is completely application-aware.

With all the monumental changes happening in networking nowadays, it can be rather overwhelming trying to keep up just by reading blogs and newsletters. In this post I’ll outline three ways of collaborating with the networking community.

Packet Pushers, which the aforementioned Greg Ferro co-hosts along with Ethan Banks, is the premier podcast for getting the scoop on trends in the networking industry. It features quality professionals, many of whom maintain their own blogs or are active on Twitter. Packet Pushers has a handy forum where you can ask questions on just about anything and interact with like-minded networking professionals in the virtual meeting room. Greg and Ethan complement each other very well: while Ethan is more in tune with the day-to-day activities of a network engineer, Greg is generally more active in promoting the discourse around newer technologies, such as the OpenStack Quantum project. The shows tend to favor data centers and SDN over, say, VoIP or wireless, but thanks to the forum, listeners can chime in with their preferences for upcoming shows.

SDNCentral was launched in January 2012 as a means for people to educate themselves on the SDN market, and it does a wonderful job at that. One of the website’s features is the SDN Trending Index, which measures the most popular SDN companies based on SDNCentral community activity; this is a clever way to gauge how hot a new SDN vendor is. A more recent feature of SDNCentral is the Demo Friday series, in which an SDN vendor demonstrates its product. At the time this post is published, the second in the series, Cloud-enabled Networking: NEC ProgrammableFlow SDN in Action, is running; the first featured Plexxi and Boundary. I had written about Plexxi after listening to them in a sponsored Packet Pushers show, and I have since softened my stance on them thanks to the demo, which showcased Plexxi’s optically-connected switches built around a closed, controller-based architecture. I was impressed with how it flattens the network and how it can co-exist with legacy network designs; indeed, it would be difficult to survive nowadays with a rip-and-replace strategy. From SDNCentral: Boundary applies analytics against real-time network flow data to enable Application Performance Management without the need for appliances or tap/span ports. The demo showed how Boundary discovers real-time application topology and monitors application throughput, latency, packet retransmits, and other metrics on a per-second basis. In other words, it is Software Defined Monitoring. Without SDNCentral, I probably would not have learned about Boundary or appreciated the value Plexxi can offer.

Ben Pfaff speaking at the Bay Area Network Virtualization Meetup at Hacker Dojo on March 20, 2013

Meetups provide an excellent opportunity to learn by interacting with real people face to face. In the San Francisco Bay Area, a few meetups are bringing a sense of community to the networking industry, fueled by the open source movement; it wasn’t like this between 2000 and 2010, when hackathons were traditionally associated only with developers, not networking folks. This week, Nicira’s Ben Pfaff spoke at Hacker Dojo on the past, present, and future of Open vSwitch, which he helped create, and gave a live demonstration of how OVSDB, the configuration database of OVS, works. I met some of my former colleagues and other peers whom I normally interact with online. Nowadays, in the SF Bay OpenStack meetups led by Mirantis and Sean Roberts from Yahoo!, attendees bring their laptops and help each other through the OpenStack installation and configuration process with DevStack. Similarly, the Bay Area Network Virtualization meetup offers a fantastic opportunity not only to learn about OpenFlow and Open vSwitch, but also to mingle with fellow practitioners. Meetups are not limited to the San Francisco Bay Area, either: in a recent Packet Pushers show, Kyle Mestery, one of the original team members of the Nexus 1000V, mentioned that an OpenStack meetup has also started in Minnesota. Meetups tend to catch on like wildfire, and hopefully we’ll see many more that cater to open networking.

These are healthy signs of a growing industry with plenty of people willing to help out and give back to the community.