Live Blog of OpenStack Silicon Valley 2014 Community Event, Part 2

I attended a variety of track sessions in the afternoon of the OpenStack Silicon Valley Community Event. Part 1 can be viewed here.

From the Networking track:

Popular Options for SDN Deployment in OpenStack

  • Jonathan LaCour | Dreamhost
  • Peter Balland | VMware
  • Mike Dvorkin | Cisco
  • Azmir Mohamed | PLUMgrid

Mohamed: PLUMgrid's baseline is Neutron; it is our entry point into OpenStack. One problem is the L2 underlay, because it doesn't scale. The second problem is the overlay. Upstreaming code is a challenge.

LaCour: We are trying to build a public cloud at Dreamhost, which has always been built on open source. L2 virtualization is provided by VMware; L3 and above is implemented internally. Most of the 300,000 customers are end users, not enterprises. Today Neutron isn't up to speed with IPv6, though Juno is getting there. Having intelligent overlays on top of cheap, fast, dumb underlays is great for us.

Balland: Working on high-level abstractions, such as Congress. VMware has had full-time developers dedicated to Neutron, and to Quantum before that. Geneve will replace GRE.
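
For context on that last point: like VXLAN, Geneve identifies a virtual network with a 24-bit VNI, but it adds room for extensible TLV options after a fixed 8-byte header. Below is a minimal, illustrative Python sketch of packing that fixed header; the field values are made up for the example and this is not any vendor's API.

```python
import struct

def geneve_header(vni, proto=0x6558, opt_len=0, oam=False, critical=False):
    """Pack the 8-byte fixed Geneve header (illustrative sketch).

    Fields per the Geneve spec: 2-bit version, 6-bit option length
    (in 4-byte words), O and C flag bits, 16-bit protocol type
    (an EtherType; 0x6558 = Transparent Ethernet Bridging), then a
    24-bit VNI and 8 reserved bits. TLV options would follow.
    """
    ver = 0                                          # current version is 0
    first = (ver << 6) | (opt_len & 0x3F)            # Ver | Opt Len
    second = (int(oam) << 7) | (int(critical) << 6)  # O and C flags
    vni_and_rsvd = (vni & 0xFFFFFF) << 8             # VNI + reserved byte
    return struct.pack("!BBH", first, second, proto) + struct.pack("!I", vni_and_rsvd)

# Example: header for hypothetical virtual network 5001 carrying Ethernet
hdr = geneve_header(5001)
assert len(hdr) == 8
```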

Next, under Emerging Trends, was NFV in OpenStack, featuring

  • Martin Taylor | Metaswitch / Calico
  • Alan Kavanagh | Ericsson
  • Kyle MacDonald | OpenNile
  • Jennifer Lin | Juniper
  • Chris Wright | Red Hat

MacDonald: ETSI is the European Telecommunications Standards Institute. It wrote a manifesto calling on all of its vendors to deliver their network gear as software instead of integrated systems by 2018. The effort started in 2012. When telcos have the agility of virtualization, they can achieve a lot. Lots of POCs so far.

Taylor: A VNF performs the functions of a telco in software. Traditionally, telcos demanded standards and would wait five years or so; they no longer have that luxury and need to get things done now. ETSI has defined a framework where vendors and network operators can get together and demonstrate capabilities to drive the industry forward. We already have a virtualized session border controller in Europe. The challenge for networking people is to convince carriers that the solution is carrier grade. Carriers feel that OpenStack has some challenges, such as packet processing. Telcos feel that their ability to innovate is hindered by the very small number of vendors they work with; in the ETSI manifesto, they say they can source workloads from a much broader ecosystem of vendors, who are more nimble. Telcos have learned the lesson of WhatsApp, which facilitated free text messages and then got bought for $19B, as an example of a competitor. Telcos feel they need to get up to speed.

Wright: Service chaining is the ability to steer packets through a collection of logical services (firewalls, load balancers, IDS, etc.) in the proper order. SDN steers the packets; service chaining is how a service provider will package these services for customers. Service providers want a general-purpose compute platform, and that is a cloud problem. ODL has OpenStack integration that can help NFV, and there is also a service chaining project. Overall, the ETSI NFV project identifies a suite of open source technologies, such as DPDK and ODL. Elastic, flexible infrastructure is the biggest gain in the shift from CapEx to OpEx.
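
Conceptually, a service chain is just an enforced ordering of logical services along a packet's path. Here is a toy Python sketch of that idea; the service names and packet model are purely illustrative, not an ODL or Neutron API.

```python
# Toy model of service chaining: steer a "packet" through an ordered
# list of logical services. In a real deployment the SDN layer enforces
# this ordering in the data plane; here it is just sequential calls.
from typing import Callable, Dict, List

Packet = Dict[str, str]
Service = Callable[[Packet], Packet]

def firewall(pkt: Packet) -> Packet:
    pkt["fw"] = "allowed"        # a real firewall might drop the packet here
    return pkt

def load_balancer(pkt: Packet) -> Packet:
    pkt["backend"] = "10.0.0.7"  # pick a backend for the flow
    return pkt

def ids(pkt: Packet) -> Packet:
    pkt["ids"] = "inspected"     # passive inspection, packet unchanged
    return pkt

def apply_chain(pkt: Packet, chain: List[Service]) -> Packet:
    for service in chain:        # order matters: that is the whole point
        pkt = service(pkt)
    return pkt

result = apply_chain({"src": "10.0.0.1", "dst": "192.0.2.9"},
                     [firewall, load_balancer, ids])
```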

Kavanagh: Telco vendors must be able to pick and choose various services on vendor boxes and chain them together, i.e., service chaining. Ericsson is virtualizing its entire product portfolio. The challenge is time to market, and NFV will shrink lead times tremendously. ODL is a good complement to Neutron; we will see more advanced services like BGP being added. Telcos are looking to NFV to reduce lead times and to virtualize their key services to compete with Amazon. They also see OpenStack as a good fit because it also reduces costs.

Lin: The difference between SDN and NFV: SDN is about automating and orchestrating a software-driven platform. ETSI has unique applications, like stateful apps: how to tie into OSS/BSS systems, and how to handle latency-sensitive apps that cannot exist in a stateless form. We should not reinvent the wheel, e.g., IP VPN. Contrail technology is based on a standard that was co-authored by AT&T and Verizon. NFV is more about service orchestration and service delivery than about how we transport packets. The challenge is how to expose this as a service and improve the orchestration.

The final panel in the afternoon track I attended was Four SDN Use Cases, with

  • Lane Patterson | Symantec
  • Jonathan LaCour | Dreamhost
  • Adam Johnson | Midokura
  • Ben Lin | VMware

Lin: VMware has been running its internal cloud on OpenStack since 2012; it is used for development, test, demos, pretty much everything. eBay was one of their first customers. NSX is multi-hypervisor, hence OpenStack. The main pane of glass is vSphere. With so many moving parts in OpenStack, it is important to understand the amount of scale you're looking for. Out of the box they get great performance with STT, and they are investigating DPDK. They can send traffic directly from hypervisor to hypervisor, which reduces traffic a lot.

LaCour: He spoke of DreamCompute, Dreamhost's OpenStack-based mass-market public cloud offering. It has IPv6 dual-stack support for customers as well as back-end systems, L2 isolation for every tenant, and drives cost down: they went with white-box switches running Cumulus Linux so that their operations team could be generalists. The fabric is spine-and-leaf, with 40G links between spine and leaf, 10G access switches, and 10G storage networks (i.e. storage based on ). They believe that overlays enable simpler physical networks. Their overlay is VMware NSX, and they were one of the first Nicira customers; they chose it because it had the most solid L2 support. They don't use its L3 solutions, which weren't supported at the time, so they implemented their own using Akanda, a router virtual appliance with Neutron extensions. Their L3 service appliance uses iptables in Linux. Storage networks are separate from data networks, with minimal overhead. Each tenant gets a single router appliance that is booted independently and ephemerally and can be as small as 256MB, or 512MB with more features. They expect DreamCompute to go GA next year.
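
To make the iptables point concrete, here is a minimal sketch of the kind of NAT and forwarding rules such a per-tenant router appliance might install. The interface names and the subprocess-driven approach are assumptions for illustration, not Akanda's actual implementation.

```python
# Sketch of rules a minimal iptables-based L3 appliance might install:
# masquerade tenant traffic out the uplink and forward established flows
# back in. Interface names are hypothetical; requires root to run.
import subprocess

EXTERNAL_IF = "eth0"   # assumed uplink toward the provider network
INTERNAL_IF = "eth1"   # assumed tenant-facing interface

RULES = [
    # NAT tenant traffic leaving via the external interface
    ["iptables", "-t", "nat", "-A", "POSTROUTING",
     "-o", EXTERNAL_IF, "-j", "MASQUERADE"],
    # Allow connections from the tenant side outward
    ["iptables", "-A", "FORWARD", "-i", INTERNAL_IF,
     "-o", EXTERNAL_IF, "-j", "ACCEPT"],
    # Allow return traffic for established flows only
    ["iptables", "-A", "FORWARD", "-i", EXTERNAL_IF, "-o", INTERNAL_IF,
     "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
]

for rule in RULES:
    subprocess.run(rule, check=True)
```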

Johnson: Midokura does network virtualization overlays for OpenStack. One customer is KVH, a member of the Fidelity group, which provides managed private cloud for financial services customers. MidoNet is an OpenStack overlay for isolated private cloud environments that provides distributed features (load balancing, firewall, routing), not just distributed L2 services. It programs the Linux kernel to do packet processing and uses VXLAN. They can saturate their Mellanox 40G uplinks at 37G.
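
For context on the kernel-level VXLAN primitive that overlays like this build on, here is a hedged sketch that creates a plain Linux VXLAN device using the standard iproute2 commands, driven from Python. The device name, VNI, and underlay NIC are illustrative, and this is ordinary kernel VXLAN, not MidoNet's own datapath programming.

```python
# Illustrative only: create a Linux kernel VXLAN device with iproute2.
# This shows the encapsulation primitive, not MidoNet's agent logic.
import subprocess

VNI = 100              # hypothetical 24-bit virtual network identifier
UNDERLAY_DEV = "eth0"  # physical NIC carrying the underlay traffic

subprocess.run(["ip", "link", "add", "vxlan0", "type", "vxlan",
                "id", str(VNI),
                "dev", UNDERLAY_DEV,
                "dstport", "4789"],   # IANA-assigned VXLAN UDP port
               check=True)
subprocess.run(["ip", "link", "set", "vxlan0", "up"], check=True)
```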

Patterson: Symantec grew by acquisition, with several BUs and 58 DCs around the world; they consolidated many of them via OpenStack. They offer services such as object storage, batch processing, authentication, etc., i.e., more than just IaaS. They use bare metal for their Hadoop and Cassandra environments alongside OpenStack Nova. For the SDN layer, they needed something to supplement Neutron and ended up going with Contrail for its MPLS-over-GRE capability, though they are moving to VXLAN. On the bare-metal side, they used a Docker-like layer so that each tenant on the compute virtualization side has to authenticate before talking to the Big Data side. In one environment they use Cisco for leaf-spine, and in another Arista, with only generic, vendor-agnostic features being implemented. TORs are 4x40G into the spine; the spine is 2x, moving to 4x. When they tested generic OVS, every vendor outperformed it in the bakeoffs. They saturated the uplinks from the TORs to the spines.

Interestingly, none of the 4 panelists used OpenFlow at all in their clouds.
