
Santa Cruz New Tech Meetup

Since moving to Santa Cruz, I’ve attended two events of the Santa Cruz New Tech Meetup, which is the 8th largest meetup in the United States. The events are held on the first Wednesday of each month and feature pitches from local tech entrepreneurs. While Santa Cruz isn’t technically Silicon Valley (it is on the other side of the hill), it is considered part of the San Francisco Bay Area and is home to some talented entrepreneurs. However, there aren’t (m)any startups in Santa Cruz looking into the SDN or cloud space. In this post, I outline the companies that presented to an audience of over 200 people at the November 2014 Santa Cruz New Tech Meetup.

Eggcyte has a small handheld product called The Egg, which is essentially a personal web server that stores media and lets you share it selectively. It is intended to provide a level of privacy that social media outlets can’t offer, because the Egg itself takes the place of the cloud. With 128 GB of storage and 10-12 hours of battery life, the founders intend to provide a more tangible sense of ownership over your media. It has a long way to go, though, and needs to better address security (screen scraping, encryption, etc.) in order to gain traction.

Moxtra has its roots in WebEx: one of the co-founders was the founder and CEO of WebEx before it was acquired by Cisco. Moxtra is a cloud collaboration platform that encompasses text, voice, and multimedia chat; visual and verbal content annotations; mobile screen sharing; and task management.

Tuul is currently, and arguably, the hottest startup in Santa Cruz and is focused on improving the customer experience. In their words: “Enhanced by our patent-pending tuulBots, Tuul’s customer support automation solution provides a platform for businesses to interact with their customers in a more direct, simple, and efficient way. tuulView dashboards enable businesses to handle multiple requests simultaneously, with little integration required.”

Cityblooms has taken the plunge into the Internet of Things, or as they call it, the Internet of Farms. In their words: “Cityblooms creates modular micro-farms that grow fresh and healthy food on rooftops, parking lots, patios, parks, and everywhere in between.” They have a prototype installed at Plantronics (another Santa Cruz company). This was a very impressive solution that I hope succeeds.

Finally, PredPol (short for Predictive Policing) uses analytics based on historical crime data to help law enforcement reduce crime. It reminds you of Minority Report, except it is less intrusive (thankfully). According to them: “Law enforcement agencies deploying PredPol are experiencing marked drops in crime due to increased police presence in areas deemed to be at greatest risk.”


The Etymology of Elephant and Mice Flows

Over the past 3-4 years, the term elephant flows has been used to refer to east-west (machine-to-machine) traffic, such as vMotion, migration, backup, and replication. The term mice flows is used to refer to north-south (user-to-machine) traffic. Why are we using these terms all of a sudden, and where did they come from?

Wikipedia states: “It is not clear who coined ‘elephant flow’, but the term began occurring in published Internet network research in 2001, when the observations were made that a small number of flows carry the majority of Internet traffic and the remainder consists of a large number of flows that carry very little Internet traffic.”

The traffic that traverses Data Center Interconnects (DCI) is typically east-west and flow-oriented (TCP-based). These applications have huge bandwidth requirements compared to north-south traffic. RFC 1072 defines the term LFN (Long Fat Network) for a network whose Bandwidth Delay Product (BDP) significantly exceeds 10^5 bits (12,500 bytes). BDP and LFNs have existed in the world of WAN optimization (traditionally for north-south traffic) for over a decade. It is only more recently, in the era of east-west DCI traffic, that elephant flows have become more prominent. The terms apply even within a data center, as the folks from VMware have shown in this well-written piece from exactly a year ago.
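To put the LFN threshold in perspective, here is a quick back-of-the-envelope BDP calculation in Python. The link speeds and round-trip times below are made-up examples, not measurements from any particular network:

```python
def bdp_bits(bandwidth_bps, rtt_seconds):
    """Bandwidth Delay Product: bits 'in flight' needed to keep the pipe full."""
    return bandwidth_bps * rtt_seconds

# Hypothetical examples, not measurements:
dci_bdp = bdp_bits(10e9, 0.005)      # 10 Gbps DCI link, 5 ms RTT -> 50,000,000 bits (~6.25 MB)
campus_bdp = bdp_bits(1e9, 0.0005)   # 1 Gbps campus link, 0.5 ms RTT -> 500,000 bits (~62.5 KB)

LFN_THRESHOLD_BITS = 1e5             # RFC 1072: BDP "significantly exceeds 10**5 bits"
for name, bdp in (("DCI", dci_bdp), ("campus", campus_bdp)):
    print(f"{name}: BDP = {bdp:,.0f} bits, LFN? {bdp > LFN_THRESHOLD_BITS}")
```

Even the modest campus link clears the 10^5-bit threshold, which is why TCP window scaling and related extensions have been bread and butter in WAN optimization for so long.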

White Box switch readiness for prime time

Matthew Stone runs Cumulus Linux on white box switches in his production network. He came on the Software Gone Wild podcast recently to talk about his experiences. Cumulus, Pica8, and Big Switch are the three biggest proponents of white box switching. While Cumulus focuses on Linux abstractions for L2/L3, Pica8 focuses more on its OpenFlow implementation, and Big Switch on leveraging white boxes to build network taps and, more recently, on piecing together leaf-spine fabric pods.

I believe white box switches are years away from entering campus networks. Even managed services are not close. You won’t see a Meraki-style deployment of these white box switches in wiring closets for a while. But Stone remains optimistic and makes solid points as an implementer. My favorite part is when he describes how Cumulus rewrote the ifupdown scripts to simplify configuration for network switches (which typically have roughly 50 ports, compared to a 4-port server) and contributed the result back to the Debian distribution as ifupdown2. Have a listen.
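To give a rough sense of the scale problem he is describing, here is a small, purely illustrative Python sketch; the interface names and addresses are hypothetical, and this is not how ifupdown2 itself is implemented:

```python
# Purely illustrative: the configuration burden of a 48-port switch versus a
# 4-port server when every interface needs its own stanza. The swpN naming
# follows the Cumulus Linux front-panel port convention; addresses are made up.
def stanza(ifname, address):
    return f"auto {ifname}\niface {ifname}\n    address {address}\n"

server_cfg = "\n".join(stanza(f"eth{i}", f"10.0.0.{2 * i}/31") for i in range(4))
switch_cfg = "\n".join(stanza(f"swp{i}", f"10.1.{i}.0/31") for i in range(1, 49))

print("server stanzas:", server_cfg.count("iface"))   # 4
print("switch stanzas:", switch_cfg.count("iface"))   # 48
```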

NBASE-T or MGBASE-T?

Last week I wrote about five new speeds that the Ethernet Alliance (the marketing arm of the IEEE) is working on. The lower speeds, 2.5 Gbps and 5 Gbps, are called MGBASE-T, and according to this post from the Ethernet Alliance, the MGBASE-T Alliance is overseeing the development of these standards outside of the IEEE. This week, news broke about leading PHY vendor Aquantia teaming up with Cisco, Freescale, and Xilinx to form the NBASE-T Alliance. This raises some questions about the work and causes that the MGBASE-T Alliance and the NBASE-T Alliance are committed to.

Both NBASE-T and MGBASE-T are trademarks of Aquantia, and both the MGBASE-T Alliance and the NBASE-T Alliance are Delaware corporations. It appears as though the MGBASE-T Alliance was formed around June 2014, while the NBASE-T Alliance is newer, dating from September 2014.

The NBASE-T Alliance website defines the technology as follows:

NBASE-T™  is a proven technology boosting the speed of twisted pair copper cabling up to 100 meters in length well beyond the designed limits of 1 Gbps.

Capable of reaching 2.5 and 5 Gigabits per second over 100m of Cat 5e cable, the disruptive NBASE-T solution allows a new type of signaling over twisted-pair cabling. Should the silicon have the capability, auto-negotiation can allow the NBASE-T solution to accurately select the best speed: 100 Megabit Ethernet (100MbE), 1 Gigabit Ethernet (GbE), 2.5 Gigabit Ethernet (2.5GbE) and 5 Gigabit Ethernet (5GbE).
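As a rough sketch of what that speed selection amounts to, the snippet below simply picks the highest data rate both link partners advertise. This is an illustration of the idea only, not the actual auto-negotiation protocol, and the capability lists are made up:

```python
# Illustrative only: pick the fastest speed both ends of a link advertise.
# Real auto-negotiation exchanges capability information on the wire; this
# only models the final "pick the best common speed" step.
def best_common_speed(local_advertised_mbps, partner_advertised_mbps):
    common = set(local_advertised_mbps) & set(partner_advertised_mbps)
    return max(common) if common else None

# Hypothetical example: an NBASE-T capable switch port and an access point
# whose Cat 5e run tops out at 2.5GbE.
switch_port = [100, 1000, 2500, 5000, 10000]
access_point = [100, 1000, 2500]
print(best_common_speed(switch_port, access_point))   # -> 2500 (2.5GbE)
```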

So what happens to MGBASE-T, given that Aquantia was a part of both? My hunch is that it fizzles away, and that the other vendors who were working on it (no names here) have lost the race to Cisco, Freescale, and Xilinx.

Head End Replication and VXLAN Compliance

Arista Networks recently announced that its implementation of VXLAN no longer requires IP Multicast in the underlay network. Instead, the implementation will now rely on a technique called Head End Replication to forward BUM (Broadcast, Unknown Unicast, and Multicast) traffic in the VLANs that it transports. But first, let’s rewind to the original VXLAN specification.

Virtual eXtensible Local Area Networks (VXLAN) were first defined in an Internet draft called draft-mahalingam-dutt-dcops-vxlan-00.txt in August 2011. It took some time for switch vendors to implement it, but now Broadcom’s Trident II supports it. Of course, software overlay solutions such as VMware NSX and the Nuage Virtualized Services Platform (VSP) also implement it. Three years later, in August 2014, the draft became RFC 7348. The draft went through nine revisions, up to draft-mahalingam-dutt-dcops-vxlan-09.txt, but there were no significant changes with respect to the multicast requirements in the underlay. They all say the same thing in Section 4.2:

Consider the VM on the source host attempting to communicate with the destination VM using IP.  Assuming that they are both on the same subnet, the VM sends out an Address Resolution Protocol (ARP) broadcast frame. In the non-VXLAN environment, this frame would be sent out using MAC broadcast across all switches carrying that VLAN.

With VXLAN, a header including the VXLAN VNI is inserted at the beginning of the packet along with the IP header and UDP header. However, this broadcast packet is sent out to the IP multicast group on which that VXLAN overlay network is realized. To effect this, we need to have a mapping between the VXLAN VNI and the IP multicast group that it will use.

In essence, IP multicast is the control plane in VXLAN. But, as we know, IP multicast is very complex to configure and manage.
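To make the mapping concrete, here is a minimal Python sketch of the multicast mode described above: each VNI is associated with one IP multicast group, and a BUM frame is encapsulated once and sent toward that group rather than toward any individual VTEP. The VNIs and group addresses are made-up examples:

```python
# Minimal sketch of multicast-mode VXLAN BUM handling (illustrative only).
# Each VXLAN Network Identifier (VNI) maps to one IP multicast group; every
# VTEP serving that VNI joins the group, so a single encapsulated copy of a
# broadcast/unknown-unicast/multicast frame reaches all of them.
VNI_TO_GROUP = {
    10010: "239.1.1.10",   # hypothetical VNI-to-group assignments
    10020: "239.1.1.20",
}

def send_bum_frame(vni, inner_frame):
    group = VNI_TO_GROUP[vni]
    # One copy on the wire; the underlay multicast tree does the fan-out.
    return [{"outer_dst_ip": group, "vni": vni, "payload": inner_frame}]

print(send_bum_frame(10010, "ARP who-has 10.0.0.5"))
```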

In June 2013, Cisco deviated from the VXLAN standard in the Nexus 1000V in two ways:

  1. It makes copies of the packet for each possible IP address at which the destination MAC address may be found, and sends these copies from the head end of the VXLAN tunnel, the VXLAN Tunnel End Point (VTEP). These packets are then unicast to all VMs within the VXLAN segment, thereby precluding the need for IP multicast in the core of the network.
  2. The Virtual Supervisor Module (VSM) of the Nexus 1000V acts as the control plane by maintaining the MAC address table of the VMs, which it then distributes, via a proprietary signaling protocol, to the Virtual Ethernet Module (VEM), which, in turn, acts as the data plane in the Nexus 1000V.

To their credit, Cisco acknowledged that this mode is not compliant with the standard, although they do support a multicast-mode configuration as well. At the time, they expressed hope that the rest of the industry would back their solution. Well, the RFC still states that an IP multicast backbone is needed.

This brings me to the original announcement from Arista. They claim in their press statement: “The Arista VXLAN implementation is truly open and standards based with the ability to interoperate with a wide range of data center switches.”

But nowhere else on their website do they state how they actually adhere to the standard. Cisco breaks the standard by performing Head End Replication. Adam Raffe does a great job of explaining how this works (basically, the source VTEP replicates the Broadcast or Multicast packet and sends a copy to all VMs in the same VXLAN). Arista should explain how exactly their enhanced implementation works.
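For contrast with the multicast sketch earlier, here is an equally minimal illustration of head end replication, the approach described in the Cisco list above and in Adam Raffe’s write-up: instead of one copy sent to a multicast group, the ingress VTEP makes one unicast copy per remote VTEP serving the segment. The addresses are made-up examples:

```python
# Minimal sketch of head end replication (illustrative only). The ingress
# VTEP fans BUM traffic out itself as unicast copies, one per remote VTEP,
# so the underlay needs no IP multicast at all.
VNI_TO_REMOTE_VTEPS = {
    10010: ["192.0.2.11", "192.0.2.12", "192.0.2.13"],   # hypothetical VTEP IPs
}

def send_bum_frame_her(vni, inner_frame, local_vtep="192.0.2.10"):
    return [
        {
            "outer_src_ip": local_vtep,
            "outer_dst_ip": remote_vtep,   # unicast, one copy per remote VTEP
            "vni": vni,
            "payload": inner_frame,
        }
        for remote_vtep in VNI_TO_REMOTE_VTEPS[vni]
    ]

print(len(send_bum_frame_her(10010, "ARP who-has 10.0.0.5")))   # -> 3 copies
```

The trade-off is visible even in this toy version: the underlay stays multicast-free, but the ingress VTEP’s replication load grows with the number of remote VTEPs in the segment.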