Category Archives: AWS

AppIQ – Unprecedented visibility that Aviatrix CoPilot brings

Earlier in my career, I worked as a Network Engineer in the high-frequency trading industry at a capital market exchange. It was a time when electronic trading was gaining heavy momentum as open outcry receded. This was thanks in large part to vendors such as Arista, which leveraged merchant silicon from Broadcom to lead the charge of low-latency networking.

Scores of trading firms would set up their equipment in one of the exchange’s many data centers inside the building to practice latency arbitrage. Speed was the name of the game, and livelihoods hinged on the network’s ability to pass packets as quickly as possible.

In the early days, any time there was a significant delay (which could be as little as 1-2 seconds), the exchange would get hit with hefty fines. However, if we could prove that it was not the network but the application that caused a trade to execute slowly, then we were off the hook. So my team invested in several network taps and sniffers from NETSCOUT and Gigamon to perform forensic analysis on these low-latency, high-throughput financial systems.

But there were never enough taps. Taps allowed us to pinpoint the location and cause of delays and retransmissions only if we were lucky enough to have placed them at the exact spot in the network where the delay was incurred. It was like playing a game of whack-a-mole. Providing evidential data was a nightmare in those days. There was so little visibility.

Did I mention we owned the entire network?

Fast forward to today’s public clouds, which are complete black boxes. They provide very little visibility, and the network has had no way to prove it is not at fault because, until Aviatrix CoPilot came along, there were no tools able to extract meaningful data. CoPilot already had the ability to display NetFlow records to provide such empirical data. Take this screenshot as an example.

If I were to see a flow with a few SYNs coming in, for example, I could use that information to ask the Application team whether everything is okay on their end. If I see a SYN followed immediately by an RST, that might point in the direction of a firewall blocking something. And if PSH packets are going through fine and data is being passed for a while, it might be an indication that the network is doing its job and the application developer needs to be pulled in. It’s a very powerful feature.
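
To make that kind of triage concrete, here is a toy sketch in Python. The flag constants are the standard TCP flag bits; the record format is hypothetical and much simpler than what CoPilot actually displays, but the reasoning is the same as described above.

    # Toy triage of a flow record based on its cumulative TCP flags.
    # NetFlow-style records carry the OR of all TCP flags seen on a flow,
    # which is what this keys off. The thresholds/format are illustrative.
    SYN, RST, PSH, ACK = 0x02, 0x04, 0x08, 0x10

    def triage(tcp_flags: int, bytes_transferred: int) -> str:
        if tcp_flags & SYN and tcp_flags & RST and not tcp_flags & ACK:
            return "SYN answered by RST: suspect a firewall or closed port"
        if tcp_flags & SYN and not tcp_flags & ACK:
            return "unanswered SYNs: ask the Application team"
        if tcp_flags & PSH and bytes_transferred > 0:
            return "data is flowing: the network is doing its job"
        return "inconclusive: keep digging"

    print(triage(SYN | RST, 0))               # firewall suspicion
    print(triage(SYN | ACK | PSH, 4096))      # healthy flow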

But with the new AppIQ feature released this week in CoPilot, visibility is taken to the next level. AppIQ allows you to generate a comprehensive report of latency, traffic, and performance monitoring data between any two cloud instances connected via your Aviatrix transit network, as shown here with an SSH test.

Now you can see latencies on a hop-by-hop basis. The AWS us-east-1 (N. Virginia) and us-east-2 (Ohio) regions are about 12 ms apart on average. And each of those green links represents an encrypted tunnel.
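
AppIQ gathers this for you, but as a rough illustration of what a number like that means, here is a minimal Python probe that times TCP connection setup (roughly one round trip) to a peer instance. The address and port are hypothetical, and unlike AppIQ this measures only end-to-end, not hop-by-hop.

    # Median TCP connect time (~1 RTT) to a peer, in milliseconds.
    import socket, statistics, time

    def connect_rtt_ms(host: str, port: int, samples: int = 5) -> float:
        rtts = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=2):
                rtts.append((time.perf_counter() - start) * 1000)
        return statistics.median(rtts)

    # e.g. an instance in us-east-2 listening on SSH (hypothetical IP)
    print(f"{connect_rtt_ms('10.2.0.15', 22):.1f} ms")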

End-to-end encryption in the cloud with full visibility: that’s what every network engineer dreams of having.

Building a Multi-Cloud Network for less than $1 an Hour – Aviatrix Kickstart

This is the post I had been meaning to write for ages. How do you leverage Infrastructure as Code to build a multi-cloud network? It turns out you don’t have to write the code yourself. This is the beauty of Aviatrix Kickstart.

For less than $1 an hour, I was able to build a multi-cloud transit network with 2 spoke VPCs in AWS, 2 spoke VNets in Azure, a transit VPC in AWS, a transit VNet in Azure, and a peered connection between the two. Oh, and 2 EC2 instances in AWS for running tests. All within an hour.

The quickest way to build this would have been to script it in Terraform. But with Kickstart, a containerized environment handles this for you. So you don’t need to have Terraform skills or write any code.

Kickstart is a Docker image that you can download from here. All you need to run it are:

  • Docker. You can do this either by installing Docker Desktop on your client PC (Windows or Mac) or by running Docker on an EC2 instance with the Amazon Linux 2 AMI.
  • An AWS account and optionally an Azure account. I built the entire environment in AWS and Azure Free Tier accounts.

Once you run docker run -it aviatrix/kickstart bash, the script walks you through the process and allows you to configure the region, the names of the resources, and the CIDRs of the Aviatrix Controller as well as of the Multi-Cloud Network Architecture (MCNA) Transits. It then leverages Terraform to carry out several Day-Zero activities. Aviatrix Kickstart makes it so easy that I was able to build the following architecture in less than an hour without writing a line of code.

Check out these resources for more details:

  • Guide to building the environment
  • 10-minute video of demo
  • Test cases to try out once you’ve built the environment

Why I Joined Aviatrix

Earlier this month I joined Aviatrix Systems as a Solutions Architect with a focus on growing the Aviatrix Certified Engineer (ACE) program. I had spent the previous 2 years immersing myself in Public Cloud platforms through training sites such as A Cloud Guru and Linux Academy. Here are some of my observations from that period which led to my decision to join Aviatrix:

  • Cloud Networking is radically different from on-premises networking. For example,
    • In the on-prem world, network architects designed in layers (Core/Aggregation/Access). The world of Public Cloud is flat in order to meet the pace of DevOps.
    • Security principles such as Defense-in-Depth have led to new constructs (IAM, Accounts, Organizations, Subscriptions) that were not prevalent in the on-prem world.
    • Cloud Vendors try their best to abstract the networking underlay constructs so that networking is presented as a black box to the cloud architect. To a certain extent they’ve done well (who honestly misses Spanning Tree?), but just because they don’t offer a mechanism to view these constructs doesn’t mean the constructs no longer exist. In fact, Operations needs better visibility now than it did in the on-prem world.
  • While Cloud Vendors offer Networking Specialty certifications, they provide no visibility into Day 2 Operations. And from an Architecture perspective, they trivialize the networking underlay: they don’t provide solutions to real-world problems like overlapping subnets or end-to-end visibility.
  • Cloud vendors are incentivized by lock-in and have no real motivation for multi-cloud.
  • Enterprises find it easier to interpret multi-cloud in terms of governance and billing than in terms of infrastructure.
  • Cloud Training platforms such as A Cloud Guru and Udemy completely lack multi-cloud networking offerings. They have training courses on various cloud-first tools and technologies like Terraform, CloudFormation, Deployment Manager, Docker, and Kubernetes, and certification courses for AWS, Azure, and GCP. But when it comes to multi-cloud, let alone multi-cloud networking, they have not yet capitalized on the opportunity.
  • Enterprises need better instruction on the need for multi-cloud networking. Often when Enterprises say they need Cloud Infrastructure Architects, they really mean Cloud Application Architects. Yet when they cross the bridge of multi-cloud (and they almost inevitably will), they realize that application performance relies on a rock-solid transit. And that is where Aviatrix shines.

Aviatrix is the pioneer in multi-cloud networking and is solving a really hard problem the right way – by simplifying. I’m looking forward to sharing some more of my learnings with you as I embark on this new journey.

What’s the Big Deal About Multi-Cloud Networking – Part 2

If you were experiencing issues with Zoom calls today, you were not alone.

But if you take a close look at today’s outage, it is clear that it correlated with an AWS outage.

In fact, most of Zoom runs on AWS, according to AWS. This is despite Oracle’s claim that millions of users run Zoom on Oracle Cloud. Zoom didn’t state the cause of the outage, but judging from these two charts, it is quite possible that a well-architected transit network, such as the Aviatrix Multi-Cloud Network Architecture, could have prevented it.

Bringing Reference Architectures to Multi-Cloud Networking

Recently I attended Aviatrix Certified Engineer training to better understand multi-cloud networking and how Aviatrix is trying to solve the many problems it presents, some of which I have experienced first-hand. Disclaimer: I’ve been an avid listener of the Packet Pushers podcast since 2011, and Aviatrix has sponsored 3 of its shows since December 2019.

Ever since I embarked on the public cloud journey, I have noticed how each of the big 4 vendors (AWS, Azure, GCP, and OCI) approaches networking in the cloud differently from how it has been done on-premises. They all share many similarities, such as:

  • The concept of a virtual Data Center (VPC in AWS and GCP, VNET in Azure, VCN in OCI).
  • Abstracting Layer 2 from the user as much as possible (no mention of Spanning Tree or ARP anywhere), despite the fact that these protocols never went away.

However, there are many differences as well, such as this one:

  • In AWS, subnets have zonal scope – each subnet must reside entirely within one Availability Zone and cannot span zones.
  • In GCP, subnets have regional scope – a subnet may span multiple zones within a region.

Broadly speaking, the major Cloud Service Providers (CSPs) do a fairly decent job with their documentation, but they don’t make it easy to connect clouds together. They give you plenty of rope to hang yourself, and you end up on your own. Consequently, your multi-cloud network design ends up being unique – a snowflake.

In the pre-Public Cloud, on-premises world, we would never have gotten far if it weren’t for reference designs. Whether it was the 3-tier Core/Aggregation/Access design that Cisco came out with in the late 1990s, or the more scalable spine-leaf fabric designs that followed a decade later, there has always been a need for cookie-cutter blueprints for enterprises to follow. Otherwise they end up reinventing the wheel and becoming snowflakes. And as any networking engineer worth their salt will tell you, networking is the plumbing of the Internet, of a Data Center, of a Campus, and the same is true of an application that needs to be built in the cloud. You don’t appreciate it when it is performing well, only when it is broken.

What exacerbates things is that the leading CSP, AWS, does not even acknowledge multiple clouds. In their documentation, they write as if Hybrid IT means only the world of on-premises and of AWS. There is only one cloud in AWS’ world, and that is AWS. But the reality is that there is a growing need for enterprises to be multi-cloud – such as needing the IoT capabilities of AWS but the AI/ML capabilities of GCP, or starting on one cloud but later needing a second because of a merger, acquisition, or partnership. Under such circumstances, an organization has to consider multi-cloud, but in the absence of a common reference architecture, the network becomes incredibly complex and brittle.

Enter Aviatrix with its Multi-Cloud Network Architecture (MCNA). This is a repeatable 3-layered architecture that abstracts away the complexity of the cloud-native components, regardless of the CSPs being used. The most important of the 3 layers is the Transit Layer, as it handles intra-region, inter-region, and inter-cloud connectivity.

Aviatrix Multi-Cloud Network Architecture (MCNA)

Transitive routing is a feature that none of the CSPs support natively. Without it, you need full-mesh designs, which may work fine for a handful of VPCs. But full mesh is an N² problem (actually N(N-1)/2), which does not scale well. In AWS, customers used to have to address this entirely on their own with Transit VPCs, which were very difficult to manage. In an attempt to address the problem with a managed service, AWS announced Transit Gateways at re:Invent 2018, but that doesn’t solve the entire problem either. With Transit Gateways (TGW), a VPC sends its routes to the TGW it is attached to. However, that TGW does not automatically redistribute those routes to the other VPCs that are attached to it. The repeatable design of the Aviatrix MCNA is able to solve this and many other multi-cloud networking problems.
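
To put that scaling problem in numbers, here is a quick back-of-the-envelope sketch in Python of how fast full-mesh link counts grow:

    # Peering connections needed for a full mesh of n VPCs: n(n-1)/2
    def full_mesh_links(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 10, 25, 50):
        print(f"{n} VPCs -> {full_mesh_links(n)} peering connections")
    # 5 VPCs -> 10, 10 VPCs -> 45, 25 VPCs -> 300, 50 VPCs -> 1225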

Aviatrix has a broad suite of features. The ones from the training that impressed me the most were:

  • Simplicity of solution – This is a born-in-the-cloud solution whose components are:
    • a Controller that can even run on a t2.micro instance
    • a Gateway that handles the Data Plane and can scale out or up
    • Cloud native constructs, such as VPC/VNET/VCN
  • High Performance Encryption (HPE) – This is ideal for enterprises that, for compliance reasons, require end-to-end encryption. Throughput for encrypting a private AWS Direct Connect, Azure ExpressRoute, GCP Cloud Interconnect, or OCI FastConnect link cannot exceed 1.25 Gbps, because virtual routers utilize a single core and establish only 1 IPSec tunnel. So even if you are paying for 10 Gbps, you are limited by IPSec performance to 1.25 Gbps. Aviatrix HPE achieves line-rate encryption by running multiple IPSec tunnels in parallel and spreading traffic across them with ECMP; at 1.25 Gbps per tunnel, it takes the equivalent of 8 tunnels to fill a 10 Gbps link.
  • CloudWAN – This takes advantage of the existing investment that enterprises have poured into Cisco WAN infrastructure. When such organizations need optimal latency between branches and apps running in the cloud, Aviatrix CloudWAN is able to log in to their Cisco ISRs and configure VPN and BGP appropriately, so that the routers connect to an Aviatrix Transit Gateway via the AWS Global Accelerator service for the shortest-latency path to the cloud.
  • Smart SAML User VPN – I wrote a post on this here.
  • Operational Tools – FlightPath is the coolest multi-cloud feature I have ever seen. It is an inter-VPC/VNET/VCN troubleshooting tool that retrieves and displays the Security Groups, Route table entries, and Network ACLs of every cloud VPC the traffic traverses, so you can pinpoint where a problem exists along the data plane. Investigating this manually would otherwise involve approximately 25 data points (and that doesn’t even include multi-cloud, multi-region, and multi-account). FlightPath automates all of this. Think Traceroute for multi-cloud. A rough sketch of that manual legwork follows this list.
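
To give a feel for the manual legwork FlightPath automates, here is a minimal sketch (not Aviatrix’s implementation) that pulls those three data sets for the VPCs along a path in a single region and account, using boto3. The VPC IDs are hypothetical, and AWS credentials are assumed to be configured; FlightPath does the multi-cloud, multi-region, multi-account version of this and correlates the results for you.

    # Snapshot the route tables, security groups, and network ACLs
    # for each VPC along a suspected traffic path (one region/account).
    import boto3

    def path_snapshot(region: str, vpc_ids: list) -> dict:
        ec2 = boto3.client("ec2", region_name=region)
        flt = [{"Name": "vpc-id", "Values": vpc_ids}]
        return {
            "route_tables": ec2.describe_route_tables(Filters=flt)["RouteTables"],
            "security_groups": ec2.describe_security_groups(Filters=flt)["SecurityGroups"],
            "network_acls": ec2.describe_network_acls(Filters=flt)["NetworkAcls"],
        }

    snapshot = path_snapshot("us-east-1", ["vpc-aaaa1111", "vpc-bbbb2222"])
    for table in snapshot["route_tables"]:
        for route in table["Routes"]:
            print(table["RouteTableId"], route.get("DestinationCidrBlock"), route.get("State"))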

In the weeks and months to come, I’m hoping to get my hands dirty with some labs and write about my experience here.