This post details my experience creating a simple multi-tier Kubernetes app on Google Cloud Platform (GCP) and on Amazon Web Services (AWS), taking advantage of the free tier accounts for each service. The app comprises a Redis master for storage, multiple Redis read replicas (a.k.a. slaves), and load-balanced web frontends. A Kubernetes Service of type LoadBalancer proxies incoming traffic to one or more of the frontend pods, which in turn read from the slaves. In order to manage these, I created Kubernetes replication controllers, pods, and services in this sequence (sketched as kubectl commands after the list):
- Create the Redis master replication controller.
- Create the Redis master service.
- Create the Redis slave replication controller.
- Create the Redis slave service.
- Create the guestbook replication controller.
- Create the guestbook service.
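For reference, here is a minimal sketch of that sequence as kubectl commands. The manifest file names mirror the upstream Guestbook example and are my assumption; adjust them to match whichever copy of the sample you clone.

```
# Redis master: replication controller, then the service that fronts it
kubectl create -f redis-master-controller.yaml
kubectl create -f redis-master-service.yaml

# Redis slaves: replication controller, then the service
kubectl create -f redis-slave-controller.yaml
kubectl create -f redis-slave-service.yaml

# Guestbook frontend: replication controller, then the LoadBalancer service
kubectl create -f frontend-controller.yaml
kubectl create -f frontend-service.yaml
```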
GCP has a few easy-to-follow tutorials for their main services. For Google Kubernetes Engine (GKE), the tutorial walks you through creating a cluster and deploying a simple Guestbook application to it.
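As a rough sketch, standing up a GKE cluster and pointing kubectl at it comes down to two commands. The cluster name and zone below are the ones I used; the g1-small machine type is deliberate, for reasons covered in the node pool observations further down.

```
# Create a three-node cluster; g1-small gives each node 1.7 GB of memory
gcloud container clusters create pakdude-kubernetes-cluster \
    --zone us-central1-a --num-nodes 3 --machine-type g1-small

# Populate kubeconfig with the cluster's endpoint and credentials
gcloud container clusters get-credentials pakdude-kubernetes-cluster \
    --zone us-central1-a
```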
AWS launched Amazon Elastic Container Service for Kubernetes (Amazon EKS) at re:Invent 2017 and it became Generally Available in June 2018.
For EKS, I generally followed the EKS documentation with the following observations:
- Some pre-work was needed in EKS, such as:
  - Installing the latest AWS CLI. Google Cloud Shell in GCP, on the other hand, makes it really simple to perform operations and administration tasks.
  - Installing a tool to use AWS IAM credentials to authenticate to a Kubernetes cluster. It’s not clear to me exactly why this is needed. The documentation for AWS IAM Authenticator for Kubernetes states: “If you are an administrator running a Kubernetes cluster on AWS, you already need to manage AWS IAM credentials to provision and update the cluster. By using AWS IAM Authenticator for Kubernetes, you avoid having to manage a separate credential for Kubernetes access.” Even so, it seems like a needless step. Why can’t Kubernetes clusters in AWS EKS leverage AWS IAM credentials directly?
- In GCP, Google Cloud Shell includes the kubectl CLI utility; in AWS, you need to install it locally. Moreover, GCP made it far easier to configure the kubeconfig file: a single gcloud container clusters get-credentials pakdude-kubernetes-cluster --zone us-central1-a command takes care of it. With EKS, I had to manually edit the kubeconfig file to populate the cluster endpoint (called an API server endpoint in EKS) and auth data (called a Certificate Authority in EKS); see the command sketch after this list for how to look these values up.
- Creating a Kubernetes cluster takes about 10 minutes in EKS, compared to just 2-3 minutes with GKE.
- When creating the Kubernetes cluster in EKS, the documentation is explicit that you must use IAM user credentials for this step, not root credentials. I got stuck on this step for a while, but it is good practice anyway to use an IAM user instead of the root credentials. (The sketch after this list includes a quick way to verify which identity you’re using.)
- In GKE, I ran into memory allocation issues when creating node pools with the f1-micro machine type (1 shared vCPU, 0.6 GB memory). When I created the node pools with the g1-small machine type (1 shared vCPU, 1.7 GB memory) instead, things ran more smoothly. In EKS, micro instances are not even offered; the smallest instance type I could specify for the NodeInstanceType parameter was t2.small.
- The supported operating systems for nodes in GKE node pools are Container-Optimized OS (cos) and Ubuntu. In EKS, you have to use an Amazon EKS-optimized AMI, which differs across regions. That difference alone opens up inconsistencies between the two implementations.
- There is very little one can do in the AWS Console for EKS; most of the work happens at the CLI or programmatically. For example, there is no way in the console to check the status of worker nodes. You have to use the kubectl get nodes --watch command.
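To make the EKS pre-work above concrete, here is a minimal sketch of the checks and lookups involved, assuming a hypothetical cluster named pakdude-eks-cluster:

```
# Confirm you are acting as an IAM user, not the root account
aws sts get-caller-identity

# Look up the API server endpoint and certificate authority data
# that you paste into kubeconfig by hand
aws eks describe-cluster --name pakdude-eks-cluster \
    --query cluster.endpoint --output text
aws eks describe-cluster --name pakdude-eks-cluster \
    --query cluster.certificateAuthority.data --output text

# Verify the IAM authenticator can mint a token for the cluster
aws-iam-authenticator token -i pakdude-eks-cluster
```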
The sample GKE frontend/Guestbook code was cloned from GitHub. Once set up, it gave this output:
```
umairhoodbhoy@pakdude713:~$ kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
frontend-qgghb       1/1       Running   0          23h
frontend-qngcj       1/1       Running   0          23h
frontend-vm7nq       1/1       Running   0          23h
redis-master-j6wc9   1/1       Running   0          23h
redis-slave-plc59    1/1       Running   0          23h
redis-slave-r664d    1/1       Running   0          23h
umairhoodbhoy@pakdude713:~$ kubectl get rc
NAME           DESIRED   CURRENT   READY     AGE
frontend       3         3         3         23h
redis-master   1         1         1         1d
redis-slave    2         2         2         23h
umairhoodbhoy@pakdude713:~$ kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
frontend       LoadBalancer   10.11.242.118   35.232.113.254   80:30549/TCP   23h
kubernetes     ClusterIP      10.11.240.1    <none>           443/TCP        1d
redis-master   ClusterIP      10.11.241.226   <none>           6379/TCP       1d
redis-slave    ClusterIP      10.11.246.51    <none>           6379/TCP       23h
umairhoodbhoy@pakdude713:~$
```
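With the frontend service up, a quick sanity check is to curl the EXTERNAL-IP from the output above; the guestbook page comes back on port 80:

```
curl http://35.232.113.254
```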
The sample EKS Guestbook app was also cloned from GitHub. Once set up, it gave this output:
```
hoodbu@macbook-pro /AWS (608) kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
guestbook-9lztg      1/1       Running   0          3h
guestbook-bb7md      1/1       Running   0          3h
guestbook-gx6sr      1/1       Running   0          3h
redis-master-qhk8h   1/1       Running   0          3h
redis-slave-7jlpb    1/1       Running   0          3h
redis-slave-8hsmg    1/1       Running   0          3h
hoodbu@macbook-pro /AWS (609) kubectl get rc
NAME           DESIRED   CURRENT   READY     AGE
guestbook      3         3         3         3h
redis-master   1         1         1         3h
redis-slave    2         2         2         3h
hoodbu@macbook-pro /AWS (610) kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP        PORT(S)          AGE
guestbook      LoadBalancer   10.100.79.4     affaf9aae9039...   3000:30189/TCP   3h
kubernetes     ClusterIP      10.100.0.1      <none>             443/TCP          5h
redis-master   ClusterIP      10.100.195.31   <none>             6379/TCP         3h
redis-slave    ClusterIP      10.100.151.66   <none>             6379/TCP         3h
hoodbu@macbook-pro /AWS (611)
```
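One small difference from GKE: the external endpoint here is an ELB hostname rather than an IP, and kubectl truncates it in the default table view. A jsonpath query retrieves the full hostname (note that the guestbook service in this sample listens on port 3000 rather than 80):

```
kubectl get service guestbook \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```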
Obviously, these are just simple Guestbook applications, and I only covered the ease of setting up Kubernetes. A more relevant measure of Kubernetes on either cloud platform would be performance and scalability, but I had no easy way of stress testing the apps. Even so, after going through this exercise in both AWS and GCP, it is clear where GCP’s strengths lie. AWS may be the dominant public cloud player, but launching EKS despite already having its own Elastic Container Service (ECS) is an indication of how popular Kubernetes is as a container orchestration system. Running Kubernetes on AWS is incredibly cumbersome compared to running it on GCP. That said, EKS is relatively new, and I’m sure AWS will iron out the kinks in the months to come.