EKS + ALB Controller — Simple canary deployment for applications on AWS EKS

Sumanth Reddy
Dec 10, 2022

This setup demonstrates a basic canary traffic split on EKS using the AWS Load Balancer Controller.

If you are not already aware, Services in Kubernetes are the mechanism that lets us communicate with pods over the network. We can use the NodePort and LoadBalancer Service types for external communication, i.e., to expose pods to traffic from outside the cluster. The problem is that the core cloud controller provisions a Classic Load Balancer in AWS by default for the LoadBalancer type, and a Classic Load Balancer does not give us the ability to split traffic. Hence we will use another open-source project, the AWS Load Balancer Controller, for our goal of traffic splitting.
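For context, this is what a plain LoadBalancer-type Service looks like; the name and selector here are illustrative, not from the example repo. On EKS, without the AWS Load Balancer Controller, a manifest like this provisions a Classic Load Balancer:

```yaml
# Illustrative LoadBalancer-type Service. On a stock EKS cluster the
# in-tree cloud controller backs this with a Classic Load Balancer,
# which cannot do weighted traffic splitting.
apiVersion: v1
kind: Service
metadata:
  name: demo-lb        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: demo          # hypothetical pod label
  ports:
    - port: 80
      targetPort: 80
```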

From project readme

AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster.

It satisfies Kubernetes Ingress resources by provisioning Application Load Balancers.

It satisfies Kubernetes Service resources by provisioning Network Load Balancers.

So the idea is simple — we create an Ingress resource, which provisions an Application Load Balancer, which gives us traffic splitting. We ensure there are two target groups — each pointing to one deployment (primary and canary) — with a fixed traffic split between the two.
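The split is expressed through the controller's custom `actions.<name>` annotation, which defines a weighted forward action that the Ingress rule then references via `use-annotation`. A minimal sketch — the Service names, weights, and Ingress name here are assumptions for illustration; the repo linked below may use different values:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-demo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Custom forward action: 90% of traffic to primary, 10% to canary.
    alb.ingress.kubernetes.io/actions.weighted-routing: >
      {"type":"forward","forwardConfig":{"targetGroups":[
        {"serviceName":"primary-svc","servicePort":"80","weight":90},
        {"serviceName":"canary-svc","servicePort":"80","weight":10}]}}
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: weighted-routing   # must match the actions.<name> suffix
                port:
                  name: use-annotation
```

The controller reads the action from the annotation, creates one target group per referenced Service, and configures the ALB listener rule with the given weights.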

Installing the AWS Load Balancer Controller is out of scope for this article, but the project has excellent docs covering it.

Assuming the controller has been installed, for this example we create two deployments — primary and canary — each running nginx, with index.html overridden to show "primary" and "canary" respectively. Two Services of type ClusterIP are created, one for each deployment. The Ingress has two weighted routes pointing to these Services on the given port. The full manifests for this are available at https://github.com/sumanthkumarc/aws-eks-canary
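For reference, here is a sketch of the primary half of that setup (the canary half is identical apart from names, labels, and the index.html contents); the names are illustrative and may differ from the repo:

```yaml
# index.html override for the primary deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: primary-index
data:
  index.html: |
    primary
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: primary
spec:
  replicas: 2
  selector:
    matchLabels:
      app: primary
  template:
    metadata:
      labels:
        app: primary
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: index
              mountPath: /usr/share/nginx/html
      volumes:
        - name: index
          configMap:
            name: primary-index
---
# ClusterIP Service the weighted Ingress route points at.
apiVersion: v1
kind: Service
metadata:
  name: primary-svc
spec:
  type: ClusterIP
  selector:
    app: primary
  ports:
    - port: 80
      targetPort: 80
```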

The image below shows how the infra components are reconciled from code.

Infrastructure as Code

A few important observations to note:

  1. This implementation assumes we use AWS VPC CNI networking, which gives each pod an IP address routable from anywhere in the VPC. This is important if we want to use target-type `ip` in the Ingress annotations.
  2. The target groups themselves are created by the controller based on the config provided in the Ingress. The controller also creates a custom resource of kind TargetGroupBinding for each target group it creates. This resource is the binding between a target group and its backends (pod IPs).
  3. When we use target-type `ip`, the AWS Load Balancer Controller gets the list of pod IPs directly from the EndpointSlices and syncs them as backends for the target group. Traffic goes straight from the ALB to the pod.
  4. If we use target-type `instance`, all the nodes are registered as backends for the target groups. This adds an extra hop: traffic goes first to a node, then to the pod.
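The controller creates TargetGroupBindings on its own, but it helps to know their shape. A sketch of what one might look like for the primary target group — the Service name is assumed, and the ARN is a placeholder, not a real target group:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: primary-tgb    # hypothetical name; the controller generates its own
spec:
  serviceRef:
    name: primary-svc  # endpoints of this Service become targets
    port: 80
  # Placeholder ARN — the controller fills in the real target group ARN.
  targetGroupARN: arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/0123456789abcdef
  targetType: ip       # pod IPs registered directly, no node hop
```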

Reference links

Controller installation — https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/installation/

Target group binding — https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/targetgroupbinding/targetgroupbinding/

Annotations — https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/

Kubernetes services — https://kubernetes.io/docs/concepts/services-networking/service/
