AWS — Deploying a RabbitMQ cluster on EKS

Sumanth Reddy
3 min read · Jan 18, 2021

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.

EKS gives you a fully managed, highly available control plane (the Kubernetes API servers and the etcd persistence layer).

For the data plane, you can choose one of:

  1. Self-managed nodes — EC2 worker nodes and related resources that you create and maintain in your own account. You are billed for all the resources you create.
  2. Managed node groups — the same resources as above are created in your account, but AWS creates them for you. You still need to maintain them, for example by applying OS patches or triggering updates to newer AMIs.
  3. AWS Fargate — a serverless container service managed by AWS that can now run EKS pods. Unlike the EC2-based options, no instances are created in your account. You pay only for per-pod usage (CPU and memory), and there is nothing to manage such as OS patching.

More about the available compute options here — https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html

For this example, we are going with managed node groups. The quickest and easiest way to get started with EKS is eksctl. From the docs:

eksctl is a simple CLI tool for creating clusters on EKS - Amazon's new managed Kubernetes service for EC2. It is written in Go, uses CloudFormation, was created by Weaveworks and it welcomes contributions from the community.

There are two primary ways to create a cluster using eksctl.

  1. Using the eksctl CLI. Easiest and fastest, but it is difficult to specify every detail, especially when using node groups. If you want to go with Fargate, this is the best option.
  2. Using a cluster.yaml file that describes all the details. The schema is at https://eksctl.io/usage/schema/ and config examples are at https://github.com/weaveworks/eksctl/tree/master/examples

Even though it’s easier to express a custom node group config in cluster.yaml, for simplicity we will go with the eksctl CLI.
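For reference, an equivalent cluster.yaml for the setup built in the next two steps might look like the sketch below. The field names follow the eksctl ClusterConfig schema linked above; the cluster and node group names simply mirror the CLI commands in this article.

```yaml
# Sketch of a cluster.yaml equivalent to the CLI commands used below
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-test
  region: us-west-1
  version: "1.18"

iam:
  withOIDC: true          # equivalent of the --with-oidc flag

managedNodeGroups:
  - name: eks-ng
    instanceType: t2.micro
    desiredCapacity: 3
    minSize: 2
    maxSize: 4
    ssh:
      allow: true
      publicKeyPath: <SSH_KEY>   # path or key-pair name, as with the CLI
```

You would then create everything in one shot with `eksctl create cluster -f cluster.yaml`.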

1. Cluster creation

Let’s create the cluster with the command below. We will create the node group later.

eksctl create cluster \
  --name eks-test \
  --version 1.18 \
  --with-oidc \
  --without-nodegroup

After the above command completes successfully, your kubeconfig is stored at the default location ~/.kube/config. OIDC (OpenID Connect) is useful for authentication, for example when assigning IAM roles to Kubernetes service accounts as described here.
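As a sketch of that IAM-roles-for-service-accounts use case: with the OIDC provider enabled, a service account can be annotated with an IAM role ARN, and pods using it receive that role's permissions. The names and account ID below are hypothetical.

```yaml
# Illustrative example — bind an IAM role to a service account via the OIDC provider
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app                 # hypothetical service account
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role  # hypothetical role
```

eksctl can also create both the role and the annotated service account for you via `eksctl create iamserviceaccount`.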

Next, let’s create a managed node group using the command below. The <SSH_KEY> can be either a path to a public key on your machine or the name of an existing SSH key pair in your AWS region.

eksctl create nodegroup \
  --cluster eks-test \
  --region us-west-1 \
  --name eks-ng \
  --node-type t2.micro \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 4 \
  --ssh-access \
  --ssh-public-key <SSH_KEY> \
  --managed

This gives you an EKS cluster with a managed node group of 3 nodes, scaling between 2 and 4.

2. Nginx ingress controller

Deploy the nginx ingress controller using the command below, taken from the docs.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/aws/deploy.yaml

This gives you a plain HTTP endpoint by launching a Network Load Balancer in your region. See this if you also need TLS termination, i.e. HTTPS endpoints. Take note of the load balancer URL, which will be used to access the RabbitMQ UI in later steps.

3. Deploy RabbitMQ

Use manifests at https://github.com/sumanthkumarc/k8s-rabbitmq-clustered to deploy RabbitMQ.

The setup consists of a StatefulSet with the rabbitmq_peer_discovery_k8s plugin. The clustering requirements of RabbitMQ are at https://www.rabbitmq.com/clustering.html#cluster-formation-requirements
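To illustrate how the plugin wires the nodes together, a minimal rabbitmq.conf enabling Kubernetes peer discovery typically looks like the following. The values here are illustrative defaults; the ConfigMap in the repo above may differ.

```ini
# Illustrative settings for the rabbitmq_peer_discovery_k8s plugin
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
cluster_formation.k8s.address_type = hostname
cluster_formation.node_cleanup.interval = 30
cluster_formation.node_cleanup.only_log_warning = true
cluster_partition_handling = autoheal
```

The hostname address type is what makes the StatefulSet's stable pod names usable as cluster node names, one of the requirements linked above.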

NOTE: service.yaml contains an Ingress resource; replace the host in its hosts section with the load balancer URL.
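For illustration, the relevant part of such an Ingress might look like the sketch below (using the networking.k8s.io/v1beta1 API available on Kubernetes 1.18). The resource and service names are assumptions; the actual manifest in the repo may differ, and <LB_URL> stands for the load balancer hostname noted in step 2.

```yaml
# Illustrative Ingress routing the NLB hostname to the RabbitMQ management UI
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rabbitmq               # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: <LB_URL>           # replace with your load balancer URL
      http:
        paths:
          - path: /
            backend:
              serviceName: rabbitmq   # hypothetical service name
              servicePort: 15672      # RabbitMQ management UI port
```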

You can now access the RabbitMQ management UI at the load balancer URL.

RabbitMQ management UI
