EKS (Elastic Kubernetes Service) is AWS’s managed container orchestration service that simplifies Kubernetes cluster management. This article presents a technical implementation example that demonstrates how to automatically deploy an AWS EKS cluster that combines an AWS ALB (Application Load Balancer) with NGINX as its Kubernetes Ingress Controller.
What is a Kubernetes Ingress Controller?
One important concept to consider here is the Kubernetes Ingress, which determines how services within the cluster are exposed to the external world. An Ingress also needs an Ingress Controller, which watches the Kubernetes API for Ingress resources and fulfils the routing rules they define.
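For illustration only, an Ingress rule can be created imperatively with kubectl (1.19 or later); the host and service names below are hypothetical and not part of this template:
# Hypothetical example: route demo.example.com to a service called demo-service
# on port 80, using the NGINX ingress class
kubectl create ingress demo --class=nginx --rule="demo.example.com/*=demo-service:80"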
Since we are talking about AWS EKS, there is a very interesting article written by Dan Maas that compares the available Ingress Controller options for EKS. Below is the link to the article:
https://medium.com/@dmaas/amazon-eks-ingress-guide-8ec2ec940a70
Architecture Overview
From the Kubernetes Ingress perspective, this technical implementation follows the ‘ALB + NGINX’ ingress approach. It uses an AWS ALB as the internet-facing load balancer, automatically managed by the ALB Ingress Controller (more information about EKS with the ALB Ingress Controller can be found here). NGINX is responsible for the final routing inside the cluster.
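As a rough sketch of this pattern (not taken from the template), the ALB-facing Ingress could look something like the manifest below. The namespace, service name and even the apiVersion are assumptions: older ALB Ingress Controller versions expect the extensions/v1beta1 form, while recent clusters use networking.k8s.io/v1.
# Illustrative only: an Ingress handled by the ALB Ingress Controller that
# forwards all HTTP traffic to the service exposing the NGINX controller
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-to-nginx                        # hypothetical name
  namespace: ingress-nginx                  # assumed namespace for the NGINX controller
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-controller  # assumed NodePort service name
            port:
              number: 80
EOF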
The diagram below shows how AWS manages the EKS infrastructure across multiple availability zones, automatically replacing any unhealthy node.
The picture below was extracted from the AWS EKS product page and shows how it works.
The worker node implementation uses AWS Auto Scaling to benefit from cloud elasticity, maintaining performance according to demand and optimising costs. Additionally, the Auto Scaling group deploys worker nodes across multiple availability zones to increase availability and recoverability.
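Once the cluster is running, a quick way to confirm that the worker nodes are spread across availability zones is to list them with their zone label (the label below assumes a reasonably recent Kubernetes version; older clusters expose failure-domain.beta.kubernetes.io/zone instead):
# List worker nodes together with the availability zone each one runs in
kubectl get nodes -L topology.kubernetes.io/zone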
AWS EKS Requirements
AWS Credentials
This example uses the environment variables approach to provide AWS credentials. The following environment variables must be set before the deployment:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_DEFAULT_REGION
More information about how to use AWS CLI environment variables can be found here.
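For example, on Linux or macOS the variables can be exported in the current shell session before running Terraform (the values below are placeholders, not real credentials):
# Replace the placeholder values with your own IAM credentials and preferred region
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_DEFAULT_REGION="us-west-2"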
Required Tools
The tools below are required to run the automated deployment:
Terraform Code
You can find the source code for this example on GitHub. Just clone it with:
git clone http://github.com/rodrigocmn/eks_nginx_terraform.git
Configuration files
This template will automatically create the kubeconfig file (used to configure access to the Kubernetes cluster).
IMPORTANT! Back up any existing kubeconfig file, as Terraform will overwrite it!
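A simple way to do that, assuming your current kubeconfig lives in the default ~/.kube/config location, is:
# Keep a copy of the existing kubeconfig before running the template
cp ~/.kube/config ~/.kube/config.backup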
AWS EKS Platform Deployment
The environments sub-folders represent the target environments where the template will be deployed. For this example we will only use the development (dev) environment. To deploy it, go to the dev/container/managed folder and run the Terraform commands described below.
First we need to initialise our working directory:
terraform init
Now we can create our execution plan:
terraform plan
And apply the changes:
terraform apply
Now you should have:
- AWS EKS cluster
- 2 worker nodes joined to the cluster
- NGINX and ALB Ingress Controller deployed to Kubernetes
- 1 AWS ALB created and configured
- All routes, security groups, roles and other resources necessary to support the solution.
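A few kubectl commands can be used to sanity-check the result (exact pod names and namespaces depend on how the template deploys the controllers):
# Worker nodes should be listed and Ready
kubectl get nodes
# The NGINX and ALB Ingress Controller pods should be Running
kubectl get pods --all-namespaces
# The Ingress resources, including the one backed by the ALB
kubectl get ingress --all-namespaces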
The only thing left to do is the DNS configuration, which was not included in this template as you may not have a domain registered in AWS Route 53. The next section presents a couple of options to implement it.
DNS Configuration
If you have a domain registered in Route 53, just create an alias record and point it to the ALB created above. However, if you don’t have a domain registered in Route 53, add a line to your machine’s hosts file with the ALB IP address and the EKS cluster hostname (this information is available on the respective AWS Console pages). For example:
# ALB IP Address  Cluster Hostname
52.33.73.73 6d11a1732b0d08eed2c03a03bf5f5262.yl4.us-west-2.eks.amazonaws.com
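One way to obtain an IP address for the ALB is to resolve its DNS name (shown in the EC2 > Load Balancers console page, or in the ADDRESS column of kubectl get ingress). Note that ALB IP addresses can change over time, which is another reason the Route 53 alias record is the preferred option:
# Show the ALB DNS name assigned to the Ingress
kubectl get ingress --all-namespaces
# Resolve that DNS name to an IP address for the hosts file entry
dig +short <alb-dns-name>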
IMPORTANT! Note that you cannot access the EKS cluster APIs while this hosts entry is in place, so delete the line after you finish your tests.
After applying your DNS configuration, you should be able to test it by using the cluster hostname in your browser. For example:
http://6d11a1732b0d08eed2c03a03bf5f5262.yl4.us-west-2.eks.amazonaws.com
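Or, from the command line:
# Expect an HTTP response served through the ALB and NGINX
curl -I http://6d11a1732b0d08eed2c03a03bf5f5262.yl4.us-west-2.eks.amazonaws.com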
Kubernetes Dashboard (bonus!)
If you want to use Kubernetes Dashboard to check your cluster configuration, I’ve included a quick guide explaining how to deploy it on AWS EKS. You can find it here.
Destroying
First, we need to delete the ALB created by the NGINX ingress rule. The following command removes the AWS ALB (and its associated resources) created for NGINX:
kubectl delete -f config_output/nginx-ingress.yaml
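Before moving on, you can check that the Ingress (and therefore the ALB) has actually been removed; the load balancer itself may take a couple of minutes to disappear from the AWS console:
# No NGINX Ingress resources should be listed once the deletion has completed
kubectl get ingress --all-namespaces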
Finally, we just need to run Terraform’s destroy command and everything else should be deleted.
terraform destroy