From Code to Cloud: Provisioning, Containerizing, and Deploying An Application with Terraform, Docker, GitHub Actions, and Kubernetes
In this comprehensive guide, we will walk through the entire process of taking your application from code to cloud. We'll cover provisioning resources using Terraform, containerizing your application with Docker, setting up build pipelines using GitHub Actions, and deploying your application using Kubernetes manifests.
Prerequisites
Before we get started, ensure you have the following set up:
IDE (VS Code): A powerful code editor to manage and edit your files.
AWS Account: Sign up for an AWS account if you don’t have one. You'll also need the AWS CLI installed and configured on your machine.
Docker Account: Sign up for a Docker account and have Docker installed on your machine.
GitHub Account: Create a GitHub account if you don’t already have one.
Basic Understanding of Terraform: Familiarity with infrastructure as code concepts and basic Terraform syntax will be helpful.
Understanding of Kubernetes: Knowledge of Kubernetes fundamentals, including pods, services, and deployments, is required.
Let's Get Started!
In this tutorial, we will cover the following steps:
Provisioning Resources Using Terraform: Set up and manage infrastructure on AWS using Terraform.
Containerizing Your Application with Docker: Create Docker images for your application and push them to Docker Hub.
Building Pipelines with GitHub Actions: Automate the build and deployment process using GitHub Actions.
Deploying Applications Using Kubernetes Manifests: Deploy and manage your application on Kubernetes clusters.
To follow along with this guide, you can find the source code in this GitHub repository:
https://github.com/Gatete-Bruno/humangov
backend.tf
This file configures the backend for Terraform state storage. It specifies using an S3 bucket to store the Terraform state file and a DynamoDB table for state locking.
terraform {
backend "s3" {
bucket = "humangov-terraform-state-ct2023"
key = "terraform.tfstate"
region = "us-east-1"
encrypt = true
dynamodb_table = "humangov-terraform-state-lock-table"
}
}
main.tf
This file defines the main Terraform configuration, setting up the AWS provider and calling the aws_humangov_infrastructure
module for each specified state.
provider "aws" {
region = "us-east-1"
}
module "aws_humangov_infrastructure" {
source = "./modules/aws_humangov_infrastructure"
for_each = toset(var.states)
state_name = each.value
}
output.tf
This file outputs the public DNS of the EC2 instances, DynamoDB table names, and S3 bucket names for each state.
output "state_infrastructure_outputs" {
value = {
for state, infrastructure in module.aws_humangov_infrastructure :
state => {
ec2_public_dns = infrastructure.state_ec2_public_dns
dynamodb_table = infrastructure.state_dynamodb_table
s3_bucket = infrastructure.state_s3_bucket
}
}
}
variables.tf
This file declares the variables used in the Terraform configuration. In this case, a list of state names.
variable "states" {
description = "A list of state names"
default = ["california", "florida", "texas"]
}
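If you only want to provision a subset of states, you can override this default without editing the code. A minimal sketch, assuming a terraform.tfvars file placed next to main.tf (Terraform loads it automatically); the values shown are illustrative:
# Create a terraform.tfvars that overrides the default list of states
cat > terraform.tfvars <<'EOF'
states = ["california", "texas"]
EOF
terraform plan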
modules/aws_humangov_infrastructure/main.tf
This module sets up the infrastructure for each state, including security groups, EC2 instances, DynamoDB tables, S3 buckets, and IAM roles.
resource "aws_security_group" "state_ec2_sg" {
name = "humangov-${var.state_name}-ec2-sg"
description = "Allow traffic on ports 22, 80, and 5000"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 5000
to_port = 5000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "humangov-${var.state_name}"
}
}
resource "aws_instance" "state_ec2" {
ami = "ami-007855ac798b5175e"
instance_type = "t2.micro"
key_name = "humangov-ec2-key"
vpc_security_group_ids = [aws_security_group.state_ec2_sg.id]
iam_instance_profile = aws_iam_instance_profile.s3_dynamodb_full_access_instance_profile.name
provisioner "local-exec" {
command = "sudo sh -c 'echo ${var.state_name} id=${self.id} ansible_host=${self.private_ip} ansible_user=ubuntu us_state=${var.state_name} aws_region=${var.region} aws_s3_bucket=${aws_s3_bucket.state_s3.bucket} aws_dynamodb_table=${aws_dynamodb_table.state_dynamodb.name} >> /etc/ansible/hosts'"
}
provisioner "local-exec" {
when = destroy
command = "sed -i '/${self.id}/d' /etc/ansible/hosts"
}
tags = {
Name = "humangov-${var.state_name}"
}
}
resource "aws_dynamodb_table" "state_dynamodb" {
name = "humangov-${var.state_name}-dynamodb"
billing_mode = "PAY_PER_REQUEST"
hash_key = "id"
attribute {
name = "id"
type = "S"
}
tags = {
Name = "humangov-${var.state_name}"
}
}
resource "random_string" "bucket_suffix" {
length = 4
special = false
upper = false
}
resource "aws_s3_bucket" "state_s3" {
bucket = "humangov-${var.state_name}-s3-${random_string.bucket_suffix.result}"
tags = {
Name = "humangov-${var.state_name}"
}
}
resource "aws_iam_role" "s3_dynamodb_full_access_role" {
name = "humangov-${var.state_name}-s3_dynamodb_full_access_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
tags = {
Name = "humangov-${var.state_name}"
}
}
resource "aws_iam_role_policy_attachment" "s3_full_access_role_policy_attachment" {
role = aws_iam_role.s3_dynamodb_full_access_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
resource "aws_iam_role_policy_attachment" "dynamodb_full_access_role_policy_attachment" {
role = aws_iam_role.s3_dynamodb_full_access_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}
resource "aws_iam_instance_profile" "s3_dynamodb_full_access_instance_profile" {
name = "humangov-${var.state_name}-s3_dynamodb_full_access_instance_profile"
role = aws_iam_role.s3_dynamodb_full_access_role.name
tags = {
Name = "humangov-${var.state_name}"
}
}
modules/aws_humangov_infrastructure/outputs.tf
This file defines outputs for the module, exposing the EC2 public DNS, DynamoDB table names, and S3 bucket names.
output "state_ec2_public_dns" {
value = aws_instance.state_ec2.public_dns
}
output "state_dynamodb_table" {
value = aws_dynamodb_table.state_dynamodb.name
}
output "state_s3_bucket" {
value = aws_s3_bucket.state_s3.bucket
}
modules/aws_humangov_infrastructure/variables.tf
This file declares variables used within the module, specifically the state name and AWS region.
variable "state_name" {
description = "The name of the US State"
}
variable "region" {
default = "us-east-1"
}
This is how our code is structured:
bruno@Batman-2 terraform % tree
.
├── backend.tf
├── main.tf
├── modules
│ └── aws_humangov_infrastructure
│ ├── main.tf
│ ├── outputs.tf
│ └── variables.tf
├── output.tf
└── variables.tf
3 directories, 7 files
bruno@Batman-2 terraform %
Setting Up the Backend for Terraform State Management
Before we can execute our Terraform code, we need to set up the backend infrastructure that Terraform will use to store its state files. This involves creating an S3 bucket and a DynamoDB table. The S3 bucket will store the state files, and the DynamoDB table will handle state locking to prevent concurrent modifications.
Create an S3 Bucket
First, we need to create an S3 bucket. This bucket will be used by Terraform to store the state files. The state file keeps track of the resources Terraform manages, and it's essential for ensuring that the infrastructure is correctly managed and updated.
Run the following command to create a new S3 bucket named humangov-terraform-state-ct2023 in the us-east-1 region:
aws s3api create-bucket --bucket humangov-terraform-state-ct2023 --region us-east-1
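Optionally, you can enable versioning on the state bucket so that earlier state files are retained if the state is ever overwritten. This is a common hardening step, not part of the original setup:
# Enable versioning on the Terraform state bucket (optional)
aws s3api put-bucket-versioning \
--bucket humangov-terraform-state-ct2023 \
--versioning-configuration Status=Enabled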
Create the DynamoDB Table Manually
Next, we need to create a DynamoDB table that Terraform will use for state locking. State locking is crucial as it prevents multiple processes from modifying the state file concurrently, which could lead to inconsistencies.
Run the following command to create the DynamoDB table:
This command creates a DynamoDB table named humangov-terraform-state-lock-table with a primary key LockID of type string (S). The provisioned throughput is set to 5 read and 5 write capacity units.
aws dynamodb create-table \
--table-name humangov-terraform-state-lock-table \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
--region us-east-1
Output:
bruno@Batman-2 terraform % aws dynamodb create-table \
--table-name humangov-terraform-state-lock-table \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
--region us-east-1
{
"TableDescription": {
"AttributeDefinitions": [
{
"AttributeName": "LockID",
"AttributeType": "S"
}
],
"TableName": "humangov-terraform-state-lock-table",
"KeySchema": [
{
"AttributeName": "LockID",
"KeyType": "HASH"
}
],
"TableStatus": "CREATING",
"CreationDateTime": "2024-05-26T12:47:49.176000+02:00",
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"ReadCapacityUnits": 5,
"WriteCapacityUnits": 5
},
"TableSizeBytes": 0,
"ItemCount": 0,
"TableArn": "arn:aws:dynamodb:us-east-1:001473275106:table/humangov-terraform-state-lock-table",
"TableId": "f92456ab-989a-4e14-86bc-0ceff4c83a08",
"DeletionProtectionEnabled": false
}
}
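The table is created asynchronously (note the CREATING status above). If you want to block until it is ready before running Terraform, a small check like this works:
# Wait until the lock table becomes available
aws dynamodb wait table-exists \
--table-name humangov-terraform-state-lock-table \
--region us-east-1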
Next up:
We will run the following Terraform commands to create the resources:
terraform validate
terraform init
terraform plan
terraform apply
bruno@Batman-2 terraform % terraform fmt
bruno@Batman-2 terraform % terraform validate
Success! The configuration is valid.
bruno@Batman-2 terraform % terraform init
Initializing the backend...
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Using previously-installed hashicorp/aws v5.49.0
- Using previously-installed hashicorp/random v3.6.1
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
bruno@Batman-2 terraform % terraform plan
Acquiring state lock. This may take a few moments...
Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.aws_humangov_infrastructure["california"].aws_dynamodb_table.state_dynamodb will be created
+ resource "aws_dynamodb_table" "state_dynamodb" {
+ arn = (known after apply)
+ billing_mode = "PAY_PER_REQUEST"
+ hash_key = "id"
+ id = (known after apply)
+ name = "humangov-california-dynamodb"
+ read_capacity = (known after apply)
+ stream_arn = (known after apply)
+ stream_label = (known after apply)
+ stream_view_type = (known after apply)
+ tags = {
+ "Name" = "humangov-california"
}
+ tags_all = {
+ "Name" = "humangov-california"
}
+ write_capacity = (known after apply)
+ attribute {
+ name = "id"
+ type = "S"
}
}
# module.aws_humangov_infrastructure["california"].aws_s3_bucket.state_s3 will be created
+ resource "aws_s3_bucket" "state_s3" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = (known after apply)
+ bucket_domain_name = (known after apply)
+ bucket_prefix = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags = {
+ "Name" = "humangov-california"
}
+ tags_all = {
+ "Name" = "humangov-california"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
# module.aws_humangov_infrastructure["california"].random_string.bucket_suffix will be created
+ resource "random_string" "bucket_suffix" {
+ id = (known after apply)
+ length = 4
+ lower = true
+ min_lower = 0
+ min_numeric = 0
+ min_special = 0
+ min_upper = 0
+ number = true
+ numeric = true
+ result = (known after apply)
+ special = false
+ upper = false
}
# module.aws_humangov_infrastructure["texas"].aws_dynamodb_table.state_dynamodb will be created
+ resource "aws_dynamodb_table" "state_dynamodb" {
+ arn = (known after apply)
+ billing_mode = "PAY_PER_REQUEST"
+ hash_key = "id"
+ id = (known after apply)
+ name = "humangov-texas-dynamodb"
+ read_capacity = (known after apply)
+ stream_arn = (known after apply)
+ stream_label = (known after apply)
+ stream_view_type = (known after apply)
+ tags = {
+ "Name" = "humangov-texas"
}
+ tags_all = {
+ "Name" = "humangov-texas"
}
+ write_capacity = (known after apply)
+ attribute {
+ name = "id"
+ type = "S"
}
}
# module.aws_humangov_infrastructure["texas"].aws_s3_bucket.state_s3 will be created
+ resource "aws_s3_bucket" "state_s3" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = (known after apply)
+ bucket_domain_name = (known after apply)
+ bucket_prefix = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags = {
+ "Name" = "humangov-texas"
}
+ tags_all = {
+ "Name" = "humangov-texas"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
# module.aws_humangov_infrastructure["texas"].random_string.bucket_suffix will be created
+ resource "random_string" "bucket_suffix" {
+ id = (known after apply)
+ length = 4
+ lower = true
+ min_lower = 0
+ min_numeric = 0
+ min_special = 0
+ min_upper = 0
+ number = true
+ numeric = true
+ result = (known after apply)
+ special = false
+ upper = false
}
Plan: 6 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ state_infrastructure_outputs = {
+ california = {
+ dynamodb_table = "humangov-california-dynamodb"
+ s3_bucket = (known after apply)
}
+ texas = {
+ dynamodb_table = "humangov-texas-dynamodb"
+ s3_bucket = (known after apply)
}
}
─────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to
take exactly these actions if you run "terraform apply" now.
Releasing state lock. This may take a few moments...
bruno@Batman-2 terraform % terraform show
The state file is empty. No resources are represented.
bruno@Batman-2 terraform %
bruno@Batman-2 terraform % terraform apply
Acquiring state lock. This may take a few moments...
Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.aws_humangov_infrastructure["california"].aws_dynamodb_table.state_dynamodb will be created
+ resource "aws_dynamodb_table" "state_dynamodb" {
+ arn = (known after apply)
+ billing_mode = "PAY_PER_REQUEST"
+ hash_key = "id"
+ id = (known after apply)
+ name = "humangov-california-dynamodb"
+ read_capacity = (known after apply)
+ stream_arn = (known after apply)
+ stream_label = (known after apply)
+ stream_view_type = (known after apply)
+ tags = {
+ "Name" = "humangov-california"
}
+ tags_all = {
+ "Name" = "humangov-california"
}
+ write_capacity = (known after apply)
+ attribute {
+ name = "id"
+ type = "S"
}
}
# module.aws_humangov_infrastructure["california"].aws_s3_bucket.state_s3 will be created
+ resource "aws_s3_bucket" "state_s3" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = (known after apply)
+ bucket_domain_name = (known after apply)
+ bucket_prefix = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags = {
+ "Name" = "humangov-california"
}
+ tags_all = {
+ "Name" = "humangov-california"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
# module.aws_humangov_infrastructure["california"].random_string.bucket_suffix will be created
+ resource "random_string" "bucket_suffix" {
+ id = (known after apply)
+ length = 4
+ lower = true
+ min_lower = 0
+ min_numeric = 0
+ min_special = 0
+ min_upper = 0
+ number = true
+ numeric = true
+ result = (known after apply)
+ special = false
+ upper = false
}
# module.aws_humangov_infrastructure["texas"].aws_dynamodb_table.state_dynamodb will be created
+ resource "aws_dynamodb_table" "state_dynamodb" {
+ arn = (known after apply)
+ billing_mode = "PAY_PER_REQUEST"
+ hash_key = "id"
+ id = (known after apply)
+ name = "humangov-texas-dynamodb"
+ read_capacity = (known after apply)
+ stream_arn = (known after apply)
+ stream_label = (known after apply)
+ stream_view_type = (known after apply)
+ tags = {
+ "Name" = "humangov-texas"
}
+ tags_all = {
+ "Name" = "humangov-texas"
}
+ write_capacity = (known after apply)
+ attribute {
+ name = "id"
+ type = "S"
}
}
# module.aws_humangov_infrastructure["texas"].aws_s3_bucket.state_s3 will be created
+ resource "aws_s3_bucket" "state_s3" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = (known after apply)
+ bucket_domain_name = (known after apply)
+ bucket_prefix = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags = {
+ "Name" = "humangov-texas"
}
+ tags_all = {
+ "Name" = "humangov-texas"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
# module.aws_humangov_infrastructure["texas"].random_string.bucket_suffix will be created
+ resource "random_string" "bucket_suffix" {
+ id = (known after apply)
+ length = 4
+ lower = true
+ min_lower = 0
+ min_numeric = 0
+ min_special = 0
+ min_upper = 0
+ number = true
+ numeric = true
+ result = (known after apply)
+ special = false
+ upper = false
}
Plan: 6 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ state_infrastructure_outputs = {
+ california = {
+ dynamodb_table = "humangov-california-dynamodb"
+ s3_bucket = (known after apply)
}
+ texas = {
+ dynamodb_table = "humangov-texas-dynamodb"
+ s3_bucket = (known after apply)
}
}
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.aws_humangov_infrastructure["california"].random_string.bucket_suffix: Creating...
module.aws_humangov_infrastructure["texas"].random_string.bucket_suffix: Creating...
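Once the apply finishes, you can inspect the per-state outputs defined in output.tf at any time:
# Print the per-state infrastructure outputs
terraform output state_infrastructure_outputs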
Creating an Amazon EKS Cluster
After setting up the backend for Terraform state management, the next step involves creating an Amazon EKS (Elastic Kubernetes Service) cluster to deploy and manage our Kubernetes applications.
We will use eksctl, a simple CLI tool for creating and managing Kubernetes clusters on EKS.
Step-by-Step Guide
Create an EKS Cluster
To create the EKS cluster, run the following command:
eksctl create cluster --name humangov-cluster --region us-east-1 --nodegroup-name standard-workers --node-type t3.medium --nodes 1
This command will:
Create an EKS cluster named humangov-cluster in the us-east-1 region.
Create a node group named standard-workers with t3.medium instance types.
Provision 1 node in the node group.
The detailed output of this command shows the process of creating the cluster and node group using CloudFormation stacks, setting availability zones, subnets, and other configurations.
Update Kubeconfig
Once the cluster is created, update the kubeconfig file to enable kubectl to interact with the new cluster:
aws eks update-kubeconfig --name humangov-cluster
- This command adds the new EKS cluster context to your kubeconfig file.
Verify the Cluster
Verify that the cluster and nodes are correctly set up by listing the nodes:
kubectl get nodes
Output:
bruno@Batman-2 terraform % eksctl create cluster --name humangov-cluster --region us-east-1 --nodegroup-name standard-workers --node-type t3.medium --nodes 1
2024-05-26 12:53:45 [ℹ] eksctl version 0.176.0
2024-05-26 12:53:45 [ℹ] using region us-east-1
2024-05-26 12:53:47 [ℹ] skipping us-east-1e from selection because it doesn't support the following instance type(s): t3.medium
2024-05-26 12:53:47 [ℹ] setting availability zones to [us-east-1a us-east-1d]
2024-05-26 12:53:47 [ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
2024-05-26 12:53:47 [ℹ] subnets for us-east-1d - public:192.168.32.0/19 private:192.168.96.0/19
2024-05-26 12:53:47 [ℹ] nodegroup "standard-workers" will use "" [AmazonLinux2/1.29]
2024-05-26 12:53:47 [ℹ] using Kubernetes version 1.29
2024-05-26 12:53:47 [ℹ] creating EKS cluster "humangov-cluster" in "us-east-1" region with managed nodes
2024-05-26 12:53:47 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2024-05-26 12:53:47 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=humangov-cluster'
2024-05-26 12:53:47 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "humangov-cluster" in "us-east-1"
2024-05-26 12:53:47 [ℹ] CloudWatch logging will not be enabled for cluster "humangov-cluster" in "us-east-1"
2024-05-26 12:53:47 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=humangov-cluster'
2024-05-26 12:53:47 [ℹ]
2 sequential tasks: { create cluster control plane "humangov-cluster",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "standard-workers",
}
}
2024-05-26 12:53:47 [ℹ] building cluster stack "eksctl-humangov-cluster-cluster"
2024-05-26 12:53:50 [ℹ] deploying stack "eksctl-humangov-cluster-cluster"
2024-05-26 12:54:20 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 12:54:51 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 12:55:53 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 12:56:55 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 12:57:57 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 12:58:59 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 13:00:01 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 13:01:02 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 13:02:04 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 13:03:05 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 13:04:07 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-cluster"
2024-05-26 13:06:25 [ℹ] building managed nodegroup stack "eksctl-humangov-cluster-nodegroup-standard-workers"
2024-05-26 13:06:27 [ℹ] deploying stack "eksctl-humangov-cluster-nodegroup-standard-workers"
2024-05-26 13:06:28 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-nodegroup-standard-workers"
2024-05-26 13:06:59 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-nodegroup-standard-workers"
2024-05-26 13:07:36 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-nodegroup-standard-workers"
2024-05-26 13:08:43 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-nodegroup-standard-workers"
2024-05-26 13:10:26 [ℹ] waiting for CloudFormation stack "eksctl-humangov-cluster-nodegroup-standard-workers"
2024-05-26 13:10:26 [ℹ] waiting for the control plane to become ready
2024-05-26 13:10:28 [✔] saved kubeconfig as "/Users/bruno/.kube/config"
2024-05-26 13:10:28 [ℹ] no tasks
2024-05-26 13:10:28 [✔] all EKS cluster resources for "humangov-cluster" have been created
2024-05-26 13:10:28 [✔] created 0 nodegroup(s) in cluster "humangov-cluster"
2024-05-26 13:10:29 [ℹ] nodegroup "standard-workers" has 1 node(s)
2024-05-26 13:10:29 [ℹ] node "ip-192-168-6-224.ec2.internal" is ready
2024-05-26 13:10:29 [ℹ] waiting for at least 1 node(s) to become ready in "standard-workers"
2024-05-26 13:10:29 [ℹ] nodegroup "standard-workers" has 1 node(s)
2024-05-26 13:10:29 [ℹ] node "ip-192-168-6-224.ec2.internal" is ready
2024-05-26 13:10:29 [✔] created 1 managed nodegroup(s) in cluster "humangov-cluster"
2024-05-26 13:10:32 [ℹ] kubectl command should work with "/Users/bruno/.kube/config", try 'kubectl get nodes'
2024-05-26 13:10:32 [✔] EKS cluster "humangov-cluster" in "us-east-1" region is ready
bruno@Batman-2 terraform %
bruno@Batman-2 terraform % aws eks update-kubeconfig --name humangov-cluster
Added new context arn:aws:eks:us-east-1:001473275106:cluster/humangov-cluster to /Users/bruno/.kube/config
bruno@Batman-2 terraform % kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 13m
bruno@Batman-2 terraform % kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-6-224.ec2.internal Ready <none> 6m15s v1.29.3-eks-ae9a62a
Installing the Ingress Controller and Application Load Balancer (ALB)
We need to set up an Application Load Balancer (ALB) and an Ingress Controller to manage incoming traffic by directing the traffic to the correct services within the EKS cluster.
Step 1: Create an IAM policy
We need a policy that defines the permissions required by the AWS Load Balancer Controller to manage resources in your AWS account when deployed in an Amazon EKS cluster. The AWS Load Balancer Controller is a Kubernetes controller that manages Elastic Load Balancers (ELBs) for services running in a Kubernetes cluster.
Download the policy
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
Create an IAM policy
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
List all the policies in your account
aws iam list-policies
Associate the IAM OIDC identity provider for your Amazon EKS cluster with eksctl
When you associate the IAM OIDC identity provider with your EKS cluster, it establishes a trust relationship between the Kubernetes cluster and AWS IAM. This allows you to create Kubernetes service accounts and assign them IAM roles to access AWS resources such as S3 buckets, DynamoDB tables, or other AWS services. In our case, we need to access the Application Load Balancer (ALB) service. You can skip this step when you do not use IAM roles.
eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster humangov-cluster --approve
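If you want to confirm the association, you can look up the cluster's OIDC issuer URL and check that a matching IAM OIDC provider exists (a quick sanity check, not part of the original walkthrough):
# Print the cluster's OIDC issuer URL
aws eks describe-cluster --name humangov-cluster --region us-east-1 \
--query "cluster.identity.oidc.issuer" --output text
# List the IAM OIDC providers in the account; one should match the issuer above
aws iam list-open-id-connect-providers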
Create a Kubernetes service account and associate an IAM role
A Kubernetes service account is a Kubernetes-native concept for identity and access management. When a pod is created, it is assigned a service account to authenticate and authorize requests made by containers within the cluster to access Kubernetes resources.
By associating an IAM role with a Kubernetes service account, you enable the pods running within the Kubernetes cluster to assume the permissions defined by the IAM role when making requests to AWS services outside of the cluster. You’re essentially creating a mapping between a Kubernetes service account and an IAM role.
Create a Kubernetes service account backed by the AmazonEKSLoadBalancerControllerRole IAM role, and attach AWSLoadBalancerControllerIAMPolicy to this role to grant the AWS Load Balancer Controller the permissions it needs to create and manage ALBs.
Create a Kubernetes service account within your Amazon EKS cluster:
eksctl create iamserviceaccount \
--cluster=humangov-cluster \
--namespace=kube-system \
--region=us-east-1 \
--name=aws-load-balancer-controller \
--role-name=AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::001473275106:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
Remember to replace the account ID in the policy ARN with your own account ID!
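Once the command completes, you can verify that the service account was created and annotated with the IAM role ARN (a quick check, assuming the names used above):
# The service account should carry an eks.amazonaws.com/role-arn annotation
kubectl describe serviceaccount aws-load-balancer-controller -n kube-system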
Output:
bruno@Batman-2 terraform % aws eks update-kubeconfig --name humangov-cluster
Added new context arn:aws:eks:us-east-1:001473275106:cluster/humangov-cluster to /Users/bruno/.kube/config
bruno@Batman-2 terraform % kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 13m
bruno@Batman-2 terraform % kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-6-224.ec2.internal Ready <none> 6m15s v1.29.3-eks-ae9a62a
bruno@Batman-2 terraform % cd ~/environment
cd: no such file or directory: /Users/bruno/environment
bruno@Batman-2 terraform % curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 8386 100 8386 0 0 6905 0 0:00:01 0:00:01 --:--:-- 6902
bruno@Batman-2 terraform % aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
{
"Policy": {
"PolicyName": "AWSLoadBalancerControllerIAMPolicy",
"PolicyId": "ANPAQAV6QMTRJ4UJFD7PI",
"Arn": "arn:aws:iam::001473275106:policy/AWSLoadBalancerControllerIAMPolicy",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 0,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2024-05-26T11:15:16+00:00",
"UpdateDate": "2024-05-26T11:15:16+00:00"
}
}
bruno@Batman-2 terraform % aws iam list-policies
{
"Policies": [
{
"PolicyName": "allow_all",
"PolicyId": "ANPAQAV6QMTRFMR4C5VVP",
"Arn": "arn:aws:iam::001473275106:policy/allow_all",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 1,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2024-05-26T09:19:10+00:00",
"UpdateDate": "2024-05-26T09:19:10+00:00"
},
{
"PolicyName": "AWSLoadBalancerControllerIAMPolicy",
"PolicyId": "ANPAQAV6QMTRJ4UJFD7PI",
"Arn": "arn:aws:iam::001473275106:policy/AWSLoadBalancerControllerIAMPolicy",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 0,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2024-05-26T11:15:16+00:00",
"UpdateDate": "2024-05-26T11:15:16+00:00"
},
{
"PolicyName": "Playground_AWS_Sandbox",
"PolicyId": "ANPAQAV6QMTRCOB6Y7NSX",
"Arn": "arn:aws:iam::001473275106:policy/Playground_AWS_Sandbox",
"Path": "/",
"DefaultVersionId": "v1",
"AttachmentCount": 1,
"PermissionsBoundaryUsageCount": 0,
"IsAttachable": true,
"CreateDate": "2024-05-26T09:19:10+00:00",
"UpdateDate": "2024-05-26T09:19:10+00:00"
},
{
"PolicyName": "AdministratorAccess",
"PolicyId": "ANPAIWMBCKSKIEE64ZLYK",
"Arn": "arn:aws:iam::aws:policy/AdministratorAccess",
Install the AWS Load Balancer Controller using Helm v3
Helm is a package manager for Kubernetes. The ALB controller needs to be installed in the Kubernetes cluster. After this point, we will use kubectl to interact with our Kubernetes cluster and its attached services.
Add the AWS EKS Helm chart repository to your Helm configuration
helm repo add eks https://aws.github.io/eks-charts
Ensure that you have access to the most recent versions of the charts available in the repository
helm repo update eks
Install the AWS Load Balancer Controller
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=humangov-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
Verify the controller installation
kubectl get deployment -n kube-system aws-load-balancer-controller
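If the deployment reports its replicas as ready, the controller is up. You can also tail its logs for a quick sanity check (not part of the original run):
# Show the most recent log lines from the controller
kubectl logs -n kube-system deployment/aws-load-balancer-controller --tail=20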
Deploying the HumanGov Application
Step 1: Create a Role and Service Account to provide pods access to S3 and DynamoDB tables
We will create another IAM service account to allow pods to access the S3 buckets and DynamoDB tables, since our application will be deployed as pods.
eksctl create iamserviceaccount \
--cluster=humangov-cluster \
--name=humangov-pod-execution-role \
--role-name=HumanGovPodExecutionRole \
--attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3FullAccess \
--attach-policy-arn=arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess \
--region=us-east-1 \
--approve
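As before, you can confirm the service account and its IAM role annotation once the command finishes (a quick check, assuming the default namespace, which eksctl uses when none is specified):
kubectl describe serviceaccount humangov-pod-execution-role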
I containerized the application using Docker and GitHub Actions.
Part 1: CI/CD Pipeline for Python Application
Let's start by setting up a CI/CD pipeline for a Python application. We'll create a workflow that installs dependencies, runs linting, builds a Docker image, and pushes it to Docker Hub.
Workflow Configuration:
Create a file named .github/workflows/main.yml
with the following content:
name: CI/CD Pipeline
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
services:
docker:
image: docker:19.03.12
options: --privileged
ports:
- 8000:8000
volumes:
- /var/run/docker.sock:/var/run/docker.sock
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install dependencies
working-directory: python-app/src
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Lint with flake8
working-directory: python-app/src
run: |
pip install flake8
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
continue-on-error: true
# Uncomment if you have tests
#- name: Test with pytest
# working-directory: python-app/src
# run: |
# pip install pytest
# pytest
- name: Log into Docker Hub (using secrets)
run: docker login --username ${{ secrets.DOCKER_USERNAME }} --password ${{ secrets.DOCKER_PASSWORD_SYMBOLS_ALLOWED }}
- name: Build Docker image
working-directory: python-app/src
run: docker build -t bruno74t/humangov-image:latest .
- name: Push Docker image
run: docker push bruno74t/humangov-image:latest
- name: Run Docker container (optional)
run: docker run -d -p 8000:8000 bruno74t/humangov-image:latest
Explanation:
Triggering Events: The workflow triggers on pushes and pull requests to the main branch.
Job Setup: Runs on the latest Ubuntu environment and sets up Docker as a service.
Steps:
Checkout Code: Uses actions/checkout@v2 to check out the repository.
Set up Python: Uses actions/setup-python@v2 to set up Python 3.8.
Install Dependencies: Installs Python dependencies specified in requirements.txt.
Linting: Runs flake8 for linting the code.
Log into Docker Hub: Uses secrets for Docker Hub credentials.
Build and Push Docker Image: Builds and pushes the Docker image to Docker Hub.
Run Docker Container: (Optional) Runs the Docker container for testing.
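The DOCKER_USERNAME and DOCKER_PASSWORD_SYMBOLS_ALLOWED secrets referenced above must exist in the repository before the workflow runs. You can create them in the repository settings UI, or with the GitHub CLI; a sketch, assuming gh is authenticated and run from the repository clone (the values are placeholders):
# Store Docker Hub credentials as repository secrets
gh secret set DOCKER_USERNAME --body "your-dockerhub-username"
gh secret set DOCKER_PASSWORD_SYMBOLS_ALLOWED --body "your-dockerhub-password-or-token"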
Part 2: CI/CD Pipeline for NGINX
Next, we'll create a CI/CD pipeline for setting up and deploying an NGINX server with a custom configuration.
Workflow Configuration:
Create a file named .github/workflows/nginx-cicd.yaml
with the following content:
name: CI/CD Pipeline for NGINX
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
build-and-deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Log into Docker Hub
run: echo "${{ secrets.DOCKER_PASSWORD_SYMBOLS_ALLOWED }}" | docker login --username "${{ secrets.DOCKER_USERNAME }}" --password-stdin
- name: Set up Docker build context
run: |
mkdir -p docker-context
cp python-app/src/nginx/nginx.conf docker-context/
cp python-app/src/nginx/proxy_params docker-context/
- name: Remove default NGINX configuration
run: |
echo 'FROM nginx:alpine' > docker-context/Dockerfile
echo 'RUN rm /etc/nginx/conf.d/default.conf' >> docker-context/Dockerfile
- name: Copy custom configuration file
run: |
echo 'COPY nginx.conf /etc/nginx/conf.d' >> docker-context/Dockerfile
- name: Copy proxy parameters
run: |
echo 'COPY proxy_params /etc/nginx/proxy_params' >> docker-context/Dockerfile
- name: Expose port 80
run: |
echo 'EXPOSE 80' >> docker-context/Dockerfile
- name: Start NGINX
run: |
echo 'CMD ["nginx", "-g", "daemon off;"]' >> docker-context/Dockerfile
- name: Build NGINX Docker image
run: docker build -t bruno74t/nginx-humangov:latest docker-context
- name: Push NGINX Docker image to Docker Hub
run: docker push bruno74t/nginx-humangov:latest
- name: Deploy NGINX Docker container
run: docker run -d -p 80:80 bruno74t/nginx-humangov:latest
Explanation:
Triggering Events: Similar to the first workflow, it triggers on pushes and pull requests to the main branch.
Job Setup: Runs on the latest Ubuntu environment.
Steps:
Checkout Code: Uses actions/checkout@v2 to check out the repository.
Log into Docker Hub: Uses secrets for Docker Hub credentials.
Set up Docker Build Context: Prepares a Docker build context by copying custom NGINX configuration files.
Create Dockerfile: Builds a custom NGINX Dockerfile that uses the copied configuration files.
Build and Push Docker Image: Builds and pushes the custom NGINX Docker image to Docker Hub.
Deploy Docker Container: Runs the NGINX container.
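The workflow assembles the Dockerfile one echo at a time. The equivalent Dockerfile, written out in one piece (for example, if you prefer to commit it to the repository instead of generating it in the pipeline), is simply:
# Equivalent of the Dockerfile generated by the workflow steps above
cat > docker-context/Dockerfile <<'EOF'
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
COPY proxy_params /etc/nginx/proxy_params
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF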
By leveraging GitHub Actions, we've automated the CI/CD process for both a Python application and an NGINX web server. These workflows ensure that any changes pushed to the main branch are automatically built, tested, and deployed, improving the efficiency and reliability of the development process. With these pipelines in place, you can focus on writing code and let GitHub Actions handle the rest.
Output:
Once we sign in to Docker Hub, we can see the container image uploaded.
Part 1: Deploying the Python Application
First, let's define the deployment and service for our Python application. We'll create a deployment manifest that specifies the Docker image to use and the necessary environment variables.
Deployment Configuration:
Create a file named python-app-deployment.yml
with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
name: humangov-python-app-california
spec:
replicas: 1
selector:
matchLabels:
app: humangov-python-app-california
template:
metadata:
labels:
app: humangov-python-app-california
spec:
serviceAccountName: humangov-pod-execution-role
containers:
- name: humangov-python-app-california
image: bruno74t/humangov-image:latest
env:
- name: AWS_BUCKET
value: "humangov-california-s3-tim7"
- name: AWS_DYNAMODB_TABLE
value: "humangov-california-dynamodb"
- name: AWS_REGION
value: "us-east-1"
- name: US_STATE
value: "california"
Service Configuration:
Next, create a file named python-app-service.yml
:
apiVersion: v1
kind: Service
metadata:
name: humangov-python-app-service-california
spec:
type: LoadBalancer
selector:
app: humangov-python-app-california
ports:
- protocol: TCP
port: 8000
targetPort: 8000
Explanation:
Deployment:
Metadata: Names the deployment humangov-python-app-california.
Spec:
Sets the number of replicas to 1.
Selects pods with the label app: humangov-python-app-california.
Defines the pod template, setting the container name and image.
Configures environment variables for AWS integration.
Service:
Metadata: Names the service humangov-python-app-service-california.
Spec:
Sets the service type to LoadBalancer to expose it externally.
Selects pods with the label app: humangov-python-app-california.
Maps port 8000 on the service to port 8000 on the container.
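With the manifests in place, applying them and checking the result looks like this (a sketch, assuming the filenames used above and that kubectl is pointed at the humangov-cluster context):
# Deploy the Python application and its service
kubectl apply -f python-app-deployment.yml -f python-app-service.yml
# Confirm the pod is running and the LoadBalancer service got an external address
kubectl get pods -l app=humangov-python-app-california
kubectl get service humangov-python-app-service-california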
Part 2: Deploying the NGINX Reverse Proxy
Next, we'll set up an NGINX reverse proxy to forward requests to our Python application. We'll define a deployment, service, and a ConfigMap for NGINX configuration.
Deployment Configuration:
Create a file named nginx-deployment.yml
with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
name: humangov-nginx-reverse-proxy-california
spec:
replicas: 1
selector:
matchLabels:
app: humangov-nginx-reverse-proxy-california
template:
metadata:
labels:
app: humangov-nginx-reverse-proxy-california
spec:
containers:
- name: humangov-nginx-reverse-proxy-california
image: nginx:alpine
ports:
- containerPort: 80
volumeMounts:
- name: humangov-nginx-config-california-vol
mountPath: /etc/nginx/
volumes:
- name: humangov-nginx-config-california-vol
configMap:
name: humangov-nginx-config-california
Service Configuration:
Create a file named nginx-service.yml
:
apiVersion: v1
kind: Service
metadata:
name: humangov-nginx-service-california
spec:
type: LoadBalancer
selector:
app: humangov-nginx-reverse-proxy-california
ports:
- protocol: TCP
port: 80
targetPort: 80
ConfigMap Configuration:
Finally, create a file named nginx-configmap.yml
:
apiVersion: v1
kind: ConfigMap
metadata:
name: humangov-nginx-config-california
data:
nginx.conf: |
events {
worker_connections 1024;
}
http {
server {
listen 80;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://humangov-python-app-service-california:8000;
}
}
}
proxy_params: |
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
Explanation:
Deployment:
Metadata: Names the deployment humangov-nginx-reverse-proxy-california.
Spec:
Sets the number of replicas to 1.
Selects pods with the label app: humangov-nginx-reverse-proxy-california.
Defines the pod template, setting the container name and image.
Mounts the NGINX configuration from the ConfigMap.
Service:
Metadata: Names the service humangov-nginx-service-california.
Spec:
Sets the service type to LoadBalancer to expose it externally.
Selects pods with the label app: humangov-nginx-reverse-proxy-california.
Maps port 80 on the service to port 80 on the container.
ConfigMap:
Metadata: Names the ConfigMap humangov-nginx-config-california.
Data: Defines the NGINX configuration to proxy requests to the Python application's service.
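Applying the NGINX pieces follows the same pattern; the ConfigMap should exist before the deployment that mounts it (a sketch, assuming the filenames above):
# Create the ConfigMap first, then the reverse proxy and its service
kubectl apply -f nginx-configmap.yml -f nginx-deployment.yml -f nginx-service.yml
# The external DNS name of this service is the application's public entry point
kubectl get service humangov-nginx-service-california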
By following this guide, you've successfully set up a Kubernetes environment to deploy a Python application and an NGINX reverse proxy. The deployment manifests ensure that your application is scalable and easy to manage. The NGINX proxy setup allows for efficient traffic management, forwarding client requests to the appropriate backend service.
The application is now accessible via the LoadBalancer services' external DNS names:
http://a282c23d37ef34745aff849f4e6075f0-499676195.us-east-1.elb.amazonaws.com/
http://ab1aa69602dcb4eee966a6c2f5ce625e-889346329.us-east-1.elb.amazonaws.com/new_record
Enhancing Application Delivery with Kubernetes Ingress and Application Load Balancer (ALB) with SSL
In this follow-up tutorial, we will enhance our existing Kubernetes deployment by setting up a Kubernetes Ingress and an Application Load Balancer (ALB) with SSL termination. This setup will improve our application delivery and security, providing a more robust and scalable solution.
Part 1: Setting Up Kubernetes Ingress
Kubernetes Ingress allows you to manage external access to your services in a more flexible and scalable way compared to using LoadBalancer services directly. We'll define an Ingress resource to route traffic to our NGINX reverse proxy.
Ingress Controller Installation:
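Note: the AWS Load Balancer Controller we installed earlier with Helm already acts as the ingress controller for Ingress resources of class alb, so no separate installation is required here.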
Ingress Resource Configuration:
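A minimal Ingress sketch that routes external traffic to the NGINX service defined earlier is shown below. The resource name and annotations are illustrative assumptions; adjust them to your environment:
# Hypothetical Ingress routing all HTTP traffic to the California NGINX service
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: humangov-ingress-california
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: humangov-nginx-service-california
                port:
                  number: 80
EOF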
Part 2: Setting Up Application Load Balancer (ALB) with SSL
To further secure and improve our application delivery, we'll use AWS Application Load Balancer (ALB) with SSL termination. We'll leverage AWS Load Balancer Controller and Cert-Manager for managing SSL certificates.
AWS Load Balancer Controller Installation:
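We already installed the AWS Load Balancer Controller with Helm in the earlier section, so that installation can be reused here; nothing new needs to be deployed for the ALB itself.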
Cert-Manager Installation:
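A typical Cert-Manager installation with Helm looks like the following (a sketch, assuming current chart defaults at the time of writing; the certificate issuers themselves still need to be configured afterwards):
# Install Cert-Manager, including its CRDs, into its own namespace
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true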
Cheers 🍻