Deploying a 3-Tier Web Application on AWS

We previously touched on how to deploy a 3-tier web app locally on Kubernetes. You can check it out in my previous article:

https://gatete.hashnode.dev/fundermentals-of-deploying-a-3-tier-application-on-kubernetes

We will be deploying a 3-tier web application on AWS as our cloud provider, leveraging GitHub for source code management and GitHub Actions for CI/CD. We will also use Docker to build container images that package both our frontend and backend applications, plus the stateful tier of the application.

Prerequisites:

  1. AWS account with an IAM user with elevated privileges

  2. GitHub account

    1. Source code management

    2. GitHub Actions CI/CD pipelines

  3. Docker Hub

  4. Helm installed

  5. kubectl installed locally

  6. eksctl installed locally

Creating a User on AWS

Go to IAM, select Users, enter a username to create the user, and follow the instructions.

For production purposes, it is recommended that you use the groups feature to attach policies and roles to this user. Since this is a demo, I will attach the policies directly to the user created above.

Finally, review and create the user!

Next up, you will create an access key, which in turn generates a secret key; together they let you access AWS services via the CLI.

Once this is complete you will have both an access key and a secret key. Be mindful and store them securely: with these keys, an AWS account can be accessed. I would also recommend configuring MFA for every user!

Now that we have our user, let us go on and access the source code for the project:

git clone https://github.com/Gatete-Bruno/3-Tier-Aws.git

You will have something like this!

Let us break down the source code:

For the frontend, we have a Dockerfile:

bruno@Batman-2 frontend % tree
.
├── Dockerfile
├── package-lock.json
├── package.json
├── public
│   ├── favicon.ico
│   ├── index.html
│   ├── logo192.png
│   ├── logo512.png
│   ├── manifest.json
│   └── robots.txt
└── src
    ├── App.css
    ├── App.js
    ├── Tasks.js
    ├── index.css
    ├── index.js
    └── services
        └── taskServices.js

4 directories, 15 files
bruno@Batman-2 frontend % cat Dockerfile 
# Use the official Node.js 14 image as a base image
FROM node:14

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the package.json and package-lock.json files to the container
COPY package*.json ./

# Install the application's dependencies inside the container
RUN npm install

# Copy the rest of the application code to the container
COPY . .

# Exposing Port to the container
EXPOSE 3000

# Specify the command to run when the container starts
CMD [ "npm", "start" ]
bruno@Batman-2 frontend %

In the backend directory:

bruno@Batman-2 backend % tree
.
├── Dockerfile
├── db.js
├── index.js
├── models
│   └── task.js
├── package-lock.json
├── package.json
└── routes
    └── tasks.js

3 directories, 7 files
bruno@Batman-2 backend % cat Dockerfile 
# Use the official Node.js 14 image as a base image
FROM node:14

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the package.json and package-lock.json files to the container
COPY package*.json ./

# Install the application's dependencies inside the container
RUN npm install

# Copy the rest of the application code to the container
COPY . .

# Expose Port to the container
EXPOSE 8080

# Specify the command to run when the container starts
CMD [ "node", "index.js" ]
bruno@Batman-2 backend %

For now we can go on to deploy these two using Docker by building container images, and then access them on an instance.

kato@docker-vm:~/3-Tier-Aws$ tree
.
├── README.md
├── backend
│   ├── Dockerfile
│   ├── db.js
│   ├── index.js
│   ├── models
│   │   └── task.js
│   ├── package-lock.json
│   ├── package.json
│   └── routes
│       └── tasks.js
├── frontend
│   ├── Dockerfile
│   ├── package-lock.json
│   ├── package.json
│   ├── public
│   │   ├── favicon.ico
│   │   ├── index.html
│   │   ├── logo192.png
│   │   ├── logo512.png
│   │   ├── manifest.json
│   │   └── robots.txt
│   └── src
│       ├── App.css
│       ├── App.js
│       ├── Tasks.js
│       ├── index.css
│       ├── index.js
│       └── services
│           └── taskServices.js

Let us build the images and run the containers.


kato@docker-vm:~/3-Tier-Aws$ docker build -t bruno74t/frontend-image:latest frontend
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  739.3kB
Step 1/7 : FROM node:14
 ---> 1d12470fa662
Step 2/7 : WORKDIR /usr/src/app
 ---> Using cache
 ---> 270ff4ebac5b
Step 3/7 : COPY package*.json ./
 ---> Using cache
 ---> faa27f1e0b0d
Step 4/7 : RUN npm install
 ---> Using cache
 ---> 5d1f3a93737b
Step 5/7 : COPY . .
 ---> Using cache
 ---> be7fc8279bc3
Step 6/7 : EXPOSE 3000
 ---> Using cache
 ---> 5fa542abcd9f
Step 7/7 : CMD [ "npm", "start" ]
 ---> Using cache
 ---> 59f535a9b640
Successfully built 59f535a9b640
Successfully tagged bruno74t/frontend-image:latest
kato@docker-vm:~/3-Tier-Aws$ docker build -t bruno74t/backend-image:latest backend
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
            Install the buildx component to build images with BuildKit:
            https://docs.docker.com/go/buildx/

Sending build context to Docker daemon  34.82kB
Step 1/7 : FROM node:14
 ---> 1d12470fa662
Step 2/7 : WORKDIR /usr/src/app
 ---> Using cache
 ---> 270ff4ebac5b
Step 3/7 : COPY package*.json ./
 ---> Using cache
 ---> 35b49dd08091
Step 4/7 : RUN npm install
 ---> Using cache
 ---> ff520b75da38
Step 5/7 : COPY . .
 ---> Using cache
 ---> a535ce9cf960
Step 6/7 : EXPOSE 8080
 ---> Using cache
 ---> bcf9cc942d92
Step 7/7 : CMD [ "node", "index.js" ]
 ---> Using cache
 ---> 7a17a8ba4437
Successfully built 7a17a8ba4437
Successfully tagged bruno74t/backend-image:latest
kato@docker-vm:~/3-Tier-Aws$ docker run -d -p 3000:3000 bruno74t/frontend-image:latest
e944172360a0158a5a234e7e1e0e033bde20b3873a0744f3d4c4df4bc31b06b7
kato@docker-vm:~/3-Tier-Aws$ docker run -d -p 8080:8080 bruno74t/backend-image:latest
423750f12c3e9bcc0698c04ea911282a5cc02002e44493d0456dc9292bdf8bdc
kato@docker-vm:~/3-Tier-Aws$

To learn more about the basics of Docker, you can also check out my blog on that:

https://gatete.hashnode.dev/docker-101

As you can see, the application is accessible after containerization!

GitHub Actions: CI/CD Pipelines

We are going to use GitHub Actions to build the container images and push them to Docker Hub, our container registry.

To understand the basics of using github actions for CI/CD pipelines, check out this blog

https://gatete.hashnode.dev/github-actions-a-beginners-guide

Let's get into it:

In our local source code repo, create this structure:

bruno@Batman-2 .github % tree
.
└── workflows
    ├── backend.yml
    ├── frontend.yml
    └── main.yml

2 directories, 3 files
bruno@Batman-2 .github %

We have three files:

bruno@Batman-2 .github % cat workflows/backend.yml 
name: Build and Push Backend Docker Image

on:
  push:
    branches:
      - main  # Or whichever branch you want to trigger the workflow

jobs:
  build-and-push:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Build Backend Docker Image
        run: docker build -t bruno74t/backend-image:latest backend

      - name: Log into Docker Hub
        run: echo ${{ secrets.DOCKER_PASSWORD_SYMBOLS_ALLOWED }} | docker login --username ${{ secrets.DOCKER_USERNAME }} --password-stdin

      - name: Push Backend Docker Image
        run: docker push bruno74t/backend-image:latest
bruno@Batman-2 .github % cat workflows/main.yml    
name: Main Workflow

on:
  push:
    branches:
      - main  # Or whichever branch you want to trigger the workflow

jobs:
  build-and-push-frontend:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Build Frontend Docker Image
        run: docker build -t bruno74t/frontend-image:latest frontend

      - name: Log into Docker Hub
        run: echo ${{ secrets.DOCKER_PASSWORD_SYMBOLS_ALLOWED }} | docker login --username ${{ secrets.DOCKER_USERNAME }} --password-stdin

      - name: Push Frontend Docker Image
        run: docker push bruno74t/frontend-image:latest

  build-and-push-backend:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Build Backend Docker Image
        run: docker build -t bruno74t/backend-image:latest backend

      - name: Log into Docker Hub
        run: echo ${{ secrets.DOCKER_PASSWORD_SYMBOLS_ALLOWED }} | docker login --username ${{ secrets.DOCKER_USERNAME }} --password-stdin

      - name: Push Backend Docker Image
        run: docker push bruno74t/backend-image:latest

bruno@Batman-2 .github %
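The third file, workflows/frontend.yml, is not listed above; it presumably mirrors backend.yml for the frontend image. A sketch, assuming the same Docker Hub secrets as the other workflows:

```yaml
name: Build and Push Frontend Docker Image

on:
  push:
    branches:
      - main  # Or whichever branch you want to trigger the workflow

jobs:
  build-and-push:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Build Frontend Docker Image
        run: docker build -t bruno74t/frontend-image:latest frontend

      - name: Log into Docker Hub
        run: echo ${{ secrets.DOCKER_PASSWORD_SYMBOLS_ALLOWED }} | docker login --username ${{ secrets.DOCKER_USERNAME }} --password-stdin

      - name: Push Frontend Docker Image
        run: docker push bruno74t/frontend-image:latest
```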

With these workflows in place, we can build the images and push them to Docker Hub.

Once we push changes to GitHub, Actions will do what is required.

Let us check Docker Hub to see if our images were pushed.

Kubernetes on AWS

What is Kubernetes, to begin with?

https://gatete.hashnode.dev/k8s-up-running

Now we will create an EKS cluster on AWS:

eksctl create cluster --name 3-tier-aws --region us-east-1 --node-type t2.medium --nodes-min 2 --nodes-max 3

aws eks update-kubeconfig --region us-east-1 --name 3-tier-aws

kubectl get nodes

Creating a cluster will take some time; here is how it looks.

Validation:

bruno@Batman-2 .github % aws eks update-kubeconfig --region us-east-1 --name 3-tier-aws
Added new context arn:aws:eks:us-east-1:241141669128:cluster/3-tier-aws to /Users/bruno/.kube/config
bruno@Batman-2 .github % kubectl get nodes
NAME                             STATUS   ROLES    AGE     VERSION
ip-192-168-2-65.ec2.internal     Ready    <none>   3m59s   v1.29.0-eks-5e0fdde
ip-192-168-53-191.ec2.internal   Ready    <none>   4m9s    v1.29.0-eks-5e0fdde
bruno@Batman-2 .github %

Once the cluster is ready, let's create a namespace for our k8s workloads:

kubectl create namespace 3-tier-aws
kubectl config set-context --current --namespace 3-tier-aws

bruno@Batman-2 .github % kubectl create namespace 3-tier-aws
namespace/3-tier-aws created
bruno@Batman-2 .github % 
kubectl config set-context --current --namespace 3-tier-aws
Context "arn:aws:eks:us-east-1:241141669128:cluster/3-tier-aws" modified.
bruno@Batman-2 .github %

Let us deploy the k8s workloads

Frontend first:

kubectl apply -f frontend-deployment.yaml
kubectl apply -f frontend-service.yaml
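The manifest files themselves are not shown in this post; here is a minimal sketch of what frontend-deployment.yaml and frontend-service.yaml might contain, assuming the image we pushed earlier and port 3000 (the resource and label names here are assumptions, not the repo's actual ones):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: 3-tier-aws
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: bruno74t/frontend-image:latest
          ports:
            - containerPort: 3000  # matches EXPOSE 3000 in the Dockerfile
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc  # assumed name
  namespace: 3-tier-aws
spec:
  selector:
    app: frontend
  ports:
    - port: 3000
      targetPort: 3000
```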

Backend next:

kubectl apply -f backend-deployment.yaml
kubectl apply -f backend-service.yaml
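Similarly, a sketch of what backend-deployment.yaml and backend-service.yaml might look like, assuming port 8080 and a connection-string environment variable (the variable name MONGO_URI and the service name mongodb-svc are assumptions for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: 3-tier-aws
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: bruno74t/backend-image:latest
          ports:
            - containerPort: 8080  # matches EXPOSE 8080 in the Dockerfile
          env:
            - name: MONGO_URI  # assumed variable name
              value: mongodb://mongodb-svc:27017/tasks
---
apiVersion: v1
kind: Service
metadata:
  name: backend-svc  # assumed name
  namespace: 3-tier-aws
spec:
  selector:
    app: backend
  ports:
    - port: 8080
      targetPort: 8080
```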

Let us check our workloads:

kubectl get all -n 3-tier-aws

Database Tier


bruno@Batman-2 k8s_manifests % cd mongo 
bruno@Batman-2 mongo % kubectl apply -f .
deployment.apps/mongodb created
secret/mongo-sec created
service/mongodb-svc created
bruno@Batman-2 mongo %
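The mongo directory's manifests are applied above but not listed. Going by the created resource names (mongodb, mongo-sec, mongodb-svc), a hedged sketch of the secret and service (the key names and values below are placeholders for illustration, not the repo's real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-sec
  namespace: 3-tier-aws
type: Opaque
stringData:
  username: admin        # placeholder value
  password: changeme123  # placeholder value
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-svc
  namespace: 3-tier-aws
spec:
  selector:
    app: mongodb
  ports:
    - port: 27017       # default MongoDB port
      targetPort: 27017
```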

Set up the Application Load Balancer and Ingress

Purpose:

We have to create an Application Load Balancer to route outside traffic into the cluster, and an Ingress for internal routing between our 3 tiers.

The command below fetches the IAM policy for the AWS Load Balancer Controller:

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

Create the IAM policy in your AWS account from the iam_policy.json file downloaded by the previous command:

aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

This command associates an IAM OIDC provider with your EKS cluster, so that service accounts in the cluster can assume IAM roles:

eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=3-tier-aws --approve

This command creates a service account in your cluster and attaches an IAM role to it, so that the cluster is allowed to manage load balancers.

Please replace the AWS account number in the command below with your own, otherwise it won't work:

eksctl create iamserviceaccount --cluster=3-tier-aws --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::241141669128:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=us-east-1

With all the policies attached, let's deploy the load balancer controller.

For this we have to install Helm. Helm is a package manager for Kubernetes that makes it easy to install and manage software on a cluster.

brew install helm
helm version

After this, add the eks-charts Helm repository, which contains the pre-written chart for the load balancer controller:

helm repo add eks https://aws.github.io/eks-charts

Update the eks repo using Helm:

helm repo update eks

Install the load balancer controller on your EKS cluster:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=3-tier-aws --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

kubectl get deployment -n kube-system aws-load-balancer-controller

Wait for the controller deployment to be ready.

Set up the Ingress for internal routing

Locate the full_stack_lb.yaml file:

kubectl apply -f full_stack_lb.yaml 
kubectl get ing -n 3-tier-aws
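The contents of full_stack_lb.yaml are not shown here. A minimal sketch of what an ALB-backed Ingress for this setup might look like, assuming the AWS Load Balancer Controller installed above and the backend/frontend service names used earlier (the Ingress name, paths, and service names are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mainlb
  namespace: 3-tier-aws
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing  # public ALB
    alb.ingress.kubernetes.io/target-type: ip          # route to pod IPs
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /api            # assumed API prefix
            pathType: Prefix
            backend:
              service:
                name: backend-svc  # assumed service name
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc  # assumed service name
                port:
                  number: 3000
```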

You can access the app via:

http://k8s-3tieraws-mainlb-81edaf30d8-1886029530.us-east-1.elb.amazonaws.com/

Finally, let us delete the resources so we don't keep incurring charges.

Delete Resources

eksctl delete cluster --name 3-tier-aws --region us-east-1

aws cloudformation delete-stack --stack-name eksctl-3-tier-aws-cluster

Cheers !