Project: GitOps CI/CD with ArgoCD, EKS, Terraform, and Helm
Table of Contents
Project Overview & Architecture
Prerequisites
Provisioning EKS with Terraform
Setting Up Your Git Repositories
Installing and Configuring ArgoCD
Deploying a Sample Application with Helm & ArgoCD
Managing Secrets with External Secrets Operator
Canary Deployments with Argo Rollouts
Putting It All Together: The CI Pipeline (GitHub Actions)
Tearing Down the Infrastructure
1. Project Overview & Architecture
We will build a complete GitOps workflow. The key idea is that Git is the single source of truth.
Developers push new application code to the app-source-code repository.
A CI pipeline (GitHub Actions) builds a new Docker image, pushes it to a registry (AWS ECR), and then updates the Helm chart's values.yaml in the gitops-manifests repository with the new image tag.
ArgoCD, running in our EKS cluster, constantly monitors the gitops-manifests repository.
When ArgoCD detects the change (the new image tag), it automatically pulls the updated manifests and applies them to the Kubernetes cluster, deploying the new version of the application.
2. Prerequisites
AWS Account with programmatic access (Access Key & Secret Key).
AWS CLI installed and configured (aws configure).
Terraform installed (>= 1.0).
kubectl installed.
Helm installed.
GitHub Account.
Docker installed and running locally (for the CI part).
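A quick sanity check that the tooling is in place (a minimal sketch; exact output will differ by version):

aws --version
terraform version
kubectl version --client
helm version
docker --version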
3. Provisioning EKS with Terraform
We will use Terraform to create the VPC, EKS cluster, and an ECR repository for our Docker images.
Directory Structure:
GitOps CI/CD
└── terraform/
    ├── main.tf
    ├── variables.tf
    ├── vpc.tf
    ├── eks.tf
    └── outputs.tf

terraform/variables.tf
variable "aws_region" {
  description = "The AWS region to deploy resources in."
  type        = string
  default     = "us-east-1"
}

variable "cluster_name" {
  description = "The name for the EKS cluster."
  type        = string
  default     = "gitops-cluster"
}

variable "app_name" {
  description = "The name of our sample application."
  type        = string
  default     = "guestbook"
}

terraform/main.tf
provider "aws" {
  region = var.aws_region
  # Credentials are picked up from your AWS CLI configuration or environment
  # (see Prerequisites). Never hardcode access keys in Terraform files.
}
# Create an ECR repository to store our application's Docker image
resource "aws_ecr_repository" "app_ecr_repo" {
  name                 = "${var.app_name}-repo"
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}

terraform/vpc.tf
We use the official AWS VPC module for simplicity and best practices.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.1.2"

  name = "${var.cluster_name}-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["${var.aws_region}a", "${var.aws_region}b", "${var.aws_region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = "1"
  }
}

terraform/eks.tf
We use the official AWS EKS module.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.16.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.28"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"
  }

  eks_managed_node_groups = {
    one = {
      name           = "general-nodes"
      instance_types = ["t3.medium"]

      min_size     = 1
      max_size     = 3
      desired_size = 2
    }
  }
}

terraform/outputs.tf
output "cluster_name" {
  description = "Amazon EKS Cluster name"
  value       = module.eks.cluster_name
}

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "aws_region" {
  description = "AWS region the cluster is deployed in (used by the kubeconfig command below)"
  value       = var.aws_region
}

output "ecr_repository_url" {
  description = "URL of the ECR repository"
  value       = aws_ecr_repository.app_ecr_repo.repository_url
}

Execution
Navigate to the terraform directory.
Initialize Terraform:
terraform init

Review the plan:

terraform plan

Apply the changes to create the infrastructure (this will take 15-20 minutes):

terraform apply --auto-approve

Once complete, configure kubectl to communicate with your new cluster:

aws eks --region $(terraform output -raw aws_region) update-kubeconfig --name $(terraform output -raw cluster_name)

Verify the connection:

kubectl get nodes   # You should see your 2 running nodes.
4. Setting Up Your Git Repositories
GitOps relies on Git repositories. We’ll use two:
app-source-code: For the application source code and Dockerfile.
gitops-manifests: For the Kubernetes manifests (Helm charts, ArgoCD applications). This is the repo ArgoCD will watch.
Action: Go to GitHub and create these two private repositories.
gitops-manifests Repository Structure
We’ll use the App of Apps pattern, which is a best practice for ArgoCD. We have a root “app” that points to other “apps”. This makes managing multiple applications/environments much easier.
Create the following structure in your gitops-manifests repo:
gitops-manifests/
├── argo-cd/
│   ├── app-of-apps.yaml
│   └── apps/
│       └── guestbook.yaml
└── helm-charts/
    └── guestbook/
        ├── Chart.yaml
        ├── templates/
        │   ├── deployment.yaml
        │   └── service.yaml
        └── values.yaml

Populate the files:

helm-charts/guestbook/Chart.yaml
apiVersion: v2
name: guestbook
description: A Helm chart for the Guestbook application
type: application
version: 0.1.0
appVersion: "1.0"
helm-charts/guestbook/values.yaml

replicaCount: 2

image:
  repository: YOUR_ECR_REPO_URL   # We will populate this later
  tag: "latest"                   # This will be updated by our CI pipeline
  pullPolicy: IfNotPresent

service:
  type: LoadBalancer
  port: 80
helm-charts/guestbook/templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 80

helm-charts/guestbook/templates/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}-service
spec:
  selector:
    app: {{ .Chart.Name }}
  ports:
    - protocol: TCP
      port: {{ .Values.service.port }}
      targetPort: 80
  type: {{ .Values.service.type }}
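Before committing, it can help to render the chart locally and catch template errors early. A minimal check from the repo root (the placeholder image values are fine for rendering):

helm lint helm-charts/guestbook
helm template guestbook helm-charts/guestbook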
Now, let's create the ArgoCD application manifests.

argo-cd/apps/guestbook.yaml

This file tells ArgoCD about our guestbook application.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'YOUR_GITOPS_MANIFESTS_REPO_URL'   # e.g., https://github.com/Consultantsrihari/gitops-manifests.git
    targetRevision: HEAD
    path: helm-charts/guestbook                 # Path to the Helm chart
    helm:
      valueFiles:
        - values.yaml
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

argo-cd/app-of-apps.yaml

This is the root application that deploys all other applications.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'YOUR_GITOPS_MANIFESTS_REPO_URL'   # e.g., https://github.com/Consultantsrihari/gitops-manifests.git
    targetRevision: HEAD
    path: argo-cd/apps                          # Path to the directory containing all our app definitions
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Action:
Replace YOUR_GITOPS_MANIFESTS_REPO_URL in the two files above.
Commit and push this entire structure to your gitops-manifests repository.
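A minimal sketch of those Git commands, assuming a local folder named gitops-manifests and a placeholder GitHub username (<your-user>):

cd gitops-manifests
git init -b main        # skip the init/remote steps if you cloned the empty repo from GitHub
git add .
git commit -m "Add guestbook Helm chart and ArgoCD application definitions"
git remote add origin https://github.com/<your-user>/gitops-manifests.git
git push -u origin main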
5. Installing and Configuring ArgoCD
Now we install ArgoCD into our EKS cluster.
Create a namespace for ArgoCD:
kubectl create namespace argocd

Install ArgoCD using Helm:

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd --namespace argocd

Access the ArgoCD UI:

For security, the API server isn't exposed publicly by default. We'll use port-forwarding to access it.

kubectl port-forward svc/argocd-server -n argocd 8080:443

Now, open https://localhost:8080 in your browser.

Get the initial admin password:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Log in to the UI with username admin and this password.
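If you prefer the terminal, you can also log in with the argocd CLI (optional; assumes the CLI is installed and the port-forward above is still running):

argocd login localhost:8080 --username admin --password <initial-admin-password> --insecure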
Connect ArgoCD to your gitops-manifests repo:
Since your repo is private, you need to give ArgoCD access. The easiest way is HTTPS with a GitHub Personal Access Token.

In the ArgoCD UI, go to Settings -> Repositories.
Click CONNECT REPO USING HTTPS.
Enter your gitops-manifests repo URL.
For username, enter git.
For password, you’ll need a GitHub Personal Access Token (PAT). Go to GitHub -> Settings -> Developer settings -> Personal access tokens -> Generate new token. Give it the repo scope. Copy the token and paste it as the password in ArgoCD.
Click Connect. You should see Connection Successful.
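The same repository connection can also be made from the CLI instead of the UI (an optional sketch; assumes you are logged in with argocd login and substitutes your own repo URL and PAT):

argocd repo add https://github.com/<your-user>/gitops-manifests.git --username git --password <your-github-pat>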
6. Deploying a Sample Application with Helm & ArgoCD
Now for the magic. We’ll tell ArgoCD to deploy our app-of-apps.
Bootstrap the process:
Apply the root app-of-apps.yaml manifest to your cluster. This is the only manual kubectl apply we need to do for our applications.

# Create a local copy of the file
cat <<EOF > app-of-apps.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'YOUR_GITOPS_MANIFESTS_REPO_URL'   # Replace with your repo URL
    targetRevision: HEAD
    path: argo-cd/apps
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF

# Apply it
kubectl apply -f app-of-apps.yaml

Observe in ArgoCD:
Go to your ArgoCD UI. You will see a new application called app-of-apps.
Click on it. You will see that it has created another application called guestbook.
Initially, guestbook might be Missing and OutOfSync. Click Sync.
ArgoCD will now read the guestbook.yaml definition, find its Helm chart in the specified path (helm-charts/guestbook), and deploy it to the guestbook namespace in your cluster.
Your application is now deployed! However, it won’t work yet because the image URL in values.yaml is invalid. We will fix this in the CI phase.
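A quick way to confirm what ArgoCD has created from the command line (names assume the manifests above; expect the guestbook pods to show ImagePullBackOff until the CI pipeline pushes a real image):

kubectl get applications -n argocd
kubectl get pods,svc -n guestbook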
7. Managing Secrets with External Secrets Operator
Hardcoding secrets is bad. We’ll use AWS Secrets Manager and the External Secrets Operator (ESO) to inject secrets securely.
Create a Secret in AWS Secrets Manager:
Let's imagine our app needs a database password.

aws secretsmanager create-secret --name guestbook/db-password --secret-string 'SuperSecretPassword123'
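You can confirm the secret exists before wiring up ESO (an optional check that does not print the value):

aws secretsmanager describe-secret --secret-id guestbook/db-password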
Install External Secrets Operator using Helm:

helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets --create-namespace

Configure IAM Role for Service Account (IRSA):
This is the secure way to grant pods AWS permissions without using access keys. An IAM OIDC provider is required for the cluster; the EKS Terraform module has already created it for us.
Create a file iam-for-eso.tf in your terraform/ directory to create the specific role.
terraform/iam-for-eso.tf:
data "aws_iam_policy_document" "secret_reader_policy" {
  statement {
    actions   = ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"]
    resources = ["arn:aws:secretsmanager:${var.aws_region}:${data.aws_caller_identity.current.account_id}:secret:guestbook/*"]
    effect    = "Allow"
  }
}

resource "aws_iam_policy" "secret_reader_policy" {
  name        = "${var.cluster_name}-secret-reader-policy"
  description = "Allows reading specific secrets from Secrets Manager"
  policy      = data.aws_iam_policy_document.secret_reader_policy.json
}

data "aws_caller_identity" "current" {}

module "iam_assumable_role_for_sa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
  version = "5.30.0"

  create_role      = true
  role_name        = "${var.cluster_name}-guestbook-sa-role"
  provider_url     = replace(module.eks.cluster_oidc_issuer_url, "https://", "")
  role_policy_arns = [aws_iam_policy.secret_reader_policy.arn]

  oidc_fully_qualified_subjects = ["system:serviceaccount:guestbook:guestbook-sa"]   # namespace:serviceaccount-name
}

Run terraform apply --auto-approve again to create this IAM role.
Add an output to your outputs.tf to easily get the role ARN:
output "guestbook_sa_role_arn" {
  description = "ARN of the IAM role for the guestbook service account"
  value       = module.iam_assumable_role_for_sa.iam_role_arn
}

Run terraform apply one more time.
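To confirm the role exists and grab its ARN (the role name below assumes the default cluster_name of gitops-cluster from variables.tf):

terraform output -raw guestbook_sa_role_arn
aws iam get-role --role-name gitops-cluster-guestbook-sa-role --query Role.Arn --output text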
Create Kubernetes resources in your GitOps repo:
Add these files to your gitops-manifests repo inside helm-charts/guestbook/templates/.

helm-charts/guestbook/templates/secret-store.yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secret-store
spec:
  provider:
    aws:
      service: SecretsManager
      region: {{ .Values.aws.region | default "us-east-1" }}
      role: {{ .Values.aws.serviceAccountRoleArn }}

helm-charts/guestbook/templates/external-secret.yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-password-secret
spec:
  secretStoreRef:
    name: aws-secret-store
    kind: SecretStore
  target:
    name: db-credentials        # Name of the k8s secret that will be created
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: guestbook/db-password

helm-charts/guestbook/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: guestbook-sa
  annotations:
    # This annotation links the k8s SA to the IAM Role
    eks.amazonaws.com/role-arn: {{ .Values.aws.serviceAccountRoleArn }}

Update values.yaml and deployment.yaml:
helm-charts/guestbook/values.yaml:
# ... existing values ...
aws:
  region: "us-east-1"                          # Your AWS Region
  serviceAccountRoleArn: "YOUR_IAM_ROLE_ARN"   # The output from terraform

helm-charts/guestbook/templates/deployment.yaml: Add serviceAccountName and mount the secret.
# ...
spec:
  # ...
  template:
    # ...
    spec:
      serviceAccountName: guestbook-sa   # Use our new service account
      containers:
        - name: {{ .Chart.Name }}
          # ...
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials   # The k8s secret created by ESO
                  key: password
Action:
Get the IAM Role ARN: terraform output -raw guestbook_sa_role_arn
Update values.yaml with this ARN.
Commit and push all new files and changes to your gitops-manifests repo.
ArgoCD will sync automatically, creating the Service Account, SecretStore, and ExternalSecret. ESO will then create the final Kubernetes Secret named db-credentials. Your pod now has the password available as an environment variable.
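To verify the chain end to end (resource names as defined in the manifests above):

kubectl get secretstores,externalsecrets -n guestbook
kubectl get secret db-credentials -n guestbook -o jsonpath='{.data.password}' | base64 -d; echo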
8. Bonus: Canary Deployments with Argo Rollouts
Argo Rollouts is a progressive delivery controller that provides advanced deployment strategies like canary and blue-green.
Install Argo Rollouts Controller:
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

Change Deployment to Rollout:
In your gitops-manifests repo, rename helm-charts/guestbook/templates/deployment.yaml to helm-charts/guestbook/templates/rollout.yaml and change its content.

helm-charts/guestbook/templates/rollout.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout                      # Changed from Deployment
metadata:
  name: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    canary:                        # Define the canary strategy
      steps:
        - setWeight: 20            # Send 20% of traffic to the new version
        - pause: { duration: 30s } # Pause for 30 seconds to observe
        - setWeight: 50            # Send 50% traffic
        - pause: { duration: 30s }
        # The rollout will be fully promoted after the last step
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:                        # The pod template is now nested under template
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      serviceAccountName: guestbook-sa
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 80
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password

Delete the old deployment.yaml and commit the new rollout.yaml. ArgoCD will sync and replace the Deployment with a Rollout object.
Now, whenever you update the image tag, Argo Rollouts will manage the update according to your canary strategy instead of doing a standard rolling update. You can observe the process with the kubectl plugin: kubectl argo rollouts get rollout guestbook -n guestbook --watch.
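If you don't have the kubectl argo rollouts plugin yet, a minimal install sketch for Linux (for macOS/Windows, grab the matching binary from the Argo Rollouts releases page), after which the watch command above works:

curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x kubectl-argo-rollouts-linux-amd64
sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
kubectl argo rollouts get rollout guestbook -n guestbook --watch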
9. Putting It All Together: The CI Pipeline (GitHub Actions)
This pipeline will live in your app-source-code repository. It will build and push the Docker image, then update the gitops-manifests repo to trigger the deployment.
Create a Sample Application:
In your app-source-code repo, create a simple web app.

app/main.py (a simple Flask app)
from flask import Flask
import os

app = Flask(__name__)

@app.route('/')
def hello():
    db_pass = os.environ.get("DB_PASSWORD", "Not Found")
    # In a real app, you'd use this password to connect to a DB.
    # Here we just display a part of it for demonstration.
    return f"<h1>Hello from the Guestbook App!</h1><p>DB Password starts with: {db_pass[:3]}...</p>"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

app/requirements.txt
Flask==2.2.2

Dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY app/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app/ .
CMD ["python", "main.py"]
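Before wiring up CI, you can sanity-check the image locally (optional; maps host port 8080 to the container's port 80 and injects a dummy password):

docker build -t guestbook:local .
docker run --rm -p 8080:80 -e DB_PASSWORD=dummy guestbook:local
# In another terminal:
curl http://localhost:8080/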
Create the GitHub Actions Workflow:

In your app-source-code repo, create .github/workflows/ci-cd.yml.

name: Build, Push, and Deploy

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # Required for AWS OIDC authentication
      contents: read

    steps:
      - name: Checkout App Source Code
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: ${{ secrets.AWS_IAM_ROLE_TO_ASSUME }}   # IAM Role for GitHub Actions
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY_NAME }}   # e.g., guestbook-repo
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG

      - name: Checkout Manifests Repo
        uses: actions/checkout@v3
        with:
          repository: Consultantsrihari/gitops-manifests   # Your manifests repo
          token: ${{ secrets.GITOPS_PAT }}                 # The PAT for the manifests repo
          path: 'gitops-manifests'

      - name: Update Helm values file
        uses: mikefarah/yq@v4.30.8
        with:
          cmd: |
            yq e '.image.tag = "${{ github.sha }}"' -i 'gitops-manifests/helm-charts/guestbook/values.yaml'
            yq e '.image.repository = "${{ steps.login-ecr.outputs.registry }}/${{ secrets.ECR_REPOSITORY_NAME }}"' -i 'gitops-manifests/helm-charts/guestbook/values.yaml'

      - name: Commit and push changes
        run: |
          cd gitops-manifests
          git config --global user.name "techcareerhub"
          git config --global user.email "techcareerhubs@gmail.com"
          git commit -am "Update image tag to ${{ github.sha }}"
          git push

Configure GitHub Actions Secrets and IAM:
IAM Role for GitHub Actions: You need to create an IAM role that GitHub Actions can assume via OIDC. This is similar to IRSA, but for GitHub. Follow the official AWS guide to create this role; a minimal CLI sketch follows the list below. It needs permissions to push to your ECR repository.
GitHub Secrets: In your app-source-code repository settings (Settings -> Secrets and variables -> Actions), add the following secrets:
AWS_REGION: e.g., us-east-1
AWS_IAM_ROLE_TO_ASSUME: The ARN of the IAM role you just created for GitHub Actions.
ECR_REPOSITORY_NAME: The name of your ECR repo (e.g., guestbook-repo).
GITOPS_PAT: The GitHub Personal Access Token you created earlier, which has repo access to your gitops-manifests repository.
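For the IAM role mentioned above, a minimal AWS CLI sketch of the OIDC provider and role. The role name github-actions-ecr-push, the <account-id> and <your-user> placeholders, and the broad managed ECR policy are illustrative choices; adapt them to your setup and prefer a tighter policy in production.

# 1. Create the GitHub OIDC identity provider (skip if it already exists in your account)
#    The thumbprint shown is the commonly documented value; check the current GitHub/AWS docs.
aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 6938fd4d98bab03faadb97b34396831e3780aea1

# 2. Trust policy limited to the main branch of your app-source-code repo
cat <<'EOF' > gha-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Federated": "arn:aws:iam::<account-id>:oidc-provider/token.actions.githubusercontent.com" },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": { "token.actions.githubusercontent.com:aud": "sts.amazonaws.com" },
      "StringLike": { "token.actions.githubusercontent.com:sub": "repo:<your-user>/app-source-code:ref:refs/heads/main" }
    }
  }]
}
EOF

# 3. Create the role and attach ECR push permissions
aws iam create-role --role-name github-actions-ecr-push \
  --assume-role-policy-document file://gha-trust-policy.json
aws iam attach-role-policy --role-name github-actions-ecr-push \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser

Use the ARN of github-actions-ecr-push as the AWS_IAM_ROLE_TO_ASSUME secret.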
Action:
Push the sample Flask app and Dockerfile to your app-source-code repo.
Create the IAM Role for GitHub Actions.
Add the required secrets to your GitHub repo.
Push the ci-cd.yml workflow file.
Now, every push to the main branch of app-source-code will trigger the pipeline, which in turn will trigger ArgoCD to deploy the new version via the canary strategy.
10. Tearing Down the Infrastructure
To avoid ongoing AWS costs, destroy all the resources you created.
Navigate to your terraform/ directory.
Run the destroy command:
terraform destroy --auto-approve
This will delete the EKS cluster, VPC, IAM roles, and ECR repository. Your GitHub repositories and secrets will remain.
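Two things were created outside of Terraform's state and are worth cleaning up as well: the load balancer provisioned by the guestbook Service (deleting the namespace before terraform destroy avoids the VPC teardown getting stuck on it) and the Secrets Manager secret created with the CLI. A hedged cleanup sketch:

kubectl delete namespace guestbook   # removes the Service and its load balancer
aws secretsmanager delete-secret --secret-id guestbook/db-password --force-delete-without-recovery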
For more information about job notifications, open-source projects, and DevOps and cloud projects, stay tuned to the TechCareerHubs official website.