Kubernetes (K8S) Cluster with CI/CD pipeline — Infrastructure as Code
Kubernetes (k8s), DevOps, and CI/CD have become buzzwords and are no longer limited to the big clouds, thanks to projects like MicroK8s from Canonical, K3s from Rancher Labs (now part of SUSE), Kind, and Minikube. These can be deployed on development systems for learning and testing, and some even support production-ready High Availability cluster setups on IoT/Edge devices.
This article is not a comparison between tools; rather, it shows how I, as an individual developer, can build an environment to run a k8s cluster at home for a few applications, or spin up throwaway clusters for learning. The idea is also to build everything as Infrastructure as Code, so that it is easy to rebuild in case of failures.
I am using the following tools to achieve my goal:
- k3s, a fully compliant Kubernetes distribution
- k3d, a lightweight wrapper that runs k3s in Docker, for deploying the cluster
- Ansible for setting up systems and building clusters as code
- drone.io for automating the build, test, and deployment process as code
- slack.com incoming webhook for pipeline notifications
- Docker as the base of the whole container system
- Rancher for a beautiful dashboard and for managing the k8s cluster
Who is this article for? For developers/individuals exploring options to build their own infrastructure end-to-end (Development, Test, Deployment), to run a few public-facing web applications and microservices for internal communication on Kubernetes.
Where can I find the Code?
What to expect in the Article? As a developer, keeping up with constantly evolving technology can be daunting, and I am on that learning trajectory myself. The main goal of this article is to show a simple (maybe not perfect or best) way to set up a k8s cluster and run some personal applications end-to-end on your own servers, without relying on cloud infrastructure. This article is not about explaining the concepts or the advantages of one tool over another.
Prerequisites:
- Although the code provided here is self-explanatory and mostly a bunch of commands to execute in a shell, basic knowledge of Kubernetes, Ansible, Docker, and k3d/k3s is recommended.
- As I am trying to build a public-facing server, a registered domain name is preferred. It could work with free domain registrars (but this is not tested).
- Domain for the site: I am using AWS Route53 to configure the domain name; for reference we call it “example.com”
- CNAME records for docker.example.com, drone.example.com, and *.uat.kube.example.com
System Config:
- Either a Virtual Machine or Bare metal system
- OS: Ubuntu 20.04 LTS Server x86_64
- 8G RAM
- 2 Cores
- 60G Hard Disk Space
- Server Name: utmp, User: deploy
- with SSH Server
Prepare the environment:
- Copy the SSH ID to the server so Ansible can log in remotely; for more instructions click here.
$ ssh-copy-id deploy@utmp
- Set up Ansible on the local system (other than the server) to execute commands; the steps for installing on Ubuntu can be found here.
$ sudo apt update
$ sudo apt install software-properties-common
$ sudo apt-add-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible
- fork the repository to your namespace and git clone it to the local system
- create file vault.pass at project_folder/ansible with some super strong password for encrypting Ansible vault variables
- create file vault.yml at project_folder/ansible/inventory/host_vars/utmp with all vault_* variables mentioned below to execute Ansible scripts
- encrypt the vault.yml file to secure sensitive data
$ ansible-vault encrypt inventory/host_vars/utmp/vault.yml
Note: All ansible-* commands should be executed from project_folder/ansible
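For reference, an ansible.cfg that points Ansible at the vault password file, inventory, and roles could look like the following (a minimal sketch; the exact keys in the repository's own ansible.cfg may differ):

[defaults]
# paths are relative to project_folder/ansible
inventory = inventory
roles_path = roles
# lets ansible-playbook and ansible-vault pick up the password automatically
vault_password_file = vault.pass

With this in place, the ansible-vault and ansible-playbook commands below do not need a --vault-password-file flag.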
Prepare Variables: vault.yml
# password to execute shell commands as SUDO
vault_become_pass: deploy
# root domain of your site (must be changed)
vault_root_domain: example.com
# route53 config for the domain
vault_aws_dns_email: aws-user@gmail.com
vault_aws_r53_access_key_id: route53-access-id
vault_aws_r53_secret_access_key: route53-access-key
# docker registry password for basic authentication
vault_docker_pass: sample-docker-pass
# github auth for drone integration
vault_github_client_id: github-client-id
vault_github_client_secret: github-client-secret
# drone API token from user settings (after drone setup step)
# optional - script 'drone-secrets' will try to fetch it from DB
vault_drone_api_key: drone-api-token
# to securely connect between drone server and runner
vault_rpc_secret: some-random-pass
# your github user id used in drone
vault_github_user: hareeshbabu82ns
# Slack webhook settings
vault_pipeline_notifier: "https://hooks.slack.com/services/T2../B0.."
vault_pipeline_notifier_oauth: "xoxb-7425..."
# k8s server url from `kubeconfig` (after cluster setup step)
# optional - script 'drone-secrets' will try to fetch it from k8s
vault_k8s_server: "https://192.168.86.74:33333"
Command Summary (Happy Path) for building the cluster:
$> cd ansible
$> echo "some-password" > vault.pass # add vault variables
$> vi inventory/host_vars/utmp/vault.yml
$> ansible-vault encrypt inventory/host_vars/utmp/vault.yml
$> ansible-playbook site.yml --tags "prepare"
$> ansible-playbook site.yml --tags "swag-up"
$> ansible-playbook site.yml --tags "registry-up"
$> ansible-playbook site.yml --tags "drone-up"
# after login to drone ui at drone.example.com, make yourself admin
$> ansible-playbook site.yml --tags "drone-admin"
$> ansible-playbook site.yml --tags "drone-cli"
# install k3d cluster
$> ansible-playbook site.yml --tags "k3d"
$> watch kubectl get deployments -A
# (optional) helm charts
$> ansible-playbook site.yml --tags "helms"
# activate forked repo your_user/k3d-cicd-demo in drone ui
# set drone secrets for the forked repo
$> ansible-playbook site.yml --tags "drone-secrets"
# update and commit to master branch to start pipeline execution
# open app at https://rnginx.uat.kube.example.com
Project Structure:
- ansible contains all code related to building infrastructure
- — inventory contains hosts to connect to and variables to substitute in tasks
- — roles contains task collections
- — — tasks contains related tasks
- — — — main.yml includes all tasks with tags; each task contains a sequence of steps to be performed on the respective host
- — — defaults contains role-specific variables
- — — templates contains files to be parsed with runtime variables from vaults
- — — vars contains role-specific encrypted variables
- — ansible.cfg specifies where to look for the vault password file, inventory, and roles
- — site.yml contains the entry points to the scripts
- react-nginx contains a sample React web application to be deployed on successful pipeline execution
- .drone.yml contains instructions to the Drone server/runner on how to build, test, deploy, and notify using docker containers
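To give an idea of what such a .drone.yml contains, here is a minimal sketch of a Drone docker pipeline (the step names, plugin settings, and secret names are assumptions for illustration; the actual file in the repository may differ):

kind: pipeline
type: docker
name: default

steps:
  # build the image and push it to our private registry
  - name: build-and-push
    image: plugins/docker
    settings:
      registry: docker.example.com
      repo: docker.example.com/react-nginx
      tags: latest
      username: deploy
      password:
        from_secret: docker_password

  # apply the k8s manifests using the service account credentials
  - name: deploy
    image: sinlead/drone-kubectl
    settings:
      kubernetes_server:
        from_secret: k8s_server
      kubernetes_token:
        from_secret: k8s_token
    commands:
      - kubectl apply -f react-nginx/deployment-react-nginx.yaml

  # notify the Slack channel about the pipeline result
  - name: notify
    image: plugins/slack
    settings:
      webhook:
        from_secret: slack_webhook

Each step runs in its own container, which is exactly why the runner is not limited in functionality.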
Spills on Architecture
- swag handles all HTTP/HTTPS traffic from the internet
- the docker registry acts as a central place to push/pull the docker images that we build as part of the CI/CD pipeline process
- alternatively, the registry could be built right into the k3d cluster, but I chose to keep it outside the cluster so that it can serve as a source for all my clusters on IoT devices like Raspberry Pi
- the Drone server handles/monitors changes to the source on GitHub/GitLab and schedules drone runners to perform the actual task execution
- the drone runner uses docker containers to perform each step; this way it is not limited in functionality, as we can use community or custom Docker images to run the tasks
- the drone runner uses Docker-in-Docker to build docker images and push them to our private registry
- finally, the drone runner uses the sinlead/drone-kubectl image to connect to our k3d cluster using the service account we created above and deploy the apps
- react-nginx/deployment-react-nginx.yaml contains the Deployment, Service, and Ingress definitions to expose the app on k8s
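A Deployment, Service, and Ingress combination like the one in react-nginx/deployment-react-nginx.yaml can be sketched as follows (the names, image tag, and pull-secret name are assumptions for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react-nginx
  template:
    metadata:
      labels:
        app: react-nginx
    spec:
      # pull from our private registry using the secret created by the k3d role
      imagePullSecrets:
        - name: registry-secret
      containers:
        - name: react-nginx
          image: docker.example.com/react-nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: react-nginx
spec:
  selector:
    app: react-nginx
  ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: react-nginx
spec:
  rules:
    # swag forwards *.uat.kube.example.com traffic to this ingress
    - host: rnginx.uat.kube.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: react-nginx
                port:
                  number: 80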
Finally, Building the Cluster
$ ansible-playbook site.yml --tags "prepare"
- the prepare script updates the server using apt and installs the Docker, kubectl, and k3d tools along with the required pip python support libraries
$ ansible-playbook site.yml --tags "swag-up"
- swag-up creates a docker container with an Nginx reverse proxy to route external traffic to other docker containers or the k8s cluster
- ansible/roles/swag/templates contains the configuration options
- — route53.ini configures the secrets for AWS Route53
- — *.conf files provide additional routing rules for Nginx
- — current.env contains environment variables for the docker container
- — docker-compose.yaml contains instructions to start/stop the docker container
- more config options for swag can be found here
$ ansible-playbook site.yml --tags "registry-up"
- registry-up creates a private docker registry with basic authentication
- ansible/roles/registry/templates contains the config options
- — subdomain.conf will be served to the swag instance
- — the password is sourced from the vault_docker_pass variable
$ ansible-playbook site.yml --tags "drone-up"
- drone-up creates both the drone server and drone runner containers with the shared vault_rpc_secret variable
- it also uses vault_github_client_id and vault_github_client_secret for authenticating/integrating GitHub webhooks for Continuous Integration
- once the server container is up, we can visit https://drone.example.com to authenticate with GitHub and sync projects
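The server/runner pairing boils down to sharing the RPC secret. A hedged docker-compose sketch of what the drone role sets up (the values are placeholders mirroring the vault variables; the repository's actual template may differ):

version: "3"
services:
  drone-server:
    image: drone/drone:2
    ports:
      - "8080:80"
    volumes:
      - ./data:/data
    environment:
      DRONE_SERVER_HOST: drone.example.com
      DRONE_SERVER_PROTO: https
      # must match the runner's secret below (vault_rpc_secret)
      DRONE_RPC_SECRET: some-random-pass
      # GitHub OAuth app credentials (vault_github_client_id/_secret)
      DRONE_GITHUB_CLIENT_ID: github-client-id
      DRONE_GITHUB_CLIENT_SECRET: github-client-secret

  drone-runner:
    image: drone/drone-runner-docker:1
    volumes:
      # the runner launches each pipeline step as a docker container
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      DRONE_RPC_HOST: drone.example.com
      DRONE_RPC_PROTO: https
      DRONE_RPC_SECRET: some-random-pass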
$ ansible-playbook site.yml --tags "drone-admin"
- drone-admin makes the user configured in vault_github_user an admin, which allows admin access in the drone UI
$ ansible-playbook site.yml --tags "drone-cli"
- drone-cli installs the drone utility binary to update secrets from the command line
$ ansible-playbook site.yml --tags "k3d"
- the k3d script installs a Kubernetes cluster using k3d
- it uses ansible/roles/k3d/master/templates/k3d-config.yaml to create the k3d cluster
- it creates a cluster admin using service-account.yaml, along with a registry-secret for the private docker registry
- the manifests folder contains the Persistent Volume and Volume Claims mapping a host folder to each k3s agent
- it installs the Helm binary and then installs the Certificate Manager and Rancher charts
$ watch kubectl get deployments -A
- use the command above to watch the deployment status
- once the above steps have finished, we can visit https://rancher.uat.kube.example.com for further setup
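The cluster-admin service account mentioned above can be expressed with standard Kubernetes RBAC manifests; a minimal sketch (the account name and namespace are assumptions, not necessarily what the repository's service-account.yaml uses):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: drone-deploy
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: drone-deploy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # grants full cluster access so the pipeline can apply any manifest
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: drone-deploy
    namespace: default

The account's token is what the drone-secrets script later hands to the pipeline for kubectl access.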
$ ansible-playbook site.yml --tags "helms"
- the helms script is optional and installs sample helm charts like heimdall and nginx, just to show how we can use helm to install required apps on clusters
- helm values can be configured in ansible/roles/k3d/helms/templates
- the apps can then be visited at https://heimdall.uat.kube.example.com and https://nginx.uat.kube.example.com
Setup Pipeline Notifier:
- Pipeline notifier is just a webhook to a Slack channel
- instructions to configure can be found here
- configure the values in the vault variables
- — vault_pipeline_notifier for the webhook URL
- — vault_pipeline_notifier_oauth for the app OAuth token
Setup CI/CD Pipeline using Drone:
- activate the forked project from https://drone.example.com
- configure it as a Trusted project and point it to the .drone.yml file
- this is a good time to run the script to update project-specific Drone secrets
- update the git_repo variable in the file ansible/roles/drone/tasks/main.yml to match the git repository name before setting the secrets
$ ansible-playbook site.yml --tags "drone-secrets"
Now, it's time to execute our pipeline:
- update some code in react-nginx/src/App.js
- commit and push the changes to the master branch of the forked repository
- our Drone instance should automatically pick up the push and start the pipeline
- when the pipeline finishes, it invokes the Slack webhook to notify us
- finally, check the app deployed at https://rnginx.uat.kube.example.com
Conclusion
As we can see, with a simple k3d configuration we can set up clusters and use drone.io for CI/CD, with Ansible on top of it all to provision the infrastructure as code.
It is interesting to explore such options, as developers can safely build test clusters to deploy their apps during development without affecting the Dev or UAT clusters shared by the whole team.
Although this setup is simple and scoped to a small use case, I am confident that with a little collaboration within teams, it could be used for wider scenarios.
Thanks for reading, I would love to hear any comments or suggestions to improve.
You can find me at: