Automated Application Versioning using Istio and Cloud Build

Srinibas Misra
9 min read · Apr 25, 2021

Let's meet Kevin and Ishan, two developers on the team at Gen-Erik Softwares. They have built an application that displays a random string in blue. A revolutionary application indeed. They have harnessed the power of a microservices-based architecture and deployed their application onto Kubernetes.

They have CI/CD pipelines in place that deploy their code to the Dev environment whenever a push occurs in their repository. However, here they face a few challenges.

Initially, automated deployments to the Dev environment happened on every push, irrespective of the branch. This caused problems: while Kevin was testing his code, Ishan would push his own changes, Ishan's version would go live, and Kevin would need to redeploy to finish testing.

They tried triggering deployments from only the master branch. But then a lot of branch merges were needed to finish development and testing. This wasn't a viable solution either.

So, what solution are we suggesting?

Using the implementation provided in this article, their Development environment deployments would work like this:

Whenever someone pushes to a specific branch, the code of that branch is deployed as a version in the development environment.

If Kevin pushes to a branch called "feature-A", his code is deployed to the cluster as a version called "feature-A". He accesses it by setting a custom header called "version" with the value "feature-A". Simultaneously, if Ishan deploys from a branch called "feature-B", his version is available by setting the "version" header to "feature-B". The two versions are completely isolated from each other, and if no version header is provided, traffic goes to the default version.

Let’s see how we can implement this!

Note: This article assumes that you have some basic knowledge of the following technologies:
1. Docker
2. Kubernetes
3. Git

Additional Note: We will be using Google Cloud Platform services in this article. However, the code can be modified to work with the cloud provider and CI/CD tools of your choice.

All the code used in this article is available here.

What would we be using?

  1. Google Kubernetes Engine as our Kubernetes service of choice.
  2. Cloud Source Repositories as our git repository.
  3. Cloud Build as our CI/CD tool.
  4. Google Container Registry as our Docker registry.

What are the applications?

There are two applications:

Frontend:

It is a small HTML/JavaScript application served using NGINX. It makes a GET request to the backend and prints the data received.

NGINX configuration can be found here.

Dockerfile for the frontend can be found here.

Backend:

It is a small Python Flask webserver with one route, “/generate”, which returns a random string.

Dockerfile for the backend can be found here.

Cluster setup

Download the source code (as a zip) from GitHub, or run the following commands:

wget -O istio-demo.zip https://github.com/srinibasmisra97/Istio-Versioning-Demo/archive/refs/heads/main.zip
unzip istio-demo.zip
cd Istio-Versioning-Demo-main

We can now create the GKE cluster:

gcloud container clusters create istio-demo --zone <Zone> --project <Project ID>

Istio Setup

Istio is a solution that helps us manage a service mesh.

A service mesh is essentially the network of Kubernetes services and deployments that make up an application. As the number of services grows, we run into issues with operability, monitoring, and end-to-end encryption, and mitigating them manually would be an immense task. Istio helps us with that.

Using Istio, we can perform intelligent traffic routing and get clear, simple operability across the entire service mesh.

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.9.3 sh -
cd istio-1.9.3
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y

Once installation is complete, you can check if all the Istio System pods are running:

kubectl get pods -n istio-system
Istio pods

Check the Istio services as well:

kubectl get services -n istio-system
Istio services

We also need to add a label to the default namespace so that all pods deployed to it get the Istio Envoy sidecar injected into them.

kubectl label namespace default istio-injection=enabled

Application Architecture

Ideally, without Istio, the architecture would be pretty straightforward.

Kubernetes Architecture

Both the backend and the frontend would be individual deployments, each with its own service, and the frontend service would either be of type LoadBalancer or sit behind an Ingress resource.

However, for our solution, the architecture would look something like this:

Here, each version would have its own deployment. Pod labels would be used to distinguish the different versions.

labels:
  app: backend
  version: -VERSION-

And we would have a single service shared by all backend/frontend versions.
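To make this concrete, here is a minimal sketch of what such a shared backend service could look like; the resource name and port are assumptions for illustration, and the actual manifests are the ones in the repository:

apiVersion: v1
kind: Service
metadata:
  name: backend              # assumed service name
spec:
  selector:
    app: backend             # note: no "version" label here, so the service
                             # spans the pods of every deployed version
  ports:
    - port: 5000             # assumed port for the Flask backend
      targetPort: 5000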

But, how would we route traffic to a specific version?

Here, we would use the magic of some custom Istio resources, mainly:
1. Destination Rule
2. Virtual Service
3. Gateway

Destination Rule

A Destination Rule is the resource that defines the different backends that are available for a given service.

We have to specify the host (the Kubernetes service) that serves the different versions. Subsets are label selectors that pick out the pods of an individual version.

If a version "v2" is deployed but the Destination Rule does not list "v2" as a subset, Istio will not be able to route traffic to that version.
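As a rough sketch, a Destination Rule for the backend could look like the following; the subset names here are assumptions matching the examples in this article, and the actual files are in the repository:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend
spec:
  host: backend              # the Kubernetes service
  subsets:
    - name: live             # the default version
      labels:
        version: live
    - name: feature-red      # an additional deployed version
      labels:
        version: feature-red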

More information about Destination Rules can be found here.

Virtual Service

A Virtual Service is another custom Istio resource that works like a URL map. It contains multiple rules that define how the traffic needs to be routed.

The hosts parameter of a virtual service needs to be a proper domain name, either a public domain or a Kubernetes service domain. We are using "*" to allow all traffic to enter our service mesh.

The gateways parameter is not compulsory, but it can be specified to only accept traffic entering through specific gateways.

We would be routing traffic to specific destination subsets based on the value of the “version” header.
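As a simplified sketch (the exact hosts, gateway bindings and subset names in the repository may differ), the header-based routing could look like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
spec:
  hosts:
    - "*"
  gateways:
    - app-gateway                  # assumed gateway name
  http:
    - match:
        - headers:
            version:
              exact: feature-red   # requests carrying version: feature-red
      route:
        - destination:
            host: backend
            subset: feature-red
    - route:                       # fallback when no version header matches
        - destination:
            host: backend
            subset: live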

Gateway

A Gateway serves as the gate that allows traffic to enter the service mesh.

It is attached to the default Istio ingress gateway deployment, and its hosts parameter can be used to allow traffic only for specific domain names.
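A minimal sketch of such a Gateway (the resource name here is assumed for illustration) could be:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway    # bind to the default Istio ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"                # accept traffic for any host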

So with the help of Istio, the request routing would look like this:

Istio service routing

Application Setup

Let's deploy the default version of the application, with the version name live.

kubectl apply -f k8s/manifest.yaml
kubectl apply -f k8s/live/backend
kubectl apply -f k8s/live/frontend

This would set up the following:
1. Istio gateway
2. Backend deployment
3. Backend service
4. Backend destination rules
5. Backend virtual service
6. Frontend deployment
7. Frontend service
8. Frontend destination rules
9. Frontend virtual service

We can now access the application from the Istio Ingressgateway public IP.

kubectl get service istio-ingressgateway -n istio-system

You should see an output similar to this:

Application output

On refreshing the page, the value would change.

Cloud Build Setup

Now that the application is set up, let's set up the source repository and the CI/CD trigger using Cloud Build.

Source Repository Setup

gcloud source repos create [REPO_NAME]

Add the remote to your current directory:

git init
git remote add google https://source.developers.google.com/p/[PROJECT_NAME]/r/[REPO_NAME]
git add .
git commit -m "initial commit"
git push -u google master

If you face any authentication issues while pushing, configure your git credentials from here.

Cloud Build Access

Before we start triggering deployments, we need to make sure that Cloud Build has access to deploy to GKE.

Go to Cloud Build settings.

Enable access to Kubernetes Engine. This would be the only access we would need.

Cloud Build settings

Cloud Build YAML

The Cloud Build job configuration is present in the cloudbuild.yaml files; the backend and the frontend each have their own.

You can find the backend and frontend cloudbuild.yaml files here.

Note: Make sure you update your cluster details in the substitution variables of backend/cloudbuild.yaml and frontend/cloudbuild.yaml. Both the build configurations are nearly identical, apart from the words “backend” and “frontend”.

The build steps are:

  1. Docker image build.
  2. Docker image push to Google Container Registry.
  3. Fetch credentials for GKE cluster.
  4. Get existing deployments.
  5. Generate Deployment, VirtualService and DestinationRule YAML files.
  6. Apply the Deployment file.
  7. Apply the DestinationRule file.
  8. Apply the VirtualService file.

All the steps are performed using available Cloud Builders. However, the generation of deployment.yaml, virtualservice.yaml, and destinationrule.yaml is done using a Python script.

The script generates a deployment.yaml for the code being deployed. It also generates the virtualservice.yaml and destinationrule.yaml containing the new version and the previous versions.

The script can be found here.
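For orientation, here is a trimmed sketch of what such a cloudbuild.yaml could look like. The substitution variable names, script path, and image tag below are assumptions for illustration, so treat the cloudbuild.yaml files in the repository as the source of truth.

steps:
  # 1-2. Build the Docker image and push it to Google Container Registry
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/backend:$SHORT_SHA', 'backend/']
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/backend:$SHORT_SHA']

  # 3. Fetch credentials for the GKE cluster
  - name: gcr.io/cloud-builders/gcloud
    args: ['container', 'clusters', 'get-credentials', '${_CLUSTER}', '--zone', '${_ZONE}']

  # 4-5. Generate the Deployment, DestinationRule and VirtualService YAMLs
  #      from the branch name (hypothetical script path and flags)
  - name: python:3.9
    entrypoint: python
    args: ['scripts/generate.py', '--branch', '$BRANCH_NAME']

  # 6-8. Apply the generated files (destinationrule.yaml and
  #      virtualservice.yaml are applied the same way)
  - name: gcr.io/cloud-builders/kubectl
    args: ['apply', '-f', 'deployment.yaml']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}'

substitutions:
  _CLUSTER: istio-demo
  _ZONE: us-central1-a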

Cloud Build Triggers

We can now connect Cloud Build to the source repository we just created.

  1. Go to the Cloud Build triggers page.
  2. Click on Create Trigger.
  3. Provide a name for the trigger.
  4. Set the Event as "Push to a branch".
  5. Select the source repository from the dropdown.
  6. Set the branch as frontend/*.
  7. For the Cloud Build configuration file location, set the path as frontend/cloudbuild.yaml.
  8. Click on Save.

Perform similar steps for the backend trigger as well.

Triggers

Let's test the trigger!

Update the frontend/index.html file and set the color to red.

Push to a branch named frontend/feature-red. This would deploy a new frontend version called feature-red.

Now go to the Cloud Build history page.

You should see that a build job has been automatically triggered.

Build job triggered

Wait till the build completes.

Build successful

You can see that a new deployment has been created:

kubectl get deployments

Let's access this version!

Go to the Istio Ingressgateway public IP address.

You should see the same output as before: blue text.

Now add the custom header to specify the version. You can use a browser extension like ModHeader to do this.

Add the header version and set its value to feature-red. And refresh the page.

And voila! The text is red!

Add custom header

If you disable ModHeader or change the value of the version header, you are routed back to the original blue version.

Let's now deploy a new backend version named feature-red!

Update backend/app.py and change the "v1" on line 11 to "red".

Push this change to the branch backend/feature-red.

Similar to before, you should see a new build job triggered.

Once the job completes, refresh the page, making sure that the version header is set.

You should now see the string start with "red". Again, disable the custom header to confirm that you go back to the default version.

Feature red

Now, let's try one last thing.

Update backend/app.py and change "red" to "new-string".

Push this change to the branch backend/new-string, and check this version in the browser.

Note that the text is still blue (the default frontend version), while the string now comes from the new-string backend.

This shows that the routing is independent: you don't need to have the same version deployed for both the backend and the frontend.

Conclusion

Using this mechanism to deploy containerised applications immensely simplifies development-environment deployments and testing. And adding Istio to the picture lets you get even more out of your Kubernetes services.

Feel free to tweak the code used in this article to fit your projects!

References

  1. Code used in this article.
  2. What is Istio?
  3. Getting started with Istio.
  4. Istio Virtual Services.
  5. Istio Destination Rules.
  6. Istio Gateways.
  7. Cloud Build Docs.
