Golang Application on Kubernetes

This post is a complete walk-through of how to deploy a monolithic Go web application to a Kubernetes cluster, how to attach a domain name so that it is publicly accessible and, finally, how to secure it with HTTPS using Let's Encrypt and cert-manager. Let's ride!

More About Kubernetes

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available. Containers are a good way to bundle and run your applications, but in a production environment you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?

That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system. You can find out more in the Kubernetes documentation.

Example Application

I put together an example Go application. It's a dead-simple key-value store backed by a Redis database. I chose Redis in order to show how to set up and connect to an external storage service from inside a web application when both are running in the same Kubernetes cluster. The application exposes three endpoints: /set to set a value in the backing Redis store, /get to retrieve the stored value and /status to check the health of the application. The example app code is hosted here on GitHub.
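
To make the rest of the walk-through concrete, here is a minimal sketch of what such an app can look like. It assumes the go-redis v7 client; the actual code in the repository may differ in detail.

// main.go — sketch of the key-value app: /set, /get and /status handlers.
package main

import (
	"fmt"
	"net/http"
	"os"

	"github.com/go-redis/redis/v7" // assumption: go-redis v7 client
)

func main() {
	// REDIS_HOST and PORT are injected later through the Deployment manifest.
	rdb := redis.NewClient(&redis.Options{Addr: os.Getenv("REDIS_HOST")})

	http.HandleFunc("/set", func(w http.ResponseWriter, r *http.Request) {
		key, value := r.URL.Query().Get("key"), r.URL.Query().Get("value")
		if err := rdb.Set(key, value, 0).Err(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	http.HandleFunc("/get", func(w http.ResponseWriter, r *http.Request) {
		value, err := rdb.Get(r.URL.Query().Get("key")).Result()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, value)
	})

	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "OK")
	})

	http.ListenAndServe(":"+os.Getenv("PORT"), nil)
}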

Prerequisites or tools required

This post assumes you have Docker and Kubernetes installed, that you have access to a Kubernetes cluster (local or cloud), and that you have a Kubernetes management tool like kubectl. I am going to be using Docker Desktop for Mac (v2.1.0.4).

Deploying Redis

Kubernetes has support for Deployments, where you define a deployment spec and Kubernetes makes sure your workload keeps running according to that spec. A Deployment is best suited to stateless applications. For stateful applications, e.g. databases or anything that needs its data persisted across container restarts and rescheduling, the better fit is a StatefulSet. Basically, the major difference is that a StatefulSet remembers its state after restarts or rescheduling, so container or application data is not lost on a crash, whereas a Deployment is more lightweight and is mostly used for applications that can rebuild their data from backend systems.

Let's deploy our Redis database with a Kubernetes StatefulSet. Create a statefulsets.yml file and paste the following content.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-store
spec:
  serviceName: "kv-redis-service"
  replicas: 1
  selector:
    matchLabels:
      app: kv-redis-service
  template:
    metadata:
      labels:
        app: kv-redis-service
    spec:
      containers:
        - name: redis-store
          image: redis:latest
          ports:
            - containerPort: 6379
              name: tcp-port
          volumeMounts:
            - name: redis-volume
              mountPath: /var/redis/data
              subPath: redis
  volumeClaimTemplates:
    - metadata:
        name: redis-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 4Gi

Then run $ kubectl apply -f statefulsets.yml. You should get a response like this: statefulset.apps/redis-store created

Run $ kubectl get pods to verify that the StatefulSet pod is running:

NAME            READY   STATUS    RESTARTS   AGE
redis-store-0   1/1     Running   0          3m3s

If you get the above response, great! We're on the right track; if not, retrace the steps and make sure you have not missed anything. Next, we need to be able to connect to our new Redis backend, and we will use a Kubernetes Service for this purpose. We want to expose Redis on its usual port 6379 through a ClusterIP-type Service so that other pods in the cluster can connect to it. Create a services.yml file and paste the following:

apiVersion: v1
kind: Service
metadata:
  name: kv-redis-service
  labels:
    app: kv-redis-service
spec:
  type: ClusterIP
  ports:
    - name: tcp-port
      port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    app: kv-redis-service

Run $ kubectl apply -f services.yml and you should get a response like service/kv-redis-service created. Then run:

$ kubectl get svc
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
kv-redis-service   ClusterIP   10.107.7.35   <none>        6379/TCP   77s

Great! Our service is up. Applications running in this cluster can now connect to our new Redis at the address kv-redis-service:6379; Kubernetes' cluster DNS resolves the service name to the Service's ClusterIP.
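
If you want to convince yourself that the name resolves, a tiny Go check like the sketch below can be containerized and run inside the cluster. It assumes the go-redis v7 client; the service name will not resolve from outside the cluster.

// redis_ping.go — quick connectivity check against the in-cluster service.
package main

import (
	"fmt"

	"github.com/go-redis/redis/v7" // assumption: go-redis v7 client
)

func main() {
	rdb := redis.NewClient(&redis.Options{
		Addr: "kv-redis-service:6379", // resolved by the cluster DNS
	})
	pong, err := rdb.Ping().Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(pong) // expect "PONG"
}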

Deploying the application

We need to create a Kubernetes Deployment for our key-value store application. A Docker image is available on the public Docker Hub; you can also build your own image from the example repository. To create the deployment, create a deployments.yml file and paste the content below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kv-app
  labels:
    web: app-service
spec:
  replicas: 1
  selector:
    matchLabels:
      web: app-service
  template:
    metadata:
      labels:
        web: app-service
    spec:
      containers:
        - name: kv-app-container
          image: dockadigun/kv-app
          imagePullPolicy: Always
          ports:
            - containerPort: 7002
              protocol: TCP
              name: access-port
          env:
            - name: PORT
              value: "7002"
            - name: REDIS_HOST
              value: "kv-redis-service:6379"

As you can see above, we're passing some values to our application through the container's environment variables: PORT, the port the application's HTTP server will listen on, and REDIS_HOST, the reachable address of the Redis service we created earlier. With the deployment manifest defined, we apply it using kubectl. Run $ kubectl apply -f deployments.yml; you should get a response like deployment.apps/kv-app created. Then run $ kubectl get pods to check the status of our new deployment.

$ kubectl apply -f deployments.yml
deployment.apps/kv-app created

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
kv-app-d584954bf-6xv4r   1/1     Running   0          3m13s

If you get a response similar to the above, yay! Our application is running. The next task is to expose this deployment through a Service so that we can eventually access it publicly. We will expose the deployment with a ClusterIP Service and then use the NGINX Ingress Controller to route and load-balance incoming traffic to it. The first step is creating the ClusterIP Service; as we saw earlier, a ClusterIP Service is only reachable from within the cluster. To create the Service, append the content below to services.yml:

---
apiVersion: v1
kind: Service
metadata:
  name: app-service
  labels:
    web: app-service
spec:
  type: ClusterIP
  ports:
    - name: http-port
      port: 7002
      protocol: TCP
      targetPort: 7002
  selector:
    web: app-service

Then run $ kubectl apply -f services.yml; you should see service/app-service created as part of the response. Run $ kubectl get svc to list all created services.

$ kubectl apply -f services.yml
service/app-service created

$ kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
app-service        ClusterIP   10.102.142.208   <none>        7002/TCP   55s
kv-redis-service   ClusterIP   10.107.7.35      <none>        6379/TCP   31m

Now we have app-service, which exposes our kv-app deployment, and kv-redis-service, which exposes the Redis server our app uses for data storage. A few more steps and we will have a world-class service that can scale from 1 to thousands of replicas in times of crisis :-)

Installing and setting up NGINX Ingress Controller

The NGINX Ingress Controller is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration. As I mentioned above, we'll be using it to route and load-balance requests to our kv-app service, so we need to install it in our cluster. The installation steps vary depending on the environment or cloud platform the cluster is running on. I am running Docker Desktop for Mac, which includes a single-node local Kubernetes cluster, so you can follow my steps if you're on Mac, or use this link to find the instructions specific to your environment or platform. Don't worry, it is very straightforward. First, we need to install the components that are generic to all environments and platforms. Run:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

This command installs all the required prerequisites for the NGINX Ingress Controller. Then run the command below if you're on Mac, or use the link above to run the right command for your environment:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml

The above command creates a LoadBalancer Service using your provider's underlying resources. To check that the installation went well, run the command below and compare the response to what I have here:

$ kubectl get svc -n ingress-nginx
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.103.137.101   localhost     80:32727/TCP,443:31170/TCP   1m

If you're running Kubernetes on a cloud platform, the EXTERNAL-IP value will be the IP address of the LoadBalancer created for the NGINX Ingress Controller; your service should be accessible through this IP address once the controller transitions to the Running state. Also, run $ kubectl get pods -n ingress-nginx to verify that the NGINX pod is running:

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-7fbc8f8d75-lm6gd   1/1     Running   1          10m

You should get a response similar to the above. Now we're ready to access our app; just one more step.

Creating Ingress resources

We need to create an Ingress resource that defines how we want traffic routed to our app. Create an ingress.yml file and paste the content below:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kv-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: "example.kv" # or a registered domain
      http:
        paths:
          - backend:
              serviceName: app-service
              servicePort: 7002
  # set default backend
  backend:
    serviceName: app-service
    servicePort: 7002

Run $ kubectl apply -f ingress.yml and verify that the Ingress resource was created by running $ kubectl get ingress. The response should be similar to this:

$ kubectl get ingress
NAME         HOSTS        ADDRESS     PORTS     AGE
kv-ingress   example.kv   localhost   80, 443   10d

If you have a registered domain name and you're not running a local Kubernetes cluster, you can create an A record pointing to the EXTERNAL-IP of our nginx ingress service, and you'll be able to access the service through your domain name once the DNS record propagates (takes ~5 minutes). If you're running a local cluster, we can do the same thing locally by modifying the system hosts file. Open a terminal, run sudo nano /etc/hosts, append 127.0.0.1 example.kv to the end of the file, then save and close.

Testing

Running curl "http://example.kv/set?key=key&value=value" (quoted, since the & would otherwise be interpreted by the shell) should respond with a 200 OK HTTP status code. And that's it. We're done.
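
If you'd rather exercise the endpoints from code, here is a small, hypothetical Go client that does the same round-trip. It assumes example.kv resolves to the ingress as set up above, and that /get echoes back the stored value.

// verify.go — set a value through the ingress and read it back.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	base := "http://example.kv"

	// Store a value via the /set endpoint.
	resp, err := http.Get(base + "/set?key=greeting&value=hello")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("set:", resp.Status) // expect "200 OK"

	// Read it back via the /get endpoint.
	resp, err = http.Get(base + "/get?key=greeting")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println("get:", string(body)) // expect "hello"
}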

Adding https with LetsEncrypt and cert-manager

Note: This step only applies if you are not running a local Kubernetes cluster, have a registered domain, and have an A record pointing to the EXTERNAL-IP of the ingress-nginx LoadBalancer Service.

We can add free, automated certificate issuance and management to our service so that it always runs securely; cert-manager and Let's Encrypt help us achieve this. First, we need to install the cert-manager components. The easiest way I've found is to apply this manifest:

$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.0/cert-manager.yaml

This installs cert-manager into your cluster, be it local or cloud-based. Check for a successful installation by running:

$ kubectl get pods -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-5c6866597-zw7kh               1/1     Running   0          2m
cert-manager-cainjector-577f6d9fd7-tr77l   1/1     Running   0          2m
cert-manager-webhook-787858fcdb-nlzsq      1/1     Running   0          2m

If you get the above response, congratulations! cert-manager is running successfully.

Creating Issuer and Certificate object

You can find more information about Issuers and Certificates on the cert-manager website. Basically, all we need to activate automated HTTPS certificate creation and maintenance is an Issuer object, a Certificate object and a small modification to our ingress.yml file, and we're set. Create a tls.yml file and paste the following content.

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: cluster-issuer
  namespace: cert-manager
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # server: https://acme-v02.api.letsencrypt.org/directory  # production endpoint
    email: youremail@gmail.com
    privateKeySecretRef:
      name: certs-key
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: tls-cert
  namespace: cert-manager
spec:
  secretName: certs-secret
  issuerRef:
    name: cluster-issuer
    kind: ClusterIssuer
  commonName: example.kv # or your host/domain name
  dnsNames:
    - example.kv # or your host/domain name

The above YAML creates an issuer and a certificate object, which cert-manager uses to issue a certificate for example.kv. Apply it by running $ kubectl apply -f tls.yml. We can then check whether our certificate is ready by running:

$ kubectl get certs -n cert-manager
NAME       READY   SECRET         AGE
tls-cert   True    certs-secret   10m

When the value of READY transitions to True, your certificate is ready to be used. Let's modify our Ingress resource to consume the new certificate we just created:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kv-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "cluster-issuer"
spec:
  tls:
    - hosts:
        - "example.kv" # or your host/domain name
      secretName: certs-secret
  rules:
    - host: "example.kv" # or your host/domain name
      http:
        paths:
          - backend:
              serviceName: app-service
              servicePort: 7002
  backend:
    serviceName: app-service
    servicePort: 7002

Notice the new annotation cert-manager.io/cluster-issuer: "cluster-issuer" and the spec.tls section. These tell the NGINX Ingress Controller to look up the certificate and apply it to the internal NGINX proxy. Running curl "https://example.kv/set?key=key&value=value" should give the same response as we got earlier. And then, we're done.

Thanks for following.