Exercise

This exercise packages a TICK stack into a Helm chart. It first introduces the application stack and, more importantly, walks through the steps needed to package an application. You can follow the same steps later to package your own applications.

1. The TICK Stack

This application stack is used for time series management. It’s a good candidate for IoT projects where sensors continuously send data (temperature, atmospheric pressure, etc.).

Its name comes from its different components:

  • Telegraf
  • InfluxDB
  • Chronograf
  • Kapacitor

The following diagram illustrates the overall architecture:

tick

Data is sent to Telegraf and stored in an InfluxDB database. Chronograf allows querying via a web interface. Kapacitor is an engine that processes this data in real-time and can raise alerts based on its evolution.

2. Manifest Files

The tick.tar archive, available at https://luc.run/tick.tar, contains all the specifications needed to deploy this stack in a Kubernetes cluster:

  • a Service and a Deployment for each component (Telegraf, InfluxDB, Chronograf, Kapacitor)
  • a ConfigMap containing Telegraf configuration
  • an Ingress resource to expose the different services:
    • telegraf service will be exposed via telegraf.tick.com
    • chronograf service will be exposed via chronograf.tick.com

Create a tick directory, download the tick.tar archive, and place it in this directory. Then unpack it with the command tar xvf tick.tar.
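
The download-and-unpack steps can be scripted as follows (a sketch assuming curl is available; any equivalent download tool works):

```shell
# Create the working directory, fetch the archive, and unpack it.
mkdir tick && cd tick
curl -sLO https://luc.run/tick.tar
tar xvf tick.tar
```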

The tick directory will then contain a manifests directory with the following files:

$ cd tick
$ tree manifests
manifests
├── configmap-telegraf.yaml
├── deploy-chronograf.yaml
├── deploy-influxdb.yaml
├── deploy-kapacitor.yaml
├── deploy-telegraf.yaml
├── ingress.yaml
├── service-chronograf.yaml
├── service-influxdb.yaml
├── service-kapacitor.yaml
└── service-telegraf.yaml

3. Installing an Ingress Controller

An Ingress Controller is necessary to expose services outside the cluster via Ingress resources.

⚠️
If you already have an Ingress Controller in your cluster (even if you didn’t install it with Helm), you can skip to step 4.

If you don’t have an Ingress Controller, you will now install it as a Helm Chart.

Make sure you have installed the helm client (version 3.x.y) and run the following commands:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress ingress-nginx/ingress-nginx

Using the following command, verify that the Pod running the Ingress Controller has started correctly:

Note: this command won’t return control; you can stop it (Ctrl+C) once the Pod shows 1/1 in the READY column and Running in the STATUS column:

kubectl get pods --watch

4. Testing the Application

Creation

Position yourself in the tick directory and create the different resources present in the manifests directory:

kubectl apply -f manifests

Then verify that the creation went correctly by running the following command:

kubectl get deploy,po,svc,ingress

You should get a result similar to the one below:

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/chronograf   1/1     1            1           38s
deployment.apps/influxdb     1/1     1            1           38s
deployment.apps/kapacitor    1/1     1            1           38s
deployment.apps/telegraf     1/1     1            1           38s

NAME                              READY   STATUS    RESTARTS   AGE
pod/chronograf-868b4b665b-5xlw8   1/1     Running   0          38s
pod/influxdb-7f98cb47dc-d2tlg     1/1     Running   0          38s
pod/kapacitor-f65dd777c-xwgdx     1/1     Running   0          38s
pod/telegraf-54c7f75f6f-pk7xf     1/1     Running   0          38s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/chronograf   ClusterIP   10.245.205.66    <none>        8888/TCP   38s
service/influxdb     ClusterIP   10.245.158.139   <none>        8086/TCP   37s
service/kapacitor    ClusterIP   10.245.65.240    <none>        9092/TCP   37s
service/kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP    121m
service/telegraf     ClusterIP   10.245.151.236   <none>        8186/TCP   37s

NAME                      CLASS    HOSTS                                   ADDRESS   PORTS   AGE
ingress.extensions/tick   <none>   telegraf.tick.com,chronograf.tick.com             80      38s

Entry Point Configuration

  • 1st case

If your cluster is deployed with a cloud provider that supports LoadBalancer type services, a load-balancer component will be automatically created on the infrastructure and you will need to use its external IP address to send HTTP requests to the application.

The following command will allow you to get the IP address of this LoadBalancer:

⚠️
You will need to specify the ingress-nginx namespace if you haven’t installed the Ingress Controller in the default namespace.

$ kubectl get svc
NAME                                       TYPE         CLUSTER-IP    EXTERNAL-IP    PORT(S)                    AGE
ingress-ingress-nginx-controller           LoadBalancer 10.245.40.95  157.245.28.245 80:32461/TCP,443:31568/TCP 6m34s
ingress-ingress-nginx-controller-admission ClusterIP    10.245.67.139 <none>         443/TCP                    6m34s

In the example above, the external IP is 157.245.28.245; it comes from the EXTERNAL-IP column of the ingress-ingress-nginx-controller service (present in the ingress-nginx namespace).

For this exercise, you’ll need to update the /etc/hosts file on your local machine so that the subdomains telegraf.tick.com and chronograf.tick.com resolve to this IP address.

:fire: If you’re on Windows, it’s the C:\Windows\System32\drivers\etc\hosts file that you’ll need to open with administrator rights.

In the example above, I added the following entries to the /etc/hosts file:

157.245.28.245    telegraf.tick.com
157.245.28.245    chronograf.tick.com

  • 2nd case

If your infrastructure doesn’t allow the creation of a LoadBalancer, you’ll need to use the IP address of one of your cluster’s nodes. For example, if one of your nodes has the IP address 192.168.99.100, you should modify your /etc/hosts file as follows:

192.168.99.100   telegraf.tick.com
192.168.99.100   chronograf.tick.com

Accessing the Application

From a browser, you can access the chronograf interface from the URL http://chronograf.tick.com

Chronograf

:fire: If your infrastructure doesn’t allow the creation of a LoadBalancer, you’ll need to use the URL http://chronograf.tick.com:NODE_PORT, where NODE_PORT corresponds to the port opened on each node of your cluster to access your Ingress Controller.

Sending Test Data

Using the following instructions, you’ll generate fictitious data simulating a sinusoidal temperature distribution and send it to the TICK stack via the endpoint exposed by Telegraf.

  • Data Generation

You’ll launch a Pod based on the lucj/genx image with some additional parameters:

kubectl run data --restart=Never --image=lucj/genx:0.1 -- -type cos -duration 3d -min 10 -max 25 -step 1h

  • After a few seconds, make sure the previously launched Pod is in the Completed status:

kubectl get pod data

  • Sending Data

The following command retrieves the generated data and sends it to Telegraf:

  • If you’re using an intermediate machine to address the cluster, you’ll also need to modify its /etc/hosts file before running this command so that the domain name telegraf.tick.com can be resolved
  • If your infrastructure doesn’t allow the creation of a LoadBalancer, you’ll need to use the URL http://telegraf.tick.com:NODE_PORT (not http://telegraf.tick.com), where NODE_PORT corresponds to the port opened on each node of your cluster to access your Ingress Controller

kubectl logs data | while read -r line; do
  ts="$(echo "$line" | cut -d' ' -f1)000000000"
  value=$(echo "$line" | cut -d' ' -f2)
  curl -is -XPOST http://telegraf.tick.com/write --data-binary "temp value=${value} ${ts}"
done

Note: you should get a succession of 204 status codes, indicating that all the data was received correctly.
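
The loop above relies on a small transformation: each line produced by the generator is a `timestamp value` pair, and appending nine zeros converts the epoch-seconds timestamp into the nanosecond precision used by the InfluxDB line protocol. A cluster-free sketch of that transformation, with a hard-coded sample line (the values are illustrative):

```shell
# Sample generator line: "<epoch seconds> <value>" (illustrative values)
line="1600000000 17.5"
# Append nine zeros: epoch seconds -> nanoseconds
ts="$(echo "$line" | cut -d' ' -f1)000000000"
value=$(echo "$line" | cut -d' ' -f2)
# The line-protocol point POSTed to Telegraf's /write endpoint:
echo "temp value=${value} ${ts}"
# → temp value=17.5 1600000000000000000
```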

You can then visualize this data using the query select "value" from "test"."autogen"."temp" from the Explore menu of the Chronograf web interface.

Chronograf

Cleanup

Delete the application with the following command:

kubectl delete -f manifests

You will now package this application in a Helm chart.

5. Creating a Helm Chart

Still from the tick directory, use the following command to create a chart named tick_chart:

helm create tick_chart

By default, it contains mainly the following elements:

  • a Chart.yaml file that defines the project metadata
  • a template for creating a Deployment that manages a single Pod
  • a template for creating a Service to expose this Pod inside the cluster
  • a template for creating an Ingress resource to expose the service externally
  • a values.yaml file used to substitute placeholders in templates with dynamic values
  • a NOTES.txt file that provides information when creating the release and during updates

$ tree tick_chart
tick_chart
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

Copying Manifest Files

The first thing you’ll do is delete all the default files in the templates directory (the YAML files, the NOTES.txt file, and the tests directory), then copy in the files from the manifests directory (the files we handled previously). Also empty the values.yaml file (but don’t delete the file itself).

rm tick_chart/templates/*.yaml
rm -r tick_chart/templates/tests
rm tick_chart/templates/NOTES.txt
cp manifests/*.yaml tick_chart/templates
echo > tick_chart/values.yaml

The tick_chart directory will then have the following content:

$ tree tick_chart/
tick_chart/
├── Chart.yaml
├── charts
├── templates
│   ├── _helpers.tpl
│   ├── configmap-telegraf.yaml
│   ├── deploy-chronograf.yaml
│   ├── deploy-influxdb.yaml
│   ├── deploy-kapacitor.yaml
│   ├── deploy-telegraf.yaml
│   ├── ingress.yaml
│   ├── service-chronograf.yaml
│   ├── service-influxdb.yaml
│   ├── service-kapacitor.yaml
│   └── service-telegraf.yaml
└── values.yaml

Launching the Chart

Using the following command, launch the application now packaged in a Helm chart:

helm install tick ./tick_chart

You should get a result similar to the following:

NAME: tick
LAST DEPLOYED: Thu Sep 17 13:40:31 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

Then check the list of releases (Helm terminology) present:

$ helm ls -A
NAME   	NAMESPACE    	REVISION	UPDATED                              	STATUS  	CHART              	APP VERSION
ingress	ingress-nginx	1       	2020-09-17 11:53:13.172588 +0200 CEST	deployed	ingress-nginx-3.1.0	0.35.0
tick   	default      	1       	2020-09-17 13:40:31.674099 +0200 CEST	deployed	tick_chart-0.1.0   	1.16.0

Testing the Application

Send the previously generated data to the stack that is now deployed as a Helm chart.

  • If you’re using an intermediate machine to address the cluster, you’ll also need to modify its /etc/hosts file before running this command so that the domain name telegraf.tick.com can be resolved
  • If your infrastructure doesn’t allow the creation of a LoadBalancer, you’ll need to use the URL http://telegraf.tick.com:NODE_PORT (not http://telegraf.tick.com), where NODE_PORT corresponds to the port opened on each node of your cluster to access your Ingress Controller

kubectl logs data | while read -r line; do
  ts="$(echo "$line" | cut -d' ' -f1)000000000"
  value=$(echo "$line" | cut -d' ' -f2)
  curl -is -XPOST http://telegraf.tick.com/write --data-binary "temp value=${value} ${ts}"
done

Once again, visualize this data using the query select "value" from "test"."autogen"."temp" from the Explore menu of the Chronograf web interface.

Using Templating

The benefit of an application packaged in a Helm Chart is to facilitate its distribution and deployment, particularly by using the power of templates.

In this exercise, we’ll make the tags of the different images dynamic. To do this, start by modifying the tick_chart/values.yaml file so that it has the following content (we’ll use the alpine variant of each image):

telegraf:
  tag: 1.13-alpine
chronograf:
  tag: 1.7-alpine
kapacitor:
  tag: 1.5-alpine
influxdb:
  tag: 1.5-alpine
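
Since these values feed the templates, environment-specific overrides can also live in a separate file passed at upgrade time; a hedged sketch (the file name values-custom.yaml and the 1.8-alpine tag are illustrative, not part of the exercise):

```yaml
# values-custom.yaml — hypothetical override file, applied with:
#   helm upgrade tick ./tick_chart -f values-custom.yaml
# Values given with -f take precedence over the chart's values.yaml.
influxdb:
  tag: 1.8-alpine
```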

Then, for each Deployment file present in tick_chart/templates, replace the image tag with {{ .Values.COMPONENT.tag }}, where COMPONENT is influxdb, telegraf, chronograf, or kapacitor. For example, the InfluxDB Deployment file will be modified as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb
spec:
  selector:
    matchLabels:
      app: influxdb
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
      - image: influxdb:{{ .Values.influxdb.tag }}
        name: influxdb

Still from the tick directory, you can then update the release with the following command:

helm upgrade tick ./tick_chart --values tick_chart/values.yaml

You should get a result similar to the one below:

Release "tick" has been upgraded. Happy Helming!
NAME: tick
LAST DEPLOYED: Thu Sep 17 13:52:18 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

Then verify that the Pods are indeed based on the new image versions.

For example, the following commands allow you to get the image being used by the Pod running Telegraf:

  • Getting the list of Pods

$ kubectl get pod
NAME                          READY   STATUS      RESTARTS   AGE
chronograf-6cb9c64d56-vw97l   1/1     Running     0          13m
data                          0/1     Completed   0          120m
influxdb-64765784c9-gzr49     1/1     Running     0          13m
kapacitor-7cd66b69f-j595b     1/1     Running     0          13m
telegraf-6d84769594-z2p7h     1/1     Running     0          12m
...

  • Getting the image used by the Telegraf Pod

$ kubectl get pod telegraf-6d84769594-z2p7h -o jsonpath='{ .spec.containers[0].image }'
telegraf:1.13-alpine

Summary

Here we’ve seen a simple example of templating; the important thing is to understand how it works. When you package your own applications in Helm charts, you’ll generally start by using templating for simple fields and then add increasingly complex templating elements (conditional structures, loops, etc.).