Exercise
In this exercise, we will explore the use of a ConfigMap object to provide a configuration file to a simple reverse proxy based on nginx. We will configure this proxy so that requests received on the /whoami endpoint are forwarded to a Service named whoami, also running in the cluster. This service exposes the / endpoint and simply returns the name of the container that processed the request.
- The following specification defines a Pod containing a single container based on the lucj/whoami image, and a ClusterIP type service whose role is to expose this Pod inside the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: poddy
  labels:
    app: whoami
spec:
  containers:
  - name: whoami
    image: lucj/whoami
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  selector:
    app: whoami
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
Copy this specification into a whoami.yaml file and create the Pod and Service with the following command:
kubectl apply -f whoami.yaml
Then verify that these two objects were created correctly:
$ kubectl get po,svc
NAME        READY   STATUS    RESTARTS   AGE
pod/poddy   1/1     Running   0          26s

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/whoami   ClusterIP   10.11.243.238   <none>        80/TCP    18s
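Optionally, we can also check that the Service's selector matched the Pod by listing its endpoints; a single entry should appear, holding the poddy Pod's IP (which will differ in your cluster):

```shell
# List the endpoints behind the whoami Service; a non-empty list
# confirms the selector app=whoami matched the poddy Pod.
kubectl get endpoints whoami
```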
- We will use the configuration below for the nginx server that we will set up later.
user nginx;
worker_processes 4;
pid /run/nginx.pid;
events {
  worker_connections 768;
}
http {
  server {
    listen *:80;
    location = /whoami {
      proxy_pass http://whoami/;
    }
  }
}
After copying this configuration into an nginx.conf file, run the following command to create the proxy-config ConfigMap:
kubectl create configmap proxy-config --from-file=./nginx.conf
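To double-check what was stored, we can inspect the ConfigMap; the nginx.conf key should contain the configuration above:

```shell
# Show the ConfigMap in YAML form; the file name becomes the key
# and the file content becomes the value.
kubectl get configmap proxy-config -o yaml
```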
- The following specification defines a Pod containing a single container based on the nginx image, and a NodePort type service whose role is to expose this Pod outside the cluster. This is the service to which we will send an HTTP request later.
As we can see, the specification defines a volume that is used to mount the proxy-config ConfigMap into the proxy container, thereby providing its configuration. Note that mounting the ConfigMap on /etc/nginx/ replaces the entire content of that directory with the ConfigMap's keys, so only our nginx.conf file will be present there.
apiVersion: v1
kind: Pod
metadata:
  name: proxy
  labels:
    app: proxy
spec:
  containers:
  - name: proxy
    image: nginx:1.20-alpine
    volumeMounts:
    - name: config
      mountPath: "/etc/nginx/"
  volumes:
  - name: config
    configMap:
      name: proxy-config
---
apiVersion: v1
kind: Service
metadata:
  name: proxy
  labels:
    app: proxy
spec:
  selector:
    app: proxy
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31600
Copy this specification into a proxy.yaml file and create the Pod and Service with the following command:
kubectl apply -f proxy.yaml
Then verify that these two objects were created correctly:
$ kubectl get po,svc
NAME        READY   STATUS    RESTARTS   AGE
pod/proxy   1/1     Running   0          17s
pod/poddy   1/1     Running   0          2m

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/proxy    NodePort    10.11.255.14    <none>        80:31600/TCP   17s
service/whoami   ClusterIP   10.11.243.238   <none>        80/TCP         18m
- Testing the application
From the IP address of one of the cluster machines, we can then send a GET request to the /whoami endpoint and see that it is processed by the whoami application: it returns poddy, the name of the Pod.
Use the kubectl get nodes -o wide command to get the IP addresses of the cluster machines, then replace HOST_IP with one of them:
curl HOST_IP:31600/whoami
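If the cluster nodes are not directly reachable (for example on a managed cloud cluster), a port-forward to the Service is an alternative way to run the same test; the local port 8080 below is an arbitrary choice:

```shell
# In a first terminal: forward local port 8080 to port 80 of the proxy Service
kubectl port-forward svc/proxy 8080:80

# In a second terminal: send the request through the tunnel
curl localhost:8080/whoami
```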
- Delete the various resources created:
kubectl delete -f proxy.yaml
kubectl delete -f whoami.yaml
kubectl delete cm proxy-config
We have thus used a ConfigMap object, mounted into the nginx container of the Pod acting as a reverse proxy. This approach decouples the configuration from the application.
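As a sketch of what this decoupling buys us, a configuration change can be rolled out without rebuilding or redeploying the image. One caveat: nginx does not watch its configuration file, so a reload must be triggered by hand after the kubelet has synced the updated ConfigMap into the volume (which can take up to a minute):

```shell
# Regenerate the ConfigMap from the modified nginx.conf and apply it in place
kubectl create configmap proxy-config --from-file=./nginx.conf \
  --dry-run=client -o yaml | kubectl apply -f -

# Once the kubelet has synced the updated file into the volume,
# ask nginx to reload its configuration without restarting the Pod
kubectl exec proxy -- nginx -s reload
```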