Exercise
In this exercise, you will set up NetworkPolicy resources to isolate Pods at the network level within a namespace.
- Create the demo namespace:
kubectl create ns demo
- In the demo namespace, you will deploy an application consisting of 3 Pods, each exposed by a service.
- Frontend
Copy the following specification into the front.yaml file. This specification defines a Pod based on an nginx image, and a NodePort service that exposes this Pod on port 30000.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: demo
    tiers: front
  name: front
spec:
  containers:
  - image: lucj/frontend:0.1
    name: front
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo
    tiers: front
  name: front
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    app: demo
    tiers: front
Create the Pod and corresponding Service in the demo namespace:
kubectl -n demo apply -f front.yaml
- Backend
Copy the following specification into the back.yaml file. This specification defines a Pod running a simple Python application that returns a random country (on a GET /random request), and a ClusterIP service to expose this Pod.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: demo
    tiers: back
  name: back
spec:
  containers:
  - image: lucj/backend:0.1
    name: back
    ports:
    - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo
    tiers: back
  name: back
spec:
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: demo
    tiers: back
Create the Pod and corresponding Service in the demo namespace:
kubectl -n demo apply -f back.yaml
- Database
Copy the following specification into the db.yaml file. This specification defines a Pod based on redis and a ClusterIP service to expose this Pod.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: demo
    tiers: db
  name: db
spec:
  containers:
  - image: redis:6.2.6
    name: db
    ports:
    - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo
    tiers: db
  name: db
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: demo
    tiers: db
Create the Pod and corresponding Service in the demo namespace:
kubectl -n demo apply -f db.yaml
- Make sure all Pods and Services were created successfully:
kubectl -n demo get po,svc
You should get a result similar to this:
NAME        READY   STATUS    RESTARTS   AGE
pod/front   1/1     Running   0          2m45s
pod/back    1/1     Running   0          2m45s
pod/db      1/1     Running   0          2m45s

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/back    ClusterIP   10.43.168.143   <none>        80/TCP         2m45s
service/db      ClusterIP   10.43.99.62     <none>        6379/TCP       2m45s
service/front   NodePort    10.43.181.25    <none>        80:30000/TCP   2m45s
- By default, all Pods in the cluster can communicate with each other, even across namespaces. You will verify this in the following steps.
- Communication between the front Pod and the back Pod
Check, using the following command, that the front Pod can reach the web server running in the back Pod:
kubectl -n demo exec front -- curl http://back/random
You should get a response showing that the server is accessible, returning a random country:
{"alpha_2":"ZA","alpha_3":"ZAF","name":"South Africa","numeric":"710"}
- Communication between the back Pod and the db Pod
Check, using the following command, that the back Pod can reach the redis db running in the db Pod (the back Pod contains a redis client to demonstrate this):
kubectl -n demo exec back -- redis-cli -h db ping
You should get a response showing that the server is accessible:
PONG
- Communication between the front Pod and the db Pod
Check, using the following command, that the front Pod can reach the redis db running in the db Pod (the front Pod contains a redis client to demonstrate this):
kubectl -n demo exec front -- redis-cli -h db ping
As before, you should get a response showing that the redis db is accessible:
PONG
- Communication between Pods in different namespaces
Now launch a Pod in the default namespace and verify that it can communicate with the back Pod in the demo namespace:
kubectl run test --rm --restart=Never -ti --image=busybox -- wget -T 5 -q -O - http://back.demo/random
Note: if you get the error “wget: bad address ‘back.demo’”, run the same command using back.demo.svc.cluster.local instead of back.demo to specify the service’s FQDN.
You should get a response showing that the server is accessible, returning a random country (the test Pod is then deleted automatically thanks to the --rm flag):
{"alpha_2":"LA","alpha_3":"LAO","name":"Lao People's Democratic Republic","numeric":"418"}
pod "test" deleted
Note: you could also verify that a Pod launched in the default namespace can communicate with the front and db Pods in the demo namespace.
- Communication with the outside
Using the following command, verify that the back Pod can access Google’s DNS (IP: 8.8.8.8):
kubectl -n demo exec -ti back -- ping -c3 8.8.8.8
You should get a result similar to this:
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=118 time=18.508 ms
64 bytes from 8.8.8.8: seq=1 ttl=118 time=19.586 ms
64 bytes from 8.8.8.8: seq=2 ttl=118 time=18.897 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 18.508/18.997/19.586 ms
- First, you’ll set up a NetworkPolicy that prevents all incoming and outgoing communications for selected Pods.
Create the default-deny.yaml file with the following specification:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny
spec:
  podSelector: {}
  policyTypes:
  - Egress
  - Ingress
Then create the resource in the demo namespace:
kubectl -n demo apply -f default-deny.yaml
Using the different commands from the previous section (Communication between Pods), verify:
- that Pods in the demo namespace can no longer communicate with each other
- that a Pod launched in the default namespace cannot communicate with a Pod in the demo namespace
- that a Pod in the demo namespace cannot communicate with the outside
You should get an error message for each of these commands.
- NetworkPolicy - DNS authorization
It’s often necessary to allow Pods to access DNS servers; without this, Service names such as back or db can no longer be resolved. To do this, modify the default-deny.yaml file as follows:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny
spec:
  podSelector: {}
  policyTypes:
  - Egress
  - Ingress
  egress:
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
Because podSelector is empty, this policy selects all Pods in the namespace; the egress rule lets them reach port 53 (the DNS port) over both TCP and UDP, towards any destination.
Then update the NetworkPolicy:
kubectl -n demo apply -f default-deny.yaml
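If you want to tighten this rule, the port 53 egress can be restricted to the cluster's DNS Pods instead of being open to any destination. The variant below is a sketch, not part of the exercise: it assumes the DNS Pods run in the kube-system namespace and carry the usual k8s-app: kube-dns label (true for kube-dns and CoreDNS in most distributions), and it relies on the automatic kubernetes.io/metadata.name namespace label; adjust the selectors to match your cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny
spec:
  podSelector: {}
  policyTypes:
  - Egress
  - Ingress
  egress:
  - to:
    # Assumption: DNS Pods live in kube-system with label k8s-app: kube-dns.
    # Combining namespaceSelector and podSelector in one entry means
    # "Pods matching this podSelector in namespaces matching this namespaceSelector".
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
```

The exercise works fine with the broader rule above; this variant only narrows the destination of DNS traffic.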
- Authorizing front -> back communications
In the front-np.yaml file, define a new NetworkPolicy with the following specification. This selects the front Pod (Pod with label tiers: front) and allows outgoing traffic to the back Pod (Pod with label tiers: back).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front
spec:
  podSelector:
    matchLabels:
      tiers: front
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tiers: back
Then create the resource in the demo namespace:
kubectl -n demo apply -f front-np.yaml
Check if it’s possible to communicate from the front Pod to the back Pod:
kubectl -n demo exec front -- curl --connect-timeout 5 http://back/random
You should get an error (timeout): only the outgoing traffic from the front Pod has been authorized, while incoming traffic to the back Pod is still blocked by the default-deny policy. You also need to create a second NetworkPolicy allowing incoming traffic to the back Pod.
- Authorizing back <- front communications
In the back-np.yaml file, define a new NetworkPolicy with the following specification. This selects the back Pod and allows incoming traffic from the front Pod:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: back
spec:
  podSelector:
    matchLabels:
      tiers: back
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tiers: front
Then create the resource in the demo namespace:
kubectl -n demo apply -f back-np.yaml
Check if it’s now possible to communicate from the front Pod to the back Pod:
kubectl -n demo exec front -- curl http://back/random
This time you should get a result similar to this:
{"alpha_2":"GE","alpha_3":"GEO","name":"Georgia","numeric":"268"}
- Authorizing back <-> db communications
Using the same approach as before, use NetworkPolicies to enable communication between the back Pod and the db Pod.
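If you want to check your work, one possible solution is sketched below. It follows the same pattern as the front/back pair: an egress authorization on the back Pod towards the db Pod, and an ingress authorization on the db Pod from the back Pod (the policy names back-egress and db are assumptions, not imposed by the exercise; return traffic is handled by connection tracking, so no extra policy is needed for the replies).

```yaml
# Sketch of a solution: allow back -> db traffic.
# Egress from the back Pod (label tiers: back) towards the db Pod...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: back-egress
spec:
  podSelector:
    matchLabels:
      tiers: back
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tiers: db
---
# ...and ingress to the db Pod (label tiers: db) from the back Pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db
spec:
  podSelector:
    matchLabels:
      tiers: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tiers: back
```

After applying both policies in the demo namespace, the earlier check (kubectl -n demo exec back -- redis-cli -h db ping) should return PONG again.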
- Then delete the demo namespace (this will delete all resources it contains):
kubectl delete ns demo
Cilium Editor
NetworkPolicies are widely used to secure a cluster. The Cilium NetworkPolicy editor is an excellent resource for understanding in detail how NetworkPolicies are created. Feel free to experiment with this tool and create your own NetworkPolicies.