Over the previous months, I have built up a lot of infrastructure in Kubernetes. It’s the kind of stuff that just makes things work, as long as I configure it correctly. So I thought I should go ahead and deploy a simple service, Supermario, which lives at https://github.com/GuopingJia/k8s-games/tree/main/super-mario
I like to run all my things in their own namespace, so I start off by creating that:
kind: Namespace
apiVersion: v1
metadata:
  name: supermario
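Each manifest below can be applied with kubectl in the usual way; the file name here is just an example:
kubectl apply -f namespace.yaml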
Then I deploy Supermario:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: supermario
  name: mario-deployment
spec:
  replicas: 2 # You can adjust the number of replicas as needed
  selector:
    matchLabels:
      app: mario
  template:
    metadata:
      labels:
        app: mario
    spec:
      containers:
      - name: mario-container
        image: guopingjia/mario:cfe-demo
        ports:
        - containerPort: 80
This runs two replicas. I have added a label, app=mario. This label is very useful: you’ll use it in the service, in the security policy, and possibly other places.
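To sanity-check that the pods are up and carry that label, a quick listing with the same selector does the job:
kubectl -n supermario get pods -l app=mario -o wide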
So let’s create the service:
apiVersion: v1
kind: Service
metadata:
  namespace: supermario
  name: mario-service
spec:
  type: ClusterIP
  selector:
    app: mario
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
This service acts as a sort of load balancer within the namespace: it gets an IP that forwards port 80 to port 80 on any pod with the label app=mario. For now it’s still just a ClusterIP, but you’ll be able to talk to it from other parts of the cluster, if the security policy allows it.
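A quick way to test the ClusterIP from inside the namespace is a throwaway pod; curlimages/curl is just one convenient image that ships curl:
kubectl -n supermario run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- curl -sI http://mario-service/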
I run pretty strict security policies: nothing is allowed unless I explicitly allow it. So I create a set of network policies:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: supermario
spec:
  ingress:
  - action: Deny
  egress:
  - action: Deny
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-all-in-namespace
  namespace: supermario # Change this to your namespace
spec:
  ingress:
  - action: Allow
    source:
      namespaceSelector: kubernetes.io/metadata.name == "supermario"
  egress:
  - action: Allow
    destination:
      namespaceSelector: kubernetes.io/metadata.name == "supermario"
  selector: all() # Applies this policy to all pods in the namespace
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-egress
  namespace: supermario
spec:
  egress:
  - action: Allow
---
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-supermario-service
  namespace: supermario
spec:
  ingress:
  - action: Allow
    protocol: TCP
    source:
      namespaceSelector: role == "ingress-external"
    destination:
      selector: app == "mario"
      ports: [80]
  selector: app == "mario"
This creates four rules:
- Deny everything
- Allow everything inside the same namespace
- Allow everything out of the namespace
- Allow traffic from namespaces with the label role = ingress-external.
The last one matches the label I have set on my traefik-external namespace, which is the namespace we want to expose the service through.
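If the Calico API server is installed, the resulting policies can be listed with plain kubectl; otherwise calicoctl does the same job:
kubectl -n supermario get networkpolicies.projectcalico.org
calicoctl get networkpolicy -n supermario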
So, we tell traefik about our service by creating an ingressroute:
hassio% less ingressroutes.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: supermario-ingressroute
  namespace: traefik-external
  annotations:
    kubernetes.io/ingress.class: "traefik-external"
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`supermario.engen.priv.no`)
    kind: Rule
    services:
    - name: mario-service
      namespace: supermario
      port: 80
  tls:
    certResolver: letsencrypt
This tells traefik-external to create a virtual host for supermario.engen.priv.no, expose it on port 443, and forward traffic to port 80 on the service. Under the hood, this name/namespace pair is translated to the hostname mario-service.supermario.svc.cluster.local. So all traffic for supermario.engen.priv.no will be forwarded to http://mario-service.supermario.svc.cluster.local:80/, which is something that the Kubernetes internal DNS, CoreDNS, is able to resolve.
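To confirm that the in-cluster name actually resolves, a disposable pod with nslookup works; busybox is just one image that has it:
kubectl -n supermario run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup mario-service.supermario.svc.cluster.local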
The IngressRoute also tells traefik to request a certificate with the letsencrypt resolver. That resolver is set up with this bit of magic I added earlier to traefik-external:
certificatesResolvers:
  letsencrypt:
    acme:
      email: vegard@engen.priv.no
      storage: /data/acme.json
      dnsChallenge:
        provider: linode
        delayBeforeCheck: 10
        resolvers:
        - 8.8.8.8:53
        - 1.1.1.1:53
The linode provider is built into traefik; I just need to feed it a token in an environment variable:
env:
  - name: LINODE_TOKEN
    valueFrom:
      secretKeyRef:
        name: linode-dns-token
        key: LINODE_TOKEN
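The secret itself is created out of band; a minimal sketch, assuming traefik runs in the traefik-external namespace and the token sits in a local environment variable:
kubectl -n traefik-external create secret generic linode-dns-token --from-literal=LINODE_TOKEN="$LINODE_TOKEN"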
I already have a load balancer for IPv4, as I only have one external public IPv4 address, and it can only forward inbound traffic to a single internal address. But I need to add a DNS record for it. I can have ExternalDNS do that, because I have this service object:
apiVersion: v1
kind: Service
metadata:
  name: dns-service
  namespace: infrastructure
  annotations:
    external-dns.alpha.kubernetes.io/hostname: homeassistant.engen.priv.no,vegard.blog.engen.priv.no,nextcloud.engen.priv.no,plex.engen.priv.no,supermario.engen.priv.no
    external-dns/external: "true"
    external-dns.alpha.kubernetes.io/ttl: "300"
    ipchanger.alpha.kubernetes.io/patch: "true"
spec:
  type: ExternalName
  externalName: 51.174.8.204
As you can see, I just need to add the hostname here.
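Once ExternalDNS has picked up the change, the new record can be checked from any machine that has dig:
dig +short supermario.engen.priv.no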
I will give it its own IPv6 load balancer, though:
apiVersion: v1
kind: Service
metadata:
  name: traefik-supermario
  namespace: traefik-external
  annotations:
    metallb.universe.tf/address-pool: public-ipv6-pool
    external-dns.alpha.kubernetes.io/hostname: supermario.engen.priv.no
    external-dns/external: "true"
    external-dns.alpha.kubernetes.io/ttl: "30"
    unifi.engen.priv.no/firewall-group: externalweb
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies:
  - IPv6
  ports:
  - name: web
    port: 80
  - name: websecure
    port: 443
  selector:
    app: traefik-external
The annotations tell:
- metallb to assign it an address from public-ipv6-pool (a quick check of that follows below)
- external-dns to create a DNS entry for it
- my unifi operator to add it to the firewall group externalweb
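Whether metallb actually handed out an address from the pool is easy to verify on the service itself:
kubectl -n traefik-external get svc traefik-supermario -o wide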
The externalweb firewall group is defined like this:
apiVersion: unifi.engen.priv.no/v1beta1
kind: FirewallGroup
metadata:
  labels:
    app.kubernetes.io/name: unifi-network-operator
    app.kubernetes.io/managed-by: kustomize
  name: externalweb
spec:
  name: ExternalWeb
  matchServicesInAllNamespaces: true
The name from metadata is what is used: the unifi-operator will see that the service has the right annotation, do its magic, and add the load balancer IP address to the firewall group.
The firewall opening is also already defined:
apiVersion: unifi.engen.priv.no/v1beta1
kind: FirewallPolicy
metadata:
  labels:
    app.kubernetes.io/name: unifi-network-operator
    app.kubernetes.io/managed-by: kustomize
  name: external-web-openings
spec:
  name: "External-web"
  source:
    from_zones:
    - name: external
    - name: dmz
    - name: klient
    - name: server
    - name: vpn
    - name: entertainment
    from_networks:
    - name: server
  destination:
    firewall_groups:
    - name: "externalweb"
      namespace: default
And that’s really it. That’s what it takes to set up Supermario on Kubernetes in 10 minutes.