Being somewhat of a minimalist, with only one server at home, but still trying to build a good, secure and stable infrastructure, it bothered me to forward network traffic directly to my server. Granted, most of the exposed services were running in Docker, but it still meant forwarding traffic directly into my «red zone».
A Christmas gift to myself was a new Unifi gateway (complete with some other stuff, but you can do this on a simple Unifi Cloud Gateway Ultra, maybe even an Express). I have a Unifi Cloud Gateway Max, a Unifi Lite 8-port switch and an access point. Some of what I explain will refer to that, but it can be configured on any VLAN-capable equipment.
I started off with configuring a DMZ VLAN on VLAN ID 9, with fde2:c20f:2d35:0007::/64 as my IPv6 network. It's good practice, security-wise, to make sure a VLAN isn't exposed on any ports where it isn't going to be used, so I configured it tagged on the VLAN trunk to the switch and made sure it isn't exposed on any of the other ports on my Cloud Gateway. Then I configured it tagged on the port to my server, ready to be picked up by the network stack on the server. This is the interface where traffic from the Internet towards the DMZ will come in.
```shell
$ sudo ip link add link eno1 name dmz01 type vlan id 9
```
There are various ways to make this permanent, but on my Ubuntu server, it's done with netplan:
```
# cat /etc/netplan/01-vlan-config.yaml
network:
  vlans:
    dmz01:
      id: 9
      link: eno1
# netplan apply
```
I don't want any IP addresses on it on the host itself; it will be forwarded as-is into Docker.
Since I need a way for the DMZ to reach the server, I need another interface. This will be a macvlan bridge interface, and it needs IP address configuration on the host side. There will also be IP address setup on the Docker side of it. This was probably the hardest part to figure out. To this interface I will attach a Docker container running a firewall, to control traffic from the DMZ into the red zone, just as we would in a physical setup.
My red zone runs on 192.168.1.0/24. The gateway is at 192.168.1.1, and my server lives at 192.168.1.153. I decided to make the exit from my Docker-defined firewall come out at 192.168.1.9. The reason for this is mainly that I wanted the server-side subnet to encompass neither my gateway nor any of the other hosts in my red zone (which was quite easy, as I have only one).
To make this work network-wise, I need to configure an IP address on the host side with a network mask that covers both the address I define in Docker and the address I configure on the outside of it. So I settled on 192.168.1.13 with a netmask of 255.255.255.248 (a /29 covering 192.168.1.8 to 192.168.1.15). I will ignore IPv6 in this part and use only IPv4 in the backend setup, as that is one less place to change when my provider decides to lose my IPv6 prefix and hand me a new one. We can still use IPv6 towards the world, and I will include it on the services in the DMZ.
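If you want to sanity-check the /29 arithmetic, python3's ipaddress module does the math (this is just an illustration): 192.168.1.9 and 192.168.1.13 land in the same 192.168.1.8/29 block, while the gateway at 192.168.1.1 falls outside it.

```shell
# Check that the /29 contains both firewall-side addresses but not the gateway.
python3 - <<'EOF'
import ipaddress
net = ipaddress.ip_network("192.168.1.13/29", strict=False)
print(net)                                          # 192.168.1.8/29
print(ipaddress.ip_address("192.168.1.9") in net)   # True
print(ipaddress.ip_address("192.168.1.1") in net)   # False
EOF
```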
Disclaimer: There was some trial and error before I got this working, and I still haven't figured out all of it!
```
# ip link add dmz02 link eno1 type macvlan mode bridge
# ifconfig dmz02 192.168.1.13 netmask 255.255.255.248
```
On Ubuntu, there's no way I have found to configure this through netplan, so I needed to install ifupdown and set it up in /etc/network/interfaces. Luckily, I can have both netplan and ifupdown installed, as long as I am careful about what I do where.
```
# apt install ifupdown
# cat /etc/network/interfaces
auto dmz02
iface dmz02 inet static
    address 192.168.1.13
    netmask 255.255.255.248
    pre-up ip link add dmz02 link eno1 type macvlan mode bridge
# ifup dmz02
```
Having gotten this far, we are finished with the preparations outside Docker and can move on to the Docker part of the configuration.
Since Docker doesn't support changing IPv6 prefixes, I have decided to hard-code my IP addresses in the configuration for now. Next up will be scripting the change across all of my configuration for when my ISP decides to change the prefix, but for now I'm going to assume it's static.
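A sketch of what that prefix-change script could look like: a simple sed across the compose file does most of the job. The new prefix and the file path here are made-up examples; the script writes a demo input so it is self-contained, but in practice you would point it at your real docker-compose.yml.

```shell
#!/bin/sh
# Swap the old ULA prefix for a new one everywhere in the compose file.
OLD_PREFIX="fde2:c20f:2d35"
NEW_PREFIX="fd12:3456:789a"              # example replacement prefix
COMPOSE_FILE="/tmp/docker-compose.yml"   # point this at your real file

# Demo input so this sketch runs standalone:
printf 'ipv6_address: %s:0007::2\n' "$OLD_PREFIX" > "$COMPOSE_FILE"

cp "$COMPOSE_FILE" "$COMPOSE_FILE.bak"   # keep a backup before rewriting
sed -i "s/$OLD_PREFIX/$NEW_PREFIX/g" "$COMPOSE_FILE"
grep "$NEW_PREFIX" "$COMPOSE_FILE"
```

After running it, a `docker compose up -d` would recreate the containers with the new addresses.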
I am also specifying fde2:c20f:2d35::/56 as my IPv6 prefix. This is within the ULA address space and is non-routable, both to protect my privacy and to avoid accidentally using something that someone else might be assigned.
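If you want a ULA prefix of your own, RFC 4193 says the 40 bits after the fd byte should be generated randomly. A quick way to roll one (python3, just an illustration):

```shell
# Generate a random RFC 4193 ULA /48 prefix: "fd" followed by 40 random bits.
python3 - <<'EOF'
import secrets
bits = secrets.randbits(40)
print(f"fd{bits >> 32:02x}:{(bits >> 16) & 0xffff:04x}:{bits & 0xffff:04x}::/48")
EOF
```

You would then carve your /64 subnets (like my :0007 DMZ) out of that /48.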
As mentioned earlier, I have chosen 192.168.25.0/24 as my DMZ IPv4 range and fde2:c20f:2d35:0007::/64 as my DMZ IPv6 range. In IPv6, the prefix length of a local subnet will almost always be /64, so it's good practice to stick to this even when you do it all manually, as I do here.
I will define three networks:
- dmz for the DMZ vlan
- hostnet for the host-facing part of my firewall
- dmz_internal for things that might need to communicate internally in Docker, but not necessarily be exposed in the DMZ itself.
```yaml
networks:
  dmz:
    driver: macvlan
    driver_opts:
      parent: dmz01
    enable_ipv6: true
    ipam:
      config:
        - subnet: "192.168.25.0/24"
          ip_range: "192.168.25.0/24"
          gateway: "192.168.25.1"
        - subnet: "fde2:c20f:2d35:0007::/64"
          ip_range: "fde2:c20f:2d35:0007::/64"
          gateway: "fde2:c20f:2d35:0007::1"
  hostnet:
    driver: macvlan
    driver_opts:
      parent: dmz02
    ipam:
      config:
        - subnet: "192.168.1.0/24"
          ip_range: "192.168.1.0/24"
          gateway: "192.168.1.1"
  dmz_internal:
```
Note that the last one is a standard docker bridge network with no special setup.
In my DMZ, I will describe the four Docker containers I run:
- nginx is an nginx-proxy-manager instance I use in front of my web sites, among other things this blog.
- db is a mysql server for use by the nginx-proxy-manager
- bastion is an SSH bastion I use to be able to ssh into my network from the Internet
- firewall is my firewall between docker and the server I run it on.
First, nginx:
```yaml
nginx:
  image: 'jc21/nginx-proxy-manager:latest'
  restart: unless-stopped
  environment:
    DB_MYSQL_HOST: "db"
    DB_MYSQL_PORT: <port>
    DB_MYSQL_USER: "<user>"
    DB_MYSQL_PASSWORD: "<password>"
    DB_MYSQL_NAME: "<dbname>"
    # Uncomment this if IPv6 is not enabled on your host
    # DISABLE_IPV6: 'true'
  networks:
    dmz:
      ipv4_address: 192.168.25.3
      ipv6_address: fde2:c20f:2d35:0007::2
    dmz_internal:
  volumes:
    - nginx_data:/data
    - nginx_letsencrypt:/etc/letsencrypt
  depends_on:
    - db
```
So, I run my web server on IPv4 address 192.168.25.3 and IPv6 address fde2:c20f:2d35:0007::2, and need to open ports 80 and 443 at the gateway (with NAT for IPv4). Also note that this runs directly on the DMZ network, so I don't need to forward any ports in Docker. Security-wise, this might be improved. There's an extra admin-interface port for nginx-proxy-manager (81 by default); make sure this is only reachable from the internal network, possibly limited further by firewall rules.
Next, my db is a standard MySQL-compatible server (MariaDB). I run it on dmz_internal to not expose it into the DMZ itself:
```yaml
db:
  image: 'mariadb:latest'
  restart: unless-stopped
  environment:
    MYSQL_ROOT_PASSWORD: '<password>'
    MYSQL_DATABASE: '<database>'
    MYSQL_USER: '<user>'
    MYSQL_PASSWORD: '<password>'
  volumes:
    - nginx_mysql:/var/lib/mysql
  networks:
    dmz_internal:
      aliases:
        - nginxdb
```
I'll not explain this too much; more documentation can be found at https://github.com/NginxProxyManager/nginx-proxy-manager
For my SSH bastion, I have been using binlab/bastion:
```yaml
bastion:
  image: binlab/bastion
  restart: unless-stopped
  expose:
    - 22/tcp
  environment:
    PUBKEY_AUTHENTICATION: "true"
    GATEWAY_PORTS: "false"
    PERMIT_TUNNEL: "false"
    X11_FORWARDING: "false"
    TCP_FORWARDING: "true"
    AGENT_FORWARDING: "true"
  volumes:
    - bastion_home:/var/lib/bastion:ro
    - bastion:/usr/etc/ssh:rw
  extra_hosts:
    - "server:192.168.25.2"
  networks:
    dmz:
      ipv4_address: 192.168.25.251
      ipv6_address: fde2:c20f:2d35:0007::251:1
```
Note: binlab/bastion doesn't support IPv6 and is pretty dated, so I'll likely replace it with something more modern, but I have specified an IPv6 address nonetheless. Remember to open port 22 (with NAT for IPv4), possibly with some limitations on where you expose it. I have chosen to allow only public-key authentication here, but won't go into the rest of the configuration.
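With TCP forwarding enabled on the bastion, clients can use it as a jump host. A sketch of a ~/.ssh/config entry (the public hostname is a placeholder for whatever your gateway forwards port 22 to; 192.168.25.2 is the firewall's DMZ address, which the initfw.sh script DNATs to the real server):

```
Host home-server
    HostName 192.168.25.2
    ProxyJump bastion@bastion.example.com
```

Then `ssh home-server` connects through the bastion, with the hop from the DMZ to the red zone passing the firewall container's rules.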
Last, my firewall. I was originally thinking ufw, so the image actually has ufw, but anything that has iptables in it will do. You could even build it yourself from Alpine or something similarly minimal; the less there is on a firewall, the better!
```yaml
firewall:
  image: kuramoto/ufw
  restart: always
  command: /bin/sh
  tty: true
  networks:
    dmz:
      ipv4_address: 192.168.25.2
      ipv6_address: fde2:c20f:2d35:0007::10:1
    hostnet:
      ipv4_address: 192.168.1.9
  volumes:
    - firewall_etc:/usr/local/etc/firewall
  cap_add:
    - NET_ADMIN
    - NET_RAW
  entrypoint: [ "bash", "-c", "sleep 10 && /usr/local/etc/firewall/initfw.sh && sh" ]
```
I need the cap_add entries to be able to modify firewall rules, something not normally allowed inside Docker. You'll also find pre-defined Docker firewall rules when you look inside the container, but that's fine: don't touch them!
In the volume firewall_etc, I put my initfw.sh script:
```shell
#!/bin/sh
# Destination NAT: 192.168.25.2 -> 192.168.1.153
# Source NAT: 192.168.25.0/24 -> 192.168.1.9
# This is what lets things in the DMZ reach the server through the frontend.
iptables -t nat -I PREROUTING -d 192.168.25.2 -j DNAT --to-destination 192.168.1.153
iptables -t nat -I POSTROUTING -s 192.168.25.0/24 -j SNAT --to-source 192.168.1.9

# Regular openings, based on what nginx needs.
# For this example, I only open for a web server running on 8080 and 8443 on the host.
iptables -I FORWARD -s 192.168.25.3 -d 192.168.1.153 -p tcp -m tcp --dport 8080 -j ACCEPT
iptables -I FORWARD -s 192.168.25.3 -d 192.168.1.153 -p tcp -m tcp --dport 8443 -j ACCEPT
iptables -I FORWARD -s 192.168.25.251 -d 192.168.1.153 -p tcp -m tcp --dport 22 -j ACCEPT
```
And the last part of the Docker config is of course the volumes we have used:
```yaml
volumes:
  bastion:
  bastion_home:
  nginx_mysql:
  nginx_data:
  nginx_letsencrypt:
  firewall_etc:
```
If you have followed all this to the end, you should now have a fully functional DMZ, all in Docker!
This is of course not as secure as dedicated boxes, and almost no one recommends using Docker for security mechanisms, but it's nevertheless probably better than exposing the OS side of your host directly.
Further hardening is probably possible, some ideas:
- Make sure the hosts in the DMZ only expose intended ports into the DMZ
- Isolate the hosts in the DMZ between each other – or possibly create more dedicated DMZes, for example one for the nginx container and one for SSH bastion.
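The last idea could look something like this in the compose file. This is only a sketch; the network names, the second subnet and the dmz03 parent interface are made up, and each extra DMZ needs its own VLAN configured on the gateway and tagged through to the server, just like dmz01 was.

```yaml
networks:
  dmz_web:                # dedicated DMZ for the nginx container
    driver: macvlan
    driver_opts:
      parent: dmz01       # the existing VLAN 9 interface
    ipam:
      config:
        - subnet: "192.168.25.0/24"
  dmz_ssh:                # separate DMZ for the SSH bastion
    driver: macvlan
    driver_opts:
      parent: dmz03       # hypothetical second VLAN interface, e.g. VLAN 10
    ipam:
      config:
        - subnet: "192.168.26.0/24"
```

With the containers attached to separate networks, a compromised bastion can no longer talk to the web proxy directly; any traffic between them would have to pass the gateway's inter-VLAN firewall rules.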
Ideas? Comments? Questions? Please leave a comment, but note that I am moderating it, so things won’t show up instantly. If you have ideas to improvements, they’re especially welcome!