DigitalOcean Kubernetes Without a Load Balancer

This is the third post in a series on Modernizing my Personal Web Projects where I look at setting up DigitalOcean Kubernetes without a load balancer.

Why You Need a Load Balancer

DigitalOcean Load Balancers are a convenient managed service for distributing traffic between backend servers, and they integrate natively with DigitalOcean’s Kubernetes service. They offer a quick way to expose services to the public internet without having to use NodePort. However, a managed load balancer is excessive for a personal website with low traffic, since it won’t fully utilise the performance or high-availability benefits. It also costs more, which is no good for my original goal of creating a budget Kubernetes setup!

A popular option for load-balancing in general is NGINX, a high-performance web server that’s free to use (when using the open-source version). When used in combination with virtual machines, a typical configuration is to host NGINX on servers with static or floating IPs. Then the DNS for your website can be pointed at these IPs, and you can have many virtual hosts being served from the same NGINX server. However, Kubernetes by nature is backed by volatile servers that come and go, and DigitalOcean does not support assigning floating IPs to Kubernetes nodes. So, are we out of luck? Not quite – fortunately, there’s a workaround.

Setting Up an NGINX Ingress Controller

NGINX can be used as an Ingress controller on Kubernetes. An Ingress controller allows external access to services on a cluster, for example HTTP, and can fulfil the typical roles of a load balancer, including virtual host routing. The Kubernetes team maintains an NGINX Ingress controller, which is the one I decided to use.

Enabling HTTP and HTTPS Traffic to the Cluster

Because the firewall doesn’t allow public access to your cluster by default, the first step is to add a firewall rule allowing incoming HTTP and HTTPS traffic. This can be done using either doctl or the web UI. In either case, you’ll want to apply the rule to your cluster’s tag, which includes its UUID – for example, k8s:123e4567-e89b-12d3-a456-426614174000.

  1. Navigate to Networking -> Firewalls
  2. Click Create Firewall
  3. Under Inbound Rules, change the type to HTTP and add a new rule for HTTPS
  4. Under Apply to Droplets, enter the k8s:xxx tag that applies to the Kubernetes cluster.
Creating a firewall rule to allow HTTP/HTTPS traffic to Kubernetes
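
The same rule can be created from the command line with doctl. A sketch, reusing the example cluster tag from above (substitute your own tag and firewall name):

```shell
# Allow inbound HTTP and HTTPS from anywhere to all droplets tagged
# with the cluster's k8s:<uuid> tag. The name and tag are placeholders.
doctl compute firewall create \
  --name k8s-ingress-web \
  --inbound-rules "protocol:tcp,ports:80,address:0.0.0.0/0 protocol:tcp,ports:443,address:0.0.0.0/0" \
  --tag-names "k8s:123e4567-e89b-12d3-a456-426614174000"
```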

Installing the NGINX Ingress Controller

It’s fairly easy to install the NGINX Ingress Controller using Helm. However, the default configuration creates a new DigitalOcean Load Balancer, which isn’t what we want. Instead, we need a custom configuration file that tells it to use the host (node) ports. Running it as a DaemonSet ensures it will run on all of my nodes, while ClusterIP means it won’t create a load balancer. The remaining settings tell it to bind to the host’s ports 80 and 443.

  controller:
    kind: DaemonSet
    hostNetwork: true
    dnsPolicy: ClusterFirstWithHostNet
    hostPort:
      enabled: true
    service:
      type: ClusterIP
  rbac:
    create: true

Save this in a file like nginx-ingress.yaml and proceed with the installation:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx -f nginx-ingress.yaml
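
Once the chart is installed, it’s worth checking that the controller is running on every node and that no load balancer was provisioned. For example, assuming the release name ingress-nginx from the command above:

```shell
# The DaemonSet should report one pod per node.
kubectl get daemonset ingress-nginx-controller

# TYPE should be ClusterIP with no EXTERNAL-IP -- i.e. no
# DigitalOcean Load Balancer was created.
kubectl get service ingress-nginx-controller
```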

Configuring External DNS

After getting the ingress up and running, it should now be accessible over HTTP (port 80) and HTTPS (port 443). But as mentioned before, we’ll want a DNS entry pointing to it so it can be accessed by name too. Since the IP addresses of the Kubernetes cluster are always changing, this isn’t as straightforward as manually creating a DNS entry and forgetting about it – we need Kubernetes itself to manage the DNS for us. By the way, I use Cloudflare’s free plan for my DNS hosting. They’re great.

With external-dns

One option is to use Kubernetes external-dns to automatically publish external DNS records. It has support for many providers and is worth checking out. However, I couldn’t get it to work because it was picking up the internal ClusterIP address of the NGINX ingress rather than the node’s external IP address. (Apparently the solution to this is to set publishService.enabled: false, but I haven’t tried this yet.)
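
For reference, that setting belongs to the NGINX Ingress controller’s Helm values rather than to external-dns itself: with it disabled, the controller stops publishing its Service address to the Ingress status, and the node IPs are reported instead, which external-dns can then pick up. An untried sketch, added to the nginx-ingress.yaml from earlier:

```yaml
controller:
  # Untried: don't publish the controller Service's ClusterIP to the
  # Ingress status; the node IPs get reported there instead.
  publishService:
    enabled: false
```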

With kubernetes-cloudflare-sync

Alternatively, there is a Cloudflare-specific solution at kubernetes-cloudflare-sync. This is what I ended up using. It works a little differently because it directly syncs the nodes’ external IP addresses but does not add entries for the ingress hosts. In my opinion this is a cleaner solution.

What I do is use this to create a DNS entry for the cluster, like k8.domain. Then, when I add a new ingress, I manually create a CNAME that points the vhost to k8.domain. This has two advantages: firstly, there are no automated changes to the live DNS. This is more secure and acts as a manual check before going ‘live’ on my public domains. Secondly, if something goes wrong, I only need to update k8.domain instead of all my vhosts. The downside, of course, is the extra manual step of creating the CNAMEs, but I don’t add new vhosts very frequently, so that doesn’t bother me.
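
The resulting records look something like this (example.com and the IP addresses are placeholders for your own domain and node IPs):

```text
k8.example.com.      A      203.0.113.10     ; synced automatically, one per node
k8.example.com.      A      203.0.113.11
mysite.example.com.  CNAME  k8.example.com.  ; created manually per vhost
```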

I created my own fork of the above project. It uses the new Cloudflare API tokens for authentication instead of the key+email method. Also, it’s based on the more lightweight ‘distroless’ Docker base image – I’m aiming for a budget Kubernetes setup, so every bit counts!

Wrapping Up

After completing the above steps, I have an NGINX load balancer linked to my DNS, ready to serve whatever websites I like on my cluster. Configuring DigitalOcean Kubernetes without a load balancer saves me a minimum of $10/month.

When I deploy websites to my cluster (which I’ll describe in future posts), I just need to create an Ingress entry using the NGINX controller as described in the user guide. For example, given a file named ingress-mysite.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myservicea
spec:
  ingressClassName: nginx
  rules:
  - host:
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myservicea
            port:
              number: 80

This can be deployed with kubectl apply -f ingress-mysite.yaml. NGINX will then automatically start routing incoming traffic for your domain to the service.
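
Before (or instead of) creating the CNAME, you can smoke-test the ingress directly by sending a request to a node’s public IP with a spoofed Host header. Both values below are hypothetical – substitute one of your node IPs and the vhost from your Ingress resource:

```shell
# 203.0.113.10 is a placeholder node IP; mysite.example.com a placeholder vhost.
curl -H "Host: mysite.example.com" http://203.0.113.10/
```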
