Incus primer
I’ve been experimenting with Incus recently, and here’s a short primer, mostly for future me. My goal is to provision all my containers using Terraform, even though the Incus web UI is pretty nice as well. This primer mostly covers my own setup.
It should be mentioned that when I started experimenting with Proxmox I wanted to use something like cloud-init with Proxmox Containers, which sadly doesn’t seem to be available. So I kept looking at alternatives and tested Incus, which proved to be a great idea!
Networking setup
I’m using this on a Debian 12-based machine with two NICs and ZFS. I have set up my network cards something like this:
- bridge0, for management[1], only untagged traffic, with eth0 as a bridge member
- incusbr0, for container traffic, only tagged traffic, with eth1 as a bridge member
I’m not using the normal, managed Incus bridge (due to reasons) but instead set this bridge up manually using systemd-networkd. My network in general uses one VLAN per service, which I want to continue doing, mostly due to reasons.
Incus setup
I have a default profile set up which contains basic configuration (which storage pool to use plus a network interface name), plus a set of other profiles per network configuration, which basically means one additional profile per VLAN.[2]
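For reference, a rough sketch of what the storage part of such a default profile can look like; the pool name tank is just a placeholder here, and incus admin init normally adds an equivalent root disk for you already:

# Give the default profile a root disk backed by the "tank" storage pool
# (skip this if "incus admin init" already added one for you)
incus profile device add default root disk path=/ pool=tank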
# Create a profile for vlan1000
incus profile create net-services
incus profile device add net-services eth0 \
    nic \
    nictype=macvlan \
    parent=incusbr0 \
    vlan=1000
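You can verify the result with incus profile show net-services.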
Once the profile is set up, I can create as many containers as I like for that subnet by chaining the profiles together.
# Launch a new container using the default + net-services profiles
incus launch images:debian/12 \
    hello-world \
    --profile default \
    --profile net-services
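Since the end goal is Terraform, the same launch can also be declared as a resource. Below is a minimal sketch assuming the lxc/incus provider and its incus_instance resource; names and attribute spellings should be checked against the provider documentation:

# main.tf -- sketch only, assuming the lxc/incus Terraform provider
terraform {
  required_providers {
    incus = {
      source = "lxc/incus"
    }
  }
}

# Same container as the incus launch example above
resource "incus_instance" "hello_world" {
  name     = "hello-world"
  image    = "images:debian/12"
  profiles = ["default", "net-services"]
}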
Authentication
The default, recommended authentication setup for remote access[3] uses client certificates, which I like as it’s good enough for homelab use (for “real” usage you’d want something more!).
# Expose the server
incus config set core.https_address :8443
Then, visit the Incus host with a web browser and follow the tutorial to set up a client certificate and import it into your browser/system.
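If you prefer the CLI over the web UI, the rough equivalent is to issue a trust token on the server and add the host as a remote from the client; the hostnames below are placeholders, and the exact trust subcommand is worth double-checking with incus config trust --help on your version:

# On the server: create a trust token for the new client
incus config trust add my-laptop

# On the client: add the server as a remote (paste the token when prompted)
incus remote add homelab https://incus.example.com:8443
incus list homelab: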
Stacking profiles
One nice pattern is to stack multiple profiles on a container instance, where each profile adds some important functionality. My first idea was to set up SSH authentication and my internal CA inside each container automagically using cloud-init.
To use the setup-server profile together with the net-services profile (which contains the necessary networking configuration) you could create a new instance like this:
incus launch images:debian/12/cloud \
    hello-ssh \
    --profile default \
    --profile net-services \
    --profile setup-server
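Since the images:debian/12/cloud variant ships cloud-init, you can wait for the first-boot provisioning to finish before poking at the container:

# Block until cloud-init has finished applying the vendor-data
incus exec hello-ssh -- cloud-init status --wait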
setup-server
This is literally the first time I’ve used cloud-init, but it’s pretty straightforward to use – I think using vendor-data is reasonable for this use-case.
config:
  cloud-init.vendor-data: |-
    #cloud-config
    users:
      - default
      - name: ansible
        groups: users
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - ssh-ed25519 fooooobaaaar oscar@example.com
    package_update: true
    packages:
      - openssh-server
      - curl
    ssh_authorized_keys:
      - ssh-ed25519 foobar oscar@example
    ca_certs:
      trusted:
        - |
          -----BEGIN CERTIFICATE-----
          MII...==
          -----END CERTIFICATE-----
description: Set up an internal, Ansible-ready container
devices: {}
name: setup-server
used_by: []
project: default
This is enough to use Ansible and internal ACME with the container! There are also many more modules in cloud-init and the documentation is pretty good.
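To turn the YAML above into the actual profile, one way (assuming it is saved as setup-server.yaml) is:

# Create the profile and load the configuration shown above
incus profile create setup-server
incus profile edit setup-server < setup-server.yaml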
systemd-networkd setup
This is nothing special; the below is simply included for completeness.
# incusbr0.netdev
[NetDev]
Name=incusbr0
Kind=bridge
[Bridge]
MulticastSnooping=false
This interface doesn’t receive any untagged traffic, so there is no point in setting up any addressing etc. for it.
# incusbr0.network
[Match]
Name=incusbr0
[Network]
DHCP=no
LinkLocalAddressing=no
[Link]
RequiredForOnline=no
ActivationPolicy=always-up
Then, for any physical interface that is a member of incusbr0:
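# eth1.network (filename is just an example)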
[Match]
Name=eth1
[Network]
Bridge=incusbr0
Conclusion
I like Incus, and anyone interested in it should try the interactive demo environment that’s available! And if the above setup seems unnecessarily complicated, that is due to my particular setup and not because of Incus.
[1] I went with a bridge in case I ever want to run containers on the management interface, or add another NIC for management.
[2] I think this works OK, even though my initial goal was to replicate the default Proxmox setup, with a bridge on which I just created VLAN sub-interfaces.
[3] The default host-level access uses normal Unix groups, which is very reasonable.