Why Hudu belongs in your stack (even if you’re not an MSP)
Most teams don’t have a documentation problem; they have a documentation decay problem.
- Runbooks get written once and never touched again.
- Asset “inventories” live in spreadsheets until they quietly decay.
- Passwords live in too many places, and nobody knows which one is correct.
- Critical knowledge gets archived in ticket notes and chat logs.
Hudu exists to stop that drift. It’s an IT documentation system designed to be a single source of truth for procedures, assets, credentials, vendors, diagrams, and institutional knowledge. It’s widely used by MSPs, but it’s equally useful for internal IT teams and companies that need structured, searchable documentation instead of scattered wikis and ad-hoc notes. See: Hudu product overview.
What makes Hudu compelling:
- Everything is connected (assets ↔ passwords ↔ procedures ↔ vendors).
- It’s opinionated enough to stay organized, but still flexible.
- It can be automated (API + integrations) so facts don’t rot. See: Hudu site + docs.
- Self-hosting is supported so you can control data location and operational posture. See: Hudu self-hosted getting started.
Two ways to publish Hudu: Standard vs Cloudflare Tunnel
Hudu’s standard self-hosted setup uses SWAG (nginx + Let’s Encrypt) and expects inbound ports 80/443 to be open for certificate issuance and renewal (alternatively, you can use a DNS challenge to avoid opening ports). See: Hudu standard setup guide.
Cloudflare Tunnel lets you publish services without a publicly routable IP and without opening inbound ports, because cloudflared establishes outbound-only connections to Cloudflare. See: Cloudflare Tunnel overview. This is ideal if you’re behind CGNAT, want to reduce attack surface, or prefer “no open ports” as your baseline.
Cloudflare also supports proxied WebSockets (important for apps that rely on realtime endpoints like /cable). See: Cloudflare WebSockets.
Recommendation: If you can, publish Hudu behind Cloudflare Tunnel + Access policies. You’ll get:
- a smaller internet footprint (no inbound ports),
- optional SSO/MFA gating in front of the login page,
- clean routing for multiple internal apps.
Clean host hardening
SSH key-only access (GitHub import or manual)
Option A — import keys from GitHub
sudo apt update
sudo apt install -y ssh-import-id
ssh-import-id gh:<github_username>
This appends all keys from GitHub into:
~/.ssh/authorized_keys
Review and remove keys you don’t want:
nano ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Option B — manually add a single key
mkdir -p ~/.ssh
nano ~/.ssh/authorized_keys
Paste your public key (example):
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPmIXz2w0b4JjaAdCdevVWvuep3baxxxxxxxxxxx your-key-comment
Lock down permissions:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
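A quick way to confirm the permissions took: sshd’s StrictModes will refuse keys if anything under ~/.ssh is group- or world-accessible. A small audit sketch; it is shown here against a scratch directory so it is safe to paste anywhere, but you can point SSH_DIR at "$HOME/.ssh" to audit the real thing.

```shell
# Scratch stand-in for ~/.ssh (set SSH_DIR="$HOME/.ssh" to audit for real).
SSH_DIR=$(mktemp -d)/.ssh
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"
# List anything with group/other bits set; a tight setup prints nothing.
loose=$(find "$SSH_DIR" -perm /go+rwx)
echo "loose entries: ${loose:-none}"
```

Empty output from the find means sshd will accept the directory as-is.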
Disable password login over SSH
Edit your sshd config (cloud images often ship a drop-in file like this):
sudo nano /etc/ssh/sshd_config.d/50-cloud-init.conf
Ensure these are set (and uncomment if needed):
PasswordAuthentication no
Validate the config, then restart SSH:
sudo sshd -t
sudo systemctl restart ssh.service
Safety tip: keep a second SSH session open before restarting sshd, so you can recover if you typo a config.
Optional: passwordless sudo (make sure you understand the risks)
If you have key-only SSH and want frictionless admin work (replace <username>):
echo "<username> ALL=(ALL) NOPASSWD: ALL" | sudo EDITOR=tee visudo -f /etc/sudoers.d/<username> >/dev/null
Optional (Proxmox): install the QEMU guest agent
sudo apt update
sudo apt install qemu-guest-agent -y
sudo systemctl enable qemu-guest-agent
sudo systemctl start qemu-guest-agent
# enable may warn on some templates; typically safe to ignore
Install Docker Engine (Ubuntu, official apt repo)
This section is copy/paste friendly and follows Docker’s official Ubuntu install method. See: Docker Engine on Ubuntu.
Supported Ubuntu versions (64-bit)
Docker’s official docs currently list support for:
- Ubuntu Questing 25.10
- Ubuntu Noble 24.04 (LTS)
- Ubuntu Jammy 22.04 (LTS)
See: Docker Engine on Ubuntu.
Uninstall old/conflicting packages
Unofficial distribution packages can conflict with Docker’s official ones. Remove them first:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt remove -y $pkg; done
It’s OK if apt reports that some of these packages aren’t installed. See: Docker uninstall notes.
Install using the apt repository
# Add Docker's official GPG key:
sudo apt update
sudo apt install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources > /dev/null <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
sudo apt update
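The Suites: line in the heredoc above resolves to your release codename from /etc/os-release; the ${UBUNTU_CODENAME:-$VERSION_CODENAME} fallback matters on Ubuntu derivatives that set only VERSION_CODENAME. A sketch of what it evaluates, run against a sample os-release file rather than the live one:

```shell
# Sample os-release contents (what Ubuntu 24.04 ships);
# the live file is /etc/os-release.
sample=$(mktemp)
cat > "$sample" <<'EOF'
NAME="Ubuntu"
VERSION_CODENAME=noble
UBUNTU_CODENAME=noble
EOF
# Same substitution as the Suites: line in the sources file.
codename=$(. "$sample" && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
echo "$codename"   # noble
```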
Install Docker packages:
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify Docker:
sudo docker run hello-world
Optional (not recommended): run Docker without sudo by adding yourself to the docker group. Be aware that docker-group membership is effectively root-equivalent:
sudo usermod -aG docker "$USER"
# log out and back in
Install Hudu (Docker Compose, standard structure)
Hudu’s standard self-hosted approach is Docker Compose with SWAG reverse proxy + Let’s Encrypt. See: Hudu standard setup guide.
1) Create a project directory
mkdir -p ~/hudu2
cd ~/hudu2
2) Create docker-compose.yml using Hudu’s standard guide
Use Hudu’s Standard Setup Guide as your source of truth for service definitions (Postgres, Redis, Hudu app, worker, SWAG, volumes). See: Hudu standard setup guide.
3) Generate .env with Hudu’s wizard
cd ~/hudu2 && bash <(curl -fsSL https://raw.githubusercontent.com/Hudu-Technologies-Inc/self-hosting/refs/heads/main/hudu-env-wizard.sh)
4) Bring it up
cd ~/hudu2
sudo docker compose up -d
5) Apply Hudu’s recommended SWAG nginx config (websockets)
Hudu’s guide includes a default.conf that accounts for websocket handling. See: Hudu standard setup guide.
cd ~/hudu2
sudo docker compose down
cd /var/www/hudu2/config/nginx/site-confs/
sudo nano default.conf
# paste Hudu-provided config from the standard guide
cd ~/hudu2
sudo docker compose up -d
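For orientation, the websocket-relevant part of such a config usually comes down to upgrade headers on the proxied location. The snippet below is an illustrative sketch only (the upstream name and port are placeholders, not Hudu’s actual values); use the exact default.conf from Hudu’s standard guide.

```nginx
# Illustrative sketch; "app:3000" is a placeholder upstream.
location /cable {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://app:3000;
}
```

Without the Upgrade/Connection headers, nginx silently downgrades the websocket handshake to plain HTTP and realtime features break.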
Publish Hudu behind Cloudflare Tunnel (no open inbound ports)
Why the tunnel is worth it
As covered above, cloudflared establishes outbound-only connections, so you publish services without exposing a public IP or opening inbound ports, and Cloudflare proxies WebSocket connections (which Hudu’s /cable endpoint relies on). See: Cloudflare Tunnel overview; Cloudflare WebSockets.
Pattern A: Cloudflare TLS at the edge → HTTP to your origin
Flow: User HTTPS → Cloudflare → Tunnel → http://swag:80 (internal)
Pros:
- easy to stand up
- no need for origin certificates
Tradeoffs:
- you must ensure the proxy/app handles “original scheme is https” correctly (cookies/redirects)
Pattern B (recommended): End-to-end TLS → Tunnel → HTTPS to your origin
Flow: User HTTPS → Cloudflare → Tunnel → https://swag:443 (internal)
Pros:
- fewer “scheme mismatch” problems
- keeps Hudu’s standard SWAG reverse-proxy structure intact
Tradeoffs:
- you need an origin certificate approach (DNS validation or origin cert)
If you don’t want to open 80/443 inbound, configure SWAG for DNS validation so it can request/renew certificates without inbound ports. See: Cloudflare LetsEncrypt DNS Challenge.
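With the linuxserver SWAG image, DNS validation is selected through environment variables. A sketch, assuming Cloudflare as the DNS provider (the API token itself goes into SWAG’s config/dns-conf/cloudflare.ini, not the compose file):

```yaml
# Fragment of the swag service definition; merge into your existing stack.
swag:
  environment:
    # Switch cert issuance from HTTP-01 to DNS-01: no inbound 80/443 needed.
    - VALIDATION=dns
    - DNSPLUGIN=cloudflare
```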
Step 1: Create a tunnel (Cloudflare Zero Trust dashboard)
In the Cloudflare Zero Trust dashboard:
- open the Zero Trust section from your Cloudflare account
- go to Networks → Connectors → Cloudflare Tunnels
- choose Create a tunnel
- select Cloudflared
- name it, copy the token, then save
See: Create a remote tunnel.
Step 2: Run cloudflared in Docker
Stop the stack:
cd ~/hudu2
sudo docker compose down
Add a cloudflared service to your existing Compose stack. The simplest method is using a Tunnel token (generated in the dashboard).
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${CLOUDFLARE_TUNNEL_TOKEN}
    environment:
      - CLOUDFLARE_TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}
    restart: unless-stopped
    depends_on:
      - swag
Add the token to your .env:
nano ~/hudu2/.env
# add:
CLOUDFLARE_TUNNEL_TOKEN=...
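A quick sanity check that the token actually landed in .env, without printing its value. The demo below runs against a scratch file so it is safe anywhere; run the same grep against ~/hudu2/.env on your host.

```shell
# Scratch .env standing in for ~/hudu2/.env (the token value here is fake).
envfile=$(mktemp)
echo 'CLOUDFLARE_TUNNEL_TOKEN=eyFAKE' > "$envfile"
# Require the variable name plus at least one character after the "=".
if grep -q '^CLOUDFLARE_TUNNEL_TOKEN=.' "$envfile"; then
  echo "token present"
else
  echo "token missing"
fi
```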
Bring it up:
cd ~/hudu2
sudo docker compose up -d
Step 3: Route a hostname to your origin service
In your Tunnel config:
- Published application routes → Add a Published application route
- Hostname:
hudu.yourdomain.com - Service target:
- Pattern A:
http://swag:80 - Pattern B:
https://swag:443
- Pattern A:
Step 4 (strongly recommended): put Cloudflare Access in front of Hudu
Cloudflare Access lets you require SSO/MFA and policy checks before anyone even sees the Hudu login page.
Start with:
- add your IdP (Google Workspace / Azure AD / Okta)
- add an application from Access control → Applications
- for Input method choose Default, and fill in your subdomain
- in Policies, add a new policy that allows access only to your required email
- under Login methods, check your authentication provider and enable Instant Auth if required
See Cloudflare’s guidance on protecting self-hosted apps with Access: Cloudflare Access for self-hosted apps.
Step 5: Configure Bypass and Allow routes
Some Hudu paths need to remain unrestricted (bypassed) so the application works correctly — particularly for sharing, manifests, or integrations.
Below are recommended configurations for the main application:
| Application URL | Policy Type |
|---|---|
| / | Allow – assign to your authenticated group |
| /manifest.json | Bypass – required for PWA and browser compatibility |
| /secure_notes, /shared, /otp_shared_access, /shared_article | Bypass – allow access for public sharing; doesn’t take up a license |
| /public_photo | Bypass – allow access to view uploaded photos |
| /app_assets | Bypass – static assets such as JS/CSS |
| /api/v1 | Bypass or Allow – depends on your integration setup |
For external apps and extensions:
| Application URL | Policy Type |
|---|---|
| /jwt/refresh | Bypass – token refresh requests |
| /external_apps/* | Bypass – general endpoints for external apps/extensions |
| /external_apps/companies, /external_apps/passwords, /external_apps/vault_passwords, /external_apps/password_folders, /external_apps/articles, /external_apps/article_folders, /external_apps/pins, /external_apps/styles | Bypass – these allow access to specific endpoints |
Extras - what we use for logos and browser addons
| Application URL | Policy Type |
|---|---|
| uploads/account/1/logo | Bypass – customization |
| uploads/account/1/favicon | Bypass – customization |
| oauth/* | Bypass – for ScreenConnect plugin auth |
In practice, you create an application with up to 5 Public hostnames, fill in the Path field with the entries above, and tie it to a Bypass policy (Action: Bypass, Selector: Everyone).
If you need more Public hostnames, just create more applications.
Maintenance that keeps Hudu boring (in a good way)
Self-hosting success is about backups and predictable upgrades.
Backups: database + uploads
Postgres dump:
cd ~/hudu2
sudo docker compose exec -T db pg_dump -U postgres hudu_production > hudu-$(date +%F).sql
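Dated dumps pile up quickly, so pair the dump with a retention rule. A sketch that keeps only the newest seven hudu-*.sql files, demonstrated in a scratch directory (run the last two commands in ~/hudu2 for real):

```shell
# Scratch dir with ten fake dumps at distinct timestamps, one per day.
workdir=$(mktemp -d)
cd "$workdir"
for i in $(seq 1 10); do
  touch -d "2024-01-$(printf '%02d' "$i")" "hudu-2024-01-$(printf '%02d' "$i").sql"
done
# Keep the 7 newest by modification time, delete the rest.
ls -1t hudu-*.sql | tail -n +8 | xargs -r rm --
remaining=$(ls -1 hudu-*.sql | wc -l)
echo "dumps kept: $remaining"
```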
Postgres backup to S3:
Add an extra service to your docker compose file and insert the required connection details:
postgres3:
  image: hudusoftware/doubletake:latest
  restart: unless-stopped
  links:
    - db
  environment:
    S3_HOST_BASE: 's3.us-west-1.wasabisys.com'
    S3_REGION: 's3.us-west-1'
    S3_BUCKET: 'bucketname'
    S3_FOLDER: 'foldername'
    S3_ACCESS_KEY_ID: 'XXXX'
    S3_SECRET_ACCESS_KEY: 'XXXXXXXX'
    CRON_SCHEDULE: '0 */6 * * *' # every 6 hours
    DB_NAME: 'hudu_production'
    DB_USER: 'postgres'
    POSTGRES_PASSWORD: # set to the same password your db service uses
    DB_HOST: 'db'
    DB_BACKUP_VERIFY: '1'
    DB_BACKUP_COMPRESS: '0' # Optional, default is 0
    DB_BACKUP_COMPRESS_LEVEL: # Optional, default is 4
    POSTGRES_EXTRA_OPTS: '--schema=public --blobs'
Uploads:
- If local storage: back up your uploads volume.
/var/lib/docker/volumes/hudu2_app_data/_data/
- If S3: back up the bucket and consider versioning/object lock. Or use lifecycle rules to back up S3 to Amazon Glacier.
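For local storage, the uploads volume can be archived with a dated tarball. A sketch, demonstrated against a scratch directory standing in for the volume path shown above:

```shell
# Scratch stand-in for /var/lib/docker/volumes/hudu2_app_data/_data/
src=$(mktemp -d)
echo "example upload" > "$src/photo.jpg"
dest=$(mktemp -d)
# Dated, compressed archive of the volume contents.
tar -czf "$dest/hudu-uploads-$(date +%F).tar.gz" -C "$src" .
ls "$dest"
```

On the real host, replace $src with the volume path and $dest with your backup target (and run with sudo, since the volume is root-owned).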
DB Restores
Once you have a database backup, you can restore your Hudu instance to the old backup by following these steps:
- Make sure you have an up-to-date backup of your documentation before you begin.
- Move the .sql database dump file into the ~/hudu2 directory. Typically, the easiest way to move files is via SCP or SFTP.
- Run sudo docker compose down to bring your instance down.
- Run sudo docker compose up -d db to bring up only the DB container.
- Run: sudo docker compose exec db dropdb hudu_production -U postgres
- Run: sudo docker compose exec db createdb hudu_production -U postgres
- Run: cat NAME-OF-DUMP.sql | sudo docker compose exec -T db psql -d hudu_production -U postgres
- Run sudo docker compose down
- Run sudo docker compose up -d to get your instance back up and running.
Updates: pull, restart, verify
Hudu’s update workflow:
cd ~/hudu2 && sudo docker compose down && sudo docker compose pull && sudo docker compose up -d
After updates do a quick log check:
cd ~/hudu2
sudo docker compose ps
sudo docker compose logs --tail=200