n8n Self-Hosted Setup Tutorial: Complete 2026 Guide
Quick answer
This n8n self-hosted setup tutorial covers the complete installation path: Docker Compose with PostgreSQL, nginx as a reverse proxy, SSL via Let’s Encrypt, and the environment variables that matter for production. The quick Docker run command gets n8n working locally — this guide gets it working reliably. If you haven’t used n8n before, read the n8n tutorial for beginners first to understand what you’re setting up.
Beyond the quick Docker command that gets n8n running on your laptop, this guide covers everything required to keep it running reliably for months.
A project I reviewed had been running n8n self-hosted for eight months on SQLite, with no reverse proxy and webhook URLs that were raw IP addresses: http://165.22.48.3:5678/webhook/abc123. It worked in testing. In production, with a dozen external services calling those webhooks, it held together on luck and the fact that nobody had looked at it closely.
When the team decided to depend on it properly, retrofitting — PostgreSQL migration, nginx, SSL, an environment variable audit — took three days. Setting it up correctly from the start would have taken three hours. This is the same pattern as enabling TypeScript strict mode after two years without it: the work doesn’t disappear, it compounds with time.
This guide covers Docker Compose installation, database selection, nginx with Let’s Encrypt, the environment variables that actually matter, and how to keep it stable once it’s running.
Self-Hosted vs n8n.cloud: The Honest Comparison
The decision has two variables: cost and responsibility. Self-hosting removes the subscription cost and adds infrastructure work. n8n.cloud removes the infrastructure work and adds a monthly bill that compounds as your team grows.
| Factor | Self-hosted | n8n.cloud |
|---|---|---|
| Monthly cost | $5–20/month (server only) | From $20/month per plan |
| Setup time | 30–60 minutes (with this guide) | 2 minutes |
| Maintenance | Updates, backups, monitoring — yours | Managed for you |
| Data residency | Your server, your control | n8n’s infrastructure |
| Execution limits | Your server’s capacity | Plan-dependent |
| Webhook URLs | Your domain | *.n8n.io subdomain |
| Automatic backups | You configure them | Included |
Most developers testing n8n start on the cloud trial and move to self-hosting once they’ve confirmed the tool is useful. The switch happens at one of three points: the free tier’s execution limits become a constraint, the monthly cost compounds with team growth, or a workflow needs to reach an internal service that n8n.cloud can’t access.
One thing the comparison often glosses over: n8n.cloud includes automatic updates. Self-hosted means you run `docker compose pull && docker compose up -d` yourself. That’s the concrete trade-off this tutorial asks you to accept. If that sounds fine, proceed.
What You Need Before You Start

Before running any commands, confirm you have everything in place. Starting without these will require backtracking, which is more time-consuming than checking first.
- A Linux VPS — Ubuntu 22.04 LTS is the most tested distribution for n8n self-hosting
- SSH access to the server
- A domain name with an A record pointing to the server’s IP (required for webhooks and SSL)
- Docker and Docker Compose installed (`docker --version` and `docker compose version` should both return a version)
- Ports 80 and 443 open in the server’s firewall
| Server size | RAM | Disk | Suitable for |
|---|---|---|---|
| Hetzner CX11 (~$4/mo) | 2 GB | 20 GB | Personal use, small team, testing |
| Hetzner CX21 (~$6/mo) | 4 GB | 40 GB | Regular workflow volume, AI pipelines |
| DigitalOcean Basic ($12/mo) | 2 GB | 50 GB | Personal or team with moderate use |
| Any VPS 4 GB+ | 4+ GB | 80+ GB | High-volume workflows, production teams |
The domain requirement is non-negotiable if you’re using webhooks. External services cannot POST to a raw IP address with a port number through most firewalls, and without HTTPS, many external services will refuse to call your endpoint at all. A domain also lets nginx route traffic cleanly and certbot issue an SSL certificate. Set the A record pointing to your server IP before running anything — DNS propagation takes minutes to hours.
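Before moving on, you can confirm from the server that the record has propagated. This is a minimal sketch; `dig` (from the `dnsutils` package on Ubuntu), the domain, and the IP `203.0.113.10` are placeholders for your own values:

```shell
# Placeholder values: substitute your own domain and server IP
DOMAIN="n8n.yourdomain.com"
EXPECTED_IP="203.0.113.10"

# dig +short prints just the A record, or nothing if the record
# hasn't propagated to your resolver yet
RESOLVED=$(dig +short "$DOMAIN" | head -n1)

if [ "$RESOLVED" = "$EXPECTED_IP" ]; then
  echo "DNS ready: $DOMAIN -> $RESOLVED"
else
  echo "Not ready yet (got: '$RESOLVED'); wait before requesting a certificate"
fi
```

Running certbot before the record resolves fails validation, so the check is worth the thirty seconds.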
If Docker isn’t installed yet, Docker’s official install script handles Ubuntu, Debian, and most common distributions. Docker Compose is included with Docker Engine since v20.10:

```shell
curl -fsSL https://get.docker.com | sh
```

Installing n8n with Docker Compose
The single docker run command gets n8n running on port 5678 locally. It uses SQLite by default and stores data inside the container, which means it disappears when the container is removed. Docker Compose with a defined data volume and PostgreSQL is what you actually want for any deployment you’ll depend on.
Create the project directory and files
```shell
mkdir ~/n8n && cd ~/n8n
touch docker-compose.yml .env
```

docker-compose.yml
```yaml
version: '3.8'

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - "127.0.0.1:5678:5678"
    environment:
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_HOST}/
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - GENERIC_TIMEZONE=UTC
      - N8N_LOG_LEVEL=warn
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:15
    restart: always
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  n8n_data:
  postgres_data:
```

.env file
```shell
N8N_HOST=n8n.yourdomain.com
N8N_ENCRYPTION_KEY=your-32-character-random-string
POSTGRES_DB=n8n
POSTGRES_USER=n8n
POSTGRES_PASSWORD=your-secure-password-here
```

Generate a random encryption key with `openssl rand -hex 16`. Write it down somewhere safe. If you lose it, every saved credential in n8n becomes unreadable — not recoverable, unreadable. The encryption key is the one piece of this setup where a mistake is expensive.
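To keep the placeholder from surviving into production, you can generate the key and write it into `.env` in one step. A sketch, assuming you are in `~/n8n` and the placeholder line from above is present:

```shell
# Generate 16 random bytes, hex-encoded: a 32-character key
KEY=$(openssl rand -hex 16)

# Sanity-check before writing: exactly 32 lowercase hex characters
echo "$KEY" | grep -Eq '^[0-9a-f]{32}$' || { echo "key generation failed"; exit 1; }

# Replace the placeholder line in .env with the real key
sed -i "s|^N8N_ENCRYPTION_KEY=.*|N8N_ENCRYPTION_KEY=$KEY|" .env

echo "Key written to .env; copy it to your password manager now"
```

GNU `sed -i` is assumed here; on macOS the flag needs an argument (`sed -i ''`).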
A few things worth noting about this Compose file:
- `127.0.0.1:5678:5678` — binds n8n only to localhost, not to the public IP. nginx handles public traffic. If you bind to `0.0.0.0:5678`, anyone with your server’s IP and the port can reach n8n directly, bypassing any reverse proxy auth.
- `restart: always` — restarts the container if it crashes or the server reboots. Without this, n8n stops after any disruption and you find out when a workflow doesn’t fire.
- `depends_on` with `service_healthy` — waits for PostgreSQL to accept connections before starting n8n. Without this, n8n sometimes starts before the database is ready and fails with a connection error.
Start the stack:
```shell
docker compose up -d
```

Check both containers are running:

```shell
docker compose ps
```

Both should show Up. If postgres shows `Up (healthy)` and n8n shows `Up`, the stack started correctly. Confirm n8n is accessible at http://localhost:5678 from the server before continuing to the nginx step. At this point it’s running on HTTP, reachable only locally — that’s correct.
Setting Up nginx and SSL

nginx sits between the internet and n8n: it receives HTTPS requests on port 443, terminates SSL, and forwards traffic to n8n on port 5678. certbot manages the Let’s Encrypt certificate.
Install nginx and certbot
```shell
sudo apt update
sudo apt install -y nginx certbot python3-certbot-nginx
```

Create the nginx site config
Create /etc/nginx/sites-available/n8n:
```nginx
server {
    server_name n8n.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
    }

    listen 80;
}
```

Enable the site and test the config:
```shell
sudo ln -s /etc/nginx/sites-available/n8n /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
```

If `nginx -t` returns `syntax is ok` and `test is successful`, the configuration is valid. If it returns an error, check the config file for typos — most commonly a missing semicolon or an incorrect path. (nginx configuration is one of those things where understanding it fully is optional for getting it working, but mandatory for understanding why it stopped.)
Issue the SSL certificate
```shell
sudo certbot --nginx -d n8n.yourdomain.com
```

certbot modifies the nginx config to add SSL directives and sets up automatic renewal. Confirm the certificate issued correctly by visiting https://n8n.yourdomain.com — the n8n login or setup screen should appear over HTTPS.
certbot installs a systemd timer or cron job that renews certificates before expiry. Verify it’s active with `sudo systemctl status certbot.timer`. Let’s Encrypt certificates expire after 90 days — if the timer isn’t running, your SSL certificate will expire and n8n will be unreachable over HTTPS.

Environment Variables for Production
The .env file above covers the minimum. These additional variables change n8n’s behaviour in ways that matter once you’re running real workflows:
| Variable | Example value | What it controls |
|---|---|---|
| `GENERIC_TIMEZONE` | America/New_York | Schedule trigger times — wrong timezone means workflows fire at the wrong hour |
| `N8N_LOG_LEVEL` | warn | Reduces log volume; `info` produces a lot of noise for high-frequency workflows |
| `EXECUTIONS_DATA_PRUNE` | true | Enables automatic pruning of old execution logs — prevents the database growing unbounded |
| `EXECUTIONS_DATA_MAX_AGE` | 336 | Keeps execution history for 336 hours (14 days); older records are pruned |
| `N8N_BASIC_AUTH_ACTIVE` | true | Enables HTTP basic auth — adds a username/password prompt before the n8n login |
| `N8N_BASIC_AUTH_USER` | admin | Basic auth username |
| `N8N_BASIC_AUTH_PASSWORD` | your-password | Basic auth password — this gates the entire n8n UI, not individual workflows |

One caveat on the last three: n8n 1.0 and later replaced HTTP basic auth with built-in user management, so the `N8N_BASIC_AUTH_*` variables have no effect on a current image; the built-in login covers the same need.
Set GENERIC_TIMEZONE correctly before creating any scheduled workflows. If the timezone is wrong when a scheduled workflow is created, the trigger fires at the wrong time. Correcting the timezone later doesn’t retroactively fix existing scheduled triggers — you need to re-save them.
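Applied to the `.env` file from earlier, the additions might look like this (values are illustrative; pick your own timezone and retention window). One detail specific to the Compose file above: it passes variables through explicitly, so each new variable also needs a matching line in the n8n service’s `environment:` section, e.g. `- EXECUTIONS_DATA_PRUNE=${EXECUTIONS_DATA_PRUNE}`.

```shell
# .env additions for production (illustrative values)
GENERIC_TIMEZONE=America/New_York
N8N_LOG_LEVEL=warn
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336   # hours: 14 days of execution history
```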
After adding new variables to .env, restart the n8n container to pick them up:
```shell
docker compose up -d --force-recreate n8n
```

For ongoing backups, set up a daily pg_dump with something like:
```shell
docker exec n8n-postgres-1 pg_dump -U n8n n8n | gzip > /backups/n8n-$(date +%Y%m%d).sql.gz
```

Run this from a cron job and ship the output to S3 or Backblaze B2. The n8n data volume (credentials, workflow files) should be backed up alongside it — both are needed to restore a working instance. This is the maintenance work n8n.cloud handles for you. Account for it honestly when comparing the two options.
My Take: Boring Infrastructure Is the Right Infrastructure
There’s a persistent temptation when self-hosting to choose interesting tools: Traefik instead of nginx, CockroachDB instead of PostgreSQL, k3s instead of Docker Compose. They’re all technically valid. Most of them create problems that have no Stack Overflow answers when something breaks at an inconvenient time.
Pick boring technology for the foundation. PostgreSQL over SQLite for production — not because SQLite is bad, but because PostgreSQL is what every developer who has debugged a concurrent write issue knows, and every production database problem PostgreSQL can have is documented in depth. nginx over alternatives, because nginx reverse proxy configurations are the most copied, most debugged, most understood in the field. Docker Compose over anything more complex, because Docker Compose failures are diagnosable with docker compose logs.
The compounding value of boring infrastructure: when something breaks — and something will break — the error message is in the logs, the fix is in a Stack Overflow answer from 2017, and the whole incident takes forty minutes instead of a day. Exotic infrastructure choices at the foundation level trade convenience for uniqueness in the worst possible direction. Use the part of your afternoon where you would have configured Traefik to build another workflow instead.
Every n8n self-hosted instance I’ve seen in production that was running reliably six months later used PostgreSQL, nginx, and Docker Compose. No exceptions.
When NOT to Self-Host n8n
Four situations where self-hosting is the wrong choice:
1. Nobody on the team can maintain it
n8n self-hosted requires someone who can SSH into a server, run Docker commands, interpret logs, and handle an unexpected restart without panicking. If you’re the only person on the team who can do this, and you plan to leave or be unavailable, n8n self-hosted is a liability. The moment something breaks and the person who knows how to fix it isn’t available, every automated workflow stops. n8n.cloud removes that dependency.
2. You need it running in minutes
Self-hosting takes 30–60 minutes even when everything goes smoothly. If you need something working today and the urgency is real, n8n.cloud’s free trial takes two minutes. Start there. The cloud-to-self-hosted migration is straightforward — export your workflows as JSON and import them. Don’t delay an important workflow because you want to self-host and haven’t done the setup yet.
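From n8n.cloud the export happens in the UI (each workflow downloads as a JSON file). Between self-hosted instances, n8n’s CLI can move everything in bulk; a sketch using the Compose setup from this guide, with output paths chosen arbitrarily:

```shell
# On the old instance: export all workflows and credentials to the data volume
docker compose exec n8n n8n export:workflow --all --output=/home/node/.n8n/workflows.json
# --decrypted writes credential secrets in plaintext; delete the file after import
docker compose exec n8n n8n export:credentials --all --decrypted --output=/home/node/.n8n/credentials.json

# On the new instance, after copying the files into its data volume:
docker compose exec n8n n8n import:workflow --input=/home/node/.n8n/workflows.json
docker compose exec n8n n8n import:credentials --input=/home/node/.n8n/credentials.json
```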
3. You’re running fewer than 50 executions per month
At very low volume, the infrastructure overhead of self-hosting — server cost, maintenance time, occasional debugging — is not justified. n8n.cloud’s free tier or a cheap plan covers modest usage without the operational burden. Self-hosting makes economic sense when you’re running workflows frequently enough that execution limits or cost compounds. Twice a week is not that threshold.
4. Data residency isn’t a concern and your team is small
For small teams where data control isn’t a regulatory requirement, n8n.cloud’s managed experience — automatic updates, built-in monitoring, included backups — often provides better reliability than a self-managed VPS that doesn’t get updated for months. Self-hosting is worth it when you have a reason. “I prefer to control it” is a reason, but evaluate it honestly against the maintenance cost. Also worth comparing: our n8n vs Make breakdown covers cases where a different tool fits better.
Conclusion
The team whose n8n instance took three days to retrofit for production wasn’t making unusual choices. They made the same choices most people make when testing: SQLite because it’s the default, no nginx because they weren’t using webhooks yet, raw IPs because it worked. Then the tool became useful enough to depend on, and the setup cost arrived all at once.
Key takeaways:
- Set up PostgreSQL from day one, not when you decide to “go production” — that day is harder to identify in advance than you’d expect
- Bind n8n to `127.0.0.1` only; let nginx handle public traffic
- Generate the `N8N_ENCRYPTION_KEY` before creating any credentials and back it up separately
- Set `GENERIC_TIMEZONE` before creating scheduled workflows, not after
- Enable execution pruning — an unmanaged database grows predictably until it becomes a problem
Self-hosting n8n is roughly like owning a car: it costs less over time, gives you full control, and requires periodic attention the managed alternative does not. Most outages at 2am are now your responsibility. For workflows that justify the setup, that trade is worth making. For everything else, the cloud exists.
Once it’s running, the n8n AI workflow tutorial covers building AI pipelines on top of the setup you just completed, and the n8n automation workflow tutorial goes deep on triggers, expressions, and branching logic.
Frequently Asked Questions
How much does it cost to self-host n8n?
n8n itself is free and open source under the Sustainable Use License for self-hosting. The cost is your server: a Hetzner CX11 at ~$4/month or a DigitalOcean Droplet at $6/month handles n8n with PostgreSQL on the same machine. Factor in domain registration (~$12/year) and SSL is free via Let’s Encrypt. Total cost runs $5–15/month depending on provider and server size.
Should I use SQLite or PostgreSQL for n8n self-hosted?
Use PostgreSQL for any deployment you’ll depend on. SQLite is n8n’s default and works for local testing, but it cannot handle concurrent writes, has no native backup mechanism, and is difficult to migrate once your execution history grows. PostgreSQL handles all of this correctly. The extra setup takes about five minutes with Docker Compose. Set it up on day one and you won’t think about it again.
What server specs does n8n self-hosted require?
For a personal or small-team instance with PostgreSQL on the same server, 2GB RAM and 20GB disk is the starting point. n8n’s process is light — database size and large workflow executions drive memory consumption. For high-volume workflows or AI pipelines processing large responses, 4GB RAM is more comfortable. Ubuntu 22.04 LTS is the most tested distribution for n8n self-hosting as of 2026.
Do I need a domain name to self-host n8n?
You need a domain if you use webhooks. External services can’t POST to a raw IP with a port number through most firewalls, and many services require HTTPS endpoints. Without a domain, you can’t get an SSL certificate, and without SSL, webhook-based integrations often fail entirely. For workflows that only use the Schedule trigger with no incoming webhooks, you can technically run without a domain — but webhooks are where n8n’s real value is, so this is an unusual constraint to accept.
How do I back up a self-hosted n8n instance?
Back up two things: the PostgreSQL database and the n8n data volume. For Postgres, a daily cron job running pg_dump and uploading the output to object storage (S3, Backblaze B2) is standard. The n8n data volume contains your encryption key, credentials, and workflow definitions — include its Docker volume mount path in the backup script. n8n.cloud handles backups automatically. This maintenance is the concrete operational cost of self-hosting; account for it.
What is the N8N_ENCRYPTION_KEY and why does it matter?
It’s the secret key n8n uses to encrypt saved credentials in the database. Lose it and every saved API key, password, and OAuth token in n8n becomes unreadable — not recoverable, permanently unreadable. Generate it once with openssl rand -hex 16, store it in your .env file, and back it up separately from the database. Never rotate it after credentials are saved. The encryption key and the database backup must be stored together to restore a working instance.
How do I update n8n when self-hosted with Docker?
Pull the new image and restart: docker compose pull && docker compose up -d. n8n runs database migrations automatically on startup. Before updating, back up your database and check the n8n changelog for breaking changes — major version updates occasionally require manual steps. Run updates during low-traffic periods if workflows are time-sensitive.
