
n8n is one of those tools that looks simple in the demos and then takes you a full day to actually set up properly for the first time. Not because it's bad software. Because the docs assume you already know which decisions matter and which don't.
I've set up n8n on Docker, on a VPS, on Railway, and in more complex multi-instance configurations. Here's the version that works without the trial and error.
n8n is a workflow automation platform. You build visual flows that connect services: "when something happens here, do something there." Same category as Zapier and Make (Integromat), but self-hostable and significantly more powerful for technical users.
The key differences from Zapier:
| Feature | n8n | Zapier |
|---|---|---|
| Hosting | Self-hosted or cloud | Cloud only |
| Pricing at scale | Fixed (self-hosted) or reasonable cloud tiers | Per-task pricing gets expensive fast |
| Custom code | Full JavaScript/Python in nodes | Limited |
| AI integration | Native AI nodes, LangChain support | Third-party apps only |
| Error handling | Detailed, debuggable | Limited |
| API | Full API for programmatic control | Limited |
For developers building serious automations, especially ones that involve AI, n8n's self-hosted version is almost always the better choice.
I use Docker Compose for every n8n installation. It's the most reliable setup and the easiest to maintain and upgrade.
```yaml
# docker-compose.yml
version: "3.8"

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=${N8N_HOST:-localhost}
      - N8N_PORT=5678
      - N8N_PROTOCOL=${N8N_PROTOCOL:-http}
      - NODE_ENV=production
      - WEBHOOK_URL=${WEBHOOK_URL}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=720 # hours, i.e. 30 days
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:15
    restart: always
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  n8n_data:
  postgres_data:
```

Create your `.env` file:
```bash
# .env
N8N_HOST=n8n.yourdomain.com      # or localhost for local dev
N8N_PROTOCOL=https               # or http for local
WEBHOOK_URL=https://n8n.yourdomain.com/
N8N_ENCRYPTION_KEY=replace-me    # generate with: openssl rand -hex 32
POSTGRES_PASSWORD=replace-me     # generate with: openssl rand -hex 16
```

Note that `.env` files are read literally: `$(openssl rand -hex 32)` will not be executed. Run the `openssl` commands yourself and paste the output.

Start it:
```bash
docker-compose up -d
```

Access the UI at http://localhost:5678 (or your domain).
If you want webhooks from external services (Stripe, GitHub, etc.), you need HTTPS. Here's the Nginx reverse proxy config:
```nginx
# /etc/nginx/sites-available/n8n
server {
    server_name n8n.yourdomain.com;

    location / {
        proxy_pass http://localhost:5678;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        # Increase timeouts for long-running workflows
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/n8n.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.yourdomain.com/privkey.pem;
}
```

Get the SSL cert with Certbot:
```bash
certbot --nginx -d n8n.yourdomain.com
```

Let me walk through building a real workflow: a system that monitors RSS feeds, summarizes new posts with AI, and sends a Slack digest.
In n8n's visual editor, create this flow:

```text
[Schedule Trigger: Every morning at 8am]
        |
        v
[RSS Feed Read: Fetch from list of URLs]
        |
        v
[Filter: Only items from last 24 hours]
        |
        v
[OpenAI/Claude Node: Summarize each item]
        |
        v
[Code Node: Format into digest]
        |
        v
[Slack Node: Post to #team-digest channel]
```
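If you'd rather do the 24-hour filtering in code than with the Filter node, a small helper works. This is a sketch; it assumes each RSS item carries an `isoDate` field, which may differ depending on the feed and node version:

```javascript
// Keep only items published within the last cutoffMs milliseconds.
// Assumes item timestamps are ISO 8601 strings (e.g. the isoDate field
// that RSS parsing typically produces).
function isRecent(isoDate, cutoffMs = 24 * 60 * 60 * 1000) {
  return Date.now() - new Date(isoDate).getTime() <= cutoffMs;
}

// In an n8n Code node you would apply it like this:
// return $input.all().filter(item => isRecent(item.json.isoDate));
```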
Here's the JavaScript code for the Code node that formats the digest:
```javascript
// Code node: Format Digest
const items = $input.all();

const digestItems = items
  .filter(item => item.json.summary) // Only items with summaries
  .map(item => {
    const { title, link, summary, source } = item.json;
    return `*${title}*\n${summary}\n<${link}|Read more> | ${source}`;
  });

if (digestItems.length === 0) {
  return [{ json: { text: "No significant new content today." } }];
}

const digestText = [
  `*Daily Content Digest - ${new Date().toLocaleDateString()}*`,
  `_${digestItems.length} items worth reading_`,
  "",
  ...digestItems.slice(0, 10), // Limit to 10 items
].join("\n\n");

return [{ json: { text: digestText } }];
```

For the Claude integration, use the HTTP Request node to call the Anthropic API directly, or use n8n's built-in AI nodes:
```text
// HTTP Request node settings:
// Method: POST
// URL: https://api.anthropic.com/v1/messages
// Headers:
//   x-api-key: {{ $credentials.anthropicApi.apiKey }}
//   anthropic-version: 2023-06-01
//   Content-Type: application/json

// Body (JSON):
{
  "model": "claude-haiku-4-20250514",
  "max_tokens": 256,
  "messages": [{
    "role": "user",
    "content": "Summarize this in 2 sentences: {{ $json.content }}"
  }]
}
```

Workflows that don't handle errors get abandoned. Here's how to build resilient flows.
Error Workflow Pattern. In n8n, every workflow can have a dedicated error handler. Create a workflow called "Error Handler" that:

- Starts with an Error Trigger node
- Formats the failed workflow's name, the node that failed, and the error message
- Posts the alert somewhere you'll actually see it (Slack, email)

Enable it in each workflow's settings: Settings > Error Workflow > select your Error Handler workflow.
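A minimal Code node for that error-handler workflow might look like this. Treat the field names as an assumption: they follow the shape of n8n's Error Trigger output, which can vary between versions:

```javascript
// Code node in the "Error Handler" workflow (sketch).
// payload is the data the Error Trigger node emits; the field names below
// (workflow.name, execution.lastNodeExecuted, etc.) are assumptions based
// on the Error Trigger's typical output shape.
function formatErrorMessage(payload) {
  const { workflow, execution } = payload;
  return [
    `:rotating_light: Workflow *${workflow.name}* failed`,
    `Node: ${execution.lastNodeExecuted}`,
    `Error: ${execution.error.message}`,
    `<${execution.url}|Open execution>`,
  ].join("\n");
}

// In n8n, the trigger data arrives as $json:
// return [{ json: { text: formatErrorMessage($json) } }];
```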
Retry Logic for External APIs. External services fail. In the HTTP Request node's settings, enable Retry On Fail and set Max Tries (3 is a sensible default) and Wait Between Tries (start with 1000ms) so transient failures don't kill the whole run.
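When you need more control than the node setting gives you (exponential backoff, for instance), you can implement retries in a Code node. A sketch, assuming the Code node's Node.js runtime exposes standard `Promise` and `setTimeout` (`withRetry` and its parameters are illustrative names, not an n8n API):

```javascript
// Retry an async function with exponential backoff.
// Delays grow as baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...
async function withRetry(fn, { maxTries = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < maxTries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw lastError; // all attempts exhausted
}
```

For most flows the built-in Retry On Fail setting is the simpler choice; reach for code only when you need custom backoff or per-error logic.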
Conditional Error Handling. For workflows where a partial failure is acceptable:
```javascript
// Code node: Handle partial failures
const results = $input.all();

const successful = results.filter(r => !r.json.error);
const failed = results.filter(r => r.json.error);

if (failed.length > 0) {
  console.log(`Warning: ${failed.length} items failed:`, failed.map(f => f.json.error));
}

// Continue with successful results
return successful;
```

Before any workflow goes live:

- Run it manually against real sample data and inspect every node's output
- Attach your error workflow in the workflow settings
- Confirm the credentials it uses are the production ones, not test accounts
Webhooks not receiving: Check that your WEBHOOK_URL environment variable exactly matches your public URL, including trailing slash. n8n is strict about this.
Database connection errors: If you see Postgres connection errors after a restart, check that the Postgres container is fully healthy before n8n starts. The depends_on with health check handles this.
Memory issues with large datasets: Use the Limit node aggressively. Process large datasets in batches. Don't try to process 10,000 records in a single workflow execution.
Workflow timing out: For long-running workflows, increase both the Nginx proxy timeout and n8n's own execution timeout in settings. Default timeout is 2 minutes.
API rate limits: Use n8n's built-in Rate Limit node or add Wait nodes between batches of API calls.
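One batching approach: split items into chunks in a Code node, then put a Wait node between chunks downstream. A sketch (the batch size is illustrative; tune it to the API's rate limit):

```javascript
// Split an array of items into batches of at most batchSize,
// so a downstream Wait node can pace the API calls between batches.
function chunk(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// chunk([1, 2, 3, 4, 5], 2) → [[1, 2], [3, 4], [5]]
```

n8n's Loop Over Items (Split in Batches) node does the same job visually if you'd rather avoid code.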
The highest-ROI automations for most teams:

- Content monitoring and digests, like the RSS-to-Slack workflow above
- Inbox triage: classify incoming email, label it, and draft routine replies
- Lead generation: enrich new leads and draft the first outreach touch
- Alerting: route errors, form submissions, and payment events into Slack

Each of these can be up and running in an afternoon with n8n.

Stop reading about AI and start building with it. Book a free discovery call and see how AI agents can accelerate your business.