Uptime Kuma is a self-hosted monitoring tool that covers the same ground as paid services like UptimeRobot, Pingdom, and StatusCake without the per-monitor pricing or third-party dependency. It runs as a single Docker container, monitors HTTP/HTTPS endpoints, TCP ports, ping targets, DNS records, SSL certificate expiry, Docker containers, and more, then notifies you through over 90 channels when something breaks. This tutorial covers the deployment, notification setup for Slack, Telegram, and email, and configuration of a public status page suitable for sharing with customers or team members.
What You Will Build
An Uptime Kuma instance behind a reverse proxy with HTTPS, monitors for your key services, alerting through three notification channels (one chat, one mobile-friendly, one email fallback), and a public status page hosted at a subdomain like status.example.com.
Prerequisites
- A VPS running Debian 12 or Ubuntu 24.04
- Docker and Docker Compose installed
- A domain with two subdomains (one for the dashboard, one for the public status page)
- SSH root access
Uptime Kuma sips resources. A small Cloud VPS handles hundreds of monitors comfortably.
Step 1: Install Docker
If Docker is not already installed:
apt update && apt upgrade -y
apt install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg \
-o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/debian \
$(. /etc/os-release && echo $VERSION_CODENAME) stable" \
> /etc/apt/sources.list.d/docker.list
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
systemctl enable --now docker
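Before moving on, confirm that the Compose plugin is available, since every later step uses the docker compose subcommand rather than the legacy docker-compose binary:
docker --version
docker compose version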
Step 2: Create the Compose Stack
mkdir -p /opt/uptime-kuma/{data,caddy-data,caddy-config}
cd /opt/uptime-kuma
Create /opt/uptime-kuma/docker-compose.yml:
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    volumes:
      - ./data:/app/data
    networks:
      - web
  caddy:
    image: caddy:2-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-data:/data
      - ./caddy-config:/config
    networks:
      - web
networks:
  web:
    driver: bridge
We pin to the 1 major tag rather than latest. The Kuma project has spent considerable time on a 2.x rewrite; pinning prevents an accidental migration during a routine docker compose pull.
Step 3: Configure the Reverse Proxy
Create /opt/uptime-kuma/Caddyfile:
status.example.com {
    encode gzip
    reverse_proxy uptime-kuma:3001
}

monitor.example.com {
    encode gzip
    reverse_proxy uptime-kuma:3001

    @admin {
        path /dashboard /dashboard/* /add /edit /settings /settings/* /api/* /socket.io/*
    }
    header @admin {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Frame-Options "DENY"
    }
}
Both subdomains point at the same Kuma instance. Kuma serves the dashboard, API, and public status pages from the same backend; the separation is purely cosmetic at the proxy layer. Both subdomains need A/AAAA records pointing at the VPS.
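Before launching, confirm that both records already resolve to the VPS, because Caddy can only obtain certificates once DNS is in place. A quick check from any machine (substitute your real subdomains):
dig +short A monitor.example.com
dig +short A status.example.com
Both commands should print the public IPv4 address of the VPS; add AAAA queries if you use IPv6.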
Step 4: Launch the Stack
docker compose up -d
docker compose logs -f
Wait for Kuma to log Listening on 3001 and Caddy to log successful TLS certificate issuance. Visit https://monitor.example.com and you will see the first-run setup wizard.
Step 5: Create the Admin Account
The account you create in the setup wizard becomes the administrator, and in the 1.x series it is the only account: Kuma has no public self-registration and no additional users. Pick a strong password and enable two-factor authentication immediately afterward under Settings → Security.
Step 6: Add Your First Monitors
Click Add New Monitor in the dashboard. The most useful monitor types for typical infrastructure:
| Monitor Type | What It Checks | Recommended Interval |
|---|---|---|
| HTTP(s) | Website returns 2xx/3xx | 60 seconds |
| HTTP(s) - Keyword | Page contains expected text | 60 seconds |
| TCP Port | Service is reachable on port | 60 seconds |
| Ping | Host responds to ICMP | 60 seconds |
| DNS | DNS record resolves to expected value | 300 seconds |
| HTTP(s) Cert expiry | SSL certificate days remaining | 86400 seconds |
| Docker Container | Container is running | 60 seconds |
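One caveat on the Docker Container type: Kuma can only query a Docker host it has been given access to. With the stack above, that means mounting the host's Docker socket into the Kuma service and then registering the host under Settings → Docker Hosts. A minimal sketch of the amended service (assumes the default socket path):
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      - ./data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
Skip this if you only monitor external endpoints; the socket gives the container broad control over the host's Docker daemon.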
For a typical small business, you probably want:
- HTTPS keyword monitor for your main website (looking for a string only present when the site is fully working, like </body>)
- TCP monitor for your mail server on port 587
- HTTPS cert expiry for each public-facing domain
- Ping or HTTP monitor for any backup or staging environment
- DNS monitor for your apex record
Keyword monitors catch real failures that simple HTTP 200 checks miss, like a database error page that still returns a 200 status.
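If you want to sanity-check a keyword before creating the monitor, the same idea is easy to reproduce from a shell; this is only a rough approximation of what the monitor does, not Kuma's internal logic:
curl -fsS https://www.example.com/ | grep -q '</body>' && echo "keyword present" || echo "keyword missing"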
Step 7: Configure Notifications
Notifications attach to monitors. Configure the channels first in Settings → Notifications, then assign them when creating or editing monitors.
Slack
- In Slack, create an Incoming Webhook for the channel that should receive alerts
- Copy the webhook URL
- In Kuma, Settings → Notifications → Setup Notification
- Notification Type: Slack
- Friendly Name: Slack #alerts
- Webhook URL: paste
- Default Enabled: on (applies to new monitors automatically)
- Test
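If the test in Kuma fails, check whether the webhook itself works before debugging Kuma. Slack incoming webhooks accept a plain JSON payload; replace the URL with your own:
curl -X POST -H 'Content-type: application/json' \
--data '{"text": "Uptime Kuma webhook test"}' \
https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXX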
Telegram
Telegram is excellent for personal alerts because it pushes reliably to your phone with no third-party app required beyond Telegram itself.
- Talk to @BotFather on Telegram and create a new bot; save the token
- Send any message to your new bot
- Visit https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates and find your chat ID in the JSON response
- In Kuma, add a Telegram notification with the bot token and chat ID
- Test
For a team, point the bot at a Telegram group instead. Add the bot to the group, send a message in the group, and use the negative numeric chat ID from getUpdates.
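Extracting the chat ID from the getUpdates response is easier on the command line than in the browser. A sketch, assuming jq is installed and you have already messaged the bot:
curl -fsS "https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates" | jq '.result[].message.chat.id'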
Email (SMTP)
Email is the right fallback when chat platforms are themselves down. Configure SMTP carefully: this is the channel you will depend on precisely when the others fail.
- Settings → Notifications → Setup Notification → SMTP
- Hostname: your SMTP server
- Port: 587 (STARTTLS) or 465 (SMTPS)
- Username and password
- From: a real address with proper SPF/DKIM, otherwise it will go to spam
- To: where alerts arrive
- Test, and verify the message hits the inbox rather than spam
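Kuma's own test button is usually enough, but an independent check from the VPS rules out network or authentication problems. swaks (apt install -y swaks) is a convenient way to do that; substitute your own server, addresses, and credentials:
swaks --to alerts@example.com --from kuma@example.com \
--server smtp.example.com:587 --tls \
--auth LOGIN --auth-user kuma@example.com --auth-password 'app-password'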
Configure these three channels with Default Enabled on, and every new monitor will alert through all three automatically.
Step 8: Configure the Public Status Page
Status pages live at /status/<slug>. To create one:
- Status Page → New Status Page
- Title: Production Services
- Slug: main (final URL: https://status.example.com/status/main)
- Add Group: Customer-Facing Services
- Add monitors to the group
- Add another group for Internal Services if you want operations visibility separately
- Save
To make the slug-less URL (https://status.example.com/) redirect to your main status page, set the default status page slug in Settings → General → Default Status Page.
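If you prefer to enforce that redirect at the proxy instead, the status.example.com block in the Caddyfile can do it directly; a sketch, assuming the main slug from above:
status.example.com {
    encode gzip
    redir / /status/main 302
    reverse_proxy uptime-kuma:3001
}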
Status page tips:
- Group monitors logically (customer-facing, infrastructure, third-party dependencies)
- Use the description field on each monitor to write a customer-readable name. prod-web-01 HTTP 200 check is not what a customer should see; Website availability is.
- Add an incident announcement when you have planned maintenance, so customers visiting see context, not just a red dot
Step 9: Maintenance Windows
When you push deployments or perform planned work, monitors fire false alerts. Use Maintenance to schedule quiet windows:
- Maintenance → New Maintenance
- Title: Weekly deployment window
- Strategy: Recurring (e.g., every Tuesday 22:00 to 22:30)
- Apply to specific monitors or all monitors
- Save
During the window, monitors continue running but alerts are suppressed. The historical record still shows what happened, which is useful for post-deployment review.
Step 10: Backup Strategy
Kuma's entire state lives in /opt/uptime-kuma/data. The SQLite database, configuration, and monitor history are all in there.
cat > /opt/uptime-kuma/backup.sh <<'EOF'
#!/bin/bash
set -euo pipefail
BACKUP_DIR=/opt/uptime-kuma/backups
DATE=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
# Take a consistent snapshot of the live SQLite database
docker exec uptime-kuma sqlite3 /app/data/kuma.db \
".backup '/app/data/kuma-backup.db'"
# Archive the whole data directory (config, history, and the snapshot)
tar -czf "$BACKUP_DIR/kuma-$DATE.tar.gz" \
-C /opt/uptime-kuma data
# Keep two weeks of local archives
find "$BACKUP_DIR" -name "kuma-*.tar.gz" -mtime +14 -delete
# Off-site copy; assumes an rclone remote named "remote" is already configured
rclone copy "$BACKUP_DIR/kuma-$DATE.tar.gz" remote:kuma-backups/
EOF
chmod +x /opt/uptime-kuma/backup.sh
echo "0 4 * * * root /opt/uptime-kuma/backup.sh >> /var/log/kuma-backup.log 2>&1" \
> /etc/cron.d/kuma-backup
Test restoration by stopping the container, replacing the data directory with an extracted backup, and starting again. If the dashboard returns with all monitors intact, your backup pipeline is real.
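A minimal restore drill, run from /opt/uptime-kuma (replace the timestamp with a real archive name from backups/):
docker compose down
mv data data.old
tar -xzf backups/kuma-YYYYMMDD-HHMMSS.tar.gz -C /opt/uptime-kuma
docker compose up -d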
Step 11: Updating Uptime Kuma
cd /opt/uptime-kuma
docker compose pull
docker compose up -d
Because we pinned to the 1 major tag, you receive 1.x patches but not the 2.x migration. When 2.x stabilizes, change the tag, take a backup, then update.
FAQ
How many monitors can one Uptime Kuma instance handle? A small VPS with 1 vCPU and 1 GB RAM comfortably runs 100 to 200 monitors at 60-second intervals. Beyond 500 monitors, watch CPU on the VPS and consider increasing intervals.
Can multiple users share one instance? Only by sharing the single account. Uptime Kuma 1.x has no additional user accounts and no role-based access control, so a team shares the administrator credentials; protect them with 2FA and rotate them when people leave.
What happens if Uptime Kuma itself goes down? You stop receiving alerts. This is the case for every monitoring system, including paid SaaS ones. Mitigation options: monitor Kuma from a second instance on a different VPS, use Kuma's own "Push" monitor type to confirm it is alive, or pair with an external dead-man's-switch service that alerts when it stops hearing from Kuma.
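A cron-based heartbeat is the simplest version of that last option: check that your own Kuma answers, and only then ping a Push monitor on the second instance or external service, so silence on the receiving end means trouble. A sketch for /etc/cron.d, with placeholder URLs (the push URL is the one Kuma displays when you create a Push monitor):
* * * * * root curl -fsS -m 10 https://monitor.example.com >/dev/null && curl -fsS -m 10 "https://other-kuma.example.net/api/push/<PUSH_TOKEN>?status=up" >/dev/null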
Can I import existing monitors from UptimeRobot or similar? Not directly. Kuma's Settings → Backup works with its own JSON export/import format, so the practical route is to export from your current tool and either recreate monitors by hand or script their creation by transforming that export into a Kuma JSON backup before importing it.
How is the public status page styled? Kuma includes a clean default style. Custom CSS is supported via the status page settings. For tighter branding, the most common approach is to use the status page as is and link to it from your main site.
Where to Host It
A monitoring system you control needs to live somewhere reliable, ideally not on the same infrastructure it monitors. Our Cloud VPS is a sensible substrate: EU-located, predictable IO from Ceph-backed storage, and included Proxmox Backup Server snapshots so your historical monitor data survives any single mistake. Keeping the monitor outside your primary cluster is the whole point; if your production server goes dark, the monitor is the one thing that must keep running to tell you so.