# Production Deployment

## Overview

This guide covers deploying Solar-Log to production with:

- ✅ HTTPS/SSL via Cloudflare
- ✅ PostgreSQL database
- ✅ Automated backups
- ✅ Monitoring & logging
- ✅ Security hardening
- ✅ High availability
## Deployment Strategy

### Option 1: Cloudflare Tunnel (recommended)

Advantages:

- No port forwarding required
- Automatic SSL
- Free DDoS protection
- Zero Trust security

Setup: see the Cloudflare Tunnel guide.
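For orientation, a minimal tunnel setup with the `cloudflared` CLI could look like this (the tunnel name `solarlog` is a placeholder; the full walkthrough is in the linked guide):

```bash
# Authenticate against Cloudflare and create a named tunnel (one-time)
cloudflared tunnel login
cloudflared tunnel create solarlog

# Point the public hostname at the tunnel via Cloudflare DNS
cloudflared tunnel route dns solarlog solarlog.karma.organic

# Run the tunnel, forwarding traffic to the local nginx on port 80
cloudflared tunnel run --url http://localhost:80 solarlog
```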
### Option 2: VPS with reverse proxy

Advantages:

- Full control
- Better performance
- No Cloudflare vendor lock-in

Drawback: server costs (roughly 5-10 €/month).
## Pre-Deployment Checklist

### Security

- Strong passwords for PostgreSQL
- `SECRET_KEY` generated (min. 32 characters)
- `DEBUG=false` set
- CORS origins configured
- Firewall rules defined
- SSL certificate in place (or Cloudflare)
### Infrastructure

- Docker & Docker Compose installed
- Backup strategy defined
- Monitoring tool chosen (e.g. Prometheus)
- Log aggregation set up (e.g. Loki)
- Domain configured (DNS)
### Application

- Environment variables set
- Database migrations tested
- Frontend build optimized
- API rate limiting configured (see the sketch below)
- Health checks enabled
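Rate limiting is not configured anywhere else in this guide; a minimal nginx sketch (zone name, rate, burst, and the `backend:8000` upstream are placeholder values to adapt):

```nginx
# Shared 10 MB zone keyed by client IP, allowing 10 requests/second
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api {
        # Permit short bursts of 20 requests, reject the overflow with 429
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://backend:8000;
    }
}
```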
## Environment Configuration

**`.env.production`**

```env
# Database (Production)
POSTGRES_USER=solarlog_prod
POSTGRES_PASSWORD=<strong-random-password>
POSTGRES_DB=solarlog_production
DATABASE_URL=postgresql://solarlog_prod:<password>@postgres:5432/solarlog_production

# Backend
SECRET_KEY=<random-32-char-string>
DEBUG=false
LOG_LEVEL=WARNING
ENVIRONMENT=production

# Security
CORS_ORIGINS=https://solarlog.karma.organic,https://solarlog-api.karma.organic
ALLOWED_HOSTS=solarlog.karma.organic,solarlog-api.karma.organic

# Polling
POLL_INTERVAL=60
MAX_RETRIES=3
TIMEOUT=30

# Monitoring
SENTRY_DSN=https://your-sentry-dsn@sentry.io/project
PROMETHEUS_ENABLED=true
```
### Generating secrets

```bash
# SECRET_KEY (32 random bytes = 64 hex characters)
openssl rand -hex 32

# PostgreSQL password (16 random bytes, ~24 characters base64)
openssl rand -base64 16

# Store all secrets in .env.production
```
## Docker Compose Production

**`docker-compose.prod.yml`**
```yaml
version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./backups:/backups
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M

  backend:
    image: solarlog-backend:latest
    restart: always
    environment:
      DATABASE_URL: ${DATABASE_URL}
      SECRET_KEY: ${SECRET_KEY}
      DEBUG: ${DEBUG}
      LOG_LEVEL: ${LOG_LEVEL}
      SENTRY_DSN: ${SENTRY_DSN}
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 512M
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  frontend:
    image: solarlog-frontend:latest
    restart: always
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:80"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 256M
        reservations:
          cpus: '0.25'
          memory: 128M

  nginx:
    image: nginx:alpine
    restart: always
    volumes:
      - ./nginx/nginx.prod.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro  # SSL certificates
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - backend
      - frontend
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
  # Backup service (cron-based). The mounted backup.sh runs *inside* this
  # container, so it must invoke pg_dump directly against the postgres
  # service; the host-cron variant below uses docker compose exec instead.
  backup:
    image: postgres:16-alpine
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      PGPASSWORD: ${POSTGRES_PASSWORD}  # lets pg_dump authenticate non-interactively
    volumes:
      - ./backups:/backups
      - ./scripts/backup.sh:/backup.sh
    # Install the schedule non-interactively; `crontab -e` would block
    # waiting for an editor and cron would never start.
    command: /bin/sh -c "echo '0 2 * * * sh /backup.sh' | crontab - && crond -f"
    depends_on:
      - postgres
volumes:
  postgres-data:
    driver: local
    driver_opts:
      type: none
      device: /data/solarlog/postgres
      o: bind

networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
```
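The `postgres-data` volume bind-mounts `/data/solarlog/postgres`; Docker does not create that host directory on its own, so prepare it once before the first start:

```bash
# Create the bind-mount target for the postgres data volume
sudo mkdir -p /data/solarlog/postgres
```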
## Deployment Steps

### 1. Server setup

```bash
# Prepare the server (Ubuntu/Debian)
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io docker-compose git curl

# Enable Docker
sudo systemctl enable docker
sudo systemctl start docker

# Add the user to the docker group
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`) for the group change to apply
```
### 2. Clone the repository

```bash
# Deploy the code
cd /opt
sudo git clone https://github.com/yourusername/solarlog.git
cd solarlog

# Switch to the production branch
git checkout production
```
### 3. Configure the environment

```bash
# Create the production env file
cp .env.example .env.production
nano .env.production  # enter the secrets

# Restrict permissions
chmod 600 .env.production
```
### 4. Build images

```bash
# Production build
docker compose -f docker-compose.prod.yml build

# Start and verify the stack
docker compose -f docker-compose.prod.yml up -d
docker compose -f docker-compose.prod.yml ps
```
### 5. Database migrations

```bash
# Run migrations
docker compose exec backend alembic upgrade head

# Verify the current revision
docker compose exec backend alembic current
```
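If a deploy has to be rolled back, Alembic can also step migrations backwards (standard Alembic commands; `<revision-id>` is a placeholder):

```bash
# Roll back the most recent migration
docker compose exec backend alembic downgrade -1

# Or roll back to a specific revision
docker compose exec backend alembic downgrade <revision-id>
```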
### 6. Initial data

```bash
# Create the admin user
docker compose exec backend python scripts/create_admin.py

# Sample inverters (optional)
docker compose exec backend python scripts/seed_inverters.py
```
## SSL/TLS Setup

### Option 1: Cloudflare (recommended)

Cloudflare provides SSL automatically:

- No manual certificate management
- Automatic renewal
- Global CDN

Setup: Cloudflare Tunnel
### Option 2: Let's Encrypt (Certbot)

```bash
# Install Certbot
sudo apt install certbot python3-certbot-nginx

# Issue the certificate
sudo certbot --nginx -d solarlog.karma.organic -d solarlog-api.karma.organic

# Test auto-renewal
sudo certbot renew --dry-run
```
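Note that `certbot --nginx` expects nginx to run on the host. Since nginx runs in a container in this stack, one alternative is standalone mode with port 80 temporarily freed (a sketch, not the only option):

```bash
# Free port 80 so certbot's temporary server can bind it
docker compose -f docker-compose.prod.yml stop nginx

# Issue the certificate in standalone mode
sudo certbot certonly --standalone \
  -d solarlog.karma.organic -d solarlog-api.karma.organic

# Bring nginx back up (certificates are under /etc/letsencrypt/live/)
docker compose -f docker-compose.prod.yml start nginx
```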
### Nginx SSL config

```nginx
server {
    listen 443 ssl http2;
    server_name solarlog.karma.organic;

    ssl_certificate /etc/letsencrypt/live/solarlog.karma.organic/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/solarlog.karma.organic/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;

    location / {
        proxy_pass http://frontend:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
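To keep plain-HTTP visitors from reaching an unencrypted site, a matching redirect server block is usually added alongside the one above:

```nginx
# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    server_name solarlog.karma.organic;
    return 301 https://$host$request_uri;
}
```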
## Backup Strategy

### Automated database backup

Script: **`scripts/backup.sh`** (host-cron variant)

```bash
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/backups"
DATE=$(date +%Y%m%d_%H%M%S)
FILENAME="solarlog_backup_$DATE.sql.gz"

# Create the backup
docker compose exec -T postgres pg_dump -U solarlog_prod solarlog_production | gzip > "$BACKUP_DIR/$FILENAME"

# Delete backups older than 30 days
find "$BACKUP_DIR" -name "solarlog_backup_*.sql.gz" -mtime +30 -delete

echo "Backup created: $FILENAME"
```
### Cron job

```bash
# Daily backup at 2 a.m.
crontab -e

# Add:
0 2 * * * /opt/solarlog/scripts/backup.sh >> /var/log/solarlog-backup.log 2>&1
```
### Offsite backup

```bash
# S3 backup (AWS CLI); sync uploads any archives not yet in the bucket
aws s3 sync /backups/ s3://your-bucket/backups/

# Or rsync to a remote server
rsync -avz /backups/ user@backup-server:/backups/solarlog/
```
## Monitoring

### Prometheus + Grafana

**`docker-compose.monitoring.yml`**

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards
    ports:
      - "3001:3000"

volumes:
  prometheus-data:
  grafana-data:
```
### Metrics endpoints

```bash
# Backend metrics
curl http://localhost:8000/metrics

# Nginx metrics (nginx-exporter)
curl http://localhost:9113/metrics
```
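The mounted `prometheus.yml` is not shown in this guide; a minimal scrape configuration for the two endpoints above might look like this (job names, the `nginx-exporter` service name, and the interval are assumptions to adapt):

```yaml
global:
  scrape_interval: 30s

scrape_configs:
  # Backend, exposes /metrics (Prometheus' default metrics path)
  - job_name: solarlog-backend
    static_configs:
      - targets: ["backend:8000"]

  # nginx-exporter sidecar
  - job_name: nginx
    static_configs:
      - targets: ["nginx-exporter:9113"]
```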
## Logging

### Centralized logging (Loki)

```yaml
services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    volumes:
      - ./loki/loki-config.yml:/etc/loki/local-config.yaml
      - loki-data:/loki

  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log:ro
      - ./promtail/promtail-config.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml

# Named volume for Loki's index and chunks
volumes:
  loki-data:
```
### Log rotation

```
# /etc/logrotate.d/solarlog
/var/log/solarlog/*.log {
    daily
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 root adm
    sharedscripts
    postrotate
        cd /opt/solarlog && docker compose restart backend
    endscript
}
```
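Logrotate configs are easy to get subtly wrong; a dry run shows what would be rotated without touching any files:

```bash
# Simulate rotation for this config only (-d = debug/dry run)
sudo logrotate -d /etc/logrotate.d/solarlog
```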
## Security Hardening

### Firewall (UFW)

```bash
# Allow the required ports first, so enabling UFW cannot lock out SSH
sudo ufw allow 22/tcp   # SSH
sudo ufw allow 80/tcp   # HTTP
sudo ufw allow 443/tcp  # HTTPS

# Enable UFW
sudo ufw enable

# Check the status
sudo ufw status verbose
```
### Fail2Ban

```bash
# Install Fail2Ban
sudo apt install fail2ban

# Configure
sudo nano /etc/fail2ban/jail.local
```

```ini
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
```
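After editing the jail, enable and verify the service (standard systemd and fail2ban-client commands):

```bash
# Start Fail2Ban now and on every boot
sudo systemctl enable --now fail2ban

# Check that the sshd jail is active
sudo fail2ban-client status sshd
```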
### Docker security

```yaml
# In docker-compose.prod.yml
services:
  backend:
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp
```
## High Availability

### Load balancing

```nginx
upstream backend_servers {
    least_conn;
    server backend1:8000 weight=1 max_fails=3 fail_timeout=30s;
    server backend2:8000 weight=1 max_fails=3 fail_timeout=30s;
    server backend3:8000 backup;
}

server {
    location /api {
        proxy_pass http://backend_servers;
    }
}
```
### Database replication

Note: the official `postgres` image does not configure replication through environment variables. The variables below follow the convention of the Bitnami PostgreSQL image (`bitnami/postgresql`), which does.

**Primary:**

```yaml
postgres-primary:
  image: bitnami/postgresql:16
  environment:
    POSTGRESQL_REPLICATION_MODE: master
    POSTGRESQL_REPLICATION_USER: replicator
    POSTGRESQL_REPLICATION_PASSWORD: repl_password
```

**Replica:**

```yaml
postgres-replica:
  image: bitnami/postgresql:16
  environment:
    POSTGRESQL_REPLICATION_MODE: slave
    POSTGRESQL_REPLICATION_USER: replicator
    POSTGRESQL_REPLICATION_PASSWORD: repl_password
    POSTGRESQL_MASTER_HOST: postgres-primary
    POSTGRESQL_MASTER_PORT_NUMBER: 5432
```
Health Checks & Auto-Restart
services:
backend:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
## Performance Tuning

### PostgreSQL

```ini
# postgresql.conf
shared_buffers = 256MB
effective_cache_size = 1GB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 16MB
max_connections = 100
```
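Since postgres runs in a container here, one way to apply these settings without baking a custom postgresql.conf into the image is to pass them as server flags in docker-compose.prod.yml (a sketch; `-c` is the standard postgres option for runtime settings):

```yaml
  postgres:
    image: postgres:16-alpine
    # Override individual postgresql.conf settings at startup
    command: >
      postgres
      -c shared_buffers=256MB
      -c effective_cache_size=1GB
      -c work_mem=16MB
      -c max_connections=100
```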
### Backend (Uvicorn)

```dockerfile
# In the Dockerfile
CMD ["uvicorn", "app.main:app", \
     "--host", "0.0.0.0", \
     "--port", "8000", \
     "--workers", "4", \
     "--loop", "uvloop", \
     "--http", "httptools"]
```
### Nginx caching

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m max_size=1g inactive=60m;

location /api {
    proxy_cache api_cache;
    proxy_cache_valid 200 5m;
    proxy_cache_use_stale error timeout updating;
    add_header X-Cache-Status $upstream_cache_status;
}
```
## Maintenance

### Rolling updates

```bash
# Update the backend (brief restart; for true zero downtime,
# run multiple backend replicas behind the load balancer above)
docker compose -f docker-compose.prod.yml up -d --no-deps --build backend

# Update the frontend
docker compose -f docker-compose.prod.yml up -d --no-deps --build frontend
```
### Database maintenance

```bash
# Vacuum
docker compose exec postgres psql -U solarlog_prod -d solarlog_production -c "VACUUM ANALYZE;"

# Reindex
docker compose exec postgres psql -U solarlog_prod -d solarlog_production -c "REINDEX DATABASE solarlog_production;"
```
## Disaster Recovery

### Restore from backup

```bash
# Stop the application, but keep postgres running for the restore
docker compose stop backend frontend

# Restore the database
gunzip -c /backups/solarlog_backup_20250123.sql.gz | \
  docker compose exec -T postgres psql -U solarlog_prod -d solarlog_production

# Start the full stack again
docker compose up -d
```
### Failover procedure

1. Health check fails → auto-restart (up to 3 attempts)
2. Persistent failure → promote the replica to primary
3. Notification → Slack/email alert
4. Manual intervention → investigation
## Checklists

### Pre-deploy

- Code tested (unit + integration tests)
- Database migrations prepared
- Backup created
- Monitoring enabled
- SSL certificate valid
- DNS propagated

### Post-deploy

- Health checks green
- Logs checked (no errors)
- Performance tested
- Backup job running
- Monitoring dashboards updated
- Documentation updated