Enterprise Upgrade - Comprehensive Analysis
Date: October 24, 2025
Status: Analysis & planning phase
Goal: Raspberry Pi production system with enterprise features
Executive Summary
The current system is a development environment (macOS + Docker). The upgrade targets a production-ready Raspberry Pi system with:
- ✅ Grafana dashboard for enterprise customers
- ✅ Raspberry Pi 4/5 as host running NixOS
- ✅ LVGL touchscreen UI for installers
- ✅ VS Code Server for developers
- ✅ Battery management system
- ✅ EVCC & Home Assistant integration
- ✅ Bootable image for easy installation
Part 1: Grafana Enterprise Dashboard
Requirement
"For enterprise customers with large installations: a Grafana dashboard for PV statistics"
Analysis
Option A: Grafana OSS (Open Source)
Pros:
- ✅ Completely free (AGPLv3 license)
- ✅ Full visualization feature set
- ✅ TimescaleDB/PostgreSQL integration possible
- ✅ Template available: Dashboard 13295
- ✅ Runs as a Docker container
Cons:
- ⚠️ No enterprise features (fine-grained RBAC, reporting)
- ⚠️ Additional container (RAM usage)
Reference project:
https://github.com/michbeck100/pv-monitoring
- Uses Grafana OSS + InfluxDB
- Docker Compose setup
- Solar-specific dashboards
Option B: Apache Superset (alternative)
Pros:
- ✅ Fully open source
- ✅ Queries PostgreSQL directly
- ✅ More control over dashboards
Cons:
- ⚠️ Fewer solar-specific templates
- ⚠️ More complex configuration
Recommendation: Grafana OSS
Implementation:
```yaml
# docker-compose.yml extension
services:
  grafana:
    image: grafana/grafana-oss:latest
    container_name: solarlog-grafana
    ports:
      - "3001:3000"
    volumes:
      - ./data/grafana:/var/lib/grafana
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./grafana/datasources:/etc/grafana/provisioning/datasources
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
      - GF_INSTALL_PLUGINS=
    depends_on:
      - postgres
    networks:
      - solarlog-network

  # Prometheus for metrics (optional)
  prometheus:
    image: prom/prometheus:latest
    container_name: solarlog-prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./data/prometheus:/prometheus
    networks:
      - solarlog-network
```
Database connection:
1. Grafana → PostgreSQL (existing database)
2. Queries for:
   - daily yield per inverter
   - total yield
   - performance ratio
   - offline periods
   - peak power
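The query list above amounts to time-bucketed aggregations over the production data. As a sketch of the daily-yield math only (sample rows and a fixed 15-minute sampling interval are assumptions, not the real schema), in Python:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical 15-minute power readings (W) per inverter.
samples = [
    ("WR1", datetime(2025, 10, 24, 12, 0), 3200.0),
    ("WR1", datetime(2025, 10, 24, 12, 15), 3400.0),
    ("WR2", datetime(2025, 10, 24, 12, 0), 2800.0),
]

def daily_yield_kwh(rows, interval_h=0.25):
    """Approximate daily energy per inverter: sum(power_W * interval_h) / 1000."""
    yields = defaultdict(float)
    for inverter, ts, power_w in rows:
        yields[(inverter, ts.date())] += power_w * interval_h / 1000.0
    return dict(yields)

print(daily_yield_kwh(samples))
```

In Grafana this would become a `GROUP BY` over a time bucket per inverter; the Python version only pins down the arithmetic.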
Estimated resources: ~150 MB RAM (Grafana) + ~50 MB (Prometheus), ~500 MB disk.
Part 2: Raspberry Pi Host System
Hardware Requirements
Raspberry Pi 4 vs. 5
| Feature | Pi 4 (8GB) | Pi 5 (8GB) | Recommendation |
|---|---|---|---|
| CPU | 4x 1.5 GHz | 4x 2.4 GHz | Pi 5 (~60% faster) |
| RAM | 8GB LPDDR4 | 8GB LPDDR4X | Pi 5 (faster) |
| USB | USB 3.0 | USB 3.0 | Equal |
| Ethernet | 1 Gbit | 1 Gbit | Equal |
| PCIe | - | M.2 HAT+ | Pi 5 (NVMe possible!) |
| Docker | Slow | Fast | Pi 5 |
| LVGL UI | 30 fps | 60 fps | Pi 5 |
| Price | ~€90 | ~€110 | Pi 5 (+€20, worth it) |
Recommendation: Raspberry Pi 5 (8GB)
Reasons:
1. ✅ Docker runs about twice as fast
2. ✅ NVMe SSD possible via the M.2 HAT (database performance!)
3. ✅ Smoother LVGL UI (touchscreen)
4. ✅ Grafana + Home Assistant + EVCC can run simultaneously
5. ✅ Future-proof for additional features
Storage
SD card (boot):
- Minimum: 64GB (Class 10, A2)
- Recommended: 128GB Samsung Evo Plus
- Used only for: OS, system, config

NVMe SSD (data):
- Minimum: 256GB NVMe
- Recommended: 512GB Samsung 980
- Used for: PostgreSQL, Grafana, logs, backups
- Connection: Raspberry Pi M.2 HAT+
Partitioning:

```
SD card (128GB):
- /boot   (512MB) - NixOS bootloader
- /       (100GB) - NixOS system + Docker images
- swap    (4GB)
- /config (20GB)  - config files, .env

NVMe SSD (512GB):
- /var/lib/postgresql (200GB) - database
- /var/lib/docker     (100GB) - container data
- /var/log            (50GB)  - logs
- /backups            (150GB) - automated backups
```
USB-C Touchscreen
LVGL UI requirements:
- Resolution: at least 800x480 (optimal: 1024x600)
- Touch: capacitive (10-point)
- Connection: USB-C (power + data)
- Recommendation: Waveshare 7" DSI touch display
Operating System: NixOS
Why NixOS?
Advantages:
1. ✅ Reproducible: the complete system configuration lives in /etc/nixos/configuration.nix
2. ✅ Rollback: simply revert to the previous generation if something breaks
3. ✅ Declarative: all services, Docker, and the tunnel in one config
4. ✅ Image creation: nixos-generate produces a bootable image
5. ✅ Security: minimal system with only the required packages
Drawback:
- ⚠️ Steep learning curve (but we already have experience)
NixOS on Raspberry Pi 5
Support status:
- ✅ Officially supported since NixOS 23.11
- ✅ ARM64 image available
- ✅ Device tree for the Pi 5 included
Installation:

```nix
# /etc/nixos/configuration.nix for the Raspberry Pi 5
{ config, pkgs, ... }:
{
  # Raspberry Pi 5 specific
  boot = {
    kernelPackages = pkgs.linuxPackages_rpi5;
    loader = {
      grub.enable = false;
      generic-extlinux-compatible.enable = true;
    };
    initrd.availableKernelModules = [ "nvme" "usb_storage" "sd_mod" ];
  };

  # Hardware
  hardware.raspberry-pi."5".enable = true;
  hardware.raspberry-pi."5".fkms-3d.enable = true; # GPU for LVGL

  # Networking
  networking = {
    hostName = "solarlog-pi";
    networkmanager.enable = true;
    firewall = {
      enable = true;
      allowedTCPPorts = [ 22 ]; # SSH, local network only
      interfaces.lo.allowedTCPPorts = [ 8080 3001 8123 7575 ]; # services
    };
  };

  # Docker
  virtualisation.docker = {
    enable = true;
    storageDriver = "overlay2";
    autoPrune.enable = true;
  };

  # Services
  services = {
    openssh = {
      enable = true;
      settings.PermitRootLogin = "no";
      settings.PasswordAuthentication = false;
    };

    # Cloudflare Tunnel
    cloudflared = {
      enable = true;
      config = "/etc/cloudflared/config.yml";
    };
  };

  # Users
  users.users.installer = {
    isNormalUser = true;
    extraGroups = [ "wheel" "docker" "dialout" ]; # dialout for ESP32 flashing
    packages = with pkgs; [ vim git ];
  };
  users.users.developer = {
    isNormalUser = true;
    extraGroups = [ "docker" ];
    shell = pkgs.zsh;
  };

  # System packages
  environment.systemPackages = with pkgs; [
    docker-compose
    cloudflared
    postgresql_15
    python311
    nodejs_20
    git
    vim
    htop
    esptool # ESP32 flashing!
  ];
}
```
LVGL Touchscreen UI
Architecture
Problem: LVGL is natively C/C++, not Python/TypeScript.
Possible approaches:
Option A: LVGL MicroPython (✅ recommended)
```python
# Runs directly on the Pi, no Docker.
# Uses /dev/fb0 (framebuffer) + /dev/input/event0 (touch).
import lvgl as lv
import lv_drivers
import requests

# Display init
lv_drivers.fb_init()
disp_drv = lv.disp_drv_t()
lv_drivers.fb_register(disp_drv)

# Touch init
lv_drivers.evdev_init()
indev_drv = lv.indev_drv_t()
lv_drivers.evdev_register(indev_drv)

# UI screens
screen_main = lv.obj()
label_power = lv.label(screen_main)
label_power.set_text("⚡ 3.2 kW")
label_power.set_pos(10, 10)

# API integration
def update_ui():
    resp = requests.get("http://localhost:8000/api/v1/production/latest")
    data = resp.json()
    label_power.set_text(f"⚡ {data['current_power'] / 1000:.1f} kW")

lv.timer_create(lambda t: update_ui(), 5000, None)
lv.scr_load(screen_main)
while True:
    lv.task_handler()
```
Option B: Browser-based (simpler, but lower performance)
```shell
# Chromium kiosk mode
chromium-browser \
  --kiosk \
  --disable-infobars \
  --noerrdialogs \
  --touch-events=enabled \
  http://localhost:3000
```
Recommendation for the installer UI:
Hybrid approach:
1. LVGL for the system menu (independent, fast, touch-optimized)
   - Network config (IP x.x.x.x)
   - Service status (Docker, tunnel, DB)
   - ESP32 flashing
   - Reboot/shutdown
2. Browser (Chromium kiosk) for the dashboard
   - https://solarlog.karma.organic/
   - Regular web UI
   - Inverter management
Service architecture:

```
Pi boot
 ├─> LVGL installer UI (systemd service, auto-start)
 │     └─> "Dashboard" button -> launches the browser
 ├─> Docker (backend, frontend, Nginx, Grafana)
 └─> Cloudflare Tunnel
```
Installer vs. Developer Environment
Installer (default mode)
Access:
- ✅ LVGL touchscreen UI
- ✅ Browser: https://solarlog.karma.organic/
- ✅ SSH, local network only
- ❌ No VS Code Server
- ❌ No direct container access
Features:
- Network configuration
- Service status check
- ESP32 flashing (via WebSerial or the Pi's USB port)
- System reboot
- Backup/restore
Developer (dev mode)
Activation:

```shell
# Via the installer UI: "Enable developer mode"
# Or on the CLI:
sudo nixos-rebuild switch --flake .#solarlog-dev
```
Additional services:
- ✅ VS Code Server (via tunnel: code.solarlog.karma.organic)
- ✅ GitHub Copilot + Chat
- ✅ Extension gallery (Open VSX)
- ✅ SSH with GitHub auth
- ✅ Docker debug tools
- ✅ PostgreSQL pgAdmin
VS Code Server setup:

```nix
# configuration.nix
services.code-server = {
  enable = true;
  auth = "none"; # auth is handled by the Cloudflare Tunnel
  host = "127.0.0.1";
  port = 8443;
  extensionsDir = "/var/lib/code-server/extensions";
  userDataDir = "/var/lib/code-server/data";
};

# Pre-installed extensions
environment.etc."code-server/extensions.txt".text = ''
  github.copilot
  github.copilot-chat
  ms-python.python
  ms-azuretools.vscode-docker
  esbenp.prettier-vscode
'';
```
Security:
- SSH with key auth only (no passwords)
- VS Code Server reachable only through the Cloudflare Tunnel
- Separate user: developer (not installer)
- Audit logging of all developer actions
Cloudflare Tunnel Configuration
```yaml
# /etc/cloudflared/config.yml
tunnel: <TUNNEL_ID>
credentials-file: /etc/cloudflared/credentials.json

ingress:
  # Frontend (public)
  - hostname: solarlog.karma.organic
    service: http://localhost:8080
    originRequest:
      noTLSVerify: true
  # Grafana enterprise dashboard (public)
  - hostname: grafana.solarlog.karma.organic
    service: http://localhost:3001
    originRequest:
      noTLSVerify: true
  # VS Code Server (developers only, with access policy)
  - hostname: code.solarlog.karma.organic
    service: http://localhost:8443
    originRequest:
      noTLSVerify: true
  # EVCC (optional, public or behind an access policy)
  - hostname: evcc.solarlog.karma.organic
    service: http://localhost:7070
    originRequest:
      noTLSVerify: true
  # Home Assistant (optional, with access policy)
  - hostname: ha.solarlog.karma.organic
    service: http://localhost:8123
    originRequest:
      noTLSVerify: true
  # Catch-all
  - service: http_status:404
```
Cloudflare Access policies:

```yaml
# VS Code Server - developers only
- name: "VS Code Server Access"
  decision: allow
  include:
    - email:
        - developer@karma.organic
  applications:
    - code.solarlog.karma.organic

# Home Assistant - admins only
- name: "Home Assistant Access"
  decision: allow
  include:
    - email_domain:
        - karma.organic
  applications:
    - ha.solarlog.karma.organic
```
EVCC Integration
EVCC: wallbox & EV charging control using solar surplus
Docker service:
```yaml
# docker-compose.yml
services:
  evcc:
    image: evcc/evcc:latest
    container_name: solarlog-evcc
    ports:
      - "7070:7070"
    volumes:
      - ./evcc/evcc.yaml:/etc/evcc.yaml
      - ./data/evcc:/data
    environment:
      - EVCC_DATABASE_DSN=postgresql://user:pass@postgres:5432/solarlog
    depends_on:
      - postgres
    networks:
      - solarlog-network
```
Integration:
1. EVCC uses /api/v1/production/latest for solar data
2. Charges the EV only when there is surplus
3. Dashboard integrated into Grafana
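The surplus rule in step 2 can be sketched as follows. This is only an illustration with hypothetical thresholds; EVCC's real control loop also handles phase switching, hysteresis, and vehicle limits:

```python
def surplus_charge_current(pv_power_w, house_load_w,
                           voltage_v=230.0, phases=1, min_current_a=6.0):
    """Return the charge current (A) that consumes only PV surplus,
    or 0.0 if the surplus does not reach the minimum charging current."""
    surplus_w = pv_power_w - house_load_w
    current_a = surplus_w / (voltage_v * phases)
    return current_a if current_a >= min_current_a else 0.0

print(surplus_charge_current(5000, 1200))  # enough surplus -> charge
print(surplus_charge_current(2000, 1500))  # below the 6 A minimum -> no charging
```

The 6 A floor mirrors the IEC 61851 minimum charging current; everything else here is simplified.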
Home Assistant Integration
Home Assistant: smart home hub
Docker service:
```yaml
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    container_name: solarlog-homeassistant
    ports:
      - "8123:8123"
    volumes:
      - ./homeassistant:/config
    environment:
      - TZ=Europe/Berlin
    privileged: true
    networks:
      - solarlog-network
```
SolarLog integration:

```yaml
# homeassistant/configuration.yaml
sensor:
  - platform: rest
    name: "Solar Power"
    resource: "http://localhost:8000/api/v1/production/latest"
    json_attributes:
      - current_power
      - daily_energy
    value_template: "{{ value_json.current_power }}"
    unit_of_measurement: "W"
    scan_interval: 30

  - platform: template
    sensors:
      solar_daily_energy:
        friendly_name: "Solar daily yield"
        value_template: "{{ state_attr('sensor.solar_power', 'daily_energy') }}"
        unit_of_measurement: "kWh"
```
Part 3: Battery Management System
Architecture
Similar to an inverter, but modeled as its own entity:
```python
# backend/app/database/models.py
class Battery(Base):
    __tablename__ = "batteries"

    id = Column(Integer, primary_key=True)
    uuid = Column(String(36), unique=True, nullable=False)
    name = Column(String(100), nullable=False)
    manufacturer = Column(String(100))  # "BYD", "Tesla", "SonnenBatterie"
    model = Column(String(100))
    capacity_kwh = Column(Float)  # capacity in kWh
    type = Column(String(50))  # "lithium", "leadacid", "saltwater"
    installation_date = Column(Date)
    location = Column(String(200))
    api_endpoint = Column(String(255))
    api_type = Column(String(50))  # "modbus", "rest", "mqtt"
    enabled = Column(Boolean, default=True)
    is_demo = Column(Boolean, default=False)
    demo_simulate_offline = Column(Boolean, default=False)

    # Relationships
    battery_data = relationship("BatteryData", back_populates="battery")


class BatteryData(Base):
    __tablename__ = "battery_data"

    id = Column(Integer, primary_key=True)
    battery_id = Column(Integer, ForeignKey("batteries.id"))
    timestamp = Column(DateTime, default=datetime.utcnow)

    # Status
    state_of_charge = Column(Float)  # SOC in % (0-100)
    voltage = Column(Float)  # voltage in V
    current = Column(Float)  # current in A (+ = charging, - = discharging)
    power = Column(Float)  # power in W (+ = charging, - = discharging)
    temperature = Column(Float)  # temperature in °C

    # Statistics
    cycles = Column(Integer)  # charge cycles
    health = Column(Float)  # state of health in % (100 = new)

    # Energy
    charged_today = Column(Float)  # charged today in kWh
    discharged_today = Column(Float)  # discharged today in kWh
    charged_total = Column(Float)  # total charged in kWh
    discharged_total = Column(Float)  # total discharged in kWh

    battery = relationship("Battery", back_populates="battery_data")
```
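The sign convention in the model comments (positive power = charging) is worth pinning down explicitly, since both the frontend and the API depend on it. A small illustrative helper; the names here are hypothetical, not part of the backend:

```python
from dataclasses import dataclass

@dataclass
class BatterySample:
    """Mirrors BatteryData's sign convention: power > 0 means charging."""
    state_of_charge: float  # %
    power: float            # W, + = charging, - = discharging

def battery_status(sample: BatterySample) -> str:
    """Classify a sample using the model's sign convention."""
    if sample.power > 0:
        return "charging"
    if sample.power < 0:
        return "discharging"
    return "idle"

print(battery_status(BatterySample(87.0, 1500.0)))   # charging
print(battery_status(BatterySample(45.0, -2000.0)))  # discharging
```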
EVCC Battery APIs
EVCC already supports many battery systems:
```yaml
# Example: BYD Battery-Box Premium HVS
batteries:
  - name: byd_battery
    type: custom
    power:
      source: modbus
      uri: 192.168.1.100:502
      id: 1
      register:
        address: 30775
        type: holding
        decode: int16
    soc:
      source: modbus
      uri: 192.168.1.100:502
      id: 1
      register:
        address: 30843
        type: holding
        decode: uint16
```
API types to adopt from EVCC:
1. Modbus TCP/RTU
2. REST API
3. MQTT
4. SMA Sunny Island
5. Tesla Powerwall
6. Kostal PLENTICORE
7. Fronius
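The `decode: int16` / `decode: uint16` fields in the example above decide how the raw 16-bit register content is interpreted, which matters for the power register (negative = discharging). A minimal sketch of that decoding; device-specific scaling factors are omitted:

```python
def decode_register(raw: int, kind: str) -> int:
    """Interpret a raw 16-bit Modbus register as signed or unsigned."""
    if kind == "uint16":
        return raw & 0xFFFF
    if kind == "int16":
        v = raw & 0xFFFF
        # Two's complement: values >= 0x8000 are negative.
        return v - 0x10000 if v >= 0x8000 else v
    raise ValueError(f"unknown decode kind: {kind}")

# Power register (int16): raw 0xFC18 -> -1000 W (discharging)
print(decode_register(0xFC18, "int16"))
# SOC register (uint16): raw 87 -> 87 %
print(decode_register(87, "uint16"))
```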
Frontend Extension

```typescript
// frontend-web/src/types/battery.ts
export interface Battery {
  id: number;
  uuid: string;
  name: string;
  manufacturer: string;
  model: string;
  capacity_kwh: number;
  type: string;
  enabled: boolean;
  is_demo: boolean;
  // Latest data
  state_of_charge: number; // %
  power: number; // W (+ = charging, - = discharging)
  voltage: number;
  temperature: number;
  health: number; // %
  cycles: number;
}

// frontend-web/src/components/BatteryCard.tsx
export const BatteryCard: React.FC<{ battery: Battery }> = ({ battery }) => {
  const isCharging = battery.power > 0;
  return (
    <Card>
      <CardHeader>
        <h3>{battery.name}</h3>
        {/* battery icon with SOC indicator */}
      </CardHeader>
      <CardBody>
        <div>SOC: {battery.state_of_charge}%</div>
        <div>Power: {Math.abs(battery.power)}W {isCharging ? '⬆️' : '⬇️'}</div>
        <div>Health: {battery.health}%</div>
        <div>Cycles: {battery.cycles}</div>
      </CardBody>
    </Card>
  );
};
```
Dashboard layout:

```
+------------------------------------------+
| Solar Production: 3.2 kW                 |
+------------------------------------------+
| Inverters (3)                            |
|   [WR1]  [WR2]  [WR3]                    |
+------------------------------------------+
| Batteries (2)                            |
|   [BYD 10kWh    87%  charging]           |
|   [Tesla 13kWh  45%  discharging]        |
+------------------------------------------+
```
ESP32 Battery Monitoring
ESP32 extension:

```cpp
// INVERTER-ESP/src/battery_monitor.h
class BatteryMonitor {
private:
    String battery_uuid;
    ModbusClient modbus;

public:
    struct BatteryData {
        float soc;
        float voltage;
        float current;
        float power;
        float temperature;
        int cycles;
    };

    BatteryData readData() {
        BatteryData data;
        data.soc = modbus.readHoldingRegister(30843);
        data.voltage = modbus.readHoldingRegister(30775);
        // ...
        return data;
    }

    void sendToAPI(BatteryData& data) {
        HTTPClient http;
        http.begin(API_URL + "/batteries/" + battery_uuid + "/data");
        // POST the data
    }
};
```
Part 4: Backup & Transfer Systems
Current Problems
- ❌ No automated backups
- ❌ No disaster recovery plan
- ❌ No migration path between dev and prod
- ❌ The UUID system is not covered by any backup
Enterprise Backup Strategy
3-2-1 rule:
- 3 copies of the data
- 2 different storage media
- 1 copy off-site
Implementation
Level 1: continuous database backup (WAL archiving)

```nix
# NixOS configuration
services.postgresql = {
  enable = true;
  package = pkgs.postgresql_15;
  settings = {
    # WAL archiving for point-in-time recovery
    wal_level = "replica";
    archive_mode = "on";
    archive_command = "cp %p /backups/wal/%f";
    max_wal_senders = 3;
  };
};
```
Level 2: daily full backup

```shell
#!/usr/bin/env bash
# scripts/backup-full.sh
BACKUP_DIR="/backups/daily"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

# Database dump
docker exec solarlog-postgres pg_dump -U solarlog -Fc solarlog > \
  "$BACKUP_DIR/db_${TIMESTAMP}.dump"

# Docker volumes
docker run --rm \
  -v solarlog_postgres_data:/data \
  -v $BACKUP_DIR:/backup \
  alpine tar czf "/backup/volumes_${TIMESTAMP}.tar.gz" /data

# Config files
tar czf "$BACKUP_DIR/config_${TIMESTAMP}.tar.gz" \
  /etc/solarlog \
  /etc/cloudflared \
  /etc/nixos

# Cleanup: keep the last 7 days
find $BACKUP_DIR -type f -mtime +7 -delete

# Upload to cloud storage (optional)
rclone sync $BACKUP_DIR remote:solarlog-backups
```
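The `find -mtime +7 -delete` cleanup above is easy to get wrong. A hypothetical Python helper that only lists what would be pruned is useful as a dry run before enabling deletion; the function name and interface are illustrative, not part of the project:

```python
import time
from pathlib import Path

def backups_to_prune(backup_dir, max_age_days=7, now=None):
    """Return files older than max_age_days, mirroring
    `find BACKUP_DIR -type f -mtime +7` without deleting anything."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    return [p for p in Path(backup_dir).iterdir()
            if p.is_file() and p.stat().st_mtime < cutoff]
```

Running this against `/backups/daily` before switching on the `-delete` flag makes the retention window verifiable.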
Level 3: off-site replication

```nix
# Systemd timer
systemd.services.solarlog-backup = {
  description = "SolarLog Daily Backup";
  script = "${pkgs.bash}/bin/bash /scripts/backup-full.sh";
  serviceConfig = {
    Type = "oneshot";
    User = "postgres";
  };
};

systemd.timers.solarlog-backup = {
  wantedBy = [ "timers.target" ];
  timerConfig = {
    OnCalendar = "*-*-* 03:00:00"; # daily at 03:00
    Persistent = true;
  };
};
```
Migration Script (Dev → Prod)

```shell
#!/usr/bin/env bash
# scripts/migrate-to-pi.sh
set -e

echo "SolarLog migration: dev -> Raspberry Pi"

# 1. Back up the dev database
echo "Backing up database..."
docker exec solarlog-postgres pg_dump -U solarlog -Fc solarlog > /tmp/solarlog.dump

# 2. Export the config
echo "Exporting config..."
tar czf /tmp/solarlog-config.tar.gz \
  .env \
  docker-compose.yml \
  deployment/cloudflare

# 3. Sync to the Pi
echo "Syncing to Pi..."
rsync -avz --progress \
  /tmp/solarlog.dump \
  /tmp/solarlog-config.tar.gz \
  installer@raspberrypi.local:/tmp/

# 4. Remote: restore on the Pi
echo "Restoring on Pi..."
ssh installer@raspberrypi.local << 'EOF'
cd /opt/solarlog

# Stop services
docker compose down

# Restore the database (the dump lives on the host, so feed it via stdin)
docker compose up -d postgres
sleep 5
docker exec -i solarlog-postgres pg_restore -U solarlog -d solarlog -c < /tmp/solarlog.dump

# Extract the config
tar xzf /tmp/solarlog-config.tar.gz -C /opt/solarlog

# Start all services
docker compose up -d

# Health check
sleep 10
curl -f http://localhost:8000/health || exit 1
EOF

echo "Migration complete!"
```
Raspberry Pi Image Creation
Image Build Process
Uses: nixos-generators
```nix
# image.nix
{ config, pkgs, modulesPath, ... }:
{
  imports = [
    "${modulesPath}/installer/sd-card/sd-image-aarch64.nix"
  ];

  # Image configuration
  sdImage = {
    imageName = "solarlog-pi5-${config.system.nixos.version}.img";
    compressImage = true;

    populateFirmwareCommands = ''
      # Raspberry Pi 5 firmware
      ${pkgs.raspberrypi-firmware}/bin/rpi-update
    '';

    populateRootCommands = ''
      # Pre-install Docker images (where the first-boot service expects them)
      mkdir -p ./files/opt/solarlog/docker-images
      docker save solarlog-backend:latest > ./files/opt/solarlog/docker-images/backend.tar
      docker save solarlog-frontend:latest > ./files/opt/solarlog/docker-images/frontend.tar
      docker save grafana/grafana-oss:latest > ./files/opt/solarlog/docker-images/grafana.tar

      # Copy the SolarLog application
      cp -r ${../../.} ./files/opt/solarlog

      # Copy the database dump
      cp ${./solarlog-demo.dump} ./files/opt/solarlog/solarlog.sql

      # Generate the initial password
      echo "$(openssl rand -base64 32)" > ./files/opt/solarlog/.initial-password
    '';
  };

  # Base configuration
  networking.hostName = "solarlog-pi";

  # Auto-import on first boot
  systemd.services.solarlog-first-boot = {
    description = "SolarLog First Boot Setup";
    wantedBy = [ "multi-user.target" ];
    after = [ "postgresql.service" ];
    script = ''
      if [ ! -f /var/lib/solarlog/.setup-complete ]; then
        # Import the database
        ${pkgs.postgresql}/bin/psql -U postgres -d solarlog < /opt/solarlog/solarlog.sql

        # Load the Docker images
        docker load < /opt/solarlog/docker-images/backend.tar
        docker load < /opt/solarlog/docker-images/frontend.tar
        docker load < /opt/solarlog/docker-images/grafana.tar

        # Start services
        cd /opt/solarlog && docker compose up -d

        # Mark setup as complete
        touch /var/lib/solarlog/.setup-complete
        echo "SolarLog first-boot setup complete!"
        echo "Initial password: $(cat /opt/solarlog/.initial-password)"
      fi
    '';
    serviceConfig = {
      Type = "oneshot";
      RemainAfterExit = true;
    };
  };
}
```
Build command:
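The exact invocation is not spelled out here; a plausible call with the nixos-generators tool named above (the `sd-aarch64` format name is an assumption, check `nixos-generate --list`):

```shell
nixos-generate -f sd-aarch64 -c ./image.nix
```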
Flashing with Raspberry Pi Imager:
1. "Use custom" -> select the .img.zst
2. Flash to the SD card
3. Boot the Pi -> auto-setup runs
4. The initial password is in /opt/solarlog/.initial-password
Resource Overview
Raspberry Pi 5 (8GB) running all services
| Service | RAM | CPU | Disk |
|---|---|---|---|
| NixOS base | 500MB | 5% | 8GB |
| PostgreSQL | 512MB | 10% | 50GB |
| Backend (FastAPI) | 256MB | 15% | 500MB |
| Frontend (Nginx) | 128MB | 5% | 100MB |
| Grafana | 200MB | 10% | 500MB |
| Prometheus | 100MB | 5% | 2GB |
| EVCC | 150MB | 8% | 200MB |
| Home Assistant | 400MB | 12% | 1GB |
| Cloudflare Tunnel | 50MB | 2% | 50MB |
| VS Code Server | 500MB | 15% | 1GB |
| LVGL UI | 100MB | 5% | 50MB |
| TOTAL | ~3GB | ~90% | ~65GB |
Headroom:
- RAM: ~5GB free (for peaks)
- CPU: roughly 70% load in normal operation (the table sums worst-case shares)
- Disk: a 64GB SD card plus 256GB NVMe would already suffice
Recommendation:
- ✅ 128GB SD card
- ✅ 512GB NVMe SSD
- ✅ Cooling for the Pi 5 (passive is sufficient)
Security Concept
Network Isolation

```
Internet
   |
Cloudflare Tunnel (TLS)
   |
Raspberry Pi (local network)
   |
   +------------------------------+
   | Public services (via tunnel) |
   | - Frontend (solarlog.*)      |
   | - Grafana  (grafana.*)       |
   | - EVCC     (evcc.*)          |
   +------------------------------+
   |
   +------------------------------+
   | Protected (access policies)  |
   | - VS Code (code.*)           |
   | - Home Assistant (ha.*)      |
   +------------------------------+
   |
   +------------------------------+
   | Local only (no tunnel)       |
   | - PostgreSQL (5432)          |
   | - SSH (22)                   |
   | - LVGL UI (touchscreen)      |
   +------------------------------+
```
Firewall Rules

```nix
networking.firewall = {
  enable = true;

  # Local: all services, for the LVGL UI
  interfaces.lo.allowedTCPPorts = [ 3000 3001 7070 8080 8123 8443 ];

  # LAN: SSH only
  interfaces.eth0.allowedTCPPorts = [ 22 ];

  # WAN: blocked (tunnel only)
  extraCommands = ''
    iptables -A INPUT -i wlan0 -j DROP
  '';
};
```
User Separation

```nix
users.users = {
  # Installer (default)
  installer = {
    isNormalUser = true;
    description = "Installer user";
    extraGroups = [ "dialout" ]; # ESP32 flashing only
    shell = pkgs.bash;
  };

  # Developer (opt-in)
  developer = {
    isNormalUser = true;
    description = "Developer user";
    extraGroups = [ "docker" "wheel" ];
    shell = pkgs.zsh;
    openssh.authorizedKeys.keys = [
      "ssh-ed25519 AAAA... developer@karma.organic"
    ];
  };
};
```
Schedule & Effort
Phase 1: Grafana dashboard (2-3 days)
- Grafana Docker service
- PostgreSQL queries
- Dashboard templates
- Cloudflare Tunnel routing
Phase 2: NixOS Pi image (5-7 days)
- NixOS configuration
- LVGL touchscreen UI
- First-boot setup script
- Image builder
- Testing
Phase 3: Battery management (3-4 days)
- Database models
- API endpoints
- Frontend components
- ESP32 integration
Phase 4: Backup system (2-3 days)
- WAL archiving
- Daily backup script
- Migration script
- Testing & documentation
Total: ~15-20 working days
Next Steps
- ✅ Analysis complete <- WE ARE HERE
- ⏳ Write the MkDocs documentation
- ⏳ Prepare setup scripts
- ⏳ Test phase on the dev system
- ⏳ Build the Pi image
- ⏳ Production deployment
Open Questions
- Grafana dashboard: which specific metrics do you need for enterprise customers?
- LVGL UI: which touchscreen model? (Waveshare 7" DSI recommended)
- Battery manufacturers: which systems should be supported initially?
- EVCC/Home Assistant: mandatory or optional?
- VS Code Server: will a GitHub account for Copilot be provided?
Shall I proceed with the MkDocs documentation now?