# HeatGuard
Personalized heat preparedness for your home. HeatGuard analyzes your living spaces, fetches weather forecasts, and generates hour-by-hour action plans to keep you safe during heat events.
## Features
- Room-level heat budget analysis — models internal gains (devices, occupants), solar gains, ventilation, and AC capacity per room
- Risk assessment — 4-tier risk levels (low/moderate/high/extreme) with time windows
- 24h SVG temperature timeline — color-coded area chart with budget status strip
- Weather integration — Open-Meteo forecasts + DWD severe weather warnings
- AI summary — optional LLM-powered daily briefing (Anthropic, OpenAI, Gemini, Ollama)
- Care checklist — automatic reminders when vulnerable occupants are present
- Multilingual — English and German, switchable in-app
- Privacy-first — all user data stays in the browser (IndexedDB), server is stateless
## Architecture
```
Browser (IndexedDB)          Go Server (stateless)
┌─────────────────┐          ┌────────────────────────┐
│ Profiles, Rooms │   JSON   │ /api/compute/dashboard │
│ Devices, AC     │ ───────> │ /api/weather/forecast  │
│ Forecasts       │ <─────── │ /api/weather/warnings  │
│ LLM Settings    │          │ /api/llm/summarize     │
└─────────────────┘          └────────────────────────┘
```
The Go server embeds all web assets (templates, JS, CSS, i18n) and serves them directly. No database on the server — all user data lives in the browser's IndexedDB.
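Statelessness means the browser ships all relevant data with each API call, and the server computes a result without persisting anything. A sketch of such a request (the field names are illustrative, not the real schema — that lives in the web client):

```shell
# Illustrative only: payload fields are placeholders, not the documented schema
curl -s -X POST http://localhost:8080/api/compute/dashboard \
  -H 'Content-Type: application/json' \
  -d '{"rooms": [], "forecast": []}'
```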
## Quick Start
### Prerequisites
- Go 1.25+
- Node.js 18+ (for Tailwind CSS build)
### Build & Run
```shell
npm install
make build
./bin/heatguard
```
Open http://localhost:8080 in your browser.
### Development Mode
```shell
make dev
```
Serves files from the filesystem (hot-reload templates/JS) on port 8080.
## CLI Flags

| Flag | Default | Description |
|---|---|---|
| -port | 8080 | HTTP listen port |
| -dev | false | Development mode (serve from filesystem) |
| -llm-provider | "" | LLM provider (anthropic, openai, gemini, ollama, none) |
| -llm-model | "" | Model name override |
| -llm-endpoint | "" | API endpoint override (for Ollama) |
Example — run with a local Ollama instance:
```shell
./bin/heatguard --llm-provider ollama --llm-model llama3.2
```
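Before running, make sure Ollama is serving on its default port and the model has been pulled (the commands below assume a stock Ollama install):

```shell
ollama pull llama3.2                      # download the model once
curl -s http://localhost:11434/api/tags   # verify the Ollama API is reachable
./bin/heatguard --llm-provider ollama --llm-model llama3.2
```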
## Configuration
HeatGuard works out of the box with zero configuration. Optional server-side config for LLM:
```yaml
# ~/.config/heatwave/config.yaml
llm:
  provider: anthropic   # anthropic | openai | gemini | ollama | none
  model: claude-sonnet-4-5-20250929
  # endpoint: http://localhost:11434   # for ollama
```
API keys via environment variables:
| Provider | Variable |
|---|---|
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| Gemini | GEMINI_API_KEY |
API keys can also be configured directly in the browser under Setup > AI Summary, stored locally in IndexedDB.
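For example, to run with Anthropic using an environment variable (the key value is a placeholder):

```shell
export ANTHROPIC_API_KEY=sk-...
./bin/heatguard --llm-provider anthropic
```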
## LLM Providers
| Provider | Auth | Default Model | Notes |
|---|---|---|---|
| Anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-5-20250929 | Cloud API |
| OpenAI | OPENAI_API_KEY | gpt-4o | Cloud API |
| Gemini | GEMINI_API_KEY | gemini-2.0-flash | Cloud API |
| Ollama | None (local) | — | Set -llm-endpoint if not http://localhost:11434 |
| None | — | — | Default. AI features disabled. |
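If Ollama runs on another host, point the server at it with -llm-endpoint (the hostname below is illustrative):

```shell
./bin/heatguard --llm-provider ollama \
  --llm-model llama3.2 \
  --llm-endpoint http://gpu-box.local:11434
```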
## Deployment
### Standalone Binary
```shell
make build
./bin/heatguard -port 3000
```
The binary is fully self-contained — all web assets are embedded. Copy it to any Linux server and run.
### Systemd Service
```ini
# /etc/systemd/system/heatguard.service
[Unit]
Description=HeatGuard heat preparedness server
After=network.target

[Service]
Type=simple
User=heatguard
ExecStart=/opt/heatguard/heatguard -port 8080
Environment=ANTHROPIC_API_KEY=sk-...
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```
```shell
sudo systemctl daemon-reload
sudo systemctl enable --now heatguard
```
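After enabling, confirm the service is healthy:

```shell
systemctl status heatguard
journalctl -u heatguard -f   # follow the server logs
```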
### Docker
```shell
docker build -t heatguard .
docker run -d -p 8080:8080 heatguard
```
With an LLM provider:
```shell
docker run -d -p 8080:8080 \
  -e ANTHROPIC_API_KEY=sk-... \
  heatguard --llm-provider anthropic
```
The Dockerfile uses a multi-stage build (golang:1.25-alpine builder + distroless/static runtime) for a minimal image.
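A multi-stage build along those lines might look roughly like this (stage names, paths, and the build command are assumptions — see the repository's actual Dockerfile):

```dockerfile
# Build stage: compile a static binary (assumes web assets were built beforehand via npm)
FROM golang:1.25-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /heatguard .

# Runtime stage: minimal distroless image
FROM gcr.io/distroless/static
COPY --from=build /heatguard /heatguard
EXPOSE 8080
ENTRYPOINT ["/heatguard"]
```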
### Kubernetes / Helm
A Helm chart is provided in helm/heatguard/:
```shell
helm install heatguard ./helm/heatguard \
  --set env.ANTHROPIC_API_KEY=sk-...
```
See helm/heatguard/values.yaml for all configurable values (replicas, ingress, resources, etc.).
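Instead of repeated --set flags, overrides can go in a values file. The key names below are illustrative — match them against helm/heatguard/values.yaml:

```yaml
# my-values.yaml (hypothetical keys)
replicaCount: 2
env:
  ANTHROPIC_API_KEY: sk-...
```

```shell
helm install heatguard ./helm/heatguard -f my-values.yaml
```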
## Development
```shell
make test    # run all tests with race detector
make build   # build CSS + binary
make dev     # run in dev mode
make clean   # remove build artifacts
```