# HeatGuard
Personalized heat preparedness for your home. HeatGuard analyzes your living spaces, fetches weather forecasts, and generates hour-by-hour action plans to keep you safe during heat events.
## Features

- **Room-level heat budget analysis** — models internal gains (devices, occupants), solar gains, ventilation, and AC capacity per room
- **Risk assessment** — 4-tier risk levels (low/moderate/high/extreme) with time windows
- **24h SVG temperature timeline** — color-coded area chart with budget status strip
- **Weather integration** — Open-Meteo forecasts + DWD severe weather warnings
- **AI summary** — optional LLM-powered daily briefing (Anthropic, OpenAI, Gemini, Ollama)
- **Care checklist** — automatic reminders when vulnerable occupants are present
- **Multilingual** — English and German, switchable in-app
- **Privacy-first** — all user data stays in the browser (IndexedDB), server is stateless

## Architecture

```
Browser (IndexedDB)               Go Server (stateless)
┌─────────────────┐            ┌────────────────────────┐
│ Profiles, Rooms │    JSON    │ /api/compute/dashboard │
│ Devices, AC     │ ─────────> │ /api/weather/forecast  │
│ Forecasts       │ <───────── │ /api/weather/warnings  │
│ LLM Settings    │            │ /api/llm/summarize     │
└─────────────────┘            └────────────────────────┘
```

The Go server embeds all web assets (templates, JS, CSS, i18n) and serves them directly. No database on the server — all user data lives in the browser's IndexedDB.
## Quick Start

### Prerequisites

- Go 1.25+
- Node.js 18+ (for Tailwind CSS build)

### Build & Run

```bash
npm install
make build
./bin/heatguard
```

Open [http://localhost:8080](http://localhost:8080) in your browser.

### Development Mode

```bash
make dev
```

Serves files from the filesystem (hot-reloads templates/JS) on port 8080.

## CLI Flags

| Flag | Default | Description |
|------|---------|-------------|
| `-port` | `8080` | HTTP listen port |
| `-dev` | `false` | Development mode (serve from filesystem) |
| `-llm-provider` | `""` | LLM provider (`anthropic`, `openai`, `gemini`, `ollama`, `none`) |
| `-llm-model` | `""` | Model name override |
| `-llm-endpoint` | `""` | API endpoint override (for Ollama) |

Example — run with a local Ollama instance:

```bash
./bin/heatguard --llm-provider ollama --llm-model llama3.2
```

## Configuration

HeatGuard works out of the box with zero configuration. Optional server-side config for LLM:

```yaml
# ~/.config/heatwave/config.yaml
llm:
  provider: anthropic  # anthropic | openai | gemini | ollama | none
  model: claude-sonnet-4-5-20250929
  # endpoint: http://localhost:11434  # for ollama
```

API keys via environment variables:

| Provider | Variable |
|----------|----------|
| Anthropic | `ANTHROPIC_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| Gemini | `GEMINI_API_KEY` |

API keys can also be configured directly in the browser under **Setup > AI Summary**, stored locally in IndexedDB.
## LLM Providers

| Provider | Auth | Default Model | Notes |
|----------|------|---------------|-------|
| Anthropic | `ANTHROPIC_API_KEY` | claude-sonnet-4-5-20250929 | Cloud API |
| OpenAI | `OPENAI_API_KEY` | gpt-4o | Cloud API |
| Gemini | `GEMINI_API_KEY` | gemini-2.0-flash | Cloud API |
| Ollama | None (local) | — | Set `-llm-endpoint` if not `http://localhost:11434` |
| None | — | — | Default. AI features disabled. |

## Deployment
### Standalone Binary

```bash
make build
./bin/heatguard -port 3000
```

The binary is fully self-contained — all web assets are embedded. Copy it to any Linux server and run.

### Systemd Service

```ini
# /etc/systemd/system/heatguard.service
[Unit]
Description=HeatGuard heat preparedness server
After=network.target

[Service]
Type=simple
User=heatguard
ExecStart=/opt/heatguard/heatguard -port 8080
Environment=ANTHROPIC_API_KEY=sk-...
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now heatguard
```

### Docker

```bash
docker build -t heatguard .
docker run -d -p 8080:8080 heatguard
```

With an LLM provider:

```bash
docker run -d -p 8080:8080 \
  -e ANTHROPIC_API_KEY=sk-... \
  heatguard --llm-provider anthropic
```

The Dockerfile uses a multi-stage build (`golang:1.25-alpine` builder + `distroless/static` runtime) for a minimal image.
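A rough sketch of that multi-stage layout, for orientation only (the `./cmd/heatguard` build path, stage names, and build flags are assumptions; the repository's actual Dockerfile is authoritative):

```dockerfile
# Build stage: compile a static binary with the Go toolchain image
FROM golang:1.25-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /heatguard ./cmd/heatguard

# Runtime stage: distroless static image containing only the binary
FROM gcr.io/distroless/static
COPY --from=build /heatguard /heatguard
EXPOSE 8080
ENTRYPOINT ["/heatguard"]
```

`CGO_ENABLED=0` matters here: `distroless/static` ships no libc, so the binary must be statically linked.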
### Kubernetes / Helm

A Helm chart is provided in `helm/heatguard/`:

```bash
helm install heatguard ./helm/heatguard \
  --set env.ANTHROPIC_API_KEY=sk-...
```

See `helm/heatguard/values.yaml` for all configurable values (replicas, ingress, resources, etc.).
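As an illustration of the shape such a values file typically takes (every key below is an assumption, not the chart's actual contents; check `helm/heatguard/values.yaml`):

```yaml
# Hypothetical excerpt; verify key names against the shipped chart
replicaCount: 1
image:
  repository: heatguard
  tag: latest
service:
  port: 8080
ingress:
  enabled: false
resources: {}
env: {}   # e.g. ANTHROPIC_API_KEY
```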
## Development

```bash
make test   # run all tests with race detector
make build  # build CSS + binary
make dev    # run in dev mode
make clean  # remove build artifacts
```

## License
[GPL-3.0](LICENSE)