commit a80ddc0fe4
Author: vikingowl
Date:   2026-01-23 15:04:49 +01:00

feat: add multi-backend LLM support (Ollama, llama.cpp, LM Studio)

Add unified backend abstraction layer supporting multiple LLM providers:

Backend (Go):
- New backends package with interface, registry, and adapters (sketched after this list)
- Ollama adapter wrapping existing functionality
- OpenAI-compatible adapter for llama.cpp and LM Studio
- Unified API routes under /api/v1/ai/*
- SSE to NDJSON streaming conversion for OpenAI-compatible backends (sketched below)
- Auto-discovery of backends on default ports
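
A rough sketch of what the abstraction layer could look like, for reviewers skimming the diff. All identifiers here (Backend, Registry, ChatChunk, ...) are illustrative assumptions, not the actual names in this commit:

```go
// A rough, illustrative sketch; not the actual source of this commit.
package backends

import (
	"context"
	"sync"
)

// Backend abstracts one LLM provider (Ollama, llama.cpp, LM Studio, ...).
type Backend interface {
	// Name returns a stable identifier such as "ollama" or "llamacpp".
	Name() string
	// ChatStream starts a chat completion and yields chunks that the
	// HTTP layer can relay as NDJSON; each adapter hides its provider's
	// wire format behind this method.
	ChatStream(ctx context.Context, req ChatRequest) (<-chan ChatChunk, error)
	// Healthy reports whether the provider answered a probe, which is
	// also how auto-discovery on default ports could work.
	Healthy(ctx context.Context) bool
}

type ChatRequest struct {
	Model    string    `json:"model"`
	Messages []Message `json:"messages"`
}

type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatChunk struct {
	Content string `json:"content"`
	Done    bool   `json:"done"`
}

// Registry holds the configured backends and tracks the active one.
type Registry struct {
	mu       sync.RWMutex
	backends map[string]Backend
	active   string
}

func NewRegistry() *Registry {
	return &Registry{backends: make(map[string]Backend)}
}

func (r *Registry) Register(b Backend) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.backends[b.Name()] = b
}

// SetActive switches providers at runtime; no restart required.
func (r *Registry) SetActive(name string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.backends[name]; !ok {
		return false
	}
	r.active = name
	return true
}

// Active returns the currently selected backend, if any.
func (r *Registry) Active() (Backend, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	b, ok := r.backends[r.active]
	return b, ok
}
```

The OpenAI-compatible servers (llama.cpp, LM Studio) emit SSE frames while the existing frontend consumes NDJSON, so the conversion amounts to stripping the `data: ` prefix and writing one JSON object per line. A hedged sketch, assuming the usual `data: {json}` / `data: [DONE]` framing:

```go
// Sketch of the SSE-to-NDJSON bridge, assuming the usual OpenAI framing
// of "data: {json}" lines terminated by "data: [DONE]".
package backends

import (
	"bufio"
	"io"
	"strings"
)

// SSEToNDJSON copies each SSE data payload to w as one JSON line.
// Real code would also grow the scanner buffer and flush w per line.
func SSEToNDJSON(r io.Reader, w io.Writer) error {
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		payload, ok := strings.CutPrefix(scanner.Text(), "data: ")
		if !ok || payload == "[DONE]" {
			continue // skip blank keep-alives, comments, and the terminator
		}
		if _, err := io.WriteString(w, payload+"\n"); err != nil {
			return err
		}
	}
	return scanner.Err()
}
```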

Frontend (Svelte 5):
- New backendsState store for backend management
- Unified LLM client routing through backend API
- AI Providers tab combining Backends and Models sub-tabs
- Backend-aware chat streaming (uses the client matching the active backend; server endpoint sketched below)
- Model name display for non-Ollama backends in the top nav
- Persist and restore last selected backend
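
The server end of that unified routing might look like the following sketch, building on the hypothetical types from the registry sketch above. The concrete route name and import path are assumptions, since only the /api/v1/ai/* prefix appears in this commit:

```go
// Sketch only: the concrete route under /api/v1/ai/* and the import
// path are assumptions, and the types come from the registry sketch.
package api

import (
	"encoding/json"
	"net/http"

	"vessel/backend/internal/backends" // hypothetical package path
)

// Server wires the HTTP layer to the backend registry.
type Server struct {
	registry *backends.Registry
}

func (s *Server) handleChat(w http.ResponseWriter, r *http.Request) {
	var req backends.ChatRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	backend, ok := s.registry.Active()
	if !ok {
		http.Error(w, "no active backend", http.StatusServiceUnavailable)
		return
	}
	chunks, err := backend.ChatStream(r.Context(), req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	// Relay every chunk as NDJSON no matter which provider produced it;
	// json.Encoder appends a newline after each object, which is exactly
	// the framing the frontend's unified client already consumes.
	w.Header().Set("Content-Type", "application/x-ndjson")
	flusher, _ := w.(http.Flusher)
	enc := json.NewEncoder(w)
	for chunk := range chunks {
		if err := enc.Encode(chunk); err != nil {
			return
		}
		if flusher != nil {
			flusher.Flush()
		}
	}
}
```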

Key features:
- Switch between backends at runtime, without a restart
- Conditional UI based on backend capabilities (see the capability sketch below)
- Models tab only visible when Ollama is active
- llama.cpp/LM Studio show the loaded model name
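
The commit does not show how capabilities are modeled; one plausible shape (purely an assumption) is a flags struct each adapter returns, which the Svelte UI reads to decide what to render:

```go
// Pure assumption about the mechanism; the commit only states that the
// UI is conditional on backend capabilities.
package backends

// Capabilities tells the frontend what the active backend supports.
type Capabilities struct {
	ManagesModels    bool `json:"managesModels"`    // true for Ollama: show the Models tab
	ReportsModelName bool `json:"reportsModelName"` // llama.cpp / LM Studio: show loaded model in the nav
}
```

Exposing something like this via a method on the backend interface, and echoing it in the /api/v1/ai/* responses, would keep provider-specific branches out of the Svelte components.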