Release v0.7.0 #8

Merged
vikingowl merged 4 commits from dev into main 2026-01-23 15:54:50 +01:00
Owner

Release v0.7.0

Multi-Backend LLM Support

This release adds support for multiple LLM backends with a unified interface:

  • Ollama (default) - Full model management (pull, delete, and create custom models) plus native thinking and tool calling
  • llama.cpp - High-performance inference with GGUF models via an OpenAI-compatible API (request sketch below)
  • LM Studio - Desktop app integration via an OpenAI-compatible API
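
As a taste of the OpenAI-compatible path, here is a minimal Go sketch that sends one chat request to a llama.cpp server. The port (8080), the model name, and the payload shape are assumptions based on the usual defaults; LM Studio speaks the same protocol on its own default port (1234):

```go
// Minimal sketch: one chat request against an OpenAI-compatible backend.
// Assumes a llama.cpp server on localhost:8080; the model field is
// typically ignored by llama.cpp and resolved by LM Studio to whatever
// model is currently loaded.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model": "local-model", // placeholder; see note above
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
	})
	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```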

Features

  • Backend Switching: Switch between backends in Settings > AI Providers without a restart
  • Auto-Detection: Automatically discover available backends on their default ports (probe sketch after this list)
  • Backend Persistence: Remembers your last selected backend across sessions
  • Unified Chat: Seamless chat experience regardless of active backend
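
A plausible shape for the auto-detection step, assuming the conventional default ports (Ollama 11434, llama.cpp 8080, LM Studio 1234) and a cheap GET probe per candidate. The exact endpoints the release checks aren't documented here, so the probe paths are assumptions:

```go
// Sketch of backend auto-detection: probe each candidate's default port
// with a short timeout and report the ones that answer. Probe URLs are
// assumptions; the release's actual detection logic may differ.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type candidate struct {
	name string
	url  string // probe URL on the backend's default port
}

func detect() []string {
	client := &http.Client{Timeout: 500 * time.Millisecond}
	candidates := []candidate{
		{"ollama", "http://localhost:11434/api/tags"},
		{"llamacpp", "http://localhost:8080/v1/models"},
		{"lmstudio", "http://localhost:1234/v1/models"},
	}
	var found []string
	for _, c := range candidates {
		resp, err := client.Get(c.url)
		if err != nil {
			continue // port closed or server not running
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			found = append(found, c.name)
		}
	}
	return found
}

func main() { fmt.Println(detect()) }
```

The short timeout keeps startup responsive when none of the ports are open.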

Backend (Go)

  • New backends package with interface, registry, and adapters (sketched below)
  • Ollama adapter wrapping existing functionality with full feature support
  • OpenAI-compatible adapter for llama.cpp and LM Studio
  • Unified API routes under /api/v1/ai/*
  • SSE-to-NDJSON streaming conversion for OpenAI-compatible backends (bridge sketch below)
  • Auto-discovery of backends on default ports
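
The release notes don't spell out the package's API, so here is a hedged sketch of how an interface/registry split commonly looks in Go; every identifier (Backend, Registry, ChatRequest) is illustrative rather than the actual code:

```go
// Hypothetical shape of the backends package: a small interface that both
// the Ollama adapter and the OpenAI-compatible adapter satisfy, plus a
// registry that the HTTP handlers dispatch through.
package backends

import (
	"context"
	"io"
	"sync"
)

type Message struct {
	Role, Content string
}

type ChatRequest struct {
	Model    string
	Messages []Message
}

// Backend is the unified surface each adapter implements.
type Backend interface {
	Name() string
	ListModels(ctx context.Context) ([]string, error)
	// ChatStream returns an NDJSON stream of chat chunks.
	ChatStream(ctx context.Context, req ChatRequest) (io.ReadCloser, error)
}

// Registry tracks registered adapters and which one is active.
type Registry struct {
	mu       sync.RWMutex
	backends map[string]Backend
	active   string
}

func (r *Registry) Register(b Backend) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.backends == nil {
		r.backends = map[string]Backend{}
	}
	r.backends[b.Name()] = b
}

// SetActive switches the active backend at runtime, which is what makes
// restart-free backend switching possible.
func (r *Registry) SetActive(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.backends[name]; ok {
		r.active = name
	}
}

func (r *Registry) Active() Backend {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.backends[r.active]
}
```

A handler under /api/v1/ai/* would call Active() and delegate, keeping the HTTP surface identical no matter which adapter is selected.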
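
The SSE-to-NDJSON bridge can be sketched as a small copy loop: OpenAI-compatible servers stream `data: {...}` events, and each event body is re-emitted as one JSON line. The function name is hypothetical; the `[DONE]` sentinel is the standard OpenAI streaming convention:

```go
// sseToNDJSON copies an SSE response body to w, emitting one JSON object
// per line and dropping SSE framing (blank separators, non-data fields,
// and the [DONE] end-of-stream marker).
func sseToNDJSON(w io.Writer, sse io.Reader) error {
	scanner := bufio.NewScanner(sse)
	for scanner.Scan() {
		line := scanner.Text()
		if !strings.HasPrefix(line, "data: ") {
			continue // skip everything that is not a data event
		}
		data := strings.TrimPrefix(line, "data: ")
		if data == "[DONE]" {
			break // OpenAI-style end-of-stream sentinel
		}
		if _, err := io.WriteString(w, data+"\n"); err != nil {
			return err
		}
	}
	return scanner.Err()
}
```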

Frontend (Svelte 5)

  • New backendsState store for backend management
  • Unified LLM client routing through backend API
  • AI Providers settings tab combining Backends and Models sub-tabs
  • Backend-aware chat streaming (uses appropriate client per backend)
  • Model name display for non-Ollama backends in top navigation
  • Persist and restore last selected backend

Bug Fixes

  • Fixed 6 pre-existing TypeScript errors in test files
  • Fixed 69 accessibility warnings (aria-labels, keyboard navigation, form labels)

Documentation

  • New LLM-Backends wiki page with comprehensive backend guide
  • Updated README with multi-backend features
  • Updated Configuration docs with new environment variables
  • Updated Getting-Started to mention the alternative backends
vikingowl added 4 commits 2026-01-23 15:54:40 +01:00
Add unified backend abstraction layer supporting multiple LLM providers:

Backend (Go):
- New backends package with interface, registry, and adapters
- Ollama adapter wrapping existing functionality
- OpenAI-compatible adapter for llama.cpp and LM Studio
- Unified API routes under /api/v1/ai/*
- SSE to NDJSON streaming conversion for OpenAI backends
- Auto-discovery of backends on default ports

Frontend (Svelte 5):
- New backendsState store for backend management
- Unified LLM client routing through backend API
- AI Providers tab combining Backends and Models sub-tabs
- Backend-aware chat streaming (uses appropriate client)
- Model name display for non-Ollama backends in top nav
- Persist and restore last selected backend

Key features:
- Switch between backends without restart
- Conditional UI based on backend capabilities
- Models tab only visible when Ollama active
- llama.cpp/LM Studio show loaded model name

TypeScript error fixes:
- Fix UUID mock type in chunker.test.ts
- Remove invalid timestamp property from Message types in tests
- Fix mockFetch type in client.test.ts
- Add missing parameters property to tool definition in test

Accessibility fixes (109 → 40 warnings, remaining are CSS @apply):
- Add aria-labels to all toggle switches and icon-only buttons
- Add tabindex="-1" to all dialog elements with role="dialog"
- Add onkeydown handlers to modal backdrops for keyboard accessibility
- Fix form labels: change decorative labels to spans, use fieldset/legend for groups
- Convert fileInput variables to $state() for proper reactivity
- Fix closure captures in ThinkingBlock and HtmlPreview with $derived()
- Add role="region" to drag-and-drop zones
- Restore keyboard navigation to BranchNavigator

All 547 tests pass.

- Update tagline to 'local LLMs' instead of 'Ollama'
- Add LLM Backends section with Ollama, llama.cpp, LM Studio
- Update Prerequisites to list all supported backends
- Add LLM Backends to documentation table
- Update Roadmap with multi-backend as completed
- Update Non-Goals to clarify cloud providers not supported
vikingowl merged commit 61bf8038d0 into main 2026-01-23 15:54:50 +01:00
Reference: vikingowl/vessel#8