-
v0.7.1 - Docker Backend Discovery & Vision Support
Stable · released 2026-01-23 17:21:47 +01:00
Patch release fixing Docker compatibility issues and adding vision support for llama.cpp and LM Studio.
Bug Fixes
Docker Backend Discovery
- Discovery endpoints now read from the `OLLAMA_URL`, `LLAMACPP_URL`, and `LMSTUDIO_URL` environment variables
- `docker-compose.yml` sets all backends to `host.docker.internal` for container→host access
- Previously, auto-detect would fail for llama.cpp and LM Studio when running Vessel in Docker
Vision Support for OpenAI-compatible Backends
- OpenAI adapter now converts images to content parts array format
- Enables vision/multimodal models with llama.cpp and LM Studio
- Previously, images were ignored when using non-Ollama backends
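The image conversion above targets the OpenAI chat-completions multimodal format, where message content is an array of text and `image_url` parts. A minimal sketch of that shape follows; the type and function names are illustrative, not Vessel's actual identifiers, and the PNG MIME type is an assumption (a real adapter would detect it).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ContentPart mirrors the OpenAI chat-completions multimodal content
// format: text parts and image_url parts in a single array.
type ContentPart struct {
	Type     string    `json:"type"`
	Text     string    `json:"text,omitempty"`
	ImageURL *ImageURL `json:"image_url,omitempty"`
}

type ImageURL struct {
	URL string `json:"url"`
}

// toContentParts converts a prompt plus base64-encoded images into an
// OpenAI content-parts array, embedding each image as a data URL.
// PNG is assumed here for illustration.
func toContentParts(text string, images []string) []ContentPart {
	parts := []ContentPart{{Type: "text", Text: text}}
	for _, img := range images {
		parts = append(parts, ContentPart{
			Type:     "image_url",
			ImageURL: &ImageURL{URL: "data:image/png;base64," + img},
		})
	}
	return parts
}

func main() {
	out, _ := json.Marshal(toContentParts("What is in this image?", []string{"iVBORw0KGgo="}))
	fmt.Println(string(out))
}
```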
Justfile Updates
- Added `--host 0.0.0.0` to llama-server commands for Docker accessibility
Documentation Updates
- Added `--host 0.0.0.0` requirement for llama-server (Docker access)
- Added LM Studio "Serve on Local Network" setup instructions
- Added Docker-specific troubleshooting notes for both backends
Upgrading
docker compose pull
docker compose up -d

Or use the install script:

./install.sh --update

Note: If using llama.cpp with Docker, restart llama-server with `--host 0.0.0.0`. For LM Studio, enable "Serve on Local Network" in Server Settings.

Downloads
-
v0.7.0 - Multi-Backend LLM Support
Stable · released 2026-01-23 15:57:23 +01:00
This release adds support for multiple LLM backends, making Vessel more flexible while maintaining its focus on local-first LLM usage.
Highlights
- Multiple Backend Support: Use Ollama, llama.cpp, or LM Studio interchangeably
- Seamless Switching: Change backends in Settings without restart
- Auto-Detection: Automatically discover available backends on your system
- Unified Experience: Same great chat interface regardless of backend
New Features
LLM Backends
- Ollama (default) - Full model management, pull/delete/create custom models, native thinking mode and tool calling
- llama.cpp - High-performance inference with GGUF models via OpenAI-compatible API
- LM Studio - Desktop app integration via OpenAI-compatible API
Backend Management
- Switch between backends in Settings > AI Providers
- Auto-detect available backends on default ports
- Persists your backend selection across sessions
- Shows current model name for all backend types
Technical Changes
Backend (Go)
- New
backendspackage with interface, registry, and adapters - Ollama adapter wrapping existing functionality with full feature support
- OpenAI-compatible adapter for llama.cpp and LM Studio
- Unified API routes under
/api/v1/ai/* - SSE to NDJSON streaming conversion for OpenAI-compatible backends
- Auto-discovery of backends on default ports (Ollama: 11434, llama.cpp: 8081, LM Studio: 1234)
Frontend (Svelte 5)
- New `backendsState` store for backend management
- Unified LLM client routing through backend API
- AI Providers settings tab combining Backends and Models sub-tabs
- Backend-aware chat streaming
- Model name display for non-Ollama backends in top navigation
Bug Fixes
- Fixed 6 pre-existing TypeScript errors in test files
- Fixed 69 accessibility warnings (aria-labels, keyboard navigation, form labels)
Documentation
- New LLM Backends wiki page
- Updated README with multi-backend features
- Updated Configuration docs with new environment variables (`LLAMACPP_URL`, `LMSTUDIO_URL`)
Upgrading
Pull the latest Docker images:
docker compose pull
docker compose up -d

Or use the install script:

./install.sh --update

Full Changelog
58 files changed, 6,458 insertions(+), 246 deletions(-)
Downloads
-
v0.6.1
Released 2026-01-22 12:42:12 +01:00
New Features
- About Page: Dedicated Settings > About tab with version info, update checking, and links
- App branding with Vessel logo and version badge
- Manual "Check for Updates" button with loading state
- Download link when update available
- Links to GitHub repo and issue tracker
- Tech stack badges and license info
Improvements
- Development Configuration: justfile and docker-compose now read from .env
- PORT, DEV_PORT, LLAMA_PORT, OLLAMA_PORT configurable via environment
- Added `just dev-build` and `just dev-rebuild` recipes
- docker-compose.dev.yml uses variable substitution
- Updated .env.example with all configurable variables
Full Changelog: https://somegit.dev/vikingowl/vessel/compare/v0.6.0...v0.6.1
Downloads
-
v0.6.0
Released 2026-01-22 12:13:48 +01:00
🤖 New Feature: Agents (v1)
Agents allow you to create specialized AI personas with custom system prompts and tool sets.
Features:
- Agent identity: Name and description
- System prompt: Reference prompts from your Prompt Library
- Tool filtering: Enable specific tools per agent
- Preferred model: Optionally set a default model for the agent
- Per-chat selection: Choose an agent from the chat interface
- Project assignment: Assign agents to projects (many-to-many)
UI:
- New Agents tab in Settings for managing agents
- Agent selector dropdown in chat (next to system prompt selector)
- Full CRUD operations with search and filtering
🧪 Test Coverage Improvements
- Extended test coverage for backend and frontend
- 60+ new unit tests for agents feature
- 14 E2E tests for agents UI and chat integration
📝 Documentation & Configuration
- Added `.env.example` with configuration documentation
- Fixed hardcoded Ollama URL (now configurable)
- Updated CONTRIBUTING.md with branching strategy
🔧 Developer Experience
- Added `justfile` for common development commands
- Sync status indicator and warning banner
- Improved Playwright config with BASE_URL support
Full Changelog: https://somegit.dev/vikingowl/vessel/compare/v0.5.2...v0.6.0
Downloads
-
v0.5.2
Released 2026-01-07 22:21:07 +01:00
What's Changed
Bug Fixes
- Sidebar hover icons - Improved visibility of action icons (pin, move, export, delete) when hovering over conversations. Icons now have a subtle background and brighter colors.
- Pin/Archive persistence - Fixed conversation pin and archive states not persisting to IndexedDB
Documentation
- Updated README with new features: Projects, Knowledge Base (RAG), Search, Thinking mode
- Added wiki pages for Projects, Search, and Knowledge Base features
- Updated wiki Home and Getting Started pages
Upgrade
./install.sh --update

Or pull the latest Docker images:

docker compose pull && docker compose up -d

Downloads
-
v0.5.1 - Bug Fixes & UX Improvements
Stable · released 2026-01-07 20:51:56 +01:00
Bug Fixes
- Project deletion: Fixed project delete failing due to a missing `db.chunks` table in the database transaction
UX Improvements
- Confirmation dialogs: Replaced all browser `confirm()` dialogs with the styled `ConfirmDialog` component for a consistent, modern look:
  - Project deletion
  - Custom tool deletion
  - Document deletion (Knowledge Base)
  - Prompt deletion
  - Project file deletion
Upgrade
cd ~/.vessel && ./install.sh --update

Downloads
-
v0.5.0 - Settings Hub & Projects
Stable · released 2026-01-07 20:32:45 +01:00
Highlights
Settings Hub - All settings consolidated into a single page with intuitive tab navigation (General, Models, Prompts, Tools, Knowledge, Memory).
Projects - Organize your conversations into projects with shared context and cross-chat RAG search.
Global Search - Semantic search across all your chats and knowledge base using embedding models.
New Features
- Settings Hub: Unified settings page with 6 tabs replacing 5 separate sidebar pages
- Projects: Create projects to group related conversations together
- Cross-chat RAG: Search and reference content across all conversations in a project
- Global Search: Semantic search page with embedding model configuration
- Release Notes: Install script now shows what's new after
--update - Smart Embedding Detection: Automatically detects installed embedding models
Improvements
- Embedding model selector in Knowledge and Memory settings
- Non-blocking file upload for knowledge base documents
- Better agentic tool descriptions for improved model discovery
- Cached project conversations for better performance
Bug Fixes
- Memory store validation and consistency improvements
- Text-based tool call parsing for models without native function calling
- Project page performance optimizations
- Fixed reference to deleted state variable in file upload
- Embedding generation timeout to prevent page freeze
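The text-based tool call parsing mentioned above typically means scanning model output for an embedded JSON object when the model lacks native function calling. A minimal sketch under that assumption: the `{"tool": ..., "arguments": ...}` shape, function names, and field names here are hypothetical, not Vessel's actual convention.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// ToolCall is a hypothetical text-embedded tool call shape.
type ToolCall struct {
	Tool      string         `json:"tool"`
	Arguments map[string]any `json:"arguments"`
}

// parseTextToolCall scans free-form model output for the first JSON
// object that decodes to a tool call with a non-empty tool name.
// It tries each "{" as a candidate start; json.Decoder ignores any
// trailing prose after a valid object.
func parseTextToolCall(text string) (*ToolCall, bool) {
	start := strings.Index(text, "{")
	for start != -1 {
		var tc ToolCall
		dec := json.NewDecoder(strings.NewReader(text[start:]))
		if err := dec.Decode(&tc); err == nil && tc.Tool != "" {
			return &tc, true
		}
		next := strings.Index(text[start+1:], "{")
		if next == -1 {
			break
		}
		start += 1 + next
	}
	return nil, false
}

func main() {
	out := `Let me check. {"tool":"web_search","arguments":{"query":"vessel llm"}}`
	if tc, ok := parseTextToolCall(out); ok {
		fmt.Println(tc.Tool, tc.Arguments["query"])
	}
}
```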
Migration Notes
- Old URLs (`/models`, `/prompts`, `/tools`, `/knowledge`) redirect to the Settings Hub
- IndexedDB schema upgraded to v6 (automatic migration)
- New tables: `projects`, `projectLinks`, `chatChunks`
Upgrade
cd ~/.vessel && ./install.sh --update

Or pull the latest Docker images:

docker compose pull && docker compose up -d

Downloads
-
Released 2026-01-07 12:30:29 +01:00
What's New
Agentic Tool Templates
New category of tools designed for agentic LLM behavior:
- Task Manager 📋 - Create, update, list, and complete tasks with persistent storage (localStorage)
- Memory Store 🧠 - Store and recall information across conversation turns with categories
- Structured Thinking 💭 - Break down problems into explicit reasoning steps with quality indicators
- Decision Matrix ⚖️ - Evaluate options against weighted criteria for better decisions
- Project Planner 📊 - Decompose projects into phases, tasks, and dependencies
Improved Custom Tool Display
- Auto-detect styling for custom tools based on name patterns (27 pattern categories)
- Humanized tool labels for better readability
- Distinctive icons and gradient colors per tool type
Other Improvements
- Added credit attribution to prompt browser linking to source repository
Downloads
-
Released 2026-01-07 12:07:29 +01:00
What's New
Features
- Prompt Template Browser: Browse and add curated system prompts to your library
- 16 professionally crafted prompts across 5 categories (Coding, Writing, Analysis, Creative, Assistant)
- Filter by category, preview full prompt content, one-click add to library
- Includes: Code Reviewer, Debug Assistant, API Designer, SQL Expert, UI/UX Advisor, Security Auditor, and more
- New Tool Templates: Added design-focused JavaScript tool templates
- Design Brief Generator: Create structured design briefs from project requirements
- Color Palette Generator: Generate harmonious color palettes from a base color with CSS variable output
Bug Fixes
- HTTP endpoint tools can now be tested directly in the editor (v0.4.13)
Upgrade
Pull the latest Docker images or rebuild from source.
Downloads
-
v0.4.13
Released 2026-01-07 11:39:15 +01:00
What's New
Features
- HTTP Tool Testing: Added the ability to test HTTP endpoint custom tools directly in the editor, matching the existing test functionality for Python and JavaScript tools (closes #6)
Upgrade
Pull the latest Docker images or rebuild from source.
Downloads