From 291871c3b5f1ffc0db1507c7b8f6023c5d4f60ba Mon Sep 17 00:00:00 2001
From: vikingowl
Date: Fri, 23 Jan 2026 15:51:56 +0100
Subject: [PATCH] docs: update README for multi-backend LLM support

- Update tagline to 'local LLMs' instead of 'Ollama'
- Add LLM Backends section with Ollama, llama.cpp, LM Studio
- Update Prerequisites to list all supported backends
- Add LLM Backends to documentation table
- Update Roadmap with multi-backend as completed
- Update Non-Goals to clarify cloud providers not supported
---
 README.md | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index e986512..d2633c8 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@
 
   Vessel
 
-  A modern, feature-rich web interface for Ollama
+  A modern, feature-rich web interface for local LLMs
 
@@ -28,13 +28,14 @@
 **Vessel** is intentionally focused on:
 
-- A clean, local-first UI for **Ollama**
+- A clean, local-first UI for **local LLMs**
+- **Multiple backends**: Ollama, llama.cpp, LM Studio
 - Minimal configuration
 - Low visual and cognitive overhead
 - Doing a small set of things well
 
 If you want a **universal, highly configurable platform** → [open-webui](https://github.com/open-webui/open-webui) is a great choice.
-If you want a **small, focused UI for local Ollama usage** → Vessel is built for that.
+If you want a **small, focused UI for local LLM usage** → Vessel is built for that.
 
 ---
 
@@ -65,7 +66,13 @@ If you want a **small, focused UI for local Ollama usage** → Vessel is built f
 - Agentic tool calling with chain-of-thought reasoning
 - Test tools before saving with the built-in testing panel
 
-### Models
+### LLM Backends
+- **Ollama** — Full model management, pull/delete/create custom models
+- **llama.cpp** — High-performance inference with GGUF models
+- **LM Studio** — Desktop app integration
+- Switch backends without restart, auto-detection of available backends
+
+### Models (Ollama)
 - Browse and pull models from ollama.com
 - Create custom models with embedded system prompts
 - **Per-model parameters** — customize temperature, context size, top_k/top_p
@@ -112,7 +119,10 @@ If you want a **small, focused UI for local Ollama usage** → Vessel is built f
 ### Prerequisites
 
 - [Docker](https://docs.docker.com/get-docker/) and Docker Compose
-- [Ollama](https://ollama.com/download) running locally
+- An LLM backend (at least one):
+  - [Ollama](https://ollama.com/download) (recommended)
+  - [llama.cpp](https://github.com/ggerganov/llama.cpp)
+  - [LM Studio](https://lmstudio.ai/)
 
 ### Configure Ollama
 
@@ -160,6 +170,7 @@ Full documentation is available on the **[GitHub Wiki](https://github.com/Viking
 | Guide | Description |
 |-------|-------------|
 | [Getting Started](https://github.com/VikingOwl91/vessel/wiki/Getting-Started) | Installation and configuration |
+| [LLM Backends](https://github.com/VikingOwl91/vessel/wiki/LLM-Backends) | Configure Ollama, llama.cpp, or LM Studio |
 | [Projects](https://github.com/VikingOwl91/vessel/wiki/Projects) | Organize conversations into projects |
 | [Knowledge Base](https://github.com/VikingOwl91/vessel/wiki/Knowledge-Base) | RAG with document upload and semantic search |
 | [Search](https://github.com/VikingOwl91/vessel/wiki/Search) | Semantic and content search across chats |
@@ -178,6 +189,7 @@ Full documentation is available on the **[GitHub Wiki](https://github.com/Viking
 Vessel prioritizes **usability and simplicity** over feature breadth.
 
 **Completed:**
+- [x] Multi-backend support (Ollama, llama.cpp, LM Studio)
 - [x] Model browser with filtering and update detection
 - [x] Custom tools (JavaScript, Python, HTTP)
 - [x] System prompt library with model-specific defaults
@@ -197,7 +209,7 @@ Vessel prioritizes **usability and simplicity** over feature breadth.
 - Multi-user systems
 - Cloud sync
 - Plugin ecosystems
-- Support for every LLM runtime
+- Cloud/API-based LLM providers (OpenAI, Anthropic, etc.)
 
 > *Do one thing well. Keep the UI out of the way.*
 
@@ -223,5 +235,5 @@ Contributions are welcome!
 
 GPL-3.0 — See [LICENSE](LICENSE) for details.
 
-  Made with Ollama and Svelte
+  Made with Svelte • Supports Ollama, llama.cpp, and LM Studio
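
The new "LLM Backends" section advertises auto-detection of available backends, but the patch only touches docs, so the mechanism isn't shown. A minimal TypeScript sketch of how detection *could* work, assuming each backend's conventional default local port (Ollama 11434, llama.cpp's `llama-server` 8080, LM Studio 1234) and a cheap endpoint to probe — every name below is hypothetical, not Vessel's actual code:

```ts
// Hypothetical sketch of backend auto-detection — not Vessel's implementation.
// Assumed defaults: Ollama on 11434 (/api/tags), llama.cpp server on 8080
// (/health), LM Studio on 1234 (OpenAI-style /v1/models).

interface BackendProbe {
  name: string;      // backend identifier
  baseUrl: string;   // assumed default local address
  probePath: string; // lightweight endpoint used only to check reachability
}

const PROBES: BackendProbe[] = [
  { name: "ollama",   baseUrl: "http://localhost:11434", probePath: "/api/tags" },
  { name: "llamacpp", baseUrl: "http://localhost:8080",  probePath: "/health" },
  { name: "lmstudio", baseUrl: "http://localhost:1234",  probePath: "/v1/models" },
];

// Probe all backends in parallel with a short timeout; a backend that is not
// running fails the fetch (refused or timed out) and is filtered out.
async function detectBackends(timeoutMs = 1000): Promise<string[]> {
  const results = await Promise.all(
    PROBES.map(async (probe) => {
      try {
        const res = await fetch(probe.baseUrl + probe.probePath, {
          signal: AbortSignal.timeout(timeoutMs),
        });
        return res.ok ? probe.name : null;
      } catch {
        return null; // backend not reachable
      }
    }),
  );
  return results.filter((name): name is string => name !== null);
}
```

Running such a check at startup and again whenever the user opens a backend picker is what makes "switch backends without restart" cheap to support: no persistent configuration has to change, only the active base URL.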
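The split between the generic "LLM Backends" section and the Ollama-only "Models (Ollama)" section reflects a real API asymmetry: Ollama exposes a native management API (pull/delete/create models), while llama.cpp's server and LM Studio primarily speak the OpenAI-compatible API, which only lists models. A sketch of listing models across all three, assuming the documented response shapes (Ollama's `/api/tags` returns `{ models: [{ name }] }`; `/v1/models` returns `{ data: [{ id }] }`); the function and its signature are illustrative, not Vessel's code:

```ts
// Illustrative sketch: normalize model listings across the three backends.

type Backend = "ollama" | "llamacpp" | "lmstudio";

async function listModels(backend: Backend, baseUrl: string): Promise<string[]> {
  if (backend === "ollama") {
    // Ollama's native API carries richer metadata (size, digest, ...),
    // which is why full model management is only offered for Ollama.
    const res = await fetch(`${baseUrl}/api/tags`);
    const body: { models: { name: string }[] } = await res.json();
    return body.models.map((m) => m.name);
  }
  // llama.cpp's server and LM Studio both expose the OpenAI-compatible API.
  const res = await fetch(`${baseUrl}/v1/models`);
  const body: { data: { id: string }[] } = await res.json();
  return body.data.map((m) => m.id);
}
```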