# Vessel

A clean, modern web interface for Ollama
Why Vessel • Features • Screenshots • Quick Start • Installation • Roadmap
## Why Vessel
Vessel and open-webui solve different problems.
Vessel is intentionally focused on:
- A clean, local-first UI for Ollama
- Minimal configuration
- Low visual and cognitive overhead
- Doing a small set of things well
It exists for users who want a UI that is fast and uncluttered, makes browsing and managing Ollama models simple, and stays out of the way once set up.
open-webui aims to be a feature-rich, extensible frontend supporting many runtimes, integrations, and workflows. That flexibility is powerful — but it comes with more complexity in setup, UI, and maintenance.
**In short:**
- If you want a universal, highly configurable platform → open-webui is a great choice
- If you want a small, focused UI for local Ollama usage → Vessel is built for that
Vessel deliberately avoids becoming a platform. Its scope is narrow by design.
## Features

### Core Chat Experience
- Real-time streaming — Watch responses appear token by token
- Conversation history — All chats stored locally in IndexedDB
- Message editing — Edit any message and regenerate responses with branching
- Branch navigation — Explore different response paths from edited messages
- Markdown rendering — Full GFM support with tables, lists, and formatting
- Syntax highlighting — Beautiful code blocks powered by Shiki with 100+ languages
- Dark/Light mode — Seamless theme switching with system preference detection
### Built-in Tools (Function Calling)
Vessel includes five powerful tools that models can invoke automatically:
| Tool | Description |
|---|---|
| Web Search | Search the internet for current information, news, weather, prices |
| Fetch URL | Read and extract content from any webpage |
| Calculator | Safe math expression parser with functions (sqrt, sin, cos, log, etc.) |
| Get Location | Detect user location via GPS or IP for local queries |
| Get Time | Current date/time with timezone support |
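Under the hood these rely on Ollama's function-calling support: the model receives JSON tool schemas alongside the conversation and responds with structured tool calls when it decides a tool is needed. As a rough illustration (the calculator schema below is a sketch, not Vessel's exact definition), a tool-enabled request to Ollama looks like this:

```bash
# Illustrative only: the shape of a tool definition sent to Ollama's /api/chat.
# The calculator schema is a sketch, not Vessel's exact definition.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{"role": "user", "content": "What is sqrt(144)?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "calculator",
      "description": "Evaluate a math expression",
      "parameters": {
        "type": "object",
        "properties": {
          "expression": {"type": "string", "description": "The expression to evaluate"}
        },
        "required": ["expression"]
      }
    }
  }]
}'
```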
### Model Management
- Model browser — Browse, search, and pull models from Ollama registry
- Live status — See which models are currently loaded in memory
- Quick switch — Change models mid-conversation
- Model metadata — View parameters, quantization, and capabilities
### Developer Experience
- Beautiful code generation — Syntax-highlighted output for any language
- Copy code blocks — One-click copy with visual feedback
- Scroll to bottom — Smart auto-scroll with manual override
- Keyboard shortcuts — Navigate efficiently with hotkeys
## Screenshots

- Clean, modern chat interface
- Syntax-highlighted code output
- Integrated web search with styled results
- Light theme for daytime use
- Browse and manage Ollama models
## Quick Start

### Prerequisites

#### Ollama Configuration

Ollama must listen on all interfaces so that the Docker containers can connect. Configure this by setting `OLLAMA_HOST=0.0.0.0`:
**Option A: Using systemd (Linux, recommended)**

```bash
sudo systemctl edit ollama
```

Add these lines:

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

Then restart:

```bash
sudo systemctl daemon-reload
sudo systemctl restart ollama
```

**Option B: Manual start**

```bash
OLLAMA_HOST=0.0.0.0 ollama serve
```
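To verify the change, query Ollama's tags endpoint from another machine or from inside a container; `<host-ip>` is a placeholder for your machine's LAN address:

```bash
# Returns a JSON list of installed models if OLLAMA_HOST is set correctly.
curl http://<host-ip>:11434/api/tags
```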
### One-Line Install

```bash
curl -fsSL https://somegit.dev/vikingowl/vessel/raw/main/install.sh | bash
```
### Or Clone and Run

```bash
git clone https://somegit.dev/vikingowl/vessel.git
cd vessel
./install.sh
```
The installer will:
- Check for Docker, Docker Compose, and Ollama
- Start the frontend and backend services
- Optionally pull a starter model (llama3.2)
Once running, open http://localhost:7842 in your browser.
## Installation

### Option 1: Install Script (Recommended)

The install script handles everything automatically:

```bash
./install.sh              # Install and start
./install.sh --update     # Update to latest version
./install.sh --uninstall  # Remove installation
```
Requirements:
- Ollama must be installed and running locally
- Docker and Docker Compose
- Linux or macOS
### Option 2: Docker Compose (Manual)

```bash
# Make sure Ollama is running first
ollama serve

# Start Vessel
docker compose up -d
```
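To confirm both services came up, the standard Compose status and log commands apply:

```bash
docker compose ps        # Both containers should show as running
docker compose logs -f   # Follow logs while testing
```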
### Option 3: Manual Setup (Development)

#### Prerequisites

- Node.js and npm (frontend)
- Go 1.24 (backend)

#### Frontend

```bash
cd frontend
npm install
npm run dev
```

The frontend runs on http://localhost:5173.

#### Backend

```bash
cd backend
go mod tidy
go run cmd/server/main.go -port 9090
```

The backend API runs on http://localhost:9090.
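As a quick smoke test against the running backend, you can hit one of the GET endpoints listed under API Reference below:

```bash
# Should return location data derived from your IP.
curl http://localhost:9090/api/v1/location
```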
## Configuration

### Environment Variables

#### Frontend
| Variable | Default | Description |
|---|---|---|
| `OLLAMA_API_URL` | `http://localhost:11434` | Ollama API endpoint |
| `BACKEND_URL` | `http://localhost:9090` | Vessel backend API |
#### Backend
| Variable | Default | Description |
|---|---|---|
| `OLLAMA_URL` | `http://localhost:11434` | Ollama API endpoint |
| `PORT` | `8080` | Backend server port |
| `GIN_MODE` | `debug` | Gin mode (`debug`, `release`) |
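For example, to point a locally-run backend at a remote Ollama instance (addresses and port here are illustrative):

```bash
# Addresses and port are illustrative; adjust to your environment.
OLLAMA_URL=http://192.168.1.50:11434 PORT=9090 GIN_MODE=release go run cmd/server/main.go
```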
### Docker Compose Override

Create `docker-compose.override.yml` for local customizations:
```yaml
services:
  frontend:
    environment:
      - CUSTOM_VAR=value
    ports:
      - "3000:3000" # Different port
```
## Architecture

```text
vessel/
├── frontend/                 # SvelteKit 5 application
│   ├── src/
│   │   ├── lib/
│   │   │   ├── components/   # UI components
│   │   │   ├── stores/       # Svelte 5 runes state
│   │   │   ├── tools/        # Built-in tool definitions
│   │   │   ├── storage/      # IndexedDB (Dexie)
│   │   │   └── api/          # API clients
│   │   └── routes/           # SvelteKit routes
│   └── Dockerfile
│
├── backend/                  # Go API server
│   ├── cmd/server/           # Entry point
│   └── internal/
│       ├── api/              # HTTP handlers
│       │   ├── fetcher.go    # URL fetching with wget/curl/chromedp
│       │   ├── search.go     # Web search via DuckDuckGo
│       │   └── routes.go     # Route definitions
│       ├── database/         # SQLite storage
│       └── models/           # Data models
│
├── docker-compose.yml        # Production setup
└── docker-compose.dev.yml    # Development with hot reload
```
## Tech Stack

### Frontend
- SvelteKit 5 — Full-stack framework
- Svelte 5 — Runes-based reactivity
- TypeScript — Type safety
- Tailwind CSS — Utility-first styling
- Skeleton UI — Component library
- Shiki — Syntax highlighting
- Dexie — IndexedDB wrapper
- Marked — Markdown parser
- DOMPurify — XSS sanitization
### Backend
- Go 1.24 — Fast, compiled backend
- Gin — HTTP framework
- SQLite — Embedded database
- chromedp — Headless browser
## Development

### Running Tests

```bash
# Frontend unit tests
cd frontend
npm run test

# With coverage
npm run test:coverage

# Watch mode
npm run test:watch
```
### Type Checking

```bash
cd frontend
npm run check
```
### Development Mode

Use the dev compose file for hot reloading:

```bash
docker compose -f docker-compose.dev.yml up
```
## API Reference

### Backend Endpoints

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/v1/proxy/search` | Web search via DuckDuckGo |
| POST | `/api/v1/proxy/fetch` | Fetch URL content |
| GET | `/api/v1/location` | Get user location from IP |
| GET | `/api/v1/models/registry` | Browse the Ollama model registry |
| GET | `/api/v1/models/search` | Search models |
| POST | `/api/v1/chats/sync` | Sync conversations |
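A sketch of calling the search proxy (the exact request body shape is an assumption; see the handler in backend/internal/api/search.go for the real contract):

```bash
# The {"query": ...} body shape is an assumption, not confirmed by this README.
curl -X POST http://localhost:9090/api/v1/proxy/search \
  -H 'Content-Type: application/json' \
  -d '{"query": "ollama web ui"}'
```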
### Ollama Proxy

All requests to `/ollama/*` are proxied to the Ollama API, which sidesteps browser CORS restrictions.
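For example, listing installed models through the proxy (assuming the frontend is served on port 7842 as in Quick Start):

```bash
# Same response as Ollama's native /api/tags, routed through Vessel.
curl http://localhost:7842/ollama/api/tags
```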
## Roadmap
Vessel is intentionally focused on being a clean, local-first UI for Ollama. The roadmap prioritizes usability, clarity, and low friction over feature breadth.
### Core UX Improvements (Near-term)
These improve the existing experience without expanding scope.
- Improve model browser & search
  - Better filtering (size, tags, quantization)
  - Clearer metadata presentation
- Keyboard-first workflows
  - Model switching
  - Prompt navigation
- UX polish & stability
  - Error handling
  - Loading / offline states
  - Small performance improvements
### Local Ecosystem Quality-of-Life (Opt-in)
Still local-first, still focused — but easing onboarding and workflows.
- Docker-based Ollama support (for systems without native Ollama installs)
- Optional voice input/output (accessibility & convenience, not a core requirement)
- Presets for common workflows (model + tool combinations, kept simple)
### Experimental / Explicitly Optional
These are explorations, not promises. They are intentionally separated to avoid scope creep.
- Image generation support (only if it can be cleanly isolated from the core UI)
- Hugging Face integration (evaluated carefully to avoid bloating the local-first experience)
### Non-Goals (By Design)
Vessel intentionally avoids becoming a platform.
- Multi-user / account-based systems
- Cloud sync or hosted services
- Large plugin ecosystems
- "Universal" support for every LLM runtime
If a feature meaningfully compromises simplicity, it likely doesn't belong in core Vessel.
## Philosophy
Do one thing well. Keep the UI out of the way. Prefer clarity over configurability.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Issues and feature requests are tracked on GitHub: https://github.com/VikingOwl91/vessel/issues
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
Copyright (C) 2026 VikingOwl
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.