
Vessel


A modern, feature-rich web interface for Ollama

Features · Screenshots · Quick Start · Installation · Configuration · Development

SvelteKit 5 · Svelte 5 · Go 1.24 · TypeScript · Tailwind CSS · Docker

License: GPL-3.0 · PRs welcome


Features

Core Chat Experience

  • Real-time streaming — Watch responses appear token by token
  • Conversation history — All chats stored locally in IndexedDB
  • Message editing — Edit any message and regenerate responses with branching
  • Branch navigation — Explore different response paths from edited messages
  • Markdown rendering — Full GFM support with tables, lists, and formatting
  • Syntax highlighting — Beautiful code blocks powered by Shiki with 100+ languages
  • Dark/Light mode — Seamless theme switching with system preference detection
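
The editing-and-branching model above can be thought of as a message tree: each message records its parent, and the visible conversation is the path from the root to the currently selected leaf. Editing a message adds a sibling under the same parent, so the original branch survives. A minimal, hypothetical sketch (this is not Vessel's actual IndexedDB schema):

```typescript
// Hypothetical message-tree sketch; Vessel's real Dexie/IndexedDB schema may differ.
interface ChatMessage {
  id: string;
  parentId: string | null; // null for the conversation root
  role: "user" | "assistant";
  content: string;
}

// Walk from the selected leaf back to the root to get the visible branch, oldest first.
function activeBranch(messages: Map<string, ChatMessage>, leafId: string): ChatMessage[] {
  const path: ChatMessage[] = [];
  let current = messages.get(leafId);
  while (current) {
    path.unshift(current);
    current = current.parentId ? messages.get(current.parentId) : undefined;
  }
  return path;
}
```

Regenerating a response then means inserting a new assistant message with the same `parentId` as the old one and switching the selected leaf; branch navigation just moves the leaf between siblings.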

Built-in Tools (Function Calling)

Vessel includes five powerful tools that models can invoke automatically:

  • Web Search — Search the internet for current information, news, weather, and prices
  • Fetch URL — Read and extract content from any webpage
  • Calculator — Safe math expression parser with functions (sqrt, sin, cos, log, etc.)
  • Get Location — Detect user location via GPS or IP for local queries
  • Get Time — Current date and time with timezone support
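
Tools are advertised to the model in the JSON-schema function-calling format that Ollama's /api/chat accepts: a name, a description, and a schema for the parameters. A hedged sketch of what the calculator tool's definition might look like (field contents are illustrative, not Vessel's actual definitions):

```typescript
// Illustrative tool definition in the function-calling format Ollama's /api/chat accepts.
const calculatorTool = {
  type: "function",
  function: {
    name: "calculator",
    description: "Evaluate a math expression (sqrt, sin, cos, log, etc.)",
    parameters: {
      type: "object",
      properties: {
        expression: {
          type: "string",
          description: "Expression to evaluate, e.g. 'sqrt(2) * 10'",
        },
      },
      required: ["expression"],
    },
  },
};
```

When the model answers with a tool call, the frontend runs the matching tool locally (or via the backend proxy) and feeds the result back as a tool-role message so the model can compose its final answer.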

Model Management

  • Model browser — Browse, search, and pull models from Ollama registry
  • Live status — See which models are currently loaded in memory
  • Quick switch — Change models mid-conversation
  • Model metadata — View parameters, quantization, and capabilities
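
Live status and metadata come from Ollama's own endpoints: /api/tags lists installed models and /api/ps lists models currently loaded in memory. A small sketch of merging the two into one view (response shapes abbreviated; Ollama returns many more fields such as size, digest, and details):

```typescript
// Abbreviated response shapes for Ollama's /api/tags and /api/ps endpoints.
interface TagsResponse { models: { name: string }[] }
interface PsResponse { models: { name: string }[] }

// Mark each installed model with whether it is currently loaded in memory.
function withLoadState(tags: TagsResponse, ps: PsResponse): { name: string; loaded: boolean }[] {
  const loaded = new Set(ps.models.map((m) => m.name));
  return tags.models.map((m) => ({ name: m.name, loaded: loaded.has(m.name) }));
}
```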

Developer Experience

  • Beautiful code generation — Syntax-highlighted output for any language
  • Copy code blocks — One-click copy with visual feedback
  • Scroll to bottom — Smart auto-scroll with manual override
  • Keyboard shortcuts — Navigate efficiently with hotkeys

Screenshots

  • Chat Interface (Dark Mode) — clean, modern chat interface
  • Code Generation — syntax-highlighted code output
  • Web Search Results — integrated web search with styled results
  • Light Mode — light theme for daytime use
  • Model Browser — browse and manage Ollama models

Quick Start

The fastest way to get running with Docker Compose:

# Clone the repository
git clone https://github.com/yourusername/vessel.git
cd vessel

# Start all services (frontend, backend, ollama)
docker compose up -d

# Open in browser
open http://localhost:7842

This starts:

  • Frontend on http://localhost:7842
  • Backend API on http://localhost:9090
  • Ollama on http://localhost:11434

First Model

Pull your first model from the UI or via command line:

# Via Ollama CLI
docker compose exec ollama ollama pull llama3.2

# Or use the Model Browser in the UI

Installation

Option 1: Docker Compose

docker compose up -d

With GPU Support (NVIDIA)

Uncomment the GPU section in docker-compose.yml:

ollama:
  image: ollama/ollama:latest
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]

Then run:

docker compose up -d

Option 2: Manual Setup

Prerequisites

  • Node.js and npm (frontend)
  • Go 1.24+ (backend)
  • A running Ollama instance on http://localhost:11434

Frontend

cd frontend
npm install
npm run dev

Frontend runs on http://localhost:5173

Backend

cd backend
go mod tidy
go run cmd/server/main.go -port 9090

Backend API runs on http://localhost:9090


Configuration

Environment Variables

Frontend

  • OLLAMA_API_URL (default http://localhost:11434) — Ollama API endpoint
  • BACKEND_URL (default http://localhost:9090) — Vessel backend API

Backend

  • OLLAMA_URL (default http://localhost:11434) — Ollama API endpoint
  • PORT (default 8080) — backend server port
  • GIN_MODE (default debug) — Gin mode (debug or release)
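
Each variable falls back to its documented default when unset. As a sketch, the frontend side might resolve its configuration like this (the helper name and env source are illustrative):

```typescript
// Illustrative config resolution applying the documented frontend defaults.
interface FrontendConfig {
  ollamaApiUrl: string;
  backendUrl: string;
}

function resolveConfig(env: Record<string, string | undefined>): FrontendConfig {
  return {
    ollamaApiUrl: env.OLLAMA_API_URL ?? "http://localhost:11434",
    backendUrl: env.BACKEND_URL ?? "http://localhost:9090",
  };
}
```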

Docker Compose Override

Create docker-compose.override.yml for local customizations:

services:
  frontend:
    environment:
      - CUSTOM_VAR=value
    ports:
      - "3000:3000"  # Different port

Architecture

vessel/
├── frontend/               # SvelteKit 5 application
│   ├── src/
│   │   ├── lib/
│   │   │   ├── components/ # UI components
│   │   │   ├── stores/     # Svelte 5 runes state
│   │   │   ├── tools/      # Built-in tool definitions
│   │   │   ├── storage/    # IndexedDB (Dexie)
│   │   │   └── api/        # API clients
│   │   └── routes/         # SvelteKit routes
│   └── Dockerfile
│
├── backend/                # Go API server
│   ├── cmd/server/         # Entry point
│   └── internal/
│       ├── api/            # HTTP handlers
│       │   ├── fetcher.go  # URL fetching with wget/curl/chromedp
│       │   ├── search.go   # Web search via DuckDuckGo
│       │   └── routes.go   # Route definitions
│       ├── database/       # SQLite storage
│       └── models/         # Data models
│
├── docker-compose.yml      # Production setup
└── docker-compose.dev.yml  # Development with hot reload

Tech Stack

Frontend: SvelteKit 5 (Svelte 5 runes), TypeScript, Tailwind CSS, Shiki, Dexie (IndexedDB)

Backend: Go 1.24, Gin, SQLite


Development

Running Tests

# Frontend unit tests
cd frontend
npm run test

# With coverage
npm run test:coverage

# Watch mode
npm run test:watch

Type Checking

cd frontend
npm run check

Development Mode

Use the dev compose file for hot reloading:

docker compose -f docker-compose.dev.yml up

API Reference

Backend Endpoints

  • POST /api/v1/proxy/search — Web search via DuckDuckGo
  • POST /api/v1/proxy/fetch — Fetch URL content
  • GET /api/v1/location — Get user location from IP
  • GET /api/v1/models/registry — Browse Ollama model registry
  • GET /api/v1/models/search — Search models
  • POST /api/v1/chats/sync — Sync conversations
Ollama Proxy

All requests to /ollama/* are proxied to the Ollama API, so the browser talks to Vessel's own origin and avoids cross-origin (CORS) issues.
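
The path mapping is a simple prefix strip; a sketch:

```typescript
// Strip the /ollama prefix so e.g. /ollama/api/tags maps to /api/tags on the Ollama server.
function rewriteOllamaPath(path: string): string | null {
  const prefix = "/ollama";
  if (!path.startsWith(prefix + "/")) return null; // not a proxy path
  return path.slice(prefix.length);
}
```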


Roadmap

  • Image generation (Stable Diffusion, Hugging Face models)
  • Hugging Face integration
  • Voice input/output
  • Multi-user support
  • Plugin system

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

Copyright (C) 2026 VikingOwl

This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.


Made with Ollama and Svelte
