owlen/docs/migration-guide.md

Migration Guide

This guide documents breaking changes between versions of Owlen and provides instructions on how to migrate your configuration or usage.

As Owlen is currently in its alpha phase (pre-v1.0), breaking changes may occur more frequently. We will do our best to document them here.


Migrating from v0.x to v1.0 (MCP-Only Architecture)

Version 1.0 marks a major milestone: Owlen has completed its transition to an MCP-only (Model Context Protocol) architecture. This brings significant improvements in modularity, extensibility, and performance, but requires configuration updates.

Breaking Changes

1. MCP Mode now defaults to remote_preferred

The [mcp] section in config.toml still accepts a mode setting, but the default behaviour has changed. If you previously relied on mode = "legacy", you can keep that line; the value now maps to the local_only runtime with a compatibility warning instead of breaking outright. New installs default to the safer remote_preferred mode, which attempts to use any configured external MCP server and automatically falls back to the local in-process tooling when permitted.

Supported values (v1.0+):

| Value | Behaviour |
| --- | --- |
| remote_preferred | Default. Use the first configured [[mcp_servers]] entry; fall back to local if allow_fallback = true. |
| remote_only | Require a configured server; the CLI will error if it cannot start. |
| local_only | Force the built-in MCP client and the direct Ollama provider. |
| legacy | Alias for local_only, kept for compatibility (emits a warning). |
| disabled | Not supported by the TUI; intended for headless tooling. |

You can additionally control the automatic fallback behaviour:

```toml
[mcp]
mode = "remote_preferred"
allow_fallback = true
warn_on_legacy = true
```

2. Direct Provider Access Removed (with opt-in compatibility)

In v0.x, Owlen could make direct HTTP calls to Ollama when in "legacy" mode. The default v1.0 behaviour keeps all LLM interactions behind MCP, but choosing mode = "local_only" or mode = "legacy" now reinstates the direct Ollama provider while still keeping the MCP tooling stack available locally.

What Changed Under the Hood

The v1.0 architecture implements the full 10-phase migration plan:

  • Phases 1-2: File operations via MCP servers
  • Phase 3: LLM inference via MCP servers (Ollama wrapped)
  • Phase 4: Agent loop with ReAct pattern
  • Phase 5: Mode system (chat/code) with tool availability
  • Phase 6: Web search integration
  • Phase 7: Code execution with Docker sandboxing
  • Phase 8: Prompt server for versioned prompts
  • Phase 9: Remote MCP server support (HTTP/WebSocket)
  • Phase 10: Legacy mode removal and production polish

Migration Steps

Step 1: Review Your MCP Configuration

Edit ~/.config/owlen/config.toml and ensure the [mcp] section reflects how you want to run Owlen:

```toml
[mcp]
mode = "remote_preferred"
allow_fallback = true
```

If you encounter issues with remote servers, you can temporarily switch to:

```toml
[mcp]
mode = "local_only"  # or "legacy" for backwards compatibility
```

When legacy is used, Owlen prints a warning on startup as a reminder to migrate later.

Quick fix: run owlen config doctor to apply these defaults automatically and validate your configuration file.

Step 2: Verify Provider Configuration

Ensure your provider configuration is correct. For Ollama:

```toml
[general]
default_provider = "ollama_local"
default_model = "llama3.2:latest"  # or your preferred model

[providers.ollama_local]
enabled = true
provider_type = "ollama"
base_url = "http://localhost:11434"

[providers.ollama_cloud]
enabled = true                 # set to false if you do not use the hosted API
provider_type = "ollama_cloud"
base_url = "https://ollama.com"
api_key_env = "OLLAMA_API_KEY"
```
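If you use the hosted API, the key named by api_key_env must be present in your environment before launching Owlen. For example (the key value below is a placeholder, not a real key):

```shell
# Export the API key that [providers.ollama_cloud] reads via api_key_env.
# The value is a placeholder; substitute your real Ollama Cloud key.
export OLLAMA_API_KEY="your-api-key-here"
```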

Step 3: Understand MCP Server Configuration

While not required for basic usage (Owlen will use the built-in local MCP client), you can optionally configure external MCP servers:

```toml
[[mcp_servers]]
name = "llm"
command = "owlen-mcp-llm-server"
transport = "stdio"

[[mcp_servers]]
name = "filesystem"
command = "/path/to/filesystem-server"
transport = "stdio"
```

Note: If no mcp_servers are configured, Owlen automatically falls back to its built-in local MCP client, which provides the same functionality.
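Since v1.0 also supports remote MCP servers over HTTP/WebSocket (Phase 9), a remote entry might look like the sketch below. Note that the url field and the exact transport value here are assumptions for illustration; check your server's documentation for the fields it expects:

```toml
# Hypothetical remote MCP server entry; field names other than
# name and transport are assumptions.
[[mcp_servers]]
name = "remote-llm"
transport = "websocket"
url = "wss://mcp.example.com/llm"
```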

Step 4: Verify Installation

After updating your config:

  1. Check Ollama is running:

     ```sh
     curl http://localhost:11434/api/version
     ```

  2. List available models:

     ```sh
     ollama list
     ```

  3. Test Owlen:

     ```sh
     owlen
     ```
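The first two checks can be wrapped in a small pre-flight script. This is a hypothetical helper, not part of Owlen:

```shell
#!/bin/sh
# Hypothetical pre-flight helper: report whether the Ollama API answers
# on the default port before you launch Owlen.
check_ollama() {
    if curl -fsS http://localhost:11434/api/version >/dev/null 2>&1; then
        echo "ollama: reachable"
    else
        echo "ollama: not reachable (start it with 'ollama serve')"
    fi
}
check_ollama
```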
    

Common Issues After Migration

Issue: "Warning: No MCP servers defined in config. Using local client."

This is normal! In v1.0+, if you don't configure external MCP servers, Owlen uses its built-in local MCP client. This provides the same functionality without needing separate server processes.

No action required unless you specifically want to use external MCP servers.

Issue: Timeouts on First Message

Cause: Ollama loads models into memory on first use, which can take 10-60 seconds for large models.

Solution:

  • Be patient on first inference after model selection
  • Use smaller models for faster loading (e.g., llama3.2:latest instead of qwen3-coder:latest)
  • Pre-load models with: ollama run <model-name>

Issue: Cloud Models Return 404 Errors

Cause: Ollama Cloud model names may differ from local model names.

Solution:

  • Verify model availability on https://ollama.com/models
  • Remove the -cloud suffix from model names when using the cloud provider
  • Ensure api_key/api_key_env is set in [providers.ollama_cloud] config
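For example, a cloud configuration should reference a model by its catalogue name without any -cloud suffix. This is a hedged sketch reusing the model name from earlier in this guide; verify the exact name on https://ollama.com/models:

```toml
# Sketch: reference the cloud model by its catalogue name (no "-cloud" suffix).
[general]
default_provider = "ollama_cloud"
default_model = "qwen3-coder:latest"  # not "qwen3-coder:latest-cloud"
```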

0.1.9 Explicit Ollama Modes & Cloud Endpoint Storage

Owlen 0.1.9 introduces targeted quality-of-life fixes for users who switch between local Ollama models and Ollama Cloud:

  • providers.<name>.extra.ollama_mode now accepts "auto", "local", or "cloud". Migrations default existing entries to auto, while preserving any explicit local base URLs you set previously.
  • owlen cloud setup writes the hosted endpoint to providers.<name>.extra.cloud_endpoint rather than overwriting base_url, so local catalogues keep working after you import an API key. Pass --force-cloud-base-url if you truly want the provider to point at the hosted service.
  • The model picker surfaces Local unavailable / Cloud unavailable badges when a source probe fails, highlighting what to fix instead of presenting an empty list.
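Taken together, a provider entry using the 0.1.9 keys might look like the sketch below. The key paths follow the notes above; the surrounding values mirror the earlier examples in this guide:

```toml
[providers.ollama_local]
enabled = true
provider_type = "ollama"
base_url = "http://localhost:11434"    # explicit local base URL is preserved

[providers.ollama_local.extra]
ollama_mode = "auto"                   # "auto" | "local" | "cloud"
cloud_endpoint = "https://ollama.com"  # written by `owlen cloud setup`
```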

Run owlen config doctor after upgrading to ensure these migration tweaks are applied automatically.

Rollback to v0.x

If you encounter issues and need to rollback:

  1. Reinstall v0.x:

     ```sh
     # Using AUR (if applicable)
     yay -S owlen-git

     # Or from source
     git checkout <v0.x-tag>
     cargo install --path crates/owlen-tui
     ```

  2. Restore configuration:

     ```toml
     [mcp]
     mode = "legacy"
     ```

  3. Report issues: https://github.com/Owlibou/owlen/issues

Benefits of v1.0 MCP Architecture

  • Modularity: LLM, file operations, and tools are isolated in MCP servers
  • Extensibility: Easy to add new tools and capabilities via MCP protocol
  • Multi-Provider: Support for multiple LLM providers through standard interface
  • Remote Execution: Can connect to remote MCP servers over HTTP/WebSocket
  • Better Error Handling: Structured error responses from MCP servers
  • Agentic Capabilities: ReAct pattern for autonomous task completion

Getting Help

If you run into problems not covered in this guide, open an issue at https://github.com/Owlibou/owlen/issues.

Future Migrations

We will continue to document breaking changes here as Owlen evolves. Always check this guide when upgrading to a new major version.