feat(v1.0): remove legacy MCP mode and complete Phase 10 migration

This commit completes the Phase 10 migration to MCP-only architecture by
removing all legacy mode code paths and configuration options.

**Breaking Changes:**
- Removed `McpMode` enum from configuration system
- Removed `mode` setting from `[mcp]` config section
- MCP architecture is now always enabled (no option to disable)

**Code Changes:**
- Simplified `McpSettings` struct (now a placeholder for future options)
- Updated `McpClientFactory` to remove legacy mode branching
- Always use MCP architecture with automatic fallback to local client
- Added test infrastructure: `MockProvider` and `MockMcpClient` in test_utils

**Documentation:**
- Created comprehensive v0.x → v1.0 migration guide
- Added CHANGELOG_v1.0.md with detailed technical changes
- Documented common issues (cloud model 404s, timeouts, API key setup)
- Included rollback procedures and troubleshooting steps

**Testing:**
- All 29 tests passing
- Fixed agent tests to use new mock implementations
- Updated factory test to reflect new behavior

This completes the 10-phase migration plan documented in .agents/new_phases.md,
establishing Owlen as a production-ready MCP-only TUI application.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
As Owlen is currently in its alpha phase (pre-v1.0), breaking changes may occur.
---
## Migrating from v0.x to v1.0 (MCP-Only Architecture)
**Version 1.0** marks a major milestone: Owlen has completed its transition to an **MCP-only architecture** (Model Context Protocol). This brings significant improvements in modularity, extensibility, and performance, but it requires configuration updates.
### Breaking Changes
#### 1. MCP Mode is Now Always Enabled
The `[mcp]` section in `config.toml` previously had a `mode` setting that could be set to `"legacy"` or `"enabled"`. In v1.0+, MCP architecture is **always enabled** and the `mode` setting has been removed.
**Old configuration (v0.x):**
```toml
[mcp]
mode = "legacy" # or "enabled"
```
**New configuration (v1.0+):**
```toml
[mcp]
# MCP is now always enabled - no mode setting needed
# This section is kept for future MCP-specific configuration options
```
#### 2. Direct Provider Access Removed
In v0.x, Owlen could make direct HTTP calls to Ollama and other providers when in "legacy" mode. In v1.0+, **all LLM interactions go through MCP servers**.
### What Changed Under the Hood
The v1.0 architecture implements the full 10-phase migration plan:
- **Phase 1-2**: File operations via MCP servers
- **Phase 3**: LLM inference via MCP servers (Ollama wrapped)
- **Phase 4**: Agent loop with ReAct pattern
- **Phase 5**: Mode system (chat/code) with tool availability
- **Phase 6**: Web search integration
- **Phase 7**: Code execution with Docker sandboxing
- **Phase 8**: Prompt server for versioned prompts
- **Phase 9**: Remote MCP server support (HTTP/WebSocket)
- **Phase 10**: Legacy mode removal and production polish
### Migration Steps
#### Step 1: Update Your Configuration
Edit `~/.config/owlen/config.toml`:
**Remove the `mode` line:**
```diff
[mcp]
-mode = "legacy"
```
The `[mcp]` section can now be empty or contain future MCP-specific settings.
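To double-check that nothing stale remains, a quick grep of the config file (assuming the default path) should print nothing:

```bash
# No output means the old `mode` setting has been removed
grep -n '^mode' ~/.config/owlen/config.toml
```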
#### Step 2: Verify Provider Configuration
Ensure your provider configuration is correct. For Ollama:
```toml
[general]
default_provider = "ollama"
default_model = "llama3.2:latest" # or your preferred model
[providers.ollama]
provider_type = "ollama"
base_url = "http://localhost:11434"
[providers.ollama-cloud]
provider_type = "ollama-cloud"
base_url = "https://ollama.com"
api_key = "$OLLAMA_API_KEY" # Optional: for Ollama Cloud
```
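If you use Ollama Cloud, the `"$OLLAMA_API_KEY"` value above is an environment-variable reference (this guide assumes it is expanded when the config is loaded), so the variable must be set in the shell that launches Owlen:

```bash
# Export the key for the current session (add to ~/.bashrc or ~/.zshrc to persist)
export OLLAMA_API_KEY="your-api-key-here"
owlen
```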
#### Step 3: Understanding MCP Server Configuration
While not required for basic usage (Owlen will use the built-in local MCP client), you can optionally configure external MCP servers:
```toml
[[mcp_servers]]
name = "llm"
command = "owlen-mcp-llm-server"
transport = "stdio"
[[mcp_servers]]
name = "filesystem"
command = "/path/to/filesystem-server"
transport = "stdio"
```
**Note**: If no `mcp_servers` are configured, Owlen automatically falls back to its built-in local MCP client, which provides the same functionality.
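Remote MCP servers (added in Phase 9) can also be listed here. The exact schema is not reproduced in this guide; the sketch below is only an assumption — the `url` field and the `"websocket"` transport value are illustrative, so check `docs/configuration.md` for the authoritative field names:

```toml
# Hypothetical remote server entry; field names other than `name` and
# `transport` (shown above) are assumptions, not confirmed configuration keys.
[[mcp_servers]]
name = "remote-llm"
transport = "websocket"
url = "wss://mcp.example.com/llm"
```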
#### Step 4: Verify Installation
After updating your config:
1. **Check Ollama is running**:
```bash
curl http://localhost:11434/api/version
```
2. **List available models**:
```bash
ollama list
```
3. **Test Owlen**:
```bash
owlen
```
### Common Issues After Migration
#### Issue: "Warning: No MCP servers defined in config. Using local client."
**This is normal!** In v1.0+, if you don't configure external MCP servers, Owlen uses its built-in local MCP client. This provides the same functionality without needing separate server processes.
**No action required** unless you specifically want to use external MCP servers.
#### Issue: Timeouts on First Message
**Cause**: Ollama loads models into memory on first use, which can take 10-60 seconds for large models.
**Solution**:
- Be patient on first inference after model selection
- Use smaller models for faster loading (e.g., `llama3.2:latest` instead of `qwen3-coder:latest`)
- Pre-load models with `ollama run <model-name>` (see the sketch after this list)
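For example, to pre-load a model and confirm it is resident in memory before starting Owlen (assuming a recent Ollama release that provides `ollama ps`):

```bash
# Load the model into memory with a throwaway prompt
ollama run llama3.2:latest "hi"

# List models currently loaded in memory
ollama ps
```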
#### Issue: Cloud Models Return 404 Errors
**Cause**: Ollama Cloud model names may differ from local model names.
**Solution**:
- Verify model availability on https://ollama.com/models
- Remove the `-cloud` suffix from model names when using the cloud provider (see the example after this list)
- Ensure `api_key` is set in `[providers.ollama-cloud]` config
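For illustration, a working cloud setup might look like the following (the model name is hypothetical; confirm the exact name on https://ollama.com/models):

```toml
[general]
default_provider = "ollama-cloud"
default_model = "qwen3-coder:480b" # hypothetical; use the name listed on ollama.com, without a "-cloud" suffix

[providers.ollama-cloud]
provider_type = "ollama-cloud"
base_url = "https://ollama.com"
api_key = "$OLLAMA_API_KEY"
```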
### Rollback to v0.x
If you encounter issues and need to roll back:
1. **Reinstall v0.x**:
```bash
# Using AUR (if applicable)
yay -S owlen-git
# Or from source
git checkout <v0.x-tag>
cargo install --path crates/owlen-tui
```
2. **Restore configuration**:
```toml
[mcp]
mode = "legacy"
```
3. **Report issues**: https://github.com/Owlibou/owlen/issues
### Benefits of v1.0 MCP Architecture
- **Modularity**: LLM, file operations, and tools are isolated in MCP servers
- **Extensibility**: Easy to add new tools and capabilities via MCP protocol
- **Multi-Provider**: Support for multiple LLM providers through standard interface
- **Remote Execution**: Can connect to remote MCP servers over HTTP/WebSocket
- **Better Error Handling**: Structured error responses from MCP servers
- **Agentic Capabilities**: ReAct pattern for autonomous task completion
### Getting Help
- **Documentation**: See `docs/` directory for detailed guides
- **Issues**: https://github.com/Owlibou/owlen/issues
- **Configuration Reference**: `docs/configuration.md`
- **Troubleshooting**: `docs/troubleshooting.md`
---
## Future Migrations
We will continue to document breaking changes here as Owlen evolves. Always check this guide when upgrading to a new major version.