Phase 10 "Cleanup & Production Polish" is now complete. All LLM
interactions now go through the Model Context Protocol (MCP), removing
direct provider dependencies from CLI/TUI.
## Major Changes
### MCP Architecture
- All providers (both local and cloud Ollama) now use `RemoteMcpClient`
- Removed the `owlen-ollama` dependency from `owlen-tui`
- The MCP LLM server accepts an `OLLAMA_URL` environment variable for cloud providers (see the sketch after this list)
- Proper notification handling for streaming responses
- Fixed response deserialization (`McpToolResponse` unwrapping)
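A minimal sketch of how the TUI might launch the MCP LLM server over STDIO and forward `OLLAMA_URL` for a cloud endpoint; the `McpServerConfig` struct, the `owlen-mcp-llm-server` binary name, and the function below are illustrative assumptions, not the actual owlen API:

```rust
use std::collections::HashMap;
use std::process::{Child, Command, Stdio};

/// Hypothetical launch configuration for an MCP server spoken to over STDIO.
struct McpServerConfig {
    command: String,
    args: Vec<String>,
    env: HashMap<String, String>,
}

/// Spawn the MCP LLM server, forwarding OLLAMA_URL so it talks to a cloud
/// Ollama endpoint instead of the local default.
fn spawn_llm_server(ollama_url: Option<&str>) -> std::io::Result<Child> {
    let mut config = McpServerConfig {
        command: "owlen-mcp-llm-server".into(), // hypothetical binary name
        args: Vec::new(),
        env: HashMap::new(),
    };
    if let Some(url) = ollama_url {
        config.env.insert("OLLAMA_URL".into(), url.into());
    }

    Command::new(&config.command)
        .args(&config.args)
        .envs(&config.env)
        .stdin(Stdio::piped())  // STDIO transport: JSON-RPC requests go in here
        .stdout(Stdio::piped()) // responses and streaming notifications come back here
        .spawn()
}
```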
### Code Cleanup
- Removed direct `OllamaProvider` instantiation from the TUI
- Updated `collect_models_from_all_providers()` to use MCP for all providers (sketched below)
- Updated `switch_provider()` to use MCP with environment-based configuration
- Removed an unused general config variable
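A hedged sketch of what MCP-only model discovery can look like after this cleanup; the `ModelSource` trait stands in for the real `RemoteMcpClient`, and the method names are assumptions:

```rust
/// Stand-in for the MCP client; the real RemoteMcpClient lives in the MCP crate.
trait ModelSource {
    fn name(&self) -> &str;
    fn list_models(&self) -> Result<Vec<String>, String>;
}

/// With every provider behind MCP, model discovery is one loop over MCP
/// clients instead of a mix of direct OllamaProvider calls and MCP calls.
fn collect_models_from_all_providers(clients: &[Box<dyn ModelSource>]) -> Vec<(String, String)> {
    let mut models = Vec::new();
    for client in clients {
        match client.list_models() {
            Ok(names) => {
                for model in names {
                    models.push((client.name().to_string(), model));
                }
            }
            // One unreachable provider should not break discovery for the rest.
            Err(err) => eprintln!("provider {} unavailable: {err}", client.name()),
        }
    }
    models
}
```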
### Documentation
- Added comprehensive MCP Architecture section to docs/architecture.md
- Documented MCP communication flow and cloud provider support
- Updated crate breakdown to reflect MCP servers
### Security & Performance
- Path traversal protection verified for all resource operations (see the check sketched below)
- Process isolation via separate MCP server processes
- Tool permissions controlled via consent manager
- Clean release build of entire workspace verified
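The path traversal check referred to above commonly follows a canonicalize-then-prefix-check pattern; the sketch below illustrates that pattern and is not the exact owlen implementation:

```rust
use std::io::{Error, ErrorKind};
use std::path::{Path, PathBuf};

/// Resolve a requested resource path and reject anything that escapes the
/// allowed root (e.g. "../../etc/passwd"-style requests).
fn resolve_resource(root: &Path, requested: &str) -> std::io::Result<PathBuf> {
    let root = root.canonicalize()?;
    let candidate = root.join(requested).canonicalize()?;
    if candidate.starts_with(&root) {
        Ok(candidate)
    } else {
        Err(Error::new(
            ErrorKind::PermissionDenied,
            "path escapes the resource root",
        ))
    }
}
```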
## Benefits of MCP Architecture
1. **Separation of Concerns**: TUI/CLI never directly instantiates providers
2. **Process Isolation**: LLM interactions run in separate processes
3. **Extensibility**: New providers can be added as MCP servers
4. **Multi-Transport**: Supports STDIO, HTTP, and WebSocket (see the sketch below)
5. **Tool Integration**: MCP servers expose tools to LLMs
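As a rough illustration of the multi-transport point, client configuration might map onto a transport enum along these lines; the names are hypothetical, not the real crate's types:

```rust
/// Hypothetical transport selection; the real configuration types may differ.
enum McpTransport {
    Stdio { command: String },
    Http { url: String },
    WebSocket { url: String },
}

/// Map a (kind, target) pair from configuration onto a transport.
fn transport_from_config(kind: &str, target: &str) -> Option<McpTransport> {
    match kind {
        "stdio" => Some(McpTransport::Stdio { command: target.to_string() }),
        "http" => Some(McpTransport::Http { url: target.to_string() }),
        "ws" | "websocket" => Some(McpTransport::WebSocket { url: target.to_string() }),
        _ => None,
    }
}
```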
This completes Phase 10 and establishes a clean, production-ready architecture
for future development.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Adds consent management for tool execution, input validation, sandboxed process execution, and MCP server integration. Updates session management to support tool use, conversation persistence, and streaming responses.
Major additions:
- Database migrations for conversations and secure storage
- Encryption and credential management infrastructure
- Extensible tool system with code execution and web search
- Consent management and validation systems (see the sketch after this list)
- Sandboxed process execution
- MCP server integration
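A sketch of how an extensible tool system with a consent gate can fit together; `Tool`, `ConsentManager`, and the method names are illustrative assumptions rather than the actual owlen types:

```rust
use std::collections::HashSet;

/// Hypothetical tool abstraction: each tool (code execution, web search, ...)
/// implements this trait and is registered with the session.
trait Tool {
    fn name(&self) -> &str;
    fn run(&self, input: &str) -> Result<String, String>;
}

/// Hypothetical consent gate consulted before any tool executes.
struct ConsentManager {
    allowed: HashSet<String>,
}

impl ConsentManager {
    fn is_allowed(&self, tool: &str) -> bool {
        self.allowed.contains(tool)
    }
}

/// Run a tool only after the consent manager approves it.
fn execute_tool(consent: &ConsentManager, tool: &dyn Tool, input: &str) -> Result<String, String> {
    if !consent.is_allowed(tool.name()) {
        return Err(format!("consent denied for tool `{}`", tool.name()));
    }
    tool.run(input)
}
```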
Infrastructure changes:
- Module registration and workspace dependencies
- `ToolCall` type and tool-related `Message` methods (sketched below)
- Privacy, security, and tool configuration structures
- Database-backed conversation persistence
- Tool call tracking in conversations
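A rough outline of the shape a `ToolCall` type and tool-related `Message` helpers can take; the field and method names here are assumptions for illustration only:

```rust
/// Hypothetical shape of a tool call requested by the model.
#[derive(Debug, Clone)]
struct ToolCall {
    id: String,
    name: String,
    arguments: String, // JSON-encoded arguments from the model
}

#[derive(Debug, Clone)]
enum Role {
    User,
    Assistant,
    Tool,
}

#[derive(Debug, Clone)]
struct Message {
    role: Role,
    content: String,
    tool_calls: Vec<ToolCall>,
}

impl Message {
    /// Assistant message that asks for one or more tool invocations.
    fn tool_request(calls: Vec<ToolCall>) -> Self {
        Self { role: Role::Assistant, content: String::new(), tool_calls: calls }
    }

    /// Tool message carrying a completed call's output back to the model.
    fn tool_result(call_id: &str, output: &str) -> Self {
        Self {
            role: Role::Tool,
            content: format!("[{call_id}] {output}"),
            tool_calls: Vec::new(),
        }
    }
}
```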
Provider and UI updates:
- Ollama provider updates for tool support and new Role types
- TUI chat and code app updates for async initialization
- CLI updates for the new `SessionController` API (see the async initialization sketch after this list)
- Configuration documentation updates
- CHANGELOG updates
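A minimal sketch of the async-initialization pattern the TUI/CLI updates point at, assuming a tokio runtime; the `SessionController` methods and behavior below are placeholders, not the real API:

```rust
/// Placeholder controller; the real SessionController wires up the database,
/// MCP clients, and configuration.
struct SessionController;

impl SessionController {
    /// Construction is async because it opens the conversation database and
    /// connects to MCP servers before the UI starts.
    async fn new() -> Result<Self, String> {
        Ok(Self)
    }

    async fn send(&mut self, prompt: &str) -> Result<String, String> {
        // Stand-in for streaming a reply from the MCP LLM server.
        Ok(format!("echo: {prompt}"))
    }
}

#[tokio::main]
async fn main() -> Result<(), String> {
    // The TUI/CLI awaits controller construction instead of building it synchronously.
    let mut session = SessionController::new().await?;
    let reply = session.send("hello").await?;
    println!("{reply}");
    Ok(())
}
```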
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Standardize array and vector formatting for clarity.
- Adjust spacing and indentation in examples and TUI code.
- Ensure proper newline usage across files (e.g., LICENSE, TOML files, etc.).
- Simplify `.to_string()` and `.ok()` calls for brevity.
- Include detailed architecture overview in `docs/architecture.md`.
- Add `docs/configuration.md`, detailing configuration file structure and settings.
- Provide a step-by-step provider implementation guide in `docs/provider-implementation.md`.
- Add frequently asked questions (FAQ) document in `docs/faq.md`.
- Create `docs/migration-guide.md` for future breaking changes and version upgrades.
- Introduce new examples in `examples/` showcasing basic chat, custom providers, and theming.
- Add a changelog (`CHANGELOG.md`) for tracking significant changes.
- Provide contribution guidelines (`CONTRIBUTING.md`) and a Code of Conduct (`CODE_OF_CONDUCT.md`).