- Export `LLMProvider` from `owlen-core`, replacing the public `Provider` re-exports.
- Convert `OllamaProvider` to implement the new `LLMProvider` trait with associated future types (see the sketch after this list).
- Adjust imports and trait bounds in `remote_client.rs` to use the updated types.
- Add comprehensive provider interface tests (`provider_interface.rs`) verifying that the router dispatches requests correctly and that the provider registry lists models, using `MockProvider`.
- Align dependency versions across workspace crates by switching to workspace-managed versions.
- Extend CI (`.woodpecker.yml`) with a dedicated test step and generate coverage reports.
- Update architecture documentation to reflect the new provider abstraction.
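As a rough illustration of what a provider trait with associated future types can look like, here is a minimal sketch; every type, field, and method name below is an assumption for illustration, not `owlen-core`'s actual API:

```rust
use std::future::{ready, Future, Ready};

// Hypothetical request/response/error types; owlen-core defines its own.
pub struct ChatRequest {
    pub model: String,
    pub prompt: String,
}

pub struct ChatResponse {
    pub content: String,
}

#[derive(Debug)]
pub struct ProviderError(pub String);

/// Provider abstraction with associated future types, so each implementor can
/// return a concrete future without boxing on every call.
pub trait LLMProvider: Send + Sync {
    type ChatFuture: Future<Output = Result<ChatResponse, ProviderError>> + Send;
    type ModelsFuture: Future<Output = Result<Vec<String>, ProviderError>> + Send;

    fn chat(&self, request: ChatRequest) -> Self::ChatFuture;
    fn list_models(&self) -> Self::ModelsFuture;
}

/// Trivial in-memory provider of the kind used in routing/registry tests.
pub struct MockProvider {
    pub models: Vec<String>,
}

impl LLMProvider for MockProvider {
    type ChatFuture = Ready<Result<ChatResponse, ProviderError>>;
    type ModelsFuture = Ready<Result<Vec<String>, ProviderError>>;

    fn chat(&self, request: ChatRequest) -> Self::ChatFuture {
        ready(Ok(ChatResponse {
            content: format!("[{}] echo: {}", request.model, request.prompt),
        }))
    }

    fn list_models(&self) -> Self::ModelsFuture {
        ready(Ok(self.models.clone()))
    }
}
```

One appeal of associated future types is that a mock like this can return `std::future::Ready` directly, which keeps tests such as `provider_interface.rs` cheap to drive.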
This commit fixes 12 categories of errors across the codebase:
- Fix owlen-mcp-llm-server build target conflict by renaming lib.rs to main.rs
- Resolve ambiguous glob re-exports in owlen-core by using explicit exports (see the sketch after this list)
- Add Default derive to MockMcpClient and MockProvider test utilities
- Remove unused imports from owlen-core test files
- Fix needless borrows in test file arguments
- Improve Config initialization style in mode_tool_filter tests
- Make AgentExecutor::parse_response public for testing
- Remove non-existent max_tool_calls field from AgentConfig usage
- Fix AgentExecutor::new calls to use correct 3-argument signature
- Fix AgentResult field access in agent tests
- Use Debug formatting instead of Display for AgentResult
- Remove unnecessary default() calls on unit structs
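For the glob re-export fix, a minimal sketch of the general pattern; the module and item names here are invented for illustration and do not match owlen-core's real layout:

```rust
// Illustrative layout only; owlen-core's real modules and items differ.
mod provider {
    pub struct LLMProvider;
    pub struct ProviderError;
}

mod mcp {
    pub struct McpClient;
    // Same name as provider::ProviderError: with `pub use provider::*;` and
    // `pub use mcp::*;` side by side, rustc flags the re-export as ambiguous.
    pub struct ProviderError;
}

// Explicit exports keep the public surface unambiguous; a rename resolves
// the remaining name clash.
pub use mcp::{McpClient, ProviderError as McpError};
pub use provider::{LLMProvider, ProviderError};
```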
All changes ensure the project compiles cleanly with:
- cargo check --all-targets ✓
- cargo clippy --all-targets -- -D warnings ✓
- cargo test --no-run ✓
Phase 10 "Cleanup & Production Polish" is now complete. All LLM
interactions now go through the Model Context Protocol (MCP), removing
direct provider dependencies from CLI/TUI.
## Major Changes
### MCP Architecture
- All providers (local and cloud Ollama) now use RemoteMcpClient
- Removed owlen-ollama dependency from owlen-tui
- MCP LLM server accepts an OLLAMA_URL environment variable for cloud providers (see the sketch after this list)
- Proper notification handling for streaming responses
- Fixed response deserialization (McpToolResponse unwrapping)
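A minimal sketch of the cloud-provider idea, assuming the server binary is named `owlen-mcp-llm-server` and speaks MCP over STDIO; the endpoint URL is a placeholder, and the real wiring lives inside RemoteMcpClient:

```rust
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Illustrative only: RemoteMcpClient does this internally. The same MCP
    // LLM server binary serves local and cloud Ollama; only OLLAMA_URL changes.
    let cloud_endpoint = "https://ollama.example.com"; // assumed cloud URL
    let child = Command::new("owlen-mcp-llm-server")   // assumed binary name
        .env("OLLAMA_URL", cloud_endpoint)             // local default would be http://localhost:11434
        .stdin(Stdio::piped())                         // MCP requests go in over STDIO
        .stdout(Stdio::piped())                        // responses and streaming notifications come out
        .spawn()?;

    println!("spawned MCP LLM server, pid {}", child.id());
    Ok(())
}
```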
### Code Cleanup
- Removed direct OllamaProvider instantiation from TUI
- Updated collect_models_from_all_providers() to use MCP for all providers
- Updated switch_provider() to use MCP with environment configuration
- Removed unused general config variable
### Documentation
- Added comprehensive MCP Architecture section to docs/architecture.md
- Documented MCP communication flow and cloud provider support
- Updated crate breakdown to reflect MCP servers
### Security & Performance
- Path traversal protection verified for all resource operations (a sketch of the check follows this list)
- Process isolation via separate MCP server processes
- Tool permissions controlled via consent manager
- Clean release build of entire workspace verified
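The path traversal check referenced above generally follows the canonicalize-and-prefix-check pattern; a standalone sketch, not the crate's actual resource handler:

```rust
use std::io;
use std::path::{Path, PathBuf};

/// Resolve `requested` against `root` and reject anything that escapes it.
fn resolve_resource(root: &Path, requested: &str) -> io::Result<PathBuf> {
    let root = root.canonicalize()?;
    let candidate = root.join(requested).canonicalize()?;
    if candidate.starts_with(&root) {
        Ok(candidate)
    } else {
        Err(io::Error::new(
            io::ErrorKind::PermissionDenied,
            "path escapes the resource root",
        ))
    }
}

fn main() -> io::Result<()> {
    let root = Path::new(".");
    // "src/main.rs" stays inside the root; "../../etc/passwd" would not.
    match resolve_resource(root, "src/main.rs") {
        Ok(p) => println!("allowed: {}", p.display()),
        Err(e) => println!("rejected: {e}"),
    }
    Ok(())
}
```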
## Benefits of MCP Architecture
1. **Separation of Concerns**: TUI/CLI never directly instantiates providers
2. **Process Isolation**: LLM interactions run in separate processes
3. **Extensibility**: New providers can be added as MCP servers
4. **Multi-Transport**: Supports STDIO, HTTP, and WebSocket
5. **Tool Integration**: MCP servers expose tools to LLMs
This completes Phase 10 and establishes a clean, production-ready architecture
for future development.
- Fix ACTION_INPUT regex to capture multiline JSON responses (see the sketch after this list)
  - Changed from stopping at the first newline to capturing all remaining text
  - Resolves parsing errors when the LLM generates formatted JSON with line breaks
- Enhance tool schemas with detailed descriptions and parameter specifications
- Add comprehensive Message schema for generate_text tool
- Clarify distinction between resources/get (file read) and resources/list (directory listing)
- Include clear usage guidance in tool descriptions
- Set the default model to llama3.2:latest instead of the invalid value "ollama"
- Add parse error debugging to help troubleshoot LLM response issues
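A minimal sketch of the regex change, using the `regex` crate; the pattern and capture group shown here are illustrative, not the agent's exact expression:

```rust
use regex::Regex;

fn main() {
    // Before (illustrative): `.` stops at the first newline, so multiline JSON
    // after "ACTION_INPUT:" was truncated.
    let old = Regex::new(r"ACTION_INPUT:\s*(.+)").unwrap();

    // After: the (?s) flag lets `.` match newlines, capturing the whole
    // remaining block.
    let new = Regex::new(r"(?s)ACTION_INPUT:\s*(.+)").unwrap();

    let response = "ACTION_INPUT: {\n  \"path\": \"src/main.rs\",\n  \"recursive\": true\n}";

    let truncated = &old.captures(response).unwrap()[1];
    let full = &new.captures(response).unwrap()[1];
    assert_eq!(truncated, "{");   // old behaviour: stops at the newline
    assert!(full.ends_with('}')); // new behaviour: captures the full JSON block
    println!("captured {} bytes", full.len());
}
```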
The agent infrastructure now correctly handles multiline tool arguments and
provides better guidance to LLMs through improved tool schemas. Remaining
errors are due to LLM quality (model making poor tool choices or generating
malformed responses), not infrastructure bugs.
- Introduce `owlen-mcp-llm-server` crate with RPC handling, `generate_text` tool, model listing, and streaming notifications.
- Add an `RpcNotification` struct and a `MODELS_LIST` method to the MCP protocol (see the sketch after this list).
- Update `owlen-core` to depend on `tokio-stream`.
- Adjust Ollama provider to omit empty `tools` field for compatibility.
- Enhance `RemoteMcpClient` to locate the renamed server binary, handle resource tools locally, and implement the `Provider` trait (model listing, chat, streaming, health check).
- Add new crate to workspace `Cargo.toml`.
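A rough sketch of what a JSON-RPC notification type for streaming updates can look like; the field layout follows JSON-RPC 2.0, while the method name and `serde` details are assumptions rather than the crate's actual definition:

```rust
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};

/// A JSON-RPC notification carries no `id`, so no response is expected;
/// this makes it a natural fit for streaming token/progress updates.
#[derive(Debug, Serialize, Deserialize)]
pub struct RpcNotification {
    pub jsonrpc: String,
    pub method: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub params: Option<Value>,
}

fn main() {
    let chunk = RpcNotification {
        jsonrpc: "2.0".to_string(),
        method: "notifications/progress".to_string(), // illustrative method name
        params: Some(json!({ "delta": "Hello, " })),
    };
    println!("{}", serde_json::to_string(&chunk).unwrap());
}
```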