This commit completes Phase 10 of the MCP migration by removing all
direct provider usage from CLI/TUI and enforcing MCP-first architecture.
## Changes
### Core Architecture
- **main.rs**: Replaced OllamaProvider with RemoteMcpClient
  - Uses the MCP server configuration from config.toml when available
  - Falls back to auto-discovery of the MCP LLM server binary
- **agent_main.rs**: Unified the provider and MCP client into a single RemoteMcpClient
  - Simplifies initialization with the Arc::clone pattern
  - All LLM communication now goes through the MCP protocol
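The config-then-discovery fallback and the shared-client wiring described above can be sketched as follows. This is an illustrative sketch only: `McpServerConfig`, `from_config`, and `auto_discover` are stand-in names, not the actual APIs in main.rs/agent_main.rs.

```rust
use std::path::PathBuf;
use std::sync::Arc;

// Stand-in types; the real ones live in the owlen crates.
struct McpServerConfig { command: PathBuf }
struct RemoteMcpClient { server: PathBuf }

impl RemoteMcpClient {
    fn from_config(cfg: &McpServerConfig) -> Self {
        RemoteMcpClient { server: cfg.command.clone() }
    }
    // Fallback: probe for the MCP LLM server binary (name is hypothetical).
    fn auto_discover() -> Option<Self> {
        Some(RemoteMcpClient { server: PathBuf::from("owlen-mcp-llm-server") })
    }
}

fn build_client(config: Option<&McpServerConfig>) -> Option<Arc<RemoteMcpClient>> {
    let client = match config {
        Some(cfg) => RemoteMcpClient::from_config(cfg), // config.toml entry wins
        None => RemoteMcpClient::auto_discover()?,      // otherwise auto-discover
    };
    Some(Arc::new(client))
}

fn main() {
    let client = build_client(None).expect("no MCP server found");
    // agent_main.rs-style sharing: the provider role and the tool-calling role
    // hold the same underlying connection via cheap Arc::clone pointer copies.
    let _provider = Arc::clone(&client);
    let _tools = Arc::clone(&client);
    println!("server: {}", client.server.display());
}
```

The point of the `Arc::clone` pattern is that both roles observe one connection and one set of server capabilities, rather than each opening its own transport.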
### Dependencies
- **Cargo.toml**: Removed the owlen-ollama dependency from owlen-cli
  - CLI no longer knows about Ollama implementation details
  - Clean separation: only MCP servers use provider crates internally
### Tests
- **agent_tests.rs**: Updated all tests to use RemoteMcpClient
  - Replaced OllamaProvider::new() with RemoteMcpClient::new()
  - Updated test documentation to reflect MCP requirements
  - All tests compile and run successfully
### Examples
- **Removed**: custom_provider.rs, basic_chat.rs (deprecated)
- **Added**: mcp_chat.rs - demonstrates recommended MCP-based usage
  - Shows how to use RemoteMcpClient for LLM interactions
  - Includes model listing and chat request examples
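The shape of the new example might look like the sketch below; `list_models` and `chat` are illustrative stubs standing in for whatever methods RemoteMcpClient actually exposes, not its real signatures.

```rust
// Hypothetical shape of mcp_chat.rs; the real client speaks MCP over
// STDIO/HTTP/WS instead of returning canned values.
struct RemoteMcpClient;

struct ChatRequest {
    model: String,
    prompt: String,
}

impl RemoteMcpClient {
    // Stub: the real call would query the MCP LLM server for its models.
    fn list_models(&self) -> Vec<String> {
        vec!["llama3.2:latest".to_string()]
    }
    // Stub: the real call would send a chat request over the MCP protocol.
    fn chat(&self, req: &ChatRequest) -> String {
        format!("[{}] reply to: {}", req.model, req.prompt)
    }
}

fn main() {
    let client = RemoteMcpClient;
    for model in client.list_models() {
        println!("available model: {model}");
    }
    let reply = client.chat(&ChatRequest {
        model: "llama3.2:latest".into(),
        prompt: "Hello, MCP".into(),
    });
    println!("{reply}");
}
```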
### Cleanup
- Removed outdated TODO about MCP integration (now complete)
- Updated comments to reflect current MCP architecture
## Architecture
```
CLI/TUI → RemoteMcpClient (impl Provider)
              ↓ MCP Protocol (STDIO/HTTP/WS)
MCP LLM Server → OllamaProvider → Ollama
```
## Benefits
- ✅ Clean separation of concerns
- ✅ CLI is protocol-agnostic (only knows MCP)
- ✅ Easier to add new LLM backends (just implement an MCP server)
- ✅ All tests passing
- ✅ Full workspace builds successfully
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Remove the separate owlen-code binary as code assistance functionality
is now integrated into the main application through the mode consolidation system.
- Fix ACTION_INPUT regex to properly capture multiline JSON responses
  - Changed from stopping at the first newline to capturing all remaining text
  - Resolves parsing errors when the LLM generates formatted JSON with line breaks
- Enhance tool schemas with detailed descriptions and parameter specifications
  - Add comprehensive Message schema for the generate_text tool
  - Clarify distinction between resources/get (file read) and resources/list (directory listing)
  - Include clear usage guidance in tool descriptions
- Set default model to llama3.2:latest instead of invalid "ollama"
- Add parse error debugging to help troubleshoot LLM response issues
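In regex terms, the capture fix amounts to switching from a pattern where `.` stops at a newline to one where it matches everything remaining (e.g. enabling dot-all mode, `(?s)`, in Rust's regex crate). The exact pattern isn't shown in this commit, so here is a dependency-free sketch that reproduces the before/after behavior with plain string operations:

```rust
// Before: capture stopped at the first newline, truncating pretty-printed JSON.
// After: everything following ACTION_INPUT: is captured, line breaks included.
fn capture_action_input(response: &str, dot_all: bool) -> Option<&str> {
    let start = response.find("ACTION_INPUT:")? + "ACTION_INPUT:".len();
    let rest = response[start..].trim_start();
    if dot_all {
        Some(rest)                              // new behavior: keep line breaks
    } else {
        Some(rest.lines().next().unwrap_or("")) // old behavior: first line only
    }
}

fn main() {
    let response = "ACTION: write_file\nACTION_INPUT: {\n  \"path\": \"a.rs\",\n  \"content\": \"fn main() {}\"\n}";
    // The old behavior captures only "{" and the JSON fails to parse.
    assert_eq!(capture_action_input(response, false), Some("{"));
    // The new behavior captures the full JSON object.
    let full = capture_action_input(response, true).unwrap();
    assert!(full.contains("\"content\""));
    println!("captured {} bytes", full.len());
}
```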
The agent infrastructure now correctly handles multiline tool arguments and
provides better guidance to LLMs through improved tool schemas. Remaining
errors are due to LLM quality (model making poor tool choices or generating
malformed responses), not infrastructure bugs.
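The schema enhancements described above might look roughly like the following fragment; the field names and description wording are illustrative, not the exact schemas shipped in this commit:

```json
{
  "name": "generate_text",
  "description": "Generate a chat completion via the configured model. Use resources/get to read a single file and resources/list to list a directory; do not use this tool for file access.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "model": {
        "type": "string",
        "description": "Model identifier, e.g. llama3.2:latest",
        "default": "llama3.2:latest"
      },
      "messages": {
        "type": "array",
        "description": "Conversation history as Message objects",
        "items": {
          "type": "object",
          "properties": {
            "role": { "type": "string", "enum": ["system", "user", "assistant"] },
            "content": { "type": "string" }
          },
          "required": ["role", "content"]
        }
      }
    },
    "required": ["model", "messages"]
  }
}
```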