- Introduce a new `owlen-markdown` crate that converts Markdown strings to `ratatui::Text` with headings, lists, bold/italic, and inline code (see the sketch after this list).
- Add `render_markdown` config option (default true) and expose it via `app.render_markdown_enabled()`.
- Implement `:markdown [on|off]` command to toggle markdown rendering.
- Update help overlay to document the new markdown toggle.
- Adjust UI rendering to conditionally apply markdown styling based on the markdown flag and code mode.
- Wire the new crate into `owlen-tui` Cargo.toml.
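A minimal sketch of the conversion idea, assuming a naive line-oriented pass; the real `owlen-markdown` crate presumably uses a proper Markdown parser and also styles inline bold/italic/code:

```rust
use ratatui::style::{Modifier, Style};
use ratatui::text::{Line, Span, Text};

// Naive line-oriented sketch: headings become bold, list items get a
// bullet glyph, everything else passes through unstyled.
fn render_markdown(input: &str) -> Text<'static> {
    let lines: Vec<Line> = input
        .lines()
        .map(|line| {
            if let Some(heading) = line.strip_prefix("# ") {
                Line::from(Span::styled(
                    heading.to_string(),
                    Style::default().add_modifier(Modifier::BOLD),
                ))
            } else if let Some(item) = line.strip_prefix("- ") {
                Line::from(format!("• {item}"))
            } else {
                Line::from(line.to_string())
            }
        })
        .collect();
    Text::from(lines)
}
```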
- Introduce an `McpCommand` enum and handlers in `owlen-cli` to manage MCP server registrations: adding, listing, and removing servers across configuration scopes (illustrative shapes sketched below).
- Add scoped configuration support (`ScopedMcpServer`, `McpConfigScope`) and OAuth token handling in core config, alongside runtime refresh of MCP servers.
- Implement toast notifications in the TUI (`render_toasts`, `Toast`, `ToastLevel`) and integrate async handling for session events.
- Update config loading, validation, and schema versioning to accommodate the new MCP scopes and resources.
- Add `httpmock` as a dev dependency for testing.
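Illustrative shapes for the new CLI and toast types (variant and field names here are guesses, not the actual owlen definitions):

```rust
/// Which configuration file a server registration lives in.
enum McpConfigScope {
    User,    // e.g. a user-wide config.toml
    Project, // e.g. a per-project config file
}

/// CLI subcommands for managing MCP server registrations.
enum McpCommand {
    Add { name: String, url: String, scope: McpConfigScope },
    List,
    Remove { name: String, scope: McpConfigScope },
}

/// Severity of a TUI toast notification.
enum ToastLevel {
    Info,
    Warning,
    Error,
}

/// A transient message rendered by `render_toasts`.
struct Toast {
    level: ToastLevel,
    message: String,
}
```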
- Introduce `MessageRenderContext` and `MessageCacheEntry` for caching wrapped lines per message.
- Implement `render_message_lines_cached` using cache, invalidating on updates.
- Add role/style helpers and content hashing for cache validation (see the sketch after this list).
- Throttle UI redraws in the main loop during active streaming (50 ms interval) and adjust idle tick timing.
- Update drawing logic to use cached rendering and manage draw intervals.
- Remove unused `role_color` function and adjust imports accordingly.
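A rough sketch of the cache-validation idea (field names are illustrative; the real `MessageCacheEntry` likely stores styled ratatui lines and render context):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative cache entry: wrapped lines are reused as long as neither
// the message content nor the wrap width has changed.
struct MessageCacheEntry {
    content_hash: u64,
    wrap_width: u16,
    lines: Vec<String>,
}

fn content_hash(content: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    content.hash(&mut hasher);
    hasher.finish()
}

fn is_valid(entry: &MessageCacheEntry, content: &str, width: u16) -> bool {
    entry.wrap_width == width && entry.content_hash == content_hash(content)
}
```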
- Introduce `DetailedModelInfo` and `ModelInfoRetrievalError` structs for richer model data.
- Add `ModelDetailsCache` with TTL-based storage and an async get/insert/invalidate API (see the sketch after this list).
- Extend `OllamaProvider` to fetch, cache, refresh, and list detailed model info.
- Expose model-detail methods in `Session` for on-demand and bulk retrieval.
- Add a `ModelInfoPanel` widget to display detailed info with scrolling support.
- Update TUI rendering to show the panel, compute viewport height, and render model selector labels with parameters, size, and context length.
- Adjust imports and module re-exports accordingly.
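A hedged sketch of the TTL cache shape (method and field names are assumptions, not the real owlen API):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};
use tokio::sync::RwLock;

#[derive(Clone)]
pub struct DetailedModelInfo {
    pub parameters: String,
    pub context_length: u32,
}

pub struct ModelDetailsCache {
    ttl: Duration,
    entries: RwLock<HashMap<String, (Instant, DetailedModelInfo)>>,
}

impl ModelDetailsCache {
    pub fn new(ttl: Duration) -> Self {
        Self { ttl, entries: RwLock::new(HashMap::new()) }
    }

    /// Returns a hit only while the entry is younger than the TTL.
    pub async fn get(&self, model: &str) -> Option<DetailedModelInfo> {
        self.entries
            .read()
            .await
            .get(model)
            .filter(|(stored_at, _)| stored_at.elapsed() < self.ttl)
            .map(|(_, info)| info.clone())
    }

    pub async fn insert(&self, model: String, info: DetailedModelInfo) {
        self.entries.write().await.insert(model, (Instant::now(), info));
    }

    pub async fn invalidate(&self, model: &str) {
        self.entries.write().await.remove(model);
    }
}
```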
Deletes the `owlen-ollama` Cargo.toml and source files, fully removing the Ollama provider from the workspace. This aligns the project with the MCP-only architecture and eliminates direct provider dependencies.
- Detect terminal color support and automatically switch to the new `ansi_basic` theme when only 16-color support is available (detection sketched after this list).
- Introduce `OfflineProvider` that supplies a placeholder model and friendly messages when no providers are reachable, keeping the TUI usable.
- Add `CONFIG_SCHEMA_VERSION` (`1.1.0`) with schema migration logic and default handling in `Config`.
- Update configuration saving to persist the schema version and ensure defaults.
- Register the `ansi_basic` theme in `theme.rs`.
- Extend `ChatApp` with `set_status_message` to display custom status lines.
- Update documentation (architecture, Vim mode state machine) to reflect new behavior.
- Add the async-trait and futures dependencies required for the offline provider implementation.
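One plausible way to detect limited color support, sketched with an env-var heuristic (the actual detection may query the terminal directly instead):

```rust
use std::env;

// Heuristic only: COLORTERM advertises truecolor; a TERM containing
// "256color" implies at least 256 colors; otherwise assume 16 colors.
fn supports_truecolor() -> bool {
    matches!(env::var("COLORTERM").as_deref(), Ok("truecolor") | Ok("24bit"))
}

fn pick_theme() -> &'static str {
    let term = env::var("TERM").unwrap_or_default();
    if supports_truecolor() || term.contains("256color") {
        "default"
    } else {
        "ansi_basic" // fall back to the 16-color theme
    }
}
```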
- Export `LLMProvider` from `owlen-core` and replace public `Provider` re-exports.
- Convert `OllamaProvider` to implement the new `LLMProvider` trait with associated future types (see the sketch after this list).
- Adjust imports and trait bounds in `remote_client.rs` to use the updated types.
- Add comprehensive provider interface tests (`provider_interface.rs`) verifying router routing and provider registry model listing with `MockProvider`.
- Align dependency versions across workspace crates by switching to workspace-managed versions.
- Extend CI (`.woodpecker.yml`) with a dedicated test step and generate coverage reports.
- Update architecture documentation to reflect the new provider abstraction.
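A sketch of what a trait with associated future types can look like (names and signatures are assumptions; the real `LLMProvider` in `owlen-core` will differ):

```rust
use std::future::Future;

pub trait LLMProvider: Send + Sync {
    /// Concrete future type per implementor, avoiding boxed futures.
    type CompletionFuture: Future<Output = Result<String, String>> + Send;

    fn name(&self) -> &str;
    fn complete(&self, prompt: String) -> Self::CompletionFuture;
}

/// Trivial implementor using a ready future, e.g. for tests.
struct EchoProvider;

impl LLMProvider for EchoProvider {
    type CompletionFuture = std::future::Ready<Result<String, String>>;

    fn name(&self) -> &str {
        "echo"
    }

    fn complete(&self, prompt: String) -> Self::CompletionFuture {
        std::future::ready(Ok(prompt))
    }
}
```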
This commit fixes 12 categories of errors across the codebase:
- Fix owlen-mcp-llm-server build target conflict by renaming lib.rs to main.rs
- Resolve ambiguous glob re-exports in owlen-core by using explicit exports (see the sketch after this list)
- Add Default derive to MockMcpClient and MockProvider test utilities
- Remove unused imports from owlen-core test files
- Fix needless borrows in test file arguments
- Improve Config initialization style in mode_tool_filter tests
- Make AgentExecutor::parse_response public for testing
- Remove non-existent max_tool_calls field from AgentConfig usage
- Fix AgentExecutor::new calls to use correct 3-argument signature
- Fix AgentResult field access in agent tests
- Use Debug formatting instead of Display for AgentResult
- Remove unnecessary default() calls on unit structs
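For the glob re-export fix, the pattern looks roughly like this (module and item names are illustrative, not the actual owlen-core ones):

```rust
// Before: overlapping glob re-exports can export the same name twice,
// which rustc reports as an ambiguous glob re-export.
// pub use provider::*;
// pub use session::*;

mod provider {
    pub struct ProviderRegistry;
}
mod session {
    pub struct SessionController;
}

// After: explicit re-exports keep the public surface unambiguous.
pub use provider::ProviderRegistry;
pub use session::SessionController;
```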
All changes ensure the project compiles cleanly with:
- cargo check --all-targets ✓
- cargo clippy --all-targets -- -D warnings ✓
- cargo test --no-run ✓
This commit completes Phase 10 of the MCP migration by removing all
direct provider usage from the CLI/TUI and enforcing an MCP-first architecture.
## Changes
### Core Architecture
- **main.rs**: Replaced OllamaProvider with RemoteMcpClient
  - Uses MCP server configuration from config.toml if available
  - Falls back to auto-discovery of MCP LLM server binary
- **agent_main.rs**: Unified provider and MCP client to single RemoteMcpClient
  - Simplifies initialization with Arc::clone pattern
  - All LLM communication now goes through MCP protocol
### Dependencies
- **Cargo.toml**: Removed owlen-ollama dependency from owlen-cli
  - CLI no longer knows about Ollama implementation details
  - Clean separation: only MCP servers use provider crates internally
### Tests
- **agent_tests.rs**: Updated all tests to use RemoteMcpClient
  - Replaced OllamaProvider::new() with RemoteMcpClient::new()
  - Updated test documentation to reflect MCP requirements
  - All tests compile and run successfully
### Examples
- **Removed**: custom_provider.rs, basic_chat.rs (deprecated)
- **Added**: mcp_chat.rs - demonstrates recommended MCP-based usage (hypothetical usage sketched below)
  - Shows how to use RemoteMcpClient for LLM interactions
  - Includes model listing and chat request examples
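Hypothetical usage in the spirit of `mcp_chat.rs` (constructor and method names here are assumptions, not the verified `RemoteMcpClient` API):

```rust
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to (or auto-discover) the MCP LLM server.
    let client = Arc::new(RemoteMcpClient::new()?);

    // List models exposed over MCP (method name assumed).
    for model in client.list_models().await? {
        println!("model: {model}");
    }

    // Send a single chat request through the MCP protocol (method name assumed).
    let reply = client.chat("Hello over MCP!").await?;
    println!("{reply}");
    Ok(())
}
```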
### Cleanup
- Removed outdated TODO about MCP integration (now complete)
- Updated comments to reflect current MCP architecture
## Architecture
```
CLI/TUI → RemoteMcpClient (impl Provider)
              ↓ MCP Protocol (STDIO/HTTP/WS)
MCP LLM Server → OllamaProvider → Ollama
```
## Benefits
- ✅ Clean separation of concerns
- ✅ CLI is protocol-agnostic (only knows MCP)
- ✅ Easier to add new LLM backends (just implement an MCP server)
- ✅ All tests passing
- ✅ Full workspace builds successfully
Remove the separate owlen-code binary as code assistance functionality
is now integrated into the main application through the mode consolidation system.
- Fix ACTION_INPUT regex to properly capture multiline JSON responses (before/after sketched below)
  - Changed from stopping at the first newline to capturing all remaining text
  - Resolves parsing errors when the LLM generates formatted JSON with line breaks
- Enhance tool schemas with detailed descriptions and parameter specifications
  - Add comprehensive Message schema for generate_text tool
  - Clarify distinction between resources/get (file read) and resources/list (directory listing)
  - Include clear usage guidance in tool descriptions
- Set default model to llama3.2:latest instead of invalid "ollama"
- Add parse error debugging to help troubleshoot LLM response issues
The agent infrastructure now correctly handles multiline tool arguments and
provides better guidance to LLMs through improved tool schemas. Remaining
errors are due to LLM quality (model making poor tool choices or generating
malformed responses), not infrastructure bugs.
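The regex change amounts to letting `.` cross newlines; an illustrative before/after using the `regex` crate (the real ACTION_INPUT pattern may differ):

```rust
use regex::Regex;

fn action_input(raw: &str) -> Option<&str> {
    // Before (illustrative): without the `s` flag, `.` stops at the first
    // newline, truncating formatted JSON payloads.
    // let re = Regex::new(r"Action Input:\s*(.+)").unwrap();

    // After: `(?s)` makes `.` match newlines too, so the capture keeps
    // all remaining text.
    let re = Regex::new(r"(?s)Action Input:\s*(.+)").unwrap();
    re.captures(raw)
        .and_then(|caps| caps.get(1))
        .map(|m| m.as_str().trim())
}
```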
Adds consent management for tool execution, input validation, sandboxed process execution, and MCP server integration. Updates session management to support tool use, conversation persistence, and streaming responses.
Major additions:
- Database migrations for conversations and secure storage
- Encryption and credential management infrastructure
- Extensible tool system with code execution and web search
- Consent management and validation systems
- Sandboxed process execution
- MCP server integration
Infrastructure changes:
- Module registration and workspace dependencies
- ToolCall type and tool-related Message methods (see the sketch at the end of this message)
- Privacy, security, and tool configuration structures
- Database-backed conversation persistence
- Tool call tracking in conversations
Provider and UI updates:
- Ollama provider updates for tool support and new Role types
- TUI chat and code app updates for async initialization
- CLI updates for new SessionController API
- Configuration documentation updates
- CHANGELOG updates
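A minimal sketch of the tool-call/consent relationship (type and field names are illustrative, not the actual owlen-core definitions):

```rust
use serde_json::Value;

/// One tool invocation requested by the model.
struct ToolCall {
    id: String,
    name: String,
    arguments: Value,
}

enum Consent {
    Granted,
    Denied,
}

/// Execution is gated on an explicit consent decision.
fn execute(call: &ToolCall, consent: Consent) -> Result<String, String> {
    match consent {
        Consent::Granted => Ok(format!("ran `{}` ({})", call.name, call.id)),
        Consent::Denied => Err(format!("consent denied for `{}`", call.name)),
    }
}
```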
- Include a detailed architecture overview in `docs/architecture.md`.
- Add `docs/configuration.md`, detailing configuration file structure and settings.
- Provide a step-by-step provider implementation guide in `docs/provider-implementation.md`.
- Add a frequently asked questions (FAQ) document in `docs/faq.md`.
- Create `docs/migration-guide.md` for future breaking changes and version upgrades.
- Introduce new examples in `examples/` showcasing basic chat, custom providers, and theming.
- Add a changelog (`CHANGELOG.md`) for tracking significant changes.
- Provide contribution guidelines (`CONTRIBUTING.md`) and a Code of Conduct (`CODE_OF_CONDUCT.md`).