Milestone M8 adds MCP (Model Context Protocol) integration for connecting to
external tool servers and resources.
New crate: crates/integration/mcp-client
- JSON-RPC 2.0 protocol implementation
- Stdio transport for spawning MCP server processes
- Capability negotiation (initialize handshake)
- Tool operations:
* tools/list: List available tools from server
* tools/call: Invoke tools with arguments
- Resource operations:
* resources/list: List available resources
* resources/read: Read resource contents
- Async design using tokio for non-blocking I/O
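As a rough illustration of the stdio transport and initialize handshake, a minimal sketch follows. It assumes tokio, serde_json, and anyhow as dependencies; the function shape and client metadata are assumptions, not the crate's actual API.

```rust
use serde_json::{json, Value};
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::{Child, Command};

// Sketch: spawn an MCP server, send the JSON-RPC 2.0 initialize request
// (one JSON object per line on stdin), and read back its capabilities.
async fn initialize(server_cmd: &str) -> anyhow::Result<(Child, Value)> {
    let mut child = Command::new(server_cmd)
        .stdin(std::process::Stdio::piped())
        .stdout(std::process::Stdio::piped())
        .spawn()?;

    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": { "name": "owlen", "version": "0.1.0" }
        }
    });
    let stdin = child.stdin.as_mut().expect("piped stdin");
    stdin.write_all(format!("{request}\n").as_bytes()).await?;
    stdin.flush().await?;

    // The server answers with a single JSON-RPC response line.
    let stdout = child.stdout.take().expect("piped stdout");
    let mut lines = BufReader::new(stdout).lines();
    let line = lines
        .next_line()
        .await?
        .ok_or_else(|| anyhow::anyhow!("server closed stdout"))?;
    let response: Value = serde_json::from_str(&line)?;
    Ok((child, response["result"]["capabilities"].clone()))
}
```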
MCP Client Features:
- McpClient: Main client with subprocess management
- ServerCapabilities: Capability discovery
- McpTool: Tool definitions with JSON schema
- McpResource: Resource definitions with URI/mime-type
- Automatic request ID management
- Error handling with proper JSON-RPC error codes
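The tool and resource types presumably mirror the MCP JSON shapes; a hedged sketch of what the serde definitions could look like (field layout beyond the spec names is an assumption):

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

/// A tool advertised by the server via tools/list.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct McpTool {
    pub name: String,
    #[serde(default)]
    pub description: Option<String>,
    /// JSON Schema describing the tool's arguments.
    #[serde(rename = "inputSchema")]
    pub input_schema: Value,
}

/// A resource advertised via resources/list.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct McpResource {
    pub uri: String,
    pub name: String,
    #[serde(rename = "mimeType", default)]
    pub mime_type: Option<String>,
}
```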
Permission Integration:
- Added Tool::Mcp to permission system
- Pattern matching support for mcp__server__tool format
* "filesystem__*" matches all filesystem server tools
* "filesystem__read_file" matches specific tool
- MCP requires Ask permission in Plan/AcceptEdits modes
- MCP allowed in Code mode (like Bash)
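A sketch of how the wildcard matching described above could be implemented; the function name is an assumption:

```rust
/// Match a permission pattern such as "filesystem__*" or
/// "filesystem__read_file" against a concrete server/tool pair.
fn mcp_pattern_matches(pattern: &str, server: &str, tool: &str) -> bool {
    let qualified = format!("{server}__{tool}");
    match pattern.strip_suffix('*') {
        Some(prefix) => qualified.starts_with(prefix), // wildcard: prefix match
        None => qualified == pattern,                  // otherwise: exact match
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn wildcard_and_exact() {
        assert!(mcp_pattern_matches("filesystem__*", "filesystem", "read_file"));
        assert!(mcp_pattern_matches("filesystem__read_file", "filesystem", "read_file"));
        assert!(!mcp_pattern_matches("filesystem__read_file", "filesystem", "write_file"));
    }
}
```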
Tests added (3 new tests with mock Python servers):
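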
1. mcp_server_capability_negotiation - Verifies initialize handshake
2. mcp_tool_invocation - Tests tool listing and calling
3. mcp_resource_reads - Tests resource listing and reading
Permission tests added (2 new tests):
1. mcp_server_pattern_matching - Verifies server-level wildcards
2. mcp_exact_tool_matching - Verifies tool-level exact matching
All 75 tests passing (up from 68).
Note: CLI integration is deferred; the MCP infrastructure is in place and fully
tested. Future work will add MCP server configuration and CLI commands to
invoke MCP tools.
Protocol: Implements MCP 2024-11-05 specification over stdio transport.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Milestone M6 adds a comprehensive hook system that lets users run custom
scripts at various lifecycle events.
New crate: crates/platform/hooks
- HookEvent enum with multiple event types:
* PreToolUse: fires before tool execution, can deny operations (exit code 2)
* PostToolUse: fires after tool execution
* SessionStart: fires at session start, can persist env vars
* SessionEnd, UserPromptSubmit, PreCompact (defined for future use)
- HookManager for executing hooks with timeout support
- JSON I/O: hooks receive event data via stdin, can output to stdout
- Hooks located in .owlen/hooks/{EventName}
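The event model could be sketched as follows; the variant payloads and JSON field names are assumptions, only the event names and hook path come from the list above:

```rust
use serde::Serialize;

/// Lifecycle events a hook script can receive (hypothetical payloads).
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "event")]
pub enum HookEvent {
    PreToolUse { tool: String, arguments: serde_json::Value },
    PostToolUse { tool: String, success: bool },
    SessionStart { session_id: String },
    SessionEnd { session_id: String },
    UserPromptSubmit { prompt: String },
    PreCompact,
}

impl HookEvent {
    /// Hooks live at .owlen/hooks/{EventName}, e.g. .owlen/hooks/PreToolUse.
    pub fn script_path(&self) -> std::path::PathBuf {
        let name = match self {
            HookEvent::PreToolUse { .. } => "PreToolUse",
            HookEvent::PostToolUse { .. } => "PostToolUse",
            HookEvent::SessionStart { .. } => "SessionStart",
            HookEvent::SessionEnd { .. } => "SessionEnd",
            HookEvent::UserPromptSubmit { .. } => "UserPromptSubmit",
            HookEvent::PreCompact => "PreCompact",
        };
        std::path::Path::new(".owlen/hooks").join(name)
    }
}
```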
CLI integration:
- All tool commands (Read, Write, Edit, Glob, Grep, Bash, SlashCommand)
now fire PreToolUse hooks before execution
- Hooks can deny operations by exiting with code 2
- Hooks timeout after 5 seconds by default
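A sketch of the execution path under these rules, assuming tokio and anyhow; the names and the policy for a timed-out hook are assumptions. The event is written as JSON to the hook's stdin, the process gets five seconds, and exit code 2 is treated as a denial:

```rust
use std::process::Stdio;
use std::time::Duration;
use tokio::io::AsyncWriteExt;
use tokio::process::Command;

pub enum HookDecision { Allow, Deny, NoHook }

/// Run a hook script, feeding it the JSON event on stdin.
pub async fn run_hook(script: &std::path::Path, event_json: &str) -> anyhow::Result<HookDecision> {
    if !script.exists() {
        return Ok(HookDecision::NoHook); // missing hooks are not an error
    }
    let mut child = Command::new(script)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    if let Some(mut stdin) = child.stdin.take() {
        stdin.write_all(event_json.as_bytes()).await?;
        // stdin is dropped here so the hook sees EOF.
    }

    // Default timeout of five seconds; a hung hook is killed
    // (treating that as an allow is a policy assumption).
    let status = match tokio::time::timeout(Duration::from_secs(5), child.wait()).await {
        Ok(status) => status?,
        Err(_) => {
            child.kill().await.ok();
            return Ok(HookDecision::Allow);
        }
    };
    // Exit code 2 means the hook denies the operation.
    Ok(if status.code() == Some(2) { HookDecision::Deny } else { HookDecision::Allow })
}
```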
Tests added:
- pretooluse_can_deny_call: verifies hooks can block tool execution
- posttooluse_runs_parallel: verifies PostToolUse hooks execute
- sessionstart_persists_env: verifies SessionStart can create env files
- hook_timeout_works: verifies timeout mechanism
- hook_not_found_is_ok: verifies missing hooks don't cause errors
All 63 tests passing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit implements the complete M1 milestone (Config & Permissions) including:
- New permissions crate with Tool, Action, Mode, and PermissionManager
- Three permission modes: Plan (read-only default), AcceptEdits, Code
- Pattern matching for permission rules (exact match and prefix with *)
- Integration with config-agent for mode-based permission management
- CLI integration with --mode flag to override configured mode
- Permission checks for Read, Glob, and Grep operations
- Comprehensive test suite (10 tests in permissions, 4 in config, 4 in CLI)
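As an illustration of the mode semantics and rule matching described above, a simplified sketch (the real Tool/Action/PermissionManager types carry more structure):

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
pub enum Mode { Plan, AcceptEdits, Code }

#[derive(Clone, Copy, PartialEq, Eq)]
pub enum Action { Read, Write, Execute }

/// Simplified default policy per mode: Plan is read-only, AcceptEdits adds
/// writes, Code also allows execution (all subject to configured rules).
pub fn default_allowed(mode: Mode, action: Action) -> bool {
    match (mode, action) {
        (_, Action::Read) => true,
        (Mode::AcceptEdits | Mode::Code, Action::Write) => true,
        (Mode::Code, Action::Execute) => true,
        _ => false,
    }
}

/// Rule matching: exact match, or prefix match when the rule ends with '*'.
pub fn rule_matches(rule: &str, target: &str) -> bool {
    match rule.strip_suffix('*') {
        Some(prefix) => target.starts_with(prefix),
        None => rule == target,
    }
}
```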
Also fixes:
- Fixed failing test in tools-fs (glob pattern issue)
- Improved glob_list() root extraction to handle patterns like "/*.txt"
All 21 workspace tests passing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Update workspace members and dependency paths to reflect new directory structure:
- crates/cli → crates/app/cli
- crates/config → crates/platform/config
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add new tools-fs crate with read, glob, and grep utilities
- Fix glob command to support actual glob patterns (**, *) instead of just directory walking
- Rename binary from "code" to "owlen" to match package name
- Fix test to reference correct binary name "owlen"
- Add API key support to OllamaClient for authentication
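For the glob support, a sketch of pattern-based listing assuming the `glob` crate is what backs it (the helper shape is illustrative):

```rust
use std::path::PathBuf;

/// List files matching a glob pattern such as "src/**/*.rs" or "*.txt".
pub fn glob_list(pattern: &str) -> Result<Vec<PathBuf>, glob::PatternError> {
    let mut paths: Vec<PathBuf> = glob::glob(pattern)?
        .filter_map(Result::ok) // skip unreadable entries
        .collect();
    paths.sort();
    Ok(paths)
}
```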
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Set up Cargo workspace with initial crates:
- cli: main application entry point with chat streaming tests
- config: configuration management
- llm/ollama: Ollama client integration with NDJSON support
Includes .gitignore for Rust and JetBrains IDEs.
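Ollama streams generation responses as NDJSON (one JSON object per line); a minimal sketch of parsing such a stream, with illustrative type names:

```rust
use serde::Deserialize;

/// One line of an Ollama NDJSON generation stream (subset of fields).
#[derive(Debug, Deserialize)]
struct GenerateChunk {
    response: String,
    done: bool,
}

/// Parse complete lines out of a buffered chunk of the HTTP body.
fn parse_ndjson_lines(buffer: &str) -> Result<Vec<GenerateChunk>, serde_json::Error> {
    buffer
        .lines()
        .filter(|line| !line.trim().is_empty())
        .map(serde_json::from_str)
        .collect()
}
```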
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Acceptance Criteria:
- spec-compliant names are shared via WEB_SEARCH_TOOL_NAME and ModeConfig checks canonical identifiers.
- workspace depends on once_cell so regex helpers build without local target hacks.
Test Notes:
- cargo test
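The once_cell dependency noted above is typically used for lazily compiled statics such as regexes; a sketch of such a helper (the constant's value and the pattern are assumptions):

```rust
use once_cell::sync::Lazy;
use regex::Regex;

/// Canonical tool name shared across crates (hypothetical value).
pub const WEB_SEARCH_TOOL_NAME: &str = "web_search";

/// Compiled once on first use instead of per call.
static CANONICAL_IDENT: Lazy<Regex> =
    Lazy::new(|| Regex::new(r"^[a-z][a-z0-9_]*$").expect("valid regex"));

pub fn is_canonical_identifier(name: &str) -> bool {
    CANONICAL_IDENT.is_match(name)
}
```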
Acceptance Criteria:
- Workspace metadata, PKGBUILD, and CHANGELOG announce version 0.2.0
- Release notes summarize major v0.2 additions, changes, and fixes for users
Test Notes:
- cargo test -p owlen-cli
Acceptance Criteria:
- Workspace builds against ratatui 0.29, crossterm 0.28.1, and tui-textarea 0.7 with palette support enabled
- Chat header context and usage gauges render with refreshed tailwind gradients
- Header layout uses the Flex API to balance top-row metadata across window widths
Test Notes:
- cargo test -p owlen-tui
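The Flex-based header split could look roughly like the following, assuming ratatui 0.29's `Layout::horizontal` and `Flex`; the widths and field roles are illustrative:

```rust
use ratatui::layout::{Constraint, Flex, Layout, Rect};

/// Split the header row so model metadata and usage gauges sit at opposite
/// ends regardless of terminal width.
fn header_areas(area: Rect) -> (Rect, Rect) {
    let [left, right] = Layout::horizontal([Constraint::Length(32), Constraint::Length(24)])
        .flex(Flex::SpaceBetween)
        .areas(area);
    (left, right)
}
```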
Update pkgver in PKGBUILD, version badge in README, and workspace package version in Cargo.toml. Add changelog entry for 0.1.11 reflecting the metadata bump.
- Introduce `owlen-providers` crate with Cargo.toml and lib entry.
- Expose `OllamaClient` handling HTTP communication, health checks, model listing, and streaming generation.
- Implement request building, endpoint handling, and error mapping.
- Parse Ollama tags response and generation stream lines into core types.
- Add shared module re-exports for easy integration with the provider layer.
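A sketch of the tags parsing with reqwest and serde, using field names from the Ollama /api/tags response (the helper itself is illustrative, not the crate's actual API):

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct TagsResponse {
    models: Vec<ModelEntry>,
}

#[derive(Debug, Deserialize)]
struct ModelEntry {
    name: String,
}

/// List available model names from an Ollama server.
async fn list_models(base_url: &str) -> Result<Vec<String>, reqwest::Error> {
    let tags: TagsResponse = reqwest::get(format!("{base_url}/api/tags"))
        .await?
        .error_for_status()?
        .json()
        .await?;
    Ok(tags.models.into_iter().map(|m| m.name).collect())
}
```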
- Introduce new `owlen-markdown` crate that converts Markdown strings to `ratatui::Text` with headings, lists, bold/italic, and inline code.
- Add `render_markdown` config option (default true) and expose it via `app.render_markdown_enabled()`.
- Implement `:markdown [on|off]` command to toggle markdown rendering.
- Update help overlay to document the new markdown toggle.
- Adjust UI rendering to conditionally apply markdown styling based on the markdown flag and code mode.
- Wire the new crate into `owlen-tui` Cargo.toml.
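The toggle itself reduces to something like the following sketch (whether a bare `:markdown` flips the flag is an assumption):

```rust
/// Handle `:markdown [on|off]`.
fn handle_markdown_command(arg: Option<&str>, render_markdown: &mut bool) -> Result<(), String> {
    match arg {
        Some("on") => *render_markdown = true,
        Some("off") => *render_markdown = false,
        None => *render_markdown = !*render_markdown, // bare command toggles
        Some(other) => return Err(format!("usage: :markdown [on|off], got '{other}'")),
    }
    Ok(())
}
```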
Deletes the `owlen-ollama` Cargo.toml and source files, fully removing the Ollama provider from the workspace. This aligns the project with the MCP-only architecture and eliminates direct provider dependencies.
- Export `LLMProvider` from `owlen-core` and replace public `Provider` re-exports.
- Convert `OllamaProvider` to implement the new `LLMProvider` trait with associated future types.
- Adjust imports and trait bounds in `remote_client.rs` to use the updated types.
- Add comprehensive provider interface tests (`provider_interface.rs`) verifying router routing and provider registry model listing with `MockProvider`.
- Align dependency versions across workspace crates by switching to workspace-managed versions.
- Extend CI (`.woodpecker.yml`) with a dedicated test step and generate coverage reports.
- Update architecture documentation to reflect the new provider abstraction.
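A sketch of what a provider trait with associated future types can look like; the method set and request/response types here are assumptions, the real trait lives in owlen-core:

```rust
use std::future::Future;

pub struct ChatRequest {
    pub model: String,
    pub messages: Vec<String>,
}

pub struct ChatResponse {
    pub content: String,
}

#[derive(Debug)]
pub struct ProviderError(pub String);

/// Associated future types let each implementor name a concrete future
/// instead of boxing at every call site.
pub trait LLMProvider {
    type ListModelsFuture: Future<Output = Result<Vec<String>, ProviderError>> + Send;
    type ChatFuture: Future<Output = Result<ChatResponse, ProviderError>> + Send;

    fn list_models(&self) -> Self::ListModelsFuture;
    fn chat(&self, request: ChatRequest) -> Self::ChatFuture;
}
```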
- Fix ACTION_INPUT regex to properly capture multiline JSON responses
- Changed from stopping at first newline to capturing all remaining text
- Resolves parsing errors when LLM generates formatted JSON with line breaks
- Enhance tool schemas with detailed descriptions and parameter specifications
- Add comprehensive Message schema for generate_text tool
- Clarify distinction between resources/get (file read) and resources/list (directory listing)
- Include clear usage guidance in tool descriptions
- Set default model to llama3.2:latest instead of invalid "ollama"
- Add parse error debugging to help troubleshoot LLM response issues
The agent infrastructure now correctly handles multiline tool arguments and
provides better guidance to LLMs through improved tool schemas. Remaining
errors are due to LLM quality (model making poor tool choices or generating
malformed responses), not infrastructure bugs.
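The ACTION_INPUT fix above comes down to letting the capture cross line boundaries; with the regex crate that is the `(?s)` flag. The literal marker text in this sketch is illustrative:

```rust
use regex::Regex;

// Before: `.+` stopped at the first newline, truncating formatted JSON.
// After: `(?s)` makes `.` also match `\n`, so the whole argument block is captured.
fn action_input_regex() -> Regex {
    Regex::new(r"(?s)Action Input:\s*(\{.+\})").expect("valid regex")
}

fn extract_action_input(text: &str) -> Option<&str> {
    action_input_regex()
        .captures(text)
        .and_then(|caps| caps.get(1))
        .map(|m| m.as_str())
}
```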
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Introduce `owlen-mcp-llm-server` crate with RPC handling, `generate_text` tool, model listing, and streaming notifications.
- Add `RpcNotification` struct and `MODELS_LIST` method to the MCP protocol.
- Update `owlen-core` to depend on `tokio-stream`.
- Adjust Ollama provider to omit empty `tools` field for compatibility.
- Enhance `RemoteMcpClient` to locate the renamed server binary, handle resource tools locally, and implement the `Provider` trait (model listing, chat, streaming, health check).
- Add new crate to workspace `Cargo.toml`.
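The notification type presumably mirrors JSON-RPC's shape (a request without an id); a sketch with assumed field names and an assumed method string:

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

/// Hypothetical method name constant; the real value lives in the MCP protocol crate.
pub const MODELS_LIST: &str = "models/list";

/// A JSON-RPC 2.0 notification: like a request, but with no `id`, so no
/// response is expected. Useful for streaming progress from the LLM server.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RpcNotification {
    pub jsonrpc: String, // always "2.0"
    pub method: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub params: Option<Value>,
}

impl RpcNotification {
    pub fn new(method: impl Into<String>, params: Option<Value>) -> Self {
        Self { jsonrpc: "2.0".into(), method: method.into(), params }
    }
}
```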
- Added a `tool_output` color to the `Theme` struct.
- Updated all built-in themes to include the new color.
- Modified the TUI to use the `tool_output` color for rendering tool output.
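Schematically, the change adds one field to the theme; the surrounding fields and default colors in this sketch are illustrative:

```rust
use ratatui::style::Color;

/// Trimmed-down view of the Theme struct; the new field is the point here.
pub struct Theme {
    pub foreground: Color,
    pub background: Color,
    /// New: color used when rendering tool output in the chat view.
    pub tool_output: Color,
}

impl Theme {
    pub fn default_dark() -> Self {
        Self {
            foreground: Color::White,
            background: Color::Black,
            tool_output: Color::Cyan, // illustrative choice
        }
    }
}
```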
Adds consent management for tool execution, input validation, sandboxed process execution, and MCP server integration. Updates session management to support tool use, conversation persistence, and streaming responses.
Major additions:
- Database migrations for conversations and secure storage
- Encryption and credential management infrastructure
- Extensible tool system with code execution and web search
- Consent management and validation systems
- Sandboxed process execution
- MCP server integration
Infrastructure changes:
- Module registration and workspace dependencies
- ToolCall type and tool-related Message methods
- Privacy, security, and tool configuration structures
- Database-backed conversation persistence
- Tool call tracking in conversations
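The ToolCall type mentioned above might be shaped roughly like this (field names are assumptions):

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

/// A tool invocation requested by the model and tracked in the conversation.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCall {
    pub id: String,
    pub name: String,
    pub arguments: Value,
}
```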
Provider and UI updates:
- Ollama provider updates for tool support and new Role types
- TUI chat and code app updates for async initialization
- CLI updates for new SessionController API
- Configuration documentation updates
- CHANGELOG updates
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Introduce multiple built-in themes (`default_dark`, `default_light`, `gruvbox`, `dracula`, `solarized`, `midnight-ocean`, `rose-pine`, `monokai`, `material-dark`, `material-light`).
- Implement theming system with customizable color schemes for all UI components in the TUI.
- Include documentation for themes in `themes/README.md`.
- Add fallback mechanisms for default themes in case of parsing errors.
- Support custom themes with overrides via configuration.
- Introduce `.cargo/config.toml` with platform-specific linker and flags.
- Update Woodpecker CI to include Windows target, adjust build and packaging steps.
- Modify `Cargo.toml` to use `reqwest` with `rustls-tls` for TLS support.