- Fix ACTION_INPUT regex to properly capture multiline JSON responses (a sketch follows this list)
- Change the pattern from stopping at the first newline to capturing all remaining text
- Resolve parsing errors when the LLM generates formatted JSON with line breaks
- Enhance tool schemas with detailed descriptions and parameter specifications
- Add comprehensive Message schema for generate_text tool (a schema sketch follows the summary below)
- Clarify distinction between resources/get (file read) and resources/list (directory listing)
- Include clear usage guidance in tool descriptions
- Set default model to `llama3.2:latest` instead of the invalid "ollama"
- Add parse error debugging to help troubleshoot LLM response issues
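A minimal sketch of the regex change, assuming the `regex` crate and an `ACTION_INPUT:` marker in the agent's response format; the exact patterns in the codebase may differ:

```rust
use regex::Regex;

fn main() {
    // Old pattern: `.` does not match '\n' by default, so the capture
    // stopped at the first line break and truncated formatted JSON.
    let old = Regex::new(r"ACTION_INPUT:\s*(.+)").unwrap();
    // New pattern: `(?s)` (dot-all mode) lets `.` match newlines, so the
    // capture keeps all remaining text.
    let new = Regex::new(r"(?s)ACTION_INPUT:\s*(.+)").unwrap();

    let response = "ACTION_INPUT: {\n  \"path\": \"src/main.rs\"\n}";
    assert_eq!(&old.captures(response).unwrap()[1], "{");
    assert_eq!(
        &new.captures(response).unwrap()[1],
        "{\n  \"path\": \"src/main.rs\"\n}"
    );
}
```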
The agent infrastructure now correctly handles multiline tool arguments and
gives LLMs better guidance through the improved tool schemas. Remaining
errors stem from model quality (the model choosing the wrong tool or
generating malformed responses), not from infrastructure bugs.
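A hedged sketch of the enhanced `generate_text` schema shape, built with `serde_json::json!`; the property names (`model`, `messages`, `role`, `content`) and the usage text are assumptions, not the exact schema shipped in the crate:

```rust
use serde_json::{json, Value};

/// Illustrative tool schema with a description, usage guidance, and a
/// nested Message object definition for the `messages` parameter.
fn generate_text_schema() -> Value {
    json!({
        "name": "generate_text",
        "description": "Generate a completion from the configured LLM; read files with resources/get and list directories with resources/list before calling this tool.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "model": {
                    "type": "string",
                    "description": "Model id, e.g. llama3.2:latest"
                },
                "messages": {
                    "type": "array",
                    "description": "Conversation history, oldest message first",
                    "items": {
                        "type": "object",
                        "properties": {
                            "role": {
                                "type": "string",
                                "enum": ["system", "user", "assistant", "tool"]
                            },
                            "content": { "type": "string" }
                        },
                        "required": ["role", "content"]
                    }
                }
            },
            "required": ["messages"]
        }
    })
}
```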
- Introduce `owlen-mcp-llm-server` crate with RPC handling, `generate_text` tool, model listing, and streaming notifications.
- Add an `RpcNotification` struct and a `MODELS_LIST` method to the MCP protocol (see the sketch after this list).
- Update `owlen-core` to depend on `tokio-stream`.
- Adjust Ollama provider to omit empty `tools` field for compatibility.
- Enhance `RemoteMcpClient` to locate the renamed server binary, handle resource tools locally, and implement the `Provider` trait (model listing, chat, streaming, health check).
- Add new crate to workspace `Cargo.toml`.
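A minimal sketch of the notification type and the new method constant, assuming serde-based JSON-RPC 2.0 types; the exact method string and any extra fields are assumptions:

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

/// JSON-RPC notification: shaped like a request but carries no `id`,
/// so the peer must not reply. Used here for streaming progress/chunks.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RpcNotification {
    pub jsonrpc: String,       // always "2.0"
    pub method: String,        // notification method name
    #[serde(skip_serializing_if = "Option::is_none")]
    pub params: Option<Value>, // method-specific payload
}

/// New request method for listing available models (exact value assumed).
pub const MODELS_LIST: &str = "models/list";
```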
- Add a `tool_output` color to the `Theme` struct (see the sketch after this list).
- Update all built-in themes to include the new color.
- Modify the TUI to use the `tool_output` color for rendering tool output.
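A sketch of the change, assuming the `Theme` struct holds ratatui `Color` values; every field name other than `tool_output` is illustrative:

```rust
use ratatui::style::Color;

pub struct Theme {
    pub background: Color,
    pub foreground: Color,
    pub user_message: Color,
    pub assistant_message: Color,
    /// New: color applied when the TUI renders tool output.
    pub tool_output: Color,
}

impl Theme {
    pub fn default_dark() -> Self {
        Self {
            background: Color::Rgb(0x1e, 0x1e, 0x2e),
            foreground: Color::Rgb(0xcd, 0xd6, 0xf4),
            user_message: Color::Cyan,
            assistant_message: Color::Green,
            tool_output: Color::Yellow, // each built-in theme now sets this
        }
    }
}
```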
Add consent management for tool execution, input validation, sandboxed process execution, and MCP server integration. Update session management to support tool use, conversation persistence, and streaming responses.
Major additions:
- Database migrations for conversations and secure storage
- Encryption and credential management infrastructure
- Extensible tool system with code execution and web search
- Consent management and validation systems (a hypothetical sketch of the flow follows this list)
- Sandboxed process execution
- MCP server integration
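A heavily hedged sketch of how a consent check could gate tool execution; `ConsentManager` and `ConsentDecision` are hypothetical names used only to illustrate the flow, not the crate's actual API:

```rust
use std::collections::HashSet;

/// Hypothetical outcome of prompting the user about a tool invocation.
pub enum ConsentDecision {
    AllowOnce,
    AllowAlways,
    Deny,
}

/// Hypothetical manager that remembers "always allow" grants per tool.
pub struct ConsentManager {
    always_allowed: HashSet<String>,
}

impl ConsentManager {
    pub fn new() -> Self {
        Self { always_allowed: HashSet::new() }
    }

    /// Returns true if the tool may run, prompting the user when needed.
    pub fn check(
        &mut self,
        tool_name: &str,
        prompt: impl Fn(&str) -> ConsentDecision,
    ) -> bool {
        if self.always_allowed.contains(tool_name) {
            return true;
        }
        match prompt(tool_name) {
            ConsentDecision::AllowOnce => true,
            ConsentDecision::AllowAlways => {
                self.always_allowed.insert(tool_name.to_string());
                true
            }
            ConsentDecision::Deny => false,
        }
    }
}
```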
Infrastructure changes:
- Module registration and workspace dependencies
- `ToolCall` type and tool-related `Message` methods (sketched after this list)
- Privacy, security, and tool configuration structures
- Database-backed conversation persistence
- Tool call tracking in conversations
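A sketch of the `ToolCall` type and the tool-related `Message` helpers, assuming serde and JSON arguments; the exact fields and method names are assumptions:

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCall {
    pub id: String,       // provider-assigned call id
    pub name: String,     // tool to invoke, e.g. "web_search"
    pub arguments: Value, // JSON arguments produced by the model
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Message {
    pub role: String, // "system" | "user" | "assistant" | "tool"
    pub content: String,
    #[serde(default)]
    pub tool_calls: Vec<ToolCall>,
}

impl Message {
    /// Assistant message requesting one or more tool invocations.
    pub fn tool_request(calls: Vec<ToolCall>) -> Self {
        Self {
            role: "assistant".into(),
            content: String::new(),
            tool_calls: calls,
        }
    }

    /// True if the assistant asked for any tool to be run.
    pub fn has_tool_calls(&self) -> bool {
        !self.tool_calls.is_empty()
    }
}
```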
Provider and UI updates:
- Ollama provider updates for tool support and new Role types
- TUI chat and code app updates for async initialization
- CLI updates for new SessionController API
- Configuration documentation updates
- CHANGELOG updates
- Introduce multiple built-in themes (`default_dark`, `default_light`, `gruvbox`, `dracula`, `solarized`, `midnight-ocean`, `rose-pine`, `monokai`, `material-dark`, `material-light`).
- Implement theming system with customizable color schemes for all UI components in the TUI.
- Include documentation for themes in `themes/README.md`.
- Add fallback mechanisms for default themes in case of parsing errors (see the sketch after this list).
- Support custom themes with overrides via configuration.
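A sketch of the fallback path, assuming themes are TOML files deserialized with serde and colors are stored as hex strings; the loader name and fields are assumptions:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct Theme {
    pub name: String,
    pub background: String, // e.g. "#1e1e2e"
    pub foreground: String,
    pub tool_output: String,
}

impl Theme {
    /// Hard-coded theme used when a custom theme cannot be parsed.
    pub fn default_dark() -> Self {
        Self {
            name: "default_dark".into(),
            background: "#1e1e2e".into(),
            foreground: "#cdd6f4".into(),
            tool_output: "#f9e2af".into(),
        }
    }
}

/// Load a theme, falling back to `default_dark` on any parse error so
/// the TUI always starts with a usable color scheme.
pub fn load_theme(toml_source: &str) -> Theme {
    toml::from_str(toml_source).unwrap_or_else(|err| {
        eprintln!("theme parse error, using default_dark: {err}");
        Theme::default_dark()
    })
}
```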
- Introduce `.cargo/config.toml` with platform-specific linker settings and build flags.
- Update the Woodpecker CI pipeline to include a Windows target and adjust the build and packaging steps.
- Modify `Cargo.toml` to use `reqwest` with `rustls-tls` for TLS support (both config changes are sketched after this list).
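Hedged sketches of both configuration changes; the target triples, linker, flags, and version number are illustrative, not the repo's exact values:

```toml
# .cargo/config.toml — per-target linker and flags (illustrative values)
[target.x86_64-pc-windows-msvc]
linker = "lld-link"

[target.x86_64-unknown-linux-gnu]
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
```

```toml
# Cargo.toml — build reqwest against rustls instead of the system OpenSSL
[dependencies]
reqwest = { version = "0.12", default-features = false, features = ["rustls-tls", "json"] }
```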