- Introduce reference MCP presets with installation/audit helpers and remove legacy connector lists.
- Add CLI `owlen tools` commands to install presets or audit configuration, with optional pruning.
- Extend the TUI `:tools` command to support listing presets, installing them, and auditing the current configuration.
- Document the preset workflow and provide regression tests for preset application.
- Reject dotted tool identifiers during registration and remove alias-backed lookups (see the sketch below).
- Drop `web.search` compatibility, normalize all code/tests around the canonical `web_search` name, and update consent/session logic.
- Harden CLI toggles to manage the spec-compliant identifier and ensure MCP configs shed non-compliant entries automatically.
Acceptance Criteria:
- Tool registry denies invalid identifiers by default and no alias codepaths remain.
Test Notes:
- `cargo check -p owlen-core` (tests unavailable in sandbox).
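As a rough illustration of the registration rule above, here is a minimal sketch of an identifier check that rejects dotted names such as `web.search` while accepting the canonical `web_search`. The function and type names and the exact allowed character set are assumptions, not the crate's actual API:

```rust
use std::collections::HashMap;

/// Illustrative identifier check: dotted names such as "web.search" are
/// rejected, leaving spellings like "web_search" as the only accepted form.
fn is_valid_tool_id(id: &str) -> bool {
    !id.is_empty()
        && id
            .chars()
            .all(|c| c.is_ascii_lowercase() || c.is_ascii_digit() || c == '_')
}

/// Placeholder for the real tool metadata.
struct ToolEntry;

struct ToolRegistry {
    tools: HashMap<String, ToolEntry>,
}

impl ToolRegistry {
    /// Registration fails outright on an invalid identifier; there is no
    /// alias table to fall back on.
    fn register(&mut self, id: &str, entry: ToolEntry) -> Result<(), String> {
        if !is_valid_tool_id(id) {
            return Err(format!("invalid tool identifier: {id}"));
        }
        self.tools.insert(id.to_string(), entry);
        Ok(())
    }
}
```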
Acceptance Criteria:
- `web.search` proxies Ollama Cloud's `/api/web_search` via the configured provider endpoint (see the sketch below)
- Tool is only registered when remote search is enabled and the cloud provider is active
- Consent prompts, docs, and MCP tooling no longer reference DuckDuckGo or expose `web_search_detailed`
Test Notes:
- `cargo check`
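A sketch of what the proxy call could look like, assuming `reqwest` and `serde_json`. Only the `/api/web_search` path comes from the criteria above; the request body shape, response handling, and parameter names are assumptions:

```rust
use serde_json::json;

/// Hypothetical proxy for the web.search tool. The configured provider
/// endpoint and API key are passed in by the caller; the {"query": ...}
/// body shape is an assumption.
async fn web_search(
    base_url: &str,
    api_key: &str,
    query: &str,
) -> Result<serde_json::Value, reqwest::Error> {
    let client = reqwest::Client::new();
    client
        .post(format!("{base_url}/api/web_search"))
        .bearer_auth(api_key)
        .json(&json!({ "query": query }))
        .send()
        .await?
        .error_for_status()?
        .json::<serde_json::Value>()
        .await
}
```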
- Fix ACTION_INPUT regex to properly capture multiline JSON responses (see the sketch below)
- Changed from stopping at first newline to capturing all remaining text
- Resolves parsing errors when LLM generates formatted JSON with line breaks
- Enhance tool schemas with detailed descriptions and parameter specifications
- Add comprehensive Message schema for generate_text tool
- Clarify distinction between resources/get (file read) and resources/list (directory listing)
- Include clear usage guidance in tool descriptions
- Set default model to `llama3.2:latest` instead of the invalid `ollama`
- Add parse error debugging to help troubleshoot LLM response issues
The agent infrastructure now correctly handles multiline tool arguments and
provides better guidance to LLMs through improved tool schemas. Remaining
errors are due to LLM quality (model making poor tool choices or generating
malformed responses), not infrastructure bugs.
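A sketch of the regex change described above, assuming the `regex` crate and an `Action Input:` marker in ReAct-style output; the actual pattern and marker text in the agent code may differ:

```rust
use regex::Regex;

fn extract_action_input(response: &str) -> Option<String> {
    // Before: a pattern like "Action Input:\s*(.*)" stops at the first
    // newline because `.` does not match `\n` by default, truncating
    // pretty-printed JSON arguments.
    // After: the `(?s)` flag makes `.` match newlines, so the capture keeps
    // all remaining text, including multiline JSON.
    let re = Regex::new(r"(?s)Action Input:\s*(.*)").ok()?;
    re.captures(response)
        .and_then(|caps| caps.get(1))
        .map(|m| m.as_str().trim().to_string())
}
```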
- Added a `tool_output` color to the `Theme` struct (see the sketch below).
- Updated all built-in themes to include the new color.
- Modified the TUI to use the `tool_output` color for rendering tool output.
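A minimal sketch of the change, assuming a ratatui-style `Color` type; field names other than `tool_output` and the example values are illustrative:

```rust
use ratatui::style::Color;

struct Theme {
    foreground: Color,
    background: Color,
    /// New: dedicated color for rendering tool output in the TUI.
    tool_output: Color,
}

impl Theme {
    /// Example built-in theme updated to include the new color.
    fn dark() -> Self {
        Self {
            foreground: Color::White,
            background: Color::Black,
            tool_output: Color::Cyan,
        }
    }
}
```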
Introduces a tool registry architecture with sandboxed code execution, web search capabilities, and consent-based permission management. Enables safe, pluggable LLM tool integration with schema validation.
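A compact sketch of how such a registry could be shaped, with a `Tool` trait exposing a JSON schema and a consent flag plus name-based dispatch. Trait and method names are illustrative rather than the crate's actual API, and schema validation and sandboxing are elided:

```rust
use serde_json::Value;

/// Illustrative tool abstraction: each tool exposes a name, a JSON schema
/// describing its arguments, and whether it needs explicit user consent.
trait Tool {
    fn name(&self) -> &str;
    fn schema(&self) -> Value;
    fn requires_consent(&self) -> bool;
    fn execute(&self, args: Value) -> Result<Value, String>;
}

struct ToolRegistry {
    tools: Vec<Box<dyn Tool>>,
}

impl ToolRegistry {
    /// Dispatch by name and defer to the consent layer before execution;
    /// argument validation against the schema is elided in this sketch.
    fn call(&self, name: &str, args: Value, consent_granted: bool) -> Result<Value, String> {
        let tool = self
            .tools
            .iter()
            .find(|t| t.name() == name)
            .ok_or_else(|| format!("unknown tool: {name}"))?;
        if tool.requires_consent() && !consent_granted {
            return Err(format!("consent required for tool: {name}"));
        }
        tool.execute(args)
    }
}
```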