Latest commit (vikingowl, 33d11ae223, 2025-10-09): fix(agent): improve ReAct parser and tool schemas for better LLM compatibility
- Fix ACTION_INPUT regex to properly capture multiline JSON responses (a sketch of this kind of pattern follows the commit message)
  - Changed from stopping at the first newline to capturing all remaining text
  - Resolves parsing errors when the LLM generates formatted JSON with line breaks

- Enhance tool schemas with detailed descriptions and parameter specifications
  - Add comprehensive Message schema for generate_text tool
  - Clarify distinction between resources/get (file read) and resources/list (directory listing)
  - Include clear usage guidance in tool descriptions

- Set the default model to llama3.2:latest instead of the invalid value "ollama"

- Add parse error debugging to help troubleshoot LLM response issues

The agent infrastructure now correctly handles multiline tool arguments and
provides better guidance to LLMs through improved tool schemas. Remaining
errors are due to LLM quality (model making poor tool choices or generating
malformed responses), not infrastructure bugs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
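
The ACTION_INPUT fix described in this commit comes down to letting the capture group run across newlines instead of stopping at the first one. Below is a minimal sketch of that kind of pattern, assuming the `regex` crate; the function name, the exact pattern, and the sample response are illustrative and not the crate's actual parser code.

```rust
use regex::Regex;

/// Illustrative only: pull the text that follows "ACTION_INPUT:" out of a
/// ReAct-style response. The `(?s)` flag makes `.` match newlines, so a
/// pretty-printed multiline JSON argument is captured in full rather than
/// being cut off at the first line break.
fn parse_action_input(response: &str) -> Option<String> {
    // Hypothetical pattern; the real parser in owlen-core may differ.
    let re = Regex::new(r"(?s)ACTION_INPUT:\s*(.+)").ok()?;
    re.captures(response).map(|caps| caps[1].trim().to_string())
}

fn main() {
    let response = "THOUGHT: read the file\nACTION: resources/get\nACTION_INPUT: {\n  \"path\": \"README.md\"\n}";
    assert_eq!(
        parse_action_input(response).as_deref(),
        Some("{\n  \"path\": \"README.md\"\n}")
    );
}
```

Without the `(?s)` flag, `.` does not match newline characters, so a pattern like this would stop at the end of the first line, which is the failure mode the commit describes.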

Owlen Core

This crate provides the core abstractions and data structures for the Owlen ecosystem.

It defines the essential traits and types that enable communication with various LLM providers, manage sessions, and handle configuration.

Key Components

  • Provider trait: The fundamental abstraction for all LLM providers. Implement this trait to add support for a new provider (a hedged sketch follows this list).
  • Session: Represents a single conversation, managing message history and context.
  • Model: Defines the structure for LLM models, including their names and properties.
  • Configuration: Handles loading and parsing of the application's configuration.
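
To make the Provider bullet above concrete, here is a hedged sketch of what adding a new provider might look like. Only the trait name comes from this README; the method names, the request and response types, and the echo example are assumptions for illustration, not owlen-core's actual API.

```rust
use std::error::Error;

// Hypothetical request/response types; the real owlen-core types will differ.
pub struct ChatRequest {
    pub model: String,
    pub prompt: String,
}

pub struct ChatResponse {
    pub content: String,
}

/// Assumed shape of the Provider trait: a provider identifies itself and can
/// turn a chat request into a response from some LLM backend.
pub trait Provider {
    fn name(&self) -> &str;
    fn chat(&self, request: &ChatRequest) -> Result<ChatResponse, Box<dyn Error>>;
}

/// Toy provider that echoes the prompt back; a real implementation would call
/// an HTTP backend such as Ollama here instead.
pub struct EchoProvider;

impl Provider for EchoProvider {
    fn name(&self) -> &str {
        "echo"
    }

    fn chat(&self, request: &ChatRequest) -> Result<ChatResponse, Box<dyn Error>> {
        Ok(ChatResponse {
            content: format!("[{}] {}", request.model, request.prompt),
        })
    }
}
```

A provider implementation would typically be used together with a Session, which, as noted above, manages the message history and context for a single conversation.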
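
Similarly, a minimal sketch of the Configuration component, assuming a serde-based TOML file; the struct fields, file layout, and use of the `toml` crate are guesses for illustration rather than the crate's documented format.

```rust
use serde::Deserialize;

// Hypothetical configuration shape; the real owlen-core config will differ.
#[derive(Debug, Deserialize)]
pub struct Config {
    /// Default model identifier, e.g. "llama3.2:latest".
    pub default_model: String,
    /// Base URL of the LLM provider backend.
    pub provider_url: String,
}

impl Config {
    /// Load and parse a TOML configuration file from the given path.
    pub fn load(path: &std::path::Path) -> Result<Self, Box<dyn std::error::Error>> {
        let raw = std::fs::read_to_string(path)?;
        Ok(toml::from_str(&raw)?)
    }
}
```

A matching file might contain little more than `default_model = "llama3.2:latest"` and a provider URL, which lines up with the default model mentioned in the commit above.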