Add a new terminal UI crate (crates/app/ui) built with ratatui that provides an
interactive chat interface with real-time LLM streaming and tool-call visualization.
Features:
- Chat panel with horizontal padding for improved readability
- Input box with cursor navigation and command history
- Status bar with session statistics and uniform background styling
- 7 theme presets: Tokyo Night (default), Dracula, Catppuccin, Nord,
Synthwave, Rose Pine, and Midnight Ocean
- Theme switching via /theme <name> and /themes commands (see the sketch after this list)
- Streaming LLM responses that accumulate into single messages
- Real-time tool call visualization with success/error states
- Session tracking (messages, tokens, tool calls, duration)
- REPL commands: /help, /status, /cost, /checkpoint, /rewind, /clear, /exit
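A minimal sketch of how the theme presets and status-bar statistics above could be modeled; `Theme` and `SessionStats` are illustrative names, not the actual crate API:
```rust
use std::time::{Duration, Instant};

/// The seven bundled presets; `/theme <name>` selects one.
#[derive(Clone, Copy, Debug)]
enum Theme {
    TokyoNight, // default
    Dracula,
    Catppuccin,
    Nord,
    Synthwave,
    RosePine,
    MidnightOcean,
}

impl Theme {
    /// Parse the argument of a `/theme <name>` command.
    fn parse(name: &str) -> Option<Self> {
        match name.to_ascii_lowercase().as_str() {
            "tokyo-night" | "tokyonight" => Some(Self::TokyoNight),
            "dracula" => Some(Self::Dracula),
            "catppuccin" => Some(Self::Catppuccin),
            "nord" => Some(Self::Nord),
            "synthwave" => Some(Self::Synthwave),
            "rose-pine" | "rosepine" => Some(Self::RosePine),
            "midnight-ocean" | "midnightocean" => Some(Self::MidnightOcean),
            _ => None,
        }
    }
}

/// Counters rendered in the status bar.
struct SessionStats {
    messages: u64,
    tokens: u64,
    tool_calls: u64,
    started: Instant,
}

impl SessionStats {
    fn duration(&self) -> Duration {
        self.started.elapsed()
    }
}
```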
Integration:
- CLI automatically launches TUI mode when run interactively with no prompt (detection sketched below)
- Falls back to the legacy text REPL when the --no-tui flag is passed
- Uses existing agent loop with streaming support
- Supports all existing tools (read, write, edit, glob, grep, bash)
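A sketch of the launch decision under those rules, assuming a `no_tui` boolean and an optional prompt argument (the real flag plumbing may differ):
```rust
use std::io::{stdout, IsTerminal};

/// Launch the TUI only when run interactively: no prompt given,
/// `--no-tui` not set, and stdout attached to a terminal.
fn should_launch_tui(prompt: Option<&str>, no_tui: bool) -> bool {
    prompt.is_none() && !no_tui && stdout().is_terminal()
}
```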
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Implements the remaining M12 features from AGENTS.md:
**Plugin System (crates/platform/plugins)**
- Plugin manifest schema with plugin.json support (manifest shape sketched below)
- Plugin loader for commands, agents, skills, hooks, and MCP servers
- Discovers plugins from ~/.config/owlen/plugins and .owlen/plugins
- Includes comprehensive tests (4 passing)
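A rough sketch of what the manifest and loader could look like; the field names are assumptions rather than the exact schema:
```rust
use serde::Deserialize;
use std::{fs, path::Path};

/// Deserialized form of a plugin.json manifest (illustrative fields).
#[derive(Debug, Deserialize)]
struct PluginManifest {
    name: String,
    version: String,
    #[serde(default)]
    commands: Vec<String>,
    #[serde(default)]
    agents: Vec<String>,
    #[serde(default)]
    skills: Vec<String>,
    #[serde(default)]
    hooks: Vec<String>,
    #[serde(default)]
    mcp_servers: Vec<String>,
}

/// Load the manifest from a discovered plugin directory
/// (e.g. under ~/.config/owlen/plugins or .owlen/plugins).
fn load_manifest(plugin_dir: &Path) -> anyhow::Result<PluginManifest> {
    let raw = fs::read_to_string(plugin_dir.join("plugin.json"))?;
    Ok(serde_json::from_str(&raw)?)
}
```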
**Session Checkpointing (crates/core/agent)**
- Checkpoint struct capturing session state and file diffs
- CheckpointManager with snapshot, diff, save, load, and rewind capabilities (sketched below)
- File diff tracking with before/after content
- Checkpoint persistence to .owlen/checkpoints/
- Includes comprehensive tests (6 passing)
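A condensed sketch of the checkpoint data model and rewind path described above; the real types in crates/core/agent may differ in detail:
```rust
use serde::{Deserialize, Serialize};
use std::{fs, path::PathBuf};

#[derive(Serialize, Deserialize)]
struct FileDiff {
    path: PathBuf,
    before: Option<String>, // None if the file did not exist yet
    after: String,
}

#[derive(Serialize, Deserialize)]
struct Checkpoint {
    id: String,
    messages: Vec<String>, // session transcript snapshot
    file_diffs: Vec<FileDiff>,
}

struct CheckpointManager {
    dir: PathBuf, // e.g. .owlen/checkpoints/
}

impl CheckpointManager {
    fn save(&self, cp: &Checkpoint) -> anyhow::Result<()> {
        fs::create_dir_all(&self.dir)?;
        let path = self.dir.join(format!("{}.json", cp.id));
        Ok(fs::write(path, serde_json::to_string_pretty(cp)?)?)
    }

    fn load(&self, id: &str) -> anyhow::Result<Checkpoint> {
        let raw = fs::read_to_string(self.dir.join(format!("{id}.json")))?;
        Ok(serde_json::from_str(&raw)?)
    }

    /// Restore the files recorded in a checkpoint (used by /rewind).
    fn rewind(&self, id: &str) -> anyhow::Result<Checkpoint> {
        let cp = self.load(id)?;
        for diff in &cp.file_diffs {
            match &diff.before {
                Some(content) => fs::write(&diff.path, content)?,
                None => { let _ = fs::remove_file(&diff.path); }
            }
        }
        Ok(cp)
    }
}
```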
**REPL Commands (crates/app/cli)**
- /checkpoint - Save current session with file diffs (command dispatch sketched below)
- /checkpoints - List all saved checkpoints
- /rewind <id> - Restore session and files from checkpoint
- Updated /help documentation
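Illustrative dispatch for those commands; the actual CLI's argument handling may differ:
```rust
/// Dispatch the checkpoint-related slash commands (illustrative).
/// Returns true if the input was handled as a command.
fn handle_slash_command(input: &str) -> bool {
    let mut parts = input.split_whitespace();
    match parts.next() {
        Some("/checkpoint") => {
            // snapshot the session, diff touched files, write to .owlen/checkpoints/
            true
        }
        Some("/checkpoints") => {
            // list saved checkpoint ids
            true
        }
        Some("/rewind") => {
            match parts.next() {
                Some(id) => println!("rewinding to checkpoint {id}"), // restore session + files
                None => eprintln!("usage: /rewind <id>"),
            }
            true
        }
        _ => false, // not a slash command: fall through to the agent loop
    }
}
```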
M12 milestone now fully complete:
✅ /permissions, /status, /cost (previously implemented)
✅ Checkpointing and /rewind
✅ Plugin loader with manifest schema
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add proper interactive mode when no prompt is provided:
**Interactive REPL Features**:
- Starts when running `cargo run` with no arguments
- Shows welcome message with model name
- Prompts with `> ` for user input
- Each input runs through the full agent loop with tools
- Continues until Ctrl+C or EOF
- Displays tool calls and results in real-time
**Changes**:
- Detect empty prompt and enter interactive loop
- Use stdin.lines() for reading user input
- Call agent_core::run_agent_loop for each message (loop sketched below)
- Handle errors gracefully and continue
- Clean up unused imports
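A minimal sketch of that loop, assuming an entry point shaped roughly like agent_core::run_agent_loop (the call is left as a comment rather than guessing its real signature):
```rust
use std::io::{self, BufRead, Write};

fn interactive_repl(model: &str) -> io::Result<()> {
    println!("🤖 Owlen Interactive Mode");
    println!("Model: {model}");
    let mut lines = io::stdin().lock().lines();
    loop {
        print!("> ");
        io::stdout().flush()?;
        let Some(line) = lines.next() else { break }; // EOF ends the session
        let line = line?;
        let input = line.trim();
        if input.is_empty() {
            continue;
        }
        if input == "exit" {
            break;
        }
        // Each input runs through the full agent loop with tools; errors are
        // printed and the REPL keeps going rather than aborting.
        // if let Err(e) = agent_core::run_agent_loop(input).await { eprintln!("error: {e}"); }
        println!("(agent loop would handle: {input})");
    }
    Ok(())
}
```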
**Usage**:
```bash
# Interactive mode
cargo run
# Single prompt mode
cargo run -- --print "Find all Cargo.toml files"
# Tool subcommands
cargo run -- glob "**/*.rs"
```
Example session:
```
🤖 Owlen Interactive Mode
Model: qwen3:8b
> Find all markdown files
🔧 Tool call: glob with args: {"pattern":"**/*.md"}
✅ Tool result: ./README.md ./CLAUDE.md ./AGENTS.md
...
> exit
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add a complete agent orchestration system that enables the LLM to call tools:
**Core Agent System** (`crates/core/agent`):
- Agent execution loop with tool call/result cycle (sketched below)
- Tool definitions in Ollama-compatible format (6 tools)
- Tool execution with permission checking
- Multi-iteration support with max iteration safety
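The loop's rough shape, with stub types standing in for the real Ollama client and tool executor (all names here are illustrative):
```rust
const MAX_ITERATIONS: usize = 10; // safety cap; the real limit may differ

#[derive(Clone)]
struct ToolCall { name: String, args: String }

#[derive(Clone)]
struct Message { role: String, content: String, tool_calls: Vec<ToolCall> }

impl Message {
    fn tool_result(output: String) -> Self {
        Message { role: "tool".into(), content: output, tool_calls: Vec::new() }
    }
}

// Stubs standing in for the Ollama chat call and the permission-checked executor.
fn chat_with_tools(_history: &[Message]) -> Message {
    Message { role: "assistant".into(), content: "done".into(), tool_calls: Vec::new() }
}
fn execute_tool(call: &ToolCall) -> String {
    format!("executed {} with {}", call.name, call.args)
}

fn run_agent_loop(mut messages: Vec<Message>) -> Vec<Message> {
    for _ in 0..MAX_ITERATIONS {
        let reply = chat_with_tools(&messages); // LLM turn, tool schemas attached
        let calls = reply.tool_calls.clone();
        messages.push(reply);
        if calls.is_empty() {
            break; // model produced a final answer: stop iterating
        }
        for call in calls {
            let output = execute_tool(&call); // permissions checked before running
            messages.push(Message::tool_result(output));
        }
    }
    messages
}
```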
**Tool Definitions**:
- read: Read file contents
- glob: Find files by pattern (schema example below)
- grep: Search for patterns in files
- write: Write content to files
- edit: Edit files with find/replace
- bash: Execute bash commands
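One definition in the Ollama/OpenAI-style function-calling format, using glob as the example; the other five follow the same shape, and the descriptions and schemas here are illustrative:
```rust
use serde_json::json;

fn glob_tool_definition() -> serde_json::Value {
    json!({
        "type": "function",
        "function": {
            "name": "glob",
            "description": "Find files matching a glob pattern",
            "parameters": {
                "type": "object",
                "properties": {
                    "pattern": {
                        "type": "string",
                        "description": "Glob pattern, e.g. **/*.rs"
                    }
                },
                "required": ["pattern"]
            }
        }
    })
}
```
Definitions like this are what the updated chat_stream accepts via its tools parameter.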
**Ollama Integration Updates**:
- Extended ChatMessage to support tool_calls
- Added Tool, ToolCall, ToolFunction types
- Updated chat_stream to accept tools parameter
- Made tool call fields optional for Ollama compatibility (types sketched below)
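A sketch of the extended chat types; the shapes mirror Ollama's chat API, but the exact field names and optionality are assumptions:
```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct ToolFunction {
    name: String,
    arguments: serde_json::Value,
}

#[derive(Debug, Serialize, Deserialize)]
struct ToolCall {
    function: ToolFunction,
}

#[derive(Debug, Serialize, Deserialize)]
struct ChatMessage {
    role: String,
    content: String,
    // Optional so plain (non-tool) Ollama responses still deserialize.
    #[serde(default, skip_serializing_if = "Option::is_none")]
    tool_calls: Option<Vec<ToolCall>>,
}
```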
**CLI Integration**:
- Wired agent loop into all output formats (Text, JSON, StreamJSON)
- Tool calls displayed with 🔧 icon, results with ✅
- Replaced simple chat with agent orchestrator
**Permission Integration**:
- All tool executions check permissions before running (gate sketched below)
- Respects plan/acceptEdits/code modes
- Returns clear error messages for denied operations
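An illustrative permission gate run before each tool execution; the exact mode semantics and error wording are assumptions:
```rust
#[derive(PartialEq)]
enum PermissionMode { Plan, AcceptEdits, Code }

/// Deny mutating tools in plan mode; other modes fall through to
/// their own finer-grained checks in the real implementation.
fn check_permission(mode: &PermissionMode, tool: &str) -> Result<(), String> {
    let mutating = matches!(tool, "write" | "edit" | "bash");
    if *mode == PermissionMode::Plan && mutating {
        return Err(format!("permission denied: `{tool}` is not allowed in plan mode"));
    }
    Ok(())
}
```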
**Example**:
User: "Find all Cargo.toml files in the workspace"
LLM: Calls glob("**/Cargo.toml")
Agent: Executes and returns 14 files
LLM: Formats human-readable response
This transforms owlen from a passive chatbot into an active agent that
can autonomously use tools to accomplish user goals.
Tested with: qwen3:8b successfully calling glob tool
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>