Implements the remaining M12 features from AGENTS.md:
**Plugin System (crates/platform/plugins)**
- Plugin manifest schema with `plugin.json` support (rough shape sketched below)
- Plugin loader for commands, agents, skills, hooks, and MCP servers
- Discovers plugins from ~/.config/owlen/plugins and .owlen/plugins
- Covered by 4 passing tests
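As a rough sketch of what the manifest parsing might look like, assuming a serde-based schema; the field names are illustrative, not the actual types in `crates/platform/plugins` (requires the `serde` and `serde_json` crates):

```rust
// Illustrative field names only; the actual schema in crates/platform/plugins
// may differ. Requires serde (with the derive feature) and serde_json.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct PluginManifest {
    name: String,
    version: String,
    #[serde(default)]
    description: Option<String>,
    // Paths (relative to the plugin root) of the artifacts the plugin provides.
    #[serde(default)]
    commands: Vec<String>,
    #[serde(default)]
    agents: Vec<String>,
    #[serde(default)]
    skills: Vec<String>,
    #[serde(default)]
    hooks: Vec<String>,
    #[serde(default)]
    mcp_servers: Vec<String>,
}

fn main() {
    let raw = r#"{
        "name": "example",
        "version": "0.1.0",
        "commands": ["commands/hello.md"]
    }"#;
    let manifest: PluginManifest = serde_json::from_str(raw).expect("valid plugin.json");
    println!("{manifest:?}");
}
```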
**Session Checkpointing (crates/core/agent)**
- Checkpoint struct capturing session state and file diffs
- CheckpointManager with snapshot, diff, save, load, and rewind capabilities (API shape sketched below)
- File diff tracking with before/after content
- Checkpoint persistence to .owlen/checkpoints/
- Covered by 6 passing tests
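A minimal sketch of the checkpoint shapes described above, using only the standard library; the real types in `crates/core/agent` differ, and the rewind semantics shown here are an assumption:

```rust
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;

/// Before/after contents of one tracked file.
struct FileDiff {
    before: String,
    after: String,
}

struct Checkpoint {
    id: String,
    /// Serialized session state (messages, mode, etc.).
    session: String,
    diffs: HashMap<PathBuf, FileDiff>,
}

struct CheckpointManager {
    dir: PathBuf, // e.g. .owlen/checkpoints/
}

impl CheckpointManager {
    fn save(&self, cp: &Checkpoint) -> std::io::Result<()> {
        fs::create_dir_all(&self.dir)?;
        // A real implementation would serialize the whole struct (e.g. with serde);
        // only the session state is written here to keep the sketch short.
        fs::write(self.dir.join(format!("{}.json", cp.id)), &cp.session)
    }

    fn rewind(&self, cp: &Checkpoint) -> std::io::Result<()> {
        // Assumption: `before` holds each file's content at checkpoint time,
        // so rewinding writes it back to disk.
        for (path, diff) in &cp.diffs {
            fs::write(path, &diff.before)?;
        }
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    let manager = CheckpointManager { dir: std::env::temp_dir().join("owlen-checkpoints") };
    let cp = Checkpoint { id: "1".into(), session: "{}".into(), diffs: HashMap::new() };
    manager.save(&cp)?;
    manager.rewind(&cp)
}
```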
**REPL Commands (crates/app/cli)**
- `/checkpoint` - Save the current session together with file diffs
- `/checkpoints` - List all saved checkpoints
- `/rewind <id>` - Restore the session and files from a checkpoint (dispatch sketched below)
- Updated `/help` documentation
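Illustrative only, not the actual `crates/app/cli` handler: one way the new commands could be dispatched in the REPL:

```rust
// Hypothetical dispatcher; the real REPL wires these into the CheckpointManager.
fn handle_command(line: &str) {
    let mut parts = line.split_whitespace();
    match parts.next() {
        Some("/checkpoint") => println!("saving checkpoint with file diffs..."),
        Some("/checkpoints") => println!("listing saved checkpoints..."),
        Some("/rewind") => match parts.next() {
            Some(id) => println!("rewinding to checkpoint {id}..."),
            None => eprintln!("usage: /rewind <id>"),
        },
        _ => eprintln!("unknown command, see /help"),
    }
}

fn main() {
    handle_command("/checkpoint");
    handle_command("/rewind 3");
}
```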
M12 milestone now fully complete:
✅ `/permissions`, `/status`, `/cost` (previously implemented)
✅ Checkpointing and `/rewind`
✅ Plugin loader with manifest schema
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add a complete agent orchestration system that enables the LLM to call tools:
**Core Agent System** (`crates/core/agent`):
- Agent execution loop with a tool call/result cycle (sketched below)
- Tool definitions in Ollama-compatible format (6 tools)
- Tool execution with permission checking
- Multi-iteration support with a maximum-iteration safety limit
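A simplified, synchronous sketch of the call/result cycle, with stand-ins for the model call and tool executor; the real loop is async, streams from Ollama, and uses different names:

```rust
// All names below are illustrative stand-ins, not the real crates/core/agent API.
struct ToolCall {
    name: String,
    arguments: String, // JSON-encoded arguments from the model
}

enum Turn {
    /// The model produced a final text answer.
    Done(String),
    /// The model asked for one or more tools to be run.
    Tools(Vec<ToolCall>),
}

fn run_agent(prompt: &str, max_iterations: usize) -> String {
    let mut transcript = vec![format!("user: {prompt}")];
    for _ in 0..max_iterations {
        match call_model(&transcript) {
            Turn::Done(answer) => return answer,
            Turn::Tools(calls) => {
                for call in calls {
                    // In owlen, a permission check runs here before execution.
                    let result = execute_tool(&call);
                    transcript.push(format!("tool {}: {}", call.name, result));
                }
            }
        }
    }
    "stopped: reached the maximum number of iterations".to_string()
}

// Stub model: asks for one glob call, then answers once a tool result is present.
fn call_model(transcript: &[String]) -> Turn {
    if transcript.iter().any(|m| m.starts_with("tool ")) {
        Turn::Done("here is what I found".to_string())
    } else {
        Turn::Tools(vec![ToolCall {
            name: "glob".to_string(),
            arguments: r#"{"pattern":"**/Cargo.toml"}"#.to_string(),
        }])
    }
}

// Stub executor standing in for the real read/glob/grep/write/edit/bash tools.
fn execute_tool(call: &ToolCall) -> String {
    format!("(pretend output of {} with {})", call.name, call.arguments)
}

fn main() {
    println!("{}", run_agent("Find all Cargo.toml files", 10));
}
```

The iteration bound keeps the loop from spinning forever if the model keeps requesting tools without producing a final answer.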
**Tool Definitions** (one shown in full after this list):
- `read`: Read file contents
- `glob`: Find files by pattern
- `grep`: Search for patterns in files
- `write`: Write content to files
- `edit`: Edit files with find/replace
- `bash`: Execute bash commands
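A sketch of the `read` tool in the function-calling format Ollama accepts; the exact descriptions and parameter schemas owlen ships may differ (requires the `serde_json` crate):

```rust
use serde_json::json;

fn main() {
    // One tool definition in the Ollama/OpenAI-style function format.
    let read_tool = json!({
        "type": "function",
        "function": {
            "name": "read",
            "description": "Read the contents of a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": { "type": "string", "description": "Path to the file to read" }
                },
                "required": ["path"]
            }
        }
    });
    println!("{}", serde_json::to_string_pretty(&read_tool).unwrap());
}
```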
**Ollama Integration Updates**:
- Extended `ChatMessage` to support `tool_calls` (shape sketched below)
- Added `Tool`, `ToolCall`, and `ToolFunction` types
- Updated `chat_stream` to accept a `tools` parameter
- Made tool call fields optional for Ollama compatibility
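A rough shape of the extended message types, assuming serde serialization with `tool_calls` skipped when absent; the field layout follows Ollama's chat API, not necessarily owlen's exact definitions:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct ChatMessage {
    role: String, // "system" | "user" | "assistant" | "tool"
    content: String,
    /// Present only when the assistant requests tool invocations.
    #[serde(skip_serializing_if = "Option::is_none")]
    tool_calls: Option<Vec<ToolCall>>,
}

#[derive(Serialize, Deserialize)]
struct ToolCall {
    function: ToolFunction,
}

#[derive(Serialize, Deserialize)]
struct ToolFunction {
    name: String,
    arguments: serde_json::Value,
}

fn main() {
    let raw = r#"{"role":"assistant","content":"","tool_calls":[{"function":{"name":"glob","arguments":{"pattern":"**/Cargo.toml"}}}]}"#;
    let msg: ChatMessage = serde_json::from_str(raw).unwrap();
    println!("{} tool call(s)", msg.tool_calls.map_or(0, |c| c.len()));
}
```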
**CLI Integration**:
- Wired the agent loop into all output formats (Text, JSON, StreamJSON)
- Tool calls are displayed with a 🔧 icon, results with a ✅ icon (rendering sketched below)
- Replaced the simple chat flow with the agent orchestrator
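Illustrative rendering helpers for the Text format only (the JSON and StreamJSON formats emit structured events instead); this is not the actual CLI code:

```rust
// Hypothetical helpers showing the display convention described above.
fn print_tool_call(name: &str, args: &str) {
    println!("🔧 {name}({args})");
}

fn print_tool_result(name: &str, summary: &str) {
    println!("✅ {name}: {summary}");
}

fn main() {
    print_tool_call("glob", r#"{"pattern":"**/Cargo.toml"}"#);
    print_tool_result("glob", "14 files");
}
```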
**Permission Integration**:
- All tool executions check permissions before running (gate sketched below)
- Respects plan/acceptEdits/code modes
- Returns clear error messages for denied operations
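A sketch of the permission gate run before each tool execution; which tools count as mutating in each mode is an assumption here, and the real rules live in owlen's permission layer:

```rust
enum Mode {
    Plan,        // read-only planning: no writes, no shell
    AcceptEdits, // file edits allowed, shell still gated (assumed)
    Code,        // full access
}

fn check_permission(mode: &Mode, tool: &str) -> Result<(), String> {
    // Assumption about which tools mutate state.
    let mutating = matches!(tool, "write" | "edit" | "bash");
    match (mode, mutating) {
        (Mode::Plan, true) => {
            Err(format!("permission denied: `{tool}` is not allowed in plan mode"))
        }
        (Mode::AcceptEdits, true) if tool == "bash" => {
            Err("permission denied: `bash` requires code mode".to_string())
        }
        _ => Ok(()),
    }
}

fn main() {
    assert!(check_permission(&Mode::Plan, "read").is_ok());
    assert!(check_permission(&Mode::AcceptEdits, "edit").is_ok());
    assert!(check_permission(&Mode::Code, "bash").is_ok());
    println!("{:?}", check_permission(&Mode::Plan, "write"));
}
```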
**Example**:
User: "Find all Cargo.toml files in the workspace"
LLM: Calls glob("**/Cargo.toml")
Agent: Executes and returns 14 files
LLM: Formats human-readable response
This transforms owlen from a passive chatbot into an active agent that
can autonomously use tools to accomplish user goals.
Tested with qwen3:8b, which successfully called the `glob` tool.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>