# AGENTS.md - AI Agent Instructions for Owlen Development
This document provides comprehensive context and guidelines for AI agents (Claude, GPT-4, etc.) working on the Owlen codebase.
## Project Overview
Owlen is a local-first, terminal-based AI assistant built in Rust using the Ratatui TUI framework. It implements a Model Context Protocol (MCP) architecture for modular tool execution and supports both local (Ollama) and cloud LLM providers.
**Core Philosophy:**

- **Local-first:** prioritize local LLMs (Ollama) with cloud as fallback
- **Privacy-focused:** no telemetry; user data stays on device
- **MCP-native:** all operations go through MCP servers for modularity
- **Terminal-native:** Vim-style modal interaction in a beautiful TUI

**Current Status:** v1.0 - MCP-only architecture (Phase 10 complete)
## Architecture

### Project Structure

```
owlen/
├── crates/
│   ├── owlen-core/              # Core types, config, provider traits
│   ├── owlen-tui/               # Ratatui-based terminal interface
│   ├── owlen-cli/               # Command-line interface
│   ├── owlen-ollama/            # Ollama provider implementation
│   ├── owlen-mcp-llm-server/    # LLM inference as MCP server
│   ├── owlen-mcp-client/        # MCP client library
│   ├── owlen-mcp-server/        # Base MCP server framework
│   ├── owlen-mcp-code-server/   # Code execution in Docker
│   └── owlen-mcp-prompt-server/ # Prompt management server
├── docs/                        # Documentation
├── themes/                      # TUI color themes
└── .agents/                     # Agent development plans
```
### Key Technologies
- Language: Rust 1.83+
- TUI: Ratatui with Crossterm backend
- Async Runtime: Tokio
- Config: TOML (serde)
- HTTP Client: reqwest
- LLM Providers: Ollama (primary), with extensibility for OpenAI/Anthropic
- Protocol: JSON-RPC 2.0 over STDIO/HTTP/WebSocket
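All MCP traffic is framed as JSON-RPC 2.0 messages. As a rough illustration, a tool invocation (first object) and its reply (second object) look like the following; the `tools/call` method comes from the MCP specification, but the tool name and payload here are made up:

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/call",
 "params": {"name": "web_search", "arguments": {"query": "ratatui scrollbar"}}}

{"jsonrpc": "2.0", "id": 1,
 "result": {"content": [{"type": "text", "text": "..."}]}}
```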
## Current Features (v1.0)

### Core Capabilities

- **MCP Architecture** (Phases 3-10 complete)
  - All LLM interactions via MCP servers
  - Local and remote MCP client support
  - STDIO, HTTP, WebSocket transports
  - Automatic failover with health checks
- **Provider System**
  - Ollama (local and cloud)
  - Configurable per-provider settings
  - API key management with environment variable expansion
  - Model switching via TUI (`:m` command)
- **Agentic Loop (ReAct pattern)**
  - THOUGHT → ACTION → OBSERVATION cycle
  - Tool discovery and execution
  - Configurable iteration limits
  - Emergency stop (Ctrl+C)
- **Mode System**
  - Chat mode: limited tool availability
  - Code mode: full tool access
  - Tool filtering by mode
  - Runtime mode switching
- **Session Management**
  - Auto-save conversations
  - Session persistence with encryption
  - Description generation
  - Session timeout management
- **Security**
  - Docker sandboxing for code execution
  - Tool whitelisting
  - Permission prompts for dangerous operations
  - Network isolation options
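The THOUGHT → ACTION → OBSERVATION cycle with an iteration limit can be sketched as follows. This is a simplified, synchronous illustration rather than Owlen's actual implementation; all type and function names are hypothetical:

```rust
// Hypothetical sketch of a ReAct-style loop with an iteration cap.
#[allow(dead_code)]
#[derive(Debug)]
enum Step {
    Thought(String),
    Action { tool: String, input: String },
    Observation(String),
    Done(String), // final answer: exit the loop
}

fn run_agent_loop(
    mut next_step: impl FnMut(&[Step]) -> Step, // stands in for the LLM + tool executor
    max_iterations: usize,                      // configurable iteration limit
) -> Result<String, String> {
    let mut history: Vec<Step> = Vec::new();
    for _ in 0..max_iterations {
        match next_step(&history) {
            Step::Done(answer) => return Ok(answer), // model produced a final answer
            step => history.push(step),              // record and continue the cycle
        }
    }
    Err("iteration limit reached".to_string()) // safety valve, akin to the emergency stop
}
```

A real loop would await the model and MCP tool calls asynchronously and surface Ctrl+C as an early exit.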
### TUI Features
- Vim-style modal editing (Normal, Insert, Visual, Command modes)
- Multi-panel layout (conversation, status, input)
- Syntax highlighting for code blocks
- Theme system (10+ built-in themes)
- Scrollback history (configurable limit)
- Word wrap and visual selection
## Development Guidelines

### Code Style

- **Rust Best Practices**
  - Use `rustfmt` (pre-commit hook enforced)
  - Run `cargo clippy` before commits
  - Prefer `Result` over `panic!` for errors
  - Document public APIs with `///` comments
- **Error Handling**
  - Use the `owlen_core::Error` enum for all errors
  - Chain errors with context (`.map_err(|e| Error::X(format!(...)))`)
  - Never unwrap in library code (tests OK)
- **Async Patterns**
  - All I/O operations must be async
  - Use `tokio::spawn` for background tasks
  - Prefer `tokio::sync::mpsc` for channels
  - Always set timeouts for network operations
- **Testing**
  - Unit tests in the same file (`#[cfg(test)] mod tests`)
  - Use mock implementations from `test_utils` modules
  - Integration tests in `crates/*/tests/`
  - All public APIs must have tests
### File Organization

**When editing existing files:**

- Read the entire file first (use the `Read` tool)
- Preserve existing code style and formatting
- Update related tests in the same commit
- Keep changes atomic and focused

**When creating new files:**

- Check `crates/owlen-core/src/` for similar modules
- Follow the existing module structure
- Add to `lib.rs` with appropriate visibility
- Document the module's purpose with a `//!` header
## Configuration

Config file: `~/.config/owlen/config.toml`

Example structure:

```toml
[general]
default_provider = "ollama"
default_model = "llama3.2:latest"
enable_streaming = true

[mcp]
# MCP is always enabled in v1.0+

[providers.ollama]
provider_type = "ollama"
base_url = "http://localhost:11434"

[providers.ollama-cloud]
provider_type = "ollama-cloud"
base_url = "https://ollama.com"
api_key = "$OLLAMA_API_KEY"

[ui]
theme = "default_dark"
word_wrap = true

[security]
enable_sandboxing = true
allowed_tools = ["web_search", "code_exec"]
```
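The `$OLLAMA_API_KEY` reference above relies on environment-variable expansion. A minimal sketch of such an expander follows; it is hypothetical (the real logic lives in `owlen-core`'s config layer) and takes a lookup closure so it can be tested without touching the process environment:

```rust
/// Expand `$VAR` and `${VAR}` references in a config value.
/// Unknown variables expand to the empty string in this sketch.
fn expand_env(value: &str, lookup: impl Fn(&str) -> Option<String>) -> String {
    let mut out = String::new();
    let mut chars = value.chars().peekable();
    while let Some(c) = chars.next() {
        if c != '$' {
            out.push(c);
            continue;
        }
        if chars.peek() == Some(&'{') {
            // ${VAR} form: read up to the closing brace
            chars.next();
            let name: String = chars.by_ref().take_while(|&c| c != '}').collect();
            out.push_str(&lookup(&name).unwrap_or_default());
        } else {
            // $VAR form: take alphanumerics and underscores
            let mut name = String::new();
            while let Some(&c) = chars.peek() {
                if c.is_ascii_alphanumeric() || c == '_' {
                    name.push(c);
                    chars.next();
                } else {
                    break;
                }
            }
            if name.is_empty() {
                out.push('$'); // lone '$' passes through unchanged
            } else {
                out.push_str(&lookup(&name).unwrap_or_default());
            }
        }
    }
    out
}
```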
## Common Tasks

### Adding a New Provider

1. Create a `crates/owlen-{provider}/` crate
2. Implement the `owlen_core::provider::Provider` trait
3. Add it to `owlen_core::router::ProviderRouter`
4. Update the config schema in `owlen_core::config`
5. Add tests with the `MockProvider` pattern
6. Document it in `docs/provider-implementation.md`
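Step 2 revolves around the `Provider` trait. The real trait in `owlen_core::provider` is async and richer; this is a deliberately simplified, synchronous sketch of the pattern, paired with a test double in the spirit of the `MockProvider` pattern from step 5:

```rust
// Simplified, synchronous stand-in for owlen_core::provider::Provider
// (the real trait is async and carries chat/streaming types).
trait Provider {
    fn name(&self) -> &str;
    fn chat(&self, prompt: &str) -> Result<String, String>;
}

// Test double: returns a canned reply instead of calling a model.
struct MockProvider {
    canned_reply: String,
}

impl Provider for MockProvider {
    fn name(&self) -> &str {
        "mock"
    }

    fn chat(&self, _prompt: &str) -> Result<String, String> {
        Ok(self.canned_reply.clone())
    }
}
```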
### Adding a New MCP Server

1. Create a `crates/owlen-mcp-{name}-server/` crate
2. Implement JSON-RPC 2.0 protocol handlers
3. Define tool descriptors with JSON schemas
4. Add sandboxing/security checks
5. Register in the `mcp_servers` config array
6. Document tool capabilities
### Adding a TUI Feature

1. Modify `crates/owlen-tui/src/chat_app.rs`
2. Update keybinding handlers
3. Extend UI rendering in the `draw()` method
4. Add to the help screen (`?` command)
5. Test with different terminal sizes
6. Ensure theme compatibility
## Feature Parity Roadmap
Based on analysis of OpenAI Codex and Claude Code, here are prioritized features to implement:
### Phase 11: MCP Client Enhancement (HIGHEST PRIORITY)

**Goal:** Full MCP client capabilities to access ecosystem tools

**Features:**

- **MCP Server Management**
  - `owlen mcp add/list/remove` commands
  - Three config scopes: local, project (`.mcp.json`), user
  - Environment variable expansion in config
  - OAuth 2.0 authentication for remote servers
- **MCP Resource References**
  - `@github:issue://123` syntax
  - `@postgres:schema://users` syntax
  - Auto-completion for resources
- **MCP Prompts as Slash Commands**
  - `/mcp__github__list_prs`
  - Dynamic command registration

**Implementation:**

- Extend the `owlen-mcp-client` crate
- Add `.mcp.json` parsing to `owlen-core::config`
- Update the TUI command parser for `@` and `/mcp__` syntax
- Add an OAuth flow to the TUI

**Files to modify:**

- `crates/owlen-mcp-client/src/lib.rs`
- `crates/owlen-core/src/config.rs`
- `crates/owlen-tui/src/command_parser.rs`
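A project-scope `.mcp.json` could look like the following. This shape mirrors the convention popularized by Claude Code and is an assumption rather than a finalized Owlen schema; the server name, command, and env key are illustrative:

```json
{
  "mcpServers": {
    "github": {
      "command": "github-mcp",
      "args": ["--stdio"],
      "env": {
        "GITHUB_TOKEN": "$GITHUB_TOKEN"
      }
    }
  }
}
```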
### Phase 12: Approval & Sandbox System (HIGHEST PRIORITY)

**Goal:** Safe agentic behavior with user control

**Features:**

- **Three-tier Approval Modes**
  - `suggest`: prompt to approve ALL file writes and shell commands (default)
  - `auto-edit`: auto-approve file changes, prompt for shell commands
  - `full-auto`: auto-approve everything (requires a Git repo)
- **Platform-specific Sandboxing**
  - Linux: Docker with network isolation
  - macOS: Apple Seatbelt (`sandbox-exec`)
  - Windows: AppContainer or Job Objects
- **Permission Management**
  - `/permissions` command in the TUI
  - Tool allowlist (e.g., `Edit`, `Bash(git commit:*)`)
  - Stored in `.owlen/settings.json` (project) or `~/.owlen.json` (user)

**Implementation:**

- New `owlen-core::approval` module
- Extend `owlen-core::sandbox` with platform detection
- Update `owlen-mcp-code-server` to use the new sandbox
- Add permission storage to the config system

**Files to create:**

- `crates/owlen-core/src/approval.rs`
- `crates/owlen-core/src/sandbox/linux.rs`
- `crates/owlen-core/src/sandbox/macos.rs`
- `crates/owlen-core/src/sandbox/windows.rs`
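The three-tier policy reduces to a small decision table. A sketch follows, with type and function names that are hypothetical stand-ins for the proposed `owlen-core::approval` module:

```rust
// Hypothetical sketch of the three-tier approval decision.
#[derive(Clone, Copy)]
enum ApprovalMode {
    Suggest,  // prompt for everything (default)
    AutoEdit, // auto-approve edits, prompt for shell
    FullAuto, // auto-approve everything (requires a Git repo)
}

#[derive(Clone, Copy)]
enum Action {
    FileWrite,
    ShellCommand,
}

/// Returns true when the user must confirm before the action runs.
fn requires_approval(mode: ApprovalMode, action: Action) -> bool {
    match (mode, action) {
        (ApprovalMode::Suggest, _) => true,
        (ApprovalMode::AutoEdit, Action::FileWrite) => false,
        (ApprovalMode::AutoEdit, Action::ShellCommand) => true,
        (ApprovalMode::FullAuto, _) => false,
    }
}
```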
### Phase 13: Project Documentation System (HIGH PRIORITY)

**Goal:** Major usability improvement through project context

**Features:**

- **OWLEN.md System**
  - `OWLEN.md` at the repo root (checked into git)
  - `OWLEN.local.md` (gitignored, personal)
  - `~/.config/owlen/OWLEN.md` (global)
  - Support nested OWLEN.md files in monorepos
- **Auto-generation**
  - `/init` command to generate a project-specific OWLEN.md
  - Analyze codebase structure
  - Detect build system and test framework
  - Suggest common commands
- **Live Updates**
  - `#` command to add instructions to OWLEN.md
  - Context-aware insertion (into the relevant section)

**Contents of OWLEN.md:**

- Common bash commands
- Code style guidelines
- Testing instructions
- Core files and utilities
- Known quirks/warnings

**Implementation:**

- New `owlen-core::project_doc` module
- File discovery algorithm (walk up the directory tree)
- Markdown parser for sections
- TUI commands: `/init`, `#`

**Files to create:**

- `crates/owlen-core/src/project_doc.rs`
- `crates/owlen-tui/src/commands/init.rs`
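The "walk up the directory tree" discovery step can be sketched with `Path::ancestors`; the function name is hypothetical:

```rust
use std::path::{Path, PathBuf};

/// Return the nearest OWLEN.md at or above `start` (sketch of the
/// discovery algorithm; nested monorepo docs would extend this).
fn find_project_doc(start: &Path) -> Option<PathBuf> {
    start
        .ancestors()                           // start, start's parent, ...
        .map(|dir| dir.join("OWLEN.md"))       // candidate path in each dir
        .find(|candidate| candidate.is_file()) // first (nearest) hit wins
}
```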
### Phase 14: Non-Interactive Mode (HIGH PRIORITY)

**Goal:** Enable CI/CD integration and automation

**Features:**

- **Headless Execution**

  ```sh
  owlen exec "fix linting errors" --approval-mode auto-edit
  owlen --quiet "update CHANGELOG" --json
  ```

- **Environment Variables**
  - `OWLEN_QUIET_MODE=1`
  - `OWLEN_DISABLE_PROJECT_DOC=1`
  - `OWLEN_APPROVAL_MODE=full-auto`
- **JSON Output**
  - Structured output for parsing
  - Exit codes for success/failure
  - Progress events on stderr

**Implementation:**

- New `owlen-cli` subcommand: `exec`
- Extend `owlen-core::session` with a non-interactive mode
- Add JSON serialization for results
- Environment variable parsing in config

**Files to modify:**

- `crates/owlen-cli/src/main.rs`
- `crates/owlen-core/src/session.rs`
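The environment variables above would map onto a small settings struct. A sketch follows (names beyond the documented variables are hypothetical), again taking a lookup closure for testability:

```rust
/// Settings for headless runs, derived from OWLEN_* environment variables.
#[derive(Debug, Default, PartialEq)]
struct HeadlessSettings {
    quiet: bool,
    disable_project_doc: bool,
    approval_mode: Option<String>,
}

fn headless_from_env(get: impl Fn(&str) -> Option<String>) -> HeadlessSettings {
    HeadlessSettings {
        quiet: get("OWLEN_QUIET_MODE").as_deref() == Some("1"),
        disable_project_doc: get("OWLEN_DISABLE_PROJECT_DOC").as_deref() == Some("1"),
        approval_mode: get("OWLEN_APPROVAL_MODE"), // e.g. "full-auto"
    }
}
```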
### Phase 15: Multi-Provider Expansion (HIGH PRIORITY)

**Goal:** Support cloud providers while staying local-first

**Providers to add:**

- OpenAI (GPT-4, o1, o4-mini)
- Anthropic (Claude 3.5 Sonnet, Opus)
- Google (Gemini Ultra, Pro)
- Mistral AI

**Configuration:**

```toml
[providers.openai]
api_key = "${OPENAI_API_KEY}"
model = "o4-mini"
enabled = true

[providers.anthropic]
api_key = "${ANTHROPIC_API_KEY}"
model = "claude-3-5-sonnet"
enabled = true
```

**Runtime Switching:**

```
:model ollama/starcoder
:model openai/o4-mini
:model anthropic/claude-3-5-sonnet
```

**Implementation:**

- Create `owlen-openai`, `owlen-anthropic`, and `owlen-google` crates
- Implement the `Provider` trait for each
- Add runtime model switching to the TUI
- Keep Ollama as the default

**Files to create:**

- `crates/owlen-openai/src/lib.rs`
- `crates/owlen-anthropic/src/lib.rs`
- `crates/owlen-google/src/lib.rs`
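Parsing the `provider/model` argument used by `:model` is straightforward. A sketch (this version rejects bare model names without a provider prefix, though the real command might default those to Ollama):

```rust
/// Split a `:model` argument like "openai/o4-mini" into (provider, model).
fn parse_model_arg(arg: &str) -> Option<(&str, &str)> {
    match arg.split_once('/') {
        Some((provider, model)) if !provider.is_empty() && !model.is_empty() => {
            Some((provider, model))
        }
        _ => None, // missing or empty provider/model part
    }
}
```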
### Phase 16: Custom Slash Commands (MEDIUM PRIORITY)

**Goal:** User- and team-defined workflows

**Features:**

- **Command Directories**
  - `~/.owlen/commands/` (user, available everywhere)
  - `.owlen/commands/` (project, checked into git)
  - Support for the `$ARGUMENTS` keyword
- **Example Structure**

  ```markdown
  # .owlen/commands/fix-github-issue.md
  Please analyze and fix GitHub issue: $ARGUMENTS.

  1. Use `gh issue view` to get details
  2. Implement changes
  3. Write and run tests
  4. Create PR
  ```

- **TUI Integration**
  - Auto-complete for custom commands
  - Help text from command files
  - Parameter validation

**Implementation:**

- New `owlen-core::commands` module
- Command discovery and parsing
- Template expansion
- TUI command registration

**Files to create:**

- `crates/owlen-core/src/commands.rs`
- `crates/owlen-tui/src/commands/custom.rs`
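Template expansion for the `$ARGUMENTS` keyword is essentially a substitution. A sketch:

```rust
/// Expand the $ARGUMENTS keyword in a custom command template (sketch;
/// a fuller version might also support positional $1, $2 arguments).
fn expand_command_template(template: &str, arguments: &str) -> String {
    template.replace("$ARGUMENTS", arguments)
}
```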
### Phase 17: Plugin System (MEDIUM PRIORITY)

**Goal:** One-command installation of tool collections

**Features:**

- **Plugin Structure**

  ```json
  {
    "name": "github-workflow",
    "version": "1.0.0",
    "commands": [
      {"name": "pr", "file": "commands/pr.md"}
    ],
    "mcp_servers": [
      {
        "name": "github",
        "command": "${OWLEN_PLUGIN_ROOT}/bin/github-mcp"
      }
    ]
  }
  ```

- **Installation**

  ```sh
  owlen plugin install github-workflow
  owlen plugin list
  owlen plugin remove github-workflow
  ```

- **Discovery**
  - `~/.owlen/plugins/` directory
  - Git repository URLs
  - Plugin registry (future)

**Implementation:**

- New `owlen-core::plugins` module
- Plugin manifest parser
- Installation/removal logic
- Sandboxing for plugin code

**Files to create:**

- `crates/owlen-core/src/plugins.rs`
- `crates/owlen-cli/src/commands/plugin.rs`
### Phase 18: Extended Thinking Modes (MEDIUM PRIORITY)

**Goal:** Progressive computation budgets for complex tasks

**Modes:**

- `think` - basic extended thinking
- `think hard` - increased computation
- `think harder` - more computation
- `ultrathink` - maximum budget

**Implementation:**

- Extend `owlen-core::types::ChatParameters`
- Add the thinking mode to TUI commands
- Configure per-provider max tokens

**Files to modify:**

- `crates/owlen-core/src/types.rs`
- `crates/owlen-tui/src/command_parser.rs`
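Mapping the keywords to token budgets could look like the following sketch. The numbers are placeholder assumptions, not documented Owlen or provider values:

```rust
/// Illustrative mapping from thinking keywords to token budgets.
/// Budgets are placeholder assumptions chosen only to show the shape.
fn thinking_budget(mode: &str) -> Option<u32> {
    match mode {
        "think" => Some(4_000),
        "think hard" => Some(10_000),
        "think harder" => Some(20_000),
        "ultrathink" => Some(32_000),
        _ => None, // normal request: no extended thinking
    }
}
```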
### Phase 19: Git Workflow Automation (MEDIUM PRIORITY)

**Goal:** Streamline common Git operations

**Features:**

- Auto-generated commit messages
- PR creation via the `gh` CLI
- Rebase conflict resolution
- File revert operations
- Git history analysis

**Implementation:**

- New `owlen-mcp-git-server` crate
- Tools: `commit`, `create_pr`, `rebase`, `revert`, `history`
- Integration with TUI commands

**Files to create:**

- `crates/owlen-mcp-git-server/src/lib.rs`
### Phase 20: Enterprise Features (LOW PRIORITY)

**Goal:** Team and enterprise deployment support

**Features:**

- **Managed Configuration**
  - `/etc/owlen/managed-mcp.json` (Linux)
  - Restrict user additions with `useEnterpriseMcpConfigOnly`
- **Audit Logging**
  - Log all file writes and shell commands
  - Structured JSON logs
  - Tamper-proof storage
- **Team Collaboration**
  - Shared OWLEN.md across the team
  - Project-scoped MCP servers in `.mcp.json`
  - Approval policy enforcement

**Implementation:**

- Extend `owlen-core::config` with managed settings
- New `owlen-core::audit` module
- Enterprise deployment documentation
## Testing Requirements

### Test Coverage Goals

- Unit tests: 80%+ coverage for `owlen-core`
- Integration tests: all MCP servers and providers
- TUI tests: key workflows (not pixel-perfect)
### Test Organization

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use crate::provider::test_utils::MockProvider;
    use crate::mcp::test_utils::MockMcpClient;

    // Async tests need #[tokio::test]; a plain #[test] cannot .await.
    #[tokio::test]
    async fn test_feature() {
        // Setup
        let provider = MockProvider::new();
        let request = ChatRequest::default(); // build a request for the test

        // Execute
        let result = provider.chat(request).await;

        // Assert
        assert!(result.is_ok());
    }
}
```
### Running Tests

```sh
cargo test --all                 # All tests
cargo test --lib -p owlen-core   # Core library tests
cargo test --test integration    # Integration tests
```
## Documentation Standards

### Code Documentation

1. **Module-level** (`//!` at the top of the file):

   ```rust
   //! Brief module description
   //!
   //! Detailed explanation of module purpose,
   //! key types, and usage examples.
   ```

2. **Public APIs** (`///` above items):

   ```rust
   /// Brief description
   ///
   /// # Arguments
   /// * `arg1` - Description
   ///
   /// # Returns
   /// Description of return value
   ///
   /// # Errors
   /// When this function returns an error
   ///
   /// # Example
   /// ```
   /// let result = function(arg);
   /// ```
   pub fn function(arg: Type) -> Result<Output> {
       // implementation
   }
   ```

3. **Private items:** optional; use for complex logic
### User Documentation

Location: `docs/` directory

**Files to maintain:**

- `architecture.md` - System design
- `configuration.md` - Config reference
- `migration-guide.md` - Version upgrades
- `troubleshooting.md` - Common issues
- `provider-implementation.md` - Adding providers
- `faq.md` - Frequently asked questions
## Git Workflow

### Branch Strategy

- `main` - stable releases only
- `dev` - active development (default)
- `feature/*` - new features
- `fix/*` - bug fixes
- `docs/*` - documentation only

### Commit Messages

Follow conventional commits:

```
type(scope): brief description

Detailed explanation of changes.

Breaking changes, if any.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
```

Types: `feat`, `fix`, `docs`, `refactor`, `test`, `chore`

### Pre-commit Hooks

Automatically run:

- `cargo fmt` (formatting)
- `cargo check` (compilation)
- `cargo clippy` (linting)
- YAML/TOML validation
- Trailing whitespace removal
## Performance Guidelines

### Optimization Priorities

- Startup time: < 500 ms cold start
- First-token latency: < 2 s for local models
- Memory usage: < 100 MB base, < 500 MB with conversation
- Responsiveness: TUI redraws in < 16 ms (60 FPS)
### Profiling

```sh
cargo build --release --features profiling
valgrind --tool=callgrind target/release/owlen
kcachegrind callgrind.out.*
```
### Async Performance

- Avoid blocking in async contexts
- Use `tokio::task::spawn_blocking` for CPU-intensive work (plain `tokio::spawn` would starve the async executor)
- Set timeouts on all network operations
- Cancel tasks on shutdown
## Security Considerations

### Threat Model

**Trusted:**

- User's local machine
- User-installed Ollama models
- User configuration files

**Untrusted:**

- MCP server responses
- Web search results
- Code execution output
- Cloud LLM responses
### Security Measures

- **Input Validation**
  - Sanitize all MCP tool arguments
  - Validate JSON schemas strictly
  - Escape shell commands
- **Sandboxing**
  - Docker for code execution
  - Network isolation
  - Filesystem restrictions
- **Secrets Management**
  - Never log API keys
  - Use environment variables
  - Encrypt sensitive config fields
- **Dependency Auditing**

  ```sh
  cargo audit
  cargo deny check
  ```
## Debugging Tips

### Enable Debug Logging

```sh
OWLEN_DEBUG_OLLAMA=1 owlen   # Ollama requests
RUST_LOG=debug owlen         # All debug logs
RUST_BACKTRACE=1 owlen       # Stack traces
```
### Common Issues

- **Timeout on Ollama**
  - Check `ollama ps` for loaded models
  - Increase the timeout in config
  - Restart the Ollama service
- **MCP Server Not Found**
  - Verify the `mcp_servers` config
  - Check that the server binary exists
  - Test the server manually over STDIO
- **TUI Rendering Issues**
  - Test in different terminals
  - Check the terminal size (`tput cols; tput lines`)
  - Verify theme compatibility
## Contributing

### Before Submitting a PR

1. Run the full test suite: `cargo test --all`
2. Check formatting: `cargo fmt -- --check`
3. Run the linter: `cargo clippy -- -D warnings`
4. Update documentation if the API changed
5. Add tests for new features
6. Update CHANGELOG.md
### PR Description Template

```markdown
## Summary
Brief description of changes

## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update

## Testing
Describe tests performed

## Checklist
- [ ] Tests added/updated
- [ ] Documentation updated
- [ ] CHANGELOG.md updated
- [ ] No clippy warnings
```
## Resources

### External Documentation

### Internal Documentation

- `.agents/new_phases.md` - 10-phase migration plan (completed)
- `docs/phase5-mode-system.md` - Mode system design
- `docs/migration-guide.md` - v0.x → v1.0 migration
### Community

- GitHub Issues: bug reports and feature requests
- GitHub Discussions: questions and ideas
- AUR Package: `owlen-git` (Arch Linux)
## Version History
- v1.0.0 (current) - MCP-only architecture, Phase 10 complete
- v0.2.0 - Added web search, code execution servers
- v0.1.0 - Initial release with Ollama support
## License
Owlen is open source software. See LICENSE file for details.
**Last Updated:** 2025-10-11
**Maintained By:** Owlen Development Team

**For AI Agents:** Follow these guidelines when modifying the Owlen codebase. Prioritize MCP client enhancement (Phase 11) and the approval system (Phase 12) for feature parity with Codex/Claude Code while maintaining the local-first philosophy.