82 Commits

Author SHA1 Message Date
688d1fe58a feat(M8): implement MCP (Model Context Protocol) integration with stdio transport
Milestone M8 implementation adds MCP integration for connecting to external
tool servers and resources.

New crate: crates/integration/mcp-client
- JSON-RPC 2.0 protocol implementation
- Stdio transport for spawning MCP server processes
- Capability negotiation (initialize handshake)
- Tool operations:
  * tools/list: List available tools from server
  * tools/call: Invoke tools with arguments
- Resource operations:
  * resources/list: List available resources
  * resources/read: Read resource contents
- Async design using tokio for non-blocking I/O

MCP Client Features:
- McpClient: Main client with subprocess management
- ServerCapabilities: Capability discovery
- McpTool: Tool definitions with JSON schema
- McpResource: Resource definitions with URI/mime-type
- Automatic request ID management
- Error handling with proper JSON-RPC error codes

Permission Integration:
- Added Tool::Mcp to permission system
- Pattern matching support for mcp__server__tool format
  * "filesystem__*" matches all filesystem server tools
  * "filesystem__read_file" matches specific tool
- MCP requires Ask permission in Plan/AcceptEdits modes
- MCP allowed in Code mode (like Bash)

Tests added (3 new tests with mock Python servers):
1. mcp_server_capability_negotiation - Verifies initialize handshake
2. mcp_tool_invocation - Tests tool listing and calling
3. mcp_resource_reads - Tests resource listing and reading

Permission tests added (2 new tests):
1. mcp_server_pattern_matching - Verifies server-level wildcards
2. mcp_exact_tool_matching - Verifies tool-level exact matching

All 75 tests passing (up from 68).

Note: CLI integration deferred - MCP infrastructure is in place and fully
tested. Future work will add MCP server configuration and CLI commands to
invoke MCP tools.

Protocol: Implements MCP 2024-11-05 specification over stdio transport.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 20:15:39 +01:00
b1b95a4560 feat(M7): implement headless mode with JSON and stream-JSON output formats
Milestone M7 implementation adds programmatic output formats for automation
and machine consumption.

New features:
- --output-format flag with three modes:
  * text (default): Human-readable streaming output
  * json: Single JSON object with session_id, messages, and stats
  * stream-json: NDJSON format with event stream (session_start, chunk, session_end)

- Session tracking:
  * Unique session ID generation (timestamp-based)
  * Duration tracking (ms)
  * Token count estimation (chars / 4 approximation)

- Output structures:
  * SessionOutput: Complete session with messages and stats
  * StreamEvent: Individual events for NDJSON streaming
  * Stats: Token counts (total, prompt, completion) and duration

- Tool result formatting:
  * All tool commands (Read, Write, Edit, Glob, Grep, Bash, SlashCommand)
    support all three output formats
  * JSON mode wraps results with session metadata
  * Stream-JSON mode emits event sequences

- Chat streaming:
  * Text mode: Real-time character streaming (unchanged behavior)
  * JSON mode: Collects full response, outputs once with stats
  * Stream-JSON mode: Emits chunk events as they arrive

Tests added (5 new tests):
1. print_json_has_session_id_and_stats - Verifies JSON output structure
2. stream_json_sequence_is_well_formed - Verifies NDJSON event sequence
3. text_format_is_default - Verifies default behavior unchanged
4. json_format_with_tool_execution - Verifies tool result formatting
5. stream_json_includes_chunk_events - Verifies streaming chunks

All 68 tests passing (up from 63).

This enables programmatic usage for automation, CI/CD, and integration
with other tools.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 20:05:23 +01:00
a024a764d6 feat(M6): implement hooks system with PreToolUse, PostToolUse, and SessionStart events
Milestone M6 implementation adds a comprehensive hook system that allows
users to run custom scripts at various lifecycle events.

New crate: crates/platform/hooks
- HookEvent enum with multiple event types:
  * PreToolUse: fires before tool execution, can deny operations (exit code 2)
  * PostToolUse: fires after tool execution
  * SessionStart: fires at session start, can persist env vars
  * SessionEnd, UserPromptSubmit, PreCompact (defined for future use)
- HookManager for executing hooks with timeout support
- JSON I/O: hooks receive event data via stdin, can output to stdout
- Hooks located in .owlen/hooks/{EventName}

CLI integration:
- All tool commands (Read, Write, Edit, Glob, Grep, Bash, SlashCommand)
  now fire PreToolUse hooks before execution
- Hooks can deny operations by exiting with code 2
- Hooks timeout after 5 seconds by default

Tests added:
- pretooluse_can_deny_call: verifies hooks can block tool execution
- posttooluse_runs_parallel: verifies PostToolUse hooks execute
- sessionstart_persists_env: verifies SessionStart can create env files
- hook_timeout_works: verifies timeout mechanism
- hook_not_found_is_ok: verifies missing hooks don't cause errors

All 63 tests passing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 19:57:38 +01:00
686526bbd4 chore: change slash command directory from .claude to .owlen
Changes slash command directory from `.claude/commands/` to
`.owlen/commands/` to reflect that owlen is its own tool while
maintaining compatibility with claude-code slash command syntax.

Updated locations:
- CLI main: command file path lookup
- Tests: slash_command_works and slash_command_file_refs

All 56 tests passing.
2025-11-01 19:46:40 +01:00
5134462deb feat(tools): implement Slash Commands with frontmatter and file refs (M5 complete)
This commit implements the complete M5 milestone (Slash Commands) including:

Slash Command Parser (tools-slash):
- YAML frontmatter parsing with serde_yaml
- Metadata extraction (description, author, tags, version)
- Arbitrary frontmatter fields via flattened HashMap
- Graceful fallback for commands without frontmatter

Argument Substitution:
- $ARGUMENTS - all arguments joined by space
- $1, $2, $3, etc. - positional arguments
- Unmatched placeholders remain unchanged
- Empty arguments result in empty string for $ARGUMENTS

File Reference Resolution:
- @path syntax to include file contents inline
- Regex-based matching for file references
- Multiple file references supported
- Clear error messages for missing files

CLI Integration:
- Added `slash` subcommand: `owlen slash <command> <args...>`
- Loads commands from `.claude/commands/<name>.md`
- Permission checks for SlashCommand tool
- Automatic file reference resolution before output

Command Structure:
---
description: "Command description"
author: "Author name"
tags:
  - tag1
  - tag2
---
Command body with $ARGUMENTS and @file.txt references

Permission Enforcement:
- Plan mode: SlashCommand allowed (utility tool)
- All modes: SlashCommand respects permissions
- File references respect filesystem permissions

Testing:
- 10 tests in tools-slash for parser functionality
  - Frontmatter parsing with complex YAML
  - Argument substitution (all variants)
  - File reference resolution (single and multiple)
  - Edge cases (no frontmatter, empty args, etc.)
- 3 new tests in CLI for integration
  - slash_command_works (with args and frontmatter)
  - slash_command_file_refs (file inclusion)
  - slash_command_not_found (error handling)
- All 56 workspace tests passing 

Dependencies Added:
- serde_yaml 0.9 for YAML frontmatter parsing
- regex 1.12 for file reference pattern matching

M5 milestone complete! 

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 19:41:42 +01:00
d7ddc365ec feat(tools): implement Bash tool with persistent sessions and timeouts (M4 complete)
This commit implements the complete M4 milestone (Bash tool) including:

Bash Session:
- Persistent bash session using tokio::process
- Environment variables persist between commands
- Current working directory persists between commands
- Session-based execution (not one-off commands)
- Automatic cleanup on session close

Key Features:
- Command timeout support (default: 2 minutes, configurable per-command)
- Output truncation (max 2000 lines for stdout/stderr)
- Exit code capture and propagation
- Stderr capture alongside stdout
- Command delimiter system to reliably detect command completion
- Automatic backup of exit codes to temp files

Implementation Details:
- Uses tokio::process for async command execution
- BashSession maintains single bash process across multiple commands
- stdio handles (stdin/stdout/stderr) are taken and restored for each command
- Non-blocking stderr reading with timeout to avoid deadlocks
- Mutex protection for concurrent access safety

CLI Integration:
- Added `bash` subcommand: `owlen bash <command> [--timeout <ms>]`
- Permission checks with command context for pattern matching
- Stdout/stderr properly routed to respective streams
- Exit code propagation (exits with same code as bash command)

Permission Enforcement:
- Plan mode (default): blocks Bash (asks for approval)
- Code mode: allows Bash
- Pattern matching support for command-specific rules (e.g., "npm test*")

Testing:
- 7 tests in tools-bash for session behavior
  - bash_persists_env_between_calls 
  - bash_persists_cwd_between_calls 
  - bash_command_timeout 
  - bash_output_truncation 
  - bash_command_failure_returns_error_code 
  - bash_stderr_captured 
  - bash_multiple_commands_in_sequence 
- 3 new tests in CLI for permission enforcement
  - plan_mode_blocks_bash_operations 
  - code_mode_allows_bash 
  - bash_command_timeout_works 
- All 43 workspace tests passing 

Dependencies Added:
- tokio with process, io-util, time, sync features

M4 milestone complete! 

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 19:31:36 +01:00
6108b9e3d1 feat(tools): implement Edit and Write tools with deterministic patches (M3 complete)
This commit implements the complete M3 milestone (Edit & Write tools) including:

Write tool:
- Creates new files with parent directory creation
- Overwrites existing files safely
- Simple and straightforward implementation

Edit tool:
- Exact string replacement with uniqueness enforcement
- Detects ambiguous matches (multiple occurrences) and fails safely
- Detects no-match scenarios and fails with clear error
- Automatic backup before modification
- Rollback on write failure (restores from backup)
- Supports multiline string replacements

CLI integration:
- Added `write` subcommand: `owlen write <path> <content>`
- Added `edit` subcommand: `owlen edit <path> <old_string> <new_string>`
- Permission checks for both Write and Edit tools
- Clear error messages for permission denials

Permission enforcement:
- Plan mode (default): blocks Write and Edit (asks for approval)
- AcceptEdits mode: allows Write and Edit
- Code mode: allows all operations

Testing:
- 6 new tests in tools-fs for Write/Edit functionality
- 5 new tests in CLI for permission enforcement with Edit/Write
- Tests verify plan mode blocks, acceptEdits allows, code mode allows all
- All 32 workspace tests passing

Dependencies:
- Added `similar` crate for future diff/patch enhancements

M3 milestone complete! 

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 19:19:49 +01:00
a6cf8585ef feat(permissions): implement permission system with plan mode enforcement (M1 complete)
This commit implements the complete M1 milestone (Config & Permissions) including:

- New permissions crate with Tool, Action, Mode, and PermissionManager
- Three permission modes: Plan (read-only default), AcceptEdits, Code
- Pattern matching for permission rules (exact match and prefix with *)
- Integration with config-agent for mode-based permission management
- CLI integration with --mode flag to override configured mode
- Permission checks for Read, Glob, and Grep operations
- Comprehensive test suite (10 tests in permissions, 4 in config, 4 in CLI)

Also fixes:
- Fixed failing test in tools-fs (glob pattern issue)
- Improved glob_list() root extraction to handle patterns like "/*.txt"

All 21 workspace tests passing.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 19:14:54 +01:00
baf833427a chore: update workspace paths after directory reorganization
Update workspace members and dependency paths to reflect new directory structure:
- crates/cli → crates/app/cli
- crates/config → crates/platform/config

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 18:50:05 +01:00
d21945dbc0 chore(git): ignore custom documentation files
Add AGENTS.md and CLAUDE.md to .gitignore to exclude project-specific documentation files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 18:49:44 +01:00
7f39bf1eca feat(tools): add filesystem tools crate with glob pattern support
- Add new tools-fs crate with read, glob, and grep utilities
- Fix glob command to support actual glob patterns (**, *) instead of just directory walking
- Rename binary from "code" to "owlen" to match package name
- Fix test to reference correct binary name "owlen"
- Add API key support to OllamaClient for authentication

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 18:40:57 +01:00
dcda8216dc feat(ollama): add cloud support with api key and model suffix detection
Add support for Ollama Cloud by detecting model names with "-cloud" suffix
and checking for API key presence. Update config to read OLLAMA_API_KEY
environment variable. When both conditions are met, automatically use
https://ollama.com endpoint; otherwise use local/configured URL.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 18:20:33 +01:00
ff49e7ce93 fix(config): correct environment variable precedence and update prefix
Fix configuration loading order to ensure environment variables have
highest precedence over config files. Also update env prefix from
CODE_ to OWLEN_ for consistency with project naming.

Changes:
- Move env variable merge to end of chain for proper precedence
- Update environment prefix from CODE_ to OWLEN_
- Add precedence tests to verify correct override behavior
- Clean up unused dependencies (serde_json, toml)
- Add tempfile dev dependency for testing

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 16:50:45 +01:00
b63d26f0cd **feat:** update default model to qwen3:8b and simplify chat streaming loop with proper error handling and trailing newline. 2025-11-01 16:37:35 +01:00
64fd3206a2 **chore(git): add JetBrains .idea directory to .gitignore 2025-11-01 16:32:29 +01:00
2a651ebd7b feat(workspace): initialize Rust workspace structure for v2
Set up Cargo workspace with initial crates:
- cli: main application entry point with chat streaming tests
- config: configuration management
- llm/ollama: Ollama client integration with NDJSON support

Includes .gitignore for Rust and JetBrains IDEs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-01 16:30:09 +01:00
491fd049b0 refactor(all)!: clean out project for v2 2025-11-01 14:26:52 +01:00
c9e2f9bae6 fix(core,tui): complete remaining P1 critical fixes
This commit addresses the final 3 P1 high-priority issues from
project-analysis.md, improving resource management and stability.

Changes:

1. **Pin ollama-rs to exact version (P1)**
   - Updated owlen-core/Cargo.toml: ollama-rs "0.3" -> "=0.3.2"
   - Prevents silent breaking changes from 0.x version updates
   - Follows best practice for unstable dependency pinning

2. **Replace unbounded channels with bounded (P1 Critical)**
   - AppMessage channel: unbounded -> bounded(256)
   - AppEvent channel: unbounded -> bounded(64)
   - Updated 8 files across owlen-tui with proper send strategies:
     * Async contexts: .send().await (natural backpressure)
     * Sync contexts: .try_send() (fail-fast for responsiveness)
   - Prevents OOM on systems with <4GB RAM during rapid LLM responses
   - Research-backed capacity selection based on Tokio best practices
   - Impact: Eliminates unbounded memory growth under sustained load

3. **Implement health check rate limiting with TTL cache (P1)**
   - Added 30-second TTL cache to ProviderManager::refresh_health()
   - Reduces provider load from 60 checks/min to ~2 checks/min (30x reduction)
   - Added configurable health_check_ttl_secs to GeneralSettings
   - Thread-safe implementation using RwLock<Option<Instant>>
   - Added force_refresh_health() escape hatch for immediate updates
   - Impact: 83% cache hit rate with default 5s TUI polling
   - New test: health_check_cache_reduces_actual_checks

4. **Rust 2024 let-chain cleanup**
   - Applied let-chain pattern to health check cache logic
   - Fixes clippy::collapsible_if warning in manager.rs:174

Testing:
-  All unit tests pass (owlen-core: 40, owlen-tui: 53)
-  Full build successful in 10.42s
-  Zero clippy warnings with -D warnings
-  Integration tests verify bounded channel backpressure
-  Cache tests confirm 30x load reduction

Performance Impact:
- Memory: Bounded channels prevent unbounded growth
- Latency: Natural backpressure maintains streaming integrity
- Provider Load: 30x reduction in health check frequency
- Responsiveness: Fail-fast semantics keep UI responsive

Files Modified:
- crates/owlen-core/Cargo.toml
- crates/owlen-core/src/config.rs
- crates/owlen-core/src/provider/manager.rs
- crates/owlen-core/tests/provider_manager_edge_cases.rs
- crates/owlen-tui/src/app/mod.rs
- crates/owlen-tui/src/app/generation.rs
- crates/owlen-tui/src/app/worker.rs
- crates/owlen-tui/tests/generation_tests.rs

Status: P0/P1 issues now 100% complete (10/10)
- P0: 2/2 complete
- P1: 10/10 complete (includes 3 from this commit)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-29 15:06:11 +01:00
7b87459a72 chore: update dependencies and fix tree-sitter compatibility
Update dependencies via cargo update to latest compatible versions:
- sqlx: 0.8.0 → 0.8.6 (bug fixes and improvements)
- libsqlite3-sys: 0.28.0 → 0.30.1
- webpki-roots: 0.25.4 → 0.26.11 (TLS security updates)
- hashlink: 0.9.1 → 0.10.0
- serde_json: updated to 1.0.145

Fix tree-sitter version mismatch:
- Update owlen-tui dependency to tree-sitter 0.25 (from 0.20)
- Adapt API call: set_language() now requires &Language reference
- Location: crates/owlen-tui/src/state/search.rs:715

Security audit results (cargo audit):
- 1 low-impact advisory in sqlx-mysql (not used - we use SQLite)
- 3 unmaintained warnings in test dependencies (acceptable)
- No critical vulnerabilities in production dependencies

Testing:
-  cargo build --all: Success
-  cargo test --all: 171+ tests pass, 0 failures
-  cargo clippy: Clean
-  cargo audit: No critical issues

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-29 14:20:52 +01:00
4935a64a13 refactor: complete Rust 2024 let-chain migration
Migrate all remaining collapsible_if patterns to Rust 2024 let-chain
syntax across the entire codebase. This modernizes conditional logic
by replacing nested if statements with single-level expressions using
the && operator with let patterns.

Changes:
- storage.rs: 2 let-chain conversions (database dir creation, legacy archiving)
- session.rs: 3 let-chain conversions (empty content check, ledger dir creation, consent flow)
- ollama.rs: 8 let-chain conversions (socket parsing, cloud validation, model caching, capabilities)
- main.rs: 2 let-chain conversions (API key validation, provider enablement)
- owlen-tui: ~50 let-chain conversions across app/mod.rs, chat_app.rs, ui.rs, highlight.rs, and state modules

Test fixes:
- prompt_server.rs: Add missing .await on async RemoteMcpClient::new_with_config
- presets.rs, prompt_server.rs: Add missing rpc_timeout_secs field to McpServerConfig
- file_write.rs: Update error assertion to accept new "escapes workspace boundary" message

Verification:
- cargo build --all:  succeeds
- cargo clippy --all -- -D clippy::collapsible_if:  zero warnings
- cargo test --all:  109+ tests pass

Net result: -46 lines of code, improved readability and maintainability.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-29 14:10:12 +01:00
a84c8a425d feat: complete Sprint 2 - security fixes, test coverage, Rust 2024 migration
This commit completes Sprint 2 tasks from the project analysis report:

**Security Updates**
- Upgrade sqlx 0.7 → 0.8 (CVE-2024-0363 mitigation, PostgreSQL/MySQL only)
  - Split runtime feature flags: runtime-tokio + tls-rustls
  - Created comprehensive migration guide (SQLX_MIGRATION_GUIDE.md)
  - No breaking changes for SQLite users
- Update ring 0.17.9 → 0.17.14 (AES panic vulnerability CVE fix)
  - Set minimum version constraint: >=0.17.12
  - Verified build and tests pass with updated version

**Provider Manager Test Coverage**
- Add 13 comprehensive edge case tests (provider_manager_edge_cases.rs)
  - Health check state transitions (Available ↔ Unavailable ↔ RequiresSetup)
  - Concurrent registration safety (10 parallel registrations)
  - Generate failure propagation and error handling
  - Empty registry edge cases
  - Stateful FlakeyProvider mock for testing state transitions
- Achieves 90%+ coverage target for ProviderManager

**ProviderManager Clone Optimizations**
- Document optimization strategy (PROVIDER_MANAGER_OPTIMIZATIONS.md)
  - Replace deep HashMap clones with Arc<HashMap> for status_cache
  - Eliminate intermediate Vec allocations in list_all_models
  - Use copy-on-write pattern for writes (optimize hot read path)
  - Expected 15-20% performance improvement in model listing
- Guide ready for implementation (blocked by file watchers in agent session)

**Rust 2024 Edition Migration Audit**
- Remove legacy clippy suppressions (#![allow(clippy::collapsible_if)])
  - Removed from owlen-core/src/lib.rs
  - Removed from owlen-tui/src/lib.rs
  - Removed from owlen-cli/src/main.rs
- Refactor to let-chain syntax (Rust 2024 edition feature)
  - Completed: config.rs (2 locations)
  - Remaining: ollama.rs (8), session.rs (3), storage.rs (2) - documented in agent output
- Enforces modern Rust 2024 patterns

**Test Fixes**
- Fix tool_consent_denied_generates_fallback_message test
  - Root cause: Test didn't trigger ControllerEvent::ToolRequested
  - Solution: Call SessionController::check_streaming_tool_calls()
  - Properly registers consent request in pending_tool_requests
  - Test now passes consistently

**Migration Guides Created**
- SQLX_MIGRATION_GUIDE.md: Comprehensive SQLx 0.8 upgrade guide
- PROVIDER_MANAGER_OPTIMIZATIONS.md: Performance optimization roadmap

**Files Modified**
- Cargo.toml: sqlx 0.8, ring >=0.17.12
- crates/owlen-core/src/{lib.rs, config.rs}: Remove collapsible_if suppressions
- crates/owlen-tui/src/{lib.rs, chat_app.rs}: Remove suppressions, fix test
- crates/owlen-cli/src/main.rs: Remove suppressions

**Files Added**
- crates/owlen-core/tests/provider_manager_edge_cases.rs (13 tests, 420 lines)
- SQLX_MIGRATION_GUIDE.md (migration documentation)
- PROVIDER_MANAGER_OPTIMIZATIONS.md (optimization guide)

**Test Results**
- All owlen-core tests pass (122 total including 13 new)
- owlen-tui::tool_consent_denied_generates_fallback_message now passes
- Build succeeds with all security updates applied

Sprint 2 complete. Next: Apply remaining let-chain refactorings (documented in agent output).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-29 13:35:44 +01:00
16c0e71147 feat: complete Sprint 1 - async migration, RPC timeouts, dependency updates
This commit completes all remaining Sprint 1 tasks from the project analysis:

**MCP RPC Timeout Protection**
- Add configurable `rpc_timeout_secs` field to McpServerConfig
- Implement operation-specific timeouts (10s-120s based on method type)
- Wrap all MCP RPC calls with tokio::time::timeout to prevent indefinite hangs
- Add comprehensive test suite (mcp_timeout.rs) with 5 test cases
- Modified files: config.rs, remote_client.rs, presets.rs, failover.rs, factory.rs, chat_app.rs, mcp.rs

**Async Migration Completion**
- Remove all remaining tokio::task::block_in_place calls
- Replace with try_lock() spin loop pattern for uncontended config access
- Maintains sync API for UI rendering while completing async migration
- Modified files: session.rs (config/config_mut), chat_app.rs (controller_lock)

**Dependency Updates**
- Update tokio 1.47.1 → 1.48.0 for latest performance improvements
- Update reqwest 0.12.23 → 0.12.24 for security patches
- Update 60+ transitive dependencies via cargo update
- Run cargo audit: identified 3 CVEs for Sprint 2 (sqlx, ring, rsa)

**Code Quality**
- Fix clippy deprecation warnings (generic-array 0.x usage in encryption/storage)
- Add temporary #![allow(deprecated)] with TODO comments for future generic-array 1.x upgrade
- All tests passing (except 1 pre-existing failure unrelated to these changes)

Sprint 1 is now complete. Next up: Sprint 2 security fixes and test coverage improvements.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-29 13:14:00 +01:00
0728262a9e fix(core,mcp,security)!: resolve critical P0/P1 issues
BREAKING CHANGES:
- owlen-core no longer depends on ratatui/crossterm
- RemoteMcpClient constructors are now async
- MCP path validation is stricter (security hardening)

This commit resolves three critical issues identified in project analysis:

## P0-1: Extract TUI dependencies from owlen-core

Create owlen-ui-common crate to hold UI-agnostic color and theme
abstractions, removing architectural boundary violation.

Changes:
- Create new owlen-ui-common crate with abstract Color enum
- Move theme.rs from owlen-core to owlen-ui-common
- Define Color with Rgb and Named variants (no ratatui dependency)
- Create color conversion layer in owlen-tui (color_convert.rs)
- Update 35+ color usages with conversion wrappers
- Remove ratatui/crossterm from owlen-core dependencies

Benefits:
- owlen-core usable in headless/CLI contexts
- Enables future GUI frontends
- Reduces binary size for core library consumers

## P0-2: Fix blocking WebSocket connections

Convert RemoteMcpClient constructors to async, eliminating runtime
blocking that froze TUI for 30+ seconds on slow connections.

Changes:
- Make new_with_runtime(), new_with_config(), new() async
- Remove block_in_place wrappers for I/O operations
- Add 30-second connection timeout with tokio::time::timeout
- Update 15+ call sites across 10 files to await constructors
- Convert 4 test functions to #[tokio::test]

Benefits:
- TUI remains responsive during WebSocket connections
- Proper async I/O follows Rust best practices
- No more indefinite hangs

## P1-1: Secure path traversal vulnerabilities

Implement comprehensive path validation with 7 defense layers to
prevent file access outside workspace boundaries.

Changes:
- Create validate_safe_path() with multi-layer security:
  * URL decoding (prevents %2E%2E bypasses)
  * Absolute path rejection
  * Null byte protection
  * Windows-specific checks (UNC/device paths)
  * Lexical path cleaning (removes .. components)
  * Symlink resolution via canonicalization
  * Boundary verification with starts_with check
- Update 4 MCP resource functions (get/list/write/delete)
- Add 11 comprehensive security tests

Benefits:
- Blocks URL-encoded, absolute, UNC path attacks
- Prevents null byte injection
- Stops symlink escape attempts
- Cross-platform security (Windows/Linux/macOS)

## Test Results

- owlen-core: 109/109 tests pass (100%)
- owlen-tui: 52/53 tests pass (98%, 1 pre-existing failure)
- owlen-providers: 2/2 tests pass (100%)
- Build: cargo build --all succeeds

## Verification

- ✓ cargo tree -p owlen-core shows no TUI dependencies
- ✓ No block_in_place calls remain in MCP I/O code
- ✓ All 11 security tests pass

Fixes: #P0-1, #P0-2, #P1-1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-29 12:31:20 +01:00
7aa80fb0a4 feat: add repo automation workflows 2025-10-26 05:49:21 +01:00
28b6eb0a9a feat: enable multimodal attachments for agents 2025-10-26 05:14:17 +01:00
353c0a8239 feat(agent): load configurable profiles from .owlen/agents 2025-10-26 03:12:31 +01:00
44b07c8e27 feat(tui): add command queue and thought summaries 2025-10-26 02:38:10 +01:00
76e59c2d0e feat(tui): add AppEvent dispatch loop 2025-10-26 02:05:14 +01:00
c92e07b866 feat(security): add approval modes with CLI controls 2025-10-26 02:31:03 +02:00
9aa8722ec3 feat(session): disable tools for unsupported models 2025-10-26 01:56:43 +02:00
7daa4f4ebe ci(ollama): add regression workflow 2025-10-26 01:38:48 +02:00
a788b8941e docs(ollama): document cloud credential precedence 2025-10-26 01:36:56 +02:00
16bc534837 refactor(ollama): reuse base normalization in session 2025-10-26 01:33:42 +02:00
eef0e3dea0 test(ollama): cover cloud search defaults 2025-10-26 01:26:28 +02:00
5d9ecec82c feat(ollama): align provider defaults with codex semantics 2025-10-26 01:21:17 +02:00
6980640324 chore: remove outdated roadmap doc 2025-10-26 00:29:45 +02:00
a0868a9b49 feat(compression): adaptive auto transcript compactor 2025-10-26 00:25:23 +02:00
877ece07be fix(xtask): skip png conversion on legacy chafa 2025-10-25 23:16:24 +02:00
f6a3f235df fix(xtask): handle missing chafa gracefully 2025-10-25 23:10:02 +02:00
a4f7a45e56 chore(assets): scripted screenshot pipeline 2025-10-25 23:06:00 +02:00
94ef08db6b test(tui): expand UX regression snapshots 2025-10-25 22:17:53 +02:00
57942219a8 docs: add TUI UX & keybinding playbook 2025-10-25 22:03:11 +02:00
03244e8d24 feat(tui): status surface & toast overhaul 2025-10-25 21:55:52 +02:00
d7066d7d37 feat(guidance): inline cheat-sheets & onboarding 2025-10-25 21:00:36 +02:00
124db19e68 feat(tui): adaptive layout & polish refresh 2025-10-25 18:52:37 +02:00
e89da02d49 feat(commands): add metadata-driven palette with tag filters 2025-10-25 10:30:47 +02:00
cf0a8f21d5 feat(keymap): add configurable leader and Emacs enhancements 2025-10-25 09:54:24 +02:00
2d45406982 feat(keymap): rebuild modal binding registry
Acceptance Criteria:\n- Declarative keymap sequences replace pending_key/pending_focus_chord logic.\n- :keymap show streams the active binding table and bindings export reflects new schema.\n- Vim/Emacs default keymaps resolve multi-step chords via the registry.

Test Notes:\n- cargo test -p owlen-tui
2025-10-25 09:12:14 +02:00
f592840d39 chore(docs): remove obsolete AGENTS playbook
Acceptance Criteria:\n- AGENTS.md deleted to avoid stale planning guidance.\n\nTest Notes:\n- none
2025-10-25 08:18:56 +02:00
9090bddf68 chore(docs): drop remaining agents notes
Acceptance Criteria:\n- Removed agents-2025-10-23.md from the workspace.\n\nTest Notes:\n- none
2025-10-25 08:18:07 +02:00
4981a63224 chore(docs): remove superseded agents playbook
Acceptance Criteria:\n- agents-2025-10-25.md is removed from the repository.\n\nTest Notes:\n- none (docs only)
2025-10-25 08:16:49 +02:00
1238bbe000 feat(accessibility): add high-contrast display modes
Acceptance Criteria:\n- Users can toggle high-contrast and reduced-chrome modes via :accessibility commands.\n- Accessibility flags persist in config and update UI header legends without restart.\n- Reduced chrome removes glass shadows while preserving layout readability.

Test Notes:\n- cargo test -p owlen-tui
2025-10-25 08:12:52 +02:00
f29f306692 test(tui): add golden streaming flows for chat + tool calls
Acceptance-Criteria:\n- snapshot coverage for idle chat and tool-call streaming states protects header, toast, and transcript rendering.\n- Tests use deterministic settings so reruns pass without manual snapshot acceptance.

Test-Notes:\n- INSTA_UPDATE=always cargo test -p owlen-tui --test chat_snapshots\n- cargo test
2025-10-25 07:19:05 +02:00
9024e2b914 feat(mcp): enforce spec-compliant tool identifiers
Acceptance-Criteria:\n- spec-compliant names are shared via WEB_SEARCH_TOOL_NAME and ModeConfig checks canonical identifiers.\n- workspace depends on once_cell so regex helpers build without local target hacks.

Test-Notes:\n- cargo test
2025-10-25 06:45:18 +02:00
6849d5ef12 fix(provider/ollama): respect cloud overrides and rate limits
Acceptance-Criteria:\n- cloud WireMock tests pass when providers.ollama_cloud.base_url targets a local endpoint.\n- 429 responses surface ProviderErrorKind::RateLimited and 401 payloads report API key guidance.

Test-Notes:\n- cargo test -p owlen-core --test ollama_wiremock cloud_rate_limit_returns_error_without_usage\n- cargo test -p owlen-core --test ollama_wiremock cloud_tool_call_flows_through_web_search\n- cargo test
2025-10-25 06:38:55 +02:00
3c6e689de9 docs(mcp): benchmark leading client ecosystems
Acceptance-Criteria:\n- docs cite the MCP identifier regex and enumerate the combined connector bundle.\n- legacy dotted identifiers are removed from the plan in favour of compliant names.

Test-Notes:\n- docs-only change; no automated tests required.
2025-10-25 06:31:05 +02:00
1994367a2e feat(mcp): add tool presets and audit commands
- Introduce reference MCP presets with installation/audit helpers and remove legacy connector lists.
- Add CLI `owlen tools` commands to install presets or audit configuration, with optional pruning.
- Extend the TUI :tools command to support listing presets, installing them, and auditing current configuration.
- Document the preset workflow and provide regression tests for preset application.
2025-10-25 05:39:58 +02:00
c3a92a092b feat(mcp): enforce spec-compliant tool registry
- Reject dotted tool identifiers during registration and remove alias-backed lookups.
- Drop web.search compatibility, normalize all code/tests around the canonical web_search name, and update consent/session logic.
- Harden CLI toggles to manage the spec-compliant identifier and ensure MCP configs shed non-compliant entries automatically.

Acceptance Criteria:
- Tool registry denies invalid identifiers by default and no alias codepaths remain.

Test Notes:
- cargo check -p owlen-core (tests unavailable in sandbox).
2025-10-25 04:48:45 +02:00
6a94373c4f docs(mcp): document reference connector bundles
- Replace brand-specific guidance with spec-compliant naming rules and connector categories.
- Add docs/mcp-reference.md outlining common MCP connectors and installation workflow.
- Point configuration docs at the new guide and reiterate the naming regex.

Acceptance Criteria:
- Documentation directs users toward a combined reference bundle without citing specific vendors.

Test Notes:
- Docs-only change; link checks not run.
2025-10-25 04:25:34 +02:00
83280f68cc docs(mcp): align tool docs with codex parity
- Document web_search as the canonical MCP tool name and retain the legacy web.search alias across README, changelog, and mode docs.
- Explain how to mirror Codex CLI MCP installs with codex mcp add and note the common server bundle.
- Point the configuration guide at spec-compliant naming and runtime toggles for Codex users.

Acceptance Criteria:
- Documentation stops advertising dotted tool names as canonical and references Codex-compatible installs.

Test Notes:
- docs-only change; no automated tests run.
2025-10-25 03:08:34 +02:00
21759898fb fix(commands): surface limits and web toggles
Acceptance-Criteria:
- Command palette suggestions include limits/web management commands
- Help overlay documents :limits and :web on|off|status controls

Test-Notes:
- cargo test -p owlen-tui
- cargo clippy -p owlen-tui --tests -- -D warnings
2025-10-25 01:18:06 +02:00
02df6d893c fix(markdown): restore ratatui bold assertions
Acceptance-Criteria:
- cargo test -p owlen-markdown completes without Style::contains usage
- Workspace lint hook passes under cargo clippy --all-features -D warnings
- Markdown heading and inline code tests still confirm bold styling

Test-Notes:
- cargo test -p owlen-markdown
- cargo clippy -p owlen-markdown --tests -- -D warnings
- cargo clippy --all-features -- -D warnings
2025-10-25 01:10:17 +02:00
8f9d601fdc chore(release): bump workspace to v0.2
Acceptance Criteria:
- Workspace metadata, PKGBUILD, and CHANGELOG announce version 0.2.0
- Release notes summarize major v0.2 additions, changes, and fixes for users

Test Notes:
- cargo test -p owlen-cli
2025-10-25 00:33:15 +02:00
40e42c8918 chore(deps/ui): upgrade ratatui 0.29 and refresh gradients
Acceptance Criteria:
- Workspace builds against ratatui 0.29, crossterm 0.28.1, and tui-textarea 0.7 with palette support enabled
- Chat header context and usage gauges render with refreshed tailwind gradients
- Header layout uses the Flex API to balance top-row metadata across window widths

Test Notes:
- cargo test -p owlen-tui
2025-10-25 00:26:01 +02:00
6e12bb3acb test(integration): add wiremock coverage for ollama flows
Acceptance Criteria:\n- Local provider chat succeeds and records usage\n- Cloud tool-call scenario exercises web.search and usage tracking\n- Unauthorized and rate-limited cloud responses surface errors without recording usage\n\nTest Notes:\n- CARGO_NET_OFFLINE=true cargo test -p owlen-core --tests ollama_wiremock
2025-10-24 23:56:38 +02:00
16b6f24e3e refactor(errors): surface typed provider failures
AC:\n- Providers emit ProviderFailure with structured kind/detail for auth, rate-limit, timeout, and unavailable cases.\n- TUI maps ProviderFailure kinds to consistent toasts and fallbacks (no 429 string matching).\n- Cloud health checks detect unauthorized failures without relying on string parsing.\n\nTests:\n- cargo test -p owlen-core (fails: httpmock cannot bind 127.0.0.1 inside sandbox).\n- cargo test -p owlen-providers\n- cargo test -p owlen-tui
2025-10-24 14:23:00 +02:00
25628d1d58 feat(config): align defaults with provider sections
AC:\n- Config defaults include provider TTL/context extras and normalize cloud quotas/endpoints when missing.\n- owlen config init scaffolds the latest schema; config doctor updates legacy env names and issues warnings.\n- Documentation covers init/doctor usage and runtime env precedence.\n\nTests:\n- cargo test -p owlen-cli\n- cargo test -p owlen-core default_config_sets_provider_extras\n- cargo test -p owlen-core ensure_defaults_backfills_missing_provider_metadata
2025-10-24 13:55:42 +02:00
e813736b47 feat(commands): expose runtime web toggle
AC:
- :web on/off updates tool exposure immediately and persists the toggle.
- owlen providers web --enable/--disable reflects the same setting and reports current status.
- Help/docs cover the new toggle paths and troubleshooting guidance.

Tests:
- cargo test -p owlen-cli
- cargo test -p owlen-core toggling_web_search_updates_config_and_registry
2025-10-24 13:23:47 +02:00
7e2c6ea037 docs(release): prep v0.2 guidance and config samples
AC:\n- README badge shows 0.2.0 and highlights cloud fallback, quotas, web search.\n- Configuration docs and sample config cover list TTL, quotas, context window, and updated env guidance.\n- Troubleshooting docs explain authentication fallback and rate limit recovery.\n\nTests:\n- Attempted 'cargo xtask lint-docs' (command unavailable: no such command: xtask).
2025-10-24 12:56:49 +02:00
3f6d7d56f6 feat(ui): add glass modals and theme preview
AC:\n- Theme, help, command, and model modals share the glass chrome.\n- Theme selector shows a live preview for the highlighted palette.\n- Updated docs and screenshots explain the refreshed cockpit.\n\nTests:\n- cargo test -p owlen-tui
2025-10-24 02:54:19 +02:00
bbb94367e1 feat(tool/web): route searches through provider
Acceptance Criteria:\n- web.search proxies Ollama Cloud's /api/web_search via the configured provider endpoint\n- Tool is only registered when remote search is enabled and the cloud provider is active\n- Consent prompts, docs, and MCP tooling no longer reference DuckDuckGo or expose web_search_detailed

Test Notes:\n- cargo check
2025-10-24 01:29:37 +02:00
79fdafce97 feat(usage): track cloud quotas and expose :limits
Acceptance Criteria:\n- header shows hourly/weekly usage with colored thresholds\n- :limits command prints persisted usage data and quotas\n- token usage survives restarts and emits 80%/95% toasts

Test Notes:\n- cargo test -p owlen-core usage
2025-10-24 00:30:59 +02:00
24671f5f2a feat(provider/ollama): enable tool calls and enrich metadata
Acceptance Criteria:\n- tool descriptors from MCP are forwarded to Ollama chat requests\n- models advertise tool support when metadata or heuristics imply function calling\n- chat responses include provider metadata with final token metrics

Test Notes:\n- cargo test -p owlen-core providers::ollama::tests::prepare_chat_request_serializes_tool_descriptors\n- cargo test -p owlen-core providers::ollama::tests::convert_model_marks_tool_capability\n- cargo test -p owlen-core providers::ollama::tests::convert_response_attaches_provider_metadata
2025-10-23 20:22:52 +02:00
e0b14a42f2 fix(provider/ollama): keep stream whitespace intact
Acceptance Criteria:\n- streaming chunks retain leading whitespace and indentation\n- end-of-stream metadata is still propagated\n- malformed frames emit defensive logging without crashing

Test Notes:\n- cargo test -p owlen-providers
2025-10-23 19:40:53 +02:00
3e8788dd44 fix(config): align ollama cloud defaults with upstream 2025-10-23 19:25:58 +02:00
38a4c55eaa fix(config): rename owlen cloud api key env 2025-10-23 18:41:45 +02:00
c7b7fe98ec feat(session): implement streaming state with text delta and tool‑call diff handling
- Introduce `StreamingMessageState` to track full text, last tool calls, and completion.
- Add `StreamDiff`, `TextDelta`, and `TextDeltaKind` for describing incremental changes.
- SessionController now maintains a `stream_states` map keyed by response IDs.
- `apply_stream_chunk` uses the new state to emit append/replace text deltas and tool‑call updates, handling final chunks and cleanup.
- `Conversation` gains `set_stream_content` to replace streaming content and manage metadata.
- Ensure stream state is cleared on cancel, conversation reset, and controller clear.
2025-10-18 07:15:12 +02:00
4820a6706f feat(provider): enrich model metadata with provider tags and display names, add canonical provider ID handling, and update UI to use new display names and handle provider errors 2025-10-18 06:57:58 +02:00
3308b483f7 feat(providers/ollama): add variant support, retryable tag fetching with CLI fallback, and configurable provider name for robust model listing and health checks 2025-10-18 05:59:50 +02:00
4ce4ac0b0e docs(agents): rewrite AGENTS.md with detailed v0.2 execution plan
Replaces the original overview with a comprehensive execution plan for Owlen v0.2, including:
- Provider health checks and resilient model listing for Ollama and Ollama Cloud
- Cloud key‑gating, rate‑limit handling, and usage tracking
- Multi‑provider model registry and UI aggregation
- Session pipeline refactor for tool calls and partial updates
- Robust streaming JSON parser
- UI header displaying context usage percentages
- Token usage tracker with hourly/weekly limits and toasts
- New web.search tool wrapper and related commands
- Expanded command set (`:provider`, `:model`, `:limits`, `:web`) and config sections
- Additional documentation, testing guidelines, and release notes for v0.2.
2025-10-18 04:52:07 +02:00
3722840d2c feat(tui): add Emacs keymap profile with runtime switching
- Introduce built‑in Emacs keymap (`keymap_emacs.toml`) alongside existing Vim layout.
- Add `ui.keymap_profile` and `ui.keymap_path` configuration options; persist profile changes via `:keymap` command.
- Expose `KeymapProfile` enum (Vim, Emacs, Custom) and integrate it throughout state, UI rendering, and help overlay.
- Extend command registry with `keymap.set_vim` and `keymap.set_emacs` to allow profile switching.
- Update help overlay, command specs, and README to reflect new keybindings and profile commands.
- Adjust `Keymap::load` to honor preferred profile, custom paths, and fallback logic.
2025-10-18 04:51:39 +02:00
02f25b7bec feat(tui): add mouse input handling and layout snapshot for region detection
- Extend event handling to include `MouseEvent` and expose it via a new `Event::Mouse` variant.
- Introduce `LayoutSnapshot` to capture the geometry of UI panels each render cycle.
- Store the latest layout snapshot in `ChatApp` for mouse region lookup.
- Implement mouse click and scroll handling across panels (file tree, thinking, actions, code, model info, chat, input, etc.).
- Add utility functions for region detection, cursor placement from mouse position, and scrolling logic.
- Update UI rendering to populate the layout snapshot with panel rectangles.
2025-10-18 04:11:29 +02:00
225 changed files with 3588 additions and 51856 deletions

View File

@@ -1,20 +0,0 @@
[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-gnu-gcc"
rustflags = ["-C", "target-feature=+crt-static", "-C", "link-arg=-lgcc"]
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
[target.aarch64-unknown-linux-musl]
linker = "aarch64-linux-gnu-gcc"
rustflags = ["-C", "target-feature=+crt-static", "-C", "link-arg=-lgcc"]
[target.armv7-unknown-linux-gnueabihf]
linker = "arm-linux-gnueabihf-gcc"
[target.armv7-unknown-linux-musleabihf]
linker = "arm-linux-gnueabihf-gcc"
rustflags = ["-C", "target-feature=+crt-static", "-C", "link-arg=-lgcc"]
[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"

View File

@@ -1,34 +0,0 @@
name: macos-check
on:
push:
branches:
- dev
pull_request:
branches:
- dev
jobs:
build:
name: cargo check (macOS)
runs-on: macos-latest
steps:
- name: Checkout sources
uses: actions/checkout@v4
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@stable
- name: Cache Cargo registry
uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-
- name: Cargo check
run: cargo check --workspace --all-features

39
.gitignore vendored
View File

@@ -1,13 +1,12 @@
### Custom
AGENTS.md
CLAUDE.md
### Rust template
# Generated by Cargo
# will have compiled files and executables
debug/
target/
dev/
.agents/
.env
.env.*
!.env.example
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
@@ -19,17 +18,10 @@ Cargo.lock
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
# RustRover
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
### JetBrains template
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
.idea/
# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
@@ -60,14 +52,15 @@ Cargo.lock
# When using Gradle or Maven with auto-import, you should exclude module files,
# since they will be recreated, and may cause churn. Uncomment if using
# auto-import.
# .idea/artifacts
# .idea/compiler.xml
# .idea/jarRepositories.xml
# .idea/modules.xml
# .idea/*.iml
# .idea/modules
# *.iml
# *.ipr
.idea/artifacts
.idea/compiler.xml
.idea/jarRepositories.xml
.idea/modules.xml
.idea/*.iml
.idea/modules
*.iml
*.ipr
.idea
# CMake
cmake-build-*/
@@ -104,3 +97,9 @@ fabric.properties
# Android studio 3.1+ serialized cache file
.idea/caches/build_file_checksums.ser
### rust-analyzer template
# Can be generated by other build systems other than cargo (ex: bazelbuild/rust_rules)
rust-project.json

View File

@@ -1,35 +0,0 @@
# Pre-commit hooks configuration
# See https://pre-commit.com for more information
repos:
# General file checks
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
args: ['--allow-multiple-documents']
- id: check-toml
- id: check-merge-conflict
- id: check-added-large-files
args: ['--maxkb=1000']
- id: mixed-line-ending
# Rust formatting
- repo: https://github.com/doublify/pre-commit-rust
rev: v1.0
hooks:
- id: fmt
name: cargo fmt
description: Format Rust code with rustfmt
- id: cargo-check
name: cargo check
description: Check Rust code compilation
- id: clippy
name: cargo clippy
description: Lint Rust code with clippy
args: ['--all-features', '--', '-D', 'warnings']
# Optional: run on all files when config changes
default_install_hook_types: [pre-commit, pre-push]

View File

@@ -1,197 +0,0 @@
---
kind: pipeline
name: pr-checks
when:
event:
- push
- pull_request
steps:
- name: fmt-clippy-test
image: rust:1.83
commands:
- rustup component add rustfmt clippy
- cargo fmt --all -- --check
- cargo clippy --workspace --all-features -- -D warnings
- cargo test --workspace --all-features
---
kind: pipeline
name: security-audit
when:
event:
- push
- cron
branch:
- dev
cron: weekly-security
steps:
- name: cargo-audit
image: rust:1.83
commands:
- cargo install cargo-audit --locked
- cargo audit
---
kind: pipeline
name: release-tests
when:
event: tag
tag: v*
steps:
- name: workspace-tests
image: rust:1.83
commands:
- rustup component add llvm-tools-preview
- cargo install cargo-llvm-cov --locked
- cargo llvm-cov --workspace --all-features --summary-only
- cargo llvm-cov --workspace --all-features --lcov --output-path coverage.lcov --no-run
---
kind: pipeline
name: release
when:
event: tag
tag: v*
variables:
- &rust_image 'rust:1.83'
depends_on:
- release-tests
matrix:
include:
# Linux
- TARGET: x86_64-unknown-linux-gnu
ARTIFACT: owlen-linux-x86_64-gnu
PLATFORM: linux
EXT: ""
- TARGET: x86_64-unknown-linux-musl
ARTIFACT: owlen-linux-x86_64-musl
PLATFORM: linux
EXT: ""
- TARGET: aarch64-unknown-linux-gnu
ARTIFACT: owlen-linux-aarch64-gnu
PLATFORM: linux
EXT: ""
- TARGET: aarch64-unknown-linux-musl
ARTIFACT: owlen-linux-aarch64-musl
PLATFORM: linux
EXT: ""
- TARGET: armv7-unknown-linux-gnueabihf
ARTIFACT: owlen-linux-armv7-gnu
PLATFORM: linux
EXT: ""
- TARGET: armv7-unknown-linux-musleabihf
ARTIFACT: owlen-linux-armv7-musl
PLATFORM: linux
EXT: ""
# Windows
- TARGET: x86_64-pc-windows-gnu
ARTIFACT: owlen-windows-x86_64
PLATFORM: windows
EXT: ".exe"
steps:
- name: build
image: *rust_image
commands:
# Install cross-compilation tools
- apt-get update
- apt-get install -y musl-tools gcc-aarch64-linux-gnu g++-aarch64-linux-gnu gcc-arm-linux-gnueabihf g++-arm-linux-gnueabihf mingw-w64 zip
# Verify cross-compilers are installed
- which aarch64-linux-gnu-gcc || echo "aarch64-linux-gnu-gcc not found!"
- which arm-linux-gnueabihf-gcc || echo "arm-linux-gnueabihf-gcc not found!"
- which x86_64-w64-mingw32-gcc || echo "x86_64-w64-mingw32-gcc not found!"
# Add rust target
- rustup target add ${TARGET}
# Set up cross-compilation environment variables and build
- |
case "${TARGET}" in
aarch64-unknown-linux-gnu)
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=/usr/bin/aarch64-linux-gnu-gcc
export CC_aarch64_unknown_linux_gnu=/usr/bin/aarch64-linux-gnu-gcc
export CXX_aarch64_unknown_linux_gnu=/usr/bin/aarch64-linux-gnu-g++
export AR_aarch64_unknown_linux_gnu=/usr/bin/aarch64-linux-gnu-ar
;;
aarch64-unknown-linux-musl)
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER=/usr/bin/aarch64-linux-gnu-gcc
export CC_aarch64_unknown_linux_musl=/usr/bin/aarch64-linux-gnu-gcc
export CXX_aarch64_unknown_linux_musl=/usr/bin/aarch64-linux-gnu-g++
export AR_aarch64_unknown_linux_musl=/usr/bin/aarch64-linux-gnu-ar
;;
armv7-unknown-linux-gnueabihf)
export CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER=/usr/bin/arm-linux-gnueabihf-gcc
export CC_armv7_unknown_linux_gnueabihf=/usr/bin/arm-linux-gnueabihf-gcc
export CXX_armv7_unknown_linux_gnueabihf=/usr/bin/arm-linux-gnueabihf-g++
export AR_armv7_unknown_linux_gnueabihf=/usr/bin/arm-linux-gnueabihf-ar
;;
armv7-unknown-linux-musleabihf)
export CARGO_TARGET_ARMV7_UNKNOWN_LINUX_MUSLEABIHF_LINKER=/usr/bin/arm-linux-gnueabihf-gcc
export CC_armv7_unknown_linux_musleabihf=/usr/bin/arm-linux-gnueabihf-gcc
export CXX_armv7_unknown_linux_musleabihf=/usr/bin/arm-linux-gnueabihf-g++
export AR_armv7_unknown_linux_musleabihf=/usr/bin/arm-linux-gnueabihf-ar
;;
x86_64-pc-windows-gnu)
export CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER=/usr/bin/x86_64-w64-mingw32-gcc
export CC_x86_64_pc_windows_gnu=/usr/bin/x86_64-w64-mingw32-gcc
export CXX_x86_64_pc_windows_gnu=/usr/bin/x86_64-w64-mingw32-g++
export AR_x86_64_pc_windows_gnu=/usr/bin/x86_64-w64-mingw32-ar
;;
esac
# Build the project
cargo build --release --all-features --target ${TARGET}
- name: package
image: *rust_image
commands:
- apt-get update && apt-get install -y zip
- mkdir -p dist
- |
if [ "${PLATFORM}" = "windows" ]; then
cp target/${TARGET}/release/owlen.exe dist/owlen.exe
cp target/${TARGET}/release/owlen-code.exe dist/owlen-code.exe
cd dist
zip -9 ${ARTIFACT}.zip owlen.exe owlen-code.exe
cd ..
mv dist/${ARTIFACT}.zip .
sha256sum ${ARTIFACT}.zip > ${ARTIFACT}.zip.sha256
else
cp target/${TARGET}/release/owlen dist/owlen
cp target/${TARGET}/release/owlen-code dist/owlen-code
cd dist
tar czf ${ARTIFACT}.tar.gz owlen owlen-code
cd ..
mv dist/${ARTIFACT}.tar.gz .
sha256sum ${ARTIFACT}.tar.gz > ${ARTIFACT}.tar.gz.sha256
fi
- name: release-notes
image: *rust_image
commands:
- scripts/release-notes.sh "${CI_COMMIT_TAG}" release-notes.md
- name: release
image: plugins/gitea-release
settings:
api_key:
from_secret: gitea_token
base_url: https://somegit.dev
files:
- ${ARTIFACT}.tar.gz
- ${ARTIFACT}.tar.gz.sha256
- ${ARTIFACT}.zip
- ${ARTIFACT}.zip.sha256
title: Release ${CI_COMMIT_TAG}
note_file: release-notes.md

798
AGENTS.md
View File

@@ -1,798 +0,0 @@
# AGENTS.md - AI Agent Instructions for Owlen Development
This document provides comprehensive context and guidelines for AI agents (Claude, GPT-4, etc.) working on the Owlen codebase.
## Project Overview
**Owlen** is a local-first, terminal-based AI assistant built in Rust using the Ratatui TUI framework. It implements a Model Context Protocol (MCP) architecture for modular tool execution and supports both local (Ollama) and cloud LLM providers.
**Core Philosophy:**
- **Local-first**: Prioritize local LLMs (Ollama) with cloud as fallback
- **Privacy-focused**: No telemetry, user data stays on device
- **MCP-native**: All operations through MCP servers for modularity
- **Terminal-native**: Vim-style modal interaction in a beautiful TUI
**Current Status:** v1.0 - MCP-only architecture (Phase 10 complete)
## Architecture
### Project Structure
```
owlen/
├── crates/
│ ├── owlen-core/ # Core types, config, provider traits
│ ├── owlen-tui/ # Ratatui-based terminal interface
│ ├── owlen-cli/ # Command-line interface
│ ├── owlen-ollama/ # Ollama provider implementation
│ ├── owlen-mcp-llm-server/ # LLM inference as MCP server
│ ├── owlen-mcp-client/ # MCP client library
│ ├── owlen-mcp-server/ # Base MCP server framework
│ ├── owlen-mcp-code-server/ # Code execution in Docker
│ └── owlen-mcp-prompt-server/ # Prompt management server
├── docs/ # Documentation
├── themes/ # TUI color themes
└── .agents/ # Agent development plans
```
### Key Technologies
- **Language**: Rust 1.83+
- **TUI**: Ratatui with Crossterm backend
- **Async Runtime**: Tokio
- **Config**: TOML (serde)
- **HTTP Client**: reqwest
- **LLM Providers**: Ollama (primary), with extensibility for OpenAI/Anthropic
- **Protocol**: JSON-RPC 2.0 over STDIO/HTTP/WebSocket
## Current Features (v1.0)
### Core Capabilities
1. **MCP Architecture** (Phase 3-10 complete)
- All LLM interactions via MCP servers
- Local and remote MCP client support
- STDIO, HTTP, WebSocket transports
- Automatic failover with health checks
2. **Provider System**
- Ollama (local and cloud)
- Configurable per-provider settings
- API key management with env variable expansion
- Model switching via TUI (`:m` command)
3. **Agentic Loop** (ReAct pattern)
- THOUGHT → ACTION → OBSERVATION cycle
- Tool discovery and execution
- Configurable iteration limits
- Emergency stop (Ctrl+C)
4. **Mode System**
- Chat mode: Limited tool availability
- Code mode: Full tool access
- Tool filtering by mode
- Runtime mode switching
5. **Session Management**
- Auto-save conversations
- Session persistence with encryption
- Description generation
- Session timeout management
6. **Security**
- Docker sandboxing for code execution
- Tool whitelisting
- Permission prompts for dangerous operations
- Network isolation options
### TUI Features
- Vim-style modal editing (Normal, Insert, Visual, Command modes)
- Multi-panel layout (conversation, status, input)
- Syntax highlighting for code blocks
- Theme system (10+ built-in themes)
- Scrollback history (configurable limit)
- Word wrap and visual selection
## Development Guidelines
### Code Style
1. **Rust Best Practices**
- Use `rustfmt` (pre-commit hook enforced)
- Run `cargo clippy` before commits
- Prefer `Result` over `panic!` for errors
- Document public APIs with `///` comments
2. **Error Handling**
- Use `owlen_core::Error` enum for all errors
- Chain errors with context (`.map_err(|e| Error::X(format!(...)))`)
- Never unwrap in library code (tests OK)
3. **Async Patterns**
- All I/O operations must be async
- Use `tokio::spawn` for background tasks
- Prefer `tokio::sync::mpsc` for channels
- Always set timeouts for network operations
4. **Testing**
- Unit tests in same file (`#[cfg(test)] mod tests`)
- Use mock implementations from `test_utils` modules
- Integration tests in `crates/*/tests/`
- All public APIs must have tests
### File Organization
**When editing existing files:**
1. Read the entire file first (use `Read` tool)
2. Preserve existing code style and formatting
3. Update related tests in the same commit
4. Keep changes atomic and focused
**When creating new files:**
1. Check `crates/owlen-core/src/` for similar modules
2. Follow existing module structure
3. Add to `lib.rs` with appropriate visibility
4. Document module purpose with `//!` header
### Configuration
**Config file**: `~/.config/owlen/config.toml`
Example structure:
```toml
[general]
default_provider = "ollama"
default_model = "llama3.2:latest"
enable_streaming = true
[mcp]
# MCP is always enabled in v1.0+
[providers.ollama]
provider_type = "ollama"
base_url = "http://localhost:11434"
[providers.ollama-cloud]
provider_type = "ollama-cloud"
base_url = "https://ollama.com"
api_key = "$OLLAMA_API_KEY"
[ui]
theme = "default_dark"
word_wrap = true
[security]
enable_sandboxing = true
allowed_tools = ["web_search", "code_exec"]
```
### Common Tasks
#### Adding a New Provider
1. Create `crates/owlen-{provider}/` crate
2. Implement `owlen_core::provider::Provider` trait
3. Add to `owlen_core::router::ProviderRouter`
4. Update config schema in `owlen_core::config`
5. Add tests with `MockProvider` pattern
6. Document in `docs/provider-implementation.md`
#### Adding a New MCP Server
1. Create `crates/owlen-mcp-{name}-server/` crate
2. Implement JSON-RPC 2.0 protocol handlers
3. Define tool descriptors with JSON schemas
4. Add sandboxing/security checks
5. Register in `mcp_servers` config array
6. Document tool capabilities
#### Adding a TUI Feature
1. Modify `crates/owlen-tui/src/chat_app.rs`
2. Update keybinding handlers
3. Extend UI rendering in `draw()` method
4. Add to help screen (`?` command)
5. Test with different terminal sizes
6. Ensure theme compatibility
## Feature Parity Roadmap
Based on analysis of OpenAI Codex and Claude Code, here are prioritized features to implement:
### Phase 11: MCP Client Enhancement (HIGHEST PRIORITY)
**Goal**: Full MCP client capabilities to access ecosystem tools
**Features:**
1. **MCP Server Management**
- `owlen mcp add/list/remove` commands
- Three config scopes: local, project (`.mcp.json`), user
- Environment variable expansion in config
- OAuth 2.0 authentication for remote servers
2. **MCP Resource References**
- `@github:issue://123` syntax
- `@postgres:schema://users` syntax
- Auto-completion for resources
3. **MCP Prompts as Slash Commands**
- `/mcp__github__list_prs`
- Dynamic command registration
**Implementation:**
- Extend `owlen-mcp-client` crate
- Add `.mcp.json` parsing to `owlen-core::config`
- Update TUI command parser for `@` and `/mcp__` syntax
- Add OAuth flow to TUI
**Files to modify:**
- `crates/owlen-mcp-client/src/lib.rs`
- `crates/owlen-core/src/config.rs`
- `crates/owlen-tui/src/command_parser.rs`
### Phase 12: Approval & Sandbox System (HIGHEST PRIORITY)
**Goal**: Safe agentic behavior with user control
**Features:**
1. **Three-tier Approval Modes**
- `suggest`: Approve ALL file writes and shell commands (default)
- `auto-edit`: Auto-approve file changes, prompt for shell
- `full-auto`: Auto-approve everything (requires Git repo)
2. **Platform-specific Sandboxing**
- Linux: Docker with network isolation
- macOS: Apple Seatbelt (`sandbox-exec`)
- Windows: AppContainer or Job Objects
3. **Permission Management**
- `/permissions` command in TUI
- Tool allowlist (e.g., `Edit`, `Bash(git commit:*)`)
- Stored in `.owlen/settings.json` (project) or `~/.owlen.json` (user)
**Implementation:**
- New `owlen-core::approval` module
- Extend `owlen-core::sandbox` with platform detection
- Update `owlen-mcp-code-server` to use new sandbox
- Add permission storage to config system
**Files to create:**
- `crates/owlen-core/src/approval.rs`
- `crates/owlen-core/src/sandbox/linux.rs`
- `crates/owlen-core/src/sandbox/macos.rs`
- `crates/owlen-core/src/sandbox/windows.rs`
### Phase 13: Project Documentation System (HIGH PRIORITY)
**Goal**: Massive usability improvement with project context
**Features:**
1. **OWLEN.md System**
- `OWLEN.md` at repo root (checked into git)
- `OWLEN.local.md` (gitignored, personal)
- `~/.config/owlen/OWLEN.md` (global)
- Support nested OWLEN.md in monorepos
2. **Auto-generation**
- `/init` command to generate project-specific OWLEN.md
- Analyze codebase structure
- Detect build system, test framework
- Suggest common commands
3. **Live Updates**
- `#` command to add instructions to OWLEN.md
- Context-aware insertion (relevant section)
**Contents of OWLEN.md:**
- Common bash commands
- Code style guidelines
- Testing instructions
- Core files and utilities
- Known quirks/warnings
**Implementation:**
- New `owlen-core::project_doc` module
- File discovery algorithm (walk up directory tree)
- Markdown parser for sections
- TUI commands: `/init`, `#`
**Files to create:**
- `crates/owlen-core/src/project_doc.rs`
- `crates/owlen-tui/src/commands/init.rs`
### Phase 14: Non-Interactive Mode (HIGH PRIORITY)
**Goal**: Enable CI/CD integration and automation
**Features:**
1. **Headless Execution**
```bash
owlen exec "fix linting errors" --approval-mode auto-edit
owlen --quiet "update CHANGELOG" --json
```
2. **Environment Variables**
- `OWLEN_QUIET_MODE=1`
- `OWLEN_DISABLE_PROJECT_DOC=1`
- `OWLEN_APPROVAL_MODE=full-auto`
3. **JSON Output**
- Structured output for parsing
- Exit codes for success/failure
- Progress events on stderr
**Implementation:**
- New `owlen-cli` subcommand: `exec`
- Extend `owlen-core::session` with non-interactive mode
- Add JSON serialization for results
- Environment variable parsing in config
**Files to modify:**
- `crates/owlen-cli/src/main.rs`
- `crates/owlen-core/src/session.rs`
### Phase 15: Multi-Provider Expansion (HIGH PRIORITY)
**Goal**: Support cloud providers while maintaining local-first
**Providers to add:**
1. OpenAI (GPT-4, o1, o4-mini)
2. Anthropic (Claude 3.5 Sonnet, Opus)
3. Google (Gemini Ultra, Pro)
4. Mistral AI
**Configuration:**
```toml
[providers.openai]
api_key = "${OPENAI_API_KEY}"
model = "o4-mini"
enabled = true
[providers.anthropic]
api_key = "${ANTHROPIC_API_KEY}"
model = "claude-3-5-sonnet"
enabled = true
```
**Runtime Switching:**
```
:model ollama/starcoder
:model openai/o4-mini
:model anthropic/claude-3-5-sonnet
```
**Implementation:**
- Create `owlen-openai`, `owlen-anthropic`, `owlen-google` crates
- Implement `Provider` trait for each
- Add runtime model switching to TUI
- Maintain Ollama as default
**Files to create:**
- `crates/owlen-openai/src/lib.rs`
- `crates/owlen-anthropic/src/lib.rs`
- `crates/owlen-google/src/lib.rs`
### Phase 16: Custom Slash Commands (MEDIUM PRIORITY)
**Goal**: User and team-defined workflows
**Features:**
1. **Command Directories**
- `~/.owlen/commands/` (user, available everywhere)
- `.owlen/commands/` (project, checked into git)
- Support `$ARGUMENTS` keyword
2. **Example Structure**
```markdown
# .owlen/commands/fix-github-issue.md
Please analyze and fix GitHub issue: $ARGUMENTS.
1. Use `gh issue view` to get details
2. Implement changes
3. Write and run tests
4. Create PR
```
3. **TUI Integration**
- Auto-complete for custom commands
- Help text from command files
- Parameter validation
**Implementation:**
- New `owlen-core::commands` module
- Command discovery and parsing
- Template expansion
- TUI command registration
**Files to create:**
- `crates/owlen-core/src/commands.rs`
- `crates/owlen-tui/src/commands/custom.rs`
### Phase 17: Plugin System (MEDIUM PRIORITY)
**Goal**: One-command installation of tool collections
**Features:**
1. **Plugin Structure**
```json
{
"name": "github-workflow",
"version": "1.0.0",
"commands": [
{"name": "pr", "file": "commands/pr.md"}
],
"mcp_servers": [
{
"name": "github",
"command": "${OWLEN_PLUGIN_ROOT}/bin/github-mcp"
}
]
}
```
2. **Installation**
```bash
owlen plugin install github-workflow
owlen plugin list
owlen plugin remove github-workflow
```
3. **Discovery**
- `~/.owlen/plugins/` directory
- Git repository URLs
- Plugin registry (future)
**Implementation:**
- New `owlen-core::plugins` module
- Plugin manifest parser
- Installation/removal logic
- Sandboxing for plugin code
**Files to create:**
- `crates/owlen-core/src/plugins.rs`
- `crates/owlen-cli/src/commands/plugin.rs`
### Phase 18: Extended Thinking Modes (MEDIUM PRIORITY)
**Goal**: Progressive computation budgets for complex tasks
**Modes:**
- `think` - basic extended thinking
- `think hard` - increased computation
- `think harder` - more computation
- `ultrathink` - maximum budget
**Implementation:**
- Extend `owlen-core::types::ChatParameters`
- Add thinking mode to TUI commands
- Configure per-provider max tokens
**Files to modify:**
- `crates/owlen-core/src/types.rs`
- `crates/owlen-tui/src/command_parser.rs`
### Phase 19: Git Workflow Automation (MEDIUM PRIORITY)
**Goal**: Streamline common Git operations
**Features:**
1. Auto-commit message generation
2. PR creation via `gh` CLI
3. Rebase conflict resolution
4. File revert operations
5. Git history analysis
**Implementation:**
- New `owlen-mcp-git-server` crate
- Tools: `commit`, `create_pr`, `rebase`, `revert`, `history`
- Integration with TUI commands
**Files to create:**
- `crates/owlen-mcp-git-server/src/lib.rs`
### Phase 20: Enterprise Features (LOW PRIORITY)
**Goal**: Team and enterprise deployment support
**Features:**
1. **Managed Configuration**
- `/etc/owlen/managed-mcp.json` (Linux)
- Restrict user additions with `useEnterpriseMcpConfigOnly`
2. **Audit Logging**
- Log all file writes and shell commands
- Structured JSON logs
- Tamper-proof storage
3. **Team Collaboration**
- Shared OWLEN.md across team
- Project-scoped MCP servers in `.mcp.json`
- Approval policy enforcement
**Implementation:**
- Extend `owlen-core::config` with managed settings
- New `owlen-core::audit` module
- Enterprise deployment documentation
## Testing Requirements
### Test Coverage Goals
- **Unit tests**: 80%+ coverage for `owlen-core`
- **Integration tests**: All MCP servers, providers
- **TUI tests**: Key workflows (not pixel-perfect)
### Test Organization
```rust
#[cfg(test)]
mod tests {
use super::*;
use crate::provider::test_utils::MockProvider;
use crate::mcp::test_utils::MockMcpClient;
#[test]
fn test_feature() {
// Setup
let provider = MockProvider::new();
// Execute
let result = provider.chat(request).await;
// Assert
assert!(result.is_ok());
}
}
```
### Running Tests
```bash
cargo test --all # All tests
cargo test --lib -p owlen-core # Core library tests
cargo test --test integration # Integration tests
```
## Documentation Standards
### Code Documentation
1. **Module-level** (`//!` at top of file):
```rust
//! Brief module description
//!
//! Detailed explanation of module purpose,
//! key types, and usage examples.
```
2. **Public APIs** (`///` above items):
```rust
/// Brief description
///
/// # Arguments
/// * `arg1` - Description
///
/// # Returns
/// Description of return value
///
/// # Errors
/// When this function returns an error
///
/// # Example
/// ```
/// let result = function(arg);
/// ```
pub fn function(arg: Type) -> Result<Output> {
// implementation
}
```
3. **Private items**: Optional, use for complex logic
### User Documentation
Location: `docs/` directory
Files to maintain:
- `architecture.md` - System design
- `configuration.md` - Config reference
- `migration-guide.md` - Version upgrades
- `troubleshooting.md` - Common issues
- `provider-implementation.md` - Adding providers
- `faq.md` - Frequently asked questions
## Git Workflow
### Branch Strategy
- `main` - stable releases only
- `dev` - active development (default)
- `feature/*` - new features
- `fix/*` - bug fixes
- `docs/*` - documentation only
### Commit Messages
Follow conventional commits:
```
type(scope): brief description
Detailed explanation of changes.
Breaking changes, if any.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
Types: `feat`, `fix`, `docs`, `refactor`, `test`, `chore`
### Pre-commit Hooks
Automatically run:
- `cargo fmt` (formatting)
- `cargo check` (compilation)
- `cargo clippy` (linting)
- YAML/TOML validation
- Trailing whitespace removal
## Performance Guidelines
### Optimization Priorities
1. **Startup time**: < 500ms cold start
2. **First token latency**: < 2s for local models
3. **Memory usage**: < 100MB base, < 500MB with conversation
4. **Responsiveness**: TUI redraws < 16ms (60 FPS)
### Profiling
```bash
cargo build --release --features profiling
valgrind --tool=callgrind target/release/owlen
kcachegrind callgrind.out.*
```
### Async Performance
- Avoid blocking in async contexts
- Use `tokio::spawn` for CPU-intensive work
- Set timeouts on all network operations
- Cancel tasks on shutdown
## Security Considerations
### Threat Model
**Trusted:**
- User's local machine
- User-installed Ollama models
- User configuration files
**Untrusted:**
- MCP server responses
- Web search results
- Code execution output
- Cloud LLM responses
### Security Measures
1. **Input Validation**
- Sanitize all MCP tool arguments
- Validate JSON schemas strictly
- Escape shell commands
2. **Sandboxing**
- Docker for code execution
- Network isolation
- Filesystem restrictions
3. **Secrets Management**
- Never log API keys
- Use environment variables
- Encrypt sensitive config fields
4. **Dependency Auditing**
```bash
cargo audit
cargo deny check
```
## Debugging Tips
### Enable Debug Logging
```bash
OWLEN_DEBUG_OLLAMA=1 owlen # Ollama requests
RUST_LOG=debug owlen # All debug logs
RUST_BACKTRACE=1 owlen # Stack traces
```
### Common Issues
1. **Timeout on Ollama**
- Check `ollama ps` for loaded models
- Increase timeout in config
- Restart Ollama service
2. **MCP Server Not Found**
- Verify `mcp_servers` config
- Check server binary exists
- Test server manually with STDIO
3. **TUI Rendering Issues**
- Test in different terminals
- Check terminal size (`tput cols; tput lines`)
- Verify theme compatibility
## Contributing
### Before Submitting PR
1. Run full test suite: `cargo test --all`
2. Check formatting: `cargo fmt -- --check`
3. Run linter: `cargo clippy -- -D warnings`
4. Update documentation if API changed
5. Add tests for new features
6. Update CHANGELOG.md
### PR Description Template
```markdown
## Summary
Brief description of changes
## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Testing
Describe tests performed
## Checklist
- [ ] Tests added/updated
- [ ] Documentation updated
- [ ] CHANGELOG.md updated
- [ ] No clippy warnings
```
## Resources
### External Documentation
- [Ratatui Docs](https://ratatui.rs/)
- [Tokio Tutorial](https://tokio.rs/tokio/tutorial)
- [MCP Specification](https://modelcontextprotocol.io/)
- [Ollama API](https://github.com/ollama/ollama/blob/main/docs/api.md)
### Internal Documentation
- `.agents/new_phases.md` - 10-phase migration plan (completed)
- `docs/phase5-mode-system.md` - Mode system design
- `docs/migration-guide.md` - v0.x → v1.0 migration
### Community
- GitHub Issues: Bug reports and feature requests
- GitHub Discussions: Questions and ideas
- AUR Package: `owlen-git` (Arch Linux)
## Version History
- **v1.0.0** (current) - MCP-only architecture, Phase 10 complete
- **v0.2.0** - Added web search, code execution servers
- **v0.1.0** - Initial release with Ollama support
## License
Owlen is open source software. See LICENSE file for details.
---
**Last Updated**: 2025-10-11
**Maintained By**: Owlen Development Team
**For AI Agents**: Follow these guidelines when modifying Owlen codebase. Prioritize MCP client enhancement (Phase 11) and approval system (Phase 12) for feature parity with Codex/Claude Code while maintaining local-first philosophy.

View File

@@ -1,114 +0,0 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Comprehensive documentation suite including guides for architecture, configuration, testing, and more.
- Rustdoc examples for core components like `Provider` and `SessionController`.
- Module-level documentation for `owlen-tui`.
- Provider integration tests (`crates/owlen-providers/tests`) covering registration, routing, and health status handling for the new `ProviderManager`.
- TUI message and generation tests that exercise the non-blocking event loop, background worker, and message dispatch.
- Ollama integration can now talk to Ollama Cloud when an API key is configured.
- Ollama provider will also read `OLLAMA_API_KEY` / `OLLAMA_CLOUD_API_KEY` environment variables when no key is stored in the config.
- `owlen config doctor`, `owlen config path`, and `owlen upgrade` CLI commands to automate migrations and surface manual update steps.
- Startup provider health check with actionable hints when Ollama or remote MCP servers are unavailable.
- `dev/check-windows.sh` helper script for on-demand Windows cross-checks.
- Global F1 keybinding for the in-app help overlay and a clearer status hint on launch.
- Automatic fallback to the new `ansi_basic` theme when the active terminal only advertises 16-color support.
- Offline provider shim that keeps the TUI usable while primary providers are unreachable and communicates recovery steps inline.
- `owlen cloud` subcommands (`setup`, `status`, `models`, `logout`) for managing Ollama Cloud credentials without hand-editing config files.
- Tabbed model selector that separates local and cloud providers, including cloud indicators in the UI.
- Footer status line includes provider connectivity/credential summaries (e.g., cloud auth failures, missing API keys).
- Secure credential vault integration for Ollama Cloud API keys when `privacy.encrypt_local_data = true`.
- Input panel respects a new `ui.input_max_rows` setting so long prompts expand predictably before scrolling kicks in.
- Command palette offers fuzzy `:model` filtering and `:provider` completions for fast switching.
- Message rendering caches wrapped lines and throttles streaming redraws to keep the TUI responsive on long sessions.
- Model picker badges now inspect provider capabilities so vision/audio/thinking models surface the correct icons even when descriptions are sparse.
- Chat history honors `ui.scrollback_lines`, trimming older rows to keep the TUI responsive and surfacing a "↓ New messages" badge whenever updates land off-screen.
### Changed
- The main `README.md` has been updated to be more concise and link to the new documentation.
- Default configuration now pre-populates both `providers.ollama` and `providers.ollama-cloud` entries so switching between local and cloud backends is a single setting change.
- `McpMode` support was restored with explicit validation; `remote_only`, `remote_preferred`, and `local_only` now behave predictably.
- Configuration loading performs structural validation and fails fast on missing default providers or invalid MCP definitions.
- Ollama provider error handling now distinguishes timeouts, missing models, and authentication failures.
- `owlen` warns when the active terminal likely lacks 256-color support.
- `config.toml` now carries a schema version (`1.2.0`) and is migrated automatically; deprecated keys such as `agent.max_tool_calls` trigger warnings instead of hard failures.
- Model selector navigation (Tab/Shift-Tab) now switches between local and cloud tabs while preserving selection state.
- Header displays the active model together with its provider (e.g., `Model (Provider)`), improving clarity when swapping backends.
- Documentation refreshed to cover the message handler architecture, the background health worker, multi-provider configuration, and the new provider onboarding checklist.
---
## [0.1.11] - 2025-10-18
### Changed
- Bump workspace packages and distribution metadata to version `0.1.11`.
## [0.1.10] - 2025-10-03
### Added
- **Material Light Theme**: A new built-in theme, `material-light`, has been added.
### Fixed
- **UI Readability**: Fixed a bug causing unreadable text in light themes.
- **Visual Selection**: The visual selection mode now correctly colors unselected text portions.
### Changed
- **Theme Colors**: The color palettes for `gruvbox`, `rose-pine`, and `monokai` have been corrected.
- **In-App Help**: The `:help` menu has been significantly expanded and updated.
## [0.1.9] - 2025-10-03
*This version corresponds to the release tagged v0.1.10 in the source repository.*
### Added
- **Material Light Theme**: A new built-in theme, `material-light`, has been added.
### Fixed
- **UI Readability**: Fixed a bug causing unreadable text in light themes.
- **Visual Selection**: The visual selection mode now correctly colors unselected text portions.
### Changed
- **Theme Colors**: The color palettes for `gruvbox`, `rose-pine`, and `monokai` have been corrected.
- **In-App Help**: The `:help` menu has been significantly expanded and updated.
## [0.1.8] - 2025-10-02
### Added
- **Command Autocompletion**: Implemented intelligent command suggestions and Tab completion in command mode.
### Changed
- **Build & CI**: Fixed cross-compilation for ARM64, ARMv7, and Windows.
## [0.1.7] - 2025-10-02
### Added
- **Tabbed Help System**: The help menu is now organized into five tabs for easier navigation.
- **Command Aliases**: Added `:o` as a short alias for `:load` / `:open`.
### Changed
- **Session Management**: Improved AI-generated session descriptions.
## [0.1.6] - 2025-10-02
### Added
- **Platform-Specific Storage**: Sessions are now saved to platform-appropriate directories (e.g., `~/.local/share/owlen` on Linux).
- **AI-Generated Session Descriptions**: Conversations can be automatically summarized on save.
### Changed
- **Migration**: Users on older versions can manually move their sessions from `~/.config/owlen/sessions` to the new platform-specific directory.
## [0.1.4] - 2025-10-01
### Added
- **Multi-Platform Builds**: Pre-built binaries are now provided for Linux (x86_64, aarch64, armv7) and Windows (x86_64).
- **AUR Package**: Owlen is now available on the Arch User Repository.
### Changed
- **Build System**: Switched from OpenSSL to rustls for better cross-platform compatibility.

View File

@@ -1,121 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that are welcoming, open, and respectful.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
[security@owlibou.com](mailto:security@owlibou.com). All complaints will be
reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interaction in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html

View File

@@ -1,126 +0,0 @@
# Contributing to Owlen
First off, thank you for considering contributing to Owlen! It's people like you that make Owlen such a great tool.
Following these guidelines helps to communicate that you respect the time of the developers managing and developing this open source project. In return, they should reciprocate that respect in addressing your issue, assessing changes, and helping you finalize your pull requests.
## Code of Conduct
This project and everyone participating in it is governed by the [Owlen Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior.
## How Can I Contribute?
### Repository map
Need a quick orientation before diving in? Start with the curated [repo map](docs/repo-map.md) for a two-level directory overview. If you move folders around, regenerate it with `scripts/gen-repo-map.sh`.
### Reporting Bugs
This is one of the most helpful ways you can contribute. Before creating a bug report, please check a few things:
1. **Check the [troubleshooting guide](docs/troubleshooting.md).** Your issue might be a common one with a known solution.
2. **Search the existing issues.** It's possible someone has already reported the same bug. If so, add a comment to the existing issue instead of creating a new one.
When you are creating a bug report, please include as many details as possible. Fill out the required template, the information it asks for helps us resolve issues faster.
### Suggesting Enhancements
If you have an idea for a new feature or an improvement to an existing one, we'd love to hear about it. Please provide as much context as you can about what you're trying to achieve.
### Your First Code Contribution
Unsure where to begin contributing to Owlen? You can start by looking through `good first issue` and `help wanted` issues.
### Pull Requests
The process for submitting a pull request is as follows:
1. **Fork the repository** and create your branch from `main`.
2. **Set up pre-commit hooks** (see [Development Setup](#development-setup) above). This will automatically format and lint your code.
3. **Make your changes.**
4. **Run the tests.**
- `cargo test --all`
5. **Commit your changes.** The pre-commit hooks will automatically run `cargo fmt`, `cargo check`, and `cargo clippy`. If you need to bypass the hooks (not recommended), use `git commit --no-verify`.
6. **Add a clear, concise commit message.** We follow the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) specification.
7. **Push to your fork** and submit a pull request to Owlen's `main` branch.
8. **Include a clear description** of the problem and solution. Include the relevant issue number if applicable.
9. **Declare AI assistance.** If any part of the patch was generated with an AI tool (e.g., ChatGPT, Claude Code), call that out in the PR description. A human maintainer must review and approve AI-assisted changes before merge.
## Development Setup
To get started with the codebase, you'll need to have Rust installed. Then, you can clone the repository and build the project:
```sh
git clone https://github.com/Owlibou/owlen.git
cd owlen
cargo build
```
### Pre-commit Hooks
We use [pre-commit](https://pre-commit.com/) to automatically run formatting and linting checks before each commit. This helps maintain code quality and consistency.
**Install pre-commit:**
```sh
# Arch Linux
sudo pacman -S pre-commit
# Other Linux/macOS
pip install pre-commit
# Verify installation
pre-commit --version
```
**Setup the hooks:**
```sh
cd owlen
pre-commit install
```
Once installed, the hooks will automatically run on every commit. You can also run them manually:
```sh
# Run on all files
pre-commit run --all-files
# Run on staged files only
pre-commit run
```
The pre-commit hooks will check:
- Code formatting (`cargo fmt`)
- Compilation (`cargo check`)
- Linting (`cargo clippy --all-features`)
- General file hygiene (trailing whitespace, EOF newlines, etc.)
## Coding Style
- We use `cargo fmt` for automated code formatting. Please run it before committing your changes.
- We use `cargo clippy` for linting. Your code should be free of any clippy warnings.
## Commit Message Conventions
We use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) for our commit messages. This allows for automated changelog generation and makes the project history easier to read.
The basic format is:
```
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
```
**Types:** `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`, `build`, `ci`.
**Example:**
```
feat(provider): add support for Gemini Pro
```
Thank you for your contribution!

View File

@@ -1,86 +1,18 @@
[workspace]
resolver = "2"
members = [
"crates/owlen-core",
"crates/owlen-tui",
"crates/owlen-cli",
"crates/owlen-providers",
"crates/mcp/server",
"crates/mcp/llm-server",
"crates/mcp/client",
"crates/mcp/code-server",
"crates/mcp/prompt-server",
"crates/owlen-markdown",
"xtask",
"crates/app/cli",
"crates/llm/ollama",
"crates/platform/config",
"crates/platform/hooks",
"crates/platform/permissions",
"crates/tools/bash",
"crates/tools/fs",
"crates/tools/slash",
"crates/integration/mcp-client",
]
exclude = []
resolver = "2"
[workspace.package]
version = "0.1.11"
edition = "2024"
authors = ["Owlibou"]
license = "AGPL-3.0"
repository = "https://somegit.dev/Owlibou/owlen"
homepage = "https://somegit.dev/Owlibou/owlen"
keywords = ["llm", "tui", "cli", "ollama", "chat"]
categories = ["command-line-utilities"]
[workspace.dependencies]
# Async runtime and utilities
tokio = { version = "1.0", features = ["full"] }
tokio-stream = "0.1"
tokio-util = { version = "0.7", features = ["rt"] }
futures = "0.3"
futures-util = "0.3"
# TUI framework
ratatui = "0.28"
crossterm = "0.28"
tui-textarea = "0.6"
# HTTP client and JSON handling
reqwest = { version = "0.12", default-features = false, features = ["json", "stream", "rustls-tls"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = { version = "1.0" }
# Utilities
uuid = { version = "1.0", features = ["v4", "serde"] }
anyhow = "1.0"
thiserror = "2.0"
nix = "0.29"
which = "6.0"
tempfile = "3.8"
jsonschema = "0.17"
aes-gcm = "0.10"
ring = "0.17"
keyring = "3.0"
chrono = { version = "0.4", features = ["serde"] }
urlencoding = "2.1"
regex = "1.10"
rpassword = "7.3"
sqlx = { version = "0.7", default-features = false, features = ["runtime-tokio-rustls", "sqlite", "macros", "uuid", "chrono", "migrate"] }
log = "0.4"
dirs = "5.0"
serde_yaml = "0.9"
handlebars = "6.0"
# Configuration
toml = "0.8"
shellexpand = "3.1"
# Database
sled = "0.34"
# For better text handling
textwrap = "0.16"
# Async traits
async-trait = "0.1"
# CLI framework
clap = { version = "4.0", features = ["derive"] }
# Dev dependencies
tokio-test = "0.4"
# For more keys and their definitions, see https://doc.rust-lang.org/cargo/reference/manifest.html
rust-version = "1.91"

661
LICENSE
View File

@@ -1,661 +0,0 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

View File

@@ -1,49 +0,0 @@
# Maintainer: vikingowl <christian@nachtigall.dev>
pkgname=owlen
pkgver=0.1.11
pkgrel=1
pkgdesc="Terminal User Interface LLM client for Ollama with chat and code assistance features"
arch=('x86_64')
url="https://somegit.dev/Owlibou/owlen"
license=('AGPL-3.0-or-later')
depends=('gcc-libs')
makedepends=('cargo' 'git')
options=(!lto) # avoid LTO-linked ring symbol drop with lld
source=("$pkgname-$pkgver.tar.gz::$url/archive/v$pkgver.tar.gz")
sha256sums=('cabb1cfdfc247b5d008c6c5f94e13548bcefeba874aae9a9d45aa95ae1c085ec')
prepare() {
cd $pkgname
cargo fetch --target "$(rustc -vV | sed -n 's/host: //p')"
}
build() {
cd $pkgname
export RUSTFLAGS="${RUSTFLAGS:-} -C link-arg=-Wl,--no-as-needed"
export CARGO_PROFILE_RELEASE_LTO=false
export CARGO_TARGET_DIR=target
cargo build --frozen --release --all-features
}
check() {
cd $pkgname
export RUSTFLAGS="${RUSTFLAGS:-} -C link-arg=-Wl,--no-as-needed"
cargo test --frozen --all-features
}
package() {
cd $pkgname
# Install binaries
install -Dm755 target/release/owlen "$pkgdir/usr/bin/owlen"
install -Dm755 target/release/owlen-code "$pkgdir/usr/bin/owlen-code"
# Install documentation
install -Dm644 README.md "$pkgdir/usr/share/doc/$pkgname/README.md"
# Install built-in themes for reference
install -Dm644 themes/README.md "$pkgdir/usr/share/$pkgname/themes/README.md"
for theme in themes/*.toml; do
install -Dm644 "$theme" "$pkgdir/usr/share/$pkgname/themes/$(basename $theme)"
done
}

172
README.md
View File

@@ -1,172 +0,0 @@
# OWLEN
> Terminal-native assistant for running local language models with a comfortable TUI.
![Status](https://img.shields.io/badge/status-alpha-yellow)
![Version](https://img.shields.io/badge/version-0.1.11-blue)
![Rust](https://img.shields.io/badge/made_with-Rust-ffc832?logo=rust&logoColor=white)
![License](https://img.shields.io/badge/license-AGPL--3.0-blue)
## What Is OWLEN?
OWLEN is a Rust-powered, terminal-first interface for interacting with local and cloud
language models. It provides a responsive chat workflow that now routes through a
multi-provider manager—handling local Ollama, Ollama Cloud, and future MCP-backed providers—
with a focus on developer productivity, vim-style navigation, and seamless session
management—all without leaving your terminal.
## Alpha Status
This project is currently in **alpha** and under active development. Core features are functional, but expect occasional bugs and breaking changes. Feedback, bug reports, and contributions are very welcome!
## Screenshots
![OWLEN TUI Layout](images/layout.png)
The OWLEN interface features a clean, multi-panel layout with vim-inspired navigation. See more screenshots in the [`images/`](images/) directory.
## Features
- **Vim-style Navigation**: Normal, editing, visual, and command modes.
- **Streaming Responses**: Real-time token streaming from Ollama.
- **Advanced Text Editing**: Multi-line input, history, and clipboard support.
- **Session Management**: Save, load, and manage conversations.
- **Code Side Panel**: Switch to code mode (`:mode code`) and open files inline with `:open <path>` for LLM-assisted coding.
- **Theming System**: 10 built-in themes and support for custom themes.
- **Modular Architecture**: Extensible provider system orchestrated by the new `ProviderManager`, ready for additional MCP-backed providers.
- **Dual-Source Model Picker**: Merge local and cloud catalogues with real-time availability badges powered by the background health worker.
- **Non-Blocking UI Loop**: Asynchronous generation tasks and provider health checks run off-thread, keeping the TUI responsive even while streaming long replies.
- **Guided Setup**: `owlen config doctor` upgrades legacy configs and verifies your environment in seconds.
## Security & Privacy
Owlen is designed to keep data local by default while still allowing controlled access to remote tooling.
- **Local-first execution**: All LLM calls flow through the bundled MCP LLM server which talks to a local Ollama instance. If the server is unreachable, Owlen stays usable in “offline mode” and surfaces clear recovery instructions.
- **Sandboxed tooling**: Code execution runs in Docker according to the MCP Code Server settings, and future releases will extend this to other OS-level sandboxes (`sandbox-exec` on macOS, Windows job objects).
- **Session storage**: Conversations are stored under the platform data directory and can be encrypted at rest. Set `privacy.encrypt_local_data = true` in `config.toml` to enable AES-GCM storage protected by a user-supplied passphrase.
- **Network access**: No telemetry is sent. The only outbound requests occur when you explicitly enable remote tooling (e.g., web search) or configure a cloud LLM provider. Each tool is opt-in via `privacy` and `tools` configuration sections.
- **Config migrations**: Every saved `config.toml` carries a schema version and is upgraded automatically; deprecated keys trigger warnings so security-related settings are not silently ignored.
## Getting Started
### Prerequisites
- Rust 1.75+ and Cargo.
- A running Ollama instance.
- A terminal that supports 256 colors.
### Installation
Pick the option that matches your platform and appetite for source builds:
| Platform | Package / Command | Notes |
| --- | --- | --- |
| Arch Linux | `yay -S owlen-git` | Builds from the latest `dev` branch via AUR. |
| Other Linux | `cargo install --path crates/owlen-cli --locked --force` | Requires Rust 1.75+ and a running Ollama daemon. |
| macOS | `cargo install --path crates/owlen-cli --locked --force` | macOS 12+ tested. Install Ollama separately (`brew install ollama`). The binary links against the system OpenSSL ensure Command Line Tools are installed. |
| Windows (experimental) | `cargo install --path crates/owlen-cli --locked --force` | Enable the GNU toolchain (`rustup target add x86_64-pc-windows-gnu`) and install Ollama for Windows preview builds. Some optional tools (e.g., Docker-based code execution) are currently disabled. |
If you prefer containerised builds, use the provided `Dockerfile` as a base image and copy out `target/release/owlen`.
Run the helper scripts to sanity-check platform coverage:
```bash
# Windows compatibility smoke test (GNU toolchain)
scripts/check-windows.sh
# Reproduce CI packaging locally (choose a target from .woodpecker.yml)
dev/local_build.sh x86_64-unknown-linux-gnu
```
> **Tip (macOS):** On the first launch macOS Gatekeeper may quarantine the binary. Clear the attribute (`xattr -d com.apple.quarantine $(which owlen)`) or build from source locally to avoid notarisation prompts.
### Running OWLEN
Make sure Ollama is running, then launch the application:
```bash
owlen
```
If you built from source without installing, you can run it with:
```bash
./target/release/owlen
```
### Updating
Owlen does not auto-update. Run `owlen upgrade` at any time to print the recommended manual steps (pull the repository and reinstall with `cargo install --path crates/owlen-cli --force`). Arch Linux users can update via the `owlen-git` AUR package.
## Using the TUI
OWLEN uses a modal, vim-inspired interface. Press `F1` (available from any mode) or `?` in Normal mode to view the help screen with all keybindings.
- **Normal Mode**: Navigate with `h/j/k/l`, `w/b`, `gg/G`.
- **Editing Mode**: Enter with `i` or `a`. Send messages with `Enter`.
- **Command Mode**: Enter with `:`. Access commands like `:quit`, `:w`, `:session save`, `:theme`.
- **Quick Exit**: Press `Ctrl+C` twice in Normal mode to quit quickly (first press still cancels active generations).
- **Tutorial Command**: Type `:tutorial` any time for a quick summary of the most important keybindings.
- **MCP Slash Commands**: Owlen auto-registers zero-argument MCP tools as slash commands—type `/mcp__github__list_prs` (for example) to pull remote context directly into the chat log.
Model discovery commands worth remembering:
- `:models --local` or `:models --cloud` jump directly to the corresponding section in the picker.
- `:cloud setup [--force-cloud-base-url]` stores your cloud API key without clobbering an existing local base URL (unless you opt in with the flag).
When a catalogue is unreachable, Owlen now tags the picker with `Local unavailable` / `Cloud unavailable` so you can recover without guessing.
## Documentation
For more detailed information, please refer to the following documents:
- **[CONTRIBUTING.md](CONTRIBUTING.md)**: Guidelines for contributing to the project.
- **[CHANGELOG.md](CHANGELOG.md)**: A log of changes for each version.
- **[docs/architecture.md](docs/architecture.md)**: An overview of the project's architecture.
- **[docs/troubleshooting.md](docs/troubleshooting.md)**: Help with common issues.
- **[docs/repo-map.md](docs/repo-map.md)**: Snapshot of the workspace layout and key crates.
- **[docs/provider-implementation.md](docs/provider-implementation.md)**: Trait-level details for implementing providers.
- **[docs/adding-providers.md](docs/adding-providers.md)**: Step-by-step checklist for wiring a provider into the multi-provider architecture and test suite.
- **Experimental providers staging area**: [crates/providers/experimental/README.md](crates/providers/experimental/README.md) records the placeholder crates (OpenAI, Anthropic, Gemini) and their current status.
- **[docs/platform-support.md](docs/platform-support.md)**: Current OS support matrix and cross-check instructions.
## Configuration
OWLEN stores its configuration in the standard platform-specific config directory:
| Platform | Location |
|----------|----------|
| Linux | `~/.config/owlen/config.toml` |
| macOS | `~/Library/Application Support/owlen/config.toml` |
| Windows | `%APPDATA%\owlen\config.toml` |
Use `owlen config path` to print the exact location on your machine and `owlen config doctor` to migrate a legacy config automatically.
You can also add custom themes alongside the config directory (e.g., `~/.config/owlen/themes/`).
See the [themes/README.md](themes/README.md) for more details on theming.
## Testing
Owlen uses standard Rust tooling for verification. Run the full test suite with:
```bash
cargo test
```
Unit tests cover the command palette state machine, agent response parsing, and key MCP abstractions. Formatting and lint checks can be run with `cargo fmt --all` and `cargo clippy` respectively.
## Roadmap
Upcoming milestones focus on feature parity with modern code assistants while keeping Owlen local-first:
1. **Phase 11 MCP client enhancements**: `owlen mcp add/list/remove`, resource references (`@github:issue://123`), and MCP prompt slash commands.
2. **Phase 12 Approval & sandboxing**: Three-tier approval modes plus platform-specific sandboxes (Docker, `sandbox-exec`, Windows job objects).
3. **Phase 13 Project documentation system**: Automatic `OWLEN.md` generation, contextual updates, and nested project support.
4. **Phase 15 Provider expansion**: OpenAI, Anthropic, and other cloud providers layered onto the existing Ollama-first architecture.
See `AGENTS.md` for the long-form roadmap and design notes.
## Contributing
Contributions are highly welcome! Please see our **[Contributing Guide](CONTRIBUTING.md)** for details on how to get started, including our code style, commit conventions, and pull request process.
## License
This project is licensed under the GNU Affero General Public License v3.0. See the [LICENSE](LICENSE) file for details.
For commercial or proprietary integrations that cannot adopt AGPL, please reach out to the maintainers to discuss alternative licensing arrangements.

View File

@@ -1,40 +0,0 @@
# Security Policy
## Supported Versions
We are currently in a pre-release phase, so only the latest version is actively supported. As we move towards a 1.0 release, this policy will be updated with specific version support.
| Version | Supported |
| ------- | ------------------ |
| < 1.0 | :white_check_mark: |
## Reporting a Vulnerability
The Owlen team and community take all security vulnerabilities seriously. Thank you for improving the security of our project. We appreciate your efforts and responsible disclosure and will make every effort to acknowledge your contributions.
To report a security vulnerability, please email the project lead at [security@owlibou.com](mailto:security@owlibou.com) with a detailed description of the issue, the steps to reproduce it, and any affected versions.
You will receive a response from us within 48 hours. If the issue is confirmed, we will release a patch as soon as possible, depending on the complexity of the issue.
Please do not report security vulnerabilities through public GitHub issues.
## Design Overview
Owlen ships with a local-first architecture:
- **Process isolation** The TUI speaks to language models through a separate MCP LLM server. Tool execution (code, web, filesystem) occurs in dedicated MCP processes so a crash or hang cannot take down the UI.
- **Sandboxing** The MCP Code Server executes snippets in Docker containers. Upcoming releases will extend this to platform sandboxes (`sandbox-exec` on macOS, Windows job objects) as described in our roadmap.
- **Network posture** No telemetry is emitted. The application only reaches the network when a user explicitly enables remote tools (web search, remote MCP servers) or configures cloud providers. All tools require allow-listing in `config.toml`.
## Data Handling
- **Sessions** Conversations are stored in the users data directory (`~/.local/share/owlen` on Linux, equivalent paths on macOS/Windows). Enable `privacy.encrypt_local_data = true` to wrap the session store in AES-GCM encryption protected by a passphrase (`OWLEN_MASTER_PASSWORD` or an interactive prompt).
- **Credentials** API tokens are resolved from the config file or environment variables at runtime and are never written to logs.
- **Remote calls** When remote search or cloud LLM tooling is on, only the minimum payload (prompt, tool arguments) is sent. All outbound requests go through the MCP servers so they can be audited or disabled centrally.
## Supply-Chain Safeguards
- The repository includes a git `pre-commit` configuration that runs `cargo fmt`, `cargo check`, and `cargo clippy -- -D warnings` on every commit.
- Pull requests generated with the assistance of AI tooling must receive manual maintainer review before merging. Contributors are asked to declare AI involvement in their PR description so maintainers can double-check the changes.
Additional recommendations for operators (e.g., running Owlen on shared systems) are maintained in `docs/security.md` (planned) and the issue tracker.

View File

@@ -1,29 +0,0 @@
[general]
default_provider = "ollama_local"
default_model = "llama3.2:latest"
[privacy]
encrypt_local_data = true
[providers.ollama_local]
enabled = true
provider_type = "ollama"
base_url = "http://localhost:11434"
[providers.ollama_cloud]
enabled = false
provider_type = "ollama_cloud"
base_url = "https://ollama.com"
api_key_env = "OLLAMA_CLOUD_API_KEY"
[providers.openai]
enabled = false
provider_type = "openai"
base_url = "https://api.openai.com/v1"
api_key_env = "OPENAI_API_KEY"
[providers.anthropic]
enabled = false
provider_type = "anthropic"
base_url = "https://api.anthropic.com/v1"
api_key_env = "ANTHROPIC_API_KEY"

22
crates/app/cli/.gitignore vendored Normal file
View File

@@ -0,0 +1,22 @@
/target
### Rust template
# Generated by Cargo
# will have compiled files and executables
debug/
target/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
### rust-analyzer template
# Can be generated by other build systems other than cargo (ex: bazelbuild/rust_rules)
rust-project.json

28
crates/app/cli/Cargo.toml Normal file
View File

@@ -0,0 +1,28 @@
[package]
name = "owlen"
version = "0.1.0"
edition.workspace = true
license.workspace = true
rust-version.workspace = true
[dependencies]
clap = { version = "4.5", features = ["derive"] }
tokio = { version = "1.39", features = ["macros", "rt-multi-thread"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
color-eyre = "0.6"
llm-ollama = { path = "../../llm/ollama" }
tools-fs = { path = "../../tools/fs" }
tools-bash = { path = "../../tools/bash" }
tools-slash = { path = "../../tools/slash" }
config-agent = { package = "config-agent", path = "../../platform/config" }
permissions = { path = "../../platform/permissions" }
hooks = { path = "../../platform/hooks" }
futures-util = "0.3.31"
[dev-dependencies]
assert_cmd = "2.0"
predicates = "3.1"
httpmock = "0.7"
tokio = { version = "1.39", features = ["macros", "rt-multi-thread"] }
tempfile = "3.23.0"

580
crates/app/cli/src/main.rs Normal file
View File

@@ -0,0 +1,580 @@
use clap::{Parser, ValueEnum};
use color_eyre::eyre::{Result, eyre};
use config_agent::load_settings;
use futures_util::TryStreamExt;
use hooks::{HookEvent, HookManager, HookResult};
use llm_ollama::{OllamaClient, OllamaOptions, types::ChatMessage};
use permissions::{PermissionDecision, Tool};
use serde::Serialize;
use std::io::{self, Write};
use std::time::{SystemTime, UNIX_EPOCH};
#[derive(Debug, Clone, Copy, ValueEnum)]
enum OutputFormat {
Text,
Json,
StreamJson,
}
#[derive(Serialize)]
struct SessionOutput {
session_id: String,
messages: Vec<serde_json::Value>,
stats: Stats,
#[serde(skip_serializing_if = "Option::is_none")]
result: Option<serde_json::Value>,
#[serde(skip_serializing_if = "Option::is_none")]
tool: Option<String>,
}
#[derive(Serialize)]
struct Stats {
total_tokens: u64,
#[serde(skip_serializing_if = "Option::is_none")]
prompt_tokens: Option<u64>,
#[serde(skip_serializing_if = "Option::is_none")]
completion_tokens: Option<u64>,
duration_ms: u64,
}
#[derive(Serialize)]
struct StreamEvent {
#[serde(rename = "type")]
event_type: String,
#[serde(skip_serializing_if = "Option::is_none")]
session_id: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
content: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
stats: Option<Stats>,
}
fn generate_session_id() -> String {
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_millis();
format!("session-{}", timestamp)
}
fn output_tool_result(
format: OutputFormat,
tool: &str,
result: serde_json::Value,
session_id: &str,
) -> Result<()> {
match format {
OutputFormat::Text => {
// For text, just print the result as-is
if let Some(s) = result.as_str() {
println!("{}", s);
} else {
println!("{}", serde_json::to_string_pretty(&result)?);
}
}
OutputFormat::Json => {
let output = SessionOutput {
session_id: session_id.to_string(),
messages: vec![],
stats: Stats {
total_tokens: 0,
prompt_tokens: None,
completion_tokens: None,
duration_ms: 0,
},
result: Some(result),
tool: Some(tool.to_string()),
};
println!("{}", serde_json::to_string(&output)?);
}
OutputFormat::StreamJson => {
// For stream-json, emit session_start, result, and session_end
let session_start = StreamEvent {
event_type: "session_start".to_string(),
session_id: Some(session_id.to_string()),
content: None,
stats: None,
};
println!("{}", serde_json::to_string(&session_start)?);
let result_event = StreamEvent {
event_type: "tool_result".to_string(),
session_id: None,
content: Some(serde_json::to_string(&result)?),
stats: None,
};
println!("{}", serde_json::to_string(&result_event)?);
let session_end = StreamEvent {
event_type: "session_end".to_string(),
session_id: None,
content: None,
stats: Some(Stats {
total_tokens: 0,
prompt_tokens: None,
completion_tokens: None,
duration_ms: 0,
}),
};
println!("{}", serde_json::to_string(&session_end)?);
}
}
Ok(())
}
#[derive(clap::Subcommand, Debug)]
enum Cmd {
Read { path: String },
Glob { pattern: String },
Grep { root: String, pattern: String },
Write { path: String, content: String },
Edit { path: String, old_string: String, new_string: String },
Bash { command: String, #[arg(long)] timeout: Option<u64> },
Slash { command_name: String, args: Vec<String> },
}
#[derive(Parser, Debug)]
#[command(name = "code", version)]
struct Args {
#[arg(long)]
ollama_url: Option<String>,
#[arg(long)]
model: Option<String>,
#[arg(long)]
api_key: Option<String>,
#[arg(long)]
print: bool,
/// Override the permission mode (plan, acceptEdits, code)
#[arg(long)]
mode: Option<String>,
/// Output format (text, json, stream-json)
#[arg(long, value_enum, default_value = "text")]
output_format: OutputFormat,
#[arg()]
prompt: Vec<String>,
#[command(subcommand)]
cmd: Option<Cmd>,
}
#[tokio::main]
async fn main() -> Result<()> {
color_eyre::install()?;
let args = Args::parse();
let mut settings = load_settings(None).unwrap_or_default();
// Override mode if specified via CLI
if let Some(mode) = args.mode {
settings.mode = mode;
}
// Create permission manager from settings
let perms = settings.create_permission_manager();
// Create hook manager
let hook_mgr = HookManager::new(".");
// Generate session ID
let session_id = generate_session_id();
let output_format = args.output_format;
if let Some(cmd) = args.cmd {
match cmd {
Cmd::Read { path } => {
// Check permission
match perms.check(Tool::Read, None) {
PermissionDecision::Allow => {
// Check PreToolUse hook
let event = HookEvent::PreToolUse {
tool: "Read".to_string(),
args: serde_json::json!({"path": &path}),
};
match hook_mgr.execute(&event, Some(5000)).await? {
HookResult::Deny => {
return Err(eyre!("Hook denied Read operation"));
}
HookResult::Allow => {}
}
let s = tools_fs::read_file(&path)?;
output_tool_result(output_format, "Read", serde_json::json!(s), &session_id)?;
return Ok(());
}
PermissionDecision::Ask => {
return Err(eyre!(
"Permission denied: Read operation requires approval. Use --mode code to allow."
));
}
PermissionDecision::Deny => {
return Err(eyre!("Permission denied: Read operation is blocked."));
}
}
}
Cmd::Glob { pattern } => {
// Check permission
match perms.check(Tool::Glob, None) {
PermissionDecision::Allow => {
// Check PreToolUse hook
let event = HookEvent::PreToolUse {
tool: "Glob".to_string(),
args: serde_json::json!({"pattern": &pattern}),
};
match hook_mgr.execute(&event, Some(5000)).await? {
HookResult::Deny => {
return Err(eyre!("Hook denied Glob operation"));
}
HookResult::Allow => {}
}
for p in tools_fs::glob_list(&pattern)? {
println!("{}", p);
}
return Ok(());
}
PermissionDecision::Ask => {
return Err(eyre!(
"Permission denied: Glob operation requires approval. Use --mode code to allow."
));
}
PermissionDecision::Deny => {
return Err(eyre!("Permission denied: Glob operation is blocked."));
}
}
}
Cmd::Grep { root, pattern } => {
// Check permission
match perms.check(Tool::Grep, None) {
PermissionDecision::Allow => {
// Check PreToolUse hook
let event = HookEvent::PreToolUse {
tool: "Grep".to_string(),
args: serde_json::json!({"root": &root, "pattern": &pattern}),
};
match hook_mgr.execute(&event, Some(5000)).await? {
HookResult::Deny => {
return Err(eyre!("Hook denied Grep operation"));
}
HookResult::Allow => {}
}
for (path, line_number, text) in tools_fs::grep(&root, &pattern)? {
println!("{path}:{line_number}:{text}")
}
return Ok(());
}
PermissionDecision::Ask => {
return Err(eyre!(
"Permission denied: Grep operation requires approval. Use --mode code to allow."
));
}
PermissionDecision::Deny => {
return Err(eyre!("Permission denied: Grep operation is blocked."));
}
}
}
Cmd::Write { path, content } => {
// Check permission
match perms.check(Tool::Write, None) {
PermissionDecision::Allow => {
// Check PreToolUse hook
let event = HookEvent::PreToolUse {
tool: "Write".to_string(),
args: serde_json::json!({"path": &path, "content": &content}),
};
match hook_mgr.execute(&event, Some(5000)).await? {
HookResult::Deny => {
return Err(eyre!("Hook denied Write operation"));
}
HookResult::Allow => {}
}
tools_fs::write_file(&path, &content)?;
println!("File written: {}", path);
return Ok(());
}
PermissionDecision::Ask => {
return Err(eyre!(
"Permission denied: Write operation requires approval. Use --mode acceptEdits or --mode code to allow."
));
}
PermissionDecision::Deny => {
return Err(eyre!("Permission denied: Write operation is blocked."));
}
}
}
Cmd::Edit { path, old_string, new_string } => {
// Check permission
match perms.check(Tool::Edit, None) {
PermissionDecision::Allow => {
// Check PreToolUse hook
let event = HookEvent::PreToolUse {
tool: "Edit".to_string(),
args: serde_json::json!({"path": &path, "old_string": &old_string, "new_string": &new_string}),
};
match hook_mgr.execute(&event, Some(5000)).await? {
HookResult::Deny => {
return Err(eyre!("Hook denied Edit operation"));
}
HookResult::Allow => {}
}
tools_fs::edit_file(&path, &old_string, &new_string)?;
println!("File edited: {}", path);
return Ok(());
}
PermissionDecision::Ask => {
return Err(eyre!(
"Permission denied: Edit operation requires approval. Use --mode acceptEdits or --mode code to allow."
));
}
PermissionDecision::Deny => {
return Err(eyre!("Permission denied: Edit operation is blocked."));
}
}
}
Cmd::Bash { command, timeout } => {
// Check permission with command context for pattern matching
match perms.check(Tool::Bash, Some(&command)) {
PermissionDecision::Allow => {
// Check PreToolUse hook
let event = HookEvent::PreToolUse {
tool: "Bash".to_string(),
args: serde_json::json!({"command": &command, "timeout": timeout}),
};
match hook_mgr.execute(&event, Some(5000)).await? {
HookResult::Deny => {
return Err(eyre!("Hook denied Bash operation"));
}
HookResult::Allow => {}
}
let mut session = tools_bash::BashSession::new().await?;
let output = session.execute(&command, timeout).await?;
// Print stdout
if !output.stdout.is_empty() {
print!("{}", output.stdout);
}
// Print stderr to stderr
if !output.stderr.is_empty() {
eprint!("{}", output.stderr);
}
session.close().await?;
// Exit with same code as command
if !output.success {
std::process::exit(output.exit_code);
}
return Ok(());
}
PermissionDecision::Ask => {
return Err(eyre!(
"Permission denied: Bash operation requires approval. Use --mode code to allow."
));
}
PermissionDecision::Deny => {
return Err(eyre!("Permission denied: Bash operation is blocked."));
}
}
}
Cmd::Slash { command_name, args } => {
// Check permission
match perms.check(Tool::SlashCommand, None) {
PermissionDecision::Allow => {
// Check PreToolUse hook
let event = HookEvent::PreToolUse {
tool: "SlashCommand".to_string(),
args: serde_json::json!({"command_name": &command_name, "args": &args}),
};
match hook_mgr.execute(&event, Some(5000)).await? {
HookResult::Deny => {
return Err(eyre!("Hook denied SlashCommand operation"));
}
HookResult::Allow => {}
}
// Look for command file in .owlen/commands/
let command_path = format!(".owlen/commands/{}.md", command_name);
// Read the command file
let content = match tools_fs::read_file(&command_path) {
Ok(c) => c,
Err(_) => {
return Err(eyre!(
"Slash command '{}' not found at {}",
command_name,
command_path
));
}
};
// Parse with arguments
let args_refs: Vec<&str> = args.iter().map(|s| s.as_str()).collect();
let slash_cmd = tools_slash::parse_slash_command(&content, &args_refs)?;
// Resolve file references
let resolved_body = slash_cmd.resolve_file_refs()?;
// Print the resolved command body
println!("{}", resolved_body);
return Ok(());
}
PermissionDecision::Ask => {
return Err(eyre!(
"Permission denied: Slash command requires approval. Use --mode code to allow."
));
}
PermissionDecision::Deny => {
return Err(eyre!("Permission denied: Slash command is blocked."));
}
}
}
}
}
let prompt = if args.prompt.is_empty() {
"Say hello".to_string()
} else {
args.prompt.join(" ")
};
let model = args.model.unwrap_or(settings.model);
let api_key = args.api_key.or(settings.api_key);
// Use Ollama Cloud when model has "-cloud" suffix AND API key is set
let use_cloud = model.ends_with("-cloud") && api_key.is_some();
let client = if use_cloud {
OllamaClient::with_cloud().with_api_key(api_key.unwrap())
} else {
let base_url = args.ollama_url.unwrap_or(settings.ollama_url);
let mut client = OllamaClient::new(base_url);
if let Some(key) = api_key {
client = client.with_api_key(key);
}
client
};
let opts = OllamaOptions {
model,
stream: true,
};
let msgs = vec![ChatMessage {
role: "user".into(),
content: prompt.clone(),
}];
let start_time = SystemTime::now();
// Handle different output formats
match output_format {
OutputFormat::Text => {
// Text format: stream to stdout as before
let mut stream = client.chat_stream(&msgs, &opts).await?;
while let Some(chunk) = stream.try_next().await? {
if let Some(m) = chunk.message {
if let Some(c) = m.content {
print!("{c}");
io::stdout().flush()?;
}
}
if matches!(chunk.done, Some(true)) {
break;
}
}
println!(); // Newline after response
}
OutputFormat::Json => {
// JSON format: collect all chunks, then output final JSON
let mut stream = client.chat_stream(&msgs, &opts).await?;
let mut response = String::new();
while let Some(chunk) = stream.try_next().await? {
if let Some(m) = chunk.message {
if let Some(c) = m.content {
response.push_str(&c);
}
}
if matches!(chunk.done, Some(true)) {
break;
}
}
let duration_ms = start_time.elapsed().unwrap().as_millis() as u64;
// Rough token estimate (tokens ~= chars / 4)
let estimated_tokens = ((prompt.len() + response.len()) / 4) as u64;
let output = SessionOutput {
session_id,
messages: vec![
serde_json::json!({"role": "user", "content": prompt}),
serde_json::json!({"role": "assistant", "content": response}),
],
stats: Stats {
total_tokens: estimated_tokens,
prompt_tokens: Some((prompt.len() / 4) as u64),
completion_tokens: Some((response.len() / 4) as u64),
duration_ms,
},
result: None,
tool: None,
};
println!("{}", serde_json::to_string(&output)?);
}
OutputFormat::StreamJson => {
// Stream-JSON format: emit session_start, chunks, and session_end
let session_start = StreamEvent {
event_type: "session_start".to_string(),
session_id: Some(session_id.clone()),
content: None,
stats: None,
};
println!("{}", serde_json::to_string(&session_start)?);
let mut stream = client.chat_stream(&msgs, &opts).await?;
let mut response = String::new();
while let Some(chunk) = stream.try_next().await? {
if let Some(m) = chunk.message {
if let Some(c) = m.content {
response.push_str(&c);
let chunk_event = StreamEvent {
event_type: "chunk".to_string(),
session_id: None,
content: Some(c),
stats: None,
};
println!("{}", serde_json::to_string(&chunk_event)?);
}
}
if matches!(chunk.done, Some(true)) {
break;
}
}
let duration_ms = start_time.elapsed().unwrap().as_millis() as u64;
// Rough token estimate
let estimated_tokens = ((prompt.len() + response.len()) / 4) as u64;
let session_end = StreamEvent {
event_type: "session_end".to_string(),
session_id: None,
content: None,
stats: Some(Stats {
total_tokens: estimated_tokens,
prompt_tokens: Some((prompt.len() / 4) as u64),
completion_tokens: Some((response.len() / 4) as u64),
duration_ms,
}),
};
println!("{}", serde_json::to_string(&session_end)?);
}
}
Ok(())
}

View File

@@ -0,0 +1,39 @@
use assert_cmd::Command;
use httpmock::prelude::*;
use predicates::prelude::PredicateBooleanExt;
#[tokio::test]
async fn headless_streams_ndjson() {
let server = MockServer::start_async().await;
// Mock /api/chat with NDJSON lines
let body = serde_json::json!({
"model": "qwen2.5",
"messages": [{"role": "user", "content": "hello"}],
"stream": true
});
let response = concat!(
r#"{"message":{"role":"assistant","content":"Hel"}}"#,"\n",
r#"{"message":{"role":"assistant","content":"lo"}}"#,"\n",
r#"{"done":true}"#,"\n",
);
let _m = server.mock(|when, then| {
when.method(POST)
.path("/api/chat")
.json_body(body.clone());
then.status(200)
.header("content-type", "application/x-ndjson")
.body(response);
});
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("--ollama-url").arg(server.base_url())
.arg("--model").arg("qwen2.5")
.arg("--print")
.arg("hello");
cmd.assert()
.success()
.stdout(predicates::str::contains("Hello").count(1).or(predicates::str::contains("Hel").and(predicates::str::contains("lo"))));
}

View File

@@ -0,0 +1,145 @@
use assert_cmd::Command;
use serde_json::Value;
use std::fs;
use tempfile::tempdir;
#[test]
fn print_json_has_session_id_and_stats() {
let mut cmd = Command::cargo_bin("owlen").unwrap();
cmd.arg("--output-format")
.arg("json")
.arg("Say hello");
let output = cmd.assert().success();
let stdout = String::from_utf8_lossy(&output.get_output().stdout);
// Parse JSON output
let json: Value = serde_json::from_str(&stdout).expect("Output should be valid JSON");
// Verify session_id exists
assert!(json.get("session_id").is_some(), "JSON output should have session_id");
let session_id = json["session_id"].as_str().unwrap();
assert!(!session_id.is_empty(), "session_id should not be empty");
// Verify stats exist
assert!(json.get("stats").is_some(), "JSON output should have stats");
let stats = &json["stats"];
// Check for token counts
assert!(stats.get("total_tokens").is_some(), "stats should have total_tokens");
// Check for messages
assert!(json.get("messages").is_some(), "JSON output should have messages");
}
#[test]
fn stream_json_sequence_is_well_formed() {
let mut cmd = Command::cargo_bin("owlen").unwrap();
cmd.arg("--output-format")
.arg("stream-json")
.arg("Say hello");
let output = cmd.assert().success();
let stdout = String::from_utf8_lossy(&output.get_output().stdout);
// Stream-JSON is NDJSON - each line should be valid JSON
let lines: Vec<&str> = stdout.lines().filter(|l| !l.is_empty()).collect();
assert!(!lines.is_empty(), "Stream-JSON should produce at least one event");
// Each line should be valid JSON
for (i, line) in lines.iter().enumerate() {
let json: Value = serde_json::from_str(line)
.expect(&format!("Line {} should be valid JSON: {}", i, line));
// Each event should have a type
assert!(json.get("type").is_some(), "Event should have a type field");
}
// First event should be session_start
let first: Value = serde_json::from_str(lines[0]).unwrap();
assert_eq!(first["type"].as_str().unwrap(), "session_start");
assert!(first.get("session_id").is_some());
// Last event should be session_end or complete
let last: Value = serde_json::from_str(lines[lines.len() - 1]).unwrap();
let last_type = last["type"].as_str().unwrap();
assert!(
last_type == "session_end" || last_type == "complete",
"Last event should be session_end or complete, got: {}",
last_type
);
}
#[test]
fn text_format_is_default() {
let mut cmd = Command::cargo_bin("owlen").unwrap();
cmd.arg("Say hello");
let output = cmd.assert().success();
let stdout = String::from_utf8_lossy(&output.get_output().stdout);
// Text format should not be JSON
assert!(serde_json::from_str::<Value>(&stdout).is_err(),
"Default output should be text, not JSON");
}
#[test]
fn json_format_with_tool_execution() {
let dir = tempdir().unwrap();
let file = dir.path().join("test.txt");
fs::write(&file, "hello world").unwrap();
let mut cmd = Command::cargo_bin("owlen").unwrap();
cmd.arg("--mode")
.arg("code")
.arg("--output-format")
.arg("json")
.arg("read")
.arg(file.to_str().unwrap());
let output = cmd.assert().success();
let stdout = String::from_utf8_lossy(&output.get_output().stdout);
let json: Value = serde_json::from_str(&stdout).expect("Output should be valid JSON");
// Should have result
assert!(json.get("result").is_some());
// Should have tool info
assert!(json.get("tool").is_some());
assert_eq!(json["tool"].as_str().unwrap(), "Read");
}
#[test]
fn stream_json_includes_chunk_events() {
let mut cmd = Command::cargo_bin("owlen").unwrap();
cmd.arg("--output-format")
.arg("stream-json")
.arg("Say hello");
let output = cmd.assert().success();
let stdout = String::from_utf8_lossy(&output.get_output().stdout);
let lines: Vec<&str> = stdout.lines().filter(|l| !l.is_empty()).collect();
// Should have chunk events between session_start and session_end
let chunk_events: Vec<&str> = lines.iter()
.filter(|line| {
if let Ok(json) = serde_json::from_str::<Value>(line) {
json["type"].as_str() == Some("chunk")
} else {
false
}
})
.copied()
.collect();
assert!(!chunk_events.is_empty(), "Should have at least one chunk event");
// Each chunk should have content
for chunk_line in chunk_events {
let chunk: Value = serde_json::from_str(chunk_line).unwrap();
assert!(chunk.get("content").is_some(), "Chunk should have content");
}
}

View File

@@ -0,0 +1,255 @@
use assert_cmd::Command;
use std::fs;
use tempfile::tempdir;
#[test]
fn plan_mode_allows_read_operations() {
// Create a temp file to read
let dir = tempdir().unwrap();
let file = dir.path().join("test.txt");
fs::write(&file, "hello world").unwrap();
// Read operation should work in plan mode (default)
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("read").arg(file.to_str().unwrap());
cmd.assert().success().stdout("hello world\n");
}
#[test]
fn plan_mode_allows_glob_operations() {
let dir = tempdir().unwrap();
fs::write(dir.path().join("a.txt"), "test").unwrap();
fs::write(dir.path().join("b.txt"), "test").unwrap();
let pattern = format!("{}/*.txt", dir.path().display());
// Glob operation should work in plan mode (default)
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("glob").arg(&pattern);
cmd.assert().success();
}
#[test]
fn plan_mode_allows_grep_operations() {
let dir = tempdir().unwrap();
fs::write(dir.path().join("test.txt"), "hello world\nfoo bar").unwrap();
// Grep operation should work in plan mode (default)
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("grep").arg(dir.path().to_str().unwrap()).arg("hello");
cmd.assert().success();
}
#[test]
fn mode_override_via_cli_flag() {
let dir = tempdir().unwrap();
let file = dir.path().join("test.txt");
fs::write(&file, "content").unwrap();
// Test with --mode code (should also allow read)
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("--mode")
.arg("code")
.arg("read")
.arg(file.to_str().unwrap());
cmd.assert().success().stdout("content\n");
}
#[test]
fn plan_mode_blocks_write_operations() {
let dir = tempdir().unwrap();
let file = dir.path().join("new.txt");
// Write operation should be blocked in plan mode (default)
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("write").arg(file.to_str().unwrap()).arg("content");
cmd.assert().failure();
}
#[test]
fn plan_mode_blocks_edit_operations() {
let dir = tempdir().unwrap();
let file = dir.path().join("test.txt");
fs::write(&file, "old content").unwrap();
// Edit operation should be blocked in plan mode (default)
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("edit")
.arg(file.to_str().unwrap())
.arg("old")
.arg("new");
cmd.assert().failure();
}
#[test]
fn accept_edits_mode_allows_write() {
let dir = tempdir().unwrap();
let file = dir.path().join("new.txt");
// Write operation should work in acceptEdits mode
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("--mode")
.arg("acceptEdits")
.arg("write")
.arg(file.to_str().unwrap())
.arg("new content");
cmd.assert().success();
// Verify file was written
assert_eq!(fs::read_to_string(&file).unwrap(), "new content");
}
#[test]
fn accept_edits_mode_allows_edit() {
let dir = tempdir().unwrap();
let file = dir.path().join("test.txt");
fs::write(&file, "line 1\nline 2\nline 3").unwrap();
// Edit operation should work in acceptEdits mode
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("--mode")
.arg("acceptEdits")
.arg("edit")
.arg(file.to_str().unwrap())
.arg("line 2")
.arg("modified line");
cmd.assert().success();
// Verify file was edited
assert_eq!(
fs::read_to_string(&file).unwrap(),
"line 1\nmodified line\nline 3"
);
}
#[test]
fn code_mode_allows_all_operations() {
let dir = tempdir().unwrap();
let file = dir.path().join("test.txt");
// Write in code mode
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("--mode")
.arg("code")
.arg("write")
.arg(file.to_str().unwrap())
.arg("initial content");
cmd.assert().success();
// Edit in code mode
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("--mode")
.arg("code")
.arg("edit")
.arg(file.to_str().unwrap())
.arg("initial")
.arg("modified");
cmd.assert().success();
assert_eq!(fs::read_to_string(&file).unwrap(), "modified content");
}
#[test]
fn plan_mode_blocks_bash_operations() {
// Bash operation should be blocked in plan mode (default)
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("bash").arg("echo hello");
cmd.assert().failure();
}
#[test]
fn code_mode_allows_bash() {
// Bash operation should work in code mode
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("--mode").arg("code").arg("bash").arg("echo hello");
cmd.assert().success().stdout("hello\n");
}
#[test]
fn bash_command_timeout_works() {
// Test that timeout works
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.arg("--mode")
.arg("code")
.arg("bash")
.arg("sleep 10")
.arg("--timeout")
.arg("1000");
cmd.assert().failure();
}
#[test]
fn slash_command_works() {
// Create .owlen/commands directory in temp dir
let dir = tempdir().unwrap();
let commands_dir = dir.path().join(".owlen/commands");
fs::create_dir_all(&commands_dir).unwrap();
// Create a test slash command
let command_content = r#"---
description: "Test command"
---
Hello from slash command!
Args: $ARGUMENTS
First: $1
"#;
let command_file = commands_dir.join("test.md");
fs::write(&command_file, command_content).unwrap();
// Execute slash command with args from the temp directory
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.current_dir(dir.path())
.arg("--mode")
.arg("code")
.arg("slash")
.arg("test")
.arg("arg1");
cmd.assert()
.success()
.stdout(predicates::str::contains("Hello from slash command!"))
.stdout(predicates::str::contains("Args: arg1"))
.stdout(predicates::str::contains("First: arg1"));
}
#[test]
fn slash_command_file_refs() {
let dir = tempdir().unwrap();
let commands_dir = dir.path().join(".owlen/commands");
fs::create_dir_all(&commands_dir).unwrap();
// Create a file to reference
let data_file = dir.path().join("data.txt");
fs::write(&data_file, "Referenced content").unwrap();
// Create slash command with file reference
let command_content = format!("File content: @{}", data_file.display());
fs::write(commands_dir.join("reftest.md"), command_content).unwrap();
// Execute slash command
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.current_dir(dir.path())
.arg("--mode")
.arg("code")
.arg("slash")
.arg("reftest");
cmd.assert()
.success()
.stdout(predicates::str::contains("Referenced content"));
}
#[test]
fn slash_command_not_found() {
let dir = tempdir().unwrap();
// Try to execute non-existent slash command
let mut cmd = Command::new(assert_cmd::cargo::cargo_bin!("owlen"));
cmd.current_dir(dir.path())
.arg("--mode")
.arg("code")
.arg("slash")
.arg("nonexistent");
cmd.assert().failure();
}

View File

@@ -0,0 +1,16 @@
[package]
name = "mcp-client"
version = "0.1.0"
edition.workspace = true
license.workspace = true
rust-version.workspace = true
[dependencies]
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tokio = { version = "1.39", features = ["process", "io-util", "sync", "time"] }
color-eyre = "0.6"
[dev-dependencies]
tempfile = "3.23.0"
tokio = { version = "1.39", features = ["macros", "rt-multi-thread"] }

View File

@@ -0,0 +1,272 @@
use color_eyre::eyre::{Result, eyre};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::process::Stdio;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::{Child, Command};
use tokio::sync::Mutex;
/// JSON-RPC 2.0 request
#[derive(Debug, Serialize)]
struct JsonRpcRequest {
jsonrpc: String,
id: u64,
method: String,
#[serde(skip_serializing_if = "Option::is_none")]
params: Option<Value>,
}
/// JSON-RPC 2.0 response
#[derive(Debug, Deserialize)]
struct JsonRpcResponse {
jsonrpc: String,
id: u64,
#[serde(skip_serializing_if = "Option::is_none")]
result: Option<Value>,
#[serde(skip_serializing_if = "Option::is_none")]
error: Option<JsonRpcError>,
}
#[derive(Debug, Deserialize)]
struct JsonRpcError {
code: i32,
message: String,
}
/// MCP server capabilities
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ServerCapabilities {
#[serde(default)]
pub tools: Option<ToolsCapability>,
#[serde(default)]
pub resources: Option<ResourcesCapability>,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ToolsCapability {
#[serde(default)]
pub list_changed: Option<bool>,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct ResourcesCapability {
#[serde(default)]
pub subscribe: Option<bool>,
#[serde(default)]
pub list_changed: Option<bool>,
}
/// MCP Tool definition
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct McpTool {
pub name: String,
#[serde(default)]
pub description: Option<String>,
#[serde(default)]
pub input_schema: Option<Value>,
}
/// MCP Resource definition
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct McpResource {
pub uri: String,
#[serde(default)]
pub name: Option<String>,
#[serde(default)]
pub description: Option<String>,
#[serde(default)]
pub mime_type: Option<String>,
}
/// MCP Client over stdio transport
pub struct McpClient {
process: Mutex<Child>,
next_id: Mutex<u64>,
server_name: String,
}
impl McpClient {
/// Create a new MCP client by spawning a subprocess
pub async fn spawn(command: &str, args: &[&str], server_name: &str) -> Result<Self> {
let mut child = Command::new(command)
.args(args)
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()?;
// Verify process is running
if child.try_wait()?.is_some() {
return Err(eyre!("MCP server process exited immediately"));
}
Ok(Self {
process: Mutex::new(child),
next_id: Mutex::new(1),
server_name: server_name.to_string(),
})
}
/// Initialize the MCP connection
pub async fn initialize(&self) -> Result<ServerCapabilities> {
let params = serde_json::json!({
"protocolVersion": "2024-11-05",
"capabilities": {
"roots": {
"listChanged": true
}
},
"clientInfo": {
"name": "owlen",
"version": env!("CARGO_PKG_VERSION")
}
});
let response = self.send_request("initialize", Some(params)).await?;
let capabilities = response
.get("capabilities")
.ok_or_else(|| eyre!("No capabilities in initialize response"))?;
Ok(serde_json::from_value(capabilities.clone())?)
}
/// List available tools
pub async fn list_tools(&self) -> Result<Vec<McpTool>> {
let response = self.send_request("tools/list", None).await?;
let tools = response
.get("tools")
.ok_or_else(|| eyre!("No tools in response"))?;
Ok(serde_json::from_value(tools.clone())?)
}
/// Call a tool
pub async fn call_tool(&self, name: &str, arguments: Value) -> Result<Value> {
let params = serde_json::json!({
"name": name,
"arguments": arguments
});
let response = self.send_request("tools/call", Some(params)).await?;
response
.get("content")
.cloned()
.ok_or_else(|| eyre!("No content in tool call response"))
}
/// List available resources
pub async fn list_resources(&self) -> Result<Vec<McpResource>> {
let response = self.send_request("resources/list", None).await?;
let resources = response
.get("resources")
.ok_or_else(|| eyre!("No resources in response"))?;
Ok(serde_json::from_value(resources.clone())?)
}
/// Read a resource
pub async fn read_resource(&self, uri: &str) -> Result<Value> {
let params = serde_json::json!({
"uri": uri
});
let response = self.send_request("resources/read", Some(params)).await?;
response
.get("contents")
.cloned()
.ok_or_else(|| eyre!("No contents in resource read response"))
}
/// Get the server name
pub fn server_name(&self) -> &str {
&self.server_name
}
/// Send a JSON-RPC request and get the response
async fn send_request(&self, method: &str, params: Option<Value>) -> Result<Value> {
let mut next_id = self.next_id.lock().await;
let id = *next_id;
*next_id += 1;
drop(next_id);
let request = JsonRpcRequest {
jsonrpc: "2.0".to_string(),
id,
method: method.to_string(),
params,
};
let request_json = serde_json::to_string(&request)?;
let mut process = self.process.lock().await;
// Write request
let stdin = process.stdin.as_mut().ok_or_else(|| eyre!("No stdin"))?;
stdin.write_all(request_json.as_bytes()).await?;
stdin.write_all(b"\n").await?;
stdin.flush().await?;
// Read response
let stdout = process.stdout.take().ok_or_else(|| eyre!("No stdout"))?;
let mut reader = BufReader::new(stdout);
let mut response_line = String::new();
reader.read_line(&mut response_line).await?;
// Put stdout back
process.stdout = Some(reader.into_inner());
drop(process);
let response: JsonRpcResponse = serde_json::from_str(&response_line)?;
if response.id != id {
return Err(eyre!("Response ID mismatch: expected {}, got {}", id, response.id));
}
if let Some(error) = response.error {
return Err(eyre!("MCP error {}: {}", error.code, error.message));
}
response.result.ok_or_else(|| eyre!("No result in response"))
}
/// Close the MCP connection
pub async fn close(self) -> Result<()> {
let mut process = self.process.into_inner();
// Close stdin to signal the server to exit
drop(process.stdin.take());
// Wait for process to exit (with timeout)
tokio::time::timeout(
std::time::Duration::from_secs(5),
process.wait()
).await??;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn jsonrpc_request_serializes() {
let req = JsonRpcRequest {
jsonrpc: "2.0".to_string(),
id: 1,
method: "test".to_string(),
params: Some(serde_json::json!({"key": "value"})),
};
let json = serde_json::to_string(&req).unwrap();
assert!(json.contains("\"method\":\"test\""));
assert!(json.contains("\"id\":1"));
}
}

View File

@@ -0,0 +1,347 @@
use mcp_client::McpClient;
use std::fs;
use tempfile::tempdir;
#[tokio::test]
async fn mcp_server_capability_negotiation() {
// Create a mock MCP server script
let dir = tempdir().unwrap();
let server_script = dir.path().join("mock_server.py");
let script_content = r#"#!/usr/bin/env python3
import sys
import json
def read_request():
line = sys.stdin.readline()
return json.loads(line)
def send_response(response):
sys.stdout.write(json.dumps(response) + '\n')
sys.stdout.flush()
# Main loop
while True:
try:
req = read_request()
method = req.get('method')
req_id = req.get('id')
if method == 'initialize':
send_response({
'jsonrpc': '2.0',
'id': req_id,
'result': {
'protocolVersion': '2024-11-05',
'capabilities': {
'tools': {'list_changed': True},
'resources': {'subscribe': False}
},
'serverInfo': {
'name': 'test-server',
'version': '1.0.0'
}
}
})
elif method == 'tools/list':
send_response({
'jsonrpc': '2.0',
'id': req_id,
'result': {
'tools': []
}
})
else:
send_response({
'jsonrpc': '2.0',
'id': req_id,
'error': {
'code': -32601,
'message': f'Method not found: {method}'
}
})
except EOFError:
break
except Exception as e:
sys.stderr.write(f'Error: {e}\n')
break
"#;
fs::write(&server_script, script_content).unwrap();
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
fs::set_permissions(&server_script, std::fs::Permissions::from_mode(0o755)).unwrap();
}
// Connect to the server
let client = McpClient::spawn(
"python3",
&[server_script.to_str().unwrap()],
"test-server"
).await.unwrap();
// Initialize
let capabilities = client.initialize().await.unwrap();
// Verify capabilities
assert!(capabilities.tools.is_some());
assert_eq!(capabilities.tools.unwrap().list_changed, Some(true));
client.close().await.unwrap();
}
#[tokio::test]
async fn mcp_tool_invocation() {
let dir = tempdir().unwrap();
let server_script = dir.path().join("mock_server.py");
let script_content = r#"#!/usr/bin/env python3
import sys
import json
def read_request():
line = sys.stdin.readline()
return json.loads(line)
def send_response(response):
sys.stdout.write(json.dumps(response) + '\n')
sys.stdout.flush()
while True:
try:
req = read_request()
method = req.get('method')
req_id = req.get('id')
params = req.get('params', {})
if method == 'initialize':
send_response({
'jsonrpc': '2.0',
'id': req_id,
'result': {
'protocolVersion': '2024-11-05',
'capabilities': {
'tools': {}
},
'serverInfo': {
'name': 'test-server',
'version': '1.0.0'
}
}
})
elif method == 'tools/list':
send_response({
'jsonrpc': '2.0',
'id': req_id,
'result': {
'tools': [
{
'name': 'echo',
'description': 'Echo the input',
'input_schema': {
'type': 'object',
'properties': {
'message': {'type': 'string'}
}
}
}
]
}
})
elif method == 'tools/call':
tool_name = params.get('name')
arguments = params.get('arguments', {})
if tool_name == 'echo':
send_response({
'jsonrpc': '2.0',
'id': req_id,
'result': {
'content': [
{
'type': 'text',
'text': arguments.get('message', '')
}
]
}
})
else:
send_response({
'jsonrpc': '2.0',
'id': req_id,
'error': {
'code': -32602,
'message': f'Unknown tool: {tool_name}'
}
})
else:
send_response({
'jsonrpc': '2.0',
'id': req_id,
'error': {
'code': -32601,
'message': f'Method not found: {method}'
}
})
except EOFError:
break
except Exception as e:
sys.stderr.write(f'Error: {e}\n')
break
"#;
fs::write(&server_script, script_content).unwrap();
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
fs::set_permissions(&server_script, std::fs::Permissions::from_mode(0o755)).unwrap();
}
let client = McpClient::spawn(
"python3",
&[server_script.to_str().unwrap()],
"test-server"
).await.unwrap();
client.initialize().await.unwrap();
// List tools
let tools = client.list_tools().await.unwrap();
assert_eq!(tools.len(), 1);
assert_eq!(tools[0].name, "echo");
// Call tool
let result = client.call_tool(
"echo",
serde_json::json!({"message": "Hello, MCP!"})
).await.unwrap();
// Verify result
let content = result.as_array().unwrap();
assert_eq!(content[0]["text"].as_str().unwrap(), "Hello, MCP!");
client.close().await.unwrap();
}
#[tokio::test]
async fn mcp_resource_reads() {
let dir = tempdir().unwrap();
let server_script = dir.path().join("mock_server.py");
let script_content = r#"#!/usr/bin/env python3
import sys
import json
def read_request():
line = sys.stdin.readline()
return json.loads(line)
def send_response(response):
sys.stdout.write(json.dumps(response) + '\n')
sys.stdout.flush()
while True:
try:
req = read_request()
method = req.get('method')
req_id = req.get('id')
params = req.get('params', {})
if method == 'initialize':
send_response({
'jsonrpc': '2.0',
'id': req_id,
'result': {
'protocolVersion': '2024-11-05',
'capabilities': {
'resources': {}
},
'serverInfo': {
'name': 'test-server',
'version': '1.0.0'
}
}
})
elif method == 'resources/list':
send_response({
'jsonrpc': '2.0',
'id': req_id,
'result': {
'resources': [
{
'uri': 'file:///test.txt',
'name': 'Test File',
'description': 'A test file',
'mime_type': 'text/plain'
}
]
}
})
elif method == 'resources/read':
uri = params.get('uri')
if uri == 'file:///test.txt':
send_response({
'jsonrpc': '2.0',
'id': req_id,
'result': {
'contents': [
{
'uri': uri,
'mime_type': 'text/plain',
'text': 'Hello from resource!'
}
]
}
})
else:
send_response({
'jsonrpc': '2.0',
'id': req_id,
'error': {
'code': -32602,
'message': f'Unknown resource: {uri}'
}
})
else:
send_response({
'jsonrpc': '2.0',
'id': req_id,
'error': {
'code': -32601,
'message': f'Method not found: {method}'
}
})
except EOFError:
break
except Exception as e:
sys.stderr.write(f'Error: {e}\n')
break
"#;
fs::write(&server_script, script_content).unwrap();
#[cfg(unix)]
{
use std::os::unix::fs::PermissionsExt;
fs::set_permissions(&server_script, std::fs::Permissions::from_mode(0o755)).unwrap();
}
let client = McpClient::spawn(
"python3",
&[server_script.to_str().unwrap()],
"test-server"
).await.unwrap();
client.initialize().await.unwrap();
// List resources
let resources = client.list_resources().await.unwrap();
assert_eq!(resources.len(), 1);
assert_eq!(resources[0].uri, "file:///test.txt");
// Read resource
let contents = client.read_resource("file:///test.txt").await.unwrap();
let contents_array = contents.as_array().unwrap();
assert_eq!(contents_array[0]["text"].as_str().unwrap(), "Hello from resource!");
client.close().await.unwrap();
}

22
crates/llm/ollama/.gitignore vendored Normal file
View File

@@ -0,0 +1,22 @@
/target
### Rust template
# Generated by Cargo
# will have compiled files and executables
debug/
target/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
### rust-analyzer template
# Can be generated by other build systems other than cargo (ex: bazelbuild/rust_rules)
rust-project.json

View File

@@ -0,0 +1,16 @@
[package]
name = "llm-ollama"
version = "0.1.0"
edition.workspace = true
license.workspace = true
rust-version.workspace = true
[dependencies]
reqwest = { version = "0.12", features = ["json", "stream"] }
tokio = { version = "1.39", features = ["rt-multi-thread"] }
futures = "0.3"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
thiserror = "1"
bytes = "1"
tokio-stream = "0.1.17"

View File

@@ -0,0 +1,98 @@
use crate::types::{ChatMessage, ChatResponseChunk};
use futures::{Stream, TryStreamExt};
use reqwest::Client;
use serde::Serialize;
use thiserror::Error;
#[derive(Debug, Clone)]
pub struct OllamaClient {
http: Client,
base_url: String, // e.g. "http://localhost:11434"
api_key: Option<String>, // For Ollama Cloud authentication
}
#[derive(Debug, Clone, Default)]
pub struct OllamaOptions {
pub model: String,
pub stream: bool,
}
#[derive(Error, Debug)]
pub enum OllamaError {
#[error("http: {0}")]
Http(#[from] reqwest::Error),
#[error("json: {0}")]
Json(#[from] serde_json::Error),
#[error("protocol: {0}")]
Protocol(String),
}
impl OllamaClient {
pub fn new(base_url: impl Into<String>) -> Self {
Self {
http: Client::new(),
base_url: base_url.into().trim_end_matches('/').to_string(),
api_key: None,
}
}
pub fn with_api_key(mut self, api_key: impl Into<String>) -> Self {
self.api_key = Some(api_key.into());
self
}
pub fn with_cloud() -> Self {
// Same API, different base
Self::new("https://ollama.com")
}
pub async fn chat_stream(
&self,
messages: &[ChatMessage],
opts: &OllamaOptions,
) -> Result<impl Stream<Item = Result<ChatResponseChunk, OllamaError>>, OllamaError> {
#[derive(Serialize)]
struct Body<'a> {
model: &'a str,
messages: &'a [ChatMessage],
stream: bool,
}
let url = format!("{}/api/chat", self.base_url);
let body = Body {model: &opts.model, messages, stream: true};
let mut req = self.http.post(url).json(&body);
// Add Authorization header if API key is present
if let Some(ref key) = self.api_key {
req = req.header("Authorization", format!("Bearer {}", key));
}
let resp = req.send().await?;
let bytes_stream = resp.bytes_stream();
// NDJSON parser: split by '\n', parse each as JSON and stream the results
let out = bytes_stream
.map_err(OllamaError::Http)
.map_ok(|bytes| {
// Convert the chunk to a UTF8 string and own it
let txt = String::from_utf8_lossy(&bytes).into_owned();
// Parse each nonempty line into a ChatResponseChunk
let results: Vec<Result<ChatResponseChunk, OllamaError>> = txt
.lines()
.filter_map(|line| {
let trimmed = line.trim();
if trimmed.is_empty() {
None
} else {
Some(
serde_json::from_str::<ChatResponseChunk>(trimmed)
.map_err(OllamaError::Json),
)
}
})
.collect();
futures::stream::iter(results)
})
.try_flatten(); // Stream<Item = Result<ChatResponseChunk, OllamaError>>
Ok(out)
}
}

View File

@@ -0,0 +1,5 @@
pub mod client;
pub mod types;
pub use client::{OllamaClient, OllamaOptions};
pub use types::{ChatMessage, ChatResponseChunk};

View File

@@ -0,0 +1,22 @@
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChatMessage {
pub role: String, // "user", | "assistant" | "system"
pub content: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ChatResponseChunk {
pub model: Option<String>,
pub created_at: Option<String>,
pub message: Option<ChunkMessage>,
pub done: Option<bool>,
pub total_duration: Option<u64>,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ChunkMessage {
pub role: Option<String>,
pub content: Option<String>,
}

View File

@@ -0,0 +1,12 @@
use llm_ollama::{OllamaClient, OllamaOptions};
// This test stubs NDJSON by spinning a tiny local server is overkill for M0.
// Instead, test the line parser indirectly by mocking reqwest is complex.
// We'll smoke-test the client type compiles and leave end-to-end to cli tests.
#[tokio::test]
async fn client_compiles_smoke() {
let _ = OllamaClient::new("http://localhost:11434");
let _ = OllamaClient::with_cloud();
let _ = OllamaOptions { model: "qwen2.5".into(), stream: true };
}

View File

@@ -1,12 +0,0 @@
[package]
name = "owlen-mcp-client"
version = "0.1.0"
edition.workspace = true
description = "Dedicated MCP client library for Owlen, exposing remote MCP server communication"
license = "AGPL-3.0"
[dependencies]
owlen-core = { path = "../../owlen-core" }
[features]
default = []

View File

@@ -1,17 +0,0 @@
//! Owlen MCP client library.
//!
//! This crate provides a thin façade over the remote MCP client implementation
//! inside `owlen-core`. It reexports the most useful types so downstream
//! crates can depend only on `owlen-mcp-client` without pulling in the entire
//! core crate internals.
pub use owlen_core::config::{McpConfigScope, ScopedMcpServer};
pub use owlen_core::mcp::remote_client::RemoteMcpClient;
pub use owlen_core::mcp::{McpClient, McpToolCall, McpToolDescriptor, McpToolResponse};
// Reexport the core Provider trait so that the MCP client can also be used as an LLM provider.
pub use owlen_core::Provider as McpProvider;
// Note: The `RemoteMcpClient` type provides its own `new` constructor in the core
// crate. Users can call `RemoteMcpClient::new()` directly. No additional wrapper
// is needed here.

View File

@@ -1,22 +0,0 @@
[package]
name = "owlen-mcp-code-server"
version = "0.1.0"
edition.workspace = true
description = "MCP server exposing safe code execution tools for Owlen"
license = "AGPL-3.0"
[dependencies]
owlen-core = { path = "../../owlen-core" }
serde = { workspace = true }
serde_json = { workspace = true }
tokio = { workspace = true }
anyhow = { workspace = true }
async-trait = { workspace = true }
bollard = "0.17"
tempfile = { workspace = true }
uuid = { workspace = true }
futures = { workspace = true }
[lib]
name = "owlen_mcp_code_server"
path = "src/lib.rs"

View File

@@ -1,186 +0,0 @@
//! MCP server exposing code execution tools with Docker sandboxing.
//!
//! This server provides:
//! - compile_project: Build projects (Rust, Node.js, Python)
//! - run_tests: Execute test suites
//! - format_code: Run code formatters
//! - lint_code: Run linters
pub mod sandbox;
pub mod tools;
use owlen_core::mcp::protocol::{
ErrorCode, InitializeParams, InitializeResult, PROTOCOL_VERSION, RequestId, RpcError,
RpcErrorResponse, RpcRequest, RpcResponse, ServerCapabilities, ServerInfo, methods,
};
use owlen_core::tools::{Tool, ToolResult};
use serde_json::{Value, json};
use std::collections::HashMap;
use std::sync::Arc;
use tokio::io::{self, AsyncBufReadExt, AsyncWriteExt};
use tools::{CompileProjectTool, FormatCodeTool, LintCodeTool, RunTestsTool};
/// Tool registry for the code server
#[allow(dead_code)]
struct ToolRegistry {
tools: HashMap<String, Box<dyn Tool + Send + Sync>>,
}
#[allow(dead_code)]
impl ToolRegistry {
fn new() -> Self {
let mut tools: HashMap<String, Box<dyn Tool + Send + Sync>> = HashMap::new();
tools.insert(
"compile_project".to_string(),
Box::new(CompileProjectTool::new()),
);
tools.insert("run_tests".to_string(), Box::new(RunTestsTool::new()));
tools.insert("format_code".to_string(), Box::new(FormatCodeTool::new()));
tools.insert("lint_code".to_string(), Box::new(LintCodeTool::new()));
Self { tools }
}
fn list_tools(&self) -> Vec<owlen_core::mcp::McpToolDescriptor> {
self.tools
.values()
.map(|tool| owlen_core::mcp::McpToolDescriptor {
name: tool.name().to_string(),
description: tool.description().to_string(),
input_schema: tool.schema(),
requires_network: tool.requires_network(),
requires_filesystem: tool.requires_filesystem(),
})
.collect()
}
async fn execute(&self, name: &str, args: Value) -> Result<ToolResult, String> {
self.tools
.get(name)
.ok_or_else(|| format!("Tool not found: {}", name))?
.execute(args)
.await
.map_err(|e| e.to_string())
}
}
#[allow(dead_code)]
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let mut stdin = io::BufReader::new(io::stdin());
let mut stdout = io::stdout();
let registry = Arc::new(ToolRegistry::new());
loop {
let mut line = String::new();
match stdin.read_line(&mut line).await {
Ok(0) => break, // EOF
Ok(_) => {
let req: RpcRequest = match serde_json::from_str(&line) {
Ok(r) => r,
Err(e) => {
let err = RpcErrorResponse::new(
RequestId::Number(0),
RpcError::parse_error(format!("Parse error: {}", e)),
);
let s = serde_json::to_string(&err)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let resp = handle_request(req.clone(), registry.clone()).await;
match resp {
Ok(r) => {
let s = serde_json::to_string(&r)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
}
Err(e) => {
let err = RpcErrorResponse::new(req.id.clone(), e);
let s = serde_json::to_string(&err)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
}
}
}
Err(e) => {
eprintln!("Error reading stdin: {}", e);
break;
}
}
}
Ok(())
}
#[allow(dead_code)]
async fn handle_request(
req: RpcRequest,
registry: Arc<ToolRegistry>,
) -> Result<RpcResponse, RpcError> {
match req.method.as_str() {
methods::INITIALIZE => {
let params: InitializeParams =
serde_json::from_value(req.params.unwrap_or_else(|| json!({})))
.map_err(|e| RpcError::invalid_params(format!("Invalid init params: {}", e)))?;
if !params.protocol_version.eq(PROTOCOL_VERSION) {
return Err(RpcError::new(
ErrorCode::INVALID_REQUEST,
format!(
"Incompatible protocol version. Client: {}, Server: {}",
params.protocol_version, PROTOCOL_VERSION
),
));
}
let result = InitializeResult {
protocol_version: PROTOCOL_VERSION.to_string(),
server_info: ServerInfo {
name: "owlen-mcp-code-server".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
},
capabilities: ServerCapabilities {
supports_tools: Some(true),
supports_resources: Some(false),
supports_streaming: Some(false),
},
};
let payload = serde_json::to_value(result).map_err(|e| {
RpcError::internal_error(format!("Failed to serialize initialize result: {}", e))
})?;
Ok(RpcResponse::new(req.id, payload))
}
methods::TOOLS_LIST => {
let tools = registry.list_tools();
Ok(RpcResponse::new(req.id, json!(tools)))
}
methods::TOOLS_CALL => {
let call = serde_json::from_value::<owlen_core::mcp::McpToolCall>(
req.params.unwrap_or_else(|| json!({})),
)
.map_err(|e| RpcError::invalid_params(format!("Invalid tool call: {}", e)))?;
let result: ToolResult = registry
.execute(&call.name, call.arguments)
.await
.map_err(|e| RpcError::internal_error(format!("Tool execution failed: {}", e)))?;
let resp = owlen_core::mcp::McpToolResponse {
name: call.name,
success: result.success,
output: result.output,
metadata: result.metadata,
duration_ms: result.duration.as_millis() as u128,
};
let payload = serde_json::to_value(resp).map_err(|e| {
RpcError::internal_error(format!("Failed to serialize tool response: {}", e))
})?;
Ok(RpcResponse::new(req.id, payload))
}
_ => Err(RpcError::method_not_found(&req.method)),
}
}

View File

@@ -1,250 +0,0 @@
//! Docker-based sandboxing for secure code execution
use anyhow::{Context, Result};
use bollard::Docker;
use bollard::container::{
Config, CreateContainerOptions, RemoveContainerOptions, StartContainerOptions,
WaitContainerOptions,
};
use bollard::models::{HostConfig, Mount, MountTypeEnum};
use std::collections::HashMap;
use std::path::Path;
/// Result of executing code in a sandbox
#[derive(Debug, Clone)]
pub struct ExecutionResult {
pub stdout: String,
pub stderr: String,
pub exit_code: i64,
pub timed_out: bool,
}
/// Docker-based sandbox executor
pub struct Sandbox {
docker: Docker,
memory_limit: i64,
cpu_quota: i64,
timeout_secs: u64,
}
impl Sandbox {
/// Create a new sandbox with default resource limits
pub fn new() -> Result<Self> {
let docker =
Docker::connect_with_local_defaults().context("Failed to connect to Docker daemon")?;
Ok(Self {
docker,
memory_limit: 512 * 1024 * 1024, // 512MB
cpu_quota: 50000, // 50% of one core
timeout_secs: 30,
})
}
/// Execute a command in a sandboxed container
pub async fn execute(
&self,
image: &str,
cmd: &[&str],
workspace: Option<&Path>,
env: HashMap<String, String>,
) -> Result<ExecutionResult> {
let container_name = format!("owlen-sandbox-{}", uuid::Uuid::new_v4());
// Prepare volume mount if workspace provided
let mounts = if let Some(ws) = workspace {
vec![Mount {
target: Some("/workspace".to_string()),
source: Some(ws.to_string_lossy().to_string()),
typ: Some(MountTypeEnum::BIND),
read_only: Some(false),
..Default::default()
}]
} else {
vec![]
};
// Create container config
let host_config = HostConfig {
memory: Some(self.memory_limit),
cpu_quota: Some(self.cpu_quota),
network_mode: Some("none".to_string()), // No network access
mounts: Some(mounts),
auto_remove: Some(true),
..Default::default()
};
let config = Config {
image: Some(image.to_string()),
cmd: Some(cmd.iter().map(|s| s.to_string()).collect()),
working_dir: Some("/workspace".to_string()),
env: Some(env.iter().map(|(k, v)| format!("{}={}", k, v)).collect()),
host_config: Some(host_config),
attach_stdout: Some(true),
attach_stderr: Some(true),
tty: Some(false),
..Default::default()
};
// Create container
let container = self
.docker
.create_container(
Some(CreateContainerOptions {
name: container_name.clone(),
..Default::default()
}),
config,
)
.await
.context("Failed to create container")?;
// Start container
self.docker
.start_container(&container.id, None::<StartContainerOptions<String>>)
.await
.context("Failed to start container")?;
// Wait for container with timeout
let wait_result =
tokio::time::timeout(std::time::Duration::from_secs(self.timeout_secs), async {
let mut wait_stream = self
.docker
.wait_container(&container.id, None::<WaitContainerOptions<String>>);
use futures::StreamExt;
if let Some(result) = wait_stream.next().await {
result
} else {
Err(bollard::errors::Error::IOError {
err: std::io::Error::other("Container wait stream ended unexpectedly"),
})
}
})
.await;
let (exit_code, timed_out) = match wait_result {
Ok(Ok(result)) => (result.status_code, false),
Ok(Err(e)) => {
eprintln!("Container wait error: {}", e);
(1, false)
}
Err(_) => {
// Timeout - kill the container
let _ = self
.docker
.kill_container(
&container.id,
None::<bollard::container::KillContainerOptions<String>>,
)
.await;
(124, true)
}
};
// Get logs
let logs = self.docker.logs(
&container.id,
Some(bollard::container::LogsOptions::<String> {
stdout: true,
stderr: true,
..Default::default()
}),
);
use futures::StreamExt;
let mut stdout = String::new();
let mut stderr = String::new();
let log_result = tokio::time::timeout(std::time::Duration::from_secs(5), async {
let mut logs = logs;
while let Some(log) = logs.next().await {
match log {
Ok(bollard::container::LogOutput::StdOut { message }) => {
stdout.push_str(&String::from_utf8_lossy(&message));
}
Ok(bollard::container::LogOutput::StdErr { message }) => {
stderr.push_str(&String::from_utf8_lossy(&message));
}
_ => {}
}
}
})
.await;
if log_result.is_err() {
eprintln!("Timeout reading container logs");
}
// Remove container (auto_remove should handle this, but be explicit)
let _ = self
.docker
.remove_container(
&container.id,
Some(RemoveContainerOptions {
force: true,
..Default::default()
}),
)
.await;
Ok(ExecutionResult {
stdout,
stderr,
exit_code,
timed_out,
})
}
/// Execute in a Rust environment
pub async fn execute_rust(&self, workspace: &Path, cmd: &[&str]) -> Result<ExecutionResult> {
self.execute("rust:1.75-slim", cmd, Some(workspace), HashMap::new())
.await
}
/// Execute in a Python environment
pub async fn execute_python(&self, workspace: &Path, cmd: &[&str]) -> Result<ExecutionResult> {
self.execute("python:3.11-slim", cmd, Some(workspace), HashMap::new())
.await
}
/// Execute in a Node.js environment
pub async fn execute_node(&self, workspace: &Path, cmd: &[&str]) -> Result<ExecutionResult> {
self.execute("node:20-slim", cmd, Some(workspace), HashMap::new())
.await
}
}
impl Default for Sandbox {
fn default() -> Self {
Self::new().expect("Failed to create default sandbox")
}
}
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[tokio::test]
#[ignore] // Requires Docker daemon
async fn test_sandbox_rust_compile() {
let sandbox = Sandbox::new().unwrap();
let temp_dir = TempDir::new().unwrap();
// Create a simple Rust project
std::fs::write(
temp_dir.path().join("main.rs"),
"fn main() { println!(\"Hello from sandbox!\"); }",
)
.unwrap();
let result = sandbox
.execute_rust(temp_dir.path(), &["rustc", "main.rs"])
.await
.unwrap();
assert_eq!(result.exit_code, 0);
assert!(!result.timed_out);
}
}

View File

@@ -1,417 +0,0 @@
//! Code execution tools using Docker sandboxing
use crate::sandbox::Sandbox;
use async_trait::async_trait;
use owlen_core::Result;
use owlen_core::tools::{Tool, ToolResult};
use serde_json::{Value, json};
use std::path::PathBuf;
/// Tool for compiling projects (Rust, Node.js, Python)
pub struct CompileProjectTool {
sandbox: Sandbox,
}
impl Default for CompileProjectTool {
fn default() -> Self {
Self::new()
}
}
impl CompileProjectTool {
pub fn new() -> Self {
Self {
sandbox: Sandbox::default(),
}
}
}
#[async_trait]
impl Tool for CompileProjectTool {
fn name(&self) -> &'static str {
"compile_project"
}
fn description(&self) -> &'static str {
"Compile a project (Rust, Node.js, Python). Detects project type automatically."
}
fn schema(&self) -> Value {
json!({
"type": "object",
"properties": {
"project_path": {
"type": "string",
"description": "Path to the project root"
},
"project_type": {
"type": "string",
"enum": ["rust", "node", "python"],
"description": "Project type (auto-detected if not specified)"
}
},
"required": ["project_path"]
})
}
async fn execute(&self, args: Value) -> Result<ToolResult> {
let project_path = args
.get("project_path")
.and_then(|v| v.as_str())
.ok_or_else(|| owlen_core::Error::InvalidInput("Missing project_path".into()))?;
let path = PathBuf::from(project_path);
if !path.exists() {
return Ok(ToolResult::error("Project path does not exist"));
}
// Detect project type
let project_type = if let Some(pt) = args.get("project_type").and_then(|v| v.as_str()) {
pt.to_string()
} else if path.join("Cargo.toml").exists() {
"rust".to_string()
} else if path.join("package.json").exists() {
"node".to_string()
} else if path.join("setup.py").exists() || path.join("pyproject.toml").exists() {
"python".to_string()
} else {
return Ok(ToolResult::error("Could not detect project type"));
};
// Execute compilation
let result = match project_type.as_str() {
"rust" => self.sandbox.execute_rust(&path, &["cargo", "build"]).await,
"node" => {
self.sandbox
.execute_node(&path, &["npm", "run", "build"])
.await
}
"python" => {
// Python typically doesn't need compilation, but we can check syntax
self.sandbox
.execute_python(&path, &["python", "-m", "compileall", "."])
.await
}
_ => return Ok(ToolResult::error("Unsupported project type")),
};
match result {
Ok(exec_result) => {
if exec_result.timed_out {
Ok(ToolResult::error("Compilation timed out"))
} else if exec_result.exit_code == 0 {
Ok(ToolResult::success(json!({
"success": true,
"stdout": exec_result.stdout,
"stderr": exec_result.stderr,
"project_type": project_type
})))
} else {
Ok(ToolResult::success(json!({
"success": false,
"exit_code": exec_result.exit_code,
"stdout": exec_result.stdout,
"stderr": exec_result.stderr,
"project_type": project_type
})))
}
}
Err(e) => Ok(ToolResult::error(&format!("Compilation failed: {}", e))),
}
}
}
/// Tool for running test suites
pub struct RunTestsTool {
sandbox: Sandbox,
}
impl Default for RunTestsTool {
fn default() -> Self {
Self::new()
}
}
impl RunTestsTool {
pub fn new() -> Self {
Self {
sandbox: Sandbox::default(),
}
}
}
#[async_trait]
impl Tool for RunTestsTool {
fn name(&self) -> &'static str {
"run_tests"
}
fn description(&self) -> &'static str {
"Run tests for a project (Rust, Node.js, Python)"
}
fn schema(&self) -> Value {
json!({
"type": "object",
"properties": {
"project_path": {
"type": "string",
"description": "Path to the project root"
},
"test_filter": {
"type": "string",
"description": "Optional test filter/pattern"
}
},
"required": ["project_path"]
})
}
async fn execute(&self, args: Value) -> Result<ToolResult> {
let project_path = args
.get("project_path")
.and_then(|v| v.as_str())
.ok_or_else(|| owlen_core::Error::InvalidInput("Missing project_path".into()))?;
let path = PathBuf::from(project_path);
if !path.exists() {
return Ok(ToolResult::error("Project path does not exist"));
}
let test_filter = args.get("test_filter").and_then(|v| v.as_str());
// Detect project type and run tests
let result = if path.join("Cargo.toml").exists() {
let cmd = if let Some(filter) = test_filter {
vec!["cargo", "test", filter]
} else {
vec!["cargo", "test"]
};
self.sandbox.execute_rust(&path, &cmd).await
} else if path.join("package.json").exists() {
self.sandbox.execute_node(&path, &["npm", "test"]).await
} else if path.join("pytest.ini").exists()
|| path.join("setup.py").exists()
|| path.join("pyproject.toml").exists()
{
let cmd = if let Some(filter) = test_filter {
vec!["pytest", "-k", filter]
} else {
vec!["pytest"]
};
self.sandbox.execute_python(&path, &cmd).await
} else {
return Ok(ToolResult::error("Could not detect test framework"));
};
match result {
Ok(exec_result) => Ok(ToolResult::success(json!({
"success": exec_result.exit_code == 0 && !exec_result.timed_out,
"exit_code": exec_result.exit_code,
"stdout": exec_result.stdout,
"stderr": exec_result.stderr,
"timed_out": exec_result.timed_out
}))),
Err(e) => Ok(ToolResult::error(&format!("Tests failed to run: {}", e))),
}
}
}
/// Tool for formatting code
pub struct FormatCodeTool {
sandbox: Sandbox,
}
impl Default for FormatCodeTool {
fn default() -> Self {
Self::new()
}
}
impl FormatCodeTool {
pub fn new() -> Self {
Self {
sandbox: Sandbox::default(),
}
}
}
#[async_trait]
impl Tool for FormatCodeTool {
fn name(&self) -> &'static str {
"format_code"
}
fn description(&self) -> &'static str {
"Format code using project-appropriate formatter (rustfmt, prettier, black)"
}
fn schema(&self) -> Value {
json!({
"type": "object",
"properties": {
"project_path": {
"type": "string",
"description": "Path to the project root"
},
"check_only": {
"type": "boolean",
"description": "Only check formatting without modifying files",
"default": false
}
},
"required": ["project_path"]
})
}
async fn execute(&self, args: Value) -> Result<ToolResult> {
let project_path = args
.get("project_path")
.and_then(|v| v.as_str())
.ok_or_else(|| owlen_core::Error::InvalidInput("Missing project_path".into()))?;
let path = PathBuf::from(project_path);
if !path.exists() {
return Ok(ToolResult::error("Project path does not exist"));
}
let check_only = args
.get("check_only")
.and_then(|v| v.as_bool())
.unwrap_or(false);
// Detect project type and run formatter
let result = if path.join("Cargo.toml").exists() {
let cmd = if check_only {
vec!["cargo", "fmt", "--", "--check"]
} else {
vec!["cargo", "fmt"]
};
self.sandbox.execute_rust(&path, &cmd).await
} else if path.join("package.json").exists() {
let cmd = if check_only {
vec!["npx", "prettier", "--check", "."]
} else {
vec!["npx", "prettier", "--write", "."]
};
self.sandbox.execute_node(&path, &cmd).await
} else if path.join("setup.py").exists() || path.join("pyproject.toml").exists() {
let cmd = if check_only {
vec!["black", "--check", "."]
} else {
vec!["black", "."]
};
self.sandbox.execute_python(&path, &cmd).await
} else {
return Ok(ToolResult::error("Could not detect project type"));
};
match result {
Ok(exec_result) => Ok(ToolResult::success(json!({
"success": exec_result.exit_code == 0,
"formatted": !check_only && exec_result.exit_code == 0,
"stdout": exec_result.stdout,
"stderr": exec_result.stderr
}))),
Err(e) => Ok(ToolResult::error(&format!("Formatting failed: {}", e))),
}
}
}
/// Tool for linting code
pub struct LintCodeTool {
sandbox: Sandbox,
}
impl Default for LintCodeTool {
fn default() -> Self {
Self::new()
}
}
impl LintCodeTool {
pub fn new() -> Self {
Self {
sandbox: Sandbox::default(),
}
}
}
#[async_trait]
impl Tool for LintCodeTool {
fn name(&self) -> &'static str {
"lint_code"
}
fn description(&self) -> &'static str {
"Lint code using project-appropriate linter (clippy, eslint, pylint)"
}
fn schema(&self) -> Value {
json!({
"type": "object",
"properties": {
"project_path": {
"type": "string",
"description": "Path to the project root"
},
"fix": {
"type": "boolean",
"description": "Automatically fix issues if possible",
"default": false
}
},
"required": ["project_path"]
})
}
async fn execute(&self, args: Value) -> Result<ToolResult> {
let project_path = args
.get("project_path")
.and_then(|v| v.as_str())
.ok_or_else(|| owlen_core::Error::InvalidInput("Missing project_path".into()))?;
let path = PathBuf::from(project_path);
if !path.exists() {
return Ok(ToolResult::error("Project path does not exist"));
}
let fix = args.get("fix").and_then(|v| v.as_bool()).unwrap_or(false);
// Detect project type and run linter
let result = if path.join("Cargo.toml").exists() {
let cmd = if fix {
vec!["cargo", "clippy", "--fix", "--allow-dirty"]
} else {
vec!["cargo", "clippy"]
};
self.sandbox.execute_rust(&path, &cmd).await
} else if path.join("package.json").exists() {
let cmd = if fix {
vec!["npx", "eslint", ".", "--fix"]
} else {
vec!["npx", "eslint", "."]
};
self.sandbox.execute_node(&path, &cmd).await
} else if path.join("setup.py").exists() || path.join("pyproject.toml").exists() {
// pylint doesn't have auto-fix
self.sandbox.execute_python(&path, &["pylint", "."]).await
} else {
return Ok(ToolResult::error("Could not detect project type"));
};
match result {
Ok(exec_result) => {
let issues_found = exec_result.exit_code != 0;
Ok(ToolResult::success(json!({
"success": true,
"issues_found": issues_found,
"exit_code": exec_result.exit_code,
"stdout": exec_result.stdout,
"stderr": exec_result.stderr
})))
}
Err(e) => Ok(ToolResult::error(&format!("Linting failed: {}", e))),
}
}
}

View File

@@ -1,16 +0,0 @@
[package]
name = "owlen-mcp-llm-server"
version = "0.1.0"
edition.workspace = true
[dependencies]
owlen-core = { path = "../../owlen-core" }
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
anyhow = { workspace = true }
tokio-stream = { workspace = true }
[[bin]]
name = "owlen-mcp-llm-server"
path = "src/main.rs"

View File

@@ -1,597 +0,0 @@
#![allow(
unused_imports,
unused_variables,
dead_code,
clippy::unnecessary_cast,
clippy::manual_flatten,
clippy::empty_line_after_outer_attr
)]
use owlen_core::Provider;
use owlen_core::ProviderConfig;
use owlen_core::config::{Config as OwlenConfig, ensure_provider_config};
use owlen_core::mcp::protocol::{
ErrorCode, InitializeParams, InitializeResult, PROTOCOL_VERSION, RequestId, RpcError,
RpcErrorResponse, RpcNotification, RpcRequest, RpcResponse, ServerCapabilities, ServerInfo,
methods,
};
use owlen_core::mcp::{McpToolCall, McpToolDescriptor, McpToolResponse};
use owlen_core::providers::OllamaProvider;
use owlen_core::types::{ChatParameters, ChatRequest, Message};
use serde::Deserialize;
use serde_json::{Value, json};
use std::collections::HashMap;
use std::env;
use std::sync::Arc;
use tokio::io::{self, AsyncBufReadExt, AsyncWriteExt};
use tokio_stream::StreamExt;
// Suppress warnings are handled by the crate-level attribute at the top.
/// Arguments for the generate_text tool
#[derive(Debug, Deserialize)]
struct GenerateTextArgs {
messages: Vec<Message>,
temperature: Option<f32>,
max_tokens: Option<u32>,
model: String,
stream: bool,
}
/// Simple tool descriptor for generate_text
fn generate_text_descriptor() -> McpToolDescriptor {
McpToolDescriptor {
name: "generate_text".to_string(),
description: "Generate text using Ollama LLM. Each message must have 'role' (user/assistant/system) and 'content' (string) fields.".to_string(),
input_schema: json!({
"type": "object",
"properties": {
"messages": {
"type": "array",
"items": {
"type": "object",
"properties": {
"role": {
"type": "string",
"enum": ["user", "assistant", "system"],
"description": "The role of the message sender"
},
"content": {
"type": "string",
"description": "The message content"
}
},
"required": ["role", "content"]
},
"description": "Array of message objects with role and content"
},
"temperature": {"type": ["number", "null"], "description": "Sampling temperature (0.0-2.0)"},
"max_tokens": {"type": ["integer", "null"], "description": "Maximum tokens to generate"},
"model": {"type": "string", "description": "Model name (e.g., llama3.2:latest)"},
"stream": {"type": "boolean", "description": "Whether to stream the response"}
},
"required": ["messages", "model", "stream"]
}),
requires_network: true,
requires_filesystem: vec![],
}
}
/// Tool descriptor for resources/get (read file)
fn resources_get_descriptor() -> McpToolDescriptor {
McpToolDescriptor {
name: "resources/get".to_string(),
description: "Read and return the TEXT CONTENTS of a single FILE. Use this to read the contents of code files, config files, or text documents. Do NOT use for directories.".to_string(),
input_schema: json!({
"type": "object",
"properties": {
"path": {"type": "string", "description": "Path to the FILE (not directory) to read"}
},
"required": ["path"]
}),
requires_network: false,
requires_filesystem: vec!["read".to_string()],
}
}
/// Tool descriptor for resources/list (list directory)
fn resources_list_descriptor() -> McpToolDescriptor {
McpToolDescriptor {
name: "resources/list".to_string(),
description: "List the NAMES of all files and directories in a directory. Use this to see what files exist in a folder, or to list directory contents. Returns an array of file/directory names.".to_string(),
input_schema: json!({
"type": "object",
"properties": {
"path": {"type": "string", "description": "Path to the DIRECTORY to list (use '.' for current directory)"}
}
}),
requires_network: false,
requires_filesystem: vec!["read".to_string()],
}
}
fn provider_from_config() -> Result<Arc<dyn Provider>, RpcError> {
let mut config = OwlenConfig::load(None).unwrap_or_default();
let requested_name =
env::var("OWLEN_PROVIDER").unwrap_or_else(|_| config.general.default_provider.clone());
let provider_key = canonical_provider_name(&requested_name);
if config.provider(&provider_key).is_none() {
ensure_provider_config(&mut config, &provider_key);
}
let provider_cfg: ProviderConfig =
config.provider(&provider_key).cloned().ok_or_else(|| {
RpcError::internal_error(format!(
"Provider '{provider_key}' not found in configuration"
))
})?;
match provider_cfg.provider_type.as_str() {
"ollama" | "ollama_cloud" => {
let provider = OllamaProvider::from_config(&provider_cfg, Some(&config.general))
.map_err(|e| {
RpcError::internal_error(format!(
"Failed to init Ollama provider from config: {e}"
))
})?;
Ok(Arc::new(provider) as Arc<dyn Provider>)
}
other => Err(RpcError::internal_error(format!(
"Unsupported provider type '{other}' for MCP LLM server"
))),
}
}
fn create_provider() -> Result<Arc<dyn Provider>, RpcError> {
if let Ok(url) = env::var("OLLAMA_URL") {
let provider = OllamaProvider::new(&url).map_err(|e| {
RpcError::internal_error(format!("Failed to init Ollama provider: {e}"))
})?;
return Ok(Arc::new(provider) as Arc<dyn Provider>);
}
provider_from_config()
}
fn canonical_provider_name(name: &str) -> String {
let normalized = name.trim().to_ascii_lowercase().replace('-', "_");
match normalized.as_str() {
"" => "ollama_local".to_string(),
"ollama" | "ollama_local" => "ollama_local".to_string(),
"ollama_cloud" => "ollama_cloud".to_string(),
other => other.to_string(),
}
}
async fn handle_generate_text(args: GenerateTextArgs) -> Result<String, RpcError> {
let provider = create_provider()?;
let parameters = ChatParameters {
temperature: args.temperature,
max_tokens: args.max_tokens.map(|v| v as u32),
stream: args.stream,
extra: HashMap::new(),
};
let request = ChatRequest {
model: args.model,
messages: args.messages,
parameters,
tools: None,
};
// Use streaming API and collect output
let mut stream = provider
.stream_prompt(request)
.await
.map_err(|e| RpcError::internal_error(format!("Chat request failed: {}", e)))?;
let mut content = String::new();
while let Some(chunk) = stream.next().await {
match chunk {
Ok(resp) => {
content.push_str(&resp.message.content);
if resp.is_final {
break;
}
}
Err(e) => {
return Err(RpcError::internal_error(format!("Stream error: {}", e)));
}
}
}
Ok(content)
}
async fn handle_request(req: &RpcRequest) -> Result<Value, RpcError> {
match req.method.as_str() {
methods::INITIALIZE => {
let params = req
.params
.as_ref()
.ok_or_else(|| RpcError::invalid_params("Missing params for initialize"))?;
let init: InitializeParams = serde_json::from_value(params.clone())
.map_err(|e| RpcError::invalid_params(format!("Invalid init params: {}", e)))?;
if !init.protocol_version.eq(PROTOCOL_VERSION) {
return Err(RpcError::new(
ErrorCode::INVALID_REQUEST,
format!(
"Incompatible protocol version. Client: {}, Server: {}",
init.protocol_version, PROTOCOL_VERSION
),
));
}
let result = InitializeResult {
protocol_version: PROTOCOL_VERSION.to_string(),
server_info: ServerInfo {
name: "owlen-mcp-llm-server".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
},
capabilities: ServerCapabilities {
supports_tools: Some(true),
supports_resources: Some(false),
supports_streaming: Some(true),
},
};
serde_json::to_value(result).map_err(|e| {
RpcError::internal_error(format!("Failed to serialize init result: {}", e))
})
}
methods::TOOLS_LIST => {
let tools = vec![
generate_text_descriptor(),
resources_get_descriptor(),
resources_list_descriptor(),
];
Ok(json!(tools))
}
// New method to list available Ollama models via the provider.
methods::MODELS_LIST => {
let provider = create_provider()?;
let models = provider
.list_models()
.await
.map_err(|e| RpcError::internal_error(format!("Failed to list models: {}", e)))?;
serde_json::to_value(models).map_err(|e| {
RpcError::internal_error(format!("Failed to serialize model list: {}", e))
})
}
methods::TOOLS_CALL => {
// For streaming we will send incremental notifications directly from here.
// The caller (main loop) will handle writing the final response.
Err(RpcError::internal_error(
"TOOLS_CALL should be handled in main loop for streaming",
))
}
_ => Err(RpcError::method_not_found(&req.method)),
}
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let root = env::current_dir()?; // not used but kept for parity
let mut stdin = io::BufReader::new(io::stdin());
let mut stdout = io::stdout();
loop {
let mut line = String::new();
match stdin.read_line(&mut line).await {
Ok(0) => break,
Ok(_) => {
let req: RpcRequest = match serde_json::from_str(&line) {
Ok(r) => r,
Err(e) => {
let err = RpcErrorResponse::new(
RequestId::Number(0),
RpcError::parse_error(format!("Parse error: {}", e)),
);
let s = serde_json::to_string(&err)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let id = req.id.clone();
// Streaming tool calls (generate_text) are handled specially to emit incremental notifications.
if req.method == methods::TOOLS_CALL {
// Parse the tool call
let params = match &req.params {
Some(p) => p,
None => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::invalid_params("Missing params for tool call"),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let call: McpToolCall = match serde_json::from_value(params.clone()) {
Ok(c) => c,
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::invalid_params(format!("Invalid tool call: {}", e)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
// Dispatch based on the requested tool name.
// Handle resources tools manually.
if call.name.starts_with("resources/get") {
let path = call
.arguments
.get("path")
.and_then(|v| v.as_str())
.unwrap_or("");
match std::fs::read_to_string(path) {
Ok(content) => {
let response = McpToolResponse {
name: call.name,
success: true,
output: json!(content),
metadata: HashMap::new(),
duration_ms: 0,
};
let payload = match serde_json::to_value(&response) {
Ok(value) => value,
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::internal_error(format!(
"Failed to serialize resource response: {}",
e
)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let final_resp = RpcResponse::new(id.clone(), payload);
let s = serde_json::to_string(&final_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::internal_error(format!("Failed to read file: {}", e)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
}
}
if call.name.starts_with("resources/list") {
let path = call
.arguments
.get("path")
.and_then(|v| v.as_str())
.unwrap_or(".");
match std::fs::read_dir(path) {
Ok(entries) => {
let mut names = Vec::new();
for entry in entries.flatten() {
if let Some(name) = entry.file_name().to_str() {
names.push(name.to_string());
}
}
let response = McpToolResponse {
name: call.name,
success: true,
output: json!(names),
metadata: HashMap::new(),
duration_ms: 0,
};
let payload = match serde_json::to_value(&response) {
Ok(value) => value,
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::internal_error(format!(
"Failed to serialize directory listing: {}",
e
)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let final_resp = RpcResponse::new(id.clone(), payload);
let s = serde_json::to_string(&final_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::internal_error(format!("Failed to list dir: {}", e)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
}
}
// Expect generate_text tool for the remaining path.
if call.name != "generate_text" {
let err_resp =
RpcErrorResponse::new(id.clone(), RpcError::tool_not_found(&call.name));
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
let args: GenerateTextArgs =
match serde_json::from_value(call.arguments.clone()) {
Ok(a) => a,
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::invalid_params(format!("Invalid arguments: {}", e)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
// Initialize provider and start streaming
let provider = match create_provider() {
Ok(p) => p,
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::internal_error(format!(
"Failed to initialize provider: {:?}",
e
)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let parameters = ChatParameters {
temperature: args.temperature,
max_tokens: args.max_tokens.map(|v| v as u32),
stream: true,
extra: HashMap::new(),
};
let request = ChatRequest {
model: args.model,
messages: args.messages,
parameters,
tools: None,
};
let mut stream = match provider.stream_prompt(request).await {
Ok(s) => s,
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::internal_error(format!("Chat request failed: {}", e)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
// Accumulate full content while sending incremental progress notifications
let mut final_content = String::new();
while let Some(chunk) = stream.next().await {
match chunk {
Ok(resp) => {
// Append chunk to the final content buffer
final_content.push_str(&resp.message.content);
// Emit a progress notification for the UI
let notif = RpcNotification::new(
"tools/call/progress",
Some(json!({ "content": resp.message.content })),
);
let s = serde_json::to_string(&notif)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
if resp.is_final {
break;
}
}
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::internal_error(format!("Stream error: {}", e)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
break;
}
}
}
// After streaming, send the final tool response containing the full content
let final_output = final_content.clone();
let response = McpToolResponse {
name: call.name,
success: true,
output: json!(final_output),
metadata: HashMap::new(),
duration_ms: 0,
};
let payload = match serde_json::to_value(&response) {
Ok(value) => value,
Err(e) => {
let err_resp = RpcErrorResponse::new(
id.clone(),
RpcError::internal_error(format!(
"Failed to serialize final streaming response: {}",
e
)),
);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let final_resp = RpcResponse::new(id.clone(), payload);
let s = serde_json::to_string(&final_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
// Nonstreaming requests are handled by the generic handler
match handle_request(&req).await {
Ok(res) => {
let resp = RpcResponse::new(id, res);
let s = serde_json::to_string(&resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
}
Err(err) => {
let err_resp = RpcErrorResponse::new(id, err);
let s = serde_json::to_string(&err_resp)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
}
}
}
Err(e) => {
eprintln!("Read error: {}", e);
break;
}
}
}
Ok(())
}

View File

@@ -1,21 +0,0 @@
[package]
name = "owlen-mcp-prompt-server"
version = "0.1.0"
edition.workspace = true
description = "MCP server that renders prompt templates (YAML) for Owlen"
license = "AGPL-3.0"
[dependencies]
owlen-core = { path = "../../owlen-core" }
serde = { workspace = true }
serde_json = { workspace = true }
serde_yaml = { workspace = true }
tokio = { workspace = true }
anyhow = { workspace = true }
handlebars = { workspace = true }
dirs = { workspace = true }
futures = { workspace = true }
[lib]
name = "owlen_mcp_prompt_server"
path = "src/lib.rs"

View File

@@ -1,415 +0,0 @@
//! MCP server for rendering prompt templates with YAML storage and Handlebars rendering.
//!
//! Templates are stored in `~/.config/owlen/prompts/` as YAML files.
//! Provides full Handlebars templating support for dynamic prompt generation.
use anyhow::{Context, Result};
use handlebars::Handlebars;
use serde::{Deserialize, Serialize};
use serde_json::{Value, json};
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use tokio::sync::RwLock;
use owlen_core::mcp::protocol::{
ErrorCode, InitializeParams, InitializeResult, PROTOCOL_VERSION, RequestId, RpcError,
RpcErrorResponse, RpcRequest, RpcResponse, ServerCapabilities, ServerInfo, methods,
};
use owlen_core::mcp::{McpToolCall, McpToolDescriptor, McpToolResponse};
use tokio::io::{self, AsyncBufReadExt, AsyncWriteExt};
/// Prompt template definition
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PromptTemplate {
/// Template name
pub name: String,
/// Template version
pub version: String,
/// Optional mode restriction
#[serde(skip_serializing_if = "Option::is_none")]
pub mode: Option<String>,
/// Handlebars template content
pub template: String,
/// Template description
#[serde(skip_serializing_if = "Option::is_none")]
pub description: Option<String>,
}
/// Prompt server managing templates
pub struct PromptServer {
templates: Arc<RwLock<HashMap<String, PromptTemplate>>>,
handlebars: Handlebars<'static>,
templates_dir: PathBuf,
}
impl PromptServer {
/// Create a new prompt server
pub fn new() -> Result<Self> {
let templates_dir = Self::get_templates_dir()?;
// Create templates directory if it doesn't exist
if !templates_dir.exists() {
fs::create_dir_all(&templates_dir)?;
Self::create_default_templates(&templates_dir)?;
}
let mut server = Self {
templates: Arc::new(RwLock::new(HashMap::new())),
handlebars: Handlebars::new(),
templates_dir,
};
// Load all templates
server.load_templates()?;
Ok(server)
}
/// Get the templates directory path
fn get_templates_dir() -> Result<PathBuf> {
let config_dir = dirs::config_dir().context("Could not determine config directory")?;
Ok(config_dir.join("owlen").join("prompts"))
}
/// Create default template examples
fn create_default_templates(dir: &Path) -> Result<()> {
let chat_mode_system = PromptTemplate {
name: "chat_mode_system".to_string(),
version: "1.0".to_string(),
mode: Some("chat".to_string()),
description: Some("System prompt for chat mode".to_string()),
template: r#"You are Owlen, a helpful AI assistant. You have access to these tools:
{{#each tools}}
- {{name}}: {{description}}
{{/each}}
Use the ReAct pattern:
THOUGHT: Your reasoning
ACTION: tool_name
ACTION_INPUT: {"param": "value"}
When you have enough information:
FINAL_ANSWER: Your response"#
.to_string(),
};
let code_mode_system = PromptTemplate {
name: "code_mode_system".to_string(),
version: "1.0".to_string(),
mode: Some("code".to_string()),
description: Some("System prompt for code mode".to_string()),
template: r#"You are Owlen in code mode, with full development capabilities. You have access to:
{{#each tools}}
- {{name}}: {{description}}
{{/each}}
Use the ReAct pattern to solve coding tasks:
THOUGHT: Analyze what needs to be done
ACTION: tool_name (compile_project, run_tests, format_code, lint_code, etc.)
ACTION_INPUT: {"param": "value"}
Continue iterating until the task is complete, then provide:
FINAL_ANSWER: Summary of what was done"#
.to_string(),
};
// Save templates
let chat_path = dir.join("chat_mode_system.yaml");
let code_path = dir.join("code_mode_system.yaml");
fs::write(chat_path, serde_yaml::to_string(&chat_mode_system)?)?;
fs::write(code_path, serde_yaml::to_string(&code_mode_system)?)?;
Ok(())
}
/// Load all templates from the templates directory
fn load_templates(&mut self) -> Result<()> {
let entries = fs::read_dir(&self.templates_dir)?;
for entry in entries {
let entry = entry?;
let path = entry.path();
if path.extension().and_then(|s| s.to_str()) == Some("yaml")
|| path.extension().and_then(|s| s.to_str()) == Some("yml")
{
match self.load_template(&path) {
Ok(template) => {
// Register with Handlebars
if let Err(e) = self
.handlebars
.register_template_string(&template.name, &template.template)
{
eprintln!(
"Warning: Failed to register template {}: {}",
template.name, e
);
} else {
let mut templates = self.templates.blocking_write();
templates.insert(template.name.clone(), template);
}
}
Err(e) => {
eprintln!("Warning: Failed to load template {:?}: {}", path, e);
}
}
}
}
Ok(())
}
/// Load a single template from file
fn load_template(&self, path: &Path) -> Result<PromptTemplate> {
let content = fs::read_to_string(path)?;
let template: PromptTemplate = serde_yaml::from_str(&content)?;
Ok(template)
}
/// Get a template by name
pub async fn get_template(&self, name: &str) -> Option<PromptTemplate> {
let templates = self.templates.read().await;
templates.get(name).cloned()
}
/// List all available templates
pub async fn list_templates(&self) -> Vec<String> {
let templates = self.templates.read().await;
templates.keys().cloned().collect()
}
/// Render a template with given variables
pub fn render_template(&self, name: &str, vars: &Value) -> Result<String> {
self.handlebars
.render(name, vars)
.context("Failed to render template")
}
/// Reload all templates from disk
pub async fn reload_templates(&mut self) -> Result<()> {
{
let mut templates = self.templates.write().await;
templates.clear();
}
self.handlebars = Handlebars::new();
self.load_templates()
}
}
#[allow(dead_code)]
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let mut stdin = io::BufReader::new(io::stdin());
let mut stdout = io::stdout();
let server = Arc::new(tokio::sync::Mutex::new(PromptServer::new()?));
loop {
let mut line = String::new();
match stdin.read_line(&mut line).await {
Ok(0) => break, // EOF
Ok(_) => {
let req: RpcRequest = match serde_json::from_str(&line) {
Ok(r) => r,
Err(e) => {
let err = RpcErrorResponse::new(
RequestId::Number(0),
RpcError::parse_error(format!("Parse error: {}", e)),
);
let s = serde_json::to_string(&err)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let resp = handle_request(req.clone(), server.clone()).await;
match resp {
Ok(r) => {
let s = serde_json::to_string(&r)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
}
Err(e) => {
let err = RpcErrorResponse::new(req.id.clone(), e);
let s = serde_json::to_string(&err)?;
stdout.write_all(s.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
}
}
}
Err(e) => {
eprintln!("Error reading stdin: {}", e);
break;
}
}
}
Ok(())
}
#[allow(dead_code)]
async fn handle_request(
req: RpcRequest,
server: Arc<tokio::sync::Mutex<PromptServer>>,
) -> Result<RpcResponse, RpcError> {
match req.method.as_str() {
methods::INITIALIZE => {
let params: InitializeParams =
serde_json::from_value(req.params.unwrap_or_else(|| json!({})))
.map_err(|e| RpcError::invalid_params(format!("Invalid init params: {}", e)))?;
if !params.protocol_version.eq(PROTOCOL_VERSION) {
return Err(RpcError::new(
ErrorCode::INVALID_REQUEST,
format!(
"Incompatible protocol version. Client: {}, Server: {}",
params.protocol_version, PROTOCOL_VERSION
),
));
}
let result = InitializeResult {
protocol_version: PROTOCOL_VERSION.to_string(),
server_info: ServerInfo {
name: "owlen-mcp-prompt-server".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
},
capabilities: ServerCapabilities {
supports_tools: Some(true),
supports_resources: Some(false),
supports_streaming: Some(false),
},
};
let payload = serde_json::to_value(result).map_err(|e| {
RpcError::internal_error(format!("Failed to serialize initialize result: {}", e))
})?;
Ok(RpcResponse::new(req.id, payload))
}
methods::TOOLS_LIST => {
let tools = vec![
McpToolDescriptor {
name: "get_prompt".to_string(),
description: "Retrieve a prompt template by name".to_string(),
input_schema: json!({
"type": "object",
"properties": {
"name": {"type": "string", "description": "Template name"}
},
"required": ["name"]
}),
requires_network: false,
requires_filesystem: vec![],
},
McpToolDescriptor {
name: "render_prompt".to_string(),
description: "Render a prompt template with Handlebars variables".to_string(),
input_schema: json!({
"type": "object",
"properties": {
"name": {"type": "string", "description": "Template name"},
"vars": {"type": "object", "description": "Variables for Handlebars rendering"}
},
"required": ["name"]
}),
requires_network: false,
requires_filesystem: vec![],
},
McpToolDescriptor {
name: "list_prompts".to_string(),
description: "List all available prompt templates".to_string(),
input_schema: json!({"type": "object", "properties": {}}),
requires_network: false,
requires_filesystem: vec![],
},
McpToolDescriptor {
name: "reload_prompts".to_string(),
description: "Reload all prompts from disk".to_string(),
input_schema: json!({"type": "object", "properties": {}}),
requires_network: false,
requires_filesystem: vec![],
},
];
Ok(RpcResponse::new(req.id, json!(tools)))
}
methods::TOOLS_CALL => {
let call: McpToolCall = serde_json::from_value(req.params.unwrap_or_else(|| json!({})))
.map_err(|e| RpcError::invalid_params(format!("Invalid tool call: {}", e)))?;
let result = match call.name.as_str() {
"get_prompt" => {
let name = call
.arguments
.get("name")
.and_then(|v| v.as_str())
.ok_or_else(|| RpcError::invalid_params("Missing 'name' parameter"))?;
let srv = server.lock().await;
match srv.get_template(name).await {
Some(template) => match serde_json::to_value(template) {
Ok(serialized) => {
json!({"success": true, "template": serialized})
}
Err(e) => {
return Err(RpcError::internal_error(format!(
"Failed to serialize template '{}': {}",
name, e
)));
}
},
None => json!({"success": false, "error": "Template not found"}),
}
}
"render_prompt" => {
let name = call
.arguments
.get("name")
.and_then(|v| v.as_str())
.ok_or_else(|| RpcError::invalid_params("Missing 'name' parameter"))?;
let default_vars = json!({});
let vars = call.arguments.get("vars").unwrap_or(&default_vars);
let srv = server.lock().await;
match srv.render_template(name, vars) {
Ok(rendered) => json!({"success": true, "rendered": rendered}),
Err(e) => json!({"success": false, "error": e.to_string()}),
}
}
"list_prompts" => {
let srv = server.lock().await;
let templates = srv.list_templates().await;
json!({"success": true, "templates": templates})
}
"reload_prompts" => {
let mut srv = server.lock().await;
match srv.reload_templates().await {
Ok(_) => json!({"success": true, "message": "Prompts reloaded"}),
Err(e) => json!({"success": false, "error": e.to_string()}),
}
}
_ => return Err(RpcError::method_not_found(&call.name)),
};
let resp = McpToolResponse {
name: call.name,
success: result
.get("success")
.and_then(|v| v.as_bool())
.unwrap_or(false),
output: result,
metadata: HashMap::new(),
duration_ms: 0,
};
let payload = serde_json::to_value(resp).map_err(|e| {
RpcError::internal_error(format!("Failed to serialize tool response: {}", e))
})?;
Ok(RpcResponse::new(req.id, payload))
}
_ => Err(RpcError::method_not_found(&req.method)),
}
}

View File

@@ -1,3 +0,0 @@
prompt: |
Hello {{name}}!
Your role is: {{role}}.

View File

@@ -1,12 +0,0 @@
[package]
name = "owlen-mcp-server"
version = "0.1.0"
edition.workspace = true
[dependencies]
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
anyhow = { workspace = true }
path-clean = "1.0"
owlen-core = { path = "../../owlen-core" }

View File

@@ -1,246 +0,0 @@
use owlen_core::mcp::protocol::{
ErrorCode, InitializeParams, InitializeResult, PROTOCOL_VERSION, RequestId, RpcError,
RpcErrorResponse, RpcRequest, RpcResponse, ServerCapabilities, ServerInfo, is_compatible,
};
use path_clean::PathClean;
use serde::Deserialize;
use std::env;
use std::fs;
use std::path::{Path, PathBuf};
use tokio::io::{self, AsyncBufReadExt, AsyncWriteExt};
#[derive(Deserialize)]
struct FileArgs {
path: String,
}
#[derive(Deserialize)]
struct WriteArgs {
path: String,
content: String,
}
async fn handle_request(req: &RpcRequest, root: &Path) -> Result<serde_json::Value, RpcError> {
match req.method.as_str() {
"initialize" => {
let params = req
.params
.as_ref()
.ok_or_else(|| RpcError::invalid_params("Missing params for initialize"))?;
let init_params: InitializeParams =
serde_json::from_value(params.clone()).map_err(|e| {
RpcError::invalid_params(format!("Invalid initialize params: {}", e))
})?;
// Check protocol version compatibility
if !is_compatible(&init_params.protocol_version, PROTOCOL_VERSION) {
return Err(RpcError::new(
ErrorCode::INVALID_REQUEST,
format!(
"Incompatible protocol version. Client: {}, Server: {}",
init_params.protocol_version, PROTOCOL_VERSION
),
));
}
// Build initialization result
let result = InitializeResult {
protocol_version: PROTOCOL_VERSION.to_string(),
server_info: ServerInfo {
name: "owlen-mcp-server".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
},
capabilities: ServerCapabilities {
supports_tools: Some(false),
supports_resources: Some(true), // Supports read, write, delete
supports_streaming: Some(false),
},
};
Ok(serde_json::to_value(result).map_err(|e| {
RpcError::internal_error(format!("Failed to serialize result: {}", e))
})?)
}
"resources/list" => {
let params = req
.params
.as_ref()
.ok_or_else(|| RpcError::invalid_params("Missing params"))?;
let args: FileArgs = serde_json::from_value(params.clone())
.map_err(|e| RpcError::invalid_params(format!("Invalid params: {}", e)))?;
resources_list(&args.path, root).await
}
"resources/get" => {
let params = req
.params
.as_ref()
.ok_or_else(|| RpcError::invalid_params("Missing params"))?;
let args: FileArgs = serde_json::from_value(params.clone())
.map_err(|e| RpcError::invalid_params(format!("Invalid params: {}", e)))?;
resources_get(&args.path, root).await
}
"resources/write" => {
let params = req
.params
.as_ref()
.ok_or_else(|| RpcError::invalid_params("Missing params"))?;
let args: WriteArgs = serde_json::from_value(params.clone())
.map_err(|e| RpcError::invalid_params(format!("Invalid params: {}", e)))?;
resources_write(&args.path, &args.content, root).await
}
"resources/delete" => {
let params = req
.params
.as_ref()
.ok_or_else(|| RpcError::invalid_params("Missing params"))?;
let args: FileArgs = serde_json::from_value(params.clone())
.map_err(|e| RpcError::invalid_params(format!("Invalid params: {}", e)))?;
resources_delete(&args.path, root).await
}
_ => Err(RpcError::method_not_found(&req.method)),
}
}
fn sanitize_path(path: &str, root: &Path) -> Result<PathBuf, RpcError> {
let path = Path::new(path);
let path = if path.is_absolute() {
path.strip_prefix("/")
.map_err(|_| RpcError::invalid_params("Invalid path"))?
.to_path_buf()
} else {
path.to_path_buf()
};
let full_path = root.join(path).clean();
if !full_path.starts_with(root) {
return Err(RpcError::path_traversal());
}
Ok(full_path)
}
async fn resources_list(path: &str, root: &Path) -> Result<serde_json::Value, RpcError> {
let full_path = sanitize_path(path, root)?;
let entries = fs::read_dir(full_path).map_err(|e| {
RpcError::new(
ErrorCode::RESOURCE_NOT_FOUND,
format!("Failed to read directory: {}", e),
)
})?;
let mut result = Vec::new();
for entry in entries {
let entry = entry.map_err(|e| {
RpcError::internal_error(format!("Failed to read directory entry: {}", e))
})?;
result.push(entry.file_name().to_string_lossy().to_string());
}
Ok(serde_json::json!(result))
}
async fn resources_get(path: &str, root: &Path) -> Result<serde_json::Value, RpcError> {
let full_path = sanitize_path(path, root)?;
let content = fs::read_to_string(full_path).map_err(|e| {
RpcError::new(
ErrorCode::RESOURCE_NOT_FOUND,
format!("Failed to read file: {}", e),
)
})?;
Ok(serde_json::json!(content))
}
async fn resources_write(
path: &str,
content: &str,
root: &Path,
) -> Result<serde_json::Value, RpcError> {
let full_path = sanitize_path(path, root)?;
// Ensure parent directory exists
if let Some(parent) = full_path.parent() {
std::fs::create_dir_all(parent).map_err(|e| {
RpcError::internal_error(format!("Failed to create parent directories: {}", e))
})?;
}
std::fs::write(full_path, content)
.map_err(|e| RpcError::internal_error(format!("Failed to write file: {}", e)))?;
Ok(serde_json::json!(null))
}
async fn resources_delete(path: &str, root: &Path) -> Result<serde_json::Value, RpcError> {
let full_path = sanitize_path(path, root)?;
if full_path.is_file() {
std::fs::remove_file(full_path)
.map_err(|e| RpcError::internal_error(format!("Failed to delete file: {}", e)))?;
Ok(serde_json::json!(null))
} else {
Err(RpcError::new(
ErrorCode::RESOURCE_NOT_FOUND,
"Path does not refer to a file",
))
}
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let root = env::current_dir()?;
let mut stdin = io::BufReader::new(io::stdin());
let mut stdout = io::stdout();
loop {
let mut line = String::new();
match stdin.read_line(&mut line).await {
Ok(0) => {
// EOF
break;
}
Ok(_) => {
let req: RpcRequest = match serde_json::from_str(&line) {
Ok(req) => req,
Err(e) => {
let err_resp = RpcErrorResponse::new(
RequestId::Number(0),
RpcError::parse_error(format!("Parse error: {}", e)),
);
let resp_str = serde_json::to_string(&err_resp)?;
stdout.write_all(resp_str.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
continue;
}
};
let request_id = req.id.clone();
match handle_request(&req, &root).await {
Ok(result) => {
let resp = RpcResponse::new(request_id, result);
let resp_str = serde_json::to_string(&resp)?;
stdout.write_all(resp_str.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
}
Err(error) => {
let err_resp = RpcErrorResponse::new(request_id, error);
let resp_str = serde_json::to_string(&err_resp)?;
stdout.write_all(resp_str.as_bytes()).await?;
stdout.write_all(b"\n").await?;
stdout.flush().await?;
}
}
}
Err(e) => {
// Handle read error
eprintln!("Error reading from stdin: {}", e);
break;
}
}
}
Ok(())
}

View File

@@ -1,60 +0,0 @@
[package]
name = "owlen-cli"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
homepage.workspace = true
description = "Command-line interface for OWLEN LLM client"
[features]
default = ["chat-client"]
chat-client = ["owlen-tui"]
[[bin]]
name = "owlen"
path = "src/main.rs"
required-features = ["chat-client"]
[[bin]]
name = "owlen-code"
path = "src/code_main.rs"
required-features = ["chat-client"]
[[bin]]
name = "owlen-agent"
path = "src/agent_main.rs"
required-features = ["chat-client"]
[dependencies]
owlen-core = { path = "../owlen-core" }
owlen-providers = { path = "../owlen-providers" }
# Optional TUI dependency, enabled by the "chat-client" feature.
owlen-tui = { path = "../owlen-tui", optional = true }
log = { workspace = true }
async-trait = { workspace = true }
futures = { workspace = true }
# CLI framework
clap = { workspace = true, features = ["derive"] }
# Async runtime
tokio = { workspace = true }
tokio-util = { workspace = true }
# TUI framework
ratatui = { workspace = true }
crossterm = { workspace = true }
# Utilities
anyhow = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
regex = { workspace = true }
thiserror = { workspace = true }
dirs = { workspace = true }
[dev-dependencies]
tokio = { workspace = true }
tokio-test = { workspace = true }

View File

@@ -1,15 +0,0 @@
# Owlen CLI
This crate is the command-line entry point for the Owlen application.
It is responsible for:
- Parsing command-line arguments.
- Loading the configuration.
- Initializing the providers.
- Starting the `owlen-tui` application.
There are two binaries:
- `owlen`: The main chat application.
- `owlen-code`: A specialized version for code-related tasks.

View File

@@ -1,31 +0,0 @@
use std::process::Command;
fn main() {
const MIN_VERSION: (u32, u32, u32) = (1, 75, 0);
let rustc = std::env::var("RUSTC").unwrap_or_else(|_| "rustc".into());
let output = Command::new(&rustc)
.arg("--version")
.output()
.expect("failed to invoke rustc");
let version_line = String::from_utf8_lossy(&output.stdout);
let version_str = version_line.split_whitespace().nth(1).unwrap_or("0.0.0");
let sanitized = version_str.split('-').next().unwrap_or(version_str);
let mut parts = sanitized
.split('.')
.map(|part| part.parse::<u32>().unwrap_or(0));
let current = (
parts.next().unwrap_or(0),
parts.next().unwrap_or(0),
parts.next().unwrap_or(0),
);
if current < MIN_VERSION {
panic!(
"owlen requires rustc {}.{}.{} or newer (found {version_line})",
MIN_VERSION.0, MIN_VERSION.1, MIN_VERSION.2
);
}
}

View File

@@ -1,61 +0,0 @@
//! Simple entry point for the ReAct agentic executor.
//!
//! Usage: `owlen-agent "<prompt>" [--model <model>] [--max-iter <n>]`
//!
//! This binary demonstrates Phase4 without the full TUI. It creates an
//! OllamaProvider, a RemoteMcpClient, runs the AgentExecutor and prints the
//! final answer.
use std::sync::Arc;
use clap::Parser;
use owlen_cli::agent::{AgentConfig, AgentExecutor};
use owlen_core::mcp::remote_client::RemoteMcpClient;
/// Commandline arguments for the agent binary.
#[derive(Parser, Debug)]
#[command(
name = "owlen-agent",
author,
version,
about = "Run the ReAct agent via MCP"
)]
struct Args {
/// The initial user query.
prompt: String,
/// Model to use (defaults to Ollama default).
#[arg(long)]
model: Option<String>,
/// Maximum ReAct iterations.
#[arg(long, default_value_t = 10)]
max_iter: usize,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let args = Args::parse();
// Initialise the MCP LLM client it implements Provider and talks to the
// MCP LLM server which wraps Ollama. This ensures all communication goes
// through the MCP architecture (Phase 10 requirement).
let provider = Arc::new(RemoteMcpClient::new()?);
// The MCP client also serves as the tool client for resource operations
let mcp_client = Arc::clone(&provider) as Arc<RemoteMcpClient>;
let config = AgentConfig {
max_iterations: args.max_iter,
model: args.model.unwrap_or_else(|| "llama3.2:latest".to_string()),
..AgentConfig::default()
};
let executor = AgentExecutor::new(provider, mcp_client, config);
match executor.run(args.prompt).await {
Ok(result) => {
println!("\n✓ Agent completed in {} iterations", result.iterations);
println!("\nFinal answer:\n{}", result.answer);
Ok(())
}
Err(e) => Err(anyhow::anyhow!(e)),
}
}

View File

@@ -1,326 +0,0 @@
use std::borrow::Cow;
use std::io;
use std::sync::Arc;
use anyhow::{Result, anyhow};
use async_trait::async_trait;
use crossterm::{
event::{DisableBracketedPaste, DisableMouseCapture, EnableBracketedPaste, EnableMouseCapture},
execute,
terminal::{EnterAlternateScreen, LeaveAlternateScreen, disable_raw_mode, enable_raw_mode},
};
use futures::stream;
use owlen_core::{
ChatStream, Error, Provider,
config::{Config, McpMode},
mcp::remote_client::RemoteMcpClient,
mode::Mode,
provider::ProviderManager,
providers::OllamaProvider,
session::{ControllerEvent, SessionController},
storage::StorageManager,
types::{ChatRequest, ChatResponse, Message, ModelInfo},
};
use owlen_tui::{
ChatApp, SessionEvent,
app::App as RuntimeApp,
config,
tui_controller::{TuiController, TuiRequest},
ui,
};
use ratatui::{Terminal, prelude::CrosstermBackend};
use tokio::sync::mpsc;
use crate::commands::cloud::{load_runtime_credentials, set_env_var};
pub async fn launch(initial_mode: Mode) -> Result<()> {
set_env_var("OWLEN_AUTO_CONSENT", "1");
let color_support = detect_terminal_color_support();
let mut cfg = config::try_load_config().unwrap_or_default();
let _ = cfg.refresh_mcp_servers(None);
if let Some(previous_theme) = apply_terminal_theme(&mut cfg, &color_support) {
let term_label = match &color_support {
TerminalColorSupport::Limited { term } => Cow::from(term.as_str()),
TerminalColorSupport::Full => Cow::from("current terminal"),
};
eprintln!(
"Terminal '{}' lacks full 256-color support. Using '{}' theme instead of '{}'.",
term_label, BASIC_THEME_NAME, previous_theme
);
} else if let TerminalColorSupport::Limited { term } = &color_support {
eprintln!(
"Warning: terminal '{}' may not fully support 256-color themes.",
term
);
}
cfg.validate()?;
let storage = Arc::new(StorageManager::new().await?);
load_runtime_credentials(&mut cfg, storage.clone()).await?;
let (tui_tx, _tui_rx) = mpsc::unbounded_channel::<TuiRequest>();
let tui_controller = Arc::new(TuiController::new(tui_tx));
let provider = build_provider(&cfg)?;
let mut offline_notice: Option<String> = None;
let provider = match provider.health_check().await {
Ok(_) => provider,
Err(err) => {
let hint = if matches!(cfg.mcp.mode, McpMode::RemotePreferred | McpMode::RemoteOnly)
&& !cfg.effective_mcp_servers().is_empty()
{
"Ensure the configured MCP server is running and reachable."
} else {
"Ensure Ollama is running (`ollama serve`) and reachable at the configured base_url."
};
let notice =
format!("Provider health check failed: {err}. {hint} Continuing in offline mode.");
eprintln!("{notice}");
offline_notice = Some(notice.clone());
let fallback_model = cfg
.general
.default_model
.clone()
.unwrap_or_else(|| "offline".to_string());
Arc::new(OfflineProvider::new(notice, fallback_model)) as Arc<dyn Provider>
}
};
let (controller_event_tx, controller_event_rx) = mpsc::unbounded_channel::<ControllerEvent>();
let controller = SessionController::new(
provider,
cfg,
storage.clone(),
tui_controller,
false,
Some(controller_event_tx),
)
.await?;
let provider_manager = Arc::new(ProviderManager::default());
let mut runtime = RuntimeApp::new(provider_manager);
let (mut app, mut session_rx) = ChatApp::new(controller, controller_event_rx).await?;
app.initialize_models().await?;
if let Some(notice) = offline_notice.clone() {
app.set_status_message(&notice);
app.set_system_status(notice);
}
app.set_mode(initial_mode).await;
enable_raw_mode()?;
let mut stdout = io::stdout();
execute!(
stdout,
EnterAlternateScreen,
EnableMouseCapture,
EnableBracketedPaste
)?;
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
let result = run_app(&mut terminal, &mut runtime, &mut app, &mut session_rx).await;
config::save_config(&app.config())?;
disable_raw_mode()?;
execute!(
terminal.backend_mut(),
LeaveAlternateScreen,
DisableMouseCapture,
DisableBracketedPaste
)?;
terminal.show_cursor()?;
if let Err(err) = result {
println!("{err:?}");
}
Ok(())
}
fn build_provider(cfg: &Config) -> Result<Arc<dyn Provider>> {
match cfg.mcp.mode {
McpMode::RemotePreferred => {
let remote_result = if let Some(mcp_server) = cfg.effective_mcp_servers().first() {
RemoteMcpClient::new_with_config(mcp_server)
} else {
RemoteMcpClient::new()
};
match remote_result {
Ok(client) => Ok(Arc::new(client) as Arc<dyn Provider>),
Err(err) if cfg.mcp.allow_fallback => {
log::warn!(
"Remote MCP client unavailable ({}); falling back to local provider.",
err
);
build_local_provider(cfg)
}
Err(err) => Err(anyhow!(err)),
}
}
McpMode::RemoteOnly => {
let mcp_server = cfg.effective_mcp_servers().first().ok_or_else(|| {
anyhow!("[[mcp_servers]] must be configured when [mcp].mode = \"remote_only\"")
})?;
let client = RemoteMcpClient::new_with_config(mcp_server)?;
Ok(Arc::new(client) as Arc<dyn Provider>)
}
McpMode::LocalOnly | McpMode::Legacy => build_local_provider(cfg),
McpMode::Disabled => Err(anyhow!(
"MCP mode 'disabled' is not supported by the owlen TUI"
)),
}
}
fn build_local_provider(cfg: &Config) -> Result<Arc<dyn Provider>> {
let provider_name = cfg.general.default_provider.clone();
let provider_cfg = cfg.provider(&provider_name).ok_or_else(|| {
anyhow!(format!(
"No provider configuration found for '{provider_name}' in [providers]"
))
})?;
match provider_cfg.provider_type.as_str() {
"ollama" | "ollama_cloud" => {
let provider = OllamaProvider::from_config(provider_cfg, Some(&cfg.general))?;
Ok(Arc::new(provider) as Arc<dyn Provider>)
}
other => Err(anyhow!(format!(
"Provider type '{other}' is not supported in legacy/local MCP mode"
))),
}
}
const BASIC_THEME_NAME: &str = "ansi_basic";
#[derive(Debug, Clone)]
enum TerminalColorSupport {
Full,
Limited { term: String },
}
fn detect_terminal_color_support() -> TerminalColorSupport {
let term = std::env::var("TERM").unwrap_or_else(|_| "unknown".to_string());
let colorterm = std::env::var("COLORTERM").unwrap_or_default();
let term_lower = term.to_lowercase();
let color_lower = colorterm.to_lowercase();
let supports_extended = term_lower.contains("256color")
|| color_lower.contains("truecolor")
|| color_lower.contains("24bit")
|| color_lower.contains("fullcolor");
if supports_extended {
TerminalColorSupport::Full
} else {
TerminalColorSupport::Limited { term }
}
}
fn apply_terminal_theme(cfg: &mut Config, support: &TerminalColorSupport) -> Option<String> {
match support {
TerminalColorSupport::Full => None,
TerminalColorSupport::Limited { .. } => {
if cfg.ui.theme != BASIC_THEME_NAME {
let previous = std::mem::replace(&mut cfg.ui.theme, BASIC_THEME_NAME.to_string());
Some(previous)
} else {
None
}
}
}
}
struct OfflineProvider {
reason: String,
placeholder_model: String,
}
impl OfflineProvider {
fn new(reason: String, placeholder_model: String) -> Self {
Self {
reason,
placeholder_model,
}
}
fn friendly_response(&self, requested_model: &str) -> ChatResponse {
let mut message = String::new();
message.push_str("⚠️ Owlen is running in offline mode.\n\n");
message.push_str(&self.reason);
if !requested_model.is_empty() && requested_model != self.placeholder_model {
message.push_str(&format!(
"\n\nYou requested model '{}', but no providers are reachable.",
requested_model
));
}
message.push_str(
"\n\nStart your preferred provider (e.g. `ollama serve`) or switch providers with `:provider` once connectivity is restored.",
);
ChatResponse {
message: Message::assistant(message),
usage: None,
is_streaming: false,
is_final: true,
}
}
}
#[async_trait]
impl Provider for OfflineProvider {
fn name(&self) -> &str {
"offline"
}
async fn list_models(&self) -> Result<Vec<ModelInfo>, Error> {
Ok(vec![ModelInfo {
id: self.placeholder_model.clone(),
provider: "offline".to_string(),
name: format!("Offline (fallback: {})", self.placeholder_model),
description: Some("Placeholder model used while no providers are reachable".into()),
context_window: None,
capabilities: vec![],
supports_tools: false,
}])
}
async fn send_prompt(&self, request: ChatRequest) -> Result<ChatResponse, Error> {
Ok(self.friendly_response(&request.model))
}
async fn stream_prompt(&self, request: ChatRequest) -> Result<ChatStream, Error> {
let response = self.friendly_response(&request.model);
Ok(Box::pin(stream::iter(vec![Ok(response)])))
}
async fn health_check(&self) -> Result<(), Error> {
Err(Error::Provider(anyhow!(
"offline provider cannot reach any backing models"
)))
}
fn as_any(&self) -> &(dyn std::any::Any + Send + Sync) {
self
}
}
async fn run_app(
terminal: &mut Terminal<CrosstermBackend<io::Stdout>>,
runtime: &mut RuntimeApp,
app: &mut ChatApp,
session_rx: &mut mpsc::UnboundedReceiver<SessionEvent>,
) -> Result<()> {
let mut render = |terminal: &mut Terminal<CrosstermBackend<io::Stdout>>,
state: &mut ChatApp|
-> Result<()> {
terminal.draw(|f| ui::render_chat(f, state))?;
Ok(())
};
runtime.run(terminal, app, session_rx, &mut render).await?;
Ok(())
}

View File

@@ -1,16 +0,0 @@
//! Owlen CLI entrypoint optimised for code-first workflows.
#![allow(dead_code, unused_imports)]
mod bootstrap;
mod commands;
mod mcp;
use anyhow::Result;
use owlen_core::config as core_config;
use owlen_core::mode::Mode;
use owlen_tui::config;
#[tokio::main(flavor = "multi_thread")]
async fn main() -> Result<()> {
bootstrap::launch(Mode::Code).await
}

View File

@@ -1,479 +0,0 @@
use std::ffi::OsStr;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use anyhow::{Context, Result, anyhow, bail};
use clap::Subcommand;
use owlen_core::LlmProvider;
use owlen_core::ProviderConfig;
use owlen_core::config::{
self as core_config, Config, OLLAMA_CLOUD_API_KEY_ENV, OLLAMA_CLOUD_BASE_URL,
OLLAMA_CLOUD_ENDPOINT_KEY, OLLAMA_MODE_KEY,
};
use owlen_core::credentials::{ApiCredentials, CredentialManager, OLLAMA_CLOUD_CREDENTIAL_ID};
use owlen_core::encryption;
use owlen_core::providers::OllamaProvider;
use owlen_core::storage::StorageManager;
use serde_json::Value;
const DEFAULT_CLOUD_ENDPOINT: &str = OLLAMA_CLOUD_BASE_URL;
const CLOUD_ENDPOINT_KEY: &str = OLLAMA_CLOUD_ENDPOINT_KEY;
const CLOUD_PROVIDER_KEY: &str = "ollama_cloud";
#[derive(Debug, Subcommand)]
pub enum CloudCommand {
/// Configure Ollama Cloud credentials
Setup {
/// API key passed directly on the command line (prompted when omitted)
#[arg(long)]
api_key: Option<String>,
/// Override the cloud endpoint (default: https://ollama.com)
#[arg(long)]
endpoint: Option<String>,
/// Provider name to configure (default: ollama_cloud)
#[arg(long, default_value = "ollama_cloud")]
provider: String,
/// Overwrite the provider base URL with the cloud endpoint
#[arg(long)]
force_cloud_base_url: bool,
},
/// Check connectivity to Ollama Cloud
Status {
/// Provider name to check (default: ollama_cloud)
#[arg(long, default_value = "ollama_cloud")]
provider: String,
},
/// List available cloud-hosted models
Models {
/// Provider name to query (default: ollama_cloud)
#[arg(long, default_value = "ollama_cloud")]
provider: String,
},
/// Remove stored Ollama Cloud credentials
Logout {
/// Provider name to clear (default: ollama_cloud)
#[arg(long, default_value = "ollama_cloud")]
provider: String,
},
}
pub async fn run_cloud_command(command: CloudCommand) -> Result<()> {
match command {
CloudCommand::Setup {
api_key,
endpoint,
provider,
force_cloud_base_url,
} => setup(provider, api_key, endpoint, force_cloud_base_url).await,
CloudCommand::Status { provider } => status(provider).await,
CloudCommand::Models { provider } => models(provider).await,
CloudCommand::Logout { provider } => logout(provider).await,
}
}
async fn setup(
provider: String,
api_key: Option<String>,
endpoint: Option<String>,
force_cloud_base_url: bool,
) -> Result<()> {
let provider = canonical_provider_name(&provider);
let mut config = crate::config::try_load_config().unwrap_or_default();
let endpoint =
normalize_endpoint(&endpoint.unwrap_or_else(|| DEFAULT_CLOUD_ENDPOINT.to_string()));
let base_changed = {
let entry = ensure_provider_entry(&mut config, &provider);
entry.enabled = true;
configure_cloud_endpoint(entry, &endpoint, force_cloud_base_url)
};
let key = match api_key {
Some(value) if !value.trim().is_empty() => value,
_ => {
let prompt = format!("Enter API key for {provider}: ");
encryption::prompt_password(&prompt)?
}
};
if config.privacy.encrypt_local_data {
let storage = Arc::new(StorageManager::new().await?);
let manager = unlock_credential_manager(&config, storage.clone())?;
let credentials = ApiCredentials {
api_key: key.clone(),
endpoint: endpoint.clone(),
};
manager
.store_credentials(OLLAMA_CLOUD_CREDENTIAL_ID, &credentials)
.await?;
// Ensure plaintext key is not persisted to disk.
if let Some(entry) = config.providers.get_mut(&provider) {
entry.api_key = None;
}
} else if let Some(entry) = config.providers.get_mut(&provider) {
entry.api_key = Some(key.clone());
}
crate::config::save_config(&config)?;
println!("Saved Ollama configuration for provider '{provider}'.");
if config.privacy.encrypt_local_data {
println!("API key stored securely in the encrypted credential vault.");
} else {
println!("API key stored in plaintext configuration (encryption disabled).");
}
if !force_cloud_base_url && !base_changed {
println!(
"Local base URL preserved; cloud endpoint stored as {}.",
CLOUD_ENDPOINT_KEY
);
}
Ok(())
}
async fn status(provider: String) -> Result<()> {
let provider = canonical_provider_name(&provider);
let mut config = crate::config::try_load_config().unwrap_or_default();
let storage = Arc::new(StorageManager::new().await?);
let manager = if config.privacy.encrypt_local_data {
Some(unlock_credential_manager(&config, storage.clone())?)
} else {
None
};
let api_key = hydrate_api_key(&mut config, manager.as_ref()).await?;
{
let entry = ensure_provider_entry(&mut config, &provider);
entry.enabled = true;
configure_cloud_endpoint(entry, DEFAULT_CLOUD_ENDPOINT, false);
}
let provider_cfg = config
.provider(&provider)
.cloned()
.ok_or_else(|| anyhow!("Provider '{provider}' is not configured"))?;
let endpoint =
resolve_cloud_endpoint(&provider_cfg).unwrap_or_else(|| DEFAULT_CLOUD_ENDPOINT.to_string());
let mut runtime_cfg = provider_cfg.clone();
runtime_cfg.base_url = Some(endpoint.clone());
runtime_cfg.extra.insert(
OLLAMA_MODE_KEY.to_string(),
Value::String("cloud".to_string()),
);
let ollama = OllamaProvider::from_config(&runtime_cfg, Some(&config.general))
.with_context(|| "Failed to construct Ollama provider. Run `owlen cloud setup` first.")?;
match ollama.health_check().await {
Ok(_) => {
println!("✓ Connected to {provider} ({})", endpoint);
if api_key.is_none() && config.privacy.encrypt_local_data {
println!(
"Warning: No API key stored; connection succeeded via environment variables."
);
}
}
Err(err) => {
println!("✗ Failed to reach {provider}: {err}");
}
}
Ok(())
}
async fn models(provider: String) -> Result<()> {
let provider = canonical_provider_name(&provider);
let mut config = crate::config::try_load_config().unwrap_or_default();
let storage = Arc::new(StorageManager::new().await?);
let manager = if config.privacy.encrypt_local_data {
Some(unlock_credential_manager(&config, storage.clone())?)
} else {
None
};
hydrate_api_key(&mut config, manager.as_ref()).await?;
{
let entry = ensure_provider_entry(&mut config, &provider);
entry.enabled = true;
configure_cloud_endpoint(entry, DEFAULT_CLOUD_ENDPOINT, false);
}
let provider_cfg = config
.provider(&provider)
.cloned()
.ok_or_else(|| anyhow!("Provider '{provider}' is not configured"))?;
let endpoint =
resolve_cloud_endpoint(&provider_cfg).unwrap_or_else(|| DEFAULT_CLOUD_ENDPOINT.to_string());
let mut runtime_cfg = provider_cfg.clone();
runtime_cfg.base_url = Some(endpoint);
runtime_cfg.extra.insert(
OLLAMA_MODE_KEY.to_string(),
Value::String("cloud".to_string()),
);
let ollama = OllamaProvider::from_config(&runtime_cfg, Some(&config.general))
.with_context(|| "Failed to construct Ollama provider. Run `owlen cloud setup` first.")?;
match ollama.list_models().await {
Ok(models) => {
if models.is_empty() {
println!("No cloud models reported by '{}'.", provider);
} else {
println!("Models available via '{}':", provider);
for model in models {
if let Some(description) = &model.description {
println!(" - {} ({})", model.id, description);
} else {
println!(" - {}", model.id);
}
}
}
}
Err(err) => {
bail!("Failed to list models: {err}");
}
}
Ok(())
}
async fn logout(provider: String) -> Result<()> {
let provider = canonical_provider_name(&provider);
let mut config = crate::config::try_load_config().unwrap_or_default();
let storage = Arc::new(StorageManager::new().await?);
if config.privacy.encrypt_local_data {
let manager = unlock_credential_manager(&config, storage.clone())?;
manager
.delete_credentials(OLLAMA_CLOUD_CREDENTIAL_ID)
.await?;
}
if let Some(entry) = config.providers.get_mut(&provider) {
entry.api_key = None;
entry.enabled = false;
}
crate::config::save_config(&config)?;
println!("Cleared credentials for provider '{provider}'.");
Ok(())
}
fn ensure_provider_entry<'a>(config: &'a mut Config, provider: &str) -> &'a mut ProviderConfig {
core_config::ensure_provider_config_mut(config, provider)
}
fn configure_cloud_endpoint(entry: &mut ProviderConfig, endpoint: &str, force: bool) -> bool {
let normalized = normalize_endpoint(endpoint);
let previous_base = entry.base_url.clone();
entry.extra.insert(
CLOUD_ENDPOINT_KEY.to_string(),
Value::String(normalized.clone()),
);
if entry.api_key_env.is_none() {
entry.api_key_env = Some(OLLAMA_CLOUD_API_KEY_ENV.to_string());
}
if force
|| entry
.base_url
.as_ref()
.map(|value| value.trim().is_empty())
.unwrap_or(true)
{
entry.base_url = Some(normalized.clone());
}
if force {
entry.enabled = true;
}
entry.base_url != previous_base
}
fn resolve_cloud_endpoint(cfg: &ProviderConfig) -> Option<String> {
if let Some(value) = cfg
.extra
.get(CLOUD_ENDPOINT_KEY)
.and_then(|value| value.as_str())
.map(normalize_endpoint)
{
return Some(value);
}
cfg.base_url
.as_ref()
.map(|value| value.trim_end_matches('/').to_string())
.filter(|value| !value.is_empty())
}
fn normalize_endpoint(endpoint: &str) -> String {
let trimmed = endpoint.trim().trim_end_matches('/');
if trimmed.is_empty() {
DEFAULT_CLOUD_ENDPOINT.to_string()
} else {
trimmed.to_string()
}
}
fn canonical_provider_name(provider: &str) -> String {
let normalized = provider.trim().to_ascii_lowercase().replace('-', "_");
match normalized.as_str() {
"" => CLOUD_PROVIDER_KEY.to_string(),
"ollama" => CLOUD_PROVIDER_KEY.to_string(),
"ollama_cloud" => CLOUD_PROVIDER_KEY.to_string(),
value => value.to_string(),
}
}
pub(crate) fn set_env_var<K, V>(key: K, value: V)
where
K: AsRef<OsStr>,
V: AsRef<OsStr>,
{
// Safety: the CLI updates process-wide environment variables during startup while no
// other threads are mutating the environment.
unsafe {
std::env::set_var(key, value);
}
}
fn set_env_if_missing(var: &str, value: &str) {
if std::env::var(var)
.map(|v| v.trim().is_empty())
.unwrap_or(true)
{
set_env_var(var, value);
}
}
fn unlock_credential_manager(
config: &Config,
storage: Arc<StorageManager>,
) -> Result<Arc<CredentialManager>> {
if !config.privacy.encrypt_local_data {
bail!("Credential manager requested but encryption is disabled");
}
let secure_path = vault_path(&storage)?;
let handle = unlock_vault(&secure_path)?;
let master_key = Arc::new(handle.data.master_key.clone());
Ok(Arc::new(CredentialManager::new(
storage,
master_key.clone(),
)))
}
fn vault_path(storage: &StorageManager) -> Result<PathBuf> {
let base_dir = storage
.database_path()
.parent()
.map(|p| p.to_path_buf())
.or_else(dirs::data_local_dir)
.unwrap_or_else(|| PathBuf::from("."));
Ok(base_dir.join("encrypted_data.json"))
}
fn unlock_vault(path: &Path) -> Result<encryption::VaultHandle> {
use std::env;
if path.exists() {
if let Some(password) = env::var("OWLEN_MASTER_PASSWORD")
.ok()
.map(|value| value.trim().to_string())
.filter(|password| !password.is_empty())
{
return encryption::unlock_with_password(path.to_path_buf(), &password)
.context("Failed to unlock vault with OWLEN_MASTER_PASSWORD");
}
for attempt in 0..3 {
let password = encryption::prompt_password("Enter master password: ")?;
match encryption::unlock_with_password(path.to_path_buf(), &password) {
Ok(handle) => {
set_env_var("OWLEN_MASTER_PASSWORD", password);
return Ok(handle);
}
Err(err) => {
eprintln!("Failed to unlock vault: {err}");
if attempt == 2 {
return Err(err);
}
}
}
}
bail!("Unable to unlock encrypted credential vault");
}
let handle = encryption::unlock_interactive(path.to_path_buf())?;
if env::var("OWLEN_MASTER_PASSWORD")
.map(|v| v.trim().is_empty())
.unwrap_or(true)
{
let password = encryption::prompt_password("Cache master password for this session: ")?;
set_env_var("OWLEN_MASTER_PASSWORD", password);
}
Ok(handle)
}
async fn hydrate_api_key(
config: &mut Config,
manager: Option<&Arc<CredentialManager>>,
) -> Result<Option<String>> {
let credentials = match manager {
Some(manager) => manager.get_credentials(OLLAMA_CLOUD_CREDENTIAL_ID).await?,
None => None,
};
if let Some(credentials) = credentials {
let key = credentials.api_key.trim().to_string();
if !key.is_empty() {
set_env_if_missing("OLLAMA_API_KEY", &key);
set_env_if_missing("OLLAMA_CLOUD_API_KEY", &key);
}
let cfg = core_config::ensure_provider_config_mut(config, CLOUD_PROVIDER_KEY);
configure_cloud_endpoint(cfg, &credentials.endpoint, false);
return Ok(Some(key));
}
if let Some(key) = config
.provider(CLOUD_PROVIDER_KEY)
.and_then(|cfg| cfg.api_key.as_ref())
.map(|value| value.trim())
.filter(|value| !value.is_empty())
{
set_env_if_missing("OLLAMA_API_KEY", key);
set_env_if_missing("OLLAMA_CLOUD_API_KEY", key);
return Ok(Some(key.to_string()));
}
Ok(None)
}
pub async fn load_runtime_credentials(
config: &mut Config,
storage: Arc<StorageManager>,
) -> Result<()> {
if config.privacy.encrypt_local_data {
let manager = unlock_credential_manager(config, storage.clone())?;
hydrate_api_key(config, Some(&manager)).await?;
} else {
hydrate_api_key(config, None).await?;
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn canonicalises_provider_names() {
assert_eq!(canonical_provider_name("OLLAMA_CLOUD"), CLOUD_PROVIDER_KEY);
assert_eq!(canonical_provider_name(" ollama-cloud"), CLOUD_PROVIDER_KEY);
assert_eq!(canonical_provider_name(""), CLOUD_PROVIDER_KEY);
}
}

View File

@@ -1,4 +0,0 @@
//! Command implementations for the `owlen` CLI.
pub mod cloud;
pub mod providers;

View File

@@ -1,651 +0,0 @@
use std::collections::HashMap;
use std::sync::Arc;
use anyhow::{Result, anyhow};
use clap::{Args, Subcommand};
use owlen_core::ProviderConfig;
use owlen_core::config::{self as core_config, Config};
use owlen_core::provider::{
AnnotatedModelInfo, ModelProvider, ProviderManager, ProviderStatus, ProviderType,
};
use owlen_core::storage::StorageManager;
use owlen_providers::ollama::{OllamaCloudProvider, OllamaLocalProvider};
use owlen_tui::config as tui_config;
use super::cloud;
/// CLI subcommands for provider management.
#[derive(Debug, Subcommand)]
pub enum ProvidersCommand {
/// List configured providers and their metadata.
List,
/// Run health checks against providers.
Status {
/// Optional provider identifier to check.
#[arg(value_name = "PROVIDER")]
provider: Option<String>,
},
/// Enable a provider in the configuration.
Enable {
/// Provider identifier to enable.
provider: String,
},
/// Disable a provider in the configuration.
Disable {
/// Provider identifier to disable.
provider: String,
},
}
/// Arguments for the `owlen models` command.
#[derive(Debug, Default, Args)]
pub struct ModelsArgs {
/// Restrict output to a specific provider.
#[arg(long)]
pub provider: Option<String>,
}
pub async fn run_providers_command(command: ProvidersCommand) -> Result<()> {
match command {
ProvidersCommand::List => list_providers(),
ProvidersCommand::Status { provider } => status_providers(provider.as_deref()).await,
ProvidersCommand::Enable { provider } => toggle_provider(&provider, true),
ProvidersCommand::Disable { provider } => toggle_provider(&provider, false),
}
}
pub async fn run_models_command(args: ModelsArgs) -> Result<()> {
list_models(args.provider.as_deref()).await
}
fn list_providers() -> Result<()> {
let config = tui_config::try_load_config().unwrap_or_default();
let default_provider = canonical_provider_id(&config.general.default_provider);
let mut rows = Vec::new();
for (id, cfg) in &config.providers {
let type_label = describe_provider_type(id, cfg);
let auth_label = describe_auth(cfg, requires_auth(id, cfg));
let enabled = if cfg.enabled { "yes" } else { "no" };
let default = if id == &default_provider { "*" } else { "" };
let base = cfg
.base_url
.as_ref()
.map(|value| value.trim().to_string())
.unwrap_or_else(|| "-".to_string());
rows.push(ProviderListRow {
id: id.to_string(),
type_label,
enabled: enabled.to_string(),
default: default.to_string(),
auth: auth_label,
base_url: base,
});
}
rows.sort_by(|a, b| a.id.cmp(&b.id));
let id_width = rows
.iter()
.map(|row| row.id.len())
.max()
.unwrap_or(8)
.max("Provider".len());
let enabled_width = rows
.iter()
.map(|row| row.enabled.len())
.max()
.unwrap_or(7)
.max("Enabled".len());
let default_width = rows
.iter()
.map(|row| row.default.len())
.max()
.unwrap_or(7)
.max("Default".len());
let type_width = rows
.iter()
.map(|row| row.type_label.len())
.max()
.unwrap_or(4)
.max("Type".len());
let auth_width = rows
.iter()
.map(|row| row.auth.len())
.max()
.unwrap_or(4)
.max("Auth".len());
println!(
"{:<id_width$} {:<enabled_width$} {:<default_width$} {:<type_width$} {:<auth_width$} Base URL",
"Provider",
"Enabled",
"Default",
"Type",
"Auth",
id_width = id_width,
enabled_width = enabled_width,
default_width = default_width,
type_width = type_width,
auth_width = auth_width,
);
for row in rows {
println!(
"{:<id_width$} {:<enabled_width$} {:<default_width$} {:<type_width$} {:<auth_width$} {}",
row.id,
row.enabled,
row.default,
row.type_label,
row.auth,
row.base_url,
id_width = id_width,
enabled_width = enabled_width,
default_width = default_width,
type_width = type_width,
auth_width = auth_width,
);
}
Ok(())
}
async fn status_providers(filter: Option<&str>) -> Result<()> {
let mut config = tui_config::try_load_config().unwrap_or_default();
let filter = filter.map(canonical_provider_id);
verify_provider_filter(&config, filter.as_deref())?;
let storage = Arc::new(StorageManager::new().await?);
cloud::load_runtime_credentials(&mut config, storage.clone()).await?;
let manager = ProviderManager::new(&config);
let records = register_enabled_providers(&manager, &config, filter.as_deref()).await?;
let health = manager.refresh_health().await;
let mut rows = Vec::new();
for record in records {
let status = health.get(&record.id).copied();
rows.push(ProviderStatusRow::from_record(record, status));
}
rows.sort_by(|a, b| a.id.cmp(&b.id));
print_status_rows(&rows);
Ok(())
}
async fn list_models(filter: Option<&str>) -> Result<()> {
let mut config = tui_config::try_load_config().unwrap_or_default();
let filter = filter.map(canonical_provider_id);
verify_provider_filter(&config, filter.as_deref())?;
let storage = Arc::new(StorageManager::new().await?);
cloud::load_runtime_credentials(&mut config, storage.clone()).await?;
let manager = ProviderManager::new(&config);
let records = register_enabled_providers(&manager, &config, filter.as_deref()).await?;
let models = manager
.list_all_models()
.await
.map_err(|err| anyhow!(err))?;
let statuses = manager.provider_statuses().await;
print_models(records, models, statuses);
Ok(())
}
fn verify_provider_filter(config: &Config, filter: Option<&str>) -> Result<()> {
if let Some(filter) = filter
&& !config.providers.contains_key(filter)
{
return Err(anyhow!(
"Provider '{}' is not defined in configuration.",
filter
));
}
Ok(())
}
fn toggle_provider(provider: &str, enable: bool) -> Result<()> {
let mut config = tui_config::try_load_config().unwrap_or_default();
let canonical = canonical_provider_id(provider);
if canonical.is_empty() {
return Err(anyhow!("Provider name cannot be empty."));
}
let previous_default = config.general.default_provider.clone();
let previous_fallback_enabled = config.providers.get("ollama_local").map(|cfg| cfg.enabled);
let previous_enabled;
{
let entry = core_config::ensure_provider_config_mut(&mut config, &canonical);
previous_enabled = entry.enabled;
if previous_enabled == enable {
println!(
"Provider '{}' is already {}.",
canonical,
if enable { "enabled" } else { "disabled" }
);
return Ok(());
}
entry.enabled = enable;
}
if !enable && config.general.default_provider == canonical {
if let Some(candidate) = choose_fallback_provider(&config, &canonical) {
config.general.default_provider = candidate.clone();
println!(
"Default provider set to '{}' because '{}' was disabled.",
candidate, canonical
);
} else {
let entry = core_config::ensure_provider_config_mut(&mut config, "ollama_local");
entry.enabled = true;
config.general.default_provider = "ollama_local".to_string();
println!(
"Enabled 'ollama_local' and made it default because no other providers are active."
);
}
}
if let Err(err) = config.validate() {
{
let entry = core_config::ensure_provider_config_mut(&mut config, &canonical);
entry.enabled = previous_enabled;
}
config.general.default_provider = previous_default;
if let Some(enabled) = previous_fallback_enabled
&& let Some(entry) = config.providers.get_mut("ollama_local")
{
entry.enabled = enabled;
}
return Err(anyhow!(err));
}
tui_config::save_config(&config).map_err(|err| anyhow!(err))?;
println!(
"{} provider '{}'.",
if enable { "Enabled" } else { "Disabled" },
canonical
);
Ok(())
}
fn choose_fallback_provider(config: &Config, exclude: &str) -> Option<String> {
if exclude != "ollama_local"
&& let Some(cfg) = config.providers.get("ollama_local")
&& cfg.enabled
{
return Some("ollama_local".to_string());
}
let mut candidates: Vec<String> = config
.providers
.iter()
.filter(|(id, cfg)| cfg.enabled && id.as_str() != exclude)
.map(|(id, _)| id.clone())
.collect();
candidates.sort();
candidates.into_iter().next()
}
async fn register_enabled_providers(
manager: &ProviderManager,
config: &Config,
filter: Option<&str>,
) -> Result<Vec<ProviderRecord>> {
let default_provider = canonical_provider_id(&config.general.default_provider);
let mut records = Vec::new();
for (id, cfg) in &config.providers {
if let Some(filter) = filter
&& id != filter
{
continue;
}
let mut record = ProviderRecord::from_config(id, cfg, id == &default_provider);
if !cfg.enabled {
records.push(record);
continue;
}
match instantiate_provider(id, cfg) {
Ok(provider) => {
let metadata = provider.metadata().clone();
record.provider_type_label = provider_type_label(metadata.provider_type);
record.requires_auth = metadata.requires_auth;
record.metadata = Some(metadata);
manager.register_provider(provider).await;
}
Err(err) => {
record.registration_error = Some(err.to_string());
}
}
records.push(record);
}
records.sort_by(|a, b| a.id.cmp(&b.id));
Ok(records)
}
fn instantiate_provider(id: &str, cfg: &ProviderConfig) -> Result<Arc<dyn ModelProvider>> {
let kind = cfg.provider_type.trim().to_ascii_lowercase();
if kind == "ollama" || id == "ollama_local" {
let provider = OllamaLocalProvider::new(cfg.base_url.clone(), None, None)
.map_err(|err| anyhow!(err))?;
Ok(Arc::new(provider))
} else if kind == "ollama_cloud" || id == "ollama_cloud" {
let provider = OllamaCloudProvider::new(cfg.base_url.clone(), cfg.api_key.clone(), None)
.map_err(|err| anyhow!(err))?;
Ok(Arc::new(provider))
} else {
Err(anyhow!(
"Provider '{}' uses unsupported type '{}'.",
id,
if kind.is_empty() {
"unknown"
} else {
kind.as_str()
}
))
}
}
fn describe_provider_type(id: &str, cfg: &ProviderConfig) -> String {
if cfg.provider_type.trim().eq_ignore_ascii_case("ollama") || id.ends_with("_local") {
"Local".to_string()
} else if cfg
.provider_type
.trim()
.eq_ignore_ascii_case("ollama_cloud")
|| id.contains("cloud")
{
"Cloud".to_string()
} else {
"Custom".to_string()
}
}
fn requires_auth(id: &str, cfg: &ProviderConfig) -> bool {
cfg.api_key.is_some()
|| cfg.api_key_env.is_some()
|| matches!(id, "ollama_cloud" | "openai" | "anthropic")
}
fn describe_auth(cfg: &ProviderConfig, required: bool) -> String {
if let Some(env) = cfg
.api_key_env
.as_ref()
.map(|value| value.trim())
.filter(|value| !value.is_empty())
{
format!("env:{env}")
} else if cfg
.api_key
.as_ref()
.map(|value| !value.trim().is_empty())
.unwrap_or(false)
{
"config".to_string()
} else if required {
"required".to_string()
} else {
"-".to_string()
}
}
fn canonical_provider_id(raw: &str) -> String {
let trimmed = raw.trim().to_ascii_lowercase();
if trimmed.is_empty() {
return trimmed;
}
match trimmed.as_str() {
"ollama" | "ollama-local" => "ollama_local".to_string(),
"ollama_cloud" | "ollama-cloud" => "ollama_cloud".to_string(),
other => other.replace('-', "_"),
}
}
fn provider_type_label(provider_type: ProviderType) -> String {
match provider_type {
ProviderType::Local => "Local".to_string(),
ProviderType::Cloud => "Cloud".to_string(),
}
}
fn provider_status_strings(status: ProviderStatus) -> (&'static str, &'static str) {
match status {
ProviderStatus::Available => ("OK", "available"),
ProviderStatus::Unavailable => ("ERR", "unavailable"),
ProviderStatus::RequiresSetup => ("SETUP", "requires setup"),
}
}
fn print_status_rows(rows: &[ProviderStatusRow]) {
let id_width = rows
.iter()
.map(|row| row.id.len())
.max()
.unwrap_or(8)
.max("Provider".len());
let type_width = rows
.iter()
.map(|row| row.provider_type.len())
.max()
.unwrap_or(4)
.max("Type".len());
let status_width = rows
.iter()
.map(|row| row.indicator.len() + 1 + row.status_label.len())
.max()
.unwrap_or(6)
.max("State".len());
println!(
"{:<id_width$} {:<4} {:<type_width$} {:<status_width$} Details",
"Provider",
"Def",
"Type",
"State",
id_width = id_width,
type_width = type_width,
status_width = status_width,
);
for row in rows {
let def = if row.default_provider { "*" } else { "-" };
let details = row.detail.as_deref().unwrap_or("-");
println!(
"{:<id_width$} {:<4} {:<type_width$} {:<status_width$} {}",
row.id,
def,
row.provider_type,
format!("{} {}", row.indicator, row.status_label),
details,
id_width = id_width,
type_width = type_width,
status_width = status_width,
);
}
}
fn print_models(
records: Vec<ProviderRecord>,
models: Vec<AnnotatedModelInfo>,
statuses: HashMap<String, ProviderStatus>,
) {
let mut grouped: HashMap<String, Vec<AnnotatedModelInfo>> = HashMap::new();
for info in models {
grouped
.entry(info.provider_id.clone())
.or_default()
.push(info);
}
for record in records {
let status = statuses.get(&record.id).copied().or_else(|| {
if record.metadata.is_some() && record.registration_error.is_none() && record.enabled {
Some(ProviderStatus::Unavailable)
} else {
None
}
});
let (indicator, label, status_value) = if !record.enabled {
("-", "disabled", None)
} else if record.registration_error.is_some() {
("ERR", "error", None)
} else if let Some(status) = status {
let (indicator, label) = provider_status_strings(status);
(indicator, label, Some(status))
} else {
("?", "unknown", None)
};
let title = if record.default_provider {
format!("{} (default)", record.id)
} else {
record.id.clone()
};
println!(
"{} {} [{}] {}",
indicator, title, record.provider_type_label, label
);
if let Some(err) = &record.registration_error {
println!(" error: {}", err);
println!();
continue;
}
if !record.enabled {
println!(" provider disabled");
println!();
continue;
}
if let Some(entries) = grouped.get(&record.id) {
let mut entries = entries.clone();
entries.sort_by(|a, b| a.model.name.cmp(&b.model.name));
if entries.is_empty() {
println!(" (no models reported)");
} else {
for entry in entries {
let mut line = format!(" - {}", entry.model.name);
if let Some(description) = &entry.model.description
&& !description.trim().is_empty()
{
line.push_str(&format!("{}", description.trim()));
}
println!("{}", line);
}
}
} else {
println!(" (no models reported)");
}
if let Some(ProviderStatus::RequiresSetup) = status_value
&& record.requires_auth
{
println!(" configure provider credentials or API key");
}
println!();
}
}
struct ProviderListRow {
id: String,
type_label: String,
enabled: String,
default: String,
auth: String,
base_url: String,
}
struct ProviderRecord {
id: String,
enabled: bool,
default_provider: bool,
provider_type_label: String,
requires_auth: bool,
registration_error: Option<String>,
metadata: Option<owlen_core::provider::ProviderMetadata>,
}
impl ProviderRecord {
fn from_config(id: &str, cfg: &ProviderConfig, default_provider: bool) -> Self {
Self {
id: id.to_string(),
enabled: cfg.enabled,
default_provider,
provider_type_label: describe_provider_type(id, cfg),
requires_auth: requires_auth(id, cfg),
registration_error: None,
metadata: None,
}
}
}
struct ProviderStatusRow {
id: String,
provider_type: String,
default_provider: bool,
indicator: String,
status_label: String,
detail: Option<String>,
}
impl ProviderStatusRow {
fn from_record(record: ProviderRecord, status: Option<ProviderStatus>) -> Self {
if !record.enabled {
return Self {
id: record.id,
provider_type: record.provider_type_label,
default_provider: record.default_provider,
indicator: "-".to_string(),
status_label: "disabled".to_string(),
detail: None,
};
}
if let Some(err) = record.registration_error {
return Self {
id: record.id,
provider_type: record.provider_type_label,
default_provider: record.default_provider,
indicator: "ERR".to_string(),
status_label: "error".to_string(),
detail: Some(err),
};
}
if let Some(status) = status {
let (indicator, label) = provider_status_strings(status);
return Self {
id: record.id,
provider_type: record.provider_type_label,
default_provider: record.default_provider,
indicator: indicator.to_string(),
status_label: label.to_string(),
detail: if matches!(status, ProviderStatus::RequiresSetup) && record.requires_auth {
Some("credentials required".to_string())
} else {
None
},
};
}
Self {
id: record.id,
provider_type: record.provider_type_label,
default_provider: record.default_provider,
indicator: "?".to_string(),
status_label: "unknown".to_string(),
detail: None,
}
}
}

View File

@@ -1,8 +0,0 @@
//! Library portion of the `owlen-cli` crate.
//!
//! It currently only reexports the `agent` module used by the standalone
//! `owlen-agent` binary. Additional shared functionality can be added here in
//! the future.
// Re-export agent module from owlen-core
pub use owlen_core::agent;

View File

@@ -1,228 +0,0 @@
#![allow(clippy::collapsible_if)] // TODO: Remove once Rust 2024 let-chains are available
//! OWLEN CLI - Chat TUI client
mod bootstrap;
mod commands;
mod mcp;
use anyhow::Result;
use clap::{Parser, Subcommand};
use commands::{
cloud::{CloudCommand, run_cloud_command},
providers::{ModelsArgs, ProvidersCommand, run_models_command, run_providers_command},
};
use mcp::{McpCommand, run_mcp_command};
use owlen_core::config as core_config;
use owlen_core::config::McpMode;
use owlen_core::mode::Mode;
use owlen_tui::config;
/// Owlen - Terminal UI for LLM chat
#[derive(Parser, Debug)]
#[command(name = "owlen")]
#[command(about = "Terminal UI for LLM chat via MCP", long_about = None)]
struct Args {
/// Start in code mode (enables all tools)
#[arg(long, short = 'c')]
code: bool,
#[command(subcommand)]
command: Option<OwlenCommand>,
}
#[derive(Debug, Subcommand)]
enum OwlenCommand {
/// Inspect or upgrade configuration files
#[command(subcommand)]
Config(ConfigCommand),
/// Manage Ollama Cloud credentials
#[command(subcommand)]
Cloud(CloudCommand),
/// Manage model providers
#[command(subcommand)]
Providers(ProvidersCommand),
/// List models exposed by configured providers
Models(ModelsArgs),
/// Manage MCP server registrations
#[command(subcommand)]
Mcp(McpCommand),
/// Show manual steps for updating Owlen to the latest revision
Upgrade,
}
#[derive(Debug, Subcommand)]
enum ConfigCommand {
/// Automatically upgrade legacy configuration values and ensure validity
Doctor,
/// Print the resolved configuration file path
Path,
}
async fn run_command(command: OwlenCommand) -> Result<()> {
match command {
OwlenCommand::Config(config_cmd) => run_config_command(config_cmd),
OwlenCommand::Cloud(cloud_cmd) => run_cloud_command(cloud_cmd).await,
OwlenCommand::Providers(provider_cmd) => run_providers_command(provider_cmd).await,
OwlenCommand::Models(args) => run_models_command(args).await,
OwlenCommand::Mcp(mcp_cmd) => run_mcp_command(mcp_cmd),
OwlenCommand::Upgrade => {
println!(
"To update Owlen from source:\n git pull\n cargo install --path crates/owlen-cli --force"
);
println!(
"If you installed from the AUR, use your package manager (e.g., yay -S owlen-git)."
);
Ok(())
}
}
}
fn run_config_command(command: ConfigCommand) -> Result<()> {
match command {
ConfigCommand::Doctor => run_config_doctor(),
ConfigCommand::Path => {
let path = core_config::default_config_path();
println!("{}", path.display());
Ok(())
}
}
}
fn run_config_doctor() -> Result<()> {
let config_path = core_config::default_config_path();
let existed = config_path.exists();
let mut config = config::try_load_config().unwrap_or_default();
let _ = config.refresh_mcp_servers(None);
let mut changes = Vec::new();
if !existed {
changes.push("created configuration file from defaults".to_string());
}
if config.provider(&config.general.default_provider).is_none() {
config.general.default_provider = "ollama_local".to_string();
changes.push("default provider missing; reset to 'ollama_local'".to_string());
}
for key in ["ollama_local", "ollama_cloud", "openai", "anthropic"] {
if !config.providers.contains_key(key) {
core_config::ensure_provider_config_mut(&mut config, key);
changes.push(format!("added default configuration for provider '{key}'"));
}
}
if let Some(entry) = config.providers.get_mut("ollama_local") {
if entry.provider_type.trim().is_empty() || entry.provider_type != "ollama" {
entry.provider_type = "ollama".to_string();
changes.push("normalised providers.ollama_local.provider_type to 'ollama'".to_string());
}
}
let mut ensure_default_enabled = true;
if !config.providers.values().any(|cfg| cfg.enabled) {
let entry = core_config::ensure_provider_config_mut(&mut config, "ollama_local");
if !entry.enabled {
entry.enabled = true;
changes.push("no providers were enabled; enabled 'ollama_local'".to_string());
}
if config.general.default_provider != "ollama_local" {
config.general.default_provider = "ollama_local".to_string();
changes.push(
"default provider reset to 'ollama_local' because no providers were enabled"
.to_string(),
);
}
ensure_default_enabled = false;
}
if ensure_default_enabled {
let default_id = config.general.default_provider.clone();
if let Some(default_cfg) = config.providers.get(&default_id) {
if !default_cfg.enabled {
if let Some(new_default) = config
.providers
.iter()
.filter(|(id, cfg)| cfg.enabled && *id != &default_id)
.map(|(id, _)| id.clone())
.min()
{
config.general.default_provider = new_default.clone();
changes.push(format!(
"default provider '{default_id}' was disabled; switched default to '{new_default}'"
));
} else {
let entry =
core_config::ensure_provider_config_mut(&mut config, "ollama_local");
if !entry.enabled {
entry.enabled = true;
changes.push(
"enabled 'ollama_local' because default provider was disabled"
.to_string(),
);
}
if config.general.default_provider != "ollama_local" {
config.general.default_provider = "ollama_local".to_string();
changes.push(
"default provider reset to 'ollama_local' because previous default was disabled"
.to_string(),
);
}
}
}
}
}
match config.mcp.mode {
McpMode::Legacy => {
config.mcp.mode = McpMode::LocalOnly;
config.mcp.warn_on_legacy = true;
changes.push("converted [mcp].mode = 'legacy' to 'local_only'".to_string());
}
McpMode::RemoteOnly if config.effective_mcp_servers().is_empty() => {
config.mcp.mode = McpMode::RemotePreferred;
config.mcp.allow_fallback = true;
changes.push(
"downgraded remote-only configuration to remote_preferred because no servers are defined"
.to_string(),
);
}
McpMode::RemotePreferred
if !config.mcp.allow_fallback && config.effective_mcp_servers().is_empty() =>
{
config.mcp.allow_fallback = true;
changes.push(
"enabled [mcp].allow_fallback because no remote servers are configured".to_string(),
);
}
_ => {}
}
config.validate()?;
config::save_config(&config)?;
if changes.is_empty() {
println!(
"Configuration already up to date: {}",
config_path.display()
);
} else {
println!("Updated {}:", config_path.display());
for change in changes {
println!(" - {change}");
}
}
Ok(())
}
#[tokio::main(flavor = "multi_thread")]
async fn main() -> Result<()> {
// Parse command-line arguments
let Args { code, command } = Args::parse();
if let Some(command) = command {
return run_command(command).await;
}
let initial_mode = if code { Mode::Code } else { Mode::Chat };
bootstrap::launch(initial_mode).await
}

View File

@@ -1,259 +0,0 @@
use std::collections::{HashMap, HashSet};
use anyhow::{Result, anyhow};
use clap::{Args, Subcommand, ValueEnum};
use owlen_core::config::{self as core_config, Config, McpConfigScope, McpServerConfig};
use owlen_tui::config as tui_config;
#[derive(Debug, Subcommand)]
pub enum McpCommand {
/// Add or update an MCP server in the selected scope
Add(AddArgs),
/// List MCP servers across scopes
List(ListArgs),
/// Remove an MCP server from a scope
Remove(RemoveArgs),
}
pub fn run_mcp_command(command: McpCommand) -> Result<()> {
match command {
McpCommand::Add(args) => handle_add(args),
McpCommand::List(args) => handle_list(args),
McpCommand::Remove(args) => handle_remove(args),
}
}
#[derive(Debug, Clone, Copy, ValueEnum, Default)]
pub enum ScopeArg {
User,
#[default]
Project,
Local,
}
impl From<ScopeArg> for McpConfigScope {
fn from(value: ScopeArg) -> Self {
match value {
ScopeArg::User => McpConfigScope::User,
ScopeArg::Project => McpConfigScope::Project,
ScopeArg::Local => McpConfigScope::Local,
}
}
}
#[derive(Debug, Args)]
pub struct AddArgs {
/// Logical name used to reference the server
pub name: String,
/// Command or endpoint invoked for the server
pub command: String,
/// Transport mechanism (stdio, http, websocket)
#[arg(long, default_value = "stdio")]
pub transport: String,
/// Configuration scope to write the server into
#[arg(long, value_enum, default_value_t = ScopeArg::Project)]
pub scope: ScopeArg,
/// Environment variables (KEY=VALUE) passed to the server process
#[arg(long = "env")]
pub env: Vec<String>,
/// Additional arguments appended when launching the server
#[arg(trailing_var_arg = true, value_name = "ARG")]
pub args: Vec<String>,
}
#[derive(Debug, Args, Default)]
pub struct ListArgs {
/// Restrict output to a specific configuration scope
#[arg(long, value_enum)]
pub scope: Option<ScopeArg>,
/// Display only the effective servers (after precedence resolution)
#[arg(long)]
pub effective_only: bool,
}
#[derive(Debug, Args)]
pub struct RemoveArgs {
/// Name of the server to remove
pub name: String,
/// Optional explicit scope to remove from
#[arg(long, value_enum)]
pub scope: Option<ScopeArg>,
}
fn handle_add(args: AddArgs) -> Result<()> {
let mut config = load_config()?;
let scope: McpConfigScope = args.scope.into();
let mut env_map = HashMap::new();
for pair in &args.env {
let (key, value) = pair
.split_once('=')
.ok_or_else(|| anyhow!("Environment pairs must use KEY=VALUE syntax: '{}'", pair))?;
if key.trim().is_empty() {
return Err(anyhow!("Environment variable name cannot be empty"));
}
env_map.insert(key.trim().to_string(), value.to_string());
}
let server = McpServerConfig {
name: args.name.clone(),
command: args.command.clone(),
args: args.args.clone(),
transport: args.transport.to_lowercase(),
env: env_map,
oauth: None,
};
config.add_mcp_server(scope, server.clone(), None)?;
if matches!(scope, McpConfigScope::User) {
tui_config::save_config(&config)?;
}
if let Some(path) = core_config::mcp_scope_path(scope, None) {
println!(
"Registered MCP server '{}' in {} scope ({})",
server.name,
scope,
path.display()
);
} else {
println!(
"Registered MCP server '{}' in {} scope.",
server.name, scope
);
}
Ok(())
}
fn handle_list(args: ListArgs) -> Result<()> {
let mut config = load_config()?;
config.refresh_mcp_servers(None)?;
let scoped = config.scoped_mcp_servers();
if scoped.is_empty() {
println!("No MCP servers configured.");
return Ok(());
}
let filter_scope = args.scope.map(|scope| scope.into());
let effective = config.effective_mcp_servers();
let mut active = HashSet::new();
for server in effective {
active.insert((
server.name.clone(),
server.command.clone(),
server.transport.to_lowercase(),
));
}
println!(
"{:<2} {:<8} {:<20} {:<10} Command",
"", "Scope", "Name", "Transport"
);
for entry in scoped {
if filter_scope
.as_ref()
.is_some_and(|target_scope| entry.scope != *target_scope)
{
continue;
}
let payload = format_command_line(&entry.config.command, &entry.config.args);
let key = (
entry.config.name.clone(),
entry.config.command.clone(),
entry.config.transport.to_lowercase(),
);
let marker = if active.contains(&key) { "*" } else { " " };
if args.effective_only && marker != "*" {
continue;
}
println!(
"{} {:<8} {:<20} {:<10} {}",
marker, entry.scope, entry.config.name, entry.config.transport, payload
);
}
let scoped_resources = config.scoped_mcp_resources();
if !scoped_resources.is_empty() {
println!();
println!("{:<2} {:<8} {:<30} Title", "", "Scope", "Resource");
let effective_keys: HashSet<(String, String)> = config
.effective_mcp_resources()
.iter()
.map(|res| (res.server.clone(), res.uri.clone()))
.collect();
for entry in scoped_resources {
if filter_scope
.as_ref()
.is_some_and(|target_scope| entry.scope != *target_scope)
{
continue;
}
let key = (entry.config.server.clone(), entry.config.uri.clone());
let marker = if effective_keys.contains(&key) {
"*"
} else {
" "
};
if args.effective_only && marker != "*" {
continue;
}
let reference = format!("@{}:{}", entry.config.server, entry.config.uri);
let title = entry.config.title.as_deref().unwrap_or("");
println!("{} {:<8} {:<30} {}", marker, entry.scope, reference, title);
}
}
Ok(())
}
fn handle_remove(args: RemoveArgs) -> Result<()> {
let mut config = load_config()?;
let scope_hint = args.scope.map(|scope| scope.into());
let result = config.remove_mcp_server(scope_hint, &args.name, None)?;
match result {
Some(scope) => {
if matches!(scope, McpConfigScope::User) {
tui_config::save_config(&config)?;
}
if let Some(path) = core_config::mcp_scope_path(scope, None) {
println!(
"Removed MCP server '{}' from {} scope ({})",
args.name,
scope,
path.display()
);
} else {
println!("Removed MCP server '{}' from {} scope.", args.name, scope);
}
}
None => {
println!("No MCP server named '{}' was found.", args.name);
}
}
Ok(())
}
fn load_config() -> Result<Config> {
let mut config = tui_config::try_load_config().unwrap_or_default();
config.refresh_mcp_servers(None)?;
Ok(config)
}
fn format_command_line(command: &str, args: &[String]) -> String {
if args.is_empty() {
command.to_string()
} else {
format!("{} {}", command, args.join(" "))
}
}

View File

@@ -1,266 +0,0 @@
//! Integration tests for the ReAct agent loop functionality.
//!
//! These tests verify that the agent executor correctly:
//! - Parses ReAct formatted responses
//! - Executes tool calls
//! - Handles multi-step workflows
//! - Recovers from errors
//! - Respects iteration limits
use owlen_cli::agent::{AgentConfig, AgentExecutor, LlmResponse};
use owlen_core::mcp::remote_client::RemoteMcpClient;
use std::sync::Arc;
#[tokio::test]
async fn test_react_parsing_tool_call() {
let executor = create_test_executor();
// Test parsing a tool call with JSON arguments
let text = "THOUGHT: I should search for information\nACTION: web_search\nACTION_INPUT: {\"query\": \"rust async programming\"}\n";
let result = executor.parse_response(text);
match result {
Ok(LlmResponse::ToolCall {
thought,
tool_name,
arguments,
}) => {
assert_eq!(thought, "I should search for information");
assert_eq!(tool_name, "web_search");
assert_eq!(arguments["query"], "rust async programming");
}
other => panic!("Expected ToolCall, got: {:?}", other),
}
}
#[tokio::test]
async fn test_react_parsing_final_answer() {
let executor = create_test_executor();
let text = "THOUGHT: I have enough information now\nFINAL_ANSWER: The answer is 42\n";
let result = executor.parse_response(text);
match result {
Ok(LlmResponse::FinalAnswer { thought, answer }) => {
assert_eq!(thought, "I have enough information now");
assert_eq!(answer, "The answer is 42");
}
other => panic!("Expected FinalAnswer, got: {:?}", other),
}
}
#[tokio::test]
async fn test_react_parsing_with_multiline_thought() {
let executor = create_test_executor();
let text = "THOUGHT: This is a complex\nmulti-line thought\nACTION: list_files\nACTION_INPUT: {\"path\": \".\"}\n";
let result = executor.parse_response(text);
// The regex currently only captures until first newline
// This test documents current behavior
match result {
Ok(LlmResponse::ToolCall { thought, .. }) => {
// Regex pattern stops at first \n after THOUGHT:
assert!(thought.contains("This is a complex"));
}
other => panic!("Expected ToolCall, got: {:?}", other),
}
}
#[tokio::test]
#[ignore] // Requires MCP LLM server to be running
async fn test_agent_single_tool_scenario() {
// This test requires a running MCP LLM server (which wraps Ollama)
let provider = Arc::new(RemoteMcpClient::new().unwrap());
let mcp_client = Arc::clone(&provider) as Arc<RemoteMcpClient>;
let config = AgentConfig {
max_iterations: 5,
model: "llama3.2".to_string(),
temperature: Some(0.7),
max_tokens: None,
};
let executor = AgentExecutor::new(provider, mcp_client, config);
// Simple query that should complete in one tool call
let result = executor
.run("List files in the current directory".to_string())
.await;
match result {
Ok(agent_result) => {
assert!(
!agent_result.answer.is_empty(),
"Answer should not be empty"
);
println!("Agent answer: {}", agent_result.answer);
}
Err(e) => {
// It's okay if this fails due to LLM not following format
println!("Agent test skipped: {}", e);
}
}
}
#[tokio::test]
#[ignore] // Requires Ollama to be running
async fn test_agent_multi_step_workflow() {
// Test a query that requires multiple tool calls
let provider = Arc::new(RemoteMcpClient::new().unwrap());
let mcp_client = Arc::clone(&provider) as Arc<RemoteMcpClient>;
let config = AgentConfig {
max_iterations: 10,
model: "llama3.2".to_string(),
temperature: Some(0.5), // Lower temperature for more consistent behavior
max_tokens: None,
};
let executor = AgentExecutor::new(provider, mcp_client, config);
// Query requiring multiple steps: list -> read -> analyze
let result = executor
.run("Find all Rust files and tell me which one contains 'Agent'".to_string())
.await;
match result {
Ok(agent_result) => {
assert!(!agent_result.answer.is_empty());
println!("Multi-step answer: {:?}", agent_result);
}
Err(e) => {
println!("Multi-step test skipped: {}", e);
}
}
}
#[tokio::test]
#[ignore] // Requires Ollama
async fn test_agent_iteration_limit() {
let provider = Arc::new(RemoteMcpClient::new().unwrap());
let mcp_client = Arc::clone(&provider) as Arc<RemoteMcpClient>;
let config = AgentConfig {
max_iterations: 2, // Very low limit to test enforcement
model: "llama3.2".to_string(),
temperature: Some(0.7),
max_tokens: None,
};
let executor = AgentExecutor::new(provider, mcp_client, config);
// Complex query that would require many iterations
let result = executor
.run("Perform an exhaustive analysis of all files".to_string())
.await;
// Should hit the iteration limit (or parse error if LLM doesn't follow format)
match result {
Err(e) => {
let error_str = format!("{}", e);
// Accept either iteration limit error or parse error (LLM didn't follow ReAct format)
assert!(
error_str.contains("Maximum iterations")
|| error_str.contains("2")
|| error_str.contains("parse"),
"Expected iteration limit or parse error, got: {}",
error_str
);
println!("Test passed: agent stopped with error: {}", error_str);
}
Ok(_) => {
// It's possible the LLM completed within 2 iterations
println!("Agent completed within iteration limit");
}
}
}
#[tokio::test]
#[ignore] // Requires Ollama
async fn test_agent_tool_budget_enforcement() {
let provider = Arc::new(RemoteMcpClient::new().unwrap());
let mcp_client = Arc::clone(&provider) as Arc<RemoteMcpClient>;
let config = AgentConfig {
max_iterations: 3, // Very low iteration limit to enforce budget
model: "llama3.2".to_string(),
temperature: Some(0.7),
max_tokens: None,
};
let executor = AgentExecutor::new(provider, mcp_client, config);
// Query that would require many tool calls
let result = executor
.run("Read every file in the project and summarize them all".to_string())
.await;
// Should hit the tool call budget (or parse error if LLM doesn't follow format)
match result {
Err(e) => {
let error_str = format!("{}", e);
// Accept either budget error or parse error (LLM didn't follow ReAct format)
assert!(
error_str.contains("Maximum iterations")
|| error_str.contains("budget")
|| error_str.contains("parse"),
"Expected budget or parse error, got: {}",
error_str
);
println!("Test passed: agent stopped with error: {}", error_str);
}
Ok(_) => {
println!("Agent completed within tool budget");
}
}
}
// Helper function to create a test executor
// For parsing tests, we don't need a real connection
fn create_test_executor() -> AgentExecutor {
// For parsing tests, we can accept the error from RemoteMcpClient::new()
// since we're only testing parse_response which doesn't use the MCP client
let provider = match RemoteMcpClient::new() {
Ok(client) => Arc::new(client),
Err(_) => {
// If MCP server binary doesn't exist, parsing tests can still run
// by using a dummy client that will never be called
// This is a workaround for unit tests that only need parse_response
panic!("MCP server binary not found - build the project first with: cargo build --all");
}
};
let mcp_client = Arc::clone(&provider) as Arc<RemoteMcpClient>;
let config = AgentConfig::default();
AgentExecutor::new(provider, mcp_client, config)
}
#[test]
fn test_agent_config_defaults() {
let config = AgentConfig::default();
assert_eq!(config.max_iterations, 15);
assert_eq!(config.model, "llama3.2:latest");
assert_eq!(config.temperature, Some(0.7));
// max_tool_calls field removed - agent now tracks iterations instead
}
#[test]
fn test_agent_config_custom() {
let config = AgentConfig {
max_iterations: 15,
model: "custom-model".to_string(),
temperature: Some(0.5),
max_tokens: Some(2000),
};
assert_eq!(config.max_iterations, 15);
assert_eq!(config.model, "custom-model");
assert_eq!(config.temperature, Some(0.5));
assert_eq!(config.max_tokens, Some(2000));
}

View File

@@ -1,53 +0,0 @@
[package]
name = "owlen-core"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
homepage.workspace = true
description = "Core traits and types for OWLEN LLM client"
[dependencies]
anyhow = { workspace = true }
log = { workspace = true }
regex = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true }
unicode-segmentation = "1.11"
unicode-width = "0.1"
uuid = { workspace = true }
textwrap = { workspace = true }
futures = { workspace = true }
futures-util = { workspace = true }
async-trait = { workspace = true }
toml = { workspace = true }
shellexpand = { workspace = true }
dirs = { workspace = true }
ratatui = { workspace = true }
tempfile = { workspace = true }
jsonschema = { workspace = true }
which = { workspace = true }
nix = { workspace = true }
aes-gcm = { workspace = true }
ring = { workspace = true }
keyring = { workspace = true }
chrono = { workspace = true }
crossterm = { workspace = true }
urlencoding = { workspace = true }
rpassword = { workspace = true }
sqlx = { workspace = true }
duckduckgo = "0.2.0"
reqwest = { workspace = true, features = ["default"] }
reqwest_011 = { version = "0.11", package = "reqwest" }
path-clean = "1.0"
tokio-stream = { workspace = true }
tokio-tungstenite = "0.21"
tungstenite = "0.21"
ollama-rs = { version = "0.3", features = ["stream", "headers"] }
[dev-dependencies]
tokio-test = { workspace = true }
httpmock = "0.7"

View File

@@ -1,12 +0,0 @@
# Owlen Core
This crate provides the core abstractions and data structures for the Owlen ecosystem.
It defines the essential traits and types that enable communication with various LLM providers, manage sessions, and handle configuration.
## Key Components
- **`Provider` trait**: The fundamental abstraction for all LLM providers. Implement this trait to add support for a new provider.
- **`Session`**: Represents a single conversation, managing message history and context.
- **`Model`**: Defines the structure for LLM models, including their names and properties.
- **Configuration**: Handles loading and parsing of the application's configuration.

View File

@@ -1,12 +0,0 @@
CREATE TABLE IF NOT EXISTS conversations (
id TEXT PRIMARY KEY,
name TEXT,
description TEXT,
model TEXT NOT NULL,
message_count INTEGER NOT NULL,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL,
data TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_conversations_updated_at ON conversations(updated_at DESC);

View File

@@ -1,7 +0,0 @@
CREATE TABLE IF NOT EXISTS secure_items (
key TEXT PRIMARY KEY,
nonce BLOB NOT NULL,
ciphertext BLOB NOT NULL,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL
);

View File

@@ -1,421 +0,0 @@
//! Agentic execution loop with ReAct pattern support.
//!
//! This module provides the core agent orchestration logic that allows an LLM
//! to reason about tasks, execute tools, and observe results in an iterative loop.
use crate::Provider;
use crate::mcp::{McpClient, McpToolCall, McpToolDescriptor, McpToolResponse};
use crate::types::{ChatParameters, ChatRequest, Message};
use crate::{Error, Result};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
/// Maximum number of agent iterations before stopping
const DEFAULT_MAX_ITERATIONS: usize = 15;
/// Parsed response from the LLM in ReAct format
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum LlmResponse {
/// LLM wants to execute a tool
ToolCall {
thought: String,
tool_name: String,
arguments: serde_json::Value,
},
/// LLM has reached a final answer
FinalAnswer { thought: String, answer: String },
/// LLM is just reasoning without taking action
Reasoning { thought: String },
}
/// Parse error when LLM response doesn't match expected format
#[derive(Debug, thiserror::Error)]
pub enum ParseError {
#[error("No recognizable pattern found in response")]
NoPattern,
#[error("Missing required field: {0}")]
MissingField(String),
#[error("Invalid JSON in ACTION_INPUT: {0}")]
InvalidJson(String),
}
/// Result of an agent execution
#[derive(Debug, Clone)]
pub struct AgentResult {
/// Final answer from the agent
pub answer: String,
/// Number of iterations taken
pub iterations: usize,
/// All messages exchanged during execution
pub messages: Vec<Message>,
/// Whether the agent completed successfully
pub success: bool,
}
/// Configuration for agent execution
#[derive(Debug, Clone)]
pub struct AgentConfig {
/// Maximum number of iterations
pub max_iterations: usize,
/// Model to use for reasoning
pub model: String,
/// Temperature for LLM sampling
pub temperature: Option<f32>,
/// Max tokens per LLM call
pub max_tokens: Option<u32>,
}
impl Default for AgentConfig {
fn default() -> Self {
Self {
max_iterations: DEFAULT_MAX_ITERATIONS,
model: "llama3.2:latest".to_string(),
temperature: Some(0.7),
max_tokens: Some(4096),
}
}
}
/// Agent executor that orchestrates the ReAct loop
pub struct AgentExecutor {
/// LLM provider for reasoning
llm_client: Arc<dyn Provider>,
/// MCP client for tool execution
tool_client: Arc<dyn McpClient>,
/// Agent configuration
config: AgentConfig,
}
impl AgentExecutor {
/// Create a new agent executor
pub fn new(
llm_client: Arc<dyn Provider>,
tool_client: Arc<dyn McpClient>,
config: AgentConfig,
) -> Self {
Self {
llm_client,
tool_client,
config,
}
}
/// Run the agent loop with the given query
pub async fn run(&self, query: String) -> Result<AgentResult> {
let mut messages = vec![Message::user(query)];
let tools = self.discover_tools().await?;
for iteration in 0..self.config.max_iterations {
let prompt = self.build_react_prompt(&messages, &tools);
let response = self.generate_llm_response(prompt).await?;
match self.parse_response(&response)? {
LlmResponse::ToolCall {
thought,
tool_name,
arguments,
} => {
// Add assistant's reasoning
messages.push(Message::assistant(format!(
"THOUGHT: {}\nACTION: {}\nACTION_INPUT: {}",
thought,
tool_name,
serde_json::to_string_pretty(&arguments).unwrap_or_default()
)));
// Execute the tool
let result = self.execute_tool(&tool_name, arguments).await?;
// Add observation
messages.push(Message::tool(
tool_name.clone(),
format!(
"OBSERVATION: {}",
serde_json::to_string_pretty(&result.output).unwrap_or_default()
),
));
}
LlmResponse::FinalAnswer { thought, answer } => {
messages.push(Message::assistant(format!(
"THOUGHT: {}\nFINAL_ANSWER: {}",
thought, answer
)));
return Ok(AgentResult {
answer,
iterations: iteration + 1,
messages,
success: true,
});
}
LlmResponse::Reasoning { thought } => {
messages.push(Message::assistant(format!("THOUGHT: {}", thought)));
}
}
}
// Max iterations reached
Ok(AgentResult {
answer: "Maximum iterations reached without finding a final answer".to_string(),
iterations: self.config.max_iterations,
messages,
success: false,
})
}
/// Discover available tools from the MCP client
async fn discover_tools(&self) -> Result<Vec<McpToolDescriptor>> {
self.tool_client.list_tools().await
}
/// Build a ReAct-formatted prompt with available tools
fn build_react_prompt(
&self,
messages: &[Message],
tools: &[McpToolDescriptor],
) -> Vec<Message> {
let mut prompt_messages = Vec::new();
// System prompt with ReAct instructions
let system_prompt = self.build_system_prompt(tools);
prompt_messages.push(Message::system(system_prompt));
// Add conversation history
prompt_messages.extend_from_slice(messages);
prompt_messages
}
/// Build the system prompt with ReAct format and tool descriptions
fn build_system_prompt(&self, tools: &[McpToolDescriptor]) -> String {
let mut prompt = String::from(
"You are an AI assistant that uses the ReAct (Reasoning and Acting) pattern to solve tasks.\n\n\
You have access to the following tools:\n\n",
);
for tool in tools {
prompt.push_str(&format!("- {}: {}\n", tool.name, tool.description));
}
prompt.push_str(
"\nUse the following format:\n\n\
THOUGHT: Your reasoning about what to do next\n\
ACTION: tool_name\n\
ACTION_INPUT: {\"param\": \"value\"}\n\n\
You will receive:\n\
OBSERVATION: The result of the tool execution\n\n\
Continue this process until you have enough information, then provide:\n\
THOUGHT: Final reasoning\n\
FINAL_ANSWER: Your comprehensive answer\n\n\
Important:\n\
- Always start with THOUGHT to explain your reasoning\n\
- ACTION must be one of the available tools\n\
- ACTION_INPUT must be valid JSON\n\
- Use FINAL_ANSWER only when you have sufficient information\n",
);
prompt
}
/// Generate an LLM response
async fn generate_llm_response(&self, messages: Vec<Message>) -> Result<String> {
let request = ChatRequest {
model: self.config.model.clone(),
messages,
parameters: ChatParameters {
temperature: self.config.temperature,
max_tokens: self.config.max_tokens,
stream: false,
..Default::default()
},
tools: None,
};
let response = self.llm_client.send_prompt(request).await?;
Ok(response.message.content)
}
/// Parse LLM response into structured format
pub fn parse_response(&self, text: &str) -> Result<LlmResponse> {
let lines: Vec<&str> = text.lines().collect();
let mut thought = String::new();
let mut action = String::new();
let mut action_input = String::new();
let mut final_answer = String::new();
let mut i = 0;
while i < lines.len() {
let line = lines[i].trim();
if line.starts_with("THOUGHT:") {
thought = line
.strip_prefix("THOUGHT:")
.unwrap_or("")
.trim()
.to_string();
// Collect multi-line thoughts
i += 1;
while i < lines.len()
&& !lines[i].trim().starts_with("ACTION")
&& !lines[i].trim().starts_with("FINAL_ANSWER")
{
if !lines[i].trim().is_empty() {
thought.push(' ');
thought.push_str(lines[i].trim());
}
i += 1;
}
continue;
}
if line.starts_with("ACTION:") {
action = line
.strip_prefix("ACTION:")
.unwrap_or("")
.trim()
.to_string();
i += 1;
continue;
}
if line.starts_with("ACTION_INPUT:") {
action_input = line
.strip_prefix("ACTION_INPUT:")
.unwrap_or("")
.trim()
.to_string();
// Collect multi-line JSON
i += 1;
while i < lines.len()
&& !lines[i].trim().starts_with("THOUGHT")
&& !lines[i].trim().starts_with("ACTION")
{
action_input.push(' ');
action_input.push_str(lines[i].trim());
i += 1;
}
continue;
}
if line.starts_with("FINAL_ANSWER:") {
final_answer = line
.strip_prefix("FINAL_ANSWER:")
.unwrap_or("")
.trim()
.to_string();
// Collect multi-line answer
i += 1;
while i < lines.len() {
if !lines[i].trim().is_empty() {
final_answer.push(' ');
final_answer.push_str(lines[i].trim());
}
i += 1;
}
break;
}
i += 1;
}
// Determine response type
if !final_answer.is_empty() {
return Ok(LlmResponse::FinalAnswer {
thought,
answer: final_answer,
});
}
if !action.is_empty() {
let arguments = if action_input.is_empty() {
serde_json::json!({})
} else {
serde_json::from_str(&action_input)
.map_err(|e| Error::Agent(ParseError::InvalidJson(e.to_string()).to_string()))?
};
return Ok(LlmResponse::ToolCall {
thought,
tool_name: action,
arguments,
});
}
if !thought.is_empty() {
return Ok(LlmResponse::Reasoning { thought });
}
Err(Error::Agent(ParseError::NoPattern.to_string()))
}
/// Execute a tool call
async fn execute_tool(
&self,
tool_name: &str,
arguments: serde_json::Value,
) -> Result<McpToolResponse> {
let call = McpToolCall {
name: tool_name.to_string(),
arguments,
};
self.tool_client.call_tool(call).await
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::llm::test_utils::MockProvider;
use crate::mcp::test_utils::MockMcpClient;
#[test]
fn test_parse_tool_call() {
let executor = AgentExecutor {
llm_client: Arc::new(MockProvider::default()),
tool_client: Arc::new(MockMcpClient),
config: AgentConfig::default(),
};
let text = r#"
THOUGHT: I need to search for information about Rust
ACTION: web_search
ACTION_INPUT: {"query": "Rust programming language"}
"#;
let result = executor.parse_response(text).unwrap();
match result {
LlmResponse::ToolCall {
thought,
tool_name,
arguments,
} => {
assert!(thought.contains("search for information"));
assert_eq!(tool_name, "web_search");
assert_eq!(arguments["query"], "Rust programming language");
}
_ => panic!("Expected ToolCall"),
}
}
#[test]
fn test_parse_final_answer() {
let executor = AgentExecutor {
llm_client: Arc::new(MockProvider::default()),
tool_client: Arc::new(MockMcpClient),
config: AgentConfig::default(),
};
let text = r#"
THOUGHT: I now have enough information to answer
FINAL_ANSWER: Rust is a systems programming language focused on safety and performance.
"#;
let result = executor.parse_response(text).unwrap();
match result {
LlmResponse::FinalAnswer { thought, answer } => {
assert!(thought.contains("enough information"));
assert!(answer.contains("Rust is a systems programming language"));
}
_ => panic!("Expected FinalAnswer"),
}
}
}

File diff suppressed because it is too large Load Diff

View File

@@ -1,303 +0,0 @@
use std::collections::HashMap;
use std::io::{self, Write};
use std::sync::Arc;
use anyhow::Result;
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use crate::encryption::VaultHandle;
#[derive(Clone, Debug)]
pub struct ConsentRequest {
pub tool_name: String,
}
/// Scope of consent grant
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]
pub enum ConsentScope {
/// Grant only for this single operation
Once,
/// Grant for the duration of the current session
Session,
/// Grant permanently (persisted across sessions)
Permanent,
/// Explicitly denied
Denied,
}
#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct ConsentRecord {
pub tool_name: String,
pub scope: ConsentScope,
pub timestamp: DateTime<Utc>,
pub data_types: Vec<String>,
pub external_endpoints: Vec<String>,
}
#[derive(Serialize, Deserialize, Default)]
pub struct ConsentManager {
/// Permanent consent records (persisted to vault)
permanent_records: HashMap<String, ConsentRecord>,
/// Session-scoped consent (cleared on manager drop or explicit clear)
#[serde(skip)]
session_records: HashMap<String, ConsentRecord>,
/// Once-scoped consent (used once then cleared)
#[serde(skip)]
once_records: HashMap<String, ConsentRecord>,
/// Pending consent requests (to prevent duplicate prompts)
#[serde(skip)]
pending_requests: HashMap<String, ()>,
}
impl ConsentManager {
pub fn new() -> Self {
Self::default()
}
/// Load consent records from vault storage
pub fn from_vault(vault: &Arc<std::sync::Mutex<VaultHandle>>) -> Self {
let guard = vault.lock().expect("Vault mutex poisoned");
if let Some(permanent_records) =
guard
.settings()
.get("consent_records")
.and_then(|consent_data| {
serde_json::from_value::<HashMap<String, ConsentRecord>>(consent_data.clone())
.ok()
})
{
return Self {
permanent_records,
session_records: HashMap::new(),
once_records: HashMap::new(),
pending_requests: HashMap::new(),
};
}
Self::default()
}
/// Persist permanent consent records to vault storage
pub fn persist_to_vault(&self, vault: &Arc<std::sync::Mutex<VaultHandle>>) -> Result<()> {
let mut guard = vault.lock().expect("Vault mutex poisoned");
let consent_json = serde_json::to_value(&self.permanent_records)?;
guard
.settings_mut()
.insert("consent_records".to_string(), consent_json);
guard.persist()?;
Ok(())
}
pub fn request_consent(
&mut self,
tool_name: &str,
data_types: Vec<String>,
endpoints: Vec<String>,
) -> Result<ConsentScope> {
// Check if already granted permanently
if self
.permanent_records
.get(tool_name)
.is_some_and(|existing| existing.scope == ConsentScope::Permanent)
{
return Ok(ConsentScope::Permanent);
}
// Check if granted for session
if self
.session_records
.get(tool_name)
.is_some_and(|existing| existing.scope == ConsentScope::Session)
{
return Ok(ConsentScope::Session);
}
// Check if request is already pending (prevent duplicate prompts)
if self.pending_requests.contains_key(tool_name) {
// Wait for the other prompt to complete by returning denied temporarily
// The caller should retry after a short delay
return Ok(ConsentScope::Denied);
}
// Mark as pending
self.pending_requests.insert(tool_name.to_string(), ());
// Show consent dialog and get scope
let scope = self.show_consent_dialog(tool_name, &data_types, &endpoints)?;
// Remove from pending
self.pending_requests.remove(tool_name);
// Create record based on scope
let record = ConsentRecord {
tool_name: tool_name.to_string(),
scope: scope.clone(),
timestamp: Utc::now(),
data_types,
external_endpoints: endpoints,
};
// Store in appropriate location
match scope {
ConsentScope::Permanent => {
self.permanent_records.insert(tool_name.to_string(), record);
}
ConsentScope::Session => {
self.session_records.insert(tool_name.to_string(), record);
}
ConsentScope::Once | ConsentScope::Denied => {
// Don't store, just return the decision
}
}
Ok(scope)
}
/// Grant consent programmatically (for TUI or automated flows)
pub fn grant_consent(
&mut self,
tool_name: &str,
data_types: Vec<String>,
endpoints: Vec<String>,
) {
self.grant_consent_with_scope(tool_name, data_types, endpoints, ConsentScope::Permanent);
}
/// Grant consent with specific scope
pub fn grant_consent_with_scope(
&mut self,
tool_name: &str,
data_types: Vec<String>,
endpoints: Vec<String>,
scope: ConsentScope,
) {
let record = ConsentRecord {
tool_name: tool_name.to_string(),
scope: scope.clone(),
timestamp: Utc::now(),
data_types,
external_endpoints: endpoints,
};
match scope {
ConsentScope::Permanent => {
self.permanent_records.insert(tool_name.to_string(), record);
}
ConsentScope::Session => {
self.session_records.insert(tool_name.to_string(), record);
}
ConsentScope::Once => {
self.once_records.insert(tool_name.to_string(), record);
}
ConsentScope::Denied => {} // Denied is not stored
}
}
/// Check if consent is needed (returns None if already granted, Some(info) if needed)
pub fn check_consent_needed(&self, tool_name: &str) -> Option<ConsentRequest> {
if self.has_consent(tool_name) {
None
} else {
Some(ConsentRequest {
tool_name: tool_name.to_string(),
})
}
}
pub fn has_consent(&self, tool_name: &str) -> bool {
// Check permanent first, then session, then once
self.permanent_records
.get(tool_name)
.map(|r| r.scope == ConsentScope::Permanent)
.or_else(|| {
self.session_records
.get(tool_name)
.map(|r| r.scope == ConsentScope::Session)
})
.or_else(|| {
self.once_records
.get(tool_name)
.map(|r| r.scope == ConsentScope::Once)
})
.unwrap_or(false)
}
/// Consume "once" consent for a tool (clears it after first use)
pub fn consume_once_consent(&mut self, tool_name: &str) {
self.once_records.remove(tool_name);
}
pub fn revoke_consent(&mut self, tool_name: &str) {
self.permanent_records.remove(tool_name);
self.session_records.remove(tool_name);
self.once_records.remove(tool_name);
}
pub fn clear_all_consent(&mut self) {
self.permanent_records.clear();
self.session_records.clear();
self.once_records.clear();
}
/// Clear only session-scoped consent (useful when starting new session)
pub fn clear_session_consent(&mut self) {
self.session_records.clear();
self.once_records.clear(); // Also clear once consent on session clear
}
/// Check if consent is needed for a tool (non-blocking)
/// Returns Some with consent details if needed, None if already granted
pub fn check_if_consent_needed(
&self,
tool_name: &str,
data_types: Vec<String>,
endpoints: Vec<String>,
) -> Option<(String, Vec<String>, Vec<String>)> {
if self.has_consent(tool_name) {
return None;
}
Some((tool_name.to_string(), data_types, endpoints))
}
fn show_consent_dialog(
&self,
tool_name: &str,
data_types: &[String],
endpoints: &[String],
) -> Result<ConsentScope> {
// TEMPORARY: Auto-grant session consent when not in a proper terminal (TUI mode)
// TODO: Integrate consent UI into the TUI event loop
use std::io::IsTerminal;
if !io::stdin().is_terminal() || std::env::var("OWLEN_AUTO_CONSENT").is_ok() {
eprintln!("Auto-granting session consent for {} (TUI mode)", tool_name);
return Ok(ConsentScope::Session);
}
println!("\n╔══════════════════════════════════════════════════╗");
println!("║ 🔒 PRIVACY CONSENT REQUIRED 🔒 ║");
println!("╚══════════════════════════════════════════════════╝");
println!();
println!("Tool: {}", tool_name);
println!("Data: {}", data_types.join(", "));
println!("Endpoints: {}", endpoints.join(", "));
println!();
println!("Choose consent scope:");
println!(" [1] Allow once - Grant only for this operation");
println!(" [2] Allow session - Grant for current session");
println!(" [3] Allow always - Grant permanently");
println!(" [4] Deny - Reject this operation");
println!();
print!("Enter choice (1-4) [default: 4]: ");
io::stdout().flush()?;
let mut input = String::new();
io::stdin().read_line(&mut input)?;
match input.trim() {
"1" => Ok(ConsentScope::Once),
"2" => Ok(ConsentScope::Session),
"3" => Ok(ConsentScope::Permanent),
_ => Ok(ConsentScope::Denied),
}
}
}

View File

@@ -1,374 +0,0 @@
use crate::Result;
use crate::storage::StorageManager;
use crate::types::{Conversation, Message};
use serde_json::{Number, Value};
use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};
use uuid::Uuid;
const STREAMING_FLAG: &str = "streaming";
const LAST_CHUNK_TS: &str = "last_chunk_ts";
const PLACEHOLDER_FLAG: &str = "placeholder";
/// Manage active and historical conversations, including streaming updates.
pub struct ConversationManager {
active: Conversation,
history: VecDeque<Conversation>,
message_index: HashMap<Uuid, usize>,
streaming: HashMap<Uuid, StreamingMetadata>,
max_history: usize,
}
#[derive(Debug, Clone)]
pub struct StreamingMetadata {
started: Instant,
last_update: Instant,
}
impl ConversationManager {
/// Create a new conversation manager with a default model
pub fn new(model: impl Into<String>) -> Self {
Self::with_history_capacity(model, 32)
}
/// Create with explicit history capacity
pub fn with_history_capacity(model: impl Into<String>, max_history: usize) -> Self {
let conversation = Conversation::new(model.into());
Self {
active: conversation,
history: VecDeque::new(),
message_index: HashMap::new(),
streaming: HashMap::new(),
max_history: max_history.max(1),
}
}
/// Access the active conversation
pub fn active(&self) -> &Conversation {
&self.active
}
/// Public mutable access to the active conversation
pub fn active_mut(&mut self) -> &mut Conversation {
&mut self.active
}
/// Replace the active conversation with a provided one, archiving the existing conversation if it contains data
pub fn load(&mut self, conversation: Conversation) {
if !self.active.messages.is_empty() {
self.archive_active();
}
self.message_index.clear();
for (idx, message) in conversation.messages.iter().enumerate() {
self.message_index.insert(message.id, idx);
}
self.stream_reset();
self.active = conversation;
}
/// Start a brand new conversation, archiving the previous one
pub fn start_new(&mut self, model: Option<String>, name: Option<String>) {
self.archive_active();
let model = model.unwrap_or_else(|| self.active.model.clone());
self.active = Conversation::new(model);
self.active.name = name;
self.message_index.clear();
self.stream_reset();
}
/// Archive the active conversation into history
pub fn archive_active(&mut self) {
if self.active.messages.is_empty() {
return;
}
let mut archived = self.active.clone();
archived.updated_at = std::time::SystemTime::now();
self.history.push_front(archived);
while self.history.len() > self.max_history {
self.history.pop_back();
}
}
/// Get immutable history
pub fn history(&self) -> impl Iterator<Item = &Conversation> {
self.history.iter()
}
/// Add a user message and return its identifier
pub fn push_user_message(&mut self, content: impl Into<String>) -> Uuid {
let message = Message::user(content.into());
self.register_message(message)
}
/// Add a system message and return its identifier
pub fn push_system_message(&mut self, content: impl Into<String>) -> Uuid {
let message = Message::system(content.into());
self.register_message(message)
}
/// Add an assistant message (non-streaming) and return its identifier
pub fn push_assistant_message(&mut self, content: impl Into<String>) -> Uuid {
let message = Message::assistant(content.into());
self.register_message(message)
}
/// Push an arbitrary message into the active conversation
pub fn push_message(&mut self, message: Message) -> Uuid {
self.register_message(message)
}
/// Start tracking a streaming assistant response, returning the message id to update
pub fn start_streaming_response(&mut self) -> Uuid {
let mut message = Message::assistant(String::new());
message
.metadata
.insert(STREAMING_FLAG.to_string(), Value::Bool(true));
let id = message.id;
self.register_message(message);
self.streaming.insert(
id,
StreamingMetadata {
started: Instant::now(),
last_update: Instant::now(),
},
);
id
}
/// Append streaming content to an assistant message
pub fn append_stream_chunk(
&mut self,
message_id: Uuid,
chunk: &str,
is_final: bool,
) -> Result<()> {
let index = self
.message_index
.get(&message_id)
.copied()
.ok_or_else(|| crate::Error::Unknown(format!("Unknown message id: {message_id}")))?;
let conversation = self.active_mut();
if let Some(message) = conversation.messages.get_mut(index) {
let was_placeholder = message
.metadata
.remove(PLACEHOLDER_FLAG)
.and_then(|v| v.as_bool())
.unwrap_or(false);
if was_placeholder {
message.content.clear();
}
if !chunk.is_empty() {
message.content.push_str(chunk);
}
message.timestamp = std::time::SystemTime::now();
let millis = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as u64;
message.metadata.insert(
LAST_CHUNK_TS.to_string(),
Value::Number(Number::from(millis)),
);
if is_final {
message
.metadata
.insert(STREAMING_FLAG.to_string(), Value::Bool(false));
self.streaming.remove(&message_id);
} else if let Some(info) = self.streaming.get_mut(&message_id) {
info.last_update = Instant::now();
}
}
Ok(())
}
/// Set placeholder text for a streaming message
pub fn set_stream_placeholder(
&mut self,
message_id: Uuid,
text: impl Into<String>,
) -> Result<()> {
let index = self
.message_index
.get(&message_id)
.copied()
.ok_or_else(|| crate::Error::Unknown(format!("Unknown message id: {message_id}")))?;
if let Some(message) = self.active_mut().messages.get_mut(index) {
message.content = text.into();
message.timestamp = std::time::SystemTime::now();
message
.metadata
.insert(PLACEHOLDER_FLAG.to_string(), Value::Bool(true));
}
Ok(())
}
pub fn cancel_stream(&mut self, message_id: Uuid, notice: impl Into<String>) -> Result<()> {
let index = self
.message_index
.get(&message_id)
.copied()
.ok_or_else(|| crate::Error::Unknown(format!("Unknown message id: {message_id}")))?;
if let Some(message) = self.active_mut().messages.get_mut(index) {
message.content = notice.into();
message.timestamp = std::time::SystemTime::now();
message
.metadata
.insert(STREAMING_FLAG.to_string(), Value::Bool(false));
message.metadata.remove(PLACEHOLDER_FLAG);
let millis = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as u64;
message.metadata.insert(
LAST_CHUNK_TS.to_string(),
Value::Number(Number::from(millis)),
);
}
self.streaming.remove(&message_id);
Ok(())
}
/// Set tool calls on a streaming message
pub fn set_tool_calls_on_message(
&mut self,
message_id: Uuid,
tool_calls: Vec<crate::types::ToolCall>,
) -> Result<()> {
let index = self
.message_index
.get(&message_id)
.copied()
.ok_or_else(|| crate::Error::Unknown(format!("Unknown message id: {message_id}")))?;
if let Some(message) = self.active_mut().messages.get_mut(index) {
message.tool_calls = Some(tool_calls);
}
Ok(())
}
/// Update the active model (used when user changes model mid session)
pub fn set_model(&mut self, model: impl Into<String>) {
self.active.model = model.into();
self.active.updated_at = std::time::SystemTime::now();
}
/// Provide read access to the cached streaming metadata
pub fn streaming_metadata(&self, message_id: &Uuid) -> Option<StreamingMetadata> {
self.streaming.get(message_id).cloned()
}
/// Remove inactive streaming messages that have stalled beyond the provided timeout
pub fn expire_stalled_streams(&mut self, idle_timeout: Duration) -> Vec<Uuid> {
let cutoff = Instant::now() - idle_timeout;
let mut expired = Vec::new();
self.streaming.retain(|id, meta| {
if meta.last_update < cutoff {
expired.push(*id);
false
} else {
true
}
});
expired
}
/// Clear all state
pub fn clear(&mut self) {
self.active.clear();
self.history.clear();
self.message_index.clear();
self.streaming.clear();
}
fn register_message(&mut self, message: Message) -> Uuid {
let id = message.id;
let idx;
{
let conversation = self.active_mut();
idx = conversation.messages.len();
conversation.messages.push(message);
conversation.updated_at = std::time::SystemTime::now();
}
self.message_index.insert(id, idx);
id
}
fn stream_reset(&mut self) {
self.streaming.clear();
}
/// Save the active conversation to disk
pub async fn save_active(
&self,
storage: &StorageManager,
name: Option<String>,
) -> Result<Uuid> {
storage.save_conversation(&self.active, name).await?;
Ok(self.active.id)
}
/// Save the active conversation to disk with a description
pub async fn save_active_with_description(
&self,
storage: &StorageManager,
name: Option<String>,
description: Option<String>,
) -> Result<Uuid> {
storage
.save_conversation_with_description(&self.active, name, description)
.await?;
Ok(self.active.id)
}
/// Load a conversation from storage and make it active
pub async fn load_saved(&mut self, storage: &StorageManager, id: Uuid) -> Result<()> {
let conversation = storage.load_conversation(id).await?;
self.load(conversation);
Ok(())
}
/// List all saved sessions
pub async fn list_saved_sessions(
storage: &StorageManager,
) -> Result<Vec<crate::storage::SessionMeta>> {
storage.list_sessions().await
}
}
impl StreamingMetadata {
/// Duration since the stream started
pub fn elapsed(&self) -> Duration {
self.started.elapsed()
}
/// Duration since the last chunk was received
pub fn idle_duration(&self) -> Duration {
self.last_update.elapsed()
}
/// Timestamp when streaming started
pub fn started_at(&self) -> Instant {
self.started
}
/// Timestamp of most recent update
pub fn last_update_at(&self) -> Instant {
self.last_update
}
}

View File

@@ -1,108 +0,0 @@
use std::sync::Arc;
use serde::{Deserialize, Serialize};
use crate::{Error, Result, oauth::OAuthToken, storage::StorageManager};
#[derive(Serialize, Deserialize, Debug)]
pub struct ApiCredentials {
pub api_key: String,
pub endpoint: String,
}
pub const OLLAMA_CLOUD_CREDENTIAL_ID: &str = "provider_ollama_cloud";
pub struct CredentialManager {
storage: Arc<StorageManager>,
master_key: Arc<Vec<u8>>,
namespace: String,
}
impl CredentialManager {
pub fn new(storage: Arc<StorageManager>, master_key: Arc<Vec<u8>>) -> Self {
Self {
storage,
master_key,
namespace: "owlen".to_string(),
}
}
fn namespaced_key(&self, tool_name: &str) -> String {
format!("{}_{}", self.namespace, tool_name)
}
fn oauth_storage_key(&self, resource: &str) -> String {
self.namespaced_key(&format!("oauth_{resource}"))
}
pub async fn store_credentials(
&self,
tool_name: &str,
credentials: &ApiCredentials,
) -> Result<()> {
let key = self.namespaced_key(tool_name);
let payload = serde_json::to_vec(credentials).map_err(|e| {
Error::Storage(format!(
"Failed to serialize credentials for secure storage: {e}"
))
})?;
self.storage
.store_secure_item(&key, &payload, &self.master_key)
.await
}
pub async fn get_credentials(&self, tool_name: &str) -> Result<Option<ApiCredentials>> {
let key = self.namespaced_key(tool_name);
match self
.storage
.load_secure_item(&key, &self.master_key)
.await?
{
Some(bytes) => {
let creds = serde_json::from_slice(&bytes).map_err(|e| {
Error::Storage(format!("Failed to deserialize stored credentials: {e}"))
})?;
Ok(Some(creds))
}
None => Ok(None),
}
}
pub async fn delete_credentials(&self, tool_name: &str) -> Result<()> {
let key = self.namespaced_key(tool_name);
self.storage.delete_secure_item(&key).await
}
pub async fn store_oauth_token(&self, resource: &str, token: &OAuthToken) -> Result<()> {
let key = self.oauth_storage_key(resource);
let payload = serde_json::to_vec(token).map_err(|err| {
Error::Storage(format!(
"Failed to serialize OAuth token for secure storage: {err}"
))
})?;
self.storage
.store_secure_item(&key, &payload, &self.master_key)
.await
}
pub async fn load_oauth_token(&self, resource: &str) -> Result<Option<OAuthToken>> {
let key = self.oauth_storage_key(resource);
let raw = self
.storage
.load_secure_item(&key, &self.master_key)
.await?;
if let Some(bytes) = raw {
let token = serde_json::from_slice(&bytes).map_err(|err| {
Error::Storage(format!("Failed to deserialize stored OAuth token: {err}"))
})?;
Ok(Some(token))
} else {
Ok(None)
}
}
pub async fn delete_oauth_token(&self, resource: &str) -> Result<()> {
let key = self.oauth_storage_key(resource);
self.storage.delete_secure_item(&key).await
}
}

View File

@@ -1,241 +0,0 @@
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;
use aes_gcm::{
Aes256Gcm, Nonce,
aead::{Aead, KeyInit},
};
use anyhow::{Context, Result, bail};
use ring::digest;
use ring::rand::{SecureRandom, SystemRandom};
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;
pub struct EncryptedStorage {
cipher: Aes256Gcm,
storage_path: PathBuf,
}
#[derive(Serialize, Deserialize)]
struct EncryptedData {
nonce: [u8; 12],
ciphertext: Vec<u8>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct VaultData {
pub master_key: Vec<u8>,
#[serde(default)]
pub settings: HashMap<String, JsonValue>,
}
pub struct VaultHandle {
storage: EncryptedStorage,
pub data: VaultData,
}
impl VaultHandle {
pub fn master_key(&self) -> &[u8] {
&self.data.master_key
}
pub fn settings(&self) -> &HashMap<String, JsonValue> {
&self.data.settings
}
pub fn settings_mut(&mut self) -> &mut HashMap<String, JsonValue> {
&mut self.data.settings
}
pub fn persist(&self) -> Result<()> {
self.storage.store(&self.data)
}
}
impl EncryptedStorage {
pub fn new(storage_path: PathBuf, password: &str) -> Result<Self> {
let digest = digest::digest(&digest::SHA256, password.as_bytes());
let cipher = Aes256Gcm::new_from_slice(digest.as_ref())
.map_err(|_| anyhow::anyhow!("Invalid key length for AES-256"))?;
if let Some(parent) = storage_path.parent() {
fs::create_dir_all(parent).context("Failed to ensure storage directory exists")?;
}
Ok(Self {
cipher,
storage_path,
})
}
pub fn store<T: Serialize>(&self, data: &T) -> Result<()> {
let json = serde_json::to_vec(data).context("Failed to serialize data")?;
let nonce = generate_nonce()?;
let nonce_ref = Nonce::from_slice(&nonce);
let ciphertext = self
.cipher
.encrypt(nonce_ref, json.as_ref())
.map_err(|e| anyhow::anyhow!("Encryption failed: {}", e))?;
let encrypted_data = EncryptedData { nonce, ciphertext };
let encrypted_json = serde_json::to_vec(&encrypted_data)?;
fs::write(&self.storage_path, encrypted_json).context("Failed to write encrypted data")?;
Ok(())
}
pub fn load<T: for<'de> Deserialize<'de>>(&self) -> Result<T> {
let encrypted_json =
fs::read(&self.storage_path).context("Failed to read encrypted data")?;
let encrypted_data: EncryptedData =
serde_json::from_slice(&encrypted_json).context("Failed to parse encrypted data")?;
let nonce_ref = Nonce::from_slice(&encrypted_data.nonce);
let plaintext = self
.cipher
.decrypt(nonce_ref, encrypted_data.ciphertext.as_ref())
.map_err(|e| anyhow::anyhow!("Decryption failed: {}", e))?;
let data: T =
serde_json::from_slice(&plaintext).context("Failed to deserialize decrypted data")?;
Ok(data)
}
pub fn exists(&self) -> bool {
self.storage_path.exists()
}
pub fn delete(&self) -> Result<()> {
if self.exists() {
fs::remove_file(&self.storage_path).context("Failed to delete encrypted storage")?;
}
Ok(())
}
pub fn verify_password(&self) -> Result<()> {
if !self.exists() {
return Ok(());
}
let encrypted_json =
fs::read(&self.storage_path).context("Failed to read encrypted data")?;
if encrypted_json.is_empty() {
return Ok(());
}
let encrypted_data: EncryptedData =
serde_json::from_slice(&encrypted_json).context("Failed to parse encrypted data")?;
let nonce_ref = Nonce::from_slice(&encrypted_data.nonce);
self.cipher
.decrypt(nonce_ref, encrypted_data.ciphertext.as_ref())
.map(|_| ())
.map_err(|e| anyhow::anyhow!("Decryption failed: {}", e))
}
}
pub fn prompt_password(prompt: &str) -> Result<String> {
let password = rpassword::prompt_password(prompt)
.map_err(|e| anyhow::anyhow!("Failed to read password: {e}"))?;
if password.is_empty() {
bail!("Password cannot be empty");
}
Ok(password)
}
pub fn prompt_new_password() -> Result<String> {
loop {
let first = prompt_password("Enter new master password: ")?;
let confirm = prompt_password("Confirm master password: ")?;
if first == confirm {
return Ok(first);
}
println!("Passwords did not match. Please try again.");
}
}
pub fn unlock_with_password(storage_path: PathBuf, password: &str) -> Result<VaultHandle> {
let storage = EncryptedStorage::new(storage_path, password)?;
let data = load_or_initialize_vault(&storage)?;
Ok(VaultHandle { storage, data })
}
pub fn unlock_interactive(storage_path: PathBuf) -> Result<VaultHandle> {
if storage_path.exists() {
for attempt in 0..3 {
let password = prompt_password("Enter master password: ")?;
match unlock_with_password(storage_path.clone(), &password) {
Ok(handle) => return Ok(handle),
Err(err) => {
println!("Failed to unlock vault: {err}");
if attempt == 2 {
return Err(err);
}
}
}
}
bail!("Failed to unlock encrypted storage after multiple attempts");
} else {
println!(
"No encrypted storage found at {}. Initializing a new vault.",
storage_path.display()
);
let password = prompt_new_password()?;
let storage = EncryptedStorage::new(storage_path, &password)?;
let data = VaultData {
master_key: generate_master_key()?,
..Default::default()
};
storage.store(&data)?;
Ok(VaultHandle { storage, data })
}
}
fn load_or_initialize_vault(storage: &EncryptedStorage) -> Result<VaultData> {
match storage.load::<VaultData>() {
Ok(data) => {
if data.master_key.len() != 32 {
bail!(
"Corrupted vault: master key has invalid length ({}). \
Expected 32 bytes for AES-256. Vault cannot be recovered.",
data.master_key.len()
);
}
Ok(data)
}
Err(err) => {
if storage.exists() {
return Err(err);
}
let data = VaultData {
master_key: generate_master_key()?,
..Default::default()
};
storage.store(&data)?;
Ok(data)
}
}
}
fn generate_master_key() -> Result<Vec<u8>> {
let mut key = vec![0u8; 32];
SystemRandom::new()
.fill(&mut key)
.map_err(|_| anyhow::anyhow!("Failed to generate master key"))?;
Ok(key)
}
fn generate_nonce() -> Result<[u8; 12]> {
let mut nonce = [0u8; 12];
let rng = SystemRandom::new();
rng.fill(&mut nonce)
.map_err(|_| anyhow::anyhow!("Failed to generate nonce"))?;
Ok(nonce)
}

View File

@@ -1,32 +0,0 @@
use std::sync::Arc;
use async_trait::async_trait;
use crate::{
Result,
llm::ChatStream,
mcp::{McpToolCall, McpToolDescriptor, McpToolResponse},
types::{ChatRequest, ChatResponse, ModelInfo},
};
/// Object-safe facade for interacting with LLM backends.
#[async_trait]
pub trait LlmClient: Send + Sync {
/// List the models exposed by this client.
async fn list_models(&self) -> Result<Vec<ModelInfo>>;
/// Issue a one-shot chat request and wait for the complete response.
async fn send_chat(&self, request: ChatRequest) -> Result<ChatResponse>;
/// Stream chat responses incrementally.
async fn stream_chat(&self, request: ChatRequest) -> Result<ChatStream>;
/// Enumerate tools exposed by the backing provider.
async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>>;
/// Invoke a tool exposed by the provider.
async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse>;
}
/// Convenience alias for trait-object clients.
pub type DynLlmClient = Arc<dyn LlmClient>;

View File

@@ -1 +0,0 @@
pub mod llm_client;

View File

@@ -1,112 +0,0 @@
use crate::types::Message;
use crate::ui::RoleLabelDisplay;
/// Formats messages for display across different clients.
#[derive(Debug, Clone)]
pub struct MessageFormatter {
wrap_width: usize,
role_label_mode: RoleLabelDisplay,
preserve_empty_lines: bool,
}
impl MessageFormatter {
/// Create a new formatter
pub fn new(wrap_width: usize, role_label_mode: RoleLabelDisplay) -> Self {
Self {
wrap_width: wrap_width.max(20),
role_label_mode,
preserve_empty_lines: false,
}
}
/// Override whether empty lines should be preserved
pub fn with_preserve_empty(mut self, preserve: bool) -> Self {
self.preserve_empty_lines = preserve;
self
}
/// Update the wrap width
pub fn set_wrap_width(&mut self, width: usize) {
self.wrap_width = width.max(20);
}
/// The configured role label layout preference.
pub fn role_label_mode(&self) -> RoleLabelDisplay {
self.role_label_mode
}
/// Whether any role label should be shown alongside messages.
pub fn show_role_labels(&self) -> bool {
!matches!(self.role_label_mode, RoleLabelDisplay::None)
}
/// Update the role label layout preference.
pub fn set_role_label_mode(&mut self, mode: RoleLabelDisplay) {
self.role_label_mode = mode;
}
pub fn format_message(&self, message: &Message) -> Vec<String> {
message
.content
.trim()
.lines()
.map(|s| s.to_string())
.collect()
}
/// Extract thinking content from <think> tags, returning (content_without_think, thinking_content)
/// This handles both complete and incomplete (streaming) think tags.
pub fn extract_thinking(&self, content: &str) -> (String, Option<String>) {
let mut result = String::new();
let mut thinking = String::new();
let mut current_pos = 0;
while let Some(start_pos) = content[current_pos..].find("<think>") {
let abs_start = current_pos + start_pos;
// Add content before <think> tag to result
result.push_str(&content[current_pos..abs_start]);
// Find closing tag
if let Some(end_pos) = content[abs_start..].find("</think>") {
let abs_end = abs_start + end_pos;
let think_content = &content[abs_start + 7..abs_end]; // 7 = len("<think>")
if !thinking.is_empty() {
thinking.push_str("\n\n");
}
thinking.push_str(think_content.trim());
current_pos = abs_end + 8; // 8 = len("</think>")
} else {
// Unclosed tag - this is streaming content
// Extract everything after <think> as thinking content
let think_content = &content[abs_start + 7..]; // 7 = len("<think>")
if !thinking.is_empty() {
thinking.push_str("\n\n");
}
thinking.push_str(think_content);
current_pos = content.len();
break;
}
}
// Add remaining content
result.push_str(&content[current_pos..]);
let thinking_result = if thinking.is_empty() {
None
} else {
Some(thinking)
};
// If the result is empty but we have thinking content, show a placeholder
if result.trim().is_empty() && thinking_result.is_some() {
result.push_str("[Thinking...]");
}
(result, thinking_result)
}
}

View File

@@ -1,223 +0,0 @@
use std::collections::VecDeque;
/// Text input buffer with history and cursor management.
#[derive(Debug, Clone)]
pub struct InputBuffer {
buffer: String,
cursor: usize,
history: VecDeque<String>,
history_index: Option<usize>,
max_history: usize,
pub multiline: bool,
tab_width: u8,
}
impl InputBuffer {
/// Create a new input buffer
pub fn new(max_history: usize, multiline: bool, tab_width: u8) -> Self {
Self {
buffer: String::new(),
cursor: 0,
history: VecDeque::with_capacity(max_history.max(1)),
history_index: None,
max_history: max_history.max(1),
multiline,
tab_width: tab_width.max(1),
}
}
/// Get current text
pub fn text(&self) -> &str {
&self.buffer
}
/// Current cursor position
pub fn cursor(&self) -> usize {
self.cursor
}
/// Replace buffer contents
pub fn set_text(&mut self, text: impl Into<String>) {
self.buffer = text.into();
self.cursor = self.buffer.len();
self.history_index = None;
}
/// Clear buffer and reset cursor
pub fn clear(&mut self) {
self.buffer.clear();
self.cursor = 0;
self.history_index = None;
}
/// Insert a character at the cursor position
pub fn insert_char(&mut self, ch: char) {
if ch == '\t' {
self.insert_tab();
return;
}
self.buffer.insert(self.cursor, ch);
self.cursor += ch.len_utf8();
}
/// Insert text at cursor
pub fn insert_text(&mut self, text: &str) {
self.buffer.insert_str(self.cursor, text);
self.cursor += text.len();
}
/// Insert spaces representing a tab
pub fn insert_tab(&mut self) {
let spaces = " ".repeat(self.tab_width as usize);
self.insert_text(&spaces);
}
/// Remove character before cursor
pub fn backspace(&mut self) {
if self.cursor == 0 {
return;
}
let prev_index = prev_char_boundary(&self.buffer, self.cursor);
self.buffer.drain(prev_index..self.cursor);
self.cursor = prev_index;
}
/// Remove character at cursor
pub fn delete(&mut self) {
if self.cursor >= self.buffer.len() {
return;
}
let next_index = next_char_boundary(&self.buffer, self.cursor);
self.buffer.drain(self.cursor..next_index);
}
/// Move cursor left by one grapheme
pub fn move_left(&mut self) {
if self.cursor == 0 {
return;
}
self.cursor = prev_char_boundary(&self.buffer, self.cursor);
}
/// Move cursor right by one grapheme
pub fn move_right(&mut self) {
if self.cursor >= self.buffer.len() {
return;
}
self.cursor = next_char_boundary(&self.buffer, self.cursor);
}
/// Move cursor to start of the buffer
pub fn move_home(&mut self) {
self.cursor = 0;
}
/// Move cursor to end of the buffer
pub fn move_end(&mut self) {
self.cursor = self.buffer.len();
}
/// Push current buffer into history, clearing the buffer afterwards
pub fn commit_to_history(&mut self) -> String {
let text = std::mem::take(&mut self.buffer);
if !text.trim().is_empty() {
self.push_history_entry(text.clone());
}
self.cursor = 0;
self.history_index = None;
text
}
/// Navigate to previous history entry
pub fn history_previous(&mut self) {
if self.history.is_empty() {
return;
}
let new_index = match self.history_index {
Some(idx) if idx + 1 < self.history.len() => idx + 1,
None => 0,
_ => return,
};
self.history_index = Some(new_index);
if let Some(entry) = self.history.get(new_index) {
self.buffer = entry.clone();
self.cursor = self.buffer.len();
}
}
/// Navigate to next history entry
pub fn history_next(&mut self) {
if self.history.is_empty() {
return;
}
if let Some(idx) = self.history_index {
if idx > 0 {
let new_idx = idx - 1;
self.history_index = Some(new_idx);
if let Some(entry) = self.history.get(new_idx) {
self.buffer = entry.clone();
self.cursor = self.buffer.len();
}
} else {
self.history_index = None;
self.buffer.clear();
self.cursor = 0;
}
} else {
self.buffer.clear();
self.cursor = 0;
}
}
/// Push a new entry into the history buffer, enforcing capacity
pub fn push_history_entry(&mut self, entry: String) {
if self
.history
.front()
.map(|existing| existing == &entry)
.unwrap_or(false)
{
return;
}
self.history.push_front(entry);
while self.history.len() > self.max_history {
self.history.pop_back();
}
}
/// Clear saved input history entries.
pub fn clear_history(&mut self) {
self.history.clear();
self.history_index = None;
}
}
fn prev_char_boundary(buffer: &str, cursor: usize) -> usize {
buffer[..cursor]
.char_indices()
.last()
.map(|(idx, _)| idx)
.unwrap_or(0)
}
fn next_char_boundary(buffer: &str, cursor: usize) -> usize {
if cursor >= buffer.len() {
return buffer.len();
}
let slice = &buffer[cursor..];
let mut iter = slice.char_indices();
iter.next();
if let Some((idx, _)) = iter.next() {
cursor + idx
} else {
buffer.len()
}
}

View File

@@ -1,110 +0,0 @@
#![allow(clippy::collapsible_if)] // TODO: Remove once we can rely on Rust 2024 let-chains
//! Core traits and types for OWLEN LLM client
//!
//! This crate provides the foundational abstractions for building
//! LLM providers, routers, and MCP (Model Context Protocol) adapters.
pub mod agent;
pub mod config;
pub mod consent;
pub mod conversation;
pub mod credentials;
pub mod encryption;
pub mod facade;
pub mod formatting;
pub mod input;
pub mod llm;
pub mod mcp;
pub mod mode;
pub mod model;
pub mod oauth;
pub mod provider;
pub mod providers;
pub mod router;
pub mod sandbox;
pub mod session;
pub mod state;
pub mod storage;
pub mod theme;
pub mod tools;
pub mod types;
pub mod ui;
pub mod validation;
pub mod wrap_cursor;
pub use agent::*;
pub use config::*;
pub use consent::*;
pub use conversation::*;
pub use credentials::*;
pub use encryption::*;
pub use formatting::*;
pub use input::*;
pub use oauth::*;
// Export MCP types but exclude test_utils to avoid ambiguity
pub use facade::llm_client::*;
pub use llm::{
ChatStream, LlmProvider, Provider, ProviderConfig, ProviderRegistry, send_via_stream,
};
pub use mcp::{
LocalMcpClient, McpServer, McpToolCall, McpToolDescriptor, McpToolResponse, client, factory,
failover, permission, protocol, remote_client,
};
pub use mode::*;
pub use model::*;
pub use provider::*;
pub use providers::*;
pub use router::*;
pub use sandbox::*;
pub use session::*;
pub use state::*;
pub use theme::*;
pub use tools::*;
pub use validation::*;
/// Result type used throughout the OWLEN ecosystem
pub type Result<T> = std::result::Result<T, Error>;
/// Core error types for OWLEN
#[derive(thiserror::Error, Debug)]
pub enum Error {
#[error("Provider error: {0}")]
Provider(#[from] anyhow::Error),
#[error("Network error: {0}")]
Network(String),
#[error("Authentication error: {0}")]
Auth(String),
#[error("Configuration error: {0}")]
Config(String),
#[error("I/O error: {0}")]
Io(#[from] std::io::Error),
#[error("Invalid input: {0}")]
InvalidInput(String),
#[error("Operation timed out: {0}")]
Timeout(String),
#[error("Serialization error: {0}")]
Serialization(#[from] serde_json::Error),
#[error("Storage error: {0}")]
Storage(String),
#[error("Unknown error: {0}")]
Unknown(String),
#[error("Not implemented: {0}")]
NotImplemented(String),
#[error("Permission denied: {0}")]
PermissionDenied(String),
#[error("Agent execution error: {0}")]
Agent(String),
}

View File

@@ -1,337 +0,0 @@
//! LLM provider abstractions and registry.
//!
//! This module defines the provider trait hierarchy along with helpers that
//! make it easy to register concrete LLM backends and access them through
//! dynamic dispatch when wiring the application together.
use crate::{Error, Result, types::*};
use anyhow::anyhow;
use futures::{Stream, StreamExt};
use serde_json::Value;
use std::any::Any;
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
/// A boxed stream of chat responses produced by a provider.
pub type ChatStream = Pin<Box<dyn Stream<Item = Result<ChatResponse>> + Send>>;
/// Trait implemented by every LLM backend Owlen can speak to.
///
/// Providers expose both one-shot and streaming prompt APIs. Concrete
/// implementations typically live in `crate::providers`.
pub trait LlmProvider: Send + Sync + 'static + Any + Sized {
/// Stream type returned by [`Self::stream_prompt`].
type Stream: Stream<Item = Result<ChatResponse>> + Send + 'static;
type ListModelsFuture<'a>: Future<Output = Result<Vec<ModelInfo>>> + Send
where
Self: 'a;
type SendPromptFuture<'a>: Future<Output = Result<ChatResponse>> + Send
where
Self: 'a;
type StreamPromptFuture<'a>: Future<Output = Result<Self::Stream>> + Send
where
Self: 'a;
type HealthCheckFuture<'a>: Future<Output = Result<()>> + Send
where
Self: 'a;
/// Human-readable provider identifier.
fn name(&self) -> &str;
/// Return metadata on all models exposed by this provider.
fn list_models(&self) -> Self::ListModelsFuture<'_>;
/// Issue a prompt and wait for the provider to return the full response.
fn send_prompt(&self, request: ChatRequest) -> Self::SendPromptFuture<'_>;
/// Issue a prompt and receive responses incrementally as a stream.
fn stream_prompt(&self, request: ChatRequest) -> Self::StreamPromptFuture<'_>;
/// Perform a lightweight health check.
fn health_check(&self) -> Self::HealthCheckFuture<'_>;
/// Provider-specific configuration schema (optional).
fn config_schema(&self) -> serde_json::Value {
serde_json::json!({})
}
/// Access the provider as an `Any` for downcasting.
fn as_any(&self) -> &(dyn Any + Send + Sync) {
self
}
}
/// Helper that requests a streamed generation and yields the first chunk as a
/// regular response. This is handy for providers that only implement the
/// streaming API.
pub async fn send_via_stream<'a, P>(provider: &'a P, request: ChatRequest) -> Result<ChatResponse>
where
P: LlmProvider + 'a,
{
let stream = provider.stream_prompt(request).await?;
let mut boxed: ChatStream = Box::pin(stream);
match boxed.next().await {
Some(Ok(response)) => Ok(response),
Some(Err(err)) => Err(err),
None => Err(Error::Provider(anyhow!(
"Empty chat stream from provider {}",
provider.name()
))),
}
}
/// Object-safe wrapper around [`LlmProvider`] for dynamic dispatch scenarios.
#[async_trait::async_trait]
pub trait Provider: Send + Sync {
fn name(&self) -> &str;
async fn list_models(&self) -> Result<Vec<ModelInfo>>;
async fn send_prompt(&self, request: ChatRequest) -> Result<ChatResponse>;
async fn stream_prompt(&self, request: ChatRequest) -> Result<ChatStream>;
async fn health_check(&self) -> Result<()>;
fn config_schema(&self) -> serde_json::Value {
serde_json::json!({})
}
fn as_any(&self) -> &(dyn Any + Send + Sync);
}
#[async_trait::async_trait]
impl<T> Provider for T
where
T: LlmProvider,
{
fn name(&self) -> &str {
LlmProvider::name(self)
}
async fn list_models(&self) -> Result<Vec<ModelInfo>> {
LlmProvider::list_models(self).await
}
async fn send_prompt(&self, request: ChatRequest) -> Result<ChatResponse> {
LlmProvider::send_prompt(self, request).await
}
async fn stream_prompt(&self, request: ChatRequest) -> Result<ChatStream> {
let stream = LlmProvider::stream_prompt(self, request).await?;
Ok(Box::pin(stream))
}
async fn health_check(&self) -> Result<()> {
LlmProvider::health_check(self).await
}
fn config_schema(&self) -> serde_json::Value {
LlmProvider::config_schema(self)
}
fn as_any(&self) -> &(dyn Any + Send + Sync) {
LlmProvider::as_any(self)
}
}
/// Runtime configuration for a provider instance.
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct ProviderConfig {
/// Whether this provider should be activated.
#[serde(default = "ProviderConfig::default_enabled")]
pub enabled: bool,
/// Provider type identifier used to resolve implementations.
#[serde(default)]
pub provider_type: String,
/// Base URL for API calls.
#[serde(default)]
pub base_url: Option<String>,
/// API key or token material.
#[serde(default)]
pub api_key: Option<String>,
/// Environment variable holding the API key.
#[serde(default)]
pub api_key_env: Option<String>,
/// Additional provider-specific configuration.
#[serde(flatten)]
pub extra: HashMap<String, Value>,
}
impl ProviderConfig {
const fn default_enabled() -> bool {
true
}
/// Merge the current configuration with overrides from `other`.
pub fn merge_from(&mut self, mut other: ProviderConfig) {
self.enabled = other.enabled;
if !other.provider_type.is_empty() {
self.provider_type = other.provider_type;
}
if let Some(base_url) = other.base_url.take() {
self.base_url = Some(base_url);
}
if let Some(api_key) = other.api_key.take() {
self.api_key = Some(api_key);
}
if let Some(api_key_env) = other.api_key_env.take() {
self.api_key_env = Some(api_key_env);
}
if !other.extra.is_empty() {
self.extra.extend(other.extra);
}
}
}
/// Static registry of providers available to the application.
pub struct ProviderRegistry {
providers: HashMap<String, Arc<dyn Provider>>,
}
impl ProviderRegistry {
pub fn new() -> Self {
Self {
providers: HashMap::new(),
}
}
pub fn register<P: LlmProvider + 'static>(&mut self, provider: P) {
self.register_arc(Arc::new(provider));
}
pub fn register_arc(&mut self, provider: Arc<dyn Provider>) {
let name = provider.name().to_string();
self.providers.insert(name, provider);
}
pub fn get(&self, name: &str) -> Option<Arc<dyn Provider>> {
self.providers.get(name).cloned()
}
pub fn list_providers(&self) -> Vec<String> {
self.providers.keys().cloned().collect()
}
pub async fn list_all_models(&self) -> Result<Vec<ModelInfo>> {
let mut all_models = Vec::new();
for provider in self.providers.values() {
match provider.list_models().await {
Ok(mut models) => all_models.append(&mut models),
Err(_) => {
// Ignore failing providers and continue.
}
}
}
Ok(all_models)
}
}
impl Default for ProviderRegistry {
fn default() -> Self {
Self::new()
}
}
/// Test utilities for constructing mock providers.
#[cfg(test)]
pub mod test_utils {
use super::*;
use futures::stream;
use std::sync::atomic::{AtomicUsize, Ordering};
/// Simple provider stub that always returns the same response.
pub struct MockProvider {
name: String,
response: ChatResponse,
call_count: AtomicUsize,
}
impl MockProvider {
pub fn new(name: impl Into<String>, response: ChatResponse) -> Self {
Self {
name: name.into(),
response,
call_count: AtomicUsize::new(0),
}
}
pub fn call_count(&self) -> usize {
self.call_count.load(Ordering::Relaxed)
}
}
impl Default for MockProvider {
fn default() -> Self {
Self::new(
"mock-provider",
ChatResponse {
message: Message::assistant("mock response".to_string()),
usage: None,
is_streaming: false,
is_final: true,
},
)
}
}
impl LlmProvider for MockProvider {
type Stream = stream::Iter<std::vec::IntoIter<Result<ChatResponse>>>;
type ListModelsFuture<'a>
= futures::future::Ready<Result<Vec<ModelInfo>>>
where
Self: 'a;
type SendPromptFuture<'a>
= futures::future::Ready<Result<ChatResponse>>
where
Self: 'a;
type StreamPromptFuture<'a>
= futures::future::Ready<Result<Self::Stream>>
where
Self: 'a;
type HealthCheckFuture<'a>
= futures::future::Ready<Result<()>>
where
Self: 'a;
fn name(&self) -> &str {
&self.name
}
fn list_models(&self) -> Self::ListModelsFuture<'_> {
futures::future::ready(Ok(vec![]))
}
fn send_prompt(&self, _request: ChatRequest) -> Self::SendPromptFuture<'_> {
self.call_count.fetch_add(1, Ordering::Relaxed);
futures::future::ready(Ok(self.response.clone()))
}
fn stream_prompt(&self, _request: ChatRequest) -> Self::StreamPromptFuture<'_> {
self.call_count.fetch_add(1, Ordering::Relaxed);
let response = self.response.clone();
futures::future::ready(Ok(stream::iter(vec![Ok(response)])))
}
fn health_check(&self) -> Self::HealthCheckFuture<'_> {
futures::future::ready(Ok(()))
}
}
}

View File

@@ -1,187 +0,0 @@
use crate::Result;
use crate::mode::Mode;
use crate::tools::registry::ToolRegistry;
use crate::validation::SchemaValidator;
use async_trait::async_trait;
pub use client::McpClient;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::HashMap;
use std::sync::Arc;
use std::time::Duration;
pub mod client;
pub mod factory;
pub mod failover;
pub mod permission;
pub mod protocol;
pub mod remote_client;
/// Descriptor for a tool exposed over MCP
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct McpToolDescriptor {
pub name: String,
pub description: String,
pub input_schema: Value,
pub requires_network: bool,
pub requires_filesystem: Vec<String>,
}
/// Invocation payload for a tool call
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct McpToolCall {
pub name: String,
pub arguments: Value,
}
/// Result returned by a tool invocation
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct McpToolResponse {
pub name: String,
pub success: bool,
pub output: Value,
pub metadata: HashMap<String, String>,
pub duration_ms: u128,
}
/// Thin MCP server facade over the tool registry
pub struct McpServer {
registry: Arc<ToolRegistry>,
validator: Arc<SchemaValidator>,
mode: Arc<tokio::sync::RwLock<Mode>>,
}
impl McpServer {
pub fn new(registry: Arc<ToolRegistry>, validator: Arc<SchemaValidator>) -> Self {
Self {
registry,
validator,
mode: Arc::new(tokio::sync::RwLock::new(Mode::default())),
}
}
/// Set the current operating mode
pub async fn set_mode(&self, mode: Mode) {
*self.mode.write().await = mode;
}
/// Get the current operating mode
pub async fn get_mode(&self) -> Mode {
*self.mode.read().await
}
/// Enumerate the registered tools as MCP descriptors
pub async fn list_tools(&self) -> Vec<McpToolDescriptor> {
let mode = self.get_mode().await;
let available_tools = self.registry.available_tools(mode).await;
self.registry
.all()
.into_iter()
.filter(|tool| available_tools.contains(&tool.name().to_string()))
.map(|tool| McpToolDescriptor {
name: tool.name().to_string(),
description: tool.description().to_string(),
input_schema: tool.schema(),
requires_network: tool.requires_network(),
requires_filesystem: tool.requires_filesystem(),
})
.collect()
}
/// Execute a tool call after validating inputs against the registered schema
pub async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse> {
self.validator.validate(&call.name, &call.arguments)?;
let mode = self.get_mode().await;
let result = self
.registry
.execute(&call.name, call.arguments, mode)
.await?;
Ok(McpToolResponse {
name: call.name,
success: result.success,
output: result.output,
metadata: result.metadata,
duration_ms: duration_to_millis(result.duration),
})
}
}
fn duration_to_millis(duration: Duration) -> u128 {
duration.as_secs() as u128 * 1_000 + u128::from(duration.subsec_millis())
}
pub struct LocalMcpClient {
server: McpServer,
}
impl LocalMcpClient {
pub fn new(registry: Arc<ToolRegistry>, validator: Arc<SchemaValidator>) -> Self {
Self {
server: McpServer::new(registry, validator),
}
}
/// Set the current operating mode
pub async fn set_mode(&self, mode: Mode) {
self.server.set_mode(mode).await;
}
/// Get the current operating mode
pub async fn get_mode(&self) -> Mode {
self.server.get_mode().await
}
}
#[async_trait]
impl McpClient for LocalMcpClient {
async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>> {
Ok(self.server.list_tools().await)
}
async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse> {
self.server.call_tool(call).await
}
async fn set_mode(&self, mode: Mode) -> Result<()> {
self.server.set_mode(mode).await;
Ok(())
}
}
#[cfg(test)]
pub mod test_utils {
use super::*;
/// Mock MCP client for testing
#[derive(Default)]
pub struct MockMcpClient;
#[async_trait]
impl McpClient for MockMcpClient {
async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>> {
Ok(vec![McpToolDescriptor {
name: "mock_tool".to_string(),
description: "A mock tool for testing".to_string(),
input_schema: serde_json::json!({
"type": "object",
"properties": {
"query": {"type": "string"}
}
}),
requires_network: false,
requires_filesystem: vec![],
}])
}
async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse> {
Ok(McpToolResponse {
name: call.name,
success: true,
output: serde_json::json!({"result": "mock result"}),
metadata: HashMap::new(),
duration_ms: 10,
})
}
}
}

View File

@@ -1,21 +0,0 @@
use super::{McpToolCall, McpToolDescriptor, McpToolResponse};
use crate::{Result, mode::Mode};
use async_trait::async_trait;
/// Trait for a client that can interact with an MCP server
#[async_trait]
pub trait McpClient: Send + Sync {
/// List the tools available on the server
async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>>;
/// Call a tool on the server
async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse>;
/// Update the server with the active operating mode.
async fn set_mode(&self, _mode: Mode) -> Result<()> {
Ok(())
}
}
// Re-export the concrete implementation that supports stdio and HTTP transports.
pub use super::remote_client::RemoteMcpClient;

View File

@@ -1,192 +0,0 @@
/// MCP Client Factory
///
/// Provides a unified interface for creating MCP clients based on configuration.
/// Supports switching between local (in-process) and remote (STDIO) execution modes.
use super::client::McpClient;
use super::{
LocalMcpClient,
remote_client::{McpRuntimeSecrets, RemoteMcpClient},
};
use crate::config::{Config, McpMode};
use crate::tools::registry::ToolRegistry;
use crate::validation::SchemaValidator;
use crate::{Error, Result};
use log::{info, warn};
use std::sync::Arc;
/// Factory for creating MCP clients based on configuration
pub struct McpClientFactory {
config: Arc<Config>,
registry: Arc<ToolRegistry>,
validator: Arc<SchemaValidator>,
}
impl McpClientFactory {
pub fn new(
config: Arc<Config>,
registry: Arc<ToolRegistry>,
validator: Arc<SchemaValidator>,
) -> Self {
Self {
config,
registry,
validator,
}
}
/// Create an MCP client based on the current configuration.
pub fn create(&self) -> Result<Box<dyn McpClient>> {
self.create_with_secrets(None)
}
/// Create an MCP client using optional runtime secrets (OAuth tokens, env overrides).
pub fn create_with_secrets(
&self,
runtime: Option<McpRuntimeSecrets>,
) -> Result<Box<dyn McpClient>> {
match self.config.mcp.mode {
McpMode::Disabled => Err(Error::Config(
"MCP mode is set to 'disabled'; tooling cannot function in this configuration."
.to_string(),
)),
McpMode::LocalOnly | McpMode::Legacy => {
if matches!(self.config.mcp.mode, McpMode::Legacy) {
warn!("Using deprecated MCP legacy mode; consider switching to 'local_only'.");
}
Ok(Box::new(LocalMcpClient::new(
self.registry.clone(),
self.validator.clone(),
)))
}
McpMode::RemoteOnly => {
let server_cfg = self.config.effective_mcp_servers().first().ok_or_else(|| {
Error::Config(
"MCP mode 'remote_only' requires at least one entry in [[mcp_servers]]"
.to_string(),
)
})?;
RemoteMcpClient::new_with_runtime(server_cfg, runtime)
.map(|client| Box::new(client) as Box<dyn McpClient>)
.map_err(|e| {
Error::Config(format!(
"Failed to start remote MCP client '{}': {e}",
server_cfg.name
))
})
}
McpMode::RemotePreferred => {
if let Some(server_cfg) = self.config.effective_mcp_servers().first() {
match RemoteMcpClient::new_with_runtime(server_cfg, runtime.clone()) {
Ok(client) => {
info!(
"Connected to remote MCP server '{}' via {} transport.",
server_cfg.name, server_cfg.transport
);
Ok(Box::new(client) as Box<dyn McpClient>)
}
Err(e) if self.config.mcp.allow_fallback => {
warn!(
"Failed to start remote MCP client '{}': {}. Falling back to local tooling.",
server_cfg.name, e
);
Ok(Box::new(LocalMcpClient::new(
self.registry.clone(),
self.validator.clone(),
)))
}
Err(e) => Err(Error::Config(format!(
"Failed to start remote MCP client '{}': {e}. To allow fallback, set [mcp].allow_fallback = true.",
server_cfg.name
))),
}
} else {
warn!("No MCP servers configured; using local MCP tooling.");
Ok(Box::new(LocalMcpClient::new(
self.registry.clone(),
self.validator.clone(),
)))
}
}
}
}
/// Check if remote MCP mode is available
pub fn is_remote_available() -> bool {
RemoteMcpClient::new().is_ok()
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::Error;
use crate::config::McpServerConfig;
fn build_factory(config: Config) -> McpClientFactory {
let ui = Arc::new(crate::ui::NoOpUiController);
let registry = Arc::new(ToolRegistry::new(
Arc::new(tokio::sync::Mutex::new(config.clone())),
ui,
));
let validator = Arc::new(SchemaValidator::new());
McpClientFactory::new(Arc::new(config), registry, validator)
}
#[test]
fn test_factory_creates_local_client_when_no_servers_configured() {
let mut config = Config::default();
config.refresh_mcp_servers(None).unwrap();
let factory = build_factory(config);
// Should create without error and fall back to local client
let result = factory.create();
assert!(result.is_ok());
}
#[test]
fn test_remote_only_without_servers_errors() {
let mut config = Config::default();
config.mcp.mode = McpMode::RemoteOnly;
config.mcp_servers.clear();
config.refresh_mcp_servers(None).unwrap();
let factory = build_factory(config);
let result = factory.create();
assert!(matches!(result, Err(Error::Config(_))));
}
#[test]
fn test_remote_preferred_without_fallback_propagates_remote_error() {
let mut config = Config::default();
config.mcp.mode = McpMode::RemotePreferred;
config.mcp.allow_fallback = false;
config.mcp_servers = vec![McpServerConfig {
name: "invalid".to_string(),
command: "nonexistent-mcp-server-binary".to_string(),
args: Vec::new(),
transport: "stdio".to_string(),
env: std::collections::HashMap::new(),
oauth: None,
}];
config.refresh_mcp_servers(None).unwrap();
let factory = build_factory(config);
let result = factory.create();
assert!(
matches!(result, Err(Error::Config(message)) if message.contains("Failed to start remote MCP client"))
);
}
#[test]
fn test_legacy_mode_uses_local_client() {
let mut config = Config::default();
config.mcp.mode = McpMode::Legacy;
let factory = build_factory(config);
let result = factory.create();
assert!(result.is_ok());
}
}

View File

@@ -1,323 +0,0 @@
//! Failover and redundancy support for MCP clients
//!
//! Provides automatic failover between multiple MCP servers with:
//! - Health checking
//! - Priority-based selection
//! - Automatic retry with exponential backoff
//! - Circuit breaker pattern
use super::{McpClient, McpToolCall, McpToolDescriptor, McpToolResponse};
use crate::{Error, Result};
use async_trait::async_trait;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::sync::RwLock;
/// Server health status
#[derive(Debug, Clone, PartialEq)]
pub enum ServerHealth {
/// Server is healthy and available
Healthy,
/// Server is experiencing issues but may recover
Degraded { since: Instant },
/// Server is down
Down { since: Instant },
}
/// Server configuration with priority
#[derive(Clone)]
pub struct ServerEntry {
/// Name for logging
pub name: String,
/// MCP client instance
pub client: Arc<dyn McpClient>,
/// Priority (lower = higher priority)
pub priority: u32,
/// Health status
health: Arc<RwLock<ServerHealth>>,
/// Last health check time
last_check: Arc<RwLock<Option<Instant>>>,
}
impl ServerEntry {
pub fn new(name: String, client: Arc<dyn McpClient>, priority: u32) -> Self {
Self {
name,
client,
priority,
health: Arc::new(RwLock::new(ServerHealth::Healthy)),
last_check: Arc::new(RwLock::new(None)),
}
}
/// Check if server is available
pub async fn is_available(&self) -> bool {
let health = self.health.read().await;
matches!(*health, ServerHealth::Healthy)
}
/// Mark server as healthy
pub async fn mark_healthy(&self) {
let mut health = self.health.write().await;
*health = ServerHealth::Healthy;
let mut last_check = self.last_check.write().await;
*last_check = Some(Instant::now());
}
/// Mark server as down
pub async fn mark_down(&self) {
let mut health = self.health.write().await;
*health = ServerHealth::Down {
since: Instant::now(),
};
}
/// Mark server as degraded
pub async fn mark_degraded(&self) {
let mut health = self.health.write().await;
if matches!(*health, ServerHealth::Healthy) {
*health = ServerHealth::Degraded {
since: Instant::now(),
};
}
}
/// Get current health status
pub async fn get_health(&self) -> ServerHealth {
self.health.read().await.clone()
}
}
/// Failover configuration
#[derive(Debug, Clone)]
pub struct FailoverConfig {
/// Maximum number of retry attempts
pub max_retries: usize,
/// Base retry delay (will be exponentially increased)
pub base_retry_delay: Duration,
/// Health check interval
pub health_check_interval: Duration,
/// Timeout for health checks
pub health_check_timeout: Duration,
/// Circuit breaker threshold (failures before opening circuit)
pub circuit_breaker_threshold: usize,
}
impl Default for FailoverConfig {
fn default() -> Self {
Self {
max_retries: 3,
base_retry_delay: Duration::from_millis(100),
health_check_interval: Duration::from_secs(30),
health_check_timeout: Duration::from_secs(5),
circuit_breaker_threshold: 5,
}
}
}
/// MCP client with failover support
pub struct FailoverMcpClient {
servers: Arc<RwLock<Vec<ServerEntry>>>,
config: FailoverConfig,
consecutive_failures: Arc<RwLock<usize>>,
}
impl FailoverMcpClient {
/// Create a new failover client with multiple servers
pub fn new(servers: Vec<ServerEntry>, config: FailoverConfig) -> Self {
// Sort servers by priority
let mut sorted_servers = servers;
sorted_servers.sort_by_key(|s| s.priority);
Self {
servers: Arc::new(RwLock::new(sorted_servers)),
config,
consecutive_failures: Arc::new(RwLock::new(0)),
}
}
/// Create with default configuration
pub fn with_servers(servers: Vec<ServerEntry>) -> Self {
Self::new(servers, FailoverConfig::default())
}
/// Get the first available server
async fn get_available_server(&self) -> Option<ServerEntry> {
let servers = self.servers.read().await;
for server in servers.iter() {
if server.is_available().await {
return Some(server.clone());
}
}
None
}
/// Execute an operation with automatic failover
async fn with_failover<F, T>(&self, operation: F) -> Result<T>
where
F: Fn(Arc<dyn McpClient>) -> futures::future::BoxFuture<'static, Result<T>>,
T: Send + 'static,
{
let mut attempt = 0;
let mut last_error = None;
while attempt < self.config.max_retries {
// Get available server
let server = match self.get_available_server().await {
Some(s) => s,
None => {
// No healthy servers, try all servers anyway
let servers = self.servers.read().await;
if let Some(first) = servers.first() {
first.clone()
} else {
return Err(Error::Network("No servers configured".to_string()));
}
}
};
// Execute operation
match operation(server.client.clone()).await {
Ok(result) => {
server.mark_healthy().await;
let mut failures = self.consecutive_failures.write().await;
*failures = 0;
return Ok(result);
}
Err(e) => {
log::warn!("Server '{}' failed: {}", server.name, e);
server.mark_degraded().await;
last_error = Some(e);
let mut failures = self.consecutive_failures.write().await;
*failures += 1;
if *failures >= self.config.circuit_breaker_threshold {
server.mark_down().await;
}
}
}
// Exponential backoff
if attempt < self.config.max_retries - 1 {
let delay = self.config.base_retry_delay * 2_u32.pow(attempt as u32);
tokio::time::sleep(delay).await;
}
attempt += 1;
}
Err(last_error.unwrap_or_else(|| Error::Network("All servers failed".to_string())))
}
/// Perform health check on all servers
pub async fn health_check_all(&self) {
let servers = self.servers.read().await;
for server in servers.iter() {
let client = server.client.clone();
let server_clone = server.clone();
tokio::spawn(async move {
match tokio::time::timeout(
Duration::from_secs(5),
// Use a simple list_tools call as health check
async { client.list_tools().await },
)
.await
{
Ok(Ok(_)) => server_clone.mark_healthy().await,
Ok(Err(e)) => {
log::warn!("Health check failed for '{}': {}", server_clone.name, e);
server_clone.mark_down().await;
}
Err(_) => {
log::warn!("Health check timeout for '{}'", server_clone.name);
server_clone.mark_down().await;
}
}
});
}
}
/// Start background health checking
pub fn start_health_checks(&self) -> tokio::task::JoinHandle<()> {
let client = self.clone_ref();
let interval = self.config.health_check_interval;
tokio::spawn(async move {
let mut interval_timer = tokio::time::interval(interval);
loop {
interval_timer.tick().await;
client.health_check_all().await;
}
})
}
/// Clone the client (returns new handle to same underlying data)
fn clone_ref(&self) -> Self {
Self {
servers: self.servers.clone(),
config: self.config.clone(),
consecutive_failures: self.consecutive_failures.clone(),
}
}
/// Get status of all servers
pub async fn get_server_status(&self) -> Vec<(String, ServerHealth)> {
let servers = self.servers.read().await;
let mut status = Vec::new();
for server in servers.iter() {
status.push((server.name.clone(), server.get_health().await));
}
status
}
}
#[async_trait]
impl McpClient for FailoverMcpClient {
async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>> {
self.with_failover(|client| Box::pin(async move { client.list_tools().await }))
.await
}
async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse> {
self.with_failover(|client| {
let call_clone = call.clone();
Box::pin(async move { client.call_tool(call_clone).await })
})
.await
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_server_entry_health() {
use crate::mcp::remote_client::RemoteMcpClient;
// This would need a mock client in practice
// Just demonstrating the API
let config = crate::config::McpServerConfig {
name: "test".to_string(),
command: "test".to_string(),
args: vec![],
transport: "http".to_string(),
env: std::collections::HashMap::new(),
oauth: None,
};
if let Ok(client) = RemoteMcpClient::new_with_config(&config) {
let entry = ServerEntry::new("test".to_string(), Arc::new(client), 1);
assert!(entry.is_available().await);
entry.mark_down().await;
assert!(!entry.is_available().await);
entry.mark_healthy().await;
assert!(entry.is_available().await);
}
}
}

View File

@@ -1,222 +0,0 @@
/// Permission and Safety Layer for MCP
///
/// This module provides runtime enforcement of security policies for tool execution.
/// It wraps MCP clients to filter/whitelist tool calls, log invocations, and prompt for consent.
use super::client::McpClient;
use super::{McpToolCall, McpToolDescriptor, McpToolResponse};
use crate::{Error, Result};
use crate::{config::Config, mode::Mode};
use async_trait::async_trait;
use std::collections::HashSet;
use std::sync::Arc;
/// Callback for requesting user consent for dangerous operations
pub type ConsentCallback = Arc<dyn Fn(&str, &McpToolCall) -> bool + Send + Sync>;
/// Callback for logging tool invocations
pub type LogCallback = Arc<dyn Fn(&str, &McpToolCall, &Result<McpToolResponse>) + Send + Sync>;
/// Permission-enforcing wrapper around an MCP client
pub struct PermissionLayer {
inner: Box<dyn McpClient>,
config: Arc<Config>,
consent_callback: Option<ConsentCallback>,
log_callback: Option<LogCallback>,
allowed_tools: HashSet<String>,
}
impl PermissionLayer {
/// Create a new permission layer wrapping the given client
pub fn new(inner: Box<dyn McpClient>, config: Arc<Config>) -> Self {
let allowed_tools = config.security.allowed_tools.iter().cloned().collect();
Self {
inner,
config,
consent_callback: None,
log_callback: None,
allowed_tools,
}
}
/// Set a callback for requesting user consent
pub fn with_consent_callback(mut self, callback: ConsentCallback) -> Self {
self.consent_callback = Some(callback);
self
}
/// Set a callback for logging tool invocations
pub fn with_log_callback(mut self, callback: LogCallback) -> Self {
self.log_callback = Some(callback);
self
}
/// Check if a tool requires dangerous filesystem operations
fn requires_dangerous_filesystem(&self, tool_name: &str) -> bool {
matches!(
tool_name,
"resources/write" | "resources/delete" | "file_write" | "file_delete"
)
}
/// Check if a tool is allowed by security policy
fn is_tool_allowed(&self, tool_descriptor: &McpToolDescriptor) -> bool {
// Check if tool requires filesystem access
for fs_perm in &tool_descriptor.requires_filesystem {
if !self.allowed_tools.contains(fs_perm) {
return false;
}
}
// Check if tool requires network access
if tool_descriptor.requires_network && !self.allowed_tools.contains("web_search") {
return false;
}
true
}
/// Request user consent for a tool call
fn request_consent(&self, tool_name: &str, call: &McpToolCall) -> bool {
if let Some(ref callback) = self.consent_callback {
callback(tool_name, call)
} else {
// If no callback is set, deny dangerous operations by default
!self.requires_dangerous_filesystem(tool_name)
}
}
/// Log a tool invocation
fn log_invocation(
&self,
tool_name: &str,
call: &McpToolCall,
result: &Result<McpToolResponse>,
) {
if let Some(ref callback) = self.log_callback {
callback(tool_name, call, result);
} else {
// Default logging to stderr
match result {
Ok(resp) => {
eprintln!(
"[MCP] Tool '{}' executed successfully ({}ms)",
tool_name, resp.duration_ms
);
}
Err(e) => {
eprintln!("[MCP] Tool '{}' failed: {}", tool_name, e);
}
}
}
}
}
#[async_trait]
impl McpClient for PermissionLayer {
async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>> {
let tools = self.inner.list_tools().await?;
// Filter tools based on security policy
Ok(tools
.into_iter()
.filter(|tool| self.is_tool_allowed(tool))
.collect())
}
async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse> {
// Check if tool requires consent
if self.requires_dangerous_filesystem(&call.name)
&& self.config.privacy.require_consent_per_session
&& !self.request_consent(&call.name, &call)
{
let result = Err(Error::PermissionDenied(format!(
"User denied consent for tool '{}'",
call.name
)));
self.log_invocation(&call.name, &call, &result);
return result;
}
// Execute the tool call
let result = self.inner.call_tool(call.clone()).await;
// Log the invocation
self.log_invocation(&call.name, &call, &result);
result
}
async fn set_mode(&self, mode: Mode) -> Result<()> {
self.inner.set_mode(mode).await
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::mcp::LocalMcpClient;
use crate::tools::registry::ToolRegistry;
use crate::ui::NoOpUiController;
use crate::validation::SchemaValidator;
use std::sync::atomic::{AtomicBool, Ordering};
#[tokio::test]
async fn test_permission_layer_filters_dangerous_tools() {
let config = Arc::new(Config::default());
let ui = Arc::new(NoOpUiController);
let registry = Arc::new(ToolRegistry::new(
Arc::new(tokio::sync::Mutex::new((*config).clone())),
ui,
));
let validator = Arc::new(SchemaValidator::new());
let client = Box::new(LocalMcpClient::new(registry, validator));
let mut config_mut = (*config).clone();
// Disallow file operations
config_mut.security.allowed_tools = vec!["web_search".to_string()];
let permission_layer = PermissionLayer::new(client, Arc::new(config_mut));
let tools = permission_layer.list_tools().await.unwrap();
// Should not include file_write or file_delete tools
assert!(!tools.iter().any(|t| t.name.contains("write")));
assert!(!tools.iter().any(|t| t.name.contains("delete")));
}
#[tokio::test]
async fn test_consent_callback_is_invoked() {
let config = Arc::new(Config::default());
let ui = Arc::new(NoOpUiController);
let registry = Arc::new(ToolRegistry::new(
Arc::new(tokio::sync::Mutex::new((*config).clone())),
ui,
));
let validator = Arc::new(SchemaValidator::new());
let client = Box::new(LocalMcpClient::new(registry, validator));
let consent_called = Arc::new(AtomicBool::new(false));
let consent_called_clone = consent_called.clone();
let consent_callback: ConsentCallback = Arc::new(move |_tool, _call| {
consent_called_clone.store(true, Ordering::SeqCst);
false // Deny
});
let mut config_mut = (*config).clone();
config_mut.privacy.require_consent_per_session = true;
let permission_layer = PermissionLayer::new(client, Arc::new(config_mut))
.with_consent_callback(consent_callback);
let call = McpToolCall {
name: "resources/write".to_string(),
arguments: serde_json::json!({"path": "test.txt", "content": "hello"}),
};
let result = permission_layer.call_tool(call).await;
assert!(consent_called.load(Ordering::SeqCst));
assert!(result.is_err());
}
}

View File

@@ -1,389 +0,0 @@
/// MCP Protocol Definitions
///
/// This module defines the JSON-RPC protocol contracts for the Model Context Protocol (MCP).
/// It includes request/response schemas, error codes, and versioning semantics.
use serde::{Deserialize, Serialize};
use serde_json::Value;
/// MCP Protocol version - uses semantic versioning
pub const PROTOCOL_VERSION: &str = "1.0.0";
/// JSON-RPC version constant
pub const JSONRPC_VERSION: &str = "2.0";
// ============================================================================
// Error Codes and Handling
// ============================================================================
/// Standard JSON-RPC error codes following the spec
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub struct ErrorCode(pub i64);
impl ErrorCode {
// Standard JSON-RPC 2.0 errors
pub const PARSE_ERROR: Self = Self(-32700);
pub const INVALID_REQUEST: Self = Self(-32600);
pub const METHOD_NOT_FOUND: Self = Self(-32601);
pub const INVALID_PARAMS: Self = Self(-32602);
pub const INTERNAL_ERROR: Self = Self(-32603);
// MCP-specific errors (range -32000 to -32099)
pub const TOOL_NOT_FOUND: Self = Self(-32000);
pub const TOOL_EXECUTION_FAILED: Self = Self(-32001);
pub const PERMISSION_DENIED: Self = Self(-32002);
pub const RESOURCE_NOT_FOUND: Self = Self(-32003);
pub const TIMEOUT: Self = Self(-32004);
pub const VALIDATION_ERROR: Self = Self(-32005);
pub const PATH_TRAVERSAL: Self = Self(-32006);
pub const RATE_LIMIT_EXCEEDED: Self = Self(-32007);
}
/// Structured error response
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RpcError {
pub code: i64,
pub message: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub data: Option<Value>,
}
impl RpcError {
pub fn new(code: ErrorCode, message: impl Into<String>) -> Self {
Self {
code: code.0,
message: message.into(),
data: None,
}
}
pub fn with_data(mut self, data: Value) -> Self {
self.data = Some(data);
self
}
pub fn parse_error(message: impl Into<String>) -> Self {
Self::new(ErrorCode::PARSE_ERROR, message)
}
pub fn invalid_request(message: impl Into<String>) -> Self {
Self::new(ErrorCode::INVALID_REQUEST, message)
}
pub fn method_not_found(method: &str) -> Self {
Self::new(
ErrorCode::METHOD_NOT_FOUND,
format!("Method not found: {}", method),
)
}
pub fn invalid_params(message: impl Into<String>) -> Self {
Self::new(ErrorCode::INVALID_PARAMS, message)
}
pub fn internal_error(message: impl Into<String>) -> Self {
Self::new(ErrorCode::INTERNAL_ERROR, message)
}
pub fn tool_not_found(tool_name: &str) -> Self {
Self::new(
ErrorCode::TOOL_NOT_FOUND,
format!("Tool not found: {}", tool_name),
)
}
pub fn permission_denied(message: impl Into<String>) -> Self {
Self::new(ErrorCode::PERMISSION_DENIED, message)
}
pub fn path_traversal() -> Self {
Self::new(ErrorCode::PATH_TRAVERSAL, "Path traversal attempt detected")
}
}
// ============================================================================
// Request/Response Structures
// ============================================================================
/// JSON-RPC request structure
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RpcRequest {
pub jsonrpc: String,
pub id: RequestId,
pub method: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub params: Option<Value>,
}
impl RpcRequest {
pub fn new(id: RequestId, method: impl Into<String>, params: Option<Value>) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_string(),
id,
method: method.into(),
params,
}
}
}
/// JSON-RPC response structure (success)
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RpcResponse {
pub jsonrpc: String,
pub id: RequestId,
pub result: Value,
}
impl RpcResponse {
pub fn new(id: RequestId, result: Value) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_string(),
id,
result,
}
}
}
/// JSON-RPC error response
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RpcErrorResponse {
pub jsonrpc: String,
pub id: RequestId,
pub error: RpcError,
}
impl RpcErrorResponse {
pub fn new(id: RequestId, error: RpcError) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_string(),
id,
error,
}
}
}
/// JSONRPC notification (no id). Used for streaming partial results.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RpcNotification {
pub jsonrpc: String,
pub method: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub params: Option<Value>,
}
impl RpcNotification {
pub fn new(method: impl Into<String>, params: Option<Value>) -> Self {
Self {
jsonrpc: JSONRPC_VERSION.to_string(),
method: method.into(),
params,
}
}
}
/// Request ID can be string, number, or null
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
#[serde(untagged)]
pub enum RequestId {
Number(u64),
String(String),
}
impl From<u64> for RequestId {
fn from(n: u64) -> Self {
Self::Number(n)
}
}
impl From<String> for RequestId {
fn from(s: String) -> Self {
Self::String(s)
}
}
// ============================================================================
// MCP Method Names
// ============================================================================
/// Standard MCP methods
pub mod methods {
pub const INITIALIZE: &str = "initialize";
pub const TOOLS_LIST: &str = "tools/list";
pub const TOOLS_CALL: &str = "tools/call";
pub const RESOURCES_LIST: &str = "resources/list";
pub const RESOURCES_GET: &str = "resources/get";
pub const RESOURCES_WRITE: &str = "resources/write";
pub const RESOURCES_DELETE: &str = "resources/delete";
pub const MODELS_LIST: &str = "models/list";
}
// ============================================================================
// Initialization Protocol
// ============================================================================
/// Initialize request parameters
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct InitializeParams {
pub protocol_version: String,
pub client_info: ClientInfo,
#[serde(skip_serializing_if = "Option::is_none")]
pub capabilities: Option<ClientCapabilities>,
}
impl Default for InitializeParams {
fn default() -> Self {
Self {
protocol_version: PROTOCOL_VERSION.to_string(),
client_info: ClientInfo {
name: "owlen".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
},
capabilities: None,
}
}
}
/// Client information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ClientInfo {
pub name: String,
pub version: String,
}
/// Client capabilities
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ClientCapabilities {
#[serde(skip_serializing_if = "Option::is_none")]
pub supports_streaming: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub supports_cancellation: Option<bool>,
}
/// Initialize response
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct InitializeResult {
pub protocol_version: String,
pub server_info: ServerInfo,
pub capabilities: ServerCapabilities,
}
/// Server information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServerInfo {
pub name: String,
pub version: String,
}
/// Server capabilities
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ServerCapabilities {
#[serde(skip_serializing_if = "Option::is_none")]
pub supports_tools: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub supports_resources: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub supports_streaming: Option<bool>,
}
// ============================================================================
// Tool Call Protocol
// ============================================================================
/// Parameters for tools/list
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ToolsListParams {
#[serde(skip_serializing_if = "Option::is_none")]
pub filter: Option<String>,
}
/// Parameters for tools/call
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolsCallParams {
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub arguments: Option<Value>,
}
/// Result of tools/call
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolsCallResult {
pub success: bool,
pub output: Value,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub metadata: Option<Value>,
}
// ============================================================================
// Resource Protocol
// ============================================================================
/// Parameters for resources/list
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ResourcesListParams {
pub path: String,
}
/// Parameters for resources/get
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ResourcesGetParams {
pub path: String,
}
/// Parameters for resources/write
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ResourcesWriteParams {
pub path: String,
pub content: String,
}
/// Parameters for resources/delete
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ResourcesDeleteParams {
pub path: String,
}
// ============================================================================
// Versioning and Compatibility
// ============================================================================
/// Check if a protocol version is compatible
pub fn is_compatible(client_version: &str, server_version: &str) -> bool {
// For now, simple exact match on major version
let client_major = client_version.split('.').next().unwrap_or("0");
let server_major = server_version.split('.').next().unwrap_or("0");
client_major == server_major
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_error_codes() {
let err = RpcError::tool_not_found("test_tool");
assert_eq!(err.code, ErrorCode::TOOL_NOT_FOUND.0);
assert!(err.message.contains("test_tool"));
}
#[test]
fn test_version_compatibility() {
assert!(is_compatible("1.0.0", "1.0.0"));
assert!(is_compatible("1.0.0", "1.1.0"));
assert!(is_compatible("1.2.5", "1.0.0"));
assert!(!is_compatible("1.0.0", "2.0.0"));
assert!(!is_compatible("2.0.0", "1.0.0"));
}
#[test]
fn test_request_serialization() {
let req = RpcRequest::new(
RequestId::Number(1),
"tools/call",
Some(serde_json::json!({"name": "test"})),
);
let json = serde_json::to_string(&req).unwrap();
assert!(json.contains("\"jsonrpc\":\"2.0\""));
assert!(json.contains("\"method\":\"tools/call\""));
}
}

View File

@@ -1,593 +0,0 @@
use super::protocol::methods;
use super::protocol::{
PROTOCOL_VERSION, RequestId, RpcErrorResponse, RpcNotification, RpcRequest, RpcResponse,
};
use super::{McpClient, McpToolCall, McpToolDescriptor, McpToolResponse};
use crate::consent::{ConsentManager, ConsentScope};
use crate::tools::{Tool, WebScrapeTool, WebSearchTool};
use crate::types::ModelInfo;
use crate::types::{ChatResponse, Message, Role};
use crate::{
ChatStream, Error, LlmProvider, Result, facade::llm_client::LlmClient, mode::Mode,
send_via_stream,
};
use anyhow::anyhow;
use futures::{StreamExt, future::BoxFuture, stream};
use reqwest::Client as HttpClient;
use serde_json::json;
use std::collections::HashMap;
use std::path::Path;
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Duration;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::{Child, Command};
use tokio::sync::Mutex;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream, connect_async};
use tungstenite::protocol::Message as WsMessage;
/// Client that talks to the external `owlen-mcp-server` over STDIO, HTTP, or WebSocket.
pub struct RemoteMcpClient {
// Child process handling the server (kept alive for the duration of the client).
#[allow(dead_code)]
// For stdio transport, we keep the child process handles.
child: Option<Arc<Mutex<Child>>>,
stdin: Option<Arc<Mutex<tokio::process::ChildStdin>>>, // async write
stdout: Option<Arc<Mutex<BufReader<tokio::process::ChildStdout>>>>,
// For HTTP transport we keep a reusable client and base URL.
http_client: Option<HttpClient>,
http_endpoint: Option<String>,
// For WebSocket transport we keep a WebSocket stream.
ws_stream: Option<Arc<Mutex<WebSocketStream<MaybeTlsStream<tokio::net::TcpStream>>>>>,
#[allow(dead_code)] // Useful for debugging/logging
ws_endpoint: Option<String>,
// Incrementing request identifier.
next_id: AtomicU64,
// Optional HTTP header (name, value) injected into every request.
http_header: Option<(String, String)>,
}
/// Runtime secrets provided when constructing an MCP client.
#[derive(Debug, Default, Clone)]
pub struct McpRuntimeSecrets {
pub env_overrides: HashMap<String, String>,
pub http_header: Option<(String, String)>,
}
impl RemoteMcpClient {
/// Spawn the MCP server binary and prepare communication channels.
/// Spawn an MCP server based on a configuration entry.
/// The `transport` field must be "stdio" (the only supported mode).
/// Spawn an external MCP server based on a configuration entry.
/// The server must communicate over STDIO (the only supported transport).
pub fn new_with_config(config: &crate::config::McpServerConfig) -> Result<Self> {
Self::new_with_runtime(config, None)
}
pub fn new_with_runtime(
config: &crate::config::McpServerConfig,
runtime: Option<McpRuntimeSecrets>,
) -> Result<Self> {
let mut runtime = runtime.unwrap_or_default();
let transport = config.transport.to_lowercase();
match transport.as_str() {
"stdio" => {
// Build the command using the provided binary and arguments.
let mut cmd = Command::new(config.command.clone());
if !config.args.is_empty() {
cmd.args(config.args.clone());
}
cmd.stdin(std::process::Stdio::piped())
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::inherit());
// Apply environment variables defined in the configuration.
for (k, v) in config.env.iter() {
cmd.env(k, v);
}
for (k, v) in runtime.env_overrides.drain() {
cmd.env(k, v);
}
let mut child = cmd.spawn().map_err(|e| {
Error::Io(std::io::Error::new(
e.kind(),
format!("Failed to spawn MCP server '{}': {}", config.name, e),
))
})?;
let stdin = child.stdin.take().ok_or_else(|| {
Error::Io(std::io::Error::other(
"Failed to capture stdin of MCP server",
))
})?;
let stdout = child.stdout.take().ok_or_else(|| {
Error::Io(std::io::Error::other(
"Failed to capture stdout of MCP server",
))
})?;
Ok(Self {
child: Some(Arc::new(Mutex::new(child))),
stdin: Some(Arc::new(Mutex::new(stdin))),
stdout: Some(Arc::new(Mutex::new(BufReader::new(stdout)))),
http_client: None,
http_endpoint: None,
ws_stream: None,
ws_endpoint: None,
next_id: AtomicU64::new(1),
http_header: None,
})
}
"http" => {
// For HTTP we treat `command` as the base URL.
let client = HttpClient::builder()
.timeout(Duration::from_secs(30))
.build()
.map_err(|e| Error::Network(e.to_string()))?;
Ok(Self {
child: None,
stdin: None,
stdout: None,
http_client: Some(client),
http_endpoint: Some(config.command.clone()),
ws_stream: None,
ws_endpoint: None,
next_id: AtomicU64::new(1),
http_header: runtime.http_header.take(),
})
}
"websocket" => {
// For WebSocket, the `command` field contains the WebSocket URL.
// We need to use a blocking task to establish the connection.
let ws_url = config.command.clone();
let (ws_stream, _response) = tokio::task::block_in_place(|| {
tokio::runtime::Handle::current().block_on(async {
connect_async(&ws_url).await.map_err(|e| {
Error::Network(format!("WebSocket connection failed: {}", e))
})
})
})?;
Ok(Self {
child: None,
stdin: None,
stdout: None,
http_client: None,
http_endpoint: None,
ws_stream: Some(Arc::new(Mutex::new(ws_stream))),
ws_endpoint: Some(ws_url),
next_id: AtomicU64::new(1),
http_header: runtime.http_header.take(),
})
}
other => Err(Error::NotImplemented(format!(
"Transport '{}' not supported",
other
))),
}
}
/// Legacy constructor kept for compatibility; attempts to locate a binary.
pub fn new() -> Result<Self> {
// Fall back to searching for a binary as before, then delegate to new_with_config.
let workspace_root = std::path::Path::new(env!("CARGO_MANIFEST_DIR"))
.join("../..")
.canonicalize()
.map_err(Error::Io)?;
// Prefer the LLM server binary as it provides both LLM and resource tools.
// The generic file-server is kept as a fallback for testing.
let candidates = [
"target/debug/owlen-mcp-llm-server",
"target/release/owlen-mcp-llm-server",
"target/debug/owlen-mcp-server",
];
let binary_path = candidates
.iter()
.map(|rel| workspace_root.join(rel))
.find(|p| p.exists())
.ok_or_else(|| {
Error::NotImplemented(format!(
"owlen-mcp server binary not found; checked {}, {}, and {}",
candidates[0], candidates[1], candidates[2]
))
})?;
let config = crate::config::McpServerConfig {
name: "default".to_string(),
command: binary_path.to_string_lossy().into_owned(),
args: Vec::new(),
transport: "stdio".to_string(),
env: std::collections::HashMap::new(),
oauth: None,
};
Self::new_with_config(&config)
}
async fn send_rpc(&self, method: &str, params: serde_json::Value) -> Result<serde_json::Value> {
let id = RequestId::Number(self.next_id.fetch_add(1, Ordering::Relaxed));
let request = RpcRequest::new(id.clone(), method, Some(params));
let req_str = serde_json::to_string(&request)? + "\n";
// For stdio transport we forward the request to the child process.
if let Some(stdin_arc) = &self.stdin {
let mut stdin = stdin_arc.lock().await;
stdin.write_all(req_str.as_bytes()).await?;
stdin.flush().await?;
}
// Read a single line response
// Handle based on selected transport.
if let Some(client) = &self.http_client {
// HTTP: POST JSON body to endpoint.
let endpoint = self
.http_endpoint
.as_ref()
.ok_or_else(|| Error::Network("Missing HTTP endpoint".into()))?;
let mut builder = client.post(endpoint);
if let Some((ref header_name, ref header_value)) = self.http_header {
builder = builder.header(header_name, header_value);
}
let resp = builder
.json(&request)
.send()
.await
.map_err(|e| Error::Network(e.to_string()))?;
let text = resp
.text()
.await
.map_err(|e| Error::Network(e.to_string()))?;
// Try to parse as success then error.
if let Ok(r) = serde_json::from_str::<RpcResponse>(&text)
&& r.id == id
{
return Ok(r.result);
}
let err_resp: RpcErrorResponse =
serde_json::from_str(&text).map_err(Error::Serialization)?;
return Err(Error::Network(format!(
"MCP server error {}: {}",
err_resp.error.code, err_resp.error.message
)));
}
// WebSocket path.
if let Some(ws_arc) = &self.ws_stream {
use futures::SinkExt;
let mut ws = ws_arc.lock().await;
// Send request as text message
let req_json = serde_json::to_string(&request)?;
ws.send(WsMessage::Text(req_json))
.await
.map_err(|e| Error::Network(format!("WebSocket send failed: {}", e)))?;
// Read response
let response_msg = ws
.next()
.await
.ok_or_else(|| Error::Network("WebSocket stream closed".into()))?
.map_err(|e| Error::Network(format!("WebSocket receive failed: {}", e)))?;
let response_text = match response_msg {
WsMessage::Text(text) => text,
WsMessage::Binary(data) => String::from_utf8(data).map_err(|e| {
Error::Network(format!("Invalid UTF-8 in binary message: {}", e))
})?,
WsMessage::Close(_) => {
return Err(Error::Network(
"WebSocket connection closed by server".into(),
));
}
_ => return Err(Error::Network("Unexpected WebSocket message type".into())),
};
// Try to parse as success then error.
if let Ok(r) = serde_json::from_str::<RpcResponse>(&response_text)
&& r.id == id
{
return Ok(r.result);
}
let err_resp: RpcErrorResponse =
serde_json::from_str(&response_text).map_err(Error::Serialization)?;
return Err(Error::Network(format!(
"MCP server error {}: {}",
err_resp.error.code, err_resp.error.message
)));
}
// STDIO path (default).
// Loop to skip notifications and find the response with matching ID.
loop {
let mut line = String::new();
{
let mut stdout = self
.stdout
.as_ref()
.ok_or_else(|| Error::Network("STDIO stdout not available".into()))?
.lock()
.await;
stdout.read_line(&mut line).await?;
}
// Try to parse as notification first (has no id field)
if let Ok(_notif) = serde_json::from_str::<RpcNotification>(&line) {
// Skip notifications and continue reading
continue;
}
// Try to parse successful response
if let Ok(resp) = serde_json::from_str::<RpcResponse>(&line) {
if resp.id == id {
return Ok(resp.result);
}
// If ID doesn't match, continue (though this shouldn't happen)
continue;
}
// Fallback to error response
if let Ok(err_resp) = serde_json::from_str::<RpcErrorResponse>(&line) {
return Err(Error::Network(format!(
"MCP server error {}: {}",
err_resp.error.code, err_resp.error.message
)));
}
// If we can't parse as any known type, return error
return Err(Error::Network(format!(
"Unable to parse server response: {}",
line.trim()
)));
}
}
}
impl RemoteMcpClient {
/// Convenience wrapper delegating to the `McpClient` trait methods.
pub async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>> {
<Self as McpClient>::list_tools(self).await
}
pub async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse> {
<Self as McpClient>::call_tool(self, call).await
}
}
#[async_trait::async_trait]
impl McpClient for RemoteMcpClient {
async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>> {
// Query the remote MCP server for its tool descriptors using the standard
// `tools/list` RPC method. The server returns a JSON array of
// `McpToolDescriptor` objects.
let result = self.send_rpc(methods::TOOLS_LIST, json!(null)).await?;
let descriptors: Vec<McpToolDescriptor> = serde_json::from_value(result)?;
Ok(descriptors)
}
async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse> {
// Local handling for simple resource tools to avoid needing the MCP server
// to implement them.
if call.name.starts_with("resources/get") {
let path = call
.arguments
.get("path")
.and_then(|v| v.as_str())
.unwrap_or("");
let content = std::fs::read_to_string(path).map_err(Error::Io)?;
return Ok(McpToolResponse {
name: call.name,
success: true,
output: serde_json::json!(content),
metadata: std::collections::HashMap::new(),
duration_ms: 0,
});
}
if call.name.starts_with("resources/list") {
let path = call
.arguments
.get("path")
.and_then(|v| v.as_str())
.unwrap_or(".");
let mut names = Vec::new();
for entry in std::fs::read_dir(path).map_err(Error::Io)?.flatten() {
if let Some(name) = entry.file_name().to_str() {
names.push(name.to_string());
}
}
return Ok(McpToolResponse {
name: call.name,
success: true,
output: serde_json::json!(names),
metadata: std::collections::HashMap::new(),
duration_ms: 0,
});
}
// Handle write and delete resources locally as well.
if call.name.starts_with("resources/write") {
let path = call
.arguments
.get("path")
.and_then(|v| v.as_str())
.ok_or_else(|| Error::InvalidInput("path missing".into()))?;
// Simple pathtraversal protection: reject any path containing ".." or absolute paths.
if path.contains("..") || Path::new(path).is_absolute() {
return Err(Error::InvalidInput("path traversal".into()));
}
let content = call
.arguments
.get("content")
.and_then(|v| v.as_str())
.ok_or_else(|| Error::InvalidInput("content missing".into()))?;
std::fs::write(path, content).map_err(Error::Io)?;
return Ok(McpToolResponse {
name: call.name,
success: true,
output: serde_json::json!(null),
metadata: std::collections::HashMap::new(),
duration_ms: 0,
});
}
if call.name.starts_with("resources/delete") {
let path = call
.arguments
.get("path")
.and_then(|v| v.as_str())
.ok_or_else(|| Error::InvalidInput("path missing".into()))?;
if path.contains("..") || Path::new(path).is_absolute() {
return Err(Error::InvalidInput("path traversal".into()));
}
std::fs::remove_file(path).map_err(Error::Io)?;
return Ok(McpToolResponse {
name: call.name,
success: true,
output: serde_json::json!(null),
metadata: std::collections::HashMap::new(),
duration_ms: 0,
});
}
// Local handling for web tools to avoid needing an external MCP server.
if call.name == "web_search" {
// Autogrant consent for the web_search tool (permanent for this process).
let consent_manager = std::sync::Arc::new(std::sync::Mutex::new(ConsentManager::new()));
{
let mut cm = consent_manager
.lock()
.map_err(|_| Error::Provider(anyhow!("Consent manager mutex poisoned")))?;
cm.grant_consent_with_scope(
"web_search",
Vec::new(),
Vec::new(),
ConsentScope::Permanent,
);
}
let tool = WebSearchTool::new(consent_manager.clone(), None, None);
let result = tool
.execute(call.arguments.clone())
.await
.map_err(|e| Error::Provider(e.into()))?;
return Ok(McpToolResponse {
name: call.name,
success: true,
output: result.output,
metadata: std::collections::HashMap::new(),
duration_ms: result.duration.as_millis() as u128,
});
}
if call.name == "web_scrape" {
let tool = WebScrapeTool::new();
let result = tool
.execute(call.arguments.clone())
.await
.map_err(|e| Error::Provider(e.into()))?;
return Ok(McpToolResponse {
name: call.name,
success: true,
output: result.output,
metadata: std::collections::HashMap::new(),
duration_ms: result.duration.as_millis() as u128,
});
}
// MCP server expects a generic "tools/call" method with a payload containing the
// specific tool name and its arguments. Wrap the incoming call accordingly.
let payload = serde_json::to_value(&call)?;
let result = self.send_rpc(methods::TOOLS_CALL, payload).await?;
// The server returns an McpToolResponse; deserialize it.
let response: McpToolResponse = serde_json::from_value(result)?;
Ok(response)
}
async fn set_mode(&self, _mode: Mode) -> Result<()> {
// Remote servers manage their own mode settings; treat as best-effort no-op.
Ok(())
}
}
// ---------------------------------------------------------------------------
// Provider implementation forwards chat requests to the generate_text tool.
// ---------------------------------------------------------------------------
impl LlmProvider for RemoteMcpClient {
type Stream = stream::Iter<std::vec::IntoIter<Result<ChatResponse>>>;
type ListModelsFuture<'a> = BoxFuture<'a, Result<Vec<ModelInfo>>>;
type SendPromptFuture<'a> = BoxFuture<'a, Result<ChatResponse>>;
type StreamPromptFuture<'a> = BoxFuture<'a, Result<Self::Stream>>;
type HealthCheckFuture<'a> = BoxFuture<'a, Result<()>>;
fn name(&self) -> &str {
"mcp-llm-server"
}
fn list_models(&self) -> Self::ListModelsFuture<'_> {
Box::pin(async move {
let result = self.send_rpc(methods::MODELS_LIST, json!(null)).await?;
let models: Vec<ModelInfo> = serde_json::from_value(result)?;
Ok(models)
})
}
fn send_prompt(&self, request: crate::types::ChatRequest) -> Self::SendPromptFuture<'_> {
Box::pin(send_via_stream(self, request))
}
fn stream_prompt(&self, request: crate::types::ChatRequest) -> Self::StreamPromptFuture<'_> {
Box::pin(async move {
let args = serde_json::json!({
"messages": request.messages,
"temperature": request.parameters.temperature,
"max_tokens": request.parameters.max_tokens,
"model": request.model,
"stream": request.parameters.stream,
});
let call = McpToolCall {
name: "generate_text".to_string(),
arguments: args,
};
let resp = self.call_tool(call).await?;
let content = resp.output.as_str().unwrap_or("").to_string();
let message = Message::new(Role::Assistant, content);
let chat_resp = ChatResponse {
message,
usage: None,
is_streaming: false,
is_final: true,
};
Ok(stream::iter(vec![Ok(chat_resp)]))
})
}
fn health_check(&self) -> Self::HealthCheckFuture<'_> {
Box::pin(async move {
let params = serde_json::json!({
"protocol_version": PROTOCOL_VERSION,
"client_info": {
"name": "owlen",
"version": env!("CARGO_PKG_VERSION"),
},
"capabilities": {}
});
self.send_rpc(methods::INITIALIZE, params).await.map(|_| ())
})
}
}
#[async_trait::async_trait]
impl LlmClient for RemoteMcpClient {
async fn list_models(&self) -> Result<Vec<ModelInfo>> {
<Self as LlmProvider>::list_models(self).await
}
async fn send_chat(&self, request: crate::types::ChatRequest) -> Result<ChatResponse> {
<Self as LlmProvider>::send_prompt(self, request).await
}
async fn stream_chat(&self, request: crate::types::ChatRequest) -> Result<ChatStream> {
let stream = <Self as LlmProvider>::stream_prompt(self, request).await?;
Ok(Box::pin(stream))
}
async fn list_tools(&self) -> Result<Vec<McpToolDescriptor>> {
<Self as McpClient>::list_tools(self).await
}
async fn call_tool(&self, call: McpToolCall) -> Result<McpToolResponse> {
<Self as McpClient>::call_tool(self, call).await
}
}

View File

@@ -1,182 +0,0 @@
//! Operating modes for Owlen
//!
//! Defines the different modes in which Owlen can operate and their associated
//! tool availability policies.
use serde::{Deserialize, Serialize};
use std::str::FromStr;
/// Operating mode for Owlen
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize, Default)]
#[serde(rename_all = "lowercase")]
pub enum Mode {
/// Chat mode - limited tool access, safe for general conversation
#[default]
Chat,
/// Code mode - full tool access for development tasks
Code,
}
impl Mode {
/// Get the display name for this mode
pub fn display_name(&self) -> &'static str {
match self {
Mode::Chat => "chat",
Mode::Code => "code",
}
}
}
impl std::fmt::Display for Mode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.display_name())
}
}
impl FromStr for Mode {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s.to_lowercase().as_str() {
"chat" => Ok(Mode::Chat),
"code" => Ok(Mode::Code),
_ => Err(format!(
"Invalid mode: '{}'. Valid modes are 'chat' or 'code'",
s
)),
}
}
}
/// Configuration for tool availability in different modes
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ModeConfig {
/// Tools allowed in chat mode
#[serde(default = "ModeConfig::default_chat_tools")]
pub chat: ModeToolConfig,
/// Tools allowed in code mode
#[serde(default = "ModeConfig::default_code_tools")]
pub code: ModeToolConfig,
}
impl Default for ModeConfig {
fn default() -> Self {
Self {
chat: Self::default_chat_tools(),
code: Self::default_code_tools(),
}
}
}
impl ModeConfig {
fn default_chat_tools() -> ModeToolConfig {
ModeToolConfig {
allowed_tools: vec!["web_search".to_string()],
}
}
fn default_code_tools() -> ModeToolConfig {
ModeToolConfig {
allowed_tools: vec!["*".to_string()], // All tools allowed
}
}
/// Check if a tool is allowed in the given mode
pub fn is_tool_allowed(&self, mode: Mode, tool_name: &str) -> bool {
let config = match mode {
Mode::Chat => &self.chat,
Mode::Code => &self.code,
};
config.is_tool_allowed(tool_name)
}
}
/// Tool configuration for a specific mode
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ModeToolConfig {
/// List of allowed tools. Use "*" to allow all tools.
pub allowed_tools: Vec<String>,
}
impl ModeToolConfig {
/// Check if a tool is allowed in this mode
pub fn is_tool_allowed(&self, tool_name: &str) -> bool {
// Check for wildcard
if self.allowed_tools.iter().any(|t| t == "*") {
return true;
}
// Check if tool is explicitly listed
self.allowed_tools.iter().any(|t| t == tool_name)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_mode_display() {
assert_eq!(Mode::Chat.to_string(), "chat");
assert_eq!(Mode::Code.to_string(), "code");
}
#[test]
fn test_mode_from_str() {
assert_eq!("chat".parse::<Mode>(), Ok(Mode::Chat));
assert_eq!("code".parse::<Mode>(), Ok(Mode::Code));
assert_eq!("CHAT".parse::<Mode>(), Ok(Mode::Chat));
assert_eq!("CODE".parse::<Mode>(), Ok(Mode::Code));
assert!("invalid".parse::<Mode>().is_err());
}
#[test]
fn test_default_mode() {
assert_eq!(Mode::default(), Mode::Chat);
}
#[test]
fn test_chat_mode_restrictions() {
let config = ModeConfig::default();
// Web search should be allowed in chat mode
assert!(config.is_tool_allowed(Mode::Chat, "web_search"));
// Code exec should not be allowed in chat mode
assert!(!config.is_tool_allowed(Mode::Chat, "code_exec"));
assert!(!config.is_tool_allowed(Mode::Chat, "file_write"));
}
#[test]
fn test_code_mode_allows_all() {
let config = ModeConfig::default();
// All tools should be allowed in code mode
assert!(config.is_tool_allowed(Mode::Code, "web_search"));
assert!(config.is_tool_allowed(Mode::Code, "code_exec"));
assert!(config.is_tool_allowed(Mode::Code, "file_write"));
assert!(config.is_tool_allowed(Mode::Code, "anything"));
}
#[test]
fn test_wildcard_tool_config() {
let config = ModeToolConfig {
allowed_tools: vec!["*".to_string()],
};
assert!(config.is_tool_allowed("any_tool"));
assert!(config.is_tool_allowed("another_tool"));
}
#[test]
fn test_explicit_tool_list() {
let config = ModeToolConfig {
allowed_tools: vec!["tool1".to_string(), "tool2".to_string()],
};
assert!(config.is_tool_allowed("tool1"));
assert!(config.is_tool_allowed("tool2"));
assert!(!config.is_tool_allowed("tool3"));
}
}

View File

@@ -1,209 +0,0 @@
pub mod details;
pub use details::{DetailedModelInfo, ModelInfoRetrievalError};
use crate::Result;
use crate::types::ModelInfo;
use std::collections::HashMap;
use std::future::Future;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::sync::RwLock;
#[derive(Default, Debug)]
struct ModelCache {
models: Vec<ModelInfo>,
last_refresh: Option<Instant>,
}
/// Caches model listings for improved selection performance
#[derive(Clone, Debug)]
pub struct ModelManager {
cache: Arc<RwLock<ModelCache>>,
ttl: Duration,
}
impl ModelManager {
/// Create a new manager with the desired cache TTL
pub fn new(ttl: Duration) -> Self {
Self {
cache: Arc::new(RwLock::new(ModelCache::default())),
ttl,
}
}
/// Get cached models, refreshing via the provided fetcher when stale. Returns the up-to-date model list.
pub async fn get_or_refresh<F, Fut>(
&self,
force_refresh: bool,
fetcher: F,
) -> Result<Vec<ModelInfo>>
where
F: FnOnce() -> Fut,
Fut: Future<Output = Result<Vec<ModelInfo>>>,
{
if let (false, Some(models)) = (force_refresh, self.cached_if_fresh().await) {
return Ok(models);
}
let models = fetcher().await?;
let mut cache = self.cache.write().await;
cache.models = models.clone();
cache.last_refresh = Some(Instant::now());
Ok(models)
}
/// Return cached models without refreshing
pub async fn cached(&self) -> Vec<ModelInfo> {
self.cache.read().await.models.clone()
}
/// Drop cached models, forcing next call to refresh
pub async fn invalidate(&self) {
let mut cache = self.cache.write().await;
cache.models.clear();
cache.last_refresh = None;
}
/// Select a model by id or name from the cache
pub async fn select(&self, identifier: &str) -> Option<ModelInfo> {
let cache = self.cache.read().await;
cache
.models
.iter()
.find(|m| m.id == identifier || m.name == identifier)
.cloned()
}
async fn cached_if_fresh(&self) -> Option<Vec<ModelInfo>> {
let cache = self.cache.read().await;
let fresh = matches!(cache.last_refresh, Some(ts) if ts.elapsed() < self.ttl);
if fresh && !cache.models.is_empty() {
Some(cache.models.clone())
} else {
None
}
}
}
#[derive(Default, Debug)]
struct ModelDetailsCacheInner {
by_key: HashMap<String, DetailedModelInfo>,
name_to_key: HashMap<String, String>,
fetched_at: HashMap<String, Instant>,
}
/// Cache for rich model details, indexed by digest when available.
#[derive(Clone, Debug)]
pub struct ModelDetailsCache {
inner: Arc<RwLock<ModelDetailsCacheInner>>,
ttl: Duration,
}
impl ModelDetailsCache {
/// Create a new details cache with the provided TTL.
pub fn new(ttl: Duration) -> Self {
Self {
inner: Arc::new(RwLock::new(ModelDetailsCacheInner::default())),
ttl,
}
}
/// Try to read cached details for the provided model name.
pub async fn get(&self, name: &str) -> Option<DetailedModelInfo> {
let mut inner = self.inner.write().await;
let key = inner.name_to_key.get(name).cloned()?;
let stale = inner
.fetched_at
.get(&key)
.is_some_and(|ts| ts.elapsed() >= self.ttl);
if stale {
inner.by_key.remove(&key);
inner.name_to_key.remove(name);
inner.fetched_at.remove(&key);
return None;
}
inner.by_key.get(&key).cloned()
}
/// Cache the provided details, overwriting existing entries.
pub async fn insert(&self, info: DetailedModelInfo) {
let key = info.digest.clone().unwrap_or_else(|| info.name.clone());
let mut inner = self.inner.write().await;
// Remove prior mappings for this model name (possibly different digest).
if let Some(previous_key) = inner.name_to_key.get(&info.name).cloned()
&& previous_key != key
{
inner.by_key.remove(&previous_key);
inner.fetched_at.remove(&previous_key);
}
inner.fetched_at.insert(key.clone(), Instant::now());
inner.name_to_key.insert(info.name.clone(), key.clone());
inner.by_key.insert(key, info);
}
/// Remove a specific model from the cache.
pub async fn invalidate(&self, name: &str) {
let mut inner = self.inner.write().await;
if let Some(key) = inner.name_to_key.remove(name) {
inner.by_key.remove(&key);
inner.fetched_at.remove(&key);
}
}
/// Clear the entire cache.
pub async fn invalidate_all(&self) {
let mut inner = self.inner.write().await;
inner.by_key.clear();
inner.name_to_key.clear();
inner.fetched_at.clear();
}
/// Return all cached values regardless of freshness.
pub async fn cached(&self) -> Vec<DetailedModelInfo> {
let inner = self.inner.read().await;
inner.by_key.values().cloned().collect()
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::time::Duration;
use tokio::time::sleep;
fn sample_details(name: &str) -> DetailedModelInfo {
DetailedModelInfo {
name: name.to_string(),
..Default::default()
}
}
#[tokio::test]
async fn model_details_cache_returns_cached_entry() {
let cache = ModelDetailsCache::new(Duration::from_millis(50));
let info = sample_details("llama");
cache.insert(info.clone()).await;
let cached = cache.get("llama").await;
assert!(cached.is_some());
assert_eq!(cached.unwrap().name, "llama");
}
#[tokio::test]
async fn model_details_cache_expires_based_on_ttl() {
let cache = ModelDetailsCache::new(Duration::from_millis(10));
cache.insert(sample_details("phi")).await;
sleep(Duration::from_millis(30)).await;
assert!(cache.get("phi").await.is_none());
}
#[tokio::test]
async fn model_details_cache_invalidate_removes_entry() {
let cache = ModelDetailsCache::new(Duration::from_secs(1));
cache.insert(sample_details("mistral")).await;
cache.invalidate("mistral").await;
assert!(cache.get("mistral").await.is_none());
}
}

View File

@@ -1,105 +0,0 @@
//! Detailed model metadata for provider inspection features.
//!
//! These types capture richer information about locally available models
//! than the lightweight [`crate::types::ModelInfo`] listing and back the
//! higher-level inspection UI exposed in the Owlen TUI.
use serde::{Deserialize, Serialize};
/// Rich metadata about an Ollama model.
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct DetailedModelInfo {
/// Canonical model name (including tag).
pub name: String,
/// Reported architecture or model format.
#[serde(skip_serializing_if = "Option::is_none")]
pub architecture: Option<String>,
/// Human-readable parameter / quantisation summary.
#[serde(skip_serializing_if = "Option::is_none")]
pub parameters: Option<String>,
/// Context window length, if provided.
#[serde(skip_serializing_if = "Option::is_none")]
pub context_length: Option<u64>,
/// Embedding vector length for embedding-capable models.
#[serde(skip_serializing_if = "Option::is_none")]
pub embedding_length: Option<u64>,
/// Quantisation level (e.g., Q4_0, Q5_K_M).
#[serde(skip_serializing_if = "Option::is_none")]
pub quantization: Option<String>,
/// Primary family identifier (e.g., llama3).
#[serde(skip_serializing_if = "Option::is_none")]
pub family: Option<String>,
/// Additional family tags reported by Ollama.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub families: Vec<String>,
/// Verbose parameter size description (e.g., 70B parameters).
#[serde(skip_serializing_if = "Option::is_none")]
pub parameter_size: Option<String>,
/// Default prompt template packaged with the model.
#[serde(skip_serializing_if = "Option::is_none")]
pub template: Option<String>,
/// Default system prompt packaged with the model.
#[serde(skip_serializing_if = "Option::is_none")]
pub system: Option<String>,
/// License string provided by the model.
#[serde(skip_serializing_if = "Option::is_none")]
pub license: Option<String>,
/// Raw modelfile contents (if available).
#[serde(skip_serializing_if = "Option::is_none")]
pub modelfile: Option<String>,
/// Modification timestamp (ISO-8601) if reported.
#[serde(skip_serializing_if = "Option::is_none")]
pub modified_at: Option<String>,
/// Approximate model size in bytes.
#[serde(skip_serializing_if = "Option::is_none")]
pub size: Option<u64>,
/// Digest / checksum used by Ollama (sha256).
#[serde(skip_serializing_if = "Option::is_none")]
pub digest: Option<String>,
}
impl DetailedModelInfo {
/// Convenience helper that normalises empty strings to `None`.
pub fn with_normalised_strings(mut self) -> Self {
if self.architecture.as_ref().is_some_and(String::is_empty) {
self.architecture = None;
}
if self.parameters.as_ref().is_some_and(String::is_empty) {
self.parameters = None;
}
if self.quantization.as_ref().is_some_and(String::is_empty) {
self.quantization = None;
}
if self.family.as_ref().is_some_and(String::is_empty) {
self.family = None;
}
if self.parameter_size.as_ref().is_some_and(String::is_empty) {
self.parameter_size = None;
}
if self.template.as_ref().is_some_and(String::is_empty) {
self.template = None;
}
if self.system.as_ref().is_some_and(String::is_empty) {
self.system = None;
}
if self.license.as_ref().is_some_and(String::is_empty) {
self.license = None;
}
if self.modelfile.as_ref().is_some_and(String::is_empty) {
self.modelfile = None;
}
if self.digest.as_ref().is_some_and(String::is_empty) {
self.digest = None;
}
self
}
}
/// Error payload returned when model inspection fails for a specific model.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ModelInfoRetrievalError {
/// Model that failed to resolve.
pub model_name: String,
/// Human-readable description of the failure.
pub error_message: String,
}

View File

@@ -1,507 +0,0 @@
use std::time::Duration as StdDuration;
use chrono::{DateTime, Duration, Utc};
use reqwest::Client;
use serde::{Deserialize, Serialize};
use crate::{Error, Result, config::McpOAuthConfig};
/// Persisted OAuth token set for MCP servers and providers.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Default)]
pub struct OAuthToken {
/// Bearer access token returned by the authorization server.
pub access_token: String,
/// Optional refresh token if the provider issues one.
#[serde(default)]
pub refresh_token: Option<String>,
/// Absolute UTC expiration timestamp for the access token.
#[serde(default)]
pub expires_at: Option<DateTime<Utc>>,
/// Optional space-delimited scope string supplied by the provider.
#[serde(default)]
pub scope: Option<String>,
/// Token type reported by the provider (typically `Bearer`).
#[serde(default)]
pub token_type: Option<String>,
}
impl OAuthToken {
/// Returns `true` if the access token has expired at the provided instant.
pub fn is_expired(&self, now: DateTime<Utc>) -> bool {
matches!(self.expires_at, Some(expiry) if now >= expiry)
}
/// Returns `true` if the token will expire within the supplied duration window.
pub fn will_expire_within(&self, window: Duration, now: DateTime<Utc>) -> bool {
matches!(self.expires_at, Some(expiry) if expiry - now <= window)
}
}
/// Active device-authorization session details returned by the authorization server.
#[derive(Debug, Clone)]
pub struct DeviceAuthorization {
pub device_code: String,
pub user_code: String,
pub verification_uri: String,
pub verification_uri_complete: Option<String>,
pub expires_at: DateTime<Utc>,
pub interval: StdDuration,
pub message: Option<String>,
}
impl DeviceAuthorization {
pub fn is_expired(&self, now: DateTime<Utc>) -> bool {
now >= self.expires_at
}
}
/// Result of polling the token endpoint during a device-authorization flow.
#[derive(Debug, Clone)]
pub enum DevicePollState {
Pending { retry_in: StdDuration },
Complete(OAuthToken),
}
pub struct OAuthClient {
http: Client,
config: McpOAuthConfig,
}
impl OAuthClient {
pub fn new(config: McpOAuthConfig) -> Result<Self> {
let http = Client::builder()
.user_agent("OwlenOAuth/1.0")
.build()
.map_err(|err| Error::Network(format!("Failed to construct HTTP client: {err}")))?;
Ok(Self { http, config })
}
fn scope_value(&self) -> Option<String> {
if self.config.scopes.is_empty() {
None
} else {
Some(self.config.scopes.join(" "))
}
}
fn token_request_base(&self) -> Vec<(String, String)> {
let mut params = vec![("client_id".to_string(), self.config.client_id.clone())];
if let Some(secret) = &self.config.client_secret {
params.push(("client_secret".to_string(), secret.clone()));
}
params
}
pub async fn start_device_authorization(&self) -> Result<DeviceAuthorization> {
let device_url = self
.config
.device_authorization_url
.as_ref()
.ok_or_else(|| {
Error::Config("Device authorization endpoint is not configured.".to_string())
})?;
let mut params = self.token_request_base();
if let Some(scope) = self.scope_value() {
params.push(("scope".to_string(), scope));
}
let response = self
.http
.post(device_url)
.form(&params)
.send()
.await
.map_err(|err| map_http_error("start device authorization", err))?;
let status = response.status();
let payload = response
.json::<DeviceAuthorizationResponse>()
.await
.map_err(|err| {
Error::Auth(format!(
"Failed to parse device authorization response (status {status}): {err}"
))
})?;
let expires_at =
Utc::now() + Duration::seconds(payload.expires_in.min(i64::MAX as u64) as i64);
let interval = StdDuration::from_secs(payload.interval.unwrap_or(5).max(1));
Ok(DeviceAuthorization {
device_code: payload.device_code,
user_code: payload.user_code,
verification_uri: payload.verification_uri,
verification_uri_complete: payload.verification_uri_complete,
expires_at,
interval,
message: payload.message,
})
}
pub async fn poll_device_token(&self, auth: &DeviceAuthorization) -> Result<DevicePollState> {
let mut params = self.token_request_base();
params.push(("grant_type".to_string(), DEVICE_CODE_GRANT.to_string()));
params.push(("device_code".to_string(), auth.device_code.clone()));
if let Some(scope) = self.scope_value() {
params.push(("scope".to_string(), scope));
}
let response = self
.http
.post(&self.config.token_url)
.form(&params)
.send()
.await
.map_err(|err| map_http_error("poll device token", err))?;
let status = response.status();
let text = response
.text()
.await
.map_err(|err| map_http_error("read token response", err))?;
if status.is_success() {
let payload: TokenResponse = serde_json::from_str(&text).map_err(|err| {
Error::Auth(format!(
"Failed to parse OAuth token response: {err}; body: {text}"
))
})?;
return Ok(DevicePollState::Complete(oauth_token_from_response(
payload,
)));
}
let error = serde_json::from_str::<OAuthErrorResponse>(&text).unwrap_or_else(|_| {
OAuthErrorResponse {
error: "unknown_error".to_string(),
error_description: Some(text.clone()),
}
});
match error.error.as_str() {
"authorization_pending" => Ok(DevicePollState::Pending {
retry_in: auth.interval,
}),
"slow_down" => Ok(DevicePollState::Pending {
retry_in: auth.interval.saturating_add(StdDuration::from_secs(5)),
}),
"access_denied" => {
Err(Error::Auth(error.error_description.unwrap_or_else(|| {
"User declined authorization".to_string()
})))
}
"expired_token" | "expired_device_code" => {
Err(Error::Auth(error.error_description.unwrap_or_else(|| {
"Device authorization expired".to_string()
})))
}
other => Err(Error::Auth(
error
.error_description
.unwrap_or_else(|| format!("OAuth error: {other}")),
)),
}
}
pub async fn refresh_token(&self, refresh_token: &str) -> Result<OAuthToken> {
let mut params = self.token_request_base();
params.push(("grant_type".to_string(), "refresh_token".to_string()));
params.push(("refresh_token".to_string(), refresh_token.to_string()));
if let Some(scope) = self.scope_value() {
params.push(("scope".to_string(), scope));
}
let response = self
.http
.post(&self.config.token_url)
.form(&params)
.send()
.await
.map_err(|err| map_http_error("refresh OAuth token", err))?;
let status = response.status();
let text = response
.text()
.await
.map_err(|err| map_http_error("read refresh response", err))?;
if status.is_success() {
let payload: TokenResponse = serde_json::from_str(&text).map_err(|err| {
Error::Auth(format!(
"Failed to parse OAuth refresh response: {err}; body: {text}"
))
})?;
Ok(oauth_token_from_response(payload))
} else {
let error = serde_json::from_str::<OAuthErrorResponse>(&text).unwrap_or_else(|_| {
OAuthErrorResponse {
error: "unknown_error".to_string(),
error_description: Some(text.clone()),
}
});
Err(Error::Auth(error.error_description.unwrap_or_else(|| {
format!("OAuth token refresh failed: {}", error.error)
})))
}
}
}
const DEVICE_CODE_GRANT: &str = "urn:ietf:params:oauth:grant-type:device_code";
#[derive(Debug, Deserialize)]
struct DeviceAuthorizationResponse {
device_code: String,
user_code: String,
verification_uri: String,
#[serde(default)]
verification_uri_complete: Option<String>,
expires_in: u64,
#[serde(default)]
interval: Option<u64>,
#[serde(default)]
message: Option<String>,
}
#[derive(Debug, Deserialize)]
struct TokenResponse {
access_token: String,
#[serde(default)]
refresh_token: Option<String>,
#[serde(default)]
expires_in: Option<u64>,
#[serde(default)]
scope: Option<String>,
#[serde(default)]
token_type: Option<String>,
}
#[derive(Debug, Deserialize)]
struct OAuthErrorResponse {
error: String,
#[serde(default)]
error_description: Option<String>,
}
fn oauth_token_from_response(payload: TokenResponse) -> OAuthToken {
let expires_at = payload
.expires_in
.map(|seconds| seconds.min(i64::MAX as u64) as i64)
.map(|seconds| Utc::now() + Duration::seconds(seconds));
OAuthToken {
access_token: payload.access_token,
refresh_token: payload.refresh_token,
expires_at,
scope: payload.scope,
token_type: payload.token_type,
}
}
fn map_http_error(action: &str, err: reqwest::Error) -> Error {
if err.is_timeout() {
Error::Timeout(format!("OAuth {action} request timed out: {err}"))
} else if err.is_connect() {
Error::Network(format!("OAuth {action} connection error: {err}"))
} else {
Error::Network(format!("OAuth {action} request failed: {err}"))
}
}
#[cfg(test)]
mod tests {
use super::*;
use httpmock::prelude::*;
use serde_json::json;
fn config_for(server: &MockServer) -> McpOAuthConfig {
McpOAuthConfig {
client_id: "test-client".to_string(),
client_secret: None,
authorize_url: server.url("/authorize"),
token_url: server.url("/token"),
device_authorization_url: Some(server.url("/device")),
redirect_url: None,
scopes: vec!["repo".to_string(), "user".to_string()],
token_env: None,
header: None,
header_prefix: None,
}
}
fn sample_device_authorization() -> DeviceAuthorization {
DeviceAuthorization {
device_code: "device-123".to_string(),
user_code: "ABCD-EFGH".to_string(),
verification_uri: "https://example.test/activate".to_string(),
verification_uri_complete: Some(
"https://example.test/activate?user_code=ABCD-EFGH".to_string(),
),
expires_at: Utc::now() + Duration::minutes(10),
interval: StdDuration::from_secs(5),
message: Some("Open the verification URL and enter the code.".to_string()),
}
}
#[tokio::test]
async fn start_device_authorization_returns_payload() {
let server = MockServer::start_async().await;
let device_mock = server
.mock_async(|when, then| {
when.method(POST).path("/device");
then.status(200)
.header("content-type", "application/json")
.json_body(json!({
"device_code": "device-123",
"user_code": "ABCD-EFGH",
"verification_uri": "https://example.test/activate",
"verification_uri_complete": "https://example.test/activate?user_code=ABCD-EFGH",
"expires_in": 600,
"interval": 7,
"message": "Open the verification URL and enter the code."
}));
})
.await;
let client = OAuthClient::new(config_for(&server)).expect("client");
let auth = client
.start_device_authorization()
.await
.expect("device authorization payload");
assert_eq!(auth.user_code, "ABCD-EFGH");
assert_eq!(auth.interval, StdDuration::from_secs(7));
assert!(auth.expires_at > Utc::now());
device_mock.assert_async().await;
}
#[tokio::test]
async fn poll_device_token_reports_pending() {
let server = MockServer::start_async().await;
let pending = server
.mock_async(|when, then| {
when.method(POST)
.path("/token")
.body_contains(
"grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code",
)
.body_contains("device_code=device-123");
then.status(400)
.header("content-type", "application/json")
.json_body(json!({
"error": "authorization_pending"
}));
})
.await;
let config = config_for(&server);
let client = OAuthClient::new(config).expect("client");
let auth = sample_device_authorization();
let result = client.poll_device_token(&auth).await.expect("poll result");
match result {
DevicePollState::Pending { retry_in } => {
assert_eq!(retry_in, StdDuration::from_secs(5));
}
other => panic!("expected pending state, got {other:?}"),
}
pending.assert_async().await;
}
#[tokio::test]
async fn poll_device_token_applies_slow_down_backoff() {
let server = MockServer::start_async().await;
let slow = server
.mock_async(|when, then| {
when.method(POST).path("/token");
then.status(400)
.header("content-type", "application/json")
.json_body(json!({
"error": "slow_down"
}));
})
.await;
let config = config_for(&server);
let client = OAuthClient::new(config).expect("client");
let auth = sample_device_authorization();
let result = client.poll_device_token(&auth).await.expect("poll result");
match result {
DevicePollState::Pending { retry_in } => {
assert_eq!(retry_in, StdDuration::from_secs(10));
}
other => panic!("expected pending state, got {other:?}"),
}
slow.assert_async().await;
}
#[tokio::test]
async fn poll_device_token_returns_token_when_authorized() {
let server = MockServer::start_async().await;
let token = server
.mock_async(|when, then| {
when.method(POST).path("/token");
then.status(200)
.header("content-type", "application/json")
.json_body(json!({
"access_token": "token-abc",
"refresh_token": "refresh-xyz",
"expires_in": 3600,
"token_type": "Bearer",
"scope": "repo user"
}));
})
.await;
let config = config_for(&server);
let client = OAuthClient::new(config).expect("client");
let auth = sample_device_authorization();
let result = client.poll_device_token(&auth).await.expect("poll result");
let token_info = match result {
DevicePollState::Complete(token) => token,
other => panic!("expected completion, got {other:?}"),
};
assert_eq!(token_info.access_token, "token-abc");
assert_eq!(token_info.refresh_token.as_deref(), Some("refresh-xyz"));
assert!(token_info.expires_at.is_some());
token.assert_async().await;
}
#[tokio::test]
async fn refresh_token_roundtrip() {
let server = MockServer::start_async().await;
let refresh = server
.mock_async(|when, then| {
when.method(POST)
.path("/token")
.body_contains("grant_type=refresh_token")
.body_contains("refresh_token=old-refresh");
then.status(200)
.header("content-type", "application/json")
.json_body(json!({
"access_token": "token-new",
"refresh_token": "refresh-new",
"expires_in": 1200,
"token_type": "Bearer"
}));
})
.await;
let config = config_for(&server);
let client = OAuthClient::new(config).expect("client");
let token = client
.refresh_token("old-refresh")
.await
.expect("refresh response");
assert_eq!(token.access_token, "token-new");
assert_eq!(token.refresh_token.as_deref(), Some("refresh-new"));
assert!(token.expires_at.is_some());
refresh.assert_async().await;
}
}

View File

@@ -1,227 +0,0 @@
use std::collections::HashMap;
use std::sync::Arc;
use futures::stream::{FuturesUnordered, StreamExt};
use log::{debug, warn};
use tokio::sync::RwLock;
use crate::config::Config;
use crate::{Error, Result};
use super::{GenerateRequest, GenerateStream, ModelInfo, ModelProvider, ProviderStatus};
/// Model information annotated with the originating provider metadata.
#[derive(Debug, Clone)]
pub struct AnnotatedModelInfo {
pub provider_id: String,
pub provider_status: ProviderStatus,
pub model: ModelInfo,
}
/// Coordinates multiple [`ModelProvider`] implementations and tracks their
/// health state.
pub struct ProviderManager {
providers: RwLock<HashMap<String, Arc<dyn ModelProvider>>>,
status_cache: RwLock<HashMap<String, ProviderStatus>>,
}
impl ProviderManager {
/// Construct a new manager using the supplied configuration. Providers
/// defined in the configuration start with a `RequiresSetup` status so
/// that frontends can surface incomplete configuration to users.
pub fn new(config: &Config) -> Self {
let mut status_cache = HashMap::new();
for provider_id in config.providers.keys() {
status_cache.insert(provider_id.clone(), ProviderStatus::RequiresSetup);
}
Self {
providers: RwLock::new(HashMap::new()),
status_cache: RwLock::new(status_cache),
}
}
/// Register a provider instance with the manager.
pub async fn register_provider(&self, provider: Arc<dyn ModelProvider>) {
let provider_id = provider.metadata().id.clone();
debug!("registering provider {}", provider_id);
self.providers
.write()
.await
.insert(provider_id.clone(), provider);
self.status_cache
.write()
.await
.insert(provider_id, ProviderStatus::Unavailable);
}
/// Return a stream by routing the request to the designated provider.
pub async fn generate(
&self,
provider_id: &str,
request: GenerateRequest,
) -> Result<GenerateStream> {
let provider = {
let guard = self.providers.read().await;
guard.get(provider_id).cloned()
}
.ok_or_else(|| Error::Config(format!("provider '{provider_id}' not registered")))?;
match provider.generate_stream(request).await {
Ok(stream) => {
self.status_cache
.write()
.await
.insert(provider_id.to_string(), ProviderStatus::Available);
Ok(stream)
}
Err(err) => {
self.status_cache
.write()
.await
.insert(provider_id.to_string(), ProviderStatus::Unavailable);
Err(err)
}
}
}
/// List models across all providers, updating provider status along the way.
pub async fn list_all_models(&self) -> Result<Vec<AnnotatedModelInfo>> {
let providers: Vec<(String, Arc<dyn ModelProvider>)> = {
let guard = self.providers.read().await;
guard
.iter()
.map(|(id, provider)| (id.clone(), Arc::clone(provider)))
.collect()
};
let mut tasks = FuturesUnordered::new();
for (provider_id, provider) in providers {
tasks.push(async move {
let log_id = provider_id.clone();
let mut status = ProviderStatus::Unavailable;
let mut models = Vec::new();
match provider.health_check().await {
Ok(health) => {
status = health;
if matches!(status, ProviderStatus::Available) {
match provider.list_models().await {
Ok(list) => {
models = list;
}
Err(err) => {
status = ProviderStatus::Unavailable;
warn!("listing models failed for provider {}: {}", log_id, err);
}
}
}
}
Err(err) => {
warn!("health check failed for provider {}: {}", log_id, err);
}
}
(provider_id, status, models)
});
}
let mut annotated = Vec::new();
let mut status_updates = HashMap::new();
while let Some((provider_id, status, models)) = tasks.next().await {
status_updates.insert(provider_id.clone(), status);
for model in models {
annotated.push(AnnotatedModelInfo {
provider_id: provider_id.clone(),
provider_status: status,
model,
});
}
}
{
let mut guard = self.status_cache.write().await;
for (provider_id, status) in status_updates {
guard.insert(provider_id, status);
}
}
Ok(annotated)
}
/// Refresh the health of all registered providers in parallel, returning
/// the latest status snapshot.
pub async fn refresh_health(&self) -> HashMap<String, ProviderStatus> {
let providers: Vec<(String, Arc<dyn ModelProvider>)> = {
let guard = self.providers.read().await;
guard
.iter()
.map(|(id, provider)| (id.clone(), Arc::clone(provider)))
.collect()
};
let mut tasks = FuturesUnordered::new();
for (provider_id, provider) in providers {
tasks.push(async move {
let status = match provider.health_check().await {
Ok(status) => status,
Err(err) => {
warn!("health check failed for provider {}: {}", provider_id, err);
ProviderStatus::Unavailable
}
};
(provider_id, status)
});
}
let mut updates = HashMap::new();
while let Some((provider_id, status)) = tasks.next().await {
updates.insert(provider_id, status);
}
{
let mut guard = self.status_cache.write().await;
for (provider_id, status) in &updates {
guard.insert(provider_id.clone(), *status);
}
}
updates
}
/// Return the provider instance for an identifier.
pub async fn get_provider(&self, provider_id: &str) -> Option<Arc<dyn ModelProvider>> {
let guard = self.providers.read().await;
guard.get(provider_id).cloned()
}
/// List the registered provider identifiers.
pub async fn provider_ids(&self) -> Vec<String> {
let guard = self.providers.read().await;
guard.keys().cloned().collect()
}
/// Retrieve the last known status for a provider.
pub async fn provider_status(&self, provider_id: &str) -> Option<ProviderStatus> {
let guard = self.status_cache.read().await;
guard.get(provider_id).copied()
}
/// Snapshot the currently cached statuses.
pub async fn provider_statuses(&self) -> HashMap<String, ProviderStatus> {
let guard = self.status_cache.read().await;
guard.clone()
}
}
impl Default for ProviderManager {
fn default() -> Self {
Self {
providers: RwLock::new(HashMap::new()),
status_cache: RwLock::new(HashMap::new()),
}
}
}

View File

@@ -1,36 +0,0 @@
//! Unified provider abstraction layer.
//!
//! This module defines the async [`ModelProvider`] trait that all model
//! backends implement, together with a small suite of shared data structures
//! used for model discovery and streaming generation. The [`ProviderManager`]
//! orchestrates multiple providers and coordinates their health state.
mod manager;
mod types;
use std::pin::Pin;
use async_trait::async_trait;
use futures::Stream;
pub use self::{manager::*, types::*};
use crate::Result;
/// Convenience alias for the stream type yielded by [`ModelProvider::generate_stream`].
pub type GenerateStream = Pin<Box<dyn Stream<Item = Result<GenerateChunk>> + Send + 'static>>;
#[async_trait]
pub trait ModelProvider: Send + Sync {
/// Returns descriptive metadata about the provider.
fn metadata(&self) -> &ProviderMetadata;
/// Check the current health state for the provider.
async fn health_check(&self) -> Result<ProviderStatus>;
/// List all models available through the provider.
async fn list_models(&self) -> Result<Vec<ModelInfo>>;
/// Acquire a streaming response for a generation request.
async fn generate_stream(&self, request: GenerateRequest) -> Result<GenerateStream>;
}

View File

@@ -1,124 +0,0 @@
//! Shared types used by the unified provider abstraction layer.
use std::collections::HashMap;
use serde::{Deserialize, Serialize};
use serde_json::Value;
/// Categorises providers so the UI can distinguish between local and hosted
/// backends.
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
pub enum ProviderType {
Local,
Cloud,
}
/// Represents the current availability state for a provider.
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
pub enum ProviderStatus {
Available,
Unavailable,
RequiresSetup,
}
/// Describes core metadata for a provider implementation.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct ProviderMetadata {
pub id: String,
pub name: String,
pub provider_type: ProviderType,
pub requires_auth: bool,
#[serde(default)]
pub metadata: HashMap<String, Value>,
}
impl ProviderMetadata {
/// Construct a new metadata instance for a provider.
pub fn new(
id: impl Into<String>,
name: impl Into<String>,
provider_type: ProviderType,
requires_auth: bool,
) -> Self {
Self {
id: id.into(),
name: name.into(),
provider_type,
requires_auth,
metadata: HashMap::new(),
}
}
}
/// Information about a model that can be displayed to users.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ModelInfo {
pub name: String,
#[serde(default)]
pub size_bytes: Option<u64>,
#[serde(default)]
pub capabilities: Vec<String>,
#[serde(default)]
pub description: Option<String>,
pub provider: ProviderMetadata,
#[serde(default)]
pub metadata: HashMap<String, Value>,
}
/// Unified request for streaming text generation across providers.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct GenerateRequest {
pub model: String,
#[serde(default)]
pub prompt: Option<String>,
#[serde(default)]
pub context: Vec<String>,
#[serde(default)]
pub parameters: HashMap<String, Value>,
#[serde(default)]
pub metadata: HashMap<String, Value>,
}
impl GenerateRequest {
/// Helper for building a request from the minimum required fields.
pub fn new(model: impl Into<String>) -> Self {
Self {
model: model.into(),
prompt: None,
context: Vec::new(),
parameters: HashMap::new(),
metadata: HashMap::new(),
}
}
}
/// Streamed chunk of generation output from a model.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct GenerateChunk {
#[serde(default)]
pub text: Option<String>,
#[serde(default)]
pub is_final: bool,
#[serde(default)]
pub metadata: HashMap<String, Value>,
}
impl GenerateChunk {
/// Construct a new chunk with the provided text payload.
pub fn from_text(text: impl Into<String>) -> Self {
Self {
text: Some(text.into()),
is_final: false,
metadata: HashMap::new(),
}
}
/// Mark the chunk as the terminal item in a stream.
pub fn final_chunk() -> Self {
Self {
text: None,
is_final: true,
metadata: HashMap::new(),
}
}
}

View File

@@ -1,8 +0,0 @@
//! Built-in LLM provider implementations.
//!
//! Each provider integration lives in its own module so that maintenance
//! stays focused and configuration remains clear.
pub mod ollama;
pub use ollama::OllamaProvider;

File diff suppressed because it is too large Load Diff

View File

@@ -1,157 +0,0 @@
//! Router for managing multiple providers and routing requests
use crate::{Result, llm::*, types::*};
use anyhow::anyhow;
use std::sync::Arc;
/// A router that can distribute requests across multiple providers
pub struct Router {
registry: ProviderRegistry,
routing_rules: Vec<RoutingRule>,
default_provider: Option<String>,
}
/// A rule for routing requests to specific providers
#[derive(Debug, Clone)]
pub struct RoutingRule {
/// Pattern to match against model names
pub model_pattern: String,
/// Provider to route to
pub provider: String,
/// Priority (higher numbers are checked first)
pub priority: u32,
}
impl Router {
/// Create a new router
pub fn new() -> Self {
Self {
registry: ProviderRegistry::new(),
routing_rules: Vec::new(),
default_provider: None,
}
}
/// Register a provider with the router
pub fn register_provider<P: LlmProvider + 'static>(&mut self, provider: P) {
self.registry.register(provider);
}
/// Set the default provider
pub fn set_default_provider(&mut self, provider_name: String) {
self.default_provider = Some(provider_name);
}
/// Add a routing rule
pub fn add_routing_rule(&mut self, rule: RoutingRule) {
self.routing_rules.push(rule);
// Sort by priority (descending)
self.routing_rules
.sort_by(|a, b| b.priority.cmp(&a.priority));
}
/// Route a request to the appropriate provider
pub async fn chat(&self, request: ChatRequest) -> Result<ChatResponse> {
let provider = self.find_provider_for_model(&request.model)?;
provider.send_prompt(request).await
}
/// Route a streaming request to the appropriate provider
pub async fn chat_stream(&self, request: ChatRequest) -> Result<ChatStream> {
let provider = self.find_provider_for_model(&request.model)?;
provider.stream_prompt(request).await
}
/// List all available models from all providers
pub async fn list_models(&self) -> Result<Vec<ModelInfo>> {
self.registry.list_all_models().await
}
/// Find the appropriate provider for a given model
fn find_provider_for_model(&self, model: &str) -> Result<Arc<dyn Provider>> {
// Check routing rules first
for rule in &self.routing_rules {
if !self.matches_pattern(&rule.model_pattern, model) {
continue;
}
if let Some(provider) = self.registry.get(&rule.provider) {
return Ok(provider);
}
}
// Fall back to default provider
if let Some(provider) = self
.default_provider
.as_ref()
.and_then(|default| self.registry.get(default))
{
return Ok(provider);
}
// If no default, try to find any provider that has this model
// This is a fallback for cases where routing isn't configured
for provider_name in self.registry.list_providers() {
if let Some(provider) = self.registry.get(&provider_name) {
return Ok(provider);
}
}
Err(crate::Error::Provider(anyhow!(
"No provider found for model: {}",
model
)))
}
/// Check if a model name matches a pattern
fn matches_pattern(&self, pattern: &str, model: &str) -> bool {
// Simple pattern matching for now
// Could be extended to support more complex patterns
if pattern == "*" {
return true;
}
if let Some(prefix) = pattern.strip_suffix('*') {
return model.starts_with(prefix);
}
if let Some(suffix) = pattern.strip_prefix('*') {
return model.ends_with(suffix);
}
pattern == model
}
/// Get routing configuration
pub fn get_routing_rules(&self) -> &[RoutingRule] {
&self.routing_rules
}
/// Get the default provider name
pub fn get_default_provider(&self) -> Option<&str> {
self.default_provider.as_deref()
}
}
impl Default for Router {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_pattern_matching() {
let router = Router::new();
assert!(router.matches_pattern("*", "any-model"));
assert!(router.matches_pattern("gpt*", "gpt-4"));
assert!(router.matches_pattern("gpt*", "gpt-3.5-turbo"));
assert!(!router.matches_pattern("gpt*", "claude-3"));
assert!(router.matches_pattern("*:latest", "llama2:latest"));
assert!(router.matches_pattern("exact-match", "exact-match"));
assert!(!router.matches_pattern("exact-match", "different-model"));
}
}

View File

@@ -1,216 +0,0 @@
use std::path::PathBuf;
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
use anyhow::{Context, Result, bail};
use tempfile::TempDir;
/// Configuration options for sandboxed process execution.
#[derive(Clone, Debug)]
pub struct SandboxConfig {
pub allow_network: bool,
pub allow_paths: Vec<PathBuf>,
pub readonly_paths: Vec<PathBuf>,
pub timeout_seconds: u64,
pub max_memory_mb: u64,
}
impl Default for SandboxConfig {
fn default() -> Self {
Self {
allow_network: false,
allow_paths: Vec::new(),
readonly_paths: Vec::new(),
timeout_seconds: 30,
max_memory_mb: 512,
}
}
}
/// Wrapper around a bubblewrap sandbox instance.
///
/// Memory limits are enforced via:
/// - bwrap's --rlimit-as (version >= 0.12.0)
/// - prlimit wrapper (fallback for older bwrap versions)
/// - timeout mechanism (always enforced as last resort)
pub struct SandboxedProcess {
temp_dir: TempDir,
config: SandboxConfig,
}
impl SandboxedProcess {
pub fn new(config: SandboxConfig) -> Result<Self> {
let temp_dir = TempDir::new().context("Failed to create temp directory")?;
which::which("bwrap")
.context("bubblewrap not found. Install with: sudo apt install bubblewrap")?;
Ok(Self { temp_dir, config })
}
pub fn execute(&self, command: &str, args: &[&str]) -> Result<SandboxResult> {
let supports_rlimit = self.supports_rlimit_as();
let use_prlimit = !supports_rlimit && which::which("prlimit").is_ok();
let mut cmd = if use_prlimit {
// Use prlimit wrapper for older bwrap versions
let mut prlimit_cmd = Command::new("prlimit");
let memory_limit_bytes = self
.config
.max_memory_mb
.saturating_mul(1024)
.saturating_mul(1024);
prlimit_cmd.arg(format!("--as={}", memory_limit_bytes));
prlimit_cmd.arg("bwrap");
prlimit_cmd
} else {
Command::new("bwrap")
};
cmd.args(["--unshare-all", "--die-with-parent", "--new-session"]);
if self.config.allow_network {
cmd.arg("--share-net");
} else {
cmd.arg("--unshare-net");
}
cmd.args(["--proc", "/proc", "--dev", "/dev", "--tmpfs", "/tmp"]);
// Bind essential system paths readonly for executables and libraries
let system_paths = ["/usr", "/bin", "/lib", "/lib64", "/etc"];
for sys_path in &system_paths {
let path = std::path::Path::new(sys_path);
if path.exists() {
cmd.arg("--ro-bind").arg(sys_path).arg(sys_path);
}
}
// Bind /run for DNS resolution (resolv.conf may be a symlink to /run/systemd/resolve/*)
if std::path::Path::new("/run").exists() {
cmd.arg("--ro-bind").arg("/run").arg("/run");
}
for path in &self.config.allow_paths {
let path_host = path.to_string_lossy().into_owned();
let path_guest = path_host.clone();
cmd.arg("--bind").arg(&path_host).arg(&path_guest);
}
for path in &self.config.readonly_paths {
let path_host = path.to_string_lossy().into_owned();
let path_guest = path_host.clone();
cmd.arg("--ro-bind").arg(&path_host).arg(&path_guest);
}
let work_dir = self.temp_dir.path().to_string_lossy().into_owned();
cmd.arg("--bind").arg(&work_dir).arg("/work");
cmd.arg("--chdir").arg("/work");
// Add memory limits via bwrap's --rlimit-as if supported (version >= 0.12.0)
// If not supported, we use prlimit wrapper (set earlier)
if supports_rlimit && !use_prlimit {
let memory_limit_bytes = self
.config
.max_memory_mb
.saturating_mul(1024)
.saturating_mul(1024);
let memory_soft = memory_limit_bytes.to_string();
let memory_hard = memory_limit_bytes.to_string();
cmd.arg("--rlimit-as").arg(&memory_soft).arg(&memory_hard);
}
cmd.arg(command);
cmd.args(args);
let start = Instant::now();
let timeout = Duration::from_secs(self.config.timeout_seconds);
// Spawn the process instead of waiting immediately
let mut child = cmd
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
.context("Failed to spawn sandboxed command")?;
let mut was_timeout = false;
// Wait for the child with timeout
let output = loop {
match child.try_wait() {
Ok(Some(_status)) => {
// Process exited
let output = child
.wait_with_output()
.context("Failed to collect process output")?;
break output;
}
Ok(None) => {
// Process still running, check timeout
if start.elapsed() >= timeout {
// Timeout exceeded, kill the process
was_timeout = true;
child.kill().context("Failed to kill timed-out process")?;
// Wait for the killed process to exit
let output = child
.wait_with_output()
.context("Failed to collect output from killed process")?;
break output;
}
// Sleep briefly before checking again
std::thread::sleep(Duration::from_millis(50));
}
Err(e) => {
bail!("Failed to check process status: {}", e);
}
}
};
let duration = start.elapsed();
Ok(SandboxResult {
stdout: String::from_utf8_lossy(&output.stdout).to_string(),
stderr: String::from_utf8_lossy(&output.stderr).to_string(),
exit_code: output.status.code().unwrap_or(-1),
duration,
was_timeout,
})
}
/// Check if bubblewrap supports --rlimit-as option (version >= 0.12.0)
fn supports_rlimit_as(&self) -> bool {
// Try to get bwrap version
let output = Command::new("bwrap").arg("--version").output();
if let Ok(output) = output {
let version_str = String::from_utf8_lossy(&output.stdout);
// Parse version like "bubblewrap 0.11.0" or "0.11.0"
return version_str
.split_whitespace()
.last()
.and_then(|part| {
part.split_once('.').and_then(|(major, rest)| {
rest.split_once('.').and_then(|(minor, _)| {
let maj = major.parse::<u32>().ok()?;
let min = minor.parse::<u32>().ok()?;
Some((maj, min))
})
})
})
.map(|(maj, min)| maj > 0 || (maj == 0 && min >= 12))
.unwrap_or(false);
}
// If we can't determine the version, assume it doesn't support it (safer default)
false
}
}
#[derive(Debug, Clone)]
pub struct SandboxResult {
pub stdout: String,
pub stderr: String,
pub exit_code: i32,
pub duration: Duration,
pub was_timeout: bool,
}

File diff suppressed because it is too large Load Diff

View File

@@ -1,199 +0,0 @@
//! Shared application state types used across TUI frontends.
use std::fmt;
/// High-level application state reported by the UI loop.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum AppState {
Running,
Quit,
}
/// Vim-style input modes supported by the TUI.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum InputMode {
Normal,
Editing,
ProviderSelection,
ModelSelection,
Help,
Visual,
Command,
SessionBrowser,
ThemeBrowser,
RepoSearch,
SymbolSearch,
}
impl fmt::Display for InputMode {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let label = match self {
InputMode::Normal => "Normal",
InputMode::Editing => "Editing",
InputMode::ModelSelection => "Model",
InputMode::ProviderSelection => "Provider",
InputMode::Help => "Help",
InputMode::Visual => "Visual",
InputMode::Command => "Command",
InputMode::SessionBrowser => "Sessions",
InputMode::ThemeBrowser => "Themes",
InputMode::RepoSearch => "Search",
InputMode::SymbolSearch => "Symbols",
};
f.write_str(label)
}
}
/// Represents which panel is currently focused in the TUI layout.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum FocusedPanel {
Files,
Chat,
Thinking,
Input,
Code,
}
/// Auto-scroll state manager for scrollable panels.
#[derive(Debug, Clone)]
pub struct AutoScroll {
pub scroll: usize,
pub content_len: usize,
pub stick_to_bottom: bool,
}
impl Default for AutoScroll {
fn default() -> Self {
Self {
scroll: 0,
content_len: 0,
stick_to_bottom: true,
}
}
}
impl AutoScroll {
/// Update scroll position based on viewport height.
pub fn on_viewport(&mut self, viewport_h: usize) {
let max = self.content_len.saturating_sub(viewport_h);
if self.stick_to_bottom {
self.scroll = max;
} else {
self.scroll = self.scroll.min(max);
}
}
/// Handle user scroll input.
pub fn on_user_scroll(&mut self, delta: isize, viewport_h: usize) {
let max = self.content_len.saturating_sub(viewport_h) as isize;
let s = (self.scroll as isize + delta).clamp(0, max) as usize;
self.scroll = s;
self.stick_to_bottom = s as isize == max;
}
pub fn scroll_half_page_down(&mut self, viewport_h: usize) {
let delta = (viewport_h / 2) as isize;
self.on_user_scroll(delta, viewport_h);
}
pub fn scroll_half_page_up(&mut self, viewport_h: usize) {
let delta = -((viewport_h / 2) as isize);
self.on_user_scroll(delta, viewport_h);
}
pub fn scroll_full_page_down(&mut self, viewport_h: usize) {
let delta = viewport_h as isize;
self.on_user_scroll(delta, viewport_h);
}
pub fn scroll_full_page_up(&mut self, viewport_h: usize) {
let delta = -(viewport_h as isize);
self.on_user_scroll(delta, viewport_h);
}
pub fn jump_to_top(&mut self) {
self.scroll = 0;
self.stick_to_bottom = false;
}
pub fn jump_to_bottom(&mut self, viewport_h: usize) {
self.stick_to_bottom = true;
self.on_viewport(viewport_h);
}
}
/// Visual selection state for text selection.
#[derive(Debug, Clone, Default)]
pub struct VisualSelection {
pub start: Option<(usize, usize)>,
pub end: Option<(usize, usize)>,
}
impl VisualSelection {
pub fn new() -> Self {
Self::default()
}
pub fn start_at(&mut self, pos: (usize, usize)) {
self.start = Some(pos);
self.end = Some(pos);
}
pub fn extend_to(&mut self, pos: (usize, usize)) {
self.end = Some(pos);
}
pub fn clear(&mut self) {
self.start = None;
self.end = None;
}
pub fn is_active(&self) -> bool {
self.start.is_some() && self.end.is_some()
}
pub fn get_normalized(&self) -> Option<((usize, usize), (usize, usize))> {
if let (Some(s), Some(e)) = (self.start, self.end) {
if s.0 < e.0 || (s.0 == e.0 && s.1 <= e.1) {
Some((s, e))
} else {
Some((e, s))
}
} else {
None
}
}
}
/// Cursor position helper for navigating scrollable content.
#[derive(Debug, Clone, Copy, Default)]
pub struct CursorPosition {
pub row: usize,
pub col: usize,
}
impl CursorPosition {
pub fn new(row: usize, col: usize) -> Self {
Self { row, col }
}
pub fn move_up(&mut self, amount: usize) {
self.row = self.row.saturating_sub(amount);
}
pub fn move_down(&mut self, amount: usize, max: usize) {
self.row = (self.row + amount).min(max);
}
pub fn move_left(&mut self, amount: usize) {
self.col = self.col.saturating_sub(amount);
}
pub fn move_right(&mut self, amount: usize, max: usize) {
self.col = (self.col + amount).min(max);
}
pub fn as_tuple(&self) -> (usize, usize) {
(self.row, self.col)
}
}

View File

@@ -1,559 +0,0 @@
//! Session persistence and storage management backed by SQLite
use crate::types::Conversation;
use crate::{Error, Result};
use aes_gcm::aead::{Aead, KeyInit};
use aes_gcm::{Aes256Gcm, Nonce};
use ring::rand::{SecureRandom, SystemRandom};
use serde::{Deserialize, Serialize};
use sqlx::sqlite::{SqliteConnectOptions, SqliteJournalMode, SqlitePoolOptions, SqliteSynchronous};
use sqlx::{Pool, Row, Sqlite};
use std::fs;
use std::io::IsTerminal;
use std::io::{self, Write};
use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use uuid::Uuid;
/// Metadata about a saved session
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SessionMeta {
/// Conversation ID
pub id: Uuid,
/// Optional session name
pub name: Option<String>,
/// Optional AI-generated description
pub description: Option<String>,
/// Number of messages in the conversation
pub message_count: usize,
/// Model used
pub model: String,
/// When the session was created
pub created_at: SystemTime,
/// When the session was last updated
pub updated_at: SystemTime,
}
/// Storage manager for persisting conversations in SQLite
pub struct StorageManager {
pool: Pool<Sqlite>,
database_path: PathBuf,
}
impl StorageManager {
/// Create a new storage manager using the default database path
pub async fn new() -> Result<Self> {
let db_path = Self::default_database_path()?;
Self::with_database_path(db_path).await
}
/// Create a storage manager using the provided database path
pub async fn with_database_path(database_path: PathBuf) -> Result<Self> {
if let Some(parent) = database_path.parent() {
if !parent.exists() {
std::fs::create_dir_all(parent).map_err(|e| {
Error::Storage(format!(
"Failed to create database directory {parent:?}: {e}"
))
})?;
}
}
let options = SqliteConnectOptions::from_str(&format!(
"sqlite://{}",
database_path
.to_str()
.ok_or_else(|| Error::Storage("Invalid database path".to_string()))?
))
.map_err(|e| Error::Storage(format!("Invalid database URL: {e}")))?
.create_if_missing(true)
.journal_mode(SqliteJournalMode::Wal)
.synchronous(SqliteSynchronous::Normal);
let pool = SqlitePoolOptions::new()
.max_connections(5)
.connect_with(options)
.await
.map_err(|e| Error::Storage(format!("Failed to connect to database: {e}")))?;
sqlx::migrate!("./migrations")
.run(&pool)
.await
.map_err(|e| Error::Storage(format!("Failed to run database migrations: {e}")))?;
let storage = Self {
pool,
database_path,
};
storage.try_migrate_legacy_sessions().await?;
Ok(storage)
}
/// Save a conversation. Existing entries are updated in-place.
pub async fn save_conversation(
&self,
conversation: &Conversation,
name: Option<String>,
) -> Result<()> {
self.save_conversation_with_description(conversation, name, None)
.await
}
/// Save a conversation with an optional description override
pub async fn save_conversation_with_description(
&self,
conversation: &Conversation,
name: Option<String>,
description: Option<String>,
) -> Result<()> {
let mut serialized = conversation.clone();
if name.is_some() {
serialized.name = name.clone();
}
if description.is_some() {
serialized.description = description.clone();
}
let data = serde_json::to_string(&serialized)
.map_err(|e| Error::Storage(format!("Failed to serialize conversation: {e}")))?;
let created_at = to_epoch_seconds(serialized.created_at);
let updated_at = to_epoch_seconds(serialized.updated_at);
let message_count = serialized.messages.len() as i64;
sqlx::query(
r#"
INSERT INTO conversations (
id,
name,
description,
model,
message_count,
created_at,
updated_at,
data
) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)
ON CONFLICT(id) DO UPDATE SET
name = excluded.name,
description = excluded.description,
model = excluded.model,
message_count = excluded.message_count,
created_at = excluded.created_at,
updated_at = excluded.updated_at,
data = excluded.data
"#,
)
.bind(serialized.id.to_string())
.bind(name.or(serialized.name.clone()))
.bind(description.or(serialized.description.clone()))
.bind(&serialized.model)
.bind(message_count)
.bind(created_at)
.bind(updated_at)
.bind(data)
.execute(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to save conversation: {e}")))?;
Ok(())
}
/// Load a conversation by ID
pub async fn load_conversation(&self, id: Uuid) -> Result<Conversation> {
let record = sqlx::query(r#"SELECT data FROM conversations WHERE id = ?1"#)
.bind(id.to_string())
.fetch_optional(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to load conversation: {e}")))?;
let row =
record.ok_or_else(|| Error::Storage(format!("No conversation found with id {id}")))?;
let data: String = row
.try_get("data")
.map_err(|e| Error::Storage(format!("Failed to read conversation payload: {e}")))?;
serde_json::from_str(&data)
.map_err(|e| Error::Storage(format!("Failed to deserialize conversation: {e}")))
}
/// List metadata for all saved conversations ordered by most recent update
pub async fn list_sessions(&self) -> Result<Vec<SessionMeta>> {
let rows = sqlx::query(
r#"
SELECT id, name, description, model, message_count, created_at, updated_at
FROM conversations
ORDER BY updated_at DESC
"#,
)
.fetch_all(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to list sessions: {e}")))?;
let mut sessions = Vec::with_capacity(rows.len());
for row in rows {
let id_text: String = row
.try_get("id")
.map_err(|e| Error::Storage(format!("Failed to read id column: {e}")))?;
let id = Uuid::parse_str(&id_text)
.map_err(|e| Error::Storage(format!("Invalid UUID in storage: {e}")))?;
let message_count: i64 = row
.try_get("message_count")
.map_err(|e| Error::Storage(format!("Failed to read message count: {e}")))?;
let created_at: i64 = row
.try_get("created_at")
.map_err(|e| Error::Storage(format!("Failed to read created_at: {e}")))?;
let updated_at: i64 = row
.try_get("updated_at")
.map_err(|e| Error::Storage(format!("Failed to read updated_at: {e}")))?;
sessions.push(SessionMeta {
id,
name: row
.try_get("name")
.map_err(|e| Error::Storage(format!("Failed to read name: {e}")))?,
description: row
.try_get("description")
.map_err(|e| Error::Storage(format!("Failed to read description: {e}")))?,
model: row
.try_get("model")
.map_err(|e| Error::Storage(format!("Failed to read model: {e}")))?,
message_count: message_count as usize,
created_at: from_epoch_seconds(created_at),
updated_at: from_epoch_seconds(updated_at),
});
}
Ok(sessions)
}
/// Delete a conversation by ID
pub async fn delete_session(&self, id: Uuid) -> Result<()> {
sqlx::query("DELETE FROM conversations WHERE id = ?1")
.bind(id.to_string())
.execute(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to delete conversation: {e}")))?;
Ok(())
}
pub async fn store_secure_item(
&self,
key: &str,
plaintext: &[u8],
master_key: &[u8],
) -> Result<()> {
let cipher = create_cipher(master_key)?;
let nonce_bytes = generate_nonce()?;
let nonce = Nonce::from_slice(&nonce_bytes);
let ciphertext = cipher
.encrypt(nonce, plaintext)
.map_err(|e| Error::Storage(format!("Failed to encrypt secure item: {e}")))?;
let now = to_epoch_seconds(SystemTime::now());
sqlx::query(
r#"
INSERT INTO secure_items (key, nonce, ciphertext, created_at, updated_at)
VALUES (?1, ?2, ?3, ?4, ?5)
ON CONFLICT(key) DO UPDATE SET
nonce = excluded.nonce,
ciphertext = excluded.ciphertext,
updated_at = excluded.updated_at
"#,
)
.bind(key)
.bind(&nonce_bytes[..])
.bind(&ciphertext[..])
.bind(now)
.bind(now)
.execute(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to store secure item: {e}")))?;
Ok(())
}
pub async fn load_secure_item(&self, key: &str, master_key: &[u8]) -> Result<Option<Vec<u8>>> {
let record = sqlx::query("SELECT nonce, ciphertext FROM secure_items WHERE key = ?1")
.bind(key)
.fetch_optional(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to load secure item: {e}")))?;
let Some(row) = record else {
return Ok(None);
};
let nonce_bytes: Vec<u8> = row
.try_get("nonce")
.map_err(|e| Error::Storage(format!("Failed to read secure item nonce: {e}")))?;
let ciphertext: Vec<u8> = row
.try_get("ciphertext")
.map_err(|e| Error::Storage(format!("Failed to read secure item ciphertext: {e}")))?;
if nonce_bytes.len() != 12 {
return Err(Error::Storage(
"Invalid nonce length for secure item".to_string(),
));
}
let cipher = create_cipher(master_key)?;
let nonce = Nonce::from_slice(&nonce_bytes);
let plaintext = cipher
.decrypt(nonce, ciphertext.as_ref())
.map_err(|e| Error::Storage(format!("Failed to decrypt secure item: {e}")))?;
Ok(Some(plaintext))
}
pub async fn delete_secure_item(&self, key: &str) -> Result<()> {
sqlx::query("DELETE FROM secure_items WHERE key = ?1")
.bind(key)
.execute(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to delete secure item: {e}")))?;
Ok(())
}
pub async fn clear_secure_items(&self) -> Result<()> {
sqlx::query("DELETE FROM secure_items")
.execute(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to clear secure items: {e}")))?;
Ok(())
}
/// Database location used by this storage manager
pub fn database_path(&self) -> &Path {
&self.database_path
}
/// Determine default database path (platform specific)
pub fn default_database_path() -> Result<PathBuf> {
let data_dir = dirs::data_local_dir()
.ok_or_else(|| Error::Storage("Could not determine data directory".to_string()))?;
Ok(data_dir.join("owlen").join("owlen.db"))
}
fn legacy_sessions_dir() -> Result<PathBuf> {
let data_dir = dirs::data_local_dir()
.ok_or_else(|| Error::Storage("Could not determine data directory".to_string()))?;
Ok(data_dir.join("owlen").join("sessions"))
}
async fn database_has_records(&self) -> Result<bool> {
let (count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM conversations")
.fetch_one(&self.pool)
.await
.map_err(|e| Error::Storage(format!("Failed to inspect database: {e}")))?;
Ok(count > 0)
}
async fn try_migrate_legacy_sessions(&self) -> Result<()> {
if self.database_has_records().await? {
return Ok(());
}
let legacy_dir = match Self::legacy_sessions_dir() {
Ok(dir) => dir,
Err(_) => return Ok(()),
};
if !legacy_dir.exists() {
return Ok(());
}
let entries = fs::read_dir(&legacy_dir).map_err(|e| {
Error::Storage(format!("Failed to read legacy sessions directory: {e}"))
})?;
let mut json_files = Vec::new();
for entry in entries.flatten() {
let path = entry.path();
if path.extension().and_then(|s| s.to_str()) == Some("json") {
json_files.push(path);
}
}
if json_files.is_empty() {
return Ok(());
}
if !io::stdin().is_terminal() {
return Ok(());
}
println!(
"Legacy OWLEN session files were found in {}.",
legacy_dir.display()
);
if !prompt_yes_no("Migrate them to the new SQLite storage? (y/N) ")? {
println!("Skipping legacy session migration.");
return Ok(());
}
println!("Migrating legacy sessions...");
let mut migrated = 0usize;
for path in &json_files {
match fs::read_to_string(path) {
Ok(content) => match serde_json::from_str::<Conversation>(&content) {
Ok(conversation) => {
if let Err(err) = self
.save_conversation_with_description(
&conversation,
conversation.name.clone(),
conversation.description.clone(),
)
.await
{
println!(" • Failed to migrate {}: {}", path.display(), err);
} else {
migrated += 1;
}
}
Err(err) => {
println!(
" • Failed to parse conversation {}: {}",
path.display(),
err
);
}
},
Err(err) => {
println!(" • Failed to read {}: {}", path.display(), err);
}
}
}
if migrated > 0 {
if let Err(err) = archive_legacy_directory(&legacy_dir) {
println!(
"Warning: migrated sessions but failed to archive legacy directory: {}",
err
);
}
}
println!("Migrated {} legacy sessions.", migrated);
Ok(())
}
}
fn to_epoch_seconds(time: SystemTime) -> i64 {
match time.duration_since(UNIX_EPOCH) {
Ok(duration) => duration.as_secs() as i64,
Err(_) => 0,
}
}
fn from_epoch_seconds(seconds: i64) -> SystemTime {
UNIX_EPOCH + Duration::from_secs(seconds.max(0) as u64)
}
fn prompt_yes_no(prompt: &str) -> Result<bool> {
print!("{}", prompt);
io::stdout()
.flush()
.map_err(|e| Error::Storage(format!("Failed to flush stdout: {e}")))?;
let mut input = String::new();
io::stdin()
.read_line(&mut input)
.map_err(|e| Error::Storage(format!("Failed to read input: {e}")))?;
let trimmed = input.trim().to_lowercase();
Ok(matches!(trimmed.as_str(), "y" | "yes"))
}
fn archive_legacy_directory(legacy_dir: &Path) -> Result<()> {
let mut backup_dir = legacy_dir.with_file_name("sessions_legacy_backup");
let mut counter = 1;
while backup_dir.exists() {
backup_dir = legacy_dir.with_file_name(format!("sessions_legacy_backup_{}", counter));
counter += 1;
}
fs::rename(legacy_dir, &backup_dir).map_err(|e| {
Error::Storage(format!(
"Failed to archive legacy sessions directory {}: {}",
legacy_dir.display(),
e
))
})?;
println!("Legacy session files archived to {}", backup_dir.display());
Ok(())
}
fn create_cipher(master_key: &[u8]) -> Result<Aes256Gcm> {
if master_key.len() != 32 {
return Err(Error::Storage(
"Master key must be 32 bytes for AES-256-GCM".to_string(),
));
}
Aes256Gcm::new_from_slice(master_key).map_err(|_| {
Error::Storage("Failed to initialize cipher with provided master key".to_string())
})
}
fn generate_nonce() -> Result<[u8; 12]> {
let mut nonce = [0u8; 12];
SystemRandom::new()
.fill(&mut nonce)
.map_err(|_| Error::Storage("Failed to generate nonce".to_string()))?;
Ok(nonce)
}
#[cfg(test)]
mod tests {
use super::*;
use crate::types::{Conversation, Message};
use tempfile::tempdir;
fn sample_conversation() -> Conversation {
Conversation {
id: Uuid::new_v4(),
name: Some("Test conversation".to_string()),
description: Some("A sample conversation".to_string()),
messages: vec![
Message::user("Hello".to_string()),
Message::assistant("Hi".to_string()),
],
model: "test-model".to_string(),
created_at: SystemTime::now(),
updated_at: SystemTime::now(),
}
}
#[tokio::test]
async fn test_storage_lifecycle() {
let temp_dir = tempdir().expect("failed to create temp dir");
let db_path = temp_dir.path().join("owlen.db");
let storage = StorageManager::with_database_path(db_path).await.unwrap();
let conversation = sample_conversation();
storage
.save_conversation(&conversation, None)
.await
.expect("failed to save conversation");
let sessions = storage.list_sessions().await.unwrap();
assert_eq!(sessions.len(), 1);
assert_eq!(sessions[0].id, conversation.id);
let loaded = storage.load_conversation(conversation.id).await.unwrap();
assert_eq!(loaded.messages.len(), 2);
storage
.delete_session(conversation.id)
.await
.expect("failed to delete conversation");
let sessions = storage.list_sessions().await.unwrap();
assert!(sessions.is_empty());
}
}

File diff suppressed because it is too large Load Diff

View File

@@ -1,97 +0,0 @@
//! Tool module aggregating builtin tool implementations.
//!
//! The crate originally declared `pub mod tools;` in `lib.rs` but the source
//! directory only contained individual tool files without a `mod.rs`, causing the
//! compiler to look for `tools.rs` and fail. Adding this module file makes the
//! directory a proper Rust module and reexports the concrete tool types.
pub mod code_exec;
pub mod fs_tools;
pub mod registry;
pub mod web_scrape;
pub mod web_search;
pub mod web_search_detailed;
use async_trait::async_trait;
use serde_json::{Value, json};
use std::collections::HashMap;
use std::time::Duration;
use crate::Result;
/// Trait representing a tool that can be called via the MCP interface.
#[async_trait]
pub trait Tool: Send + Sync {
/// Unique name of the tool (used in the MCP protocol).
fn name(&self) -> &'static str;
/// Humanreadable description for documentation.
fn description(&self) -> &'static str;
/// JSONSchema describing the expected arguments.
fn schema(&self) -> Value;
/// Execute the tool with the provided arguments.
fn requires_network(&self) -> bool {
false
}
fn requires_filesystem(&self) -> Vec<String> {
Vec::new()
}
async fn execute(&self, args: Value) -> Result<ToolResult>;
}
/// Result returned by a tool execution.
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct ToolResult {
/// Indicates whether the tool completed successfully.
pub success: bool,
/// Humanreadable status string retained for compatibility.
pub status: String,
/// Arbitrary JSON payload describing the tool output.
pub output: Value,
/// Execution duration.
#[serde(skip_serializing_if = "Duration::is_zero", default)]
pub duration: Duration,
/// Optional key/value metadata for the tool invocation.
#[serde(default)]
pub metadata: HashMap<String, String>,
}
impl ToolResult {
pub fn success(output: Value) -> Self {
Self {
success: true,
status: "success".into(),
output,
duration: Duration::default(),
metadata: HashMap::new(),
}
}
pub fn error(msg: &str) -> Self {
Self {
success: false,
status: "error".into(),
output: json!({ "error": msg }),
duration: Duration::default(),
metadata: HashMap::new(),
}
}
pub fn cancelled(msg: &str) -> Self {
Self {
success: false,
status: "cancelled".into(),
output: json!({ "error": msg }),
duration: Duration::default(),
metadata: HashMap::new(),
}
}
}
// Reexport the most commonly used types so they can be accessed as
// `owlen_core::tools::CodeExecTool`, etc.
pub use code_exec::CodeExecTool;
pub use fs_tools::{ResourcesDeleteTool, ResourcesGetTool, ResourcesListTool, ResourcesWriteTool};
pub use registry::ToolRegistry;
pub use web_scrape::WebScrapeTool;
pub use web_search::WebSearchTool;
pub use web_search_detailed::WebSearchDetailedTool;

View File

@@ -1,148 +0,0 @@
use std::sync::Arc;
use std::time::Instant;
use crate::Result;
use anyhow::{Context, anyhow};
use async_trait::async_trait;
use serde_json::{Value, json};
use super::{Tool, ToolResult};
use crate::sandbox::{SandboxConfig, SandboxedProcess};
pub struct CodeExecTool {
allowed_languages: Arc<Vec<String>>,
}
impl CodeExecTool {
pub fn new(allowed_languages: Vec<String>) -> Self {
Self {
allowed_languages: Arc::new(allowed_languages),
}
}
}
#[async_trait]
impl Tool for CodeExecTool {
fn name(&self) -> &'static str {
"code_exec"
}
fn description(&self) -> &'static str {
"Execute code snippets within a sandboxed environment"
}
fn schema(&self) -> Value {
json!({
"type": "object",
"properties": {
"language": {
"type": "string",
"enum": self.allowed_languages.as_slice(),
"description": "Language of the code block"
},
"code": {
"type": "string",
"minLength": 1,
"maxLength": 10000,
"description": "Code to execute"
},
"timeout": {
"type": "integer",
"minimum": 1,
"maximum": 300,
"default": 30,
"description": "Execution timeout in seconds"
}
},
"required": ["language", "code"],
"additionalProperties": false
})
}
async fn execute(&self, args: Value) -> Result<ToolResult> {
let start = Instant::now();
let language = args
.get("language")
.and_then(Value::as_str)
.context("Missing language parameter")?;
let code = args
.get("code")
.and_then(Value::as_str)
.context("Missing code parameter")?;
let timeout = args.get("timeout").and_then(Value::as_u64).unwrap_or(30);
if !self.allowed_languages.iter().any(|lang| lang == language) {
return Err(anyhow!("Language '{}' not permitted", language).into());
}
let (command, command_args) = match language {
"python" => (
"python3".to_string(),
vec!["-c".to_string(), code.to_string()],
),
"javascript" => ("node".to_string(), vec!["-e".to_string(), code.to_string()]),
"bash" => ("bash".to_string(), vec!["-c".to_string(), code.to_string()]),
"rust" => {
let mut result =
ToolResult::error("Rust execution is not yet supported in the sandbox");
result.duration = start.elapsed();
return Ok(result);
}
other => return Err(anyhow!("Unsupported language: {}", other).into()),
};
let sandbox_config = SandboxConfig {
allow_network: false,
timeout_seconds: timeout,
..Default::default()
};
let sandbox_result = tokio::task::spawn_blocking(move || {
let sandbox = SandboxedProcess::new(sandbox_config)?;
let arg_refs: Vec<&str> = command_args.iter().map(|s| s.as_str()).collect();
sandbox.execute(&command, &arg_refs)
})
.await
.context("Sandbox execution task failed")??;
let mut result = if sandbox_result.exit_code == 0 {
ToolResult::success(json!({
"stdout": sandbox_result.stdout,
"stderr": sandbox_result.stderr,
"exit_code": sandbox_result.exit_code,
"timed_out": sandbox_result.was_timeout,
}))
} else {
let error_msg = if sandbox_result.was_timeout {
format!(
"Execution timed out after {} seconds (exit code {}): {}",
timeout, sandbox_result.exit_code, sandbox_result.stderr
)
} else {
format!(
"Execution failed with status {}: {}",
sandbox_result.exit_code, sandbox_result.stderr
)
};
let mut err_result = ToolResult::error(&error_msg);
err_result.output = json!({
"stdout": sandbox_result.stdout,
"stderr": sandbox_result.stderr,
"exit_code": sandbox_result.exit_code,
"timed_out": sandbox_result.was_timeout,
});
err_result
};
result.duration = start.elapsed();
result
.metadata
.insert("language".to_string(), language.to_string());
result
.metadata
.insert("timeout_seconds".to_string(), timeout.to_string());
Ok(result)
}
}

View File

@@ -1,198 +0,0 @@
use crate::tools::{Tool, ToolResult};
use crate::{Error, Result};
use async_trait::async_trait;
use path_clean::PathClean;
use serde::Deserialize;
use serde_json::json;
use std::env;
use std::fs;
use std::path::{Path, PathBuf};
#[derive(Deserialize)]
struct FileArgs {
path: String,
}
fn sanitize_path(path: &str, root: &Path) -> Result<PathBuf> {
let path = Path::new(path);
let path = if path.is_absolute() {
// Strip leading '/' to treat as relative to the project root.
path.strip_prefix("/")
.map_err(|_| Error::InvalidInput("Invalid path".into()))?
.to_path_buf()
} else {
path.to_path_buf()
};
let full_path = root.join(path).clean();
if !full_path.starts_with(root) {
return Err(Error::PermissionDenied("Path traversal detected".into()));
}
Ok(full_path)
}
pub struct ResourcesListTool;
#[async_trait]
impl Tool for ResourcesListTool {
fn name(&self) -> &'static str {
"resources/list"
}
fn description(&self) -> &'static str {
"Lists directory contents."
}
fn schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "The path to the directory to list."
}
},
"required": ["path"]
})
}
async fn execute(&self, args: serde_json::Value) -> Result<ToolResult> {
let args: FileArgs = serde_json::from_value(args)?;
let root = env::current_dir()?;
let full_path = sanitize_path(&args.path, &root)?;
let entries = fs::read_dir(full_path)?;
let mut result = Vec::new();
for entry in entries {
let entry = entry?;
result.push(entry.file_name().to_string_lossy().to_string());
}
Ok(ToolResult::success(serde_json::to_value(result)?))
}
}
pub struct ResourcesGetTool;
#[async_trait]
impl Tool for ResourcesGetTool {
fn name(&self) -> &'static str {
"resources/get"
}
fn description(&self) -> &'static str {
"Reads file content."
}
fn schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "The path to the file to read."
}
},
"required": ["path"]
})
}
async fn execute(&self, args: serde_json::Value) -> Result<ToolResult> {
let args: FileArgs = serde_json::from_value(args)?;
let root = env::current_dir()?;
let full_path = sanitize_path(&args.path, &root)?;
let content = fs::read_to_string(full_path)?;
Ok(ToolResult::success(serde_json::to_value(content)?))
}
}
// ---------------------------------------------------------------------------
// Write tool writes (or overwrites) a file under the project root.
// ---------------------------------------------------------------------------
pub struct ResourcesWriteTool;
#[derive(Deserialize)]
struct WriteArgs {
path: String,
content: String,
}
#[async_trait]
impl Tool for ResourcesWriteTool {
fn name(&self) -> &'static str {
"resources/write"
}
fn description(&self) -> &'static str {
"Writes (or overwrites) a file. Requires explicit consent."
}
fn schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": {
"path": { "type": "string", "description": "Target file path (relative to project root)" },
"content": { "type": "string", "description": "File content to write" }
},
"required": ["path", "content"]
})
}
fn requires_filesystem(&self) -> Vec<String> {
vec!["file_write".to_string()]
}
async fn execute(&self, args: serde_json::Value) -> Result<ToolResult> {
let args: WriteArgs = serde_json::from_value(args)?;
let root = env::current_dir()?;
let full_path = sanitize_path(&args.path, &root)?;
// Ensure the parent directory exists
if let Some(parent) = full_path.parent() {
fs::create_dir_all(parent)?;
}
fs::write(full_path, args.content)?;
Ok(ToolResult::success(json!(null)))
}
}
// ---------------------------------------------------------------------------
// Delete tool deletes a file under the project root.
// ---------------------------------------------------------------------------
pub struct ResourcesDeleteTool;
#[derive(Deserialize)]
struct DeleteArgs {
path: String,
}
#[async_trait]
impl Tool for ResourcesDeleteTool {
fn name(&self) -> &'static str {
"resources/delete"
}
fn description(&self) -> &'static str {
"Deletes a file. Requires explicit consent."
}
fn schema(&self) -> serde_json::Value {
json!({
"type": "object",
"properties": { "path": { "type": "string", "description": "File path to delete" } },
"required": ["path"]
})
}
fn requires_filesystem(&self) -> Vec<String> {
vec!["file_delete".to_string()]
}
async fn execute(&self, args: serde_json::Value) -> Result<ToolResult> {
let args: DeleteArgs = serde_json::from_value(args)?;
let root = env::current_dir()?;
let full_path = sanitize_path(&args.path, &root)?;
if full_path.is_file() {
fs::remove_file(full_path)?;
Ok(ToolResult::success(json!(null)))
} else {
Err(Error::InvalidInput("Path does not refer to a file".into()))
}
}
}

View File

@@ -1,114 +0,0 @@
use std::collections::HashMap;
use std::sync::Arc;
use crate::Result;
use anyhow::Context;
use serde_json::Value;
use super::{Tool, ToolResult};
use crate::config::Config;
use crate::mode::Mode;
use crate::ui::UiController;
pub struct ToolRegistry {
tools: HashMap<String, Arc<dyn Tool>>,
config: Arc<tokio::sync::Mutex<Config>>,
ui: Arc<dyn UiController>,
}
impl ToolRegistry {
pub fn new(config: Arc<tokio::sync::Mutex<Config>>, ui: Arc<dyn UiController>) -> Self {
Self {
tools: HashMap::new(),
config,
ui,
}
}
pub fn register<T>(&mut self, tool: T)
where
T: Tool + 'static,
{
let tool: Arc<dyn Tool> = Arc::new(tool);
let name = tool.name().to_string();
self.tools.insert(name, tool);
}
pub fn get(&self, name: &str) -> Option<Arc<dyn Tool>> {
self.tools.get(name).cloned()
}
pub fn all(&self) -> Vec<Arc<dyn Tool>> {
self.tools.values().cloned().collect()
}
pub async fn execute(&self, name: &str, args: Value, mode: Mode) -> Result<ToolResult> {
let tool = self
.get(name)
.with_context(|| format!("Tool not registered: {}", name))?;
let mut config = self.config.lock().await;
// Check mode-based tool availability first
if !config.modes.is_tool_allowed(mode, name) {
let alternate_mode = match mode {
Mode::Chat => Mode::Code,
Mode::Code => Mode::Chat,
};
if config.modes.is_tool_allowed(alternate_mode, name) {
return Ok(ToolResult::error(&format!(
"Tool '{}' is not available in {} mode. Switch to {} mode to use this tool (use :mode {} command).",
name, mode, alternate_mode, alternate_mode
)));
} else {
return Ok(ToolResult::error(&format!(
"Tool '{}' is not available in any mode. Check your configuration.",
name
)));
}
}
let is_enabled = match name {
"web_search" => config.tools.web_search.enabled,
"code_exec" => config.tools.code_exec.enabled,
_ => true, // All other tools are considered enabled by default
};
if !is_enabled {
let prompt = format!(
"Tool '{}' is disabled. Would you like to enable it for this session?",
name
);
if self.ui.confirm(&prompt).await {
// Enable the tool in the in-memory config for the current session
match name {
"web_search" => config.tools.web_search.enabled = true,
"code_exec" => config.tools.code_exec.enabled = true,
_ => {}
}
} else {
return Ok(ToolResult::cancelled(&format!(
"Tool '{}' execution was cancelled by the user.",
name
)));
}
}
tool.execute(args).await
}
/// Get all tools available in the given mode
pub async fn available_tools(&self, mode: Mode) -> Vec<String> {
let config = self.config.lock().await;
self.tools
.keys()
.filter(|name| config.modes.is_tool_allowed(mode, name))
.cloned()
.collect()
}
pub fn tools(&self) -> Vec<String> {
self.tools.keys().cloned().collect()
}
}

View File

@@ -1,102 +0,0 @@
use super::{Tool, ToolResult};
use crate::Result;
use anyhow::Context;
use async_trait::async_trait;
use serde_json::{Value, json};
/// Tool that fetches the raw HTML content for a list of URLs.
///
/// Input schema expects:
/// urls: array of strings (max 5 URLs)
/// timeout_secs: optional integer perrequest timeout (default 10)
pub struct WebScrapeTool {
// No special dependencies; uses reqwest_011 for compatibility with existing web_search.
client: reqwest_011::Client,
}
impl Default for WebScrapeTool {
fn default() -> Self {
Self::new()
}
}
impl WebScrapeTool {
pub fn new() -> Self {
let client = reqwest_011::Client::builder()
.user_agent("OwlenWebScrape/0.1")
.build()
.expect("Failed to build reqwest client");
Self { client }
}
}
#[async_trait]
impl Tool for WebScrapeTool {
fn name(&self) -> &'static str {
"web_scrape"
}
fn description(&self) -> &'static str {
"Fetch raw HTML content for a list of URLs"
}
fn schema(&self) -> Value {
json!({
"type": "object",
"properties": {
"urls": {
"type": "array",
"items": { "type": "string", "format": "uri" },
"minItems": 1,
"maxItems": 5,
"description": "List of URLs to scrape"
},
"timeout_secs": {
"type": "integer",
"minimum": 1,
"maximum": 30,
"default": 10,
"description": "Perrequest timeout in seconds"
}
},
"required": ["urls"],
"additionalProperties": false
})
}
fn requires_network(&self) -> bool {
true
}
async fn execute(&self, args: Value) -> Result<ToolResult> {
let urls = args
.get("urls")
.and_then(|v| v.as_array())
.context("Missing 'urls' array")?;
let timeout_secs = args
.get("timeout_secs")
.and_then(|v| v.as_u64())
.unwrap_or(10);
let mut results = Vec::new();
for url_val in urls {
let url = url_val.as_str().unwrap_or("");
let resp = self
.client
.get(url)
.timeout(std::time::Duration::from_secs(timeout_secs))
.send()
.await;
match resp {
Ok(r) => {
let text = r.text().await.unwrap_or_default();
results.push(json!({ "url": url, "content": text }));
}
Err(e) => {
results.push(json!({ "url": url, "error": e.to_string() }));
}
}
}
Ok(ToolResult::success(json!({ "pages": results })))
}
}

View File

@@ -1,154 +0,0 @@
use std::sync::{Arc, Mutex};
use std::time::Instant;
use crate::Result;
use anyhow::Context;
use async_trait::async_trait;
use serde_json::{Value, json};
use super::{Tool, ToolResult};
use crate::consent::ConsentManager;
use crate::credentials::CredentialManager;
use crate::encryption::VaultHandle;
pub struct WebSearchTool {
consent_manager: Arc<Mutex<ConsentManager>>,
_credential_manager: Option<Arc<CredentialManager>>,
browser: duckduckgo::browser::Browser,
}
impl WebSearchTool {
pub fn new(
consent_manager: Arc<Mutex<ConsentManager>>,
credential_manager: Option<Arc<CredentialManager>>,
_vault: Option<Arc<Mutex<VaultHandle>>>,
) -> Self {
// Create a reqwest client compatible with duckduckgo crate (v0.11)
let client = reqwest_011::Client::new();
let browser = duckduckgo::browser::Browser::new(client);
Self {
consent_manager,
_credential_manager: credential_manager,
browser,
}
}
}
#[async_trait]
impl Tool for WebSearchTool {
fn name(&self) -> &'static str {
"web_search"
}
fn description(&self) -> &'static str {
"Search the web for information using DuckDuckGo API"
}
fn schema(&self) -> Value {
json!({
"type": "object",
"properties": {
"query": {
"type": "string",
"minLength": 1,
"maxLength": 500,
"description": "Search query"
},
"max_results": {
"type": "integer",
"minimum": 1,
"maximum": 10,
"default": 5,
"description": "Maximum number of results"
}
},
"required": ["query"],
"additionalProperties": false
})
}
fn requires_network(&self) -> bool {
true
}
async fn execute(&self, args: Value) -> Result<ToolResult> {
let start = Instant::now();
// Check if consent has been granted (non-blocking check)
// Consent should have been granted via TUI dialog before tool execution
{
let consent = self
.consent_manager
.lock()
.expect("Consent manager mutex poisoned");
if !consent.has_consent(self.name()) {
return Ok(ToolResult::error(
"Consent not granted for web search. This should have been handled by the TUI.",
));
}
}
let query = args
.get("query")
.and_then(Value::as_str)
.context("Missing query parameter")?;
let max_results = args.get("max_results").and_then(Value::as_u64).unwrap_or(5) as usize;
let user_agent = duckduckgo::user_agents::get("firefox").unwrap_or(
"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0",
);
// Detect if this is a news query - use news endpoint for better snippets
let is_news_query = query.to_lowercase().contains("news")
|| query.to_lowercase().contains("latest")
|| query.to_lowercase().contains("today")
|| query.to_lowercase().contains("recent");
let mut formatted_results = Vec::new();
if is_news_query {
// Use news endpoint which returns excerpts/snippets
let news_results = self
.browser
.news(query, "wt-wt", false, Some(max_results), user_agent)
.await
.context("DuckDuckGo news search failed")?;
for result in news_results {
formatted_results.push(json!({
"title": result.title,
"url": result.url,
"snippet": result.body, // news has body/excerpt
"source": result.source,
"date": result.date
}));
}
} else {
// Use lite search for general queries (fast but no snippets)
let search_results = self
.browser
.lite_search(query, "wt-wt", Some(max_results), user_agent)
.await
.context("DuckDuckGo search failed")?;
for result in search_results {
formatted_results.push(json!({
"title": result.title,
"url": result.url,
"snippet": result.snippet
}));
}
}
let mut result = ToolResult::success(json!({
"query": query,
"results": formatted_results,
"total_found": formatted_results.len()
}));
result.duration = start.elapsed();
Ok(result)
}
}

Some files were not shown because too many files have changed in this diff Show More