owlen/examples/mcp_chat.rs
vikingowl 0728262a9e fix(core,mcp,security)!: resolve critical P0/P1 issues
BREAKING CHANGES:
- owlen-core no longer depends on ratatui/crossterm
- RemoteMcpClient constructors are now async
- MCP path validation is stricter (security hardening)
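
For downstream callers, the constructor change is the main migration point. A minimal before/after sketch (the caller's error handling is assumed; `examples/mcp_chat.rs` below shows the new form in context):

```rust
// Before: synchronous construction, which could block the async runtime.
// let client = RemoteMcpClient::new()?;

// After: constructors are async and must be awaited.
let client = RemoteMcpClient::new().await?;
```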

This commit resolves three critical issues identified in project analysis:

## P0-1: Extract TUI dependencies from owlen-core

Create the owlen-ui-common crate to hold UI-agnostic color and theme
abstractions, removing an architectural boundary violation.

Changes:
- Create new owlen-ui-common crate with abstract Color enum
- Move theme.rs from owlen-core to owlen-ui-common
- Define Color with Rgb and Named variants (no ratatui dependency; sketched below)
- Create color conversion layer in owlen-tui (color_convert.rs)
- Update 35+ color usages with conversion wrappers
- Remove ratatui/crossterm from owlen-core dependencies

Benefits:
- owlen-core usable in headless/CLI contexts
- Enables future GUI frontends
- Reduces binary size for core library consumers
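
A minimal sketch of the split, with illustrative names (the exact named-color set and converter signature are assumptions): the abstract `Color` lives in owlen-ui-common with no ratatui dependency, and owlen-tui's color_convert.rs is the only place that maps it onto ratatui types.

```rust
// owlen-ui-common (sketch): UI-agnostic color abstraction, no ratatui dependency.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Color {
    Rgb(u8, u8, u8),
    Named(NamedColor),
}

// Illustrative subset; the real named-color set may be larger.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum NamedColor {
    Black,
    Red,
    Green,
    Yellow,
    Blue,
    White,
}

// owlen-tui/src/color_convert.rs (sketch): the only crate that sees ratatui.
pub fn to_ratatui(color: Color) -> ratatui::style::Color {
    match color {
        Color::Rgb(r, g, b) => ratatui::style::Color::Rgb(r, g, b),
        Color::Named(NamedColor::Black) => ratatui::style::Color::Black,
        Color::Named(NamedColor::Red) => ratatui::style::Color::Red,
        Color::Named(NamedColor::Green) => ratatui::style::Color::Green,
        Color::Named(NamedColor::Yellow) => ratatui::style::Color::Yellow,
        Color::Named(NamedColor::Blue) => ratatui::style::Color::Blue,
        Color::Named(NamedColor::White) => ratatui::style::Color::White,
    }
}
```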

## P0-2: Fix blocking WebSocket connections

Convert the RemoteMcpClient constructors to async, eliminating runtime
blocking that froze the TUI for 30+ seconds on slow connections.

Changes:
- Make new_with_runtime(), new_with_config(), new() async
- Remove block_in_place wrappers for I/O operations
- Add 30-second connection timeout with tokio::time::timeout (sketched below)
- Update 15+ call sites across 10 files to await constructors
- Convert 4 test functions to #[tokio::test]

Benefits:
- TUI remains responsive during WebSocket connections
- I/O is now properly async, following Rust best practices
- No more indefinite hangs
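
The connection pattern, roughly: the connect future is awaited on the runtime and capped with `tokio::time::timeout` instead of being driven inside `block_in_place`. A sketch assuming a tokio-tungstenite WebSocket transport and an anyhow error type (the real client's fields and protocol layer will differ):

```rust
use std::time::Duration;
use tokio::time::timeout;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream, connect_async};

pub struct RemoteMcpClient {
    // Stand-in field; the real client wraps the stream in its own protocol layer.
    ws: WebSocketStream<MaybeTlsStream<tokio::net::TcpStream>>,
}

impl RemoteMcpClient {
    /// Async constructor: no block_in_place, and the connection attempt is
    /// abandoned after 30 seconds instead of hanging the caller indefinitely.
    pub async fn new(url: &str) -> anyhow::Result<Self> {
        let (ws, _response) = timeout(Duration::from_secs(30), connect_async(url))
            .await
            .map_err(|_| anyhow::anyhow!("timed out connecting to MCP server after 30s"))??;
        Ok(Self { ws })
    }
}
```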

## P1-1: Secure path traversal vulnerabilities

Implement comprehensive path validation with 7 defense layers to
prevent file access outside workspace boundaries.

Changes:
- Create validate_safe_path() with multi-layer security (sketched below):
  * URL decoding (prevents %2E%2E bypasses)
  * Absolute path rejection
  * Null byte protection
  * Windows-specific checks (UNC/device paths)
  * Lexical path cleaning (removes .. components)
  * Symlink resolution via canonicalization
  * Boundary verification with starts_with check
- Update 4 MCP resource functions (get/list/write/delete)
- Add 11 comprehensive security tests

Benefits:
- Blocks URL-encoded, absolute, and UNC path attacks
- Prevents null byte injection
- Stops symlink escape attempts
- Cross-platform security (Windows/Linux/macOS)
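
A sketch of the layered checks in the order listed above. The signature, error type, and use of the percent-encoding crate are assumptions; this version rejects `..` components outright rather than stripping them, which is the stricter reading of "lexical cleaning".

```rust
use std::path::{Component, Path, PathBuf};

fn validate_safe_path(workspace_root: &Path, requested: &str) -> Result<PathBuf, String> {
    // 1. URL-decode so %2E%2E cannot smuggle ".." past the later checks
    //    (assumes the percent-encoding crate).
    let decoded = percent_encoding::percent_decode_str(requested)
        .decode_utf8()
        .map_err(|_| "invalid UTF-8 after URL decoding".to_string())?;

    // 2. Reject absolute paths; resources must be workspace-relative.
    let candidate = Path::new(decoded.as_ref());
    if candidate.is_absolute() {
        return Err("absolute paths are not allowed".into());
    }

    // 3. Reject null bytes.
    if decoded.contains('\0') {
        return Err("null byte in path".into());
    }

    // 4. Windows-specific checks: UNC shares, device paths, drive prefixes.
    if decoded.starts_with("\\\\") {
        return Err("UNC-style paths are not allowed".into());
    }
    #[cfg(windows)]
    {
        if matches!(candidate.components().next(), Some(Component::Prefix(_))) {
            return Err("drive/device-prefixed paths are not allowed".into());
        }
    }

    // 5. Lexical check: refuse any `..` component instead of resolving it.
    if candidate.components().any(|c| matches!(c, Component::ParentDir)) {
        return Err("parent-directory components are not allowed".into());
    }

    // 6. Resolve symlinks by canonicalizing both the root and the joined path
    //    (a write/create path would canonicalize the existing parent instead).
    let canonical_root = workspace_root
        .canonicalize()
        .map_err(|e| format!("cannot canonicalize workspace root: {e}"))?;
    let canonical = canonical_root
        .join(candidate)
        .canonicalize()
        .map_err(|e| format!("cannot canonicalize requested path: {e}"))?;

    // 7. Boundary verification: the resolved path must stay inside the workspace.
    if !canonical.starts_with(&canonical_root) {
        return Err("path escapes the workspace boundary".into());
    }

    Ok(canonical)
}
```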

## Test Results

- owlen-core: 109/109 tests pass (100%)
- owlen-tui: 52/53 tests pass (98%, 1 pre-existing failure)
- owlen-providers: 2/2 tests pass (100%)
- Build: cargo build --all succeeds

## Verification

- ✓ cargo tree -p owlen-core shows no TUI dependencies
- ✓ No block_in_place calls remain in MCP I/O code
- ✓ All 11 security tests pass

Fixes: #P0-1, #P0-2, #P1-1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-29 12:31:20 +01:00


//! Example demonstrating MCP-based chat interaction.
//!
//! This example shows the recommended way to interact with LLMs via the MCP architecture.
//! It uses `RemoteMcpClient` which communicates with the MCP LLM server.
//!
//! Prerequisites:
//! - Build the MCP LLM server: `cargo build --release -p owlen-mcp-llm-server`
//! - Ensure Ollama is running with a model available

use owlen_core::{
    Provider,
    mcp::remote_client::RemoteMcpClient,
    types::{ChatParameters, ChatRequest, Message, Role},
};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    println!("🦉 Owlen MCP Chat Example\n");

    // Create MCP client - this will spawn/connect to the MCP LLM server
    println!("Connecting to MCP LLM server...");
    let client = Arc::new(RemoteMcpClient::new().await?);
    println!("✓ Connected\n");

    // List available models
    println!("Fetching available models...");
    let models = client.list_models().await?;
    println!("Available models:");
    for model in &models {
        println!(" - {} ({})", model.name, model.provider);
    }
    println!();

    // Select first available model or default
    let model_name = models
        .first()
        .map(|m| m.id.clone())
        .unwrap_or_else(|| "llama3.2:latest".to_string());
    println!("Using model: {}\n", model_name);

    // Create a simple chat request
    let user_message = "What is the capital of France? Please be concise.";
    println!("User: {}", user_message);

    let request = ChatRequest {
        model: model_name,
        messages: vec![Message::new(Role::User, user_message.to_string())],
        parameters: ChatParameters {
            temperature: Some(0.7),
            max_tokens: Some(100),
            stream: false,
            extra: std::collections::HashMap::new(),
        },
        tools: None,
    };

    // Send request and get response
    println!("\nAssistant: ");
    let response = client.send_prompt(request).await?;
    println!("{}", response.message.content);

    if let Some(usage) = response.usage {
        println!(
            "\n📊 Tokens: {} prompt + {} completion = {} total",
            usage.prompt_tokens, usage.completion_tokens, usage.total_tokens
        );
    }

    Ok(())
}