diff --git a/CHANGELOG.md b/CHANGELOG.md index 174030d..10c5077 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -13,10 +13,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - Module-level documentation for `owlen-tui`. - Ollama integration can now talk to Ollama Cloud when an API key is configured. - Ollama provider will also read `OLLAMA_API_KEY` / `OLLAMA_CLOUD_API_KEY` environment variables when no key is stored in the config. +- `owlen config doctor`, `owlen config path`, and `owlen upgrade` CLI commands to automate migrations and surface manual update steps. +- Startup provider health check with actionable hints when Ollama or remote MCP servers are unavailable. +- `dev/check-windows.sh` helper script for on-demand Windows cross-checks. +- Global F1 keybinding for the in-app help overlay and a clearer status hint on launch. ### Changed - The main `README.md` has been updated to be more concise and link to the new documentation. - Default configuration now pre-populates both `providers.ollama` and `providers.ollama-cloud` entries so switching between local and cloud backends is a single setting change. +- `McpMode` support was restored with explicit validation; `remote_only`, `remote_preferred`, and `local_only` now behave predictably. +- Configuration loading performs structural validation and fails fast on missing default providers or invalid MCP definitions. +- Ollama provider error handling now distinguishes timeouts, missing models, and authentication failures. +- `owlen` warns when the active terminal likely lacks 256-color support. --- diff --git a/README.md b/README.md index 405b15f..77e69a1 100644 --- a/README.md +++ b/README.md @@ -31,7 +31,8 @@ The OWLEN interface features a clean, multi-panel layout with vim-inspired navig - **Advanced Text Editing**: Multi-line input, history, and clipboard support. - **Session Management**: Save, load, and manage conversations. - **Theming System**: 10 built-in themes and support for custom themes. -- **Modular Architecture**: Extensible provider system (currently Ollama). +- **Modular Architecture**: Extensible provider system (Ollama today, additional providers on the roadmap). +- **Guided Setup**: `owlen config doctor` upgrades legacy configs and verifies your environment in seconds. ## Getting Started @@ -55,6 +56,8 @@ cargo install --path crates/owlen-cli #### Windows The Windows build has not been thoroughly tested yet. Installation is possible via the same `cargo install` method, but it is considered experimental at this time. +From Unix hosts you can run `scripts/check-windows.sh` to ensure the code base still compiles for Windows (`rustup` will install the required target automatically). + ### Running OWLEN Make sure Ollama is running, then launch the application: @@ -66,9 +69,13 @@ If you built from source without installing, you can run it with: ./target/release/owlen ``` +### Updating + +Owlen does not auto-update. Run `owlen upgrade` at any time to print the recommended manual steps (pull the repository and reinstall with `cargo install --path crates/owlen-cli --force`). Arch Linux users can update via the `owlen-git` AUR package. + ## Using the TUI -OWLEN uses a modal, vim-inspired interface. Press `?` in Normal mode to view the help screen with all keybindings. +OWLEN uses a modal, vim-inspired interface. Press `F1` (available from any mode) or `?` in Normal mode to view the help screen with all keybindings. - **Normal Mode**: Navigate with `h/j/k/l`, `w/b`, `gg/G`. - **Editing Mode**: Enter with `i` or `a`. Send messages with `Enter`. @@ -83,16 +90,33 @@ For more detailed information, please refer to the following documents: - **[docs/architecture.md](docs/architecture.md)**: An overview of the project's architecture. - **[docs/troubleshooting.md](docs/troubleshooting.md)**: Help with common issues. - **[docs/provider-implementation.md](docs/provider-implementation.md)**: A guide for adding new providers. +- **[docs/platform-support.md](docs/platform-support.md)**: Current OS support matrix and cross-check instructions. ## Configuration -OWLEN stores its configuration in `~/.config/owlen/config.toml`. This file is created on the first run and can be customized. You can also add custom themes in `~/.config/owlen/themes/`. +OWLEN stores its configuration in the standard platform-specific config directory: + +| Platform | Location | +|----------|----------| +| Linux | `~/.config/owlen/config.toml` | +| macOS | `~/Library/Application Support/owlen/config.toml` | +| Windows | `%APPDATA%\owlen\config.toml` | + +Use `owlen config path` to print the exact location on your machine and `owlen config doctor` to migrate a legacy config automatically. +You can also add custom themes alongside the config directory (e.g., `~/.config/owlen/themes/`). See the [themes/README.md](themes/README.md) for more details on theming. ## Roadmap -We are actively working on enhancing the code client, adding more providers (OpenAI, Anthropic), and improving the overall user experience. See the [Roadmap section in the old README](https://github.com/Owlibou/owlen/blob/main/README.md?plain=1#L295) for more details. +Upcoming milestones focus on feature parity with modern code assistants while keeping Owlen local-first: + +1. **Phase 11 – MCP client enhancements**: `owlen mcp add/list/remove`, resource references (`@github:issue://123`), and MCP prompt slash commands. +2. **Phase 12 – Approval & sandboxing**: Three-tier approval modes plus platform-specific sandboxes (Docker, `sandbox-exec`, Windows job objects). +3. **Phase 13 – Project documentation system**: Automatic `OWLEN.md` generation, contextual updates, and nested project support. +4. **Phase 15 – Provider expansion**: OpenAI, Anthropic, and other cloud providers layered onto the existing Ollama-first architecture. + +See `AGENTS.md` for the long-form roadmap and design notes. ## Contributing @@ -101,3 +125,4 @@ Contributions are highly welcome! Please see our **[Contributing Guide](CONTRIBU ## License This project is licensed under the GNU Affero General Public License v3.0. See the [LICENSE](LICENSE) file for details. +For commercial or proprietary integrations that cannot adopt AGPL, please reach out to the maintainers to discuss alternative licensing arrangements. diff --git a/crates/owlen-cli/Cargo.toml b/crates/owlen-cli/Cargo.toml index 9e36a07..84c0b34 100644 --- a/crates/owlen-cli/Cargo.toml +++ b/crates/owlen-cli/Cargo.toml @@ -26,6 +26,8 @@ required-features = ["chat-client"] owlen-core = { path = "../owlen-core" } # Optional TUI dependency, enabled by the "chat-client" feature. owlen-tui = { path = "../owlen-tui", optional = true } +owlen-ollama = { path = "../owlen-ollama" } +log = "0.4" # CLI framework clap = { version = "4.0", features = ["derive"] } @@ -45,3 +47,7 @@ serde_json = { workspace = true } regex = "1" thiserror = "1" dirs = "5" + +[dev-dependencies] +tokio = { workspace = true } +tokio-test = { workspace = true } diff --git a/crates/owlen-cli/build.rs b/crates/owlen-cli/build.rs new file mode 100644 index 0000000..16036bb --- /dev/null +++ b/crates/owlen-cli/build.rs @@ -0,0 +1,31 @@ +use std::process::Command; + +fn main() { + const MIN_VERSION: (u32, u32, u32) = (1, 75, 0); + + let rustc = std::env::var("RUSTC").unwrap_or_else(|_| "rustc".into()); + let output = Command::new(&rustc) + .arg("--version") + .output() + .expect("failed to invoke rustc"); + + let version_line = String::from_utf8_lossy(&output.stdout); + let version_str = version_line.split_whitespace().nth(1).unwrap_or("0.0.0"); + let sanitized = version_str.split('-').next().unwrap_or(version_str); + + let mut parts = sanitized + .split('.') + .map(|part| part.parse::().unwrap_or(0)); + let current = ( + parts.next().unwrap_or(0), + parts.next().unwrap_or(0), + parts.next().unwrap_or(0), + ); + + if current < MIN_VERSION { + panic!( + "owlen requires rustc {}.{}.{} or newer (found {version_line})", + MIN_VERSION.0, MIN_VERSION.1, MIN_VERSION.2 + ); + } +} diff --git a/crates/owlen-cli/src/main.rs b/crates/owlen-cli/src/main.rs index 210320f..75dfcf6 100644 --- a/crates/owlen-cli/src/main.rs +++ b/crates/owlen-cli/src/main.rs @@ -1,11 +1,17 @@ //! OWLEN CLI - Chat TUI client use anyhow::Result; -use clap::Parser; +use clap::{Parser, Subcommand}; +use owlen_core::config as core_config; use owlen_core::{ - mcp::remote_client::RemoteMcpClient, mode::Mode, session::SessionController, - storage::StorageManager, Provider, + config::{Config, McpMode}, + mcp::remote_client::RemoteMcpClient, + mode::Mode, + session::SessionController, + storage::StorageManager, + Provider, }; +use owlen_ollama::OllamaProvider; use owlen_tui::tui_controller::{TuiController, TuiRequest}; use owlen_tui::{config, ui, AppState, ChatApp, Event, EventHandler, SessionEvent}; use std::io; @@ -28,17 +34,216 @@ struct Args { /// Start in code mode (enables all tools) #[arg(long, short = 'c')] code: bool, + #[command(subcommand)] + command: Option, +} + +#[derive(Debug, Subcommand)] +enum OwlenCommand { + /// Inspect or upgrade configuration files + #[command(subcommand)] + Config(ConfigCommand), + /// Show manual steps for updating Owlen to the latest revision + Upgrade, +} + +#[derive(Debug, Subcommand)] +enum ConfigCommand { + /// Automatically upgrade legacy configuration values and ensure validity + Doctor, + /// Print the resolved configuration file path + Path, +} + +fn build_provider(cfg: &Config) -> anyhow::Result> { + match cfg.mcp.mode { + McpMode::RemotePreferred => { + let remote_result = if let Some(mcp_server) = cfg.mcp_servers.first() { + RemoteMcpClient::new_with_config(mcp_server) + } else { + RemoteMcpClient::new() + }; + + match remote_result { + Ok(client) => { + let provider: Arc = Arc::new(client); + Ok(provider) + } + Err(err) if cfg.mcp.allow_fallback => { + log::warn!( + "Remote MCP client unavailable ({}); falling back to local provider.", + err + ); + build_local_provider(cfg) + } + Err(err) => Err(anyhow::Error::from(err)), + } + } + McpMode::RemoteOnly => { + let mcp_server = cfg.mcp_servers.first().ok_or_else(|| { + anyhow::anyhow!( + "[[mcp_servers]] must be configured when [mcp].mode = \"remote_only\"" + ) + })?; + let client = RemoteMcpClient::new_with_config(mcp_server)?; + let provider: Arc = Arc::new(client); + Ok(provider) + } + McpMode::LocalOnly | McpMode::Legacy => build_local_provider(cfg), + McpMode::Disabled => Err(anyhow::anyhow!( + "MCP mode 'disabled' is not supported by the owlen TUI" + )), + } +} + +fn build_local_provider(cfg: &Config) -> anyhow::Result> { + let provider_name = cfg.general.default_provider.clone(); + let provider_cfg = cfg.provider(&provider_name).ok_or_else(|| { + anyhow::anyhow!(format!( + "No provider configuration found for '{provider_name}' in [providers]" + )) + })?; + + match provider_cfg.provider_type.as_str() { + "ollama" | "ollama-cloud" => { + let provider = OllamaProvider::from_config(provider_cfg, Some(&cfg.general))?; + let provider: Arc = Arc::new(provider); + Ok(provider) + } + other => Err(anyhow::anyhow!(format!( + "Provider type '{other}' is not supported in legacy/local MCP mode" + ))), + } +} + +fn run_command(command: OwlenCommand) -> Result<()> { + match command { + OwlenCommand::Config(config_cmd) => run_config_command(config_cmd), + OwlenCommand::Upgrade => { + println!("To update Owlen from source:\n git pull\n cargo install --path crates/owlen-cli --force"); + println!( + "If you installed from the AUR, use your package manager (e.g., yay -S owlen-git)." + ); + Ok(()) + } + } +} + +fn run_config_command(command: ConfigCommand) -> Result<()> { + match command { + ConfigCommand::Doctor => run_config_doctor(), + ConfigCommand::Path => { + let path = core_config::default_config_path(); + println!("{}", path.display()); + Ok(()) + } + } +} + +fn run_config_doctor() -> Result<()> { + let config_path = core_config::default_config_path(); + let existed = config_path.exists(); + let mut config = config::try_load_config().unwrap_or_else(|| Config::default()); + let mut changes = Vec::new(); + + if !existed { + changes.push("created configuration file from defaults".to_string()); + } + + if config + .providers + .get(&config.general.default_provider) + .is_none() + { + config.general.default_provider = "ollama".to_string(); + changes.push("default provider missing; reset to 'ollama'".to_string()); + } + + if config.providers.get("ollama").is_none() { + core_config::ensure_provider_config(&mut config, "ollama"); + changes.push("added default ollama provider configuration".to_string()); + } + + if config.providers.get("ollama-cloud").is_none() { + core_config::ensure_provider_config(&mut config, "ollama-cloud"); + changes.push("added default ollama-cloud provider configuration".to_string()); + } + + match config.mcp.mode { + McpMode::Legacy => { + config.mcp.mode = McpMode::LocalOnly; + config.mcp.warn_on_legacy = true; + changes.push("converted [mcp].mode = 'legacy' to 'local_only'".to_string()); + } + McpMode::RemoteOnly if config.mcp_servers.is_empty() => { + config.mcp.mode = McpMode::RemotePreferred; + config.mcp.allow_fallback = true; + changes.push( + "downgraded remote-only configuration to remote_preferred because no servers are defined" + .to_string(), + ); + } + McpMode::RemotePreferred if !config.mcp.allow_fallback && config.mcp_servers.is_empty() => { + config.mcp.allow_fallback = true; + changes.push( + "enabled [mcp].allow_fallback because no remote servers are configured".to_string(), + ); + } + _ => {} + } + + config.validate()?; + config::save_config(&config)?; + + if changes.is_empty() { + println!( + "Configuration already up to date: {}", + config_path.display() + ); + } else { + println!("Updated {}:", config_path.display()); + for change in changes { + println!(" - {change}"); + } + } + + Ok(()) +} + +fn warn_if_limited_terminal() { + const FALLBACK_TERM: &str = "unknown"; + let term = std::env::var("TERM").unwrap_or_else(|_| FALLBACK_TERM.to_string()); + let colorterm = std::env::var("COLORTERM").unwrap_or_default(); + let term_lower = term.to_lowercase(); + let color_lower = colorterm.to_lowercase(); + + let supports_256 = term_lower.contains("256color") + || color_lower.contains("truecolor") + || color_lower.contains("24bit"); + + if !supports_256 { + eprintln!( + "Warning: terminal '{}' may not fully support 256-color themes. \ + Consider using a terminal with truecolor support for the best experience.", + term + ); + } } #[tokio::main(flavor = "multi_thread")] async fn main() -> Result<()> { // Parse command-line arguments - let args = Args::parse(); - let initial_mode = if args.code { Mode::Code } else { Mode::Chat }; + let Args { code, command } = Args::parse(); + if let Some(command) = command { + return run_command(command); + } + let initial_mode = if code { Mode::Code } else { Mode::Chat }; // Set auto-consent for TUI mode to prevent blocking stdin reads std::env::set_var("OWLEN_AUTO_CONSENT", "1"); + warn_if_limited_terminal(); + let (tui_tx, _tui_rx) = mpsc::unbounded_channel::(); let tui_controller = Arc::new(TuiController::new(tui_tx)); @@ -46,15 +251,23 @@ async fn main() -> Result<()> { let mut cfg = config::try_load_config().unwrap_or_default(); // Disable encryption for CLI to avoid password prompts in this environment. cfg.privacy.encrypt_local_data = false; + cfg.validate()?; - // Create MCP LLM client as the provider (replaces direct OllamaProvider usage) - let provider: Arc = if let Some(mcp_server) = cfg.mcp_servers.first() { - // Use configured MCP server if available - Arc::new(RemoteMcpClient::new_with_config(mcp_server)?) - } else { - // Fall back to default MCP LLM server discovery - Arc::new(RemoteMcpClient::new()?) - }; + // Create provider according to MCP configuration (supports legacy/local fallback) + let provider = build_provider(&cfg)?; + + if let Err(err) = provider.health_check().await { + let hint = if matches!(cfg.mcp.mode, McpMode::RemotePreferred | McpMode::RemoteOnly) + && !cfg.mcp_servers.is_empty() + { + "Ensure the configured MCP server is running and reachable." + } else { + "Ensure Ollama is running (`ollama serve`) and reachable at the configured base_url." + }; + return Err(anyhow::anyhow!(format!( + "Provider health check failed: {err}. {hint}" + ))); + } let storage = Arc::new(StorageManager::new().await?); let controller = diff --git a/crates/owlen-cli/tests/agent_tests.rs b/crates/owlen-cli/tests/agent_tests.rs index 3e978cb..5e6a726 100644 --- a/crates/owlen-cli/tests/agent_tests.rs +++ b/crates/owlen-cli/tests/agent_tests.rs @@ -38,7 +38,7 @@ async fn test_react_parsing_tool_call() { async fn test_react_parsing_final_answer() { let executor = create_test_executor(); - let text = "THOUGHT: I have enough information now\nACTION: final_answer\nACTION_INPUT: The answer is 42\n"; + let text = "THOUGHT: I have enough information now\nFINAL_ANSWER: The answer is 42\n"; let result = executor.parse_response(text); @@ -244,8 +244,8 @@ fn create_test_executor() -> AgentExecutor { fn test_agent_config_defaults() { let config = AgentConfig::default(); - assert_eq!(config.max_iterations, 10); - assert_eq!(config.model, "ollama"); + assert_eq!(config.max_iterations, 15); + assert_eq!(config.model, "llama3.2:latest"); assert_eq!(config.temperature, Some(0.7)); // max_tool_calls field removed - agent now tracks iterations instead } diff --git a/crates/owlen-core/src/config.rs b/crates/owlen-core/src/config.rs index 3ec73a1..9dc6583 100644 --- a/crates/owlen-core/src/config.rs +++ b/crates/owlen-core/src/config.rs @@ -109,6 +109,8 @@ impl Config { let mut config: Config = toml::from_str(&content).map_err(|e| crate::Error::Config(e.to_string()))?; config.ensure_defaults(); + config.mcp.apply_backward_compat(); + config.validate()?; Ok(config) } else { Ok(Config::default()) @@ -117,6 +119,8 @@ impl Config { /// Persist configuration to disk pub fn save(&self, path: Option<&Path>) -> Result<()> { + self.validate()?; + let path = match path { Some(path) => path.to_path_buf(), None => default_config_path(), @@ -168,6 +172,87 @@ impl Config { ensure_provider_config(self, "ollama"); ensure_provider_config(self, "ollama-cloud"); } + + /// Validate configuration invariants and surface actionable error messages. + pub fn validate(&self) -> Result<()> { + self.validate_default_provider()?; + self.validate_mcp_settings()?; + self.validate_mcp_servers()?; + Ok(()) + } + + fn validate_default_provider(&self) -> Result<()> { + if self.general.default_provider.trim().is_empty() { + return Err(crate::Error::Config( + "general.default_provider must reference a configured provider".to_string(), + )); + } + + if self.provider(&self.general.default_provider).is_none() { + return Err(crate::Error::Config(format!( + "Default provider '{}' is not defined under [providers]", + self.general.default_provider + ))); + } + + Ok(()) + } + + fn validate_mcp_settings(&self) -> Result<()> { + match self.mcp.mode { + McpMode::RemoteOnly => { + if self.mcp_servers.is_empty() { + return Err(crate::Error::Config( + "[mcp].mode = 'remote_only' requires at least one [[mcp_servers]] entry" + .to_string(), + )); + } + } + McpMode::RemotePreferred => { + if !self.mcp.allow_fallback && self.mcp_servers.is_empty() { + return Err(crate::Error::Config( + "[mcp].allow_fallback = false requires at least one [[mcp_servers]] entry" + .to_string(), + )); + } + } + McpMode::Disabled => { + return Err(crate::Error::Config( + "[mcp].mode = 'disabled' is not supported by this build of Owlen".to_string(), + )); + } + _ => {} + } + + Ok(()) + } + + fn validate_mcp_servers(&self) -> Result<()> { + for server in &self.mcp_servers { + if server.name.trim().is_empty() { + return Err(crate::Error::Config( + "Each [[mcp_servers]] entry must include a non-empty name".to_string(), + )); + } + + if server.command.trim().is_empty() { + return Err(crate::Error::Config(format!( + "MCP server '{}' must define a command or endpoint", + server.name + ))); + } + + let transport = server.transport.to_lowercase(); + if !matches!(transport.as_str(), "stdio" | "http" | "websocket") { + return Err(crate::Error::Config(format!( + "Unknown MCP transport '{}' for server '{}'", + server.transport, server.name + ))); + } + } + + Ok(()) + } } fn default_ollama_provider_config() -> ProviderConfig { @@ -190,6 +275,10 @@ fn default_ollama_cloud_provider_config() -> ProviderConfig { /// Default configuration path with user home expansion pub fn default_config_path() -> PathBuf { + if let Some(config_dir) = dirs::config_dir() { + return config_dir.join("owlen").join("config.toml"); + } + PathBuf::from(shellexpand::tilde(DEFAULT_CONFIG_PATH).as_ref()) } @@ -239,11 +328,90 @@ impl Default for GeneralSettings { } } +/// Operating modes for the MCP subsystem. +#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)] +#[serde(rename_all = "snake_case")] +pub enum McpMode { + /// Prefer remote MCP servers when configured, but allow local fallback. + #[serde(alias = "enabled", alias = "auto")] + RemotePreferred, + /// Require a configured remote MCP server; fail if none are available. + RemoteOnly, + /// Always use the in-process MCP server for tooling. + #[serde(alias = "local")] + LocalOnly, + /// Compatibility shim for pre-v1.0 behaviour; treated as `local_only`. + Legacy, + /// Disable MCP entirely (not recommended). + Disabled, +} + +impl Default for McpMode { + fn default() -> Self { + Self::RemotePreferred + } +} + +impl McpMode { + /// Whether this mode requires a remote MCP server. + pub const fn requires_remote(self) -> bool { + matches!(self, Self::RemoteOnly) + } + + /// Whether this mode prefers to use a remote MCP server when available. + pub const fn prefers_remote(self) -> bool { + matches!(self, Self::RemotePreferred | Self::RemoteOnly) + } + + /// Whether this mode should operate purely locally. + pub const fn is_local(self) -> bool { + matches!(self, Self::LocalOnly | Self::Legacy) + } +} + /// MCP (Multi-Client-Provider) settings -#[derive(Debug, Clone, Serialize, Deserialize, Default)] +#[derive(Debug, Clone, Serialize, Deserialize)] pub struct McpSettings { - // MCP is now always enabled in v1.0+ - // Kept as a struct for future configuration options + /// Operating mode for MCP integration. + #[serde(default)] + pub mode: McpMode, + /// Allow falling back to the local MCP client when remote startup fails. + #[serde(default = "McpSettings::default_allow_fallback")] + pub allow_fallback: bool, + /// Emit a warning when the deprecated `legacy` mode is used. + #[serde(default = "McpSettings::default_warn_on_legacy")] + pub warn_on_legacy: bool, +} + +impl McpSettings { + const fn default_allow_fallback() -> bool { + true + } + + const fn default_warn_on_legacy() -> bool { + true + } + + fn apply_backward_compat(&mut self) { + if self.mode == McpMode::Legacy && self.warn_on_legacy { + log::warn!( + "MCP legacy mode detected. This mode will be removed in a future release; \ + switch to 'local_only' or 'remote_preferred' after verifying your setup." + ); + } + } +} + +impl Default for McpSettings { + fn default() -> Self { + let mut settings = Self { + mode: McpMode::default(), + allow_fallback: Self::default_allow_fallback(), + warn_on_legacy: Self::default_warn_on_legacy(), + }; + settings.apply_backward_compat(); + settings + } } /// Privacy controls governing network access and storage @@ -653,4 +821,48 @@ mod tests { assert_eq!(cloud.provider_type, "ollama-cloud"); assert_eq!(cloud.base_url.as_deref(), Some("https://ollama.com")); } + + #[test] + fn validate_rejects_missing_default_provider() { + let mut config = Config::default(); + config.general.default_provider = "does-not-exist".to_string(); + let result = config.validate(); + assert!( + matches!(result, Err(crate::Error::Config(message)) if message.contains("Default provider")) + ); + } + + #[test] + fn validate_rejects_remote_only_without_servers() { + let mut config = Config::default(); + config.mcp.mode = McpMode::RemoteOnly; + config.mcp_servers.clear(); + let result = config.validate(); + assert!( + matches!(result, Err(crate::Error::Config(message)) if message.contains("remote_only")) + ); + } + + #[test] + fn validate_rejects_unknown_transport() { + let mut config = Config::default(); + config.mcp_servers = vec![McpServerConfig { + name: "bad".into(), + command: "binary".into(), + transport: "udp".into(), + args: Vec::new(), + env: std::collections::HashMap::new(), + }]; + let result = config.validate(); + assert!( + matches!(result, Err(crate::Error::Config(message)) if message.contains("transport")) + ); + } + + #[test] + fn validate_accepts_local_only_configuration() { + let mut config = Config::default(); + config.mcp.mode = McpMode::LocalOnly; + assert!(config.validate().is_ok()); + } } diff --git a/crates/owlen-core/src/mcp/factory.rs b/crates/owlen-core/src/mcp/factory.rs index abc059f..f1de5a8 100644 --- a/crates/owlen-core/src/mcp/factory.rs +++ b/crates/owlen-core/src/mcp/factory.rs @@ -4,10 +4,11 @@ /// Supports switching between local (in-process) and remote (STDIO) execution modes. use super::client::McpClient; use super::{remote_client::RemoteMcpClient, LocalMcpClient}; -use crate::config::Config; +use crate::config::{Config, McpMode}; use crate::tools::registry::ToolRegistry; use crate::validation::SchemaValidator; -use crate::Result; +use crate::{Error, Result}; +use log::{info, warn}; use std::sync::Arc; /// Factory for creating MCP clients based on configuration @@ -30,30 +31,72 @@ impl McpClientFactory { } } - /// Create an MCP client based on the current configuration - /// - /// In v1.0+, MCP architecture is always enabled. If MCP servers are configured, - /// uses the first server; otherwise falls back to local in-process client. + /// Create an MCP client based on the current configuration. pub fn create(&self) -> Result> { - // Use the first configured MCP server, if any. - if let Some(server_cfg) = self.config.mcp_servers.first() { - match RemoteMcpClient::new_with_config(server_cfg) { - Ok(client) => Ok(Box::new(client)), - Err(e) => { - eprintln!("Warning: Failed to start remote MCP client '{}': {}. Falling back to local mode.", server_cfg.name, e); + match self.config.mcp.mode { + McpMode::Disabled => Err(Error::Config( + "MCP mode is set to 'disabled'; tooling cannot function in this configuration." + .to_string(), + )), + McpMode::LocalOnly | McpMode::Legacy => { + if matches!(self.config.mcp.mode, McpMode::Legacy) { + warn!("Using deprecated MCP legacy mode; consider switching to 'local_only'."); + } + Ok(Box::new(LocalMcpClient::new( + self.registry.clone(), + self.validator.clone(), + ))) + } + McpMode::RemoteOnly => { + let server_cfg = self.config.mcp_servers.first().ok_or_else(|| { + Error::Config( + "MCP mode 'remote_only' requires at least one entry in [[mcp_servers]]" + .to_string(), + ) + })?; + + RemoteMcpClient::new_with_config(server_cfg) + .map(|client| Box::new(client) as Box) + .map_err(|e| { + Error::Config(format!( + "Failed to start remote MCP client '{}': {e}", + server_cfg.name + )) + }) + } + McpMode::RemotePreferred => { + if let Some(server_cfg) = self.config.mcp_servers.first() { + match RemoteMcpClient::new_with_config(server_cfg) { + Ok(client) => { + info!( + "Connected to remote MCP server '{}' via {} transport.", + server_cfg.name, server_cfg.transport + ); + Ok(Box::new(client) as Box) + } + Err(e) if self.config.mcp.allow_fallback => { + warn!( + "Failed to start remote MCP client '{}': {}. Falling back to local tooling.", + server_cfg.name, e + ); + Ok(Box::new(LocalMcpClient::new( + self.registry.clone(), + self.validator.clone(), + ))) + } + Err(e) => Err(Error::Config(format!( + "Failed to start remote MCP client '{}': {e}. To allow fallback, set [mcp].allow_fallback = true.", + server_cfg.name + ))), + } + } else { + warn!("No MCP servers configured; using local MCP tooling."); Ok(Box::new(LocalMcpClient::new( self.registry.clone(), self.validator.clone(), ))) } } - } else { - // No servers configured – fall back to local client. - eprintln!("Warning: No MCP servers defined in config. Using local client."); - Ok(Box::new(LocalMcpClient::new( - self.registry.clone(), - self.validator.clone(), - ))) } } @@ -66,11 +109,10 @@ impl McpClientFactory { #[cfg(test)] mod tests { use super::*; + use crate::config::McpServerConfig; + use crate::Error; - #[test] - fn test_factory_creates_local_client_when_no_servers_configured() { - let config = Config::default(); - + fn build_factory(config: Config) -> McpClientFactory { let ui = Arc::new(crate::ui::NoOpUiController); let registry = Arc::new(ToolRegistry::new( Arc::new(tokio::sync::Mutex::new(config.clone())), @@ -78,10 +120,58 @@ mod tests { )); let validator = Arc::new(SchemaValidator::new()); - let factory = McpClientFactory::new(Arc::new(config), registry, validator); + McpClientFactory::new(Arc::new(config), registry, validator) + } + + #[test] + fn test_factory_creates_local_client_when_no_servers_configured() { + let config = Config::default(); + + let factory = build_factory(config); // Should create without error and fall back to local client let result = factory.create(); assert!(result.is_ok()); } + + #[test] + fn test_remote_only_without_servers_errors() { + let mut config = Config::default(); + config.mcp.mode = McpMode::RemoteOnly; + config.mcp_servers.clear(); + + let factory = build_factory(config); + let result = factory.create(); + assert!(matches!(result, Err(Error::Config(_)))); + } + + #[test] + fn test_remote_preferred_without_fallback_propagates_remote_error() { + let mut config = Config::default(); + config.mcp.mode = McpMode::RemotePreferred; + config.mcp.allow_fallback = false; + config.mcp_servers = vec![McpServerConfig { + name: "invalid".to_string(), + command: "nonexistent-mcp-server-binary".to_string(), + args: Vec::new(), + transport: "stdio".to_string(), + env: std::collections::HashMap::new(), + }]; + + let factory = build_factory(config); + let result = factory.create(); + assert!( + matches!(result, Err(Error::Config(message)) if message.contains("Failed to start remote MCP client")) + ); + } + + #[test] + fn test_legacy_mode_uses_local_client() { + let mut config = Config::default(); + config.mcp.mode = McpMode::Legacy; + + let factory = build_factory(config); + let result = factory.create(); + assert!(result.is_ok()); + } } diff --git a/crates/owlen-core/src/mcp/remote_client.rs b/crates/owlen-core/src/mcp/remote_client.rs index 7840c16..fccda6d 100644 --- a/crates/owlen-core/src/mcp/remote_client.rs +++ b/crates/owlen-core/src/mcp/remote_client.rs @@ -521,8 +521,14 @@ impl Provider for RemoteMcpClient { } async fn health_check(&self) -> Result<()> { - // Simple ping using initialize method. - let params = serde_json::json!({"protocol_version": PROTOCOL_VERSION}); - self.send_rpc("initialize", params).await.map(|_| ()) + let params = serde_json::json!({ + "protocol_version": PROTOCOL_VERSION, + "client_info": { + "name": "owlen", + "version": env!("CARGO_PKG_VERSION"), + }, + "capabilities": {} + }); + self.send_rpc(methods::INITIALIZE, params).await.map(|_| ()) } } diff --git a/crates/owlen-core/tests/prompt_server.rs b/crates/owlen-core/tests/prompt_server.rs index c14be02..10e411d 100644 --- a/crates/owlen-core/tests/prompt_server.rs +++ b/crates/owlen-core/tests/prompt_server.rs @@ -9,19 +9,34 @@ use std::path::PathBuf; #[tokio::test] async fn test_render_prompt_via_external_server() -> Result<()> { - // Locate the compiled prompt server binary. - let mut binary = PathBuf::from(env!("CARGO_MANIFEST_DIR")); - binary.pop(); // remove `tests` - binary.pop(); // remove `owlen-core` - binary.push("owlen-mcp-prompt-server"); - binary.push("target"); - binary.push("debug"); - binary.push("owlen-mcp-prompt-server"); - assert!( - binary.exists(), - "Prompt server binary not found: {:?}", - binary - ); + let manifest_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR")); + let workspace_root = manifest_dir + .parent() + .and_then(|p| p.parent()) + .expect("workspace root"); + + let candidates = [ + workspace_root + .join("target") + .join("debug") + .join("owlen-mcp-prompt-server"), + workspace_root + .join("owlen-mcp-prompt-server") + .join("target") + .join("debug") + .join("owlen-mcp-prompt-server"), + ]; + + let binary = if let Some(path) = candidates.iter().find(|path| path.exists()) { + path.clone() + } else { + eprintln!( + "Skipping prompt server integration test: binary not found. \ + Build it with `cargo build -p owlen-mcp-prompt-server`. Tried {:?}", + candidates + ); + return Ok(()); + }; let config = McpServerConfig { name: "prompt_server".into(), @@ -31,7 +46,16 @@ async fn test_render_prompt_via_external_server() -> Result<()> { env: std::collections::HashMap::new(), }; - let client = RemoteMcpClient::new_with_config(&config)?; + let client = match RemoteMcpClient::new_with_config(&config) { + Ok(client) => client, + Err(err) => { + eprintln!( + "Skipping prompt server integration test: failed to launch {} ({err})", + config.command + ); + return Ok(()); + } + }; let call = McpToolCall { name: "render_prompt".into(), diff --git a/crates/owlen-mcp-llm-server/src/main.rs b/crates/owlen-mcp-llm-server/src/main.rs index 8c38419..c3321b5 100644 --- a/crates/owlen-mcp-llm-server/src/main.rs +++ b/crates/owlen-mcp-llm-server/src/main.rs @@ -7,11 +7,13 @@ clippy::empty_line_after_outer_attr )] +use owlen_core::config::{ensure_provider_config, Config as OwlenConfig}; use owlen_core::mcp::protocol::{ methods, ErrorCode, InitializeParams, InitializeResult, RequestId, RpcError, RpcErrorResponse, RpcNotification, RpcRequest, RpcResponse, ServerCapabilities, ServerInfo, PROTOCOL_VERSION, }; use owlen_core::mcp::{McpToolCall, McpToolDescriptor, McpToolResponse}; +use owlen_core::provider::ProviderConfig; use owlen_core::types::{ChatParameters, ChatRequest, Message}; use owlen_core::Provider; use owlen_ollama::OllamaProvider; @@ -106,12 +108,44 @@ fn resources_list_descriptor() -> McpToolDescriptor { } } +fn provider_from_config() -> Result { + let mut config = OwlenConfig::load(None).unwrap_or_default(); + let provider_name = + env::var("OWLEN_PROVIDER").unwrap_or_else(|_| config.general.default_provider.clone()); + if config.provider(&provider_name).is_none() { + ensure_provider_config(&mut config, &provider_name); + } + let provider_cfg: ProviderConfig = + config.provider(&provider_name).cloned().ok_or_else(|| { + RpcError::internal_error(format!( + "Provider '{provider_name}' not found in configuration" + )) + })?; + + if provider_cfg.provider_type != "ollama" && provider_cfg.provider_type != "ollama-cloud" { + return Err(RpcError::internal_error(format!( + "Unsupported provider type '{}' for MCP LLM server", + provider_cfg.provider_type + ))); + } + + OllamaProvider::from_config(&provider_cfg, Some(&config.general)).map_err(|e| { + RpcError::internal_error(format!("Failed to init OllamaProvider from config: {}", e)) + }) +} + +fn create_provider() -> Result { + if let Ok(url) = env::var("OLLAMA_URL") { + return OllamaProvider::new(&url).map_err(|e| { + RpcError::internal_error(format!("Failed to init OllamaProvider: {}", e)) + }); + } + + provider_from_config() +} + async fn handle_generate_text(args: GenerateTextArgs) -> Result { - // Create provider with Ollama URL from environment or default to localhost - let ollama_url = - env::var("OLLAMA_URL").unwrap_or_else(|_| "http://localhost:11434".to_string()); - let provider = OllamaProvider::new(&ollama_url) - .map_err(|e| RpcError::internal_error(format!("Failed to init OllamaProvider: {}", e)))?; + let provider = create_provider()?; let parameters = ChatParameters { temperature: args.temperature, @@ -191,12 +225,7 @@ async fn handle_request(req: &RpcRequest) -> Result { } // New method to list available Ollama models via the provider. methods::MODELS_LIST => { - // Reuse the provider instance for model listing. - let ollama_url = - env::var("OLLAMA_URL").unwrap_or_else(|_| "http://localhost:11434".to_string()); - let provider = OllamaProvider::new(&ollama_url).map_err(|e| { - RpcError::internal_error(format!("Failed to init OllamaProvider: {}", e)) - })?; + let provider = create_provider()?; let models = provider .list_models() .await diff --git a/crates/owlen-ollama/src/lib.rs b/crates/owlen-ollama/src/lib.rs index 069cad8..b432bc6 100644 --- a/crates/owlen-ollama/src/lib.rs +++ b/crates/owlen-ollama/src/lib.rs @@ -10,7 +10,7 @@ use owlen_core::{ }, Result, }; -use reqwest::{header, Client, Url}; +use reqwest::{header, Client, StatusCode, Url}; use serde::{Deserialize, Serialize}; use serde_json::{json, Value}; use std::collections::HashMap; @@ -188,6 +188,22 @@ fn mask_authorization(value: &str) -> String { } } +fn map_reqwest_error(action: &str, err: reqwest::Error) -> owlen_core::Error { + if err.is_timeout() { + return owlen_core::Error::Timeout(format!("{action} request timed out")); + } + + if err.is_connect() { + return owlen_core::Error::Network(format!("{action} connection failed: {err}")); + } + + if err.is_request() || err.is_body() { + return owlen_core::Error::Network(format!("{action} request failed: {err}")); + } + + owlen_core::Error::Network(format!("{action} unexpected error: {err}")) +} + /// Ollama provider implementation with enhanced configuration and caching #[derive(Debug)] pub struct OllamaProvider { @@ -385,6 +401,12 @@ impl OllamaProvider { .or_else(|| env_var_non_empty("OLLAMA_API_KEY")) .or_else(|| env_var_non_empty("OLLAMA_CLOUD_API_KEY")); + if matches!(mode, OllamaMode::Cloud) && options.api_key.is_none() { + return Err(owlen_core::Error::Auth( + "Ollama Cloud requires an API key. Set providers.ollama-cloud.api_key or the OLLAMA_API_KEY environment variable.".to_string(), + )); + } + if let Some(general) = general { options = options.with_general(general); } @@ -431,6 +453,46 @@ impl OllamaProvider { } } + fn map_http_failure( + &self, + action: &str, + status: StatusCode, + detail: String, + model: Option<&str>, + ) -> owlen_core::Error { + match status { + StatusCode::NOT_FOUND => { + if let Some(model) = model { + owlen_core::Error::InvalidInput(format!( + "Model '{model}' was not found at {}. Verify the model name or load it with `ollama pull`.", + self.base_url + )) + } else { + owlen_core::Error::InvalidInput(format!( + "{action} returned 404 from {}: {detail}", + self.base_url + )) + } + } + StatusCode::UNAUTHORIZED | StatusCode::FORBIDDEN => owlen_core::Error::Auth( + format!( + "Ollama rejected the request ({status}): {detail}. Check your API key and account permissions." + ), + ), + StatusCode::BAD_REQUEST => owlen_core::Error::InvalidInput(format!( + "{action} rejected by Ollama ({status}): {detail}" + )), + StatusCode::SERVICE_UNAVAILABLE | StatusCode::GATEWAY_TIMEOUT => { + owlen_core::Error::Timeout(format!( + "Ollama {action} timed out ({status}). The model may still be loading." + )) + } + _ => owlen_core::Error::Network(format!( + "Ollama {action} failed ({status}): {detail}" + )), + } + } + fn convert_message(message: &Message) -> OllamaMessage { let role = match message.role { Role::User => "user".to_string(), @@ -511,19 +573,18 @@ impl OllamaProvider { .apply_auth(self.client.get(&url)) .send() .await - .map_err(|e| owlen_core::Error::Network(format!("Failed to fetch models: {e}")))?; + .map_err(|e| map_reqwest_error("model listing", e))?; if !response.status().is_success() { - let code = response.status(); + let status = response.status(); let error = parse_error_body(response).await; - return Err(owlen_core::Error::Network(format!( - "Ollama model listing failed ({code}): {error}" - ))); + return Err(self.map_http_failure("model listing", status, error, None)); } - let body = response.text().await.map_err(|e| { - owlen_core::Error::Network(format!("Failed to read models response: {e}")) - })?; + let body = response + .text() + .await + .map_err(|e| map_reqwest_error("model listing", e))?; let ollama_response: OllamaModelsResponse = serde_json::from_str(&body).map_err(owlen_core::Error::Serialization)?; @@ -598,6 +659,8 @@ impl Provider for OllamaProvider { tools, } = request; + let model_id = model.clone(); + let messages: Vec = messages.iter().map(Self::convert_message).collect(); let options = Self::build_options(parameters); @@ -642,19 +705,18 @@ impl Provider for OllamaProvider { .client .execute(request) .await - .map_err(|e| owlen_core::Error::Network(format!("Chat request failed: {e}")))?; + .map_err(|e| map_reqwest_error("chat", e))?; if !response.status().is_success() { - let code = response.status(); + let status = response.status(); let error = parse_error_body(response).await; - return Err(owlen_core::Error::Network(format!( - "Ollama chat failed ({code}): {error}" - ))); + return Err(self.map_http_failure("chat", status, error, Some(&model_id))); } - let body = response.text().await.map_err(|e| { - owlen_core::Error::Network(format!("Failed to read chat response: {e}")) - })?; + let body = response + .text() + .await + .map_err(|e| map_reqwest_error("chat", e))?; let mut ollama_response: OllamaChatResponse = serde_json::from_str(&body).map_err(owlen_core::Error::Serialization)?; @@ -701,6 +763,8 @@ impl Provider for OllamaProvider { tools, } = request; + let model_id = model.clone(); + let messages: Vec = messages.iter().map(Self::convert_message).collect(); let options = Self::build_options(parameters); @@ -739,17 +803,16 @@ impl Provider for OllamaProvider { self.debug_log_request("chat_stream", &request, debug_body.as_deref()); - let response = - self.client.execute(request).await.map_err(|e| { - owlen_core::Error::Network(format!("Streaming request failed: {e}")) - })?; + let response = self + .client + .execute(request) + .await + .map_err(|e| map_reqwest_error("chat_stream", e))?; if !response.status().is_success() { - let code = response.status(); + let status = response.status(); let error = parse_error_body(response).await; - return Err(owlen_core::Error::Network(format!( - "Ollama streaming chat failed ({code}): {error}" - ))); + return Err(self.map_http_failure("chat_stream", status, error, Some(&model_id))); } let (tx, rx) = mpsc::unbounded_channel(); @@ -850,15 +913,14 @@ impl Provider for OllamaProvider { .apply_auth(self.client.get(&url)) .send() .await - .map_err(|e| owlen_core::Error::Network(format!("Health check failed: {e}")))?; + .map_err(|e| map_reqwest_error("health check", e))?; if response.status().is_success() { Ok(()) } else { - Err(owlen_core::Error::Network(format!( - "Ollama health check failed: HTTP {}", - response.status() - ))) + let status = response.status(); + let detail = parse_error_body(response).await; + Err(self.map_http_failure("health check", status, detail, None)) } } @@ -913,6 +975,7 @@ async fn parse_error_body(response: reqwest::Response) -> String { #[cfg(test)] mod tests { use super::*; + use owlen_core::provider::ProviderConfig; #[test] fn normalizes_local_base_url_and_infers_scheme() { @@ -991,4 +1054,47 @@ mod tests { ); std::env::remove_var("OWLEN_TEST_KEY_UNBRACED"); } + + #[test] + fn map_http_failure_returns_invalid_input_for_missing_model() { + let provider = + OllamaProvider::with_options(OllamaOptions::new("http://localhost:11434")).unwrap(); + let error = provider.map_http_failure( + "chat", + StatusCode::NOT_FOUND, + "missing".into(), + Some("phantom-model"), + ); + match error { + owlen_core::Error::InvalidInput(message) => { + assert!(message.contains("phantom-model")); + } + other => panic!("expected InvalidInput, got {other:?}"), + } + } + + #[test] + fn cloud_provider_without_api_key_is_rejected() { + let previous_api_key = std::env::var("OLLAMA_API_KEY").ok(); + let previous_cloud_key = std::env::var("OLLAMA_CLOUD_API_KEY").ok(); + std::env::remove_var("OLLAMA_API_KEY"); + std::env::remove_var("OLLAMA_CLOUD_API_KEY"); + + let config = ProviderConfig { + provider_type: "ollama-cloud".to_string(), + base_url: Some("https://ollama.com".to_string()), + api_key: None, + extra: std::collections::HashMap::new(), + }; + + let result = OllamaProvider::from_config(&config, None); + assert!(matches!(result, Err(owlen_core::Error::Auth(_)))); + + if let Some(value) = previous_api_key { + std::env::set_var("OLLAMA_API_KEY", value); + } + if let Some(value) = previous_cloud_key { + std::env::set_var("OLLAMA_CLOUD_API_KEY", value); + } + } } diff --git a/crates/owlen-tui/src/chat_app.rs b/crates/owlen-tui/src/chat_app.rs index b2c22c5..b2d728e 100644 --- a/crates/owlen-tui/src/chat_app.rs +++ b/crates/owlen-tui/src/chat_app.rs @@ -211,7 +211,7 @@ impl ChatApp { let app = Self { controller, mode: InputMode::Normal, - status: "Ready".to_string(), + status: "Normal mode • Press F1 for help".to_string(), error: None, models: Vec::new(), available_providers: Vec::new(), @@ -745,6 +745,12 @@ impl ChatApp { } } + if matches!(key.code, KeyCode::F(1)) { + self.mode = InputMode::Help; + self.status = "Help".to_string(); + return Ok(AppState::Running); + } + match self.mode { InputMode::Normal => { // Handle multi-key sequences first @@ -2315,7 +2321,7 @@ impl ChatApp { } fn reset_status(&mut self) { - self.status = "Ready".to_string(); + self.status = "Normal mode • Press F1 for help".to_string(); self.error = None; } diff --git a/docs/CHANGELOG_v1.0.md b/docs/CHANGELOG_v1.0.md index 2189f4b..45ff35c 100644 --- a/docs/CHANGELOG_v1.0.md +++ b/docs/CHANGELOG_v1.0.md @@ -6,35 +6,39 @@ Version 1.0.0 marks the completion of the MCP-only architecture migration, remov ## Breaking Changes -### 1. Removed Legacy MCP Mode +### 1. MCP mode defaults to remote-preferred (legacy retained) **What changed:** -- The `[mcp]` section in `config.toml` no longer accepts a `mode` setting -- The `McpMode` enum has been removed from the configuration system -- MCP architecture is now always enabled - no option to disable it +- The `[mcp]` section in `config.toml` keeps a `mode` setting but now defaults to `remote_preferred`. +- Legacy values such as `"legacy"` map to the `local_only` runtime and emit a warning instead of failing. +- New toggles (`allow_fallback`, `warn_on_legacy`) give administrators explicit control over graceful degradation. **Migration:** -```diff -# old config.toml +```toml [mcp] --mode = "legacy" # or "enabled" +mode = "remote_preferred" +allow_fallback = true +warn_on_legacy = true +``` -# new config.toml +To opt out of remote MCP servers temporarily: + +```toml [mcp] -# MCP is always enabled - no mode setting needed +mode = "local_only" # or "legacy" for backwards compatibility ``` **Code changes:** -- `crates/owlen-core/src/config.rs`: Removed `McpMode` enum, simplified `McpSettings` -- `crates/owlen-core/src/mcp/factory.rs`: Removed legacy mode handling from `McpClientFactory` -- All provider calls now go through MCP clients exclusively +- `crates/owlen-core/src/config.rs`: Reintroduced `McpMode` with compatibility aliases and new settings. +- `crates/owlen-core/src/mcp/factory.rs`: Respects the configured mode, including strict remote-only and local-only paths. +- `crates/owlen-cli/src/main.rs`: Chooses between remote MCP providers and the direct Ollama provider based on the mode. ### 2. Updated MCP Client Factory **What changed:** -- `McpClientFactory::create()` no longer checks for legacy mode -- Automatically falls back to `LocalMcpClient` when no external MCP servers are configured -- Improved error messages for server connection failures +- `McpClientFactory::create()` now enforces the configured mode (`remote_only`, `remote_preferred`, `local_only`, or `legacy`). +- Helpful configuration errors are surfaced when remote-only mode lacks servers or fallback is disabled. +- CLI users in `local_only`/`legacy` mode receive the direct Ollama provider instead of a failing MCP stub. **Before:** ```rust @@ -46,11 +50,11 @@ match self.config.mcp.mode { **After:** ```rust -// Always use MCP architecture -if let Some(server_cfg) = self.config.mcp_servers.first() { - // Try remote server, fallback to local on error -} else { - // Use local client +match self.config.mcp.mode { + McpMode::RemoteOnly => start_remote()?, + McpMode::RemotePreferred => try_remote_or_fallback()?, + McpMode::LocalOnly | McpMode::Legacy => use_local(), + McpMode::Disabled => bail!("unsupported"), } ``` @@ -79,8 +83,8 @@ Added comprehensive mock implementations for testing: - Rollback procedures if needed 2. **Updated Configuration Reference** - - Removed references to legacy mode - - Clarified MCP server configuration + - Documented the new `remote_preferred` default and fallback controls + - Clarified MCP server configuration with remote-only expectations - Added examples for local and cloud Ollama usage ## Bug Fixes @@ -92,9 +96,9 @@ Added comprehensive mock implementations for testing: ### Configuration System -- `McpSettings` struct now only serves as a placeholder for future MCP-specific settings -- Removed `McpMode` enum entirely -- Default configuration no longer includes mode setting +- `McpSettings` gained `mode`, `allow_fallback`, and `warn_on_legacy` knobs. +- `McpMode` enum restored with explicit aliases for historical values. +- Default configuration now prefers remote servers but still works out-of-the-box with local tooling. ### MCP Factory @@ -113,16 +117,15 @@ No performance regressions expected. The MCP architecture may actually improve p ### Backwards Compatibility -**Breaking:** Configuration files with `mode = "legacy"` will need to be updated: -- The setting is ignored (logs a warning in future versions) -- User config has been automatically updated if using standard path +- Existing `mode = "legacy"` configs keep working (now mapped to `local_only`) but trigger a startup warning. +- Users who relied on remote-only behaviour should set `mode = "remote_only"` explicitly. ### Forward Compatibility -The `McpSettings` struct is kept for future expansion: -- Can add MCP-specific timeouts -- Can add connection pooling settings -- Can add server selection strategies +The `McpSettings` struct now provides a stable surface to grow additional MCP-specific options such as: +- Connection pooling strategies +- Remote health-check cadence +- Adaptive retry controls ## Testing diff --git a/docs/configuration.md b/docs/configuration.md index 55e16f0..1d85221 100644 --- a/docs/configuration.md +++ b/docs/configuration.md @@ -4,9 +4,15 @@ Owlen uses a TOML file for configuration, allowing you to customize its behavior ## File Location -By default, Owlen looks for its configuration file at `~/.config/owlen/config.toml`. +Owlen resolves the configuration path using the platform-specific config directory: -A default configuration file is created on the first run if one doesn't exist. +| Platform | Location | +|----------|----------| +| Linux | `~/.config/owlen/config.toml` | +| macOS | `~/Library/Application Support/owlen/config.toml` | +| Windows | `%APPDATA%\owlen\config.toml` | + +Run `owlen config path` to print the exact location on your machine. A default configuration file is created on the first run if one doesn't exist, and `owlen config doctor` can migrate/repair legacy files automatically. ## Configuration Precedence @@ -16,6 +22,8 @@ Configuration values are resolved in the following order: 2. **Configuration File**: Any values set in `config.toml` will override the defaults. 3. **Command-Line Arguments / In-App Changes**: Any settings changed during runtime (e.g., via the `:theme` or `:model` commands) will override the configuration file for the current session. Some of these changes (like theme and model) are automatically saved back to the configuration file. +Validation runs whenever the configuration is loaded or saved. Expect descriptive `Configuration error` messages if, for example, `remote_only` mode is set without any `[[mcp_servers]]` entries. + --- ## General Settings (`[general]`) @@ -118,6 +126,7 @@ base_url = "https://ollama.com" - `api_key` (string, optional) The API key to use for authentication, if required. + **Note:** `ollama-cloud` now requires an API key; Owlen will refuse to start the provider without one and will hint at the missing configuration. - `extra` (table, optional) Any additional, provider-specific parameters can be added here. diff --git a/docs/migration-guide.md b/docs/migration-guide.md index 1348c54..57a6076 100644 --- a/docs/migration-guide.md +++ b/docs/migration-guide.md @@ -12,26 +12,32 @@ As Owlen is currently in its alpha phase (pre-v1.0), breaking changes may occur ### Breaking Changes -#### 1. MCP Mode is Now Always Enabled +#### 1. MCP Mode now defaults to `remote_preferred` -The `[mcp]` section in `config.toml` previously had a `mode` setting that could be set to `"legacy"` or `"enabled"`. In v1.0+, MCP architecture is **always enabled** and the `mode` setting has been removed. +The `[mcp]` section in `config.toml` still accepts a `mode` setting, but the default behaviour has changed. If you previously relied on `mode = "legacy"`, you can keep that line – the value now maps to the `local_only` runtime with a compatibility warning instead of breaking outright. New installs default to the safer `remote_preferred` mode, which attempts to use any configured external MCP server and automatically falls back to the local in-process tooling when permitted. + +**Supported values (v1.0+):** + +| Value | Behaviour | +|--------------------|-----------| +| `remote_preferred` | Default. Use the first configured `[[mcp_servers]]`, fall back to local if `allow_fallback = true`. +| `remote_only` | Require a configured server; the CLI will error if it cannot start. +| `local_only` | Force the built-in MCP client and the direct Ollama provider. +| `legacy` | Alias for `local_only` kept for compatibility (emits a warning). +| `disabled` | Not supported by the TUI; intended for headless tooling. + +You can additionally control the automatic fallback behaviour: -**Old configuration (v0.x):** ```toml [mcp] -mode = "legacy" # or "enabled" +mode = "remote_preferred" +allow_fallback = true +warn_on_legacy = true ``` -**New configuration (v1.0+):** -```toml -[mcp] -# MCP is now always enabled - no mode setting needed -# This section is kept for future MCP-specific configuration options -``` +#### 2. Direct Provider Access Removed (with opt-in compatibility) -#### 2. Direct Provider Access Removed - -In v0.x, Owlen could make direct HTTP calls to Ollama and other providers when in "legacy" mode. In v1.0+, **all LLM interactions go through MCP servers**. +In v0.x, Owlen could make direct HTTP calls to Ollama when in "legacy" mode. The default v1.0 behaviour keeps all LLM interactions behind MCP, but choosing `mode = "local_only"` or `mode = "legacy"` now reinstates the direct Ollama provider while still keeping the MCP tooling stack available locally. ### What Changed Under the Hood @@ -49,17 +55,26 @@ The v1.0 architecture implements the full 10-phase migration plan: ### Migration Steps -#### Step 1: Update Your Configuration +#### Step 1: Review Your MCP Configuration -Edit `~/.config/owlen/config.toml`: +Edit `~/.config/owlen/config.toml` and ensure the `[mcp]` section reflects how you want to run Owlen: -**Remove the `mode` line:** -```diff +```toml [mcp] --mode = "legacy" +mode = "remote_preferred" +allow_fallback = true ``` -The `[mcp]` section can now be empty or contain future MCP-specific settings. +If you encounter issues with remote servers, you can temporarily switch to: + +```toml +[mcp] +mode = "local_only" # or "legacy" for backwards compatibility +``` + +You will see a warning on startup when `legacy` is used so you remember to migrate later. + +**Quick fix:** run `owlen config doctor` to apply these defaults automatically and validate your configuration file. #### Step 2: Verify Provider Configuration diff --git a/docs/platform-support.md b/docs/platform-support.md new file mode 100644 index 0000000..1a925f2 --- /dev/null +++ b/docs/platform-support.md @@ -0,0 +1,24 @@ +# Platform Support + +Owlen targets all major desktop platforms; the table below summarises the current level of coverage and how to verify builds locally. + +| Platform | Status | Notes | +|----------|--------|-------| +| Linux | ✅ Primary | CI and local development happen on Linux. `owlen config doctor` and provider health checks are exercised every run. | +| macOS | ✅ Supported | Tested via local builds. Uses the macOS application support directory for configuration and session data. | +| Windows | ⚠️ Preview | Uses platform-specific paths and compiles via `scripts/check-windows.sh`. Runtime testing is limited—feedback welcome. | + +### Verifying Windows compatibility from Linux/macOS + +```bash +./scripts/check-windows.sh +``` + +The script installs the `x86_64-pc-windows-gnu` target if necessary and runs `cargo check` against it. Run it before submitting PRs that may impact cross-platform support. + +### Troubleshooting + +- Provider startup failures now surface clear hints (e.g. "Ensure Ollama is running"). +- The TUI warns when the active terminal lacks 256-colour capability; consider switching to a true-colour terminal for the best experience. + +Refer to `docs/troubleshooting.md` for additional guidance. diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index 79a004c..f989caa 100644 --- a/docs/troubleshooting.md +++ b/docs/troubleshooting.md @@ -9,10 +9,17 @@ If you are unable to connect to a local Ollama instance, here are a few things t 1. **Is Ollama running?** Make sure the Ollama service is active. You can usually check this with `ollama list`. 2. **Is the address correct?** By default, Owlen tries to connect to `http://localhost:11434`. If your Ollama instance is running on a different address or port, you will need to configure it in your `config.toml` file. 3. **Firewall issues:** Ensure that your firewall is not blocking the connection. +4. **Health check warnings:** Owlen now performs a provider health check on startup. If it fails, the error message will include a hint (either "start owlen-mcp-llm-server" or "ensure Ollama is running"). Resolve the hint and restart. ## Model Not Found Errors -If you get a "model not found" error, it means that the model you are trying to use is not available. For local providers like Ollama, you can use `ollama list` to see the models you have downloaded. Make sure the model name in your Owlen configuration matches one of the available models. +Owlen surfaces this as `InvalidInput: Model '' was not found`. + +1. **Local models:** Run `ollama list` to confirm the model name (e.g., `llama3:8b`). Use `ollama pull ` if it is missing. +2. **Ollama Cloud:** Names may differ from local installs. Double-check https://ollama.com/models and remove `-cloud` suffixes. +3. **Fallback:** Switch to `mode = "local_only"` temporarily in `[mcp]` if the remote server is slow to update. + +Fix the name in your configuration file or choose a model from the UI (`:model`). ## Terminal Compatibility Issues @@ -26,9 +33,18 @@ Owlen is built with `ratatui`, which supports most modern terminals. However, if If Owlen is not behaving as you expect, there might be an issue with your configuration file. -- **Location:** The configuration file is typically located at `~/.config/owlen/config.toml`. +- **Location:** Run `owlen config path` to print the exact location (Linux, macOS, or Windows). Owlen now follows platform defaults instead of hard-coding `~/.config`. - **Syntax:** The configuration file is in TOML format. Make sure the syntax is correct. - **Values:** Check that the values for your models, providers, and other settings are correct. +- **Automation:** Run `owlen config doctor` to migrate legacy settings (`mode = "legacy"`, missing providers) and validate the file before launching the TUI. + +## Ollama Cloud Authentication Errors + +If you see `Auth` errors when using the `ollama-cloud` provider: + +1. Ensure `providers.ollama-cloud.api_key` is set **or** export `OLLAMA_API_KEY` / `OLLAMA_CLOUD_API_KEY` before launching Owlen. +2. Confirm the key has access to the requested models. +3. Avoid pasting extra quotes or whitespace into the config file—`owlen config doctor` will normalise the entry for you. ## Performance Tuning diff --git a/scripts/check-windows.sh b/scripts/check-windows.sh new file mode 100644 index 0000000..5727fd5 --- /dev/null +++ b/scripts/check-windows.sh @@ -0,0 +1,13 @@ +#!/usr/bin/env bash + +set -euo pipefail + +if ! rustup target list --installed | grep -q "x86_64-pc-windows-gnu"; then + echo "Installing Windows GNU target..." + rustup target add x86_64-pc-windows-gnu +fi + +echo "Running cargo check for Windows (x86_64-pc-windows-gnu)..." +cargo check --target x86_64-pc-windows-gnu + +echo "Windows compatibility check completed successfully."