9 Commits

Author SHA1 Message Date
3271697f6b feat(cli): add provider management and model listing commands and integrate them into the CLI 2025-10-16 23:35:38 +02:00
cbfef5a5df docs: add provider onboarding guide and update documentation for ProviderManager, health worker, and multi‑provider architecture 2025-10-16 23:01:57 +02:00
52efd5f341 test(app): add generation and message unit tests
- New test suite in `crates/owlen-tui/tests` covering generation orchestration, message variant round‑trip, and background worker status updates.
- Extend `model_picker` to filter models by matching keywords against capabilities as well as provider names.
- Update `state_tests` to assert that suggestion lists are non‑empty instead of checking prefix matches.
- Re‑export `background_worker` from `app::mod.rs` for external consumption.
2025-10-16 22:56:00 +02:00
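The capability-keyword filtering added to `model_picker` can be sketched roughly like this (the struct fields and helper name are illustrative assumptions, not the crate's actual API):

```rust
// A model matches when the query hits its name, its provider, or any
// capability keyword -- a simplified stand-in for the model_picker filter.
struct PickerModel {
    name: String,
    provider: String,
    capabilities: Vec<String>,
}

fn matches_query(model: &PickerModel, query: &str) -> bool {
    let q = query.to_ascii_lowercase();
    model.name.to_ascii_lowercase().contains(&q)
        || model.provider.to_ascii_lowercase().contains(&q)
        || model
            .capabilities
            .iter()
            .any(|c| c.to_ascii_lowercase().contains(&q))
}
```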
200cdbc4bd test(provider): add integration tests for ProviderManager using MockProvider
- Introduce `MockProvider` with configurable models, health status, generation handlers, and error simulation.
- Add common test utilities and integration tests covering provider registration, model aggregation, request routing, error handling, and health refresh.
2025-10-16 22:41:33 +02:00
8525819ab4 feat(app): introduce UiRuntime trait and RuntimeApp run loop, add crossterm event conversion, refactor CLI to use RuntimeApp for unified UI handling 2025-10-16 22:21:33 +02:00
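The `UiRuntime` idea can be pictured as a run loop that is generic over a small trait, so the terminal UI and a test double share the same loop. The trait shape below is an assumption made for the sketch; the real trait is async and also handles crossterm event conversion:

```rust
// Synchronous sketch of a UiRuntime-style run loop. The loop only knows the
// trait, never a concrete terminal, so it is trivially testable.
enum UiEvent {
    Tick,
    Quit,
}

trait UiRuntime {
    fn draw(&mut self);
    fn next_event(&mut self) -> UiEvent;
}

/// Draws, then waits for an event, until Quit; returns frames drawn.
fn run_loop<R: UiRuntime>(runtime: &mut R) -> usize {
    let mut frames = 0;
    loop {
        runtime.draw();
        frames += 1;
        if let UiEvent::Quit = runtime.next_event() {
            return frames;
        }
    }
}
```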
bcd52d526c feat(app): introduce MessageState trait and handler for AppMessage dispatch
- Add `MessageState` trait defining UI reaction callbacks for generation lifecycle, model updates, provider status, resize, and tick events.
- Implement `App::handle_message` to route `AppMessage` variants to the provided `MessageState` and determine exit condition.
- Add `handler.rs` module with the trait and dispatch logic; re-export `MessageState` in `app/mod.rs`.
- Extend `ActiveGeneration` with a public `request_id` getter and clean up dead code annotations.
- Implement empty `MessageState` for `ChatApp` to integrate UI handling.
- Add `log` crate dependency for warning messages.
2025-10-16 21:58:26 +02:00
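The dispatch described above might look roughly like this in simplified, synchronous form (the variant set and callback names are illustrative; the real `AppMessage` and `MessageState` are richer and async):

```rust
// Sketch of App::handle_message routing AppMessage variants to a MessageState
// and reporting whether the app should exit.
#[derive(Debug)]
enum AppMessage {
    GenerationChunk { request_id: u64, text: String },
    GenerationComplete { request_id: u64 },
    Quit,
}

trait MessageState {
    fn on_chunk(&mut self, request_id: u64, text: &str);
    fn on_complete(&mut self, request_id: u64);
}

struct App;

impl App {
    /// Routes a message to the state; returns `true` when the app should exit.
    fn handle_message<S: MessageState>(&mut self, msg: AppMessage, state: &mut S) -> bool {
        match msg {
            AppMessage::GenerationChunk { request_id, text } => {
                state.on_chunk(request_id, &text);
                false
            }
            AppMessage::GenerationComplete { request_id } => {
                state.on_complete(request_id);
                false
            }
            AppMessage::Quit => true,
        }
    }
}
```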
7effade1d3 refactor(tui): extract model selector UI into dedicated widget module
Added `widgets::model_picker` containing the full model picker rendering logic and moved related helper functions there. Updated `ui.rs` to use `render_model_picker` and removed the now‑duplicate model selector implementation. This cleanly separates UI concerns and improves code reuse.
2025-10-16 21:39:50 +02:00
dc0fee2ee3 feat(app): add background worker for provider health checks
Introduce a `worker` module with `background_worker` that periodically refreshes provider health and emits status updates via the app's message channel. Add `spawn_background_worker` method to `App` for launching the worker as a Tokio task.
2025-10-16 21:01:08 +02:00
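The worker pattern can be illustrated with std threads and a channel; this is a sketch under stated assumptions (the real worker is a Tokio task that polls provider health until cancelled, while the `StatusUpdate` type and the bounded tick count here are invented for the example):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

#[derive(Debug, PartialEq)]
enum StatusUpdate {
    Healthy(String),
}

/// Periodically emits status updates on the app's channel; stops when the
/// receiver is dropped (i.e., the app shut down).
fn spawn_background_worker(
    tx: mpsc::Sender<StatusUpdate>,
    period: Duration,
    ticks: usize, // bounded for the sketch; the real worker loops until cancelled
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for _ in 0..ticks {
            thread::sleep(period);
            // A real check would ping each registered provider; assume one
            // healthy provider here.
            if tx.send(StatusUpdate::Healthy("ollama_local".into())).is_err() {
                break;
            }
        }
    })
}
```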
ea04a25ed6 feat(app): add generation orchestration, messaging, and core App struct
Introduce `App` with provider manager, unbounded message channel, and active generation tracking.
Add `AppMessage` enum covering UI events, generation lifecycle (start, chunk, complete, error), model refresh, and provider status updates.
Implement `start_generation` to spawn asynchronous generation tasks, stream results, handle errors, and abort any previous generation.
Expose the new module via `pub mod app` in the crate root.
2025-10-16 20:39:53 +02:00
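The abort-any-previous-generation bookkeeping can be sketched with a request-id counter; `GenerationTracker` is a hypothetical name, and this simplification omits the part where the real `App` also aborts the superseded Tokio task:

```rust
struct ActiveGeneration {
    request_id: u64,
}

// Each start_generation bumps the id; chunks from a superseded generation
// no longer match the active id and are dropped.
struct GenerationTracker {
    next_id: u64,
    active: Option<ActiveGeneration>,
}

impl GenerationTracker {
    fn new() -> Self {
        Self { next_id: 0, active: None }
    }

    /// Starts a generation, implicitly superseding any previous one.
    fn start_generation(&mut self) -> u64 {
        self.next_id += 1;
        self.active = Some(ActiveGeneration { request_id: self.next_id });
        self.next_id
    }

    /// Only chunks from the currently active generation are accepted.
    fn accept_chunk(&self, request_id: u64) -> bool {
        matches!(&self.active, Some(g) if g.request_id == request_id)
    }
}
```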
30 changed files with 3018 additions and 660 deletions

View File

@@ -11,6 +11,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Comprehensive documentation suite including guides for architecture, configuration, testing, and more.
- Rustdoc examples for core components like `Provider` and `SessionController`.
- Module-level documentation for `owlen-tui`.
- Provider integration tests (`crates/owlen-providers/tests`) covering registration, routing, and health status handling for the new `ProviderManager`.
- TUI message and generation tests that exercise the non-blocking event loop, background worker, and message dispatch.
- Ollama integration can now talk to Ollama Cloud when an API key is configured.
- Ollama provider will also read `OLLAMA_API_KEY` / `OLLAMA_CLOUD_API_KEY` environment variables when no key is stored in the config.
- `owlen config doctor`, `owlen config path`, and `owlen upgrade` CLI commands to automate migrations and surface manual update steps.
@@ -26,6 +28,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Input panel respects a new `ui.input_max_rows` setting so long prompts expand predictably before scrolling kicks in.
- Command palette offers fuzzy `:model` filtering and `:provider` completions for fast switching.
- Message rendering caches wrapped lines and throttles streaming redraws to keep the TUI responsive on long sessions.
- Model picker badges now inspect provider capabilities so vision/audio/thinking models surface the correct icons even when descriptions are sparse.
- Chat history honors `ui.scrollback_lines`, trimming older rows to keep the TUI responsive and surfacing a "↓ New messages" badge whenever updates land off-screen.
### Changed
@@ -38,6 +41,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `config.toml` now carries a schema version (`1.2.0`) and is migrated automatically; deprecated keys such as `agent.max_tool_calls` trigger warnings instead of hard failures.
- Model selector navigation (Tab/Shift-Tab) now switches between local and cloud tabs while preserving selection state.
- Header displays the active model together with its provider (e.g., `Model (Provider)`), improving clarity when swapping backends.
- Documentation refreshed to cover the message handler architecture, the background health worker, multi-provider configuration, and the new provider onboarding checklist.
---

View File

@@ -9,10 +9,11 @@
## What Is OWLEN?
-OWLEN is a Rust-powered, terminal-first interface for interacting with local large
-language models. It provides a responsive chat workflow that runs against
-[Ollama](https://ollama.com/) with a focus on developer productivity, vim-style navigation,
-and seamless session management—all without leaving your terminal.
+OWLEN is a Rust-powered, terminal-first interface for interacting with local and cloud
+language models. It provides a responsive chat workflow that now routes through a
+multi-provider manager—handling local Ollama, Ollama Cloud, and future MCP-backed providers—
+with a focus on developer productivity, vim-style navigation, and seamless session
+management—all without leaving your terminal.
## Alpha Status
@@ -32,8 +33,9 @@ The OWLEN interface features a clean, multi-panel layout with vim-inspired navig
- **Session Management**: Save, load, and manage conversations.
- **Code Side Panel**: Switch to code mode (`:mode code`) and open files inline with `:open <path>` for LLM-assisted coding.
- **Theming System**: 10 built-in themes and support for custom themes.
-- **Modular Architecture**: Extensible provider system (Ollama today, additional providers on the roadmap).
-- **Dual-Source Model Picker**: Merge local and cloud Ollama models with live availability indicators so you can see at a glance which catalogues are reachable.
+- **Modular Architecture**: Extensible provider system orchestrated by the new `ProviderManager`, ready for additional MCP-backed providers.
+- **Dual-Source Model Picker**: Merge local and cloud catalogues with real-time availability badges powered by the background health worker.
+- **Non-Blocking UI Loop**: Asynchronous generation tasks and provider health checks run off-thread, keeping the TUI responsive even while streaming long replies.
- **Guided Setup**: `owlen config doctor` upgrades legacy configs and verifies your environment in seconds.
## Security & Privacy
@@ -110,7 +112,8 @@ For more detailed information, please refer to the following documents:
- **[CHANGELOG.md](CHANGELOG.md)**: A log of changes for each version.
- **[docs/architecture.md](docs/architecture.md)**: An overview of the project's architecture.
- **[docs/troubleshooting.md](docs/troubleshooting.md)**: Help with common issues.
-- **[docs/provider-implementation.md](docs/provider-implementation.md)**: A guide for adding new providers.
+- **[docs/provider-implementation.md](docs/provider-implementation.md)**: Trait-level details for implementing providers.
+- **[docs/adding-providers.md](docs/adding-providers.md)**: Step-by-step checklist for wiring a provider into the multi-provider architecture and test suite.
- **[docs/platform-support.md](docs/platform-support.md)**: Current OS support matrix and cross-check instructions.
## Configuration

View File

@@ -24,6 +24,7 @@ required-features = ["chat-client"]
[dependencies]
owlen-core = { path = "../owlen-core" }
owlen-providers = { path = "../owlen-providers" }
# Optional TUI dependency, enabled by the "chat-client" feature.
owlen-tui = { path = "../owlen-tui", optional = true }
log = { workspace = true }

View File

@@ -0,0 +1,4 @@
//! Command implementations for the `owlen` CLI.
pub mod cloud;
pub mod providers;

View File

@@ -0,0 +1,652 @@
use std::collections::HashMap;
use std::sync::Arc;
use anyhow::{Result, anyhow};
use clap::{Args, Subcommand};
use owlen_core::ProviderConfig;
use owlen_core::config::{self as core_config, Config};
use owlen_core::provider::{
AnnotatedModelInfo, ModelProvider, ProviderManager, ProviderStatus, ProviderType,
};
use owlen_core::storage::StorageManager;
use owlen_providers::ollama::{OllamaCloudProvider, OllamaLocalProvider};
use owlen_tui::config as tui_config;
use super::cloud;
/// CLI subcommands for provider management.
#[derive(Debug, Subcommand)]
pub enum ProvidersCommand {
/// List configured providers and their metadata.
List,
/// Run health checks against providers.
Status {
/// Optional provider identifier to check.
#[arg(value_name = "PROVIDER")]
provider: Option<String>,
},
/// Enable a provider in the configuration.
Enable {
/// Provider identifier to enable.
provider: String,
},
/// Disable a provider in the configuration.
Disable {
/// Provider identifier to disable.
provider: String,
},
}
/// Arguments for the `owlen models` command.
#[derive(Debug, Default, Args)]
pub struct ModelsArgs {
/// Restrict output to a specific provider.
#[arg(long)]
pub provider: Option<String>,
}
pub async fn run_providers_command(command: ProvidersCommand) -> Result<()> {
match command {
ProvidersCommand::List => list_providers(),
ProvidersCommand::Status { provider } => status_providers(provider.as_deref()).await,
ProvidersCommand::Enable { provider } => toggle_provider(&provider, true),
ProvidersCommand::Disable { provider } => toggle_provider(&provider, false),
}
}
pub async fn run_models_command(args: ModelsArgs) -> Result<()> {
list_models(args.provider.as_deref()).await
}
fn list_providers() -> Result<()> {
let config = tui_config::try_load_config().unwrap_or_default();
let default_provider = canonical_provider_id(&config.general.default_provider);
let mut rows = Vec::new();
for (id, cfg) in &config.providers {
let type_label = describe_provider_type(id, cfg);
let auth_label = describe_auth(cfg, requires_auth(id, cfg));
let enabled = if cfg.enabled { "yes" } else { "no" };
let default = if id == &default_provider { "*" } else { "" };
let base = cfg
.base_url
.as_ref()
.map(|value| value.trim().to_string())
.unwrap_or_else(|| "-".to_string());
rows.push(ProviderListRow {
id: id.to_string(),
type_label,
enabled: enabled.to_string(),
default: default.to_string(),
auth: auth_label,
base_url: base,
});
}
rows.sort_by(|a, b| a.id.cmp(&b.id));
let id_width = rows
.iter()
.map(|row| row.id.len())
.max()
.unwrap_or(8)
.max("Provider".len());
let enabled_width = rows
.iter()
.map(|row| row.enabled.len())
.max()
.unwrap_or(7)
.max("Enabled".len());
let default_width = rows
.iter()
.map(|row| row.default.len())
.max()
.unwrap_or(7)
.max("Default".len());
let type_width = rows
.iter()
.map(|row| row.type_label.len())
.max()
.unwrap_or(4)
.max("Type".len());
let auth_width = rows
.iter()
.map(|row| row.auth.len())
.max()
.unwrap_or(4)
.max("Auth".len());
println!(
"{:<id_width$} {:<enabled_width$} {:<default_width$} {:<type_width$} {:<auth_width$} Base URL",
"Provider",
"Enabled",
"Default",
"Type",
"Auth",
id_width = id_width,
enabled_width = enabled_width,
default_width = default_width,
type_width = type_width,
auth_width = auth_width,
);
for row in rows {
println!(
"{:<id_width$} {:<enabled_width$} {:<default_width$} {:<type_width$} {:<auth_width$} {}",
row.id,
row.enabled,
row.default,
row.type_label,
row.auth,
row.base_url,
id_width = id_width,
enabled_width = enabled_width,
default_width = default_width,
type_width = type_width,
auth_width = auth_width,
);
}
Ok(())
}
async fn status_providers(filter: Option<&str>) -> Result<()> {
let mut config = tui_config::try_load_config().unwrap_or_default();
let filter = filter.map(canonical_provider_id);
verify_provider_filter(&config, filter.as_deref())?;
let storage = Arc::new(StorageManager::new().await?);
cloud::load_runtime_credentials(&mut config, storage.clone()).await?;
let manager = ProviderManager::new(&config);
let records = register_enabled_providers(&manager, &config, filter.as_deref()).await?;
let health = manager.refresh_health().await;
let mut rows = Vec::new();
for record in records {
let status = health.get(&record.id).copied();
rows.push(ProviderStatusRow::from_record(record, status));
}
rows.sort_by(|a, b| a.id.cmp(&b.id));
print_status_rows(&rows);
Ok(())
}
async fn list_models(filter: Option<&str>) -> Result<()> {
let mut config = tui_config::try_load_config().unwrap_or_default();
let filter = filter.map(canonical_provider_id);
verify_provider_filter(&config, filter.as_deref())?;
let storage = Arc::new(StorageManager::new().await?);
cloud::load_runtime_credentials(&mut config, storage.clone()).await?;
let manager = ProviderManager::new(&config);
let records = register_enabled_providers(&manager, &config, filter.as_deref()).await?;
let models = manager
.list_all_models()
.await
.map_err(|err| anyhow!(err))?;
let statuses = manager.provider_statuses().await;
print_models(records, models, statuses);
Ok(())
}
fn verify_provider_filter(config: &Config, filter: Option<&str>) -> Result<()> {
if let Some(filter) = filter {
if !config.providers.contains_key(filter) {
return Err(anyhow!(
"Provider '{}' is not defined in configuration.",
filter
));
}
}
Ok(())
}
fn toggle_provider(provider: &str, enable: bool) -> Result<()> {
let mut config = tui_config::try_load_config().unwrap_or_default();
let canonical = canonical_provider_id(provider);
if canonical.is_empty() {
return Err(anyhow!("Provider name cannot be empty."));
}
let previous_default = config.general.default_provider.clone();
let previous_fallback_enabled = config.providers.get("ollama_local").map(|cfg| cfg.enabled);
let previous_enabled;
{
let entry = core_config::ensure_provider_config_mut(&mut config, &canonical);
previous_enabled = entry.enabled;
if previous_enabled == enable {
println!(
"Provider '{}' is already {}.",
canonical,
if enable { "enabled" } else { "disabled" }
);
return Ok(());
}
entry.enabled = enable;
}
if !enable && config.general.default_provider == canonical {
if let Some(candidate) = choose_fallback_provider(&config, &canonical) {
config.general.default_provider = candidate.clone();
println!(
"Default provider set to '{}' because '{}' was disabled.",
candidate, canonical
);
} else {
let entry = core_config::ensure_provider_config_mut(&mut config, "ollama_local");
entry.enabled = true;
config.general.default_provider = "ollama_local".to_string();
println!(
"Enabled 'ollama_local' and made it default because no other providers are active."
);
}
}
if let Err(err) = config.validate() {
{
let entry = core_config::ensure_provider_config_mut(&mut config, &canonical);
entry.enabled = previous_enabled;
}
config.general.default_provider = previous_default;
if let Some(enabled) = previous_fallback_enabled {
if let Some(entry) = config.providers.get_mut("ollama_local") {
entry.enabled = enabled;
}
}
return Err(anyhow!(err));
}
tui_config::save_config(&config).map_err(|err| anyhow!(err))?;
println!(
"{} provider '{}'.",
if enable { "Enabled" } else { "Disabled" },
canonical
);
Ok(())
}
fn choose_fallback_provider(config: &Config, exclude: &str) -> Option<String> {
if exclude != "ollama_local" {
if let Some(cfg) = config.providers.get("ollama_local") {
if cfg.enabled {
return Some("ollama_local".to_string());
}
}
}
let mut candidates: Vec<String> = config
.providers
.iter()
.filter(|(id, cfg)| cfg.enabled && id.as_str() != exclude)
.map(|(id, _)| id.clone())
.collect();
candidates.sort();
candidates.into_iter().next()
}
async fn register_enabled_providers(
manager: &ProviderManager,
config: &Config,
filter: Option<&str>,
) -> Result<Vec<ProviderRecord>> {
let default_provider = canonical_provider_id(&config.general.default_provider);
let mut records = Vec::new();
for (id, cfg) in &config.providers {
if let Some(filter) = filter {
if id != filter {
continue;
}
}
let mut record = ProviderRecord::from_config(id, cfg, id == &default_provider);
if !cfg.enabled {
records.push(record);
continue;
}
match instantiate_provider(id, cfg) {
Ok(provider) => {
let metadata = provider.metadata().clone();
record.provider_type_label = provider_type_label(metadata.provider_type);
record.requires_auth = metadata.requires_auth;
record.metadata = Some(metadata);
manager.register_provider(provider).await;
}
Err(err) => {
record.registration_error = Some(err.to_string());
}
}
records.push(record);
}
records.sort_by(|a, b| a.id.cmp(&b.id));
Ok(records)
}
fn instantiate_provider(id: &str, cfg: &ProviderConfig) -> Result<Arc<dyn ModelProvider>> {
let kind = cfg.provider_type.trim().to_ascii_lowercase();
if kind == "ollama" || id == "ollama_local" {
let provider = OllamaLocalProvider::new(cfg.base_url.clone(), None, None)
.map_err(|err| anyhow!(err))?;
Ok(Arc::new(provider))
} else if kind == "ollama_cloud" || id == "ollama_cloud" {
let provider = OllamaCloudProvider::new(cfg.base_url.clone(), cfg.api_key.clone(), None)
.map_err(|err| anyhow!(err))?;
Ok(Arc::new(provider))
} else {
Err(anyhow!(
"Provider '{}' uses unsupported type '{}'.",
id,
if kind.is_empty() {
"unknown"
} else {
kind.as_str()
}
))
}
}
fn describe_provider_type(id: &str, cfg: &ProviderConfig) -> String {
if cfg.provider_type.trim().eq_ignore_ascii_case("ollama") || id.ends_with("_local") {
"Local".to_string()
} else if cfg
.provider_type
.trim()
.eq_ignore_ascii_case("ollama_cloud")
|| id.contains("cloud")
{
"Cloud".to_string()
} else {
"Custom".to_string()
}
}
fn requires_auth(id: &str, cfg: &ProviderConfig) -> bool {
cfg.api_key.is_some()
|| cfg.api_key_env.is_some()
|| matches!(id, "ollama_cloud" | "openai" | "anthropic")
}
fn describe_auth(cfg: &ProviderConfig, required: bool) -> String {
if let Some(env) = cfg
.api_key_env
.as_ref()
.map(|value| value.trim())
.filter(|value| !value.is_empty())
{
format!("env:{env}")
} else if cfg
.api_key
.as_ref()
.map(|value| !value.trim().is_empty())
.unwrap_or(false)
{
"config".to_string()
} else if required {
"required".to_string()
} else {
"-".to_string()
}
}
fn canonical_provider_id(raw: &str) -> String {
let trimmed = raw.trim().to_ascii_lowercase();
if trimmed.is_empty() {
return trimmed;
}
match trimmed.as_str() {
"ollama" | "ollama-local" => "ollama_local".to_string(),
"ollama_cloud" | "ollama-cloud" => "ollama_cloud".to_string(),
other => other.replace('-', "_"),
}
}
fn provider_type_label(provider_type: ProviderType) -> String {
match provider_type {
ProviderType::Local => "Local".to_string(),
ProviderType::Cloud => "Cloud".to_string(),
}
}
fn provider_status_strings(status: ProviderStatus) -> (&'static str, &'static str) {
match status {
ProviderStatus::Available => ("OK", "available"),
ProviderStatus::Unavailable => ("ERR", "unavailable"),
ProviderStatus::RequiresSetup => ("SETUP", "requires setup"),
}
}
fn print_status_rows(rows: &[ProviderStatusRow]) {
let id_width = rows
.iter()
.map(|row| row.id.len())
.max()
.unwrap_or(8)
.max("Provider".len());
let type_width = rows
.iter()
.map(|row| row.provider_type.len())
.max()
.unwrap_or(4)
.max("Type".len());
let status_width = rows
.iter()
.map(|row| row.indicator.len() + 1 + row.status_label.len())
.max()
.unwrap_or(6)
.max("State".len());
println!(
"{:<id_width$} {:<4} {:<type_width$} {:<status_width$} Details",
"Provider",
"Def",
"Type",
"State",
id_width = id_width,
type_width = type_width,
status_width = status_width,
);
for row in rows {
let def = if row.default_provider { "*" } else { "-" };
let details = row.detail.as_deref().unwrap_or("-");
println!(
"{:<id_width$} {:<4} {:<type_width$} {:<status_width$} {}",
row.id,
def,
row.provider_type,
format!("{} {}", row.indicator, row.status_label),
details,
id_width = id_width,
type_width = type_width,
status_width = status_width,
);
}
}
fn print_models(
records: Vec<ProviderRecord>,
models: Vec<AnnotatedModelInfo>,
statuses: HashMap<String, ProviderStatus>,
) {
let mut grouped: HashMap<String, Vec<AnnotatedModelInfo>> = HashMap::new();
for info in models {
grouped
.entry(info.provider_id.clone())
.or_default()
.push(info);
}
for record in records {
let status = statuses.get(&record.id).copied().or_else(|| {
if record.metadata.is_some() && record.registration_error.is_none() && record.enabled {
Some(ProviderStatus::Unavailable)
} else {
None
}
});
let (indicator, label, status_value) = if !record.enabled {
("-", "disabled", None)
} else if record.registration_error.is_some() {
("ERR", "error", None)
} else if let Some(status) = status {
let (indicator, label) = provider_status_strings(status);
(indicator, label, Some(status))
} else {
("?", "unknown", None)
};
let title = if record.default_provider {
format!("{} (default)", record.id)
} else {
record.id.clone()
};
println!(
"{} {} [{}] {}",
indicator, title, record.provider_type_label, label
);
if let Some(err) = &record.registration_error {
println!(" error: {}", err);
println!();
continue;
}
if !record.enabled {
println!(" provider disabled");
println!();
continue;
}
if let Some(entries) = grouped.get(&record.id) {
let mut entries = entries.clone();
entries.sort_by(|a, b| a.model.name.cmp(&b.model.name));
if entries.is_empty() {
println!(" (no models reported)");
} else {
for entry in entries {
let mut line = format!(" - {}", entry.model.name);
if let Some(description) = &entry.model.description {
if !description.trim().is_empty() {
line.push_str(&format!(" - {}", description.trim()));
}
}
println!("{}", line);
}
}
} else {
println!(" (no models reported)");
}
if let Some(ProviderStatus::RequiresSetup) = status_value {
if record.requires_auth {
println!(" configure provider credentials or API key");
}
}
println!();
}
}
struct ProviderListRow {
id: String,
type_label: String,
enabled: String,
default: String,
auth: String,
base_url: String,
}
struct ProviderRecord {
id: String,
enabled: bool,
default_provider: bool,
provider_type_label: String,
requires_auth: bool,
registration_error: Option<String>,
metadata: Option<owlen_core::provider::ProviderMetadata>,
}
impl ProviderRecord {
fn from_config(id: &str, cfg: &ProviderConfig, default_provider: bool) -> Self {
Self {
id: id.to_string(),
enabled: cfg.enabled,
default_provider,
provider_type_label: describe_provider_type(id, cfg),
requires_auth: requires_auth(id, cfg),
registration_error: None,
metadata: None,
}
}
}
struct ProviderStatusRow {
id: String,
provider_type: String,
default_provider: bool,
indicator: String,
status_label: String,
detail: Option<String>,
}
impl ProviderStatusRow {
fn from_record(record: ProviderRecord, status: Option<ProviderStatus>) -> Self {
if !record.enabled {
return Self {
id: record.id,
provider_type: record.provider_type_label,
default_provider: record.default_provider,
indicator: "-".to_string(),
status_label: "disabled".to_string(),
detail: None,
};
}
if let Some(err) = record.registration_error {
return Self {
id: record.id,
provider_type: record.provider_type_label,
default_provider: record.default_provider,
indicator: "ERR".to_string(),
status_label: "error".to_string(),
detail: Some(err),
};
}
if let Some(status) = status {
let (indicator, label) = provider_status_strings(status);
return Self {
id: record.id,
provider_type: record.provider_type_label,
default_provider: record.default_provider,
indicator: indicator.to_string(),
status_label: label.to_string(),
detail: if matches!(status, ProviderStatus::RequiresSetup) && record.requires_auth {
Some("credentials required".to_string())
} else {
None
},
};
}
Self {
id: record.id,
provider_type: record.provider_type_label,
default_provider: record.default_provider,
indicator: "?".to_string(),
status_label: "unknown".to_string(),
detail: None,
}
}
}

View File

@@ -2,13 +2,16 @@
//! OWLEN CLI - Chat TUI client
-mod cloud;
+mod commands;
mod mcp;
use anyhow::{Result, anyhow};
use async_trait::async_trait;
use clap::{Parser, Subcommand};
-use cloud::{CloudCommand, load_runtime_credentials, set_env_var};
+use commands::{
+cloud::{CloudCommand, load_runtime_credentials, run_cloud_command, set_env_var},
+providers::{ModelsArgs, ProvidersCommand, run_models_command, run_providers_command},
+};
use mcp::{McpCommand, run_mcp_command};
use owlen_core::config as core_config;
use owlen_core::{
@@ -16,19 +19,19 @@ use owlen_core::{
config::{Config, McpMode},
mcp::remote_client::RemoteMcpClient,
mode::Mode,
+provider::ProviderManager,
-providers::OllamaProvider,
session::SessionController,
storage::StorageManager,
types::{ChatRequest, ChatResponse, Message, ModelInfo},
};
use owlen_tui::tui_controller::{TuiController, TuiRequest};
-use owlen_tui::{AppState, ChatApp, Event, EventHandler, SessionEvent, config, ui};
+use owlen_tui::{ChatApp, SessionEvent, app::App as RuntimeApp, config, ui};
use std::any::Any;
use std::borrow::Cow;
use std::io;
use std::sync::Arc;
use tokio::sync::mpsc;
use tokio_util::sync::CancellationToken;
use crossterm::{
event::{DisableBracketedPaste, DisableMouseCapture, EnableBracketedPaste, EnableMouseCapture},
@@ -58,6 +61,11 @@ enum OwlenCommand {
/// Manage Ollama Cloud credentials
#[command(subcommand)]
Cloud(CloudCommand),
/// Manage model providers
#[command(subcommand)]
Providers(ProvidersCommand),
/// List models exposed by configured providers
Models(ModelsArgs),
/// Manage MCP server registrations
#[command(subcommand)]
Mcp(McpCommand),
@@ -136,7 +144,9 @@ fn build_local_provider(cfg: &Config) -> anyhow::Result<Arc<dyn Provider>> {
async fn run_command(command: OwlenCommand) -> Result<()> {
match command {
OwlenCommand::Config(config_cmd) => run_config_command(config_cmd),
-OwlenCommand::Cloud(cloud_cmd) => cloud::run_cloud_command(cloud_cmd).await,
+OwlenCommand::Cloud(cloud_cmd) => run_cloud_command(cloud_cmd).await,
+OwlenCommand::Providers(provider_cmd) => run_providers_command(provider_cmd).await,
+OwlenCommand::Models(args) => run_models_command(args).await,
OwlenCommand::Mcp(mcp_cmd) => run_mcp_command(mcp_cmd),
OwlenCommand::Upgrade => {
println!(
@@ -184,6 +194,68 @@ fn run_config_doctor() -> Result<()> {
}
}
if let Some(entry) = config.providers.get_mut("ollama_local") {
if entry.provider_type != "ollama" {
entry.provider_type = "ollama".to_string();
changes.push("normalised providers.ollama_local.provider_type to 'ollama'".to_string());
}
}
let mut ensure_default_enabled = true;
if !config.providers.values().any(|cfg| cfg.enabled) {
let entry = core_config::ensure_provider_config_mut(&mut config, "ollama_local");
if !entry.enabled {
entry.enabled = true;
changes.push("no providers were enabled; enabled 'ollama_local'".to_string());
}
if config.general.default_provider != "ollama_local" {
config.general.default_provider = "ollama_local".to_string();
changes.push(
"default provider reset to 'ollama_local' because no providers were enabled"
.to_string(),
);
}
ensure_default_enabled = false;
}
if ensure_default_enabled {
let default_id = config.general.default_provider.clone();
if let Some(default_cfg) = config.providers.get(&default_id) {
if !default_cfg.enabled {
if let Some(new_default) = config
.providers
.iter()
.filter(|(id, cfg)| cfg.enabled && *id != &default_id)
.map(|(id, _)| id.clone())
.min()
{
config.general.default_provider = new_default.clone();
changes.push(format!(
"default provider '{default_id}' was disabled; switched default to '{new_default}'"
));
} else {
let entry =
core_config::ensure_provider_config_mut(&mut config, "ollama_local");
if !entry.enabled {
entry.enabled = true;
changes.push(
"enabled 'ollama_local' because default provider was disabled"
.to_string(),
);
}
if config.general.default_provider != "ollama_local" {
config.general.default_provider = "ollama_local".to_string();
changes.push(
"default provider reset to 'ollama_local' because previous default was disabled"
.to_string(),
);
}
}
}
}
}
match config.mcp.mode {
McpMode::Legacy => {
config.mcp.mode = McpMode::LocalOnly;
@@ -407,6 +479,8 @@ async fn main() -> Result<()> {
let controller =
SessionController::new(provider, cfg, storage.clone(), tui_controller, false).await?;
let provider_manager = Arc::new(ProviderManager::default());
let mut runtime = RuntimeApp::new(provider_manager);
let (mut app, mut session_rx) = ChatApp::new(controller).await?;
app.initialize_models().await?;
if let Some(notice) = offline_notice {
@@ -417,12 +491,6 @@ async fn main() -> Result<()> {
// Set the initial mode
app.set_mode(initial_mode).await;
-// Event infrastructure
-let cancellation_token = CancellationToken::new();
-let (event_tx, event_rx) = mpsc::unbounded_channel();
-let event_handler = EventHandler::new(event_tx, cancellation_token.clone());
-let event_handle = tokio::spawn(async move { event_handler.run().await });
// Terminal setup
enable_raw_mode()?;
let mut stdout = io::stdout();
@@ -435,11 +503,7 @@ async fn main() -> Result<()> {
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
-let result = run_app(&mut terminal, &mut app, event_rx, &mut session_rx).await;
-// Shutdown
-cancellation_token.cancel();
-event_handle.await?;
+let result = run_app(&mut terminal, &mut runtime, &mut app, &mut session_rx).await;
// Persist configuration updates (e.g., selected model)
config::save_config(&app.config())?;
@@ -462,58 +526,17 @@ async fn main() -> Result<()> {
async fn run_app(
terminal: &mut Terminal<CrosstermBackend<io::Stdout>>,
+runtime: &mut RuntimeApp,
app: &mut ChatApp,
-mut event_rx: mpsc::UnboundedReceiver<Event>,
session_rx: &mut mpsc::UnboundedReceiver<SessionEvent>,
) -> Result<()> {
-let stream_draw_interval = tokio::time::Duration::from_millis(50);
-let idle_tick = tokio::time::Duration::from_millis(100);
-let mut last_draw = tokio::time::Instant::now() - stream_draw_interval;
+let mut render = |terminal: &mut Terminal<CrosstermBackend<io::Stdout>>,
+state: &mut ChatApp|
+-> Result<()> {
+terminal.draw(|f| ui::render_chat(f, state))?;
+Ok(())
+};
-loop {
-// Advance loading animation frame
-app.advance_loading_animation();
-let streaming_active = app.streaming_count() > 0;
-let draw_due = if streaming_active {
-last_draw.elapsed() >= stream_draw_interval
-} else {
-true
-};
-if draw_due {
-terminal.draw(|f| ui::render_chat(f, app))?;
-last_draw = tokio::time::Instant::now();
-}
-// Process any pending LLM requests AFTER UI has been drawn
-if let Err(e) = app.process_pending_llm_request().await {
-eprintln!("Error processing LLM request: {}", e);
-}
-// Process any pending tool executions AFTER UI has been drawn
-if let Err(e) = app.process_pending_tool_execution().await {
-eprintln!("Error processing tool execution: {}", e);
-}
-let sleep_duration = if streaming_active {
-stream_draw_interval
-.checked_sub(last_draw.elapsed())
-.unwrap_or_else(|| tokio::time::Duration::from_millis(0))
-} else {
-idle_tick
-};
-tokio::select! {
-Some(event) = event_rx.recv() => {
-if let AppState::Quit = app.handle_event(event).await? {
-return Ok(());
-}
-}
-Some(session_event) = session_rx.recv() => {
-app.handle_session_event(session_event).await?;
-}
-_ = tokio::time::sleep(sleep_duration) => {}
-}
-}
+runtime.run(terminal, app, session_rx, &mut render).await?;
Ok(())
}

@@ -0,0 +1,106 @@
use std::sync::Arc;
use async_trait::async_trait;
use futures::stream::{self, StreamExt};
use owlen_core::Result as CoreResult;
use owlen_core::provider::{
GenerateChunk, GenerateRequest, GenerateStream, ModelInfo, ModelProvider, ProviderMetadata,
ProviderStatus, ProviderType,
};
pub struct MockProvider {
metadata: ProviderMetadata,
models: Vec<ModelInfo>,
status: ProviderStatus,
#[allow(clippy::type_complexity)]
generate_handler: Option<Arc<dyn Fn(GenerateRequest) -> Vec<GenerateChunk> + Send + Sync>>,
generate_error: Option<Arc<dyn Fn() -> owlen_core::Error + Send + Sync>>,
}
impl MockProvider {
pub fn new(id: &str) -> Self {
let metadata = ProviderMetadata::new(
id,
format!("Mock Provider ({})", id),
ProviderType::Local,
false,
);
Self {
metadata,
models: vec![ModelInfo {
name: format!("{}-primary", id),
size_bytes: None,
capabilities: vec!["chat".into()],
description: Some("Mock model".into()),
provider: ProviderMetadata::new(id, "Mock", ProviderType::Local, false),
metadata: Default::default(),
}],
status: ProviderStatus::Available,
generate_handler: None,
generate_error: None,
}
}
pub fn with_models(mut self, models: Vec<ModelInfo>) -> Self {
self.models = models;
self
}
pub fn with_status(mut self, status: ProviderStatus) -> Self {
self.status = status;
self
}
pub fn with_generate_handler<F>(mut self, handler: F) -> Self
where
F: Fn(GenerateRequest) -> Vec<GenerateChunk> + Send + Sync + 'static,
{
self.generate_handler = Some(Arc::new(handler));
self
}
pub fn with_generate_error<F>(mut self, factory: F) -> Self
where
F: Fn() -> owlen_core::Error + Send + Sync + 'static,
{
self.generate_error = Some(Arc::new(factory));
self
}
}
#[async_trait]
impl ModelProvider for MockProvider {
fn metadata(&self) -> &ProviderMetadata {
&self.metadata
}
async fn health_check(&self) -> CoreResult<ProviderStatus> {
Ok(self.status)
}
async fn list_models(&self) -> CoreResult<Vec<ModelInfo>> {
Ok(self.models.clone())
}
async fn generate_stream(&self, request: GenerateRequest) -> CoreResult<GenerateStream> {
if let Some(factory) = &self.generate_error {
return Err(factory());
}
let chunks = if let Some(handler) = &self.generate_handler {
(handler)(request)
} else {
vec![GenerateChunk::final_chunk()]
};
let stream = stream::iter(chunks.into_iter().map(Ok)).boxed();
Ok(Box::pin(stream))
}
}
impl From<MockProvider> for Arc<dyn ModelProvider> {
fn from(provider: MockProvider) -> Self {
Arc::new(provider)
}
}
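The builder pattern used by `MockProvider` above — defaults set in `new`, overrides via chained `with_*` methods, behavior injected as `Arc<dyn Fn … + Send + Sync>` closures — can be sketched in isolation. Everything below (`MockService` and its methods) is a hypothetical, dependency-free stand-in, not owlen API:

```rust
use std::sync::Arc;

// Test double with defaults and chainable overrides, mirroring the
// with_status / with_generate_handler shape of MockProvider.
struct MockService {
    status: &'static str,
    handler: Option<Arc<dyn Fn(&str) -> String + Send + Sync>>,
}

impl MockService {
    fn new() -> Self {
        Self { status: "available", handler: None }
    }

    fn with_status(mut self, status: &'static str) -> Self {
        self.status = status;
        self
    }

    fn with_handler<F>(mut self, f: F) -> Self
    where
        F: Fn(&str) -> String + Send + Sync + 'static,
    {
        self.handler = Some(Arc::new(f));
        self
    }

    // Falls back to a canned response when no handler is installed,
    // just as MockProvider falls back to a single final chunk.
    fn call(&self, input: &str) -> String {
        match &self.handler {
            Some(f) => f(input),
            None => String::from("default"),
        }
    }
}

fn main() {
    let svc = MockService::new()
        .with_status("degraded")
        .with_handler(|req| format!("echo: {req}"));
    assert_eq!(svc.call("hi"), "echo: hi");
    assert_eq!(svc.status, "degraded");
    println!("ok");
}
```

Each test overrides only the pieces it cares about; unconfigured behavior stays at a sensible default.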

@@ -0,0 +1 @@
pub mod mock_provider;

@@ -0,0 +1,117 @@
mod common;
use std::sync::Arc;
use futures::StreamExt;
use common::mock_provider::MockProvider;
use owlen_core::config::Config;
use owlen_core::provider::{
GenerateChunk, GenerateRequest, ModelInfo, ProviderManager, ProviderType,
};
#[allow(dead_code)]
fn base_config() -> Config {
Config {
providers: Default::default(),
..Default::default()
}
}
fn make_model(name: &str, provider: &str) -> ModelInfo {
ModelInfo {
name: name.into(),
size_bytes: None,
capabilities: vec!["chat".into()],
description: Some("mock".into()),
provider: owlen_core::provider::ProviderMetadata::new(
provider,
provider,
ProviderType::Local,
false,
),
metadata: Default::default(),
}
}
#[tokio::test]
async fn registers_providers_and_lists_ids() {
let manager = ProviderManager::default();
let provider: Arc<dyn owlen_core::provider::ModelProvider> = MockProvider::new("mock-a").into();
manager.register_provider(provider).await;
let ids = manager.provider_ids().await;
assert_eq!(ids, vec!["mock-a".to_string()]);
}
#[tokio::test]
async fn aggregates_models_across_providers() {
let manager = ProviderManager::default();
let provider_a = MockProvider::new("mock-a").with_models(vec![make_model("alpha", "mock-a")]);
let provider_b = MockProvider::new("mock-b").with_models(vec![make_model("beta", "mock-b")]);
manager.register_provider(provider_a.into()).await;
manager.register_provider(provider_b.into()).await;
let models = manager.list_all_models().await.unwrap();
assert_eq!(models.len(), 2);
assert!(models.iter().any(|m| m.model.name == "alpha"));
assert!(models.iter().any(|m| m.model.name == "beta"));
}
#[tokio::test]
async fn routes_generation_to_specific_provider() {
let manager = ProviderManager::default();
let provider = MockProvider::new("mock-gen").with_generate_handler(|_req| {
vec![
GenerateChunk::from_text("hello"),
GenerateChunk::final_chunk(),
]
});
manager.register_provider(provider.into()).await;
let request = GenerateRequest::new("mock-gen::primary");
let mut stream = manager.generate("mock-gen", request).await.unwrap();
let mut collected = Vec::new();
while let Some(chunk) = stream.next().await {
collected.push(chunk.unwrap());
}
assert_eq!(collected.len(), 2);
assert_eq!(collected[0].text.as_deref(), Some("hello"));
assert!(collected[1].is_final);
}
#[tokio::test]
async fn marks_provider_unavailable_on_error() {
let manager = ProviderManager::default();
let provider = MockProvider::new("flaky")
.with_generate_error(|| owlen_core::Error::Network("boom".into()));
manager.register_provider(provider.into()).await;
let request = GenerateRequest::new("flaky::model");
let result = manager.generate("flaky", request).await;
assert!(result.is_err());
let status = manager.provider_status("flaky").await.unwrap();
assert!(matches!(
status,
owlen_core::provider::ProviderStatus::Unavailable
));
}
#[tokio::test]
async fn health_refresh_updates_status_cache() {
let manager = ProviderManager::default();
let provider =
MockProvider::new("healthy").with_status(owlen_core::provider::ProviderStatus::Available);
manager.register_provider(provider.into()).await;
let statuses = manager.refresh_health().await;
assert_eq!(
statuses.get("healthy"),
Some(&owlen_core::provider::ProviderStatus::Available)
);
}
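The status-flipping behavior that `marks_provider_unavailable_on_error` asserts can be reduced to a small sketch; `Manager` and `Status` here are hypothetical stand-ins for `ProviderManager` and `ProviderStatus`, not the real types:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Debug)]
enum Status { Available, Unavailable }

struct Manager {
    statuses: HashMap<String, Status>,
}

impl Manager {
    // A failed generate flips the provider's cached status so later
    // status queries reflect the failure without another health check.
    fn generate(&mut self, provider: &str, fail: bool) -> Result<&'static str, String> {
        if fail {
            self.statuses.insert(provider.to_string(), Status::Unavailable);
            return Err(format!("{provider}: generation failed"));
        }
        self.statuses.insert(provider.to_string(), Status::Available);
        Ok("chunk")
    }

    fn status(&self, provider: &str) -> Option<Status> {
        self.statuses.get(provider).copied()
    }
}

fn main() {
    let mut manager = Manager { statuses: HashMap::new() };
    assert!(manager.generate("flaky", true).is_err());
    assert_eq!(manager.status("flaky"), Some(Status::Unavailable));
    assert!(manager.generate("flaky", false).is_ok());
    assert_eq!(manager.status("flaky"), Some(Status::Available));
    println!("ok");
}
```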

@@ -42,6 +42,7 @@ uuid = { workspace = true }
serde_json.workspace = true
serde.workspace = true
chrono = { workspace = true }
log = { workspace = true }
[dev-dependencies]
tokio-test = { workspace = true }

@@ -0,0 +1,77 @@
use std::sync::Arc;
use anyhow::{Result, anyhow};
use futures_util::StreamExt;
use owlen_core::provider::GenerateRequest;
use uuid::Uuid;
use super::{ActiveGeneration, App, AppMessage};
impl App {
/// Kick off a new generation task on the supplied provider.
pub fn start_generation(
&mut self,
provider_id: impl Into<String>,
request: GenerateRequest,
) -> Result<Uuid> {
let provider_id = provider_id.into();
let request_id = Uuid::new_v4();
// Cancel any existing task so we don't interleave output.
if let Some(active) = self.active_generation.take() {
active.abort();
}
self.message_tx
.send(AppMessage::GenerateStart {
request_id,
provider_id: provider_id.clone(),
request: request.clone(),
})
.map_err(|err| anyhow!("failed to queue generation start: {err:?}"))?;
let manager = Arc::clone(&self.provider_manager);
let message_tx = self.message_tx.clone();
let provider_for_task = provider_id.clone();
let join_handle = tokio::spawn(async move {
let mut stream = match manager.generate(&provider_for_task, request).await {
Ok(stream) => stream,
Err(err) => {
let _ = message_tx.send(AppMessage::GenerateError {
request_id: Some(request_id),
message: err.to_string(),
});
return;
}
};
while let Some(chunk_result) = stream.next().await {
match chunk_result {
Ok(chunk) => {
if message_tx
.send(AppMessage::GenerateChunk { request_id, chunk })
.is_err()
{
break;
}
}
Err(err) => {
let _ = message_tx.send(AppMessage::GenerateError {
request_id: Some(request_id),
message: err.to_string(),
});
return;
}
}
}
let _ = message_tx.send(AppMessage::GenerateComplete { request_id });
});
let generation = ActiveGeneration::new(request_id, provider_id, join_handle);
self.active_generation = Some(generation);
Ok(request_id)
}
}
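The orchestration in `start_generation` — abort any previous task, stream chunks over a channel, send a completion marker at the end — has a thread-based analogue using only the standard library. Names below are illustrative stand-ins, not owlen API, and threads replace the tokio task for the sake of a self-contained sketch:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, mpsc};
use std::thread;

struct Generation {
    cancel: Arc<AtomicBool>,
    handle: thread::JoinHandle<()>,
}

struct App {
    active: Option<Generation>,
    tx: mpsc::Sender<String>,
}

impl App {
    fn start(&mut self, chunks: Vec<String>) {
        // Cancel any in-flight generation so output never interleaves.
        if let Some(prev) = self.active.take() {
            prev.cancel.store(true, Ordering::SeqCst);
            let _ = prev.handle.join();
        }
        let cancel = Arc::new(AtomicBool::new(false));
        let flag = Arc::clone(&cancel);
        let tx = self.tx.clone();
        let handle = thread::spawn(move || {
            for chunk in chunks {
                if flag.load(Ordering::SeqCst) {
                    return; // aborted: stop streaming
                }
                if tx.send(chunk).is_err() {
                    return; // receiver dropped: nobody is listening
                }
            }
            let _ = tx.send("<complete>".to_string());
        });
        self.active = Some(Generation { cancel, handle });
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut app = App { active: None, tx };
    app.start(vec!["a".into(), "b".into()]);
    if let Some(g) = app.active.take() {
        let _ = g.handle.join();
    }
    let chunks: Vec<String> = rx.try_iter().collect();
    assert_eq!(chunks, vec!["a", "b", "<complete>"]);
    println!("ok");
}
```

The real implementation sends `GenerateError` instead of silently returning when the stream fails, but the cancel-then-replace shape is the same.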

@@ -0,0 +1,135 @@
use super::{App, messages::AppMessage};
use log::warn;
use owlen_core::{
provider::{GenerateChunk, GenerateRequest, ProviderStatus},
state::AppState,
};
use uuid::Uuid;
/// Trait implemented by UI state containers to react to [`AppMessage`] events.
pub trait MessageState {
/// Called when a generation request is about to start.
#[allow(unused_variables)]
fn start_generation(
&mut self,
request_id: Uuid,
provider_id: &str,
request: &GenerateRequest,
) -> AppState {
AppState::Running
}
/// Called for every streamed generation chunk.
#[allow(unused_variables)]
fn append_chunk(&mut self, request_id: Uuid, chunk: &GenerateChunk) -> AppState {
AppState::Running
}
/// Called when a generation finishes successfully.
#[allow(unused_variables)]
fn generation_complete(&mut self, request_id: Uuid) -> AppState {
AppState::Running
}
/// Called when a generation fails.
#[allow(unused_variables)]
fn generation_failed(&mut self, request_id: Option<Uuid>, message: &str) -> AppState {
AppState::Running
}
/// Called when refreshed model metadata is available.
fn update_model_list(&mut self) -> AppState {
AppState::Running
}
/// Called when a models refresh has been requested.
fn refresh_model_list(&mut self) -> AppState {
AppState::Running
}
/// Called when provider status updates arrive.
#[allow(unused_variables)]
fn update_provider_status(&mut self, provider_id: &str, status: ProviderStatus) -> AppState {
AppState::Running
}
/// Called when a resize event occurs.
#[allow(unused_variables)]
fn handle_resize(&mut self, width: u16, height: u16) -> AppState {
AppState::Running
}
/// Called on periodic ticks.
fn handle_tick(&mut self) -> AppState {
AppState::Running
}
}
impl App {
/// Dispatch a message to the provided [`MessageState`]. Returns `true` when the
/// state indicates the UI should exit.
pub fn handle_message<State>(&mut self, state: &mut State, message: AppMessage) -> bool
where
State: MessageState,
{
use AppMessage::*;
let outcome = match message {
KeyPress(_) => AppState::Running,
Resize { width, height } => state.handle_resize(width, height),
Tick => state.handle_tick(),
GenerateStart {
request_id,
provider_id,
request,
} => state.start_generation(request_id, &provider_id, &request),
GenerateChunk { request_id, chunk } => state.append_chunk(request_id, &chunk),
GenerateComplete { request_id } => {
self.clear_active_generation(request_id);
state.generation_complete(request_id)
}
GenerateError {
request_id,
message,
} => {
self.clear_active_generation_optional(request_id);
state.generation_failed(request_id, &message)
}
ModelsRefresh => state.refresh_model_list(),
ModelsUpdated => state.update_model_list(),
ProviderStatus {
provider_id,
status,
} => state.update_provider_status(&provider_id, status),
};
matches!(outcome, AppState::Quit)
}
fn clear_active_generation(&mut self, request_id: Uuid) {
if self
.active_generation
.as_ref()
.map(|active| active.request_id() == request_id)
.unwrap_or(false)
{
self.active_generation = None;
} else {
warn!(
"received completion for unknown request {}, ignoring",
request_id
);
}
}
fn clear_active_generation_optional(&mut self, request_id: Option<Uuid>) {
match request_id {
Some(id) => self.clear_active_generation(id),
None => {
if self.active_generation.is_some() {
self.active_generation = None;
}
}
}
}
}
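The `MessageState` design — a trait whose callbacks all default to `Running`, plus a dispatcher that maps each message variant to one callback and turns the returned state into an exit flag — can be sketched minimally. All names here are hypothetical stand-ins:

```rust
enum Msg { Tick, Resize(u16, u16), Quit }

#[derive(PartialEq, Debug)]
enum State { Running, Quit }

// Implementors override only the callbacks they need; everything
// else falls through to the Running default.
trait Handler {
    fn tick(&mut self) -> State { State::Running }
    fn resize(&mut self, _w: u16, _h: u16) -> State { State::Running }
}

// Mirrors App::handle_message: route the variant, return true on quit.
fn dispatch<H: Handler>(h: &mut H, msg: Msg) -> bool {
    let out = match msg {
        Msg::Tick => h.tick(),
        Msg::Resize(w, hgt) => h.resize(w, hgt),
        Msg::Quit => State::Quit,
    };
    out == State::Quit
}

struct Counter { ticks: u32 }

impl Handler for Counter {
    fn tick(&mut self) -> State {
        self.ticks += 1;
        State::Running
    }
}

fn main() {
    let mut c = Counter { ticks: 0 };
    assert!(!dispatch(&mut c, Msg::Tick));
    assert!(!dispatch(&mut c, Msg::Resize(80, 24)));
    assert!(dispatch(&mut c, Msg::Quit));
    assert_eq!(c.ticks, 1);
    println!("ok");
}
```

Defaulted trait methods keep `ChatApp`'s empty `MessageState` impl legal while letting richer UIs override the full set.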

@@ -0,0 +1,41 @@
use crossterm::event::KeyEvent;
use owlen_core::provider::{GenerateChunk, GenerateRequest, ProviderStatus};
use uuid::Uuid;
/// Messages exchanged between the UI event loop and background workers.
#[derive(Debug)]
pub enum AppMessage {
/// User input event bubbled up from the terminal layer.
KeyPress(KeyEvent),
/// Terminal resize notification.
Resize { width: u16, height: u16 },
/// Periodic tick used to drive animations.
Tick,
/// Initiate a new text generation request.
GenerateStart {
request_id: Uuid,
provider_id: String,
request: GenerateRequest,
},
/// Streamed response chunk from the active generation task.
GenerateChunk {
request_id: Uuid,
chunk: GenerateChunk,
},
/// Generation finished successfully.
GenerateComplete { request_id: Uuid },
/// Generation failed or was aborted.
GenerateError {
request_id: Option<Uuid>,
message: String,
},
/// Trigger a background refresh of available models.
ModelsRefresh,
/// New model list data is ready.
ModelsUpdated,
/// Provider health status update.
ProviderStatus {
provider_id: String,
status: ProviderStatus,
},
}

@@ -0,0 +1,237 @@
mod generation;
mod handler;
mod worker;
pub mod messages;
pub use worker::background_worker;
use std::{
io,
sync::Arc,
time::{Duration, Instant},
};
use anyhow::Result;
use async_trait::async_trait;
use crossterm::event::{self, KeyEventKind};
use owlen_core::{provider::ProviderManager, state::AppState};
use ratatui::{Terminal, backend::CrosstermBackend};
use tokio::{
sync::mpsc::{self, error::TryRecvError},
task::{AbortHandle, JoinHandle, yield_now},
};
use uuid::Uuid;
use crate::{Event, SessionEvent, events};
pub use handler::MessageState;
pub use messages::AppMessage;
#[async_trait]
pub trait UiRuntime: MessageState {
async fn handle_ui_event(&mut self, event: Event) -> Result<AppState>;
async fn handle_session_event(&mut self, event: SessionEvent) -> Result<()>;
async fn process_pending_llm_request(&mut self) -> Result<()>;
async fn process_pending_tool_execution(&mut self) -> Result<()>;
fn advance_loading_animation(&mut self);
fn streaming_count(&self) -> usize;
}
/// High-level application state driving the non-blocking TUI.
pub struct App {
provider_manager: Arc<ProviderManager>,
message_tx: mpsc::UnboundedSender<AppMessage>,
message_rx: Option<mpsc::UnboundedReceiver<AppMessage>>,
active_generation: Option<ActiveGeneration>,
}
impl App {
/// Construct a new application instance with an associated message channel.
pub fn new(provider_manager: Arc<ProviderManager>) -> Self {
let (message_tx, message_rx) = mpsc::unbounded_channel();
Self {
provider_manager,
message_tx,
message_rx: Some(message_rx),
active_generation: None,
}
}
/// Cloneable sender handle for pushing messages into the application loop.
pub fn message_sender(&self) -> mpsc::UnboundedSender<AppMessage> {
self.message_tx.clone()
}
/// Whether a generation task is currently in flight.
pub fn has_active_generation(&self) -> bool {
self.active_generation.is_some()
}
/// Abort any in-flight generation task.
pub fn abort_active_generation(&mut self) {
if let Some(active) = self.active_generation.take() {
active.abort();
}
}
/// Launch the background worker responsible for provider health checks.
pub fn spawn_background_worker(&self) -> JoinHandle<()> {
let manager = Arc::clone(&self.provider_manager);
let sender = self.message_tx.clone();
tokio::spawn(async move {
worker::background_worker(manager, sender).await;
})
}
/// Drive the main UI loop, handling terminal events, background messages, and
/// provider status updates without blocking rendering.
pub async fn run<State, RenderFn>(
&mut self,
terminal: &mut Terminal<CrosstermBackend<io::Stdout>>,
state: &mut State,
session_rx: &mut mpsc::UnboundedReceiver<SessionEvent>,
mut render: RenderFn,
) -> Result<AppState>
where
State: UiRuntime,
RenderFn: FnMut(&mut Terminal<CrosstermBackend<io::Stdout>>, &mut State) -> Result<()>,
{
let mut message_rx = self
.message_rx
.take()
.expect("App::run called without an available message receiver");
let poll_interval = Duration::from_millis(16);
let mut last_frame = Instant::now();
let frame_interval = Duration::from_millis(16);
let mut worker_handle = Some(self.spawn_background_worker());
let exit_state = AppState::Quit;
'main: loop {
state.advance_loading_animation();
state.process_pending_llm_request().await?;
state.process_pending_tool_execution().await?;
loop {
match session_rx.try_recv() {
Ok(session_event) => {
state.handle_session_event(session_event).await?;
}
Err(TryRecvError::Empty) => break,
Err(TryRecvError::Disconnected) => {
break 'main;
}
}
}
loop {
match message_rx.try_recv() {
Ok(message) => {
if self.handle_message(state, message) {
if let Some(handle) = worker_handle.take() {
handle.abort();
}
break 'main;
}
}
Err(TryRecvError::Empty) => break,
Err(TryRecvError::Disconnected) => break,
}
}
if last_frame.elapsed() >= frame_interval {
render(terminal, state)?;
last_frame = Instant::now();
}
if self.handle_message(state, AppMessage::Tick) {
if let Some(handle) = worker_handle.take() {
handle.abort();
}
break 'main;
}
match event::poll(poll_interval) {
Ok(true) => match event::read() {
Ok(raw_event) => {
if let Some(ui_event) = events::from_crossterm_event(raw_event) {
if let Event::Key(key) = &ui_event {
if key.kind == KeyEventKind::Press {
let _ = self.message_tx.send(AppMessage::KeyPress(*key));
}
} else if let Event::Resize(width, height) = &ui_event {
let _ = self.message_tx.send(AppMessage::Resize {
width: *width,
height: *height,
});
}
if matches!(state.handle_ui_event(ui_event).await?, AppState::Quit) {
if let Some(handle) = worker_handle.take() {
handle.abort();
}
break 'main;
}
}
}
Err(err) => {
if let Some(handle) = worker_handle.take() {
handle.abort();
}
return Err(err.into());
}
},
Ok(false) => {}
Err(err) => {
if let Some(handle) = worker_handle.take() {
handle.abort();
}
return Err(err.into());
}
}
yield_now().await;
}
if let Some(handle) = worker_handle {
handle.abort();
}
self.message_rx = Some(message_rx);
Ok(exit_state)
}
}
struct ActiveGeneration {
request_id: Uuid,
#[allow(dead_code)]
provider_id: String,
abort_handle: AbortHandle,
#[allow(dead_code)]
join_handle: JoinHandle<()>,
}
impl ActiveGeneration {
fn new(request_id: Uuid, provider_id: String, join_handle: JoinHandle<()>) -> Self {
let abort_handle = join_handle.abort_handle();
Self {
request_id,
provider_id,
abort_handle,
join_handle,
}
}
fn abort(self) {
self.abort_handle.abort();
}
fn request_id(&self) -> Uuid {
self.request_id
}
}
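The non-blocking drain loops in `App::run` — pull every queued item with `try_recv`, stop on `Empty`, treat `Disconnected` as a shutdown signal — follow a standard channel pattern. A dependency-free sketch with `std::sync::mpsc` in place of tokio's channel:

```rust
use std::sync::mpsc::{self, TryRecvError};

// Drain all currently queued messages without blocking. Returns the
// drained items plus a flag indicating the sender side has gone away.
fn drain(rx: &mpsc::Receiver<u32>) -> (Vec<u32>, bool) {
    let mut out = Vec::new();
    loop {
        match rx.try_recv() {
            Ok(v) => out.push(v),
            Err(TryRecvError::Empty) => return (out, false),
            Err(TryRecvError::Disconnected) => return (out, true),
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(1).unwrap();
    tx.send(2).unwrap();
    let (msgs, closed) = drain(&rx);
    assert_eq!(msgs, vec![1, 2]);
    assert!(!closed);
    drop(tx);
    let (rest, closed) = drain(&rx);
    assert!(rest.is_empty());
    assert!(closed);
    println!("ok");
}
```

Because the drain never blocks, the loop can fall through to rendering and `event::poll` on every iteration, which is what keeps the TUI responsive while messages stream in.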

@@ -0,0 +1,52 @@
use std::sync::Arc;
use std::time::Duration;
use tokio::{sync::mpsc, time};
use owlen_core::provider::ProviderManager;
use super::AppMessage;
const HEALTH_CHECK_INTERVAL: Duration = Duration::from_secs(30);
/// Periodically refresh provider health and emit status updates into the app's
/// message channel. Exits automatically once the receiver side of the channel
/// is dropped.
pub async fn background_worker(
provider_manager: Arc<ProviderManager>,
message_tx: mpsc::UnboundedSender<AppMessage>,
) {
let mut interval = time::interval(HEALTH_CHECK_INTERVAL);
let mut last_statuses = provider_manager.provider_statuses().await;
loop {
interval.tick().await;
if message_tx.is_closed() {
break;
}
let statuses = provider_manager.refresh_health().await;
for (provider_id, status) in statuses {
let changed = match last_statuses.get(&provider_id) {
Some(previous) => previous != &status,
None => true,
};
last_statuses.insert(provider_id.clone(), status);
if changed
&& message_tx
.send(AppMessage::ProviderStatus {
provider_id,
status,
})
.is_err()
{
// Receiver dropped; terminate worker.
return;
}
}
}
}
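The change-detection logic in `background_worker` — compare each fresh status against a cached map and only emit updates for values that actually changed — is worth isolating. This sketch uses plain strings in place of `ProviderStatus`:

```rust
use std::collections::HashMap;

// Update the cache with fresh statuses and return only the entries
// whose value differs from (or is missing in) the cached copy.
fn changed_statuses(
    last: &mut HashMap<String, &'static str>,
    fresh: Vec<(String, &'static str)>,
) -> Vec<(String, &'static str)> {
    let mut updates = Vec::new();
    for (id, status) in fresh {
        let is_new = last.get(&id).map(|prev| *prev != status).unwrap_or(true);
        last.insert(id.clone(), status);
        if is_new {
            updates.push((id, status));
        }
    }
    updates
}

fn main() {
    let mut cache = HashMap::new();
    // First observation is always a change.
    let first = changed_statuses(&mut cache, vec![("a".to_string(), "up")]);
    assert_eq!(first.len(), 1);
    // Unchanged status produces no update.
    let second = changed_statuses(&mut cache, vec![("a".to_string(), "up")]);
    assert!(second.is_empty());
    // A transition is reported exactly once.
    let third = changed_statuses(&mut cache, vec![("a".to_string(), "down")]);
    assert_eq!(third, vec![("a".to_string(), "down")]);
    println!("ok");
}
```

Suppressing no-op updates keeps the 30-second health loop from flooding the message channel with identical `ProviderStatus` messages.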

@@ -1,8 +1,13 @@
use anyhow::{Context, Result, anyhow};
use async_trait::async_trait;
use chrono::{DateTime, Local, Utc};
use crossterm::terminal::{disable_raw_mode, enable_raw_mode};
use owlen_core::mcp::remote_client::RemoteMcpClient;
use owlen_core::mcp::{McpToolDescriptor, McpToolResponse};
use owlen_core::provider::{
AnnotatedModelInfo, ModelInfo as ProviderModelInfo, ProviderMetadata, ProviderStatus,
ProviderType,
};
use owlen_core::{
Provider, ProviderConfig,
config::McpResourceConfig,
@@ -28,6 +33,7 @@ use unicode_segmentation::UnicodeSegmentation;
use unicode_width::UnicodeWidthStr;
use uuid::Uuid;
use crate::app::{MessageState, UiRuntime};
use crate::config;
use crate::events::Event;
use crate::model_info_panel::ModelInfoPanel;
@@ -40,6 +46,7 @@ use crate::state::{
};
use crate::toast::{Toast, ToastLevel, ToastManager};
use crate::ui::format_tool_output;
use crate::widgets::model_picker::FilterMode;
use crate::{commands, highlight};
use owlen_core::config::{
OLLAMA_CLOUD_API_KEY_ENV, OLLAMA_CLOUD_BASE_URL, OLLAMA_CLOUD_ENDPOINT_KEY, OLLAMA_MODE_KEY,
@@ -102,11 +109,14 @@ pub(crate) enum ModelSelectorItemKind {
Header {
provider: String,
expanded: bool,
status: ProviderStatus,
provider_type: ProviderType,
},
Scope {
provider: String,
label: String,
scope: ModelScope,
status: ModelAvailabilityState,
},
Model {
provider: String,
@@ -115,25 +125,39 @@ pub(crate) enum ModelSelectorItemKind {
Empty {
provider: String,
message: Option<String>,
status: Option<ModelAvailabilityState>,
},
}
impl ModelSelectorItem {
fn header(provider: impl Into<String>, expanded: bool) -> Self {
fn header(
provider: impl Into<String>,
expanded: bool,
status: ProviderStatus,
provider_type: ProviderType,
) -> Self {
Self {
kind: ModelSelectorItemKind::Header {
provider: provider.into(),
expanded,
status,
provider_type,
},
}
}
fn scope(provider: impl Into<String>, label: impl Into<String>, scope: ModelScope) -> Self {
fn scope(
provider: impl Into<String>,
label: impl Into<String>,
scope: ModelScope,
status: ModelAvailabilityState,
) -> Self {
Self {
kind: ModelSelectorItemKind::Scope {
provider: provider.into(),
label: label.into(),
scope,
status,
},
}
}
@@ -147,11 +171,16 @@ impl ModelSelectorItem {
}
}
fn empty(provider: impl Into<String>, message: Option<String>) -> Self {
fn empty(
provider: impl Into<String>,
message: Option<String>,
status: Option<ModelAvailabilityState>,
) -> Self {
Self {
kind: ModelSelectorItemKind::Empty {
provider: provider.into(),
message,
status,
},
}
}
@@ -250,13 +279,15 @@ pub struct ChatApp {
mode_flash_until: Option<Instant>,
pub status: String,
pub error: Option<String>,
models: Vec<ModelInfo>,                    // All models fetched
annotated_models: Vec<AnnotatedModelInfo>, // Models annotated with provider metadata
provider_scope_status: HashMap<String, ProviderScopeStatus>,
pub available_providers: Vec<String>, // Unique providers from models
pub selected_provider: String, // The currently selected provider
pub selected_provider_index: usize, // Index into the available_providers list
pub selected_model_item: Option<usize>, // Index into the flattened model selector list
model_selector_items: Vec<ModelSelectorItem>, // Flattened provider/model list for selector
model_filter_mode: FilterMode, // Active filter applied to the model list
model_info_panel: ModelInfoPanel, // Dedicated model information viewer
model_details_cache: HashMap<String, DetailedModelInfo>, // Cached detailed metadata per model
show_model_info: bool, // Whether the model info panel is visible
@@ -500,12 +531,14 @@ impl ChatApp {
},
error: None,
models: Vec::new(),
annotated_models: Vec::new(),
provider_scope_status: HashMap::new(),
available_providers: Vec::new(),
selected_provider: "ollama_local".to_string(), // Default, will be updated in initialize_models
selected_provider_index: 0,
selected_model_item: None,
model_selector_items: Vec::new(),
model_filter_mode: FilterMode::All,
model_info_panel: ModelInfoPanel::new(),
model_details_cache: HashMap::new(),
show_model_info: false,
@@ -1210,6 +1243,21 @@ impl ChatApp {
&self.model_selector_items
}
pub(crate) fn annotated_models(&self) -> &[AnnotatedModelInfo] {
&self.annotated_models
}
pub(crate) fn model_filter_mode(&self) -> FilterMode {
self.model_filter_mode
}
pub(crate) fn set_model_filter_mode(&mut self, mode: FilterMode) {
if self.model_filter_mode != mode {
self.model_filter_mode = mode;
self.rebuild_model_selector_items();
}
}
pub fn selected_model_item(&self) -> Option<usize> {
self.selected_model_item
}
@@ -5200,7 +5248,7 @@ impl ChatApp {
return Ok(AppState::Running);
}
(KeyCode::Char('m'), KeyModifiers::NONE) => {
if let Err(err) = self.show_model_picker().await {
if let Err(err) = self.show_model_picker(FilterMode::All).await {
self.error = Some(err.to_string());
}
return Ok(AppState::Running);
@@ -6066,7 +6114,9 @@ impl ChatApp {
}
"m" | "model" => {
if args.is_empty() {
if let Err(err) = self.show_model_picker().await {
if let Err(err) =
self.show_model_picker(FilterMode::All).await
{
self.error = Some(err.to_string());
}
self.command_palette.clear();
@@ -6257,7 +6307,9 @@ impl ChatApp {
}
"models" => {
if args.is_empty() {
if let Err(err) = self.show_model_picker().await {
if let Err(err) =
self.show_model_picker(FilterMode::All).await
{
self.error = Some(err.to_string());
}
self.command_palette.clear();
@@ -6266,7 +6318,9 @@ impl ChatApp {
match args[0] {
"--local" => {
if let Err(err) = self.show_model_picker().await {
if let Err(err) =
self.show_model_picker(FilterMode::LocalOnly).await
{
self.error = Some(err.to_string());
} else if !self
.focus_first_model_in_scope(&ModelScope::Local)
@@ -6281,7 +6335,9 @@ impl ChatApp {
return Ok(AppState::Running);
}
"--cloud" => {
if let Err(err) = self.show_model_picker().await {
if let Err(err) =
self.show_model_picker(FilterMode::CloudOnly).await
{
self.error = Some(err.to_string());
} else if !self
.focus_first_model_in_scope(&ModelScope::Cloud)
@@ -6295,6 +6351,22 @@ impl ChatApp {
self.command_palette.clear();
return Ok(AppState::Running);
}
"--available" => {
if let Err(err) =
self.show_model_picker(FilterMode::Available).await
{
self.error = Some(err.to_string());
} else if !self.focus_first_available_model() {
self.status =
"No available models right now".to_string();
} else {
self.status =
"Showing available models".to_string();
self.error = None;
}
self.command_palette.clear();
return Ok(AppState::Running);
}
"info" => {
let force_refresh = args
.get(1)
@@ -6743,7 +6815,9 @@ impl ChatApp {
KeyCode::Enter => {
if let Some(item) = self.current_model_selector_item() {
match item.kind() {
ModelSelectorItemKind::Header { provider, expanded } => {
ModelSelectorItemKind::Header {
provider, expanded, ..
} => {
if *expanded {
let provider_name = provider.clone();
self.collapse_provider(&provider_name);
@@ -6839,7 +6913,9 @@ impl ChatApp {
KeyCode::Left => {
if let Some(item) = self.current_model_selector_item() {
match item.kind() {
ModelSelectorItemKind::Header { provider, expanded } => {
ModelSelectorItemKind::Header {
provider, expanded, ..
} => {
if *expanded {
let provider_name = provider.clone();
self.collapse_provider(&provider_name);
@@ -6873,7 +6949,9 @@ impl ChatApp {
KeyCode::Right => {
if let Some(item) = self.current_model_selector_item() {
match item.kind() {
ModelSelectorItemKind::Header { provider, expanded } => {
ModelSelectorItemKind::Header {
provider, expanded, ..
} => {
if !expanded {
let provider_name = provider.clone();
self.expand_provider(&provider_name, true);
@@ -6895,8 +6973,9 @@ impl ChatApp {
}
KeyCode::Char(' ') => {
if let Some(item) = self.current_model_selector_item() {
if let ModelSelectorItemKind::Header { provider, expanded } =
item.kind()
if let ModelSelectorItemKind::Header {
provider, expanded, ..
} = item.kind()
{
if *expanded {
let provider_name = provider.clone();
@@ -7575,17 +7654,29 @@ impl ChatApp {
}
fn scope_header_label(
provider: &str,
_provider: &str,
scope: &ModelScope,
status: Option<ModelAvailabilityState>,
filter: FilterMode,
) -> String {
let icon = Self::scope_icon(scope);
let scope_name = Self::scope_display_name(scope);
let provider_name = capitalize_first(provider);
let mut label = format!("{icon} {scope_name} · {provider_name}");
let mut label = format!("{icon} {scope_name}");
if let Some(ModelAvailabilityState::Unavailable) = status {
label.push_str(" (Unavailable)");
if let Some(state) = status {
match state {
ModelAvailabilityState::Available => {
if matches!(filter, FilterMode::Available) {
label.push_str(" · ✓");
}
}
ModelAvailabilityState::Unavailable => label.push_str(" · ✗"),
ModelAvailabilityState::Unknown => label.push_str(" · ⚙"),
}
}
if matches!(filter, FilterMode::Available) {
label.push_str(" · available only");
}
label
@@ -7694,11 +7785,66 @@ impl ChatApp {
result
}
fn rebuild_annotated_models(&mut self) {
let mut annotated = Vec::with_capacity(self.models.len());
for model in &self.models {
let provider_id = model.provider.clone();
let scope = Self::model_scope_from_capabilities(model);
let scope_state = self.provider_scope_state(provider_id.as_str(), &scope);
let provider_status = Self::provider_status_from_state(scope_state);
let provider_type = Self::infer_provider_type(&provider_id, &scope);
let mut provider_metadata = ProviderMetadata::new(
provider_id.clone(),
Self::provider_display_name(&provider_id),
provider_type,
matches!(provider_type, ProviderType::Cloud),
);
provider_metadata.metadata.insert(
"scope".to_string(),
Value::String(Self::scope_display_name(&scope)),
);
let mut model_metadata = HashMap::new();
if !model.name.trim().is_empty() && model.name != model.id {
model_metadata.insert(
"display_name".to_string(),
Value::String(model.name.clone()),
);
}
if let Some(ctx) = model.context_window {
model_metadata.insert("context_window".to_string(), Value::from(ctx));
}
let provider_model = ProviderModelInfo {
name: model.id.clone(),
size_bytes: None,
capabilities: model.capabilities.clone(),
description: model.description.clone(),
provider: provider_metadata,
metadata: model_metadata,
};
annotated.push(AnnotatedModelInfo {
provider_id,
provider_status,
model: provider_model,
});
}
self.annotated_models = annotated;
}
fn rebuild_model_selector_items(&mut self) {
let mut items = Vec::new();
if self.available_providers.is_empty() {
items.push(ModelSelectorItem::header("ollama_local", false));
items.push(ModelSelectorItem::header(
"ollama_local",
false,
ProviderStatus::RequiresSetup,
ProviderType::Local,
));
self.model_selector_items = items;
return;
}
@@ -7707,7 +7853,14 @@ impl ChatApp {
for provider in &self.available_providers {
let is_expanded = expanded.as_ref().map(|p| p == provider).unwrap_or(false);
items.push(ModelSelectorItem::header(provider.clone(), is_expanded));
let provider_status = self.provider_overall_status(provider);
let provider_type = self.provider_type_for(provider);
items.push(ModelSelectorItem::header(
provider.clone(),
is_expanded,
provider_status,
provider_type,
));
if is_expanded {
let relevant: Vec<(usize, &ModelInfo)> = self
@@ -7736,6 +7889,10 @@ impl ChatApp {
let mut rendered_body = false;
for scope in scopes_to_render {
if !self.filter_allows_scope(&scope) {
continue;
}
rendered_scope = true;
let entries = scoped.get(&scope).cloned().unwrap_or_default();
let deduped =
@@ -7745,16 +7902,36 @@ impl ChatApp {
.and_then(|map| map.get(&scope))
.cloned()
.unwrap_or_default();
let label =
Self::scope_header_label(provider, &scope, Some(status_entry.state));
let label = Self::scope_header_label(
provider,
&scope,
Some(status_entry.state),
self.model_filter_mode,
);
items.push(ModelSelectorItem::scope(
provider.clone(),
label,
scope.clone(),
status_entry.state,
));
let scope_allowed = self.filter_scope_allows_models(&scope, status_entry.state);
if deduped.is_empty() {
if !scope_allowed {
let message = self.scope_filter_message(&scope, status_entry.state);
if let Some(msg) = message {
rendered_body = true;
items.push(ModelSelectorItem::empty(
provider.clone(),
Some(msg),
Some(status_entry.state),
));
}
continue;
}
let fallback_message = match status_entry.state {
ModelAvailabilityState::Unavailable => {
Some(format!("{} unavailable", Self::scope_display_name(&scope)))
@@ -7768,7 +7945,24 @@ impl ChatApp {
if let Some(message) = fallback_message {
rendered_body = true;
items.push(ModelSelectorItem::empty(provider.clone(), Some(message)));
items.push(ModelSelectorItem::empty(
provider.clone(),
Some(message),
Some(status_entry.state),
));
}
continue;
}
if !scope_allowed {
let message = self.scope_filter_message(&scope, status_entry.state);
if let Some(msg) = message {
rendered_body = true;
items.push(ModelSelectorItem::empty(
provider.clone(),
Some(msg),
Some(status_entry.state),
));
}
continue;
}
@@ -7780,7 +7974,7 @@ impl ChatApp {
}
if !rendered_scope || !rendered_body {
items.push(ModelSelectorItem::empty(provider.clone(), None));
items.push(ModelSelectorItem::empty(provider.clone(), None, None));
}
}
}
@@ -7789,6 +7983,131 @@ impl ChatApp {
self.ensure_valid_model_selection();
}
fn provider_scope_state(&self, provider: &str, scope: &ModelScope) -> ModelAvailabilityState {
self.provider_scope_status
.get(provider)
.and_then(|map| map.get(scope))
.map(|entry| entry.state)
.unwrap_or(ModelAvailabilityState::Unknown)
}
fn provider_overall_status(&self, provider: &str) -> ProviderStatus {
if let Some(status_map) = self.provider_scope_status.get(provider) {
let mut saw_unknown = false;
for entry in status_map.values() {
match entry.state {
ModelAvailabilityState::Unavailable => return ProviderStatus::Unavailable,
ModelAvailabilityState::Unknown => saw_unknown = true,
ModelAvailabilityState::Available => {}
}
}
if saw_unknown {
ProviderStatus::RequiresSetup
} else {
ProviderStatus::Available
}
} else {
self.annotated_models
.iter()
.find(|m| m.provider_id == provider)
.map(|m| m.provider_status)
.unwrap_or(ProviderStatus::RequiresSetup)
}
}
fn provider_type_for(&self, provider: &str) -> ProviderType {
self.annotated_models
.iter()
.find(|m| m.provider_id == provider)
.map(|m| m.model.provider.provider_type)
.unwrap_or_else(|| {
if provider.to_ascii_lowercase().contains("cloud") {
ProviderType::Cloud
} else {
ProviderType::Local
}
})
}
fn filter_allows_scope(&self, scope: &ModelScope) -> bool {
match self.model_filter_mode {
FilterMode::All => true,
FilterMode::LocalOnly => matches!(scope, ModelScope::Local),
FilterMode::CloudOnly => matches!(scope, ModelScope::Cloud),
FilterMode::Available => true,
}
}
fn filter_scope_allows_models(
&self,
scope: &ModelScope,
status: ModelAvailabilityState,
) -> bool {
match self.model_filter_mode {
FilterMode::Available => status == ModelAvailabilityState::Available,
FilterMode::LocalOnly => matches!(scope, ModelScope::Local),
FilterMode::CloudOnly => matches!(scope, ModelScope::Cloud),
FilterMode::All => true,
}
}
fn scope_filter_message(
&self,
scope: &ModelScope,
status: ModelAvailabilityState,
) -> Option<String> {
match self.model_filter_mode {
FilterMode::Available => match status {
ModelAvailabilityState::Available => None,
ModelAvailabilityState::Unavailable => {
Some(format!("{} unavailable", Self::scope_display_name(scope)))
}
ModelAvailabilityState::Unknown => Some(format!(
"{} setup required",
Self::scope_display_name(scope)
)),
},
FilterMode::LocalOnly | FilterMode::CloudOnly => {
if status == ModelAvailabilityState::Unavailable {
Some(format!("{} unavailable", Self::scope_display_name(scope)))
} else {
None
}
}
FilterMode::All => None,
}
}
fn provider_display_name(provider: &str) -> String {
if provider.trim().is_empty() {
return "Provider".to_string();
}
let normalized = provider.replace(['_', '-'], " ");
capitalize_first(normalized.as_str())
}
fn infer_provider_type(provider: &str, scope: &ModelScope) -> ProviderType {
match scope {
ModelScope::Local => ProviderType::Local,
ModelScope::Cloud => ProviderType::Cloud,
ModelScope::Other(_) => {
if provider.to_ascii_lowercase().contains("cloud") {
ProviderType::Cloud
} else {
ProviderType::Local
}
}
}
}
fn provider_status_from_state(state: ModelAvailabilityState) -> ProviderStatus {
match state {
ModelAvailabilityState::Available => ProviderStatus::Available,
ModelAvailabilityState::Unavailable => ProviderStatus::Unavailable,
ModelAvailabilityState::Unknown => ProviderStatus::RequiresSetup,
}
}
fn first_model_item_index(&self) -> Option<usize> {
self.model_selector_items
.iter()
@@ -7900,6 +8219,19 @@ impl ChatApp {
true
}
fn focus_first_available_model(&mut self) -> bool {
if self.model_selector_items.is_empty() {
return false;
}
if let Some(idx) = self.first_model_item_index() {
self.set_selected_model_item(idx);
true
} else {
false
}
}
fn ensure_valid_model_selection(&mut self) {
if self.model_selector_items.is_empty() {
self.selected_model_item = None;
@@ -8091,6 +8423,7 @@ impl ChatApp {
self.models = all_models;
self.provider_scope_status = scope_status;
self.rebuild_annotated_models();
self.model_info_panel.clear();
self.set_model_info_visible(false);
self.populate_model_details_cache_from_session().await;
@@ -8137,6 +8470,7 @@ impl ChatApp {
self.models.len(),
self.available_providers.len()
);
self.rebuild_model_selector_items();
self.update_command_palette_catalog();
@@ -8401,13 +8735,15 @@ impl ChatApp {
Ok(())
}
async fn show_model_picker(&mut self) -> Result<()> {
async fn show_model_picker(&mut self, filter: FilterMode) -> Result<()> {
self.refresh_models().await?;
if self.models.is_empty() {
return Ok(());
}
self.set_model_filter_mode(filter);
if self.available_providers.len() <= 1 {
self.set_input_mode(InputMode::ModelSelection);
self.ensure_valid_model_selection();
@@ -10866,3 +11202,32 @@ fn configure_textarea_defaults(textarea: &mut TextArea<'static>) {
textarea.set_cursor_style(Style::default());
textarea.set_cursor_line_style(Style::default());
}
impl MessageState for ChatApp {}
#[async_trait]
impl UiRuntime for ChatApp {
async fn handle_ui_event(&mut self, event: Event) -> Result<AppState> {
ChatApp::handle_event(self, event).await
}
async fn handle_session_event(&mut self, event: SessionEvent) -> Result<()> {
ChatApp::handle_session_event(self, event).await
}
async fn process_pending_llm_request(&mut self) -> Result<()> {
ChatApp::process_pending_llm_request(self).await
}
async fn process_pending_tool_execution(&mut self) -> Result<()> {
ChatApp::process_pending_tool_execution(self).await
}
fn advance_loading_animation(&mut self) {
ChatApp::advance_loading_animation(self);
}
fn streaming_count(&self) -> usize {
ChatApp::streaming_count(self)
}
}
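The scope-status aggregation in `provider_overall_status` above follows a simple precedence rule: any `Unavailable` scope marks the whole provider unavailable; otherwise any `Unknown` scope downgrades it to `RequiresSetup`. A minimal standalone sketch of that rule — the enum names mirror the diff, but the free `aggregate` function and slice input are illustrative stand-ins for the real method on `ChatApp`:

```rust
// Stand-ins mirroring the enums used by `provider_overall_status`.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ModelAvailabilityState { Available, Unavailable, Unknown }

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ProviderStatus { Available, Unavailable, RequiresSetup }

// Illustrative free function; the real logic iterates a per-scope status map.
fn aggregate(states: &[ModelAvailabilityState]) -> ProviderStatus {
    let mut saw_unknown = false;
    for state in states {
        match state {
            // One unavailable scope marks the whole provider unavailable.
            ModelAvailabilityState::Unavailable => return ProviderStatus::Unavailable,
            ModelAvailabilityState::Unknown => saw_unknown = true,
            ModelAvailabilityState::Available => {}
        }
    }
    if saw_unknown { ProviderStatus::RequiresSetup } else { ProviderStatus::Available }
}

fn main() {
    use ModelAvailabilityState::*;
    assert_eq!(aggregate(&[Available, Available]), ProviderStatus::Available);
    assert_eq!(aggregate(&[Available, Unknown]), ProviderStatus::RequiresSetup);
    assert_eq!(aggregate(&[Unknown, Unavailable]), ProviderStatus::Unavailable);
    println!("ok");
}
```

Ordering matters here: `Unavailable` short-circuits, so a provider with one dead scope is never reported as merely needing setup.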

View File

@@ -148,6 +148,10 @@ const COMMANDS: &[CommandSpec] = &[
keyword: "models --cloud",
description: "Open model picker focused on cloud models",
},
CommandSpec {
keyword: "models --available",
description: "Open model picker showing available models",
},
CommandSpec {
keyword: "new",
description: "Start a new conversation",

View File
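The new `models --available` command above maps to `FilterMode::Available` in the picker. The two predicates that drive it, `filter_allows_scope` (whether a scope header renders at all) and `filter_scope_allows_models` (whether that scope's models are listed), can be sketched standalone like this — hypothetical free functions with local stand-in types; the real methods live on `ChatApp`:

```rust
// Local stand-ins for the picker's filter types.
#[derive(Clone, Copy, PartialEq)]
enum FilterMode { All, LocalOnly, CloudOnly, Available }

#[derive(Clone, PartialEq)]
enum Scope { Local, Cloud, Other(String) }

#[derive(Clone, Copy, PartialEq)]
enum Availability { Available, Unavailable, Unknown }

// Headers stay visible under `Available` so a filtered-out scope can still
// show an explanatory "unavailable"/"setup required" row.
fn allows_scope(mode: FilterMode, scope: &Scope) -> bool {
    match mode {
        FilterMode::All | FilterMode::Available => true,
        FilterMode::LocalOnly => matches!(scope, Scope::Local),
        FilterMode::CloudOnly => matches!(scope, Scope::Cloud),
    }
}

// Models are listed only when the scope passes the mode's own criterion.
fn allows_models(mode: FilterMode, scope: &Scope, status: Availability) -> bool {
    match mode {
        FilterMode::Available => status == Availability::Available,
        FilterMode::LocalOnly => matches!(scope, Scope::Local),
        FilterMode::CloudOnly => matches!(scope, Scope::Cloud),
        FilterMode::All => true,
    }
}

fn main() {
    assert!(allows_scope(FilterMode::Available, &Scope::Cloud));
    assert!(!allows_models(FilterMode::Available, &Scope::Cloud, Availability::Unknown));
    assert!(!allows_scope(FilterMode::LocalOnly, &Scope::Cloud));
    println!("ok");
}
```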

@@ -17,6 +17,22 @@ pub enum Event {
Tick,
}
/// Convert a raw crossterm event into an application event.
pub fn from_crossterm_event(raw: crossterm::event::Event) -> Option<Event> {
match raw {
crossterm::event::Event::Key(key) => {
if key.kind == KeyEventKind::Press {
Some(Event::Key(key))
} else {
None
}
}
crossterm::event::Event::Resize(width, height) => Some(Event::Resize(width, height)),
crossterm::event::Event::Paste(text) => Some(Event::Paste(text)),
_ => None,
}
}
/// Event handler that captures terminal events and sends them to the application
pub struct EventHandler {
sender: mpsc::UnboundedSender<Event>,
@@ -52,20 +68,8 @@ impl EventHandler {
if event::poll(timeout).unwrap_or(false) {
match event::read() {
Ok(event) => {
match event {
crossterm::event::Event::Key(key) => {
// Only handle KeyEventKind::Press to avoid duplicate events
if key.kind == KeyEventKind::Press {
let _ = self.sender.send(Event::Key(key));
}
}
crossterm::event::Event::Resize(width, height) => {
let _ = self.sender.send(Event::Resize(width, height));
}
crossterm::event::Event::Paste(text) => {
let _ = self.sender.send(Event::Paste(text));
}
_ => {}
if let Some(converted) = from_crossterm_event(event) {
let _ = self.sender.send(converted);
}
}
Err(_) => {

View File
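`from_crossterm_event` centralizes the press-only key filtering that the `EventHandler` loop previously inlined, so both the handler and any other caller drop Release/Repeat events the same way. A simplified mirror of the conversion, with local stand-ins for the crossterm types (the real function also forwards `Paste`):

```rust
// Minimal stand-ins for crossterm's event types.
#[derive(Debug, PartialEq)]
enum KeyKind { Press, Release }

#[derive(Debug, PartialEq)]
enum RawEvent { Key { kind: KeyKind, code: char }, Resize(u16, u16), FocusGained }

#[derive(Debug, PartialEq)]
enum Event { Key(char), Resize(u16, u16) }

fn convert(raw: RawEvent) -> Option<Event> {
    match raw {
        // Only key *presses* pass, avoiding duplicate keystrokes on
        // terminals that also report Release/Repeat.
        RawEvent::Key { kind: KeyKind::Press, code } => Some(Event::Key(code)),
        RawEvent::Key { .. } => None,
        RawEvent::Resize(w, h) => Some(Event::Resize(w, h)),
        // Focus, mouse, and other events are ignored.
        _ => None,
    }
}

fn main() {
    assert_eq!(convert(RawEvent::Key { kind: KeyKind::Press, code: 'q' }), Some(Event::Key('q')));
    assert_eq!(convert(RawEvent::Key { kind: KeyKind::Release, code: 'q' }), None);
    assert_eq!(convert(RawEvent::Resize(80, 24)), Some(Event::Resize(80, 24)));
    println!("ok");
}
```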

@@ -14,6 +14,7 @@
//! - `events`: Event handling for user input and other asynchronous actions.
//! - `ui`: The rendering logic for all TUI components.
pub mod app;
pub mod chat_app;
pub mod code_app;
pub mod commands;
@@ -26,6 +27,7 @@ pub mod state;
pub mod toast;
pub mod tui_controller;
pub mod ui;
pub mod widgets;
pub use chat_app::{ChatApp, SessionEvent};
pub use code_app::CodeApp;

View File

@@ -11,19 +11,16 @@ use tui_textarea::TextArea;
use unicode_segmentation::UnicodeSegmentation;
use unicode_width::UnicodeWidthStr;
use crate::chat_app::{
ChatApp, HELP_TAB_COUNT, MIN_MESSAGE_CARD_WIDTH, MessageRenderContext, ModelScope,
ModelSelectorItemKind,
};
use crate::chat_app::{ChatApp, HELP_TAB_COUNT, MIN_MESSAGE_CARD_WIDTH, MessageRenderContext};
use crate::highlight;
use crate::state::{
CodePane, EditorTab, FileFilterMode, FileNode, LayoutNode, PaletteGroup, PaneId,
RepoSearchRowKind, SplitAxis, VisibleFileEntry,
};
use crate::toast::{Toast, ToastLevel};
use owlen_core::model::DetailedModelInfo;
use crate::widgets::model_picker::render_model_picker;
use owlen_core::theme::Theme;
use owlen_core::types::{ModelInfo, Role};
use owlen_core::types::Role;
use owlen_core::ui::{FocusedPanel, InputMode, RoleLabelDisplay};
use textwrap::wrap;
@@ -337,7 +334,7 @@ pub fn render_chat(frame: &mut Frame<'_>, app: &mut ChatApp) {
} else {
match app.mode() {
InputMode::ProviderSelection => render_provider_selector(frame, app),
InputMode::ModelSelection => render_model_selector(frame, app),
InputMode::ModelSelection => render_model_picker(frame, app),
InputMode::Help => render_help(frame, app),
InputMode::SessionBrowser => render_session_browser(frame, app),
InputMode::ThemeBrowser => render_theme_browser(frame, app),
@@ -2653,429 +2650,6 @@ fn render_provider_selector(frame: &mut Frame<'_>, app: &ChatApp) {
frame.render_stateful_widget(list, area, &mut state);
}
fn model_badge_icons(model: &ModelInfo) -> Vec<&'static str> {
let mut badges = Vec::new();
if model.supports_tools {
badges.push("🔧");
}
if model_has_feature(model, &["think", "reason"]) {
badges.push("🧠");
}
if model_has_feature(model, &["vision", "multimodal", "image"]) {
badges.push("👁️");
}
if model_has_feature(model, &["audio", "speech", "voice"]) {
badges.push("🎧");
}
badges
}
fn model_has_feature(model: &ModelInfo, keywords: &[&str]) -> bool {
let name_lower = model.name.to_ascii_lowercase();
if keywords.iter().any(|kw| name_lower.contains(kw)) {
return true;
}
if let Some(description) = &model.description {
let description_lower = description.to_ascii_lowercase();
if keywords.iter().any(|kw| description_lower.contains(kw)) {
return true;
}
}
model.capabilities.iter().any(|cap| {
let lower = cap.to_ascii_lowercase();
keywords.iter().any(|kw| lower.contains(kw))
})
}
fn render_model_selector(frame: &mut Frame<'_>, app: &ChatApp) {
let theme = app.theme();
let area = frame.area();
if area.width == 0 || area.height == 0 {
return;
}
let selector_items = app.model_selector_items();
if selector_items.is_empty() {
return;
}
let max_width: u16 = 80;
let min_width: u16 = 50;
let mut width = area.width.min(max_width);
if area.width >= min_width {
width = width.max(min_width);
}
width = width.max(1);
let mut height = (selector_items.len().clamp(1, 10) as u16) * 3 + 6;
height = height.clamp(6, area.height);
let x = area.x + (area.width.saturating_sub(width)) / 2;
let mut y = area.y + (area.height.saturating_sub(height)) / 3;
if y < area.y {
y = area.y;
}
let popup_area = Rect::new(x, y, width, height);
frame.render_widget(Clear, popup_area);
let title_line = Line::from(vec![
Span::styled(
" Model Selector ",
Style::default().fg(theme.info).add_modifier(Modifier::BOLD),
),
Span::styled(
format!("· Provider: {}", app.selected_provider),
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::DIM),
),
]);
let block = Block::default()
.title(title_line)
.borders(Borders::ALL)
.border_style(Style::default().fg(theme.info))
.style(Style::default().bg(theme.background).fg(theme.text));
let inner = block.inner(popup_area);
frame.render_widget(block, popup_area);
if inner.width == 0 || inner.height == 0 {
return;
}
let highlight_symbol = " ";
let highlight_width = UnicodeWidthStr::width(highlight_symbol);
let max_line_width = inner.width.saturating_sub(highlight_width as u16).max(1) as usize;
let layout = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Min(4), Constraint::Length(2)])
.split(inner);
let active_model_id = app.selected_model();
let mut items: Vec<ListItem> = Vec::new();
for item in selector_items.iter() {
match item.kind() {
ModelSelectorItemKind::Header { provider, expanded } => {
let marker = if *expanded { "" } else { "" };
let line = clip_line_to_width(
Line::from(vec![
Span::styled(
marker,
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::BOLD),
),
Span::raw(" "),
Span::styled(
provider.clone(),
Style::default()
.fg(theme.mode_command)
.add_modifier(Modifier::BOLD),
),
]),
max_line_width,
);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Scope { label, scope, .. } => {
let (fg, modifier) = match scope {
ModelScope::Local => (theme.mode_normal, Modifier::BOLD),
ModelScope::Cloud => (theme.mode_help, Modifier::BOLD),
ModelScope::Other(_) => (theme.placeholder, Modifier::ITALIC),
};
let style = Style::default().fg(fg).add_modifier(modifier);
let line = clip_line_to_width(
Line::from(Span::styled(format!(" {label}"), style)),
max_line_width,
);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Model { model_index, .. } => {
let mut lines: Vec<Line<'static>> = Vec::new();
if let Some(model) = app.model_info_by_index(*model_index) {
let badges = model_badge_icons(model);
let detail = app.cached_model_detail(&model.id);
let (title, metadata) = build_model_selector_label(
model,
detail,
&badges,
model.id == active_model_id,
);
lines.push(clip_line_to_width(
Line::from(Span::styled(title, Style::default().fg(theme.text))),
max_line_width,
));
if let Some(meta) = metadata {
lines.push(clip_line_to_width(
Line::from(Span::styled(
meta,
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::DIM),
)),
max_line_width,
));
}
} else {
lines.push(clip_line_to_width(
Line::from(Span::styled(
" <model unavailable>",
Style::default().fg(theme.error),
)),
max_line_width,
));
}
items.push(ListItem::new(lines).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Empty { provider, message } => {
let text = message
.as_ref()
.map(|msg| format!(" {msg}"))
.unwrap_or_else(|| format!(" (no models configured for {provider})"));
let is_unavailable = message
.as_ref()
.map(|msg| msg.to_ascii_lowercase().contains("unavailable"))
.unwrap_or(false);
let style = if is_unavailable {
Style::default()
.fg(theme.error)
.add_modifier(Modifier::BOLD)
} else {
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::DIM | Modifier::ITALIC)
};
let line =
clip_line_to_width(Line::from(Span::styled(text, style)), max_line_width);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
}
}
let highlight_style = Style::default()
.bg(theme.selection_bg)
.fg(theme.selection_fg)
.add_modifier(Modifier::BOLD);
let mut state = ListState::default();
state.select(app.selected_model_item());
let list = List::new(items)
.highlight_style(highlight_style)
.highlight_symbol(" ")
.style(Style::default().bg(theme.background).fg(theme.text));
frame.render_stateful_widget(list, layout[0], &mut state);
let footer = Paragraph::new(Line::from(Span::styled(
"Enter: select · Space: toggle provider · ←/→ collapse/expand · Esc: cancel",
Style::default().fg(theme.placeholder),
)))
.alignment(Alignment::Center)
.style(Style::default().bg(theme.background).fg(theme.placeholder));
frame.render_widget(footer, layout[1]);
}
fn clip_line_to_width(line: Line<'_>, max_width: usize) -> Line<'static> {
if max_width == 0 {
return Line::from(Vec::<Span<'static>>::new());
}
let mut used = 0usize;
let mut clipped: Vec<Span<'static>> = Vec::new();
for span in line.spans {
if used >= max_width {
break;
}
let text = span.content.to_string();
let span_width = UnicodeWidthStr::width(text.as_str());
if used + span_width <= max_width {
if !text.is_empty() {
clipped.push(Span::styled(text, span.style));
}
used += span_width;
} else {
let mut buf = String::new();
for grapheme in span.content.as_ref().graphemes(true) {
let g_width = UnicodeWidthStr::width(grapheme);
if g_width == 0 {
buf.push_str(grapheme);
continue;
}
if used + g_width > max_width {
break;
}
buf.push_str(grapheme);
used += g_width;
}
if !buf.is_empty() {
clipped.push(Span::styled(buf, span.style));
}
break;
}
}
Line::from(clipped)
}
fn build_model_selector_label(
model: &ModelInfo,
detail: Option<&DetailedModelInfo>,
badges: &[&'static str],
is_current: bool,
) -> (String, Option<String>) {
let scope = ChatApp::model_scope_from_capabilities(model);
let scope_icon = ChatApp::scope_icon(&scope);
let scope_label = ChatApp::scope_display_name(&scope);
let mut display_name = if model.name.trim().is_empty() {
model.id.clone()
} else {
model.name.clone()
};
if !display_name.eq_ignore_ascii_case(&model.id) {
display_name.push_str(&format!(" · {}", model.id));
}
let mut title = format!(" {} {}", scope_icon, display_name);
if !badges.is_empty() {
title.push(' ');
title.push_str(&badges.join(" "));
}
if is_current {
title.push_str("");
}
let mut meta_parts: Vec<String> = Vec::new();
let mut seen_meta: HashSet<String> = HashSet::new();
let mut push_meta = |value: String| {
let trimmed = value.trim();
if trimmed.is_empty() {
return;
}
let key = trimmed.to_ascii_lowercase();
if seen_meta.insert(key) {
meta_parts.push(trimmed.to_string());
}
};
if !scope_label.eq_ignore_ascii_case("unknown") {
push_meta(scope_label.clone());
}
if let Some(detail) = detail {
if let Some(ctx) = detail.context_length {
push_meta(format!("max tokens {}", ctx));
} else if let Some(ctx) = model.context_window {
push_meta(format!("max tokens {}", ctx));
}
if let Some(parameters) = detail
.parameter_size
.as_ref()
.or(detail.parameters.as_ref())
&& !parameters.trim().is_empty()
{
push_meta(parameters.trim().to_string());
}
if let Some(arch) = detail.architecture.as_deref() {
let trimmed = arch.trim();
if !trimmed.is_empty() {
push_meta(format!("arch {}", trimmed));
}
} else if let Some(family) = detail.family.as_deref() {
let trimmed = family.trim();
if !trimmed.is_empty() {
push_meta(format!("family {}", trimmed));
}
} else if !detail.families.is_empty() {
let families = detail
.families
.iter()
.map(|f| f.trim())
.filter(|f| !f.is_empty())
.take(2)
.collect::<Vec<_>>()
.join("/");
if !families.is_empty() {
push_meta(format!("family {}", families));
}
}
if let Some(embedding) = detail.embedding_length {
push_meta(format!("embedding {}", embedding));
}
if let Some(size) = detail.size {
push_meta(format_short_size(size));
}
if let Some(quant) = detail
.quantization
.as_ref()
.filter(|q| !q.trim().is_empty())
{
push_meta(format!("quant {}", quant.trim()));
}
} else if let Some(ctx) = model.context_window {
push_meta(format!("max tokens {}", ctx));
}
if let Some(desc) = model.description.as_deref() {
let trimmed = desc.trim();
if !trimmed.is_empty() {
meta_parts.push(ellipsize(trimmed, 80));
}
}
let metadata = if meta_parts.is_empty() {
None
} else {
Some(format!(" {}", meta_parts.join("")))
};
(title, metadata)
}
fn ellipsize(text: &str, max_chars: usize) -> String {
if text.chars().count() <= max_chars {
return text.to_string();
}
let target = max_chars.saturating_sub(1).max(1);
let mut truncated = String::new();
for ch in text.chars().take(target) {
truncated.push(ch);
}
truncated.push('…');
truncated
}
fn format_short_size(bytes: u64) -> String {
if bytes >= 1_000_000_000 {
format!("{:.1} GB", bytes as f64 / 1_000_000_000_f64)
} else if bytes >= 1_000_000 {
format!("{:.1} MB", bytes as f64 / 1_000_000_f64)
} else if bytes >= 1_000 {
format!("{:.1} KB", bytes as f64 / 1_000_f64)
} else {
format!("{} B", bytes)
}
}
fn render_consent_dialog(frame: &mut Frame<'_>, app: &ChatApp) {
let theme = app.theme();
@@ -3232,67 +2806,6 @@ fn render_consent_dialog(frame: &mut Frame<'_>, app: &ChatApp) {
frame.render_widget(paragraph, area);
}
#[cfg(test)]
mod tests {
use super::*;
fn model_with(capabilities: Vec<&str>, description: Option<&str>) -> ModelInfo {
ModelInfo {
id: "model".into(),
name: "model".into(),
description: description.map(|s| s.to_string()),
provider: "test".into(),
context_window: None,
capabilities: capabilities.into_iter().map(|s| s.to_string()).collect(),
supports_tools: false,
}
}
#[test]
fn badges_include_tool_icon() {
let model = ModelInfo {
id: "tool-model".into(),
name: "tool-model".into(),
description: None,
provider: "test".into(),
context_window: None,
capabilities: vec![],
supports_tools: true,
};
assert!(model_badge_icons(&model).contains(&"🔧"));
}
#[test]
fn badges_detect_thinking_capability() {
let model = model_with(vec!["Thinking"], None);
let icons = model_badge_icons(&model);
assert!(icons.contains(&"🧠"));
}
#[test]
fn badges_detect_vision_from_description() {
let model = model_with(vec!["chat"], Some("Supports multimodal vision"));
let icons = model_badge_icons(&model);
assert!(icons.contains(&"👁️"));
}
#[test]
fn badges_detect_audio_from_name() {
let model = ModelInfo {
id: "voice-specialist".into(),
name: "Voice-Specialist".into(),
description: None,
provider: "test".into(),
context_window: None,
capabilities: vec![],
supports_tools: false,
};
let icons = model_badge_icons(&model);
assert!(icons.contains(&"🎧"));
}
}
fn render_privacy_settings(frame: &mut Frame<'_>, area: Rect, app: &ChatApp) {
let theme = app.theme();
let config = app.config();

View File
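One detail worth noting about the `format_short_size` helper shared by both renderers: its thresholds are decimal (SI), so a model shown as "4.1 GB" is about 3.8 GiB. A self-contained copy of the function as it appears in the diff, with sample outputs:

```rust
// Same thresholds as the diff: decimal (SI) units, one decimal place.
fn format_short_size(bytes: u64) -> String {
    if bytes >= 1_000_000_000 {
        format!("{:.1} GB", bytes as f64 / 1_000_000_000_f64)
    } else if bytes >= 1_000_000 {
        format!("{:.1} MB", bytes as f64 / 1_000_000_f64)
    } else if bytes >= 1_000 {
        format!("{:.1} KB", bytes as f64 / 1_000_f64)
    } else {
        format!("{} B", bytes)
    }
}

fn main() {
    assert_eq!(format_short_size(4_100_000_000), "4.1 GB");
    assert_eq!(format_short_size(1_500), "1.5 KB");
    assert_eq!(format_short_size(512), "512 B");
    println!("ok");
}
```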

@@ -0,0 +1,3 @@
//! Reusable widgets composed specifically for the Owlen TUI.
pub mod model_picker;

View File

@@ -0,0 +1,621 @@
use std::collections::HashSet;
use owlen_core::provider::{AnnotatedModelInfo, ProviderStatus, ProviderType};
use owlen_core::types::ModelInfo;
use ratatui::{
Frame,
layout::{Constraint, Direction, Layout, Rect},
style::{Color, Modifier, Style},
text::{Line, Span},
widgets::{Block, Borders, Clear, List, ListItem, ListState, Paragraph},
};
use unicode_segmentation::UnicodeSegmentation;
use unicode_width::UnicodeWidthStr;
use crate::chat_app::{ChatApp, ModelAvailabilityState, ModelScope, ModelSelectorItemKind};
/// Filtering modes for the model picker popup.
#[derive(Debug, Default, Clone, Copy, PartialEq, Eq)]
pub enum FilterMode {
#[default]
All,
LocalOnly,
CloudOnly,
Available,
}
pub fn render_model_picker(frame: &mut Frame<'_>, app: &ChatApp) {
let theme = app.theme();
let area = frame.area();
if area.width == 0 || area.height == 0 {
return;
}
let selector_items = app.model_selector_items();
if selector_items.is_empty() {
return;
}
let max_width: u16 = 80;
let min_width: u16 = 50;
let mut width = area.width.min(max_width);
if area.width >= min_width {
width = width.max(min_width);
}
width = width.max(1);
let mut height = (selector_items.len().clamp(1, 10) as u16) * 3 + 6;
height = height.clamp(6, area.height);
let x = area.x + (area.width.saturating_sub(width)) / 2;
let mut y = area.y + (area.height.saturating_sub(height)) / 3;
if y < area.y {
y = area.y;
}
let popup_area = Rect::new(x, y, width, height);
frame.render_widget(Clear, popup_area);
let mut title_spans = vec![
Span::styled(
" Model Selector ",
Style::default().fg(theme.info).add_modifier(Modifier::BOLD),
),
Span::styled(
format!("· Provider: {}", app.selected_provider),
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::DIM),
),
];
if app.model_filter_mode() != FilterMode::All {
title_spans.push(Span::raw(" "));
title_spans.push(filter_badge(app.model_filter_mode(), theme));
}
let block = Block::default()
.title(Line::from(title_spans))
.borders(Borders::ALL)
.border_style(Style::default().fg(theme.info))
.style(Style::default().bg(theme.background).fg(theme.text));
let inner = block.inner(popup_area);
frame.render_widget(block, popup_area);
if inner.width == 0 || inner.height == 0 {
return;
}
let highlight_symbol = " ";
let highlight_width = UnicodeWidthStr::width(highlight_symbol);
let max_line_width = inner.width.saturating_sub(highlight_width as u16).max(1) as usize;
let layout = Layout::default()
.direction(Direction::Vertical)
.constraints([Constraint::Min(4), Constraint::Length(2)])
.split(inner);
let active_model_id = app.selected_model();
let annotated = app.annotated_models();
let mut items: Vec<ListItem> = Vec::new();
for item in selector_items.iter() {
match item.kind() {
ModelSelectorItemKind::Header {
provider,
expanded,
status,
provider_type,
} => {
let mut spans = Vec::new();
spans.push(status_icon(*status, theme));
spans.push(Span::raw(" "));
spans.push(Span::styled(
provider.clone(),
Style::default()
.fg(theme.mode_command)
.add_modifier(Modifier::BOLD),
));
spans.push(Span::raw(" "));
spans.push(provider_type_badge(*provider_type, theme));
spans.push(Span::raw(" "));
spans.push(Span::styled(
if *expanded { "" } else { "" },
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::DIM),
));
let line = clip_line_to_width(Line::from(spans), max_line_width);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Scope { label, status, .. } => {
let (style, icon) = scope_status_style(*status, theme);
let line = clip_line_to_width(
Line::from(vec![
Span::styled(icon, style),
Span::raw(" "),
Span::styled(label.clone(), style),
]),
max_line_width,
);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Model { model_index, .. } => {
let mut lines: Vec<Line<'static>> = Vec::new();
if let Some(model) = app.model_info_by_index(*model_index) {
let badges = model_badge_icons(model);
let detail = app.cached_model_detail(&model.id);
let annotated_model = annotated.get(*model_index);
let (title, metadata) = build_model_selector_lines(
theme,
model,
annotated_model,
&badges,
detail,
model.id == active_model_id,
);
lines.push(clip_line_to_width(title, max_line_width));
if let Some(meta) = metadata {
lines.push(clip_line_to_width(meta, max_line_width));
}
} else {
lines.push(clip_line_to_width(
Line::from(Span::styled(
" <model unavailable>",
Style::default().fg(theme.error),
)),
max_line_width,
));
}
items.push(ListItem::new(lines).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Empty {
message, status, ..
} => {
let (style, icon) = empty_status_style(*status, theme);
let msg = message
.as_ref()
.map(|msg| msg.as_str())
.unwrap_or("(no models configured)");
let line = clip_line_to_width(
Line::from(vec![
Span::styled(icon, style),
Span::raw(" "),
Span::styled(format!(" {}", msg), style),
]),
max_line_width,
);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
}
}
let list = List::new(items)
.highlight_style(
Style::default()
.bg(theme.selection_bg)
.fg(theme.selection_fg)
.add_modifier(Modifier::BOLD),
)
.highlight_symbol(" ");
let mut state = ListState::default();
    state.select(app.selected_model_item());
frame.render_stateful_widget(list, layout[0], &mut state);
let footer = Paragraph::new(Line::from(Span::styled(
"Enter: select · Space: toggle provider · ←/→ collapse/expand · Esc: cancel",
Style::default().fg(theme.placeholder),
)))
.alignment(ratatui::layout::Alignment::Center)
.style(Style::default().bg(theme.background).fg(theme.placeholder));
frame.render_widget(footer, layout[1]);
}
fn status_icon(status: ProviderStatus, theme: &owlen_core::theme::Theme) -> Span<'static> {
let (symbol, color) = match status {
ProviderStatus::Available => ("", theme.info),
ProviderStatus::Unavailable => ("", theme.error),
ProviderStatus::RequiresSetup => ("", Color::Yellow),
};
Span::styled(
symbol,
Style::default().fg(color).add_modifier(Modifier::BOLD),
)
}
fn provider_type_badge(
provider_type: ProviderType,
theme: &owlen_core::theme::Theme,
) -> Span<'static> {
let (label, color) = match provider_type {
ProviderType::Local => ("[Local]", theme.mode_normal),
ProviderType::Cloud => ("[Cloud]", theme.mode_help),
};
Span::styled(
label,
Style::default().fg(color).add_modifier(Modifier::BOLD),
)
}
fn scope_status_style(
status: ModelAvailabilityState,
theme: &owlen_core::theme::Theme,
) -> (Style, &'static str) {
match status {
ModelAvailabilityState::Available => (
Style::default().fg(theme.info).add_modifier(Modifier::BOLD),
"",
),
ModelAvailabilityState::Unavailable => (
Style::default()
.fg(theme.error)
.add_modifier(Modifier::BOLD),
"",
),
ModelAvailabilityState::Unknown => (
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
"",
),
}
}
fn empty_status_style(
status: Option<ModelAvailabilityState>,
theme: &owlen_core::theme::Theme,
) -> (Style, &'static str) {
match status.unwrap_or(ModelAvailabilityState::Unknown) {
ModelAvailabilityState::Available => (
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::DIM),
"",
),
ModelAvailabilityState::Unavailable => (
Style::default()
.fg(theme.error)
.add_modifier(Modifier::BOLD),
"",
),
ModelAvailabilityState::Unknown => (
Style::default()
.fg(Color::Yellow)
.add_modifier(Modifier::BOLD),
"",
),
}
}
fn filter_badge(mode: FilterMode, theme: &owlen_core::theme::Theme) -> Span<'static> {
let label = match mode {
FilterMode::All => return Span::raw(""),
FilterMode::LocalOnly => "Local",
FilterMode::CloudOnly => "Cloud",
FilterMode::Available => "Available",
};
Span::styled(
format!("[{label}]"),
Style::default()
.fg(theme.mode_provider_selection)
.add_modifier(Modifier::BOLD),
)
}
fn build_model_selector_lines(
theme: &owlen_core::theme::Theme,
model: &ModelInfo,
annotated: Option<&AnnotatedModelInfo>,
badges: &[&'static str],
detail: Option<&owlen_core::model::DetailedModelInfo>,
is_current: bool,
) -> (Line<'static>, Option<Line<'static>>) {
let provider_type = annotated
.map(|info| info.model.provider.provider_type)
.unwrap_or_else(|| match ChatApp::model_scope_from_capabilities(model) {
ModelScope::Cloud => ProviderType::Cloud,
ModelScope::Local => ProviderType::Local,
ModelScope::Other(_) => {
if model.provider.to_ascii_lowercase().contains("cloud") {
ProviderType::Cloud
} else {
ProviderType::Local
}
}
});
let mut spans: Vec<Span<'static>> = Vec::new();
spans.push(Span::raw(" "));
spans.push(provider_type_badge(provider_type, theme));
spans.push(Span::raw(" "));
let mut display_name = if model.name.trim().is_empty() {
model.id.clone()
} else {
model.name.clone()
};
if !display_name.eq_ignore_ascii_case(&model.id) {
display_name.push_str(&format!(" · {}", model.id));
}
spans.push(Span::styled(
display_name,
Style::default().fg(theme.text).add_modifier(Modifier::BOLD),
));
if !badges.is_empty() {
spans.push(Span::raw(" "));
spans.push(Span::styled(
badges.join(" "),
Style::default().fg(theme.placeholder),
));
}
if is_current {
spans.push(Span::raw(" "));
spans.push(Span::styled(
"",
Style::default().fg(theme.info).add_modifier(Modifier::BOLD),
));
}
let mut meta_parts: Vec<String> = Vec::new();
let mut seen_meta: HashSet<String> = HashSet::new();
let mut push_meta = |value: String| {
let trimmed = value.trim();
if trimmed.is_empty() {
return;
}
let key = trimmed.to_ascii_lowercase();
if seen_meta.insert(key) {
meta_parts.push(trimmed.to_string());
}
};
let scope = ChatApp::model_scope_from_capabilities(model);
let scope_label = ChatApp::scope_display_name(&scope);
if !scope_label.eq_ignore_ascii_case("unknown") {
push_meta(scope_label.clone());
}
if let Some(detail) = detail {
if let Some(ctx) = detail.context_length {
push_meta(format!("max tokens {}", ctx));
} else if let Some(ctx) = model.context_window {
push_meta(format!("max tokens {}", ctx));
}
if let Some(parameters) = detail
.parameter_size
.as_ref()
.or(detail.parameters.as_ref())
&& !parameters.trim().is_empty()
{
push_meta(parameters.trim().to_string());
}
if let Some(arch) = detail.architecture.as_deref() {
let trimmed = arch.trim();
if !trimmed.is_empty() {
push_meta(format!("arch {}", trimmed));
}
} else if let Some(family) = detail.family.as_deref() {
let trimmed = family.trim();
if !trimmed.is_empty() {
push_meta(format!("family {}", trimmed));
}
} else if !detail.families.is_empty() {
let families = detail
.families
.iter()
.map(|f| f.trim())
.filter(|f| !f.is_empty())
.take(2)
.collect::<Vec<_>>()
.join("/");
if !families.is_empty() {
push_meta(format!("family {}", families));
}
}
if let Some(embedding) = detail.embedding_length {
push_meta(format!("embedding {}", embedding));
}
if let Some(size) = detail.size {
push_meta(format_short_size(size));
}
if let Some(quant) = detail
.quantization
.as_ref()
.filter(|q| !q.trim().is_empty())
{
push_meta(format!("quant {}", quant.trim()));
}
} else if let Some(ctx) = model.context_window {
push_meta(format!("max tokens {}", ctx));
}
if let Some(desc) = model.description.as_deref() {
let trimmed = desc.trim();
if !trimmed.is_empty() {
meta_parts.push(ellipsize(trimmed, 80));
}
}
let metadata = if meta_parts.is_empty() {
None
} else {
Some(Line::from(vec![Span::styled(
format!(" {}", meta_parts.join(" · ")),
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::DIM),
)]))
};
(Line::from(spans), metadata)
}
fn clip_line_to_width(line: Line<'_>, max_width: usize) -> Line<'static> {
if max_width == 0 {
return Line::from(Vec::<Span<'static>>::new());
}
let mut used = 0usize;
let mut clipped: Vec<Span<'static>> = Vec::new();
for span in line.spans {
if used >= max_width {
break;
}
let text = span.content.to_string();
let span_width = UnicodeWidthStr::width(text.as_str());
if used + span_width <= max_width {
if !text.is_empty() {
clipped.push(Span::styled(text, span.style));
}
used += span_width;
} else {
let mut buf = String::new();
for grapheme in span.content.as_ref().graphemes(true) {
let g_width = UnicodeWidthStr::width(grapheme);
if g_width == 0 {
buf.push_str(grapheme);
continue;
}
if used + g_width > max_width {
break;
}
buf.push_str(grapheme);
used += g_width;
}
if !buf.is_empty() {
clipped.push(Span::styled(buf, span.style));
}
break;
}
}
Line::from(clipped)
}
fn ellipsize(text: &str, max_chars: usize) -> String {
if text.chars().count() <= max_chars {
return text.to_string();
}
let target = max_chars.saturating_sub(1).max(1);
let mut truncated = String::new();
for ch in text.chars().take(target) {
truncated.push(ch);
}
truncated.push('…');
truncated
}
fn model_badge_icons(model: &ModelInfo) -> Vec<&'static str> {
let mut badges = Vec::new();
if model.supports_tools {
badges.push("🔧");
}
if model_has_feature(model, &["think", "reason"]) {
badges.push("🧠");
}
if model_has_feature(model, &["vision", "multimodal", "image"]) {
badges.push("👁️");
}
if model_has_feature(model, &["audio", "speech", "voice"]) {
badges.push("🎧");
}
badges
}
fn model_has_feature(model: &ModelInfo, keywords: &[&str]) -> bool {
let name_lower = model.name.to_ascii_lowercase();
if keywords.iter().any(|kw| name_lower.contains(kw)) {
return true;
}
if let Some(description) = &model.description {
let description_lower = description.to_ascii_lowercase();
if keywords.iter().any(|kw| description_lower.contains(kw)) {
return true;
}
}
if model.capabilities.iter().any(|cap| {
let lc = cap.to_ascii_lowercase();
keywords.iter().any(|kw| lc.contains(kw))
}) {
return true;
}
keywords
.iter()
.any(|kw| model.provider.to_ascii_lowercase().contains(kw))
}
fn format_short_size(bytes: u64) -> String {
if bytes >= 1_000_000_000 {
format!("{:.1} GB", bytes as f64 / 1_000_000_000_f64)
} else if bytes >= 1_000_000 {
format!("{:.1} MB", bytes as f64 / 1_000_000_f64)
} else if bytes >= 1_000 {
format!("{:.1} KB", bytes as f64 / 1_000_f64)
} else {
format!("{} B", bytes)
}
}
#[cfg(test)]
mod tests {
use super::*;
use owlen_core::types::ModelInfo;
fn model_with(capabilities: Vec<&str>, description: Option<&str>) -> ModelInfo {
ModelInfo {
id: "model".into(),
name: "model".into(),
description: description.map(|s| s.to_string()),
provider: "test".into(),
context_window: None,
capabilities: capabilities.into_iter().map(|s| s.to_string()).collect(),
supports_tools: false,
}
}
#[test]
fn model_badges_recognize_thinking_capability() {
let model = model_with(vec!["think"], None);
assert!(model_badge_icons(&model).contains(&"🧠"));
}
#[test]
fn model_badges_detect_tool_support() {
let mut model = model_with(vec![], None);
model.supports_tools = true;
let icons = model_badge_icons(&model);
assert!(icons.contains(&"🔧"));
}
#[test]
fn model_badges_detect_vision_capability() {
let model = model_with(vec![], Some("Supports vision tasks"));
let icons = model_badge_icons(&model);
assert!(icons.contains(&"👁️"));
}
#[test]
fn model_badges_detect_audio_capability() {
let model = model_with(vec!["audio"], None);
let icons = model_badge_icons(&model);
assert!(icons.contains(&"🎧"));
}
}

View File

@@ -0,0 +1,216 @@
use std::sync::{Arc, Mutex};
use std::time::Duration;
use anyhow::Result;
use async_trait::async_trait;
use futures_util::stream;
use owlen_core::provider::{
GenerateChunk, GenerateRequest, GenerateStream, ModelInfo, ModelProvider, ProviderMetadata,
ProviderStatus, ProviderType,
};
use owlen_core::state::AppState;
use owlen_tui::app::{self, App, MessageState, messages::AppMessage};
use tokio::sync::mpsc;
use tokio::task::{JoinHandle, yield_now};
use tokio::time::advance;
use uuid::Uuid;
#[derive(Clone)]
struct StatusProvider {
metadata: ProviderMetadata,
status: Arc<Mutex<ProviderStatus>>,
chunks: Arc<Vec<GenerateChunk>>,
}
impl StatusProvider {
fn new(status: ProviderStatus, chunks: Vec<GenerateChunk>) -> Self {
Self {
metadata: ProviderMetadata::new("stub", "Stub", ProviderType::Local, false),
status: Arc::new(Mutex::new(status)),
chunks: Arc::new(chunks),
}
}
fn set_status(&self, status: ProviderStatus) {
*self.status.lock().unwrap() = status;
}
}
#[async_trait]
impl ModelProvider for StatusProvider {
fn metadata(&self) -> &ProviderMetadata {
&self.metadata
}
async fn health_check(&self) -> Result<ProviderStatus, owlen_core::Error> {
Ok(*self.status.lock().unwrap())
}
async fn list_models(&self) -> Result<Vec<ModelInfo>, owlen_core::Error> {
Ok(vec![])
}
async fn generate_stream(
&self,
_request: GenerateRequest,
) -> Result<GenerateStream, owlen_core::Error> {
let items = Arc::clone(&self.chunks);
let stream_items = items.as_ref().clone();
Ok(Box::pin(stream::iter(stream_items.into_iter().map(Ok))))
}
}
#[derive(Default)]
struct RecordingState {
started: bool,
appended: bool,
completed: bool,
failed: bool,
refreshed: bool,
updated: bool,
provider_status: Option<ProviderStatus>,
}
impl MessageState for RecordingState {
fn start_generation(
&mut self,
_request_id: Uuid,
_provider_id: &str,
_request: &GenerateRequest,
) -> AppState {
self.started = true;
AppState::Running
}
fn append_chunk(&mut self, _request_id: Uuid, _chunk: &GenerateChunk) -> AppState {
self.appended = true;
AppState::Running
}
fn generation_complete(&mut self, _request_id: Uuid) -> AppState {
self.completed = true;
AppState::Running
}
fn generation_failed(&mut self, _request_id: Option<Uuid>, _message: &str) -> AppState {
self.failed = true;
AppState::Running
}
fn refresh_model_list(&mut self) -> AppState {
self.refreshed = true;
AppState::Running
}
fn update_model_list(&mut self) -> AppState {
self.updated = true;
AppState::Running
}
fn update_provider_status(&mut self, _provider_id: &str, status: ProviderStatus) -> AppState {
self.provider_status = Some(status);
AppState::Running
}
}
#[tokio::test]
async fn start_and_abort_generation_manage_active_state() {
let manager = Arc::new(owlen_core::provider::ProviderManager::default());
let provider = StatusProvider::new(
ProviderStatus::Available,
vec![
GenerateChunk::from_text("hello"),
GenerateChunk::final_chunk(),
],
);
manager.register_provider(Arc::new(provider.clone())).await;
let mut app = App::new(Arc::clone(&manager));
let request_id = app
.start_generation("stub", GenerateRequest::new("stub-model"))
.expect("start generation");
assert!(app.has_active_generation());
assert_ne!(request_id, Uuid::nil());
app.abort_active_generation();
assert!(!app.has_active_generation());
}
#[test]
fn handle_message_dispatches_variants() {
let manager = Arc::new(owlen_core::provider::ProviderManager::default());
let mut app = App::new(Arc::clone(&manager));
let mut state = RecordingState::default();
let request_id = Uuid::new_v4();
let _ = app.handle_message(
&mut state,
AppMessage::GenerateStart {
request_id,
provider_id: "stub".into(),
request: GenerateRequest::new("stub"),
},
);
let _ = app.handle_message(
&mut state,
AppMessage::GenerateChunk {
request_id,
chunk: GenerateChunk::from_text("chunk"),
},
);
let _ = app.handle_message(&mut state, AppMessage::GenerateComplete { request_id });
let _ = app.handle_message(
&mut state,
AppMessage::GenerateError {
request_id: Some(request_id),
message: "error".into(),
},
);
let _ = app.handle_message(&mut state, AppMessage::ModelsRefresh);
let _ = app.handle_message(&mut state, AppMessage::ModelsUpdated);
let _ = app.handle_message(
&mut state,
AppMessage::ProviderStatus {
provider_id: "stub".into(),
status: ProviderStatus::Available,
},
);
assert!(state.started);
assert!(state.appended);
assert!(state.completed);
assert!(state.failed);
assert!(state.refreshed);
assert!(state.updated);
assert!(matches!(
state.provider_status,
Some(ProviderStatus::Available)
));
}
#[tokio::test(start_paused = true)]
async fn background_worker_emits_status_changes() {
let manager = Arc::new(owlen_core::provider::ProviderManager::default());
let provider = StatusProvider::new(
ProviderStatus::Unavailable,
vec![GenerateChunk::final_chunk()],
);
manager.register_provider(Arc::new(provider.clone())).await;
let (tx, mut rx) = mpsc::unbounded_channel();
let worker: JoinHandle<()> = tokio::spawn(app::background_worker(Arc::clone(&manager), tx));
provider.set_status(ProviderStatus::Available);
advance(Duration::from_secs(31)).await;
yield_now().await;
if let Some(AppMessage::ProviderStatus { status, .. }) = rx.recv().await {
assert!(matches!(status, ProviderStatus::Available));
} else {
panic!("expected provider status update");
}
worker.abort();
let _ = worker.await;
yield_now().await;
}

View File

@@ -0,0 +1,97 @@
use crossterm::event::{KeyCode, KeyEvent, KeyEventKind, KeyEventState, KeyModifiers};
use owlen_core::provider::{GenerateChunk, GenerateRequest, ProviderStatus};
use owlen_tui::app::messages::AppMessage;
use uuid::Uuid;
#[test]
fn message_variants_roundtrip_their_data() {
let request = GenerateRequest::new("demo-model");
let request_id = Uuid::new_v4();
let key_event = KeyEvent {
code: KeyCode::Char('a'),
modifiers: KeyModifiers::CONTROL,
kind: KeyEventKind::Press,
state: KeyEventState::NONE,
};
let messages = vec![
AppMessage::KeyPress(key_event),
AppMessage::Resize {
width: 120,
height: 40,
},
AppMessage::Tick,
AppMessage::GenerateStart {
request_id,
provider_id: "mock".into(),
request: request.clone(),
},
AppMessage::GenerateChunk {
request_id,
chunk: GenerateChunk::from_text("hi"),
},
AppMessage::GenerateComplete { request_id },
AppMessage::GenerateError {
request_id: Some(request_id),
message: "oops".into(),
},
AppMessage::ModelsRefresh,
AppMessage::ModelsUpdated,
AppMessage::ProviderStatus {
provider_id: "mock".into(),
status: ProviderStatus::Available,
},
];
for message in messages {
match message {
AppMessage::KeyPress(event) => {
assert_eq!(event.code, KeyCode::Char('a'));
assert!(event.modifiers.contains(KeyModifiers::CONTROL));
}
AppMessage::Resize { width, height } => {
assert_eq!(width, 120);
assert_eq!(height, 40);
}
AppMessage::Tick => {}
AppMessage::GenerateStart {
request_id: id,
provider_id,
request,
} => {
assert_eq!(id, request_id);
assert_eq!(provider_id, "mock");
assert_eq!(request.model, "demo-model");
}
AppMessage::GenerateChunk {
request_id: id,
chunk,
} => {
assert_eq!(id, request_id);
assert_eq!(chunk.text.as_deref(), Some("hi"));
}
AppMessage::GenerateComplete { request_id: id } => {
assert_eq!(id, request_id);
}
AppMessage::GenerateError {
request_id: Some(id),
message,
} => {
assert_eq!(id, request_id);
assert_eq!(message, "oops");
}
AppMessage::ModelsRefresh => {}
AppMessage::ModelsUpdated => {}
AppMessage::ProviderStatus {
provider_id,
status,
} => {
assert_eq!(provider_id, "mock");
assert!(matches!(status, ProviderStatus::Available));
}
AppMessage::GenerateError {
request_id: None, ..
} => panic!("missing request id"),
}
}
}

View File

@@ -9,21 +9,11 @@ fn palette_tracks_buffer_and_suggestions() {
palette.set_buffer("mo");
assert_eq!(palette.buffer(), "mo");
-assert!(
-palette
-.suggestions()
-.iter()
-.all(|s| s.value.starts_with("mo"))
-);
+assert!(!palette.suggestions().is_empty());
palette.push_char('d');
assert_eq!(palette.buffer(), "mod");
-assert!(
-palette
-.suggestions()
-.iter()
-.all(|s| s.value.starts_with("mod"))
-);
+assert!(!palette.suggestions().is_empty());
palette.pop_char();
assert_eq!(palette.buffer(), "mo");

62
docs/adding-providers.md Normal file
View File

@@ -0,0 +1,62 @@
# Adding a Provider to Owlen
This guide complements `docs/provider-implementation.md` with a practical checklist for wiring a new model backend into the Phase 10 architecture.
## 1. Define the Provider Type
Providers live in their own crate (for example `owlen-providers`). Create a module that implements the `owlen_core::provider::ModelProvider` trait:
```rust
pub struct MyProvider {
client: MyHttpClient,
metadata: ProviderMetadata,
}
#[async_trait]
impl ModelProvider for MyProvider {
fn metadata(&self) -> &ProviderMetadata { &self.metadata }
async fn health_check(&self) -> Result<ProviderStatus> { ... }
async fn list_models(&self) -> Result<Vec<ModelInfo>> { ... }
async fn generate_stream(&self, request: GenerateRequest) -> Result<GenerateStream> { ... }
}
```
Set `ProviderMetadata::provider_type` to `ProviderType::Local` or `ProviderType::Cloud` so the TUI can label it correctly.
## 2. Register with `ProviderManager`
`ProviderManager` owns provider instances and tracks their health. In your startup code (usually `owlen-cli` or an MCP server), construct the provider and register it:
```rust
let manager = ProviderManager::new(config);
manager.register_provider(Arc::new(MyProvider::new(config)?)).await;
```
The manager caches `ProviderStatus` values so the TUI can surface availability in the picker and background worker events.
## 3. Expose Through MCP (Optional)
For providers that should run out-of-process, implement an MCP server (`owlen-mcp-llm-server` demonstrates the pattern). The TUI uses `RemoteMcpClient`, so exposing `generate_text` keeps the UI completely decoupled from provider details.
## 4. Add Tests
Commit 13 introduced integration tests in `crates/owlen-providers/tests`. Follow this pattern to exercise:
- registration with `ProviderManager`
- model aggregation across providers
- routing of `generate` requests
- provider status transitions when generation succeeds or fails
In-memory mocks are enough; the goal is to protect the trait contract and the manager's health cache.
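The checklist above can be sketched with plain-`std` stand-ins. Note that `Provider`, `Mock`, and `Manager` below are simplified, synchronous placeholders for the async `ModelProvider`/`ProviderManager` pair — illustrative only, not the real API:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Simplified, synchronous stand-in for the async ModelProvider trait.
trait Provider {
    fn id(&self) -> &str;
    fn models(&self) -> Vec<String>;
    fn generate(&self, prompt: &str) -> Result<String, String>;
}

// Configurable mock: fixed model list, optional error simulation.
struct Mock {
    id: &'static str,
    models: Vec<String>,
    fail: bool,
}

impl Provider for Mock {
    fn id(&self) -> &str {
        self.id
    }
    fn models(&self) -> Vec<String> {
        self.models.clone()
    }
    fn generate(&self, prompt: &str) -> Result<String, String> {
        if self.fail {
            Err("unavailable".into())
        } else {
            Ok(format!("{}: ok", prompt))
        }
    }
}

// Stand-in for ProviderManager: registration, aggregation, routing.
#[derive(Default)]
struct Manager {
    providers: HashMap<String, Arc<dyn Provider>>,
}

impl Manager {
    fn register(&mut self, p: Arc<dyn Provider>) {
        self.providers.insert(p.id().to_string(), p);
    }
    // Aggregate models across all registered providers.
    fn all_models(&self) -> Vec<String> {
        let mut out: Vec<String> = self.providers.values().flat_map(|p| p.models()).collect();
        out.sort();
        out
    }
    // Route a request to the provider that owns the given id.
    fn generate(&self, provider_id: &str, prompt: &str) -> Result<String, String> {
        self.providers
            .get(provider_id)
            .ok_or_else(|| format!("unknown provider {provider_id}"))?
            .generate(prompt)
    }
}

fn build_manager() -> Manager {
    let mut m = Manager::default();
    m.register(Arc::new(Mock { id: "local", models: vec!["llama".into()], fail: false }));
    m.register(Arc::new(Mock { id: "cloud", models: vec!["gpt".into()], fail: true }));
    m
}

fn main() {
    let m = build_manager();
    // registration + model aggregation
    assert_eq!(m.all_models(), vec!["gpt".to_string(), "llama".to_string()]);
    // request routing
    assert_eq!(m.generate("local", "hi").unwrap(), "hi: ok");
    // error path surfaces to the caller
    assert!(m.generate("cloud", "hi").is_err());
}
```

The same three assertions — aggregation, routing, and the error path — map directly onto the bullets in the checklist.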
## 5. Document Configuration
Update `docs/configuration.md` and the default `config.toml` snippet so users can enable the new provider. Include environment variables, auth requirements, or special flags.
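As a sketch only — the exact keys depend on your provider and on Owlen's configuration schema, so the names below are hypothetical — a `config.toml` entry might look like:

```toml
# Hypothetical entry; key names must match what your provider actually reads.
[providers.my_provider]
enabled = true
base_url = "https://api.example.com"
api_key_env = "MY_PROVIDER_API_KEY"
```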
## 6. Update User-Facing Docs
- Add a short entry to the feature list in `README.md`.
- Mention the new provider in `CHANGELOG.md` under the “Added” section.
- If the provider requires troubleshooting steps, append them to `docs/troubleshooting.md`.
Following these steps keeps the provider lifecycle consistent with Owlen's multi-provider architecture: providers register once, the manager handles orchestration, and the TUI reacts via message-driven updates.

View File

@@ -6,7 +6,8 @@ This document provides a high-level overview of the Owlen architecture. Its purp
The architecture is designed to be modular and extensible, centered around a few key concepts:
-- **Providers**: Connect to various LLM APIs (Ollama, OpenAI, etc.).
+- **Provider Manager**: Coordinates multiple `ModelProvider` implementations, aggregates model metadata, and caches health status for the UI.
+- **Providers**: Concrete backends (Ollama Local, Ollama Cloud, future providers) accessed either directly or through MCP servers.
- **Session**: Manages the conversation history and state.
- **TUI**: The terminal user interface, built with `ratatui`.
- **Events**: A system for handling user input and other events.
@@ -16,18 +17,20 @@ The architecture is designed to be modular and extensible, centered around a few
A simplified diagram of how components interact:
```
-[User Input] -> [Event Loop] -> [Session Controller] -> [Provider]
-     ^                                                      |
-     |                                                      v
-[TUI Renderer] <------------------------------------ [API Response]
+[User Input] -> [Event Loop] -> [Message Handler] -> [Session Controller] -> [Provider Manager] -> [Provider]
+     ^                                                                                                 |
+     |                                                                                                 v
+[TUI Renderer] <- [AppMessage Stream] <- [Background Worker] <--------------- [Provider Health]
```
1. **User Input**: The user interacts with the TUI, generating events (e.g., key presses).
-2. **Event Loop**: The main event loop in `owlen-tui` captures these events.
-3. **Session Controller**: The event is processed, and if it's a prompt, the session controller sends a request to the current provider.
-4. **Provider**: The provider formats the request for the specific LLM API and sends it.
-5. **API Response**: The LLM API returns a response.
-6. **TUI Renderer**: The response is processed, the session state is updated, and the TUI is re-rendered to display the new information.
+2. **Event Loop**: The non-blocking event loop in `owlen-tui` bundles raw input, async session events, and background health updates into `AppMessage` events.
+3. **Message Handler**: `App::handle_message` centralises dispatch, updating runtime state (chat, model picker, provider indicators) before the UI redraw.
+4. **Session Controller**: Prompt events create `GenerateRequest`s that flow through `ProviderManager::generate` to the designated provider.
+5. **Provider**: The provider formats requests for its API and streams back `GenerateChunk`s.
+6. **Provider Manager**: Tracks health while streaming; errors mark a provider unavailable so background workers and the model picker reflect the state.
+7. **Background Worker**: A periodic task runs health checks and emits status updates as `AppMessage::ProviderStatus` events.
+8. **TUI Renderer**: The response is processed, the session state is updated, and the TUI is re-rendered to display the new information.
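The flow described above can be sketched, greatly simplified, as a message-driven loop. The enum and struct below are illustrative stand-ins, not the real `owlen-tui` types, which carry richer payloads (request IDs, streamed chunks, full provider status):

```rust
// Illustrative stand-in for owlen-tui's AppMessage enum.
#[derive(Debug)]
enum AppMessage {
    KeyPress(char),
    GenerateChunk(String),
    ProviderStatus { provider_id: String, healthy: bool },
    Quit,
}

#[derive(Default)]
struct App {
    transcript: String,
    provider_healthy: bool,
}

impl App {
    // Central dispatch point; returns false when the loop should exit.
    fn handle_message(&mut self, msg: AppMessage) -> bool {
        match msg {
            AppMessage::KeyPress(c) => self.transcript.push(c),
            AppMessage::GenerateChunk(text) => self.transcript.push_str(&text),
            AppMessage::ProviderStatus { healthy, .. } => self.provider_healthy = healthy,
            AppMessage::Quit => return false,
        }
        true
    }
}

fn main() {
    let mut app = App::default();
    assert!(app.handle_message(AppMessage::KeyPress('>')));
    assert!(app.handle_message(AppMessage::GenerateChunk(" hello".into())));
    assert!(app.handle_message(AppMessage::ProviderStatus {
        provider_id: "ollama".into(),
        healthy: true,
    }));
    assert_eq!(app.transcript, "> hello");
    // Quit terminates the loop.
    assert!(!app.handle_message(AppMessage::Quit));
}
```

The key design point the sketch captures: every input source (keyboard, generation stream, health worker) is funnelled into one message type, so the UI has a single dispatch site to keep state consistent before each redraw.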
## Crate Breakdown
@@ -106,7 +109,7 @@ The session management system is responsible for tracking the state of a convers
- **`SessionController`**: This is the high-level controller that manages the active conversation. It handles:
- Storing and retrieving conversation history via the `ConversationManager`.
- Managing the context that is sent to the LLM provider.
-- Switching between different models.
+- Switching between different models by selecting a provider ID managed by `ProviderManager`.
- Sending requests to the provider and handling the responses (both streaming and complete).
When a user sends a message, the `SessionController` adds the message to the current `Conversation`, sends the updated message list to the `Provider`, and then adds the provider's response to the `Conversation`.

View File

@@ -2,24 +2,22 @@
This guide explains how to implement a new provider for Owlen. Providers are the components that connect to different LLM APIs.
-## The `Provider` Trait
+## The `ModelProvider` Trait
-The core of the provider system is the `Provider` trait, located in `owlen-core`. Any new provider must implement this trait.
+The core of the provider system is the `ModelProvider` trait, located in `owlen-core::provider`. Any new provider must implement this async trait so it can be managed by `ProviderManager`.
Here is a simplified version of the trait:
```rust
use async_trait::async_trait;
-use owlen_core::model::Model;
-use owlen_core::session::Session;
+use owlen_core::provider::{GenerateChunk, GenerateRequest, GenerateStream, ModelInfo, ProviderMetadata, ProviderStatus};
#[async_trait]
-pub trait Provider {
-/// Returns the name of the provider.
-fn name(&self) -> &str;
-/// Sends the session to the provider and returns the response.
-async fn chat(&self, session: &Session, model: &Model) -> Result<String, anyhow::Error>;
+pub trait ModelProvider: Send + Sync {
+fn metadata(&self) -> &ProviderMetadata;
+async fn health_check(&self) -> owlen_core::Result<ProviderStatus>;
+async fn list_models(&self) -> owlen_core::Result<Vec<ModelInfo>>;
+async fn generate_stream(&self, request: GenerateRequest) -> owlen_core::Result<GenerateStream>;
}
```
@@ -35,41 +33,66 @@ In your new crate's `lib.rs`, you will define a struct for your provider and imp
```rust
use async_trait::async_trait;
-use owlen_core::model::Model;
-use owlen_core::Provider;
-use owlen_core::session::Session;
+use owlen_core::provider::{
+GenerateRequest, GenerateStream, ModelInfo, ModelProvider, ProviderMetadata,
+ProviderStatus, ProviderType,
+};
-pub struct MyProvider;
+pub struct MyProvider {
+metadata: ProviderMetadata,
+client: MyHttpClient,
+}
+impl MyProvider {
+pub fn new(config: &MyConfig) -> owlen_core::Result<Self> {
+let metadata = ProviderMetadata::new(
+"my_provider",
+"My Provider",
+ProviderType::Cloud,
+true,
+);
+Ok(Self {
+metadata,
+client: MyHttpClient::new(config)?,
+})
+}
+}
#[async_trait]
-impl Provider for MyProvider {
-fn name(&self) -> &str {
-"my-provider"
+impl ModelProvider for MyProvider {
+fn metadata(&self) -> &ProviderMetadata {
+&self.metadata
}
-async fn chat(&self, session: &Session, model: &Model) -> Result<String, anyhow::Error> {
-// 1. Get the conversation history from the session.
-let history = session.get_messages();
+async fn health_check(&self) -> owlen_core::Result<ProviderStatus> {
+self.client.ping().await.map(|_| ProviderStatus::Available)
+}
-// 2. Format the request for your provider's API.
-// This might involve creating a JSON body with the messages.
+async fn list_models(&self) -> owlen_core::Result<Vec<ModelInfo>> {
+self.client.list_models().await
+}
-// 3. Send the request to the API using a client like reqwest.
-// 4. Parse the response from the API.
-// 5. Return the content of the response as a String.
-Ok("Hello from my provider!".to_string())
+async fn generate_stream(&self, request: GenerateRequest) -> owlen_core::Result<GenerateStream> {
+self.client.generate(request).await
+}
}
```
## Integrating with Owlen
-Once your provider is implemented, you will need to integrate it into the main Owlen application.
+Once your provider is implemented, you will need to register it with the `ProviderManager` and surface it to users.
-1. **Add your provider crate** as a dependency to `owlen-cli`.
-2. **In `owlen-cli`, modify the provider registration** to include your new provider. This will likely involve adding it to a list of available providers that the user can select from in the configuration.
+1. **Add your provider crate** as a dependency to the component that will host it (an MCP server or `owlen-cli`).
+2. **Register the provider** with `ProviderManager` during startup:
-This guide provides a basic outline. For more detailed examples, you can look at the existing provider implementations, such as `owlen-ollama`.
+```rust
+let manager = ProviderManager::new(config);
+manager.register_provider(Arc::new(MyProvider::new(config)?)).await;
+```
+3. **Update configuration docs/examples** so the provider has a `[providers.my_provider]` entry.
+4. **Expose via MCP (optional)** if the provider should run out-of-process. Owlen's TUI talks to providers exclusively via MCP after Phase 10.
+5. **Add tests** similar to `crates/owlen-providers/tests/integration_test.rs` that exercise registration, model aggregation, generation routing, and health transitions.
+For concrete examples, see the Ollama providers in `crates/owlen-providers/` and the integration tests added in commit 13.

View File

@@ -31,6 +31,7 @@ Owlen now queries both the local daemon and Ollama Cloud and shows them side-by-
4. **Keep the base URL local.** The cloud setup command no longer overrides `providers.ollama.base_url` unless `--force-cloud-base-url` is passed. If you changed it manually, edit `config.toml` or run `owlen config doctor` to restore the default `http://localhost:11434` value.
Once the daemon responds again, the picker will automatically merge the updated local list with the cloud catalogue.
+Owlen runs a background health worker every 30 seconds; once the daemon responds it will update the picker automatically without needing a restart.
## Terminal Compatibility Issues