feat(ollama): add explicit Ollama mode config, cloud endpoint storage, and scope‑availability caching with status annotations.

2025-10-15 10:05:34 +02:00
parent 5210e196f2
commit 708c626176
9 changed files with 1845 additions and 241 deletions

View File

@@ -33,6 +33,7 @@ The OWLEN interface features a clean, multi-panel layout with vim-inspired navig
 - **Code Side Panel**: Switch to code mode (`:mode code`) and open files inline with `:open <path>` for LLM-assisted coding.
 - **Theming System**: 10 built-in themes and support for custom themes.
 - **Modular Architecture**: Extensible provider system (Ollama today, additional providers on the roadmap).
+- **Dual-Source Model Picker**: Merge local and cloud Ollama models with live availability indicators so you can see at a glance which catalogues are reachable.
 - **Guided Setup**: `owlen config doctor` upgrades legacy configs and verifies your environment in seconds.

 ## Security & Privacy
@@ -95,6 +96,12 @@ OWLEN uses a modal, vim-inspired interface. Press `F1` (available from any mode)
 - **Tutorial Command**: Type `:tutorial` any time for a quick summary of the most important keybindings.
 - **MCP Slash Commands**: Owlen auto-registers zero-argument MCP tools as slash commands—type `/mcp__github__list_prs` (for example) to pull remote context directly into the chat log.
+
+Model discovery commands worth remembering:
+
+- `:models --local` or `:models --cloud` jump directly to the corresponding section in the picker.
+- `:cloud setup [--force-cloud-base-url]` stores your cloud API key without clobbering an existing local base URL (unless you opt in with the flag).
+
+When a catalogue is unreachable, Owlen now tags the picker with `Local unavailable` / `Cloud unavailable` so you can recover without guessing.

 ## Documentation

 For more detailed information, please refer to the following documents:

View File

@@ -6,14 +6,17 @@ use anyhow::{Context, Result, anyhow, bail};
 use clap::Subcommand;
 use owlen_core::LlmProvider;
 use owlen_core::ProviderConfig;
-use owlen_core::config as core_config;
-use owlen_core::config::Config;
+use owlen_core::config::{
+    self as core_config, Config, OLLAMA_CLOUD_BASE_URL, OLLAMA_CLOUD_ENDPOINT_KEY, OLLAMA_MODE_KEY,
+};
 use owlen_core::credentials::{ApiCredentials, CredentialManager, OLLAMA_CLOUD_CREDENTIAL_ID};
 use owlen_core::encryption;
 use owlen_core::providers::OllamaProvider;
 use owlen_core::storage::StorageManager;
+use serde_json::Value;

-const DEFAULT_CLOUD_ENDPOINT: &str = "https://ollama.com";
+const DEFAULT_CLOUD_ENDPOINT: &str = OLLAMA_CLOUD_BASE_URL;
+const CLOUD_ENDPOINT_KEY: &str = OLLAMA_CLOUD_ENDPOINT_KEY;

 #[derive(Debug, Subcommand)]
 pub enum CloudCommand {
@@ -28,6 +31,9 @@ pub enum CloudCommand {
         /// Provider name to configure (default: ollama)
         #[arg(long, default_value = "ollama")]
         provider: String,
+        /// Overwrite the provider base URL with the cloud endpoint
+        #[arg(long)]
+        force_cloud_base_url: bool,
     },
     /// Check connectivity to Ollama Cloud
     Status {
@@ -55,19 +61,29 @@ pub async fn run_cloud_command(command: CloudCommand) -> Result<()> {
             api_key,
             endpoint,
             provider,
-        } => setup(provider, api_key, endpoint).await,
+            force_cloud_base_url,
+        } => setup(provider, api_key, endpoint, force_cloud_base_url).await,
         CloudCommand::Status { provider } => status(provider).await,
         CloudCommand::Models { provider } => models(provider).await,
         CloudCommand::Logout { provider } => logout(provider).await,
     }
 }

-async fn setup(provider: String, api_key: Option<String>, endpoint: Option<String>) -> Result<()> {
+async fn setup(
+    provider: String,
+    api_key: Option<String>,
+    endpoint: Option<String>,
+    force_cloud_base_url: bool,
+) -> Result<()> {
     let provider = canonical_provider_name(&provider);
     let mut config = crate::config::try_load_config().unwrap_or_default();
-    let endpoint = endpoint.unwrap_or_else(|| DEFAULT_CLOUD_ENDPOINT.to_string());
+    let endpoint =
+        normalize_endpoint(&endpoint.unwrap_or_else(|| DEFAULT_CLOUD_ENDPOINT.to_string()));

-    ensure_provider_entry(&mut config, &provider, &endpoint);
+    let base_changed = {
+        let entry = ensure_provider_entry(&mut config, &provider);
+        configure_cloud_endpoint(entry, &endpoint, force_cloud_base_url)
+    };

     let key = match api_key {
         Some(value) if !value.trim().is_empty() => value,
@@ -95,10 +111,6 @@ async fn setup(provider: String, api_key: Option<String>, endpoint: Option<Strin
         entry.api_key = Some(key.clone());
     }

-    if let Some(entry) = config.providers.get_mut(&provider) {
-        entry.base_url = Some(endpoint.clone());
-    }
-
     crate::config::save_config(&config)?;
     println!("Saved Ollama configuration for provider '{provider}'.");

     if config.privacy.encrypt_local_data {
@@ -106,6 +118,12 @@ async fn setup(provider: String, api_key: Option<String>, endpoint: Option<Strin
     } else {
         println!("API key stored in plaintext configuration (encryption disabled).");
     }
+    if !force_cloud_base_url && !base_changed {
+        println!(
+            "Local base URL preserved; cloud endpoint stored as {}.",
+            CLOUD_ENDPOINT_KEY
+        );
+    }

     Ok(())
 }
@@ -120,25 +138,31 @@ async fn status(provider: String) -> Result<()> {
     };
     let api_key = hydrate_api_key(&mut config, manager.as_ref()).await?;

-    ensure_provider_entry(&mut config, &provider, DEFAULT_CLOUD_ENDPOINT);
+    {
+        let entry = ensure_provider_entry(&mut config, &provider);
+        configure_cloud_endpoint(entry, DEFAULT_CLOUD_ENDPOINT, false);
+    }

     let provider_cfg = config
         .provider(&provider)
         .cloned()
         .ok_or_else(|| anyhow!("Provider '{provider}' is not configured"))?;

-    let ollama = OllamaProvider::from_config(&provider_cfg, Some(&config.general))
+    let endpoint =
+        resolve_cloud_endpoint(&provider_cfg).unwrap_or_else(|| DEFAULT_CLOUD_ENDPOINT.to_string());
+    let mut runtime_cfg = provider_cfg.clone();
+    runtime_cfg.base_url = Some(endpoint.clone());
+    runtime_cfg.extra.insert(
+        OLLAMA_MODE_KEY.to_string(),
+        Value::String("cloud".to_string()),
+    );
+    let ollama = OllamaProvider::from_config(&runtime_cfg, Some(&config.general))
         .with_context(|| "Failed to construct Ollama provider. Run `owlen cloud setup` first.")?;

     match ollama.health_check().await {
         Ok(_) => {
-            println!(
-                "✓ Connected to {provider} ({})",
-                provider_cfg
-                    .base_url
-                    .as_deref()
-                    .unwrap_or(DEFAULT_CLOUD_ENDPOINT)
-            );
+            println!("✓ Connected to {provider} ({})", endpoint);
             if api_key.is_none() && config.privacy.encrypt_local_data {
                 println!(
                     "Warning: No API key stored; connection succeeded via environment variables."
@@ -164,13 +188,26 @@ async fn models(provider: String) -> Result<()> {
     };
     hydrate_api_key(&mut config, manager.as_ref()).await?;

-    ensure_provider_entry(&mut config, &provider, DEFAULT_CLOUD_ENDPOINT);
+    {
+        let entry = ensure_provider_entry(&mut config, &provider);
+        configure_cloud_endpoint(entry, DEFAULT_CLOUD_ENDPOINT, false);
+    }

     let provider_cfg = config
         .provider(&provider)
         .cloned()
         .ok_or_else(|| anyhow!("Provider '{provider}' is not configured"))?;

-    let ollama = OllamaProvider::from_config(&provider_cfg, Some(&config.general))
+    let endpoint =
+        resolve_cloud_endpoint(&provider_cfg).unwrap_or_else(|| DEFAULT_CLOUD_ENDPOINT.to_string());
+    let mut runtime_cfg = provider_cfg.clone();
+    runtime_cfg.base_url = Some(endpoint);
+    runtime_cfg.extra.insert(
+        OLLAMA_MODE_KEY.to_string(),
+        Value::String("cloud".to_string()),
+    );
+    let ollama = OllamaProvider::from_config(&runtime_cfg, Some(&config.general))
         .with_context(|| "Failed to construct Ollama provider. Run `owlen cloud setup` first.")?;

     match ollama.list_models().await {
@@ -217,7 +254,7 @@ async fn logout(provider: String) -> Result<()> {
     Ok(())
 }

-fn ensure_provider_entry(config: &mut Config, provider: &str, endpoint: &str) {
+fn ensure_provider_entry<'a>(config: &'a mut Config, provider: &str) -> &'a mut ProviderConfig {
     if provider == "ollama"
         && config.providers.contains_key("ollama-cloud")
         && !config.providers.contains_key("ollama")
@@ -230,13 +267,68 @@ fn ensure_provider_entry<'a>(config: &'a mut Config, provider: &str) -> &'a mut
     core_config::ensure_provider_config(config, provider);

-    if let Some(cfg) = config.providers.get_mut(provider) {
-        if cfg.provider_type != "ollama" {
-            cfg.provider_type = "ollama".to_string();
-        }
-        if cfg.base_url.is_none() {
-            cfg.base_url = Some(endpoint.to_string());
-        }
-    }
-}
+    let entry = config
+        .providers
+        .get_mut(provider)
+        .expect("provider entry must exist");
+    if entry.provider_type != "ollama" {
+        entry.provider_type = "ollama".to_string();
+    }
+    entry
+}
+
+fn configure_cloud_endpoint(entry: &mut ProviderConfig, endpoint: &str, force: bool) -> bool {
+    let normalized = normalize_endpoint(endpoint);
+    let previous_base = entry.base_url.clone();
+    entry.extra.insert(
+        CLOUD_ENDPOINT_KEY.to_string(),
+        Value::String(normalized.clone()),
+    );
+    if force
+        || entry
+            .base_url
+            .as_ref()
+            .map(|value| value.trim().is_empty())
+            .unwrap_or(true)
+    {
+        entry.base_url = Some(normalized.clone());
+    }
+    if force {
+        entry.extra.insert(
+            OLLAMA_MODE_KEY.to_string(),
+            Value::String("cloud".to_string()),
+        );
+    }
+    entry.base_url != previous_base
+}
+
+fn resolve_cloud_endpoint(cfg: &ProviderConfig) -> Option<String> {
+    if let Some(value) = cfg
+        .extra
+        .get(CLOUD_ENDPOINT_KEY)
+        .and_then(|value| value.as_str())
+        .map(normalize_endpoint)
+    {
+        return Some(value);
+    }
+    cfg.base_url
+        .as_ref()
+        .map(|value| value.trim_end_matches('/').to_string())
+        .filter(|value| !value.is_empty())
+}
+
+fn normalize_endpoint(endpoint: &str) -> String {
+    let trimmed = endpoint.trim().trim_end_matches('/');
+    if trimmed.is_empty() {
+        DEFAULT_CLOUD_ENDPOINT.to_string()
+    } else {
+        trimmed.to_string()
+    }
+}
@@ -374,9 +466,7 @@ async fn hydrate_api_key(
         let Some(cfg) = provider_entry_mut(config) else {
             return Ok(Some(key));
         };
-        if cfg.base_url.is_none() && !credentials.endpoint.trim().is_empty() {
-            cfg.base_url = Some(credentials.endpoint.clone());
-        }
+        configure_cloud_endpoint(cfg, &credentials.endpoint, false);
         return Ok(Some(key));
     }
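The endpoint handling in this file boils down to two small rules: normalize an endpoint by trimming whitespace and any trailing slash (falling back to the default cloud URL when empty), and prefer the stored `cloud_endpoint` extra over the provider's base URL when resolving where cloud requests should go. A standalone sketch of that logic, with `resolve_cloud_endpoint` simplified to read from a plain string map instead of `ProviderConfig.extra` (the map-based signature is an assumption for illustration only):

```rust
use std::collections::HashMap;

// Key and default mirror the constants introduced in this commit.
const CLOUD_ENDPOINT_KEY: &str = "cloud_endpoint";
const DEFAULT_CLOUD_ENDPOINT: &str = "https://ollama.com";

// Trim whitespace and trailing slashes; empty input falls back to the default.
fn normalize_endpoint(endpoint: &str) -> String {
    let trimmed = endpoint.trim().trim_end_matches('/');
    if trimmed.is_empty() {
        DEFAULT_CLOUD_ENDPOINT.to_string()
    } else {
        trimmed.to_string()
    }
}

// The stored cloud endpoint wins; otherwise fall back to a non-empty base URL.
fn resolve_cloud_endpoint(
    extra: &HashMap<String, String>,
    base_url: Option<&str>,
) -> Option<String> {
    if let Some(value) = extra.get(CLOUD_ENDPOINT_KEY) {
        return Some(normalize_endpoint(value));
    }
    base_url
        .map(|value| value.trim_end_matches('/').to_string())
        .filter(|value| !value.is_empty())
}

fn main() {
    assert_eq!(normalize_endpoint(" https://ollama.com/ "), "https://ollama.com");
    assert_eq!(normalize_endpoint(""), DEFAULT_CLOUD_ENDPOINT);

    let mut extra = HashMap::new();
    extra.insert(CLOUD_ENDPOINT_KEY.to_string(), "https://ollama.com/".to_string());
    // Cloud endpoint takes precedence over a configured local base URL.
    assert_eq!(
        resolve_cloud_endpoint(&extra, Some("http://localhost:11434")).as_deref(),
        Some("https://ollama.com")
    );
    assert_eq!(
        resolve_cloud_endpoint(&HashMap::new(), Some("http://localhost:11434/")).as_deref(),
        Some("http://localhost:11434")
    );
}
```

This precedence is what lets `owlen cloud setup` store the cloud endpoint without clobbering a local base URL.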

View File

@@ -16,7 +16,14 @@ use std::time::Duration;
 pub const DEFAULT_CONFIG_PATH: &str = "~/.config/owlen/config.toml";

 /// Current schema version written to `config.toml`.
-pub const CONFIG_SCHEMA_VERSION: &str = "1.4.0";
+pub const CONFIG_SCHEMA_VERSION: &str = "1.5.0";
+
+/// Provider config key for forcing Ollama provider mode.
+pub const OLLAMA_MODE_KEY: &str = "ollama_mode";
+/// Extra config key storing the preferred Ollama Cloud endpoint.
+pub const OLLAMA_CLOUD_ENDPOINT_KEY: &str = "cloud_endpoint";
+/// Canonical Ollama Cloud base URL.
+pub const OLLAMA_CLOUD_BASE_URL: &str = "https://ollama.com";

 /// Core configuration shared by all OWLEN clients
 #[derive(Debug, Clone, Serialize, Deserialize)]
@@ -574,6 +581,23 @@ impl Config {
             self.merge_legacy_ollama_provider(legacy_cloud);
         }

+        if let Some(ollama) = self.providers.get_mut("ollama") {
+            let previous_mode = ollama
+                .extra
+                .get(OLLAMA_MODE_KEY)
+                .and_then(|value| value.as_str())
+                .map(|value| value.to_ascii_lowercase());
+            ensure_ollama_mode_extra(ollama);
+            if previous_mode.as_deref().unwrap_or("auto") == "auto"
+                && is_cloud_base_url(ollama.base_url.as_ref())
+            {
+                ollama.extra.insert(
+                    OLLAMA_MODE_KEY.to_string(),
+                    serde_json::Value::String("cloud".to_string()),
+                );
+            }
+        }
+
         self.schema_version = CONFIG_SCHEMA_VERSION.to_string();
     }

@@ -594,9 +618,12 @@ impl Config {
                 if target.extra.is_empty() && !legacy_cloud.extra.is_empty() {
                     target.extra = legacy_cloud.extra;
                 }
+                ensure_ollama_mode_extra(target);
             }
             Entry::Vacant(entry) => {
-                entry.insert(legacy_cloud);
+                let mut inserted = legacy_cloud;
+                ensure_ollama_mode_extra(&mut inserted);
+                entry.insert(inserted);
             }
         }
     }
@@ -669,12 +696,47 @@ impl Config {
 }

 fn default_ollama_provider_config() -> ProviderConfig {
-    ProviderConfig {
+    let mut config = ProviderConfig {
         provider_type: "ollama".to_string(),
         base_url: Some("http://localhost:11434".to_string()),
         api_key: None,
         extra: HashMap::new(),
-    }
+    };
+    ensure_ollama_mode_extra(&mut config);
+    config
+}
+
+fn ensure_ollama_mode_extra(provider: &mut ProviderConfig) {
+    if provider.provider_type != "ollama" {
+        return;
+    }
+    let entry = provider
+        .extra
+        .entry(OLLAMA_MODE_KEY.to_string())
+        .or_insert_with(|| serde_json::Value::String("auto".to_string()));
+    if let Some(value) = entry.as_str() {
+        let normalized = value.trim().to_ascii_lowercase();
+        if matches!(normalized.as_str(), "auto" | "local" | "cloud") {
+            if normalized != value {
+                *entry = serde_json::Value::String(normalized);
+            }
+        } else {
+            *entry = serde_json::Value::String("auto".to_string());
+        }
+    } else {
+        *entry = serde_json::Value::String("auto".to_string());
+    }
+}
+
+fn is_cloud_base_url(base_url: Option<&String>) -> bool {
+    base_url
+        .map(|url| {
+            let trimmed = url.trim_end_matches('/');
+            trimmed == OLLAMA_CLOUD_BASE_URL || trimmed.starts_with("https://ollama.com/")
+        })
+        .unwrap_or(false)
+}

 fn validate_mcp_server_entry(server: &McpServerConfig, scope: McpConfigScope) -> Result<()> {
@@ -1603,9 +1665,11 @@ pub fn ensure_provider_config<'a>(
     }

     match config.providers.entry(provider_name.to_string()) {
-        Entry::Occupied(entry) => entry.into_mut(),
+        Entry::Occupied(mut entry) => {
+            ensure_ollama_mode_extra(entry.get_mut());
+        }
         Entry::Vacant(entry) => {
-            let default = match provider_name {
+            let mut default = match provider_name {
                 "ollama" => default_ollama_provider_config(),
                 other => ProviderConfig {
                     provider_type: other.to_string(),
@@ -1614,9 +1678,15 @@ pub fn ensure_provider_config<'a>(
                     extra: HashMap::new(),
                 },
             };
-            entry.insert(default)
+            ensure_ollama_mode_extra(&mut default);
+            entry.insert(default);
         }
     }
+    config
+        .providers
+        .get(provider_name)
+        .expect("provider entry must exist")
 }
/// Calculate absolute timeout for session data based on configuration /// Calculate absolute timeout for session data based on configuration
@@ -1723,6 +1793,14 @@ mod tests {
     fn default_config_contains_local_provider() {
         let config = Config::default();
         assert!(config.providers.contains_key("ollama"));
+        let provider = config.providers.get("ollama").unwrap();
+        assert_eq!(
+            provider
+                .extra
+                .get(OLLAMA_MODE_KEY)
+                .and_then(|value| value.as_str()),
+            Some("auto")
+        );
     }

     #[test]
@@ -1732,6 +1810,13 @@ mod tests {
         let cloud = ensure_provider_config(&mut config, "ollama-cloud");
         assert_eq!(cloud.provider_type, "ollama");
         assert_eq!(cloud.base_url.as_deref(), Some("http://localhost:11434"));
+        assert_eq!(
+            cloud
+                .extra
+                .get(OLLAMA_MODE_KEY)
+                .and_then(|value| value.as_str()),
+            Some("auto")
+        );
         assert!(config.providers.contains_key("ollama"));
         assert!(!config.providers.contains_key("ollama-cloud"));
     }
@@ -1758,6 +1843,33 @@ mod tests {
         assert_eq!(cloud.provider_type, "ollama");
         assert_eq!(cloud.base_url.as_deref(), Some("https://api.ollama.com"));
         assert_eq!(cloud.api_key.as_deref(), Some("secret"));
+        assert_eq!(
+            cloud
+                .extra
+                .get(OLLAMA_MODE_KEY)
+                .and_then(|value| value.as_str()),
+            Some("auto")
+        );
+    }
+
+    #[test]
+    fn migration_sets_cloud_mode_for_cloud_base() {
+        let mut config = Config::default();
+        if let Some(ollama) = config.providers.get_mut("ollama") {
+            ollama.base_url = Some(OLLAMA_CLOUD_BASE_URL.to_string());
+            ollama.extra.remove(OLLAMA_MODE_KEY);
+        }
+        config.apply_schema_migrations("1.4.0");
+        let provider = config.providers.get("ollama").expect("ollama provider");
+        assert_eq!(
+            provider
+                .extra
+                .get(OLLAMA_MODE_KEY)
+                .and_then(|value| value.as_str()),
+            Some("cloud")
+        );
     }

     #[test]
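The `ensure_ollama_mode_extra` helper added in this file reduces to one normalization rule: trim, lowercase, keep only `auto`/`local`/`cloud`, and fall back to `auto` for anything else (including a missing or non-string value). A standalone sketch of just that rule; `normalize_mode` is a hypothetical name used here for illustration:

```rust
// Normalize an `ollama_mode` value the way the migration does:
// case-insensitive match on auto/local/cloud, everything else becomes "auto".
fn normalize_mode(raw: Option<&str>) -> String {
    match raw {
        Some(value) => {
            let normalized = value.trim().to_ascii_lowercase();
            if matches!(normalized.as_str(), "auto" | "local" | "cloud") {
                normalized
            } else {
                "auto".to_string()
            }
        }
        None => "auto".to_string(),
    }
}

fn main() {
    assert_eq!(normalize_mode(Some(" Cloud ")), "cloud");
    assert_eq!(normalize_mode(Some("LOCAL")), "local");
    assert_eq!(normalize_mode(Some("remote")), "auto"); // unknown -> auto
    assert_eq!(normalize_mode(None), "auto");           // missing -> auto
}
```

Treating unknown values as `auto` rather than erroring keeps old hand-edited configs loading after the schema bump.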

View File

@@ -1,9 +1,11 @@
 //! Ollama provider built on top of the `ollama-rs` crate.

 use std::{
-    collections::HashMap,
+    collections::{HashMap, HashSet},
     env,
+    net::{SocketAddr, TcpStream},
     pin::Pin,
-    time::{Duration, SystemTime},
+    sync::Arc,
+    time::{Duration, Instant, SystemTime},
 };

 use anyhow::anyhow;
@@ -22,11 +24,17 @@ use ollama_rs::{
 };
 use reqwest::{Client, StatusCode, Url};
 use serde_json::{Map as JsonMap, Value, json};
+use tokio::{sync::RwLock, time::timeout};
+
+#[cfg(test)]
+use std::sync::{Mutex, OnceLock};
+#[cfg(test)]
+use tokio_test::block_on;
 use uuid::Uuid;

 use crate::{
     Error, Result,
-    config::GeneralSettings,
+    config::{GeneralSettings, OLLAMA_CLOUD_BASE_URL, OLLAMA_CLOUD_ENDPOINT_KEY, OLLAMA_MODE_KEY},
     llm::{LlmProvider, ProviderConfig},
     mcp::McpToolDescriptor,
     model::{DetailedModelInfo, ModelDetailsCache, ModelManager},
@@ -37,9 +45,11 @@ use crate::{
 const DEFAULT_TIMEOUT_SECS: u64 = 120;
 const DEFAULT_MODEL_CACHE_TTL_SECS: u64 = 60;
-const CLOUD_BASE_URL: &str = "https://ollama.com";
+pub(crate) const CLOUD_BASE_URL: &str = OLLAMA_CLOUD_BASE_URL;
+const LOCAL_PROBE_TIMEOUT_MS: u64 = 200;
+const LOCAL_PROBE_TARGETS: &[&str] = &["127.0.0.1:11434", "[::1]:11434"];

-#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
 enum OllamaMode {
     Local,
     Cloud,
@@ -54,6 +64,44 @@ impl OllamaMode {
     }
 }

+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+enum ScopeAvailability {
+    Unknown,
+    Available,
+    Unavailable,
+}
+
+impl ScopeAvailability {
+    fn as_str(self) -> &'static str {
+        match self {
+            ScopeAvailability::Unknown => "unknown",
+            ScopeAvailability::Available => "available",
+            ScopeAvailability::Unavailable => "unavailable",
+        }
+    }
+}
+
+#[derive(Debug, Clone)]
+struct ScopeSnapshot {
+    models: Vec<ModelInfo>,
+    fetched_at: Option<Instant>,
+    availability: ScopeAvailability,
+    last_error: Option<String>,
+    last_checked: Option<Instant>,
+}
+
+impl Default for ScopeSnapshot {
+    fn default() -> Self {
+        Self {
+            models: Vec::new(),
+            fetched_at: None,
+            availability: ScopeAvailability::Unknown,
+            last_error: None,
+            last_checked: None,
+        }
+    }
+}
 #[derive(Debug)]
 struct OllamaOptions {
     mode: OllamaMode,
@@ -61,6 +109,7 @@ struct OllamaOptions {
     request_timeout: Duration,
     model_cache_ttl: Duration,
     api_key: Option<String>,
+    cloud_endpoint: Option<String>,
 }

 impl OllamaOptions {
@@ -71,6 +120,7 @@ impl OllamaOptions {
             request_timeout: Duration::from_secs(DEFAULT_TIMEOUT_SECS),
             model_cache_ttl: Duration::from_secs(DEFAULT_MODEL_CACHE_TTL_SECS),
             api_key: None,
+            cloud_endpoint: None,
         }
     }
@@ -87,8 +137,78 @@ pub struct OllamaProvider {
     client: Ollama,
     http_client: Client,
     base_url: String,
+    request_timeout: Duration,
+    api_key: Option<String>,
+    cloud_endpoint: Option<String>,
     model_manager: ModelManager,
     model_details_cache: ModelDetailsCache,
+    model_cache_ttl: Duration,
+    scope_cache: Arc<RwLock<HashMap<OllamaMode, ScopeSnapshot>>>,
+}
+
+fn configured_mode_from_extra(config: &ProviderConfig) -> Option<OllamaMode> {
+    config
+        .extra
+        .get(OLLAMA_MODE_KEY)
+        .and_then(|value| value.as_str())
+        .and_then(|value| match value.trim().to_ascii_lowercase().as_str() {
+            "local" => Some(OllamaMode::Local),
+            "cloud" => Some(OllamaMode::Cloud),
+            _ => None,
+        })
+}
+
+fn is_explicit_local_base(base_url: Option<&str>) -> bool {
+    base_url
+        .and_then(|raw| Url::parse(raw).ok())
+        .and_then(|parsed| parsed.host_str().map(|host| host.to_ascii_lowercase()))
+        .map(|host| host == "localhost" || host == "127.0.0.1" || host == "::1")
+        .unwrap_or(false)
+}
+
+fn is_explicit_cloud_base(base_url: Option<&str>) -> bool {
+    base_url
+        .map(|raw| {
+            let trimmed = raw.trim_end_matches('/');
+            trimmed == CLOUD_BASE_URL || trimmed.starts_with("https://ollama.com/")
+        })
+        .unwrap_or(false)
+}
+
+#[cfg(test)]
+static PROBE_OVERRIDE: OnceLock<Mutex<Option<bool>>> = OnceLock::new();
+
+#[cfg(test)]
+fn set_probe_override(value: Option<bool>) {
+    let guard = PROBE_OVERRIDE.get_or_init(|| Mutex::new(None));
+    *guard.lock().expect("probe override mutex poisoned") = value;
+}
+
+#[cfg(test)]
+fn probe_override_value() -> Option<bool> {
+    PROBE_OVERRIDE
+        .get_or_init(|| Mutex::new(None))
+        .lock()
+        .expect("probe override mutex poisoned")
+        .to_owned()
+}
+
+fn probe_default_local_daemon(timeout: Duration) -> bool {
+    #[cfg(test)]
+    {
+        if let Some(value) = probe_override_value() {
+            return value;
+        }
+    }
+    for target in LOCAL_PROBE_TARGETS {
+        if let Ok(address) = target.parse::<SocketAddr>() {
+            if TcpStream::connect_timeout(&address, timeout).is_ok() {
+                return true;
+            }
+        }
+    }
+    false
+}
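The probe above is a plain blocking TCP connect with a short timeout against the default daemon addresses. The core mechanism can be demonstrated standalone; `probe_any` is a simplified stand-in (the real function also honors a test-only override and iterates fixed target strings), and here we bind our own ephemeral listener so the positive case is deterministic:

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::time::Duration;

// Return true as soon as any target accepts a TCP connection within the timeout.
fn probe_any(targets: &[SocketAddr], timeout: Duration) -> bool {
    targets
        .iter()
        .any(|addr| TcpStream::connect_timeout(addr, timeout).is_ok())
}

fn main() {
    // Bind an ephemeral local listener so the probe has something to hit.
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind");
    let open_addr = listener.local_addr().expect("local_addr");
    assert!(probe_any(&[open_addr], Duration::from_millis(200)));

    // Once the listener is dropped, the same port is refused almost immediately,
    // which is why a 200 ms budget is enough for mode auto-detection.
    drop(listener);
    assert!(!probe_any(&[open_addr], Duration::from_millis(200)));
}
```

Keeping the budget at 200 ms means provider construction stays fast even when no local daemon is running.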
 impl OllamaProvider {
@@ -105,23 +225,64 @@ impl OllamaProvider {
         let mut api_key = resolve_api_key(config.api_key.clone())
             .or_else(|| env_var_non_empty("OLLAMA_API_KEY"))
             .or_else(|| env_var_non_empty("OLLAMA_CLOUD_API_KEY"));

+        let configured_mode = configured_mode_from_extra(config);
+        let configured_mode_label = config
+            .extra
+            .get(OLLAMA_MODE_KEY)
+            .and_then(|value| value.as_str())
+            .unwrap_or("auto");
+        let base_url = config.base_url.as_deref();
+        let base_is_local = is_explicit_local_base(base_url);
+        let base_is_cloud = is_explicit_cloud_base(base_url);
+        let base_is_other = base_url.is_some() && !base_is_local && !base_is_cloud;
+        let mut local_probe_result = None;
+        let cloud_endpoint = config
+            .extra
+            .get(OLLAMA_CLOUD_ENDPOINT_KEY)
+            .and_then(Value::as_str)
+            .map(normalize_cloud_endpoint)
+            .transpose()
+            .map_err(Error::Config)?;

-        let mode = if api_key.is_some() {
-            OllamaMode::Cloud
-        } else {
-            OllamaMode::Local
-        };
+        let mode = match configured_mode {
+            Some(mode) => mode,
+            None => {
+                if base_is_local || base_is_other {
+                    OllamaMode::Local
+                } else if base_is_cloud && api_key.is_some() {
+                    OllamaMode::Cloud
+                } else {
+                    let probe =
+                        probe_default_local_daemon(Duration::from_millis(LOCAL_PROBE_TIMEOUT_MS));
+                    local_probe_result = Some(probe);
+                    if probe {
+                        OllamaMode::Local
+                    } else if api_key.is_some() {
+                        OllamaMode::Cloud
+                    } else {
+                        OllamaMode::Local
+                    }
+                }
+            }
+        };

-        let base_candidate = if mode == OllamaMode::Cloud {
-            Some(CLOUD_BASE_URL)
-        } else {
-            config.base_url.as_deref()
-        };
+        let base_candidate = match mode {
+            OllamaMode::Local => base_url,
+            OllamaMode::Cloud => {
+                if base_is_cloud {
+                    base_url
+                } else {
+                    Some(CLOUD_BASE_URL)
+                }
+            }
+        };

         let normalized_base_url =
             normalize_base_url(base_candidate, mode).map_err(Error::Config)?;

-        let mut options = OllamaOptions::new(mode, normalized_base_url);
+        let mut options = OllamaOptions::new(mode, normalized_base_url.clone());
+        options.cloud_endpoint = cloud_endpoint.clone();

         if let Some(timeout) = config
             .extra
@@ -145,6 +306,23 @@ impl OllamaProvider {
             options = options.with_general(general);
         }

+        debug!(
+            "Resolved Ollama provider: mode={:?}, base_url={}, configured_mode={}, api_key_present={}, local_probe={}",
+            mode,
+            normalized_base_url,
+            configured_mode_label,
+            if options.api_key.is_some() {
+                "yes"
+            } else {
+                "no"
+            },
+            match local_probe_result {
+                Some(true) => "success",
+                Some(false) => "failed",
+                None => "skipped",
+            }
+        );
+
         Self::with_options(options)
     }
@@ -155,44 +333,32 @@ impl OllamaProvider {
             request_timeout,
             model_cache_ttl,
             api_key,
+            cloud_endpoint,
         } = options;

-        let url = Url::parse(&base_url)
-            .map_err(|err| Error::Config(format!("Invalid Ollama base URL '{base_url}': {err}")))?;
-
-        let mut headers = HeaderMap::new();
-        if let Some(ref key) = api_key {
-            let value = HeaderValue::from_str(&format!("Bearer {key}")).map_err(|_| {
-                Error::Config("OLLAMA API key contains invalid characters".to_string())
-            })?;
-            headers.insert(AUTHORIZATION, value);
-        }
-
-        let mut client_builder = Client::builder().timeout(request_timeout);
-        if !headers.is_empty() {
-            client_builder = client_builder.default_headers(headers.clone());
-        }
-        let http_client = client_builder
-            .build()
-            .map_err(|err| Error::Config(format!("Failed to build HTTP client: {err}")))?;
-
-        let port = url.port_or_known_default().ok_or_else(|| {
-            Error::Config(format!("Unable to determine port for Ollama URL '{}'", url))
-        })?;
-        let mut ollama_client = Ollama::new_with_client(url.clone(), port, http_client.clone());
-        if !headers.is_empty() {
-            ollama_client.set_headers(Some(headers.clone()));
-        }
+        let api_key_ref = api_key.as_deref();
+        let (ollama_client, http_client) =
+            build_client_for_base(&base_url, request_timeout, api_key_ref)?;
+        let scope_cache = {
+            let mut initial = HashMap::new();
+            initial.insert(OllamaMode::Local, ScopeSnapshot::default());
+            initial.insert(OllamaMode::Cloud, ScopeSnapshot::default());
+            Arc::new(RwLock::new(initial))
+        };

         Ok(Self {
             mode,
             client: ollama_client,
             http_client,
             base_url: base_url.trim_end_matches('/').to_string(),
+            request_timeout,
+            api_key,
+            cloud_endpoint,
             model_manager: ModelManager::new(model_cache_ttl),
             model_details_cache: ModelDetailsCache::new(model_cache_ttl),
+            model_cache_ttl,
+            scope_cache,
         })
     }
@@ -200,6 +366,121 @@ impl OllamaProvider {
         build_api_endpoint(&self.base_url, endpoint)
     }

+    fn local_base_url() -> &'static str {
+        OllamaMode::Local.default_base_url()
+    }
+
+    fn scope_key(scope: OllamaMode) -> &'static str {
+        match scope {
+            OllamaMode::Local => "local",
+            OllamaMode::Cloud => "cloud",
+        }
+    }
+
+    fn build_local_client(&self) -> Result<Option<Ollama>> {
+        if matches!(self.mode, OllamaMode::Local) {
+            return Ok(Some(self.client.clone()));
+        }
+        let (client, _) =
+            build_client_for_base(Self::local_base_url(), self.request_timeout, None)?;
+        Ok(Some(client))
+    }
+
+    fn build_cloud_client(&self) -> Result<Option<Ollama>> {
+        if matches!(self.mode, OllamaMode::Cloud) {
+            return Ok(Some(self.client.clone()));
+        }
+        let api_key = match self.api_key.as_deref() {
+            Some(key) if !key.trim().is_empty() => key,
+            _ => return Ok(None),
+        };
+        let endpoint = self.cloud_endpoint.as_deref().unwrap_or(CLOUD_BASE_URL);
+        let (client, _) = build_client_for_base(endpoint, self.request_timeout, Some(api_key))?;
+        Ok(Some(client))
+    }
+
+    async fn cached_scope_models(&self, scope: OllamaMode) -> Option<Vec<ModelInfo>> {
+        let cache = self.scope_cache.read().await;
+        cache.get(&scope).and_then(|entry| {
+            if entry.availability == ScopeAvailability::Unknown {
+                return None;
+            }
+            entry.fetched_at.and_then(|ts| {
+                if ts.elapsed() < self.model_cache_ttl {
+                    Some(entry.models.clone())
+                } else {
+                    None
+                }
+            })
+        })
+    }
+
+    async fn update_scope_success(&self, scope: OllamaMode, models: &[ModelInfo]) {
+        let mut cache = self.scope_cache.write().await;
+        let entry = cache.entry(scope).or_default();
+        entry.models = models.to_vec();
+        entry.fetched_at = Some(Instant::now());
+        entry.last_checked = Some(Instant::now());
+        entry.availability = ScopeAvailability::Available;
+        entry.last_error = None;
+    }
+
+    async fn mark_scope_failure(&self, scope: OllamaMode, message: String) {
+        let mut cache = self.scope_cache.write().await;
+        let entry = cache.entry(scope).or_default();
+        entry.availability = ScopeAvailability::Unavailable;
+        entry.last_error = Some(message);
+        entry.last_checked = Some(Instant::now());
+    }
+
+    async fn annotate_scope_status(&self, models: &mut [ModelInfo]) {
+        if models.is_empty() {
+            return;
+        }
+        let cache = self.scope_cache.read().await;
+        for (scope, snapshot) in cache.iter() {
+            if snapshot.availability == ScopeAvailability::Unknown {
+                continue;
+            }
+            let scope_key = Self::scope_key(*scope);
+            let capability = format!(
+                "scope-status:{}:{}",
+                scope_key,
+                snapshot.availability.as_str()
+            );
+            for model in models.iter_mut() {
+                if !model.capabilities.iter().any(|cap| cap == &capability) {
+                    model.capabilities.push(capability.clone());
+                }
+            }
+            if let Some(raw_reason) = snapshot.last_error.as_ref() {
+                let cleaned = raw_reason.replace('\n', " ").trim().to_string();
+                if !cleaned.is_empty() {
+                    let truncated: String = cleaned.chars().take(160).collect();
+                    let message_capability =
+                        format!("scope-status-message:{}:{}", scope_key, truncated);
+                    for model in models.iter_mut() {
+                        if !model
+                            .capabilities
+                            .iter()
+                            .any(|cap| cap == &message_capability)
+                        {
+                            model.capabilities.push(message_capability.clone());
+                        }
+                    }
+                }
+            }
+        }
+    }
/// Attempt to resolve detailed model information for the given model, using the local cache when possible.
pub async fn get_model_info(&self, model_name: &str) -> Result<DetailedModelInfo> {
if let Some(info) = self.model_details_cache.get(model_name).await {
@@ -312,15 +593,92 @@ impl OllamaProvider {
}
async fn fetch_models(&self) -> Result<Vec<ModelInfo>> {
let mut combined = Vec::new();
let mut seen: HashSet<String> = HashSet::new();
let mut errors: Vec<Error> = Vec::new();
if let Some(local_client) = self.build_local_client()? {
match self
.fetch_models_for_scope(OllamaMode::Local, local_client.clone())
.await
{
Ok(models) => {
for model in models {
let key = format!("local::{}", model.id);
if seen.insert(key) {
combined.push(model);
}
}
}
Err(err) => errors.push(err),
}
}
if let Some(cloud_client) = self.build_cloud_client()? {
match self
.fetch_models_for_scope(OllamaMode::Cloud, cloud_client.clone())
.await
{
Ok(models) => {
for model in models {
let key = format!("cloud::{}", model.id);
if seen.insert(key) {
combined.push(model);
}
}
}
Err(err) => errors.push(err),
}
}
if combined.is_empty() {
if let Some(err) = errors.pop() {
return Err(err);
}
}
self.annotate_scope_status(&mut combined).await;
combined.sort_by(|a, b| a.name.to_lowercase().cmp(&b.name.to_lowercase()));
Ok(combined)
}
async fn fetch_models_for_scope(
&self,
scope: OllamaMode,
client: Ollama,
) -> Result<Vec<ModelInfo>> {
let list_result = if matches!(scope, OllamaMode::Local) {
match timeout(
Duration::from_millis(LOCAL_PROBE_TIMEOUT_MS),
client.list_local_models(),
)
.await
{
Ok(result) => result.map_err(|err| self.map_ollama_error("list models", err, None)),
Err(_) => Err(Error::Timeout(
"Timed out while contacting the local Ollama daemon".to_string(),
)),
}
} else {
client
.list_local_models()
.await
.map_err(|err| self.map_ollama_error("list models", err, None))
};
let models = match list_result {
Ok(models) => models,
Err(err) => {
let message = err.to_string();
self.mark_scope_failure(scope, message).await;
if let Some(cached) = self.cached_scope_models(scope).await {
return Ok(cached);
}
return Err(err);
}
};
let cache = self.model_details_cache.clone();
let fetched = join_all(models.into_iter().map(|local| {
let client = client.clone();
let cache = cache.clone();
@@ -329,7 +687,7 @@ impl OllamaProvider {
let detail = match client.show_model_info(name.clone()).await {
Ok(info) => {
let detailed = OllamaProvider::convert_detailed_model_info(
scope,
&name,
Some(&local),
&info,
@@ -347,10 +705,13 @@ impl OllamaProvider {
}))
.await;
let converted: Vec<ModelInfo> = fetched
.into_iter()
.map(|(local, detail)| self.convert_model(scope, local, detail))
.collect();
self.update_scope_success(scope, &converted).await;
Ok(converted)
}
fn convert_detailed_model_info(
@@ -430,8 +791,13 @@ impl OllamaProvider {
info.with_normalised_strings()
}
fn convert_model(
&self,
scope: OllamaMode,
model: LocalModel,
detail: Option<OllamaModelInfo>,
) -> ModelInfo {
let scope_tag = match scope {
OllamaMode::Local => "local",
OllamaMode::Cloud => "cloud",
};
@@ -453,7 +819,9 @@ impl OllamaProvider {
push_capability(&mut capabilities, &heuristic);
}
push_capability(&mut capabilities, &format!("scope:{scope_tag}"));
let description = build_model_description(scope_tag, detail.as_ref());
ModelInfo {
id: name.clone(),
@@ -1004,6 +1372,10 @@ fn normalize_base_url(
Ok(url.to_string().trim_end_matches('/').to_string())
}
fn normalize_cloud_endpoint(input: &str) -> std::result::Result<String, String> {
normalize_base_url(Some(input), OllamaMode::Cloud)
}
fn build_api_endpoint(base_url: &str, endpoint: &str) -> String {
let trimmed_base = base_url.trim_end_matches('/');
let trimmed_endpoint = endpoint.trim_start_matches('/');
@@ -1015,9 +1387,48 @@ fn build_api_endpoint(base_url: &str, endpoint: &str) -> String {
}
}
fn build_client_for_base(
base_url: &str,
timeout: Duration,
api_key: Option<&str>,
) -> Result<(Ollama, Client)> {
let url = Url::parse(base_url)
.map_err(|err| Error::Config(format!("Invalid Ollama base URL '{base_url}': {err}")))?;
let mut headers = HeaderMap::new();
if let Some(key) = api_key {
let value = HeaderValue::from_str(&format!("Bearer {key}"))
.map_err(|_| Error::Config("OLLAMA API key contains invalid characters".to_string()))?;
headers.insert(AUTHORIZATION, value);
}
let mut client_builder = Client::builder().timeout(timeout);
if !headers.is_empty() {
client_builder = client_builder.default_headers(headers.clone());
}
let http_client = client_builder.build().map_err(|err| {
Error::Config(format!(
"Failed to build HTTP client for '{base_url}': {err}"
))
})?;
let port = url.port_or_known_default().ok_or_else(|| {
Error::Config(format!("Unable to determine port for Ollama URL '{}'", url))
})?;
let mut ollama_client = Ollama::new_with_client(url.clone(), port, http_client.clone());
if !headers.is_empty() {
ollama_client.set_headers(Some(headers));
}
Ok((ollama_client, http_client))
}
#[cfg(test)]
mod tests {
use super::*;
use std::collections::HashMap;
#[test]
fn resolve_api_key_prefers_literal_value() {
@@ -1053,6 +1464,60 @@ mod tests {
assert!(err.contains("https"));
}
#[test]
fn explicit_local_mode_overrides_api_key() {
let mut config = ProviderConfig {
provider_type: "ollama".to_string(),
base_url: Some("http://localhost:11434".to_string()),
api_key: Some("secret-key".to_string()),
extra: HashMap::new(),
};
config.extra.insert(
OLLAMA_MODE_KEY.to_string(),
Value::String("local".to_string()),
);
let provider = OllamaProvider::from_config(&config, None).expect("provider constructed");
assert_eq!(provider.mode, OllamaMode::Local);
assert_eq!(provider.base_url, "http://localhost:11434");
}
#[test]
fn auto_mode_prefers_explicit_local_base() {
let config = ProviderConfig {
provider_type: "ollama".to_string(),
base_url: Some("http://localhost:11434".to_string()),
api_key: Some("secret-key".to_string()),
extra: HashMap::new(),
};
// simulate missing explicit mode; defaults to auto
let provider = OllamaProvider::from_config(&config, None).expect("provider constructed");
assert_eq!(provider.mode, OllamaMode::Local);
assert_eq!(provider.base_url, "http://localhost:11434");
}
#[test]
fn auto_mode_with_api_key_and_no_local_probe_switches_to_cloud() {
let mut config = ProviderConfig {
provider_type: "ollama".to_string(),
base_url: None,
api_key: Some("secret-key".to_string()),
extra: HashMap::new(),
};
config.extra.insert(
OLLAMA_MODE_KEY.to_string(),
Value::String("auto".to_string()),
);
let provider = OllamaProvider::from_config(&config, None).expect("provider constructed");
assert_eq!(provider.mode, OllamaMode::Cloud);
assert_eq!(provider.base_url, CLOUD_BASE_URL);
}
#[test]
fn build_model_options_merges_parameters() {
let mut parameters = ChatParameters::default();
@@ -1091,3 +1556,110 @@ mod tests {
assert!(caps.iter().any(|cap| cap == "vision"));
}
}
#[cfg(test)]
struct ProbeOverrideGuard;
#[cfg(test)]
impl ProbeOverrideGuard {
fn set(value: Option<bool>) -> Self {
set_probe_override(value);
ProbeOverrideGuard
}
}
#[cfg(test)]
impl Drop for ProbeOverrideGuard {
fn drop(&mut self) {
set_probe_override(None);
}
}
#[test]
fn auto_mode_with_api_key_and_successful_probe_prefers_local() {
let _guard = ProbeOverrideGuard::set(Some(true));
let mut config = ProviderConfig {
provider_type: "ollama".to_string(),
base_url: None,
api_key: Some("secret-key".to_string()),
extra: HashMap::new(),
};
config.extra.insert(
OLLAMA_MODE_KEY.to_string(),
Value::String("auto".to_string()),
);
assert!(probe_default_local_daemon(Duration::from_millis(1)));
let provider = OllamaProvider::from_config(&config, None).expect("provider constructed");
assert_eq!(provider.mode, OllamaMode::Local);
assert_eq!(provider.base_url, "http://localhost:11434");
}
#[test]
fn auto_mode_with_api_key_and_failed_probe_prefers_cloud() {
let _guard = ProbeOverrideGuard::set(Some(false));
let mut config = ProviderConfig {
provider_type: "ollama".to_string(),
base_url: None,
api_key: Some("secret-key".to_string()),
extra: HashMap::new(),
};
config.extra.insert(
OLLAMA_MODE_KEY.to_string(),
Value::String("auto".to_string()),
);
let provider = OllamaProvider::from_config(&config, None).expect("provider constructed");
assert_eq!(provider.mode, OllamaMode::Cloud);
assert_eq!(provider.base_url, CLOUD_BASE_URL);
}
#[test]
fn annotate_scope_status_adds_capabilities_for_unavailable_scopes() {
let config = ProviderConfig {
provider_type: "ollama".to_string(),
base_url: Some("http://localhost:11434".to_string()),
api_key: None,
extra: HashMap::new(),
};
let provider = OllamaProvider::from_config(&config, None).expect("provider constructed");
let mut models = vec![ModelInfo {
id: "llama3".to_string(),
name: "Llama 3".to_string(),
description: None,
provider: "ollama".to_string(),
context_window: None,
capabilities: vec!["scope:local".to_string()],
supports_tools: false,
}];
block_on(async {
{
let mut cache = provider.scope_cache.write().await;
let entry = cache.entry(OllamaMode::Cloud).or_default();
entry.availability = ScopeAvailability::Unavailable;
entry.last_error = Some("Cloud endpoint unreachable".to_string());
}
provider.annotate_scope_status(&mut models).await;
});
let capabilities = &models[0].capabilities;
assert!(
capabilities
.iter()
.any(|cap| cap == "scope-status:cloud:unavailable")
);
assert!(
capabilities
.iter()
.any(|cap| cap.starts_with("scope-status-message:cloud:"))
);
}

File diff suppressed because it is too large


@@ -102,7 +102,23 @@ const COMMANDS: &[CommandSpec] = &[
},
CommandSpec {
keyword: "provider",
description: "Switch provider or set its mode",
},
CommandSpec {
keyword: "cloud setup",
description: "Configure Ollama Cloud credentials",
},
CommandSpec {
keyword: "cloud status",
description: "Check Ollama Cloud connectivity",
},
CommandSpec {
keyword: "cloud models",
description: "List models available in Ollama Cloud",
},
CommandSpec {
keyword: "cloud logout",
description: "Remove stored Ollama Cloud credentials",
},
CommandSpec {
keyword: "model info",
@@ -124,6 +140,14 @@ const COMMANDS: &[CommandSpec] = &[
keyword: "models info",
description: "Prefetch detailed information for all models",
},
CommandSpec {
keyword: "models --local",
description: "Open model picker focused on local models",
},
CommandSpec {
keyword: "models --cloud",
description: "Open model picker focused on cloud models",
},
CommandSpec {
keyword: "new",
description: "Start a new conversation",


@@ -12,7 +12,8 @@ use unicode_segmentation::UnicodeSegmentation;
use unicode_width::UnicodeWidthStr;
use crate::chat_app::{
ChatApp, HELP_TAB_COUNT, MIN_MESSAGE_CARD_WIDTH, MessageRenderContext, ModelScope,
ModelSelectorItemKind,
};
use crate::highlight;
use crate::state::{
@@ -2785,6 +2786,19 @@ fn render_model_selector(frame: &mut Frame<'_>, app: &ChatApp) {
);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Scope { label, scope, .. } => {
let (fg, modifier) = match scope {
ModelScope::Local => (theme.mode_normal, Modifier::BOLD),
ModelScope::Cloud => (theme.mode_help, Modifier::BOLD),
ModelScope::Other(_) => (theme.placeholder, Modifier::ITALIC),
};
let style = Style::default().fg(fg).add_modifier(modifier);
let line = clip_line_to_width(
Line::from(Span::styled(format!(" {label}"), style)),
max_line_width,
);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Model { model_index, .. } => {
let mut lines: Vec<Line<'static>> = Vec::new();
if let Some(model) = app.model_info_by_index(*model_index) {
@@ -2822,16 +2836,28 @@ fn render_model_selector(frame: &mut Frame<'_>, app: &ChatApp) {
}
items.push(ListItem::new(lines).style(Style::default().bg(theme.background)));
}
ModelSelectorItemKind::Empty { provider, message } => {
let text = message
.as_ref()
.map(|msg| format!(" {msg}"))
.unwrap_or_else(|| format!(" (no models configured for {provider})"));
let is_unavailable = message
.as_ref()
.map(|msg| msg.to_ascii_lowercase().contains("unavailable"))
.unwrap_or(false);
let style = if is_unavailable {
Style::default()
.fg(theme.error)
.add_modifier(Modifier::BOLD)
} else {
Style::default()
.fg(theme.placeholder)
.add_modifier(Modifier::DIM | Modifier::ITALIC)
};
let line =
clip_line_to_width(Line::from(Span::styled(text, style)), max_line_width);
items.push(ListItem::new(vec![line]).style(Style::default().bg(theme.background)));
}
}
@@ -2910,6 +2936,9 @@ fn build_model_selector_label(
badges: &[&'static str],
is_current: bool,
) -> (String, Option<String>) {
let scope = ChatApp::model_scope_from_capabilities(model);
let scope_icon = ChatApp::scope_icon(&scope);
let scope_label = ChatApp::scope_display_name(&scope);
let mut display_name = if model.name.trim().is_empty() {
model.id.clone()
} else {
@@ -2920,7 +2949,7 @@ fn build_model_selector_label(
display_name.push_str(&format!(" · {}", model.id));
}
let mut title = format!(" {} {}", scope_icon, display_name);
if !badges.is_empty() {
title.push(' ');
title.push_str(&badges.join(" "));
@@ -2942,6 +2971,10 @@ fn build_model_selector_label(
}
};
if !scope_label.eq_ignore_ascii_case("unknown") {
push_meta(scope_label.clone());
}
if let Some(detail) = detail {
if let Some(ctx) = detail.context_length {
push_meta(format!("max tokens {}", ctx));
@@ -3567,6 +3600,9 @@ fn render_help(frame: &mut Frame<'_>, app: &ChatApp) {
Line::from(" :m, :model → open model selector"),
Line::from(" :themes → open theme selector"),
Line::from(" :theme <name> → switch to a specific theme"),
Line::from(" :provider <name> [auto|local|cloud] → switch provider or set mode"),
Line::from(" :models --local | --cloud → focus models by scope"),
Line::from(" :cloud setup [--force-cloud-base-url] → configure Ollama Cloud"),
Line::from(""),
Line::from(vec![Span::styled(
"SESSION MANAGEMENT",


@@ -158,6 +158,16 @@ After updating your config:
- Remove the `-cloud` suffix from model names when using cloud provider
- Ensure `api_key` is set in `[providers.ollama-cloud]` config
### 0.1.9 Explicit Ollama Modes & Cloud Endpoint Storage
Owlen 0.1.9 introduces targeted quality-of-life fixes for users who switch between local Ollama models and Ollama Cloud:
- `providers.<name>.extra.ollama_mode` now accepts `"auto"`, `"local"`, or `"cloud"`. Migrations default existing entries to `auto`, while preserving any explicit local base URLs you set previously.
- `owlen cloud setup` writes the hosted endpoint to `providers.<name>.extra.cloud_endpoint` rather than overwriting `base_url`, so local catalogues keep working after you import an API key. Pass `--force-cloud-base-url` if you truly want the provider to point at the hosted service.
- The model picker surfaces `Local unavailable` / `Cloud unavailable` badges when a source probe fails, highlighting what to fix instead of presenting an empty list.
Run `owlen config doctor` after upgrading to ensure these migration tweaks are applied automatically.
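As a concrete sketch, a migrated provider entry might look like the following. The key names come from the notes above; the provider alias, the commented `api_key`, and the `cloud_endpoint` value are illustrative assumptions, not canonical defaults:

```toml
# Hypothetical migrated entry; adjust the table name to your provider alias.
[providers.ollama]
provider_type = "ollama"
base_url = "http://localhost:11434"   # stays local; cloud setup no longer overwrites it
# api_key = "your-api-key"            # optional; enables the cloud catalogue

[providers.ollama.extra]
ollama_mode = "auto"                  # one of "auto", "local", "cloud"
cloud_endpoint = "https://ollama.com" # illustrative value; written by `owlen cloud setup`
```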
### Rollback to v0.x
If you encounter issues and need to roll back:


@@ -21,6 +21,17 @@ Owlen surfaces this as `InvalidInput: Model '<name>' was not found`.
Fix the name in your configuration file or choose a model from the UI (`:model`).
## Local Models Missing After Cloud Setup
Owlen now queries both the local daemon and Ollama Cloud and shows them side-by-side in the picker. If you only see the cloud section (or a red `Local unavailable` banner):
1. **Confirm the daemon is reachable.** Run `ollama list` locally. If the command times out, restart the service (`ollama serve` or your systemd unit).
2. **Refresh the picker.** In the TUI, run `:models --local` to focus the local section. The footer explains when Owlen skipped a source because it was unreachable.
3. **Inspect the status line.** When the quick health probe fails, Owlen adds a `Local unavailable` / `Cloud unavailable` message instead of leaving the list blank. Use that hint to decide whether to restart Ollama or re-run `owlen cloud setup`.
4. **Keep the base URL local.** The cloud setup command no longer overrides `providers.ollama.base_url` unless `--force-cloud-base-url` is passed. If you changed it manually, edit `config.toml` or run `owlen config doctor` to restore the default `http://localhost:11434` value.
Once the daemon responds again, the picker will automatically merge the updated local list with the cloud catalogue.
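To make step 1 concrete, here is a minimal probe of the daemon's HTTP API, assuming the standard Ollama `GET /api/tags` model-listing route and the default `http://localhost:11434` base URL (override via `OLLAMA_HOST` if yours differs):

```shell
# Probe the local Ollama daemon; prints one status line either way.
base_url="${OLLAMA_HOST:-http://localhost:11434}"
if curl -sf --max-time 2 "$base_url/api/tags" >/dev/null 2>&1; then
  echo "local daemon reachable at $base_url"
else
  echo "local daemon unreachable at $base_url"
fi
```

If the probe reports unreachable, restart the daemon before re-opening the picker.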
## Terminal Compatibility Issues
Owlen is built with `ratatui`, which supports most modern terminals. However, if you are experiencing rendering issues, please check the following: