# Owlen Ollama Provider
Local LLM integration via Ollama for the Owlen AI agent.
## Overview
This crate enables the Owlen agent to use local models running via Ollama. This is ideal for privacy-focused workflows or development without an internet connection.
## Features
- **Local Execution**: No API keys required for basic local use.
- **Llama 3 / Qwen Support**: Compatible with popular open-source models.
- **Custom Model URLs**: Connect to Ollama instances running on non-standard ports or remote servers.
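To illustrate the custom-URL feature, here is a minimal sketch of how a provider might resolve its Ollama base URL. The function name, the precedence order (explicit setting, then an environment override such as `OLLAMA_HOST`, then the default), and the trailing-slash normalization are assumptions for illustration, not this crate's actual API.

```rust
/// Hypothetical helper (not this crate's real API): pick the Ollama base
/// URL from an explicit setting, an environment override, or the default.
fn resolve_base_url(explicit: Option<&str>, env_host: Option<&str>) -> String {
    explicit
        .or(env_host)
        .unwrap_or("http://localhost:11434")
        .trim_end_matches('/') // normalize trailing slashes
        .to_string()
}

fn main() {
    // An explicit URL takes precedence, and trailing slashes are stripped.
    assert_eq!(
        resolve_base_url(Some("http://gpu-box:11434/"), None),
        "http://gpu-box:11434"
    );
    // An environment override beats the default.
    assert_eq!(
        resolve_base_url(None, Some("http://10.0.0.5:11434")),
        "http://10.0.0.5:11434"
    );
    // With no overrides, fall back to the local default.
    assert_eq!(resolve_base_url(None, None), "http://localhost:11434");
    println!("ok");
}
```

Keeping the resolution in one small pure function like this makes the precedence rules easy to unit-test without a running server.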
## Configuration
Requires a running Ollama instance. The default connection URL is `http://localhost:11434`; if nothing is listening there, start the server with `ollama serve`.
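Since the provider depends on a reachable Ollama instance, a quick pre-flight check can give a clearer error than a failed request later. The sketch below is not part of this crate's API; it just probes the default address with a short TCP connect timeout using only the standard library.

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// Illustrative pre-flight check (not this crate's API): returns true if
/// something is accepting TCP connections at `host:port`.
fn ollama_reachable(host: &str, port: u16) -> bool {
    format!("{host}:{port}")
        .parse::<SocketAddr>()
        .ok()
        .and_then(|addr| TcpStream::connect_timeout(&addr, Duration::from_millis(500)).ok())
        .is_some()
}

fn main() {
    if ollama_reachable("127.0.0.1", 11434) {
        println!("Ollama appears to be listening on the default port");
    } else {
        println!("No listener at 127.0.0.1:11434 -- is `ollama serve` running?");
    }
}
```

Note that a successful TCP connect only shows that some process is listening on the port; it does not confirm the process is Ollama.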