Owlen Ollama
This crate implements the owlen-core LLMProvider trait for the Ollama backend.
It lets Owlen communicate with a local Ollama instance, sending requests to and receiving responses from locally run large language models.

You can also target Ollama Cloud by pointing the provider at https://ollama.com (or https://api.ollama.com) and supplying an API key through your Owlen configuration or the OLLAMA_API_KEY / OLLAMA_CLOUD_API_KEY environment variables. When a key is supplied, the client automatically adds the required Bearer authorization header. It accepts either host without rewriting it, and expands inline environment references such as $OLLAMA_API_KEY if you prefer not to check the secret into your config file.

The generated configuration includes both providers.ollama and providers.ollama-cloud entries; switch between them by updating general.default_provider.
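A configuration along these lines selects the cloud provider. The table names providers.ollama, providers.ollama-cloud, and general.default_provider come from the generated configuration described above; the individual key names inside each table (host, api_key) are illustrative assumptions, not a documented schema.

```toml
[general]
# Select which provider entry Owlen uses by default.
default_provider = "ollama-cloud"

[providers.ollama]
# Local Ollama instance; no API key required.
host = "http://localhost:11434"     # key name is an assumption

[providers.ollama-cloud]
# Ollama Cloud; either https://ollama.com or https://api.ollama.com works.
host = "https://ollama.com"         # key name is an assumption
# Inline environment reference, expanded at load time, so the secret
# never needs to be written into the file.
api_key = "$OLLAMA_API_KEY"         # key name is an assumption
```

Alternatively, omit api_key entirely and export OLLAMA_API_KEY (or OLLAMA_CLOUD_API_KEY) in the environment before starting Owlen.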
Configuration
To use this provider, you need to have Ollama installed and running. The default address is http://localhost:11434. You can configure this in your config.toml if your Ollama instance is running elsewhere.
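For an Ollama instance running on another machine, the address can be overridden in config.toml. As above, the providers.ollama table name matches the generated configuration, while the host key name (and the example address) are assumptions for illustration.

```toml
[providers.ollama]
# Point Owlen at a remote Ollama instance;
# the default is http://localhost:11434.
host = "http://192.168.1.50:11434"   # key name is an assumption
```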