Owlen Ollama

This crate provides an implementation of the owlen-core::Provider trait for the Ollama backend.

It allows Owlen to communicate with a local Ollama instance, sending requests to and receiving responses from locally run large language models.

You can also target Ollama Cloud by pointing the provider at https://ollama.com (or https://api.ollama.com) and supplying an API key through your Owlen configuration or the OLLAMA_API_KEY / OLLAMA_CLOUD_API_KEY environment variables. When a key is supplied, the client automatically adds the required Bearer authorization header, accepts either host without rewriting it, and expands inline environment references such as $OLLAMA_API_KEY if you prefer not to check the secret into your config file. The generated configuration includes both providers.ollama and providers.ollama-cloud entries; switch between them by updating general.default_provider.
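As a rough sketch, assuming the generated config.toml gives each provider entry base_url and api_key fields (those field names are illustrative, not confirmed by this README), a cloud setup could look like this:

    [general]
    # Select which provider entry Owlen uses.
    default_provider = "ollama-cloud"

    [providers.ollama-cloud]
    # Either Ollama Cloud host is accepted without rewriting.
    base_url = "https://ollama.com"        # assumed field name
    # Inline environment references are expanded, so the secret
    # does not have to live in the file itself.
    api_key = "$OLLAMA_API_KEY"            # assumed field name

Alternatively, leave the key out of the file and export OLLAMA_API_KEY (or OLLAMA_CLOUD_API_KEY) in the environment; the Bearer authorization header is added automatically whenever a key is available.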

Configuration

To use this provider, you need Ollama installed and running. The default address is http://localhost:11434; if your Ollama instance is running elsewhere, you can set a different address in your config.toml.
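For example, assuming the local provider entry exposes a base_url field (an illustrative name, as above), pointing Owlen at an Ollama instance on another host might look like:

    [providers.ollama]
    # Override the default http://localhost:11434 address.
    base_url = "http://192.168.1.50:11434"   # assumed field name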