# Owlen Ollama

This crate provides an implementation of the `owlen_core::Provider` trait for the [Ollama](https://ollama.ai) backend. It allows Owlen to communicate with a local Ollama instance, sending requests and receiving responses from locally run large language models.

## Configuration

To use this provider, you need Ollama installed and running. The provider connects to `http://localhost:11434` by default; if your Ollama instance listens elsewhere, set the address in your `config.toml`.
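
The exact configuration keys depend on Owlen's config schema, which this README does not spell out; the following is a minimal sketch assuming a per-provider table with a `base_url` key (both the table name and the key are illustrative, not confirmed):

```toml
# config.toml -- hypothetical keys; adjust to Owlen's actual schema.
[providers.ollama]
# Point this at your Ollama instance if it is not on the default address.
base_url = "http://192.168.1.50:11434"
```

You can check that the instance is reachable with `curl http://localhost:11434`; a running Ollama server responds with `Ollama is running`.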