# Owlen Ollama

This crate implements the `owlen_core::Provider` trait for the [Ollama](https://ollama.ai) backend. It allows Owlen to communicate with a local Ollama instance, sending requests to and receiving responses from locally run large language models.
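
Under the hood, the provider speaks Ollama's HTTP API. For orientation only (this illustrates the backend protocol, not this crate's own interface), a minimal non-streaming generation request with `reqwest` looks roughly like this; the model name is just an example:

```rust
// Sketch of a raw request against Ollama's /api/generate endpoint.
// Requires the `reqwest` (with its `json` feature), `serde_json`, and `tokio` crates.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let resp = client
        .post("http://localhost:11434/api/generate")
        .json(&json!({
            "model": "llama3", // example model; use one you have pulled locally
            "prompt": "Why is the sky blue?",
            "stream": false
        }))
        .send()
        .await?
        .json::<serde_json::Value>()
        .await?;

    // With "stream": false, the generated text arrives in the "response" field.
    println!("{}", resp["response"].as_str().unwrap_or_default());
    Ok(())
}
```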

## Configuration

To use this provider, you need Ollama installed and running. The default address is `http://localhost:11434`; if your Ollama instance runs elsewhere, set its address in your `config.toml`.
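
For example, a `config.toml` entry might look like the following sketch (the table and key names here are assumptions, not confirmed schema; consult Owlen's configuration documentation for the authoritative names):

```toml
# Hypothetical keys; see docs/configuration.md for the real schema.
[providers.ollama]
base_url = "http://localhost:11434"
```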