Add comprehensive documentation and examples for Owlen architecture and usage

- Include detailed architecture overview in `docs/architecture.md`.
- Add `docs/configuration.md`, detailing configuration file structure and settings.
- Provide a step-by-step provider implementation guide in `docs/provider-implementation.md`.
- Add frequently asked questions (FAQ) document in `docs/faq.md`.
- Create `docs/migration-guide.md` for future breaking changes and version upgrades.
- Introduce new examples in `examples/` showcasing basic chat, custom providers, and theming.
- Add a changelog (`CHANGELOG.md`) for tracking significant changes.
- Provide contribution guidelines (`CONTRIBUTING.md`) and a Code of Conduct (`CODE_OF_CONDUCT.md`).
commit 5b202fed4f, parent 979347bf53, 2025-10-05 02:23:32 +02:00
26 changed files with 1108 additions and 328 deletions


@@ -0,0 +1,9 @@
# Owlen Ollama

This crate provides an implementation of the `owlen-core::Provider` trait for the [Ollama](https://ollama.ai) backend.
It allows Owlen to communicate with a local Ollama instance, sending requests to and receiving responses from locally run large language models.
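Under the hood, a provider like this ultimately makes HTTP calls against Ollama's REST API. The sketch below shows roughly what a single non-streaming completion looks like on the wire, using `reqwest` (with the `blocking` and `json` features) and `serde_json`. The actual `owlen-core::Provider` trait signature is not reproduced here, and the helper name `generate` is purely illustrative.

```rust
use serde_json::{json, Value};

/// Minimal sketch of one non-streaming call to Ollama's /api/generate
/// endpoint. This is not the crate's real API, just the wire exchange.
fn generate(base_url: &str, model: &str, prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    // Ollama accepts a JSON body naming the model and prompt;
    // `stream: false` requests a single complete response.
    let body = json!({ "model": model, "prompt": prompt, "stream": false });
    let resp: Value = client
        .post(format!("{base_url}/api/generate"))
        .json(&body)
        .send()?
        .error_for_status()?
        .json()?;
    // The completed text comes back in the `response` field.
    Ok(resp["response"].as_str().unwrap_or_default().to_string())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let text = generate("http://localhost:11434", "llama3.2", "Why is the sky blue?")?;
    println!("{text}");
    Ok(())
}
```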
## Configuration

To use this provider, you need Ollama installed and running. By default the provider connects to `http://localhost:11434`; if your Ollama instance runs elsewhere, set its address in your `config.toml`.
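For example, a `config.toml` pointing at a remote instance might look like the sketch below. The table and key names here are illustrative assumptions, not the crate's confirmed schema; consult `docs/configuration.md` for the authoritative settings.

```toml
# Illustrative keys only -- see docs/configuration.md for the real schema.
[providers.ollama]
base_url = "http://192.168.1.50:11434"  # Ollama running on another machine
model = "llama3.2"                      # default model to request
```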