Add comprehensive documentation and examples for Owlen architecture and usage

- Include detailed architecture overview in `docs/architecture.md`.
- Add `docs/configuration.md`, detailing configuration file structure and settings.
- Provide a step-by-step provider implementation guide in `docs/provider-implementation.md`.
- Add frequently asked questions (FAQ) document in `docs/faq.md`.
- Create `docs/migration-guide.md` for future breaking changes and version upgrades.
- Add testing (`docs/testing.md`) and troubleshooting (`docs/troubleshooting.md`) guides.
- Introduce new examples in `examples/` showcasing basic chat, custom providers, and theming.
- Add a changelog (`CHANGELOG.md`) for tracking significant changes.
- Provide contribution guidelines (`CONTRIBUTING.md`) and a Code of Conduct (`CODE_OF_CONDUCT.md`).
2025-10-05 02:23:32 +02:00
parent 979347bf53
commit 5b202fed4f
26 changed files with 1108 additions and 328 deletions

docs/architecture.md Normal file

@@ -0,0 +1,71 @@
# Owlen Architecture
This document provides a high-level overview of the Owlen architecture. Its purpose is to help developers understand how the different parts of the application fit together.
## Core Concepts
The architecture is designed to be modular and extensible, centered around a few key concepts:
- **Providers**: Connect to various LLM APIs (Ollama, OpenAI, etc.).
- **Session**: Manages the conversation history and state.
- **TUI**: The terminal user interface, built with `ratatui`.
- **Events**: A system for handling user input and other events.
## Component Interaction
A simplified diagram of how components interact:
```
[User Input] -> [Event Loop] -> [Session Controller] -> [Provider]
      ^                                                     |
      |                                                     v
[TUI Renderer] <------------------------------------ [API Response]
```
1. **User Input**: The user interacts with the TUI, generating events (e.g., key presses).
2. **Event Loop**: The main event loop in `owlen-tui` captures these events.
3. **Session Controller**: The event is processed, and if it's a prompt, the session controller sends a request to the current provider.
4. **Provider**: The provider formats the request for the specific LLM API and sends it.
5. **API Response**: The LLM API returns a response.
6. **TUI Renderer**: The response is processed, the session state is updated, and the TUI is re-rendered to display the new information.
## Crate Breakdown
- `owlen-core`: Defines the core traits and data structures, like `Provider` and `Session`.
- `owlen-tui`: Contains all the logic for the terminal user interface, including event handling and rendering.
- `owlen-cli`: The command-line entry point, responsible for parsing arguments and starting the TUI.
- `owlen-ollama` / `owlen-openai` / etc.: Implementations of the `Provider` trait for specific services.
## Session Management
The session management system is responsible for tracking the state of a conversation. The two main structs are:
- **`Conversation`**: Found in `owlen-core`, this struct holds the messages of a single conversation, the model being used, and other metadata. It is a simple data container.
- **`SessionController`**: This is the high-level controller that manages the active conversation. It handles:
- Storing and retrieving conversation history via the `ConversationManager`.
- Managing the context that is sent to the LLM provider.
- Switching between different models.
- Sending requests to the provider and handling the responses (both streaming and complete).
When a user sends a message, the `SessionController` adds the message to the current `Conversation`, sends the updated message list to the `Provider`, and then adds the provider's response to the `Conversation`.
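The flow above can be summarized in a small, self-contained sketch. All of the type and method names below are illustrative stand-ins, not the actual `owlen-core` API:
```rust
// Illustrative only: stand-in types for the flow described above. The real
// owlen-core Conversation / SessionController / Provider APIs differ.
struct Message {
    role: &'static str,
    content: String,
}

#[derive(Default)]
struct Conversation {
    messages: Vec<Message>,
}

struct SessionController {
    conversation: Conversation,
}

impl SessionController {
    async fn send_to_provider(&self, _messages: &[Message]) -> String {
        // In Owlen this is where the active Provider formats the API request,
        // sends it, and returns (or streams) the model's reply.
        "stub reply".to_string()
    }

    async fn handle_user_message(&mut self, text: String) {
        // 1. Record the user's message in the active Conversation.
        self.conversation.messages.push(Message { role: "user", content: text });

        // 2. Send the updated message list to the current Provider.
        let reply = self.send_to_provider(&self.conversation.messages).await;

        // 3. Record the reply so it is part of the context for the next request.
        self.conversation.messages.push(Message { role: "assistant", content: reply });
    }
}
```
The important detail is the ordering: the user message is committed to the conversation before the provider is called, so the context sent to the LLM always includes it.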
## Event Flow
The event flow is managed by the `EventHandler` in `owlen-tui`. It operates in a loop, waiting for events and dispatching them to the active application (`ChatApp` or `CodeApp`); a simplified sketch of this loop follows the steps below.
1. **Event Source**: Events are primarily generated by `crossterm` from user keyboard input. Asynchronous events, like responses from a `Provider`, are also fed into the event system via a `tokio::sync::mpsc` channel.
2. **`EventHandler::next()`**: The main application loop calls this method to wait for the next event.
3. **Event Enum**: Events are defined in the `owlen_tui::events::Event` enum. This includes `Key` events, `Tick` events (for UI updates), and `Message` events (for async provider data).
4. **Dispatch**: The application's `run` method matches on the `Event` type and calls the appropriate handler function (e.g., `dispatch_key_event`).
5. **State Update**: The handler function updates the application state based on the event. For example, a key press might change the `InputMode` or modify the text in the input buffer.
6. **Re-render**: After the state is updated, the UI is re-rendered to reflect the changes.
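A compressed sketch of this loop, with a made-up `Event` enum and application state (the real `owlen_tui::events::Event` and `ChatApp` are richer), might look like:
```rust
// Self-contained sketch of the dispatch loop; names are illustrative only.
enum Event {
    Key(char),       // keyboard input forwarded from crossterm
    Tick,            // periodic UI refresh
    Message(String), // async data from a Provider (via the mpsc channel)
}

struct App {
    input: String,
    chat_log: Vec<String>,
    running: bool,
}

impl App {
    fn dispatch(&mut self, event: Event) {
        match event {
            Event::Key('q') => self.running = false,             // quit
            Event::Key(c) => self.input.push(c),                 // edit input buffer
            Event::Message(chunk) => self.chat_log.push(chunk),  // provider output
            Event::Tick => {}                                    // fall through to a redraw
        }
    }
}

fn run(app: &mut App, mut events: impl Iterator<Item = Event>) {
    while app.running {
        match events.next() {
            Some(event) => app.dispatch(event), // steps 4-5: dispatch and update state
            None => break,                      // event source closed
        }
        // step 6: re-render the UI here (see the rendering pipeline below).
    }
}
```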
## TUI Rendering Pipeline
The TUI is rendered on each iteration of the main application loop in `owlen-tui`. The process is as follows (a simplified render sketch appears after the list):
1. **`tui.draw()`**: The main loop calls this method, passing the current application state.
2. **`Terminal::draw()`**: This method, from `ratatui`, takes a closure that receives a `Frame`.
3. **UI Composition**: Inside the closure, the UI is built by composing `ratatui` widgets. The root UI is defined in `owlen_tui::ui::render`, which builds the main layout and calls other functions to render specific components (like the chat panel, input box, etc.).
4. **State-Driven Rendering**: Each rendering function takes the current application state as an argument. It uses this state to decide what and how to render. For example, the border color of a panel might change if it is focused.
5. **Buffer and Diff**: `ratatui` does not draw directly to the terminal. Instead, it renders the widgets to an in-memory buffer. It then compares this buffer to the previous buffer and only sends the necessary changes to the terminal. This is highly efficient and prevents flickering.
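As a rough illustration of steps 2-4, a state-driven render function in this style could look like the following. The layout, widgets, and `AppState` fields are invented for the example and are not the actual `owlen_tui::ui::render` code:
```rust
// Sketch only: a state-driven render function in the style described above.
use ratatui::layout::{Constraint, Direction, Layout};
use ratatui::style::{Color, Style};
use ratatui::widgets::{Block, Borders, Paragraph};
use ratatui::Frame;

struct AppState {
    chat_text: String,
    input_text: String,
    input_focused: bool,
}

fn render(frame: &mut Frame, state: &AppState) {
    // Split the screen into a chat panel and an input box.
    // frame.area() on recent ratatui releases; older ones call this frame.size().
    let areas = Layout::default()
        .direction(Direction::Vertical)
        .constraints([Constraint::Min(1), Constraint::Length(3)])
        .split(frame.area());

    // State-driven rendering: the input border changes colour when focused.
    let input_style = if state.input_focused {
        Style::default().fg(Color::Yellow)
    } else {
        Style::default()
    };

    frame.render_widget(
        Paragraph::new(state.chat_text.as_str())
            .block(Block::default().borders(Borders::ALL).title("Chat")),
        areas[0],
    );
    frame.render_widget(
        Paragraph::new(state.input_text.as_str()).block(
            Block::default()
                .borders(Borders::ALL)
                .title("Input")
                .border_style(input_style),
        ),
        areas[1],
    );
}
```
A function like this is what the closure passed to `Terminal::draw()` would call; `ratatui` then diffs the resulting buffer against the previous frame.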

docs/configuration.md Normal file

@@ -0,0 +1,118 @@
# Owlen Configuration
Owlen uses a TOML file for configuration, allowing you to customize its behavior to your liking. This document details all the available options.
## File Location
By default, Owlen looks for its configuration file at `~/.config/owlen/config.toml`.
A default configuration file is created on the first run if one doesn't exist.
## Configuration Precedence
Configuration values are resolved in the following order:
1. **Defaults**: The application has hard-coded default values for all settings.
2. **Configuration File**: Any values set in `config.toml` will override the defaults.
3. **Command-Line Arguments / In-App Changes**: Any settings changed during runtime (e.g., via the `:theme` or `:model` commands) will override the configuration file for the current session. Some of these changes (like theme and model) are automatically saved back to the configuration file.
---
## General Settings (`[general]`)
These settings control the core behavior of the application.
- `default_provider` (string, default: `"ollama"`)
The name of the provider to use by default.
- `default_model` (string, optional, default: `"llama3.2:latest"`)
The default model to use for new conversations.
- `enable_streaming` (boolean, default: `true`)
Whether to stream responses from the provider by default.
- `project_context_file` (string, optional, default: `"OWLEN.md"`)
Path to a file whose content will be automatically injected as a system prompt. This is useful for providing project-specific context.
- `model_cache_ttl_secs` (integer, default: `60`)
Time-to-live in seconds for the cached list of available models.
## UI Settings (`[ui]`)
These settings customize the look and feel of the terminal interface.
- `theme` (string, default: `"default_dark"`)
The name of the theme to use. See the [Theming Guide](https://github.com/Owlibou/owlen/blob/main/themes/README.md) for available themes.
- `word_wrap` (boolean, default: `true`)
Whether to wrap long lines in the chat view.
- `max_history_lines` (integer, default: `2000`)
The maximum number of lines to keep in the scrollback buffer for the chat history.
- `show_role_labels` (boolean, default: `true`)
Whether to show the `user` and `bot` role labels next to messages.
- `wrap_column` (integer, default: `100`)
The column at which to wrap text if `word_wrap` is enabled.
## Storage Settings (`[storage]`)
These settings control how conversations are saved and loaded.
- `conversation_dir` (string, optional, default: platform-specific)
The directory where conversation sessions are saved. If not set, a default directory is used:
- **Linux**: `~/.local/share/owlen/sessions`
- **Windows**: `%APPDATA%\owlen\sessions`
- **macOS**: `~/Library/Application Support/owlen/sessions`
- `auto_save_sessions` (boolean, default: `true`)
Whether to automatically save the session when the application exits.
- `max_saved_sessions` (integer, default: `25`)
The maximum number of saved sessions to keep.
- `session_timeout_minutes` (integer, default: `120`)
The number of minutes of inactivity before a session is considered for auto-saving as a new session.
- `generate_descriptions` (boolean, default: `true`)
Whether to automatically generate a short summary of a conversation when saving it.
## Input Settings (`[input]`)
These settings control the behavior of the text input area.
- `multiline` (boolean, default: `true`)
Whether to allow multi-line input.
- `history_size` (integer, default: `100`)
The number of sent messages to keep in the input history (accessible with `Ctrl-Up/Down`).
- `tab_width` (integer, default: `4`)
The number of spaces to insert when the `Tab` key is pressed.
- `confirm_send` (boolean, default: `false`)
If true, requires an additional confirmation before sending a message.
## Provider Settings (`[providers]`)
This section contains a table for each provider you want to configure. The key is the provider name (e.g., `ollama`).
```toml
[providers.ollama]
provider_type = "ollama"
base_url = "http://localhost:11434"
# api_key = "..."
```
- `provider_type` (string, required)
The type of the provider. Currently, only `"ollama"` is built-in.
- `base_url` (string, optional)
The base URL of the provider's API.
- `api_key` (string, optional)
The API key to use for authentication, if required.
- `extra` (table, optional)
Any additional, provider-specific parameters can be added here.

docs/faq.md Normal file

@@ -0,0 +1,42 @@
# Frequently Asked Questions (FAQ)
### What is the difference between `owlen` and `owlen-code`?
- `owlen` is the general-purpose chat client.
- `owlen-code` is an experimental client with a system prompt that is optimized for programming and code-related questions. In the future, it will include more code-specific features like file context and syntax highlighting.
### How do I use Owlen with a different terminal?
Owlen is designed to work with most modern terminals that support 256 colors and Unicode. If you experience rendering issues, you might try:
- **WezTerm**: Excellent cross-platform, GPU-accelerated terminal.
- **Alacritty**: Another fast, GPU-accelerated terminal.
- **Kitty**: A feature-rich terminal emulator.
If issues persist, please open an issue and let us know what terminal you are using.
### What is the setup for Windows?
The Windows build is currently experimental. For now, the recommended setup is to build from source with `cargo`:
1. Install Rust from [rustup.rs](https://rustup.rs).
2. Install Git for Windows.
3. Clone the repository: `git clone https://github.com/Owlibou/owlen.git`
4. Install: `cd owlen && cargo install --path crates/owlen-cli`
Official binary releases for Windows are planned for the future.
### What is the setup for macOS?
Similar to Windows, the recommended installation method for macOS is to build from source using `cargo`.
1. Install the Xcode command-line tools: `xcode-select --install`
2. Install Rust from [rustup.rs](https://rustup.rs).
3. Clone the repository: `git clone https://github.com/Owlibou/owlen.git`
4. Install: `cd owlen && cargo install --path crates/owlen-cli`
Official binary releases for macOS are planned.
### I'm getting connection failures to Ollama.
Please see the [Troubleshooting Guide](troubleshooting.md#connection-failures-to-ollama) for help with this common issue.

docs/migration-guide.md Normal file

@@ -0,0 +1,34 @@
# Migration Guide
This guide documents breaking changes between versions of Owlen and provides instructions on how to migrate your configuration or usage.
As Owlen is currently in its alpha phase (pre-v1.0), breaking changes may occur more frequently. We will do our best to document them here.
---
## Migrating from v0.1.x to v0.2.x (Example)
*This is a template for a future migration. No breaking changes have occurred yet.*
Version 0.2.0 introduces a new configuration structure for providers.
### Configuration File Changes
Previously, your `config.toml` might have looked like this:
```toml
# old config.toml (pre-v0.2.0)
ollama_base_url = "http://localhost:11434"
```
In v0.2.0, all provider settings are now nested under a `[providers]` table. You will need to update your `config.toml` to the new format:
```toml
# new config.toml (v0.2.0+)
[providers.ollama]
base_url = "http://localhost:11434"
```
### Action Required
Update your `~/.config/owlen/config.toml` to match the new structure. If you do not, Owlen will fall back to its default provider configuration.

docs/provider-implementation.md Normal file

@@ -0,0 +1,75 @@
# Provider Implementation Guide
This guide explains how to implement a new provider for Owlen. Providers are the components that connect to different LLM APIs.
## The `Provider` Trait
The core of the provider system is the `Provider` trait, located in `owlen-core`. Any new provider must implement this trait.
Here is a simplified version of the trait:
```rust
use async_trait::async_trait;
use owlen_core::model::Model;
use owlen_core::session::Session;

#[async_trait]
pub trait Provider {
    /// Returns the name of the provider.
    fn name(&self) -> &str;

    /// Sends the session to the provider and returns the response.
    async fn chat(&self, session: &Session, model: &Model) -> Result<String, anyhow::Error>;
}
```
## Creating a New Crate
1. **Create a new crate** in the `crates/` directory. For example, `owlen-myprovider`.
2. **Add dependencies** to your new crate's `Cargo.toml`. You will need `owlen-core`, `async-trait`, `tokio`, and any crates required for interacting with the new API (e.g., `reqwest`).
3. **Add the new crate to the workspace** in the root `Cargo.toml`.
## Implementing the Trait
In your new crate's `lib.rs`, you will define a struct for your provider and implement the `Provider` trait for it.
```rust
use async_trait::async_trait;
use owlen_core::model::Model;
use owlen_core::provider::Provider;
use owlen_core::session::Session;

pub struct MyProvider;

#[async_trait]
impl Provider for MyProvider {
    fn name(&self) -> &str {
        "my-provider"
    }

    async fn chat(&self, session: &Session, model: &Model) -> Result<String, anyhow::Error> {
        // 1. Get the conversation history from the session.
        let history = session.get_messages();
        // 2. Format the request for your provider's API.
        //    This might involve creating a JSON body with the messages.
        // 3. Send the request to the API using a client like reqwest.
        // 4. Parse the response from the API.
        // 5. Return the content of the response as a String.
        Ok("Hello from my provider!".to_string())
    }
}
```
## Integrating with Owlen
Once your provider is implemented, you will need to integrate it into the main Owlen application.
1. **Add your provider crate** as a dependency to `owlen-cli`.
2. **In `owlen-cli`, modify the provider registration** to include your new provider. This will likely involve adding it to a list of available providers that the user can select from in the configuration.
This guide provides a basic outline. For more detailed examples, you can look at the existing provider implementations, such as `owlen-ollama`.
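As a rough sketch of step 2, building on the `MyProvider` example above (the actual registration code in `owlen-cli` is different), selecting a provider by its configured `provider_type` could look like:
```rust
// Hypothetical registration sketch; owlen-cli's real wiring differs.
use owlen_core::provider::Provider;

fn build_provider(provider_type: &str) -> Result<Box<dyn Provider>, anyhow::Error> {
    match provider_type {
        // The built-in providers (e.g. the Ollama implementation) would be
        // matched here as well.
        "my-provider" => Ok(Box::new(MyProvider)),
        other => Err(anyhow::anyhow!("unknown provider type: {other}")),
    }
}
```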

docs/testing.md Normal file

@@ -0,0 +1,58 @@
# Testing Guide
This guide provides instructions on how to run existing tests and how to write new tests for Owlen.
## Running Tests
The entire test suite can be run from the root of the repository using the standard `cargo test` command.
```sh
# Run all tests in the workspace
cargo test --all
# Run tests for a specific crate
cargo test -p owlen-core
```
We use `cargo clippy` for linting and `cargo fmt` for formatting. Please run these before submitting a pull request.
```sh
cargo clippy --all -- -D warnings
cargo fmt --all -- --check
```
## Writing New Tests
Tests are located in the `tests/` directory within each crate, or in a `tests` module at the bottom of the file they are testing. We follow standard Rust testing practices.
### Unit Tests
For testing specific functions or components in isolation, use unit tests. These should be placed in a `#[cfg(test)]` module in the same file as the code being tested.
```rust
// in src/my_module.rs
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_add() {
        assert_eq!(add(2, 2), 4);
    }
}
```
### Integration Tests
For testing how different parts of the application work together, use integration tests. These should be placed in the `tests/` directory of the crate.
For example, to test the `SessionController`, you might create a mock `Provider` and simulate sending messages, as seen in the `SessionController` documentation example.
### TUI and UI Component Tests
Testing TUI components can be challenging. For UI logic in `owlen-core` (like `wrap_cursor`), we have detailed unit tests that manipulate the component's state and assert the results. For higher-level TUI components in `owlen-tui`, the focus is on testing the state management logic rather than the visual output.
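As an illustration of that state-focused style, a test can drive a small piece of input state directly and assert on the resulting values. The types below are invented for the example and do not mirror the real `owlen-tui` structs:
```rust
// Hypothetical input-state types, used only to show the testing style.
#[derive(Debug, PartialEq)]
enum InputMode {
    Normal,
    Editing,
}

struct InputState {
    mode: InputMode,
    buffer: String,
}

impl InputState {
    fn new() -> Self {
        Self { mode: InputMode::Normal, buffer: String::new() }
    }

    fn enter_editing(&mut self) {
        self.mode = InputMode::Editing;
    }

    fn push_char(&mut self, c: char) {
        // Typed characters only reach the buffer while in editing mode.
        if self.mode == InputMode::Editing {
            self.buffer.push(c);
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn typing_only_edits_in_editing_mode() {
        let mut state = InputState::new();
        state.push_char('x');
        assert!(state.buffer.is_empty());

        state.enter_editing();
        state.push_char('x');
        assert_eq!(state.buffer, "x");
    }
}
```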

docs/troubleshooting.md Normal file

@@ -0,0 +1,40 @@
# Troubleshooting Guide
This guide is intended to help you with common issues you might encounter while using Owlen.
## Connection Failures to Ollama
If you are unable to connect to a local Ollama instance, here are a few things to check:
1. **Is Ollama running?** Make sure the Ollama service is active. You can usually check this with `ollama list`.
2. **Is the address correct?** By default, Owlen tries to connect to `http://localhost:11434`. If your Ollama instance is running on a different address or port, you will need to configure it in your `config.toml` file.
3. **Firewall issues:** Ensure that your firewall is not blocking the connection.
## Model Not Found Errors
If you get a "model not found" error, it means that the model you are trying to use is not available. For local providers like Ollama, you can use `ollama list` to see the models you have downloaded. Make sure the model name in your Owlen configuration matches one of the available models.
## Terminal Compatibility Issues
Owlen is built with `ratatui`, which supports most modern terminals. However, if you are experiencing rendering issues, please check the following:
- Your terminal supports Unicode.
- You are using a font that includes the characters being displayed.
- Try a different terminal emulator to see if the issue persists.
## Configuration File Problems
If Owlen is not behaving as you expect, there might be an issue with your configuration file.
- **Location:** The configuration file is typically located at `~/.config/owlen/config.toml`.
- **Syntax:** The configuration file is in TOML format. Make sure the syntax is correct.
- **Values:** Check that the values for your models, providers, and other settings are correct.
## Performance Tuning
If you are experiencing performance issues, you can try the following:
- **Reduce context size:** A smaller context size will result in faster responses from the LLM.
- **Use a less resource-intensive model:** Some models are faster but less capable than others.
If you are still having trouble, please [open an issue](https://github.com/Owlibou/owlen/issues) on our GitHub repository.