Add comprehensive documentation and examples for Owlen architecture and usage

- Include detailed architecture overview in `docs/architecture.md`.
- Add `docs/configuration.md`, detailing configuration file structure and settings.
- Provide a step-by-step provider implementation guide in `docs/provider-implementation.md`.
- Add frequently asked questions (FAQ) document in `docs/faq.md`.
- Create `docs/migration-guide.md` for future breaking changes and version upgrades.
- Introduce new examples in `examples/` showcasing basic chat, custom providers, and theming.
- Add a changelog (`CHANGELOG.md`) for tracking significant changes.
- Provide contribution guidelines (`CONTRIBUTING.md`) and a Code of Conduct (`CODE_OF_CONDUCT.md`).
2025-10-05 02:23:32 +02:00
parent 979347bf53
commit 5b202fed4f
26 changed files with 1108 additions and 328 deletions


@@ -0,0 +1,5 @@
# Owlen Anthropic
This crate is a placeholder for a future implementation of the `owlen_core::provider::Provider` trait for the Anthropic (Claude) API.
This provider is not yet implemented. Contributions are welcome!


@@ -0,0 +1,15 @@
# Owlen CLI
This crate is the command-line entry point for the Owlen application.
It is responsible for:
- Parsing command-line arguments.
- Loading the configuration.
- Initializing the providers.
- Starting the `owlen-tui` application (the sketch below ties these steps together).
There are two binaries:
- `owlen`: The main chat application.
- `owlen-code`: A specialized version for code-related tasks.
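
Putting those steps together, the startup flow might look roughly like the sketch below. This is illustrative only: `OllamaProvider::new` and `owlen_tui::chat_app::run` are assumed names, not APIs confirmed by this crate.

```rust
use std::sync::Arc;

use owlen_core::config::Config;

#[tokio::main]
async fn main() -> owlen_core::Result<()> {
    // 1. Parse command-line arguments (flag handling elided here).
    let _args: Vec<String> = std::env::args().collect();

    // 2. Load the configuration (the doctests in this repo use Config::default()).
    let config = Config::default();

    // 3. Initialize a provider; `OllamaProvider::new` is an assumed constructor.
    let provider = Arc::new(owlen_ollama::OllamaProvider::new("http://localhost:11434"));

    // 4. Hand control to the TUI; `chat_app::run` is an assumed entry point.
    owlen_tui::chat_app::run(provider, config).await
}
```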


@@ -0,0 +1,12 @@
# Owlen Core
This crate provides the core abstractions and data structures for the Owlen ecosystem.
It defines the essential traits and types for communicating with various LLM providers, managing sessions, and handling configuration; a short sketch of how these pieces fit together follows the list below.
## Key Components
- **`Provider` trait**: The fundamental abstraction for all LLM providers. Implement this trait to add support for a new provider.
- **`Session`**: Represents a single conversation, managing message history and context.
- **`Model`**: Defines the structure for LLM models, including their names and properties.
- **Configuration**: Handles loading and parsing of the application's configuration.
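
A minimal sketch, reusing only calls that appear in this crate's doctests (`SessionController::new`, `send_message`, `Config::default`); the `demo` wrapper itself is illustrative:

```rust
use std::sync::Arc;

use owlen_core::config::Config;
use owlen_core::provider::Provider;
use owlen_core::session::SessionController;
use owlen_core::types::ChatParameters;

// Any `Provider` implementation plus a `Config` is enough to drive
// a conversation through `SessionController`.
async fn demo(provider: Arc<dyn Provider>) -> owlen_core::Result<()> {
    let mut session = SessionController::new(provider, Config::default());

    // Each send appends the user message and the assistant reply to
    // the session's managed conversation history.
    let _outcome = session
        .send_message("Hello".to_string(), ChatParameters::default())
        .await?;
    Ok(())
}
```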


@@ -9,6 +9,72 @@ use std::sync::Arc;
pub type ChatStream = Pin<Box<dyn Stream<Item = Result<ChatResponse>> + Send>>;
/// Trait for LLM providers (Ollama, OpenAI, Anthropic, etc.)
///
/// # Example
///
/// ```
/// use owlen_core::provider::{Provider, ProviderRegistry, ChatStream};
/// use owlen_core::types::{ChatRequest, ChatResponse, ModelInfo, Message};
/// use owlen_core::Result;
///
/// // 1. Create a mock provider
/// struct MockProvider;
///
/// #[async_trait::async_trait]
/// impl Provider for MockProvider {
/// fn name(&self) -> &str {
/// "mock"
/// }
///
/// async fn list_models(&self) -> Result<Vec<ModelInfo>> {
/// Ok(vec![ModelInfo {
/// provider: "mock".to_string(),
/// name: "mock-model".to_string(),
/// ..Default::default()
/// }])
/// }
///
/// async fn chat(&self, request: ChatRequest) -> Result<ChatResponse> {
/// let content = format!("Response to: {}", request.messages.last().unwrap().content);
/// Ok(ChatResponse {
/// model: request.model,
/// message: Message { role: "assistant".to_string(), content, ..Default::default() },
/// ..Default::default()
/// })
/// }
///
/// async fn chat_stream(&self, _request: ChatRequest) -> Result<ChatStream> {
///     // Streaming is out of scope for this example.
///     unimplemented!();
/// }
///
/// async fn health_check(&self) -> Result<()> {
/// Ok(())
/// }
/// }
///
/// // 2. Use the provider with a registry
/// #[tokio::main]
/// async fn main() {
/// let mut registry = ProviderRegistry::new();
/// registry.register(MockProvider);
///
/// let provider = registry.get("mock").unwrap();
/// let models = provider.list_models().await.unwrap();
/// assert_eq!(models[0].name, "mock-model");
///
/// let request = ChatRequest {
/// model: "mock-model".to_string(),
/// messages: vec![Message { role: "user".to_string(), content: "Hello".to_string(), ..Default::default() }],
/// ..Default::default()
/// };
///
/// let response = provider.chat(request).await.unwrap();
/// assert_eq!(response.message.content, "Response to: Hello");
/// }
/// ```
#[async_trait::async_trait]
pub trait Provider: Send + Sync {
/// Get the name of this provider


@@ -20,7 +20,61 @@ pub enum SessionOutcome {
},
}
/// High-level controller encapsulating session state and provider interactions.
///
/// This is the main entry point for managing conversations and interacting with LLM providers.
///
/// # Example
///
/// ```
/// use std::sync::Arc;
/// use owlen_core::config::Config;
/// use owlen_core::provider::{Provider, ChatStream};
/// use owlen_core::session::{SessionController, SessionOutcome};
/// use owlen_core::types::{ChatRequest, ChatResponse, ChatParameters, Message, ModelInfo};
/// use owlen_core::Result;
///
/// // Mock provider for the example
/// struct MockProvider;
/// #[async_trait::async_trait]
/// impl Provider for MockProvider {
/// fn name(&self) -> &str { "mock" }
/// async fn list_models(&self) -> Result<Vec<ModelInfo>> { Ok(vec![]) }
/// async fn chat(&self, request: ChatRequest) -> Result<ChatResponse> {
/// Ok(ChatResponse {
/// model: request.model,
/// message: Message::assistant("Hello back!".to_string()),
/// ..Default::default()
/// })
/// }
/// async fn chat_stream(&self, _request: ChatRequest) -> Result<ChatStream> { unimplemented!() }
/// async fn health_check(&self) -> Result<()> { Ok(()) }
/// }
///
/// #[tokio::main]
/// async fn main() {
/// let provider = Arc::new(MockProvider);
/// let config = Config::default();
/// let mut session_controller = SessionController::new(provider, config);
///
/// // Send a message
/// let outcome = session_controller.send_message(
/// "Hello".to_string(),
/// ChatParameters { stream: false, ..Default::default() }
/// ).await.unwrap();
///
/// // Check the response
/// if let SessionOutcome::Complete(response) = outcome {
/// assert_eq!(response.message.content, "Hello back!");
/// }
///
/// // The conversation now contains both messages
/// let messages = session_controller.conversation().messages.clone();
/// assert_eq!(messages.len(), 2);
/// assert_eq!(messages[0].content, "Hello");
/// assert_eq!(messages[1].content, "Hello back!");
/// }
/// ```
pub struct SessionController {
provider: Arc<dyn Provider>,
conversation: ConversationManager,


@@ -0,0 +1,5 @@
# Owlen Gemini
This crate is a placeholder for a future implementation of the `owlen_core::provider::Provider` trait for the Google Gemini API.
This provider is not yet implemented. Contributions are welcome!


@@ -0,0 +1,9 @@
# Owlen Ollama
This crate provides an implementation of the `owlen_core::provider::Provider` trait for the [Ollama](https://ollama.ai) backend.
It allows Owlen to communicate with a local Ollama instance, sending requests to and receiving responses from locally run large language models.
## Configuration
To use this provider, you need to have Ollama installed and running. The default address is `http://localhost:11434`. You can configure this in your `config.toml` if your Ollama instance is running elsewhere.
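
For example, a `config.toml` override might look like the snippet below; the table and key names here are assumptions rather than this crate's confirmed schema, so consult `docs/configuration.md` for the real settings.

```toml
# Hypothetical section and key names -- see docs/configuration.md.
[providers.ollama]
base_url = "http://remote-host:11434"
```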


@@ -0,0 +1,5 @@
# Owlen OpenAI
This crate is a placeholder for a future implementation of the `owlen_core::provider::Provider` trait for the OpenAI API.
This provider is not yet implemented. Contributions are welcome!


@@ -0,0 +1,12 @@
# Owlen TUI
This crate contains all the logic for the terminal user interface (TUI) of Owlen.
It is built using the excellent [`ratatui`](https://ratatui.rs) library and is responsible for rendering the chat interface, handling user input, and managing the application state.
## Features
- **Chat View**: A scrollable view of the conversation history (sketched below).
- **Input Box**: A text input area for composing messages.
- **Model Selection**: An interface for switching between different models.
- **Event Handling**: A system for managing keyboard events and asynchronous operations.
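
As a rough illustration of the chat view and input box, the sketch below splits the screen with `ratatui`; the layout, names, and styling are illustrative, not Owlen's actual rendering code.

```rust
use ratatui::{
    layout::{Constraint, Direction, Layout},
    widgets::{Block, Borders, Paragraph},
    Frame,
};

// Illustrative only: a vertical split with a scrollable chat area on
// top and a fixed-height input box at the bottom.
fn draw(frame: &mut Frame, history: &str, input: &str) {
    let chunks = Layout::default()
        .direction(Direction::Vertical)
        .constraints([Constraint::Min(1), Constraint::Length(3)])
        .split(frame.area());

    let chat = Paragraph::new(history)
        .block(Block::default().title("Chat").borders(Borders::ALL));
    frame.render_widget(chat, chunks[0]);

    let input_box = Paragraph::new(input)
        .block(Block::default().title("Input").borders(Borders::ALL));
    frame.render_widget(input_box, chunks[1]);
}
```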


@@ -1,3 +1,17 @@
//! # Owlen TUI
//!
//! This crate contains all the logic for the terminal user interface (TUI) of Owlen.
//!
//! It is built using the excellent [`ratatui`](https://ratatui.rs) library and is responsible for
//! rendering the chat interface, handling user input, and managing the application state.
//!
//! ## Modules
//! - `chat_app`: The main application logic for the chat client.
//! - `code_app`: The main application logic for the experimental code client.
//! - `config`: TUI-specific configuration.
//! - `events`: Event handling for user input and other asynchronous actions.
//! - `ui`: The rendering logic for all TUI components.
pub mod chat_app;
pub mod code_app;
pub mod config;