docs(llm): Add README.md for all LLM crates
crates/llm/anthropic/README.md (new file)
# Owlen Anthropic Provider

Anthropic Claude integration for the Owlen AI agent.

## Overview

This crate provides the implementation of the `LlmProvider` trait for Anthropic's Claude models. It handles the specific API requirements for Claude, including its unique tool calling format and streaming response structure.

## Features

- **Claude 3.5 Sonnet Support:** Optimized for the latest high-performance models.
- **Tool Use:** Native integration with Claude's tool calling capabilities.
- **Streaming:** Efficient real-time response generation using server-sent events.
- **Token Counting:** Accurate token estimation using Anthropic-specific logic.
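The crate's actual Anthropic-specific counting logic is not shown in this README. As a rough illustration of what token estimation means, a generic character-based heuristic (about four characters per token, a common approximation and NOT Anthropic's real tokenizer) could be sketched like this:

```rust
/// Rough token estimate: ~4 characters per token, rounded up.
/// Generic heuristic for illustration only; it is NOT the
/// Anthropic-specific logic this crate actually implements.
fn estimate_tokens(text: &str) -> usize {
    (text.chars().count() + 3) / 4
}

fn main() {
    let prompt = "Hello, Claude!";
    // 14 characters -> estimated 4 tokens
    println!("~{} tokens", estimate_tokens(prompt));
}
```

A heuristic like this is only useful for coarse context-window budgeting; exact counts require the provider's own tokenizer.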
## Configuration

Requires an `ANTHROPIC_API_KEY` to be set in the environment or configuration.
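The README says the key can come from either the environment or configuration; one plausible resolution order (the function name and precedence here are illustrative assumptions, not the crate's documented API) is to prefer an explicit config value and fall back to the environment:

```rust
use std::env;

/// Resolve the Anthropic API key: prefer an explicit config value,
/// fall back to the ANTHROPIC_API_KEY environment variable.
/// Name and precedence are illustrative, not the crate's actual API.
fn resolve_api_key(config_value: Option<String>) -> Option<String> {
    config_value.or_else(|| env::var("ANTHROPIC_API_KEY").ok())
}

fn main() {
    match resolve_api_key(None) {
        Some(_) => println!("API key found"),
        None => println!("Set ANTHROPIC_API_KEY before running"),
    }
}
```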
crates/llm/core/README.md (new file)
# Owlen LLM Core

The core abstraction layer for Large Language Model (LLM) providers in the Owlen AI agent.

## Overview

This crate defines the common traits and types used to integrate various AI providers. It enables the agent to be model-agnostic and switch between different backends at runtime.

## Key Components

- `LlmProvider` Trait: The primary interface that all provider implementations (Anthropic, OpenAI, Ollama) must satisfy.
- `ChatMessage`: Unified message structure for conversation history.
- `StreamChunk`: Standardized format for streaming LLM responses.
- `ToolCall`: Abstraction for tool invocation requests from the model.
- `TokenCounter`: Utilities for estimating token usage and managing context windows.
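The README names these types without showing their definitions, so the following is only a minimal sketch of how they could fit together; every field and signature here is an assumption (the real crate is presumably async and richer):

```rust
/// Sketch of the core types; all fields and signatures are
/// assumptions, and real streaming would be async rather than
/// the synchronous stand-in used here.
pub struct ChatMessage {
    pub role: String,
    pub content: String,
}

pub struct StreamChunk {
    pub delta: String,
    pub done: bool,
}

pub trait LlmProvider {
    fn name(&self) -> &str;
    /// Produce response chunks for a conversation.
    fn complete(&self, history: &[ChatMessage]) -> Vec<StreamChunk>;
}

/// Toy provider that echoes the last message, to show how a
/// backend would implement the trait.
struct EchoProvider;

impl LlmProvider for EchoProvider {
    fn name(&self) -> &str {
        "echo"
    }

    fn complete(&self, history: &[ChatMessage]) -> Vec<StreamChunk> {
        let text = history
            .last()
            .map(|m| m.content.clone())
            .unwrap_or_default();
        vec![StreamChunk { delta: text, done: true }]
    }
}

fn main() {
    let provider = EchoProvider;
    let history = vec![ChatMessage {
        role: "user".to_string(),
        content: "hi".to_string(),
    }];
    let chunks = provider.complete(&history);
    println!("{} -> {}", provider.name(), chunks[0].delta);
}
```

Keeping the trait small like this is what lets the agent treat Anthropic, OpenAI, and Ollama backends interchangeably.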
## Supported Providers

- **Anthropic:** Integration with Claude 3.5 models.
- **OpenAI:** Integration with GPT-4o models.
- **Ollama:** Support for local models (e.g., Llama 3, Qwen).
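Switching between these backends at runtime implies mapping a configured provider name to an implementation. The crate's actual mechanism is not documented here; one conventional sketch (enum and function names are assumptions) is:

```rust
/// Illustrative sketch of runtime backend selection; the enum and
/// function names are assumptions, not the crate's actual API.
#[derive(Debug, PartialEq)]
enum ProviderKind {
    Anthropic,
    OpenAi,
    Ollama,
}

fn parse_provider(name: &str) -> Option<ProviderKind> {
    match name.to_ascii_lowercase().as_str() {
        "anthropic" => Some(ProviderKind::Anthropic),
        "openai" => Some(ProviderKind::OpenAi),
        "ollama" => Some(ProviderKind::Ollama),
        _ => None,
    }
}

fn main() {
    // Case-insensitive lookup; unknown names yield None.
    println!("{:?}", parse_provider("Ollama"));
    println!("{:?}", parse_provider("gemini"));
}
```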
crates/llm/ollama/README.md (new file)
# Owlen Ollama Provider

Local LLM integration via Ollama for the Owlen AI agent.

## Overview

This crate enables the Owlen agent to use local models running via Ollama. This is ideal for privacy-focused workflows or development without an internet connection.

## Features

- **Local Execution:** No API keys required for basic local use.
- **Llama 3 / Qwen Support:** Compatible with popular open-source models.
- **Custom Model URLs:** Connect to Ollama instances running on non-standard ports or remote servers.
## Configuration

Requires a running Ollama instance. The default connection URL is `http://localhost:11434`.
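Combining the default URL with the custom-URL feature above suggests a fallback pattern like the following sketch; the override parameter stands in for whatever config key the crate actually uses, which this README does not specify:

```rust
/// Resolve the Ollama base URL, falling back to the documented
/// default. The override parameter is a stand-in for the crate's
/// real (unspecified) configuration mechanism.
fn ollama_base_url(override_url: Option<&str>) -> String {
    override_url
        .map(str::to_string)
        .unwrap_or_else(|| "http://localhost:11434".to_string())
}

fn main() {
    println!("{}", ollama_base_url(None));
    println!("{}", ollama_base_url(Some("http://192.168.1.50:11434")));
}
```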
crates/llm/openai/README.md (new file)
# Owlen OpenAI Provider

OpenAI GPT integration for the Owlen AI agent.

## Overview

This crate provides the implementation of the `LlmProvider` trait for OpenAI's models (e.g., GPT-4o). It supports the standard OpenAI chat completion API, including tool calling and streaming.

## Features

- **GPT-4o Support:** Reliable integration with flagship OpenAI models.
- **Function Calling:** Full support for OpenAI's tool/function calling mechanism.
- **Streaming:** Robust real-time response generation.
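With a streaming chat completion, the response arrives as incremental content deltas followed by a finish reason, and the caller accumulates them into the full message. A simplified sketch of that accumulation (the types here are illustrative stand-ins, not this crate's stream types):

```rust
/// Simplified stand-in for a streamed completion delta; the real
/// crate's stream types are not shown in its README.
struct Delta {
    content: Option<String>,
    finish_reason: Option<String>,
}

/// Accumulate streamed deltas into the full message text, stopping
/// at the first finish_reason.
fn assemble(deltas: Vec<Delta>) -> (String, Option<String>) {
    let mut text = String::new();
    let mut finish = None;
    for d in deltas {
        if let Some(c) = d.content {
            text.push_str(&c);
        }
        if d.finish_reason.is_some() {
            finish = d.finish_reason;
            break;
        }
    }
    (text, finish)
}

fn main() {
    let deltas = vec![
        Delta { content: Some("Hel".to_string()), finish_reason: None },
        Delta { content: Some("lo".to_string()), finish_reason: None },
        Delta { content: None, finish_reason: Some("stop".to_string()) },
    ];
    let (text, finish) = assemble(deltas);
    println!("{text} ({finish:?})");
}
```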
## Configuration

Requires an `OPENAI_API_KEY` to be set in the environment or configuration.