Rewrite README: streamline introduction, improve structure, and update usage details. Revise sections for prerequisites, configuration, and roadmap.

commit e2b52dd084 (parent 44884d8627)
Date: 2025-09-28 02:06:17 +02:00

README.md

# OWLEN

> Terminal-native assistant for running local language models with a comfortable TUI.

## Pre-Alpha Status

- This project is currently **pre-alpha** and under active development.
- Expect breaking changes, missing features, and occasional rough edges.
- Feedback, bug reports, and ideas are very welcome while we shape the roadmap.

## What Is OWLEN?

OWLEN is a Rust-powered, terminal-first interface for interacting with local large
language models. It focuses on a responsive chat workflow that runs against
[Ollama](https://ollama.com/) and surfaces the tools needed to manage sessions,
inspect project context, and iterate quickly without leaving your shell.

### Current Highlights

- Chat-first terminal UI built with `ratatui` and `crossterm`.
- Out-of-the-box Ollama integration with streaming responses.
- Persistent configuration, model caching, and session statistics.
- Project-aware context loading (reads `OWLEN.md` when present; see the sketch after this list).
- Experimental coding assistant mode (opt-in build feature).
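
`OWLEN.md` is an ordinary Markdown file picked up from your project when present. Its exact contents are up to you; as a rough sketch (the headings below are illustrative, not a required schema), it might look like:

```markdown
# Project Context

Rust workspace using tokio for async; prefer idiomatic async patterns.
Key crates: owlen-core (shared types), owlen-tui (terminal UI).
```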

## Getting Started

### Prerequisites

- Rust 1.75+ and Cargo (`rustup` recommended).
- A running Ollama instance with at least one model pulled (defaults to `http://localhost:11434`).
- A terminal that supports 256 colours.
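
If you still need a model, the standard Ollama commands apply; for example (the model name is just an example):

```bash
# Download a model and confirm the API endpoint is reachable.
ollama pull llama3.2
curl http://localhost:11434/api/tags
```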

### Clone and Build

```bash
git clone https://github.com/Owlibou/owlen.git
cd owlen
cargo build -p owlen-cli
```

### Run the Chat Client

Make sure Ollama is running, then launch:

```bash
cargo run -p owlen-cli --bin owlen
```

### (Optional) Try the Coding Client

The coding-focused TUI is experimental and ships behind a feature flag:

```bash
cargo run -p owlen-cli --features code-client --bin owlen-code
```
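
If you would rather run an installed binary, a release build should work as well (a sketch; the feature name matches the command above):

```bash
# Build with the experimental feature enabled, then run the binary directly.
cargo build -p owlen-cli --release --features code-client
./target/release/owlen-code
```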

## Using the TUI

- `i` / `Enter`: focus the input box.
- `Enter`: send the current message (while the input is focused).
- `Shift+Enter` / `Ctrl+J`: insert a newline while editing.
- `m`: open the model selector.
- `n`: start a fresh conversation.
- `c`: clear the current chat history.
- `h`: open inline help.
- `q`: quit.

The status line surfaces hints, error messages, and streaming progress.

## Configuration

OWLEN stores configuration in `~/.config/owlen/config.toml`. The file is created
on first run and can be edited to customise behaviour:

```toml
[general]
default_model = "llama3.2:latest"
enable_streaming = true
project_context_file = "OWLEN.md"

[providers.ollama]
provider_type = "ollama"
base_url = "http://localhost:11434"
```

Additional sections cover UI preferences, file limits, and storage paths. Each
client persists its latest selections back to this file on exit.

## Repository Layout

- `crates/owlen-core`: shared types, configuration, and session orchestration.
- `crates/owlen-ollama`: provider implementation that speaks to the Ollama API.
- `crates/owlen-tui`: `ratatui`-based UI, helpers, and event handling.
- `crates/owlen-cli`: binaries (`owlen`, `owlen-code`) wiring everything together.
- `tests`: integration-style smoke tests.
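
Since this is a Cargo workspace, crates can be built or tested individually:

```bash
# Target a single crate with Cargo's package flag.
cargo build -p owlen-ollama
cargo test -p owlen-core
```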

## Development Notes

- Standard Rust workflows apply (`cargo fmt`, `cargo clippy`, `cargo test`).
- The codebase uses async Rust (`tokio`) for event handling and streaming.
- Configuration and chat history are cached locally; wipe `~/.config/owlen` to reset.
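
For example, to reset everything to defaults:

```bash
# Deletes saved configuration and cached chat history; OWLEN recreates
# the file with defaults on the next launch.
rm -rf ~/.config/owlen
```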

## Roadmap

- [ ] Add autoscroll.
- [ ] Push user message before loading the LLM response.
- [ ] Add support for "thinking" models.
- [ ] Add theming options.
- [ ] Provide proper configuration UX.
- [ ] Add chat-management tooling.
- [ ] Reactivate and polish the coding client.
- [ ] Add support for streaming responses.
- [ ] Add support for streaming chat history.
- [ ] Add support for streaming model statistics.
- [ ] Add support for in-project code navigation.
- [ ] Add support for code completion.
- [ ] Add support for code formatting.
- [ ] Add support for code linting.
- [ ] Add support for code refactoring.
- [ ] Add support for code snippets.
- [ ] Add support for an in-project config folder.
- [ ] Add support for more local LLM providers.
- [ ] Add support for cloud LLM providers.

## Contributing

Contributions are encouraged, but expect a moving target while we stabilise the
core experience. Opening an issue before a sizeable change helps coordinate the
roadmap.

## License

License terms are still being finalised for the pre-alpha release.