# OWLEN

> Terminal-native assistant for running local language models with a comfortable TUI.

## Pre-Alpha Status
- This project is currently **pre-alpha** and under active development.
- Expect breaking changes, missing features, and occasional rough edges.
- Feedback, bug reports, and ideas are very welcome while we shape the roadmap.

## What Is OWLEN?
OWLEN is a Rust-powered, terminal-first interface for interacting with local large
language models. It focuses on a responsive chat workflow that runs against
[Ollama](https://ollama.com/) and surfaces the tools needed to manage sessions,
inspect project context, and iterate quickly without leaving your shell.

### Current Highlights
- Chat-first terminal UI built with `ratatui` and `crossterm`.
- Out-of-the-box Ollama integration with streaming responses.
- Persistent configuration, model caching, and session statistics.
- Project-aware context loading (reads `OWLEN.md` when present; see the example below).
- Experimental coding assistant mode (opt-in build feature).
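
When a project root contains an `OWLEN.md`, OWLEN reads it and loads the text as
extra context for the session. A hypothetical example of such a file (the layout
below is illustrative, not a required schema):

```markdown
# Project Context

This repository is a Rust workspace for a terminal chat client.

## Conventions
- Use `anyhow` for error handling in binaries.
- Keep UI code inside `crates/owlen-tui`.
```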

## Getting Started

### Prerequisites
- Rust 1.75+ and Cargo (`rustup` recommended).
- A running Ollama instance with at least one model pulled
  (defaults to `http://localhost:11434`); see the commands below.
- A terminal that supports 256 colours.
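
If you still need to pull a model, the standard Ollama CLI covers it (the model
name here is just an example):

```bash
# Start the server if it is not already running as a service.
ollama serve &

# Pull a model to chat with.
ollama pull llama3.2

# Confirm the model is available and the API is reachable.
ollama list
curl http://localhost:11434/api/tags
```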

### Clone and Build
```bash
git clone https://github.com/Owlibou/owlen.git
cd owlen
cargo build -p owlen-cli
```

### Run the Chat Client
Make sure Ollama is running, then launch:

```bash
cargo run -p owlen-cli --bin owlen
```
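
If you prefer having the binary on your `PATH` instead of going through Cargo
each time, a regular `cargo install` from the workspace should work (path taken
from the repository layout below; only the default `owlen` binary is installed):

```bash
cargo install --path crates/owlen-cli
```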

### (Optional) Try the Coding Client
The coding-focused TUI is experimental and ships behind a feature flag:

```bash
cargo run -p owlen-cli --features code-client --bin owlen-code
```

## Using the TUI
- `i` / `Enter` – focus the input box (when it is not focused).
- `Enter` – send the current message (while editing).
- `Shift+Enter` / `Ctrl+J` – insert a newline while editing.
- `m` – open the model selector.
- `n` – start a fresh conversation.
- `c` – clear the current chat history.
- `h` – open inline help.
- `q` – quit.

The status line surfaces hints, error messages, and streaming progress.

## Configuration
OWLEN stores configuration in `~/.config/owlen/config.toml`. The file is created
on first run and can be edited to customise behaviour:

```toml
[general]
default_model = "llama3.2:latest"
enable_streaming = true
project_context_file = "OWLEN.md"

[providers.ollama]
provider_type = "ollama"
base_url = "http://localhost:11434"
```

Additional sections cover UI preferences, file limits, and storage paths. Each
client persists its latest selections back to this file on exit.
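
As an illustration only, those extra sections look roughly like the sketch below;
the key names are hypothetical, so treat the generated `config.toml` as the
authoritative reference:

```toml
# Hypothetical keys -- check your generated config.toml for the real names.
[ui]
theme = "default"

[storage]
history_dir = "~/.local/share/owlen"
max_file_size_kb = 256
```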

## Repository Layout
- `crates/owlen-core` – shared types, configuration, and session orchestration.
- `crates/owlen-ollama` – provider implementation that speaks to the Ollama API.
- `crates/owlen-tui` – `ratatui`-based UI, helpers, and event handling.
- `crates/owlen-cli` – binaries (`owlen`, `owlen-code`) wiring everything together.
- `tests` – integration-style smoke tests.

## Development Notes
- Standard Rust workflows apply (`cargo fmt`, `cargo clippy`, `cargo test`).
- The codebase uses async Rust (`tokio`) for event handling and streaming.
- Configuration and chat history are cached locally; wipe `~/.config/owlen` to reset.
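
A typical pre-commit pass, plus the reset mentioned above:

```bash
# Format, lint, and test the whole workspace.
cargo fmt --all
cargo clippy --all-targets
cargo test

# Start from a clean slate: removes config, model cache, and chat history.
rm -rf ~/.config/owlen
```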

## Roadmap
- [x] Add autoscroll.
- [x] Push user message before loading the LLM response.
- [ ] Add support for "thinking" models.
- [ ] Add theming options.
- [ ] Provide proper configuration UX.
- [ ] Add chat-management tooling.
- [ ] Add support for streaming responses.
- [ ] Add support for streaming chat history.
- [ ] Add support for streaming model statistics.
- [ ] Reactivate and polish the coding client:
  - [ ] In-project code navigation.
  - [ ] Code completion.
  - [ ] Code formatting.
  - [ ] Code linting.
  - [ ] Code refactoring.
  - [ ] Code snippets.
- [ ] Add support for an in-project config folder.
- [ ] Add support for more local LLM providers.
- [ ] Add support for cloud LLM providers.

## Contributing
Contributions are encouraged, but expect a moving target while we stabilise the
core experience. Opening an issue before a sizeable change helps coordinate the
roadmap.

## License
License terms are still being finalised for the pre-alpha release.