# OWLEN
A terminal user interface (TUI) for interacting with Ollama models, similar to Claude Code or Gemini CLI but using Ollama as the backend.
## Features

- 🤖 **AI Chat Interface**: Interactive conversations with Ollama models
- 🔄 **Real-time Streaming**: See responses as they're generated
- 📝 **Multi-model Support**: Switch between different Ollama models
- ⌨️ **Vim-inspired Keys**: Intuitive keyboard navigation
- 🎨 **Rich UI**: Clean, modern terminal interface with syntax highlighting
- 📜 **Conversation History**: Keep track of your chat history
- 🚀 **Fast & Lightweight**: Built in Rust for performance
## Prerequisites
- Ollama installed and running
- Rust 1.70+ (for building from source)
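
A quick way to confirm both prerequisites from a shell:

```sh
ollama --version   # Ollama CLI installed?
rustc --version    # Rust toolchain 1.70 or newer?
```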
## Development Setup and Usage
To set up the project for development, follow these steps:
1. Clone the repository:

   ```sh
   git clone https://somegit.dev/Owlibou/owlen.git
   cd owlen
   ```

2. Build the project:

   ```sh
   cargo build
   ```

   This will compile all crates in the workspace.

3. Run tests:

   ```sh
   cargo test
   ```

4. Run the `owlen` (chat) application in development mode:

   ```sh
   cargo run --bin owlen
   ```

5. Run the `owlen-code` (code) application in development mode:

   ```sh
   cargo run --bin owlen-code
   ```
## Installation

### Build from Source
```sh
git clone https://github.com/yourusername/owlen
cd owlen
cargo build --release
```
This will build two executables: `owlen` (for general chat) and `owlen-code` (for code-focused interactions).
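
If you want to run the binaries from anywhere, one simple option (assuming a Unix-like system with `~/.local/bin` on your `PATH`) is to copy them out of the target directory:

```sh
cp target/release/owlen target/release/owlen-code ~/.local/bin/
```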
## Quick Start
1. **Start Ollama**: Make sure Ollama is running on your system:

   ```sh
   ollama serve
   ```

2. **Pull a Model**: Download a model to chat with:

   ```sh
   ollama pull llama3.2
   ```

3. **Run OWLEN (General Chat)**:

   ```sh
   ./target/release/owlen
   # Or using cargo:
   cargo run
   ```

4. **Run OWLEN (Code Mode)**:

   ```sh
   ./target/release/owlen-code
   # Or using cargo:
   cargo run --bin owlen-code
   ```
## Usage (General Chat Mode)

### Key Bindings

#### Normal Mode (Default)
- `i` - Enter input mode to type a message
- `m` - Open model selection menu
- `c` - Clear current conversation
- `r` - Refresh available models list
- `j`/`k` - Scroll up/down in chat history
- `↑`/`↓` - Scroll up/down in chat history
- `q` - Quit application
#### Input Mode

- `Enter` - Send message
- `Esc` - Cancel input and return to normal mode
- `←`/`→` - Move cursor left/right
- `Backspace` - Delete character

#### Model Selection Mode

- `↑`/`↓` - Navigate model list
- `Enter` - Select model
- `Esc` - Cancel selection
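
For contributors, the modal scheme above maps naturally onto a small match over terminal key events. The sketch below is illustrative only (the `Action` enum and function name are assumptions, not OWLEN's actual code) and uses `crossterm`, the event backend commonly paired with `ratatui`:

```rust
use crossterm::event::{KeyCode, KeyEvent};

/// Hypothetical action type for illustration.
enum Action {
    EnterInput,
    OpenModelMenu,
    ClearChat,
    RefreshModels,
    ScrollUp,
    ScrollDown,
    Quit,
    None,
}

/// Route a key event to an action while in Normal mode,
/// mirroring the bindings documented above.
fn handle_normal_mode(key: KeyEvent) -> Action {
    match key.code {
        KeyCode::Char('i') => Action::EnterInput,
        KeyCode::Char('m') => Action::OpenModelMenu,
        KeyCode::Char('c') => Action::ClearChat,
        KeyCode::Char('r') => Action::RefreshModels,
        KeyCode::Char('k') | KeyCode::Up => Action::ScrollUp,
        KeyCode::Char('j') | KeyCode::Down => Action::ScrollDown,
        KeyCode::Char('q') => Action::Quit,
        _ => Action::None,
    }
}
```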
### Interface Layout
```
┌─ OWLEN ──────────────────────────────┐
│ 🦉 OWLEN - AI Assistant              │
│ Model: llama3.2:latest               │
├──────────────────────────────────────┤
│                                      │
│ 👤 You:                              │
│ Hello! Can you help me with Rust?    │
│                                      │
│ 🤖 Assistant:                        │
│ Of course! I'd be happy to help      │
│ you with Rust programming...         │
│                                      │
├──────────────────────────────────────┤
│ Input (Press 'i' to start typing)    │
│                                      │
├──────────────────────────────────────┤
│ NORMAL | Ready                       │
│ Help: i:Input m:Model c:Clear q:Quit │
└──────────────────────────────────────┘
```
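
In `ratatui` terms, this screen is a vertical stack of four regions: a fixed-height header, a flexible chat area, an input box, and a status bar. A minimal layout sketch, with the exact heights being assumptions rather than OWLEN's real values:

```rust
use ratatui::layout::{Constraint, Direction, Layout, Rect};

/// Split the terminal area into the four regions shown above.
fn layout_regions(area: Rect) -> Vec<Rect> {
    Layout::default()
        .direction(Direction::Vertical)
        .constraints([
            Constraint::Length(3), // header: title and current model
            Constraint::Min(5),    // chat history takes the remaining space
            Constraint::Length(3), // input box
            Constraint::Length(2), // status bar and help line
        ])
        .split(area)
        .to_vec()
}
```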
## Code Mode (owlen-code)

The `owlen-code` binary provides a specialized interface for code-related LLM tasks. It is designed to be used alongside a code editor or IDE, letting you quickly get assistance with debugging, code generation, refactoring, and more.
### Key Bindings

(The bindings below currently mirror general chat mode; this section will be expanded if `owlen-code` gains distinct key bindings or features.)

#### Normal Mode (Default)

- `i` - Enter input mode to type a message
- `m` - Open model selection menu
- `c` - Clear current conversation
- `r` - Refresh available models list
- `j`/`k` - Scroll up/down in chat history
- `↑`/`↓` - Scroll up/down in chat history
- `q` - Quit application

#### Input Mode

- `Enter` - Send message
- `Esc` - Cancel input and return to normal mode
- `←`/`→` - Move cursor left/right
- `Backspace` - Delete character

#### Model Selection Mode

- `↑`/`↓` - Navigate model list
- `Enter` - Select model
- `Esc` - Cancel selection
## Configuration

The application connects to Ollama at `localhost:11434` by default. You can change this in the source code if needed.
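
Until the roadmap items for configuration files and custom hosts land, the usual Rust pattern for such an override looks like the sketch below. This is not OWLEN's actual code; the `OLLAMA_HOST` variable name is borrowed from Ollama's own server convention and is used here purely for illustration:

```rust
/// Sketch: resolve the Ollama base URL from an environment variable,
/// falling back to the default local endpoint.
fn ollama_base_url() -> String {
    std::env::var("OLLAMA_HOST")
        .unwrap_or_else(|_| "http://localhost:11434".to_string())
}

fn main() {
    println!("Connecting to {}", ollama_base_url());
}
```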
## Use Cases

### Code Assistant
Perfect for getting help with programming tasks:
- Debugging code issues
- Learning new programming concepts
- Code reviews and suggestions
- Architecture discussions
### General AI Chat
Use it as a general-purpose AI assistant:
- Writing assistance
- Research questions
- Creative projects
- Learning new topics
## Architecture

The application is built with a modular architecture, composed of several crates:

- `owlen-core`: Provides core traits and types for the LLM client, acting as the foundation.
- `owlen-ollama`: Implements the Ollama API client with streaming support, handling communication with Ollama models.
- `owlen-tui`: Manages the terminal user interface rendering and interactions using `ratatui`.
- `owlen-cli`: The command-line interface, which orchestrates the `owlen-tui` and `owlen-ollama` crates to provide the main `owlen` and `owlen-code` binaries.

Key source files:

- `main.rs` - Application entry point and terminal setup
- `app.rs` - Core application state and event handling
- `ollama.rs` - Ollama API client with streaming support
- `ui.rs` - Terminal UI rendering with `ratatui`
- `events.rs` - Terminal event handling and processing
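
To make the `owlen-ollama` responsibility concrete: Ollama's `POST /api/chat` endpoint streams one JSON object per line until it sends an object with `"done": true`. The following self-contained sketch shows one way to consume that stream. It is not OWLEN's actual code, and it assumes the `tokio` (features `macros`, `rt-multi-thread`), `reqwest` (features `json`, `stream`), `futures-util`, and `serde_json` crates:

```rust
use futures_util::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = serde_json::json!({
        "model": "llama3.2",
        "messages": [{ "role": "user", "content": "Hello!" }],
        "stream": true,
    });

    let resp = reqwest::Client::new()
        .post("http://localhost:11434/api/chat")
        .json(&body)
        .send()
        .await?;

    // Ollama streams newline-delimited JSON; buffer bytes and parse
    // each complete line as it arrives.
    let mut stream = resp.bytes_stream();
    let mut buf: Vec<u8> = Vec::new();
    while let Some(chunk) = stream.next().await {
        buf.extend_from_slice(&chunk?);
        while let Some(pos) = buf.iter().position(|&b| b == b'\n') {
            let line: Vec<u8> = buf.drain(..=pos).collect();
            if line.iter().all(|b| b.is_ascii_whitespace()) {
                continue;
            }
            let value: serde_json::Value = serde_json::from_slice(&line)?;
            if let Some(token) = value["message"]["content"].as_str() {
                print!("{token}"); // render each token as it arrives
            }
            if value["done"].as_bool() == Some(true) {
                println!();
                return Ok(());
            }
        }
    }
    Ok(())
}
```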
## Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Ollama - Local LLM inference
- Ratatui - Rust TUI library
- Claude Code - Inspiration for the interface
- Gemini CLI - CLI patterns
## Troubleshooting

### Ollama Not Found

```
Error: Failed to connect to Ollama
```

**Solution:** Make sure Ollama is installed and running:

```sh
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2
```
### No Models Available

```
No models available
```

**Solution:** Pull at least one model:

```sh
ollama pull llama3.2
# or
ollama pull codellama
# or
ollama pull mistral
```
### Connection Refused

```
Connection refused (os error 61)
```

**Solution:** Check that Ollama is running on the correct port:

```sh
# Default port is 11434
curl http://localhost:11434/api/tags
```

A successful response is a JSON object containing a `models` array.
## Roadmap

- Chat interface (`owlen`) (in progress)
- Code interface (`owlen-code`)
- Configuration file support
- Custom Ollama host configuration
- Session persistence
- Export conversations
- Syntax highlighting for code blocks
- Plugin system for custom commands
- Multiple conversation tabs
- Search within conversations