OWLEN

A terminal user interface (TUI) for interacting with Ollama models, similar to Claude Code or gemini-cli, but using Ollama as the backend.

Features

  • 🤖 AI Chat Interface: Interactive conversations with Ollama models
  • 🔄 Real-time Streaming: See responses as they're generated
  • 📝 Multi-model Support: Switch between different Ollama models
  • ⌨️ Vim-inspired Keys: Intuitive keyboard navigation
  • 🎨 Rich UI: Clean, modern terminal interface with syntax highlighting
  • 📜 Conversation History: Keep track of your chat history
  • 🚀 Fast & Lightweight: Built in Rust for performance

Prerequisites

  • Ollama installed and running
  • Rust 1.70+ (for building from source)

Installation

Build from Source

git clone https://github.com/yourusername/owlen
cd owlen
cargo build --release

This will build two executables: owlen (for general chat) and owlen-code (for code-focused interactions).

Quick Start

  1. Start Ollama: Make sure Ollama is running on your system:

    ollama serve
    
  2. Pull a Model: Download a model to chat with:

    ollama pull llama3.2
    
  3. Run OWLEN (General Chat):

    ./target/release/owlen
    # Or using cargo:
    cargo run
    
  4. Run OWLEN (Code Mode):

    ./target/release/owlen-code
    # Or using cargo:
    cargo run --bin owlen-code
    

Usage (General Chat Mode)

Key Bindings

Normal Mode (Default)

  • i - Enter input mode to type a message
  • m - Open model selection menu
  • c - Clear current conversation
  • r - Refresh available models list
  • j/k - Scroll up/down in chat history
  • ↑/↓ - Scroll up/down in chat history
  • q - Quit application

Input Mode

  • Enter - Send message
  • Esc - Cancel input and return to normal mode
  • ←/→ - Move cursor left/right
  • Backspace - Delete character

Model Selection Mode

  • ↑/↓ - Navigate model list
  • Enter - Select model
  • Esc - Cancel selection

Interface Layout

┌─ OWLEN ──────────────────────────────┐
│ 🦉 OWLEN - AI Assistant              │
│ Model: llama3.2:latest               │
├──────────────────────────────────────┤
│                                      │
│ 👤 You:                              │
│   Hello! Can you help me with Rust?  │
│                                      │
│ 🤖 Assistant:                        │
│   Of course! I'd be happy to help    │
│   you with Rust programming...       │
│                                      │
├──────────────────────────────────────┤
│ Input (Press 'i' to start typing)    │
│                                      │
├──────────────────────────────────────┤
│ NORMAL | Ready                       │
│ Help: i:Input m:Model c:Clear q:Quit │
└──────────────────────────────────────┘

Code Mode (owlen-code)

The owlen-code binary provides a specialized interface for interacting with LLMs for code-related tasks. It is designed to be used in conjunction with a code editor or IDE, allowing you to quickly get assistance with debugging, code generation, refactoring, and more.

Key Bindings

Code mode currently shares the key bindings of the general chat mode:

  • i - Enter input mode to type a message
  • m - Open model selection menu
  • c - Clear current conversation
  • r - Refresh available models list
  • j/k - Scroll up/down in chat history
  • ↑/↓ - Scroll up/down in chat history
  • q - Quit application

Input Mode

  • Enter - Send message
  • Esc - Cancel input and return to normal mode
  • ←/→ - Move cursor left/right
  • Backspace - Delete character

Model Selection Mode

  • ↑/↓ - Navigate model list
  • Enter - Select model
  • Esc - Cancel selection

Configuration

The application connects to Ollama on localhost:11434 by default. You can modify this in the source code if needed.
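
In practice, changing the endpoint amounts to passing a different base URL wherever the client is constructed. A minimal sketch in Rust; OllamaClient and its constructor here are illustrative names, not the actual owlen-ollama API:

// Sketch only: the real type lives somewhere in owlen-ollama and
// may be shaped differently.
const DEFAULT_OLLAMA_URL: &str = "http://localhost:11434";

struct OllamaClient {
    base_url: String,
}

impl OllamaClient {
    fn new(base_url: impl Into<String>) -> Self {
        Self { base_url: base_url.into() }
    }
}

fn main() {
    // Default local endpoint...
    let _default = OllamaClient::new(DEFAULT_OLLAMA_URL);
    // ...or point the client at a remote Ollama host instead.
    let _remote = OllamaClient::new("http://192.168.1.50:11434");
}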

Use Cases

Code Assistant

Perfect for getting help with programming tasks:

  • Debugging code issues
  • Learning new programming concepts
  • Code reviews and suggestions
  • Architecture discussions

General AI Chat

Use it as a general-purpose AI assistant:

  • Writing assistance
  • Research questions
  • Creative projects
  • Learning new topics

Architecture

The application is built with a modular architecture composed of several crates; an illustrative sketch of the core client contract follows the list below.

  • owlen-core: Provides core traits and types for the LLM client, acting as the foundation.

  • owlen-ollama: Implements the Ollama API client with streaming support, handling communication with Ollama models.

  • owlen-tui: Manages the Terminal User Interface rendering and interactions using ratatui.

  • owlen-cli: The command-line interface, which orchestrates the owlen-tui and owlen-ollama crates to provide the main owlen and owlen-code binaries.

Within these crates, the key source files are:

  • main.rs - Application entry point and terminal setup

  • app.rs - Core application state and event handling

  • ollama.rs - Ollama API client with streaming support

  • ui.rs - Terminal UI rendering with ratatui

  • events.rs - Terminal event handling and processing
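
This split keeps the TUI independent of any particular backend: owlen-core defines the client contract and owlen-ollama implements it against the Ollama HTTP API. The sketch below shows what such a contract could look like; the trait and type names are guesses for illustration, not the real owlen-core definitions (assumes the futures crate for the Stream trait):

// Illustrative sketch of a backend-agnostic client trait.
// Assumes futures = "0.3" in Cargo.toml.
use std::pin::Pin;

use futures::Stream;

// One chunk of a streamed model reply.
pub struct ResponseChunk {
    pub content: String,
    pub done: bool,
}

pub type ClientError = String;

// Boxed chunk stream, so the TUI can render tokens as they arrive.
pub type ResponseStream =
    Pin<Box<dyn Stream<Item = Result<ResponseChunk, ClientError>> + Send>>;

pub trait LlmClient {
    // List the models the backend currently has available.
    fn list_models(&self) -> Result<Vec<String>, ClientError>;

    // Send a prompt to the given model and stream the reply.
    fn chat_stream(&self, model: &str, prompt: &str) -> ResponseStream;
}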

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Troubleshooting

Ollama Not Found

Error: Failed to connect to Ollama

Solution: Make sure Ollama is installed and running:

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2

No Models Available

No models available

Solution: Pull at least one model:

ollama pull llama3.2
# or
ollama pull codellama
# or
ollama pull mistral

Connection Refused

Connection refused (os error 61)

Solution: Check if Ollama is running on the correct port:

# Default port is 11434
curl http://localhost:11434/api/tags
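
The same check can also be made from Rust if curl is unavailable. A standalone sketch using the reqwest crate (with its "blocking" feature enabled), not part of owlen itself:

// Quick connectivity probe against the Ollama tags endpoint.
// Requires reqwest with the "blocking" feature in Cargo.toml.
fn main() {
    let url = "http://localhost:11434/api/tags";
    match reqwest::blocking::get(url) {
        Ok(resp) => println!("Ollama reachable: HTTP {}", resp.status()),
        Err(e) => eprintln!("Failed to reach Ollama at {url}: {e}"),
    }
}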

Roadmap

  • Configuration file support
  • Custom Ollama host configuration
  • Session persistence
  • Export conversations
  • Syntax highlighting for code blocks
  • Plugin system for custom commands
  • Multiple conversation tabs
  • Search within conversations