OWLEN

Terminal-native assistant for running local language models with a comfortable TUI.

What Is OWLEN?

OWLEN is a Rust-powered, terminal-first interface for interacting with local large language models. It provides a responsive chat workflow that runs against Ollama with a focus on developer productivity, vim-style navigation, and seamless session management—all without leaving your terminal.

Alpha Status

This project is currently in alpha and under active development. Core features are functional, but expect occasional bugs and breaking changes. Feedback, bug reports, and contributions are very welcome!

Screenshots

OWLEN TUI Layout

The OWLEN interface features a clean, multi-panel layout with vim-inspired navigation. See more screenshots in the images/ directory.

Features

  • Vim-style Navigation: Normal, editing, visual, and command modes.
  • Streaming Responses: Real-time token streaming from Ollama.
  • Advanced Text Editing: Multi-line input, history, and clipboard support.
  • Session Management: Save, load, and manage conversations.
  • Code Side Panel: Switch to code mode (:mode code) and open files inline with :open <path> for LLM-assisted coding.
  • Theming System: 10 built-in themes and support for custom themes.
  • Modular Architecture: Extensible provider system (Ollama today, additional providers on the roadmap).
  • Guided Setup: owlen config doctor upgrades legacy configs and verifies your environment in seconds.

Security & Privacy

Owlen is designed to keep data local by default while still allowing controlled access to remote tooling.

  • Local-first execution: All LLM calls flow through the bundled MCP LLM server which talks to a local Ollama instance. If the server is unreachable, Owlen stays usable in “offline mode” and surfaces clear recovery instructions.
  • Sandboxed tooling: Code execution runs in Docker according to the MCP Code Server settings, and future releases will extend this to other OS-level sandboxes (sandbox-exec on macOS, Windows job objects).
  • Session storage: Conversations are stored under the platform data directory and can be encrypted at rest. Set privacy.encrypt_local_data = true in config.toml to enable AES-GCM storage protected by a user-supplied passphrase (see the example after this list).
  • Network access: No telemetry is sent. The only outbound requests occur when you explicitly enable remote tooling (e.g., web search) or configure a cloud LLM provider. Each tool is opt-in via privacy and tools configuration sections.
  • Config migrations: Every saved config.toml carries a schema version and is upgraded automatically; deprecated keys trigger warnings so security-related settings are not silently ignored.
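
As a sketch of how these settings fit together, the fragment below enables encrypted session storage. Only privacy.encrypt_local_data is documented above; the [tools] key is hypothetical and shown merely to illustrate the opt-in pattern.

[privacy]
# Encrypt conversations at rest with AES-GCM, unlocked by a user-supplied passphrase
encrypt_local_data = true

[tools]
# Remote tooling is opt-in; a hypothetical toggle for web search might look like:
# web_search = false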

Getting Started

Prerequisites

  • Rust 1.75+ and Cargo.
  • A running Ollama instance.
  • A terminal that supports 256 colors.

Installation

Linux & macOS

The recommended way to install on Linux and macOS is to clone the repository and install using cargo.

git clone https://github.com/Owlibou/owlen.git
cd owlen
cargo install --path crates/owlen-cli

Note for macOS: this method works today, but official binary releases for macOS are planned.

Windows

The Windows build has not been thoroughly tested yet. Installation is possible via the same cargo install method, but it is considered experimental at this time.

From Unix hosts you can run scripts/check-windows.sh to ensure the code base still compiles for Windows (rustup will install the required target automatically).

Running OWLEN

Make sure Ollama is running, then launch the application:

owlen

If you built from source without installing, you can run it with:

./target/release/owlen

Updating

Owlen does not auto-update. Run owlen upgrade at any time to print the recommended manual steps (pull the repository and reinstall with cargo install --path crates/owlen-cli --force). Arch Linux users can update via the owlen-git AUR package.
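
In practice, the steps it prints amount to the following, run from your clone of the repository:

git pull
cargo install --path crates/owlen-cli --force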

Using the TUI

OWLEN uses a modal, vim-inspired interface. Press F1 (available from any mode) or ? in Normal mode to view the help screen with all keybindings.

  • Normal Mode: Navigate with h/j/k/l, w/b, gg/G.
  • Editing Mode: Enter with i or a. Send messages with Enter.
  • Command Mode: Enter with :. Access commands like :quit, :save, :theme.
  • Tutorial Command: Type :tutorial any time for a quick summary of the most important keybindings.

Documentation

For more detailed information, see the sections below and the documents referenced in them.

Configuration

OWLEN stores its configuration in the standard platform-specific config directory:

Platform   Location
Linux      ~/.config/owlen/config.toml
macOS      ~/Library/Application Support/owlen/config.toml
Windows    %APPDATA%\owlen\config.toml

Use owlen config path to print the exact location on your machine and owlen config doctor to migrate a legacy config automatically. You can also add custom themes alongside the config directory (e.g., ~/.config/owlen/themes/).
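
Settings are grouped into TOML tables such as ui, privacy, and tools. For example, ui.scrollback_lines caps how many chat lines are kept in memory (default 2000; 0 disables trimming):

[ui]
# Maximum chat lines kept in memory; 0 disables trimming (default: 2000)
scrollback_lines = 2000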

See themes/README.md for more details on theming.

Testing

Owlen uses standard Rust tooling for verification. Run the full test suite with:

cargo test

Unit tests cover the command palette state machine, agent response parsing, and key MCP abstractions. Formatting and lint checks can be run with cargo fmt --all and cargo clippy respectively.
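
Both checks can be run before opening a pull request:

cargo fmt --all
cargo clippy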

Roadmap

Upcoming milestones focus on feature parity with modern code assistants while keeping Owlen local-first:

  1. Phase 11 MCP client enhancements: owlen mcp add/list/remove, resource references (@github:issue://123), and MCP prompt slash commands.
  2. Phase 12 Approval & sandboxing: Three-tier approval modes plus platform-specific sandboxes (Docker, sandbox-exec, Windows job objects).
  3. Phase 13 Project documentation system: Automatic OWLEN.md generation, contextual updates, and nested project support.
  4. Phase 15 Provider expansion: OpenAI, Anthropic, and other cloud providers layered onto the existing Ollama-first architecture.

See AGENTS.md for the long-form roadmap and design notes.

Contributing

Contributions are highly welcome! Please see our Contributing Guide for details on how to get started, including our code style, commit conventions, and pull request process.

License

This project is licensed under the GNU Affero General Public License v3.0. See the LICENSE file for details. For commercial or proprietary integrations that cannot adopt AGPL, please reach out to the maintainers to discuss alternative licensing arrangements.
