# OWLEN

A terminal user interface (TUI) for interacting with Ollama models, similar to `claude code` or `gemini-cli`, but using Ollama as the backend.

## Features

- 🤖 **AI Chat Interface**: Interactive conversations with Ollama models
- 🔄 **Real-time Streaming**: See responses as they're generated
- 📝 **Multi-model Support**: Switch between different Ollama models
- ⌨️ **Vim-inspired Keys**: Intuitive keyboard navigation
- 🎨 **Rich UI**: Clean, modern terminal interface with syntax highlighting
- 📜 **Conversation History**: Keep track of your chat history
- 🚀 **Fast & Lightweight**: Built in Rust for performance

## Prerequisites

- [Ollama](https://ollama.ai/) installed and running
- Rust 1.70+ (for building from source)

## Installation

### Build from Source

```bash
git clone https://github.com/yourusername/owlen
cd owlen
cargo build --release
```

This will build two executables: `owlen` (for general chat) and `owlen-code` (for code-focused interactions).

## Quick Start

1. **Start Ollama**: Make sure Ollama is running on your system:

   ```bash
   ollama serve
   ```

2. **Pull a Model**: Download a model to chat with:

   ```bash
   ollama pull llama3.2
   ```

3. **Run OWLEN (General Chat)**:

   ```bash
   ./target/release/owlen
   # Or using cargo:
   cargo run
   ```

4. **Run OWLEN (Code Mode)**:

   ```bash
   ./target/release/owlen-code
   # Or using cargo:
   cargo run --bin owlen-code
   ```

## Usage (General Chat Mode)

### Key Bindings

OWLEN is modal: keys do different things depending on the active mode. (A sketch of how this dispatch might look in code follows the bindings.)

#### Normal Mode (Default)

- `i` - Enter input mode to type a message
- `m` - Open model selection menu
- `c` - Clear current conversation
- `r` - Refresh available models list
- `j`/`k` - Scroll down/up in chat history
- `↑`/`↓` - Scroll up/down in chat history
- `q` - Quit application

#### Input Mode

- `Enter` - Send message
- `Esc` - Cancel input and return to normal mode
- `←`/`→` - Move cursor left/right
- `Backspace` - Delete character

#### Model Selection Mode

- `↑`/`↓` - Navigate model list
- `Enter` - Select model
- `Esc` - Cancel selection

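A rough sketch of that modal dispatch, assuming `crossterm`-style key events; the `Mode` and `App` shapes below are illustrative, not OWLEN's actual types:

```rust
use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};

// Hypothetical state -- OWLEN's real App struct may differ.
enum Mode {
    Normal,
    Input,
    ModelSelect,
}

struct App {
    mode: Mode,
    input: String,
    should_quit: bool,
}

impl App {
    // Route a key press according to the active mode, mirroring
    // the bindings listed above.
    fn handle_key(&mut self, key: KeyEvent) {
        match self.mode {
            Mode::Normal => match key.code {
                KeyCode::Char('i') => self.mode = Mode::Input,
                KeyCode::Char('m') => self.mode = Mode::ModelSelect,
                KeyCode::Char('q') => self.should_quit = true,
                _ => {}
            },
            Mode::Input => match key.code {
                KeyCode::Enter => self.mode = Mode::Normal, // send message here
                KeyCode::Esc => self.mode = Mode::Normal,
                KeyCode::Backspace => {
                    self.input.pop();
                }
                KeyCode::Char(c) => self.input.push(c),
                _ => {}
            },
            Mode::ModelSelect => match key.code {
                KeyCode::Enter | KeyCode::Esc => self.mode = Mode::Normal,
                _ => {}
            },
        }
    }
}

fn main() {
    let mut app = App {
        mode: Mode::Normal,
        input: String::new(),
        should_quit: false,
    };
    // Simulate pressing 'i' to enter input mode.
    app.handle_key(KeyEvent::new(KeyCode::Char('i'), KeyModifiers::NONE));
    assert!(matches!(app.mode, Mode::Input));
    assert!(!app.should_quit);
}
```
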
### Interface Layout

```
┌─ OWLEN ──────────────────────────────┐
│ 🦉 OWLEN - AI Assistant              │
│ Model: llama3.2:latest               │
├──────────────────────────────────────┤
│                                      │
│ 👤 You:                              │
│ Hello! Can you help me with Rust?    │
│                                      │
│ 🤖 Assistant:                        │
│ Of course! I'd be happy to help      │
│ you with Rust programming...         │
│                                      │
├──────────────────────────────────────┤
│ Input (Press 'i' to start typing)    │
│                                      │
├──────────────────────────────────────┤
│ NORMAL | Ready                       │
│ Help: i:Input m:Model c:Clear q:Quit │
└──────────────────────────────────────┘
```

## Code Mode (`owlen-code`)

The `owlen-code` binary provides a specialized interface for code-related LLM tasks. It is designed to be used alongside a code editor or IDE, so you can quickly get help with debugging, code generation, refactoring, and more.

### Key Bindings

Key bindings currently mirror general chat mode:

#### Normal Mode (Default)

- `i` - Enter input mode to type a message
- `m` - Open model selection menu
- `c` - Clear current conversation
- `r` - Refresh available models list
- `j`/`k` - Scroll down/up in chat history
- `↑`/`↓` - Scroll up/down in chat history
- `q` - Quit application

#### Input Mode

- `Enter` - Send message
- `Esc` - Cancel input and return to normal mode
- `←`/`→` - Move cursor left/right
- `Backspace` - Delete character

#### Model Selection Mode

- `↑`/`↓` - Navigate model list
- `Enter` - Select model
- `Esc` - Cancel selection

## Configuration

The application connects to Ollama at `http://localhost:11434` by default. There is no configuration file yet (see the Roadmap), so changing the host currently means editing the source code.

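A minimal sketch of what that edit looks like; the constant and client type below are illustrative assumptions, not OWLEN's actual source:

```rust
// Hypothetical sketch -- the constant name and client type are assumptions,
// not OWLEN's actual source; it only illustrates where the default lives.
const DEFAULT_OLLAMA_URL: &str = "http://localhost:11434";

struct OllamaClient {
    base_url: String,
}

impl OllamaClient {
    fn new(base_url: impl Into<String>) -> Self {
        Self { base_url: base_url.into() }
    }
}

fn main() {
    // Change this URL (or the constant above) to point at another host.
    let client = OllamaClient::new(DEFAULT_OLLAMA_URL);
    println!("Ollama endpoint: {}", client.base_url);
}
```
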
## Use Cases

### Code Assistant

Perfect for getting help with programming tasks:

- Debugging code issues
- Learning new programming concepts
- Code reviews and suggestions
- Architecture discussions

### General AI Chat

Use it as a general-purpose AI assistant:

- Writing assistance
- Research questions
- Creative projects
- Learning new topics

## Architecture

The application is built with a modular architecture composed of several crates (a rough sketch of the core abstraction follows the list):

- **owlen-core**: Core traits and types for the LLM client; the foundation the other crates build on.
- **owlen-ollama**: The Ollama API client with streaming support, handling communication with Ollama models.
- **owlen-tui**: Terminal UI rendering and interaction, built on `ratatui`.
- **owlen-cli**: The command-line layer that wires `owlen-tui` and `owlen-ollama` together into the main `owlen` and `owlen-code` binaries.

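To make the layering concrete, here is a rough sketch of the kind of interface `owlen-core` could define and `owlen-ollama` implement. The trait and method names are illustrative assumptions, not the crates' actual API:

```rust
use std::error::Error;

// Hypothetical shapes -- not owlen-core's actual API.
struct ChatMessage {
    role: String, // "user" or "assistant"
    content: String,
}

// A provider-agnostic client trait that a backend crate such as
// owlen-ollama would implement.
trait LlmClient {
    // List the models available on the backend.
    fn list_models(&self) -> Result<Vec<String>, Box<dyn Error>>;

    // Send a conversation and stream response chunks to a callback,
    // which is what lets the TUI render tokens as they arrive.
    fn chat_stream(
        &self,
        messages: &[ChatMessage],
        on_chunk: &mut dyn FnMut(&str),
    ) -> Result<(), Box<dyn Error>>;
}

// A trivial mock implementation, just to show the call shape.
struct MockClient;

impl LlmClient for MockClient {
    fn list_models(&self) -> Result<Vec<String>, Box<dyn Error>> {
        Ok(vec!["llama3.2".to_string()])
    }

    fn chat_stream(
        &self,
        _messages: &[ChatMessage],
        on_chunk: &mut dyn FnMut(&str),
    ) -> Result<(), Box<dyn Error>> {
        for chunk in ["Hello", ", ", "world!"] {
            on_chunk(chunk);
        }
        Ok(())
    }
}

fn main() -> Result<(), Box<dyn Error>> {
    let client = MockClient;
    let messages = [ChatMessage {
        role: "user".to_string(),
        content: "Hi".to_string(),
    }];
    client.chat_stream(&messages, &mut |chunk| print!("{chunk}"))?;
    println!();
    Ok(())
}
```
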
The main source modules map onto the same responsibilities (a sketch of how they fit together in the run loop follows the list):

- **main.rs** - Application entry point and terminal setup
- **app.rs** - Core application state and event handling
- **ollama.rs** - Ollama API client with streaming support
- **ui.rs** - Terminal UI rendering with ratatui
- **events.rs** - Terminal event handling and processing

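These modules come together in a conventional `ratatui` run loop: draw the current state, poll the terminal for events, and hand key presses to the app. The sketch below uses real `ratatui`/`crossterm` calls, but the `App` shape is an illustrative assumption, not OWLEN's actual struct:

```rust
use std::{io, time::Duration};

use crossterm::event::{self, Event, KeyCode, KeyEventKind};
use ratatui::{widgets::Paragraph, DefaultTerminal};

// Hypothetical minimal state -- OWLEN's real App struct is richer.
struct App {
    should_quit: bool,
}

fn run(terminal: &mut DefaultTerminal, app: &mut App) -> io::Result<()> {
    while !app.should_quit {
        // ui.rs: render the current state.
        terminal.draw(|frame| {
            frame.render_widget(Paragraph::new("OWLEN"), frame.area());
        })?;
        // events.rs: poll with a timeout so the loop can also
        // service streaming responses between key presses.
        if event::poll(Duration::from_millis(100))? {
            if let Event::Key(key) = event::read()? {
                // app.rs: update application state from the event.
                if key.kind == KeyEventKind::Press && key.code == KeyCode::Char('q') {
                    app.should_quit = true;
                }
            }
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut terminal = ratatui::init(); // enter the alternate screen
    let result = run(&mut terminal, &mut App { should_quit: false });
    ratatui::restore(); // restore the terminal on exit
    result
}
```
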
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

- [Ollama](https://ollama.ai/) - Local LLM inference
- [Ratatui](https://github.com/ratatui/ratatui) - Rust TUI library
- [Claude Code](https://claude.ai/code) - Inspiration for the interface
- [Gemini CLI](https://github.com/google-gemini/gemini-cli) - CLI patterns

## Troubleshooting

### Ollama Not Found

```
Error: Failed to connect to Ollama
```

**Solution**: Make sure Ollama is installed and running:

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama
ollama serve

# Pull a model
ollama pull llama3.2
```

### No Models Available

```
No models available
```

**Solution**: Pull at least one model:

```bash
ollama pull llama3.2
# or
ollama pull codellama
# or
ollama pull mistral
```

### Connection Refused

```
Connection refused (os error 61)
```

**Solution**: Check if Ollama is running on the correct port:

```bash
# Default port is 11434
curl http://localhost:11434/api/tags
```

## Roadmap

- [ ] Configuration file support
- [ ] Custom Ollama host configuration
- [ ] Session persistence
- [ ] Export conversations
- [ ] Syntax highlighting for code blocks
- [ ] Plugin system for custom commands
- [ ] Multiple conversation tabs
- [ ] Search within conversations