# Troubleshooting Guide
This guide covers common issues you may encounter while using Owlen.
## Connection Failures to Ollama
If you are unable to connect to a local Ollama instance, here are a few things to check:
1. **Is Ollama running?** Make sure the Ollama service is active. You can usually check this with `ollama list`.
2. **Is the address correct?** By default, Owlen tries to connect to `http://localhost:11434`. If your Ollama instance is running on a different address or port, you will need to configure it in your `config.toml` file.
3. **Firewall issues:** Ensure that your firewall is not blocking the connection.
4. **Health check warnings:** Owlen now performs a provider health check on startup. If it fails, the error message will include a hint (either "start owlen-mcp-llm-server" or "ensure Ollama is running"). Resolve the hint and restart.
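If Ollama listens on a non-default address (point 2 above), the override lives under the provider table in `config.toml`. A minimal sketch, with an example host and the key name used elsewhere in these docs:

```toml
# config.toml — point Owlen at a non-default Ollama endpoint
[providers.ollama]
base_url = "http://192.168.1.50:11434"  # default is http://localhost:11434
```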
## Model Not Found Errors
Owlen surfaces this as `InvalidInput: Model '<name>' was not found`.
1. **Local models:** Run `ollama list` to confirm the model name (e.g., `llama3:8b`). Use `ollama pull <model>` if it is missing.
2. **Ollama Cloud:** Names may differ from local installs. Double-check <https://ollama.com/models> and remove `-cloud` suffixes.
3. **Fallback:** Switch to `mode = "local_only"` temporarily in `[mcp]` if the remote server is slow to update.
Fix the name in your configuration file or choose a model from the UI (`:model`).
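As a sketch, the temporary fallback from step 3 looks like this in `config.toml` (the `[mcp]` table and `mode` key are the ones referenced above):

```toml
# config.toml — skip the remote server while it catches up
[mcp]
mode = "local_only"  # revert to your previous mode once the server is updated
```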
## Local Models Missing After Cloud Setup
Owlen now queries both the local daemon and Ollama Cloud and shows them side-by-side in the picker. If you only see the cloud section (or a red `Local unavailable` banner):
1. **Confirm the daemon is reachable.** Run `ollama list` locally. If the command times out, restart the service (`ollama serve` or your systemd unit).
2. **Refresh the picker.** In the TUI, run `:models --local` to focus the local section. The footer will explain if Owlen skipped the source because it was unreachable.
3. **Inspect the status line.** When the quick health probe fails, Owlen adds a `Local unavailable` / `Cloud unavailable` message instead of leaving the list blank. Use that hint to decide whether to restart Ollama or re-run `owlen cloud setup`.
4. **Keep the base URL local.** The cloud setup command no longer overrides `providers.ollama.base_url` unless `--force-cloud-base-url` is passed. If you changed it manually, edit `config.toml` or run `owlen config doctor` to restore the default `http://localhost:11434` value.
Owlen runs a background health worker every 30 seconds; once the daemon responds again, the picker automatically merges the updated local list with the cloud catalogue, no restart needed.
## Terminal Compatibility Issues
Owlen is built with `ratatui`, which supports most modern terminals. However, if you are experiencing rendering issues, please check the following:
- Your terminal supports Unicode.
- You are using a font that includes the characters being displayed.
- Try a different terminal emulator to see if the issue persists.
## Configuration File Problems
If Owlen is not behaving as you expect, there might be an issue with your configuration file.
- **Location:** Run `owlen config path` to print the exact location (Linux, macOS, or Windows). Owlen now follows platform defaults instead of hard-coding `~/.config`.
- **Syntax:** The configuration file is in TOML format. Make sure the syntax is correct.
- **Values:** Check that the values for your models, providers, and other settings are correct.
- **Automation:** Run `owlen config doctor` to migrate legacy settings (`mode = "legacy"`, missing providers) and validate the file before launching the TUI.
## Ollama Cloud Authentication Errors
If you see `Auth` errors when using the hosted service:
1. Run `owlen cloud setup` to register your API key (with `--api-key` for non-interactive use).
2. Use `owlen cloud status` to verify Owlen can authenticate against [Ollama Cloud](https://docs.ollama.com/cloud) with the canonical `https://ollama.com` base URL. Override the endpoint via `providers.ollama_cloud.base_url` only if your account is pointed at a custom region.
3. Ensure `providers.ollama_cloud.api_key` is set **or** export `OLLAMA_API_KEY` (legacy: `OLLAMA_CLOUD_API_KEY` / `OWLEN_OLLAMA_CLOUD_API_KEY`) when encryption is disabled. With `privacy.encrypt_local_data = true`, the key lives in the encrypted vault and is loaded automatically.
4. Confirm the key has access to the requested models. Recent accounts scope access per workspace; visit <https://ollama.com/models> while signed in to double-check the SKU name.
5. Owlen disables the cloud provider after consecutive 401/403 responses, posts a toast, and falls back to the last healthy local provider so you can keep chatting. Re-run `owlen cloud setup` and flip back with `:provider ollama_cloud` once the key is valid again.
6. Avoid pasting extra quotes or whitespace into the config file—`owlen config doctor` will normalise the entry for you.
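For reference, a minimal cloud provider entry might look like the following when local encryption is disabled (key names are the ones mentioned above; the API key value is a placeholder):

```toml
# config.toml — cloud credentials when privacy.encrypt_local_data = false
[providers.ollama_cloud]
base_url = "https://ollama.com"  # canonical endpoint; override only for custom regions
api_key = "<your-api-key>"       # or leave unset and export OLLAMA_API_KEY instead
```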
## Ollama Cloud Rate Limits (HTTP 429)
If the hosted API returns `HTTP 429 Too Many Requests`, Owlen keeps the provider enabled but surfaces a rate-limit toast and replays your message against the local provider so you do not lose work. To recover:
1. Check the cockpit header or run `:limits` to see your locally tracked hourly/weekly totals. When either bar crosses 80% Owlen warns you; 95% triggers a critical toast.
2. Raise or remove the soft quotas (`providers.ollama_cloud.hourly_quota_tokens`, `weekly_quota_tokens`) if your vendor allotment is higher, or pause cloud usage until the next window resets.
3. If you need the cloud-only model, retry after the provider's cooling-off period (Ollama currently resets the rate window hourly for most SKUs). Adjust `list_ttl_secs` upward if automated refreshes are consuming too many tokens.
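The soft quotas and list TTL from the steps above can be sketched as config entries. The values below are illustrative, not defaults, and the exact table housing `list_ttl_secs` may differ; pick numbers that match your vendor allotment:

```toml
# config.toml — soft quotas tracked locally by Owlen
[providers.ollama_cloud]
hourly_quota_tokens = 200000   # warning at 80%, critical toast at 95%
weekly_quota_tokens = 5000000
list_ttl_secs = 600            # refresh the model list less often to save tokens
```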
### Linux Ollama Sign-In Workaround (v0.12.3)
Ollama v0.12.3 on Linux ships with a broken `ollama signin` command. Until you can upgrade to ≥0.12.4, use one of the manual workflows below to register your key pair.
#### 1. Manual key copy
1. **Locate (or generate) keys on Linux**
```bash
ls -la /usr/share/ollama/.ollama/
sudo systemctl start ollama # start the service if the directory is empty
```
2. **Copy keys from a working Mac**
```bash
# On macOS (source machine)
cat ~/.ollama/id_ed25519.pub
cat ~/.ollama/id_ed25519
```
```bash
# On Linux (target machine)
sudo systemctl stop ollama
sudo mkdir -p /usr/share/ollama/.ollama
sudo tee /usr/share/ollama/.ollama/id_ed25519.pub <<'EOF'
<paste mac public key>
EOF
sudo tee /usr/share/ollama/.ollama/id_ed25519 <<'EOF'
<paste mac private key>
EOF
sudo chown -R ollama:ollama /usr/share/ollama/.ollama/
sudo chmod 600 /usr/share/ollama/.ollama/id_ed25519
sudo chmod 644 /usr/share/ollama/.ollama/id_ed25519.pub
sudo systemctl start ollama
```
#### 2. Manual web registration
1. Read the Linux public key:
```bash
sudo cat /usr/share/ollama/.ollama/id_ed25519.pub
```
2. Open <https://ollama.com/settings/keys> and paste the public key.
After either method, confirm access:
```bash
ollama list
```
#### Troubleshooting
- Permissions: `sudo chown -R ollama:ollama /usr/share/ollama/.ollama/` then re-apply `chmod` (`600` private, `644` public).
- Service status: `sudo systemctl status ollama` and `sudo journalctl -u ollama -f`.
- Alternate paths: Some distros run Ollama as a user process (`~/.ollama`). Copy the keys into that directory if `/usr/share/ollama/.ollama` is unused.
This workaround mirrors what `ollama signin` should do (register the key pair with Ollama Cloud) without waiting for the patched release. Once you upgrade to v0.12.4 or newer, the interactive sign-in command works again.
## Performance Tuning
If you are experiencing performance issues, you can try the following:
- **Reduce context size:** A smaller context size will result in faster responses from the LLM.
- **Use a less resource-intensive model:** Some models are faster but less capable than others.
- **Watch the header gauges:** The cockpit header now shows live context usage and cloud quota bands—if either bar turns amber or red, trim the prompt or switch providers before retrying.
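The sample configuration shipped with this release documents the context-window setting; as a hedged illustration only, a cap might look like the fragment below. The table and key name here are hypothetical placeholders, so verify the actual names in the configuration docs or the sample config before editing:

```toml
# config.toml — HYPOTHETICAL key name, check the sample config for the real one
[ui]
max_context_tokens = 8192  # smaller windows mean faster responses
```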
If you are still having trouble, please [open an issue](https://github.com/Owlibou/owlen/issues) on our GitHub repository.