# Owlen Configuration

Owlen uses a TOML file for configuration, allowing you to customize its behavior to your liking. This document details all the available options.

## File Location

Owlen resolves the configuration path using the platform-specific config directory:

| Platform | Location |
|----------|----------|
| Linux | `~/.config/owlen/config.toml` |
| macOS | `~/Library/Application Support/owlen/config.toml` |
| Windows | `%APPDATA%\owlen\config.toml` |

Use `owlen config init` to scaffold the latest default configuration (pass `--force` to overwrite an existing file), `owlen config path` to print the resolved location, and `owlen config doctor` to migrate or repair legacy files automatically.

## Configuration Precedence

Configuration values are resolved in the following order:

1. **Defaults**: The application has hard-coded default values for all settings.
2. **Configuration File**: Any values set in `config.toml` will override the defaults.
3. **Command-Line Arguments / In-App Changes**: Any settings changed during runtime (e.g., via the `:theme` or `:model` commands) will override the configuration file for the current session. Some of these changes (like theme and model) are automatically saved back to the configuration file.

Validation runs whenever the configuration is loaded or saved. Expect descriptive `Configuration error` messages if, for example, `remote_only` mode is set without any `[[mcp_servers]]` entries.

---

## General Settings (`[general]`)

These settings control the core behavior of the application.

- `default_provider` (string, default: `"ollama"`) The name of the provider to use by default.
- `default_model` (string, optional, default: `"llama3.2:latest"`) The default model to use for new conversations.
- `enable_streaming` (boolean, default: `true`) Whether to stream responses from the provider by default.
- `project_context_file` (string, optional, default: `"OWLEN.md"`) Path to a file whose content will be automatically injected as a system prompt. This is useful for providing project-specific context.
- `model_cache_ttl_secs` (integer, default: `60`) Time-to-live in seconds for the cached list of available models.

## UI Settings (`[ui]`)

These settings customize the look and feel of the terminal interface.

- `theme` (string, default: `"default_dark"`) The name of the theme to use. See the [Theming Guide](https://github.com/Owlibou/owlen/blob/main/themes/README.md) for available themes.
- `word_wrap` (boolean, default: `true`) Whether to wrap long lines in the chat view.
- `max_history_lines` (integer, default: `2000`) The maximum number of lines to keep in the scrollback buffer for the chat history.
- `role_label` (string, default: `"above"`) Controls how sender labels are rendered next to messages. Valid values are `"above"` (label on its own line), `"inline"` (label shares the first line of the message), and `"none"` (no label).
- `wrap_column` (integer, default: `100`) The column at which to wrap text if `word_wrap` is enabled.
- `input_max_rows` (integer, default: `5`) The maximum number of rows the input panel will expand to before it starts scrolling internally. Increase this value if you prefer to see more of long prompts while editing.
- `scrollback_lines` (integer, default: `2000`) The maximum number of rendered lines the chat view keeps in memory. Set to `0` to disable trimming entirely if you prefer unlimited history.
- `syntax_highlighting` (boolean, default: `false`) Enables lightweight syntax highlighting inside fenced code blocks when the terminal supports 256-color output.
- `keymap_profile` (string, optional) Set to `"vim"` or `"emacs"` to pick a built-in keymap profile. When omitted, the default Vim bindings are used. Runtime changes triggered via `:keymap ...` are persisted by updating this field.
- `keymap_path` (string, optional) Absolute path to a custom keymap definition. When present it overrides `keymap_profile`. See `crates/owlen-tui/keymap.toml` or `crates/owlen-tui/keymap_emacs.toml` for the expected TOML structure.
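Taken together, here is a minimal sketch of these two tables, assuming the defaults documented above. Every key falls back to its hard-coded default when omitted, so you only need to list the values you want to change.

```toml
# Sketch only: all values below mirror the documented defaults.
# Omit any key to keep its built-in default.
[general]
default_provider = "ollama"
default_model = "llama3.2:latest"
enable_streaming = true
project_context_file = "OWLEN.md"
model_cache_ttl_secs = 60

[ui]
theme = "default_dark"
word_wrap = true
wrap_column = 100
max_history_lines = 2000
role_label = "above"
input_max_rows = 5
scrollback_lines = 2000
syntax_highlighting = false
```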
## Storage Settings (`[storage]`)

These settings control how conversations are saved and loaded.

- `conversation_dir` (string, optional, default: platform-specific) The directory where conversation sessions are saved. If not set, a default directory is used:
  - **Linux**: `~/.local/share/owlen/sessions`
  - **macOS**: `~/Library/Application Support/owlen/sessions`
  - **Windows**: `%APPDATA%\owlen\sessions`
- `auto_save_sessions` (boolean, default: `true`) Whether to automatically save the session when the application exits.
- `max_saved_sessions` (integer, default: `25`) The maximum number of saved sessions to keep.
- `session_timeout_minutes` (integer, default: `120`) The number of minutes of inactivity before a session is considered for auto-saving as a new session.
- `generate_descriptions` (boolean, default: `true`) Whether to automatically generate a short summary of a conversation when saving it.

## Input Settings (`[input]`)

These settings control the behavior of the text input area.

- `multiline` (boolean, default: `true`) Whether to allow multi-line input.
- `history_size` (integer, default: `100`) The number of sent messages to keep in the input history (accessible with `Ctrl-Up/Down`).
- `tab_width` (integer, default: `4`) The number of spaces to insert when the `Tab` key is pressed.
- `confirm_send` (boolean, default: `false`) If true, requires an additional confirmation before sending a message.

## Provider Settings (`[providers]`)

This section contains a table for each provider you want to configure. Owlen now ships with four entries pre-populated: `ollama_local`, `ollama_cloud`, `openai`, and `anthropic`. Switch between them by updating `general.default_provider`, as shown in the sketch after the option list below.

```toml
[providers.ollama_local]
enabled = true
provider_type = "ollama"
base_url = "http://localhost:11434"
list_ttl_secs = 60
default_context_window = 8192

[providers.ollama_cloud]
enabled = false
provider_type = "ollama_cloud"
base_url = "https://ollama.com"
api_key_env = "OLLAMA_API_KEY"
hourly_quota_tokens = 50000
weekly_quota_tokens = 250000
list_ttl_secs = 60
default_context_window = 8192

[providers.openai]
enabled = false
provider_type = "openai"
base_url = "https://api.openai.com/v1"
api_key_env = "OPENAI_API_KEY"

[providers.anthropic]
enabled = false
provider_type = "anthropic"
base_url = "https://api.anthropic.com/v1"
api_key_env = "ANTHROPIC_API_KEY"
```

- `enabled` (boolean, default: `true`) Whether the provider should be considered when refreshing models or issuing requests.
- `provider_type` (string, required) Identifies which implementation to use. Local Ollama instances use `"ollama"`; the hosted service uses `"ollama_cloud"`. Third-party providers use their own identifiers (`"openai"`, `"anthropic"`, ...).
- `base_url` (string, optional) The base URL of the provider's API.
- `api_key` / `api_key_env` (string, optional) Authentication material. Prefer `api_key_env` to reference an environment variable so secrets remain outside of the config file.
- `list_ttl_secs` (integer, default: `60`) Time-to-live for the cached model list used by the picker. Increase it to reduce background traffic, or decrease it if you rotate models frequently.
- `default_context_window` (integer, optional) Expected maximum prompt length (tokens) for the provider. Owlen uses this to render the context usage gauge and warn when you approach the limit.
- `hourly_quota_tokens` / `weekly_quota_tokens` (integer, optional) Soft limits that drive the cloud usage gauge and `:limits` readout. Owlen tracks actual usage locally and compares it to these thresholds to raise 80% / 95% toasts.
- `extra` (table, optional) Any additional, provider-specific parameters can be added here.
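As a concrete sketch of switching providers, the snippet below enables the pre-populated `openai` entry from the block above and makes it the default. The values are taken from that block; the only assumption is that the `OPENAI_API_KEY` environment variable holds a valid key at runtime.

```toml
# Sketch only: activate the pre-populated OpenAI entry and make it the default.
# The secret stays out of config.toml; it is read from OPENAI_API_KEY instead.
[general]
default_provider = "openai"

[providers.openai]
enabled = true
provider_type = "openai"
base_url = "https://api.openai.com/v1"
api_key_env = "OPENAI_API_KEY"
```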
### Using Ollama Cloud

Owlen now separates the local daemon and the hosted API into two providers. Enable `ollama_cloud` once you have credentials, while keeping `ollama_local` available for on-device workloads. A minimal configuration looks like this:

```toml
[general]
default_provider = "ollama_local"

[providers.ollama_local]
enabled = true
base_url = "http://localhost:11434"

[providers.ollama_cloud]
enabled = true
base_url = "https://ollama.com"
api_key_env = "OLLAMA_API_KEY"
hourly_quota_tokens = 50000
weekly_quota_tokens = 250000
list_ttl_secs = 60
default_context_window = 8192
```

Key points to keep in mind:

- **Base URL normalisation** – Owlen accepts `https://ollama.com`, `https://ollama.com/api`, `https://ollama.com/v1`, and the legacy `https://api.ollama.com`, canonicalising them to the correct HTTPS host. Local deployments get the same treatment for `http://localhost:11434`, `/api`, or `/v1`. You only need to customise `base_url` when the service is proxied elsewhere.
- **Credential precedence** – The resolver prefers an inline `api_key` first, then `api_key_env`, then the process environment in the order `OLLAMA_API_KEY`, `OLLAMA_CLOUD_API_KEY`, and `OWLEN_OLLAMA_CLOUD_API_KEY`. Set exactly one source to avoid surprises. When the `ollama` (local) provider is selected, any API key is ignored.
- **Transport security** – Hosted requests must use HTTPS unless the development-only flag `OWLEN_ALLOW_INSECURE_CLOUD=1` is set. Leave this unset in production to avoid leaking credentials over cleartext channels.
- **Web search endpoint** – By default Owlen calls `/api/web_search`. If your deployment exposes the tool at `/v1/web/search` (Codex CLI style) or any other path, set `providers.ollama_cloud.extra.web_search_endpoint`. The session layer reuses the same normalisation logic, so whatever URL the provider accepts will also be used for tool consent checks.

The quota fields are optional and purely informational; they are never sent to the provider. Owlen uses them to display hourly/weekly token usage in the chat header, emit pre-limit toasts at 80% and 95%, and power the `:limits` command. Adjust the numbers to reflect the soft limits on your account, or remove the keys altogether if you do not want usage tracking.

If your deployment exposes the web search endpoint under a different path, set `web_search_endpoint` in the same table. The default (`/api/web_search`) matches the Ollama Cloud REST API documented in the web retrieval guide.
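For example, a deployment that serves the tool at the Codex-style path would carry the override in the provider's `extra` table. This is a sketch of the dotted key mentioned above; substitute whatever path your proxy actually exposes.

```toml
# Sketch: override the web search path for a non-default deployment.
# "/v1/web/search" is the Codex CLI style path mentioned above, not a requirement.
[providers.ollama_cloud.extra]
web_search_endpoint = "/v1/web/search"
```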
Tools must be registered with spec-compliant identifiers (for example `web_search`, `browser_fetch`); dotted names are rejected. Toggle the feature at runtime with `:web on` / `:web off` from the TUI or `owlen providers web --enable/--disable` on the CLI; both commands persist the change back to `config.toml`. See `docs/mcp-reference.md` for the full connector bundle, installation commands, and health checks.

> **Tip:** If the official `ollama signin` flow fails on Linux v0.12.3, follow the [Linux Ollama sign-in workaround](#linux-ollama-sign-in-workaround-v0123) in the troubleshooting guide to copy keys from a working machine or register them manually.

### Managing cloud credentials via CLI

Owlen now ships with an interactive helper for Ollama Cloud:

```bash
owlen cloud setup --api-key   # Configure your API key (uses stored value when omitted)
owlen cloud status            # Verify authentication/latency
owlen cloud models            # List the hosted models your account can access
owlen cloud logout            # Forget the stored API key
```

When `privacy.encrypt_local_data = true`, the API key is written to Owlen's encrypted credential vault instead of being persisted in plaintext. The vault key is generated and managed automatically (no passphrase prompts are ever shown), and subsequent invocations hydrate the runtime environment from that secure store. If encryption is disabled, the key is stored under `[providers.ollama_cloud].api_key` as before.
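A sketch of the two storage modes described above. The `[privacy]` table shown here is inferred from the dotted `privacy.encrypt_local_data` key and is not otherwise documented on this page, so treat its exact shape as an assumption.

```toml
# Sketch: with encryption enabled, the key captured by `owlen cloud setup`
# goes to the credential vault and never lands in config.toml.
[privacy]
encrypt_local_data = true

# With encryption disabled, the key would instead be persisted in plaintext:
# [providers.ollama_cloud]
# api_key = "<your-api-key>"   # prefer the vault or api_key_env where possible
```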