# Security Policy

## Supported Versions
We are currently in a pre-release phase, so only the latest version is actively supported. As we move towards a 1.0 release, this policy will be updated with specific version support.
| Version | Supported |
|---|---|
| < 1.0 | ✅ |
## Reporting a Vulnerability
The Owlen team and community take all security vulnerabilities seriously. Thank you for improving the security of our project. We appreciate your efforts and responsible disclosure and will make every effort to acknowledge your contributions.
To report a security vulnerability, please email the project lead at security@owlibou.com with a detailed description of the issue, the steps to reproduce it, and any affected versions.
You will receive a response from us within 48 hours. If the issue is confirmed, we will release a patch as soon as possible; the timeline depends on the complexity of the fix.
Please do not report security vulnerabilities through public GitHub issues.
## Design Overview
Owlen ships with a local-first architecture:
- Process isolation – The TUI speaks to language models through a separate MCP LLM server. Tool execution (code, web, filesystem) occurs in dedicated MCP processes so a crash or hang cannot take down the UI.
- Sandboxing – The MCP Code Server executes snippets in Docker containers. Upcoming releases will extend this to platform sandboxes (`sandbox-exec` on macOS, Windows job objects) as described in our roadmap.
- Network posture – No telemetry is emitted. The application only reaches the network when a user explicitly enables remote tools (web search, remote MCP servers) or configures cloud providers. All tools require allow-listing in `config.toml`.
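As an illustration of the allow-listing posture described above, a `config.toml` might look like the sketch below. The section and key names here are assumptions for illustration, not the actual Owlen schema:

```toml
# Illustrative config.toml sketch -- section and key names are
# assumptions, not the documented Owlen configuration schema.
[tools]
# Only allow-listed tools may execute; anything absent from this
# list is rejected by the MCP servers.
allow = ["filesystem", "code"]

[tools.web_search]
enabled = false  # remote tools stay off until a user opts in
```

The point of the sketch is the default-deny shape: the network-reaching tools are disabled unless explicitly enabled and allow-listed.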
## Data Handling
- Sessions – Conversations are stored in the user’s data directory (`~/.local/share/owlen` on Linux, equivalent paths on macOS/Windows). Enable `privacy.encrypt_local_data = true` to wrap the session store in AES-GCM encryption using an Owlen-managed key; no interactive passphrase prompts are required.
- Credentials – API tokens are resolved from the config file or environment variables at runtime and are never written to logs.
- Remote calls – When remote search or cloud LLM tooling is on, only the minimum payload (prompt, tool arguments) is sent. All outbound requests go through the MCP servers so they can be audited or disabled centrally.
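The `privacy.encrypt_local_data` setting named above would sit in the same `config.toml`; the surrounding section name is taken from the key's `privacy.` prefix, and the comment reflects the behavior described in this document:

```toml
# Opt-in encryption of the local session store, per the Data Handling
# notes above. Key management is handled by Owlen itself.
[privacy]
encrypt_local_data = true  # AES-GCM over session data, Owlen-managed key
```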
## Supply-Chain Safeguards
- The repository includes a git `pre-commit` configuration that runs `cargo fmt`, `cargo check`, and `cargo clippy -- -D warnings` on every commit.
- Pull requests generated with the assistance of AI tooling must receive manual maintainer review before merging. Contributors are asked to declare AI involvement in their PR description so maintainers can double-check the changes.
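The hook configuration itself ships with the repository; as a rough illustration of what it does, a minimal hand-written `.git/hooks/pre-commit` running the same three checks could look like this (an illustrative stand-in, not the shipped hook):

```shell
#!/bin/sh
# Sketch of a pre-commit hook running the checks named above.
# Not the repository's actual hook configuration.

run_checks() {
    cargo fmt -- --check &&
    cargo check &&
    cargo clippy -- -D warnings
}

# Skip gracefully when run outside a Cargo workspace.
if command -v cargo >/dev/null 2>&1 && [ -f Cargo.toml ]; then
    if run_checks; then
        status=ok
    else
        echo "pre-commit: checks failed" >&2
        status=failed
    fi
else
    echo "pre-commit: cargo or Cargo.toml not found; skipping checks"
    status=skipped
fi

# A real hook would exit nonzero on failure to abort the commit:
#   [ "$status" != "failed" ] || exit 1
```

Running `cargo fmt` with `-- --check` makes formatting violations fail the hook instead of silently rewriting files, and `-D warnings` promotes clippy lints to hard errors.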
Additional recommendations for operators (e.g., running Owlen on shared systems) are maintained in `docs/security.md` (planned) and the issue tracker.