

# Security Policy

## Supported Versions

We are currently in a pre-release phase, so only the latest version is actively supported. As we move towards a 1.0 release, this policy will be updated with specific version support.

| Version | Supported                |
| ------- | ------------------------ |
| < 1.0   | ✅ (latest release only) |

## Reporting a Vulnerability

The Owlen team and community take all security vulnerabilities seriously. Thank you for improving the security of our project. We appreciate your efforts and responsible disclosure, and we will make every effort to acknowledge your contributions.

To report a security vulnerability, please email the project lead at security@owlibou.com with a detailed description of the issue, the steps to reproduce it, and any affected versions.

You will receive a response from us within 48 hours. If the issue is confirmed, we will release a patch as quickly as the complexity of the fix allows.

Please do not report security vulnerabilities through public GitHub issues.

## Design Overview

Owlen ships with a local-first architecture:

- **Process isolation.** The TUI speaks to language models through a separate MCP LLM server. Tool execution (code, web, filesystem) occurs in dedicated MCP processes, so a crash or hang cannot take down the UI.
- **Sandboxing.** The MCP Code Server executes snippets in Docker containers. Upcoming releases will extend this to platform sandboxes (`sandbox-exec` on macOS, Windows job objects), as described in our roadmap.
- **Network posture.** No telemetry is emitted. The application reaches the network only when a user explicitly enables remote tools (web search, remote MCP servers) or configures cloud providers. All tools must be allow-listed in `config.toml`.
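As a sketch, the allow-listing mentioned above could look something like the fragment below. The section and key names here are illustrative assumptions, not Owlen's actual configuration schema:

```toml
# Hypothetical config.toml fragment: tools are denied unless listed here.
# Section and key names are assumptions for illustration only.
[tools]
allowed = ["code", "filesystem"]  # e.g. web search stays disabled until added
```

The point of the allow-list design is that network-capable tooling is off by default and must be enabled deliberately, per tool.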

## Data Handling

- **Sessions.** Conversations are stored in the user's data directory (`~/.local/share/owlen` on Linux, equivalent paths on macOS/Windows). Enable `privacy.encrypt_local_data = true` to wrap the session store in AES-GCM encryption protected by a passphrase (`OWLEN_MASTER_PASSWORD` or an interactive prompt).
- **Credentials.** API tokens are resolved from the config file or environment variables at runtime and are never written to logs.
- **Remote calls.** When remote search or cloud LLM tooling is enabled, only the minimum payload (prompt, tool arguments) is sent. All outbound requests go through the MCP servers, so they can be audited or disabled centrally.
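Opting in to the encrypted session store might look like this. The `privacy.encrypt_local_data` key comes from this document; the surrounding table layout is an assumption:

```toml
# config.toml: opt in to AES-GCM encryption of the local session store.
[privacy]
encrypt_local_data = true
```

The passphrase protecting the store is then supplied via the `OWLEN_MASTER_PASSWORD` environment variable, or interactively at startup if the variable is unset.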

## Supply-Chain Safeguards

- The repository includes a Git pre-commit configuration that runs `cargo fmt`, `cargo check`, and `cargo clippy -- -D warnings` on every commit.
- Pull requests generated with the assistance of AI tooling must receive manual maintainer review before merging. Contributors are asked to declare AI involvement in their PR description so maintainers can double-check the changes.
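A minimal sketch of the pre-commit hook described above, under the assumption that it is a plain shell hook (the exact flags, such as `--check` and `--all-targets`, are guesses rather than the repository's real configuration). For illustration the script writes the hook body to a local file and syntax-checks it instead of installing it into `.git/hooks/`:

```shell
# Write a sketch of the pre-commit hook to a local file.
# In a real checkout it would live at .git/hooks/pre-commit (and be executable).
cat > pre-commit <<'EOF'
#!/bin/sh
set -e                                     # abort the commit on the first failure
cargo fmt --all -- --check                 # formatting must already be clean
cargo check --all-targets                  # the workspace must compile
cargo clippy --all-targets -- -D warnings  # lints are treated as errors
EOF
chmod +x pre-commit
sh -n pre-commit && echo "hook syntax OK"
```

Because each command exits non-zero on failure and `set -e` is active, any formatting drift, compile error, or clippy warning blocks the commit.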

Additional recommendations for operators (e.g., running Owlen on shared systems) are maintained in `docs/security.md` (planned) and the issue tracker.