docs: expand security documentation and add AI assistance declaration to CONTRIBUTING

- Added comprehensive **Design Overview**, **Data Handling**, and **Supply‑Chain Safeguards** sections to `SECURITY.md`.
- Updated `README.md` with a new **Security & Privacy** section summarizing local‑first execution, sandboxed tooling, encrypted session storage, and opt‑in network access.
- Modified `CONTRIBUTING.md` to require contributors to declare any AI‑generated code in PR descriptions, ensuring human reviewer approval before merge.
2025-10-12 02:22:09 +02:00
parent 56de1170ee
commit d0d3079df5
3 changed files with 32 additions and 0 deletions

@@ -17,3 +17,24 @@ To report a security vulnerability, please email the project lead at [security@o
You will receive a response from us within 48 hours. If the issue is confirmed, we will release a patch as soon as possible, depending on the complexity of the issue.
Please do not report security vulnerabilities through public GitHub issues.
## Design Overview
Owlen ships with a local-first architecture:
- **Process isolation** The TUI speaks to language models through a separate MCP LLM server. Tool execution (code, web, filesystem) occurs in dedicated MCP processes so a crash or hang cannot take down the UI.
- **Sandboxing** The MCP Code Server executes snippets in Docker containers. Upcoming releases will extend this to platform sandboxes (`sandbox-exec` on macOS, Windows job objects) as described in our roadmap.
- **Network posture** No telemetry is emitted. The application only reaches the network when a user explicitly enables remote tools (web search, remote MCP servers) or configures cloud providers. All tools require allow-listing in `config.toml` (see the sketch after this list).
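As a concrete illustration of the opt-in posture, a `config.toml` along these lines would keep everything local until a tool is explicitly allow-listed. The key names (`network.remote_tools`, `tools.allowed`) are hypothetical; this is a sketch of the model, not Owlen's documented schema:

```toml
# Hypothetical sketch: key names are illustrative, not Owlen's
# documented schema. Only the file name config.toml is from the docs.

[network]
remote_tools = false   # nothing leaves the machine by default

[tools]
# Tools the model may invoke must be allow-listed explicitly.
allowed = ["code", "filesystem"]
# Enabling web search would require adding it here:
# allowed = ["code", "filesystem", "web_search"]
```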
## Data Handling
- **Sessions** Conversations are stored in the user's data directory (`~/.local/share/owlen` on Linux, equivalent paths on macOS/Windows). Enable `privacy.encrypt_local_data = true` to wrap the session store in AES-GCM encryption protected by a passphrase (`OWLEN_MASTER_PASSWORD` or an interactive prompt); a configuration sketch follows this list.
- **Credentials** API tokens are resolved from the config file or environment variables at runtime and are never written to logs.
- **Remote calls** When remote search or cloud LLM tooling is enabled, only the minimum payload (prompt, tool arguments) is sent. All outbound requests go through the MCP servers so they can be audited or disabled centrally.
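Combining the documented `privacy.encrypt_local_data` flag with environment-based credentials, a privacy-focused configuration might look like the sketch below. The `[providers]` table and the `OWLEN_API_TOKEN` name are assumptions for illustration; `OWLEN_MASTER_PASSWORD` is the documented passphrase variable.

```toml
[privacy]
# Documented flag: wraps the session store in AES-GCM encryption,
# keyed from a passphrase (OWLEN_MASTER_PASSWORD or a prompt).
encrypt_local_data = true

[providers]
# Hypothetical: point at an environment variable so the token is
# resolved at runtime and never written to the config file or logs.
api_token_env = "OWLEN_API_TOKEN"
```

When `OWLEN_MASTER_PASSWORD` is unset, the interactive prompt described above is used instead, which avoids leaving the passphrase in shell history or profiles.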
## Supply-Chain Safeguards
- The repository includes a git `pre-commit` configuration that runs `cargo fmt`, `cargo check`, and `cargo clippy -- -D warnings` on every commit (a sketch of such a hook follows this list).
- Pull requests generated with the assistance of AI tooling must receive manual maintainer review before merging. Contributors are asked to declare AI involvement in their PR description so maintainers can double-check the changes.
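For reference, the documented checks map onto a plain git hook roughly like the one below. This is a sketch assuming a hand-written `.git/hooks/pre-commit` shell script, with `cargo fmt --check` used so the hook verifies formatting instead of rewriting it; the repository's actual hook configuration may differ.

```sh
#!/bin/sh
# Sketch of a pre-commit hook running the checks named above.
# Any non-zero exit aborts the commit.
set -e

cargo fmt --check             # formatting must already be clean
cargo check                   # the workspace must compile
cargo clippy -- -D warnings   # all clippy lints treated as errors
```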
Additional recommendations for operators (e.g., running Owlen on shared systems) are maintained in `docs/security.md` (planned) and the issue tracker.