Compare commits

...

6 Commits

25 changed files with 2597 additions and 335 deletions

.gitignore vendored

@@ -34,10 +34,11 @@ build/
 logs/
 *.log
 
-# Database files
+# Database files (now includes the specific dev database)
 *.sqlite
 *.sqlite3
 *.db
+owlynews.sqlite3*
 
 # Dependency directories
 node_modules/

backend-rust/Cargo.lock generated

File diff suppressed because it is too large


@@ -6,5 +6,11 @@ edition = "2024"
 [dependencies]
 anyhow = "1.0"
 tokio = { version = "1", features = ["full"] }
-tokio-rusqlite = "0.6.0"
-rusqlite = "=0.32.0"
+axum = "0.8.4"
+serde = { version = "1.0", features = ["derive"] }
+serde_json = "1.0"
+sqlx = { version = "0.8", features = ["runtime-tokio", "tls-native-tls", "sqlite", "macros", "migrate", "chrono", "json"] }
+dotenv = "0.15"
+tracing = "0.1"
+tracing-subscriber = { version = "0.3", features = ["env-filter"] }
+toml = "0.9.5"

backend-rust/ROADMAP.md Normal file

@@ -0,0 +1,171 @@
# Owly News Summariser - Project Roadmap
This document outlines the strategic approach for transforming the project through three phases: Python-to-Rust backend migration, CLI application addition, and Vue-to-Dioxus frontend migration.
## Project Structure Strategy
### Current Phase: Axum API Setup
```
owly-news-summariser/
├── src/
│ ├── main.rs # Entry point (will evolve)
│ ├── db.rs # Database connection & SQLx setup
│ ├── api.rs # API module declaration
│ ├── api/ # API-specific modules (no mod.rs needed)
│ │ ├── routes.rs # Route definitions
│ │ ├── middleware.rs # Custom middleware
│ │ └── handlers.rs # Request handlers & business logic
│ ├── models.rs # Models module declaration
│ ├── models/ # Data models & database entities
│ │ ├── user.rs
│ │ ├── article.rs
│ │ └── summary.rs
│ ├── services.rs # Services module declaration
│ ├── services/ # Business logic layer
│ │ ├── news_service.rs
│ │ └── summary_service.rs
│ └── config.rs # Configuration management
├── migrations/ # SQLx migrations (managed by SQLx CLI)
├── frontend/ # Keep existing Vue frontend for now
└── Cargo.toml
```
### Phase 2: Multi-Binary Structure (API + CLI)
```
owly-news-summariser/
├── src/
│ ├── lib.rs # Shared library code
│ ├── bin/
│ │ ├── server.rs # API server binary
│ │ └── cli.rs # CLI application binary
│ ├── [same module structure as Phase 1]
├── migrations/
├── frontend/
└── Cargo.toml # Updated for multiple binaries
```
### Phase 3: Full Rust Stack
```
owly-news-summariser/
├── src/
│ ├── [same structure as Phase 2]
├── migrations/
├── frontend-dioxus/ # New Dioxus frontend
├── frontend/ # Legacy Vue (to be removed)
└── Cargo.toml
```
## Step-by-Step Process
### Phase 1: Axum API Implementation
**Step 1: Core Infrastructure Setup**
- Set up database connection pooling with SQLx
- Create configuration management system (environment variables, config files)
- Establish error handling patterns with `anyhow`
- Set up logging infrastructure
**Step 2: Data Layer**
- Design your database schema and create SQLx migrations using `sqlx migrate add`
- Create Rust structs that mirror your Python backend's data models
- Implement database access layer with proper async patterns
- Use SQLx's compile-time checked queries
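As a concrete illustration of Step 2, a migration created with `sqlx migrate add create_news` (file name and most columns hypothetical; the `news` table, `category` column, and `idx_news_published` index do appear elsewhere in this PR) might contain:

```sql
-- migrations/0001_create_news.sql (illustrative sketch, not the actual migration)
CREATE TABLE IF NOT EXISTS news
(
    id        INTEGER PRIMARY KEY AUTOINCREMENT,
    title     TEXT    NOT NULL,
    url       TEXT    NOT NULL UNIQUE,
    published INTEGER NOT NULL,
    category  TEXT
);

CREATE INDEX IF NOT EXISTS idx_news_published ON news (published);
```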
**Step 3: API Layer Architecture**
- Create modular route structure (users, articles, summaries, etc.)
- Implement middleware for CORS, authentication, logging
- Set up request/response serialization with Serde
- Create proper error responses and status codes
**Step 4: Business Logic Migration**
- Port your Python backend logic to Rust services
- Maintain API compatibility with your existing Vue frontend
- Implement proper async patterns for external API calls
- Add comprehensive testing
**Step 5: Integration & Testing**
- Test API endpoints thoroughly
- Ensure Vue frontend works seamlessly with new Rust backend
- Performance testing and optimization
- Deploy and monitor
### Phase 2: CLI Application Addition
**Step 1: Restructure for Multiple Binaries**
- Move API code to `src/bin/server.rs`
- Create `src/bin/cli.rs` for CLI application
- Keep shared logic in `src/lib.rs`
- Update Cargo.toml to support multiple binaries
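A sketch of the Cargo.toml additions for this restructure (binary and library names assumed from the roadmap, not taken from the repo):

```toml
[lib]
name = "owly_news_summariser"
path = "src/lib.rs"

[[bin]]
name = "server"
path = "src/bin/server.rs"

[[bin]]
name = "cli"
path = "src/bin/cli.rs"
```

With this layout, `cargo run --bin server` and `cargo run --bin cli` select the binary, while both link against the shared library crate.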
**Step 2: CLI Architecture**
- Use clap for command-line argument parsing
- Reuse existing services and models from the API
- Create CLI-specific output formatting
- Implement batch processing capabilities
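The roadmap suggests clap for argument parsing; as a minimal std-only sketch of the same dispatch shape (subcommand names `sync` and `list` are hypothetical, clap would replace `parse_command` in practice):

```rust
#[derive(Debug, PartialEq)]
enum Command {
    /// Fetch and summarise feeds once.
    Sync,
    /// Print stored summaries, up to `limit` entries.
    List { limit: usize },
}

/// Map raw CLI arguments to a typed command; returns None on unknown input.
fn parse_command(args: &[&str]) -> Option<Command> {
    match args {
        ["sync"] => Some(Command::Sync),
        ["list"] => Some(Command::List { limit: 20 }),
        ["list", "--limit", n] => n.parse().ok().map(|limit| Command::List { limit }),
        _ => None,
    }
}

fn main() {
    let owned: Vec<String> = std::env::args().skip(1).collect();
    let args: Vec<&str> = owned.iter().map(String::as_str).collect();
    match parse_command(&args) {
        Some(cmd) => println!("running {:?}", cmd),
        None => eprintln!("usage: owly-cli <sync|list [--limit N]>"),
    }
}
```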
**Step 3: Shared Core Logic**
- Extract common functionality into library crates
- Ensure both API and CLI can use the same business logic
- Implement proper configuration management for both contexts
### Phase 3: Dioxus Frontend Migration
**Step 1: Parallel Development**
- Create new `frontend-dioxus/` directory
- Keep existing Vue frontend running during development
- Set up Dioxus project structure with proper routing
**Step 2: Component Architecture**
- Design reusable Dioxus components
- Implement state management (similar to Pinia in Vue)
- Create API client layer for communication with Rust backend
**Step 3: Feature Parity**
- Port Vue components to Dioxus incrementally
- Ensure UI/UX consistency
- Implement proper error handling and loading states
**Step 4: Final Migration**
- Switch production traffic to Dioxus frontend
- Remove Vue frontend after thorough testing
- Optimize bundle size and performance
## Key Strategic Considerations
### 1. Modern Rust Practices
- Use modern module structure without `mod.rs` files
- Leverage SQLx's built-in migration and connection management
- Follow current edition conventions (this project targets the 2024 edition)
### 2. Maintain Backward Compatibility
- Keep API contracts stable during Vue-to-Dioxus transition
- Use feature flags for gradual rollouts
### 3. Shared Code Architecture
- Design your core business logic to be framework-agnostic
- Use workspace structure for better code organization
- Consider extracting domain logic into separate crates
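A possible workspace layout for that extraction (crate names are illustrative, not taken from the repo):

```toml
[workspace]
members = ["crates/core", "crates/server", "crates/cli"]

[workspace.dependencies]
anyhow = "1.0"
serde = { version = "1.0", features = ["derive"] }
```

Member crates then declare `anyhow = { workspace = true }`, so versions stay consistent across the API, CLI, and shared core.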
### 4. Testing Strategy
- Unit tests for business logic
- Integration tests for API endpoints
- End-to-end tests for the full stack
- CLI integration tests
### 5. Configuration Management
- Environment-based configuration
- Support for different deployment scenarios (API-only, CLI-only, full stack)
- Proper secrets management
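Environment-based configuration with defaults can stay very small; a std-only sketch (variable names follow example.env, the `env_or` helper itself is illustrative):

```rust
use std::env;
use std::str::FromStr;

/// Read an environment variable, falling back to a default when it is
/// unset or fails to parse. Works for any FromStr type (u16, f64, String, ...).
fn env_or<T: FromStr>(key: &str, default: T) -> T {
    env::var(key).ok().and_then(|v| v.parse().ok()).unwrap_or(default)
}

fn main() {
    let cron_hours: f64 = env_or("CRON_HOURS", 1.0);
    let sync_cooldown: u32 = env_or("SYNC_COOLDOWN_MINUTES", 30);
    println!("cron={cron_hours}h cooldown={sync_cooldown}min");
}
```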
### 6. Database Strategy
- Use SQLx migrations for schema evolution (`sqlx migrate add/run`)
- Leverage compile-time checked queries with SQLx macros
- Implement proper connection pooling and error handling
- Let SQLx handle what it does best rather than reinventing the wheel
## What SQLx Handles for You
- **Migrations**: Use `sqlx migrate add <name>` to create, `sqlx::migrate!()` macro to embed
- **Connection Pooling**: Built-in `SqlitePool` with configuration options
- **Query Safety**: Compile-time checked queries prevent SQL injection and typos
- **Type Safety**: Automatic Rust type mapping from database types
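The corresponding command-line workflow (sqlx-cli is installed separately from the `sqlx` crate; the migration name is hypothetical):

```sh
# Install the SQLx CLI once, with SQLite support only
cargo install sqlx-cli --no-default-features --features sqlite

# Create a new migration file under ./migrations
sqlx migrate add create_news

# Apply pending migrations against DATABASE_URL
sqlx migrate run
```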

backend-rust/config.toml Normal file

@@ -0,0 +1,3 @@
[server]
host = '127.0.0.1'
port = 8090

backend-rust/example.env Normal file

@@ -0,0 +1,31 @@
# URL for the Ollama service
OLLAMA_HOST=http://localhost:11434
# Interval for scheduled news fetching in hours
CRON_HOURS=1
# Minimum interval for scheduled news fetching in hours
MIN_CRON_HOURS=0.5
# Cooldown period in minutes between manual syncs
SYNC_COOLDOWN_MINUTES=30
# LLM model to use for summarization
LLM_MODEL=qwen2:7b-instruct-q4_K_M
LLM_MODEL=phi3:3.8b-mini-128k-instruct-q4_0
LLM_MODEL=mistral-nemo:12b
# Timeout in seconds for LLM requests
LLM_TIMEOUT_SECONDS=180
# Timeout in seconds for Ollama API requests
OLLAMA_API_TIMEOUT_SECONDS=10
# Timeout in seconds for article fetching
ARTICLE_FETCH_TIMEOUT=30
# Maximum length of article content to process
MAX_ARTICLE_LENGTH=5000
# SQLite database connection string
DB_NAME=owlynews.sqlite3


@@ -0,0 +1,5 @@
DROP TABLE IF EXISTS meta;
DROP TABLE IF EXISTS settings;
DROP TABLE IF EXISTS feeds;
DROP INDEX IF EXISTS idx_news_published;
DROP TABLE IF EXISTS news;


@@ -36,10 +36,3 @@ CREATE TABLE IF NOT EXISTS meta
     key TEXT PRIMARY KEY,
     val TEXT NOT NULL
 );
-
--- DOWN
-DROP TABLE IF EXISTS meta;
-DROP TABLE IF EXISTS settings;
-DROP TABLE IF EXISTS feeds;
-DROP INDEX IF EXISTS idx_news_published;
-DROP TABLE IF EXISTS news;


@@ -1,8 +1,3 @@
 -- Add category field to news table
 ALTER TABLE news
     ADD COLUMN category TEXT;
-
--- DOWN
-CREATE TABLE news_backup
-(
-    id INTEGER PRIMARY KEY AUTOINCREMENT,


@@ -0,0 +1,3 @@
-- Add category field to news table
ALTER TABLE news
ADD COLUMN category TEXT;

backend-rust/src/api.rs Normal file

@@ -0,0 +1,3 @@
pub mod handlers;
pub mod middleware;
pub mod routes;


@@ -0,0 +1,39 @@
use axum::Json;
use axum::extract::State;
use axum::{
    http::StatusCode,
    response::{IntoResponse, Response},
};
use serde_json::Value;
use sqlx::SqlitePool;

pub async fn get_articles(State(_pool): State<SqlitePool>) -> Result<Json<Value>, AppError> {
    // TODO: Article logic
    Ok(Json(serde_json::json!({"articles": []})))
}

pub async fn get_summaries(State(_pool): State<SqlitePool>) -> Result<Json<Value>, AppError> {
    // TODO: Summaries logic
    Ok(Json(serde_json::json!({"summaries": []})))
}

pub struct AppError(anyhow::Error);

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        (
            StatusCode::INTERNAL_SERVER_ERROR,
            format!("Something went wrong: {}", self.0),
        )
            .into_response()
    }
}

impl<E> From<E> for AppError
where
    E: Into<anyhow::Error>,
{
    fn from(err: E) -> Self {
        Self(err.into())
    }
}


@@ -0,0 +1,11 @@
use axum::Router;
use axum::routing::get;
use sqlx::SqlitePool;

use crate::api::handlers;

pub fn routes() -> Router<SqlitePool> {
    Router::new()
        .route("/articles", get(handlers::get_articles))
        .route("/summaries", get(handlers::get_summaries))
    // Add more routes as needed
}

backend-rust/src/config.rs Normal file

@@ -0,0 +1,100 @@
use serde::Deserialize;
use std::path::PathBuf;
use tracing::{error, info};

#[derive(Deserialize, Debug)]
pub struct AppSettings {
    pub config_path: String,
    pub db_path: String,
    pub migration_path: String,
    pub config: Config,
}

#[derive(Deserialize, Debug)]
pub struct Config {
    pub server: Server,
}

#[derive(Deserialize, Debug)]
pub struct Server {
    pub host: String,
    pub port: u16,
}

#[derive(Deserialize, Debug)]
struct ConfigFile {
    server: Server,
}

impl AppSettings {
    pub fn get_app_settings() -> Self {
        let config_file = Self::load_config_file().unwrap_or_else(|| {
            info!("Using default config values");
            ConfigFile {
                server: Server {
                    host: "127.0.0.1".to_string(),
                    port: 1337,
                },
            }
        });

        Self {
            config_path: Self::get_config_path(),
            db_path: Self::get_db_path(),
            migration_path: String::from("./migrations"),
            config: Config {
                server: config_file.server,
            },
        }
    }

    fn load_config_file() -> Option<ConfigFile> {
        let config_path = Self::get_config_path();
        let contents = std::fs::read_to_string(&config_path)
            .map_err(|e| error!("Failed to read config file: {}", e))
            .ok()?;
        toml::from_str(&contents)
            .map_err(|e| error!("Failed to parse TOML: {}", e))
            .ok()
    }

    fn get_db_path() -> String {
        if cfg!(debug_assertions) {
            // Development: Use backend-rust directory
            // TODO: Change later
            let mut path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
            path.push("owlynews.sqlite3");
            path.to_str().unwrap().to_string()
        } else {
            // Production: Use standard Linux applications data directory
            "/var/lib/owly-news-summariser/owlynews.sqlite3".to_string()
        }
    }

    fn get_config_path() -> String {
        if cfg!(debug_assertions) {
            // Development: Use backend-rust directory
            // TODO: Change later
            let mut path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
            path.push("config.toml");
            path.to_str().unwrap().to_string()
        } else {
            // Production: Use standard Linux applications data directory
            // Note: "$HOME" is not expanded here; this is returned as a literal path
            "$HOME/owly-news-summariser/config.toml".to_string()
        }
    }

    pub fn database_url(&self) -> String {
        format!("sqlite:{}", self.db_path)
    }

    pub fn ensure_db_directory(&self) -> Result<(), std::io::Error> {
        if let Some(parent) = std::path::Path::new(&self.db_path).parent() {
            std::fs::create_dir_all(parent)?;
        }
        Ok(())
    }
}


@@ -1,13 +1,31 @@
+use crate::config::{AppSettings};
 use anyhow::Result;
-use std::path::{Path};
-use tokio_rusqlite::Connection as AsyncConn;
-use crate::migrations::Migrator;
+use sqlx::migrate::Migrator;
+use sqlx::sqlite::{SqliteConnectOptions};
+use sqlx::{Pool, Sqlite, SqlitePool};
+use std::str::FromStr;
+use tracing::info;
 
-pub async fn initialize_db(db_path: &Path, migrations_dir: &Path) -> Result<AsyncConn> {
-    let conn = AsyncConn::open(db_path).await?;
-    let migrator = Migrator::new(migrations_dir.to_path_buf())?;
-    migrator.migrate_up_async(&conn).await?;
-    Ok(conn)
-}
+pub const MIGRATOR: Migrator = sqlx::migrate!("./migrations");
+
+pub async fn initialize_db(app_settings: &AppSettings) -> Result<Pool<Sqlite>> {
+    app_settings.ensure_db_directory()?;
+
+    let options = SqliteConnectOptions::from_str(&app_settings.database_url())?
+        .create_if_missing(true)
+        .journal_mode(sqlx::sqlite::SqliteJournalMode::Wal)
+        .foreign_keys(true);
+
+    let pool = SqlitePool::connect_with(options).await?;
+
+    MIGRATOR.run(&pool).await?;
+    info!("Database migrations completed successfully");
+
+    Ok(pool)
+}
+
+pub async fn create_pool(opts: SqliteConnectOptions) -> Result<SqlitePool> {
+    let pool = SqlitePool::connect_with(opts).await?;
+    Ok(pool)
+}

@@ -1,22 +1,74 @@
-use std::path::Path;
+mod api;
+mod config;
 mod db;
-mod migrations;
+mod models;
+mod services;
+
+use crate::config::{AppSettings};
+use anyhow::Result;
+use axum::Router;
+use axum::routing::get;
+use tokio::signal;
+use tracing::{info};
+use tracing_subscriber;
 
 #[tokio::main]
-async fn main() {
-    let migrations_folder = String::from("src/migrations");
+async fn main() -> Result<()> {
+    tracing_subscriber::fmt()
+        .with_target(false)
+        .compact()
+        .init();
 
-    let db_path = Path::new("owlynews.sqlite3");
-    let migrations_path = Path::new(&migrations_folder);
+    let app_settings = AppSettings::get_app_settings();
 
-    match db::initialize_db(&db_path, migrations_path).await {
-        Ok(_conn) => {
-            println!("Database initialized successfully");
-            // Logic goes here
-        }
-        Err(e) => {
-            println!("Error initializing database: {:?}", e);
-        }
-    }
-}
+    let pool = db::initialize_db(&app_settings).await?;
+    let app = create_app(pool);
+
+    let listener =
+        tokio::net::TcpListener::bind(format!("{}:{}", app_settings.config.server.host, app_settings.config.server.port)).await?;
+    info!("Server starting on {}:{}", app_settings.config.server.host, app_settings.config.server.port);
+
+    axum::serve(listener, app)
+        .with_graceful_shutdown(shutdown_signal())
+        .await?;
+
+    Ok(())
+}
+
+fn create_app(pool: sqlx::SqlitePool) -> Router {
+    Router::new()
+        .route("/health", get(health_check))
+        .nest("/api", api::routes::routes())
+        .with_state(pool)
+}
+
+async fn health_check() -> &'static str {
+    "OK"
+}
+
+async fn shutdown_signal() {
+    let ctrl_c = async {
+        signal::ctrl_c()
+            .await
+            .expect("failed to install CTRL+C handler");
+    };
+
+    #[cfg(unix)]
+    let terminate = async {
+        signal::unix::signal(signal::unix::SignalKind::terminate())
+            .expect("failed to install terminate handler")
+            .recv()
+            .await;
+    };
+
+    #[cfg(not(unix))]
+    let terminate = std::future::pending::<()>();
+
+    tokio::select! {
+        _ = ctrl_c => {},
+        _ = terminate => {},
+    }
+
+    info!("Signal received, shutting down");
+}


@@ -1,245 +0,0 @@
use anyhow::{Context, Result};
use rusqlite::{Connection, params};
use std::collections::HashSet;
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
use tokio::fs;
use tokio_rusqlite::Connection as AsyncConn;
pub struct Migration {
pub version: i64,
pub name: String,
pub sql_up: String,
#[allow(dead_code)]
pub sql_down: String,
}
#[derive(Clone)]
pub struct Migrator {
migrations_dir: PathBuf,
}
impl Migrator {
pub fn new(migrations_dir: PathBuf) -> Result<Self> {
Ok(Migrator { migrations_dir })
}
fn initialize(&self, conn: &mut Connection) -> Result<()> {
let tx = conn
.transaction()
.context("Failed to start transaction for initialization")?;
tx.execute(
"CREATE TABLE IF NOT EXISTS migrations (version INTEGER PRIMARY KEY)",
[],
)
.context("Failed to create migrations table")?;
let columns: HashSet<String> = {
let mut stmt = tx.prepare("PRAGMA table_info(migrations)")?;
stmt.query_map([], |row| row.get(1))?
.collect::<Result<HashSet<String>, _>>()?
};
if !columns.contains("name") {
tx.execute("ALTER TABLE migrations ADD COLUMN name TEXT", [])
.context("Failed to add 'name' column to migrations table")?;
}
if !columns.contains("applied_at") {
tx.execute("ALTER TABLE migrations ADD COLUMN applied_at INTEGER", [])
.context("Failed to add 'applied_at' column to migrations table")?;
}
tx.commit()
.context("Failed to commit migrations table initialization")?;
Ok(())
}
pub async fn load_migrations_async(&self) -> Result<Vec<Migration>> {
let mut migrations = Vec::new();
// Use async-aware try_exists
if !fs::try_exists(&self.migrations_dir).await? {
return Ok(migrations);
}
let mut entries = fs::read_dir(&self.migrations_dir)
.await
.context("Failed to read migrations directory")?;
while let Some(entry) = entries.next_entry().await? {
let path = entry.path();
if path.is_file() && path.extension().unwrap_or_default() == "sql" {
let file_name = path.file_stem().unwrap().to_string_lossy();
// Format should be: VERSION_NAME.sql (e.g. 001_create_users.sql)
if let Some((version_str, name)) = file_name.split_once('_') {
if let Ok(version) = version_str.parse::<i64>() {
let content = fs::read_to_string(&path).await.with_context(|| {
format!("Failed to read migration file: {}", path.display())
})?;
// Split content into up and down migrations if they exist
let parts: Vec<&str> = content.split("-- DOWN").collect();
let sql_up = parts[0].trim().to_string();
let sql_down = parts.get(1).map_or(String::new(), |s| s.trim().to_string());
migrations.push(Migration {
version,
name: name.to_string(),
sql_up,
sql_down,
});
}
}
}
}
migrations.sort_by_key(|m| m.version);
Ok(migrations)
}
pub fn get_applied_migrations(&self, conn: &mut Connection) -> Result<HashSet<i64>> {
let mut stmt = conn
.prepare("SELECT version FROM migrations ORDER BY version")
.context("Failed to prepare query for applied migrations")?;
let versions = stmt
.query_map([], |row| row.get(0))?
.collect::<Result<HashSet<i64>, _>>()?;
Ok(versions)
}
pub async fn migrate_up_async(&self, async_conn: &AsyncConn) -> Result<()> {
let migrations = self.load_migrations_async().await?;
let migrator = self.clone();
// Perform all database operations within a blocking-safe context
async_conn
.call(move |conn| {
migrator.initialize(conn).expect("TODO: panic message");
let applied = migrator
.get_applied_migrations(conn)
.expect("TODO: panic message");
let tx = conn
.transaction()
.context("Failed to start transaction for migrations")
.expect("TODO: panic message");
for migration in migrations {
if !applied.contains(&migration.version) {
println!(
"Applying migration {}: {}",
migration.version, migration.name
);
tx.execute_batch(&migration.sql_up)
.with_context(|| {
format!(
"Failed to execute migration {}: {}",
migration.version, migration.name
)
})
.expect("TODO: panic message");
let now = SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("TODO: panic message")
.as_secs() as i64;
tx.execute(
"INSERT INTO migrations (version, name, applied_at) VALUES (?, ?, ?)",
params![migration.version, migration.name, now],
)
.with_context(|| {
format!("Failed to record migration {}", migration.version)
})
.expect("TODO: panic message");
}
}
tx.commit()
.context("Failed to commit migrations")
.expect("TODO: panic message");
Ok(())
})
.await?;
Ok(())
}
#[allow(dead_code)]
pub async fn migrate_down_async(
&self,
async_conn: &AsyncConn,
target_version: Option<i64>,
) -> Result<()> {
let migrations = self.load_migrations_async().await?;
let migrator = self.clone();
// Perform all database operations within a blocking-safe context
async_conn
.call(move |conn| {
migrator.initialize(conn).expect("TODO: panic message");
let applied = migrator
.get_applied_migrations(conn)
.expect("TODO: panic message");
// If no target specified, roll back only the latest migration
let max_applied = *applied.iter().max().unwrap_or(&0);
let target =
target_version.unwrap_or(if max_applied > 0 { max_applied - 1 } else { 0 });
let tx = conn
.transaction()
.context("Failed to start transaction for migrations")
.expect("TODO: panic message");
// Find migrations to roll back (in reverse order)
let mut to_rollback: Vec<&Migration> = migrations
.iter()
.filter(|m| applied.contains(&m.version) && m.version > target)
.collect();
to_rollback.sort_by_key(|m| std::cmp::Reverse(m.version));
for migration in to_rollback {
println!(
"Rolling back migration {}: {}",
migration.version, migration.name
);
if !migration.sql_down.is_empty() {
tx.execute_batch(&migration.sql_down)
.with_context(|| {
format!(
"Failed to rollback migration {}: {}",
migration.version, migration.name
)
})
.expect("TODO: panic message");
} else {
println!("Warning: No down migration defined for {}", migration.name);
}
// Remove the migration record
tx.execute(
"DELETE FROM migrations WHERE version = ?",
[&migration.version],
)
.with_context(|| {
format!("Failed to remove migration record {}", migration.version)
})
.expect("TODO: panic message");
}
tx.commit()
.context("Failed to commit rollback")
.expect("TODO: panic message");
Ok(())
})
.await?;
Ok(())
}
}


@@ -0,0 +1,3 @@
mod article;
mod summary;
mod user;


@@ -0,0 +1,2 @@
mod summary_service;
mod news_service;