# Supervision-RS Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Create `SuperVisionRust/`, a standalone Windows executable in Rust with full feature parity with the Python app (web dashboard, email alerts, Windows service).

**Architecture:** Axum 0.7 (HTTP) + Tera (templates) + sysinfo (metrics) + lettre (email) + windows-service (Windows service). AppState is shared via `Arc<Mutex<ConfigManager>>` between the monitoring thread (tokio task) and the HTTP routes. Flash messages are stored in the session. The Jinja2 templates are ported to Tera (near-identical syntax).

**Tech Stack:** Rust 1.78+, axum 0.7, tokio 1, tera 1, sysinfo 0.32, lettre 0.11, tower-sessions 0.12, tower_governor 0.4, bcrypt 0.15, serde_json 1, windows-service 0.7 (Windows only)

---

## File structure

```
SuperVisionRust/              ← sibling folder of supervision/
├── Cargo.toml
├── src/
│   ├── main.rs               # entry point + Windows service
│   ├── config.rs             # Config structs + ConfigManager (JSON)
│   ├── monitor.rs            # SystemMonitor (sysinfo + thresholds + alerts)
│   ├── alerter.rs            # EmailAlerter (lettre SMTP)
│   ├── user_monitor.rs       # UserMonitor (Amadea log parsing)
│   └── routes/
│       ├── mod.rs            # AppState + flash helpers + AuthUser extractor
│       ├── auth.rs           # GET/POST /login, GET /logout
│       ├── dashboard.rs      # GET /, GET /api/metrics, POST /api/monitoring/toggle
│       ├── settings.rs       # GET /settings + all POST /settings/* handlers
│       ├── alerts.rs         # GET /alerts, POST /alerts/clear
│       └── users.rs          # GET /users, GET /api/users, GET /api/users/activity/weekly
├── templates/
│   ├── base.html
│   ├── login.html
│   ├── dashboard.html
│   ├── settings.html
│   ├── alerts.html
│   └── users.html
└── static/
    └── style.css
```

---

## Jinja2 → Tera porting notes

| Jinja2 | Tera |
|---|---|
| `url_for('dashboard')` | `/` |
| `url_for('settings')` | `/settings` |
| `url_for('alerts')` | `/alerts` |
| `url_for('users')` | `/users` |
| `url_for('login')` | `/login` |
| `url_for('logout')` | `/logout` |
| `url_for('toggle_monitoring')` | `/api/monitoring/toggle` |
| `url_for('clear_alerts')` | `/alerts/clear` |
| `url_for('update_thresholds')` | `/settings/thresholds` |
| `url_for('update_monitoring')` | `/settings/monitoring` |
| `url_for('update_smtp')` | `/settings/smtp` |
| `url_for('test_smtp')` | `/settings/smtp/test` |
| `url_for('update_processes')` | `/settings/processes` |
| `url_for('update_password')` | `/settings/password` |
| `url_for('update_port')` | `/settings/port` |
| `url_for('update_amadea_log_path')` | `/settings/amadea-log-path` |
| `url_for('update_user_thresholds')` | `/settings/user-thresholds` |
| `url_for('static', filename='style.css')` | `/static/style.css` |
| `get_flashed_messages(with_categories=true)` | `flash_messages` variable in the context |
| `current_user.is_authenticated` | `is_authenticated` variable in the context |
| `request.endpoint == 'dashboard'` | `active_page == "dashboard"` variable |
| `\| join(', ')` | `\| join(sep=", ")` |
| `\| replace('T', ' ')` | `\| replace(from="T", to=" ")` |
| `{% with %}...{% endwith %}` | `{% set %}` or inline |
| `loop.index0` | `loop.index0` (identical) |

---

## Task 1: Scaffold — Cargo.toml + empty structure

**Files:**
- Create: `../SuperVisionRust/Cargo.toml`
- Create: `../SuperVisionRust/src/main.rs`
- Create: `../SuperVisionRust/src/config.rs`
- Create: `../SuperVisionRust/src/monitor.rs`
- Create: `../SuperVisionRust/src/alerter.rs`
- Create: `../SuperVisionRust/src/user_monitor.rs`
- Create: `../SuperVisionRust/src/routes/mod.rs`
- Create: `../SuperVisionRust/src/routes/auth.rs`
- Create: `../SuperVisionRust/src/routes/dashboard.rs`
- Create: `../SuperVisionRust/src/routes/settings.rs`
- Create: `../SuperVisionRust/src/routes/alerts.rs`
- Create: `../SuperVisionRust/src/routes/users.rs`

- [ ] **Step 1: Create the SuperVisionRust folder**

```bash
cd "/Users/oussi/Documents/Documents - MacBook Pro de oussi/EttaSante/monitoring"
mkdir SuperVisionRust
cd SuperVisionRust
mkdir -p src/routes templates static
```

- [ ] **Step 2: Create Cargo.toml**

```toml
[package]
name = "supervision"
version = "0.1.0"
edition = "2021"

[[bin]]
name = "supervision"
path = "src/main.rs"

[dependencies]
axum = { version = "0.7", features = ["macros", "form"] }
tokio = { version = "1", features = ["full"] }
tera = "1"
sysinfo = "0.32"
lettre = { version = "0.11", features = ["tokio1-native-tls", "builder"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tower-sessions = { version = "0.12", features = ["memory-store"] }
tower = "0.4"
tower_governor = "0.4"
tower-http = { version = "0.5", features = ["fs"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
bcrypt = "0.15"
chrono = { version = "0.4", features = ["serde"] }
rand = "0.8"
async-trait = "0.1"
http = "1"
regex = "1"
glob = "0.3"

[target.'cfg(windows)'.dependencies]
windows-service = "0.7"
```

- [ ] **Step 3: Create a minimal src/main.rs**

```rust
mod config;
mod monitor;
mod alerter;
mod user_monitor;
mod routes;

fn main() {
    println!("Supervision");
}
```

- [ ] **Step 4: Create the empty module files**

`src/config.rs`:

```rust
// Config module: to be implemented in Task 2
```

`src/monitor.rs`:

```rust
// Monitor module: to be implemented in Task 4
```

`src/alerter.rs`:

```rust
// Alerter module: to be implemented in Task 3
```

`src/user_monitor.rs`:

```rust
// UserMonitor module: to be implemented in Task 5
```

`src/routes/mod.rs`:

```rust
pub mod auth;
pub mod dashboard;
pub mod settings;
pub mod alerts;
pub mod users;
```

`src/routes/auth.rs`, `src/routes/dashboard.rs`, `src/routes/settings.rs`, `src/routes/alerts.rs`, `src/routes/users.rs`: empty files with a `// TODO` comment.
- [ ] **Step 5: Check that it compiles**

```bash
cd SuperVisionRust
cargo check
```

Expected: `Finished` with no errors (a few `unused` warnings are OK).

- [ ] **Step 6: Commit**

```bash
git init
git add .
git commit -m "feat: scaffold supervision-rs project"
```

---

## Task 2: Config module

**Files:**
- Modify: `src/config.rs`
- Test: `#[cfg(test)]` section in `src/config.rs`

- [ ] **Step 1: Write the tests**

Add at the end of `src/config.rs`:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn default_config_has_expected_values() {
        let cfg = Config::default();
        assert_eq!(cfg.port, 5000);
        assert_eq!(cfg.thresholds.cpu_percent, 90.0);
        assert_eq!(cfg.thresholds.ram_percent, 85.0);
        assert_eq!(cfg.thresholds.disk_percent, 90.0);
        assert_eq!(cfg.check_interval_minutes, 1);
        assert_eq!(cfg.alert_cooldown_minutes, 30);
        assert_eq!(cfg.processes.len(), 3);
        assert_eq!(cfg.admin.username, "admin");
    }

    #[test]
    fn config_serializes_and_deserializes() {
        let cfg = Config::default();
        let json = serde_json::to_string(&cfg).unwrap();
        let cfg2: Config = serde_json::from_str(&json).unwrap();
        assert_eq!(cfg.port, cfg2.port);
        assert_eq!(cfg.admin.username, cfg2.admin.username);
    }

    #[test]
    fn save_alert_truncates_at_500() {
        let dir = tempfile::tempdir().unwrap();
        let cm = ConfigManager::new_with_dir(dir.path().to_path_buf());
        for i in 0..510u32 {
            cm.save_alert(Alert {
                timestamp: format!("2026-01-01T00:00:{:02}", i % 60),
                alert_type: "threshold".into(),
                key: format!("cpu_{}", i),
                message: format!("msg {}", i),
                value: i as f64,
                threshold: 90.0,
                hostname: "test".into(),
            });
        }
        let alerts = cm.load_alerts();
        assert_eq!(alerts.len(), 500);
    }
}
```

- [ ] **Step 2: Check that the tests fail (no code yet)**

```bash
cargo test config
```

Expected: compilation error: `Config`, `ConfigManager`, `Alert` are not defined.
- [ ] **Step 3: Implement config.rs**

```rust
use serde::{Deserialize, Serialize};
use std::fs;
use std::path::{Path, PathBuf};

const MAX_ALERTS: usize = 500;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Thresholds {
    pub cpu_percent: f64,
    pub ram_percent: f64,
    pub disk_percent: f64,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProcessConfig {
    pub name: String,
    pub pattern: String,
    pub memory_threshold_mb: f64,
    pub enabled: bool,
    pub alert_on_down: bool,
}

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SmtpConfig {
    pub server: String,
    pub port: u16,
    pub use_tls: bool,
    pub username: String,
    pub password: String,
    pub from_email: String,
    pub to_emails: Vec<String>,
}

impl SmtpConfig {
    pub fn default_port() -> u16 { 587 }
    pub fn default_tls() -> bool { true }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct UserStatusThresholds {
    pub active_minutes: u64,
    pub inactive_minutes: u64,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AdminConfig {
    pub username: String,
    pub password_hash: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
    pub secret_key: String,
    pub port: u16,
    pub check_interval_minutes: u64,
    pub alert_cooldown_minutes: u64,
    pub thresholds: Thresholds,
    pub processes: Vec<ProcessConfig>,
    pub smtp: SmtpConfig,
    pub amadea_log_path: String,
    pub user_status_thresholds: UserStatusThresholds,
    pub admin: AdminConfig,
}

impl Default for Config {
    fn default() -> Self {
        Config {
            secret_key: generate_secret_key(),
            port: 5000,
            check_interval_minutes: 1,
            alert_cooldown_minutes: 30,
            thresholds: Thresholds {
                cpu_percent: 90.0,
                ram_percent: 85.0,
                disk_percent: 90.0,
            },
            processes: vec![
                ProcessConfig {
                    name: "JVM".into(),
                    pattern: "java".into(),
                    memory_threshold_mb: 0.0,
                    enabled: true,
                    alert_on_down: true,
                },
                ProcessConfig {
                    name: "Nginx".into(),
                    pattern: "nginx".into(),
                    memory_threshold_mb: 0.0,
                    enabled: false,
                    alert_on_down: false,
                },
                ProcessConfig {
                    name: "Amadea Web 8 x64".into(),
                    pattern: "amadea".into(),
                    memory_threshold_mb: 0.0,
                    enabled: true,
                    alert_on_down: true,
                },
            ],
            smtp: SmtpConfig { port: 587, use_tls: true, ..Default::default() },
            amadea_log_path: r"C:\ProgramData\ISoft\Amadea Web 8 x64\data\logs".into(),
            user_status_thresholds: UserStatusThresholds {
                active_minutes: 5,
                inactive_minutes: 30,
            },
            admin: AdminConfig {
                username: "admin".into(),
                password_hash: bcrypt::hash("admin", bcrypt::DEFAULT_COST)
                    .unwrap_or_default(),
            },
        }
    }
}

fn generate_secret_key() -> String {
    use rand::Rng;
    let mut rng = rand::thread_rng();
    (0..32).map(|_| format!("{:02x}", rng.gen::<u8>())).collect()
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Alert {
    pub timestamp: String,
    #[serde(rename = "type")]
    pub alert_type: String,
    pub key: String,
    pub message: String,
    pub value: f64,
    pub threshold: f64,
    pub hostname: String,
}

pub struct ConfigManager {
    config_file: PathBuf,
    alerts_file: PathBuf,
    pub config: Config,
}

impl ConfigManager {
    pub fn new() -> Self {
        let exe_dir = std::env::current_exe()
            .unwrap_or_default()
            .parent()
            .unwrap_or(Path::new("."))
            .to_path_buf();
        Self::new_with_dir(exe_dir.join("data"))
    }

    pub fn new_with_dir(data_dir: PathBuf) -> Self {
        fs::create_dir_all(&data_dir).ok();
        let config_file = data_dir.join("config.json");
        let alerts_file = data_dir.join("alerts.json");
        let config = if config_file.exists() {
            fs::read_to_string(&config_file)
                .ok()
                .and_then(|s| serde_json::from_str(&s).ok())
                .unwrap_or_default()
        } else {
            let cfg = Config::default();
            let json = serde_json::to_string_pretty(&cfg).unwrap_or_default();
            fs::write(&config_file, &json).ok();
            cfg
        };
        ConfigManager { config_file, alerts_file, config }
    }

    pub fn save(&self) {
        let json = serde_json::to_string_pretty(&self.config).unwrap_or_default();
        fs::write(&self.config_file, json).ok();
    }

    pub fn update(&mut self, config: Config) {
        self.config = config;
        self.save();
    }

    pub fn load_alerts(&self) -> Vec<Alert> {
        if !self.alerts_file.exists() {
            return vec![];
        }
        fs::read_to_string(&self.alerts_file)
            .ok()
            .and_then(|s| serde_json::from_str(&s).ok())
            .unwrap_or_default()
    }

    pub fn save_alert(&self, alert: Alert) {
        let mut alerts = self.load_alerts();
        alerts.insert(0, alert);
        alerts.truncate(MAX_ALERTS);
        let json = serde_json::to_string_pretty(&alerts).unwrap_or_default();
        fs::write(&self.alerts_file, json).ok();
    }

    pub fn clear_alerts(&self) {
        fs::write(&self.alerts_file, "[]").ok();
    }
}
```

Add `tempfile = "3"` to Cargo.toml under `[dev-dependencies]`:

```toml
[dev-dependencies]
tempfile = "3"
```

- [ ] **Step 4: Run the tests**

```bash
cargo test config
```

Expected: 3 tests pass.

- [ ] **Step 5: Commit**

```bash
git add src/config.rs Cargo.toml
git commit -m "feat: config module with structs and JSON persistence"
```

---

## Task 3: Alerter module

**Files:**
- Modify: `src/alerter.rs`
- Test: `#[cfg(test)]` section in `src/alerter.rs`

- [ ] **Step 1: Write the tests**

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use crate::config::SmtpConfig;

    #[test]
    fn not_configured_when_server_empty() {
        let smtp = SmtpConfig::default();
        assert!(!is_smtp_configured(&smtp));
    }

    #[test]
    fn configured_when_all_fields_present() {
        let smtp = SmtpConfig {
            server: "smtp.example.com".into(),
            port: 587,
            use_tls: true,
            username: "user".into(),
            password: "pass".into(),
            from_email: "from@example.com".into(),
            to_emails: vec!["to@example.com".into()],
        };
        assert!(is_smtp_configured(&smtp));
    }

    #[test]
    fn not_configured_when_no_recipients() {
        let smtp = SmtpConfig {
            server: "smtp.example.com".into(),
            port: 587,
            use_tls: true,
            username: "user".into(),
            password: "pass".into(),
            from_email: "from@example.com".into(),
            to_emails: vec![],
        };
        assert!(!is_smtp_configured(&smtp));
    }
}
```

- [ ] **Step 2: Check that the tests fail**

```bash
cargo test alerter
```

Expected: compilation error: `is_smtp_configured` is not defined.
- [ ] **Step 3: Implement alerter.rs**

```rust
use crate::config::SmtpConfig;
use lettre::{
    message::header::ContentType,
    transport::smtp::authentication::Credentials,
    AsyncSmtpTransport, AsyncTransport, Message, Tokio1Executor,
};

pub fn is_smtp_configured(smtp: &SmtpConfig) -> bool {
    !smtp.server.is_empty() && !smtp.from_email.is_empty() && !smtp.to_emails.is_empty()
}

pub struct Alerter;

impl Alerter {
    pub async fn send(&self, smtp: &SmtpConfig, subject: &str, body: &str) -> (bool, String) {
        if !is_smtp_configured(smtp) {
            return (false, "SMTP non configure".into());
        }
        match self.send_email(smtp, subject, body).await {
            Ok(msg) => (true, msg),
            Err(e) => (false, e),
        }
    }

    async fn send_email(
        &self,
        smtp: &SmtpConfig,
        subject: &str,
        body: &str,
    ) -> Result<String, String> {
        let from = smtp
            .from_email
            .parse()
            .map_err(|_| "Email expediteur invalide".to_string())?;
        let mut builder = Message::builder().from(from).subject(subject);
        for recipient in &smtp.to_emails {
            let mb = recipient
                .parse()
                .map_err(|_| format!("Destinataire invalide: {}", recipient))?;
            builder = builder.to(mb);
        }
        let email = builder
            .header(ContentType::TEXT_PLAIN)
            .body(body.to_string())
            .map_err(|e| e.to_string())?;
        let transport = self.build_transport(smtp)?;
        transport
            .send(email)
            .await
            .map(|_| "Email envoye avec succes".to_string())
            .map_err(|e| format!("Erreur SMTP: {}", e))
    }

    fn build_transport(
        &self,
        smtp: &SmtpConfig,
    ) -> Result<AsyncSmtpTransport<Tokio1Executor>, String> {
        let mut builder = if smtp.use_tls {
            AsyncSmtpTransport::<Tokio1Executor>::starttls_relay(&smtp.server)
                .map_err(|e| e.to_string())?
        } else {
            AsyncSmtpTransport::<Tokio1Executor>::builder_dangerous(&smtp.server)
        };
        builder = builder.port(smtp.port);
        if !smtp.username.is_empty() {
            builder = builder.credentials(Credentials::new(
                smtp.username.clone(),
                smtp.password.clone(),
            ));
        }
        Ok(builder.build())
    }

    pub async fn send_test(&self, smtp: &SmtpConfig) -> (bool, String) {
        let subject = "[TEST] Supervision - Test de configuration email";
        let body = "Ceci est un email de test.\n\nSi vous recevez ce message, la configuration SMTP est correcte.\n\n-- Supervision";
        self.send(smtp, subject, body).await
    }
}
```

- [ ] **Step 4: Run the tests**

```bash
cargo test alerter
```

Expected: 3 tests pass.

- [ ] **Step 5: Commit**

```bash
git add src/alerter.rs
git commit -m "feat: SMTP alerter module using lettre"
```

---

## Task 4: Monitor module

**Files:**
- Modify: `src/monitor.rs`
- Test: `#[cfg(test)]` section in `src/monitor.rs`

- [ ] **Step 1: Write the tests**

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn eval_status_ok_below_80_percent() {
        assert_eq!(eval_status(70.0, 90.0), "ok");
    }

    #[test]
    fn eval_status_warning_at_80_percent_of_threshold() {
        assert_eq!(eval_status(72.0, 90.0), "warning"); // 72/90 = 0.8
    }

    #[test]
    fn eval_status_critical_at_threshold() {
        assert_eq!(eval_status(90.0, 90.0), "critical");
    }

    #[test]
    fn eval_status_critical_above_threshold() {
        assert_eq!(eval_status(95.0, 90.0), "critical");
    }

    #[test]
    fn eval_status_ok_with_zero_threshold() {
        assert_eq!(eval_status(50.0, 0.0), "ok");
    }
}
```

- [ ] **Step 2: Check that the tests fail**

```bash
cargo test monitor
```

Expected: compilation error.
- [ ] **Step 3: Implement monitor.rs**

```rust
use crate::alerter::Alerter;
use crate::config::{Alert, ConfigManager, ProcessConfig};
use chrono::{DateTime, Duration, Local};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex, RwLock};
use std::time::Duration as StdDuration;
use sysinfo::{CpuRefreshKind, Disks, MemoryRefreshKind, RefreshKind, System};
use tokio::sync::Mutex as AsyncMutex;

pub fn eval_status(value: f64, threshold: f64) -> &'static str {
    if threshold <= 0.0 {
        return "ok";
    }
    let ratio = value / threshold;
    if ratio >= 1.0 {
        "critical"
    } else if ratio >= 0.80 {
        "warning"
    } else {
        "ok"
    }
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CpuMetrics {
    pub percent: f64,
    pub cores: usize,
    pub threshold: f64,
    pub status: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RamMetrics {
    pub percent: f64,
    pub total_gb: f64,
    pub used_gb: f64,
    pub available_gb: f64,
    pub threshold: f64,
    pub status: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DiskMetrics {
    pub drive: String,
    pub mountpoint: String,
    pub percent: f64,
    pub total_gb: f64,
    pub used_gb: f64,
    pub free_gb: f64,
    pub threshold: f64,
    pub status: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProcessMetrics {
    pub name: String,
    pub pattern: String,
    pub running: bool,
    pub enabled: bool,
    pub alert_on_down: bool,
    pub instance_count: usize,
    pub total_memory_mb: f64,
    pub total_cpu_percent: f64,
    pub memory_threshold_mb: f64,
    pub memory_status: String,
    pub pids: Vec<u32>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Metrics {
    pub timestamp: String,
    pub hostname: String,
    pub os: String,
    pub cpu: CpuMetrics,
    pub ram: RamMetrics,
    pub disks: Vec<DiskMetrics>,
    pub processes: Vec<ProcessMetrics>,
    pub uptime: String,
    pub boot_time: String,
    pub monitoring_active: bool,
    pub last_check: String,
    pub next_check: String,
}

pub struct SystemMonitor {
    config_manager: Arc<AsyncMutex<ConfigManager>>,
    alerter: Arc<Alerter>,
    pub metrics: Arc<RwLock<Option<Metrics>>>,
    pub running: Arc<AtomicBool>,
    last_alerts: Arc<Mutex<HashMap<String, DateTime<Local>>>>,
}

impl SystemMonitor {
    pub fn new(
        config_manager: Arc<AsyncMutex<ConfigManager>>,
        alerter: Arc<Alerter>,
    ) -> Self {
        SystemMonitor {
            config_manager,
            alerter,
            metrics: Arc::new(RwLock::new(None)),
            running: Arc::new(AtomicBool::new(false)),
            last_alerts: Arc::new(Mutex::new(HashMap::new())),
        }
    }

    pub async fn collect(&self) -> Metrics {
        let config = {
            let cm = self.config_manager.lock().await;
            cm.config.clone()
        };

        let mut sys = System::new_with_specifics(
            RefreshKind::new()
                .with_cpu(CpuRefreshKind::everything())
                .with_memory(MemoryRefreshKind::everything()),
        );
        // CPU usage is a delta between two refreshes; wait before sampling again.
        tokio::time::sleep(StdDuration::from_millis(500)).await;
        sys.refresh_all();

        let cpu_percent = sys.global_cpu_usage() as f64;
        let cpu_status = eval_status(cpu_percent, config.thresholds.cpu_percent).to_string();

        let ram_total = sys.total_memory() as f64;
        let ram_used = sys.used_memory() as f64;
        let ram_available = sys.available_memory() as f64;
        let ram_percent = if ram_total > 0.0 { ram_used / ram_total * 100.0 } else { 0.0 };
        let ram_status = eval_status(ram_percent, config.thresholds.ram_percent).to_string();

        let mut disks = Vec::new();
        let disk_list = Disks::new_with_refreshed_list();
        let ignored_fs = ["squashfs", "tmpfs", "devtmpfs", "overlay", "iso9660"];
        for disk in &disk_list {
            let fs = disk.file_system().to_string_lossy().to_lowercase();
            if ignored_fs.iter().any(|&f| fs.contains(f)) {
                continue;
            }
            let total = disk.total_space() as f64;
            if total < 1_073_741_824.0 {
                continue; // skip volumes under 1 GB
            }
            let available = disk.available_space() as f64;
            let used = total - available;
            let percent = (used / total * 100.0 * 10.0).round() / 10.0;
            let status = eval_status(percent, config.thresholds.disk_percent).to_string();
            disks.push(DiskMetrics {
                drive: disk.name().to_string_lossy().trim_end_matches('\\').to_string(),
                mountpoint: disk.mount_point().to_string_lossy().to_string(),
                percent,
                total_gb: (total / 1_073_741_824.0 * 10.0).round() / 10.0,
                used_gb: (used / 1_073_741_824.0 * 10.0).round() / 10.0,
                free_gb: (available / 1_073_741_824.0 * 10.0).round() / 10.0,
                threshold: config.thresholds.disk_percent,
                status,
            });
        }

        let processes = self.check_processes(&sys, &config.processes);

        let boot_time = System::boot_time();
        let now_unix = chrono::Local::now().timestamp() as u64;
        let uptime_secs = now_unix.saturating_sub(boot_time);
        let uptime = format!(
            "{}:{:02}:{:02}",
            uptime_secs / 3600,
            (uptime_secs % 3600) / 60,
            uptime_secs % 60
        );

        let now = chrono::Local::now();
        let interval = config.check_interval_minutes;

        Metrics {
            timestamp: now.to_rfc3339(),
            hostname: System::host_name().unwrap_or_else(|| "inconnu".into()),
            os: format!(
                "{} {}",
                System::name().unwrap_or_default(),
                System::os_version().unwrap_or_default()
            ),
            cpu: CpuMetrics {
                percent: (cpu_percent * 10.0).round() / 10.0,
                cores: sys.cpus().len(),
                threshold: config.thresholds.cpu_percent,
                status: cpu_status,
            },
            ram: RamMetrics {
                percent: (ram_percent * 10.0).round() / 10.0,
                total_gb: (ram_total / 1_073_741_824.0 * 10.0).round() / 10.0,
                used_gb: (ram_used / 1_073_741_824.0 * 10.0).round() / 10.0,
                available_gb: (ram_available / 1_073_741_824.0 * 10.0).round() / 10.0,
                threshold: config.thresholds.ram_percent,
                status: ram_status,
            },
            disks,
            processes,
            uptime,
            boot_time: chrono::DateTime::from_timestamp(boot_time as i64, 0)
                .map(|dt| dt.to_rfc3339())
                .unwrap_or_default(),
            monitoring_active: self.running.load(Ordering::Relaxed),
            last_check: now.to_rfc3339(),
            next_check: (now + Duration::minutes(interval as i64)).to_rfc3339(),
        }
    }

    fn check_processes(
        &self,
        sys: &System,
        process_configs: &[ProcessConfig],
    ) -> Vec<ProcessMetrics> {
        let mut results = Vec::new();
        for pc in process_configs {
            let pattern = pc.pattern.to_lowercase();
            let mut found_pids = Vec::new();
            let mut total_mem: f64 = 0.0;
            let mut total_cpu: f64 = 0.0;
            if pc.enabled {
                for (pid, proc) in sys.processes() {
                    let name = proc.name().to_string_lossy().to_lowercase();
                    let cmd = proc
                        .cmd()
                        .iter()
                        .map(|s| s.to_string_lossy().to_lowercase())
                        .collect::<Vec<_>>()
                        .join(" ");
                    if name.contains(&pattern) || cmd.contains(&pattern) {
                        found_pids.push(pid.as_u32());
                        total_mem += proc.memory() as f64 / 1_048_576.0;
                        total_cpu += proc.cpu_usage() as f64;
                    }
                }
            }
            let mem_status = if pc.memory_threshold_mb > 0.0 && total_mem > 0.0 {
                eval_status(total_mem, pc.memory_threshold_mb).to_string()
            } else {
                "ok".to_string()
            };
            results.push(ProcessMetrics {
                name: pc.name.clone(),
                pattern: pc.pattern.clone(),
                running: !found_pids.is_empty(),
                enabled: pc.enabled,
                alert_on_down: pc.alert_on_down,
                instance_count: found_pids.len(),
                total_memory_mb: (total_mem * 10.0).round() / 10.0,
                total_cpu_percent: (total_cpu * 10.0).round() / 10.0,
                memory_threshold_mb: pc.memory_threshold_mb,
                memory_status: mem_status,
                pids: found_pids,
            });
        }
        results
    }

    pub async fn check_and_alert(&self, metrics: &Metrics) {
        let (cooldown, hostname) = {
            let cm = self.config_manager.lock().await;
            (cm.config.alert_cooldown_minutes, metrics.hostname.clone())
        };

        let mut to_alert: Vec<(String, String, f64, f64, String)> = Vec::new();
        {
            let mut last = self.last_alerts.lock().unwrap();
            let now = chrono::Local::now();

            macro_rules! maybe_alert {
                ($key:expr, $msg:expr, $val:expr, $thr:expr, $type:expr) => {
                    let key = $key.to_string();
                    let should = match last.get(&key) {
                        Some(t) => (now - *t) >= Duration::minutes(cooldown as i64),
                        None => true,
                    };
                    if should {
                        last.insert(key.clone(), now);
                        to_alert.push((key, $msg.to_string(), $val, $thr, $type.to_string()));
                    }
                };
            }

            if metrics.cpu.status == "critical" {
                maybe_alert!(
                    "cpu",
                    format!("CPU a {}% (seuil: {}%)", metrics.cpu.percent, metrics.cpu.threshold),
                    metrics.cpu.percent,
                    metrics.cpu.threshold,
                    "threshold"
                );
            }
            if metrics.ram.status == "critical" {
                maybe_alert!(
                    "ram",
                    format!("RAM a {}% (seuil: {}%)", metrics.ram.percent, metrics.ram.threshold),
                    metrics.ram.percent,
                    metrics.ram.threshold,
                    "threshold"
                );
            }
            for disk in &metrics.disks {
                if disk.status == "critical" {
                    maybe_alert!(
                        format!("disk_{}", disk.drive),
                        format!("Disque {} a {}% (seuil: {}%)", disk.drive, disk.percent, disk.threshold),
                        disk.percent,
                        disk.threshold,
                        "threshold"
                    );
                }
            }
            for proc in &metrics.processes {
                if !proc.enabled {
                    continue;
                }
                if proc.alert_on_down && !proc.running {
                    maybe_alert!(
                        format!("process_down_{}", proc.name),
                        format!("Processus '{}' non detecte (pattern: {})", proc.name, proc.pattern),
                        0.0,
                        0.0,
                        "process_down"
                    );
                }
                if proc.memory_threshold_mb > 0.0 && proc.memory_status == "critical" {
                    maybe_alert!(
                        format!("process_mem_{}", proc.name),
                        format!(
                            "Processus '{}' utilise {} Mo (seuil: {} Mo)",
                            proc.name, proc.total_memory_mb, proc.memory_threshold_mb
                        ),
                        proc.total_memory_mb,
                        proc.memory_threshold_mb,
                        "threshold"
                    );
                }
            }
        }

        for (key, message, value, threshold, alert_type) in to_alert {
            let alert = Alert {
                timestamp: chrono::Local::now().to_rfc3339(),
                alert_type: alert_type.clone(),
                key,
                message: message.clone(),
                value,
                threshold,
                hostname: hostname.clone(),
            };
            {
                let cm = self.config_manager.lock().await;
                cm.save_alert(alert);
                let subject = format!("[ALERTE] {} - {}", hostname, message);
                let body = format!(
                    "Alerte de supervision\n{}\n\nServeur : {}\nDate : {}\nType : {}\n\nMessage : {}\n\n{}\nSupervision - Monitoring automatique",
                    "=".repeat(40),
                    hostname,
                    chrono::Local::now().to_rfc3339(),
                    alert_type,
                    message,
                    "=".repeat(40)
                );
                self.alerter.send(&cm.config.smtp, &subject, &body).await;
            }
        }
    }

    pub async fn start(self: Arc<Self>) {
        self.running.store(true, Ordering::Relaxed);
        let monitor = self.clone();
        tokio::spawn(async move {
            loop {
                if !monitor.running.load(Ordering::Relaxed) {
                    break;
                }
                let metrics = monitor.collect().await;
                {
                    let mut m = monitor.metrics.write().unwrap();
                    *m = Some(metrics.clone());
                }
                monitor.check_and_alert(&metrics).await;
                let interval = {
                    let cm = monitor.config_manager.lock().await;
                    cm.config.check_interval_minutes
                };
                tokio::time::sleep(StdDuration::from_secs(interval * 60)).await;
            }
        });
    }

    pub fn stop(&self) {
        self.running.store(false, Ordering::Relaxed);
    }
}
```

- [ ] **Step 4: Run the tests**

```bash
cargo test monitor
```

Expected: 5 tests pass.

- [ ] **Step 5: Full compile check**

```bash
cargo check
```

Expected: no errors.
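Every metric in `collect()` is rounded to one decimal with the same `(x * 10.0).round() / 10.0` expression. If that repetition bothers you, it can be factored into a helper; this is a sketch, and the `round1` name is ours, not something the plan defines:

```rust
// One-decimal rounding as used throughout collect().
// round1 is a hypothetical helper name; the plan inlines the expression.
fn round1(x: f64) -> f64 {
    (x * 10.0).round() / 10.0
}

fn main() {
    assert_eq!(round1(72.449), 72.4);
    assert_eq!(round1(3.14159), 3.1);
    assert_eq!(round1(0.0), 0.0);
}
```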
- [ ] **Step 6: Commit**

```bash
git add src/monitor.rs
git commit -m "feat: monitor module with sysinfo + threshold evaluation"
```

---

## Task 5: User monitor module

**Files:**
- Modify: `src/user_monitor.rs`
- Test: in `src/user_monitor.rs`

- [ ] **Step 1: Write the tests**

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parse_awevents_line_extracts_user_and_action() {
        let line = r#"2026-04-07 14:23:45.123;server;;;;"login=jdupont,action=consulter,Label=Consulter dossier""#;
        let mut users = std::collections::HashMap::new();
        let cutoff = chrono::Local::now() - chrono::Duration::hours(25);
        let mut hourly = (0..24).map(|h| (h, std::collections::HashSet::new())).collect();
        parse_awevents_line(line, &mut users, cutoff.naive_local(), &mut hourly);
        assert!(users.contains_key("jdupont"));
    }

    #[test]
    fn parse_awevents_line_ignores_malformed() {
        let line = "not a valid log line";
        let mut users = std::collections::HashMap::new();
        let cutoff = chrono::Local::now().naive_local();
        let mut hourly = (0..24).map(|h| (h, std::collections::HashSet::new())).collect();
        parse_awevents_line(line, &mut users, cutoff, &mut hourly);
        assert!(users.is_empty());
    }

    #[test]
    fn compute_statuses_marks_recent_as_active() {
        let now = chrono::Local::now().naive_local();
        let mut users = std::collections::HashMap::new();
        users.insert("alice".into(), UserEntry {
            login: "alice".into(),
            last_action_time: now,
            last_action_label: "test".into(),
            action_count_24h: 1,
            status: "deconnecte".into(),
            explicit_logout: false,
            logout_time: None,
            connected_since: Some(now),
        });
        compute_statuses(&mut users, 5, 30, now);
        assert_eq!(users["alice"].status, "actif");
    }
}
```

- [ ] **Step 2: Check that the tests fail**

```bash
cargo test user_monitor
```

Expected: compilation error.
- [ ] **Step 3: Implémenter user_monitor.rs**

```rust
use chrono::{Duration, Local, NaiveDateTime, Timelike};
use regex::Regex;
use serde::{Deserialize, Serialize};
use std::collections::{HashMap, HashSet};
use std::fs;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use tokio::sync::Mutex as AsyncMutex;

use crate::config::ConfigManager;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct UserEntry {
    pub login: String,
    pub last_action_time: NaiveDateTime,
    pub last_action_label: String,
    pub action_count_24h: u32,
    pub status: String,
    pub explicit_logout: bool,
    pub logout_time: Option<NaiveDateTime>,
    pub connected_since: Option<NaiveDateTime>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct HourlyCount {
    pub hour: u32,
    pub count: usize,
}

#[derive(Debug, Clone, Default)]
pub struct UserData {
    pub users: Vec<UserEntry>,
    pub hourly: Vec<HourlyCount>,
    pub error: Option<String>,
    pub no_files: bool,
}

/// Liste les fichiers de logs d'un préfixe donné pour une date, triés par suffixe numérique.
fn log_files_for_date(log_path: &Path, prefix: &str, date_str: &str) -> Vec<PathBuf> {
    let pattern = format!("{}/{}_{}_*", log_path.to_string_lossy(), prefix, date_str);
    let re = Regex::new(r"_(\d+)\.[^.]+$").unwrap();
    let mut files: Vec<PathBuf> = glob::glob(&pattern)
        .map(|paths| paths.filter_map(|f| f.ok()).collect())
        .unwrap_or_default();
    files.retain(|f| !f.to_string_lossy().ends_with(".zip"));
    files.sort_by_key(|f| {
        re.captures(&f.to_string_lossy())
            .and_then(|c| c.get(1))
            .and_then(|m| m.as_str().parse::<u64>().ok())
            .unwrap_or(0)
    });
    files
}

pub fn parse_awevents_line(
    line: &str,
    users: &mut HashMap<String, UserEntry>,
    cutoff_24h: NaiveDateTime,
    hourly: &mut HashMap<u32, HashSet<String>>,
) {
    let re = Regex::new(
        r#"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+;[^;]*;;;;"login=([^,]+),action=([^,]+),Label=(.+?)"?\s*$"#,
    )
    .unwrap();
    let m = match re.captures(line) {
        Some(m) => m,
        None => return,
    };
    let ts_str = &m[1];
    let login = m[2].trim().to_string();
    let label = m[4].to_string();
    if login.is_empty() {
        return;
    }
    let ts = match NaiveDateTime::parse_from_str(ts_str, "%Y-%m-%d %H:%M:%S") {
        Ok(t) => t,
        Err(_) => return,
    };
    let is_logout = label.to_lowercase().contains("se deconnecter");
    let entry = users.entry(login.clone()).or_insert_with(|| UserEntry {
        login: login.clone(),
        last_action_time: ts,
        last_action_label: label.chars().take(60).collect(),
        action_count_24h: 0,
        status: "deconnecte".into(),
        explicit_logout: is_logout,
        logout_time: if is_logout { Some(ts) } else { None },
        connected_since: Some(ts),
    });
    if ts > entry.last_action_time {
        entry.last_action_time = ts;
        entry.last_action_label = label.chars().take(60).collect();
    }
    if is_logout {
        entry.explicit_logout = true;
        entry.logout_time = Some(ts);
    } else if entry.explicit_logout {
        if let Some(lt) = entry.logout_time {
            if ts > lt {
                // Action postérieure à la déconnexion : l'utilisateur est revenu
                entry.explicit_logout = false;
                entry.logout_time = None;
            }
        }
    }
    if ts >= cutoff_24h {
        entry.action_count_24h += 1;
    }
    hourly.entry(ts.hour()).or_default().insert(login);
}

pub fn compute_statuses(
    users: &mut HashMap<String, UserEntry>,
    active_min: u64,
    inactive_min: u64,
    now: NaiveDateTime,
) {
    for user in users.values_mut() {
        let delta = (now - user.last_action_time).num_minutes().max(0) as u64;
        user.status = if user.explicit_logout {
            "deconnecte".into()
        } else if delta > inactive_min {
            "deconnecte".into()
        } else if delta > active_min {
            "inactif".into()
        } else {
            "actif".into()
        };
    }
}

pub struct UserMonitor {
    config_manager: Arc<AsyncMutex<ConfigManager>>,
    pub data: Arc<Mutex<UserData>>,
    running: Arc<AtomicBool>,
}

impl UserMonitor {
    pub fn new(config_manager: Arc<AsyncMutex<ConfigManager>>) -> Self {
        UserMonitor {
            config_manager,
            data: Arc::new(Mutex::new(UserData::default())),
            running: Arc::new(AtomicBool::new(false)),
        }
    }

    pub async fn parse_logs(&self) {
        let (log_path, active_min, inactive_min) = {
            let cm = self.config_manager.lock().await;
            (
                cm.config.amadea_log_path.clone(),
                cm.config.user_status_thresholds.active_minutes,
                cm.config.user_status_thresholds.inactive_minutes,
            )
        };
        let log_dir = Path::new(&log_path);
        if !log_dir.is_dir() {
            let mut d = self.data.lock().unwrap();
            *d = UserData {
                error: Some(format!("Dossier de logs introuvable : {}", log_path)),
                ..Default::default()
            };
            return;
        }
        let now = Local::now().naive_local();
        let date_str = Local::now().format("%y-%m-%d").to_string();
        let cutoff_24h = now - Duration::hours(24);
        let awevents_files = log_files_for_date(log_dir, "awevents", &date_str);
        if awevents_files.is_empty() {
            let mut d = self.data.lock().unwrap();
            *d = UserData { no_files: true, ..Default::default() };
            return;
        }
        let mut users: HashMap<String, UserEntry> = HashMap::new();
        let mut hourly: HashMap<u32, HashSet<String>> =
            (0..24).map(|h| (h, HashSet::new())).collect();
        for file in &awevents_files {
            if let Ok(content) = fs::read_to_string(file) {
                for line in content.lines() {
                    parse_awevents_line(line, &mut users, cutoff_24h, &mut hourly);
                }
            }
        }
        let re_isoft = Regex::new(
            r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*method=OpenUserSession.*login=([A-Za-z0-9_]+)",
        )
        .unwrap();
        for file in log_files_for_date(log_dir, "isoft", &date_str) {
            if let Ok(content) = fs::read_to_string(file) {
                for line in content.lines() {
                    if let Some(m) = re_isoft.captures(line) {
                        let login = m[2].to_string();
                        if let Ok(ts) =
                            NaiveDateTime::parse_from_str(&m[1], "%Y-%m-%d %H:%M:%S")
                        {
                            if let Some(u) = users.get_mut(&login) {
                                if u.connected_since.is_none() {
                                    u.connected_since = Some(ts);
                                }
                            }
                        }
                    }
                }
            }
        }
        compute_statuses(&mut users, active_min, inactive_min, now);
        let status_order = |s: &str| match s {
            "actif" => 0,
            "inactif" => 1,
            _ => 2,
        };
        let mut sorted: Vec<UserEntry> = users.into_values().collect();
        sorted.sort_by_key(|u| status_order(&u.status));
        let hourly_data: Vec<HourlyCount> = {
            let mut v: Vec<_> = hourly.iter().collect();
            v.sort_by_key(|(h, _)| *h);
            v.iter()
                .map(|(h, s)| HourlyCount { hour: **h, count: s.len() })
                .collect()
        };
        let mut d = self.data.lock().unwrap();
        *d = UserData {
            users: sorted,
            hourly: hourly_data,
            error: None,
            no_files: false,
        };
    }

    pub async fn get_weekly_activity(&self) -> Vec<serde_json::Value> {
        let log_path = {
            let cm = self.config_manager.lock().await;
            cm.config.amadea_log_path.clone()
        };
        let log_dir = Path::new(&log_path);
        if !log_dir.is_dir() {
            return vec![];
        }
        let today = Local::now().date_naive();
        let mut result = Vec::new();
        let re = Regex::new(r"^(\d{4}-\d{2}-\d{2} (\d{2}):\d{2}:\d{2}).*login=([^,]+),").unwrap();
        for delta in (0..=6).rev() {
            let day = today - Duration::days(delta);
            let date_str = day.format("%y-%m-%d").to_string();
            let files = log_files_for_date(log_dir, "awevents", &date_str);
            if files.is_empty() {
                result.push(serde_json::json!({ "date": day.to_string(), "count": null }));
                continue;
            }
            let mut hourly: HashMap<u32, HashSet<String>> =
                (0..24).map(|h| (h, HashSet::new())).collect();
            for file in &files {
                if let Ok(content) = fs::read_to_string(file) {
                    for line in content.lines() {
                        if let Some(m) = re.captures(line) {
                            let hour: u32 = m[2].parse().unwrap_or(0);
                            let login = m[3].trim().to_string();
                            if !login.is_empty() {
                                hourly.entry(hour).or_default().insert(login);
                            }
                        }
                    }
                }
            }
            let max_concurrent = hourly.values().map(|s| s.len()).max().unwrap_or(0);
            result.push(serde_json::json!({ "date": day.to_string(), "count": max_concurrent }));
        }
        result
    }

    pub async fn start(self: Arc<Self>) {
        self.running.store(true, Ordering::Relaxed);
        let um = self.clone();
        tokio::spawn(async move {
            loop {
                if !um.running.load(Ordering::Relaxed) {
                    break;
                }
                um.parse_logs().await;
                let interval = {
                    let cm = um.config_manager.lock().await;
                    cm.config.check_interval_minutes
                };
                tokio::time::sleep(std::time::Duration::from_secs(interval * 60)).await;
            }
        });
    }
}
```

- [ ] **Step 4: Lancer les tests**

```bash
cargo test user_monitor
```

Expected: 3 tests passent.
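En complément, la règle de classement actif/inactif/déconnecté de `compute_statuses` peut se vérifier isolément. Esquisse autonome (la fonction `status_for` est hypothétique, introduite ici pour l'illustration : elle reprend uniquement les règles du plan, sans dépendre de `chrono` ni des logs) :

```rust
/// Esquisse hypothétique : mêmes règles que compute_statuses,
/// appliquées à un delta exprimé en minutes.
fn status_for(delta_min: u64, explicit_logout: bool, active_min: u64, inactive_min: u64) -> &'static str {
    if explicit_logout {
        "deconnecte" // déconnexion explicite : prioritaire sur le delta
    } else if delta_min > inactive_min {
        "deconnecte" // sans action depuis trop longtemps
    } else if delta_min > active_min {
        "inactif"
    } else {
        "actif"
    }
}
```

Avec `active_minutes = 5` et `inactive_minutes = 30`, un delta de 10 minutes donne « inactif » et un delta de 45 minutes donne « deconnecte ».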
- [ ] **Step 5: Commit**

```bash
git add src/user_monitor.rs
git commit -m "feat: user_monitor — parsing logs Amadea"
```

---

## Task 6: AppState + flash + AuthUser extractor

**Files:**
- Modify: `src/routes/mod.rs`

- [ ] **Step 1: Implémenter routes/mod.rs**

```rust
pub mod auth;
pub mod dashboard;
pub mod settings;
pub mod alerts;
pub mod users;

use axum::{
    async_trait,
    extract::FromRequestParts,
    http::request::Parts,
    response::{IntoResponse, Redirect, Response},
};
use std::sync::Arc;
use tera::Tera;
use tokio::sync::Mutex as AsyncMutex;
use tower_sessions::Session;

use crate::alerter::Alerter;
use crate::config::ConfigManager;
use crate::monitor::SystemMonitor;
use crate::user_monitor::UserMonitor;

const SESSION_USER_KEY: &str = "username";
const SESSION_FLASH_KEY: &str = "flash_messages";

#[derive(Clone)]
pub struct AppState {
    pub config_manager: Arc<AsyncMutex<ConfigManager>>,
    pub monitor: Arc<SystemMonitor>,
    pub alerter: Arc<Alerter>,
    pub user_monitor: Arc<UserMonitor>,
    pub tera: Arc<Tera>,
}

impl AppState {
    pub fn new(
        config_manager: Arc<AsyncMutex<ConfigManager>>,
        monitor: Arc<SystemMonitor>,
        alerter: Arc<Alerter>,
        user_monitor: Arc<UserMonitor>,
    ) -> Self {
        let tera = build_tera();
        AppState {
            config_manager,
            monitor,
            alerter,
            user_monitor,
            tera: Arc::new(tera),
        }
    }
}

fn build_tera() -> Tera {
    let mut tera = Tera::default();
    tera.add_raw_templates(vec![
        ("base.html", include_str!("../../templates/base.html")),
        ("login.html", include_str!("../../templates/login.html")),
        ("dashboard.html", include_str!("../../templates/dashboard.html")),
        ("settings.html", include_str!("../../templates/settings.html")),
        ("alerts.html", include_str!("../../templates/alerts.html")),
        ("users.html", include_str!("../../templates/users.html")),
    ])
    .expect("Erreur chargement templates Tera");
    tera
}

/// Flash message helpers
pub async fn flash(session: &Session, category: &str, message: &str) {
    let mut messages: Vec<(String, String)> = session
        .get::<Vec<(String, String)>>(SESSION_FLASH_KEY)
        .await
        .unwrap_or_default()
        .unwrap_or_default();
    messages.push((category.to_string(), message.to_string()));
    session.insert(SESSION_FLASH_KEY, messages).await.ok();
}

pub async fn get_and_clear_flash(session: &Session) -> Vec<(String, String)> {
    let messages: Vec<(String, String)> = session
        .get::<Vec<(String, String)>>(SESSION_FLASH_KEY)
        .await
        .unwrap_or_default()
        .unwrap_or_default();
    session
        .remove::<Vec<(String, String)>>(SESSION_FLASH_KEY)
        .await
        .ok();
    messages
}

/// Extractor : renvoie le username si authentifié, sinon redirige vers /login
pub struct AuthUser(pub String);

#[async_trait]
impl<S> FromRequestParts<S> for AuthUser
where
    S: Send + Sync,
{
    type Rejection = Response;

    async fn from_request_parts(parts: &mut Parts, state: &S) -> Result<Self, Self::Rejection> {
        let session = Session::from_request_parts(parts, state)
            .await
            .map_err(|e| e.into_response())?;
        match session
            .get::<String>(SESSION_USER_KEY)
            .await
            .unwrap_or_default()
        {
            Some(u) => Ok(AuthUser(u)),
            None => Err(Redirect::to("/login").into_response()),
        }
    }
}

pub fn render_html(
    tera: &Tera,
    template: &str,
    ctx: tera::Context,
) -> axum::response::Html<String> {
    match tera.render(template, &ctx) {
        Ok(html) => axum::response::Html(html),
        Err(e) => axum::response::Html(format!("Erreur template : {}", e)),
    }
}
```

- [ ] **Step 2: Compile check**

```bash
cargo check
```

Expected: pas d'erreurs.

- [ ] **Step 3: Commit**

```bash
git add src/routes/mod.rs
git commit -m "feat: AppState, flash helpers, AuthUser extractor"
```

---

## Task 7: Auth routes (login/logout)

**Files:**
- Modify: `src/routes/auth.rs`

- [ ] **Step 1: Implémenter auth.rs**

```rust
use axum::{
    extract::{Form, State},
    response::{IntoResponse, Redirect},
};
use serde::Deserialize;
use tower_sessions::Session;

use crate::routes::{flash, get_and_clear_flash, render_html, AppState};

const SESSION_USER_KEY: &str = "username";

#[derive(Deserialize)]
pub struct LoginForm {
    pub username: String,
    pub password: String,
}

pub async fn login_get(
    session: Session,
    State(state): State<AppState>,
) -> impl IntoResponse {
    // Si déjà connecté, rediriger vers le dashboard
    if session
        .get::<String>(SESSION_USER_KEY)
        .await
        .unwrap_or_default()
        .is_some()
    {
        return Redirect::to("/").into_response();
    }
    let flash_messages = get_and_clear_flash(&session).await;
    let mut ctx = tera::Context::new();
    ctx.insert("flash_messages", &flash_messages);
    ctx.insert("is_authenticated", &false);
    render_html(&state.tera, "login.html", ctx).into_response()
}

pub async fn login_post(
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<LoginForm>,
) -> impl IntoResponse {
    let (admin_username, admin_hash) = {
        let cm = state.config_manager.lock().await;
        (
            cm.config.admin.username.clone(),
            cm.config.admin.password_hash.clone(),
        )
    };
    let username = form.username.trim().to_string();
    let password = form.password.clone();
    let valid = username == admin_username
        && bcrypt::verify(&password, &admin_hash).unwrap_or(false);
    if valid {
        session.insert(SESSION_USER_KEY, username).await.ok();
        return Redirect::to("/").into_response();
    }
    flash(&session, "danger", "Identifiants incorrects.").await;
    Redirect::to("/login").into_response()
}

pub async fn logout(session: Session) -> impl IntoResponse {
    session.flush().await.ok();
    Redirect::to("/login")
}
```
Note: `SESSION_USER_KEY` est défini dans `routes/mod.rs`. Supprimer la redéfinition locale dans `auth.rs` et utiliser `crate::routes::SESSION_USER_KEY` (rendre `pub` dans mod.rs).

- [ ] **Step 2: Compile check**

```bash
cargo check
```

Expected: pas d'erreurs.

- [ ] **Step 3: Commit**

```bash
git add src/routes/auth.rs
git commit -m "feat: routes auth login/logout"
```

---

## Task 8: Dashboard et API métriques

**Files:**
- Modify: `src/routes/dashboard.rs`

- [ ] **Step 1: Implémenter dashboard.rs**

```rust
use axum::{
    extract::State,
    response::{IntoResponse, Json, Redirect},
};
use tower_sessions::Session;

use crate::routes::{flash, get_and_clear_flash, render_html, AppState, AuthUser};

pub async fn dashboard(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let flash_messages = get_and_clear_flash(&session).await;
    let metrics = state.monitor.metrics.read().unwrap().clone();
    let (is_default_pw, admin_username) = {
        let cm = state.config_manager.lock().await;
        let hash = cm.config.admin.password_hash.clone();
        let username = cm.config.admin.username.clone();
        (bcrypt::verify("admin", &hash).unwrap_or(false), username)
    };
    let mut ctx = tera::Context::new();
    ctx.insert("flash_messages", &flash_messages);
    ctx.insert("is_authenticated", &true);
    ctx.insert("active_page", "dashboard");
    ctx.insert("metrics", &metrics);
    ctx.insert("default_pw", &is_default_pw);
    ctx.insert("admin_username", &admin_username);
    render_html(&state.tera, "dashboard.html", ctx)
}

pub async fn api_metrics(
    _auth: AuthUser,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let metrics = state.monitor.metrics.read().unwrap().clone();
    Json(metrics)
}

pub async fn toggle_monitoring(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let is_running = state
        .monitor
        .running
        .load(std::sync::atomic::Ordering::Relaxed);
    if is_running {
        state.monitor.stop();
        flash(&session, "warning", "Monitoring arrêté.").await;
    } else {
        let m = state.monitor.clone();
        m.start().await;
        flash(&session, "success", "Monitoring démarré.").await;
    }
    Redirect::to("/")
}
```

- [ ] **Step 2: Compile check**

```bash
cargo check
```

Expected: pas d'erreurs.

- [ ] **Step 3: Commit**

```bash
git add src/routes/dashboard.rs
git commit -m "feat: dashboard + api/metrics routes"
```

---

## Task 9: Settings routes

**Files:**
- Modify: `src/routes/settings.rs`

- [ ] **Step 1: Implémenter settings.rs**

```rust
use axum::{
    extract::{Form, State},
    response::{IntoResponse, Redirect},
};
use serde::Deserialize;
use tower_sessions::Session;

use crate::routes::{flash, get_and_clear_flash, render_html, AppState, AuthUser};

pub async fn settings_get(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let flash_messages = get_and_clear_flash(&session).await;
    let (config, is_default_pw) = {
        let cm = state.config_manager.lock().await;
        let pw_default =
            bcrypt::verify("admin", &cm.config.admin.password_hash).unwrap_or(false);
        (cm.config.clone(), pw_default)
    };
    // Masquer le mot de passe SMTP
    let smtp_masked = if config.smtp.password.is_empty() {
        "".to_string()
    } else {
        "********".to_string()
    };
    let mut ctx = tera::Context::new();
    ctx.insert("flash_messages", &flash_messages);
    ctx.insert("is_authenticated", &true);
    ctx.insert("active_page", "settings");
    ctx.insert("config", &config);
    ctx.insert("smtp", &config.smtp);
    ctx.insert("smtp_password_masked", &smtp_masked);
    ctx.insert("default_pw", &is_default_pw);
    render_html(&state.tera, "settings.html", ctx)
}

#[derive(Deserialize)]
pub struct ThresholdsForm {
    pub cpu_percent: u32,
    pub ram_percent: u32,
    pub disk_percent: u32,
}

pub async fn update_thresholds(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<ThresholdsForm>,
) -> impl IntoResponse {
    if !(1..=100).contains(&form.cpu_percent)
        || !(1..=100).contains(&form.ram_percent)
        || !(1..=100).contains(&form.disk_percent)
    {
        flash(&session, "danger", "Les seuils doivent être entre 1 et 100.").await;
        return Redirect::to("/settings");
    }
    {
        let mut cm = state.config_manager.lock().await;
        cm.config.thresholds.cpu_percent = form.cpu_percent as f64;
        cm.config.thresholds.ram_percent = form.ram_percent as f64;
        cm.config.thresholds.disk_percent = form.disk_percent as f64;
        cm.save();
    }
    flash(&session, "success", "Seuils mis à jour.").await;
    Redirect::to("/settings")
}

#[derive(Deserialize)]
pub struct MonitoringForm {
    pub check_interval_minutes: u64,
    pub alert_cooldown_minutes: u64,
}

pub async fn update_monitoring(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<MonitoringForm>,
) -> impl IntoResponse {
    if form.check_interval_minutes < 1 {
        flash(&session, "danger", "L'intervalle doit être d'au moins 1 minute.").await;
        return Redirect::to("/settings");
    }
    if form.alert_cooldown_minutes < 1 {
        flash(&session, "danger", "Le cooldown doit être d'au moins 1 minute.").await;
        return Redirect::to("/settings");
    }
    {
        let mut cm = state.config_manager.lock().await;
        cm.config.check_interval_minutes = form.check_interval_minutes;
        cm.config.alert_cooldown_minutes = form.alert_cooldown_minutes;
        cm.save();
    }
    flash(&session, "success", "Paramètres de monitoring mis à jour.").await;
    Redirect::to("/settings")
}

#[derive(Deserialize)]
pub struct SmtpForm {
    pub smtp_server: String,
    pub smtp_port: u16,
    pub smtp_tls: Option<String>,
    pub smtp_username: String,
    pub smtp_password: Option<String>,
    pub smtp_from: String,
    pub smtp_to: String,
}

pub async fn update_smtp(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<SmtpForm>,
) -> impl IntoResponse {
    let to_emails: Vec<String> = form
        .smtp_to
        .split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect();
    {
        let mut cm = state.config_manager.lock().await;
        let old_password = cm.config.smtp.password.clone();
        cm.config.smtp.server = form.smtp_server.trim().to_string();
        cm.config.smtp.port = form.smtp_port;
        cm.config.smtp.use_tls = form.smtp_tls.is_some();
        cm.config.smtp.username = form.smtp_username.trim().to_string();
        cm.config.smtp.from_email = form.smtp_from.trim().to_string();
        cm.config.smtp.to_emails = to_emails;
        // Champ vide = conserver l'ancien mot de passe
        cm.config.smtp.password = match form.smtp_password {
            Some(pw) if !pw.is_empty() => pw,
            _ => old_password,
        };
        cm.save();
    }
    flash(&session, "success", "Configuration SMTP mise à jour.").await;
    Redirect::to("/settings")
}

pub async fn test_smtp(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let smtp = {
        let cm = state.config_manager.lock().await;
        cm.config.smtp.clone()
    };
    let (ok, msg) = state.alerter.send_test(&smtp).await;
    if ok {
        flash(&session, "success", &format!("Test réussi : {}", msg)).await;
    } else {
        flash(&session, "danger", &format!("Test échoué : {}", msg)).await;
    }
    Redirect::to("/settings")
}

#[derive(Deserialize)]
pub struct ProcessesForm {
    #[serde(rename = "proc_name[]")]
    pub proc_name: Option<Vec<String>>,
    #[serde(rename = "proc_pattern[]")]
    pub proc_pattern: Option<Vec<String>>,
    #[serde(rename = "proc_mem_threshold[]")]
    pub proc_mem_threshold: Option<Vec<String>>,
    #[serde(rename = "proc_enabled[]")]
    pub proc_enabled: Option<Vec<String>>,
    #[serde(rename = "proc_alert_down[]")]
    pub proc_alert_down: Option<Vec<String>>,
}

pub async fn update_processes(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<ProcessesForm>,
) -> impl IntoResponse {
    use crate::config::ProcessConfig;
    let names = form.proc_name.unwrap_or_default();
    let patterns = form.proc_pattern.unwrap_or_default();
    let mem_thresholds = form.proc_mem_threshold.unwrap_or_default();
    let enableds = form.proc_enabled.unwrap_or_default();
    let alert_downs = form.proc_alert_down.unwrap_or_default();
    let mut processes = Vec::new();
    for (i, name) in names.iter().enumerate() {
        let name = name.trim().to_string();
        if name.is_empty() {
            continue;
        }
        processes.push(ProcessConfig {
            name,
            pattern: patterns
                .get(i)
                .map(|s| s.trim().to_lowercase())
                .unwrap_or_default(),
            memory_threshold_mb: mem_thresholds
                .get(i)
                .and_then(|s| s.parse().ok())
                .unwrap_or(0.0),
            enabled: enableds.contains(&i.to_string()),
            alert_on_down: alert_downs.contains(&i.to_string()),
        });
    }
    {
        let mut cm = state.config_manager.lock().await;
        cm.config.processes = processes;
        cm.save();
    }
    flash(&session, "success", "Processus surveillés mis à jour.").await;
    Redirect::to("/settings")
}

#[derive(Deserialize)]
pub struct PasswordForm {
    pub current_password: String,
    pub new_password: String,
    pub confirm_password: String,
}

pub async fn update_password(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<PasswordForm>,
) -> impl IntoResponse {
    let hash = {
        let cm = state.config_manager.lock().await;
        cm.config.admin.password_hash.clone()
    };
    if !bcrypt::verify(&form.current_password, &hash).unwrap_or(false) {
        flash(&session, "danger", "Mot de passe actuel incorrect.").await;
        return Redirect::to("/settings");
    }
    if form.new_password.len() < 8 {
        flash(&session, "danger", "Le nouveau mot de passe doit faire au moins 8 caractères.").await;
        return Redirect::to("/settings");
    }
    if form.new_password != form.confirm_password {
        flash(&session, "danger", "Les mots de passe ne correspondent pas.").await;
        return Redirect::to("/settings");
    }
    let new_hash = match bcrypt::hash(&form.new_password, bcrypt::DEFAULT_COST) {
        Ok(h) => h,
        Err(_) => {
            flash(&session, "danger", "Erreur lors du hachage du mot de passe.").await;
            return Redirect::to("/settings");
        }
    };
    {
        let mut cm = state.config_manager.lock().await;
        cm.config.admin.password_hash = new_hash;
        cm.save();
    }
    flash(&session, "success", "Mot de passe mis à jour.").await;
    Redirect::to("/settings")
}

#[derive(Deserialize)]
pub struct PortForm {
    pub port: u16,
}

pub async fn update_port(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<PortForm>,
) -> impl IntoResponse {
    if !(1024..=65535).contains(&form.port) {
        flash(&session, "danger", "Le port doit être entre 1024 et 65535.").await;
        return Redirect::to("/settings");
    }
    {
        let mut cm = state.config_manager.lock().await;
        cm.config.port = form.port;
        cm.save();
    }
    flash(
        &session,
        "warning",
        &format!(
            "Port mis à jour à {}. Redémarrez l'application pour appliquer.",
            form.port
        ),
    )
    .await;
    Redirect::to("/settings")
}

#[derive(Deserialize)]
pub struct AmadeaLogPathForm {
    pub amadea_log_path: String,
}

pub async fn update_amadea_log_path(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<AmadeaLogPathForm>,
) -> impl IntoResponse {
    let path = form.amadea_log_path.trim().to_string();
    if path.is_empty() {
        flash(&session, "danger", "Le chemin ne peut pas être vide.").await;
        return Redirect::to("/settings");
    }
    {
        let mut cm = state.config_manager.lock().await;
        cm.config.amadea_log_path = path;
        cm.save();
    }
    flash(&session, "success", "Chemin des logs Amadea mis à jour.").await;
    Redirect::to("/settings")
}

#[derive(Deserialize)]
pub struct UserThresholdsForm {
    pub active_minutes: u64,
    pub inactive_minutes: u64,
}

pub async fn update_user_thresholds(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
    Form(form): Form<UserThresholdsForm>,
) -> impl IntoResponse {
    if form.active_minutes < 1 || form.inactive_minutes < 1 {
        flash(&session, "danger", "Les seuils doivent être d'au moins 1 minute.").await;
        return Redirect::to("/settings");
    }
    if form.active_minutes >= form.inactive_minutes {
        flash(&session, "danger", "Le seuil 'actif' doit être inférieur au seuil 'inactif'.").await;
        return Redirect::to("/settings");
    }
    {
        let mut cm = state.config_manager.lock().await;
        cm.config.user_status_thresholds.active_minutes = form.active_minutes;
        cm.config.user_status_thresholds.inactive_minutes = form.inactive_minutes;
        cm.save();
    }
    flash(&session, "success", "Seuils utilisateurs mis à jour.").await;
    Redirect::to("/settings")
}
```

- [ ] **Step 2: Compile check**

```bash
cargo check
```

Expected: pas d'erreurs.
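Point de détail de `update_smtp` : le champ `smtp_to` est découpé sur les virgules, chaque adresse est nettoyée et les entrées vides sont ignorées. Esquisse autonome de ce découpage (`parse_to_emails` est un nom hypothétique introduit pour l'illustration) :

```rust
// Esquisse : même logique que dans update_smtp
// (split sur ',', trim, filtrage des entrées vides).
fn parse_to_emails(raw: &str) -> Vec<String> {
    raw.split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}
```

Ainsi `"a@x.fr, b@y.fr ,,"` produit deux adresses propres, et une saisie uniquement composée d'espaces produit une liste vide.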
- [ ] **Step 3: Commit**

```bash
git add src/routes/settings.rs
git commit -m "feat: routes settings — seuils, smtp, processus, mot de passe, port"
```

---

## Task 10: Alerts routes

**Files:**
- Modify: `src/routes/alerts.rs`

- [ ] **Step 1: Implémenter alerts.rs**

```rust
use axum::{
    extract::State,
    response::{IntoResponse, Redirect},
};
use tower_sessions::Session;

use crate::routes::{flash, get_and_clear_flash, render_html, AppState, AuthUser};

pub async fn alerts_get(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let flash_messages = get_and_clear_flash(&session).await;
    let raw_alerts = {
        let cm = state.config_manager.lock().await;
        cm.load_alerts()
    };
    // Formater les timestamps pour l'affichage (YYYY-MM-DDTHH:MM:SS → YYYY-MM-DD HH:MM:SS)
    let alerts: Vec<serde_json::Value> = raw_alerts
        .iter()
        .map(|a| {
            let mut v = serde_json::to_value(a).unwrap();
            if let Some(ts) = v.get("timestamp").and_then(|t| t.as_str()) {
                let formatted = ts.chars().take(19).collect::<String>().replace('T', " ");
                v["timestamp_display"] = serde_json::Value::String(formatted);
            }
            v
        })
        .collect();
    let mut ctx = tera::Context::new();
    ctx.insert("flash_messages", &flash_messages);
    ctx.insert("is_authenticated", &true);
    ctx.insert("active_page", "alerts");
    ctx.insert("alerts", &alerts);
    render_html(&state.tera, "alerts.html", ctx)
}

pub async fn clear_alerts(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
) -> impl IntoResponse {
    {
        let cm = state.config_manager.lock().await;
        cm.clear_alerts();
    }
    flash(&session, "success", "Historique des alertes effacé.").await;
    Redirect::to("/alerts")
}
```

- [ ] **Step 2: Compile check**

```bash
cargo check
```

- [ ] **Step 3: Commit**

```bash
git add src/routes/alerts.rs
git commit -m "feat: routes alertes"
```

---

## Task 11: Users routes

**Files:**
- Modify: `src/routes/users.rs`

- [ ] **Step 1: Implémenter users.rs**

```rust
use axum::{
    extract::State,
    response::{IntoResponse, Json},
};
use serde_json::json;
use tower_sessions::Session;

use crate::routes::{get_and_clear_flash, render_html, AppState, AuthUser};

pub async fn users_get(
    _auth: AuthUser,
    session: Session,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let flash_messages = get_and_clear_flash(&session).await;
    let mut ctx = tera::Context::new();
    ctx.insert("flash_messages", &flash_messages);
    ctx.insert("is_authenticated", &true);
    ctx.insert("active_page", "users");
    render_html(&state.tera, "users.html", ctx)
}

pub async fn api_users(
    _auth: AuthUser,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let data = state.user_monitor.data.lock().unwrap().clone();
    if let Some(err) = &data.error {
        return Json(json!({ "error": err }));
    }
    if data.no_files {
        return Json(json!({ "no_files": true }));
    }
    let users: Vec<serde_json::Value> = data
        .users
        .iter()
        .map(|u| {
            json!({
                "login": u.login,
                "status": u.status,
                "last_action_time": u.last_action_time.format("%H:%M:%S").to_string(),
                "last_action_label": u.last_action_label,
                "action_count_24h": u.action_count_24h,
                "connected_since": u.connected_since.map(|t| t.format("%H:%M").to_string()),
                "explicit_logout": u.explicit_logout,
            })
        })
        .collect();
    Json(json!({ "users": users, "hourly": data.hourly }))
}

pub async fn api_users_weekly(
    _auth: AuthUser,
    State(state): State<AppState>,
) -> impl IntoResponse {
    let weekly = state.user_monitor.get_weekly_activity().await;
    Json(json!({ "weekly": weekly }))
}
```

- [ ] **Step 2: Compile check**

```bash
cargo check
```

- [ ] **Step 3: Commit**

```bash
git add src/routes/users.rs
git commit -m "feat: routes utilisateurs Amadea"
```

---

## Task 12: Templates Tera

**Files:**
- Create: `templates/base.html`
- Create: `templates/login.html`
- Create: `templates/dashboard.html`
- Create: `templates/settings.html`
- Create: `templates/alerts.html`
- Create: `templates/users.html`
- Create: `static/style.css`

- [ ] **Step 1: Créer static/style.css**

Copier le contenu de `../supervision/static/style.css` tel quel :

```css
/* Supervision — Style */
body { background-color: #f4f6f9; font-size: 0.9rem; }
.metric-card { transition: border-color 0.3s; } .metric-card .metric-value { font-size: 2.2rem; font-weight: 700; line-height: 1.1; } .card { box-shadow: 0 1px 3px rgba(0, 0, 0, 0.08); } .badge { font-size: 0.75rem; text-transform: uppercase; } .table th { font-size: 0.8rem; text-transform: uppercase; color: #6c757d; border-bottom-width: 1px; } .navbar-brand i { color: #4fc3f7; } .border-success { border-left: 4px solid #198754 !important; } .border-warning { border-left: 4px solid #ffc107 !important; } .border-danger { border-left: 4px solid #dc3545 !important; } @media (max-width: 768px) { .metric-card .metric-value { font-size: 1.8rem; } } ``` - [ ] **Step 2: Créer templates/base.html** ```html {% block title %}Supervision{% endblock %} {% if is_authenticated %} {% endif %}
{% if flash_messages %} {% for item in flash_messages %} {% endfor %} {% endif %} {% if default_pw is defined and default_pw %}
Sécurité : Le mot de passe par défaut est encore actif. Changez-le maintenant.
{% endif %} {% block content %}{% endblock %}
{% block scripts %}{% endblock %} ``` - [ ] **Step 3: Créer templates/login.html** Porter `../supervision/templates/login.html` en remplaçant : - `url_for('login')` → `/login` - `{% with messages = get_flashed_messages(with_categories=true) %}...{% endwith %}` → `{% if flash_messages %}{% for item in flash_messages %}...{{ item.0 }}...{{ item.1 }}...{% endfor %}{% endif %}` ```html Supervision - Connexion
``` - [ ] **Step 4: Créer templates/dashboard.html** Porter `../supervision/templates/dashboard.html` : - `url_for('toggle_monitoring')` → `/api/monitoring/toggle` - `url_for('alerts')` → `/alerts` - `{% extends "base.html" %}` → identique (Tera supporte extends) - `{{ metrics.cpu.percent }}` → `{{ metrics.cpu.percent }}` - `loop.index0` → `loop.index0` ```html {% extends "base.html" %} {% block title %}Supervision - Tableau de bord{% endblock %} {% block content %}

Tableau de bord {% if metrics and metrics.hostname %} — {{ metrics.hostname }} {% endif %}

{% if metrics and metrics.monitoring_active %} {% else %} {% endif %}
{% if not metrics %}
Collecte des métriques en cours...
{% else %}
{{ metrics.hostname }} {{ metrics.os }} Uptime: {{ metrics.uptime }} {{ metrics.cpu.cores }} cœurs {{ metrics.ram.total_gb }} Go RAM
CPU
{{ metrics.cpu.status }}
{{ metrics.cpu.percent }}%
Seuil: {{ metrics.cpu.threshold }}%
RAM
{{ metrics.ram.status }}
{{ metrics.ram.percent }}%
{{ metrics.ram.used_gb }} / {{ metrics.ram.total_gb }} Go — Seuil: {{ metrics.ram.threshold }}%
{% for disk in metrics.disks %}
{{ disk.drive }}
{{ disk.status }}
{{ disk.percent }}%
{{ disk.used_gb }} / {{ disk.total_gb }} Go ({{ disk.free_gb }} Go libres) — Seuil: {{ disk.threshold }}%
{% endfor %}
Processus surveillés
{% for proc in metrics.processes %} {% endfor %}
ProcessusStatutInstances MémoireCPUPID(s)
{{ proc.name }}
pattern: {{ proc.pattern }}
{% if not proc.enabled %}Désactivé {% elif proc.running %}Actif {% else %}Arrêté{% endif %} {{ proc.instance_count }} {{ proc.total_memory_mb }} Mo {% if proc.memory_threshold_mb > 0 %}
seuil: {{ proc.memory_threshold_mb }} Mo{% endif %}
{{ proc.total_cpu_percent }}% {{ proc.pids | join(sep=", ") }}
Alertes récentes
Voir tout
Aucune alerte récente.
{% endif %} {% endblock %} {% block scripts %} {% endblock %} ``` - [ ] **Step 5: Créer templates/alerts.html** Porter `../supervision/templates/alerts.html` : - `url_for('clear_alerts')` → `/alerts/clear` - `alert.timestamp[:19] | replace('T', ' ')` → `alert.timestamp_display` (formaté côté Rust dans la route) - `alerts | length` → `alerts | length` ```html {% extends "base.html" %} {% block title %}Supervision - Alertes{% endblock %} {% block content %}

Historique des alertes

{% if alerts %}
{% endif %}
{% if not alerts %}
Aucune alerte enregistrée.
{% else %}
{% for alert in alerts %} {% endfor %}
DateTypeMessageValeurSeuil
{{ alert.timestamp_display }} {% if alert.type == "process_down" %} Processus {% else %} Seuil {% endif %} {{ alert.message }} {{ alert.value }} {{ alert.threshold }}
{{ alerts | length }} alerte(s) — les 500 dernières sont conservées.
{% endif %} {% endblock %} ``` - [ ] **Step 6: Créer templates/settings.html** Porter `../supervision/templates/settings.html` en remplaçant tous les `url_for('...')` selon la table de portage au début du plan. Les `{% if smtp.use_tls %}checked{% endif %}` fonctionnent identiquement en Tera. Structure : copier le contenu de `../supervision/templates/settings.html` et remplacer : - `url_for('update_thresholds')` → `/settings/thresholds` - `url_for('update_monitoring')` → `/settings/monitoring` - `url_for('update_port')` → `/settings/port` - `url_for('update_smtp')` → `/settings/smtp` - `url_for('test_smtp')` → `/settings/smtp/test` - `url_for('update_processes')` → `/settings/processes` - `url_for('update_password')` → `/settings/password` - `url_for('update_amadea_log_path')` → `/settings/amadea-log-path` - `url_for('update_user_thresholds')` → `/settings/user-thresholds` - `{{ smtp.to_emails | join(', ') }}` → `{{ smtp.to_emails | join(sep=", ") }}` - `{{ smtp.password_masked }}` → `{{ smtp_password_masked }}` - `{% if smtp.password_masked %}{{ smtp.password_masked }}{% else %}Non défini{% endif %}` → `{% if smtp_password_masked %}{{ smtp_password_masked }}{% else %}Non défini{% endif %}` - [ ] **Step 7: Créer templates/users.html** Copier `../supervision/templates/users.html` tel quel — aucune modification nécessaire car il n'utilise que du JS pur avec fetch vers `/api/users`. - [ ] **Step 8: Compile check (avec les templates embarqués)** ```bash cargo check ``` Expected: pas d'erreurs. 
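Pour le portage de `alerts.html` ci-dessus, le filtre Jinja2 `alert.timestamp[:19] | replace('T', ' ')` est remplacé par un champ `timestamp_display` calculé côté Rust dans la route (Task 10). Esquisse autonome de cette transformation (`timestamp_display` présenté ici comme une fonction libre, nom hypothétique) :

```rust
// Esquisse : équivalent Rust du filtre Jinja2 `timestamp[:19] | replace('T', ' ')`.
// Tronque un timestamp ISO à la seconde et remplace le 'T' par une espace.
fn timestamp_display(ts: &str) -> String {
    ts.chars().take(19).collect::<String>().replace('T', " ")
}
```

La troncature à 19 caractères couvre exactement `YYYY-MM-DDTHH:MM:SS`, avec ou sans fraction de seconde en entrée.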
- [ ] **Step 9: Commit**

```bash
git add templates/ static/
git commit -m "feat: Tera templates + style.css"
```

---

## Task 13: main.rs - assembly + rate limiting + security headers

**Files:**
- Modify: `src/main.rs`

- [ ] **Step 1: Implement main.rs**

```rust
mod config;
mod monitor;
mod alerter;
mod user_monitor;
mod routes;

use std::sync::Arc;
use tokio::sync::Mutex as AsyncMutex;
use tower_sessions::{MemoryStore, SessionManagerLayer};
use tower_governor::{governor::GovernorConfigBuilder, GovernorLayer};
use tower_http::services::ServeDir;
use axum::{
    http::HeaderValue,
    routing::{get, post},
    Router,
};

use config::ConfigManager;
use monitor::SystemMonitor;
use alerter::Alerter;
use user_monitor::UserMonitor;
use routes::{
    AppState,
    alerts::{alerts_get, clear_alerts},
    auth::{login_get, login_post, logout},
    dashboard::{api_metrics, dashboard, toggle_monitoring},
    settings::{
        settings_get, test_smtp, update_amadea_log_path, update_monitoring,
        update_password, update_port, update_processes, update_smtp,
        update_thresholds, update_user_thresholds,
    },
    users::{api_users, api_users_weekly, users_get},
};

async fn run_server() {
    // Configuration (JSON file)
    let config_manager = Arc::new(AsyncMutex::new(ConfigManager::new()));

    // Services
    let alerter = Arc::new(Alerter);
    let monitor = Arc::new(SystemMonitor::new(
        config_manager.clone(),
        alerter.clone(),
    ));
    let user_monitor = Arc::new(UserMonitor::new(config_manager.clone()));

    // Start the monitoring loops, with one initial collection so the
    // dashboard has data on first load instead of waiting for the first tick.
    monitor.clone().start().await;
    let _ = monitor.collect().await;
    user_monitor.clone().start().await;

    // Shared application state
    let state = AppState::new(
        config_manager.clone(),
        monitor,
        alerter,
        user_monitor,
    );

    // Rate limiting for POST /login: burst of 10, one token replenished
    // every 6 seconds, i.e. roughly 10 requests per minute sustained.
    let governor_conf = Arc::new(
        GovernorConfigBuilder::default()
            .per_second(6)
            .burst_size(10)
            .finish()
            .unwrap(),
    );

    // Sessions (in-memory store: sessions are lost on restart)
    let session_store = MemoryStore::default();
    let session_layer = SessionManagerLayer::new(session_store)
        .with_secure(false)
        .with_name("supervision_session");

    // Only POST /login is rate-limited
    let login_routes = Router::new()
        .route("/login", post(login_post))
        .layer(GovernorLayer { config: governor_conf });

    let app = Router::new()
        .route("/login", get(login_get))
        .merge(login_routes)
        .route("/logout", get(logout))
        .route("/", get(dashboard))
        .route("/api/metrics", get(api_metrics))
        .route("/api/monitoring/toggle", post(toggle_monitoring))
        .route("/settings", get(settings_get))
        .route("/settings/thresholds", post(update_thresholds))
        .route("/settings/monitoring", post(update_monitoring))
        .route("/settings/smtp", post(update_smtp))
        .route("/settings/smtp/test", post(test_smtp))
        .route("/settings/processes", post(update_processes))
        .route("/settings/password", post(update_password))
        .route("/settings/port", post(update_port))
        .route("/settings/amadea-log-path", post(update_amadea_log_path))
        .route("/settings/user-thresholds", post(update_user_thresholds))
        .route("/alerts", get(alerts_get))
        .route("/alerts/clear", post(clear_alerts))
        .route("/users", get(users_get))
        .route("/api/users", get(api_users))
        .route("/api/users/activity/weekly", get(api_users_weekly))
        .nest_service("/static", ServeDir::new("static"))
        .layer(session_layer)
        .with_state(state.clone())
        // Security headers on every response
        .layer(axum::middleware::map_response(
            |mut response: axum::response::Response| async move {
                let headers = response.headers_mut();
                headers.insert("X-Content-Type-Options", HeaderValue::from_static("nosniff"));
                headers.insert("X-Frame-Options", HeaderValue::from_static("DENY"));
                headers.insert("X-XSS-Protection", HeaderValue::from_static("1; mode=block"));
                response
            },
        ));

    let port = {
        let cm = state.config_manager.lock().await;
        cm.config.port
    };
    let addr = format!("0.0.0.0:{}", port);
    tracing::info!("Supervision started on http://localhost:{}", port);

    let listener = tokio::net::TcpListener::bind(&addr).await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt::init();
    let args: Vec<String> = std::env::args().collect();

    #[cfg(windows)]
    {
        if args.get(1).map(|s| s.as_str()) == Some("install") {
            service::install_service();
            return;
        }
        if args.get(1).map(|s| s.as_str()) == Some("uninstall") {
            service::uninstall_service();
            return;
        }
        // Detect launch by the Service Control Manager
        if service::is_running_as_service() {
            service::run_service();
            return;
        }
    }

    // Console mode (development or manual launch)
    run_server().await;
}

#[cfg(windows)]
mod service {
    pub fn install_service() {
        // Implemented in Task 14
        println!("Installing the service...");
    }
    pub fn uninstall_service() {
        println!("Uninstalling the service...");
    }
    pub fn is_running_as_service() -> bool {
        false // Stub for Task 13, completed in Task 14
    }
    pub fn run_service() {}
}
```

- [ ] **Step 2: Test build**

```bash
cargo build
```

Expected: successful compilation; a few warnings are fine at this stage.

- [ ] **Step 3: Manual test (console mode)**

```bash
cargo run
```

Expected: `Supervision started on http://localhost:5000`. Open `http://localhost:5000/login` in a browser: the login page should be visible.

- [ ] **Step 4: Commit**

```bash
git add src/main.rs
git commit -m "feat: main.rs - full Axum server assembly"
```

---

## Task 14: Windows Service (Windows-only compilation)

**Files:**
- Modify: `src/main.rs` (module `service`)

Note: This task runs on Windows only. The `#[cfg(windows)]` code is simply not compiled on macOS/Linux.

- [ ] **Step 1: Replace the `service` module in main.rs**

Replace the stub module `#[cfg(windows)] mod service { ...
}` with:

```rust
#[cfg(windows)]
mod service {
    use std::ffi::OsString;
    use std::time::Duration;
    use windows_service::{
        define_windows_service,
        service::{
            ServiceAccess, ServiceControl, ServiceControlAccept, ServiceErrorControl,
            ServiceExitCode, ServiceInfo, ServiceStartType, ServiceState, ServiceStatus,
            ServiceType,
        },
        service_control_handler::{self, ServiceControlHandlerResult},
        service_dispatcher,
        service_manager::{ServiceManager, ServiceManagerAccess},
    };

    const SERVICE_NAME: &str = "Supervision";
    const SERVICE_DISPLAY: &str = "Supervision - System Monitoring";
    const SERVICE_DESCRIPTION: &str =
        "Monitors CPU, RAM, disks and processes. Web interface on http://localhost:5000";

    pub fn install_service() {
        let manager = ServiceManager::local_computer(
            None::<&str>,
            ServiceManagerAccess::CREATE_SERVICE,
        )
        .expect("Could not open the Service Manager (run as administrator)");

        let exe_path = std::env::current_exe().unwrap();
        let service_info = ServiceInfo {
            name: OsString::from(SERVICE_NAME),
            display_name: OsString::from(SERVICE_DISPLAY),
            service_type: ServiceType::OWN_PROCESS,
            start_type: ServiceStartType::AutoStart,
            error_control: ServiceErrorControl::Normal,
            executable_path: exe_path,
            launch_arguments: vec![],
            dependencies: vec![],
            account_name: None,
            account_password: None,
        };

        let service = manager
            .create_service(&service_info, ServiceAccess::CHANGE_CONFIG)
            .expect("Could not create the service");
        service
            .set_description(SERVICE_DESCRIPTION)
            .expect("Could not set the service description");

        println!("Service '{}' installed successfully.", SERVICE_NAME);
        println!("Start it with: sc start {}", SERVICE_NAME);
    }

    pub fn uninstall_service() {
        let manager = ServiceManager::local_computer(
            None::<&str>,
            ServiceManagerAccess::CONNECT,
        )
        .expect("Could not open the Service Manager");

        let service = manager
            .open_service(SERVICE_NAME, ServiceAccess::DELETE)
            .expect("Service not found");
        service.delete().expect("Could not delete the service");

        println!("Service '{}' uninstalled.", SERVICE_NAME);
    }

    pub fn is_running_as_service() -> bool {
        // Heuristic: services run in session 0, where the SESSIONNAME
        // environment variable set by interactive sessions is absent.
        std::env::var("SESSIONNAME").is_err()
    }

    define_windows_service!(ffi_service_main, service_main);

    fn service_main(_arguments: Vec<OsString>) {
        let rt = tokio::runtime::Runtime::new().unwrap();
        rt.block_on(async {
            let (shutdown_tx, shutdown_rx) = tokio::sync::oneshot::channel::<()>();

            let status_handle = service_control_handler::register(
                SERVICE_NAME,
                move |control| match control {
                    ServiceControl::Stop => {
                        let _ = shutdown_tx.send(());
                        ServiceControlHandlerResult::NoError
                    }
                    ServiceControl::Interrogate => ServiceControlHandlerResult::NoError,
                    _ => ServiceControlHandlerResult::NotImplemented,
                },
            )
            .unwrap();

            status_handle
                .set_service_status(ServiceStatus {
                    service_type: ServiceType::OWN_PROCESS,
                    current_state: ServiceState::Running,
                    controls_accepted: ServiceControlAccept::STOP,
                    exit_code: ServiceExitCode::Win32(0),
                    checkpoint: 0,
                    wait_hint: Duration::default(),
                    process_id: None,
                })
                .unwrap();

            tokio::select! {
                _ = crate::run_server() => {},
                _ = shutdown_rx => {},
            }

            status_handle
                .set_service_status(ServiceStatus {
                    service_type: ServiceType::OWN_PROCESS,
                    current_state: ServiceState::Stopped,
                    controls_accepted: ServiceControlAccept::empty(),
                    exit_code: ServiceExitCode::Win32(0),
                    checkpoint: 0,
                    wait_hint: Duration::default(),
                    process_id: None,
                })
                .unwrap();
        });
    }

    pub fn run_service() {
        service_dispatcher::start(SERVICE_NAME, ffi_service_main)
            .expect("Could not start the service dispatcher");
    }
}
```

- [ ] **Step 2: Release build on Windows**

```cmd
cargo build --release
```

Expected: `supervision.exe` in `target\release\`.

- [ ] **Step 3: Console-mode test on Windows**

```cmd
target\release\supervision.exe
```

Expected: server starts on port 5000. Open `http://localhost:5000`.
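Note on shutdown semantics: the `tokio::select!` in `service_main` stops the server by dropping the `run_server()` future, which cuts off in-flight HTTP requests. axum 0.7 can stop more gently with `with_graceful_shutdown`. A sketch only, assuming `run_server` were refactored to accept a shutdown receiver; the `serve_with_shutdown` helper and its `shutdown` parameter are illustrative, not part of this plan:

```rust
// Sketch: builds against this project's axum 0.7 / tokio dependencies.
// `listener` and `app` would be constructed exactly as in run_server().
async fn serve_with_shutdown(
    listener: tokio::net::TcpListener,
    app: axum::Router,
    shutdown: tokio::sync::oneshot::Receiver<()>, // hypothetical parameter
) {
    axum::serve(listener, app)
        // Stop accepting new connections and let in-flight requests finish
        // once the oneshot fires (e.g. from the SCM Stop handler).
        .with_graceful_shutdown(async move {
            let _ = shutdown.await;
        })
        .await
        .unwrap();
}
```

This keeps the two `set_service_status` calls unchanged; only the race between the server and the stop signal moves inside axum.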
- [ ] **Step 4: Test service installation**

```cmd
supervision.exe install
sc start Supervision
```

Expected: service running, interface reachable at `http://localhost:5000`.

- [ ] **Step 5: Final build and commit**

```bash
git add src/main.rs
git commit -m "feat: Windows Service integration (install/uninstall/run)"
```

---

## Deployment notes

1. Build once on Windows: `cargo build --release`
2. Grab `target\release\supervision.exe`
3. Copy the exe to a dedicated folder on the target server (e.g. `C:\Supervision\`)
4. Open an administrator terminal in that folder:
   ```cmd
   supervision.exe install
   sc start Supervision
   ```
5. Browse to `http://localhost:5000` (login: `admin` / `admin`)
6. **Change the password immediately** under Configuration → Administrator password

The `data\` folder is created automatically next to the exe on first launch.
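Day-to-day service management uses the standard Windows `sc` tool plus the executable's own `uninstall` subcommand. The commands below assume the service name `Supervision` and an administrator terminal:

```cmd
:: Show the current state (RUNNING / STOPPED)
sc query Supervision

:: Stop / restart via the Service Control Manager
sc stop Supervision
sc start Supervision

:: Remove the service (stop it first)
supervision.exe uninstall
```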