diff --git a/.gitignore b/.gitignore
index fccdf46..c3395e7 100644
--- a/.gitignore
+++ b/.gitignore
@@ -15,3 +15,4 @@ docs/
*.spec
build/
dist/
+docs/
diff --git a/docs/superpowers/plans/2026-04-02-users-tab.md b/docs/superpowers/plans/2026-04-02-users-tab.md
deleted file mode 100644
index c31fdd4..0000000
--- a/docs/superpowers/plans/2026-04-02-users-tab.md
+++ /dev/null
@@ -1,1002 +0,0 @@
-# Onglet Utilisateurs Amadea — Implementation Plan
-
-> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
-
-**Goal:** Ajouter un onglet "Utilisateurs" affichant en temps réel les utilisateurs connectés à Amadea Web 8 x64 (statut, dernière action, graphique d'activité horaire), avec configuration du chemin logs et des seuils de statut.
-
-**Architecture:** Un module `user_monitor.py` tourne en background thread (miroir de `SystemMonitor`) — il parse les fichiers `awevents_YY-MM-DD_*.{txt,log}` et `isoft_YY-MM-DD_*.{txt,log}` du jour, calcule les statuts utilisateurs et expose un cache thread-safe. L'API `/api/users` retourne ce cache directement. Un second endpoint `/api/users/activity/weekly` lit les fichiers des 7 derniers jours à la demande.
-
-**Tech Stack:** Python 3.10+, Flask 3.1, Bootstrap 5.3, Bootstrap Icons, barres CSS pures (aucune librairie graphique), pytest
-
----
-
-## Fichiers créés / modifiés
-
-| Fichier | Action | Rôle |
-|---|---|---|
-| `user_monitor.py` | Créer | Classe `UserMonitor` : thread, parsing, cache |
-| `tests/__init__.py` | Créer | Package tests |
-| `tests/test_user_monitor.py` | Créer | Tests unitaires du parsing et du calcul de statut |
-| `config_manager.py` | Modifier | Ajout des clés `amadea_log_path` + `user_status_thresholds` |
-| `app.py` | Modifier | Instanciation `UserMonitor`, 4 nouvelles routes |
-| `templates/base.html` | Modifier | Lien "Utilisateurs" dans la navbar |
-| `templates/settings.html` | Modifier | 2 nouveaux blocs de configuration |
-| `templates/users.html` | Créer | Tableau utilisateurs + graphique CSS + JS auto-refresh |
-| `requirements.txt` | Modifier | Ajout de `pytest` |
-
----
-
-## Task 1 : Étendre ConfigManager avec les nouvelles clés
-
-**Files:**
-- Modify: `config_manager.py`
-- Modify: `requirements.txt`
-
-- [ ] **Étape 1 : Ajouter les nouvelles clés dans `get_default_config()`**
-
-Dans `config_manager.py`, dans la fonction `get_default_config()`, ajouter après la clé `"smtp"` :
-
-```python
- "amadea_log_path": r"C:\ProgramData\ISoft\Amadea Web 8 x64\data\logs",
- "user_status_thresholds": {
- "active_minutes": 5,
- "inactive_minutes": 30,
- },
-```
-
-La méthode `_load()` existante fusionne automatiquement les nouvelles clés avec une config déjà sauvegardée — aucune migration manuelle n'est nécessaire.
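Le comportement de fusion peut se schématiser ainsi (esquisse hypothétique : le code réel de `_load()` peut différer dans les détails, l'idée est que toute clé absente de la config sauvegardée récupère sa valeur par défaut) :

```python
def merge_defaults(defaults, saved):
    # Fusion recursive supposee : les valeurs sauvegardees priment,
    # les cles manquantes viennent des valeurs par defaut
    merged = dict(saved)
    for key, value in defaults.items():
        if key not in merged:
            merged[key] = value
        elif isinstance(value, dict) and isinstance(merged[key], dict):
            merged[key] = merge_defaults(value, merged[key])
    return merged

defaults = {
    "smtp": {"server": "", "port": 587},
    "amadea_log_path": r"C:\ProgramData\ISoft\Amadea Web 8 x64\data\logs",
    "user_status_thresholds": {"active_minutes": 5, "inactive_minutes": 30},
}
saved = {"smtp": {"server": "mail.interne", "port": 25}}
merged = merge_defaults(defaults, saved)
print(merged["user_status_thresholds"])  # {'active_minutes': 5, 'inactive_minutes': 30}
print(merged["smtp"]["port"])            # 25 : les valeurs sauvegardees priment
```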
-
-- [ ] **Étape 2 : Ajouter pytest à requirements.txt**
-
-Ajouter à la fin de `requirements.txt` :
-```
-pytest==8.3.*
-```
-
-- [ ] **Étape 3 : Commit**
-
-```bash
-git add config_manager.py requirements.txt
-git commit -m "feat: add amadea_log_path and user_status_thresholds config keys"
-```
-
----
-
-## Task 2 : Créer `user_monitor.py`
-
-**Files:**
-- Create: `user_monitor.py`
-- Create: `tests/__init__.py`
-- Create: `tests/test_user_monitor.py`
-
-- [ ] **Étape 1 : Créer `user_monitor.py`**
-
-```python
-"""Suivi des utilisateurs connectes a Amadea via parsing des logs."""
-
-import glob
-import os
-import re
-import threading
-import time
-from datetime import datetime, timedelta
-
-# Regex compilees au niveau module (performance)
-_AWEVENTS_RE = re.compile(
- r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+;[^;]*;;;;"login=([^,]+),action=([^,]+),Label=(.+?)"?\s*$'
-)
-_ISOFT_LOGIN_RE = re.compile(
- r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*method=OpenUserSession.*login=([A-Za-z0-9_]+)'
-)
-
-
-def _log_files_for_date(log_path, prefix, date_str):
-    """Retourne les fichiers de logs pour un prefixe et une date donnes, tries par index."""
-    pattern = os.path.join(log_path, f"{prefix}_{date_str}_*")
-    files = [f for f in glob.glob(pattern) if not f.endswith('.zip')]
-
-    def sort_key(f):
-        # Nom sans index numerique (inattendu) : classe en premier au lieu de lever AttributeError
-        m = re.search(r'_(\d+)\.[^.]+$', f)
-        return int(m.group(1)) if m else -1
-
-    return sorted(files, key=sort_key)
-
-
-class UserMonitor:
- def __init__(self, config_manager):
- self.config = config_manager
- self._cache = {"users": {}, "hourly": [], "error": None, "no_files": False}
- self._lock = threading.Lock()
- self._running = False
- self._thread = None
-
- @property
- def data(self):
- with self._lock:
- return dict(self._cache)
-
- def start(self):
- if self._running:
- return
- self._running = True
- self._thread = threading.Thread(target=self._loop, daemon=True)
- self._thread.start()
-
- def stop(self):
- self._running = False
-
- def _loop(self):
- last_parse = 0
- while self._running:
- interval = self.config.get("check_interval_minutes", 1) * 60
- if time.time() - last_parse >= interval:
- try:
- self.parse_logs()
- except Exception as e:
- print(f"[UserMonitor] Erreur: {e}")
- last_parse = time.time()
- time.sleep(5)
-
- def parse_logs(self):
- log_path = self.config.get(
- "amadea_log_path",
- r"C:\ProgramData\ISoft\Amadea Web 8 x64\data\logs"
- )
- thresholds = self.config.get(
- "user_status_thresholds",
- {"active_minutes": 5, "inactive_minutes": 30}
- )
-
- if not os.path.isdir(log_path):
- with self._lock:
- self._cache = {
- "error": f"Dossier de logs introuvable : {log_path}",
- "users": {}, "hourly": [], "no_files": False,
- }
- return
-
- date_str = datetime.now().strftime("%y-%m-%d")
- awevents_files = _log_files_for_date(log_path, "awevents", date_str)
-
- if not awevents_files:
- with self._lock:
- self._cache = {"no_files": True, "error": None, "users": {}, "hourly": []}
- return
-
- now = datetime.now()
- cutoff_24h = now - timedelta(hours=24)
- users = {}
- hourly = {h: set() for h in range(24)}
-
- for filepath in awevents_files:
- try:
- with open(filepath, "r", encoding="utf-8", errors="ignore") as f:
- for line in f:
- self._parse_awevents_line(line, users, cutoff_24h, hourly)
- except (PermissionError, OSError):
- continue
-
- isoft_files = _log_files_for_date(log_path, "isoft", date_str)
- for filepath in isoft_files:
- try:
- with open(filepath, "r", encoding="utf-8", errors="ignore") as f:
- for line in f:
- self._parse_isoft_line(line, users)
- except (PermissionError, OSError):
- continue
-
- self._compute_statuses(users, thresholds, now)
-
- status_order = {"actif": 0, "inactif": 1, "deconnecte": 2}
- sorted_users = dict(
- sorted(users.items(), key=lambda x: status_order.get(x[1]["status"], 3))
- )
- hourly_data = [{"hour": h, "count": len(logins)} for h, logins in sorted(hourly.items())]
-
- with self._lock:
- self._cache = {
- "error": None,
- "no_files": False,
- "users": sorted_users,
- "hourly": hourly_data,
- }
-
- def _parse_awevents_line(self, line, users, cutoff_24h, hourly):
- m = _AWEVENTS_RE.match(line)
- if not m:
- return
- ts_str, login, action, label = m.group(1), m.group(2), m.group(3), m.group(4)
- try:
- ts = datetime.strptime(ts_str, "%Y-%m-%d %H:%M:%S")
- except ValueError:
- return
-
- login = login.strip()
- if not login:
- return
-
- is_logout = "se deconnecter" in label.lower()
-
- if login not in users:
- users[login] = {
- "login": login,
- "last_action_time": ts,
- "last_action_label": label[:60],
- "action_count_24h": 0,
- "status": "deconnecte",
- "explicit_logout": is_logout,
- "logout_time": ts if is_logout else None,
- "connected_since": ts,
- }
-        else:
-            user = users[login]
-            if ts > user["last_action_time"]:
-                user["last_action_time"] = ts
-                user["last_action_label"] = label[:60]
-                if is_logout:
-                    user["explicit_logout"] = True
-                    user["logout_time"] = ts
-                elif user["explicit_logout"]:
-                    # Activite plus recente apres deconnexion explicite = reconnexion
-                    user["explicit_logout"] = False
-                    user["logout_time"] = None
-            elif user["explicit_logout"] and user.get("logout_time") and ts > user["logout_time"]:
-                # Ligne lue hors ordre mais posterieure au logout = reconnexion aussi
-                user["explicit_logout"] = False
-                user["logout_time"] = None
-
- if ts >= cutoff_24h:
- users[login]["action_count_24h"] += 1
-
- hourly[ts.hour].add(login)
-
- def _parse_isoft_line(self, line, users):
- m = _ISOFT_LOGIN_RE.match(line)
- if not m:
- return
- ts_str, login = m.group(1), m.group(2)
- try:
- ts = datetime.strptime(ts_str, "%Y-%m-%d %H:%M:%S")
- except ValueError:
- return
-        if login in users:
-            # Un OpenUserSession anterieur a la premiere action vue donne la vraie heure de connexion
-            current = users[login].get("connected_since")
-            if current is None or ts < current:
-                users[login]["connected_since"] = ts
-
- def _compute_statuses(self, users, thresholds, now):
- active_min = thresholds.get("active_minutes", 5)
- inactive_min = thresholds.get("inactive_minutes", 30)
- for user in users.values():
- delta = (now - user["last_action_time"]).total_seconds() / 60
- if user.get("explicit_logout"):
- user["status"] = "deconnecte"
- elif delta > inactive_min:
- user["status"] = "deconnecte"
- elif delta > active_min:
- user["status"] = "inactif"
- else:
- user["status"] = "actif"
-
- def get_weekly_activity(self):
-        """Retourne, par jour (7 derniers jours), le max d'utilisateurs distincts vus sur une meme heure."""
- log_path = self.config.get(
- "amadea_log_path",
- r"C:\ProgramData\ISoft\Amadea Web 8 x64\data\logs"
- )
- if not os.path.isdir(log_path):
- return []
-
- result = []
- today = datetime.now().date()
- for delta in range(6, -1, -1):
- day = today - timedelta(days=delta)
- date_str = day.strftime("%y-%m-%d")
- files = _log_files_for_date(log_path, "awevents", date_str)
- if not files:
- result.append({"date": day.isoformat(), "count": None})
- continue
- hourly = {h: set() for h in range(24)}
- for filepath in files:
- try:
- with open(filepath, "r", encoding="utf-8", errors="ignore") as f:
- for line in f:
- m = re.match(
- r'^(\d{4}-\d{2}-\d{2} (\d{2}):\d{2}:\d{2}).*login=([^,]+),',
- line
- )
- if m:
- hour = int(m.group(2))
- login = m.group(3).strip()
- if login:
- hourly[hour].add(login)
- except (PermissionError, OSError):
- continue
- max_concurrent = max((len(v) for v in hourly.values()), default=0)
- result.append({"date": day.isoformat(), "count": max_concurrent})
- return result
-```
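Pour valider le format attendu hors de l'application, la regex `_AWEVENTS_RE` peut être testée isolément sur une ligne d'exemple (la même que celle utilisée dans les tests du Task 2) :

```python
import re

_AWEVENTS_RE = re.compile(
    r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+;[^;]*;;;;"login=([^,]+),action=([^,]+),Label=(.+?)"?\s*$'
)

line = '2026-03-30 10:34:24.034;INFO ;;;;"login=JENKINS,action=SelectionChange,Label=BAO_Main/MenuPrincipal"\n'
m = _AWEVENTS_RE.match(line)
print(m.group(1))  # 2026-03-30 10:34:24
print(m.group(2))  # JENKINS
print(m.group(3))  # SelectionChange
print(m.group(4))  # BAO_Main/MenuPrincipal (le quantificateur paresseux exclut le guillemet final)
```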
-
-- [ ] **Étape 2 : Créer `tests/__init__.py`** (fichier vide)
-
-```bash
-mkdir -p tests && touch tests/__init__.py
-```
-
-- [ ] **Étape 3 : Créer `tests/test_user_monitor.py`**
-
-```python
-"""Tests unitaires pour user_monitor.py"""
-
-from datetime import datetime, timedelta
-import pytest
-from user_monitor import UserMonitor
-
-
-class FakeConfig:
- def get(self, key, default=None):
- return {
- "amadea_log_path": "/nonexistent",
- "user_status_thresholds": {"active_minutes": 5, "inactive_minutes": 30},
- "check_interval_minutes": 1,
- }.get(key, default)
-
-
-def make_monitor():
- return UserMonitor(FakeConfig())
-
-
-# --- Parsing awevents ---
-
-def test_parse_awevents_line_basic():
- monitor = make_monitor()
- users, hourly = {}, {h: set() for h in range(24)}
- cutoff = datetime(2026, 3, 30, 0, 0, 0)
- line = '2026-03-30 10:34:24.034;INFO ;;;;"login=JENKINS,action=SelectionChange,Label=BAO_Main/MenuPrincipal"\n'
- monitor._parse_awevents_line(line, users, cutoff, hourly)
- assert "JENKINS" in users
- assert users["JENKINS"]["last_action_time"] == datetime(2026, 3, 30, 10, 34, 24)
- assert users["JENKINS"]["action_count_24h"] == 1
- assert users["JENKINS"]["explicit_logout"] is False
- assert hourly[10] == {"JENKINS"}
-
-
-def test_parse_awevents_line_explicit_logout():
- monitor = make_monitor()
- users, hourly = {}, {h: set() for h in range(24)}
- cutoff = datetime(2026, 3, 30, 0, 0, 0)
- line = '2026-03-30 11:34:00.500;INFO ;;;;"login=MB,action=Action,Label=Main/se deconnecter"\n'
- monitor._parse_awevents_line(line, users, cutoff, hourly)
- assert users["MB"]["explicit_logout"] is True
- assert users["MB"]["logout_time"] == datetime(2026, 3, 30, 11, 34, 0)
-
-
-def test_parse_awevents_line_reconnect_after_logout():
- monitor = make_monitor()
- users, hourly = {}, {h: set() for h in range(24)}
- cutoff = datetime(2026, 3, 30, 0, 0, 0)
- logout_line = '2026-03-30 11:34:00.500;INFO ;;;;"login=MB,action=Action,Label=Main/se deconnecter"\n'
- reconnect_line = '2026-03-30 11:34:19.594;INFO ;;;;"login=MB,action=Action,Label=Main/OuvrirCTRL 1/Table"\n'
- monitor._parse_awevents_line(logout_line, users, cutoff, hourly)
- assert users["MB"]["explicit_logout"] is True
- monitor._parse_awevents_line(reconnect_line, users, cutoff, hourly)
- assert users["MB"]["explicit_logout"] is False
-
-
-def test_parse_awevents_line_invalid_ignored():
- monitor = make_monitor()
- users, hourly = {}, {h: set() for h in range(24)}
- cutoff = datetime(2026, 3, 30, 0, 0, 0)
- monitor._parse_awevents_line("ligne invalide sans format attendu\n", users, cutoff, hourly)
- assert users == {}
-
-
-def test_parse_awevents_action_count_outside_24h():
- monitor = make_monitor()
- users, hourly = {}, {h: set() for h in range(24)}
- cutoff = datetime(2026, 3, 30, 12, 0, 0)
- old_line = '2026-03-30 08:00:00.000;INFO ;;;;"login=JENKINS,action=Click,Label=Main/Page"\n'
- monitor._parse_awevents_line(old_line, users, cutoff, hourly)
- assert users["JENKINS"]["action_count_24h"] == 0
-
-
-# --- Calcul de statut ---
-
-def test_compute_statuses_actif():
- monitor = make_monitor()
- now = datetime(2026, 3, 30, 12, 0, 0)
- thresholds = {"active_minutes": 5, "inactive_minutes": 30}
- users = {"JENKINS": {"last_action_time": now - timedelta(minutes=2), "explicit_logout": False}}
- monitor._compute_statuses(users, thresholds, now)
- assert users["JENKINS"]["status"] == "actif"
-
-
-def test_compute_statuses_inactif():
- monitor = make_monitor()
- now = datetime(2026, 3, 30, 12, 0, 0)
- thresholds = {"active_minutes": 5, "inactive_minutes": 30}
- users = {"MB": {"last_action_time": now - timedelta(minutes=15), "explicit_logout": False}}
- monitor._compute_statuses(users, thresholds, now)
- assert users["MB"]["status"] == "inactif"
-
-
-def test_compute_statuses_deconnecte_timeout():
- monitor = make_monitor()
- now = datetime(2026, 3, 30, 12, 0, 0)
- thresholds = {"active_minutes": 5, "inactive_minutes": 30}
- users = {"KO": {"last_action_time": now - timedelta(minutes=45), "explicit_logout": False}}
- monitor._compute_statuses(users, thresholds, now)
- assert users["KO"]["status"] == "deconnecte"
-
-
-def test_compute_statuses_deconnecte_explicit():
- monitor = make_monitor()
- now = datetime(2026, 3, 30, 12, 0, 0)
- thresholds = {"active_minutes": 5, "inactive_minutes": 30}
- users = {"MB": {"last_action_time": now - timedelta(minutes=2), "explicit_logout": True}}
- monitor._compute_statuses(users, thresholds, now)
- assert users["MB"]["status"] == "deconnecte"
-```
-
-- [ ] **Étape 4 : Lancer les tests**
-
-```bash
-.venv/bin/pip install pytest -q
-.venv/bin/pytest tests/test_user_monitor.py -v
-```
-
-Résultat attendu : 9 tests PASSED.
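Point à noter sur les bornes : la comparaison de `_compute_statuses` est stricte (`>`), donc une dernière action datant d'exactement `active_minutes` reste « actif ». Esquisse autonome reprenant cette logique pour un seul utilisateur :

```python
from datetime import datetime, timedelta

def statut(last_action, now, active_min=5, inactive_min=30, explicit_logout=False):
    # Miroir de la logique de _compute_statuses pour un utilisateur
    delta = (now - last_action).total_seconds() / 60
    if explicit_logout or delta > inactive_min:
        return "deconnecte"
    if delta > active_min:
        return "inactif"
    return "actif"

now = datetime(2026, 3, 30, 12, 0, 0)
print(statut(now - timedelta(minutes=5), now))   # actif (5 > 5 est faux)
print(statut(now - timedelta(minutes=30), now))  # inactif (30 > 30 est faux)
print(statut(now - timedelta(minutes=31), now))  # deconnecte
```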
-
-- [ ] **Étape 5 : Commit**
-
-```bash
-git add user_monitor.py tests/__init__.py tests/test_user_monitor.py requirements.txt
-git commit -m "feat: add UserMonitor with Amadea log parsing"
-```
-
----
-
-## Task 3 : Étendre `app.py` — routes et instanciation
-
-**Files:**
-- Modify: `app.py`
-
-- [ ] **Étape 1 : Importer et instancier UserMonitor**
-
-Dans `app.py`, après `from alerter import EmailAlerter`, ajouter :
-
-```python
-from user_monitor import UserMonitor
-```
-
-Après `monitor = SystemMonitor(config, alerter)`, ajouter :
-
-```python
-user_monitor = UserMonitor(config)
-```
-
-- [ ] **Étape 2 : Démarrer UserMonitor dans `main()`**
-
-Dans `main()`, après `monitor.start()`, ajouter :
-
-```python
- user_monitor.start()
- user_monitor.parse_logs()
-```
-
-- [ ] **Étape 3 : Ajouter les 4 nouvelles routes**
-
-Ajouter avant la fonction `check_port_available` :
-
-```python
-@app.route("/users")
-@login_required
-def users():
- return render_template("users.html")
-
-
-@app.route("/api/users")
-@login_required
-def api_users():
- cache = user_monitor.data
- if cache.get("error"):
- return jsonify({"error": cache["error"]})
- if cache.get("no_files"):
- return jsonify({"no_files": True})
- users_list = [
- {
- "login": u["login"],
- "status": u["status"],
- "last_action_time": u["last_action_time"].strftime("%H:%M:%S") if u.get("last_action_time") else None,
- "last_action_label": u.get("last_action_label", ""),
- "action_count_24h": u.get("action_count_24h", 0),
- "connected_since": u["connected_since"].strftime("%H:%M") if u.get("connected_since") else None,
- "explicit_logout": u.get("explicit_logout", False),
- }
- for u in cache.get("users", {}).values()
- ]
- return jsonify({"users": users_list, "hourly": cache.get("hourly", [])})
-
-
-@app.route("/api/users/activity/weekly")
-@login_required
-def api_users_weekly():
- return jsonify({"weekly": user_monitor.get_weekly_activity()})
-
-
-@app.route("/settings/amadea-log-path", methods=["POST"])
-@login_required
-def update_amadea_log_path():
- path = request.form.get("amadea_log_path", "").strip()
- if not path:
- flash("Le chemin ne peut pas etre vide.", "danger")
- return redirect(url_for("settings"))
- config.set("amadea_log_path", path)
- flash("Chemin des logs Amadea mis a jour.", "success")
- return redirect(url_for("settings"))
-
-
-@app.route("/settings/user-thresholds", methods=["POST"])
-@login_required
-def update_user_thresholds():
- try:
- active = int(request.form["active_minutes"])
- inactive = int(request.form["inactive_minutes"])
- if active < 1 or inactive < 1:
- flash("Les seuils doivent etre d'au moins 1 minute.", "danger")
- return redirect(url_for("settings"))
- if active >= inactive:
- flash("Le seuil 'actif' doit etre inferieur au seuil 'inactif'.", "danger")
- return redirect(url_for("settings"))
- config.set("user_status_thresholds", {"active_minutes": active, "inactive_minutes": inactive})
- flash("Seuils utilisateurs mis a jour.", "success")
- except (ValueError, KeyError) as e:
- flash(f"Erreur: {e}", "danger")
- return redirect(url_for("settings"))
-```
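Le contrat JSON de `/api/users` peut se résumer ainsi (valeurs d'exemple fictives, forme déduite de la sérialisation ci-dessus) :

```python
import json
from datetime import datetime

# Forme serialisee d'un utilisateur, telle que produite par api_users()
user = {
    "login": "JENKINS",
    "status": "actif",
    "last_action_time": datetime(2026, 3, 30, 10, 34, 24).strftime("%H:%M:%S"),
    "last_action_label": "BAO_Main/MenuPrincipal",
    "action_count_24h": 42,
    "connected_since": datetime(2026, 3, 30, 8, 5, 0).strftime("%H:%M"),
    "explicit_logout": False,
}
payload = json.dumps({"users": [user], "hourly": [{"hour": 10, "count": 3}]})
print(payload)
```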
-
-- [ ] **Étape 4 : Vérifier le démarrage**
-
-```bash
-.venv/bin/python app.py
-```
-
-Résultat attendu : `[Supervision] Demarrage sur le port 5000` sans traceback. Ctrl+C pour arrêter.
-
-- [ ] **Étape 5 : Commit**
-
-```bash
-git add app.py
-git commit -m "feat: wire UserMonitor into app, add /users and /api/users routes"
-```
-
----
-
-## Task 4 : Mettre à jour la navigation dans `base.html`
-
-**Files:**
-- Modify: `templates/base.html`
-
-- [ ] **Étape 1 : Ajouter le lien "Utilisateurs"**
-
-Dans `templates/base.html`, après le `<li>` pour "Alertes" (après la balise `</li>` qui ferme le lien Alertes), ajouter :
-
-```html
-<li class="nav-item">
-  <a class="nav-link {% if request.endpoint == 'users' %}active{% endif %}" href="{{ url_for('users') }}">
-    <i class="bi bi-people"></i> Utilisateurs
-  </a>
-</li>
-```
-
-- [ ] **Étape 2 : Vérifier visuellement**
-
-Ouvrir http://localhost:5000 — la navbar doit afficher : Tableau de bord | Configuration | Alertes | Utilisateurs.
-
-- [ ] **Étape 3 : Commit**
-
-```bash
-git add templates/base.html
-git commit -m "feat: add Utilisateurs link to navbar"
-```
-
----
-
-## Task 5 : Mettre à jour `settings.html` — 2 nouveaux blocs
-
-**Files:**
-- Modify: `templates/settings.html`
-
-- [ ] **Étape 1 : Ajouter les 2 blocs avant `{% endblock %}`**
-
-Dans `templates/settings.html`, juste avant la balise `{% endblock %}` finale, ajouter :
-
-```html
-<div class="card mt-4">
-  <div class="card-header"><i class="bi bi-folder2-open"></i> Logs Amadea</div>
-  <div class="card-body">
-    <form method="post" action="{{ url_for('update_amadea_log_path') }}">
-      <label class="form-label" for="amadea_log_path">Chemin du dossier de logs</label>
-      <input type="text" class="form-control" id="amadea_log_path" name="amadea_log_path"
-             value="{{ config.amadea_log_path }}">
-      <button type="submit" class="btn btn-primary mt-3">Enregistrer</button>
-    </form>
-  </div>
-</div>
-
-<div class="card mt-4">
-  <div class="card-header"><i class="bi bi-people"></i> Seuils de statut utilisateurs</div>
-  <div class="card-body">
-    <form method="post" action="{{ url_for('update_user_thresholds') }}">
-      <div class="row">
-        <div class="col">
-          <label class="form-label" for="active_minutes">Actif si derniere action &lt; (minutes)</label>
-          <input type="number" min="1" class="form-control" id="active_minutes" name="active_minutes"
-                 value="{{ config.user_status_thresholds.active_minutes }}">
-        </div>
-        <div class="col">
-          <label class="form-label" for="inactive_minutes">Deconnecte si derniere action &gt; (minutes)</label>
-          <input type="number" min="1" class="form-control" id="inactive_minutes" name="inactive_minutes"
-                 value="{{ config.user_status_thresholds.inactive_minutes }}">
-        </div>
-      </div>
-      <button type="submit" class="btn btn-primary mt-3">Enregistrer</button>
-    </form>
-  </div>
-</div>
-```
-
-- [ ] **Étape 2 : Vérifier visuellement**
-
-Ouvrir http://localhost:5000/settings. Les deux nouveaux blocs doivent apparaître en bas avec les valeurs par défaut pré-remplies. Tester la sauvegarde de chaque formulaire.
-
-- [ ] **Étape 3 : Commit**
-
-```bash
-git add templates/settings.html
-git commit -m "feat: add Amadea log path and user status thresholds to settings"
-```
-
----
-
-## Task 6 : Créer `templates/users.html`
-
-**Files:**
-- Create: `templates/users.html`
-
-- [ ] **Étape 1 : Créer le template**
-
-Le JavaScript utilise exclusivement `textContent` et `createElement` pour insérer des données dans le DOM — aucune interpolation de données dans des chaînes HTML.
-
-```html
-{% extends "base.html" %}
-{% block title %}Supervision - Utilisateurs{% endblock %}
-
-{% block content %}
-<div class="d-flex justify-content-between align-items-center mb-3">
-  <h2><i class="bi bi-people"></i> Utilisateurs Amadea</h2>
-  <select id="period-select" class="form-select w-auto">
-    <option value="today" selected>Aujourd'hui</option>
-    <option value="weekly">7 derniers jours</option>
-  </select>
-</div>
-
-<div id="error-alert" class="alert alert-warning d-none"></div>
-<div id="no-files-alert" class="alert alert-info d-none">
-  Aucun fichier awevents pour aujourd'hui dans le dossier configure.
-</div>
-<div id="archived-alert" class="alert alert-secondary d-none">
-  Les donnees historiques des jours precedents ne sont pas disponibles (fichiers archives).
-</div>
-
-<div id="activity-chart" class="d-flex align-items-end gap-1 mb-4" style="height: 120px;"></div>
-
-<table class="table table-striped align-middle" id="users-table">
-  <thead>
-    <tr>
-      <th>Utilisateur</th>
-      <th>Statut</th>
-      <th>Derniere action</th>
-      <th>Actions (24h)</th>
-      <th>Depuis</th>
-    </tr>
-  </thead>
-  <tbody></tbody>
-</table>
-{% endblock %}
-
-{% block scripts %}
-<script>
-function renderUsers(users) {
-  const tbody = document.querySelector("#users-table tbody");
-  tbody.replaceChildren();
-  for (const u of users) {
-    const tr = document.createElement("tr");
-    const values = [
-      u.login,
-      u.status,
-      (u.last_action_time || "") + " " + (u.last_action_label || ""),
-      String(u.action_count_24h),
-      u.connected_since || "-",
-    ];
-    for (const v of values) {
-      const td = document.createElement("td");
-      td.textContent = v; // textContent uniquement : aucune interpolation HTML
-      tr.appendChild(td);
-    }
-    tbody.appendChild(tr);
-  }
-}
-
-function renderBars(points, labelKey) {
-  const chart = document.getElementById("activity-chart");
-  chart.replaceChildren();
-  const max = Math.max(1, ...points.map((p) => p.count || 0));
-  let hasNull = false;
-  for (const p of points) {
-    if (p.count === null) hasNull = true;
-    const bar = document.createElement("div");
-    bar.className = "bg-primary flex-fill";
-    bar.style.height = ((p.count || 0) / max) * 100 + "%";
-    bar.title = p[labelKey] + " : " + (p.count === null ? "n/d" : p.count);
-    chart.appendChild(bar);
-  }
-  document.getElementById("archived-alert").classList.toggle("d-none", !hasNull);
-}
-
-async function refresh() {
-  const period = document.getElementById("period-select").value;
-  if (period === "weekly") {
-    const data = await (await fetch("/api/users/activity/weekly")).json();
-    renderBars(data.weekly, "date");
-    return;
-  }
-  const data = await (await fetch("/api/users")).json();
-  const err = document.getElementById("error-alert");
-  err.classList.toggle("d-none", !data.error);
-  if (data.error) { err.textContent = data.error; return; }
-  document.getElementById("no-files-alert").classList.toggle("d-none", !data.no_files);
-  if (data.no_files) return;
-  renderUsers(data.users);
-  renderBars(data.hourly, "hour");
-}
-
-document.getElementById("period-select").addEventListener("change", refresh);
-refresh();
-setInterval(refresh, 30000); // auto-refresh toutes les 30 s
-</script>
-{% endblock %}
-```
-
-- [ ] **Étape 2 : Vérifier visuellement**
-
-Lancer l'app et ouvrir http://localhost:5000/users.
-
-- Sur macOS (dossier logs Windows inexistant) : alerte orange "Dossier de logs introuvable"
-- Sur Windows avec logs présents : tableau + graphique barres bleues
-- Tester le sélecteur "7 derniers jours"
-- Vérifier auto-refresh via la console réseau (requête toutes les 30s)
-
-- [ ] **Étape 3 : Commit final**
-
-```bash
-git add templates/users.html
-git commit -m "feat: add users.html template with table and activity chart"
-```
-
----
-
-## Vérification finale
-
-- [ ] `pytest tests/test_user_monitor.py -v` → tous verts
-- [ ] Routes fonctionnelles : `/users`, `/api/users`, `/api/users/activity/weekly`
-- [ ] Configuration → sauvegarder un chemin de logs → vérifier `data/config.json`
-- [ ] Configuration → seuils invalides (actif >= inactif) → message d'erreur affiché
-- [ ] Onglets existants (Tableau de bord, Configuration, Alertes) → comportement inchangé
-
-```bash
-git log --oneline -6
-```
-
-Résultat attendu :
-```
-feat: add users.html template with table and activity chart
-feat: add Amadea log path and user status thresholds to settings
-feat: add Utilisateurs link to navbar
-feat: wire UserMonitor into app, add /users and /api/users routes
-feat: add UserMonitor with Amadea log parsing
-feat: add amadea_log_path and user_status_thresholds config keys
-```
diff --git a/docs/superpowers/plans/2026-04-07-supervision-rust.md b/docs/superpowers/plans/2026-04-07-supervision-rust.md
deleted file mode 100644
index 62502d6..0000000
--- a/docs/superpowers/plans/2026-04-07-supervision-rust.md
+++ /dev/null
@@ -1,3276 +0,0 @@
-# Supervision-RS Implementation Plan
-
-> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
-
-**Goal:** Créer `SuperVisionRust/` — un exécutable Windows autonome en Rust avec parité complète avec l'app Python (dashboard web, alertes email, service Windows).
-
-**Architecture:** Axum 0.7 (HTTP) + Tera (templates) + sysinfo (métriques) + lettre (email) + windows-service (service Windows). AppState partagé via `Arc<RwLock<AppState>>` entre le thread de monitoring (tokio task) et les routes HTTP. Flash messages stockés en session. Templates Jinja2 portés en Tera (syntaxe quasi-identique).
-
-**Tech Stack:** Rust 1.78+, axum 0.7, tokio 1, tera 1, sysinfo 0.32, lettre 0.11, tower-sessions 0.12, tower_governor 0.4, bcrypt 0.15, serde_json 1, windows-service 0.7 (Windows only)
-
----
-
-## Structure des fichiers
-
-```
-SuperVisionRust/ ← dossier frère de supervision/
-├── Cargo.toml
-├── src/
-│ ├── main.rs # point d'entrée + service Windows
-│ ├── config.rs # structs Config + ConfigManager (JSON)
-│ ├── monitor.rs # SystemMonitor (sysinfo + seuils + alertes)
-│ ├── alerter.rs # EmailAlerter (lettre SMTP)
-│ ├── user_monitor.rs # UserMonitor (parsing logs Amadea)
-│ └── routes/
-│ ├── mod.rs # AppState + flash helpers + AuthUser extractor
-│ ├── auth.rs # GET/POST /login, GET /logout
-│ ├── dashboard.rs # GET /, GET /api/metrics, POST /api/monitoring/toggle
-│ ├── settings.rs # GET /settings + tous les POST /settings/*
-│ ├── alerts.rs # GET /alerts, POST /alerts/clear
-│ └── users.rs # GET /users, GET /api/users, GET /api/users/activity/weekly
-├── templates/
-│ ├── base.html
-│ ├── login.html
-│ ├── dashboard.html
-│ ├── settings.html
-│ ├── alerts.html
-│ └── users.html
-└── static/
- └── style.css
-```
-
----
-
-## Notes de portage Jinja2 → Tera
-
-| Jinja2 | Tera |
-|---|---|
-| `url_for('dashboard')` | `/` |
-| `url_for('settings')` | `/settings` |
-| `url_for('alerts')` | `/alerts` |
-| `url_for('users')` | `/users` |
-| `url_for('login')` | `/login` |
-| `url_for('logout')` | `/logout` |
-| `url_for('toggle_monitoring')` | `/api/monitoring/toggle` |
-| `url_for('clear_alerts')` | `/alerts/clear` |
-| `url_for('update_thresholds')` | `/settings/thresholds` |
-| `url_for('update_monitoring')` | `/settings/monitoring` |
-| `url_for('update_smtp')` | `/settings/smtp` |
-| `url_for('test_smtp')` | `/settings/smtp/test` |
-| `url_for('update_processes')` | `/settings/processes` |
-| `url_for('update_password')` | `/settings/password` |
-| `url_for('update_port')` | `/settings/port` |
-| `url_for('update_amadea_log_path')` | `/settings/amadea-log-path` |
-| `url_for('update_user_thresholds')` | `/settings/user-thresholds` |
-| `url_for('static', filename='style.css')` | `/static/style.css` |
-| `get_flashed_messages(with_categories=true)` | variable `flash_messages` dans le contexte |
-| `current_user.is_authenticated` | variable `is_authenticated` dans le contexte |
-| `request.endpoint == 'dashboard'` | variable `active_page == "dashboard"` |
-| `\| join(', ')` | `\| join(sep=", ")` |
-| `\| replace('T', ' ')` | `\| replace(from="T", to=" ")` |
-| `{% with %}...{% endwith %}` | `{% set %}` ou direct |
-| `loop.index0` | `loop.index0` (identique) |
-
----
-
-## Task 1: Scaffold — Cargo.toml + structure vide
-
-**Files:**
-- Create: `../SuperVisionRust/Cargo.toml`
-- Create: `../SuperVisionRust/src/main.rs`
-- Create: `../SuperVisionRust/src/config.rs`
-- Create: `../SuperVisionRust/src/monitor.rs`
-- Create: `../SuperVisionRust/src/alerter.rs`
-- Create: `../SuperVisionRust/src/user_monitor.rs`
-- Create: `../SuperVisionRust/src/routes/mod.rs`
-- Create: `../SuperVisionRust/src/routes/auth.rs`
-- Create: `../SuperVisionRust/src/routes/dashboard.rs`
-- Create: `../SuperVisionRust/src/routes/settings.rs`
-- Create: `../SuperVisionRust/src/routes/alerts.rs`
-- Create: `../SuperVisionRust/src/routes/users.rs`
-
-- [ ] **Step 1: Créer le dossier SuperVisionRust**
-
-```bash
-cd "/Users/oussi/Documents/Documents - MacBook Pro de oussi/EttaSante/monitoring"
-mkdir SuperVisionRust
-cd SuperVisionRust
-mkdir -p src/routes templates static
-```
-
-- [ ] **Step 2: Créer Cargo.toml**
-
-```toml
-[package]
-name = "supervision"
-version = "0.1.0"
-edition = "2021"
-
-[[bin]]
-name = "supervision"
-path = "src/main.rs"
-
-[dependencies]
-axum = { version = "0.7", features = ["macros", "form"] }
-tokio = { version = "1", features = ["full"] }
-tera = "1"
-sysinfo = "0.32"
-lettre = { version = "0.11", features = ["tokio1-native-tls", "builder"] }
-serde = { version = "1", features = ["derive"] }
-serde_json = "1"
-tower-sessions = { version = "0.12", features = ["memory-store"] }
-tower = "0.4"
-tower_governor = "0.4"
-tower-http = { version = "0.5", features = ["fs"] }
-tracing = "0.1"
-tracing-subscriber = { version = "0.3", features = ["env-filter"] }
-bcrypt = "0.15"
-chrono = { version = "0.4", features = ["serde"] }
-rand = "0.8"
-async-trait = "0.1"
-http = "1"
-regex = "1"
-glob = "0.3"
-
-[dev-dependencies]
-tempfile = "3"
-
-[target.'cfg(windows)'.dependencies]
-windows-service = "0.7"
-```
-
-- [ ] **Step 3: Créer src/main.rs minimal**
-
-```rust
-mod config;
-mod monitor;
-mod alerter;
-mod user_monitor;
-mod routes;
-
-fn main() {
- println!("Supervision");
-}
-```
-
-- [ ] **Step 4: Créer les fichiers modules vides**
-
-`src/config.rs` :
-```rust
-// Config module — à implémenter dans Task 2
-```
-
-`src/monitor.rs` :
-```rust
-// Monitor module — à implémenter dans Task 4
-```
-
-`src/alerter.rs` :
-```rust
-// Alerter module — à implémenter dans Task 3
-```
-
-`src/user_monitor.rs` :
-```rust
-// UserMonitor module — à implémenter dans Task 5
-```
-
-`src/routes/mod.rs` :
-```rust
-pub mod auth;
-pub mod dashboard;
-pub mod settings;
-pub mod alerts;
-pub mod users;
-```
-
-`src/routes/auth.rs`, `src/routes/dashboard.rs`, `src/routes/settings.rs`, `src/routes/alerts.rs`, `src/routes/users.rs` : fichiers vides avec `// TODO`.
-
-- [ ] **Step 5: Vérifier que ça compile**
-
-```bash
-cd SuperVisionRust
-cargo check
-```
-Expected: `Finished` sans erreurs (quelques warnings `unused` sont OK).
-
-- [ ] **Step 6: Commit**
-
-```bash
-git init
-git add .
-git commit -m "feat: scaffold projet supervision-rs"
-```
-
----
-
-## Task 2: Config module
-
-**Files:**
-- Modify: `src/config.rs`
-- Test: dans `src/config.rs` section `#[cfg(test)]`
-
-- [ ] **Step 1: Écrire le test**
-
-Ajouter à la fin de `src/config.rs` :
-
-```rust
-#[cfg(test)]
-mod tests {
- use super::*;
-
- #[test]
- fn default_config_has_expected_values() {
- let cfg = Config::default();
- assert_eq!(cfg.port, 5000);
- assert_eq!(cfg.thresholds.cpu_percent, 90.0);
- assert_eq!(cfg.thresholds.ram_percent, 85.0);
- assert_eq!(cfg.thresholds.disk_percent, 90.0);
- assert_eq!(cfg.check_interval_minutes, 1);
- assert_eq!(cfg.alert_cooldown_minutes, 30);
- assert_eq!(cfg.processes.len(), 3);
- assert_eq!(cfg.admin.username, "admin");
- }
-
- #[test]
- fn config_serializes_and_deserializes() {
- let cfg = Config::default();
- let json = serde_json::to_string(&cfg).unwrap();
- let cfg2: Config = serde_json::from_str(&json).unwrap();
- assert_eq!(cfg.port, cfg2.port);
- assert_eq!(cfg.admin.username, cfg2.admin.username);
- }
-
- #[test]
- fn save_alert_truncates_at_500() {
- let dir = tempfile::tempdir().unwrap();
- let mut cm = ConfigManager::new_with_dir(dir.path().to_path_buf());
- for i in 0..510u32 {
- cm.save_alert(Alert {
- timestamp: format!("2026-01-01T00:00:{:02}", i % 60),
- alert_type: "threshold".into(),
- key: format!("cpu_{}", i),
- message: format!("msg {}", i),
- value: i as f64,
- threshold: 90.0,
- hostname: "test".into(),
- });
- }
- let alerts = cm.load_alerts();
- assert_eq!(alerts.len(), 500);
- }
-}
-```
-
-- [ ] **Step 2: Vérifier que le test échoue (pas encore de code)**
-
-```bash
-cargo test config
-```
-Expected: erreur de compilation — `Config`, `ConfigManager`, `Alert` pas définis.
-
-- [ ] **Step 3: Implémenter config.rs**
-
-```rust
-use serde::{Deserialize, Serialize};
-use std::fs;
-use std::path::{Path, PathBuf};
-
-const MAX_ALERTS: usize = 500;
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct Thresholds {
- pub cpu_percent: f64,
- pub ram_percent: f64,
- pub disk_percent: f64,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct ProcessConfig {
- pub name: String,
- pub pattern: String,
- pub memory_threshold_mb: f64,
- pub enabled: bool,
- pub alert_on_down: bool,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize, Default)]
-pub struct SmtpConfig {
- pub server: String,
- pub port: u16,
- pub use_tls: bool,
- pub username: String,
- pub password: String,
- pub from_email: String,
-    pub to_emails: Vec<String>,
-}
-
-impl SmtpConfig {
- pub fn default_port() -> u16 { 587 }
- pub fn default_tls() -> bool { true }
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct UserStatusThresholds {
- pub active_minutes: u64,
- pub inactive_minutes: u64,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct AdminConfig {
- pub username: String,
- pub password_hash: String,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct Config {
- pub secret_key: String,
- pub port: u16,
- pub check_interval_minutes: u64,
- pub alert_cooldown_minutes: u64,
- pub thresholds: Thresholds,
-    pub processes: Vec<ProcessConfig>,
- pub smtp: SmtpConfig,
- pub amadea_log_path: String,
- pub user_status_thresholds: UserStatusThresholds,
- pub admin: AdminConfig,
-}
-
-impl Default for Config {
- fn default() -> Self {
- Config {
- secret_key: generate_secret_key(),
- port: 5000,
- check_interval_minutes: 1,
- alert_cooldown_minutes: 30,
- thresholds: Thresholds {
- cpu_percent: 90.0,
- ram_percent: 85.0,
- disk_percent: 90.0,
- },
- processes: vec![
- ProcessConfig {
- name: "JVM".into(),
- pattern: "java".into(),
- memory_threshold_mb: 0.0,
- enabled: true,
- alert_on_down: true,
- },
- ProcessConfig {
- name: "Nginx".into(),
- pattern: "nginx".into(),
- memory_threshold_mb: 0.0,
- enabled: false,
- alert_on_down: false,
- },
- ProcessConfig {
- name: "Amadea Web 8 x64".into(),
- pattern: "amadea".into(),
- memory_threshold_mb: 0.0,
- enabled: true,
- alert_on_down: true,
- },
- ],
- smtp: SmtpConfig {
- port: 587,
- use_tls: true,
- ..Default::default()
- },
- amadea_log_path: r"C:\ProgramData\ISoft\Amadea Web 8 x64\data\logs".into(),
- user_status_thresholds: UserStatusThresholds {
- active_minutes: 5,
- inactive_minutes: 30,
- },
- admin: AdminConfig {
- username: "admin".into(),
- password_hash: bcrypt::hash("admin", bcrypt::DEFAULT_COST)
- .unwrap_or_default(),
- },
- }
- }
-}
-
-fn generate_secret_key() -> String {
- use rand::Rng;
- let mut rng = rand::thread_rng();
- (0..32).map(|_| format!("{:02x}", rng.gen::<u8>())).collect()
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct Alert {
- pub timestamp: String,
- #[serde(rename = "type")]
- pub alert_type: String,
- pub key: String,
- pub message: String,
- pub value: f64,
- pub threshold: f64,
- pub hostname: String,
-}
-
-pub struct ConfigManager {
- config_file: PathBuf,
- alerts_file: PathBuf,
- pub config: Config,
-}
-
-impl ConfigManager {
- pub fn new() -> Self {
- let exe_dir = std::env::current_exe()
- .unwrap_or_default()
- .parent()
- .unwrap_or(Path::new("."))
- .to_path_buf();
- Self::new_with_dir(exe_dir.join("data"))
- }
-
- pub fn new_with_dir(data_dir: PathBuf) -> Self {
- fs::create_dir_all(&data_dir).ok();
- let config_file = data_dir.join("config.json");
- let alerts_file = data_dir.join("alerts.json");
-
- let config = if config_file.exists() {
- fs::read_to_string(&config_file)
- .ok()
- .and_then(|s| serde_json::from_str(&s).ok())
- .unwrap_or_default()
- } else {
- let cfg = Config::default();
- let json = serde_json::to_string_pretty(&cfg).unwrap_or_default();
- fs::write(&config_file, &json).ok();
- cfg
- };
-
- ConfigManager { config_file, alerts_file, config }
- }
-
- pub fn save(&self) {
- let json = serde_json::to_string_pretty(&self.config).unwrap_or_default();
- fs::write(&self.config_file, json).ok();
- }
-
- pub fn update(&mut self, config: Config) {
- self.config = config;
- self.save();
- }
-
- pub fn load_alerts(&self) -> Vec<Alert> {
- if !self.alerts_file.exists() {
- return vec![];
- }
- fs::read_to_string(&self.alerts_file)
- .ok()
- .and_then(|s| serde_json::from_str(&s).ok())
- .unwrap_or_default()
- }
-
- pub fn save_alert(&self, alert: Alert) {
- let mut alerts = self.load_alerts();
- alerts.insert(0, alert);
- alerts.truncate(MAX_ALERTS);
- let json = serde_json::to_string_pretty(&alerts).unwrap_or_default();
- fs::write(&self.alerts_file, json).ok();
- }
-
- pub fn clear_alerts(&self) {
- fs::write(&self.alerts_file, "[]").ok();
- }
-}
-```
-
-Add `tempfile = "3"` to the `[dev-dependencies]` section of Cargo.toml:
-```toml
-[dev-dependencies]
-tempfile = "3"
-```
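-
-`tempfile` gives each test an isolated data directory. The load-or-create behavior that `ConfigManager::new_with_dir` relies on can be sketched with the standard library alone (hypothetical `load_or_init` helper; a plain string stands in for the JSON payload):
-
-```rust
-use std::fs;
-use std::path::PathBuf;
-
-// Std-only sketch of the load-or-create pattern in ConfigManager::new_with_dir:
-// read the file if it exists, otherwise write the default and return it.
-fn load_or_init(dir: PathBuf, default: &str) -> String {
-    fs::create_dir_all(&dir).ok();
-    let file = dir.join("config.txt");
-    if file.exists() {
-        fs::read_to_string(&file).unwrap_or_else(|_| default.to_string())
-    } else {
-        fs::write(&file, default).ok();
-        default.to_string()
-    }
-}
-
-fn main() {
-    let dir = std::env::temp_dir().join("supervision_cfg_demo");
-    let _ = fs::remove_dir_all(&dir); // start clean
-    let first = load_or_init(dir.clone(), "port=5000");
-    let second = load_or_init(dir, "port=9999"); // the existing file wins
-    assert_eq!(first, "port=5000");
-    assert_eq!(second, "port=5000");
-    println!("ok");
-}
-```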
-
-- [ ] **Step 4: Run the tests**
-
-```bash
-cargo test config
-```
-Expected: 3 tests pass.
-
-- [ ] **Step 5: Commit**
-
-```bash
-git add src/config.rs Cargo.toml
-git commit -m "feat: config module with structs and JSON persistence"
-```
-
----
-
-## Task 3: Alerter module
-
-**Files:**
-- Modify: `src/alerter.rs`
-- Test: in `src/alerter.rs`, `#[cfg(test)]` section
-
-- [ ] **Step 1: Write the test**
-
-```rust
-#[cfg(test)]
-mod tests {
- use super::*;
- use crate::config::SmtpConfig;
-
- #[test]
- fn not_configured_when_server_empty() {
- let smtp = SmtpConfig::default();
- assert!(!is_smtp_configured(&smtp));
- }
-
- #[test]
- fn configured_when_all_fields_present() {
- let smtp = SmtpConfig {
- server: "smtp.example.com".into(),
- port: 587,
- use_tls: true,
- username: "user".into(),
- password: "pass".into(),
- from_email: "from@example.com".into(),
- to_emails: vec!["to@example.com".into()],
- };
- assert!(is_smtp_configured(&smtp));
- }
-
- #[test]
- fn not_configured_when_no_recipients() {
- let smtp = SmtpConfig {
- server: "smtp.example.com".into(),
- port: 587,
- use_tls: true,
- username: "user".into(),
- password: "pass".into(),
- from_email: "from@example.com".into(),
- to_emails: vec![],
- };
- assert!(!is_smtp_configured(&smtp));
- }
-}
-```
-
-- [ ] **Step 2: Verify the tests fail**
-
-```bash
-cargo test alerter
-```
-Expected: compilation error (`is_smtp_configured` is not defined).
-
-- [ ] **Step 3: Implement alerter.rs**
-
-```rust
-use crate::config::SmtpConfig;
-use lettre::{
- message::header::ContentType,
- transport::smtp::{
- authentication::Credentials,
- client::{Tls, TlsParameters},
- },
- AsyncSmtpTransport, AsyncTransport, Message, Tokio1Executor,
-};
-
-pub fn is_smtp_configured(smtp: &SmtpConfig) -> bool {
- !smtp.server.is_empty() && !smtp.from_email.is_empty() && !smtp.to_emails.is_empty()
-}
-
-pub struct Alerter;
-
-impl Alerter {
- pub async fn send(&self, smtp: &SmtpConfig, subject: &str, body: &str) -> (bool, String) {
- if !is_smtp_configured(smtp) {
- return (false, "SMTP non configure".into());
- }
- match self.send_email(smtp, subject, body).await {
- Ok(msg) => (true, msg),
- Err(e) => (false, e),
- }
- }
-
- async fn send_email(
- &self,
- smtp: &SmtpConfig,
- subject: &str,
- body: &str,
- ) -> Result<String, String> {
- let from = smtp
- .from_email
- .parse()
- .map_err(|_| "Email expediteur invalide".to_string())?;
-
- let mut builder = Message::builder().from(from).subject(subject);
- for recipient in &smtp.to_emails {
- let mb = recipient
- .parse()
- .map_err(|_| format!("Destinataire invalide: {}", recipient))?;
- builder = builder.to(mb);
- }
- let email = builder
- .header(ContentType::TEXT_PLAIN)
- .body(body.to_string())
- .map_err(|e| e.to_string())?;
-
- let transport = self.build_transport(smtp)?;
- transport
- .send(email)
- .await
- .map(|_| "Email envoye avec succes".to_string())
- .map_err(|e| format!("Erreur SMTP: {}", e))
- }
-
- fn build_transport(
- &self,
- smtp: &SmtpConfig,
- ) -> Result<AsyncSmtpTransport<Tokio1Executor>, String> {
- let mut builder = if smtp.use_tls {
- AsyncSmtpTransport::<Tokio1Executor>::starttls_relay(&smtp.server)
- .map_err(|e| e.to_string())?
- } else {
- AsyncSmtpTransport::<Tokio1Executor>::builder_dangerous(&smtp.server)
- };
-
- builder = builder.port(smtp.port);
-
- if !smtp.username.is_empty() {
- builder = builder.credentials(Credentials::new(
- smtp.username.clone(),
- smtp.password.clone(),
- ));
- }
-
- Ok(builder.build())
- }
-
- pub async fn send_test(&self, smtp: &SmtpConfig) -> (bool, String) {
- let subject = "[TEST] Supervision - Test de configuration email";
- let body = "Ceci est un email de test.\n\nSi vous recevez ce message, la configuration SMTP est correcte.\n\n-- Supervision";
- self.send(smtp, subject, body).await
- }
-}
-```
-
-- [ ] **Step 4: Run the tests**
-
-```bash
-cargo test alerter
-```
-Expected: 3 tests pass.
-
-- [ ] **Step 5: Commit**
-
-```bash
-git add src/alerter.rs
-git commit -m "feat: SMTP alerter module using lettre"
-```
-
----
-
-## Task 4: Monitor module
-
-**Files:**
-- Modify: `src/monitor.rs`
-- Test: in `src/monitor.rs`, `#[cfg(test)]` section
-
-- [ ] **Step 1: Write the tests**
-
-```rust
-#[cfg(test)]
-mod tests {
- use super::*;
-
- #[test]
- fn eval_status_ok_below_80_percent() {
- assert_eq!(eval_status(70.0, 90.0), "ok");
- }
-
- #[test]
- fn eval_status_warning_at_80_percent_of_threshold() {
- assert_eq!(eval_status(72.0, 90.0), "warning"); // 72/90 = 0.8
- }
-
- #[test]
- fn eval_status_critical_at_threshold() {
- assert_eq!(eval_status(90.0, 90.0), "critical");
- }
-
- #[test]
- fn eval_status_critical_above_threshold() {
- assert_eq!(eval_status(95.0, 90.0), "critical");
- }
-
- #[test]
- fn eval_status_ok_with_zero_threshold() {
- assert_eq!(eval_status(50.0, 0.0), "ok");
- }
-}
-```
-
-- [ ] **Step 2: Verify the tests fail**
-
-```bash
-cargo test monitor
-```
-Expected: compilation error.
-
-- [ ] **Step 3: Implement monitor.rs**
-
-```rust
-use crate::alerter::Alerter;
-use crate::config::{Alert, ConfigManager, ProcessConfig};
-use chrono::{DateTime, Duration, Local};
-use serde::{Deserialize, Serialize};
-use std::collections::HashMap;
-use std::sync::{Arc, Mutex, RwLock};
-use std::time::{Duration as StdDuration, Instant};
-use sysinfo::{CpuRefreshKind, Disks, MemoryRefreshKind, ProcessRefreshKind, RefreshKind, System};
-use tokio::sync::Mutex as AsyncMutex;
-
-pub fn eval_status(value: f64, threshold: f64) -> &'static str {
- if threshold <= 0.0 {
- return "ok";
- }
- let ratio = value / threshold;
- if ratio >= 1.0 {
- "critical"
- } else if ratio >= 0.80 {
- "warning"
- } else {
- "ok"
- }
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct CpuMetrics {
- pub percent: f64,
- pub cores: usize,
- pub threshold: f64,
- pub status: String,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct RamMetrics {
- pub percent: f64,
- pub total_gb: f64,
- pub used_gb: f64,
- pub available_gb: f64,
- pub threshold: f64,
- pub status: String,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct DiskMetrics {
- pub drive: String,
- pub mountpoint: String,
- pub percent: f64,
- pub total_gb: f64,
- pub used_gb: f64,
- pub free_gb: f64,
- pub threshold: f64,
- pub status: String,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct ProcessMetrics {
- pub name: String,
- pub pattern: String,
- pub running: bool,
- pub enabled: bool,
- pub alert_on_down: bool,
- pub instance_count: usize,
- pub total_memory_mb: f64,
- pub total_cpu_percent: f64,
- pub memory_threshold_mb: f64,
- pub memory_status: String,
- pub pids: Vec<u32>,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct Metrics {
- pub timestamp: String,
- pub hostname: String,
- pub os: String,
- pub cpu: CpuMetrics,
- pub ram: RamMetrics,
- pub disks: Vec<DiskMetrics>,
- pub processes: Vec<ProcessMetrics>,
- pub uptime: String,
- pub boot_time: String,
- pub monitoring_active: bool,
- pub last_check: String,
- pub next_check: String,
-}
-
-pub struct SystemMonitor {
- config_manager: Arc<AsyncMutex<ConfigManager>>,
- alerter: Arc<Alerter>,
- pub metrics: Arc<RwLock<Option<Metrics>>>,
- pub running: Arc<std::sync::atomic::AtomicBool>,
- last_alerts: Arc<Mutex<HashMap<String, DateTime<Local>>>>,
-}
-
-impl SystemMonitor {
- pub fn new(
- config_manager: Arc<AsyncMutex<ConfigManager>>,
- alerter: Arc<Alerter>,
- ) -> Self {
- SystemMonitor {
- config_manager,
- alerter,
- metrics: Arc::new(RwLock::new(None)),
- running: Arc::new(std::sync::atomic::AtomicBool::new(false)),
- last_alerts: Arc::new(Mutex::new(HashMap::new())),
- }
- }
-
- pub async fn collect(&self) -> Metrics {
- let config = {
- let cm = self.config_manager.lock().await;
- cm.config.clone()
- };
-
- let mut sys = System::new_with_specifics(
- RefreshKind::new()
- .with_cpu(CpuRefreshKind::everything())
- .with_memory(MemoryRefreshKind::everything()),
- );
- std::thread::sleep(StdDuration::from_millis(500));
- sys.refresh_all();
-
- let cpu_percent = sys.global_cpu_usage() as f64;
- let cpu_status = eval_status(cpu_percent, config.thresholds.cpu_percent).to_string();
-
- let ram_total = sys.total_memory() as f64;
- let ram_used = sys.used_memory() as f64;
- let ram_available = sys.available_memory() as f64;
- let ram_percent = if ram_total > 0.0 { ram_used / ram_total * 100.0 } else { 0.0 };
- let ram_status = eval_status(ram_percent, config.thresholds.ram_percent).to_string();
-
- let mut disks = Vec::new();
- let disk_list = Disks::new_with_refreshed_list();
- let ignored_fs = ["squashfs", "tmpfs", "devtmpfs", "overlay", "iso9660"];
- for disk in &disk_list {
- let fs = disk.file_system().to_string_lossy().to_lowercase();
- if ignored_fs.iter().any(|&f| fs.contains(f)) { continue; }
- let total = disk.total_space() as f64;
- if total < 1_073_741_824.0 { continue; } // < 1 GB
- let available = disk.available_space() as f64;
- let used = total - available;
- let percent = (used / total * 100.0 * 10.0).round() / 10.0;
- let status = eval_status(percent, config.thresholds.disk_percent).to_string();
- disks.push(DiskMetrics {
- drive: disk.name().to_string_lossy().trim_end_matches('\\').to_string(),
- mountpoint: disk.mount_point().to_string_lossy().to_string(),
- percent,
- total_gb: (total / 1_073_741_824.0 * 10.0).round() / 10.0,
- used_gb: (used / 1_073_741_824.0 * 10.0).round() / 10.0,
- free_gb: (available / 1_073_741_824.0 * 10.0).round() / 10.0,
- threshold: config.thresholds.disk_percent,
- status,
- });
- }
-
- let processes = self.check_processes(&sys, &config.processes);
-
- let boot_time = System::boot_time();
- let now_unix = chrono::Local::now().timestamp() as u64;
- let uptime_secs = now_unix.saturating_sub(boot_time);
- let uptime = format!(
- "{}:{:02}:{:02}",
- uptime_secs / 3600,
- (uptime_secs % 3600) / 60,
- uptime_secs % 60
- );
-
- let now = chrono::Local::now();
- let interval = config.check_interval_minutes;
-
- Metrics {
- timestamp: now.to_rfc3339(),
- hostname: System::host_name().unwrap_or_else(|| "inconnu".into()),
- os: format!("{} {}", System::name().unwrap_or_default(), System::os_version().unwrap_or_default()),
- cpu: CpuMetrics {
- percent: (cpu_percent * 10.0).round() / 10.0,
- cores: sys.cpus().len(),
- threshold: config.thresholds.cpu_percent,
- status: cpu_status,
- },
- ram: RamMetrics {
- percent: (ram_percent * 10.0).round() / 10.0,
- total_gb: (ram_total / 1_073_741_824.0 * 10.0).round() / 10.0,
- used_gb: (ram_used / 1_073_741_824.0 * 10.0).round() / 10.0,
- available_gb: (ram_available / 1_073_741_824.0 * 10.0).round() / 10.0,
- threshold: config.thresholds.ram_percent,
- status: ram_status,
- },
- disks,
- processes,
- uptime,
- boot_time: chrono::DateTime::from_timestamp(boot_time as i64, 0)
- .map(|dt: chrono::DateTime<chrono::Utc>| dt.to_rfc3339())
- .unwrap_or_default(),
- monitoring_active: self.running.load(std::sync::atomic::Ordering::Relaxed),
- last_check: now.to_rfc3339(),
- next_check: (now + Duration::minutes(interval as i64)).to_rfc3339(),
- }
- }
-
- fn check_processes(
- &self,
- sys: &System,
- process_configs: &[ProcessConfig],
- ) -> Vec<ProcessMetrics> {
- let mut results = Vec::new();
- for pc in process_configs {
- let pattern = pc.pattern.to_lowercase();
- let mut found_pids = Vec::new();
- let mut total_mem: f64 = 0.0;
- let mut total_cpu: f64 = 0.0;
-
- if pc.enabled {
- for (pid, proc) in sys.processes() {
- let name = proc.name().to_string_lossy().to_lowercase();
- let cmd = proc
- .cmd()
- .iter()
- .map(|s| s.to_string_lossy().to_lowercase())
- .collect::<Vec<_>>()
- .join(" ");
- if name.contains(&pattern) || cmd.contains(&pattern) {
- found_pids.push(pid.as_u32());
- total_mem += proc.memory() as f64 / 1_048_576.0;
- total_cpu += proc.cpu_usage() as f64;
- }
- }
- }
-
- let mem_status = if pc.memory_threshold_mb > 0.0 && total_mem > 0.0 {
- eval_status(total_mem, pc.memory_threshold_mb).to_string()
- } else {
- "ok".to_string()
- };
-
- results.push(ProcessMetrics {
- name: pc.name.clone(),
- pattern: pc.pattern.clone(),
- running: !found_pids.is_empty(),
- enabled: pc.enabled,
- alert_on_down: pc.alert_on_down,
- instance_count: found_pids.len(),
- total_memory_mb: (total_mem * 10.0).round() / 10.0,
- total_cpu_percent: (total_cpu * 10.0).round() / 10.0,
- memory_threshold_mb: pc.memory_threshold_mb,
- memory_status: mem_status,
- pids: found_pids,
- });
- }
- results
- }
-
- pub async fn check_and_alert(&self, metrics: &Metrics) {
- let (cooldown, hostname) = {
- let cm = self.config_manager.lock().await;
- (cm.config.alert_cooldown_minutes, metrics.hostname.clone())
- };
-
- let mut to_alert: Vec<(String, String, f64, f64, String)> = Vec::new();
-
- {
- let mut last = self.last_alerts.lock().unwrap();
- let now = chrono::Local::now();
-
- macro_rules! maybe_alert {
- ($key:expr, $msg:expr, $val:expr, $thr:expr, $type:expr) => {
- let key = $key.to_string();
- let should = match last.get(&key) {
- Some(t) => (now - *t) >= Duration::minutes(cooldown as i64),
- None => true,
- };
- if should {
- last.insert(key.clone(), now);
- to_alert.push((key, $msg.to_string(), $val, $thr, $type.to_string()));
- }
- };
- }
-
- if metrics.cpu.status == "critical" {
- maybe_alert!(
- "cpu",
- format!("CPU a {}% (seuil: {}%)", metrics.cpu.percent, metrics.cpu.threshold),
- metrics.cpu.percent, metrics.cpu.threshold, "threshold"
- );
- }
- if metrics.ram.status == "critical" {
- maybe_alert!(
- "ram",
- format!("RAM a {}% (seuil: {}%)", metrics.ram.percent, metrics.ram.threshold),
- metrics.ram.percent, metrics.ram.threshold, "threshold"
- );
- }
- for disk in &metrics.disks {
- if disk.status == "critical" {
- maybe_alert!(
- format!("disk_{}", disk.drive),
- format!("Disque {} a {}% (seuil: {}%)", disk.drive, disk.percent, disk.threshold),
- disk.percent, disk.threshold, "threshold"
- );
- }
- }
- for proc in &metrics.processes {
- if !proc.enabled { continue; }
- if proc.alert_on_down && !proc.running {
- maybe_alert!(
- format!("process_down_{}", proc.name),
- format!("Processus '{}' non detecte (pattern: {})", proc.name, proc.pattern),
- 0.0, 0.0, "process_down"
- );
- }
- if proc.memory_threshold_mb > 0.0 && proc.memory_status == "critical" {
- maybe_alert!(
- format!("process_mem_{}", proc.name),
- format!("Processus '{}' utilise {} Mo (seuil: {} Mo)", proc.name, proc.total_memory_mb, proc.memory_threshold_mb),
- proc.total_memory_mb, proc.memory_threshold_mb, "threshold"
- );
- }
- }
- }
-
- for (key, message, value, threshold, alert_type) in to_alert {
- let alert = Alert {
- timestamp: chrono::Local::now().to_rfc3339(),
- alert_type: alert_type.clone(),
- key,
- message: message.clone(),
- value,
- threshold,
- hostname: hostname.clone(),
- };
- {
- let cm = self.config_manager.lock().await;
- cm.save_alert(alert);
- let subject = format!("[ALERTE] {} - {}", hostname, message);
- let body = format!(
- "Alerte de supervision\n{}\n\nServeur : {}\nDate : {}\nType : {}\n\nMessage : {}\n\n{}\nSupervision - Monitoring automatique",
- "=".repeat(40), hostname, chrono::Local::now().to_rfc3339(), alert_type, message, "=".repeat(40)
- );
- self.alerter.send(&cm.config.smtp, &subject, &body).await;
- }
- }
- }
-
- pub async fn start(self: Arc<Self>) {
- self.running.store(true, std::sync::atomic::Ordering::Relaxed);
- let monitor = self.clone();
- tokio::spawn(async move {
- loop {
- if !monitor.running.load(std::sync::atomic::Ordering::Relaxed) {
- break;
- }
- let metrics = monitor.collect().await;
- {
- let mut m = monitor.metrics.write().unwrap();
- *m = Some(metrics.clone());
- }
- monitor.check_and_alert(&metrics).await;
-
- let interval = {
- let cm = monitor.config_manager.lock().await;
- cm.config.check_interval_minutes
- };
- tokio::time::sleep(StdDuration::from_secs(interval * 60)).await;
- }
- });
- }
-
- pub fn stop(&self) {
- self.running.store(false, std::sync::atomic::Ordering::Relaxed);
- }
-}
-```
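-
-The cooldown gate inside `maybe_alert!` can be isolated as a std-only sketch (abstract integer ticks replace the `chrono` timestamps; `should_fire` is a hypothetical name):
-
-```rust
-use std::collections::HashMap;
-
-// An alert key fires only if at least `cooldown` ticks have passed since it
-// last fired; the last-fired time is only updated when the alert actually fires.
-fn should_fire(last: &mut HashMap<String, u64>, key: &str, now: u64, cooldown: u64) -> bool {
-    match last.get(key) {
-        Some(&t) if now.saturating_sub(t) < cooldown => false,
-        _ => {
-            last.insert(key.to_string(), now);
-            true
-        }
-    }
-}
-
-fn main() {
-    let mut last = HashMap::new();
-    assert!(should_fire(&mut last, "cpu", 0, 30));   // first occurrence: fires
-    assert!(!should_fire(&mut last, "cpu", 10, 30)); // inside cooldown: suppressed
-    assert!(should_fire(&mut last, "cpu", 31, 30));  // cooldown elapsed: fires again
-    assert!(should_fire(&mut last, "ram", 10, 30));  // keys are independent
-    println!("ok");
-}
-```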
-
-- [ ] **Step 4: Run the tests**
-
-```bash
-cargo test monitor
-```
-Expected: 5 tests pass.
-
-- [ ] **Step 5: Full compile check**
-
-```bash
-cargo check
-```
-Expected: no errors.
-
-- [ ] **Step 6: Commit**
-
-```bash
-git add src/monitor.rs
-git commit -m "feat: monitor module with sysinfo and threshold evaluation"
-```
-
----
-
-## Task 5: User monitor module
-
-**Files:**
-- Modify: `src/user_monitor.rs`
-- Test: in `src/user_monitor.rs`
-
-- [ ] **Step 1: Write the tests**
-
-```rust
-#[cfg(test)]
-mod tests {
- use super::*;
-
- #[test]
- fn parse_awevents_line_extracts_user_and_action() {
- let line = r#"2026-04-07 14:23:45.123;server;;;;"login=jdupont,action=consulter,Label=Consulter dossier""#;
- let mut users = std::collections::HashMap::new();
- let cutoff = chrono::Local::now() - chrono::Duration::hours(25);
- let mut hourly: std::collections::HashMap<u32, std::collections::HashSet<String>> =
- (0..24).map(|h| (h, std::collections::HashSet::new())).collect();
- parse_awevents_line(line, &mut users, cutoff.naive_local(), &mut hourly);
- assert!(users.contains_key("jdupont"));
- }
-
- #[test]
- fn parse_awevents_line_ignores_malformed() {
- let line = "not a valid log line";
- let mut users = std::collections::HashMap::new();
- let cutoff = chrono::Local::now().naive_local();
- let mut hourly: std::collections::HashMap<u32, std::collections::HashSet<String>> =
- (0..24).map(|h| (h, std::collections::HashSet::new())).collect();
- parse_awevents_line(line, &mut users, cutoff, &mut hourly);
- assert!(users.is_empty());
- }
-
- #[test]
- fn compute_statuses_marks_recent_as_active() {
- let now = chrono::Local::now().naive_local();
- let mut users = std::collections::HashMap::new();
- users.insert("alice".into(), UserEntry {
- login: "alice".into(),
- last_action_time: now,
- last_action_label: "test".into(),
- action_count_24h: 1,
- status: "deconnecte".into(),
- explicit_logout: false,
- logout_time: None,
- connected_since: Some(now),
- });
- compute_statuses(&mut users, 5, 30, now);
- assert_eq!(users["alice"].status, "actif");
- }
-}
-```
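-
-For reference, the awevents line shape these fixtures assume is `<timestamp>;<server>;;;;"login=<user>,action=<action>,Label=<label>"`. The real Step 3 implementation uses the `regex` crate; this hypothetical `parse_login_label` helper does the same extraction with std-only string splitting:
-
-```rust
-// Extract the login and label fields from an awevents line without regex.
-fn parse_login_label(line: &str) -> Option<(String, String)> {
-    let payload = line.split(";;;;").nth(1)?.trim_matches('"');
-    let mut login = None;
-    let mut label = None;
-    // splitn(3, ',') keeps any commas inside the label in the last piece
-    for field in payload.splitn(3, ',') {
-        if let Some(v) = field.strip_prefix("login=") {
-            login = Some(v.to_string());
-        } else if let Some(v) = field.strip_prefix("Label=") {
-            label = Some(v.to_string());
-        }
-    }
-    Some((login?, label?))
-}
-
-fn main() {
-    let line = r#"2026-04-07 14:23:45.123;server;;;;"login=jdupont,action=consulter,Label=Consulter dossier""#;
-    let (login, label) = parse_login_label(line).unwrap();
-    assert_eq!(login, "jdupont");
-    assert_eq!(label, "Consulter dossier");
-    assert!(parse_login_label("not a valid log line").is_none());
-    println!("ok");
-}
-```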
-
-- [ ] **Step 2: Verify the tests fail**
-
-```bash
-cargo test user_monitor
-```
-Expected: compilation error.
-
-- [ ] **Step 3: Implement user_monitor.rs**
-
-```rust
-use chrono::{Duration, Local, NaiveDateTime, Timelike};
-use regex::Regex;
-use serde::{Deserialize, Serialize};
-use std::collections::{HashMap, HashSet};
-use std::fs;
-use std::path::Path;
-use std::sync::{Arc, Mutex};
-use tokio::sync::Mutex as AsyncMutex;
-use crate::config::ConfigManager;
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct UserEntry {
- pub login: String,
- pub last_action_time: NaiveDateTime,
- pub last_action_label: String,
- pub action_count_24h: u32,
- pub status: String,
- pub explicit_logout: bool,
- pub logout_time: Option<NaiveDateTime>,
- pub connected_since: Option<NaiveDateTime>,
-}
-
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct HourlyCount {
- pub hour: u32,
- pub count: usize,
-}
-
-#[derive(Debug, Clone, Default)]
-pub struct UserData {
- pub users: Vec<UserEntry>,
- pub hourly: Vec<HourlyCount>,
- pub error: Option<String>,
- pub no_files: bool,
-}
-
-fn log_files_for_date(log_path: &Path, prefix: &str, date_str: &str) -> Vec<std::path::PathBuf> {
- let pattern = format!("{}/{}_{}_*", log_path.to_string_lossy(), prefix, date_str);
- let re = Regex::new(r"_(\d+)\.[^.]+$").unwrap();
- let mut files: Vec<_> = glob::glob(&pattern)
- .unwrap_or_else(|_| glob::glob("").unwrap())
- .filter_map(|f| f.ok())
- .filter(|f| !f.to_string_lossy().ends_with(".zip"))
- .collect();
- files.sort_by_key(|f| {
- re.captures(&f.to_string_lossy())
- .and_then(|c| c.get(1))
- .and_then(|m| m.as_str().parse::<u64>().ok())
- .unwrap_or(0)
- });
- files
-}
-
-pub fn parse_awevents_line(
- line: &str,
- users: &mut HashMap<String, UserEntry>,
- cutoff_24h: NaiveDateTime,
- hourly: &mut HashMap<u32, HashSet<String>>,
-) {
- let re = Regex::new(
- r#"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+;[^;]*;;;;"login=([^,]+),action=([^,]+),Label=(.+?)"?\s*$"#,
- ).unwrap();
- let m = match re.captures(line) {
- Some(m) => m,
- None => return,
- };
- let ts_str = &m[1];
- let login = m[2].trim().to_string();
- let label = m[4].to_string();
-
- if login.is_empty() { return; }
-
- let ts = match NaiveDateTime::parse_from_str(ts_str, "%Y-%m-%d %H:%M:%S") {
- Ok(t) => t,
- Err(_) => return,
- };
-
- let is_logout = label.to_lowercase().contains("se deconnecter");
-
- let entry = users.entry(login.clone()).or_insert_with(|| UserEntry {
- login: login.clone(),
- last_action_time: ts,
- last_action_label: label.chars().take(60).collect(),
- action_count_24h: 0,
- status: "deconnecte".into(),
- explicit_logout: is_logout,
- logout_time: if is_logout { Some(ts) } else { None },
- connected_since: Some(ts),
- });
-
- if ts > entry.last_action_time {
- entry.last_action_time = ts;
- entry.last_action_label = label.chars().take(60).collect();
- }
- if is_logout {
- entry.explicit_logout = true;
- entry.logout_time = Some(ts);
- } else if entry.explicit_logout {
- if let Some(lt) = entry.logout_time {
- if ts > lt {
- entry.explicit_logout = false;
- entry.logout_time = None;
- }
- }
- }
-
- if ts >= cutoff_24h {
- entry.action_count_24h += 1;
- }
-
- hourly
- .entry(ts.hour() as u32)
- .or_default()
- .insert(login);
-}
-
-pub fn compute_statuses(
- users: &mut HashMap,
- active_min: u64,
- inactive_min: u64,
- now: NaiveDateTime,
-) {
- for user in users.values_mut() {
- let delta = (now - user.last_action_time)
- .num_minutes()
- .max(0) as u64;
- user.status = if user.explicit_logout {
- "deconnecte".into()
- } else if delta > inactive_min {
- "deconnecte".into()
- } else if delta > active_min {
- "inactif".into()
- } else {
- "actif".into()
- };
- }
-}
-
-pub struct UserMonitor {
- config_manager: Arc<AsyncMutex<ConfigManager>>,
- pub data: Arc<Mutex<UserData>>,
- running: Arc<std::sync::atomic::AtomicBool>,
-}
-
-impl UserMonitor {
- pub fn new(config_manager: Arc<AsyncMutex<ConfigManager>>) -> Self {
- UserMonitor {
- config_manager,
- data: Arc::new(Mutex::new(UserData::default())),
- running: Arc::new(std::sync::atomic::AtomicBool::new(false)),
- }
- }
-
- pub async fn parse_logs(&self) {
- let (log_path, active_min, inactive_min) = {
- let cm = self.config_manager.lock().await;
- (
- cm.config.amadea_log_path.clone(),
- cm.config.user_status_thresholds.active_minutes,
- cm.config.user_status_thresholds.inactive_minutes,
- )
- };
-
- let log_dir = Path::new(&log_path);
- if !log_dir.is_dir() {
- let mut d = self.data.lock().unwrap();
- *d = UserData {
- error: Some(format!("Dossier de logs introuvable : {}", log_path)),
- ..Default::default()
- };
- return;
- }
-
- let now = Local::now().naive_local();
- let date_str = Local::now().format("%y-%m-%d").to_string();
- let cutoff_24h = now - Duration::hours(24);
- let awevents_files = log_files_for_date(log_dir, "awevents", &date_str);
-
- if awevents_files.is_empty() {
- let mut d = self.data.lock().unwrap();
- *d = UserData { no_files: true, ..Default::default() };
- return;
- }
-
- let mut users: HashMap<String, UserEntry> = HashMap::new();
- let mut hourly: HashMap<u32, HashSet<String>> =
- (0..24).map(|h| (h, HashSet::new())).collect();
-
- for file in &awevents_files {
- if let Ok(content) = fs::read_to_string(file) {
- for line in content.lines() {
- parse_awevents_line(line, &mut users, cutoff_24h, &mut hourly);
- }
- }
- }
-
- let re_isoft = Regex::new(
- r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*method=OpenUserSession.*login=([A-Za-z0-9_]+)"
- ).unwrap();
- for file in log_files_for_date(log_dir, "isoft", &date_str) {
- if let Ok(content) = fs::read_to_string(file) {
- for line in content.lines() {
- if let Some(m) = re_isoft.captures(line) {
- let login = m[2].to_string();
- if let Ok(ts) = NaiveDateTime::parse_from_str(&m[1], "%Y-%m-%d %H:%M:%S") {
- if let Some(u) = users.get_mut(&login) {
- if u.connected_since.is_none() {
- u.connected_since = Some(ts);
- }
- }
- }
- }
- }
- }
- }
-
- compute_statuses(&mut users, active_min, inactive_min, now);
-
- let status_order = |s: &str| match s {
- "actif" => 0,
- "inactif" => 1,
- _ => 2,
- };
- let mut sorted: Vec<UserEntry> = users.into_values().collect();
- sorted.sort_by_key(|u| status_order(&u.status));
-
- let hourly_data: Vec<HourlyCount> = {
- let mut v: Vec<_> = hourly.iter().collect();
- v.sort_by_key(|(h, _)| *h);
- v.iter().map(|(h, s)| HourlyCount { hour: **h, count: s.len() }).collect()
- };
-
- let mut d = self.data.lock().unwrap();
- *d = UserData {
- users: sorted,
- hourly: hourly_data,
- error: None,
- no_files: false,
- };
- }
-
- pub async fn get_weekly_activity(&self) -> Vec<serde_json::Value> {
- let log_path = {
- let cm = self.config_manager.lock().await;
- cm.config.amadea_log_path.clone()
- };
- let log_dir = Path::new(&log_path);
- if !log_dir.is_dir() { return vec![]; }
-
- let today = Local::now().date_naive();
- let mut result = Vec::new();
- let re = Regex::new(r"^(\d{4}-\d{2}-\d{2} (\d{2}):\d{2}:\d{2}).*login=([^,]+),").unwrap();
-
- for delta in (0..=6).rev() {
- let day = today - Duration::days(delta);
- let date_str = day.format("%y-%m-%d").to_string();
- let files = log_files_for_date(log_dir, "awevents", &date_str);
- if files.is_empty() {
- result.push(serde_json::json!({ "date": day.to_string(), "count": null }));
- continue;
- }
- let mut hourly: HashMap<u32, HashSet<String>> =
- (0..24).map(|h| (h, HashSet::new())).collect();
- for file in &files {
- if let Ok(content) = fs::read_to_string(file) {
- for line in content.lines() {
- if let Some(m) = re.captures(line) {
- let hour: u32 = m[2].parse().unwrap_or(0);
- let login = m[3].trim().to_string();
- if !login.is_empty() {
- hourly.entry(hour).or_default().insert(login);
- }
- }
- }
- }
- }
- let max_concurrent = hourly.values().map(|s| s.len()).max().unwrap_or(0);
- result.push(serde_json::json!({ "date": day.to_string(), "count": max_concurrent }));
- }
- result
- }
-
- pub async fn start(self: Arc<Self>) {
- self.running.store(true, std::sync::atomic::Ordering::Relaxed);
- let um = self.clone();
- tokio::spawn(async move {
- loop {
- if !um.running.load(std::sync::atomic::Ordering::Relaxed) { break; }
- um.parse_logs().await;
- let interval = {
- let cm = um.config_manager.lock().await;
- cm.config.check_interval_minutes
- };
- tokio::time::sleep(std::time::Duration::from_secs(interval * 60)).await;
- }
- });
- }
-}
-```
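-
-Why `log_files_for_date` sorts on the numeric suffix: lexical order would put `awevents_26-04-07_10.txt` before `awevents_26-04-07_2.txt`. A std-only sketch of the same idea (hypothetical `suffix_num` helper; the real code uses a regex capture):
-
-```rust
-// Parse the digits between the last '_' and the file extension; 0 if absent.
-fn suffix_num(name: &str) -> u64 {
-    let stem = name.rsplit_once('.').map(|(s, _)| s).unwrap_or(name);
-    stem.rsplit_once('_')
-        .and_then(|(_, n)| n.parse().ok())
-        .unwrap_or(0)
-}
-
-fn main() {
-    let mut files = vec![
-        "awevents_26-04-07_10.txt",
-        "awevents_26-04-07_2.txt",
-        "awevents_26-04-07_1.txt",
-    ];
-    files.sort_by_key(|f| suffix_num(f));
-    assert_eq!(files, vec![
-        "awevents_26-04-07_1.txt",
-        "awevents_26-04-07_2.txt",
-        "awevents_26-04-07_10.txt",
-    ]);
-    println!("{files:?}");
-}
-```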
-
-- [ ] **Step 4: Run the tests**
-
-```bash
-cargo test user_monitor
-```
-Expected: 3 tests pass.
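-
-The hourly chart data boils down to counting *distinct* logins per hour, mirroring the `HashMap<u32, HashSet<String>>` that `UserMonitor` fills. A std-only sketch (hypothetical `distinct_per_hour` helper):
-
-```rust
-use std::collections::{HashMap, HashSet};
-
-// Pre-seed all 24 hours so quiet hours show up with a count of 0,
-// then collect the distinct logins observed in each hour.
-fn distinct_per_hour<'a>(events: &[(u32, &'a str)]) -> HashMap<u32, HashSet<&'a str>> {
-    let mut hourly: HashMap<u32, HashSet<&str>> = (0..24).map(|h| (h, HashSet::new())).collect();
-    for &(hour, login) in events {
-        hourly.entry(hour).or_default().insert(login);
-    }
-    hourly
-}
-
-fn main() {
-    let events = [(9u32, "alice"), (9, "alice"), (9, "bob"), (14, "alice")];
-    let hourly = distinct_per_hour(&events);
-    assert_eq!(hourly[&9].len(), 2);  // alice counted once despite two events
-    assert_eq!(hourly[&14].len(), 1);
-    assert_eq!(hourly[&0].len(), 0);  // quiet hours are present with count 0
-    println!("ok");
-}
-```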
-
-- [ ] **Step 5: Commit**
-
-```bash
-git add src/user_monitor.rs
-git commit -m "feat: user_monitor with Amadea log parsing"
-```
-
----
-
-## Task 6: AppState + flash + AuthUser extractor
-
-**Files:**
-- Modify: `src/routes/mod.rs`
-
-- [ ] **Step 1: Implement routes/mod.rs**
-
-```rust
-pub mod auth;
-pub mod dashboard;
-pub mod settings;
-pub mod alerts;
-pub mod users;
-
-use axum::{
- async_trait,
- extract::FromRequestParts,
- http::{request::Parts, StatusCode},
- response::{IntoResponse, Redirect, Response},
-};
-use std::sync::{Arc, RwLock};
-use tera::Tera;
-use tokio::sync::Mutex as AsyncMutex;
-use tower_sessions::Session;
-
-use crate::alerter::Alerter;
-use crate::config::ConfigManager;
-use crate::monitor::{Metrics, SystemMonitor};
-use crate::user_monitor::UserMonitor;
-
-const SESSION_USER_KEY: &str = "username";
-const SESSION_FLASH_KEY: &str = "flash_messages";
-
-#[derive(Clone)]
-pub struct AppState {
- pub config_manager: Arc<AsyncMutex<ConfigManager>>,
- pub monitor: Arc<SystemMonitor>,
- pub alerter: Arc<Alerter>,
- pub user_monitor: Arc<UserMonitor>,
- pub tera: Arc<Tera>,
-}
-
-impl AppState {
- pub fn new(
- config_manager: Arc<AsyncMutex<ConfigManager>>,
- monitor: Arc<SystemMonitor>,
- alerter: Arc<Alerter>,
- user_monitor: Arc<UserMonitor>,
- ) -> Self {
- let tera = build_tera();
- AppState {
- config_manager,
- monitor,
- alerter,
- user_monitor,
- tera: Arc::new(tera),
- }
- }
-}
-
-fn build_tera() -> Tera {
- let mut tera = Tera::default();
- tera.add_raw_templates(vec![
- ("base.html", include_str!("../../templates/base.html")),
- ("login.html", include_str!("../../templates/login.html")),
- ("dashboard.html", include_str!("../../templates/dashboard.html")),
- ("settings.html", include_str!("../../templates/settings.html")),
- ("alerts.html", include_str!("../../templates/alerts.html")),
- ("users.html", include_str!("../../templates/users.html")),
- ])
- .expect("Erreur chargement templates Tera");
- tera
-}
-
-/// Flash message helpers
-pub async fn flash(session: &Session, category: &str, message: &str) {
- let mut messages: Vec<(String, String)> = session
- .get::<Vec<(String, String)>>(SESSION_FLASH_KEY)
- .await
- .unwrap_or_default()
- .unwrap_or_default();
- messages.push((category.to_string(), message.to_string()));
- session
- .insert(SESSION_FLASH_KEY, messages)
- .await
- .ok();
-}
-
-pub async fn get_and_clear_flash(session: &Session) -> Vec<(String, String)> {
- let messages: Vec<(String, String)> = session
- .get::<Vec<(String, String)>>(SESSION_FLASH_KEY)
- .await
- .unwrap_or_default()
- .unwrap_or_default();
- session.remove::<Vec<(String, String)>>(SESSION_FLASH_KEY).await.ok();
- messages
-}
-
-/// Extractor: returns the username if authenticated, otherwise redirects to /login
-pub struct AuthUser(pub String);
-
-#[async_trait]
-impl<S> FromRequestParts<S> for AuthUser
-where
- S: Send + Sync,
-{
- type Rejection = Response;
-
- async fn from_request_parts(parts: &mut Parts, state: &S) -> Result<Self, Self::Rejection> {
- let session = Session::from_request_parts(parts, state)
- .await
- .map_err(|e| e.into_response())?;
- match session
- .get::<String>(SESSION_USER_KEY)
- .await
- .unwrap_or_default()
- {
- Some(u) => Ok(AuthUser(u)),
- None => Err(Redirect::to("/login").into_response()),
- }
- }
-}
-
-pub fn render_html(
- tera: &Tera,
- template: &str,
- ctx: tera::Context,
-) -> axum::response::Html<String> {
- match tera.render(template, &ctx) {
- Ok(html) => axum::response::Html(html),
- Err(e) => axum::response::Html(format!("Erreur template: {}", e)),
- }
-}
-```
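-
-The flash helpers above implement a read-once queue on top of `tower_sessions`. The same semantics in isolation, as a std-only sketch (`FlashStore` is a hypothetical stand-in for the session storage):
-
-```rust
-// Messages accumulate until they are read once, then the queue resets.
-struct FlashStore(Vec<(String, String)>);
-
-impl FlashStore {
-    fn flash(&mut self, category: &str, message: &str) {
-        self.0.push((category.to_string(), message.to_string()));
-    }
-    fn get_and_clear(&mut self) -> Vec<(String, String)> {
-        std::mem::take(&mut self.0) // drain and leave an empty queue behind
-    }
-}
-
-fn main() {
-    let mut store = FlashStore(Vec::new());
-    store.flash("danger", "Identifiants incorrects.");
-    store.flash("success", "Configuration enregistree.");
-    let msgs = store.get_and_clear();
-    assert_eq!(msgs.len(), 2);
-    assert_eq!(msgs[0].0, "danger");
-    assert!(store.get_and_clear().is_empty()); // read once, then empty
-    println!("ok");
-}
-```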
-
-- [ ] **Step 2: Compile check**
-
-```bash
-cargo check
-```
-Expected: no errors.
-
-- [ ] **Step 3: Commit**
-
-```bash
-git add src/routes/mod.rs
-git commit -m "feat: AppState, flash helpers, AuthUser extractor"
-```
-
----
-
-## Task 7: Auth routes (login/logout)
-
-**Files:**
-- Modify: `src/routes/auth.rs`
-
-- [ ] **Step 1: Implement auth.rs**
-
-```rust
-use axum::{
- extract::{Form, State},
- response::{Html, IntoResponse, Redirect},
-};
-use serde::Deserialize;
-use tower_sessions::Session;
-
-use crate::routes::{flash, get_and_clear_flash, render_html, AppState, SESSION_USER_KEY};
-
-#[derive(Deserialize)]
-pub struct LoginForm {
- pub username: String,
- pub password: String,
-}
-
-pub async fn login_get(
-    session: Session,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
-    // If already logged in, redirect to the dashboard
-    if session
-        .get::<String>(SESSION_USER_KEY)
- .await
- .unwrap_or_default()
- .is_some()
- {
- return Redirect::to("/").into_response();
- }
- let flash_messages = get_and_clear_flash(&session).await;
- let mut ctx = tera::Context::new();
- ctx.insert("flash_messages", &flash_messages);
- ctx.insert("is_authenticated", &false);
- render_html(&state.tera, "login.html", ctx).into_response()
-}
-
-pub async fn login_post(
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<LoginForm>,
-) -> impl IntoResponse {
- let (admin_username, admin_hash) = {
- let cm = state.config_manager.lock().await;
- (cm.config.admin.username.clone(), cm.config.admin.password_hash.clone())
- };
-
- let username = form.username.trim().to_string();
- let password = form.password.clone();
-
- let valid = username == admin_username
- && bcrypt::verify(&password, &admin_hash).unwrap_or(false);
-
- if valid {
- session
- .insert(SESSION_USER_KEY, username)
- .await
- .ok();
- return Redirect::to("/").into_response();
- }
-
- flash(&session, "danger", "Identifiants incorrects.").await;
- Redirect::to("/login").into_response()
-}
-
-pub async fn logout(session: Session) -> impl IntoResponse {
- session.flush().await.ok();
- Redirect::to("/login")
-}
-```
-
-Note: `SESSION_USER_KEY` is defined in `routes/mod.rs`; declare it `pub` there so `auth.rs` can import it via `crate::routes::SESSION_USER_KEY` instead of redefining it locally.
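
For reference, the shared keys in `routes/mod.rs` would look like the sketch below. The `"username"` value matches the plan's code; the `SESSION_FLASH_KEY` value is a placeholder, not confirmed by the plan.

```rust
// In src/routes/mod.rs: session keys shared by all route modules.
// Declared `pub` so auth.rs, dashboard.rs, etc. can import them.
pub const SESSION_USER_KEY: &str = "username";
pub const SESSION_FLASH_KEY: &str = "flash_messages"; // placeholder value
```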
-
-- [ ] **Step 2: Compile check**
-
-```bash
-cargo check
-```
-Expected: no errors.
-
-- [ ] **Step 3: Commit**
-
-```bash
-git add src/routes/auth.rs
-git commit -m "feat: routes auth login/logout"
-```
-
----
-
-## Task 8: Dashboard and metrics API
-
-**Files:**
-- Modify: `src/routes/dashboard.rs`
-
-- [ ] **Step 1: Implement dashboard.rs**
-
-```rust
-use axum::{
- extract::State,
- response::{IntoResponse, Json, Redirect},
-};
-use tower_sessions::Session;
-
-use crate::routes::{flash, get_and_clear_flash, render_html, AppState, AuthUser};
-
-pub async fn dashboard(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let flash_messages = get_and_clear_flash(&session).await;
- let metrics = state.monitor.metrics.read().unwrap().clone();
-
- let (is_default_pw, admin_username) = {
- let cm = state.config_manager.lock().await;
- let hash = cm.config.admin.password_hash.clone();
- let username = cm.config.admin.username.clone();
- (bcrypt::verify("admin", &hash).unwrap_or(false), username)
- };
-
- let mut ctx = tera::Context::new();
- ctx.insert("flash_messages", &flash_messages);
- ctx.insert("is_authenticated", &true);
- ctx.insert("active_page", "dashboard");
- ctx.insert("metrics", &metrics);
- ctx.insert("default_pw", &is_default_pw);
- ctx.insert("admin_username", &admin_username);
-
- render_html(&state.tera, "dashboard.html", ctx)
-}
-
-pub async fn api_metrics(
-    _auth: AuthUser,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let metrics = state.monitor.metrics.read().unwrap().clone();
- Json(metrics)
-}
-
-pub async fn toggle_monitoring(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let is_running = state.monitor.running.load(std::sync::atomic::Ordering::Relaxed);
- if is_running {
- state.monitor.stop();
- flash(&session, "warning", "Monitoring arrêté.").await;
- } else {
- let m = state.monitor.clone();
- m.start().await;
- flash(&session, "success", "Monitoring démarré.").await;
- }
- Redirect::to("/")
-}
-```
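
The toggle above assumes `SystemMonitor` exposes a `running: AtomicBool`. The flip logic can be exercised in isolation; the struct and method names here are stand-ins taken from the plan, not the final API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Minimal stand-in for the monitor's running flag (assumed field name).
struct Monitor {
    running: AtomicBool,
}

impl Monitor {
    fn start(&self) { self.running.store(true, Ordering::Relaxed); }
    fn stop(&self) { self.running.store(false, Ordering::Relaxed); }

    // Mirrors toggle_monitoring: read the flag, flip the state,
    // and report the new state (true = now running).
    fn toggle(&self) -> bool {
        if self.running.load(Ordering::Relaxed) {
            self.stop();
            false
        } else {
            self.start();
            true
        }
    }
}
```

`Ordering::Relaxed` is enough here because the flag carries no other data; only its own value must be consistent.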
-
-- [ ] **Step 2: Compile check**
-
-```bash
-cargo check
-```
-Expected: no errors.
-
-- [ ] **Step 3: Commit**
-
-```bash
-git add src/routes/dashboard.rs
-git commit -m "feat: dashboard + api/metrics routes"
-```
-
----
-
-## Task 9: Settings routes
-
-**Files:**
-- Modify: `src/routes/settings.rs`
-
-- [ ] **Step 1: Implement settings.rs**
-
-```rust
-use axum::{
- extract::{Form, State},
- response::{IntoResponse, Redirect},
-};
-use serde::Deserialize;
-use tower_sessions::Session;
-
-use crate::routes::{flash, get_and_clear_flash, render_html, AppState, AuthUser};
-
-pub async fn settings_get(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let flash_messages = get_and_clear_flash(&session).await;
- let (config, is_default_pw) = {
- let cm = state.config_manager.lock().await;
- let pw_default = bcrypt::verify("admin", &cm.config.admin.password_hash).unwrap_or(false);
- (cm.config.clone(), pw_default)
- };
-
-    // Mask the SMTP password
- let smtp_masked = if config.smtp.password.is_empty() {
- "".to_string()
- } else {
- "********".to_string()
- };
-
- let mut ctx = tera::Context::new();
- ctx.insert("flash_messages", &flash_messages);
- ctx.insert("is_authenticated", &true);
- ctx.insert("active_page", "settings");
- ctx.insert("config", &config);
- ctx.insert("smtp", &config.smtp);
- ctx.insert("smtp_password_masked", &smtp_masked);
- ctx.insert("default_pw", &is_default_pw);
-
- render_html(&state.tera, "settings.html", ctx)
-}
-
-#[derive(Deserialize)]
-pub struct ThresholdsForm {
- pub cpu_percent: u32,
- pub ram_percent: u32,
- pub disk_percent: u32,
-}
-
-pub async fn update_thresholds(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<ThresholdsForm>,
-) -> impl IntoResponse {
- if !(1..=100).contains(&form.cpu_percent)
- || !(1..=100).contains(&form.ram_percent)
- || !(1..=100).contains(&form.disk_percent)
- {
- flash(&session, "danger", "Les seuils doivent être entre 1 et 100.").await;
- return Redirect::to("/settings");
- }
- {
- let mut cm = state.config_manager.lock().await;
- cm.config.thresholds.cpu_percent = form.cpu_percent as f64;
- cm.config.thresholds.ram_percent = form.ram_percent as f64;
- cm.config.thresholds.disk_percent = form.disk_percent as f64;
- cm.save();
- }
- flash(&session, "success", "Seuils mis à jour.").await;
- Redirect::to("/settings")
-}
-
-#[derive(Deserialize)]
-pub struct MonitoringForm {
- pub check_interval_minutes: u64,
- pub alert_cooldown_minutes: u64,
-}
-
-pub async fn update_monitoring(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<MonitoringForm>,
-) -> impl IntoResponse {
- if form.check_interval_minutes < 1 {
- flash(&session, "danger", "L'intervalle doit être d'au moins 1 minute.").await;
- return Redirect::to("/settings");
- }
- if form.alert_cooldown_minutes < 1 {
- flash(&session, "danger", "Le cooldown doit être d'au moins 1 minute.").await;
- return Redirect::to("/settings");
- }
- {
- let mut cm = state.config_manager.lock().await;
- cm.config.check_interval_minutes = form.check_interval_minutes;
- cm.config.alert_cooldown_minutes = form.alert_cooldown_minutes;
- cm.save();
- }
- flash(&session, "success", "Paramètres de monitoring mis à jour.").await;
- Redirect::to("/settings")
-}
-
-#[derive(Deserialize)]
-pub struct SmtpForm {
-    pub smtp_server: String,
-    pub smtp_port: u16,
-    pub smtp_tls: Option<String>,
-    pub smtp_username: String,
-    pub smtp_password: Option<String>,
-    pub smtp_from: String,
-    pub smtp_to: String,
-}
-
-pub async fn update_smtp(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<SmtpForm>,
-) -> impl IntoResponse {
-    let to_emails: Vec<String> = form
- .smtp_to
- .split(',')
- .map(|s| s.trim().to_string())
- .filter(|s| !s.is_empty())
- .collect();
- {
- let mut cm = state.config_manager.lock().await;
- let old_password = cm.config.smtp.password.clone();
- cm.config.smtp.server = form.smtp_server.trim().to_string();
- cm.config.smtp.port = form.smtp_port;
- cm.config.smtp.use_tls = form.smtp_tls.is_some();
- cm.config.smtp.username = form.smtp_username.trim().to_string();
- cm.config.smtp.from_email = form.smtp_from.trim().to_string();
- cm.config.smtp.to_emails = to_emails;
- cm.config.smtp.password = match form.smtp_password {
- Some(pw) if !pw.is_empty() => pw,
- _ => old_password,
- };
- cm.save();
- }
- flash(&session, "success", "Configuration SMTP mise à jour.").await;
- Redirect::to("/settings")
-}
-
-pub async fn test_smtp(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let smtp = {
- let cm = state.config_manager.lock().await;
- cm.config.smtp.clone()
- };
- let (ok, msg) = state.alerter.send_test(&smtp).await;
- if ok {
- flash(&session, "success", &format!("Test réussi : {}", msg)).await;
- } else {
- flash(&session, "danger", &format!("Test échoué : {}", msg)).await;
- }
- Redirect::to("/settings")
-}
-
-#[derive(Deserialize)]
-pub struct ProcessesForm {
-    #[serde(rename = "proc_name[]")]
-    pub proc_name: Option<Vec<String>>,
-    #[serde(rename = "proc_pattern[]")]
-    pub proc_pattern: Option<Vec<String>>,
-    #[serde(rename = "proc_mem_threshold[]")]
-    pub proc_mem_threshold: Option<Vec<String>>,
-    #[serde(rename = "proc_enabled[]")]
-    pub proc_enabled: Option<Vec<String>>,
-    #[serde(rename = "proc_alert_down[]")]
-    pub proc_alert_down: Option<Vec<String>>,
-}
-
-pub async fn update_processes(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<ProcessesForm>,
-) -> impl IntoResponse {
- use crate::config::ProcessConfig;
- let names = form.proc_name.unwrap_or_default();
- let patterns = form.proc_pattern.unwrap_or_default();
- let mem_thresholds = form.proc_mem_threshold.unwrap_or_default();
- let enableds = form.proc_enabled.unwrap_or_default();
- let alert_downs = form.proc_alert_down.unwrap_or_default();
-
- let mut processes = Vec::new();
- for (i, name) in names.iter().enumerate() {
- let name = name.trim().to_string();
- if name.is_empty() { continue; }
- processes.push(ProcessConfig {
- name,
- pattern: patterns.get(i).map(|s| s.trim().to_lowercase()).unwrap_or_default(),
- memory_threshold_mb: mem_thresholds.get(i)
- .and_then(|s| s.parse().ok())
- .unwrap_or(0.0),
- enabled: enableds.contains(&i.to_string()),
- alert_on_down: alert_downs.contains(&i.to_string()),
- });
- }
- {
- let mut cm = state.config_manager.lock().await;
- cm.config.processes = processes;
- cm.save();
- }
- flash(&session, "success", "Processus surveillés mis à jour.").await;
- Redirect::to("/settings")
-}
-
-#[derive(Deserialize)]
-pub struct PasswordForm {
- pub current_password: String,
- pub new_password: String,
- pub confirm_password: String,
-}
-
-pub async fn update_password(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<PasswordForm>,
-) -> impl IntoResponse {
- let hash = {
- let cm = state.config_manager.lock().await;
- cm.config.admin.password_hash.clone()
- };
- if !bcrypt::verify(&form.current_password, &hash).unwrap_or(false) {
- flash(&session, "danger", "Mot de passe actuel incorrect.").await;
- return Redirect::to("/settings");
- }
- if form.new_password.len() < 8 {
- flash(&session, "danger", "Le nouveau mot de passe doit faire au moins 8 caractères.").await;
- return Redirect::to("/settings");
- }
- if form.new_password != form.confirm_password {
- flash(&session, "danger", "Les mots de passe ne correspondent pas.").await;
- return Redirect::to("/settings");
- }
- let new_hash = match bcrypt::hash(&form.new_password, bcrypt::DEFAULT_COST) {
- Ok(h) => h,
- Err(_) => {
- flash(&session, "danger", "Erreur lors du hachage du mot de passe.").await;
- return Redirect::to("/settings");
- }
- };
- {
- let mut cm = state.config_manager.lock().await;
- cm.config.admin.password_hash = new_hash;
- cm.save();
- }
- flash(&session, "success", "Mot de passe mis à jour.").await;
- Redirect::to("/settings")
-}
-
-#[derive(Deserialize)]
-pub struct PortForm {
- pub port: u16,
-}
-
-pub async fn update_port(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<PortForm>,
-) -> impl IntoResponse {
- if !(1024..=65535).contains(&form.port) {
- flash(&session, "danger", "Le port doit être entre 1024 et 65535.").await;
- return Redirect::to("/settings");
- }
- {
- let mut cm = state.config_manager.lock().await;
- cm.config.port = form.port;
- cm.save();
- }
- flash(
- &session,
- "warning",
- &format!("Port mis à jour à {}. Redémarrez l'application pour appliquer.", form.port),
- ).await;
- Redirect::to("/settings")
-}
-
-#[derive(Deserialize)]
-pub struct AmadeaLogPathForm {
- pub amadea_log_path: String,
-}
-
-pub async fn update_amadea_log_path(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<AmadeaLogPathForm>,
-) -> impl IntoResponse {
- let path = form.amadea_log_path.trim().to_string();
- if path.is_empty() {
- flash(&session, "danger", "Le chemin ne peut pas être vide.").await;
- return Redirect::to("/settings");
- }
- {
- let mut cm = state.config_manager.lock().await;
- cm.config.amadea_log_path = path;
- cm.save();
- }
- flash(&session, "success", "Chemin des logs Amadea mis à jour.").await;
- Redirect::to("/settings")
-}
-
-#[derive(Deserialize)]
-pub struct UserThresholdsForm {
- pub active_minutes: u64,
- pub inactive_minutes: u64,
-}
-
-pub async fn update_user_thresholds(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-    Form(form): Form<UserThresholdsForm>,
-) -> impl IntoResponse {
- if form.active_minutes < 1 || form.inactive_minutes < 1 {
- flash(&session, "danger", "Les seuils doivent être d'au moins 1 minute.").await;
- return Redirect::to("/settings");
- }
- if form.active_minutes >= form.inactive_minutes {
- flash(&session, "danger", "Le seuil 'actif' doit être inférieur au seuil 'inactif'.").await;
- return Redirect::to("/settings");
- }
- {
- let mut cm = state.config_manager.lock().await;
- cm.config.user_status_thresholds.active_minutes = form.active_minutes;
- cm.config.user_status_thresholds.inactive_minutes = form.inactive_minutes;
- cm.save();
- }
- flash(&session, "success", "Seuils utilisateurs mis à jour.").await;
- Redirect::to("/settings")
-}
-```
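
`update_processes` rebuilds one `ProcessConfig` per non-empty row from parallel `name[]`/`pattern[]`/... arrays, with checkbox groups submitting row indices. Note that axum's default `Form` extractor (backed by `serde_urlencoded`) generally cannot deserialize repeated `name[]` keys into `Vec`s; `axum_extra::extract::Form`, backed by `serde_html_form`, is the usual workaround. The reassembly itself is pure and can be sketched standalone (struct fields abbreviated from the plan's `ProcessConfig`):

```rust
#[derive(Debug, PartialEq)]
struct ProcessConfig {
    name: String,
    pattern: String,
    memory_threshold_mb: f64,
    enabled: bool,
}

// Rebuild one config per non-empty name. `enabled_idx` holds the row
// indices of checked checkboxes, as submitted by the form.
fn rebuild(
    names: &[&str],
    patterns: &[&str],
    thresholds: &[&str],
    enabled_idx: &[&str],
) -> Vec<ProcessConfig> {
    names
        .iter()
        .enumerate()
        .filter(|(_, n)| !n.trim().is_empty())
        .map(|(i, n)| ProcessConfig {
            name: n.trim().to_string(),
            pattern: patterns.get(i).map(|s| s.trim().to_lowercase()).unwrap_or_default(),
            memory_threshold_mb: thresholds.get(i).and_then(|s| s.parse().ok()).unwrap_or(0.0),
            enabled: enabled_idx.contains(&i.to_string().as_str()),
        })
        .collect()
}
```

Indexing by `enumerate` before filtering keeps rows aligned with the original arrays, matching the `continue` in the handler.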
-
-- [ ] **Step 2: Compile check**
-
-```bash
-cargo check
-```
-Expected: no errors.
-
-- [ ] **Step 3: Commit**
-
-```bash
-git add src/routes/settings.rs
-git commit -m "feat: routes settings — seuils, smtp, processus, mot de passe, port"
-```
-
----
-
-## Task 10: Alerts routes
-
-**Files:**
-- Modify: `src/routes/alerts.rs`
-
-- [ ] **Step 1: Implement alerts.rs**
-
-```rust
-use axum::{
- extract::State,
- response::{IntoResponse, Redirect},
-};
-use tower_sessions::Session;
-
-use crate::routes::{flash, get_and_clear_flash, render_html, AppState, AuthUser};
-
-pub async fn alerts_get(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let flash_messages = get_and_clear_flash(&session).await;
- let raw_alerts = {
- let cm = state.config_manager.lock().await;
- cm.load_alerts()
- };
-
-    // Format timestamps for display (YYYY-MM-DDTHH:MM:SS → YYYY-MM-DD HH:MM:SS)
-    let alerts: Vec<serde_json::Value> = raw_alerts
- .iter()
- .map(|a| {
- let mut v = serde_json::to_value(a).unwrap();
- if let Some(ts) = v.get("timestamp").and_then(|t| t.as_str()) {
- let formatted = ts
- .chars()
- .take(19)
-                    .collect::<String>()
- .replace('T', " ");
- v["timestamp_display"] = serde_json::Value::String(formatted);
- }
- v
- })
- .collect();
-
- let mut ctx = tera::Context::new();
- ctx.insert("flash_messages", &flash_messages);
- ctx.insert("is_authenticated", &true);
- ctx.insert("active_page", "alerts");
- ctx.insert("alerts", &alerts);
-
- render_html(&state.tera, "alerts.html", ctx)
-}
-
-pub async fn clear_alerts(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- {
- let cm = state.config_manager.lock().await;
- cm.clear_alerts();
- }
- flash(&session, "success", "Historique des alertes effacé.").await;
- Redirect::to("/alerts")
-}
-```
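
The timestamp formatting done inline in `alerts_get` (take the first 19 characters of an ISO-8601 string, then swap the `T` for a space) is easy to factor out and test as a pure function:

```rust
// "2026-04-02T14:05:09.123456" → "2026-04-02 14:05:09"
// Mirrors the inline logic in alerts_get; tolerant of missing fractional seconds.
fn display_timestamp(ts: &str) -> String {
    ts.chars().take(19).collect::<String>().replace('T', " ")
}
```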
-
-- [ ] **Step 2: Compile check**
-
-```bash
-cargo check
-```
-
-- [ ] **Step 3: Commit**
-
-```bash
-git add src/routes/alerts.rs
-git commit -m "feat: routes alertes"
-```
-
----
-
-## Task 11: Users routes
-
-**Files:**
-- Modify: `src/routes/users.rs`
-
-- [ ] **Step 1: Implement users.rs**
-
-```rust
-use axum::{
- extract::State,
- response::{IntoResponse, Json},
-};
-use serde_json::json;
-use tower_sessions::Session;
-
-use crate::routes::{get_and_clear_flash, render_html, AppState, AuthUser};
-
-pub async fn users_get(
-    _auth: AuthUser,
-    session: Session,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let flash_messages = get_and_clear_flash(&session).await;
- let mut ctx = tera::Context::new();
- ctx.insert("flash_messages", &flash_messages);
- ctx.insert("is_authenticated", &true);
- ctx.insert("active_page", "users");
- render_html(&state.tera, "users.html", ctx)
-}
-
-pub async fn api_users(
-    _auth: AuthUser,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let data = state.user_monitor.data.lock().unwrap().clone();
- if let Some(err) = &data.error {
- return Json(json!({ "error": err }));
- }
- if data.no_files {
- return Json(json!({ "no_files": true }));
- }
-    let users: Vec<serde_json::Value> = data
- .users
- .iter()
- .map(|u| {
- json!({
- "login": u.login,
- "status": u.status,
- "last_action_time": u.last_action_time.format("%H:%M:%S").to_string(),
- "last_action_label": u.last_action_label,
- "action_count_24h": u.action_count_24h,
- "connected_since": u.connected_since.map(|t| t.format("%H:%M").to_string()),
- "explicit_logout": u.explicit_logout,
- })
- })
- .collect();
- Json(json!({ "users": users, "hourly": data.hourly }))
-}
-
-pub async fn api_users_weekly(
-    _auth: AuthUser,
-    State(state): State<AppState>,
-) -> impl IntoResponse {
- let weekly = state.user_monitor.get_weekly_activity().await;
- Json(json!({ "weekly": weekly }))
-}
-```
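
`data.hourly` feeds the pure-CSS activity bars; the aggregation the user monitor needs is just a 24-bucket count of event hours. A stdlib-only sketch (the function name is illustrative, not from the plan):

```rust
// Count events per hour of day. `hours` holds the hour (0-23) of each
// parsed log event; out-of-range values are ignored defensively.
fn hourly_buckets(hours: &[u32]) -> [u32; 24] {
    let mut buckets = [0u32; 24];
    for &h in hours {
        if h < 24 {
            buckets[h as usize] += 1;
        }
    }
    buckets
}
```

The template can then render each bucket as a bar whose height is proportional to `count / max(counts)`.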
-
-- [ ] **Step 2: Compile check**
-
-```bash
-cargo check
-```
-
-- [ ] **Step 3: Commit**
-
-```bash
-git add src/routes/users.rs
-git commit -m "feat: routes utilisateurs Amadea"
-```
-
----
-
-## Task 12: Templates Tera
-
-**Files:**
-- Create: `templates/base.html`
-- Create: `templates/login.html`
-- Create: `templates/dashboard.html`
-- Create: `templates/settings.html`
-- Create: `templates/alerts.html`
-- Create: `templates/users.html`
-- Create: `static/style.css`
-
-- [ ] **Step 1: Create static/style.css**
-
-Copy the contents of `../supervision/static/style.css` verbatim:
-
-```css
-/* Supervision — Style */
-
-body {
- background-color: #f4f6f9;
- font-size: 0.9rem;
-}
-
-.metric-card {
- transition: border-color 0.3s;
-}
-
-.metric-card .metric-value {
- font-size: 2.2rem;
- font-weight: 700;
- line-height: 1.1;
-}
-
-.card {
- box-shadow: 0 1px 3px rgba(0, 0, 0, 0.08);
-}
-
-.badge {
- font-size: 0.75rem;
- text-transform: uppercase;
-}
-
-.table th {
- font-size: 0.8rem;
- text-transform: uppercase;
- color: #6c757d;
- border-bottom-width: 1px;
-}
-
-.navbar-brand i {
- color: #4fc3f7;
-}
-
-.border-success { border-left: 4px solid #198754 !important; }
-.border-warning { border-left: 4px solid #ffc107 !important; }
-.border-danger { border-left: 4px solid #dc3545 !important; }
-
-@media (max-width: 768px) {
- .metric-card .metric-value {
- font-size: 1.8rem;
- }
-}
-```
-
-- [ ] **Step 2: Create templates/base.html**
-
-```html
-<!DOCTYPE html>
-<html lang="fr">
-<head>
-    <meta charset="utf-8">
-    <meta name="viewport" content="width=device-width, initial-scale=1">
-    <title>{% block title %}Supervision{% endblock %}</title>
-    <!-- Bootstrap 5.3 CSS + Bootstrap Icons + /static/style.css -->
-</head>
-<body>
-    {% if is_authenticated %}
-    <!-- navbar with links to /, /users, /alerts, /settings and /logout -->
-    {% endif %}
-
-    <div class="container py-4">
-        {% if flash_messages %}
-        {% for item in flash_messages %}
-        <div class="alert alert-{{ item.0 }} alert-dismissible fade show" role="alert">
-            {{ item.1 }}
-            <button type="button" class="btn-close" data-bs-dismiss="alert"></button>
-        </div>
-        {% endfor %}
-        {% endif %}
-
-        {% if default_pw is defined and default_pw %}
-        <!-- warning banner: default admin password still in use -->
-        {% endif %}
-
-        {% block content %}{% endblock %}
-    </div>
-
-    <!-- Bootstrap 5.3 JS bundle -->
-    {% block scripts %}{% endblock %}
-</body>
-</html>
-```
-
-- [ ] **Step 3: Create templates/login.html**
-
-Port `../supervision/templates/login.html`, replacing:
-- `url_for('login')` → `/login`
-- `{% with messages = get_flashed_messages(with_categories=true) %}...{% endwith %}` → `{% if flash_messages %}{% for item in flash_messages %}...{{ item.0 }}...{{ item.1 }}...{% endfor %}{% endif %}`
-
-```html
-<!DOCTYPE html>
-<html lang="fr">
-<head>
-    <meta charset="utf-8">
-    <meta name="viewport" content="width=device-width, initial-scale=1">
-    <title>Supervision - Connexion</title>
-    <!-- Bootstrap 5.3 CSS + Bootstrap Icons + /static/style.css -->
-</head>
-<body class="bg-light">
-    <div class="container">
-        <div class="row justify-content-center align-items-center vh-100">
-            <div class="col-md-4">
-                <div class="card shadow">
-                    <div class="card-body p-4">
-                        <h3 class="text-center mb-1">Supervision</h3>
-                        <p class="text-center text-muted">Monitoring système</p>
-
-                        {% if flash_messages %}
-                        {% for item in flash_messages %}
-                        <div class="alert alert-{{ item.0 }} alert-dismissible fade show" role="alert">
-                            {{ item.1 }}
-                            <button type="button" class="btn-close" data-bs-dismiss="alert"></button>
-                        </div>
-                        {% endfor %}
-                        {% endif %}
-
-                        <form method="post" action="/login">
-                            <input class="form-control mb-3" name="username" placeholder="Utilisateur" required>
-                            <input class="form-control mb-3" type="password" name="password" placeholder="Mot de passe" required>
-                            <button class="btn btn-primary w-100" type="submit">Connexion</button>
-                        </form>
-                    </div>
-                </div>
-            </div>
-        </div>
-    </div>
-</body>
-</html>
-```
-
-- [ ] **Step 4: Create templates/dashboard.html**
-
-Port `../supervision/templates/dashboard.html`:
-- `url_for('toggle_monitoring')` → `/api/monitoring/toggle`
-- `url_for('alerts')` → `/alerts`
-- `{% extends "base.html" %}` → unchanged (Tera supports extends)
-- `{{ metrics.cpu.percent }}` → unchanged (same syntax in Tera)
-- `loop.index0` → unchanged (same syntax in Tera)
-
-```html
-{% extends "base.html" %}
-{% block title %}Supervision - Tableau de bord{% endblock %}
-
-{% block content %}
-<div class="d-flex justify-content-between align-items-center mb-3">
-    <h4>
-        Tableau de bord
-        {% if metrics and metrics.hostname %}
-        <small class="text-muted">— {{ metrics.hostname }}</small>
-        {% endif %}
-    </h4>
-    <form method="post" action="/api/monitoring/toggle">
-        <button class="btn btn-sm btn-outline-secondary" type="submit">Démarrer / Arrêter</button>
-    </form>
-</div>
-
-{% if not metrics %}
-<div class="alert alert-info">
-    Collecte des métriques en cours...
-</div>
-{% else %}
-
-<!-- System info -->
-<div class="card mb-3">
-    <div class="card-body d-flex flex-wrap gap-3">
-        <span>{{ metrics.hostname }}</span>
-        <span>{{ metrics.os }}</span>
-        <span>Uptime: {{ metrics.uptime }}</span>
-        <span>{{ metrics.cpu.cores }} cœurs</span>
-        <span>{{ metrics.ram.total_gb }} Go RAM</span>
-    </div>
-</div>
-
-<div class="row g-3 mb-3">
-    <!-- CPU -->
-    <div class="col-md-6">
-        <div class="card metric-card border-{{ metrics.cpu.status }}">
-            <div class="card-body">
-                <div class="d-flex justify-content-between">
-                    <h6>CPU</h6>
-                    <span class="badge bg-{{ metrics.cpu.status }}">{{ metrics.cpu.status }}</span>
-                </div>
-                <div class="metric-value">{{ metrics.cpu.percent }}%</div>
-                <small class="text-muted">Seuil: {{ metrics.cpu.threshold }}%</small>
-            </div>
-        </div>
-    </div>
-    <!-- RAM -->
-    <div class="col-md-6">
-        <div class="card metric-card border-{{ metrics.ram.status }}">
-            <div class="card-body">
-                <div class="d-flex justify-content-between">
-                    <h6>RAM</h6>
-                    <span class="badge bg-{{ metrics.ram.status }}">{{ metrics.ram.status }}</span>
-                </div>
-                <div class="metric-value">{{ metrics.ram.percent }}%</div>
-                <small class="text-muted">
-                    {{ metrics.ram.used_gb }} /
-                    {{ metrics.ram.total_gb }} Go
-                    — Seuil: {{ metrics.ram.threshold }}%
-                </small>
-            </div>
-        </div>
-    </div>
-    <!-- Disks -->
-    {% for disk in metrics.disks %}
-    <div class="col-md-6">
-        <div class="card metric-card border-{{ disk.status }}">
-            <div class="card-body">
-                <div class="d-flex justify-content-between">
-                    <h6>{{ disk.drive }}</h6>
-                    <span class="badge bg-{{ disk.status }}">{{ disk.status }}</span>
-                </div>
-                <div class="metric-value">{{ disk.percent }}%</div>
-                <small class="text-muted">
-                    {{ disk.used_gb }} / {{ disk.total_gb }} Go ({{ disk.free_gb }} Go libres)
-                    — Seuil: {{ disk.threshold }}%
-                </small>
-            </div>
-        </div>
-    </div>
-    {% endfor %}
-</div>
-
-<!-- Processes -->
-<div class="card mb-3">
-    <div class="table-responsive">
-        <table class="table mb-0">
-            <thead>
-                <tr>
-                    <th>Processus</th><th>Statut</th><th>Instances</th>
-                    <th>Mémoire</th><th>CPU</th><th>PID(s)</th>
-                </tr>
-            </thead>
-            <tbody>
-                {% for proc in metrics.processes %}
-                <tr>
-                    <td>{{ proc.name }} <small class="text-muted">pattern: {{ proc.pattern }}</small></td>
-                    <td>
-                        {% if not proc.enabled %}<span class="badge bg-secondary">Désactivé</span>
-                        {% elif proc.running %}<span class="badge bg-success">Actif</span>
-                        {% else %}<span class="badge bg-danger">Arrêté</span>{% endif %}
-                    </td>
-                    <td>{{ proc.instance_count }}</td>
-                    <td>
-                        {{ proc.total_memory_mb }} Mo
-                        {% if proc.memory_threshold_mb > 0 %}<small class="text-muted">seuil: {{ proc.memory_threshold_mb }} Mo</small>{% endif %}
-                    </td>
-                    <td>{{ proc.total_cpu_percent }}%</td>
-                    <td>{{ proc.pids | join(sep=", ") }}</td>
-                </tr>
-                {% endfor %}
-            </tbody>
-        </table>
-    </div>
-</div>
-
-<!-- Recent alerts -->
-<div class="card">
-    <div class="card-body" id="recent-alerts">
-        <span class="text-muted">Aucune alerte récente.</span>
-    </div>
-</div>
-
-{% endif %}
-{% endblock %}
-
-{% block scripts %}
-<!-- auto-refresh script: fetch /api/metrics and update the cards -->
-{% endblock %}
-```
-
-- [ ] **Step 5: Create templates/alerts.html**
-
-Port `../supervision/templates/alerts.html`:
-- `url_for('clear_alerts')` → `/alerts/clear`
-- `alert.timestamp[:19] | replace('T', ' ')` → `alert.timestamp_display` (formatted Rust-side in the route)
-- `alerts | length` → unchanged (same syntax in Tera)
-
-```html
-{% extends "base.html" %}
-{% block title %}Supervision - Alertes{% endblock %}
-
-{% block content %}
-<div class="d-flex justify-content-between align-items-center mb-3">
-    <h4>Historique des alertes</h4>
-    {% if alerts %}
-    <form method="post" action="/alerts/clear">
-        <button class="btn btn-sm btn-outline-danger" type="submit">Effacer</button>
-    </form>
-    {% endif %}
-</div>
-
-{% if not alerts %}
-<div class="alert alert-info">
-    Aucune alerte enregistrée.
-</div>
-{% else %}
-<div class="card">
-    <div class="table-responsive">
-        <table class="table mb-0">
-            <thead>
-                <tr><th>Date</th><th>Type</th><th>Message</th><th>Valeur</th><th>Seuil</th></tr>
-            </thead>
-            <tbody>
-                {% for alert in alerts %}
-                <tr>
-                    <td>{{ alert.timestamp_display }}</td>
-                    <td>
-                        {% if alert.type == "process_down" %}
-                        <span class="badge bg-danger">Processus</span>
-                        {% else %}
-                        <span class="badge bg-warning">Seuil</span>
-                        {% endif %}
-                    </td>
-                    <td>{{ alert.message }}</td>
-                    <td>{{ alert.value }}</td>
-                    <td>{{ alert.threshold }}</td>
-                </tr>
-                {% endfor %}
-            </tbody>
-        </table>
-    </div>
-</div>
-
-<p class="text-muted mt-2">{{ alerts | length }} alerte(s) — les 500 dernières sont conservées.</p>
-{% endif %}
-{% endblock %}
-```
-
-- [ ] **Step 6: Create templates/settings.html**
-
-Port `../supervision/templates/settings.html`, replacing every `url_for('...')` per the porting table at the top of the plan. `{% if smtp.use_tls %}checked{% endif %}` works identically in Tera.
-
-Structure: copy the contents of `../supervision/templates/settings.html` and replace:
-- `url_for('update_thresholds')` → `/settings/thresholds`
-- `url_for('update_monitoring')` → `/settings/monitoring`
-- `url_for('update_port')` → `/settings/port`
-- `url_for('update_smtp')` → `/settings/smtp`
-- `url_for('test_smtp')` → `/settings/smtp/test`
-- `url_for('update_processes')` → `/settings/processes`
-- `url_for('update_password')` → `/settings/password`
-- `url_for('update_amadea_log_path')` → `/settings/amadea-log-path`
-- `url_for('update_user_thresholds')` → `/settings/user-thresholds`
-- `{{ smtp.to_emails | join(', ') }}` → `{{ smtp.to_emails | join(sep=", ") }}`
-- `{{ smtp.password_masked }}` → `{{ smtp_password_masked }}`
-- `{% if smtp.password_masked %}{{ smtp.password_masked }}{% else %}Non défini{% endif %}` → `{% if smtp_password_masked %}{{ smtp_password_masked }}{% else %}Non défini{% endif %}`
-
-- [ ] **Step 7: Create templates/users.html**
-
-Copy `../supervision/templates/users.html` as-is; no changes are needed, since it only uses plain JS fetching `/api/users`.
-
-- [ ] **Step 8: Compile check (with embedded templates)**
-
-```bash
-cargo check
-```
-Expected: no errors.
-
-- [ ] **Step 9: Commit**
-
-```bash
-git add templates/ static/
-git commit -m "feat: templates Tera + style.css"
-```
-
----
-
-## Task 13: Main.rs — assembly + rate limiting + security headers
-
-**Files:**
-- Modify: `src/main.rs`
-
-- [ ] **Step 1: Implement main.rs**
-
-```rust
-mod config;
-mod monitor;
-mod alerter;
-mod user_monitor;
-mod routes;
-
-use std::sync::Arc;
-use tokio::sync::Mutex as AsyncMutex;
-use tower_sessions::{MemoryStore, SessionManagerLayer};
-use tower_governor::{governor::GovernorConfigBuilder, GovernorLayer};
-use tower_http::services::ServeDir;
-use axum::{
- Router,
- routing::{get, post},
- middleware,
- http::{HeaderValue, Method},
- response::IntoResponse,
-};
-
-use config::ConfigManager;
-use monitor::SystemMonitor;
-use alerter::Alerter;
-use user_monitor::UserMonitor;
-use routes::{
- AppState,
- auth::{login_get, login_post, logout},
- dashboard::{dashboard, api_metrics, toggle_monitoring},
- settings::{
- settings_get, update_thresholds, update_monitoring, update_smtp,
- test_smtp, update_processes, update_password, update_port,
- update_amadea_log_path, update_user_thresholds,
- },
- alerts::{alerts_get, clear_alerts},
- users::{users_get, api_users, api_users_weekly},
-};
-
-async fn run_server() {
- // Init config
- let config_manager = Arc::new(AsyncMutex::new(ConfigManager::new()));
-
- // Init services
- let alerter = Arc::new(Alerter);
- let monitor = Arc::new(SystemMonitor::new(
- config_manager.clone(),
- alerter.clone(),
- ));
- let user_monitor = Arc::new(UserMonitor::new(config_manager.clone()));
-
-    // Start background monitoring, with one initial collection so the
-    // dashboard has data immediately
-    monitor.clone().start().await;
-    let _ = monitor.collect().await;
- user_monitor.clone().start().await;
-
- // App state
- let state = AppState::new(
- config_manager.clone(),
- monitor,
- alerter,
- user_monitor,
- );
-
-    // Rate limiting for /login (10 req/min)
-    let governor_conf = Arc::new(
-        GovernorConfigBuilder::default()
-            .per_second(6) // 10 per minute = one request every 6 seconds
- .burst_size(10)
- .finish()
- .unwrap(),
- );
-
- // Sessions
- let session_store = MemoryStore::default();
- let session_layer = SessionManagerLayer::new(session_store)
- .with_secure(false)
- .with_name("supervision_session");
-
- // Router
- let login_routes = Router::new()
- .route("/login", post(login_post))
- .layer(GovernorLayer { config: governor_conf });
-
- let app = Router::new()
- .route("/login", get(login_get))
- .merge(login_routes)
- .route("/logout", get(logout))
- .route("/", get(dashboard))
- .route("/api/metrics", get(api_metrics))
- .route("/api/monitoring/toggle", post(toggle_monitoring))
- .route("/settings", get(settings_get))
- .route("/settings/thresholds", post(update_thresholds))
- .route("/settings/monitoring", post(update_monitoring))
- .route("/settings/smtp", post(update_smtp))
- .route("/settings/smtp/test", post(test_smtp))
- .route("/settings/processes", post(update_processes))
- .route("/settings/password", post(update_password))
- .route("/settings/port", post(update_port))
- .route("/settings/amadea-log-path", post(update_amadea_log_path))
- .route("/settings/user-thresholds", post(update_user_thresholds))
- .route("/alerts", get(alerts_get))
- .route("/alerts/clear", post(clear_alerts))
- .route("/users", get(users_get))
- .route("/api/users", get(api_users))
- .route("/api/users/activity/weekly", get(api_users_weekly))
- .nest_service("/static", ServeDir::new("static"))
- .layer(session_layer)
- .with_state(state.clone())
-        // Security headers
- .layer(axum::middleware::map_response(|mut response: axum::response::Response| async move {
- let headers = response.headers_mut();
- headers.insert("X-Content-Type-Options", HeaderValue::from_static("nosniff"));
- headers.insert("X-Frame-Options", HeaderValue::from_static("DENY"));
- headers.insert("X-XSS-Protection", HeaderValue::from_static("1; mode=block"));
- response
- }));
-
- let port = {
- let cm = state.config_manager.lock().await;
- cm.config.port
- };
-
- let addr = format!("0.0.0.0:{}", port);
- tracing::info!("Supervision démarré sur http://localhost:{}", port);
-
- let listener = tokio::net::TcpListener::bind(&addr).await.unwrap();
- axum::serve(listener, app).await.unwrap();
-}
-
-#[tokio::main]
-async fn main() {
- tracing_subscriber::fmt::init();
-
-    let args: Vec<String> = std::env::args().collect();
-
- #[cfg(windows)]
- {
- if args.get(1).map(|s| s.as_str()) == Some("install") {
- service::install_service();
- return;
- }
- if args.get(1).map(|s| s.as_str()) == Some("uninstall") {
- service::uninstall_service();
- return;
- }
-        // Detect whether we were launched by the Service Control Manager
- if service::is_running_as_service() {
- service::run_service();
- return;
- }
- }
-
-    // Console mode (development or manual launch)
- run_server().await;
-}
-
-#[cfg(windows)]
-mod service {
- pub fn install_service() {
- // Implémenté dans Task 14
- println!("Installation du service...");
- }
- pub fn uninstall_service() {
- println!("Désinstallation du service...");
- }
- pub fn is_running_as_service() -> bool {
- false // Stub pour Task 13, complété dans Task 14
- }
- pub fn run_service() {}
-}
-```
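
The governor settings above encode a simple token bucket: a burst capacity of 10 and a refill of one token every 6 seconds, which averages out to 10 requests per minute. The equivalence can be sketched with a stdlib-only bucket (this is an illustration of the math, not tower_governor's internals):

```rust
// Token bucket matching the config: burst of 10, one token refilled every 6 s.
struct Bucket {
    tokens: f64,
    capacity: f64,
    refill_per_sec: f64,
}

impl Bucket {
    // `elapsed_secs` is the time since the previous call; returns whether
    // the request is allowed, consuming one token when it is.
    fn allow(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```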
-
-- [ ] **Step 2: Test build**
-
-```bash
-cargo build
-```
-Expected: successful compilation; a few warnings are acceptable.
-
-- [ ] **Step 3: Manual test (console mode)**
-
-```bash
-cargo run
-```
-Expected: `Supervision démarré sur http://localhost:5000`.
-Open `http://localhost:5000/login` in a browser → the login page is visible.
-
-- [ ] **Step 4: Commit**
-
-```bash
-git add src/main.rs
-git commit -m "feat: main.rs — assemblage complet du serveur Axum"
-```
-
----
-
-## Task 14: Windows Service (built on Windows only)
-
-**Files:**
-- Modify: `src/main.rs` (module `service`)
-
-Note: This task must be carried out on Windows; the `#[cfg(windows)]` service module is simply compiled out on macOS/Linux.
-
-- [ ] **Step 1: Replace the `service` module in main.rs**
-
-Replace the `#[cfg(windows)] mod service { ... }` module with:
-
-```rust
-#[cfg(windows)]
-mod service {
- use std::ffi::OsString;
- use std::time::Duration;
- use windows_service::{
- define_windows_service,
- service::{
- ServiceControl, ServiceControlAccept, ServiceExitCode, ServiceState, ServiceStatus,
- ServiceType,
- },
- service_control_handler::{self, ServiceControlHandlerResult},
- service_dispatcher,
- service_manager::{ServiceManager, ServiceManagerAccess},
- service::{ServiceAccess, ServiceErrorControl, ServiceInfo, ServiceStartType},
- };
-
- const SERVICE_NAME: &str = "Supervision";
- const SERVICE_DISPLAY: &str = "Supervision - Monitoring Système";
- const SERVICE_DESCRIPTION: &str = "Surveille CPU, RAM, disques et processus. Interface web sur http://localhost:5000";
-
- pub fn install_service() {
- let manager = ServiceManager::local_computer(
- None::<&str>,
- ServiceManagerAccess::CREATE_SERVICE,
- )
- .expect("Impossible d'ouvrir le Service Manager (lancer en administrateur)");
-
- let exe_path = std::env::current_exe().unwrap();
-
- let service_info = ServiceInfo {
- name: OsString::from(SERVICE_NAME),
- display_name: OsString::from(SERVICE_DISPLAY),
- service_type: ServiceType::OWN_PROCESS,
- start_type: ServiceStartType::AutoStart,
- error_control: ServiceErrorControl::Normal,
- executable_path: exe_path,
- launch_arguments: vec![],
- dependencies: vec![],
- account_name: None,
- account_password: None,
- };
-
- let service = manager
- .create_service(&service_info, ServiceAccess::CHANGE_CONFIG)
- .expect("Impossible de créer le service");
-
- service
- .set_description(SERVICE_DESCRIPTION)
- .expect("Impossible de définir la description");
-
- println!("Service '{}' installé avec succès.", SERVICE_NAME);
- println!("Démarrer avec: sc start {}", SERVICE_NAME);
- }
-
- pub fn uninstall_service() {
- let manager = ServiceManager::local_computer(
- None::<&str>,
- ServiceManagerAccess::CONNECT,
- )
- .expect("Impossible d'ouvrir le Service Manager");
-
- let service = manager
- .open_service(SERVICE_NAME, ServiceAccess::DELETE)
- .expect("Service introuvable");
-
- service.delete().expect("Impossible de supprimer le service");
- println!("Service '{}' désinstallé.", SERVICE_NAME);
- }
-
- pub fn is_running_as_service() -> bool {
- // Heuristic: the SESSIONNAME environment variable is absent when the
- // process is launched by the Windows Service Control Manager
- std::env::var("SESSIONNAME").is_err()
- }
-
- define_windows_service!(ffi_service_main, service_main);
-
- fn service_main(_arguments: Vec<OsString>) {
- let rt = tokio::runtime::Runtime::new().unwrap();
- rt.block_on(async {
- let (shutdown_tx, shutdown_rx) = tokio::sync::oneshot::channel::<()>();
-
- let status_handle = service_control_handler::register(
- SERVICE_NAME,
- move |control| match control {
- ServiceControl::Stop => {
- let _ = shutdown_tx.send(());
- ServiceControlHandlerResult::NoError
- }
- ServiceControl::Interrogate => ServiceControlHandlerResult::NoError,
- _ => ServiceControlHandlerResult::NotImplemented,
- },
- )
- .unwrap();
-
- status_handle
- .set_service_status(ServiceStatus {
- service_type: ServiceType::OWN_PROCESS,
- current_state: ServiceState::Running,
- controls_accepted: ServiceControlAccept::STOP,
- exit_code: ServiceExitCode::Win32(0),
- checkpoint: 0,
- wait_hint: Duration::default(),
- process_id: None,
- })
- .unwrap();
-
- tokio::select! {
- _ = crate::run_server() => {},
- _ = shutdown_rx => {},
- }
-
- status_handle
- .set_service_status(ServiceStatus {
- service_type: ServiceType::OWN_PROCESS,
- current_state: ServiceState::Stopped,
- controls_accepted: ServiceControlAccept::empty(),
- exit_code: ServiceExitCode::Win32(0),
- checkpoint: 0,
- wait_hint: Duration::default(),
- process_id: None,
- })
- .unwrap();
- });
- }
-
- pub fn run_service() {
- service_dispatcher::start(SERVICE_NAME, ffi_service_main)
- .expect("Impossible de démarrer le service dispatcher");
- }
-}
-```
-
-- [ ] **Step 2: Release build on Windows**
-
-```cmd
-cargo build --release
-```
-Expected: `supervision.exe` in `target\release\`.
-
-- [ ] **Step 3: Console-mode test on Windows**
-
-```cmd
-target\release\supervision.exe
-```
-Expected: server starts on port 5000. Open `http://localhost:5000`.
-
-- [ ] **Step 4: Service installation test**
-
-```cmd
-supervision.exe install
-sc start Supervision
-```
-Expected: service started, web UI reachable at `http://localhost:5000`.
-
-- [ ] **Step 5: Final build and commit**
-
-```bash
-git add src/main.rs
-git commit -m "feat: Windows Service integration (install/uninstall/run)"
-```
-
----
-
-## Deployment notes
-
-1. Build once on Windows: `cargo build --release`
-2. Grab `target\release\supervision.exe`
-3. Copy the exe to a dedicated folder on the target server (e.g. `C:\Supervision\`)
-4. Open an administrator terminal in that folder:
- ```cmd
- supervision.exe install
- sc start Supervision
- ```
-5. Browse to `http://localhost:5000` — login: `admin` / `admin`
-6. **Change the password immediately** under Configuration → Administrator password
-
-The `data\` folder is created automatically next to the exe on first launch.
diff --git a/docs/superpowers/specs/2026-04-02-users-tab-design.md b/docs/superpowers/specs/2026-04-02-users-tab-design.md
deleted file mode 100644
index 7673ebb..0000000
--- a/docs/superpowers/specs/2026-04-02-users-tab-design.md
+++ /dev/null
@@ -1,188 +0,0 @@
-# Design — Amadea Users Tab
-
-**Date:** 2026-04-02
-**Status:** Approved
-
----
-
-## Context
-
-Add a "Users" tab to the supervision application, showing in real time the users connected to Amadea Web 8 x64, their activity status, and an hourly usage chart. The Amadea log path and the status thresholds become configurable in the existing Configuration tab.
-
----
-
-## Overall architecture
-
-### New files
-
-- **`user_monitor.py`** — `UserMonitor` class: background thread, log parsing, thread-safe cache. Exact mirror of `SystemMonitor`.
-
-### Modified files
-
-| File | Change |
-|---|---|
-| `config_manager.py` | Add the `amadea_log_path` and `user_status_thresholds` keys to the default config |
-| `app.py` | Instantiate `UserMonitor`, add the `/users` and `/api/users` routes |
-| `templates/base.html` | "Users" link in the navbar |
-| `templates/settings.html` | 2 new configuration blocks |
-| `templates/users.html` | New page (table + CSS chart) |
-
-### Data flow
-
-```
-UserMonitor (background thread)
- ├─ parse awevents_YY-MM-DD_*.log → user activity + explicit logouts
- └─ parse isoft_YY-MM-DD_*.log → session events (login/timeout)
- ↓ thread-safe cache (dict keyed by login)
-/api/users → JSON
-/users → initial Jinja2 render + JS auto-refresh (30s, same pattern as the dashboard)
-```
-
----
-
-## Parsing and data model
-
-### Selecting the day's files
-
-Log files follow the pattern `PREFIX_YY-MM-DD_N.ext` (e.g. `awevents_26-04-02_1.log`).
-
-- Today's date is formatted as `%y-%m-%d` (e.g. `26-04-02` for 2026-04-02)
-- All of the day's files are read, sorted by ascending index `N`
-- If several files exist for the same day (incremental index), they are all parsed in order
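-
-A minimal sketch of this selection (the helper name and the directory scan are illustrative assumptions, not part of the spec):
-
-```python
-import re
-from datetime import date
-from pathlib import Path
-
-def day_files(log_dir, prefix, day):
-    """Return the day's PREFIX_YY-MM-DD_N.{txt,log} files, sorted by ascending index N."""
-    stamp = day.strftime("%y-%m-%d")  # e.g. "26-04-02" for 2026-04-02
-    pattern = re.compile(rf"^{re.escape(prefix)}_{stamp}_(\d+)\.(txt|log)$")
-    indexed = []
-    for path in Path(log_dir).iterdir():
-        m = pattern.match(path.name)
-        if m:
-            # sort key is the numeric index N, not the lexicographic name
-            indexed.append((int(m.group(1)), path))
-    return [p for _, p in sorted(indexed)]
-```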
-
-### `awevents_YY-MM-DD_N.log` format (primary source)
-
-```
-2026-03-30 10:34:24.034;INFO ;;;;"login=JENKINS,action=SelectionChange,Label=BAO_Main/..."
-```
-
-Extraction regex:
-```python
-r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*login=([^,]+),action=([^,]+),Label=(.+?)"?$'
-```
-
-- `timestamp` → group 1
-- `login` → group 2 (user identifier)
-- `action` → group 3 (SelectionChange, Action, Click, ValueChange…)
-- `label` → group 4 (UI context)
-
-**Explicit logout:** a line whose label contains `se deconnecter`
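-
-A parsing sketch using that pattern (the function name is an illustrative assumption; the label group is written non-greedy, `(.+?)`, so the optional trailing quote stays out of the capture):
-
-```python
-import re
-from datetime import datetime
-
-AWEVENTS_RE = re.compile(
-    r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*login=([^,]+),action=([^,]+),Label=(.+?)"?$'
-)
-
-def parse_awevents_line(line):
-    """Return (timestamp, login, action, label) or None for non-matching lines."""
-    m = AWEVENTS_RE.match(line)
-    if not m:
-        return None
-    ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
-    return ts, m.group(2), m.group(3), m.group(4)
-```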
-
-### `isoft_YY-MM-DD_N.log` format (session-event complement)
-
-```
-2026-03-30 10:33:05.830;INFO ;"ISExecutingThread...";...;"method=OpenUserSession,...,login=JENKINS"
-```
-
-Regex pour login réussi :
-```python
-r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*method=OpenUserSession.*login=([A-Za-z0-9_]+)'
-```
-
-Regex for a session timeout:
-```python
-r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*method=closeSession'
-```
-
-### Per-user model (cache)
-
-```python
-{
-    "login": str,                  # identifier (e.g. "JENKINS")
-    "last_action_time": datetime,  # timestamp of the last action
-    "last_action_label": str,      # label of the last action (truncated to 60 chars)
-    "action_count_24h": int,       # number of actions in the last 24h
-    "status": str,                 # "actif" | "inactif" | "deconnecte"
-    "explicit_logout": bool,       # True if an explicit logout was detected
-    "connected_since": datetime | None,  # time of the user's first action of the day (first awevents entry)
-}
-```
-
-### Status rules
-
-Configurable thresholds (key `user_status_thresholds`, defaults: `active_minutes=5`, `inactive_minutes=30`):
-
-| Condition | Status |
-|---|---|
-| Explicit logout detected | DÉCONNECTÉ |
-| Last action more than `inactive_minutes` ago | DÉCONNECTÉ |
-| Last action between `active_minutes` and `inactive_minutes` ago | INACTIF |
-| Last action less than `active_minutes` ago | ACTIF |
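-
-A sketch of these rules (the function name and the handling of values exactly on a boundary are illustrative assumptions; explicit logout takes precedence over the time-based rules):
-
-```python
-from datetime import datetime, timedelta
-
-def compute_status(last_action, explicit_logout, now,
-                   active_minutes=5, inactive_minutes=30):
-    """Map a user's last action to "actif" | "inactif" | "deconnecte"."""
-    if explicit_logout:
-        return "deconnecte"
-    idle = now - last_action
-    if idle >= timedelta(minutes=inactive_minutes):
-        return "deconnecte"
-    if idle >= timedelta(minutes=active_minutes):
-        return "inactif"
-    return "actif"
-```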
-
-### Hourly activity chart
-
-Count of distinct users having at least one action per hour slot (H:00 → H:59). Data comes only from the day's files (`awevents_*.log`).
-
-For the "last 7 days" selector: if the zipped files are not available (the nominal case in production), show an `alert-info`: *"Les données historiques des jours précédents ne sont pas disponibles (fichiers archivés)."*
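-
-The hourly aggregation could look like this (names are illustrative assumptions; one set of logins per hour slot, sized at the end so each user counts once per hour):
-
-```python
-from collections import defaultdict
-from datetime import datetime
-
-def hourly_distinct_users(events):
-    """events: iterable of (timestamp: datetime, login: str).
-    Return a 24-slot list: distinct users with >= 1 action in each hour."""
-    slots = defaultdict(set)
-    for ts, login in events:
-        slots[ts.hour].add(login)
-    return [len(slots[h]) for h in range(24)]
-```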
-
----
-
-## Configuration
-
-### New keys in `config.json`
-
-```json
-{
- "amadea_log_path": "C:\\ProgramData\\ISoft\\Amadea Web 8 x64\\data\\logs",
- "user_status_thresholds": {
- "active_minutes": 5,
- "inactive_minutes": 30
- }
-}
-```
-
-### New blocks in `settings.html`
-
-**Block 1 — Amadea log path** (same `card` + `form-control` + `btn btn-primary` style):
-- Text field pre-filled with the current value
-- "Save" button
-- POST route: `/settings/amadea-log-path`
-
-**Block 2 — User status thresholds** (same style as the "Alert thresholds" block):
-- Numeric field "Active if last action < N min" (default: 5)
-- Numeric field "Inactive if last action < N min" (default: 30)
-- "Save" button
-- POST route: `/settings/user-thresholds`
-
----
-
-## Users page (`users.html`)
-
-### Table
-
-Columns: **User | Status | Last action | Actions (24h) | Since**
-
-Default sort: active → inactive → disconnected (status priority order).
-
-Bootstrap badges consistent with the existing UI:
-- ACTIF → `ACTIF`
-- INACTIF → `INACTIF`
-- DÉCONNECTÉ → `DÉCONNECTÉ`
-
-### Activity chart
-
-Pure Bootstrap CSS bars (divs with proportional heights), no external library.
-One bar per hour of the day (00h–23h), fixed width, height = `(value / max) * 100%`.
-Color: `bg-primary`. Tooltip on hover (`title` attribute).
-
-### Auto-refresh
-
-`setInterval` every 30 seconds calling `fetch('/api/users')`, same pattern as `refreshMetrics()` in `dashboard.html`.
-
-### Error handling
-
-| Case | Display |
-|---|---|
-| Log folder not found | `alert alert-warning` showing the configured path |
-| No file found for today | `alert alert-info "Aucun log disponible pour aujourd'hui"` |
-| Locked/unreadable file | Silently skipped, parsing continues |
-| No user detected | `text-muted` message in the table |
-
----
-
-## Technical choices
-
-- **Charting library**: pure Bootstrap CSS bars (no external dependency)
-- **Thread**: daemon thread (like `SystemMonitor`), stops with the application
-- **File encoding**: `utf-8` with `errors='ignore'` to tolerate invalid characters
-- **Performance**: the `isoft_*.log` files can be very large (index > 80). Parsing reads line by line without loading the whole file into memory (`for line in f`)
diff --git a/docs/superpowers/specs/2026-04-07-supervision-rust-design.md b/docs/superpowers/specs/2026-04-07-supervision-rust-design.md
deleted file mode 100644
index 012a171..0000000
--- a/docs/superpowers/specs/2026-04-07-supervision-rust-design.md
+++ /dev/null
@@ -1,151 +0,0 @@
-# Design — supervision-rs
-
-**Date:** 2026-04-07
-**Goal:** Rewrite the system-supervision application in Rust to produce a standalone, dependency-free Windows executable, deployable by simple copy.
-
----
-
-## Context
-
-The current application is in Python (Flask + psutil + PyInstaller). The mentor recommended a rewrite in Rust to get a more stable executable on Windows. The binary must be self-contained: copy it to any Windows server, install it as a service, and that's it.
-
----
-
-## Functional scope
-
-Full parity with the current Python app:
-
-- Secured web UI (admin login, session cookie)
-- Real-time dashboard: CPU, RAM, disks, monitored processes
-- Email alerts (plain SMTP or STARTTLS) with configurable cooldown
-- Configuration via the UI (thresholds, SMTP, interval, processes)
-- Alert history (max 500, automatic rotation)
-- Runs as a Windows service (starts with Windows, runs in the background)
-
----
-
-## Architecture
-
-### Project structure
-
-```
-supervision-rs/
-├── Cargo.toml
-├── src/
-│   ├── main.rs       # Entry point + Windows service integration
-│   ├── config.rs     # Read/write config.json and alerts.json
-│   ├── monitor.rs    # Metrics-collection thread + threshold evaluation
-│   ├── alerter.rs    # SMTP email sending
-│   ├── auth.rs       # Admin session, login rate limiting
-│   └── routes.rs     # Axum endpoints (dashboard, API, config, login)
-├── templates/        # Tera templates (ported from Jinja2)
-│   ├── base.html
-│   ├── dashboard.html
-│   ├── login.html
-│   ├── config.html
-│   └── alerts.html
-└── data/             # Created on first launch (gitignored)
-    ├── config.json
-    └── alerts.json
-```
-
-### Crates
-
-| Crate | Role | Python equivalent |
-|---|---|---|
-| `axum` | Async HTTP server | Flask |
-| `tokio` | Async runtime | — |
-| `tera` | HTML templates | Jinja2 |
-| `sysinfo` | CPU/RAM/disk/process metrics | psutil |
-| `lettre` | SMTP email | smtplib |
-| `serde` + `serde_json` | Config/alert serialization | json |
-| `windows-service` | Windows service integration | — |
-| `tower-sessions` | Auth sessions (signed cookie) | Flask-Login |
-| `tower_governor` | Rate limiting | Flask-Limiter |
-| `tracing` | Logging | logging |
-
----
-
-## Startup modes
-
-```
-supervision.exe            → console mode (testing, development)
-supervision.exe install    → installs the Windows service
-supervision.exe uninstall  → removes the Windows service
-(launched by Windows SCM)  → service mode (automatic, background)
-```
-
-Automatic detection in `main.rs`: if launched by the Windows Service Control Manager, run in service mode; otherwise, console mode.
-
-**Installing on a server:**
-```cmd
-supervision.exe install
-sc start Supervision
-```
-
----
-
-## Shared state (concurrency)
-
-```
-Arc<AppState>
-├── config: RwLock<Config>    # Config read/written by routes + monitor
-├── metrics: RwLock<Metrics>  # Written by monitor, read by /api/metrics
-└── alerter: Alerter          # Used by monitor
-```
-
-The monitoring thread is a background `tokio::task`. It updates `metrics` through the `RwLock`; the Axum routes read this state without blocking.
-
----
-
-## Authentication
-
-- A single admin; credentials stored in `config.json` (bcrypt-hashed password)
-- Session via signed cookie (`tower-sessions`)
-- Rate limiting on `POST /login`: 10 attempts/minute
-- All routes (except `/login`, `/static`) redirect to login when unauthenticated
-
----
-
-## Metrics & thresholds
-
-Collected via `sysinfo`:
-- CPU: overall percentage
-- RAM: used percentage, total/used/available in GB
-- Disks: physical partitions ≥ 1 GB, percentage, total/used/free space
-- Processes: pattern match on name or command line, RSS memory, CPU
-
-Status levels (identical to the Python version):
-- `ok`: < 80% of the threshold
-- `warning`: ≥ 80% of the threshold
-- `critical`: ≥ 100% of the threshold
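-
-Since the levels are identical to the Python version, the mapping can be sketched in Python (the function name is an illustrative assumption):
-
-```python
-def status_level(value, threshold):
-    """Map a metric value against its threshold: ok / warning / critical."""
-    if value >= threshold:          # at or above the threshold itself
-        return "critical"
-    if value >= 0.8 * threshold:    # within 80% of the threshold
-        return "warning"
-    return "ok"
-```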
-
----
-
-## Email alerts
-
-- Plain SMTP or STARTTLS (configurable)
-- Per-key cooldown (in memory, reset on restart)
-- Alerts triggered on: critical CPU, critical RAM, critical disk, stopped process, critical process memory
-- Stored in `alerts.json` (max 500 entries, FIFO rotation)
-
----
-
-## Persistence
-
-- `data/config.json`: full configuration (thresholds, SMTP, admin, monitored processes, interval)
-- `data/alerts.json`: alert history
-- The `data/` folder is created automatically on first launch, in the same directory as the exe
-- No database
-
----
-
-## Deployment
-
-1. Build on Windows: `cargo build --release`
-2. Grab `target/release/supervision.exe`
-3. Copy the exe alone to the target server
-4. Run `supervision.exe install`, then `sc start Supervision`
-5. Open `http://localhost:5000` in the browser
-
-No other dependency is required on the target server.