Files
rpa_vision_v3/docs/architecture_data_extraction.md
Dom d5deac3029 feat: VLM-first visual replay, separate worker, Léa package, AZERTY, HTTPS security
Visual replay pipeline:
- VLM-first: the agent calls Ollama directly to locate UI elements (resolution order sketched below)
- Template matching as fallback (strict 0.90 threshold)
- Immediate stop when an element is not found (no blind clicks)
- Replay from a raw session (/replay-session) without waiting for the VLM
- Post-action verification (screenshot hash before/after)
- Popup handling (Enter/Escape/Tab+Enter)
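
A minimal sketch of that resolution order, assuming OpenCV for the fallback; `vlm_locate` is a hypothetical stand-in for the real Ollama query, not the project's actual API:

```python
import cv2

TEMPLATE_THRESHOLD = 0.90  # strict: below this, stop rather than click blindly

def vlm_locate(screen_bgr, target_spec):
    """Hypothetical stand-in for the direct Ollama VLM query."""
    return None  # would return (x, y) when the model finds the element

def resolve_element(screen_bgr, target_spec, template_bgr=None):
    """Return (x, y) of the target, or None so the replay stops immediately."""
    pos = vlm_locate(screen_bgr, target_spec)      # 1. VLM-first
    if pos is not None:
        return pos
    if template_bgr is not None:                   # 2. template-matching fallback
        scores = cv2.matchTemplate(screen_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
        _, best, _, top_left = cv2.minMaxLoc(scores)
        if best >= TEMPLATE_THRESHOLD:
            h, w = template_bgr.shape[:2]
            return (top_left[0] + w // 2, top_left[1] + h // 2)
    return None                                    # 3. not found → no blind click
```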

Separate VLM worker:
- run_worker.py: a process distinct from the HTTP server
- File-based communication (_worker_queue.txt + _replay_active.lock), sketched below
- The HTTP server never runs VLM inference anymore → always responsive
- systemd service rpa-worker.service
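
A sketch of that file-based handoff; only the two file names come from the changelog, the polling protocol itself is an assumption:

```python
import time
from pathlib import Path

QUEUE = Path("_worker_queue.txt")    # one job per line, appended by the server
LOCK = Path("_replay_active.lock")   # present while a replay is running

def handle_vlm_job(job: str) -> None:
    """Hypothetical: the only place VLM inference happens."""

def worker_loop(poll_s: float = 0.5) -> None:
    while True:
        if LOCK.exists() and QUEUE.exists() and QUEUE.stat().st_size > 0:
            jobs = QUEUE.read_text(encoding="utf-8").splitlines()
            QUEUE.write_text("", encoding="utf-8")  # consume the batch
            for job in jobs:
                handle_vlm_job(job)
        time.sleep(poll_s)  # the HTTP process never blocks on any of this
```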

Keyboard capture:
- raw_keys (vk + press/release) for exact, layout-independent replay
- AZERTY fix: ToUnicodeEx + AltGr detection
- Enter captured as \n, Tab as \t
- Filtering of bare modifiers (stray Ctrl/Alt/Shift)
- Merging of consecutive text_input events, key_combo dedup (see the sketch below)
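
A sketch of the merge/dedup/filter pass, with assumed event shapes ({"type": "text_input", "text": ...} and {"type": "key_combo", "keys": [...]}); the real field names may differ:

```python
MODIFIERS = {"ctrl", "alt", "shift", "altgr", "win"}

def clean_key_events(events: list[dict]) -> list[dict]:
    out: list[dict] = []
    for ev in events:
        # Drop stray modifier-only combos (Ctrl/Alt/Shift pressed alone).
        if ev["type"] == "key_combo" and {k.lower() for k in ev["keys"]} <= MODIFIERS:
            continue
        # Merge consecutive text_input events into a single one.
        if ev["type"] == "text_input" and out and out[-1]["type"] == "text_input":
            out[-1]["text"] += ev["text"]
            continue
        # Deduplicate identical back-to-back key_combo events.
        if ev["type"] == "key_combo" and out and out[-1] == ev:
            continue
        out.append(dict(ev))
    return out
```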

Security & Internet:
- HTTPS via Let's Encrypt (lea.labs + vwb.labs.laurinebazin.design)
- Fixed API token in .env.local
- HTTP Basic Auth on VWB
- Security headers (HSTS, CSP, nosniff) — middleware sketched below
- CORS limited to the public domains, no more wildcard
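
The header set above, as FastAPI middleware; a minimal sketch with illustrative header values (the real CSP policy is not in the changelog):

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def security_headers(request: Request, call_next):
    response = await call_next(request)
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    response.headers["X-Content-Type-Options"] = "nosniff"
    return response
```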

Infrastructure:
- DPI awareness (SetProcessDpiAwareness) in Python + Rust (Python side sketched after this list)
- System metadata (dpi_scale, window_bounds, monitors, os_theme)
- Multi-scale template matching over [0.5, 2.0]
- Dynamic resolution (no more hardcoded 1920x1080)
- VLM prefill fix (47x speedup, 3.5s instead of 180s)
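
The Python side of the DPI call is roughly this (a guarded sketch: shcore only exists on Windows 8.1+, hence the user32 fallback):

```python
import ctypes
import sys

def set_dpi_aware() -> None:
    if sys.platform != "win32":
        return
    try:
        # 2 = PROCESS_PER_MONITOR_DPI_AWARE
        ctypes.windll.shcore.SetProcessDpiAwareness(2)
    except (AttributeError, OSError):
        ctypes.windll.user32.SetProcessDPIAware()  # pre-8.1 fallback
```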

Modules:
- core/auth/: credential vault (Fernet AES), TOTP (RFC 6238; minimal sketch below), auth handler
- core/federation/: anonymized LearningPack export/import, global FAISS
- deploy/: Léa package (config.txt, Lea.bat, install.bat, LISEZMOI.txt)
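
An RFC 6238 TOTP fits in a few standard-library lines; this sketch is not the project's core/auth/ code, just the algorithm the bullet names:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = struct.pack(">Q", int(time.time()) // period)   # RFC 6238 time step
    digest = hmac.new(key, counter, "sha1").digest()          # HOTP over the step
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```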

UX:
- OS filtering (VWB + Chat only show workflows for the current OS)
- Persistent library (local cache + SQLite)
- Hybrid clustering (window title + DBSCAN; see the sketch after this list)
- EdgeConstraints + PostConditions populated
- GraphBuilder compound actions (all keystrokes)
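
A hedged sketch of the hybrid clustering: exact grouping by window title first, then DBSCAN on state embeddings inside each group. The eps/min_samples values mirror the stream pipeline defaults; the pre-grouping step is an assumption:

```python
from collections import defaultdict

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_states(states: list[dict]) -> list[tuple[str, int]]:
    """states: [{"title": str, "embedding": np.ndarray}] → (title, cluster) per state."""
    by_title: dict[str, list[int]] = defaultdict(list)
    for i, s in enumerate(states):
        by_title[s["title"]].append(i)

    labels: list[tuple[str, int]] = [("", -1)] * len(states)
    for title, idxs in by_title.items():
        X = np.stack([states[i]["embedding"] for i in idxs])
        db = DBSCAN(eps=0.15, min_samples=2, metric="cosine").fit(X)
        for i, lbl in zip(idxs, db.labels_):
            labels[i] = (title, int(lbl))
    return labels
```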

Rust agent:
- Bearer token auth (network.rs) — the server-side check is sketched below
- sysinfo.rs (DPI, resolution, window bounds via the Win32 API)
- config.txt read automatically
- Chrome/Brave/Firefox support (not just Edge)
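
The server-side counterpart of that Bearer auth could look like this: a FastAPI dependency comparing the Authorization header against the fixed token from .env.local in constant time (a sketch; the names are illustrative):

```python
import hmac
import os

from fastapi import Header, HTTPException

API_TOKEN = os.environ.get("API_TOKEN", "")  # loaded from .env.local in practice

async def require_token(authorization: str = Header(default="")) -> None:
    scheme, _, token = authorization.partition(" ")
    if scheme.lower() != "bearer" or not hmac.compare_digest(token, API_TOKEN):
        raise HTTPException(status_code=401, detail="invalid or missing token")
```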

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 10:19:18 +01:00

{"parentUuid":null,"isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":"Tu es architecte pour le projet RPA Vision V3. L'utilisateur veut une fonctionnalité de scraping visuel structuré : le système apprend à naviguer dans une application métier (ex: DPI — Dossier Patient Informatisé), ouvre chaque dossier un par un, lit visuellement les informations à l'écran (OCR + VLM), et les structure dans une base de données.\n\nContexte du projet :\n- Architecture client/serveur : Agent V1 (Windows, no GPU) ↔ Streaming Server (Linux, RTX 5070)\n- Pipeline existant : ScreenAnalyzer (OCR docTR + UIDetector + VLM qwen2.5vl) → ScreenState → StateEmbedding → WorkflowGraph\n- Replay visuel déjà implémenté : le système sait exécuter des workflows avec résolution visuelle des cibles UI\n- 3 phases : Shadow (observation) → Copilot (supervision humaine) → Autonomous (chat-driven)\n- Chat interface sur port 5004 (agent_chat/app.py)\n- Workflows stockés en JSON dans data/training/workflows/\n\nFichiers clés à explorer :\n- agent_v0/server_v1/api_stream.py — API streaming, endpoints replay\n- agent_v0/server_v1/stream_processor.py — ScreenAnalyzer, pipeline GPU\n- agent_v0/agent_v1/core/executor.py — exécution physique d'actions\n- core/models/raw_session.py — modèles de données session\n- core/graph/graph_builder.py — construction de workflows\n- core/analysis/screen_analyzer.py — analyse d'écran (OCR, UI detection)\n- agent_chat/app.py — interface chat\n\nProduis un plan d'architecture détaillé pour cette feature d'extraction de données :\n\n1. Nouveau type d'action "extract" dans les workflows — comment l'intégrer dans le format workflow JSON existant\n2. Schéma d'extraction configurable — comment l'utilisateur définit quels champs extraire de quel écran\n3. Pipeline d'extraction — comment utiliser le VLM (qwen2.5vl ou plus gros) pour extraire des données structurées d'un screenshot\n4. Itération/pagination — comment boucler sur une liste de dossiers (concept de "for each" dans un workflow)\n5. Stockage base de données — SQLite d'abord, extensible PostgreSQL\n6. Intégration dans le cycle Shadow→Copilot→Autonomous — comment l'utilisateur enseigne l'extraction\n\nSois concret : propose des structures JSON, des noms de fichiers, des endpoints API. Pense aux edge cases (écrans variables, données manquantes, erreurs de navigation).\n\nNE PAS écrire de code. 
Uniquement un plan d'architecture avec structures de données et diagrammes.\n"},"uuid":"8032329c-101e-4013-aa60-3e5a601109b2","timestamp":"2026-03-14T21:14:09.841Z"} {"parentUuid":"8032329c-101e-4013-aa60-3e5a601109b2","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_018zaGgwBpWGBr2DDyh32bYQ","type":"message","role":"assistant","content":[{"type":"text","text":"\n\nI'll start by thoroughly exploring the existing codebase to understand the current architecture before designing the plan."}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":2,"cache_creation_input_tokens":12328,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":12328,"ephemeral_1h_input_tokens":0},"output_tokens":2,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fTXbxvgvjif9CfKQnR","type":"assistant","uuid":"7242631e-42b7-4f9c-91dd-ce498fe556dc","timestamp":"2026-03-14T21:14:14.282Z"} {"parentUuid":"7242631e-42b7-4f9c-91dd-ce498fe556dc","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_018zaGgwBpWGBr2DDyh32bYQ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01QZuwrY1gP7RTveNCV1iozY","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/agent_v0/server_v1/api_stream.py"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":2,"cache_creation_input_tokens":12328,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":12328,"ephemeral_1h_input_tokens":0},"output_tokens":2,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fTXbxvgvjif9CfKQnR","type":"assistant","uuid":"0f5a1970-9274-4d59-bf2d-830f444cb4bd","timestamp":"2026-03-14T21:14:15.108Z"} {"parentUuid":"0f5a1970-9274-4d59-bf2d-830f444cb4bd","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_018zaGgwBpWGBr2DDyh32bYQ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01L2iYUGSiD4WG1ca5Q47rCm","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/agent_v0/server_v1/stream_processor.py"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":2,"cache_creation_input_tokens":12328,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":12328,"ephemeral_1h_input_tokens":0},"output_tokens":2,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fTXbxvgvjif9CfKQnR","type":"assistant","uuid":"6144986b-9ed4-40b9-b99f-c0102f036a25","timestamp":"2026-03-14T21:14:15.249Z"} 
{"parentUuid":"6144986b-9ed4-40b9-b99f-c0102f036a25","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01QZuwrY1gP7RTveNCV1iozY","toolUseID":"toolu_01QZuwrY1gP7RTveNCV1iozY","timestamp":"2026-03-14T21:14:15.413Z","uuid":"60d598bb-1843-4180-ae2a-69da0d48bb80"} {"parentUuid":"0f5a1970-9274-4d59-bf2d-830f444cb4bd","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01QZuwrY1gP7RTveNCV1iozY","type":"tool_result","content":" 1→# agent_v0/server_v1/api_stream.py\n 2→"""\n 3→API de Streaming Temps Réel pour RPA Vision V3.\n 4→\n 5→Connecte l'Agent V1 au core pipeline via StreamProcessor.\n 6→Tous les calculs GPU (ScreenAnalyzer, CLIP, FAISS) tournent ici sur le serveur.\n 7→\n 8→Inclut les endpoints de replay pour renvoyer des ordres d'exécution à l'Agent V1.\n 9→"""\n 10→\n 11→import json\n 12→import logging\n 13→import threading\n 14→import uuid\n 15→from collections import defaultdict\n 16→from concurrent.futures import ThreadPoolExecutor\n 17→from pathlib import Path\n 18→from typing import Any, Dict, List, Optional\n 19→\n 20→from fastapi import BackgroundTasks, FastAPI, File, HTTPException, UploadFile\n 21→from pydantic import BaseModel\n 22→\n 23→from .stream_processor import StreamProcessor\n 24→from .worker_stream import StreamWorker\n 25→\n 26→logger = logging.getLogger("api_stream")\n 27→app = FastAPI(title="RPA Vision V3 - Streaming API v1")\n 28→\n 29→# Dossier des sessions live\n 30→ROOT_DIR = Path(file).parent.parent.parent\n 31→LIVE_SESSIONS_DIR = ROOT_DIR / "data" / "training" / "live_sessions"\n 32→LIVE_SESSIONS_DIR.mkdir(parents=True, exist_ok=True)\n 33→\n 34→# Instance globale partagée\n 35→processor = StreamProcessor(data_dir=str(LIVE_SESSIONS_DIR))\n 36→worker = StreamWorker(live_dir=str(LIVE_SESSIONS_DIR), processor=processor)\n 37→\n 38→# =========================================================================\n 39→# Compteur d'analyses en cours par session (pour attendre avant finalize)\n 40→# =========================================================================\n 41→_pending_analyses: Dict[str, int] = defaultdict(int)\n 42→_pending_lock = threading.Lock()\n 43→\n 44→# =========================================================================\n 45→# File d'attente de replay par session\n 46→# Chaque session a une queue d'actions à exécuter et un état de replay\n 47→# =========================================================================\n 48→_replay_lock = threading.Lock()\n 49→# session_id -> liste d'actions en attente (FIFO)\n 50→_replay_queues: Dict[str, List[Dict[str, Any]]] = defaultdict(list)\n 51→# replay_id -> état du replay (workflow_id, session_id, status, progress)\n 52→_replay_states: Dict[str, Dict[str, Any]] = {}\n 53→\n 54→\n 55→class StreamEvent(BaseModel):\n 56→ session_id: str\n 57→ timestamp: float\n 58→ event: Dict[str, Any]\n 59→\n 60→\n 61→class ReplayRequest(BaseModel):\n 62→ """Requête de lancement de replay d'un workflow."""\n 63→ workflow_id: str\n 64→ 
session_id: str\n 65→ params: Optional[Dict[str, Any]] = None\n 66→\n 67→\n 68→class ReplayResultReport(BaseModel):\n 69→ """Rapport de résultat d'exécution d'une action par l'Agent V1."""\n 70→ session_id: str\n 71→ action_id: str\n 72→ success: bool\n 73→ error: Optional[str] = None\n 74→ screenshot: Optional[str] = None # Chemin ou base64 du screenshot post-action\n 75→\n 76→\n 77→@app.on_event("startup")\n 78→async def startup():\n 79→ """Démarrer le worker et charger les workflows existants."""\n 80→ worker.start(blocking=False)\n 81→\n 82→ # Charger les workflows existants depuis le disque\n 83→ _load_existing_workflows()\n 84→ logger.info("API Streaming démarrée — StreamProcessor et Worker prêts.")\n 85→\n 86→\n 87→def _load_existing_workflows():\n 88→ """Charger les workflows JSON existants dans processor._workflows.\n 89→\n 90→ Supporte deux formats :\n 91→ - Workflow.load_from_file (format complet avec workflow_id)\n 92→ - JSON brut avec clé 'name' (format simplifié VWB/manuels)\n 93→ """\n 94→ from core.models.workflow_graph import Workflow\n 95→\n 96→ workflow_dirs = [\n 97→ ROOT_DIR / "data" / "workflows",\n 98→ ROOT_DIR / "data" / "training" / "workflows",\n 99→ LIVE_SESSIONS_DIR / "workflows",\n 100→ ]\n 101→\n 102→ loaded = 0\n 103→ for wf_dir in workflow_dirs:\n 104→ if not wf_dir.exists():\n 105→ continue\n 106→ for wf_file in wf_dir.glob(".json"):\n 107→ try:\n 108→ wf = Workflow.load_from_file(str(wf_file))\n 109→ if wf and hasattr(wf, 'workflow_id'):\n 110→ with processor._data_lock:\n 111→ processor._workflows[wf.workflow_id] = wf\n 112→ loaded += 1\n 113→ continue\n 114→ except Exception:\n 115→ pass\n 116→\n 117→ # Fallback : charger comme JSON brut et injecter un workflow_id\n 118→ try:\n 119→ wf_data = json.loads(wf_file.read_text(encoding="utf-8"))\n 120→ wf_id = wf_data.get("workflow_id") or wf_file.stem\n 121→ # Stocker le dict brut (suffisant pour _workflow_to_actions)\n 122→ with processor._data_lock:\n 123→ processor._workflows[wf_id] = wf_data\n 124→ loaded += 1\n 125→ except Exception as e:\n 126→ logger.debug(f"Skip workflow {wf_file.name}: {e}")\n 127→\n 128→ logger.info(f"Workflows chargés depuis disque: {loaded}")\n 129→\n 130→\n 131→@app.on_event("shutdown")\n 132→async def shutdown():\n 133→ worker.stop()\n 134→ logger.info("API Streaming arrêtée.")\n 135→\n 136→\n 137→# =========================================================================\n 138→# Session management\n 139→# =========================================================================\n 140→\n 141→@app.post("/api/v1/traces/stream/register")\n 142→async def register_session(session_id: str):\n 143→ """Enregistrer une nouvelle session de streaming."""\n 144→ processor.session_manager.register_session(session_id)\n 145→ # Reset des compteurs pour cette session (évite les reliquats d'une session précédente)\n 146→ with _pending_lock:\n 147→ _pending_analyses[session_id] = 0\n 148→ _analyzed_shots[session_id] = set()\n 149→ logger.info(f"Session {session_id} enregistrée (compteurs réinitialisés)")\n 150→ return {"status": "session_registered", "session_id": session_id}\n 151→\n 152→\n 153→def _ensure_session_registered(session_id: str):\n 154→ """Auto-enregistrer une session si elle n'existe pas encore.\n 155→\n 156→ Robustesse au redémarrage du serveur : l'Agent V1 ne re-register pas\n 157→ sa session, mais continue d'envoyer des events/images. 
On l'enregistre\n 158→ automatiquement à la première réception.\n 159→ """\n 160→ session = processor.session_manager.get_session(session_id)\n 161→ if session is None:\n 162→ logger.info(f"Auto-enregistrement de la session {session_id}")\n 163→ processor.session_manager.register_session(session_id)\n 164→ with _pending_lock:\n 165→ _pending_analyses[session_id] = 0\n 166→ _analyzed_shots[session_id] = set()\n 167→\n 168→\n 169→# =========================================================================\n 170→# Événements\n 171→# =========================================================================\n 172→\n 173→@app.post("/api/v1/traces/stream/event")\n 174→async def stream_event(data: StreamEvent):\n 175→ """Reçoit un événement et l'enregistre dans la session."""\n 176→ session_id = data.session_id\n 177→\n 178→ # Auto-enregistrer la session si inconnue (robustesse au redémarrage serveur)\n 179→ _ensure_session_registered(session_id)\n 180→\n 181→ # Persister sur disque (journal JSONL)\n 182→ session_path = LIVE_SESSIONS_DIR / session_id\n 183→ session_path.mkdir(exist_ok=True)\n 184→ event_file = session_path / "live_events.jsonl"\n 185→ with open(event_file, "a", encoding="utf-8") as f:\n 186→ f.write(json.dumps(data.dict()) + "\n")\n 187→\n 188→ # Traitement direct via StreamProcessor\n 189→ result = worker.process_event_direct(session_id, data.event)\n 190→ return {"status": "event_synced", "session_id": session_id, **result}\n 191→\n 192→\n 193→# =========================================================================\n 194→# Images\n 195→# =========================================================================\n 196→\n 197→# Ensemble des screenshots déjà analysés (évite les doublons de retry)\n 198→analyzed_shots: Dict[str, set] = defaultdict(set)\n 199→\n 200→# ThreadPool pour l'analyse GPU (évite de bloquer le event loop async)\n 201→gpu_executor = ThreadPoolExecutor(max_workers=1, thread_name_prefix="gpu_analysis")\n 202→\n 203→\n 204→@app.post("/api/v1/traces/stream/image")\n 205→async def stream_image(\n 206→ session_id: str,\n 207→ shot_id: str,\n 208→ file: UploadFile = File(...),\n 209→ background_tasks: BackgroundTasks = None,\n 210→):\n 211→ """Reçoit une image et déclenche l'analyse via le core pipeline."""\n 212→ # Auto-enregistrer la session si inconnue (robustesse au redémarrage serveur)\n 213→ ensure_session_registered(session_id)\n 214→\n 215→ # Sauvegarder sur disque\n 216→ session_path = LIVE_SESSIONS_DIR / session_id\n 217→ shots_dir = session_path / "shots"\n 218→ shots_dir.mkdir(parents=True, exist_ok=True)\n 219→\n 220→ file_path = shots_dir / f"{shot_id}.png"\n 221→ content = await file.read()\n 222→ with open(file_path, "wb") as f:\n 223→ f.write(content)\n 224→\n 225→ file_path_str = str(file_path)\n 226→\n 227→ # Crops : traitement léger (pas d'analyse ScreenAnalyzer)\n 228→ if "crop" in shot_id:\n 229→ result = worker.process_crop_direct(session_id, shot_id, file_path_str)\n 230→ return {"status": "crop_stored", "shot_id": shot_id, **result}\n 231→\n 232→ # Filtrer les screenshots qui ne nécessitent PAS d'analyse GPU.\n 233→ # Seuls les shot_XXXX_full (screenshots d'action) sont analysés.\n 234→ # Les autres (heartbeat, focus, res_shot) sont stockés sur disque\n 235→ # mais pas envoyés au GPU — sinon le ThreadPool (1 worker, ~10-30s/analyse)\n 236→ # est submergé et la finalisation timeout avec 0 states.\n 237→ if shot_id.startswith("heartbeat"):\n 238→ return {"status": "heartbeat_stored", "shot_id": shot_id}\n 239→ if shot_id.startswith("focus"):\n 
240→ return {"status": "focus_stored", "shot_id": shot_id}\n 241→ if shot_id.startswith("res_shot"):\n 242→ return {"status": "res_stored", "shot_id": shot_id}\n 243→ if not shot_id.startswith("shot") or "_full" not in shot_id:\n 244→ # Tout ce qui n'est pas shot_XXXX_full → stocker sans analyser\n 245→ logger.debug(f"Screenshot {shot_id} stocké sans analyse GPU")\n 246→ return {"status": "stored_no_analysis", "shot_id": shot_id}\n 247→\n 248→ # Déduplication : ne pas réanalyser un screenshot déjà traité\n 249→ with _pending_lock:\n 250→ if shot_id in _analyzed_shots[session_id]:\n 251→ logger.debug(f"Screenshot {shot_id} déjà analysé, skip")\n 252→ return {"status": "already_analyzed", "shot_id": shot_id}\n 253→ _analyzed_shots[session_id].add(shot_id)\n 254→\n 255→ # Screenshots full : analyse GPU dans un thread séparé (ne bloque pas l'event loop)\n 256→ with _pending_lock:\n 257→ _pending_analyses[session_id] += 1\n 258→ _gpu_executor.submit(_process_screenshot_thread, session_id, shot_id, file_path_str)\n 259→ return {"status": "image_queued", "shot_id": shot_id}\n 260→\n 261→\n 262→def _process_screenshot_thread(session_id: str, shot_id: str, path: str):\n 263→ """Analyse GPU d'un screenshot dans un thread séparé (ne bloque pas FastAPI)."""\n 264→ try:\n 265→ result = worker.process_screenshot_direct(session_id, shot_id, path)\n 266→ logger.info(\n 267→ f"Screenshot {shot_id} analysé en fond: "\n 268→ f"{result.get('ui_elements_count', 0)} UI, "\n 269→ f"{result.get('text_detected', 0)} textes, "\n 270→ f"indexed={result.get('embedding_indexed', False)}"\n 271→ )\n 272→ except Exception as e:\n 273→ logger.error(f"Erreur analyse screenshot {shot_id}: {e}")\n 274→ finally:\n 275→ with _pending_lock:\n 276→ _pending_analyses[session_id] = max(0, _pending_analyses[session_id] - 1)\n 277→\n 278→\n 279→# =========================================================================\n 280→# Finalisation\n 281→# =========================================================================\n 282→\n 283→@app.post("/api/v1/traces/stream/finalize")\n 284→async def finalize(session_id: str):\n 285→ """Clôture la session et construit le workflow via GraphBuilder.\n 286→\n 287→ Attend que toutes les analyses de screenshots en cours soient terminées\n 288→ avant de lancer la construction du workflow (timeout 60s).\n 289→ """\n 290→ import asyncio\n 291→\n 292→ # Attendre que les analyses background soient terminées (max 60s)\n 293→ max_wait = 60\n 294→ waited = 0\n 295→ while waited < max_wait:\n 296→ with _pending_lock:\n 297→ pending = _pending_analyses.get(session_id, 0)\n 298→ if pending <= 0:\n 299→ break\n 300→ logger.info(\n 301→ f"Finalisation {session_id}: attente de {pending} analyses en cours..."\n 302→ )\n 303→ await asyncio.sleep(1.0)\n 304→ waited += 1\n 305→\n 306→ if waited > 0:\n 307→ logger.info(\n 308→ f"Finalisation {session_id}: analyses terminées après {waited}s d'attente"\n 309→ )\n 310→\n 311→ logger.info(f"Finalisation de la session {session_id}")\n 312→ result = worker.finalize_session(session_id)\n 313→ return {"status": "finalized", **result}\n 314→\n 315→\n 316→# =========================================================================\n 317→# Monitoring\n 318→# =========================================================================\n 319→\n 320→@app.get("/api/v1/traces/stream/stats")\n 321→async def get_stats():\n 322→ """Statistiques du serveur de streaming."""\n 323→ return worker.stats\n 324→\n 325→\n 326→@app.get("/api/v1/traces/stream/sessions")\n 327→async def 
list_sessions():\n 328→ """Lister toutes les sessions (actives et finalisées)."""\n 329→ return {"sessions": processor.list_sessions()}\n 330→\n 331→\n 332→@app.get("/api/v1/traces/stream/workflows")\n 333→async def list_workflows():\n 334→ """Lister tous les workflows construits."""\n 335→ return {"workflows": processor.list_workflows()}\n 336→\n 337→\n 338→@app.get("/api/v1/traces/stream/session/{session_id}")\n 339→async def get_session(session_id: str):\n 340→ """État d'une session."""\n 341→ session = processor.session_manager.get_session(session_id)\n 342→ if not session:\n 343→ raise HTTPException(status_code=404, detail=f"Session {session_id} non trouvée")\n 344→ return {\n 345→ "session_id": session.session_id,\n 346→ "events_count": len(session.events),\n 347→ "screenshots_count": len(session.shot_paths),\n 348→ "last_window": session.last_window_info,\n 349→ "created_at": session.created_at.isoformat(),\n 350→ "last_activity": session.last_activity.isoformat(),\n 351→ "finalized": session.finalized,\n 352→ }\n 353→\n 354→\n 355→# =========================================================================\n 356→# Replay — Exécution de workflows sur l'Agent V1\n 357→# =========================================================================\n 358→\n 359→\n 360→def find_active_agent_session() -> Optional[str]:\n 361→ """Trouver la dernière session Agent V1 pour le replay.\n 362→\n 363→ Stratégie en 2 passes :\n 364→ 1. D'abord chercher une session non-finalisée (Agent V1 actif)\n 365→ 2. Sinon, prendre la plus récente même finalisée (Agent V1 peut avoir\n 366→ redémarré et créé une nouvelle session, ou la session a été finalisée\n 367→ par timeout mais l'agent est toujours là)\n 368→\n 369→ Dans les deux cas, on ne considère que les sessions 'sess' (Agent V1).\n 370→ """\n 371→ with processor.session_manager._lock:\n 372→ all_agent_sessions = [\n 373→ s for s in processor.session_manager.sessions.values()\n 374→ if s.session_id.startswith("sess")\n 375→ ]\n 376→\n 377→ if not all_agent_sessions:\n 378→ return None\n 379→\n 380→ # Trier par session_id (contient un timestamp) — plus récent d'abord\n 381→ all_agent_sessions.sort(key=lambda s: s.session_id, reverse=True)\n 382→\n 383→ # Passe 1 : préférer une session non-finalisée\n 384→ for s in all_agent_sessions:\n 385→ if not s.finalized:\n 386→ return s.session_id\n 387→\n 388→ # Passe 2 : fallback sur la plus récente (même finalisée)\n 389→ # L'Agent V1 poll /replay/next indépendamment de l'état finalized\n 390→ return all_agent_sessions[0].session_id\n 391→\n 392→\n 393→def _workflow_to_actions(workflow, params: Optional[Dict[str, Any]] = None) -> List[Dict[str, Any]]:\n 394→ """\n 395→ Convertir un workflow (nodes + edges ordonnés) en liste d'actions normalisées.\n 396→\n 397→ Parcourt le graphe depuis les entry_nodes en suivant les edges.\n 398→ Chaque edge produit une action normalisée avec coordonnées en pourcentage.\n 399→ """\n 400→ actions = []\n 401→ params = params or {}\n 402→\n 403→ # Construire un index des edges sortants par node\n 404→ outgoing: Dict[str, list] = defaultdict(list)\n 405→ for edge in workflow.edges:\n 406→ outgoing[edge.from_node].append(edge)\n 407→\n 408→ # Parcours linéaire depuis le premier entry_node\n 409→ visited = set()\n 410→ current_nodes = list(workflow.entry_nodes) if workflow.entry_nodes else []\n 411→\n 412→ # Fallback : si pas d'entry_nodes, prendre le premier node\n 413→ if not current_nodes and workflow.nodes:\n 414→ current_nodes = [workflow.nodes[0].node_id]\n 415→\n 416→ while 
current_nodes:\n 417→ node_id = current_nodes.pop(0)\n 418→ if node_id in visited:\n 419→ continue\n 420→ visited.add(node_id)\n 421→\n 422→ edges = outgoing.get(node_id, [])\n 423→ for edge in edges:\n 424→ edge_actions = _edge_to_normalized_actions(edge, params)\n 425→ actions.extend(edge_actions)\n 426→ # Suivre le graphe vers le prochain node\n 427→ if edge.to_node not in visited:\n 428→ current_nodes.append(edge.to_node)\n 429→\n 430→ return actions\n 431→\n 432→\n 433→def _edge_to_normalized_actions(edge, params: Dict[str, Any]) -> List[Dict[str, Any]]:\n 434→ """\n 435→ Convertir un WorkflowEdge en liste d'actions normalisées pour l'Agent V1.\n 436→\n 437→ Un edge simple produit 1 action, un edge compound produit N actions (une par step).\n 438→ """\n 439→ action = edge.action\n 440→ action_type = action.type\n 441→ target = action.target\n 442→ action_params = action.parameters or {}\n 443→\n 444→ # Extraire les coordonnées normalisées depuis TargetSpec.by_position\n 445→ x_pct = 0.0\n 446→ y_pct = 0.0\n 447→ if target.by_position:\n 448→ px, py = target.by_position\n 449→ if px <= 1.0 and py <= 1.0:\n 450→ x_pct = px\n 451→ y_pct = py\n 452→ else:\n 453→ ref_w = action_params.get("ref_width", 1920)\n 454→ ref_h = action_params.get("ref_height", 1080)\n 455→ x_pct = round(px / ref_w, 6)\n 456→ y_pct = round(py / ref_h, 6)\n 457→\n 458→ base = {"edge_id": edge.edge_id, "from_node": edge.from_node, "to_node": edge.to_node}\n 459→\n 460→ # Compound : décomposer en actions individuelles\n 461→ if action_type == "compound":\n 462→ return expand_compound_steps(action_params.get("steps", []), base, params)\n 463→\n 464→ # Actions simples\n 465→ normalized = {**base, "action_id": f"act{uuid.uuid4().hex[:8]}"}\n 466→\n 467→ if action_type == "mouse_click":\n 468→ normalized["type"] = "click"\n 469→ normalized["x_pct"] = x_pct\n 470→ normalized["y_pct"] = y_pct\n 471→ normalized["button"] = action_params.get("button", "left")\n 472→\n 473→ elif action_type == "text_input":\n 474→ normalized["type"] = "type"\n 475→ text = action_params.get("text", "")\n 476→ text = substitute_variables(text, params, action_params.get("defaults", {}))\n 477→ normalized["text"] = text\n 478→ normalized["x_pct"] = x_pct\n 479→ normalized["y_pct"] = y_pct\n 480→\n 481→ elif action_type == "key_press":\n 482→ normalized["type"] = "key_combo"\n 483→ keys = action_params.get("keys", [])\n 484→ if not keys and action_params.get("key"):\n 485→ keys = [action_params["key"]]\n 486→ normalized["keys"] = keys\n 487→\n 488→ else:\n 489→ logger.warning(f"Type d'action inconnu : {action_type}")\n 490→ return []\n 491→\n 492→ # Ajouter le target_spec complet pour la résolution visuelle\n 493→ target_spec = {}\n 494→ if target.by_role:\n 495→ target_spec["by_role"] = target.by_role\n 496→ normalized["target_role"] = target.by_role # Compat debug\n 497→ if target.by_text:\n 498→ target_spec["by_text"] = target.by_text\n 499→ normalized["target_text"] = target.by_text # Compat debug\n 500→ if hasattr(target, 'context_hints') and target.context_hints:\n 501→ target_spec["context_hints"] = target.context_hints\n 502→ if target_spec:\n 503→ normalized["target_spec"] = target_spec\n 504→ normalized["visual_mode"] = True # Signal à l'agent d'utiliser la résolution visuelle\n 505→\n 506→ return [normalized]\n 507→\n 508→\n 509→def substitute_variables(text: str, params: Dict[str, Any], defaults: Dict[str, Any]) -> str:\n 510→ """Substituer les variables ${var} dans un texte.\n 511→\n 512→ Priorité : params utilisateur > defaults du 
workflow > texte brut inchangé.\n 513→ Supporte ${var} dans un texte plus long (ex: "${expression}=").\n 514→ """\n 515→ import re\n 516→\n 517→ def replacer(match):\n 518→ var_name = match.group(1)\n 519→ return str(params.get(var_name, defaults.get(var_name, match.group(0))))\n 520→\n 521→ return re.sub(r'\$\{(\w+)\}', replacer, text)\n 522→\n 523→\n 524→def expand_compound_steps(\n 525→ steps: List[Dict[str, Any]], base: Dict[str, Any], params: Dict[str, Any]\n 526→) -> List[Dict[str, Any]]:\n 527→ """Décomposer les steps d'un compound en actions individuelles."""\n 528→ actions = []\n 529→ for step in steps:\n 530→ step_type = step.get("type", "unknown")\n 531→ action = {\n 532→ **base,\n 533→ "action_id": f"act{uuid.uuid4().hex[:8]}",\n 534→ }\n 535→\n 536→ if step_type == "key_press":\n 537→ action["type"] = "key_combo"\n 538→ keys = step.get("keys", [])\n 539→ if not keys and step.get("key"):\n 540→ keys = [step["key"]]\n 541→ action["keys"] = keys\n 542→\n 543→ elif step_type == "text_input":\n 544→ action["type"] = "type"\n 545→ text = step.get("text", "")\n 546→ text = substitute_variables(text, params, {})\n 547→ action["text"] = text\n 548→\n 549→ elif step_type == "wait":\n 550→ action["type"] = "wait"\n 551→ action["duration_ms"] = step.get("duration_ms", 500)\n 552→\n 553→ elif step_type == "mouse_click":\n 554→ action["type"] = "click"\n 555→ action["x_pct"] = step.get("x_pct", 0.0)\n 556→ action["y_pct"] = step.get("y_pct", 0.0)\n 557→ action["button"] = step.get("button", "left")\n 558→\n 559→ else:\n 560→ logger.debug(f"Step compound inconnu : {step_type}")\n 561→ continue\n 562→\n 563→ actions.append(action)\n 564→\n 565→ return actions\n 566→\n 567→\n 568→@app.post("/api/v1/traces/stream/replay")\n 569→async def start_replay(request: ReplayRequest):\n 570→ """\n 571→ Lancer le replay d'un workflow sur une session Agent V1 active.\n 572→\n 573→ Le serveur charge le workflow, le convertit en liste d'actions normalisées,\n 574→ et les place dans la queue de la session. L'Agent V1 les récupérera\n 575→ via GET /replay/next (modèle pull).\n 576→\n 577→ Si session_id commence par "chat" ou est vide, on détecte automatiquement\n 578→ la dernière session Agent V1 active (non finalisée, préfixe "sess").\n 579→ """\n 580→ workflow_id = request.workflow_id\n 581→ session_id = request.session_id\n 582→ params = request.params or {}\n 583→\n 584→ # Auto-détection de la session Agent V1 active\n 585→ if not session_id or session_id.startswith("chat"):\n 586→ active_session = _find_active_agent_session()\n 587→ if active_session:\n 588→ logger.info(\n 589→ f"Auto-détection session Agent V1 : {active_session} "\n 590→ f"(demandé: {session_id})"\n 591→ )\n 592→ session_id = active_session\n 593→ else:\n 594→ raise HTTPException(\n 595→ status_code=404,\n 596→ detail="Aucune session Agent V1 active. "\n 597→ "Lancez l'Agent V1 et démarrez une session d'abord."\n 598→ )\n 599→\n 600→ # Vérifier que le workflow existe\n 601→ with processor._data_lock:\n 602→ workflow = processor._workflows.get(workflow_id)\n 603→\n 604→ if not workflow:\n 605→ raise HTTPException(\n 606→ status_code=404,\n 607→ detail=f"Workflow '{workflow_id}' non trouvé. 
"\n 608→ f"Workflows disponibles : {list(processor._workflows.keys())}"\n 609→ )\n 610→\n 611→ # Convertir le workflow en actions normalisées\n 612→ actions = workflow_to_actions(workflow, params)\n 613→ if not actions:\n 614→ raise HTTPException(\n 615→ status_code=400,\n 616→ detail=f"Le workflow '{workflow_id}' ne contient aucune action exécutable."\n 617→ )\n 618→\n 619→ # Créer l'identifiant de replay\n 620→ replay_id = f"replay{uuid.uuid4().hex[:8]}"\n 621→\n 622→ # Injecter les actions dans la queue de la session\n 623→ with _replay_lock:\n 624→ _replay_queues[session_id] = list(actions) # Remplacer la queue existante\n 625→ _replay_states[replay_id] = {\n 626→ "replay_id": replay_id,\n 627→ "workflow_id": workflow_id,\n 628→ "session_id": session_id,\n 629→ "status": "running",\n 630→ "total_actions": len(actions),\n 631→ "completed_actions": 0,\n 632→ "failed_actions": 0,\n 633→ "current_action_index": 0,\n 634→ "params": params,\n 635→ "results": [], # Historique des résultats action par action\n 636→ }\n 637→\n 638→ logger.info(\n 639→ f"Replay démarré : {replay_id} | workflow={workflow_id} | "\n 640→ f"session={session_id} | {len(actions)} actions à exécuter"\n 641→ )\n 642→\n 643→ return {\n 644→ "replay_id": replay_id,\n 645→ "status": "running",\n 646→ "workflow_id": workflow_id,\n 647→ "session_id": session_id,\n 648→ "total_actions": len(actions),\n 649→ }\n 650→\n 651→\n 652→@app.get("/api/v1/traces/stream/replay/next")\n 653→async def get_next_action(session_id: str):\n 654→ """\n 655→ L'Agent V1 poll cet endpoint pour récupérer la prochaine action à exécuter.\n 656→\n 657→ Retourne la prochaine action de la queue ou {"action": null} si rien.\n 658→ Modèle pull : l'agent demande, pas de WebSocket nécessaire.\n 659→\n 660→ Si la session de l'agent n'a pas d'actions en attente, cherche dans les\n 661→ autres queues (le replay peut avoir été assigné à une session différente\n 662→ suite à un redémarrage serveur/agent).\n 663→ """\n 664→ with _replay_lock:\n 665→ queue = _replay_queues.get(session_id, [])\n 666→\n 667→ if not queue:\n 668→ # Chercher dans les autres queues (mismatch de session après redémarrage)\n 669→ for other_sid, other_queue in _replay_queues.items():\n 670→ if other_queue and other_sid != session_id:\n 671→ logger.info(\n 672→ f"Replay cross-session: {session_id} récupère une action "\n 673→ f"de la queue {other_sid}"\n 674→ )\n 675→ queue = other_queue\n 676→ # Mettre à jour le replay_state pour pointer vers la bonne session\n 677→ for state in _replay_states.values():\n 678→ if state["session_id"] == other_sid and state["status"] == "running":\n 679→ state["session_id"] = session_id\n 680→ # Transférer la queue\n 681→ _replay_queues[session_id] = other_queue\n 682→ del _replay_queues[other_sid]\n 683→ break\n 684→\n 685→ if not queue:\n 686→ return {"action": None, "session_id": session_id}\n 687→\n 688→ # Retirer la première action de la queue (FIFO)\n 689→ action = queue.pop(0)\n 690→\n 691→ logger.info(\n 692→ f"Action envoyée à {session_id} : {action.get('type')} "\n 693→ f"(id={action.get('action_id')})"\n 694→ )\n 695→\n 696→ return {"action": action, "session_id": session_id}\n 697→\n 698→\n 699→@app.post("/api/v1/traces/stream/replay/result")\n 700→async def report_action_result(report: ReplayResultReport):\n 701→ """\n 702→ L'Agent V1 renvoie le résultat d'exécution d'une action.\n 703→\n 704→ Permet au serveur de suivre la progression et de détecter les échecs.\n 705→ En cas d'échec, le replay est marqué en erreur (pas de retry 
automatique).\n 706→ """\n 707→ session_id = report.session_id\n 708→ action_id = report.action_id\n 709→\n 710→ # Trouver le replay correspondant à cette session\n 711→ with _replay_lock:\n 712→ replay_state = None\n 713→ for state in _replay_states.values():\n 714→ if state["session_id"] == session_id and state["status"] == "running":\n 715→ replay_state = state\n 716→ break\n 717→\n 718→ if not replay_state:\n 719→ logger.warning(\n 720→ f"Résultat reçu pour session {session_id} mais aucun replay actif"\n 721→ )\n 722→ return {"status": "no_active_replay", "session_id": session_id}\n 723→\n 724→ # Enregistrer le résultat\n 725→ result_entry = {\n 726→ "action_id": action_id,\n 727→ "success": report.success,\n 728→ "error": report.error,\n 729→ "has_screenshot": bool(report.screenshot),\n 730→ }\n 731→ replay_state["results"].append(result_entry)\n 732→\n 733→ if report.success:\n 734→ replay_state["completed_actions"] += 1\n 735→ else:\n 736→ replay_state["failed_actions"] += 1\n 737→ # En cas d'échec, on marque le replay en erreur et on vide la queue\n 738→ replay_state["status"] = "error"\n 739→ _replay_queues[session_id] = []\n 740→ logger.error(\n 741→ f"Replay {replay_state['replay_id']} échoué à l'action {action_id} : "\n 742→ f"{report.error}"\n 743→ )\n 744→\n 745→ # Vérifier si le replay est terminé (queue vide + dernière action réussie)\n 746→ remaining = len(_replay_queues.get(session_id, []))\n 747→ if remaining == 0 and replay_state["status"] == "running":\n 748→ replay_state["status"] = "completed"\n 749→ logger.info(\n 750→ f"Replay {replay_state['replay_id']} terminé avec succès : "\n 751→ f"{replay_state['completed_actions']}/{replay_state['total_actions']} actions"\n 752→ )\n 753→\n 754→ return {\n 755→ "status": "recorded",\n 756→ "action_id": action_id,\n 757→ "success": report.success,\n 758→ "replay_status": replay_state["status"],\n 759→ "remaining_actions": remaining,\n 760→ }\n 761→\n 762→\n 763→@app.get("/api/v1/traces/stream/replay/{replay_id}")\n 764→async def get_replay_status(replay_id: str):\n 765→ """Consulter l'état d'un replay en cours ou terminé."""\n 766→ with _replay_lock:\n 767→ state = _replay_states.get(replay_id)\n 768→\n 769→ if not state:\n 770→ raise HTTPException(\n 771→ status_code=404, detail=f"Replay '{replay_id}' non trouvé"\n 772→ )\n 773→\n 774→ return state\n 775→\n 776→\n 777→@app.get("/api/v1/traces/stream/replays")\n 778→async def list_replays():\n 779→ """Lister tous les replays (actifs, terminés, en erreur)."""\n 780→ with _replay_lock:\n 781→ return {"replays": list(_replay_states.values())}\n 782→\n 783→\n 784→# =========================================================================\n 785→# Visual Replay — Résolution visuelle des cibles\n 786→# =========================================================================\n 787→\n 788→\n 789→class ResolveTargetRequest(BaseModel):\n 790→ """Requête de résolution visuelle d'une cible."""\n 791→ session_id: str\n 792→ screenshot_b64: str # Screenshot JPEG en base64\n 793→ target_spec: Dict[str, Any] # {by_role, by_text, by_position, ...}\n 794→ fallback_x_pct: float = 0.0 # Coordonnées de fallback\n 795→ fallback_y_pct: float = 0.0\n 796→ screen_width: int = 1920\n 797→ screen_height: int = 1080\n 798→\n 799→\n 800→@app.post("/api/v1/traces/stream/replay/resolve_target")\n 801→async def resolve_target(request: ResolveTargetRequest):\n 802→ """\n 803→ Résoudre visuellement une cible UI à partir d'un screenshot.\n 804→\n 805→ L'Agent V1 envoie un screenshot + target_spec AVANT 
d'exécuter l'action.\n 806→ Le serveur analyse l'image avec UIDetector/OCR et retourne les coordonnées\n 807→ de l'élément trouvé.\n 808→\n 809→ Stratégie de matching (par priorité) :\n 810→ 1. by_text — chercher un élément dont le label contient le texte\n 811→ 2. by_role — chercher un élément avec le bon rôle sémantique\n 812→ 3. by_text + by_role — intersection des deux\n 813→ 4. fallback — utiliser les coordonnées statiques\n 814→ """\n 815→ import base64\n 816→ import io\n 817→ import tempfile\n 818→\n 819→ from PIL import Image\n 820→\n 821→ # Décoder le screenshot\n 822→ try:\n 823→ img_bytes = base64.b64decode(request.screenshot_b64)\n 824→ img = Image.open(io.BytesIO(img_bytes))\n 825→ except Exception as e:\n 826→ logger.error(f"Décodage screenshot échoué: {e}")\n 827→ return _fallback_response(request, "decode_error", str(e))\n 828→\n 829→ # Sauver temporairement pour les analyseurs (ils attendent un chemin fichier)\n 830→ with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as tmp:\n 831→ img.save(tmp, format="JPEG", quality=90)\n 832→ tmp_path = tmp.name\n 833→\n 834→ try:\n 835→ # Lancer la résolution visuelle dans le thread GPU\n 836→ import asyncio\n 837→ loop = asyncio.get_event_loop()\n 838→ result = await loop.run_in_executor(\n 839→ _gpu_executor,\n 840→ _resolve_target_sync,\n 841→ tmp_path,\n 842→ request.target_spec,\n 843→ request.screen_width,\n 844→ request.screen_height,\n 845→ request.fallback_x_pct,\n 846→ request.fallback_y_pct,\n 847→ )\n 848→ return result\n 849→ except Exception as e:\n 850→ logger.error(f"Résolution visuelle échouée: {e}")\n 851→ return _fallback_response(request, "analysis_error", str(e))\n 852→ finally:\n 853→ import os\n 854→ try:\n 855→ os.unlink(tmp_path)\n 856→ except OSError:\n 857→ pass\n 858→\n 859→\n 860→def _resolve_target_sync(\n 861→ screenshot_path: str,\n 862→ target_spec: Dict[str, Any],\n 863→ screen_width: int,\n 864→ screen_height: int,\n 865→ fallback_x_pct: float,\n 866→ fallback_y_pct: float,\n 867→) -> Dict[str, Any]:\n 868→ """Résoudre la cible visuellement (exécuté dans le thread GPU)."""\n 869→ processor._ensure_initialized()\n 870→\n 871→ if processor._screen_analyzer is None:\n 872→ return {\n 873→ "resolved": False,\n 874→ "method": "fallback",\n 875→ "reason": "screen_analyzer_unavailable",\n 876→ "x_pct": fallback_x_pct,\n 877→ "y_pct": fallback_y_pct,\n 878→ }\n 879→\n 880→ # Analyser le screenshot (Niveaux 1-3 : raw, OCR, UI elements)\n 881→ try:\n 882→ screen_state = processor._screen_analyzer.analyze(screenshot_path)\n 883→ except Exception as e:\n 884→ logger.warning(f"Analyse screenshot échouée: {e}")\n 885→ return {\n 886→ "resolved": False,\n 887→ "method": "fallback",\n 888→ "reason": f"analysis_failed: {e}",\n 889→ "x_pct": fallback_x_pct,\n 890→ "y_pct": fallback_y_pct,\n 891→ }\n 892→\n 893→ ui_elements = screen_state.ui_elements or []\n 894→ if not ui_elements:\n 895→ logger.info("Aucun élément UI détecté, fallback coordonnées")\n 896→ return {\n 897→ "resolved": False,\n 898→ "method": "fallback",\n 899→ "reason": "no_ui_elements",\n 900→ "x_pct": fallback_x_pct,\n 901→ "y_pct": fallback_y_pct,\n 902→ }\n 903→\n 904→ # Matching de la cible parmi les éléments détectés\n 905→ by_text = target_spec.get("by_text", "")\n 906→ by_role = target_spec.get("by_role", "")\n 907→ candidates = []\n 908→\n 909→ for elem in ui_elements:\n 910→ score = 0.0\n 911→\n 912→ # Score par texte (label)\n 913→ if by_text and elem.label:\n 914→ text_lower = by_text.lower()\n 915→ label_lower = 
elem.label.lower()\n 916→ if text_lower in label_lower or label_lower in text_lower:\n 917→ score += 0.6\n 918→ elif _fuzzy_match(text_lower, label_lower):\n 919→ score += 0.3\n 920→\n 921→ # Score par rôle\n 922→ if by_role:\n 923→ role_lower = by_role.lower()\n 924→ if elem.role and role_lower in elem.role.lower():\n 925→ score += 0.3\n 926→ if elem.type and role_lower in elem.type.lower():\n 927→ score += 0.2\n 928→\n 929→ if score > 0:\n 930→ candidates.append((elem, score))\n 931→\n 932→ if not candidates:\n 933→ logger.info(\n 934→ f"Aucun match visuel pour target(text='{by_text}', role='{by_role}') "\n 935→ f"parmi {len(ui_elements)} éléments"\n 936→ )\n 937→ return {\n 938→ "resolved": False,\n 939→ "method": "fallback",\n 940→ "reason": "no_match",\n 941→ "x_pct": fallback_x_pct,\n 942→ "y_pct": fallback_y_pct,\n 943→ "ui_elements_count": len(ui_elements),\n 944→ }\n 945→\n 946→ # Trier par score décroissant et prendre le meilleur\n 947→ candidates.sort(key=lambda c: c[1], reverse=True)\n 948→ best_elem, best_score = candidates[0]\n 949→\n 950→ # Convertir les coordonnées pixel en proportions\n 951→ cx, cy = best_elem.center\n 952→ x_pct = round(cx / screen_width, 6) if screen_width > 0 else 0.0\n 953→ y_pct = round(cy / screen_height, 6) if screen_height > 0 else 0.0\n 954→\n 955→ logger.info(\n 956→ f"Cible résolue visuellement: '{best_elem.label}' ({best_elem.type}/{best_elem.role}) "\n 957→ f"score={best_score:.2f} → ({x_pct:.4f}, {y_pct:.4f})"\n 958→ )\n 959→\n 960→ return {\n 961→ "resolved": True,\n 962→ "method": "visual",\n 963→ "x_pct": x_pct,\n 964→ "y_pct": y_pct,\n 965→ "matched_element": {\n 966→ "label": best_elem.label,\n 967→ "type": best_elem.type,\n 968→ "role": best_elem.role,\n 969→ "center": list(best_elem.center),\n 970→ "confidence": best_elem.label_confidence,\n 971→ },\n 972→ "score": best_score,\n 973→ "candidates_count": len(candidates),\n 974→ "ui_elements_count": len(ui_elements),\n 975→ }\n 976→\n 977→\n 978→def _fuzzy_match(a: str, b: str, threshold: float = 0.6) -> bool:\n 979→ """Match approximatif par ratio de caractères communs."""\n 980→ if not a or not b:\n 981→ return False\n 982→ common = sum(1 for c in a if c in b)\n 983→ return (common / max(len(a), len(b))) >= threshold\n 984→\n 985→\n 986→def _fallback_response(request: ResolveTargetRequest, reason: str, detail: str) -> Dict:\n 987→ """Réponse de fallback quand la résolution visuelle échoue."""\n 988→ return {\n 989→ "resolved": False,\n 990→ "method": "fallback",\n 991→ "reason": reason,\n 992→ "detail": detail,\n 993→ "x_pct": request.fallback_x_pct,\n 994→ "y_pct": request.fallback_y_pct,\n 995→ }\n 996→\n 997→\n 998→if name == "main":\n 999→ import uvicorn\n 1000→\n 1001→ logging.basicConfig(\n 1002→ level=logging.INFO,\n 1003→ format="%(asctime)s [API-STREAM] %(message)s",\n 1004→ )\n 1005→ uvicorn.run(app, host="0.0.0.0", port=5005)\n 1006→"}]},"uuid":"140e8a34-f399-4ce3-a0b1-be995603f6ef","timestamp":"2026-03-14T21:14:15.413Z","sourceToolAssistantUUID":"0f5a1970-9274-4d59-bf2d-830f444cb4bd"} 
{"parentUuid":"140e8a34-f399-4ce3-a0b1-be995603f6ef","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01L2iYUGSiD4WG1ca5Q47rCm","toolUseID":"toolu_01L2iYUGSiD4WG1ca5Q47rCm","timestamp":"2026-03-14T21:14:15.256Z","uuid":"61628ca4-d96c-43a2-b999-0e3cdb14563f"} {"parentUuid":"6144986b-9ed4-40b9-b99f-c0102f036a25","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01L2iYUGSiD4WG1ca5Q47rCm","type":"tool_result","content":" 1→"""\n 2→StreamProcessor — Pont entre le streaming Agent V1 et le core pipeline RPA Vision V3.\n 3→\n 4→Orchestre les composants core (ScreenAnalyzer, CLIP, FAISS, GraphBuilder)\n 5→pour traiter en temps réel les screenshots et événements reçus via fibre.\n 6→\n 7→Tous les calculs GPU tournent ici (serveur RTX 5070).\n 8→"""\n 9→\n 10→import logging\n 11→import threading\n 12→from datetime import datetime\n 13→from pathlib import Path\n 14→from typing import Any, Dict, List, Optional\n 15→\n 16→import numpy as np\n 17→\n 18→from .live_session_manager import LiveSessionManager\n 19→\n 20→logger = logging.getLogger(name)\n 21→\n 22→\n 23→class StreamProcessor:\n 24→ """\n 25→ Processeur de streaming qui connecte les données Agent V1 au core pipeline.\n 26→\n 27→ Cycle de vie :\n 28→ 1. register_session() — crée l'état mémoire\n 29→ 2. process_event() — accumule événements, extrait contexte fenêtre\n 30→ 3. process_screenshot() — analyse via ScreenAnalyzer + CLIP embedding\n 31→ 4. 
finalize_session() — construit le Workflow via GraphBuilder (DBSCAN)\n 32→ """\n 33→\n 34→ def init(self, data_dir: str = "data/training"):\n 35→ self.data_dir = Path(data_dir)\n 36→ self.session_manager = LiveSessionManager()\n 37→ self._lock = threading.Lock()\n 38→\n 39→ # Core components (chargés paresseusement pour éviter les imports lourds au démarrage)\n 40→ self._screen_analyzer = None\n 41→ self._clip_embedder = None\n 42→ self._state_embedding_builder = None # P0-3 : pipeline d'embedding unifié (fusion multi-modale)\n 43→ self._faiss_manager = None\n 44→ self._initialized = False\n 45→\n 46→ # Lock pour l'accès concurrent aux données de session (screen_states, embeddings, workflows)\n 47→ self._data_lock = threading.Lock()\n 48→\n 49→ # Résultats d'analyse par session\n 50→ self._screen_states: Dict[str, list] = {} # session_id -> List[ScreenState]\n 51→ self._embeddings: Dict[str, list] = {} # session_id -> List[np.ndarray]\n 52→\n 53→ # Workflows construits (pour le matching)\n 54→ self._workflows: Dict[str, Any] = {}\n 55→\n 56→ # Charger les workflows existants depuis le disque\n 57→ self._load_persisted_workflows()\n 58→\n 59→ def _load_persisted_workflows(self):\n 60→ """Charger les workflows sauvegardés depuis le disque au démarrage."""\n 61→ workflows_dir = self.data_dir / "workflows"\n 62→ if not workflows_dir.exists():\n 63→ return\n 64→\n 65→ try:\n 66→ from core.models.workflow_graph import Workflow\n 67→\n 68→ count = 0\n 69→ for wf_file in sorted(workflows_dir.glob(".json")):\n 70→ try:\n 71→ wf = Workflow.load_from_file(wf_file)\n 72→ self._workflows[wf.workflow_id] = wf\n 73→ count += 1\n 74→ except Exception as e:\n 75→ logger.warning(f"Impossible de charger {wf_file.name}: {e}")\n 76→\n 77→ if count:\n 78→ logger.info(f"{count} workflow(s) chargé(s) depuis {workflows_dir}")\n 79→ except ImportError:\n 80→ logger.debug("core.models.workflow_graph non disponible, skip chargement")\n 81→\n 82→ def _ensure_initialized(self):\n 83→ """Charger les composants core GPU si pas encore fait."""\n 84→ if self._initialized:\n 85→ return\n 86→\n 87→ with self._lock:\n 88→ if self._initialized:\n 89→ return\n 90→\n 91→ logger.info("Initialisation des composants core (GPU)...")\n 92→\n 93→ try:\n 94→ from core.pipeline.screen_analyzer import ScreenAnalyzer\n 95→ self._screen_analyzer = ScreenAnalyzer(session_id="stream_server")\n 96→ logger.info(" ScreenAnalyzer prêt")\n 97→ except Exception as e:\n 98→ logger.error(f" Erreur init ScreenAnalyzer: {e}")\n 99→ self._screen_analyzer = None\n 100→\n 101→ try:\n 102→ from core.embedding.clip_embedder import CLIPEmbedder\n 103→ self._clip_embedder = CLIPEmbedder()\n 104→ logger.info(" CLIPEmbedder prêt (singleton, ne sera plus rechargé)")\n 105→ except Exception as e:\n 106→ logger.error(f" Erreur init CLIPEmbedder: {e}")\n 107→ self._clip_embedder = None\n 108→\n 109→ # P0-3 : Initialiser le StateEmbeddingBuilder pour unifier l'espace d'embedding\n 110→ # Utilise le même CLIPEmbedder (pas de rechargement du modèle) + FusionEngine\n 111→ # pour produire des vecteurs fusionnés (image+text+title+ui) identiques à GraphBuilder\n 112→ try:\n 113→ from core.embedding.state_embedding_builder import StateEmbeddingBuilder\n 114→ if self._clip_embedder is not None:\n 115→ # Injecter le CLIPEmbedder déjà chargé pour éviter un double chargement\n 116→ self._state_embedding_builder = StateEmbeddingBuilder(\n 117→ embedders={\n 118→ "image": self._clip_embedder,\n 119→ "text": self._clip_embedder,\n 120→ "title": self._clip_embedder,\n 121→ "ui": 
self._clip_embedder,\n 122→ },\n 123→ output_dir=self.data_dir / "embeddings",\n 124→ use_clip=False, # Pas besoin, on fournit les embedders directement\n 125→ )\n 126→ else:\n 127→ # Fallback : laisser le builder créer son propre CLIPEmbedder\n 128→ self._state_embedding_builder = StateEmbeddingBuilder(\n 129→ output_dir=self.data_dir / "embeddings",\n 130→ use_clip=True,\n 131→ )\n 132→ logger.info(" StateEmbeddingBuilder prêt (fusion multi-modale unifiée)")\n 133→ except Exception as e:\n 134→ logger.warning(f" StateEmbeddingBuilder non disponible, fallback CLIP pur: {e}")\n 135→ self._state_embedding_builder = None\n 136→\n 137→ try:\n 138→ from core.embedding.faiss_manager import FAISSManager\n 139→ self._faiss_manager = FAISSManager(\n 140→ dimensions=512,\n 141→ index_type="Flat",\n 142→ metric="cosine",\n 143→ )\n 144→ logger.info(" FAISSManager prêt (512 dims, cosine)")\n 145→ except Exception as e:\n 146→ logger.error(f" Erreur init FAISSManager: {e}")\n 147→ self._faiss_manager = None\n 148→\n 149→ self._initialized = True\n 150→ logger.info("Composants core initialisés.")\n 151→\n 152→ # =========================================================================\n 153→ # Événements\n 154→ # =========================================================================\n 155→\n 156→ def process_event(self, session_id: str, event_data: Dict[str, Any]) -> Dict[str, Any]:\n 157→ """Enregistrer un événement dans la session live."""\n 158→ self.session_manager.add_event(session_id, event_data)\n 159→ return {"status": "event_recorded", "session_id": session_id}\n 160→\n 161→ # =========================================================================\n 162→ # Screenshots\n 163→ # =========================================================================\n 164→\n 165→ def process_screenshot(self, session_id: str, shot_id: str, file_path: str) -> Dict[str, Any]:\n 166→ """\n 167→ Analyser un screenshot full via le core pipeline.\n 168→\n 169→ 1. ScreenAnalyzer → ScreenState (OCR, UI detection)\n 170→ 2. StateEmbeddingBuilder → vecteur fusionné 512d (image+text+title+ui)\n 171→ Même espace d'embedding que GraphBuilder (P0-3)\n 172→ Fallback : CLIP embed_image() si StateEmbeddingBuilder échoue\n 173→ 3. FAISS indexation → matching temps réel\n 174→ """\n 175→ self._ensure_initialized()\n 176→ self.session_manager.add_screenshot(session_id, shot_id, file_path)\n 177→\n 178→ result = {\n 179→ "shot_id": shot_id,\n 180→ "session_id": session_id,\n 181→ "state_id": None,\n 182→ "ui_elements_count": 0,\n 183→ "text_detected": 0,\n 184→ "embedding_indexed": False,\n 185→ "match": None,\n 186→ }\n 187→\n 188→ # 1. 
Construire le ScreenState\n 189→ if self._screen_analyzer is None:\n 190→ logger.warning("ScreenAnalyzer non disponible, skip analyse")\n 191→ return result\n 192→\n 193→ session = self.session_manager.get_session(session_id)\n 194→ window_info = session.last_window_info if session else {}\n 195→\n 196→ try:\n 197→ screen_state = self._screen_analyzer.analyze(\n 198→ screenshot_path=file_path,\n 199→ window_info=window_info,\n 200→ )\n 201→ result["state_id"] = screen_state.screen_state_id\n 202→ result["ui_elements_count"] = len(screen_state.ui_elements)\n 203→ result["text_detected"] = len(\n 204→ getattr(screen_state.perception, "detected_text", [])\n 205→ )\n 206→\n 207→ # Stocker le ScreenState pour le build final\n 208→ with self._data_lock:\n 209→ if session_id not in self._screen_states:\n 210→ self._screen_states[session_id] = []\n 211→ self._screen_states[session_id].append(screen_state)\n 212→\n 213→ logger.info(\n 214→ f"Screenshot analysé: {shot_id} | "\n 215→ f"{result['ui_elements_count']} UI elements, "\n 216→ f"{result['text_detected']} textes"\n 217→ )\n 218→ except Exception as e:\n 219→ logger.error(f"Erreur analyse screenshot {shot_id}: {e}")\n 220→ return result\n 221→\n 222→ # 2. Construire l'embedding fusionné via StateEmbeddingBuilder (P0-3)\n 223→ # Utilise le même pipeline que GraphBuilder : fusion image+text+title+ui\n 224→ # pour garantir que les vecteurs FAISS sont dans le même espace d'embedding\n 225→ embedding_vector = None\n 226→\n 227→ if self._state_embedding_builder is not None:\n 228→ try:\n 229→ state_embedding = self._state_embedding_builder.build(screen_state)\n 230→ # Récupérer le vecteur fusionné depuis le StateEmbedding\n 231→ fused_vec = state_embedding.get_vector()\n 232→ if fused_vec is not None:\n 233→ embedding_vector = fused_vec.astype(np.float32)\n 234→ logger.debug(\n 235→ f"Embedding fusionné multi-modal calculé pour {shot_id} "\n 236→ f"(dim={embedding_vector.shape[0]})"\n 237→ )\n 238→ except Exception as e:\n 239→ logger.warning(\n 240→ f"StateEmbeddingBuilder échoué pour {shot_id}: {e}, "\n 241→ f"fallback sur CLIP pur"\n 242→ )\n 243→\n 244→ # Fallback 1 : embedding pré-calculé dans le ScreenState (si disponible)\n 245→ if embedding_vector is None:\n 246→ if hasattr(screen_state, "perception") and screen_state.perception:\n 247→ emb_ref = getattr(screen_state.perception, "embedding", None)\n 248→ if emb_ref and hasattr(emb_ref, "vector") and emb_ref.vector is not None:\n 249→ embedding_vector = np.array(emb_ref.vector, dtype=np.float32)\n 250→\n 251→ # Fallback 2 : utiliser le CLIPEmbedder singleton (embedding image seul)\n 252→ if embedding_vector is None and self._clip_embedder is not None:\n 253→ try:\n 254→ from PIL import Image\n 255→ pil_image = Image.open(file_path)\n 256→ embedding_vector = self._clip_embedder.embed_image(pil_image)\n 257→ except Exception as e:\n 258→ logger.debug(f"CLIP embedding échoué: {e}")\n 259→\n 260→ if embedding_vector is not None:\n 261→ # Stocker pour le build final\n 262→ with self._data_lock:\n 263→ if session_id not in self._embeddings:\n 264→ self._embeddings[session_id] = []\n 265→ self._embeddings[session_id].append(embedding_vector)\n 266→\n 267→ # 3. 
Indexer dans FAISS\n 268→ if self._faiss_manager is not None:\n 269→ try:\n 270→ self._faiss_manager.add_embedding(\n 271→ embedding_id=screen_state.screen_state_id,\n 272→ vector=embedding_vector,\n 273→ metadata={\n 274→ "session_id": session_id,\n 275→ "shot_id": shot_id,\n 276→ "window_title": window_info.get("title", ""),\n 277→ },\n 278→ )\n 279→ result["embedding_indexed"] = True\n 280→ except Exception as e:\n 281→ logger.error(f"Erreur FAISS indexation: {e}")\n 282→\n 283→ # 4. Matching temps réel contre les workflows connus\n 284→ with self._data_lock:\n 285→ has_workflows = bool(self._workflows)\n 286→ if embedding_vector is not None and has_workflows:\n 287→ result["match"] = self._try_match(embedding_vector)\n 288→\n 289→ return result\n 290→\n 291→ def process_crop(self, session_id: str, shot_id: str, file_path: str) -> Dict[str, Any]:\n 292→ """\n 293→ Enregistrer un crop (400x400). Pas d'analyse ScreenAnalyzer\n 294→ (un crop est un fragment, pas un écran complet).\n 295→ """\n 296→ self.session_manager.add_screenshot(session_id, shot_id, file_path)\n 297→ return {"status": "crop_stored", "shot_id": shot_id}\n 298→\n 299→ # =========================================================================\n 300→ # Finalisation\n 301→ # =========================================================================\n 302→\n 303→ def finalize_session(self, session_id: str) -> Dict[str, Any]:\n 304→ """\n 305→ Construire un Workflow depuis les données accumulées.\n 306→\n 307→ Utilise le GraphBuilder du core avec les ScreenStates et embeddings\n 308→ collectés pendant le streaming.\n 309→ """\n 310→ self._ensure_initialized()\n 311→\n 312→ session = self.session_manager.finalize(session_id)\n 313→ if not session:\n 314→ return {"error": f"Session {session_id} non trouvée"}\n 315→\n 316→ with self._data_lock:\n 317→ states = list(self._screen_states.get(session_id, []))\n 318→ embeddings = list(self._embeddings.get(session_id, []))\n 319→\n 320→ if len(states) < 2:\n 321→ logger.warning(\n 322→ f"Session {session_id}: seulement {len(states)} states, "\n 323→ f"pas assez pour construire un workflow"\n 324→ )\n 325→ return {\n 326→ "session_id": session_id,\n 327→ "status": "insufficient_data",\n 328→ "states_count": len(states),\n 329→ "min_required": 2,\n 330→ }\n 331→\n 332→ # Convertir en RawSession pour le GraphBuilder\n 333→ raw_dict = self.session_manager.to_raw_session(session_id)\n 334→ if not raw_dict:\n 335→ return {"error": "Conversion RawSession échouée"}\n 336→\n 337→ try:\n 338→ from core.models.raw_session import RawSession\n 339→ raw_session = RawSession.from_dict(raw_dict)\n 340→ except Exception as e:\n 341→ logger.error(f"Erreur construction RawSession: {e}")\n 342→ # Fallback : construire manuellement\n 343→ try:\n 344→ raw_session = self._build_raw_session_fallback(session, raw_dict)\n 345→ except Exception as e2:\n 346→ return {"error": f"Erreur RawSession: {e2}"}\n 347→\n 348→ # Construire le workflow via GraphBuilder\n 349→ try:\n 350→ from core.graph.graph_builder import GraphBuilder\n 351→\n 352→ n = len(states)\n 353→ min_reps = 2 if n < 10 else 3 if n <= 30 else min(5, n // 10)\n 354→\n 355→ builder = GraphBuilder(\n 356→ min_pattern_repetitions=min_reps,\n 357→ clustering_eps=0.15,\n 358→ clustering_min_samples=2,\n 359→ )\n 360→\n 361→ # Injecter les ScreenStates pré-calculés pour éviter de re-analyser\n 362→ workflow = builder.build_from_session(\n 363→ raw_session,\n 364→ workflow_name=f"stream_{session_id}",\n 365→ precomputed_states=states,\n 366→ )\n 367→\n 
368→ with self._data_lock:\n 369→ self._workflows[workflow.workflow_id] = workflow\n 370→\n 371→ # Persister sur disque\n 372→ saved_path = self._persist_workflow(workflow, session_id)\n 373→\n 374→ result = {\n 375→ "session_id": session_id,\n 376→ "status": "workflow_built",\n 377→ "workflow_id": workflow.workflow_id,\n 378→ "nodes": len(workflow.nodes),\n 379→ "edges": len(workflow.edges),\n 380→ "states_analyzed": len(states),\n 381→ "embeddings_indexed": len(embeddings),\n 382→ "saved_path": str(saved_path) if saved_path else None,\n 383→ }\n 384→\n 385→ logger.info(\n 386→ f"Workflow construit: {workflow.workflow_id} | "\n 387→ f"{result['nodes']} nodes, {result['edges']} edges"\n 388→ )\n 389→ return result\n 390→\n 391→ except Exception as e:\n 392→ logger.error(f"Erreur construction workflow: {e}")\n 393→ return {"error": f"GraphBuilder: {e}", "session_id": session_id}\n 394→\n 395→ # =========================================================================\n 396→ # Matching\n 397→ # =========================================================================\n 398→\n 399→ def _try_match(self, embedding_vector: np.ndarray) -> Optional[Dict[str, Any]]:\n 400→ """Matcher un embedding contre les workflows connus."""\n 401→ if self._faiss_manager is None or self._faiss_manager.index.ntotal == 0:\n 402→ return None\n 403→\n 404→ try:\n 405→ results = self._faiss_manager.search_similar(\n 406→ query_vector=embedding_vector,\n 407→ k=1,\n 408→ min_similarity=0.85,\n 409→ )\n 410→ if results:\n 411→ best = results[0]\n 412→ return {\n 413→ "matched_id": best.embedding_id,\n 414→ "similarity": round(best.similarity, 4),\n 415→ "metadata": best.metadata,\n 416→ }\n 417→ except Exception as e:\n 418→ logger.debug(f"Erreur matching: {e}")\n 419→\n 420→ return None\n 421→\n 422→ # =========================================================================\n 423→ # Helpers\n 424→ # =========================================================================\n 425→\n 426→ def _persist_workflow(self, workflow, session_id: str) -> Optional[Path]:\n 427→ """Sauvegarder le workflow JSON sur disque."""\n 428→ try:\n 429→ workflows_dir = self.data_dir / "workflows"\n 430→ workflows_dir.mkdir(parents=True, exist_ok=True)\n 431→ filepath = workflows_dir / f"{workflow.workflow_id}.json"\n 432→ workflow.save_to_file(filepath)\n 433→ logger.info(f"Workflow sauvegardé: {filepath}")\n 434→ return filepath\n 435→ except Exception as e:\n 436→ logger.error(f"Erreur sauvegarde workflow {session_id}: {e}")\n 437→ return None\n 438→\n 439→ def _build_raw_session_fallback(self, session, raw_dict):\n 440→ """Construire un RawSession manuellement si from_dict échoue."""\n 441→ from core.models.raw_session import RawSession, Event, Screenshot, RawWindowContext\n 442→\n 443→ events = []\n 444→ for evt_dict in raw_dict.get("events", []):\n 445→ window_data = evt_dict.get("window", {"title": "", "app_name": "unknown"})\n 446→ window = RawWindowContext(\n 447→ title=window_data.get("title", ""),\n 448→ app_name=window_data.get("app_name", "unknown"),\n 449→ )\n 450→ events.append(Event(\n 451→ t=evt_dict.get("t", 0.0),\n 452→ type=evt_dict.get("type", "unknown"),\n 453→ window=window,\n 454→ data={k: v for k, v in evt_dict.items()\n 455→ if k not in ("t", "type", "window", "screenshot_id")},\n 456→ screenshot_id=evt_dict.get("screenshot_id"),\n 457→ ))\n 458→\n 459→ screenshots = []\n 460→ for ss_dict in raw_dict.get("screenshots", []):\n 461→ screenshots.append(Screenshot(\n 462→ screenshot_id=ss_dict["screenshot_id"],\n 463→ 
relative_path=ss_dict.get("relative_path", ss_dict.get("path", "")),\n 464→ captured_at=ss_dict.get("captured_at", datetime.now().isoformat()),\n 465→ ))\n 466→\n 467→ return RawSession(\n 468→ session_id=session.session_id,\n 469→ agent_version="agent_v1_stream",\n 470→ environment=raw_dict.get("environment", {}),\n 471→ user=raw_dict.get("user", {"id": "remote_agent"}),\n 472→ context=raw_dict.get("context", {}),\n 473→ started_at=session.created_at,\n 474→ ended_at=datetime.now(),\n 475→ events=events,\n 476→ screenshots=screenshots,\n 477→ )\n 478→\n 479→ def list_sessions(self) -> List[Dict[str, Any]]:\n 480→ """Lister toutes les sessions avec leur état."""\n 481→ sessions = []\n 482→ for sid in self.session_manager.session_ids:\n 483→ session = self.session_manager.get_session(sid)\n 484→ if session is None:\n 485→ continue\n 486→ with self._data_lock:\n 487→ states_count = len(self._screen_states.get(sid, []))\n 488→ embeddings_count = len(self._embeddings.get(sid, []))\n 489→ sessions.append({\n 490→ "session_id": session.session_id,\n 491→ "events_count": len(session.events),\n 492→ "screenshots_count": len(session.shot_paths),\n 493→ "states_count": states_count,\n 494→ "embeddings_count": embeddings_count,\n 495→ "last_window": session.last_window_info,\n 496→ "created_at": session.created_at.isoformat(),\n 497→ "last_activity": session.last_activity.isoformat(),\n 498→ "finalized": session.finalized,\n 499→ })\n 500→ return sessions\n 501→\n 502→ def list_workflows(self) -> List[Dict[str, Any]]:\n 503→ """Lister tous les workflows construits."""\n 504→ with self._data_lock:\n 505→ workflows_snapshot = list(self._workflows.items())\n 506→ result = []\n 507→ for wf_id, wf in workflows_snapshot:\n 508→ result.append({\n 509→ "workflow_id": wf_id,\n 510→ "nodes": len(wf.nodes) if hasattr(wf, "nodes") else 0,\n 511→ "edges": len(wf.edges) if hasattr(wf, "edges") else 0,\n 512→ "name": getattr(wf, "name", wf_id),\n 513→ })\n 514→ return result\n 515→\n 516→ @property\n 517→ def stats(self) -> Dict[str, Any]:\n 518→ """Statistiques du processeur."""\n 519→ with self._data_lock:\n 520→ total_workflows = len(self._workflows)\n 521→ return {\n 522→ "active_sessions": self.session_manager.active_session_count,\n 523→ "total_sessions": len(self.session_manager.session_ids),\n 524→ "total_workflows": total_workflows,\n 525→ "faiss_vectors": self._faiss_manager.index.ntotal if self._faiss_manager else 0,\n 526→ "initialized": self._initialized,\n 527→ }\n 528→"}]},"uuid":"7889bcc6-0d4b-432c-aa8f-771b40dae957","timestamp":"2026-03-14T21:14:15.256Z","sourceToolAssistantUUID":"6144986b-9ed4-40b9-b99f-c0102f036a25"} 
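The `process_screenshot` record above closes with `_try_match`, which asks FAISS for the single nearest known state and only accepts it above `min_similarity=0.85`. The sketch below is a plain-NumPy stand-in for that contract, not the project's `core.embedding.faiss_manager.FAISSManager`: the class name `ToyCosineIndex` and the demo values are invented here, and only the `add_embedding` / `search_similar` signatures mirror the calls visible in the log.

```python
# Plain-NumPy stand-in for the cosine matching used by _try_match().
# Hypothetical names (ToyCosineIndex, Hit); only the call shapes mirror
# the add_embedding / search_similar usage seen in the transcript above.
from dataclasses import dataclass
from typing import Any, Dict, List

import numpy as np


@dataclass
class Hit:
    embedding_id: str
    similarity: float
    metadata: Dict[str, Any]


class ToyCosineIndex:
    def __init__(self, dimensions: int = 512):
        self.dimensions = dimensions
        self._ids: List[str] = []
        self._meta: List[Dict[str, Any]] = []
        self._vecs: List[np.ndarray] = []

    def add_embedding(self, embedding_id: str, vector: np.ndarray,
                      metadata: Dict[str, Any]) -> None:
        v = np.asarray(vector, dtype=np.float32)
        v = v / (np.linalg.norm(v) + 1e-12)  # normalize once: dot == cosine
        self._ids.append(embedding_id)
        self._meta.append(metadata)
        self._vecs.append(v)

    def search_similar(self, query_vector: np.ndarray, k: int = 1,
                       min_similarity: float = 0.85) -> List[Hit]:
        if not self._vecs:
            return []
        q = np.asarray(query_vector, dtype=np.float32)
        q = q / (np.linalg.norm(q) + 1e-12)
        sims = np.stack(self._vecs) @ q          # cosine similarity per entry
        order = np.argsort(-sims)[:k]            # best-first, top-k
        return [Hit(self._ids[i], float(sims[i]), self._meta[i])
                for i in order if sims[i] >= min_similarity]


if __name__ == "__main__":
    idx = ToyCosineIndex()
    rng = np.random.default_rng(0)
    base = rng.normal(size=512).astype(np.float32)
    idx.add_embedding("state_0001", base, {"shot_id": "shot_0001"})
    # A lightly perturbed query should still clear the 0.85 threshold.
    print(idx.search_similar(base + 0.05 * rng.normal(size=512), k=1))
```

Normalizing vectors once at insert time makes every later dot product a cosine similarity, which is the same trick a cosine-metric FAISS index relies on (inner product over L2-normalized vectors).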
{"parentUuid":"7889bcc6-0d4b-432c-aa8f-771b40dae957","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_018zaGgwBpWGBr2DDyh32bYQ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01Nt69baC9LKRR8LxPsKXyDw","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/agent_v0/agent_v1/core/executor.py"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":2,"cache_creation_input_tokens":12328,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":12328,"ephemeral_1h_input_tokens":0},"output_tokens":2,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fTXbxvgvjif9CfKQnR","type":"assistant","uuid":"965a25a8-ce5b-449f-ab60-0118efc7e487","timestamp":"2026-03-14T21:14:15.669Z"} {"parentUuid":"965a25a8-ce5b-449f-ab60-0118efc7e487","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01Nt69baC9LKRR8LxPsKXyDw","toolUseID":"toolu_01Nt69baC9LKRR8LxPsKXyDw","timestamp":"2026-03-14T21:14:15.672Z","uuid":"ea32cb07-baf7-4c33-a656-4112b8aa2d70"} {"parentUuid":"965a25a8-ce5b-449f-ab60-0118efc7e487","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01Nt69baC9LKRR8LxPsKXyDw","type":"tool_result","content":" 1→# agent_v1/core/executor.py\n 2→"""\n 3→Executeur d'actions visuelles pour Agent V1.\n 4→Opere par coordonnees normalisees (proportions) pour le rejeu en univers ferme (VM).\n 5→\n 6→Supporte deux modes :\n 7→ - Watchdog fichier (command.json) — legacy\n 8→ - Polling serveur (GET /replay/next) — mode replay P0-5\n 9→"""\n 10→\n 11→import base64\n 12→import io\n 13→import time\n 14→import logging\n 15→\n 16→import mss\n 17→from pynput.mouse import Button, Controller as MouseController\n 18→from pynput.keyboard import Controller as KeyboardController, Key\n 19→\n 20→logger = logging.getLogger(name)\n 21→\n 22→# Mapping des noms de touches spéciales vers pynput.Key\n 23→_SPECIAL_KEYS = {\n 24→ "enter": Key.enter,\n 25→ "return": Key.enter,\n 26→ "tab": Key.tab,\n 27→ "escape": Key.esc,\n 28→ "esc": Key.esc,\n 29→ "backspace": Key.backspace,\n 30→ "delete": Key.delete,\n 31→ "space": Key.space,\n 32→ "up": Key.up,\n 33→ "down": Key.down,\n 34→ "left": Key.left,\n 35→ "right": Key.right,\n 36→ "home": Key.home,\n 37→ "end": Key.end,\n 38→ "page_up": Key.page_up,\n 39→ "page_down": Key.page_down,\n 40→ "f1": Key.f1, "f2": Key.f2, "f3": Key.f3, "f4": Key.f4,\n 41→ "f5": Key.f5, "f6": Key.f6, "f7": Key.f7, "f8": Key.f8,\n 42→ "f9": Key.f9, "f10": Key.f10, "f11": Key.f11, "f12": Key.f12,\n 43→ "ctrl": Key.ctrl, "ctrl_l": Key.ctrl_l, "ctrl_r": Key.ctrl_r,\n 44→ "alt": Key.alt, "alt_l": Key.alt_l, "alt_r": Key.alt_r,\n 45→ "shift": Key.shift, "shift_l": Key.shift_l, "shift_r": 
Key.shift_r,\n 46→ "cmd": Key.cmd, "win": Key.cmd,\n 47→ "super": Key.cmd, "super_l": Key.cmd, "super_r": Key.cmd,\n 48→ "windows": Key.cmd, "meta": Key.cmd,\n 49→ "insert": Key.insert, "print_screen": Key.print_screen,\n 50→ "caps_lock": Key.caps_lock, "num_lock": Key.num_lock,\n 51→}\n 52→\n 53→\n 54→class ActionExecutorV1:\n 55→ def __init__(self):\n 56→ self.mouse = MouseController()\n 57→ self.keyboard = KeyboardController()\n 58→ self.sct = mss.mss()\n 59→ self.running = True\n 60→\n 61→ # =========================================================================\n 62→ # Execution legacy (watchdog command.json)\n 63→ # =========================================================================\n 64→\n 65→ def execute_normalized_order(self, order: dict):\n 66→ """\n 67→ Execute un ordre base sur des proportions (0.0 a 1.0).\n 68→ Ex: {"action": "mouse_click", "x_pct": 0.5, "y_pct": 0.5} (Clic au centre)\n 69→ """\n 70→ action = order.get("action")\n 71→ try:\n 72→ # Recuperation de la resolution actuelle de la VM\n 73→ monitor = self.sct.monitors[1]\n 74→ width, height = monitor["width"], monitor["height"]\n 75→\n 76→ if action == "mouse_click":\n 77→ # Traduction Proportions -> Pixels reels de la VM\n 78→ real_x = int(order.get("x_pct", 0) * width)\n 79→ real_y = int(order.get("y_pct", 0) * height)\n 80→ self._click((real_x, real_y), order.get("button", "left"))\n 81→\n 82→ elif action == "text_input":\n 83→ self.keyboard.type(order.get("text", ""))\n 84→\n 85→ logger.info(f"Ordre Visuel execute : {action} sur ({width}x{height})")\n 86→ except Exception as e:\n 87→ logger.error(f"Echec de l'ordre {action} : {e}")\n 88→\n 89→ # =========================================================================\n 90→ # Execution replay (polling serveur)\n 91→ # =========================================================================\n 92→\n 93→ def execute_replay_action(self, action: dict, server_url: str = "") -> dict:\n 94→ """\n 95→ Execute une action normalisee recue du serveur de replay.\n 96→\n 97→ Supporte deux modes :\n 98→ - Visual mode (visual_mode=True + target_spec) : capture un screenshot,\n 99→ l'envoie au serveur pour resolution visuelle, puis execute a la position trouvee.\n 100→ - Blind mode (defaut) : utilise les coordonnees statiques x_pct/y_pct.\n 101→\n 102→ Format d'entree :\n 103→ {\n 104→ "action_id": "act_xxxx",\n 105→ "type": "click|type|key_combo|scroll|wait",\n 106→ "x_pct": 0.5,\n 107→ "y_pct": 0.3,\n 108→ "text": "...",\n 109→ "keys": [...],\n 110→ "button": "left",\n 111→ "duration_ms": 500,\n 112→ "visual_mode": true,\n 113→ "target_spec": {"by_role": "button", "by_text": "Submit"}\n 114→ }\n 115→\n 116→ Retourne :\n 117→ {\n 118→ "action_id": "act_xxxx",\n 119→ "success": True/False,\n 120→ "error": None ou message,\n 121→ "screenshot": base64 du screenshot post-action,\n 122→ "visual_resolved": True/False\n 123→ }\n 124→ """\n 125→ action_id = action.get("action_id", "unknown")\n 126→ action_type = action.get("type", "unknown")\n 127→ visual_mode = action.get("visual_mode", False)\n 128→ target_spec = action.get("target_spec", {})\n 129→ result = {\n 130→ "action_id": action_id,\n 131→ "success": False,\n 132→ "error": None,\n 133→ "screenshot": None,\n 134→ "visual_resolved": False,\n 135→ }\n 136→\n 137→ try:\n 138→ monitor = self.sct.monitors[1]\n 139→ width, height = monitor["width"], monitor["height"]\n 140→\n 141→ # Resolution visuelle des coordonnees si demande\n 142→ x_pct = action.get("x_pct", 0.0)\n 143→ y_pct = action.get("y_pct", 0.0)\n 144→\n 145→ if 
visual_mode and target_spec and server_url:\n 146→ resolved = self._resolve_target_visual(\n 147→ server_url, target_spec, x_pct, y_pct, width, height\n 148→ )\n 149→ if resolved:\n 150→ x_pct = resolved["x_pct"]\n 151→ y_pct = resolved["y_pct"]\n 152→ result["visual_resolved"] = resolved.get("resolved", False)\n 153→ if resolved.get("resolved"):\n 154→ logger.info(\n 155→ f"Visual resolve OK: {resolved.get('matched_element', {}).get('label', '?')} "\n 156→ f"-> ({x_pct:.4f}, {y_pct:.4f})"\n 157→ )\n 158→\n 159→ if action_type == "click":\n 160→ real_x = int(x_pct * width)\n 161→ real_y = int(y_pct * height)\n 162→ button = action.get("button", "left")\n 163→ self._click((real_x, real_y), button)\n 164→ mode = "VISUAL" if result["visual_resolved"] else "BLIND"\n 165→ logger.info(\n 166→ f"Replay click [{mode}] : ({x_pct:.3f}, {y_pct:.3f}) -> "\n 167→ f"({real_x}, {real_y}) sur ({width}x{height})"\n 168→ )\n 169→\n 170→ elif action_type == "type":\n 171→ text = action.get("text", "")\n 172→ # Cliquer sur le champ avant de taper (si coordonnees disponibles)\n 173→ if x_pct > 0 and y_pct > 0:\n 174→ real_x = int(x_pct * width)\n 175→ real_y = int(y_pct * height)\n 176→ self._click((real_x, real_y), "left")\n 177→ time.sleep(0.3)\n 178→ self.keyboard.type(text)\n 179→ logger.info(f"Replay type : '{text[:30]}...' ({len(text)} chars)")\n 180→\n 181→ elif action_type == "key_combo":\n 182→ keys = action.get("keys", [])\n 183→ self._execute_key_combo(keys)\n 184→ logger.info(f"Replay key_combo : {keys}")\n 185→\n 186→ elif action_type == "scroll":\n 187→ real_x = int(x_pct * width) if x_pct > 0 else int(0.5 * width)\n 188→ real_y = int(y_pct * height) if y_pct > 0 else int(0.5 * height)\n 189→ delta = action.get("delta", -3)\n 190→ self.mouse.position = (real_x, real_y)\n 191→ time.sleep(0.05)\n 192→ self.mouse.scroll(0, delta)\n 193→ logger.info(f"Replay scroll : delta={delta} a ({real_x}, {real_y})")\n 194→\n 195→ elif action_type == "wait":\n 196→ duration_ms = action.get("duration_ms", 500)\n 197→ time.sleep(duration_ms / 1000.0)\n 198→ logger.info(f"Replay wait : {duration_ms}ms")\n 199→\n 200→ else:\n 201→ result["error"] = f"Type d'action inconnu : {action_type}"\n 202→ logger.warning(result["error"])\n 203→ return result\n 204→\n 205→ result["success"] = True\n 206→\n 207→ # Capturer un screenshot post-action\n 208→ time.sleep(0.5)\n 209→ result["screenshot"] = self._capture_screenshot_b64()\n 210→\n 211→ except Exception as e:\n 212→ result["error"] = str(e)\n 213→ logger.error(f"Echec replay action {action_id} ({action_type}) : {e}")\n 214→\n 215→ return result\n 216→\n 217→ def _resolve_target_visual(\n 218→ self, server_url: str, target_spec: dict,\n 219→ fallback_x: float, fallback_y: float,\n 220→ screen_width: int, screen_height: int,\n 221→ ) -> dict:\n 222→ """\n 223→ Envoyer un screenshot au serveur pour resolution visuelle de la cible.\n 224→\n 225→ Capture l'ecran, l'encode en base64 JPEG, et POST au endpoint\n 226→ /replay/resolve_target. 
Retourne les coordonnees resolues.\n 227→ """\n 228→ import requests\n 229→\n 230→ try:\n 231→ # Capturer le screenshot actuel\n 232→ screenshot_b64 = self._capture_screenshot_b64()\n 233→ if not screenshot_b64:\n 234→ logger.warning("Capture screenshot echouee pour visual resolve")\n 235→ return None\n 236→\n 237→ # Appel au serveur\n 238→ resolve_url = f"{server_url}/traces/stream/replay/resolve_target"\n 239→ payload = {\n 240→ "session_id": "", # Pas critique pour la resolution\n 241→ "screenshot_b64": screenshot_b64,\n 242→ "target_spec": target_spec,\n 243→ "fallback_x_pct": fallback_x,\n 244→ "fallback_y_pct": fallback_y,\n 245→ "screen_width": screen_width,\n 246→ "screen_height": screen_height,\n 247→ }\n 248→\n 249→ resp = requests.post(resolve_url, json=payload, timeout=30)\n 250→ if resp.ok:\n 251→ return resp.json()\n 252→ else:\n 253→ logger.warning(f"Visual resolve HTTP {resp.status_code}: {resp.text[:200]}")\n 254→ return None\n 255→\n 256→ except requests.exceptions.Timeout:\n 257→ logger.warning("Visual resolve timeout (30s)")\n 258→ return None\n 259→ except Exception as e:\n 260→ logger.warning(f"Visual resolve echoue: {e}")\n 261→ return None\n 262→\n 263→ def poll_and_execute(self, session_id: str, server_url: str) -> bool:\n 264→ """\n 265→ Poll le serveur pour recuperer et executer la prochaine action.\n 266→\n 267→ 1. GET /replay/next pour recuperer l'action\n 268→ 2. Execute l'action (clic, texte, etc.)\n 269→ 3. POST /replay/result avec le resultat + screenshot\n 270→\n 271→ Retourne True si une action a ete executee, False sinon.\n 272→ """\n 273→ import requests\n 274→\n 275→ replay_next_url = f"{server_url}/traces/stream/replay/next"\n 276→ replay_result_url = f"{server_url}/traces/stream/replay/result"\n 277→\n 278→ try:\n 279→ # 1. Recuperer la prochaine action\n 280→ resp = requests.get(\n 281→ replay_next_url,\n 282→ params={"session_id": session_id},\n 283→ timeout=5,\n 284→ )\n 285→ if not resp.ok:\n 286→ logger.debug(f"Poll replay echoue : HTTP {resp.status_code}")\n 287→ return False\n 288→\n 289→ data = resp.json()\n 290→ action = data.get("action")\n 291→ if action is None:\n 292→ # Pas d'action en attente — rien a faire\n 293→ return False\n 294→\n 295→ logger.info(\n 296→ f"Action de replay recue : {action.get('type')} "\n 297→ f"(id={action.get('action_id')})"\n 298→ )\n 299→\n 300→ # 2. Executer l'action (avec server_url pour la resolution visuelle)\n 301→ result = self.execute_replay_action(action, server_url=server_url)\n 302→\n 303→ # 3. 
Rapporter le resultat au serveur\n 304→ report = {\n 305→ "session_id": session_id,\n 306→ "action_id": result["action_id"],\n 307→ "success": result["success"],\n 308→ "error": result.get("error"),\n 309→ "screenshot": result.get("screenshot"),\n 310→ }\n 311→ try:\n 312→ resp2 = requests.post(\n 313→ replay_result_url,\n 314→ json=report,\n 315→ timeout=5,\n 316→ )\n 317→ if resp2.ok:\n 318→ server_resp = resp2.json()\n 319→ logger.info(\n 320→ f"Resultat rapporte : replay_status={server_resp.get('replay_status')}, "\n 321→ f"restant={server_resp.get('remaining_actions')}"\n 322→ )\n 323→ else:\n 324→ logger.warning(f"Rapport resultat echoue : HTTP {resp2.status_code}")\n 325→ except Exception as e:\n 326→ logger.warning(f"Impossible de rapporter le resultat : {e}")\n 327→\n 328→ return True\n 329→\n 330→ except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):\n 331→ # Serveur non disponible ou occupé — pas de log spam\n 332→ return False\n 333→ except Exception as e:\n 334→ logger.error(f"Erreur poll_and_execute : {e}")\n 335→ return False\n 336→\n 337→ # =========================================================================\n 338→ # Helpers\n 339→ # =========================================================================\n 340→\n 341→ def _click(self, pos, button_name):\n 342→ """Deplacer la souris et cliquer."""\n 343→ self.mouse.position = pos\n 344→ time.sleep(0.1) # Delai pour simuler le temps de reaction humain\n 345→ btn = Button.left if button_name == "left" else Button.right\n 346→ self.mouse.click(btn)\n 347→\n 348→ def _execute_key_combo(self, keys: list):\n 349→ """\n 350→ Executer une combinaison de touches.\n 351→ Ex: ["ctrl", "a"] -> Ctrl+A\n 352→ Ex: ["enter"] -> Enter\n 353→ """\n 354→ if not keys:\n 355→ return\n 356→\n 357→ # Resoudre les noms de touches vers les objets pynput\n 358→ resolved = []\n 359→ for key_name in keys:\n 360→ key_lower = key_name.lower()\n 361→ if key_lower in _SPECIAL_KEYS:\n 362→ resolved.append(_SPECIAL_KEYS[key_lower])\n 363→ elif len(key_name) == 1:\n 364→ resolved.append(key_name)\n 365→ else:\n 366→ logger.warning(f"Touche inconnue : '{key_name}', ignoree")\n 367→\n 368→ if not resolved:\n 369→ return\n 370→\n 371→ # Si une seule touche, simple press\n 372→ if len(resolved) == 1:\n 373→ self.keyboard.press(resolved[0])\n 374→ self.keyboard.release(resolved[0])\n 375→ return\n 376→\n 377→ # Combo : maintenir les modificateurs, taper la derniere touche\n 378→ modifiers = resolved[:-1]\n 379→ final_key = resolved[-1]\n 380→\n 381→ for mod in modifiers:\n 382→ self.keyboard.press(mod)\n 383→ time.sleep(0.05)\n 384→\n 385→ self.keyboard.press(final_key)\n 386→ self.keyboard.release(final_key)\n 387→\n 388→ for mod in reversed(modifiers):\n 389→ self.keyboard.release(mod)\n 390→\n 391→ def _capture_screenshot_b64(self) -> str:\n 392→ """\n 393→ Capturer l'ecran et retourner le screenshot en base64.\n 394→ Utilise une qualite reduite pour limiter la taille du payload.\n 395→ """\n 396→ try:\n 397→ from PIL import Image\n 398→\n 399→ monitor = self.sct.monitors[1]\n 400→ raw = self.sct.grab(monitor)\n 401→ img = Image.frombytes("RGB", raw.size, raw.bgra, "raw", "BGRX")\n 402→\n 403→ # Redimensionner pour limiter la taille (max 800px de large)\n 404→ max_w = 800\n 405→ if img.width > max_w:\n 406→ ratio = max_w / img.width\n 407→ new_h = int(img.height * ratio)\n 408→ img = img.resize((max_w, new_h), Image.LANCZOS)\n 409→\n 410→ buffer = io.BytesIO()\n 411→ img.save(buffer, format="JPEG", quality=60)\n 412→ return 
base64.b64encode(buffer.getvalue()).decode("utf-8")\n 413→ except ImportError:\n 414→ # PIL non disponible — retourner None\n 415→ logger.debug("PIL non disponible, pas de screenshot base64")\n 416→ return ""\n 417→ except Exception as e:\n 418→ logger.debug(f"Capture screenshot base64 echouee : {e}")\n 419→ return ""\n 420→"}]},"uuid":"46b0b6f6-9e78-47d1-84a6-0d64abd3e078","timestamp":"2026-03-14T21:14:15.671Z","sourceToolAssistantUUID":"965a25a8-ce5b-449f-ab60-0118efc7e487"} {"parentUuid":"46b0b6f6-9e78-47d1-84a6-0d64abd3e078","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_018zaGgwBpWGBr2DDyh32bYQ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01W38MzfzZfph6YQYX97tRxJ","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/core/models/raw_session.py"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":2,"cache_creation_input_tokens":12328,"cache_read_input_tokens":0,"output_tokens":294,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":12328},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fTXbxvgvjif9CfKQnR","type":"assistant","uuid":"74035396-1344-40a9-a424-1b708980a39d","timestamp":"2026-03-14T21:14:16.064Z"} {"parentUuid":"74035396-1344-40a9-a424-1b708980a39d","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01W38MzfzZfph6YQYX97tRxJ","toolUseID":"toolu_01W38MzfzZfph6YQYX97tRxJ","timestamp":"2026-03-14T21:14:16.066Z","uuid":"a741bc4f-7d13-46e6-9b2d-aa01ac47b75e"} {"parentUuid":"74035396-1344-40a9-a424-1b708980a39d","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01W38MzfzZfph6YQYX97tRxJ","type":"tool_result","content":" 1→"""\n 2→RawSession - Couche 0 : Capture Brute\n 3→\n 4→Enregistre fidèlement toutes les interactions utilisateur avec horodatage précis\n 5→et contexte complet. 
C'est la fondation du système RPA Vision V3.\n 6→"""\n 7→\n 8→from dataclasses import dataclass, field\n 9→from datetime import datetime\n 10→from typing import Dict, List, Optional, Any\n 11→from pathlib import Path\n 12→import json\n 13→\n 14→\n 15→@dataclass\n 16→class RawWindowContext:\n 17→ """\n 18→ Contexte de fenêtre pour un événement (RawSession)\n 19→ \n 20→ Renommé de WindowContext pour éviter collision avec ScreenState.WindowContext\n 21→ Auteur: Dom, Alice Kiro - 15 décembre 2024\n 22→ """\n 23→ title: str\n 24→ app_name: str\n 25→ \n 26→ def to_dict(self) -> Dict[str, str]:\n 27→ return {\n 28→ "title": self.title,\n 29→ "app_name": self.app_name\n 30→ }\n 31→ \n 32→ @classmethod\n 33→ def from_dict(cls, data: Dict[str, str]) -> 'RawWindowContext':\n 34→ return cls(\n 35→ title=data["title"],\n 36→ app_name=data["app_name"]\n 37→ )\n 38→\n 39→\n 40→# Alias de compatibilité pour migration douce\n 41→WindowContext = RawWindowContext\n 42→\n 43→\n 44→@dataclass\n 45→class Event:\n 46→ """\n 47→ Événement utilisateur capturé\n 48→ \n 49→ Types supportés:\n 50→ - mouse_click, mouse_move, mouse_scroll\n 51→ - key_press, key_release, text_input\n 52→ - window_change, screen_change\n 53→ """\n 54→ t: float # Timestamp relatif en secondes depuis début session\n 55→ type: str # Type d'événement\n 56→ window: RawWindowContext\n 57→ screenshot_id: Optional[str] = None\n 58→ data: Dict[str, Any] = field(default_factory=dict) # Données spécifiques au type\n 59→ \n 60→ def to_dict(self) -> Dict[str, Any]:\n 61→ result = {\n 62→ "t": self.t,\n 63→ "type": self.type,\n 64→ "window": self.window.to_dict(),\n 65→ }\n 66→ if self.screenshot_id:\n 67→ result["screenshot_id"] = self.screenshot_id\n 68→ # Ajouter les données spécifiques\n 69→ result.update(self.data)\n 70→ return result\n 71→ \n 72→ @classmethod\n 73→ def from_dict(cls, data: Dict[str, Any]) -> 'Event':\n 74→ # Extraire les champs de base\n 75→ t = data["t"]\n 76→ event_type = data["type"]\n 77→ window = RawWindowContext.from_dict(data["window"])\n 78→ screenshot_id = data.get("screenshot_id")\n 79→ \n 80→ # Le reste va dans data\n 81→ event_data = {k: v for k, v in data.items() \n 82→ if k not in ["t", "type", "window", "screenshot_id"]}\n 83→ \n 84→ return cls(\n 85→ t=t,\n 86→ type=event_type,\n 87→ window=window,\n 88→ screenshot_id=screenshot_id,\n 89→ data=event_data\n 90→ )\n 91→\n 92→\n 93→@dataclass\n 94→class Screenshot:\n 95→ """Référence à un screenshot capturé"""\n 96→ screenshot_id: str\n 97→ relative_path: str\n 98→ captured_at: str # ISO format timestamp\n 99→ \n 100→ def to_dict(self) -> Dict[str, str]:\n 101→ return {\n 102→ "screenshot_id": self.screenshot_id,\n 103→ "relative_path": self.relative_path,\n 104→ "captured_at": self.captured_at\n 105→ }\n 106→ \n 107→ @classmethod\n 108→ def from_dict(cls, data: Dict[str, str]) -> 'Screenshot':\n 109→ return cls(\n 110→ screenshot_id=data["screenshot_id"],\n 111→ relative_path=data["relative_path"],\n 112→ captured_at=data["captured_at"]\n 113→ )\n 114→\n 115→\n 116→@dataclass\n 117→class RawSession:\n 118→ """\n 119→ Session brute capturant tous les événements utilisateur\n 120→ \n 121→ Format: rawsession_v1\n 122→ """\n 123→ session_id: str\n 124→ agent_version: str\n 125→ environment: Dict[str, Any]\n 126→ user: Dict[str, str]\n 127→ context: Dict[str, str]\n 128→ started_at: datetime\n 129→ ended_at: Optional[datetime] = None\n 130→ events: List[Event] = field(default_factory=list)\n 131→ screenshots: List[Screenshot] = field(default_factory=list)\n 132→ 
schema_version: str = "rawsession_v1"\n 133→ \n 134→ def add_event(self, event: Event) -> None:\n 135→ """Ajouter un événement à la session"""\n 136→ self.events.append(event)\n 137→ \n 138→ def add_screenshot(self, screenshot: Screenshot) -> None:\n 139→ """Ajouter un screenshot à la session"""\n 140→ self.screenshots.append(screenshot)\n 141→ \n 142→ def to_json(self) -> Dict[str, Any]:\n 143→ """Sérialiser en JSON"""\n 144→ return {\n 145→ "schema_version": self.schema_version,\n 146→ "session_id": self.session_id,\n 147→ "agent_version": self.agent_version,\n 148→ "environment": self.environment,\n 149→ "user": self.user,\n 150→ "context": self.context,\n 151→ "started_at": self.started_at.isoformat(),\n 152→ "ended_at": self.ended_at.isoformat() if self.ended_at else None,\n 153→ "events": [event.to_dict() for event in self.events],\n 154→ "screenshots": [screenshot.to_dict() for screenshot in self.screenshots]\n 155→ }\n 156→ \n 157→ @classmethod\n 158→ def from_json(cls, data: Dict[str, Any]) -> 'RawSession':\n 159→ """Désérialiser depuis JSON"""\n 160→ # Valider schéma\n 161→ schema_version = data.get("schema_version")\n 162→ if schema_version != "rawsession_v1":\n 163→ raise ValueError(\n 164→ f"Unsupported schema version: {schema_version}. "\n 165→ f"Expected: rawsession_v1"\n 166→ )\n 167→ \n 168→ # Parser dates\n 169→ started_at = datetime.fromisoformat(data["started_at"])\n 170→ ended_at = datetime.fromisoformat(data["ended_at"]) if data.get("ended_at") else None\n 171→ \n 172→ # Parser events et screenshots\n 173→ events = [Event.from_dict(e) for e in data.get("events", [])]\n 174→ screenshots = [Screenshot.from_dict(s) for s in data.get("screenshots", [])]\n 175→ \n 176→ return cls(\n 177→ schema_version=schema_version,\n 178→ session_id=data["session_id"],\n 179→ agent_version=data["agent_version"],\n 180→ environment=data["environment"],\n 181→ user=data["user"],\n 182→ context=data["context"],\n 183→ started_at=started_at,\n 184→ ended_at=ended_at,\n 185→ events=events,\n 186→ screenshots=screenshots\n 187→ )\n 188→ \n 189→ def save_to_file(self, filepath: Path) -> None:\n 190→ """Sauvegarder dans un fichier JSON"""\n 191→ filepath.parent.mkdir(parents=True, exist_ok=True)\n 192→ with open(filepath, 'w', encoding='utf-8') as f:\n 193→ json.dump(self.to_json(), f, indent=2, ensure_ascii=False)\n 194→ \n 195→ @classmethod\n 196→ def load_from_file(cls, filepath: Path) -> 'RawSession':\n 197→ """Charger depuis un fichier JSON"""\n 198→ with open(filepath, 'r', encoding='utf-8') as f:\n 199→ data = json.load(f)\n 200→ return cls.from_json(data)\n 201→"}]},"uuid":"6cb44275-219c-4d0e-9293-504a98648f63","timestamp":"2026-03-14T21:14:16.066Z","sourceToolAssistantUUID":"74035396-1344-40a9-a424-1b708980a39d"} 
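The `raw_session.py` listing in the record above pins down the entire `rawsession_v1` contract: the required constructor fields, `Event.to_dict` flattening its `data` into the top level, and schema validation on load. Below is a minimal round-trip sketch against that exact API, assuming the snippet runs from the repo root so `core.models.raw_session` imports; the session values and the `data/tmp` path are purely illustrative.

```python
# Round-trip sketch for the RawSession model listed above: build a tiny
# session, persist it, and reload it. Illustrative values throughout.
from datetime import datetime
from pathlib import Path

from core.models.raw_session import Event, RawSession, RawWindowContext

session = RawSession(
    session_id="demo_0001",
    agent_version="agent_v1_stream",
    environment={"screen": {"primary_resolution": [1920, 1080]}},
    user={"id": "remote_agent"},
    context={},
    started_at=datetime.now(),
)
session.add_event(Event(
    t=0.42,
    type="mouse_click",
    window=RawWindowContext(title="DPI - Accueil", app_name="dpi.exe"),
    data={"x_pct": 0.5, "y_pct": 0.5, "button": "left"},
))

path = Path("data/tmp/demo_0001.json")
session.save_to_file(path)                  # writes schema "rawsession_v1"
restored = RawSession.load_from_file(path)  # validates schema on load
assert restored.events[0].type == "mouse_click"
print(len(restored.events), "event(s) restored")
```

Note that `from_json` raises `ValueError` for any other `schema_version`, so captures produced under a future schema would need a migration step before reload.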
{"parentUuid":"6cb44275-219c-4d0e-9293-504a98648f63","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01KJZWXtYFt4YeJeT79C5HFZ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_015sEk5WTTPnQvM6rvUNNHaZ","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/core/graph/graph_builder.py"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":29989,"cache_read_input_tokens":12328,"cache_creation":{"ephemeral_5m_input_tokens":29989,"ephemeral_1h_input_tokens":0},"output_tokens":25,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fTzHpSw1CM3An5Rg6a","type":"assistant","uuid":"44291c11-e293-4eb3-8257-9ab8a6b59276","timestamp":"2026-03-14T21:14:20.189Z"} {"parentUuid":"44291c11-e293-4eb3-8257-9ab8a6b59276","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_015sEk5WTTPnQvM6rvUNNHaZ","toolUseID":"toolu_015sEk5WTTPnQvM6rvUNNHaZ","timestamp":"2026-03-14T21:14:20.393Z","uuid":"fe54ed18-a675-4b01-87b9-336c0c578408"} {"parentUuid":"44291c11-e293-4eb3-8257-9ab8a6b59276","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_015sEk5WTTPnQvM6rvUNNHaZ","type":"tool_result","content":" 1→"""\n 2→GraphBuilder - Construction Automatique de Workflow Graphs\n 3→\n 4→Ce module implémente la construction automatique de graphes de workflows\n 5→en analysant les sessions enregistrées et en détectant les patterns répétés.\n 6→\n 7→Architecture:\n 8→ 1. Création de ScreenStates depuis RawSession\n 9→ 2. Calcul de State Embeddings pour tous les états\n 10→ 3. Détection de patterns via clustering DBSCAN\n 11→ 4. Construction de WorkflowNodes depuis clusters\n 12→ 5. 
Construction de WorkflowEdges depuis transitions\n 13→\n 14→Algorithme de Détection de Patterns:\n 15→ - Utilise DBSCAN (Density-Based Spatial Clustering of Applications with Noise)\n 16→ - Métrique: similarité cosinus entre embeddings\n 17→ - Filtre les clusters avec moins de N répétitions\n 18→ - Calcule un prototype (moyenne) pour chaque cluster\n 19→\n 20→Example:\n 21→ >>> builder = GraphBuilder(min_pattern_repetitions=3)\n 22→ >>> workflow = builder.build_from_session(raw_session)\n 23→ >>> print(f"Workflow with {len(workflow.nodes)} nodes")\n 24→"""\n 25→\n 26→import logging\n 27→import os\n 28→from typing import List, Dict, Optional, Tuple, Any\n 29→from collections import defaultdict, Counter\n 30→from datetime import datetime\n 31→from pathlib import Path\n 32→\n 33→import numpy as np\n 34→from sklearn.cluster import DBSCAN\n 35→\n 36→from core.models.raw_session import RawSession, Event\n 37→from core.models.screen_state import (\n 38→ ScreenState, WindowContext, RawLevel, PerceptionLevel, \n 39→ ContextLevel, EmbeddingRef\n 40→)\n 41→from core.models.workflow_graph import (\n 42→ Workflow,\n 43→ WorkflowNode,\n 44→ WorkflowEdge,\n 45→ ScreenTemplate,\n 46→ Action,\n 47→ TargetSpec,\n 48→ EdgeConstraints,\n 49→ PostConditions,\n 50→ WindowConstraint,\n 51→ TextConstraint,\n 52→ UIConstraint,\n 53→ EmbeddingPrototype,\n 54→)\n 55→from core.embedding.state_embedding_builder import StateEmbeddingBuilder\n 56→from core.embedding.faiss_manager import FAISSManager\n 57→from core.training.quality_validator import TrainingQualityValidator, QualityReport\n 58→\n 59→logger = logging.getLogger(__name__)\n 60→\n 61→\n 62→class GraphBuilder:\n 63→ """\n 64→ Constructeur de graphes de workflows depuis sessions brutes.\n 65→ \n 66→ Cette classe analyse une RawSession pour construire automatiquement\n 67→ un Workflow avec ses nodes et edges en détectant les patterns répétés.\n 68→ \n 69→ Attributes:\n 70→ embedding_builder: Builder pour calculer les State Embeddings\n 71→ faiss_manager: Manager FAISS pour indexation (optionnel)\n 72→ min_pattern_repetitions: Nombre minimum de répétitions pour un pattern\n 73→ clustering_eps: Distance maximum entre points pour DBSCAN\n 74→ clustering_min_samples: Nombre minimum d'échantillons par cluster\n 75→ \n 76→ Example:\n 77→ >>> builder = GraphBuilder(min_pattern_repetitions=3)\n 78→ >>> workflow = builder.build_from_session(session, "Login Workflow")\n 79→ """\n 80→ \n 81→ def __init__(\n 82→ self,\n 83→ embedding_builder: Optional[StateEmbeddingBuilder] = None,\n 84→ faiss_manager: Optional[FAISSManager] = None,\n 85→ quality_validator: Optional[TrainingQualityValidator] = None,\n 86→ min_pattern_repetitions: int = 3,\n 87→ clustering_eps: float = 0.15,\n 88→ clustering_min_samples: int = 2,\n 89→ enable_quality_validation: bool = True,\n 90→ ):\n 91→ """\n 92→ Initialiser le GraphBuilder.\n 93→ \n 94→ Args:\n 95→ embedding_builder: Builder pour State Embeddings (créé si None)\n 96→ faiss_manager: Manager FAISS pour indexation (optionnel)\n 97→ quality_validator: Validateur de qualité (créé si None)\n 98→ min_pattern_repetitions: Nombre minimum de répétitions pour un pattern\n 99→ clustering_eps: Epsilon pour DBSCAN (distance max entre points)\n 100→ clustering_min_samples: Nombre minimum d'échantillons pour un cluster\n 101→ enable_quality_validation: Activer la validation de qualité\n 102→ """\n 103→ self.embedding_builder = embedding_builder or StateEmbeddingBuilder()\n 104→ self.faiss_manager = faiss_manager\n 105→ self.quality_validator = quality_validator 
or TrainingQualityValidator()\n 106→ self.min_pattern_repetitions = min_pattern_repetitions\n 107→ self.clustering_eps = clustering_eps\n 108→ self.clustering_min_samples = clustering_min_samples\n 109→ self.enable_quality_validation = enable_quality_validation\n 110→ self._screen_analyzer = None # ScreenAnalyzer (lazy import)\n 111→ \n 112→ logger.info(\n 113→ f"GraphBuilder initialized: "\n 114→ f"min_repetitions={min_pattern_repetitions}, "\n 115→ f"eps={clustering_eps}, "\n 116→ f"min_samples={clustering_min_samples}, "\n 117→ f"quality_validation={enable_quality_validation}"\n 118→ )\n 119→ \n 120→ def build_from_session(\n 121→ self,\n 122→ session: RawSession,\n 123→ workflow_name: Optional[str] = None,\n 124→ precomputed_states: Optional[List["ScreenState"]] = None,\n 125→ ) -> Workflow:\n 126→ """\n 127→ Construire un Workflow complet depuis une RawSession.\n 128→\n 129→ Processus:\n 130→ 1. Créer ScreenStates depuis screenshots (ou utiliser precomputed_states)\n 131→ 2. Calculer embeddings pour chaque état\n 132→ 3. Détecter patterns via clustering\n 133→ 4. Construire nodes depuis clusters\n 134→ 5. Construire edges depuis transitions\n 135→\n 136→ Args:\n 137→ session: Session brute à analyser\n 138→ workflow_name: Nom du workflow (généré si None)\n 139→ precomputed_states: ScreenStates déjà analysés (streaming).\n 140→ Si fourni, saute l'étape 1 (pas de re-analyse via ScreenAnalyzer).\n 141→\n 142→ Returns:\n 143→ Workflow construit avec nodes et edges\n 144→\n 145→ Raises:\n 146→ ValueError: Si la session est vide ou invalide\n 147→ """\n 148→ if not precomputed_states and not session.screenshots:\n 149→ raise ValueError("Session has no screenshots and no precomputed states")\n 150→\n 151→ logger.info(\n 152→ f"Building workflow from session {session.session_id} "\n 153→ f"with {len(precomputed_states or session.screenshots)} "\n 154→ f"{'precomputed states' if precomputed_states else 'screenshots'}"\n 155→ )\n 156→\n 157→ # Étape 1: Créer ScreenStates (ou réutiliser ceux pré-calculés)\n 158→ if precomputed_states:\n 159→ screen_states = precomputed_states\n 160→ logger.debug(f"Using {len(screen_states)} precomputed screen states")\n 161→ else:\n 162→ screen_states = self._create_screen_states(session)\n 163→ logger.debug(f"Created {len(screen_states)} screen states")\n 164→\n 165→ # Étape 2: Calculer embeddings\n 166→ embeddings = self._compute_embeddings(screen_states)\n 167→ logger.debug(f"Computed {len(embeddings)} embeddings")\n 168→ \n 169→ # Étape 3: Détecter patterns\n 170→ clusters = self._detect_patterns(embeddings, screen_states)\n 171→ logger.info(f"Detected {len(clusters)} patterns")\n 172→ \n 173→ # Étape 4: Construire nodes\n 174→ nodes = self._build_nodes(clusters, screen_states, embeddings)\n 175→ logger.info(f"Built {len(nodes)} workflow nodes")\n 176→ \n 177→ # Étape 5: Construire edges\n 178→ edges = self._build_edges(nodes, screen_states, session)\n 179→ logger.info(f"Built {len(edges)} workflow edges")\n 180→ \n 181→ # Créer Workflow\n 182→ from core.models.workflow_graph import WorkflowStats, SafetyRules, LearningConfig\n 183→ \n 184→ workflow = Workflow(\n 185→ workflow_id=workflow_name or f"workflow_{session.session_id}",\n 186→ name=workflow_name or "Unnamed Workflow",\n 187→ description="Auto-generated workflow",\n 188→ version=1,\n 189→ learning_state="OBSERVATION",\n 190→ created_at=datetime.now(),\n 191→ updated_at=datetime.now(),\n 192→ entry_nodes=[nodes[0].node_id] if nodes else [],\n 193→ end_nodes=[],\n 194→ nodes=nodes,\n 195→ edges=edges,\n 
196→ safety_rules=SafetyRules(),\n 197→ stats=WorkflowStats(),\n 198→ learning=LearningConfig()\n 199→ )\n 200→ \n 201→ # Étape 6: Validation de qualité\n 202→ quality_report = None\n 203→ if self.enable_quality_validation and screen_states:\n 204→ quality_report = self._validate_workflow_quality(\n 205→ workflow, screen_states, embeddings, clusters\n 206→ )\n 207→ \n 208→ # Stocker le rapport dans les métadonnées du workflow\n 209→ workflow.metadata = workflow.metadata or {}\n 210→ workflow.metadata['quality_report'] = quality_report.to_dict()\n 211→ \n 212→ # Ajuster learning_state basé sur la qualité\n 213→ if quality_report.is_production_ready:\n 214→ workflow.learning_state = "AUTO_CANDIDATE"\n 215→ logger.info("Workflow qualité suffisante -> AUTO_CANDIDATE")\n 216→ else:\n 217→ workflow.learning_state = "OBSERVATION"\n 218→ logger.warning(\n 219→ f"Qualité insuffisante ({quality_report.overall_score:.3f}), "\n 220→ f"workflow reste en OBSERVATION"\n 221→ )\n 222→ \n 223→ logger.info(\n 224→ f"Workflow '{workflow.name}' built successfully: "\n 225→ f"{len(nodes)} nodes, {len(edges)} edges"\n 226→ )\n 227→ \n 228→ return workflow\n 229→ \n 230→ def _validate_workflow_quality(\n 231→ self,\n 232→ workflow: Workflow,\n 233→ screen_states: List[ScreenState],\n 234→ embeddings: List[np.ndarray],\n 235→ clusters: Dict[int, List[int]]\n 236→ ) -> QualityReport:\n 237→ """\n 238→ Valider la qualité du workflow construit.\n 239→ \n 240→ Args:\n 241→ workflow: Workflow à valider\n 242→ screen_states: États d'écran utilisés\n 243→ embeddings: Embeddings calculés\n 244→ clusters: Clusters détectés\n 245→ \n 246→ Returns:\n 247→ QualityReport avec métriques et recommandations\n 248→ """\n 249→ logger.info(f"Validation qualité du workflow {workflow.workflow_id}")\n 250→ \n 251→ # Préparer les données pour le validateur\n 252→ embeddings_array = np.array(embeddings)\n 253→ \n 254→ # Créer labels depuis les clusters\n 255→ labels = np.full(len(embeddings), -1) # -1 = bruit\n 256→ for cluster_id, indices in clusters.items():\n 257→ for idx in indices:\n 258→ labels[idx] = cluster_id\n 259→ \n 260→ # Valider avec le TrainingQualityValidator\n 261→ report = self.quality_validator.validate_workflow(\n 262→ workflow=workflow,\n 263→ observations=screen_states,\n 264→ embeddings=embeddings_array,\n 265→ labels=labels\n 266→ )\n 267→ \n 268→ logger.info(\n 269→ f"Validation terminée: score={report.overall_score:.3f}, "\n 270→ f"production_ready={report.is_production_ready}"\n 271→ )\n 272→ \n 273→ return report\n 274→ \n 275→ def _create_screen_states(self, session: RawSession) -> List[ScreenState]:\n 276→ """\n 277→ Créer ScreenStates enrichis depuis les screenshots de la session.\n 278→ \n 279→ Pour chaque screenshot:\n 280→ 1. Trouver l'événement associé pour le contexte de fenêtre\n 281→ 2. Créer les 4 niveaux du ScreenState\n 282→ 3. 
Optionnellement détecter les éléments UI\n 283→ \n 284→ Args:\n 285→ session: Session brute\n 286→ \n 287→ Returns:\n 288→ Liste de ScreenStates enrichis\n 289→ """\n 290→ screen_states = []\n 291→ \n 292→ # Créer un mapping screenshot_id -> événement\n 293→ screenshot_to_event = {}\n 294→ for event in session.events:\n 295→ if event.screenshot_id:\n 296→ screenshot_to_event[event.screenshot_id] = event\n 297→ \n 298→ for i, screenshot in enumerate(session.screenshots):\n 299→ # Trouver l'événement associé\n 300→ event = screenshot_to_event.get(screenshot.screenshot_id)\n 301→ \n 302→ # Créer WindowContext depuis l'événement\n 303→ if event and event.window:\n 304→ window = WindowContext(\n 305→ app_name=event.window.app_name,\n 306→ window_title=event.window.title,\n 307→ screen_resolution=session.environment.get("screen", {}).get("primary_resolution", [1920, 1080]),\n 308→ workspace="main"\n 309→ )\n 310→ else:\n 311→ window = WindowContext(\n 312→ app_name="unknown",\n 313→ window_title="Unknown",\n 314→ screen_resolution=[1920, 1080],\n 315→ workspace="main"\n 316→ )\n 317→ \n 318→ # Créer RawLevel\n 319→ # Construire chemin absolu : data/training/sessions/{session_id}/{session_id}/{relative_path}\n 320→ screenshot_absolute_path = f"data/training/sessions/{session.session_id}/{session.session_id}/{screenshot.relative_path}"\n 321→ screenshot_path = Path(screenshot_absolute_path)\n 322→ raw = RawLevel(\n 323→ screenshot_path=str(screenshot_path),\n 324→ capture_method="mss",\n 325→ file_size_bytes=screenshot_path.stat().st_size if screenshot_path.exists() else 0\n 326→ )\n 327→ \n 328→ # Créer PerceptionLevel — enrichir avec OCR si le screenshot existe\n 329→ detected_text = []\n 330→ text_method = "none"\n 331→\n 332→ if screenshot_path.exists():\n 333→ try:\n 334→ if self._screen_analyzer is None:\n 335→ from core.pipeline.screen_analyzer import ScreenAnalyzer\n 336→ self._screen_analyzer = ScreenAnalyzer(session_id=session.session_id)\n 337→ extracted = self._screen_analyzer._extract_text(str(screenshot_path))\n 338→ if extracted:\n 339→ detected_text = extracted\n 340→ text_method = self._screen_analyzer._get_ocr_method_name()\n 341→ except Exception as e:\n 342→ logger.debug(f"OCR échoué pour {screenshot_path}: {e}")\n 343→\n 344→ perception = PerceptionLevel(\n 345→ embedding=EmbeddingRef(\n 346→ provider="openclip_ViT-B-32",\n 347→ vector_id=f"data/embeddings/screens/{session.session_id}state{i:04d}.npy",\n 348→ dimensions=512\n 349→ ),\n 350→ detected_text=detected_text,\n 351→ text_detection_method=text_method,\n 352→ confidence_avg=0.85 if detected_text else 0.0\n 353→ )\n 354→ \n 355→ # Créer ContextLevel\n 356→ context = ContextLevel(\n 357→ current_workflow_candidate=None,\n 358→ workflow_step=i,\n 359→ user_id=session.user.get("id", "unknown"),\n 360→ tags=list(session.context.get("tags", [])) if isinstance(session.context.get("tags"), list) else [],\n 361→ business_variables={}\n 362→ )\n 363→ \n 364→ # Parser timestamp\n 365→ if isinstance(screenshot.captured_at, str):\n 366→ timestamp = datetime.fromisoformat(screenshot.captured_at.replace('Z', '+00:00'))\n 367→ else:\n 368→ timestamp = screenshot.captured_at\n 369→ \n 370→ # Créer ScreenState complet\n 371→ state = ScreenState(\n 372→ screen_state_id=f"{session.session_id}state{i:04d}",\n 373→ timestamp=timestamp,\n 374→ session_id=session.session_id,\n 375→ window=window,\n 376→ raw=raw,\n 377→ perception=perception,\n 378→ context=context,\n 379→ metadata={\n 380→ "screenshot_id": screenshot.screenshot_id,\n 381→ 
"event_type": event.type if event else None,\n 382→ "event_time": event.t if event else None\n 383→ },\n 384→ ui_elements=[] # Sera rempli par UIDetector si disponible\n 385→ )\n 386→ \n 387→ screen_states.append(state)\n 388→ \n 389→ logger.info(f"Created {len(screen_states)} enriched screen states")\n 390→ return screen_states\n 391→ \n 392→ def _compute_embeddings(\n 393→ self, screen_states: List[ScreenState]\n 394→ ) -> List[np.ndarray]:\n 395→ """\n 396→ Calculer State Embeddings pour tous les états.\n 397→ \n 398→ Utilise StateEmbeddingBuilder pour générer des embeddings\n 399→ multi-modaux (image + texte + UI). Ajoute optionnellement\n 400→ les embeddings à l'index FAISS.\n 401→ \n 402→ Args:\n 403→ screen_states: Liste de ScreenStates\n 404→ \n 405→ Returns:\n 406→ Liste de vecteurs d'embeddings (numpy arrays)\n 407→ """\n 408→ embeddings = []\n 409→ \n 410→ for state in screen_states:\n 411→ # Construire embedding\n 412→ state_embedding = self.embedding_builder.build(state)\n 413→ vector = state_embedding.get_vector()\n 414→ embeddings.append(vector)\n 415→ \n 416→ # Ajouter à FAISS si disponible\n 417→ if self.faiss_manager:\n 418→ self.faiss_manager.add_embedding(\n 419→ state.screen_state_id,\n 420→ vector,\n 421→ {"state_id": state.screen_state_id},\n 422→ )\n 423→ \n 424→ return embeddings\n 425→ \n 426→ def _detect_patterns(\n 427→ self,\n 428→ embeddings: List[np.ndarray],\n 429→ screen_states: List[ScreenState],\n 430→ ) -> Dict[int, List[int]]:\n 431→ """\n 432→ Détecter patterns répétés via clustering DBSCAN.\n 433→ \n 434→ Algorithme:\n 435→ 1. Convertir embeddings en matrice numpy\n 436→ 2. Appliquer DBSCAN avec métrique cosinus\n 437→ 3. Grouper états par cluster\n 438→ 4. Filtrer clusters avec assez de répétitions\n 439→ \n 440→ Args:\n 441→ embeddings: Vecteurs d'embeddings\n 442→ screen_states: ScreenStates correspondants\n 443→ \n 444→ Returns:\n 445→ Dictionnaire {cluster_id: [indices des états]}\n 446→ \n 447→ Note:\n 448→ Les états non assignés (bruit) ont label=-1 et sont ignorés\n 449→ """\n 450→ if len(embeddings) < self.min_pattern_repetitions:\n 451→ logger.warning(\n 452→ f"Not enough states ({len(embeddings)}) for pattern detection "\n 453→ f"(minimum: {self.min_pattern_repetitions})"\n 454→ )\n 455→ return {}\n 456→ \n 457→ # Convertir en matrice numpy\n 458→ X = np.array(embeddings)\n 459→ \n 460→ # Clustering DBSCAN\n 461→ clustering = DBSCAN(\n 462→ eps=self.clustering_eps,\n 463→ min_samples=self.clustering_min_samples,\n 464→ metric="cosine",\n 465→ )\n 466→ labels = clustering.fit_predict(X)\n 467→ \n 468→ # Grouper par cluster\n 469→ clusters = defaultdict(list)\n 470→ noise_count = 0\n 471→ \n 472→ for idx, label in enumerate(labels):\n 473→ if label == -1:\n 474→ noise_count += 1\n 475→ else:\n 476→ clusters[label].append(idx)\n 477→ \n 478→ # Filtrer clusters avec assez de répétitions\n 479→ filtered_clusters = {\n 480→ cluster_id: indices\n 481→ for cluster_id, indices in clusters.items()\n 482→ if len(indices) >= self.min_pattern_repetitions\n 483→ }\n 484→ \n 485→ logger.info(\n 486→ f"Clustering results: {len(filtered_clusters)} patterns, "\n 487→ f"{noise_count} noise points, "\n 488→ f"{len(clusters) - len(filtered_clusters)} small clusters filtered"\n 489→ )\n 490→ \n 491→ return filtered_clusters\n 492→ \n 493→ def _build_nodes(\n 494→ self,\n 495→ clusters: Dict[int, List[int]],\n 496→ screen_states: List[ScreenState],\n 497→ embeddings: List[np.ndarray],\n 498→ ) -> List[WorkflowNode]:\n 499→ """\n 500→ Construire WorkflowNodes 
depuis les clusters détectés.\n 501→ \n 502→ Pour chaque cluster:\n 503→ 1. Calculer embedding prototype (moyenne normalisée)\n 504→ 2. Extraire contraintes depuis états du cluster\n 505→ 3. Créer ScreenTemplate\n 506→ 4. Créer WorkflowNode\n 507→ \n 508→ Args:\n 509→ clusters: Clusters détectés {cluster_id: [indices]}\n 510→ screen_states: ScreenStates\n 511→ embeddings: Embeddings\n 512→ \n 513→ Returns:\n 514→ Liste de WorkflowNodes\n 515→ """\n 516→ nodes = []\n 517→ \n 518→ for cluster_id, indices in clusters.items():\n 519→ # Calculer embedding prototype (moyenne)\n 520→ cluster_embeddings = [embeddings[i] for i in indices]\n 521→ prototype = np.mean(cluster_embeddings, axis=0)\n 522→ prototype = prototype / np.linalg.norm(prototype) # Normaliser\n 523→ \n 524→ # Extraire contraintes depuis les états du cluster\n 525→ cluster_states = [screen_states[i] for i in indices]\n 526→ template = self._create_screen_template(cluster_states, prototype)\n 527→ \n 528→ # Créer node\n 529→ node = WorkflowNode(\n 530→ node_id=f"node_{cluster_id:03d}",\n 531→ name=f"State Pattern {cluster_id}",\n 532→ description=f"Pattern auto-détecté ({len(indices)} observations)",\n 533→ template=template,\n 534→ metadata={\n 535→ "observation_count": len(indices),\n 536→ "_prototype_vector": prototype.tolist(),\n 537→ },\n 538→ )\n 539→ \n 540→ nodes.append(node)\n 541→ logger.debug(\n 542→ f"Created node {node.node_id} with {len(indices)} observations"\n 543→ )\n 544→ \n 545→ return nodes\n 546→ \n 547→ def _create_screen_template(\n 548→ self,\n 549→ states: List[ScreenState],\n 550→ prototype_embedding: np.ndarray,\n 551→ ) -> ScreenTemplate:\n 552→ """\n 553→ Créer un ScreenTemplate depuis un cluster d'états.\n 554→\n 555→ Extrait les contraintes communes à tous les états du cluster :\n 556→ - window_title_pattern : titre de fenêtre commun\n 557→ - required_text_patterns : textes présents dans la majorité des états\n 558→ - required_ui_elements : rôles/types UI récurrents\n 559→\n 560→ Args:\n 561→ states: États du cluster\n 562→ prototype_embedding: Embedding prototype\n 563→\n 564→ Returns:\n 565→ ScreenTemplate avec contraintes extraites\n 566→ """\n 567→ # --- Extraction du titre de fenêtre commun ---\n 568→ window_title_pattern = self._extract_window_pattern(states)\n 569→\n 570→ # --- Extraction des textes récurrents ---\n 571→ required_text_patterns = self._extract_common_texts(states)\n 572→\n 573→ # --- Extraction des éléments UI récurrents ---\n 574→ required_ui_elements = self._extract_common_ui_elements(states)\n 575→\n 576→ # Construire les sous-objets de contraintes\n 577→ window_constraint = WindowConstraint(\n 578→ title_pattern=window_title_pattern,\n 579→ title_contains=window_title_pattern,\n 580→ )\n 581→\n 582→ text_constraint = TextConstraint(\n 583→ required_texts=required_text_patterns,\n 584→ )\n 585→\n 586→ ui_roles = [\n 587→ e.get("role", "") for e in required_ui_elements if e.get("role")\n 588→ ]\n 589→ ui_constraint = UIConstraint(\n 590→ required_roles=ui_roles,\n 591→ )\n 592→\n 593→ embedding_proto = EmbeddingPrototype(\n 594→ provider="openclip_ViT-B-32",\n 595→ vector_id="", # Le vecteur est stocké dans node.metadata._prototype_vector\n 596→ min_cosine_similarity=0.85,\n 597→ sample_count=len(states),\n 598→ )\n 599→\n 600→ return ScreenTemplate(\n 601→ window=window_constraint,\n 602→ text=text_constraint,\n 603→ ui=ui_constraint,\n 604→ embedding=embedding_proto,\n 605→ )\n 606→\n 607→ def _extract_window_pattern(self, states: List[ScreenState]) -> Optional[str]:\n 608→ 
"""Extraire un pattern de titre de fenêtre commun aux états du cluster."""\n 609→ titles = [s.window.window_title for s in states if s.window.window_title]\n 610→ if not titles:\n 611→ return None\n 612→\n 613→ # Si tous les titres sont identiques, retourner directement\n 614→ if len(set(titles)) == 1:\n 615→ return titles[0]\n 616→\n 617→ # Trouver le préfixe commun le plus long\n 618→ prefix = os.path.commonprefix(titles)\n 619→ if len(prefix) >= 5:\n 620→ return prefix.rstrip(" -–—|")\n 621→\n 622→ # Fallback: le titre le plus fréquent\n 623→ from collections import Counter\n 624→ most_common = Counter(titles).most_common(1)[0][0]\n 625→ return most_common\n 626→\n 627→ def _extract_common_texts(\n 628→ self, states: List[ScreenState], min_presence_ratio: float = 0.6\n 629→ ) -> List[str]:\n 630→ """\n 631→ Extraire les textes présents dans la majorité des états du cluster.\n 632→\n 633→ Args:\n 634→ states: États du cluster\n 635→ min_presence_ratio: Proportion minimale de présence (0.6 = 60% des états)\n 636→ """\n 637→ if not states:\n 638→ return []\n 639→\n 640→ # Collecter les textes de chaque état\n 641→ text_counts: Dict[str, int] = defaultdict(int)\n 642→ states_with_text = 0\n 643→\n 644→ for state in states:\n 645→ if hasattr(state.perception, 'detected_text') and state.perception.detected_text:\n 646→ states_with_text += 1\n 647→ seen_in_state = set()\n 648→ for text in state.perception.detected_text:\n 649→ normalized = text.strip().lower()\n 650→ if len(normalized) >= 3 and normalized not in seen_in_state:\n 651→ text_counts[normalized] += 1\n 652→ seen_in_state.add(normalized)\n 653→\n 654→ if states_with_text == 0:\n 655→ return []\n 656→\n 657→ # Garder les textes présents dans au moins min_presence_ratio des états\n 658→ threshold = max(2, int(states_with_text * min_presence_ratio))\n 659→ common_texts = [\n 660→ text for text, count in text_counts.items()\n 661→ if count >= threshold\n 662→ ]\n 663→\n 664→ # Limiter à 10 textes les plus fréquents\n 665→ common_texts.sort(key=lambda t: text_counts[t], reverse=True)\n 666→ return common_texts[:10]\n 667→\n 668→ def _extract_common_ui_elements(\n 669→ self, states: List[ScreenState], min_presence_ratio: float = 0.5\n 670→ ) -> List[Dict[str, Any]]:\n 671→ """\n 672→ Extraire les types/rôles d'éléments UI récurrents dans le cluster.\n 673→\n 674→ Retourne une liste de contraintes UI au format:\n 675→ [{"type": "button", "role": "validate", "min_count": 1}, ...]\n 676→ """\n 677→ if not states:\n 678→ return []\n 679→\n 680→ # Compter les paires (type, role) dans chaque état\n 681→ role_counts: Dict[str, int] = defaultdict(int)\n 682→ type_counts: Dict[str, int] = defaultdict(int)\n 683→ states_with_ui = 0\n 684→\n 685→ for state in states:\n 686→ if state.ui_elements:\n 687→ states_with_ui += 1\n 688→ seen_roles = set()\n 689→ seen_types = set()\n 690→ for el in state.ui_elements:\n 691→ el_type = getattr(el, 'type', 'unknown')\n 692→ el_role = getattr(el, 'role', 'unknown')\n 693→\n 694→ if el_role != 'unknown' and el_role not in seen_roles:\n 695→ role_counts[el_role] += 1\n 696→ seen_roles.add(el_role)\n 697→\n 698→ if el_type != 'unknown' and el_type not in seen_types:\n 699→ type_counts[el_type] += 1\n 700→ seen_types.add(el_type)\n 701→\n 702→ if states_with_ui == 0:\n 703→ return []\n 704→\n 705→ threshold = max(2, int(states_with_ui * min_presence_ratio))\n 706→\n 707→ constraints = []\n 708→\n 709→ # Ajouter les rôles récurrents\n 710→ for role, count in role_counts.items():\n 711→ if count >= threshold:\n 712→ 
constraints.append({\n 713→ "role": role,\n 714→ "min_count": 1,\n 715→ })\n 716→\n 717→ # Limiter à 8 contraintes\n 718→ constraints.sort(key=lambda c: role_counts.get(c.get("role", ""), 0), reverse=True)\n 719→ return constraints[:8]\n 720→ \n 721→ def _build_edges(\n 722→ self,\n 723→ nodes: List[WorkflowNode],\n 724→ screen_states: List[ScreenState],\n 725→ session: RawSession,\n 726→ ) -> List[WorkflowEdge]:\n 727→ """\n 728→ Construire WorkflowEdges depuis les transitions observées.\n 729→ \n 730→ Algorithme:\n 731→ 1. Mapper chaque ScreenState vers son node (via embedding similarity)\n 732→ 2. Identifier les transitions (state_i -> state_j où node change)\n 733→ 3. Extraire l'action depuis l'événement entre les deux états\n 734→ 4. Créer WorkflowEdge avec action et conditions\n 735→ \n 736→ Args:\n 737→ nodes: WorkflowNodes construits\n 738→ screen_states: ScreenStates\n 739→ session: Session brute (pour événements)\n 740→ \n 741→ Returns:\n 742→ Liste de WorkflowEdges\n 743→ """\n 744→ if not nodes or len(screen_states) < 2:\n 745→ logger.warning("Not enough data to build edges")\n 746→ return []\n 747→ \n 748→ edges = []\n 749→ edge_counts = defaultdict(int) # Pour compter les occurrences de chaque transition\n 750→ \n 751→ # Étape 1: Mapper chaque état vers son node\n 752→ state_to_node = self._map_states_to_nodes(screen_states, nodes)\n 753→ \n 754→ # Étape 2: Créer un mapping screenshot_id -> événement\n 755→ screenshot_to_event = {}\n 756→ for event in session.events:\n 757→ if event.screenshot_id:\n 758→ screenshot_to_event[event.screenshot_id] = event\n 759→ \n 760→ # Étape 3: Parcourir les transitions\n 761→ for i in range(len(screen_states) - 1):\n 762→ current_state = screen_states[i]\n 763→ next_state = screen_states[i + 1]\n 764→ \n 765→ current_node_id = state_to_node.get(current_state.screen_state_id)\n 766→ next_node_id = state_to_node.get(next_state.screen_state_id)\n 767→ \n 768→ # Si les deux états sont dans des nodes différents, c'est une transition\n 769→ if current_node_id and next_node_id and current_node_id != next_node_id:\n 770→ # Trouver l'événement qui a causé la transition\n 771→ event = self._find_transition_event(\n 772→ current_state, next_state, session.events\n 773→ )\n 774→ \n 775→ # Créer l'edge\n 776→ edge_key = f"{current_node_id}to{next_node_id}"\n 777→ edge_counts[edge_key] += 1\n 778→ \n 779→ # Ne créer l'edge qu'une fois, mais compter les occurrences\n 780→ if edge_counts[edge_key] == 1:\n 781→ edge = self._create_edge(\n 782→ current_node_id, next_node_id, event, edge_key\n 783→ )\n 784→ edges.append(edge)\n 785→ \n 786→ # Mettre à jour les stats des edges avec les comptages\n 787→ for edge in edges:\n 788→ edge_key = f"{edge.from_node}to{edge.to_node}"\n 789→ edge.stats.execution_count = edge_counts[edge_key]\n 790→ edge.stats.success_count = edge_counts[edge_key]\n 791→ \n 792→ logger.info(f"Built {len(edges)} edges from {sum(edge_counts.values())} transitions")\n 793→ return edges\n 794→ \n 795→ def _map_states_to_nodes(\n 796→ self,\n 797→ screen_states: List[ScreenState],\n 798→ nodes: List[WorkflowNode]\n 799→ ) -> Dict[str, str]:\n 800→ """\n 801→ Mapper chaque ScreenState vers le node le plus proche.\n 802→ \n 803→ Utilise la similarité d'embedding pour trouver le meilleur match.\n 804→ """\n 805→ state_to_node = {}\n 806→ \n 807→ # Récupérer les embeddings des prototypes de nodes\n 808→ node_prototypes = {}\n 809→ for node in nodes:\n 810→ # Priorité : vecteur en mémoire (metadata), sinon chargement depuis disque\n 811→ proto_list 
= node.metadata.get("_prototype_vector")\n 812→ if proto_list is not None:\n 813→ node_prototypes[node.node_id] = np.array(proto_list, dtype=np.float32)\n 814→ elif node.template and node.template.embedding and node.template.embedding.vector_id:\n 815→ proto_path = Path(node.template.embedding.vector_id)\n 816→ if proto_path.exists():\n 817→ node_prototypes[node.node_id] = np.load(proto_path)\n 818→ \n 819→ if not node_prototypes:\n 820→ logger.warning("No node prototypes available for mapping")\n 821→ return state_to_node\n 822→ \n 823→ # Pour chaque état, trouver le node le plus proche\n 824→ for state in screen_states:\n 825→ # Calculer embedding de l'état\n 826→ try:\n 827→ state_embedding = self.embedding_builder.build(state)\n 828→ state_vector = state_embedding.get_vector()\n 829→ \n 830→ # Trouver le node avec la meilleure similarité\n 831→ best_node_id = None\n 832→ best_similarity = -1\n 833→ \n 834→ for node_id, prototype in node_prototypes.items():\n 835→ similarity = np.dot(state_vector, prototype)\n 836→ if similarity > best_similarity:\n 837→ best_similarity = similarity\n 838→ best_node_id = node_id\n 839→ \n 840→ if best_node_id and best_similarity > 0.7: # Seuil minimum\n 841→ state_to_node[state.screen_state_id] = best_node_id\n 842→ \n 843→ except Exception as e:\n 844→ logger.warning(f"Failed to map state {state.screen_state_id}: {e}")\n 845→ \n 846→ return state_to_node\n 847→ \n 848→ def _find_transition_event(\n 849→ self,\n 850→ current_state: ScreenState,\n 851→ next_state: ScreenState,\n 852→ events: List[Event]\n 853→ ) -> Optional[Event]:\n 854→ """\n 855→ Trouver l'événement qui a causé la transition entre deux états.\n 856→ \n 857→ Cherche l'événement (clic, frappe) qui s'est produit entre les deux screenshots.\n 858→ """\n 859→ current_time = current_state.metadata.get("event_time", 0)\n 860→ next_time = next_state.metadata.get("event_time", float('inf'))\n 861→ \n 862→ # Chercher les événements d'action entre les deux timestamps\n 863→ action_events = []\n 864→ for event in events:\n 865→ if current_time <= event.t < next_time:\n 866→ if event.type in ["mouse_click", "key_press", "text_input"]:\n 867→ action_events.append(event)\n 868→ \n 869→ # Retourner le dernier événement d'action (celui qui a probablement causé la transition)\n 870→ if action_events:\n 871→ return action_events[-1]\n 872→ \n 873→ return None\n 874→ \n 875→ def _create_edge(\n 876→ self,\n 877→ from_node: str,\n 878→ to_node: str,\n 879→ event: Optional[Event],\n 880→ edge_id: str\n 881→ ) -> WorkflowEdge:\n 882→ """\n 883→ Créer un WorkflowEdge depuis une transition observée.\n 884→ """\n 885→ # Déterminer le type d'action\n 886→ if event:\n 887→ action_type = event.type\n 888→ action_params = {}\n 889→ \n 890→ if action_type == "mouse_click":\n 891→ action_params = {\n 892→ "button": event.data.get("button", "left"),\n 893→ "position": event.data.get("pos", [0, 0]),\n 894→ "wait_after_ms": 500\n 895→ }\n 896→ target_role = "unknown_element" # Sera affiné avec détection UI\n 897→ \n 898→ elif action_type == "key_press":\n 899→ action_params = {\n 900→ "keys": event.data.get("keys", []),\n 901→ "wait_after_ms": 200\n 902→ }\n 903→ target_role = "keyboard_input"\n 904→ \n 905→ elif action_type == "text_input":\n 906→ action_params = {\n 907→ "text": event.data.get("text", ""),\n 908→ "wait_after_ms": 300\n 909→ }\n 910→ target_role = "text_field"\n 911→ else:\n 912→ action_params = {}\n 913→ target_role = "unknown"\n 914→ else:\n 915→ action_type = "unknown"\n 916→ action_params = {}\n 
917→ target_role = "unknown"\n 918→ \n 919→ # Créer l'action\n 920→ action = Action(\n 921→ type=action_type,\n 922→ target=TargetSpec(\n 923→ by_role=target_role,\n 924→ selection_policy="first",\n 925→ fallback_strategy="visual_similarity"\n 926→ ),\n 927→ parameters=action_params\n 928→ )\n 929→ \n 930→ # Créer les contraintes\n 931→ constraints = EdgeConstraints(\n 932→ pre_conditions={},\n 933→ required_confidence=0.8,\n 934→ max_wait_time_ms=5000\n 935→ )\n 936→ \n 937→ # Créer les post-conditions\n 938→ post_conditions = PostConditions(\n 939→ expected_node=to_node,\n 940→ window_change_expected=False,\n 941→ new_ui_elements_expected=[],\n 942→ timeout_ms=3000\n 943→ )\n 944→ \n 945→ # Créer l'edge\n 946→ from core.models.workflow_graph import EdgeStats\n 947→ \n 948→ return WorkflowEdge(\n 949→ edge_id=edge_id,\n 950→ from_node=from_node,\n 951→ to_node=to_node,\n 952→ action=action,\n 953→ constraints=constraints,\n 954→ post_conditions=post_conditions,\n 955→ stats=EdgeStats(),\n 956→ metadata={\n 957→ "created_from_event": event.type if event else None,\n 958→ "auto_generated": True\n 959→ }\n 960→ )\n 961→\n 962→\n 963→def main():\n 964→ """Point d'entrée pour tests manuels."""\n 965→ logging.basicConfig(\n 966→ level=logging.INFO,\n 967→ format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",\n 968→ )\n 969→ \n 970→ builder = GraphBuilder(min_pattern_repetitions=3)\n 971→ logger.info(f"GraphBuilder initialized: {builder}")\n 972→ logger.info("Ready to build workflows from sessions")\n 973→\n 974→\n 975→if __name__ == "__main__":\n 976→ main()\n 977→"}]},"uuid":"9efb7a10-5357-418d-8251-93e4f23d9909","timestamp":"2026-03-14T21:14:20.393Z","sourceToolAssistantUUID":"44291c11-e293-4eb3-8257-9ab8a6b59276"} {"parentUuid":"9efb7a10-5357-418d-8251-93e4f23d9909","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01KJZWXtYFt4YeJeT79C5HFZ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01VvgoaDLbCz3XPXZQg9k9Ld","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/core/analysis/screen_analyzer.py"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":29989,"cache_read_input_tokens":12328,"cache_creation":{"ephemeral_5m_input_tokens":29989,"ephemeral_1h_input_tokens":0},"output_tokens":25,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fTzHpSw1CM3An5Rg6a","type":"assistant","uuid":"9ef499b9-bde2-42bb-8109-b57b94ffe46e","timestamp":"2026-03-14T21:14:20.705Z"} {"parentUuid":"9ef499b9-bde2-42bb-8109-b57b94ffe46e","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"type":"tool_result","content":"File does not exist. Note: your current working directory is /home/dom/ai/rpa_vision_v3/agent_v0.","is_error":true,"tool_use_id":"toolu_01VvgoaDLbCz3XPXZQg9k9Ld"}]},"uuid":"c5dcc4a7-bc96-44b1-99c3-977e9ea85433","timestamp":"2026-03-14T21:14:20.708Z","toolUseResult":"Error: File does not exist. 
Note: your current working directory is /home/dom/ai/rpa_vision_v3/agent_v0.","sourceToolAssistantUUID":"9ef499b9-bde2-42bb-8109-b57b94ffe46e"} {"parentUuid":"c5dcc4a7-bc96-44b1-99c3-977e9ea85433","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01KJZWXtYFt4YeJeT79C5HFZ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_013QiUXY1fNBCUn45oLh4J12","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/agent_chat/app.py"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":29989,"cache_read_input_tokens":12328,"output_tokens":192,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":29989},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fTzHpSw1CM3An5Rg6a","type":"assistant","uuid":"e2f56cab-9423-4d72-9dc4-ad0d440c4278","timestamp":"2026-03-14T21:14:22.583Z"} {"parentUuid":"e2f56cab-9423-4d72-9dc4-ad0d440c4278","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_013QiUXY1fNBCUn45oLh4J12","toolUseID":"toolu_013QiUXY1fNBCUn45oLh4J12","timestamp":"2026-03-14T21:14:23.766Z","uuid":"51ae516d-6b5b-4178-941a-a0dc9e48d6b9"} {"parentUuid":"e2f56cab-9423-4d72-9dc4-ad0d440c4278","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_013QiUXY1fNBCUn45oLh4J12","type":"tool_result","content":"\nOutput too large (61.9KB). 
Full output saved to: /home/dom/.claude/projects/-home-dom-ai-rpa-vision-v3/eec8c668-e1f8-4599-9dd5-42170a166ba4/tool-results/toolu_013QiUXY1fNBCUn45oLh4J12.txt\n\nPreview (first 2KB):\n 1→#!/usr/bin/env python3\n 2→"""\n 3→RPA Vision V3 - Agent Chat\n 4→\n 5→Interface conversationnelle pour communiquer avec le système RPA.\n 6→Style "Spotlight/Alfred" - minimaliste et efficace.\n 7→\n 8→Composants intégrés:\n 9→- IntentParser: Compréhension des intentions utilisateur\n 10→- ConfirmationLoop: Validation avant actions critiques\n 11→- ResponseGenerator: Réponses en langage naturel\n 12→- ConversationManager: Contexte multi-tour\n 13→\n 14→Usage:\n 15→ python agent_chat/app.py\n 16→\n 17→Puis ouvrir: http://localhost:5004\n 18→\n 19→Auteur: Dom - Janvier 2026\n 20→"""\n 21→\n 22→import asyncio\n 23→import json\n 24→import logging\n 25→import os\n 26→import sys\n 27→from pathlib import Path\n 28→from datetime import datetime\n 29→from typing import Dict, Any, List, Optional\n 30→\n 31→import requests as http_requests # Pour les appels au streaming server\n 32→\n 33→from flask import Flask, render_template, request, jsonify\n 34→from flask_socketio import SocketIO, emit\n 35→\n 36→# Add project root to path\n 37→sys.path.insert(0, str(Path(__file__).parent.parent))\n 38→\n 39→from core.workflow import SemanticMatcher, VariableManager\n 40→\n 41→# Import des composants conversationnels\n 42→from .intent_parser import IntentParser, IntentType, get_intent_parser\n 43→from .confirmation import ConfirmationLoop, ConfirmationStatus, RiskLevel, get_confirmation_loop\n 44→from .response_generator import ResponseGenerator, get_response_generator\n 45→from .conversation_manager import ConversationManager, get_conversation_manager\n 46→from .autonomous_planner import AutonomousPlanner, get_autonomous_planner, ExecutionPlan\n 47→\n 48→# GPU Resource Manager (optional)\n 49→try:\n 50→ from core.gpu import get_gpu_resource_manager, ExecutionMode\n 51→ GPU_AVAILABLE = True\n 52→except ImportError:\n 53→ GPU_AVAILABLE = False\n 54→\n...\n"}]},"uuid":"c16ab019-9353-48bf-9486-83378ee2b611","timestamp":"2026-03-14T21:14:23.766Z","sourceToolAssistantUUID":"e2f56cab-9423-4d72-9dc4-ad0d440c4278"} {"parentUuid":"c16ab019-9353-48bf-9486-83378ee2b611","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01E4BR695FSRsgCqtQGDUC31","type":"message","role":"assistant","content":[{"type":"text","text":"Let me continue exploring the key files."}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":13624,"cache_read_input_tokens":42317,"cache_creation":{"ephemeral_5m_input_tokens":13624,"ephemeral_1h_input_tokens":0},"output_tokens":1,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fUZQ8qJncoiWbXi5B2","type":"assistant","uuid":"e046efef-f76f-4804-8108-ed5a384d1316","timestamp":"2026-03-14T21:14:26.901Z"} 
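
The GraphBuilder excerpt earlier in this transcript detects repeated screen patterns by running DBSCAN (cosine metric) over unit-normalized state embeddings, then collapses each sufficiently repeated cluster into a prototype node whose embedding is the normalized mean. Below is a minimal, self-contained sketch of that step; the 512-dim toy vectors, `eps`, and thresholds are illustrative assumptions, not the project's tuned values.

```python
from collections import defaultdict

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Three synthetic "screen patterns", four noisy observations of each
bases = rng.normal(size=(3, 512))
embeddings = np.repeat(bases, 4, axis=0) + 0.01 * rng.normal(size=(12, 512))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # unit-norm vectors

labels = DBSCAN(eps=0.15, min_samples=2, metric="cosine").fit_predict(embeddings)

clusters = defaultdict(list)
for idx, label in enumerate(labels):
    if label != -1:              # -1 = noise, ignored as in _detect_patterns
        clusters[label].append(idx)

MIN_REPETITIONS = 3              # mirrors min_pattern_repetitions
for cluster_id, indices in clusters.items():
    if len(indices) < MIN_REPETITIONS:
        continue                 # small clusters are filtered out
    prototype = embeddings[indices].mean(axis=0)
    prototype /= np.linalg.norm(prototype)  # normalized mean, as in _build_nodes
    print(f"node{cluster_id:03d}: {len(indices)} observations")
```
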
{"parentUuid":"e046efef-f76f-4804-8108-ed5a384d1316","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01E4BR695FSRsgCqtQGDUC31","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01UZvAHPPoSdzKG2ZSU9kZsR","name":"Read","input":{"file_path":"/home/dom/.claude/projects/-home-dom-ai-rpa-vision-v3/eec8c668-e1f8-4599-9dd5-42170a166ba4/tool-results/toolu_013QiUXY1fNBCUn45oLh4J12.txt","offset":1,"limit":200},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":13624,"cache_read_input_tokens":42317,"cache_creation":{"ephemeral_5m_input_tokens":13624,"ephemeral_1h_input_tokens":0},"output_tokens":1,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fUZQ8qJncoiWbXi5B2","type":"assistant","uuid":"b4be59d4-ae9b-40c9-a94d-9501793254ca","timestamp":"2026-03-14T21:14:28.325Z"} {"parentUuid":"b4be59d4-ae9b-40c9-a94d-9501793254ca","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01UZvAHPPoSdzKG2ZSU9kZsR","toolUseID":"toolu_01UZvAHPPoSdzKG2ZSU9kZsR","timestamp":"2026-03-14T21:14:28.330Z","uuid":"21584fc9-6453-4644-b1b5-14173a5d7dc6"} {"parentUuid":"b4be59d4-ae9b-40c9-a94d-9501793254ca","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01UZvAHPPoSdzKG2ZSU9kZsR","type":"tool_result","content":" 1→ 1→#!/usr/bin/env python3\n 2→ 2→"""\n 3→ 3→RPA Vision V3 - Agent Chat\n 4→ 4→\n 5→ 5→Interface conversationnelle pour communiquer avec le système RPA.\n 6→ 6→Style "Spotlight/Alfred" - minimaliste et efficace.\n 7→ 7→\n 8→ 8→Composants intégrés:\n 9→ 9→- IntentParser: Compréhension des intentions utilisateur\n 10→ 10→- ConfirmationLoop: Validation avant actions critiques\n 11→ 11→- ResponseGenerator: Réponses en langage naturel\n 12→ 12→- ConversationManager: Contexte multi-tour\n 13→ 13→\n 14→ 14→Usage:\n 15→ 15→ python agent_chat/app.py\n 16→ 16→\n 17→ 17→Puis ouvrir: http://localhost:5004\n 18→ 18→\n 19→ 19→Auteur: Dom - Janvier 2026\n 20→ 20→"""\n 21→ 21→\n 22→ 22→import asyncio\n 23→ 23→import json\n 24→ 24→import logging\n 25→ 25→import os\n 26→ 26→import sys\n 27→ 27→from pathlib import Path\n 28→ 28→from datetime import datetime\n 29→ 29→from typing import Dict, Any, List, Optional\n 30→ 30→\n 31→ 31→import requests as http_requests # Pour les appels au streaming server\n 32→ 32→\n 33→ 33→from flask import Flask, render_template, request, jsonify\n 34→ 34→from flask_socketio import SocketIO, emit\n 35→ 35→\n 36→ 36→# Add project root to path\n 37→ 37→sys.path.insert(0, str(Path(file).parent.parent))\n 38→ 38→\n 39→ 39→from core.workflow import SemanticMatcher, VariableManager\n 40→ 40→\n 41→ 41→# Import des composants conversationnels\n 42→ 42→from .intent_parser 
import IntentParser, IntentType, get_intent_parser\n 43→ 43→from .confirmation import ConfirmationLoop, ConfirmationStatus, RiskLevel, get_confirmation_loop\n 44→ 44→from .response_generator import ResponseGenerator, get_response_generator\n 45→ 45→from .conversation_manager import ConversationManager, get_conversation_manager\n 46→ 46→from .autonomous_planner import AutonomousPlanner, get_autonomous_planner, ExecutionPlan\n 47→ 47→\n 48→ 48→# GPU Resource Manager (optional)\n 49→ 49→try:\n 50→ 50→ from core.gpu import get_gpu_resource_manager, ExecutionMode\n 51→ 51→ GPU_AVAILABLE = True\n 52→ 52→except ImportError:\n 53→ 53→ GPU_AVAILABLE = False\n 54→ 54→\n 55→ 55→# Execution components (optional - pour exécution réelle)\n 56→ 56→try:\n 57→ 57→ from core.execution import ActionExecutor, TargetResolver, ErrorHandler\n 58→ 58→ from core.execution.execution_loop import ExecutionLoop, ExecutionMode as ExecMode, ExecutionState\n 59→ 59→ from core.pipeline.workflow_pipeline import WorkflowPipeline\n 60→ 60→ from core.capture import ScreenCapturer\n 61→ 61→ EXECUTION_AVAILABLE = True\n 62→ 62→except ImportError as e:\n 63→ 63→ logger.warning(f"Composants d'exécution non disponibles: {e}")\n 64→ 64→ EXECUTION_AVAILABLE = False\n 65→ 65→\n 66→ 66→logging.basicConfig(level=logging.INFO)\n 67→ 67→logger = logging.getLogger(__name__)\n 68→ 68→\n 69→ 69→app = Flask(__name__)\n 70→ 70→app.config['SECRET_KEY'] = 'rpa-vision-v3-secret'\n 71→ 71→socketio = SocketIO(app, cors_allowed_origins="*")\n 72→ 72→\n 73→ 73→# Global state\n 74→ 74→matcher: Optional[SemanticMatcher] = None\n 75→ 75→gpu_manager = None\n 76→ 76→intent_parser: Optional[IntentParser] = None\n 77→ 77→confirmation_loop: Optional[ConfirmationLoop] = None\n 78→ 78→response_generator: Optional[ResponseGenerator] = None\n 79→ 79→conversation_manager: Optional[ConversationManager] = None\n 80→ 80→autonomous_planner: Optional[AutonomousPlanner] = None\n 81→ 81→\n 82→ 82→# Execution components\n 83→ 83→workflow_pipeline = None\n 84→ 84→action_executor = None\n 85→ 85→execution_loop = None\n 86→ 86→screen_capturer = None\n 87→ 87→\n 88→ 88→# URL du streaming server (Agent V1) pour l'exécution distante\n 89→ 89→STREAMING_SERVER_URL = os.environ.get(\n 90→ 90→ "RPA_STREAMING_URL", "http://localhost:5005"\n 91→ 91→)\n 92→ 92→\n 93→ 93→execution_status = {\n 94→ 94→ "running": False,\n 95→ 95→ "workflow": None,\n 96→ 96→ "progress": 0,\n 97→ 97→ "message": "",\n 98→ 98→ "can_minimize": True\n 99→ 99→}\n 100→ 100→command_history: List[Dict[str, Any]] = []\n 101→ 101→\n 102→ 102→\n 103→ 103→def init_system():\n 104→ 104→ """Initialiser tous les composants du système."""\n 105→ 105→ global matcher, gpu_manager\n 106→ 106→ global intent_parser, confirmation_loop, response_generator, conversation_manager\n 107→ 107→ global autonomous_planner\n 108→ 108→\n 109→ 109→ # 1. 
SemanticMatcher — multi-répertoires (P0-6) + matching LLM (P0-7)\n 110→ 110→ # Scan data/workflows/ + data/training/workflows/ + data/training/live_sessions/workflows/\n 111→ 111→ try:\n 112→ 112→ matcher = SemanticMatcher(\n 113→ 113→ workflows_dir=None, # None = scan tous les répertoires par défaut\n 114→ 114→ use_llm=True, # Matching sémantique via Ollama (P0-7)\n 115→ 115→ llm_model="qwen2.5:7b",\n 116→ 116→ )\n 117→ 117→ dirs_info = matcher.get_directories()\n 118→ 118→ dirs_summary = ", ".join(\n 119→ 119→ f"{d['path']}({d['workflow_count']})" for d in dirs_info if d['exists']\n 120→ 120→ )\n 121→ 121→ logger.info(\n 122→ 122→ f"✓ SemanticMatcher: {len(matcher.get_all_workflows())} workflows "\n 123→ 123→ f"[{dirs_summary}]"\n 124→ 124→ )\n 125→ 125→ except Exception as e:\n 126→ 126→ logger.error(f"✗ SemanticMatcher: {e}")\n 127→ 127→ matcher = None\n 128→ 128→\n 129→ 129→ # 2. GPU Resource Manager\n 130→ 130→ if GPU_AVAILABLE:\n 131→ 131→ try:\n 132→ 132→ gpu_manager = get_gpu_resource_manager()\n 133→ 133→ logger.info("✓ GPU Resource Manager connected")\n 134→ 134→ except Exception as e:\n 135→ 135→ logger.warning(f"⚠ GPU Resource Manager: {e}")\n 136→ 136→ gpu_manager = None\n 137→ 137→\n 138→ 138→ # 3. Composants conversationnels\n 139→ 139→ try:\n 140→ 140→ intent_parser = get_intent_parser(use_llm=True) # LLM activé (Ollama)\n 141→ 141→ confirmation_loop = get_confirmation_loop()\n 142→ 142→ response_generator = get_response_generator()\n 143→ 143→ conversation_manager = get_conversation_manager()\n 144→ 144→\n 145→ 145→ # Injecter les workflows dans l'intent_parser pour contexte LLM\n 146→ 146→ if matcher and intent_parser:\n 147→ 147→ workflows_for_llm = [\n 148→ 148→ {"name": wf.name, "description": wf.description, "tags": wf.tags}\n 149→ 149→ for wf in matcher.get_all_workflows()\n 150→ 150→ ]\n 151→ 151→ intent_parser.set_workflows(workflows_for_llm)\n 152→ 152→\n 153→ 153→ logger.info("✓ Composants conversationnels initialisés (LLM activé)")\n 154→ 154→ except Exception as e:\n 155→ 155→ logger.error(f"✗ Composants conversationnels: {e}")\n 156→ 156→ # Fallback aux composants de base\n 157→ 157→ intent_parser = IntentParser(use_llm=False)\n 158→ 158→ confirmation_loop = ConfirmationLoop()\n 159→ 159→ response_generator = ResponseGenerator()\n 160→ 160→ conversation_manager = ConversationManager()\n 161→ 161→\n 162→ 162→ # 4. 
Composants d'exécution réelle\n 163→ 163→ global workflow_pipeline, action_executor, execution_loop, screen_capturer\n 164→ 164→ if EXECUTION_AVAILABLE:\n 165→ 165→ try:\n 166→ 166→ # Pipeline de workflow (matching + actions)\n 167→ 167→ workflow_pipeline = WorkflowPipeline()\n 168→ 168→ logger.info("✓ WorkflowPipeline initialisé")\n 169→ 169→\n 170→ 170→ # Capture d'écran\n 171→ 171→ screen_capturer = ScreenCapturer()\n 172→ 172→ logger.info("✓ ScreenCapturer initialisé")\n 173→ 173→\n 174→ 174→ # Résolveur de cibles et gestionnaire d'erreurs\n 175→ 175→ target_resolver = TargetResolver()\n 176→ 176→ error_handler = ErrorHandler()\n 177→ 177→\n 178→ 178→ # Exécuteur d'actions\n 179→ 179→ action_executor = ActionExecutor(\n 180→ 180→ target_resolver=target_resolver,\n 181→ 181→ error_handler=error_handler,\n 182→ 182→ verify_postconditions=True\n 183→ 183→ )\n 184→ 184→ logger.info("✓ ActionExecutor initialisé")\n 185→ 185→\n 186→ 186→ # Boucle d'exécution (pour mode automatique)\n 187→ 187→ execution_loop = ExecutionLoop(\n 188→ 188→ pipeline=workflow_pipeline,\n 189→ 189→ action_executor=action_executor,\n 190→ 190→ screen_capturer=screen_capturer\n 191→ 191→ )\n 192→ 192→ logger.info("✓ ExecutionLoop initialisé")\n 193→ 193→\n 194→ 194→ except Exception as e:\n 195→ 195→ logger.warning(f"⚠ Composants d'exécution partiels: {e}")\n 196→ 196→ # Mode dégradé: simulation uniquement\n 197→ 197→ workflow_pipeline = None\n 198→ 198→ action_executor = None\n 199→ 199→ execution_loop = None\n 200→ 200→ else:"}]},"uuid":"4d23fdf7-7fac-49e8-bf9b-bc55d6ce7c68","timestamp":"2026-03-14T21:14:28.330Z","sourceToolAssistantUUID":"b4be59d4-ae9b-40c9-a94d-9501793254ca"} {"parentUuid":"4d23fdf7-7fac-49e8-bf9b-bc55d6ce7c68","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01E4BR695FSRsgCqtQGDUC31","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01X8WDVqsezUXH1vkcskBwyT","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/core/models/workflow_graph.py"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":13624,"cache_read_input_tokens":42317,"cache_creation":{"ephemeral_5m_input_tokens":13624,"ephemeral_1h_input_tokens":0},"output_tokens":1,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fUZQ8qJncoiWbXi5B2","type":"assistant","uuid":"86b034b8-df96-4ad9-8577-80771fab7eee","timestamp":"2026-03-14T21:14:28.691Z"} {"parentUuid":"86b034b8-df96-4ad9-8577-80771fab7eee","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01X8WDVqsezUXH1vkcskBwyT","toolUseID":"toolu_01X8WDVqsezUXH1vkcskBwyT","timestamp":"2026-03-14T21:14:28.919Z","uuid":"33b89291-7cc2-4657-aa5a-da22d51764bc"} 
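
The `_map_states_to_nodes` logic shown earlier assigns each ScreenState to the nearest node prototype; since all vectors are unit-normalized, a plain dot product is the cosine similarity, and states below the 0.7 floor stay unmapped rather than being forced into a node. A standalone sketch of that decision rule follows; the function name and the toy 2-D vectors are local stand-ins, not the project's classes.

```python
from typing import Dict, Optional

import numpy as np

def map_state(state_vector: np.ndarray,
              node_prototypes: Dict[str, np.ndarray],
              threshold: float = 0.7) -> Optional[str]:
    """Return the best-matching node id, or None below the similarity floor."""
    best_node_id, best_similarity = None, -1.0
    for node_id, prototype in node_prototypes.items():
        similarity = float(np.dot(state_vector, prototype))  # cosine for unit vectors
        if similarity > best_similarity:
            best_similarity, best_node_id = similarity, node_id
    return best_node_id if best_similarity > threshold else None

protos = {"node000": np.array([1.0, 0.0]), "node001": np.array([0.0, 1.0])}
print(map_state(np.array([0.9, 0.436]), protos))    # node000 (similarity 0.9)
print(map_state(np.array([0.5, -0.866]), protos))   # None (best similarity 0.5 <= 0.7)
```
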
{"parentUuid":"86b034b8-df96-4ad9-8577-80771fab7eee","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01X8WDVqsezUXH1vkcskBwyT","type":"tool_result","content":"\nOutput too large (51KB). Full output saved to: /home/dom/.claude/projects/-home-dom-ai-rpa-vision-v3/eec8c668-e1f8-4599-9dd5-42170a166ba4/tool-results/toolu_01X8WDVqsezUXH1vkcskBwyT.txt\n\nPreview (first 2KB):\n 1→"""\n 2→Workflow Graph - Couche 4 : Graphes de Workflows Sémantiques\n 3→\n 4→Représente des workflows appris comme des graphes de transitions entre états d'écran.\n 5→Chaque node est un template d'écran, chaque edge est une action sémantique.\n 6→\n 7→Auteur : Dom, Alice Kiro\n 8→Date : 15 décembre 2024\n 9→"""\n 10→\n 11→from dataclasses import dataclass, field\n 12→from typing import Dict, List, Optional, Any, Tuple\n 13→from datetime import datetime\n 14→from pathlib import Path\n 15→import json\n 16→from enum import Enum\n 17→\n 18→\n 19→# ============================================================================\n 20→# Enums et Types\n 21→# ============================================================================\n 22→\n 23→class LearningState(str, Enum):\n 24→ """États d'apprentissage d'un workflow"""\n 25→ OBSERVATION = "OBSERVATION" # Observation passive\n 26→ COACHING = "COACHING" # Suggestions avec confirmation\n 27→ AUTO_CANDIDATE = "AUTO_CANDIDATE" # Exécution auto en test\n 28→ AUTO_CONFIRMED = "AUTO_CONFIRMED" # Exécution auto validée\n 29→\n 30→\n 31→class ActionType(str, Enum):\n 32→ """Types d'actions possibles"""\n 33→ MOUSE_CLICK = "mouse_click"\n 34→ TEXT_INPUT = "text_input"\n 35→ KEY_PRESS = "key_press"\n 36→ COMPOUND = "compound" # Séquence d'actions\n 37→\n 38→\n 39→class SelectionPolicy(str, Enum):\n 40→ """Politiques de sélection d'éléments UI"""\n 41→ FIRST = "first"\n 42→ LAST = "last"\n 43→ BY_SIMILARITY = "by_similarity"\n 44→ BY_POSITION = "by_position"\n 45→\n 46→\n 47→# ============================================================================\n 48→# Classes de Support\n 49→# ============================================================================\n 50→\n 51→@dataclass\n 52→class WindowConstraint:\n 53→ """Contrainte sur la fenêtre active"""\n...\n"}]},"uuid":"ee1a0f2b-bf62-489f-ad20-df7e84a3ce4b","timestamp":"2026-03-14T21:14:28.918Z","sourceToolAssistantUUID":"86b034b8-df96-4ad9-8577-80771fab7eee"} 
{"parentUuid":"ee1a0f2b-bf62-489f-ad20-df7e84a3ce4b","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01E4BR695FSRsgCqtQGDUC31","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01BY4fJMFkEofT5DjaGkduho","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/core/pipeline/screen_analyzer.py"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":13624,"cache_read_input_tokens":42317,"output_tokens":290,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":13624},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fUZQ8qJncoiWbXi5B2","type":"assistant","uuid":"f8385088-af9a-4979-8239-d835ae08618b","timestamp":"2026-03-14T21:14:29.232Z"} {"parentUuid":"f8385088-af9a-4979-8239-d835ae08618b","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01BY4fJMFkEofT5DjaGkduho","toolUseID":"toolu_01BY4fJMFkEofT5DjaGkduho","timestamp":"2026-03-14T21:14:29.236Z","uuid":"c71abe1e-7333-4fa4-85d2-9458e11e3e61"} {"parentUuid":"f8385088-af9a-4979-8239-d835ae08618b","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01BY4fJMFkEofT5DjaGkduho","type":"tool_result","content":" 1→"""\n 2→ScreenAnalyzer - Construction complète d'un ScreenState depuis un screenshot\n 3→\n 4→Orchestre les 4 niveaux du ScreenState :\n 5→ Niveau 1 (Raw) : métadonnées de l'image\n 6→ Niveau 2 (Perception): OCR + embedding global\n 7→ Niveau 3 (UI) : détection d'éléments UI\n 8→ Niveau 4 (Contexte) : fenêtre active, workflow en cours\n 9→\n 10→Ce module comble le chaînon manquant entre la capture brute (Couche 0)\n 11→et la construction d'embeddings (Couche 3).\n 12→"""\n 13→\n 14→import logging\n 15→import os\n 16→from datetime import datetime\n 17→from pathlib import Path\n 18→from typing import Optional, Dict, Any, List\n 19→\n 20→from PIL import Image\n 21→\n 22→from core.models.screen_state import (\n 23→ ScreenState,\n 24→ RawLevel,\n 25→ PerceptionLevel,\n 26→ ContextLevel,\n 27→ WindowContext,\n 28→ EmbeddingRef,\n 29→)\n 30→from core.models.ui_element import UIElement\n 31→\n 32→logger = logging.getLogger(name)\n 33→\n 34→\n 35→class ScreenAnalyzer:\n 36→ """\n 37→ Construit un ScreenState complet (4 niveaux) depuis un screenshot.\n 38→\n 39→ Utilise le UIDetector pour la détection d'éléments et un OCR\n 40→ (docTR ou Tesseract) pour l'extraction de texte.\n 41→\n 42→ Example:\n 43→ >>> analyzer = ScreenAnalyzer()\n 44→ >>> state = analyzer.analyze("/path/to/screenshot.png")\n 45→ >>> print(state.perception.detected_text)\n 46→ >>> print(len(state.ui_elements))\n 47→ 
"""\n 48→\n 49→ def init(\n 50→ self,\n 51→ ui_detector=None,\n 52→ ocr_engine: Optional[str] = None,\n 53→ session_id: str = "",\n 54→ ):\n 55→ """\n 56→ Args:\n 57→ ui_detector: Instance de UIDetector (créé si None)\n 58→ ocr_engine: Moteur OCR à utiliser ("doctr", "tesseract", None=auto)\n 59→ session_id: ID de la session en cours\n 60→ """\n 61→ self._ui_detector = ui_detector\n 62→ self._ocr_engine_name = ocr_engine\n 63→ self._ocr = None\n 64→ self.session_id = session_id\n 65→ self._state_counter = 0\n 66→\n 67→ # Initialisation lazy pour éviter les imports lourds au démarrage\n 68→ self._ui_detector_initialized = ui_detector is not None\n 69→ self._ocr_initialized = False\n 70→\n 71→ # =========================================================================\n 72→ # API publique\n 73→ # =========================================================================\n 74→\n 75→ def analyze(\n 76→ self,\n 77→ screenshot_path: str,\n 78→ window_info: Optional[Dict[str, Any]] = None,\n 79→ context: Optional[Dict[str, Any]] = None,\n 80→ ) -> ScreenState:\n 81→ """\n 82→ Analyser un screenshot et construire un ScreenState complet.\n 83→\n 84→ Args:\n 85→ screenshot_path: Chemin vers le fichier image\n 86→ window_info: Infos fenêtre active {"title": ..., "app_name": ...}\n 87→ context: Contexte métier optionnel\n 88→\n 89→ Returns:\n 90→ ScreenState avec les 4 niveaux remplis\n 91→ """\n 92→ screenshot_path = str(screenshot_path)\n 93→ self._state_counter += 1\n 94→\n 95→ state_id = f"{self.session_id}state{self.state_counter:04d}" if self.session_id else f"state{self._state_counter:04d}"\n 96→\n 97→ # Niveau 1 : Raw\n 98→ raw = self._build_raw_level(screenshot_path)\n 99→\n 100→ # Niveau 2 : Perception (OCR)\n 101→ detected_text = self._extract_text(screenshot_path)\n 102→ perception = PerceptionLevel(\n 103→ embedding=EmbeddingRef(\n 104→ provider="openclip_ViT-B-32",\n 105→ vector_id=f"data/embeddings/screens/{state_id}.npy",\n 106→ dimensions=512,\n 107→ ),\n 108→ detected_text=detected_text,\n 109→ text_detection_method=self._get_ocr_method_name(),\n 110→ confidence_avg=0.85 if detected_text else 0.0,\n 111→ )\n 112→\n 113→ # Niveau 3 : UI Elements\n 114→ ui_elements = self.detect_ui_elements(screenshot_path, window_info)\n 115→\n 116→ # Niveau 4 : Contexte\n 117→ window_ctx = self.build_window_context(window_info)\n 118→ context_level = self.build_context_level(context)\n 119→\n 120→ state = ScreenState(\n 121→ screen_state_id=state_id,\n 122→ timestamp=datetime.now(),\n 123→ session_id=self.session_id,\n 124→ window=window_ctx,\n 125→ raw=raw,\n 126→ perception=perception,\n 127→ context=context_level,\n 128→ metadata={\n 129→ "analyzer_version": "1.0",\n 130→ "ui_elements_count": len(ui_elements),\n 131→ "text_regions_count": len(detected_text),\n 132→ },\n 133→ ui_elements=ui_elements,\n 134→ )\n 135→\n 136→ logger.info(\n 137→ f"ScreenState {state_id} construit: "\n 138→ f"{len(ui_elements)} éléments UI, {len(detected_text)} textes détectés"\n 139→ )\n 140→ return state\n 141→\n 142→ def analyze_image(\n 143→ self,\n 144→ image: Image.Image,\n 145→ save_dir: str = "data/screens",\n 146→ window_info: Optional[Dict[str, Any]] = None,\n 147→ context: Optional[Dict[str, Any]] = None,\n 148→ ) -> ScreenState:\n 149→ """\n 150→ Analyser une PIL Image (utile quand on a déjà l'image en mémoire).\n 151→\n 152→ Sauvegarde l'image sur disque puis appelle analyze().\n 153→ """\n 154→ save_path = Path(save_dir)\n 155→ save_path.mkdir(parents=True, exist_ok=True)\n 156→\n 157→ timestamp = 
datetime.now().strftime("%Y%m%d%H%M%S%f")\n 158→ filename = f"screen{timestamp}.png"\n 159→ filepath = save_path / filename\n 160→\n 161→ image.save(str(filepath))\n 162→ return self.analyze(str(filepath), window_info=window_info, context=context)\n 163→\n 164→ # =========================================================================\n 165→ # Niveau 1 : Raw\n 166→ # =========================================================================\n 167→\n 168→ def _build_raw_level(self, screenshot_path: str) -> RawLevel:\n 169→ file_size = 0\n 170→ try:\n 171→ file_size = os.path.getsize(screenshot_path)\n 172→ except OSError:\n 173→ pass\n 174→\n 175→ return RawLevel(\n 176→ screenshot_path=screenshot_path,\n 177→ capture_method="mss",\n 178→ file_size_bytes=file_size,\n 179→ )\n 180→\n 181→ # =========================================================================\n 182→ # Niveau 2 : Perception — OCR\n 183→ # =========================================================================\n 184→\n 185→ def _extract_text(self, screenshot_path: str) -> List[str]:\n 186→ """Extraire le texte d'un screenshot via OCR."""\n 187→ self._ensure_ocr()\n 188→\n 189→ if self._ocr is None:\n 190→ return []\n 191→\n 192→ try:\n 193→ return self._ocr(screenshot_path)\n 194→ except Exception as e:\n 195→ logger.warning(f"OCR échoué: {e}")\n 196→ return []\n 197→\n 198→ def _ensure_ocr(self) -> None:\n 199→ """Initialiser le moteur OCR (lazy)."""\n 200→ if self._ocr_initialized:\n 201→ return\n 202→ self._ocr_initialized = True\n 203→\n 204→ engine = self._ocr_engine_name\n 205→\n 206→ # Auto-détection : essayer docTR puis Tesseract\n 207→ if engine is None or engine == "doctr":\n 208→ try:\n 209→ self._ocr = self._create_doctr_ocr()\n 210→ logger.info("OCR initialisé avec docTR")\n 211→ return\n 212→ except Exception as e:\n 213→ if engine == "doctr":\n 214→ logger.warning(f"docTR non disponible: {e}")\n 215→ return\n 216→\n 217→ if engine is None or engine == "tesseract":\n 218→ try:\n 219→ self._ocr = self._create_tesseract_ocr()\n 220→ logger.info("OCR initialisé avec Tesseract")\n 221→ return\n 222→ except Exception as e:\n 223→ logger.warning(f"Tesseract non disponible: {e}")\n 224→\n 225→ logger.warning("Aucun moteur OCR disponible — detected_text sera vide")\n 226→\n 227→ def _create_doctr_ocr(self):\n 228→ """Créer une fonction OCR basée sur docTR."""\n 229→ from doctr.io import DocumentFile\n 230→ from doctr.models import ocr_predictor\n 231→\n 232→ predictor = ocr_predictor(det_arch="db_resnet50", reco_arch="crnn_vgg16_bn", pretrained=True)\n 233→\n 234→ def ocr_func(image_path: str) -> List[str]:\n 235→ doc = DocumentFile.from_images(image_path)\n 236→ result = predictor(doc)\n 237→ texts = []\n 238→ for page in result.pages:\n 239→ for block in page.blocks:\n 240→ for line in block.lines:\n 241→ line_text = " ".join(word.value for word in line.words)\n 242→ if line_text.strip():\n 243→ texts.append(line_text.strip())\n 244→ return texts\n 245→\n 246→ return ocr_func\n 247→\n 248→ def _create_tesseract_ocr(self):\n 249→ """Créer une fonction OCR basée sur Tesseract."""\n 250→ import pytesseract\n 251→\n 252→ def ocr_func(image_path: str) -> List[str]:\n 253→ img = Image.open(image_path)\n 254→ raw_text = pytesseract.image_to_string(img, lang="fra+eng")\n 255→ lines = [line.strip() for line in raw_text.split("\n") if line.strip()]\n 256→ return lines\n 257→\n 258→ return ocr_func\n 259→\n 260→ def _get_ocr_method_name(self) -> str:\n 261→ if self._ocr is None:\n 262→ return "none"\n 263→ if 
self._ocr_engine_name:\n 264→ return self._ocr_engine_name\n 265→ return "doctr"\n 266→\n 267→ # =========================================================================\n 268→ # Niveau 3 : UI Elements\n 269→ # =========================================================================\n 270→\n 271→ def _detect_ui_elements(\n 272→ self,\n 273→ screenshot_path: str,\n 274→ window_info: Optional[Dict[str, Any]] = None,\n 275→ ) -> List[UIElement]:\n 276→ """Détecter les éléments UI dans le screenshot."""\n 277→ self._ensure_ui_detector()\n 278→\n 279→ if self._ui_detector is None:\n 280→ return []\n 281→\n 282→ try:\n 283→ elements = self._ui_detector.detect(\n 284→ screenshot_path, window_context=window_info\n 285→ )\n 286→ return elements\n 287→ except Exception as e:\n 288→ logger.warning(f"Détection UI échouée: {e}")\n 289→ return []\n 290→\n 291→ def _ensure_ui_detector(self) -> None:\n 292→ """Initialiser le UIDetector (lazy)."""\n 293→ if self._ui_detector_initialized:\n 294→ return\n 295→ self._ui_detector_initialized = True\n 296→\n 297→ try:\n 298→ from core.detection.ui_detector import UIDetector, DetectionConfig\n 299→\n 300→ config = DetectionConfig(\n 301→ use_owl_detection=False, # Désactiver OWL par défaut (lourd)\n 302→ use_vlm_classification=True,\n 303→ confidence_threshold=0.6,\n 304→ )\n 305→ self._ui_detector = UIDetector(config)\n 306→ logger.info("UIDetector initialisé")\n 307→ except Exception as e:\n 308→ logger.warning(f"UIDetector non disponible: {e}")\n 309→ self._ui_detector = None\n 310→\n 311→ # =========================================================================\n 312→ # Niveau 4 : Contexte\n 313→ # =========================================================================\n 314→\n 315→ def _build_window_context(\n 316→ self, window_info: Optional[Dict[str, Any]] = None\n 317→ ) -> WindowContext:\n 318→ if window_info:\n 319→ return WindowContext(\n 320→ app_name=window_info.get("app_name", "unknown"),\n 321→ window_title=window_info.get("title", "Unknown"),\n 322→ screen_resolution=window_info.get("screen_resolution", [1920, 1080]),\n 323→ workspace=window_info.get("workspace", "main"),\n 324→ )\n 325→ return WindowContext(\n 326→ app_name="unknown",\n 327→ window_title="Unknown",\n 328→ screen_resolution=[1920, 1080],\n 329→ workspace="main",\n 330→ )\n 331→\n 332→ def _build_context_level(\n 333→ self, context: Optional[Dict[str, Any]] = None\n 334→ ) -> ContextLevel:\n 335→ if context:\n 336→ return ContextLevel(\n 337→ current_workflow_candidate=context.get("workflow_candidate"),\n 338→ workflow_step=context.get("workflow_step"),\n 339→ user_id=context.get("user_id", ""),\n 340→ tags=context.get("tags", []),\n 341→ business_variables=context.get("business_variables", {}),\n 342→ )\n 343→ return ContextLevel()\n 344→"}]},"uuid":"42e8800d-8a2a-47ad-bafd-8f5638a40ea6","timestamp":"2026-03-14T21:14:29.235Z","sourceToolAssistantUUID":"f8385088-af9a-4979-8239-d835ae08618b"} 
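
A usage sketch for the ScreenAnalyzer just shown: capture the screen with mss, hand the PIL image to analyze_image(), which persists it and fills the four ScreenState levels; OCR falls back docTR, then Tesseract, then empty text, so the call degrades rather than fails. The mss-to-PIL conversion is the standard recipe and an assumption here, as is the window_info payload.

```python
import mss
from PIL import Image

from core.pipeline.screen_analyzer import ScreenAnalyzer

analyzer = ScreenAnalyzer(ocr_engine=None, session_id="demo")  # None = auto: docTR, then Tesseract

with mss.mss() as sct:
    shot = sct.grab(sct.monitors[1])  # primary monitor
    image = Image.frombytes("RGB", shot.size, shot.bgra, "raw", "BGRX")

state = analyzer.analyze_image(
    image,
    save_dir="data/screens",
    window_info={"title": "DPI - Dossier 42", "app_name": "dpi"},  # hypothetical window
)
print(state.screen_state_id, len(state.ui_elements), state.perception.detected_text[:3])
```
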
{"parentUuid":"42e8800d-8a2a-47ad-bafd-8f5638a40ea6","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01FrpLjJqb79PrZpd3QUj5gi","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01AwBh7WHUpEG7p7mcNnXamQ","name":"Read","input":{"file_path":"/home/dom/.claude/projects/-home-dom-ai-rpa-vision-v3/eec8c668-e1f8-4599-9dd5-42170a166ba4/tool-results/toolu_01X8WDVqsezUXH1vkcskBwyT.txt","offset":1,"limit":250},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":8597,"cache_read_input_tokens":55941,"cache_creation":{"ephemeral_5m_input_tokens":8597,"ephemeral_1h_input_tokens":0},"output_tokens":25,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fUxpJ9FSvvFEnLaDco","type":"assistant","uuid":"43d78478-d3c5-426f-adfc-162c756b10f1","timestamp":"2026-03-14T21:14:37.026Z"} {"parentUuid":"43d78478-d3c5-426f-adfc-162c756b10f1","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01AwBh7WHUpEG7p7mcNnXamQ","toolUseID":"toolu_01AwBh7WHUpEG7p7mcNnXamQ","timestamp":"2026-03-14T21:14:37.032Z","uuid":"ae195a76-1d73-4668-a39b-00ebab4be778"} {"parentUuid":"43d78478-d3c5-426f-adfc-162c756b10f1","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01AwBh7WHUpEG7p7mcNnXamQ","type":"tool_result","content":" 1→ 1→"""\n 2→ 2→Workflow Graph - Couche 4 : Graphes de Workflows Sémantiques\n 3→ 3→\n 4→ 4→Représente des workflows appris comme des graphes de transitions entre états d'écran.\n 5→ 5→Chaque node est un template d'écran, chaque edge est une action sémantique.\n 6→ 6→\n 7→ 7→Auteur : Dom, Alice Kiro\n 8→ 8→Date : 15 décembre 2024\n 9→ 9→"""\n 10→ 10→\n 11→ 11→from dataclasses import dataclass, field\n 12→ 12→from typing import Dict, List, Optional, Any, Tuple\n 13→ 13→from datetime import datetime\n 14→ 14→from pathlib import Path\n 15→ 15→import json\n 16→ 16→from enum import Enum\n 17→ 17→\n 18→ 18→\n 19→ 19→# ============================================================================\n 20→ 20→# Enums et Types\n 21→ 21→# ============================================================================\n 22→ 22→\n 23→ 23→class LearningState(str, Enum):\n 24→ 24→ """États d'apprentissage d'un workflow"""\n 25→ 25→ OBSERVATION = "OBSERVATION" # Observation passive\n 26→ 26→ COACHING = "COACHING" # Suggestions avec confirmation\n 27→ 27→ AUTO_CANDIDATE = "AUTO_CANDIDATE" # Exécution auto en test\n 28→ 28→ AUTO_CONFIRMED = "AUTO_CONFIRMED" # Exécution auto validée\n 29→ 29→\n 30→ 30→\n 31→ 31→class ActionType(str, Enum):\n 32→ 32→ """Types d'actions possibles"""\n 33→ 33→ MOUSE_CLICK = "mouse_click"\n 34→ 34→ TEXT_INPUT = "text_input"\n 35→ 35→ KEY_PRESS = "key_press"\n 36→ 36→ COMPOUND 
= "compound" # Séquence d'actions\n 37→ 37→\n 38→ 38→\n 39→ 39→class SelectionPolicy(str, Enum):\n 40→ 40→ """Politiques de sélection d'éléments UI"""\n 41→ 41→ FIRST = "first"\n 42→ 42→ LAST = "last"\n 43→ 43→ BY_SIMILARITY = "by_similarity"\n 44→ 44→ BY_POSITION = "by_position"\n 45→ 45→\n 46→ 46→\n 47→ 47→# ============================================================================\n 48→ 48→# Classes de Support\n 49→ 49→# ============================================================================\n 50→ 50→\n 51→ 51→@dataclass\n 52→ 52→class WindowConstraint:\n 53→ 53→ """Contrainte sur la fenêtre active"""\n 54→ 54→ title_pattern: Optional[str] = None\n 55→ 55→ title_contains: Optional[str] = None\n 56→ 56→ process_name: Optional[str] = None\n 57→ 57→ \n 58→ 58→ def matches(self, window_title: str, process: str) -> bool:\n 59→ 59→ """Vérifier si la fenêtre correspond aux contraintes"""\n 60→ 60→ if self.title_contains and self.title_contains not in window_title:\n 61→ 61→ return False\n 62→ 62→ if self.process_name and self.process_name != process:\n 63→ 63→ return False\n 64→ 64→ return True\n 65→ 65→ \n 66→ 66→ def to_dict(self) -> Dict[str, Any]:\n 67→ 67→ return {\n 68→ 68→ "title_pattern": self.title_pattern,\n 69→ 69→ "title_contains": self.title_contains,\n 70→ 70→ "process_name": self.process_name\n 71→ 71→ }\n 72→ 72→ \n 73→ 73→ @classmethod\n 74→ 74→ def from_dict(cls, data: Dict[str, Any]) -> 'WindowConstraint':\n 75→ 75→ return cls(\n 76→ 76→ title_pattern=data.get("title_pattern"),\n 77→ 77→ title_contains=data.get("title_contains"),\n 78→ 78→ process_name=data.get("process_name")\n 79→ 79→ )\n 80→ 80→\n 81→ 81→\n 82→ 82→@dataclass\n 83→ 83→class TextConstraint:\n 84→ 84→ """Contrainte sur le texte présent à l'écran"""\n 85→ 85→ required_texts: List[str] = field(default_factory=list)\n 86→ 86→ forbidden_texts: List[str] = field(default_factory=list)\n 87→ 87→ \n 88→ 88→ def matches(self, detected_texts: List[str]) -> bool:\n 89→ 89→ """Vérifier si les textes correspondent aux contraintes"""\n 90→ 90→ # Vérifier textes requis\n 91→ 91→ for required in self.required_texts:\n 92→ 92→ if not any(required in text for text in detected_texts):\n 93→ 93→ return False\n 94→ 94→ \n 95→ 95→ # Vérifier textes interdits\n 96→ 96→ for forbidden in self.forbidden_texts:\n 97→ 97→ if any(forbidden in text for text in detected_texts):\n 98→ 98→ return False\n 99→ 99→ \n 100→ 100→ return True\n 101→ 101→ \n 102→ 102→ def to_dict(self) -> Dict[str, Any]:\n 103→ 103→ return {\n 104→ 104→ "required_texts": self.required_texts,\n 105→ 105→ "forbidden_texts": self.forbidden_texts\n 106→ 106→ }\n 107→ 107→ \n 108→ 108→ @classmethod\n 109→ 109→ def from_dict(cls, data: Dict[str, Any]) -> 'TextConstraint':\n 110→ 110→ return cls(\n 111→ 111→ required_texts=data.get("required_texts", []),\n 112→ 112→ forbidden_texts=data.get("forbidden_texts", [])\n 113→ 113→ )\n 114→ 114→\n 115→ 115→\n 116→ 116→@dataclass\n 117→ 117→class UIConstraint:\n 118→ 118→ """Contrainte sur les éléments UI présents"""\n 119→ 119→ required_roles: List[str] = field(default_factory=list)\n 120→ 120→ required_types: List[str] = field(default_factory=list)\n 121→ 121→ min_element_count: int = 0\n 122→ 122→ \n 123→ 123→ def matches(self, ui_elements: List[Any]) -> bool:\n 124→ 124→ """Vérifier si les éléments UI correspondent aux contraintes"""\n 125→ 125→ if len(ui_elements) < self.min_element_count:\n 126→ 126→ return False\n 127→ 127→ \n 128→ 128→ # Vérifier rôles requis\n 129→ 129→ element_roles = [el.role for el in 
ui_elements if hasattr(el, 'role')]\n 130→ 130→ for required_role in self.required_roles:\n 131→ 131→ if required_role not in element_roles:\n 132→ 132→ return False\n 133→ 133→ \n 134→ 134→ # Vérifier types requis\n 135→ 135→ element_types = [el.type for el in ui_elements if hasattr(el, 'type')]\n 136→ 136→ for required_type in self.required_types:\n 137→ 137→ if required_type not in element_types:\n 138→ 138→ return False\n 139→ 139→ \n 140→ 140→ return True\n 141→ 141→ \n 142→ 142→ def to_dict(self) -> Dict[str, Any]:\n 143→ 143→ return {\n 144→ 144→ "required_roles": self.required_roles,\n 145→ 145→ "required_types": self.required_types,\n 146→ 146→ "min_element_count": self.min_element_count\n 147→ 147→ }\n 148→ 148→ \n 149→ 149→ @classmethod\n 150→ 150→ def from_dict(cls, data: Dict[str, Any]) -> 'UIConstraint':\n 151→ 151→ return cls(\n 152→ 152→ required_roles=data.get("required_roles", []),\n 153→ 153→ required_types=data.get("required_types", []),\n 154→ 154→ min_element_count=data.get("min_element_count", 0)\n 155→ 155→ )\n 156→ 156→\n 157→ 157→\n 158→ 158→@dataclass\n 159→ 159→class EmbeddingPrototype:\n 160→ 160→ """Prototype d'embedding pour un node"""\n 161→ 161→ provider: str # e.g., "openclip_ViT-B-32"\n 162→ 162→ vector_id: str # Chemin vers .npy du prototype\n 163→ 163→ min_cosine_similarity: float # Seuil de similarité\n 164→ 164→ sample_count: int # Nombre d'échantillons utilisés\n 165→ 165→ \n 166→ 166→ def to_dict(self) -> Dict[str, Any]:\n 167→ 167→ return {\n 168→ 168→ "provider": self.provider,\n 169→ 169→ "vector_id": self.vector_id,\n 170→ 170→ "min_cosine_similarity": self.min_cosine_similarity,\n 171→ 171→ "sample_count": self.sample_count\n 172→ 172→ }\n 173→ 173→ \n 174→ 174→ @classmethod\n 175→ 175→ def from_dict(cls, data: Dict[str, Any]) -> 'EmbeddingPrototype':\n 176→ 176→ return cls(\n 177→ 177→ provider=data["provider"],\n 178→ 178→ vector_id=data["vector_id"],\n 179→ 179→ min_cosine_similarity=data["min_cosine_similarity"],\n 180→ 180→ sample_count=data["sample_count"]\n 181→ 181→ )\n 182→ 182→\n 183→ 183→\n 184→ 184→@dataclass\n 185→ 185→class NodeVariant:\n 186→ 186→ """Variante d'un node de workflow (pour gérer les variations UI)"""\n 187→ 187→ variant_id: str\n 188→ 188→ embedding_path: str # Chemin vers l'embedding .npy\n 189→ 189→ similarity_to_primary: float # Similarité avec le prototype principal\n 190→ 190→ observation_count: int = 1\n 191→ 191→ success_rate: float = 0.0\n 192→ 192→ ui_state: Optional[str] = None # enabled, disabled, checked, loading, error\n 193→ 193→ created_at: datetime = field(default_factory=datetime.now)\n 194→ 194→ metadata: Dict[str, Any] = field(default_factory=dict)\n 195→ 195→ \n 196→ 196→ def to_dict(self) -> Dict[str, Any]:\n 197→ 197→ return {\n 198→ 198→ "variant_id": self.variant_id,\n 199→ 199→ "embedding_path": self.embedding_path,\n 200→ 200→ "similarity_to_primary": self.similarity_to_primary,\n 201→ 201→ "observation_count": self.observation_count,\n 202→ 202→ "success_rate": self.success_rate,\n 203→ 203→ "ui_state": self.ui_state,\n 204→ 204→ "created_at": self.created_at.isoformat(),\n 205→ 205→ "metadata": self.metadata\n 206→ 206→ }\n 207→ 207→ \n 208→ 208→ @classmethod\n 209→ 209→ def from_dict(cls, data: Dict[str, Any]) -> 'NodeVariant':\n 210→ 210→ created_at = datetime.now()\n 211→ 211→ if data.get("created_at"):\n 212→ 212→ created_at = datetime.fromisoformat(data["created_at"])\n 213→ 213→ return cls(\n 214→ 214→ variant_id=data["variant_id"],\n 215→ 215→ 
embedding_path=data["embedding_path"],\n 216→ 216→ similarity_to_primary=data["similarity_to_primary"],\n 217→ 217→ observation_count=data.get("observation_count", 1),\n 218→ 218→ success_rate=data.get("success_rate", 0.0),\n 219→ 219→ ui_state=data.get("ui_state"),\n 220→ 220→ created_at=created_at,\n 221→ 221→ metadata=data.get("metadata", {})\n 222→ 222→ )\n 223→ 223→\n 224→ 224→\n 225→ 225→@dataclass\n 226→ 226→class PrototypeVersion:\n 227→ 227→ """Version d'un prototype pour le versioning"""\n 228→ 228→ version_id: str\n 229→ 229→ embedding_path: str\n 230→ 230→ created_at: datetime\n 231→ 231→ reason: str # "initial", "drift_update", "manual_update", "consolidation"\n 232→ 232→ confidence_at_creation: float = 0.0\n 233→ 233→ \n 234→ 234→ def to_dict(self) -> Dict[str, Any]:\n 235→ 235→ return {\n 236→ 236→ "version_id": self.version_id,\n 237→ 237→ "embedding_path": self.embedding_path,\n 238→ 238→ "created_at": self.created_at.isoformat(),\n 239→ 239→ "reason": self.reason,\n 240→ 240→ "confidence_at_creation": self.confidence_at_creation\n 241→ 241→ }\n 242→ 242→ \n 243→ 243→ @classmethod\n 244→ 244→ def from_dict(cls, data: Dict[str, Any]) -> 'PrototypeVersion':\n 245→ 245→ return cls(\n 246→ 246→ version_id=data["version_id"],\n 247→ 247→ embedding_path=data["embedding_path"],\n 248→ 248→ created_at=datetime.fromisoformat(data["created_at"]),\n 249→ 249→ reason=data.get("reason", "unknown"),\n 250→ 250→ confidence_at_creation=data.get("confidence_at_creation", 0.0)"}]},"uuid":"42af3b85-3c1a-46f9-88dd-31da2be06090","timestamp":"2026-03-14T21:14:37.032Z","sourceToolAssistantUUID":"43d78478-d3c5-426f-adfc-162c756b10f1"} {"parentUuid":"42af3b85-3c1a-46f9-88dd-31da2be06090","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01FrpLjJqb79PrZpd3QUj5gi","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01M9kzvhjvHyF57gXp6ozA6c","name":"Glob","input":{"pattern":"/data/training/workflows/*.json","path":"/home/dom/ai/rpa_vision_v3"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":8597,"cache_read_input_tokens":55941,"cache_creation":{"ephemeral_5m_input_tokens":8597,"ephemeral_1h_input_tokens":0},"output_tokens":25,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fUxpJ9FSvvFEnLaDco","type":"assistant","uuid":"088defb1-bc27-4905-996f-d1c7f00df84e","timestamp":"2026-03-14T21:14:37.876Z"} 
{"parentUuid":"088defb1-bc27-4905-996f-d1c7f00df84e","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01FrpLjJqb79PrZpd3QUj5gi","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_011A8eLFwH3zeU3QBxknGD6M","name":"Glob","input":{"pattern":"/data/workflows/.json","path":"/home/dom/ai/rpa_vision_v3"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":8597,"cache_read_input_tokens":55941,"output_tokens":306,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":8597},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fUxpJ9FSvvFEnLaDco","type":"assistant","uuid":"909c9518-3d9d-4e30-bf24-ad16dcf2e6c6","timestamp":"2026-03-14T21:14:38.049Z"} {"parentUuid":"909c9518-3d9d-4e30-bf24-ad16dcf2e6c6","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Glob","command":"callback"},"parentToolUseID":"toolu_01M9kzvhjvHyF57gXp6ozA6c","toolUseID":"toolu_01M9kzvhjvHyF57gXp6ozA6c","timestamp":"2026-03-14T21:14:38.297Z","uuid":"8d4c6b40-0ed6-44bc-8065-e24d1cd8122d"} {"parentUuid":"088defb1-bc27-4905-996f-d1c7f00df84e","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01M9kzvhjvHyF57gXp6ozA6c","type":"tool_result","content":"/home/dom/ai/rpa_vision_v3/data/training/workflows/demo_notepad.json\n/home/dom/ai/rpa_vision_v3/data/training/workflows/demo_calculator.json"}]},"uuid":"321a5fdb-49c7-4e17-9471-18f8b4de4488","timestamp":"2026-03-14T21:14:38.296Z","sourceToolAssistantUUID":"088defb1-bc27-4905-996f-d1c7f00df84e"} {"parentUuid":"321a5fdb-49c7-4e17-9471-18f8b4de4488","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Glob","command":"callback"},"parentToolUseID":"toolu_011A8eLFwH3zeU3QBxknGD6M","toolUseID":"toolu_011A8eLFwH3zeU3QBxknGD6M","timestamp":"2026-03-14T21:14:38.457Z","uuid":"a8f7dbb4-fd9f-439b-912e-dfae465ab97f"} 
{"parentUuid":"909c9518-3d9d-4e30-bf24-ad16dcf2e6c6","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_011A8eLFwH3zeU3QBxknGD6M","type":"tool_result","content":"/home/dom/ai/rpa_vision_v3/data/workflows/facturation_client.json\n/home/dom/ai/rpa_vision_v3/data/workflows/export_rapport.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_04ee1c6d9d62.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_81ce5ea46c3f.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_ad7ccf92ca57.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_04ee1c6d9d62.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_ad7ccf92ca57.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_201335e7f498.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_b40df255f3c7.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_201335e7f498.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_b40df255f3c7.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_d6c3d79a9f3b.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_46a49a8014d9.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_46a49a8014d9.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_9c8ce0d625a4.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_f9605ef7334b.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_9c8ce0d625a4.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_f9605ef7334b.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_03f5cce5e4ff.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_1e7cd99722b9.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_54c765497489.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_54c765497489.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_03f5cce5e4ff.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_0b5852737d63.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_bb224c0d2ada.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_6431e27d58dd.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows
/wf_bb224c0d2ada.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_6431e27d58dd.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_6065a7e99355.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_6065a7e99355.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_fcb0cfdd419b.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_fcb0cfdd419b.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_e0406995738f.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_e0406995738f.json\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/visual_workflow_builder/backend/data/workflows/wf_90ecda5de0ed.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_90ecda5de0ed.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_2c0aca849e14.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_363cd6de262b.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_820b2a9e6270.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_3a1a6f402335.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_b86cac3286b3.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_000213.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_004101.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_101425.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_101626.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_130454.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_132430.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260113_234050.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_2c0aca849e14.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_363cd6de262b.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_820b2a9e6270.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_90ecda5de0ed.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_6065a7e99355.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_fcb0cfdd419b.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_e0406995738f.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_04ee1c6d9d62.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_201335e7f498.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_81ce5ea46c3f.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_ad7ccf92ca57.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups
_14jan2026_learning/backend/data/workflows/wf_b40df255f3c7.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_d6c3d79a9f3b.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_46a49a8014d9.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_9c8ce0d625a4.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_f9605ef7334b.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_1e7cd99722b9.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_54c765497489.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_03f5cce5e4ff.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/_a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_0b5852737d63.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_bb224c0d2ada.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/a_trier/backups/backups_14jan2026_learning/backend/data/workflows/wf_6431e27d58dd.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_211419.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_211536.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_211756.json\n/home/dom/ai/rpa_vision_v3/data/workflows/wf_20260114_211833.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_3a1a6f402335.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_b86cac3286b3.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_211536.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/facturation_client.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_130454.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/export_rapport.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_101425.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_211419.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_101626.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_132430.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_211833.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_004101.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260113_234050.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_211756.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_20260114_000213.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_d6c3d79a9f3b.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_81ce5ea46c3f.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/wf_1e7cd99722b9.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/da
ta/workflows/ae2ae57b-afe5-479f-8ee9-5858346322c7.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/f4d8a1fd-d6aa-4cbd-a589-35bdb8be1644.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/5a8191ca-b095-4dd4-9c89-0f009c59124e.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/b8f0a3af-ba54-4194-958c-9e412c80ec85.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/31d2a6e7-2b3a-48ef-8123-ec134fea922f.json\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/data/workflows/2d5484c7-077d-4e1e-b789-33ebf63bc3c5.json"}]},"uuid":"d26ab6ac-cae5-4afc-b3bc-0871976e2f3f","timestamp":"2026-03-14T21:14:38.457Z","sourceToolAssistantUUID":"909c9518-3d9d-4e30-bf24-ad16dcf2e6c6"} {"parentUuid":"d26ab6ac-cae5-4afc-b3bc-0871976e2f3f","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01J3i27LT9oGKvqrkVUTBBLN","type":"message","role":"assistant","content":[{"type":"text","text":"Now let me look at an existing workflow JSON to understand the format, the workflow_graph model more deeply, and the autonomous planner."}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":9111,"cache_read_input_tokens":64538,"cache_creation":{"ephemeral_5m_input_tokens":9111,"ephemeral_1h_input_tokens":0},"output_tokens":1,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fVe9AsfhmFbwzYEdsj","type":"assistant","uuid":"4123e1cc-434f-4187-9d2c-a6fbea140b29","timestamp":"2026-03-14T21:14:42.201Z"} {"parentUuid":"4123e1cc-434f-4187-9d2c-a6fbea140b29","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01J3i27LT9oGKvqrkVUTBBLN","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01DJqCSzvwByvsip3iL9mBKJ","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/data/workflows/facturation_client.json"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":9111,"cache_read_input_tokens":64538,"cache_creation":{"ephemeral_5m_input_tokens":9111,"ephemeral_1h_input_tokens":0},"output_tokens":1,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fVe9AsfhmFbwzYEdsj","type":"assistant","uuid":"635f3a7a-f348-45ae-95c8-fbfd2812fc97","timestamp":"2026-03-14T21:14:42.671Z"} {"parentUuid":"635f3a7a-f348-45ae-95c8-fbfd2812fc97","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01DJqCSzvwByvsip3iL9mBKJ","toolUseID":"toolu_01DJqCSzvwByvsip3iL9mBKJ","timestamp":"2026-03-14T21:14:42.676Z","uuid":"76651116-db54-4392-aff5-53f8d3cab1d3"} 
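The glob output above shows workflow definitions living in several roots: the live data/workflows/, builder copies under visual_workflow_builder/backend/data/workflows/, plus archives and backups. A small discovery sketch over those roots; the dedup-by-filename policy (live directory shadows builder copies) is an illustrative choice, not documented project behavior:

```python
# Sketch: enumerate workflow definitions across the storage roots seen in
# the glob output, keeping the first copy of each filename so archived or
# builder copies do not shadow the live definitions.
import json
from pathlib import Path

ROOTS = [
    Path("data/workflows"),                                  # live definitions
    Path("visual_workflow_builder/backend/data/workflows"),  # builder copies
]

def discover_workflows() -> dict:
    seen = {}
    for root in ROOTS:
        for path in sorted(root.glob("*.json")):
            seen.setdefault(path.name, path)
    return seen

for name, path in discover_workflows().items():
    meta = json.loads(path.read_text(encoding="utf-8"))
    print(f"{name}: {meta.get('name', '?')} tags={meta.get('tags', [])}")
```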
{"parentUuid":"635f3a7a-f348-45ae-95c8-fbfd2812fc97","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01DJqCSzvwByvsip3iL9mBKJ","type":"tool_result","content":" 1→{\n 2→ "name": "Facturation Client",\n 3→ "description": "Créer une facture pour un client dans le système de facturation",\n 4→ "tags": ["facturer", "facture", "client", "invoice", "billing"],\n 5→ "version": "1.0",\n 6→ "created_at": "2024-11-29",\n 7→ "learning_state": "AUTO_CANDIDATE",\n 8→ "variables": [\n 9→ {\n 10→ "name": "client",\n 11→ "description": "Nom du client à facturer",\n 12→ "required": true,\n 13→ "type": "string"\n 14→ },\n 15→ {\n 16→ "name": "montant",\n 17→ "description": "Montant de la facture",\n 18→ "required": false,\n 19→ "default_value": "0",\n 20→ "type": "number"\n 21→ }\n 22→ ],\n 23→ "param_patterns": [\n 24→ "(?:client|customer)\\s+(?P[A-Za-z0-9\\-]+)",\n 25→ "(?:facturer|invoice)\\s+(?P[A-Za-z0-9\\-]+)",\n 26→ "(?P\\d+(?:\\.\\d+)?)\\s(?:€|EUR|euros?)"\n 27→ ],\n 28→ "nodes": [\n 29→ {\n 30→ "node_id": "start",\n 31→ "name": "Écran d'accueil",\n 32→ "type": "entry"\n 33→ },\n 34→ {\n 35→ "node_id": "client_search",\n 36→ "name": "Recherche client",\n 37→ "type": "intermediate"\n 38→ },\n 39→ {\n 40→ "node_id": "client_selected",\n 41→ "name": "Client sélectionné",\n 42→ "type": "intermediate"\n 43→ },\n 44→ {\n 45→ "node_id": "invoice_form",\n 46→ "name": "Formulaire facture",\n 47→ "type": "intermediate"\n 48→ },\n 49→ {\n 50→ "node_id": "invoice_created",\n 51→ "name": "Facture créée",\n 52→ "type": "end"\n 53→ }\n 54→ ],\n 55→ "edges": [\n 56→ {\n 57→ "edge_id": "e1",\n 58→ "source": "start",\n 59→ "target": "client_search",\n 60→ "action": {\n 61→ "type": "mouse_click",\n 62→ "target": {\n 63→ "role": "search_field",\n 64→ "text": "Rechercher client"\n 65→ }\n 66→ }\n 67→ },\n 68→ {\n 69→ "edge_id": "e2",\n 70→ "source": "client_search",\n 71→ "target": "client_selected",\n 72→ "action": {\n 73→ "type": "text_input",\n 74→ "target": {\n 75→ "role": "search_field"\n 76→ },\n 77→ "parameters": {\n 78→ "text": "{{client}}"\n 79→ }\n 80→ }\n 81→ },\n 82→ {\n 83→ "edge_id": "e3",\n 84→ "source": "client_selected",\n 85→ "target": "invoice_form",\n 86→ "action": {\n 87→ "type": "mouse_click",\n 88→ "target": {\n 89→ "role": "primary_action",\n 90→ "text": "Nouvelle facture"\n 91→ }\n 92→ }\n 93→ },\n 94→ {\n 95→ "edge_id": "e4",\n 96→ "source": "invoice_form",\n 97→ "target": "invoice_created",\n 98→ "action": {\n 99→ "type": "mouse_click",\n 100→ "target": {\n 101→ "role": "submit",\n 102→ "text": "Créer"\n 103→ }\n 104→ }\n 105→ }\n 106→ ],\n 107→ "entry_nodes": ["start"],\n 108→ "end_nodes": ["invoice_created"]\n 109→}\n 110→"}]},"uuid":"fc8b809d-75c5-48db-8e61-9bca7507d524","timestamp":"2026-03-14T21:14:42.676Z","sourceToolAssistantUUID":"635f3a7a-f348-45ae-95c8-fbfd2812fc97"} 
{"parentUuid":"fc8b809d-75c5-48db-8e61-9bca7507d524","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01J3i27LT9oGKvqrkVUTBBLN","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01MRd4TnHK7Y1aRdbinTQr2M","name":"Read","input":{"file_path":"/home/dom/.claude/projects/-home-dom-ai-rpa-vision-v3/eec8c668-e1f8-4599-9dd5-42170a166ba4/tool-results/toolu_01X8WDVqsezUXH1vkcskBwyT.txt","offset":250,"limit":300},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":9111,"cache_read_input_tokens":64538,"cache_creation":{"ephemeral_5m_input_tokens":9111,"ephemeral_1h_input_tokens":0},"output_tokens":1,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fVe9AsfhmFbwzYEdsj","type":"assistant","uuid":"2cd48daa-b380-46a3-8bad-1b3141a8e4b9","timestamp":"2026-03-14T21:14:46.226Z"} {"parentUuid":"2cd48daa-b380-46a3-8bad-1b3141a8e4b9","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01MRd4TnHK7Y1aRdbinTQr2M","toolUseID":"toolu_01MRd4TnHK7Y1aRdbinTQr2M","timestamp":"2026-03-14T21:14:46.233Z","uuid":"68d33d4f-1f0e-47ab-84b9-bd6c76e63fc2"} {"parentUuid":"2cd48daa-b380-46a3-8bad-1b3141a8e4b9","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01MRd4TnHK7Y1aRdbinTQr2M","type":"tool_result","content":" 250→ 250→ confidence_at_creation=data.get("confidence_at_creation", 0.0)\n 251→ 251→ )\n 252→ 252→\n 253→ 253→\n 254→ 254→@dataclass\n 255→ 255→class SpatialRelation:\n 256→ 256→ """Relation spatiale entre éléments UI"""\n 257→ 257→ relation_id: str\n 258→ 258→ source_element_id: str\n 259→ 259→ target_element_id: str\n 260→ 260→ relation_type: str # above, below, left_of, right_of, inside, contains\n 261→ 261→ distance_pixels: float = 0.0\n 262→ 262→ confidence: float = 1.0\n 263→ 263→ \n 264→ 264→ def to_dict(self) -> Dict[str, Any]:\n 265→ 265→ return {\n 266→ 266→ "relation_id": self.relation_id,\n 267→ 267→ "source_element_id": self.source_element_id,\n 268→ 268→ "target_element_id": self.target_element_id,\n 269→ 269→ "relation_type": self.relation_type,\n 270→ 270→ "distance_pixels": self.distance_pixels,\n 271→ 271→ "confidence": self.confidence\n 272→ 272→ }\n 273→ 273→ \n 274→ 274→ @classmethod\n 275→ 275→ def from_dict(cls, data: Dict[str, Any]) -> 'SpatialRelation':\n 276→ 276→ return cls(\n 277→ 277→ relation_id=data["relation_id"],\n 278→ 278→ source_element_id=data["source_element_id"],\n 279→ 279→ target_element_id=data["target_element_id"],\n 280→ 280→ relation_type=data["relation_type"],\n 281→ 281→ distance_pixels=data.get("distance_pixels", 0.0),\n 282→ 282→ confidence=data.get("confidence", 1.0)\n 283→ 283→ )\n 284→ 284→\n 285→ 285→\n 286→ 286→@dataclass\n 
287→ 287→class ScreenTemplate:\n 288→ 288→ """Template d'écran pour un WorkflowNode"""\n 289→ 289→ window: WindowConstraint\n 290→ 290→ text: TextConstraint\n 291→ 291→ ui: UIConstraint\n 292→ 292→ embedding: EmbeddingPrototype\n 293→ 293→ \n 294→ 294→ def matches(self, screen_state: Any, embedding_similarity: float) -> Tuple[bool, float]:\n 295→ 295→ """\n 296→ 296→ Vérifier si un ScreenState correspond à ce template\n 297→ 297→ \n 298→ 298→ Returns:\n 299→ 299→ (matches: bool, confidence: float)\n 300→ 300→ """\n 301→ 301→ # Vérifier embedding similarity\n 302→ 302→ if embedding_similarity < self.embedding.min_cosine_similarity:\n 303→ 303→ return False, embedding_similarity\n 304→ 304→ \n 305→ 305→ # Vérifier contraintes de fenêtre\n 306→ 306→ if hasattr(screen_state, 'window'):\n 307→ 307→ window_title = getattr(screen_state.window, 'title', '')\n 308→ 308→ process = getattr(screen_state.window, 'process', '')\n 309→ 309→ if not self.window.matches(window_title, process):\n 310→ 310→ return False, 0.0\n 311→ 311→ \n 312→ 312→ # Vérifier contraintes de texte\n 313→ 313→ if hasattr(screen_state, 'perception'):\n 314→ 314→ detected_text = getattr(screen_state.perception, 'detected_text', [])\n 315→ 315→ if not self.text.matches(detected_text):\n 316→ 316→ return False, 0.0\n 317→ 317→ \n 318→ 318→ # Vérifier contraintes UI\n 319→ 319→ if hasattr(screen_state, 'ui_elements'):\n 320→ 320→ if not self.ui.matches(screen_state.ui_elements):\n 321→ 321→ return False, 0.0\n 322→ 322→ \n 323→ 323→ # Calculer confiance globale\n 324→ 324→ confidence = embedding_similarity\n 325→ 325→ \n 326→ 326→ return True, confidence\n 327→ 327→ \n 328→ 328→ def to_dict(self) -> Dict[str, Any]:\n 329→ 329→ return {\n 330→ 330→ "window": self.window.to_dict(),\n 331→ 331→ "text": self.text.to_dict(),\n 332→ 332→ "ui": self.ui.to_dict(),\n 333→ 333→ "embedding": self.embedding.to_dict()\n 334→ 334→ }\n 335→ 335→ \n 336→ 336→ @classmethod\n 337→ 337→ def from_dict(cls, data: Dict[str, Any]) -> 'ScreenTemplate':\n 338→ 338→ # Support format simplifié ou complet\n 339→ 339→ if "window" in data:\n 340→ 340→ window = WindowConstraint.from_dict(data["window"])\n 341→ 341→ else:\n 342→ 342→ # Format simplifié avec title_pattern\n 343→ 343→ window = WindowConstraint(\n 344→ 344→ title_pattern=data.get("title_pattern", "."),\n 345→ 345→ process_name=data.get("process_name")\n 346→ 346→ )\n 347→ 347→ \n 348→ 348→ if "text" in data:\n 349→ 349→ text = TextConstraint.from_dict(data["text"])\n 350→ 350→ else:\n 351→ 351→ text = TextConstraint(\n 352→ 352→ required_texts=data.get("required_text", []),\n 353→ 353→ forbidden_texts=data.get("forbidden_text", [])\n 354→ 354→ )\n 355→ 355→ \n 356→ 356→ if "ui" in data:\n 357→ 357→ ui = UIConstraint.from_dict(data["ui"])\n 358→ 358→ else:\n 359→ 359→ ui = UIConstraint(\n 360→ 360→ required_roles=data.get("required_elements", []),\n 361→ 361→ required_types=data.get("required_types", []),\n 362→ 362→ min_element_count=data.get("min_element_count", 0)\n 363→ 363→ )\n 364→ 364→ \n 365→ 365→ if "embedding" in data and data["embedding"]:\n 366→ 366→ embedding = EmbeddingPrototype.from_dict(data["embedding"])\n 367→ 367→ else:\n 368→ 368→ # Créer un prototype vide/par défaut\n 369→ 369→ embedding = EmbeddingPrototype(\n 370→ 370→ provider="none",\n 371→ 371→ vector_id="",\n 372→ 372→ min_cosine_similarity=data.get("similarity_threshold", 0.85),\n 373→ 373→ sample_count=0\n 374→ 374→ )\n 375→ 375→ \n 376→ 376→ return cls(window=window, text=text, ui=ui, embedding=embedding)\n 377→ 377→\n 
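ScreenTemplate.matches() gates on embedding similarity first, then short-circuits through the window, text, and UI constraints, and from_dict() accepts both the nested form and a flat simplified form. A self-contained sketch using stand-in objects shaped like the attributes matches() actually reads; import path and all values are illustrative assumptions:

```python
# Sketch: exercise ScreenTemplate via the simplified flat dict form.
from types import SimpleNamespace
from core.graph.workflow_graph import ScreenTemplate  # path assumed

template = ScreenTemplate.from_dict({
    "process_name": "dpi_client.exe",
    "required_text": ["Identité", "Date de naissance"],
    "forbidden_text": ["Erreur"],
    "required_elements": ["primary_action"],
    "min_element_count": 5,
    "similarity_threshold": 0.85,  # becomes embedding.min_cosine_similarity
})

# Stub with the attributes matches() reads: window, perception, ui_elements.
screen_state = SimpleNamespace(
    window=SimpleNamespace(title="DPI - Fiche patient", process="dpi_client.exe"),
    perception=SimpleNamespace(detected_text=["Identité", "Date de naissance 01/01/1980"]),
    ui_elements=[SimpleNamespace(role="primary_action", type="button")] * 5,
)

ok, confidence = template.matches(screen_state, embedding_similarity=0.91)
# ok is True and confidence == 0.91: the similarity clears the 0.85
# threshold and every window/text/UI constraint holds; any single failed
# constraint would short-circuit to (False, 0.0).
```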
378→ 378→\n 379→ 379→# ============================================================================\n 380→ 380→# WorkflowNode\n 381→ 381→# ============================================================================\n 382→ 382→\n 383→ 383→@dataclass\n 384→ 384→class WorkflowNode:\n 385→ 385→ """\n 386→ 386→ Node dans un Workflow Graph\n 387→ 387→ \n 388→ 388→ Représente un état d'écran reconnaissable par son template.\n 389→ 389→ Supporte les variantes pour gérer les variations UI.\n 390→ 390→ """\n 391→ 391→ node_id: str\n 392→ 392→ name: str\n 393→ 393→ description: str\n 394→ 394→ template: ScreenTemplate\n 395→ 395→ is_entry: bool = False\n 396→ 396→ is_end: bool = False\n 397→ 397→ \n 398→ 398→ # Support des variantes (Exigences 4.1, 4.2, 4.3)\n 399→ 399→ variants: List[NodeVariant] = field(default_factory=list)\n 400→ 400→ primary_variant_id: Optional[str] = None # ID de la variante principale\n 401→ 401→ max_variants: int = 5 # Limite de variantes avant consolidation\n 402→ 402→ \n 403→ 403→ # Versioning du prototype (Exigence 3.5)\n 404→ 404→ prototype_version: Optional[str] = None # Version actuelle\n 405→ 405→ prototype_history: List[PrototypeVersion] = field(default_factory=list)\n 406→ 406→ \n 407→ 407→ # Score de qualité (Exigences 1.1, 1.4)\n 408→ 408→ quality_score: float = 0.0 # Score de qualité du cluster\n 409→ 409→ cluster_metrics: Dict[str, float] = field(default_factory=dict)\n 410→ 410→ \n 411→ 411→ # Relations spatiales (Exigence 5.1)\n 412→ 412→ spatial_relations: List[SpatialRelation] = field(default_factory=list)\n 413→ 413→ container_type: Optional[str] = None # form, menu, toolbar, dialog, list\n 414→ 414→ \n 415→ 415→ metadata: Dict[str, Any] = field(default_factory=dict)\n 416→ 416→ \n 417→ 417→ def matches(self, screen_state: Any, state_embedding: Any) -> Tuple[bool, float]:\n 418→ 418→ """\n 419→ 419→ Vérifier si un ScreenState correspond à ce node\n 420→ 420→ \n 421→ 421→ Returns:\n 422→ 422→ (matches: bool, confidence: float)\n 423→ 423→ """\n 424→ 424→ # Calculer similarité d'embedding\n 425→ 425→ if hasattr(state_embedding, 'compute_similarity'):\n 426→ 426→ # Charger prototype embedding\n 427→ 427→ from .state_embedding import StateEmbedding\n 428→ 428→ prototype_path = Path(self.template.embedding.vector_id)\n 429→ 429→ if prototype_path.exists():\n 430→ 430→ # Créer un StateEmbedding temporaire pour le prototype\n 431→ 431→ prototype_emb = StateEmbedding(\n 432→ 432→ embedding_id=f"{self.node_id}_prototype",\n 433→ 433→ vector_id=self.template.embedding.vector_id,\n 434→ 434→ dimensions=state_embedding.dimensions,\n 435→ 435→ fusion_method="weighted",\n 436→ 436→ components={}\n 437→ 437→ )\n 438→ 438→ embedding_similarity = state_embedding.compute_similarity(prototype_emb)\n 439→ 439→ else:\n 440→ 440→ embedding_similarity = 0.0\n 441→ 441→ else:\n 442→ 442→ embedding_similarity = 0.0\n 443→ 443→ \n 444→ 444→ # Vérifier template\n 445→ 445→ return self.template.matches(screen_state, embedding_similarity)\n 446→ 446→ \n 447→ 447→ def to_dict(self) -> Dict[str, Any]:\n 448→ 448→ return {\n 449→ 449→ "node_id": self.node_id,\n 450→ 450→ "name": self.name,\n 451→ 451→ "description": self.description,\n 452→ 452→ "template": self.template.to_dict(),\n 453→ 453→ "is_entry": self.is_entry,\n 454→ 454→ "is_end": self.is_end,\n 455→ 455→ "variants": [v.to_dict() for v in self.variants],\n 456→ 456→ "primary_variant_id": self.primary_variant_id,\n 457→ 457→ "max_variants": self.max_variants,\n 458→ 458→ "prototype_version": self.prototype_version,\n 459→ 459→ 
"prototype_history": [p.to_dict() for p in self.prototype_history],\n 460→ 460→ "quality_score": self.quality_score,\n 461→ 461→ "cluster_metrics": self.cluster_metrics,\n 462→ 462→ "spatial_relations": [r.to_dict() for r in self.spatial_relations],\n 463→ 463→ "container_type": self.container_type,\n 464→ 464→ "metadata": self.metadata\n 465→ 465→ }\n 466→ 466→ \n 467→ 467→ @classmethod\n 468→ 468→ def from_dict(cls, data: Dict[str, Any]) -> 'WorkflowNode':\n 469→ 469→ variants = [NodeVariant.from_dict(v) for v in data.get("variants", [])]\n 470→ 470→ prototype_history = [PrototypeVersion.from_dict(p) for p in data.get("prototype_history", [])]\n 471→ 471→ spatial_relations = [SpatialRelation.from_dict(r) for r in data.get("spatial_relations", [])]\n 472→ 472→ \n 473→ 473→ return cls(\n 474→ 474→ node_id=data["node_id"],\n 475→ 475→ name=data["name"],\n 476→ 476→ description=data["description"],\n 477→ 477→ template=ScreenTemplate.from_dict(data["template"]),\n 478→ 478→ is_entry=data.get("is_entry", False),\n 479→ 479→ is_end=data.get("is_end", False),\n 480→ 480→ variants=variants,\n 481→ 481→ primary_variant_id=data.get("primary_variant_id"),\n 482→ 482→ max_variants=data.get("max_variants", 5),\n 483→ 483→ prototype_version=data.get("prototype_version"),\n 484→ 484→ prototype_history=prototype_history,\n 485→ 485→ quality_score=data.get("quality_score", 0.0),\n 486→ 486→ cluster_metrics=data.get("cluster_metrics", {}),\n 487→ 487→ spatial_relations=spatial_relations,\n 488→ 488→ container_type=data.get("container_type"),\n 489→ 489→ metadata=data.get("metadata", {})\n 490→ 490→ )\n 491→ 491→ \n 492→ 492→ # =========================================================================\n 493→ 493→ # Méthodes de gestion des variantes\n 494→ 494→ # =========================================================================\n 495→ 495→ \n 496→ 496→ def add_variant(self, variant: NodeVariant) -> bool:\n 497→ 497→ """\n 498→ 498→ Ajouter une variante au node.\n 499→ 499→ \n 500→ 500→ Returns:\n 501→ 501→ True si ajoutée, False si limite atteinte ou déjà existante\n 502→ 502→ """\n 503→ 503→ # Vérifier si variante existe déjà\n 504→ 504→ for existing in self.variants:\n 505→ 505→ if existing.variant_id == variant.variant_id:\n 506→ 506→ return False\n 507→ 507→ \n 508→ 508→ # Vérifier limite\n 509→ 509→ if len(self.variants) >= self.max_variants:\n 510→ 510→ return False\n 511→ 511→ \n 512→ 512→ self.variants.append(variant)\n 513→ 513→ return True\n 514→ 514→ \n 515→ 515→ def remove_variant(self, variant_id: str) -> bool:\n 516→ 516→ """Supprimer une variante par ID"""\n 517→ 517→ for i, variant in enumerate(self.variants):\n 518→ 518→ if variant.variant_id == variant_id:\n 519→ 519→ del self.variants[i]\n 520→ 520→ return True\n 521→ 521→ return False\n 522→ 522→ \n 523→ 523→ def get_variant(self, variant_id: str) -> Optional[NodeVariant]:\n 524→ 524→ """Récupérer une variante par ID"""\n 525→ 525→ for variant in self.variants:\n 526→ 526→ if variant.variant_id == variant_id:\n 527→ 527→ return variant\n 528→ 528→ return None\n 529→ 529→ \n 530→ 530→ def get_best_variant(self) -> Optional[NodeVariant]:\n 531→ 531→ """Récupérer la variante avec le meilleur taux de succès"""\n 532→ 532→ if not self.variants:\n 533→ 533→ return None\n 534→ 534→ return max(self.variants, key=lambda v: v.success_rate * v.observation_count)\n 535→ 535→ \n 536→ 536→ def needs_consolidation(self) -> bool:\n 537→ 537→ """Vérifier si les variantes doivent être consolidées"""\n 538→ 538→ return len(self.variants) >= 
self.max_variants\n 539→ 539→ \n 540→ 540→ def add_prototype_version(self, version: PrototypeVersion) -> None:\n 541→ 541→ """Ajouter une version de prototype à l'historique"""\n 542→ 542→ self.prototype_history.append(version)\n 543→ 543→ self.prototype_version = version.version_id\n 544→ 544→ \n 545→ 545→ def get_prototype_version(self, version_id: str) -> Optional[PrototypeVersion]:\n 546→ 546→ """Récupérer une version de prototype par ID"""\n 547→ 547→ for version in self.prototype_history:\n 548→ 548→ if version.version_id == version_id:\n 549→ 549→ return version"}]},"uuid":"8fd5fb77-1f36-49fc-9553-b5abbdf0c7d1","timestamp":"2026-03-14T21:14:46.233Z","sourceToolAssistantUUID":"2cd48daa-b380-46a3-8bad-1b3141a8e4b9"} {"parentUuid":"8fd5fb77-1f36-49fc-9553-b5abbdf0c7d1","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01J3i27LT9oGKvqrkVUTBBLN","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_017njNfQLQ3Ymdsstbzwwr3k","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/agent_chat/autonomous_planner.py"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":9111,"cache_read_input_tokens":64538,"output_tokens":311,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":9111},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fVe9AsfhmFbwzYEdsj","type":"assistant","uuid":"e6c09744-f7f8-4f2e-b18d-b260036d6f58","timestamp":"2026-03-14T21:14:46.876Z"} {"parentUuid":"e6c09744-f7f8-4f2e-b18d-b260036d6f58","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_017njNfQLQ3Ymdsstbzwwr3k","toolUseID":"toolu_017njNfQLQ3Ymdsstbzwwr3k","timestamp":"2026-03-14T21:14:47.061Z","uuid":"665864b1-5093-4ab1-a058-419aa38d1311"} {"parentUuid":"e6c09744-f7f8-4f2e-b18d-b260036d6f58","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_017njNfQLQ3Ymdsstbzwwr3k","type":"tool_result","content":" 1→#!/usr/bin/env python3\n 2→"""\n 3→Autonomous Planner - Agent Libre pour RPA Vision V3\n 4→\n 5→Ce module permet d'exécuter des tâches sans workflow pré-enregistré.\n 6→Il utilise un LLM (Qwen via Ollama) pour :\n 7→1. Comprendre l'intention utilisateur\n 8→2. Décomposer en étapes d'actions\n 9→3. 
Adapter dynamiquement selon le contexte visuel\n 10→\n 11→Auteur: Dom - Janvier 2026\n 12→"""\n 13→\n 14→import json\n 15→import logging\n 16→import time\n 17→import re\n 18→import sys\n 19→import os\n 20→from dataclasses import dataclass, field\n 21→from typing import List, Dict, Any, Optional, Callable, Tuple\n 22→from enum import Enum\n 23→from datetime import datetime\n 24→\n 25→import requests\n 26→\n 27→# Ajouter le chemin du projet pour les imports core\n 28→sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))\n 29→\n 30→logger = logging.getLogger(__name__)\n 31→\n 32→# Essayer d'importer les composants de détection visuelle\n 33→try:\n 34→ from core.detection.owl_detector import OwlDetector, OWL_AVAILABLE\n 35→ VISUAL_DETECTION_AVAILABLE = OWL_AVAILABLE\n 36→except ImportError:\n 37→ VISUAL_DETECTION_AVAILABLE = False\n 38→ OwlDetector = None\n 39→\n 40→# Essayer d'importer le client VLM pour analyse intelligente\n 41→try:\n 42→ from core.detection.ollama_client import OllamaClient\n 43→ VLM_AVAILABLE = True\n 44→except ImportError:\n 45→ VLM_AVAILABLE = False\n 46→ OllamaClient = None\n 47→\n 48→try:\n 49→ from PIL import Image as PILImage\n 50→ import pyautogui\n 51→ PYAUTOGUI_AVAILABLE = True\n 52→except ImportError:\n 53→ PYAUTOGUI_AVAILABLE = False\n 54→ PILImage = None\n 55→ pyautogui = None\n 56→\n 57→\n 58→class ActionType(Enum):\n 59→ """Types d'actions supportées par l'agent autonome."""\n 60→ CLICK = "click"\n 61→ TYPE_TEXT = "type_text"\n 62→ HOTKEY = "hotkey"\n 63→ SCROLL = "scroll"\n 64→ WAIT = "wait"\n 65→ OPEN_APP = "open_app"\n 66→ OPEN_URL = "open_url"\n 67→ SCREENSHOT = "screenshot"\n 68→ FIND_ELEMENT = "find_element"\n 69→\n 70→\n 71→@dataclass\n 72→class PlannedAction:\n 73→ """Une action planifiée par le LLM."""\n 74→ step_number: int\n 75→ action_type: ActionType\n 76→ description: str\n 77→ target: Optional[str] = None # Description textuelle de la cible\n 78→ parameters: Dict[str, Any] = field(default_factory=dict)\n 79→ expected_result: Optional[str] = None\n 80→ fallback: Optional[str] = None\n 81→\n 82→\n 83→@dataclass\n 84→class ExecutionPlan:\n 85→ """Plan d'exécution complet généré par le LLM."""\n 86→ task_description: str\n 87→ steps: List[PlannedAction]\n 88→ estimated_duration_seconds: int = 30\n 89→ requires_confirmation: bool = True\n 90→ risk_level: str = "low" # low, medium, high\n 91→ created_at: datetime = field(default_factory=datetime.now)\n 92→\n 93→\n 94→@dataclass\n 95→class ActionResult:\n 96→ """Résultat d'une action exécutée."""\n 97→ success: bool\n 98→ action: PlannedAction\n 99→ message: str\n 100→ screenshot_path: Optional[str] = None\n 101→ duration_ms: int = 0\n 102→ error: Optional[str] = None\n 103→\n 104→\n 105→class AutonomousPlanner:\n 106→ """\n 107→ Planificateur autonome utilisant LLM pour générer des plans d'action.\n 108→ """\n 109→\n 110→ def __init__(\n 111→ self,\n 112→ llm_endpoint: str = "http://localhost:11434/api/generate",\n 113→ llm_model: str = "qwen2.5:7b",\n 114→ timeout: int = 60\n 115→ ):\n 116→ self.llm_endpoint = llm_endpoint\n 117→ self.llm_model = llm_model\n 118→ self.timeout = timeout\n 119→ self.llm_available = self._check_llm()\n 120→\n 121→ # Callbacks pour l'exécution\n 122→ self._action_executor: Optional[Callable] = None\n 123→ self._screen_capturer: Optional[Callable] = None\n 124→ self._progress_callback: Optional[Callable] = None\n 125→\n 126→ # Détecteur visuel (OWL-v2)\n 127→ self._owl_detector: Optional[OwlDetector] = None\n 128→ self._init_visual_detection()\n 129→\n
130→ # Client VLM pour analyse intelligente d'écran\n 131→ self._vlm_client = None\n 132→ self._init_vlm_client()\n 133→\n 134→ logger.info(f"AutonomousPlanner initialized (LLM: {self.llm_model}, available: {self.llm_available}, visual: {self._owl_detector is not None}, vlm: {self._vlm_client is not None})")\n 135→\n 136→ def _init_visual_detection(self):\n 137→ """Initialise le détecteur visuel OWL-v2."""\n 138→ if VISUAL_DETECTION_AVAILABLE and OwlDetector:\n 139→ try:\n 140→ self._owl_detector = OwlDetector(confidence_threshold=0.1)\n 141→ logger.info("OWL-v2 visual detector initialized")\n 142→ except Exception as e:\n 143→ logger.warning(f"Could not initialize OWL detector: {e}")\n 144→ self._owl_detector = None\n 145→\n 146→ def _init_vlm_client(self):\n 147→ """Initialise le client VLM pour analyse intelligente."""\n 148→ if VLM_AVAILABLE and OllamaClient:\n 149→ try:\n 150→ self._vlm_client = OllamaClient(model="qwen2.5vl:7b")\n 151→ logger.info("VLM client initialized (qwen2.5vl:7b)")\n 152→ except Exception as e:\n 153→ logger.warning(f"Could not initialize VLM client: {e}")\n 154→ self._vlm_client = None\n 155→\n 156→ def _analyze_screen_for_element(\n 157→ self,\n 158→ description: str,\n 159→ screenshot=None\n 160→ ) -> Optional[Tuple[int, int]]:\n 161→ """\n 162→ Utilise le VLM pour analyser l'écran et trouver un élément.\n 163→ Plus intelligent que OWL car peut comprendre le contexte.\n 164→\n 165→ Args:\n 166→ description: Description de l'élément à trouver\n 167→ screenshot: Capture d'écran (capturée si non fournie)\n 168→\n 169→ Returns:\n 170→ Tuple (x, y) des coordonnées ou None\n 171→ """\n 172→ if not self._vlm_client:\n 173→ return None\n 174→\n 175→ if screenshot is None:\n 176→ screenshot = self._capture_screen()\n 177→\n 178→ if screenshot is None:\n 179→ return None\n 180→\n 181→ # Prompt pour le VLM\n 182→ prompt = f"""Analyse cette capture d'écran et trouve l'élément suivant: "{description}"\n 183→\n 184→IMPORTANT:\n 185→- Ignore les publicités et les éléments sponsorisés\n 186→- Cherche un vrai résultat de recherche, pas une annonce\n 187→- Si c'est une vidéo, cherche une miniature de vidéo avec un titre pertinent\n 188→\n 189→Réponds UNIQUEMENT avec les coordonnées X,Y du CENTRE de l'élément au format:\n 190→COORDINATES: X, Y\n 191→\n 192→Si tu ne trouves pas l'élément, réponds:\n 193→NOT_FOUND"""\n 194→\n 195→ try:\n 196→ result = self._vlm_client.generate(\n 197→ prompt=prompt,\n 198→ image=screenshot,\n 199→ temperature=0.1,\n 200→ max_tokens=100\n 201→ )\n 202→\n 203→ if result.get('success'):\n 204→ response = result.get('response', '')\n 205→ logger.info(f"VLM response: {response}")\n 206→\n 207→ # Parser les coordonnées\n 208→ coord_match = re.search(r'COORDINATES:\s(\d+)\s*,\s*(\d+)', response)\n 209→ if coord_match:\n 210→ x = int(coord_match.group(1))\n 211→ y = int(coord_match.group(2))\n 212→ logger.info(f"VLM found element at ({x}, {y})")\n 213→ return (x, y)\n 214→\n 215→ except Exception as e:\n 216→ logger.warning(f"VLM analysis failed: {e}")\n 217→\n 218→ return None\n 219→\n 220→ def _capture_screen(self):\n 221→ """Capture l'écran actuel. 
Retourne PIL.Image ou None."""\n 222→ if not PYAUTOGUI_AVAILABLE:\n 223→ return None\n 224→ try:\n 225→ screenshot = pyautogui.screenshot()\n 226→ return screenshot\n 227→ except Exception as e:\n 228→ logger.warning(f"Screen capture failed: {e}")\n 229→ return None\n 230→\n 231→ def _find_element_by_description(\n 232→ self,\n 233→ description: str,\n 234→ screenshot=None\n 235→ ) -> Optional[Tuple[int, int]]:\n 236→ """\n 237→ Trouve un élément à l'écran par sa description textuelle.\n 238→\n 239→ Args:\n 240→ description: Description de l'élément (ex: "search bar", "play button")\n 241→ screenshot: Capture d'écran (capturée si non fournie)\n 242→\n 243→ Returns:\n 244→ Tuple (x, y) des coordonnées du centre de l'élément, ou None si non trouvé\n 245→ """\n 246→ if screenshot is None:\n 247→ screenshot = self._capture_screen()\n 248→\n 249→ if screenshot is None:\n 250→ return None\n 251→\n 252→ # Essayer avec OWL-v2 si disponible\n 253→ if self._owl_detector:\n 254→ try:\n 255→ # Créer des variantes de la requête pour OWL\n 256→ queries = self._generate_owl_queries(description)\n 257→ logger.info(f"OWL searching for: {queries}")\n 258→\n 259→ detections = self._owl_detector.detect(screenshot, queries, confidence_threshold=0.05)\n 260→\n 261→ if detections:\n 262→ # Prendre la détection avec le meilleur score\n 263→ best = max(detections, key=lambda d: d['confidence'])\n 264→ center = best.get('center')\n 265→ if center:\n 266→ logger.info(f"OWL found '{best['label']}' at {center} (conf: {best['confidence']:.2f})")\n 267→ return (int(center[0]), int(center[1]))\n 268→ except Exception as e:\n 269→ logger.warning(f"OWL detection failed: {e}")\n 270→\n 271→ # Fallback 2: Utiliser le VLM pour analyse intelligente\n 272→ # Particulièrement utile pour distinguer les pubs des vrais résultats\n 273→ if self._vlm_client and ("vidéo" in description.lower() or "video" in description.lower()):\n 274→ logger.info("Trying VLM analysis for intelligent element detection...")\n 275→ vlm_result = self._analyze_screen_for_element(description, screenshot)\n 276→ if vlm_result:\n 277→ return vlm_result\n 278→\n 279→ # Fallback 3: positions heuristiques basées sur la description\n 280→ return self._heuristic_position(description, screenshot.size if screenshot else (1920, 1080))\n 281→\n 282→ def _generate_owl_queries(self, description: str) -> List[str]:\n 283→ """Génère des requêtes OWL à partir d'une description."""\n 284→ desc_lower = description.lower()\n 285→ queries = []\n 286→\n 287→ # Mapping description -> termes OWL\n 288→ if "recherche" in desc_lower or "search" in desc_lower:\n 289→ queries.extend(["search bar", "search box", "text input", "search field"])\n 290→ elif "vidéo" in desc_lower or "video" in desc_lower or "miniature" in desc_lower:\n 291→ queries.extend(["video thumbnail", "video preview", "video card"])\n 292→ elif "bouton" in desc_lower or "button" in desc_lower:\n 293→ queries.extend(["button", "click button", "submit button"])\n 294→ elif "play" in desc_lower or "lecture" in desc_lower:\n 295→ queries.extend(["play button", "video player"])\n 296→ else:\n 297→ # Utiliser la description directement\n 298→ queries.append(description)\n 299→\n 300→ return queries\n 301→\n 302→ def _heuristic_position(\n 303→ self,\n 304→ description: str,\n 305→ screen_size: Tuple[int, int]\n 306→ ) -> Optional[Tuple[int, int]]:\n 307→ """Position heuristique basée sur la description (fallback)."""\n 308→ width, height = screen_size\n 309→ desc_lower = description.lower()\n 310→\n 311→ # YouTube 
specific heuristics (après scroll, les vidéos sont plus visibles)\n 312→ if "recherche" in desc_lower or "search" in desc_lower:\n 313→ # Barre de recherche YouTube: centre-haut\n 314→ return (width // 2, 60)\n 315→ elif ("vidéo" in desc_lower or "video" in desc_lower or "miniature" in desc_lower) and ("musique" in desc_lower or "music" in desc_lower or "première" in desc_lower or "visible" in desc_lower):\n 316→ # Première vidéo YouTube visible (après scroll, au milieu de l'écran)\n 317→ # Position adaptée pour différentes résolutions\n 318→ x = width // 4 # Colonne gauche où sont les miniatures\n 319→ y = height // 3 # Tiers supérieur après scroll\n 320→ logger.info(f"Heuristic video position: ({x}, {y}) for screen {width}x{height}")\n 321→ return (x, y)\n 322→ elif "bouton" in desc_lower and "recherche" in desc_lower:\n 323→ # Bouton de recherche YouTube (loupe)\n 324→ return (width // 2 + 280, 60)\n 325→\n 326→ # Position par défaut: centre\n 327→ return (width // 2, height // 2)\n 328→\n 329→ def _check_llm(self) -> bool:\n 330→ """Vérifie si le LLM est disponible."""\n 331→ try:\n 332→ response = requests.get(\n 333→ "http://localhost:11434/api/tags",\n 334→ timeout=5\n 335→ )\n 336→ if response.status_code == 200:\n 337→ models = response.json().get('models', [])\n 338→ model_names = [m['name'] for m in models]\n 339→ # Vérifier si notre modèle est disponible\n 340→ if any(self.llm_model in name for name in model_names):\n 341→ return True\n 342→ logger.warning(f"Model {self.llm_model} not found. Available: {model_names}")\n 343→ return False\n 344→ except Exception as e:\n 345→ logger.warning(f"LLM check failed: {e}")\n 346→ return False\n 347→\n 348→ def set_action_executor(self, executor: Callable):\n 349→ """Configure l'exécuteur d'actions."""\n 350→ self._action_executor = executor\n 351→\n 352→ def set_screen_capturer(self, capturer: Callable):\n 353→ """Configure le capteur d'écran."""\n 354→ self._screen_capturer = capturer\n 355→\n 356→ def set_progress_callback(self, callback: Callable):\n 357→ """Configure le callback de progression."""\n 358→ self._progress_callback = callback\n 359→\n 360→ def _notify_progress(self, step: int, total: int, message: str, status: str = "running"):\n 361→ """Notifie la progression."""\n 362→ if self._progress_callback:\n 363→ self._progress_callback({\n 364→ "step": step,\n 365→ "total": total,\n 366→ "message": message,\n 367→ "status": status,\n 368→ "percent": int((step / total) * 100) if total > 0 else 0\n 369→ })\n 370→\n 371→ def plan(self, user_request: str, context: Optional[Dict] = None) -> ExecutionPlan:\n 372→ """\n 373→ Génère un plan d'exécution à partir d'une requête utilisateur.\n 374→\n 375→ Args:\n 376→ user_request: La demande en langage naturel\n 377→ context: Contexte optionnel (écran actuel, apps ouvertes, etc.)\n 378→\n 379→ Returns:\n 380→ ExecutionPlan avec les étapes à exécuter\n 381→ """\n 382→ if not self.llm_available:\n 383→ return self._fallback_plan(user_request)\n 384→\n 385→ # Construire le prompt\n 386→ prompt = self._build_planning_prompt(user_request, context)\n 387→\n 388→ try:\n 389→ # Appeler le LLM\n 390→ response = requests.post(\n 391→ self.llm_endpoint,\n 392→ json={\n 393→ "model": self.llm_model,\n 394→ "prompt": prompt,\n 395→ "stream": False,\n 396→ "options": {\n 397→ "temperature": 0.3, # Déterministe\n 398→ "num_predict": 1500\n 399→ }\n 400→ },\n 401→ timeout=self.timeout\n 402→ )\n 403→\n 404→ if response.status_code == 200:\n 405→ result = response.json()\n 406→ llm_response = 
result.get('response', '')\n 407→ return self._parse_plan(user_request, llm_response)\n 408→ else:\n 409→ logger.error(f"LLM request failed: {response.status_code}")\n 410→ return self._fallback_plan(user_request)\n 411→\n 412→ except Exception as e:\n 413→ logger.error(f"Planning error: {e}")\n 414→ return self._fallback_plan(user_request)\n 415→\n 416→ def _build_planning_prompt(self, user_request: str, context: Optional[Dict] = None) -> str:\n 417→ """Construit le prompt pour le LLM."""\n 418→\n 419→ context_info = ""\n 420→ if context:\n 421→ context_info = f"""\n 422→Contexte actuel:\n 423→- Application active: {context.get('active_app', 'inconnue')}\n 424→- Fenêtre: {context.get('window_title', 'inconnue')}\n 425→- Écran: {context.get('screen_info', 'standard')}\n 426→"""\n 427→\n 428→ prompt = f"""Tu es un assistant RPA intelligent qui planifie des actions sur ordinateur.\n 429→\n 430→TÂCHE DEMANDÉE: {user_request}\n 431→{context_info}\n 432→\n 433→ACTIONS DISPONIBLES:\n 434→- open_app: Ouvrir une application (params: app_name - ex: "firefox", "terminal", "code")\n 435→- open_url: Ouvrir une URL dans le navigateur (params: url)\n 436→- click: Cliquer sur un élément visuel (target: description précise de l'élément à cliquer)\n 437→- type_text: Taper du texte (target: description du champ, params: text)\n 438→- hotkey: Raccourci clavier (params: keys - ex: "ctrl+t", "ctrl+l", "Return", "Escape", "Tab")\n 439→- scroll: Défiler (params: direction - "up"/"down", amount - nombre de scrolls)\n 440→- wait: Attendre (params: seconds)\n 441→\n 442→PRINCIPES DE PLANIFICATION:\n 443→1. Décompose la tâche en étapes simples, une action par étape\n 444→2. Ajoute wait (2-3s) après open_url/open_app pour laisser le temps au chargement\n 445→3. Pour taper dans un champ web, utilise d'abord click sur le champ OU hotkey pour focus (ex: ctrl+l pour barre d'adresse)\n 446→4. Après avoir tapé dans un champ de recherche, ajoute hotkey "Return" pour valider\n 447→5. Pour click, décris précisément l'élément visuel (ex: "bouton bleu Envoyer", "première image de la liste")\n 448→6. 
Si la page peut avoir des publicités, ajoute un scroll pour les passer avant de cliquer\n 449→\n 450→RACCOURCIS UTILES:\n 451→- ctrl+l : Focus sur la barre d'adresse du navigateur\n 452→- ctrl+t : Nouvel onglet\n 453→- ctrl+f : Rechercher dans la page\n 454→- Return : Valider/Entrée\n 455→- Escape : Annuler/Fermer\n 456→\n 457→RÉPONDS EN JSON UNIQUEMENT:\n 458→{{\n 459→ "steps": [\n 460→ {{\n 461→ "step": 1,\n 462→ "action": "open_url|click|type_text|hotkey|wait|scroll",\n 463→ "target": "description visuelle de la cible",\n 464→ "params": {{"url": "...", "text": "...", "keys": "...", "seconds": 2}},\n 465→ "description": "Ce que fait cette étape",\n 466→ "expected_result": "Résultat attendu"\n 467→ }}\n 468→ ],\n 469→ "risk_level": "low|medium|high",\n 470→ "estimated_seconds": 30\n 471→}}\n 472→\n 473→JSON:"""\n 474→\n 475→ return prompt\n 476→\n 477→ def _parse_plan(self, user_request: str, llm_response: str) -> ExecutionPlan:\n 478→ """Parse la réponse du LLM en ExecutionPlan."""\n 479→\n 480→ try:\n 481→ # Extraire le JSON de la réponse\n 482→ json_match = re.search(r'\{[\s\S]*\}', llm_response)\n 483→ if not json_match:\n 484→ logger.warning("No JSON found in LLM response")\n 485→ return self._fallback_plan(user_request)\n 486→\n 487→ data = json.loads(json_match.group())\n 488→\n 489→ steps = []\n 490→ for step_data in data.get('steps', []):\n 491→ action_type_str = step_data.get('action', 'click')\n 492→\n 493→ # Mapper vers ActionType\n 494→ action_type_map = {\n 495→ 'open_app': ActionType.OPEN_APP,\n 496→ 'open_url': ActionType.OPEN_URL,\n 497→ 'click': ActionType.CLICK,\n 498→ 'type_text': ActionType.TYPE_TEXT,\n 499→ 'hotkey': ActionType.HOTKEY,\n 500→ 'scroll': ActionType.SCROLL,\n 501→ 'wait': ActionType.WAIT,\n 502→ 'screenshot': ActionType.SCREENSHOT,\n 503→ 'find_element': ActionType.FIND_ELEMENT\n 504→ }\n 505→\n 506→ action_type = action_type_map.get(action_type_str, ActionType.CLICK)\n 507→\n 508→ steps.append(PlannedAction(\n 509→ step_number=step_data.get('step', len(steps) + 1),\n 510→ action_type=action_type,\n 511→ description=step_data.get('description', ''),\n 512→ target=step_data.get('target'),\n 513→ parameters=step_data.get('params', {}),\n 514→ expected_result=step_data.get('expected_result')\n 515→ ))\n 516→\n 517→ return ExecutionPlan(\n 518→ task_description=user_request,\n 519→ steps=steps,\n 520→ estimated_duration_seconds=data.get('estimated_seconds', 30),\n 521→ risk_level=data.get('risk_level', 'low'),\n 522→ requires_confirmation=data.get('risk_level', 'low') != 'low'\n 523→ )\n 524→\n 525→ except json.JSONDecodeError as e:\n 526→ logger.error(f"JSON parse error: {e}")\n 527→ return self._fallback_plan(user_request)\n 528→\n 529→ def _fallback_plan(self, user_request: str) -> ExecutionPlan:\n 530→ """Plan de fallback si le LLM n'est pas disponible."""\n 531→\n 532→ # Détection simple de patterns\n 533→ request_lower = user_request.lower()\n 534→ steps = []\n 535→\n 536→ # Pattern: ouvrir navigateur/browser\n 537→ if any(word in request_lower for word in ['navigateur', 'browser', 'chrome', 'firefox']):\n 538→ steps.append(PlannedAction(\n 539→ step_number=1,\n 540→ action_type=ActionType.OPEN_APP,\n 541→ description="Ouvrir le navigateur",\n 542→ parameters={"app_name": "firefox"}\n 543→ ))\n 544→\n 545→ # Pattern: aller sur / ouvrir site\n 546→ url_patterns = [\n 547→ (r'youtube', 'https://www.youtube.com'),\n 548→ (r'google', 'https://www.google.com'),\n 549→ (r'github', 'https://www.github.com'),\n 550→ ]\n 551→\n 552→ for pattern, url in 
url_patterns:\n 553→ if pattern in request_lower:\n 554→ steps.append(PlannedAction(\n 555→ step_number=len(steps) + 1,\n 556→ action_type=ActionType.OPEN_URL,\n 557→ description=f"Ouvrir {pattern}",\n 558→ parameters={"url": url}\n 559→ ))\n 560→ break\n 561→\n 562→ # Pattern: chercher / rechercher\n 563→ if any(word in request_lower for word in ['chercher', 'rechercher', 'search']):\n 564→ # Extraire le terme de recherche\n 565→ search_term = request_lower\n 566→ for remove in ['cherche', 'recherche', 'search', 'sur youtube', 'sur google', 'une vidéo', 'video']:\n 567→ search_term = search_term.replace(remove, '')\n 568→ search_term = search_term.strip()\n 569→\n 570→ if search_term:\n 571→ steps.append(PlannedAction(\n 572→ step_number=len(steps) + 1,\n 573→ action_type=ActionType.CLICK,\n 574→ description="Cliquer sur la barre de recherche",\n 575→ target="barre de recherche"\n 576→ ))\n 577→ steps.append(PlannedAction(\n 578→ step_number=len(steps) + 1,\n 579→ action_type=ActionType.TYPE_TEXT,\n 580→ description=f"Taper '{search_term}'",\n 581→ parameters={"text": search_term}\n 582→ ))\n 583→ steps.append(PlannedAction(\n 584→ step_number=len(steps) + 1,\n 585→ action_type=ActionType.HOTKEY,\n 586→ description="Appuyer sur Entrée",\n 587→ parameters={"keys": "Return"}\n 588→ ))\n 589→\n 590→ # Pattern: lancer / jouer vidéo\n 591→ if any(word in request_lower for word in ['lancer', 'jouer', 'play', 'regarder']):\n 592→ steps.append(PlannedAction(\n 593→ step_number=len(steps) + 1,\n 594→ action_type=ActionType.WAIT,\n 595→ description="Attendre le chargement",\n 596→ parameters={"seconds": 2}\n 597→ ))\n 598→ steps.append(PlannedAction(\n 599→ step_number=len(steps) + 1,\n 600→ action_type=ActionType.CLICK,\n 601→ description="Cliquer sur la première vidéo",\n 602→ target="première miniature de vidéo"\n 603→ ))\n 604→\n 605→ if not steps:\n 606→ # Plan générique\n 607→ steps.append(PlannedAction(\n 608→ step_number=1,\n 609→ action_type=ActionType.SCREENSHOT,\n 610→ description="Capturer l'écran actuel",\n 611→ parameters={}\n 612→ ))\n 613→\n 614→ return ExecutionPlan(\n 615→ task_description=user_request,\n 616→ steps=steps,\n 617→ estimated_duration_seconds=len(steps) * 5,\n 618→ risk_level="low"\n 619→ )\n 620→\n 621→ async def execute_plan(self, plan: ExecutionPlan) -> List[ActionResult]:\n 622→ """\n 623→ Exécute un plan d'action.\n 624→\n 625→ Args:\n 626→ plan: Le plan à exécuter\n 627→\n 628→ Returns:\n 629→ Liste des résultats d'actions\n 630→ """\n 631→ results = []\n 632→ total_steps = len(plan.steps)\n 633→\n 634→ self._notify_progress(0, total_steps, "Démarrage de l'exécution...", "starting")\n 635→\n 636→ for i, action in enumerate(plan.steps):\n 637→ step_num = i + 1\n 638→ self._notify_progress(\n 639→ step_num,\n 640→ total_steps,\n 641→ f"Étape {step_num}/{total_steps}: {action.description}",\n 642→ "running"\n 643→ )\n 644→\n 645→ try:\n 646→ result = await self._execute_action(action)\n 647→ results.append(result)\n 648→\n 649→ if not result.success:\n 650→ logger.warning(f"Step {step_num} failed: {result.message}")\n 651→ # Continuer malgré l'échec (best effort)\n 652→\n 653→ except Exception as e:\n 654→ logger.error(f"Step {step_num} error: {e}")\n 655→ results.append(ActionResult(\n 656→ success=False,\n 657→ action=action,\n 658→ message=f"Erreur: {str(e)}",\n 659→ error=str(e)\n 660→ ))\n 661→\n 662→ # Petit délai entre les actions\n 663→ time.sleep(0.3)\n 664→\n 665→ success_count = sum(1 for r in results if r.success)\n 666→ self._notify_progress(\n 667→ 
total_steps,\n 668→ total_steps,\n 669→ f"Terminé: {success_count}/{total_steps} étapes réussies",\n 670→ "completed" if success_count == total_steps else "partial"\n 671→ )\n 672→\n 673→ return results\n 674→\n 675→ async def _execute_action(self, action: PlannedAction) -> ActionResult:\n 676→ """Exécute une action individuelle."""\n 677→\n 678→ start_time = time.time()\n 679→\n 680→ try:\n 681→ if action.action_type == ActionType.OPEN_APP:\n 682→ return await self._exec_open_app(action)\n 683→\n 684→ elif action.action_type == ActionType.OPEN_URL:\n 685→ return await self._exec_open_url(action)\n 686→\n 687→ elif action.action_type == ActionType.CLICK:\n 688→ return await self._exec_click(action)\n 689→\n 690→ elif action.action_type == ActionType.TYPE_TEXT:\n 691→ return await self._exec_type_text(action)\n 692→\n 693→ elif action.action_type == ActionType.HOTKEY:\n 694→ return await self._exec_hotkey(action)\n 695→\n 696→ elif action.action_type == ActionType.SCROLL:\n 697→ return await self._exec_scroll(action)\n 698→\n 699→ elif action.action_type == ActionType.WAIT:\n 700→ return await self._exec_wait(action)\n 701→\n 702→ elif action.action_type == ActionType.SCREENSHOT:\n 703→ return await self._exec_screenshot(action)\n 704→\n 705→ else:\n 706→ return ActionResult(\n 707→ success=False,\n 708→ action=action,\n 709→ message=f"Type d'action non supporté: {action.action_type}",\n 710→ duration_ms=int((time.time() - start_time) * 1000)\n 711→ )\n 712→\n 713→ except Exception as e:\n 714→ return ActionResult(\n 715→ success=False,\n 716→ action=action,\n 717→ message=f"Erreur: {str(e)}",\n 718→ error=str(e),\n 719→ duration_ms=int((time.time() - start_time) * 1000)\n 720→ )\n 721→\n 722→ async def _exec_open_app(self, action: PlannedAction) -> ActionResult:\n 723→ """Ouvre une application."""\n 724→ import subprocess\n 725→\n 726→ app_name = action.parameters.get('app_name', 'firefox')\n 727→\n 728→ # Mapping des noms courants\n 729→ app_commands = {\n 730→ 'firefox': 'firefox',\n 731→ 'chrome': 'google-chrome',\n 732→ 'navigateur': 'firefox',\n 733→ 'browser': 'firefox',\n 734→ 'terminal': 'gnome-terminal',\n 735→ 'files': 'nautilus',\n 736→ 'code': 'code',\n 737→ 'vscode': 'code'\n 738→ }\n 739→\n 740→ cmd = app_commands.get(app_name.lower(), app_name)\n 741→\n 742→ try:\n 743→ subprocess.Popen([cmd], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)\n 744→ time.sleep(1) # Attendre que l'app démarre\n 745→\n 746→ return ActionResult(\n 747→ success=True,\n 748→ action=action,\n 749→ message=f"Application '{app_name}' ouverte"\n 750→ )\n 751→ except FileNotFoundError:\n 752→ return ActionResult(\n 753→ success=False,\n 754→ action=action,\n 755→ message=f"Application '{app_name}' non trouvée"\n 756→ )\n 757→\n 758→ async def _exec_open_url(self, action: PlannedAction) -> ActionResult:\n 759→ """Ouvre une URL dans le navigateur."""\n 760→ import subprocess\n 761→\n 762→ url = action.parameters.get('url', 'https://www.google.com')\n 763→\n 764→ try:\n 765→ subprocess.Popen(['xdg-open', url], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)\n 766→ time.sleep(2) # Attendre le chargement\n 767→\n 768→ return ActionResult(\n 769→ success=True,\n 770→ action=action,\n 771→ message=f"URL '{url}' ouverte"\n 772→ )\n 773→ except Exception as e:\n 774→ return ActionResult(\n 775→ success=False,\n 776→ action=action,\n 777→ message=f"Erreur ouverture URL: {e}"\n 778→ )\n 779→\n 780→ async def _exec_click(self, action: PlannedAction) -> ActionResult:\n 781→ """Exécute un clic sur un 
élément avec détection visuelle."""\n 782→\n 783→ target = action.target or action.parameters.get('target', '')\n 784→\n 785→ # Si on a un ActionExecutor configuré, l'utiliser en priorité\n 786→ if self._action_executor:\n 787→ try:\n 788→ screenshot = None\n 789→ if self._screen_capturer:\n 790→ screenshot = self._screen_capturer()\n 791→\n 792→ result = self._action_executor(\n 793→ action_type="click",\n 794→ target=target,\n 795→ screenshot=screenshot\n 796→ )\n 797→\n 798→ return ActionResult(\n 799→ success=result.get('success', False),\n 800→ action=action,\n 801→ message=result.get('message', 'Clic effectué')\n 802→ )\n 803→ except Exception as e:\n 804→ logger.warning(f"ActionExecutor failed: {e}, using visual detection")\n 805→\n 806→ # Utiliser la détection visuelle\n 807→ try:\n 808→ if not PYAUTOGUI_AVAILABLE:\n 809→ return ActionResult(\n 810→ success=False,\n 811→ action=action,\n 812→ message="pyautogui non disponible"\n 813→ )\n 814→\n 815→ # Capturer l'écran\n 816→ screenshot = self._capture_screen()\n 817→\n 818→ # Trouver l'élément par sa description\n 819→ position = self._find_element_by_description(target, screenshot)\n 820→\n 821→ if position:\n 822→ x, y = position\n 823→ logger.info(f"Clicking at ({x}, {y}) for target: '{target}'")\n 824→ pyautogui.click(x, y)\n 825→\n 826→ return ActionResult(\n 827→ success=True,\n 828→ action=action,\n 829→ message=f"Clic à ({x}, {y}) pour '{target}'"\n 830→ )\n 831→ else:\n 832→ return ActionResult(\n 833→ success=False,\n 834→ action=action,\n 835→ message=f"Élément non trouvé: '{target}'"\n 836→ )\n 837→\n 838→ except Exception as e:\n 839→ logger.error(f"Click error: {e}")\n 840→ return ActionResult(\n 841→ success=False,\n 842→ action=action,\n 843→ message=f"Erreur clic: {e}"\n 844→ )\n 845→\n 846→ async def _exec_type_text(self, action: PlannedAction) -> ActionResult:\n 847→ """Tape du texte dans un champ (clique d'abord sur le champ si spécifié)."""\n 848→ try:\n 849→ if not PYAUTOGUI_AVAILABLE:\n 850→ return ActionResult(\n 851→ success=False,\n 852→ action=action,\n 853→ message="pyautogui non disponible"\n 854→ )\n 855→\n 856→ text = action.parameters.get('text', '')\n 857→ target = action.target or action.parameters.get('target', '')\n 858→\n 859→ # Si une cible est spécifiée, cliquer dessus d'abord\n 860→ if target:\n 861→ screenshot = self._capture_screen()\n 862→ position = self._find_element_by_description(target, screenshot)\n 863→\n 864→ if position:\n 865→ x, y = position\n 866→ logger.info(f"Clicking on input field at ({x}, {y}) before typing")\n 867→ pyautogui.click(x, y)\n 868→ time.sleep(0.5) # Attendre que le focus soit acquis\n 869→ else:\n 870→ # Champ non trouvé - on continue car le focus est peut-être déjà au bon endroit\n 871→ # (ex: après un hotkey ctrl+l ou ctrl+k)\n 872→ logger.info(f"Target '{target}' not found visually, assuming focus is already correct")\n 873→\n 874→ # Taper le texte\n 875→ time.sleep(0.2)\n 876→\n 877→ # Utiliser write() pour le texte Unicode (français, etc.)\n 878→ if text.isascii():\n 879→ pyautogui.typewrite(text, interval=0.03)\n 880→ else:\n 881→ # Pour les caractères non-ASCII, utiliser le presse-papier\n 882→ import subprocess\n 883→ # Copier dans le presse-papier\n 884→ process = subprocess.Popen(['xclip', '-selection', 'clipboard'], stdin=subprocess.PIPE)\n 885→ process.communicate(text.encode('utf-8'))\n 886→ # Coller avec Ctrl+V\n 887→ pyautogui.hotkey('ctrl', 'v')\n 888→\n 889→ return ActionResult(\n 890→ success=True,\n 891→ action=action,\n 892→ message=f"Texte 
tapé: '{text[:30]}...'" if len(text) > 30 else f"Texte tapé: '{text}'"\n 893→ )\n 894→\n 895→ except Exception as e:\n 896→ logger.error(f"Type text error: {e}")\n 897→ return ActionResult(\n 898→ success=False,\n 899→ action=action,\n 900→ message=f"Erreur frappe: {e}"\n 901→ )\n 902→\n 903→ async def _exec_hotkey(self, action: PlannedAction) -> ActionResult:\n 904→ """Exécute un raccourci clavier."""\n 905→ try:\n 906→ import pyautogui\n 907→\n 908→ keys = action.parameters.get('keys', 'Return')\n 909→\n 910→ # Parser les combinaisons de touches\n 911→ if '+' in keys:\n 912→ key_list = [k.strip().lower() for k in keys.split('+')]\n 913→ pyautogui.hotkey(*key_list)\n 914→ else:\n 915→ pyautogui.press(keys.lower())\n 916→\n 917→ return ActionResult(\n 918→ success=True,\n 919→ action=action,\n 920→ message=f"Touche(s) '{keys}' pressée(s)"\n 921→ )\n 922→\n 923→ except Exception as e:\n 924→ return ActionResult(\n 925→ success=False,\n 926→ action=action,\n 927→ message=f"Erreur hotkey: {e}"\n 928→ )\n 929→\n 930→ async def _exec_scroll(self, action: PlannedAction) -> ActionResult:\n 931→ """Effectue un scroll."""\n 932→ try:\n 933→ import pyautogui\n 934→\n 935→ direction = action.parameters.get('direction', 'down')\n 936→ amount = action.parameters.get('amount', 3)\n 937→\n 938→ scroll_amount = amount if direction == 'up' else -amount\n 939→ pyautogui.scroll(scroll_amount)\n 940→\n 941→ return ActionResult(\n 942→ success=True,\n 943→ action=action,\n 944→ message=f"Scroll {direction} de {amount}"\n 945→ )\n 946→\n 947→ except Exception as e:\n 948→ return ActionResult(\n 949→ success=False,\n 950→ action=action,\n 951→ message=f"Erreur scroll: {e}"\n 952→ )\n 953→\n 954→ async def _exec_wait(self, action: PlannedAction) -> ActionResult:\n 955→ """Attend un certain temps."""\n 956→ seconds = action.parameters.get('seconds', 1)\n 957→ time.sleep(seconds)\n 958→\n 959→ return ActionResult(\n 960→ success=True,\n 961→ action=action,\n 962→ message=f"Attente de {seconds}s terminée"\n 963→ )\n 964→\n 965→ async def _exec_screenshot(self, action: PlannedAction) -> ActionResult:\n 966→ """Capture l'écran."""\n 967→ try:\n 968→ if self._screen_capturer:\n 969→ screenshot = self._screen_capturer()\n 970→ # Sauvegarder si nécessaire\n 971→ path = f"/tmp/screenshot_{datetime.now().strftime('%H%M%S')}.png"\n 972→\n 973→ return ActionResult(\n 974→ success=True,\n 975→ action=action,\n 976→ message="Screenshot capturé",\n 977→ screenshot_path=path\n 978→ )\n 979→ else:\n 980→ import pyautogui\n 981→ screenshot = pyautogui.screenshot()\n 982→ path = f"/tmp/screenshot_{datetime.now().strftime('%H%M%S')}.png"\n 983→ screenshot.save(path)\n 984→\n 985→ return ActionResult(\n 986→ success=True,\n 987→ action=action,\n 988→ message="Screenshot capturé",\n 989→ screenshot_path=path\n 990→ )\n 991→\n 992→ except Exception as e:\n 993→ return ActionResult(\n 994→ success=False,\n 995→ action=action,\n 996→ message=f"Erreur screenshot: {e}"\n 997→ )\n 998→\n 999→\n 1000→# Singleton\n 1001→_planner_instance: Optional[AutonomousPlanner] = None\n 1002→\n 1003→\n 1004→def get_autonomous_planner(\n 1005→ llm_model: str = "qwen2.5:7b"\n 1006→) -> AutonomousPlanner:\n 1007→ """Retourne l'instance singleton du planner."""\n 1008→ global _planner_instance\n 1009→\n 1010→ if _planner_instance is None:\n 1011→ _planner_instance = AutonomousPlanner(llm_model=llm_model)\n 1012→\n 1013→ return _planner_instance\n 
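The greedy-brace regex in _parse_plan above, r'\{[\s\S]*\}', is the planner's whole defence against chatty LLM output. A minimal self-contained sketch of that parsing step, with one extra pass for markdown fences; extract_plan_json is a hypothetical helper, not project code:

import json
import re
from typing import Optional

def extract_plan_json(llm_response: str) -> Optional[dict]:
    """Pull the first JSON object out of a free-form LLM reply."""
    match = re.search(r'\{[\s\S]*\}', llm_response)  # first '{' to last '}'
    if not match:
        return None
    candidate = match.group()
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        # Some models wrap their JSON in ``` fences; strip them and retry once.
        cleaned = "\n".join(
            line for line in candidate.splitlines()
            if not line.strip().startswith("```")
        )
        try:
            return json.loads(cleaned)
        except json.JSONDecodeError:
            return None

Greedy matching spans from the first { to the last }, which tolerates prose before and after the plan but assumes the reply contains exactly one JSON object.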
1014→"}]},"uuid":"11f4ac18-a8f3-44aa-9c09-180da7980968","timestamp":"2026-03-14T21:14:47.061Z","sourceToolAssistantUUID":"e6c09744-f7f8-4f2e-b18d-b260036d6f58"} {"parentUuid":"11f4ac18-a8f3-44aa-9c09-180da7980968","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01H8ZjFUhrqBrpRGKpFeQHu8","type":"message","role":"assistant","content":[{"type":"text","text":"Now let me check the remaining workflow_graph.py model code (edges, Action, TargetSpec, Workflow class) and the OllamaClient for VLM."}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":19869,"cache_read_input_tokens":73649,"cache_creation":{"ephemeral_5m_input_tokens":19869,"ephemeral_1h_input_tokens":0},"output_tokens":5,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fWGtGR8nfcrwezUJkN","type":"assistant","uuid":"87ad57d3-2eac-4f51-abdf-6ffbe836a341","timestamp":"2026-03-14T21:14:51.234Z"} {"parentUuid":"87ad57d3-2eac-4f51-abdf-6ffbe836a341","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01H8ZjFUhrqBrpRGKpFeQHu8","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_013nBBeHdHp9v2zyV7mDi9tj","name":"Read","input":{"file_path":"/home/dom/.claude/projects/-home-dom-ai-rpa-vision-v3/eec8c668-e1f8-4599-9dd5-42170a166ba4/tool-results/toolu_01X8WDVqsezUXH1vkcskBwyT.txt","offset":550,"limit":350},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":19869,"cache_read_input_tokens":73649,"cache_creation":{"ephemeral_5m_input_tokens":19869,"ephemeral_1h_input_tokens":0},"output_tokens":5,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fWGtGR8nfcrwezUJkN","type":"assistant","uuid":"97aaadb2-200e-4567-b137-6a689a807e14","timestamp":"2026-03-14T21:14:55.188Z"} {"parentUuid":"97aaadb2-200e-4567-b137-6a689a807e14","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_013nBBeHdHp9v2zyV7mDi9tj","toolUseID":"toolu_013nBBeHdHp9v2zyV7mDi9tj","timestamp":"2026-03-14T21:14:55.212Z","uuid":"99b6d6fb-7717-490f-a2ca-b02075b058c7"} {"parentUuid":"97aaadb2-200e-4567-b137-6a689a807e14","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_013nBBeHdHp9v2zyV7mDi9tj","type":"tool_result","content":" 550→ 550→ return None\n 551→ 551→ \n 552→ 552→ def add_spatial_relation(self, relation: SpatialRelation) -> None:\n 553→ 553→ """Ajouter une relation spatiale"""\n 554→ 554→ # Éviter les doublons\n 555→ 555→ for existing in 
self.spatial_relations:\n 556→ 556→ if (existing.source_element_id == relation.source_element_id and\n 557→ 557→ existing.target_element_id == relation.target_element_id and\n 558→ 558→ existing.relation_type == relation.relation_type):\n 559→ 559→ return\n 560→ 560→ self.spatial_relations.append(relation)\n 561→ 561→ \n 562→ 562→ def get_relations_for_element(self, element_id: str) -> List[SpatialRelation]:\n 563→ 563→ """Récupérer toutes les relations impliquant un élément"""\n 564→ 564→ return [r for r in self.spatial_relations \n 565→ 565→ if r.source_element_id == element_id or r.target_element_id == element_id]\n 566→ 566→\n 567→ 567→\n 568→ 568→# ============================================================================\n 569→ 569→# WorkflowEdge et Actions\n 570→ 570→# ============================================================================\n 571→ 571→\n 572→ 572→@dataclass\n 573→ 573→class TargetSpec:\n 574→ 574→ """\n 575→ 575→ Spécification de la cible d'une action.\n 576→ 576→ \n 577→ 577→ Supporte plusieurs stratégies de résolution:\n 578→ 578→ - by_role: Recherche par rôle sémantique\n 579→ 579→ - by_text: Recherche par texte exact ou partiel\n 580→ 580→ - by_position: Recherche par coordonnées\n 581→ 581→ - embedding_ref: Recherche par similarité visuelle\n 582→ 582→ \n 583→ 583→ Fiche #11 - Multi-anchor + contraintes combinées:\n 584→ 584→ - hard_constraints: Contraintes strictes (within_container_text, min_area)\n 585→ 585→ - weights: Pondération des critères de scoring\n 586→ 586→ """\n 587→ 587→ # Stratégies de recherche (au moins une requise)\n 588→ 588→ by_role: Optional[str] = None # Rôle sémantique (e.g., "button", "textfield")\n 589→ 589→ by_text: Optional[str] = None # Texte à rechercher\n 590→ 590→ by_position: Optional[Tuple[int, int]] = None # Coordonnées (x, y)\n 591→ 591→ \n 592→ 592→ # Configuration\n 593→ 593→ selection_policy: str = "first" # first, last, by_similarity, largest, nearest_to_previous\n 594→ 594→ fallback_strategy: str = "visual_similarity" # visual_similarity, position\n 595→ 595→ embedding_ref: Optional[Any] = None # Embedding de référence pour similarité\n 596→ 596→ \n 597→ 597→ # Contexte spatial (optionnel)\n 598→ 598→ context_hints: Dict[str, Any] = field(default_factory=dict) # near_text, below_text, right_of_text\n 599→ 599→ \n 600→ 600→ # Fiche #11: Contraintes & pondérations (facultatif, backward compatible)\n 601→ 601→ hard_constraints: Dict[str, Any] = field(default_factory=dict) # within_container_text, min_area\n 602→ 602→ weights: Dict[str, float] = field(default_factory=dict) # proximity, alignment, container, roi_iou\n 603→ 603→ \n 604→ 604→ # Legacy support\n 605→ 605→ @property\n 606→ 606→ def role(self) -> Optional[str]:\n 607→ 607→ """Alias pour by_role (compatibilité)"""\n 608→ 608→ return self.by_role\n 609→ 609→ \n 610→ 610→ def to_dict(self) -> Dict[str, Any]:\n 611→ 611→ return {\n 612→ 612→ "by_role": self.by_role,\n 613→ 613→ "by_text": self.by_text,\n 614→ 614→ "by_position": list(self.by_position) if self.by_position else None,\n 615→ 615→ "selection_policy": self.selection_policy,\n 616→ 616→ "fallback_strategy": self.fallback_strategy,\n 617→ 617→ "context_hints": self.context_hints,\n 618→ 618→ # Fiche #11: Nouveaux champs\n 619→ 619→ "hard_constraints": self.hard_constraints,\n 620→ 620→ "weights": self.weights\n 621→ 621→ }\n 622→ 622→ \n 623→ 623→ @classmethod\n 624→ 624→ def from_dict(cls, data: Dict[str, Any]) -> 'TargetSpec':\n 625→ 625→ by_position = data.get("by_position")\n 626→ 626→ if 
by_position:\n 627→ 627→ by_position = tuple(by_position)\n 628→ 628→ \n 629→ 629→ return cls(\n 630→ 630→ by_role=data.get("by_role") or data.get("role"), # Support legacy "role"\n 631→ 631→ by_text=data.get("by_text"),\n 632→ 632→ by_position=by_position,\n 633→ 633→ selection_policy=data.get("selection_policy", "first"),\n 634→ 634→ fallback_strategy=data.get("fallback_strategy", "visual_similarity"),\n 635→ 635→ context_hints=data.get("context_hints", {}),\n 636→ 636→ # Fiche #11: Nouveaux champs avec defaults safe\n 637→ 637→ hard_constraints=data.get("hard_constraints", {}),\n 638→ 638→ weights=data.get("weights", {})\n 639→ 639→ )\n 640→ 640→\n 641→ 641→\n 642→ 642→@dataclass\n 643→ 643→class Action:\n 644→ 644→ """Action à exécuter sur un élément UI"""\n 645→ 645→ type: str # mouse_click, text_input, key_press, compound\n 646→ 646→ target: TargetSpec\n 647→ 647→ parameters: Dict[str, Any] = field(default_factory=dict)\n 648→ 648→ \n 649→ 649→ @property\n 650→ 650→ def params(self) -> Dict[str, Any]:\n 651→ 651→ return self.parameters\n 652→ 652→\n 653→ 653→ @params.setter\n 654→ 654→ def params(self, value: Dict[str, Any]) -> None:\n 655→ 655→ self.parameters = value\n 656→ 656→ \n 657→ 657→ def to_dict(self) -> Dict[str, Any]:\n 658→ 658→ return {\n 659→ 659→ "type": self.type,\n 660→ 660→ "target": self.target.to_dict(),\n 661→ 661→ "parameters": self.parameters\n 662→ 662→ }\n 663→ 663→ \n 664→ 664→ @classmethod\n 665→ 665→ def from_dict(cls, data: Dict[str, Any]) -> 'Action':\n 666→ 666→ return cls(\n 667→ 667→ type=data["type"],\n 668→ 668→ target=TargetSpec.from_dict(data["target"]),\n 669→ 669→ parameters=data.get("parameters", {})\n 670→ 670→ )\n 671→ 671→\n 672→ 672→\n 673→ 673→@dataclass\n 674→ 674→class EdgeConstraints:\n 675→ 675→ """Contraintes pour l'exécution d'un edge"""\n 676→ 676→ pre_conditions: Dict[str, Any] = field(default_factory=dict)\n 677→ 677→ required_confidence: float = 0.8\n 678→ 678→ max_wait_time_ms: int = 5000\n 679→ 679→ \n 680→ 680→ def to_dict(self) -> Dict[str, Any]:\n 681→ 681→ return {\n 682→ 682→ "pre_conditions": self.pre_conditions,\n 683→ 683→ "required_confidence": self.required_confidence,\n 684→ 684→ "max_wait_time_ms": self.max_wait_time_ms\n 685→ 685→ }\n 686→ 686→ \n 687→ 687→ @classmethod\n 688→ 688→ def from_dict(cls, data: Dict[str, Any]) -> 'EdgeConstraints':\n 689→ 689→ return cls(\n 690→ 690→ pre_conditions=data.get("pre_conditions", {}),\n 691→ 691→ required_confidence=data.get("required_confidence", 0.8),\n 692→ 692→ max_wait_time_ms=data.get("max_wait_time_ms", 5000)\n 693→ 693→ )\n 694→ 694→\n 695→ 695→\n 696→ 696→@dataclass\n 697→ 697→class PostConditionCheck:\n 698→ 698→ """\n 699→ 699→ kind: - "text_present": value=str\n 700→ 700→ - "text_absent": value=str\n 701→ 701→ - "element_present": target=TargetSpec\n 702→ 702→ - "window_title_contains": value=str\n 703→ 703→ """\n 704→ 704→ kind: str\n 705→ 705→ value: Optional[str] = None\n 706→ 706→ target: Optional["TargetSpec"] = None\n 707→ 707→\n 708→ 708→\n 709→ 709→@dataclass\n 710→ 710→class PostConditions:\n 711→ 711→ """Post-conditions attendues après exécution - Fiche #9"""\n 712→ 712→ # (garde tes champs existants si tu en as déjà, et ajoute ceux-ci)\n 713→ 713→ \n 714→ 714→ success_mode: str = "all" # "all" | "any"\n 715→ 715→ timeout_ms: int = 2500\n 716→ 716→ poll_ms: int = 200\n 717→ 717→ \n 718→ 718→ success: List[PostConditionCheck] = field(default_factory=list)\n 719→ 719→ fail_fast: List[PostConditionCheck] = field(default_factory=list)\n 720→ 720→ \n 721→ 
721→ retries: int = 2 # nb de tentatives après échec post-conditions\n 722→ 722→ backoff_ms: int = 150 # 150, 300, 600...\n 723→ 723→ \n 724→ 724→ # Legacy fields (garde compatibilité)\n 725→ 725→ expected_node: Optional[str] = None # Node attendu après action\n 726→ 726→ window_change_expected: bool = False\n 727→ 727→ new_ui_elements_expected: List[str] = field(default_factory=list)\n 728→ 728→ \n 729→ 729→ def to_dict(self) -> Dict[str, Any]:\n 730→ 730→ return {\n 731→ 731→ "success_mode": self.success_mode,\n 732→ 732→ "timeout_ms": self.timeout_ms,\n 733→ 733→ "poll_ms": self.poll_ms,\n 734→ 734→ "success": [{"kind": c.kind, "value": c.value, "target": c.target.to_dict() if c.target else None} for c in self.success],\n 735→ 735→ "fail_fast": [{"kind": c.kind, "value": c.value, "target": c.target.to_dict() if c.target else None} for c in self.fail_fast],\n 736→ 736→ "retries": self.retries,\n 737→ 737→ "backoff_ms": self.backoff_ms,\n 738→ 738→ # Legacy\n 739→ 739→ "expected_node": self.expected_node,\n 740→ 740→ "window_change_expected": self.window_change_expected,\n 741→ 741→ "new_ui_elements_expected": self.new_ui_elements_expected\n 742→ 742→ }\n 743→ 743→ \n 744→ 744→ @classmethod\n 745→ 745→ def from_dict(cls, data: Dict[str, Any]) -> 'PostConditions':\n 746→ 746→ success_checks = []\n 747→ 747→ for c in data.get("success", []):\n 748→ 748→ target = TargetSpec.from_dict(c["target"]) if c.get("target") else None\n 749→ 749→ success_checks.append(PostConditionCheck(kind=c["kind"], value=c.get("value"), target=target))\n 750→ 750→ \n 751→ 751→ fail_fast_checks = []\n 752→ 752→ for c in data.get("fail_fast", []):\n 753→ 753→ target = TargetSpec.from_dict(c["target"]) if c.get("target") else None\n 754→ 754→ fail_fast_checks.append(PostConditionCheck(kind=c["kind"], value=c.get("value"), target=target))\n 755→ 755→ \n 756→ 756→ return cls(\n 757→ 757→ success_mode=data.get("success_mode", "all"),\n 758→ 758→ timeout_ms=data.get("timeout_ms", 2500),\n 759→ 759→ poll_ms=data.get("poll_ms", 200),\n 760→ 760→ success=success_checks,\n 761→ 761→ fail_fast=fail_fast_checks,\n 762→ 762→ retries=data.get("retries", 2),\n 763→ 763→ backoff_ms=data.get("backoff_ms", 150),\n 764→ 764→ # Legacy\n 765→ 765→ expected_node=data.get("expected_node"),\n 766→ 766→ window_change_expected=data.get("window_change_expected", False),\n 767→ 767→ new_ui_elements_expected=data.get("new_ui_elements_expected", [])\n 768→ 768→ )\n 769→ 769→\n 770→ 770→\n 771→ 771→@dataclass\n 772→ 772→class EdgeStats:\n 773→ 773→ """Statistiques d'exécution d'un edge"""\n 774→ 774→ execution_count: int = 0\n 775→ 775→ success_count: int = 0\n 776→ 776→ failure_count: int = 0\n 777→ 777→ avg_execution_time_ms: float = 0.0\n 778→ 778→ last_executed: Optional[datetime] = None\n 779→ 779→ \n 780→ 780→ @property\n 781→ 781→ def success_rate(self) -> float:\n 782→ 782→ if self.execution_count == 0:\n 783→ 783→ return 0.0\n 784→ 784→ return self.success_count / self.execution_count\n 785→ 785→ \n 786→ 786→ def to_dict(self) -> Dict[str, Any]:\n 787→ 787→ return {\n 788→ 788→ "execution_count": self.execution_count,\n 789→ 789→ "success_count": self.success_count,\n 790→ 790→ "failure_count": self.failure_count,\n 791→ 791→ "avg_execution_time_ms": self.avg_execution_time_ms,\n 792→ 792→ "last_executed": self.last_executed.isoformat() if self.last_executed else None\n 793→ 793→ }\n 794→ 794→ \n 795→ 795→ @classmethod\n 796→ 796→ def from_dict(cls, data: Dict[str, Any]) -> 'EdgeStats':\n 797→ 797→ last_executed = None\n 798→ 798→ if 
data.get("last_executed"):\n 799→ 799→ last_executed = datetime.fromisoformat(data["last_executed"])\n 800→ 800→ \n 801→ 801→ return cls(\n 802→ 802→ execution_count=data.get("execution_count", 0),\n 803→ 803→ success_count=data.get("success_count", 0),\n 804→ 804→ failure_count=data.get("failure_count", 0),\n 805→ 805→ avg_execution_time_ms=data.get("avg_execution_time_ms", 0.0),\n 806→ 806→ last_executed=last_executed\n 807→ 807→ )\n 808→ 808→\n 809→ 809→\n 810→ 810→@dataclass\n 811→ 811→class WorkflowEdge:\n 812→ 812→ """\n 813→ 813→ Edge dans un Workflow Graph\n 814→ 814→ \n 815→ 815→ Représente une transition entre deux états via une action.\n 816→ 816→ """\n 817→ 817→ edge_id: str\n 818→ 818→ from_node: str\n 819→ 819→ to_node: str\n 820→ 820→ action: Action\n 821→ 821→ constraints: EdgeConstraints\n 822→ 822→ post_conditions: PostConditions\n 823→ 823→ stats: EdgeStats = field(default_factory=EdgeStats)\n 824→ 824→ metadata: Dict[str, Any] = field(default_factory=dict)\n 825→ 825→ \n 826→ 826→ def can_execute(self, current_state: Any) -> Tuple[bool, str]:\n 827→ 827→ """\n 828→ 828→ Vérifier si l'edge peut être exécuté\n 829→ 829→ \n 830→ 830→ Returns:\n 831→ 831→ (can_execute: bool, reason: str)\n 832→ 832→ """\n 833→ 833→ # Vérifier pre-conditions\n 834→ 834→ for key, value in self.constraints.pre_conditions.items():\n 835→ 835→ if not hasattr(current_state, key):\n 836→ 836→ return False, f"Missing pre-condition: {key}"\n 837→ 837→ if getattr(current_state, key) != value:\n 838→ 838→ return False, f"Pre-condition not met: {key}"\n 839→ 839→ \n 840→ 840→ return True, "OK"\n 841→ 841→ \n 842→ 842→ def to_dict(self) -> Dict[str, Any]:\n 843→ 843→ return {\n 844→ 844→ "edge_id": self.edge_id,\n 845→ 845→ "from_node": self.from_node,\n 846→ 846→ "to_node": self.to_node,\n 847→ 847→ "action": self.action.to_dict(),\n 848→ 848→ "constraints": self.constraints.to_dict(),\n 849→ 849→ "post_conditions": self.post_conditions.to_dict(),\n 850→ 850→ "stats": self.stats.to_dict(),\n 851→ 851→ "metadata": self.metadata\n 852→ 852→ }\n 853→ 853→ \n 854→ 854→ @classmethod\n 855→ 855→ def from_dict(cls, data: Dict[str, Any]) -> 'WorkflowEdge':\n 856→ 856→ # Support both naming conventions\n 857→ 857→ from_node = data.get("from_node") or data.get("source_node")\n 858→ 858→ to_node = data.get("to_node") or data.get("target_node")\n 859→ 859→ \n 860→ 860→ return cls(\n 861→ 861→ edge_id=data["edge_id"],\n 862→ 862→ from_node=from_node,\n 863→ 863→ to_node=to_node,\n 864→ 864→ action=Action.from_dict(data["action"]),\n 865→ 865→ constraints=EdgeConstraints.from_dict(data.get("constraints", {})),\n 866→ 866→ post_conditions=PostConditions.from_dict(data.get("post_conditions", {})),\n 867→ 867→ stats=EdgeStats.from_dict(data.get("stats", {})),\n 868→ 868→ metadata=data.get("metadata", {})\n 869→ 869→ )\n 870→ 870→\n 871→ 871→\n 872→ 872→# ============================================================================\n 873→ 873→# Workflow\n 874→ 874→# ============================================================================\n 875→ 875→\n 876→ 876→@dataclass\n 877→ 877→class SafetyRules:\n 878→ 878→ """Règles de sécurité pour un workflow"""\n 879→ 879→ # Pas de limite d'exécutions - les workflows peuvent tourner indéfiniment\n 880→ 880→ require_confirmation_for: List[str] = field(default_factory=list)\n 881→ 881→ forbidden_windows: List[str] = field(default_factory=list)\n 882→ 882→ # Timeout global optionnel (0 = pas de limite)\n 883→ 883→ execution_timeout_minutes: int = 0\n 884→ 884→ \n 885→ 885→ def 
to_dict(self) -> Dict[str, Any]:\n 886→ 886→ return {\n 887→ 887→ "require_confirmation_for": self.require_confirmation_for,\n 888→ 888→ "forbidden_windows": self.forbidden_windows,\n 889→ 889→ "execution_timeout_minutes": self.execution_timeout_minutes\n 890→ 890→ }\n 891→ 891→ \n 892→ 892→ @classmethod\n 893→ 893→ def from_dict(cls, data: Dict[str, Any]) -> 'SafetyRules':\n 894→ 894→ return cls(\n 895→ 895→ require_confirmation_for=data.get("require_confirmation_for", []),\n 896→ 896→ forbidden_windows=data.get("forbidden_windows", []),\n 897→ 897→ execution_timeout_minutes=data.get("execution_timeout_minutes", 0)\n 898→ 898→ )\n 899→ 899→"}]},"uuid":"67d8f0e2-1545-467f-a589-4e5660418e44","timestamp":"2026-03-14T21:14:55.210Z","sourceToolAssistantUUID":"97aaadb2-200e-4567-b137-6a689a807e14"} {"parentUuid":"67d8f0e2-1545-467f-a589-4e5660418e44","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01H8ZjFUhrqBrpRGKpFeQHu8","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01EBX746AqpXNFUkatu4p96j","name":"Grep","input":{"pattern":"class OllamaClient","output_mode":"files_with_matches","path":"/home/dom/ai/rpa_vision_v3"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":19869,"cache_read_input_tokens":73649,"cache_creation":{"ephemeral_5m_input_tokens":19869,"ephemeral_1h_input_tokens":0},"output_tokens":5,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fWGtGR8nfcrwezUJkN","type":"assistant","uuid":"35fb6407-70b5-45e7-8eb4-960cab4f4553","timestamp":"2026-03-14T21:14:56.631Z"} {"parentUuid":"35fb6407-70b5-45e7-8eb4-960cab4f4553","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Grep","command":"callback"},"parentToolUseID":"toolu_01EBX746AqpXNFUkatu4p96j","toolUseID":"toolu_01EBX746AqpXNFUkatu4p96j","timestamp":"2026-03-14T21:14:56.723Z","uuid":"23a264df-e1b8-484c-812c-b601e7d64ca5"} {"parentUuid":"35fb6407-70b5-45e7-8eb4-960cab4f4553","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01EBX746AqpXNFUkatu4p96j","type":"tool_result","content":"Found 2 files\n/home/dom/ai/rpa_vision_v3/core/detection/ollama_client.py\n/home/dom/ai/rpa_vision_v3/archives/rpa_vision_v3_code_docs_20251214/core/detection/ollama_client.py"}]},"uuid":"5f2e8617-2c1f-4059-8ecf-fdce6bedb2f8","timestamp":"2026-03-14T21:14:56.722Z","sourceToolAssistantUUID":"35fb6407-70b5-45e7-8eb4-960cab4f4553"} 
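PostConditions above encodes a poll-until-timeout contract (timeout_ms, poll_ms, success_mode) plus a retry budget (retries, backoff_ms). A minimal sketch of the polling half, assuming the check callables are produced elsewhere from each PostConditionCheck by a screen-analysis layer:

import time
from typing import Callable, List

def wait_post_conditions(checks: List[Callable[[], bool]],
                         success_mode: str = "all",
                         timeout_ms: int = 2500,
                         poll_ms: int = 200) -> bool:
    """Re-evaluate the checks every poll_ms until they pass or time runs out."""
    combine = all if success_mode == "all" else any
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if combine(check() for check in checks):
            return True
        time.sleep(poll_ms / 1000.0)
    return False

The retry half wraps this loop: on a False result the edge's action would be re-executed up to retries times with pauses of backoff_ms, then double that, and so on, matching the 150, 300, 600 comment in the dataclass.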
{"parentUuid":"5f2e8617-2c1f-4059-8ecf-fdce6bedb2f8","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01H8ZjFUhrqBrpRGKpFeQHu8","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_012N2J9nryWVbntT1L8qcGwz","name":"Glob","input":{"pattern":"**/vlm_provider","path":"/home/dom/ai/rpa_vision_v3"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":19869,"cache_read_input_tokens":73649,"output_tokens":366,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":19869},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fWGtGR8nfcrwezUJkN","type":"assistant","uuid":"a9a83ba4-63e5-40b7-9307-cacdf5add4fa","timestamp":"2026-03-14T21:14:57.119Z"} {"parentUuid":"a9a83ba4-63e5-40b7-9307-cacdf5add4fa","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Glob","command":"callback"},"parentToolUseID":"toolu_012N2J9nryWVbntT1L8qcGwz","toolUseID":"toolu_012N2J9nryWVbntT1L8qcGwz","timestamp":"2026-03-14T21:14:57.546Z","uuid":"acdfe641-74a1-462e-8b8c-3429f7bddae5"} {"parentUuid":"a9a83ba4-63e5-40b7-9307-cacdf5add4fa","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_012N2J9nryWVbntT1L8qcGwz","type":"tool_result","content":"/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/vlm_provider.py\n/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/pycache/vlm_provider.cpython-312.pyc"}]},"uuid":"cd3ccc9f-e0f6-4f98-9486-aca108286ca7","timestamp":"2026-03-14T21:14:57.545Z","sourceToolAssistantUUID":"a9a83ba4-63e5-40b7-9307-cacdf5add4fa"} {"parentUuid":"cd3ccc9f-e0f6-4f98-9486-aca108286ca7","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_019YS9SGZykUqmK5jrorwCyc","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01BSoCow5m65osTC3pYXWYiK","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/core/detection/ollama_client.py"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":6693,"cache_read_input_tokens":93518,"cache_creation":{"ephemeral_5m_input_tokens":6693,"ephemeral_1h_input_tokens":0},"output_tokens":25,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fX3j2Pf2pk4pA4ri12","type":"assistant","uuid":"4e8b7992-eb31-4cf0-aecf-1654710832ae","timestamp":"2026-03-14T21:15:01.922Z"} 
{"parentUuid":"4e8b7992-eb31-4cf0-aecf-1654710832ae","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01BSoCow5m65osTC3pYXWYiK","toolUseID":"toolu_01BSoCow5m65osTC3pYXWYiK","timestamp":"2026-03-14T21:15:01.926Z","uuid":"5fb67f4d-1779-4597-bda9-f6ef6a352a88"} {"parentUuid":"4e8b7992-eb31-4cf0-aecf-1654710832ae","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01BSoCow5m65osTC3pYXWYiK","type":"tool_result","content":" 1→"""\n 2→OllamaClient - Client pour Vision-Language Models via Ollama\n 3→\n 4→Interface pour communiquer avec des VLM (Qwen, LLaVA, etc.) via Ollama.\n 5→"""\n 6→\n 7→import logging\n 8→from typing import Dict, List, Optional, Any\n 9→import requests\n 10→import json\n 11→import base64\n 12→from pathlib import Path\n 13→from PIL import Image\n 14→import io\n 15→\n 16→logger = logging.getLogger(name)\n 17→\n 18→\n 19→class OllamaClient:\n 20→ """\n 21→ Client Ollama pour VLM\n 22→ \n 23→ Permet d'envoyer des images et prompts à un VLM via l'API Ollama.\n 24→ """\n 25→ \n 26→ def init(self, \n 27→ endpoint: str = "http://localhost:11434",\n 28→ model: str = "qwen3-vl:8b",\n 29→ timeout: int = 60):\n 30→ """\n 31→ Initialiser le client Ollama\n 32→ \n 33→ Args:\n 34→ endpoint: URL de l'API Ollama\n 35→ model: Nom du modèle VLM à utiliser\n 36→ timeout: Timeout en secondes\n 37→ """\n 38→ self.endpoint = endpoint.rstrip('/')\n 39→ self.model = model\n 40→ self.timeout = timeout\n 41→ self._check_connection()\n 42→ \n 43→ def _check_connection(self) -> bool:\n 44→ """Vérifier la connexion à Ollama"""\n 45→ try:\n 46→ response = requests.get(f"{self.endpoint}/api/tags", timeout=5)\n 47→ if response.status_code == 200:\n 48→ models = response.json().get('models', [])\n 49→ model_names = [m['name'] for m in models]\n 50→ if self.model not in model_names:\n 51→ logger.warning(f" Model '{self.model}' not found in Ollama")\n 52→ logger.info(f"Available models: {model_names}")\n 53→ return True\n 54→ except Exception as e:\n 55→ logger.warning(f" Cannot connect to Ollama at {self.endpoint}: {e}")\n 56→ return False\n 57→ return False\n 58→ \n 59→ def generate(self,\n 60→ prompt: str,\n 61→ image_path: Optional[str] = None,\n 62→ image: Optional[Image.Image] = None,\n 63→ system_prompt: Optional[str] = None,\n 64→ temperature: float = 0.1,\n 65→ max_tokens: int = 500,\n 66→ force_json: bool = False) -> Dict[str, Any]:\n 67→ """\n 68→ Générer une réponse du VLM\n 69→ \n 70→ Args:\n 71→ prompt: Prompt textuel\n 72→ image_path: Chemin vers une image (optionnel)\n 73→ image: Image PIL (optionnel)\n 74→ system_prompt: Prompt système (optionnel)\n 75→ temperature: Température de génération\n 76→ max_tokens: Nombre max de tokens\n 77→ \n 78→ Returns:\n 79→ Dict avec 'response', 'success', 'error'\n 80→ """\n 81→ try:\n 82→ # Préparer l'image si fournie\n 83→ image_data = None\n 84→ if image_path:\n 85→ image_data = self._encode_image_from_path(image_path)\n 86→ elif image:\n 87→ image_data = 
self._encode_image_from_pil(image)\n 88→ \n 89→ # Construire la requête avec thinking mode désactivé\n 90→ # Pour Qwen3, utiliser /nothink au début du prompt\n 91→ effective_prompt = prompt\n 92→ if "qwen" in self.model.lower():\n 93→ effective_prompt = f"/nothink {prompt}"\n 94→ \n 95→ payload = {\n 96→ "model": self.model,\n 97→ "prompt": effective_prompt,\n 98→ "stream": False,\n 99→ "options": {\n 100→ "temperature": temperature,\n 101→ "num_predict": max_tokens,\n 102→ "num_ctx": 2048, # Contexte réduit pour plus de vitesse\n 103→ "top_k": 1 # Plus rapide pour les tâches de classification\n 104→ }\n 105→ }\n 106→\n 107→ # Forcer la sortie JSON si demandé (réduit drastiquement les erreurs de parsing)\n 108→ if force_json:\n 109→ payload["format"] = "json"\n 110→\n 111→ if system_prompt:\n 112→ payload["system"] = system_prompt\n 113→\n 114→ if image_data:\n 115→ payload["images"] = [image_data]\n 116→ \n 117→ # Envoyer la requête\n 118→ response = requests.post(\n 119→ f"{self.endpoint}/api/generate",\n 120→ json=payload,\n 121→ timeout=self.timeout\n 122→ )\n 123→ \n 124→ if response.status_code == 200:\n 125→ result = response.json()\n 126→ return {\n 127→ "response": result.get("response", ""),\n 128→ "success": True,\n 129→ "error": None\n 130→ }\n 131→ else:\n 132→ return {\n 133→ "response": "",\n 134→ "success": False,\n 135→ "error": f"HTTP {response.status_code}: {response.text}"\n 136→ }\n 137→ \n 138→ except Exception as e:\n 139→ return {\n 140→ "response": "",\n 141→ "success": False,\n 142→ "error": str(e)\n 143→ }\n 144→ \n 145→ def detect_ui_elements(self, image_path: str) -> Dict[str, Any]:\n 146→ """\n 147→ Détecter les éléments UI dans une image\n 148→ \n 149→ Args:\n 150→ image_path: Chemin vers le screenshot\n 151→ \n 152→ Returns:\n 153→ Dict avec liste d'éléments détectés\n 154→ """\n 155→ prompt = """Analyze this screenshot and list all interactive UI elements you can see.\n 156→For each element, provide:\n 157→- Type (button, text_input, checkbox, radio, dropdown, tab, link, icon, table_row, menu_item)\n 158→- Position (approximate x, y coordinates)\n 159→- Label or text content\n 160→- Semantic role (primary_action, cancel, submit, form_input, search_field, navigation, settings, close)\n 161→\n 162→Format your response as JSON."""\n 163→ \n 164→ result = self.generate(prompt, image_path=image_path, temperature=0.1)\n 165→ \n 166→ if result["success"]:\n 167→ try:\n 168→ # Parser la réponse JSON\n 169→ elements = json.loads(result["response"])\n 170→ return {"elements": elements, "success": True}\n 171→ except json.JSONDecodeError:\n 172→ # Si pas JSON valide, retourner texte brut\n 173→ return {"elements": [], "success": False, "raw_response": result["response"]}\n 174→ \n 175→ return {"elements": [], "success": False, "error": result["error"]}\n 176→ \n 177→ def classify_element_type(self, \n 178→ element_image: Image.Image,\n 179→ context: Optional[str] = None) -> Dict[str, Any]:\n 180→ """\n 181→ Classifier le type d'un élément UI\n 182→ \n 183→ Args:\n 184→ element_image: Image de l'élément\n 185→ context: Contexte additionnel\n 186→ \n 187→ Returns:\n 188→ Dict avec 'type' et 'confidence'\n 189→ """\n 190→ types_list = "button, text_input, checkbox, radio, dropdown, tab, link, icon, table_row, menu_item"\n 191→ \n 192→ prompt = f"""What type of UI element is this?\n 193→Choose ONLY ONE from: {types_list}\n 194→\n 195→Respond with just the type name, nothing else."""\n 196→ \n 197→ if context:\n 198→ prompt += f"\n\nContext: {context}"\n 199→ \n 200→ 
result = self.generate(prompt, image=element_image, temperature=0.0)\n 201→ \n 202→ if result["success"]:\n 203→ element_type = result["response"].strip().lower()\n 204→ # Valider que c'est un type connu\n 205→ valid_types = types_list.split(", ")\n 206→ if element_type in valid_types:\n 207→ return {"type": element_type, "confidence": 0.9, "success": True}\n 208→ else:\n 209→ # Essayer de trouver le type le plus proche\n 210→ for vtype in valid_types:\n 211→ if vtype in element_type:\n 212→ return {"type": vtype, "confidence": 0.7, "success": True}\n 213→ \n 214→ return {"type": "unknown", "confidence": 0.0, "success": False}\n 215→ \n 216→ def classify_element_role(self,\n 217→ element_image: Image.Image,\n 218→ element_type: str,\n 219→ context: Optional[str] = None) -> Dict[str, Any]:\n 220→ """\n 221→ Classifier le rôle sémantique d'un élément\n 222→ \n 223→ Args:\n 224→ element_image: Image de l'élément\n 225→ element_type: Type de l'élément\n 226→ context: Contexte additionnel\n 227→ \n 228→ Returns:\n 229→ Dict avec 'role' et 'confidence'\n 230→ """\n 231→ roles_list = "primary_action, cancel, submit, form_input, search_field, navigation, settings, close, delete, edit, save"\n 232→ \n 233→ prompt = f"""This is a {element_type}. What is its semantic role or purpose?\n 234→Choose ONLY ONE from: {roles_list}\n 235→\n 236→Respond with just the role name, nothing else."""\n 237→ \n 238→ if context:\n 239→ prompt += f"\n\nContext: {context}"\n 240→ \n 241→ result = self.generate(prompt, image=element_image, temperature=0.0)\n 242→ \n 243→ if result["success"]:\n 244→ role = result["response"].strip().lower()\n 245→ # Valider que c'est un rôle connu\n 246→ valid_roles = roles_list.split(", ")\n 247→ if role in valid_roles:\n 248→ return {"role": role, "confidence": 0.9, "success": True}\n 249→ else:\n 250→ # Essayer de trouver le rôle le plus proche\n 251→ for vrole in valid_roles:\n 252→ if vrole in role:\n 253→ return {"role": vrole, "confidence": 0.7, "success": True}\n 254→ \n 255→ return {"role": "unknown", "confidence": 0.0, "success": False}\n 256→ \n 257→ def extract_text(self, image: Image.Image) -> Dict[str, Any]:\n 258→ """\n 259→ Extraire le texte d'une image\n 260→ \n 261→ Args:\n 262→ image: Image PIL\n 263→ \n 264→ Returns:\n 265→ Dict avec 'text' extrait\n 266→ """\n 267→ prompt = "Extract all visible text from this image. Return only the text, nothing else."\n 268→ \n 269→ result = self.generate(prompt, image=image, temperature=0.0)\n 270→ \n 271→ if result["success"]:\n 272→ return {"text": result["response"].strip(), "success": True}\n 273→ \n 274→ return {"text": "", "success": False, "error": result["error"]}\n 275→\n 276→ def classify_element_complete(self, element_image: Image.Image) -> Dict[str, Any]:\n 277→ """\n 278→ Classifier complètement un élément UI en UN SEUL appel VLM (optimisé)\n 279→\n 280→ Au lieu de 3 appels séparés (type, role, text), cette méthode\n 281→ fait UN SEUL appel pour obtenir toutes les informations.\n 282→\n 283→ Réduction de performance: 3 appels → 1 appel = 66% plus rapide\n 284→\n 285→ Args:\n 286→ element_image: Image PIL de l'élément\n 287→\n 288→ Returns:\n 289→ Dict avec 'type', 'role', 'text', 'confidence', 'success'\n 290→ """\n 291→ # System prompt "zéro tolérance" - Force le VLM à NE produire QUE du JSON\n 292→ system_prompt = """You are a UI element classifier.\n 293→Your ONLY task is to output valid JSON. Never explain. Never comment. 
Never discuss.\n 294→Expected format:\n 295→{"type": "...", "role": "...", "text": "..."}"""\n 296→\n 297→ # User prompt simplifié et direct\n 298→ prompt = """Classify this UI element:\n 299→- Type: Choose ONE from [button, text_input, checkbox, radio, dropdown, tab, link, icon, table_row, menu_item]\n 300→- Role: Choose ONE from [primary_action, cancel, submit, form_input, search_field, navigation, settings, close, delete, edit, save]\n 301→- Text: Any visible text (empty string if none)\n 302→\n 303→Output JSON only."""\n 304→\n 305→ result = self.generate(\n 306→ prompt,\n 307→ image=element_image,\n 308→ system_prompt=system_prompt,\n 309→ temperature=0.0,\n 310→ max_tokens=150,\n 311→ force_json=True\n 312→ )\n 313→\n 314→ if result["success"]:\n 315→ try:\n 316→ # Parser la réponse JSON\n 317→ response_text = result["response"].strip()\n 318→\n 319→ # Nettoyer la réponse si elle contient du markdown\n 320→ if response_text.startswith("```"):\n 321→ lines = response_text.split("\\n")\n 322→ response_text = "\\n".join([l for l in lines if not l.startswith("```")])\n 323→ response_text = response_text.strip()\n 324→\n 325→ data = json.loads(response_text)\n 326→\n 327→ # Valider les valeurs\n 328→ valid_types = ["button", "text_input", "checkbox", "radio", "dropdown",\n 329→ "tab", "link", "icon", "table_row", "menu_item"]\n 330→ valid_roles = ["primary_action", "cancel", "submit", "form_input",\n 331→ "search_field", "navigation", "settings", "close",\n 332→ "delete", "edit", "save"]\n 333→\n 334→ elem_type = data.get("type", "unknown").lower()\n 335→ elem_role = data.get("role", "unknown").lower()\n 336→ elem_text = data.get("text", "")\n 337→\n 338→ # Fallback si type/role invalides\n 339→ if elem_type not in valid_types:\n 340→ elem_type = "unknown"\n 341→ if elem_role not in valid_roles:\n 342→ elem_role = "unknown"\n 343→\n 344→ return {\n 345→ "type": elem_type,\n 346→ "role": elem_role,\n 347→ "text": elem_text,\n 348→ "confidence": 0.85,\n 349→ "success": True\n 350→ }\n 351→\n 352→ except json.JSONDecodeError as e:\n 353→ logger.warning(f"JSON parse error in classify_element_complete: {e}")\n 354→ logger.debug(f"Raw response: {result['response'][:200]}")\n 355→ return {\n 356→ "type": "unknown",\n 357→ "role": "unknown",\n 358→ "text": "",\n 359→ "confidence": 0.0,\n 360→ "success": False,\n 361→ "error": f"JSON parse error: {e}"\n 362→ }\n 363→\n 364→ return {\n 365→ "type": "unknown",\n 366→ "role": "unknown",\n 367→ "text": "",\n 368→ "confidence": 0.0,\n 369→ "success": False,\n 370→ "error": result.get("error", "VLM call failed")\n 371→ }\n 372→\n 373→ def _encode_image_from_path(self, image_path: str) -> str:\n 374→ """Encoder une image depuis un fichier en base64"""\n 375→ with open(image_path, 'rb') as f:\n 376→ return base64.b64encode(f.read()).decode('utf-8')\n 377→ \n 378→ def _encode_image_from_pil(self, image: Image.Image) -> str:\n 379→ """Encoder une image PIL en base64 avec prétraitement optimisé"""\n 380→ # 1. Convertir en RGB si nécessaire (évite erreurs PNG transparent)\n 381→ if image.mode != 'RGB':\n 382→ image = image.convert('RGB')\n 383→\n 384→ # 2. Redimensionnement intelligent : max 1280px sur le côté long\n 385→ max_size = 1280\n 386→ if max(image.size) > max_size:\n 387→ ratio = max_size / max(image.size)\n 388→ new_size = (int(image.size[0] * ratio), int(image.size[1] * ratio))\n 389→ image = image.resize(new_size, Image.Resampling.LANCZOS)\n 390→\n 391→ # 3. 
Sauvegarder en JPEG qualité 90 (plus léger, meilleur pour VLM)\n 392→ buffer = io.BytesIO()\n 393→ image.save(buffer, format='JPEG', quality=90)\n 394→ return base64.b64encode(buffer.getvalue()).decode('utf-8')\n 395→ \n 396→ def list_models(self) -> List[str]:\n 397→ """Lister les modèles disponibles dans Ollama"""\n 398→ try:\n 399→ response = requests.get(f"{self.endpoint}/api/tags", timeout=5)\n 400→ if response.status_code == 200:\n 401→ models = response.json().get('models', [])\n 402→ return [m['name'] for m in models]\n 403→ except Exception as e:\n 404→ logger.error(f"Error listing models: {e}")\n 405→ return []\n 406→ \n 407→ def pull_model(self, model_name: str) -> bool:\n 408→ """\n 409→ Télécharger un modèle dans Ollama\n 410→ \n 411→ Args:\n 412→ model_name: Nom du modèle à télécharger\n 413→ \n 414→ Returns:\n 415→ True si succès\n 416→ """\n 417→ try:\n 418→ logger.info(f"Pulling model {model_name}...")\n 419→ response = requests.post(\n 420→ f"{self.endpoint}/api/pull",\n 421→ json={"name": model_name},\n 422→ stream=True,\n 423→ timeout=600\n 424→ )\n 425→ \n 426→ if response.status_code == 200:\n 427→ for line in response.iter_lines():\n 428→ if line:\n 429→ data = json.loads(line)\n 430→ if 'status' in data:\n 431→ logger.info(f" {data['status']}")\n 432→ return True\n 433→ except Exception as e:\n 434→ logger.error(f"Error pulling model: {e}")\n 435→ return False\n 436→\n 437→\n 438→# ============================================================================\n 439→# Fonctions utilitaires\n 440→# ============================================================================\n 441→\n 442→def create_ollama_client(model: str = "qwen3-vl:8b",\n 443→ endpoint: str = "http://localhost:11434") -> OllamaClient:\n 444→ """\n 445→ Créer un client Ollama\n 446→ \n 447→ Args:\n 448→ model: Nom du modèle VLM\n 449→ endpoint: URL de l'API Ollama\n 450→ \n 451→ Returns:\n 452→ OllamaClient configuré\n 453→ """\n 454→ return OllamaClient(endpoint=endpoint, model=model)\n 455→\n 456→\n 457→def check_ollama_available(endpoint: str = "http://localhost:11434") -> bool:\n 458→ """\n 459→ Vérifier si Ollama est disponible\n 460→ \n 461→ Args:\n 462→ endpoint: URL de l'API Ollama\n 463→ \n 464→ Returns:\n 465→ True si disponible\n 466→ """\n 467→ try:\n 468→ response = requests.get(f"{endpoint}/api/tags", timeout=5)\n 469→ return response.status_code == 200\n 470→ except (requests.RequestException, ConnectionError, TimeoutError):\n 471→ return False\n 472→"}]},"uuid":"1e809a55-98f5-4c1b-9b65-ea248fc23a5e","timestamp":"2026-03-14T21:15:01.926Z","sourceToolAssistantUUID":"4e8b7992-eb31-4cf0-aecf-1654710832ae"} 
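classify_element_complete is the template the extraction feature can reuse: one VLM call, a system prompt that forbids prose, format=json, and strict validation of the parsed result. A sketch of the same pattern generalized to an arbitrary field schema; extract_fields and its schema format are hypothetical, not existing project code:

import json

def extract_fields(client: OllamaClient, screenshot, schema: dict) -> dict:
    """Extract named fields from a screenshot; missing fields come back as None."""
    field_lines = "\n".join(f'- "{name}": {desc}' for name, desc in schema.items())
    system_prompt = ("You are a data extraction engine. Output valid JSON only. "
                     "Use null for any field that is not visible on screen.")
    prompt = f"Extract the following fields from this screenshot:\n{field_lines}\n\nOutput JSON only."
    result = client.generate(prompt, image=screenshot, system_prompt=system_prompt,
                             temperature=0.0, max_tokens=400, force_json=True)
    if not result["success"]:
        return {name: None for name in schema}
    try:
        data = json.loads(result["response"])
    except json.JSONDecodeError:
        return {name: None for name in schema}
    # Keep only the requested keys; anything absent becomes null.
    return {name: data.get(name) for name in schema}

Null-filling missing keys keeps every extracted record shaped identically, which matters once rows land in SQLite.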
{"parentUuid":"1e809a55-98f5-4c1b-9b65-ea248fc23a5e","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_019YS9SGZykUqmK5jrorwCyc","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01JJD3buRZPXasnoS3YezSgZ","name":"Read","input":{"file_path":"/home/dom/.claude/projects/-home-dom-ai-rpa-vision-v3/eec8c668-e1f8-4599-9dd5-42170a166ba4/tool-results/toolu_01X8WDVqsezUXH1vkcskBwyT.txt","offset":899,"limit":200},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":6693,"cache_read_input_tokens":93518,"output_tokens":222,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":6693},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fX3j2Pf2pk4pA4ri12","type":"assistant","uuid":"6f9761c0-a7c4-47d8-aa44-f5224c70a3b0","timestamp":"2026-03-14T21:15:05.469Z"} {"parentUuid":"6f9761c0-a7c4-47d8-aa44-f5224c70a3b0","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01JJD3buRZPXasnoS3YezSgZ","toolUseID":"toolu_01JJD3buRZPXasnoS3YezSgZ","timestamp":"2026-03-14T21:15:05.475Z","uuid":"4bfc3929-4c5d-4f78-8e07-4c1be941e02a"} {"parentUuid":"6f9761c0-a7c4-47d8-aa44-f5224c70a3b0","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01JJD3buRZPXasnoS3YezSgZ","type":"tool_result","content":" 899→ 899→\n 900→ 900→\n 901→ 901→@dataclass\n 902→ 902→class WorkflowStats:\n 903→ 903→ """Statistiques globales d'un workflow"""\n 904→ 904→ total_executions: int = 0\n 905→ 905→ success_count: int = 0\n 906→ 906→ failure_count: int = 0\n 907→ 907→ observed_runs: int = 0\n 908→ 908→ assist_runs: int = 0\n 909→ 909→ auto_candidate_runs: int = 0\n 910→ 910→ auto_confirmed_runs: int = 0\n 911→ 911→ avg_similarity: float = 0.0\n 912→ 912→ recent_confidence: float = 0.0\n 913→ 913→ last_executed: Optional[datetime] = None\n 914→ 914→ \n 915→ 915→ @property\n 916→ 916→ def success_rate(self) -> float:\n 917→ 917→ if self.total_executions == 0:\n 918→ 918→ return 0.0\n 919→ 919→ return self.success_count / self.total_executions\n 920→ 920→ \n 921→ 921→ def to_dict(self) -> Dict[str, Any]:\n 922→ 922→ return {\n 923→ 923→ "total_executions": self.total_executions,\n 924→ 924→ "success_count": self.success_count,\n 925→ 925→ "failure_count": self.failure_count,\n 926→ 926→ "observed_runs": self.observed_runs,\n 927→ 927→ "assist_runs": self.assist_runs,\n 928→ 928→ "auto_candidate_runs": self.auto_candidate_runs,\n 929→ 929→ "auto_confirmed_runs": self.auto_confirmed_runs,\n 930→ 930→ "avg_similarity": self.avg_similarity,\n 931→ 931→ "recent_confidence": self.recent_confidence,\n 932→ 932→ 
"last_executed": self.last_executed.isoformat() if self.last_executed else None\n 933→ 933→ }\n 934→ 934→ \n 935→ 935→ @classmethod\n 936→ 936→ def from_dict(cls, data: Dict[str, Any]) -> 'WorkflowStats':\n 937→ 937→ last_executed = None\n 938→ 938→ if data.get("last_executed"):\n 939→ 939→ last_executed = datetime.fromisoformat(data["last_executed"])\n 940→ 940→ \n 941→ 941→ return cls(\n 942→ 942→ total_executions=data.get("total_executions", 0),\n 943→ 943→ success_count=data.get("success_count", 0),\n 944→ 944→ failure_count=data.get("failure_count", 0),\n 945→ 945→ observed_runs=data.get("observed_runs", 0),\n 946→ 946→ assist_runs=data.get("assist_runs", 0),\n 947→ 947→ auto_candidate_runs=data.get("auto_candidate_runs", 0),\n 948→ 948→ auto_confirmed_runs=data.get("auto_confirmed_runs", 0),\n 949→ 949→ avg_similarity=data.get("avg_similarity", 0.0),\n 950→ 950→ recent_confidence=data.get("recent_confidence", 0.0),\n 951→ 951→ last_executed=last_executed\n 952→ 952→ )\n 953→ 953→\n 954→ 954→\n 955→ 955→@dataclass\n 956→ 956→class LearningConfig:\n 957→ 957→ """Configuration de l'apprentissage"""\n 958→ 958→ observation_threshold: int = 5\n 959→ 959→ coaching_threshold: int = 10\n 960→ 960→ auto_candidate_threshold: int = 20\n 961→ 961→ min_similarity_coaching: float = 0.90\n 962→ 962→ min_success_rate_auto: float = 0.90\n 963→ 963→ min_success_rate_confirmed: float = 0.95\n 964→ 964→ rollback_confidence_threshold: float = 0.90\n 965→ 965→ \n 966→ 966→ def to_dict(self) -> Dict[str, Any]:\n 967→ 967→ return {\n 968→ 968→ "observation_threshold": self.observation_threshold,\n 969→ 969→ "coaching_threshold": self.coaching_threshold,\n 970→ 970→ "auto_candidate_threshold": self.auto_candidate_threshold,\n 971→ 971→ "min_similarity_coaching": self.min_similarity_coaching,\n 972→ 972→ "min_success_rate_auto": self.min_success_rate_auto,\n 973→ 973→ "min_success_rate_confirmed": self.min_success_rate_confirmed,\n 974→ 974→ "rollback_confidence_threshold": self.rollback_confidence_threshold\n 975→ 975→ }\n 976→ 976→ \n 977→ 977→ @classmethod\n 978→ 978→ def from_dict(cls, data: Dict[str, Any]) -> 'LearningConfig':\n 979→ 979→ return cls(\n 980→ 980→ observation_threshold=data.get("observation_threshold", 5),\n 981→ 981→ coaching_threshold=data.get("coaching_threshold", 10),\n 982→ 982→ auto_candidate_threshold=data.get("auto_candidate_threshold", 20),\n 983→ 983→ min_similarity_coaching=data.get("min_similarity_coaching", 0.90),\n 984→ 984→ min_success_rate_auto=data.get("min_success_rate_auto", 0.90),\n 985→ 985→ min_success_rate_confirmed=data.get("min_success_rate_confirmed", 0.95),\n 986→ 986→ rollback_confidence_threshold=data.get("rollback_confidence_threshold", 0.90)\n 987→ 987→ )\n 988→ 988→\n 989→ 989→\n 990→ 990→@dataclass\n 991→ 991→class Workflow:\n 992→ 992→ """\n 993→ 993→ Workflow Graph complet\n 994→ 994→ \n 995→ 995→ Représente un workflow appris comme un graphe de transitions.\n 996→ 996→ Supporte la composition: boucles, conditions, références à sous-workflows.\n 997→ 997→ """\n 998→ 998→ workflow_id: str\n 999→ 999→ name: str\n 1000→ 1000→ description: str\n 1001→ 1001→ version: int\n 1002→ 1002→ learning_state: str # OBSERVATION, COACHING, AUTO_CANDIDATE, AUTO_CONFIRMÉ\n 1003→ 1003→ created_at: datetime\n 1004→ 1004→ updated_at: datetime\n 1005→ 1005→ entry_nodes: List[str]\n 1006→ 1006→ end_nodes: List[str]\n 1007→ 1007→ nodes: List[WorkflowNode]\n 1008→ 1008→ edges: List[WorkflowEdge]\n 1009→ 1009→ safety_rules: SafetyRules\n 1010→ 1010→ stats: WorkflowStats\n 1011→ 
1011→ learning: LearningConfig\n 1012→ 1012→ metadata: Dict[str, Any] = field(default_factory=dict)\n 1013→ 1013→ # Composition fields\n 1014→ 1014→ loops: Dict[str, Any] = field(default_factory=dict) # loop_id -> LoopConfig\n 1015→ 1015→ conditionals: Dict[str, Any] = field(default_factory=dict) # node_id -> ConditionalNode\n 1016→ 1016→ references: List[Any] = field(default_factory=list) # List of ReferenceNode\n 1017→ 1017→ chain_config: Optional[Dict[str, Any]] = None # ChainConfig if part of a chain\n 1018→ 1018→ \n 1019→ 1019→ def get_node(self, node_id: str) -> Optional[WorkflowNode]:\n 1020→ 1020→ """Récupérer un node par ID"""\n 1021→ 1021→ for node in self.nodes:\n 1022→ 1022→ if node.node_id == node_id:\n 1023→ 1023→ return node\n 1024→ 1024→ return None\n 1025→ 1025→ \n 1026→ 1026→ def get_edge(self, edge_id: str) -> Optional[WorkflowEdge]:\n 1027→ 1027→ """Récupérer un edge par ID"""\n 1028→ 1028→ for edge in self.edges:\n 1029→ 1029→ if edge.edge_id == edge_id:\n 1030→ 1030→ return edge\n 1031→ 1031→ return None\n 1032→ 1032→ \n 1033→ 1033→ def get_outgoing_edges(self, node_id: str) -> List[WorkflowEdge]:\n 1034→ 1034→ """Récupérer tous les edges sortants d'un node"""\n 1035→ 1035→ return [edge for edge in self.edges if edge.from_node == node_id]\n 1036→ 1036→ \n 1037→ 1037→ def get_incoming_edges(self, node_id: str) -> List[WorkflowEdge]:\n 1038→ 1038→ """Récupérer tous les edges entrants d'un node"""\n 1039→ 1039→ return [edge for edge in self.edges if edge.to_node == node_id]\n 1040→ 1040→ \n 1041→ 1041→ # =========================================================================\n 1042→ 1042→ # Méthodes de gestion des variantes au niveau workflow\n 1043→ 1043→ # =========================================================================\n 1044→ 1044→ \n 1045→ 1045→ def add_variant_to_node(self, node_id: str, variant: NodeVariant) -> bool:\n 1046→ 1046→ """Ajouter une variante à un node spécifique"""\n 1047→ 1047→ node = self.get_node(node_id)\n 1048→ 1048→ if not node:\n 1049→ 1049→ return False\n 1050→ 1050→ return node.add_variant(variant)\n 1051→ 1051→ \n 1052→ 1052→ def remove_variant_from_node(self, node_id: str, variant_id: str) -> bool:\n 1053→ 1053→ """Supprimer une variante d'un node"""\n 1054→ 1054→ node = self.get_node(node_id)\n 1055→ 1055→ if not node:\n 1056→ 1056→ return False\n 1057→ 1057→ return node.remove_variant(variant_id)\n 1058→ 1058→ \n 1059→ 1059→ def get_node_variants(self, node_id: str) -> List[NodeVariant]:\n 1060→ 1060→ """Récupérer toutes les variantes d'un node"""\n 1061→ 1061→ node = self.get_node(node_id)\n 1062→ 1062→ return node.variants if node else []\n 1063→ 1063→ \n 1064→ 1064→ def get_nodes_needing_consolidation(self) -> List[WorkflowNode]:\n 1065→ 1065→ """Récupérer les nodes qui ont besoin de consolidation de variantes"""\n 1066→ 1066→ return [node for node in self.nodes if node.needs_consolidation()]\n 1067→ 1067→ \n 1068→ 1068→ def get_low_quality_nodes(self, threshold: float = 0.7) -> List[WorkflowNode]:\n 1069→ 1069→ """Récupérer les nodes avec un score de qualité faible"""\n 1070→ 1070→ return [node for node in self.nodes if node.quality_score < threshold]\n 1071→ 1071→ \n 1072→ 1072→ def to_dict(self) -> Dict[str, Any]:\n 1073→ 1073→ """Sérialiser en JSON"""\n 1074→ 1074→ result = {\n 1075→ 1075→ "workflow_id": self.workflow_id,\n 1076→ 1076→ "name": self.name,\n 1077→ 1077→ "description": self.description,\n 1078→ 1078→ "version": self.version,\n 1079→ 1079→ "learning_state": self.learning_state,\n 1080→ 1080→ "created_at": 
self.created_at.isoformat(),\n 1081→ 1081→ "updated_at": self.updated_at.isoformat(),\n 1082→ 1082→ "entry_nodes": self.entry_nodes,\n 1083→ 1083→ "end_nodes": self.end_nodes,\n 1084→ 1084→ "nodes": [node.to_dict() for node in self.nodes],\n 1085→ 1085→ "edges": [edge.to_dict() for edge in self.edges],\n 1086→ 1086→ "safety_rules": self.safety_rules.to_dict(),\n 1087→ 1087→ "stats": self.stats.to_dict(),\n 1088→ 1088→ "learning": self.learning.to_dict(),\n 1089→ 1089→ "metadata": self.metadata,\n 1090→ 1090→ # Composition fields\n 1091→ 1091→ "loops": {k: v.to_dict() if hasattr(v, 'to_dict') else v for k, v in self.loops.items()},\n 1092→ 1092→ "conditionals": {k: v.to_dict() if hasattr(v, 'to_dict') else v for k, v in self.conditionals.items()},\n 1093→ 1093→ "references": [r.to_dict() if hasattr(r, 'to_dict') else r for r in self.references],\n 1094→ 1094→ }\n 1095→ 1095→ if self.chain_config:\n 1096→ 1096→ result["chain_config"] = self.chain_config.to_dict() if hasattr(self.chain_config, 'to_dict') else self.chain_config\n 1097→ 1097→ return result\n 1098→ 1098→ "}]},"uuid":"b05e89cc-66de-42d3-bc66-4e48958071fa","timestamp":"2026-03-14T21:15:05.475Z","sourceToolAssistantUUID":"6f9761c0-a7c4-47d8-aa44-f5224c70a3b0"} {"parentUuid":"b05e89cc-66de-42d3-bc66-4e48958071fa","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01SM1QGmytJzDPiWzU1ioy7x","type":"message","role":"assistant","content":[{"type":"text","text":"Now let me check the Workflow's loops field and if there are existing composition models."}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":10209,"cache_read_input_tokens":100211,"cache_creation":{"ephemeral_5m_input_tokens":10209,"ephemeral_1h_input_tokens":0},"output_tokens":2,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fXdneScfcyCC8huFhu","type":"assistant","uuid":"1873b458-57b7-414d-8ecd-c33238a800da","timestamp":"2026-03-14T21:15:09.131Z"} {"parentUuid":"1873b458-57b7-414d-8ecd-c33238a800da","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01SM1QGmytJzDPiWzU1ioy7x","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_016R12Df9A3dQHJpQowZWosj","name":"Grep","input":{"pattern":"class LoopConfig|class ConditionalNode|class ReferenceNode","output_mode":"content","path":"/home/dom/ai/rpa_vision_v3/core/models"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":10209,"cache_read_input_tokens":100211,"cache_creation":{"ephemeral_5m_input_tokens":10209,"ephemeral_1h_input_tokens":0},"output_tokens":2,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fXdneScfcyCC8huFhu","type":"assistant","uuid":"22300e57-9dc3-493b-9b87-239a87603be5","timestamp":"2026-03-14T21:15:10.515Z"} 
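
The `Workflow.to_dict()` read above serializes its composition fields (`loops`, `conditionals`, `references`) with a duck-typed idiom: any value exposing a `to_dict()` method is serialized through it, and plain dicts pass through unchanged. A minimal, self-contained sketch of that idiom (`LoopConfigStub` and `serialize_loops` are hypothetical names for illustration, not repo code):

```python
# Minimal sketch of the duck-typed serialization idiom used by
# Workflow.to_dict() for its composition fields (loops, conditionals,
# references). LoopConfigStub is a hypothetical stand-in, not repo code.
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class LoopConfigStub:
    loop_id: str
    loop_type: str = "count"

    def to_dict(self) -> Dict[str, Any]:
        return {"loop_id": self.loop_id, "loop_type": self.loop_type}


def serialize_loops(loops: Dict[str, Any]) -> Dict[str, Any]:
    # Same pattern as workflow_graph.py: objects with to_dict() are
    # serialized through it, raw dicts are kept as-is.
    return {k: v.to_dict() if hasattr(v, "to_dict") else v
            for k, v in loops.items()}


print(serialize_loops({
    "loop_1": LoopConfigStub("loop_1"),   # dataclass -> via to_dict()
    "loop_2": {"loop_id": "loop_2"},      # raw dict  -> passed through
}))
```

This tolerance for both shapes matters because `Workflow.loops` is typed as `Dict[str, Any]`, so entries may arrive either as `LoopConfig` instances or as already-serialized dicts loaded from JSON.
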
{"parentUuid":"22300e57-9dc3-493b-9b87-239a87603be5","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Grep","command":"callback"},"parentToolUseID":"toolu_016R12Df9A3dQHJpQowZWosj","toolUseID":"toolu_016R12Df9A3dQHJpQowZWosj","timestamp":"2026-03-14T21:15:10.528Z","uuid":"1ae2838c-f1fc-42fe-a1ac-53b917337a2e"} {"parentUuid":"22300e57-9dc3-493b-9b87-239a87603be5","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_016R12Df9A3dQHJpQowZWosj","type":"tool_result","content":"No matches found"}]},"uuid":"81d8ffd9-fda8-45a4-b599-670ae783d8dc","timestamp":"2026-03-14T21:15:10.528Z","sourceToolAssistantUUID":"22300e57-9dc3-493b-9b87-239a87603be5"} {"parentUuid":"81d8ffd9-fda8-45a4-b599-670ae783d8dc","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01SM1QGmytJzDPiWzU1ioy7x","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01Bzm6LMYSnD8LEambCjRpw3","name":"Grep","input":{"pattern":"loops|foreach|for_each|loop_config|iterator","output_mode":"files_with_matches","path":"/home/dom/ai/rpa_vision_v3/core"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":10209,"cache_read_input_tokens":100211,"cache_creation":{"ephemeral_5m_input_tokens":10209,"ephemeral_1h_input_tokens":0},"output_tokens":2,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fXdneScfcyCC8huFhu","type":"assistant","uuid":"7a6b7283-ad1c-4fbe-840f-cca3db71490d","timestamp":"2026-03-14T21:15:11.790Z"} {"parentUuid":"7a6b7283-ad1c-4fbe-840f-cca3db71490d","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Grep","command":"callback"},"parentToolUseID":"toolu_01Bzm6LMYSnD8LEambCjRpw3","toolUseID":"toolu_01Bzm6LMYSnD8LEambCjRpw3","timestamp":"2026-03-14T21:15:11.804Z","uuid":"22fee0c8-f1ab-4f85-a573-bdf3e651ad7b"} {"parentUuid":"7a6b7283-ad1c-4fbe-840f-cca3db71490d","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01Bzm6LMYSnD8LEambCjRpw3","type":"tool_result","content":"Found 2 
files\n/home/dom/ai/rpa_vision_v3/core/workflow/loop_executor.py\n/home/dom/ai/rpa_vision_v3/core/workflow/composition_models.py"}]},"uuid":"4a1e4e61-c790-4438-a34d-d90c44075044","timestamp":"2026-03-14T21:15:11.804Z","sourceToolAssistantUUID":"7a6b7283-ad1c-4fbe-840f-cca3db71490d"} {"parentUuid":"4a1e4e61-c790-4438-a34d-d90c44075044","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01SM1QGmytJzDPiWzU1ioy7x","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01WLN1Pk2kN1ckHi7pL2dRee","name":"Bash","input":{"command":"ls -la /home/dom/ai/rpa_vision_v3/core/models/","description":"List core models directory"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":10209,"cache_read_input_tokens":100211,"output_tokens":320,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":10209},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fXdneScfcyCC8huFhu","type":"assistant","uuid":"86ec0364-6434-4b45-9858-4a35c992de21","timestamp":"2026-03-14T21:15:12.259Z"} {"parentUuid":"86ec0364-6434-4b45-9858-4a35c992de21","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PreToolUse","hookName":"PreToolUse:Bash","command":"/home/dom/.claude/hooks/guard_tests.sh"},"toolUseID":"toolu_01WLN1Pk2kN1ckHi7pL2dRee","parentToolUseID":"toolu_01WLN1Pk2kN1ckHi7pL2dRee","uuid":"4098e35b-c125-45c0-8b16-b07154d117eb","timestamp":"2026-03-14T21:15:12.260Z"} {"parentUuid":"86ec0364-6434-4b45-9858-4a35c992de21","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01WLN1Pk2kN1ckHi7pL2dRee","type":"tool_result","content":"total 176\ndrwxrwxr-x 3 dom dom 4096 mars 11 12:34 .\ndrwxrwxr-x 29 dom dom 4096 janv. 18 22:39 ..\n-rw-rw-r-- 1 dom dom 13466 mars 11 12:34 base_models.py\n-rw-rw-r-- 1 dom dom 30155 déc. 23 09:39 core_models.zip\n-rw-rw-r-- 1 dom dom 9946 déc. 20 16:41 execution_result.py\n-rw-rw-r-- 1 dom dom 5107 mars 11 12:34 __init__.py\n-rw-rw-r-- 1 dom dom 14155 déc. 20 18:26 model_cache.py\ndrwxrwxr-x 2 dom dom 4096 mars 11 12:34 __pycache__\n-rw-rw-r-- 1 dom dom 6365 déc. 15 00:09 raw_session.py\n-rw-rw-r-- 1 dom dom 10961 déc. 20 18:12 screen_state.py\n-rw-rw-r-- 1 dom dom 6332 nov. 22 04:14 state_embedding.py\n-rw-rw-r-- 1 dom dom 7682 déc. 
20 18:12 ui_element.py\n-rw-rw-r-- 1 dom dom 44343 mars 11 12:34 workflow_graph.py","is_error":false}]},"uuid":"c3991d99-4119-4a08-b730-8fa14c3ee773","timestamp":"2026-03-14T21:15:12.280Z","sourceToolAssistantUUID":"86ec0364-6434-4b45-9858-4a35c992de21"} {"parentUuid":"c3991d99-4119-4a08-b730-8fa14c3ee773","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_011dgNsq4ex2V2dJ1prfZqvd","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01KwS6L4KBKL49DW6d5uZWQV","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/core/workflow/composition_models.py"},"caller":{"type":"direct"}}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":923,"cache_read_input_tokens":110420,"cache_creation":{"ephemeral_5m_input_tokens":923,"ephemeral_1h_input_tokens":0},"output_tokens":25,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fY8trukCFqLqmt9G82","type":"assistant","uuid":"81d6cb2f-8c11-496b-917d-1a5f676e95fc","timestamp":"2026-03-14T21:15:16.334Z"} {"parentUuid":"81d6cb2f-8c11-496b-917d-1a5f676e95fc","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01KwS6L4KBKL49DW6d5uZWQV","toolUseID":"toolu_01KwS6L4KBKL49DW6d5uZWQV","timestamp":"2026-03-14T21:15:16.525Z","uuid":"f9beb768-0af0-4a11-8c70-4e8ab6ee1ae6"} {"parentUuid":"81d6cb2f-8c11-496b-917d-1a5f676e95fc","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01KwS6L4KBKL49DW6d5uZWQV","type":"tool_result","content":" 1→"""\n 2→Composition Models - Modèles de données pour la composition de workflows\n 3→\n 4→Ce module contient les dataclasses de configuration pour:\n 5→- Chaînage de workflows (ChainConfig)\n 6→- Boucles (LoopConfig, LoopState)\n 7→- Conditions (ConditionalNode, BranchConfig, VisualCondition)\n 8→- Déclencheurs (ScheduleTrigger, FileTrigger, VisualTrigger, TriggerContext)\n 9→- Sous-workflows (SubWorkflowDefinition, ReferenceNode, ParameterDef)\n 10→- Contexte d'exécution (ExecutionContext, LogEntry)\n 11→- Validation (ValidationResult)\n 12→- Fusion (MergeCandidate, NodeConflict)\n 13→- Séquences (CommonSequence, SequenceOccurrence)\n 14→\n 15→Chaque classe inclut les méthodes to_dict() et from_dict() pour la sérialisation.\n 16→"""\n 17→\n 18→from dataclasses import dataclass, field\n 19→from datetime import datetime\n 20→from typing import Dict, List, Optional, Any, Literal, Union\n 21→from enum import Enum\n 22→\n 23→\n 24→# ============================================================================\n 25→# Enums\n 26→# ============================================================================\n 27→\n 28→class FailureAction(str, Enum):\n 29→ """Actions possibles en cas d'échec dans une chaîne"""\n 30→ RETRY = "retry"\n 
31→ SKIP = "skip"\n 32→ ABORT = "abort"\n 33→\n 34→\n 35→class LoopType(str, Enum):\n 36→ """Types de boucles"""\n 37→ COUNT = "count"\n 38→ CONDITION = "condition"\n 39→\n 40→\n 41→class ConditionType(str, Enum):\n 42→ """Types de conditions visuelles"""\n 43→ ELEMENT_PRESENT = "element_present"\n 44→ ELEMENT_ABSENT = "element_absent"\n 45→ TEXT_EQUALS = "text_equals"\n 46→ TEXT_CONTAINS = "text_contains"\n 47→\n 48→\n 49→class TriggerType(str, Enum):\n 50→ """Types de déclencheurs"""\n 51→ SCHEDULE = "schedule"\n 52→ FILE = "file"\n 53→ VISUAL = "visual"\n 54→\n 55→\n 56→class ConcurrencyMode(str, Enum):\n 57→ """Modes de gestion de la concurrence"""\n 58→ CONCURRENT = "concurrent"\n 59→ QUEUE = "queue"\n 60→\n 61→\n 62→class SequencePriority(str, Enum):\n 63→ """Priorité des séquences pour extraction"""\n 64→ HIGH = "high"\n 65→ MEDIUM = "medium"\n 66→ LOW = "low"\n 67→\n 68→\n 69→# ============================================================================\n 70→# Conditions Visuelles\n 71→# ============================================================================\n 72→\n 73→@dataclass\n 74→class VisualCondition:\n 75→ """\n 76→ Condition basée sur l'état visuel de l'écran.\n 77→ \n 78→ Utilisée pour les boucles conditionnelles et les branchements.\n 79→ """\n 80→ condition_type: str # element_present, element_absent, text_equals, text_contains\n 81→ target_element: Optional[str] = None # ID ou template de l'élément cible\n 82→ expected_text: Optional[str] = None # Texte attendu pour les conditions textuelles\n 83→ \n 84→ def to_dict(self) -> Dict[str, Any]:\n 85→ return {\n 86→ "condition_type": self.condition_type,\n 87→ "target_element": self.target_element,\n 88→ "expected_text": self.expected_text\n 89→ }\n 90→ \n 91→ @classmethod\n 92→ def from_dict(cls, data: Dict[str, Any]) -> 'VisualCondition':\n 93→ return cls(\n 94→ condition_type=data["condition_type"],\n 95→ target_element=data.get("target_element"),\n 96→ expected_text=data.get("expected_text")\n 97→ )\n 98→\n 99→\n 100→# ============================================================================\n 101→# Configuration de Chaînage\n 102→# ============================================================================\n 103→\n 104→@dataclass\n 105→class ChainConfig:\n 106→ """\n 107→ Configuration de chaînage entre deux workflows.\n 108→ \n 109→ Définit comment un workflow se connecte au suivant dans une chaîne.\n 110→ """\n 111→ source_workflow_id: str\n 112→ target_workflow_id: str\n 113→ variable_mapping: Dict[str, str] = field(default_factory=dict) # source_var -> target_var\n 114→ on_failure: str = "abort" # retry, skip, abort\n 115→ max_retries: int = 3\n 116→ \n 117→ def to_dict(self) -> Dict[str, Any]:\n 118→ return {\n 119→ "source_workflow_id": self.source_workflow_id,\n 120→ "target_workflow_id": self.target_workflow_id,\n 121→ "variable_mapping": self.variable_mapping,\n 122→ "on_failure": self.on_failure,\n 123→ "max_retries": self.max_retries\n 124→ }\n 125→ \n 126→ @classmethod\n 127→ def from_dict(cls, data: Dict[str, Any]) -> 'ChainConfig':\n 128→ return cls(\n 129→ source_workflow_id=data["source_workflow_id"],\n 130→ target_workflow_id=data["target_workflow_id"],\n 131→ variable_mapping=data.get("variable_mapping", {}),\n 132→ on_failure=data.get("on_failure", "abort"),\n 133→ max_retries=data.get("max_retries", 3)\n 134→ )\n 135→\n 136→\n 137→# ============================================================================\n 138→# Configuration de Boucles\n 139→# 
============================================================================\n 140→\n 141→@dataclass\n 142→class LoopConfig:\n 143→ """\n 144→ Configuration d'une boucle dans un workflow.\n 145→ \n 146→ Supporte les boucles par compteur (count) ou par condition visuelle (condition).\n 147→ """\n 148→ loop_id: str\n 149→ loop_type: str # count, condition\n 150→ max_iterations: Optional[int] = None # Pour type "count"\n 151→ exit_condition: Optional[VisualCondition] = None # Pour type "condition"\n 152→ body_nodes: List[str] = field(default_factory=list)\n 153→ safety_limit: int = 1000 # Garde de sécurité\n 154→ \n 155→ def to_dict(self) -> Dict[str, Any]:\n 156→ return {\n 157→ "loop_id": self.loop_id,\n 158→ "loop_type": self.loop_type,\n 159→ "max_iterations": self.max_iterations,\n 160→ "exit_condition": self.exit_condition.to_dict() if self.exit_condition else None,\n 161→ "body_nodes": self.body_nodes,\n 162→ "safety_limit": self.safety_limit\n 163→ }\n 164→ \n 165→ @classmethod\n 166→ def from_dict(cls, data: Dict[str, Any]) -> 'LoopConfig':\n 167→ exit_condition = None\n 168→ if data.get("exit_condition"):\n 169→ exit_condition = VisualCondition.from_dict(data["exit_condition"])\n 170→ \n 171→ return cls(\n 172→ loop_id=data["loop_id"],\n 173→ loop_type=data["loop_type"],\n 174→ max_iterations=data.get("max_iterations"),\n 175→ exit_condition=exit_condition,\n 176→ body_nodes=data.get("body_nodes", []),\n 177→ safety_limit=data.get("safety_limit", 1000)\n 178→ )\n 179→\n 180→\n 181→@dataclass\n 182→class LoopState:\n 183→ """\n 184→ État d'exécution d'une boucle.\n 185→ \n 186→ Maintient le compteur et l'état courant pendant l'exécution.\n 187→ """\n 188→ loop_id: str\n 189→ current_iteration: int = 0\n 190→ started_at: datetime = field(default_factory=datetime.now)\n 191→ last_condition_result: Optional[bool] = None\n 192→ \n 193→ def to_dict(self) -> Dict[str, Any]:\n 194→ return {\n 195→ "loop_id": self.loop_id,\n 196→ "current_iteration": self.current_iteration,\n 197→ "started_at": self.started_at.isoformat(),\n 198→ "last_condition_result": self.last_condition_result\n 199→ }\n 200→ \n 201→ @classmethod\n 202→ def from_dict(cls, data: Dict[str, Any]) -> 'LoopState':\n 203→ started_at = datetime.now()\n 204→ if data.get("started_at"):\n 205→ started_at = datetime.fromisoformat(data["started_at"])\n 206→ \n 207→ return cls(\n 208→ loop_id=data["loop_id"],\n 209→ current_iteration=data.get("current_iteration", 0),\n 210→ started_at=started_at,\n 211→ last_condition_result=data.get("last_condition_result")\n 212→ )\n 213→\n 214→\n 215→# ============================================================================\n 216→# Configuration de Branchements Conditionnels\n 217→# ============================================================================\n 218→\n 219→@dataclass\n 220→class BranchConfig:\n 221→ """\n 222→ Configuration d'une branche conditionnelle.\n 223→ \n 224→ Définit une condition et le node cible si la condition est vraie.\n 225→ """\n 226→ branch_id: str\n 227→ condition: VisualCondition\n 228→ target_node: str\n 229→ priority: int = 0 # Ordre d'évaluation (plus petit = évalué en premier)\n 230→ \n 231→ def to_dict(self) -> Dict[str, Any]:\n 232→ return {\n 233→ "branch_id": self.branch_id,\n 234→ "condition": self.condition.to_dict(),\n 235→ "target_node": self.target_node,\n 236→ "priority": self.priority\n 237→ }\n 238→ \n 239→ @classmethod\n 240→ def from_dict(cls, data: Dict[str, Any]) -> 'BranchConfig':\n 241→ return cls(\n 242→ branch_id=data["branch_id"],\n 
243→ condition=VisualCondition.from_dict(data["condition"]),\n 244→ target_node=data["target_node"],\n 245→ priority=data.get("priority", 0)\n 246→ )\n 247→\n 248→\n 249→@dataclass\n 250→class ConditionalNode:\n 251→ """\n 252→ Node conditionnel avec plusieurs branches.\n 253→ \n 254→ Évalue les branches par ordre de priorité et exécute la première qui match.\n 255→ """\n 256→ node_id: str\n 257→ branches: List[BranchConfig] = field(default_factory=list)\n 258→ default_branch: Optional[str] = None # Node par défaut si aucune condition ne match\n 259→ \n 260→ def to_dict(self) -> Dict[str, Any]:\n 261→ return {\n 262→ "node_id": self.node_id,\n 263→ "branches": [b.to_dict() for b in self.branches],\n 264→ "default_branch": self.default_branch\n 265→ }\n 266→ \n 267→ @classmethod\n 268→ def from_dict(cls, data: Dict[str, Any]) -> 'ConditionalNode':\n 269→ branches = [BranchConfig.from_dict(b) for b in data.get("branches", [])]\n 270→ return cls(\n 271→ node_id=data["node_id"],\n 272→ branches=branches,\n 273→ default_branch=data.get("default_branch")\n 274→ )\n 275→\n 276→\n 277→# ============================================================================\n 278→# Configuration des Déclencheurs\n 279→# ============================================================================\n 280→\n 281→@dataclass\n 282→class ScheduleTrigger:\n 283→ """\n 284→ Déclencheur basé sur un horaire.\n 285→ \n 286→ Supporte les expressions cron ou les intervalles en secondes.\n 287→ """\n 288→ trigger_id: str\n 289→ workflow_id: str\n 290→ cron_expression: Optional[str] = None\n 291→ interval_seconds: Optional[int] = None\n 292→ enabled: bool = True\n 293→ \n 294→ def to_dict(self) -> Dict[str, Any]:\n 295→ return {\n 296→ "trigger_id": self.trigger_id,\n 297→ "trigger_type": "schedule",\n 298→ "workflow_id": self.workflow_id,\n 299→ "cron_expression": self.cron_expression,\n 300→ "interval_seconds": self.interval_seconds,\n 301→ "enabled": self.enabled\n 302→ }\n 303→ \n 304→ @classmethod\n 305→ def from_dict(cls, data: Dict[str, Any]) -> 'ScheduleTrigger':\n 306→ return cls(\n 307→ trigger_id=data["trigger_id"],\n 308→ workflow_id=data["workflow_id"],\n 309→ cron_expression=data.get("cron_expression"),\n 310→ interval_seconds=data.get("interval_seconds"),\n 311→ enabled=data.get("enabled", True)\n 312→ )\n 313→\n 314→\n 315→@dataclass\n 316→class FileTrigger:\n 317→ """\n 318→ Déclencheur basé sur l'apparition d'un fichier.\n 319→ \n 320→ Surveille un répertoire et déclenche le workflow quand un fichier correspondant apparaît.\n 321→ """\n 322→ trigger_id: str\n 323→ workflow_id: str\n 324→ watch_directory: str\n 325→ file_pattern: str # Pattern glob (ex: "*.csv", "report*.xlsx")\n 326→ enabled: bool = True\n 327→ \n 328→ def to_dict(self) -> Dict[str, Any]:\n 329→ return {\n 330→ "trigger_id": self.trigger_id,\n 331→ "trigger_type": "file",\n 332→ "workflow_id": self.workflow_id,\n 333→ "watch_directory": self.watch_directory,\n 334→ "file_pattern": self.file_pattern,\n 335→ "enabled": self.enabled\n 336→ }\n 337→ \n 338→ @classmethod\n 339→ def from_dict(cls, data: Dict[str, Any]) -> 'FileTrigger':\n 340→ return cls(\n 341→ trigger_id=data["trigger_id"],\n 342→ workflow_id=data["workflow_id"],\n 343→ watch_directory=data["watch_directory"],\n 344→ file_pattern=data["file_pattern"],\n 345→ enabled=data.get("enabled", True)\n 346→ )\n 347→\n 348→\n 349→@dataclass\n 350→class VisualTrigger:\n 351→ """\n 352→ Déclencheur basé sur la détection visuelle.\n 353→ \n 354→ Capture périodiquement l'écran et déclenche le 
workflow quand l'élément cible est détecté.\n 355→ """\n 356→ trigger_id: str\n 357→ workflow_id: str\n 358→ target_element: str # ID ou template de l'élément à détecter\n 359→ check_interval_seconds: int = 5\n 360→ enabled: bool = True\n 361→ \n 362→ def to_dict(self) -> Dict[str, Any]:\n 363→ return {\n 364→ "trigger_id": self.trigger_id,\n 365→ "trigger_type": "visual",\n 366→ "workflow_id": self.workflow_id,\n 367→ "target_element": self.target_element,\n 368→ "check_interval_seconds": self.check_interval_seconds,\n 369→ "enabled": self.enabled\n 370→ }\n 371→ \n 372→ @classmethod\n 373→ def from_dict(cls, data: Dict[str, Any]) -> 'VisualTrigger':\n 374→ return cls(\n 375→ trigger_id=data["trigger_id"],\n 376→ workflow_id=data["workflow_id"],\n 377→ target_element=data["target_element"],\n 378→ check_interval_seconds=data.get("check_interval_seconds", 5),\n 379→ enabled=data.get("enabled", True)\n 380→ )\n 381→\n 382→\n 383→@dataclass\n 384→class TriggerContext:\n 385→ """\n 386→ Contexte passé au workflow lors du déclenchement.\n 387→ \n 388→ Contient les informations sur l'événement qui a déclenché le workflow.\n 389→ """\n 390→ trigger_id: str\n 391→ trigger_type: str # schedule, file, visual\n 392→ fired_at: datetime = field(default_factory=datetime.now)\n 393→ file_path: Optional[str] = None # Pour FileTrigger\n 394→ detected_element: Optional[Dict[str, Any]] = None # Pour VisualTrigger\n 395→ \n 396→ def to_dict(self) -> Dict[str, Any]:\n 397→ return {\n 398→ "trigger_id": self.trigger_id,\n 399→ "trigger_type": self.trigger_type,\n 400→ "fired_at": self.fired_at.isoformat(),\n 401→ "file_path": self.file_path,\n 402→ "detected_element": self.detected_element\n 403→ }\n 404→ \n 405→ @classmethod\n 406→ def from_dict(cls, data: Dict[str, Any]) -> 'TriggerContext':\n 407→ fired_at = datetime.now()\n 408→ if data.get("fired_at"):\n 409→ fired_at = datetime.fromisoformat(data["fired_at"])\n 410→ \n 411→ return cls(\n 412→ trigger_id=data["trigger_id"],\n 413→ trigger_type=data["trigger_type"],\n 414→ fired_at=fired_at,\n 415→ file_path=data.get("file_path"),\n 416→ detected_element=data.get("detected_element")\n 417→ )\n 418→\n 419→\n 420→# Type alias pour les triggers\n 421→TriggerConfig = Union[ScheduleTrigger, FileTrigger, VisualTrigger]\n 422→\n 423→\n 424→def trigger_from_dict(data: Dict[str, Any]) -> TriggerConfig:\n 425→ """Factory pour créer le bon type de trigger depuis un dictionnaire."""\n 426→ trigger_type = data.get("trigger_type", "schedule")\n 427→ \n 428→ if trigger_type == "schedule":\n 429→ return ScheduleTrigger.from_dict(data)\n 430→ elif trigger_type == "file":\n 431→ return FileTrigger.from_dict(data)\n 432→ elif trigger_type == "visual":\n 433→ return VisualTrigger.from_dict(data)\n 434→ else:\n 435→ raise ValueError(f"Unknown trigger type: {trigger_type}")\n 436→\n 437→\n 438→# ============================================================================\n 439→# Configuration des Sous-Workflows\n 440→# ============================================================================\n 441→\n 442→@dataclass\n 443→class ParameterDef:\n 444→ """\n 445→ Définition d'un paramètre d'entrée ou de sortie pour un sous-workflow.\n 446→ """\n 447→ name: str\n 448→ param_type: str = "string" # string, number, boolean, list, dict\n 449→ required: bool = True\n 450→ default_value: Optional[Any] = None\n 451→ description: str = ""\n 452→ \n 453→ def to_dict(self) -> Dict[str, Any]:\n 454→ return {\n 455→ "name": self.name,\n 456→ "param_type": self.param_type,\n 457→ "required": 
self.required,\n 458→ "default_value": self.default_value,\n 459→ "description": self.description\n 460→ }\n 461→ \n 462→ @classmethod\n 463→ def from_dict(cls, data: Dict[str, Any]) -> 'ParameterDef':\n 464→ return cls(\n 465→ name=data["name"],\n 466→ param_type=data.get("param_type", "string"),\n 467→ required=data.get("required", True),\n 468→ default_value=data.get("default_value"),\n 469→ description=data.get("description", "")\n 470→ )\n 471→\n 472→\n 473→@dataclass\n 474→class SubWorkflowDefinition:\n 475→ """\n 476→ Définition d'un sous-workflow réutilisable.\n 477→ \n 478→ Permet de définir les paramètres d'entrée et de sortie.\n 479→ """\n 480→ workflow_id: str\n 481→ name: str\n 482→ input_parameters: List[ParameterDef] = field(default_factory=list)\n 483→ output_values: List[ParameterDef] = field(default_factory=list)\n 484→ description: str = ""\n 485→ \n 486→ def to_dict(self) -> Dict[str, Any]:\n 487→ return {\n 488→ "workflow_id": self.workflow_id,\n 489→ "name": self.name,\n 490→ "input_parameters": [p.to_dict() for p in self.input_parameters],\n 491→ "output_values": [p.to_dict() for p in self.output_values],\n 492→ "description": self.description\n 493→ }\n 494→ \n 495→ @classmethod\n 496→ def from_dict(cls, data: Dict[str, Any]) -> 'SubWorkflowDefinition':\n 497→ input_params = [ParameterDef.from_dict(p) for p in data.get("input_parameters", [])]\n 498→ output_vals = [ParameterDef.from_dict(p) for p in data.get("output_values", [])]\n 499→ \n 500→ return cls(\n 501→ workflow_id=data["workflow_id"],\n 502→ name=data["name"],\n 503→ input_parameters=input_params,\n 504→ output_values=output_vals,\n 505→ description=data.get("description", "")\n 506→ )\n 507→\n 508→\n 509→@dataclass\n 510→class ReferenceNode:\n 511→ """\n 512→ Node qui référence un sous-workflow.\n 513→ \n 514→ Permet d'appeler un sous-workflow depuis un workflow parent.\n 515→ """\n 516→ node_id: str\n 517→ sub_workflow_id: str\n 518→ input_bindings: Dict[str, str] = field(default_factory=dict) # param_name -> variable_name\n 519→ output_bindings: Dict[str, str] = field(default_factory=dict) # param_name -> variable_name\n 520→ \n 521→ def to_dict(self) -> Dict[str, Any]:\n 522→ return {\n 523→ "node_id": self.node_id,\n 524→ "sub_workflow_id": self.sub_workflow_id,\n 525→ "input_bindings": self.input_bindings,\n 526→ "output_bindings": self.output_bindings\n 527→ }\n 528→ \n 529→ @classmethod\n 530→ def from_dict(cls, data: Dict[str, Any]) -> 'ReferenceNode':\n 531→ return cls(\n 532→ node_id=data["node_id"],\n 533→ sub_workflow_id=data["sub_workflow_id"],\n 534→ input_bindings=data.get("input_bindings", {}),\n 535→ output_bindings=data.get("output_bindings", {})\n 536→ )\n 537→\n 538→\n 539→# ============================================================================\n 540→# Contexte d'Exécution et Logging\n 541→# ============================================================================\n 542→\n 543→@dataclass\n 544→class LogEntry:\n 545→ """\n 546→ Entrée du log d'exécution unifié.\n 547→ \n 548→ Enregistre un événement pendant l'exécution d'une chaîne de workflows.\n 549→ """\n 550→ timestamp: datetime\n 551→ workflow_id: str\n 552→ node_id: str\n 553→ event_type: str # start, end, action, error, warning, info\n 554→ details: Dict[str, Any] = field(default_factory=dict)\n 555→ \n 556→ def to_dict(self) -> Dict[str, Any]:\n 557→ return {\n 558→ "timestamp": self.timestamp.isoformat(),\n 559→ "workflow_id": self.workflow_id,\n 560→ "node_id": self.node_id,\n 561→ "event_type": self.event_type,\n 
562→ "details": self.details\n 563→ }\n 564→ \n 565→ @classmethod\n 566→ def from_dict(cls, data: Dict[str, Any]) -> 'LogEntry':\n 567→ timestamp = datetime.now()\n 568→ if data.get("timestamp"):\n 569→ timestamp = datetime.fromisoformat(data["timestamp"])\n 570→ \n 571→ return cls(\n 572→ timestamp=timestamp,\n 573→ workflow_id=data["workflow_id"],\n 574→ node_id=data["node_id"],\n 575→ event_type=data["event_type"],\n 576→ details=data.get("details", {})\n 577→ )\n 578→\n 579→\n 580→@dataclass\n 581→class VariableOverride:\n 582→ """\n 583→ Enregistrement d'un override de variable.\n 584→ \n 585→ Utilisé pour tracer les conflits de noms de variables.\n 586→ """\n 587→ variable_name: str\n 588→ old_value: Any\n 589→ new_value: Any\n 590→ source_workflow: str\n 591→ timestamp: datetime = field(default_factory=datetime.now)\n 592→ \n 593→ def to_dict(self) -> Dict[str, Any]:\n 594→ return {\n 595→ "variable_name": self.variable_name,\n 596→ "old_value": self.old_value,\n 597→ "new_value": self.new_value,\n 598→ "source_workflow": self.source_workflow,\n 599→ "timestamp": self.timestamp.isoformat()\n 600→ }\n 601→ \n 602→ @classmethod\n 603→ def from_dict(cls, data: Dict[str, Any]) -> 'VariableOverride':\n 604→ timestamp = datetime.now()\n 605→ if data.get("timestamp"):\n 606→ timestamp = datetime.fromisoformat(data["timestamp"])\n 607→ \n 608→ return cls(\n 609→ variable_name=data["variable_name"],\n 610→ old_value=data.get("old_value"),\n 611→ new_value=data.get("new_value"),\n 612→ source_workflow=data["source_workflow"],\n 613→ timestamp=timestamp\n 614→ )\n 615→\n 616→\n 617→@dataclass\n 618→class ExecutionContext:\n 619→ """\n 620→ Contexte d'exécution partagé entre workflows d'une chaîne.\n 621→ \n 622→ Maintient l'état global pendant l'exécution.\n 623→ """\n 624→ chain_id: str\n 625→ current_workflow_id: str\n 626→ global_variables: Dict[str, Any] = field(default_factory=dict)\n 627→ execution_log: List[LogEntry] = field(default_factory=list)\n 628→ trigger_context: Optional[TriggerContext] = None\n 629→ started_at: datetime = field(default_factory=datetime.now)\n 630→ \n 631→ def to_dict(self) -> Dict[str, Any]:\n 632→ return {\n 633→ "chain_id": self.chain_id,\n 634→ "current_workflow_id": self.current_workflow_id,\n 635→ "global_variables": self.global_variables,\n 636→ "execution_log": [e.to_dict() for e in self.execution_log],\n 637→ "trigger_context": self.trigger_context.to_dict() if self.trigger_context else None,\n 638→ "started_at": self.started_at.isoformat()\n 639→ }\n 640→ \n 641→ @classmethod\n 642→ def from_dict(cls, data: Dict[str, Any]) -> 'ExecutionContext':\n 643→ log_entries = [LogEntry.from_dict(e) for e in data.get("execution_log", [])]\n 644→ trigger_ctx = None\n 645→ if data.get("trigger_context"):\n 646→ trigger_ctx = TriggerContext.from_dict(data["trigger_context"])\n 647→ \n 648→ started_at = datetime.now()\n 649→ if data.get("started_at"):\n 650→ started_at = datetime.fromisoformat(data["started_at"])\n 651→ \n 652→ return cls(\n 653→ chain_id=data["chain_id"],\n 654→ current_workflow_id=data["current_workflow_id"],\n 655→ global_variables=data.get("global_variables", {}),\n 656→ execution_log=log_entries,\n 657→ trigger_context=trigger_ctx,\n 658→ started_at=started_at\n 659→ )\n 660→\n 661→\n 662→# ============================================================================\n 663→# Validation\n 664→# ============================================================================\n 665→\n 666→@dataclass\n 667→class ValidationResult:\n 668→ """\n 669→ 
Résultat de validation d'une configuration.\n 670→ \n 671→ Utilisé pour valider les chaînes, dépendances, etc.\n 672→ """\n 673→ is_valid: bool\n 674→ errors: List[str] = field(default_factory=list)\n 675→ warnings: List[str] = field(default_factory=list)\n 676→ \n 677→ def to_dict(self) -> Dict[str, Any]:\n 678→ return {\n 679→ "is_valid": self.is_valid,\n 680→ "errors": self.errors,\n 681→ "warnings": self.warnings\n 682→ }\n 683→ \n 684→ @classmethod\n 685→ def from_dict(cls, data: Dict[str, Any]) -> 'ValidationResult':\n 686→ return cls(\n 687→ is_valid=data["is_valid"],\n 688→ errors=data.get("errors", []),\n 689→ warnings=data.get("warnings", [])\n 690→ )\n 691→\n 692→\n 693→# ============================================================================\n 694→# Fusion de Workflows\n 695→# ============================================================================\n 696→\n 697→@dataclass\n 698→class NodeConflict:\n 699→ """\n 700→ Conflit entre deux nodes lors d'une fusion.\n 701→ \n 702→ Indique que deux workflows ont des actions différentes sur le même node.\n 703→ """\n 704→ node_id: str\n 705→ action_a: str\n 706→ action_b: str\n 707→ \n 708→ def to_dict(self) -> Dict[str, Any]:\n 709→ return {\n 710→ "node_id": self.node_id,\n 711→ "action_a": self.action_a,\n 712→ "action_b": self.action_b\n 713→ }\n 714→ \n 715→ @classmethod\n 716→ def from_dict(cls, data: Dict[str, Any]) -> 'NodeConflict':\n 717→ return cls(\n 718→ node_id=data["node_id"],\n 719→ action_a=data["action_a"],\n 720→ action_b=data["action_b"]\n 721→ )\n 722→\n 723→\n 724→@dataclass\n 725→class MergeCandidate:\n 726→ """\n 727→ Candidat à la fusion de deux workflows.\n 728→ \n 729→ Contient les informations sur la similarité et les conflits potentiels.\n 730→ """\n 731→ workflow_a_id: str\n 732→ workflow_b_id: str\n 733→ similarity_score: float\n 734→ shared_nodes: List[str] = field(default_factory=list)\n 735→ conflicts: List[NodeConflict] = field(default_factory=list)\n 736→ \n 737→ def to_dict(self) -> Dict[str, Any]:\n 738→ return {\n 739→ "workflow_a_id": self.workflow_a_id,\n 740→ "workflow_b_id": self.workflow_b_id,\n 741→ "similarity_score": self.similarity_score,\n 742→ "shared_nodes": self.shared_nodes,\n 743→ "conflicts": [c.to_dict() for c in self.conflicts]\n 744→ }\n 745→ \n 746→ @classmethod\n 747→ def from_dict(cls, data: Dict[str, Any]) -> 'MergeCandidate':\n 748→ conflicts = [NodeConflict.from_dict(c) for c in data.get("conflicts", [])]\n 749→ return cls(\n 750→ workflow_a_id=data["workflow_a_id"],\n 751→ workflow_b_id=data["workflow_b_id"],\n 752→ similarity_score=data["similarity_score"],\n 753→ shared_nodes=data.get("shared_nodes", []),\n 754→ conflicts=conflicts\n 755→ )\n 756→\n 757→\n 758→# ============================================================================\n 759→# Extraction de Séquences\n 760→# ============================================================================\n 761→\n 762→@dataclass\n 763→class SequenceOccurrence:\n 764→ """\n 765→ Occurrence d'une séquence dans un workflow.\n 766→ \n 767→ Indique où une séquence commune a été trouvée.\n 768→ """\n 769→ workflow_id: str\n 770→ start_index: int\n 771→ end_index: int\n 772→ \n 773→ def to_dict(self) -> Dict[str, Any]:\n 774→ return {\n 775→ "workflow_id": self.workflow_id,\n 776→ "start_index": self.start_index,\n 777→ "end_index": self.end_index\n 778→ }\n 779→ \n 780→ @classmethod\n 781→ def from_dict(cls, data: Dict[str, Any]) -> 'SequenceOccurrence':\n 782→ return cls(\n 783→ workflow_id=data["workflow_id"],\n 784→ 
start_index=data["start_index"],\n 785→ end_index=data["end_index"]\n 786→ )\n 787→\n 788→\n 789→@dataclass\n 790→class CommonSequence:\n 791→ """\n 792→ Séquence commune détectée dans plusieurs workflows.\n 793→ \n 794→ Candidate à l'extraction en sous-workflow.\n 795→ """\n 796→ nodes: List[str]\n 797→ occurrences: List[SequenceOccurrence] = field(default_factory=list)\n 798→ priority: str = "medium" # high, medium, low\n 799→ \n 800→ def to_dict(self) -> Dict[str, Any]:\n 801→ return {\n 802→ "nodes": self.nodes,\n 803→ "occurrences": [o.to_dict() for o in self.occurrences],\n 804→ "priority": self.priority\n 805→ }\n 806→ \n 807→ @classmethod\n 808→ def from_dict(cls, data: Dict[str, Any]) -> 'CommonSequence':\n 809→ occurrences = [SequenceOccurrence.from_dict(o) for o in data.get("occurrences", [])]\n 810→ return cls(\n 811→ nodes=data["nodes"],\n 812→ occurrences=occurrences,\n 813→ priority=data.get("priority", "medium")\n 814→ )\n 815→\n 816→\n 817→# ============================================================================\n 818→# Configuration Complète de Composition\n 819→# ============================================================================\n 820→\n 821→@dataclass\n 822→class WorkflowCompositionConfig:\n 823→ """\n 824→ Configuration complète de composition pour un ensemble de workflows.\n 825→ \n 826→ Regroupe toutes les configurations de chaînage, boucles, conditions, etc.\n 827→ """\n 828→ chains: List[ChainConfig] = field(default_factory=list)\n 829→ sub_workflows: Dict[str, SubWorkflowDefinition] = field(default_factory=dict)\n 830→ references: Dict[str, List[ReferenceNode]] = field(default_factory=dict)\n 831→ loops: Dict[str, LoopConfig] = field(default_factory=dict)\n 832→ conditionals: Dict[str, ConditionalNode] = field(default_factory=dict)\n 833→ triggers: List[TriggerConfig] = field(default_factory=list)\n 834→ \n 835→ def to_dict(self) -> Dict[str, Any]:\n 836→ return {\n 837→ "chains": [c.to_dict() for c in self.chains],\n 838→ "sub_workflows": {k: v.to_dict() for k, v in self.sub_workflows.items()},\n 839→ "references": {k: [r.to_dict() for r in v] for k, v in self.references.items()},\n 840→ "loops": {k: v.to_dict() for k, v in self.loops.items()},\n 841→ "conditionals": {k: v.to_dict() for k, v in self.conditionals.items()},\n 842→ "triggers": [t.to_dict() for t in self.triggers]\n 843→ }\n 844→ \n 845→ @classmethod\n 846→ def from_dict(cls, data: Dict[str, Any]) -> 'WorkflowCompositionConfig':\n 847→ chains = [ChainConfig.from_dict(c) for c in data.get("chains", [])]\n 848→ sub_workflows = {\n 849→ k: SubWorkflowDefinition.from_dict(v) \n 850→ for k, v in data.get("sub_workflows", {}).items()\n 851→ }\n 852→ references = {\n 853→ k: [ReferenceNode.from_dict(r) for r in v]\n 854→ for k, v in data.get("references", {}).items()\n 855→ }\n 856→ loops = {\n 857→ k: LoopConfig.from_dict(v)\n 858→ for k, v in data.get("loops", {}).items()\n 859→ }\n 860→ conditionals = {\n 861→ k: ConditionalNode.from_dict(v)\n 862→ for k, v in data.get("conditionals", {}).items()\n 863→ }\n 864→ triggers = [trigger_from_dict(t) for t in data.get("triggers", [])]\n 865→ \n 866→ return cls(\n 867→ chains=chains,\n 868→ sub_workflows=sub_workflows,\n 869→ references=references,\n 870→ loops=loops,\n 871→ conditionals=conditionals,\n 872→ triggers=triggers\n 873→ )\n 874→\n 875→\n 876→# ============================================================================\n 877→# Résultats d'Exécution\n 878→# ============================================================================\n 
879→\n 880→@dataclass\n 881→class ChainResult:\n 882→ """\n 883→ Résultat de l'exécution d'une chaîne de workflows.\n 884→ """\n 885→ chain_id: str\n 886→ success: bool\n 887→ workflows_executed: List[str] = field(default_factory=list)\n 888→ final_variables: Dict[str, Any] = field(default_factory=dict)\n 889→ execution_log: List[LogEntry] = field(default_factory=list)\n 890→ error_message: Optional[str] = None\n 891→ started_at: datetime = field(default_factory=datetime.now)\n 892→ completed_at: Optional[datetime] = None\n 893→ \n 894→ def to_dict(self) -> Dict[str, Any]:\n 895→ return {\n 896→ "chain_id": self.chain_id,\n 897→ "success": self.success,\n 898→ "workflows_executed": self.workflows_executed,\n 899→ "final_variables": self.final_variables,\n 900→ "execution_log": [e.to_dict() for e in self.execution_log],\n 901→ "error_message": self.error_message,\n 902→ "started_at": self.started_at.isoformat(),\n 903→ "completed_at": self.completed_at.isoformat() if self.completed_at else None\n 904→ }\n 905→ \n 906→ @classmethod\n 907→ def from_dict(cls, data: Dict[str, Any]) -> 'ChainResult':\n 908→ log_entries = [LogEntry.from_dict(e) for e in data.get("execution_log", [])]\n 909→ \n 910→ started_at = datetime.now()\n 911→ if data.get("started_at"):\n 912→ started_at = datetime.fromisoformat(data["started_at"])\n 913→ \n 914→ completed_at = None\n 915→ if data.get("completed_at"):\n 916→ completed_at = datetime.fromisoformat(data["completed_at"])\n 917→ \n 918→ return cls(\n 919→ chain_id=data["chain_id"],\n 920→ success=data["success"],\n 921→ workflows_executed=data.get("workflows_executed", []),\n 922→ final_variables=data.get("final_variables", {}),\n 923→ execution_log=log_entries,\n 924→ error_message=data.get("error_message"),\n 925→ started_at=started_at,\n 926→ completed_at=completed_at\n 927→ )\n 928→\n 929→\n 930→@dataclass\n 931→class IterationResult:\n 932→ """\n 933→ Résultat d'une itération de boucle.\n 934→ """\n 935→ loop_id: str\n 936→ iteration: int\n 937→ success: bool\n 938→ should_continue: bool\n 939→ error_message: Optional[str] = None\n 940→ \n 941→ def to_dict(self) -> Dict[str, Any]:\n 942→ return {\n 943→ "loop_id": self.loop_id,\n 944→ "iteration": self.iteration,\n 945→ "success": self.success,\n 946→ "should_continue": self.should_continue,\n 947→ "error_message": self.error_message\n 948→ }\n 949→ \n 950→ @classmethod\n 951→ def from_dict(cls, data: Dict[str, Any]) -> 'IterationResult':\n 952→ return cls(\n 953→ loop_id=data["loop_id"],\n 954→ iteration=data["iteration"],\n 955→ success=data["success"],\n 956→ should_continue=data["should_continue"],\n 957→ error_message=data.get("error_message")\n 958→ )\n 959→\n 960→\n 961→@dataclass\n 962→class ExecutionResult:\n 963→ """\n 964→ Résultat générique d'exécution (sous-workflow, action, etc.).\n 965→ """\n 966→ success: bool\n 967→ output_values: Dict[str, Any] = field(default_factory=dict)\n 968→ error_message: Optional[str] = None\n 969→ error_context: Optional[Dict[str, Any]] = None\n 970→ \n 971→ def to_dict(self) -> Dict[str, Any]:\n 972→ return {\n 973→ "success": self.success,\n 974→ "output_values": self.output_values,\n 975→ "error_message": self.error_message,\n 976→ "error_context": self.error_context\n 977→ }\n 978→ \n 979→ @classmethod\n 980→ def from_dict(cls, data: Dict[str, Any]) -> 'ExecutionResult':\n 981→ return cls(\n 982→ success=data["success"],\n 983→ output_values=data.get("output_values", {}),\n 984→ error_message=data.get("error_message"),\n 985→ 
error_context=data.get("error_context")\n 986→ )\n 987→"}]},"uuid":"344960d3-48e3-40a6-b0e0-d76abe19ffbc","timestamp":"2026-03-14T21:15:16.524Z","sourceToolAssistantUUID":"81d6cb2f-8c11-496b-917d-1a5f676e95fc"} {"parentUuid":"344960d3-48e3-40a6-b0e0-d76abe19ffbc","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_011dgNsq4ex2V2dJ1prfZqvd","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01N3a4Q5Zhv2Re9qBSokyFny","name":"Read","input":{"file_path":"/home/dom/ai/rpa_vision_v3/core/workflow/loop_executor.py"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":923,"cache_read_input_tokens":110420,"output_tokens":135,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":923},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fY8trukCFqLqmt9G82","type":"assistant","uuid":"5d60c146-eece-4229-bf4e-6ee747bd79ce","timestamp":"2026-03-14T21:15:16.730Z"} {"parentUuid":"5d60c146-eece-4229-bf4e-6ee747bd79ce","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"progress","data":{"type":"hook_progress","hookEvent":"PostToolUse","hookName":"PostToolUse:Read","command":"callback"},"parentToolUseID":"toolu_01N3a4Q5Zhv2Re9qBSokyFny","toolUseID":"toolu_01N3a4Q5Zhv2Re9qBSokyFny","timestamp":"2026-03-14T21:15:16.737Z","uuid":"2a21ca4c-c2e9-423b-9dcf-5da8dbc48ec9"} {"parentUuid":"5d60c146-eece-4229-bf4e-6ee747bd79ce","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"tool_use_id":"toolu_01N3a4Q5Zhv2Re9qBSokyFny","type":"tool_result","content":" 1→"""\n 2→Loop Executor - Gestion de l'exécution des boucles dans les workflows\n 3→\n 4→Ce module gère l'exécution des boucles (count et condition) avec:\n 5→- Compteur d'itérations\n 6→- Évaluation des conditions de sortie\n 7→- Garde de sécurité (limite max d'itérations)\n 8→"""\n 9→\n 10→import logging\n 11→from typing import Dict, Any, Optional\n 12→from datetime import datetime\n 13→\n 14→from .composition_models import (\n 15→ LoopConfig, LoopState, IterationResult, VisualCondition\n 16→)\n 17→\n 18→logger = logging.getLogger(__name__)\n 19→\n 20→\n 21→class LoopSafetyLimitError(Exception):\n 22→ """Erreur levée quand la limite de sécurité est atteinte."""\n 23→ \n 24→ def __init__(self, loop_id: str, iterations: int, limit: int):\n 25→ self.loop_id = loop_id\n 26→ self.iterations = iterations\n 27→ self.limit = limit\n 28→ super().__init__(\n 29→ f"Boucle '{loop_id}' a atteint la limite de sécurité: "\n 30→ f"{iterations} itérations (limite: {limit})"\n 31→ )\n 32→\n 33→\n 34→class LoopExecutor:\n 35→ """\n 36→ Gère l'exécution des boucles dans les workflows.\n 37→ \n 38→ Supporte deux types de boucles:\n 39→ - count: Nombre fixe d'itérations\n 40→ - condition: Itère jusqu'à ce qu'une 
condition visuelle soit vraie\n 41→ """\n 42→ \n 43→ DEFAULT_SAFETY_LIMIT = 1000\n 44→\n 45→ \n 46→ def __init__(self, safety_limit: int = DEFAULT_SAFETY_LIMIT):\n 47→ """\n 48→ Initialise le LoopExecutor.\n 49→ \n 50→ Args:\n 51→ safety_limit: Limite maximale d'itérations par défaut\n 52→ """\n 53→ self.safety_limit = safety_limit\n 54→ self._active_loops: Dict[str, LoopState] = {}\n 55→ self._configs: Dict[str, LoopConfig] = {}\n 56→ \n 57→ def start_loop(self, config: LoopConfig) -> LoopState:\n 58→ """\n 59→ Démarre une nouvelle boucle.\n 60→ \n 61→ Args:\n 62→ config: Configuration de la boucle\n 63→ \n 64→ Returns:\n 65→ État initial de la boucle\n 66→ """\n 67→ state = LoopState(\n 68→ loop_id=config.loop_id,\n 69→ current_iteration=0,\n 70→ started_at=datetime.now(),\n 71→ last_condition_result=None\n 72→ )\n 73→ \n 74→ self._active_loops[config.loop_id] = state\n 75→ self._configs[config.loop_id] = config\n 76→ \n 77→ logger.info(f"Boucle '{config.loop_id}' démarrée (type: {config.loop_type})")\n 78→ return state\n 79→ \n 80→ def execute_iteration(self, loop_id: str) -> IterationResult:\n 81→ """\n 82→ Exécute une itération de la boucle.\n 83→ \n 84→ Args:\n 85→ loop_id: ID de la boucle\n 86→ \n 87→ Returns:\n 88→ Résultat de l'itération\n 89→ \n 90→ Raises:\n 91→ LoopSafetyLimitError: Si la limite de sécurité est atteinte\n 92→ """\n 93→ if loop_id not in self._active_loops:\n 94→ return IterationResult(\n 95→ loop_id=loop_id,\n 96→ iteration=0,\n 97→ success=False,\n 98→ should_continue=False,\n 99→ error_message=f"Boucle '{loop_id}' non trouvée"\n 100→ )\n 101→ \n 102→ state = self._active_loops[loop_id]\n 103→ config = self._configs[loop_id]\n 104→ \n 105→ # Incrémenter le compteur\n 106→ new_iteration = self.increment_counter(loop_id)\n 107→ \n 108→ # Vérifier la limite de sécurité\n 109→ effective_limit = min(config.safety_limit, self.safety_limit)\n 110→ if new_iteration > effective_limit:\n 111→ logger.warning(\n 112→ f"Boucle '{loop_id}' a atteint la limite de sécurité ({effective_limit})"\n 113→ )\n 114→ raise LoopSafetyLimitError(loop_id, new_iteration, effective_limit)\n 115→ \n 116→ # Déterminer si on doit continuer\n 117→ should_continue = self.should_continue(loop_id)\n 118→ \n 119→ return IterationResult(\n 120→ loop_id=loop_id,\n 121→ iteration=new_iteration,\n 122→ success=True,\n 123→ should_continue=should_continue\n 124→ )\n 125→\n 126→ \n 127→ def should_continue(self, loop_id: str, condition_result: Optional[bool] = None) -> bool:\n 128→ """\n 129→ Évalue si la boucle doit continuer.\n 130→ \n 131→ Args:\n 132→ loop_id: ID de la boucle\n 133→ condition_result: Résultat de l'évaluation de la condition (pour type condition)\n 134→ \n 135→ Returns:\n 136→ True si la boucle doit continuer, False sinon\n 137→ """\n 138→ if loop_id not in self._active_loops:\n 139→ return False\n 140→ \n 141→ state = self._active_loops[loop_id]\n 142→ config = self._configs[loop_id]\n 143→ \n 144→ # Mettre à jour le résultat de condition si fourni\n 145→ if condition_result is not None:\n 146→ state.last_condition_result = condition_result\n 147→ \n 148→ if config.loop_type == "count":\n 149→ # Boucle par compteur\n 150→ if config.max_iterations is None:\n 151→ return True # Pas de limite (dangereux, mais la garde de sécurité protège)\n 152→ return state.current_iteration < config.max_iterations\n 153→ \n 154→ elif config.loop_type == "condition":\n 155→ # Boucle par condition\n 156→ # Continue tant que la condition de sortie n'est pas vraie\n 157→ if state.last_condition_result is 
None:\n 158→ return True # Pas encore évalué, continuer\n 159→ return not state.last_condition_result # Sortir quand la condition est vraie\n 160→ \n 161→ return False\n 162→ \n 163→ def increment_counter(self, loop_id: str) -> int:\n 164→ """\n 165→ Incrémente le compteur de la boucle.\n 166→ \n 167→ Args:\n 168→ loop_id: ID de la boucle\n 169→ \n 170→ Returns:\n 171→ Nouvelle valeur du compteur\n 172→ """\n 173→ if loop_id not in self._active_loops:\n 174→ return 0\n 175→ \n 176→ state = self._active_loops[loop_id]\n 177→ state.current_iteration += 1\n 178→ \n 179→ logger.debug(f"Boucle '{loop_id}': itération {state.current_iteration}")\n 180→ return state.current_iteration\n 181→ \n 182→ def get_state(self, loop_id: str) -> Optional[LoopState]:\n 183→ """Retourne l'état actuel d'une boucle."""\n 184→ return self._active_loops.get(loop_id)\n 185→ \n 186→ def get_iteration(self, loop_id: str) -> int:\n 187→ """Retourne le numéro d'itération actuel."""\n 188→ state = self._active_loops.get(loop_id)\n 189→ return state.current_iteration if state else 0\n 190→ \n 191→ def end_loop(self, loop_id: str) -> Optional[LoopState]:\n 192→ """\n 193→ Termine une boucle et retourne son état final.\n 194→ \n 195→ Args:\n 196→ loop_id: ID de la boucle\n 197→ \n 198→ Returns:\n 199→ État final de la boucle ou None si non trouvée\n 200→ """\n 201→ state = self._active_loops.pop(loop_id, None)\n 202→ self._configs.pop(loop_id, None)\n 203→ \n 204→ if state:\n 205→ logger.info(\n 206→ f"Boucle '{loop_id}' terminée après {state.current_iteration} itérations"\n 207→ )\n 208→ \n 209→ return state\n 210→ \n 211→ def is_active(self, loop_id: str) -> bool:\n 212→ """Vérifie si une boucle est active."""\n 213→ return loop_id in self._active_loops\n 214→ \n 215→ def get_active_loops(self) -> Dict[str, LoopState]:\n 216→ """Retourne toutes les boucles actives."""\n 217→ return self._active_loops.copy()\n 218→ \n 219→ def to_dict(self) -> Dict[str, Any]:\n 220→ """Sérialise l'état des boucles."""\n 221→ return {\n 222→ "safety_limit": self.safety_limit,\n 223→ "active_loops": {k: v.to_dict() for k, v in self._active_loops.items()},\n 224→ "configs": {k: v.to_dict() for k, v in self._configs.items()}\n 225→ }\n 226→ \n 227→ @classmethod\n 228→ def from_dict(cls, data: Dict[str, Any]) -> 'LoopExecutor':\n 229→ """Désérialise l'état des boucles."""\n 230→ executor = cls(safety_limit=data.get("safety_limit", cls.DEFAULT_SAFETY_LIMIT))\n 231→ \n 232→ for loop_id, state_data in data.get("active_loops", {}).items():\n 233→ executor._active_loops[loop_id] = LoopState.from_dict(state_data)\n 234→ \n 235→ for loop_id, config_data in data.get("configs", {}).items():\n 236→ executor._configs[loop_id] = LoopConfig.from_dict(config_data)\n 237→ \n 238→ return executor\n 239→"}]},"uuid":"2fb302e2-0589-460e-ba91-9ac20c38e6b9","timestamp":"2026-03-14T21:15:16.736Z","sourceToolAssistantUUID":"5d60c146-eece-4229-bf4e-6ee747bd79ce"} {"parentUuid":"2fb302e2-0589-460e-ba91-9ac20c38e6b9","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01WW297niaHDCxDbdLcyBMLs","type":"message","role":"assistant","content":[{"type":"text","text":"Now let me generate the Mermaid diagram for the 
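For orientation, a minimal usage sketch of this API follows. It drives a hypothetical 3-iteration count loop; `LoopConfig`'s exact constructor arguments are assumed from the fields used in the file above, not confirmed against `composition_models.py`.

```python
# Sketch (assumed LoopConfig signature): a count loop driven by LoopExecutor.
from core.workflow.composition_models import LoopConfig
from core.workflow.loop_executor import LoopExecutor

executor = LoopExecutor(safety_limit=100)
config = LoopConfig(loop_id="demo", loop_type="count", max_iterations=3)

executor.start_loop(config)
while True:
    result = executor.execute_iteration("demo")  # increments, checks the guard
    print(f"iteration {result.iteration}, continue={result.should_continue}")
    if not result.should_continue:
        break
final_state = executor.end_loop("demo")  # logs and returns the final LoopState
```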
architecture."}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":15676,"cache_read_input_tokens":111343,"cache_creation":{"ephemeral_5m_input_tokens":15676,"ephemeral_1h_input_tokens":0},"output_tokens":4,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fYTmfW8g6ebBpnWCFZ","type":"assistant","uuid":"436511e2-5fea-4685-a4ee-3ba4031eca7f","timestamp":"2026-03-14T21:15:22.647Z"} {"parentUuid":"436511e2-5fea-4685-a4ee-3ba4031eca7f","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01WW297niaHDCxDbdLcyBMLs","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01DE5FdR375WTciFL6tCmMus","name":"ToolSearch","input":{"query":"select:mcp__claude_ai_Mermaid_Chart__validate_and_render_mermaid_diagram","max_results":1},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":15676,"cache_read_input_tokens":111343,"output_tokens":113,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":15676},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fYTmfW8g6ebBpnWCFZ","type":"assistant","uuid":"4308fb99-9ec2-4ddc-9bf7-c2fc06041255","timestamp":"2026-03-14T21:15:23.548Z"} {"parentUuid":"4308fb99-9ec2-4ddc-9bf7-c2fc06041255","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"type":"tool_result","tool_use_id":"toolu_01DE5FdR375WTciFL6tCmMus","content":[{"type":"tool_reference","tool_name":"mcp__claude_ai_Mermaid_Chart__validate_and_render_mermaid_diagram"}]}]},"uuid":"7e6dce52-282c-42cf-8b6d-bc96447bd44d","timestamp":"2026-03-14T21:15:23.550Z","sourceToolAssistantUUID":"4308fb99-9ec2-4ddc-9bf7-c2fc06041255"} {"parentUuid":"7e6dce52-282c-42cf-8b6d-bc96447bd44d","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01Y5CbzzhD4pZNB1tswKymfM","type":"message","role":"assistant","content":[{"type":"text","text":"\n\nI now have a thorough understanding of the entire codebase. 
Let me generate the architecture diagrams and then produce the complete plan."}],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":3,"cache_creation_input_tokens":511,"cache_read_input_tokens":127019,"cache_creation":{"ephemeral_5m_input_tokens":511,"ephemeral_1h_input_tokens":0},"output_tokens":2,"service_tier":"standard","inference_geo":"not_available"}},"requestId":"req_011CZ3fYxzpiJVk6ihJZEbgL","type":"assistant","uuid":"d54b44db-003a-45be-96a5-46065185bc79","timestamp":"2026-03-14T21:15:28.730Z"} {"parentUuid":"d54b44db-003a-45be-96a5-46065185bc79","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01Y5CbzzhD4pZNB1tswKymfM","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01NTd5H6AXeLBNSaSfnew2W7","name":"mcp__claude_ai_Mermaid_Chart__validate_and_render_mermaid_diagram","input":{"prompt":"Architecture globale du scraping visuel structuré : flux de données depuis le Shadow mode jusqu'au stockage DB, en passant par le pipeline d'extraction VLM","mermaidCode":"flowchart TB\n subgraph AgentV1["Agent V1 (Windows, no GPU)"]\n CAP[Capture ecran + events]\n EXEC[ActionExecutorV1]\n POLL[Poll replay/next]\n end\n\n subgraph StreamServer["Streaming Server :5005 (Linux, RTX 5070)"]\n API[api_stream.py]\n SP[StreamProcessor]\n SA[ScreenAnalyzer]\n VLM[OllamaClient VLM]\n \n subgraph NewModules["Nouveaux modules"]\n EXT["ExtractionEngine"]\n ITER["IterationController"]\n STORE["DataStore (SQLite)"]\n SCHEMA["ExtractionSchema"]\n end\n end\n\n subgraph Chat["Agent Chat :5004"]\n INTENT[IntentParser]\n PLAN[AutonomousPlanner]\n CONV[ConversationManager]\n end\n\n subgraph Storage["Stockage"]\n WF[Workflows JSON]\n DB[(extraction_db.sqlite)]\n RUNS[extraction_runs/]\n end\n\n CAP -->|screenshots + events| API\n API --> SP\n SP --> SA\n SA -->|ScreenState + OCR| EXT\n EXT -->|prompt structure| VLM\n VLM -->|JSON structure| EXT\n EXT --> STORE\n STORE --> DB\n\n ITER -->|next item| API\n API -->|action queue| POLL\n POLL --> EXEC\n EXEC -->|result + screenshot| API\n\n Chat -->|start extraction| API\n INTENT --> PLAN\n PLAN -->|extraction workflow| API\n\n API --> WF\n STORE --> RUNS\n","diagramType":"flowchart","clientName":"claude"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":3,"cache_creation_input_tokens":511,"cache_read_input_tokens":127019,"output_tokens":632,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":511},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fYxzpiJVk6ihJZEbgL","type":"assistant","uuid":"76372708-56e6-494d-9276-3c81d57a5869","timestamp":"2026-03-14T21:15:39.999Z"} {"parentUuid":"76372708-56e6-494d-9276-3c81d57a5869","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"type":"tool_result","content":"Streamable HTTP error: Error POSTing to endpoint: event: message\r\ndata: {"jsonrpc":"2.0","error":{"code":-32600,"message":"Invalid 
Request"},"id":2}","is_error":true,"tool_use_id":"toolu_01NTd5H6AXeLBNSaSfnew2W7"}]},"uuid":"8de35c99-f8fd-47d8-9f7d-d992a743796c","timestamp":"2026-03-14T21:15:40.955Z","toolUseResult":"Error: Streamable HTTP error: Error POSTing to endpoint: event: message\r\ndata: {"jsonrpc":"2.0","error":{"code":-32600,"message":"Invalid Request"},"id":2}","sourceToolAssistantUUID":"76372708-56e6-494d-9276-3c81d57a5869"} {"parentUuid":"8de35c99-f8fd-47d8-9f7d-d992a743796c","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_012ru94vE1WncT2MXPXwzB8H","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01XtbwSBJTjy5HFjKX7VN51F","name":"mcp__claude_ai_Mermaid_Chart__validate_and_render_mermaid_diagram","input":{"prompt":"Architecture globale du scraping visuel structure","mermaidCode":"flowchart TB\n subgraph AgentV1["Agent V1 Windows"]\n CAP["Capture ecran"]\n EXEC["ActionExecutorV1"]\n POLL["Poll replay next"]\n end\n subgraph Server["Streaming Server 5005"]\n API["api_stream.py"]\n SP["StreamProcessor"]\n SA["ScreenAnalyzer"]\n VLM["OllamaClient VLM"]\n EXT["ExtractionEngine"]\n ITER["IterationController"]\n STORE["DataStore SQLite"]\n end\n subgraph Chat["Agent Chat 5004"]\n INTENT["IntentParser"]\n PLAN["AutonomousPlanner"]\n end\n subgraph Storage["Stockage"]\n WF["Workflows JSON"]\n DB[("extraction_db")]\n end\n CAP -->|screenshots| API\n API --> SP\n SP --> SA\n SA --> EXT\n EXT --> VLM\n VLM --> EXT\n EXT --> STORE\n STORE --> DB\n ITER --> API\n API --> POLL\n POLL --> EXEC\n EXEC --> API\n Chat --> API\n API --> WF\n","diagramType":"flowchart","clientName":"claude"},"caller":{"type":"direct"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":737,"cache_read_input_tokens":127530,"output_tokens":443,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":737},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3faFYWcDsLctQtPEosY","type":"assistant","uuid":"4d0c6c6e-410c-4e71-9ec9-042e96dac140","timestamp":"2026-03-14T21:15:49.157Z"} {"parentUuid":"4d0c6c6e-410c-4e71-9ec9-042e96dac140","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3/agent_v0","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","type":"user","message":{"role":"user","content":[{"type":"tool_result","content":"Streamable HTTP error: Error POSTing to endpoint: event: message\r\ndata: {"jsonrpc":"2.0","error":{"code":-32600,"message":"Invalid Request"},"id":3}","is_error":true,"tool_use_id":"toolu_01XtbwSBJTjy5HFjKX7VN51F"}]},"uuid":"852b44d7-0c60-4b44-b01f-11a17485923f","timestamp":"2026-03-14T21:15:50.283Z","toolUseResult":"Error: Streamable HTTP error: Error POSTing to endpoint: event: message\r\ndata: {"jsonrpc":"2.0","error":{"code":-32600,"message":"Invalid Request"},"id":3}","sourceToolAssistantUUID":"4d0c6c6e-410c-4e71-9ec9-042e96dac140"} 
{"parentUuid":"852b44d7-0c60-4b44-b01f-11a17485923f","isSidechain":true,"userType":"external","cwd":"/home/dom/ai/rpa_vision_v3","sessionId":"eec8c668-e1f8-4599-9dd5-42170a166ba4","version":"2.1.72","gitBranch":"master","agentId":"a44ee2e574a8117a1","slug":"noble-roaming-floyd","message":{"model":"claude-opus-4-6","id":"msg_01WPVn22tg27JyHuVTpmXeCk","type":"message","role":"assistant","content":[{"type":"text","text":"The Mermaid tool appears to be unavailable. I will include the diagrams as text in the plan. I have now gathered everything I need. Here is the complete architecture plan.\n\n---\n\n# Plan d'Architecture : Scraping Visuel Structure\n\n## 1. Vue d'ensemble et positionnement dans l'existant\n\nLe systeme existant a deja tous les composants fondamentaux pour cette fonctionnalite :\n\n- Pipeline d'analyse : ScreenAnalyzer (OCR docTR + UIDetector + VLM qwen2.5vl) produit des ScreenState enrichis (textes detectes, elements UI, embeddings).\n- Execution distante : Le cycle api_stream.py replay queue / ActionExecutorV1 poll+execute permet d'envoyer des ordres a l'Agent V1 Windows et de recevoir les screenshots de resultat.\n- Resolution visuelle : L'endpoint /replay/resolve_target sait deja trouver un element UI par texte/role dans un screenshot.\n- Boucles : LoopConfig, LoopState, LoopExecutor existent deja dans core/workflow/composition_models.py et loop_executor.py. Ils supportent des boucles count et condition.\n- VLM : OllamaClient dans core/detection/ollama_client.py sait envoyer une image + prompt a Ollama et recevoir du JSON structure (force_json=True).\n- Variables : Le systeme _substitute_variables dans api_stream.py gere deja ${var} dans les textes d'actions.\n\nCe qu'il manque : un type d'action "extract" qui, au lieu de cliquer ou taper, lit l'ecran et structure les donnees, un schema d'extraction configurable, un controleur d'iteration specifique au scraping, et un stockage base de donnees.\n\n---\n\n## 2. Nouveau type d'action "extract" dans les workflows\n\n### 2.1 Extension du modele ActionType\n\nDans /home/dom/ai/rpa_vision_v3/core/models/workflow_graph.py, la classe ActionType (ligne 32) definit actuellement :\n\npython\nclass ActionType(str, Enum):\n MOUSE_CLICK = \"mouse_click\"\n TEXT_INPUT = \"text_input\"\n KEY_PRESS = \"key_press\"\n COMPOUND = \"compound\"\n\n\nAjouter deux nouveaux types :\n\n\nEXTRACT = \"extract\" # Lire l'ecran et extraire des donnees structurees\nEXTRACT_LIST = \"extract_list\" # Lire une liste d'elements visibles (table, liste de resultats)\n\n\n### 2.2 Structure JSON d'un edge "extract" dans un workflow\n\nL'action extract est un edge comme les autres. Elle a un from_node (l'ecran a lire) et un to_node (l'ecran apres lecture, souvent le meme). Sa particularite : elle ne modifie pas l'UI, elle lit l'ecran.\n\njson\n{\n \"edge_id\": \"e_extract_patient\",\n \"from_node\": \"patient_detail\",\n \"to_node\": \"patient_detail\",\n \"action\": {\n \"type\": \"extract\",\n \"target\": {\n \"by_role\": \"screen\",\n \"selection_policy\": \"full_screen\"\n },\n \"parameters\": {\n \"schema_id\": \"schema_patient_dpi\",\n \"extraction_mode\": \"vlm\",\n \"store_result\": true,\n \"variable_prefix\": \"patient\"\n }\n }\n}\n\n\nLe champ parameters.schema_id fait reference a un schema d'extraction (voir section 3). Le variable_prefix permet de stocker les resultats dans les variables du workflow (${patient.nom}, ${patient.nip}, etc.) 
pour les reutiliser dans les etapes suivantes.\n\n### 2.3 Integration dans le replay pipeline\n\nDans /home/dom/ai/rpa_vision_v3/agent_v0/server_v1/api_stream.py, la fonction _edge_to_normalized_actions (ligne 433) doit etre etendue pour generer une action normalisee de type \"extract\". Contrairement aux actions physiques (click, type), l'action extract ne sera pas executee par l'Agent V1. C'est le serveur qui la traite directement en analysant le screenshot courant.\n\nLe flux est :\n\n\n api_stream.py Agent V1\n | |\n 1. Enqueue action | {\"type\":\"extract\", ...} |\n type=extract |------------------------------------>|\n | |\n 2. Agent V1 recoit | Capture screenshot, |\n l'action | POST /replay/result |\n | avec screenshot_b64 |\n |<------------------------------------|\n | |\n 3. Serveur recoit | ExtractionEngine.extract( |\n le screenshot | screenshot, schema) |\n + action extract | |\n | |\n 4. Resultat stocke | DataStore.insert_row(...) |\n en DB | |\n\n\nLe point critique : quand le serveur recoit le ReplayResultReport pour une action extract, il doit intercepter le screenshot joint et le passer au moteur d'extraction au lieu de simplement logger le resultat. Cela necessite une extension de report_action_result dans api_stream.py (ligne 699).\n\nAlternative preferee (plus propre) : l'action extract n'est jamais envoyee a l'Agent V1. Le serveur la detecte dans get_next_action (ligne 652), capture l'analyse du dernier screenshot recu de l'Agent, et fait l'extraction cote serveur. L'Agent V1 ne sait meme pas qu'il y a eu une extraction.\n\n---\n\n## 3. Schema d'extraction configurable\n\n### 3.1 Nouveau fichier : core/extraction/schema.py\n\nLe schema d'extraction definit quels champs extraire et comment les valider. Il est cree par l'utilisateur via le Chat (mode Shadow) ou le VWB (Visual Workflow Builder).\n\nStructure JSON du schema :\n\njson\n{\n \"schema_id\": \"schema_patient_dpi\",\n \"name\": \"Fiche Patient DPI\",\n \"version\": 1,\n \"description\": \"Extraction des informations patient depuis le Dossier Patient Informatise\",\n \"target_app\": \"DPI Hopital*\",\n \"fields\": [\n {\n \"field_id\": \"nip\",\n \"label\": \"NIP (Numero d'Identification Patient)\",\n \"type\": \"string\",\n \"required\": true,\n \"validation\": {\n \"pattern\": \"^[0-9]{8,12}$\",\n \"description\": \"8 a 12 chiffres\"\n },\n \"extraction_hint\": \"Numero a 8-12 chiffres, souvent en haut a droite, pres de 'NIP' ou 'IPP'\",\n \"zone_hint\": \"top_right\"\n },\n {\n \"field_id\": \"nom\",\n \"label\": \"Nom du patient\",\n \"type\": \"string\",\n \"required\": true,\n \"validation\": null,\n \"extraction_hint\": \"Nom de famille en majuscules, souvent le premier champ visible\",\n \"zone_hint\": \"top_left\"\n },\n {\n \"field_id\": \"prenom\",\n \"label\": \"Prenom du patient\",\n \"type\": \"string\",\n \"required\": true,\n \"validation\": null,\n \"extraction_hint\": \"Prenom apres le nom\",\n \"zone_hint\": \"top_left\"\n },\n {\n \"field_id\": \"date_naissance\",\n \"label\": \"Date de naissance\",\n \"type\": \"date\",\n \"required\": true,\n \"validation\": {\n \"format\": \"DD/MM/YYYY\"\n },\n \"extraction_hint\": \"Date au format JJ/MM/AAAA pres de 'Ne(e) le'\",\n \"zone_hint\": null\n },\n {\n \"field_id\": \"diagnostic_principal\",\n \"label\": \"Diagnostic principal\",\n \"type\": \"string\",\n \"required\": false,\n \"validation\": null,\n \"extraction_hint\": \"Texte libre dans la section 'Diagnostic' ou 'Motif'\",\n \"zone_hint\": \"center\"\n },\n {\n \"field_id\": 
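A minimal sketch of this preferred variant follows. The helper names (next_action_with_extraction, session.last_screenshot_path, datastore.get_schema) are hypothetical stand-ins for the real integration inside get_next_action; only ExtractionResult.fields matches the interface defined in section 4.2.

```python
# Sketch (hypothetical names): handle "extract" actions server-side inside
# the replay queue so they are never forwarded to Agent V1.

def next_action_with_extraction(queue, session, extraction_engine, datastore):
    """Pop queued actions; run type=='extract' locally, return the rest."""
    while queue:
        action = queue.pop(0)
        if action.get("type") != "extract":
            return action  # physical action: forwarded to Agent V1 as usual

        params = action.get("parameters", {})
        schema = datastore.get_schema(params["schema_id"])       # assumed API
        result = extraction_engine.extract(
            screenshot_path=session.last_screenshot_path,        # last agent frame
            schema=schema,
            mode=params.get("extraction_mode", "vlm"),
        )
        if params.get("store_result", True):
            datastore.insert_row(session.run_id, session.loop_index, result)

        # Expose extracted fields as workflow variables: ${patient.nom}, ...
        prefix = params.get("variable_prefix")
        if prefix:
            for key, value in result.fields.items():
                session.variables[f"{prefix}.{key}"] = value
    return None  # queue drained
```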
\"code_cim10\",\n \"label\": \"Code CIM-10\",\n \"type\": \"string\",\n \"required\": false,\n \"validation\": {\n \"pattern\": \"^[A-Z][0-9]{2}(\\\\.[0-9]{1,2})?$\"\n },\n \"extraction_hint\": \"Code au format lettre + 2 chiffres, ex: J18.9, I25.0\",\n \"zone_hint\": null\n }\n ],\n \"created_at\": \"2026-03-14T10:00:00\",\n \"learning_examples\": []\n}\n\n\nPoints de design importants :\n\n- extraction_hint : description en langage naturel utilisee dans le prompt VLM. C'est la que l'utilisateur "enseigne" au systeme ou trouver l'information.\n- zone_hint : indication spatiale optionnelle (top_left, top_right, center, bottom, null). Permet de pre-decouper le screenshot pour limiter le bruit. Non obligatoire.\n- validation : regex ou format attendu. Permet de detecter les extractions ratees et de demander un retry.\n- type : string, number, date, boolean, list. Determine le post-traitement et le schema de la colonne en DB.\n- learning_examples : liste de paires (screenshot_path, expected_values) accumulees en mode Shadow pour affiner le prompt VLM.\n\n### 3.2 Stockage des schemas\n\nLes schemas sont stockes en JSON dans data/extraction/schemas/ :\n\n\ndata/extraction/\n schemas/\n schema_patient_dpi.json\n schema_labo_resultats.json\n runs/\n run_20260314_143000/\n metadata.json\n screenshots/\n results.jsonl\n\n\n---\n\n## 4. Pipeline d'extraction (ExtractionEngine)\n\n### 4.1 Nouveau fichier : core/extraction/engine.py\n\nC'est le coeur du systeme. Il recoit un screenshot (chemin ou PIL Image) + un ExtractionSchema et retourne des donnees structurees.\n\nStrategie en 3 niveaux (du plus rapide au plus lent) :\n\nNiveau 1 : OCR + heuristiques (< 1s)\n- Utiliser le ScreenState.perception.detected_text deja produit par le ScreenAnalyzer.\n- Pour chaque champ du schema, chercher le label puis le texte adjacent. 
---

## 4. Extraction pipeline (ExtractionEngine)

### 4.1 New file: core/extraction/engine.py

This is the heart of the system. Given a screenshot (path or PIL Image) and an ExtractionSchema, it returns structured data.

Three-level strategy (fastest to slowest):

Level 1: OCR + heuristics (< 1 s)
- Reuse the ScreenState.perception.detected_text already produced by the ScreenAnalyzer.
- For each schema field, find the label, then the adjacent text. For example, if extraction_hint says "near NIP", locate the word "NIP" among the detected texts and take the value to its right.
- Suited to well-structured fields (labels and values aligned in a table).

Level 2: structured VLM extraction (5-15 s)
- Send the full screenshot (or a zone cropped via zone_hint) to OllamaClient.generate() with a prompt built from the schema.
- Use force_json=True to force JSON output.
- Typical prompt (kept in French, since it targets French screens):

```
Analyse cette capture d'ecran d'un dossier patient informatise.
Extrais les informations suivantes au format JSON :

- "nip": NIP (8 a 12 chiffres, souvent en haut a droite)
- "nom": Nom de famille en majuscules
- "prenom": Prenom
- "date_naissance": Date au format DD/MM/YYYY
- "diagnostic_principal": Texte du diagnostic principal
- "code_cim10": Code CIM-10 (format X00.0)

Si un champ n'est pas visible, utilise null.
Reponds UNIQUEMENT en JSON.
```

Level 3: VLM + targeted retry (10-30 s)
- If level 2 returns missing fields or validation failures, re-issue targeted queries for the failing fields only, possibly with a crop of the relevant zone.

The level is chosen via the extract action's parameters:
- "extraction_mode": "fast" -> level 1 only
- "extraction_mode": "vlm" -> levels 1 + 2
- "extraction_mode": "thorough" -> levels 1 + 2 + 3

### 4.2 ExtractionEngine interface

```python
class ExtractionEngine:
    def __init__(self, ollama_client, screen_analyzer):
        ...

    def extract(
        self,
        screenshot_path: str,
        schema: ExtractionSchema,
        mode: str = "vlm",
        screen_state: ScreenState = None,  # reuse if already analyzed
        context: dict = None,              # workflow variables
    ) -> ExtractionResult:
        ...

@dataclass
class ExtractionResult:
    schema_id: str
    fields: Dict[str, Any]                  # {"nip": "12345678", "nom": "DUPONT", ...}
    confidence: Dict[str, float]            # {"nip": 0.95, "nom": 0.88, ...}
    validation_errors: Dict[str, str]       # {"code_cim10": "Pattern mismatch"}
    extraction_mode_used: str               # "ocr", "vlm", "vlm_retry"
    screenshot_path: str
    duration_ms: int
    raw_vlm_response: Optional[str] = None  # for debugging/audit
```

### 4.3 Integration into the StreamProcessor

The StreamProcessor (in /home/dom/ai/rpa_vision_v3/agent_v0/server_v1/stream_processor.py) already holds a _screen_analyzer and can create an OllamaClient. The ExtractionEngine is instantiated lazily in the StreamProcessor, like the other GPU components:

```python
self._extraction_engine = None  # lazy init in _ensure_initialized()
```
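A sketch of the level-2 call follows. It assumes OllamaClient.generate() accepts a prompt, an image path and force_json, as described above; the exact parameter names are assumptions.

```python
# Sketch (assumed OllamaClient signature): build the level-2 prompt from the
# schema's extraction hints, call the VLM, and parse the JSON reply.
import json

def build_vlm_prompt(schema) -> str:
    lines = [
        "Analyse cette capture d'ecran d'une application metier.",
        "Extrais les informations suivantes au format JSON :",
        "",
    ]
    for f in schema.fields:
        lines.append(f'- "{f.field_id}": {f.extraction_hint or f.label}')
    lines += [
        "",
        "Si un champ n'est pas visible, utilise null.",
        "Reponds UNIQUEMENT en JSON.",
    ]
    return "\n".join(lines)

def extract_level2(ollama_client, screenshot_path: str, schema) -> dict:
    raw = ollama_client.generate(
        prompt=build_vlm_prompt(schema),
        image_path=screenshot_path,  # assumed parameter name
        force_json=True,
    )
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}  # let level 3 (targeted retry) take over
```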
---

## 5. Iteration / pagination (for-each in a workflow)

### 5.1 Concept: LoopConfig extended with loop_type = "visual_list"

The existing LoopConfig in composition_models.py (line 142) supports count and condition. A third type must be added: "visual_list", a loop that iterates visually over a list of on-screen elements.

New loop type, ForEachVisualConfig:

```json
{
  "loop_id": "loop_patients",
  "loop_type": "visual_list",
  "list_detection": {
    "method": "table_rows",
    "list_node": "patient_list_screen",
    "item_target": {
      "by_role": "table_row",
      "by_text": null
    },
    "pagination": {
      "has_pagination": true,
      "next_page_target": {
        "by_role": "button",
        "by_text": "Suivant"
      },
      "end_condition": {
        "condition_type": "element_absent",
        "target_element": "Suivant"
      }
    }
  },
  "body_nodes": ["click_item", "patient_detail", "extract_patient", "back_to_list"],
  "safety_limit": 500,
  "extract_schema_id": "schema_patient_dpi"
}
```

### 5.2 Execution cycle of a visual_list loop

```
LOOP (for each page):
  1. ANALYZE the list screen (ScreenAnalyzer)
     -> detect table_row elements
     -> count N visible items
  2. FOR each item i from 1 to N:
     a. Click item i (click action resolved by relative position)
     b. Wait for the detail screen (post_conditions)
     c. Run the EXTRACT action (extract the data)
     d. Store the result in the DB
     e. Return to the list ("Retour" click or Back)
     f. Verify we are back on the list (post_conditions)
  3. PAGINATION:
     a. Check whether the "Suivant" button exists (visual check)
     b. If yes: click it, go back to 1
     c. If no: END OF LOOP
```

### 5.3 New file: core/extraction/iteration_controller.py

This module orchestrates the cycle above. It relies on the existing LoopExecutor for counting and the safety guard, and adds the scraping-specific logic:

- Item detection: ask the ScreenAnalyzer to detect table_row elements, or have the VLM count the visible items.
- Item-by-item navigation: dynamically generate the click actions (relative positions computed from the detected elements' bounding boxes).
- Pagination handling: after processing all visible items, find and click the "Suivant" button.
- Error handling: if one item fails (unexpected screen, timeout), log the error, return to the list, and continue with the next item. Never abort the whole loop.
- Resume: periodically save state (last processed item, current page) so a run can resume after a crash.

A sketch of this page-level cycle follows.
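In the sketch below, analyzer, agent, engine and store are hypothetical stand-ins for the ScreenAnalyzer, the replay queue toward Agent V1, the ExtractionEngine and the DataStore; their methods (detect_rows, click_text, element_exists, ...) are assumed, not existing APIs. Only LoopExecutor's calls match the file quoted earlier.

```python
# Sketch (hypothetical helpers): the outer page loop of a visual_list
# iteration, built on the existing LoopExecutor safety guard.

def run_visual_list(loop_cfg, loop_executor, analyzer, agent, engine, store):
    loop_executor.start_loop(loop_cfg)
    try:
        while True:
            rows = analyzer.detect_rows(agent.capture())      # assumed helper
            for i, row in enumerate(rows):
                step = loop_executor.execute_iteration(loop_cfg.loop_id)
                if not step.should_continue:
                    return
                try:
                    agent.click(row.center)                   # open item i
                    agent.wait_for_screen("patient_detail")
                    result = engine.extract(agent.capture(), loop_cfg.schema)
                    store.insert_row(loop_cfg.run_id, step.iteration, result)
                except Exception as exc:                      # skip, never abort
                    store.log_item_error(loop_cfg.run_id, i, str(exc))
                finally:
                    agent.click_text("Retour")                # back to the list
            if not agent.element_exists("Suivant"):           # pagination done
                return
            agent.click_text("Suivant")
    finally:
        loop_executor.end_loop(loop_cfg.loop_id)
```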
### 5.4 Representation in the full workflow JSON

Here is what a complete DPI scraping workflow looks like:

```json
{
  "workflow_id": "scraping_dpi_patients",
  "name": "Extraction donnees patients DPI",
  "description": "Ouvre le DPI, parcourt la liste des patients, extrait les infos de chaque fiche",
  "version": 1,
  "learning_state": "COACHING",
  "variables": [
    {"name": "service", "type": "string", "required": true, "description": "Service hospitalier"},
    {"name": "date_debut", "type": "date", "required": false, "default_value": "today"}
  ],
  "nodes": [
    {"node_id": "dpi_home", "name": "Accueil DPI", "type": "entry"},
    {"node_id": "service_select", "name": "Selection service"},
    {"node_id": "patient_list", "name": "Liste patients"},
    {"node_id": "patient_detail", "name": "Fiche patient"},
    {"node_id": "extraction_done", "name": "Extraction terminee", "type": "end"}
  ],
  "edges": [
    {
      "edge_id": "e1", "from_node": "dpi_home", "to_node": "service_select",
      "action": {"type": "mouse_click", "target": {"by_text": "Patients", "by_role": "navigation"}}
    },
    {
      "edge_id": "e2", "from_node": "service_select", "to_node": "patient_list",
      "action": {"type": "text_input", "target": {"by_role": "search_field"}, "parameters": {"text": "${service}"}}
    },
    {
      "edge_id": "e3_open", "from_node": "patient_list", "to_node": "patient_detail",
      "action": {"type": "mouse_click", "target": {"by_role": "table_row"}, "parameters": {"select_nth": "${_loop_index}"}}
    },
    {
      "edge_id": "e4_extract", "from_node": "patient_detail", "to_node": "patient_detail",
      "action": {"type": "extract", "target": {"by_role": "screen"}, "parameters": {"schema_id": "schema_patient_dpi", "extraction_mode": "vlm", "store_result": true}}
    },
    {
      "edge_id": "e5_back", "from_node": "patient_detail", "to_node": "patient_list",
      "action": {"type": "mouse_click", "target": {"by_text": "Retour", "by_role": "navigation"}}
    }
  ],
  "entry_nodes": ["dpi_home"],
  "end_nodes": ["extraction_done"],
  "loops": {
    "loop_patients": {
      "loop_id": "loop_patients",
      "loop_type": "visual_list",
      "body_nodes": ["patient_list", "patient_detail"],
      "body_edges": ["e3_open", "e4_extract", "e5_back"],
      "list_detection": {
        "method": "table_rows",
        "list_node": "patient_list",
        "item_target": {"by_role": "table_row"}
      },
      "pagination": {
        "next_page_target": {"by_text": "Suivant"},
        "end_condition": {"condition_type": "element_absent", "target_element": "Suivant"}
      },
      "safety_limit": 500,
      "extract_schema_id": "schema_patient_dpi"
    }
  }
}
```

The special variable ${_loop_index} is injected automatically by the IterationController and holds the index of the current item in the visible list (0-based).
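How `select_nth` might resolve against that variable is sketched below, under the assumption that the ScreenAnalyzer returns table-row bounding boxes as (x, y, w, h) tuples; the function name is illustrative.

```python
# Sketch: resolve {"by_role": "table_row", "select_nth": "${_loop_index}"}
# to a concrete click point, assuming rows arrive as (x, y, w, h) boxes.

def resolve_nth_row(detected_rows, loop_index: int):
    """detected_rows: list of (x, y, w, h) for elements with role table_row."""
    rows = sorted(detected_rows, key=lambda box: box[1])  # sort top-to-bottom
    if loop_index >= len(rows):
        return None  # list changed since analysis: caller must re-analyze
    x, y, w, h = rows[loop_index]
    return (x + w // 2, y + h // 2)  # click the center of the row
```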
---

## 6. Database storage

### 6.1 New file: core/extraction/datastore.py

SQLite first, with an abstraction layer for PostgreSQL later.

Base SQL schema:

```sql
-- Extraction runs (one complete execution of the workflow)
CREATE TABLE extraction_runs (
    run_id TEXT PRIMARY KEY,
    workflow_id TEXT NOT NULL,
    schema_id TEXT NOT NULL,
    started_at TEXT NOT NULL,
    completed_at TEXT,
    status TEXT DEFAULT 'running',     -- running, completed, failed, paused
    total_items INTEGER DEFAULT 0,
    extracted_items INTEGER DEFAULT 0,
    failed_items INTEGER DEFAULT 0,
    params TEXT,                       -- JSON of the workflow parameters
    resume_state TEXT                  -- JSON used to resume after an interruption
);

-- Extracted data (one row per extracted item)
CREATE TABLE extracted_data (
    row_id INTEGER PRIMARY KEY AUTOINCREMENT,
    run_id TEXT NOT NULL REFERENCES extraction_runs(run_id),
    schema_id TEXT NOT NULL,
    item_index INTEGER NOT NULL,       -- position in the source list
    extracted_at TEXT NOT NULL,
    data TEXT NOT NULL,                -- JSON of the extracted fields
    confidence TEXT,                   -- JSON of per-field confidence scores
    validation_errors TEXT,            -- JSON of validation errors
    screenshot_path TEXT,              -- path of the source screenshot
    extraction_mode TEXT,              -- "ocr", "vlm", "vlm_retry"
    duration_ms INTEGER,
    status TEXT DEFAULT 'ok'           -- ok, validation_warning, extraction_failed
);

-- Indexes for frequent queries
CREATE INDEX idx_extracted_run ON extracted_data(run_id);
CREATE INDEX idx_extracted_schema ON extracted_data(schema_id);
CREATE INDEX idx_extracted_status ON extracted_data(status);

-- Metadata (schemas, configs)
CREATE TABLE extraction_schemas (
    schema_id TEXT PRIMARY KEY,
    name TEXT NOT NULL,
    version INTEGER DEFAULT 1,
    schema_json TEXT NOT NULL,         -- the full schema as JSON
    created_at TEXT NOT NULL,
    updated_at TEXT
);
```

### 6.2 DataStore interface

```python
class DataStore:
    def __init__(self, db_path: str = "data/extraction/extraction_db.sqlite"):
        ...

    def create_run(self, workflow_id, schema_id, params) -> str:
        ...  # returns run_id

    def insert_row(self, run_id, item_index, extraction_result: ExtractionResult):
        ...

    def update_run_status(self, run_id, status, resume_state=None):
        ...

    def get_run(self, run_id) -> dict:
        ...

    def get_run_data(self, run_id, status_filter=None) -> List[dict]:
        ...

    def export_csv(self, run_id, output_path) -> str:
        ...

    def export_json(self, run_id, output_path) -> str:
        ...

    def get_schema(self, schema_id) -> dict:
        ...

    def save_schema(self, schema: ExtractionSchema):
        ...
```

### 6.3 SQLite file location

data/extraction/extraction_db.sqlite, inside the project's data directory, never committed (listed in .gitignore).

### 6.4 Extending to PostgreSQL

The abstraction stays simple: DataStore takes a connection_string and uses either sqlite3 or psycopg2. The main difference is JSON handling (TEXT in SQLite, JSONB in PostgreSQL). Provide a _get_connection() method that returns the appropriate connection object.
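A minimal sketch of the two hot-path operations against sqlite3 follows; the function bodies are simplified from the interface above, and the column lists follow the SQL schema in 6.1.

```python
# Sketch (simplified from the DataStore interface): create_run and insert_row
# against sqlite3, matching the tables defined in 6.1.
import json
import sqlite3
import uuid
from datetime import datetime, timezone

def create_run(db_path: str, workflow_id: str, schema_id: str, params: dict) -> str:
    run_id = f"run_{uuid.uuid4().hex[:12]}"
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "INSERT INTO extraction_runs "
            "(run_id, workflow_id, schema_id, started_at, params) "
            "VALUES (?, ?, ?, ?, ?)",
            (run_id, workflow_id, schema_id,
             datetime.now(timezone.utc).isoformat(), json.dumps(params or {})),
        )
    return run_id

def insert_row(db_path: str, run_id: str, schema_id: str,
               item_index: int, result) -> None:
    """result: an ExtractionResult as described in section 4.2."""
    status = "validation_warning" if result.validation_errors else "ok"
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "INSERT INTO extracted_data "
            "(run_id, schema_id, item_index, extracted_at, data, confidence, "
            " validation_errors, screenshot_path, extraction_mode, duration_ms, status) "
            "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
            (run_id, schema_id, item_index,
             datetime.now(timezone.utc).isoformat(),
             json.dumps(result.fields), json.dumps(result.confidence),
             json.dumps(result.validation_errors), result.screenshot_path,
             result.extraction_mode_used, result.duration_ms, status),
        )
```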
---

## 7. API endpoints

### 7.1 New endpoints in api_stream.py

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | /api/v1/extraction/schemas | Create/update an extraction schema |
| GET | /api/v1/extraction/schemas | List schemas |
| GET | /api/v1/extraction/schemas/{schema_id} | Fetch a schema |
| POST | /api/v1/extraction/start | Start an extraction run |
| GET | /api/v1/extraction/runs | List runs |
| GET | /api/v1/extraction/runs/{run_id} | Run status |
| POST | /api/v1/extraction/runs/{run_id}/pause | Pause/resume |
| POST | /api/v1/extraction/runs/{run_id}/cancel | Cancel |
| GET | /api/v1/extraction/runs/{run_id}/data | Extracted data |
| GET | /api/v1/extraction/runs/{run_id}/export | CSV/JSON export |
| POST | /api/v1/extraction/extract_single | One-off extraction (debug) |

### 7.2 Pydantic request models

```python
from typing import Any, Dict, List, Optional
from pydantic import BaseModel

class StartExtractionRequest(BaseModel):
    workflow_id: str
    schema_id: str
    session_id: Optional[str] = None   # auto-detected if empty
    params: Optional[Dict[str, Any]] = None
    max_items: Optional[int] = None    # optional cap
    extraction_mode: str = "vlm"       # fast, vlm, thorough

class ExtractionSchemaCreate(BaseModel):
    schema_id: str
    name: str
    description: str
    target_app: Optional[str] = None
    fields: List[Dict[str, Any]]
```
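Wiring the start endpoint could look like the sketch below. It assumes api_stream.py exposes a FastAPI app; the ExtractionRunner and its attachment on app.state are hypothetical.

```python
# Sketch (hypothetical runner wiring): POST /api/v1/extraction/start.
from typing import Any, Dict, Optional
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class StartExtractionRequest(BaseModel):
    workflow_id: str
    schema_id: str
    session_id: Optional[str] = None
    params: Optional[Dict[str, Any]] = None
    max_items: Optional[int] = None
    extraction_mode: str = "vlm"

@app.post("/api/v1/extraction/start")
def start_extraction(req: StartExtractionRequest) -> Dict[str, str]:
    # Hypothetical runner: validates the schema, creates the run row in the
    # DataStore, then enqueues the workflow with its visual_list loop expanded.
    runner = getattr(app.state, "extraction_runner", None)  # set at startup
    if runner is None:
        raise HTTPException(status_code=503, detail="extraction runner not ready")
    run_id = runner.start(
        workflow_id=req.workflow_id,
        schema_id=req.schema_id,
        params=req.params or {},
        max_items=req.max_items,
        mode=req.extraction_mode,
    )
    return {"run_id": run_id, "status": "running"}
```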
---

## 8. Integration into the Shadow - Copilot - Autonomous cycle

### 8.1 Shadow phase (observation)

The user opens the DPI and manually browses a few records while Agent V1 captures everything. The server analyzes each screenshot.

What happens on top for extraction:

1. The user manually browses 3-5 patient records.
2. The server detects the repetitive pattern (same detail screen, same actions).
3. Via the chat (port 5004), the user says: "The screens you just saw are patient records. I want to extract the NIP, last name, first name, date of birth and diagnosis."
4. The system drafts a schema using the VLM on the captured screenshots:
   - It sends a patient-record screenshot to the VLM with the prompt: "Identify the following fields in this screenshot: NIP, last name, first name, date of birth, diagnosis. For each field, describe its position and visual appearance."
   - The VLM's descriptions become the schema's extraction_hint values.
5. The system presents the schema to the user for validation in the chat.
6. The system also generates the navigation workflow (from the ScreenStates and the GraphBuilder), including the loop and the edges.

### 8.2 Copilot phase (supervision)

The user launches the extraction via the chat: "Extract the patients of the Cardiologie department".

The system:
1. Asks for confirmation: "I will extract the patient records of the Cardiologie department. Schema: Fiche Patient DPI (6 fields). Confirm?"
2. The user confirms.
3. The system runs the workflow in supervised mode:
   - After each extraction, it shows the results in the chat.
   - If a field fails validation, it asks the user to confirm.
   - The user can correct in real time ("No, the NIP is 12345678, not 1234567").
   - Corrections are stored as learning_examples in the schema.
4. After 5-10 successful extractions without corrections, the system offers to switch to Autonomous mode.

### 8.3 Autonomous phase

The system runs on its own. Previous human corrections have refined the schema and the VLM prompts. The system:
- Runs the full loop without interruption.
- Stores the results in the DB.
- Sends a summary report in the chat at the end.
- On error (unexpected screen, timeout), retries, then skips the item and continues.
- The user can monitor in real time via the chat or the dashboard (port 5001).

### 8.4 learning_state progression

The extraction workflow's learning_state follows the same progression as regular workflows:

```
OBSERVATION -> COACHING -> AUTO_CANDIDATE -> AUTO_CONFIRMED
     |             |             |                 |
 Shadow mode    Copilot      Autonomous        Autonomous
 (learning)   (supervised)   (testing)        (production)
```

Promotion criteria:
- OBSERVATION -> COACHING: schema created, workflow validated, at least 3 successful manual extractions
- COACHING -> AUTO_CANDIDATE: 10+ supervised extractions, 0 corrections over the last 5
- AUTO_CANDIDATE -> AUTO_CONFIRMED: 50+ autonomous extractions, validation rate > 95%

---

## 9. Edge cases and robustness

### 9.1 Variable screens

- Unexpected pop-ups: the ScreenAnalyzer reports an unrecognized screen. The IterationController compares the current ScreenState with the expected node; if similarity < 0.7, it tries to close the pop-up (Escape, clicking the X) and re-evaluates (see the sketch after this section).
- Loading screens: add an adaptive wait; if the screen is nearly empty or shows a spinner, wait up to 10 s, re-capturing every 2 s.
- Expired session: the VLM detects a login screen. The system pauses and notifies the user via the chat.

### 9.2 Missing data

- Missing required=true fields: the item is flagged status=validation_warning in the DB; the run continues.
- All fields missing: status=extraction_failed, most likely a different screen than the expected one.
- Post-run: a report lists the failed items for manual follow-up.

### 9.3 Navigation errors

- Returning to the list fails: try the browser "Back" button, Alt+Left, or a breadcrumb click.
- The next item is no longer visible (the list changed after returning): re-analyze the list and recompute positions.
- Broken pagination: the "Suivant" button is greyed out or absent while items remain. Log it and terminate cleanly.

### 9.4 Performance

- One VLM extraction takes 5-15 s; for 500 patients that is roughly 40 min to 2 h.
- Optimization: use fast mode (OCR only) for well-structured fields, and vlm only for ambiguous ones.
- Future parallelization: if Agent V1 runs in a VM, several VMs could run in parallel (out of scope for v1).
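The unexpected-screen guard from 9.1 could be sketched as follows. It assumes ScreenState exposes an embedding as a numpy vector and that agent/analyzer helpers exist with the shown names; both are assumptions for illustration.

```python
# Sketch (assumed embedding API): compare the live screen with the expected
# node's embedding and attempt a popup dismissal before giving up.
import numpy as np

SIMILARITY_FLOOR = 0.7  # below this, the screen is treated as unexpected

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ensure_expected_screen(agent, analyzer, expected_embedding, retries: int = 2) -> bool:
    for _ in range(retries + 1):
        state = analyzer.analyze(agent.capture())  # assumed helpers
        if cosine(state.embedding, expected_embedding) >= SIMILARITY_FLOOR:
            return True
        agent.press_key("escape")                  # try closing a pop-up
    return False  # still unexpected: pause the run and notify via chat
```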
---

## 10. Sequenced implementation plan

### Phase 1: Foundations (2-3 days)
1. Create core/extraction/__init__.py
2. Create core/extraction/schema.py: ExtractionSchema + validation
3. Create core/extraction/datastore.py: SQLite DataStore
4. Create core/extraction/engine.py: ExtractionEngine (level 1 OCR + level 2 VLM)
5. Unit tests for schema, datastore, engine

### Phase 2: Pipeline integration (2-3 days)
6. Extend ActionType in workflow_graph.py with EXTRACT
7. Extend _edge_to_normalized_actions in api_stream.py to handle extract
8. Modify report_action_result or get_next_action to intercept extract actions
9. Initialize the ExtractionEngine in StreamProcessor._ensure_initialized()
10. Add the extraction API endpoints to api_stream.py

### Phase 3: Iteration (2-3 days)
11. Extend LoopConfig with visual_list in composition_models.py
12. Create core/extraction/iteration_controller.py
13. Modify _workflow_to_actions to expand visual_list loops
14. Handle pagination and item detection in the list
15. Integration tests with a test workflow

### Phase 4: Chat integration (1-2 days)
16. Add an EXTRACT_DATA intent to the IntentParser
17. Add schema creation through the chat
18. Add real-time extraction feedback via SocketIO
19. Add the end-of-run summary report

### Phase 5: Shadow learning (2-3 days)
20. Automatically detect repetitive navigation patterns in the StreamProcessor
21. Generate a draft schema via VLM over the captured screenshots
22. Propose the schema and the workflow via the chat
23. Treat user corrections as learning examples

---

## Architecture diagram (text)

```
 +-------------------+            +----------------------------+
 | Agent V1          |            | Streaming Server :5005     |
 | (Windows)         |   HTTP     | (Linux, RTX 5070)          |
 |                   |<---------->|                            |
 | ActionExecutorV1  |            | api_stream.py              |
 | poll_and_execute  |            |   |                        |
 | _capture_screen   |            | StreamProcessor            |
 +-------------------+            |   |-- ScreenAnalyzer       |
                                  |   |-- CLIPEmbedder         |
                                  |   |-- FAISSManager         |
                                  |   |-- ExtractionEngine *   |
                                  |   |     |-- OllamaClient   |
                                  |   |     |-- schema.py      |
                                  |   |                        |
                                  | IterationController *      |
                                  |   |-- LoopExecutor         |
                                  |   |-- _replay_queues       |
                                  |                            |
                                  | DataStore *                |
                                  |   |-- SQLite               |
                                  +----------------------------+
                                               |
 +-------------------+            +------------v-------+
 | Agent Chat        |            | Storage            |
 | :5004             |            |                    |
 |                   |            | data/extraction/   |
 | IntentParser      |----------->|   extraction_db    |
 | AutonomousPlanner |            |   schemas/*.json   |
 | ConversationMgr   |            |   runs/            |
 +-------------------+            | data/workflows/    |
                                  |   *.json           |
                                  +--------------------+

 * = new modules
```

---

### Critical Files for Implementation

- /home/dom/ai/rpa_vision_v3/agent_v0/server_v1/api_stream.py - main integration point: add the extraction endpoints, adapt _edge_to_normalized_actions and report_action_result for the "extract" type, and intercept extract actions in the replay loop
- /home/dom/ai/rpa_vision_v3/core/models/workflow_graph.py - extend ActionType with EXTRACT, plus the Action and TargetSpec models to carry the extraction parameters (schema_id, extraction_mode)
- /home/dom/ai/rpa_vision_v3/core/workflow/composition_models.py - extend LoopConfig with the visual_list type and the pagination/item-detection configuration for the visual for-each
- /home/dom/ai/rpa_vision_v3/core/detection/ollama_client.py - the pattern to follow for VLM calls; used directly by the ExtractionEngine for structured extraction with force_json=True
- /home/dom/ai/rpa_vision_v3/agent_v0/server_v1/stream_processor.py - lazily initialize the ExtractionEngine and the DataStore in _ensure_initialized() and make them available for extraction during replay
replay"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":1,"cache_creation_input_tokens":545,"cache_read_input_tokens":128267,"output_tokens":10123,"server_tool_use":{"web_search_requests":0,"web_fetch_requests":0},"service_tier":"standard","cache_creation":{"ephemeral_1h_input_tokens":0,"ephemeral_5m_input_tokens":545},"inference_geo":"","iterations":[],"speed":"standard"}},"requestId":"req_011CZ3fawNu1LbakRQK6y78R","type":"assistant","uuid":"65247dfb-c2cd-4340-af2c-4db0eb3ff8fb","timestamp":"2026-03-14T21:19:00.393Z"}