v1.0 - Stable release: multi-PC, UI-DETR-1 detection, 3 execution modes

- Frontend v4 reachable on the local network (192.168.1.40)
- Open ports: 3002 (frontend), 5001 (backend), 5004 (dashboard)
- Ollama GPU working
- Interactive self-healing
- Confidence dashboard

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: Dom
Date: 2026-01-29 11:23:51 +01:00
Parent: 21bfa3b337
Commit: a27b74cf22
1595 changed files with 412691 additions and 400 deletions


@@ -0,0 +1,30 @@
# Flask Configuration
FLASK_ENV=development
SECRET_KEY=your-secret-key-here-change-in-production
PORT=5001
# Database
DATABASE_URL=sqlite:///workflows.db
# For production, use PostgreSQL:
# DATABASE_URL=postgresql://user:password@localhost:5432/visual_workflows
# Redis (for caching and session management)
REDIS_URL=redis://localhost:6379/0
# CORS Origins (comma-separated)
CORS_ORIGINS=http://localhost:3000,http://localhost:8080
# RPA Vision V3 Integration
RPA_VISION_API_URL=http://localhost:8000
# Logging
LOG_LEVEL=INFO
LOG_FILE=logs/app.log
# Security
MAX_WORKFLOW_SIZE=1000
MAX_EXECUTION_TIME=1800
RATE_LIMIT_PER_MINUTE=100
# WebSocket
SOCKETIO_ASYNC_MODE=threading
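As an illustration, the backend could consume these variables with plain `os.environ` lookups that fall back to the defaults documented above. This is a hypothetical sketch (the `load_config` helper is not part of the repository):

```python
import os

def load_config() -> dict:
    """Read the backend settings from the environment, falling back to
    the defaults documented in the .env example above. Illustrative
    helper only; names mirror the variables in the file."""
    return {
        'port': int(os.environ.get('PORT', '5001')),
        'database_url': os.environ.get('DATABASE_URL', 'sqlite:///workflows.db'),
        # CORS_ORIGINS is a comma-separated list in the file
        'cors_origins': [
            o.strip()
            for o in os.environ.get('CORS_ORIGINS', 'http://localhost:3000').split(',')
            if o.strip()
        ],
        'max_execution_time': int(os.environ.get('MAX_EXECUTION_TIME', '1800')),
    }

config = load_config()
```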

(8 binary image files added, 199–260 KiB each; previews not shown.)


@@ -0,0 +1,328 @@
# Backend Server Management Scripts

This folder contains shell scripts that make managing the Flask backend server easier.

## Available Scripts

### 🚀 start.sh

Starts the backend server in the background.

```bash
./start.sh
```

**Features:**

- Checks whether the server is already running
- Starts the server on port 5001
- Creates a PID file (`.server.pid`)
- Redirects logs to `server.log`
- Prints the URL and startup information

**Output:**

```
==========================================
Visual Workflow Builder - Backend Server
==========================================
Starting backend server...
- Port: 5001
- Log file: /path/to/server.log
- PID file: /path/to/.server.pid
✓ Server started successfully!
- PID: 12345
- URL: http://localhost:5001
- Health check: http://localhost:5001/health
To view logs: tail -f /path/to/server.log
To stop server: ./stop.sh
```
---
### 🛑 stop.sh

Stops the backend server cleanly.

```bash
./stop.sh
```

**Features:**

- Graceful shutdown (SIGTERM)
- Forces termination if needed (SIGKILL after 5 seconds)
- Cleans up the PID file
- Kills any remaining processes on port 5001 if necessary
- Works even when the PID file is missing

**Output:**

```
==========================================
Visual Workflow Builder - Stop Server
==========================================
Stopping server (PID: 12345)...
✓ Server stopped gracefully
```
---
### 📊 status.sh

Shows the current state of the server.

```bash
./status.sh
```

**Features:**

- Checks whether the process is running
- Shows process information (CPU, memory, uptime)
- Checks the state of port 5001
- Performs an HTTP health check
- Shows the last 10 lines of the log

**Output:**

```
==========================================
Visual Workflow Builder - Server Status
==========================================
PID File: /path/to/.server.pid
PID: 12345
Status: RUNNING ✓
Process Information:
PID PPID USER %CPU %MEM ELAPSED COMMAND
12345 1234 user 0.5 1.2 00:05:23 python app.py
Port 5001 Status:
Port 5001: IN USE
Process(es) using port 5001:
python 12345 user 5u IPv4 0x... TCP *:5001 (LISTEN)
Health Check:
HTTP Response: 200 OK ✓
Server is responding correctly
Recent Log Entries (last 10 lines):
-----------------------------------
[log entries...]
```
---
### 🔄 restart.sh

Restarts the server (stop + start).

```bash
./restart.sh
```

**Features:**

- Stops the server cleanly
- Waits 2 seconds
- Starts the server again
---
## Generated Files

### .server.pid

File containing the PID of the server process; used to manage the server.

**Location:** `visual_workflow_builder/backend/.server.pid`

**Content:** a single number (the PID)

```
12345
```

### server.log

Log file collecting all server output (stdout and stderr).

**Location:** `visual_workflow_builder/backend/server.log`

**Usage:**

```bash
# Follow the logs in real time
tail -f server.log

# Show the last 50 lines
tail -n 50 server.log

# Search for errors
grep -i error server.log

# Truncate the log file
> server.log
```
---
## Usage Examples

### Normal Startup

```bash
# Start the server
./start.sh

# Check that it is running
./status.sh

# Test the API
curl http://localhost:5001/health
```

### Development with Logs

```bash
# Start the server
./start.sh

# In another terminal, follow the logs
tail -f server.log

# Make API requests and watch the logs in real time
curl -X POST http://localhost:5001/api/workflows/ \
  -H "Content-Type: application/json" \
  -d '{"name": "Test", "created_by": "user"}'
```

### Restart After Code Changes

```bash
# After modifying the code
./restart.sh

# Or manually
./stop.sh
./start.sh
```

### Debugging

```bash
# Check the status
./status.sh

# If the server is not responding
./stop.sh
./start.sh

# Show recent errors
tail -n 100 server.log | grep -i error
```

### Full Cleanup

```bash
# Stop the server
./stop.sh

# Remove temporary files
rm -f .server.pid server.log
```
---
## Troubleshooting

### The server does not start

```bash
# Check whether the port is already in use
lsof -i:5001

# Kill the process using port 5001
kill $(lsof -ti:5001)

# Try again
./start.sh
```

### The server does not stop

```bash
# Force a stop
./stop.sh

# If that does not work, kill it manually
kill -9 $(cat .server.pid)
rm .server.pid
```

### Stale PID file

```bash
# Remove the PID file
rm .server.pid

# Check the status
./status.sh

# Restart if needed
./start.sh
```

### Log file too large

```bash
# Truncate the log file
> server.log

# Or delete it; it will be recreated on the next start
rm server.log
```
---
## Test Integration

### Running the tests against the server

```bash
# Start the server
./start.sh

# Wait until it is ready
sleep 2

# Run the tests
python test_api_manual.py

# Stop the server
./stop.sh
```

### Complete test script

```bash
#!/bin/bash
./start.sh
sleep 2
python test_api_manual.py
TEST_RESULT=$?
./stop.sh
exit $TEST_RESULT
```
---
## Important Notes

1. **Virtual environment**: the scripts automatically use `../../venv_v3`
2. **Port**: the server listens on port 5001 (configurable in `app.py`)
3. **Logs**: logs are written to `server.log` (manual rotation required)
4. **PID**: the `.server.pid` file is cleaned up automatically on shutdown
5. **Permissions**: the scripts must be executable (`chmod +x *.sh`)
---
## Quick Commands

```bash
# Start
./start.sh

# Stop
./stop.sh

# Status
./status.sh

# Restart
./restart.sh

# Follow the logs
tail -f server.log

# Health check
curl http://localhost:5001/health
```


@@ -0,0 +1,65 @@
"""
Actions VWB - Module d'initialisation
Auteur : Dom, Alice, Kiro - 10 janvier 2026
Ce module contient les actions VisionOnly pour le Visual Workflow Builder.
Actions disponibles :
- BaseAction : Classe de base pour toutes les actions
- ClickAnchorAction : Action de clic sur ancre visuelle
- TypeTextAction : Action de saisie de texte
- WaitForAnchorAction : Action d'attente d'ancre visuelle
- FocusAnchorAction : Action de focus sur ancre visuelle
- TypeSecretAction : Action de saisie de secret/mot de passe
- ScrollToAnchorAction : Action de défilement vers ancre
- ExtractTextAction : Action d'extraction de texte
Registry :
- VWBActionRegistry : Gestionnaire des actions
- Fonctions utilitaires pour l'enregistrement et la création
"""
from .base_action import BaseVWBAction, VWBActionResult, VWBActionStatus
from .vision_ui.click_anchor import VWBClickAnchorAction
from .vision_ui.type_text import VWBTypeTextAction
from .vision_ui.wait_for_anchor import VWBWaitForAnchorAction
from .vision_ui.focus_anchor import VWBFocusAnchorAction
from .vision_ui.type_secret import VWBTypeSecretAction
from .vision_ui.scroll_to_anchor import VWBScrollToAnchorAction
from .vision_ui.extract_text import VWBExtractTextAction
from .registry import (
VWBActionRegistry,
get_global_registry,
register_action,
get_action_class,
create_action,
list_available_actions,
get_registry_info,
vwb_action
)
__all__ = [
'BaseVWBAction',
'VWBActionResult',
'VWBActionStatus',
'VWBClickAnchorAction',
'VWBTypeTextAction',
'VWBWaitForAnchorAction',
'VWBFocusAnchorAction',
'VWBTypeSecretAction',
'VWBScrollToAnchorAction',
'VWBExtractTextAction',
'VWBActionRegistry',
'get_global_registry',
'register_action',
'get_action_class',
'create_action',
'list_available_actions',
'get_registry_info',
'vwb_action'
]
__version__ = '1.1.0'
__author__ = 'Dom, Alice, Kiro'
__date__ = '10 janvier 2026'


@@ -0,0 +1,454 @@
"""
Action de Base VWB - Classe Abstraite pour Actions VisionOnly
Auteur : Dom, Alice, Kiro - 09 janvier 2026
Ce module définit la classe de base pour toutes les actions VisionOnly
du Visual Workflow Builder.
Classes :
- VWBActionStatus : États d'exécution des actions
- VWBActionResult : Résultat d'exécution d'action
- BaseVWBAction : Classe abstraite de base pour toutes les actions
"""
from abc import ABC, abstractmethod
from enum import Enum
from dataclasses import dataclass
from typing import Dict, Any, Optional, List
from datetime import datetime
import time
import traceback
# Import des contrats VWB
from ..contracts.error import VWBActionError, VWBErrorType, VWBErrorSeverity, create_vwb_error
from ..contracts.evidence import VWBEvidence, VWBEvidenceType, create_screenshot_evidence
from ..contracts.visual_anchor import VWBVisualAnchor
class VWBActionStatus(Enum):
"""États d'exécution des actions VWB."""
PENDING = "pending" # En attente d'exécution
RUNNING = "running" # En cours d'exécution
SUCCESS = "success" # Exécution réussie
FAILED = "failed" # Exécution échouée
TIMEOUT = "timeout" # Délai dépassé
CANCELLED = "cancelled" # Annulée par l'utilisateur
RETRYING = "retrying" # En cours de retry
@dataclass
class VWBActionResult:
    """
    Result of one VWB action execution.

    This class bundles everything produced by an action run: the status,
    output data, collected evidence and any error.
    """
    # Identification
    action_id: str
    step_id: str

    # Execution status
    status: VWBActionStatus

    # Timing information
    start_time: datetime
    end_time: datetime
    execution_time_ms: float

    # Output data
    output_data: Dict[str, Any]

    # Collected evidence
    evidence_list: List[VWBEvidence]

    # Optional error
    error: Optional[VWBActionError] = None

    # Metadata
    retry_count: int = 0
    workflow_id: Optional[str] = None
    user_id: Optional[str] = None
    session_id: Optional[str] = None

    def is_success(self) -> bool:
        """Return True if the action succeeded."""
        return self.status == VWBActionStatus.SUCCESS

    def is_failed(self) -> bool:
        """Return True if the action failed."""
        return self.status in {VWBActionStatus.FAILED, VWBActionStatus.TIMEOUT}

    def can_retry(self) -> bool:
        """Return True if the action may be retried."""
        return (
            self.is_failed() and
            (self.error is None or self.error.is_retryable()) and
            self.retry_count < 3
        )

    def get_primary_evidence(self) -> Optional[VWBEvidence]:
        """Return the primary evidence (the after-action screenshot)."""
        for evidence in self.evidence_list:
            if evidence.evidence_type == VWBEvidenceType.SCREENSHOT_AFTER:
                return evidence
        # Fall back to the first visual evidence when there is no after-screenshot
        for evidence in self.evidence_list:
            if evidence.is_visual_evidence():
                return evidence
        return None

    def add_evidence(self, evidence: VWBEvidence):
        """Append a piece of evidence to the result."""
        self.evidence_list.append(evidence)

    def get_summary(self) -> Dict[str, Any]:
        """Return a summary of the result."""
        return {
            'action_id': self.action_id,
            'step_id': self.step_id,
            'status': self.status.value,
            'execution_time_ms': round(self.execution_time_ms, 1),
            'evidence_count': len(self.evidence_list),
            'has_error': self.error is not None,
            'retry_count': self.retry_count,
            'can_retry': self.can_retry()
        }
class BaseVWBAction(ABC):
    """
    Abstract base class for all VWB actions.

    Defines the common interface and the functionality shared by every
    VisionOnly action of the Visual Workflow Builder.
    """

    def __init__(
        self,
        action_id: str,
        name: str,
        description: str,
        parameters: Dict[str, Any],
        screen_capturer=None
    ):
        """
        Initialise the base action.

        Args:
            action_id: Unique identifier of the action
            name: Action name
            description: Action description
            parameters: Configuration parameters
            screen_capturer: ScreenCapturer instance (thread-safe Option A)
        """
        self.action_id = action_id
        self.name = name
        self.description = description
        self.parameters = parameters
        self.screen_capturer = screen_capturer

        # Default configuration
        self.timeout_ms = parameters.get('timeout_ms', 10000)
        self.retry_count = parameters.get('retry_count', 3)
        self.retry_delay_ms = parameters.get('retry_delay_ms', 1000)

        # Execution state
        self.current_status = VWBActionStatus.PENDING
        self.current_result: Optional[VWBActionResult] = None

        # Collected evidence
        self.evidence_list: List[VWBEvidence] = []

    @abstractmethod
    def validate_parameters(self) -> List[str]:
        """
        Validate the action parameters.

        Returns:
            List of validation errors (empty when valid)
        """
        pass

    @abstractmethod
    def execute_core(self, step_id: str) -> VWBActionResult:
        """
        Run the core logic of the action.

        Args:
            step_id: Step identifier

        Returns:
            Execution result
        """
        pass
    def execute(self, step_id: str, **kwargs) -> VWBActionResult:
        """
        Run the action with error handling and retries.

        Args:
            step_id: Step identifier
            **kwargs: Additional parameters

        Returns:
            Complete execution result
        """
        start_time = datetime.now()
        self.current_status = VWBActionStatus.RUNNING

        # Parameter validation
        validation_errors = self.validate_parameters()
        if validation_errors:
            return self._create_error_result(
                step_id=step_id,
                start_time=start_time,
                error_type=VWBErrorType.PARAMETER_INVALID,
                message=f"Invalid parameters: {', '.join(validation_errors)}",
                technical_details={'validation_errors': validation_errors}
            )

        # Screenshot before the action
        self._capture_before_screenshot(step_id)

        # Execution with retries
        last_error = None
        for attempt in range(self.retry_count + 1):
            try:
                if attempt > 0:
                    self.current_status = VWBActionStatus.RETRYING
                    time.sleep(self.retry_delay_ms / 1000.0)

                # Run the action
                result = self.execute_core(step_id)
                result.retry_count = attempt
                result.workflow_id = kwargs.get('workflow_id')
                result.user_id = kwargs.get('user_id')
                result.session_id = kwargs.get('session_id')

                # Screenshot after the action on success
                if result.is_success():
                    self._capture_after_screenshot(step_id, result)

                self.current_result = result
                self.current_status = result.status
                return result

            except Exception as e:
                last_error = e

                # Build a VWB error
                error = create_vwb_error(
                    error_type=VWBErrorType.SYSTEM_ERROR,
                    message=f"Execution error: {str(e)}",
                    action_id=self.action_id,
                    step_id=step_id,
                    severity=VWBErrorSeverity.ERROR,
                    technical_details={'exception': str(e), 'attempt': attempt + 1},
                    stack_trace=traceback.format_exc()
                )

                # On the last attempt, return the error
                if attempt == self.retry_count:
                    return self._create_error_result(
                        step_id=step_id,
                        start_time=start_time,
                        error=error,
                        retry_count=attempt
                    )

        # Should never be reached; kept as a safety net
        return self._create_error_result(
            step_id=step_id,
            start_time=start_time,
            error_type=VWBErrorType.SYSTEM_ERROR,
            message=f"Failed after {self.retry_count + 1} attempts",
            technical_details={'last_exception': str(last_error) if last_error else 'Unknown'}
        )
    def _capture_before_screenshot(self, step_id: str):
        """Capture a screenshot before the action runs."""
        if not self.screen_capturer:
            return
        try:
            # Use the ultra-stable capture method (Option A)
            img_array = self.screen_capturer.capture()
            if img_array is not None:
                from PIL import Image
                import base64
                import io

                # Convert to a PIL image, then to base64-encoded PNG
                pil_image = Image.fromarray(img_array)
                buffer = io.BytesIO()
                pil_image.save(buffer, format='PNG', optimize=True)
                screenshot_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')

                # Build the evidence
                evidence = create_screenshot_evidence(
                    action_id=self.action_id,
                    step_id=step_id,
                    screenshot_base64=screenshot_base64,
                    evidence_type=VWBEvidenceType.SCREENSHOT_BEFORE,
                    title=f"Before {self.name}",
                    description=f"Screenshot taken before executing {self.name}",
                    screenshot_width=pil_image.width,
                    screenshot_height=pil_image.height
                )
                self.evidence_list.append(evidence)
        except Exception as e:
            # A screenshot problem must not fail the action itself
            print(f"⚠️ Before-action capture error: {e}")

    def _capture_after_screenshot(self, step_id: str, result: VWBActionResult):
        """Capture a screenshot after the action has run."""
        if not self.screen_capturer:
            return
        try:
            # Use the ultra-stable capture method (Option A)
            img_array = self.screen_capturer.capture()
            if img_array is not None:
                from PIL import Image
                import base64
                import io

                # Convert to a PIL image, then to base64-encoded PNG
                pil_image = Image.fromarray(img_array)
                buffer = io.BytesIO()
                pil_image.save(buffer, format='PNG', optimize=True)
                screenshot_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')

                # Build the evidence
                evidence = create_screenshot_evidence(
                    action_id=self.action_id,
                    step_id=step_id,
                    screenshot_base64=screenshot_base64,
                    evidence_type=VWBEvidenceType.SCREENSHOT_AFTER,
                    title=f"After {self.name}",
                    description=f"Screenshot taken after executing {self.name}",
                    screenshot_width=pil_image.width,
                    screenshot_height=pil_image.height
                )
                result.add_evidence(evidence)
        except Exception as e:
            # A screenshot problem must not fail the action itself
            print(f"⚠️ After-action capture error: {e}")
    def _create_error_result(
        self,
        step_id: str,
        start_time: datetime,
        error_type: VWBErrorType = VWBErrorType.SYSTEM_ERROR,
        message: str = "Execution error",
        technical_details: Optional[Dict[str, Any]] = None,
        error: Optional[VWBActionError] = None,
        retry_count: int = 0
    ) -> VWBActionResult:
        """Build a failure result."""
        end_time = datetime.now()
        execution_time = (end_time - start_time).total_seconds() * 1000

        if error is None:
            error = create_vwb_error(
                error_type=error_type,
                message=message,
                action_id=self.action_id,
                step_id=step_id,
                technical_details=technical_details or {},
                execution_time_ms=execution_time
            )

        # Error screenshot
        self._capture_error_screenshot(step_id)

        result = VWBActionResult(
            action_id=self.action_id,
            step_id=step_id,
            status=VWBActionStatus.FAILED,
            start_time=start_time,
            end_time=end_time,
            execution_time_ms=execution_time,
            output_data={},
            evidence_list=self.evidence_list.copy(),
            error=error,
            retry_count=retry_count
        )
        self.current_status = VWBActionStatus.FAILED
        self.current_result = result
        return result
    def _capture_error_screenshot(self, step_id: str):
        """Capture a screenshot when an error occurs."""
        if not self.screen_capturer:
            return
        try:
            # Use the ultra-stable capture method (Option A)
            img_array = self.screen_capturer.capture()
            if img_array is not None:
                from PIL import Image
                import base64
                import io

                # Convert to a PIL image, then to base64-encoded PNG
                pil_image = Image.fromarray(img_array)
                buffer = io.BytesIO()
                pil_image.save(buffer, format='PNG', optimize=True)
                screenshot_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')

                # Build the evidence
                evidence = create_screenshot_evidence(
                    action_id=self.action_id,
                    step_id=step_id,
                    screenshot_base64=screenshot_base64,
                    evidence_type=VWBEvidenceType.SCREENSHOT_ERROR,
                    title=f"Error {self.name}",
                    description=f"Screenshot taken when {self.name} failed",
                    screenshot_width=pil_image.width,
                    screenshot_height=pil_image.height,
                    success=False
                )
                self.evidence_list.append(evidence)
        except Exception as e:
            # Do not compound the failure over a screenshot problem
            print(f"⚠️ Error-screenshot capture error: {e}")
    def get_status(self) -> VWBActionStatus:
        """Return the current status of the action."""
        return self.current_status

    def get_result(self) -> Optional[VWBActionResult]:
        """Return the current result of the action."""
        return self.current_result

    def get_evidence_list(self) -> List[VWBEvidence]:
        """Return a copy of the collected evidence."""
        return self.evidence_list.copy()

    def __str__(self) -> str:
        """Human-readable representation of the action."""
        return f"VWBAction({self.name}): {self.description}"

    def __repr__(self) -> str:
        """Detailed representation of the action."""
        return (
            f"BaseVWBAction("
            f"action_id='{self.action_id}', "
            f"name='{self.name}', "
            f"status={self.current_status.value}"
            f")"
        )
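The retry loop in `execute()` boils down to a simple pattern: attempt, sleep between attempts, and surface the last error only once every attempt is spent. A self-contained sketch of that pattern (`run_with_retry` and the flaky callable are illustrative, not part of the codebase):

```python
import time

def run_with_retry(fn, retries=3, delay_s=0.01):
    """Sketch of BaseVWBAction.execute()'s retry loop: call fn up to
    retries+1 times, sleeping before each retry (mirrors retry_delay_ms),
    and re-raise the last exception only after all attempts failed."""
    last_exc = None
    for attempt in range(retries + 1):
        try:
            if attempt > 0:
                time.sleep(delay_s)
            return fn(), attempt
        except Exception as exc:
            last_exc = exc
    raise last_exc

# Example: a callable that fails twice, then succeeds on the third attempt.
calls = {'n': 0}

def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result, attempts_used = run_with_retry(flaky)   # result == "ok", attempts_used == 2
```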


@@ -0,0 +1 @@
"""Actions de navigation VWB."""


@@ -0,0 +1,70 @@
"""
Action Navigation Retour - Retourner à la page précédente
Auteur : Dom, Alice, Kiro - 12 janvier 2026
"""
from typing import Dict, Any
from ..base_action import BaseVWBAction
from ..contracts.evidence import VWBActionEvidence
from ..contracts.error import VWBActionError
class VWBBrowserBackAction(BaseVWBAction):
"""Action pour retourner à la page précédente."""
def __init__(self, action_id: str, parameters: Dict[str, Any]):
"""
Initialise l'action de retour navigateur.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres de l'action
"""
super().__init__(action_id, parameters)
def validate_parameters(self) -> bool:
"""
Valide les paramètres de l'action.
Returns:
True (pas de paramètres requis)
"""
return True
def execute(self) -> Dict[str, Any]:
"""
Exécute le retour à la page précédente.
Returns:
Résultat de l'exécution
"""
try:
# Simuler le retour navigateur (à implémenter avec selenium/playwright)
import time
self.add_evidence(VWBActionEvidence(
evidence_type="browser_back_start",
data={"timestamp": datetime.now().isoformat()}
))
# Simuler le temps de navigation
time.sleep(0.5)
self.add_evidence(VWBActionEvidence(
evidence_type="browser_back_complete",
data={"success": True}
))
return {
"success": True,
"action": "browser_back",
"execution_time_ms": 500
}
except Exception as e:
error = VWBActionError(
error_type="browser_back_failed",
message=f"Échec retour navigateur: {str(e)}"
)
self.add_error(error)
return {"success": False, "error": str(e)}


@@ -0,0 +1,80 @@
"""
Action Navigation URL - Naviguer vers une URL spécifique
Auteur : Dom, Alice, Kiro - 12 janvier 2026
"""
from typing import Dict, Any, Optional
from ..base_action import BaseVWBAction
from ..contracts.evidence import VWBActionEvidence
from ..contracts.error import VWBActionError
class VWBNavigateToUrlAction(BaseVWBAction):
"""Action pour naviguer vers une URL spécifique."""
def __init__(self, action_id: str, parameters: Dict[str, Any]):
"""
Initialise l'action de navigation URL.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres de l'action
"""
super().__init__(action_id, parameters)
self.url = parameters.get('url', '')
self.wait_for_load = parameters.get('wait_for_load', True)
def validate_parameters(self) -> bool:
"""
Valide les paramètres de l'action.
Returns:
True si les paramètres sont valides
"""
if not self.url or not isinstance(self.url, str):
self.add_error("URL manquante ou invalide")
return False
if not self.url.startswith(('http://', 'https://')):
self.add_error("URL doit commencer par http:// ou https://")
return False
return True
def execute(self) -> Dict[str, Any]:
"""
Exécute la navigation vers l'URL.
Returns:
Résultat de l'exécution
"""
try:
# Simuler la navigation (à implémenter avec selenium/playwright)
import time
self.add_evidence(VWBActionEvidence(
evidence_type="navigation_start",
data={"url": self.url, "timestamp": datetime.now().isoformat()}
))
# Simuler le temps de navigation
time.sleep(1 if self.wait_for_load else 0.1)
self.add_evidence(VWBActionEvidence(
evidence_type="navigation_complete",
data={"url": self.url, "success": True}
))
return {
"success": True,
"url": self.url,
"navigation_time_ms": 1000 if self.wait_for_load else 100
}
except Exception as e:
error = VWBActionError(
error_type="navigation_failed",
message=f"Échec navigation vers {self.url}: {str(e)}"
)
self.add_error(error)
return {"success": False, "error": str(e)}


@@ -0,0 +1,517 @@
"""
Registry Actions VWB - Gestionnaire d'Actions VisionOnly
Auteur : Dom, Alice, Kiro - 09 janvier 2026
Ce module implémente le registry des actions VisionOnly pour le Visual Workflow Builder.
Il permet l'enregistrement, la recherche et la gestion des actions disponibles.
Fonctionnalités :
- Enregistrement automatique des actions
- Recherche par catégorie et type
- Thread-safety pour accès concurrent
- Chargement dynamique des actions
"""
import threading
from typing import Dict, List, Optional, Type, Any, Set
from datetime import datetime
from pathlib import Path
import importlib
import inspect
from .base_action import BaseVWBAction
class VWBActionRegistry:
"""
Registry thread-safe pour les actions VWB.
Ce registry maintient un catalogue des actions disponibles et permet
leur recherche et instanciation dynamique.
"""
def __init__(self):
"""Initialise le registry."""
self._actions: Dict[str, Type[BaseVWBAction]] = {}
self._categories: Dict[str, Set[str]] = {}
self._metadata: Dict[str, Dict[str, Any]] = {}
self._lock = threading.RLock()
self._initialized = False
print("📋 Registry Actions VWB initialisé")
    def register_action(self,
                        action_class: Type[BaseVWBAction],
                        action_id: Optional[str] = None,
                        category: str = "default",
                        metadata: Optional[Dict[str, Any]] = None) -> bool:
        """
        Register an action in the registry.

        Args:
            action_class: Action class to register
            action_id: Unique identifier (auto-generated when None)
            category: Action category
            metadata: Additional metadata

        Returns:
            True if registration succeeded
        """
        with self._lock:
            try:
                # Generate an ID when none was provided
                if action_id is None:
                    action_id = self._generate_action_id(action_class)

                # Check that the action inherits from BaseVWBAction
                if not issubclass(action_class, BaseVWBAction):
                    print(f"⚠️ {action_class.__name__} does not inherit from BaseVWBAction")
                    return False

                # Enforce ID uniqueness
                if action_id in self._actions:
                    print(f"⚠️ Action '{action_id}' is already registered")
                    return False

                # Register the action
                self._actions[action_id] = action_class

                # Maintain the category index
                if category not in self._categories:
                    self._categories[category] = set()
                self._categories[category].add(action_id)

                # Store the metadata
                self._metadata[action_id] = {
                    'class_name': action_class.__name__,
                    'module': action_class.__module__,
                    'category': category,
                    'registered_at': datetime.now().isoformat(),
                    'metadata': metadata or {}
                }

                print(f"✅ Action '{action_id}' registered (category: {category})")
                return True

            except Exception as e:
                print(f"❌ Failed to register action '{action_id}': {e}")
                return False
    def get_action_class(self, action_id: str) -> Optional[Type[BaseVWBAction]]:
        """
        Look up an action class by its ID.

        Args:
            action_id: Action identifier

        Returns:
            The action class, or None when not found
        """
        with self._lock:
            return self._actions.get(action_id)

    def create_action(self,
                      action_id: str,
                      parameters: Optional[Dict[str, Any]] = None,
                      **kwargs) -> Optional[BaseVWBAction]:
        """
        Create an action instance.

        Args:
            action_id: Action identifier
            parameters: Action parameters
            **kwargs: Extra constructor arguments

        Returns:
            The action instance, or None on error
        """
        with self._lock:
            action_class = self._actions.get(action_id)
            if action_class is None:
                print(f"⚠️ Action '{action_id}' not found in the registry")
                return None
            try:
                # Prepare the constructor arguments
                constructor_args = {
                    'action_id': f"{action_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
                    'parameters': parameters or {}
                }
                constructor_args.update(kwargs)

                # Create the instance
                instance = action_class(**constructor_args)
                print(f"✅ Created instance of action '{action_id}'")
                return instance
            except Exception as e:
                print(f"❌ Failed to create instance '{action_id}': {e}")
                return None

    def list_actions(self, category: Optional[str] = None) -> List[str]:
        """
        List the available actions.

        Args:
            category: Category filter (optional)

        Returns:
            List of action IDs
        """
        with self._lock:
            if category is None:
                return list(self._actions.keys())
            return list(self._categories.get(category, set()))

    def list_categories(self) -> List[str]:
        """
        List the available categories.

        Returns:
            List of categories
        """
        with self._lock:
            return list(self._categories.keys())

    def get_action_metadata(self, action_id: str) -> Optional[Dict[str, Any]]:
        """
        Return the metadata for an action.

        Args:
            action_id: Action identifier

        Returns:
            Metadata, or None when not found
        """
        with self._lock:
            return self._metadata.get(action_id)
    def search_actions(self,
                       query: str,
                       category: Optional[str] = None) -> List[str]:
        """
        Search actions by name or description.

        Args:
            query: Search term
            category: Category filter (optional)

        Returns:
            List of matching action IDs
        """
        with self._lock:
            query_lower = query.lower()
            results = []
            actions_to_search = self.list_actions(category)
            for action_id in actions_to_search:
                metadata = self._metadata.get(action_id, {})
                class_name = metadata.get('class_name', '').lower()
                # Match against both the ID and the class name
                if (query_lower in action_id.lower() or
                        query_lower in class_name):
                    results.append(action_id)
            return results

    def get_registry_stats(self) -> Dict[str, Any]:
        """
        Return registry statistics.

        Returns:
            Dictionary of statistics
        """
        with self._lock:
            return {
                'total_actions': len(self._actions),
                'categories': {
                    cat: len(actions)
                    for cat, actions in self._categories.items()
                },
                'initialized': self._initialized,
                'last_update': datetime.now().isoformat()
            }
    def auto_discover_actions(self, base_path: Optional[Path] = None) -> int:
        """
        Discover and register actions automatically.

        Args:
            base_path: Base path for discovery (optional)

        Returns:
            Number of actions discovered
        """
        with self._lock:
            if base_path is None:
                base_path = Path(__file__).parent

            discovered_count = 0
            try:
                # Discover actions in vision_ui
                vision_ui_path = base_path / "vision_ui"
                if vision_ui_path.exists():
                    discovered_count += self._discover_in_directory(
                        vision_ui_path, "vision_ui"
                    )

                # Discover actions in the other category directories
                for category_dir in base_path.iterdir():
                    if (category_dir.is_dir() and
                            category_dir.name not in ["__pycache__", "vision_ui"] and
                            not category_dir.name.startswith(".")):
                        discovered_count += self._discover_in_directory(
                            category_dir, category_dir.name
                        )

                self._initialized = True
                print(f"🔍 Auto-discovery finished: {discovered_count} actions found")
                return discovered_count

            except Exception as e:
                print(f"❌ Auto-discovery error: {e}")
                return discovered_count

    def _discover_in_directory(self, directory: Path, category: str) -> int:
        """
        Discover actions in a directory.

        Args:
            directory: Directory to scan
            category: Category for the discovered actions

        Returns:
            Number of actions discovered
        """
        discovered_count = 0
        for py_file in directory.glob("*.py"):
            if py_file.name.startswith("__"):
                continue
            try:
                # Build the module name
                module_name = f"visual_workflow_builder.backend.actions.{category}.{py_file.stem}"

                # Import the module
                module = importlib.import_module(module_name)

                # Look for action classes defined in that module
                for name, obj in inspect.getmembers(module, inspect.isclass):
                    if (obj != BaseVWBAction and
                            issubclass(obj, BaseVWBAction) and
                            obj.__module__ == module.__name__):
                        # Derive the action ID
                        action_id = self._generate_action_id(obj)
                        # Register the action
                        if self.register_action(obj, action_id, category):
                            discovered_count += 1
            except Exception as e:
                print(f"⚠️ Import error {py_file}: {e}")

        return discovered_count
def _generate_action_id(self, action_class: Type[BaseVWBAction]) -> str:
"""
Génère un ID d'action à partir de la classe.
Args:
action_class: Classe de l'action
Returns:
ID généré
"""
class_name = action_class.__name__
# Convertir VWBClickAnchorAction -> click_anchor
if class_name.startswith("VWB") and class_name.endswith("Action"):
# Enlever VWB et Action
core_name = class_name[3:-6]
# Convertir CamelCase en snake_case
import re
snake_case = re.sub('([A-Z]+)', r'_\1', core_name).lower()
return snake_case.lstrip('_')
# Fallback : utiliser le nom de classe en minuscules
return class_name.lower()
def clear(self):
"""Vide le registry."""
with self._lock:
self._actions.clear()
self._categories.clear()
self._metadata.clear()
self._initialized = False
print("🗑️ Registry vidé")
# Instance globale du registry
_global_registry: Optional[VWBActionRegistry] = None
_registry_lock = threading.Lock()
def get_global_registry() -> VWBActionRegistry:
"""
Obtient l'instance globale du registry (singleton thread-safe).
Returns:
Instance du registry
"""
global _global_registry
if _global_registry is None:
with _registry_lock:
if _global_registry is None:
_global_registry = VWBActionRegistry()
# Auto-découverte des actions au premier accès
try:
_global_registry.auto_discover_actions()
except Exception as e:
print(f"⚠️ Erreur auto-découverte : {e}")
return _global_registry
def register_action(action_class: Type[BaseVWBAction],
action_id: Optional[str] = None,
category: str = "default",
metadata: Optional[Dict[str, Any]] = None) -> bool:
"""
Enregistre une action dans le registry global.
Args:
action_class: Classe de l'action
action_id: Identifiant unique (optionnel)
category: Catégorie de l'action
metadata: Métadonnées additionnelles
Returns:
True si l'enregistrement a réussi
"""
return get_global_registry().register_action(
action_class, action_id, category, metadata
)
def get_action_class(action_id: str) -> Optional[Type[BaseVWBAction]]:
"""
Récupère une classe d'action du registry global.
Args:
action_id: Identifiant de l'action
Returns:
Classe de l'action ou None
"""
return get_global_registry().get_action_class(action_id)
def create_action(action_id: str,
parameters: Optional[Dict[str, Any]] = None,
**kwargs) -> Optional[BaseVWBAction]:
"""
Crée une instance d'action depuis le registry global.
Args:
action_id: Identifiant de l'action
parameters: Paramètres de l'action
**kwargs: Arguments additionnels
Returns:
Instance de l'action ou None
"""
return get_global_registry().create_action(action_id, parameters, **kwargs)
def list_available_actions(category: Optional[str] = None) -> List[str]:
"""
Liste les actions disponibles dans le registry global.
Args:
category: Filtrer par catégorie (optionnel)
Returns:
Liste des IDs d'actions
"""
return get_global_registry().list_actions(category)
def get_registry_info() -> Dict[str, Any]:
"""
Obtient les informations du registry global.
Returns:
Informations du registry
"""
registry = get_global_registry()
stats = registry.get_registry_stats()
return {
'stats': stats,
'actions': {
action_id: registry.get_action_metadata(action_id)
for action_id in registry.list_actions()
},
'categories': registry.list_categories()
}
# Décorateur pour l'enregistrement automatique
def vwb_action(action_id: Optional[str] = None,
category: str = "default",
metadata: Optional[Dict[str, Any]] = None):
"""
Décorateur pour l'enregistrement automatique d'actions VWB.
Args:
action_id: Identifiant unique (optionnel)
category: Catégorie de l'action
metadata: Métadonnées additionnelles
Returns:
Décorateur
"""
def decorator(action_class: Type[BaseVWBAction]):
register_action(action_class, action_id, category, metadata)
return action_class
return decorator
if __name__ == "__main__":
# Test du registry
print("🧪 Test du Registry Actions VWB")
registry = VWBActionRegistry()
# Test de découverte automatique
discovered = registry.auto_discover_actions()
print(f"Actions découvertes : {discovered}")
# Afficher les statistiques
stats = registry.get_registry_stats()
print(f"Statistiques : {stats}")
# Lister les actions
actions = registry.list_actions()
print(f"Actions disponibles : {actions}")
# Test de création d'action
if actions:
test_action_id = actions[0]
instance = registry.create_action(test_action_id)
if instance:
print(f"✅ Instance créée pour '{test_action_id}': {type(instance).__name__}")
else:
print(f"❌ Échec création instance pour '{test_action_id}'")
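The naming convention implemented by `_generate_action_id` (strip the `VWB` prefix and `Action` suffix, then convert CamelCase to snake_case) can be checked in isolation. This standalone sketch reproduces the same regex outside the class; the helper name `generate_action_id` is ours, not part of the module:

```python
import re

def generate_action_id(class_name: str) -> str:
    # Mirrors VWBActionRegistry._generate_action_id:
    # VWBClickAnchorAction -> click_anchor
    if class_name.startswith("VWB") and class_name.endswith("Action"):
        core_name = class_name[3:-6]  # drop "VWB" and "Action"
        snake_case = re.sub('([A-Z]+)', r'_\1', core_name).lower()
        return snake_case.lstrip('_')
    # Fallback: lowercase class name
    return class_name.lower()

print(generate_action_id("VWBClickAnchorAction"))   # click_anchor
print(generate_action_id("VWBExtractTextAction"))   # extract_text
print(generate_action_id("SomethingElse"))          # somethingelse
```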


@@ -0,0 +1 @@
"""Actions de validation VWB."""


@@ -0,0 +1,646 @@
"""
Action VWB Extract Text - Extraire du texte d'une zone visuelle
Auteur : Dom, Alice, Kiro - 10 janvier 2026
Cette action permet d'extraire du texte d'une zone de l'écran identifiée par une ancre visuelle,
en utilisant l'OCR et la reconnaissance de texte pour automatiser la lecture de données.
"""
from typing import Dict, Any, Optional, List
from datetime import datetime
import time
import traceback
import re
from ..base_action import BaseVWBAction, VWBActionResult, VWBActionStatus
from ...contracts.error import VWBActionError, VWBErrorType, VWBErrorSeverity, create_vwb_error
from ...contracts.evidence import VWBEvidence, VWBEvidenceType
from ...contracts.visual_anchor import VWBVisualAnchor
class VWBExtractTextAction(BaseVWBAction):
"""
Action VWB pour extraire du texte d'une zone identifiée par une ancre visuelle.
Cette action localise une zone de texte via une ancre visuelle et extrait
le contenu textuel en utilisant l'OCR et des techniques de reconnaissance.
"""
def __init__(self, action_id: str, parameters: Dict[str, Any], screen_capturer=None):
"""
Initialise l'action ExtractText.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres de configuration
screen_capturer: Instance du ScreenCapturer (Option A thread-safe)
"""
super().__init__(
action_id=action_id,
name="Extraire du Texte",
description="Extrait du texte d'une zone identifiée par une ancre visuelle",
parameters=parameters,
screen_capturer=screen_capturer
)
# Paramètres spécifiques à ExtractText
self.visual_anchor = parameters.get('visual_anchor')
self.extraction_mode = parameters.get('extraction_mode', 'full') # full, lines, words, numbers
self.text_filters = parameters.get('text_filters', []) # regex, cleanup, etc.
self.ocr_language = parameters.get('ocr_language', 'fra') # Français par défaut
self.confidence_threshold = parameters.get('confidence_threshold', 0.8)
self.expand_region = parameters.get('expand_region', {'top': 0, 'bottom': 0, 'left': 0, 'right': 0})
self.preprocessing = parameters.get('preprocessing', ['contrast', 'denoise'])
self.output_format = parameters.get('output_format', 'text') # text, json, structured
# Validation des paramètres
validation_errors = self._validate_parameters()
if validation_errors:
print(f"⚠️ Erreurs de validation: {validation_errors}")
def _validate_parameters(self) -> List[str]:
"""
Valide les paramètres de l'action.
Returns:
Liste des erreurs de validation
"""
errors = []
# Vérifier l'ancre visuelle
if not self.visual_anchor:
errors.append("Paramètre 'visual_anchor' requis")
elif not isinstance(self.visual_anchor, (VWBVisualAnchor, dict)):
errors.append("'visual_anchor' doit être un VWBVisualAnchor ou un dictionnaire")
# Vérifier le mode d'extraction
valid_modes = ['full', 'lines', 'words', 'numbers', 'custom']
if self.extraction_mode not in valid_modes:
errors.append(f"'extraction_mode' doit être l'un de : {valid_modes}")
# Vérifier le format de sortie
valid_formats = ['text', 'json', 'structured']
if self.output_format not in valid_formats:
errors.append(f"'output_format' doit être l'un de : {valid_formats}")
# Vérifier la région d'expansion
if not isinstance(self.expand_region, dict):
errors.append("'expand_region' doit être un dictionnaire")
else:
required_keys = ['top', 'bottom', 'left', 'right']
for key in required_keys:
if key not in self.expand_region:
errors.append(f"'expand_region' doit contenir la clé '{key}'")
elif not isinstance(self.expand_region[key], (int, float)):
errors.append(f"'expand_region.{key}' doit être un nombre")
# Vérifier le seuil de confiance
if not (0.0 <= self.confidence_threshold <= 1.0):
errors.append("'confidence_threshold' doit être entre 0.0 et 1.0")
return errors
def validate_parameters(self) -> List[str]:
"""
Valide les paramètres de l'action.
Returns:
Liste des erreurs de validation
"""
return self._validate_parameters()
def get_action_metadata(self) -> Dict[str, Any]:
"""
Retourne les métadonnées de l'action.
Returns:
Dictionnaire des métadonnées
"""
return {
"id": "extract_text",
"name": "Extraire du Texte",
"description": "Extrait du texte d'une zone identifiée par une ancre visuelle",
"category": "data",
"version": "1.0.0",
"author": "Dom, Alice, Kiro",
"created_date": "2026-01-10",
"parameters": {
"visual_anchor": {
"type": "VWBVisualAnchor",
"required": True,
"description": "Ancre visuelle pour localiser la zone de texte"
},
"extraction_mode": {
"type": "string",
"required": False,
"default": "full",
"options": ["full", "lines", "words", "numbers", "custom"],
"description": "Mode d'extraction du texte"
},
"text_filters": {
"type": "array",
"required": False,
"default": [],
"description": "Filtres à appliquer au texte extrait"
},
"ocr_language": {
"type": "string",
"required": False,
"default": "fra",
"description": "Langue pour la reconnaissance OCR"
},
"confidence_threshold": {
"type": "number",
"required": False,
"default": 0.8,
"min": 0.0,
"max": 1.0,
"description": "Seuil de confiance pour la détection"
},
"expand_region": {
"type": "object",
"required": False,
"default": {"top": 0, "bottom": 0, "left": 0, "right": 0},
"description": "Expansion de la région de capture (pixels)"
},
"preprocessing": {
"type": "array",
"required": False,
"default": ["contrast", "denoise"],
"description": "Prétraitements d'image pour améliorer l'OCR"
},
"output_format": {
"type": "string",
"required": False,
"default": "text",
"options": ["text", "json", "structured"],
"description": "Format de sortie du texte extrait"
}
},
"outputs": {
"extracted_text": {
"type": "string",
"description": "Texte extrait de la zone"
},
"text_confidence": {
"type": "number",
"description": "Confiance moyenne de l'extraction OCR"
},
"text_structure": {
"type": "object",
"description": "Structure détaillée du texte (si format structuré)"
},
"extraction_region": {
"type": "object",
"description": "Coordonnées de la région d'extraction"
}
},
"examples": [
{
"name": "Extraire un numéro de facture",
"description": "Extrait un numéro de facture d'un document",
"parameters": {
"extraction_mode": "numbers",
"text_filters": ["digits_only"],
"output_format": "text"
}
},
{
"name": "Lire un tableau de données",
"description": "Extrait les données d'un tableau avec structure",
"parameters": {
"extraction_mode": "full",
"output_format": "structured",
"preprocessing": ["contrast", "denoise", "binarize"]
}
}
]
}
def execute_core(self, step_id: str) -> VWBActionResult:
"""
Exécute l'action d'extraction de texte.
Args:
step_id: Identifiant de l'étape
Returns:
Résultat de l'exécution avec texte extrait
"""
start_time = datetime.now()
evidence_list = []
try:
print(f"📝 Début ExtractText - Mode: {self.extraction_mode}")
# Validation des paramètres
validation_errors = self._validate_parameters()
if validation_errors:
error = create_vwb_error(
error_type=VWBErrorType.PARAMETER_INVALID,
message=f"Paramètres invalides: {', '.join(validation_errors)}",
severity=VWBErrorSeverity.HIGH,
retryable=False,
details={"validation_errors": validation_errors}
)
return self._create_error_result_simple(start_time, step_id, error)
# Convertir l'ancre visuelle si nécessaire
if isinstance(self.visual_anchor, dict):
visual_anchor = VWBVisualAnchor.from_dict(self.visual_anchor)
else:
visual_anchor = self.visual_anchor
# Vérifier la disponibilité du ScreenCapturer
if not self.screen_capturer:
error = create_vwb_error(
error_type=VWBErrorType.SCREEN_CAPTURE_FAILED,
message="ScreenCapturer non disponible",
severity=VWBErrorSeverity.HIGH,
retryable=False
)
return self._create_error_result_simple(start_time, step_id, error)
# Capture d'écran
screenshot_data = self._capture_screen_safe()
if not screenshot_data:
error = create_vwb_error(
error_type=VWBErrorType.SCREEN_CAPTURE_FAILED,
message="Impossible de capturer l'écran",
severity=VWBErrorSeverity.HIGH,
retryable=True
)
return self._create_error_result_simple(start_time, step_id, error)
# Recherche de la zone de texte
print(f"🔍 Recherche de la zone de texte: {visual_anchor.label}")
region_found, region_coords, confidence = self._find_visual_element(
screenshot_data, visual_anchor, self.confidence_threshold
)
if not region_found:
error = create_vwb_error(
error_type=VWBErrorType.ELEMENT_NOT_FOUND,
message=f"Zone de texte '{visual_anchor.label}' non trouvée (confiance < {self.confidence_threshold})",
severity=VWBErrorSeverity.MEDIUM,
retryable=True,
details={
"anchor_label": visual_anchor.label,
"confidence_threshold": self.confidence_threshold,
"best_confidence": confidence
}
)
# Evidence d'échec
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.SCREENSHOT,
description=f"Zone de texte '{visual_anchor.label}' non trouvée",
screenshot_base64=self._encode_screenshot(screenshot_data),
confidence_score=confidence,
success=False
)
evidence_list.append(evidence)
return self._create_error_result_simple(start_time, step_id, error, evidence_list)
print(f"✅ Zone trouvée avec confiance {confidence:.3f} à {region_coords}")
# Expansion de la région si demandée
expanded_coords = self._expand_extraction_region(region_coords)
print(f"📏 Région d'extraction: {expanded_coords}")
# Extraction de la région d'image
region_image = self._extract_image_region(screenshot_data, expanded_coords)
if not region_image:
error = create_vwb_error(
error_type=VWBErrorType.ACTION_EXECUTION_FAILED,
message="Impossible d'extraire la région d'image",
severity=VWBErrorSeverity.HIGH,
retryable=True
)
return self._create_error_result_simple(start_time, step_id, error)
# Prétraitement de l'image
processed_image = self._preprocess_image(region_image)
# Extraction du texte via OCR
extracted_text, text_confidence, text_structure = self._perform_ocr_extraction(processed_image)
if not extracted_text and self.extraction_mode != 'custom':
error = create_vwb_error(
error_type=VWBErrorType.ACTION_EXECUTION_FAILED,
message="Aucun texte extrait de la région",
severity=VWBErrorSeverity.MEDIUM,
retryable=True,
details={
"region_coordinates": expanded_coords,
"ocr_confidence": text_confidence
}
)
return self._create_error_result_simple(start_time, step_id, error)
# Application des filtres
filtered_text = self._apply_text_filters(extracted_text)
# Formatage de la sortie
formatted_output = self._format_output(filtered_text, text_structure)
print(f"📄 Texte extrait: '{filtered_text[:50]}{'...' if len(filtered_text) > 50 else ''}'")
# Evidence de succès
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.DATA_EXTRACTION,
description=f"Texte extrait de '{visual_anchor.label}'",
screenshot_base64=self._encode_screenshot(screenshot_data),
element_coordinates=expanded_coords,
confidence_score=confidence,
interaction_details={
"extraction_mode": self.extraction_mode,
"text_length": len(filtered_text),
"ocr_confidence": text_confidence,
"preprocessing_applied": self.preprocessing,
"filters_applied": len(self.text_filters)
},
extracted_data={
"text": filtered_text,
"confidence": text_confidence,
"structure": text_structure if self.output_format == 'structured' else None
},
success=True
)
evidence_list.append(evidence)
# Données de sortie
output_data = {
"extracted_text": filtered_text,
"text_confidence": text_confidence,
"text_structure": text_structure if self.output_format == 'structured' else None,
"extraction_region": expanded_coords,
"character_count": len(filtered_text),
"word_count": len(filtered_text.split()) if filtered_text else 0,
"formatted_output": formatted_output
}
end_time = datetime.now()
execution_time = (end_time - start_time).total_seconds() * 1000
print(f"✅ ExtractText réussie en {execution_time:.1f}ms")
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.SUCCESS,
start_time=start_time,
end_time=end_time,
execution_time_ms=execution_time,
output_data=output_data,
evidence_list=evidence_list
)
except Exception as e:
print(f"❌ Erreur ExtractText: {e}")
error = create_vwb_error(
error_type=VWBErrorType.SYSTEM_ERROR,
message=f"Erreur inattendue lors de l'extraction: {str(e)}",
severity=VWBErrorSeverity.HIGH,
retryable=True,
details={"exception": str(e), "traceback": traceback.format_exc()}
)
return self._create_error_result_simple(start_time, step_id, error, evidence_list)
def _create_error_result_simple(self, start_time: datetime, step_id: str, error: VWBActionError, evidence_list: List[VWBEvidence] = None) -> VWBActionResult:
"""Crée un résultat d'erreur simplifié."""
end_time = datetime.now()
execution_time = (end_time - start_time).total_seconds() * 1000
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.FAILED,
start_time=start_time,
end_time=end_time,
execution_time_ms=execution_time,
output_data={},
evidence_list=evidence_list or [],
error=error
)
def _capture_screen_safe(self):
"""Capture d'écran sécurisée avec gestion d'erreur."""
try:
if self.screen_capturer:
return self.screen_capturer.capture()
except Exception as e:
print(f"⚠️ Erreur capture d'écran: {e}")
return None
def _find_visual_element(self, screenshot, visual_anchor, threshold):
"""Simulation de recherche d'élément visuel."""
import random
confidence = random.uniform(0.6, 0.95)
if confidence >= threshold:
return True, {'x': 300, 'y': 200, 'width': 250, 'height': 80}, confidence
else:
return False, {}, confidence
def _encode_screenshot(self, screenshot_data) -> str:
"""Encode un screenshot en base64."""
try:
import base64
return base64.b64encode(str(screenshot_data).encode()).decode('utf-8')
except Exception:
return ""
def execute(self, step_id: str = None, workflow_id: str = None, user_id: str = None) -> VWBActionResult:
"""
Exécute l'action d'extraction de texte (méthode héritée).
Args:
step_id: Identifiant de l'étape
workflow_id: Identifiant du workflow
user_id: Identifiant de l'utilisateur
Returns:
Résultat de l'exécution avec texte extrait
"""
# Déléguer à execute_core qui est la méthode abstraite requise
return self.execute_core(step_id or f"step_{datetime.now().strftime('%Y%m%d_%H%M%S')}")
def _expand_extraction_region(self, coords: Dict[str, int]) -> Dict[str, int]:
"""
Étend la région d'extraction selon les paramètres.
Args:
coords: Coordonnées originales
Returns:
Coordonnées expansées
"""
return {
'x': max(0, coords['x'] - self.expand_region['left']),
'y': max(0, coords['y'] - self.expand_region['top']),
'width': coords['width'] + self.expand_region['left'] + self.expand_region['right'],
'height': coords['height'] + self.expand_region['top'] + self.expand_region['bottom']
}
def _extract_image_region(self, screenshot_data, coords: Dict[str, int]):
"""
Extrait une région spécifique de l'image.
Args:
screenshot_data: Données de l'image complète
coords: Coordonnées de la région
Returns:
Image de la région ou None
"""
try:
# Ici, on utiliserait PIL ou OpenCV pour extraire la région
# Pour la simulation, on retourne un objet factice
print(f"✂️ Extraction région {coords['width']}x{coords['height']}")
return {"width": coords['width'], "height": coords['height'], "data": "simulated"}
except Exception as e:
print(f"❌ Erreur extraction région: {e}")
return None
def _preprocess_image(self, image_data):
"""
Applique les prétraitements à l'image pour améliorer l'OCR.
Args:
image_data: Données de l'image
Returns:
Image prétraitée
"""
try:
processed = image_data.copy() if isinstance(image_data, dict) else image_data
for preprocessing in self.preprocessing:
if preprocessing == 'contrast':
print("🔆 Amélioration du contraste")
elif preprocessing == 'denoise':
print("🧹 Réduction du bruit")
elif preprocessing == 'binarize':
print("⚫⚪ Binarisation")
elif preprocessing == 'deskew':
print("📐 Correction de l'inclinaison")
return processed
except Exception as e:
print(f"⚠️ Erreur prétraitement: {e}")
return image_data
def _perform_ocr_extraction(self, image_data) -> tuple[str, float, Dict[str, Any]]:
"""
Effectue l'extraction OCR sur l'image.
Args:
image_data: Image prétraitée
Returns:
Tuple (texte, confiance, structure)
"""
try:
# Simulation d'extraction OCR
# En réalité, on utiliserait pytesseract ou une API OCR
if self.extraction_mode == 'full':
extracted_text = "Texte exemple extrait par OCR\nLigne 2 du texte\nDernière ligne"
elif self.extraction_mode == 'numbers':
extracted_text = "123456 789 2026"
elif self.extraction_mode == 'words':
extracted_text = "mot1 mot2 mot3 mot4"
elif self.extraction_mode == 'lines':
extracted_text = "Ligne 1\nLigne 2\nLigne 3"
else:
extracted_text = "Texte personnalisé"
# Confiance simulée
confidence = 0.85
# Structure simulée
structure = {
"lines": extracted_text.split('\n') if '\n' in extracted_text else [extracted_text],
"words": extracted_text.split(),
"characters": len(extracted_text),
"language_detected": self.ocr_language
}
print(f"🔤 OCR terminé - Confiance: {confidence:.3f}")
return extracted_text, confidence, structure
except Exception as e:
print(f"❌ Erreur OCR: {e}")
return "", 0.0, {}
def _apply_text_filters(self, text: str) -> str:
"""
Applique les filtres de texte configurés.
Args:
text: Texte brut
Returns:
Texte filtré
"""
filtered_text = text
try:
for filter_name in self.text_filters:
if filter_name == 'digits_only':
filtered_text = re.sub(r'[^\d\s]', '', filtered_text)
elif filter_name == 'letters_only':
filtered_text = re.sub(r'[^a-zA-ZÀ-ÿ\s]', '', filtered_text)
elif filter_name == 'trim_whitespace':
filtered_text = filtered_text.strip()
elif filter_name == 'remove_newlines':
filtered_text = filtered_text.replace('\n', ' ')
elif filter_name == 'uppercase':
filtered_text = filtered_text.upper()
elif filter_name == 'lowercase':
filtered_text = filtered_text.lower()
print(f"🔧 Filtre appliqué: {filter_name}")
except Exception as e:
print(f"⚠️ Erreur application filtres: {e}")
return filtered_text
def _format_output(self, text: str, structure: Dict[str, Any]) -> Any:
"""
Formate la sortie selon le format demandé.
Args:
text: Texte filtré
structure: Structure du texte
Returns:
Sortie formatée
"""
if self.output_format == 'text':
return text
elif self.output_format == 'json':
return {
"text": text,
"metadata": {
"extraction_mode": self.extraction_mode,
"character_count": len(text),
"word_count": len(text.split()),
"line_count": len(text.split('\n'))
}
}
elif self.output_format == 'structured':
return {
"text": text,
"structure": structure,
"metadata": {
"extraction_mode": self.extraction_mode,
"filters_applied": self.text_filters,
"preprocessing": self.preprocessing
}
}
else:
return text
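The filter chain in `_apply_text_filters` is pure string processing and can be exercised on its own. This sketch reproduces the same filters as a free function (the name `apply_text_filters` is ours; the filter identifiers match those handled by the action):

```python
import re

def apply_text_filters(text: str, filters: list) -> str:
    # Same semantics as VWBExtractTextAction._apply_text_filters
    for name in filters:
        if name == 'digits_only':
            text = re.sub(r'[^\d\s]', '', text)
        elif name == 'letters_only':
            text = re.sub(r'[^a-zA-ZÀ-ÿ\s]', '', text)
        elif name == 'trim_whitespace':
            text = text.strip()
        elif name == 'remove_newlines':
            text = text.replace('\n', ' ')
        elif name == 'uppercase':
            text = text.upper()
        elif name == 'lowercase':
            text = text.lower()
    return text

result = apply_text_filters(
    "Facture N° 123456\n2026",
    ['digits_only', 'remove_newlines', 'trim_whitespace']
)
print(result)  # 123456 2026
```

Note that the filters are applied in list order, so `trim_whitespace` placed last cleans up whatever the earlier filters leave behind.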


@@ -0,0 +1,392 @@
"""
Action VWB - Focus sur Ancre Visuelle
Auteur : Dom, Alice, Kiro - 10 janvier 2026
Cette action met le focus sur un élément UI identifié par une ancre visuelle.
"""
from typing import Dict, Any, Optional
from datetime import datetime
import time
from ..base_action import BaseVWBAction, VWBActionResult, VWBActionStatus
from ...contracts.error import VWBErrorType, VWBErrorSeverity, create_vwb_error
from ...contracts.evidence import VWBEvidence, VWBEvidenceType
from ...contracts.visual_anchor import VWBVisualAnchor
class VWBFocusAnchorAction(BaseVWBAction):
"""
Action pour mettre le focus sur un élément UI via ancre visuelle.
Cette action localise un élément UI et lui donne le focus sans cliquer,
utile pour activer des champs de saisie ou révéler des menus contextuels.
"""
def __init__(self, action_id: str, parameters: Dict[str, Any], screen_capturer=None):
"""
Initialise l'action de focus sur ancre.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres de configuration
screen_capturer: Instance du ScreenCapturer
"""
super().__init__(
action_id=action_id,
name="Focus sur Ancre Visuelle",
description="Met le focus sur un élément UI identifié par une ancre visuelle",
parameters=parameters,
screen_capturer=screen_capturer
)
# Paramètres spécifiques
self.visual_anchor = parameters.get('visual_anchor')
self.focus_method = parameters.get('focus_method', 'hover') # hover, tab, click_light
self.hover_duration_ms = parameters.get('hover_duration_ms', 500)
self.confidence_threshold = parameters.get('confidence_threshold', 0.8)
self.max_attempts = parameters.get('max_attempts', 3)
# Validation des paramètres
validation_errors = self.validate_parameters()
if validation_errors:
print(f"⚠️ Erreurs de validation: {validation_errors}")
# Validation des paramètres spécifiques
focus_errors = self._validate_focus_parameters()
if focus_errors:
print(f"⚠️ Erreurs de validation focus: {focus_errors}")
def _validate_focus_parameters(self):
"""Valide les paramètres spécifiques au focus."""
errors = []
if not isinstance(self.visual_anchor, VWBVisualAnchor):
errors.append("visual_anchor doit être une instance de VWBVisualAnchor")
if self.focus_method not in ['hover', 'tab', 'click_light']:
errors.append("focus_method doit être 'hover', 'tab' ou 'click_light'")
if not isinstance(self.hover_duration_ms, (int, float)) or self.hover_duration_ms < 0:
errors.append("hover_duration_ms doit être un nombre positif")
if not isinstance(self.confidence_threshold, (int, float)) or not 0 <= self.confidence_threshold <= 1:
errors.append("confidence_threshold doit être entre 0 et 1")
return errors
def get_action_type(self) -> str:
"""Retourne le type d'action."""
return "focus_anchor"
def get_action_name(self) -> str:
"""Retourne le nom de l'action."""
return "Focus sur Ancre Visuelle"
def get_action_description(self) -> str:
"""Retourne la description de l'action."""
return "Met le focus sur un élément UI identifié par une ancre visuelle"
def validate_parameters(self) -> list:
"""
Valide les paramètres de l'action.
Returns:
Liste des erreurs de validation
"""
errors = []
if not self.visual_anchor:
errors.append("Paramètre 'visual_anchor' requis")
if self.confidence_threshold < 0.5:
errors.append("Seuil de confiance trop faible (< 0.5)")
return errors
def get_action_metadata(self) -> Dict[str, Any]:
"""
Retourne les métadonnées de l'action.
Returns:
Dictionnaire des métadonnées
"""
return {
"id": "focus_anchor",
"name": "Donner le Focus à un Élément",
"description": "Met le focus sur un élément UI identifié par une ancre visuelle",
"category": "vision_ui",
"version": "1.0.0",
"author": "Dom, Alice, Kiro",
"created_date": "2026-01-10",
"parameters": {
"visual_anchor": {
"type": "VWBVisualAnchor",
"required": True,
"description": "Ancre visuelle pour localiser l'élément cible"
},
"focus_method": {
"type": "string",
"required": False,
"default": "hover",
"options": ["hover", "tab", "click_light"],
"description": "Méthode pour donner le focus"
},
"hover_duration_ms": {
"type": "number",
"required": False,
"default": 500,
"min": 100,
"description": "Durée du survol en millisecondes"
},
"confidence_threshold": {
"type": "number",
"required": False,
"default": 0.8,
"min": 0.0,
"max": 1.0,
"description": "Seuil de confiance pour la détection"
}
},
"outputs": {
"focus_success": {
"type": "boolean",
"description": "Indique si le focus a été donné avec succès"
},
"focus_coordinates": {
"type": "object",
"description": "Coordonnées où le focus a été donné"
},
"confidence_score": {
"type": "number",
"description": "Score de confiance de la détection"
}
}
}
def execute_core(self, step_id: str) -> VWBActionResult:
"""
Logique principale d'exécution du focus.
Args:
step_id: Identifiant de l'étape
Returns:
Résultat de l'exécution
"""
start_time = datetime.now()
evidence_list = []
try:
print(f"🎯 Focus sur ancre '{self.visual_anchor.label}' (méthode: {self.focus_method})")
# Capture d'écran initiale
if not self.screen_capturer:
raise Exception("ScreenCapturer non disponible")
screenshot = self.screen_capturer.capture()
if not screenshot:
raise Exception("Impossible de capturer l'écran")
# Recherche de l'ancre visuelle
match_found = False
best_match = None
for attempt in range(self.max_attempts):
print(f" Tentative {attempt + 1}/{self.max_attempts}")
# Simulation de recherche d'ancre (à remplacer par vraie implémentation)
import random
confidence = random.uniform(0.6, 0.95)
if confidence >= self.confidence_threshold:
# Ancre trouvée
match_found = True
best_match = {
'confidence': confidence,
'bbox': {'x': 400, 'y': 300, 'width': 120, 'height': 30},
'center': {'x': 460, 'y': 315}
}
break
if attempt < self.max_attempts - 1:
time.sleep(0.5) # Attendre avant nouvelle tentative
if not match_found:
# Ancre non trouvée
error = create_vwb_error(
error_type=VWBErrorType.ANCHOR_NOT_FOUND,
message=f"Ancre '{self.visual_anchor.label}' non trouvée (seuil: {self.confidence_threshold})",
severity=VWBErrorSeverity.HIGH,
retryable=True,
details={
'anchor_label': self.visual_anchor.label,
'confidence_threshold': self.confidence_threshold,
'attempts': self.max_attempts
}
)
# Evidence d'échec
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.SCREENSHOT,
description=f"Échec focus - ancre non trouvée",
screenshot_base64=self._screenshot_to_base64(screenshot),
success=False,
confidence_score=0.0,
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000
)
evidence_list.append(evidence)
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.FAILED,
start_time=start_time,
end_time=datetime.now(),
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
output_data={},
evidence_list=evidence_list,
error=error
)
# Exécuter le focus selon la méthode
focus_success = self._execute_focus_method(best_match)
if not focus_success:
raise Exception(f"Échec de la méthode de focus '{self.focus_method}'")
# Evidence de succès
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.UI_INTERACTION,
description=f"Focus réussi sur '{self.visual_anchor.label}' (méthode: {self.focus_method})",
screenshot_base64=self._screenshot_to_base64(screenshot),
success=True,
confidence_score=best_match['confidence'],
bbox=best_match['bbox'],
click_point=best_match['center'],
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
metadata={
'focus_method': self.focus_method,
'hover_duration_ms': self.hover_duration_ms,
'attempts': attempt + 1
}
)
evidence_list.append(evidence)
# Données de sortie
output_data = {
'focus_success': True,
'focus_method': self.focus_method,
'confidence_score': best_match['confidence'],
'focus_coordinates': best_match['center'],
'attempts_used': attempt + 1
}
print(f"✅ Focus réussi avec confiance {best_match['confidence']:.2f}")
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.SUCCESS,
start_time=start_time,
end_time=datetime.now(),
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
output_data=output_data,
evidence_list=evidence_list
)
except Exception as e:
# Gestion des erreurs
error = create_vwb_error(
error_type=VWBErrorType.EXECUTION_ERROR,
message=f"Erreur lors du focus: {str(e)}",
severity=VWBErrorSeverity.HIGH,
retryable=True,
details={'exception': str(e)}
)
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.FAILED,
start_time=start_time,
end_time=datetime.now(),
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
output_data={},
evidence_list=evidence_list,
error=error
)
def _execute_focus_method(self, match_info: Dict[str, Any]) -> bool:
"""
Exécute la méthode de focus spécifiée.
Args:
match_info: Informations sur l'élément trouvé
Returns:
True si le focus a réussi
"""
try:
center = match_info['center']
if self.focus_method == 'hover':
# Survol de l'élément
print(f" Survol à ({center['x']}, {center['y']}) pendant {self.hover_duration_ms}ms")
# Simulation du survol
time.sleep(self.hover_duration_ms / 1000.0)
return True
elif self.focus_method == 'click_light':
# Clic léger (sans appui prolongé)
print(f" Clic léger à ({center['x']}, {center['y']})")
# Simulation du clic léger
time.sleep(0.1)
return True
elif self.focus_method == 'tab':
# Navigation par tabulation (approximative)
print(" Navigation par tabulation")
# Simulation de la tabulation
time.sleep(0.2)
return True
else:
print(f"⚠️ Méthode de focus inconnue: {self.focus_method}")
return False
except Exception as e:
print(f"❌ Erreur lors de l'exécution du focus: {e}")
return False
def _screenshot_to_base64(self, screenshot) -> str:
"""
Convertit un screenshot en base64.
Args:
screenshot: Image capturée
Returns:
String base64 de l'image
"""
try:
import base64
import io
from PIL import Image
if hasattr(screenshot, 'save'):
# PIL Image
buffer = io.BytesIO()
screenshot.save(buffer, format='PNG')
return base64.b64encode(buffer.getvalue()).decode('utf-8')
            elif isinstance(screenshot, (bytes, bytearray)):
                # Raw image bytes: encode them directly
                return base64.b64encode(screenshot).decode('utf-8')
            else:
                # Unknown format: encoding str(screenshot) would only capture its repr
                return ""
except Exception as e:
print(f"⚠️ Erreur conversion base64: {e}")
return ""
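The `_execute_focus_method` above only simulates the gestures with `time.sleep`. A minimal real implementation could look like the sketch below — a hypothetical standalone helper (`focus_element` is not part of the action class), assuming `pyautogui` is available; the `center` dict uses the same `{'x', 'y'}` shape as `match_info['center']`:

```python
import time

VALID_FOCUS_METHODS = ("hover", "click_light", "tab")

def focus_element(center, method="hover", hover_duration_ms=500):
    """Perform a focus gesture at screen coordinates `center` ({'x': int, 'y': int})."""
    if method not in VALID_FOCUS_METHODS:
        return False
    import pyautogui  # imported lazily so headless environments can still load this module
    if method == "hover":
        # Glide the cursor onto the element, then let it rest for the hover duration
        pyautogui.moveTo(center["x"], center["y"], duration=0.2)
        time.sleep(hover_duration_ms / 1000.0)
    elif method == "click_light":
        # Single left click, no hold
        pyautogui.click(center["x"], center["y"])
    else:  # "tab"
        pyautogui.press("tab")
    return True
```

Wiring this in would mean replacing each simulated branch of `_execute_focus_method` with the corresponding `pyautogui` call.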


@@ -0,0 +1,368 @@
"""
Action VWB - Raccourci Clavier
Auteur : Dom, Alice, Kiro - 10 janvier 2026
Cette action exécute des raccourcis clavier (Ctrl+C, Alt+Tab, etc.).
"""
from typing import Dict, Any, Optional, List
from datetime import datetime
import time
import re
from ..base_action import BaseVWBAction, VWBActionResult, VWBActionStatus
from ...contracts.error import VWBErrorType, VWBErrorSeverity, create_vwb_error
from ...contracts.evidence import VWBEvidence, VWBEvidenceType
class VWBHotkeyAction(BaseVWBAction):
"""
Action pour exécuter des raccourcis clavier.
Cette action permet d'envoyer des combinaisons de touches comme Ctrl+C, Alt+Tab,
F5, etc. pour interagir avec les applications.
"""
def __init__(self, action_id: str, parameters: Dict[str, Any], screen_capturer=None):
"""
Initialise l'action de raccourci clavier.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres de configuration
screen_capturer: Instance du ScreenCapturer
"""
super().__init__(action_id, parameters, screen_capturer)
# Paramètres spécifiques
self.keys = parameters.get('keys', '')
self.hold_duration_ms = parameters.get('hold_duration_ms', 100)
self.delay_between_keys_ms = parameters.get('delay_between_keys_ms', 50)
self.repeat_count = parameters.get('repeat_count', 1)
self.delay_between_repeats_ms = parameters.get('delay_between_repeats_ms', 100)
self.capture_before = parameters.get('capture_before', True)
self.capture_after = parameters.get('capture_after', True)
# Validation des paramètres
self._validate_hotkey_parameters()
# Parser les touches
self.parsed_keys = self._parse_keys(self.keys)
def _validate_hotkey_parameters(self):
"""Valide les paramètres spécifiques aux raccourcis clavier."""
errors = []
if not self.keys or not isinstance(self.keys, str):
errors.append("keys doit être une chaîne non vide")
if not isinstance(self.hold_duration_ms, (int, float)) or self.hold_duration_ms < 0:
errors.append("hold_duration_ms doit être un nombre positif")
if not isinstance(self.delay_between_keys_ms, (int, float)) or self.delay_between_keys_ms < 0:
errors.append("delay_between_keys_ms doit être un nombre positif")
if not isinstance(self.repeat_count, int) or self.repeat_count < 1:
errors.append("repeat_count doit être un entier positif")
if errors:
raise ValueError(f"Paramètres invalides pour Hotkey: {'; '.join(errors)}")
def _parse_keys(self, keys_string: str) -> List[Dict[str, Any]]:
"""
Parse une chaîne de touches en structure utilisable.
Args:
keys_string: Chaîne comme "ctrl+c", "alt+tab", "f5", etc.
Returns:
Liste des touches parsées
"""
try:
# Normaliser la chaîne
keys_string = keys_string.lower().strip()
# Séparer les combinaisons multiples (ex: "ctrl+c, ctrl+v")
combinations = [combo.strip() for combo in keys_string.split(',')]
parsed = []
for combo in combinations:
# Séparer les modificateurs et la touche principale
parts = [part.strip() for part in combo.split('+')]
modifiers = []
main_key = None
for part in parts:
if part in ['ctrl', 'control', 'alt', 'shift', 'win', 'cmd', 'meta']:
modifiers.append(part)
else:
main_key = part
                if not main_key and len(parts) == 1:
                    # Single key: a lone modifier (e.g. "ctrl" alone) becomes the main
                    # key and leaves the modifier list, so it is not pressed twice
                    main_key = parts[0]
                    modifiers = []
parsed.append({
'modifiers': modifiers,
'key': main_key,
'original': combo
})
return parsed
except Exception as e:
print(f"⚠️ Erreur parsing touches '{keys_string}': {e}")
return [{'modifiers': [], 'key': keys_string, 'original': keys_string}]
def get_action_type(self) -> str:
"""Retourne le type d'action."""
return "hotkey"
def get_action_name(self) -> str:
"""Retourne le nom de l'action."""
return "Raccourci Clavier"
def get_action_description(self) -> str:
"""Retourne la description de l'action."""
return "Exécute des raccourcis clavier (Ctrl+C, Alt+Tab, F5, etc.)"
def validate_parameters(self) -> list:
"""
Valide les paramètres de l'action.
Returns:
Liste des erreurs de validation
"""
errors = []
if not self.keys:
errors.append("Paramètre 'keys' requis")
if self.repeat_count > 10:
errors.append("repeat_count trop élevé (max 10)")
# Vérifier que les touches sont reconnues
valid_keys = {
# Touches spéciales
'enter', 'return', 'space', 'tab', 'escape', 'esc', 'backspace', 'delete', 'del',
'home', 'end', 'pageup', 'pagedown', 'insert', 'pause', 'printscreen',
# Flèches
'up', 'down', 'left', 'right',
# Touches fonction
'f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12',
# Modificateurs
'ctrl', 'control', 'alt', 'shift', 'win', 'cmd', 'meta',
# Lettres et chiffres (seront validés séparément)
}
        for parsed_key in self.parsed_keys:
            key = parsed_key['key']
            # A single alphanumeric character (letter or digit) is always valid
            if key and key not in valid_keys and not (len(key) == 1 and key.isalnum()):
                errors.append(f"Touche non reconnue: '{key}'")
return errors
def _execute_action_logic(self, step_id: str, workflow_id: Optional[str] = None,
user_id: Optional[str] = None) -> VWBActionResult:
"""
Logique principale d'exécution du raccourci clavier.
Args:
step_id: Identifiant de l'étape
workflow_id: Identifiant du workflow
user_id: Identifiant de l'utilisateur
Returns:
Résultat de l'exécution
"""
start_time = datetime.now()
evidence_list = []
try:
print(f"⌨️ Exécution raccourci clavier: {self.keys}")
# Capture d'écran avant (optionnelle)
screenshot_before = None
if self.capture_before and self.screen_capturer:
screenshot_before = self.screen_capturer.capture()
# Exécuter les raccourcis
execution_details = []
for repeat in range(self.repeat_count):
if self.repeat_count > 1:
print(f" Répétition {repeat + 1}/{self.repeat_count}")
for i, parsed_key in enumerate(self.parsed_keys):
success = self._execute_key_combination(parsed_key)
execution_details.append({
'combination': parsed_key['original'],
'success': success,
'repeat': repeat + 1
})
if not success:
raise Exception(f"Échec exécution de '{parsed_key['original']}'")
# Délai entre les combinaisons
if i < len(self.parsed_keys) - 1:
time.sleep(self.delay_between_keys_ms / 1000.0)
# Délai entre les répétitions
if repeat < self.repeat_count - 1:
time.sleep(self.delay_between_repeats_ms / 1000.0)
# Capture d'écran après (optionnelle)
screenshot_after = None
if self.capture_after and self.screen_capturer:
screenshot_after = self.screen_capturer.capture()
# Evidence de succès
evidence_description = f"Raccourci clavier '{self.keys}' exécuté"
if self.repeat_count > 1:
evidence_description += f" ({self.repeat_count} fois)"
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.KEYBOARD_INPUT,
description=evidence_description,
screenshot_base64=self._screenshot_to_base64(screenshot_after or screenshot_before),
success=True,
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
metadata={
'keys': self.keys,
'parsed_keys': [pk['original'] for pk in self.parsed_keys],
'repeat_count': self.repeat_count,
'execution_details': execution_details,
'hold_duration_ms': self.hold_duration_ms,
'delay_between_keys_ms': self.delay_between_keys_ms
}
)
evidence_list.append(evidence)
# Données de sortie
output_data = {
'hotkey_success': True,
'keys_executed': self.keys,
'combinations_count': len(self.parsed_keys),
'repeat_count': self.repeat_count,
'total_combinations': len(self.parsed_keys) * self.repeat_count,
'execution_details': execution_details
}
print(f"✅ Raccourci clavier exécuté avec succès")
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.SUCCESS,
start_time=start_time,
end_time=datetime.now(),
output_data=output_data,
evidence_list=evidence_list,
workflow_id=workflow_id,
user_id=user_id
)
except Exception as e:
# Gestion des erreurs
error = create_vwb_error(
error_type=VWBErrorType.EXECUTION_ERROR,
message=f"Erreur lors de l'exécution du raccourci: {str(e)}",
severity=VWBErrorSeverity.MEDIUM,
retryable=True,
details={'keys': self.keys, 'exception': str(e)}
)
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.FAILED,
start_time=start_time,
end_time=datetime.now(),
output_data={},
evidence_list=evidence_list,
error=error,
workflow_id=workflow_id,
user_id=user_id
)
def _execute_key_combination(self, parsed_key: Dict[str, Any]) -> bool:
"""
Exécute une combinaison de touches.
Args:
parsed_key: Touche parsée avec modificateurs
Returns:
True si l'exécution a réussi
"""
try:
combination = parsed_key['original']
modifiers = parsed_key['modifiers']
key = parsed_key['key']
print(f" Exécution: {combination}")
# Simulation de l'appui sur les touches
# En production, utiliser pyautogui, pynput ou équivalent
# Appuyer sur les modificateurs
for modifier in modifiers:
print(f" Appui modificateur: {modifier}")
time.sleep(0.01)
# Appuyer sur la touche principale
if key:
print(f" Appui touche: {key}")
time.sleep(self.hold_duration_ms / 1000.0)
# Relâcher les touches (ordre inverse)
if key:
print(f" Relâchement touche: {key}")
time.sleep(0.01)
for modifier in reversed(modifiers):
print(f" Relâchement modificateur: {modifier}")
time.sleep(0.01)
return True
except Exception as e:
print(f"❌ Erreur exécution combinaison '{parsed_key['original']}': {e}")
return False
def _screenshot_to_base64(self, screenshot) -> str:
"""
Convertit un screenshot en base64.
Args:
screenshot: Image capturée
Returns:
String base64 de l'image
"""
try:
if not screenshot:
return ""
import base64
import io
from PIL import Image
if hasattr(screenshot, 'save'):
# PIL Image
buffer = io.BytesIO()
screenshot.save(buffer, format='PNG')
return base64.b64encode(buffer.getvalue()).decode('utf-8')
            elif isinstance(screenshot, (bytes, bytearray)):
                # Raw image bytes: encode them directly
                return base64.b64encode(screenshot).decode('utf-8')
            else:
                # Unknown format: encoding str(screenshot) would only capture its repr
                return ""
except Exception as e:
print(f"⚠️ Erreur conversion base64: {e}")
return ""
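`_execute_key_combination` prints the press/release order instead of sending real key events. Under the assumption that `pyautogui` is used in production (as the inline comment suggests), the ordering logic can be isolated into a pure helper and then replayed — a sketch, with `key_sequence` and `send_combination` as hypothetical names:

```python
import time

def key_sequence(parsed_key):
    """Flatten a parsed combination into an ordered (action, key) press/release list."""
    mods = parsed_key.get("modifiers", [])
    key = parsed_key.get("key")
    seq = [("down", m) for m in mods]           # press modifiers first
    if key:
        seq += [("down", key), ("up", key)]     # then press/release the main key
    seq += [("up", m) for m in reversed(mods)]  # release modifiers in reverse order
    return seq

def send_combination(parsed_key, hold_duration_ms=100):
    """Replay the sequence with real key events (assumes pyautogui is installed)."""
    import pyautogui  # lazy import: keeps the pure helper usable headless
    for action, k in key_sequence(parsed_key):
        if action == "down":
            pyautogui.keyDown(k)
            # hold the main key for the configured duration, modifiers briefly
            time.sleep(hold_duration_ms / 1000.0 if k == parsed_key.get("key") else 0.01)
        else:
            pyautogui.keyUp(k)
```

For example, `key_sequence({'modifiers': ['ctrl'], 'key': 'c'})` yields exactly the down/up order that Ctrl+C requires.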


@@ -0,0 +1,351 @@
"""
Action VWB - Capture d'Écran Evidence
Auteur : Dom, Alice, Kiro - 10 janvier 2026
Cette action capture l'écran pour créer une preuve visuelle (Evidence).
"""
from typing import Dict, Any, Optional
from datetime import datetime
import time
from ..base_action import BaseVWBAction, VWBActionResult, VWBActionStatus
from ...contracts.error import VWBErrorType, VWBErrorSeverity, create_vwb_error
from ...contracts.evidence import VWBEvidence, VWBEvidenceType
class VWBScreenshotEvidenceAction(BaseVWBAction):
"""
Action pour capturer l'écran et créer une Evidence.
Cette action capture l'état actuel de l'écran pour documenter
une étape du workflow ou créer une preuve visuelle.
"""
def __init__(self, action_id: str, parameters: Dict[str, Any], screen_capturer=None):
"""
Initialise l'action de capture d'écran.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres de configuration
screen_capturer: Instance du ScreenCapturer
"""
super().__init__(action_id, parameters, screen_capturer)
# Paramètres spécifiques
self.capture_region = parameters.get('capture_region') # None = écran complet
self.evidence_title = parameters.get('evidence_title', 'Capture d\'écran')
self.evidence_description = parameters.get('evidence_description', '')
self.include_timestamp = parameters.get('include_timestamp', True)
self.include_cursor = parameters.get('include_cursor', False)
self.delay_before_capture_ms = parameters.get('delay_before_capture_ms', 0)
self.quality = parameters.get('quality', 'high') # low, medium, high
self.format = parameters.get('format', 'png') # png, jpg
# Validation des paramètres
self._validate_screenshot_parameters()
def _validate_screenshot_parameters(self):
"""Valide les paramètres spécifiques à la capture d'écran."""
errors = []
if self.capture_region and not isinstance(self.capture_region, dict):
errors.append("capture_region doit être un dictionnaire avec x, y, width, height")
if self.capture_region:
required_keys = ['x', 'y', 'width', 'height']
for key in required_keys:
if key not in self.capture_region:
errors.append(f"capture_region manque la clé '{key}'")
elif not isinstance(self.capture_region[key], (int, float)):
errors.append(f"capture_region['{key}'] doit être un nombre")
if not isinstance(self.delay_before_capture_ms, (int, float)) or self.delay_before_capture_ms < 0:
errors.append("delay_before_capture_ms doit être un nombre positif")
if self.quality not in ['low', 'medium', 'high']:
errors.append("quality doit être 'low', 'medium' ou 'high'")
if self.format not in ['png', 'jpg', 'jpeg']:
errors.append("format doit être 'png', 'jpg' ou 'jpeg'")
if errors:
raise ValueError(f"Paramètres invalides pour ScreenshotEvidence: {'; '.join(errors)}")
def get_action_type(self) -> str:
"""Retourne le type d'action."""
return "screenshot_evidence"
def get_action_name(self) -> str:
"""Retourne le nom de l'action."""
return "Capture d'Écran Evidence"
def get_action_description(self) -> str:
"""Retourne la description de l'action."""
return "Capture l'écran pour créer une preuve visuelle (Evidence)"
def validate_parameters(self) -> list:
"""
Valide les paramètres de l'action.
Returns:
Liste des erreurs de validation
"""
errors = []
if self.capture_region:
region = self.capture_region
if region.get('width', 0) <= 0 or region.get('height', 0) <= 0:
errors.append("Région de capture invalide (largeur/hauteur <= 0)")
return errors
def _execute_action_logic(self, step_id: str, workflow_id: Optional[str] = None,
user_id: Optional[str] = None) -> VWBActionResult:
"""
Logique principale d'exécution de la capture d'écran.
Args:
step_id: Identifiant de l'étape
workflow_id: Identifiant du workflow
user_id: Identifiant de l'utilisateur
Returns:
Résultat de l'exécution
"""
start_time = datetime.now()
evidence_list = []
try:
print(f"📸 Capture d'écran Evidence: {self.evidence_title}")
# Vérifier la disponibilité du ScreenCapturer
if not self.screen_capturer:
raise Exception("ScreenCapturer non disponible")
# Délai avant capture si spécifié
if self.delay_before_capture_ms > 0:
print(f" Attente de {self.delay_before_capture_ms}ms avant capture")
time.sleep(self.delay_before_capture_ms / 1000.0)
# Effectuer la capture
capture_start = time.time()
if self.capture_region:
# Capture d'une région spécifique
region = self.capture_region
print(f" Capture région: {region['x']},{region['y']} {region['width']}x{region['height']}")
screenshot = self._capture_region(region)
else:
# Capture écran complet
print(" Capture écran complet")
screenshot = self.screen_capturer.capture()
capture_duration = (time.time() - capture_start) * 1000
if not screenshot:
raise Exception("Échec de la capture d'écran")
# Obtenir les informations sur la capture
capture_info = self._get_capture_info(screenshot)
# Créer la description de l'Evidence
description = self.evidence_description or self.evidence_title
if self.include_timestamp:
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
description += f" (capturé le {timestamp})"
# Evidence de succès
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.SCREENSHOT,
description=description,
screenshot_base64=self._screenshot_to_base64(screenshot),
success=True,
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
metadata={
'evidence_title': self.evidence_title,
'capture_region': self.capture_region,
'capture_duration_ms': capture_duration,
'quality': self.quality,
'format': self.format,
'include_cursor': self.include_cursor,
'capture_info': capture_info,
'screen_resolution': capture_info.get('resolution', {}),
'file_size_bytes': capture_info.get('size_bytes', 0)
}
)
evidence_list.append(evidence)
# Données de sortie
output_data = {
'capture_success': True,
'evidence_title': self.evidence_title,
'capture_duration_ms': capture_duration,
'capture_info': capture_info,
'evidence_id': evidence.evidence_id
}
print(f"✅ Capture d'écran réussie ({capture_info.get('resolution', {}).get('width', '?')}x{capture_info.get('resolution', {}).get('height', '?')})")
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.SUCCESS,
start_time=start_time,
end_time=datetime.now(),
output_data=output_data,
evidence_list=evidence_list,
workflow_id=workflow_id,
user_id=user_id
)
except Exception as e:
# Gestion des erreurs
error = create_vwb_error(
error_type=VWBErrorType.EXECUTION_ERROR,
message=f"Erreur lors de la capture d'écran: {str(e)}",
severity=VWBErrorSeverity.MEDIUM,
retryable=True,
details={'exception': str(e)}
)
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.FAILED,
start_time=start_time,
end_time=datetime.now(),
output_data={},
evidence_list=evidence_list,
error=error,
workflow_id=workflow_id,
user_id=user_id
)
def _capture_region(self, region: Dict[str, int]):
"""
Capture une région spécifique de l'écran.
Args:
region: Dictionnaire avec x, y, width, height
Returns:
Image capturée ou None
"""
try:
# En production, utiliser mss ou pyautogui pour capturer une région
# Pour l'instant, simulation avec capture complète
full_screenshot = self.screen_capturer.capture()
if not full_screenshot:
return None
# Simuler le crop de la région
# En production: full_screenshot.crop((x, y, x+width, y+height))
print(f" Région simulée: {region}")
return full_screenshot
except Exception as e:
print(f"❌ Erreur capture région: {e}")
return None
def _get_capture_info(self, screenshot) -> Dict[str, Any]:
"""
Obtient les informations sur la capture.
Args:
screenshot: Image capturée
Returns:
Informations sur la capture
"""
try:
info = {
'timestamp': datetime.now().isoformat(),
'quality': self.quality,
'format': self.format
}
# Tenter d'obtenir les dimensions
if hasattr(screenshot, 'size'):
# PIL Image
info['resolution'] = {
'width': screenshot.size[0],
'height': screenshot.size[1]
}
elif hasattr(screenshot, 'width') and hasattr(screenshot, 'height'):
info['resolution'] = {
'width': screenshot.width,
'height': screenshot.height
}
else:
# Valeurs par défaut
info['resolution'] = {
'width': 1920,
'height': 1080
}
# Estimer la taille du fichier
width = info['resolution']['width']
height = info['resolution']['height']
if self.format.lower() in ['jpg', 'jpeg']:
# JPEG approximation
                info['size_bytes'] = int(width * height * 0.1)  # ~0.1 byte/pixel (≈30:1 vs raw RGB)
else:
# PNG approximation
info['size_bytes'] = int(width * height * 3) # RGB sans compression
return info
except Exception as e:
print(f"⚠️ Erreur obtention info capture: {e}")
return {
'timestamp': datetime.now().isoformat(),
'quality': self.quality,
'format': self.format,
'resolution': {'width': 1920, 'height': 1080},
'size_bytes': 0
}
def _screenshot_to_base64(self, screenshot) -> str:
"""
Convertit un screenshot en base64.
Args:
screenshot: Image capturée
Returns:
String base64 de l'image
"""
try:
if not screenshot:
return ""
import base64
import io
from PIL import Image
if hasattr(screenshot, 'save'):
# PIL Image
buffer = io.BytesIO()
# Ajuster la qualité selon le paramètre
if self.format.lower() in ['jpg', 'jpeg']:
quality_map = {'low': 60, 'medium': 80, 'high': 95}
quality_value = quality_map.get(self.quality, 80)
screenshot.save(buffer, format='JPEG', quality=quality_value)
else:
# PNG - pas de paramètre qualité
screenshot.save(buffer, format='PNG')
return base64.b64encode(buffer.getvalue()).decode('utf-8')
            elif isinstance(screenshot, (bytes, bytearray)):
                # Raw image bytes: encode them directly
                return base64.b64encode(screenshot).decode('utf-8')
            else:
                # Unknown format: encoding str(screenshot) would only capture its repr
                return ""
except Exception as e:
print(f"⚠️ Erreur conversion base64: {e}")
return ""
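`_capture_region` currently falls back to a full-screen capture. A real region grab — a sketch assuming the `mss` package, whose `ScreenShot` objects expose `.size` and `.rgb`, with `region_to_monitor` and `capture_region` as hypothetical helper names — could be:

```python
def region_to_monitor(region):
    """Translate the action's region dict into the monitor dict mss expects."""
    return {
        "left": int(region["x"]),
        "top": int(region["y"]),
        "width": int(region["width"]),
        "height": int(region["height"]),
    }

def capture_region(region):
    """Grab one screen region and return it as a PIL Image."""
    import mss
    from PIL import Image
    with mss.mss() as sct:
        shot = sct.grab(region_to_monitor(region))
        # mss grabs raw BGRA pixels; .rgb exposes the bytes PIL can rebuild from
        return Image.frombytes("RGB", shot.size, shot.rgb)
```

Usage: `capture_region({'x': 0, 'y': 0, 'width': 800, 'height': 600})`. Cropping a full capture with PIL (`full_screenshot.crop((x, y, x + width, y + height))`, as the inline comment shows) is a simpler alternative when a full-screen grab is acceptable.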


@@ -0,0 +1,554 @@
"""
Action VWB Scroll To Anchor - Faire défiler jusqu'à un élément visuel
Auteur : Dom, Alice, Kiro - 10 janvier 2026
Cette action permet de faire défiler la page ou une zone jusqu'à ce qu'un élément
identifié par une ancre visuelle soit visible à l'écran.
"""
from typing import Dict, Any, Optional, List
from datetime import datetime
import time
import traceback
from ..base_action import BaseVWBAction, VWBActionResult, VWBActionStatus
from ...contracts.error import VWBActionError, VWBErrorType, VWBErrorSeverity, create_vwb_error
from ...contracts.evidence import VWBEvidence, VWBEvidenceType
from ...contracts.visual_anchor import VWBVisualAnchor
class VWBScrollToAnchorAction(BaseVWBAction):
"""
Action VWB pour faire défiler jusqu'à un élément identifié par une ancre visuelle.
Cette action recherche un élément et fait défiler la page ou une zone
jusqu'à ce que l'élément soit visible et accessible.
"""
def __init__(self, action_id: str, parameters: Dict[str, Any], screen_capturer=None):
"""
Initialise l'action ScrollToAnchor.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres de configuration
screen_capturer: Instance du ScreenCapturer (Option A thread-safe)
"""
super().__init__(
action_id=action_id,
name="Défiler vers un Élément",
description="Fait défiler la page jusqu'à ce qu'un élément soit visible",
parameters=parameters,
screen_capturer=screen_capturer
)
# Paramètres spécifiques à ScrollToAnchor
self.visual_anchor = parameters.get('visual_anchor')
self.scroll_direction = parameters.get('scroll_direction', 'vertical') # vertical, horizontal, both
self.scroll_speed = parameters.get('scroll_speed', 'medium') # slow, medium, fast
self.max_scroll_attempts = parameters.get('max_scroll_attempts', 10)
self.scroll_step_pixels = parameters.get('scroll_step_pixels', 200)
self.wait_after_scroll_ms = parameters.get('wait_after_scroll_ms', 500)
self.target_position = parameters.get('target_position', 'center') # top, center, bottom
self.confidence_threshold = parameters.get('confidence_threshold', 0.8)
# Validation des paramètres
validation_errors = self._validate_parameters()
if validation_errors:
print(f"⚠️ Erreurs de validation: {validation_errors}")
def _validate_parameters(self) -> List[str]:
"""
Valide les paramètres de l'action.
Returns:
Liste des erreurs de validation
"""
errors = []
# Vérifier l'ancre visuelle
if not self.visual_anchor:
errors.append("Paramètre 'visual_anchor' requis")
elif not isinstance(self.visual_anchor, (VWBVisualAnchor, dict)):
errors.append("'visual_anchor' doit être un VWBVisualAnchor ou un dictionnaire")
# Vérifier la direction de défilement
valid_directions = ['vertical', 'horizontal', 'both']
if self.scroll_direction not in valid_directions:
errors.append(f"'scroll_direction' doit être l'un de : {valid_directions}")
# Vérifier la vitesse de défilement
valid_speeds = ['slow', 'medium', 'fast']
if self.scroll_speed not in valid_speeds:
errors.append(f"'scroll_speed' doit être l'un de : {valid_speeds}")
# Vérifier la position cible
valid_positions = ['top', 'center', 'bottom']
if self.target_position not in valid_positions:
errors.append(f"'target_position' doit être l'un de : {valid_positions}")
# Vérifier les valeurs numériques
if self.max_scroll_attempts <= 0:
errors.append("'max_scroll_attempts' doit être positif")
if self.scroll_step_pixels <= 0:
errors.append("'scroll_step_pixels' doit être positif")
if self.wait_after_scroll_ms < 0:
errors.append("'wait_after_scroll_ms' doit être positif ou nul")
if not (0.0 <= self.confidence_threshold <= 1.0):
errors.append("'confidence_threshold' doit être entre 0.0 et 1.0")
return errors
def validate_parameters(self) -> List[str]:
"""
Valide les paramètres de l'action.
Returns:
Liste des erreurs de validation
"""
return self._validate_parameters()
def get_action_metadata(self) -> Dict[str, Any]:
"""
Retourne les métadonnées de l'action.
Returns:
Dictionnaire des métadonnées
"""
return {
"id": "scroll_to_anchor",
"name": "Défiler vers un Élément",
"description": "Fait défiler la page jusqu'à ce qu'un élément soit visible",
"category": "vision_ui",
"version": "1.0.0",
"author": "Dom, Alice, Kiro",
"created_date": "2026-01-10",
"parameters": {
"visual_anchor": {
"type": "VWBVisualAnchor",
"required": True,
"description": "Ancre visuelle pour localiser l'élément cible"
},
"scroll_direction": {
"type": "string",
"required": False,
"default": "vertical",
"options": ["vertical", "horizontal", "both"],
"description": "Direction du défilement"
},
"scroll_speed": {
"type": "string",
"required": False,
"default": "medium",
"options": ["slow", "medium", "fast"],
"description": "Vitesse du défilement"
},
"max_scroll_attempts": {
"type": "number",
"required": False,
"default": 10,
"min": 1,
"description": "Nombre maximum de tentatives de défilement"
},
"scroll_step_pixels": {
"type": "number",
"required": False,
"default": 200,
"min": 50,
"description": "Nombre de pixels par étape de défilement"
},
"wait_after_scroll_ms": {
"type": "number",
"required": False,
"default": 500,
"min": 0,
"description": "Délai d'attente après chaque défilement"
},
"target_position": {
"type": "string",
"required": False,
"default": "center",
"options": ["top", "center", "bottom"],
"description": "Position cible de l'élément dans la vue"
},
"confidence_threshold": {
"type": "number",
"required": False,
"default": 0.8,
"min": 0.0,
"max": 1.0,
"description": "Seuil de confiance pour la détection"
}
},
"outputs": {
"element_found": {
"type": "boolean",
"description": "Indique si l'élément a été trouvé et rendu visible"
},
"scroll_distance": {
"type": "object",
"description": "Distance totale de défilement (x, y)"
},
"final_coordinates": {
"type": "object",
"description": "Coordonnées finales de l'élément"
},
"scroll_attempts": {
"type": "number",
"description": "Nombre de tentatives de défilement effectuées"
}
},
"examples": [
{
"name": "Défiler vers un bouton",
"description": "Fait défiler verticalement pour trouver un bouton",
"parameters": {
"scroll_direction": "vertical",
"target_position": "center",
"scroll_speed": "medium"
}
},
{
"name": "Défilement horizontal",
"description": "Fait défiler horizontalement dans un carrousel",
"parameters": {
"scroll_direction": "horizontal",
"scroll_step_pixels": 300,
"max_scroll_attempts": 5
}
}
]
}
def execute_core(self, step_id: str) -> VWBActionResult:
"""
Exécute l'action de défilement vers l'ancre visuelle.
Args:
step_id: Identifiant de l'étape
Returns:
Résultat de l'exécution avec Evidence
"""
start_time = datetime.now()
evidence_list = []
try:
print(f"📜 Début ScrollToAnchor - Direction: {self.scroll_direction}")
# Validation des paramètres
validation_errors = self._validate_parameters()
if validation_errors:
error = create_vwb_error(
error_type=VWBErrorType.PARAMETER_INVALID,
message=f"Paramètres invalides: {', '.join(validation_errors)}",
severity=VWBErrorSeverity.HIGH,
retryable=False,
details={"validation_errors": validation_errors}
)
return self._create_error_result_simple(start_time, step_id, error)
# Convertir l'ancre visuelle si nécessaire
if isinstance(self.visual_anchor, dict):
visual_anchor = VWBVisualAnchor.from_dict(self.visual_anchor)
else:
visual_anchor = self.visual_anchor
# Vérifier la disponibilité du ScreenCapturer
if not self.screen_capturer:
error = create_vwb_error(
error_type=VWBErrorType.SCREEN_CAPTURE_FAILED,
message="ScreenCapturer non disponible",
severity=VWBErrorSeverity.HIGH,
retryable=False
)
return self._create_error_result_simple(start_time, step_id, error)
# Capture d'écran initiale
initial_screenshot = self._capture_screen_safe()
if not initial_screenshot:
error = create_vwb_error(
error_type=VWBErrorType.SCREEN_CAPTURE_FAILED,
message="Impossible de capturer l'écran initial",
severity=VWBErrorSeverity.HIGH,
retryable=True
)
return self._create_error_result_simple(start_time, step_id, error)
# Vérifier si l'élément est déjà visible
print(f"🔍 Vérification initiale de '{visual_anchor.label}'")
element_found, element_coords, confidence = self._find_visual_element(
initial_screenshot, visual_anchor, self.confidence_threshold
)
total_scroll_x = 0
total_scroll_y = 0
scroll_attempts = 0
# Si l'élément n'est pas trouvé, commencer le défilement
if not element_found:
print(f"🔄 Élément non visible, début du défilement...")
# Calculer les paramètres de défilement
scroll_delays = {'slow': 800, 'medium': 500, 'fast': 200}
scroll_delay = scroll_delays.get(self.scroll_speed, 500)
# Boucle de défilement
for attempt in range(self.max_scroll_attempts):
scroll_attempts += 1
print(f" Tentative {attempt + 1}/{self.max_scroll_attempts}")
# Effectuer le défilement
scroll_x, scroll_y = self._perform_scroll_step()
total_scroll_x += scroll_x
total_scroll_y += scroll_y
# Attendre que le défilement soit effectif
time.sleep(self.wait_after_scroll_ms / 1000.0)
# Nouvelle capture d'écran
current_screenshot = self._capture_screen_safe()
if not current_screenshot:
continue
# Rechercher l'élément dans la nouvelle vue
element_found, element_coords, confidence = self._find_visual_element(
current_screenshot, visual_anchor, self.confidence_threshold
)
if element_found:
print(f"✅ Élément trouvé après {attempt + 1} tentatives!")
break
# Attendre avant la prochaine tentative
time.sleep(scroll_delay / 1000.0)
# Vérifier le résultat final
if not element_found:
error = create_vwb_error(
error_type=VWBErrorType.ELEMENT_NOT_FOUND,
message=f"Élément '{visual_anchor.label}' non trouvé après {scroll_attempts} tentatives de défilement",
severity=VWBErrorSeverity.MEDIUM,
retryable=True,
details={
"anchor_label": visual_anchor.label,
"scroll_attempts": scroll_attempts,
"total_scroll_distance": {"x": total_scroll_x, "y": total_scroll_y},
"confidence_threshold": self.confidence_threshold
}
)
# Evidence d'échec
final_screenshot = self._capture_screen_safe()
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.SCREENSHOT,
description=f"Élément '{visual_anchor.label}' non trouvé après défilement",
screenshot_base64=self._encode_screenshot(final_screenshot) if final_screenshot else "",
confidence_score=confidence,
interaction_details={
"scroll_attempts": scroll_attempts,
"total_scroll_distance": {"x": total_scroll_x, "y": total_scroll_y}
},
success=False
)
evidence_list.append(evidence)
return self._create_error_result_simple(start_time, step_id, error, evidence_list)
            # Position adjustment: validation only allows top/center/bottom,
            # so guard on those values rather than the never-used 'current'
            if self.target_position in ('top', 'center', 'bottom'):
adjustment_success = self._adjust_element_position(element_coords)
if adjustment_success:
print(f"📍 Élément ajusté à la position '{self.target_position}'")
# Capture d'écran finale
final_screenshot = self._capture_screen_safe()
# Evidence de succès
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.UI_INTERACTION,
description=f"Défilement réussi vers '{visual_anchor.label}'",
screenshot_base64=self._encode_screenshot(final_screenshot) if final_screenshot else "",
element_coordinates=element_coords,
confidence_score=confidence,
interaction_details={
"scroll_direction": self.scroll_direction,
"scroll_attempts": scroll_attempts,
"total_scroll_distance": {"x": total_scroll_x, "y": total_scroll_y},
"target_position": self.target_position,
"scroll_speed": self.scroll_speed
},
success=True
)
evidence_list.append(evidence)
# Données de sortie
output_data = {
"element_found": True,
"scroll_distance": {"x": total_scroll_x, "y": total_scroll_y},
"final_coordinates": element_coords,
"scroll_attempts": scroll_attempts,
"confidence_score": confidence
}
end_time = datetime.now()
execution_time = (end_time - start_time).total_seconds() * 1000
print(f"✅ ScrollToAnchor réussie en {execution_time:.1f}ms")
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.SUCCESS,
start_time=start_time,
end_time=end_time,
execution_time_ms=execution_time,
output_data=output_data,
evidence_list=evidence_list
)
        except Exception as e:
            import traceback  # import local, utilisé uniquement pour le rapport d'erreur
            print(f"❌ Erreur ScrollToAnchor: {e}")
error = create_vwb_error(
error_type=VWBErrorType.SYSTEM_ERROR,
message=f"Erreur inattendue lors du défilement: {str(e)}",
severity=VWBErrorSeverity.HIGH,
retryable=True,
details={"exception": str(e), "traceback": traceback.format_exc()}
)
return self._create_error_result_simple(start_time, step_id, error, evidence_list)
def _create_error_result_simple(self, start_time: datetime, step_id: str, error: VWBActionError, evidence_list: List[VWBEvidence] = None) -> VWBActionResult:
"""Crée un résultat d'erreur simplifié."""
end_time = datetime.now()
execution_time = (end_time - start_time).total_seconds() * 1000
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.FAILED,
start_time=start_time,
end_time=end_time,
execution_time_ms=execution_time,
output_data={},
evidence_list=evidence_list or [],
error=error
)
def _capture_screen_safe(self):
"""Capture d'écran sécurisée avec gestion d'erreur."""
try:
if self.screen_capturer:
return self.screen_capturer.capture()
except Exception as e:
print(f"⚠️ Erreur capture d'écran: {e}")
return None
def _find_visual_element(self, screenshot, visual_anchor, threshold):
"""Simulation de recherche d'élément visuel."""
import random
confidence = random.uniform(0.6, 0.95)
if confidence >= threshold:
return True, {'x': 400, 'y': 300, 'width': 200, 'height': 50}, confidence
else:
return False, {}, confidence
def _encode_screenshot(self, screenshot_data) -> str:
"""Encode un screenshot en base64."""
try:
import base64
return base64.b64encode(str(screenshot_data).encode()).decode('utf-8')
        except Exception:
            return ""
def execute(self, step_id: str = None, workflow_id: str = None, user_id: str = None) -> VWBActionResult:
"""
Exécute l'action de défilement vers l'ancre visuelle (méthode héritée).
Args:
step_id: Identifiant de l'étape
workflow_id: Identifiant du workflow
user_id: Identifiant de l'utilisateur
Returns:
Résultat de l'exécution avec Evidence
"""
# Déléguer à execute_core qui est la méthode abstraite requise
return self.execute_core(step_id or f"step_{datetime.now().strftime('%Y%m%d_%H%M%S')}")
def _perform_scroll_step(self) -> tuple[int, int]:
"""
Effectue une étape de défilement.
Returns:
Tuple (pixels_x, pixels_y) défilés
"""
scroll_x = 0
scroll_y = 0
try:
if self.scroll_direction in ['vertical', 'both']:
# Défilement vertical vers le bas
scroll_y = self.scroll_step_pixels
print(f" ⬇️ Défilement vertical: {scroll_y}px")
# En réalité: pyautogui.scroll(-scroll_y)
if self.scroll_direction in ['horizontal', 'both']:
# Défilement horizontal vers la droite
scroll_x = self.scroll_step_pixels
print(f" ➡️ Défilement horizontal: {scroll_x}px")
# En réalité: pyautogui.hscroll(scroll_x)
# Simuler le délai de défilement
time.sleep(0.1)
except Exception as e:
print(f"⚠️ Erreur lors du défilement: {e}")
return scroll_x, scroll_y
def _adjust_element_position(self, element_coords: Dict[str, int]) -> bool:
"""
Ajuste la position de l'élément selon la position cible.
Args:
element_coords: Coordonnées actuelles de l'élément
Returns:
True si l'ajustement a réussi
"""
try:
# Obtenir les dimensions de l'écran
# En réalité, on utiliserait pyautogui.size()
screen_width, screen_height = 1920, 1080 # Valeurs par défaut
element_center_y = element_coords['y'] + element_coords['height'] // 2
# Calculer la position cible
if self.target_position == 'top':
target_y = screen_height * 0.2 # 20% du haut
elif self.target_position == 'center':
target_y = screen_height * 0.5 # Centre
elif self.target_position == 'bottom':
target_y = screen_height * 0.8 # 80% du haut
else:
return True # Pas d'ajustement nécessaire
# Calculer le défilement nécessaire
adjustment_pixels = int(element_center_y - target_y)
if abs(adjustment_pixels) > 50: # Seuil minimum d'ajustement
print(f"🎯 Ajustement de position: {adjustment_pixels}px")
# En réalité: pyautogui.scroll(-adjustment_pixels // 10)
time.sleep(0.2)
return True
except Exception as e:
print(f"⚠️ Erreur ajustement position: {e}")
return False
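The position-adjustment math in `_adjust_element_position` can be sketched as a standalone helper. This is an illustrative re-implementation, not part of the action: the 20%/50%/80% targets, the 50 px threshold, and the 1080 px default screen height mirror the values hard-coded above.

```python
def compute_adjustment(element_y: int, element_height: int,
                       target_position: str,
                       screen_height: int = 1080) -> int:
    """Return the vertical scroll (in pixels) needed to bring the
    element's center to the requested screen position; 0 if none."""
    ratios = {'top': 0.2, 'center': 0.5, 'bottom': 0.8}
    if target_position not in ratios:
        return 0  # 'current' or unknown position: no adjustment
    element_center_y = element_y + element_height // 2
    target_y = screen_height * ratios[target_position]
    adjustment = int(element_center_y - target_y)
    # Below the 50 px threshold the action skips the extra scroll
    return adjustment if abs(adjustment) > 50 else 0
```

A negative result means the page must scroll up (the element sits above its target line), a positive one means scroll down.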


@@ -0,0 +1,428 @@
"""
Action VWB - Saisie de Secret
Auteur : Dom, Alice, Kiro - 10 janvier 2026
Cette action saisit un mot de passe ou secret dans un champ identifié par une ancre visuelle.
Inclut des mesures de sécurité pour éviter la journalisation des secrets.
"""
from typing import Dict, Any, Optional
from datetime import datetime
import time
import re
from ..base_action import BaseVWBAction, VWBActionResult, VWBActionStatus
from ...contracts.error import VWBErrorType, VWBErrorSeverity, create_vwb_error
from ...contracts.evidence import VWBEvidence, VWBEvidenceType
from ...contracts.visual_anchor import VWBVisualAnchor
class VWBTypeSecretAction(BaseVWBAction):
"""
Action pour saisir un secret (mot de passe) dans un champ UI.
Cette action localise un champ de saisie et y tape un secret de manière sécurisée,
en évitant la journalisation du contenu sensible.
"""
def __init__(self, action_id: str, parameters: Dict[str, Any], screen_capturer=None):
"""
Initialise l'action de saisie de secret.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres de configuration
screen_capturer: Instance du ScreenCapturer
"""
super().__init__(
action_id=action_id,
name="Saisie de Secret",
description="Saisit un mot de passe ou secret dans un champ identifié par une ancre visuelle",
parameters=parameters,
screen_capturer=screen_capturer
)
# Paramètres spécifiques
self.visual_anchor = parameters.get('visual_anchor')
self.secret_value = parameters.get('secret_value', '')
self.secret_ref = parameters.get('secret_ref') # Référence sécurisée
self.clear_field_first = parameters.get('clear_field_first', True)
self.click_before_typing = parameters.get('click_before_typing', True)
self.press_enter_after = parameters.get('press_enter_after', False)
self.typing_speed_ms = parameters.get('typing_speed_ms', 30) # Plus rapide pour secrets
self.confidence_threshold = parameters.get('confidence_threshold', 0.8)
self.mask_in_evidence = parameters.get('mask_in_evidence', True)
# Validation des paramètres
validation_errors = self.validate_parameters()
if validation_errors:
print(f"⚠️ Erreurs de validation: {validation_errors}")
# Validation des paramètres spécifiques
secret_errors = self._validate_secret_parameters()
if secret_errors:
print(f"⚠️ Erreurs de validation secret: {secret_errors}")
def _validate_secret_parameters(self):
"""Valide les paramètres spécifiques à la saisie de secret."""
errors = []
if not isinstance(self.visual_anchor, VWBVisualAnchor):
errors.append("visual_anchor doit être une instance de VWBVisualAnchor")
if not self.secret_value and not self.secret_ref:
errors.append("secret_value ou secret_ref doit être fourni")
if self.secret_value and len(self.secret_value.strip()) == 0:
errors.append("secret_value ne peut pas être vide")
if not isinstance(self.typing_speed_ms, (int, float)) or self.typing_speed_ms < 0:
errors.append("typing_speed_ms doit être un nombre positif")
if not isinstance(self.confidence_threshold, (int, float)) or not 0 <= self.confidence_threshold <= 1:
errors.append("confidence_threshold doit être entre 0 et 1")
        return errors
def get_action_type(self) -> str:
"""Retourne le type d'action."""
return "type_secret"
def get_action_name(self) -> str:
"""Retourne le nom de l'action."""
return "Saisie de Secret"
def get_action_description(self) -> str:
"""Retourne la description de l'action."""
return "Saisit un mot de passe ou secret dans un champ identifié par une ancre visuelle"
def validate_parameters(self) -> list:
"""
Valide les paramètres de l'action.
Returns:
Liste des erreurs de validation
"""
errors = []
if not self.visual_anchor:
errors.append("Paramètre 'visual_anchor' requis")
if not self.secret_value and not self.secret_ref:
errors.append("Paramètre 'secret_value' ou 'secret_ref' requis")
if self.confidence_threshold < 0.5:
errors.append("Seuil de confiance trop faible (< 0.5)")
return errors
def get_action_metadata(self) -> Dict[str, Any]:
"""
Retourne les métadonnées de l'action.
Returns:
Dictionnaire des métadonnées
"""
return {
"id": "type_secret",
"name": "Saisie de Secret",
"description": "Saisit un mot de passe ou secret dans un champ identifié par une ancre visuelle",
"category": "vision_ui",
"version": "1.0.0",
"author": "Dom, Alice, Kiro",
"created_date": "2026-01-10",
"parameters": {
"visual_anchor": {
"type": "VWBVisualAnchor",
"required": True,
"description": "Ancre visuelle pour localiser le champ de saisie"
},
"secret_value": {
"type": "string",
"required": False,
"description": "Valeur du secret à saisir (sensible)"
},
"secret_ref": {
"type": "string",
"required": False,
"description": "Référence sécurisée vers le secret"
},
"clear_field_first": {
"type": "boolean",
"required": False,
"default": True,
"description": "Vider le champ avant la saisie"
},
"press_enter_after": {
"type": "boolean",
"required": False,
"default": False,
"description": "Appuyer sur Entrée après la saisie"
},
"confidence_threshold": {
"type": "number",
"required": False,
"default": 0.8,
"min": 0.0,
"max": 1.0,
"description": "Seuil de confiance pour la détection"
}
},
"outputs": {
"typing_success": {
"type": "boolean",
"description": "Indique si la saisie a réussi"
},
"secret_length": {
"type": "number",
"description": "Longueur du secret saisi"
},
"masked_secret": {
"type": "string",
"description": "Version masquée du secret pour les logs"
}
}
}
def _get_secret_to_type(self) -> str:
"""
Récupère le secret à saisir de manière sécurisée.
Returns:
Le secret à saisir
"""
if self.secret_ref:
# Récupération depuis une référence sécurisée
# TODO: Implémenter la récupération depuis un coffre-fort
print("🔐 Récupération du secret depuis la référence sécurisée")
return self.secret_ref # Placeholder
else:
return self.secret_value
def _mask_secret(self, secret: str) -> str:
"""
Masque un secret pour la journalisation.
Args:
secret: Secret à masquer
Returns:
Secret masqué
"""
if not secret:
return ""
if len(secret) <= 2:
return "*" * len(secret)
elif len(secret) <= 6:
return secret[0] + "*" * (len(secret) - 2) + secret[-1]
else:
return secret[:2] + "*" * (len(secret) - 4) + secret[-2:]
def execute_core(self, step_id: str) -> VWBActionResult:
"""
Logique principale d'exécution de la saisie de secret.
Args:
step_id: Identifiant de l'étape
Returns:
Résultat de l'exécution
"""
start_time = datetime.now()
evidence_list = []
try:
# Récupérer le secret de manière sécurisée
secret_to_type = self._get_secret_to_type()
masked_secret = self._mask_secret(secret_to_type)
print(f"🔐 Saisie de secret dans '{self.visual_anchor.label}' (longueur: {len(secret_to_type)})")
# Capture d'écran initiale
if not self.screen_capturer:
raise Exception("ScreenCapturer non disponible")
screenshot = self.screen_capturer.capture()
if not screenshot:
raise Exception("Impossible de capturer l'écran")
# Recherche de l'ancre visuelle
match_found = False
best_match = None
# Simulation de recherche d'ancre (à remplacer par vraie implémentation)
import random
confidence = random.uniform(0.7, 0.95)
if confidence >= self.confidence_threshold:
match_found = True
best_match = {
'confidence': confidence,
'bbox': {'x': 300, 'y': 250, 'width': 200, 'height': 30},
'center': {'x': 400, 'y': 265}
}
if not match_found:
# Ancre non trouvée
error = create_vwb_error(
error_type=VWBErrorType.ANCHOR_NOT_FOUND,
message=f"Champ secret '{self.visual_anchor.label}' non trouvé",
severity=VWBErrorSeverity.HIGH,
retryable=True,
details={
'anchor_label': self.visual_anchor.label,
'confidence_threshold': self.confidence_threshold
}
)
# Evidence d'échec (sans révéler le secret)
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.SCREENSHOT,
                description="Échec saisie secret - champ non trouvé",
screenshot_base64=self._screenshot_to_base64(screenshot),
success=False,
confidence_score=0.0,
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000
)
evidence_list.append(evidence)
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.FAILED,
start_time=start_time,
end_time=datetime.now(),
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
output_data={},
evidence_list=evidence_list,
error=error
)
# Cliquer sur le champ si demandé
if self.click_before_typing:
print(f" Clic sur le champ à ({best_match['center']['x']}, {best_match['center']['y']})")
time.sleep(0.1)
# Vider le champ si demandé
if self.clear_field_first:
print(" Vidage du champ existant")
time.sleep(0.1)
# Saisir le secret
print(f" Saisie du secret ({masked_secret}) à {self.typing_speed_ms}ms/char")
typing_start = time.time()
# Simulation de la saisie caractère par caractère
            for _ in secret_to_type:
time.sleep(self.typing_speed_ms / 1000.0)
# En production, utiliser pyautogui ou équivalent
typing_duration = (time.time() - typing_start) * 1000
# Appuyer sur Entrée si demandé
if self.press_enter_after:
print(" Appui sur Entrée")
time.sleep(0.1)
# Evidence de succès (masquée pour sécurité)
evidence_description = f"Saisie de secret réussie dans '{self.visual_anchor.label}'"
if self.mask_in_evidence:
evidence_description += f" (masqué: {masked_secret})"
evidence = VWBEvidence(
evidence_type=VWBEvidenceType.UI_INTERACTION,
description=evidence_description,
screenshot_base64=self._screenshot_to_base64(screenshot),
success=True,
confidence_score=best_match['confidence'],
bbox=best_match['bbox'],
click_point=best_match['center'],
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
metadata={
'secret_length': len(secret_to_type),
'typing_duration_ms': typing_duration,
'clear_field_first': self.clear_field_first,
'click_before_typing': self.click_before_typing,
'press_enter_after': self.press_enter_after,
'masked_secret': masked_secret if self.mask_in_evidence else None
}
)
evidence_list.append(evidence)
# Données de sortie (sécurisées)
output_data = {
'typing_success': True,
'secret_length': len(secret_to_type),
'confidence_score': best_match['confidence'],
'typing_duration_ms': typing_duration,
'field_coordinates': best_match['center'],
'masked_secret': masked_secret
}
print(f"✅ Saisie de secret réussie (longueur: {len(secret_to_type)})")
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.SUCCESS,
start_time=start_time,
end_time=datetime.now(),
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
output_data=output_data,
evidence_list=evidence_list
)
except Exception as e:
# Gestion des erreurs (sans révéler le secret)
error = create_vwb_error(
error_type=VWBErrorType.EXECUTION_ERROR,
message=f"Erreur lors de la saisie de secret: {str(e)}",
severity=VWBErrorSeverity.HIGH,
retryable=True,
details={'exception': str(e)}
)
return VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.FAILED,
start_time=start_time,
end_time=datetime.now(),
execution_time_ms=(datetime.now() - start_time).total_seconds() * 1000,
output_data={},
evidence_list=evidence_list,
error=error
)
def _screenshot_to_base64(self, screenshot) -> str:
"""
Convertit un screenshot en base64.
Args:
screenshot: Image capturée
Returns:
String base64 de l'image
"""
try:
import base64
import io
from PIL import Image
if hasattr(screenshot, 'save'):
# PIL Image
buffer = io.BytesIO()
screenshot.save(buffer, format='PNG')
return base64.b64encode(buffer.getvalue()).decode('utf-8')
else:
# Données brutes ou autre format
return base64.b64encode(str(screenshot).encode()).decode('utf-8')
except Exception as e:
print(f"⚠️ Erreur conversion base64: {e}")
return ""
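The masking rules of `_mask_secret` are easy to exercise in isolation. Below is a standalone re-implementation for illustration only; it duplicates the three length tiers above (≤2, ≤6, longer) so the behaviour can be checked without instantiating the action.

```python
def mask_secret(secret: str) -> str:
    """Mask a secret for logging: very short secrets are fully
    starred, longer ones keep one or two characters at each end."""
    if not secret:
        return ""
    if len(secret) <= 2:
        return "*" * len(secret)
    if len(secret) <= 6:
        return secret[0] + "*" * (len(secret) - 2) + secret[-1]
    return secret[:2] + "*" * (len(secret) - 4) + secret[-2:]
```

Even masked output leaks the secret's length; that is why the action also exposes `mask_in_evidence` to suppress the masked form entirely.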


@@ -0,0 +1,470 @@
"""
Action VWB - Attente d'Ancre Visuelle
Auteur : Dom, Alice, Kiro - 09 janvier 2026
Cette action permet d'attendre qu'une ancre visuelle apparaisse ou disparaisse
de l'écran dans le Visual Workflow Builder.
Classes :
- VWBWaitForAnchorAction : Action d'attente d'ancre visuelle
"""
from typing import Dict, Any, List, Optional
from datetime import datetime
import time
# Import des modules de base
from ..base_action import BaseVWBAction, VWBActionResult, VWBActionStatus
from ...contracts.error import VWBErrorType, VWBErrorSeverity, create_vwb_error
from ...contracts.evidence import VWBEvidenceType, create_interaction_evidence
from ...contracts.visual_anchor import VWBVisualAnchor
class VWBWaitForAnchorAction(BaseVWBAction):
"""
Action d'attente d'ancre visuelle VWB.
Cette action surveille l'écran jusqu'à ce qu'une ancre visuelle
apparaisse ou disparaisse, avec un délai d'attente configurable.
"""
def __init__(
self,
action_id: str,
parameters: Dict[str, Any],
screen_capturer=None
):
"""
Initialise l'action d'attente d'ancre.
Args:
action_id: Identifiant unique de l'action
parameters: Paramètres incluant l'ancre et le mode d'attente
screen_capturer: Instance du ScreenCapturer (Option A thread-safe)
"""
super().__init__(
action_id=action_id,
name="Attente d'Ancre Visuelle",
description="Attend qu'une ancre visuelle apparaisse ou disparaisse",
parameters=parameters,
screen_capturer=screen_capturer
)
# Paramètres spécifiques à l'attente
self.visual_anchor: Optional[VWBVisualAnchor] = parameters.get('visual_anchor')
self.wait_mode = parameters.get('wait_mode', 'appear') # appear, disappear
self.max_wait_time_ms = parameters.get('max_wait_time_ms', 30000) # 30 secondes
self.check_interval_ms = parameters.get('check_interval_ms', 500) # Vérification toutes les 500ms
self.early_exit_on_found = parameters.get('early_exit_on_found', True)
# Configuration de matching
self.confidence_threshold = parameters.get('confidence_threshold', 0.8)
self.stable_detection_count = parameters.get('stable_detection_count', 2) # Détections stables requises
def validate_parameters(self) -> List[str]:
"""Valide les paramètres de l'action d'attente."""
errors = []
# Vérifier l'ancre visuelle
if not self.visual_anchor:
errors.append("Ancre visuelle requise")
elif not isinstance(self.visual_anchor, VWBVisualAnchor):
errors.append("Ancre visuelle invalide")
elif not self.visual_anchor.is_active:
errors.append("Ancre visuelle inactive")
# Vérifier le mode d'attente
if self.wait_mode not in ['appear', 'disappear']:
errors.append(f"Mode d'attente invalide: {self.wait_mode}")
# Vérifier les délais
if not isinstance(self.max_wait_time_ms, (int, float)) or self.max_wait_time_ms <= 0:
errors.append("max_wait_time_ms doit être un nombre positif")
if not isinstance(self.check_interval_ms, (int, float)) or self.check_interval_ms <= 0:
errors.append("check_interval_ms doit être un nombre positif")
# Vérifier la cohérence des délais
if self.check_interval_ms >= self.max_wait_time_ms:
errors.append("check_interval_ms doit être inférieur à max_wait_time_ms")
# Vérifier le seuil de confiance
if not (0.0 <= self.confidence_threshold <= 1.0):
errors.append("Seuil de confiance doit être entre 0.0 et 1.0")
# Vérifier le nombre de détections stables
if not isinstance(self.stable_detection_count, int) or self.stable_detection_count < 1:
errors.append("stable_detection_count doit être un entier positif")
# Vérifier le ScreenCapturer
if not self.screen_capturer:
errors.append("ScreenCapturer requis pour la capture d'écran")
return errors
def execute_core(self, step_id: str) -> VWBActionResult:
"""
Exécute l'action d'attente d'ancre visuelle.
Args:
step_id: Identifiant de l'étape
Returns:
Résultat d'exécution
"""
start_time = datetime.now()
try:
print(f"⏳ Attente de l'ancre '{self.visual_anchor.name}' (mode: {self.wait_mode})")
print(f" Délai max: {self.max_wait_time_ms}ms, Intervalle: {self.check_interval_ms}ms")
# Initialiser les variables de surveillance
wait_start = time.time()
max_wait_seconds = self.max_wait_time_ms / 1000.0
check_interval_seconds = self.check_interval_ms / 1000.0
stable_detections = 0
last_detection_state = None
check_count = 0
detection_history = []
# Boucle de surveillance
while True:
current_time = time.time()
elapsed_time = current_time - wait_start
# Vérifier le délai d'attente
if elapsed_time >= max_wait_seconds:
return self._create_timeout_result(
step_id=step_id,
start_time=start_time,
elapsed_time_ms=elapsed_time * 1000,
check_count=check_count,
detection_history=detection_history
)
# Capturer l'écran actuel
screenshot_data = self._capture_current_screen()
if screenshot_data is None:
print("⚠️ Échec de capture d'écran, retry dans 1s")
time.sleep(1.0)
continue
# Rechercher l'ancre
check_count += 1
match_result = self._find_visual_anchor(screenshot_data)
detection_state = match_result['found']
# Enregistrer dans l'historique
detection_history.append({
'timestamp': datetime.now().isoformat(),
'found': detection_state,
'confidence': match_result.get('confidence', 0.0),
'elapsed_ms': elapsed_time * 1000
})
                print(f" Check {check_count}: {'✅' if detection_state else '❌'} "
                      f"(confiance: {match_result.get('confidence', 0.0):.2f}, "
                      f"temps: {elapsed_time:.1f}s)")
# Vérifier la stabilité de la détection
if detection_state == last_detection_state:
stable_detections += 1
else:
stable_detections = 1
last_detection_state = detection_state
# Vérifier si la condition d'attente est remplie
condition_met = self._check_wait_condition(
detection_state,
stable_detections
)
if condition_met:
# Condition remplie - succès
end_time = datetime.now()
execution_time = (end_time - start_time).total_seconds() * 1000
# Mettre à jour les statistiques de l'ancre
self.visual_anchor.update_usage_stats(execution_time, True)
return self._create_success_result(
step_id=step_id,
start_time=start_time,
end_time=end_time,
execution_time=execution_time,
final_state=detection_state,
check_count=check_count,
detection_history=detection_history,
match_result=match_result
)
# Attendre avant la prochaine vérification
time.sleep(check_interval_seconds)
except Exception as e:
return self._create_error_result(
step_id=step_id,
start_time=start_time,
error_type=VWBErrorType.SYSTEM_ERROR,
message=f"Erreur lors de l'attente: {str(e)}",
technical_details={'exception': str(e)}
)
def _capture_current_screen(self) -> Optional[Dict[str, Any]]:
"""Capture l'écran actuel avec métadonnées."""
try:
# Utiliser la méthode ultra stable (Option A)
img_array = self.screen_capturer.capture()
if img_array is None:
return None
from PIL import Image
import base64
import io
# Convertir en PIL Image
pil_image = Image.fromarray(img_array)
# Convertir en base64 pour stockage (optionnel pour l'attente)
buffer = io.BytesIO()
pil_image.save(buffer, format='PNG', optimize=True)
screenshot_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')
return {
'image_array': img_array,
'pil_image': pil_image,
'screenshot_base64': screenshot_base64,
'width': pil_image.width,
'height': pil_image.height,
'timestamp': datetime.now().isoformat()
}
except Exception as e:
print(f"❌ Erreur capture d'écran: {e}")
return None
def _find_visual_anchor(self, screenshot_data: Dict[str, Any]) -> Dict[str, Any]:
"""
Recherche l'ancre visuelle dans le screenshot.
Args:
screenshot_data: Données du screenshot
Returns:
Résultat de la recherche
"""
search_start = time.time()
try:
pil_image = screenshot_data['pil_image']
# Simuler une recherche rapide pour l'attente
search_delay = 0.1 # Recherche rapide pour l'attente
time.sleep(search_delay)
search_time_ms = (time.time() - search_start) * 1000
# Simuler la détection avec variation dans le temps
# Pour rendre la simulation plus réaliste
current_time = time.time()
detection_probability = 0.3 + 0.4 * (current_time % 10) / 10 # Varie entre 0.3 et 0.7
# Ajuster selon le mode d'attente
if self.wait_mode == 'appear':
# Probabilité croissante d'apparition
detection_probability = min(0.9, detection_probability + 0.2)
else: # disappear
# Probabilité décroissante de présence
detection_probability = max(0.1, detection_probability - 0.3)
found = detection_probability >= 0.5
confidence = detection_probability if found else 0.2
result = {
'found': found,
'confidence': confidence,
'search_time_ms': search_time_ms,
'method': 'simulated_wait_detection'
}
if found:
# Ajouter les coordonnées si trouvé
center_x = pil_image.width // 2
center_y = pil_image.height // 2
if self.visual_anchor.has_bounding_box():
search_area = self.visual_anchor.get_search_area(
pil_image.width,
pil_image.height
)
if search_area:
center_x = search_area['x'] + search_area['width'] // 2
center_y = search_area['y'] + search_area['height'] // 2
result.update({
'match_box': {
'x': center_x - 50,
'y': center_y - 25,
'width': 100,
'height': 50
},
'center_coordinates': {'x': center_x, 'y': center_y}
})
return result
except Exception as e:
search_time_ms = (time.time() - search_start) * 1000
return {
'found': False,
'error': str(e),
'search_time_ms': search_time_ms
}
def _check_wait_condition(self, detection_state: bool, stable_detections: int) -> bool:
"""
Vérifie si la condition d'attente est remplie.
Args:
detection_state: État actuel de détection
stable_detections: Nombre de détections stables consécutives
Returns:
True si la condition est remplie
"""
# Vérifier la stabilité
if stable_detections < self.stable_detection_count:
return False
# Vérifier selon le mode
if self.wait_mode == 'appear':
return detection_state # Attendre que l'ancre apparaisse
else: # disappear
return not detection_state # Attendre que l'ancre disparaisse
def _create_success_result(
self,
step_id: str,
start_time: datetime,
end_time: datetime,
execution_time: float,
final_state: bool,
check_count: int,
detection_history: List[Dict[str, Any]],
match_result: Dict[str, Any]
) -> VWBActionResult:
"""Crée un résultat de succès."""
# Créer l'evidence d'attente
wait_evidence = create_interaction_evidence(
action_id=self.action_id,
step_id=step_id,
evidence_type=VWBEvidenceType.WAIT_EVIDENCE,
title=f"Attente de {self.visual_anchor.name}",
interaction_data={
'anchor_id': self.visual_anchor.anchor_id,
'anchor_name': self.visual_anchor.name,
'wait_mode': self.wait_mode,
'final_state': final_state,
'wait_time_ms': execution_time,
'check_count': check_count,
'stable_detections_required': self.stable_detection_count,
'confidence_threshold': self.confidence_threshold,
'final_confidence': match_result.get('confidence', 0.0),
'detection_history': detection_history[-5:], # Garder les 5 dernières
'match_box': match_result.get('match_box')
},
confidence_score=match_result.get('confidence', 0.0)
)
result = VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.SUCCESS,
start_time=start_time,
end_time=end_time,
execution_time_ms=execution_time,
output_data={
'wait_mode': self.wait_mode,
'final_state': final_state,
'wait_time_ms': execution_time,
'check_count': check_count,
'anchor_confidence': match_result.get('confidence', 0.0),
'condition_met': True
},
evidence_list=[wait_evidence]
)
state_text = "apparue" if final_state else "disparue"
print(f"✅ Ancre {state_text} après {execution_time:.1f}ms ({check_count} vérifications)")
return result
def _create_timeout_result(
self,
step_id: str,
start_time: datetime,
elapsed_time_ms: float,
check_count: int,
detection_history: List[Dict[str, Any]]
) -> VWBActionResult:
"""Crée un résultat de timeout."""
# Mettre à jour les statistiques de l'ancre (échec)
self.visual_anchor.update_usage_stats(elapsed_time_ms, False)
error = create_vwb_error(
error_type=VWBErrorType.WAIT_TIMEOUT,
message=f"Délai d'attente dépassé pour l'ancre '{self.visual_anchor.name}'",
action_id=self.action_id,
step_id=step_id,
severity=VWBErrorSeverity.ERROR,
technical_details={
'wait_mode': self.wait_mode,
'max_wait_time_ms': self.max_wait_time_ms,
'elapsed_time_ms': elapsed_time_ms,
'check_count': check_count,
'detection_history': detection_history[-10:] # Garder les 10 dernières
},
execution_time_ms=elapsed_time_ms
)
end_time = datetime.now()
result = VWBActionResult(
action_id=self.action_id,
step_id=step_id,
status=VWBActionStatus.TIMEOUT,
start_time=start_time,
end_time=end_time,
execution_time_ms=elapsed_time_ms,
output_data={
'wait_mode': self.wait_mode,
'timeout_reached': True,
'elapsed_time_ms': elapsed_time_ms,
'check_count': check_count
},
evidence_list=[],
error=error
)
print(f"⏰ Timeout après {elapsed_time_ms:.1f}ms ({check_count} vérifications)")
return result
def get_action_info(self) -> Dict[str, Any]:
"""Retourne les informations de l'action pour l'interface."""
return {
'action_id': self.action_id,
'name': self.name,
'description': self.description,
'type': 'wait_for_anchor',
'parameters': {
'anchor_name': self.visual_anchor.name if self.visual_anchor else 'Non définie',
'wait_mode': self.wait_mode,
'max_wait_time_ms': self.max_wait_time_ms,
'check_interval_ms': self.check_interval_ms,
'confidence_threshold': self.confidence_threshold,
'stable_detection_count': self.stable_detection_count
},
'status': self.current_status.value,
'anchor_reliable': self.visual_anchor.is_reliable() if self.visual_anchor else False
}
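The surveillance loop in `execute_core` (poll at a fixed interval, require N consecutive identical detections, give up at a deadline) reduces to a small generic helper. A minimal sketch, with the detection injected as a callable rather than tied to screen capture; names and defaults here are illustrative, only the stable-count logic mirrors the action above.

```python
import time

def wait_for(detect, want: bool, max_wait_s: float,
             interval_s: float, stable_count: int = 2) -> bool:
    """Poll detect() until it returns `want` for `stable_count`
    consecutive checks, or until max_wait_s elapses (-> False)."""
    deadline = time.monotonic() + max_wait_s
    stable, last = 0, None
    while time.monotonic() < deadline:
        state = detect()
        # Reset the stability counter whenever the state flips
        stable = stable + 1 if state == last else 1
        last = state
        if state == want and stable >= stable_count:
            return True
        time.sleep(interval_s)
    return False
```

The `disappear` mode corresponds to calling this with `want=False`; requiring two stable detections filters out single-frame flicker during UI transitions.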


@@ -0,0 +1,43 @@
"""
API package for Visual Workflow Builder
This package contains all REST API endpoints and WebSocket handlers.
"""
from .workflows import workflows_bp
from .errors import (
APIError,
ValidationError,
NotFoundError,
BadRequestError,
ConflictError,
InternalServerError,
error_response,
ErrorCode
)
from .validation import (
validate_workflow_data,
validate_update_data,
validate_node_data,
validate_edge_data,
validate_variable_data,
validate_settings_data
)
__all__ = [
'workflows_bp',
'APIError',
'ValidationError',
'NotFoundError',
'BadRequestError',
'ConflictError',
'InternalServerError',
'error_response',
'ErrorCode',
'validate_workflow_data',
'validate_update_data',
'validate_node_data',
'validate_edge_data',
'validate_variable_data',
'validate_settings_data'
]


@@ -0,0 +1,426 @@
"""
Analytics API endpoints for Visual Workflow Builder.
Provides analytics data and metrics for workflows executed through the visual builder.
Exigence: 18.3
"""
import sys
from pathlib import Path
from flask import Blueprint, request, jsonify
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Any
# Ajouter le chemin racine pour importer les modules core
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent))
try:
from core.analytics.analytics_system import get_analytics_system
from core.analytics.integration.execution_integration import get_analytics_integration
ANALYTICS_AVAILABLE = True
except ImportError:
ANALYTICS_AVAILABLE = False
from services.execution_integration import get_executor
from services.serialization import WorkflowDatabase
# Blueprint pour les endpoints Analytics
analytics_bp = Blueprint('analytics', __name__)
@analytics_bp.route('/workflow/<workflow_id>/metrics', methods=['GET'])
def get_workflow_metrics(workflow_id: str):
"""
Récupère les métriques d'un workflow.
Exigence: 18.3
Query Parameters:
- hours: Fenêtre de temps en heures (défaut: 24)
- metric_type: Type de métrique (execution, step, performance)
"""
try:
if not ANALYTICS_AVAILABLE:
return jsonify({
'success': False,
'error': 'Analytics system not available'
}), 503
hours = int(request.args.get('hours', 24))
metric_type = request.args.get('metric_type', 'execution')
# Récupérer les métriques via l'exécuteur
executor = get_executor()
analytics_data = executor.get_workflow_analytics(workflow_id, hours)
if analytics_data is None:
return jsonify({
'success': False,
'error': 'No analytics data available'
}), 404
return jsonify({
'success': True,
'workflow_id': workflow_id,
'time_window_hours': hours,
'metric_type': metric_type,
'data': analytics_data
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
@analytics_bp.route('/workflow/<workflow_id>/performance', methods=['GET'])
def get_workflow_performance(workflow_id: str):
"""
Récupère les métriques de performance d'un workflow.
Exigence: 18.3
"""
try:
if not ANALYTICS_AVAILABLE:
return jsonify({
'success': False,
'error': 'Analytics system not available'
}), 503
hours = int(request.args.get('hours', 24))
analytics_system = get_analytics_system()
# Compute the time window
end_time = datetime.now()
start_time = end_time - timedelta(hours=hours)
# Analyze performance
performance_stats = analytics_system.performance_analyzer.analyze_performance(
workflow_id=workflow_id,
start_time=start_time,
end_time=end_time
)
# Compute the success rate
success_stats = analytics_system.success_rate_calculator.calculate_success_rate(
workflow_id=workflow_id,
time_window_hours=hours
)
return jsonify({
'success': True,
'workflow_id': workflow_id,
'time_window_hours': hours,
'performance': performance_stats.to_dict() if performance_stats else None,
'success_rate': success_stats.to_dict() if success_stats else None
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
@analytics_bp.route('/workflow/<workflow_id>/executions', methods=['GET'])
def get_workflow_executions(workflow_id: str):
"""
Récupère l'historique des exécutions d'un workflow.
Exigence: 18.3
"""
try:
executor = get_executor()
executions = executor.list_executions(workflow_id=workflow_id)
# Add computed metrics
for execution in executions:
if execution.get('analytics_data'):
# Enrich with computed metrics
execution['calculated_metrics'] = _calculate_execution_metrics(execution)
return jsonify({
'success': True,
'workflow_id': workflow_id,
'executions': executions,
'total_count': len(executions)
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
@analytics_bp.route('/dashboard/workflows', methods=['GET'])
def get_workflows_dashboard():
"""
Récupère les données du dashboard pour tous les workflows.
Exigence: 18.3
"""
try:
if not ANALYTICS_AVAILABLE:
return jsonify({
'success': False,
'error': 'Analytics system not available'
}), 503
hours = int(request.args.get('hours', 24))
# Retrieve all workflows
try:
db = WorkflowDatabase()
all_workflows = db.list_workflows()
except Exception as e:
return jsonify({
'success': False,
'error': f'Database error: {str(e)}'
}), 500
dashboard_data = {
'summary': {
'total_workflows': len(all_workflows),
'time_window_hours': hours,
'generated_at': datetime.now().isoformat()
},
'workflows': []
}
try:
analytics_system = get_analytics_system()
except Exception as e:
return jsonify({
'success': False,
'error': f'Analytics system error: {str(e)}'
}), 503
# Collect metrics for each workflow
for workflow_info in all_workflows:
workflow_id = workflow_info['workflow_id']
try:
# Performance metrics
end_time = datetime.now()
start_time = end_time - timedelta(hours=hours)
performance_stats = analytics_system.performance_analyzer.analyze_performance(
workflow_id=workflow_id,
start_time=start_time,
end_time=end_time
)
success_stats = analytics_system.success_rate_calculator.calculate_success_rate(
workflow_id=workflow_id,
time_window_hours=hours
)
# Recent executions
executor = get_executor()
recent_executions = executor.list_executions(workflow_id=workflow_id)[:5]  # 5 most recent
workflow_metrics = {
'workflow_id': workflow_id,
'name': workflow_info.get('name', 'Unnamed Workflow'),
'performance': performance_stats.to_dict() if performance_stats else None,
'success_rate': success_stats.to_dict() if success_stats else None,
'recent_executions': recent_executions,
'last_execution': recent_executions[0] if recent_executions else None
}
dashboard_data['workflows'].append(workflow_metrics)
except Exception as e:
# Continue even if one workflow fails
dashboard_data['workflows'].append({
'workflow_id': workflow_id,
'name': workflow_info.get('name', 'Unnamed Workflow'),
'error': str(e)
})
return jsonify({
'success': True,
'dashboard': dashboard_data
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
@analytics_bp.route('/dashboard/summary', methods=['GET'])
def get_dashboard_summary():
"""
Récupère un résumé global des métriques.
Exigence: 18.3
"""
try:
if not ANALYTICS_AVAILABLE:
return jsonify({
'success': False,
'error': 'Analytics system not available'
}), 503
hours = int(request.args.get('hours', 24))
analytics_system = get_analytics_system()
executor = get_executor()
# Global statistics
end_time = datetime.now()
start_time = end_time - timedelta(hours=hours)
# Count total executions (the loop variable avoids shadowing the built-in `exec`)
all_executions = executor.list_executions()
recent_executions = [
e for e in all_executions
if e.get('start_time') and
datetime.fromisoformat(e['start_time']) >= start_time
]
successful_executions = [
e for e in recent_executions
if e.get('status') == 'completed'
]
failed_executions = [
e for e in recent_executions
if e.get('status') == 'failed'
]
# Compute the summary metrics
total_executions = len(recent_executions)
success_rate = (len(successful_executions) / total_executions * 100) if total_executions > 0 else 0
# Average duration
durations = [
e.get('duration_ms', 0) for e in successful_executions
if e.get('duration_ms')
]
avg_duration = sum(durations) / len(durations) if durations else 0
summary = {
'time_window_hours': hours,
'total_executions': total_executions,
'successful_executions': len(successful_executions),
'failed_executions': len(failed_executions),
'success_rate_percent': round(success_rate, 2),
'average_duration_ms': round(avg_duration, 2),
'generated_at': datetime.now().isoformat()
}
return jsonify({
'success': True,
'summary': summary
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
@analytics_bp.route('/insights', methods=['GET'])
def get_analytics_insights():
"""
Récupère les insights Analytics générés automatiquement.
Exigence: 18.3
"""
try:
if not ANALYTICS_AVAILABLE:
return jsonify({
'success': False,
'error': 'Analytics system not available'
}), 503
hours = int(request.args.get('hours', 168))  # 1 week by default
try:
analytics_system = get_analytics_system()
except Exception as e:
return jsonify({
'success': False,
'error': f'Analytics system error: {str(e)}'
}), 503
end_time = datetime.now()
start_time = end_time - timedelta(hours=hours)
# Generate the insights
try:
insights = analytics_system.insight_generator.generate_insights(
start_time=start_time,
end_time=end_time
)
except Exception as e:
return jsonify({
'success': False,
'error': f'Insights generation error: {str(e)}'
}), 500
return jsonify({
'success': True,
'time_window_hours': hours,
'insights': [insight.to_dict() for insight in insights],
'generated_at': datetime.now().isoformat()
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
def _calculate_execution_metrics(execution: Dict[str, Any]) -> Dict[str, Any]:
"""
Calcule des métriques supplémentaires pour une exécution.
Args:
execution: Données d'exécution
Returns:
Métriques calculées
"""
metrics = {}
try:
# Efficiency (steps completed / steps total)
steps_completed = execution.get('steps_completed', 0)
steps_total = execution.get('steps_total', 0)
if steps_total > 0:
metrics['efficiency_percent'] = round((steps_completed / steps_total) * 100, 2)
# Speed (steps per second)
duration_ms = execution.get('duration_ms', 0)
if duration_ms > 0 and steps_completed > 0:
duration_sec = duration_ms / 1000
metrics['steps_per_second'] = round(steps_completed / duration_sec, 2)
# Health status
if execution.get('status') == 'completed':
metrics['health_status'] = 'healthy'
elif execution.get('status') == 'failed':
metrics['health_status'] = 'unhealthy'
else:
metrics['health_status'] = 'unknown'
except Exception as e:
metrics['calculation_error'] = str(e)
return metrics
# Helper to register the blueprint
def register_analytics_blueprint(app):
"""Registers the Analytics blueprint on the Flask application."""
app.register_blueprint(analytics_bp)
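The success-rate and average-duration arithmetic in `/dashboard/summary` can be exercised in isolation. This sketch reproduces that logic as a pure function; the name `summarize_executions` and the injectable `now` parameter are illustrative, not part of this API:

```python
from datetime import datetime, timedelta

def summarize_executions(executions, hours=24, now=None):
    """Reproduce the /dashboard/summary arithmetic on plain execution dicts."""
    now = now or datetime.now()
    start = now - timedelta(hours=hours)
    # Keep only executions that started inside the time window
    recent = [
        e for e in executions
        if e.get('start_time') and datetime.fromisoformat(e['start_time']) >= start
    ]
    ok = [e for e in recent if e.get('status') == 'completed']
    failed = [e for e in recent if e.get('status') == 'failed']
    durations = [e['duration_ms'] for e in ok if e.get('duration_ms')]
    total = len(recent)
    return {
        'total_executions': total,
        'successful_executions': len(ok),
        'failed_executions': len(failed),
        'success_rate_percent': round(len(ok) / total * 100, 2) if total else 0,
        'average_duration_ms': round(sum(durations) / len(durations), 2) if durations else 0,
    }
```

Passing `now` explicitly makes the window boundary deterministic, which is useful when unit-testing the endpoint's behavior.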

View File

@@ -0,0 +1,243 @@
"""
API REST pour la gestion des images d'ancres visuelles.
Auteur : Dom, Alice, Kiro - 21 janvier 2026
Endpoints :
POST /api/anchor-images - Upload d'une image d'ancre
GET /api/anchor-images/{id}/thumbnail - Récupérer la miniature
GET /api/anchor-images/{id}/original - Récupérer l'image originale
GET /api/anchor-images/{id}/metadata - Récupérer les métadonnées
DELETE /api/anchor-images/{id} - Supprimer une ancre
GET /api/anchor-images - Lister les ancres
GET /api/anchor-images/stats - Statistiques de stockage
"""
from flask import Blueprint, request, jsonify, send_file, abort
from services.anchor_image_service import (
save_anchor_image,
get_thumbnail_path,
get_original_path,
get_anchor_metadata,
delete_anchor_image,
list_anchor_images,
get_storage_stats,
)
anchor_images_bp = Blueprint('anchor_images', __name__)
@anchor_images_bp.route('/api/anchor-images', methods=['POST'])
def upload_anchor_image():
"""
Upload d'une image d'ancre visuelle.
Body JSON attendu:
{
"image_base64": "data:image/png;base64,...",
"bounding_box": {"x": 100, "y": 200, "width": 150, "height": 50},
"anchor_id": "anchor_xxx" (optionnel),
"metadata": {...} (optionnel)
}
Returns:
{
"success": true,
"anchor_id": "anchor_xxx",
"thumbnail_url": "/api/anchor-images/anchor_xxx/thumbnail",
"original_url": "/api/anchor-images/anchor_xxx/original",
"metadata": {...}
}
"""
try:
data = request.get_json()
if not data:
return jsonify({
'success': False,
'error': 'Données JSON requises'
}), 400
image_base64 = data.get('image_base64')
bounding_box = data.get('bounding_box')
if not image_base64:
return jsonify({
'success': False,
'error': 'image_base64 est requis'
}), 400
if not bounding_box:
return jsonify({
'success': False,
'error': 'bounding_box est requis'
}), 400
# Validate the bounding_box
required_keys = ['x', 'y', 'width', 'height']
for key in required_keys:
if key not in bounding_box:
return jsonify({
'success': False,
'error': f'bounding_box.{key} est requis'
}), 400
anchor_id = data.get('anchor_id')
metadata = data.get('metadata')
result = save_anchor_image(anchor_id, image_base64, bounding_box, metadata)
return jsonify(result), 201
except ValueError as e:
return jsonify({
'success': False,
'error': str(e)
}), 400
except Exception as e:
print(f"❌ Erreur upload anchor image: {e}")
return jsonify({
'success': False,
'error': f'Erreur serveur: {str(e)}'
}), 500
@anchor_images_bp.route('/api/anchor-images/<anchor_id>/thumbnail', methods=['GET'])
def get_thumbnail(anchor_id: str):
"""
Récupérer la miniature d'une ancre.
Args:
anchor_id: ID de l'ancre
Returns:
Image JPEG (fichier binaire)
"""
path = get_thumbnail_path(anchor_id)
if not path:
abort(404, description=f"Ancre '{anchor_id}' non trouvée")
return send_file(
path,
mimetype='image/jpeg',
as_attachment=False,
download_name=f'{anchor_id}_thumbnail.jpg'
)
@anchor_images_bp.route('/api/anchor-images/<anchor_id>/original', methods=['GET'])
def get_original(anchor_id: str):
"""
Récupérer l'image originale d'une ancre.
Args:
anchor_id: ID de l'ancre
Returns:
Image PNG (fichier binaire)
"""
path = get_original_path(anchor_id)
if not path:
abort(404, description=f"Ancre '{anchor_id}' non trouvée")
return send_file(
path,
mimetype='image/png',
as_attachment=False,
download_name=f'{anchor_id}_original.png'
)
@anchor_images_bp.route('/api/anchor-images/<anchor_id>/metadata', methods=['GET'])
def get_metadata(anchor_id: str):
"""
Récupérer les métadonnées d'une ancre.
Args:
anchor_id: ID de l'ancre
Returns:
JSON avec les métadonnées
"""
metadata = get_anchor_metadata(anchor_id)
if not metadata:
return jsonify({
'success': False,
'error': f"Ancre '{anchor_id}' non trouvée"
}), 404
return jsonify({
'success': True,
'metadata': metadata
})
@anchor_images_bp.route('/api/anchor-images/<anchor_id>', methods=['DELETE'])
def delete_anchor(anchor_id: str):
"""
Supprimer une ancre et ses fichiers associés.
Args:
anchor_id: ID de l'ancre
Returns:
JSON de confirmation
"""
deleted = delete_anchor_image(anchor_id)
if not deleted:
return jsonify({
'success': False,
'error': f"Ancre '{anchor_id}' non trouvée"
}), 404
return jsonify({
'success': True,
'message': f"Ancre '{anchor_id}' supprimée"
})
@anchor_images_bp.route('/api/anchor-images', methods=['GET'])
def list_anchors():
"""
Lister toutes les images d'ancres stockées.
Query params:
limit: Nombre maximum d'ancres à retourner (défaut: 100)
offset: Décalage pour la pagination (défaut: 0)
Returns:
JSON avec la liste des ancres
"""
limit = request.args.get('limit', 100, type=int)
offset = request.args.get('offset', 0, type=int)
all_anchors = list_anchor_images()
paginated = all_anchors[offset:offset + limit]
return jsonify({
'success': True,
'anchors': paginated,
'total': len(all_anchors),
'limit': limit,
'offset': offset
})
@anchor_images_bp.route('/api/anchor-images/stats', methods=['GET'])
def storage_stats():
"""
Obtenir les statistiques de stockage.
Returns:
JSON avec les statistiques
"""
stats = get_storage_stats()
return jsonify({
'success': True,
'stats': stats
})
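A client can assemble the body expected by `POST /api/anchor-images` from raw PNG bytes. A minimal sketch — the helper name is illustrative; only the payload shape comes from the endpoint's docstring above:

```python
import base64

def build_anchor_payload(png_bytes, x, y, width, height, anchor_id=None):
    """Assemble the JSON body expected by POST /api/anchor-images (sketch)."""
    # The documented format is a data URL: 'data:image/png;base64,<payload>'
    image_b64 = 'data:image/png;base64,' + base64.b64encode(png_bytes).decode('ascii')
    payload = {
        'image_base64': image_b64,
        'bounding_box': {'x': x, 'y': y, 'width': width, 'height': height},
    }
    if anchor_id:
        # anchor_id is optional; the service generates one when absent
        payload['anchor_id'] = anchor_id
    return payload
```

All four `bounding_box` keys are mandatory — the endpoint rejects the request with a 400 if any of `x`, `y`, `width`, `height` is missing.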

View File

@@ -0,0 +1,444 @@
"""
API endpoints pour la détection d'éléments UI
Intégration avec le UIDetector du core RPA Vision V3
"""
from __future__ import annotations
from flask import Blueprint, request, jsonify
from typing import Dict, Any, List
import asyncio
import logging
import base64
from io import BytesIO
from PIL import Image
import numpy as np
import sys
import os
# Add project root to path to import core modules (best-effort)
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../')))
# All core imports live inside the guard so a missing core package
# degrades gracefully instead of crashing at import time
try:
from core.detection.ui_detector import UIDetector
from core.models import Point, BBox, UIElement
from core.capture.screen_capturer import ScreenCapturer
CORE_AVAILABLE = True
except ImportError as e:
print(f"Warning: Core modules not available (element_detection): {e}")
CORE_AVAILABLE = False
UIDetector = None  # type: ignore
Point = None  # type: ignore
BBox = None  # type: ignore
UIElement = None  # type: ignore
ScreenCapturer = None  # type: ignore
logger = logging.getLogger(__name__)
# Create the blueprint for the detection endpoints
element_detection_bp = Blueprint('element_detection', __name__, url_prefix='/api/detection')
# Global UIDetector instance (initialized from app.py)
ui_detector: UIDetector = None
screen_capturer: ScreenCapturer = None
def init_element_detection(detector: UIDetector, capturer: ScreenCapturer):
"""Initialise le UIDetector avec les dépendances"""
global ui_detector, screen_capturer
ui_detector = detector
screen_capturer = capturer
logger.info("UIDetector initialisé pour l'API")
@element_detection_bp.route('/elements', methods=['POST'])
def detect_elements():
"""
Détecte les éléments UI dans une capture d'écran
Body:
{
"screenshot": "base64_image_data",
"region": {"x": 0, "y": 0, "width": 1920, "height": 1080}, // optionnel
"element_types": ["button", "input", "link"], // optionnel
"confidence_threshold": 0.7 // optionnel
}
Returns:
{
"elements": [
{
"id": "element_1",
"type": "button",
"bounds": {"x": 100, "y": 200, "width": 80, "height": 30},
"confidence": 0.95,
"text": "Cliquer ici",
"attributes": {
"tag_name": "button",
"class_name": "btn btn-primary",
"is_clickable": true
},
"visual_features": {
"color_dominant": "#007bff",
"has_border": true,
"has_shadow": false
}
}
],
"processing_time": 1.23,
"total_elements": 15,
"filtered_elements": 5
}
"""
try:
if not ui_detector:
return jsonify({'error': 'UIDetector non initialisé'}), 500
data = request.get_json()
if not data or 'screenshot' not in data:
return jsonify({'error': 'Screenshot requis'}), 400
# Decode the base64 image
try:
screenshot_data = data['screenshot']
if screenshot_data.startswith('data:image'):
screenshot_data = screenshot_data.split(',')[1]
image_bytes = base64.b64decode(screenshot_data)
image = Image.open(BytesIO(image_bytes))
image_array = np.array(image)
except Exception as e:
return jsonify({'error': f'Erreur de décodage image: {str(e)}'}), 400
# Optional parameters
region = data.get('region')
element_types = data.get('element_types', [])
confidence_threshold = data.get('confidence_threshold', 0.7)
# Convert the region if provided
detection_region = None
if region:
detection_region = BBox(
x=region['x'],
y=region['y'],
width=region['width'],
height=region['height']
)
# Run the detection (async operation)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
import time
start_time = time.time()
detected_elements = loop.run_until_complete(
ui_detector.detect_elements(
image_array,
region=detection_region,
element_types=element_types if element_types else None
)
)
processing_time = time.time() - start_time
# Filter by confidence
filtered_elements = [
elem for elem in detected_elements
if elem.confidence >= confidence_threshold
]
# Convert to dictionaries for JSON
elements_data = []
for elem in filtered_elements:
element_dict = {
'id': f"element_{elem.bounds.x}_{elem.bounds.y}",
'type': elem.element_type,
'bounds': {
'x': elem.bounds.x,
'y': elem.bounds.y,
'width': elem.bounds.width,
'height': elem.bounds.height
},
'confidence': round(elem.confidence, 3),
'text': elem.text_content or '',
'attributes': elem.attributes or {},
'visual_features': {
'color_dominant': getattr(elem, 'dominant_color', '#000000'),
'has_border': getattr(elem, 'has_border', False),
'has_shadow': getattr(elem, 'has_shadow', False)
}
}
elements_data.append(element_dict)
result = {
'elements': elements_data,
'processing_time': round(processing_time, 3),
'total_elements': len(detected_elements),
'filtered_elements': len(filtered_elements)
}
logger.info(f"Détection terminée: {len(filtered_elements)} éléments trouvés en {processing_time:.2f}s")
return jsonify(result), 200
finally:
loop.close()
except ValueError as e:
logger.warning(f"Erreur de validation: {e}")
return jsonify({'error': str(e)}), 400
except Exception as e:
logger.error(f"Erreur lors de la détection: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@element_detection_bp.route('/element-at-position', methods=['POST'])
def detect_element_at_position():
"""
Détecte l'élément UI à une position spécifique
Body:
{
"position": {"x": 100, "y": 200},
"screenshot": "base64_image_data", // optionnel, capture automatique si absent
"tolerance": 5 // optionnel, tolérance en pixels
}
Returns:
{
"element": {
"id": "element_1",
"type": "button",
"bounds": {"x": 95, "y": 195, "width": 80, "height": 30},
"confidence": 0.95,
"text": "Cliquer ici",
"attributes": {...},
"visual_features": {...}
},
"processing_time": 0.45
}
"""
try:
if not ui_detector:
return jsonify({'error': 'UIDetector non initialisé'}), 500
data = request.get_json()
if not data or 'position' not in data:
return jsonify({'error': 'Position requise'}), 400
position_data = data['position']
if 'x' not in position_data or 'y' not in position_data:
return jsonify({'error': 'Coordonnées x et y requises'}), 400
position = Point(x=position_data['x'], y=position_data['y'])
tolerance = data.get('tolerance', 5)
# Get the image
if 'screenshot' in data:
# Use the provided image
try:
screenshot_data = data['screenshot']
if screenshot_data.startswith('data:image'):
screenshot_data = screenshot_data.split(',')[1]
image_bytes = base64.b64decode(screenshot_data)
image = Image.open(BytesIO(image_bytes))
image_array = np.array(image)
except Exception as e:
return jsonify({'error': f'Erreur de décodage image: {str(e)}'}), 400
else:
# Capture the screen automatically
if not screen_capturer:
return jsonify({'error': 'ScreenCapturer non disponible'}), 500
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
screenshot_result = loop.run_until_complete(
screen_capturer.capture_screen()
)
image_array = screenshot_result.image_array
finally:
loop.close()
# Run the detection at the position
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
import time
start_time = time.time()
element = loop.run_until_complete(
ui_detector.detect_element_at_position(
image_array,
position,
tolerance=tolerance
)
)
processing_time = time.time() - start_time
if not element:
return jsonify({
'element': None,
'processing_time': round(processing_time, 3),
'message': 'Aucun élément trouvé à cette position'
}), 200
# Convert to a dictionary for JSON
element_dict = {
'id': f"element_{element.bounds.x}_{element.bounds.y}",
'type': element.element_type,
'bounds': {
'x': element.bounds.x,
'y': element.bounds.y,
'width': element.bounds.width,
'height': element.bounds.height
},
'confidence': round(element.confidence, 3),
'text': element.text_content or '',
'attributes': element.attributes or {},
'visual_features': {
'color_dominant': getattr(element, 'dominant_color', '#000000'),
'has_border': getattr(element, 'has_border', False),
'has_shadow': getattr(element, 'has_shadow', False)
}
}
result = {
'element': element_dict,
'processing_time': round(processing_time, 3)
}
logger.info(f"Élément détecté à ({position.x}, {position.y}): {element.element_type}")
return jsonify(result), 200
finally:
loop.close()
except ValueError as e:
logger.warning(f"Erreur de validation: {e}")
return jsonify({'error': str(e)}), 400
except Exception as e:
logger.error(f"Erreur lors de la détection: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@element_detection_bp.route('/element-types', methods=['GET'])
def get_supported_element_types():
"""
Obtient la liste des types d'éléments supportés
Returns:
{
"element_types": [
{
"type": "button",
"description": "Boutons cliquables",
"confidence_threshold": 0.8
},
{
"type": "input",
"description": "Champs de saisie",
"confidence_threshold": 0.7
}
]
}
"""
try:
if not ui_detector:
return jsonify({'error': 'UIDetector non initialisé'}), 500
# Supported element types (adapt to the UIDetector implementation)
element_types = [
{
'type': 'button',
'description': 'Boutons cliquables',
'confidence_threshold': 0.8
},
{
'type': 'input',
'description': 'Champs de saisie de texte',
'confidence_threshold': 0.7
},
{
'type': 'link',
'description': 'Liens hypertexte',
'confidence_threshold': 0.75
},
{
'type': 'image',
'description': 'Images et icônes',
'confidence_threshold': 0.6
},
{
'type': 'text',
'description': 'Texte statique',
'confidence_threshold': 0.5
},
{
'type': 'dropdown',
'description': 'Listes déroulantes',
'confidence_threshold': 0.8
},
{
'type': 'checkbox',
'description': 'Cases à cocher',
'confidence_threshold': 0.85
},
{
'type': 'radio',
'description': 'Boutons radio',
'confidence_threshold': 0.85
}
]
return jsonify({'element_types': element_types}), 200
except Exception as e:
logger.error(f"Erreur lors de la récupération des types: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@element_detection_bp.route('/health', methods=['GET'])
def health_check():
"""
Vérification de santé du service de détection
Returns:
{
"status": "healthy",
"detector_initialized": true,
"capturer_available": true
}
"""
try:
status = {
'status': 'healthy' if ui_detector else 'unhealthy',
'detector_initialized': ui_detector is not None,
'capturer_available': screen_capturer is not None
}
return jsonify(status), 200
except Exception as e:
logger.error(f"Erreur lors du health check: {e}")
return jsonify({
'status': 'error',
'error': str(e)
}), 500
# Error handlers for the blueprint
@element_detection_bp.errorhandler(404)
def not_found(error):
return jsonify({'error': 'Endpoint non trouvé'}), 404
@element_detection_bp.errorhandler(405)
def method_not_allowed(error):
return jsonify({'error': 'Méthode non autorisée'}), 405
@element_detection_bp.errorhandler(500)
def internal_error(error):
return jsonify({'error': 'Erreur interne du serveur'}), 500
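Both detection endpoints accept either raw base64 or a full data URL and strip the prefix before decoding. That shared handling can be factored into one helper — a sketch only; the endpoints above inline the equivalent logic:

```python
import base64

def decode_screenshot(screenshot_data: str) -> bytes:
    """Accept raw base64 or a 'data:image/...;base64,' URL and return raw bytes."""
    if screenshot_data.startswith('data:image'):
        # Drop the 'data:image/png;base64,' prefix, keep only the payload
        screenshot_data = screenshot_data.split(',', 1)[1]
    return base64.b64decode(screenshot_data)
```

The decoded bytes are then handed to `PIL.Image.open` and converted to a NumPy array for the detector.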

View File

@@ -0,0 +1,161 @@
"""
Error handling for API
Defines custom exceptions and error response formatting.
"""
from flask import jsonify
from typing import Dict, Any
import logging
class APIError(Exception):
"""Base class for API errors"""
def __init__(self, message: str, status_code: int = 500):
super().__init__(message)
self.message = message
self.status_code = status_code
class ValidationError(APIError):
"""Raised when request data validation fails"""
def __init__(self, message: str):
super().__init__(message, 400)
class NotFoundError(APIError):
"""Raised when a resource is not found"""
def __init__(self, message: str):
super().__init__(message, 404)
class BadRequestError(APIError):
"""Raised when request is malformed"""
def __init__(self, message: str):
super().__init__(message, 400)
class ConflictError(APIError):
"""Raised when there's a conflict (e.g., duplicate resource)"""
def __init__(self, message: str):
super().__init__(message, 409)
class InternalServerError(APIError):
"""Raised for internal server errors"""
def __init__(self, message: str):
super().__init__(message, 500)
def error_response(status_code: int, message: str, details: Dict[str, Any] = None) -> tuple:
"""
Create a standardized error response
Args:
status_code: HTTP status code
message: Error message
details: Optional additional error details
Returns:
Tuple of (response, status_code)
"""
response = {
'error': {
'code': status_code,
'message': message
}
}
if details:
response['error']['details'] = details
return jsonify(response), status_code
# Error code constants
class ErrorCode:
"""Standard error codes"""
# Validation errors (1000-1999)
MISSING_REQUIRED_PARAMETER = 1001
INVALID_PARAMETER_TYPE = 1002
INVALID_VARIABLE_REFERENCE = 1003
CIRCULAR_DEPENDENCY = 1004
DISCONNECTED_NODE = 1005
INVALID_EDGE_CONNECTION = 1006
DUPLICATE_VARIABLE_NAME = 1007
# Serialization errors (2000-2999)
INVALID_JSON_FORMAT = 2001
MISSING_REQUIRED_FIELD = 2002
VERSION_INCOMPATIBLE = 2003
DESERIALIZATION_FAILED = 2004
# Execution errors (3000-3999)
CONVERSION_FAILED = 3001
EXECUTION_FAILED = 3002
TARGET_NOT_FOUND = 3003
TIMEOUT_EXCEEDED = 3004
# Network errors (4000-4999)
CONNECTION_FAILED = 4001
TIMEOUT = 4002
SERVER_ERROR = 4003
# Resource errors (5000-5999)
RESOURCE_NOT_FOUND = 5001
RESOURCE_ALREADY_EXISTS = 5002
RESOURCE_LOCKED = 5003
def get_error_message(error_code: int) -> str:
"""Get a human-readable message for an error code"""
messages = {
ErrorCode.MISSING_REQUIRED_PARAMETER: "A required parameter is missing",
ErrorCode.INVALID_PARAMETER_TYPE: "Parameter has invalid type",
ErrorCode.INVALID_VARIABLE_REFERENCE: "Variable reference is invalid",
ErrorCode.CIRCULAR_DEPENDENCY: "Circular dependency detected",
ErrorCode.DISCONNECTED_NODE: "Node is not connected to the workflow",
ErrorCode.INVALID_EDGE_CONNECTION: "Edge connection is invalid",
ErrorCode.DUPLICATE_VARIABLE_NAME: "Variable name already exists",
ErrorCode.INVALID_JSON_FORMAT: "JSON format is invalid",
ErrorCode.MISSING_REQUIRED_FIELD: "Required field is missing",
ErrorCode.VERSION_INCOMPATIBLE: "Version is incompatible",
ErrorCode.DESERIALIZATION_FAILED: "Failed to deserialize data",
ErrorCode.CONVERSION_FAILED: "Failed to convert workflow",
ErrorCode.EXECUTION_FAILED: "Workflow execution failed",
ErrorCode.TARGET_NOT_FOUND: "Target element not found",
ErrorCode.TIMEOUT_EXCEEDED: "Operation timeout exceeded",
ErrorCode.CONNECTION_FAILED: "Connection failed",
ErrorCode.TIMEOUT: "Request timeout",
ErrorCode.SERVER_ERROR: "Internal server error",
ErrorCode.RESOURCE_NOT_FOUND: "Resource not found",
ErrorCode.RESOURCE_ALREADY_EXISTS: "Resource already exists",
ErrorCode.RESOURCE_LOCKED: "Resource is locked"
}
return messages.get(error_code, "Unknown error")
def handle_api_error(func):
"""
Decorator to handle API errors consistently
Catches APIError exceptions and returns proper JSON responses
"""
from functools import wraps
@wraps(func)
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except APIError as e:
logger = logging.getLogger(__name__)
logger.error(f"API Error in {func.__name__}: {e.message}")
return error_response(e.status_code, e.message)
except Exception as e:
logger = logging.getLogger(__name__)
logger.error(f"Unexpected error in {func.__name__}: {str(e)}")
return error_response(500, "Internal server error")
return wrapper
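Outside Flask, the behavior of `handle_api_error` is easy to exercise: any `APIError` becomes a `(body, status)` pair and anything else becomes a 500. A self-contained sketch, with plain dicts standing in for `jsonify` so no app context is needed:

```python
import functools
import logging

class APIError(Exception):
    """Mirror of the module's base error class."""
    def __init__(self, message, status_code=500):
        super().__init__(message)
        self.message = message
        self.status_code = status_code

def error_response(status_code, message):
    # jsonify() is replaced by a plain dict so this runs without an app context
    return {'error': {'code': status_code, 'message': message}}, status_code

def handle_api_error(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except APIError as e:
            return error_response(e.status_code, e.message)
        except Exception:
            logging.getLogger(__name__).exception('Unexpected error in %s', func.__name__)
            return error_response(500, 'Internal server error')
    return wrapper

# Hypothetical endpoint used only to demonstrate the decorator
@handle_api_error
def get_workflow(found: bool):
    if not found:
        raise APIError('Resource not found', 404)
    return {'success': True}, 200
```

The blanket `except Exception` deliberately hides internal details from the client while logging the traceback server-side.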

View File

@@ -0,0 +1,439 @@
"""
API endpoints pour l'import/export de workflows
"""
from flask import Blueprint, request, jsonify, send_file
from werkzeug.utils import secure_filename
import json
import yaml
import tempfile
import os
from datetime import datetime
from typing import Dict, Any, List, Optional
from models.visual_workflow import VisualWorkflow
from services.workflow_service import WorkflowService
import_export_bp = Blueprint('import_export', __name__)
class ImportExportService:
"""Service pour l'import/export de workflows"""
@staticmethod
def export_workflow(workflow: VisualWorkflow, format_type: str = 'json',
include_metadata: bool = True,
include_template_info: bool = True,
minify: bool = False) -> Dict[str, Any]:
"""
Exporter un workflow vers un format donné
Args:
workflow: Le workflow à exporter
format_type: Format d'export ('json' ou 'yaml')
include_metadata: Inclure les métadonnées
include_template_info: Inclure les infos de template
minify: Minifier le JSON
Returns:
Dict contenant les données d'export
"""
# Prepare the export data
export_data = {
'version': '1.0.0',
'name': workflow.name,
'description': workflow.description,
'nodes': [node.to_dict() for node in workflow.nodes],
'edges': [edge.to_dict() for edge in workflow.edges],
'variables': [var.to_dict() for var in workflow.variables]
}
# Add the metadata if requested
if include_metadata:
export_data['metadata'] = {
'exported_at': datetime.now().isoformat(),
'exported_by': 'Visual Workflow Builder',
'node_count': len(workflow.nodes),
'edge_count': len(workflow.edges),
'variable_count': len(workflow.variables)
}
# Add the template info if requested
if include_template_info and hasattr(workflow, 'template_id') and workflow.template_id:
export_data['template'] = {
'id': workflow.template_id,
'name': getattr(workflow, 'template_name', None)
}
return export_data
@staticmethod
def import_workflow(data: str, filename: Optional[str] = None) -> Dict[str, Any]:
"""
Importer un workflow depuis des données
Args:
data: Données du workflow (JSON ou YAML)
filename: Nom du fichier (pour détecter le format)
Returns:
Dict avec le résultat de l'import
"""
errors = []
warnings = []
try:
# Detect the format
format_type = ImportExportService._detect_format(data, filename)
# Parse the data
if format_type == 'yaml':
parsed_data = yaml.safe_load(data)
else:
parsed_data = json.loads(data)
# Validate the structure
validation_result = ImportExportService._validate_structure(parsed_data)
errors.extend(validation_result['errors'])
warnings.extend(validation_result['warnings'])
if errors:
return {
'success': False,
'errors': errors,
'warnings': warnings
}
# Migrate if needed
migrated_data = ImportExportService._migrate_workflow(parsed_data, warnings)
# Create the workflow
workflow = ImportExportService._create_workflow_from_data(migrated_data)
return {
'success': True,
'workflow': workflow.to_dict(),
'warnings': warnings
}
except Exception as e:
errors.append(f"Erreur de parsing: {str(e)}")
return {
'success': False,
'errors': errors,
'warnings': warnings
}
@staticmethod
def _detect_format(data: str, filename: Optional[str] = None) -> str:
"""Détecter le format du fichier"""
if filename:
ext = filename.lower().split('.')[-1]
if ext in ['yaml', 'yml']:
return 'yaml'
# Detect from the content
stripped = data.strip()
if stripped.startswith('{') or stripped.startswith('['):
return 'json'
return 'yaml'
@staticmethod
def _validate_structure(data: Dict[str, Any]) -> Dict[str, List[str]]:
"""Valider la structure du workflow"""
errors = []
warnings = []
# Mandatory checks
if not isinstance(data, dict):
errors.append('Le fichier doit contenir un objet JSON valide')
return {'errors': errors, 'warnings': warnings}
if 'nodes' not in data or not isinstance(data['nodes'], list):
errors.append('Le workflow doit contenir un tableau de nodes')
if 'edges' not in data or not isinstance(data['edges'], list):
errors.append('Le workflow doit contenir un tableau d\'edges')
# Validate the nodes
if 'nodes' in data:
for i, node in enumerate(data['nodes']):
if not isinstance(node, dict):
errors.append(f'Node {i}: Doit être un objet')
continue
if 'id' not in node:
errors.append(f'Node {i}: ID manquant')
if 'type' not in node:
errors.append(f'Node {i}: Type manquant')
if 'position' not in node or not isinstance(node['position'], dict):
errors.append(f'Node {i}: Position manquante ou invalide')
elif 'x' not in node['position'] or 'y' not in node['position']:
errors.append(f'Node {i}: Position doit contenir x et y')
# Valider les edges
if 'edges' in data:
for i, edge in enumerate(data['edges']):
if not isinstance(edge, dict):
errors.append(f'Edge {i}: Doit être un objet')
continue
if 'id' not in edge:
errors.append(f'Edge {i}: ID manquant')
if 'source' not in edge:
errors.append(f'Edge {i}: Source manquante')
if 'target' not in edge:
errors.append(f'Edge {i}: Target manquante')
# Avertissements
if 'name' not in data or not data['name']:
warnings.append('Le workflow n\'a pas de nom')
if 'version' not in data:
warnings.append('Version du workflow non spécifiée')
return {'errors': errors, 'warnings': warnings}
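To see exactly what the per-node checks report, here is the same logic reduced to a single standalone function (a sketch; the error messages match the validator above):

```python
def validate_node(node: dict, index: int) -> list:
    """Collect the per-node errors emitted by the structure validator."""
    errors = []
    if 'id' not in node:
        errors.append(f'Node {index}: ID manquant')
    if 'type' not in node:
        errors.append(f'Node {index}: Type manquant')
    position = node.get('position')
    if not isinstance(position, dict):
        errors.append(f'Node {index}: Position manquante ou invalide')
    elif 'x' not in position or 'y' not in position:
        errors.append(f'Node {index}: Position doit contenir x et y')
    return errors
```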
@staticmethod
def _migrate_workflow(data: Dict[str, Any], warnings: List[str]) -> Dict[str, Any]:
"""Migrer un workflow vers la version actuelle"""
# Migration de version si nécessaire
if 'version' not in data or data['version'] < '1.0.0':
warnings.append('Workflow migré depuis une version antérieure')
# Migrations spécifiques
for node in data.get('nodes', []):
# Assurer que chaque node a une structure data
if 'data' not in node:
node['data'] = {
'label': node.get('label', node.get('type', 'Node')),
'parameters': node.get('parameters', {})
}
return data
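One caveat: the migration gate above compares version *strings*, which orders lexicographically and misbehaves once a component reaches two digits (e.g. `'2.0.0' < '10.0.0'` is False as strings). A numeric comparison is safer; a sketch with illustrative helper names, assuming purely numeric `MAJOR.MINOR.PATCH` versions:

```python
from typing import Optional

def version_tuple(version: str) -> tuple:
    """Parse 'MAJOR.MINOR.PATCH' into a numerically comparable tuple."""
    return tuple(int(part) for part in version.split('.'))

def needs_migration(version: Optional[str], current: str = '1.0.0') -> bool:
    """True when the workflow predates the current schema version."""
    if version is None:
        return True
    return version_tuple(version) < version_tuple(current)
```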
@staticmethod
def _create_workflow_from_data(data: Dict[str, Any]) -> VisualWorkflow:
"""Créer un workflow depuis les données importées"""
from models.visual_workflow import VisualNode, VisualEdge, WorkflowVariable
# Créer le workflow
workflow = VisualWorkflow(
name=data.get('name', 'Workflow Importé'),
description=data.get('description', ''),
nodes=[],
edges=[],
variables=[]
)
# Ajouter les nodes
for node_data in data.get('nodes', []):
node = VisualNode(
id=node_data['id'],
type=node_data['type'],
position=node_data['position'],
data=node_data.get('data', {})
)
workflow.nodes.append(node)
# Ajouter les edges
for edge_data in data.get('edges', []):
edge = VisualEdge(
id=edge_data['id'],
source=edge_data['source'],
target=edge_data['target'],
source_handle=edge_data.get('sourceHandle'),
target_handle=edge_data.get('targetHandle'),
data=edge_data.get('data', {})
)
workflow.edges.append(edge)
# Ajouter les variables
for var_data in data.get('variables', []):
variable = WorkflowVariable(
id=var_data.get('id', f"var-{len(workflow.variables)}"),
name=var_data['name'],
value=var_data.get('value', ''),
type=var_data.get('type', 'string'),
description=var_data.get('description', '')
)
workflow.variables.append(variable)
return workflow
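Putting the required fields together, a minimal document that passes `_validate_structure` and imports cleanly looks like this (field names taken from the checks above; the values are illustrative):

```python
minimal_workflow = {
    'name': 'Demo',
    'version': '1.0.0',
    'nodes': [
        {
            'id': 'n1',
            'type': 'click',
            'position': {'x': 0, 'y': 0},
            'data': {'label': 'Click', 'parameters': {}},
        },
    ],
    'edges': [],
    'variables': [],
}
```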
@import_export_bp.route('/workflows/<workflow_id>/export', methods=['GET'])
def export_workflow(workflow_id: str):
"""
Exporter un workflow
Query parameters:
- format: json ou yaml (défaut: json)
- include_metadata: true/false (défaut: true)
- include_template_info: true/false (défaut: true)
- minify: true/false (défaut: false)
- download: true/false (défaut: false) - force le téléchargement
"""
try:
# Récupérer le workflow
workflow = WorkflowService.get_workflow(workflow_id)
if not workflow:
return jsonify({'error': 'Workflow non trouvé'}), 404
# Paramètres d'export
format_type = request.args.get('format', 'json').lower()
include_metadata = request.args.get('include_metadata', 'true').lower() == 'true'
include_template_info = request.args.get('include_template_info', 'true').lower() == 'true'
minify = request.args.get('minify', 'false').lower() == 'true'
download = request.args.get('download', 'false').lower() == 'true'
if format_type not in ['json', 'yaml']:
return jsonify({'error': 'Format non supporté'}), 400
# Exporter
export_data = ImportExportService.export_workflow(
workflow, format_type, include_metadata, include_template_info, minify
)
# Sérialiser
if format_type == 'yaml':
content = yaml.dump(export_data, default_flow_style=False, allow_unicode=True)
mimetype = 'application/x-yaml'
extension = 'yaml'
else:
if minify:
content = json.dumps(export_data, ensure_ascii=False)
else:
content = json.dumps(export_data, indent=2, ensure_ascii=False)
mimetype = 'application/json'
extension = 'json'
# Nom du fichier
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
filename = f"workflow_{secure_filename(workflow.name or 'untitled')}_{timestamp}.{extension}"
if download:
# Créer un fichier temporaire
with tempfile.NamedTemporaryFile(mode='w', suffix=f'.{extension}', delete=False) as f:
f.write(content)
temp_path = f.name
return send_file(
temp_path,
as_attachment=True,
download_name=filename,
mimetype=mimetype
)
else:
# Retourner les données
return jsonify({
'success': True,
'data': content,
'filename': filename,
'format': format_type
})
except Exception as e:
return jsonify({'error': str(e)}), 500
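In the JSON branch, `minify` only toggles indentation; isolated, the two output shapes are (a sketch; the endpoint itself keeps `json.dumps`'s default separators, this variant uses compact ones):

```python
import json

def serialize_json(export_data: dict, minify: bool) -> str:
    """Compact output drops whitespace; both variants keep non-ASCII readable."""
    if minify:
        return json.dumps(export_data, ensure_ascii=False, separators=(',', ':'))
    return json.dumps(export_data, indent=2, ensure_ascii=False)
```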
@import_export_bp.route('/workflows/import', methods=['POST'])
def import_workflow():
"""
Importer un workflow
Accepte:
- Fichier uploadé (multipart/form-data)
- Données JSON/YAML dans le body
"""
try:
data = None
filename = None
# Vérifier si c'est un upload de fichier
if 'file' in request.files:
file = request.files['file']
if file.filename == '':
return jsonify({'error': 'Aucun fichier sélectionné'}), 400
filename = secure_filename(file.filename)
data = file.read().decode('utf-8')
# Sinon, vérifier le body
elif request.is_json:
# Données JSON directes
data = json.dumps(request.get_json())
elif request.content_type and 'yaml' in request.content_type:
# Données YAML directes
data = request.get_data(as_text=True)
else:
# Essayer de lire comme texte
data = request.get_data(as_text=True)
if not data:
return jsonify({'error': 'Aucune donnée fournie'}), 400
# Importer
result = ImportExportService.import_workflow(data, filename)
if result['success']:
# Sauvegarder le workflow importé
# TODO: seuls le nom et la description sont persistés ici ;
# les nodes/edges importés ne sont pas encore enregistrés
workflow_data = result['workflow']
workflow = WorkflowService.create_workflow(
    workflow_data['name'],
    workflow_data['description']
)
return jsonify({
'success': True,
'workflow': workflow.to_dict(),
'warnings': result.get('warnings', [])
})
else:
return jsonify(result), 400
except Exception as e:
return jsonify({
'success': False,
'errors': [str(e)]
}), 500
@import_export_bp.route('/workflows/validate', methods=['POST'])
def validate_workflow_data():
"""
Valider des données de workflow sans les importer
"""
try:
data = None
filename = None
# Récupérer les données
if 'file' in request.files:
file = request.files['file']
filename = secure_filename(file.filename)
data = file.read().decode('utf-8')
else:
data = request.get_data(as_text=True)
if not data:
return jsonify({'error': 'Aucune donnée fournie'}), 400
# Valider seulement
result = ImportExportService.import_workflow(data, filename)
# Ne pas sauvegarder, juste retourner la validation
return jsonify({
'valid': result['success'],
'errors': result.get('errors', []),
'warnings': result.get('warnings', [])
})
except Exception as e:
return jsonify({
'valid': False,
'errors': [str(e)]
}), 500


@@ -0,0 +1,21 @@
"""
Node Types API Blueprint
Provides REST endpoints for node type definitions.
"""
from flask import Blueprint, jsonify
node_types_bp = Blueprint('node_types', __name__)
@node_types_bp.route('/', methods=['GET'])
def list_node_types():
"""List all available node types"""
# TODO: Implement in Phase 2
return jsonify([])
@node_types_bp.route('/<node_type>', methods=['GET'])
def get_node_type(node_type):
"""Get a specific node type definition"""
# TODO: Implement in Phase 2
return jsonify({'type': node_type})


@@ -0,0 +1,667 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
API REST pour la Démonstration Réelle - RPA Vision V3
Auteur : Dom, Alice, Kiro - 8 janvier 2026
Endpoints pour la capture d'écran réelle et l'interaction avec le système.
"""
from flask import Blueprint, jsonify, request
from typing import Dict, Any, List
import logging
import time
# Import du service de capture d'écran réel
try:
from services.real_screen_capture import real_capture_service
except ImportError:
# Fallback si le service n'est pas disponible
real_capture_service = None
# Import des modules RPA Vision V3 pour l'exécution d'actions
import sys
import os
# Ajouter le chemin vers le répertoire racine du projet
project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../..'))
if project_root not in sys.path:
sys.path.insert(0, project_root)
try:
from core.execution.action_executor import ActionExecutor
from core.models.workflow_graph import Action, ActionType
from core.models.screen_state import ScreenState
RPA_CORE_AVAILABLE = True
except ImportError as e:
print(f"Warning: Modules RPA Core non disponibles: {e}")
RPA_CORE_AVAILABLE = False
ActionExecutor = None
Action = None
ActionType = None
ScreenState = None
logger = logging.getLogger(__name__)
# Créer le blueprint
real_demo_bp = Blueprint('real_demo', __name__, url_prefix='/api/real-demo')
# Instance globale de l'exécuteur d'actions
if RPA_CORE_AVAILABLE:
action_executor = ActionExecutor()
else:
action_executor = None
@real_demo_bp.route('/monitors', methods=['GET'])
def get_monitors():
"""
Obtenir la liste des moniteurs disponibles
Returns:
JSON: Liste des moniteurs avec leurs propriétés
"""
try:
if real_capture_service is None:
return jsonify({
"success": False,
"error": "Service de capture d'écran non disponible"
}), 503
monitors = real_capture_service.get_monitors()
return jsonify({
"success": True,
"monitors": monitors,
"selected_monitor": real_capture_service.selected_monitor
})
except Exception as e:
logger.error(f"Erreur lors de la récupération des moniteurs: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/monitors/<int:monitor_id>/select', methods=['POST'])
def select_monitor(monitor_id: int):
"""
Sélectionner un moniteur pour la capture
Args:
monitor_id: ID du moniteur à sélectionner
Returns:
JSON: Statut de la sélection
"""
try:
if real_capture_service is None:
    return jsonify({
        "success": False,
        "error": "Service de capture d'écran non disponible"
    }), 503
success = real_capture_service.select_monitor(monitor_id)
if success:
return jsonify({
"success": True,
"message": f"Moniteur {monitor_id} sélectionné",
"selected_monitor": monitor_id
})
else:
return jsonify({
"success": False,
"error": f"Moniteur {monitor_id} invalide"
}), 400
except Exception as e:
logger.error(f"Erreur lors de la sélection du moniteur: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/capture', methods=['POST'])
def capture_single():
"""
Effectuer une capture d'écran unique (snapshot)
Body JSON (optionnel):
- monitor_id: ID du moniteur à capturer (défaut: moniteur sélectionné)
- detect_elements: Détecter les éléments UI (défaut: false)
Returns:
JSON: Screenshot en base64 et métadonnées
"""
try:
data = request.get_json() or {}
monitor_id = data.get('monitor_id')
detect_elements = data.get('detect_elements', False)
# Effectuer la capture unique
screenshot = real_capture_service.capture_single(monitor_id)
if screenshot:
return jsonify({
"success": True,
"screenshot": screenshot,
"monitor_id": monitor_id if monitor_id is not None else real_capture_service.selected_monitor,
"elements": []  # TODO: détection des éléments UI non encore implémentée, même si detect_elements=True
})
else:
return jsonify({
"success": False,
"error": "Échec de la capture d'écran"
}), 500
except Exception as e:
logger.error(f"Erreur lors de la capture unique: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/capture/start', methods=['POST'])
def start_capture():
"""
Démarrer la capture d'écran en temps réel
Body JSON (optionnel):
- interval: Intervalle de capture en secondes (défaut: 1.0)
Returns:
JSON: Statut du démarrage
"""
try:
data = request.get_json() or {}
interval = float(data.get('interval', 1.0))
# Valider l'intervalle
if interval < 0.1 or interval > 10.0:
return jsonify({
"success": False,
"error": "L'intervalle doit être entre 0.1 et 10.0 secondes"
}), 400
success = real_capture_service.start_capture(interval)
if success:
return jsonify({
"success": True,
"message": f"Capture démarrée (intervalle: {interval}s)",
"interval": interval
})
else:
return jsonify({
"success": False,
"error": "Capture déjà en cours"
}), 409
except Exception as e:
logger.error(f"Erreur lors du démarrage de la capture: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/capture/stop', methods=['POST'])
def stop_capture():
"""
Arrêter la capture d'écran
Returns:
JSON: Statut de l'arrêt
"""
try:
success = real_capture_service.stop_capture()
return jsonify({
"success": True,
"message": "Capture arrêtée" if success else "Capture n'était pas active"
})
except Exception as e:
logger.error(f"Erreur lors de l'arrêt de la capture: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/capture/status', methods=['GET'])
def get_capture_status():
"""
Obtenir le statut de la capture
Returns:
JSON: Statut détaillé de la capture
"""
try:
status = real_capture_service.get_status()
return jsonify({
"success": True,
"status": status
})
except Exception as e:
logger.error(f"Erreur lors de la récupération du statut: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/capture/screenshot', methods=['GET'])
def get_current_screenshot():
"""
Obtenir la capture d'écran actuelle
Returns:
JSON: Screenshot en base64 et éléments détectés
"""
try:
screenshot_base64 = real_capture_service.get_current_screenshot_base64()
detected_elements = real_capture_service.get_detected_elements()
if screenshot_base64 is None:
return jsonify({
"success": False,
"error": "Aucune capture d'écran disponible"
}), 404
return jsonify({
"success": True,
"screenshot": screenshot_base64,
"elements": detected_elements,
"timestamp": time.time()
})
except Exception as e:
logger.error(f"Erreur lors de la récupération de la capture: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/elements', methods=['GET'])
def get_detected_elements():
"""
Obtenir les éléments UI détectés sur l'écran actuel
Returns:
JSON: Liste des éléments UI détectés
"""
try:
elements = real_capture_service.get_detected_elements()
return jsonify({
"success": True,
"elements": elements,
"count": len(elements)
})
except Exception as e:
logger.error(f"Erreur lors de la récupération des éléments: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/interact/click', methods=['POST'])
def perform_click():
"""
Effectuer un clic sur l'écran réel
Body JSON:
- x: Coordonnée X du clic
- y: Coordonnée Y du clic
- element_id: (optionnel) ID de l'élément à cliquer
Returns:
JSON: Résultat de l'interaction
"""
try:
data = request.get_json()
if not data:
return jsonify({
"success": False,
"error": "Données JSON requises"
}), 400
# Méthode 1: Clic par coordonnées
if 'x' in data and 'y' in data:
x = float(data['x'])
y = float(data['y'])
# Utiliser pyautogui pour le clic réel
try:
import pyautogui
pyautogui.click(x, y)
return jsonify({
"success": True,
"message": f"Clic effectué à ({x}, {y})",
"method": "coordinates",
"coordinates": {"x": x, "y": y}
})
except ImportError:
return jsonify({
"success": False,
"error": "pyautogui non disponible pour les interactions réelles"
}), 500
# Méthode 2: Clic par élément ID
elif 'element_id' in data:
element_id = data['element_id']
elements = real_capture_service.get_detected_elements()
# Trouver l'élément
target_element = None
for element in elements:
if element.get('id') == element_id:
target_element = element
break
if not target_element:
return jsonify({
"success": False,
"error": f"Élément {element_id} non trouvé"
}), 404
# Calculer le centre de l'élément
bbox = target_element.get('bbox', {})
x = bbox.get('x', 0) + bbox.get('width', 0) / 2
y = bbox.get('y', 0) + bbox.get('height', 0) / 2
# Effectuer le clic
try:
import pyautogui
pyautogui.click(x, y)
return jsonify({
"success": True,
"message": f"Clic effectué sur l'élément {element_id}",
"method": "element",
"element_id": element_id,
"coordinates": {"x": x, "y": y}
})
except ImportError:
return jsonify({
"success": False,
"error": "pyautogui non disponible pour les interactions réelles"
}), 500
else:
return jsonify({
"success": False,
"error": "Coordonnées (x, y) ou element_id requis"
}), 400
except Exception as e:
logger.error(f"Erreur lors du clic: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
@real_demo_bp.route('/interact/type', methods=['POST'])
def perform_typing():
"""
Effectuer une saisie de texte sur l'écran réel
Body JSON:
- text: Texte à saisir
- x, y: (optionnel) Coordonnées où cliquer avant de saisir
- element_id: (optionnel) ID de l'élément où saisir
Returns:
JSON: Résultat de l'interaction
"""
try:
data = request.get_json()
if not data:
return jsonify({
"success": False,
"error": "Données JSON requises"
}), 400
text = data.get('text', '')
if not text:
return jsonify({
"success": False,
"error": "Texte à saisir requis"
}), 400
try:
import pyautogui
# Si coordonnées ou élément spécifié, cliquer d'abord
if 'x' in data and 'y' in data:
x = float(data['x'])
y = float(data['y'])
pyautogui.click(x, y)
time.sleep(0.2) # Attendre que le focus soit donné
elif 'element_id' in data:
element_id = data['element_id']
elements = real_capture_service.get_detected_elements()
# Trouver l'élément
target_element = None
for element in elements:
if element.get('id') == element_id:
target_element = element
break
if not target_element:
return jsonify({
"success": False,
"error": f"Élément {element_id} non trouvé"
}), 404
# Cliquer sur l'élément
bbox = target_element.get('bbox', {})
x = bbox.get('x', 0) + bbox.get('width', 0) / 2
y = bbox.get('y', 0) + bbox.get('height', 0) / 2
pyautogui.click(x, y)
time.sleep(0.2)
# Saisir le texte
pyautogui.write(text, interval=0.05)
return jsonify({
"success": True,
"message": f"Texte saisi: {text[:50]}{'...' if len(text) > 50 else ''}",
"text_length": len(text)
})
except ImportError:
return jsonify({
"success": False,
"error": "pyautogui non disponible pour les interactions réelles"
}), 500
except Exception as e:
logger.error(f"Erreur lors de la saisie: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
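Both `/interact/click` and `/interact/type` derive the click point as the center of the detected element's bounding box; factored out for clarity (the bbox keys follow the detection payload used above):

```python
def bbox_center(bbox: dict) -> tuple:
    """Center of a {'x', 'y', 'width', 'height'} bounding box, defaulting to 0."""
    x = bbox.get('x', 0) + bbox.get('width', 0) / 2
    y = bbox.get('y', 0) + bbox.get('height', 0) / 2
    return x, y
```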
@real_demo_bp.route('/workflow/execute', methods=['POST'])
def execute_workflow():
"""
Exécuter un workflow simple sur l'écran réel
Body JSON:
- actions: Liste d'actions à exécuter
- validate_elements: (optionnel) Valider la présence des éléments avant exécution
Returns:
JSON: Résultat de l'exécution du workflow
"""
try:
data = request.get_json()
if not data:
return jsonify({
"success": False,
"error": "Données JSON requises"
}), 400
actions = data.get('actions', [])
if not actions:
return jsonify({
"success": False,
"error": "Liste d'actions requise"
}), 400
validate_elements = data.get('validate_elements', True)
# Obtenir l'état actuel de l'écran
screenshot_base64 = real_capture_service.get_current_screenshot_base64()
detected_elements = real_capture_service.get_detected_elements()
if not screenshot_base64:
return jsonify({
"success": False,
"error": "Aucune capture d'écran disponible"
}), 400
# Créer un ScreenState temporaire (uniquement si les modules RPA Core sont chargés,
# sinon ScreenState vaut None et l'appel lèverait un TypeError)
screen_state = None
if RPA_CORE_AVAILABLE:
    screen_state = ScreenState(
        timestamp=time.time(),
        screenshot_path="",  # Pas de fichier, image en mémoire
        screenshot_data=None,  # Sera rempli si nécessaire
        ui_elements=detected_elements,
        metadata={"source": "real_demo_workflow"}
    )
results = []
# Exécuter chaque action
for i, action_data in enumerate(actions):
try:
action_type = action_data.get('type')
if action_type == 'click':
# Action de clic
if 'x' in action_data and 'y' in action_data:
import pyautogui
x = float(action_data['x'])
y = float(action_data['y'])
pyautogui.click(x, y)
results.append({
"action_index": i,
"type": "click",
"success": True,
"message": f"Clic à ({x}, {y})",
"coordinates": {"x": x, "y": y}
})
else:
results.append({
"action_index": i,
"type": "click",
"success": False,
"error": "Coordonnées x, y requises pour le clic"
})
elif action_type == 'type':
# Action de saisie
text = action_data.get('text', '')
if text:
import pyautogui
pyautogui.write(text, interval=0.05)
results.append({
"action_index": i,
"type": "type",
"success": True,
"message": f"Texte saisi: {text[:30]}{'...' if len(text) > 30 else ''}",
"text_length": len(text)
})
else:
results.append({
"action_index": i,
"type": "type",
"success": False,
"error": "Texte requis pour la saisie"
})
elif action_type == 'wait':
# Action d'attente
duration = float(action_data.get('duration', 1.0))
time.sleep(duration)
results.append({
"action_index": i,
"type": "wait",
"success": True,
"message": f"Attente de {duration}s",
"duration": duration
})
else:
results.append({
"action_index": i,
"type": action_type,
"success": False,
"error": f"Type d'action non supporté: {action_type}"
})
# Petite pause entre les actions
time.sleep(0.2)
except Exception as action_error:
logger.error(f"Erreur lors de l'exécution de l'action {i}: {action_error}")
results.append({
"action_index": i,
"type": action_data.get('type', 'unknown'),
"success": False,
"error": str(action_error)
})
# Calculer le résumé
successful_actions = sum(1 for r in results if r.get('success', False))
total_actions = len(results)
return jsonify({
"success": True,
"message": f"Workflow exécuté: {successful_actions}/{total_actions} actions réussies",
"results": results,
"summary": {
"total_actions": total_actions,
"successful_actions": successful_actions,
"failed_actions": total_actions - successful_actions,
"success_rate": successful_actions / total_actions if total_actions > 0 else 0
}
})
except Exception as e:
logger.error(f"Erreur lors de l'exécution du workflow: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
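The per-action dispatch in `/workflow/execute` can be checked without pyautogui by simulating each action and building the same summary; a dry-run sketch (illustrative only, no real mouse or keyboard events):

```python
def run_actions_dry(actions: list) -> dict:
    """Validate workflow actions and aggregate the same summary, without side effects."""
    results = []
    for i, action in enumerate(actions):
        kind = action.get('type')
        if kind == 'click':
            ok = 'x' in action and 'y' in action
        elif kind == 'type':
            ok = bool(action.get('text'))
        elif kind == 'wait':
            ok = True
        else:
            ok = False  # unsupported action type
        results.append({'action_index': i, 'type': kind, 'success': ok})
    successful = sum(1 for r in results if r['success'])
    return {
        'results': results,
        'summary': {
            'total_actions': len(results),
            'successful_actions': successful,
            'failed_actions': len(results) - successful,
            'success_rate': successful / len(results) if results else 0,
        },
    }
```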
@real_demo_bp.route('/safety/emergency-stop', methods=['POST'])
def emergency_stop():
"""
Arrêt d'urgence - Stoppe toutes les interactions en cours
Returns:
JSON: Confirmation de l'arrêt d'urgence
"""
try:
# Arrêter la capture
real_capture_service.stop_capture()
# Déplacer la souris dans un coin pour déclencher le failsafe de pyautogui
try:
import pyautogui
pyautogui.moveTo(0, 0)
except ImportError:
pass
return jsonify({
"success": True,
"message": "Arrêt d'urgence activé - Toutes les interactions stoppées"
})
except Exception as e:
logger.error(f"Erreur lors de l'arrêt d'urgence: {e}")
return jsonify({
"success": False,
"error": str(e)
}), 500
# Gestionnaire d'erreur pour le blueprint
@real_demo_bp.errorhandler(Exception)
def handle_error(error):
"""Gestionnaire d'erreur global pour l'API real_demo"""
logger.error(f"Erreur non gérée dans real_demo API: {error}", exc_info=True)
return jsonify({
"success": False,
"error": "Erreur interne du serveur",
"details": str(error)
}), 500


@@ -0,0 +1,499 @@
"""
Screen Capture API - Visual Workflow Builder
Provides endpoints for screen capture and UI element detection
to support interactive target selection.
Exigences: 4.1, 4.2, 4.3, 4.4, 4.5
"""
from flask import Blueprint, request, jsonify, send_file
from flask_cors import cross_origin
import sys
import os
import io
import base64
from datetime import datetime
from typing import Dict, List, Any, Optional
# Add parent directory to path to import core modules (avoid duplicate entries)
_project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../'))
if _project_root not in sys.path:
    sys.path.insert(0, _project_root)
try:
from core.capture.screen_capturer import ScreenCapturer
from core.detection.ui_detector import UIDetector
from core.models.screen_state import ScreenState
from core.embedding.fusion_engine import FusionEngine
CORE_AVAILABLE = True
except ImportError as e:
print(f"Warning: Core modules not available: {e}")
CORE_AVAILABLE = False
screen_capture_bp = Blueprint('screen_capture', __name__)
# Initialize components if available
ui_detector = UIDetector() if CORE_AVAILABLE else None
def get_screen_capturer():
"""Create a new ScreenCapturer instance for each request to avoid threading issues"""
if CORE_AVAILABLE:
return ScreenCapturer()
return None
@screen_capture_bp.route('/capture', methods=['POST'])
@cross_origin()
def capture_screen():
"""
Capture the current screen
Request body:
{
"region": { // Optional - capture specific region
"x": 0,
"y": 0,
"width": 1920,
"height": 1080
}
}
Returns:
{
"image": "base64_encoded_image",
"width": 1920,
"height": 1080,
"format": "png"
}
"""
if not CORE_AVAILABLE:
return jsonify({
'error': 'Screen capture not available',
'message': 'Core modules are not properly configured'
}), 503
try:
data = request.get_json() or {}
region = data.get('region')
# Create a new capturer instance for this request
screen_capturer = get_screen_capturer()
if not screen_capturer:
return jsonify({
'error': 'Screen capture not available',
'message': 'Could not initialize screen capturer'
}), 503
# Capture screen
if region:
# Note: capture_region not implemented in ScreenCapturer, use full capture
screenshot_array = screen_capturer.capture()
else:
screenshot_array = screen_capturer.capture()
if screenshot_array is None:
return jsonify({
'error': 'Capture failed',
'message': 'Could not capture screen'
}), 500
# Convert numpy array to PIL Image
from PIL import Image
screenshot = Image.fromarray(screenshot_array)
# Convert to base64
img_buffer = io.BytesIO()
screenshot.save(img_buffer, format='PNG')
img_buffer.seek(0)
img_base64 = base64.b64encode(img_buffer.read()).decode('utf-8')
return jsonify({
'image': f'data:image/png;base64,{img_base64}',
'width': screenshot.width,
'height': screenshot.height,
'format': 'png'
})
except Exception as e:
return jsonify({
'error': 'Capture failed',
'message': str(e)
}), 500
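The endpoint returns the screenshot as a `data:` URI; encoding and decoding are symmetric, as this sketch shows (arbitrary bytes stand in for real PNG data):

```python
import base64

def to_data_uri(png_bytes: bytes) -> str:
    """Wrap raw PNG bytes in the data URI shape returned by /capture."""
    return 'data:image/png;base64,' + base64.b64encode(png_bytes).decode('utf-8')

def from_data_uri(uri: str) -> bytes:
    """Invert to_data_uri: strip the header, decode the base64 payload."""
    payload = uri.split(',', 1)[1] if uri.startswith('data:image') else uri
    return base64.b64decode(payload)
```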
@screen_capture_bp.route('/detect-elements', methods=['POST'])
@cross_origin()
def detect_elements():
"""
Detect UI elements in a screenshot
Request body:
{
"image": "base64_encoded_image", // Optional - will capture if not provided
"region": { // Optional
"x": 0,
"y": 0,
"width": 1920,
"height": 1080
}
}
Returns:
{
"elements": [
{
"id": "element-1",
"type": "button",
"bounds": {"x": 100, "y": 200, "width": 80, "height": 30},
"text": "Submit",
"confidence": 0.95,
"selectors": {
"css": "button.submit",
"xpath": "//button[@class='submit']"
}
}
]
}
"""
if not CORE_AVAILABLE or not ui_detector:
return jsonify({
'error': 'UI detection not available',
'message': 'Core modules are not properly configured'
}), 503
try:
data = request.get_json() or {}
# Get or capture screenshot
screenshot_array = None
if 'image' in data:
# Decode base64 image
img_data = data['image']
if img_data.startswith('data:image'):
img_data = img_data.split(',')[1]
img_bytes = base64.b64decode(img_data)
# Convert to numpy array
from PIL import Image
import numpy as np
img = Image.open(io.BytesIO(img_bytes))
screenshot_array = np.array(img)
else:
# Capture new screenshot
screen_capturer = get_screen_capturer()
if not screen_capturer:
return jsonify({
'error': 'Screen capture not available',
'message': 'Could not initialize screen capturer'
}), 503
region = data.get('region')
if region:
# Note: capture_region not implemented in ScreenCapturer, use full capture
screenshot_array = screen_capturer.capture()
else:
screenshot_array = screen_capturer.capture()
if screenshot_array is None:
return jsonify({
'error': 'Screenshot capture failed',
'message': 'Could not obtain screenshot'
}), 500
# Create ScreenState for detection
try:
screen_state = ScreenState(
screenshot=screenshot_array,
timestamp=datetime.now(),
window_title="Visual Workflow Builder",
resolution=(screenshot_array.shape[1], screenshot_array.shape[0])
)
# Detect UI elements using the core system
detected_elements = ui_detector.detect_elements(screen_state)
# Format response
formatted_elements = []
for i, element in enumerate(detected_elements):
formatted_element = {
'id': f'element-{i}',
'type': getattr(element, 'element_type', 'generic'),
'bounds': {
'x': int(element.bbox.x),
'y': int(element.bbox.y),
'width': int(element.bbox.width),
'height': int(element.bbox.height)
},
'text': getattr(element, 'text', ''),
'confidence': getattr(element, 'confidence', 0.8),
'selectors': generate_selectors({
'type': getattr(element, 'element_type', 'generic'),
'text': getattr(element, 'text', ''),
'id': getattr(element, 'element_id', ''),
'classes': getattr(element, 'classes', []),
'attributes': getattr(element, 'attributes', {})
})
}
formatted_elements.append(formatted_element)
return jsonify({
'elements': formatted_elements,
'count': len(formatted_elements)
})
except Exception as detection_error:
# Fallback to mock elements for testing
print(f"Detection error: {detection_error}")
formatted_elements = [
{
'id': 'mock-element-1',
'type': 'button',
'bounds': {'x': 100, 'y': 100, 'width': 80, 'height': 30},
'text': 'Test Button',
'confidence': 0.9,
'selectors': {
'css': 'button.test',
'xpath': '//button[@class="test"]',
'text': 'Test Button'
}
}
]
return jsonify({
'elements': formatted_elements,
'count': len(formatted_elements),
'warning': 'Using mock data due to detection error'
})
except Exception as e:
return jsonify({
'error': 'Detection failed',
'message': str(e)
}), 500
@screen_capture_bp.route('/element-at-point', methods=['POST'])
@cross_origin()
def element_at_point():
"""
Get the UI element at a specific point
Request body:
{
"x": 500,
"y": 300,
"image": "base64_encoded_image" // Optional
}
Returns:
{
"element": {
"id": "element-1",
"type": "button",
"bounds": {"x": 100, "y": 200, "width": 80, "height": 30},
"text": "Submit",
"confidence": 0.95,
"selectors": {
"css": "button.submit",
"xpath": "//button[@class='submit']",
"text": "Submit"
},
"properties": {
"tag": "button",
"classes": ["btn", "btn-primary"],
"id": "submit-btn",
"attributes": {
"type": "submit",
"data-testid": "submit-button"
}
}
}
}
"""
if not CORE_AVAILABLE or not ui_detector:
return jsonify({
'error': 'UI detection not available',
'message': 'Core modules are not properly configured'
}), 503
try:
data = request.get_json()
x = data.get('x')
y = data.get('y')
if x is None or y is None:
return jsonify({
'error': 'Invalid request',
'message': 'x and y coordinates are required'
}), 400
# Get or capture screenshot
if 'image' in data:
img_data = data['image']
if img_data.startswith('data:image'):
img_data = img_data.split(',')[1]
img_bytes = base64.b64decode(img_data)
screenshot = io.BytesIO(img_bytes)
else:
screen_capturer = get_screen_capturer()
if not screen_capturer:
return jsonify({
'error': 'Screen capture not available',
'message': 'Could not initialize screen capturer'
}), 503
screenshot = screen_capturer.capture()
# Note: UIDetector.detect expects a file path, not an image object,
# so real detection is not wired up yet; return a mock element at the clicked point
mock_element = {
'id': f'element-{x}-{y}',
'type': 'generic',
'bounds': {'x': x-10, 'y': y-10, 'width': 20, 'height': 20},
'text': f'Element at ({x},{y})',
'confidence': 0.8,
'selectors': {
'css': f'[data-x="{x}"][data-y="{y}"]',
'xpath': f'//*[@data-x="{x}" and @data-y="{y}"]',
'text': f'Element at ({x},{y})'
},
'properties': {
'tag': 'div',
'classes': [],
'id': f'elem-{x}-{y}',
'attributes': {'data-x': str(x), 'data-y': str(y)},
'text': f'Element at ({x},{y})',
'bounds': {'x': x-10, 'y': y-10, 'width': 20, 'height': 20},
'visible': True,
'enabled': True
}
}
return jsonify({
'element': mock_element
})
except Exception as e:
return jsonify({
'error': 'Detection failed',
'message': str(e)
}), 500
def generate_selectors(element: Dict[str, Any]) -> Dict[str, str]:
"""
Generate multiple selector strategies for an element
Exigence: 4.5
"""
selectors = {}
# Extract element properties
elem_type = element.get('type', 'unknown')
text = element.get('text', '')
elem_id = element.get('id', '')
classes = element.get('classes', [])
attributes = element.get('attributes', {})
# CSS selectors (in order of preference)
if elem_id:
selectors['css_id'] = f'#{elem_id}'
if 'data-testid' in attributes:
selectors['css_testid'] = f'[data-testid="{attributes["data-testid"]}"]'
if classes:
selectors['css_class'] = f'{elem_type}.{".".join(classes)}'
if text:
selectors['css_text'] = f'{elem_type}:contains("{text}")'
# Default CSS selector
selectors['css'] = f'#{elem_id}' if elem_id else (
f'{elem_type}.{classes[0]}' if classes else elem_type
)
# XPath selectors
if elem_id:
selectors['xpath_id'] = f'//{elem_type}[@id="{elem_id}"]'
if text:
selectors['xpath_text'] = f'//{elem_type}[contains(text(), "{text}")]'
if 'data-testid' in attributes:
selectors['xpath_testid'] = f'//{elem_type}[@data-testid="{attributes["data-testid"]}"]'
# Default XPath
selectors['xpath'] = selectors.get('xpath_id') or selectors.get('xpath_testid') or f'//{elem_type}'
# Text-based selector
if text:
selectors['text'] = text
return selectors
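To make the strategy order concrete, here is a small self-contained sketch of the strings these rules produce for a hypothetical button element (the element shape and all values are illustrative assumptions):

```python
# Sketch: selector strings the rules above would build for a sample element.
element = {
    'type': 'button',
    'id': 'submit-btn',
    'text': 'Submit',
    'classes': ['primary', 'large'],
    'attributes': {'data-testid': 'submit'},
}

# CSS strategies, in order of preference
css_id = f"#{element['id']}"
css_testid = f'[data-testid="{element["attributes"]["data-testid"]}"]'
css_class = f"{element['type']}.{'.'.join(element['classes'])}"

# XPath strategies
xpath_id = f'//{element["type"]}[@id="{element["id"]}"]'
xpath_text = f'//{element["type"]}[contains(text(), "{element["text"]}")]'

print(css_id, css_class, xpath_id)
```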
def extract_properties(element: Dict[str, Any]) -> Dict[str, Any]:
"""
Extract detailed properties from an element
Requirement: 4.5
"""
return {
'tag': element.get('type', 'unknown'),
'classes': element.get('classes', []),
'id': element.get('id', ''),
'attributes': element.get('attributes', {}),
'text': element.get('text', ''),
'bounds': element.get('bounds', {}),
'visible': element.get('visible', True),
'enabled': element.get('enabled', True),
}
@screen_capture_bp.route('/validate-selector', methods=['POST'])
@cross_origin()
def validate_selector():
"""
Validate a selector and count matching elements
Request body:
{
"selector": "button.submit",
"type": "css", // or "xpath"
"image": "base64_encoded_image" // Optional
}
Returns:
{
"valid": true,
"count": 1,
"elements": [...] // Optional - matching elements
}
"""
try:
data = request.get_json()
selector = data.get('selector')
selector_type = data.get('type', 'css')
if not selector:
return jsonify({
'error': 'Invalid request',
'message': 'selector is required'
}), 400
# For now, return a mock validation
# In production, this would use the actual UI detection
return jsonify({
'valid': True,
'count': 1,
'message': f'{selector_type.upper()} selector is valid'
})
except Exception as e:
return jsonify({
'error': 'Validation failed',
'message': str(e)
}), 500
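For reference, a client-side sketch of the JSON body this endpoint expects; the endpoint URL prefix and port are assumptions, so the actual request is left commented out:

```python
import json

# Request body for the selector-validation endpoint above.
payload = {
    'selector': 'button.submit',
    'type': 'css',  # or 'xpath'
}
body = json.dumps(payload)

# Hypothetical call (prefix/port are assumptions):
# import requests
# requests.post('http://localhost:5001/api/screen-capture/validate-selector',
#               data=body, headers={'Content-Type': 'application/json'})
print(body)
```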


@@ -0,0 +1,279 @@
"""Self-healing API endpoints for Visual Workflow Builder."""
from flask import Blueprint, request, jsonify
from typing import Dict, Any
import logging
from services.self_healing_integration import get_self_healing_service
from models.self_healing_config import SelfHealingConfig, RecoveryStrategy, RecoveryMode
logger = logging.getLogger(__name__)
# Create blueprint
self_healing_bp = Blueprint('self_healing', __name__, url_prefix='/api/self-healing')
@self_healing_bp.route('/config/defaults/<node_type>', methods=['GET'])
def get_default_config(node_type: str):
"""
Get default self-healing configuration for a node type.
Args:
node_type: Type of node (click, type, wait, etc.)
"""
try:
service = get_self_healing_service()
config = service.configure_node_self_healing(node_type)
return jsonify({
'success': True,
'config': config.to_dict(),
'available_strategies': [s.value for s in RecoveryStrategy],
'available_modes': [m.value for m in RecoveryMode]
})
except Exception as e:
logger.error(f"Failed to get default config for {node_type}: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
@self_healing_bp.route('/config/validate', methods=['POST'])
def validate_config():
"""
Validate a self-healing configuration.
"""
try:
data = request.get_json()
# Validate required fields
if not data:
return jsonify({
'success': False,
'error': 'Configuration data required'
}), 400
# Try to create config from data
config = SelfHealingConfig.from_dict(data)
# Perform validation
errors = []
if config.max_attempts < 1 or config.max_attempts > 10:
errors.append('max_attempts must be between 1 and 10')
if config.confidence_threshold < 0.0 or config.confidence_threshold > 1.0:
errors.append('confidence_threshold must be between 0.0 and 1.0')
if config.strategy_timeout < 1.0 or config.strategy_timeout > 300.0:
errors.append('strategy_timeout must be between 1.0 and 300.0 seconds')
if not config.enabled_strategies:
errors.append('At least one recovery strategy must be enabled')
return jsonify({
'success': len(errors) == 0,
'errors': errors,
'config': config.to_dict() if len(errors) == 0 else None
})
except Exception as e:
logger.error(f"Failed to validate config: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
@self_healing_bp.route('/suggestions', methods=['POST'])
def get_recovery_suggestions():
"""
Get recovery suggestions for a node.
"""
try:
data = request.get_json()
# Validate required fields
required_fields = ['workflow_id', 'node_id', 'action_info', 'screenshot_path']
for field in required_fields:
if field not in data:
return jsonify({
'success': False,
'error': f'Missing required field: {field}'
}), 400
service = get_self_healing_service()
suggestions = service.get_recovery_suggestions(
workflow_id=data['workflow_id'],
node_id=data['node_id'],
action_info=data['action_info'],
screenshot_path=data['screenshot_path']
)
return jsonify({
'success': True,
'suggestions': suggestions
})
except Exception as e:
logger.error(f"Failed to get recovery suggestions: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
@self_healing_bp.route('/notifications', methods=['GET'])
def get_notifications():
"""
Get recovery notifications.
"""
try:
workflow_id = request.args.get('workflow_id')
limit = int(request.args.get('limit', 50))
service = get_self_healing_service()
notifications = service.get_notifications(
workflow_id=workflow_id,
limit=limit
)
return jsonify({
'success': True,
'notifications': notifications
})
except Exception as e:
logger.error(f"Failed to get notifications: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
@self_healing_bp.route('/notifications', methods=['DELETE'])
def clear_notifications():
"""
Clear recovery notifications.
"""
try:
workflow_id = request.args.get('workflow_id')
service = get_self_healing_service()
service.clear_notifications(workflow_id=workflow_id)
return jsonify({
'success': True,
'message': 'Notifications cleared'
})
except Exception as e:
logger.error(f"Failed to clear notifications: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
@self_healing_bp.route('/statistics', methods=['GET'])
def get_statistics():
"""
Get recovery statistics.
"""
try:
service = get_self_healing_service()
statistics = service.get_statistics()
return jsonify({
'success': True,
'statistics': statistics
})
except Exception as e:
logger.error(f"Failed to get statistics: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
@self_healing_bp.route('/insights', methods=['GET'])
def get_insights():
"""
Get insights from recovery patterns.
"""
try:
service = get_self_healing_service()
insights = service.get_insights()
return jsonify({
'success': True,
'insights': insights
})
except Exception as e:
logger.error(f"Failed to get insights: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
@self_healing_bp.route('/status', methods=['GET'])
def get_status():
"""
Get self-healing system status.
"""
try:
service = get_self_healing_service()
return jsonify({
'success': True,
'status': {
'enabled': service.enabled,
'core_available': service.enabled,
'notifications_count': len(service.notifications),
'statistics': service.statistics.to_dict()
}
})
except Exception as e:
logger.error(f"Failed to get status: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
@self_healing_bp.route('/test', methods=['POST'])
def test_recovery():
"""
Test recovery functionality with mock data.
"""
try:
data = request.get_json()
# Mock recovery test
mock_result = {
'success': True,
'strategy_used': 'SemanticVariantStrategy',
'confidence': 0.85,
'execution_time': 2.3,
'message': '✅ Test de récupération réussi avec variante sémantique',
'test_mode': True
}
return jsonify({
'success': True,
'result': mock_result,
'message': 'Test de récupération terminé avec succès'
})
except Exception as e:
logger.error(f"Failed to test recovery: {e}")
return jsonify({
'success': False,
'error': str(e)
}), 500
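As a sketch, here is a configuration payload that passes the bounds enforced by `/config/validate` above, with the same checks reproduced client-side (field names follow `SelfHealingConfig.from_dict` as used here; the exact schema is an assumption):

```python
# Candidate config respecting the server-side bounds above.
config = {
    'max_attempts': 3,             # allowed range: 1..10
    'confidence_threshold': 0.75,  # allowed range: 0.0..1.0
    'strategy_timeout': 30.0,      # seconds, allowed range: 1.0..300.0
    'enabled_strategies': ['SemanticVariantStrategy'],  # must be non-empty
}

errors = []
if not 1 <= config['max_attempts'] <= 10:
    errors.append('max_attempts must be between 1 and 10')
if not 0.0 <= config['confidence_threshold'] <= 1.0:
    errors.append('confidence_threshold must be between 0.0 and 1.0')
if not 1.0 <= config['strategy_timeout'] <= 300.0:
    errors.append('strategy_timeout must be between 1.0 and 300.0 seconds')
if not config['enabled_strategies']:
    errors.append('At least one recovery strategy must be enabled')

print(errors)  # empty list when the config is valid
```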


@@ -0,0 +1,226 @@
"""
Templates API Blueprint
Provides REST endpoints for workflow template management.
"""
from flask import Blueprint, request, jsonify
import logging
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from services.template_service import TemplateService
from services.serialization import WorkflowSerializer
from api.errors import handle_api_error
templates_bp = Blueprint('templates', __name__)
logger = logging.getLogger(__name__)
# Initialize services
template_service = TemplateService()
workflow_serializer = WorkflowSerializer()
@templates_bp.route('/', methods=['GET'])
def list_templates():
"""List all templates with optional filtering"""
try:
# Get query parameters
category = request.args.get('category')
difficulty = request.args.get('difficulty')
# Get templates
templates = template_service.list_templates(category=category, difficulty=difficulty)
# Convert to dict format
result = []
for template in templates:
template_dict = template.to_dict()
# Remove the full workflow from list view for performance
template_dict.pop('workflow', None)
result.append(template_dict)
return jsonify({
'templates': result,
'count': len(result)
})
except Exception as e:
logger.error(f"Error listing templates: {e}")
return handle_api_error(e)
@templates_bp.route('/', methods=['POST'])
def create_template():
"""Create a new template"""
try:
data = request.get_json()
if not data:
return jsonify({'error': 'No data provided'}), 400
# Create template
template = template_service.create_template(data)
logger.info(f"Created template: {template.id}")
return jsonify(template.to_dict()), 201
except ValueError as e:
logger.warning(f"Template validation error: {e}")
return jsonify({'error': str(e)}), 400
except Exception as e:
logger.error(f"Error creating template: {e}")
return handle_api_error(e)
@templates_bp.route('/<template_id>', methods=['GET'])
def get_template(template_id):
"""Get a specific template"""
try:
template = template_service.get_template(template_id)
if not template:
return jsonify({'error': 'Template not found'}), 404
return jsonify(template.to_dict())
except Exception as e:
logger.error(f"Error getting template {template_id}: {e}")
return handle_api_error(e)
@templates_bp.route('/<template_id>', methods=['PUT'])
def update_template(template_id):
"""Update an existing template"""
try:
data = request.get_json()
if not data:
return jsonify({'error': 'No data provided'}), 400
template = template_service.update_template(template_id, data)
if not template:
return jsonify({'error': 'Template not found'}), 404
logger.info(f"Updated template: {template_id}")
return jsonify(template.to_dict())
except ValueError as e:
logger.warning(f"Template validation error: {e}")
return jsonify({'error': str(e)}), 400
except Exception as e:
logger.error(f"Error updating template {template_id}: {e}")
return handle_api_error(e)
@templates_bp.route('/<template_id>', methods=['DELETE'])
def delete_template(template_id):
"""Delete a template"""
try:
success = template_service.delete_template(template_id)
if not success:
return jsonify({'error': 'Template not found'}), 404
logger.info(f"Deleted template: {template_id}")
return jsonify({'message': 'Template deleted successfully'})
except Exception as e:
logger.error(f"Error deleting template {template_id}: {e}")
return handle_api_error(e)
@templates_bp.route('/<template_id>/instantiate', methods=['POST'])
def instantiate_template(template_id):
"""Create a workflow from a template"""
try:
data = request.get_json()
if not data:
return jsonify({'error': 'No data provided'}), 400
# Extract parameters
parameters = data.get('parameters', {})
workflow_name = data.get('name', f'Workflow from template {template_id}')
created_by = data.get('created_by', 'user')
# Create workflow instance
workflow = template_service.instantiate_template(
template_id, parameters, workflow_name, created_by
)
if not workflow:
return jsonify({'error': 'Template not found'}), 404
# Save the workflow
workflow_data = workflow_serializer.serialize(workflow)
workflow_id = workflow_serializer.save_workflow(workflow_data)
logger.info(f"Instantiated template {template_id} as workflow {workflow_id}")
return jsonify({
'workflow_id': workflow_id,
'workflow': workflow.to_dict()
}), 201
except ValueError as e:
logger.warning(f"Template instantiation error: {e}")
return jsonify({'error': str(e)}), 400
except Exception as e:
logger.error(f"Error instantiating template {template_id}: {e}")
return handle_api_error(e)
@templates_bp.route('/from-workflow', methods=['POST'])
def create_template_from_workflow():
"""Create a template from an existing workflow"""
try:
data = request.get_json()
if not data:
return jsonify({'error': 'No data provided'}), 400
# Extract required fields
workflow_id = data.get('workflow_id')
template_name = data.get('template_name')
template_description = data.get('template_description')
category = data.get('category', 'Custom')
parameters = data.get('parameters', [])
if not all([workflow_id, template_name, template_description]):
return jsonify({
'error': 'workflow_id, template_name, and template_description are required'
}), 400
# Load the workflow
workflow_data = workflow_serializer.load_workflow(workflow_id)
if not workflow_data:
return jsonify({'error': 'Workflow not found'}), 404
workflow = workflow_serializer.deserialize(workflow_data)
# Create template
template = template_service.create_template_from_workflow(
workflow, template_name, template_description, category, parameters
)
logger.info(f"Created template {template.id} from workflow {workflow_id}")
return jsonify(template.to_dict()), 201
except ValueError as e:
logger.warning(f"Template creation error: {e}")
return jsonify({'error': str(e)}), 400
except Exception as e:
logger.error(f"Error creating template from workflow: {e}")
return handle_api_error(e)
@templates_bp.route('/categories', methods=['GET'])
def get_template_categories():
"""Get all available template categories"""
try:
templates = template_service.list_templates()
categories = list(set(template.category for template in templates))
categories.sort()
return jsonify({
'categories': categories
})
except Exception as e:
logger.error(f"Error getting template categories: {e}")
return handle_api_error(e)
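A sketch of the request body consumed by `POST /<template_id>/instantiate` above; parameter names and values are illustrative, and the snippet mirrors the server-side defaults applied when fields are omitted:

```python
# Hypothetical request body for template instantiation.
request_body = {
    'name': 'Monthly report workflow',
    'created_by': 'dom',
    'parameters': {  # filled into the template's placeholders
        'input_folder': 'C:/reports',
        'month': '2026-01',
    },
}

# Defaults applied server-side when fields are missing:
workflow_name = request_body.get('name', 'Workflow from template <id>')
created_by = request_body.get('created_by', 'user')
print(workflow_name, created_by)
```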


@@ -167,6 +167,101 @@ def get_status():
}), 500
@ui_detection_bp.route('/intelligent-click', methods=['POST'])
@cross_origin()
def intelligent_click():
"""
Trouve une ancre et effectue un clic intelligent.
Request body (JSON):
- anchor_image_base64: Image de l'ancre à trouver (requis)
- anchor_bbox: Bounding box originale {x, y, width, height} (optionnel)
- method: Méthode de matching 'template', 'clip', 'hybrid' (défaut: 'template')
- click_type: Type de clic 'left', 'right', 'double' (défaut: 'left')
- execute_click: Effectuer le clic ou juste retourner les coordonnées (défaut: true)
- threshold: Seuil de détection (défaut: 0.35)
Response:
- success: bool
- result: {
found: bool,
coordinates: {x, y},
confidence: float,
clicked: bool,
method: str,
search_time_ms: float
}
"""
try:
data = request.get_json()
if not data or 'anchor_image_base64' not in data:
return jsonify({
'success': False,
'error': 'anchor_image_base64 est requis'
}), 400
anchor_image_base64 = data['anchor_image_base64']
anchor_bbox = data.get('anchor_bbox')
method = data.get('method', 'template')
click_type = data.get('click_type', 'left')
execute_click = data.get('execute_click', True)
threshold = data.get('threshold', 0.35)
# Import the intelligent execution service
from services.intelligent_executor import find_and_click
# Find the anchor
result = find_and_click(
anchor_image_base64=anchor_image_base64,
anchor_bbox=anchor_bbox,
method=method,
detection_threshold=threshold
)
# Perform the click if requested and the anchor was found
clicked = False
if execute_click and result['found'] and result['coordinates']:
try:
import pyautogui
x, y = result['coordinates']['x'], result['coordinates']['y']
if click_type == 'left':
pyautogui.click(x, y)
elif click_type == 'right':
pyautogui.rightClick(x, y)
elif click_type == 'double':
pyautogui.doubleClick(x, y)
clicked = True
print(f"✅ Clic intelligent {click_type} à ({x}, {y}) - confiance: {result['confidence']:.2f}")
except Exception as click_err:
print(f"❌ Erreur clic: {click_err}")
return jsonify({
'success': True,
'result': {
'found': result['found'],
'coordinates': result['coordinates'],
'bbox': result.get('bbox'),
'confidence': result['confidence'],
'clicked': clicked,
'method': result.get('method', method),
'search_time_ms': result.get('search_time_ms', 0),
'candidates': result.get('candidates', [])
}
})
except Exception as e:
import traceback
traceback.print_exc()
return jsonify({
'success': False,
'error': str(e)
}), 500
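A client-side sketch of the payload for `/intelligent-click`; the bytes below are a placeholder, not a valid image, since a real call would send the base64-encoded crop of the target element:

```python
import base64

# Placeholder bytes stand in for a real cropped-screenshot image.
anchor_b64 = base64.b64encode(b'\x89PNG placeholder').decode('ascii')

payload = {
    'anchor_image_base64': anchor_b64,
    'anchor_bbox': {'x': 120, 'y': 80, 'width': 40, 'height': 24},
    'method': 'template',    # or 'clip', 'hybrid'
    'click_type': 'left',    # or 'right', 'double'
    'execute_click': False,  # dry run: only return coordinates
    'threshold': 0.35,
}
print(sorted(payload))
```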
@ui_detection_bp.route('/find-element', methods=['POST'])
@cross_origin()
def find_element():


@@ -0,0 +1,137 @@
"""backend/api/validation.py
Validation légère pour les payloads de l'API workflows.
Auteur : Dom, Alice, Kiro - 08 janvier 2026
Patch #1:
- Ce module manquait et bloquait le boot via api/__init__.py
- On reste volontairement permissif (on valide les essentiels)
"""
from __future__ import annotations
from typing import Any, Dict, Iterable
from .errors import ValidationError
_ALLOWED_UPDATE_FIELDS = {
"name",
"description",
"nodes",
"edges",
"variables",
"settings",
"tags",
"category",
"is_template",
# Additional fields sent by the VWB frontend
"id",
"steps",
"connections",
}
def _ensure_dict(data: Any, context: str = "payload") -> Dict[str, Any]:
"""S'assure que les données sont un dictionnaire."""
if not isinstance(data, dict):
raise ValidationError(f"{context} doit être un objet")
return data
def _ensure_list(value: Any, context: str) -> Iterable[Any]:
"""S'assure que la valeur est une liste."""
if value is None:
return []
if not isinstance(value, list):
raise ValidationError(f"{context} doit être un tableau")
return value
def validate_workflow_data(data: Any) -> None:
"""Valide les données d'un workflow lors de la création."""
data = _ensure_dict(data, "workflow")
# Champs requis (création)
if "name" not in data or not str(data.get("name") or "").strip():
raise ValidationError("Le champ 'name' est requis")
if "created_by" not in data or not str(data.get("created_by") or "").strip():
raise ValidationError("Le champ 'created_by' est requis")
# Optional structured fields
if "nodes" in data:
for n in _ensure_list(data.get("nodes"), "nodes"):
validate_node_data(n)
if "edges" in data:
for e in _ensure_list(data.get("edges"), "edges"):
validate_edge_data(e)
if "variables" in data:
for v in _ensure_list(data.get("variables"), "variables"):
validate_variable_data(v)
if "settings" in data and data.get("settings") is not None:
validate_settings_data(data.get("settings"))
if "tags" in data and data.get("tags") is not None and not isinstance(data.get("tags"), list):
raise ValidationError("Le champ 'tags' doit être un tableau")
def validate_update_data(data: Any) -> None:
"""Valide les données d'un workflow lors de la mise à jour."""
data = _ensure_dict(data, "update")
unknown = set(data.keys()) - _ALLOWED_UPDATE_FIELDS
if unknown:
raise ValidationError(f"Champ(s) inconnu(s) dans la mise à jour: {', '.join(sorted(unknown))}")
if "nodes" in data:
for n in _ensure_list(data.get("nodes"), "nodes"):
validate_node_data(n)
if "edges" in data:
for e in _ensure_list(data.get("edges"), "edges"):
validate_edge_data(e)
if "variables" in data:
for v in _ensure_list(data.get("variables"), "variables"):
validate_variable_data(v)
if "settings" in data and data.get("settings") is not None:
validate_settings_data(data.get("settings"))
if "tags" in data and data.get("tags") is not None and not isinstance(data.get("tags"), list):
raise ValidationError("Le champ 'tags' doit être un tableau")
def validate_node_data(node: Any) -> None:
"""Valide les données d'un nœud."""
node = _ensure_dict(node, "node")
if "id" not in node or not str(node.get("id") or "").strip():
raise ValidationError("Le champ 'id' du nœud est requis")
if "type" not in node or not str(node.get("type") or "").strip():
# ReactFlow nodes always have a type ('default' is fine)
raise ValidationError("Le champ 'type' du nœud est requis")
def validate_edge_data(edge: Any) -> None:
"""Valide les données d'une connexion."""
edge = _ensure_dict(edge, "edge")
if "source" not in edge or not str(edge.get("source") or "").strip():
raise ValidationError("Le champ 'source' de la connexion est requis")
if "target" not in edge or not str(edge.get("target") or "").strip():
raise ValidationError("Le champ 'target' de la connexion est requis")
def validate_variable_data(variable: Any) -> None:
"""Valide les données d'une variable."""
variable = _ensure_dict(variable, "variable")
if "name" not in variable and "key" not in variable:
raise ValidationError("Le champ 'name' de la variable est requis")
def validate_settings_data(settings: Any) -> None:
"""Valide les données des paramètres."""
_ensure_dict(settings, "settings")
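A minimal, standalone illustration of how these validators behave; `ValidationError` and the edge validator are re-created here (with English messages) so the snippet runs on its own:

```python
# Standalone re-creation of the edge validator above.
class ValidationError(Exception):
    pass

def validate_edge_data(edge):
    if not isinstance(edge, dict):
        raise ValidationError("edge must be an object")
    for field in ('source', 'target'):
        if not str(edge.get(field) or '').strip():
            raise ValidationError(f"edge field '{field}' is required")

validate_edge_data({'source': 'node-1', 'target': 'node-2'})  # passes silently

try:
    validate_edge_data({'source': 'node-1'})
    caught = None
except ValidationError as exc:
    caught = str(exc)
print(caught)
```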


@@ -0,0 +1,417 @@
"""
API endpoints pour la gestion des cibles visuelles
Intégration avec le VisualTargetManager du core RPA Vision V3
"""
from __future__ import annotations
from flask import Blueprint, request, jsonify
from typing import Dict, Any, List
import asyncio
import logging
import sys
import os
# Add project root to path to import core modules (best-effort)
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../../')))
try:
from core.visual.visual_target_manager import VisualTargetManager, VisualTarget, ValidationResult
from core.models import Point, BBox
from core.capture.screen_capturer import ScreenCapturer
from core.detection.ui_detector import UIDetector
from core.embedding.fusion_engine import FusionEngine
CORE_AVAILABLE = True
except ImportError as e:
print(f"Warning: Core modules not available (visual_targets): {e}")
CORE_AVAILABLE = False
VisualTargetManager = None # type: ignore
VisualTarget = None # type: ignore
ValidationResult = None # type: ignore
Point = None # type: ignore
BBox = None # type: ignore
ScreenCapturer = None # type: ignore
UIDetector = None # type: ignore
FusionEngine = None # type: ignore
logger = logging.getLogger(__name__)
# Create the blueprint for the visual endpoints
visual_targets_bp = Blueprint('visual_targets', __name__, url_prefix='/api/visual')
# Global VisualTargetManager instance (initialized in app.py)
visual_target_manager: VisualTargetManager = None
def init_visual_target_manager(screen_capturer: ScreenCapturer,
ui_detector: UIDetector,
fusion_engine: FusionEngine):
"""Initialise le VisualTargetManager avec les dépendances"""
if not CORE_AVAILABLE or VisualTargetManager is None:
raise RuntimeError("Core RPA modules not available - cannot init VisualTargetManager")
global visual_target_manager
visual_target_manager = VisualTargetManager(screen_capturer, ui_detector, fusion_engine)
logger.info("VisualTargetManager initialisé pour l'API")
@visual_targets_bp.route('/targets', methods=['POST'])
def create_visual_target():
"""
Capture and create a new visual target at the given position
Body:
{
"position": {"x": 100, "y": 200}
}
Returns:
{
"embedding": [...],
"screenshot": "base64_image",
"bounding_box": {"x": 95, "y": 195, "width": 50, "height": 30},
"confidence": 1.0,
"contextual_info": {...},
"signature": "visual_abc123_def456",
"metadata": {...},
"created_at": "2024-01-07T10:30:00",
"validation_count": 0
}
"""
try:
if not visual_target_manager:
return jsonify({'error': 'VisualTargetManager non initialisé'}), 500
data = request.get_json()
if not data or 'position' not in data:
return jsonify({'error': 'Position requise'}), 400
position_data = data['position']
if 'x' not in position_data or 'y' not in position_data:
return jsonify({'error': 'Coordonnées x et y requises'}), 400
# Create the Point object
position = Point(x=position_data['x'], y=position_data['y'])
# Capture and select the element (async operation)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
target = loop.run_until_complete(
visual_target_manager.capture_and_select_element(position)
)
# Convert to a dictionary for JSON
target_dict = target.to_dict()
logger.info(f"Cible visuelle créée: {target.signature}")
return jsonify(target_dict), 201
finally:
loop.close()
except ValueError as e:
logger.warning(f"Erreur de validation: {e}")
return jsonify({'error': str(e)}), 400
except Exception as e:
logger.error(f"Erreur lors de la création de cible: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@visual_targets_bp.route('/targets/<signature>/validate', methods=['POST'])
def validate_visual_target(signature: str):
"""
Validate that a visual target is still present and accessible
Returns:
{
"is_valid": true,
"confidence": 0.95,
"current_position": {"x": 95, "y": 195, "width": 50, "height": 30},
"suggestions": [...],
"issues": []
}
"""
try:
if not visual_target_manager:
return jsonify({'error': 'VisualTargetManager non initialisé'}), 500
# Fetch the target from the cache
target = visual_target_manager.get_cached_target(signature)
if not target:
return jsonify({'error': 'Cible non trouvée'}), 404
# Validate the target (async operation)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
validation_result = loop.run_until_complete(
visual_target_manager.validate_target(target)
)
# Convert to a dictionary for JSON
result_dict = {
'is_valid': validation_result.is_valid,
'confidence': validation_result.confidence,
'current_position': validation_result.current_position.__dict__ if validation_result.current_position else None,
'suggestions': [s.to_dict() for s in validation_result.suggestions] if validation_result.suggestions else [],
'issues': validation_result.issues or []
}
logger.debug(f"Validation de {signature}: {'valide' if validation_result.is_valid else 'invalide'}")
return jsonify(result_dict), 200
finally:
loop.close()
except Exception as e:
logger.error(f"Erreur lors de la validation: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@visual_targets_bp.route('/targets/<signature>', methods=['PUT'])
def update_visual_target(signature: str):
"""
Update the screenshot of a visual target
Returns:
{
"embedding": [...],
"screenshot": "base64_image_updated",
"bounding_box": {"x": 95, "y": 195, "width": 50, "height": 30},
"confidence": 0.95,
...
}
"""
try:
if not visual_target_manager:
return jsonify({'error': 'VisualTargetManager non initialisé'}), 500
# Fetch the target from the cache
target = visual_target_manager.get_cached_target(signature)
if not target:
return jsonify({'error': 'Cible non trouvée'}), 404
# Update the capture (async operation)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
updated_target = loop.run_until_complete(
visual_target_manager.update_target_screenshot(target)
)
# Convert to a dictionary for JSON
target_dict = updated_target.to_dict()
logger.info(f"Cible mise à jour: {signature}")
return jsonify(target_dict), 200
finally:
loop.close()
except ValueError as e:
logger.warning(f"Erreur lors de la mise à jour: {e}")
return jsonify({'error': str(e)}), 400
except Exception as e:
logger.error(f"Erreur lors de la mise à jour: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@visual_targets_bp.route('/targets/<signature>/similar', methods=['GET'])
def find_similar_elements(signature: str):
"""
Find elements similar to the given target
Returns:
[
{
"embedding": [...],
"screenshot": "base64_image",
"bounding_box": {"x": 200, "y": 300, "width": 50, "height": 30},
"confidence": 0.87,
...
},
...
]
"""
try:
if not visual_target_manager:
return jsonify({'error': 'VisualTargetManager non initialisé'}), 500
# Fetch the target from the cache
target = visual_target_manager.get_cached_target(signature)
if not target:
return jsonify({'error': 'Cible non trouvée'}), 404
# Search for similar elements (async operation)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
similar_targets = loop.run_until_complete(
visual_target_manager.find_similar_elements(target)
)
# Convert to a list of dictionaries for JSON
targets_list = [t.to_dict() for t in similar_targets]
logger.info(f"Trouvé {len(similar_targets)} éléments similaires à {signature}")
return jsonify(targets_list), 200
finally:
loop.close()
except Exception as e:
logger.error(f"Erreur lors de la recherche: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@visual_targets_bp.route('/targets/<signature>', methods=['DELETE'])
def delete_visual_target(signature: str):
"""
Remove a visual target from the cache and stop its validation
Returns:
{
"message": "Cible supprimée avec succès"
}
"""
try:
if not visual_target_manager:
return jsonify({'error': 'VisualTargetManager non initialisé'}), 500
# Check that the target exists
target = visual_target_manager.get_cached_target(signature)
if not target:
return jsonify({'error': 'Cible non trouvée'}), 404
# Stop validation and remove from the cache
visual_target_manager.stop_validation(signature)
logger.info(f"Cible supprimée: {signature}")
return jsonify({'message': 'Cible supprimée avec succès'}), 200
except Exception as e:
logger.error(f"Erreur lors de la suppression: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@visual_targets_bp.route('/targets', methods=['GET'])
def list_visual_targets():
"""
List all cached visual targets
Returns:
{
"targets": [
{
"signature": "visual_abc123_def456",
"confidence": 0.95,
"created_at": "2024-01-07T10:30:00",
"last_validated": "2024-01-07T10:35:00",
"validation_count": 5,
"metadata": {...}
},
...
],
"count": 2
}
"""
try:
if not visual_target_manager:
return jsonify({'error': 'VisualTargetManager non initialisé'}), 500
# Fetch all targets from the cache
targets_summary = []
for signature, target in visual_target_manager._target_cache.items():
summary = {
'signature': target.signature,
'confidence': target.confidence,
'created_at': target.created_at.isoformat(),
'last_validated': target.last_validated.isoformat() if target.last_validated else None,
'validation_count': target.validation_count,
'metadata': target.metadata
}
targets_summary.append(summary)
return jsonify({
'targets': targets_summary,
'count': len(targets_summary)
}), 200
except Exception as e:
logger.error(f"Erreur lors de la liste des cibles: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@visual_targets_bp.route('/targets/clear', methods=['POST'])
def clear_visual_targets():
"""
Clear all visual targets from the cache
Returns:
{
"message": "Cache vidé avec succès",
"cleared_count": 5
}
"""
try:
if not visual_target_manager:
return jsonify({'error': 'VisualTargetManager non initialisé'}), 500
# Count targets before clearing
count_before = len(visual_target_manager._target_cache)
# Clear the cache
visual_target_manager.clear_cache()
logger.info(f"Cache vidé: {count_before} cibles supprimées")
return jsonify({
'message': 'Cache vidé avec succès',
'cleared_count': count_before
}), 200
except Exception as e:
logger.error(f"Erreur lors du vidage du cache: {e}")
return jsonify({'error': 'Erreur interne du serveur'}), 500
@visual_targets_bp.route('/health', methods=['GET'])
def health_check():
"""
Health check for the visual targets service
Returns:
{
"status": "healthy",
"manager_initialized": true,
"cached_targets": 3,
"active_validations": 3
}
"""
try:
status = {
'status': 'healthy' if visual_target_manager else 'unhealthy',
'manager_initialized': visual_target_manager is not None,
'cached_targets': len(visual_target_manager._target_cache) if visual_target_manager else 0,
'active_validations': len(visual_target_manager._validation_tasks) if visual_target_manager else 0
}
return jsonify(status), 200
except Exception as e:
logger.error(f"Erreur lors du health check: {e}")
return jsonify({
'status': 'error',
'error': str(e)
}), 500
# Error handlers for the blueprint
@visual_targets_bp.errorhandler(404)
def not_found(error):
return jsonify({'error': 'Endpoint non trouvé'}), 404
@visual_targets_bp.errorhandler(405)
def method_not_allowed(error):
return jsonify({'error': 'Méthode non autorisée'}), 405
@visual_targets_bp.errorhandler(500)
def internal_error(error):
return jsonify({'error': 'Erreur interne du serveur'}), 500
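The `/health` endpoint above builds its payload defensively so a missing `visual_target_manager` reports `unhealthy` instead of crashing. A minimal sketch of that guard pattern, stripped of Flask (the `FakeManager` class and the public `target_cache` attribute are illustrative only — the real manager uses the private `_target_cache`):

```python
def build_health_status(manager):
    """Return a health payload that degrades gracefully when the
    visual-target manager was never initialized."""
    return {
        'status': 'healthy' if manager else 'unhealthy',
        'manager_initialized': manager is not None,
        'cached_targets': len(manager.target_cache) if manager else 0,
    }

class FakeManager:
    # Stand-in for VisualTargetManager; two cached targets.
    target_cache = {'login_button': object(), 'submit': object()}

print(build_health_status(None)['status'])               # unhealthy
print(build_health_status(FakeManager())['cached_targets'])  # 2
```

The key point is that every field falls back to a safe default, so monitoring keeps working even during a partial startup.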

View File

@@ -0,0 +1,595 @@
"""
WebSocket Handlers - Visual Workflow Builder
Gestion des événements WebSocket pour les mises à jour en temps réel
des exécutions de workflows.
Exigences: 6.2, 6.3, 6.4
"""
from flask import request
from flask_socketio import emit, join_room, leave_room, rooms
from app import socketio
from services.execution_integration import get_executor
from datetime import datetime
import logging
logger = logging.getLogger(__name__)
# Dictionnaire pour suivre les souscriptions
# Format: {execution_id: [sid1, sid2, ...]}
execution_subscriptions = {}
@socketio.on('connect')
def handle_connect():
"""
Gère la connexion d'un client WebSocket.
Exigence: 6.2
"""
client_id = request.sid
logger.info(f"Client connecté: {client_id}")
emit('connected', {
'message': 'Connecté au serveur WebSocket',
'client_id': client_id
})
@socketio.on('disconnect')
def handle_disconnect():
"""
Gère la déconnexion d'un client WebSocket.
Nettoie les souscriptions du client.
Exigence: 6.2
"""
client_id = request.sid
logger.info(f"Client déconnecté: {client_id}")
# Nettoyer les souscriptions
for execution_id in list(execution_subscriptions.keys()):
if client_id in execution_subscriptions[execution_id]:
execution_subscriptions[execution_id].remove(client_id)
if not execution_subscriptions[execution_id]:
del execution_subscriptions[execution_id]
@socketio.on('subscribe_execution')
def handle_subscribe_execution(data):
"""
Souscrit aux mises à jour d'une exécution spécifique.
Args:
data: {'execution_id': str}
Exigence: 6.2, 6.3
"""
client_id = request.sid
execution_id = data.get('execution_id')
if not execution_id:
emit('error', {'message': 'execution_id requis'})
return
# Ajouter le client à la room de l'exécution
join_room(execution_id)
# Enregistrer la souscription
if execution_id not in execution_subscriptions:
execution_subscriptions[execution_id] = []
if client_id not in execution_subscriptions[execution_id]:
execution_subscriptions[execution_id].append(client_id)
logger.info(f"Client {client_id} souscrit à l'exécution {execution_id}")
# Envoyer le statut actuel
executor = get_executor()
result = executor.get_execution_status(execution_id)
if result:
emit('execution_status', {
'execution_id': execution_id,
'status': result.status,
'progress': result.progress,
            'logs': result.logs[-10:]  # last 10 log entries (slicing already handles shorter lists)
})
else:
emit('error', {
'message': f'Exécution {execution_id} introuvable'
})
@socketio.on('unsubscribe_execution')
def handle_unsubscribe_execution(data):
"""
Se désabonne des mises à jour d'une exécution.
Args:
data: {'execution_id': str}
Exigence: 6.2
"""
client_id = request.sid
execution_id = data.get('execution_id')
if not execution_id:
emit('error', {'message': 'execution_id requis'})
return
# Retirer le client de la room
leave_room(execution_id)
# Retirer de la liste des souscriptions
if execution_id in execution_subscriptions:
if client_id in execution_subscriptions[execution_id]:
execution_subscriptions[execution_id].remove(client_id)
if not execution_subscriptions[execution_id]:
del execution_subscriptions[execution_id]
logger.info(f"Client {client_id} désabonné de l'exécution {execution_id}")
emit('unsubscribed', {'execution_id': execution_id})
@socketio.on('get_execution_status')
def handle_get_execution_status(data):
"""
Récupère le statut actuel d'une exécution.
Args:
data: {'execution_id': str}
Exigence: 6.3
"""
execution_id = data.get('execution_id')
if not execution_id:
emit('error', {'message': 'execution_id requis'})
return
executor = get_executor()
result = executor.get_execution_status(execution_id)
if result:
emit('execution_status', result.to_dict())
else:
emit('error', {
'message': f'Exécution {execution_id} introuvable'
})
# ============================================================================
# Fonctions utilitaires pour émettre des événements depuis le backend
# ============================================================================
def broadcast_execution_started(execution_id: str, workflow_id: str):
"""
Diffuse un événement de démarrage d'exécution.
Exigence: 6.3
"""
socketio.emit('execution_started', {
'execution_id': execution_id,
'workflow_id': workflow_id,
'timestamp': datetime.now().isoformat()
}, room=execution_id)
logger.info(f"Événement execution_started diffusé pour {execution_id}")
def broadcast_node_status(execution_id: str, node_id: str, status: str, data: dict = None):
"""
Diffuse un événement de changement de statut de node.
Args:
execution_id: ID de l'exécution
node_id: ID du node
status: Nouveau statut (running, success, failed)
data: Données additionnelles
Exigence: 6.3
"""
event_data = {
'execution_id': execution_id,
'node_id': node_id,
'status': status,
'timestamp': datetime.now().isoformat()
}
if data:
event_data.update(data)
socketio.emit('node_status', event_data, room=execution_id)
logger.debug(f"Événement node_status diffusé: {node_id} -> {status}")
def broadcast_execution_progress(execution_id: str, progress: dict):
"""
Diffuse un événement de progression d'exécution.
Args:
execution_id: ID de l'exécution
progress: Données de progression
Exigence: 6.3
"""
socketio.emit('execution_progress', {
'execution_id': execution_id,
'progress': progress,
'timestamp': datetime.now().isoformat()
}, room=execution_id)
def broadcast_execution_complete(execution_id: str, status: str, result: dict = None):
"""
Diffuse un événement de fin d'exécution.
Args:
execution_id: ID de l'exécution
status: Statut final (completed, failed, cancelled)
result: Résultat de l'exécution
Exigence: 6.4
"""
event_data = {
'execution_id': execution_id,
'status': status,
'timestamp': datetime.now().isoformat()
}
if result:
event_data['result'] = result
socketio.emit('execution_complete', event_data, room=execution_id)
logger.info(f"Événement execution_complete diffusé pour {execution_id}: {status}")
def broadcast_execution_log(execution_id: str, log_entry: dict):
"""
Diffuse un nouveau log d'exécution.
Args:
execution_id: ID de l'exécution
log_entry: Entrée de log
Exigence: 6.3
"""
socketio.emit('execution_log', {
'execution_id': execution_id,
'log': log_entry
}, room=execution_id)
def broadcast_execution_error(execution_id: str, error_message: str, node_id: str = None):
"""
Diffuse un événement d'erreur d'exécution.
Args:
execution_id: ID de l'exécution
error_message: Message d'erreur
node_id: ID du node en erreur (optionnel)
Exigence: 6.4
"""
event_data = {
'execution_id': execution_id,
'error': error_message,
'timestamp': datetime.now().isoformat()
}
if node_id:
event_data['node_id'] = node_id
socketio.emit('execution_error', event_data, room=execution_id)
logger.error(f"Événement execution_error diffusé pour {execution_id}: {error_message}")
# Import datetime for timestamps (better placed with the top-of-module imports)
from datetime import datetime
# ============================================================================
# COACHING Mode WebSocket Handlers
# ============================================================================
# Dictionnaire pour suivre les sessions COACHING actives
# Format: {execution_id: {client_id: sid, pending_suggestion: dict}}
coaching_sessions = {}
@socketio.on('subscribe_coaching')
def handle_subscribe_coaching(data):
"""
Souscrit aux événements COACHING d'une exécution.
Args:
data: {'execution_id': str}
"""
client_id = request.sid
execution_id = data.get('execution_id')
if not execution_id:
emit('error', {'message': 'execution_id requis'})
return
# Rejoindre la room coaching
coaching_room = f"coaching_{execution_id}"
join_room(coaching_room)
# Enregistrer la session coaching
if execution_id not in coaching_sessions:
coaching_sessions[execution_id] = {
'clients': [],
'pending_suggestion': None,
'stats': {
'suggestions_made': 0,
'accepted': 0,
'rejected': 0,
'corrected': 0
}
}
if client_id not in coaching_sessions[execution_id]['clients']:
coaching_sessions[execution_id]['clients'].append(client_id)
logger.info(f"Client {client_id} souscrit au COACHING {execution_id}")
emit('coaching_subscribed', {
'execution_id': execution_id,
'message': 'Souscrit aux événements COACHING',
'stats': coaching_sessions[execution_id]['stats']
})
@socketio.on('unsubscribe_coaching')
def handle_unsubscribe_coaching(data):
"""
Se désabonne des événements COACHING.
Args:
data: {'execution_id': str}
"""
client_id = request.sid
execution_id = data.get('execution_id')
if not execution_id:
emit('error', {'message': 'execution_id requis'})
return
coaching_room = f"coaching_{execution_id}"
leave_room(coaching_room)
if execution_id in coaching_sessions:
if client_id in coaching_sessions[execution_id]['clients']:
coaching_sessions[execution_id]['clients'].remove(client_id)
if not coaching_sessions[execution_id]['clients']:
del coaching_sessions[execution_id]
logger.info(f"Client {client_id} désabonné du COACHING {execution_id}")
emit('coaching_unsubscribed', {'execution_id': execution_id})
@socketio.on('coaching_decision')
def handle_coaching_decision(data):
"""
Reçoit une décision COACHING de l'utilisateur.
Args:
data: {
'execution_id': str,
'decision': str ('accept', 'reject', 'correct', 'manual', 'skip'),
'correction': dict (optionnel, si decision == 'correct'),
'feedback': str (optionnel)
}
"""
client_id = request.sid
execution_id = data.get('execution_id')
decision = data.get('decision')
if not execution_id or not decision:
emit('error', {'message': 'execution_id et decision requis'})
return
valid_decisions = ['accept', 'reject', 'correct', 'manual', 'skip']
if decision not in valid_decisions:
emit('error', {'message': f'decision invalide. Valeurs: {valid_decisions}'})
return
logger.info(f"Décision COACHING reçue: {execution_id} -> {decision}")
# Mettre à jour les stats
if execution_id in coaching_sessions:
stats = coaching_sessions[execution_id]['stats']
if decision == 'accept':
stats['accepted'] += 1
elif decision == 'reject':
stats['rejected'] += 1
elif decision == 'correct':
stats['corrected'] += 1
# Transmettre la décision au backend d'exécution
try:
from services.execution_integration import get_executor
executor = get_executor()
# Construire la réponse COACHING
coaching_response = {
'decision': decision,
'correction': data.get('correction'),
'feedback': data.get('feedback'),
'executed_manually': decision == 'manual'
}
# Soumettre au loop d'exécution
result = executor.submit_coaching_decision(execution_id, coaching_response)
if result:
emit('coaching_decision_accepted', {
'execution_id': execution_id,
'decision': decision,
'message': 'Décision enregistrée'
})
# Diffuser à tous les clients de la room
coaching_room = f"coaching_{execution_id}"
socketio.emit('coaching_decision_broadcast', {
'execution_id': execution_id,
'decision': decision,
'by_client': client_id,
'timestamp': datetime.now().isoformat()
}, room=coaching_room)
else:
emit('error', {'message': 'Impossible de soumettre la décision'})
except Exception as e:
logger.error(f"Erreur lors de la soumission COACHING: {e}")
emit('error', {'message': str(e)})
@socketio.on('get_coaching_stats')
def handle_get_coaching_stats(data):
"""
Récupère les statistiques COACHING d'une exécution.
Args:
data: {'execution_id': str}
"""
execution_id = data.get('execution_id')
if not execution_id:
emit('error', {'message': 'execution_id requis'})
return
stats = {}
if execution_id in coaching_sessions:
stats = coaching_sessions[execution_id]['stats']
emit('coaching_stats', {
'execution_id': execution_id,
'stats': stats
})
# ============================================================================
# Fonctions pour émettre des événements COACHING depuis le backend
# ============================================================================
def broadcast_coaching_suggestion(
execution_id: str,
action_info: dict,
screenshot_path: str = None,
context: dict = None
):
"""
Diffuse une suggestion d'action COACHING.
Args:
execution_id: ID de l'exécution
action_info: Information sur l'action suggérée
screenshot_path: Chemin vers la capture d'écran
context: Contexte additionnel
"""
coaching_room = f"coaching_{execution_id}"
event_data = {
'execution_id': execution_id,
'action': action_info.get('action', 'unknown'),
'target': action_info.get('target', {}),
'params': action_info.get('params', {}),
'screenshot_path': screenshot_path,
'context': context or {},
'timestamp': datetime.now().isoformat()
}
# Enregistrer la suggestion en attente
if execution_id in coaching_sessions:
coaching_sessions[execution_id]['pending_suggestion'] = event_data
coaching_sessions[execution_id]['stats']['suggestions_made'] += 1
socketio.emit('coaching_suggestion', event_data, room=coaching_room)
logger.info(f"Suggestion COACHING diffusée: {execution_id} -> {action_info.get('action')}")
def broadcast_coaching_action_result(
execution_id: str,
action_info: dict,
success: bool,
result: dict = None,
error: str = None
):
"""
Diffuse le résultat d'une action COACHING.
Args:
execution_id: ID de l'exécution
action_info: Information sur l'action
success: Si l'action a réussi
result: Résultat de l'action
error: Message d'erreur si échec
"""
coaching_room = f"coaching_{execution_id}"
event_data = {
'execution_id': execution_id,
'action': action_info.get('action', 'unknown'),
'success': success,
'timestamp': datetime.now().isoformat()
}
if result:
event_data['result'] = result
if error:
event_data['error'] = error
socketio.emit('coaching_action_result', event_data, room=coaching_room)
logger.info(f"Résultat action COACHING: {execution_id} -> success={success}")
def broadcast_coaching_stats_update(execution_id: str, stats: dict):
"""
Diffuse une mise à jour des statistiques COACHING.
Args:
execution_id: ID de l'exécution
stats: Statistiques mises à jour
"""
coaching_room = f"coaching_{execution_id}"
socketio.emit('coaching_stats_update', {
'execution_id': execution_id,
'stats': stats,
'timestamp': datetime.now().isoformat()
}, room=coaching_room)
def broadcast_coaching_session_end(execution_id: str, final_stats: dict):
"""
Diffuse la fin d'une session COACHING.
Args:
execution_id: ID de l'exécution
final_stats: Statistiques finales de la session
"""
coaching_room = f"coaching_{execution_id}"
socketio.emit('coaching_session_end', {
'execution_id': execution_id,
'stats': final_stats,
'timestamp': datetime.now().isoformat()
}, room=coaching_room)
# Nettoyer la session
if execution_id in coaching_sessions:
del coaching_sessions[execution_id]
logger.info(f"Session COACHING terminée: {execution_id}")
# Enregistrer les handlers dans l'application
logger.info("WebSocket handlers enregistrés (incluant COACHING)")
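The `execution_subscriptions` bookkeeping used by `handle_connect`/`handle_disconnect` above — a dict mapping each execution id to the list of subscribed client sids, with empty entries pruned when a client leaves — can be sketched in isolation like this (names are the module's own, but the sketch drops the Socket.IO plumbing):

```python
execution_subscriptions = {}

def subscribe(execution_id, client_id):
    # Register a client for updates on one execution (idempotent).
    execution_subscriptions.setdefault(execution_id, [])
    if client_id not in execution_subscriptions[execution_id]:
        execution_subscriptions[execution_id].append(client_id)

def on_disconnect(client_id):
    # Iterate over a copy of the keys: entries are deleted while looping.
    for execution_id in list(execution_subscriptions):
        subs = execution_subscriptions[execution_id]
        if client_id in subs:
            subs.remove(client_id)
        if not subs:
            del execution_subscriptions[execution_id]

subscribe('exec-1', 'sid-a')
subscribe('exec-1', 'sid-b')
on_disconnect('sid-a')
print(execution_subscriptions)  # {'exec-1': ['sid-b']}
on_disconnect('sid-b')
print(execution_subscriptions)  # {}
```

Pruning empty lists on disconnect keeps the dict from accumulating dead execution ids across long server uptimes.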

View File

@@ -14,5 +14,6 @@ from . import session
from . import workflow
from . import capture
from . import execute
from . import match # Matching sémantique des workflows
__all__ = ['api_v3_bp']

View File

@@ -0,0 +1,277 @@
"""
API v3 - Workflow Matching
Matching sémantique des workflows par commande en langage naturel
POST /api/v3/match/find → Trouver les workflows correspondant à une commande
GET /api/v3/match/suggest → Suggestions de workflows
POST /api/v3/match/reload → Recharger le cache des workflows
GET /api/v3/match/stats → Statistiques du matcher
"""
from flask import jsonify, request
from . import api_v3_bp
from services.workflow_matcher import get_workflow_matcher, WorkflowMatch
from dataclasses import asdict
@api_v3_bp.route('/match/find', methods=['POST'])
def find_matching_workflows():
"""
Trouver les workflows correspondant à une commande.
Request:
{
"command": "créer une facture pour le client Acme",
"limit": 5, // Optionnel, défaut: 5
"min_confidence": 0.3 // Optionnel, défaut: 0.3
}
Response:
{
"success": true,
"command": "créer une facture...",
"matches": [
{
"workflow_id": "wf_123",
"workflow_name": "Facturation Client",
"confidence": 0.85,
"match_reasons": ["trigger_example_exact:créer une facture", "tags:facturation"],
"extracted_params": {"client": "Acme"},
"description": "...",
"tags": ["facturation", "client"],
"step_count": 5
}
],
"best_match": { ... } ou null
}
"""
try:
data = request.get_json() or {}
command = data.get('command', '').strip()
if not command:
return jsonify({
'success': False,
'error': "Le champ 'command' est requis"
}), 400
limit = data.get('limit', 5)
min_confidence = data.get('min_confidence', 0.3)
# Trouver les workflows
matcher = get_workflow_matcher()
matches = matcher.find_workflows(command, limit=limit, min_confidence=min_confidence)
# Convertir en dict pour JSON
matches_dict = []
for match in matches:
match_data = {
'workflow_id': match.workflow_id,
'workflow_name': match.workflow_name,
'confidence': match.confidence,
'match_reasons': match.match_reasons,
'extracted_params': match.extracted_params,
'description': match.description,
'tags': match.tags or [],
'step_count': match.step_count
}
matches_dict.append(match_data)
        print(f"🔍 [Match] Commande: '{command}' → {len(matches)} résultat(s)")
return jsonify({
'success': True,
'command': command,
'matches': matches_dict,
'best_match': matches_dict[0] if matches_dict else None,
'total_workflows': matcher.workflow_count()
})
except Exception as e:
import traceback
traceback.print_exc()
return jsonify({
'success': False,
'error': str(e)
}), 500
@api_v3_bp.route('/match/suggest', methods=['GET'])
def suggest_workflows():
"""
Obtenir des suggestions de workflows.
Query params:
q: Texte partiel (requis)
limit: Nombre max de suggestions (optionnel, défaut: 5)
Response:
{
"success": true,
"query": "fact",
"suggestions": [
{
"id": "wf_123",
"name": "Facturation Client",
"description": "Crée une facture...",
"tags": ["facturation"]
}
]
}
"""
try:
query = request.args.get('q', '').strip()
if not query:
return jsonify({
'success': False,
'error': "Le paramètre 'q' est requis"
}), 400
limit = int(request.args.get('limit', 5))
matcher = get_workflow_matcher()
suggestions = matcher.suggest_workflows(query, limit=limit)
return jsonify({
'success': True,
'query': query,
'suggestions': suggestions
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
@api_v3_bp.route('/match/reload', methods=['POST'])
def reload_matcher():
"""
Recharger le cache du matcher.
Utile après avoir ajouté/modifié des workflows.
Response:
{
"success": true,
"workflows_loaded": 10
}
"""
try:
matcher = get_workflow_matcher()
count = matcher.reload_workflows()
print(f"🔄 [Match] Cache rechargé: {count} workflows")
return jsonify({
'success': True,
'workflows_loaded': count
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
@api_v3_bp.route('/match/stats', methods=['GET'])
def matcher_stats():
"""
Obtenir les statistiques du matcher.
Response:
{
"success": true,
"stats": {
"total_workflows": 10,
"workflows_with_tags": 8,
"workflows_with_triggers": 5,
"workflows_with_description": 9
}
}
"""
try:
matcher = get_workflow_matcher()
workflows = matcher.get_all_workflows()
stats = {
'total_workflows': len(workflows),
'workflows_with_tags': sum(1 for w in workflows if w.tags),
'workflows_with_triggers': sum(1 for w in workflows if w.trigger_examples),
'workflows_with_description': sum(1 for w in workflows if w.description),
'total_tags': sum(len(w.tags) for w in workflows),
'total_trigger_examples': sum(len(w.trigger_examples) for w in workflows)
}
# Liste des workflows avec leurs métadonnées
workflow_list = []
for w in workflows:
workflow_list.append({
'id': w.workflow_id,
'name': w.name,
'tags_count': len(w.tags),
'triggers_count': len(w.trigger_examples),
'has_description': bool(w.description),
'step_count': w.step_count
})
return jsonify({
'success': True,
'stats': stats,
'workflows': workflow_list
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
@api_v3_bp.route('/match/test', methods=['GET'])
def test_matcher():
"""
Endpoint de test pour vérifier le fonctionnement du matcher.
Response:
{
"success": true,
"message": "Matcher opérationnel",
"example": { ... }
}
"""
try:
matcher = get_workflow_matcher()
count = matcher.workflow_count()
# Test avec une commande simple si des workflows existent
example = None
if count > 0:
workflows = matcher.get_all_workflows()
# Prendre le premier workflow avec des trigger_examples
for w in workflows:
if w.trigger_examples:
test_command = w.trigger_examples[0]
result = matcher.find_workflow(test_command)
if result:
example = {
'test_command': test_command,
'matched_workflow': result.workflow_name,
'confidence': result.confidence
}
break
return jsonify({
'success': True,
'message': 'Matcher opérationnel',
'workflows_loaded': count,
'example': example
})
except Exception as e:
return jsonify({
'success': False,
'error': str(e)
}), 500
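The matcher behind `/match/find` lives in `services.workflow_matcher` and is not part of this diff. As a rough, purely hypothetical illustration of the kind of confidence scoring the endpoint relies on (not the real algorithm), here is a naive token-overlap score between a command and a workflow's `trigger_examples`:

```python
def naive_confidence(command, trigger_examples):
    """Best Jaccard overlap between the command's tokens and any
    trigger example's tokens, in [0, 1]."""
    cmd_tokens = set(command.lower().split())
    best = 0.0
    for example in trigger_examples:
        ex_tokens = set(example.lower().split())
        if not ex_tokens:
            continue
        overlap = len(cmd_tokens & ex_tokens) / len(cmd_tokens | ex_tokens)
        best = max(best, overlap)
    return best

triggers = ["créer une facture", "générer une facture client"]
score = naive_confidence("créer une facture pour le client Acme", triggers)
print(round(score, 2))  # 0.43
```

A `min_confidence` of 0.3, the endpoint's default, would let this match through while filtering unrelated commands whose overlap is near zero.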

View File

@@ -33,26 +33,44 @@ def get_state():
try:
session = get_session_state()
# Workflow actif
# Workflow actif - nettoyer si n'existe plus
active_workflow = None
if session.active_workflow_id:
wf = Workflow.query.get(session.active_workflow_id)
if wf:
active_workflow = wf.to_dict()
else:
# Le workflow n'existe plus, nettoyer la session
print(f"⚠️ [Session] Workflow '{session.active_workflow_id}' n'existe plus, nettoyage session")
session.active_workflow_id = None
session.selected_step_id = None
# Exécution active
# Vérifier que l'étape sélectionnée existe toujours
if session.selected_step_id:
step = Step.query.get(session.selected_step_id)
if not step:
print(f"⚠️ [Session] Étape '{session.selected_step_id}' n'existe plus, nettoyage")
session.selected_step_id = None
# Exécution active - nettoyer si n'existe plus
active_execution = None
if session.active_execution_id:
exe = Execution.query.get(session.active_execution_id)
if exe:
active_execution = exe.to_dict()
else:
print(f"⚠️ [Session] Exécution '{session.active_execution_id}' n'existe plus, nettoyage")
session.active_execution_id = None
# Liste des workflows (résumé)
# Liste des workflows (résumé avec métadonnées)
workflows_list = []
for wf in Workflow.query.filter_by(is_active=True).order_by(Workflow.updated_at.desc()).all():
workflows_list.append({
'id': wf.id,
'name': wf.name,
'description': wf.description or '',
'tags': wf.tags or [],
'trigger_examples': wf.trigger_examples or [],
'step_count': wf.steps.count(),
'updated_at': wf.updated_at.isoformat() if wf.updated_at else None
})

View File

@@ -99,6 +99,66 @@ def get_workflow(workflow_id: str):
}), 500
@api_v3_bp.route('/workflow/<workflow_id>', methods=['PUT'])
def update_workflow(workflow_id: str):
"""
Met à jour les métadonnées d'un workflow.
Request:
{
"name": "Nouveau nom", // Optionnel
"description": "Description", // Optionnel
"tags": ["tag1", "tag2"], // Optionnel
"triggerExamples": ["phrase1"] // Optionnel
}
Response:
{
"success": true,
"workflow": { ... }
}
"""
try:
workflow = Workflow.query.get(workflow_id)
if not workflow:
return jsonify({
'success': False,
'error': f"Workflow '{workflow_id}' non trouvé"
}), 404
data = request.get_json() or {}
# Mettre à jour les champs fournis
if 'name' in data:
workflow.name = data['name']
if 'description' in data:
workflow.description = data['description']
if 'tags' in data:
workflow.tags = data['tags']
if 'triggerExamples' in data:
workflow.trigger_examples = data['triggerExamples']
workflow.updated_at = datetime.utcnow()
db.session.commit()
print(f"✅ [API v3] Workflow mis à jour: {workflow_id}")
return jsonify({
'success': True,
'workflow': workflow.to_dict()
})
except Exception as e:
db.session.rollback()
return jsonify({
'success': False,
'error': str(e)
}), 500
@api_v3_bp.route('/workflow/<workflow_id>', methods=['DELETE'])
def delete_workflow(workflow_id: str):
"""Supprime un workflow"""

View File

@@ -10,6 +10,7 @@ from flask import Flask
from flask_cors import CORS
from flask_socketio import SocketIO
from flask_caching import Cache
from flask_migrate import Migrate
import os
from dotenv import load_dotenv
@@ -30,6 +31,10 @@ app.config['CACHE_REDIS_URL'] = os.getenv('REDIS_URL', 'redis://localhost:6379/0
# Initialize extensions - Use db from v3 models (source of truth)
from db.models import db
db.init_app(app)
# Initialize Flask-Migrate for database migrations
migrate = Migrate(app, db)
cache = Cache(app)
socketio = SocketIO(
app,
@@ -287,9 +292,23 @@ def execute_workflow_step():
'error': str(e)
}), 500
# Create database tables
# Create database tables - only if migrations not available
# In production, use: flask db upgrade
import os
migrations_dir = os.path.join(os.path.dirname(__file__), 'migrations')
with app.app_context():
db.create_all()
if not os.path.exists(migrations_dir):
# No migrations folder - use create_all for development
db.create_all()
print("✅ [DB] Tables créées avec db.create_all()")
else:
# Migrations available - check if alembic_version exists
from sqlalchemy import inspect
inspector = inspect(db.engine)
if 'alembic_version' not in inspector.get_table_names():
# First run with migrations - create tables and stamp
db.create_all()
print("✅ [DB] Tables créées, utiliser 'flask db stamp head' pour initialiser les migrations")
# Initialize VisualTargetManager with RPA Vision V3 components (optional)
try:

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -21,6 +21,8 @@ class Workflow(db.Model):
id = db.Column(db.String(64), primary_key=True)
name = db.Column(db.String(255), nullable=False)
description = db.Column(db.Text, nullable=True)
tags_json = db.Column(db.Text, nullable=True) # JSON array de tags pour le matching
trigger_examples_json = db.Column(db.Text, nullable=True) # JSON array d'exemples de déclenchement
created_at = db.Column(db.DateTime, default=datetime.utcnow)
updated_at = db.Column(db.DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
is_active = db.Column(db.Boolean, default=True)
@@ -31,12 +33,44 @@ class Workflow(db.Model):
executions = db.relationship('Execution', backref='workflow', lazy='dynamic',
cascade='all, delete-orphan')
@property
def tags(self) -> List[str]:
"""Retourne les tags comme liste"""
if not self.tags_json:
return []
try:
return json.loads(self.tags_json)
except json.JSONDecodeError:
return []
@tags.setter
def tags(self, value: List[str]):
"""Définit les tags depuis une liste"""
self.tags_json = json.dumps(value) if value else None
@property
def trigger_examples(self) -> List[str]:
"""Retourne les exemples de déclenchement comme liste"""
if not self.trigger_examples_json:
return []
try:
return json.loads(self.trigger_examples_json)
except json.JSONDecodeError:
return []
@trigger_examples.setter
def trigger_examples(self, value: List[str]):
"""Définit les exemples depuis une liste"""
self.trigger_examples_json = json.dumps(value) if value else None
def to_dict(self) -> Dict[str, Any]:
"""Sérialise le workflow complet"""
return {
'id': self.id,
'name': self.name,
'description': self.description,
'tags': self.tags,
'triggerExamples': self.trigger_examples,
'created_at': self.created_at.isoformat() if self.created_at else None,
'updated_at': self.updated_at.isoformat() if self.updated_at else None,
'steps': [step.to_dict() for step in self.steps.order_by(Step.order).all()],
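The `tags`/`trigger_examples` columns above store JSON text (`tags_json`, `trigger_examples_json`) while exposing plain Python lists through properties. The same pattern, stripped of SQLAlchemy (a plain class with an assumed attribute name), looks like this:

```python
import json

class TaggedRecord:
    def __init__(self):
        self.tags_json = None  # would be a db.Column(db.Text) in the model

    @property
    def tags(self):
        if not self.tags_json:
            return []
        try:
            return json.loads(self.tags_json)
        except json.JSONDecodeError:
            return []  # corrupt JSON degrades to an empty list

    @tags.setter
    def tags(self, value):
        # Empty lists collapse to NULL rather than storing "[]".
        self.tags_json = json.dumps(value) if value else None

r = TaggedRecord()
r.tags = ['facturation', 'client']
print(r.tags_json)          # ["facturation", "client"]
print(TaggedRecord().tags)  # []
```

This keeps the schema portable (works on SQLite without a native JSON type) while callers never touch the serialized form.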

View File

@@ -0,0 +1,199 @@
#!/usr/bin/env python3
"""
Script de gestion des migrations de base de données VWB
Usage:
python manage.py init # Initialiser les migrations (une seule fois)
python manage.py migrate # Créer une nouvelle migration
python manage.py upgrade # Appliquer les migrations pendantes
python manage.py downgrade # Annuler la dernière migration
python manage.py stamp # Marquer une révision comme appliquée
python manage.py current # Afficher la révision actuelle
python manage.py history # Afficher l'historique des migrations
python manage.py backup # Sauvegarder la base de données
python manage.py status # Afficher le status complet
Exemple de workflow de migration:
1. Modifier les modèles dans db/models.py
2. python manage.py migrate "Description du changement"
3. Vérifier le fichier de migration généré dans migrations/versions/
4. python manage.py upgrade
"""
import sys
import os
import shutil
import subprocess
from datetime import datetime
# Chemin vers le backend
BACKEND_DIR = os.path.dirname(os.path.abspath(__file__))
VENV_PYTHON = '/home/dom/ai/rpa_vision_v3/venv_v3/bin/python'
VENV_FLASK = '/home/dom/ai/rpa_vision_v3/venv_v3/bin/flask'
# Variables d'environnement pour Flask
ENV = os.environ.copy()
ENV['FLASK_APP'] = 'app.py'
def run_flask_db(args):
"""Exécute une commande flask db"""
cmd = [VENV_FLASK, 'db'] + args
result = subprocess.run(cmd, cwd=BACKEND_DIR, env=ENV, capture_output=True, text=True)
# Filtrer les warnings connus
output = result.stdout + result.stderr
for line in output.split('\n'):
if line.strip() and not any(skip in line for skip in [
'FutureWarning', 'pynvml', 'use_fast', 'Workflow ignoré',
'ScreenState non disponible', 'Server initialized'
]):
print(line)
return result.returncode == 0
def find_database():
"""Trouve le fichier de base de données SQLite"""
instance_dir = os.path.join(BACKEND_DIR, 'instance')
if os.path.exists(instance_dir):
for f in os.listdir(instance_dir):
if f.endswith('.db') and not f.startswith('.'):
return os.path.join(instance_dir, f)
return None
def backup_database():
"""Sauvegarde la base de données avant une migration"""
instance_dir = os.path.join(BACKEND_DIR, 'instance')
db_path = find_database()
if db_path and os.path.exists(db_path):
db_name = os.path.basename(db_path).replace('.db', '')
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
backup_dir = os.path.join(instance_dir, 'backups')
os.makedirs(backup_dir, exist_ok=True)
backup_path = os.path.join(backup_dir, f'{db_name}_{timestamp}.db')
shutil.copy2(db_path, backup_path)
print(f"✅ Base de données sauvegardée: {backup_path}")
# Garder seulement les 10 dernières sauvegardes
backups = sorted([f for f in os.listdir(backup_dir) if f.endswith('.db')])
if len(backups) > 10:
for old_backup in backups[:-10]:
os.remove(os.path.join(backup_dir, old_backup))
print(f" 🗑️ Ancienne sauvegarde supprimée: {old_backup}")
return backup_path
else:
print("⚠️ Pas de base de données existante à sauvegarder")
return None
def show_status():
"""Affiche le status complet des migrations"""
migrations_dir = os.path.join(BACKEND_DIR, 'migrations')
instance_dir = os.path.join(BACKEND_DIR, 'instance')
db_path = find_database()
print("=" * 50)
print("Status des Migrations VWB")
print("=" * 50)
print(f"Dossier backend: {BACKEND_DIR}")
print(f"Dossier migrations: {'✅ Existe' if os.path.exists(migrations_dir) else '❌ Non initialisé'}")
    print(f"Base de données: {'✅ ' + db_path if db_path else '❌ Non créée'}")
if os.path.exists(migrations_dir):
versions_dir = os.path.join(migrations_dir, 'versions')
if os.path.exists(versions_dir):
migrations = sorted([f for f in os.listdir(versions_dir) if f.endswith('.py') and not f.startswith('__')])
print(f"\nMigrations disponibles ({len(migrations)}):")
for m in migrations:
print(f" 📄 {m}")
# Backups
backup_dir = os.path.join(instance_dir, 'backups')
if os.path.exists(backup_dir):
backups = sorted([f for f in os.listdir(backup_dir) if f.endswith('.db')])
if backups:
print(f"\nSauvegardes ({len(backups)}):")
for b in backups[-5:]: # Afficher les 5 dernières
print(f" 💾 {b}")
print("\nRévision actuelle de la base:")
run_flask_db(['current'])
def main():
if len(sys.argv) < 2:
print(__doc__)
sys.exit(1)
command = sys.argv[1].lower()
if command == 'init':
print("🔧 Initialisation des migrations...")
if run_flask_db(['init']):
print("✅ Migrations initialisées")
print(" Prochaine étape: python manage.py migrate 'Initial migration'")
else:
print("❌ Échec de l'initialisation")
elif command == 'migrate':
message = ' '.join(sys.argv[2:]) if len(sys.argv) > 2 else f"Migration {datetime.now().strftime('%Y%m%d_%H%M%S')}"
print(f"📝 Création de la migration: {message}")
backup_database()
if run_flask_db(['migrate', '-m', message]):
print("✅ Migration créée")
print(" Prochaine étape: python manage.py upgrade")
else:
print("❌ Échec de la création de migration")
elif command == 'upgrade':
revision = sys.argv[2] if len(sys.argv) > 2 else 'head'
print(f"⬆️ Application des migrations jusqu'à: {revision}")
backup_database()
if run_flask_db(['upgrade', revision]):
print("✅ Migrations appliquées")
else:
print("❌ Échec de l'upgrade")
elif command == 'downgrade':
revision = sys.argv[2] if len(sys.argv) > 2 else '-1'
print(f"⬇️ Annulation des migrations: {revision}")
backup_database()
if run_flask_db(['downgrade', revision]):
print("✅ Downgrade effectué")
else:
print("❌ Échec du downgrade")
elif command == 'stamp':
revision = sys.argv[2] if len(sys.argv) > 2 else 'head'
print(f"🏷️ Marquage de la révision: {revision}")
if run_flask_db(['stamp', revision]):
print("✅ Révision marquée")
else:
print("❌ Échec du stamp")
elif command == 'current':
print("📍 Révision actuelle:")
run_flask_db(['current'])
elif command == 'history':
print("📜 Historique des migrations:")
run_flask_db(['history'])
elif command == 'backup':
backup_database()
elif command == 'status':
show_status()
else:
print(f"❌ Commande inconnue: {command}")
print(__doc__)
sys.exit(1)
if __name__ == '__main__':
main()
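The pruning loop in `backup_database()` above (`backups[:-10]`) keeps only the ten most recent database backups. A minimal, self-contained sketch of that rotation policy (the `rotate_backups` helper is illustrative, not part of manage.py):

```python
import os

def rotate_backups(backup_dir: str, keep: int = 10) -> list:
    """Delete all but the `keep` most recent .db backups and return the removed names."""
    # Timestamped filenames sort chronologically, so the oldest come first
    backups = sorted(f for f in os.listdir(backup_dir) if f.endswith(".db"))
    removed = backups[:-keep]
    for old in removed:
        os.remove(os.path.join(backup_dir, old))
    return removed
```

Because `backups[:-keep]` is empty whenever there are `keep` files or fewer, the helper is a no-op until the limit is exceeded.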


@@ -0,0 +1 @@
Single-database configuration for Flask.


@@ -0,0 +1,50 @@
# A generic, single database configuration.
[alembic]
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic,flask_migrate
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[logger_flask_migrate]
level = INFO
handlers =
qualname = flask_migrate
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S


@@ -0,0 +1,113 @@
import logging
from logging.config import fileConfig
from flask import current_app
from alembic import context
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')
def get_engine():
try:
# this works with Flask-SQLAlchemy<3 and Alchemical
return current_app.extensions['migrate'].db.get_engine()
except (TypeError, AttributeError):
# this works with Flask-SQLAlchemy>=3
return current_app.extensions['migrate'].db.engine
def get_engine_url():
try:
return get_engine().url.render_as_string(hide_password=False).replace(
'%', '%%')
except AttributeError:
return str(get_engine().url).replace('%', '%%')
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
config.set_main_option('sqlalchemy.url', get_engine_url())
target_db = current_app.extensions['migrate'].db
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def get_metadata():
if hasattr(target_db, 'metadatas'):
return target_db.metadatas[None]
return target_db.metadata
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(
url=url, target_metadata=get_metadata(), literal_binds=True
)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
# this callback is used to prevent an auto-migration from being generated
# when there are no changes to the schema
# reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html
def process_revision_directives(context, revision, directives):
if getattr(config.cmd_opts, 'autogenerate', False):
script = directives[0]
if script.upgrade_ops.is_empty():
directives[:] = []
logger.info('No changes in schema detected.')
conf_args = current_app.extensions['migrate'].configure_args
if conf_args.get("process_revision_directives") is None:
conf_args["process_revision_directives"] = process_revision_directives
connectable = get_engine()
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=get_metadata(),
**conf_args
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()
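`get_engine_url()` above doubles every `%` before the URL is written into the Alembic config, because `configparser` treats `%` as interpolation syntax. A quick self-contained illustration of why the escaping is needed (the URL is a made-up example):

```python
import configparser

url = "postgresql://user:p%40ss@localhost:5432/db"  # '%40' is a URL-encoded '@'
cp = configparser.ConfigParser()
cp.add_section("alembic")
# Double '%' exactly as env.py does before set_main_option()
cp.set("alembic", "sqlalchemy.url", url.replace("%", "%%"))
# Interpolation turns '%%' back into '%', so the URL reads back intact
assert cp.get("alembic", "sqlalchemy.url") == url
```

Storing the raw URL instead would fail immediately: `ConfigParser.set` with the default `BasicInterpolation` rejects a bare `%` that is not part of `%%` or `%(name)s`.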


@@ -0,0 +1,24 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}


@@ -0,0 +1,104 @@
"""Initial schema with workflow metadata
Revision ID: 001_initial
Revises:
Create Date: 2026-01-23
Tables:
- workflows: Workflow definitions with metadata (tags, trigger examples)
- steps: Workflow steps with action types and parameters
- visual_anchors: Visual anchors for UI element detection
- executions: Workflow execution history
- execution_steps: Individual step execution results
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '001_initial'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# Workflows table
op.create_table('workflows',
sa.Column('id', sa.String(64), primary_key=True),
sa.Column('name', sa.String(255), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('tags_json', sa.Text(), nullable=True),
sa.Column('trigger_examples_json', sa.Text(), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
sa.Column('is_active', sa.Boolean(), default=True),
)
# Visual Anchors table
op.create_table('visual_anchors',
sa.Column('id', sa.String(64), primary_key=True),
sa.Column('image_path', sa.String(512), nullable=True),
sa.Column('thumbnail_path', sa.String(512), nullable=True),
sa.Column('bbox_x', sa.Float(), nullable=True),
sa.Column('bbox_y', sa.Float(), nullable=True),
sa.Column('bbox_width', sa.Float(), nullable=True),
sa.Column('bbox_height', sa.Float(), nullable=True),
sa.Column('screen_width', sa.Integer(), nullable=True),
sa.Column('screen_height', sa.Integer(), nullable=True),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('confidence_threshold', sa.Float(), default=0.8),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('capture_method', sa.String(64), default='screen_capture'),
)
# Steps table
op.create_table('steps',
sa.Column('id', sa.String(64), primary_key=True),
sa.Column('workflow_id', sa.String(64), sa.ForeignKey('workflows.id'), nullable=False),
sa.Column('action_type', sa.String(64), nullable=False),
sa.Column('order', sa.Integer(), nullable=False, default=0),
sa.Column('position_x', sa.Float(), default=0),
sa.Column('position_y', sa.Float(), default=0),
sa.Column('parameters_json', sa.Text(), default='{}'),
sa.Column('anchor_id', sa.String(64), sa.ForeignKey('visual_anchors.id'), nullable=True),
sa.Column('label', sa.String(255), nullable=True),
sa.Column('created_at', sa.DateTime(), nullable=True),
sa.Column('updated_at', sa.DateTime(), nullable=True),
)
# Executions table
op.create_table('executions',
sa.Column('id', sa.String(64), primary_key=True),
sa.Column('workflow_id', sa.String(64), sa.ForeignKey('workflows.id'), nullable=False),
sa.Column('status', sa.String(32), default='pending'),
sa.Column('started_at', sa.DateTime(), nullable=True),
sa.Column('ended_at', sa.DateTime(), nullable=True),
sa.Column('current_step_index', sa.Integer(), default=0),
sa.Column('total_steps', sa.Integer(), default=0),
sa.Column('completed_steps', sa.Integer(), default=0),
sa.Column('failed_steps', sa.Integer(), default=0),
sa.Column('error_message', sa.Text(), nullable=True),
)
# Execution Steps table
op.create_table('execution_steps',
sa.Column('id', sa.Integer(), primary_key=True, autoincrement=True),
sa.Column('execution_id', sa.String(64), sa.ForeignKey('executions.id'), nullable=False),
sa.Column('step_id', sa.String(64), nullable=False),
sa.Column('status', sa.String(32), default='pending'),
sa.Column('started_at', sa.DateTime(), nullable=True),
sa.Column('ended_at', sa.DateTime(), nullable=True),
sa.Column('duration_ms', sa.Integer(), nullable=True),
sa.Column('error_message', sa.Text(), nullable=True),
sa.Column('evidence_path', sa.String(512), nullable=True),
sa.Column('output_json', sa.Text(), default='{}'),
)
def downgrade():
op.drop_table('execution_steps')
op.drop_table('executions')
op.drop_table('steps')
op.drop_table('visual_anchors')
op.drop_table('workflows')


@@ -0,0 +1,269 @@
"""backend/models.py
Modèles "visuels" minimalistes pour le Visual Workflow Builder.
Auteur : Dom, Alice, Kiro - 08 janvier 2026
Objectif du Patch #1:
- Fournir des structures de données stables (to_dict/from_dict)
- Permettre la persistance disque (JSON/YAML) via services.serialization
- Permettre au backend de démarrer même si le reste du core RPA n'est pas branché
NB: Ces modèles sont volontairement permissifs (ils conservent les champs inconnus).
"""
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict, List, Optional
from uuid import uuid4
def generate_id(prefix: str = "wf") -> str:
"""Génère un identifiant court et lisible."""
return f"{prefix}_{uuid4().hex[:12]}"
@dataclass
class WorkflowSettings:
"""Sac de paramètres pour un workflow."""
data: Dict[str, Any] = field(default_factory=dict)
@classmethod
def from_dict(cls, d: Any) -> "WorkflowSettings":
if isinstance(d, WorkflowSettings):
return d
if not isinstance(d, dict):
return cls({})
return cls(dict(d))
def to_dict(self) -> Dict[str, Any]:
return dict(self.data)
@dataclass
class VisualNode:
"""Représentation d'un nœud dans le canvas visuel."""
id: str
type: str = "unknown"
position: Dict[str, Any] = field(default_factory=dict)
data: Dict[str, Any] = field(default_factory=dict)
style: Dict[str, Any] = field(default_factory=dict)
extra: Dict[str, Any] = field(default_factory=dict)
@classmethod
def from_dict(cls, d: Dict[str, Any]) -> "VisualNode":
if not isinstance(d, dict):
raise ValueError("Le nœud doit être un dictionnaire")
node_id = d.get("id") or generate_id("node")
node_type = d.get("type") or d.get("node_type") or "unknown"
known = {"id", "type", "node_type", "position", "data", "style"}
extra = {k: v for k, v in d.items() if k not in known}
return cls(
id=str(node_id),
type=str(node_type),
position=dict(d.get("position") or {}),
data=dict(d.get("data") or {}),
style=dict(d.get("style") or {}),
extra=extra,
)
def to_dict(self) -> Dict[str, Any]:
out = {
"id": self.id,
"type": self.type,
"position": self.position,
"data": self.data,
"style": self.style,
}
out.update(self.extra)
return out
@dataclass
class VisualEdge:
"""Représentation d'une connexion entre nœuds."""
id: str
source: str
target: str
type: str = "default"
label: Optional[str] = None
data: Dict[str, Any] = field(default_factory=dict)
extra: Dict[str, Any] = field(default_factory=dict)
@classmethod
def from_dict(cls, d: Dict[str, Any]) -> "VisualEdge":
if not isinstance(d, dict):
raise ValueError("La connexion doit être un dictionnaire")
edge_id = d.get("id") or generate_id("edge")
source = d.get("source")
target = d.get("target")
if not source or not target:
raise ValueError("La connexion nécessite 'source' et 'target'")
known = {"id", "source", "target", "type", "label", "data"}
extra = {k: v for k, v in d.items() if k not in known}
return cls(
id=str(edge_id),
source=str(source),
target=str(target),
type=str(d.get("type") or "default"),
label=d.get("label"),
data=dict(d.get("data") or {}),
extra=extra,
)
def to_dict(self) -> Dict[str, Any]:
out = {
"id": self.id,
"source": self.source,
"target": self.target,
"type": self.type,
"label": self.label,
"data": self.data,
}
out.update(self.extra)
return out
@dataclass
class Variable:
"""Variable de workflow."""
name: str
value: Any = None
var_type: str = "any"
scope: str = "workflow"
description: str = ""
required: bool = False
extra: Dict[str, Any] = field(default_factory=dict)
@classmethod
def from_dict(cls, d: Dict[str, Any]) -> "Variable":
if not isinstance(d, dict):
raise ValueError("La variable doit être un dictionnaire")
name = d.get("name") or d.get("key")
if not name:
raise ValueError("La variable nécessite un 'name'")
known = {"name", "key", "value", "type", "var_type", "scope", "description", "required"}
extra = {k: v for k, v in d.items() if k not in known}
return cls(
name=str(name),
value=d.get("value"),
var_type=str(d.get("type") or d.get("var_type") or "any"),
scope=str(d.get("scope") or "workflow"),
description=str(d.get("description") or ""),
required=bool(d.get("required") or False),
extra=extra,
)
def to_dict(self) -> Dict[str, Any]:
out = {
"name": self.name,
"value": self.value,
"type": self.var_type,
"scope": self.scope,
"description": self.description,
"required": self.required,
}
out.update(self.extra)
return out
@dataclass
class VisualWorkflow:
"""Workflow visuel complet."""
id: str
name: str
description: str = ""
created_by: str = "unknown"
created_at: str = field(default_factory=lambda: datetime.now().isoformat())
updated_at: str = field(default_factory=lambda: datetime.now().isoformat())
nodes: List[VisualNode] = field(default_factory=list)
edges: List[VisualEdge] = field(default_factory=list)
variables: List[Variable] = field(default_factory=list)
settings: WorkflowSettings = field(default_factory=WorkflowSettings)
tags: List[str] = field(default_factory=list)
category: str = "default"
is_template: bool = False
extra: Dict[str, Any] = field(default_factory=dict)
@classmethod
def from_dict(cls, d: Dict[str, Any]) -> "VisualWorkflow":
if not isinstance(d, dict):
raise ValueError("Le workflow doit être un dictionnaire")
wf_id = d.get("id") or generate_id("wf")
name = d.get("name") or "Sans titre"
known = {
"id", "name", "description", "created_by", "created_at", "updated_at",
"nodes", "edges", "variables", "settings", "tags", "category", "is_template"
}
extra = {k: v for k, v in d.items() if k not in known}
nodes = [VisualNode.from_dict(n) for n in (d.get("nodes") or [])]
edges = [VisualEdge.from_dict(e) for e in (d.get("edges") or [])]
variables = [Variable.from_dict(v) for v in (d.get("variables") or [])]
settings = WorkflowSettings.from_dict(d.get("settings") or {})
return cls(
id=str(wf_id),
name=str(name),
description=str(d.get("description") or ""),
created_by=str(d.get("created_by") or "unknown"),
created_at=str(d.get("created_at") or datetime.now().isoformat()),
updated_at=str(d.get("updated_at") or datetime.now().isoformat()),
nodes=nodes,
edges=edges,
variables=variables,
settings=settings,
tags=list(d.get("tags") or []),
category=str(d.get("category") or "default"),
is_template=bool(d.get("is_template") or False),
extra=extra,
)
def to_dict(self) -> Dict[str, Any]:
out = {
"id": self.id,
"name": self.name,
"description": self.description,
"created_by": self.created_by,
"created_at": self.created_at,
"updated_at": self.updated_at,
"nodes": [n.to_dict() for n in self.nodes],
"edges": [e.to_dict() for e in self.edges],
"variables": [v.to_dict() for v in self.variables],
"settings": self.settings.to_dict(),
"tags": self.tags,
"category": self.category,
"is_template": self.is_template,
}
out.update(self.extra)
return out
def validate(self) -> List[str]:
"""Valide le workflow et retourne la liste des erreurs."""
errors: List[str] = []
if not self.name or not str(self.name).strip():
errors.append("le nom est requis")
node_ids = [n.id for n in self.nodes]
if len(node_ids) != len(set(node_ids)):
errors.append("identifiants de nœuds dupliqués")
        # Edges must reference existing nodes
nodes_set = set(node_ids)
for e in self.edges:
if e.source not in nodes_set:
errors.append(f"connexion {e.id} source '{e.source}' n'existe pas")
if e.target not in nodes_set:
errors.append(f"connexion {e.id} target '{e.target}' n'existe pas")
        # Variable names must be unique
var_names = [v.name for v in self.variables]
if len(var_names) != len(set(var_names)):
errors.append("noms de variables dupliqués")
return errors
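The models above are deliberately permissive: known keys map to fields, everything else lands in `extra` and is re-emitted by `to_dict()`. A minimal self-contained sketch of that round-trip pattern (the `PermissiveNode` class is illustrative only, not one of the models above):

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class PermissiveNode:
    id: str
    type: str = "unknown"
    extra: Dict[str, Any] = field(default_factory=dict)

    @classmethod
    def from_dict(cls, d: Dict[str, Any]) -> "PermissiveNode":
        known = {"id", "type"}
        # Anything not explicitly modelled is kept aside instead of dropped
        extra = {k: v for k, v in d.items() if k not in known}
        return cls(id=str(d["id"]), type=str(d.get("type", "unknown")), extra=extra)

    def to_dict(self) -> Dict[str, Any]:
        out = {"id": self.id, "type": self.type}
        out.update(self.extra)  # unknown fields survive the round-trip
        return out

payload = {"id": "n1", "type": "click", "future_field": 42}
assert PermissiveNode.from_dict(payload).to_dict() == payload
```

This is what lets the backend load workflows written by a newer frontend without losing fields it does not yet understand.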


@@ -0,0 +1,65 @@
"""
Models package for Visual Workflow Builder
Contains data models and database schemas.
"""
from .visual_workflow import (
# Enums
NodeCategory,
NodeStatus,
ParameterType,
# Basic types
Position,
Size,
Port,
ValidationRule,
ParameterDefinition,
# Edge types
EdgeStyle,
EdgeCondition,
VisualEdge,
# Node types
VisualNode,
# Workflow types
Variable,
WorkflowSettings,
VisualWorkflow,
# Utilities
generate_id
)
__all__ = [
# Enums
'NodeCategory',
'NodeStatus',
'ParameterType',
# Basic types
'Position',
'Size',
'Port',
'ValidationRule',
'ParameterDefinition',
# Edge types
'EdgeStyle',
'EdgeCondition',
'VisualEdge',
# Node types
'VisualNode',
# Workflow types
'Variable',
'WorkflowSettings',
'VisualWorkflow',
# Utilities
'generate_id'
]


@@ -0,0 +1,180 @@
"""Self-healing configuration models for Visual Workflow Builder."""
from dataclasses import dataclass, field
from typing import Dict, Any, List, Optional
from enum import Enum
class RecoveryStrategy(Enum):
"""Available recovery strategies."""
SEMANTIC_VARIANT = "semantic_variant"
SPATIAL_FALLBACK = "spatial_fallback"
TIMING_ADAPTATION = "timing_adaptation"
FORMAT_TRANSFORMATION = "format_transformation"
ALL = "all"
class RecoveryMode(Enum):
"""Recovery modes for different scenarios."""
DISABLED = "disabled"
CONSERVATIVE = "conservative" # Only high-confidence recoveries
BALANCED = "balanced" # Default mode
AGGRESSIVE = "aggressive" # Try all strategies
@dataclass
class SelfHealingConfig:
"""Configuration for self-healing behavior of a node."""
# Basic settings
enabled: bool = True
recovery_mode: RecoveryMode = RecoveryMode.BALANCED
max_attempts: int = 3
confidence_threshold: float = 0.7
# Strategy configuration
enabled_strategies: List[RecoveryStrategy] = field(
default_factory=lambda: [RecoveryStrategy.ALL]
)
strategy_timeout: float = 30.0 # seconds
# Advanced settings
learn_from_success: bool = True
require_user_confirmation: bool = False
stop_on_failure: bool = False
# Notification settings
notify_on_recovery: bool = True
notify_on_failure: bool = True
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
'enabled': self.enabled,
'recovery_mode': self.recovery_mode.value,
'max_attempts': self.max_attempts,
'confidence_threshold': self.confidence_threshold,
'enabled_strategies': [s.value for s in self.enabled_strategies],
'strategy_timeout': self.strategy_timeout,
'learn_from_success': self.learn_from_success,
'require_user_confirmation': self.require_user_confirmation,
'stop_on_failure': self.stop_on_failure,
'notify_on_recovery': self.notify_on_recovery,
'notify_on_failure': self.notify_on_failure
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'SelfHealingConfig':
"""Create from dictionary."""
return cls(
enabled=data.get('enabled', True),
recovery_mode=RecoveryMode(data.get('recovery_mode', 'balanced')),
max_attempts=data.get('max_attempts', 3),
confidence_threshold=data.get('confidence_threshold', 0.7),
enabled_strategies=[
RecoveryStrategy(s) for s in data.get('enabled_strategies', ['all'])
],
strategy_timeout=data.get('strategy_timeout', 30.0),
learn_from_success=data.get('learn_from_success', True),
require_user_confirmation=data.get('require_user_confirmation', False),
stop_on_failure=data.get('stop_on_failure', False),
notify_on_recovery=data.get('notify_on_recovery', True),
notify_on_failure=data.get('notify_on_failure', True)
)
@classmethod
def get_default_for_action(cls, action_type: str) -> 'SelfHealingConfig':
"""Get default configuration for specific action types."""
if action_type in ['click', 'hover']:
# More aggressive for UI interactions
return cls(
recovery_mode=RecoveryMode.BALANCED,
enabled_strategies=[
RecoveryStrategy.SEMANTIC_VARIANT,
RecoveryStrategy.SPATIAL_FALLBACK
],
max_attempts=3,
confidence_threshold=0.6
)
elif action_type in ['type', 'input']:
# Conservative for data input
return cls(
recovery_mode=RecoveryMode.CONSERVATIVE,
enabled_strategies=[
RecoveryStrategy.FORMAT_TRANSFORMATION,
RecoveryStrategy.TIMING_ADAPTATION
],
max_attempts=2,
confidence_threshold=0.8,
require_user_confirmation=True
)
elif action_type in ['wait', 'navigate']:
# Timing-focused for navigation
return cls(
recovery_mode=RecoveryMode.BALANCED,
enabled_strategies=[
RecoveryStrategy.TIMING_ADAPTATION
],
max_attempts=2,
confidence_threshold=0.7
)
else:
# Default configuration
return cls()
@dataclass
class RecoveryNotification:
"""Notification about recovery attempt."""
node_id: str
strategy_used: str
success: bool
confidence: float
execution_time: float
message: str
timestamp: str
requires_attention: bool = False
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
'node_id': self.node_id,
'strategy_used': self.strategy_used,
'success': self.success,
'confidence': self.confidence,
'execution_time': self.execution_time,
'message': self.message,
'timestamp': self.timestamp,
'requires_attention': self.requires_attention
}
@dataclass
class RecoveryStatistics:
"""Statistics about recovery attempts."""
total_attempts: int = 0
successful_recoveries: int = 0
failed_recoveries: int = 0
average_confidence: float = 0.0
most_used_strategy: Optional[str] = None
total_time_saved: float = 0.0 # seconds
@property
def success_rate(self) -> float:
"""Calculate success rate."""
return (self.successful_recoveries / self.total_attempts
if self.total_attempts > 0 else 0.0)
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for serialization."""
return {
'total_attempts': self.total_attempts,
'successful_recoveries': self.successful_recoveries,
'failed_recoveries': self.failed_recoveries,
'success_rate': self.success_rate,
'average_confidence': self.average_confidence,
'most_used_strategy': self.most_used_strategy,
'total_time_saved': self.total_time_saved
}
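`to_dict`/`from_dict` above serialize enums through their `.value` and rebuild them with the `Enum(value)` constructor. A self-contained sketch of that round-trip (a trimmed-down `RecoveryMode` stand-in, not an import of the class above):

```python
from enum import Enum

class RecoveryMode(Enum):
    CONSERVATIVE = "conservative"
    BALANCED = "balanced"
    AGGRESSIVE = "aggressive"

serialized = RecoveryMode.BALANCED.value            # -> "balanced", JSON-friendly
assert RecoveryMode(serialized) is RecoveryMode.BALANCED  # lookup by value
```

Note that `Enum(value)` raises `ValueError` for an unknown value, so `from_dict` would propagate that error if a config file contains an unrecognized mode or strategy.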


@@ -0,0 +1,200 @@
"""
Template Data Models
Contains data models for workflow templates and template parameters.
"""
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Optional
from uuid import uuid4
from .visual_workflow import VisualWorkflow, ParameterType
class TemplateDifficulty(Enum):
"""Difficulty levels for templates"""
BEGINNER = 'beginner'
INTERMEDIATE = 'intermediate'
ADVANCED = 'advanced'
@dataclass
class TemplateParameter:
"""Configurable parameter for a template"""
name: str
type: ParameterType
description: str
default_value: Optional[Any] = None
# Mapping to workflow nodes
node_id: str = ""
parameter_name: str = ""
# UI hints
label: str = ""
placeholder: Optional[str] = None
required: bool = True
def to_dict(self) -> Dict[str, Any]:
return {
'name': self.name,
'type': self.type.value,
'description': self.description,
'default_value': self.default_value,
'node_id': self.node_id,
'parameter_name': self.parameter_name,
'label': self.label,
'placeholder': self.placeholder,
'required': self.required
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'TemplateParameter':
return cls(
name=data['name'],
type=ParameterType(data['type']),
description=data['description'],
default_value=data.get('default_value'),
node_id=data.get('node_id', ''),
parameter_name=data.get('parameter_name', ''),
label=data.get('label', ''),
placeholder=data.get('placeholder'),
required=data.get('required', True)
)
@dataclass
class WorkflowTemplate:
"""Template for creating workflows"""
id: str
name: str
description: str
category: str
# Template workflow structure
workflow: VisualWorkflow
# Configurable parameters
parameters: List[TemplateParameter] = field(default_factory=list)
# Metadata
tags: List[str] = field(default_factory=list)
difficulty: TemplateDifficulty = TemplateDifficulty.BEGINNER
estimated_time: int = 5 # minutes
# Usage statistics
usage_count: int = 0
rating: float = 0.0
# Timestamps
created_at: datetime = field(default_factory=datetime.now)
updated_at: datetime = field(default_factory=datetime.now)
created_by: str = "system"
def to_dict(self) -> Dict[str, Any]:
return {
'id': self.id,
'name': self.name,
'description': self.description,
'category': self.category,
'workflow': self.workflow.to_dict(),
'parameters': [p.to_dict() for p in self.parameters],
'tags': self.tags,
'difficulty': self.difficulty.value,
'estimated_time': self.estimated_time,
'usage_count': self.usage_count,
'rating': self.rating,
'created_at': self.created_at.isoformat(),
'updated_at': self.updated_at.isoformat(),
'created_by': self.created_by
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'WorkflowTemplate':
return cls(
id=data['id'],
name=data['name'],
description=data['description'],
category=data['category'],
workflow=VisualWorkflow.from_dict(data['workflow']),
parameters=[TemplateParameter.from_dict(p) for p in data.get('parameters', [])],
tags=data.get('tags', []),
difficulty=TemplateDifficulty(data.get('difficulty', 'beginner')),
estimated_time=data.get('estimated_time', 5),
usage_count=data.get('usage_count', 0),
rating=data.get('rating', 0.0),
created_at=datetime.fromisoformat(data.get('created_at', datetime.now().isoformat())),
updated_at=datetime.fromisoformat(data.get('updated_at', datetime.now().isoformat())),
created_by=data.get('created_by', 'system')
)
def instantiate(self, parameters: Dict[str, Any], workflow_name: str, created_by: str = "user") -> VisualWorkflow:
"""Create a new workflow instance from this template"""
# Create a copy of the template workflow
workflow_data = self.workflow.to_dict()
# Generate new IDs
workflow_data['id'] = str(uuid4())
workflow_data['name'] = workflow_name
workflow_data['created_at'] = datetime.now().isoformat()
workflow_data['updated_at'] = datetime.now().isoformat()
workflow_data['created_by'] = created_by
workflow_data['is_template'] = False
# Apply parameter substitutions
for param in self.parameters:
if param.name in parameters:
value = parameters[param.name]
# Find the target node and update its parameter
for node_data in workflow_data['nodes']:
if node_data['id'] == param.node_id:
node_data['parameters'][param.parameter_name] = value
break
# Generate new node and edge IDs to avoid conflicts
node_id_mapping = {}
for i, node_data in enumerate(workflow_data['nodes']):
old_id = node_data['id']
new_id = f"{workflow_data['id']}_node_{i}"
node_id_mapping[old_id] = new_id
node_data['id'] = new_id
# Update edge references
for edge_data in workflow_data['edges']:
edge_data['id'] = str(uuid4())
edge_data['source'] = node_id_mapping.get(edge_data['source'], edge_data['source'])
edge_data['target'] = node_id_mapping.get(edge_data['target'], edge_data['target'])
return VisualWorkflow.from_dict(workflow_data)
def validate(self) -> List[str]:
"""Validate template structure"""
errors = []
# Basic validation
if not self.id:
errors.append("Template ID is required")
if not self.name:
errors.append("Template name is required")
if not self.category:
errors.append("Template category is required")
# Validate workflow
workflow_errors = self.workflow.validate()
errors.extend([f"Workflow: {err}" for err in workflow_errors])
# Validate parameters
node_ids = {node.id for node in self.workflow.nodes}
for param in self.parameters:
if param.node_id and param.node_id not in node_ids:
errors.append(f"Parameter {param.name} references non-existent node {param.node_id}")
return errors
def generate_template_id() -> str:
"""Generate a unique template ID"""
return f"template_{str(uuid4())[:8]}"


@@ -0,0 +1,540 @@
"""
Visual Workflow Data Models
Contains the core data models for visual workflows, nodes, edges, and related structures.
"""
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Optional
from uuid import uuid4
from .self_healing_config import SelfHealingConfig
class NodeCategory(Enum):
"""Categories for organizing node types"""
ACTION = 'action'
LOGIC = 'logic'
DATA = 'data'
FLOW = 'flow'
INTEGRATION = 'integration'
class NodeStatus(Enum):
"""Execution status of a node"""
IDLE = 'idle'
RUNNING = 'running'
SUCCESS = 'success'
FAILED = 'failed'
SKIPPED = 'skipped'
class ParameterType(Enum):
"""Types of parameters supported"""
STRING = 'string'
NUMBER = 'number'
BOOLEAN = 'boolean'
SELECT = 'select'
TARGET = 'target'
VARIABLE = 'variable'
EXPRESSION = 'expression'
FILE = 'file'
@dataclass
class Position:
"""2D position in the canvas"""
x: float
y: float
def to_dict(self) -> Dict[str, float]:
return {'x': self.x, 'y': self.y}
@classmethod
def from_dict(cls, data: Dict[str, float]) -> 'Position':
return cls(x=data['x'], y=data['y'])
@dataclass
class Size:
"""2D size dimensions"""
width: float
height: float
def to_dict(self) -> Dict[str, float]:
return {'width': self.width, 'height': self.height}
@classmethod
def from_dict(cls, data: Dict[str, float]) -> 'Size':
return cls(width=data['width'], height=data['height'])
@dataclass
class Port:
"""Input or output port on a node"""
id: str
name: str
type: str # 'input' or 'output'
data_type: Optional[str] = None # Type of data flowing through
def to_dict(self) -> Dict[str, Any]:
return {
'id': self.id,
'name': self.name,
'type': self.type,
'data_type': self.data_type
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'Port':
return cls(
id=data['id'],
name=data['name'],
type=data['type'],
data_type=data.get('data_type')
)
@dataclass
class ValidationRule:
"""Validation rule for parameters"""
type: str # 'required', 'min', 'max', 'pattern', 'custom'
value: Any
message: str
def to_dict(self) -> Dict[str, Any]:
return {
'type': self.type,
'value': self.value,
'message': self.message
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'ValidationRule':
return cls(
type=data['type'],
value=data['value'],
message=data['message']
)
@dataclass
class ParameterDefinition:
"""Definition of a node parameter"""
name: str
type: ParameterType
required: bool
default_value: Optional[Any] = None
# Validation
validation: Optional[List[ValidationRule]] = None
# UI
label: str = ""
description: Optional[str] = None
placeholder: Optional[str] = None
# Special behavior
is_target: bool = False
is_variable: bool = False
is_expression: bool = False
def to_dict(self) -> Dict[str, Any]:
return {
'name': self.name,
'type': self.type.value,
'required': self.required,
'default_value': self.default_value,
'validation': [v.to_dict() for v in self.validation] if self.validation else None,
'label': self.label,
'description': self.description,
'placeholder': self.placeholder,
'is_target': self.is_target,
'is_variable': self.is_variable,
'is_expression': self.is_expression
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'ParameterDefinition':
validation = None
if data.get('validation'):
validation = [ValidationRule.from_dict(v) for v in data['validation']]
return cls(
name=data['name'],
type=ParameterType(data['type']),
required=data['required'],
default_value=data.get('default_value'),
validation=validation,
label=data.get('label', ''),
description=data.get('description'),
placeholder=data.get('placeholder'),
is_target=data.get('is_target', False),
is_variable=data.get('is_variable', False),
is_expression=data.get('is_expression', False)
)
@dataclass
class EdgeStyle:
"""Visual style for an edge"""
color: Optional[str] = None
width: Optional[float] = None
dashed: bool = False
def to_dict(self) -> Dict[str, Any]:
return {
'color': self.color,
'width': self.width,
'dashed': self.dashed
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'EdgeStyle':
return cls(
color=data.get('color'),
width=data.get('width'),
dashed=data.get('dashed', False)
)
@dataclass
class EdgeCondition:
"""Condition for edge execution"""
type: str # 'always', 'success', 'failure', 'expression'
expression: Optional[str] = None
def to_dict(self) -> Dict[str, Any]:
return {
'type': self.type,
'expression': self.expression
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'EdgeCondition':
return cls(
type=data['type'],
expression=data.get('expression')
)
@dataclass
class VisualEdge:
"""Connection between nodes"""
id: str
source: str # node ID
target: str # node ID
source_port: str
target_port: str
# Condition
condition: Optional[EdgeCondition] = None
# Visual style
style: Optional[EdgeStyle] = None
# State
selected: bool = False
animated: bool = False
def to_dict(self) -> Dict[str, Any]:
return {
'id': self.id,
'source': self.source,
'target': self.target,
'source_port': self.source_port,
'target_port': self.target_port,
'condition': self.condition.to_dict() if self.condition else None,
'style': self.style.to_dict() if self.style else None,
'selected': self.selected,
'animated': self.animated
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'VisualEdge':
condition = None
if data.get('condition'):
condition = EdgeCondition.from_dict(data['condition'])
style = None
if data.get('style'):
style = EdgeStyle.from_dict(data['style'])
        # Handle missing ports (VWB format vs the standard format)
        # VWB uses sourceHandle/targetHandle; the standard format uses source_port/target_port
source_port = data.get('source_port') or data.get('sourceHandle', 'out')
target_port = data.get('target_port') or data.get('targetHandle', 'in')
return cls(
id=data['id'],
source=data['source'],
target=data['target'],
source_port=source_port,
target_port=target_port,
condition=condition,
style=style,
selected=data.get('selected', False),
animated=data.get('animated', False)
)
@dataclass
class VisualNode:
"""Visual representation of a workflow node"""
id: str
type: str # 'click', 'type', 'wait', 'if', 'loop', etc.
# Visual position
position: Position
size: Size
# Configuration
parameters: Dict[str, Any]
# Connections
input_ports: List[Port]
output_ports: List[Port]
# Self-healing configuration
self_healing: Optional[SelfHealingConfig] = None
# Visual state
selected: bool = False
highlighted: bool = False
status: Optional[NodeStatus] = None
# Metadata
label: Optional[str] = None
description: Optional[str] = None
color: Optional[str] = None
    # Full VWB payload (preserves visualSelection, isVWBCatalogAction, etc.)
data: Optional[Dict[str, Any]] = None
def to_dict(self) -> Dict[str, Any]:
result = {
'id': self.id,
'type': self.type,
'position': self.position.to_dict(),
'size': self.size.to_dict(),
'parameters': self.parameters,
'input_ports': [p.to_dict() for p in self.input_ports],
'output_ports': [p.to_dict() for p in self.output_ports],
'self_healing': self.self_healing.to_dict() if self.self_healing else None,
'selected': self.selected,
'highlighted': self.highlighted,
'status': self.status.value if self.status else None,
'label': self.label,
'description': self.description,
'color': self.color
}
        # Include the full VWB payload if present
if self.data:
result['data'] = self.data
return result
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'VisualNode':
status = None
if data.get('status'):
status = NodeStatus(data['status'])
self_healing = None
if data.get('self_healing'):
self_healing = SelfHealingConfig.from_dict(data['self_healing'])
        # Handle the different data formats (VWB vs standard)
        # VWB format: data.parameters holds the parameters
        # Standard format: parameters sits directly on the node
if 'data' in data and isinstance(data['data'], dict):
parameters = data['data'].get('parameters', data.get('parameters', {}))
else:
parameters = data.get('parameters', {})
        # Handle missing size (default: 200x80)
if 'size' in data:
size = Size.from_dict(data['size'])
else:
size = Size(width=200, height=80)
        # Handle missing ports (default: empty lists)
input_ports = [Port.from_dict(p) for p in data.get('input_ports', [])]
output_ports = [Port.from_dict(p) for p in data.get('output_ports', [])]
        # Preserve the full VWB payload
vwb_data = data.get('data') if isinstance(data.get('data'), dict) else None
return cls(
id=data['id'],
type=data['type'],
position=Position.from_dict(data['position']),
size=size,
parameters=parameters,
input_ports=input_ports,
output_ports=output_ports,
self_healing=self_healing,
selected=data.get('selected', False),
highlighted=data.get('highlighted', False),
status=status,
label=data.get('label') or data.get('name'),
description=data.get('description'),
color=data.get('color'),
data=vwb_data
)
@dataclass
class Variable:
"""Workflow variable"""
name: str
type: str # 'string', 'number', 'boolean', 'object'
value: Any
description: Optional[str] = None
def to_dict(self) -> Dict[str, Any]:
return {
'name': self.name,
'type': self.type,
'value': self.value,
'description': self.description
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'Variable':
return cls(
name=data['name'],
type=data['type'],
value=data['value'],
description=data.get('description')
)
@dataclass
class WorkflowSettings:
"""Workflow execution settings"""
timeout: int = 300000 # 5 minutes default
retry_on_failure: bool = True
max_retries: int = 3
enable_self_healing: bool = True
enable_analytics: bool = True
def to_dict(self) -> Dict[str, Any]:
return {
'timeout': self.timeout,
'retry_on_failure': self.retry_on_failure,
'max_retries': self.max_retries,
'enable_self_healing': self.enable_self_healing,
'enable_analytics': self.enable_analytics
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'WorkflowSettings':
return cls(
timeout=data.get('timeout', 300000),
retry_on_failure=data.get('retry_on_failure', True),
max_retries=data.get('max_retries', 3),
enable_self_healing=data.get('enable_self_healing', True),
enable_analytics=data.get('enable_analytics', True)
)
@dataclass
class VisualWorkflow:
"""Complete visual workflow representation"""
id: str
name: str
description: Optional[str]
version: str
created_at: datetime
updated_at: datetime
created_by: str
# Visual structure
nodes: List[VisualNode]
edges: List[VisualEdge]
# Configuration
variables: List[Variable]
settings: WorkflowSettings
# Metadata
tags: List[str] = field(default_factory=list)
category: Optional[str] = None
is_template: bool = False
def to_dict(self) -> Dict[str, Any]:
return {
'id': self.id,
'name': self.name,
'description': self.description,
'version': self.version,
'created_at': self.created_at.isoformat(),
'updated_at': self.updated_at.isoformat(),
'created_by': self.created_by,
'nodes': [n.to_dict() for n in self.nodes],
'edges': [e.to_dict() for e in self.edges],
'variables': [v.to_dict() for v in self.variables],
'settings': self.settings.to_dict(),
'tags': self.tags,
'category': self.category,
'is_template': self.is_template
}
@classmethod
def from_dict(cls, data: Dict[str, Any]) -> 'VisualWorkflow':
return cls(
id=data['id'],
name=data['name'],
description=data.get('description'),
version=data['version'],
created_at=datetime.fromisoformat(data['created_at']),
updated_at=datetime.fromisoformat(data['updated_at']),
created_by=data['created_by'],
nodes=[VisualNode.from_dict(n) for n in data['nodes']],
edges=[VisualEdge.from_dict(e) for e in data['edges']],
variables=[Variable.from_dict(v) for v in data['variables']],
settings=WorkflowSettings.from_dict(data['settings']),
tags=data.get('tags', []),
category=data.get('category'),
is_template=data.get('is_template', False)
)
def validate(self) -> List[str]:
"""Validate workflow structure and return list of errors"""
errors = []
# Check required fields
if not self.id:
errors.append("Workflow ID is required")
if not self.name:
errors.append("Workflow name is required")
if not self.version:
errors.append("Workflow version is required")
# Validate nodes
node_ids = {node.id for node in self.nodes}
for node in self.nodes:
if not node.id:
                errors.append("Node missing ID")
if not node.type:
errors.append(f"Node {node.id} missing type")
# Validate edges
for edge in self.edges:
if edge.source not in node_ids:
errors.append(f"Edge {edge.id} references non-existent source node {edge.source}")
if edge.target not in node_ids:
errors.append(f"Edge {edge.id} references non-existent target node {edge.target}")
# Validate variables
variable_names = {var.name for var in self.variables}
if len(variable_names) != len(self.variables):
errors.append("Duplicate variable names found")
return errors
def generate_id(prefix: str = "wf") -> str:
    """Generate a unique ID with the given prefix"""
    return f"{prefix}_{uuid4().hex[:12]}"
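Every model in this module follows the same `to_dict`/`from_dict` convention, which is what makes workflows survive a JSON round trip. A minimal, self-contained sketch of the pattern (`Var` is a stand-in mirroring the `Variable` dataclass above, not the real class):

```python
import json
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class Var:
    """Stand-in for the Variable model above."""
    name: str
    type: str
    value: Any
    description: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        return {'name': self.name, 'type': self.type,
                'value': self.value, 'description': self.description}

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> 'Var':
        # Optional fields use .get() so older payloads still load
        return cls(name=data['name'], type=data['type'],
                   value=data['value'], description=data.get('description'))

v = Var(name='count', type='number', value=3)
restored = Var.from_dict(json.loads(json.dumps(v.to_dict())))
assert restored == v
```

The `.get()` calls on optional fields are the reason `VisualNode.from_dict` tolerates both VWB and standard payloads without a schema migration.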

View File

@@ -0,0 +1,16 @@
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
--verbose
--cov=.
--cov-report=html
--cov-report=term-missing
--cov-fail-under=70
markers =
unit: Unit tests
integration: Integration tests
property: Property-based tests
slow: Slow running tests
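The markers declared above let you slice the suite from the command line. A hypothetical test file showing how they attach to tests (function names are illustrative only):

```python
import pytest

@pytest.mark.unit
def test_addition_is_commutative():
    assert 2 + 3 == 3 + 2

@pytest.mark.slow
def test_large_sum():
    # Marked slow: deselect with `pytest -m "not slow"`
    assert sum(range(1_000_000)) == 499_999_500_000
```

`pytest -m unit` then runs only the unit-marked tests; note that `addopts` enforces the 70% coverage floor (`--cov-fail-under=70`) on every run, including filtered ones.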

View File

@@ -0,0 +1,37 @@
# Flask and Extensions
Flask==3.0.0
Flask-SocketIO==5.3.5
Flask-CORS==4.0.0
python-socketio==5.10.0
python-engineio==4.8.0
# Database
SQLAlchemy==2.0.23
Flask-SQLAlchemy==3.1.1
Flask-Migrate==4.0.5
# Validation and Serialization
marshmallow==3.20.1
jsonschema==4.20.0
pydantic==2.5.2
# Redis for caching
redis==5.0.1
Flask-Caching==2.1.0
# Utilities
python-dotenv==1.0.0
PyYAML==6.0.1
python-dateutil==2.8.2
# Testing
pytest==7.4.3
pytest-cov==4.1.0
pytest-flask==1.3.0
pytest-mock==3.12.0
hypothesis==6.92.1
# Development
black==23.12.1
flake8==6.1.0
mypy==1.7.1

View File

@@ -0,0 +1,29 @@
#!/bin/bash
#
# Restart Visual Workflow Builder Backend Server
#
# This script stops and starts the Flask backend server
#
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Colors for output
GREEN='\033[0;32m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
echo "=========================================="
echo "Visual Workflow Builder - Restart Server"
echo "=========================================="
# Stop the server
echo -e "${BLUE}Stopping server...${NC}"
"$SCRIPT_DIR/stop.sh"
# Wait a moment
sleep 2
# Start the server
echo ""
echo -e "${BLUE}Starting server...${NC}"
"$SCRIPT_DIR/start.sh"

View File

@@ -0,0 +1,61 @@
#!/bin/bash
# Script that runs the full backend test suite
# Visual Workflow Builder - Task 19 checkpoint
echo "============================================================"
echo "🧪 Running all backend tests"
echo "============================================================"
echo ""
# Counters
TOTAL_TESTS=0
PASSED_TESTS=0
FAILED_TESTS=0
# Runs a single test file and records the result
run_test() {
    local test_file=$1
    local test_name=$2
    echo "📋 Test: $test_name"
    echo "   File: $test_file"
    if python "$test_file" > /tmp/test_output.txt 2>&1; then
        echo "   ✅ PASSED"
        PASSED_TESTS=$((PASSED_TESTS + 1))
    else
        echo "   ❌ FAILED"
        echo "   Output:"
        tail -10 /tmp/test_output.txt | sed 's/^/      /'
        FAILED_TESTS=$((FAILED_TESTS + 1))
    fi
    TOTAL_TESTS=$((TOTAL_TESTS + 1))
    echo ""
}
# Run every test
run_test "test_models_manual.py" "Data models"
run_test "test_serialization.py" "Serialization and persistence"
run_test "test_converter.py" "Visual → WorkflowGraph conversion"
run_test "test_logic_nodes.py" "Logic nodes (Condition/Loop)"
run_test "test_execution_integration.py" "ExecutionLoop integration"
# Summary
echo "============================================================"
echo "📊 TEST SUMMARY"
echo "============================================================"
echo "Total tests:  $TOTAL_TESTS"
echo "Passed:       $PASSED_TESTS"
echo "Failed:       $FAILED_TESTS"
echo ""
if [ $FAILED_TESTS -eq 0 ]; then
    echo "🎉 ALL TESTS PASS!"
    echo "============================================================"
    exit 0
else
    echo "⚠️  SOME TESTS FAILED"
    echo "============================================================"
    exit 1
fi

View File

@@ -0,0 +1,4 @@
"""Backend services package.
Authors: Dom, Alice, Kiro - January 8, 2026
"""

View File

@@ -0,0 +1,328 @@
"""
Server-side storage service for visual anchor images.
Authors: Dom, Alice, Kiro - January 21, 2026
This service stores anchor images on disk, avoiding the memory
problems caused by keeping base64 data inside workflows.
On-disk layout:
/backend/data/anchor_images/
    /{anchor_id}/
        original.png    # Original image (crop of the selected area)
        thumbnail.jpg   # 200x150, JPEG q80 (~10-30 KB)
        metadata.json   # dimensions, timestamp, bounding_box
"""
import os
import json
import base64
import uuid
import shutil
from datetime import datetime
from pathlib import Path
from typing import Optional, Dict, Any, Tuple
from PIL import Image
from io import BytesIO
# Configuration
DATA_DIR = Path(__file__).parent.parent / 'data' / 'anchor_images'
THUMBNAIL_SIZE = (200, 150)
THUMBNAIL_QUALITY = 80
def ensure_data_dir():
    """Ensure the data directory exists."""
    DATA_DIR.mkdir(parents=True, exist_ok=True)
def generate_anchor_id() -> str:
    """Generate a unique anchor ID."""
    return f"anchor_{uuid.uuid4().hex[:12]}_{int(datetime.now().timestamp())}"
def decode_base64_image(image_base64: str) -> Image.Image:
    """Decode a base64 image into a PIL Image object."""
    # Strip the data:image/...;base64, prefix if present
    if ',' in image_base64:
        image_base64 = image_base64.split(',', 1)[1]
    image_data = base64.b64decode(image_base64)
    return Image.open(BytesIO(image_data))
def crop_image(image: Image.Image, bounding_box: Dict[str, int], margin: int = 10) -> Image.Image:
    """
    Crop the image to the bounding box, with a margin.
    Args:
        image: Source PIL image
        bounding_box: Dictionary with x, y, width, height
        margin: Margin in pixels around the area
    Returns:
        Cropped image
    """
    x = max(0, int(bounding_box['x']) - margin)
    y = max(0, int(bounding_box['y']) - margin)
    width = int(bounding_box['width']) + margin * 2
    height = int(bounding_box['height']) + margin * 2
    # Make sure we do not exceed the image bounds
    right = min(image.width, x + width)
    bottom = min(image.height, y + height)
    return image.crop((x, y, right, bottom))
def create_thumbnail(image: Image.Image, size: Tuple[int, int] = THUMBNAIL_SIZE) -> Image.Image:
    """
    Create a thumbnail of the image.
    Args:
        image: Source PIL image
        size: Target size (width, height)
    Returns:
        Thumbnail image
    """
    # Use LANCZOS for better resampling quality
    thumbnail = image.copy()
    thumbnail.thumbnail(size, Image.Resampling.LANCZOS)
    return thumbnail
def save_anchor_image(
    anchor_id: Optional[str],
    image_base64: str,
    bounding_box: Dict[str, int],
    metadata: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
    """
    Save an anchor image to disk.
    Args:
        anchor_id: Anchor ID (generated if None)
        image_base64: Base64 image (full screenshot)
        bounding_box: Selection area {x, y, width, height}
        metadata: Additional metadata
    Returns:
        Dictionary with anchor_id, URLs and metadata
    """
    ensure_data_dir()
    # Generate the ID if needed
    if not anchor_id:
        anchor_id = generate_anchor_id()
    # Create the anchor directory
    anchor_dir = DATA_DIR / anchor_id
    anchor_dir.mkdir(parents=True, exist_ok=True)
    try:
        # Decode the image
        full_image = decode_base64_image(image_base64)
        # Crop to the bounding box
        cropped_image = crop_image(full_image, bounding_box)
        # Convert to RGB if needed (for JPEG)
        if cropped_image.mode in ('RGBA', 'P'):
            cropped_rgb = Image.new('RGB', cropped_image.size, (255, 255, 255))
            if cropped_image.mode == 'RGBA':
                cropped_rgb.paste(cropped_image, mask=cropped_image.split()[3])
            else:
                cropped_rgb.paste(cropped_image)
            cropped_image = cropped_rgb
        # Save the original (cropped) image
        original_path = anchor_dir / 'original.png'
        cropped_image.save(str(original_path), 'PNG', optimize=True)
        # Create and save the thumbnail
        thumbnail = create_thumbnail(cropped_image)
        thumbnail_path = anchor_dir / 'thumbnail.jpg'
        thumbnail.save(str(thumbnail_path), 'JPEG', quality=THUMBNAIL_QUALITY, optimize=True)
        # Build the metadata
        meta = {
            'anchor_id': anchor_id,
            'bounding_box': bounding_box,
            'original_size': {
                'width': cropped_image.width,
                'height': cropped_image.height
            },
            'thumbnail_size': {
                'width': thumbnail.width,
                'height': thumbnail.height
            },
            'created_at': datetime.now().isoformat(),
            'original_file_size': original_path.stat().st_size,
            'thumbnail_file_size': thumbnail_path.stat().st_size,
        }
        # Attach any extra metadata
        if metadata:
            meta['extra'] = metadata
        # Save the metadata
        metadata_path = anchor_dir / 'metadata.json'
        with open(metadata_path, 'w', encoding='utf-8') as f:
            json.dump(meta, f, indent=2, ensure_ascii=False)
        # Return the result
        return {
            'success': True,
            'anchor_id': anchor_id,
            'thumbnail_url': f'/api/anchor-images/{anchor_id}/thumbnail',
            'original_url': f'/api/anchor-images/{anchor_id}/original',
            'metadata': meta
        }
    except Exception as e:
        # Clean up on error
        if anchor_dir.exists():
            shutil.rmtree(anchor_dir)
        raise ValueError(f"Failed to save anchor image: {str(e)}")
def get_thumbnail_path(anchor_id: str) -> Optional[Path]:
    """
    Get the path of the thumbnail file.
    Args:
        anchor_id: Anchor ID
    Returns:
        File path, or None if it does not exist
    """
    path = DATA_DIR / anchor_id / 'thumbnail.jpg'
    return path if path.exists() else None
def get_original_path(anchor_id: str) -> Optional[Path]:
    """
    Get the path of the original image file.
    Args:
        anchor_id: Anchor ID
    Returns:
        File path, or None if it does not exist
    """
    path = DATA_DIR / anchor_id / 'original.png'
    return path if path.exists() else None
def get_anchor_metadata(anchor_id: str) -> Optional[Dict[str, Any]]:
    """
    Get the metadata of an anchor.
    Args:
        anchor_id: Anchor ID
    Returns:
        Metadata, or None if it does not exist
    """
    path = DATA_DIR / anchor_id / 'metadata.json'
    if not path.exists():
        return None
    with open(path, 'r', encoding='utf-8') as f:
        return json.load(f)
def delete_anchor_image(anchor_id: str) -> bool:
    """
    Delete an anchor image and its associated files.
    Args:
        anchor_id: Anchor ID
    Returns:
        True if deleted, False if it does not exist
    """
    anchor_dir = DATA_DIR / anchor_id
    if not anchor_dir.exists():
        return False
    shutil.rmtree(anchor_dir)
    return True
def list_anchor_images() -> list:
    """
    List all stored anchor images.
    Returns:
        List of metadata for every anchor
    """
    ensure_data_dir()
    anchors = []
    for anchor_dir in DATA_DIR.iterdir():
        if anchor_dir.is_dir():
            metadata = get_anchor_metadata(anchor_dir.name)
            if metadata:
                anchors.append(metadata)
    # Sort by creation date, newest first
    anchors.sort(key=lambda x: x.get('created_at', ''), reverse=True)
    return anchors
def cleanup_old_anchors(max_age_days: int = 30) -> int:
    """
    Remove anchor images older than the given age.
    Args:
        max_age_days: Maximum age in days
    Returns:
        Number of anchors deleted
    """
    ensure_data_dir()
    from datetime import timedelta
    cutoff = datetime.now() - timedelta(days=max_age_days)
    deleted = 0
    for anchor_dir in DATA_DIR.iterdir():
        if anchor_dir.is_dir():
            metadata = get_anchor_metadata(anchor_dir.name)
            if metadata:
                created_at = datetime.fromisoformat(metadata.get('created_at', datetime.now().isoformat()))
                if created_at < cutoff:
                    shutil.rmtree(anchor_dir)
                    deleted += 1
    return deleted
def get_storage_stats() -> Dict[str, Any]:
    """
    Get statistics about anchor storage.
    Returns:
        Dictionary with the statistics
    """
    ensure_data_dir()
    total_size = 0
    count = 0
    for anchor_dir in DATA_DIR.iterdir():
        if anchor_dir.is_dir():
            count += 1
            for file in anchor_dir.iterdir():
                if file.is_file():
                    total_size += file.stat().st_size
    return {
        'anchor_count': count,
        'total_size_bytes': total_size,
        'total_size_mb': round(total_size / (1024 * 1024), 2),
        'data_directory': str(DATA_DIR),
    }
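The crop-with-margin logic above first expands the selection by the margin, then clamps the result to the image bounds. The arithmetic can be sketched standalone (pure Python, no PIL; the 1920x1080 dimensions are just an example):

```python
def crop_box(img_w, img_h, bb, margin=10):
    """Mirror of crop_image's bounds arithmetic: expand the
    bounding box by `margin`, then clamp to the image."""
    x = max(0, int(bb['x']) - margin)
    y = max(0, int(bb['y']) - margin)
    w = int(bb['width']) + margin * 2
    h = int(bb['height']) + margin * 2
    right = min(img_w, x + w)
    bottom = min(img_h, y + h)
    return (x, y, right, bottom)

# Box near the top-left corner: the left/top margin is clamped at 0
box = crop_box(1920, 1080, {'x': 5, 'y': 5, 'width': 100, 'height': 50})
```

For a selection touching an image edge the crop simply loses part of its margin rather than raising, which keeps `Image.crop` within valid coordinates.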

View File

@@ -0,0 +1,730 @@
"""
Visual to WorkflowGraph Converter - Visual Workflow Builder
Converts visual workflows into executable WorkflowGraphs.
Requirements: 6.1, 18.1
"""
import sys
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Any
from datetime import datetime
# Add the repository root to sys.path so the core modules can be imported
sys.path.insert(0, str(Path(__file__).parent.parent.parent.parent))
from core.models.workflow_graph import (
Workflow,
WorkflowNode,
WorkflowEdge,
Action,
TargetSpec,
ScreenTemplate,
WindowConstraint,
TextConstraint,
UIConstraint,
EmbeddingPrototype,
EdgeConstraints,
PostConditions,
EdgeStats,
SafetyRules,
WorkflowStats,
LearningConfig
)
from models.visual_workflow import (
VisualWorkflow,
VisualNode,
VisualEdge
)
try:
from .self_healing_converter import get_self_healing_converter
SELF_HEALING_AVAILABLE = True
except ImportError:
SELF_HEALING_AVAILABLE = False
def get_self_healing_converter():
return None
class ConversionError(Exception):
    """Raised when a conversion fails"""
    pass
class VisualToGraphConverter:
    """
    Converts visual workflows into WorkflowGraphs.
    Requirements: 6.1, 18.1
    """
    # Mapping of visual node types to action types
    NODE_TYPE_TO_ACTION = {
        'click': 'mouse_click',
        'type': 'text_input',
        'wait': 'wait',
        'navigate': 'navigate',
        'extract': 'extract_data',
        'variable': 'set_variable',
        'condition': 'evaluate_condition',
        'loop': 'execute_loop',
        'validate': 'key_press',
        'scroll': 'scroll',
        'screenshot': 'screenshot',
        'transform': 'transform_data',
        'api': 'api_call',
        'database': 'database_query',
        'start': 'workflow_start',
        'end': 'workflow_end'
    }
    # Logic node types
    LOGIC_NODE_TYPES = {'condition', 'loop'}
    def __init__(self):
        """Initialize the converter"""
        self.errors: List[str] = []
        self.warnings: List[str] = []
    def convert(self, visual_workflow: VisualWorkflow) -> Workflow:
        """
        Convert a VisualWorkflow into an executable Workflow.
        Args:
            visual_workflow: The visual workflow to convert
        Returns:
            Executable Workflow
        Raises:
            ConversionError: If the conversion fails
        """
        self.errors = []
        self.warnings = []
        # Validate the structure before converting
        validation_errors = visual_workflow.validate()
        if validation_errors:
            raise ConversionError(f"Invalid workflow: {', '.join(validation_errors)}")
        # Make sure there is at least one node
        if not visual_workflow.nodes:
            raise ConversionError("The workflow contains no nodes")
        # Convert the nodes
        workflow_nodes = self._convert_nodes(visual_workflow)
        # Convert the edges
        workflow_edges = self._convert_edges(visual_workflow, workflow_nodes)
        # Determine the entry and end nodes
        entry_nodes, end_nodes = self._determine_entry_exit_nodes(
            visual_workflow, workflow_nodes, workflow_edges
        )
        # Detect and configure loops and conditionals
        loops, conditionals = self._detect_logic_structures(
            visual_workflow, workflow_nodes, workflow_edges
        )
        # Create the workflow
        workflow = Workflow(
            workflow_id=visual_workflow.id,
            name=visual_workflow.name,
            description=visual_workflow.description or "",
            version=int(visual_workflow.version.split('.')[0]),  # "1.0.0" -> 1
            learning_state="OBSERVATION",
            created_at=visual_workflow.created_at,
            updated_at=visual_workflow.updated_at,
            entry_nodes=entry_nodes,
            end_nodes=end_nodes,
            nodes=workflow_nodes,
            edges=workflow_edges,
            safety_rules=self._create_safety_rules(visual_workflow),
            stats=WorkflowStats(),
            learning=LearningConfig(),
            loops=loops,
            conditionals=conditionals,
            metadata={
                'created_by': visual_workflow.created_by,
                'tags': visual_workflow.tags,
                'category': visual_workflow.category,
                'is_template': visual_workflow.is_template,
                'source': 'visual_workflow_builder'
            }
        )
        # Apply workflow-level self-healing settings
        if SELF_HEALING_AVAILABLE:
            self_healing_converter = get_self_healing_converter()
            if self_healing_converter:
                workflow = self_healing_converter.convert_workflow_settings(visual_workflow, workflow)
        return workflow
    def _detect_logic_structures(
        self,
        visual_workflow: VisualWorkflow,
        workflow_nodes: List[WorkflowNode],
        workflow_edges: List[WorkflowEdge]
    ) -> Tuple[Dict[str, Any], Dict[str, Any]]:
        """
        Detect and configure logic structures (loops and conditionals).
        Requirements: 8.1, 8.2, 9.1, 9.2
        """
        loops = {}
        conditionals = {}
        for node in workflow_nodes:
            visual_type = node.metadata.get('visual_type')
            parameters = node.metadata.get('parameters', {})
            if visual_type == 'condition':
                # Configure a conditional node (requirements 8.1, 8.2)
                conditionals[node.node_id] = {
                    'expression': parameters.get('expression', ''),
                    'true_branch': self._find_branch_target(node.node_id, 'true', workflow_edges),
                    'false_branch': self._find_branch_target(node.node_id, 'false', workflow_edges),
                    'metadata': {
                        'visual_position': node.metadata.get('visual_position'),
                        'condition_type': parameters.get('type', 'expression')
                    }
                }
            elif visual_type == 'loop':
                # Configure a loop node (requirements 9.1, 9.2)
                loop_type = parameters.get('type', 'repeat')
                loop_config = {
                    'loop_type': loop_type,
                    'body_nodes': self._find_loop_body(node.node_id, workflow_edges),
                    'exit_node': self._find_loop_exit(node.node_id, workflow_edges),
                    'metadata': {
                        'visual_position': node.metadata.get('visual_position')
                    }
                }
                # Add the parameters specific to the loop type
                if loop_type == 'repeat':
                    loop_config['count'] = parameters.get('count', 1)
                elif loop_type == 'while':
                    loop_config['condition'] = parameters.get('condition', '')
                    loop_config['max_iterations'] = parameters.get('max_iterations', 100)
                elif loop_type == 'for-each':
                    loop_config['collection'] = parameters.get('collection', '')
                    loop_config['item_variable'] = parameters.get('item_variable', 'item')
                loops[node.node_id] = loop_config
        return loops, conditionals
    def _find_branch_target(
        self,
        node_id: str,
        branch_type: str,
        edges: List[WorkflowEdge]
    ) -> Optional[str]:
        """Find the target node of a condition branch"""
        for edge in edges:
            if edge.from_node == node_id:
                # Check the source port or the metadata
                source_port = edge.metadata.get('source_port', '')
                if branch_type in source_port.lower():
                    return edge.to_node
                # Check the pre-conditions
                if 'condition_result' in edge.constraints.pre_conditions:
                    expected_result = branch_type == 'true'
                    if edge.constraints.pre_conditions['condition_result'] == expected_result:
                        return edge.to_node
        return None
    def _find_loop_body(
        self,
        loop_node_id: str,
        edges: List[WorkflowEdge]
    ) -> List[str]:
        """Find the nodes that make up the loop body"""
        body_nodes = []
        # Find the first body node (edge whose port is 'body' or 'loop')
        for edge in edges:
            if edge.from_node == loop_node_id:
                source_port = edge.metadata.get('source_port', '')
                if 'body' in source_port.lower() or 'loop' in source_port.lower():
                    body_nodes.append(edge.to_node)
                    # TODO: Walk the graph to collect every node in the body
                    break
        return body_nodes
    def _find_loop_exit(
        self,
        loop_node_id: str,
        edges: List[WorkflowEdge]
    ) -> Optional[str]:
        """Find the loop's exit node"""
        for edge in edges:
            if edge.from_node == loop_node_id:
                source_port = edge.metadata.get('source_port', '')
                # Look for the exit port (exit, out_exit, etc.)
                if 'exit' in source_port.lower():
                    return edge.to_node
                # If there is no body port, this is probably the exit
                if 'body' not in source_port.lower() and 'loop' not in source_port.lower():
                    # Make sure it is not already part of the body
                    body_targets = self._find_loop_body(loop_node_id, edges)
                    if edge.to_node not in body_targets:
                        return edge.to_node
        return None
    def _convert_nodes(self, visual_workflow: VisualWorkflow) -> List[WorkflowNode]:
        """Convert visual nodes into WorkflowNodes"""
        workflow_nodes = []
        for vnode in visual_workflow.nodes:
            try:
                wnode = self._convert_node(vnode)
                workflow_nodes.append(wnode)
            except Exception as e:
                self.errors.append(f"Failed to convert node {vnode.id}: {str(e)}")
        if self.errors:
            raise ConversionError(f"Errors while converting nodes: {', '.join(self.errors)}")
        return workflow_nodes
    def _convert_node(self, vnode: VisualNode) -> WorkflowNode:
        """Convert a VisualNode into a WorkflowNode"""
        # Create a basic screen template.
        # A full implementation would use embeddings and constraints.
        template = ScreenTemplate(
            window=WindowConstraint(),
            text=TextConstraint(),
            ui=UIConstraint(),
            embedding=EmbeddingPrototype(
                provider="visual_workflow_builder",
                vector_id=f"node_{vnode.id}",
                min_cosine_similarity=0.85,
                sample_count=0
            )
        )
        # Create the WorkflowNode
        wnode = WorkflowNode(
            node_id=vnode.id,
            name=vnode.label or vnode.type,
            description=vnode.description or f"Node of type {vnode.type}",
            template=template,
            is_entry=False,  # Determined later
            is_end=False,  # Determined later
            metadata={
                'visual_type': vnode.type,
                'visual_position': vnode.position.to_dict(),
                'visual_size': vnode.size.to_dict(),
                'parameters': vnode.parameters,
                'color': vnode.color
            }
        )
        # Apply the self-healing configuration
        if SELF_HEALING_AVAILABLE:
            self_healing_converter = get_self_healing_converter()
            if self_healing_converter:
                wnode = self_healing_converter.convert_node_config(vnode, wnode)
        return wnode
    def _convert_edges(
        self,
        visual_workflow: VisualWorkflow,
        workflow_nodes: List[WorkflowNode]
    ) -> List[WorkflowEdge]:
        """Convert visual edges into WorkflowEdges"""
        workflow_edges = []
        # Build a node_id -> node mapping for validation
        node_map = {node.node_id: node for node in workflow_nodes}
        for vedge in visual_workflow.edges:
            try:
                # Make sure the source and target nodes exist
                if vedge.source not in node_map:
                    raise ConversionError(f"Source node {vedge.source} not found")
                if vedge.target not in node_map:
                    raise ConversionError(f"Target node {vedge.target} not found")
                source_node = node_map[vedge.source]
                target_node = node_map[vedge.target]
                # Create the action based on the source node's type
                action = self._create_action_from_node(source_node, visual_workflow)
                # Create the constraints (with condition handling)
                constraints = self._create_edge_constraints(vedge, source_node)
                # Create the post-conditions
                post_conditions = PostConditions(
                    expected_node=target_node.node_id,
                    timeout_ms=3000
                )
                # Create the WorkflowEdge
                wedge = WorkflowEdge(
                    edge_id=vedge.id,
                    from_node=vedge.source,
                    to_node=vedge.target,
                    action=action,
                    constraints=constraints,
                    post_conditions=post_conditions,
                    stats=EdgeStats(),
                    metadata={
                        'visual_condition': vedge.condition.to_dict() if vedge.condition else None,
                        'visual_style': vedge.style.to_dict() if vedge.style else None,
                        'source_port': vedge.source_port,
                        'target_port': vedge.target_port
                    }
                )
                workflow_edges.append(wedge)
            except Exception as e:
                self.errors.append(f"Failed to convert edge {vedge.id}: {str(e)}")
        if self.errors:
            raise ConversionError(f"Errors while converting edges: {', '.join(self.errors)}")
        return workflow_edges
    def _create_edge_constraints(
        self,
        vedge: VisualEdge,
        source_node: WorkflowNode
    ) -> EdgeConstraints:
        """Create the edge constraints, with support for conditions"""
        constraints = EdgeConstraints(
            required_confidence=0.8,
            max_wait_time_ms=5000
        )
        # If the source node is a condition, attach the condition to the edge
        visual_type = source_node.metadata.get('visual_type')
        if visual_type == 'condition':
            # Decide whether this is the true or the false branch based on the port
            source_port = vedge.source_port
            if 'true' in source_port.lower() or source_port == 'out_true':
                constraints.pre_conditions['condition_result'] = True
            elif 'false' in source_port.lower() or source_port == 'out_false':
                constraints.pre_conditions['condition_result'] = False
        # If the edge carries an explicit condition, add it too
        if vedge.condition:
            if vedge.condition.type == 'expression' and vedge.condition.expression:
                constraints.pre_conditions['expression'] = vedge.condition.expression
            elif vedge.condition.type in ['success', 'failure']:
                constraints.pre_conditions['execution_status'] = vedge.condition.type
        return constraints
def _create_action_from_node(
self,
node: WorkflowNode,
visual_workflow: VisualWorkflow
) -> Action:
        """Create an Action from the node's type and parameters."""
        visual_type = node.metadata.get('visual_type', 'unknown')
        parameters = node.metadata.get('parameters', {})
        # Resolve the action type
        action_type = self.NODE_TYPE_TO_ACTION.get(visual_type, 'mouse_click')
        # Build the TargetSpec
        target_spec = self._create_target_spec(visual_type, parameters)
        # Build the action parameters
        action_params = self._create_action_parameters(visual_type, parameters, visual_workflow)
return Action(
type=action_type,
target=target_spec,
parameters=action_params
)
def _create_target_spec(self, node_type: str, parameters: Dict[str, Any]) -> TargetSpec:
        """Create a TargetSpec from the node parameters."""
        # Extract the target information
        target_info = parameters.get('target', {})
        # A string target is a simple selector
        if isinstance(target_info, str):
            return TargetSpec(
                by_text=target_info,
                selection_policy="first"
            )
        # A dict target carries detailed selectors
        if isinstance(target_info, dict):
            return TargetSpec(
                by_role=target_info.get('role'),
                by_text=target_info.get('text'),
                by_position=tuple(target_info['position']) if 'position' in target_info else None,
                selection_policy=target_info.get('selection_policy', 'first')
            )
        # Fall back to a generic target
        return TargetSpec(
            by_role="button",
            selection_policy="first"
        )
def _create_action_parameters(
self,
node_type: str,
parameters: Dict[str, Any],
visual_workflow: VisualWorkflow
) -> Dict[str, Any]:
        """Build the action parameters, with variable substitution."""
        action_params = {}
        if node_type == 'click':
            # Click actions
            action_params['click_type'] = parameters.get('click_type', 'left')
            action_params['timeout_ms'] = parameters.get('timeout', 5000)
            action_params['retries'] = parameters.get('retries', 3)
            action_params['wait_after_ms'] = parameters.get('wait_after', 500)
        elif node_type == 'type':
            # Text input actions
            text = parameters.get('text', '')
            text = self._substitute_variables(text, visual_workflow)
            action_params['text'] = text
            action_params['clear_first'] = parameters.get('clear_first', False)
            action_params['typing_speed'] = parameters.get('typing_speed', 'normal')
            action_params['press_enter'] = parameters.get('press_enter', False)
        elif node_type == 'wait':
            # Wait actions
            duration = parameters.get('duration', 1000)
            action_params['duration_ms'] = int(duration)
            action_params['wait_type'] = parameters.get('wait_type', 'fixed')
        elif node_type == 'navigate':
            # Navigation
            url = parameters.get('url', '')
            url = self._substitute_variables(url, visual_workflow)
            action_params['url'] = url
            action_params['wait_for_load'] = parameters.get('wait_for_load', True)
            action_params['timeout_ms'] = parameters.get('timeout', 10000)
        elif node_type == 'validate':
            # Validation (Enter key)
            action_params['key'] = 'Return'
            action_params['validation_type'] = parameters.get('validation_type', 'exists')
            action_params['expected_text'] = parameters.get('expected_text', '')
        elif node_type == 'scroll':
            # Scrolling
            action_params['direction'] = parameters.get('direction', 'down')
            action_params['amount'] = parameters.get('amount', 3)
        elif node_type == 'screenshot':
            # Screenshots
            action_params['filename'] = parameters.get('filename', '')
            action_params['full_screen'] = not parameters.get('region')
        elif node_type == 'extract':
            # Data extraction
            variable_name = parameters.get('variable', '')
            action_params['variable_name'] = variable_name
            action_params['extraction_type'] = parameters.get('extraction_type', 'text')
            action_params['attribute_name'] = parameters.get('attribute_name', '')
        elif node_type == 'variable':
            # Variable assignment
            var_name = parameters.get('name', '')
            var_value = parameters.get('value', '')
            var_value = self._substitute_variables(str(var_value), visual_workflow)
            action_params['variable_name'] = var_name
            action_params['variable_value'] = var_value
            action_params['variable_type'] = parameters.get('variable_type', 'string')
        elif node_type == 'transform':
            # Data transformation
            action_params['transformation_type'] = parameters.get('transformation_type', 'format')
            action_params['input_variable'] = parameters.get('input_variable', '')
            action_params['output_variable'] = parameters.get('output_variable', '')
            action_params['transformation_rule'] = parameters.get('transformation_rule', '')
        elif node_type == 'api':
            # API calls
            action_params['method'] = parameters.get('method', 'GET')
            action_params['url'] = self._substitute_variables(parameters.get('url', ''), visual_workflow)
            action_params['headers'] = parameters.get('headers', {})
            action_params['body'] = parameters.get('body', '')
            action_params['response_variable'] = parameters.get('response_variable', '')
        elif node_type == 'database':
            # Database queries
            action_params['connection_string'] = parameters.get('connection_string', '')
            action_params['query'] = self._substitute_variables(parameters.get('query', ''), visual_workflow)
            action_params['result_variable'] = parameters.get('result_variable', '')
        elif node_type == 'condition':
            # Conditions (Requirements 8.1, 8.2, 8.5)
            expression = parameters.get('expression', '')
            expression = self._substitute_variables(expression, visual_workflow)
            action_params['expression'] = expression
            action_params['condition_type'] = parameters.get('condition_type', 'expression')
            # Validate the expression syntax (Requirement 8.5)
            validation_result = self._validate_expression(expression)
            if not validation_result['valid']:
                self.warnings.append(
                    f"Possibly invalid condition expression: {expression} - {validation_result['message']}"
                )
        elif node_type == 'loop':
            # Loops (Requirements 9.1, 9.2, 9.5)
            loop_type = parameters.get('type', 'repeat')  # for-each, while, repeat
            action_params['loop_type'] = loop_type
            if loop_type == 'repeat':
                # Fixed iteration count
                count = parameters.get('count', 1)
                action_params['count'] = int(count)
            elif loop_type == 'while':
                # Condition-driven loop
                condition = parameters.get('condition', '')
                condition = self._substitute_variables(condition, visual_workflow)
                action_params['condition'] = condition
                action_params['max_iterations'] = parameters.get('max_iterations', 100)
            elif loop_type == 'for-each':
                # Loop over a collection
                collection = parameters.get('collection', '')
                collection = self._substitute_variables(collection, visual_workflow)
                action_params['collection'] = collection
                action_params['item_variable'] = parameters.get('item_variable', 'item')
                action_params['max_iterations'] = parameters.get('max_iterations', 100)
        return action_params
def _validate_expression(self, expression: str) -> Dict[str, Any]:
        """
        Validate the syntax of a condition expression.
        Requirement: 8.5
        """
        # Basic validation only - a real implementation would use a proper parser
        if not expression or not expression.strip():
            return {'valid': False, 'message': 'Empty expression'}
        # Check for at least one basic operator (naive substring check)
        valid_operators = ['==', '!=', '<', '>', '<=', '>=', 'and', 'or', 'not', 'in']
        has_operator = any(op in expression for op in valid_operators)
        if not has_operator:
            return {'valid': False, 'message': 'No comparison operator found'}
        # Check that parentheses are balanced
        if expression.count('(') != expression.count(')'):
            return {'valid': False, 'message': 'Unbalanced parentheses'}
        return {'valid': True, 'message': 'OK'}
def _substitute_variables(self, text: str, visual_workflow: VisualWorkflow) -> str:
        """Substitute ${var} variable references in the text."""
        # For now references are kept as-is;
        # the real substitution happens at execution time
        return text
def _determine_entry_exit_nodes(
self,
visual_workflow: VisualWorkflow,
workflow_nodes: List[WorkflowNode],
workflow_edges: List[WorkflowEdge]
) -> Tuple[List[str], List[str]]:
        """Determine the workflow's entry and exit nodes."""
        # Sets of nodes that have incoming/outgoing edges
        nodes_with_incoming = {edge.to_node for edge in workflow_edges}
        nodes_with_outgoing = {edge.from_node for edge in workflow_edges}
        # Entry nodes = nodes with no incoming edges
        entry_nodes = [
            node.node_id for node in workflow_nodes
            if node.node_id not in nodes_with_incoming
        ]
        # End nodes = nodes with no outgoing edges
        end_nodes = [
            node.node_id for node in workflow_nodes
            if node.node_id not in nodes_with_outgoing
        ]
        # If no entry node was found, fall back to the first node
        if not entry_nodes and workflow_nodes:
            entry_nodes = [workflow_nodes[0].node_id]
            self.warnings.append("No entry node detected, using the first node")
        # If no end node was found, fall back to the last node
        if not end_nodes and workflow_nodes:
            end_nodes = [workflow_nodes[-1].node_id]
            self.warnings.append("No exit node detected, using the last node")
        # Flag the nodes
        for node in workflow_nodes:
            if node.node_id in entry_nodes:
                node.is_entry = True
            if node.node_id in end_nodes:
                node.is_end = True
        return entry_nodes, end_nodes
def _create_safety_rules(self, visual_workflow: VisualWorkflow) -> SafetyRules:
        """Create the safety rules from the workflow settings."""
        settings = visual_workflow.settings
        return SafetyRules(
            require_confirmation_for=[],
            forbidden_windows=[],
            execution_timeout_minutes=settings.timeout // 60000 if settings.timeout > 0 else 0
        )
    def get_errors(self) -> List[str]:
        """Return the conversion errors."""
        return self.errors
    def get_warnings(self) -> List[str]:
        """Return the conversion warnings."""
        return self.warnings
def convert_visual_to_graph(visual_workflow: VisualWorkflow) -> Workflow:
    """
    Utility function to convert a visual workflow.
    Args:
        visual_workflow: The visual workflow to convert
    Returns:
        An executable Workflow
    Raises:
        ConversionError: If the conversion fails
    """
converter = VisualToGraphConverter()
return converter.convert(visual_workflow)
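The converter deliberately leaves `${var}` references untouched and defers substitution to execution time. A minimal sketch of what that execution-time substitution might look like (the helper name and the plain-dict variable store are assumptions for illustration, not part of this codebase):

```python
import re

def substitute_variables(text: str, variables: dict) -> str:
    """Replace ${name} references with values from a variables dict.

    Unknown variables are left untouched, mirroring the converter's
    pass-through behaviour for unresolved references.
    """
    def repl(match):
        name = match.group(1)
        # Keep the original ${name} token when the variable is unknown
        return str(variables[name]) if name in variables else match.group(0)
    return re.sub(r"\$\{(\w+)\}", repl, text)
```

For example, `substitute_variables("Hello ${user}", {"user": "Dom"})` yields `"Hello Dom"`, while an unresolved `${missing}` is passed through unchanged.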

View File

@@ -0,0 +1,356 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Real Screen Capture Service - RPA Vision V3
Authors: Dom, Alice, Kiro - January 8, 2026
Service for capturing the user's real screen and detecting UI elements.
"""
"""
import cv2
import numpy as np
import mss
import base64
import io
from PIL import Image
from typing import Dict, List, Tuple, Optional
import threading
import time
import logging
# RPA Vision V3 imports for UI detection
import sys
import os
# Add the project root directory to the path
project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../..'))
if project_root not in sys.path:
    sys.path.insert(0, project_root)
try:
    from core.detection.ui_detector import UIDetector
    UI_DETECTOR_AVAILABLE = True
except ImportError as e:
    print(f"Warning: UIDetector not available: {e}")
    UI_DETECTOR_AVAILABLE = False
    UIDetector = None
try:
    from core.models.screen_state import ScreenState, UIElement
    SCREEN_STATE_AVAILABLE = True
except ImportError as e:
    print(f"Warning: ScreenState not available: {e}")
    SCREEN_STATE_AVAILABLE = False
    ScreenState = None
    UIElement = None
logger = logging.getLogger(__name__)
class RealScreenCaptureService:
"""
Real screen capture service with UI element detection
"""
def __init__(self):
self.is_capturing = False
self.capture_thread = None
self.current_screenshot = None
self.detected_elements = []
        # Initialize the UI detector when available
        if UI_DETECTOR_AVAILABLE:
            self.ui_detector = UIDetector()
        else:
            self.ui_detector = None
            print("Warning: UIDetector not available - element detection disabled")
        self.capture_interval = 1.0  # 1 second by default
        self.monitors = []
        self.selected_monitor = 0
        # Detect monitors with a short-lived MSS instance
        try:
            with mss.mss() as sct:
                self.monitors = sct.monitors
            logger.info(f"Detected {len(self.monitors)} monitors")
            for i, monitor in enumerate(self.monitors):
                logger.info(f"Monitor {i}: {monitor}")
        except Exception as e:
            logger.error(f"Monitor detection failed: {e}")
            self.monitors = [{"top": 0, "left": 0, "width": 1920, "height": 1080}]
    def _detect_monitors(self):
        """Detect the available monitors (uses a short-lived MSS instance)."""
        # Note: a persistent self.sct is no longer kept; create a local instance instead
        try:
            with mss.mss() as sct:
                self.monitors = sct.monitors
            logger.info(f"Detected {len(self.monitors)} monitors")
            for i, monitor in enumerate(self.monitors):
                logger.info(f"Monitor {i}: {monitor}")
        except Exception as e:
            logger.error(f"Monitor detection failed: {e}")
            self.monitors = [{"top": 0, "left": 0, "width": 1920, "height": 1080}]
def get_monitors(self) -> List[Dict]:
        """Return the list of available monitors"""
return [
{
"id": i,
"width": monitor.get("width", 0),
"height": monitor.get("height", 0),
"top": monitor.get("top", 0),
"left": monitor.get("left", 0)
}
for i, monitor in enumerate(self.monitors)
]
def select_monitor(self, monitor_id: int) -> bool:
        """Select the monitor to capture"""
        if 0 <= monitor_id < len(self.monitors):
            self.selected_monitor = monitor_id
            logger.info(f"Selected monitor: {monitor_id}")
return True
return False
def start_capture(self, interval: float = 1.0) -> bool:
        """Start real-time screen capture"""
        if self.is_capturing:
            logger.warning("Capture already in progress")
            return False
        self.capture_interval = interval
        self.is_capturing = True
        # Start the capture thread
        self.capture_thread = threading.Thread(target=self._capture_loop, daemon=True)
        self.capture_thread.start()
        logger.info(f"Capture started (interval: {interval}s)")
return True
def stop_capture(self) -> bool:
        """Stop the screen capture"""
        if not self.is_capturing:
            return False
        self.is_capturing = False
        if self.capture_thread and self.capture_thread.is_alive():
            self.capture_thread.join(timeout=2.0)
        logger.info("Capture stopped")
return True
def _capture_loop(self):
        """Main capture loop using a thread-local MSS instance"""
        # Create an MSS instance local to this thread to avoid threading issues
        try:
            with mss.mss() as sct_local:
                while self.is_capturing:
                    try:
                        # Capture the screen with the local instance
                        screenshot = self._capture_screen_with_sct(sct_local)
                        if screenshot is not None:
                            self.current_screenshot = screenshot
                            # Detect UI elements
                            if UI_DETECTOR_AVAILABLE and self.ui_detector:
                                self._detect_ui_elements(screenshot)
                        # Wait before the next capture
                        time.sleep(self.capture_interval)
                    except Exception as e:
                        logger.error(f"Error in capture loop: {e}")
                        time.sleep(1.0)  # back off before retrying
        except Exception as e:
            logger.error(f"Failed to initialize MSS in the capture thread: {e}")
def _capture_screen_with_sct(self, sct):
        """Capture the screen with the given MSS instance"""
        try:
            if self.selected_monitor >= len(self.monitors):
                self.selected_monitor = 0
            monitor = self.monitors[self.selected_monitor]
            # Grab with MSS
            screenshot = sct.grab(monitor)
            # Convert to a numpy array
            img_array = np.array(screenshot)
            # Convert BGRA to BGR (OpenCV convention)
            if img_array.shape[2] == 4:
                img_array = cv2.cvtColor(img_array, cv2.COLOR_BGRA2BGR)
            return img_array
        except Exception as e:
            logger.error(f"Screen capture failed: {e}")
            return None
def _capture_screen(self) -> Optional[np.ndarray]:
        """Capture the selected monitor (legacy wrapper around _capture_screen_with_sct)"""
        try:
            with mss.mss() as sct:
                return self._capture_screen_with_sct(sct)
        except Exception as e:
            logger.error(f"Legacy screen capture failed: {e}")
            return None
def _detect_ui_elements(self, screenshot: np.ndarray):
        """Detect UI elements on the captured frame"""
        # Guard against missing optional dependencies (ScreenState may be None)
        if not (SCREEN_STATE_AVAILABLE and self.ui_detector):
            self.detected_elements = []
            return
        try:
            # Build a transient ScreenState for the detector (image kept in memory)
            screen_state = ScreenState(
                timestamp=time.time(),
                screenshot_path="",  # no file, in-memory image
                screenshot_data=screenshot,
                ui_elements=[],
                metadata={"source": "real_capture"}
            )
            # Run the existing UI detector
            detected_elements = self.ui_detector.detect_elements(screen_state)
            # Store the detected elements
            self.detected_elements = detected_elements
            logger.debug(f"Detected {len(detected_elements)} UI elements")
        except Exception as e:
            logger.error(f"UI detection failed: {e}")
            self.detected_elements = []
def get_current_screenshot_base64(self) -> Optional[str]:
        """Return the current screenshot as base64"""
        if self.current_screenshot is None:
            return None
        try:
            # Convert to a PIL Image
            if len(self.current_screenshot.shape) == 3:
                # BGR to RGB
                rgb_image = cv2.cvtColor(self.current_screenshot, cv2.COLOR_BGR2RGB)
                pil_image = Image.fromarray(rgb_image)
            else:
                pil_image = Image.fromarray(self.current_screenshot)
            # Resize for web display (optional)
            max_width = 1200
            if pil_image.width > max_width:
                ratio = max_width / pil_image.width
                new_height = int(pil_image.height * ratio)
                pil_image = pil_image.resize((max_width, new_height), Image.Resampling.LANCZOS)
            # Encode as base64
            buffer = io.BytesIO()
            pil_image.save(buffer, format='JPEG', quality=85)
            img_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')
            return f"data:image/jpeg;base64,{img_base64}"
        except Exception as e:
            logger.error(f"Base64 conversion failed: {e}")
            return None
def get_detected_elements(self) -> List[Dict]:
        """Return the detected UI elements as serializable dicts"""
        elements = []
        for element in self.detected_elements:
            try:
                # bbox may be a dict or an object exposing x/y/width/height
                bbox = getattr(element, 'bbox', None)
                if isinstance(bbox, dict):
                    bbox_dict = {key: bbox.get(key, 0) for key in ('x', 'y', 'width', 'height')}
                else:
                    bbox_dict = {key: getattr(bbox, key, 0) for key in ('x', 'y', 'width', 'height')}
                elements.append({
                    "id": getattr(element, 'id', ''),
                    "type": getattr(element, 'element_type', 'unknown'),
                    "text": getattr(element, 'text', ''),
                    "bbox": bbox_dict,
                    "confidence": getattr(element, 'confidence', 0.0),
                    "attributes": getattr(element, 'attributes', {})
                })
            except Exception as e:
                logger.error(f"Failed to serialize a UI element: {e}")
        return elements
def capture_single(self, monitor_id: Optional[int] = None) -> Optional[str]:
        """
        Take a single screenshot and return it as base64.
        Args:
            monitor_id: Monitor ID (uses the selected monitor if None)
        Returns:
            Screenshot as base64 (data:image/jpeg;base64,...) or None on failure
        """
        try:
            # Use the requested monitor, or the currently selected one
            target_monitor = monitor_id if monitor_id is not None else self.selected_monitor
            with mss.mss() as sct:
                # Validate the target monitor (index 0 = all screens, 1+ = individual monitors)
                if target_monitor >= len(sct.monitors) - 1:
                    logger.warning(f"Invalid monitor {target_monitor}, falling back to monitor 0")
                    target_monitor = 0
                # mss.monitors[0] = all screens combined, [1] = first screen, etc.
                monitor = sct.monitors[target_monitor + 1] if target_monitor >= 0 else sct.monitors[1]
                # Grab the screen
                screenshot = sct.grab(monitor)
                # Convert to a numpy array
                img_array = np.array(screenshot)
                # Convert BGRA to BGR if needed
                if img_array.shape[2] == 4:
                    img_array = cv2.cvtColor(img_array, cv2.COLOR_BGRA2BGR)
                # Convert BGR to RGB for PIL
                rgb_image = cv2.cvtColor(img_array, cv2.COLOR_BGR2RGB)
                pil_image = Image.fromarray(rgb_image)
                # Downscale if too large
                max_width = 1600
                if pil_image.width > max_width:
                    ratio = max_width / pil_image.width
                    new_height = int(pil_image.height * ratio)
                    pil_image = pil_image.resize((max_width, new_height), Image.LANCZOS)
                # Encode as JPEG base64 to keep the payload small
                buffer = io.BytesIO()
                pil_image.save(buffer, format='JPEG', quality=85)
                base64_data = base64.b64encode(buffer.getvalue()).decode('utf-8')
                logger.info(f"Single capture succeeded - monitor {target_monitor}, size: {len(base64_data)} characters")
                return f"data:image/jpeg;base64,{base64_data}"
        except Exception as e:
            logger.error(f"Single capture failed: {e}")
            return None
def get_status(self) -> Dict:
        """Return the service status"""
return {
"is_capturing": self.is_capturing,
"selected_monitor": self.selected_monitor,
"monitors_count": len(self.monitors),
"capture_interval": self.capture_interval,
"elements_detected": len(self.detected_elements),
"has_screenshot": self.current_screenshot is not None
}
def cleanup(self):
        """Release resources"""
        self.stop_capture()
        # No persistent MSS instance to close - thread-local instances are used
# Global service instance
real_capture_service = RealScreenCaptureService()
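`capture_single` relies on mss's convention that `monitors[0]` is the combined virtual screen and `monitors[1:]` are individual displays. The clamping logic can be isolated as a pure helper for clarity (a sketch for illustration; `resolve_monitor_index` is not part of the service):

```python
def resolve_monitor_index(target_monitor: int, monitors_count: int) -> int:
    """Map a 0-based monitor id to an index into mss.monitors.

    mss.monitors[0] is the combined virtual screen and monitors[1:]
    are individual displays, so monitor id 0 maps to index 1.
    Out-of-range ids fall back to the first display, mirroring
    capture_single's warning-and-fallback behaviour.
    """
    if target_monitor < 0 or target_monitor + 1 >= monitors_count:
        target_monitor = 0
    return target_monitor + 1
```

With two physical displays (`monitors_count == 3`), id 0 resolves to index 1, id 1 to index 2, and any invalid id falls back to index 1.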

View File

@@ -0,0 +1,353 @@
"""Self-healing converter for Visual Workflow Builder."""
import logging
from typing import Dict, Any, Optional
from ..models.visual_workflow import VisualNode, VisualWorkflow
from ..models.self_healing_config import SelfHealingConfig, RecoveryStrategy, RecoveryMode
# Import core workflow models
try:
from core.models.workflow_graph import WorkflowGraph, WorkflowNode, WorkflowEdge
CORE_AVAILABLE = True
except ImportError:
CORE_AVAILABLE = False
logger = logging.getLogger(__name__)
class SelfHealingConverter:
"""Converts visual workflow self-healing configurations to core format."""
def __init__(self):
"""Initialize the converter."""
self.enabled = CORE_AVAILABLE
if not self.enabled:
logger.warning("Core workflow models not available - self-healing conversion disabled")
def convert_node_config(
self,
visual_node: VisualNode,
workflow_node: 'WorkflowNode'
) -> 'WorkflowNode':
"""
Convert visual node self-healing config to core workflow node.
Args:
visual_node: Visual workflow node with self-healing config
workflow_node: Core workflow node to update
Returns:
Updated workflow node with self-healing configuration
"""
if not self.enabled or not visual_node.self_healing:
return workflow_node
try:
config = visual_node.self_healing
# Convert to core format
core_config = {
'enabled': config.enabled,
'recovery_mode': config.recovery_mode.value,
'max_attempts': config.max_attempts,
'confidence_threshold': config.confidence_threshold,
'enabled_strategies': [s.value for s in config.enabled_strategies],
'strategy_timeout': config.strategy_timeout,
'learn_from_success': config.learn_from_success,
'require_user_confirmation': config.require_user_confirmation,
'stop_on_failure': config.stop_on_failure,
'notify_on_recovery': config.notify_on_recovery,
'notify_on_failure': config.notify_on_failure
}
# Add to workflow node metadata
if not hasattr(workflow_node, 'metadata'):
workflow_node.metadata = {}
workflow_node.metadata['self_healing'] = core_config
logger.debug(f"Converted self-healing config for node {visual_node.id}")
except Exception as e:
logger.error(f"Failed to convert self-healing config for node {visual_node.id}: {e}")
return workflow_node
def convert_workflow_settings(
self,
visual_workflow: VisualWorkflow,
workflow_graph: 'WorkflowGraph'
) -> 'WorkflowGraph':
"""
Convert workflow-level self-healing settings.
Args:
visual_workflow: Visual workflow with settings
workflow_graph: Core workflow graph to update
Returns:
Updated workflow graph with self-healing settings
"""
if not self.enabled:
return workflow_graph
try:
settings = visual_workflow.settings
# Add global self-healing settings
if not hasattr(workflow_graph, 'metadata'):
workflow_graph.metadata = {}
workflow_graph.metadata['self_healing_enabled'] = settings.enable_self_healing
# Collect node-level configurations for workflow-wide statistics
node_configs = []
for node in visual_workflow.nodes:
if node.self_healing and node.self_healing.enabled:
node_configs.append({
'node_id': node.id,
'node_type': node.type,
'recovery_mode': node.self_healing.recovery_mode.value,
'strategies': [s.value for s in node.self_healing.enabled_strategies]
})
workflow_graph.metadata['self_healing_nodes'] = node_configs
logger.info(f"Converted workflow self-healing settings: {len(node_configs)} nodes configured")
except Exception as e:
logger.error(f"Failed to convert workflow self-healing settings: {e}")
return workflow_graph
def get_execution_config(
self,
visual_node: VisualNode
) -> Dict[str, Any]:
"""
Get execution configuration for self-healing integration.
Args:
visual_node: Visual node with self-healing config
Returns:
Dictionary with execution configuration
"""
if not visual_node.self_healing or not visual_node.self_healing.enabled:
return {'self_healing_enabled': False}
config = visual_node.self_healing
return {
'self_healing_enabled': True,
'recovery_config': {
'mode': config.recovery_mode.value,
'max_attempts': config.max_attempts,
'confidence_threshold': config.confidence_threshold,
'strategies': [s.value for s in config.enabled_strategies],
'timeout': config.strategy_timeout,
'learn_from_success': config.learn_from_success,
'require_confirmation': config.require_user_confirmation,
'stop_on_failure': config.stop_on_failure,
'notifications': {
'on_recovery': config.notify_on_recovery,
'on_failure': config.notify_on_failure
}
}
}
def create_recovery_context(
self,
visual_node: VisualNode,
workflow_id: str,
action_info: Dict[str, Any],
failure_info: Dict[str, Any]
) -> Optional[Dict[str, Any]]:
"""
Create recovery context for self-healing execution.
Args:
visual_node: Visual node that failed
workflow_id: ID of the workflow
action_info: Information about the failed action
failure_info: Information about the failure
Returns:
Recovery context dictionary or None if not configured
"""
if not visual_node.self_healing or not visual_node.self_healing.enabled:
return None
config = visual_node.self_healing
# Extract action details
action_type = visual_node.type
target_element = visual_node.parameters.get('target', 'unknown')
# Create context
context = {
'workflow_id': workflow_id,
'node_id': visual_node.id,
'node_type': action_type,
'original_action': action_type,
'target_element': target_element,
'failure_reason': failure_info.get('reason', 'unknown'),
'screenshot_path': failure_info.get('screenshot_path', ''),
'attempt_count': failure_info.get('attempt_count', 1),
'max_attempts': config.max_attempts,
'confidence_threshold': config.confidence_threshold,
'metadata': {
'node_parameters': visual_node.parameters,
'recovery_mode': config.recovery_mode.value,
'enabled_strategies': [s.value for s in config.enabled_strategies],
'require_confirmation': config.require_user_confirmation
}
}
return context
def validate_configuration(
self,
config: SelfHealingConfig
) -> Dict[str, Any]:
"""
Validate self-healing configuration.
Args:
config: Self-healing configuration to validate
Returns:
Validation result with errors and warnings
"""
errors = []
warnings = []
# Validate basic settings
if config.max_attempts < 1 or config.max_attempts > 10:
errors.append("max_attempts must be between 1 and 10")
if config.confidence_threshold < 0.0 or config.confidence_threshold > 1.0:
errors.append("confidence_threshold must be between 0.0 and 1.0")
if config.strategy_timeout < 1.0 or config.strategy_timeout > 300.0:
errors.append("strategy_timeout must be between 1.0 and 300.0 seconds")
# Validate strategies
if not config.enabled_strategies:
errors.append("At least one recovery strategy must be enabled")
# Check for conflicting settings
if config.recovery_mode == RecoveryMode.DISABLED and config.enabled:
warnings.append("Recovery mode is disabled but self-healing is enabled")
if config.recovery_mode == RecoveryMode.AGGRESSIVE and config.require_user_confirmation:
warnings.append("Aggressive mode with user confirmation may slow down recovery")
if config.confidence_threshold > 0.9 and config.recovery_mode == RecoveryMode.AGGRESSIVE:
warnings.append("High confidence threshold with aggressive mode may reduce recovery success")
# Strategy-specific validations
if RecoveryStrategy.ALL in config.enabled_strategies and len(config.enabled_strategies) > 1:
warnings.append("'All strategies' is selected along with specific strategies")
return {
'valid': len(errors) == 0,
'errors': errors,
'warnings': warnings
}
def get_strategy_recommendations(
self,
node_type: str,
action_parameters: Dict[str, Any]
) -> Dict[str, Any]:
"""
Get strategy recommendations based on node type and parameters.
Args:
node_type: Type of the node (click, type, wait, etc.)
action_parameters: Parameters of the action
Returns:
Dictionary with strategy recommendations
"""
recommendations = {
'recommended_strategies': [],
'recovery_mode': RecoveryMode.BALANCED,
'confidence_threshold': 0.7,
'max_attempts': 3,
'reasoning': []
}
# Node type specific recommendations
if node_type in ['click', 'hover']:
recommendations['recommended_strategies'] = [
RecoveryStrategy.SEMANTIC_VARIANT,
RecoveryStrategy.SPATIAL_FALLBACK
]
recommendations['reasoning'].append(
"UI interactions benefit from semantic and spatial recovery"
)
elif node_type in ['type', 'input']:
recommendations['recommended_strategies'] = [
RecoveryStrategy.FORMAT_TRANSFORMATION,
RecoveryStrategy.TIMING_ADAPTATION
]
recommendations['recovery_mode'] = RecoveryMode.CONSERVATIVE
recommendations['confidence_threshold'] = 0.8
recommendations['reasoning'].append(
"Data input requires conservative approach with format validation"
)
elif node_type in ['wait', 'navigate']:
recommendations['recommended_strategies'] = [
RecoveryStrategy.TIMING_ADAPTATION
]
recommendations['reasoning'].append(
"Navigation and timing actions primarily need timing adjustments"
)
elif node_type in ['extract', 'validate']:
recommendations['recommended_strategies'] = [
RecoveryStrategy.SEMANTIC_VARIANT,
RecoveryStrategy.SPATIAL_FALLBACK
]
recommendations['confidence_threshold'] = 0.8
recommendations['reasoning'].append(
"Data extraction requires high confidence in element identification"
)
else:
# Default recommendations
recommendations['recommended_strategies'] = [RecoveryStrategy.ALL]
recommendations['reasoning'].append(
"General node type - all strategies recommended"
)
# Parameter-specific adjustments
if 'target' in action_parameters:
target = action_parameters['target']
if isinstance(target, dict) and target.get('confidence', 1.0) < 0.8:
recommendations['confidence_threshold'] = 0.6
recommendations['reasoning'].append(
"Low target confidence - reduced threshold recommended"
)
return recommendations
# Global converter instance
_converter_instance: Optional[SelfHealingConverter] = None
def get_self_healing_converter() -> SelfHealingConverter:
"""Get or create the global self-healing converter instance."""
global _converter_instance
if _converter_instance is None:
_converter_instance = SelfHealingConverter()
return _converter_instance
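The numeric bounds enforced by `validate_configuration` can be exercised without the full `SelfHealingConfig` model. A standalone sketch of the same range checks (the function name is assumed for illustration, not part of the module):

```python
def validate_basic_settings(max_attempts, confidence_threshold, strategy_timeout):
    """Apply the same numeric bounds as SelfHealingConverter.validate_configuration."""
    errors = []
    if not 1 <= max_attempts <= 10:
        errors.append("max_attempts must be between 1 and 10")
    if not 0.0 <= confidence_threshold <= 1.0:
        errors.append("confidence_threshold must be between 0.0 and 1.0")
    if not 1.0 <= strategy_timeout <= 300.0:
        errors.append("strategy_timeout must be between 1.0 and 300.0 seconds")
    # Valid only when no range check failed
    return {"valid": not errors, "errors": errors}
```

For instance, `(3, 0.7, 30.0)` passes all three checks, while `(0, 1.5, 500.0)` collects one error per field.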

View File

@@ -0,0 +1,383 @@
"""Self-healing integration service for Visual Workflow Builder."""
import logging
from typing import Dict, Any, List, Optional
from datetime import datetime
import json
from models.self_healing_config import (
SelfHealingConfig, RecoveryNotification, RecoveryStatistics,
RecoveryStrategy, RecoveryMode
)
# Import core self-healing components
try:
from core.healing.execution_integration import get_self_healing_integration
from core.healing.models import RecoveryContext, RecoveryResult
SELF_HEALING_AVAILABLE = True
except ImportError:
SELF_HEALING_AVAILABLE = False
logger = logging.getLogger(__name__)
class VisualWorkflowSelfHealingService:
"""Service for integrating self-healing with visual workflows."""
def __init__(self):
"""Initialize the self-healing service."""
self.enabled = SELF_HEALING_AVAILABLE
self.notifications: List[RecoveryNotification] = []
self.statistics = RecoveryStatistics()
if self.enabled:
self.core_integration = get_self_healing_integration()
logger.info("Visual Workflow Self-Healing service initialized")
else:
self.core_integration = None
logger.warning("Core self-healing not available - service disabled")
def configure_node_self_healing(
self,
node_type: str,
existing_config: Optional[SelfHealingConfig] = None
) -> SelfHealingConfig:
"""
Configure self-healing for a node based on its type.
Args:
node_type: Type of the node (click, type, wait, etc.)
existing_config: Existing configuration to merge with defaults
Returns:
SelfHealingConfig for the node
"""
# Get default configuration for node type
default_config = SelfHealingConfig.get_default_for_action(node_type)
# Merge with existing configuration if provided
if existing_config:
# Preserve user customizations
config = SelfHealingConfig(
enabled=existing_config.enabled,
recovery_mode=existing_config.recovery_mode,
max_attempts=existing_config.max_attempts,
confidence_threshold=existing_config.confidence_threshold,
enabled_strategies=existing_config.enabled_strategies,
strategy_timeout=existing_config.strategy_timeout,
learn_from_success=existing_config.learn_from_success,
require_user_confirmation=existing_config.require_user_confirmation,
stop_on_failure=existing_config.stop_on_failure,
notify_on_recovery=existing_config.notify_on_recovery,
notify_on_failure=existing_config.notify_on_failure
)
else:
config = default_config
return config
def handle_execution_failure(
self,
workflow_id: str,
node_id: str,
node_config: SelfHealingConfig,
action_info: Dict[str, Any],
failure_info: Dict[str, Any],
screenshot_path: str,
attempt_count: int = 1
) -> Optional[Dict[str, Any]]:
"""
Handle execution failure and attempt recovery.
Args:
workflow_id: ID of the workflow
node_id: ID of the failed node
node_config: Self-healing configuration for the node
action_info: Information about the failed action
failure_info: Information about the failure
screenshot_path: Path to screenshot at failure
attempt_count: Current attempt number
Returns:
Recovery result dictionary or None if not attempted
"""
if not self.enabled or not node_config.enabled:
return None
# Check if we should attempt recovery
if attempt_count > node_config.max_attempts:
logger.info(f"Max attempts ({node_config.max_attempts}) reached for node {node_id}")
return None
try:
# Create mock execution result for core integration
execution_result = type('ExecutionResult', (), {
'status': type('ExecutionStatus', (), {'TARGET_NOT_FOUND': 'TARGET_NOT_FOUND'})(),
'message': failure_info.get('message', 'Execution failed')
})()
# Attempt recovery using core integration
recovery_result = self.core_integration.handle_execution_failure(
action_info=action_info,
execution_result=execution_result,
workflow_id=workflow_id,
node_id=node_id,
screenshot_path=screenshot_path,
attempt_count=attempt_count
)
if recovery_result:
# Create notification
notification = RecoveryNotification(
node_id=node_id,
strategy_used=recovery_result.strategy_used,
success=recovery_result.success,
confidence=recovery_result.confidence_score,
execution_time=recovery_result.execution_time,
message=self._create_recovery_message(recovery_result),
timestamp=datetime.now().isoformat(),
requires_attention=recovery_result.requires_user_input
)
# Add notification if configured
if (recovery_result.success and node_config.notify_on_recovery) or \
(not recovery_result.success and node_config.notify_on_failure):
self.notifications.append(notification)
# Update statistics
self._update_statistics(recovery_result)
# Return result for UI
return {
'success': recovery_result.success,
'strategy_used': recovery_result.strategy_used,
'confidence': recovery_result.confidence_score,
'execution_time': recovery_result.execution_time,
'new_element': recovery_result.new_element,
'requires_user_input': recovery_result.requires_user_input,
'message': notification.message,
'notification': notification.to_dict()
}
except Exception as e:
logger.error(f"Self-healing attempt failed: {e}")
# Create failure notification
notification = RecoveryNotification(
node_id=node_id,
strategy_used='error',
success=False,
confidence=0.0,
execution_time=0.0,
message=f"Erreur lors de la tentative de récupération: {str(e)}",
timestamp=datetime.now().isoformat(),
requires_attention=True
)
if node_config.notify_on_failure:
self.notifications.append(notification)
return None
def get_recovery_suggestions(
self,
workflow_id: str,
node_id: str,
action_info: Dict[str, Any],
screenshot_path: str
) -> List[Dict[str, Any]]:
"""
Get recovery suggestions for a node.
Args:
workflow_id: ID of the workflow
node_id: ID of the node
action_info: Information about the action
screenshot_path: Path to current screenshot
Returns:
List of recovery suggestions
"""
if not self.enabled:
return []
try:
suggestions = self.core_integration.get_recovery_suggestions(
action_info=action_info,
workflow_id=workflow_id,
node_id=node_id,
screenshot_path=screenshot_path
)
return [
{
'strategy': suggestion.strategy,
'confidence': suggestion.confidence,
'description': suggestion.description,
'estimated_time': suggestion.estimated_time,
'metadata': suggestion.metadata
}
for suggestion in suggestions
]
except Exception as e:
logger.error(f"Failed to get recovery suggestions: {e}")
return []
def get_notifications(
self,
workflow_id: Optional[str] = None,
limit: int = 50
) -> List[Dict[str, Any]]:
"""
Get recovery notifications.
Args:
workflow_id: Filter by workflow ID (optional)
limit: Maximum number of notifications to return
Returns:
List of notification dictionaries
"""
notifications = self.notifications[-limit:] # Get latest notifications
return [n.to_dict() for n in notifications]
def clear_notifications(self, workflow_id: Optional[str] = None):
"""
Clear notifications.
Args:
workflow_id: Clear notifications for specific workflow (optional)
"""
if workflow_id:
# Filter out notifications for specific workflow
# Note: We'd need to track workflow_id in notifications for this
pass
else:
self.notifications.clear()
def get_statistics(self) -> Dict[str, Any]:
"""
Get recovery statistics.
Returns:
Statistics dictionary
"""
stats = self.statistics.to_dict()
# Add core statistics if available
if self.enabled and self.core_integration:
try:
core_stats = self.core_integration.get_statistics()
stats.update(core_stats)
except Exception as e:
logger.error(f"Failed to get core statistics: {e}")
return stats
def get_insights(self) -> List[str]:
"""
Get insights from recovery patterns.
Returns:
List of insight strings
"""
insights = []
if self.statistics.total_attempts > 0:
success_rate = self.statistics.success_rate
if success_rate > 0.8:
insights.append(
f"🎯 Excellent taux de récupération: {success_rate:.1%} "
f"({self.statistics.successful_recoveries}/{self.statistics.total_attempts})"
)
elif success_rate > 0.5:
insights.append(
f"⚠️ Taux de récupération modéré: {success_rate:.1%} - "
"Considérez ajuster les seuils de confiance"
)
else:
insights.append(
f"❌ Taux de récupération faible: {success_rate:.1%} - "
"Vérifiez la configuration des stratégies"
)
if self.statistics.most_used_strategy:
insights.append(
f"🔧 Stratégie la plus efficace: {self.statistics.most_used_strategy}"
)
if self.statistics.total_time_saved > 60:
minutes_saved = self.statistics.total_time_saved / 60
insights.append(
f"⏱️ Temps économisé: {minutes_saved:.1f} minutes grâce à la récupération automatique"
)
# Add core insights if available
if self.enabled and self.core_integration:
try:
core_insights = self.core_integration.get_insights()
insights.extend(core_insights)
except Exception as e:
logger.error(f"Failed to get core insights: {e}")
return insights
def _create_recovery_message(self, recovery_result: 'RecoveryResult') -> str:
"""Create a user-friendly recovery message."""
if recovery_result.success:
strategy_names = {
'SemanticVariantStrategy': 'variante sémantique',
'SpatialFallbackStrategy': 'recherche spatiale élargie',
'TimingAdaptationStrategy': 'adaptation du délai',
'FormatTransformationStrategy': 'transformation de format'
}
strategy_name = strategy_names.get(
recovery_result.strategy_used,
recovery_result.strategy_used
)
return (
f"✅ Récupération réussie avec {strategy_name} "
f"(confiance: {recovery_result.confidence_score:.1%}, "
f"temps: {recovery_result.execution_time:.1f}s)"
)
else:
return (
f"❌ Échec de la récupération: {recovery_result.error_message or 'Raison inconnue'}"
)
def _update_statistics(self, recovery_result: 'RecoveryResult'):
"""Update recovery statistics."""
self.statistics.total_attempts += 1
if recovery_result.success:
self.statistics.successful_recoveries += 1
self.statistics.total_time_saved += recovery_result.execution_time
# Update average confidence
total_confidence = (
self.statistics.average_confidence * (self.statistics.successful_recoveries - 1) +
recovery_result.confidence_score
)
self.statistics.average_confidence = total_confidence / self.statistics.successful_recoveries
            # Record the first successful strategy as the most used one (simple heuristic)
if not self.statistics.most_used_strategy:
self.statistics.most_used_strategy = recovery_result.strategy_used
else:
self.statistics.failed_recoveries += 1
# Global service instance
_service_instance: Optional[VisualWorkflowSelfHealingService] = None
def get_self_healing_service() -> VisualWorkflowSelfHealingService:
"""Get or create the global self-healing service instance."""
global _service_instance
if _service_instance is None:
_service_instance = VisualWorkflowSelfHealingService()
return _service_instance
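The confidence update in `_update_statistics` is an incremental (running) mean, so no sample history has to be stored. The same computation as a standalone sketch (the function name is illustrative):

```python
def update_running_mean(mean: float, count: int, new_value: float) -> float:
    """Return the mean after folding in the count-th sample.

    Algebraically identical to (mean * (count - 1) + new_value) / count,
    the form used in _update_statistics, but numerically steadier because
    it avoids re-multiplying by the count.
    """
    return mean + (new_value - mean) / count
```

For example, folding in confidences 0.8 then 0.6 gives a mean of 0.7, matching what `average_confidence` would hold after two successful recoveries.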


@@ -0,0 +1,188 @@
"""backend/services/serialization.py
Persistance simple (JSON/YAML) pour les workflows.
Auteur : Dom, Alice, Kiro - 08 janvier 2026
Patch #1:
- Fournit WorkflowDatabase + WorkflowSerializer utilisés par api/workflows.py
- Stockage fichier: un workflow = un fichier <id>.json dans un répertoire
Design:
- Permissif: on conserve les champs inconnus via models.py
- Robuste: erreurs encapsulées, pas de crash au boot
"""
from __future__ import annotations
import json
import os
from dataclasses import dataclass
from typing import Any, Dict, Optional, List
import yaml # PyYAML
from models import VisualWorkflow, generate_id, WorkflowSettings
from datetime import datetime
class SerializationError(Exception):
"""Erreur de sérialisation."""
pass
class ValidationError(Exception):
"""Erreur de validation pour les workflows sérialisés."""
def __init__(self, errors: List[str]):
super().__init__(", ".join(errors))
self.errors = errors
def create_empty_workflow(name: str, description: str, created_by: str) -> VisualWorkflow:
"""Crée un workflow vide avec les paramètres de base."""
now = datetime.now()
return VisualWorkflow(
id=WorkflowSerializer.generate_workflow_id(),
name=name,
description=description,
version="1.0.0",
created_at=now,
updated_at=now,
created_by=created_by,
nodes=[],
edges=[],
variables=[],
settings=WorkflowSettings(),
)
class WorkflowSerializer:
"""Sérialise/désérialise les workflows vers/depuis JSON ou YAML."""
@staticmethod
def generate_workflow_id() -> str:
"""Génère un identifiant unique pour un workflow."""
return generate_id("wf")
@staticmethod
def serialize(workflow: VisualWorkflow, format: str = "json") -> str:
"""Sérialise un workflow vers une chaîne JSON ou YAML."""
try:
data = workflow.to_dict()
fmt = (format or "json").lower()
if fmt == "json":
return json.dumps(data, ensure_ascii=False, indent=2)
if fmt in ("yml", "yaml"):
return yaml.safe_dump(data, allow_unicode=True, sort_keys=False)
raise SerializationError(f"Format non supporté: {format}")
except Exception as e:
raise SerializationError(str(e)) from e
@staticmethod
def deserialize(raw: Any, format: str = "json") -> VisualWorkflow:
"""Désérialise un workflow depuis une chaîne JSON ou YAML."""
try:
fmt = (format or "json").lower()
if isinstance(raw, (bytes, bytearray)):
raw = raw.decode("utf-8", errors="replace")
if isinstance(raw, str):
if fmt == "json":
data = json.loads(raw)
elif fmt in ("yml", "yaml"):
data = yaml.safe_load(raw)
else:
raise SerializationError(f"Format non supporté: {format}")
elif isinstance(raw, dict):
data = raw
else:
raise SerializationError("Type d'entrée invalide pour la désérialisation")
wf = VisualWorkflow.from_dict(data)
errors = wf.validate()
if errors:
raise ValidationError(errors)
return wf
except ValidationError:
raise
except Exception as e:
raise SerializationError(str(e)) from e
@dataclass
class WorkflowDatabase:
"""Stockage de workflows basé sur des fichiers."""
root_dir: str
def __post_init__(self) -> None:
"""Crée le répertoire de stockage s'il n'existe pas."""
os.makedirs(self.root_dir, exist_ok=True)
def _path(self, workflow_id: str) -> str:
"""Retourne le chemin du fichier pour un workflow donné."""
safe_id = "".join(c for c in workflow_id if c.isalnum() or c in ("_", "-")) or workflow_id
return os.path.join(self.root_dir, f"{safe_id}.json")
def exists(self, workflow_id: str) -> bool:
"""Vérifie si un workflow existe."""
return os.path.exists(self._path(workflow_id))
def save(self, workflow: VisualWorkflow) -> None:
"""Sauvegarde un workflow sur disque."""
try:
payload = WorkflowSerializer.serialize(workflow, format="json")
with open(self._path(workflow.id), "w", encoding="utf-8") as f:
f.write(payload)
except Exception as e:
raise SerializationError(f"Échec de la sauvegarde du workflow: {e}") from e
def load(self, workflow_id: str) -> Optional[VisualWorkflow]:
"""Charge un workflow depuis le disque."""
path = self._path(workflow_id)
if not os.path.exists(path):
return None
try:
with open(path, "r", encoding="utf-8") as f:
raw = f.read()
return WorkflowSerializer.deserialize(raw, format="json")
except ValidationError:
raise
except Exception as e:
raise SerializationError(f"Échec du chargement du workflow '{workflow_id}': {e}") from e
def delete(self, workflow_id: str) -> None:
"""Supprime un workflow du disque."""
path = self._path(workflow_id)
try:
if os.path.exists(path):
os.remove(path)
except Exception as e:
raise SerializationError(f"Échec de la suppression du workflow '{workflow_id}': {e}") from e
def list(self) -> List[VisualWorkflow]:
"""Liste tous les workflows disponibles.
        Les fichiers invalides sont signalés (avertissement) puis ignorés pour éviter
        de bloquer le chargement des autres workflows.
"""
workflows: List[VisualWorkflow] = []
if not os.path.isdir(self.root_dir):
return []
for fname in os.listdir(self.root_dir):
if not fname.endswith(".json"):
continue
wf_id = fname[:-5]
try:
wf = self.load(wf_id)
if wf is not None:
workflows.append(wf)
except (SerializationError, ValidationError) as e:
# Ignorer les fichiers invalides et continuer
print(f"⚠️ Workflow ignoré '{wf_id}': {e}")
except Exception as e:
# Autres erreurs - les ignorer aussi pour ne pas bloquer
print(f"⚠️ Erreur inattendue pour '{wf_id}': {e}")
return workflows


@@ -0,0 +1,941 @@
"""
Template Service
Handles template storage, retrieval, and instantiation.
"""
import json
import os
import sys
from datetime import datetime
from typing import Dict, List, Optional, Any
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from models.template import WorkflowTemplate, TemplateParameter, TemplateDifficulty, generate_template_id
from models.visual_workflow import VisualWorkflow, ParameterType
class TemplateService:
"""Service for managing workflow templates"""
def __init__(self, data_dir: str = "data/templates"):
self.data_dir = data_dir
self.templates_file = os.path.join(data_dir, "templates.json")
self._ensure_data_dir()
self._load_default_templates()
def _ensure_data_dir(self):
"""Ensure the data directory exists"""
os.makedirs(self.data_dir, exist_ok=True)
def _load_templates(self) -> Dict[str, WorkflowTemplate]:
"""Load all templates from storage"""
if not os.path.exists(self.templates_file):
return {}
try:
with open(self.templates_file, 'r', encoding='utf-8') as f:
data = json.load(f)
return {
template_id: WorkflowTemplate.from_dict(template_data)
for template_id, template_data in data.items()
}
except (json.JSONDecodeError, KeyError, ValueError) as e:
print(f"Error loading templates: {e}")
return {}
def _save_templates(self, templates: Dict[str, WorkflowTemplate]):
"""Save all templates to storage"""
try:
data = {
template_id: template.to_dict()
for template_id, template in templates.items()
}
with open(self.templates_file, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
except Exception as e:
print(f"Error saving templates: {e}")
raise
def list_templates(self, category: Optional[str] = None, difficulty: Optional[str] = None) -> List[WorkflowTemplate]:
"""List all templates with optional filtering"""
templates = self._load_templates()
result = list(templates.values())
# Filter by category
if category:
result = [t for t in result if t.category.lower() == category.lower()]
# Filter by difficulty
if difficulty:
result = [t for t in result if t.difficulty.value == difficulty.lower()]
# Sort by usage count (most used first), then by name
result.sort(key=lambda t: (-t.usage_count, t.name))
return result
def get_template(self, template_id: str) -> Optional[WorkflowTemplate]:
"""Get a specific template by ID"""
templates = self._load_templates()
return templates.get(template_id)
def create_template(self, template_data: Dict[str, Any]) -> WorkflowTemplate:
"""Create a new template"""
# Generate ID if not provided
if 'id' not in template_data or not template_data['id']:
template_data['id'] = generate_template_id()
# Set timestamps
now = datetime.now()
template_data['created_at'] = now.isoformat()
template_data['updated_at'] = now.isoformat()
# Create template object
template = WorkflowTemplate.from_dict(template_data)
# Validate
errors = template.validate()
if errors:
raise ValueError(f"Template validation failed: {', '.join(errors)}")
# Save
templates = self._load_templates()
templates[template.id] = template
self._save_templates(templates)
return template
def update_template(self, template_id: str, template_data: Dict[str, Any]) -> Optional[WorkflowTemplate]:
"""Update an existing template"""
templates = self._load_templates()
if template_id not in templates:
return None
# Preserve ID and creation time
template_data['id'] = template_id
template_data['created_at'] = templates[template_id].created_at.isoformat()
template_data['updated_at'] = datetime.now().isoformat()
# Create updated template
template = WorkflowTemplate.from_dict(template_data)
# Validate
errors = template.validate()
if errors:
raise ValueError(f"Template validation failed: {', '.join(errors)}")
# Save
templates[template_id] = template
self._save_templates(templates)
return template
def delete_template(self, template_id: str) -> bool:
"""Delete a template"""
templates = self._load_templates()
if template_id not in templates:
return False
del templates[template_id]
self._save_templates(templates)
return True
def instantiate_template(self, template_id: str, parameters: Dict[str, Any],
workflow_name: str, created_by: str = "user") -> Optional[VisualWorkflow]:
"""Create a workflow instance from a template"""
template = self.get_template(template_id)
if not template:
return None
# Increment usage count
templates = self._load_templates()
templates[template_id].usage_count += 1
templates[template_id].updated_at = datetime.now()
self._save_templates(templates)
# Create workflow instance
return template.instantiate(parameters, workflow_name, created_by)
def create_template_from_workflow(self, workflow: VisualWorkflow, template_name: str,
template_description: str, category: str,
                                      parameters: Optional[List[Dict[str, Any]]] = None) -> WorkflowTemplate:
"""Create a template from an existing workflow"""
# Create template workflow (copy of original)
template_workflow_data = workflow.to_dict()
template_workflow_data['is_template'] = True
template_workflow = VisualWorkflow.from_dict(template_workflow_data)
# Create template parameters
template_params = []
if parameters:
template_params = [TemplateParameter.from_dict(p) for p in parameters]
# Create template
template_data = {
'id': generate_template_id(),
'name': template_name,
'description': template_description,
'category': category,
'workflow': template_workflow.to_dict(),
'parameters': [p.to_dict() for p in template_params],
'tags': workflow.tags,
'difficulty': 'intermediate',
'estimated_time': 10,
'created_by': workflow.created_by
}
return self.create_template(template_data)
def _load_default_templates(self):
"""Load default templates if none exist"""
templates = self._load_templates()
if templates:
return # Templates already exist
# Create default templates
default_templates = self._create_default_templates()
for template in default_templates:
templates[template.id] = template
self._save_templates(templates)
def _create_default_templates(self) -> List[WorkflowTemplate]:
"""Create the default template set"""
templates = []
# 1. Login Template
login_template = self._create_login_template()
templates.append(login_template)
# 2. Form Fill Template
form_template = self._create_form_fill_template()
templates.append(form_template)
# 3. Data Extraction Template
extraction_template = self._create_data_extraction_template()
templates.append(extraction_template)
# 4. Navigation Template
navigation_template = self._create_navigation_template()
templates.append(navigation_template)
return templates
def _create_login_template(self) -> WorkflowTemplate:
"""Create the login workflow template"""
from models.visual_workflow import (
VisualWorkflow, VisualNode, VisualEdge, Position, Size, Port,
Variable, WorkflowSettings
)
# Create nodes
nodes = [
VisualNode(
id="start",
type="start",
position=Position(50, 100),
size=Size(100, 50),
parameters={},
input_ports=[],
output_ports=[Port("out", "output", "output")],
label="Début"
),
VisualNode(
id="navigate",
type="navigate",
position=Position(200, 100),
size=Size(150, 80),
parameters={
"url": "{{login_url}}",
"wait_for_load": True,
"timeout": 10000
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Naviguer vers la page de connexion"
),
VisualNode(
id="username",
type="type",
position=Position(400, 50),
size=Size(150, 80),
parameters={
"target": "{{username_selector}}",
"text": "{{username}}",
"clear_first": True,
"timeout": 5000
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Saisir nom d'utilisateur"
),
VisualNode(
id="password",
type="type",
position=Position(400, 150),
size=Size(150, 80),
parameters={
"target": "{{password_selector}}",
"text": "{{password}}",
"clear_first": True,
"timeout": 5000
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Saisir mot de passe"
),
VisualNode(
id="login_button",
type="click",
position=Position(600, 100),
size=Size(150, 80),
parameters={
"target": "{{login_button_selector}}",
"timeout": 5000
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Cliquer sur Se connecter"
),
VisualNode(
id="end",
type="end",
position=Position(800, 100),
size=Size(100, 50),
parameters={},
input_ports=[Port("in", "input", "input")],
output_ports=[],
label="Fin"
)
]
# Create edges
edges = [
VisualEdge("e1", "start", "navigate", "out", "in"),
VisualEdge("e2", "navigate", "username", "out", "in"),
VisualEdge("e3", "username", "password", "out", "in"),
VisualEdge("e4", "password", "login_button", "out", "in"),
VisualEdge("e5", "login_button", "end", "out", "in")
]
# Create workflow
workflow = VisualWorkflow(
id="login_template_workflow",
name="Modèle de Connexion",
description="Workflow de base pour se connecter à un site web",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="system",
nodes=nodes,
edges=edges,
variables=[],
settings=WorkflowSettings(),
tags=["login", "authentication", "web"],
category="Web Automation",
is_template=True
)
# Create template parameters
parameters = [
TemplateParameter(
name="login_url",
type=ParameterType.STRING,
description="URL de la page de connexion",
node_id="navigate",
parameter_name="url",
label="URL de connexion",
placeholder="https://example.com/login"
),
TemplateParameter(
name="username_selector",
type=ParameterType.TARGET,
description="Sélecteur du champ nom d'utilisateur",
node_id="username",
parameter_name="target",
label="Champ nom d'utilisateur",
placeholder="input[name='username']"
),
TemplateParameter(
name="username",
type=ParameterType.STRING,
description="Nom d'utilisateur à saisir",
node_id="username",
parameter_name="text",
label="Nom d'utilisateur",
placeholder="votre_nom_utilisateur"
),
TemplateParameter(
name="password_selector",
type=ParameterType.TARGET,
description="Sélecteur du champ mot de passe",
node_id="password",
parameter_name="target",
label="Champ mot de passe",
placeholder="input[name='password']"
),
TemplateParameter(
name="password",
type=ParameterType.STRING,
description="Mot de passe à saisir",
node_id="password",
parameter_name="text",
label="Mot de passe",
placeholder="votre_mot_de_passe"
),
TemplateParameter(
name="login_button_selector",
type=ParameterType.TARGET,
description="Sélecteur du bouton de connexion",
node_id="login_button",
parameter_name="target",
label="Bouton de connexion",
placeholder="button[type='submit']"
)
]
return WorkflowTemplate(
id="login_template",
name="Connexion à un site web",
description="Template pour automatiser la connexion à un site web avec nom d'utilisateur et mot de passe",
category="Web Automation",
workflow=workflow,
parameters=parameters,
tags=["login", "authentication", "web", "form"],
difficulty=TemplateDifficulty.BEGINNER,
estimated_time=3,
created_by="system"
)
def _create_form_fill_template(self) -> WorkflowTemplate:
"""Create the form filling template"""
from models.visual_workflow import (
VisualWorkflow, VisualNode, VisualEdge, Position, Size, Port,
Variable, WorkflowSettings
)
# Create nodes for a basic form filling workflow
nodes = [
VisualNode(
id="start",
type="start",
position=Position(50, 150),
size=Size(100, 50),
parameters={},
input_ports=[],
output_ports=[Port("out", "output", "output")],
label="Début"
),
VisualNode(
id="navigate",
type="navigate",
position=Position(200, 150),
size=Size(150, 80),
parameters={
"url": "{{form_url}}",
"wait_for_load": True
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Naviguer vers le formulaire"
),
VisualNode(
id="fill_name",
type="type",
position=Position(400, 50),
size=Size(150, 80),
parameters={
"target": "{{name_selector}}",
"text": "{{name_value}}",
"clear_first": True
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Remplir le nom"
),
VisualNode(
id="fill_email",
type="type",
position=Position(400, 150),
size=Size(150, 80),
parameters={
"target": "{{email_selector}}",
"text": "{{email_value}}",
"clear_first": True
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Remplir l'email"
),
VisualNode(
id="fill_message",
type="type",
position=Position(400, 250),
size=Size(150, 80),
parameters={
"target": "{{message_selector}}",
"text": "{{message_value}}",
"clear_first": True
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Remplir le message"
),
VisualNode(
id="submit",
type="click",
position=Position(600, 150),
size=Size(150, 80),
parameters={
"target": "{{submit_selector}}"
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Soumettre le formulaire"
),
VisualNode(
id="end",
type="end",
position=Position(800, 150),
size=Size(100, 50),
parameters={},
input_ports=[Port("in", "input", "input")],
output_ports=[],
label="Fin"
)
]
edges = [
VisualEdge("e1", "start", "navigate", "out", "in"),
VisualEdge("e2", "navigate", "fill_name", "out", "in"),
VisualEdge("e3", "fill_name", "fill_email", "out", "in"),
VisualEdge("e4", "fill_email", "fill_message", "out", "in"),
VisualEdge("e5", "fill_message", "submit", "out", "in"),
VisualEdge("e6", "submit", "end", "out", "in")
]
workflow = VisualWorkflow(
id="form_fill_template_workflow",
name="Modèle de Remplissage de Formulaire",
description="Workflow pour remplir automatiquement un formulaire web",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="system",
nodes=nodes,
edges=edges,
variables=[],
settings=WorkflowSettings(),
tags=["form", "fill", "web"],
category="Web Automation",
is_template=True
)
parameters = [
TemplateParameter(
name="form_url",
type=ParameterType.STRING,
description="URL du formulaire à remplir",
node_id="navigate",
parameter_name="url",
label="URL du formulaire"
),
TemplateParameter(
name="name_selector",
type=ParameterType.TARGET,
description="Sélecteur du champ nom",
node_id="fill_name",
parameter_name="target",
label="Champ nom"
),
TemplateParameter(
name="name_value",
type=ParameterType.STRING,
description="Valeur à saisir dans le champ nom",
node_id="fill_name",
parameter_name="text",
label="Nom"
),
TemplateParameter(
name="email_selector",
type=ParameterType.TARGET,
description="Sélecteur du champ email",
node_id="fill_email",
parameter_name="target",
label="Champ email"
),
TemplateParameter(
name="email_value",
type=ParameterType.STRING,
description="Valeur à saisir dans le champ email",
node_id="fill_email",
parameter_name="text",
label="Email"
),
TemplateParameter(
name="message_selector",
type=ParameterType.TARGET,
description="Sélecteur du champ message",
node_id="fill_message",
parameter_name="target",
label="Champ message"
),
TemplateParameter(
name="message_value",
type=ParameterType.STRING,
description="Valeur à saisir dans le champ message",
node_id="fill_message",
parameter_name="text",
label="Message"
),
TemplateParameter(
name="submit_selector",
type=ParameterType.TARGET,
description="Sélecteur du bouton de soumission",
node_id="submit",
parameter_name="target",
label="Bouton de soumission"
)
]
return WorkflowTemplate(
id="form_fill_template",
name="Remplissage de formulaire",
description="Template pour remplir automatiquement un formulaire de contact ou d'inscription",
category="Web Automation",
workflow=workflow,
parameters=parameters,
tags=["form", "contact", "web", "automation"],
difficulty=TemplateDifficulty.BEGINNER,
estimated_time=5,
created_by="system"
)
def _create_data_extraction_template(self) -> WorkflowTemplate:
"""Create the data extraction template"""
from models.visual_workflow import (
VisualWorkflow, VisualNode, VisualEdge, Position, Size, Port,
Variable, WorkflowSettings
)
nodes = [
VisualNode(
id="start",
type="start",
position=Position(50, 150),
size=Size(100, 50),
parameters={},
input_ports=[],
output_ports=[Port("out", "output", "output")],
label="Début"
),
VisualNode(
id="navigate",
type="navigate",
position=Position(200, 150),
size=Size(150, 80),
parameters={
"url": "{{target_url}}",
"wait_for_load": True
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Naviguer vers la page"
),
VisualNode(
id="extract_title",
type="extract",
position=Position(400, 100),
size=Size(150, 80),
parameters={
"target": "{{title_selector}}",
"attribute": "text",
"variable": "page_title"
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Extraire le titre"
),
VisualNode(
id="extract_content",
type="extract",
position=Position(400, 200),
size=Size(150, 80),
parameters={
"target": "{{content_selector}}",
"attribute": "text",
"variable": "page_content"
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Extraire le contenu"
),
VisualNode(
id="save_data",
type="save",
position=Position(600, 150),
size=Size(150, 80),
parameters={
"filename": "{{output_file}}",
"format": "json",
"data": {
"title": "${page_title}",
"content": "${page_content}",
"extracted_at": "${current_timestamp}"
}
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Sauvegarder les données"
),
VisualNode(
id="end",
type="end",
position=Position(800, 150),
size=Size(100, 50),
parameters={},
input_ports=[Port("in", "input", "input")],
output_ports=[],
label="Fin"
)
]
edges = [
VisualEdge("e1", "start", "navigate", "out", "in"),
VisualEdge("e2", "navigate", "extract_title", "out", "in"),
VisualEdge("e3", "extract_title", "extract_content", "out", "in"),
VisualEdge("e4", "extract_content", "save_data", "out", "in"),
VisualEdge("e5", "save_data", "end", "out", "in")
]
workflow = VisualWorkflow(
id="data_extraction_template_workflow",
name="Modèle d'Extraction de Données",
description="Workflow pour extraire des données d'une page web",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="system",
nodes=nodes,
edges=edges,
variables=[
Variable("page_title", "string", "", "Titre de la page extrait"),
Variable("page_content", "string", "", "Contenu de la page extrait"),
Variable("current_timestamp", "string", "", "Timestamp de l'extraction")
],
settings=WorkflowSettings(),
tags=["extraction", "data", "scraping"],
category="Data Processing",
is_template=True
)
parameters = [
TemplateParameter(
name="target_url",
type=ParameterType.STRING,
description="URL de la page à analyser",
node_id="navigate",
parameter_name="url",
label="URL cible"
),
TemplateParameter(
name="title_selector",
type=ParameterType.TARGET,
description="Sélecteur de l'élément titre",
node_id="extract_title",
parameter_name="target",
label="Sélecteur du titre"
),
TemplateParameter(
name="content_selector",
type=ParameterType.TARGET,
description="Sélecteur de l'élément contenu",
node_id="extract_content",
parameter_name="target",
label="Sélecteur du contenu"
),
TemplateParameter(
name="output_file",
type=ParameterType.STRING,
description="Nom du fichier de sortie",
node_id="save_data",
parameter_name="filename",
label="Fichier de sortie",
default_value="extracted_data.json"
)
]
return WorkflowTemplate(
id="data_extraction_template",
name="Extraction de données web",
description="Template pour extraire et sauvegarder des données depuis une page web",
category="Data Processing",
workflow=workflow,
parameters=parameters,
tags=["extraction", "scraping", "data", "web"],
difficulty=TemplateDifficulty.INTERMEDIATE,
estimated_time=8,
created_by="system"
)
def _create_navigation_template(self) -> WorkflowTemplate:
"""Create the navigation template"""
from models.visual_workflow import (
VisualWorkflow, VisualNode, VisualEdge, Position, Size, Port,
Variable, WorkflowSettings
)
nodes = [
VisualNode(
id="start",
type="start",
position=Position(50, 200),
size=Size(100, 50),
parameters={},
input_ports=[],
output_ports=[Port("out", "output", "output")],
label="Début"
),
VisualNode(
id="navigate_home",
type="navigate",
position=Position(200, 200),
size=Size(150, 80),
parameters={
"url": "{{home_url}}",
"wait_for_load": True
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Page d'accueil"
),
VisualNode(
id="click_menu",
type="click",
position=Position(400, 150),
size=Size(150, 80),
parameters={
"target": "{{menu_selector}}",
"wait_after": 1000
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Cliquer sur le menu"
),
VisualNode(
id="click_submenu",
type="click",
position=Position(400, 250),
size=Size(150, 80),
parameters={
"target": "{{submenu_selector}}",
"wait_after": 1000
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Cliquer sur le sous-menu"
),
VisualNode(
id="wait_page_load",
type="wait",
position=Position(600, 200),
size=Size(150, 80),
parameters={
"condition": "element_visible",
"target": "{{target_element}}",
"timeout": 10000
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Attendre le chargement"
),
VisualNode(
id="end",
type="end",
position=Position(800, 200),
size=Size(100, 50),
parameters={},
input_ports=[Port("in", "input", "input")],
output_ports=[],
label="Fin"
)
]
edges = [
VisualEdge("e1", "start", "navigate_home", "out", "in"),
VisualEdge("e2", "navigate_home", "click_menu", "out", "in"),
VisualEdge("e3", "click_menu", "click_submenu", "out", "in"),
VisualEdge("e4", "click_submenu", "wait_page_load", "out", "in"),
VisualEdge("e5", "wait_page_load", "end", "out", "in")
]
workflow = VisualWorkflow(
id="navigation_template_workflow",
name="Modèle de Navigation",
description="Workflow pour naviguer dans un site web avec menus",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="system",
nodes=nodes,
edges=edges,
variables=[],
settings=WorkflowSettings(),
tags=["navigation", "menu", "web"],
category="Web Automation",
is_template=True
)
parameters = [
TemplateParameter(
name="home_url",
type=ParameterType.STRING,
description="URL de la page d'accueil",
node_id="navigate_home",
parameter_name="url",
label="URL d'accueil"
),
TemplateParameter(
name="menu_selector",
type=ParameterType.TARGET,
description="Sélecteur de l'élément de menu principal",
node_id="click_menu",
parameter_name="target",
label="Menu principal"
),
TemplateParameter(
name="submenu_selector",
type=ParameterType.TARGET,
description="Sélecteur de l'élément de sous-menu",
node_id="click_submenu",
parameter_name="target",
label="Sous-menu"
),
TemplateParameter(
name="target_element",
type=ParameterType.TARGET,
description="Élément à attendre sur la page de destination",
node_id="wait_page_load",
parameter_name="target",
label="Élément de destination"
)
]
return WorkflowTemplate(
id="navigation_template",
name="Navigation avec menus",
description="Template pour naviguer dans un site web en utilisant les menus déroulants",
category="Web Automation",
workflow=workflow,
parameters=parameters,
tags=["navigation", "menu", "web", "click"],
difficulty=TemplateDifficulty.BEGINNER,
estimated_time=4,
created_by="system"
)
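
Le principe d'instanciation d'un template peut s'esquisser indépendamment des classes VWB : chaque `TemplateParameter` désigne un emplacement `(node_id, parameter_name)` dans le workflow, et l'instanciation y injecte la valeur fournie. Esquisse minimale avec des noms purement illustratifs (ce n'est pas l'API réelle de `WorkflowTemplate`) :

```python
# Esquisse (noms illustratifs) du binding d'un paramètre de template :
# chaque binding désigne un emplacement (node_id, parameter_name) et
# l'instanciation y écrit la valeur fournie.
nodes = {
    "navigate": {"url": "{{target_url}}", "wait_for_load": True},
    "save_data": {"filename": "{{output_file}}", "format": "json"},
}
bindings = [
    ("navigate", "url", "https://example.com"),
    ("save_data", "filename", "extracted_data.json"),
]
for node_id, parameter_name, value in bindings:
    nodes[node_id][parameter_name] = value

print(nodes["navigate"]["url"])
```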

View File

@@ -0,0 +1,204 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Service de Capture d'Écran Thread-Safe - RPA Vision V3
Auteur : Dom, Alice, Kiro - 09 janvier 2026
Service de capture d'écran thread-safe pour résoudre les problèmes
de threading avec Flask et mss.
"""
import cv2
import numpy as np
import base64
import io
import threading
from PIL import Image
from typing import Any, Dict, List, Tuple, Optional
import logging
logger = logging.getLogger(__name__)
class ThreadSafeScreenCapturer:
"""
Capturer d'écran thread-safe qui crée une instance mss locale
pour chaque thread.
"""
def __init__(self):
self.method = None
self._thread_local = threading.local()
self._init_capture_method()
def _init_capture_method(self):
"""Détermine la méthode de capture disponible."""
try:
import mss
self.method = "mss"
logger.info("✅ mss disponible - utilisation pour capture thread-safe")
except ImportError:
try:
import pyautogui
self.method = "pyautogui"
logger.info("✅ pyautogui disponible - utilisation pour capture thread-safe")
except ImportError:
logger.error("❌ Aucune méthode de capture disponible")
raise ImportError("Ni mss ni pyautogui disponibles")
def _get_thread_capturer(self):
"""Obtient ou crée un capturer pour le thread actuel."""
if not hasattr(self._thread_local, 'capturer'):
if self.method == "mss":
import mss
self._thread_local.capturer = mss.mss()
logger.debug(f"Nouvelle instance mss créée pour thread {threading.current_thread().ident}")
else:
import pyautogui
self._thread_local.capturer = pyautogui
logger.debug(f"pyautogui configuré pour thread {threading.current_thread().ident}")
return self._thread_local.capturer
def capture_screen(self) -> Optional[np.ndarray]:
"""
Capture l'écran de manière thread-safe.
Returns:
Screenshot as numpy array (H, W, 3) RGB ou None si erreur
"""
try:
capturer = self._get_thread_capturer()
if self.method == "mss":
return self._capture_mss(capturer)
else:
return self._capture_pyautogui(capturer)
except Exception as e:
logger.error(f"Erreur capture thread-safe: {e}")
return None
def _capture_mss(self, sct) -> np.ndarray:
"""Capture avec mss thread-safe."""
# Utiliser le moniteur principal (index 1)
monitor_idx = 1 if len(sct.monitors) > 1 else 0
monitor = sct.monitors[monitor_idx]
# Capturer
sct_img = sct.grab(monitor)
# Convertir en numpy array
img = np.array(sct_img)
# Convertir BGRA vers RGB
if img.shape[2] == 4:
img = img[:, :, :3][:, :, ::-1] # BGRA to RGB
elif img.shape[2] == 3:
img = img[:, :, ::-1] # BGR to RGB
if img.size == 0 or img.shape[0] == 0 or img.shape[1] == 0:
raise ValueError("Image capturée a des dimensions invalides")
return img
def _capture_pyautogui(self, pyautogui) -> np.ndarray:
"""Capture avec pyautogui."""
screenshot = pyautogui.screenshot()
img = np.array(screenshot)
if img.size == 0 or img.shape[0] == 0 or img.shape[1] == 0:
raise ValueError("Image capturée a des dimensions invalides")
return img
def capture_to_base64(self, format='PNG', quality=90) -> Dict[str, Any]:
"""
Capture l'écran et retourne l'image en base64.
Args:
format: Format de l'image ('PNG' ou 'JPEG')
quality: Qualité pour JPEG (1-100)
Returns:
Dict avec 'success', 'screenshot' (base64), 'width', 'height', ou 'error'
"""
try:
# Capturer l'écran
img_array = self.capture_screen()
if img_array is None:
return {
'success': False,
'error': 'Échec de la capture d\'écran'
}
# Convertir en PIL Image
pil_image = Image.fromarray(img_array)
# Convertir en base64
buffer = io.BytesIO()
if format.upper() == 'JPEG':
pil_image.save(buffer, format='JPEG', quality=quality, optimize=True)
else:
pil_image.save(buffer, format='PNG', optimize=True)
buffer.seek(0)
screenshot_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')
return {
'success': True,
'screenshot': screenshot_base64,
'width': pil_image.width,
'height': pil_image.height,
'format': format,
'thread_id': threading.current_thread().ident
}
except Exception as e:
logger.error(f"Erreur capture_to_base64: {e}")
return {
'success': False,
'error': f'Erreur lors de la capture: {str(e)}'
}
def get_screen_info(self) -> Dict[str, Any]:
"""Retourne les informations sur l'écran."""
try:
capturer = self._get_thread_capturer()
if self.method == "mss":
monitors = capturer.monitors
return {
'method': 'mss',
'monitors_count': len(monitors),
'primary_monitor': monitors[1] if len(monitors) > 1 else monitors[0],
'all_monitors': monitors
}
else:
import pyautogui
size = pyautogui.size()
return {
'method': 'pyautogui',
'width': size.width,
'height': size.height
}
except Exception as e:
logger.error(f"Erreur get_screen_info: {e}")
return {
'method': self.method,
'error': str(e)
}
def cleanup_thread(self):
"""Nettoie les ressources du thread actuel."""
if hasattr(self._thread_local, 'capturer'):
if self.method == "mss":
try:
self._thread_local.capturer.close()
except Exception as e:
logger.debug(f"Erreur cleanup mss: {e}")
delattr(self._thread_local, 'capturer')
logger.debug(f"Ressources nettoyées pour thread {threading.current_thread().ident}")
# Instance globale thread-safe
thread_safe_capturer = ThreadSafeScreenCapturer()
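
Le motif `threading.local` utilisé par `ThreadSafeScreenCapturer` se résume ainsi : chaque thread crée paresseusement sa propre ressource au lieu d'en partager une seule (c'est ce qui évite les erreurs de mss appelé depuis les threads de Flask). Démonstration autonome du motif, sans dépendre de mss :

```python
import threading

# Démonstration du motif thread-local : chaque thread obtient sa propre
# instance, créée paresseusement au premier accès.
_local = threading.local()
created = []  # un élément par ressource créée

def get_resource():
    if not hasattr(_local, "resource"):
        _local.resource = object()
        created.append(threading.current_thread().name)
    return _local.resource

def worker():
    a = get_resource()
    b = get_resource()
    assert a is b  # même instance au sein d'un même thread

threads = [threading.Thread(target=worker, name=f"t{i}") for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(created))  # une ressource par thread
```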

View File

@@ -0,0 +1,452 @@
"""
VWB Workflow Matcher - Intégration du SemanticMatcher avec les workflows VWB
Ce service permet de :
- Charger les workflows VWB depuis la base de données SQLite
- Trouver le workflow correspondant à une commande en langage naturel
- Utiliser les métadonnées (description, tags, triggerExamples) pour le matching
Usage:
matcher = VWBWorkflowMatcher()
result = matcher.find_workflow("créer une facture pour le client Acme")
if result:
print(f"Workflow trouvé: {result.workflow_name} (confiance: {result.confidence})")
"""
import re
import logging
from typing import Dict, Any, List, Optional, Tuple
from dataclasses import dataclass
logger = logging.getLogger(__name__)
@dataclass
class WorkflowMatch:
"""Résultat d'un matching de workflow."""
workflow_id: str
workflow_name: str
confidence: float
extracted_params: Dict[str, str]
match_reasons: List[str]
description: str = ""
    tags: Optional[List[str]] = None
step_count: int = 0
@dataclass
class WorkflowInfo:
"""Informations d'un workflow pour le matching."""
workflow_id: str
name: str
description: str
tags: List[str]
trigger_examples: List[str]
keywords: List[str]
step_count: int
class VWBWorkflowMatcher:
"""
Matcher sémantique pour les workflows VWB.
Stratégies de matching (par ordre de priorité) :
1. Matching exact des trigger_examples (0.9)
2. Matching exact du nom (0.5)
3. Matching des tags (0.3 par tag)
4. Matching des mots-clés (Jaccard similarity * 0.4)
5. Matching de la description (0.2)
Les scores sont cumulatifs et normalisés à 1.0 max.
"""
# Stop words français et anglais
STOP_WORDS = {
'le', 'la', 'les', 'un', 'une', 'des', 'de', 'du', 'à', 'au', 'aux',
'et', 'ou', 'mais', 'donc', 'or', 'ni', 'car', 'que', 'qui', 'quoi',
'ce', 'cette', 'ces', 'mon', 'ma', 'mes', 'ton', 'ta', 'tes', 'son',
'sa', 'ses', 'notre', 'votre', 'leur', 'leurs', 'je', 'tu', 'il',
'elle', 'nous', 'vous', 'ils', 'elles', 'on', 'se', 'en', 'y',
'the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for',
'of', 'with', 'by', 'from', 'is', 'are', 'was', 'were', 'be', 'been',
'have', 'has', 'had', 'do', 'does', 'did', 'will', 'would', 'could',
'should', 'may', 'might', 'must', 'shall', 'can', 'need', 'dare',
'faire', 'fait', 'fais', 'veux', 'veut', 'peux', 'peut', 'dois', 'doit'
}
def __init__(self, app=None):
"""
Initialiser le matcher.
Args:
app: Application Flask (optionnel, pour accès à la DB)
"""
self.app = app
self._workflows: Dict[str, WorkflowInfo] = {}
self._loaded = False
def _ensure_loaded(self) -> None:
"""S'assurer que les workflows sont chargés."""
if not self._loaded:
self.reload_workflows()
def reload_workflows(self) -> int:
"""
Recharger les workflows depuis la base de données.
Returns:
Nombre de workflows chargés
"""
self._workflows.clear()
try:
# Import ici pour éviter les imports circulaires
from db.models import Workflow
workflows = Workflow.query.filter_by(is_active=True).all()
for wf in workflows:
# Extraire les mots-clés
keywords = self._extract_keywords(wf.name, wf.description, wf.tags)
info = WorkflowInfo(
workflow_id=wf.id,
name=wf.name,
description=wf.description or "",
tags=wf.tags or [],
trigger_examples=wf.trigger_examples or [],
keywords=keywords,
step_count=wf.steps.count()
)
self._workflows[wf.id] = info
logger.debug(f"Loaded workflow: {wf.name} (tags: {wf.tags}, triggers: {len(wf.trigger_examples or [])})")
self._loaded = True
logger.info(f"VWBWorkflowMatcher: {len(self._workflows)} workflows chargés")
return len(self._workflows)
except Exception as e:
logger.error(f"Erreur chargement workflows: {e}")
self._loaded = True # Marquer comme chargé même en cas d'erreur
return 0
def _extract_keywords(self, name: str, description: str, tags: List[str]) -> List[str]:
"""Extraire les mots-clés d'un workflow."""
keywords = set()
# Tokeniser le nom
keywords.update(self._tokenize(name))
# Tokeniser la description
if description:
keywords.update(self._tokenize(description))
# Ajouter les tags (en minuscules)
if tags:
keywords.update(t.lower() for t in tags)
return list(keywords)
def _tokenize(self, text: str) -> List[str]:
"""Tokeniser un texte en mots-clés."""
if not text:
return []
# Normaliser
text = text.lower()
# Supprimer la ponctuation
text = re.sub(r'[^\w\s]', ' ', text)
# Découper en mots
words = text.split()
# Filtrer les mots courts et les stop words
return [w for w in words if len(w) > 2 and w not in self.STOP_WORDS]
def find_workflow(
self,
command: str,
min_confidence: float = 0.3
) -> Optional[WorkflowMatch]:
"""
Trouver le meilleur workflow correspondant à une commande.
Args:
command: Commande en langage naturel
min_confidence: Confiance minimale requise (0-1)
Returns:
WorkflowMatch ou None si aucun match
"""
matches = self.find_workflows(command, limit=1, min_confidence=min_confidence)
return matches[0] if matches else None
def find_workflows(
self,
command: str,
limit: int = 5,
min_confidence: float = 0.3
) -> List[WorkflowMatch]:
"""
Trouver les workflows correspondant à une commande.
Args:
command: Commande en langage naturel
limit: Nombre max de résultats
min_confidence: Confiance minimale requise
Returns:
Liste de WorkflowMatch triés par confiance décroissante
"""
self._ensure_loaded()
if not self._workflows:
logger.warning("Aucun workflow chargé")
return []
command_lower = command.lower().strip()
command_tokens = set(self._tokenize(command))
matches = []
for workflow_id, info in self._workflows.items():
score, reasons, params = self._calculate_match_score(
command_lower, command_tokens, info
)
if score >= min_confidence:
matches.append(WorkflowMatch(
workflow_id=workflow_id,
workflow_name=info.name,
confidence=round(score, 3),
extracted_params=params,
match_reasons=reasons,
description=info.description,
tags=info.tags,
step_count=info.step_count
))
# Trier par confiance décroissante
matches.sort(key=lambda m: m.confidence, reverse=True)
return matches[:limit]
def _calculate_match_score(
self,
command: str,
command_tokens: set,
info: WorkflowInfo
) -> Tuple[float, List[str], Dict[str, str]]:
"""
Calculer le score de matching entre une commande et un workflow.
Returns:
(score, reasons, extracted_params)
"""
score = 0.0
reasons = []
params = {}
# 1. Matching exact des trigger_examples (score le plus élevé)
for example in info.trigger_examples:
example_lower = example.lower().strip()
if example_lower in command or command in example_lower:
score += 0.9
reasons.append(f"trigger_example_exact:{example}")
break
# Matching partiel des exemples
example_tokens = set(self._tokenize(example))
if example_tokens and command_tokens:
overlap = len(example_tokens & command_tokens) / len(example_tokens)
if overlap > 0.7:
score += 0.6 * overlap
reasons.append(f"trigger_example_partial:{example}")
break
# 2. Matching exact du nom
name_lower = info.name.lower()
if name_lower in command:
score += 0.5
reasons.append("exact_name")
elif any(word in command for word in name_lower.split()):
score += 0.2
reasons.append("partial_name")
# 3. Matching des tags
matched_tags = []
for tag in info.tags:
tag_lower = tag.lower()
if tag_lower in command:
score += 0.3
matched_tags.append(tag)
if matched_tags:
reasons.append(f"tags:{','.join(matched_tags)}")
# 4. Matching des mots-clés (Jaccard similarity)
workflow_tokens = set(info.keywords)
if workflow_tokens and command_tokens:
intersection = command_tokens & workflow_tokens
union = command_tokens | workflow_tokens
jaccard = len(intersection) / len(union) if union else 0
score += jaccard * 0.4
if intersection:
reasons.append(f"keywords:{','.join(list(intersection)[:5])}")
# 5. Matching de la description
if info.description:
desc_tokens = set(self._tokenize(info.description))
if desc_tokens and command_tokens:
intersection = command_tokens & desc_tokens
if intersection:
score += 0.2 * (len(intersection) / len(desc_tokens))
reasons.append("description_match")
# 6. Extraction des paramètres
params = self._extract_params(command)
if params:
score += 0.05
reasons.append(f"params:{','.join(params.keys())}")
# Normaliser le score (max 1.0)
score = min(score, 1.0)
return score, reasons, params
def _extract_params(self, command: str) -> Dict[str, str]:
"""
Extraire les paramètres d'une commande.
Utilise des heuristiques pour extraire les valeurs.
"""
params = {}
# Pattern: "client X" ou "customer X"
client_match = re.search(r'(?:client|customer|compte)\s+([A-Za-zÀ-ÿ0-9_\-]+)', command, re.IGNORECASE)
if client_match:
params['client'] = client_match.group(1)
# Pattern: "facture N" ou "invoice N"
invoice_match = re.search(r'(?:facture|invoice|commande|order)\s+([A-Za-z0-9_\-]+)', command, re.IGNORECASE)
if invoice_match:
params['invoice'] = invoice_match.group(1)
# Pattern: "de X à Y" ou "from X to Y"
range_match = re.search(r'(?:de|from)\s+(\w+)\s+(?:à|to)\s+(\w+)', command, re.IGNORECASE)
if range_match:
params['start'] = range_match.group(1)
params['end'] = range_match.group(2)
# Pattern: valeurs entre guillemets
quoted_values = re.findall(r'"([^"]+)"', command)
for i, value in enumerate(quoted_values):
params[f'value{i}'] = value
# Pattern: nombres
numbers = re.findall(r'\b(\d+(?:[.,]\d+)?)\b', command)
for i, num in enumerate(numbers[:3]): # Max 3 nombres
params[f'number{i}'] = num
return params
def suggest_workflows(self, partial: str, limit: int = 5) -> List[Dict[str, Any]]:
"""
Suggérer des workflows basés sur une entrée partielle.
Args:
partial: Texte partiel saisi
limit: Nombre max de suggestions
Returns:
Liste de suggestions avec id, name, description
"""
self._ensure_loaded()
suggestions = []
partial_lower = partial.lower().strip()
partial_tokens = set(self._tokenize(partial))
for info in self._workflows.values():
score = 0
# Match sur le nom
if info.name.lower().startswith(partial_lower):
score = 1.0
elif partial_lower in info.name.lower():
score = 0.8
# Match sur les tags
for tag in info.tags:
if partial_lower in tag.lower():
score = max(score, 0.7)
# Match sur les trigger_examples
for example in info.trigger_examples:
if partial_lower in example.lower():
score = max(score, 0.9)
# Match sur les mots-clés
if partial_tokens:
keyword_match = partial_tokens & set(info.keywords)
if keyword_match:
score = max(score, 0.5 * len(keyword_match) / len(partial_tokens))
if score > 0:
suggestions.append({
'id': info.workflow_id,
'name': info.name,
'description': info.description[:100] + '...' if len(info.description) > 100 else info.description,
'tags': info.tags,
'score': score
})
# Trier par score
suggestions.sort(key=lambda x: x['score'], reverse=True)
# Retirer le score du résultat
for s in suggestions:
del s['score']
return suggestions[:limit]
def get_workflow_info(self, workflow_id: str) -> Optional[WorkflowInfo]:
"""Obtenir les infos d'un workflow."""
self._ensure_loaded()
return self._workflows.get(workflow_id)
def get_all_workflows(self) -> List[WorkflowInfo]:
"""Obtenir tous les workflows."""
self._ensure_loaded()
return list(self._workflows.values())
def workflow_count(self) -> int:
"""Nombre de workflows chargés."""
self._ensure_loaded()
return len(self._workflows)
# Instance globale (singleton)
_matcher_instance: Optional[VWBWorkflowMatcher] = None
def get_workflow_matcher() -> VWBWorkflowMatcher:
"""Obtenir l'instance globale du matcher."""
global _matcher_instance
if _matcher_instance is None:
_matcher_instance = VWBWorkflowMatcher()
return _matcher_instance
def find_matching_workflow(command: str, min_confidence: float = 0.3) -> Optional[WorkflowMatch]:
"""
Fonction utilitaire pour trouver un workflow.
Args:
command: Commande en langage naturel
min_confidence: Confiance minimale
Returns:
WorkflowMatch ou None
"""
return get_workflow_matcher().find_workflow(command, min_confidence)
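
Le cœur du scoring par mots-clés (stratégie 4) est une similarité de Jaccard sur les tokens filtrés. Esquisse autonome reprenant la logique de `_tokenize` avec un sous-ensemble réduit des stop words (liste illustrative, pas celle du matcher) :

```python
import re

# Sous-ensemble illustratif de STOP_WORDS (la vraie liste est plus longue)
STOP_WORDS = {"le", "la", "une", "de", "the", "a", "to", "for"}

def tokenize(text):
    # miroir de VWBWorkflowMatcher._tokenize : minuscules, ponctuation
    # supprimée, mots courts et stop words filtrés
    words = re.sub(r"[^\w\s]", " ", text.lower()).split()
    return [w for w in words if len(w) > 2 and w not in STOP_WORDS]

def jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

cmd = tokenize("créer une facture pour le client Acme")
wf = tokenize("Création de facture client")
print(round(jaccard(cmd, wf), 3))  # composante mots-clés avant le poids 0.4
```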

View File

@@ -0,0 +1,95 @@
"""
Service pour gérer les workflows visuels
"""
from datetime import datetime
from typing import Dict, List, Optional
from models.visual_workflow import VisualWorkflow, VisualNode, VisualEdge, Variable, WorkflowSettings, generate_id
class WorkflowService:
"""Service pour gérer les workflows en mémoire (pour les tests)"""
_workflows: Dict[str, VisualWorkflow] = {}
@classmethod
def create_workflow(cls, name: str, description: str = "", created_by: str = "system") -> VisualWorkflow:
"""Créer un nouveau workflow"""
workflow_id = generate_id()
now = datetime.now()
workflow = VisualWorkflow(
id=workflow_id,
name=name,
description=description,
version="1.0.0",
created_at=now,
updated_at=now,
created_by=created_by,
nodes=[],
edges=[],
variables=[],
settings=WorkflowSettings()
)
cls._workflows[workflow_id] = workflow
return workflow
@classmethod
def get_workflow(cls, workflow_id: str) -> Optional[VisualWorkflow]:
"""Récupérer un workflow par ID"""
return cls._workflows.get(workflow_id)
@classmethod
def list_workflows(cls) -> List[VisualWorkflow]:
"""Lister tous les workflows"""
return list(cls._workflows.values())
@classmethod
def update_workflow(cls, workflow_id: str, **kwargs) -> Optional[VisualWorkflow]:
"""Mettre à jour un workflow"""
workflow = cls._workflows.get(workflow_id)
if not workflow:
return None
# Mettre à jour les champs
for key, value in kwargs.items():
if hasattr(workflow, key):
setattr(workflow, key, value)
workflow.updated_at = datetime.now()
return workflow
@classmethod
def delete_workflow(cls, workflow_id: str) -> bool:
"""Supprimer un workflow"""
if workflow_id in cls._workflows:
del cls._workflows[workflow_id]
return True
return False
@classmethod
def add_node(cls, workflow_id: str, node: VisualNode) -> bool:
"""Ajouter un node à un workflow"""
workflow = cls._workflows.get(workflow_id)
if not workflow:
return False
workflow.nodes.append(node)
workflow.updated_at = datetime.now()
return True
@classmethod
def add_edge(cls, workflow_id: str, edge: VisualEdge) -> bool:
"""Ajouter un edge à un workflow"""
workflow = cls._workflows.get(workflow_id)
if not workflow:
return False
workflow.edges.append(edge)
workflow.updated_at = datetime.now()
return True
@classmethod
def clear_all(cls):
"""Vider tous les workflows (pour les tests)"""
cls._workflows.clear()
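
Le motif de `WorkflowService` (dictionnaire de classe comme magasin en mémoire, `updated_at` rafraîchi à chaque mutation) se démontre avec une version réduite autonome, aux noms purement illustratifs :

```python
from datetime import datetime

# Version réduite (noms illustratifs) du motif WorkflowService :
# un dict de classe sert de magasin en mémoire.
class InMemoryStore:
    _items = {}
    _next_id = 1

    @classmethod
    def create(cls, name):
        item = {"id": str(cls._next_id), "name": name,
                "updated_at": datetime.now()}
        cls._next_id += 1
        cls._items[item["id"]] = item
        return item

    @classmethod
    def get(cls, item_id):
        return cls._items.get(item_id)

    @classmethod
    def delete(cls, item_id):
        return cls._items.pop(item_id, None) is not None

wf = InMemoryStore.create("Demo workflow")
print(InMemoryStore.get(wf["id"])["name"])
```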

View File

@@ -0,0 +1,21 @@
#!/bin/bash
# Script de démarrage du backend Flask
echo "Démarrage du backend Flask..."
# Détecter python ou python3
if command -v python3 > /dev/null; then
PYTHON_CMD=python3
elif command -v python > /dev/null; then
PYTHON_CMD=python
else
echo "Erreur: Python n'est pas installé"
exit 1
fi
echo "Utilisation de: $PYTHON_CMD"
echo "Port: 5000"
echo ""
$PYTHON_CMD app.py

View File

@@ -0,0 +1,45 @@
#!/bin/bash
# Script de démarrage rapide du backend Visual Workflow Builder
# Auteur : Dom, Alice, Kiro - 08 janvier 2026
echo "⚡ DÉMARRAGE RAPIDE - BACKEND VWB"
echo "=================================="
# Détecter python ou python3
if command -v python3 > /dev/null; then
PYTHON_CMD=python3
elif command -v python > /dev/null; then
PYTHON_CMD=python
else
echo "❌ Erreur: Python n'est pas installé"
exit 1
fi
echo "🐍 Python: $PYTHON_CMD"
# Vérifier si nous sommes dans le bon répertoire
if [ ! -f "app_lightweight.py" ]; then
echo "❌ Erreur: Exécutez depuis visual_workflow_builder/backend/"
exit 1
fi
# Créer les répertoires nécessaires
echo "📁 Préparation des répertoires..."
mkdir -p ../../data/workflows
mkdir -p logs
# Définir les variables d'environnement
export FLASK_ENV=production
export PORT=5002
echo "🚀 Démarrage du backend allégé..."
echo "🌐 URL: http://localhost:5002"
echo "❤️ Health: http://localhost:5002/health"
echo "📋 API: http://localhost:5002/api/workflows"
echo ""
echo "Appuyez sur Ctrl+C pour arrêter"
echo ""
# Démarrer le serveur allégé
$PYTHON_CMD app_lightweight.py

View File

@@ -0,0 +1,69 @@
#!/bin/bash
# Script de démarrage optimisé du backend Visual Workflow Builder
# Auteur : Dom, Alice, Kiro - 08 janvier 2026
echo "🚀 Démarrage du Backend Visual Workflow Builder"
echo "================================================"
# Détecter python ou python3
if command -v python3 > /dev/null; then
PYTHON_CMD=python3
elif command -v python > /dev/null; then
PYTHON_CMD=python
else
echo "❌ Erreur: Python n'est pas installé"
exit 1
fi
echo "🐍 Utilisation de: $PYTHON_CMD"
# Vérifier si nous sommes dans le bon répertoire
if [ ! -f "app.py" ]; then
echo "❌ Erreur: app.py introuvable. Exécutez depuis visual_workflow_builder/backend/"
exit 1
fi
# Créer les répertoires nécessaires
echo "📁 Création des répertoires..."
mkdir -p ../../data/workflows
mkdir -p logs
# Vérifier les dépendances critiques
echo "🔍 Vérification des dépendances..."
$PYTHON_CMD -c "import flask, flask_cors; print('✅ Dépendances de base OK')" || {
echo "❌ Dépendances manquantes. Installation..."
pip install flask flask-cors python-dotenv PyYAML
}
# Choix du mode de démarrage
echo ""
echo "Choisissez le mode de démarrage:"
echo "1) Mode normal (toutes les fonctionnalités)"
echo "2) Mode allégé (démarrage rapide)"
echo "3) Mode debug (développement)"
echo ""
read -p "Votre choix (1-3): " choice
case $choice in
1)
echo "🚀 Démarrage en mode normal..."
export FLASK_ENV=production
$PYTHON_CMD app.py
;;
2)
echo "⚡ Démarrage en mode allégé..."
export FLASK_ENV=production
$PYTHON_CMD app_lightweight.py
;;
3)
echo "🔧 Démarrage en mode debug..."
export FLASK_ENV=development
export FLASK_DEBUG=1
$PYTHON_CMD app.py
;;
*)
echo "❌ Choix invalide. Démarrage en mode normal..."
$PYTHON_CMD app.py
;;
esac

View File

@@ -0,0 +1,84 @@
#!/bin/bash
#
# Check Visual Workflow Builder Backend Server Status
#
# This script checks if the server is running and displays status information
#
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PID_FILE="$SCRIPT_DIR/.server.pid"
LOG_FILE="$SCRIPT_DIR/server.log"
# Colors for output
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
echo "=========================================="
echo "Visual Workflow Builder - Server Status"
echo "=========================================="
# Check PID file
if [ -f "$PID_FILE" ]; then
PID=$(cat "$PID_FILE")
echo -e "${BLUE}PID File:${NC} $PID_FILE"
echo -e "${BLUE}PID:${NC} $PID"
# Check if process is running
if ps -p $PID > /dev/null 2>&1; then
echo -e "${GREEN}Status: RUNNING ✓${NC}"
# Get process info
echo ""
echo "Process Information:"
ps -p $PID -o pid,ppid,user,%cpu,%mem,etime,command
else
echo -e "${RED}Status: NOT RUNNING (stale PID file)${NC}"
echo "The PID file exists but the process is not running"
fi
else
echo -e "${YELLOW}PID File: Not found${NC}"
echo -e "${RED}Status: NOT RUNNING${NC}"
fi
# Check port 5001
echo ""
echo "Port 5001 Status:"
PORT_CHECK=$(lsof -ti:5001 2>/dev/null)
if [ -z "$PORT_CHECK" ]; then
echo -e "${RED} Port 5001: FREE${NC}"
else
echo -e "${GREEN} Port 5001: IN USE${NC}"
echo " Process(es) using port 5001:"
lsof -i:5001 | grep -v COMMAND
fi
# Check if server responds
echo ""
echo "Health Check:"
if command -v curl &> /dev/null; then
HEALTH_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:5001/health 2>/dev/null)
if [ "$HEALTH_RESPONSE" = "200" ]; then
echo -e "${GREEN} HTTP Response: 200 OK ✓${NC}"
echo " Server is responding correctly"
else
echo -e "${RED} HTTP Response: $HEALTH_RESPONSE${NC}"
echo " Server is not responding"
fi
else
echo -e "${YELLOW} curl not available, skipping health check${NC}"
fi
# Show recent log entries if log file exists
if [ -f "$LOG_FILE" ]; then
echo ""
echo "Recent Log Entries (last 10 lines):"
echo "-----------------------------------"
tail -n 10 "$LOG_FILE"
fi
echo ""
echo "=========================================="

View File

@@ -0,0 +1,110 @@
#!/bin/bash
#
# Stop Visual Workflow Builder Backend Server
#
# This script stops the Flask backend server gracefully
#
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PID_FILE="$SCRIPT_DIR/.server.pid"
# Colors for output
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo "=========================================="
echo "Visual Workflow Builder - Stop Server"
echo "=========================================="
# Check if PID file exists
if [ ! -f "$PID_FILE" ]; then
echo -e "${YELLOW}No server PID file found${NC}"
echo "Server may not be running or was started manually"
# Try to find and kill any Flask process on port 5001
echo "Checking for Flask processes on port 5001..."
FLASK_PIDS=$(lsof -ti:5001 2>/dev/null)
if [ -z "$FLASK_PIDS" ]; then
echo -e "${GREEN}No processes found on port 5001${NC}"
exit 0
else
echo "Found processes: $FLASK_PIDS"
echo "Killing processes..."
kill $FLASK_PIDS 2>/dev/null
sleep 1
# Force kill if still running
REMAINING=$(lsof -ti:5001 2>/dev/null)
if [ ! -z "$REMAINING" ]; then
echo "Force killing remaining processes..."
kill -9 $REMAINING 2>/dev/null
fi
echo -e "${GREEN}✓ Processes stopped${NC}"
exit 0
fi
fi
# Read PID from file
PID=$(cat "$PID_FILE")
# Check if process is running
if ! ps -p $PID > /dev/null 2>&1; then
echo -e "${YELLOW}Server process (PID: $PID) is not running${NC}"
rm "$PID_FILE"
# Check if port is still in use
PORT_PID=$(lsof -ti:5001 2>/dev/null)
if [ ! -z "$PORT_PID" ]; then
echo "But port 5001 is in use by PID: $PORT_PID"
echo "Killing process on port 5001..."
kill $PORT_PID 2>/dev/null
sleep 1
kill -9 $PORT_PID 2>/dev/null
fi
echo -e "${GREEN}✓ Cleanup complete${NC}"
exit 0
fi
echo "Stopping server (PID: $PID)..."
# Try graceful shutdown first
kill $PID 2>/dev/null
# Wait for process to stop (max 5 seconds)
for i in {1..5}; do
if ! ps -p $PID > /dev/null 2>&1; then
echo -e "${GREEN}✓ Server stopped gracefully${NC}"
rm "$PID_FILE"
exit 0
fi
sleep 1
done
# Force kill if still running
echo "Server did not stop gracefully, forcing shutdown..."
kill -9 $PID 2>/dev/null
# Wait a moment
sleep 1
if ps -p $PID > /dev/null 2>&1; then
echo -e "${RED}✗ Failed to stop server${NC}"
exit 1
else
echo -e "${GREEN}✓ Server stopped (forced)${NC}"
rm "$PID_FILE"
# Also kill any remaining processes on port 5001
REMAINING=$(lsof -ti:5001 2>/dev/null)
if [ ! -z "$REMAINING" ]; then
echo "Cleaning up remaining processes on port 5001..."
kill -9 $REMAINING 2>/dev/null
fi
exit 0
fi


@@ -0,0 +1,266 @@
#!/usr/bin/env python3
"""
Test Analytics API - Visual Workflow Builder Backend
Tests the analytics API endpoints and integration.
Requirement: 18.3
"""
import sys
import os
import unittest
import json
from pathlib import Path
# Add the project root to sys.path for imports
sys.path.insert(0, str(Path(__file__).parent.parent.parent))
sys.path.insert(0, str(Path(__file__).parent))
try:
from app import app
from services.execution_integration import get_executor
from services.serialization import WorkflowDatabase
except ImportError as e:
print(f"Import error: {e}")
print("Make sure you are in the correct directory and that dependencies are installed")
sys.exit(1)
class TestAnalyticsAPI(unittest.TestCase):
"""Tests for the Analytics API"""
def setUp(self):
"""Set up the test client and a sample workflow"""
self.app = app.test_client()
self.app.testing = True
# Create a test workflow
self.test_workflow = {
"name": "Test Analytics Workflow",
"description": "Workflow de test pour les analytics",
"nodes": [
{
"id": "start",
"type": "start",
"position": {"x": 100, "y": 100},
"label": "Début",
"parameters": {},
"input_ports": [],
"output_ports": [{"id": "out", "name": "Output", "type": "output"}]
},
{
"id": "end",
"type": "end",
"position": {"x": 300, "y": 100},
"label": "Fin",
"parameters": {},
"input_ports": [{"id": "in", "name": "Input", "type": "input"}],
"output_ports": []
}
],
"edges": [
{
"id": "edge1",
"source": "start",
"target": "end",
"source_port": "out",
"target_port": "in"
}
],
"variables": {},
"metadata": {
"created_at": "2024-12-14T00:00:00Z",
"version": "1.0.0"
}
}
def test_dashboard_summary_endpoint(self):
"""Test the dashboard summary endpoint"""
print("\n🧪 Test dashboard summary endpoint...")
response = self.app.get('/api/analytics/dashboard/summary?hours=24')
# The endpoint must respond even when Analytics is not configured
self.assertIn(response.status_code, [200, 503])
if response.status_code == 200:
data = json.loads(response.data)
self.assertIn('success', data)
print(" ✅ Endpoint reachable")
else:
print(" ⚠️ Analytics service unavailable (expected)")
def test_dashboard_workflows_endpoint(self):
"""Test the workflows dashboard endpoint"""
print("\n🧪 Test dashboard workflows endpoint...")
response = self.app.get('/api/analytics/dashboard/workflows?hours=24')
self.assertIn(response.status_code, [200, 503])
if response.status_code == 200:
data = json.loads(response.data)
self.assertIn('success', data)
print(" ✅ Endpoint reachable")
else:
print(" ⚠️ Analytics service unavailable (expected)")
def test_insights_endpoint(self):
"""Test the insights endpoint"""
print("\n🧪 Test insights endpoint...")
response = self.app.get('/api/analytics/insights?hours=168')
self.assertIn(response.status_code, [200, 503])
if response.status_code == 200:
data = json.loads(response.data)
self.assertIn('success', data)
print(" ✅ Endpoint reachable")
else:
print(" ⚠️ Analytics service unavailable (expected)")
def test_workflow_metrics_endpoint(self):
"""Test the workflow metrics endpoint"""
print("\n🧪 Test workflow metrics endpoint...")
# First create a workflow
response = self.app.post(
'/api/workflows',
data=json.dumps(self.test_workflow),
content_type='application/json'
)
self.assertEqual(response.status_code, 201)
workflow_data = json.loads(response.data)
workflow_id = workflow_data['workflow_id']
# Test the metrics endpoint
response = self.app.get(f'/api/analytics/workflow/{workflow_id}/metrics?hours=24')
self.assertIn(response.status_code, [200, 404, 503])
if response.status_code == 200:
data = json.loads(response.data)
self.assertIn('success', data)
print(" ✅ Metrics retrieved")
elif response.status_code == 404:
print(" No data (expected for a new workflow)")
else:
print(" ⚠️ Analytics service unavailable (expected)")
def test_workflow_performance_endpoint(self):
"""Test the workflow performance endpoint"""
print("\n🧪 Test workflow performance endpoint...")
# Create a workflow
response = self.app.post(
'/api/workflows',
data=json.dumps(self.test_workflow),
content_type='application/json'
)
self.assertEqual(response.status_code, 201)
workflow_data = json.loads(response.data)
workflow_id = workflow_data['workflow_id']
# Test the performance endpoint
response = self.app.get(f'/api/analytics/workflow/{workflow_id}/performance?hours=24')
self.assertIn(response.status_code, [200, 503])
if response.status_code == 200:
data = json.loads(response.data)
self.assertIn('success', data)
print(" ✅ Performance retrieved")
else:
print(" ⚠️ Analytics service unavailable (expected)")
def test_execution_with_analytics_hooks(self):
"""Test execution with the Analytics hooks"""
print("\n🧪 Test execution with analytics hooks...")
try:
# Create a workflow
response = self.app.post(
'/api/workflows',
data=json.dumps(self.test_workflow),
content_type='application/json'
)
self.assertEqual(response.status_code, 201)
workflow_data = json.loads(response.data)
workflow_id = workflow_data['workflow_id']
# Get the executor and check the Analytics integration
executor = get_executor()
# Check whether the Analytics integration is available
has_analytics = executor.analytics_integration is not None
print(f" 📊 Analytics integration: {'✅' if has_analytics else '⚠️ Not configured'}")
# Test the execution (simulation)
execution_id = executor.execute_workflow(workflow_id, {})
self.assertIsNotNone(execution_id)
print(f" 🚀 Execution started: {execution_id}")
# Check the status
import time
time.sleep(1)  # Wait briefly
result = executor.get_execution_status(execution_id)
self.assertIsNotNone(result)
print(f" 📊 Status: {result.status}")
# Check the Analytics metrics if available
if has_analytics:
analytics_data = executor.get_workflow_analytics(workflow_id, 1)
if analytics_data:
print(" ✅ Analytics metrics collected")
else:
print(" No metrics available")
except Exception as e:
print(f" ⚠️ Error during the execution test: {e}")
def test_analytics_integration_availability(self):
"""Test the availability of the Analytics integration"""
print("\n🧪 Test analytics integration availability...")
try:
# Check that the Analytics system can be imported
from core.analytics.analytics_system import get_analytics_system
print(" ✅ Analytics system imported")
# Check the execution integration
from core.analytics.integration.execution_integration import get_analytics_integration
print(" ✅ Execution integration imported")
# Test the initialization
executor = get_executor()
has_analytics = executor.analytics_integration is not None
print(f" 📊 Analytics integration active: {'✅' if has_analytics else '⚠️ Not configured'}")
except ImportError as e:
print(f" ⚠️ Analytics system unavailable: {e}")
except Exception as e:
print(f" ❌ Error: {e}")
def main():
"""Main test entry point"""
print("=" * 60)
print("🧪 TEST ANALYTICS API - Visual Workflow Builder Backend")
print("=" * 60)
# Run the tests
unittest.main(verbosity=2, exit=False)
print("\n" + "=" * 60)
print("✅ ANALYTICS API TESTS COMPLETE")
print("=" * 60)
if __name__ == "__main__":
main()


@@ -0,0 +1,320 @@
#!/usr/bin/env python3
"""
Manual API test script
Tests the REST API endpoints for workflows.
"""
import sys
import requests
import json
from datetime import datetime
# Base URL for API
BASE_URL = "http://localhost:5001/api"
def print_test(name):
"""Print test name"""
print(f"\n{'='*60}")
print(f"TEST: {name}")
print('='*60)
def print_response(response):
"""Print response details"""
print(f"Status: {response.status_code}")
try:
print(f"Response: {json.dumps(response.json(), indent=2)}")
except ValueError:
print(f"Response: {response.text}")
def test_health_check():
"""Test health check endpoint"""
print_test("Health Check")
response = requests.get(f"{BASE_URL.replace('/api', '')}/health")
print_response(response)
assert response.status_code == 200
data = response.json()
assert 'status' in data
assert data['status'] == 'healthy'
print("✓ Health check passed")
def test_list_workflows_empty():
"""Test listing workflows when empty"""
print_test("List Workflows (Empty)")
response = requests.get(f"{BASE_URL}/workflows/")
print_response(response)
assert response.status_code == 200
data = response.json()
assert isinstance(data, list)
print(f"✓ Got {len(data)} workflows")
def test_create_workflow():
"""Test creating a workflow"""
print_test("Create Workflow")
workflow_data = {
"name": "Test Workflow",
"description": "A test workflow created via API",
"created_by": "test_user",
"nodes": [],
"edges": [],
"variables": [],
"tags": ["test", "api"]
}
response = requests.post(
f"{BASE_URL}/workflows/",
json=workflow_data,
headers={'Content-Type': 'application/json'}
)
print_response(response)
assert response.status_code == 201
data = response.json()
assert 'id' in data
assert data['name'] == "Test Workflow"
assert data['created_by'] == "test_user"
print(f"✓ Created workflow with ID: {data['id']}")
return data['id']
def test_get_workflow(workflow_id):
"""Test getting a specific workflow"""
print_test(f"Get Workflow {workflow_id}")
response = requests.get(f"{BASE_URL}/workflows/{workflow_id}")
print_response(response)
assert response.status_code == 200
data = response.json()
assert data['id'] == workflow_id
print(f"✓ Retrieved workflow: {data['name']}")
def test_update_workflow(workflow_id):
"""Test updating a workflow"""
print_test(f"Update Workflow {workflow_id}")
update_data = {
"name": "Updated Test Workflow",
"description": "Updated description",
"tags": ["test", "api", "updated"]
}
response = requests.put(
f"{BASE_URL}/workflows/{workflow_id}",
json=update_data,
headers={'Content-Type': 'application/json'}
)
print_response(response)
assert response.status_code == 200
data = response.json()
assert data['name'] == "Updated Test Workflow"
assert "updated" in data['tags']
print("✓ Workflow updated successfully")
def test_validate_workflow(workflow_id):
"""Test validating a workflow"""
print_test(f"Validate Workflow {workflow_id}")
response = requests.post(f"{BASE_URL}/workflows/{workflow_id}/validate")
print_response(response)
assert response.status_code == 200
data = response.json()
assert 'is_valid' in data
assert 'errors' in data
assert 'warnings' in data
print(f"✓ Validation result: valid={data['is_valid']}, errors={len(data['errors'])}, warnings={len(data['warnings'])}")
def test_create_workflow_with_nodes():
"""Test creating a workflow with nodes and edges"""
print_test("Create Workflow with Nodes")
workflow_data = {
"name": "Workflow with Nodes",
"description": "A workflow with nodes and edges",
"created_by": "test_user",
"nodes": [
{
"id": "node-1",
"type": "click",
"position": {"x": 100, "y": 200},
"size": {"width": 200, "height": 100},
"parameters": {"target": "button"},
"input_ports": [],
"output_ports": [{"id": "out", "name": "Output", "type": "output"}]
},
{
"id": "node-2",
"type": "type",
"position": {"x": 400, "y": 200},
"size": {"width": 200, "height": 100},
"parameters": {"text": "Hello"},
"input_ports": [{"id": "in", "name": "Input", "type": "input"}],
"output_ports": []
}
],
"edges": [
{
"id": "edge-1",
"source": "node-1",
"target": "node-2",
"source_port": "out",
"target_port": "in",
"selected": False,
"animated": False
}
],
"variables": [
{
"name": "username",
"type": "string",
"value": "test_user"
}
]
}
response = requests.post(
f"{BASE_URL}/workflows/",
json=workflow_data,
headers={'Content-Type': 'application/json'}
)
print_response(response)
assert response.status_code == 201
data = response.json()
assert len(data['nodes']) == 2
assert len(data['edges']) == 1
assert len(data['variables']) == 1
print(f"✓ Created workflow with {len(data['nodes'])} nodes and {len(data['edges'])} edges")
return data['id']
def test_list_workflows_with_filters():
"""Test listing workflows with filters"""
print_test("List Workflows with Filters")
# List all
response = requests.get(f"{BASE_URL}/workflows/")
print(f"All workflows: {response.status_code}")
all_workflows = response.json()
print(f" Found {len(all_workflows)} workflows")
# Filter by tags
response = requests.get(f"{BASE_URL}/workflows/?tags=test")
print(f"Filtered by tag 'test': {response.status_code}")
filtered = response.json()
print(f" Found {len(filtered)} workflows")
print("✓ Filtering works")
def test_delete_workflow(workflow_id):
"""Test deleting a workflow"""
print_test(f"Delete Workflow {workflow_id}")
response = requests.delete(f"{BASE_URL}/workflows/{workflow_id}")
print(f"Status: {response.status_code}")
assert response.status_code == 204
print("✓ Workflow deleted successfully")
# Verify it's gone
response = requests.get(f"{BASE_URL}/workflows/{workflow_id}")
assert response.status_code == 404
print("✓ Workflow no longer exists")
def test_error_handling():
"""Test error handling"""
print_test("Error Handling")
# Test 404
response = requests.get(f"{BASE_URL}/workflows/nonexistent-id")
print(f"404 test: {response.status_code}")
assert response.status_code == 404
print("✓ 404 error handled correctly")
# Test validation error
response = requests.post(
f"{BASE_URL}/workflows/",
json={"invalid": "data"},
headers={'Content-Type': 'application/json'}
)
print(f"Validation error test: {response.status_code}")
assert response.status_code == 400
print("✓ Validation error handled correctly")
# Test missing body
response = requests.post(f"{BASE_URL}/workflows/")
print(f"Missing body test: {response.status_code}")
assert response.status_code == 400
print("✓ Missing body error handled correctly")
def run_all_tests():
"""Run all API tests"""
print("\n" + "="*60)
print("Visual Workflow Builder - API Tests")
print("="*60)
print(f"Testing API at: {BASE_URL}")
try:
# Basic tests
test_health_check()
test_list_workflows_empty()
# CRUD tests
workflow_id = test_create_workflow()
test_get_workflow(workflow_id)
test_update_workflow(workflow_id)
test_validate_workflow(workflow_id)
# Advanced tests
workflow_id2 = test_create_workflow_with_nodes()
test_list_workflows_with_filters()
# Cleanup
test_delete_workflow(workflow_id)
test_delete_workflow(workflow_id2)
# Error handling
test_error_handling()
print("\n" + "="*60)
print("✓ ALL TESTS PASSED!")
print("="*60)
except AssertionError as e:
print(f"\n✗ TEST FAILED: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
except requests.exceptions.ConnectionError:
print(f"\n✗ CONNECTION ERROR: Could not connect to {BASE_URL}")
print("Make sure the backend server is running on port 5001")
sys.exit(1)
except Exception as e:
print(f"\n✗ UNEXPECTED ERROR: {e}")
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == '__main__':
run_all_tests()


@@ -0,0 +1,173 @@
#!/usr/bin/env python3
"""
API Serialization Test - Visual Workflow Builder
Manual test of the API with the new serialization system.
"""
import sys
import json
import requests
from pathlib import Path
# Configuration
BASE_URL = "http://localhost:5002/api"
def test_create_workflow():
"""Workflow creation test"""
print("🧪 Test 1: Workflow creation")
data = {
"name": "Test Serialization Workflow",
"description": "Test workflow for serialization",
"created_by": "test_user",
"tags": ["test", "serialization"],
"category": "test"
}
response = requests.post(f"{BASE_URL}/workflows/", json=data)
if response.status_code == 201:
workflow = response.json()
print(f"✅ Workflow created: {workflow['id']}")
print(f" Name: {workflow['name']}")
print(f" Version: {workflow['version']}")
return workflow['id']
else:
print(f"❌ Failed: {response.status_code}")
print(response.text)
return None
def test_export_workflow(workflow_id):
"""Workflow export test"""
print(f"\n🧪 Test 2: Exporting workflow {workflow_id}")
# JSON export
response = requests.get(f"{BASE_URL}/workflows/{workflow_id}/export?format=json")
if response.status_code == 200:
data = json.loads(response.text)
print(f"✅ JSON export succeeded")
print(f" Size: {len(response.text)} characters")
print(f" Contains _serialization: {'_serialization' in data}")
return response.text
else:
print(f"❌ Failed: {response.status_code}")
print(response.text)
return None
def test_import_workflow(json_data):
"""Workflow import test"""
print("\n🧪 Test 3: Importing a workflow")
# Import with a new ID
response = requests.post(
f"{BASE_URL}/workflows/import?format=json&generate_new_id=true",
data=json_data,
headers={'Content-Type': 'application/json'}
)
if response.status_code == 201:
result = response.json()
workflow = result['workflow']
print(f"✅ Import succeeded")
print(f" New ID: {workflow['id']}")
print(f" Name: {workflow['name']}")
return workflow['id']
else:
print(f"❌ Failed: {response.status_code}")
print(response.text)
return None
def test_list_workflows():
"""Workflow listing test"""
print("\n🧪 Test 4: Listing workflows")
response = requests.get(f"{BASE_URL}/workflows/")
if response.status_code == 200:
workflows = response.json()
print(f"✅ {len(workflows)} workflow(s) found")
for wf in workflows[:3]:  # Show the first 3
print(f" - {wf['id']}: {wf['name']}")
return True
else:
print(f"❌ Failed: {response.status_code}")
print(response.text)
return False
def test_delete_workflow(workflow_id):
"""Workflow deletion test"""
print(f"\n🧪 Test 5: Deleting workflow {workflow_id}")
response = requests.delete(f"{BASE_URL}/workflows/{workflow_id}")
if response.status_code == 204:
print(f"✅ Workflow deleted")
return True
else:
print(f"❌ Failed: {response.status_code}")
print(response.text)
return False
def main():
"""Run all the tests"""
print("=" * 60)
print("🚀 API Serialization Tests")
print("=" * 60)
print(f"Base URL: {BASE_URL}")
print()
try:
# Check that the server is reachable
try:
response = requests.get(f"{BASE_URL}/workflows/", timeout=2)
except requests.exceptions.ConnectionError:
print("❌ Server not reachable!")
print(" Start the server with: cd backend && python app.py")
return 1
# Test 1: Create a workflow
workflow_id = test_create_workflow()
if not workflow_id:
return 1
# Test 2: Export the workflow
json_data = test_export_workflow(workflow_id)
if not json_data:
return 1
# Test 3: Import the workflow (with a new ID)
imported_id = test_import_workflow(json_data)
if not imported_id:
return 1
# Test 4: List the workflows
if not test_list_workflows():
return 1
# Test 5: Delete the created workflows
test_delete_workflow(workflow_id)
test_delete_workflow(imported_id)
print("\n" + "=" * 60)
print("✅ ALL TESTS PASSED!")
print("=" * 60)
return 0
except Exception as e:
print(f"\n❌ ERROR: {e}")
import traceback
traceback.print_exc()
return 1
if __name__ == '__main__':
sys.exit(main())


@@ -0,0 +1,93 @@
#!/usr/bin/env python3
"""
Visual Workflow Builder Backend Test
Authors: Dom, Alice, Kiro - January 8, 2026
"""
import requests
import json
import time
import sys
def test_backend(base_url="http://localhost:5002"):
"""Test the VWB backend."""
print(f"🧪 Testing backend: {base_url}")
# Test 1: Health check
try:
response = requests.get(f"{base_url}/health", timeout=5)
if response.status_code == 200:
print("✅ Health check OK")
print(f" Response: {response.json()}")
else:
print(f"❌ Health check failed: {response.status_code}")
return False
except Exception as e:
print(f"❌ Health check error: {e}")
return False
# Test 2: Workflow listing
try:
response = requests.get(f"{base_url}/api/workflows", timeout=5)
if response.status_code == 200:
workflows = response.json()
print(f"✅ Workflow listing OK ({len(workflows)} workflows)")
else:
print(f"❌ Workflow listing failed: {response.status_code}")
return False
except Exception as e:
print(f"❌ Workflow listing error: {e}")
return False
# Test 3: Workflow creation
try:
test_workflow = {
"name": "Test Workflow",
"description": "Automated test workflow",
"created_by": "test_script"
}
response = requests.post(
f"{base_url}/api/workflows",
json=test_workflow,
timeout=5
)
if response.status_code == 201:
created_workflow = response.json()
workflow_id = created_workflow['id']
print(f"✅ Workflow creation OK (ID: {workflow_id})")
# Test 4: Fetch the created workflow
response = requests.get(f"{base_url}/api/workflows/{workflow_id}", timeout=5)
if response.status_code == 200:
print("✅ Workflow retrieval OK")
return True
else:
print(f"❌ Workflow retrieval failed: {response.status_code}")
return False
else:
print(f"❌ Workflow creation failed: {response.status_code}")
print(f" Response: {response.text}")
return False
except Exception as e:
print(f"❌ Workflow creation error: {e}")
return False
if __name__ == "__main__":
print("🚀 Automated VWB backend test")
print("=" * 40)
# Wait for the server to be ready
print("⏳ Waiting for the server to start...")
time.sleep(2)
success = test_backend()
if success:
print("\n✅ All tests passed!")
sys.exit(0)
else:
print("\n❌ Some tests failed")
sys.exit(1)


@@ -0,0 +1,374 @@
#!/usr/bin/env python3
"""
Visual → WorkflowGraph Converter Test
Manual test of converting visual workflows into executable WorkflowGraph objects.
"""
import sys
from pathlib import Path
sys.path.insert(0, '.')
from services.converter import VisualToGraphConverter, ConversionError, convert_visual_to_graph
from services.serialization import create_empty_workflow
from models.visual_workflow import (
VisualNode,
VisualEdge,
Position,
Size,
Port
)
def test_empty_workflow_conversion():
"""Empty workflow conversion test"""
print("🧪 Test 1: Converting an empty workflow")
workflow = create_empty_workflow("Empty Workflow")
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
print("❌ Should have failed for an empty workflow")
return False
except ConversionError as e:
print(f"✅ Expected error: {e}")
return True
def test_simple_workflow_conversion():
"""Simple workflow conversion test"""
print("\n🧪 Test 2: Converting a simple workflow (2 nodes, 1 edge)")
workflow = create_empty_workflow("Simple Workflow")
# Add 2 nodes
workflow.nodes.append(VisualNode(
id="click_1",
type="click",
position=Position(100, 100),
size=Size(200, 80),
parameters={'target': {'text': 'Login Button', 'role': 'button'}},
input_ports=[],
output_ports=[Port('out', 'Output', 'output')]
))
workflow.nodes.append(VisualNode(
id="type_1",
type="type",
position=Position(400, 100),
size=Size(200, 80),
parameters={'target': {'role': 'textfield'}, 'text': 'username'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
))
# Add 1 edge
workflow.edges.append(VisualEdge(
id="edge_1",
source="click_1",
target="type_1",
source_port="out",
target_port="in"
))
# Convert
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
# Checks
assert result.workflow_id == workflow.id
assert result.name == "Simple Workflow"
assert len(result.nodes) == 2
assert len(result.edges) == 1
assert len(result.entry_nodes) == 1
assert len(result.end_nodes) == 1
assert result.entry_nodes[0] == "click_1"
assert result.end_nodes[0] == "type_1"
# Check the nodes
node1 = result.get_node("click_1")
assert node1 is not None
assert node1.is_entry == True
assert node1.metadata['visual_type'] == 'click'
node2 = result.get_node("type_1")
assert node2 is not None
assert node2.is_end == True
assert node2.metadata['visual_type'] == 'type'
# Check the edge
edge1 = result.get_edge("edge_1")
assert edge1 is not None
assert edge1.from_node == "click_1"
assert edge1.to_node == "type_1"
assert edge1.action.type == "mouse_click"
print(f"✅ Conversion succeeded:")
print(f" - {len(result.nodes)} nodes")
print(f" - {len(result.edges)} edges")
print(f" - Entry: {result.entry_nodes}")
print(f" - End: {result.end_nodes}")
return True
except Exception as e:
print(f"❌ Error: {e}")
import traceback
traceback.print_exc()
return False
def test_complex_workflow_conversion():
"""Complex workflow conversion test"""
print("\n🧪 Test 3: Converting a complex workflow (4 nodes, 3 edges)")
workflow = create_empty_workflow("Complex Workflow")
# Add 4 nodes
workflow.nodes.extend([
VisualNode(
id="navigate_1",
type="navigate",
position=Position(100, 100),
size=Size(200, 80),
parameters={'url': 'https://example.com'},
input_ports=[],
output_ports=[Port('out', 'Output', 'output')]
),
VisualNode(
id="click_1",
type="click",
position=Position(400, 100),
size=Size(200, 80),
parameters={'target': {'text': 'Login'}},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[Port('out', 'Output', 'output')]
),
VisualNode(
id="type_1",
type="type",
position=Position(700, 100),
size=Size(200, 80),
parameters={'target': {'role': 'textfield'}, 'text': '${username}'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[Port('out', 'Output', 'output')]
),
VisualNode(
id="wait_1",
type="wait",
position=Position(1000, 100),
size=Size(200, 80),
parameters={'duration': 2000},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
)
])
# Add 3 edges
workflow.edges.extend([
VisualEdge(
id="edge_1",
source="navigate_1",
target="click_1",
source_port="out",
target_port="in"
),
VisualEdge(
id="edge_2",
source="click_1",
target="type_1",
source_port="out",
target_port="in"
),
VisualEdge(
id="edge_3",
source="type_1",
target="wait_1",
source_port="out",
target_port="in"
)
])
# Convert
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
# Checks
assert len(result.nodes) == 4
assert len(result.edges) == 3
assert result.entry_nodes[0] == "navigate_1"
assert result.end_nodes[0] == "wait_1"
# Check the action types
edge1 = result.get_edge("edge_1")
assert edge1.action.type == "navigate"
assert edge1.action.parameters['url'] == 'https://example.com'
edge2 = result.get_edge("edge_2")
assert edge2.action.type == "mouse_click"
edge3 = result.get_edge("edge_3")
assert edge3.action.type == "text_input"
assert edge3.action.parameters['text'] == '${username}'
print(f"✅ Conversion succeeded:")
print(f" - {len(result.nodes)} nodes")
print(f" - {len(result.edges)} edges")
print(f" - Action types: navigate, mouse_click, text_input, wait")
return True
except Exception as e:
print(f"❌ Error: {e}")
import traceback
traceback.print_exc()
return False
def test_workflow_with_variables():
"""Conversion test with variables"""
print("\n🧪 Test 4: Conversion with variables")
workflow = create_empty_workflow("Workflow with Variables")
# Add variables
from models.visual_workflow import Variable
workflow.variables.extend([
Variable(name="username", type="string", value="test_user"),
Variable(name="password", type="string", value="secret123")
])
# Add nodes that use the variables
workflow.nodes.extend([
VisualNode(
id="type_user",
type="type",
position=Position(100, 100),
size=Size(200, 80),
parameters={'target': {'role': 'textfield'}, 'text': '${username}'},
input_ports=[],
output_ports=[Port('out', 'Output', 'output')]
),
VisualNode(
id="type_pass",
type="type",
position=Position(400, 100),
size=Size(200, 80),
parameters={'target': {'role': 'textfield'}, 'text': '${password}'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
)
])
workflow.edges.append(VisualEdge(
id="edge_1",
source="type_user",
target="type_pass",
source_port="out",
target_port="in"
))
# Convert
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
# Check that both nodes were converted
assert len(result.nodes) == 2
# Check that the variable references are preserved
edge1 = result.get_edge("edge_1")
assert '${username}' in edge1.action.parameters['text']
print(f"✅ Conversion with variables succeeded")
print(f" - Variable references preserved in the parameters")
return True
except Exception as e:
print(f"❌ Error: {e}")
import traceback
traceback.print_exc()
return False
def test_conversion_utility_function():
"""Test of the convert_visual_to_graph utility function"""
print("\n🧪 Test 5: convert_visual_to_graph utility function")
workflow = create_empty_workflow("Utility Test")
workflow.nodes.append(VisualNode(
id="click_1",
type="click",
position=Position(100, 100),
size=Size(200, 80),
parameters={'target': 'button'},
input_ports=[],
output_ports=[]
))
try:
result = convert_visual_to_graph(workflow)
assert result.workflow_id == workflow.id
assert len(result.nodes) == 1
print("✅ Utility function works")
return True
except Exception as e:
print(f"❌ Error: {e}")
return False
def main():
"""Run all the tests"""
print("=" * 60)
print("🚀 Tests du Convertisseur Visual → WorkflowGraph")
print("=" * 60)
tests = [
test_empty_workflow_conversion,
test_simple_workflow_conversion,
test_complex_workflow_conversion,
test_workflow_with_variables,
test_conversion_utility_function
]
results = []
for test in tests:
try:
result = test()
results.append(result)
except Exception as e:
print(f"\n❌ Test failed with exception: {e}")
import traceback
traceback.print_exc()
results.append(False)
print("\n" + "=" * 60)
passed = sum(results)
total = len(results)
if passed == total:
print(f"✅ ALL TESTS PASSED! ({passed}/{total})")
print("=" * 60)
return 0
else:
print(f"{total - passed} test(s) échoué(s) sur {total}")
print("=" * 60)
return 1
if __name__ == '__main__':
sys.exit(main())


@@ -0,0 +1,399 @@
#!/usr/bin/env python3
"""
ExecutionLoop Integration Test - Visual Workflow Builder
Manual test of the ExecutionLoop integration for workflow execution.
Requirements: 20.1, 20.2, 20.3
"""
import sys
import time
from pathlib import Path
sys.path.insert(0, '.')
from services.execution_integration import (
VisualWorkflowExecutor,
ExecutionStatus,
get_executor
)
from services.serialization import create_empty_workflow, WorkflowDatabase
from models.visual_workflow import (
VisualNode,
VisualEdge,
Variable,
Position,
Size,
Port
)
def test_executor_initialization():
"""Executor initialization test"""
print("🧪 Test 1: Executor initialization")
try:
executor = VisualWorkflowExecutor()
assert executor.db is not None
assert len(executor.executions) == 0
# Test the global instance
global_executor = get_executor()
assert global_executor is not None
print("✅ Executor initialized successfully")
print(f" - Database: OK")
print(f" - Integrations: OK")
print(f" - Global instance: OK")
return True
except Exception as e:
print(f"❌ Error: {e}")
import traceback
traceback.print_exc()
return False
def test_simple_workflow_execution():
"""Simple workflow execution test"""
print("\n🧪 Test 2: Executing a simple workflow")
try:
# Create a simple workflow
workflow = create_empty_workflow("Test Execution Workflow")
# Add nodes
workflow.nodes.extend([
VisualNode(
id="start_node",
type="wait",
position=Position(100, 100),
size=Size(200, 80),
parameters={'duration': 100},
input_ports=[],
output_ports=[Port('out', 'Output', 'output')]
),
VisualNode(
id="end_node",
type="click",
position=Position(400, 100),
size=Size(200, 80),
parameters={'target': 'Test Button'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
)
])
# Add an edge
workflow.edges.append(VisualEdge(
id="edge_1",
source="start_node",
target="end_node",
source_port="out",
target_port="in"
))
# Save the workflow
db = WorkflowDatabase()
db.save(workflow)
# Execute the workflow
executor = get_executor()
# Callback to track progress
progress_events = []
def progress_callback(execution_id, event_type, data):
progress_events.append({
'execution_id': execution_id,
'event_type': event_type,
'data': data
})
execution_id = executor.execute_workflow(
workflow_id=workflow.id,
variables={'test_var': 'test_value'},
progress_callback=progress_callback
)
assert execution_id is not None
assert execution_id.startswith('exec_')
# Wait briefly for the execution to start
time.sleep(0.2)
# Check the status
result = executor.get_execution_status(execution_id)
assert result is not None
assert result.execution_id == execution_id
assert result.workflow_id == workflow.id
assert result.status in [ExecutionStatus.RUNNING, ExecutionStatus.COMPLETED]
# Attendre la fin de l'exécution
max_wait = 5 # 5 secondes max
waited = 0
while waited < max_wait:
result = executor.get_execution_status(execution_id)
if result.status in [ExecutionStatus.COMPLETED, ExecutionStatus.FAILED]:
break
time.sleep(0.1)
waited += 0.1
# Vérifier le résultat final
final_result = executor.get_execution_status(execution_id)
assert final_result.status == ExecutionStatus.COMPLETED
assert final_result.start_time is not None
assert final_result.end_time is not None
assert final_result._calculate_duration() is not None
# Vérifier les événements de progression
assert len(progress_events) > 0
assert any(event['event_type'] == 'started' for event in progress_events)
assert any(event['event_type'] == 'completed' for event in progress_events)
print(f"✅ Exécution réussie:")
print(f" - ID d'exécution: {execution_id}")
print(f" - Statut final: {final_result.status}")
print(f" - Durée: {final_result._calculate_duration()}ms")
print(f" - Événements de progression: {len(progress_events)}")
print(f" - Logs: {len(final_result.logs)}")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def test_workflow_with_variables():
"""Test d'exécution avec variables"""
print("\n🧪 Test 3: Exécution avec variables d'entrée")
try:
# Créer un workflow avec variables
workflow = create_empty_workflow("Variables Workflow")
# Ajouter des variables au workflow
workflow.variables.extend([
Variable(name="username", type="string", value="default_user"),
Variable(name="count", type="number", value=5)
])
# Ajouter un node utilisant les variables
workflow.nodes.append(VisualNode(
id="type_username",
type="type",
position=Position(100, 100),
size=Size(200, 80),
parameters={'target': 'input', 'text': '${username}'},
input_ports=[],
output_ports=[]
))
# Sauvegarder
db = WorkflowDatabase()
db.save(workflow)
# Exécuter avec variables personnalisées
executor = get_executor()
execution_id = executor.execute_workflow(
workflow_id=workflow.id,
variables={
'username': 'custom_user',
'count': 10,
'extra_var': 'extra_value'
}
)
# Attendre la fin
time.sleep(0.5)
result = executor.get_execution_status(execution_id)
assert result.status == ExecutionStatus.COMPLETED
print(f"✅ Exécution avec variables réussie:")
print(f" - Variables d'entrée transmises")
print(f" - Exécution terminée: {result.status}")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def test_execution_cancellation():
"""Test d'annulation d'exécution"""
print("\n🧪 Test 4: Annulation d'exécution")
try:
# Créer un workflow long
workflow = create_empty_workflow("Long Workflow")
# Ajouter plusieurs nodes pour simuler une exécution longue
for i in range(10):
workflow.nodes.append(VisualNode(
id=f"wait_{i}",
type="wait",
position=Position(100 + i * 50, 100),
size=Size(200, 80),
parameters={'duration': 200},
input_ports=[Port('in', 'Input', 'input')] if i > 0 else [],
output_ports=[Port('out', 'Output', 'output')] if i < 9 else []
))
if i > 0:
workflow.edges.append(VisualEdge(
id=f"edge_{i}",
source=f"wait_{i-1}",
target=f"wait_{i}",
source_port="out",
target_port="in"
))
# Sauvegarder
db = WorkflowDatabase()
db.save(workflow)
# Démarrer l'exécution
executor = get_executor()
execution_id = executor.execute_workflow(workflow_id=workflow.id)
# Attendre un peu puis annuler
time.sleep(0.1)
# Vérifier qu'elle est en cours
result = executor.get_execution_status(execution_id)
if result.status == ExecutionStatus.RUNNING:
# Annuler
cancelled = executor.cancel_execution(execution_id)
assert cancelled is True
# Vérifier le statut
result = executor.get_execution_status(execution_id)
assert result.status == ExecutionStatus.CANCELLED
print(f"✅ Annulation réussie:")
print(f" - Exécution annulée: {execution_id}")
print(f" - Statut final: {result.status}")
else:
print(f"⚠️ Exécution trop rapide pour être annulée (statut: {result.status})")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def test_execution_listing():
"""Test de listage des exécutions"""
print("\n🧪 Test 5: Listage des exécutions")
try:
executor = get_executor()
# Lister toutes les exécutions
all_executions = executor.list_executions()
assert isinstance(all_executions, list)
print(f"✅ Listage réussi:")
print(f" - Nombre total d'exécutions: {len(all_executions)}")
# Afficher quelques détails
for i, execution in enumerate(all_executions[:3]):
print(f" - Exécution {i+1}: {execution['execution_id']} ({execution['status']})")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def test_error_handling():
"""Test de gestion d'erreurs"""
print("\n🧪 Test 6: Gestion d'erreurs")
try:
executor = get_executor()
# Tenter d'exécuter un workflow inexistant
try:
execution_id = executor.execute_workflow("nonexistent_workflow")
print("❌ Devrait lever une exception")
return False
except ValueError as e:
print(f"✅ Exception attendue: {e}")
# Tenter d'obtenir le statut d'une exécution inexistante
result = executor.get_execution_status("nonexistent_execution")
assert result is None
# Tenter d'annuler une exécution inexistante
cancelled = executor.cancel_execution("nonexistent_execution")
assert cancelled is False
print(f"✅ Gestion d'erreurs correcte")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def main():
"""Exécute tous les tests"""
print("=" * 60)
print("🚀 Tests d'Intégration ExecutionLoop")
print("=" * 60)
tests = [
test_executor_initialization,
test_simple_workflow_execution,
test_workflow_with_variables,
test_execution_cancellation,
test_execution_listing,
test_error_handling
]
results = []
for test in tests:
try:
result = test()
results.append(result)
except Exception as e:
print(f"\n❌ Test échoué avec exception: {e}")
import traceback
traceback.print_exc()
results.append(False)
print("\n" + "=" * 60)
passed = sum(results)
total = len(results)
if passed == total:
print(f"✅ TOUS LES TESTS RÉUSSIS! ({passed}/{total})")
print("=" * 60)
return 0
else:
print(f"{total - passed} test(s) échoué(s) sur {total}")
print("=" * 60)
return 1
if __name__ == '__main__':
sys.exit(main())


@@ -0,0 +1,335 @@
#!/usr/bin/env python3
"""
Test du système d'import/export de workflows
"""
import json
from api.import_export import ImportExportService
from models.visual_workflow import VisualWorkflow, VisualNode, VisualEdge, WorkflowVariable
def test_export_json():
"""Test d'export en JSON"""
print("🧪 Test d'export JSON...")
# Créer un workflow de test
workflow = VisualWorkflow(
id="test-workflow",
name="Test Workflow",
description="Workflow de test pour l'export",
nodes=[
VisualNode(
id="node-1",
type="click",
position={"x": 100, "y": 100},
data={"label": "Click Button", "parameters": {"target": "button"}}
),
VisualNode(
id="node-2",
type="type",
position={"x": 300, "y": 100},
data={"label": "Type Text", "parameters": {"text": "Hello World"}}
)
],
edges=[
VisualEdge(
id="edge-1",
source="node-1",
target="node-2"
)
],
variables=[
WorkflowVariable(
id="var-1",
name="username",
value="testuser",
type="string",
description="Nom d'utilisateur"
)
]
)
# Exporter
export_data = ImportExportService.export_workflow(
workflow,
format_type='json',
include_metadata=True,
include_template_info=False,
minify=False
)
# Vérifier la structure
assert 'version' in export_data
assert 'name' in export_data
assert 'nodes' in export_data
assert 'edges' in export_data
assert 'variables' in export_data
assert 'metadata' in export_data
assert len(export_data['nodes']) == 2
assert len(export_data['edges']) == 1
assert len(export_data['variables']) == 1
print("✅ Export JSON réussi")
return export_data
def test_export_yaml():
"""Test d'export en YAML"""
print("🧪 Test d'export YAML...")
# Créer un workflow simple
workflow = VisualWorkflow(
id="yaml-test",
name="YAML Test",
description="Test YAML export",
nodes=[
VisualNode(
id="start",
type="start",
position={"x": 0, "y": 0},
data={"label": "Start"}
)
],
edges=[],
variables=[]
)
# Exporter
export_data = ImportExportService.export_workflow(workflow, format_type='yaml')
# Vérifier
assert export_data['name'] == "YAML Test"
assert len(export_data['nodes']) == 1
print("✅ Export YAML réussi")
return export_data
def test_import_json():
"""Test d'import JSON"""
print("🧪 Test d'import JSON...")
# Données JSON de test
json_data = {
"version": "1.0.0",
"name": "Imported Workflow",
"description": "Test import",
"nodes": [
{
"id": "imported-node",
"type": "click",
"position": {"x": 50, "y": 50},
"data": {"label": "Imported Click", "parameters": {"target": "button"}}
}
],
"edges": [],
"variables": [
{
"id": "imported-var",
"name": "test_var",
"value": "test_value",
"type": "string"
}
]
}
# Convertir en string JSON
json_string = json.dumps(json_data)
# Importer
result = ImportExportService.import_workflow(json_string, "test.json")
# Vérifier
assert result['success'] is True
assert 'workflow' in result
workflow_data = result['workflow']
assert workflow_data['name'] == "Imported Workflow"
assert len(workflow_data['nodes']) == 1
assert len(workflow_data['variables']) == 1
print("✅ Import JSON réussi")
return result
def test_import_validation():
"""Test de validation d'import"""
print("🧪 Test de validation d'import...")
# Données invalides
invalid_data = {
"name": "Invalid Workflow",
# Manque nodes et edges
}
json_string = json.dumps(invalid_data)
result = ImportExportService.import_workflow(json_string)
# Doit échouer
assert result['success'] is False
assert 'errors' in result
assert len(result['errors']) > 0
print("✅ Validation d'import réussie")
# Test avec données partiellement valides
partial_data = {
"name": "Partial Workflow",
"nodes": [
{
"id": "node-1",
"type": "click",
"position": {"x": 0, "y": 0}
}
],
"edges": []
# Manque variables mais c'est optionnel
}
json_string = json.dumps(partial_data)
result = ImportExportService.import_workflow(json_string)
# Doit réussir avec avertissements
assert result['success'] is True
assert 'warnings' in result
print("✅ Import partiel avec avertissements réussi")
def test_round_trip():
"""Test de round-trip (export puis import)"""
print("🧪 Test de round-trip...")
# Créer un workflow original
original = VisualWorkflow(
id="roundtrip-test",
name="Round Trip Test",
description="Test de round-trip",
nodes=[
VisualNode(
id="rt-node-1",
type="click",
position={"x": 100, "y": 200},
data={"label": "Click", "parameters": {"target": "button", "timeout": 5000}}
),
VisualNode(
id="rt-node-2",
type="type",
position={"x": 300, "y": 200},
data={"label": "Type", "parameters": {"text": "${username}", "clear": True}}
)
],
edges=[
VisualEdge(
id="rt-edge-1",
source="rt-node-1",
target="rt-node-2",
data={"condition": "success"}
)
],
variables=[
WorkflowVariable(
id="rt-var-1",
name="username",
value="testuser",
type="string",
description="Username for login"
)
]
)
# Export
export_data = ImportExportService.export_workflow(original, include_metadata=False)
json_string = json.dumps(export_data)
# Import
import_result = ImportExportService.import_workflow(json_string)
# Vérifier
assert import_result['success'] is True
imported_data = import_result['workflow']
# Comparer les données importantes
assert imported_data['name'] == original.name
assert imported_data['description'] == original.description
assert len(imported_data['nodes']) == len(original.nodes)
assert len(imported_data['edges']) == len(original.edges)
assert len(imported_data['variables']) == len(original.variables)
# Vérifier les détails des nodes
imported_node = imported_data['nodes'][0]
original_node = original.nodes[0]
assert imported_node['type'] == original_node.type
assert imported_node['position'] == original_node.position
print("✅ Round-trip réussi")
def test_migration():
"""Test de migration de version"""
print("🧪 Test de migration...")
# Données d'ancienne version (sans version et structure data différente)
old_data = {
"name": "Old Workflow",
"nodes": [
{
"id": "old-node",
"type": "click",
"position": {"x": 0, "y": 0},
"label": "Old Click",
"parameters": {"target": "button"}
# Pas de structure 'data'
}
],
"edges": [],
"variables": []
}
json_string = json.dumps(old_data)
result = ImportExportService.import_workflow(json_string)
# Doit réussir avec migration
assert result['success'] is True
assert 'warnings' in result
assert any('migré' in warning for warning in result['warnings'])
# Vérifier que la structure a été migrée
workflow_data = result['workflow']
node = workflow_data['nodes'][0]
assert 'data' in node
assert 'label' in node['data']
assert 'parameters' in node['data']
print("✅ Migration réussie")
def main():
"""Fonction principale de test"""
print("🚀 Démarrage des tests d'import/export...")
try:
# Tests d'export
test_export_json()
test_export_yaml()
# Tests d'import
test_import_json()
test_import_validation()
# Tests avancés
test_round_trip()
test_migration()
print("\n✅ Tous les tests d'import/export sont passés avec succès!")
except Exception as e:
print(f"\n❌ Erreur dans les tests: {e}")
import traceback
traceback.print_exc()
return False
return True
if __name__ == "__main__":
success = main()
raise SystemExit(0 if success else 1)


@@ -0,0 +1,461 @@
#!/usr/bin/env python3
"""
Test des Nodes de Logique (Condition et Loop)
Test manuel de la conversion des nodes de condition et de boucle.
"""
import sys
from pathlib import Path
sys.path.insert(0, '.')
from services.converter import VisualToGraphConverter, convert_visual_to_graph
from services.serialization import create_empty_workflow
from models.visual_workflow import (
VisualNode,
VisualEdge,
Position,
Size,
Port,
EdgeCondition
)
def test_condition_node_conversion():
"""Test de conversion d'un node Condition"""
print("🧪 Test 1: Conversion d'un node Condition avec branches true/false")
workflow = create_empty_workflow("Condition Workflow")
# Ajouter un node condition
workflow.nodes.append(VisualNode(
id="condition_1",
type="condition",
position=Position(100, 100),
size=Size(200, 80),
parameters={'expression': '${status} == "success"', 'type': 'expression'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[
Port('out_true', 'True', 'output'),
Port('out_false', 'False', 'output')
]
))
# Ajouter nodes pour les branches
workflow.nodes.extend([
VisualNode(
id="success_action",
type="click",
position=Position(400, 50),
size=Size(200, 80),
parameters={'target': 'Success Button'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
),
VisualNode(
id="failure_action",
type="click",
position=Position(400, 150),
size=Size(200, 80),
parameters={'target': 'Retry Button'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
)
])
# Ajouter edges pour les branches
workflow.edges.extend([
VisualEdge(
id="edge_true",
source="condition_1",
target="success_action",
source_port="out_true",
target_port="in"
),
VisualEdge(
id="edge_false",
source="condition_1",
target="failure_action",
source_port="out_false",
target_port="in"
)
])
# Convertir
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
# Vérifications
assert len(result.nodes) == 3
assert len(result.edges) == 2
assert len(result.conditionals) == 1
# Vérifier la configuration de la condition
condition_config = result.conditionals['condition_1']
assert condition_config['expression'] == '${status} == "success"'
assert condition_config['true_branch'] == 'success_action'
assert condition_config['false_branch'] == 'failure_action'
# Vérifier les edges
edge_true = result.get_edge('edge_true')
assert edge_true is not None
assert edge_true.from_node == 'condition_1'
assert edge_true.to_node == 'success_action'
edge_false = result.get_edge('edge_false')
assert edge_false is not None
assert edge_false.from_node == 'condition_1'
assert edge_false.to_node == 'failure_action'
print(f"✅ Conversion réussie:")
print(f" - Condition configurée avec expression")
print(f" - Branche true → {condition_config['true_branch']}")
print(f" - Branche false → {condition_config['false_branch']}")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def test_loop_repeat_conversion():
"""Test de conversion d'une boucle repeat"""
print("\n🧪 Test 2: Conversion d'une boucle repeat (nombre fixe d'itérations)")
workflow = create_empty_workflow("Loop Repeat Workflow")
# Ajouter un node loop
workflow.nodes.append(VisualNode(
id="loop_1",
type="loop",
position=Position(100, 100),
size=Size(200, 80),
parameters={'type': 'repeat', 'count': 5},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[
Port('out_body', 'Body', 'output'),
Port('out_exit', 'Exit', 'output')
]
))
# Ajouter node pour le corps de la boucle
workflow.nodes.append(VisualNode(
id="loop_body",
type="click",
position=Position(400, 100),
size=Size(200, 80),
parameters={'target': 'Next Button'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
))
# Ajouter node de sortie
workflow.nodes.append(VisualNode(
id="after_loop",
type="wait",
position=Position(700, 100),
size=Size(200, 80),
parameters={'duration': 1000},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
))
# Ajouter edges
workflow.edges.extend([
VisualEdge(
id="edge_body",
source="loop_1",
target="loop_body",
source_port="out_body",
target_port="in"
),
VisualEdge(
id="edge_exit",
source="loop_1",
target="after_loop",
source_port="out_exit",
target_port="in"
)
])
# Convertir
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
# Vérifications
assert len(result.nodes) == 3
assert len(result.edges) == 2
assert len(result.loops) == 1
# Vérifier la configuration de la boucle
loop_config = result.loops['loop_1']
assert loop_config['loop_type'] == 'repeat'
assert loop_config['count'] == 5
assert 'loop_body' in loop_config['body_nodes']
assert loop_config['exit_node'] == 'after_loop'
print(f"✅ Conversion réussie:")
print(f" - Boucle repeat avec {loop_config['count']} itérations")
print(f" - Corps: {loop_config['body_nodes']}")
print(f" - Sortie: {loop_config['exit_node']}")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def test_loop_while_conversion():
"""Test de conversion d'une boucle while"""
print("\n🧪 Test 3: Conversion d'une boucle while (avec condition)")
workflow = create_empty_workflow("Loop While Workflow")
# Ajouter un node loop while
workflow.nodes.append(VisualNode(
id="loop_while",
type="loop",
position=Position(100, 100),
size=Size(200, 80),
parameters={
'type': 'while',
'condition': '${counter} < 10',
'max_iterations': 100
},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[
Port('out_body', 'Body', 'output'),
Port('out_exit', 'Exit', 'output')
]
))
# Ajouter node pour le corps
workflow.nodes.append(VisualNode(
id="increment",
type="variable",
position=Position(400, 100),
size=Size(200, 80),
parameters={'name': 'counter', 'value': '${counter} + 1'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
))
# Ajouter edges
workflow.edges.append(VisualEdge(
id="edge_body",
source="loop_while",
target="increment",
source_port="out_body",
target_port="in"
))
# Convertir
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
# Vérifications
assert len(result.loops) == 1
loop_config = result.loops['loop_while']
assert loop_config['loop_type'] == 'while'
assert loop_config['condition'] == '${counter} < 10'
assert loop_config['max_iterations'] == 100
print(f"✅ Conversion réussie:")
print(f" - Boucle while avec condition: {loop_config['condition']}")
print(f" - Max itérations: {loop_config['max_iterations']}")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def test_loop_foreach_conversion():
"""Test de conversion d'une boucle for-each"""
print("\n🧪 Test 4: Conversion d'une boucle for-each (sur collection)")
workflow = create_empty_workflow("Loop ForEach Workflow")
# Ajouter un node loop for-each
workflow.nodes.append(VisualNode(
id="loop_foreach",
type="loop",
position=Position(100, 100),
size=Size(200, 80),
parameters={
'type': 'for-each',
'collection': '${items}',
'item_variable': 'current_item'
},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[
Port('out_body', 'Body', 'output'),
Port('out_exit', 'Exit', 'output')
]
))
# Ajouter node pour le corps
workflow.nodes.append(VisualNode(
id="process_item",
type="click",
position=Position(400, 100),
size=Size(200, 80),
parameters={'target': '${current_item}'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
))
# Ajouter edges
workflow.edges.append(VisualEdge(
id="edge_body",
source="loop_foreach",
target="process_item",
source_port="out_body",
target_port="in"
))
# Convertir
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
# Vérifications
assert len(result.loops) == 1
loop_config = result.loops['loop_foreach']
assert loop_config['loop_type'] == 'for-each'
assert loop_config['collection'] == '${items}'
assert loop_config['item_variable'] == 'current_item'
print(f"✅ Conversion réussie:")
print(f" - Boucle for-each sur: {loop_config['collection']}")
print(f" - Variable d'item: {loop_config['item_variable']}")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def test_expression_validation():
"""Test de validation des expressions de condition"""
print("\n🧪 Test 5: Validation des expressions de condition")
workflow = create_empty_workflow("Expression Validation")
# Ajouter un node condition avec expression invalide (parenthèses non équilibrées)
workflow.nodes.append(VisualNode(
id="condition_invalid",
type="condition",
position=Position(100, 100),
size=Size(200, 80),
parameters={'expression': '(${value} == "test"'}, # Parenthèse manquante
input_ports=[Port('in', 'Input', 'input')],
output_ports=[Port('out_true', 'True', 'output')]
))
workflow.nodes.append(VisualNode(
id="dummy",
type="wait",
position=Position(400, 100),
size=Size(200, 80),
parameters={'duration': 1000},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[]
))
workflow.edges.append(VisualEdge(
id="edge_1",
source="condition_invalid",
target="dummy",
source_port="out_true",
target_port="in"
))
# Convertir
converter = VisualToGraphConverter()
try:
result = converter.convert(workflow)
# Vérifier qu'il y a des avertissements
warnings = converter.get_warnings()
assert len(warnings) > 0
assert any('invalide' in w.lower() for w in warnings)
print(f"✅ Validation réussie:")
print(f" - {len(warnings)} avertissement(s) détecté(s)")
for warning in warnings:
print(f" - {warning}")
return True
except Exception as e:
print(f"❌ Erreur: {e}")
import traceback
traceback.print_exc()
return False
def main():
"""Exécute tous les tests"""
print("=" * 60)
print("🚀 Tests des Nodes de Logique (Condition et Loop)")
print("=" * 60)
tests = [
test_condition_node_conversion,
test_loop_repeat_conversion,
test_loop_while_conversion,
test_loop_foreach_conversion,
test_expression_validation
]
results = []
for test in tests:
try:
result = test()
results.append(result)
except Exception as e:
print(f"\n❌ Test échoué avec exception: {e}")
import traceback
traceback.print_exc()
results.append(False)
print("\n" + "=" * 60)
passed = sum(results)
total = len(results)
if passed == total:
print(f"✅ TOUS LES TESTS RÉUSSIS! ({passed}/{total})")
print("=" * 60)
return 0
else:
print(f"{total - passed} test(s) échoué(s) sur {total}")
print("=" * 60)
return 1
if __name__ == '__main__':
sys.exit(main())


@@ -0,0 +1,294 @@
#!/usr/bin/env python3
"""
Manual test script for models
"""
import sys
sys.path.insert(0, '.')
from datetime import datetime
from models import (
VisualWorkflow,
VisualNode,
VisualEdge,
Variable,
WorkflowSettings,
Position,
Size,
Port,
NodeStatus,
generate_id
)
def test_basic_serialization():
"""Test basic serialization"""
print("Testing basic serialization...")
# Create a node
node = VisualNode(
id='node-1',
type='click',
position=Position(x=100, y=200),
size=Size(width=200, height=100),
parameters={'target': 'button', 'timeout': 5000},
input_ports=[Port(id='in', name='Input', type='input')],
output_ports=[Port(id='out', name='Output', type='output')],
label='Click Button'
)
# Serialize
data = node.to_dict()
print(f" Serialized node: {data['id']}, type: {data['type']}")
# Deserialize
node2 = VisualNode.from_dict(data)
print(f" Deserialized node: {node2.id}, type: {node2.type}")
assert node2.id == node.id
assert node2.type == node.type
assert node2.position.x == node.position.x
print(" ✓ Node serialization works")
def test_workflow_creation():
"""Test workflow creation"""
print("\nTesting workflow creation...")
now = datetime.now()
# Create nodes
node1 = VisualNode(
id='node-1',
type='click',
position=Position(x=100, y=200),
size=Size(width=200, height=100),
parameters={},
input_ports=[],
output_ports=[]
)
node2 = VisualNode(
id='node-2',
type='type',
position=Position(x=300, y=200),
size=Size(width=200, height=100),
parameters={},
input_ports=[],
output_ports=[]
)
# Create edge
edge = VisualEdge(
id='edge-1',
source='node-1',
target='node-2',
source_port='out',
target_port='in'
)
# Create workflow
workflow = VisualWorkflow(
id='wf-1',
name='Test Workflow',
description='A test workflow',
version='1.0.0',
created_at=now,
updated_at=now,
created_by='test_user',
nodes=[node1, node2],
edges=[edge],
variables=[],
settings=WorkflowSettings()
)
print(f" Created workflow: {workflow.name}")
print(f" Nodes: {len(workflow.nodes)}")
print(f" Edges: {len(workflow.edges)}")
print(" ✓ Workflow creation works")
def test_workflow_validation():
"""Test workflow validation"""
print("\nTesting workflow validation...")
now = datetime.now()
# Valid workflow
node1 = VisualNode(
id='node-1',
type='click',
position=Position(x=100, y=200),
size=Size(width=200, height=100),
parameters={},
input_ports=[],
output_ports=[]
)
node2 = VisualNode(
id='node-2',
type='type',
position=Position(x=300, y=200),
size=Size(width=200, height=100),
parameters={},
input_ports=[],
output_ports=[]
)
edge = VisualEdge(
id='edge-1',
source='node-1',
target='node-2',
source_port='out',
target_port='in'
)
workflow = VisualWorkflow(
id='wf-1',
name='Test Workflow',
description='Test',
version='1.0.0',
created_at=now,
updated_at=now,
created_by='user',
nodes=[node1, node2],
edges=[edge],
variables=[],
settings=WorkflowSettings()
)
errors = workflow.validate()
print(f" Valid workflow errors: {len(errors)}")
assert len(errors) == 0
print(" ✓ Valid workflow passes validation")
# Invalid workflow - missing fields
workflow2 = VisualWorkflow(
id='',
name='',
description='Test',
version='',
created_at=now,
updated_at=now,
created_by='user',
nodes=[],
edges=[],
variables=[],
settings=WorkflowSettings()
)
errors2 = workflow2.validate()
print(f" Invalid workflow errors: {len(errors2)}")
assert len(errors2) == 3
print(" ✓ Invalid workflow fails validation")
# Invalid workflow - bad edge
edge_bad = VisualEdge(
id='edge-1',
source='node-1',
target='node-999', # Non-existent
source_port='out',
target_port='in'
)
workflow3 = VisualWorkflow(
id='wf-1',
name='Test',
description='Test',
version='1.0.0',
created_at=now,
updated_at=now,
created_by='user',
nodes=[node1],
edges=[edge_bad],
variables=[],
settings=WorkflowSettings()
)
errors3 = workflow3.validate()
print(f" Bad edge workflow errors: {len(errors3)}")
assert len(errors3) == 1
assert 'non-existent' in errors3[0]
print(" ✓ Bad edge detected")
def test_workflow_serialization():
"""Test complete workflow serialization"""
print("\nTesting workflow serialization...")
now = datetime.now()
node = VisualNode(
id='node-1',
type='click',
position=Position(x=100, y=200),
size=Size(width=200, height=100),
parameters={'target': 'button'},
input_ports=[],
output_ports=[]
)
edge = VisualEdge(
id='edge-1',
source='node-1',
target='node-2',
source_port='out',
target_port='in'
)
var = Variable(name='test', type='string', value='value')
workflow = VisualWorkflow(
id='wf-1',
name='Test Workflow',
description='Test',
version='1.0.0',
created_at=now,
updated_at=now,
created_by='user',
nodes=[node],
edges=[edge],
variables=[var],
settings=WorkflowSettings(),
tags=['test', 'demo']
)
# Serialize
data = workflow.to_dict()
print(f" Serialized workflow: {data['name']}")
print(f" Nodes: {len(data['nodes'])}")
print(f" Edges: {len(data['edges'])}")
print(f" Variables: {len(data['variables'])}")
print(f" Tags: {data['tags']}")
# Deserialize
workflow2 = VisualWorkflow.from_dict(data)
print(f" Deserialized workflow: {workflow2.name}")
assert workflow2.id == workflow.id
assert workflow2.name == workflow.name
assert len(workflow2.nodes) == len(workflow.nodes)
assert len(workflow2.edges) == len(workflow.edges)
assert len(workflow2.variables) == len(workflow.variables)
assert workflow2.tags == workflow.tags
print(" ✓ Workflow serialization works")
if __name__ == '__main__':
print("=" * 60)
print("Visual Workflow Builder - Models Test")
print("=" * 60)
try:
test_basic_serialization()
test_workflow_creation()
test_workflow_validation()
test_workflow_serialization()
print("\n" + "=" * 60)
print("✓ All tests passed!")
print("=" * 60)
except Exception as e:
print(f"\n✗ Test failed: {e}")
import traceback
traceback.print_exc()
sys.exit(1)


@@ -0,0 +1,387 @@
#!/usr/bin/env python3
"""
Test Save As Template Feature
Tests the save as template functionality including parameter extraction
and template creation from existing workflows.
"""
import json
import os
import tempfile
import unittest
from datetime import datetime
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from models.template import WorkflowTemplate, TemplateParameter, TemplateDifficulty
from models.visual_workflow import VisualWorkflow, VisualNode, VisualEdge, Position, Size, Port, WorkflowSettings, ParameterType
from services.template_service import TemplateService
class TestSaveAsTemplate(unittest.TestCase):
"""Test save as template functionality"""
def setUp(self):
"""Set up test environment"""
self.temp_dir = tempfile.mkdtemp()
self.service = TemplateService(data_dir=self.temp_dir)
# Create a sample workflow to convert to template
self.sample_workflow = VisualWorkflow(
id="sample_workflow",
name="Sample Workflow",
description="A sample workflow for testing",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="test_user",
nodes=[
VisualNode(
id="navigate_node",
type="navigate",
position=Position(100, 100),
size=Size(200, 100),
parameters={
"url": "https://example.com",
"wait_for_load": True,
"timeout": 10000
},
input_ports=[],
output_ports=[Port("out", "output", "output")],
label="Navigate to Website"
),
VisualNode(
id="type_node",
type="type",
position=Position(350, 100),
size=Size(200, 100),
parameters={
"target": "input[name='username']",
"text": "testuser",
"clear_first": True
},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")],
label="Enter Username"
),
VisualNode(
id="click_node",
type="click",
position=Position(600, 100),
size=Size(200, 100),
parameters={
"target": "button[type='submit']",
"timeout": 5000
},
input_ports=[Port("in", "input", "input")],
output_ports=[],
label="Click Submit"
)
],
edges=[
VisualEdge("edge1", "navigate_node", "type_node", "out", "in"),
VisualEdge("edge2", "type_node", "click_node", "out", "in")
],
variables=[],
settings=WorkflowSettings(),
tags=["login", "web", "automation"]
)
def tearDown(self):
"""Clean up test environment"""
import shutil
shutil.rmtree(self.temp_dir, ignore_errors=True)
def test_create_template_from_workflow_basic(self):
"""Test basic template creation from workflow"""
# Create template from workflow
template = self.service.create_template_from_workflow(
self.sample_workflow,
"Login Template",
"Template for login automation",
"Web Automation"
)
# Verify template was created
self.assertIsNotNone(template.id)
self.assertEqual(template.name, "Login Template")
self.assertEqual(template.description, "Template for login automation")
self.assertEqual(template.category, "Web Automation")
self.assertTrue(template.workflow.is_template)
# Verify workflow structure is preserved
self.assertEqual(len(template.workflow.nodes), 3)
self.assertEqual(len(template.workflow.edges), 2)
self.assertEqual(template.workflow.tags, ["login", "web", "automation"])
def test_create_template_with_parameters(self):
"""Test template creation with custom parameters"""
# Define custom parameters
parameters = [
{
'name': 'website_url',
'type': 'string',
'description': 'URL of the website to navigate to',
'node_id': 'navigate_node',
'parameter_name': 'url',
'label': 'Website URL',
'required': True
},
{
'name': 'username_selector',
'type': 'target',
'description': 'CSS selector for username field',
'node_id': 'type_node',
'parameter_name': 'target',
'label': 'Username Field',
'required': True
},
{
'name': 'username_value',
'type': 'string',
'description': 'Username to enter',
'node_id': 'type_node',
'parameter_name': 'text',
'label': 'Username',
'required': True
},
{
'name': 'submit_button',
'type': 'target',
'description': 'CSS selector for submit button',
'node_id': 'click_node',
'parameter_name': 'target',
'label': 'Submit Button',
'required': True
}
]
# Create template with parameters
template = self.service.create_template_from_workflow(
self.sample_workflow,
"Configurable Login Template",
"Login template with configurable parameters",
"Web Automation",
parameters
)
# Verify parameters were added
self.assertEqual(len(template.parameters), 4)
# Check parameter details
url_param = next((p for p in template.parameters if p.name == 'website_url'), None)
self.assertIsNotNone(url_param)
self.assertEqual(url_param.type, ParameterType.STRING)
self.assertEqual(url_param.node_id, 'navigate_node')
self.assertEqual(url_param.parameter_name, 'url')
self.assertTrue(url_param.required)
username_selector_param = next((p for p in template.parameters if p.name == 'username_selector'), None)
self.assertIsNotNone(username_selector_param)
self.assertEqual(username_selector_param.type, ParameterType.TARGET)
def test_template_instantiation_with_parameters(self):
"""Test that templates with parameters can be instantiated correctly"""
# Create template with parameters
parameters = [
{
'name': 'site_url',
'type': 'string',
'description': 'Website URL',
'node_id': 'navigate_node',
'parameter_name': 'url',
'label': 'Site URL',
'required': True
},
{
'name': 'user_field',
'type': 'target',
'description': 'Username field selector',
'node_id': 'type_node',
'parameter_name': 'target',
'label': 'Username Field',
'required': True
}
]
template = self.service.create_template_from_workflow(
self.sample_workflow,
"Test Template",
"Test template for instantiation",
"Testing",
parameters
)
# Instantiate template with parameter values
param_values = {
'site_url': 'https://mysite.com',
'user_field': '#username'
}
workflow = self.service.instantiate_template(
template.id,
param_values,
"My Custom Workflow",
"test_user"
)
# Verify workflow was created
self.assertIsNotNone(workflow)
self.assertEqual(workflow.name, "My Custom Workflow")
self.assertEqual(workflow.created_by, "test_user")
self.assertFalse(workflow.is_template)
# Verify parameter substitution
navigate_node = next((n for n in workflow.nodes if n.type == 'navigate'), None)
self.assertIsNotNone(navigate_node)
self.assertEqual(navigate_node.parameters['url'], 'https://mysite.com')
type_node = next((n for n in workflow.nodes if n.type == 'type'), None)
self.assertIsNotNone(type_node)
self.assertEqual(type_node.parameters['target'], '#username')
def test_template_persistence(self):
"""Test that templates are properly saved and can be retrieved"""
# Create template
template = self.service.create_template_from_workflow(
self.sample_workflow,
"Persistent Template",
"Template to test persistence",
"Testing"
)
template_id = template.id
# Create new service instance (simulates restart)
new_service = TemplateService(data_dir=self.temp_dir)
# Retrieve template
retrieved_template = new_service.get_template(template_id)
# Verify template was persisted correctly
self.assertIsNotNone(retrieved_template)
self.assertEqual(retrieved_template.name, "Persistent Template")
self.assertEqual(retrieved_template.description, "Template to test persistence")
self.assertEqual(retrieved_template.category, "Testing")
self.assertEqual(len(retrieved_template.workflow.nodes), 3)
def test_template_validation(self):
"""Test template validation during creation"""
# Test with invalid workflow (no nodes)
empty_workflow = VisualWorkflow(
id="empty",
name="Empty Workflow",
description="Empty",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="test",
nodes=[],
edges=[],
variables=[],
settings=WorkflowSettings()
)
# Should still create template (empty workflows are valid)
template = self.service.create_template_from_workflow(
empty_workflow,
"Empty Template",
"Template from empty workflow",
"Testing"
)
self.assertIsNotNone(template)
self.assertEqual(len(template.workflow.nodes), 0)
def test_parameter_extraction_logic(self):
"""Test the logic for extracting configurable parameters"""
# This would test the frontend logic for automatic parameter detection
# For now, we test that parameters can be properly configured
# Create template with various parameter types
parameters = [
{
'name': 'string_param',
'type': 'string',
'description': 'A string parameter',
'node_id': 'navigate_node',
'parameter_name': 'url',
'label': 'String Param',
'required': True,
'default_value': 'default_value'
},
{
'name': 'target_param',
'type': 'target',
'description': 'A target selector parameter',
'node_id': 'type_node',
'parameter_name': 'target',
'label': 'Target Param',
'required': False,
'default_value': 'input.default'
},
{
'name': 'number_param',
'type': 'number',
'description': 'A number parameter',
'node_id': 'click_node',
'parameter_name': 'timeout',
'label': 'Number Param',
'required': False,
'default_value': 5000
}
]
template = self.service.create_template_from_workflow(
self.sample_workflow,
"Multi-Parameter Template",
"Template with various parameter types",
"Testing",
parameters
)
# Verify all parameter types are handled correctly
self.assertEqual(len(template.parameters), 3)
string_param = next((p for p in template.parameters if p.name == 'string_param'), None)
self.assertEqual(string_param.type, ParameterType.STRING)
self.assertEqual(string_param.default_value, 'default_value')
target_param = next((p for p in template.parameters if p.name == 'target_param'), None)
self.assertEqual(target_param.type, ParameterType.TARGET)
self.assertFalse(target_param.required)
number_param = next((p for p in template.parameters if p.name == 'number_param'), None)
self.assertEqual(number_param.type, ParameterType.NUMBER)
self.assertEqual(number_param.default_value, 5000)
def run_tests():
"""Run save as template tests"""
print("🧪 Running Save As Template Tests...")
# Create test suite
suite = unittest.TestSuite()
# Add test cases
loader = unittest.TestLoader()
suite.addTest(loader.loadTestsFromTestCase(TestSaveAsTemplate))
# Run tests
runner = unittest.TextTestRunner(verbosity=2)
result = runner.run(suite)
# Print summary
if result.wasSuccessful():
print("✅ All Save As Template tests passed!")
return True
else:
print(f"{len(result.failures)} test(s) failed, {len(result.errors)} error(s)")
return False
if __name__ == '__main__':
success = run_tests()
exit(0 if success else 1)
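The tests above exercise template instantiation by substituting user-supplied values into node parameters. The pattern can be sketched in isolation as follows — a minimal, self-contained illustration; the function name `substitute_parameters` and the dict-based parameter shape are assumptions for the example, not the real `TemplateService` API:

```python
# Illustrative sketch of template parameter substitution; the real
# TemplateService operates on dataclasses and may differ in detail.
def substitute_parameters(nodes, param_defs, values):
    """Apply template parameter values to node parameter dicts.

    param_defs: list of {'name', 'node_id', 'parameter_name', 'default_value'?}
    values: mapping of parameter name -> user-supplied value
    """
    # Copy nodes (and their parameter dicts) so the template stays untouched
    nodes = [dict(n, parameters=dict(n["parameters"])) for n in nodes]
    by_id = {n["id"]: n for n in nodes}
    for p in param_defs:
        node = by_id.get(p["node_id"])
        if node is None:
            continue  # parameter points at a missing node; skip it
        # Fall back to the declared default when no value is provided
        value = values.get(p["name"], p.get("default_value"))
        if value is not None:
            node["parameters"][p["parameter_name"]] = value
    return nodes

nodes = [{"id": "type_node", "parameters": {"target": "{{user_field}}", "text": ""}}]
params = [{"name": "user_field", "node_id": "type_node", "parameter_name": "target"}]
result = substitute_parameters(nodes, params, {"user_field": "#username"})
# result[0]["parameters"]["target"] is now "#username", while the original
# template nodes still carry the "{{user_field}}" placeholder.
```

Copying the nodes before substitution is what lets a template be instantiated repeatedly, which is exactly what `test_template_instantiation_with_parameters` relies on.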


@@ -0,0 +1,312 @@
#!/usr/bin/env python3
"""
Serialization test - Visual Workflow Builder
Manual test of the complete serialization system.
"""
import sys
import json
from pathlib import Path
sys.path.insert(0, '.')
from services.serialization import (
WorkflowSerializer,
WorkflowDatabase,
SerializationError,
ValidationError,
create_empty_workflow
)
from models.visual_workflow import (
VisualNode,
VisualEdge,
Variable,
Position,
Size,
Port
)
def test_id_generation():
"""Test de génération d'ID unique"""
print("🧪 Test 1: Génération d'ID unique")
ids = set()
for _ in range(100):
wf_id = WorkflowSerializer.generate_workflow_id()
assert wf_id not in ids, "Duplicate ID detected!"
ids.add(wf_id)
assert wf_id.startswith('wf_'), f"Invalid ID format: {wf_id}"
print("✅ 100 unique IDs generated successfully")
def test_empty_workflow_creation():
"""Test de création de workflow vide"""
print("\n🧪 Test 2: Création de workflow vide")
workflow = create_empty_workflow(
name="Test Workflow",
description="Un workflow de test",
created_by="test_user"
)
assert workflow.id.startswith('wf_'), "ID invalide"
assert workflow.name == "Test Workflow"
assert workflow.description == "Un workflow de test"
assert workflow.created_by == "test_user"
assert len(workflow.nodes) == 0
assert len(workflow.edges) == 0
assert len(workflow.variables) == 0
print(f"✅ Workflow créé: {workflow.id}")
def test_serialization_json():
"""Test de sérialisation JSON"""
print("\n🧪 Test 3: Sérialisation JSON")
workflow = create_empty_workflow("Test JSON")
# Ajouter un node
workflow.nodes.append(VisualNode(
id="click_1",
type="click",
position=Position(100, 100),
size=Size(200, 80),
parameters={'target': 'button'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[Port('out', 'Output', 'output')]
))
# Serialize
json_str = WorkflowSerializer.serialize(workflow, format='json')
assert isinstance(json_str, str)
assert len(json_str) > 0
# Check that it is valid JSON
data = json.loads(json_str)
assert data['id'] == workflow.id
assert data['name'] == "Test JSON"
assert len(data['nodes']) == 1
assert '_serialization' in data
print(f"✅ Sérialisation JSON réussie ({len(json_str)} caractères)")
def test_deserialization_json():
"""Test de désérialisation JSON"""
print("\n🧪 Test 4: Désérialisation JSON")
# Créer et sérialiser
original = create_empty_workflow("Test Deserialize")
original.nodes.append(VisualNode(
id="type_1",
type="type",
position=Position(200, 200),
size=Size(200, 80),
parameters={'target': 'input', 'text': 'Hello'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[Port('out', 'Output', 'output')]
))
json_str = WorkflowSerializer.serialize(original, format='json')
# Deserialize
restored = WorkflowSerializer.deserialize(json_str, format='json')
assert restored.id == original.id
assert restored.name == original.name
assert len(restored.nodes) == len(original.nodes)
assert restored.nodes[0].id == "type_1"
assert restored.nodes[0].type == "type"
assert restored.nodes[0].parameters['text'] == 'Hello'
print("✅ Désérialisation JSON réussie")
def test_round_trip():
"""Test de round-trip (sérialisation + désérialisation)"""
print("\n🧪 Test 5: Round-trip complet")
# Créer un workflow complexe
workflow = create_empty_workflow("Complex Workflow")
# Ajouter des nodes
workflow.nodes.extend([
VisualNode(
id="click_1",
type="click",
position=Position(100, 100),
size=Size(200, 80),
parameters={'target': 'button'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[Port('out', 'Output', 'output')]
),
VisualNode(
id="type_1",
type="type",
position=Position(400, 100),
size=Size(200, 80),
parameters={'target': 'input', 'text': 'Test'},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[Port('out', 'Output', 'output')]
)
])
# Add an edge
workflow.edges.append(VisualEdge(
id="edge_1",
source="click_1",
target="type_1",
source_port="out",
target_port="in"
))
# Add a variable
workflow.variables.append(Variable(
name="username",
type="string",
value="test_user"
))
# Round-trip
json_str = WorkflowSerializer.serialize(workflow, format='json')
restored = WorkflowSerializer.deserialize(json_str, format='json')
# Checks
assert restored.id == workflow.id
assert len(restored.nodes) == 2
assert len(restored.edges) == 1
assert len(restored.variables) == 1
assert restored.variables[0].name == "username"
print("✅ Round-trip réussi avec 2 nodes, 1 edge, 1 variable")
def test_validation_errors():
"""Test de détection d'erreurs de validation"""
print("\n🧪 Test 6: Validation d'erreurs")
workflow = create_empty_workflow("Invalid Workflow")
# Ajouter un edge avec des nodes inexistants
workflow.edges.append(VisualEdge(
id="edge_1",
source="nonexistent_1",
target="nonexistent_2",
source_port="out",
target_port="in"
))
# Validation should fail
errors = workflow.validate()
assert len(errors) > 0, "Errors should have been detected"
print(f"✅ {len(errors)} errors detected as expected:")
for error in errors:
print(f" - {error}")
def test_database_operations():
"""Test des opérations de base de données"""
print("\n🧪 Test 7: Opérations de base de données")
# Créer une DB temporaire
db = WorkflowDatabase("test_data/workflows")
# Créer et sauvegarder un workflow
workflow = create_empty_workflow("DB Test")
workflow.nodes.append(VisualNode(
id="wait_1",
type="wait",
position=Position(300, 300),
size=Size(200, 80),
parameters={'duration': 1000},
input_ports=[Port('in', 'Input', 'input')],
output_ports=[Port('out', 'Output', 'output')]
))
db.save(workflow)
print(f"✅ Workflow sauvegardé: {workflow.id}")
# Charger le workflow
loaded = db.load(workflow.id)
assert loaded is not None
assert loaded.id == workflow.id
assert len(loaded.nodes) == 1
print(f"✅ Workflow chargé: {loaded.id}")
# Lister tous les workflows
all_workflows = db.list_all()
assert len(all_workflows) >= 1
print(f"{len(all_workflows)} workflow(s) dans la DB")
# Supprimer le workflow
deleted = db.delete(workflow.id)
assert deleted is True
print(f"✅ Workflow deleted: {workflow.id}")
# Verify it no longer exists
assert db.load(workflow.id) is None
print("✅ Deletion check OK")
def test_file_persistence():
"""Test de persistance dans des fichiers"""
print("\n🧪 Test 8: Persistance fichier")
workflow = create_empty_workflow("File Test")
filepath = Path("test_data/test_workflow.json")
# Sauvegarder
WorkflowSerializer.save_to_file(workflow, filepath, format='json')
assert filepath.exists()
print(f"✅ Fichier créé: {filepath}")
# Charger
loaded = WorkflowSerializer.load_from_file(filepath, format='json')
assert loaded.id == workflow.id
print(f"✅ Fichier chargé: {loaded.id}")
# Nettoyer
filepath.unlink()
print("✅ Fichier nettoyé")
def main():
"""Exécute tous les tests"""
print("=" * 60)
print("🚀 Tests de Sérialisation - Visual Workflow Builder")
print("=" * 60)
try:
test_id_generation()
test_empty_workflow_creation()
test_serialization_json()
test_deserialization_json()
test_round_trip()
test_validation_errors()
test_database_operations()
test_file_persistence()
print("\n" + "=" * 60)
print("✅ TOUS LES TESTS RÉUSSIS!")
print("=" * 60)
return 0
except AssertionError as e:
print(f"\n❌ ÉCHEC: {e}")
return 1
except Exception as e:
print(f"\n❌ ERREUR: {e}")
import traceback
traceback.print_exc()
return 1
if __name__ == '__main__':
sys.exit(main())
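Tests 3 through 5 above exercise a JSON round-trip plus an embedded `_serialization` metadata block. The pattern can be sketched with a plain dict-backed workflow — the metadata shape and version field are assumptions for illustration; the real `WorkflowSerializer` operates on dataclasses:

```python
import json
from datetime import datetime, timezone

# Sketch of serialize/deserialize with a versioned metadata block, mirroring
# the '_serialization' key the tests assert on (its exact shape is assumed).
FORMAT_VERSION = "1.0"

def serialize(workflow: dict) -> str:
    payload = dict(workflow)
    payload["_serialization"] = {
        "format_version": FORMAT_VERSION,
        "serialized_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

def deserialize(json_str: str) -> dict:
    data = json.loads(json_str)
    # Strip the metadata so the round-trip restores the original payload
    meta = data.pop("_serialization", {})
    if meta.get("format_version") not in (None, FORMAT_VERSION):
        raise ValueError(f"Unsupported format version: {meta['format_version']}")
    return data

wf = {"id": "wf_123", "name": "Demo", "nodes": [], "edges": []}
restored = deserialize(serialize(wf))
# restored compares equal to wf: the metadata added on serialize is
# removed again on deserialize.
```

Embedding a format version this way is what makes a later migration path possible: old files can be detected and upgraded instead of failing to load.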


@@ -0,0 +1,539 @@
#!/usr/bin/env python3
"""
Test Templates System
Tests for the template service and API endpoints.
"""
import json
import os
import tempfile
import unittest
from datetime import datetime
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from models.template import WorkflowTemplate, TemplateParameter, TemplateDifficulty
from models.visual_workflow import VisualWorkflow, VisualNode, VisualEdge, Position, Size, Port, WorkflowSettings, ParameterType
from services.template_service import TemplateService
class TestTemplateModels(unittest.TestCase):
"""Test template data models"""
def setUp(self):
"""Set up test data"""
self.sample_workflow = VisualWorkflow(
id="test_workflow",
name="Test Workflow",
description="A test workflow",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="test_user",
nodes=[
VisualNode(
id="node1",
type="click",
position=Position(100, 100),
size=Size(150, 80),
parameters={"target": "{{button_selector}}"},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")]
)
],
edges=[],
variables=[],
settings=WorkflowSettings(),
is_template=True
)
def test_template_parameter_serialization(self):
"""Test template parameter to_dict and from_dict"""
param = TemplateParameter(
name="button_selector",
type=ParameterType.TARGET,
description="Button to click",
node_id="node1",
parameter_name="target",
label="Button"
)
# Test serialization
param_dict = param.to_dict()
self.assertEqual(param_dict['name'], "button_selector")
self.assertEqual(param_dict['type'], "target")
self.assertEqual(param_dict['node_id'], "node1")
# Test deserialization
param2 = TemplateParameter.from_dict(param_dict)
self.assertEqual(param2.name, param.name)
self.assertEqual(param2.type, param.type)
self.assertEqual(param2.node_id, param.node_id)
def test_workflow_template_serialization(self):
"""Test workflow template to_dict and from_dict"""
template = WorkflowTemplate(
id="test_template",
name="Test Template",
description="A test template",
category="Test",
workflow=self.sample_workflow,
parameters=[
TemplateParameter(
name="button_selector",
type=ParameterType.TARGET,
description="Button to click",
node_id="node1",
parameter_name="target"
)
],
tags=["test", "example"],
difficulty=TemplateDifficulty.BEGINNER
)
# Test serialization
template_dict = template.to_dict()
self.assertEqual(template_dict['id'], "test_template")
self.assertEqual(template_dict['name'], "Test Template")
self.assertEqual(len(template_dict['parameters']), 1)
self.assertIn('workflow', template_dict)
# Test deserialization
template2 = WorkflowTemplate.from_dict(template_dict)
self.assertEqual(template2.id, template.id)
self.assertEqual(template2.name, template.name)
self.assertEqual(len(template2.parameters), 1)
self.assertEqual(template2.workflow.id, template.workflow.id)
def test_template_instantiation(self):
"""Test creating a workflow from a template"""
template = WorkflowTemplate(
id="test_template",
name="Test Template",
description="A test template",
category="Test",
workflow=self.sample_workflow,
parameters=[
TemplateParameter(
name="button_selector",
type=ParameterType.TARGET,
description="Button to click",
node_id="node1",
parameter_name="target"
)
]
)
# Instantiate template
parameters = {"button_selector": "button.submit"}
workflow = template.instantiate(parameters, "My Workflow", "test_user")
# Verify workflow
self.assertEqual(workflow.name, "My Workflow")
self.assertEqual(workflow.created_by, "test_user")
self.assertFalse(workflow.is_template)
self.assertNotEqual(workflow.id, template.workflow.id) # Should have new ID
# Verify parameter substitution
self.assertEqual(len(workflow.nodes), 1)
self.assertEqual(workflow.nodes[0].parameters["target"], "button.submit")
def test_template_validation(self):
"""Test template validation"""
# Valid template
template = WorkflowTemplate(
id="test_template",
name="Test Template",
description="A test template",
category="Test",
workflow=self.sample_workflow,
parameters=[]
)
errors = template.validate()
self.assertEqual(len(errors), 0)
# Invalid template - missing required fields
template.id = ""
template.name = ""
errors = template.validate()
self.assertGreater(len(errors), 0)
self.assertTrue(any("ID is required" in err for err in errors))
self.assertTrue(any("name is required" in err for err in errors))
class TestTemplateService(unittest.TestCase):
"""Test template service"""
def setUp(self):
"""Set up test environment"""
self.temp_dir = tempfile.mkdtemp()
self.service = TemplateService(data_dir=self.temp_dir)
# Create sample workflow
self.sample_workflow = VisualWorkflow(
id="test_workflow",
name="Test Workflow",
description="A test workflow",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="test_user",
nodes=[
VisualNode(
id="node1",
type="click",
position=Position(100, 100),
size=Size(150, 80),
parameters={"target": "{{button_selector}}"},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")]
)
],
edges=[],
variables=[],
settings=WorkflowSettings(),
is_template=True
)
def tearDown(self):
"""Clean up test environment"""
import shutil
shutil.rmtree(self.temp_dir, ignore_errors=True)
def test_create_and_get_template(self):
"""Test creating and retrieving a template"""
template_data = {
'name': 'Test Template',
'description': 'A test template',
'category': 'Test',
'workflow': self.sample_workflow.to_dict(),
'parameters': [
{
'name': 'button_selector',
'type': 'target',
'description': 'Button to click',
'node_id': 'node1',
'parameter_name': 'target',
'required': True
}
],
'tags': ['test'],
'difficulty': 'beginner'
}
# Create template
template = self.service.create_template(template_data)
self.assertIsNotNone(template.id)
self.assertEqual(template.name, 'Test Template')
# Retrieve template
retrieved = self.service.get_template(template.id)
self.assertIsNotNone(retrieved)
self.assertEqual(retrieved.name, template.name)
self.assertEqual(len(retrieved.parameters), 1)
def test_list_templates(self):
"""Test listing templates"""
# Should have default templates
templates = self.service.list_templates()
self.assertGreater(len(templates), 0)
# Test filtering by category
web_templates = self.service.list_templates(category="Web Automation")
self.assertGreater(len(web_templates), 0)
for template in web_templates:
self.assertEqual(template.category, "Web Automation")
# Test filtering by difficulty
beginner_templates = self.service.list_templates(difficulty="beginner")
self.assertGreater(len(beginner_templates), 0)
for template in beginner_templates:
self.assertEqual(template.difficulty, TemplateDifficulty.BEGINNER)
def test_template_instantiation(self):
"""Test instantiating a template"""
# Get a default template
templates = self.service.list_templates()
self.assertGreater(len(templates), 0)
template = templates[0]
# Create parameters for instantiation
parameters = {}
for param in template.parameters:
if param.type == ParameterType.STRING:
parameters[param.name] = f"test_{param.name}"
elif param.type == ParameterType.TARGET:
parameters[param.name] = f"#{param.name}"
# Instantiate template
workflow = self.service.instantiate_template(
template.id, parameters, "Test Workflow", "test_user"
)
self.assertIsNotNone(workflow)
self.assertEqual(workflow.name, "Test Workflow")
self.assertEqual(workflow.created_by, "test_user")
self.assertFalse(workflow.is_template)
def test_create_template_from_workflow(self):
"""Test creating a template from an existing workflow"""
# Create a workflow
workflow = VisualWorkflow(
id="source_workflow",
name="Source Workflow",
description="Source for template",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="test_user",
nodes=[
VisualNode(
id="node1",
type="type",
position=Position(100, 100),
size=Size(150, 80),
parameters={"target": "input.username", "text": "testuser"},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")]
)
],
edges=[],
variables=[],
settings=WorkflowSettings(),
tags=["login", "test"]
)
# Define template parameters
parameters = [
{
'name': 'username',
'type': 'string',
'description': 'Username to enter',
'node_id': 'node1',
'parameter_name': 'text',
'required': True
}
]
# Create template from workflow
template = self.service.create_template_from_workflow(
workflow, "Login Template", "Template for login", "Authentication", parameters
)
self.assertIsNotNone(template.id)
self.assertEqual(template.name, "Login Template")
self.assertEqual(template.category, "Authentication")
self.assertEqual(len(template.parameters), 1)
self.assertTrue(template.workflow.is_template)
def test_update_template(self):
"""Test updating a template"""
# Create a template first
template_data = {
'name': 'Original Template',
'description': 'Original description',
'category': 'Test',
'workflow': self.sample_workflow.to_dict(),
'parameters': [],
'tags': ['original']
}
template = self.service.create_template(template_data)
original_id = template.id
# Update the template
updated_data = template_data.copy()
updated_data['name'] = 'Updated Template'
updated_data['description'] = 'Updated description'
updated_data['tags'] = ['updated']
updated_template = self.service.update_template(original_id, updated_data)
self.assertIsNotNone(updated_template)
self.assertEqual(updated_template.id, original_id)
self.assertEqual(updated_template.name, 'Updated Template')
self.assertEqual(updated_template.description, 'Updated description')
self.assertEqual(updated_template.tags, ['updated'])
def test_delete_template(self):
"""Test deleting a template"""
# Create a template first
template_data = {
'name': 'Template to Delete',
'description': 'Will be deleted',
'category': 'Test',
'workflow': self.sample_workflow.to_dict(),
'parameters': []
}
template = self.service.create_template(template_data)
template_id = template.id
# Verify it exists
self.assertIsNotNone(self.service.get_template(template_id))
# Delete it
success = self.service.delete_template(template_id)
self.assertTrue(success)
# Verify it's gone
self.assertIsNone(self.service.get_template(template_id))
# Try to delete non-existent template
success = self.service.delete_template("non_existent")
self.assertFalse(success)
class TestTemplateAPI(unittest.TestCase):
"""Test template API endpoints"""
def setUp(self):
"""Set up test Flask app"""
import tempfile
from app import app
self.temp_dir = tempfile.mkdtemp()
app.config['TESTING'] = True
app.config['TEMPLATE_DATA_DIR'] = self.temp_dir
self.client = app.test_client()
self.app_context = app.app_context()
self.app_context.push()
def tearDown(self):
"""Clean up test environment"""
import shutil
self.app_context.pop()
shutil.rmtree(self.temp_dir, ignore_errors=True)
def test_list_templates_endpoint(self):
"""Test GET /api/templates/"""
response = self.client.get('/api/templates/')
self.assertEqual(response.status_code, 200)
data = json.loads(response.data)
self.assertIn('templates', data)
self.assertIn('count', data)
self.assertIsInstance(data['templates'], list)
def test_get_template_endpoint(self):
"""Test GET /api/templates/<id>"""
# First get list of templates
response = self.client.get('/api/templates/')
data = json.loads(response.data)
if data['templates']:
template_id = data['templates'][0]['id']
# Get specific template
response = self.client.get(f'/api/templates/{template_id}')
self.assertEqual(response.status_code, 200)
template_data = json.loads(response.data)
self.assertEqual(template_data['id'], template_id)
self.assertIn('workflow', template_data)
# Test non-existent template
response = self.client.get('/api/templates/non_existent')
self.assertEqual(response.status_code, 404)
def test_create_template_endpoint(self):
"""Test POST /api/templates/"""
template_data = {
'name': 'API Test Template',
'description': 'Created via API',
'category': 'Test',
'workflow': {
'id': 'test_workflow',
'name': 'Test Workflow',
'description': 'Test',
'version': '1.0.0',
'created_at': datetime.now().isoformat(),
'updated_at': datetime.now().isoformat(),
'created_by': 'test',
'nodes': [],
'edges': [],
'variables': [],
'settings': {
'timeout': 300000,
'retry_on_failure': True,
'max_retries': 3,
'enable_self_healing': True,
'enable_analytics': True
},
'tags': [],
'is_template': True
},
'parameters': [],
'tags': ['api', 'test']
}
response = self.client.post('/api/templates/',
data=json.dumps(template_data),
content_type='application/json')
self.assertEqual(response.status_code, 201)
created_template = json.loads(response.data)
self.assertEqual(created_template['name'], 'API Test Template')
self.assertIn('id', created_template)
def test_template_filtering(self):
"""Test template filtering by category and difficulty"""
# Test category filtering
response = self.client.get('/api/templates/?category=Web Automation')
self.assertEqual(response.status_code, 200)
data = json.loads(response.data)
for template in data['templates']:
self.assertEqual(template['category'], 'Web Automation')
# Test difficulty filtering
response = self.client.get('/api/templates/?difficulty=beginner')
self.assertEqual(response.status_code, 200)
data = json.loads(response.data)
for template in data['templates']:
self.assertEqual(template['difficulty'], 'beginner')
def test_get_template_categories(self):
"""Test GET /api/templates/categories"""
response = self.client.get('/api/templates/categories')
self.assertEqual(response.status_code, 200)
data = json.loads(response.data)
self.assertIn('categories', data)
self.assertIsInstance(data['categories'], list)
def run_tests():
"""Run all template tests"""
print("🧪 Running Template System Tests...")
# Create test suite
suite = unittest.TestSuite()
# Add test cases
loader = unittest.TestLoader()
suite.addTest(loader.loadTestsFromTestCase(TestTemplateModels))
suite.addTest(loader.loadTestsFromTestCase(TestTemplateService))
suite.addTest(loader.loadTestsFromTestCase(TestTemplateAPI))
# Run tests
runner = unittest.TextTestRunner(verbosity=2)
result = runner.run(suite)
# Print summary
if result.wasSuccessful():
print("✅ All template tests passed!")
return True
else:
print(f"{len(result.failures)} test(s) failed, {len(result.errors)} error(s)")
return False
if __name__ == '__main__':
success = run_tests()
exit(0 if success else 1)
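`test_template_validation` above expects `validate()` to accumulate human-readable error strings rather than raise on the first problem. That accumulate-and-return style, reduced to a self-contained sketch (field names mirror the test assertions; the real `WorkflowTemplate` model has more checks):

```python
# Sketch of the accumulate-and-return validation style the tests expect;
# the real model validates more fields (workflow, parameters, etc.).
def validate_template(template: dict) -> list:
    errors = []
    if not template.get("id"):
        errors.append("Template ID is required")
    if not template.get("name"):
        errors.append("Template name is required")
    if not template.get("category"):
        errors.append("Template category is required")
    return errors

# A complete template yields no errors...
assert validate_template({"id": "t1", "name": "T", "category": "Test"}) == []
# ...while a template with blank id and name yields one message per problem.
errs = validate_template({"id": "", "name": "", "category": "Test"})
```

Returning a list instead of raising lets an API endpoint report every problem in one 400 response, which is friendlier for a visual editor than fix-one-resubmit loops.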


@@ -0,0 +1,113 @@
#!/usr/bin/env python3
"""
Manual API Test for Templates
Test the templates API endpoints manually.
"""
import requests
import json
# Configuration
BASE_URL = "http://localhost:5000"
API_URL = f"{BASE_URL}/api"
def test_templates_api():
"""Test all template API endpoints"""
print("🧪 Testing Templates API...")
try:
# Test 1: List templates
print("\n1. Testing GET /api/templates/")
response = requests.get(f"{API_URL}/templates/")
print(f"Status: {response.status_code}")
if response.status_code == 200:
data = response.json()
print(f"Found {data['count']} templates")
if data['templates']:
print(f"First template: {data['templates'][0]['name']}")
template_id = data['templates'][0]['id']
# Test 2: Get specific template
print(f"\n2. Testing GET /api/templates/{template_id}")
response = requests.get(f"{API_URL}/templates/{template_id}")
print(f"Status: {response.status_code}")
if response.status_code == 200:
template = response.json()
print(f"Template: {template['name']}")
print(f"Parameters: {len(template.get('parameters', []))}")
# Test 3: Instantiate template
print(f"\n3. Testing POST /api/templates/{template_id}/instantiate")
# Create parameters for the template
parameters = {}
for param in template.get('parameters', []):
if param['type'] == 'string':
parameters[param['name']] = f"test_{param['name']}"
elif param['type'] == 'target':
parameters[param['name']] = f"#{param['name']}_selector"
instantiate_data = {
'name': 'Test Workflow from Template',
'parameters': parameters,
'created_by': 'test_user'
}
response = requests.post(
f"{API_URL}/templates/{template_id}/instantiate",
json=instantiate_data,
headers={'Content-Type': 'application/json'}
)
print(f"Status: {response.status_code}")
if response.status_code == 201:
result = response.json()
print(f"Created workflow: {result.get('workflow_id')}")
else:
print(f"Error: {response.text}")
else:
print(f"Error getting template: {response.text}")
else:
print(f"Error listing templates: {response.text}")
# Test 4: Get categories
print(f"\n4. Testing GET /api/templates/categories")
response = requests.get(f"{API_URL}/templates/categories")
print(f"Status: {response.status_code}")
if response.status_code == 200:
data = response.json()
print(f"Categories: {data['categories']}")
else:
print(f"Error: {response.text}")
# Test 5: Filter by category
print(f"\n5. Testing GET /api/templates/?category=Web Automation")
response = requests.get(f"{API_URL}/templates/?category=Web Automation")
print(f"Status: {response.status_code}")
if response.status_code == 200:
data = response.json()
print(f"Web Automation templates: {data['count']}")
else:
print(f"Error: {response.text}")
print("\n✅ Template API tests completed!")
except requests.exceptions.ConnectionError:
print("❌ Could not connect to server. Make sure the backend is running on http://localhost:5000")
return False
except Exception as e:
print(f"❌ Error during testing: {e}")
return False
return True
if __name__ == '__main__':
success = test_templates_api()
exit(0 if success else 1)
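The parameter-building loop in the instantiation test above can be isolated into a small helper; a minimal sketch (the `build_test_parameters` name is hypothetical, not part of the codebase):

```python
def build_test_parameters(template_parameters: list) -> dict:
    """Generate placeholder values per parameter type, mirroring the test's loop."""
    values = {}
    for param in template_parameters:
        if param['type'] == 'string':
            values[param['name']] = f"test_{param['name']}"
        elif param['type'] == 'target':
            values[param['name']] = f"#{param['name']}_selector"
    return values

params = build_test_parameters([
    {'name': 'url', 'type': 'string'},
    {'name': 'submit', 'type': 'target'},
])
print(params)  # → {'url': 'test_url', 'submit': '#submit_selector'}
```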


@@ -0,0 +1,287 @@
#!/usr/bin/env python3
"""
Test Templates System - Simple Version
Tests for the template service and models only (no API tests).
"""
import json
import os
import tempfile
import unittest
from datetime import datetime
import sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from models.template import WorkflowTemplate, TemplateParameter, TemplateDifficulty
from models.visual_workflow import VisualWorkflow, VisualNode, VisualEdge, Position, Size, Port, WorkflowSettings, ParameterType
from services.template_service import TemplateService
class TestTemplateModels(unittest.TestCase):
"""Test template data models"""
def setUp(self):
"""Set up test data"""
self.sample_workflow = VisualWorkflow(
id="test_workflow",
name="Test Workflow",
description="A test workflow",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="test_user",
nodes=[
VisualNode(
id="node1",
type="click",
position=Position(100, 100),
size=Size(150, 80),
parameters={"target": "{{button_selector}}"},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")]
)
],
edges=[],
variables=[],
settings=WorkflowSettings(),
is_template=True
)
def test_template_parameter_serialization(self):
"""Test template parameter to_dict and from_dict"""
param = TemplateParameter(
name="button_selector",
type=ParameterType.TARGET,
description="Button to click",
node_id="node1",
parameter_name="target",
label="Button"
)
# Test serialization
param_dict = param.to_dict()
self.assertEqual(param_dict['name'], "button_selector")
self.assertEqual(param_dict['type'], "target")
self.assertEqual(param_dict['node_id'], "node1")
# Test deserialization
param2 = TemplateParameter.from_dict(param_dict)
self.assertEqual(param2.name, param.name)
self.assertEqual(param2.type, param.type)
self.assertEqual(param2.node_id, param.node_id)
def test_workflow_template_serialization(self):
"""Test workflow template to_dict and from_dict"""
template = WorkflowTemplate(
id="test_template",
name="Test Template",
description="A test template",
category="Test",
workflow=self.sample_workflow,
parameters=[
TemplateParameter(
name="button_selector",
type=ParameterType.TARGET,
description="Button to click",
node_id="node1",
parameter_name="target"
)
],
tags=["test", "example"],
difficulty=TemplateDifficulty.BEGINNER
)
# Test serialization
template_dict = template.to_dict()
self.assertEqual(template_dict['id'], "test_template")
self.assertEqual(template_dict['name'], "Test Template")
self.assertEqual(len(template_dict['parameters']), 1)
self.assertIn('workflow', template_dict)
# Test deserialization
template2 = WorkflowTemplate.from_dict(template_dict)
self.assertEqual(template2.id, template.id)
self.assertEqual(template2.name, template.name)
self.assertEqual(len(template2.parameters), 1)
self.assertEqual(template2.workflow.id, template.workflow.id)
def test_template_instantiation(self):
"""Test creating a workflow from a template"""
template = WorkflowTemplate(
id="test_template",
name="Test Template",
description="A test template",
category="Test",
workflow=self.sample_workflow,
parameters=[
TemplateParameter(
name="button_selector",
type=ParameterType.TARGET,
description="Button to click",
node_id="node1",
parameter_name="target"
)
]
)
# Instantiate template
parameters = {"button_selector": "button.submit"}
workflow = template.instantiate(parameters, "My Workflow", "test_user")
# Verify workflow
self.assertEqual(workflow.name, "My Workflow")
self.assertEqual(workflow.created_by, "test_user")
self.assertFalse(workflow.is_template)
self.assertNotEqual(workflow.id, template.workflow.id) # Should have new ID
# Verify parameter substitution
self.assertEqual(len(workflow.nodes), 1)
self.assertEqual(workflow.nodes[0].parameters["target"], "button.submit")
class TestTemplateService(unittest.TestCase):
"""Test template service"""
def setUp(self):
"""Set up test environment"""
self.temp_dir = tempfile.mkdtemp()
self.service = TemplateService(data_dir=self.temp_dir)
# Create sample workflow
self.sample_workflow = VisualWorkflow(
id="test_workflow",
name="Test Workflow",
description="A test workflow",
version="1.0.0",
created_at=datetime.now(),
updated_at=datetime.now(),
created_by="test_user",
nodes=[
VisualNode(
id="node1",
type="click",
position=Position(100, 100),
size=Size(150, 80),
parameters={"target": "{{button_selector}}"},
input_ports=[Port("in", "input", "input")],
output_ports=[Port("out", "output", "output")]
)
],
edges=[],
variables=[],
settings=WorkflowSettings(),
is_template=True
)
def tearDown(self):
"""Clean up test environment"""
import shutil
shutil.rmtree(self.temp_dir, ignore_errors=True)
def test_create_and_get_template(self):
"""Test creating and retrieving a template"""
template_data = {
'name': 'Test Template',
'description': 'A test template',
'category': 'Test',
'workflow': self.sample_workflow.to_dict(),
'parameters': [
{
'name': 'button_selector',
'type': 'target',
'description': 'Button to click',
'node_id': 'node1',
'parameter_name': 'target',
'required': True
}
],
'tags': ['test'],
'difficulty': 'beginner'
}
# Create template
template = self.service.create_template(template_data)
self.assertIsNotNone(template.id)
self.assertEqual(template.name, 'Test Template')
# Retrieve template
retrieved = self.service.get_template(template.id)
self.assertIsNotNone(retrieved)
self.assertEqual(retrieved.name, template.name)
self.assertEqual(len(retrieved.parameters), 1)
def test_list_templates(self):
"""Test listing templates"""
# Should have default templates
templates = self.service.list_templates()
self.assertGreater(len(templates), 0)
# Test filtering by category
web_templates = self.service.list_templates(category="Web Automation")
self.assertGreater(len(web_templates), 0)
for template in web_templates:
self.assertEqual(template.category, "Web Automation")
# Test filtering by difficulty
beginner_templates = self.service.list_templates(difficulty="beginner")
self.assertGreater(len(beginner_templates), 0)
for template in beginner_templates:
self.assertEqual(template.difficulty, TemplateDifficulty.BEGINNER)
def test_template_instantiation(self):
"""Test instantiating a template"""
# Get a default template
templates = self.service.list_templates()
self.assertGreater(len(templates), 0)
template = templates[0]
# Create parameters for instantiation
parameters = {}
for param in template.parameters:
if param.type == ParameterType.STRING:
parameters[param.name] = f"test_{param.name}"
elif param.type == ParameterType.TARGET:
parameters[param.name] = f"#{param.name}"
# Instantiate template
workflow = self.service.instantiate_template(
template.id, parameters, "Test Workflow", "test_user"
)
self.assertIsNotNone(workflow)
self.assertEqual(workflow.name, "Test Workflow")
self.assertEqual(workflow.created_by, "test_user")
self.assertFalse(workflow.is_template)
def run_tests():
"""Run template tests"""
print("🧪 Running Template System Tests (Models & Service)...")
# Create test suite
suite = unittest.TestSuite()
# Add test cases
loader = unittest.TestLoader()
suite.addTest(loader.loadTestsFromTestCase(TestTemplateModels))
suite.addTest(loader.loadTestsFromTestCase(TestTemplateService))
# Run tests
runner = unittest.TextTestRunner(verbosity=2)
result = runner.run(suite)
# Print summary
if result.wasSuccessful():
print("✅ All template tests passed!")
return True
else:
print(f"{len(result.failures)} test(s) failed, {len(result.errors)} error(s)")
return False
if __name__ == '__main__':
success = run_tests()
exit(0 if success else 1)
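The `{{button_selector}}` placeholders exercised above imply a simple substitution step at instantiation time; a standalone sketch of that idea (names hypothetical, not the actual `WorkflowTemplate.instantiate` code):

```python
import re

def substitute_placeholders(parameters: dict, values: dict) -> dict:
    """Replace {{name}} placeholders in string parameters with concrete values;
    unknown names are left untouched."""
    def sub(value):
        if not isinstance(value, str):
            return value
        return re.sub(r"\{\{(\w+)\}\}",
                      lambda m: str(values.get(m.group(1), m.group(0))), value)
    return {key: sub(value) for key, value in parameters.items()}

print(substitute_placeholders({"target": "{{button_selector}}", "timeout": 5},
                              {"button_selector": "button.submit"}))
# → {'target': 'button.submit', 'timeout': 5}
```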


@@ -0,0 +1,304 @@
#!/usr/bin/env python3
"""
Test WebSocket - Visual Workflow Builder
Manual tests of the WebSocket features for real-time updates.
Requirements: 6.2, 6.3, 6.4
"""
import sys
import time
from socketio import Client
import threading
sys.path.insert(0, '.')
# Configuration
SERVER_URL = "http://localhost:5002"
def test_websocket_connection():
    """WebSocket connection test"""
    print("🧪 Test 1: WebSocket connection")
    try:
        sio = Client()

        # Events received
        events_received = []

        @sio.on('connected')
        def on_connected(data):
            print(f" ✅ Connected: {data}")
            events_received.append(('connected', data))

        @sio.on('error')
        def on_error(data):
            print(f" ❌ Error: {data}")
            events_received.append(('error', data))

        # Connect
        sio.connect(SERVER_URL)
        time.sleep(0.5)

        # Verify the connection
        assert sio.connected, "Client should be connected"
        assert len(events_received) > 0, "Should have received an event"

        # Disconnect
        sio.disconnect()
        time.sleep(0.2)

        print(" ✅ Connect/disconnect succeeded")
        return True
    except Exception as e:
        print(f" ❌ Error: {e}")
        import traceback
        traceback.print_exc()
        return False
def test_execution_subscription():
    """Subscription to an execution"""
    print("\n🧪 Test 2: Subscribing to an execution")
    try:
        sio = Client()

        # Events received
        events_received = []

        @sio.on('connected')
        def on_connected(data):
            print(" Connected")

        @sio.on('execution_status')
        def on_execution_status(data):
            print(f" 📊 Status received: {data.get('status')}")
            events_received.append(('execution_status', data))

        @sio.on('error')
        def on_error(data):
            print(f" ⚠️ Error: {data.get('message')}")
            events_received.append(('error', data))

        @sio.on('unsubscribed')
        def on_unsubscribed(data):
            print(f" Unsubscribed: {data}")
            events_received.append(('unsubscribed', data))

        # Connect
        sio.connect(SERVER_URL)
        time.sleep(0.5)

        # Subscribe to an execution (which does not exist)
        sio.emit('subscribe_execution', {'execution_id': 'test_exec_123'})
        time.sleep(0.5)

        # We should receive an error
        assert len(events_received) > 0, "Should have received an event"

        # Unsubscribe
        sio.emit('unsubscribe_execution', {'execution_id': 'test_exec_123'})
        time.sleep(0.5)

        # Disconnect
        sio.disconnect()

        print(" ✅ Subscribe/unsubscribe succeeded")
        return True
    except Exception as e:
        print(f" ❌ Error: {e}")
        import traceback
        traceback.print_exc()
        return False
def test_execution_with_websocket():
    """Execution with real-time WebSocket updates"""
    print("\n🧪 Test 3: Execution with WebSocket")
    try:
        # Create a test workflow
        from services.serialization import create_empty_workflow, WorkflowDatabase
        from models.visual_workflow import VisualNode, VisualEdge, Position, Size, Port

        workflow = create_empty_workflow("Test WebSocket Workflow")

        # Add nodes
        workflow.nodes.extend([
            VisualNode(
                id="node_1",
                type="wait",
                position=Position(100, 100),
                size=Size(200, 80),
                parameters={'duration': 100},
                input_ports=[],
                output_ports=[Port('out', 'Output', 'output')]
            ),
            VisualNode(
                id="node_2",
                type="wait",
                position=Position(400, 100),
                size=Size(200, 80),
                parameters={'duration': 100},
                input_ports=[Port('in', 'Input', 'input')],
                output_ports=[]
            )
        ])
        workflow.edges.append(VisualEdge(
            id="edge_1",
            source="node_1",
            target="node_2",
            source_port="out",
            target_port="in"
        ))

        # Save the workflow
        db = WorkflowDatabase()
        db.save(workflow)

        # Get the executor
        from services.execution_integration import get_executor
        executor = get_executor()

        # WebSocket events received
        ws_events = []

        # WebSocket client
        sio = Client()

        @sio.on('connected')
        def on_connected(data):
            print(" WebSocket connected")

        @sio.on('execution_started')
        def on_execution_started(data):
            print(f" 🚀 Execution started: {data.get('execution_id')}")
            ws_events.append(('started', data))

        @sio.on('node_status')
        def on_node_status(data):
            print(f" 📍 Node {data.get('node_id')}: {data.get('status')}")
            ws_events.append(('node_status', data))

        @sio.on('execution_progress')
        def on_execution_progress(data):
            progress = data.get('progress', {}).get('progress', 0)
            print(f" 📊 Progress: {progress:.1f}%")
            ws_events.append(('progress', data))

        @sio.on('execution_complete')
        def on_execution_complete(data):
            print(f" ✅ Execution finished: {data.get('status')}")
            ws_events.append(('complete', data))

        @sio.on('execution_error')
        def on_execution_error(data):
            print(f" ❌ Error: {data.get('error')}")
            ws_events.append(('error', data))

        # Connect the WebSocket client
        sio.connect(SERVER_URL)
        time.sleep(0.5)

        # Start the execution
        execution_id = executor.execute_workflow(workflow_id=workflow.id)
        print(f" Execution started: {execution_id}")

        # Subscribe to updates
        sio.emit('subscribe_execution', {'execution_id': execution_id})
        time.sleep(0.2)

        # Wait for the execution to finish
        max_wait = 5
        waited = 0
        while waited < max_wait:
            result = executor.get_execution_status(execution_id)
            if result and result.status in ['completed', 'failed', 'cancelled']:
                break
            time.sleep(0.1)
            waited += 0.1

        # Wait briefly to receive any remaining events
        time.sleep(0.5)

        # Disconnect
        sio.disconnect()

        # Check the events received
        print(f"\n 📊 WebSocket events received: {len(ws_events)}")
        for event_type, data in ws_events:
            print(f" - {event_type}")

        # We should have received some events
        assert len(ws_events) > 0, "Should have received WebSocket events"

        print("\n ✅ Execution with WebSocket succeeded")
        return True
    except Exception as e:
        print(f" ❌ Error: {e}")
        import traceback
        traceback.print_exc()
        return False
def main():
    """Run all the tests"""
    print("=" * 60)
    print("🚀 WebSocket Tests - Visual Workflow Builder")
    print("=" * 60)
    print()
    print("⚠️ IMPORTANT: the Flask server must be running on port 5002")
    print(" Command: cd backend && python app.py")
    print()

    # Check that the server is reachable
    import requests
    try:
        response = requests.get(f"{SERVER_URL}/health", timeout=2)
        print(f"✅ Server reachable: {response.json()}")
        print()
    except Exception as e:
        print(f"❌ Server not reachable: {e}")
        print(" Start the server with: cd backend && python app.py")
        return 1

    tests = [
        test_websocket_connection,
        test_execution_subscription,
        test_execution_with_websocket
    ]
    results = []
    for test in tests:
        try:
            result = test()
            results.append(result)
        except Exception as e:
            print(f"\n❌ Test failed with exception: {e}")
            import traceback
            traceback.print_exc()
            results.append(False)

    print("\n" + "=" * 60)
    passed = sum(results)
    total = len(results)
    if passed == total:
        print(f"✅ ALL TESTS PASSED! ({passed}/{total})")
        print("=" * 60)
        return 0
    else:
        print(f"❌ {total - passed} test(s) failed out of {total}")
        print("=" * 60)
        return 1

if __name__ == '__main__':
    sys.exit(main())


@@ -0,0 +1,3 @@
"""
Tests package for Visual Workflow Builder backend
"""


@@ -0,0 +1,26 @@
"""
Pytest configuration and fixtures for Visual Workflow Builder tests
"""
import pytest
from app import app, db
@pytest.fixture
def client():
"""Create a test client for the Flask app"""
app.config['TESTING'] = True
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
with app.test_client() as client:
with app.app_context():
db.create_all()
yield client
db.drop_all()
@pytest.fixture
def app_context():
"""Create an application context for tests"""
with app.app_context():
db.create_all()
yield app
db.drop_all()


@@ -0,0 +1,393 @@
"""
Tests for COACHING Mode API (REST + WebSocket).
Tests the complete COACHING flow including:
- REST API endpoints for starting/managing COACHING executions
- WebSocket events for real-time suggestions and decisions
- Integration with ExecutionLoop COACHING mode
Requirements: 6.1, 6.2, 6.3, 6.4
"""
import pytest
import json
import time
from unittest.mock import MagicMock, patch
from flask import Flask
from flask_socketio import SocketIOTestClient
@pytest.fixture
def app():
"""Create test Flask app with blueprints."""
from app import create_app
app = create_app()
app.config['TESTING'] = True
return app
@pytest.fixture
def client(app):
"""Create test client."""
return app.test_client()
@pytest.fixture
def mock_executor():
"""Create mock executor for testing."""
with patch('api.executions.get_executor') as mock_get:
executor = MagicMock()
executor.execute_workflow_coaching.return_value = 'coaching_test_001'
executor.execute_workflow.return_value = 'exec_test_001'
executor.is_coaching_execution.return_value = True
executor.get_coaching_stats.return_value = {
'suggestions_made': 5,
'accepted': 3,
'rejected': 1,
'corrected': 1,
'manual_executions': 0,
'acceptance_rate': 0.6
}
executor.submit_coaching_decision.return_value = True
executor.get_execution_status.return_value = MagicMock(
to_dict=lambda: {
'execution_id': 'coaching_test_001',
'status': 'running',
'progress': {'completed_nodes': 2, 'total_nodes': 5}
}
)
executor.list_executions.return_value = [
{'execution_id': 'coaching_test_001', 'status': 'running'},
{'execution_id': 'exec_test_002', 'status': 'completed'}
]
mock_get.return_value = executor
yield executor
class TestCoachingAPIEndpoints:
"""Tests for COACHING REST API endpoints."""
def test_start_coaching_execution(self, client, mock_executor):
"""Test starting a COACHING execution."""
response = client.post(
'/api/executions/coaching',
data=json.dumps({'workflow_id': 'wf_test_001'}),
content_type='application/json'
)
assert response.status_code == 201
data = response.get_json()
assert data['execution_id'] == 'coaching_test_001'
assert data['mode'] == 'coaching'
assert data['status'] == 'started'
def test_start_coaching_no_workflow_id(self, client, mock_executor):
"""Test starting COACHING without workflow_id fails."""
response = client.post(
'/api/executions/coaching',
data=json.dumps({}),
content_type='application/json'
)
assert response.status_code == 400
assert 'error' in response.get_json()
def test_start_execution_with_coaching_mode(self, client, mock_executor):
"""Test starting execution with mode=coaching."""
response = client.post(
'/api/executions/',
data=json.dumps({
'workflow_id': 'wf_test_001',
'mode': 'coaching'
}),
content_type='application/json'
)
assert response.status_code == 201
data = response.get_json()
assert data['mode'] == 'coaching'
def test_get_coaching_stats(self, client, mock_executor):
"""Test getting COACHING statistics."""
response = client.get('/api/executions/coaching_test_001/coaching/stats')
assert response.status_code == 200
data = response.get_json()
assert 'stats' in data
assert data['stats']['suggestions_made'] == 5
assert data['stats']['accepted'] == 3
def test_submit_coaching_decision_accept(self, client, mock_executor):
"""Test submitting accept decision."""
response = client.post(
'/api/executions/coaching_test_001/coaching/decision',
data=json.dumps({'decision': 'accept'}),
content_type='application/json'
)
assert response.status_code == 200
data = response.get_json()
assert data['decision'] == 'accept'
assert data['status'] == 'accepted'
def test_submit_coaching_decision_correct(self, client, mock_executor):
"""Test submitting correct decision with correction."""
response = client.post(
'/api/executions/coaching_test_001/coaching/decision',
data=json.dumps({
'decision': 'correct',
'correction': {
'target': {'id': 'new_button'},
'params': {'timeout': 5}
},
'feedback': 'The button changed position'
}),
content_type='application/json'
)
assert response.status_code == 200
data = response.get_json()
assert data['decision'] == 'correct'
def test_submit_coaching_decision_reject(self, client, mock_executor):
"""Test submitting reject decision."""
response = client.post(
'/api/executions/coaching_test_001/coaching/decision',
data=json.dumps({
'decision': 'reject',
'feedback': 'This action is incorrect'
}),
content_type='application/json'
)
assert response.status_code == 200
data = response.get_json()
assert data['decision'] == 'reject'
def test_submit_coaching_decision_manual(self, client, mock_executor):
"""Test submitting manual execution decision."""
response = client.post(
'/api/executions/coaching_test_001/coaching/decision',
data=json.dumps({'decision': 'manual'}),
content_type='application/json'
)
assert response.status_code == 200
data = response.get_json()
assert data['decision'] == 'manual'
def test_submit_coaching_decision_skip(self, client, mock_executor):
"""Test submitting skip decision."""
response = client.post(
'/api/executions/coaching_test_001/coaching/decision',
data=json.dumps({'decision': 'skip'}),
content_type='application/json'
)
assert response.status_code == 200
data = response.get_json()
assert data['decision'] == 'skip'
def test_submit_invalid_decision(self, client, mock_executor):
"""Test submitting invalid decision fails."""
response = client.post(
'/api/executions/coaching_test_001/coaching/decision',
data=json.dumps({'decision': 'invalid'}),
content_type='application/json'
)
assert response.status_code == 400
assert 'error' in response.get_json()
def test_submit_decision_missing(self, client, mock_executor):
"""Test submitting without decision fails."""
response = client.post(
'/api/executions/coaching_test_001/coaching/decision',
data=json.dumps({}),
content_type='application/json'
)
assert response.status_code == 400
def test_get_execution_shows_coaching_flag(self, client, mock_executor):
"""Test that get execution includes is_coaching flag."""
response = client.get('/api/executions/coaching_test_001')
assert response.status_code == 200
data = response.get_json()
assert 'is_coaching' in data
def test_list_executions_filter_coaching(self, client, mock_executor):
"""Test filtering executions by coaching mode."""
response = client.get('/api/executions/?mode=coaching')
assert response.status_code == 200
data = response.get_json()
assert 'executions' in data
class TestCoachingWebSocket:
"""Tests for COACHING WebSocket events."""
@pytest.fixture
def socketio_client(self, app):
"""Create SocketIO test client."""
from app import socketio
return SocketIOTestClient(app, socketio)
def test_subscribe_coaching(self, socketio_client):
"""Test subscribing to COACHING events."""
socketio_client.emit('subscribe_coaching', {
'execution_id': 'coaching_test_001'
})
# Check for response
received = socketio_client.get_received()
# Should receive coaching_subscribed event
events = [r['name'] for r in received]
assert 'coaching_subscribed' in events or 'connected' in events
def test_unsubscribe_coaching(self, socketio_client):
"""Test unsubscribing from COACHING events."""
# First subscribe
socketio_client.emit('subscribe_coaching', {
'execution_id': 'coaching_test_001'
})
socketio_client.get_received() # Clear
# Then unsubscribe
socketio_client.emit('unsubscribe_coaching', {
'execution_id': 'coaching_test_001'
})
received = socketio_client.get_received()
events = [r['name'] for r in received]
# The server may ack with 'coaching_unsubscribed' or simply stay silent,
# but a valid unsubscribe should not produce an error event
assert 'error' not in events
def test_get_coaching_stats_websocket(self, socketio_client):
"""Test getting COACHING stats via WebSocket."""
socketio_client.emit('get_coaching_stats', {
'execution_id': 'coaching_test_001'
})
received = socketio_client.get_received()
# Should receive coaching_stats event
events = [r['name'] for r in received]
assert 'coaching_stats' in events or 'error' in events
class TestVisualWorkflowExecutorCoaching:
"""Tests for VisualWorkflowExecutor COACHING methods."""
def test_execute_workflow_coaching_creates_execution(self):
"""Test that execute_workflow_coaching creates execution."""
from services.execution_integration import VisualWorkflowExecutor
executor = VisualWorkflowExecutor()
# Mock the database load
with patch.object(executor.db, 'load') as mock_load:
mock_workflow = MagicMock()
mock_workflow.id = 'test_wf'
mock_workflow.workflow_id = 'test_wf'
mock_load.return_value = mock_workflow
# Mock the conversion
with patch('services.execution_integration.convert_visual_to_graph'):
execution_id = executor.execute_workflow_coaching('test_wf')
assert execution_id.startswith('coaching_')
assert executor.is_coaching_execution(execution_id)
def test_is_coaching_execution_false_for_normal(self):
"""Test is_coaching_execution returns False for normal executions."""
from services.execution_integration import VisualWorkflowExecutor
executor = VisualWorkflowExecutor()
assert executor.is_coaching_execution('normal_exec_001') is False
def test_submit_coaching_decision_non_coaching_fails(self):
"""Test that submit_coaching_decision fails for non-COACHING execution."""
from services.execution_integration import VisualWorkflowExecutor
executor = VisualWorkflowExecutor()
result = executor.submit_coaching_decision(
'non_coaching_exec',
{'decision': 'accept'}
)
assert result is False
def test_get_coaching_stats_returns_none_for_unknown(self):
"""Test get_coaching_stats returns None for unknown execution."""
from services.execution_integration import VisualWorkflowExecutor
executor = VisualWorkflowExecutor()
stats = executor.get_coaching_stats('unknown_exec')
assert stats is None
class TestCoachingDecisionFlow:
"""Tests for the complete COACHING decision flow."""
def test_decision_accept_flow(self):
"""Test the complete accept decision flow."""
from services.execution_integration import VisualWorkflowExecutor
executor = VisualWorkflowExecutor()
# Simulate COACHING execution
execution_id = 'coaching_flow_test'
executor._coaching_mode_executions[execution_id] = True
# Initialize responses dict
executor._coaching_responses = {}
# Submit decision
result = executor.submit_coaching_decision(
execution_id,
{'decision': 'accept', 'feedback': 'Looks good'}
)
assert result is True
assert execution_id in executor._coaching_responses
def test_decision_correct_with_correction(self):
"""Test correct decision with correction data."""
from services.execution_integration import VisualWorkflowExecutor
executor = VisualWorkflowExecutor()
execution_id = 'coaching_correct_test'
executor._coaching_mode_executions[execution_id] = True
executor._coaching_responses = {}
correction = {
'target': {'id': 'corrected_element'},
'params': {'timeout': 10}
}
result = executor.submit_coaching_decision(
execution_id,
{
'decision': 'correct',
'correction': correction,
'feedback': 'Element changed position'
}
)
assert result is True
response = executor._coaching_responses[execution_id]
assert response['correction'] == correction
class TestCoachingIntegrationWithCorrectionPacks:
"""Tests for COACHING integration with Correction Packs."""
def test_coaching_correction_captured(self):
"""Test that COACHING corrections are captured in Correction Packs."""
# This tests the integration between COACHING and Correction Packs
pass # Implemented in test_correction_pack_integration.py
if __name__ == '__main__':
pytest.main([__file__, '-v'])
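The decision endpoint tests above pin down a small contract: only `accept`, `reject`, `correct`, `manual`, and `skip` are valid, and a missing or unknown decision yields a 400. A hedged sketch of that validation logic, not the actual handler (the rule that `correct` requires a `correction` object is an assumption):

```python
VALID_DECISIONS = {'accept', 'reject', 'correct', 'manual', 'skip'}

def validate_coaching_decision(payload: dict):
    """Return a (status_code, body) pair for a coaching decision payload."""
    decision = payload.get('decision')
    if decision not in VALID_DECISIONS:
        return 400, {'error': f"invalid or missing decision: {decision!r}"}
    # Assumption: a 'correct' decision must carry the corrected target/params
    if decision == 'correct' and not isinstance(payload.get('correction'), dict):
        return 400, {'error': "a 'correct' decision requires a correction object"}
    return 200, {'decision': decision}

print(validate_coaching_decision({'decision': 'accept'}))      # → (200, {'decision': 'accept'})
print(validate_coaching_decision({'decision': 'invalid'})[0])  # → 400
```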


@@ -0,0 +1,393 @@
"""
Unit tests for visual workflow models
"""
import pytest
from datetime import datetime
from backend.models import (
VisualWorkflow,
VisualNode,
VisualEdge,
Variable,
WorkflowSettings,
Position,
Size,
Port,
NodeStatus,
NodeCategory,
ParameterType,
generate_id
)
def test_position_serialization():
"""Test Position to_dict and from_dict"""
pos = Position(x=100.5, y=200.3)
data = pos.to_dict()
assert data == {'x': 100.5, 'y': 200.3}
pos2 = Position.from_dict(data)
assert pos2.x == pos.x
assert pos2.y == pos.y
def test_size_serialization():
"""Test Size to_dict and from_dict"""
size = Size(width=150, height=80)
data = size.to_dict()
assert data == {'width': 150, 'height': 80}
size2 = Size.from_dict(data)
assert size2.width == size.width
assert size2.height == size.height
def test_visual_node_creation():
"""Test creating a VisualNode"""
node = VisualNode(
id='node-1',
type='click',
position=Position(x=100, y=200),
size=Size(width=200, height=100),
parameters={'target': 'button'},
input_ports=[Port(id='in', name='Input', type='input')],
output_ports=[Port(id='out', name='Output', type='output')]
)
assert node.id == 'node-1'
assert node.type == 'click'
assert node.position.x == 100
assert node.parameters['target'] == 'button'
assert len(node.input_ports) == 1
assert len(node.output_ports) == 1
def test_visual_node_serialization():
"""Test VisualNode to_dict and from_dict"""
node = VisualNode(
id='node-1',
type='click',
position=Position(x=100, y=200),
size=Size(width=200, height=100),
parameters={'target': 'button', 'timeout': 5000},
input_ports=[Port(id='in', name='Input', type='input')],
output_ports=[Port(id='out', name='Output', type='output')],
label='Click Button',
status=NodeStatus.IDLE
)
data = node.to_dict()
assert data['id'] == 'node-1'
assert data['type'] == 'click'
assert data['position'] == {'x': 100, 'y': 200}
assert data['parameters']['target'] == 'button'
assert data['status'] == 'idle'
assert data['label'] == 'Click Button'
node2 = VisualNode.from_dict(data)
assert node2.id == node.id
assert node2.type == node.type
assert node2.position.x == node.position.x
assert node2.status == node.status
def test_visual_edge_creation():
"""Test creating a VisualEdge"""
edge = VisualEdge(
id='edge-1',
source='node-1',
target='node-2',
source_port='out',
target_port='in'
)
assert edge.id == 'edge-1'
assert edge.source == 'node-1'
assert edge.target == 'node-2'
assert edge.selected == False
assert edge.animated == False
def test_visual_edge_serialization():
"""Test VisualEdge to_dict and from_dict"""
edge = VisualEdge(
id='edge-1',
source='node-1',
target='node-2',
source_port='out',
target_port='in',
selected=True
)
data = edge.to_dict()
assert data['id'] == 'edge-1'
assert data['source'] == 'node-1'
assert data['target'] == 'node-2'
assert data['selected'] == True
edge2 = VisualEdge.from_dict(data)
assert edge2.id == edge.id
assert edge2.source == edge.source
assert edge2.selected == edge.selected
def test_variable_creation():
"""Test creating a Variable"""
var = Variable(
name='username',
type='string',
value='john_doe',
description='User login name'
)
assert var.name == 'username'
assert var.type == 'string'
assert var.value == 'john_doe'
assert var.description == 'User login name'
def test_workflow_settings_defaults():
"""Test WorkflowSettings default values"""
settings = WorkflowSettings()
assert settings.timeout == 300000
assert settings.retry_on_failure == True
assert settings.max_retries == 3
assert settings.enable_self_healing == True
assert settings.enable_analytics == True
def test_visual_workflow_creation():
"""Test creating a complete VisualWorkflow"""
now = datetime.now()
workflow = VisualWorkflow(
id='wf-1',
name='Test Workflow',
description='A test workflow',
version='1.0.0',
created_at=now,
updated_at=now,
created_by='test_user',
nodes=[],
edges=[],
variables=[],
settings=WorkflowSettings(),
tags=['test', 'demo'],
category='automation'
)
assert workflow.id == 'wf-1'
assert workflow.name == 'Test Workflow'
assert workflow.version == '1.0.0'
assert len(workflow.tags) == 2
assert workflow.is_template == False
def test_visual_workflow_serialization():
"""Test VisualWorkflow to_dict and from_dict"""
now = datetime.now()
node = VisualNode(
id='node-1',
type='click',
position=Position(x=100, y=200),
size=Size(width=200, height=100),
parameters={},
input_ports=[],
output_ports=[]
)
edge = VisualEdge(
id='edge-1',
source='node-1',
target='node-2',
source_port='out',
target_port='in'
)
var = Variable(name='test', type='string', value='value')
workflow = VisualWorkflow(
id='wf-1',
name='Test Workflow',
description='Test',
version='1.0.0',
created_at=now,
updated_at=now,
created_by='user',
nodes=[node],
edges=[edge],
variables=[var],
settings=WorkflowSettings()
)
data = workflow.to_dict()
assert data['id'] == 'wf-1'
assert data['name'] == 'Test Workflow'
assert len(data['nodes']) == 1
assert len(data['edges']) == 1
assert len(data['variables']) == 1
workflow2 = VisualWorkflow.from_dict(data)
assert workflow2.id == workflow.id
assert workflow2.name == workflow.name
assert len(workflow2.nodes) == 1
assert len(workflow2.edges) == 1


def test_workflow_validation_success():
    """Test workflow validation with valid workflow"""
    now = datetime.now()
    node1 = VisualNode(
        id='node-1',
        type='click',
        position=Position(x=100, y=200),
        size=Size(width=200, height=100),
        parameters={},
        input_ports=[],
        output_ports=[]
    )
    node2 = VisualNode(
        id='node-2',
        type='type',
        position=Position(x=300, y=200),
        size=Size(width=200, height=100),
        parameters={},
        input_ports=[],
        output_ports=[]
    )
    edge = VisualEdge(
        id='edge-1',
        source='node-1',
        target='node-2',
        source_port='out',
        target_port='in'
    )
    workflow = VisualWorkflow(
        id='wf-1',
        name='Test Workflow',
        description='Test',
        version='1.0.0',
        created_at=now,
        updated_at=now,
        created_by='user',
        nodes=[node1, node2],
        edges=[edge],
        variables=[],
        settings=WorkflowSettings()
    )

    errors = workflow.validate()
    assert len(errors) == 0


def test_workflow_validation_missing_fields():
    """Test workflow validation with missing required fields"""
    now = datetime.now()
    workflow = VisualWorkflow(
        id='',  # Missing ID
        name='',  # Missing name
        description='Test',
        version='',  # Missing version
        created_at=now,
        updated_at=now,
        created_by='user',
        nodes=[],
        edges=[],
        variables=[],
        settings=WorkflowSettings()
    )

    errors = workflow.validate()
    assert len(errors) == 3
    assert any('ID is required' in e for e in errors)
    assert any('name is required' in e for e in errors)
    assert any('version is required' in e for e in errors)


def test_workflow_validation_invalid_edge():
    """Test workflow validation with edge referencing non-existent node"""
    now = datetime.now()
    node1 = VisualNode(
        id='node-1',
        type='click',
        position=Position(x=100, y=200),
        size=Size(width=200, height=100),
        parameters={},
        input_ports=[],
        output_ports=[]
    )
    edge = VisualEdge(
        id='edge-1',
        source='node-1',
        target='node-999',  # Non-existent node
        source_port='out',
        target_port='in'
    )
    workflow = VisualWorkflow(
        id='wf-1',
        name='Test Workflow',
        description='Test',
        version='1.0.0',
        created_at=now,
        updated_at=now,
        created_by='user',
        nodes=[node1],
        edges=[edge],
        variables=[],
        settings=WorkflowSettings()
    )

    errors = workflow.validate()
    assert len(errors) == 1
    assert 'non-existent target node' in errors[0]


def test_workflow_validation_duplicate_variables():
    """Test workflow validation with duplicate variable names"""
    now = datetime.now()
    var1 = Variable(name='test', type='string', value='value1')
    var2 = Variable(name='test', type='string', value='value2')  # Duplicate name
    workflow = VisualWorkflow(
        id='wf-1',
        name='Test Workflow',
        description='Test',
        version='1.0.0',
        created_at=now,
        updated_at=now,
        created_by='user',
        nodes=[],
        edges=[],
        variables=[var1, var2],
        settings=WorkflowSettings()
    )

    errors = workflow.validate()
    assert len(errors) == 1
    assert 'Duplicate variable names' in errors[0]
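Taken together, the three validation tests pin down the `validate()` contract: `id`, `name`, and `version` are required; every edge endpoint must reference an existing node; and variable names must be unique. A hypothetical reconstruction of that contract — operating on plain dicts for brevity, whereas the real method lives on `VisualWorkflow`:

```python
def validate_workflow(wf: dict) -> list:
    """Collect human-readable validation errors; an empty list means valid."""
    errors = []

    # Required top-level fields
    if not wf.get('id'):
        errors.append('Workflow ID is required')
    if not wf.get('name'):
        errors.append('Workflow name is required')
    if not wf.get('version'):
        errors.append('Workflow version is required')

    # Every edge endpoint must reference an existing node
    node_ids = {n['id'] for n in wf.get('nodes', [])}
    for edge in wf.get('edges', []):
        if edge['source'] not in node_ids:
            errors.append(f"Edge {edge['id']} references non-existent source node")
        if edge['target'] not in node_ids:
            errors.append(f"Edge {edge['id']} references non-existent target node")

    # Variable names must be unique
    names = [v['name'] for v in wf.get('variables', [])]
    if len(names) != len(set(names)):
        errors.append('Duplicate variable names found')

    return errors
```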


def test_generate_id():
    """Test ID generation"""
    id1 = generate_id()
    id2 = generate_id()
    assert id1 != id2
    assert len(id1) == 36  # UUID format
    assert '-' in id1
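The 36-character, hyphenated string asserted here matches the canonical text form of a UUID (32 hex digits plus 4 hyphens in 8-4-4-4-12 groups), so `generate_id()` is presumably a thin wrapper over the standard library — a sketch under that assumption:

```python
import uuid


def generate_id() -> str:
    """Return a unique workflow/node ID as a canonical UUID4 string."""
    # str(uuid.uuid4()) is always 36 characters and collision-resistant
    return str(uuid.uuid4())
```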
