v1.0 - Stable version: multi-PC, UI-DETR-1 detection, 3 execution modes

- Frontend v4 reachable on the local network (192.168.1.40)
- Open ports: 3002 (frontend), 5001 (backend), 5004 (dashboard)
- Ollama GPU working
- Interactive self-healing
- Confidence dashboard

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
core/graph/README.md (253 lines, new file)
@@ -0,0 +1,253 @@
# Graph Module - Workflow Graph Construction

This module implements the automatic construction of workflow graphs from recorded sessions.

## Architecture

```
graph/
├── __init__.py
├── graph_builder.py    # Builds workflows from sessions
├── node_matcher.py     # Matches ScreenStates against nodes
└── README.md           # This file
```

## GraphBuilder

### Responsibilities

The `GraphBuilder` analyzes a `RawSession` to automatically build a complete `Workflow`:

1. **ScreenState creation** - Converts screenshots into structured states
2. **Embedding computation** - Generates multi-modal embeddings for each state
3. **Pattern detection** - Uses DBSCAN to identify repeated patterns
4. **Node construction** - Creates WorkflowNodes from the clusters
5. **Edge construction** - Detects transitions between states (TODO)
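The five steps can be sketched end-to-end as a toy pipeline. This is placeholder logic only (random stand-in embeddings, a hard-coded clustering); the function names are illustrative and not the real `GraphBuilder` API, which lives in `graph_builder.py`.

```python
import numpy as np

def build_workflow(screenshots):
    """Toy end-to-end sketch of the five build steps above."""
    # 1. Create one structured state per screenshot
    states = [{"id": i, "screenshot": s} for i, s in enumerate(screenshots)]
    # 2. Compute one embedding per state (stub: random unit vectors)
    rng = np.random.default_rng(0)
    raw = rng.normal(size=(len(states), 8))
    embeddings = [v / np.linalg.norm(v) for v in raw]
    # 3. Detect repeated patterns (stub: group by index parity,
    #    standing in for the DBSCAN clustering step)
    clusters = {
        0: [i for i in range(len(states)) if i % 2 == 0],
        1: [i for i in range(len(states)) if i % 2 == 1],
    }
    # 4. One node per cluster; 5. edge construction is still TODO upstream
    nodes = [{"node_id": cid, "members": idx} for cid, idx in clusters.items()]
    return {"nodes": nodes, "edges": []}

workflow = build_workflow(["a.png", "b.png", "c.png", "d.png"])
```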
### Pattern Detection Algorithm

Uses **DBSCAN** (Density-Based Spatial Clustering of Applications with Noise):

- **Metric**: Cosine similarity between embeddings
- **Parameters**:
  - `eps`: Maximum distance between points (default: 0.15)
  - `min_samples`: Minimum samples per cluster (default: 2)
  - `min_pattern_repetitions`: Minimum repetitions for a pattern (default: 3)

**Advantages of DBSCAN**:
- Detects the number of clusters automatically
- Identifies noise (unique states)
- Handles clusters of arbitrary shape well
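The clustering step can be sketched with scikit-learn's `DBSCAN` directly. The `eps` and `min_samples` values match the defaults above; the toy embeddings (two groups of near-duplicates plus one outlier) are illustrative stand-ins for real screen-state embeddings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
base_a = rng.normal(size=16)
base_b = rng.normal(size=16)
# Three near-duplicates of each "screen", plus one unrelated outlier
vectors = [base_a + rng.normal(scale=0.01, size=16) for _ in range(3)]
vectors += [base_b + rng.normal(scale=0.01, size=16) for _ in range(3)]
vectors.append(rng.normal(size=16))
X = np.array([v / np.linalg.norm(v) for v in vectors])

# Cosine distance = 1 - cosine similarity, so eps=0.15 keeps only
# points whose similarity to a neighbor is >= 0.85
labels = DBSCAN(eps=0.15, min_samples=2, metric="cosine").fit_predict(X)

clusters = {}
for idx, label in enumerate(labels):
    if label != -1:                     # -1 marks noise (unique states)
        clusters.setdefault(int(label), []).append(idx)
```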
### Usage Example

```python
from core.graph.graph_builder import GraphBuilder
from core.models.raw_session import RawSession

# Create the builder
builder = GraphBuilder(
    min_pattern_repetitions=3,
    clustering_eps=0.15
)

# Build a workflow from a session
workflow = builder.build_from_session(
    session=raw_session,
    workflow_name="Login Workflow"
)

print(f"Workflow: {len(workflow.nodes)} nodes, {len(workflow.edges)} edges")
```
### Configuration

```python
GraphBuilder(
    embedding_builder=None,        # Custom StateEmbeddingBuilder
    faiss_manager=None,            # FAISSManager for indexing
    min_pattern_repetitions=3,     # Minimum repetitions for a pattern
    clustering_eps=0.15,           # Maximum DBSCAN distance
    clustering_min_samples=2       # Minimum samples per cluster
)
```
### Public Methods

#### `build_from_session(session, workflow_name=None) -> Workflow`

Builds a complete workflow from a RawSession.

**Args:**
- `session`: RawSession to analyze
- `workflow_name`: Workflow name (optional)

**Returns:**
- `Workflow` with nodes and edges

**Raises:**
- `ValueError` if the session is empty
### Private Methods

#### `_create_screen_states(session) -> List[ScreenState]`

Creates ScreenStates from the screenshots.

**Note:** For now, creates basic states. TODO: Enrich with UI detection.

#### `_compute_embeddings(screen_states) -> List[np.ndarray]`

Computes the embeddings for all states.

Uses `StateEmbeddingBuilder` to generate multi-modal embeddings (image + text + UI).

#### `_detect_patterns(embeddings, screen_states) -> Dict[int, List[int]]`

Detects repeated patterns via DBSCAN clustering.

**Returns:** Dictionary `{cluster_id: [state indices]}`

#### `_build_nodes(clusters, screen_states, embeddings) -> List[WorkflowNode]`

Builds WorkflowNodes from the clusters.

For each cluster:
1. Computes the prototype embedding (normalized mean)
2. Extracts the constraints
3. Creates a ScreenTemplate
4. Creates a WorkflowNode
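Step 1 above (the "normalized mean" prototype) can be sketched as follows. This is the assumed computation, not a verbatim copy of the shipped code in `graph_builder.py`.

```python
import numpy as np

def prototype_embedding(member_embeddings):
    """L2-normalized mean of the member embeddings of one cluster."""
    mean = np.mean(member_embeddings, axis=0)
    norm = np.linalg.norm(mean)
    if norm == 0:
        raise ValueError("Degenerate cluster: zero mean embedding")
    return mean / norm

members = np.array([[1.0, 0.0], [0.8, 0.6]])  # two unit vectors
proto = prototype_embedding(members)           # unit-length prototype
```

Normalizing the mean keeps the prototype comparable to member embeddings under cosine similarity (a dot product of unit vectors).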
#### `_create_screen_template(states, prototype_embedding) -> ScreenTemplate`

Creates a ScreenTemplate from a cluster of states.

**TODO:** Extract intelligently:
- `window_title_pattern` (regex from common titles)
- `required_text_patterns` (text present in all states)
- `required_ui_elements` (common UI elements)

#### `_build_edges(nodes, screen_states, session) -> List[WorkflowEdge]`

Builds WorkflowEdges from the transitions.

**TODO:** Implement transition detection:
1. Identify state sequences (state_i → state_j)
2. Extract actions from RawSession events
3. Map states to nodes
4. Create edges with TargetSpec and conditions
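Since `_build_edges` is still a TODO, here is one possible sketch of steps 1 and 3: walk consecutive states, map each to its node, and emit one edge per observed node-to-node transition. All names here are hypothetical; action extraction and TargetSpec creation (steps 2 and 4) are omitted.

```python
def sketch_build_edges(state_to_node, ordered_state_ids):
    """Derive (source_node, target_node) pairs from a state sequence."""
    edges = set()
    for prev_id, next_id in zip(ordered_state_ids, ordered_state_ids[1:]):
        src = state_to_node.get(prev_id)
        dst = state_to_node.get(next_id)
        # Skip unmatched states and transitions within the same node
        if src is None or dst is None or src == dst:
            continue
        edges.add((src, dst))
    return sorted(edges)

edges = sketch_build_edges(
    {"s1": "login", "s2": "login", "s3": "home"},
    ["s1", "s2", "s3"],
)
```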
## NodeMatcher

### Responsibilities

The `NodeMatcher` finds the WorkflowNode that best corresponds to the current ScreenState.

### Matching Strategies

1. **FAISS search** (when available): Fast lookup in the index
2. **Linear search** (fallback): Compares against all candidates
3. **Constraint validation**: Checks window title, required text, required UI

### Usage Example
```python
from core.graph.node_matcher import NodeMatcher

# Create the matcher
matcher = NodeMatcher(similarity_threshold=0.85)

# Match a state against candidate nodes
result = matcher.match(current_state, candidate_nodes)

if result:
    node, confidence = result
    print(f"Matched {node.node_id} with confidence {confidence:.2f}")
else:
    print("No match found")
```
### Configuration

```python
NodeMatcher(
    embedding_builder=None,       # Custom StateEmbeddingBuilder
    faiss_manager=None,           # FAISSManager for fast search
    similarity_threshold=0.85     # Minimum similarity threshold
)
```

### Public Methods

#### `match(current_state, candidate_nodes) -> Optional[Tuple[WorkflowNode, float]]`

Finds the node that best matches the current state.

**Returns:** `(node, confidence)` if a match is found, `None` otherwise

#### `validate_constraints(state, node) -> bool`

Validates the node's constraints against the state.

**Returns:** `True` if all constraints are satisfied
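The shipped `validate_constraints` currently only checks that a window title exists when the template defines a `window_title_pattern`. A fuller check would match the title against that regex, roughly as in this sketch (illustrative helper, not the shipped code):

```python
import re

def title_matches(window_title, window_title_pattern):
    """Regex check for a template's window_title_pattern constraint."""
    if window_title_pattern is None:
        return True                      # no constraint to enforce
    if not window_title:
        return False                     # constrained, but no title available
    return re.search(window_title_pattern, window_title) is not None

ok = title_matches("Login - MyApp", r"Login.*MyApp")
```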
## Tests

### Unit Tests

```bash
# Run the tests
python -m pytest tests/unit/test_graph_builder.py -v
python -m pytest tests/unit/test_node_matcher.py -v
```

### Integration Test

```bash
# Quick test
python test_phase_a_b.py
```

## Code Quality

- ✅ **Type hints**: All functions are typed
- ✅ **Docstrings**: Complete documentation (Google style)
- ✅ **Logging**: Informative logs at every level
- ✅ **Error handling**: Input validation
- ✅ **No diagnostics**: No linting/typing errors
## Next Steps

### High Priority

1. **Implement `_build_edges()`**
   - Detect transitions between states
   - Extract actions from events
   - Create TargetSpec with semantic roles

2. **Enrich `_create_screen_template()`**
   - Extract window_title_pattern
   - Extract required_text_patterns
   - Extract required_ui_elements

3. **Property-based tests**
   - Property 14: Embedding Prototype Sample Count
   - Property 16: Pattern Detection Minimum Repetitions

### Medium Priority

4. **Optimizations**
   - Batch processing for embeddings
   - Cache for prototypes
   - Parallelized clustering

5. **Robustness**
   - Handling very long sessions
   - Handling states without patterns
   - Cluster quality metrics

## References

- **DBSCAN**: [Scikit-learn Documentation](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.DBSCAN.html)
- **Workflow Graphs**: See `core/models/workflow_graph.py`
- **State Embeddings**: See `core/embedding/state_embedding_builder.py`
core/graph/__init__.py (9 lines, new file)
@@ -0,0 +1,9 @@
"""Workflow graph construction, matching, and execution"""

from .graph_builder import GraphBuilder
from .node_matcher import NodeMatcher

__all__ = [
    "GraphBuilder",
    "NodeMatcher",
]
core/graph/node_matcher.py (305 lines, new file)
@@ -0,0 +1,305 @@
"""NodeMatcher - Matches ScreenStates against WorkflowNodes in real time."""
import logging
import json
from pathlib import Path
from datetime import datetime
from typing import List, Optional, Tuple, Dict, Any
import numpy as np

from core.models.screen_state import ScreenState
from core.models.workflow_graph import WorkflowNode
from core.embedding.state_embedding_builder import StateEmbeddingBuilder
from core.embedding.faiss_manager import FAISSManager
from core.execution.error_handler import ErrorHandler, ErrorType, RecoveryStrategy

logger = logging.getLogger(__name__)


class NodeMatcher:
    """Matcher that finds the WorkflowNode corresponding to a ScreenState."""

    def __init__(
        self,
        embedding_builder: Optional[StateEmbeddingBuilder] = None,
        faiss_manager: Optional[FAISSManager] = None,
        error_handler: Optional[ErrorHandler] = None,
        similarity_threshold: float = 0.85,
        failed_matches_dir: str = "data/failed_matches"
    ):
        self.embedding_builder = embedding_builder or StateEmbeddingBuilder()
        self.faiss_manager = faiss_manager
        self.error_handler = error_handler or ErrorHandler()
        self.similarity_threshold = similarity_threshold
        self.failed_matches_dir = Path(failed_matches_dir)
        self.failed_matches_dir.mkdir(parents=True, exist_ok=True)
        logger.info(f"NodeMatcher initialized with threshold={similarity_threshold}")

    def match(
        self,
        current_state: ScreenState,
        candidate_nodes: List[WorkflowNode]
    ) -> Optional[Tuple[WorkflowNode, float]]:
        """
        Find the WorkflowNode that best matches the current ScreenState.

        Returns:
            Tuple (node, confidence) if a match is found, None otherwise
        """
        if not candidate_nodes:
            logger.warning("No candidate nodes provided")
            return None

        state_embedding = self.embedding_builder.build(current_state)
        current_vector = state_embedding.get_vector()

        if self.faiss_manager:
            return self._match_with_faiss(current_vector, candidate_nodes)

        return self._match_linear(current_state, current_vector, candidate_nodes)
    def _match_with_faiss(
        self,
        query_vector: np.ndarray,
        candidate_nodes: List[WorkflowNode]
    ) -> Optional[Tuple[WorkflowNode, float]]:
        """Match using FAISS search."""
        results = self.faiss_manager.search(query_vector, k=5)

        if not results:
            return None

        best_match = None
        best_confidence = 0.0

        for result in results:
            similarity = result['similarity']
            if similarity < self.similarity_threshold:
                continue

            for node in candidate_nodes:
                if result['metadata'].get('node_id') == node.node_id:
                    if similarity > best_confidence:
                        best_match = node
                        best_confidence = similarity

        if best_match:
            logger.info(f"Matched node {best_match.node_id} with confidence {best_confidence:.3f}")
            return (best_match, best_confidence)

        return None

    def _match_linear(
        self,
        current_state: ScreenState,
        current_vector: np.ndarray,
        candidate_nodes: List[WorkflowNode]
    ) -> Optional[Tuple[WorkflowNode, float]]:
        """Match using linear search."""
        best_match = None
        best_confidence = 0.0

        for node in candidate_nodes:
            matches, confidence = node.matches(current_state, current_vector)

            if matches and confidence > best_confidence:
                best_match = node
                best_confidence = confidence

        if best_match and best_confidence >= self.similarity_threshold:
            logger.info(f"Matched node {best_match.node_id} with confidence {best_confidence:.3f}")
            return (best_match, best_confidence)

        # Matching failed - delegate to the ErrorHandler
        recovery = self.error_handler.handle_matching_failure(
            current_state,
            candidate_nodes,
            best_confidence,
            self.similarity_threshold
        )

        logger.warning(
            f"No match found (best confidence: {best_confidence:.3f}, threshold: {self.similarity_threshold})"
        )
        logger.info(f"Recovery strategy: {recovery.strategy_used.value} - {recovery.message}")

        # Also log the details locally for compatibility
        self._log_failed_match(current_state, current_vector, candidate_nodes, best_confidence)

        return None
    def validate_constraints(
        self,
        state: ScreenState,
        node: WorkflowNode
    ) -> bool:
        """Validate the node's constraints against the state."""
        template = node.screen_template

        if template.window_title_pattern:
            if not state.raw_level or not state.raw_level.window_title:
                return False

        return True

    def _log_failed_match(
        self,
        state: ScreenState,
        state_vector: np.ndarray,
        candidate_nodes: List[WorkflowNode],
        best_confidence: float
    ):
        """
        Log a matching failure with full details for analysis.

        Saves:
        - Screenshot of the unmatched state
        - Embedding vector
        - Similarities with all candidate nodes
        - Suggestions for updating or creating a node
        """
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        failed_match_id = f"failed_match_{timestamp}"
        failed_match_dir = self.failed_matches_dir / failed_match_id
        failed_match_dir.mkdir(parents=True, exist_ok=True)

        # Save the screenshot
        if state.raw_level and state.raw_level.screenshot_path:
            import shutil
            screenshot_dest = failed_match_dir / "screenshot.png"
            try:
                shutil.copy(state.raw_level.screenshot_path, screenshot_dest)
                logger.debug(f"Screenshot saved to {screenshot_dest}")
            except Exception as e:
                logger.error(f"Failed to copy screenshot: {e}")

        # Save the embedding vector
        vector_path = failed_match_dir / "state_embedding.npy"
        np.save(vector_path, state_vector)

        # Compute similarities with all nodes
        similarities = []
        for node in candidate_nodes:
            if node.screen_template.embedding_prototype_path:
                try:
                    prototype = np.load(node.screen_template.embedding_prototype_path)
                    similarity = float(np.dot(state_vector, prototype))
                    similarities.append({
                        'node_id': node.node_id,
                        'node_label': node.label,
                        'similarity': similarity,
                        'threshold': self.similarity_threshold,
                        'matched': similarity >= self.similarity_threshold
                    })
                except Exception as e:
                    logger.error(f"Failed to load prototype for node {node.node_id}: {e}")

        # Sort by decreasing similarity
        similarities.sort(key=lambda x: x['similarity'], reverse=True)

        # Generate suggestions
        suggestions = self._generate_suggestions(similarities, best_confidence)

        # Save the report
        report = {
            'timestamp': timestamp,
            'failed_match_id': failed_match_id,
            'state': {
                'window_title': state.raw_level.window_title if state.raw_level else None,
                'screenshot_path': str(state.raw_level.screenshot_path) if state.raw_level else None,
                'ui_elements_count': len(state.perception_level.ui_elements) if state.perception_level else 0
            },
            'matching_results': {
                'best_confidence': best_confidence,
                'threshold': self.similarity_threshold,
                'num_candidates': len(candidate_nodes),
                'similarities': similarities
            },
            'suggestions': suggestions
        }

        report_path = failed_match_dir / "report.json"
        with open(report_path, 'w') as f:
            json.dump(report, f, indent=2)

        logger.info(f"Failed match logged to {failed_match_dir}")
        logger.info(f"Suggestions: {', '.join(suggestions)}")
    def _generate_suggestions(
        self,
        similarities: List[Dict[str, Any]],
        best_confidence: float
    ) -> List[str]:
        """Generate action suggestions based on the similarities."""
        suggestions = []

        if not similarities:
            suggestions.append("CREATE_NEW_NODE: No candidate nodes, create a new node")
            return suggestions

        best_match = similarities[0]

        if best_confidence < 0.70:
            suggestions.append(
                f"CREATE_NEW_NODE: Very low similarity ({best_confidence:.3f}), "
                "probably a new state"
            )
        elif best_confidence < self.similarity_threshold:
            suggestions.append(
                f"UPDATE_NODE: Similarity close ({best_confidence:.3f}) to node "
                f"'{best_match['node_label']}', consider updating the prototype"
            )
            suggestions.append(
                f"ADJUST_THRESHOLD: Or lower the threshold from {self.similarity_threshold} "
                f"to {best_confidence - 0.02:.3f}"
            )

        # Check whether several nodes have close similarities
        if len(similarities) >= 2:
            diff = similarities[0]['similarity'] - similarities[1]['similarity']
            if diff < 0.05:
                suggestions.append(
                    f"AMBIGUOUS_MATCH: Two very similar nodes "
                    f"({similarities[0]['node_label']}: {similarities[0]['similarity']:.3f}, "
                    f"{similarities[1]['node_label']}: {similarities[1]['similarity']:.3f}), "
                    "refine the prototypes"
                )

        return suggestions

    def detect_ui_change(
        self,
        current_state: ScreenState,
        expected_node: WorkflowNode,
        current_similarity: float
    ) -> Tuple[bool, Optional[Any]]:
        """
        Detect whether the UI has changed significantly.

        Args:
            current_state: Current state
            expected_node: Expected node
            current_similarity: Current similarity with the prototype

        Returns:
            Tuple (ui_changed, recovery_result)
        """
        return self.error_handler.detect_ui_change(
            current_state,
            expected_node,
            current_similarity
        )

    def get_error_statistics(self) -> Dict[str, Any]:
        """
        Get error statistics from the ErrorHandler.

        Returns:
            Dict with error statistics
        """
        return self.error_handler.get_error_statistics()


if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    matcher = NodeMatcher()
    logger.info(f"NodeMatcher initialized: {matcher}")
core/graph/simple_state.py (18 lines, new file)
@@ -0,0 +1,18 @@
"""Simplified classes for GraphBuilder."""


class SimpleWindow:
    """Simplified window context."""
    def __init__(self):
        self.title = ""
        self.app_name = ""


class SimpleScreenState:
    """Simplified ScreenState for GraphBuilder."""
    def __init__(self, screen_state_id, timestamp, screenshot_path):
        self.screen_state_id = screen_state_id
        self.timestamp = timestamp
        self.screenshot_path = screenshot_path
        self.window = SimpleWindow()
        self.raw_level = None
        self.perception = None