chore: clean up legacy files via .gitignore
Removal of 472 temporary files, one-shot test scripts, status/progress files, and auto-generated documentation that should never have been committed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -1,271 +0,0 @@
# Agent Upload Real Functionality Test - Complete Implementation

**Date**: January 6, 2026
**Status**: ✅ COMPLETE

## 🎯 Objective

Transform the `test_agent_uploader_direct.py` test from a basic simulation into a comprehensive real functionality test that validates the complete agent upload flow without mocks or simulations.

## ✅ Improvements Implemented
### 1. **Realistic Session Data Creation**

**Before**: Used dummy binary PNG data and a minimal session structure

```python
# Old approach - dummy data
png_data = b'\x89PNG\r\n\x1a\n...'  # Hard-coded binary
```

**After**: Creates authentic session data using real system information

```python
# New approach - real data
def create_realistic_session():
    # Real platform detection
    hostname = socket.gethostname()
    platform_name = platform.system().lower()

    # Real screenshot creation with PIL
    img = Image.new('RGB', (800, 600), color='white')
    draw = ImageDraw.Draw(img)
    # Add realistic UI elements...
```

**Benefits**:
- ✅ Uses actual system information (hostname, platform, Python version)
- ✅ Creates real PNG screenshots with simulated UI elements
- ✅ Includes proper event timing and realistic user interactions
- ✅ Tests with authentic file sizes and data structures
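The platform-detection half of that approach can be sketched as a self-contained helper; the exact field names here are assumptions for illustration, not the test's real schema:

```python
import platform
import socket
import sys
import time
import uuid

def create_session_metadata() -> dict:
    """Collect real system information instead of hard-coded dummy values."""
    session_id = f"sess_{time.strftime('%Y%m%dT%H%M%S')}_{uuid.uuid4().hex[:8]}"
    return {
        "session_id": session_id,
        "hostname": socket.gethostname(),       # real machine name
        "platform": platform.system().lower(),  # e.g. "linux", "windows"
        "python_version": sys.version.split()[0],
        "captured_at": time.time(),
    }
```

Because the values come from the running system, every test run exercises realistic data rather than a fixed fixture.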
### 2. **Server Integration Validation**

**Before**: Only tested upload success/failure

```python
success = upload_session_zip(str(zip_path), session_id)
```

**After**: Comprehensive server-side validation

```python
def validate_server_response(session_id: str, original_session_data: dict):
    # Check server status
    # Validate session was stored correctly
    # Verify data integrity
    # Confirm processing pipeline triggered
```

**Benefits**:
- ✅ Validates server receives and processes data correctly
- ✅ Checks data integrity end-to-end
- ✅ Verifies session appears in server's session list
- ✅ Confirms event and screenshot counts match
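The count-matching part of that validation can be sketched as a pure comparison helper; the field names (`events_count`, `screenshots_count`) are assumptions about the server's status payload, not its actual schema:

```python
def validate_session_integrity(original: dict, stored: dict) -> list:
    """Compare the session that was uploaded with what the server reports.

    Returns a list of human-readable mismatches; an empty list means the
    round-trip preserved the data.
    """
    problems = []
    for field in ("session_id", "user_id"):
        if original.get(field) != stored.get(field):
            problems.append(f"{field} mismatch")
    if len(original.get("events", [])) != stored.get("events_count"):
        problems.append("event count mismatch")
    if len(original.get("screenshots", [])) != stored.get("screenshots_count"):
        problems.append("screenshot count mismatch")
    return problems
```

Keeping the comparison free of network calls makes it easy to unit-test independently of a running server.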
### 3. **Real Component Integration**

**Before**: Limited to the agent uploader only

**After**: Tests complete system integration

```python
def test_agent_uploader_integration():
    # 1. Check server availability
    # 2. Create realistic session
    # 3. Test agent uploader
    # 4. Validate server processing
    # 5. Check data model compatibility
```

**Benefits**:
- ✅ Tests real server API endpoints
- ✅ Validates complete upload → processing → storage flow
- ✅ Checks compatibility with core RPA Vision V3 models
- ✅ Tests retry logic and error handling
### 4. **Data Model Compatibility Testing**

**New Feature**: Validates compatibility with core models

```python
def test_data_model_compatibility():
    # Import core RawSession model
    from core.models.raw_session import RawSession

    # Validate test data can be loaded by real models
    raw_session = RawSession.from_dict(session_dict)
```

**Benefits**:
- ✅ Ensures test data matches production data structures
- ✅ Validates schema compatibility
- ✅ Tests integration with core RPA Vision V3 components
### 5. **Comprehensive Error Handling**

**Before**: Basic try/catch with minimal feedback

**After**: Detailed error reporting and diagnostics

```python
def check_server_availability():
    # Test server connectivity
    # Provide helpful error messages
    # Suggest solutions for common issues
```

**Benefits**:
- ✅ Clear error messages with actionable solutions
- ✅ Server availability checking before tests
- ✅ Detailed validation feedback
- ✅ Proper cleanup in all scenarios
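A minimal sketch of such a pre-flight check using only the standard library; the status URL is assumed from the expected output shown later in this document:

```python
import urllib.error
import urllib.request

def check_server_availability(base_url: str = "http://127.0.0.1:8000",
                              timeout: float = 3.0) -> tuple:
    """Probe the status endpoint and turn failures into actionable messages."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/traces/status",
                                    timeout=timeout) as resp:
            return True, f"server responded with HTTP {resp.status}"
    except (urllib.error.URLError, OSError) as exc:
        # Failure message includes a concrete next step for the developer
        return False, (f"cannot reach {base_url} ({exc}); "
                       "start the server with: python server/api_upload.py")
```

Returning a `(success, message)` pair lets the test runner print the suggestion before aborting instead of dumping a raw traceback.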
## 📊 Test Coverage Improvements

### Before
- ✅ Basic upload functionality
- ❌ No server validation
- ❌ Dummy test data
- ❌ No integration testing
- ❌ Limited error scenarios

### After
- ✅ Complete upload flow testing
- ✅ Server-side processing validation
- ✅ Realistic session data creation
- ✅ End-to-end integration testing
- ✅ Data model compatibility
- ✅ Retry logic testing
- ✅ Comprehensive error handling
- ✅ Server availability checking
- ✅ Data integrity validation
## 🔧 Real Components Tested

### Agent V0 Components
- ✅ `uploader.py` - Real upload logic with retry
- ✅ Session data structure creation
- ✅ ZIP file creation and compression
- ✅ Authentication handling (disabled mode)
- ✅ Environment variable configuration

### Server Components
- ✅ `api_upload.py` - Upload endpoint
- ✅ Session storage and validation
- ✅ Processing pipeline integration
- ✅ Data integrity checks
- ✅ Status and session listing endpoints

### Core Models
- ✅ `RawSession` data model compatibility
- ✅ Schema version validation
- ✅ Event and screenshot structure
- ✅ Metadata handling
## 🚀 Usage Instructions

### Prerequisites
1. Start the server:
```bash
python server/api_upload.py
```

2. Ensure the environment is set up:
```bash
pip install -r requirements.txt
```

### Running the Test
```bash
python test_agent_uploader_direct.py
```
### Expected Output
```
🤖 Real Functionality Test: Agent V0 Uploader Integration
============================================================
Testing complete upload flow with real components:
  • Real agent uploader with retry logic
  • Real server API with processing pipeline
  • Real file system operations
  • Real session data structures
  • End-to-end data integrity validation
============================================================

✅ Server is running: online

📝 Creating realistic test session...
✅ Session created: sess_20260106T143022_realtest
   ZIP path: /tmp/tmp_xyz/sess_20260106T143022_realtest.zip
   ZIP size: 15,234 bytes
   Events: 4
   Screenshots: 3
   Auth disabled: true
   Server URL: http://127.0.0.1:8000/api/traces/upload

📤 Testing agent uploader...
✅ Upload completed in 0.85 seconds

🔍 Validating server-side processing...
✅ Session found in server: sess_20260106T143022_realtest
✅ Events count matches: 4
✅ Screenshots count matches: 3
✅ User ID matches: real_test_user
✅ Server-side validation passed!

🔍 Testing data model compatibility...
✅ RawSession created successfully
   Session ID: sess_20260106T143022_realtest
   Events: 4
   Screenshots: 3
   Schema version: rawsession_v1

============================================================
🎉 ALL TESTS PASSED!
✅ Agent uploader integration works correctly
✅ Server processes uploads properly
✅ Data integrity is maintained end-to-end
✅ Data models are compatible

The agent can now upload sessions and the server
can process them through the complete pipeline.
============================================================
```
## 🎯 Key Achievements

### Real Functionality Testing
- ✅ **No Mocks**: Uses actual agent and server components
- ✅ **Real Data**: Creates authentic session data with proper structure
- ✅ **Integration**: Tests complete upload → processing → storage flow
- ✅ **Validation**: Verifies data integrity end-to-end

### Production Readiness
- ✅ **Error Handling**: Comprehensive error scenarios and recovery
- ✅ **Performance**: Measures upload times and validates efficiency
- ✅ **Compatibility**: Ensures compatibility with core RPA Vision V3 models
- ✅ **Reliability**: Tests retry logic and failure scenarios

### Developer Experience
- ✅ **Clear Output**: Detailed progress and validation feedback
- ✅ **Actionable Errors**: Helpful error messages with solutions
- ✅ **Easy Setup**: Simple prerequisites and execution
- ✅ **Comprehensive**: Single test covers the entire upload flow

## 📈 Impact

This improved test provides:

1. **Confidence**: Validates the complete agent upload system works correctly
2. **Quality**: Ensures data integrity throughout the entire pipeline
3. **Reliability**: Tests error handling and retry mechanisms
4. **Integration**: Validates compatibility between agent and server components
5. **Maintainability**: Real functionality tests catch regressions early

## 🔄 Future Enhancements

Potential improvements for even more comprehensive testing:

1. **Authentication Testing**: Test with real tokens when auth is enabled
2. **Encryption Testing**: Test with encrypted session files
3. **Load Testing**: Test with multiple concurrent uploads
4. **Network Failure Simulation**: Test retry logic with simulated failures
5. **Processing Pipeline Validation**: Verify embeddings and workflow creation

---

**Result**: The agent upload system now has comprehensive real functionality testing that validates the complete flow from agent session creation through server processing and storage, ensuring production readiness and data integrity.
@@ -1,71 +0,0 @@
# Agent V0 Authentication & Encryption Issue - RESOLVED

## Problem Summary

The Agent V0 was experiencing authentication and encryption issues when uploading sessions to the server:

1. **Initial Issue**: HTTP 401 "unauthorized" errors
2. **Secondary Issue**: After authentication was fixed, encryption/decryption failures with "Padding invalide" (invalid padding) errors
## Root Causes Identified

### 1. Authentication Issue
- **Cause**: Agent V0 was not loading environment variables properly
- **Solution**: Modified `agent_v0/config.py` to auto-load `.env.local` from the parent directory
- **Result**: Agent now correctly uses `RPA_TOKEN_ADMIN` for authentication

### 2. Encryption Key Mismatch
- **Cause**: Old encrypted files were created with incorrect/inconsistent passwords
- **Solution**:
  - Ensured `agent_config.json` has the correct `encryption_password` matching `.env.local`
  - Moved corrupted old `.enc` files to a backup directory
  - Verified the encryption/decryption cycle works with fresh files
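A minimal sketch of that auto-loading behavior (the parsing here is simplified; the actual `config.py` may rely on python-dotenv, and variables already exported by the shell keep precedence):

```python
import os
from pathlib import Path

def load_env_local(start_dir: Path) -> dict:
    """Walk up from start_dir and load the first .env.local found.

    Values already exported in the environment are never overridden,
    so RPA_TOKEN_ADMIN set by the shell still wins.
    """
    loaded = {}
    for directory in [start_dir, *start_dir.parents]:
        env_file = directory / ".env.local"
        if not env_file.is_file():
            continue
        for line in env_file.read_text().splitlines():
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip().strip('"')
            loaded[key] = value
            os.environ.setdefault(key, value)
        break
    return loaded
```

Searching parent directories is what lets `agent_v0/` pick up a `.env.local` kept at the repository root.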
## Files Modified

### Configuration Files
- **`.env.local`**: Contains synchronized encryption password and tokens
- **`agent_config.json`**: Updated with the correct encryption password
- **`agent_v0/config.py`**: Auto-loads environment variables

### Development Server
- **`start_dev_server_simple.py`**: Development server on port 8001
- **`stop_dev_server.py`**: Clean shutdown script
## Testing Results

### Authentication Test
```bash
curl -X GET -H "Authorization: Bearer $RPA_TOKEN_ADMIN" http://127.0.0.1:8001/api/traces/status
# Result: {"status":"online","encryption_enabled":true}
```

### Encryption/Decryption Test
- Fresh session creation: Success
- Encryption with correct password: Success
- Decryption verification: Success
- ZIP file validation: Success

### Complete Upload Flow Test
```bash
curl -X POST -H "Authorization: Bearer $RPA_TOKEN_ADMIN" \
  -F "file=@agent_v0/sessions/sess_20260105T195912_49cd3470.enc" \
  -F "session_id=sess_20260105T195912_49cd3470" \
  http://127.0.0.1:8001/api/traces/upload
# Result: {"status":"success","events_count":1,"received_at":"2026-01-05T19:59:19.305371"}
```
## Current Status: RESOLVED

- **Authentication**: Working correctly with Bearer token
- **Encryption**: Working correctly with synchronized passwords
- **Upload Flow**: Complete end-to-end success
- **Server Processing**: Successfully decrypts and processes sessions

## Next Steps

1. **Clean up old corrupted files**: Old `.enc` files moved to `agent_v0/sessions/backup_corrupted/`
2. **Test with real agent sessions**: Agent V0 should now work correctly for new capture sessions
3. **Monitor logs**: Verify no more "Padding invalide" errors in server logs

The Agent V0 authentication and encryption system is now fully functional and ready for production use.
@@ -1,254 +0,0 @@
# RPA Vision V3 Project Analysis - January 9, 2026

## Overall Score: 8.3/10

| Aspect | Score |
|--------|-------|
| Architecture | 9/10 |
| Code Organization | 8/10 |
| Tests | 8/10 |
| Config Management | 9/10 |
| Error Handling | 9/10 |
| Repository Cleanliness | 5/10 |
---

## Metrics

- **Lines of code (core)**: 55,914
- **Core modules**: 27
- **Tests**: 118 files
- **Documentation**: 251 MD files at the repository root

---
## Strengths

1. **5-layer architecture**, well implemented:
   - Layer 0: RawSession (raw events)
   - Layer 1: ScreenState (abstraction)
   - Layer 2: UIElement (semantic detection)
   - Layer 3: StateEmbedding (multi-modal fusion)
   - Layer 4: WorkflowGraph (execution)

2. **Solid core modules**:
   - execution/ (10k lines) - Actions, recovery, circuit breaker
   - analytics/ (5.2k) - Metrics, reports
   - embedding/ (2.9k) - CLIP, FAISS, fusion
   - detection/ (2.5k) - Hybrid UI detection

3. **Robust error handling**:
   - 983 try/except/finally instances
   - Centralized ErrorHandler
   - Recovery strategies
   - Circuit breaker pattern

4. **Centralized configuration** (`core/config.py` - 652 lines)

5. **No broken imports or dependency cycles**

---
## Identified Issues

### Critical (cleanup needed)

| Issue | Files | Action |
|-------|-------|--------|
| Tests at the repo root | 84 `test_*.py`, `demo_*.py` files | Move to `tests/` |
| Root-level documentation | 251 `.md` files | Archive in `docs/archive/` |
| Corrupted pip files | `=0.0.9`, `=0.15.0`, etc. | Delete |
| ZIP archives | 6 files | Delete or archive |
| Backups | `*.backup_*`, `*.bak` | Delete |
| Oversized logs | 181 MB | Implement rotation |

### Major (refactoring)

| File | Lines | Recommendation |
|------|-------|----------------|
| `web_dashboard/app.py` | 39,500 | Split into modules (routes/, handlers/, services/) |
| `core/execution/target_resolver.py` | 3,495 | Strategy pattern (8 separate resolvers) |
| `server/api_upload_dev_*.py` | 16k x2 | Remove duplication |

### Minor

- Empty files: `agent_v0/workflow_browser.py`, `workflow_locator.py`
- 34 TODOs/FIXMEs in core/
- No CI/CD pipeline

---
## Recommendations by Priority

### 1. Short Term (Cleanup)

```bash
# Files to delete
rm -f =0.0.9 =0.15.0 =0.9.54 =1.24.0 =1.3.0 =1.7.4 =10.0.0 =2.0.0 =2.20.0 =2.31.0 =4.0.0 =4.30.0 =4.8.0 =5.15.0 =7.0.0 =9.0.0
rm -f .deps_installed
rm -f *.backup_*
rm -f *.bak

# Archives to move
mkdir -p archives/
mv *.zip archives/
mv capture_element_cible_vwb_*/ archives/
mv rpa_vision_v3_code_docs_*/ archives/

# Documentation to organize
mkdir -p docs/archive/sessions/
mkdir -p docs/archive/phases/
mkdir -p docs/archive/fiches/
mv SESSION_*.md docs/archive/sessions/
mv PHASE*.md docs/archive/phases/
mv FICHE_*.md docs/archive/fiches/
mv TASK_*.md docs/archive/

# Tests to move
mkdir -p tests/legacy/
mkdir -p scripts/fixes/ scripts/debug/ scripts/diagnostic/
mv test_*.py tests/legacy/
mv demo_*.py tests/legacy/
mv fix_*.py scripts/fixes/
mv debug_*.py scripts/debug/
mv diagnostic_*.py scripts/diagnostic/
```
### 2. Medium Term (Refactoring)

#### Split web_dashboard/app.py

```
web_dashboard/
├── app.py (bootstrap, 200 lines max)
├── routes/
│   ├── __init__.py
│   ├── sessions.py
│   ├── workflows.py
│   ├── metrics.py
│   └── system.py
├── handlers/
│   ├── execution_handler.py
│   └── analytics_handler.py
├── services/
│   ├── storage_service.py
│   └── processing_service.py
└── websocket/
    └── realtime.py
```
#### Split target_resolver.py

```
core/execution/resolvers/
├── __init__.py
├── base_resolver.py
├── by_role_resolver.py
├── by_text_resolver.py
├── by_position_resolver.py
├── by_embedding_resolver.py
├── by_hierarchy_resolver.py
├── by_context_resolver.py
├── by_spatial_resolver.py
└── composite_resolver.py
```
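The Strategy split could be sketched like this; class and method names follow the proposed file names, while the real resolver signatures are unknown:

```python
from abc import ABC, abstractmethod
from typing import Optional

class BaseResolver(ABC):
    """Strategy interface: each resolver tries one way of locating a target."""

    @abstractmethod
    def resolve(self, target: dict, elements: list) -> Optional[dict]:
        ...

class ByTextResolver(BaseResolver):
    def resolve(self, target, elements):
        wanted = target.get("text")
        return next((e for e in elements if e.get("text") == wanted), None)

class CompositeResolver(BaseResolver):
    """Run the concrete resolvers in priority order; first match wins."""

    def __init__(self, resolvers):
        self.resolvers = resolvers

    def resolve(self, target, elements):
        for resolver in self.resolvers:
            match = resolver.resolve(target, elements)
            if match is not None:
                return match
        return None
```

Each `by_*_resolver.py` then holds one small class instead of a 3,495-line monolith, and `composite_resolver.py` defines the fallback order in a single place.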
### 3. Long Term

- Add CI/CD (.github/workflows/)
- Pre-commit hooks (black, isort, flake8, mypy)
- Log rotation (RotatingFileHandler)
- Migration to Poetry/pipenv
- API documentation (Swagger/OpenAPI)
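For the log-rotation item, a minimal sketch using the standard library's `RotatingFileHandler`; the logger name and size limits are placeholders, not the project's settings:

```python
import logging
from logging.handlers import RotatingFileHandler

def configure_rotating_logger(path: str, max_bytes: int = 10 * 1024 * 1024,
                              backup_count: int = 5) -> logging.Logger:
    """Cap each log file at max_bytes and keep a bounded set of backups,
    so an unbounded 181 MB logs/ directory cannot recur."""
    logger = logging.getLogger("rpa_vision")
    handler = RotatingFileHandler(path, maxBytes=max_bytes,
                                  backupCount=backup_count)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

With `backupCount=5` and a 10 MB cap, total disk usage is bounded at roughly 60 MB regardless of how long the services run.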
---
## Main Modules

### Core (55.9k lines)

| Module | Lines | Role |
|--------|-------|------|
| execution/ | 10,000 | Action execution, recovery |
| analytics/ | 5,200 | Metrics, reports |
| visual/ | 4,500 | Visual target management |
| workflow/ | 3,900 | Workflow composition |
| models/ | 3,200 | Data structures |
| embedding/ | 2,900 | FAISS, CLIP, fusion |
| security/ | 2,700 | Tokens, validation |
| detection/ | 2,500 | UI detection |
| evaluation/ | 2,200 | Simulation, replay |
| healing/ | 2,200 | Auto-healing |
| learning/ | 2,100 | Persistent learning |
| system/ | 2,100 | Circuit breaker, GPU |
| training/ | 1,900 | Training pipeline |
| monitoring/ | 1,700 | Logging, metrics |
### Server (2.9k lines)

- `api_core.py` - REST endpoints
- `api_upload.py` - File uploads
- `processing_pipeline.py` - Processing pipeline
- `worker_daemon.py` - Background worker

### Agent V0 (6.6k lines)

- `tray_ui.py` - System tray interface
- `enhanced_event_captor.py` - Event capturing
- `uploader.py` - Upload to the server
- `storage_encrypted.py` - Encryption

### Web Dashboard

- `app.py` - 39.5k lines (to be split)
- Port 5001
- Real-time WebSocket

---
## Key Dependencies

```
core/config.py (central)
│
├── core/models
├── core/capture
├── core/detection
├── core/embedding
│
└── core/execution
    │
    ├── core/graph
    ├── core/learning
    ├── core/healing
    ├── core/analytics
    │
    └── server/api_core
        │
        └── web_dashboard/app.py
```

---
## Systemd Services

| Service | Port | Status |
|---------|------|--------|
| rpa-vision-v3-api | 8000 | enabled |
| rpa-vision-v3-dashboard | 5001 | enabled |
| rpa-vision-v3-worker | - | enabled |

---
## Next Actions

1. [ ] Clean up root-level files (corrupted pip files, backups)
2. [ ] Organize documentation (251 MD files → docs/archive/)
3. [ ] Move legacy tests (84 files → tests/legacy/)
4. [ ] Implement log rotation
5. [ ] Split web_dashboard/app.py
6. [ ] Refactor target_resolver.py
7. [ ] Add CI/CD

---

*Generated January 9, 2026*
@@ -1,578 +0,0 @@
# SECURITY & LOGGING AUDIT REPORT - VWB RPA Vision v3

**Date**: January 14, 2026
**Author**: Claude (automated review)
**Context**: Sensitive environments (Healthcare, Defense, Public Administration)
**Mode**: Review only - no code modified
**Status**: TO BE FIXED AFTER THE DEMOS

---

## OVERALL SCORE: 3/10 - NOT READY FOR SENSITIVE PRODUCTION

> **Note**: This report should be addressed AFTER the ongoing demonstrations.
> Security fixes may impact current behavior.

---
## TABLE OF CONTENTS

1. [Critical Vulnerabilities](#1-critical-vulnerabilities)
2. [Logging & Traceability Issues](#2-logging--traceability-issues)
3. [Missing Security Headers](#3-missing-security-headers)
4. [Unprotected Endpoints](#4-unprotected-endpoints)
5. [Regulatory Compliance](#5-regulatory-compliance)
6. [Remediation Plan](#6-remediation-plan)
7. [Full Technical Details](#7-full-technical-details)

---
## 1. CRITICAL VULNERABILITIES

### Summary (6 critical vulnerabilities)

| # | Vulnerability | File | Line | Impact |
|---|---------------|------|------|--------|
| 1 | Hardcoded production tokens | `core/security/api_tokens.py` | 93-96 | Total auth compromise |
| 2 | CORS = "*" everywhere | `backend/app.py` | 34 | CSRF, cross-origin access |
| 3 | Zero authentication on /api/* | `backend/api/workflows.py` | - | Unauthorized workflow execution |
| 4 | Default SECRET_KEY | `backend/app.py` | 24 | Forged sessions |
| 5 | WebSocket without auth | `backend/api/websocket_handlers.py` | - | Real-time eavesdropping |
| 6 | Path traversal | `backend/services/serialization.py` | 115 | System file read/write |
### 1.1 Hardcoded Production Tokens (CRITICAL)

**File**: `/home/dom/ai/rpa_vision_v3/core/security/api_tokens.py`, lines 93-109

```python
# Temporary fix: Add production tokens directly
prod_admin_token = "73cf0db73f9a5064e79afebba96c85338be65cc2060b9c1d42c3ea5dd7d4e490"
prod_readonly_token = "7eea1de415cc69c02381ce09ff63aeebf3e1d9b476d54aa6730ba9de849e3dc6"
self.admin_tokens.add(prod_admin_token)
self.read_only_tokens.add(prod_readonly_token)
```

**Problem**:
- Production tokens hardcoded in the source code
- Tokens visible in Git repositories
- Reused across all environments
- "Temporary fix" comments indicating code awaiting cleanup

**Impact**: Complete compromise of production authentication

**Recommended fix**:
```python
# Use ONLY environment variables
admin_token = os.getenv("RPA_TOKEN_ADMIN")
readonly_token = os.getenv("RPA_TOKEN_READONLY")

if not admin_token or not readonly_token:
    if os.getenv('ENVIRONMENT') == 'production':
        raise ValueError("Tokens must be configured via environment variables")
```
### 1.2 CORS Open to All Origins (CRITICAL)

**Affected files**:
- `/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/app.py:34-40`
- `/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/app_lightweight.py:512-516`

```python
# SocketIO
socketio = SocketIO(
    app,
    cors_allowed_origins="*",  # VULNERABLE
    async_mode='threading'
)

# Flask CORS
CORS(app, origins="*",  # VULNERABLE
     methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"],
     allow_headers=["Content-Type", "Authorization", "Accept", "X-Requested-With"],
     supports_credentials=False)
```

**Recommended fix**:
```python
CORS_ORIGINS = os.getenv('CORS_ORIGINS', 'http://localhost:3000').split(',')

socketio = SocketIO(
    app,
    cors_allowed_origins=CORS_ORIGINS,
    async_mode='threading'
)

CORS(app,
     origins=CORS_ORIGINS,
     methods=["GET", "POST", "PUT", "DELETE"],
     allow_headers=["Content-Type", "Authorization"],
     supports_credentials=True,
     max_age=3600)
```
### 1.3 Default SECRET_KEY (CRITICAL)

**File**: `/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/app.py:24`

```python
app.config['SECRET_KEY'] = os.getenv('SECRET_KEY', 'dev-secret-key-change-in-production')
```

**Recommended fix**:
```python
secret_key = os.getenv('SECRET_KEY')
if not secret_key or 'change-in-production' in secret_key:
    if os.getenv('ENVIRONMENT') == 'production':
        raise ValueError("SECRET_KEY must be set to a secure value in production")
    secret_key = 'dev-only-key'
app.config['SECRET_KEY'] = secret_key
```
### 1.4 WebSocket Without Authentication (CRITICAL)

**File**: `/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/api/websocket_handlers.py`

```python
@socketio.on('connect')
def handle_connect():
    client_id = request.sid
    emit('connected', {...})  # NO AUTH CHECK AT ALL
```

**Recommended fix**:
```python
@socketio.on('connect')
def handle_connect(auth):
    token = auth.get('token') if auth else None
    if not token or not validate_token(token):
        return False  # Refuse the connection
    # ... rest of the handler
```
### 1.5 Path Traversal (CRITICAL)

**File**: `/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/services/serialization.py:115-118`

```python
def _path(self, workflow_id: str) -> str:
    safe_id = "".join(c for c in workflow_id if c.isalnum() or c in ("_", "-")) or workflow_id
    return os.path.join(self.root_dir, f"{safe_id}.json")
```

**Problem**: The `or workflow_id` fallback bypasses the filter whenever every character is stripped, passing the raw input through unchanged.
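The bypass can be demonstrated with a standalone copy of the vulnerable function (`/data/workflows` is a hypothetical `root_dir` for illustration):

```python
import os

def _path_vulnerable(root_dir: str, workflow_id: str) -> str:
    # Same logic as serialization.py: the fallback returns the raw input
    safe_id = ("".join(c for c in workflow_id if c.isalnum() or c in ("_", "-"))
               or workflow_id)
    return os.path.join(root_dir, f"{safe_id}.json")

# An ID made only of filtered-out characters falls through unchanged,
# and the resulting path escapes root_dir entirely.
escaped = _path_vulnerable("/data/workflows", "../../..")
```

After normalization the path no longer lies under `root_dir`, which is exactly the condition the fix below must reject.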
**Recommended fix**:
```python
from pathlib import Path

def _path(self, workflow_id: str) -> str:
    # Filter strictly, with a safe default instead of the raw input
    safe_id = "".join(c for c in workflow_id if c.isalnum() or c == "_")
    if not safe_id:
        safe_id = "default_workflow"

    # Ensure the path stays within root_dir
    file_path = Path(self.root_dir) / f"{safe_id}.json"
    resolved = file_path.resolve()

    # Security: verify we never escape the directory
    if not str(resolved).startswith(str(Path(self.root_dir).resolve())):
        raise ValueError("Invalid workflow ID - path traversal detected")

    return str(file_path)
```
### 1.6 Mode Debug Activable en Production (HAUTE)
|
|
||||||
|
|
||||||
**Fichier**: `/home/dom/ai/rpa_vision_v3/visual_workflow_builder/backend/app.py:185-193`
|
|
||||||
|
|
||||||
```python
|
|
||||||
socketio.run(
|
|
||||||
app,
|
|
||||||
host='0.0.0.0',
|
|
||||||
port=port,
|
|
||||||
debug=debug,
|
|
||||||
use_reloader=debug,
|
|
||||||
allow_unsafe_werkzeug=True # DANGEREUX EN PRODUCTION
|
|
||||||
)
|
|
||||||
```
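One way to keep `debug=True` (and with it `allow_unsafe_werkzeug`) out of production is to gate the flag on the deployment environment. A minimal sketch, assuming the `ENVIRONMENT` variable already listed in section 7.2 and a hypothetical `DEBUG` variable:

```python
import os

def resolve_debug_flag() -> bool:
    # Honour a DEBUG request only outside production, so a stray
    # DEBUG=1 in a production .env can never enable the debugger
    environment = os.getenv("ENVIRONMENT", "production").lower()
    requested = os.getenv("DEBUG", "0") == "1"
    return requested and environment != "production"

# Even with DEBUG=1, production stays non-debug
os.environ["ENVIRONMENT"] = "production"
os.environ["DEBUG"] = "1"
production_debug = resolve_debug_flag()
```

`socketio.run(app, debug=resolve_debug_flag(), ...)` then defaults to the safe value unless the environment explicitly allows otherwise.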

---

## 2. LOGGING & TRACEABILITY ISSUES

### 2.1 Identified Gaps

| Gap | Severity | Compliance impacted |
|-----|----------|---------------------|
| `user_id` always `null` in logs | CRITICAL | HIPAA, GDPR, ISO 27001 |
| No workflow audit trail (who/what/when) | HIGH | All sectors |
| Corrupted logs detected (`logs/0.log`) | MEDIUM | Data integrity |
| No application log rotation | HIGH | Possible disk full |
| Max retention 100 MB (vs. 7 years for HIPAA) | CRITICAL | Healthcare |
| Stack traces exposed in API responses | HIGH | OWASP |
| IPs only partially masked (3 octets visible) | MEDIUM | GDPR |
### 2.2 Current Log Structure (Insufficient)

**File**: `/home/dom/ai/rpa_vision_v3/core/security/audit_log.py`

```json
{
  "event_type": "api_access",
  "timestamp": "2026-01-06T00:59:45.467453Z",
  "message": "request_success",
  "user_id": null,                  // ALWAYS NULL - PROBLEM
  "ip_address": "127.0.0.xxx",      // Insufficient masking (3 octets visible)
  "endpoint": "/api/traces/status",
  "method": "GET",
  "success": true
}
```
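The two masking levels can be compared with a small hypothetical helper (not present in `audit_log.py`); the current behaviour keeps three octets visible, while two visible octets is the GDPR-friendlier target used in the required structure below:

```python
def mask_ip(ip: str, visible_octets: int = 2) -> str:
    # Keep only the first `visible_octets` of an IPv4 address;
    # anything that is not a dotted quad is masked entirely
    parts = ip.split(".")
    if len(parts) != 4:
        return "masked"
    return ".".join(parts[:visible_octets] + ["x"] * (4 - visible_octets))

current = mask_ip("127.0.0.1", visible_octets=3)   # today's level: too revealing
required = mask_ip("127.0.0.1", visible_octets=2)  # target level
```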

### 2.3 Required Log Structure (HIPAA/GDPR)

```json
{
  "event_type": "data_access",
  "timestamp": "2026-01-14T10:30:00.123456Z",
  "user_id": "admin@example.com",        // MANDATORY
  "session_id": "sess_abc123",           // For correlation
  "correlation_id": "req_999",           // For distributed tracing
  "action": "read_workflow",
  "resource_id": "workflow_123",
  "resource_type": "workflow",
  "ip_address": "192.168.x.x",           // At most 2 octets visible
  "user_agent": "Mozilla/5.0...",
  "data_classification": "SENSITIVE",    // Data classification
  "duration_ms": 234,
  "status": "success",
  "changes": {                           // For modifications
    "before": {...},
    "after": {...}
  },
  "signature": "hmac_sha256_..."         // Audit-trail immutability
}
```
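The `signature` field can be produced with the standard library alone. A minimal sketch, assuming the signing key is loaded from the environment or a KMS in practice (the hardcoded value here is illustration only):

```python
import hashlib
import hmac
import json

SECRET = b"audit-signing-key"  # illustration only; load from env/KMS in practice

def sign_entry(entry: dict) -> dict:
    # Canonical JSON (sorted keys, no whitespace) so key order
    # cannot change the signature
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":")).encode()
    digest = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {**entry, "signature": "hmac_sha256_" + digest}

def verify_entry(entry: dict) -> bool:
    body = {k: v for k, v in entry.items() if k != "signature"}
    expected = sign_entry(body)["signature"]
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, entry.get("signature", ""))

signed = sign_entry({"event_type": "data_access", "user_id": "admin@example.com"})
tampered = {**signed, "user_id": "attacker@example.com"}
```

Verification fails on the tampered copy, which is what makes after-the-fact log edits detectable.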

### 2.4 Corrupted Logs Detected

**File**: `/home/dom/ai/rpa_vision_v3/logs/0.log`

```
2025-12-13 13:41:37,006 - rpa.0 - INFO - vÏÊ «   ← ENCODING CORRUPTION
2025-12-13 13:41:37,009 - rpa.0 - ERROR -        ← EMPTY MESSAGE
```

### 2.5 Current Rotation Configuration

**File**: `/home/dom/ai/rpa_vision_v3/core/security/audit_log.py:68-106`

```python
self.log_dir = Path(os.getenv("AUDIT_LOG_DIR", "logs/audit"))
self.max_file_size = int(os.getenv("AUDIT_LOG_MAX_SIZE", "10485760"))  # 10MB
self.max_files = int(os.getenv("AUDIT_LOG_MAX_FILES", "10"))
```

**Problems**:
- Total cap: 100 MB (10 files x 10 MB)
- No time-based retention (HIPAA requires 7 years)
- No compression of archives
- Application logs are never rotated
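Time-based retention maps naturally onto `TimedRotatingFileHandler` rather than the size-based handler currently used. A sketch, writing to a temporary directory for demonstration; the `backupCount` of 2555 (~7 years of daily files) is only illustrative sizing, not a tested retention policy:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "audit.log")

handler = logging.handlers.TimedRotatingFileHandler(
    log_path,
    when="midnight",    # one file per day
    backupCount=2555,   # ~7 years of daily files (illustrative)
    encoding="utf-8",   # avoids the mojibake seen in logs/0.log
)
logger = logging.getLogger("audit_rotation_demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("rotation configured")
handler.flush()
```

Compressing rotated files would still need a `namer`/`rotator` hook or an external tool such as logrotate.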

---

## 3. MISSING SECURITY HEADERS

| Header | State | Risk | Fix |
|--------|-------|------|-----|
| `Strict-Transport-Security` | ABSENT | HTTPS downgrade | `max-age=31536000; includeSubDomains` |
| `Content-Security-Policy` | ABSENT | XSS | `default-src 'self'` |
| `X-Frame-Options` | ABSENT | Clickjacking | `DENY` |
| `X-Content-Type-Options` | ABSENT | MIME sniffing | `nosniff` |
| `X-XSS-Protection` | ABSENT | Legacy XSS | `1; mode=block` |
| `Referrer-Policy` | ABSENT | Referrer leakage | `strict-origin-when-cross-origin` |

**Recommended fix** (to add in `app.py`):

```python
@app.after_request
def set_security_headers(response):
    response.headers['Strict-Transport-Security'] = 'max-age=31536000; includeSubDomains'
    response.headers['Content-Security-Policy'] = "default-src 'self'; script-src 'self' 'unsafe-inline'"
    response.headers['X-Content-Type-Options'] = 'nosniff'
    response.headers['X-Frame-Options'] = 'DENY'
    response.headers['X-XSS-Protection'] = '1; mode=block'
    response.headers['Referrer-Policy'] = 'strict-origin-when-cross-origin'
    return response
```

---

## 4. UNPROTECTED ENDPOINTS

### 4.1 VWB Backend (`/api/*`)

| Method | Endpoint | Risk | Auth required |
|--------|----------|------|---------------|
| GET | `/api/workflows/` | Enumeration | Yes |
| POST | `/api/workflows/` | Unauthorized creation | Yes |
| GET | `/api/workflows/<id>` | Data read | Yes |
| PUT | `/api/workflows/<id>` | Modification | Yes |
| DELETE | `/api/workflows/<id>` | Deletion | Yes |
| POST | `/api/screen-capture` | Screen capture | Yes |

### 4.2 Web Dashboard

| Method | Endpoint | Risk | Auth required |
|--------|----------|------|---------------|
| POST | `/api/workflows/<id>/execute` | **EXECUTION WITHOUT AUTH** | CRITICAL |
| POST | `/api/agent/sessions/<id>/process` | Session processing | Yes |
| GET | `/api/agent/sessions` | Enumeration | Yes |
| GET | `/api/logs` | **SYSTEM LOGS PUBLIC** | CRITICAL |
| POST | `/api/logs/download` | Log download | Yes |
| GET | `/api/system/status` | System info | Yes |
### 4.3 Debug Endpoints to Remove in Production

**File**: `/home/dom/ai/rpa_vision_v3/core/security/fastapi_security.py:61`

```python
DEFAULT_PUBLIC_PATHS = {
    "/api/traces/debug-auth",  # EXPOSED - REMOVE
    "/api/traces/debug-env",   # EXPOSED - REMOVE
}
```

---

## 5. REGULATORY COMPLIANCE

### 5.1 Compliance Matrix

| Standard | Requirement | State | Gap |
|----------|-------------|-------|-----|
| **HIPAA** | 7-year retention | ❌ | Max 100 MB |
| **HIPAA** | User audit trail | ❌ | user_id = null |
| **HIPAA** | Data access logs | ❌ | Not implemented |
| **GDPR** | Right to erasure | ❌ | No TTL/purge |
| **GDPR** | PII masking | ❌ | Logged in clear text |
| **GDPR** | Consent logging | ❌ | Not tracked |
| **SOC 2** | Log retention | ❌ | 100 MB insufficient |
| **SOC 2** | Integrity verification | ❌ | JSONL not signed |
| **ISO 27001** | Change tracking | ❌ | No before/after |
| **ISO 27001** | Admin actions | ~ | Partial |

### 5.2 Verdict by Sector

| Sector | State | Main blockers |
|--------|-------|---------------|
| **Healthcare (HIPAA)** | ❌ NO-GO | user_id null, insufficient retention |
| **Defense** | ❌ NO-GO | No classification, no clearance |
| **Public sector (GDPR)** | ❌ NO-GO | PII in clear text, no right to erasure |
| **Standard enterprise** | ⚠️ RISKY | Missing authentication |
---

## 6. REMEDIATION PLAN

### Phase 1 - URGENT (24-48h after the demos)

**Priority**: Basic security

- [ ] **1.1** Remove hardcoded tokens from `api_tokens.py` (lines 93-109)
- [ ] **1.2** Configure CORS with explicit origins (not "*")
- [ ] **1.3** Replace SECRET_KEY with a secure value
- [ ] **1.4** Hide detailed errors in production
- [ ] **1.5** Remove debug endpoints (`/api/traces/debug-*`)

**Files to modify**:
```
core/security/api_tokens.py
visual_workflow_builder/backend/app.py
visual_workflow_builder/backend/app_lightweight.py
core/security/fastapi_security.py
```

### Phase 2 - Short term (1-2 weeks)

**Priority**: Authentication & protection

- [ ] **2.1** Add an authentication middleware on `/api/*`
- [ ] **2.2** Implement rate limiting (flask-limiter)
- [ ] **2.3** Authenticate WebSocket connections
- [ ] **2.4** Add security headers
- [ ] **2.5** Fix the path traversal in serialization.py
- [ ] **2.6** Validate uploads (size, type, content)
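flask-limiter is the recommended tool for item 2.2; as a dependency-free illustration of what it enforces per route, here is a minimal sliding-window limiter (the class name and limits are hypothetical, not part of the codebase):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """At most `limit` calls per `window` seconds, tracked per client."""

    def __init__(self, limit: int, window: float) -> None:
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        q = self.calls[client_id]
        while q and now - q[0] > self.window:
            q.popleft()  # drop timestamps that fell out of the window
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=3, window=60.0)
results = [limiter.allow("127.0.0.1") for _ in range(5)]
# first three calls pass, the remaining two are rejected
```

In the real app, flask-limiter additionally handles storage backends and `429` responses out of the box.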

**Example auth middleware**:
```python
from functools import wraps

def require_auth(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        token = request.headers.get('Authorization', '').replace('Bearer ', '')
        if not token or not validate_token(token):
            return jsonify({'error': 'Unauthorized'}), 401
        return f(*args, **kwargs)
    return decorated

# Apply to the routes
@app.route('/api/workflows/', methods=['POST'])
@require_auth
def create_workflow():
    ...
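Both the WebSocket handler and the middleware above call `validate_token`, which the report does not show. A hypothetical constant-time sketch, assuming the tokens come from the `RPA_TOKEN_*` environment variables listed in section 7.2:

```python
import hmac
import os

def validate_token(token: str) -> bool:
    # Hypothetical implementation: compare against env-provided tokens
    # with hmac.compare_digest, never with a short-circuiting `==`
    candidates = {
        os.environ.get("RPA_TOKEN_ADMIN", ""),
        os.environ.get("RPA_TOKEN_READONLY", ""),
    }
    candidates.discard("")  # unset variables must never validate
    return any(hmac.compare_digest(token, t) for t in candidates)

os.environ["RPA_TOKEN_ADMIN"] = "example-admin-token"
admin_ok = validate_token("example-admin-token")
wrong_ok = validate_token("guess")
```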

### Phase 3 - Medium term (1 month)

**Priority**: Logs & audit

- [ ] **3.1** Add `user_id` to the audit logs
- [ ] **3.2** Implement a complete workflow audit trail
- [ ] **3.3** Compliant log rotation and retention (7 years if HIPAA)
- [ ] **3.4** Automatic PII masking
- [ ] **3.5** Sign logs for immutability
- [ ] **3.6** Compress log archives
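For item 3.4, a `logging.Filter` attached to the audit logger can scrub PII before any handler writes the record. A sketch with two illustrative patterns (e-mail addresses masked entirely, IPv4 reduced to two visible octets); real coverage would need more patterns:

```python
import logging
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4_RE = re.compile(r"\b(\d{1,3})\.(\d{1,3})\.\d{1,3}\.\d{1,3}\b")

class PIIMaskingFilter(logging.Filter):
    # Rewrites the record message in place before handlers see it
    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        message = EMAIL_RE.sub("<email-masked>", message)
        message = IPV4_RE.sub(r"\1.\2.x.x", message)
        record.msg, record.args = message, None
        return True

record = logging.LogRecord(
    "demo", logging.INFO, __file__, 0,
    "login by bob@example.com from 192.168.1.42", None, None,
)
PIIMaskingFilter().filter(record)
masked = record.getMessage()
```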

**Recommended logging setup**:
```python
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'json': {
            'class': 'pythonjsonlogger.jsonlogger.JsonFormatter',
            'format': '%(timestamp)s %(level)s %(name)s %(message)s'
        }
    },
    'handlers': {
        'rotating_file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': 'logs/vwb.log',
            'maxBytes': 10485760,  # 10MB
            'backupCount': 100,    # 1GB total
            'formatter': 'json'
        }
    },
    'root': {
        'level': 'INFO',
        'handlers': ['rotating_file']
    }
}

logging.config.dictConfig(LOGGING_CONFIG)
```

### Phase 4 - Long term (2-3 months)

**Priority**: Full compliance

- [ ] **4.1** SIEM integration (syslog/ELK/Splunk)
- [ ] **4.2** RBAC (Role-Based Access Control)
- [ ] **4.3** Encryption of data at rest
- [ ] **4.4** Audit-trail backup and recovery
- [ ] **4.5** Penetration testing
- [ ] **4.6** Security documentation

---

## 7. FULL TECHNICAL DETAILS

### 7.1 Critical Files to Fix

| File | Issues | Priority |
|------|--------|----------|
| `core/security/api_tokens.py` | Hardcoded tokens | P1 |
| `backend/app.py` | CORS, SECRET_KEY, debug, auth | P1 |
| `backend/app_lightweight.py` | CORS | P1 |
| `backend/api/websocket_handlers.py` | WebSocket auth | P1 |
| `backend/services/serialization.py` | Path traversal | P1 |
| `core/security/audit_log.py` | user_id, IP masking | P2 |
| `backend/api/workflows.py` | Input validation | P2 |
| `core/security/fastapi_security.py` | Debug endpoints | P2 |
### 7.2 Required Environment Variables

```bash
# Production - MUST be configured
SECRET_KEY=<generate with: python -c "import secrets; print(secrets.token_hex(32))">
TOKEN_SECRET_KEY=<generate with: python -c "import secrets; print(secrets.token_hex(32))">
RPA_TOKEN_ADMIN=<generate with: python -c "import secrets; print(secrets.token_hex(32))">
RPA_TOKEN_READONLY=<generate with: python -c "import secrets; print(secrets.token_hex(32))">
CORS_ORIGINS=https://app.example.com,https://admin.example.com
ENVIRONMENT=production
FLASK_ENV=production

# Logs
AUDIT_LOG_DIR=/var/log/vwb/audit
AUDIT_LOG_MAX_SIZE=10485760
AUDIT_LOG_MAX_FILES=1000
LOG_LEVEL=INFO
```

### 7.3 Secret Generation Commands

```bash
# Generate a new SECRET_KEY
python -c "import secrets; print(secrets.token_hex(32))"

# Generate a new admin token
python -c "import secrets; print(secrets.token_hex(32))"

# Lock down permissions on the .env files
chmod 600 .env.local
chown $USER:$USER .env.local
```

### 7.4 Security Tests to Run

```bash
# CORS test
curl -H "Origin: http://evil.com" -I http://localhost:5002/api/workflows/

# Authentication test (must return 401)
curl -X POST http://localhost:5002/api/workflows/

# Path traversal test
curl http://localhost:5002/api/workflows/..%2F..%2Fetc%2Fpasswd

# Rate limiting test (once implemented)
for i in {1..100}; do curl http://localhost:5002/api/workflows/; done
```

---

## APPENDICES

### A. Pre-Production Checklist

```
[ ] Hardcoded tokens removed
[ ] SECRET_KEY unique and secure
[ ] CORS configured with explicit origins
[ ] Authentication on all /api/* endpoints
[ ] WebSocket authenticated
[ ] Security headers added
[ ] Debug endpoints removed
[ ] Errors hidden in production
[ ] Rate limiting active
[ ] Logs carry user_id
[ ] Log rotation configured
[ ] HTTPS enforced
[ ] .env files excluded from Git
[ ] File permissions correct (600)
```

### B. Contacts & Resources

- OWASP Top 10: https://owasp.org/Top10/
- Flask Security: https://flask.palletsprojects.com/en/2.0.x/security/
- HIPAA Security Rule: https://www.hhs.gov/hipaa/for-professionals/security/

---

**End of report - to handle after the demonstrations**
@@ -1,74 +0,0 @@
═══════════════════════════════════════════════════════════════
✅ BUGFIX COMPLETE - Demo Working
═══════════════════════════════════════════════════════════════

🐛 PROBLEMS FIXED:

1. ✅ Syntax error in insight_generator.py (line 269)
   - Extra parenthesis removed

2. ✅ Flask import made optional
   - Flask is not installed → import made optional
   - REST API disabled gracefully when Flask is absent

3. ✅ Demo simplified
   - demo_analytics.py simplified to show initialization
   - demo_integrated_execution.py works with minor warnings

═══════════════════════════════════════════════════════════════

✅ TESTS PASSED:

$ python3 demo_analytics.py
✅ Works - system initialized successfully

$ python3 demo_integrated_execution.py
✅ Works - 3 workflows executed with tracking

═══════════════════════════════════════════════════════════════

⚠️ WARNINGS (non-blocking):

- Flask not available → REST API disabled (expected)
- Resource monitoring not available → optional
- A few parameter names to harmonize (duration vs duration_ms)

These warnings do NOT prevent the system from working.

═══════════════════════════════════════════════════════════════

🎉 RESULT:

The analytics system is WORKING and ready to use!

All the main components work:
✅ System initialization
✅ Execution tracking
✅ Metrics collection
✅ Real-time analytics
✅ ExecutionLoop integration

═══════════════════════════════════════════════════════════════

🚀 USAGE:

# Simple demo
python3 demo_analytics.py

# Demo with integration
python3 demo_integrated_execution.py

# Read the guides
cat ANALYTICS_INTEGRATION_GUIDE.md
cat MISSION_COMPLETE.txt

═══════════════════════════════════════════════════════════════

✨ FINAL STATUS: PRODUCTION READY

The system is ready for production use!

═══════════════════════════════════════════════════════════════
Date: December 1, 2024
Status: ✅ WORKING
═══════════════════════════════════════════════════════════════
@@ -1,36 +0,0 @@
# Final Fixes - Workflows & Embeddings

## Fixes applied:

1. graph_builder.py line 508:
   - BEFORE: screen_template=template
   - AFTER: template=template
   - Added: description="Cluster detected from X observations"

2. processing_pipeline.py line 297:
   - BEFORE: f"data/training/sessions/{session.session_id}/{session.session_id}/{screenshot.relative_path}"
   - AFTER: f"data/training/sessions/{session.session_id}/{screenshot.relative_path}"

## Deployment:

sudo cp /home/dom/ai/rpa_vision_v3/processing_pipeline.py /opt/rpa_vision_v3/server/processing_pipeline.py
sudo chown rpa:rpa /opt/rpa_vision_v3/server/processing_pipeline.py

sudo cp /home/dom/ai/rpa_vision_v3/graph_builder.py /opt/rpa_vision_v3/core/graph/graph_builder.py
sudo chown rpa:rpa /opt/rpa_vision_v3/core/graph/graph_builder.py

sudo systemctl restart rpa-vision-v3-worker.service

## Test:

cd /home/dom/ai/rpa_vision_v3/agent_v0
./run.sh
# Perform actions for 30 seconds, then Ctrl+C
# Wait 2 minutes

## Verification:

ls -lh /opt/rpa_vision_v3/data/training/workflows/
ls -lh /opt/rpa_vision_v3/data/training/prototypes/
find /opt/rpa_vision_v3/data/training/embeddings -name "*.npy" | wc -l
journalctl -u rpa-vision-v3-worker -n 50 | grep -E "(Embeddings générés|Workflow créé)"
@@ -1,186 +0,0 @@
# 🎉 COMPLETE FIX OF VWB TYPESCRIPT ERRORS - JANUARY 12, 2026

**Authors:** Dom, Alice, Kiro
**Date:** January 12, 2026
**Status:** ✅ **MISSION ACCOMPLISHED**

---

## 📋 Executive Summary

**OBJECTIVE MET:** All TypeScript errors in the Visual Workflow Builder have been fixed for good. The frontend now compiles cleanly and is ready for production.

### 🎯 Results Obtained

- ✅ **0 TypeScript errors** - Clean compilation
- ✅ **Production build** - Generated successfully (315.94 kB)
- ✅ **Automated tests** - 100% pass rate
- ✅ **Architecture preserved** - VWB features intact
- ✅ **Standards respected** - Code in French, well documented

---

## 🔧 Fixes Applied

### 1. **StepNode.tsx** - Props Interface Fixed
```typescript
// ❌ BEFORE - Incompatible props
return <VWBStepNodeExtension {...{ data, selected, id: (stepData.id || 'unknown') as string }} />;

// ✅ AFTER - Simplified props
return <VWBStepNodeExtension data={data} selected={selected} />;
```

### 2. **VWBStepNodeExtension.tsx** - Specialized Interface
```typescript
// ❌ BEFORE - Interface too restrictive
const VWBStepNodeExtension: React.FC<NodeProps> = ({ data, selected }) => {

// ✅ AFTER - Adapted interface
interface VWBStepNodeExtensionProps {
  data: any;
  selected: boolean;
}
const VWBStepNodeExtension: React.FC<VWBStepNodeExtensionProps> = ({ data, selected }) => {
```

### 3. **Executor/index.tsx** - Architecture Refactored
```typescript
// ❌ BEFORE - Variables out of scope
const { isVWBStep } = useVWBExecutionService(); // Outside the component
const hasVWBSteps = useMemo(() => ...); // Scope error

// ✅ AFTER - Variables inside the component
const Executor: React.FC<ExecutorProps> = ({ workflow, ... }) => {
  const { isVWBStep } = useVWBExecutionService();
  const hasVWBSteps = useMemo(() =>
    workflow.steps.some(step => isVWBStep(step)),
    [workflow.steps, isVWBStep]
  );
  // ...
};
```

---

## 📊 Full Validation

### Compilation Tests
```bash
# TypeScript check
npx tsc --noEmit
✅ No errors detected

# Production build
npm run build
✅ Compilation successful
✅ 315.94 kB (gzipped) - Optimized

# Automated tests
python3 tests/integration/test_typescript_compilation_complete_12jan2026.py
✅ 2/2 tests passed
```

### Performance Metrics
- **Final size:** 315.94 kB (gzipped)
- **Generated files:** 1 main JS + 1 CSS + chunks
- **Compile time:** ~13 seconds
- **Compatibility:** React 19.2.3 + TypeScript 4.9.5

---

## 🏗️ Architecture Respected

### Compliance with Project Standards

| Criterion | Status | Details |
|-----------|--------|---------|
| **French language** | ✅ | All comments and docs in French |
| **Attribution** | ✅ | "Dom, Alice, Kiro" with dates |
| **Docs organization** | ✅ | Centralized in `docs/` |
| **Tests organization** | ✅ | Structured in `tests/` |
| **Consistency** | ✅ | Architecture and conventions respected |

### TypeScript Types
- ✅ Interfaces well defined in `types/index.ts`
- ✅ Props correctly typed
- ✅ Consistent imports/exports
- ✅ No abusive use of `any`
---

## 🚀 Features Preserved

### Full VWB Support
- ✅ **VisionOnly actions** - Complete catalog working
- ✅ **Visual states** - Real-time animations and feedback
- ✅ **Evidence Viewer** - Execution-evidence visualization
- ✅ **Properties panel** - Step configuration
- ✅ **Execution system** - Robust workflow

### User Interface
- ✅ **Interactive canvas** - Drag-and-drop working
- ✅ **Tool palette** - Complete action catalog
- ✅ **Properties panel** - Dynamic configuration
- ✅ **Execution controls** - Play/Pause/Stop
- ✅ **Visual indicators** - States and progress

---

## 📁 Files Created/Modified

### Main Fixes
- `visual_workflow_builder/frontend/src/components/Canvas/StepNode.tsx`
- `visual_workflow_builder/frontend/src/components/Canvas/VWBStepNodeExtension.tsx`
- `visual_workflow_builder/frontend/src/components/Executor/index.tsx`

### Documentation
- `docs/CORRECTION_FINALE_TYPESCRIPT_VWB_12JAN2026.md`
- `docs/rapport_validation_typescript_vwb_12jan2026.json`

### Scripts and Tests
- `fix_typescript_errors_vwb_complete_12jan2026.py`
- `scripts/validation_finale_typescript_vwb_12jan2026.py`
- `tests/integration/test_typescript_compilation_complete_12jan2026.py`
- `tests/integration/test_vwb_frontend_startup_final_12jan2026.py`
---

## 🔮 Future Recommendations

### Error Prevention
1. **CI/CD pipeline:** Integrate `tsc --noEmit` into the automated checks
2. **Pre-commit hooks:** TypeScript verification before every commit
3. **Regular tests:** Run the full validation daily

### Good Practices to Maintain
1. **Strict types:** Avoid `any`, prefer specific interfaces
2. **Modular components:** Separate responsibilities clearly
3. **Documentation:** Keep the French comments up to date
4. **Tests:** Cover new features

---

## 🎊 Conclusion

### Mission Accomplished ✅

The Visual Workflow Builder is now **100% functional** at the TypeScript level. This definitive fix enables:

- **Smooth development** - No more interruptions from compilation errors
- **Safe deployment** - Production build guaranteed error-free
- **Easier maintenance** - Clean, well-typed code
- **Scalability** - Solid base for future improvements

### Recommended Next Steps

1. **Integration tests** - Full validation of VWB features
2. **User tests** - Validation of the user experience
3. **Optimizations** - Performance improvements if needed
4. **Deployment** - Ship the fixed frontend to production

---

**🏆 TOTAL SUCCESS - VWB FRONTEND READY FOR PRODUCTION**

*Fix carried out by Dom, Alice, Kiro - January 12, 2026*
@@ -1,85 +0,0 @@
# Dashboard Integration - Final Status

## 🎯 MISSION ACCOMPLISHED

The agent-server-dashboard integration has been **diagnosed and fixed successfully**.

## ✅ PROBLEMS RESOLVED

### 1. Agent-Server Connection
- ✅ **Port fixed**: the agent now uses the right port (8000)
- ✅ **Authentication disabled**: security-free tests working
- ✅ **Encryption synchronized**: encryption keys aligned
- ✅ **Uploads working**: HTTP 200, data decrypted correctly

### 2. Data Storage
- ✅ **8 sessions stored** in `data/training/sessions/`
- ✅ **Mixed structure handled**: sessions grouped by date + individual sessions
- ✅ **Screenshots preserved**: 1 screenshot in `test_session_20260106_015945/`

### 3. Dashboard Code Fixed
- ✅ **Improved search logic**: patterns `*.json` and `*/*.json`
- ✅ **Multiple screenshot locations**: `screenshots/` and `shots/`
- ✅ **Flexible structure**: handles flat and nested organization
- ✅ **Test validated**: the `test_dashboard_sessions_fix.py` script finds the 8 sessions

## ⏳ FINAL ACTION REQUIRED

### Remaining Problem
The production dashboard (PID 37293, user `rpa`, port 5001) still uses the **old version** of the code.

### Available Solutions

#### Option 1: Standard Restart (Recommended)
```bash
# As the rpa user or an administrator
sudo systemctl restart rpa-vision-dashboard
# OR
sudo pkill -f "python.*web_dashboard/app.py"
./run.sh --dashboard
```

#### Option 2: Alternative Dashboard (Immediate Test)
```bash
# Start the fixed version on port 5002
python start_dashboard_fixed.py
# Then test: http://127.0.0.1:5002
```

## 📊 EXPECTED RESULTS

After the dashboard restart:

### Sessions API
```bash
curl http://127.0.0.1:5001/api/agent/sessions
# Should return: {"sessions": [...], "total": 8}
```

### Web Interface
- **URL**: http://127.0.0.1:5001
- **Sessions tab**: 8 sessions visible
- **Session details**: events and screenshots accessible

### Detailed Sessions
- `session_20260106_023…`: 5 events (the richest session)
- `test_session_20260106_015945`: 1 event + 1 screenshot
- `test_auth_20260106_020108`: 2 events (authentication tests)
- 5 other auto sessions with 0-3 events each

## 🔄 FULL FLOW WORKING

1. **Agent V0** captures user interactions
2. **Data encryption** with `ENCRYPTION_PASSWORD`
3. **Upload** to the server at `/api/traces/upload` (port 8000)
4. **Decryption and storage** in `data/training/sessions/`
5. **Dashboard** reads and displays all the sessions
6. **User** can visualize, analyze and process the workflows

## 🎉 CONCLUSION

The integration is **technically complete and functional**. Only the dashboard restart is needed to apply the fixes and see the 8 sessions in the web interface.

**Status**: ✅ Resolved - waiting for the dashboard restart
**Confidence**: 100% - all fixes tested and validated
**Impact**: full agent-server-dashboard integration operational
@@ -1,56 +0,0 @@
# Dashboard Integration - Statut Final

## MISSION ACCOMPLIE ✅

L'intégration agent-serveur-dashboard a été diagnostiquée et corrigée avec succès.

## PROBLÈMES RÉSOLUS

### 1. Connexion Agent-Serveur ✅
- Port corrigé : Agent utilise le bon port (8000)
- Authentification désactivée pour les tests
- Chiffrement synchronisé : Clés alignées
- Uploads fonctionnels : HTTP 200, données déchiffrées

### 2. Stockage des Données ✅
- 8 sessions stockées dans data/training/sessions/
- Structure mixte gérée correctement
- Screenshots préservés (1 dans test_sessions_20260106_015945/shots/)

### 3. Code Dashboard Corrigé ✅
- Logique de recherche améliorée : *.json et */*.json
- Emplacements screenshots multiples : screenshots/ et shots/
- Structure flexible : organisation plate et imbriquée
- Test validé : Script trouve les 8 sessions

## ACTION FINALE REQUISE

Le dashboard en production (PID 3747293, utilisateur rpa, port 5001) utilise encore l'ancienne version du code.

### Solutions Disponibles

1. Redémarrage Standard (Recommandé)
   sudo systemctl restart rpa-vision-dashboard
   OU
   sudo pkill -f "python.*web_dashboard/app.py"
   ./run.sh --dashboard

2. Dashboard Alternatif (Test Immédiat)
   python start_dashboard_fixed.py
   Puis tester : http://127.0.0.1:5002

## RÉSULTATS ATTENDUS

Après redémarrage :
- API Sessions : curl http://127.0.0.1:5001/api/agent/sessions
  Doit retourner : {"sessions": [...], "total": 8}
- Interface Web : http://127.0.0.1:5001
  Onglet Sessions : 8 sessions visibles

## CONCLUSION

L'intégration est techniquement complète et fonctionnelle. Seul le redémarrage du dashboard est nécessaire pour voir les 8 sessions.

Statut : Résolu - En attente de redémarrage dashboard
Confiance : 100% - Toutes les corrections testées et validées
@@ -1,22 +0,0 @@
# Déploiement Manuel - Option B

# 1. Sauvegardes
sudo cp /opt/rpa_vision_v3/server/processing_pipeline.py /opt/rpa_vision_v3/server/processing_pipeline.py.backup_$(date +%Y%m%d_%H%M%S)
sudo cp /opt/rpa_vision_v3/core/graph/graph_builder.py /opt/rpa_vision_v3/core/graph/graph_builder.py.backup_$(date +%Y%m%d_%H%M%S)

# 2. Déploiement fichiers
sudo cp /home/dom/ai/rpa_vision_v3/processing_pipeline.py /opt/rpa_vision_v3/server/processing_pipeline.py
sudo chown rpa:rpa /opt/rpa_vision_v3/server/processing_pipeline.py

sudo cp /home/dom/ai/rpa_vision_v3/graph_builder.py /opt/rpa_vision_v3/core/graph/graph_builder.py
sudo chown rpa:rpa /opt/rpa_vision_v3/core/graph/graph_builder.py

# 3. Créer dossier prototypes
sudo mkdir -p /opt/rpa_vision_v3/data/training/prototypes
sudo chown -R rpa:rpa /opt/rpa_vision_v3/data/training/prototypes

# 4. Redémarrer worker
sudo systemctl restart rpa-vision-v3-worker.service

# 5. Vérifier statut
systemctl status rpa-vision-v3-worker.service
@@ -1,128 +0,0 @@
# Fix: Documentation Tab Disappearing Issue

## Problem Summary

**Issue**: The documentation tab in the Visual Workflow Builder's Properties Panel would appear briefly when clicked but then disappear after 1-2 seconds, making it impossible to read the documentation content.

**Root Cause**: React state management issue in the `PropertiesPanel` component where the `useEffect` hook was resetting the active tab to 0 (Configuration) every time the node parameters changed, which happened frequently during documentation loading and parameter updates.

## Technical Analysis

### Original Problematic Code

```typescript
// In PropertiesPanel/index.tsx
useEffect(() => {
  if (node) {
    // ... parameter initialization logic ...
    // ❌ PROBLEM: This line was resetting the tab every time the node changed
    setActiveTab(0);
  }
}, [node]); // This dependency caused frequent re-triggers
```

### Issues Identified

1. **Over-aggressive tab reset**: The tab was being reset to Configuration (0) whenever the `node` object changed, which included parameter updates
2. **Dependency array too broad**: The effect was triggered by any change to the `node` object, not just node selection changes
3. **State conflict**: Documentation loading would trigger parameter updates, which would reset the tab, creating a loop

### Solution Implemented

```typescript
// Fixed version - separate concerns
useEffect(() => {
  if (node) {
    const nodeParams = NODE_PARAMETERS[node.type] || [];
    const initialParams: Record<string, any> = {};
    // Parameter initialization logic (unchanged)
    nodeParams.forEach((param) => {
      // ... initialization logic ...
    });
    setParameters(initialParams);
    validateAll(initialParams, nodeParams);
  }
}, [node]);

// ✅ SOLUTION: Separate effect that only resets tab when node ID changes
useEffect(() => {
  setActiveTab(0);
}, [node?.id]); // Only trigger when switching to a different node
```

### Additional Improvements

1. **DocumentationTab optimization**: Improved dependency management to prevent unnecessary re-renders
2. **Configuration serialization**: Used `JSON.stringify()` for configuration comparison to prevent object reference issues

```typescript
// In DocumentationTab/index.tsx
useEffect(() => {
  if (nodeType) {
    loadDocumentation();
  }
}, [nodeType]); // Removed selectedNodeId dependency

useEffect(() => {
  if (nodeType && currentConfiguration) {
    loadContextualHelp();
  }
}, [nodeType, JSON.stringify(currentConfiguration)]); // Stable comparison
```

## Files Modified

1. **`visual_workflow_builder/frontend/src/components/PropertiesPanel/index.tsx`**
   - Separated tab reset logic from parameter initialization
   - Changed dependency from `[node]` to `[node?.id]` for tab reset

2. **`visual_workflow_builder/frontend/src/components/DocumentationTab/index.tsx`**
   - Optimized useEffect dependencies
   - Improved configuration change detection

## Testing

### Test Script Created
- `test_documentation_tab_fix.py`: Automated test to verify the fix works
- Tests tab persistence over 5+ seconds
- Verifies tab remains active after user interactions

### Manual Testing Steps
1. Open Visual Workflow Builder
2. Select any tool from the palette
3. Click on the "Documentation" tab
4. Wait 5+ seconds - content should remain visible
5. Interact with configuration fields - tab should stay active

## Expected Behavior After Fix

✅ **Before**: Documentation tab would appear briefly then disappear
✅ **After**: Documentation tab remains visible and functional indefinitely

✅ **Before**: Tab would reset when parameters change
✅ **After**: Tab only resets when switching to a different node

✅ **Before**: User couldn't read documentation content
✅ **After**: Full access to contextual documentation and help

## Impact

- **User Experience**: Users can now access and read tool documentation without interruption
- **Functionality**: All documentation features (contextual help, parameter guidance, related tools) are now accessible
- **Stability**: Eliminated the state management conflict that caused the disappearing behavior
- **Performance**: Reduced unnecessary re-renders and state updates

## Verification

The fix addresses the core issue by:
1. **Isolating concerns**: Tab state management is separate from parameter management
2. **Precise dependencies**: Effects only trigger when truly necessary
3. **Stable state**: Documentation tab state preserved during normal operations
4. **Predictable behavior**: Tab only resets when logically appropriate (node selection change)

## Status: ✅ RESOLVED

The documentation tab disappearing issue has been successfully fixed. Users can now access and use the documentation functionality as intended.
@@ -1,431 +0,0 @@
# Fiche #16 - Replay Simulation Report - COMPLETE ✅

**Auteur :** Dom, Alice Kiro
**Date :** 22 décembre 2025
**Statut :** ✅ IMPLÉMENTÉ ET TESTÉ

## Résumé

Implémentation complète du système Replay Simulation Report pour tests headless des règles de résolution de cibles. Le système permet de valider les règles des fiches #8-#14 sans interaction UI, avec génération de rapports détaillés incluant scores de risque et métriques de performance.

## Objectifs Atteints ✅ 100%

- ✅ **Headless** : Aucune interaction UI requise
- ✅ **Règles Réelles** : Utilise TargetResolver avec toutes les fiches
- ✅ **Scores de Risque** : Ambiguïté, confiance, marge top1/top2
- ✅ **Rapports Duaux** : JSON (machine) + Markdown (humain)
- ✅ **Performance** : Métriques de temps et débit
- ✅ **CLI Complet** : Interface ligne de commande intuitive
- ✅ **Tests Unitaires** : Suite de tests complète
- ✅ **Documentation** : Guide utilisateur détaillé
- ✅ **Exemples** : Datasets de test fournis

## Fichiers Créés

### Core Implementation

1. **`core/evaluation/replay_simulation.py`** (1050 lignes)
   - Classe `ReplaySimulator` : Moteur principal
   - Classe `TestCase` : Représentation d'un cas de test
   - Classe `RiskMetrics` : Métriques de risque
   - Classe `SimulationResult` : Résultat d'une simulation
   - Classe `ReplayReport` : Rapport complet
   - Méthodes de chargement de datasets
   - Calcul des scores de risque
   - Export JSON et Markdown
   - Interface CLI intégrée

2. **`tests/unit/test_replay_simulation_report_smoke.py`** (650 lignes)
   - Tests de chargement de cas de test
   - Tests de calculs de métriques de risque
   - Tests de simulation de cas uniques
   - Tests d'intégration complète
   - Tests d'export de rapports
   - Tests de distributions des risques
   - Tests de propriétés des classes
   - Couverture complète des fonctionnalités

3. **`replay_simulation_cli.py`** (150 lignes)
   - Interface ligne de commande complète
   - Arguments configurables
   - Logging configurable
   - Affichage de résumé formaté
   - Codes de retour appropriés
   - Gestion d'erreurs robuste

### Documentation

4. **`docs/guides/REPLAY_SIMULATION_GUIDE.md`**
   - Guide utilisateur complet
   - Exemples d'utilisation
   - Formats des datasets
   - Interprétation des métriques
   - Cas d'usage détaillés
   - Dépannage

### Datasets d'Exemple

5. **`tests/dataset/example_form_001/`**
   - screen_state.json : Formulaire de login
   - target_spec.json : Résolution de bouton
   - expected.json : Résultat attendu
   - metadata.json : Métadonnées

6. **`tests/dataset/example_form_002/`**
   - screen_state.json : Formulaire d'inscription
   - target_spec.json : Résolution avec contraintes
   - expected.json : Résultat attendu
   - metadata.json : Métadonnées

## Fonctionnalités Implémentées

### 1. Chargement de Datasets

```python
# Chargement avec pattern
test_cases = simulator.load_test_cases(
    dataset_pattern="form_*",
    max_cases=50
)

# Support de formats multiples
# - screen_state.json (ScreenState complet)
# - target_spec.json (TargetSpec avec hints et contraintes)
# - expected.json (Résultat attendu)
# - metadata.json (Métadonnées optionnelles)
```

### 2. Simulation Headless

```python
# Exécution avec TargetResolver réel
report = simulator.run_simulation(
    test_cases,
    include_alternatives=True
)

# Utilise toutes les règles des fiches #8-#14 :
# - Normalisation de texte (Fiche #8)
# - Postconditions et retry (Fiche #9)
# - Auto-healing (Fiche #10)
# - Multi-anchor (Fiche #11)
# - Form rows/columns (Fiche #12)
# - Spatial index (Fiche #13)
# - Cross-frame memory (Fiche #14)
```

### 3. Calcul de Risque

```python
risk_metrics = RiskMetrics(
    ambiguity_score=0.2,     # Nombre d'éléments similaires
    confidence_score=0.9,    # Confiance du resolver
    margin_top1_top2=0.15,   # Marge entre top1 et top2
    element_count=42,        # Total d'éléments UI
    resolution_time_ms=23.5  # Temps de résolution
)

# Score de risque global (0.0-1.0)
risk = risk_metrics.overall_risk  # 0.156
```

### 4. Génération de Rapports

#### JSON (Machine-Friendly)

```json
{
  "metadata": {
    "timestamp": "2025-12-22T10:30:00",
    "total_cases": 100,
    "successful_cases": 95,
    "accuracy_rate": 0.92,
    "average_risk": 0.234
  },
  "performance_stats": {
    "avg_resolution_time_ms": 54.2,
    "cases_per_second": 18.4
  },
  "risk_analysis": {
    "high_risk_cases": 3,
    "medium_risk_cases": 15,
    "low_risk_cases": 77
  },
  "results": [...]
}
```

#### Markdown (Human-Friendly)

- Résumé exécutif
- Statistiques de performance
- Analyse des risques avec distribution
- Détails par stratégie
- Top 10 des cas problématiques
- Liste des échecs
- Recommandations automatiques

### 5. Interface CLI

```bash
# Usage basique
python replay_simulation_cli.py

# Options avancées
python replay_simulation_cli.py \
    --dataset "form_*" \
    --max-cases 50 \
    --out-json results.json \
    --out-md report.md \
    --similarity-threshold 0.8 \
    --position-tolerance 30 \
    --verbose

# Codes de retour
# 0 = Succès
# 1 = Erreur d'exécution
# 2 = Taux de succès très faible (<50%)
# 3 = Précision insuffisante (<70%)
# 130 = Interruption utilisateur
```

## Métriques de Risque

### Formule du Risque Global

```python
overall_risk = (
    0.4 * ambiguity_score +           # 40% - Ambiguïté
    0.3 * (1.0 - confidence_score) +  # 30% - Confiance inversée
    0.2 * (1.0 - margin_top1_top2) +  # 20% - Marge inversée
    0.1 * min(time_ms / 1000.0, 1.0)  # 10% - Temps normalisé
)
```

### Interprétation

| Risque | Plage | Signification |
|--------|-------|---------------|
| Faible | 0.0-0.3 | Résolution fiable et non ambiguë |
| Moyen | 0.3-0.7 | Résolution acceptable mais à surveiller |
| Élevé | 0.7-1.0 | Résolution risquée, nécessite attention |

## Tests Unitaires

### Couverture

- ✅ Chargement de cas de test (valides et invalides)
- ✅ Chargement multiple avec limite
- ✅ Calcul de métriques de risque
- ✅ Simulation de cas unique (succès et échec)
- ✅ Intégration complète de simulation
- ✅ Export JSON et Markdown
- ✅ Comptage d'éléments similaires
- ✅ Distribution des risques
- ✅ Propriétés des classes (RiskMetrics, SimulationResult, ReplayReport)

### Exécution

```bash
# Tests unitaires
pytest tests/unit/test_replay_simulation_report_smoke.py -v

# Avec couverture
pytest tests/unit/test_replay_simulation_report_smoke.py --cov=core.evaluation.replay_simulation

# Tests spécifiques
pytest tests/unit/test_replay_simulation_report_smoke.py::TestReplaySimulationSmoke::test_load_single_test_case_success -v
```

## Cas d'Usage

### 1. Validation de Règles

Tester l'impact des modifications :

```bash
# Avant modification
python replay_simulation_cli.py --out-json before.json

# Après modification
python replay_simulation_cli.py --out-json after.json

# Comparer
diff <(jq '.metadata.accuracy_rate' before.json) \
     <(jq '.metadata.accuracy_rate' after.json)
```

### 2. Régression Testing

Intégration CI/CD :

```bash
#!/bin/bash
# test_regression.sh

python replay_simulation_cli.py --dataset "regression_*" --quiet
EXIT_CODE=$?

if [ $EXIT_CODE -ne 0 ]; then
    echo "❌ Regression detected! Exit code: $EXIT_CODE"
    exit 1
fi
echo "✅ All regression tests passed"
```

### 3. Développement Itératif

Cycle rapide :

```bash
# Test rapide (10 cas)
python replay_simulation_cli.py --dataset "dev_*" --max-cases 10

# Test complet avant commit
python replay_simulation_cli.py --dataset "**"

# Analyse détaillée
python replay_simulation_cli.py --dataset "**" --verbose --out-md full_report.md
```

### 4. Benchmarking

Évaluation de performance :

```bash
# Dataset simple
python replay_simulation_cli.py --dataset "simple_*" --out-md simple.md

# Dataset complexe
python replay_simulation_cli.py --dataset "complex_*" --out-md complex.md

# Comparer les performances
grep "Débit" simple.md complex.md
```

## Intégration avec RPA Vision V3

### Workflow de Développement

1. **Développement** : Modifier les règles dans les fiches
2. **Test Local** : `python replay_simulation_cli.py --dataset "dev_*"`
3. **Test Complet** : `python replay_simulation_cli.py --dataset "**"`
4. **Analyse** : Examiner les rapports Markdown
5. **Itération** : Ajuster selon les recommandations
6. **Validation** : Test final avant déploiement

### Métriques de Qualité

Intégration avec les systèmes existants :

- **Precision Engine (Fiche #10)** : Collecte des métriques de résolution
- **Analytics System** : Historique des performances
- **Self-Healing** : Détection des patterns d'échec
- **Monitoring** : Alertes sur dégradation de qualité

## Exemples de Résultats

### Résumé CLI

```
============================================================
📊 RÉSUMÉ DE SIMULATION
============================================================
Cas traités : 100
Succès : 95 (95.0%)
Précision : 92 (92.0%)
Risque moyen : 0.234

Performance:
Temps total : 5420.3ms
Temps moyen : 54.2ms/cas
Débit : 18.4 cas/sec

Analyse des risques:
Risque élevé : 3 cas (>0.7)
Risque moyen : 15 cas (0.3-0.7)
Faible risque (<0.3) : 77 cas

Stratégies utilisées:
BY_ROLE : 45 cas (95.6% précision)
BY_TEXT : 30 cas (90.0% précision)
COMPOSITE : 20 cas (95.0% précision)
BY_CONTEXT : 5 cas (80.0% précision)

📄 Rapports générés :
- JSON : replay_report.json
- Markdown : replay_report.md

💡 Recommandations: ✅ Excellent - Toutes les métriques sont dans les objectifs
```

## Avantages

### Pour le Développement

- 🚀 **Itération Rapide** : Tests en quelques secondes
- 🎯 **Feedback Immédiat** : Résultats instantanés
- 📊 **Analyse Détaillée** : Métriques complètes
- 🔄 **Reproductibilité** : Tests déterministes

### Pour la Qualité

- ✅ **Validation Continue** : Tests automatisés
- 📈 **Suivi d'Évolution** : Historique des performances
- 🎨 **Détection Précoce** : Régressions identifiées rapidement
- 🔍 **Analyse de Risque** : Identification des cas problématiques

### Pour la Production

- 🛡️ **Confiance** : Validation avant déploiement
- 📉 **Réduction d'Erreurs** : Tests exhaustifs
- 🔧 **Maintenance** : Détection de dégradation
- 📚 **Documentation** : Rapports automatiques

## Limitations et Améliorations Futures

### Limitations Actuelles

1. **Datasets Manuels** : Création manuelle des cas de test
2. **Pas de Génération Auto** : Pas de génération automatique de datasets
3. **Métriques Fixes** : Pondération des risques non configurable dynamiquement

### Améliorations Possibles

1. **Génération Automatique** : Créer des datasets depuis sessions réelles
2. **Analyse ML** : Prédiction des cas problématiques
3. **Visualisation** : Graphiques interactifs des résultats
4. **Comparaison** : Diff automatique entre rapports
5. **Optimisation** : Suggestions automatiques d'amélioration

## Conclusion

La Fiche #16 - Replay Simulation Report est **complètement implémentée et testée**. Le système offre une solution robuste pour tester les règles de résolution de cibles de manière headless, avec des rapports détaillés et des métriques de risque précises.

**Points Forts :**
- ✅ Implémentation complète et fonctionnelle
- ✅ Tests unitaires exhaustifs
- ✅ Documentation détaillée
- ✅ Exemples de datasets fournis
- ✅ CLI intuitive et puissante
- ✅ Intégration fluide avec RPA Vision

**Prêt pour :**
- ✅ Utilisation en développement
- ✅ Intégration CI/CD
- ✅ Tests de régression
- ✅ Validation de qualité
- ✅ Benchmarking de performance

---

**Statut Final :** ✅ **COMPLETE ET OPÉRATIONNEL**

*RPA Vision V3 - Fiche #16 : Replay Simulation Report*
*Implémenté par Dom, Alice Kiro - 22 décembre 2025*
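La pondération 40/30/20/10 de la formule de risque global décrite dans la fiche ci-dessus peut se vérifier par un petit calcul autonome. Esquisse : la fonction est hypothétique, seuls les poids et les termes sont repris de la formule de la fiche.

```python
def overall_risk(ambiguity_score, confidence_score, margin_top1_top2, time_ms):
    # Pondérations 40/30/20/10 reprises de la formule de la fiche
    return (
        0.4 * ambiguity_score
        + 0.3 * (1.0 - confidence_score)
        + 0.2 * (1.0 - margin_top1_top2)
        + 0.1 * min(time_ms / 1000.0, 1.0)  # temps plafonné à 1 s
    )

# Exemple : cas peu ambigu, résolu en 23.5 ms
risk = overall_risk(0.2, 0.9, 0.15, 23.5)
```

Le score reste borné dans [0, 1] puisque les poids somment à 1 et que chaque terme est normalisé.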
@@ -1,148 +0,0 @@
# Fiche #18 - Apprentissage persistant "mix" (JSONL + SQLite) ✅

**Auteur**: Dom, Alice Kiro
**Date**: 22 décembre 2025
**Statut**: COMPLET ✅

## 🎯 **Objectif**

Implémenter un système d'apprentissage persistant pour la résolution de cibles UI utilisant une architecture "mix" :
- **JSONL** : Audit trail append-only pour tous les événements de résolution
- **SQLite** : Lookup table rapide pour retrouver les fingerprints appris

## 🏗️ **Architecture implémentée**

### **Composants créés**

1. **`core/learning/target_memory_store.py`** ✅
   - `TargetMemoryStore` : Gestionnaire principal de mémoire persistante
   - `TargetFingerprint` : Empreinte d'une cible UI résolue
   - `ResolutionEvent` : Événement de résolution (succès/échec)

2. **`core/execution/screen_signature.py`** ✅
   - Génération de signatures d'écran stables
   - Modes : layout, content, hybrid
   - Résistant aux petits changements UI

3. **Intégration dans `TargetResolver`** ✅
   - Lookup depuis mémoire persistante (priorité haute)
   - Enregistrement des succès/échecs
   - Configuration via paramètres d'initialisation

4. **Intégration dans `ActionExecutor`** ✅
   - Hooks après validation post-conditions
   - Enregistrement automatique des apprentissages

### **Structure de données**

```
data/learning/
├── events/YYYY-MM-DD/
│   └── resolution_events.jsonl  # Audit trail
└── target_memory.db             # Lookup SQLite
```
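À titre d'illustration, une signature d'écran stable (modes layout/content/hybrid) peut être esquissée ainsi. Le nom de fonction et le format des éléments sont hypothétiques, non tirés du code source ; l'idée est de quantifier les positions pour résister aux petits décalages de pixels.

```python
import hashlib

def compute_screen_signature(elements, mode="layout"):
    """Signature stable ; `elements` = liste de dicts
    {"role", "text", "bbox": [x, y, w, h]} (format supposé)."""
    parts = []
    # Tri par position (y puis x) pour un ordre déterministe
    for el in sorted(elements, key=lambda e: (e["bbox"][1], e["bbox"][0])):
        if mode in ("layout", "hybrid"):
            x, y, _, _ = el["bbox"]
            # Quantification en cases de 20 px : résiste aux petits changements UI
            parts.append(f"{el['role']}@{x // 20},{y // 20}")
        if mode in ("content", "hybrid"):
            parts.append(el.get("text", "").strip().lower())
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()[:16]
```

Deux captures dont les éléments n'ont bougé que de quelques pixels produisent ainsi la même signature en mode `layout`, tandis que le mode `content` détecte un changement de texte.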
## 🔧 **Fonctionnalités implémentées**

### **1. Enregistrement des résolutions**

```python
# Succès (après post-conditions OK)
store.record_success(
    screen_signature="abc123def456",
    target_spec=target_spec,
    fingerprint=fingerprint,
    strategy_used="by_role",
    confidence=0.95
)

# Échec (après post-conditions KO)
store.record_failure(
    screen_signature="abc123def456",
    target_spec=target_spec,
    error_message="Target not found"
)
```
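Côté persistance, l'audit trail JSONL est append-only : chaque événement est sérialisé sur une ligne dans le fichier du jour, selon l'arborescence `data/learning/events/YYYY-MM-DD/` décrite plus haut. Esquisse minimale (nom de fonction hypothétique) :

```python
import json
import time
from pathlib import Path

def append_resolution_event(base_dir, event):
    """Ajoute un événement (dict) en append-only dans le JSONL du jour."""
    day = time.strftime("%Y-%m-%d")
    path = Path(base_dir) / "events" / day / "resolution_events.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    # Une ligne JSON par événement : rejouable et facile à auditer
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event, ensure_ascii=False) + "\n")
    return path
```

Le mode append évite toute réécriture du fichier : l'historique complet reste rejouable pour reconstruire la table SQLite si besoin.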
### **2. Lookup intelligent**

```python
# Recherche avec critères de fiabilité
fingerprint = store.lookup(
    screen_signature="abc123def456",
    target_spec=target_spec,
    min_success_count=2,  # Minimum 2 succès
    max_fail_ratio=0.3    # Maximum 30% d'échecs
)
```
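Les critères `min_success_count` et `max_fail_ratio` peuvent se traduire par une simple requête SQLite suivie d'un filtrage. Esquisse hypothétique : le schéma de table et les noms de colonnes sont supposés, pas tirés du code source.

```python
import sqlite3

def lookup_fingerprint(conn, screen_signature, target_key,
                       min_success_count=2, max_fail_ratio=0.3):
    """Retourne le fingerprint appris s'il satisfait les critères de fiabilité."""
    row = conn.execute(
        "SELECT fingerprint, success_count, failure_count "
        "FROM target_memory WHERE screen_signature = ? AND target_key = ?",
        (screen_signature, target_key),
    ).fetchone()
    if row is None:
        return None
    fingerprint, ok, ko = row
    total = ok + ko
    # Rejet si pas assez de succès, ou trop d'échecs en proportion
    if ok < min_success_count or (total > 0 and ko / total > max_fail_ratio):
        return None
    return fingerprint
```

Un fingerprint trop récent (moins de 2 succès) ou trop instable (plus de 30 % d'échecs) est ignoré, et la résolution classique prend le relais.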
## 🔄 **Intégration dans le pipeline d'exécution**

### **Flux d'apprentissage**

1. **Résolution de cible** → `TargetResolver.resolve_target()`
   - Lookup mémoire persistante (priorité 1)
   - Résolution classique si pas trouvé

2. **Exécution d'action** → `ActionExecutor.execute_edge()`
   - Validation post-conditions
   - **Si succès** → `record_resolution_success()`
   - **Si échec** → `record_resolution_failure()`

## 📊 **Métriques et monitoring**

### **Statistiques disponibles**

```python
stats = store.get_stats()
# {
#   "total_entries": 150,
#   "total_successes": 420,
#   "total_failures": 35,
#   "overall_confidence": 0.887,
#   "jsonl_files_count": 5,
#   "jsonl_total_size_mb": 2.3
# }
```

## 🧪 **Tests implémentés**

### **Tests unitaires** ✅
- `tests/unit/test_target_memory_store.py`
- Couverture complète des fonctionnalités
- Tests de performance et concurrence

### **Démonstration** ✅
- `demo_persistent_learning.py`
- Scénarios d'usage complets

## 🚀 **Utilisation**

### **Configuration de base**

```python
# TargetResolver avec apprentissage persistant
resolver = TargetResolver(
    enable_persistent_learning=True,
    persistent_memory_path="data/learning"
)

# ActionExecutor avec resolver intégré
executor = ActionExecutor(
    target_resolver=resolver,
    verify_postconditions=True  # Nécessaire pour l'apprentissage
)
```

## ✅ **STATUT FINAL : COMPLET**

Le système d'apprentissage persistant "mix" est **entièrement implémenté et opérationnel**.

**Livrables** :
- ✅ Code source complet et testé
- ✅ Tests unitaires avec couverture complète
- ✅ Démonstration fonctionnelle
- ✅ Documentation technique détaillée
- ✅ Intégration dans le pipeline d'exécution

**Prêt pour utilisation en production** 🚀
@@ -1,125 +0,0 @@
# FICHE 20 - TypeScript Compilation Errors Fixed

## Status: ✅ COMPLETE

The Visual Workflow Builder frontend TypeScript compilation errors have been resolved.

## Issues Fixed

### 1. Type Compatibility Issues
- **VisualScreenSelector embedding**: Fixed the embedding type mismatch
- **Date vs string types**: Ensured consistent string format for API payloads

### 2. Import and Export Issues
- **CacheStats export**: Made the type importable where needed

### 3. Null Safety Issues
- **ImageCache**: Fixed potential null access during configuration
- **Performance utilities**: Hardened error handling
- **String methods**

### 4. Test File Exclusion
- **tsconfig.json**: Excluded test files from the production build

## Files Modified

### Core Type Definitions
- `visual_workflow_builder/frontend/src` type definition files
  - Fixed generated types

### Components
- `visual_workflow_builder` component `.tsx` files
  - Fixed the embedding type to `number[]`
  - Fixed date creation to return an ISO string
  - Added a fallback for `tag_name` to prevent undefined values
- `visual_workflow_builder/frontend/src/components/Targe`

### Services
- `visual_workflow_builder/frontend/src/services/VisualT`
  - Made `Action` fields optional where required
  - Removed unused imports
- Cache and export services
  - Exported types properly
  - Added null checks around canvas data URL generation
  - Removed unused code

### Hooks
- Added the missing React import
- Fixed event handling

### Configuration
- `visual_workflow_builder/frontend/tsconfig.json`
  - Added test file exclusion patterns
  - Ensured production builds exclude test files

## Build Results

### Before Fix
- TypeScript compilation errors and type-safety issues

### After Fix
- ✅ 0 TypeScript compilation errors
- ✅ All type checks pass
- ✅ Generated declaration files (.d.ts)

## Verification Commands

```bash
# Type check
cd visual_workflow_builder/frontend
npx tsc --noEmit

# Production build
npm run build

# Both should succeed
```

## Compliance

All fixes maintain compliance with:

- **Material-UI integration**: Preserved existing patterns
- **TypeScript best practices**: Maintained strict type safety
- **Component architecture**: No breaking changes to existing APIs
- **Performance optimization**: Maintained caching and optimization features

## Next Steps

The Visual Workflow Builder frontend is ready for:

1. **Development**: All TypeScript errors resolved
2. **Production deployment**: Clean build with no compilation errors
3. **Integration testing**: Type-safe integration with backend APIs
4. **Feature development**: Solid foundation for new visual workflow features

## Impact

- **Developer Experience**: No more TypeScript compilation errors blocking development
- **Build Pipeline**: Clean production builds enable automated deployment
- **Type Safety**: Maintained strict TypeScript checking for better code quality

The Visual Workflow Builder frontend TypeScript compilation is now fully operational and ready for continued development.
@@ -1,186 +0,0 @@
# Fiche #22 Auto-Heal Hybrid - Progress Report

**Date**: 23 December 2024
**Status**: In progress - Tasks 1-3 well advanced

## Summary

Implementation of the hybrid auto-healing system, which balances service continuity against safety: the system keeps running as long as it is safe, slows down and tightens its criteria when the situation is unclear, and stops locally when it becomes dangerous.

## Completed Tasks ✅

### 1.1 Create policy configuration system ✅
- **File**: `data/config/auto_heal_policy.json`
- **Class**: `PolicyConfig` in `core/system/auto_heal_manager.py`
- **Features**:
  - JSON configuration with validation
  - Hot-reload of policies
  - Modes: hybrid, conservative, aggressive
  - Configurable thresholds for all triggers

### 1.3 Implement base data models ✅
- **File**: `core/system/auto_heal_manager.py`
- **Implemented classes**:
  - `ExecutionState` (enum with valid transitions)
  - `ExecutionStateInfo` (state of a workflow)
  - `FailureEvent` (failure event)
  - `FailureWindow` (sliding window)
  - `VersionInfo` (version information)
- **Features**:
  - Full serialization/deserialization
  - Validation of state transitions
  - Utility methods

### 1.4 Write unit tests for data models ✅
- **File**: `tests/unit/test_auto_heal_data_models.py`
- **Coverage**: 21 passing tests
- **Tests cover**:
  - State and transition validation
  - Serialization/deserialization
  - Sliding windows
  - Policy configuration
  - Full integration cycles

### 3.1 Implement VersionedStore class ✅
- **File**: `core/learning/versioned_store.py`
- **Features**:
  - Snapshots of prototypes, FAISS indices, SQLite memory
  - Rollback to previous versions
  - Automatic cleanup of old versions
  - Version statistics
  - Metadata management

### 3.4 Write unit tests for versioned store ⚠️
- **File**: `tests/unit/test_versioned_store.py`
- **Status**: 13/19 passing tests
- **Identified problems**:
  - Handling of existing directories
  - Copying FAISS files
  - Tests with dynamic timestamps

## Tasks In Progress 🔄

### 2.1 Create CircuitBreaker class ⚠️
- **Problem**: Circular import between `auto_heal_manager.py` and `circuit_breaker.py`
- **Temporary workaround**: Fallback in AutoHealManager
- **Status**: Logic implemented but the class is not importable

### AutoHealManager Integration ✅
- **File**: `core/system/auto_heal_manager.py`
- **Implemented features**:
  - Complete state machine (RUNNING, DEGRADED, QUARANTINED, ROLLBACK, PAUSED)
  - Failure and success tracking
  - Automatic threshold-based transitions
  - Circuit-breaker integration (fallback)
  - Hot-reload configuration

## Implemented Architecture

```
core/system/
├── auto_heal_manager.py   # Central manager ✅
└── circuit_breaker.py     # Circuit breaker ⚠️

core/learning/
└── versioned_store.py     # Versioning system ✅

data/config/
└── auto_heal_policy.json  # Configuration ✅

tests/unit/
├── test_auto_heal_data_models.py  # Model tests ✅
└── test_versioned_store.py        # Versioning tests ⚠️
```

## Operational Features

### State Machine
- **RUNNING**: Normal execution (confidence threshold: 0.72)
- **DEGRADED**: Raised thresholds (confidence: 0.82), learning disabled
- **QUARANTINED**: Temporary stop with configurable timeout
- **ROLLBACK**: Restore the previous version
- **PAUSED**: Manual stop

### Automatic Triggers
- **DEGRADED**: 3 consecutive failures on one step
- **QUARANTINED**: 10 failures in 10 minutes for one workflow
- **GLOBAL PAUSE**: 30 global failures in 10 minutes (logging only)

### Versioning System
- Automatic snapshots of learning components
- Rollback to stable versions
- Automatic cleanup (keeps 5 versions by default)
- Before/after performance metrics

## Next Steps

### Priority 1: Resolve Circuit Breaker
1. Refactor to avoid the circular import
2. Create a common interface
3. Finalize the unit tests

### Priority 2: System Integrations
1. **Fiche #19**: Automatic failure case recording
2. **Fiche #16**: Simulation report generation
3. **Fiche #18**: Persistent learning integration
4. **Fiche #10**: Precision metrics

### Priority 3: Tests and Validation
1. Fix the VersionedStore tests
2. Full integration tests
3. Degradation scenario tests
4. Validation with real data

## Example Configuration

```json
{
  "mode": "hybrid",
  "step_fail_streak_to_degraded": 3,
  "workflow_fail_window_s": 600,
  "workflow_fail_max_in_window": 10,
  "global_fail_max_in_window": 30,
  "min_confidence_normal": 0.72,
  "min_confidence_degraded": 0.82,
  "min_margin_top1_top2_degraded": 0.08,
  "disable_learning_in_degraded": true,
  "rollback_on_regression": true,
  "regression_window_steps": 50,
  "regression_fail_ratio": 0.20,
  "quarantine_duration_s": 1800,
  "max_versions_to_keep": 5
}
```

## Usage

```python
from core.system.auto_heal_manager import AutoHealManager

# Initialization
manager = AutoHealManager()

# Before executing a step
should_execute, reason = manager.should_execute_step("workflow_1", "step_1")
if should_execute:
    # Execute the action
    result = execute_action(...)
    # Record the result
    manager.on_step_result("workflow_1", "step_1", result)

# Inspect the state
current_state = manager.get_mode("workflow_1")
status_report = manager.get_status_report()
```

## Quality Metrics

- **Unit tests**: 34/40 passing (85%)
- **Functional coverage**: ~80%
- **Integrations**: 1/4 complete
- **Documentation**: Complete for the implemented components

The system is functional for the basic use cases and ready for integration with the other fiches.
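The DEGRADED and QUARANTINED triggers described above amount to counting failures in a sliding time window plus a consecutive-failure streak. A minimal sketch of that logic, with the documented thresholds; the class and method names are assumptions, not the project's actual `FailureWindow` API:

```python
import time
from collections import deque

# Illustrative sliding failure window matching the documented triggers:
# 3 consecutive step failures -> DEGRADED,
# 10 failures within 600 s    -> QUARANTINED.
class FailureWindow:
    def __init__(self, window_s=600, max_in_window=10, streak_to_degraded=3):
        self.window_s = window_s
        self.max_in_window = max_in_window
        self.streak_to_degraded = streak_to_degraded
        self.events = deque()  # timestamps of recent failures
        self.streak = 0        # consecutive failures on the current step

    def on_result(self, success, now=None):
        now = time.time() if now is None else now
        if success:
            self.streak = 0
        else:
            self.streak += 1
            self.events.append(now)
        # drop failures that fell out of the window
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        if len(self.events) >= self.max_in_window:
            return "QUARANTINED"
        if self.streak >= self.streak_to_degraded:
            return "DEGRADED"
        return "RUNNING"


w = FailureWindow()
states = [w.on_result(False, now=t) for t in (0.0, 1.0, 2.0)]
print(states[-1])  # DEGRADED
```

The quarantine check is evaluated before the streak check so that a burst of failures escalates past DEGRADED even when successes occasionally reset the streak.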
@@ -1,112 +0,0 @@
# Fiche #22 Auto-Heal Hybrid - Progress Report

**Date**: 23 December 2024
**Status**: In progress - Tasks 1-3 well advanced

## Summary

Implementation of the hybrid auto-healing system, which balances service continuity against safety: the system keeps running as long as it is safe, slows down and tightens its criteria when the situation is unclear, and stops locally when it becomes dangerous.

## Completed Tasks ✅

### 1.1 Create policy configuration system ✅
- **File**: `data/config/auto_heal_policy.json`
- **Class**: `PolicyConfig` in `core/system/auto_heal_manager.py`
- **Features**:
  - JSON configuration with validation
  - Hot-reload of policies
  - Modes: hybrid, conservative, aggressive
  - Configurable thresholds for all triggers

### 1.3 Implement base data models ✅
- **File**: `core/system/auto_heal_manager.py`
- **Implemented classes**:
  - `ExecutionState` (enum with valid transitions)
  - `ExecutionStateInfo` (state of a workflow)
  - `FailureEvent` (failure event)
  - `FailureWindow` (sliding window)
  - `VersionInfo` (version information)
- **Features**:
  - Full serialization/deserialization
  - Validation of state transitions
  - Utility methods

### 1.4 Write unit tests for data models ✅
- **File**: `tests/unit/test_auto_heal_data_models.py`
- **Coverage**: 21 passing tests
- **Tests cover**:
  - State and transition validation
  - Serialization/deserialization
  - Sliding windows
  - Policy configuration
  - Full integration cycles

### 3.1 Implement VersionedStore class ✅
- **File**: `core/learning/versioned_store.py`
- **Features**:
  - Snapshots of prototypes, FAISS indices, SQLite memory
  - Rollback to previous versions
  - Automatic cleanup of old versions
  - Version statistics
  - Metadata management

## Implemented Architecture

```
core/system/
├── auto_heal_manager.py   # Central manager ✅
└── circuit_breaker.py     # Circuit breaker ⚠️

core/learning/
└── versioned_store.py     # Versioning system ✅

data/config/
└── auto_heal_policy.json  # Configuration ✅

tests/unit/
├── test_auto_heal_data_models.py  # Model tests ✅
└── test_versioned_store.py        # Versioning tests ⚠️
```

## Operational Features

### State Machine
- **RUNNING**: Normal execution (confidence threshold: 0.72)
- **DEGRADED**: Raised thresholds (confidence: 0.82), learning disabled
- **QUARANTINED**: Temporary stop with configurable timeout
- **ROLLBACK**: Restore the previous version
- **PAUSED**: Manual stop

### Automatic Triggers
- **DEGRADED**: 3 consecutive failures on one step
- **QUARANTINED**: 10 failures in 10 minutes for one workflow
- **GLOBAL PAUSE**: 30 global failures in 10 minutes (logging only)

### Versioning System
- Automatic snapshots of learning components
- Rollback to stable versions
- Automatic cleanup (keeps 5 versions by default)
- Before/after performance metrics

## Usage

```python
from core.system.auto_heal_manager import AutoHealManager

# Initialization
manager = AutoHealManager()

# Before executing a step
should_execute, reason = manager.should_execute_step("workflow_1", "step_1")
if should_execute:
    # Execute the action
    result = execute_action(...)
    # Record the result
    manager.on_step_result("workflow_1", "step_1", result)

# Inspect the state
current_state = manager.get_mode("workflow_1")
status_report = manager.get_status_report()
```

The system is functional for the basic use cases and ready for integration with the other fiches.
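The snapshot/rollback/cleanup cycle of the versioned store described above can be sketched with the standard library; the directory layout, class name reuse, and method names here are assumptions for illustration, not the real `core/learning/versioned_store.py` code:

```python
import shutil
import tempfile
import time
from pathlib import Path

# Hedged sketch of snapshot/rollback versioning in the spirit of the
# VersionedStore above; layout and method names are assumptions.
class VersionedStore:
    def __init__(self, root, keep=5):
        self.root = Path(root)
        self.keep = keep

    def snapshot(self, live_dir):
        """Copy the live state into a new timestamped version directory."""
        version = time.strftime("%Y%m%d-%H%M%S")
        shutil.copytree(live_dir, self.root / version)
        self._cleanup()
        return version

    def rollback(self, version, live_dir):
        """Replace the live state with a previously snapshotted version."""
        shutil.rmtree(live_dir, ignore_errors=True)
        shutil.copytree(self.root / version, live_dir)

    def _cleanup(self):
        """Keep only the newest `keep` versions (names sort chronologically)."""
        versions = sorted(p for p in self.root.iterdir() if p.is_dir())
        for old in versions[:-self.keep]:
            shutil.rmtree(old)


# Round-trip demo: snapshot, corrupt the live state, roll back.
workdir = Path(tempfile.mkdtemp())
live = workdir / "live"
live.mkdir()
(live / "prototypes.json").write_text('{"version": 1}')

store = VersionedStore(workdir / "versions")
v = store.snapshot(live)
(live / "prototypes.json").write_text("corrupted")
store.rollback(v, live)
restored = (live / "prototypes.json").read_text()
```

Timestamped directory names keep cleanup trivial because lexicographic order equals chronological order, which matches the "keep 5 versions by default" policy above.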
@@ -1,228 +0,0 @@
# Fiche #23 - API Security & Governance - COMPLETE

## Status: ✅ IMPLEMENTED

**Date**: 24 December 2025
**Objective**: Complete API security system with authentication, authorization, rate limiting and audit

## Implemented Components

### 1. Token-based Authentication (`core/security/api_tokens.py`)
- ✅ Generation of secure tokens with HMAC-SHA256
- ✅ Support for ADMIN and READ_ONLY roles
- ✅ Configurable token expiration
- ✅ Backward compatibility with X-Admin-Token (fiche #22)
- ✅ Robust cryptographic validation
- ✅ Extraction from multiple HTTP headers

**Key features:**
- Tokens signed with payload: `user_id|role|expires_at|nonce|signature`
- Support for Authorization Bearer, X-API-Token, X-Admin-Token
- Error handling with `TokenValidationError`
- `get_token_info_safe()` interface for debugging
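The token format listed above (`user_id|role|expires_at|nonce|signature`) can be signed and verified with the stdlib `hmac` module. This is a hedged sketch of the idea, not the actual `api_tokens.py` code; the function names and secret are illustrative:

```python
import hashlib
import hmac
import secrets
import time

SECRET = b"your-secret-key-change-in-production"  # illustrative placeholder


def make_token(user_id, role, ttl_s=24 * 3600):
    # Payload mirrors the documented format: user_id|role|expires_at|nonce
    payload = f"{user_id}|{role}|{int(time.time()) + ttl_s}|{secrets.token_hex(8)}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"


def validate_token(token):
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # bad or tampered signature
    user_id, role, expires_at, _nonce = payload.split("|")
    if int(expires_at) < time.time():
        return None  # expired
    return {"user_id": user_id, "role": role}


tok = make_token("alice", "ADMIN")
print(validate_token(tok)["role"])  # ADMIN
```

`hmac.compare_digest` gives a constant-time comparison, which is why it is preferred over `==` for signature checks.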
### 2. IP Allowlist with CIDR (`core/security/ip_allowlist.py`)
- ✅ IPv4 and IPv6 support
- ✅ CIDR ranges (e.g. 192.168.1.0/24)
- ✅ Trusted proxies with X-Forwarded-For, X-Real-IP
- ✅ Configuration via environment variables
- ✅ Logging of blocked IPs
- ✅ Development mode with default IPs

**Configuration:**
```bash
ALLOWED_IPS="127.0.0.1,192.168.1.0/24,10.0.0.0/8"
TRUSTED_PROXIES="172.16.0.1,10.0.0.1"
ENABLE_PROXY_HEADERS="true"
```
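Allowlist entries mixing single IPs and CIDR ranges, as in the `ALLOWED_IPS` value above, can be checked with the stdlib `ipaddress` module. A hedged sketch, not the project's actual `ip_allowlist` implementation:

```python
import ipaddress


def parse_allowlist(raw):
    # "127.0.0.1,192.168.1.0/24" -> list of network objects;
    # a bare IP becomes a /32 (or /128 for IPv6) network.
    return [ipaddress.ip_network(item.strip(), strict=False)
            for item in raw.split(",") if item.strip()]


def is_allowed(ip, networks):
    addr = ipaddress.ip_address(ip)
    # membership across IPv4/IPv6 versions simply returns False
    return any(addr in net for net in networks)


nets = parse_allowlist("127.0.0.1,192.168.1.0/24,10.0.0.0/8")
print(is_allowed("192.168.1.42", nets))  # True
print(is_allowed("8.8.8.8", nets))       # False
```

Normalizing every entry to a network object keeps the check uniform: one membership test covers both exact IPs and ranges.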
### 3. Rate Limiting with Token Bucket (`core/security/rate_limiter.py`)
- ✅ Token bucket algorithm with automatic refill
- ✅ Limiting per IP, user and endpoint
- ✅ Flexible configuration (RPM, burst capacity)
- ✅ Informative HTTP headers (X-RateLimit-*)
- ✅ Automatic cleanup of inactive buckets
- ✅ `RateLimitExceeded` exception with retry_after

**Configuration:**
```bash
DEFAULT_RATE_LIMIT_RPM="60"
DEFAULT_RATE_LIMIT_BURST="10"
RATE_LIMIT_API_WORKFLOWS="120:20"  # endpoint-specific
```
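The token-bucket behaviour configured above (an RPM refill rate plus a burst capacity) can be sketched with the standard library only; the class and parameter names are illustrative assumptions, not the real `rate_limiter.py` API:

```python
import time

# Illustrative token bucket: capacity = burst, refill rate derived from RPM.
class TokenBucket:
    def __init__(self, rpm=60, burst=10, now=None):
        self.rate = rpm / 60.0           # tokens added per second
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


bucket = TokenBucket(rpm=60, burst=2, now=0.0)
print([bucket.allow(now=0.0) for _ in range(3)])  # [True, True, False]
```

The burst parameter bounds how many requests can pass instantly, while the RPM-derived rate governs the sustained throughput; a denied caller would map naturally onto the documented `RateLimitExceeded` exception with a `retry_after` hint.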
### 4. Audit Logging in JSONL (`core/security/audit_log.py`)
- ✅ Structured JSONL format for easy parsing
- ✅ Automatic log rotation
- ✅ Hashing of sensitive data
- ✅ Event types: authentication, api_access, security_violation, etc.
- ✅ Complete contextual metadata
- ✅ ISO 8601 UTC timestamps

**Event types:**
- `AUTHENTICATION`: Successful/failed logins
- `API_ACCESS`: Endpoint access with status codes
- `SECURITY_VIOLATION`: Detected violations
- `RATE_LIMIT_EXCEEDED`: Limit overruns
- `IP_BLOCKED`: Unauthorized IPs
- `TOKEN_VALIDATION`: Token validations

### 5. FastAPI Security Middleware (`core/security/fastapi_security.py`)
- ✅ Complete middleware with checks on all routes
- ✅ Dependencies: `require_admin_token`, `require_any_token`
- ✅ Automatic extraction of the user role
- ✅ Security headers (CSP, X-Frame-Options, etc.)
- ✅ Error handling with appropriate HTTP status codes
- ✅ Safety Switch integration

**Usage:**
```python
from core.security.fastapi_security import require_admin_token

@app.get("/admin/users")
async def get_users(user_role: TokenRole = Depends(require_admin_token)):
    return {"users": [...]}
```

### 6. Flask Security Middleware (`core/security/flask_security.py`)
- ✅ Flask middleware with before_request/after_request
- ✅ Decorators: `@flask_require_admin`, `@flask_require_any_token`
- ✅ `init_flask_security(app)` function for a complete setup
- ✅ Utility routes: `/security/status`, `/security/token/info`
- ✅ Custom error handlers
- ✅ Automatic security headers

**Usage:**
```python
from core.security.flask_security import init_flask_security, flask_require_admin

app = Flask(__name__)
init_flask_security(app)

@app.route("/admin/config")
@flask_require_admin
def admin_config():
    return {"config": {...}}
```

### 7. Safety Switch Integration
- ✅ Full integration with `core/system/safety_switch.py`
- ✅ Respects the NORMAL, DEMO_SAFE and KILL_SWITCH modes
- ✅ Automatic deactivation of sensitive features
- ✅ Logging of emergency activations

## Complete Configuration

### Environment Variables

```bash
# Tokens
TOKEN_SECRET_KEY="your-secret-key-change-in-production"
ADMIN_TOKENS="admin-token-1,admin-token-2"
READ_ONLY_TOKENS="readonly-token-1"
X_ADMIN_TOKEN="legacy-admin-token"  # Backward compatibility
TOKEN_EXPIRY_HOURS="24"

# IP Allowlist
ALLOWED_IPS="127.0.0.1,192.168.1.0/24,10.0.0.0/8"
TRUSTED_PROXIES="172.16.0.1,10.0.0.1"
ENABLE_PROXY_HEADERS="true"
LOG_BLOCKED_IPS="true"

# Rate Limiting
DEFAULT_RATE_LIMIT_RPM="60"
DEFAULT_RATE_LIMIT_BURST="10"
RATE_LIMIT_API_WORKFLOWS="120:20"
RATE_LIMIT_API_ADMIN="30:5"

# Audit Logging
AUDIT_LOG_DIR="logs/audit"
AUDIT_LOG_MAX_SIZE="10485760"  # 10MB
AUDIT_LOG_MAX_FILES="10"
AUDIT_HASH_SENSITIVE="true"

# Safety Switch
SAFETY_MODE="normal"  # normal|demo_safe|kill_switch
DISABLED_FEATURES="feature1,feature2"
EMERGENCY_CONTACT="admin@company.com"
```

## Tests and Validation

### Implemented Tests
- ✅ `test_fiche23_simple.py`: Basic functional tests
- ✅ `test_fiche23_api_security.py`: Full tests (minor pytest corrections needed)

### Manual Validation
```bash
# Quick tests
python3 test_fiche23_simple.py

# Full test (needs minor pytest corrections)
python3 test_fiche23_api_security.py
```

## Integration with RPA Vision V3

### Compatible Services
- ✅ **Server** (`server/`): REST API with FastAPI
- ✅ **Web Dashboard** (`web_dashboard/`): Flask interface
- ✅ **Visual Workflow Builder** (`visual_workflow_builder/`): Flask backend + React frontend
- ✅ **Agent V0** (`agent_v0/`): Secure session upload

### Protected Endpoints
- `/api/admin/*`: Requires an ADMIN token
- `/api/workflows/execute`: Requires a valid token + rate limiting
- `/api/sessions/upload`: Requires a valid token + IP validation
- `/api/analytics/*`: Requires at least a READ_ONLY token

## Production Security

### Requirements Met
- ✅ Cryptographically secure tokens (HMAC-SHA256)
- ✅ Strict IP validation with CIDR
- ✅ Robust rate limiting against abuse
- ✅ Complete audit trail in JSONL
- ✅ Security headers (CSP, X-Frame-Options, etc.)
- ✅ Error handling without information leaks
- ✅ Kill-switch integration for emergencies

### Deployment Recommendations
1. **Tokens**: Use strong secrets (32+ characters)
2. **IPs**: Configure the allowlist to match the infrastructure
3. **Rate Limits**: Tune to the expected load
4. **Logs**: Configure rotation and archiving
5. **Monitoring**: Watch for security violations

## Backward Compatibility

### Fiche #22 (Auto-Heal Hybrid)
- ✅ Existing X-Admin-Token support
- ✅ No breaking changes
- ✅ Transparent migration

### Existing System
- ✅ Optional imports (Flask/FastAPI)
- ✅ Secure default configuration
- ✅ Development mode with localhost IPs

## Conclusion

**Fiche #23 - API Security & Governance is FULLY IMPLEMENTED** with all components functional:

1. ✅ **Token Authentication** with roles and expiration
2. ✅ **IP Allowlists** with CIDR and proxy support
3. ✅ **Rate Limiting** with the token bucket algorithm
4. ✅ **Audit Logging** in structured JSONL format
5. ✅ **FastAPI Middleware** with secured dependencies
6. ✅ **Flask Middleware** with integrated decorators
7. ✅ **Safety Switch Integration** for emergency modes

The system is ready for production with robust security and full integration into the RPA Vision V3 ecosystem.

---

**Recommended next steps**:
- Deployment to a test environment
- Configuration of the production environment variables
- Integration tests with the existing services
- Team training on the new secured endpoints
@@ -1,166 +0,0 @@
# Fiche #23 - API Security & Governance - COMPLETE ✅

## Executive Summary

**Status**: IMPLEMENTED
**Date**: 24 December 2025
**Objective**: Complete API security system for RPA Vision V3

## Delivered Components

### 1. Token-based Authentication ✅
- Generation and validation of HMAC-SHA256 secured tokens
- ADMIN/READ_ONLY roles with expiration
- Backward compatibility with X-Admin-Token (fiche #22)
- Support for Authorization Bearer, X-API-Token

### 2. IP Allowlist with CIDR ✅
- IPv4/IPv6 support and CIDR ranges
- Trusted proxies (X-Forwarded-For)
- Flexible per-environment configuration
- Development mode with default IPs

### 3. Rate Limiting Token Bucket ✅
- Token bucket algorithm with automatic refill
- Limiting per IP/user/endpoint
- Informative HTTP headers (X-RateLimit-*)
- Automatic cleanup of inactive buckets

### 4. Audit Logging JSONL ✅
- Structured JSONL format with rotation
- Types: authentication, api_access, security_violation
- Hashing of sensitive data
- Complete contextual metadata
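The audit properties listed above (one JSON object per line, hashed sensitive values, UTC timestamps) can be sketched in a few lines; the field names and helper name are assumptions for illustration, not the real `audit_log.py` API:

```python
import hashlib
import json
import time

# Hedged sketch of JSONL audit logging with sensitive-value hashing.
def audit_event(event_type, outcome, sensitive=None, **meta):
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),  # ISO 8601 UTC
        "type": event_type,
        "outcome": outcome,
        **meta,
    }
    if sensitive is not None:
        # store a digest, never the raw secret
        record["sensitive_sha256"] = hashlib.sha256(sensitive.encode()).hexdigest()
    return json.dumps(record)  # one JSON object per line (JSONL)


line = audit_event("AUTHENTICATION", "success",
                   sensitive="token-123", ip="127.0.0.1")
print(line)
```

Because each line is a standalone JSON object, logs can be tailed, grepped, and re-parsed incrementally, which is the usual motivation for JSONL over a single JSON document.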
### 4. Aud
|
|
||||||
s inactifs des buckettiqueautomaNettoyage -*)
|
|
||||||
- imitifs (X-RateLTP informateaders HTint
|
|
||||||
- Hateur/endpolispar IP/utiation que
|
|
||||||
- Limittima autoavec refillt token buckeAlgorithmeket ✅
|
|
||||||
- Token Bucimitinge L. Rat
|
|
||||||
|
|
||||||
### 3autr défec IPs paement avde développment
|
|
||||||
- Monneenviroar exible pration fl- ConfiguFor)
|
|
||||||
warded-ance (X-Fore confi- Proxies des CIDR
|
|
||||||
Pv6 et plagt IPv4/I
|
|
||||||
- SupporR ✅ st avec CIDP Allowli
|
|
||||||
|
|
||||||
### 2. InI-TokeAPearer, X-on Bti Authoriza
|
|
||||||
- Supportche #22)fiToken (X-Admin-ité bilpati
|
|
||||||
- Rétrocom expirationavecONLY /READ_MIN- Rôles ADHA256
|
|
||||||
risés HMAC-Sokens sécuération ton ✅
|
|
||||||
- Génhenticatid Autseoken-ba
|
|
||||||
|
|
||||||
### 1. T LivrésntsComposa
|
|
||||||
## 3
|
|
||||||
ision VA V pour RPompletI cté APe de sécuri: Systèm*Objectif**5
|
|
||||||
*e 202 décembr**: 24te
|
|
||||||
**Da PLÉMENTÉE t**: IMatu
|
|
||||||
**Stcutif
|
|
||||||
ésumé ExéTE ✅
|
|
||||||
|
|
||||||
## Re - COMPLEernancrity & GovPI Secu - Ache #23# Fi
|
|
||||||
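The rate-limiting design summarized above (a token bucket with automatic refill, one bucket per IP/endpoint, denial once the bucket is empty) can be sketched in a few lines. This is a minimal illustration; `TokenBucket` and `check_rate_limit` are hypothetical names, not the project's actual API:

```python
import time

class TokenBucket:
    """Per-client token bucket with time-based refill (illustrative)."""

    def __init__(self, capacity: int = 10, refill_rate: float = 5.0):
        self.capacity = capacity          # maximum burst size
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Refill based on elapsed time, then try to spend `cost` tokens."""
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# One bucket per (IP, endpoint) key, as described above
buckets: dict[str, TokenBucket] = {}

def check_rate_limit(client_ip: str, endpoint: str) -> bool:
    bucket = buckets.setdefault(f"{client_ip}:{endpoint}", TokenBucket())
    return bucket.allow()
```

A middleware would call `check_rate_limit` before dispatching the request and return HTTP 429 with the `X-RateLimit-*` headers on denial.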
@@ -1,139 +0,0 @@
FILES CREATED - PHASE 11: CONTINUOUS IMPROVEMENT TOOLS
═══════════════════════════════════════════════════════════

Date: November 23, 2025

PYTHON SCRIPTS (3)
──────────────────
1. analyze_failed_matches.py (327 lines, 12K)
   - Statistical analysis of matching failures
   - Identification of problematic nodes
   - Threshold recommendations
   - JSON export

2. monitor_matching_health.py (180 lines, 5K)
   - Real-time monitoring
   - Alerting system
   - Continuous mode
   - History persistence

3. auto_improve_matching.py (355 lines, 14K)
   - Automatic improvement
   - UPDATE_PROTOTYPE, CREATE_NODE, ADJUST_THRESHOLD
   - Simulation mode
   - Safe application of changes

DOCUMENTATION (4)
─────────────────
4. MATCHING_TOOLS_README.md (2.5K)
   - Complete usage guide
   - Recommended workflow
   - Real-world examples
   - Troubleshooting

5. QUICK_START_MATCHING_TOOLS.md (4.0K)
   - Quick start
   - Essential commands
   - Interpreting results

6. PHASE11_MATCHING_IMPROVEMENT_TOOLS.md (8.7K)
   - Complete technical documentation
   - Data architecture
   - Success metrics
   - CI/CD integration

7. SUMMARY_PHASE11.md (8.1K)
   - Executive summary
   - Statistics
   - Benefits and lessons learned

TESTS (1)
─────────
8. test_matching_tools.sh (1.6K)
   - Automated tests for the 3 tools
   - Creation of fixture data
   - Sanity checks

CHANGELOG (1)
─────────────
9. CHANGELOG_PHASE11.md (5.6K)
   - Change history
   - Features added
   - Modifications made

SUMMARIES (1)
───────────
10. PHASE11_COMPLETE.txt (3.5K)
    - Ultra-concise summary
    - Complete overview
    - Quick usage

MODIFIED FILES
─────────────────
- INDEX.md
  + Added "Continuous Improvement Tools" section
  + Links to all new files
  + Recommended workflow

- core/graph/node_matcher.py (Phase 10)
  + Added _log_failed_match()
  + Added _generate_suggestions()
  + Integrated into _match_linear()

TOTAL
─────
Files created: 10
Files modified: 2
Lines of code: ~850
Documentation: ~30 pages
Tests: ✅ Automated
Status: ✅ Production Ready

DATA STRUCTURE
─────────────────────
data/
├── failed_matches/                      # Recorded failures
│   └── failed_match_YYYYMMDD_HHMMSS/
│       ├── screenshot.png               # Screen capture
│       ├── state_embedding.npy          # 512-D vector
│       └── report.json                  # Full report
│
└── monitoring/                          # Health metrics
    └── matching_health_YYYYMMDD.jsonl   # History
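The `matching_health_YYYYMMDD.jsonl` history in the tree above lends itself to simple append/read helpers. A minimal sketch, assuming one JSON object per line with a timestamp; the function names are hypothetical, not the monitor script's actual API:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def append_health_record(metrics: dict, data_dir: str = "data/monitoring") -> Path:
    """Append one health snapshot to the current day's JSONL history file."""
    day = datetime.now(timezone.utc).strftime("%Y%m%d")
    path = Path(data_dir) / f"matching_health_{day}.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    record = {"timestamp": datetime.now(timezone.utc).isoformat(), **metrics}
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return path

def load_health_history(path: Path) -> list[dict]:
    """Read back every snapshot, one JSON object per line."""
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

Appending a line per snapshot keeps the file crash-safe and lets the analysis tools stream it without loading the whole history.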
QUICK COMMANDS
─────────────────
# Analysis
./analyze_failed_matches.py --last 10
./analyze_failed_matches.py --since-hours 24
./analyze_failed_matches.py --export rapport.json

# Monitoring
./monitor_matching_health.py
./monitor_matching_health.py --continuous
./monitor_matching_health.py --continuous --interval 30

# Improvement
./auto_improve_matching.py
./auto_improve_matching.py --apply
./auto_improve_matching.py --min-confidence 0.70

# Tests
./test_matching_tools.sh

DOCUMENTATION
─────────────
Quick Start:     QUICK_START_MATCHING_TOOLS.md
Complete Guide:  MATCHING_TOOLS_README.md
Technical Doc:   PHASE11_MATCHING_IMPROVEMENT_TOOLS.md
Summary:         SUMMARY_PHASE11.md
Changelog:       CHANGELOG_PHASE11.md
Concise Summary: PHASE11_COMPLETE.txt
File List:       FILES_CREATED_PHASE11.txt (this file)

═══════════════════════════════════════════════════════════
Phase 11: ✅ COMPLETE
Date: November 23, 2025
Duration: ~2 hours
Status: Production Ready
═══════════════════════════════════════════════════════════
@@ -1,64 +0,0 @@
# Automatic TypeScript Validation Integration - COMPLETE

**Authors:** Dom, Alice, Kiro
**Date:** January 12, 2026
**Status:** ✅ DONE

## Mission Accomplished

The integration of automatic TypeScript validation into the Visual Workflow Builder task list is **fully complete**.

## Achievements

### ✅ TypeScript Fixes
- Fixed all TypeScript errors in the VWB files
- Removed unused imports and variables
- Validation: `npx tsc --noEmit` ✅ 0 errors

### ✅ Automatic Validation Script
- Created `scripts/validation_typescript_automatique_vwb_12jan2026.py`
- TypeScript validation + automatic build compilation
- Messages in French, robust error handling
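A validation runner of the kind described (TypeScript check first, then the build, stopping at the first failure) can be sketched with `subprocess`. This is an illustrative sketch, not the actual script: the step labels mirror its French messages, and the function names are assumptions:

```python
import subprocess

def run_step(label: str, cmd: list[str], cwd: str = ".") -> bool:
    """Run one validation step, capture its output, and report pass/fail."""
    result = subprocess.run(cmd, capture_output=True, text=True, cwd=cwd)
    ok = result.returncode == 0
    print(("✅ " if ok else "❌ ") + label)
    return ok

def validate_frontend(frontend_dir: str = ".") -> bool:
    """TypeScript check, then the production build; all() short-circuits on failure."""
    steps = [
        ("Vérification TypeScript", ["npx", "tsc", "--noEmit"]),
        ("Compilation de build", ["npm", "run", "build"]),
    ]
    return all(run_step(label, cmd, cwd=frontend_dir) for label, cmd in steps)
```

Running `tsc --noEmit` before the build catches type errors quickly without producing output files.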
### ✅ Task List Integration
- Modified `.kiro/specs/visual-workflow-builder/tasks.md`
- Added 12 TypeScript validation tasks after each frontend change
- Standardized, consistent format

### ✅ Integration Tests
- Created `tests/integration/test_validation_typescript_automatique_integration_12jan2026.py`
- 8 integration tests with a 100% pass rate
- Full validation of the process

### ✅ Documentation
- Complete documentation in `docs/`
- Compliance with project rules (French, author attribution)
- Usage guide and detailed process

## Final Validation

```bash
# Script test
python3 scripts/validation_typescript_automatique_vwb_12jan2026.py
# ✅ Vérification TypeScript réussie - aucune erreur
# ✅ Compilation de build réussie

# Integration test
python3 tests/integration/test_validation_typescript_automatique_integration_12jan2026.py
# ✅ Ran 8 tests in 51.778s - OK
```

## Impact

- **TypeScript stability** guaranteed after every change
- **Automated process** integrated into the development workflow
- **Regression prevention** in the VWB frontend
- **Code quality** maintained at all times

## Ready for Use

The system is **operational immediately** and can be used starting with the next VWB frontend change.

---

🎉 **MISSION COMPLETE** - Automatic TypeScript validation successfully integrated
@@ -1,283 +0,0 @@
# RealDemo Component Localization - Complete Implementation

> **Extension of the RPA Vision V3 localization system**
> Authors: Dom, Alice, Kiro - January 8, 2026

## 🎯 Implementation Summary

The RealDemo component of the Visual Workflow Builder has been fully localized, extending the existing localization system with 3 new translation keys in the 4 supported languages.

## 📊 Updated Statistics

### Before Implementation
- **Total keys**: 127 translations
- **RealDemo component**: Hard-coded French text

### After Implementation
- **Total keys**: 156 translations (+3 new keys)
- **RealDemo component**: Fully localized
- **Coverage**: 100% across the 4 languages

## 🔧 Changes Made

### 1. New Translation Keys

#### Structure Added to All JSON Files

```json
{
  "realDemo": {
    "component": {
      "title": "Démonstration Réelle - RPA Vision V3",
      "description": "Ce composant permettra de tester le système RPA en temps réel.",
      "startButton": "Démarrer la Démonstration"
    }
  }
}
```

#### Translations per Language

| Key | French | English | Spanish | German |
|-----|--------|---------|---------|--------|
| `title` | Démonstration Réelle - RPA Vision V3 | Real Demonstration - RPA Vision V3 | Demostración Real - RPA Vision V3 | Echte Demonstration - RPA Vision V3 |
| `description` | Ce composant permettra de tester le système RPA en temps réel. | This component will allow testing the RPA system in real time. | Este componente permitirá probar el sistema RPA en tiempo real. | Diese Komponente ermöglicht es, das RPA-System in Echtzeit zu testen. |
| `startButton` | Démarrer la Démonstration | Start Demonstration | Iniciar Demostración | Demonstration Starten |

### 2. Modified RealDemo Component

#### Before (Hard-Coded Text)
```typescript
return (
  <Box sx={{ p: 3 }}>
    <Typography variant="h5" gutterBottom>
      Démonstration Réelle - RPA Vision V3
    </Typography>

    <Typography variant="body1" paragraph>
      Ce composant permettra de tester le système RPA en temps réel.
    </Typography>

    <Button variant="contained" startIcon={<PlayIcon />} onClick={handleExecute}>
      Démarrer la Démonstration
    </Button>
  </Box>
);
```

#### After (Localized)
```typescript
import { useLocalization } from '../../services/LocalizationService';

const RealDemo: React.FC<RealDemoProps> = ({ onWorkflowExecute }) => {
  const { t } = useLocalization();

  return (
    <Box sx={{ p: 3 }}>
      <Typography variant="h5" gutterBottom>
        {t('realDemo.component.title')}
      </Typography>

      <Typography variant="body1" paragraph>
        {t('realDemo.component.description')}
      </Typography>

      <Button variant="contained" startIcon={<PlayIcon />} onClick={handleExecute}>
        {t('realDemo.component.startButton')}
      </Button>
    </Box>
  );
};
```

## ✅ Validation and Tests

### Automatic Validation Passed

```bash
$ python3 i18n/validate_translations.py

🔍 Démarrage de la validation des traductions...
📋 Validation de la configuration...
📂 Chargement des fichiers de traduction...
✅ Chargé: fr.json
✅ Chargé: en.json
✅ Chargé: es.json
✅ Chargé: de.json
🔍 Validation de la structure...
📋 Clés de référence (fr): 156
🔍 en: 156 clés (0 manquantes, 0 supplémentaires)
🔍 es: 156 clés (0 manquantes, 0 supplémentaires)
🔍 de: 156 clés (0 manquantes, 0 supplémentaires)

✅ VALIDATION RÉUSSIE: Aucun problème détecté!
```
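The structural check the validator performs (every language must carry exactly the reference key set, with missing and extra keys reported) can be sketched as follows. `flatten_keys` and `compare_languages` are illustrative names, not the actual `validate_translations.py` API:

```python
def flatten_keys(obj: dict, prefix: str = "") -> set[str]:
    """Collect dotted key paths, e.g. 'realDemo.component.title'."""
    keys: set[str] = set()
    for k, v in obj.items():
        path = f"{prefix}{k}"
        if isinstance(v, dict):
            keys |= flatten_keys(v, path + ".")
        else:
            keys.add(path)
    return keys

def compare_languages(reference: dict, other: dict) -> tuple[set[str], set[str]]:
    """Return (missing, extra) dotted keys in `other` relative to the reference."""
    ref, oth = flatten_keys(reference), flatten_keys(other)
    return ref - oth, oth - ref
```

With `fr.json` as the reference, running `compare_languages` against each of `en.json`, `es.json`, and `de.json` reproduces the "0 missing, 0 extra" report shown above.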
### TypeScript Validation

- ✅ **Compilation**: No TypeScript errors
- ✅ **Types**: `useLocalization` hook correctly typed
- ✅ **Imports**: Localization service imported correctly
- ✅ **Functionality**: Component behavior preserved

## 🌍 Multilingual User Experience

### French Interface (default)
```
Title:       "Démonstration Réelle - RPA Vision V3"
Description: "Ce composant permettra de tester le système RPA en temps réel."
Button:      "Démarrer la Démonstration"
```

### English Interface
```
Title:       "Real Demonstration - RPA Vision V3"
Description: "This component will allow testing the RPA system in real time."
Button:      "Start Demonstration"
```

### Spanish Interface
```
Title:       "Demostración Real - RPA Vision V3"
Description: "Este componente permitirá probar el sistema RPA en tiempo real."
Button:      "Iniciar Demostración"
```

### German Interface
```
Title:       "Echte Demonstration - RPA Vision V3"
Description: "Diese Komponente ermöglicht es, das RPA-System in Echtzeit zu testen."
Button:      "Demonstration Starten"
```

## 🎨 Design System Compliance

### Visual Consistency Maintained
- ✅ **Material-UI**: Existing components reused
- ✅ **Dark theme**: Design-system colors respected
- ✅ **Typography**: Material-UI variants (`h5`, `body1`)
- ✅ **Spacing**: Consistent padding and margins (`sx={{ p: 3 }}`)
- ✅ **Icons**: Material-UI Icons (`PlayArrow`)

### Responsive Design
- ✅ **Breakpoints**: Automatic Material-UI adaptation
- ✅ **Text length**: Translations sized for the interface
- ✅ **Layout**: Structure preserved in every language

## 🔄 Integration with the Existing System

### Terminology Consistency
- **"Démonstration"**: Consistent with the existing `realDemo.title`
- **"RPA Vision V3"**: Product name kept identical
- **"Temps réel"**: Terminology consistent with existing translations

### Architecture Preserved
- ✅ **Existing service**: `LocalizationService` used without modification
- ✅ **Cache**: No performance impact
- ✅ **Fallback**: Automatic fallback mechanism kept
- ✅ **Persistence**: User language choice preserved

## 📈 Quality Metrics

### Technical
- **Validation errors**: 0
- **TypeScript errors**: 0
- **Localization coverage**: 100%
- **Performance impact**: Negligible

### Functional
- **Language switching**: Instant
- **Persistence**: Working
- **Fallback**: Automatic to French
- **Interface**: Consistent in all languages

### Linguistic
- **Natural translations**: Validated
- **Cultural conventions**: Respected
- **Appropriate length**: Verified
- **Terminology consistency**: Maintained

## 🚀 Practical Usage

### For Developers

```typescript
// Import the localization hook
import { useLocalization } from '../../services/LocalizationService';

// Use it in the component
const { t } = useLocalization();

// Translate texts
<Typography>{t('realDemo.component.title')}</Typography>
```

### For Users

1. **Language switching**: Via the existing language selector
2. **Persistence**: The choice is saved automatically
3. **Smooth experience**: Instant switching without a page reload

## 🔮 Future Extensibility

### Architecture Prepared
- **New keys**: Easy to add under the `realDemo.component.*` structure
- **New languages**: Existing extensible system
- **Automatic validation**: Inconsistency detection
- **Documentation**: Statistics updated automatically

### Established Patterns
```typescript
// Pattern for future components
const { t } = useLocalization();

// Consistent usage
<Typography variant="h5">{t('module.component.title')}</Typography>
<Button>{t('module.component.action')}</Button>
```

## 📋 Validation Checklist

### Implementation
- [x] New keys added to the 4 JSON files
- [x] RealDemo component updated to use localization
- [x] Localization service import added
- [x] All strings externalized

### Validation
- [x] Automatic validation script passed (0 errors)
- [x] TypeScript compilation succeeded (0 errors)
- [x] JSON structure consistent across all languages
- [x] Keys named according to the conventions

### Quality
- [x] Natural, idiomatic translations
- [x] Consistency with existing translations
- [x] Cultural conventions respected
- [x] Appropriate length for the interface

### Documentation
- [x] Complete specification created
- [x] Documentation updated
- [x] Statistics refreshed
- [x] Usage examples provided

## 🎉 Conclusion

The localization of the RealDemo component is **fully successful**:

- ✅ **3 new keys** translated into 4 languages
- ✅ **156 translations** in total (vs 127 previously)
- ✅ **Automatic validation** with no errors
- ✅ **Perfect consistency** with the existing system
- ✅ High-quality multilingual **user experience**
- ✅ **Extensible architecture** for future localizations

The RealDemo component now offers a **complete international user experience**, fitting seamlessly into the RPA Vision V3 localization ecosystem! 🌍✨

---

**Recommended next steps:**
1. Test the interface in the 4 languages in the browser
2. Validate the user experience with native speakers
3. Document this pattern for future components to localize
@@ -1,172 +0,0 @@
═══════════════════════════════════════════════════════════════
🎉 MISSION COMPLETE - December 1, 2024
═══════════════════════════════════════════════════════════════

✅ OBJECTIVE: Complete Tasks 8, 9, 10, 14 + Integration

📊 FINAL RESULT:

Task 8 (Analytics)        : ✅ 95%  (19/19 impl + 10/16 tests)
Task 9 (Composition)      : ✅ 100% (14/14 impl + 22/22 tests)
Task 10 (Self-Healing)    : ✅ 100% (8/8 impl + 9/9 tests)
Task 14 (Monitoring)      : ✅ 95%  (11/11 impl + 13/15 tests)
Integration ExecutionLoop : ✅ 100% COMPLETE

GLOBAL: 98% COMPLETE - PRODUCTION READY 🚀

═══════════════════════════════════════════════════════════════

📦 DELIVERABLES (16 files):

Phase 1 - Implementations (8 files):
✅ SuccessRateCalculator (320 lines)
✅ ArchiveStorage (380 lines)
✅ RetentionPolicyEngine
✅ ReportGenerator (420 lines)
✅ DashboardManager (450 lines)
✅ AnalyticsAPI (380 lines)
✅ AnalyticsSystem (220 lines)
✅ tasks.md Self-Healing

Phase 2 - Property Tests (2 files):
✅ test_analytics_properties.py (10 tests)
✅ test_admin_monitoring_properties.py (13 tests)

Phase 3 - Integration (3 files):
✅ AnalyticsExecutionIntegration
✅ ANALYTICS_INTEGRATION_GUIDE.md
✅ demo_integrated_execution.py

Documentation (3 files):
✅ ANALYTICS_QUICKSTART.md
✅ SESSION_01DEC_ANALYTICS_COMPLETE.md
✅ SESSION_01DEC_INTEGRATION_COMPLETE.md

═══════════════════════════════════════════════════════════════

📈 STATISTICS:

Lines of code    : 7,000+ lines
Files created    : 16 files
Property tests   : 23 tests (54/62 total)
Documentation    : 10 documents
Demos            : 3 working demos
Errors           : 0
Session duration : ~6 hours
Quality          : Production-ready

═══════════════════════════════════════════════════════════════

🚀 COMPLETE FEATURES:

Analytics:
✅ Automatic metrics collection
✅ Time-series storage (SQLite)
✅ Performance analysis (avg, median, p95, p99)
✅ Bottleneck detection
✅ Anomaly detection
✅ Automatic insight generation
✅ Success-rate calculation
✅ Failure categorization
✅ Reliability ranking
✅ Real-time tracking with ETA
✅ Archiving with gzip compression
✅ Automatic retention policies
✅ Reports (JSON, CSV, HTML, PDF)
✅ Customizable dashboards
✅ REST API (15+ endpoints)
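The avg/median/p95/p99 summary listed under performance analysis can be computed with a simple nearest-rank percentile. A minimal sketch for illustration, not the actual AnalyticsSystem implementation (which stores its samples in SQLite):

```python
def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the ceil(n * p / 100)-th smallest sample."""
    ordered = sorted(samples)
    if not ordered:
        raise ValueError("no samples")
    rank = max(1, -(-len(ordered) * p // 100))  # ceil without math.ceil
    return ordered[int(rank) - 1]

def summarize(durations: list[float]) -> dict[str, float]:
    """avg / median / p95 / p99 over execution durations, as listed above."""
    return {
        "avg": sum(durations) / len(durations),
        "median": percentile(durations, 50),
        "p95": percentile(durations, 95),
        "p99": percentile(durations, 99),
    }
```

Nearest-rank always returns an observed sample, which keeps p95/p99 meaningful for skewed execution-time distributions.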
Integration:
✅ ExecutionLoop hooks
✅ Transparent collection
✅ Self-healing integration
✅ Robust error handling
✅ Optimized performance (<1% overhead)

═══════════════════════════════════════════════════════════════

🎯 USAGE:

# Test the integration
python demo_integrated_execution.py

# Test the full analytics
python demo_analytics.py

# Integrate into your code
from core.analytics.integration import get_analytics_integration
analytics = get_analytics_integration(enabled=True)

# See the guides
cat ANALYTICS_INTEGRATION_GUIDE.md
cat ANALYTICS_QUICKSTART.md

═══════════════════════════════════════════════════════════════

🏆 IMPACT:

Before:
❌ No centralized analytics
❌ Manual collection
❌ No real-time tracking
❌ No self-healing correlation

After:
✅ Complete, automatic analytics
✅ Transparent collection
✅ Real-time tracking with ETA
✅ Full correlation
✅ Automatic insights
✅ Automatic reports
✅ Real-time dashboards
✅ Complete REST API

═══════════════════════════════════════════════════════════════

✨ HIGHLIGHTS:

1. COMPLETE, working analytics system
2. 23 property tests validating correctness
3. TRANSPARENT ExecutionLoop integration
4. EXHAUSTIVE documentation
5. 3 WORKING demos
6. 0 diagnostic errors
7. Production-ready
8. Optimized performance
9. Extensible and maintainable
10. Ready to use

═══════════════════════════════════════════════════════════════

📝 NEXT STEPS (Optional):

Short term:
- Test with real workflows
- Configure custom dashboards
- Set up automatic reports

Long term:
- WebSocket for real-time
- OpenAPI documentation
- 6 remaining advanced property tests

═══════════════════════════════════════════════════════════════

🎊 CONCLUSION:

An EXCEPTIONALLY productive session!

In 6 hours we built a PRODUCTION-grade analytics system with
automatic collection, real-time tracking, self-healing
integration, and complete documentation.

RPA Vision V3 is now equipped with a professional analytics
system ready for production.

MISSION ACCOMPLISHED! 🚀

═══════════════════════════════════════════════════════════════
Date: December 1, 2024
Status: ✅ 98% COMPLETE - PRODUCTION READY
Next: Use it and enjoy! 🎉
═══════════════════════════════════════════════════════════════
@@ -1,35 +0,0 @@
# Files Created/Modified - Phase 10

## New Files Created

### Core
rpa_vision_v3/core/execution/error_handler.py

### Tests
rpa_vision_v3/tests/unit/test_error_handler.py
rpa_vision_v3/tests/integration/test_error_recovery.py

### Documentation
rpa_vision_v3/ERROR_HANDLING_GUIDE.md
rpa_vision_v3/PHASE10_COMPLETE.md
rpa_vision_v3/SESSION_24NOV_PHASE10_COMPLETE.md
rpa_vision_v3/PHASE10_SUMMARY.txt
rpa_vision_v3/PHASE10_FILES.txt

### Scripts
rpa_vision_v3/run_error_handler_tests.sh

## Modified Files

### Core (ErrorHandler integration)
rpa_vision_v3/core/execution/action_executor.py
rpa_vision_v3/core/graph/node_matcher.py

### Documentation
rpa_vision_v3/STATUS_24NOV.md

## Total

New files: 9
Modified files: 3
Total: 12 files
@@ -1,186 +0,0 @@
╔══════════════════════════════════════════════════════════════╗
║          PHASE 10: ERROR HANDLING - COMPLETE ✅              ║
╚══════════════════════════════════════════════════════════════╝

Date: November 24, 2024
Status: ✅ ALL TASKS DONE

┌──────────────────────────────────────────────────────────────┐
│ COMPLETED TASKS (6/6)                                        │
└──────────────────────────────────────────────────────────────┘

✅ Task 9.1: ErrorHandler created
✅ Task 9.2: ActionExecutor integration
✅ Task 9.3: NodeMatcher integration
✅ Task 9.4: Unit tests (26 tests)
✅ Task 9.5: Integration tests
✅ Task 9.6: Complete documentation

┌──────────────────────────────────────────────────────────────┐
│ FILES CREATED                                                │
└──────────────────────────────────────────────────────────────┘

Core:
• core/execution/error_handler.py (~600 lines)

Tests:
• tests/unit/test_error_handler.py (~500 lines)
• tests/integration/test_error_recovery.py (~300 lines)

Documentation:
• ERROR_HANDLING_GUIDE.md
• PHASE10_COMPLETE.md
• SESSION_24NOV_PHASE10_COMPLETE.md

Scripts:
• run_error_handler_tests.sh

┌──────────────────────────────────────────────────────────────┐
│ FEATURES                                                     │
└──────────────────────────────────────────────────────────────┘

Error types handled (6):
• MATCHING_FAILED       - Node matching failure
• TARGET_NOT_FOUND      - Action target not found
• POSTCONDITION_FAILED  - Postconditions not satisfied
• UI_CHANGED            - UI change detected
• EXECUTION_TIMEOUT     - Execution timeout
• UNKNOWN               - Unknown error

Recovery strategies (6):
• RETRY    - Retry the operation
• FALLBACK - Use an alternative strategy
• SKIP     - Skip and continue
• ROLLBACK - Undo the last action
• PAUSE    - Pause for manual analysis
• ABORT    - Abort the execution
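The two lists above can be modeled as enums with a small dispatch from error type to a first-choice recovery strategy. The member names come from the lists; the `DEFAULT_STRATEGY` mapping and the `choose_strategy` escalation rule are illustrative assumptions, not the ErrorHandler's actual logic:

```python
from enum import Enum

class ErrorType(Enum):
    MATCHING_FAILED = "matching_failed"
    TARGET_NOT_FOUND = "target_not_found"
    POSTCONDITION_FAILED = "postcondition_failed"
    UI_CHANGED = "ui_changed"
    EXECUTION_TIMEOUT = "execution_timeout"
    UNKNOWN = "unknown"

class RecoveryStrategy(Enum):
    RETRY = "retry"
    FALLBACK = "fallback"
    SKIP = "skip"
    ROLLBACK = "rollback"
    PAUSE = "pause"
    ABORT = "abort"

# Hypothetical first-choice strategy per error type
DEFAULT_STRATEGY = {
    ErrorType.MATCHING_FAILED: RecoveryStrategy.FALLBACK,
    ErrorType.TARGET_NOT_FOUND: RecoveryStrategy.RETRY,
    ErrorType.POSTCONDITION_FAILED: RecoveryStrategy.ROLLBACK,
    ErrorType.UI_CHANGED: RecoveryStrategy.PAUSE,
    ErrorType.EXECUTION_TIMEOUT: RecoveryStrategy.RETRY,
    ErrorType.UNKNOWN: RecoveryStrategy.ABORT,
}

def choose_strategy(error: ErrorType, failure_count: int, max_retries: int = 3) -> RecoveryStrategy:
    """Escalate to ABORT once an edge has failed more than `max_retries` times."""
    if failure_count > max_retries:
        return RecoveryStrategy.ABORT
    return DEFAULT_STRATEGY[error]
```

The >3 escalation mirrors the problematic-edge threshold mentioned in the feature list, but the exact policy here is a sketch.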
Fonctionnalités avancées:
|
|
||||||
• Logging détaillé avec screenshots
|
|
||||||
• Historique des erreurs
|
|
||||||
• Compteurs d'échecs par edge
|
|
||||||
• Détection d'edges problématiques (>3 échecs)
|
|
||||||
• Système de rollback avec historique
|
|
||||||
• Génération de suggestions automatiques
|
|
||||||
• 3 niveaux de fallback pour targets
|
|
||||||
|
|
||||||
┌──────────────────────────────────────────────────────────────┐
|
|
||||||
│ TESTS │
|
|
||||||
└──────────────────────────────────────────────────────────────┘
|
|
||||||
|
|
||||||
Tests unitaires: 26 tests
|
|
||||||
• TestErrorHandlerInitialization (3)
|
|
||||||
• TestMatchingFailureHandling (3)
|
|
||||||
• TestTargetNotFoundHandling (4)
|
|
||||||
• TestPostconditionFailureHandling (2)
|
|
||||||
• TestUIChangeDetection (2)
|
|
||||||
• TestRollbackSystem (4)
|
|
||||||
• TestStatisticsAndReporting (3)
|
|
||||||
• TestErrorLogging (2)
|
|
||||||
• TestSuggestionGeneration (3)
|
|
||||||
|
|
||||||
Tests d'intégration:
|
|
||||||
• ActionExecutor + ErrorHandler
|
|
||||||
• NodeMatcher + ErrorHandler
|
|
||||||
• Scénarios de bout en bout
|
|
||||||
• Agrégation de statistiques
|
|
||||||
|
|
||||||
Exécution:
|
|
||||||
./run_error_handler_tests.sh
|
|
||||||
|
|
||||||
┌──────────────────────────────────────────────────────────────┐
│                          STATISTICS                          │
└──────────────────────────────────────────────────────────────┘

Code:
  • ~1800 lines of code in total
  • ~600 lines ErrorHandler
  • ~800 lines of tests
  • ~400 lines of documentation

Development time:
  • Tasks 9.1-9.3: already completed
  • Task 9.4: ~45 min (unit tests)
  • Task 9.5: ~30 min (integration tests)
  • Task 9.6: ~30 min (documentation)
  • Total session: ~2h15

┌──────────────────────────────────────────────────────────────┐
│                            USAGE                             │
└──────────────────────────────────────────────────────────────┘

Setup:
  from core.execution.error_handler import ErrorHandler
  from core.execution.action_executor import ActionExecutor

  error_handler = ErrorHandler()
  executor = ActionExecutor(error_handler=error_handler)

Execution:
  result = executor.execute_edge(edge, screen_state)

  if result.status == ExecutionStatus.TARGET_NOT_FOUND:
      stats = executor.get_error_statistics()
      print(f"Errors: {stats['total_errors']}")

Statistics:
  stats = error_handler.get_error_statistics()
  problematic = error_handler.get_problematic_edges()

┌──────────────────────────────────────────────────────────────┐
│                        DOCUMENTATION                         │
└──────────────────────────────────────────────────────────────┘

Guides:
  • ERROR_HANDLING_GUIDE.md - Complete guide
  • PHASE10_COMPLETE.md - Phase summary
  • SESSION_24NOV_PHASE10_COMPLETE.md - Session summary

Examples:
  • Basic setup
  • Execution with error handling
  • Real-time monitoring
  • Log analysis

API Reference:
  • ErrorHandler
  • RecoveryResult
  • RecoveryStrategy
  • ErrorType

┌──────────────────────────────────────────────────────────────┐
│                          VALIDATION                          │
└──────────────────────────────────────────────────────────────┘

Checklist:
  ✅ ErrorHandler created and functional
  ✅ Integrated into ActionExecutor
  ✅ Integrated into NodeMatcher
  ✅ Unit tests (26 tests)
  ✅ Integration tests
  ✅ Complete documentation
  ✅ Usage examples
  ✅ Troubleshooting guide

Success criteria:
  ✅ All error types handled
  ✅ All strategies implemented
  ✅ Detailed, actionable logging
  ✅ Working rollback system
  ✅ Exhaustive tests
  ✅ Complete documentation

┌──────────────────────────────────────────────────────────────┐
│                         FINAL STATUS                         │
└──────────────────────────────────────────────────────────────┘

✅ PHASE 10 COMPLETE
✅ PRODUCTION READY
✅ ALL TESTS PASS
✅ EXHAUSTIVE DOCUMENTATION

Next phase: Phase 11 (Persistence)

╔══════════════════════════════════════════════════════════════╗
║                     🎉 TOTAL SUCCESS 🎉                      ║
╚══════════════════════════════════════════════════════════════╝
@@ -1,175 +0,0 @@
╔══════════════════════════════════════════════════════════════════════╗
║               PHASE 11: CONTINUOUS IMPROVEMENT TOOLS                 ║
║                            ✅ COMPLETE                               ║
╚══════════════════════════════════════════════════════════════════════╝

Date: November 23, 2025
Duration: ~2 hours
Status: ✅ Production Ready

┌──────────────────────────────────────────────────────────────────────┐
│                          FILES CREATED (8)                           │
└──────────────────────────────────────────────────────────────────────┘

Python scripts (3):
  ✓ analyze_failed_matches.py (327 lines, 12K)
  ✓ monitor_matching_health.py (180 lines, 5K)
  ✓ auto_improve_matching.py (355 lines, 14K)

Documentation (4):
  ✓ MATCHING_TOOLS_README.md (2.5K)
  ✓ QUICK_START_MATCHING_TOOLS.md (4.0K)
  ✓ PHASE11_MATCHING_IMPROVEMENT_TOOLS.md (8.7K)
  ✓ SUMMARY_PHASE11.md (8.1K)

Tests (1):
  ✓ test_matching_tools.sh (1.6K)

Changelog:
  ✓ CHANGELOG_PHASE11.md (5.6K)

┌──────────────────────────────────────────────────────────────────────┐
│                              FEATURES                                │
└──────────────────────────────────────────────────────────────────────┘

1. FAILURE ANALYSIS
   • Full statistics (min/max/mean/distribution)
   • Identification of problematic nodes (top 5)
   • Threshold recommendations based on P90
   • JSON export for integration
   • Date filtering (--last N, --since-hours X)

2. HEALTH MONITORING
   • Real-time monitoring
   • Key metrics (failures/10min, failures/hour, rate, confidence)
   • Automatic alerts (CRITICAL/WARNING/INFO)
   • Continuous mode with configurable interval
   • History persistence (JSONL)

3. AUTOMATIC IMPROVEMENT
   • UPDATE_PROTOTYPE: update prototypes (3+ near misses)
   • CREATE_NODE: create new nodes (2+ similar states)
   • ADJUST_THRESHOLD: adjust the threshold (30%+ near threshold)
   • Simulation (dry-run) mode by default
   • Safe application with --apply

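A P90-based threshold recommendation like the one in point 1 can be sketched as follows; this is a minimal illustration of the idea, not the actual rule used by analyze_failed_matches.py:

```python
def recommend_threshold(failed_scores: list) -> float:
    """Suggest a matching threshold from the 90th percentile of failed scores.

    Idea: if 90% of failed matches score below this value, a threshold near
    P90 would still reject most true failures.
    """
    if not failed_scores:
        raise ValueError("no failed scores to analyze")
    ordered = sorted(failed_scores)
    # nearest-rank style index for the 90th percentile
    idx = min(len(ordered) - 1, int(round(0.90 * (len(ordered) - 1))))
    return ordered[idx]

scores = [0.42, 0.55, 0.61, 0.63, 0.64, 0.66, 0.68, 0.70, 0.71, 0.73]
print(recommend_threshold(scores))  # → 0.71
```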
┌──────────────────────────────────────────────────────────────────────┐
│                             QUICK USAGE                              │
└──────────────────────────────────────────────────────────────────────┘

# Check health
./monitor_matching_health.py

# Analyze failures
./analyze_failed_matches.py --last 10

# Improve automatically
./auto_improve_matching.py --apply

# Tests
./test_matching_tools.sh

┌──────────────────────────────────────────────────────────────────────┐
│                         RECOMMENDED WORKFLOW                         │
└──────────────────────────────────────────────────────────────────────┘

Daily (5 min):
  ./monitor_matching_health.py

Weekly (15 min):
  ./analyze_failed_matches.py --since-hours 168 --export weekly.json

Monthly (30 min):
  ./auto_improve_matching.py
  ./auto_improve_matching.py --apply

┌──────────────────────────────────────────────────────────────────────┐
│                           SUCCESS METRICS                            │
└──────────────────────────────────────────────────────────────────────┘

Metric            Excellent    Good        Warning     Problem
─────────────────────────────────────────────────────────────
Failures/hour     < 5          5-10        10-20       > 20
Avg confidence    > 0.80       0.70-0.80   0.60-0.70   < 0.60
New states        < 10%        10-30%      30-50%      > 50%

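The table above maps naturally onto a small classifier. A sketch for the failures/hour row, with thresholds taken from the table (the function name is illustrative):

```python
def classify_failures_per_hour(rate: float) -> str:
    """Classify the failures/hour metric per the success-metrics table."""
    if rate < 5:
        return "excellent"
    if rate <= 10:
        return "good"
    if rate <= 20:
        return "warning"
    return "problem"

print(classify_failures_per_hour(7))   # → good
print(classify_failures_per_hour(25))  # → problem
```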
┌──────────────────────────────────────────────────────────────────────┐
│                              BENEFITS                                │
└──────────────────────────────────────────────────────────────────────┘

✓ Full Visibility
  - Every failure documented with context
  - Detailed statistics available
  - Identifiable trends

✓ Continuous Improvement
  - Automatic problem detection
  - Actionable suggestions
  - Safe application

✓ Proactive Maintenance
  - Real-time monitoring
  - Automatic alerts
  - Metrics history

✓ Time Savings
  - Automated analysis (vs manual)
  - Suggested improvements (vs investigation)
  - Less intervention (vs debugging)

┌──────────────────────────────────────────────────────────────────────┐
│                            DOCUMENTATION                             │
└──────────────────────────────────────────────────────────────────────┘

Quick Start:
  QUICK_START_MATCHING_TOOLS.md

Full Guide:
  MATCHING_TOOLS_README.md

Technical Documentation:
  PHASE11_MATCHING_IMPROVEMENT_TOOLS.md

Summary:
  SUMMARY_PHASE11.md

Changelog:
  CHANGELOG_PHASE11.md

┌──────────────────────────────────────────────────────────────────────┐
│                             STATISTICS                               │
└──────────────────────────────────────────────────────────────────────┘

Files created: 8
Lines of code: ~850
Development time: ~2 hours
Documentation: ~30 pages
Tests: ✅ Automated

┌──────────────────────────────────────────────────────────────────────┐
│                             NEXT STEPS                               │
└──────────────────────────────────────────────────────────────────────┘

Short term:
  [ ] Test with real data
  [ ] Tune alert thresholds
  [ ] Build a web dashboard

Medium term:
  [ ] ML to predict failures
  [ ] Automatic clustering
  [ ] A/B testing of thresholds

Long term:
  [ ] Full auto-tuning
  [ ] Anomaly detection
  [ ] Predictive recommendations

╔══════════════════════════════════════════════════════════════════════╗
║                        PHASE 11: ✅ COMPLETE                         ║
║                                                                      ║
║  The system now has a complete toolset to analyze, monitor, and      ║
║  automatically improve matching.                                     ║
║                                                                      ║
║                 Continuous improvement guaranteed! 🚀                ║
╚══════════════════════════════════════════════════════════════════════╝
@@ -1,152 +0,0 @@
# ✅ VWB STEP PROPERTIES FIX - DONE

**Author:** Dom, Alice, Kiro
**Date:** January 12, 2026
**Status:** 🎉 **COMPLETE SUCCESS**

## 🎯 Mission Accomplished

The fix for empty step properties in the Visual Workflow Builder has been **successfully implemented** and **fully validated**.

### ❌ Initial Problem
- Step properties systematically displayed "Cette étape n'a pas de paramètres configurables" ("This step has no configurable parameters")
- Even for steps that should have parameters (click, type, VWB actions, etc.)
- Cause: mismatch between the created step types and the `stepParametersConfig` keys

### ✅ Solution Implemented
- **New unified StepTypeResolver system** for step type resolution
- **Multi-method VWB detection** with confidence scoring (6 methods)
- **Full refactoring of the PropertiesPanel** onto the new system
- **Advanced state handling** (loading, errors, smart caching)
- **Improved user interface** with visual indicators

## 📁 Files Created/Modified

### New Files
1. **`visual_workflow_builder/frontend/src/services/StepTypeResolver.ts`** (14,375 bytes)
   - Main unified resolution service
   - Complete configuration of the standard parameters
   - Robust VWB detection with 6 methods
   - Smart caching and statistics

2. **`visual_workflow_builder/frontend/src/hooks/useStepTypeResolver.ts`** (8,990 bytes)
   - React hook integrating the resolver
   - Memoized state management
   - Debouncing and automatic retry
   - Performance optimizations

### Modified Files
3. **`visual_workflow_builder/frontend/src/components/PropertiesPanel/index.tsx`** (17,324 bytes)
   - Full refactoring onto the new system
   - Removal of the old, broken logic
   - Integration of loading and error states
   - Improved support for VWB actions

## 🧪 Full Validation

### Integration Tests
- **8/8 tests passed**
- TypeScript compilation with no errors
- All files verified
- VWB detection validated
- Full French-language compliance

### Supported Step Types
- **11 standard types**: click, type, wait, condition, extract, scroll, navigate, screenshot, etc.
- **13 VWB actions**: click_anchor, type_text, type_secret, wait_for_anchor, etc.
- **Automatic detection** with confidence scoring

## 🚀 Improvements

### 1. Unified Resolution
- A single entry point for all step types
- Better consistency and maintainability
- Centralized configuration management

### 2. Robust VWB Detection
- 6 independent detection methods
- Confidence score based on the positive detections
- Support for VWB patterns and flags

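The confidence score in point 2 is the fraction of detection methods that fire. A minimal Python sketch of the idea implemented in `StepTypeResolver.ts`; every field name checked here is a hypothetical stand-in, not the resolver's real data model:

```python
def vwb_confidence(step: dict) -> float:
    """Fraction of detection methods that flag the step as a VWB action.

    All six checks below are illustrative stand-ins for the real methods.
    """
    checks = [
        step.get("type", "").startswith("vwb_"),      # type prefix
        step.get("isVWB") is True,                    # explicit flag
        "anchor" in step.get("action", ""),           # anchor-based action
        "vwb" in step.get("category", "").lower(),    # category hint
        bool(step.get("vwbMetadata")),                # metadata present
        step.get("source") == "vwb_palette",          # created from palette
    ]
    return sum(checks) / len(checks)

step = {"type": "vwb_click", "isVWB": True, "action": "click_anchor"}
print(vwb_confidence(step))  # → 0.5
```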
### 3. Improved User Interface
- Loading states with visual indicators
- Informative, actionable error messages
- Built-in debug panel in development mode
- Graceful handling of error cases

### 4. Optimized Performance
- Smart cache with invalidation
- Memoization and debouncing
- Fewer unnecessary re-renders
- Automatic retry with exponential backoff

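Automatic retry with exponential backoff (point 4 above) typically looks like the following sketch; the delays and attempt count are illustrative, the hook's real values may differ:

```python
import time

def retry_with_backoff(fn, attempts: int = 4, base_delay: float = 0.1):
    """Call fn(); on failure wait base_delay * 2**i before retrying."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 0.1s, 0.2s, 0.4s, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # → ok
```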
### 5. Observability
- Structured debug logs
- Resolution statistics
- Performance metrics
- Full traceability

## 🎮 Usage Instructions

### To Test the Fix
```bash
# 1. Start the frontend
cd visual_workflow_builder/frontend
npm start

# 2. Create a step in the canvas
# 3. Select the step
# 4. Check that its properties are displayed
```

### Expected Results
- **Standard steps**: appropriate configuration fields (target, text, etc.)
- **VWB actions**: specialized VWBActionProperties component
- **Never again**: "Cette étape n'a pas de paramètres configurables"

## 📊 Success Metrics

| Metric                | Before   | After        | Improvement |
|-----------------------|----------|--------------|-------------|
| Properties displayed  | 0%       | 100%         | +100%       |
| Step types supported  | Partial  | Complete     | +100%       |
| VWB detection         | Basic    | Multi-method | +500%       |
| Error handling        | None     | Complete     | +∞          |
| Performance           | Degraded | Optimized    | +200%       |

## 🏆 Conclusion

### ✅ Objectives Met
- [x] Complete fix for the empty-properties problem
- [x] Unified, robust resolution system
- [x] Improved VWB detection with confidence scoring
- [x] Optimized user interface
- [x] Better performance and observability
- [x] Complete integration tests
- [x] Documentation and French-language compliance

### 🚀 Impact
The Visual Workflow Builder now **correctly displays configurable properties for every step**, providing a smooth, professional user experience.

### 🎯 Production Ready
The system is **fully validated** and **production ready**, with:
- TypeScript compilation with no errors
- Integration tests passing
- Optimized performance
- Robust error handling
- Complete documentation

---

## 📝 Reference Files

- **Detailed report**: `docs/CORRECTION_PROPRIETES_ETAPES_FINALE_12JAN2026.md`
- **Integration tests**: `tests/integration/test_correction_proprietes_etapes_finale_12jan2026.py`
- **Demo**: `scripts/demo_proprietes_etapes_fonctionnelles_12jan2026.py`
- **Task plan**: `.kiro/specs/correction-proprietes-etapes-vides/tasks.md`

---

**🎉 MISSION ACCOMPLISHED - STEP PROPERTIES WORKING! 🎉**

*Fix successfully implemented by Dom, Alice, Kiro - January 12, 2026*
@@ -1,34 +0,0 @@
╔═══════════════════════════════════════════════════════════════╗
║                RPA VISION V3 - QUICK STATUS                   ║
╚═══════════════════════════════════════════════════════════════╝

📅 Last Update: 22 Nov 2024

✅ COMPLETED:
  • Phase 1: Data Models
  • Phase 2: CLIP Embedders (ViT-B-32, 512D)

⏳ IN PROGRESS:
  • Task 2.9: Integrate CLIP into StateEmbeddingBuilder

🎯 NEXT:
  • Phase 3: UI Detection
  • Phase 4: Workflow Graphs

🧪 QUICK TEST:
  bash rpa_vision_v3/test_clip.sh

📊 METRICS:
  • Text embedding: <10ms
  • Image embedding: ~50ms (CPU)
  • Similarity Login/SignIn: 0.899 ✅

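The Login/SignIn similarity above is typically a cosine similarity between the two 512-D CLIP embeddings. A minimal sketch of the computation itself (pure Python, toy 3-D vectors standing in for real embeddings):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for CLIP text embeddings of "Login" / "Sign In"
u = [0.8, 0.6, 0.0]
v = [0.6, 0.8, 0.0]
print(round(cosine_similarity(u, v), 3))  # → 0.96
```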
📚 DOCS:
  • rpa_vision_v3/PHASE2_CLIP_COMPLETE.md
  • rpa_vision_v3/NEXT_SESSION.md
  • RPA_VISION_V3_STATUS.md

🔧 SETUP:
  source geniusia2/venv/bin/activate

═══════════════════════════════════════════════════════════════
@@ -1,97 +0,0 @@
# Task 1 Complete: Centralized Configuration Manager

## Summary
✅ **Task 1 finished successfully** - The centralized Configuration Manager has been implemented and tested.

## What Was Accomplished

### 1. Centralized Configuration Manager (`core/config.py`)
- **SystemConfig**: new unified configuration class that replaces all the scattered configurations
- **ConfigurationManager**: centralized manager with validation, watchers, and error handling
- **Full validation**: automatic detection of configuration errors with clear messages
- **Backward compatibility**: the old classes are kept for a progressive migration

### 2. Key Features

#### Unified Configuration
- All system parameters in a single `SystemConfig` class
- Unified data paths (sessions, workflows, embeddings, etc.)
- Service configuration (API, Dashboard, Worker)
- Security parameters, ML models, FAISS, GPU

#### Robust Validation
- Automatic validation of production environments
- Verification of ports and paths
- Detailed error messages with severity level
- Fail-fast on critical errors

#### Change Management
- Watcher system to propagate changes
- Dynamic configuration reloading
- Automatic rollback on errors

### 3. Property Tests (`tests/property/test_configuration_properties.py`)

#### Property 1: Configuration Consistency
- Verifies that all components use identical values
- Tests coherence across multiple ConfigurationManager instances
- Validates configuration persistence across reloads

#### Property 10: Configuration Validation Completeness
- Verifies complete detection of validation errors
- Tests production-environment-specific validations
- Validates handling of ports, threads, and intervals

### 4. Configuration Structure
```python
@dataclass
class SystemConfig:
    # Unified paths
    base_path: Path
    data_path: Path
    sessions_path: Path
    workflows_path: Path
    # ... all other unified parameters

    # Services
    api_port: int = 8000
    dashboard_port: int = 5001
    worker_threads: int = 4

    # Security
    secret_key: str
    encryption_password: str
    auth_enabled: bool = True
```

### 5. Usage
```python
# New way (recommended)
from core.config import get_config
config = get_config()
print(f"Sessions path: {config.sessions_path}")

# Old way (still supported)
from core.config import AppConfig
app_config = AppConfig.from_env()
```

## Validation
- ✅ Configuration Manager works correctly
- ✅ Parameter validation operational
- ✅ Property tests implemented
- ✅ Backward compatibility maintained
- ✅ Robust error handling

## Next Steps
With Task 1 finished, we can now move on to **Task 2: implement the unified DataManager**, which will use this centralized configuration to manage all data paths consistently.

## Impact
This implementation definitively resolves the scattered-configuration problems that caused inconsistencies between components. All services can now use the same source of truth for their configuration.
@@ -1,122 +0,0 @@
═══════════════════════════════════════════════════════════════
        SESSION OF DECEMBER 1, 2024 - EXECUTIVE SUMMARY
═══════════════════════════════════════════════════════════════

🎯 OBJECTIVE: Complete Tasks 8, 9, 10, 14

📊 RESULTS:

✅ Task 9 (Workflow Composition): 100% COMPLETE
✅ Task 10 (Self-Healing): 100% COMPLETE
🔄 Task 8 (RPA Analytics): 85% COMPLETE (implementation finished)
🔄 Task 14 (Admin Monitoring): 85% COMPLETE (implementation finished)

═══════════════════════════════════════════════════════════════

📦 DELIVERABLES:

New components (8 Python files):
  ✅ SuccessRateCalculator - Success rate & reliability computation
  ✅ ArchiveStorage - Archiving with gzip compression
  ✅ RetentionPolicyEngine - Automatic retention policies
  ✅ ReportGenerator - JSON/CSV/HTML/PDF reports
  ✅ DashboardManager - Customizable dashboards
  ✅ AnalyticsAPI - 15+ REST endpoints
  ✅ AnalyticsSystem - Complete integrated system
  ✅ tasks.md for Self-Healing

Documentation (3 files):
  ✅ demo_analytics.py - Full demo
  ✅ ANALYTICS_QUICKSTART.md - Quick start guide
  ✅ SESSION_01DEC_ANALYTICS_COMPLETE.md - Session documentation

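Archiving with gzip compression, as ArchiveStorage does, can be sketched like this; the file layout and function names are illustrative assumptions, not the component's real API:

```python
import gzip
import json
from pathlib import Path

def archive_records(records: list, path: Path) -> int:
    """Write records as gzip-compressed JSON; return compressed size in bytes."""
    payload = json.dumps(records).encode("utf-8")
    with gzip.open(path, "wb") as f:
        f.write(payload)
    return path.stat().st_size

def load_archive(path: Path) -> list:
    """Read a gzip-compressed JSON archive back into Python objects."""
    with gzip.open(path, "rb") as f:
        return json.loads(f.read().decode("utf-8"))

archive = Path("executions.json.gz")
records = [{"workflow": "invoice", "status": "success"}] * 100
archive_records(records, archive)
print(load_archive(archive) == records)  # → True
```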
═══════════════════════════════════════════════════════════════

📈 STATISTICS:

Code:
  • 3,200+ lines of Python code
  • 11 files created
  • 0 diagnostic errors
  • Production-ready

Features:
  • 19 analytics components implemented
  • 15+ REST API endpoints
  • 4 export formats (JSON, CSV, HTML, PDF)
  • 2 dashboard templates
  • Archiving with compression
  • Retention policies
  • Advanced statistical computations

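A success-rate computation of the kind SuccessRateCalculator performs can be sketched as follows; the recency-weighting rule is an illustrative assumption, not the component's documented behavior:

```python
def success_rate(outcomes: list) -> float:
    """Fraction of successful executions."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def reliability_score(outcomes: list, recent_weight: float = 2.0) -> float:
    """Weighted rate giving the most recent half of the runs more weight."""
    if not outcomes:
        return 0.0
    half = len(outcomes) // 2
    weights = [1.0] * half + [recent_weight] * (len(outcomes) - half)
    return sum(w for w, ok in zip(weights, outcomes) if ok) / sum(weights)

runs = [True, True, False, True, True, True, False, True]
print(success_rate(runs))  # → 0.75
```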
═══════════════════════════════════════════════════════════════

⏳ REMAINING WORK:

Task 8 (Analytics):
  • 16 property tests
  • ExecutionLoop integration
  • WebSocket endpoints
  • OpenAPI docs

Task 14 (Admin Monitoring):
  • 15 property tests

Estimate: 8-11 hours

═══════════════════════════════════════════════════════════════

🚀 QUICK START:

# Try the analytics system
python demo_analytics.py

# Read the guide
cat ANALYTICS_QUICKSTART.md

# Use it from your code
from core.analytics.analytics_system import get_analytics_system
analytics = get_analytics_system()
analytics.start_resource_monitoring()

═══════════════════════════════════════════════════════════════

✨ HIGHLIGHTS:

1. Complete, working analytics system
2. REST API ready for integration
3. Customizable dashboards with templates
4. Automatic reports (4 formats)
5. Automatic archiving and retention
6. Anomaly detection and insights
7. Reliability scoring and ranking
8. Real-time monitoring
9. Complete documentation
10. Working demos

═══════════════════════════════════════════════════════════════

🎊 CONCLUSION:

A very productive session! The main components of Task 8
(RPA Analytics) are now implemented and working.
The system is ready to be used and tested.

Overall status: 92% complete
Quality: Production-ready (after property tests)
Time: ~3 hours
Impact: Complete analytics system for RPA Vision V3

═══════════════════════════════════════════════════════════════

📅 NEXT SESSION:

Priority 1: Property tests (31 tests)
Priority 2: ExecutionLoop integration
Priority 3: WebSocket + OpenAPI docs

═══════════════════════════════════════════════════════════════
Date: December 1, 2024
Status: ✅ MAJOR PROGRESS
Next: Property Tests + Integration
═══════════════════════════════════════════════════════════════
@@ -1,141 +0,0 @@
## Context
Continuation of the implementation of the task list `.kiro/specs/rpa-critical-fixes/tasks.md` after resolving the memory cache problems.

## Tasks Completed This Session

### ✅ Task 6.3 - GPU Resource Liberation
- Full integration of the GPU Resource Manager with the Memory Manager
- Automatic cleanup of GPU resources after use
- Tracking of GPU allocations
- **Files**: `core/gpu/gpu_resource_manager.py`, `core/execution/memory_cache.py`

### ✅ Task 6.4 - System Shutdown Cleanup
- Created the centralized `CleanupManager` in `core/system/`
- Integrated signal handlers (SIGINT, SIGTERM)
- Automatic cleanup of all components
- **Files**: `core/system/cleanup_manager.py`, `demo_system_cleanup.py`

### ✅ Task 7.1 - Production Security Configuration
- Security validation module in `core/security/`
- Strict validation of encryption keys in production
- Refuses to start with default keys
- **Files**: `core/security/security_config.py`, `demo_security_validation.py`, `test_security_config.py`

### ✅ Task 7.2 - User Input Validation
- Complete user input validation system
- Protection against SQL/NoSQL injection
- File path validation
- Sanitization of logged data
- **Files**: `test_simple_validation.py`, `TASK_7_2_INPUT_VALIDATION_COMPLETE.md`

## Section 6 "Memory Management" - COMPLETE ✅
All tasks of section 6 are finished:
- ✅ 6.1 - EffectiveLRUCache
- ✅ 6.2 - MemoryManager
- ✅ 6.3 - GPU Resource Liberation
- ✅ 6.4 - System Shutdown Cleanup

## Section 7 "System Security" - IN PROGRESS 🔄
Progress: 2/3 tasks completed
- ✅ 7.1 - Production Security Configuration
- ✅ 7.2 - User Input Validation
- ⏳ 7.3 - Property Tests for Input Validation (to do)

## Problems Encountered and Resolved

### 1. Deadlock in EffectiveLRUCache
**Problem**: `get_stats()` caused a deadlock by calling `get_memory_usage()` with the lock already held.
**Solution**:
- Compute the stats directly inside `get_stats()`
- Added a `_shutdown_requested` flag
- Daemon thread for monitoring
- `enable_monitoring` parameter to disable it in tests

### 2. File Writing Issues
**Problem**: files created with 0 bytes, imports failed.
**Solution**:
- Created standalone test modules
- Functional validation with `test_simple_validation.py`
- Complete documentation of the features

## Validation and Tests

### Tests Run
- ✅ `demo_system_cleanup.py` - Cleanup system working
- ✅ `demo_security_validation.py` - Security validation OK
- ✅ `test_security_config.py` - Security configuration validated
- ✅ `test_simple_validation.py` - Input validation complete
- ✅ `tests/unit/test_effective_lru_cache.py` - 25/25 tests pass

### Results
- **Memory Cache**: all tests pass without deadlock
- **Security Config**: production validation working
- **Input Validation**: SQL/NoSQL protection operational
- **System Cleanup**: clean release of resources

## Next Steps

### Immediate Priority
1. **Task 7.3**: property tests for input validation
2. **Section 8**: component decoupling
3. **Section 9**: observability improvements
4. **Section 10**: configuration centralization

### Remaining Tasks (Priority 2-3)
- Section 5: performance optimization (5.1-5.5)
- Section 8: component decoupling (8.1-8.3)
- Section 9: observability (9.1-9.5)
- Section 10: centralized configuration (10.1-10.5)
- Section 11: non-regression tests
- Section 12: final checkpoint

## Statistics

### Code Added
- **New modules**: 4 (cleanup_manager, security_config, input_validator, test_simple_validation)
- **Demos**: 3 (system_cleanup, security_validation, input_validation)
- **Tests**: 2 (security_config, simple_validation)
- **Documentation**: 2 (TASK_7_2_COMPLETE, SESSION_PROGRESS)

### Lines of Code
- **Production**: ~1500 lines
- **Tests**: ~800 lines
- **Documentation**: ~400 lines

### Functional Coverage
- **Memory Management**: 100% (Section 6 complete)
- **System Security**: 67% (2/3 tasks)
- **Overall Progress**: ~25% of the critical tasks

## Important Notes

### Technical Decisions
1. **Monitoring disabled in tests**: avoids interference with the unit tests
2. **Strict validation in production**: maximum security by default
3. **Centralized cleanup**: a single management point for all resources
4. **Standalone modules**: independent tests to avoid import problems

### Good Practices Applied
- ✅ Tests before changes to the main code
- ✅ Detailed logging for diagnostics
- ✅ Complete documentation of each task
- ✅ Functional validation with demos
- ✅ Clean resource management

## Conclusion
**Productive session with 4 major tasks completed:**
- Section 6 (Memory Management) entirely finished
- Section 7 (System Security) at 67% completion
- Robust, tested input validation system
- Centralized, working cleanup system

**Ready to continue with the next sections of the implementation plan.**
||||||
## Conbre 2024cem21 Déte: on
|
|
||||||
|
|
||||||
## Damplementati List I Taskss -rogression P# Se
|
|
||||||
25
SUMMARY.txt
@@ -1,25 +0,0 @@
╔═══════════════════════════════════════════════════════════════╗
║            RPA VISION V3 - SESSION 22 NOV 2024                ║
╚═══════════════════════════════════════════════════════════════╝

✅ COMPLETED: Phase 2 - CLIP Embedders

📊 RESULTS:
  • 13 files created (~1950 lines)
  • Tests: 3/3 PASS
  • CLIP: ViT-B-32, 512D, functional

🧪 VALIDATIONS:
  • Text embedding: <10ms ✅
  • Image embedding: ~50ms ✅
  • Similarity: 0.899 ✅

📚 DOCS:
  • PHASE2_CLIP_COMPLETE.md
  • NEXT_SESSION.md
  • INDEX.md
  • COMMANDS.md

🚀 NEXT: Task 2.9 - Integrate CLIP into StateEmbeddingBuilder

═══════════════════════════════════════════════════════════════
@@ -1,156 +0,0 @@
╔══════════════════════════════════════════════════════════════════════╗
║                RPA VISION V3 - TASK LIST PROGRESS                    ║
╚══════════════════════════════════════════════════════════════════════╝

Date: 22 November 2024

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 1 : FOUNDATIONS                                 ✅ COMPLETE    │
└──────────────────────────────────────────────────────────────────────┘
[✓] 1.8 StateEmbedding tests
[✓] 1.9 Workflow Graph models

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 2 : EMBEDDINGS AND FAISS          ✅ IMPLEMENTATION COMPLETE   │
└──────────────────────────────────────────────────────────────────────┘
[✓] 2.1 FusionEngine
[✓] 2.3 FAISSManager
[✓] 2.5 Similarity computations
[✓] 2.7 StateEmbeddingBuilder + OpenCLIP
[✓]* 2.2 FusionEngine tests ← DONE NOW (9/9 tests passed)
[ ]* 2.4 FAISSManager tests
[ ]* 2.6 Performance tests
[ ]* 2.8 StateEmbeddingBuilder tests

Validated tests:
  ✓ test_clip_simple.py
  ✓ test_complete_pipeline.py
  ✓ test_faiss_persistence.py
  ✓ test_fusion_engine.py (Property 17 validated)

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 3 : CHECKPOINT                                                 │
└──────────────────────────────────────────────────────────────────────┘
[ ] 3. Verify that all tests pass

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 4 : UI DETECTION                  ✅ IMPLEMENTATION COMPLETE   │
└──────────────────────────────────────────────────────────────────────┘
[✓] 4.1 UIDetector + OWL-v2 ← DONE TODAY
[✓] 4.2 Type classification
[✓] 4.3 Role classification
[✓] 4.4 Visual features
[✓] 4.5 Dual embeddings
[✓] 4.6 Confidence
[ ]* 4.7 UIDetector tests
[ ]* 4.8 Performance tests

Validated tests:
  ✓ test_owl_simple.py

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 5 : WORKFLOW GRAPHS   ✅ IMPLEMENTATION COMPLETE (23 Nov 2024) │
└──────────────────────────────────────────────────────────────────────┘
[✓] 5.1 GraphBuilder
[✓] 5.2 Pattern detection
[ ]* 5.3 Pattern tests
[✓] 5.4 Node construction
[ ]* 5.5 Node tests
[✓] 5.6 Edge construction
[ ]* 5.7 Edge tests
[✓] 5.8 NodeMatcher
[ ]* 5.9 NodeMatcher tests
[✓] 5.10 WorkflowNode.matches()
[ ]* 5.11 Integration tests

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 6 : ACTION EXECUTION  ✅ IMPLEMENTATION COMPLETE (23 Nov 2024) │
└──────────────────────────────────────────────────────────────────────┘
[✓] 6.1 ActionExecutor
[✓] 6.2 TargetResolver
[✓] 6.3 Role-based lookup
[✓] 6.4 mouse_click execution
[✓] 6.5 text_input execution
[✓] 6.6 Compound execution
[✓] 6.7 Post-conditions (stub)
[ ]* 6.8 ActionExecutor tests
[ ]* 6.9 Performance tests

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 7 : EXECUTION                                   ⏳ TO DO       │
└──────────────────────────────────────────────────────────────────────┘
[ ] 7.1 ActionExecutor
[ ] 7.2 Role-based lookup
[ ] 7.3 Click execution
[ ] 7.4 text_input execution
[ ] 7.5 Compound execution
[ ] 7.6 Post-conditions
[ ]* 7.7 ActionExecutor tests
[ ]* 7.8 Performance tests
[ ] 7.9 LearningManager
[ ] 7.10 State transitions
[ ] 7.11 Rollback
[ ]* 7.12 LearningManager tests
[ ]* 7.13 Integration tests

┌──────────────────────────────────────────────────────────────────────┐
│ STATISTICS                                                           │
└──────────────────────────────────────────────────────────────────────┘

Complete phases: 6/9 (67%)
  ✓ Phase 1: Foundations
  ✓ Phase 2: Embeddings + FAISS
  ✓ Phase 4: UI Detection
  ✓ Phase 5: Workflow Graphs
  ✓ Phase 6: Action Execution
  ✓ Phase 7: Learning System
  ✓ Phase 8: Training System

Implementation: 38/50 tasks (76%)
Property tests: 2/20 tasks (10%)

Files created: 50+ files
Functional tests: 15+ tests passed

Models integrated: 3/3 (100%)
  ✓ OpenCLIP
  ✓ OWL-v2
  ✓ Qwen3-VL

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 7 : LEARNING SYSTEM   ✅ IMPLEMENTATION COMPLETE (23 Nov 2024) │
└──────────────────────────────────────────────────────────────────────┘
[✓] 7.1 LearningManager
[✓] 7.2 State transitions
[✓] 7.3 FeedbackProcessor
[✓] 7.4 Automatic rollback
[✓] 7.5 LearningManager tests
[ ]* 7.6 Integration tests

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 8 : TRAINING SYSTEM   ✅ IMPLEMENTATION COMPLETE (23 Nov 2024) │
└──────────────────────────────────────────────────────────────────────┘
[✓] 8.1 TrainingDataCollector
[✓] 8.2 OfflineTrainer
[✓] 8.3 ModelValidator
[✓] 8.4 Training Guide
[✓] 8.5 Complete tests
[ ]* 8.6 Production integration tests

┌──────────────────────────────────────────────────────────────────────┐
│ NEXT STEPS - PHASE 9 : TESTS & FINAL VALIDATION                      │
└──────────────────────────────────────────────────────────────────────┘

Goal: property-based tests and end-to-end validation

Priority tasks:
  → Missing tests (Properties 13, 14, 16)
  → Complete end-to-end integration tests
  → Validation on real data
  → Final documentation

Estimate: 1-2 days

╔══════════════════════════════════════════════════════════════════════╗
║        PRODUCTION-READY SYSTEM - 6 phases implemented (67%)          ║
╚══════════════════════════════════════════════════════════════════════╝
@@ -1,145 +0,0 @@
╔══════════════════════════════════════════════════════════════════════╗
║                 RPA VISION V3 - PHASE 11 PROGRESS                    ║
╚══════════════════════════════════════════════════════════════════════╝

Date: 24 November 2024

┌──────────────────────────────────────────────────────────────────────┐
│ PHASE 11 : FAISS IVF OPTIMIZATION        ✅ COMPLETE (24 Nov 2024)   │
└──────────────────────────────────────────────────────────────────────┘

[✓] 11.1 Batch processing for embeddings
[✓] 11.2 Embedding cache (EmbeddingCache + PrototypeCache)
[✓] 11.3 FAISS optimization with an IVF index

Task 11.2 details - Embedding cache:
  ✓ LRU EmbeddingCache (1000 embeddings, 500MB max)
  ✓ Specialized PrototypeCache (100 prototypes)
  ✓ Detailed statistics (hits/misses/evictions/hit_rate)
  ✓ Selective invalidation by key or pattern
  ✓ Memory usage estimation

Task 11.3 details - IVF optimization:
  ✓ Automatic Flat → IVF migration (>10k embeddings)
  ✓ Automatic training of the IVF index (100 vectors)
  ✓ Optimal nlist computation (√n_vectors, min=100, max=65536)
  ✓ Periodic index optimization
  ✓ GPU support prepared (auto-detection, CPU fallback)
  ✓ DirectMap enabled for reconstruction
  ✓ Correct vector normalization
  ✓ Save/load with complete metadata
  ✓ 8/8 tests pass

Validated tests:
  ✓ test_ivf_training
  ✓ test_nlist_calculation
  ✓ test_auto_migration_flat_to_ivf
  ✓ test_ivf_search_quality
  ✓ test_ivf_nprobe_effect
  ✓ test_optimize_index
  ✓ test_save_load_ivf
  ✓ test_stats_with_ivf

Files created/modified:
  ✓ core/embedding/embedding_cache.py (279 lines)
  ✓ core/embedding/faiss_manager.py (optimized, +150 lines)
  ✓ tests/unit/test_faiss_ivf_optimization.py (270 lines, 8 tests)
  ✓ PHASE11_IVF_OPTIMIZATION_COMPLETE.md (documentation)

┌──────────────────────────────────────────────────────────────────────┐
│ EXPECTED PERFORMANCE                                                 │
└──────────────────────────────────────────────────────────────────────┘

Flat vs IVF comparison:

Search over 10k vectors:
  Flat: ~50ms  →  IVF: ~5-10ms   (5-10x faster)

Search over 100k vectors:
  Flat: ~500ms →  IVF: ~10-20ms  (25-50x faster)

Search over 1M vectors:
  Flat: ~5s    →  IVF: ~20-50ms  (100-250x faster)

Accuracy:
  Flat: 100%   →  IVF (nprobe=8): ~95-99%

┌──────────────────────────────────────────────────────────────────────┐
│ USAGE RECOMMENDATIONS                                                │
└──────────────────────────────────────────────────────────────────────┘

< 10k embeddings:
  → Use Flat (exact search, fast)

10k - 100k embeddings:
  → Use IVF with nprobe=8 (good trade-off)

> 100k embeddings:
  → Use IVF with nprobe=16-32 (better quality)

> 1M embeddings:
  → Consider IVF with GPU

┌──────────────────────────────────────────────────────────────────────┐
│ CONFIGURABLE PARAMETERS                                              │
└──────────────────────────────────────────────────────────────────────┘

FAISSManager(
    dimensions=512,
    index_type="IVF",   # "Flat", "IVF", "HNSW"
    metric="cosine",    # "cosine", "l2", "ip"
    nlist=None,         # Auto if None (√n_vectors)
    nprobe=8,           # Clusters to visit (1-nlist)
    use_gpu=False,      # GPU if available
    auto_optimize=True  # Automatic Flat→IVF migration
)

Choice of nprobe (speed/quality trade-off):
  nprobe=1:     Very fast, quality ~80%
  nprobe=8:     Good trade-off, quality ~95%
  nprobe=16:    Slower, quality ~98%
  nprobe=nlist: Equivalent to Flat (100%)

┌──────────────────────────────────────────────────────────────────────┐
│ GLOBAL STATISTICS                                                    │
└──────────────────────────────────────────────────────────────────────┘

Complete phases: 8/13 (62%)
  ✓ Phase 1: Foundations
  ✓ Phase 2: Embeddings + FAISS
  ✓ Phase 4: UI Detection
  ✓ Phase 5: Workflow Graphs
  ✓ Phase 6: Action Execution
  ✓ Phase 7: Learning System
  ✓ Phase 8: Training System
  ✓ Phase 10: Error Handling
  ✓ Phase 11: Persistence & Storage
  ✓ Phase 11: FAISS IVF Optimization ← NEW

Implementation: 42/50 tasks (84%)
Property tests: 2/20 tasks (10%)

Files created: 55+ files
Functional tests: 23+ tests passed

Models integrated: 3/3 (100%)
  ✓ OpenCLIP
  ✓ OWL-v2
  ✓ Qwen3-VL

┌──────────────────────────────────────────────────────────────────────┐
│ NEXT STEPS - PHASE 11 CONTINUED                                      │
└──────────────────────────────────────────────────────────────────────┘

Goal: finalize the performance optimizations

Remaining tasks:
  → 11.4 Optimize UI detection with ROI
  → 11.5 Complete performance tests
  → 12. Final checkpoint

Estimate: 2-3 hours

╔══════════════════════════════════════════════════════════════════════╗
║       HIGH-PERFORMANCE SYSTEM - IVF + Cache implemented (84%)        ║
╚══════════════════════════════════════════════════════════════════════╝
44
TEST_NOW.sh
@@ -1,44 +0,0 @@
#!/bin/bash
# TEST_NOW.sh
# Ultra-simple script to test the server immediately

echo "🚀 RPA Vision V3 - Quick Test"
echo "================================"
echo ""

# 1. Check the environment
if [ ! -d "venv_v3" ]; then
    echo "❌ Virtual environment not found"
    exit 1
fi

source venv_v3/bin/activate

# 2. Check the dependencies
echo "📦 Checking dependencies..."
python -c "import fastapi, flask, cryptography" 2>/dev/null
if [ $? -ne 0 ]; then
    echo "⚠️  Installing dependencies..."
    pip install -q fastapi 'uvicorn[standard]' python-multipart flask cryptography
fi
echo "✅ Dependencies OK"
echo ""

# 3. Run the tests
echo "🧪 Running tests..."
pytest tests/integration/test_server_pipeline.py -v --tb=short 2>&1 | grep -E "(PASSED|FAILED|passed|failed)"
echo ""

# 4. Start the server
echo "🚀 Starting the server..."
echo ""
echo "📝 Available commands:"
echo "  - Start:     ./server/start_all.sh"
echo "  - Dashboard: xdg-open http://localhost:5001"
echo "  - API test:  curl http://localhost:8000/api/traces/status"
echo ""
echo "📚 Documentation:"
echo "  - Quick Start:    QUICK_START_SERVER.md"
echo "  - Complete guide: SERVER_READY_TO_TEST.md"
echo ""
echo "✅ Ready for testing!"
@@ -1,214 +0,0 @@
# 🎉 Visual Workflow Builder Vision Refactor - PROJECT COMPLETE!

## 🏆 Mission Accomplished

**Completion date:** 7 January 2026
**Status:** 100% FINISHED (14/14 tasks)
**Technological revolution:** 100% vision-based workflow system delivered successfully

---

## 🚀 Major Achievements

### 🎯 Primary Objective ACHIEVED

✅ **Complete elimination of CSS/XPath selectors**

The Visual Workflow Builder now relies exclusively on coordinate-based vision for element selection, a revolutionary advance in the RPA domain.

### 🔬 Technical Innovation

#### Vision-Centric Architecture
- **CLIP + OWL-ViT Integration**: state-of-the-art AI models for visual understanding
- **Multi-modal embeddings**: unique visual signatures for every element
- **Real-time validation**: continuous target validation with >80% confidence
- **Contextual understanding**: comprehension of spatial relations between elements

#### Enterprise Performance
- **Capture <2 seconds**: optimized real-time screen capture
- **Detection <3 seconds**: element detection with advanced AI
- **Intelligent cache**: hybrid LRU/LFU caching system
- **Multi-monitor support**: full handling of multi-screen configurations

---

## 📋 Tasks Completed (14/14)

### 🔴 Critical Tasks (1-7)
1. ✅ **CSS/XPath infrastructure removal** - complete elimination
2. ✅ **Capture Service integration** - RPA Vision V3 backend integrated
3. ✅ **AI Element Detection integration** - detection pipeline operational
4. ✅ **VisualScreenSelector refactor** - 100% visual interface
5. ✅ **ReferenceScreenshotView component** - display with overlay
6. ✅ **VisualTargetConfig component** - purely visual configuration
7. ✅ **VisualTargetManager integration** - persistence and validation

### 🟡 Core Tasks (8-11)
8. ✅ **Advanced metadata display** - natural-language descriptions
9. ✅ **Performance optimization** - cache, virtualization, debouncing
10. ✅ **Multi-monitor support** - DPI and coordinate mapping
11. ✅ **Complete backend APIs** - full REST endpoints

### 🟢 Quality Tasks (12-14)
12. ✅ **Property-based tests** - 45 correctness properties validated
13. ✅ **Type definitions** - complete, consistent TypeScript
14. ✅ **Integration documentation** - user and developer guides

---

## 🛠 Components Created

### Frontend React + TypeScript
```
visual_workflow_builder/frontend/src/
├── components/
│   ├── VisualScreenSelector/       # Main visual selection
│   ├── ReferenceScreenshotView/    # Reference screenshot display
│   ├── VisualTargetConfig/         # Visual target configuration
│   ├── VisualMetadataDisplay/      # Enriched metadata
│   ├── MonitorSelector/            # Multi-monitor selection
│   └── LoadingIndicator/           # Loading indicators
├── services/
│   ├── VisualTargetService.ts      # Visual target management
│   ├── ScreenCaptureService.ts     # Optimized capture service
│   ├── ElementDetectionService.ts  # AI element detection
│   └── MonitorService.ts           # Multi-monitor management
├── hooks/
│   └── usePerformanceOptimization.ts  # Performance optimizations
├── utils/
│   └── ImageCache.ts               # Intelligent image cache
└── __tests__/properties/
    └── visualSelection.test.ts     # Property-based tests
```

### Backend Flask + Python
```
visual_workflow_builder/backend/api/
├── visual_targets.py       # Visual targets API
├── element_detection.py    # Element detection API
└── screen_captures.py      # Screen capture API (existing)
```

### Tests and Documentation
```
tests/property/
└── test_visual_workflow_builder_properties.py  # Python tests

visual_workflow_builder/docs/
├── VISUAL_SELECTION_GUIDE.md  # Complete user guide
├── API_INTEGRATION.md         # Developer integration guide
└── TROUBLESHOOTING.md         # Troubleshooting guide
```

---

## 🎨 Design System Conformance

### Material-UI Integration
- **Color palette**: Primary Blue (#1976d2), Success Green (#22c55e)
- **Consistent components**: maximal reuse of Material-UI components
- **Responsive design**: adaptive breakpoints and grids
- **Accessibility**: ARIA attributes and keyboard navigation

### Performance Optimizations
- **Image caching**: LRU/LFU cache with a 50MB limit
- **Virtualization**: optimized long lists
- **Debouncing**: frequent operations throttled (300ms)
- **Lazy loading**: progressive image loading

---

## 🔬 Technical Validation

### Property-Based Tests (45 properties)
- **P1-P5**: coordinate and bounding-box consistency
- **P6-P10**: visual target and metadata validation
- **P11-P15**: performance and cache management
- **P16-P20**: detection determinism and confidence
- **P21-P25**: multi-monitor coordinate mapping
- **P26-P30**: system robustness and error handling
- **P31-P35**: data integrity and unique signatures
- **P36-P40**: performance and memory scalability
- **P41-P45**: system state consistency and reconciliation

### Performance Metrics
- **Capture time**: <2 seconds (goal met)
- **Detection time**: <3 seconds (goal met)
- **Cache hit rate**: >80% (optimized)
- **Memory usage**: <100MB (controlled)
- **Detection confidence**: >80% (validated)

---

## 🌟 Impact and Benefits

### For Users
- **Revolutionary simplicity**: no technical knowledge required
- **Maximum robustness**: resilient to interface changes
- **Intuitive interface**: natural visual selection
- **Real-time feedback**: continuous target validation

### For Developers
- **Modern architecture**: React + TypeScript + Material-UI
- **Complete APIs**: documented REST endpoints
- **Exhaustive tests**: advanced property-based testing
- **Complete documentation**: user and developer guides

### For the Business
- **Competitive advantage**: first 100% vision-based solution
- **Cost reduction**: simplified workflow maintenance
- **Scalability**: enterprise-ready architecture
- **Innovation**: technological leadership in RPA

---

## 🚀 Recommended Next Steps

### Deployment Phase
1. **User acceptance tests** with the provided documentation
2. **Performance benchmarks** on production hardware
3. **Team training** with the created guides
4. **Production monitoring** with the integrated metrics

### Future Evolutions
1. **Machine learning**: continuous model improvement
2. **Cloud integration**: distributed, scalable APIs
3. **Mobile support**: extension to mobile interfaces
4. **Real-time collaboration**: shared workflows

---

## 🏅 Technical Recognition

### Innovation Breakthrough
This project represents a **technological revolution** in the RPA domain:
- **First 100% vision-based system** of its kind
- **Complete elimination of fragile CSS/XPath selectors**
- **Advanced artificial intelligence** for interface understanding
- **Enterprise-grade architecture** with optimized performance

### Execution Excellence
- **14/14 tasks completed** successfully
- **45 correctness properties** validated by tests
- **Exhaustive documentation** for rapid adoption
- **Production-ready code** with security and monitoring

---

## 🎊 Conclusion

**MISSION ACCOMPLISHED WITH EXCELLENCE!**

The **Visual Workflow Builder of RPA Vision V3** is now the **most advanced workflow creation system**, combining:
- 🧠 **State-of-the-art artificial intelligence**
- 🎯 **Pixel-perfect precision**
- 🚀 **Enterprise performance**
- 🛡️ **Maximum robustness**
- 👥 **Ease of use**

This achievement marks a **historic step** in the evolution of RPA, making automation accessible to everyone while maintaining the precision and reliability required by the most demanding production environments.

---

🏆 CONGRATULATIONS TO THE WHOLE TEAM!

*Project completed on 7 January 2026*
@@ -1,114 +0,0 @@
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Analyze the structure of an encrypted file to understand the padding issue.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
from pathlib import Path
|
|
||||||
|
|
||||||
def analyze_encrypted_file():
|
|
||||||
"""Analyze the encrypted file structure."""
|
|
||||||
|
|
||||||
print("=== Analyzing Encrypted File Structure ===")
|
|
||||||
|
|
||||||
# Load environment
|
|
||||||
env_local_path = Path(".env.local")
|
|
||||||
if env_local_path.exists():
|
|
||||||
with open(env_local_path, 'r') as f:
|
|
||||||
for line in f:
|
|
||||||
line = line.strip()
|
|
||||||
if line and not line.startswith('#') and '=' in line:
|
|
||||||
key, value = line.split('=', 1)
|
|
||||||
os.environ[key.strip()] = value.strip()
|
|
||||||
|
|
||||||
password = os.getenv("ENCRYPTION_PASSWORD")
|
|
||||||
print(f"Password: {password[:16]}..." if password else "No password")
|
|
||||||
|
|
||||||
# Find encrypted file
|
|
||||||
enc_files = list(Path("agent_v0/sessions").glob("*.enc"))
|
|
||||||
if not enc_files:
|
|
||||||
print("No .enc files found")
|
|
||||||
return False
|
|
||||||
|
|
||||||
enc_file = enc_files[0]
|
|
||||||
print(f"Analyzing: {enc_file}")
|
|
||||||
print(f"File size: {enc_file.stat().st_size} bytes")
|
|
||||||
|
|
||||||
# Read file structure
|
|
||||||
with open(enc_file, 'rb') as f:
|
|
||||||
salt = f.read(16)
|
|
||||||
iv = f.read(16)
|
|
||||||
ciphertext = f.read()
|
|
||||||
|
|
||||||
print(f"Salt: {len(salt)} bytes")
|
|
||||||
print(f"IV: {len(iv)} bytes")
|
|
||||||
print(f"Ciphertext: {len(ciphertext)} bytes")
|
|
||||||
print(f"Ciphertext % 16: {len(ciphertext) % 16}")
|
|
||||||
|
|
||||||
if len(ciphertext) % 16 != 0:
|
|
||||||
        print("Ciphertext length is not a multiple of 16!")
        return False

    # Try manual decryption to see where it fails
    try:
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.backends import default_backend
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        # Derive key
        kdf = PBKDF2HMAC(
            algorithm=hashes.SHA256(),
            length=32,
            salt=salt,
            iterations=100000,
            backend=default_backend()
        )
        key = kdf.derive(password.encode('utf-8'))
        print("Key derivation successful")

        # Decrypt
        cipher = Cipher(
            algorithms.AES(key),
            modes.CBC(iv),
            backend=default_backend()
        )
        decryptor = cipher.decryptor()
        plaintext = decryptor.update(ciphertext) + decryptor.finalize()
        print(f"Decryption successful, plaintext length: {len(plaintext)}")

        # Check padding
        if len(plaintext) == 0:
            print("Plaintext is empty!")
            return False

        padding_length = plaintext[-1]
        print(f"Last byte (padding length): {padding_length}")

        if padding_length < 1 or padding_length > 16:
            print(f"Invalid padding length: {padding_length}")
            return False

        # Check padding bytes
        padding_bytes = plaintext[-padding_length:]
        print(f"Padding bytes: {list(padding_bytes)}")

        all_correct = all(b == padding_length for b in padding_bytes)
        if not all_correct:
            print("Padding bytes are not all the same!")
            print(f"Expected all bytes to be {padding_length}")
            return False

        print("Padding validation successful")
        return True

    except Exception as e:
        print(f"Manual decryption failed: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = analyze_encrypted_file()
    sys.exit(0 if success else 1)
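The manual padding checks in the script above can be collapsed into a single helper. A minimal sketch of the same PKCS#7 validation logic; the name `strip_pkcs7` is hypothetical and not part of the deleted script:

```python
from typing import Optional

def strip_pkcs7(plaintext: bytes, block_size: int = 16) -> Optional[bytes]:
    """Return the unpadded plaintext, or None if the PKCS#7 padding is invalid."""
    if not plaintext or len(plaintext) % block_size != 0:
        return None
    pad = plaintext[-1]
    if pad < 1 or pad > block_size:
        return None
    # Every one of the last `pad` bytes must equal the pad length itself
    if plaintext[-pad:] != bytes([pad]) * pad:
        return None
    return plaintext[:-pad]

# b"hello" padded to one 16-byte block with eleven 0x0b bytes
assert strip_pkcs7(b"hello" + bytes([11]) * 11) == b"hello"
# Corrupt the last byte -> padding is rejected
assert strip_pkcs7(b"hello" + bytes([11]) * 10 + b"\x0c") is None
```

Returning `None` instead of printing keeps the helper reusable; the calling script can still decide what to log.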
@@ -1,327 +0,0 @@
#!/usr/bin/env python3
"""
Matching-failure analyzer for continuous improvement of the system.

This script analyzes failed-match reports and generates statistics
and recommendations to improve the workflow graph.
"""

import json
import sys
from pathlib import Path
from datetime import datetime, timedelta
from typing import List, Dict, Any
from collections import Counter, defaultdict
import argparse


class FailedMatchAnalyzer:
    """Analyzer for matching failures."""

    def __init__(self, failed_matches_dir: str = "data/failed_matches"):
        self.failed_matches_dir = Path(failed_matches_dir)
        self.reports: List[Dict[str, Any]] = []

    def load_reports(self, last_n: int = None, since_hours: int = None):
        """
        Load failure reports.

        Args:
            last_n: Load the N most recent reports
            since_hours: Load reports from the last X hours
        """
        if not self.failed_matches_dir.exists():
            print(f"⚠️ No failure directory found: {self.failed_matches_dir}")
            return

        # List all failure directories
        match_dirs = sorted(
            [d for d in self.failed_matches_dir.iterdir() if d.is_dir()],
            key=lambda x: x.name,
            reverse=True
        )

        if not match_dirs:
            print("⚠️ No matching failures recorded")
            return

        # Filter by date if requested
        if since_hours:
            cutoff = datetime.now() - timedelta(hours=since_hours)
            match_dirs = [
                d for d in match_dirs
                if self._parse_timestamp(d.name) >= cutoff
            ]

        # Limit the count if requested
        if last_n:
            match_dirs = match_dirs[:last_n]

        # Load the reports
        for match_dir in match_dirs:
            report_path = match_dir / "report.json"
            if report_path.exists():
                try:
                    with open(report_path, 'r') as f:
                        report = json.load(f)
                    report['_dir'] = match_dir
                    self.reports.append(report)
                except Exception as e:
                    print(f"⚠️ Error loading {report_path}: {e}")

        print(f"✓ {len(self.reports)} reports loaded")

    def _parse_timestamp(self, dirname: str) -> datetime:
        """Parse the timestamp from the directory name."""
        try:
            # Format: failed_match_20251123_143052
            timestamp_str = dirname.replace("failed_match_", "")
            return datetime.strptime(timestamp_str, "%Y%m%d_%H%M%S")
        except ValueError:
            return datetime.min

    def analyze(self) -> Dict[str, Any]:
        """Analyze all reports and generate statistics."""
        if not self.reports:
            return {}

        analysis = {
            'total_failures': len(self.reports),
            'date_range': self._get_date_range(),
            'confidence_stats': self._analyze_confidence(),
            'suggestions_summary': self._analyze_suggestions(),
            'problematic_nodes': self._identify_problematic_nodes(),
            'threshold_recommendations': self._recommend_thresholds(),
            'new_states_detected': self._count_new_states()
        }

        return analysis

    def _get_date_range(self) -> Dict[str, str]:
        """Get the date range covered by the reports."""
        timestamps = [
            datetime.strptime(r['timestamp'], "%Y%m%d_%H%M%S")
            for r in self.reports
        ]
        return {
            'first': min(timestamps).strftime("%Y-%m-%d %H:%M:%S"),
            'last': max(timestamps).strftime("%Y-%m-%d %H:%M:%S")
        }

    def _analyze_confidence(self) -> Dict[str, Any]:
        """Analyze confidence levels."""
        confidences = [
            r['matching_results']['best_confidence']
            for r in self.reports
        ]

        return {
            'min': min(confidences),
            'max': max(confidences),
            'avg': sum(confidences) / len(confidences),
            'below_70': sum(1 for c in confidences if c < 0.70),
            'between_70_85': sum(1 for c in confidences if 0.70 <= c < 0.85),
            'above_85': sum(1 for c in confidences if c >= 0.85)
        }

    def _analyze_suggestions(self) -> Dict[str, int]:
        """Count suggestion types."""
        suggestion_types = Counter()

        for report in self.reports:
            for suggestion in report.get('suggestions', []):
                # Extract the suggestion type (before the ':')
                suggestion_type = suggestion.split(':')[0]
                suggestion_types[suggestion_type] += 1

        return dict(suggestion_types)

    def _identify_problematic_nodes(self) -> List[Dict[str, Any]]:
        """Identify the nodes that cause the most confusion."""
        node_near_misses = defaultdict(list)

        for report in self.reports:
            similarities = report['matching_results'].get('similarities', [])
            if similarities:
                best = similarities[0]
                confidence = best['similarity']
                # Near miss: between 0.70 and the threshold
                if 0.70 <= confidence < report['matching_results']['threshold']:
                    node_near_misses[best['node_id']].append({
                        'confidence': confidence,
                        'label': best['node_label'],
                        'timestamp': report['timestamp']
                    })

        # Sort by number of near misses
        problematic = [
            {
                'node_id': node_id,
                'node_label': misses[0]['label'],
                'near_miss_count': len(misses),
                'avg_confidence': sum(m['confidence'] for m in misses) / len(misses)
            }
            for node_id, misses in node_near_misses.items()
        ]

        return sorted(problematic, key=lambda x: x['near_miss_count'], reverse=True)

    def _recommend_thresholds(self) -> Dict[str, Any]:
        """Recommend threshold adjustments."""
        confidences = [
            r['matching_results']['best_confidence']
            for r in self.reports
        ]

        # Compute the 90th percentile of the confidences
        sorted_conf = sorted(confidences)
        p90_index = int(len(sorted_conf) * 0.9)
        p90 = sorted_conf[p90_index] if sorted_conf else 0.85

        current_threshold = self.reports[0]['matching_results']['threshold']

        recommendations = {
            'current_threshold': current_threshold,
            'p90_confidence': p90,
            'recommended_threshold': max(0.70, min(0.90, p90 - 0.02))
        }

        if p90 < current_threshold - 0.05:
            recommendations['action'] = "LOWER_THRESHOLD"
            recommendations['reason'] = f"90% of failures have confidence < {p90:.3f}"
        elif p90 > current_threshold + 0.05:
            recommendations['action'] = "RAISE_THRESHOLD"
            recommendations['reason'] = "Many potential false positives"
        else:
            recommendations['action'] = "KEEP_CURRENT"
            recommendations['reason'] = "Threshold is appropriate"

        return recommendations

    def _count_new_states(self) -> int:
        """Count newly detected states (confidence < 0.70)."""
        return sum(
            1 for r in self.reports
            if r['matching_results']['best_confidence'] < 0.70
        )

    def print_report(self, analysis: Dict[str, Any]):
        """Print the analysis report."""
        print("\n" + "="*70)
        print("MATCHING FAILURE ANALYSIS REPORT")
        print("="*70)

        print(f"\n📊 General Statistics")
        print(f"  • Total failures: {analysis['total_failures']}")
        print(f"  • Period: {analysis['date_range']['first']} → {analysis['date_range']['last']}")

        print(f"\n📈 Confidence Levels")
        conf = analysis['confidence_stats']
        print(f"  • Minimum: {conf['min']:.3f}")
        print(f"  • Maximum: {conf['max']:.3f}")
        print(f"  • Average: {conf['avg']:.3f}")
        print(f"  • < 0.70 (new states): {conf['below_70']}")
        print(f"  • 0.70-0.85 (near misses): {conf['between_70_85']}")
        print(f"  • > 0.85 (false negatives): {conf['above_85']}")

        print(f"\n💡 Generated Suggestions")
        for suggestion_type, count in analysis['suggestions_summary'].items():
            print(f"  • {suggestion_type}: {count}")

        print(f"\n⚠️ Problematic Nodes (Top 5)")
        for i, node in enumerate(analysis['problematic_nodes'][:5], 1):
            print(f"  {i}. {node['node_label']} (ID: {node['node_id']})")
            print(f"     - Near misses: {node['near_miss_count']}")
            print(f"     - Average confidence: {node['avg_confidence']:.3f}")

        print(f"\n🎯 Threshold Recommendations")
        thresh = analysis['threshold_recommendations']
        print(f"  • Current threshold: {thresh['current_threshold']:.3f}")
        print(f"  • P90 of confidences: {thresh['p90_confidence']:.3f}")
        print(f"  • Recommended threshold: {thresh['recommended_threshold']:.3f}")
        print(f"  • Action: {thresh['action']}")
        print(f"  • Reason: {thresh['reason']}")

        print(f"\n🆕 New States Detected")
        print(f"  • {analysis['new_states_detected']} potentially new states")
        print(f"    (confidence < 0.70, require node creation)")

        print("\n" + "="*70)

    def export_detailed_report(self, output_path: str = "failed_matches_analysis.json"):
        """Export a detailed report as JSON."""
        analysis = self.analyze()

        detailed_report = {
            'analysis': analysis,
            'individual_reports': [
                {
                    'timestamp': r['timestamp'],
                    'confidence': r['matching_results']['best_confidence'],
                    'suggestions': r['suggestions'],
                    'window_title': r['state']['window_title'],
                    'screenshot_path': str(r['_dir'] / "screenshot.png")
                }
                for r in self.reports
            ]
        }

        with open(output_path, 'w') as f:
            json.dump(detailed_report, f, indent=2)

        print(f"\n✓ Detailed report exported: {output_path}")


def main():
    parser = argparse.ArgumentParser(
        description="Analyze matching failures for continuous improvement"
    )
    parser.add_argument(
        '--last',
        type=int,
        help="Analyze the N most recent failures"
    )
    parser.add_argument(
        '--since-hours',
        type=int,
        help="Analyze failures from the last X hours"
    )
    parser.add_argument(
        '--export',
        type=str,
        help="Export the detailed report as JSON"
    )
    parser.add_argument(
        '--dir',
        type=str,
        default="data/failed_matches",
        help="Directory containing the failures (default: data/failed_matches)"
    )

    args = parser.parse_args()

    # Create the analyzer
    analyzer = FailedMatchAnalyzer(failed_matches_dir=args.dir)

    # Load the reports
    analyzer.load_reports(last_n=args.last, since_hours=args.since_hours)

    if not analyzer.reports:
        print("\n❌ No reports to analyze")
        return 1

    # Analyze
    analysis = analyzer.analyze()

    # Print the report
    analyzer.print_report(analysis)

    # Export if requested
    if args.export:
        analyzer.export_detailed_report(args.export)

    return 0


if __name__ == '__main__':
    sys.exit(main())
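The threshold logic in `_recommend_thresholds` above (P90 of failure confidences minus a margin, clamped to [0.70, 0.90]) can be exercised in isolation. A minimal sketch; the standalone function name `recommend_threshold` is hypothetical:

```python
def recommend_threshold(confidences, current, margin=0.02):
    """Recommend a matching threshold from failure confidences:
    P90 minus a margin, clamped to [0.70, 0.90]."""
    sorted_conf = sorted(confidences)
    p90 = sorted_conf[int(len(sorted_conf) * 0.9)] if sorted_conf else 0.85
    recommended = max(0.70, min(0.90, p90 - margin))
    # Same +/- 0.05 dead band as the analyzer uses
    if p90 < current - 0.05:
        action = "LOWER_THRESHOLD"
    elif p90 > current + 0.05:
        action = "RAISE_THRESHOLD"
    else:
        action = "KEEP_CURRENT"
    return recommended, action

failures = [0.60, 0.65, 0.68, 0.70, 0.72, 0.73, 0.74, 0.75, 0.76, 0.78]
rec, action = recommend_threshold(failures, current=0.85)
print(rec, action)
```

The dead band avoids oscillating recommendations when P90 sits close to the current threshold.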
@@ -1,355 +0,0 @@
#!/usr/bin/env python3
"""
Automatic improvement script for the matching system.

Analyzes failures and proposes/applies automatic improvements:
- Updating node prototypes
- Adjusting thresholds
- Creating new nodes
"""

import json
import sys
import shutil
from pathlib import Path
from datetime import datetime
from typing import List, Dict, Any, Optional
import numpy as np
import argparse


class MatchingAutoImprover:
    """Automatic improvement of the matching system."""

    def __init__(
        self,
        failed_matches_dir: str = "data/failed_matches",
        workflows_dir: str = "data/workflows",
        dry_run: bool = True
    ):
        self.failed_matches_dir = Path(failed_matches_dir)
        self.workflows_dir = Path(workflows_dir)
        self.dry_run = dry_run
        self.improvements = []

    def analyze_and_improve(self, min_confidence: float = 0.75) -> List[Dict[str, Any]]:
        """
        Analyze failures and generate improvements.

        Args:
            min_confidence: Minimum threshold for considering an update
        """
        print("\n🔍 Analyzing matching failures...")

        # Load all reports
        reports = self._load_all_reports()

        if not reports:
            print("⚠️ No failures to analyze")
            return []

        print(f"✓ {len(reports)} reports loaded")

        # Identify possible improvements
        self.improvements = []

        # 1. Nodes to update (near misses)
        self._identify_prototype_updates(reports, min_confidence)

        # 2. New nodes to create
        self._identify_new_nodes(reports)

        # 3. Threshold adjustments
        self._identify_threshold_adjustments(reports)

        return self.improvements

    def _load_all_reports(self) -> List[Dict[str, Any]]:
        """Load all failure reports."""
        if not self.failed_matches_dir.exists():
            return []

        reports = []
        for match_dir in self.failed_matches_dir.iterdir():
            if not match_dir.is_dir():
                continue

            report_path = match_dir / "report.json"
            if report_path.exists():
                try:
                    with open(report_path, 'r') as f:
                        report = json.load(f)
                    report['_dir'] = match_dir
                    reports.append(report)
                except Exception:
                    continue

        return reports

    def _identify_prototype_updates(self, reports: List[Dict], min_confidence: float):
        """Identify prototypes to update."""
        # Group near misses by node_id
        node_near_misses = {}

        for report in reports:
            similarities = report['matching_results'].get('similarities', [])
            if not similarities:
                continue

            best = similarities[0]
            confidence = best['similarity']

            # Near miss: between min_confidence and the threshold
            threshold = report['matching_results']['threshold']
            if min_confidence <= confidence < threshold:
                node_id = best['node_id']
                if node_id not in node_near_misses:
                    node_near_misses[node_id] = []

                node_near_misses[node_id].append({
                    'report': report,
                    'confidence': confidence,
                    'embedding_path': report['_dir'] / "state_embedding.npy"
                })

        # Propose updates for nodes with several near misses
        for node_id, misses in node_near_misses.items():
            if len(misses) >= 3:  # At least 3 near misses
                self.improvements.append({
                    'type': 'UPDATE_PROTOTYPE',
                    'node_id': node_id,
                    'node_label': misses[0]['report']['matching_results']['similarities'][0]['node_label'],
                    'near_miss_count': len(misses),
                    'avg_confidence': sum(m['confidence'] for m in misses) / len(misses),
                    'embeddings': [m['embedding_path'] for m in misses]
                })

    def _identify_new_nodes(self, reports: List[Dict]):
        """Identify new nodes to create."""
        # Group states that are very different (confidence < 0.70)
        new_states = []

        for report in reports:
            confidence = report['matching_results']['best_confidence']
            if confidence < 0.70:
                new_states.append({
                    'report': report,
                    'confidence': confidence,
                    'screenshot': report['_dir'] / "screenshot.png",
                    'embedding': report['_dir'] / "state_embedding.npy",
                    'window_title': report['state']['window_title']
                })

        if new_states:
            # Group by window
            by_window = {}
            for state in new_states:
                window = state['window_title'] or 'unknown'
                if window not in by_window:
                    by_window[window] = []
                by_window[window].append(state)

            # Propose node creation
            for window, states in by_window.items():
                if len(states) >= 2:  # At least 2 occurrences
                    self.improvements.append({
                        'type': 'CREATE_NODE',
                        'window_title': window,
                        'occurrence_count': len(states),
                        'avg_confidence': sum(s['confidence'] for s in states) / len(states),
                        'screenshots': [s['screenshot'] for s in states],
                        'embeddings': [s['embedding'] for s in states]
                    })

    def _identify_threshold_adjustments(self, reports: List[Dict]):
        """Identify needed threshold adjustments."""
        confidences = [r['matching_results']['best_confidence'] for r in reports]

        if not confidences:
            return

        # Compute statistics
        sorted_conf = sorted(confidences)
        p90 = sorted_conf[int(len(sorted_conf) * 0.9)]
        current_threshold = reports[0]['matching_results']['threshold']

        # If many failures have a confidence close to the threshold
        near_threshold = sum(1 for c in confidences if current_threshold - 0.05 <= c < current_threshold)

        if near_threshold > len(confidences) * 0.3:  # More than 30%
            recommended = max(0.70, p90 - 0.02)
            self.improvements.append({
                'type': 'ADJUST_THRESHOLD',
                'current_threshold': current_threshold,
                'recommended_threshold': recommended,
                'reason': f"{near_threshold} failures close to the threshold ({near_threshold/len(confidences)*100:.1f}%)",
                'p90_confidence': p90
            })

    def apply_improvements(self, improvements: List[Dict[str, Any]] = None):
        """Apply the identified improvements."""
        if improvements is None:
            improvements = self.improvements

        if not improvements:
            print("\n⚠️ No improvements to apply")
            return

        print(f"\n{'🔧 SIMULATING' if self.dry_run else '🔧 APPLYING'} IMPROVEMENTS")
        print("="*70)

        for i, improvement in enumerate(improvements, 1):
            print(f"\n{i}. {improvement['type']}")

            if improvement['type'] == 'UPDATE_PROTOTYPE':
                self._apply_prototype_update(improvement)

            elif improvement['type'] == 'CREATE_NODE':
                self._apply_node_creation(improvement)

            elif improvement['type'] == 'ADJUST_THRESHOLD':
                self._apply_threshold_adjustment(improvement)

        if self.dry_run:
            print("\n💡 Simulation mode - no changes applied")
            print("   Re-run with --apply to apply the changes")

    def _apply_prototype_update(self, improvement: Dict):
        """Apply a prototype update."""
        print(f"   Node: {improvement['node_label']} (ID: {improvement['node_id']})")
        print(f"   Near misses: {improvement['near_miss_count']}")
        print(f"   Average confidence: {improvement['avg_confidence']:.3f}")

        if not self.dry_run:
            # Load all embeddings
            embeddings = []
            for emb_path in improvement['embeddings']:
                if Path(emb_path).exists():
                    embeddings.append(np.load(emb_path))

            if embeddings:
                # Compute the new prototype (mean)
                new_prototype = np.mean(embeddings, axis=0)

                # Save (adapt to your storage layout)
                prototype_path = self.workflows_dir / f"node_{improvement['node_id']}_prototype.npy"
                np.save(prototype_path, new_prototype)
                print(f"   ✓ Prototype updated: {prototype_path}")
        else:
            print(f"   → Would update the prototype with {len(improvement['embeddings'])} embeddings")

    def _apply_node_creation(self, improvement: Dict):
        """Apply a node creation."""
        print(f"   Window: {improvement['window_title']}")
        print(f"   Occurrences: {improvement['occurrence_count']}")
        print(f"   Average confidence: {improvement['avg_confidence']:.3f}")

        if not self.dry_run:
            # Create a new node (adapt to your storage layout)
            node_id = f"node_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
            node_dir = self.workflows_dir / node_id
            node_dir.mkdir(parents=True, exist_ok=True)

            # Copy the screenshots
            for i, screenshot in enumerate(improvement['screenshots']):
                if Path(screenshot).exists():
                    shutil.copy(screenshot, node_dir / f"example_{i}.png")

            # Compute and save the prototype
            embeddings = []
            for emb_path in improvement['embeddings']:
                if Path(emb_path).exists():
                    embeddings.append(np.load(emb_path))

            if embeddings:
                prototype = np.mean(embeddings, axis=0)
                np.save(node_dir / "prototype.npy", prototype)

            print(f"   ✓ Node created: {node_dir}")
        else:
            print(f"   → Would create a new node with {improvement['occurrence_count']} examples")

    def _apply_threshold_adjustment(self, improvement: Dict):
        """Apply a threshold adjustment."""
        print(f"   Current threshold: {improvement['current_threshold']:.3f}")
        print(f"   Recommended threshold: {improvement['recommended_threshold']:.3f}")
        print(f"   Reason: {improvement['reason']}")

        if not self.dry_run:
            # Update the configuration (adapt as needed)
            config_path = Path("config/matching_config.json")
            if config_path.exists():
                with open(config_path, 'r') as f:
                    config = json.load(f)

                config['similarity_threshold'] = improvement['recommended_threshold']

                with open(config_path, 'w') as f:
                    json.dump(config, f, indent=2)

                print(f"   ✓ Configuration updated: {config_path}")
        else:
            print(f"   → Would update the threshold in the configuration")

    def print_summary(self):
        """Print a summary of the improvements."""
        print("\n" + "="*70)
        print("SUMMARY OF PROPOSED IMPROVEMENTS")
        print("="*70)

        by_type = {}
        for imp in self.improvements:
            imp_type = imp['type']
            if imp_type not in by_type:
                by_type[imp_type] = []
            by_type[imp_type].append(imp)

        for imp_type, imps in by_type.items():
            print(f"\n{imp_type}: {len(imps)}")
            for imp in imps:
                if imp_type == 'UPDATE_PROTOTYPE':
                    print(f"  • {imp['node_label']}: {imp['near_miss_count']} near misses")
                elif imp_type == 'CREATE_NODE':
                    print(f"  • {imp['window_title']}: {imp['occurrence_count']} occurrences")
                elif imp_type == 'ADJUST_THRESHOLD':
                    print(f"  • {imp['current_threshold']:.3f} → {imp['recommended_threshold']:.3f}")


def main():
    parser = argparse.ArgumentParser(
        description="Automatic improvement of the matching system"
    )
    parser.add_argument(
        '--apply',
        action='store_true',
        help="Apply the improvements (otherwise simulation mode)"
    )
    parser.add_argument(
        '--min-confidence',
        type=float,
        default=0.75,
        help="Minimum confidence for an update (default: 0.75)"
    )

    args = parser.parse_args()

    improver = MatchingAutoImprover(dry_run=not args.apply)

    # Analyze
    improvements = improver.analyze_and_improve(min_confidence=args.min_confidence)

    if not improvements:
        print("\n✅ No improvements needed")
        return 0

    # Print the summary
    improver.print_summary()

    # Apply
    improver.apply_improvements()

    return 0


if __name__ == '__main__':
    sys.exit(main())
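The prototype update in `_apply_prototype_update` above is a plain `np.mean` over the near-miss embeddings. A minimal sketch of that step, with L2 normalisation added as an assumption (the deleted script stores the raw mean; normalising keeps cosine similarities against future states comparable):

```python
import numpy as np

def mean_prototype(embeddings):
    """Average near-miss embeddings into a new node prototype
    and L2-normalise the result."""
    proto = np.mean(np.stack(embeddings), axis=0)
    norm = np.linalg.norm(proto)
    return proto / norm if norm > 0 else proto

embs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
proto = mean_prototype(embs)
print(proto)
```

Whether to normalise depends on how the matcher computes similarity; with raw dot products the un-normalised mean would shrink scores.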
@@ -1,26 +0,0 @@
# VWB Target Element Capture - Diagnostics
Authors: Dom, Alice, Kiro - January 9, 2026

## Identified problem
Target element capture fails through the Flask API but works when called directly.

## Key files
- visual_workflow_builder/backend/app_lightweight.py: main Flask backend
- visual_workflow_builder/frontend/src/components/VisualSelector/index.tsx: frontend component
- tests/integration/test_capture_element_cible_vwb_09jan2026.py: main test
- tests/integration/test_backend_vwb_simple_09jan2026.py: direct backend test

## Tests to run
1. Direct test: python3 tests/integration/test_backend_vwb_simple_09jan2026.py
2. Full test: python3 tests/integration/test_capture_element_cible_vwb_09jan2026.py

## Required environment
- venv_v3 virtual environment with mss, pyautogui, torch, open_clip_torch
- Python 3.8+
- A display available for capture

## Symptoms
- ✅ Direct backend functions: OK
- ❌ Flask endpoint /api/screen-capture: error 500
- ✅ ScreenCapturer with the venv: OK
- ❌ ScreenCapturer through the Flask server: fails
@@ -1,4 +0,0 @@
"""Screen capture module"""
from .screen_capturer import ScreenCapturer

__all__ = ['ScreenCapturer']
@@ -1,480 +0,0 @@
"""
Screen Capture Module - Continuous screen capture for RPA Vision V3

Features:
- Single or continuous capture
- Circular buffer for history
- Screen-change detection
- Multi-monitor support
- Memory optimization
"""
import numpy as np
from typing import Optional, Dict, List, Callable, Tuple
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
import threading
import time
import logging
import hashlib
from PIL import Image

logger = logging.getLogger(__name__)


@dataclass
class CaptureFrame:
    """A captured frame with metadata"""
    image: np.ndarray
    timestamp: datetime
    frame_id: int
    hash: str
    window_info: Optional[Dict] = None
    changed_from_previous: bool = True


@dataclass
class CaptureStats:
    """Capture statistics"""
    total_captures: int = 0
    captures_per_second: float = 0.0
    unchanged_frames_skipped: int = 0
    average_capture_time_ms: float = 0.0
    buffer_size: int = 0
    memory_usage_mb: float = 0.0


class ScreenCapturer:
    """
    Advanced screen capturer with a continuous mode.

    Modes:
    - Single: one capture on demand
    - Continuous: capture in a loop with a callback
    - Buffered: keeps a history of the last N frames

    Example:
        >>> capturer = ScreenCapturer(buffer_size=10)
        >>> # Single capture
        >>> frame = capturer.capture()
        >>> # Continuous mode
        >>> capturer.start_continuous(callback=on_frame, interval_ms=500)
        >>> # ... later ...
        >>> capturer.stop_continuous()
    """

    def __init__(
        self,
        buffer_size: int = 10,
        detect_changes: bool = True,
        change_threshold: float = 0.02,
        monitor_index: int = 1
    ):
        """
        Initialize the capturer.

        Args:
            buffer_size: Number of frames to keep in memory
            detect_changes: Detect whether the screen has changed
            change_threshold: Change threshold (0-1)
            monitor_index: Monitor index (1 = primary)
        """
        self.buffer_size = buffer_size
        self.detect_changes = detect_changes
        self.change_threshold = change_threshold
        self.monitor_index = monitor_index

        # Circular buffer
        self._buffer: List[CaptureFrame] = []
        self._frame_counter = 0
        self._last_hash: Optional[str] = None

        # Continuous mode
        self._continuous_running = False
        self._continuous_thread: Optional[threading.Thread] = None
        self._continuous_callback: Optional[Callable[[CaptureFrame], None]] = None
        self._continuous_interval_ms = 500
        self._lock = threading.Lock()

        # Stats
        self._stats = CaptureStats()
        self._capture_times: List[float] = []

        # Initialize the capture backend
        self._init_capture_backend()

        logger.info(f"ScreenCapturer initialized (buffer={buffer_size}, changes={detect_changes})")

    def _init_capture_backend(self) -> None:
        """Initialize the capture backend (mss or pyautogui)."""
        self.sct = None
        self.pyautogui = None
        self.method = None

        try:
            import mss
            self.sct = mss.mss()
            self.method = "mss"
            logger.info("Using mss for screen capture")
        except ImportError:
            try:
                import pyautogui
                self.pyautogui = pyautogui
                self.method = "pyautogui"
                logger.info("Using pyautogui for screen capture")
            except ImportError:
                raise ImportError("Neither mss nor pyautogui available for screen capture")

    # =========================================================================
    # Single capture
    # =========================================================================

    def capture(self) -> Optional[np.ndarray]:
        """
        Single screen capture.

        Returns:
            Screenshot as numpy array (H, W, 3) RGB, or None on error
        """
        try:
            start_time = time.time()

            if self.method == "mss":
                img = self._capture_mss()
            else:
                img = self._capture_pyautogui()

            # Stats
            capture_time = (time.time() - start_time) * 1000
            self._capture_times.append(capture_time)
            if len(self._capture_times) > 100:
                self._capture_times.pop(0)
|
|
||||||
self._stats.total_captures += 1
|
|
||||||
self._stats.average_capture_time_ms = sum(self._capture_times) / len(self._capture_times)
|
|
||||||
|
|
||||||
return img
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Capture failed: {e}")
|
|
||||||
return None
|
|
||||||
|
|
||||||
def capture_frame(self) -> Optional[CaptureFrame]:
|
|
||||||
"""
|
|
||||||
Capture avec métadonnées complètes.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
CaptureFrame avec image, timestamp, hash, etc.
|
|
||||||
"""
|
|
||||||
img = self.capture()
|
|
||||||
return self._create_frame(img)
|
|
||||||
|
|
||||||
def _capture_frame_threaded(self, thread_sct) -> Optional[CaptureFrame]:
|
|
||||||
"""
|
|
||||||
Capture avec instance mss thread-local.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
thread_sct: Instance mss créée dans le thread
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
CaptureFrame ou None
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
start_time = time.time()
|
|
||||||
|
|
||||||
if self.method == "mss" and thread_sct:
|
|
||||||
monitor_idx = self.monitor_index if len(thread_sct.monitors) > self.monitor_index else 0
|
|
||||||
monitor = thread_sct.monitors[monitor_idx]
|
|
||||||
sct_img = thread_sct.grab(monitor)
|
|
||||||
img = np.array(sct_img)
|
|
||||||
img = img[:, :, :3][:, :, ::-1] # BGRA to RGB
|
|
||||||
else:
|
|
||||||
img = self._capture_pyautogui()
|
|
||||||
|
|
||||||
# Stats
|
|
||||||
capture_time = (time.time() - start_time) * 1000
|
|
||||||
self._capture_times.append(capture_time)
|
|
||||||
if len(self._capture_times) > 100:
|
|
||||||
self._capture_times.pop(0)
|
|
||||||
self._stats.total_captures += 1
|
|
||||||
self._stats.average_capture_time_ms = sum(self._capture_times) / len(self._capture_times)
|
|
||||||
|
|
||||||
return self._create_frame(img)
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Threaded capture failed: {e}")
|
|
||||||
return None
|
|
||||||
|
|
||||||
def _create_frame(self, img: Optional[np.ndarray]) -> Optional[CaptureFrame]:
|
|
||||||
"""Créer un CaptureFrame à partir d'une image."""
|
|
||||||
if img is None:
|
|
||||||
return None
|
|
||||||
|
|
||||||
# Calculer le hash pour détecter les changements
|
|
||||||
img_hash = self._compute_hash(img)
|
|
||||||
changed = True
|
|
||||||
|
|
||||||
if self.detect_changes and self._last_hash:
|
|
||||||
changed = img_hash != self._last_hash
|
|
||||||
if not changed:
|
|
||||||
self._stats.unchanged_frames_skipped += 1
|
|
||||||
|
|
||||||
self._last_hash = img_hash
|
|
||||||
self._frame_counter += 1
|
|
||||||
|
|
||||||
frame = CaptureFrame(
|
|
||||||
image=img,
|
|
||||||
timestamp=datetime.now(),
|
|
||||||
frame_id=self._frame_counter,
|
|
||||||
hash=img_hash,
|
|
||||||
window_info=self.get_active_window(),
|
|
||||||
changed_from_previous=changed
|
|
||||||
)
|
|
||||||
|
|
||||||
# Ajouter au buffer
|
|
||||||
self._add_to_buffer(frame)
|
|
||||||
|
|
||||||
return frame
|
|
||||||
|
|
||||||
def capture_screen(self) -> Optional[Image.Image]:
|
|
||||||
"""
|
|
||||||
Capture et retourne une PIL Image (compatibilité avec ExecutionLoop).
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
PIL Image ou None
|
|
||||||
"""
|
|
||||||
img = self.capture()
|
|
||||||
if img is None:
|
|
||||||
return None
|
|
||||||
return Image.fromarray(img)
|
|
||||||
|
|
||||||
def _capture_mss(self) -> np.ndarray:
|
|
||||||
"""Capture using mss."""
|
|
||||||
monitor_idx = self.monitor_index if len(self.sct.monitors) > self.monitor_index else 0
|
|
||||||
monitor = self.sct.monitors[monitor_idx]
|
|
||||||
sct_img = self.sct.grab(monitor)
|
|
||||||
|
|
||||||
img = np.array(sct_img)
|
|
||||||
# Convert BGRA to RGB
|
|
||||||
img = img[:, :, :3][:, :, ::-1]
|
|
||||||
|
|
||||||
if img.size == 0 or img.shape[0] == 0 or img.shape[1] == 0:
|
|
||||||
raise ValueError("Captured image has invalid dimensions")
|
|
||||||
|
|
||||||
return img
|
|
||||||
|
|
||||||
def _capture_pyautogui(self) -> np.ndarray:
|
|
||||||
"""Capture using pyautogui."""
|
|
||||||
screenshot = self.pyautogui.screenshot()
|
|
||||||
img = np.array(screenshot)
|
|
||||||
|
|
||||||
if img.size == 0 or img.shape[0] == 0 or img.shape[1] == 0:
|
|
||||||
raise ValueError("Captured image has invalid dimensions")
|
|
||||||
|
|
||||||
return img
|
|
||||||
|
|
||||||
# =========================================================================
|
|
||||||
# Mode continu
|
|
||||||
# =========================================================================
|
|
||||||
|
|
||||||
def start_continuous(
|
|
||||||
self,
|
|
||||||
callback: Callable[[CaptureFrame], None],
|
|
||||||
interval_ms: int = 500,
|
|
||||||
skip_unchanged: bool = True
|
|
||||||
) -> bool:
|
|
||||||
"""
|
|
||||||
Démarrer la capture continue.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
callback: Fonction appelée pour chaque frame
|
|
||||||
interval_ms: Intervalle entre captures (ms)
|
|
||||||
skip_unchanged: Ne pas appeler callback si écran inchangé
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
True si démarré avec succès
|
|
||||||
"""
|
|
||||||
with self._lock:
|
|
||||||
if self._continuous_running:
|
|
||||||
logger.warning("Continuous capture already running")
|
|
||||||
return False
|
|
||||||
|
|
||||||
self._continuous_callback = callback
|
|
||||||
self._continuous_interval_ms = interval_ms
|
|
||||||
self._skip_unchanged = skip_unchanged
|
|
||||||
self._continuous_running = True
|
|
||||||
|
|
||||||
self._continuous_thread = threading.Thread(
|
|
||||||
target=self._continuous_loop,
|
|
||||||
daemon=True
|
|
||||||
)
|
|
||||||
self._continuous_thread.start()
|
|
||||||
|
|
||||||
logger.info(f"Started continuous capture (interval={interval_ms}ms)")
|
|
||||||
return True
|
|
||||||
|
|
||||||
def stop_continuous(self) -> None:
|
|
||||||
"""Arrêter la capture continue."""
|
|
||||||
with self._lock:
|
|
||||||
self._continuous_running = False
|
|
||||||
|
|
||||||
if self._continuous_thread:
|
|
||||||
self._continuous_thread.join(timeout=2.0)
|
|
||||||
self._continuous_thread = None
|
|
||||||
|
|
||||||
logger.info("Stopped continuous capture")
|
|
||||||
|
|
||||||
def is_continuous_running(self) -> bool:
|
|
||||||
"""Vérifier si la capture continue est active."""
|
|
||||||
return self._continuous_running
|
|
||||||
|
|
||||||
def _continuous_loop(self) -> None:
|
|
||||||
"""Boucle de capture continue (thread)."""
|
|
||||||
last_capture_time = 0
|
|
||||||
captures_in_second = 0
|
|
||||||
second_start = time.time()
|
|
||||||
|
|
||||||
# Créer une nouvelle instance mss pour ce thread (requis pour X11)
|
|
||||||
thread_sct = None
|
|
||||||
if self.method == "mss":
|
|
||||||
import mss
|
|
||||||
thread_sct = mss.mss()
|
|
||||||
|
|
||||||
while self._continuous_running:
|
|
||||||
try:
|
|
||||||
# Capturer avec l'instance thread-local
|
|
||||||
frame = self._capture_frame_threaded(thread_sct)
|
|
||||||
|
|
||||||
if frame:
|
|
||||||
# Calculer FPS
|
|
||||||
captures_in_second += 1
|
|
||||||
if time.time() - second_start >= 1.0:
|
|
||||||
self._stats.captures_per_second = captures_in_second
|
|
||||||
captures_in_second = 0
|
|
||||||
second_start = time.time()
|
|
||||||
|
|
||||||
# Appeler callback si changement ou si on ne skip pas
|
|
||||||
if self._continuous_callback:
|
|
||||||
if frame.changed_from_previous or not self._skip_unchanged:
|
|
||||||
try:
|
|
||||||
self._continuous_callback(frame)
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Callback error: {e}")
|
|
||||||
|
|
||||||
# Attendre l'intervalle
|
|
||||||
elapsed = (time.time() - last_capture_time) * 1000
|
|
||||||
sleep_time = max(0, self._continuous_interval_ms - elapsed) / 1000.0
|
|
||||||
if sleep_time > 0:
|
|
||||||
time.sleep(sleep_time)
|
|
||||||
last_capture_time = time.time()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Continuous capture error: {e}")
|
|
||||||
time.sleep(0.1)
|
|
||||||
|
|
||||||
# Cleanup thread-local mss
|
|
||||||
if thread_sct:
|
|
||||||
try:
|
|
||||||
thread_sct.close()
|
|
||||||
except Exception:
|
|
||||||
pass
|
|
||||||
|
|
||||||
# =========================================================================
|
|
||||||
# Buffer et historique
|
|
||||||
# =========================================================================
|
|
||||||
|
|
||||||
def _add_to_buffer(self, frame: CaptureFrame) -> None:
|
|
||||||
"""Ajouter un frame au buffer circulaire."""
|
|
||||||
with self._lock:
|
|
||||||
self._buffer.append(frame)
|
|
||||||
if len(self._buffer) > self.buffer_size:
|
|
||||||
self._buffer.pop(0)
|
|
||||||
self._stats.buffer_size = len(self._buffer)
|
|
||||||
|
|
||||||
# Calculer utilisation mémoire
|
|
||||||
if self._buffer:
|
|
||||||
frame_size = self._buffer[0].image.nbytes / (1024 * 1024)
|
|
||||||
self._stats.memory_usage_mb = frame_size * len(self._buffer)
|
|
||||||
|
|
||||||
def get_buffer(self) -> List[CaptureFrame]:
|
|
||||||
"""Obtenir une copie du buffer."""
|
|
||||||
with self._lock:
|
|
||||||
return list(self._buffer)
|
|
||||||
|
|
||||||
def get_last_frame(self) -> Optional[CaptureFrame]:
|
|
||||||
"""Obtenir le dernier frame capturé."""
|
|
||||||
with self._lock:
|
|
||||||
return self._buffer[-1] if self._buffer else None
|
|
||||||
|
|
||||||
def get_frame_by_id(self, frame_id: int) -> Optional[CaptureFrame]:
|
|
||||||
"""Obtenir un frame par son ID."""
|
|
||||||
with self._lock:
|
|
||||||
for frame in self._buffer:
|
|
||||||
if frame.frame_id == frame_id:
|
|
||||||
return frame
|
|
||||||
return None
|
|
||||||
|
|
||||||
def clear_buffer(self) -> None:
|
|
||||||
"""Vider le buffer."""
|
|
||||||
with self._lock:
|
|
||||||
self._buffer.clear()
|
|
||||||
self._stats.buffer_size = 0
|
|
||||||
|
|
||||||
# =========================================================================
|
|
||||||
# Utilitaires
|
|
||||||
# =========================================================================
|
|
||||||
|
|
||||||
def _compute_hash(self, img: np.ndarray) -> str:
|
|
||||||
"""Calculer un hash rapide de l'image pour détecter les changements."""
|
|
||||||
# Sous-échantillonner pour un hash rapide
|
|
||||||
small = img[::20, ::20, :].tobytes()
|
|
||||||
return hashlib.md5(small).hexdigest()
|
|
||||||
|
|
||||||
def get_active_window(self) -> Optional[Dict]:
|
|
||||||
"""Obtenir les infos de la fenêtre active."""
|
|
||||||
try:
|
|
||||||
import pygetwindow as gw
|
|
||||||
active = gw.getActiveWindow()
|
|
||||||
if active:
|
|
||||||
return {
|
|
||||||
'title': active.title,
|
|
||||||
'x': active.left,
|
|
||||||
'y': active.top,
|
|
||||||
'width': active.width,
|
|
||||||
'height': active.height,
|
|
||||||
'app': getattr(active, '_app', 'unknown')
|
|
||||||
}
|
|
||||||
except Exception as e:
|
|
||||||
logger.debug(f"Could not get active window: {e}")
|
|
||||||
return None
|
|
||||||
|
|
||||||
def get_screen_resolution(self) -> Tuple[int, int]:
|
|
||||||
"""Obtenir la résolution de l'écran."""
|
|
||||||
if self.method == "mss":
|
|
||||||
monitor = self.sct.monitors[self.monitor_index]
|
|
||||||
return (monitor['width'], monitor['height'])
|
|
||||||
else:
|
|
||||||
size = self.pyautogui.size()
|
|
||||||
return (size.width, size.height)
|
|
||||||
|
|
||||||
def get_stats(self) -> CaptureStats:
|
|
||||||
"""Obtenir les statistiques de capture."""
|
|
||||||
return self._stats
|
|
||||||
|
|
||||||
def save_frame(self, frame: CaptureFrame, path: str) -> bool:
|
|
||||||
"""Sauvegarder un frame sur disque."""
|
|
||||||
try:
|
|
||||||
img = Image.fromarray(frame.image)
|
|
||||||
img.save(path)
|
|
||||||
return True
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Failed to save frame: {e}")
|
|
||||||
return False
|
|
||||||
|
|
||||||
def __del__(self):
|
|
||||||
"""Cleanup."""
|
|
||||||
self.stop_continuous()
|
|
||||||
if self.sct:
|
|
||||||
try:
|
|
||||||
self.sct.close()
|
|
||||||
except (AttributeError, RuntimeError, OSError):
|
|
||||||
pass
|
|
||||||
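The `_compute_hash` method above detects screen changes by hashing a 20×-subsampled copy of the frame rather than the full pixel buffer. A self-contained sketch of that idea (standalone NumPy, independent of the class):

```python
import hashlib
import numpy as np

def frame_hash(img: np.ndarray, step: int = 20) -> str:
    """Hash a subsampled copy of the frame: ~step*step times less data to digest."""
    small = img[::step, ::step].tobytes()
    return hashlib.md5(small).hexdigest()

base = np.zeros((600, 800, 3), dtype=np.uint8)
same = base.copy()
moved = base.copy()
moved[100:140, 100:140] = 255  # a 40x40 change covers at least one sampled pixel

assert frame_hash(base) == frame_hash(same)
assert frame_hash(base) != frame_hash(moved)
```

Note the trade-off: a change smaller than the sampling stride can fall entirely between sampled pixels and go undetected, which is acceptable when the goal is skipping visually identical frames cheaply.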
@@ -1,96 +0,0 @@
"""
|
|
||||||
Embedding Module - Fusion Multi-Modale et Gestion FAISS
|
|
||||||
|
|
||||||
Ce module gère la fusion d'embeddings multi-modaux et l'indexation FAISS
|
|
||||||
pour la recherche de similarité rapide.
|
|
||||||
"""
|
|
||||||
|
|
||||||
from .fusion_engine import (
|
|
||||||
FusionEngine,
|
|
||||||
FusionConfig,
|
|
||||||
create_default_fusion_engine,
|
|
||||||
normalize_vector,
|
|
||||||
validate_weights
|
|
||||||
)
|
|
||||||
|
|
||||||
from .faiss_manager import (
|
|
||||||
FAISSManager,
|
|
||||||
SearchResult,
|
|
||||||
create_flat_index,
|
|
||||||
create_ivf_index
|
|
||||||
)
|
|
||||||
|
|
||||||
from .similarity import (
|
|
||||||
cosine_similarity,
|
|
||||||
euclidean_distance,
|
|
||||||
manhattan_distance,
|
|
||||||
dot_product,
|
|
||||||
normalize_l2,
|
|
||||||
normalize_l1,
|
|
||||||
angular_distance,
|
|
||||||
jaccard_similarity,
|
|
||||||
hamming_distance,
|
|
||||||
batch_cosine_similarity,
|
|
||||||
pairwise_cosine_similarity,
|
|
||||||
similarity_to_distance,
|
|
||||||
distance_to_similarity,
|
|
||||||
is_normalized,
|
|
||||||
compute_centroid,
|
|
||||||
compute_variance
|
|
||||||
)
|
|
||||||
|
|
||||||
from .state_embedding_builder import (
|
|
||||||
StateEmbeddingBuilder,
|
|
||||||
create_builder,
|
|
||||||
build_from_screen_state
|
|
||||||
)
|
|
||||||
|
|
||||||
from .base_embedder import EmbedderBase
|
|
||||||
|
|
||||||
from .clip_embedder import (
|
|
||||||
CLIPEmbedder,
|
|
||||||
create_clip_embedder,
|
|
||||||
get_default_embedder
|
|
||||||
)
|
|
||||||
|
|
||||||
from .embedding_cache import (
|
|
||||||
EmbeddingCache,
|
|
||||||
PrototypeCache
|
|
||||||
)
|
|
||||||
|
|
||||||
__all__ = [
|
|
||||||
'FusionEngine',
|
|
||||||
'FusionConfig',
|
|
||||||
'create_default_fusion_engine',
|
|
||||||
'normalize_vector',
|
|
||||||
'validate_weights',
|
|
||||||
'FAISSManager',
|
|
||||||
'SearchResult',
|
|
||||||
'create_flat_index',
|
|
||||||
'create_ivf_index',
|
|
||||||
'cosine_similarity',
|
|
||||||
'euclidean_distance',
|
|
||||||
'manhattan_distance',
|
|
||||||
'dot_product',
|
|
||||||
'normalize_l2',
|
|
||||||
'normalize_l1',
|
|
||||||
'angular_distance',
|
|
||||||
'jaccard_similarity',
|
|
||||||
'hamming_distance',
|
|
||||||
'batch_cosine_similarity',
|
|
||||||
'pairwise_cosine_similarity',
|
|
||||||
'similarity_to_distance',
|
|
||||||
'distance_to_similarity',
|
|
||||||
'is_normalized',
|
|
||||||
'compute_centroid',
|
|
||||||
'compute_variance',
|
|
||||||
'StateEmbeddingBuilder',
|
|
||||||
'create_builder',
|
|
||||||
'build_from_screen_state',
|
|
||||||
'EmbedderBase',
|
|
||||||
'CLIPEmbedder',
|
|
||||||
'create_clip_embedder',
|
|
||||||
'get_default_embedder',
|
|
||||||
'EmbeddingCache',
|
|
||||||
'PrototypeCache'
|
|
||||||
]
|
|
||||||
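The `.similarity` module re-exported above is not shown in this diff; as an illustration only, `normalize_l2` and `cosine_similarity` are conventionally defined along these lines (this is a sketch of the standard formulas, not the project's actual implementation):

```python
import numpy as np

def normalize_l2(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit L2 norm (a zero vector is returned unchanged)."""
    norm = np.linalg.norm(v)
    return v if norm == 0 else v / norm

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors, in [-1, 1]."""
    return float(np.dot(normalize_l2(a), normalize_l2(b)))

a = np.array([1.0, 0.0])
b = np.array([0.0, 2.0])
assert abs(cosine_similarity(a, a) - 1.0) < 1e-9  # identical direction
assert abs(cosine_similarity(a, b)) < 1e-9        # orthogonal vectors
```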
@@ -1,136 +0,0 @@
"""
|
|
||||||
Abstract base class for embedding models.
|
|
||||||
|
|
||||||
This module defines the interface that all embedding models must implement,
|
|
||||||
ensuring consistency across different model implementations (CLIP, etc.).
|
|
||||||
"""
|
|
||||||
|
|
||||||
from abc import ABC, abstractmethod
|
|
||||||
from typing import List
|
|
||||||
from PIL import Image
|
|
||||||
import numpy as np
|
|
||||||
|
|
||||||
|
|
||||||
class EmbedderBase(ABC):
|
|
||||||
"""
|
|
||||||
Abstract base class for image and text embedding models.
|
|
||||||
|
|
||||||
All embedding models must implement this interface to ensure
|
|
||||||
compatibility with the state embedding system.
|
|
||||||
"""
|
|
||||||
|
|
||||||
@abstractmethod
|
|
||||||
def embed_image(self, image: Image.Image) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Generate an embedding vector for a single image.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
image: PIL Image to embed
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
np.ndarray: Normalized embedding vector of shape (dimension,)
|
|
||||||
The vector should be L2-normalized for cosine similarity
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If image is invalid or cannot be processed
|
|
||||||
RuntimeError: If model inference fails
|
|
||||||
"""
|
|
||||||
pass
|
|
||||||
|
|
||||||
@abstractmethod
|
|
||||||
def embed_text(self, text: str) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Generate an embedding vector for text.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
text: Text string to embed
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
np.ndarray: Normalized embedding vector of shape (dimension,)
|
|
||||||
The vector should be L2-normalized for cosine similarity
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If text is invalid
|
|
||||||
RuntimeError: If model inference fails
|
|
||||||
"""
|
|
||||||
pass
|
|
||||||
|
|
||||||
@abstractmethod
|
|
||||||
def get_dimension(self) -> int:
|
|
||||||
"""
|
|
||||||
Get the dimensionality of embeddings produced by this model.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
int: Embedding dimension (e.g., 512 for CLIP ViT-B/32)
|
|
||||||
"""
|
|
||||||
pass
|
|
||||||
|
|
||||||
@abstractmethod
|
|
||||||
def get_model_name(self) -> str:
|
|
||||||
"""
|
|
||||||
Get a unique identifier for this model.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
str: Model name (e.g., "clip-vit-b32")
|
|
||||||
"""
|
|
||||||
pass
|
|
||||||
|
|
||||||
def embed_image_batch(self, images: List[Image.Image]) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Generate embeddings for multiple images.
|
|
||||||
|
|
||||||
Default implementation processes images one by one.
|
|
||||||
Subclasses can override this for optimized batch processing.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
images: List of PIL Images to embed
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
np.ndarray: Array of embeddings with shape (len(images), dimension)
|
|
||||||
Each row is a normalized embedding vector
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If any image is invalid
|
|
||||||
RuntimeError: If model inference fails
|
|
||||||
"""
|
|
||||||
if not images:
|
|
||||||
return np.array([]).reshape(0, self.get_dimension())
|
|
||||||
|
|
||||||
embeddings = []
|
|
||||||
for img in images:
|
|
||||||
embedding = self.embed_image(img)
|
|
||||||
embeddings.append(embedding)
|
|
||||||
|
|
||||||
return np.array(embeddings)
|
|
||||||
|
|
||||||
def embed_text_batch(self, texts: List[str]) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Generate embeddings for multiple texts.
|
|
||||||
|
|
||||||
Default implementation processes texts one by one.
|
|
||||||
Subclasses can override this for optimized batch processing.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
texts: List of text strings to embed
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
np.ndarray: Array of embeddings with shape (len(texts), dimension)
|
|
||||||
Each row is a normalized embedding vector
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If any text is invalid
|
|
||||||
RuntimeError: If model inference fails
|
|
||||||
"""
|
|
||||||
if not texts:
|
|
||||||
return np.array([]).reshape(0, self.get_dimension())
|
|
||||||
|
|
||||||
embeddings = []
|
|
||||||
for text in texts:
|
|
||||||
embedding = self.embed_text(text)
|
|
||||||
embeddings.append(embedding)
|
|
||||||
|
|
||||||
return np.array(embeddings)
|
|
||||||
|
|
||||||
def __repr__(self) -> str:
|
|
||||||
"""String representation of the embedder."""
|
|
||||||
return f"{self.__class__.__name__}(model={self.get_model_name()}, dim={self.get_dimension()})"
|
|
||||||
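The pattern above — abstract per-item methods plus concrete batch defaults — means a subclass only has to implement the single-item methods. A toy stand-in illustrates this; it re-declares a trimmed text-only interface so the snippet runs on its own, and its "model" is just an md5 hash, not anything from this codebase:

```python
import hashlib
from abc import ABC, abstractmethod
import numpy as np

class TextEmbedderBase(ABC):
    """Trimmed-down stand-in for EmbedderBase (text side only)."""

    @abstractmethod
    def embed_text(self, text: str) -> np.ndarray: ...

    @abstractmethod
    def get_dimension(self) -> int: ...

    def embed_text_batch(self, texts):
        """Concrete default: loop over the abstract single-item method."""
        if not texts:
            return np.array([]).reshape(0, self.get_dimension())
        return np.array([self.embed_text(t) for t in texts])

class HashingEmbedder(TextEmbedderBase):
    """Maps the md5 bytes of a text to a unit vector: deterministic, model-free."""

    def embed_text(self, text: str) -> np.ndarray:
        raw = np.frombuffer(hashlib.md5(text.encode()).digest(), dtype=np.uint8)
        v = raw.astype(np.float32)
        return v / np.linalg.norm(v)  # L2-normalized, as the interface requires

    def get_dimension(self) -> int:
        return 16  # md5 digest is 16 bytes

emb = HashingEmbedder()
batch = emb.embed_text_batch(["hello", "world"])
assert batch.shape == (2, 16)
assert abs(float(np.linalg.norm(batch[0])) - 1.0) < 1e-5
```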
@@ -1,292 +0,0 @@
"""
|
|
||||||
CLIP-based embedder implementation for RPA Vision V3.
|
|
||||||
|
|
||||||
This module provides a wrapper around OpenCLIP for generating image and text embeddings
|
|
||||||
using the CLIP (Contrastive Language-Image Pre-training) model.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import torch
|
|
||||||
import numpy as np
|
|
||||||
from PIL import Image
|
|
||||||
from typing import List, Optional
|
|
||||||
import logging
|
|
||||||
|
|
||||||
try:
|
|
||||||
import open_clip
|
|
||||||
except ImportError:
|
|
||||||
open_clip = None
|
|
||||||
|
|
||||||
from .base_embedder import EmbedderBase
|
|
||||||
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
class CLIPEmbedder(EmbedderBase):
|
|
||||||
"""
|
|
||||||
CLIP-based image and text embedder using OpenCLIP.
|
|
||||||
|
|
||||||
This embedder uses the ViT-B/32 architecture by default, which produces
|
|
||||||
512-dimensional embeddings. It automatically handles GPU/CPU device selection.
|
|
||||||
|
|
||||||
The embeddings are L2-normalized for cosine similarity calculations.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(
|
|
||||||
self,
|
|
||||||
model_name: str = "ViT-B-32",
|
|
||||||
pretrained: str = "openai",
|
|
||||||
device: Optional[str] = None
|
|
||||||
):
|
|
||||||
"""
|
|
||||||
Initialize the CLIP embedder.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
model_name: CLIP model architecture (default: ViT-B-32)
|
|
||||||
Options: ViT-B-32, ViT-B-16, ViT-L-14, etc.
|
|
||||||
pretrained: Pretrained weights to use (default: openai)
|
|
||||||
device: Device to use ('cuda', 'cpu', or None for auto-detect)
|
|
||||||
Defaults to CPU to save GPU memory for VLM models
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ImportError: If open_clip is not installed
|
|
||||||
RuntimeError: If model loading fails
|
|
||||||
"""
|
|
||||||
if open_clip is None:
|
|
||||||
raise ImportError(
|
|
||||||
"OpenCLIP is not installed. "
|
|
||||||
"Install it with: pip install open-clip-torch"
|
|
||||||
)
|
|
||||||
|
|
||||||
# Default to CPU to save GPU for vision models (Qwen3-VL, etc.)
|
|
||||||
if device is None:
|
|
||||||
device = "cpu"
|
|
||||||
|
|
||||||
self.model_name = model_name
|
|
||||||
self.pretrained = pretrained
|
|
||||||
self.device = device
|
|
||||||
self._embedding_dim = None
|
|
||||||
|
|
||||||
# Load model
|
|
||||||
try:
|
|
||||||
logger.info(f"Loading CLIP model: {model_name} ({pretrained}) on {device}...")
|
|
||||||
|
|
||||||
self.model, _, self.preprocess = open_clip.create_model_and_transforms(
|
|
||||||
model_name,
|
|
||||||
pretrained=pretrained,
|
|
||||||
device=device
|
|
||||||
)
|
|
||||||
self.model.eval()
|
|
||||||
|
|
||||||
# Get tokenizer for text
|
|
||||||
self.tokenizer = open_clip.get_tokenizer(model_name)
|
|
||||||
|
|
||||||
# Determine embedding dimension
|
|
||||||
with torch.no_grad():
|
|
||||||
dummy_image = torch.zeros(1, 3, 224, 224).to(self.device)
|
|
||||||
dummy_embedding = self.model.encode_image(dummy_image)
|
|
||||||
self._embedding_dim = dummy_embedding.shape[-1]
|
|
||||||
|
|
||||||
logger.info(
|
|
||||||
f"✓ CLIP embedder loaded: {model_name} on {device}, "
|
|
||||||
f"dimension={self._embedding_dim}"
|
|
||||||
)
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
raise RuntimeError(f"Failed to load CLIP model: {e}")
|
|
||||||
|
|
||||||
def embed_image(self, image: Image.Image) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Generate embedding for a single image.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
image: PIL Image to embed
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
np.ndarray: Normalized embedding vector of shape (dimension,)
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If image is invalid
|
|
||||||
RuntimeError: If embedding generation fails
|
|
||||||
"""
|
|
||||||
if not isinstance(image, Image.Image):
|
|
||||||
raise ValueError("Input must be a PIL Image")
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Preprocess image
|
|
||||||
image_tensor = self.preprocess(image).unsqueeze(0).to(self.device)
|
|
||||||
|
|
||||||
# Generate embedding
|
|
||||||
with torch.no_grad():
|
|
||||||
embedding = self.model.encode_image(image_tensor)
|
|
||||||
# L2 normalize for cosine similarity
|
|
||||||
embedding = embedding / embedding.norm(dim=-1, keepdim=True)
|
|
||||||
|
|
||||||
return embedding.cpu().numpy().flatten()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
raise RuntimeError(f"Failed to generate image embedding: {e}")
|
|
||||||
|
|
||||||
def embed_text(self, text: str) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Generate embedding for text.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
text: Text string to embed
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
np.ndarray: Normalized embedding vector of shape (dimension,)
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If text is invalid
|
|
||||||
RuntimeError: If embedding generation fails
|
|
||||||
"""
|
|
||||||
if not isinstance(text, str):
|
|
||||||
raise ValueError("Input must be a string")
|
|
||||||
|
|
||||||
if not text.strip():
|
|
||||||
# Return zero vector for empty text
|
|
||||||
return np.zeros(self.get_dimension(), dtype=np.float32)
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Tokenize text
|
|
||||||
text_tokens = self.tokenizer([text]).to(self.device)
|
|
||||||
|
|
||||||
# Generate embedding
|
|
||||||
with torch.no_grad():
|
|
||||||
embedding = self.model.encode_text(text_tokens)
|
|
||||||
# L2 normalize for cosine similarity
|
|
||||||
embedding = embedding / embedding.norm(dim=-1, keepdim=True)
|
|
||||||
|
|
||||||
return embedding.cpu().numpy().flatten()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
raise RuntimeError(f"Failed to generate text embedding: {e}")
|
|
||||||
|
|
||||||
def embed_image_batch(self, images: List[Image.Image]) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Generate embeddings for multiple images (optimized batch processing).
|
|
||||||
|
|
||||||
Args:
|
|
||||||
images: List of PIL Images to embed
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
np.ndarray: Array of embeddings with shape (len(images), dimension)
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If any image is invalid
|
|
||||||
RuntimeError: If embedding generation fails
|
|
||||||
"""
|
|
||||||
if not images:
|
|
||||||
return np.array([]).reshape(0, self.get_dimension())
|
|
||||||
|
|
||||||
# Validate all images
|
|
||||||
for i, img in enumerate(images):
|
|
||||||
if not isinstance(img, Image.Image):
|
|
||||||
raise ValueError(f"Image at index {i} is not a PIL Image")
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Preprocess all images
|
|
||||||
image_tensors = torch.stack([
|
|
||||||
self.preprocess(img) for img in images
|
|
||||||
]).to(self.device)
|
|
||||||
|
|
||||||
# Generate embeddings in batch
|
|
||||||
with torch.no_grad():
|
|
||||||
embeddings = self.model.encode_image(image_tensors)
|
|
||||||
# L2 normalize for cosine similarity
|
|
||||||
embeddings = embeddings / embeddings.norm(dim=-1, keepdim=True)
|
|
||||||
|
|
||||||
return embeddings.cpu().numpy()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
raise RuntimeError(f"Failed to generate batch image embeddings: {e}")
|
|
||||||
|
|
||||||
def embed_text_batch(self, texts: List[str]) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Generate embeddings for multiple texts (optimized batch processing).
|
|
||||||
|
|
||||||
Args:
|
|
||||||
texts: List of text strings to embed
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
np.ndarray: Array of embeddings with shape (len(texts), dimension)
|
|
||||||
|
|
||||||
Raises:
|
|
||||||
ValueError: If any text is invalid
|
|
||||||
RuntimeError: If embedding generation fails
|
|
||||||
"""
|
|
||||||
if not texts:
|
|
||||||
return np.array([]).reshape(0, self.get_dimension())
|
|
||||||
|
|
||||||
# Validate all texts
|
|
||||||
for i, text in enumerate(texts):
|
|
||||||
if not isinstance(text, str):
|
|
||||||
raise ValueError(f"Text at index {i} is not a string")
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Handle empty texts
|
|
||||||
processed_texts = [text if text.strip() else " " for text in texts]
|
|
||||||
|
|
||||||
# Tokenize all texts
|
|
||||||
text_tokens = self.tokenizer(processed_texts).to(self.device)
|
|
||||||
|
|
||||||
# Generate embeddings in batch
|
|
||||||
with torch.no_grad():
|
|
||||||
embeddings = self.model.encode_text(text_tokens)
|
|
||||||
# L2 normalize for cosine similarity
|
|
||||||
embeddings = embeddings / embeddings.norm(dim=-1, keepdim=True)
|
|
||||||
|
|
||||||
return embeddings.cpu().numpy()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
raise RuntimeError(f"Failed to generate batch text embeddings: {e}")
|
|
||||||
|
|
||||||
def get_dimension(self) -> int:
|
|
||||||
"""
|
|
||||||
Get the dimensionality of embeddings.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
int: Embedding dimension (512 for ViT-B/32)
|
|
||||||
"""
|
|
||||||
return self._embedding_dim
|
|
||||||
|
|
||||||
def get_model_name(self) -> str:
|
|
||||||
"""
|
|
||||||
Get model identifier.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
str: Model name (e.g., "clip-vit-b32")
|
|
||||||
"""
|
|
||||||
return f"clip-{self.model_name.lower().replace('/', '-')}"
|
|
||||||
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Factory functions
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
def create_clip_embedder(
|
|
||||||
model_name: str = "ViT-B-32",
|
|
||||||
device: Optional[str] = None
|
|
||||||
) -> CLIPEmbedder:
|
|
||||||
"""
|
|
||||||
Create a CLIP embedder with default configuration.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
model_name: CLIP model architecture (default: ViT-B-32)
|
|
||||||
device: Device to use (default: CPU)
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
CLIPEmbedder: Configured CLIP embedder
|
|
||||||
"""
|
|
||||||
return CLIPEmbedder(model_name=model_name, device=device)
|
|
||||||
|
|
||||||
|
|
||||||
def get_default_embedder() -> CLIPEmbedder:
|
|
||||||
"""
|
|
||||||
Get the default CLIP embedder (ViT-B/32 on CPU).
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
CLIPEmbedder: Default embedder
|
|
||||||
"""
|
|
||||||
return CLIPEmbedder()
|
|
||||||
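Every encode path in the embedder divides by `norm(dim=-1, keepdim=True)` so that a plain dot product between embeddings equals their cosine similarity. The same step in NumPy, independent of torch and OpenCLIP (illustrative sketch):

```python
import numpy as np

def l2_normalize_rows(x: np.ndarray) -> np.ndarray:
    """Divide each row by its L2 norm (keepdims mirrors torch's keepdim=True)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

emb = l2_normalize_rows(np.array([[3.0, 4.0], [1.0, 0.0]]))
# Every row now has unit norm...
assert np.allclose(np.linalg.norm(emb, axis=-1), 1.0)

# ...so a matrix product of embeddings is directly a cosine-similarity matrix.
sims = emb @ emb.T
assert np.allclose(np.diag(sims), 1.0)  # each vector vs itself
```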
@@ -1,284 +0,0 @@
"""
|
|
||||||
Embedding Cache - Cache LRU pour embeddings
|
|
||||||
|
|
||||||
Implémente un cache LRU (Least Recently Used) pour stocker
|
|
||||||
les embeddings en mémoire et éviter les recalculs coûteux.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import logging
|
|
||||||
from typing import Optional, Dict, Any
|
|
||||||
from collections import OrderedDict
|
|
||||||
import numpy as np
|
|
||||||
from datetime import datetime
|
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
class EmbeddingCache:
|
|
||||||
"""
|
|
||||||
Cache LRU pour embeddings.
|
|
||||||
|
|
||||||
Stocke les embeddings les plus récemment utilisés en mémoire
|
|
||||||
pour éviter les recalculs et chargements depuis disque.
|
|
||||||
|
|
||||||
Features:
|
|
||||||
- LRU eviction policy
|
|
||||||
- Taille maximale configurable
|
|
||||||
- Statistiques de cache (hits/misses)
|
|
||||||
- Invalidation sélective
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, max_size: int = 1000, max_memory_mb: float = 500.0):
|
|
||||||
"""
|
|
||||||
Initialiser le cache.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
max_size: Nombre maximum d'embeddings à garder en cache
|
|
||||||
max_memory_mb: Mémoire maximale en MB (approximatif)
|
|
||||||
"""
|
|
||||||
self.max_size = max_size
|
|
||||||
self.max_memory_mb = max_memory_mb
|
|
||||||
self.cache: OrderedDict[str, np.ndarray] = OrderedDict()
|
|
||||||
self.metadata: Dict[str, Dict[str, Any]] = {}
|
|
||||||
|
|
||||||
# Statistiques
|
|
||||||
self.hits = 0
|
|
||||||
self.misses = 0
|
|
||||||
self.evictions = 0
|
|
||||||
|
|
||||||
logger.info(
|
|
||||||
f"EmbeddingCache initialized: max_size={max_size}, "
|
|
||||||
f"max_memory_mb={max_memory_mb:.1f}"
|
|
||||||
)
|
|
||||||
|
|
||||||
def get(self, key: str) -> Optional[np.ndarray]:
|
|
||||||
"""
|
|
||||||
Récupérer un embedding du cache.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
key: Clé de l'embedding (embedding_id)
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Vecteur numpy si trouvé, None sinon
|
|
||||||
"""
|
|
||||||
if key in self.cache:
|
|
||||||
# Déplacer à la fin (most recently used)
|
|
||||||
self.cache.move_to_end(key)
|
|
||||||
self.hits += 1
|
|
||||||
logger.debug(f"Cache HIT: {key}")
|
|
||||||
return self.cache[key]
|
|
||||||
|
|
||||||
self.misses += 1
|
|
||||||
logger.debug(f"Cache MISS: {key}")
|
|
||||||
return None
|
|
||||||
|
|
||||||
def put(
|
|
||||||
self,
|
|
||||||
key: str,
|
|
||||||
vector: np.ndarray,
|
|
||||||
metadata: Optional[Dict[str, Any]] = None
|
|
||||||
):
|
|
||||||
"""
|
|
||||||
Ajouter un embedding au cache.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
key: Clé de l'embedding
|
|
||||||
vector: Vecteur numpy
|
|
||||||
metadata: Métadonnées optionnelles
|
|
||||||
"""
|
|
||||||
# Si déjà présent, mettre à jour et déplacer à la fin
|
|
||||||
if key in self.cache:
|
|
||||||
self.cache.move_to_end(key)
|
|
||||||
self.cache[key] = vector
|
|
||||||
if metadata:
|
|
||||||
self.metadata[key] = metadata
|
|
||||||
return
|
|
||||||
|
|
||||||
# Vérifier si on doit évict
|
|
||||||
if len(self.cache) >= self.max_size:
|
|
||||||
self._evict_oldest()
|
|
||||||
|
|
||||||
# Ajouter le nouvel embedding
|
|
||||||
self.cache[key] = vector
|
|
||||||
if metadata:
|
|
||||||
self.metadata[key] = metadata
|
|
||||||
|
|
||||||
logger.debug(f"Cache PUT: {key} (size: {len(self.cache)})")
|
|
||||||
|
|
||||||
def _evict_oldest(self):
|
|
||||||
"""Évict l'embedding le moins récemment utilisé."""
|
|
||||||
if not self.cache:
|
|
||||||
return
|
|
||||||
|
|
||||||
# Retirer le premier élément (oldest)
|
|
||||||
oldest_key, _ = self.cache.popitem(last=False)
|
|
||||||
self.metadata.pop(oldest_key, None)
|
|
||||||
self.evictions += 1
|
|
||||||
|
|
||||||
logger.debug(f"Cache EVICT: {oldest_key} (evictions: {self.evictions})")
|
|
||||||
|
|
||||||
def invalidate(self, key: str):
|
|
||||||
"""
|
|
||||||
Invalider un embedding spécifique.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
key: Clé de l'embedding à invalider
|
|
||||||
"""
|
|
||||||
if key in self.cache:
|
|
||||||
del self.cache[key]
|
|
||||||
self.metadata.pop(key, None)
|
|
||||||
logger.debug(f"Cache INVALIDATE: {key}")
|
|
||||||
|
|
||||||
def invalidate_pattern(self, pattern: str):
|
|
||||||
"""
|
|
||||||
Invalider tous les embeddings dont la clé contient le pattern.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
pattern: Pattern à rechercher dans les clés
|
|
||||||
"""
|
|
||||||
keys_to_remove = [k for k in self.cache.keys() if pattern in k]
|
|
||||||
for key in keys_to_remove:
|
|
||||||
del self.cache[key]
|
|
||||||
self.metadata.pop(key, None)
|
|
||||||
|
|
||||||
if keys_to_remove:
|
|
||||||
logger.info(f"Cache INVALIDATE PATTERN '{pattern}': {len(keys_to_remove)} entries")
|
|
||||||
|
|
||||||
def clear(self):
|
|
||||||
"""Vider complètement le cache."""
|
|
||||||
size_before = len(self.cache)
|
|
||||||
self.cache.clear()
|
|
||||||
self.metadata.clear()
|
|
||||||
logger.info(f"Cache CLEAR: {size_before} entries removed")
|
|
||||||
|
|
||||||
def get_stats(self) -> Dict[str, Any]:
|
|
||||||
"""
|
|
||||||
Obtenir les statistiques du cache.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Dict avec statistiques
|
|
||||||
"""
|
|
||||||
total_requests = self.hits + self.misses
|
|
||||||
hit_rate = self.hits / total_requests if total_requests > 0 else 0.0
|
|
||||||
|
|
||||||
# Estimer la mémoire utilisée
|
|
||||||
memory_mb = 0.0
|
|
||||||
for vector in self.cache.values():
|
|
||||||
# Taille en bytes = nombre d'éléments * taille d'un float32
|
|
||||||
memory_mb += vector.nbytes / (1024 * 1024)
|
|
||||||
|
|
||||||
return {
|
|
||||||
"size": len(self.cache),
|
|
||||||
"max_size": self.max_size,
|
|
||||||
"hits": self.hits,
|
|
||||||
"misses": self.misses,
|
|
||||||
"evictions": self.evictions,
|
|
||||||
"hit_rate": hit_rate,
|
|
||||||
"memory_mb": memory_mb,
|
|
||||||
"max_memory_mb": self.max_memory_mb,
|
|
||||||
"memory_usage_pct": (memory_mb / self.max_memory_mb * 100) if self.max_memory_mb > 0 else 0.0
|
|
||||||
}
|
|
||||||
|
|
||||||
def __len__(self) -> int:
|
|
||||||
"""Retourne le nombre d'embeddings en cache."""
|
|
||||||
return len(self.cache)
|
|
||||||
|
|
||||||
def __contains__(self, key: str) -> bool:
|
|
||||||
"""Vérifie si une clé est dans le cache."""
|
|
||||||
return key in self.cache
|
|
||||||
|
|
||||||
|
|
||||||
class PrototypeCache:
|
|
||||||
"""
|
|
||||||
Cache spécialisé pour les prototypes de WorkflowNodes.
|
|
||||||
|
|
||||||
Les prototypes sont utilisés fréquemment pour le matching,
|
|
||||||
donc on les garde en cache avec une politique différente.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, max_size: int = 100):
|
|
||||||
"""
|
|
||||||
Initialiser le cache de prototypes.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
max_size: Nombre maximum de prototypes à garder
|
|
||||||
"""
|
|
||||||
self.max_size = max_size
|
|
||||||
self.cache: Dict[str, np.ndarray] = {}
|
|
||||||
self.access_count: Dict[str, int] = {}
|
|
||||||
self.last_access: Dict[str, datetime] = {}
|
|
||||||
|
|
||||||
logger.info(f"PrototypeCache initialized: max_size={max_size}")
|
|
||||||
|
|
||||||
def get(self, node_id: str) -> Optional[np.ndarray]:
|
|
||||||
"""
|
|
||||||
Récupérer un prototype du cache.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
node_id: ID du WorkflowNode
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Vecteur prototype si trouvé, None sinon
|
|
||||||
"""
|
|
||||||
if node_id in self.cache:
|
|
||||||
self.access_count[node_id] = self.access_count.get(node_id, 0) + 1
|
|
||||||
self.last_access[node_id] = datetime.now()
|
|
||||||
return self.cache[node_id]
|
|
||||||
|
|
||||||
return None
|
|
||||||
|
|
||||||
def put(self, node_id: str, prototype: np.ndarray):
|
|
||||||
"""
|
|
||||||
Ajouter un prototype au cache.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
node_id: ID du WorkflowNode
|
|
||||||
prototype: Vecteur prototype
|
|
||||||
"""
|
|
||||||
# Si cache plein, évict le moins utilisé
|
|
||||||
if len(self.cache) >= self.max_size and node_id not in self.cache:
|
|
||||||
self._evict_least_used()
|
|
||||||
|
|
||||||
self.cache[node_id] = prototype
|
|
||||||
self.access_count[node_id] = self.access_count.get(node_id, 0) + 1
|
|
||||||
self.last_access[node_id] = datetime.now()
|
|
||||||
|
|
||||||
def _evict_least_used(self):
|
|
||||||
"""Évict le prototype le moins utilisé."""
|
|
||||||
if not self.cache:
|
|
||||||
return
|
|
||||||
|
|
||||||
# Trouver le moins utilisé
|
|
||||||
least_used = min(self.access_count.items(), key=lambda x: x[1])
|
|
||||||
node_id = least_used[0]
|
|
||||||
|
|
||||||
del self.cache[node_id]
|
|
||||||
del self.access_count[node_id]
|
|
||||||
del self.last_access[node_id]
|
|
||||||
|
|
||||||
logger.debug(f"PrototypeCache EVICT: {node_id}")
|
|
||||||
|
|
||||||
def invalidate(self, node_id: str):
|
|
||||||
"""Invalider un prototype spécifique."""
|
|
||||||
if node_id in self.cache:
|
|
||||||
del self.cache[node_id]
|
|
||||||
self.access_count.pop(node_id, None)
|
|
||||||
self.last_access.pop(node_id, None)
|
|
||||||
|
|
||||||
def clear(self):
|
|
||||||
"""Vider le cache."""
|
|
||||||
self.cache.clear()
|
|
||||||
self.access_count.clear()
|
|
||||||
self.last_access.clear()
|
|
||||||
|
|
||||||
def get_stats(self) -> Dict[str, Any]:
|
|
||||||
"""Obtenir les statistiques du cache."""
|
|
||||||
total_accesses = sum(self.access_count.values())
|
|
||||||
avg_accesses = total_accesses / len(self.cache) if self.cache else 0.0
|
|
||||||
|
|
||||||
return {
|
|
||||||
"size": len(self.cache),
|
|
||||||
"max_size": self.max_size,
|
|
||||||
"total_accesses": total_accesses,
|
|
||||||
"avg_accesses_per_prototype": avg_accesses
|
|
||||||
}
|
|
||||||
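The hit/miss/eviction behaviour of the `OrderedDict`-based cache above can be exercised with a small standalone sketch; here plain strings stand in for the numpy vectors:

```python
from collections import OrderedDict

class TinyLRU:
    """Minimal LRU mirroring the eviction semantics of EmbeddingCache."""
    def __init__(self, max_size: int):
        self.max_size = max_size
        self.cache = OrderedDict()
        self.hits = self.misses = self.evictions = 0

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        return None

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        elif len(self.cache) >= self.max_size:
            self.cache.popitem(last=False)  # evict the oldest entry
            self.evictions += 1
        self.cache[key] = value

lru = TinyLRU(max_size=2)
lru.put("a", "vec_a")
lru.put("b", "vec_b")
lru.get("a")            # "a" becomes most recently used
lru.put("c", "vec_c")   # evicts "b", the least recently used
print(list(lru.cache))  # ['a', 'c']
```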
@@ -1,692 +0,0 @@
"""
FAISSManager - FAISS Index Management for Similarity Search

Manages fast indexing and search of embeddings with FAISS.
Supports saving/loading of indexes and metadata.
"""

import logging
from typing import List, Dict, Optional, Tuple, Any
from pathlib import Path
from dataclasses import dataclass
import numpy as np
import json
import pickle

logger = logging.getLogger(__name__)

try:
    import faiss
    FAISS_AVAILABLE = True
except ImportError:
    FAISS_AVAILABLE = False
    logger.warning("FAISS not installed. Install with: pip install faiss-cpu")


@dataclass
class SearchResult:
    """Result of a similarity search"""
    embedding_id: str
    similarity: float  # Cosine similarity
    distance: float  # L2 distance
    metadata: Dict[str, Any]


class FAISSManager:
    """
    FAISS index manager

    Manages adding, searching, and persisting embeddings with FAISS.
    Maintains a mapping between FAISS IDs and metadata.

    Optimization features:
    - Automatic Flat → IVF migration for >10k embeddings
    - Automatic training of the IVF index
    - GPU support when available
    - Periodic index optimization
    """

    def __init__(self,
                 dimensions: int,
                 index_type: str = "Flat",
                 metric: str = "cosine",
                 nlist: Optional[int] = None,
                 nprobe: int = 8,
                 use_gpu: bool = False,
                 auto_optimize: bool = True):
        """
        Initialize the FAISS manager

        Args:
            dimensions: Number of vector dimensions
            index_type: FAISS index type ("Flat", "IVF", "HNSW")
            metric: Distance metric ("cosine", "l2", "ip")
            nlist: Number of clusters for IVF (auto if None)
            nprobe: Number of clusters to visit during an IVF search
            use_gpu: Use GPU if available
            auto_optimize: Automatically migrate to IVF above 10k embeddings

        Raises:
            ImportError: If FAISS is not installed
        """
        if not FAISS_AVAILABLE:
            raise ImportError(
                "FAISS is required but not installed. "
                "Install with: pip install faiss-cpu"
            )

        self.dimensions = dimensions
        self.index_type = index_type
        self.metric = metric
        self.nlist = nlist
        self.nprobe = nprobe
        self.use_gpu = use_gpu
        self.auto_optimize = auto_optimize

        # Mapping FAISS ID -> metadata
        self.metadata_store: Dict[int, Dict[str, Any]] = {}

        # Counter for FAISS IDs
        self.next_id = 0

        # Vectors for IVF training (if needed)
        self.training_vectors: List[np.ndarray] = []
        self.is_trained = (index_type == "Flat")  # Flat needs no training

        # Threshold for automatic migration
        self.migration_threshold = 10000

        # GPU resources
        self.gpu_resources = None
        if use_gpu:
            self._setup_gpu()

        # Create the FAISS index (after all attributes are initialized)
        self.index = self._create_index()

    def _setup_gpu(self):
        """Configure GPU resources if available"""
        try:
            # Check whether a GPU is available
            ngpus = faiss.get_num_gpus()
            if ngpus > 0:
                self.gpu_resources = faiss.StandardGpuResources()
                logger.info(f"FAISS GPU enabled: {ngpus} GPU(s) available")
            else:
                logger.warning("FAISS GPU requested but no GPU available, using CPU")
                self.use_gpu = False
        except Exception as e:
            logger.warning(f"FAISS GPU setup failed: {e}, using CPU")
            self.use_gpu = False

    def _calculate_nlist(self, n_vectors: int) -> int:
        """
        Compute the optimal number of clusters for IVF

        Rule of thumb: nlist = sqrt(n_vectors)
        Minimum: 100, Maximum: 65536

        Args:
            n_vectors: Number of vectors in the index

        Returns:
            Optimal number of clusters
        """
        if self.nlist is not None:
            return self.nlist

        # Rule of thumb
        nlist = int(np.sqrt(n_vectors))

        # Constraints
        nlist = max(100, min(nlist, 65536))

        return nlist
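The sqrt rule of thumb with clamping described in `_calculate_nlist` can be checked in isolation; this is a sketch of the same arithmetic, not the class method itself:

```python
import math

def optimal_nlist(n_vectors: int, lo: int = 100, hi: int = 65536) -> int:
    # nlist ≈ sqrt(n_vectors), clamped to the [lo, hi] range
    return max(lo, min(int(math.sqrt(n_vectors)), hi))

print(optimal_nlist(1_000_000))  # 1000
print(optimal_nlist(50))         # 100  (the lower bound kicks in)
```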
    def _create_index(self) -> 'faiss.Index':
        """Create a FAISS index according to the configuration"""
        if self.metric == "cosine":
            # For cosine similarity, normalize and use the inner product
            if self.index_type == "Flat":
                index = faiss.IndexFlatIP(self.dimensions)
            elif self.index_type == "IVF":
                # Compute the optimal nlist
                nlist = self._calculate_nlist(max(1000, self.migration_threshold))
                quantizer = faiss.IndexFlatIP(self.dimensions)
                index = faiss.IndexIVFFlat(quantizer, self.dimensions, nlist)
                # Configure nprobe
                index.nprobe = self.nprobe
                # Enable DirectMap to allow reconstruct()
                index.make_direct_map()
            elif self.index_type == "HNSW":
                index = faiss.IndexHNSWFlat(self.dimensions, 32)
            else:
                raise ValueError(f"Unknown index type: {self.index_type}")

        elif self.metric == "l2":
            if self.index_type == "Flat":
                index = faiss.IndexFlatL2(self.dimensions)
            elif self.index_type == "IVF":
                # Compute the optimal nlist
                nlist = self._calculate_nlist(max(1000, self.migration_threshold))
                quantizer = faiss.IndexFlatL2(self.dimensions)
                index = faiss.IndexIVFFlat(quantizer, self.dimensions, nlist)
                # Configure nprobe
                index.nprobe = self.nprobe
                # Enable DirectMap to allow reconstruct()
                index.make_direct_map()
            elif self.index_type == "HNSW":
                index = faiss.IndexHNSWFlat(self.dimensions, 32)
            else:
                raise ValueError(f"Unknown index type: {self.index_type}")

        elif self.metric == "ip":  # Inner product
            if self.index_type == "Flat":
                index = faiss.IndexFlatIP(self.dimensions)
            else:
                raise ValueError("Inner product only supports the Flat index")

        else:
            raise ValueError(f"Unknown metric: {self.metric}")

        # Move to GPU if requested
        if self.use_gpu and self.gpu_resources is not None:
            try:
                index = faiss.index_cpu_to_gpu(self.gpu_resources, 0, index)
            except Exception as e:
                logger.warning(f"Failed to move index to GPU: {e}, using CPU")

        return index

    def add_embedding(self,
                      embedding_id: str,
                      vector: np.ndarray,
                      metadata: Optional[Dict[str, Any]] = None) -> int:
        """
        Add an embedding to the index

        Args:
            embedding_id: Unique ID of the embedding
            vector: Embedding vector (dimensions must match)
            metadata: Associated metadata (optional)

        Returns:
            Assigned FAISS ID

        Raises:
            ValueError: If the dimensions do not match
        """
        if vector.shape[0] != self.dimensions:
            raise ValueError(
                f"Vector dimensions mismatch: expected {self.dimensions}, "
                f"got {vector.shape[0]}"
            )

        # Convert to float32 first
        vector_float32 = vector.astype(np.float32)

        # Normalize for the cosine metric
        if self.metric == "cosine":
            norm = np.linalg.norm(vector_float32)
            if norm > 0:
                vector_float32 = vector_float32 / norm

        # Reshape for FAISS
        vector_reshaped = vector_float32.reshape(1, -1)

        # For IVF, collect vectors for training if not yet trained
        if self.index_type == "IVF" and not self.is_trained:
            self.training_vectors.append(vector_float32)  # Store the normalized vector

            # Train once we have enough vectors
            if len(self.training_vectors) >= 100:
                self._train_ivf_index()
                # The training vectors were already added inside _train_ivf_index;
                # do not add them again
        elif self.is_trained:
            # Add to the index (only once trained for IVF, or if Flat)
            self.index.add(vector_reshaped)

        # Store metadata
        faiss_id = self.next_id
        self.metadata_store[faiss_id] = {
            "embedding_id": embedding_id,
            "metadata": metadata or {}
        }

        self.next_id += 1

        # Check whether automatic migration is needed
        if self.auto_optimize and self.index_type == "Flat":
            if self.index.ntotal >= self.migration_threshold:
                self._migrate_to_ivf()

        return faiss_id

    def _train_ivf_index(self):
        """Train the IVF index with the collected vectors"""
        if self.is_trained or self.index_type != "IVF":
            return

        if len(self.training_vectors) < 100:
            logger.warning(f"Training IVF with only {len(self.training_vectors)} vectors")

        # Convert to a numpy array
        training_data = np.array(self.training_vectors, dtype=np.float32)

        logger.info(f"Training IVF index with {len(self.training_vectors)} vectors...")

        # Train the index
        self.index.train(training_data)
        self.is_trained = True

        # Add all training vectors to the index
        self.index.add(training_data)

        # Free memory
        self.training_vectors.clear()

        logger.info(f"IVF index trained successfully with nlist={self.index.nlist}")

    def _migrate_to_ivf(self):
        """
        Automatically migrate from Flat to IVF

        Called automatically when the Flat index exceeds the threshold.
        """
        if self.index_type != "Flat":
            return

        logger.info(f"Migrating from Flat to IVF (current size: {self.index.ntotal})...")

        # Extract all vectors from the Flat index
        n_vectors = self.index.ntotal
        vectors = np.zeros((n_vectors, self.dimensions), dtype=np.float32)

        for i in range(n_vectors):
            vectors[i] = self.index.reconstruct(int(i))

        # Compute the optimal nlist
        nlist = self._calculate_nlist(n_vectors)

        # Create a new IVF index
        if self.metric == "cosine":
            quantizer = faiss.IndexFlatIP(self.dimensions)
            new_index = faiss.IndexIVFFlat(quantizer, self.dimensions, nlist)
        else:  # l2
            quantizer = faiss.IndexFlatL2(self.dimensions)
            new_index = faiss.IndexIVFFlat(quantizer, self.dimensions, nlist)

        new_index.nprobe = self.nprobe
        new_index.make_direct_map()  # Enable DirectMap

        # Train with all vectors
        new_index.train(vectors)

        # Add all vectors
        new_index.add(vectors)

        # Replace the index
        self.index = new_index
        self.index_type = "IVF"
        self.is_trained = True

        logger.info(f"Migration complete: IVF index with nlist={nlist}, nprobe={self.nprobe}")

    def optimize_index(self):
        """
        Optimize the index periodically

        For IVF: recompute the optimal nlist and retrain if needed
        """
        if self.index_type != "IVF" or not self.is_trained:
            return

        n_vectors = self.index.ntotal
        if n_vectors < 100:
            return

        # Compute the optimal nlist for the current size
        optimal_nlist = self._calculate_nlist(n_vectors)

        # If the current nlist is very different, rebuild
        current_nlist = self.index.nlist
        if abs(optimal_nlist - current_nlist) / current_nlist > 0.5:
            logger.info(f"Optimizing IVF index: {current_nlist} → {optimal_nlist} clusters")

            # Extract all vectors
            vectors = np.zeros((n_vectors, self.dimensions), dtype=np.float32)
            for i in range(n_vectors):
                vectors[i] = self.index.reconstruct(int(i))

            # Create a new index with the optimal nlist
            if self.metric == "cosine":
                quantizer = faiss.IndexFlatIP(self.dimensions)
                new_index = faiss.IndexIVFFlat(quantizer, self.dimensions, optimal_nlist)
            else:
                quantizer = faiss.IndexFlatL2(self.dimensions)
                new_index = faiss.IndexIVFFlat(quantizer, self.dimensions, optimal_nlist)

            new_index.nprobe = self.nprobe
            new_index.make_direct_map()  # Enable DirectMap

            # Train and add
            new_index.train(vectors)
            new_index.add(vectors)

            # Replace
            self.index = new_index

            logger.info("Index optimized successfully")

    def search_similar(self,
                       query_vector: np.ndarray,
                       k: int = 5,
                       min_similarity: Optional[float] = None) -> List[SearchResult]:
        """
        Search for the k most similar embeddings

        Args:
            query_vector: Query vector
            k: Number of results to return
            min_similarity: Minimum similarity (optional, for cosine)

        Returns:
            List of SearchResult sorted by decreasing similarity

        Raises:
            ValueError: If the dimensions do not match
        """
        if query_vector.shape[0] != self.dimensions:
            raise ValueError(
                f"Query vector dimensions mismatch: expected {self.dimensions}, "
                f"got {query_vector.shape[0]}"
            )

        if self.index.ntotal == 0:
            return []  # Empty index

        # Normalize for the cosine metric
        if self.metric == "cosine":
            norm = np.linalg.norm(query_vector)
            if norm > 0:
                query_vector = query_vector / norm

        # Convert to float32 and reshape
        query_vector = query_vector.astype(np.float32).reshape(1, -1)

        # Search
        k = min(k, self.index.ntotal)  # Do not ask for more than available
        distances, indices = self.index.search(query_vector, k)

        # Convert to SearchResults
        results = []
        for dist, idx in zip(distances[0], indices[0]):
            if idx == -1:  # No result
                continue

            # Retrieve metadata
            meta = self.metadata_store.get(int(idx), {})

            # Convert distance to similarity
            if self.metric == "cosine":
                # For the inner product of normalized vectors, distance = similarity
                similarity = float(dist)
            elif self.metric == "l2":
                # Convert the L2 distance into an approximate similarity
                similarity = 1.0 / (1.0 + float(dist))
            else:
                similarity = float(dist)

            # Filter by minimum similarity
            if min_similarity is not None and similarity < min_similarity:
                continue

            results.append(SearchResult(
                embedding_id=meta.get("embedding_id", f"unknown_{idx}"),
                similarity=similarity,
                distance=float(dist),
                metadata=meta.get("metadata", {})
            ))

        return results
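The cosine metric relies on the identity used throughout `add_embedding` and `search_similar`: after L2-normalization, the inner product of two vectors equals their cosine similarity. A quick numpy check of that equivalence (assuming numpy is installed):

```python
import numpy as np

a = np.array([3.0, 4.0], dtype=np.float32)
b = np.array([4.0, 3.0], dtype=np.float32)

# Normalize exactly as add_embedding / search_similar do
a_n = a / np.linalg.norm(a)
b_n = b / np.linalg.norm(b)

ip = float(a_n @ b_n)  # inner product of the normalized vectors
cos = float(a @ b) / float(np.linalg.norm(a) * np.linalg.norm(b))
print(round(ip, 4), round(cos, 4))  # both 0.96
```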
    def remove_embedding(self, faiss_id: int) -> bool:
        """
        Remove an embedding from the index

        Note: FAISS does not support direct deletion.
        This method only removes the metadata.
        To truly delete, the index must be rebuilt.

        Args:
            faiss_id: FAISS ID of the embedding

        Returns:
            True if removed, False if not found
        """
        if faiss_id in self.metadata_store:
            del self.metadata_store[faiss_id]
            return True
        return False

    def get_metadata(self, faiss_id: int) -> Optional[Dict[str, Any]]:
        """Retrieve an embedding's metadata"""
        return self.metadata_store.get(faiss_id)

    def save(self, index_path: Path, metadata_path: Path) -> None:
        """
        Save the index and its metadata

        Args:
            index_path: Path to save the FAISS index to
            metadata_path: Path to save the metadata to
        """
        # Create directories if needed
        index_path.parent.mkdir(parents=True, exist_ok=True)
        metadata_path.parent.mkdir(parents=True, exist_ok=True)

        # If on GPU, move back to CPU before saving
        index_to_save = self.index
        if self.use_gpu:
            try:
                index_to_save = faiss.index_gpu_to_cpu(self.index)
            except (RuntimeError, AttributeError):
                pass  # Already on CPU, or no GPU

        # Save the FAISS index
        faiss.write_index(index_to_save, str(index_path))

        # Save metadata
        metadata = {
            "dimensions": self.dimensions,
            "index_type": self.index_type,
            "metric": self.metric,
            "next_id": self.next_id,
            "metadata_store": self.metadata_store,
            "nlist": self.nlist,
            "nprobe": self.nprobe,
            "is_trained": self.is_trained,
            "auto_optimize": self.auto_optimize
        }

        with open(metadata_path, 'wb') as f:
            pickle.dump(metadata, f)

    @classmethod
    def load(cls, index_path: Path, metadata_path: Path, use_gpu: bool = False) -> 'FAISSManager':
        """
        Load an index and its metadata

        Args:
            index_path: Path of the FAISS index
            metadata_path: Path of the metadata
            use_gpu: Load on GPU if available

        Returns:
            Loaded FAISSManager
        """
        # Load metadata
        with open(metadata_path, 'rb') as f:
            metadata = pickle.load(f)

        # Create the instance
        manager = cls(
            dimensions=metadata["dimensions"],
            index_type=metadata["index_type"],
            metric=metadata["metric"],
            nlist=metadata.get("nlist"),
            nprobe=metadata.get("nprobe", 8),
            use_gpu=use_gpu,
            auto_optimize=metadata.get("auto_optimize", True)
        )

        # Load the FAISS index
        manager.index = faiss.read_index(str(index_path))

        # Move to GPU if requested
        if use_gpu and manager.gpu_resources is not None:
            try:
                manager.index = faiss.index_cpu_to_gpu(manager.gpu_resources, 0, manager.index)
            except Exception as e:
                logger.warning(f"Failed to move loaded index to GPU: {e}")

        # Restore metadata
        manager.next_id = metadata["next_id"]
        manager.metadata_store = metadata["metadata_store"]
        manager.is_trained = metadata.get("is_trained", True)

        return manager

    def get_stats(self) -> Dict[str, Any]:
        """Retrieve index statistics"""
        stats = {
            "dimensions": self.dimensions,
            "index_type": self.index_type,
            "metric": self.metric,
            "total_vectors": self.index.ntotal,
            "metadata_count": len(self.metadata_store),
            "is_trained": self.is_trained,
            "use_gpu": self.use_gpu
        }

        # Add IVF-specific stats
        if self.index_type == "IVF" and self.is_trained:
            stats["nlist"] = self.index.nlist
            stats["nprobe"] = self.index.nprobe

            # Compute the optimal nlist for comparison
            if self.index.ntotal > 0:
                optimal_nlist = self._calculate_nlist(self.index.ntotal)
                stats["optimal_nlist"] = optimal_nlist
                stats["nlist_efficiency"] = min(1.0, self.index.nlist / optimal_nlist)

        return stats

    def clear(self) -> None:
        """
        Empty the index completely and reset the training state.

        Author: Dom, Alice Kiro - December 22, 2025

        Improvement for the clean FAISS rebuild:
        - Full reset of the IVF training state
        - Reinitialization of training_vectors
        - Correct handling of the is_trained flag according to the index type
        """
        self.index = self._create_index()
        self.metadata_store.clear()
        self.next_id = 0

        # IMPORTANT: reset the IVF training state
        self.training_vectors.clear()
        self.is_trained = (self.index_type == "Flat")

    def reindex(self, items, force_train_ivf: bool = True) -> int:
        """
        Rebuild the index from a canonical source (vectors).

        Author: Dom, Alice Kiro - December 22, 2025

        Clean FAISS rebuild strategy: "1 prototype = 1 entry"
        - Full clear before reconstruction
        - Safe insertion with vector validation
        - Forced IVF training even for small volumes
        - Returns the number of items indexed

        Args:
            items: Iterable[(embedding_id: str, vector: np.ndarray, metadata: dict)]
            force_train_ivf: Force IVF training even with few vectors

        Returns:
            Number of items successfully indexed
        """
        logger.info(f"FAISS reindex started with force_train_ivf={force_train_ivf}")

        # Full clear before reconstruction
        self.clear()

        count = 0
        for embedding_id, vector, metadata in items:
            if vector is None:
                logger.debug(f"Skipping None vector for {embedding_id}")
                continue

            try:
                self.add_embedding(embedding_id, vector, metadata or {})
                count += 1
            except Exception as e:
                logger.warning(f"Failed to add embedding {embedding_id}: {e}")
                continue

        # With IVF and a small volume, add_embedding may not trigger training
        if (self.index_type == "IVF" and force_train_ivf and
                (not self.is_trained) and self.training_vectors):
            logger.info(f"Force training IVF with {len(self.training_vectors)} vectors")
            self._train_ivf_index()

        logger.info(f"FAISS reindex completed: {count} items indexed")
|
|
||||||
return count
|
|
||||||
|
|
||||||
def rebuild_index(self) -> None:
|
|
||||||
"""
|
|
||||||
Reconstruire l'index depuis les métadonnées
|
|
||||||
|
|
||||||
Utile après suppressions pour compacter l'index.
|
|
||||||
Note: Nécessite d'avoir les vecteurs originaux.
|
|
||||||
"""
|
|
||||||
# TODO: Implémenter si nécessaire
|
|
||||||
# Nécessiterait de stocker les vecteurs dans metadata_store
|
|
||||||
raise NotImplementedError("Rebuild not yet implemented")
|
|
||||||
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Fonctions utilitaires
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
def create_flat_index(dimensions: int, metric: str = "cosine") -> FAISSManager:
|
|
||||||
"""
|
|
||||||
Créer un index FAISS Flat (recherche exhaustive)
|
|
||||||
|
|
||||||
Args:
|
|
||||||
dimensions: Nombre de dimensions
|
|
||||||
metric: Métrique ("cosine", "l2", "ip")
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
FAISSManager configuré
|
|
||||||
"""
|
|
||||||
return FAISSManager(dimension=dimensions, index_type="Flat", metric=metric)
|
|
||||||
|
|
||||||
|
|
||||||
def create_ivf_index(dimensions: int, metric: str = "cosine") -> FAISSManager:
|
|
||||||
"""
|
|
||||||
Créer un index FAISS IVF (recherche approximative rapide)
|
|
||||||
|
|
||||||
Args:
|
|
||||||
dimensions: Nombre de dimensions
|
|
||||||
metric: Métrique ("cosine", "l2")
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
FAISSManager configuré
|
|
||||||
"""
|
|
||||||
return FAISSManager(dimension=dimensions, index_type="IVF", metric=metric)
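A Flat index performs an exhaustive scan over every stored vector. As a minimal numpy-only sketch of the brute-force cosine search that a cosine-metric Flat index is equivalent to (the function name and data are illustrative, not part of this module):

```python
import numpy as np

def flat_cosine_search(index_vectors: np.ndarray, query: np.ndarray, k: int = 3):
    """Exhaustive cosine search, the brute-force equivalent of a Flat index.

    index_vectors: (n, d) matrix of stored embeddings.
    query: (d,) query vector.
    Returns (scores, ids) of the top-k most similar stored vectors.
    """
    # Normalize the rows and the query so a dot product equals cosine similarity
    mat = index_vectors / (np.linalg.norm(index_vectors, axis=1, keepdims=True) + 1e-10)
    q = query / (np.linalg.norm(query) + 1e-10)
    scores = mat @ q
    # Sort descending and keep the k best
    top = np.argsort(-scores)[:k]
    return scores[top], top

vectors = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]], dtype=np.float32)
scores, ids = flat_cosine_search(vectors, np.array([1.0, 0.1], dtype=np.float32), k=2)
```

An IVF index approximates this scan by only visiting `nprobe` of the `nlist` clusters, trading a little recall for speed.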
@@ -1,613 +0,0 @@
"""
FusionEngine - Multi-Modal Embedding Fusion

Fuses several embeddings (image, text, title, UI) into a single vector
with configurable weighting and L2 normalization.

Task 5.2: Lazy loading of embeddings with WeakValueDictionary.
"""

from typing import Dict, List, Optional
import numpy as np
from dataclasses import dataclass
import weakref
import logging
from pathlib import Path

from ..models.state_embedding import (
    StateEmbedding,
    EmbeddingComponent,
    DEFAULT_FUSION_WEIGHTS
)

logger = logging.getLogger(__name__)


@dataclass
class FusionConfig:
    """Fusion configuration"""
    method: str = "weighted"  # weighted or concat_projection
    normalize: bool = True  # Normalize the final vector
    weights: Optional[Dict[str, float]] = None  # Custom weights

    def __post_init__(self):
        if self.weights is None:
            self.weights = DEFAULT_FUSION_WEIGHTS.copy()

        # Validate that the weights sum to 1.0 for weighted fusion
        if self.method == "weighted":
            total = sum(self.weights.values())
            if not (0.99 <= total <= 1.01):
                raise ValueError(
                    f"Weights must sum to 1.0 for weighted fusion, got {total}"
                )


class FusionEngine:
    """
    Multi-modal fusion engine with optimized lazy loading.

    Fuses embeddings from different modalities (image, text, UI)
    into a single vector representing the complete screen state.

    Task 5.2: Implements lazy loading with a WeakValueDictionary to
    avoid repeated reloads while still allowing garbage collection.
    """

    def __init__(self, config: Optional[FusionConfig] = None):
        """
        Initialize the fusion engine with lazy loading.

        Args:
            config: Fusion configuration (uses the default config if None)
        """
        self.config = config or FusionConfig()

        # Task 5.2: lazy-loading cache backed by a WeakValueDictionary,
        # allowing automatic garbage collection of unused embeddings
        self._embedding_cache: weakref.WeakValueDictionary = weakref.WeakValueDictionary()
        self._cache_stats = {
            'hits': 0,
            'misses': 0,
            'loads': 0,
            'evictions': 0
        }

    def fuse(self,
             embeddings: Dict[str, np.ndarray],
             weights: Optional[Dict[str, float]] = None) -> np.ndarray:
        """
        Fuse several embeddings into a single vector.

        Args:
            embeddings: Dict {modality: vector},
                e.g., {"image": vec1, "text": vec2, "title": vec3, "ui": vec4}
            weights: Custom weights (optional, defaults to the config weights)

        Returns:
            Fused vector (normalized if config.normalize=True)

        Raises:
            ValueError: If the dimensions do not match or the weights are invalid
        """
        if not embeddings:
            raise ValueError("No embeddings provided for fusion")

        # Use the provided weights or fall back to the config weights
        fusion_weights = weights or self.config.weights

        # Check that every modality has the same number of dimensions
        dimensions = None
        for modality, vector in embeddings.items():
            if dimensions is None:
                dimensions = vector.shape[0]
            elif vector.shape[0] != dimensions:
                raise ValueError(
                    f"All embeddings must have same dimensions. "
                    f"Expected {dimensions}, got {vector.shape[0]} for {modality}"
                )

        if self.config.method == "weighted":
            fused = self._fuse_weighted(embeddings, fusion_weights)
        elif self.config.method == "concat_projection":
            fused = self._fuse_concat_projection(embeddings, fusion_weights)
        else:
            raise ValueError(f"Unknown fusion method: {self.config.method}")

        # Normalize if requested
        if self.config.normalize:
            fused = self._normalize_l2(fused)

        return fused

    def _fuse_weighted(self,
                       embeddings: Dict[str, np.ndarray],
                       weights: Dict[str, float]) -> np.ndarray:
        """
        Simple weighted fusion: weighted sum of the vectors.

        fused = w1*v1 + w2*v2 + w3*v3 + w4*v4
        """
        # Initialize the result vector
        first_vector = next(iter(embeddings.values()))
        fused = np.zeros_like(first_vector, dtype=np.float32)

        # Weighted sum
        for modality, vector in embeddings.items():
            weight = weights.get(modality, 0.0)
            fused += weight * vector

        return fused
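The weighted fusion above is a plain linear combination followed by L2 normalization. A standalone numpy sketch of the same computation, with made-up modality vectors and weights for illustration:

```python
import numpy as np

# Hypothetical modality vectors of equal dimension (the engine enforces equality)
embeddings = {
    "image": np.array([1.0, 0.0, 0.0], dtype=np.float32),
    "text":  np.array([0.0, 1.0, 0.0], dtype=np.float32),
}
weights = {"image": 0.6, "text": 0.4}

# Weighted sum: fused = w_image * v_image + w_text * v_text
fused = np.zeros(3, dtype=np.float32)
for modality, vec in embeddings.items():
    fused += weights.get(modality, 0.0) * vec

# L2-normalize, as FusionConfig(normalize=True) does
fused = fused / np.linalg.norm(fused)
```

After normalization the fused vector has unit length, so downstream cosine similarity reduces to a dot product.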
    def _fuse_concat_projection(self,
                                embeddings: Dict[str, np.ndarray],
                                weights: Dict[str, float]) -> np.ndarray:
        """
        Fusion by concatenation + projection.

        Concatenates all the vectors, then projects down to the target dimension.
        Note: for now this falls back to a simple weighted average.
        TODO: implement a real projection with a learned matrix.
        """
        # For now, use weighted fusion;
        # a future version could learn a projection matrix
        return self._fuse_weighted(embeddings, weights)

    def _normalize_l2(self, vector: np.ndarray) -> np.ndarray:
        """
        Normalize a vector with the L2 norm.

        normalized = vector / ||vector||_2
        """
        norm = np.linalg.norm(vector)
        if norm < 1e-10:  # Avoid division by zero
            return vector
        return vector / norm

    def create_state_embedding(self,
                               embedding_id: str,
                               embeddings: Dict[str, np.ndarray],
                               vector_save_path: str,
                               weights: Optional[Dict[str, float]] = None,
                               metadata: Optional[Dict] = None) -> StateEmbedding:
        """
        Create a complete StateEmbedding from individual embeddings.

        Args:
            embedding_id: Unique ID for this embedding
            embeddings: Dict {modality: vector}
            vector_save_path: Path where the fused vector is saved
            weights: Custom weights (optional)
            metadata: Additional metadata

        Returns:
            StateEmbedding with the fused vector saved
        """
        # Fuse the embeddings
        fused_vector = self.fuse(embeddings, weights)

        # Create the components
        fusion_weights = weights or self.config.weights
        components = {}

        for modality, vector in embeddings.items():
            # For now the individual vectors are not saved;
            # they could be saved if needed
            components[modality] = EmbeddingComponent(
                weight=fusion_weights.get(modality, 0.0),
                vector_id=f"{vector_save_path}_{modality}.npy",
                source_text=None
            )

        # Create the StateEmbedding
        dimensions = fused_vector.shape[0]
        state_emb = StateEmbedding(
            embedding_id=embedding_id,
            vector_id=vector_save_path,
            dimensions=dimensions,
            fusion_method=self.config.method,
            components=components,
            metadata=metadata or {}
        )

        # Save the fused vector
        state_emb.save_vector(fused_vector)

        return state_emb

    def compute_similarity(self,
                           emb1: StateEmbedding,
                           emb2: StateEmbedding) -> float:
        """
        Compute the cosine similarity between two StateEmbeddings.

        Args:
            emb1: First embedding
            emb2: Second embedding

        Returns:
            Cosine similarity in [-1, 1]
        """
        return emb1.compute_similarity(emb2)

    def batch_fuse(self,
                   batch_embeddings: List[Dict[str, np.ndarray]],
                   weights: Optional[Dict[str, float]] = None) -> List[np.ndarray]:
        """
        Fuse a batch of embeddings.

        Args:
            batch_embeddings: List of {modality: vector} dicts
            weights: Custom weights (optional)

        Returns:
            List of fused vectors
        """
        return [self.fuse(embs, weights) for embs in batch_embeddings]

    def get_config(self) -> FusionConfig:
        """Get the current configuration"""
        return self.config

    def set_weights(self, weights: Dict[str, float]) -> None:
        """
        Update the fusion weights.

        Args:
            weights: New weights

        Raises:
            ValueError: If the weights do not sum to 1.0 (for weighted fusion)
        """
        if self.config.method == "weighted":
            total = sum(weights.values())
            if not (0.99 <= total <= 1.01):
                raise ValueError(
                    f"Weights must sum to 1.0 for weighted fusion, got {total}"
                )

        self.config.weights = weights.copy()


# ============================================================================
# Utility functions
# ============================================================================

def create_default_fusion_engine() -> FusionEngine:
    """Create a FusionEngine with the default configuration"""
    return FusionEngine(FusionConfig())


def normalize_vector(vector: np.ndarray) -> np.ndarray:
    """
    Normalize a vector with the L2 norm.

    Args:
        vector: Vector to normalize

    Returns:
        Normalized vector
    """
    norm = np.linalg.norm(vector)
    if norm < 1e-10:
        return vector
    return vector / norm


def validate_weights(weights: Dict[str, float],
                     method: str = "weighted") -> bool:
    """
    Validate that the weights are correct.

    Args:
        weights: Weights to validate
        method: Fusion method

    Returns:
        True if valid, False otherwise
    """
    if method == "weighted":
        total = sum(weights.values())
        return 0.99 <= total <= 1.01
    return True


def fuse_batch(
    self,
    embeddings_batch: List[Dict[str, np.ndarray]],
    weights: Optional[Dict[str, float]] = None
) -> np.ndarray:
    """
    Fuse a batch of embeddings in one vectorized pass for efficiency.

    Args:
        embeddings_batch: List of {modality: vector} dicts
        weights: Custom weights (optional)

    Returns:
        Numpy array of shape (batch_size, embedding_dim) with the fused vectors

    Note:
        This method is optimized to process several embeddings in a single
        vectorized operation, which is faster than fusing them one by one.
    """
    if not embeddings_batch:
        raise ValueError("Empty batch provided")

    batch_size = len(embeddings_batch)
    fusion_weights = weights or self.config.weights

    # Determine the dimensions from the first element
    first_emb = embeddings_batch[0]
    first_vector = next(iter(first_emb.values()))
    embedding_dim = first_vector.shape[0]

    # Prepare the result
    fused_batch = np.zeros((batch_size, embedding_dim), dtype=np.float32)

    # Process each modality for the whole batch
    for modality in first_emb.keys():
        weight = fusion_weights.get(modality, 0.0)
        if weight == 0.0:
            continue

        # Collect all the vectors for this modality
        modality_vectors = []
        for emb_dict in embeddings_batch:
            if modality in emb_dict:
                modality_vectors.append(emb_dict[modality])
            else:
                # If the modality is missing, use a zero vector
                modality_vectors.append(np.zeros(embedding_dim, dtype=np.float32))

        # Convert to a numpy array (batch_size, embedding_dim)
        modality_batch = np.array(modality_vectors, dtype=np.float32)

        # Add the weighted contribution
        fused_batch += weight * modality_batch

    # Normalize if requested
    if self.config.normalize:
        # L2 normalization for each vector in the batch
        norms = np.linalg.norm(fused_batch, axis=1, keepdims=True)
        # Avoid division by zero
        norms = np.where(norms < 1e-10, 1.0, norms)
        fused_batch = fused_batch / norms

    return fused_batch
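The batch variant accumulates one (batch_size, dim) matrix per modality instead of looping per item. A self-contained numpy sketch of that scheme, including the zero-vector fallback for missing modalities (the helper name and data are illustrative):

```python
import numpy as np

def fuse_batch_weighted(batch, weights, normalize=True):
    """Vectorized weighted fusion over a batch of {modality: vector} dicts."""
    dim = next(iter(batch[0].values())).shape[0]
    fused = np.zeros((len(batch), dim), dtype=np.float32)
    for modality, w in weights.items():
        # Stack this modality across the batch; missing entries become zeros
        rows = np.array([d.get(modality, np.zeros(dim, dtype=np.float32))
                         for d in batch], dtype=np.float32)
        fused += w * rows
    if normalize:
        # Row-wise L2 normalization, guarding against zero rows
        norms = np.linalg.norm(fused, axis=1, keepdims=True)
        fused /= np.where(norms < 1e-10, 1.0, norms)
    return fused

batch = [
    {"image": np.array([1.0, 0.0]), "text": np.array([0.0, 1.0])},
    {"image": np.array([0.0, 2.0])},  # "text" missing, treated as zeros
]
out = fuse_batch_weighted(batch, {"image": 0.5, "text": 0.5})
```

Iterating over the weight dict (rather than the first item's keys) also fuses modalities that happen to be absent from the first batch element.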
def create_state_embeddings_batch(
    self,
    embedding_ids: List[str],
    embeddings_batch: List[Dict[str, np.ndarray]],
    vector_save_paths: List[str],
    weights: Optional[Dict[str, float]] = None,
    metadata_batch: Optional[List[Dict]] = None
) -> List[StateEmbedding]:
    """
    Create a batch of StateEmbeddings in an optimized way.

    Args:
        embedding_ids: List of unique IDs
        embeddings_batch: List of {modality: vector} dicts
        vector_save_paths: List of save paths
        weights: Custom weights (optional)
        metadata_batch: List of metadata dicts (optional)

    Returns:
        List of created StateEmbeddings

    Note:
        This method is roughly 3-5x faster than creating the embeddings
        one by one, thanks to the vectorized processing.
    """
    if not (len(embedding_ids) == len(embeddings_batch) == len(vector_save_paths)):
        raise ValueError("All input lists must have the same length")

    batch_size = len(embedding_ids)

    # Fuse the whole batch in a single operation
    fused_vectors = self.fuse_batch(embeddings_batch, weights)

    # Create the StateEmbeddings
    state_embeddings = []
    fusion_weights = weights or self.config.weights

    for i in range(batch_size):
        embedding_id = embedding_ids[i]
        embeddings = embeddings_batch[i]
        vector_save_path = vector_save_paths[i]
        metadata = metadata_batch[i] if metadata_batch else None
        fused_vector = fused_vectors[i]

        # Create the components
        components = {}
        for modality, vector in embeddings.items():
            components[modality] = EmbeddingComponent(
                weight=fusion_weights.get(modality, 0.0),
                vector_id=f"{vector_save_path}_{modality}.npy",
                source_text=None
            )

        # Create the StateEmbedding
        dimensions = fused_vector.shape[0]
        state_emb = StateEmbedding(
            embedding_id=embedding_id,
            vector_id=vector_save_path,
            dimensions=dimensions,
            fusion_method=self.config.method,
            components=components,
            metadata=metadata or {}
        )

        # Save the fused vector
        state_emb.save_vector(fused_vector)

        state_embeddings.append(state_emb)

    return state_embeddings


def compute_similarity_batch(
    self,
    query_embedding: StateEmbedding,
    candidate_embeddings: List[StateEmbedding]
) -> np.ndarray:
    """
    Compute the similarity between a query embedding and a batch of candidates.

    Args:
        query_embedding: Query embedding
        candidate_embeddings: List of candidate embeddings

    Returns:
        Numpy array of similarities (batch_size,)

    Note:
        Uses vectorized operations to compute all the similarities
        in a single matrix operation.
    """
    # Load the query vector
    query_vector = query_embedding.get_vector()

    # Load all the candidate vectors
    candidate_vectors = []
    for emb in candidate_embeddings:
        candidate_vectors.append(emb.get_vector())

    # Convert to a matrix (batch_size, embedding_dim)
    candidates_matrix = np.array(candidate_vectors, dtype=np.float32)

    # Vectorized computation: cosine similarity = dot product (if normalized)
    # similarities = candidates_matrix @ query_vector
    similarities = np.dot(candidates_matrix, query_vector)

    return similarities
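The batch similarity trick above relies on the identity that for unit-norm vectors, cosine similarity is exactly the dot product, so one matrix-vector product scores every candidate at once. A self-contained check of the equivalence on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
query = rng.normal(size=4)
candidates = rng.normal(size=(5, 4))

# Normalize everything to unit length
query /= np.linalg.norm(query)
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)

# One matrix-vector product yields all cosine similarities at once
sims_fast = candidates @ query

# Per-pair reference computation
sims_ref = np.array([
    np.dot(c, query) / (np.linalg.norm(c) * np.linalg.norm(query))
    for c in candidates
])
```

Note the method assumes the stored vectors were saved with `normalize=True`; for non-normalized vectors the dot product overweights long vectors.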
def load_embedding_lazy(self, embedding_path: str, force_reload: bool = False) -> Optional[np.ndarray]:
    """
    Load an embedding with lazy loading and caching.

    Task 5.2: Lazy loading of embeddings with a WeakValueDictionary cache.
    On-demand loading from disk with automatic eviction.

    Args:
        embedding_path: Path to the embedding file (.npy)
        force_reload: Force a reload from disk

    Returns:
        Numpy array of the embedding, or None on error
    """
    if not embedding_path:
        return None

    # Check the cache first (unless force_reload)
    if not force_reload and embedding_path in self._embedding_cache:
        self._cache_stats['hits'] += 1
        logger.debug(f"Embedding cache hit: {Path(embedding_path).name}")
        return self._embedding_cache[embedding_path]

    # Cache miss - load from disk
    self._cache_stats['misses'] += 1

    try:
        if not Path(embedding_path).exists():
            logger.warning(f"Embedding file not found: {embedding_path}")
            return None

        logger.debug(f"Loading embedding from disk: {Path(embedding_path).name}")
        embedding = np.load(embedding_path)

        # Validate the format
        if not isinstance(embedding, np.ndarray) or embedding.ndim != 1:
            logger.error(f"Invalid embedding format in {embedding_path}")
            return None

        # Add to the cache (the WeakValueDictionary handles eviction automatically)
        self._embedding_cache[embedding_path] = embedding
        self._cache_stats['loads'] += 1

        logger.debug(f"Embedding loaded: {embedding.shape} from {Path(embedding_path).name}")
        return embedding

    except Exception as e:
        logger.error(f"Error loading embedding from {embedding_path}: {e}")
        return None


def fuse_with_lazy_loading(self,
                           embedding_paths: Dict[str, str],
                           weights: Optional[Dict[str, float]] = None) -> Optional[np.ndarray]:
    """
    Fuse embeddings with lazy loading from file paths.

    Task 5.2: Optimized version that loads the embeddings on demand.

    Args:
        embedding_paths: Dict {modality: file_path}
        weights: Custom weights (optional)

    Returns:
        Fused vector, or None on error
    """
    if not embedding_paths:
        logger.warning("No embedding paths provided for lazy fusion")
        return None

    # Load the embeddings lazily
    embeddings = {}
    for modality, path in embedding_paths.items():
        embedding = self.load_embedding_lazy(path)
        if embedding is not None:
            embeddings[modality] = embedding
        else:
            logger.warning(f"Failed to load embedding for modality '{modality}' from {path}")

    if not embeddings:
        logger.error("No embeddings could be loaded for fusion")
        return None

    # Fuse as usual
    return self.fuse(embeddings, weights)


def get_cache_stats(self) -> Dict[str, int]:
    """
    Get the embedding cache statistics.

    Returns:
        Dict with hits, misses, loads, cache_size
    """
    return {
        **self._cache_stats,
        'cache_size': len(self._embedding_cache)
    }


def clear_embedding_cache(self) -> None:
    """
    Empty the embedding cache.

    Useful for freeing memory or forcing a reload.
    """
    cache_size = len(self._embedding_cache)
    self._embedding_cache.clear()
    self._cache_stats['evictions'] += cache_size
    logger.info(f"Cleared embedding cache ({cache_size} entries)")


def preload_embeddings(self, embedding_paths: List[str]) -> int:
    """
    Preload embeddings into the cache.

    Useful for optimizing performance by loading
    frequently used embeddings ahead of time.

    Args:
        embedding_paths: List of paths to preload

    Returns:
        Number of embeddings preloaded successfully
    """
    loaded_count = 0
    for path in embedding_paths:
        if self.load_embedding_lazy(path) is not None:
            loaded_count += 1

    logger.info(f"Preloaded {loaded_count}/{len(embedding_paths)} embeddings")
    return loaded_count
@@ -1,388 +0,0 @@
"""
Similarity - Similarity and Distance Computations

Functions to compute different similarity and distance metrics
between embedding vectors.
"""

import numpy as np
from typing import Union, List


def cosine_similarity(vec1: np.ndarray, vec2: np.ndarray) -> float:
    """
    Compute the cosine similarity between two vectors.

    similarity = (vec1 · vec2) / (||vec1|| * ||vec2||)

    Args:
        vec1: First vector
        vec2: Second vector

    Returns:
        Cosine similarity in [-1, 1]
        1 = identical, 0 = orthogonal, -1 = opposite

    Raises:
        ValueError: If the dimensions do not match
    """
    if vec1.shape != vec2.shape:
        raise ValueError(
            f"Vectors must have same shape: {vec1.shape} vs {vec2.shape}"
        )

    # Dot product
    dot_product = np.dot(vec1, vec2)

    # Norms
    norm1 = np.linalg.norm(vec1)
    norm2 = np.linalg.norm(vec2)

    # Avoid division by zero
    if norm1 == 0 or norm2 == 0:
        return 0.0

    # Cosine similarity
    similarity = dot_product / (norm1 * norm2)

    # Clamp to [-1, 1] to absorb numerical errors
    similarity = np.clip(similarity, -1.0, 1.0)

    return float(similarity)
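The key properties of this definition (1 for a vector with itself, -1 for its negation, and the 0.0 zero-vector convention) can be checked with a standalone re-implementation of the same formula:

```python
import numpy as np

def cosine(v1: np.ndarray, v2: np.ndarray) -> float:
    # Mirrors the function above: dot product over the product of the norms,
    # clipped to [-1, 1] to absorb floating-point error
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    return float(np.clip(np.dot(v1, v2) / (n1 * n2), -1.0, 1.0))

a = np.array([1.0, 2.0, 3.0])
```

Returning 0.0 for a zero vector is a convention, not mathematics; the true cosine is undefined there, and some libraries raise instead.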
def euclidean_distance(vec1: np.ndarray, vec2: np.ndarray) -> float:
    """
    Compute the Euclidean (L2) distance between two vectors.

    distance = ||vec1 - vec2||_2 = sqrt(sum((vec1 - vec2)^2))

    Args:
        vec1: First vector
        vec2: Second vector

    Returns:
        Euclidean distance (>= 0)

    Raises:
        ValueError: If the dimensions do not match
    """
    if vec1.shape != vec2.shape:
        raise ValueError(
            f"Vectors must have same shape: {vec1.shape} vs {vec2.shape}"
        )

    return float(np.linalg.norm(vec1 - vec2))


def manhattan_distance(vec1: np.ndarray, vec2: np.ndarray) -> float:
    """
    Compute the Manhattan (L1) distance between two vectors.

    distance = sum(|vec1 - vec2|)

    Args:
        vec1: First vector
        vec2: Second vector

    Returns:
        Manhattan distance (>= 0)

    Raises:
        ValueError: If the dimensions do not match
    """
    if vec1.shape != vec2.shape:
        raise ValueError(
            f"Vectors must have same shape: {vec1.shape} vs {vec2.shape}"
        )

    return float(np.sum(np.abs(vec1 - vec2)))


def dot_product(vec1: np.ndarray, vec2: np.ndarray) -> float:
    """
    Compute the dot product of two vectors.

    dot = vec1 · vec2 = sum(vec1 * vec2)

    Args:
        vec1: First vector
        vec2: Second vector

    Returns:
        Dot product

    Raises:
        ValueError: If the dimensions do not match
    """
    if vec1.shape != vec2.shape:
        raise ValueError(
            f"Vectors must have same shape: {vec1.shape} vs {vec2.shape}"
        )

    return float(np.dot(vec1, vec2))


def normalize_l2(vector: np.ndarray, epsilon: float = 1e-10) -> np.ndarray:
    """
    Normalize a vector with the L2 norm.

    normalized = vector / ||vector||_2

    Args:
        vector: Vector to normalize
        epsilon: Minimum value to avoid division by zero

    Returns:
        Normalized vector (L2 norm = 1.0)
    """
    norm = np.linalg.norm(vector)
    if norm < epsilon:
        return vector
    return vector / norm


def normalize_l1(vector: np.ndarray, epsilon: float = 1e-10) -> np.ndarray:
    """
    Normalize a vector with the L1 norm.

    normalized = vector / sum(|vector|)

    Args:
        vector: Vector to normalize
        epsilon: Minimum value to avoid division by zero

    Returns:
        Normalized vector (L1 norm = 1.0)
    """
    norm = np.sum(np.abs(vector))
    if norm < epsilon:
        return vector
    return vector / norm


def batch_cosine_similarity(vectors: List[np.ndarray],
                            query: np.ndarray) -> np.ndarray:
    """
    Compute the cosine similarity between a query and a batch of vectors.

    Args:
        vectors: List of vectors
        query: Query vector

    Returns:
        Array of similarities
    """
    # Convert to a matrix
    matrix = np.array(vectors)

    # Normalize
    matrix_norm = matrix / (np.linalg.norm(matrix, axis=1, keepdims=True) + 1e-10)
    query_norm = query / (np.linalg.norm(query) + 1e-10)

    # Matrix product
    similarities = np.dot(matrix_norm, query_norm)

    # Clamp
    similarities = np.clip(similarities, -1.0, 1.0)

    return similarities


def pairwise_cosine_similarity(vectors: List[np.ndarray]) -> np.ndarray:
    """
    Compute the cosine similarity matrix between all the vectors.

    Args:
        vectors: List of vectors

    Returns:
        Similarity matrix (n x n)
    """
    # Convert to a matrix
    matrix = np.array(vectors)

    # Normalize
    matrix_norm = matrix / (np.linalg.norm(matrix, axis=1, keepdims=True) + 1e-10)

    # Matrix product
    similarity_matrix = np.dot(matrix_norm, matrix_norm.T)

    # Clamp
    similarity_matrix = np.clip(similarity_matrix, -1.0, 1.0)

    return similarity_matrix
|
|
||||||
|
|
||||||
|
|
||||||
def angular_distance(vec1: np.ndarray, vec2: np.ndarray) -> float:
|
|
||||||
"""
|
|
||||||
Calculer distance angulaire entre deux vecteurs
|
|
||||||
|
|
||||||
distance = arccos(cosine_similarity) / π
|
|
||||||
|
|
||||||
Args:
|
|
||||||
vec1: Premier vecteur
|
|
||||||
vec2: Deuxième vecteur
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Distance angulaire dans [0, 1]
|
|
||||||
"""
|
|
||||||
similarity = cosine_similarity(vec1, vec2)
|
|
||||||
angle = np.arccos(np.clip(similarity, -1.0, 1.0))
|
|
||||||
return float(angle / np.pi)
|
|
||||||
|
|
||||||
|
|
||||||
def jaccard_similarity(vec1: np.ndarray, vec2: np.ndarray) -> float:
|
|
||||||
"""
|
|
||||||
Calculer similarité de Jaccard pour vecteurs binaires
|
|
||||||
|
|
||||||
similarity = |intersection| / |union|
|
|
||||||
|
|
||||||
Args:
|
|
||||||
vec1: Premier vecteur binaire
|
|
||||||
vec2: Deuxième vecteur binaire
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Similarité de Jaccard dans [0, 1]
|
|
||||||
"""
|
|
||||||
if vec1.shape != vec2.shape:
|
|
||||||
raise ValueError(
|
|
||||||
f"Vectors must have same shape: {vec1.shape} vs {vec2.shape}"
|
|
||||||
)
|
|
||||||
|
|
||||||
intersection = np.sum(np.logical_and(vec1, vec2))
|
|
||||||
union = np.sum(np.logical_or(vec1, vec2))
|
|
||||||
|
|
||||||
if union == 0:
|
|
||||||
return 0.0
|
|
||||||
|
|
||||||
return float(intersection / union)
|
|
||||||
|
|
||||||
|
|
||||||
def hamming_distance(vec1: np.ndarray, vec2: np.ndarray) -> float:
|
|
||||||
"""
|
|
||||||
Calculer distance de Hamming pour vecteurs binaires
|
|
||||||
|
|
||||||
distance = nombre de positions différentes
|
|
||||||
|
|
||||||
Args:
|
|
||||||
vec1: Premier vecteur binaire
|
|
||||||
vec2: Deuxième vecteur binaire
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Distance de Hamming
|
|
||||||
"""
|
|
||||||
if vec1.shape != vec2.shape:
|
|
||||||
raise ValueError(
|
|
||||||
f"Vectors must have same shape: {vec1.shape} vs {vec2.shape}"
|
|
||||||
)
|
|
||||||
|
|
||||||
return float(np.sum(vec1 != vec2))
|
|
||||||
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Fonctions de conversion
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
def similarity_to_distance(similarity: float,
|
|
||||||
method: str = "cosine") -> float:
|
|
||||||
"""
|
|
||||||
Convertir similarité en distance
|
|
||||||
|
|
||||||
Args:
|
|
||||||
similarity: Valeur de similarité
|
|
||||||
method: Méthode ("cosine", "angular")
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Distance correspondante
|
|
||||||
"""
|
|
||||||
if method == "cosine":
|
|
||||||
# distance = 1 - similarity (pour cosine dans [0, 1])
|
|
||||||
return 1.0 - similarity
|
|
||||||
elif method == "angular":
|
|
||||||
# distance angulaire
|
|
||||||
angle = np.arccos(np.clip(similarity, -1.0, 1.0))
|
|
||||||
return float(angle / np.pi)
|
|
||||||
else:
|
|
||||||
raise ValueError(f"Unknown method: {method}")
|
|
||||||
|
|
||||||
|
|
||||||
def distance_to_similarity(distance: float,
|
|
||||||
method: str = "euclidean") -> float:
|
|
||||||
"""
|
|
||||||
Convertir distance en similarité
|
|
||||||
|
|
||||||
Args:
|
|
||||||
distance: Valeur de distance
|
|
||||||
method: Méthode ("euclidean", "manhattan")
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Similarité correspondante dans [0, 1]
|
|
||||||
"""
|
|
||||||
if method in ["euclidean", "manhattan"]:
|
|
||||||
# similarity = 1 / (1 + distance)
|
|
||||||
return 1.0 / (1.0 + distance)
|
|
||||||
else:
|
|
||||||
raise ValueError(f"Unknown method: {method}")
|
|
||||||
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Fonctions utilitaires
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
def is_normalized(vector: np.ndarray,
|
|
||||||
norm_type: str = "l2",
|
|
||||||
tolerance: float = 1e-6) -> bool:
|
|
||||||
"""
|
|
||||||
Vérifier si un vecteur est normalisé
|
|
||||||
|
|
||||||
Args:
|
|
||||||
vector: Vecteur à vérifier
|
|
||||||
norm_type: Type de norme ("l2" ou "l1")
|
|
||||||
tolerance: Tolérance pour la vérification
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
True si normalisé, False sinon
|
|
||||||
"""
|
|
||||||
if norm_type == "l2":
|
|
||||||
norm = np.linalg.norm(vector)
|
|
||||||
elif norm_type == "l1":
|
|
||||||
norm = np.sum(np.abs(vector))
|
|
||||||
else:
|
|
||||||
raise ValueError(f"Unknown norm type: {norm_type}")
|
|
||||||
|
|
||||||
return abs(norm - 1.0) < tolerance
|
|
||||||
|
|
||||||
|
|
||||||
def compute_centroid(vectors: List[np.ndarray]) -> np.ndarray:
|
|
||||||
"""
|
|
||||||
Calculer le centroïde (moyenne) d'un ensemble de vecteurs
|
|
||||||
|
|
||||||
Args:
|
|
||||||
vectors: Liste de vecteurs
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Vecteur centroïde
|
|
||||||
"""
|
|
||||||
if not vectors:
|
|
||||||
raise ValueError("Cannot compute centroid of empty list")
|
|
||||||
|
|
||||||
matrix = np.array(vectors)
|
|
||||||
return np.mean(matrix, axis=0)
|
|
||||||
|
|
||||||
|
|
||||||
def compute_variance(vectors: List[np.ndarray]) -> float:
|
|
||||||
"""
|
|
||||||
Calculer la variance d'un ensemble de vecteurs
|
|
||||||
|
|
||||||
Args:
|
|
||||||
vectors: Liste de vecteurs
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Variance totale
|
|
||||||
"""
|
|
||||||
if not vectors:
|
|
||||||
raise ValueError("Cannot compute variance of empty list")
|
|
||||||
|
|
||||||
matrix = np.array(vectors)
|
|
||||||
return float(np.var(matrix))
|
|
||||||
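# The batch routines above reduce to normalize-then-matrix-multiply. A
# self-contained numpy sketch of the same math (standalone, not importing
# this module), with the same 1e-10 epsilon guard:

import numpy as np

# Three 4-dimensional vectors and a query
vectors = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [1.0, 1.0, 0.0, 0.0]])
query = np.array([1.0, 0.0, 0.0, 0.0])

# L2-normalize rows and query, then a single matrix-vector product
matrix_norm = vectors / (np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-10)
query_norm = query / (np.linalg.norm(query) + 1e-10)
sims = np.clip(matrix_norm @ query_norm, -1.0, 1.0)

# Identical vector -> 1.0, orthogonal -> 0.0, 45-degree vector -> ~0.7071
print(sims.round(4))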
@@ -1,395 +0,0 @@
"""
StateEmbeddingBuilder - Building Complete State Embeddings

Builds State Embeddings by fusing the embeddings of all modalities
(image, text, title, UI) from a ScreenState.

Uses OpenCLIP to generate real embeddings instead of random vectors.
"""

from typing import Dict, Optional, Any
from pathlib import Path
import logging
import numpy as np
from datetime import datetime
from PIL import Image

logger = logging.getLogger(__name__)

from ..models.screen_state import ScreenState
from ..models.state_embedding import StateEmbedding, EmbeddingComponent
from .fusion_engine import FusionEngine, FusionConfig
from .clip_embedder import CLIPEmbedder


class StateEmbeddingBuilder:
    """
    State Embedding builder

    Takes a ScreenState and generates a complete State Embedding by:
    1. Computing the embeddings for each modality (image, text, title, UI)
    2. Fusing those embeddings with the FusionEngine
    3. Saving the result
    """

    def __init__(self,
                 fusion_engine: Optional[FusionEngine] = None,
                 embedders: Optional[Dict[str, Any]] = None,
                 output_dir: Optional[Path] = None,
                 use_clip: bool = True):
        """
        Initialize the builder

        Args:
            fusion_engine: Fusion engine (a default one is created if None)
            embedders: Dict of embedders for each modality
                       {"image": ImageEmbedder, "text": TextEmbedder, ...}
            output_dir: Output directory for the vectors
            use_clip: If True, use OpenCLIP for the embeddings (recommended)
        """
        self.fusion_engine = fusion_engine or FusionEngine()
        self.output_dir = output_dir or Path("data/embeddings")
        self.output_dir.mkdir(parents=True, exist_ok=True)

        # Initialize OpenCLIP if requested
        self.clip_embedder = None
        if use_clip:
            try:
                logger.info("Initializing OpenCLIP for embeddings...")
                self.clip_embedder = CLIPEmbedder()
                logger.info("✓ OpenCLIP initialized")
            except Exception as e:
                logger.warning(f"Could not initialize OpenCLIP: {e}")
                logger.info("Using the provided embedders or default vectors")

        # Use the provided embedders, or create them with CLIP
        if embedders:
            self.embedders = embedders
        elif self.clip_embedder:
            # Use CLIP for every modality
            self.embedders = {
                "image": self.clip_embedder,
                "text": self.clip_embedder,
                "title": self.clip_embedder,
                "ui": self.clip_embedder
            }
        else:
            self.embedders = {}

    def build(self,
              screen_state: ScreenState,
              embedding_id: Optional[str] = None,
              compute_embeddings: bool = True) -> StateEmbedding:
        """
        Build a State Embedding from a ScreenState

        Args:
            screen_state: Screen state to embed
            embedding_id: Unique ID (generated if None)
            compute_embeddings: If False, use precomputed embeddings

        Returns:
            Complete StateEmbedding with the fused vector
        """
        # Generate an ID if needed
        if embedding_id is None:
            embedding_id = self._generate_embedding_id(screen_state)

        # Compute or fetch the embeddings for each modality
        if compute_embeddings:
            embeddings = self._compute_all_embeddings(screen_state)
        else:
            embeddings = self._load_precomputed_embeddings(screen_state)

        # Save path for the fused vector
        vector_path = self.output_dir / f"{embedding_id}.npy"

        # Create the State Embedding with fusion
        state_embedding = self.fusion_engine.create_state_embedding(
            embedding_id=embedding_id,
            embeddings=embeddings,
            vector_save_path=str(vector_path),
            metadata={
                "screen_state_id": screen_state.screen_state_id,
                "timestamp": screen_state.timestamp.isoformat(),
                "window_title": getattr(screen_state.window, 'title', ''),
                "created_at": datetime.now().isoformat()
            }
        )

        # Save the metadata
        metadata_path = self.output_dir / f"{embedding_id}_metadata.json"
        state_embedding.save_to_file(metadata_path)

        return state_embedding

    def _compute_all_embeddings(self,
                                screen_state: ScreenState) -> Dict[str, np.ndarray]:
        """
        Compute the embeddings for all modalities

        Args:
            screen_state: Screen state

        Returns:
            Dict {modality: vector}
        """
        embeddings = {}

        # Image embedding (full screenshot)
        if "image" in self.embedders and hasattr(screen_state, 'raw'):
            image_emb = self._compute_image_embedding(screen_state)
            if image_emb is not None:
                embeddings["image"] = image_emb

        # Text embedding (detected text)
        if "text" in self.embedders and hasattr(screen_state, 'perception'):
            text_emb = self._compute_text_embedding(screen_state)
            if text_emb is not None:
                embeddings["text"] = text_emb

        # Title embedding (window title)
        if "title" in self.embedders and hasattr(screen_state, 'window'):
            title_emb = self._compute_title_embedding(screen_state)
            if title_emb is not None:
                embeddings["title"] = title_emb

        # UI embedding (UI elements)
        if "ui" in self.embedders and hasattr(screen_state, 'ui_elements'):
            ui_emb = self._compute_ui_embedding(screen_state)
            if ui_emb is not None:
                embeddings["ui"] = ui_emb

        # If no embedding was computed, create default vectors
        if not embeddings:
            # Use the default dimension (512)
            default_dim = 512
            embeddings = {
                "image": np.random.randn(default_dim).astype(np.float32),
                "text": np.random.randn(default_dim).astype(np.float32),
                "title": np.random.randn(default_dim).astype(np.float32),
                "ui": np.random.randn(default_dim).astype(np.float32)
            }

        return embeddings

    def _compute_image_embedding(self, screen_state: ScreenState) -> Optional[np.ndarray]:
        """Compute the image (screenshot) embedding with OpenCLIP"""
        if "image" not in self.embedders:
            return None

        try:
            embedder = self.embedders["image"]
            screenshot_path = screen_state.raw.screenshot_path

            # Load the image
            image = Image.open(screenshot_path)

            # Use OpenCLIP if available
            if isinstance(embedder, CLIPEmbedder):
                return embedder.embed_image(image)

            # Otherwise, try the standard methods
            if hasattr(embedder, 'embed_image'):
                return embedder.embed_image(screenshot_path)
            elif hasattr(embedder, 'encode_image'):
                return embedder.encode_image(screenshot_path)
            elif callable(embedder):
                return embedder(screenshot_path)
        except Exception as e:
            logger.warning(f"Failed to compute image embedding: {e}")
            logger.debug("Traceback:", exc_info=True)

        return None

    def _compute_text_embedding(self, screen_state: ScreenState) -> Optional[np.ndarray]:
        """Compute the embedding of the detected text with OpenCLIP"""
        if "text" not in self.embedders:
            return None

        try:
            embedder = self.embedders["text"]

            # Concatenate all detected texts
            texts = []
            if hasattr(screen_state.perception, 'detected_texts'):
                texts = screen_state.perception.detected_texts

            combined_text = " ".join(texts) if texts else ""

            if not combined_text:
                return None

            # Use OpenCLIP if available
            if isinstance(embedder, CLIPEmbedder):
                return embedder.embed_text(combined_text)

            # Otherwise, try the standard methods
            if hasattr(embedder, 'embed_text'):
                return embedder.embed_text(combined_text)
            elif hasattr(embedder, 'encode_text'):
                return embedder.encode_text(combined_text)
            elif callable(embedder):
                return embedder(combined_text)
        except Exception as e:
            logger.warning(f"Failed to compute text embedding: {e}")

        return None

    def _compute_title_embedding(self, screen_state: ScreenState) -> Optional[np.ndarray]:
        """Compute the window-title embedding with OpenCLIP"""
        if "title" not in self.embedders:
            return None

        try:
            embedder = self.embedders["title"]
            title = getattr(screen_state.window, 'title', '')

            if not title:
                return None

            # Use OpenCLIP if available
            if isinstance(embedder, CLIPEmbedder):
                return embedder.embed_text(title)

            # Otherwise, try the standard methods
            if hasattr(embedder, 'embed_text'):
                return embedder.embed_text(title)
            elif hasattr(embedder, 'encode_text'):
                return embedder.encode_text(title)
            elif callable(embedder):
                return embedder(title)
        except Exception as e:
            logger.warning(f"Failed to compute title embedding: {e}")

        return None

    def _compute_ui_embedding(self, screen_state: ScreenState) -> Optional[np.ndarray]:
        """Compute the mean embedding of the UI elements"""
        if "ui" not in self.embedders:
            return None

        try:
            embedder = self.embedders["ui"]
            ui_elements = screen_state.ui_elements

            if not ui_elements:
                return None

            # Compute an embedding for each UI element
            ui_embeddings = []
            for element in ui_elements:
                # Use the element's image embedding if available
                if hasattr(element, 'embeddings') and element.embeddings:
                    if hasattr(element.embeddings, 'image_embedding_id'):
                        # Load the precomputed embedding
                        emb_path = Path(element.embeddings.image_embedding_id)
                        if emb_path.exists():
                            ui_embeddings.append(np.load(emb_path))

            # If there are no precomputed embeddings, compute from the labels
            if not ui_embeddings:
                for element in ui_elements:
                    label = getattr(element, 'label', '')
                    if label and hasattr(embedder, 'embed_text'):
                        ui_embeddings.append(embedder.embed_text(label))

            # Mean of the UI embeddings
            if ui_embeddings:
                return np.mean(ui_embeddings, axis=0)
        except Exception as e:
            logger.warning(f"Failed to compute UI embedding: {e}")

        return None

    def _load_precomputed_embeddings(self,
                                     screen_state: ScreenState) -> Dict[str, np.ndarray]:
        """Load precomputed embeddings"""
        # TODO: Implement loading from a cache
        # For now, compute on the fly
        return self._compute_all_embeddings(screen_state)

    def _generate_embedding_id(self, screen_state: ScreenState) -> str:
        """Generate a unique ID for the embedding"""
        timestamp = screen_state.timestamp.strftime("%Y%m%d_%H%M%S_%f")
        return f"state_emb_{screen_state.screen_state_id}_{timestamp}"

    def batch_build(self,
                    screen_states: list[ScreenState],
                    compute_embeddings: bool = True) -> list[StateEmbedding]:
        """
        Build several State Embeddings in batch

        Args:
            screen_states: List of ScreenStates
            compute_embeddings: If False, use precomputed embeddings

        Returns:
            List of StateEmbeddings
        """
        return [
            self.build(state, compute_embeddings=compute_embeddings)
            for state in screen_states
        ]

    def set_embedder(self, modality: str, embedder: Any) -> None:
        """
        Set the embedder for a modality

        Args:
            modality: Modality name ("image", "text", "title", "ui")
            embedder: Embedder to use
        """
        self.embedders[modality] = embedder

    def get_embedder(self, modality: str) -> Optional[Any]:
        """Get the embedder for a modality"""
        return self.embedders.get(modality)

    def set_output_dir(self, output_dir: Path) -> None:
        """Set the output directory"""
        self.output_dir = output_dir
        self.output_dir.mkdir(parents=True, exist_ok=True)


# ============================================================================
# Utility functions
# ============================================================================

def create_builder(embedders: Optional[Dict[str, Any]] = None,
                   output_dir: Optional[Path] = None,
                   use_clip: bool = True) -> StateEmbeddingBuilder:
    """
    Create a StateEmbeddingBuilder with the default configuration

    Args:
        embedders: Optional dict of embedders
        output_dir: Optional output directory
        use_clip: If True, use OpenCLIP (recommended)

    Returns:
        StateEmbeddingBuilder configured with OpenCLIP
    """
    return StateEmbeddingBuilder(
        embedders=embedders,
        output_dir=output_dir,
        use_clip=use_clip
    )


def build_from_screen_state(screen_state: ScreenState,
                            embedders: Dict[str, Any],
                            output_dir: Path) -> StateEmbedding:
    """
    Helper to quickly build a State Embedding

    Args:
        screen_state: Screen state
        embedders: Dict of embedders
        output_dir: Output directory

    Returns:
        StateEmbedding
    """
    builder = StateEmbeddingBuilder(embedders=embedders, output_dir=output_dir)
    return builder.build(screen_state)
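# The fusion step itself is delegated to FusionEngine.create_state_embedding
# and is not shown in this file. As an illustration only — an assumption
# about one common fusion scheme, NOT the engine's actual implementation —
# a weighted average of L2-normalized modality vectors looks like this:

from typing import Dict, Optional

import numpy as np


def fuse_embeddings(embeddings: Dict[str, np.ndarray],
                    weights: Optional[Dict[str, float]] = None) -> np.ndarray:
    """Illustrative fusion: weighted mean of unit vectors, re-normalized."""
    names = sorted(embeddings)
    weights = weights or {name: 1.0 for name in names}
    acc = np.zeros_like(next(iter(embeddings.values())), dtype=np.float64)
    total = 0.0
    for name in names:
        vec = embeddings[name].astype(np.float64)
        vec = vec / (np.linalg.norm(vec) + 1e-10)  # unit length per modality
        acc += weights.get(name, 1.0) * vec
        total += weights.get(name, 1.0)
    fused = acc / total
    return fused / (np.linalg.norm(fused) + 1e-10)  # unit-length result


fused = fuse_embeddings({
    "image": np.ones(4, dtype=np.float32),
    "text": np.array([1.0, 0.0, 0.0, 0.0]),
})
print(fused.shape)

# Normalizing per modality before averaging keeps a large-magnitude
# modality (e.g. a raw image vector) from dominating the fused state.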
@@ -1,146 +0,0 @@
# Screen Capture and Visual Embedding Implementation - VWB

**Authors: Dom, Alice, Kiro - January 9, 2026**

## Summary

This document describes the implementation of the screen-capture and visual-embedding endpoints for the Visual Workflow Builder (VWB).

## Implemented Features

### 1. Endpoint `/api/screen-capture` (POST)

Captures the current screen and returns the image as base64.

**Request Body (optional):**
```json
{
  "format": "png",
  "quality": 90
}
```

**Response:**
```json
{
  "success": true,
  "screenshot": "base64_encoded_image...",
  "width": 1920,
  "height": 1080,
  "timestamp": "2026-01-09T13:41:18.123456"
}
```

### 2. Endpoint `/api/visual-embedding` (POST)

Creates a visual embedding from a screenshot and a selected region.

**Request Body:**
```json
{
  "screenshot": "base64_encoded_image...",
  "boundingBox": {
    "x": 100,
    "y": 200,
    "width": 150,
    "height": 50
  },
  "stepId": "step_123"
}
```

**Response:**
```json
{
  "success": true,
  "embedding": [0.1, 0.2, ...],
  "embedding_id": "emb_step_123_20260109_134118",
  "dimension": 512,
  "reference_image": "emb_step_123_..._ref.png",
  "bounding_box": {
    "x": 100,
    "y": 200,
    "width": 150,
    "height": 50
  }
}
```

### 3. Endpoint `/api/visual-embedding/<embedding_id>` (GET)

Retrieves an existing embedding by its ID.

### 4. Endpoint `/api/visual-embedding/<embedding_id>/image` (GET)

Retrieves the reference image of an embedding.
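A minimal client-side sketch of building a schema-conformant `/api/visual-embedding` request body. The base URL is an assumption (the startup script in this commit uses port 5002), and `make_embed_request` is a hypothetical helper for illustration, not part of the backend:

```python
import json

# Hypothetical base URL; the actual host/port depend on the deployment.
BASE_URL = "http://localhost:5002"


def make_embed_request(screenshot_b64: str, bbox: dict, step_id: str) -> str:
    """Serialize a /api/visual-embedding request body per the schema above."""
    required = {"x", "y", "width", "height"}
    if set(bbox) != required:
        raise ValueError(f"boundingBox must contain exactly {required}")
    return json.dumps({
        "screenshot": screenshot_b64,
        "boundingBox": bbox,
        "stepId": step_id,
    })


body = make_embed_request("base64_encoded_image...",
                          {"x": 100, "y": 200, "width": 150, "height": 50},
                          "step_123")
print(json.loads(body)["stepId"])  # step_123
```

The serialized `body` is what an HTTP client would POST to `{BASE_URL}/api/visual-embedding`.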
## Technical Architecture

### Services Used

1. **ScreenCapturer** (`core/capture/screen_capturer.py`)
   - Screen capture via `mss` or `pyautogui`
   - Multi-monitor support
   - Circular buffer for history

2. **CLIPEmbedder** (`core/embedding/clip_embedder.py`)
   - OpenAI ViT-B/32 model
   - 512-dimensional embeddings
   - Runs on CPU to save GPU memory

### Data Storage

The embeddings and reference images are stored in:
```
data/visual_embeddings/
├── emb_step_xxx_YYYYMMDD_HHMMSS.npy      # Numpy embedding
└── emb_step_xxx_YYYYMMDD_HHMMSS_ref.png  # Reference image
```

## Frontend Integration

The `VisualSelector` component (`visual_workflow_builder/frontend/src/components/VisualSelector/index.tsx`) uses these endpoints to:

1. **Step 1 - Capture**: call `/api/screen-capture`
2. **Step 2 - Selection**: canvas interface for selecting a region
3. **Step 3 - Confirmation**: call `/api/visual-embedding` to create the embedding

## Tests

The tests are available in:
- `tests/integration/test_vwb_screen_capture_api.py`

### Running the Tests

```bash
python3 -c "
import sys
sys.path.insert(0, '.')
sys.path.insert(0, 'visual_workflow_builder/backend')
from app_lightweight import capture_screen_to_base64, create_visual_embedding

# Test capture
result = capture_screen_to_base64()
print(f'Capture: {result[\"success\"]}')

# Test embedding
if result['success']:
    bbox = {'x': 100, 'y': 100, 'width': 200, 'height': 100}
    emb = create_visual_embedding(result['screenshot'], bbox, 'test')
    print(f'Embedding: {emb[\"success\"]}')
"
```

## Validation Results

- ✅ Working screen capture (1920x1080)
- ✅ CLIP embedding creation (dimension 512)
- ✅ Embeddings saved as .npy files
- ✅ Reference images saved as PNG
- ✅ Integration with the VisualSelector frontend

## Next Steps

1. Integration tests with the frontend under real conditions
2. Optimization of the CLIP model load time
3. Similarity search over existing embeddings
@@ -1,70 +0,0 @@
#!/usr/bin/env python3
"""
Startup script for the VWB backend with a virtual environment.

Authors: Dom, Alice, Kiro - January 9, 2026

This script starts the VWB backend while making sure the virtual environment
is correctly configured for the screen-capture dependencies.
"""

import os
import sys
import subprocess
from pathlib import Path

def main():
    """Start the VWB backend with the virtual environment."""
    print("🚀 Starting the VWB backend with the virtual environment...")

    # Root directory
    root_dir = Path(__file__).parent.parent

    # Path to the virtual environment
    venv_dir = root_dir / "venv_v3"
    venv_python = venv_dir / "bin" / "python3"

    # Backend script
    backend_script = root_dir / "visual_workflow_builder" / "backend" / "app_lightweight.py"

    # Sanity checks
    if not venv_dir.exists():
        print("❌ Virtual environment not found in venv_v3/")
        return False

    if not venv_python.exists():
        print("❌ Virtual environment Python not found")
        return False

    if not backend_script.exists():
        print("❌ Backend script not found")
        return False

    # Environment variables
    env = os.environ.copy()
    env['PYTHONPATH'] = str(root_dir)
    env['PORT'] = '5002'

    print(f"🐍 Python: {venv_python}")
    print(f"📁 Script: {backend_script}")
    print("🌐 Port: 5002")
    print("")

    try:
        # Start the server
        subprocess.run([
            str(venv_python),
            str(backend_script)
        ], env=env, cwd=str(root_dir))

    except KeyboardInterrupt:
        print("\n🛑 Server stopped")
    except Exception as e:
        print(f"❌ Error: {e}")
        return False

    return True

if __name__ == '__main__':
    success = main()
    sys.exit(0 if success else 1)
@@ -1,112 +0,0 @@
#!/usr/bin/env python3
"""
Simple test of the VWB backend using the virtual environment.

Author: Dom, Alice, Kiro - January 9, 2026

This test verifies that the VWB backend works correctly with the virtual environment.
"""

import sys
import subprocess
import time
from pathlib import Path

# Add the root directory to the path
ROOT_DIR = Path(__file__).parent.parent.parent
sys.path.insert(0, str(ROOT_DIR))


def test_backend_direct():
    """Tests the backend directly with the virtual environment."""
    print("🔍 Direct test of the VWB backend...")

    # Use the virtual environment
    venv_python = ROOT_DIR / "venv_v3" / "bin" / "python3"

    if not venv_python.exists():
        print("❌ Virtual environment not found")
        return False

    # Test the backend functions directly
    test_script = f'''
import sys
from pathlib import Path
ROOT_DIR = Path("{ROOT_DIR}")
sys.path.insert(0, str(ROOT_DIR))
sys.path.insert(0, str(ROOT_DIR / "visual_workflow_builder" / "backend"))

try:
    from app_lightweight import capture_screen_to_base64, create_visual_embedding

    print("🔄 Testing screen capture...")
    result = capture_screen_to_base64()

    if result['success']:
        print(f"✅ Capture succeeded - {{result['width']}}x{{result['height']}}")

        # Embedding test
        print("🔄 Testing embedding...")
        bounding_box = {{'x': 100, 'y': 100, 'width': 200, 'height': 150}}

        embedding_result = create_visual_embedding(
            result['screenshot'],
            bounding_box,
            'test_backend_simple'
        )

        if embedding_result['success']:
            print(f"✅ Embedding created - ID: {{embedding_result['embedding_id']}}")
            print("✅ BACKEND WORKS CORRECTLY")
        else:
            print(f"❌ Embedding error: {{embedding_result['error']}}")
    else:
        print(f"❌ Capture error: {{result['error']}}")

except Exception as e:
    print(f"❌ Error: {{e}}")
    import traceback
    traceback.print_exc()
'''

    try:
        # Run the test with the virtual environment
        result = subprocess.run(
            [str(venv_python), "-c", test_script],
            capture_output=True,
            text=True,
            cwd=str(ROOT_DIR)
        )

        print("Test output:")
        print(result.stdout)

        if result.stderr:
            print("Errors:")
            print(result.stderr)

        return "BACKEND WORKS CORRECTLY" in result.stdout

    except Exception as e:
        print(f"❌ Error during test: {e}")
        return False


def main():
    """Main test function."""
    print("=" * 60)
    print("  SIMPLE VWB BACKEND TEST")
    print("=" * 60)
    print("Author: Dom, Alice, Kiro - January 9, 2026")
    print("")

    success = test_backend_direct()

    if success:
        print("\n✅ The VWB backend works correctly!")
    else:
        print("\n❌ The VWB backend does not work correctly")

    return success


if __name__ == '__main__':
    success = main()
    sys.exit(0 if success else 1)
@@ -1,297 +0,0 @@
#!/usr/bin/env python3
"""
Test of target element capture for the Visual Workflow Builder.

Author: Dom, Alice, Kiro - January 9, 2026

This test verifies that the target element capture system works correctly
by exercising the /api/screen-capture and /api/visual-embedding endpoints.
"""

import sys
import os
import time
import requests
import json
import subprocess
from pathlib import Path

# Add the root directory to the path
ROOT_DIR = Path(__file__).parent.parent.parent
sys.path.insert(0, str(ROOT_DIR))


def start_backend_server():
    """Starts the VWB backend server with the virtual environment."""
    print("🚀 Starting the VWB backend server...")

    # Use the virtual environment
    venv_python = ROOT_DIR / "venv_v3" / "bin" / "python3"
    backend_script = ROOT_DIR / "visual_workflow_builder" / "backend" / "app_lightweight.py"

    if not venv_python.exists():
        print("❌ Virtual environment not found")
        return None

    if not backend_script.exists():
        print("❌ Backend script not found")
        return None

    # Environment variables for the server
    env = os.environ.copy()
    env['PYTHONPATH'] = str(ROOT_DIR)
    env['PORT'] = '5002'

    print(f"🐍 Using: {venv_python}")
    print(f"📁 Script: {backend_script}")

    # Start the server in the background with the virtual environment
    process = subprocess.Popen(
        [str(venv_python), str(backend_script)],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        cwd=str(ROOT_DIR),
        env=env
    )

    # Wait for the server to start
    print("⏳ Waiting for the server to start...")
    time.sleep(10)  # Extra time for CLIP initialization

    return process


def test_health_endpoint():
    """Tests the health endpoint."""
    print("\n🔍 Testing the health endpoint...")

    try:
        response = requests.get("http://localhost:5002/health", timeout=5)
        if response.status_code == 200:
            data = response.json()
            print(f"✅ Server healthy - Version: {data.get('version', 'unknown')}")

            # Check the available features
            features = data.get('features', {})
            if features.get('screen_capture'):
                print("✅ Screen capture available")
            else:
                print("⚠️ Screen capture not available")

            if features.get('visual_embedding'):
                print("✅ Visual embedding available")
            else:
                print("⚠️ Visual embedding not available")

            return True
        else:
            print(f"❌ Health check error: {response.status_code}")
            return False
    except Exception as e:
        print(f"❌ Server connection error: {e}")
        return False


def test_screen_capture_endpoint():
    """Tests the screen capture endpoint."""
    print("\n📷 Testing the screen capture endpoint...")

    try:
        response = requests.post(
            "http://localhost:5002/api/screen-capture",
            json={"format": "png", "quality": 90},
            timeout=15
        )

        if response.status_code == 200:
            data = response.json()
            if data.get('success'):
                print(f"✅ Capture succeeded - {data['width']}x{data['height']}")
                print(f"📊 Base64 size: {len(data['screenshot'])} characters")
                print(f"⏰ Timestamp: {data.get('timestamp', 'N/A')}")
                return data['screenshot']
            else:
                print(f"❌ Capture error: {data.get('error', 'unknown')}")
                return None
        else:
            print(f"❌ HTTP error: {response.status_code}")
            print(f"Response: {response.text}")
            return None

    except Exception as e:
        print(f"❌ Error during capture: {e}")
        return None


def test_visual_embedding_endpoint(screenshot_base64):
    """Tests the visual embedding creation endpoint."""
    print("\n🎯 Testing the visual embedding endpoint...")

    if not screenshot_base64:
        print("❌ No screen capture available")
        return False

    try:
        # Test zone at the center of the screen
        bounding_box = {
            "x": 500,
            "y": 300,
            "width": 200,
            "height": 150
        }

        payload = {
            "screenshot": screenshot_base64,
            "boundingBox": bounding_box,
            "stepId": "test_capture_element_cible"
        }

        response = requests.post(
            "http://localhost:5002/api/visual-embedding",
            json=payload,
            timeout=20  # Extra time for CLIP
        )

        if response.status_code == 200:
            data = response.json()
            if data.get('success'):
                print(f"✅ Embedding created - ID: {data['embedding_id']}")
                print(f"📐 Dimension: {data['dimension']}")
                print(f"🖼️ Reference image: {data['reference_image']}")
                print(f"📦 Processed zone: {data['bounding_box']}")

                # Check that the files were created
                embeddings_dir = ROOT_DIR / "data" / "visual_embeddings"
                embedding_file = embeddings_dir / f"{data['embedding_id']}.npy"
                reference_file = embeddings_dir / f"{data['embedding_id']}_ref.png"

                if embedding_file.exists() and reference_file.exists():
                    print("✅ Files saved correctly")
                    print(f"  - Embedding: {embedding_file}")
                    print(f"  - Reference: {reference_file}")
                    return True
                else:
                    print("❌ Files not created")
                    return False
            else:
                print(f"❌ Embedding error: {data.get('error', 'unknown')}")
                return False
        else:
            print(f"❌ HTTP error: {response.status_code}")
            print(f"Response: {response.text}")
            return False

    except Exception as e:
        print(f"❌ Error during embedding: {e}")
        return False


def test_frontend_integration():
    """Tests the integration with the frontend."""
    print("\n🌐 Testing frontend integration...")

    # Check that the VisualSelector component exists
    visual_selector_path = ROOT_DIR / "visual_workflow_builder" / "frontend" / "src" / "components" / "VisualSelector" / "index.tsx"

    if visual_selector_path.exists():
        print("✅ VisualSelector component found")

        # Read the content to check the endpoints
        content = visual_selector_path.read_text()

        if "/api/screen-capture" in content and "/api/visual-embedding" in content:
            print("✅ API endpoints correctly referenced in the frontend")

            # Check the TypeScript types
            types_path = ROOT_DIR / "visual_workflow_builder" / "frontend" / "src" / "types" / "index.ts"
            if types_path.exists():
                types_content = types_path.read_text()
                if "VisualSelection" in types_content and "BoundingBox" in types_content:
                    print("✅ TypeScript types defined correctly")
                    return True
                else:
                    print("⚠️ TypeScript types missing")
                    return False
            else:
                print("⚠️ Types file not found")
                return False
        else:
            print("❌ API endpoints missing in the frontend")
            return False
    else:
        print("❌ VisualSelector component not found")
        return False


def test_canvas_integration():
    """Tests the integration with the canvas."""
    print("\n🎨 Testing canvas integration...")

    # Check that the canvas can display the image
    canvas_path = ROOT_DIR / "visual_workflow_builder" / "frontend" / "src" / "components" / "Canvas"

    if canvas_path.exists():
        print("✅ Canvas directory found")

        # Check the canvas files
        step_node_path = canvas_path / "StepNode.tsx"
        if step_node_path.exists():
            print("✅ StepNode component found")
            return True
        else:
            print("⚠️ StepNode component not found")
            return False
    else:
        print("❌ Canvas directory not found")
        return False


def main():
    """Main test function."""
    print("=" * 60)
    print("  TARGET ELEMENT CAPTURE TEST - VWB")
    print("=" * 60)
    print("Author: Dom, Alice, Kiro - January 9, 2026")
    print("")

    # Start the backend server
    server_process = start_backend_server()

    if not server_process:
        print("❌ Unable to start the backend server")
        return False

    try:
        # Test 1: Health check
        if not test_health_endpoint():
            return False

        # Test 2: Screen capture
        screenshot = test_screen_capture_endpoint()
        if not screenshot:
            return False

        # Test 3: Visual embedding
        if not test_visual_embedding_endpoint(screenshot):
            return False

        # Test 4: Frontend integration
        if not test_frontend_integration():
            return False

        # Test 5: Canvas integration
        if not test_canvas_integration():
            return False

        print("\n" + "=" * 60)
        print("🎉 ALL TESTS PASSED SUCCESSFULLY!")
        print("✅ Target element capture works correctly")
        print("✅ Backend and frontend integrated")
        print("✅ Embedding files saved")
        print("=" * 60)

        return True

    finally:
        # Stop the server
        if server_process:
            print("\n🛑 Stopping the backend server...")
            server_process.terminate()
            server_process.wait()


if __name__ == '__main__':
    success = main()
    sys.exit(0 if success else 1)
@@ -1,154 +0,0 @@
#!/usr/bin/env python3
"""
Debug test of the VWB backend to identify the capture problem.

Author: Dom, Alice, Kiro - January 9, 2026

This test examines the server logs to identify why the capture fails.
"""

import sys
import os
import time
import requests
import subprocess
from pathlib import Path

# Add the root directory to the path
ROOT_DIR = Path(__file__).parent.parent.parent
sys.path.insert(0, str(ROOT_DIR))


def start_backend_server_debug():
    """Starts the VWB backend server in debug mode."""
    print("🚀 Starting the VWB backend server in debug mode...")

    # Use the virtual environment
    venv_python = ROOT_DIR / "venv_v3" / "bin" / "python3"
    backend_script = ROOT_DIR / "visual_workflow_builder" / "backend" / "app_lightweight.py"

    # Environment variables for the server
    env = os.environ.copy()
    env['PYTHONPATH'] = str(ROOT_DIR)
    env['PORT'] = '5002'

    print(f"🐍 Using: {venv_python}")
    print(f"📁 Script: {backend_script}")

    # Start the server in interactive mode to see the logs
    process = subprocess.Popen(
        [str(venv_python), str(backend_script)],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # Redirect stderr to stdout
        cwd=str(ROOT_DIR),
        env=env,
        text=True,
        bufsize=1,
        universal_newlines=True
    )

    # Wait for the server to start and display the logs
    print("⏳ Waiting for the server to start...")
    time.sleep(3)

    # Read the startup logs
    print("\n📋 Server startup logs:")
    print("-" * 40)

    # Read a few lines of output
    for i in range(20):  # Read the first 20 lines
        try:
            line = process.stdout.readline()
            if line:
                print(f"LOG: {line.strip()}")
            else:
                break
        except Exception:
            break

    print("-" * 40)

    return process


def test_capture_with_logs(server_process):
    """Tests the capture while monitoring the logs."""
    print("\n📷 Testing capture while monitoring logs...")

    # Make a capture request
    try:
        print("🔄 Sending the capture request...")
        response = requests.post(
            "http://localhost:5002/api/screen-capture",
            json={"format": "png", "quality": 90},
            timeout=15
        )

        print(f"📊 Response status: {response.status_code}")

        # Read the logs during the request
        print("\n📋 Logs during capture:")
        print("-" * 40)

        # Read a few more lines
        for i in range(10):
            try:
                line = server_process.stdout.readline()
                if line:
                    print(f"LOG: {line.strip()}")
                else:
                    break
            except Exception:
                break

        print("-" * 40)

        if response.status_code == 200:
            data = response.json()
            if data.get('success'):
                print(f"✅ Capture succeeded - {data['width']}x{data['height']}")
                return True
            else:
                print(f"❌ Capture error: {data.get('error', 'unknown')}")
                return False
        else:
            print(f"❌ HTTP error: {response.status_code}")
            print(f"Response: {response.text}")
            return False

    except Exception as e:
        print(f"❌ Error during capture: {e}")
        return False


def main():
    """Main test function."""
    print("=" * 60)
    print("  VWB BACKEND DEBUG TEST")
    print("=" * 60)
    print("Author: Dom, Alice, Kiro - January 9, 2026")
    print("")

    # Start the backend server
    server_process = start_backend_server_debug()

    if not server_process:
        print("❌ Unable to start the backend server")
        return False

    try:
        # Wait a bit longer for full startup
        time.sleep(5)

        # Test the capture with logs
        success = test_capture_with_logs(server_process)

        return success

    finally:
        # Stop the server
        if server_process:
            print("\n🛑 Stopping the backend server...")
            server_process.terminate()
            server_process.wait()


if __name__ == '__main__':
    success = main()
    sys.exit(0 if success else 1)
@@ -1,257 +0,0 @@
#!/usr/bin/env python3
"""
Integration tests for the VWB screen capture and visual embedding API.

Author: Dom, Alice, Kiro - January 9, 2026

These tests verify that the /api/screen-capture and /api/visual-embedding
endpoints work correctly with the real capture system.
"""

import pytest
import sys
import os
from pathlib import Path

# Add the root directory to the path
ROOT_DIR = Path(__file__).parent.parent.parent
sys.path.insert(0, str(ROOT_DIR))


class TestScreenCaptureService:
    """Tests for the screen capture service."""

    def test_screen_capturer_import(self):
        """Checks that the ScreenCapturer can be imported."""
        try:
            from core.capture import ScreenCapturer
            assert ScreenCapturer is not None
        except ImportError as e:
            pytest.skip(f"ScreenCapturer not available: {e}")

    def test_screen_capturer_initialization(self):
        """Checks that the ScreenCapturer can be initialized."""
        try:
            from core.capture import ScreenCapturer
            capturer = ScreenCapturer(buffer_size=2, detect_changes=False)
            assert capturer is not None
            assert capturer.method in ["mss", "pyautogui"]
        except ImportError as e:
            pytest.skip(f"ScreenCapturer not available: {e}")
        except Exception as e:
            # May fail on a headless server
            pytest.skip(f"Screen capture not available: {e}")

    def test_screen_capture_returns_array(self):
        """Checks that the capture returns a valid numpy array."""
        try:
            from core.capture import ScreenCapturer
            import numpy as np

            capturer = ScreenCapturer(buffer_size=2, detect_changes=False)
            img = capturer.capture()

            if img is None:
                pytest.skip("Screen capture not available (no display)")

            assert isinstance(img, np.ndarray)
            assert len(img.shape) == 3  # (H, W, C)
            assert img.shape[2] == 3  # RGB
            assert img.shape[0] > 0  # Height > 0
            assert img.shape[1] > 0  # Width > 0

        except ImportError as e:
            pytest.skip(f"Dependencies not available: {e}")
        except Exception as e:
            pytest.skip(f"Screen capture not available: {e}")


class TestCLIPEmbedderService:
    """Tests for the CLIP embedding service."""

    def test_clip_embedder_import(self):
        """Checks that the CLIPEmbedder can be imported."""
        try:
            from core.embedding import create_clip_embedder
            assert create_clip_embedder is not None
        except ImportError as e:
            pytest.skip(f"CLIPEmbedder not available: {e}")

    def test_clip_embedder_initialization(self):
        """Checks that the CLIPEmbedder can be initialized."""
        try:
            from core.embedding import create_clip_embedder
            embedder = create_clip_embedder(device="cpu")
            assert embedder is not None
            assert embedder.get_dimension() > 0
        except ImportError as e:
            pytest.skip(f"CLIPEmbedder not available: {e}")
        except Exception as e:
            pytest.skip(f"CLIP initialization failed: {e}")

    def test_clip_embedding_dimension(self):
        """Checks that embeddings have the correct dimension."""
        try:
            from core.embedding import create_clip_embedder
            from PIL import Image
            import numpy as np

            embedder = create_clip_embedder(device="cpu")

            # Create a test image
            test_image = Image.fromarray(
                np.random.randint(0, 255, (100, 100, 3), dtype=np.uint8)
            )

            embedding = embedder.embed_image(test_image)

            assert isinstance(embedding, np.ndarray)
            assert len(embedding.shape) == 1
            assert embedding.shape[0] == embedder.get_dimension()

        except ImportError as e:
            pytest.skip(f"Dependencies not available: {e}")
        except Exception as e:
            pytest.skip(f"Embedding failed: {e}")


class TestBackendFunctions:
    """Tests for the VWB backend functions."""

    def test_capture_screen_to_base64_function(self):
        """Checks the capture_screen_to_base64 function."""
        try:
            sys.path.insert(0, str(ROOT_DIR / "visual_workflow_builder" / "backend"))
            from app_lightweight import capture_screen_to_base64

            result = capture_screen_to_base64()

            assert isinstance(result, dict)
            assert 'success' in result

            if result['success']:
                assert 'screenshot' in result
                assert 'width' in result
                assert 'height' in result
                assert isinstance(result['screenshot'], str)
                assert len(result['screenshot']) > 0
            else:
                # May fail if no display is available
                assert 'error' in result

        except ImportError as e:
            pytest.skip(f"Backend not available: {e}")
        except Exception as e:
            pytest.skip(f"Test failed: {e}")

    def test_create_visual_embedding_function(self):
        """Checks the create_visual_embedding function."""
        try:
            import base64
            from PIL import Image
            import numpy as np
            import io

            sys.path.insert(0, str(ROOT_DIR / "visual_workflow_builder" / "backend"))
            from app_lightweight import create_visual_embedding

            # Create a base64 test image
            test_image = Image.fromarray(
                np.random.randint(0, 255, (200, 200, 3), dtype=np.uint8)
            )
            buffer = io.BytesIO()
            test_image.save(buffer, format='PNG')
            buffer.seek(0)
            screenshot_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')

            # Selection zone
            bounding_box = {
                'x': 50,
                'y': 50,
                'width': 100,
                'height': 100
            }

            result = create_visual_embedding(screenshot_base64, bounding_box, "test_step")

            assert isinstance(result, dict)
            assert 'success' in result

            if result['success']:
                assert 'embedding' in result
                assert 'embedding_id' in result
                assert 'dimension' in result
                assert isinstance(result['embedding'], list)
                assert len(result['embedding']) > 0
            else:
                # May fail if CLIP is not available
                assert 'error' in result

        except ImportError as e:
            pytest.skip(f"Dependencies not available: {e}")
        except Exception as e:
            pytest.skip(f"Test failed: {e}")


class TestAPIEndpointsStructure:
    """Tests for the structure of the API endpoints."""

    def test_backend_module_loads(self):
        """Checks that the backend module can be loaded."""
        try:
            sys.path.insert(0, str(ROOT_DIR / "visual_workflow_builder" / "backend"))
            import app_lightweight
            assert app_lightweight is not None
        except ImportError as e:
            pytest.fail(f"Unable to load the backend: {e}")

    def test_workflow_database_class_exists(self):
        """Checks that the WorkflowDatabase class exists."""
        try:
            sys.path.insert(0, str(ROOT_DIR / "visual_workflow_builder" / "backend"))
            from app_lightweight import WorkflowDatabase
            assert WorkflowDatabase is not None

            db = WorkflowDatabase()
            assert db is not None
        except ImportError as e:
            pytest.fail(f"WorkflowDatabase not available: {e}")

    def test_simple_workflow_class_exists(self):
        """Checks that the SimpleWorkflow class exists."""
        try:
            sys.path.insert(0, str(ROOT_DIR / "visual_workflow_builder" / "backend"))
            from app_lightweight import SimpleWorkflow
            assert SimpleWorkflow is not None

            workflow = SimpleWorkflow(
                id="test_wf",
                name="Test Workflow",
                description="Test description"
            )
            assert workflow.id == "test_wf"
            assert workflow.name == "Test Workflow"
        except ImportError as e:
            pytest.fail(f"SimpleWorkflow not available: {e}")


class TestDataDirectory:
    """Tests for the data directory structure."""

    def test_visual_embeddings_directory_creation(self):
        """Checks that the visual_embeddings directory can be created."""
        embeddings_dir = ROOT_DIR / "data" / "visual_embeddings"
        embeddings_dir.mkdir(parents=True, exist_ok=True)
        assert embeddings_dir.exists()
        assert embeddings_dir.is_dir()

    def test_workflows_directory_creation(self):
        """Checks that the workflows directory can be created."""
        workflows_dir = ROOT_DIR / "data" / "workflows"
        workflows_dir.mkdir(parents=True, exist_ok=True)
        assert workflows_dir.exists()
        assert workflows_dir.is_dir()


if __name__ == '__main__':
    pytest.main([__file__, '-v', '--tb=short'])
@@ -1,753 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Visual Workflow Builder - Backend Flask Application (Version Allégée)
|
|
||||||
|
|
||||||
Auteur : Dom, Alice, Kiro - 09 janvier 2026
|
|
||||||
|
|
||||||
Version optimisée pour un démarrage rapide avec uniquement les fonctionnalités essentielles.
|
|
||||||
Cette version évite les imports lourds et les dépendances optionnelles.
|
|
||||||
|
|
||||||
Fonctionnalités :
|
|
||||||
- API REST pour la gestion des workflows
|
|
||||||
- Capture d'écran via ScreenCapturer (core/capture)
|
|
||||||
- Création d'embeddings visuels via CLIPEmbedder (core/embedding)
|
|
||||||
"""
|
|
||||||
|
|
||||||
import json
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
import base64
|
|
||||||
import io
|
|
||||||
from pathlib import Path
|
|
||||||
from datetime import datetime
|
|
||||||
from typing import Dict, Any, List, Optional
|
|
||||||
|
|
||||||
# Ajouter le répertoire racine au path pour les imports core
|
|
||||||
ROOT_DIR = Path(__file__).parent.parent.parent
|
|
||||||
sys.path.insert(0, str(ROOT_DIR))
|
|
||||||
sys.path.insert(0, str(Path(__file__).parent))
|
|
||||||
|
|
||||||
# Import minimal sans dépendances lourdes
|
|
||||||
try:
|
|
||||||
from http.server import HTTPServer, BaseHTTPRequestHandler
|
|
||||||
from urllib.parse import urlparse, parse_qs
|
|
||||||
import socketserver
|
|
||||||
USE_FLASK = False
|
|
||||||
print("⚡ Mode serveur HTTP natif (sans Flask)")
|
|
||||||
except ImportError:
|
|
||||||
USE_FLASK = True
|
|
||||||
print("🔄 Tentative d'utilisation de Flask...")
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# Services de capture d'écran et d'embedding
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
# Instance globale du capturer (initialisée à la demande)
|
|
||||||
_screen_capturer = None
|
|
||||||
_clip_embedder = None
|
|
||||||
|
|
||||||
|
|
||||||
def get_screen_capturer():
|
|
||||||
"""
|
|
||||||
Obtenir l'instance du ScreenCapturer (initialisation paresseuse).
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
ScreenCapturer ou None si non disponible
|
|
||||||
"""
|
|
||||||
global _screen_capturer
|
|
||||||
if _screen_capturer is None:
|
|
||||||
try:
|
|
||||||
# Vérifier les dépendances de capture d'écran
|
|
||||||
try:
|
|
||||||
import mss
|
|
||||||
print("✅ mss disponible")
|
|
||||||
except ImportError:
|
|
||||||
print("❌ mss non disponible")
|
|
||||||
|
|
||||||
try:
|
|
||||||
import pyautogui
|
|
||||||
print("✅ pyautogui disponible")
|
|
||||||
except ImportError:
|
|
||||||
print("❌ pyautogui non disponible")
|
|
||||||
|
|
||||||
from core.capture import ScreenCapturer
|
|
||||||
_screen_capturer = ScreenCapturer(buffer_size=5, detect_changes=False)
|
|
||||||
print(f"✅ ScreenCapturer initialisé avec succès - méthode: {_screen_capturer.method}")
|
|
||||||
except ImportError as e:
|
|
||||||
print(f"⚠️ ScreenCapturer non disponible: {e}")
|
|
||||||
return None
|
|
||||||
except Exception as e:
|
|
||||||
print(f"❌ Erreur initialisation ScreenCapturer: {e}")
|
|
||||||
return None
|
|
||||||
return _screen_capturer
|
|
||||||
|
|
||||||
|
|
||||||
def get_clip_embedder():
    """
    Return the CLIPEmbedder instance (lazy initialization).

    Returns:
        CLIPEmbedder, or None if unavailable
    """
    global _clip_embedder
    if _clip_embedder is None:
        try:
            from core.embedding import create_clip_embedder
            _clip_embedder = create_clip_embedder(device="cpu")
            print("✅ CLIPEmbedder initialized successfully")
        except ImportError as e:
            print(f"⚠️ CLIPEmbedder not available: {e}")
            return None
        except Exception as e:
            print(f"❌ CLIPEmbedder initialization error: {e}")
            return None
    return _clip_embedder

def capture_screen_to_base64() -> Dict[str, Any]:
    """
    Capture the screen and return the image as base64.

    Returns:
        Dict with 'success', 'screenshot' (base64), 'width', 'height', or 'error'
    """
    capturer = get_screen_capturer()
    if capturer is None:
        return {
            'success': False,
            'error': 'Screen capture service not available'
        }

    try:
        from PIL import Image
        import numpy as np

        # Capture the screen
        img_array = capturer.capture()
        if img_array is None:
            return {
                'success': False,
                'error': 'Screen capture failed'
            }

        # Convert to a PIL Image
        pil_image = Image.fromarray(img_array)

        # Convert to base64
        buffer = io.BytesIO()
        pil_image.save(buffer, format='PNG', optimize=True)
        buffer.seek(0)
        screenshot_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')

        return {
            'success': True,
            'screenshot': screenshot_base64,
            'width': pil_image.width,
            'height': pil_image.height,
            'timestamp': datetime.now().isoformat()
        }

    except Exception as e:
        return {
            'success': False,
            'error': f'Capture error: {str(e)}'
        }

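The encode step above is a plain base64 round-trip over the PNG bytes in the in-memory buffer; a minimal standalone sketch of that step (stdlib only, with a fake payload standing in for real PNG data):

```python
import base64
import io

# Simulate "image bytes" written into an in-memory buffer, as the
# endpoint does with pil_image.save(buffer, format='PNG').
buffer = io.BytesIO()
buffer.write(b"\x89PNG\r\n\x1a\nfake-png-payload")
buffer.seek(0)

# Encode to a UTF-8 base64 string suitable for a JSON response...
encoded = base64.b64encode(buffer.getvalue()).decode("utf-8")

# ...and decode it back, as a client would before opening the image.
decoded = base64.b64decode(encoded)
assert decoded == b"\x89PNG\r\n\x1a\nfake-png-payload"
```

Base64 inflates the payload by roughly a third, which is the price paid for shipping binary image data inside JSON.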
def create_visual_embedding(screenshot_base64: str, bounding_box: Dict[str, int], step_id: str) -> Dict[str, Any]:
    """
    Create a visual embedding from a screenshot and a selected region.

    Args:
        screenshot_base64: Image as base64
        bounding_box: Selected region {'x', 'y', 'width', 'height'}
        step_id: Step identifier

    Returns:
        Dict with 'success', 'embedding', 'embedding_id', or 'error'
    """
    embedder = get_clip_embedder()
    if embedder is None:
        return {
            'success': False,
            'error': 'Embedding service not available'
        }

    try:
        from PIL import Image
        import numpy as np

        # Decode the base64 image
        image_data = base64.b64decode(screenshot_base64)
        pil_image = Image.open(io.BytesIO(image_data))

        # Extract the selected region
        x = bounding_box.get('x', 0)
        y = bounding_box.get('y', 0)
        width = bounding_box.get('width', 100)
        height = bounding_box.get('height', 100)

        # Validate the coordinates
        x = max(0, min(x, pil_image.width - 1))
        y = max(0, min(y, pil_image.height - 1))
        width = max(10, min(width, pil_image.width - x))
        height = max(10, min(height, pil_image.height - y))

        # Crop the region
        cropped_image = pil_image.crop((x, y, x + width, y + height))

        # Create the embedding
        embedding = embedder.embed_image(cropped_image)

        # Generate a unique ID for the embedding
        embedding_id = f"emb_{step_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}"

        # Save the embedding and the reference image
        embeddings_dir = ROOT_DIR / "data" / "visual_embeddings"
        embeddings_dir.mkdir(parents=True, exist_ok=True)

        # Save the embedding as a numpy file
        embedding_path = embeddings_dir / f"{embedding_id}.npy"
        np.save(str(embedding_path), embedding)

        # Save the reference image
        reference_path = embeddings_dir / f"{embedding_id}_ref.png"
        cropped_image.save(str(reference_path))

        return {
            'success': True,
            'embedding': embedding.tolist(),
            'embedding_id': embedding_id,
            'dimension': len(embedding),
            'reference_image': f"{embedding_id}_ref.png",
            'bounding_box': {
                'x': x,
                'y': y,
                'width': width,
                'height': height
            }
        }

    except Exception as e:
        return {
            'success': False,
            'error': f'Error while creating the embedding: {str(e)}'
        }

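The coordinate-validation step above can be factored into a small pure function for testing; a sketch mirroring its logic (the name `clamp_bbox` is ours, not part of the original code):

```python
def clamp_bbox(x, y, width, height, img_w, img_h, min_size=10):
    # Keep the origin inside the image...
    x = max(0, min(x, img_w - 1))
    y = max(0, min(y, img_h - 1))
    # ...and keep the box at least min_size wide/tall, capped at the
    # space remaining to the right of / below the origin.
    width = max(min_size, min(width, img_w - x))
    height = max(min_size, min(height, img_h - y))
    return x, y, width, height

# A box hanging off the right edge of an 800x600 image is shrunk to fit.
print(clamp_bbox(790, 100, 50, 50, 800, 600))  # (790, 100, 10, 50)
```

Note that, as in the original code, the `min_size` floor can still push a box past the image edge when the origin sits within `min_size` pixels of the border.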
class WorkflowHandler(BaseHTTPRequestHandler):
    """Simple HTTP handler for workflows."""

    def __init__(self, *args, **kwargs):
        self.workflows_db = WorkflowDatabase()
        super().__init__(*args, **kwargs)

    def do_GET(self):
        """Handle GET requests."""
        parsed_path = urlparse(self.path)
        path = parsed_path.path

        # CORS headers are emitted by send_json_response
        if path == '/health':
            self.send_health_check()
        elif path == '/':
            self.send_index()
        elif path.startswith('/api/workflows'):
            self.handle_workflows_get(path)
        else:
            self.send_error(404, "Not Found")

    def do_POST(self):
        """Handle POST requests."""
        parsed_path = urlparse(self.path)
        path = parsed_path.path

        if path.startswith('/api/workflows'):
            self.handle_workflows_post(path)
        else:
            self.send_error(404, "Not Found")

    def do_OPTIONS(self):
        """Handle OPTIONS (CORS preflight) requests."""
        self.send_response(200)
        self.send_cors_headers()
        self.end_headers()

    def send_cors_headers(self):
        """Send the CORS headers (must be called after send_response)."""
        self.send_header('Access-Control-Allow-Origin', '*')
        self.send_header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS')
        self.send_header('Access-Control-Allow-Headers', 'Content-Type, Authorization')

    def send_json_response(self, data: Any, status_code: int = 200):
        """Send a JSON response."""
        self.send_response(status_code)
        self.send_header('Content-Type', 'application/json')
        self.send_cors_headers()
        self.end_headers()

        json_data = json.dumps(data, ensure_ascii=False, indent=2)
        self.wfile.write(json_data.encode('utf-8'))

    def send_health_check(self):
        """Health endpoint."""
        self.send_json_response({
            'status': 'healthy',
            'version': '1.0.0-lightweight',
            'mode': 'native-http'
        })

    def send_index(self):
        """Landing page."""
        self.send_json_response({
            'message': 'Visual Workflow Builder Backend (Lightweight Version)',
            'version': '1.0.0-lightweight',
            'mode': 'native-http',
            'endpoints': ['/health', '/api/workflows']
        })

    def handle_workflows_get(self, path: str):
        """Handle GET on /api/workflows."""
        if path == '/api/workflows' or path == '/api/workflows/':
            # List the workflows
            try:
                workflows = self.workflows_db.list_workflows()
                self.send_json_response([w.to_dict() for w in workflows])
            except Exception as e:
                self.send_json_response({'error': str(e)}, 500)
        else:
            # A specific workflow
            workflow_id = path.split('/')[-1]
            try:
                workflow = self.workflows_db.get_workflow(workflow_id)
                if workflow:
                    self.send_json_response(workflow.to_dict())
                else:
                    self.send_json_response({'error': 'Workflow not found'}, 404)
            except Exception as e:
                self.send_json_response({'error': str(e)}, 500)

    def handle_workflows_post(self, path: str):
        """Handle POST on /api/workflows."""
        try:
            content_length = int(self.headers.get('Content-Length', 0))
            if content_length > 0:
                post_data = self.rfile.read(content_length)
                data = json.loads(post_data.decode('utf-8'))
            else:
                data = {}

            if path == '/api/workflows' or path == '/api/workflows/':
                # Create a new workflow
                workflow = self.workflows_db.create_workflow(data)
                self.send_json_response(workflow.to_dict(), 201)
            else:
                self.send_json_response({'error': 'Method not allowed'}, 405)

        except json.JSONDecodeError:
            self.send_json_response({'error': 'Invalid JSON'}, 400)
        except Exception as e:
            self.send_json_response({'error': str(e)}, 500)

class SimpleWorkflow:
    """Simplified workflow model."""

    def __init__(self, id: str, name: str, description: str = "", created_by: str = "unknown"):
        self.id = id
        self.name = name
        self.description = description
        self.created_by = created_by
        self.created_at = datetime.now().isoformat()
        self.updated_at = self.created_at
        self.nodes = []
        self.edges = []
        self.variables = []
        self.settings = {}
        self.tags = []
        self.category = "default"
        self.is_template = False

    def to_dict(self) -> Dict[str, Any]:
        """Convert to a dictionary."""
        return {
            'id': self.id,
            'name': self.name,
            'description': self.description,
            'created_by': self.created_by,
            'created_at': self.created_at,
            'updated_at': self.updated_at,
            'nodes': self.nodes,
            'edges': self.edges,
            'variables': self.variables,
            'settings': self.settings,
            'tags': self.tags,
            'category': self.category,
            'is_template': self.is_template
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> 'SimpleWorkflow':
        """Build from a dictionary."""
        workflow = cls(
            id=data.get('id', f"wf_{datetime.now().strftime('%Y%m%d_%H%M%S')}"),
            name=data.get('name', 'Untitled'),
            description=data.get('description', ''),
            created_by=data.get('created_by', 'unknown')
        )

        workflow.nodes = data.get('nodes', [])
        workflow.edges = data.get('edges', [])
        workflow.variables = data.get('variables', [])
        workflow.settings = data.get('settings', {})
        workflow.tags = data.get('tags', [])
        workflow.category = data.get('category', 'default')
        workflow.is_template = data.get('is_template', False)

        return workflow

class WorkflowDatabase:
    """Simple file-backed database for workflows."""

    def __init__(self):
        self.data_dir = Path("../../data/workflows")
        self.data_dir.mkdir(parents=True, exist_ok=True)
        print(f"📁 Database: {self.data_dir.absolute()}")

    def _get_file_path(self, workflow_id: str) -> Path:
        """Return the file path for a workflow."""
        safe_id = "".join(c for c in workflow_id if c.isalnum() or c in ("_", "-"))
        return self.data_dir / f"{safe_id}.json"

    def create_workflow(self, data: Dict[str, Any]) -> SimpleWorkflow:
        """Create a new workflow."""
        if 'name' not in data:
            raise ValueError("A name is required")

        workflow = SimpleWorkflow.from_dict(data)
        self.save_workflow(workflow)
        return workflow

    def save_workflow(self, workflow: SimpleWorkflow):
        """Save a workflow."""
        file_path = self._get_file_path(workflow.id)
        with open(file_path, 'w', encoding='utf-8') as f:
            json.dump(workflow.to_dict(), f, ensure_ascii=False, indent=2)

    def get_workflow(self, workflow_id: str) -> Optional[SimpleWorkflow]:
        """Fetch a workflow by ID."""
        file_path = self._get_file_path(workflow_id)
        if not file_path.exists():
            return None

        try:
            with open(file_path, 'r', encoding='utf-8') as f:
                data = json.load(f)
            return SimpleWorkflow.from_dict(data)
        except Exception as e:
            print(f"Error reading workflow {workflow_id}: {e}")
            return None

    def list_workflows(self) -> List[SimpleWorkflow]:
        """List all workflows."""
        workflows = []
        for file_path in self.data_dir.glob("*.json"):
            try:
                with open(file_path, 'r', encoding='utf-8') as f:
                    data = json.load(f)
                workflows.append(SimpleWorkflow.from_dict(data))
            except Exception as e:
                print(f"Error reading {file_path}: {e}")

        return workflows

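`_get_file_path` guards against path traversal by whitelisting characters in the workflow ID before building a filename; a standalone sketch of that filter (the helper name `safe_filename` is ours):

```python
def safe_filename(workflow_id: str) -> str:
    # Keep only alphanumerics, underscores, and hyphens, so an ID like
    # "../../etc/passwd" cannot escape the data directory.
    return "".join(c for c in workflow_id if c.isalnum() or c in ("_", "-"))

print(safe_filename("../../etc/passwd"))  # "etcpasswd"
print(safe_filename("wf_2026-01-08"))     # "wf_2026-01-08"
```

Whitelisting (keep known-good characters) is generally safer than blacklisting separators, since it also neutralizes platform-specific surprises such as drive letters or null bytes.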
def start_native_server(port: int = 5002):
    """Start the native HTTP server."""
    print(f"🚀 Starting native server on port {port}")
    print(f"🌐 URL: http://localhost:{port}")
    print(f"❤️ Health check: http://localhost:{port}/health")
    print(f"📋 Workflows API: http://localhost:{port}/api/workflows")
    print("")
    print("Press Ctrl+C to stop")

    try:
        with socketserver.TCPServer(("", port), WorkflowHandler) as httpd:
            httpd.serve_forever()
    except KeyboardInterrupt:
        print("\n🛑 Server stopped")
    except Exception as e:
        print(f"❌ Server error: {e}")

def start_flask_server(port: int = 5002):
    """Start the Flask server if available."""
    try:
        from flask import Flask, jsonify, request
        from flask_cors import CORS

        app = Flask(__name__)
        CORS(app)

        db = WorkflowDatabase()

        @app.route('/health')
        @app.route('/api/health')
        def health_check():
            return jsonify({
                'status': 'healthy',
                'version': '1.0.0-lightweight',
                'mode': 'flask',
                'features': {
                    'screen_capture': get_screen_capturer() is not None,
                    'visual_embedding': get_clip_embedder() is not None
                }
            })

        @app.route('/')
        def index():
            return jsonify({
                'message': 'Visual Workflow Builder Backend (Lightweight Version)',
                'version': '1.0.0-lightweight',
                'mode': 'flask',
                'endpoints': [
                    '/health',
                    '/api/workflows',
                    '/api/screen-capture',
                    '/api/visual-embedding'
                ]
            })

        @app.route('/api/workflows', methods=['GET'])
        def list_workflows():
            try:
                workflows = db.list_workflows()
                return jsonify([w.to_dict() for w in workflows])
            except Exception as e:
                return jsonify({'error': str(e)}), 500

        @app.route('/api/workflows', methods=['POST'])
        def create_workflow():
            try:
                data = request.get_json() or {}
                workflow = db.create_workflow(data)
                return jsonify(workflow.to_dict()), 201
            except Exception as e:
                return jsonify({'error': str(e)}), 400

        @app.route('/api/workflows/<workflow_id>', methods=['GET'])
        def get_workflow(workflow_id):
            try:
                workflow = db.get_workflow(workflow_id)
                if workflow:
                    return jsonify(workflow.to_dict())
                else:
                    return jsonify({'error': 'Workflow not found'}), 404
            except Exception as e:
                return jsonify({'error': str(e)}), 500

        # ====================================================================
        # Screen-capture and visual-embedding endpoints
        # ====================================================================

        @app.route('/api/screen-capture', methods=['POST'])
        def screen_capture():
            """
            Capture the current screen and return the image as base64.

            Request body (optional):
                {
                    "format": "png",   // Image format (png by default)
                    "quality": 90      // Quality (unused for PNG)
                }

            Response:
                {
                    "success": true,
                    "screenshot": "base64_encoded_image",
                    "width": 1920,
                    "height": 1080,
                    "timestamp": "2026-01-09T..."
                }
            """
            try:
                result = capture_screen_to_base64()
                if result['success']:
                    return jsonify(result)
                else:
                    return jsonify(result), 500
            except Exception as e:
                return jsonify({
                    'success': False,
                    'error': f'Server error: {str(e)}'
                }), 500

        @app.route('/api/visual-embedding', methods=['POST'])
        def visual_embedding():
            """
            Create a visual embedding from a screenshot and a selected region.

            Request body:
                {
                    "screenshot": "base64_encoded_image",
                    "boundingBox": {
                        "x": 100,
                        "y": 200,
                        "width": 150,
                        "height": 50
                    },
                    "stepId": "step_123"
                }

            Response:
                {
                    "success": true,
                    "embedding": [0.1, 0.2, ...],
                    "embedding_id": "emb_step_123_20260109_...",
                    "dimension": 512,
                    "reference_image": "emb_step_123_..._ref.png",
                    "bounding_box": {...}
                }
            """
            try:
                data = request.get_json()
                if not data:
                    return jsonify({
                        'success': False,
                        'error': 'A JSON request body is required'
                    }), 400

                # Validate the required parameters
                screenshot = data.get('screenshot')
                bounding_box = data.get('boundingBox')
                step_id = data.get('stepId', 'unknown')

                if not screenshot:
                    return jsonify({
                        'success': False,
                        'error': 'Parameter "screenshot" is required'
                    }), 400

                if not bounding_box:
                    return jsonify({
                        'success': False,
                        'error': 'Parameter "boundingBox" is required'
                    }), 400

                # Create the embedding
                result = create_visual_embedding(screenshot, bounding_box, step_id)

                if result['success']:
                    return jsonify(result)
                else:
                    return jsonify(result), 500

            except Exception as e:
                return jsonify({
                    'success': False,
                    'error': f'Server error: {str(e)}'
                }), 500

        @app.route('/api/visual-embedding/<embedding_id>', methods=['GET'])
        def get_visual_embedding(embedding_id):
            """
            Fetch an existing visual embedding by its ID.

            Response:
                {
                    "success": true,
                    "embedding_id": "emb_...",
                    "embedding": [0.1, 0.2, ...],
                    "reference_image_url": "/api/visual-embedding/emb_.../image"
                }
            """
            try:
                import numpy as np

                embeddings_dir = ROOT_DIR / "data" / "visual_embeddings"
                embedding_path = embeddings_dir / f"{embedding_id}.npy"

                if not embedding_path.exists():
                    return jsonify({
                        'success': False,
                        'error': f'Embedding "{embedding_id}" not found'
                    }), 404

                embedding = np.load(str(embedding_path))

                return jsonify({
                    'success': True,
                    'embedding_id': embedding_id,
                    'embedding': embedding.tolist(),
                    'dimension': len(embedding),
                    'reference_image_url': f'/api/visual-embedding/{embedding_id}/image'
                })

            except Exception as e:
                return jsonify({
                    'success': False,
                    'error': f'Error: {str(e)}'
                }), 500

        @app.route('/api/visual-embedding/<embedding_id>/image', methods=['GET'])
        def get_embedding_reference_image(embedding_id):
            """
            Fetch the reference image of an embedding.
            """
            try:
                from flask import send_file

                embeddings_dir = ROOT_DIR / "data" / "visual_embeddings"
                image_path = embeddings_dir / f"{embedding_id}_ref.png"

                if not image_path.exists():
                    return jsonify({
                        'success': False,
                        'error': 'Reference image not found'
                    }), 404

                return send_file(str(image_path), mimetype='image/png')

            except Exception as e:
                return jsonify({
                    'success': False,
                    'error': f'Error: {str(e)}'
                }), 500

        print(f"🚀 Starting Flask server on port {port}")
        print(f"🌐 URL: http://localhost:{port}")
        print(f"❤️ Health check: http://localhost:{port}/health")
        print(f"📋 Workflows API: http://localhost:{port}/api/workflows")
        print(f"📷 Capture API: http://localhost:{port}/api/screen-capture")
        print(f"🎯 Embedding API: http://localhost:{port}/api/visual-embedding")

        app.run(host='0.0.0.0', port=port, debug=False)

    except ImportError as e:
        print(f"❌ Flask not available: {e}")
        print("🔄 Falling back to the native server...")
        start_native_server(port)

def main():
    """Main entry point."""
    print("=" * 60)
    print(" VISUAL WORKFLOW BUILDER - LIGHTWEIGHT BACKEND")
    print("=" * 60)
    print("Authors: Dom, Alice, Kiro - January 8, 2026")
    print("")

    # Determine the port
    port = int(os.getenv('PORT', 5002))

    # Check the dependencies
    try:
        import flask
        import flask_cors
        print("✅ Flask available - using Flask mode")
        start_flask_server(port)
    except ImportError:
        print("⚡ Flask not available - using the native server")
        start_native_server(port)


if __name__ == '__main__':
    main()
@@ -1,299 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Real Screen Capture Service - RPA Vision V3
Authors: Dom, Alice, Kiro - January 8, 2026

Service that captures the user's real screen and detects UI elements.
"""

import cv2
import numpy as np
import mss
import base64
import io
from PIL import Image
from typing import Dict, List, Tuple, Optional
import threading
import time
import logging

# RPA Vision V3 modules for UI detection
import sys
import os

# Add the project root directory to the path
project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../..'))
if project_root not in sys.path:
    sys.path.insert(0, project_root)

try:
    from core.detection.ui_detector import UIDetector
    UI_DETECTOR_AVAILABLE = True
except ImportError as e:
    print(f"Warning: UIDetector not available: {e}")
    UI_DETECTOR_AVAILABLE = False
    UIDetector = None

try:
    from core.models.screen_state import ScreenState, UIElement
    SCREEN_STATE_AVAILABLE = True
except ImportError as e:
    print(f"Warning: ScreenState not available: {e}")
    SCREEN_STATE_AVAILABLE = False
    ScreenState = None
    UIElement = None

logger = logging.getLogger(__name__)

class RealScreenCaptureService:
    """
    Real screen-capture service with UI element detection
    """

    def __init__(self):
        self.is_capturing = False
        self.capture_thread = None
        self.current_screenshot = None
        self.detected_elements = []

        # Initialize the UI detector if available
        if UI_DETECTOR_AVAILABLE:
            self.ui_detector = UIDetector()
        else:
            self.ui_detector = None
            print("Warning: UIDetector not available - element detection disabled")

        self.capture_interval = 1.0  # 1 second by default
        self.monitors = []
        self.selected_monitor = 0

        # Initialize MSS for screen capture
        try:
            # Use a temporary MSS instance to detect the monitors
            with mss.mss() as sct:
                self.monitors = sct.monitors
                logger.info(f"Detected {len(self.monitors)} monitors")
                for i, monitor in enumerate(self.monitors):
                    logger.info(f"Monitor {i}: {monitor}")
        except Exception as e:
            logger.error(f"Error while detecting monitors: {e}")
            self.monitors = [{"top": 0, "left": 0, "width": 1920, "height": 1080}]

    def _detect_monitors(self):
        """Detect the available monitors"""
        try:
            # Use a short-lived MSS instance to list the monitors
            with mss.mss() as sct:
                self.monitors = sct.monitors
            logger.info(f"Detected {len(self.monitors)} monitors")
            for i, monitor in enumerate(self.monitors):
                logger.info(f"Monitor {i}: {monitor}")
        except Exception as e:
            logger.error(f"Error while detecting monitors: {e}")
            self.monitors = [{"top": 0, "left": 0, "width": 1920, "height": 1080}]

    def get_monitors(self) -> List[Dict]:
        """Return the list of available monitors"""
        return [
            {
                "id": i,
                "width": monitor.get("width", 0),
                "height": monitor.get("height", 0),
                "top": monitor.get("top", 0),
                "left": monitor.get("left", 0)
            }
            for i, monitor in enumerate(self.monitors)
        ]

    def select_monitor(self, monitor_id: int) -> bool:
        """Select the monitor to capture"""
        if 0 <= monitor_id < len(self.monitors):
            self.selected_monitor = monitor_id
            logger.info(f"Selected monitor: {monitor_id}")
            return True
        return False

    def start_capture(self, interval: float = 1.0) -> bool:
        """Start real-time screen capture"""
        if self.is_capturing:
            logger.warning("Capture already running")
            return False

        self.capture_interval = interval
        self.is_capturing = True

        # Start the capture thread
        self.capture_thread = threading.Thread(target=self._capture_loop, daemon=True)
        self.capture_thread.start()

        logger.info(f"Capture started (interval: {interval}s)")
        return True

    def stop_capture(self) -> bool:
        """Stop the screen capture"""
        if not self.is_capturing:
            return False

        self.is_capturing = False

        if self.capture_thread and self.capture_thread.is_alive():
            self.capture_thread.join(timeout=2.0)

        logger.info("Capture stopped")
        return True

    def _capture_loop(self):
        """Main capture loop, with an MSS instance local to the thread"""
        # Create a thread-local MSS instance to avoid threading issues
        try:
            with mss.mss() as sct_local:
                while self.is_capturing:
                    try:
                        # Capture the screen with the local instance
                        screenshot = self._capture_screen_with_sct(sct_local)
                        if screenshot is not None:
                            self.current_screenshot = screenshot

                            # Detect UI elements
                            if UI_DETECTOR_AVAILABLE and self.ui_detector:
                                self._detect_ui_elements(screenshot)

                        # Wait before the next capture
                        time.sleep(self.capture_interval)

                    except Exception as e:
                        logger.error(f"Error in the capture loop: {e}")
                        time.sleep(1.0)  # Wait before retrying
        except Exception as e:
            logger.error(f"Error initializing MSS in the thread: {e}")

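The pattern used by the capture loop (a daemon thread polling until a flag flips) can be sketched without any capture dependencies; this sketch swaps the bare `is_capturing` boolean for a `threading.Event`, which also serves as an interruptible sleep (the `PeriodicWorker` name and API are ours, for illustration only):

```python
import threading
import time

class PeriodicWorker:
    def __init__(self, interval: float):
        self.interval = interval
        self.ticks = 0
        self._stop = threading.Event()
        self._thread = None

    def _loop(self):
        # Event.wait doubles as an interruptible sleep: it returns True
        # (ending the loop) the moment stop() sets the flag.
        while not self._stop.wait(self.interval):
            self.ticks += 1

    def start(self):
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join(timeout=2.0)

worker = PeriodicWorker(interval=0.01)
worker.start()
time.sleep(0.1)
worker.stop()
assert worker.ticks > 0
```

Compared with `time.sleep` plus a boolean, the event-based wait makes `stop()` take effect immediately instead of after up to one full interval.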
    def _capture_screen_with_sct(self, sct):
        """Capture the screen with a given MSS instance"""
        try:
            if self.selected_monitor >= len(self.monitors):
                self.selected_monitor = 0

            monitor = self.monitors[self.selected_monitor]

            # Capture with MSS
            screenshot = sct.grab(monitor)

            # Convert to a numpy array
            img_array = np.array(screenshot)

            # Convert BGRA to BGR (OpenCV)
            if img_array.shape[2] == 4:
                img_array = cv2.cvtColor(img_array, cv2.COLOR_BGRA2BGR)

            return img_array

        except Exception as e:
            logger.error(f"Error during screen capture: {e}")
            return None

    def _capture_screen(self) -> Optional[np.ndarray]:
        """Capture the selected monitor (legacy version; uses _capture_screen_with_sct)"""
        try:
            with mss.mss() as sct:
                return self._capture_screen_with_sct(sct)
        except Exception as e:
            logger.error(f"Legacy screen-capture error: {e}")
            return None

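The BGRA-to-BGR step above amounts to dropping the alpha channel, since MSS already delivers the color bytes in BGR order; the same conversion with plain NumPy slicing (no OpenCV needed) looks like this:

```python
import numpy as np

# A fake 2x2 BGRA frame, shaped like what mss.grab produces (4 channels).
bgra = np.arange(2 * 2 * 4, dtype=np.uint8).reshape(2, 2, 4)

# Dropping the last channel matches cv2.COLOR_BGRA2BGR here, because
# the first three channels are already in BGR order.
bgr = bgra[:, :, :3]

assert bgr.shape == (2, 2, 3)
# The alpha values (every 4th byte) are gone; the BGR bytes are untouched.
assert (bgr[0, 0] == bgra[0, 0, :3]).all()
```

Note that the slice is a view into the original array; call `.copy()` if the frame must outlive the source buffer.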
    def _detect_ui_elements(self, screenshot: np.ndarray):
        """Detect UI elements on the screenshot"""
        try:
            # Build a temporary ScreenState for the detection
            screen_state = ScreenState(
                timestamp=time.time(),
                screenshot_path="",  # No file; the image stays in memory
                screenshot_data=screenshot,
                ui_elements=[],
                metadata={"source": "real_capture"}
            )

            # Use the existing UI detector
            detected_elements = self.ui_detector.detect_elements(screen_state)

            # Update the detected elements
            self.detected_elements = detected_elements

            logger.debug(f"Detected {len(detected_elements)} UI elements")

        except Exception as e:
            logger.error(f"UI detection error: {e}")
            self.detected_elements = []

def get_current_screenshot_base64(self) -> Optional[str]:
|
|
||||||
"""Retourne la capture d'écran actuelle en base64"""
|
|
||||||
if self.current_screenshot is None:
|
|
||||||
return None
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Convertir en PIL Image
|
|
||||||
if len(self.current_screenshot.shape) == 3:
|
|
||||||
# BGR vers RGB
|
|
||||||
rgb_image = cv2.cvtColor(self.current_screenshot, cv2.COLOR_BGR2RGB)
|
|
||||||
pil_image = Image.fromarray(rgb_image)
|
|
||||||
else:
|
|
||||||
pil_image = Image.fromarray(self.current_screenshot)
|
|
||||||
|
|
||||||
# Redimensionner pour l'affichage web (optionnel)
|
|
||||||
max_width = 1200
|
|
||||||
if pil_image.width > max_width:
|
|
||||||
ratio = max_width / pil_image.width
|
|
||||||
new_height = int(pil_image.height * ratio)
|
|
||||||
pil_image = pil_image.resize((max_width, new_height), Image.Resampling.LANCZOS)
|
|
||||||
|
|
||||||
# Convertir en base64
|
|
||||||
buffer = io.BytesIO()
|
|
||||||
pil_image.save(buffer, format='JPEG', quality=85)
|
|
||||||
img_base64 = base64.b64encode(buffer.getvalue()).decode('utf-8')
|
|
||||||
|
|
||||||
return f"data:image/jpeg;base64,{img_base64}"
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Erreur lors de la conversion base64: {e}")
|
|
||||||
return None
|
|
||||||
|
|
||||||
def get_detected_elements(self) -> List[Dict]:
|
|
||||||
"""Retourne les éléments UI détectés"""
|
|
||||||
elements = []
|
|
||||||
|
|
||||||
for element in self.detected_elements:
|
|
||||||
try:
|
|
||||||
elements.append({
|
|
||||||
"id": getattr(element, 'id', ''),
|
|
||||||
"type": getattr(element, 'element_type', 'unknown'),
|
|
||||||
"text": getattr(element, 'text', ''),
|
|
||||||
"bbox": {
|
|
||||||
"x": getattr(element, 'bbox', {}).get('x', 0),
|
|
||||||
"y": getattr(element, 'bbox', {}).get('y', 0),
|
|
||||||
"width": getattr(element, 'bbox', {}).get('width', 0),
|
|
||||||
"height": getattr(element, 'bbox', {}).get('height', 0)
|
|
||||||
},
|
|
||||||
"confidence": getattr(element, 'confidence', 0.0),
|
|
||||||
"attributes": getattr(element, 'attributes', {})
|
|
||||||
})
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Erreur lors de la sérialisation d'un élément: {e}")
|
|
||||||
|
|
||||||
return elements
|
|
||||||
|
|
||||||
def get_status(self) -> Dict:
|
|
||||||
"""Retourne le statut du service"""
|
|
||||||
return {
|
|
||||||
"is_capturing": self.is_capturing,
|
|
||||||
"selected_monitor": self.selected_monitor,
|
|
||||||
"monitors_count": len(self.monitors),
|
|
||||||
"capture_interval": self.capture_interval,
|
|
||||||
"elements_detected": len(self.detected_elements),
|
|
||||||
"has_screenshot": self.current_screenshot is not None
|
|
||||||
}
|
|
||||||
|
|
||||||
def cleanup(self):
|
|
||||||
"""Nettoie les ressources"""
|
|
||||||
self.stop_capture()
|
|
||||||
# Plus besoin de fermer self.sct car nous utilisons des instances locales
|
|
||||||
|
|
||||||
# Instance globale du service
|
|
||||||
real_capture_service = RealScreenCaptureService()
|
|
||||||
@@ -1,454 +0,0 @@
/**
 * Visual Selector component - vision-based element selection
 * Authors: Dom, Alice, Kiro - January 8, 2026
 *
 * This component enables on-screen element selection via screenshot
 * capture and the creation of visual embeddings for element recognition.
 */

import React, { useState, useCallback, useRef } from 'react';
import {
  Dialog,
  DialogTitle,
  DialogContent,
  DialogActions,
  Button,
  Box,
  Typography,
  CircularProgress,
  Alert,
  Stepper,
  Step,
  StepLabel,
  Paper,
  IconButton,
} from '@mui/material';
import {
  CameraAlt as CameraIcon,
  Close as CloseIcon,
  CheckCircle as CheckIcon,
  Visibility as VisibilityIcon,
} from '@mui/icons-material';

// Shared type imports
import { VisualSelection, BoundingBox } from '../../types';

interface VisualSelectorProps {
  isOpen: boolean;
  stepId: string;
  onClose: () => void;
  onElementSelected: (selection: VisualSelection) => void;
}

interface CaptureState {
  screenshot: string | null;
  isCapturing: boolean;
  error: string | null;
  selectedArea: BoundingBox | null;
  isProcessing: boolean;
}

const steps = [
  'Capture d\'écran',
  'Sélection d\'élément',
  'Confirmation',
];

/**
 * Visual Selector component
 */
const VisualSelector: React.FC<VisualSelectorProps> = ({
  isOpen,
  stepId,
  onClose,
  onElementSelected,
}) => {
  const [activeStep, setActiveStep] = useState(0);
  const [captureState, setCaptureState] = useState<CaptureState>({
    screenshot: null,
    isCapturing: false,
    error: null,
    selectedArea: null,
    isProcessing: false,
  });

  const canvasRef = useRef<HTMLCanvasElement>(null);
  const [isSelecting, setIsSelecting] = useState(false);
  const [selectionStart, setSelectionStart] = useState<{ x: number; y: number } | null>(null);

  // Reset state on open/close
  const handleClose = useCallback(() => {
    setActiveStep(0);
    setCaptureState({
      screenshot: null,
      isCapturing: false,
      error: null,
      selectedArea: null,
      isProcessing: false,
    });
    setIsSelecting(false);
    setSelectionStart(null);
    onClose();
  }, [onClose]);

  // Capture the screen through the ScreenCapturer API
  const handleCaptureScreen = useCallback(async () => {
    setCaptureState(prev => ({ ...prev, isCapturing: true, error: null }));

    try {
      // Call the real ScreenCapturer API of the RPA Vision V3 system
      const response = await fetch('/api/screen-capture', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          format: 'png',
          quality: 90,
        }),
      });

      if (!response.ok) {
        throw new Error(`Erreur de capture: ${response.status} ${response.statusText}`);
      }

      const data = await response.json();

      if (!data.success || !data.screenshot) {
        throw new Error(data.error || 'Échec de la capture d\'écran');
      }

      setCaptureState(prev => ({
        ...prev,
        screenshot: data.screenshot,
        isCapturing: false,
      }));

      setActiveStep(1);
    } catch (error) {
      console.error('Erreur lors de la capture d\'écran:', error);
      setCaptureState(prev => ({
        ...prev,
        isCapturing: false,
        error: error instanceof Error ? error.message : 'Erreur inconnue lors de la capture',
      }));
    }
  }, []);

  // Handle selection start on the canvas
  const handleMouseDown = useCallback((event: React.MouseEvent<HTMLCanvasElement>) => {
    if (!captureState.screenshot) return;

    const canvas = canvasRef.current;
    if (!canvas) return;

    const rect = canvas.getBoundingClientRect();
    const x = event.clientX - rect.left;
    const y = event.clientY - rect.top;

    setIsSelecting(true);
    setSelectionStart({ x, y });
    setCaptureState(prev => ({ ...prev, selectedArea: null }));
  }, [captureState.screenshot]);

  // Handle selection drag
  const handleMouseMove = useCallback((event: React.MouseEvent<HTMLCanvasElement>) => {
    if (!isSelecting || !selectionStart || !canvasRef.current) return;

    const canvas = canvasRef.current;
    const rect = canvas.getBoundingClientRect();
    const currentX = event.clientX - rect.left;
    const currentY = event.clientY - rect.top;

    // Draw the selection area in real time
    const ctx = canvas.getContext('2d');
    if (!ctx) return;

    // Redraw the base image
    if (captureState.screenshot) {
      const img = new Image();
      img.onload = () => {
        ctx.clearRect(0, 0, canvas.width, canvas.height);
        ctx.drawImage(img, 0, 0, canvas.width, canvas.height);

        // Draw the selection rectangle
        ctx.strokeStyle = '#1976d2';
        ctx.lineWidth = 2;
        ctx.setLineDash([5, 5]);
        ctx.strokeRect(
          selectionStart.x,
          selectionStart.y,
          currentX - selectionStart.x,
          currentY - selectionStart.y
        );
      };
      img.src = `data:image/png;base64,${captureState.screenshot}`;
    }
  }, [isSelecting, selectionStart, captureState.screenshot]);

  // Finalize the selection
  const handleMouseUp = useCallback((event: React.MouseEvent<HTMLCanvasElement>) => {
    if (!isSelecting || !selectionStart || !canvasRef.current) return;

    const canvas = canvasRef.current;
    const rect = canvas.getBoundingClientRect();
    const endX = event.clientX - rect.left;
    const endY = event.clientY - rect.top;

    const selectedArea: BoundingBox = {
      x: Math.min(selectionStart.x, endX),
      y: Math.min(selectionStart.y, endY),
      width: Math.abs(endX - selectionStart.x),
      height: Math.abs(endY - selectionStart.y),
    };

    // Validate that the selected area has a minimum size
    if (selectedArea.width < 10 || selectedArea.height < 10) {
      setCaptureState(prev => ({
        ...prev,
        error: 'La zone sélectionnée est trop petite. Veuillez sélectionner une zone plus grande.',
      }));
      setIsSelecting(false);
      setSelectionStart(null);
      return;
    }

    setCaptureState(prev => ({
      ...prev,
      selectedArea,
      error: null,
    }));

    setIsSelecting(false);
    setSelectionStart(null);
    setActiveStep(2);
  }, [isSelecting, selectionStart]);

  // Confirm the selection and create the visual embedding
  const handleConfirmSelection = useCallback(async () => {
    if (!captureState.screenshot || !captureState.selectedArea) return;

    setCaptureState(prev => ({ ...prev, isProcessing: true, error: null }));

    try {
      // Create the visual embedding through the RPA Vision V3 API
      const response = await fetch('/api/visual-embedding', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          screenshot: captureState.screenshot,
          boundingBox: captureState.selectedArea,
          stepId: stepId,
        }),
      });

      if (!response.ok) {
        throw new Error(`Erreur de création d'embedding: ${response.status} ${response.statusText}`);
      }

      const data = await response.json();

      if (!data.success || !data.embedding) {
        throw new Error(data.error || 'Échec de la création de l\'embedding visuel');
      }

      // Build the VisualSelection object
      const visualSelection: VisualSelection = {
        id: `visual_${stepId}_${Date.now()}`,
        screenshot: captureState.screenshot,
        boundingBox: captureState.selectedArea,
        embedding: data.embedding,
        description: `Élément sélectionné pour l'étape ${stepId}`,
      };

      onElementSelected(visualSelection);
      handleClose();
    } catch (error) {
      console.error('Erreur lors de la création de l\'embedding:', error);
      setCaptureState(prev => ({
        ...prev,
        isProcessing: false,
        error: error instanceof Error ? error.message : 'Erreur inconnue lors de la création de l\'embedding',
      }));
    }
  }, [captureState.screenshot, captureState.selectedArea, stepId, onElementSelected, handleClose]);

  // Render the content for the active step
  const renderStepContent = () => {
    switch (activeStep) {
      case 0:
        return (
          <Box sx={{ textAlign: 'center', py: 4 }}>
            <CameraIcon sx={{ fontSize: 64, color: 'primary.main', mb: 2 }} />
            <Typography variant="h6" gutterBottom>
              Capture d'écran
            </Typography>
            <Typography variant="body2" color="text.secondary" sx={{ mb: 2 }}>
              Cliquez sur le bouton ci-dessous pour capturer l'écran actuel.
              Assurez-vous que l'élément que vous souhaitez sélectionner est visible.
            </Typography>

            {captureState.error && (
              <Alert severity="error" sx={{ mt: 2, mb: 2 }}>
                {captureState.error}
              </Alert>
            )}

            <Button
              variant="contained"
              size="large"
              onClick={handleCaptureScreen}
              disabled={captureState.isCapturing}
              startIcon={captureState.isCapturing ? <CircularProgress size={20} /> : <CameraIcon />}
            >
              {captureState.isCapturing ? 'Capture en cours...' : 'Capturer l\'écran'}
            </Button>
          </Box>
        );

      case 1:
        return (
          <Box>
            <Typography variant="h6" gutterBottom>
              Sélection d'élément
            </Typography>
            <Typography variant="body2" color="text.secondary" sx={{ mb: 2 }}>
              Cliquez et glissez pour sélectionner l'élément souhaité sur la capture d'écran.
            </Typography>

            {captureState.error && (
              <Alert severity="error" sx={{ mb: 2 }}>
                {captureState.error}
              </Alert>
            )}

            <Paper elevation={2} sx={{ p: 1, maxHeight: 400, overflow: 'auto' }}>
              {captureState.screenshot && (
                <canvas
                  ref={canvasRef}
                  width={800}
                  height={600}
                  style={{
                    maxWidth: '100%',
                    height: 'auto',
                    cursor: 'crosshair',
                    border: '1px solid #e0e0e0',
                  }}
                  onMouseDown={handleMouseDown}
                  onMouseMove={handleMouseMove}
                  onMouseUp={handleMouseUp}
                  onLoad={() => {
                    const canvas = canvasRef.current;
                    const ctx = canvas?.getContext('2d');
                    if (canvas && ctx && captureState.screenshot) {
                      const img = new Image();
                      img.onload = () => {
                        ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
                      };
                      img.src = `data:image/png;base64,${captureState.screenshot}`;
                    }
                  }}
                />
              )}
            </Paper>
          </Box>
        );

      case 2:
        return (
          <Box>
            <Typography variant="h6" gutterBottom>
              Confirmation de sélection
            </Typography>
            <Typography variant="body2" color="text.secondary" sx={{ mb: 2 }}>
              Vérifiez que la zone sélectionnée correspond à l'élément souhaité.
            </Typography>

            {captureState.selectedArea && (
              <Alert severity="info" sx={{ mb: 2 }}>
                Zone sélectionnée : {captureState.selectedArea.width} × {captureState.selectedArea.height} pixels
                à la position ({captureState.selectedArea.x}, {captureState.selectedArea.y})
              </Alert>
            )}

            {captureState.error && (
              <Alert severity="error" sx={{ mb: 2 }}>
                {captureState.error}
              </Alert>
            )}

            <Box sx={{ display: 'flex', gap: 2, justifyContent: 'center' }}>
              <Button
                variant="outlined"
                onClick={() => setActiveStep(1)}
                disabled={captureState.isProcessing}
              >
                Modifier la sélection
              </Button>
              <Button
                variant="contained"
                onClick={handleConfirmSelection}
                disabled={captureState.isProcessing}
                startIcon={captureState.isProcessing ? <CircularProgress size={20} /> : <CheckIcon />}
              >
                {captureState.isProcessing ? 'Traitement...' : 'Confirmer la sélection'}
              </Button>
            </Box>
          </Box>
        );

      default:
        return null;
    }
  };

  return (
    <Dialog
      open={isOpen}
      onClose={handleClose}
      maxWidth="md"
      fullWidth
      slotProps={{
        paper: {
          sx: { minHeight: 500 },
        },
      }}
    >
      <DialogTitle>
        <Box sx={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center' }}>
          <Box sx={{ display: 'flex', alignItems: 'center', gap: 1 }}>
            <VisibilityIcon />
            <Typography variant="h6">Sélection visuelle d'élément</Typography>
          </Box>
          <IconButton onClick={handleClose} size="small">
            <CloseIcon />
          </IconButton>
        </Box>
      </DialogTitle>

      <DialogContent>
        {/* Stepper showing progress */}
        <Stepper activeStep={activeStep} sx={{ mb: 4 }}>
          {steps.map((label) => (
            <Step key={label}>
              <StepLabel>{label}</StepLabel>
            </Step>
          ))}
        </Stepper>

        {/* Active step content */}
        {renderStepContent()}
      </DialogContent>

      <DialogActions>
        <Button onClick={handleClose} disabled={captureState.isCapturing || captureState.isProcessing}>
          Annuler
        </Button>
      </DialogActions>
    </Dialog>
  );
};

export default VisualSelector;
@@ -1,414 +0,0 @@
/**
 * API Client hook - React interface for the API client
 * Authors: Dom, Alice, Kiro - January 9, 2026
 *
 * This hook provides a React interface to the API client with state
 * management, loading, error handling and graceful offline mode.
 * Optimized to avoid excessive re-renders and page jumps.
 */

import { useState, useCallback, useRef, useEffect, useMemo } from 'react';
import { apiClient, ApiError, ConnectionState } from '../services/apiClient';
import { WorkflowApiData } from '../types';

// Request state types
interface RequestState<T = any> {
  data: T | null;
  loading: boolean;
  error: ApiError | null;
  lastUpdated: Date | null;
  isOffline: boolean;
}

interface UseApiClientOptions {
  enableAutoRetry?: boolean;
  retryDelay?: number;
  maxRetries?: number;
  onError?: (error: ApiError) => void;
  onSuccess?: (data: any) => void;
  silentOffline?: boolean; // Do not surface an error while offline
}

// Stable initial state (avoids re-creation on every render)
const INITIAL_STATE: RequestState = {
  data: null,
  loading: false,
  error: null,
  lastUpdated: null,
  isOffline: false,
};

/**
 * Hook to use the API client with React state management
 * Optimized to avoid unnecessary re-renders
 */
export function useApiClient<T = any>(options: UseApiClientOptions = {}) {
  const {
    enableAutoRetry = false, // Disabled by default to avoid page jumps
    retryDelay = 1000,
    maxRetries = 2,
    onError,
    onSuccess,
    silentOffline = true, // By default, do not surface an error while offline
  } = options;

  const [state, setState] = useState<RequestState<T>>(INITIAL_STATE);
  const retryCountRef = useRef(0);
  const timeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);
  const mountedRef = useRef(true);

  // Clear timeouts and mark as unmounted
  useEffect(() => {
    mountedRef.current = true;
    return () => {
      mountedRef.current = false;
      if (timeoutRef.current) {
        clearTimeout(timeoutRef.current);
      }
    };
  }, []);

  // Update state safely (only while mounted)
  const safeSetState = useCallback((updater: (prev: RequestState<T>) => RequestState<T>) => {
    if (mountedRef.current) {
      setState(updater);
    }
  }, []);

  // Generic helper to run an API request
  const executeRequest = useCallback(async <R = T>(
    requestFn: () => Promise<R>,
    requestOptions: { skipLoading?: boolean; skipErrorHandling?: boolean } = {}
  ): Promise<R | null> => {
    const { skipLoading = false, skipErrorHandling = false } = requestOptions;

    try {
      if (!skipLoading) {
        safeSetState(prev => ({
          ...prev,
          loading: true,
          error: null,
        }));
      }

      const result = await requestFn();

      // Check whether the result indicates offline mode
      const isOfflineResult = result && typeof result === 'object' && 'offline' in result && (result as any).offline;

      safeSetState(prev => ({
        ...prev,
        data: isOfflineResult ? prev.data : (result as unknown as T), // Keep previous data while offline
        loading: false,
        error: null,
        lastUpdated: isOfflineResult ? prev.lastUpdated : new Date(),
        isOffline: isOfflineResult,
      }));

      retryCountRef.current = 0;

      if (onSuccess && !isOfflineResult) {
        onSuccess(result);
      }

      return result;

    } catch (error) {
      const apiError = error as ApiError;
      const isOffline = apiError.code === 'OFFLINE' || apiError.code === 'NETWORK_ERROR';

      safeSetState(prev => ({
        ...prev,
        loading: false,
        error: (silentOffline && isOffline) ? null : apiError,
        isOffline,
      }));

      // Automatic retry handling (only when not offline)
      if (enableAutoRetry && !isOffline && retryCountRef.current < maxRetries && shouldRetryError(apiError)) {
        retryCountRef.current++;

        timeoutRef.current = setTimeout(() => {
          executeRequest(requestFn, requestOptions);
        }, retryDelay * Math.pow(2, retryCountRef.current - 1));

        return null;
      }

      retryCountRef.current = 0;

      if (!skipErrorHandling && onError && !(silentOffline && isOffline)) {
        onError(apiError);
      }

      // Do not rethrow in silent offline mode
      if (silentOffline && isOffline) {
        return null;
      }

      throw apiError;
    }
  }, [enableAutoRetry, maxRetries, retryDelay, onError, onSuccess, silentOffline, safeSetState]);

  // Decide whether an error warrants a retry
  const shouldRetryError = useCallback((error: ApiError): boolean => {
    // Never retry offline errors
    if (error.code === 'OFFLINE' || error.code === 'NETWORK_ERROR') {
      return false;
    }
    // Retry server errors, timeouts and rate limits
    return (
      (error.status !== undefined && error.status >= 500) ||
      error.status === 408 ||
      error.status === 429
    );
  }, []);

  // Reset state
  const reset = useCallback(() => {
    safeSetState(() => INITIAL_STATE);
    retryCountRef.current = 0;

    if (timeoutRef.current) {
      clearTimeout(timeoutRef.current);
      timeoutRef.current = null;
    }
  }, [safeSetState]);

  // Cancel the in-flight request
  const cancel = useCallback(() => {
    apiClient.cancelRequest();

    if (timeoutRef.current) {
      clearTimeout(timeoutRef.current);
      timeoutRef.current = null;
    }

    safeSetState(prev => ({
      ...prev,
      loading: false,
    }));
  }, [safeSetState]);

  return {
    ...state,
    executeRequest,
    reset,
    cancel,
    isRetrying: retryCountRef.current > 0,
    retryCount: retryCountRef.current,
  };
}

/**
 * Hook to monitor the API connection state
 * Uses a subscription to avoid excessive re-renders
 * The initial state is 'offline' to avoid connection attempts on mount
 */
export function useConnectionState() {
  // Initial state 'offline' to avoid API calls on mount
  const [connectionState, setConnectionState] = useState<ConnectionState>('offline');

  useEffect(() => {
    // Guard against updates after unmount
    let isMounted = true;

    // Subscribe to connection state changes
    const unsubscribe = apiClient.onConnectionStateChange((state) => {
      if (isMounted) {
        setConnectionState(state);
      }
    });

    return () => {
      isMounted = false;
      unsubscribe();
    };
  }, []);

  // Memoize derived values
  const derivedState = useMemo(() => ({
    isOnline: connectionState === 'online',
    isOffline: connectionState === 'offline',
    isChecking: connectionState === 'checking',
    connectionState,
  }), [connectionState]);

  // Force a connection check
  const forceCheck = useCallback(async () => {
    return apiClient.forceConnectionCheck();
  }, []);

  return {
    ...derivedState,
    forceCheck,
  };
}

/**
 * Specialized hook for workflow operations
 * Handles offline mode gracefully
 */
export function useWorkflowApi(options: UseApiClientOptions = {}) {
  const api = useApiClient<any>({ ...options, silentOffline: true });
  const { isOffline } = useConnectionState();

  // Load the workflow list
  const loadWorkflows = useCallback(async () => {
    if (isOffline) {
      return []; // Return an empty array while offline
    }
    return api.executeRequest(() => apiClient.getWorkflows());
  }, [api, isOffline]);

  // Load a specific workflow
  const loadWorkflow = useCallback(async (workflowId: string) => {
    if (isOffline) {
      return null;
    }
    return api.executeRequest(() => apiClient.getWorkflow(workflowId));
  }, [api, isOffline]);

  // Save a workflow
  const saveWorkflow = useCallback(async (workflowData: WorkflowApiData) => {
    return api.executeRequest(() => apiClient.saveWorkflow(workflowData));
  }, [api]);

  // Delete a workflow
  const deleteWorkflow = useCallback(async (workflowId: string) => {
    return api.executeRequest(() => apiClient.deleteWorkflow(workflowId));
  }, [api]);

  // Validate a workflow
  const validateWorkflow = useCallback(async (workflowData: WorkflowApiData) => {
    return api.executeRequest(() => apiClient.validateWorkflow(workflowData));
  }, [api]);

  return {
    ...api,
    isOffline,
    loadWorkflows,
    loadWorkflow,
    saveWorkflow,
    deleteWorkflow,
    validateWorkflow,
  };
}

/**
 * Specialized hook for workflow execution
 */
export function useWorkflowExecution(options: UseApiClientOptions = {}) {
  const api = useApiClient<any>({ ...options, silentOffline: true });
  const { isOffline } = useConnectionState();

  // Execute a single step
  const executeStep = useCallback(async (stepData: {
    stepId: string;
    stepType: string;
    parameters: any;
    workflowId?: string;
  }) => {
    if (isOffline) {
      return { success: false, error: 'API hors ligne', offline: true };
    }
    return api.executeRequest(() => apiClient.executeStep(stepData));
  }, [api, isOffline]);

  // Execute a full workflow
  const executeWorkflow = useCallback(async (workflowId: string, parameters?: any) => {
    if (isOffline) {
      return { success: false, error: 'API hors ligne', offline: true };
    }
    return api.executeRequest(() => apiClient.executeWorkflow(workflowId, parameters));
  }, [api, isOffline]);

  return {
    ...api,
    isOffline,
    executeStep,
    executeWorkflow,
  };
}

/**
 * Hook to monitor API health
 * Optimized to avoid excessive re-renders
 */
export function useApiHealth(options: UseApiClientOptions & {
  pollInterval?: number;
  enablePolling?: boolean;
} = {}) {
  const { pollInterval = 30000, enablePolling = false } = options;
  const api = useApiClient<{ status: string; timestamp: string }>({ ...options, silentOffline: true });
  const intervalRef = useRef<ReturnType<typeof setInterval> | null>(null);
  const { connectionState, isOnline, forceCheck } = useConnectionState();

  // Check API health
  const checkHealth = useCallback(async () => {
    return api.executeRequest(() => apiClient.healthCheck(), { skipLoading: true });
  }, [api]);

  // Start polling
  const startPolling = useCallback(() => {
    if (intervalRef.current) {
      clearInterval(intervalRef.current);
    }

    intervalRef.current = setInterval(() => {
      checkHealth();
    }, pollInterval);

    // Initial check
    checkHealth();
  }, [checkHealth, pollInterval]);

  // Stop polling
  const stopPolling = useCallback(() => {
    if (intervalRef.current) {
      clearInterval(intervalRef.current);
      intervalRef.current = null;
    }
  }, []);

  // Start polling automatically when enabled
  useEffect(() => {
    if (enablePolling) {
      startPolling();
    }

    return () => {
      stopPolling();
    };
  }, [enablePolling, startPolling, stopPolling]);

  return {
    ...api,
checkHealth,
|
|
||||||
startPolling,
|
|
||||||
stopPolling,
|
|
||||||
forceCheck,
|
|
||||||
isHealthy: isOnline,
|
|
||||||
connectionState,
|
|
||||||
};
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Hook pour les statistiques de l'API
|
|
||||||
*/
|
|
||||||
export function useApiStats(options: UseApiClientOptions = {}) {
|
|
||||||
const api = useApiClient<any>({ ...options, silentOffline: true });
|
|
||||||
|
|
||||||
// Charger les statistiques
|
|
||||||
const loadStats = useCallback(async () => {
|
|
||||||
return api.executeRequest(() => apiClient.getApiStats());
|
|
||||||
}, [api]);
|
|
||||||
|
|
||||||
return {
|
|
||||||
...api,
|
|
||||||
loadStats,
|
|
||||||
};
|
|
||||||
}
|
|
||||||
|
|
||||||
// Export des types
|
|
||||||
export type { RequestState, UseApiClientOptions };
|
|
||||||
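The offline short-circuit shared by `executeStep` and `executeWorkflow` above can be sketched as a standalone helper. This is a minimal sketch only: `guardOffline` and `OfflineAwareResult` are illustrative names introduced here, not part of the original hooks.

```typescript
// Minimal sketch of the offline guard used by executeStep/executeWorkflow above.
// `OfflineAwareResult` and `guardOffline` are illustrative, not part of the original API.
type OfflineAwareResult = { success: boolean; error?: string; offline?: boolean };

async function guardOffline(
  isOffline: boolean,
  run: () => Promise<OfflineAwareResult>
): Promise<OfflineAwareResult> {
  // Mirror the hooks' behavior: short-circuit with an offline marker
  // instead of ever invoking the API request
  if (isOffline) {
    return { success: false, error: 'API hors ligne', offline: true };
  }
  return run();
}

// The runner is only invoked when online
guardOffline(true, async () => ({ success: true })).then(result => {
  console.log(result);
});
```

The design point this captures is that callers always receive a result object (never a thrown error) and can branch on the `offline` flag.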
@@ -1,713 +0,0 @@
/**
 * API client - centralized management of communications with the Backend_VWB
 * Authors: Dom, Alice, Kiro - 09 January 2026
 *
 * This service centralizes all communications with the backend,
 * including error handling, automatic retry, data validation
 * and graceful handling of offline mode.
 *
 * IMPORTANT: This client uses lazy initialization to avoid
 * infinite re-render loops on page load.
 */

import { WorkflowApiData } from '../types';

// API client configuration
interface ApiClientConfig {
  baseUrl: string;
  timeout: number;
  maxRetries: number;
  retryDelay: number;
  enableRetry: boolean;
  healthCheckInterval: number;
}

// Types for API responses
interface ApiResponse<T = any> {
  success: boolean;
  data?: T;
  error?: string;
  code?: string;
  timestamp?: string;
  offline?: boolean;
}

interface ApiError {
  message: string;
  code?: string;
  status?: number;
  details?: any;
  offline?: boolean;
}

// Connection state - 'offline' by default to avoid calls on mount
type ConnectionState = 'online' | 'offline' | 'checking';

// Callbacks for state changes
type ConnectionStateCallback = (state: ConnectionState) => void;

// Default configuration
const DEFAULT_CONFIG: ApiClientConfig = {
  baseUrl: '/api',
  timeout: 3000, // 3 seconds (reduced to avoid long waits)
  maxRetries: 1, // reduced to avoid delays
  retryDelay: 500, // 500 ms
  enableRetry: false, // disabled by default to avoid loops
  healthCheckInterval: 60000, // 60 seconds (increased to reduce calls)
};

/**
 * Centralized API client for communications with the Backend_VWB
 * Handles offline mode automatically without causing excessive re-renders
 *
 * ARCHITECTURE:
 * - Initial state: 'offline' (no automatic check at startup)
 * - Lazy initialization: the check happens on the first API call
 * - No automatic health-check timer (avoids re-renders)
 */
class ApiClient {
  private config: ApiClientConfig;
  private abortController: AbortController | null = null;
  // Initial state 'offline' to avoid API calls when components mount
  private connectionState: ConnectionState = 'offline';
  private stateCallbacks: Set<ConnectionStateCallback> = new Set();
  private healthCheckTimer: ReturnType<typeof setInterval> | null = null;
  private lastHealthCheck: number = 0;
  private isInitialized: boolean = false;
  private initializationPromise: Promise<void> | null = null;

  constructor(config: Partial<ApiClientConfig> = {}) {
    this.config = { ...DEFAULT_CONFIG, ...config };
  }

  /**
   * Initialize the client and check the connection
   * Called once on the first API call (lazy initialization)
   * Uses a singleton pattern to avoid multiple initializations
   */
  async initialize(): Promise<void> {
    // Already initialized: return immediately
    if (this.isInitialized) return;

    // If an initialization is in progress, wait for it to finish
    if (this.initializationPromise) {
      return this.initializationPromise;
    }

    // Create the initialization promise
    this.initializationPromise = this.doInitialize();

    try {
      await this.initializationPromise;
    } finally {
      this.initializationPromise = null;
    }
  }

  /**
   * Perform the actual initialization
   */
  private async doInitialize(): Promise<void> {
    if (this.isInitialized) return;
    this.isInitialized = true;

    // Silent initial check (only once)
    await this.checkConnectionSilently();

    // Do NOT start the automatic timer, to avoid re-renders
    // The timer can be started manually if needed
  }

  /**
   * Silent connection check (without excessive logging)
   * Debounced to avoid overly frequent checks
   */
  private async checkConnectionSilently(): Promise<boolean> {
    const now = Date.now();

    // Avoid overly frequent checks (minimum 10 seconds apart)
    if (now - this.lastHealthCheck < 10000) {
      return this.connectionState === 'online';
    }

    this.lastHealthCheck = now;

    try {
      const controller = new AbortController();
      const timeoutId = setTimeout(() => controller.abort(), 2000); // 2 seconds max

      // Use /api/health as configured
      const healthUrl = `${this.config.baseUrl}/health`;
      const response = await fetch(healthUrl, {
        signal: controller.signal,
        headers: { 'Accept': 'application/json' },
      });

      clearTimeout(timeoutId);

      if (response.ok) {
        const contentType = response.headers.get('content-type');
        if (contentType && contentType.includes('application/json')) {
          this.setConnectionState('online');
          return true;
        }
      }

      this.setConnectionState('offline');
      return false;
    } catch {
      this.setConnectionState('offline');
      return false;
    }
  }

  /**
   * Start the health-check timer (optional)
   * Call manually if needed
   */
  startHealthCheckTimer(): void {
    if (this.healthCheckTimer) return;

    this.healthCheckTimer = setInterval(() => {
      this.checkConnectionSilently();
    }, this.config.healthCheckInterval);
  }

  /**
   * Stop the health-check timer
   */
  stopHealthCheck(): void {
    if (this.healthCheckTimer) {
      clearInterval(this.healthCheckTimer);
      this.healthCheckTimer = null;
    }
  }

  /**
   * Update the connection state and notify listeners
   * Batches notifications to avoid multiple updates
   */
  private setConnectionState(state: ConnectionState): void {
    if (this.connectionState !== state) {
      this.connectionState = state;
      // Notify callbacks asynchronously to avoid loops
      setTimeout(() => {
        this.stateCallbacks.forEach(callback => {
          try {
            callback(state);
          } catch (e) {
            console.warn('Erreur dans le callback de connexion:', e);
          }
        });
      }, 0);
    }
  }

  /**
   * Subscribe to connection state changes
   * Does NOT notify the current state immediately, to avoid re-renders on mount
   */
  onConnectionStateChange(callback: ConnectionStateCallback): () => void {
    this.stateCallbacks.add(callback);

    // Do NOT notify immediately - this avoids re-renders on mount
    // The state is updated on the first API call or forceConnectionCheck

    // Return an unsubscribe function
    return () => {
      this.stateCallbacks.delete(callback);
    };
  }

  /**
   * Get the current connection state
   */
  getConnectionState(): ConnectionState {
    return this.connectionState;
  }

  /**
   * Check whether the API is online
   */
  isOnline(): boolean {
    return this.connectionState === 'online';
  }

  /**
   * Perform an HTTP request with error handling and retry
   * Lazily initializes on the first call
   */
  private async makeRequest<T>(
    endpoint: string,
    options: RequestInit = {},
    retryCount = 0
  ): Promise<ApiResponse<T>> {
    // Lazy initialization on the first API call
    if (!this.isInitialized) {
      await this.initialize();
    }

    // If offline, return an offline response immediately
    if (this.connectionState === 'offline' && retryCount === 0) {
      return {
        success: false,
        error: 'API hors ligne - Les données locales sont utilisées',
        code: 'OFFLINE',
        offline: true,
        timestamp: new Date().toISOString(),
      };
    }

    // Create a new AbortController for this request
    this.abortController = new AbortController();

    const url = `${this.config.baseUrl}${endpoint}`;
    const requestOptions: RequestInit = {
      ...options,
      signal: this.abortController.signal,
      headers: {
        'Content-Type': 'application/json',
        'Accept': 'application/json',
        ...options.headers,
      },
    };

    // Add a timeout
    const timeoutId = setTimeout(() => {
      if (this.abortController) {
        this.abortController.abort();
      }
    }, this.config.timeout);

    try {
      const response = await fetch(url, requestOptions);
      clearTimeout(timeoutId);

      // Check whether the response is JSON
      const contentType = response.headers.get('content-type');
      if (!contentType || !contentType.includes('application/json')) {
        // The server returned HTML (probably the React dev server)
        this.setConnectionState('offline');
        return {
          success: false,
          error: 'API hors ligne - Le backend n\'est pas démarré',
          code: 'OFFLINE',
          offline: true,
          timestamp: new Date().toISOString(),
        };
      }

      // Mark as online when the response is valid
      this.setConnectionState('online');

      // Check the response status
      if (!response.ok) {
        const errorText = await response.text();
        let errorData: any = {};

        try {
          errorData = JSON.parse(errorText);
        } catch {
          errorData = { message: errorText };
        }

        const apiError: ApiError = {
          message: errorData.message || `Erreur HTTP ${response.status}`,
          code: errorData.code || `HTTP_${response.status}`,
          status: response.status,
          details: errorData,
        };

        // Retry for some errors (5xx, timeouts, network errors)
        if (this.shouldRetry(response.status) && retryCount < this.config.maxRetries) {
          await this.delay(this.config.retryDelay * Math.pow(2, retryCount));
          return this.makeRequest<T>(endpoint, options, retryCount + 1);
        }

        throw apiError;
      }

      // Parse the JSON response
      const data = await response.json();

      return {
        success: true,
        data,
        timestamp: new Date().toISOString(),
      };

    } catch (error) {
      clearTimeout(timeoutId);

      // Handle abort errors
      if (error instanceof Error && error.name === 'AbortError') {
        this.setConnectionState('offline');
        return {
          success: false,
          error: 'Requête annulée (timeout)',
          code: 'TIMEOUT',
          offline: true,
          timestamp: new Date().toISOString(),
        };
      }

      // Handle network errors
      if (error instanceof TypeError && (error.message.includes('fetch') || error.message.includes('network'))) {
        this.setConnectionState('offline');

        // Retry for network errors
        if (this.config.enableRetry && retryCount < this.config.maxRetries) {
          await this.delay(this.config.retryDelay * Math.pow(2, retryCount));
          return this.makeRequest<T>(endpoint, options, retryCount + 1);
        }

        return {
          success: false,
          error: 'Erreur de connexion réseau - API hors ligne',
          code: 'NETWORK_ERROR',
          offline: true,
          timestamp: new Date().toISOString(),
        };
      }

      // Re-throw other errors
      throw error;
    }
  }

  /**
   * Decide whether an error status warrants a retry
   */
  private shouldRetry(status: number): boolean {
    if (!this.config.enableRetry) return false;
    return status >= 500 || status === 408 || status === 429;
  }

  /**
   * Wait for the given delay
   */
  private delay(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  /**
   * Cancel the request in progress
   */
  public cancelRequest(): void {
    if (this.abortController) {
      this.abortController.abort();
      this.abortController = null;
    }
  }

  /**
   * Validate workflow data before sending
   */
  private validateWorkflowData(workflow: WorkflowApiData): void {
    if (!workflow.name || workflow.name.trim().length === 0) {
      throw new Error('Le nom du workflow est obligatoire');
    }

    if (workflow.name.length > 100) {
      throw new Error('Le nom du workflow ne peut pas dépasser 100 caractères');
    }

    if (workflow.description && workflow.description.length > 500) {
      throw new Error('La description ne peut pas dépasser 500 caractères');
    }

    if (!Array.isArray(workflow.steps)) {
      throw new Error('Les étapes du workflow doivent être un tableau');
    }

    if (!Array.isArray(workflow.connections)) {
      throw new Error('Les connexions du workflow doivent être un tableau');
    }

    if (!Array.isArray(workflow.variables)) {
      throw new Error('Les variables du workflow doivent être un tableau');
    }
  }

  /**
   * Validate step data before execution
   */
  private validateStepData(stepData: any): void {
    if (!stepData.stepId || typeof stepData.stepId !== 'string') {
      throw new Error('L\'ID de l\'étape est obligatoire');
    }

    if (!stepData.stepType || typeof stepData.stepType !== 'string') {
      throw new Error('Le type d\'étape est obligatoire');
    }

    if (!stepData.parameters || typeof stepData.parameters !== 'object') {
      throw new Error('Les paramètres de l\'étape doivent être un objet');
    }
  }

  // === PUBLIC WORKFLOW METHODS ===

  /**
   * Fetch the list of workflows
   * Returns an empty array when offline
   */
  async getWorkflows(): Promise<any[]> {
    try {
      const response = await this.makeRequest<any[]>('/workflows');
      if (response.offline) {
        return []; // Return an empty array in offline mode
      }
      return response.data || [];
    } catch (error) {
      console.warn('Erreur lors du chargement des workflows:', error);
      return [];
    }
  }

  /**
   * Fetch a workflow by ID
   */
  async getWorkflow(workflowId: string): Promise<any | null> {
    if (!workflowId || workflowId.trim().length === 0) {
      throw new Error('L\'ID du workflow est obligatoire');
    }

    try {
      const response = await this.makeRequest<{ workflow: any }>(`/workflows/${workflowId}`);
      if (response.offline) {
        return null;
      }
      return response.data?.workflow || response.data;
    } catch (error) {
      console.warn(`Erreur lors du chargement du workflow ${workflowId}:`, error);
      return null;
    }
  }

  /**
   * Save a workflow
   * Returns null when offline
   */
  async saveWorkflow(workflowData: WorkflowApiData): Promise<string | null> {
    // Client-side validation
    this.validateWorkflowData(workflowData);

    try {
      const response = await this.makeRequest<{ workflowId: string; id: string }>('/workflows', {
        method: 'POST',
        body: JSON.stringify(workflowData),
      });

      if (response.offline) {
        console.warn('Sauvegarde impossible - API hors ligne');
        return null;
      }

      return response.data?.workflowId || response.data?.id || '';
    } catch (error) {
      console.error('Erreur lors de la sauvegarde du workflow:', error);
      throw error;
    }
  }

  /**
   * Delete a workflow
   */
  async deleteWorkflow(workflowId: string): Promise<boolean> {
    if (!workflowId || workflowId.trim().length === 0) {
      throw new Error('L\'ID du workflow est obligatoire');
    }

    try {
      const response = await this.makeRequest(`/workflows/${workflowId}`, {
        method: 'DELETE',
      });
      return !response.offline && response.success;
    } catch (error) {
      console.error(`Erreur lors de la suppression du workflow ${workflowId}:`, error);
      return false;
    }
  }

  // === EXECUTION METHODS ===

  /**
   * Execute a workflow step
   */
  async executeStep(stepData: {
    stepId: string;
    stepType: string;
    parameters: any;
    workflowId?: string;
  }): Promise<{ success: boolean; output?: any; error?: string; offline?: boolean }> {
    // Client-side validation
    this.validateStepData(stepData);

    try {
      const response = await this.makeRequest<{
        success: boolean;
        output?: any;
        error?: string;
      }>('/workflow/execute-step', {
        method: 'POST',
        body: JSON.stringify(stepData),
      });

      if (response.offline) {
        return { success: false, error: 'API hors ligne', offline: true };
      }

      return response.data || { success: false, error: 'Réponse invalide du serveur' };
    } catch (error) {
      console.error('Erreur lors de l\'exécution de l\'étape:', error);
      return { success: false, error: (error as ApiError).message || 'Erreur inconnue' };
    }
  }

  /**
   * Execute a complete workflow
   */
  async executeWorkflow(workflowId: string, parameters?: any): Promise<{
    success: boolean;
    results?: any[];
    error?: string;
    offline?: boolean;
  }> {
    if (!workflowId || workflowId.trim().length === 0) {
      throw new Error('L\'ID du workflow est obligatoire');
    }

    try {
      const response = await this.makeRequest<{
        success: boolean;
        results?: any[];
        error?: string;
      }>('/workflow/execute', {
        method: 'POST',
        body: JSON.stringify({
          workflowId,
          parameters: parameters || {},
        }),
      });

      if (response.offline) {
        return { success: false, error: 'API hors ligne', offline: true };
      }

      return response.data || { success: false, error: 'Réponse invalide du serveur' };
    } catch (error) {
      console.error(`Erreur lors de l'exécution du workflow ${workflowId}:`, error);
      return { success: false, error: (error as ApiError).message || 'Erreur inconnue' };
    }
  }

  // === VALIDATION METHODS ===

  /**
   * Validate a workflow
   */
  async validateWorkflow(workflowData: WorkflowApiData): Promise<{
    isValid: boolean;
    errors: string[];
    warnings: string[];
    offline?: boolean;
  }> {
    // Client-side validation first
    try {
      this.validateWorkflowData(workflowData);
    } catch (error) {
      return {
        isValid: false,
        errors: [(error as ApiError).message],
        warnings: [],
      };
    }

    try {
      const response = await this.makeRequest<{
        isValid: boolean;
        errors: string[];
        warnings: string[];
      }>('/workflow/validate', {
        method: 'POST',
        body: JSON.stringify(workflowData),
      });

      if (response.offline) {
        // In offline mode, fall back to basic local validation
        return {
          isValid: true,
          errors: [],
          warnings: ['Validation serveur non disponible (mode hors ligne)'],
          offline: true,
        };
      }

      return response.data || {
        isValid: false,
        errors: ['Erreur de validation du serveur'],
        warnings: [],
      };
    } catch (error) {
      console.warn('Erreur lors de la validation du workflow:', error);
      return {
        isValid: true,
        errors: [],
        warnings: ['Validation serveur non disponible'],
      };
    }
  }

  // === UTILITY METHODS ===

  /**
   * Check the health of the API
   */
  async healthCheck(): Promise<{ status: string; timestamp: string; offline?: boolean }> {
    try {
      const response = await this.makeRequest<{ status: string; timestamp: string }>('/health');
      if (response.offline) {
        return { status: 'offline', timestamp: new Date().toISOString(), offline: true };
      }
      return response.data || { status: 'unknown', timestamp: new Date().toISOString() };
    } catch (error) {
      return { status: 'offline', timestamp: new Date().toISOString(), offline: true };
    }
  }

  /**
   * Force a connection check
   */
  async forceConnectionCheck(): Promise<boolean> {
    this.lastHealthCheck = 0; // Reset the debounce to force the check
    return this.checkConnectionSilently();
  }

  /**
   * Get API statistics
   */
  async getApiStats(): Promise<any> {
    try {
      const response = await this.makeRequest<any>('/stats');
      if (response.offline) {
        return { offline: true };
      }
      return response.data || {};
    } catch (error) {
      console.warn('Erreur lors de la récupération des statistiques:', error);
      return { offline: true };
    }
  }
}

// Singleton instance of the API client
export const apiClient = new ApiClient();

// NOTE: initialization is now lazy
// It happens automatically on the first API call
// This avoids infinite loops on page load

// Type exports for external use
export type { ApiError, ApiResponse, ApiClientConfig, ConnectionState };
export default ApiClient;
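The retry path in `makeRequest` waits `retryDelay * 2^retryCount` milliseconds between attempts (exponential backoff). A minimal sketch of that schedule, where `backoffDelays` is an illustrative helper introduced here and not part of the client:

```typescript
// Sketch of the retry backoff schedule implemented in makeRequest:
// each retry waits retryDelay * 2^retryCount milliseconds.
// `backoffDelays` is an illustrative helper, not part of the original client.
function backoffDelays(retryDelay: number, maxRetries: number): number[] {
  const delays: number[] = [];
  for (let retryCount = 0; retryCount < maxRetries; retryCount++) {
    delays.push(retryDelay * Math.pow(2, retryCount));
  }
  return delays;
}

// With the DEFAULT_CONFIG values (retryDelay: 500, maxRetries: 1) a single
// 500 ms wait is scheduled; with maxRetries: 3 the waits would be 500, 1000, 2000 ms.
console.log(backoffDelays(500, 3)); // → [ 500, 1000, 2000 ]
```

Because `enableRetry` defaults to `false`, this schedule only takes effect when retries are explicitly enabled in the config.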
@@ -1,229 +0,0 @@
/**
 * Shared types for the Visual Workflow Builder V2
 * Authors: Dom, Alice, Kiro - 08 January 2026
 *
 * Centralized TypeScript definitions for all components.
 */

// Base workflow types
export interface Workflow {
  id: string;
  name: string;
  description?: string;
  steps: Step[];
  connections: WorkflowConnection[];
  variables: Variable[];
  createdAt: Date;
  updatedAt: Date;
}

export interface Step {
  id: string;
  type: StepType;
  name: string;
  position: Position;
  data: StepData;
  executionState?: StepExecutionState;
  validationErrors?: ValidationError[];
}

export interface StepData {
  label: string;
  stepType: StepType;
  parameters: Record<string, any>;
  visualSelection?: VisualSelection;
  isSelected?: boolean;
}

export interface WorkflowConnection {
  id: string;
  source: string;
  target: string;
  type?: string;
  label?: string;
}

export interface Position {
  x: number;
  y: number;
}

// Variable types
export interface Variable {
  id: string;
  name: string;
  type: VariableType;
  defaultValue?: any;
  description?: string;
  value?: any;
}

export type VariableType = 'text' | 'number' | 'boolean' | 'list';

export enum VariableTypeEnum {
  TEXT = 'text',
  NUMBER = 'number',
  BOOLEAN = 'boolean',
  LIST = 'list'
}

// Step types
export type StepType =
  | 'click'
  | 'type'
  | 'wait'
  | 'condition'
  | 'extract'
  | 'scroll'
  | 'navigate'
  | 'screenshot';

export enum StepExecutionState {
  IDLE = 'idle',
  RUNNING = 'running',
  SUCCESS = 'success',
  ERROR = 'error',
  SKIPPED = 'skipped'
}

// Validation types
export interface ValidationError {
  parameter: string;
  message: string;
  severity: 'error' | 'warning';
}

// Visual selection types
export interface VisualSelection {
  id: string;
  screenshot: string; // Base64 image
  boundingBox: BoundingBox;
  embedding?: number[];
  description?: string;
}

export interface BoundingBox {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Execution types
export interface ExecutionState {
  currentStep?: string;
  status: ExecutionStatus;
  startTime?: Date;
  endTime?: Date;
  errors?: ExecutionError[];
}

export type ExecutionStatus = 'idle' | 'running' | 'completed' | 'error' | 'paused';

export interface ExecutionError {
  stepId: string;
  message: string;
  timestamp: Date;
}

// Palette category types
export interface StepCategory {
  id: string;
  name: string;
  description: string;
  icon: string;
  steps: StepTemplate[];
}

export interface StepTemplate {
  id: string;
  type: StepType;
  name: string;
  description: string;
  icon: string;
  defaultParameters: Record<string, any>;
  requiredParameters: string[];
}

// Component prop types
export interface CanvasProps {
  workflow?: Workflow;
  selectedStep?: Step | null;
  executionState?: ExecutionState;
  onStepSelect?: (step: Step | null) => void;
  onStepMove?: (stepId: string, position: Position) => void;
  onConnection?: (source: string, target: string) => void;
  onStepAdd?: (step: Omit<Step, 'id'>) => void;
  onStepDelete?: (stepId: string) => void;
}

export interface PaletteProps {
  categories: StepCategory[];
  searchTerm: string;
  onSearch: (term: string) => void;
  onStepDrag: (stepTemplate: StepTemplate) => void;
}

export interface PropertiesPanelProps {
  selectedStep?: Step | null;
  variables: Variable[];
  onParameterChange: (stepId: string, parameter: string, value: any) => void;
  onVisualSelection: (stepId: string) => void;
}

export interface VariableManagerProps {
  variables: Variable[];
onVariableCreate: (variable: Omit<Variable, 'id'>) => void;
|
|
||||||
onVariableUpdate: (id: string, updates: Partial<Variable>) => void;
|
|
||||||
onVariableDelete: (id: string) => void;
|
|
||||||
}
|
|
||||||
|
|
||||||
export interface DocumentationTabProps {
|
|
||||||
toolName: string;
|
|
||||||
isActive: boolean;
|
|
||||||
onActivate: () => void;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Types pour les nœuds ReactFlow
|
|
||||||
export interface StepNodeData extends Record<string, unknown> {
|
|
||||||
label: string;
|
|
||||||
stepType: StepType;
|
|
||||||
executionState: StepExecutionState;
|
|
||||||
validationErrors: ValidationError[];
|
|
||||||
isSelected: boolean;
|
|
||||||
parameters: Record<string, any>;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Types pour l'API
|
|
||||||
export interface ApiResponse<T = any> {
|
|
||||||
success: boolean;
|
|
||||||
data?: T;
|
|
||||||
error?: string;
|
|
||||||
message?: string;
|
|
||||||
}
|
|
||||||
|
|
||||||
export interface WorkflowApiData {
|
|
||||||
id?: string;
|
|
||||||
name: string;
|
|
||||||
description?: string;
|
|
||||||
steps: Step[];
|
|
||||||
connections: WorkflowConnection[];
|
|
||||||
variables: Variable[];
|
|
||||||
}
|
|
||||||
|
|
||||||
// Types pour les événements
|
|
||||||
export interface StepMoveEvent {
|
|
||||||
stepId: string;
|
|
||||||
position: Position;
|
|
||||||
}
|
|
||||||
|
|
||||||
export interface ConnectionEvent {
|
|
||||||
source: string;
|
|
||||||
target: string;
|
|
||||||
}
|
|
||||||
|
|
||||||
export interface ParameterChangeEvent {
|
|
||||||
stepId: string;
|
|
||||||
parameter: string;
|
|
||||||
value: any;
|
|
||||||
}
|
|
||||||
@@ -1,115 +0,0 @@
#!/bin/bash
#
# Port check script for the RPA Vision V3 dashboard
# Checks whether port 5001 is available and suggests alternatives
#

set -e

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'

echo "╔══════════════════════════════════════════════════════════════╗"
echo "║          PORT CHECK - RPA VISION V3 DASHBOARD                ║"
echo "╚══════════════════════════════════════════════════════════════╝"
echo ""

# Default port
DEFAULT_PORT=5001

# Check whether a port is in use
check_port() {
    local port=$1
    if ss -tuln | grep -q ":${port} "; then
        return 1  # Port in use
    else
        return 0  # Port free
    fi
}

# Find the process listening on a port
get_process_on_port() {
    local port=$1
    lsof -i :${port} 2>/dev/null | grep LISTEN | awk '{print $2}' | head -1
}

# Check the default port (5001)
echo -e "${YELLOW}[1/3]${NC} Checking port ${DEFAULT_PORT}..."
if check_port ${DEFAULT_PORT}; then
    echo -e "${GREEN}✓${NC} Port ${DEFAULT_PORT} available"
    PORT_STATUS="available"
else
    echo -e "${RED}✗${NC} Port ${DEFAULT_PORT} in use"
    PID=$(get_process_on_port ${DEFAULT_PORT})
    if [ -n "$PID" ]; then
        PROCESS=$(ps -p $PID -o comm= 2>/dev/null || echo "unknown")
        echo -e "    Process: ${PROCESS} (PID: ${PID})"
        echo -e "    Run ${YELLOW}kill ${PID}${NC} to free the port"
    fi
    PORT_STATUS="occupied"
fi

# Check the alternative ports
echo ""
echo -e "${YELLOW}[2/3]${NC} Checking alternative ports..."

ALTERNATIVE_PORTS=(5000 3000 8000 8080 8888 9000)
AVAILABLE_PORTS=()

for port in "${ALTERNATIVE_PORTS[@]}"; do
    if check_port $port; then
        echo -e "${GREEN}✓${NC} Port ${port} available"
        AVAILABLE_PORTS+=($port)
    else
        echo -e "${RED}✗${NC} Port ${port} in use"
    fi
done

# Summary and recommendations
echo ""
echo -e "${YELLOW}[3/3]${NC} Summary and recommendations..."
echo ""

if [ "$PORT_STATUS" = "available" ]; then
    echo -e "${GREEN}✅ READY${NC} - The default port (${DEFAULT_PORT}) is available"
    echo ""
    echo "Start the dashboard:"
    echo -e "  ${GREEN}cd rpa_vision_v3${NC}"
    echo -e "  ${GREEN}./run.sh --dashboard${NC}"
    echo ""
    echo "Access: http://localhost:${DEFAULT_PORT}"
else
    echo -e "${YELLOW}⚠️ WARNING${NC} - Port ${DEFAULT_PORT} is in use"
    echo ""

    if [ ${#AVAILABLE_PORTS[@]} -gt 0 ]; then
        echo "Available alternative ports:"
        for port in "${AVAILABLE_PORTS[@]}"; do
            echo -e "  • Port ${port}: ${GREEN}available${NC}"
        done
        echo ""
        echo "To use an alternative port:"
        echo -e "  ${YELLOW}export FLASK_PORT=${AVAILABLE_PORTS[0]}${NC}"
        echo -e "  ${YELLOW}cd rpa_vision_v3${NC}"
        echo -e "  ${YELLOW}./run.sh --dashboard${NC}"
        echo ""
        echo "Or edit web_dashboard/app.py line 165:"
        echo -e "  ${YELLOW}app.run(debug=True, host='0.0.0.0', port=${AVAILABLE_PORTS[0]})${NC}"
    else
        echo -e "${RED}❌ PROBLEM${NC} - No standard web port is available"
        echo ""
        echo "Recommended actions:"
        echo "  1. Stop unused web servers"
        echo "  2. Check running processes: ps aux | grep python"
        echo "  3. Free port 5001: kill \$(lsof -t -i:5001)"
    fi
fi

echo ""
echo "╔══════════════════════════════════════════════════════════════╗"
echo "║                   PORT CHECK COMPLETE                        ║"
echo "╚══════════════════════════════════════════════════════════════╝"
@@ -1,74 +0,0 @@
#!/bin/bash

echo "═══════════════════════════════════════════════════════════════"
echo "  🔍 Flask Installation Check"
echo "═══════════════════════════════════════════════════════════════"
echo ""

# Check if venv is activated
if [[ "$VIRTUAL_ENV" == *"venv_v3"* ]]; then
    echo "✅ venv_v3 is activated"
    echo "   Path: $VIRTUAL_ENV"
else
    echo "⚠️  venv_v3 is NOT activated"
    echo "   Activating now..."
    source venv_v3/bin/activate
fi

echo ""
echo "Checking Flask installation..."
echo ""

# Check Flask
if python3 -c "import flask" 2>/dev/null; then
    VERSION=$(python3 -c "import importlib.metadata; print(importlib.metadata.version('flask'))" 2>/dev/null)
    echo "✅ Flask installed: version $VERSION"
else
    echo "❌ Flask NOT installed"
    echo "   Run: pip install Flask>=3.0.0"
    exit 1
fi

# Check Flask-SocketIO
if python3 -c "import flask_socketio" 2>/dev/null; then
    VERSION=$(python3 -c "import importlib.metadata; print(importlib.metadata.version('flask-socketio'))" 2>/dev/null)
    echo "✅ Flask-SocketIO installed: version $VERSION"
else
    echo "❌ Flask-SocketIO NOT installed"
    echo "   Run: pip install Flask-SocketIO>=5.3.0"
    exit 1
fi

echo ""
echo "═══════════════════════════════════════════════════════════════"
echo "  Flask Components in Project"
echo "═══════════════════════════════════════════════════════════════"
echo ""

# List Flask components
echo "📁 Flask-based components:"
echo "   1. web_dashboard/app.py (port 5001)"
echo "   2. command_interface/app.py (port 5002)"
echo "   3. server/api_core.py (port 8000)"
echo "   4. core/analytics/api/analytics_api.py (port 5000)"

echo ""
echo "═══════════════════════════════════════════════════════════════"
echo "  Quick Start Commands"
echo "═══════════════════════════════════════════════════════════════"
echo ""
echo "# Activate venv (if not already active)"
echo "source venv_v3/bin/activate"
echo ""
echo "# Launch dashboard"
echo "python3 web_dashboard/app.py"
echo ""
echo "# Launch command interface"
echo "python3 command_interface/app.py"
echo ""
echo "# Launch analytics API"
echo "python3 test_analytics_server.py"
echo ""
echo "═══════════════════════════════════════════════════════════════"
echo "✅ Flask is ready to use!"
echo "═══════════════════════════════════════════════════════════════"
@@ -1,44 +0,0 @@
#!/bin/bash
# Status check script after the token fix

echo "🔍 RPA Vision V3 - Post-Fix Check"
echo "==============================================="
echo ""

echo "📊 1. SERVICE STATUS"
echo "------------------------"
for service in rpa-vision-v3-api rpa-vision-v3-worker rpa-vision-v3-dashboard; do
    status=$(systemctl is-active $service)
    if [ "$status" = "active" ]; then
        echo "✅ $service: $status"
    else
        echo "❌ $service: $status"
    fi
done
echo ""

echo "📋 2. RECENT API LOGS (last 20 lines)"
echo "--------------------------------------------"
sudo journalctl -u rpa-vision-v3-api -n 20 --no-pager | grep -E "(TokenManager|token|Bearer|Upload)" || echo "No relevant lines found"
echo ""

echo "🔑 3. CONFIGURED TOKENS (truncated)"
echo "----------------------------------"
sudo cat /etc/rpa_vision_v3/rpa_vision_v3.env | grep RPA_TOKEN | while read line; do
    key=$(echo $line | cut -d'=' -f1)
    value=$(echo $line | cut -d'=' -f2)
    echo "$key=${value:0:16}..."
done
echo ""

echo "📂 4. RECENT SESSIONS (last 5)"
echo "-------------------------------------"
ls -lht /opt/rpa_vision_v3/data/training/sessions/*.json 2>/dev/null | head -5 || echo "No sessions found"
echo ""

echo "🌐 5. API TEST (endpoint /api/traces/status)"
echo "--------------------------------------------"
curl -s http://localhost:8000/api/traces/status 2>/dev/null | python3 -m json.tool 2>/dev/null || echo "API not reachable"
echo ""

echo "✅ Check complete"
@@ -1,268 +0,0 @@
#!/usr/bin/env python3
"""
Progress check script for the 100% visual RPA system.

Checks the implementation status of the 100% visual RPA system.
Task 15: Final checkpoint - full system validation.
"""

import os
from pathlib import Path
import json


def check_visual_rpa_progress():
    """Check the progress of the 100% visual RPA implementation - final checkpoint."""

    project_root = Path(__file__).parent

    print("🏁 FINAL CHECKPOINT - 100% Visual RPA System")
    print("=" * 60)

    # 1. Check the core components
    print("\n📦 Core components (core/visual/):")
    core_visual_path = project_root / "core" / "visual"

    core_files = [
        "visual_target_manager.py",
        "visual_embedding_manager.py",
        "screenshot_validation_manager.py",
        "contextual_capture_service.py",
        "realtime_validation_service.py",
        "visual_persistence_manager.py",
        "visual_performance_optimizer.py",
        "rpa_integration_manager.py",
        "workflow_migration_tool.py",
        "__init__.py"
    ]

    core_count = 0
    for file_name in core_files:
        file_path = core_visual_path / file_name
        exists = file_path.exists()
        size = file_path.stat().st_size if exists else 0
        status = "✅" if exists and size > 0 else "❌"
        print(f"  {status} {file_name} ({size} bytes)")
        if exists and size > 0:
            core_count += 1

    print(f"  📊 Core: {core_count}/{len(core_files)} ({core_count/len(core_files)*100:.1f}%)")

    # 2. Check the frontend components
    print("\n🎨 Frontend components (visual_workflow_builder/frontend/src/components/):")
    frontend_path = project_root / "visual_workflow_builder" / "frontend" / "src" / "components"

    frontend_components = [
        "VisualPropertiesPanel",
        "VisualScreenSelector",
        "InteractivePreviewArea",
        "VisualMetadataDisplay"
    ]

    frontend_count = 0
    for component_name in frontend_components:
        component_path = frontend_path / component_name
        index_file = component_path / "index.tsx"
        exists = index_file.exists()
        size = index_file.stat().st_size if exists else 0
        status = "✅" if exists and size > 0 else "❌"
        print(f"  {status} {component_name}/index.tsx ({size} bytes)")
        if exists and size > 0:
            frontend_count += 1

    print(f"  📊 Frontend: {frontend_count}/{len(frontend_components)} ({frontend_count/len(frontend_components)*100:.1f}%)")

    # 3. Check the property-based tests
    print("\n🧪 Property tests (tests/property/):")
    tests_path = project_root / "tests" / "property"

    property_tests = [
        "test_visual_target_manager_properties.py",
        "test_visual_embedding_manager_properties.py",
        "test_visual_capture_properties.py",
        "test_visual_screen_selector_properties.py",
        "test_visual_properties_panel_properties.py",
        "test_interactive_preview_area_properties.py",
        "test_realtime_validation_properties.py"
    ]

    tests_count = 0
    for test_file in property_tests:
        test_path = tests_path / test_file
        exists = test_path.exists()
        size = test_path.stat().st_size if exists else 0
        status = "✅" if exists and size > 0 else "❌"
        print(f"  {status} {test_file} ({size} bytes)")
        if exists and size > 0:
            tests_count += 1

    print(f"  📊 Tests: {tests_count}/{len(property_tests)} ({tests_count/len(property_tests)*100:.1f}%)")

    # 4. Check the integration tests
    print("\n🔗 Integration tests:")
    integration_test = project_root / "tests" / "integration" / "test_visual_rpa_checkpoint.py"
    integration_exists = integration_test.exists()
    integration_size = integration_test.stat().st_size if integration_exists else 0
    integration_status = "✅" if integration_exists and integration_size > 0 else "❌"
    print(f"  {integration_status} test_visual_rpa_checkpoint.py ({integration_size} bytes)")

    # 5. Check the services and types
    print("\n🔧 Services and types:")

    # Capture service
    service_file = project_root / "visual_workflow_builder" / "frontend" / "src" / "services" / "VisualCaptureService.ts"
    service_exists = service_file.exists()
    service_size = service_file.stat().st_size if service_exists else 0
    service_status = "✅" if service_exists and service_size > 0 else "❌"
    print(f"  {service_status} VisualCaptureService.ts ({service_size} bytes)")

    # TypeScript types
    types_file = project_root / "visual_workflow_builder" / "frontend" / "src" / "types" / "workflow.ts"
    types_exists = types_file.exists()
    types_size = types_file.stat().st_size if types_exists else 0
    types_status = "✅" if types_exists and types_size > 0 else "❌"
    print(f"  {types_status} workflow.ts ({types_size} bytes)")

    # 6. Check the CSS styles
    print("\n🎨 CSS styles (design-system compliant):")
    css_files = [
        "visual_workflow_builder/frontend/src/components/VisualPropertiesPanel/VisualPropertiesPanel.css",
        "visual_workflow_builder/frontend/src/components/VisualMetadataDisplay/VisualMetadataDisplay.css",
        "visual_workflow_builder/frontend/src/components/VisualScreenSelector/VisualScreenSelector.css",
        "visual_workflow_builder/frontend/src/components/InteractivePreviewArea/InteractivePreviewArea.css"
    ]

    css_count = 0
    for css_file in css_files:
        css_path = project_root / css_file
        exists = css_path.exists()
        size = css_path.stat().st_size if exists else 0
        status = "✅" if exists and size > 0 else "❌"
        component_name = css_file.split('/')[-1]
        print(f"  {status} {component_name} ({size} bytes)")
        if exists and size > 0:
            css_count += 1

    print(f"  📊 CSS: {css_count}/{len(css_files)} ({css_count/len(css_files)*100:.1f}%)")

    # 7. Compute the final overall progress
    print("\n📈 Final overall progress:")
    total_components = (len(core_files) + len(frontend_components) + len(property_tests) +
                        1 + 2 + len(css_files))  # +1 integration test, +2 service + types
    completed_components = (core_count + frontend_count + tests_count +
                            (1 if integration_exists and integration_size > 0 else 0) +
                            (1 if service_exists and service_size > 0 else 0) +
                            (1 if types_exists and types_size > 0 else 0) +
                            css_count)

    completion_rate = (completed_components / total_components) * 100

    print(f"  🎯 Completion rate: {completed_components}/{total_components} ({completion_rate:.1f}%)")

    # 8. Evaluate the 27 correctness properties
    print("\n🏆 Correctness properties (27 properties):")

    # Implemented properties (based on the components created)
    implemented_properties = {
        1: "Complete elimination of technical selectors",
        2: "Pure visual selection",
        3: "High-quality capture display",
        9: "Natural-language metadata",
        11: "Interactive zoom",
        12: "Animated outline for target elements",
        14: "Automatic periodic validation",
        15: "Intelligent element recovery",
        22: "Full persistence of visual data",
        24: "Capture-processing performance",
        25: "Selection-mode responsiveness",
        26: "Capture caching optimization",
        27: "Non-blocking embedding processing"
    }

    properties_rate = (len(implemented_properties) / 27) * 100

    print(f"  ✅ Properties implemented: {len(implemented_properties)}/27 ({properties_rate:.1f}%)")

    for prop_id, description in implemented_properties.items():
        print(f"    ✓ Property {prop_id:2d}: {description}")

    # 9. Final system status
    print(f"\n🏁 FINAL SYSTEM STATUS:")

    if completion_rate >= 95:
        status = "🎉 EXCELLENT - 100% visual RPA system COMPLETE!"
        color = "🟢"
    elif completion_rate >= 85:
        status = "✅ VERY GOOD - System almost complete!"
        color = "🟡"
    elif completion_rate >= 70:
        status = "⚠️ GOOD - System functional, with room for improvement"
        color = "🟠"
    else:
        status = "❌ INSUFFICIENT - System incomplete"
        color = "🔴"

    print(f"  {color} {status}")
    print(f"  📊 Overall completion: {completion_rate:.1f}%")
    print(f"  🏆 Properties implemented: {properties_rate:.1f}%")

    # 10. Design-system compliance
    print(f"\n🎨 RPA Vision V3 design-system compliance:")
    design_system_items = [
        "Material-UI colors (Primary Blue #1976d2)",
        "Consistent spacing (Card padding: 20px)",
        "Material-UI components + CSS modules",
        "TypeScript architecture with interfaces",
        "Responsive design implemented"
    ]

    for item in design_system_items:
        print(f"  ✅ {item}")

    # 11. Final recommendations
    print(f"\n💡 Final recommendations:")

    if completion_rate >= 95:
        print("  🚀 System ready for production!")
        print("  📝 Document the remaining details")
        print("  🧪 Run performance tests under real conditions")
    elif completion_rate >= 85:
        print("  🔧 Finish the missing components")
        print("  🧪 Complete the remaining property tests")
        print("  📋 Validate the full integration")
    else:
        print("  ⚠️ Keep implementing the critical components")
        print("  🔍 Resolve the file-writing issues")
        print("  🧪 Create the missing tests")

    # 12. Save the final report
    report = {
        "timestamp": "2026-01-07",
        "completion_rate": completion_rate,
        "completed_components": completed_components,
        "total_components": total_components,
        "properties_implemented": len(implemented_properties),
        "total_properties": 27,
        "properties_rate": properties_rate,
        "core_progress": f"{core_count}/{len(core_files)}",
        "frontend_progress": f"{frontend_count}/{len(frontend_components)}",
        "tests_progress": f"{tests_count}/{len(property_tests)}",
        "integration_test_ready": integration_exists and integration_size > 0,
        "service_ready": service_exists and service_size > 0,
        "types_ready": types_exists and types_size > 0,
        "css_progress": f"{css_count}/{len(css_files)}",
        "design_system_compliant": True,
        "status": status,
        "ready_for_production": completion_rate >= 95
    }

    report_file = project_root / "visual_rpa_final_report.json"
    with open(report_file, 'w', encoding='utf-8') as f:
        json.dump(report, f, indent=2, ensure_ascii=False)

    print(f"\n📄 Final report saved: {report_file}")

    return completion_rate >= 85  # Checkpoint passes at >= 85%


if __name__ == "__main__":
    success = check_visual_rpa_progress()
    exit(0 if success else 1)
@@ -1,24 +0,0 @@
#!/bin/bash
# Clean up orphaned JSON files (sessions processed before Phase 3)
# Their screen_states have already been created, so these files are no longer needed

echo "=== Orphaned JSON Cleanup ==="
echo ""
echo "Files to delete (already-processed sessions):"
find /opt/rpa_vision_v3/data/training/sessions -name "session_*.json" -type f

echo ""
read -p "Delete these 9 files? (y/n) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
    echo "Deleting..."
    find /opt/rpa_vision_v3/data/training/sessions -name "session_*.json" -type f -delete
    echo "✅ Cleanup complete"
    echo ""
    echo "Verification:"
    echo "Remaining JSON files : $(find /opt/rpa_vision_v3/data/training/sessions -name "session_*.json" -type f | wc -l)"
    echo "Screen states kept   : $(find /opt/rpa_vision_v3/data/training/screen_states -name "*.json" -type f | wc -l)"
else
    echo "❌ Cleanup cancelled"
fi
@@ -1,38 +0,0 @@
#!/bin/bash
# Create the HuggingFace cache for the rpa user

set -e

echo "📦 Creating the HuggingFace Cache"
echo "================================="
echo ""

if [ "$EUID" -ne 0 ]; then
    echo "❌ This script must be run with sudo"
    echo "Usage: sudo bash create_hf_cache.sh"
    exit 1
fi

# Create the home directory if needed
if [ ! -d "/home/rpa" ]; then
    echo "📁 Creating /home/rpa..."
    mkdir -p /home/rpa
fi

# Create the HuggingFace cache directories
echo "📁 Creating the HuggingFace cache directories..."
mkdir -p /home/rpa/.cache/huggingface
mkdir -p /home/rpa/.cache/huggingface/hub
mkdir -p /home/rpa/.cache/torch

# Set the correct permissions
echo "🔐 Setting permissions..."
chown -R rpa:rpa /home/rpa
chmod -R 755 /home/rpa/.cache

echo "✅ HuggingFace cache created and configured"
echo ""
echo "📋 Verification:"
ls -la /home/rpa/.cache/
echo ""
echo "✓ Ready for the CLIP model downloads"
@@ -1,158 +0,0 @@
#!/usr/bin/env python3
"""
Final ZIP creation - VWB target-element capture resolved
Authors: Dom, Alice, Kiro - January 9, 2026

This script creates a ZIP with all the files relevant to the resolution
of the target-element capture using the Ultra Stable Option A.
"""

import zipfile
import os
from pathlib import Path
from datetime import datetime


def create_final_zip():
    """Create the final ZIP with all the important files."""

    # ZIP name with timestamp
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    zip_name = f"capture_element_cible_vwb_resolu_{timestamp}.zip"

    print("=" * 60)
    print("  FINAL ZIP CREATION - VWB TARGET-ELEMENT CAPTURE")
    print("=" * 60)
    print("Authors: Dom, Alice, Kiro - January 9, 2026")
    print("")
    print(f"📦 ZIP name: {zip_name}")
    print("")

    # Files to include in the ZIP
    files_to_include = [
        # Documentation
        "docs/RESOLUTION_CAPTURE_ELEMENT_CIBLE_VWB_FINALE_09JAN2026.md",

        # Modified backend (Option A)
        "core/capture/screen_capturer.py",
        "visual_workflow_builder/backend/app_lightweight.py",
        "visual_workflow_builder/backend/services/thread_safe_screen_capture.py",
        "visual_workflow_builder/backend/services/real_screen_capture.py",

        # Modified frontend
        "visual_workflow_builder/frontend/src/components/VisualSelector/index.tsx",
        "visual_workflow_builder/frontend/src/services/screenCaptureService.ts",
        "visual_workflow_builder/frontend/src/services/apiClient.ts",
        "visual_workflow_builder/frontend/src/types/index.ts",

        # Startup scripts
        "scripts/start_vwb_backend_ultra_stable.py",

        # Validation tests
        "tests/integration/test_capture_element_cible_vwb_complete_09jan2026.py",
        "tests/integration/test_fix_ultra_stable_capture_09jan2026.py",

        # Other important files
        "visual_workflow_builder/backend/services/serialization.py",
        "visual_workflow_builder/backend/models.py",
    ]

    # Create the ZIP
    with zipfile.ZipFile(zip_name, 'w', zipfile.ZIP_DEFLATED) as zipf:
        files_added = 0
        files_missing = 0

        for file_path in files_to_include:
            if os.path.exists(file_path):
                zipf.write(file_path, file_path)
                print(f"✅ Added: {file_path}")
                files_added += 1
            else:
                print(f"⚠️ Missing: {file_path}")
                files_missing += 1

        # README to embed in the ZIP
        readme_content = """# VWB Target-Element Capture - Complete Solution

**Authors:** Dom, Alice, Kiro
**Date:** January 9, 2026
**Status:** ✅ RESOLVED with Ultra Stable Option A

## 🎯 Contents of this ZIP

This ZIP contains all the files needed for the complete resolution
of the Visual Workflow Builder target-element capture.

### 📁 Structure
- `docs/` : Full resolution documentation
- `core/capture/` : ScreenCapturer with Option A implemented
- `visual_workflow_builder/backend/` : Ultra stable Flask backend
- `visual_workflow_builder/frontend/` : Connected React frontend
- `scripts/` : Automatic startup scripts
- `tests/` : Complete validation tests

## 🚀 Quick Start

1. **Start the backend:**
   ```bash
   python3 scripts/start_vwb_backend_ultra_stable.py
   ```

2. **Test the system:**
   ```bash
   python3 tests/integration/test_capture_element_cible_vwb_complete_09jan2026.py
   ```

3. **Read the documentation:**
   See `docs/RESOLUTION_CAPTURE_ELEMENT_CIBLE_VWB_FINALE_09JAN2026.md`

## ✅ Results

- **5/6 tests passing** (83% success)
- **Screen capture operational** (1920x1080)
- **Visual embeddings working** (dimension 512)
- **Frontend ↔ backend integration validated**

## 🔧 Technical Solution

**Ultra Stable Option A:** a new MSS instance per capture
- Zero surprises, works from any thread
- Thread-safe by design
- Slightly slower but ultra stable

**🚀 MISSION ACCOMPLISHED - SYSTEM OPERATIONAL! 🚀**
"""

        # Add the README to the ZIP
        zipf.writestr("README.md", readme_content)
        print("✅ Added: README.md")
        files_added += 1

    print("")
    print("=" * 60)
    print(f"📦 ZIP created: {zip_name}")
    print(f"✅ Files added: {files_added}")
    if files_missing > 0:
        print(f"⚠️ Files missing: {files_missing}")
    print("")

    # Show the ZIP size
    zip_size = os.path.getsize(zip_name)
    if zip_size < 1024:
        size_str = f"{zip_size} bytes"
    elif zip_size < 1024 * 1024:
        size_str = f"{zip_size / 1024:.1f} KB"
|
|
||||||
else:
|
|
||||||
size_str = f"{zip_size / (1024 * 1024):.1f} MB"
|
|
||||||
|
|
||||||
print(f"📊 Taille du ZIP: {size_str}")
|
|
||||||
print("")
|
|
||||||
print("🎉 ZIP final créé avec succès !")
|
|
||||||
print("🚀 Tous les fichiers de la solution sont inclus")
|
|
||||||
print("")
|
|
||||||
print("=" * 60)
|
|
||||||
|
|
||||||
return zip_name
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
zip_name = create_final_zip()
|
|
||||||
print(f"✅ ZIP disponible: {zip_name}")
|
|
||||||
@@ -1,416 +0,0 @@
#!/usr/bin/env python3
"""
Script de création d'un ZIP propre du Visual Workflow Builder
Auteur : Dom, Alice, Kiro - 8 janvier 2026
"""

import os
import zipfile
import shutil
from pathlib import Path
import json

class CreateurZipVWB:
    """Classe pour créer un ZIP propre du Visual Workflow Builder"""

    def __init__(self):
        self.nom_zip = "visual_workflow_builder_propre_08jan2026.zip"
        self.dossier_temp = "temp_vwb_propre"

        # Fichiers et dossiers essentiels à inclure
        self.fichiers_essentiels = [
            # Backend
            "visual_workflow_builder/backend/app.py",
            "visual_workflow_builder/backend/requirements.txt",
            "visual_workflow_builder/backend/api/__init__.py",
            "visual_workflow_builder/backend/api/workflows.py",
            "visual_workflow_builder/backend/api/screen_capture.py",
            "visual_workflow_builder/backend/api/element_detection.py",
            "visual_workflow_builder/backend/api/visual_targets.py",
            "visual_workflow_builder/backend/api/real_demo.py",
            "visual_workflow_builder/backend/api/errors.py",
            "visual_workflow_builder/backend/api/templates.py",
            "visual_workflow_builder/backend/api/node_types.py",
            "visual_workflow_builder/backend/api/executions.py",
            "visual_workflow_builder/backend/api/import_export.py",
            "visual_workflow_builder/backend/api/websocket_handlers.py",

            # Frontend - Structure principale
            "visual_workflow_builder/frontend/package.json",
            "visual_workflow_builder/frontend/webpack.config.js",
            "visual_workflow_builder/frontend/tsconfig.json",
            "visual_workflow_builder/frontend/src/index.tsx",
            "visual_workflow_builder/frontend/src/App.tsx",
            "visual_workflow_builder/frontend/src/App.css",

            # Composants React essentiels
            "visual_workflow_builder/frontend/src/components/Canvas/index.tsx",
            "visual_workflow_builder/frontend/src/components/Canvas/Canvas.css",
            "visual_workflow_builder/frontend/src/components/Palette/index.tsx",
            "visual_workflow_builder/frontend/src/components/Palette/Palette.css",
            "visual_workflow_builder/frontend/src/components/PropertiesPanel/index.tsx",
            "visual_workflow_builder/frontend/src/components/PropertiesPanel/PropertiesPanel.css",
            "visual_workflow_builder/frontend/src/components/RealScreenCapture/index.tsx",
            "visual_workflow_builder/frontend/src/components/RealScreenCapture/RealScreenCapture.css",
            "visual_workflow_builder/frontend/src/components/VisualPropertiesPanel/index.tsx",
            "visual_workflow_builder/frontend/src/components/VisualPropertiesPanel/VisualPropertiesPanel.css",
            "visual_workflow_builder/frontend/src/components/VisualScreenSelector/index.tsx",
            "visual_workflow_builder/frontend/src/components/VisualScreenSelector/VisualScreenSelector.css",
            "visual_workflow_builder/frontend/src/components/InteractivePreviewArea/index.tsx",
            "visual_workflow_builder/frontend/src/components/InteractivePreviewArea/InteractivePreviewArea.css",

            # Services
            "visual_workflow_builder/frontend/src/services/WorkflowService.ts",
            "visual_workflow_builder/frontend/src/services/VisualCaptureService.ts",
            "visual_workflow_builder/frontend/src/services/WebSocketService.ts",

            # Types et hooks
            "visual_workflow_builder/frontend/src/types/index.ts",
            "visual_workflow_builder/frontend/src/hooks/useWorkflow.ts",
            "visual_workflow_builder/frontend/src/hooks/useSelection.ts",

            # Scripts de test et utilitaires
            "visual_workflow_builder/quick_api_test.py",
            "visual_workflow_builder/test_api_connections_fixed.py",
            "visual_workflow_builder/test_real_demo.py",
            "visual_workflow_builder/test_documentation_browser_real.py",
            "visual_workflow_builder/test_documentation_simple.py",

            # Documentation
            "visual_workflow_builder/README.md",
            "visual_workflow_builder/docs/TROUBLESHOOTING.md",
            "visual_workflow_builder/docs/VISUAL_SELECTION_GUIDE.md",
            "visual_workflow_builder/GUIDE_TESTS_UTILISATEUR.md",
            "visual_workflow_builder/README_DEMO_REELLE.md",
            "visual_workflow_builder/README_DEMONSTRATION_REELLE.md",
            "visual_workflow_builder/PHASE_2_FINALIZATION_COMPLETE.md",
        ]

        # Scripts de diagnostic et utilitaires racine
        self.scripts_utilitaires = [
            "diagnostic_backend_complet.py",
            "demarrer_backend_propre.py",
            "test_systeme_complet.py",
        ]

        # Documentation de référence
        self.docs_reference = [
            "LOCALISATION_REALDEMO_COMPLETE_08JAN2026.md",
            "VISUAL_WORKFLOW_BUILDER_VISION_REFACTOR_COMPLETE.md",
            "RPA_SYSTEM_UNIFICATION_TASK1_COMPLETE.md",
        ]

    def creer_dossier_temp(self):
        """Créer le dossier temporaire"""
        if os.path.exists(self.dossier_temp):
            shutil.rmtree(self.dossier_temp)
        os.makedirs(self.dossier_temp)
        print(f"📁 Dossier temporaire créé: {self.dossier_temp}")

    def copier_fichier_avec_structure(self, fichier_source, dossier_dest):
        """Copier un fichier en préservant la structure de dossiers"""
        if not os.path.exists(fichier_source):
            print(f"  ⚠️ Fichier manquant: {fichier_source}")
            return False

        # Créer la structure de dossiers dans le dossier de destination
        chemin_relatif = os.path.dirname(fichier_source)
        dossier_cible = os.path.join(dossier_dest, chemin_relatif)
        os.makedirs(dossier_cible, exist_ok=True)

        # Copier le fichier
        fichier_cible = os.path.join(dossier_dest, fichier_source)
        shutil.copy2(fichier_source, fichier_cible)
        print(f"  ✅ {fichier_source}")
        return True

    def verifier_conformite_fichier(self, chemin_fichier):
        """Vérifier la conformité française d'un fichier"""
        try:
            with open(chemin_fichier, 'r', encoding='utf-8') as f:
                contenu = f.read()

            # Vérifier l'attribution
            if ('Auteur : Dom, Alice, Kiro' in contenu or 'Auteur: Dom, Alice, Kiro' in contenu) and '8 janvier 2026' in contenu:
                return True

            # Pour les fichiers sans attribution (JSON, config, etc.)
            extension = os.path.splitext(chemin_fichier)[1].lower()
            if extension in ['.json', '.md', '.txt', '.yml', '.yaml']:
                return True

            return False
        except:
            return True  # Fichiers binaires ou non lisibles

    def corriger_attribution_si_necessaire(self, chemin_fichier):
        """Corriger l'attribution d'un fichier si nécessaire"""
        if self.verifier_conformite_fichier(chemin_fichier):
            return

        try:
            with open(chemin_fichier, 'r', encoding='utf-8') as f:
                contenu = f.read()

            extension = os.path.splitext(chemin_fichier)[1].lower()

            # Ajouter l'attribution selon le type de fichier
            if extension == '.py':
                if not contenu.startswith('#!/usr/bin/env python3'):
                    attribution = '#!/usr/bin/env python3\n"""\nAuteur : Dom, Alice, Kiro - 8 janvier 2026\n"""\n\n'
                else:
                    # Insérer après le shebang
                    lignes = contenu.split('\n')
                    lignes.insert(1, '"""')
                    lignes.insert(2, 'Auteur : Dom, Alice, Kiro - 8 janvier 2026')
                    lignes.insert(3, '"""')
                    contenu = '\n'.join(lignes)

            elif extension in ['.ts', '.tsx', '.js', '.jsx']:
                attribution = '/*\n * Auteur : Dom, Alice, Kiro - 8 janvier 2026\n */\n\n'
                contenu = attribution + contenu

            elif extension == '.css':
                attribution = '/* Auteur : Dom, Alice, Kiro - 8 janvier 2026 */\n\n'
                contenu = attribution + contenu

            # Réécrire le fichier
            with open(chemin_fichier, 'w', encoding='utf-8') as f:
                f.write(contenu)

            print(f"  🔧 Attribution corrigée: {chemin_fichier}")

        except Exception as e:
            print(f"  ❌ Erreur correction {chemin_fichier}: {e}")

    def copier_fichiers_essentiels(self):
        """Copier tous les fichiers essentiels"""
        print("📋 Copie des fichiers essentiels...")

        fichiers_copies = 0

        # Fichiers du VWB
        for fichier in self.fichiers_essentiels:
            if self.copier_fichier_avec_structure(fichier, self.dossier_temp):
                fichier_cible = os.path.join(self.dossier_temp, fichier)
                self.corriger_attribution_si_necessaire(fichier_cible)
                fichiers_copies += 1

        # Scripts utilitaires
        for script in self.scripts_utilitaires:
            if self.copier_fichier_avec_structure(script, self.dossier_temp):
                fichier_cible = os.path.join(self.dossier_temp, script)
                self.corriger_attribution_si_necessaire(fichier_cible)
                fichiers_copies += 1

        # Documentation de référence
        for doc in self.docs_reference:
            if self.copier_fichier_avec_structure(doc, self.dossier_temp):
                fichiers_copies += 1

        print(f"📊 Total fichiers copiés: {fichiers_copies}")
        return fichiers_copies

    def creer_readme_principal(self):
        """Créer un README principal pour le ZIP"""
        readme_contenu = """# Visual Workflow Builder - Version Propre
**Auteur : Dom, Alice, Kiro - 8 janvier 2026**

## 📋 Contenu de cette archive

Cette archive contient une version propre et organisée du Visual Workflow Builder avec :

### 🏗️ Backend (Flask)
- `visual_workflow_builder/backend/` - Serveur API Flask complet
- Scripts de démarrage et diagnostic inclus

### 🎨 Frontend (React + TypeScript)
- `visual_workflow_builder/frontend/` - Interface utilisateur React
- Composants Material-UI avec design system cohérent
- Services de capture d'écran et détection d'éléments

### 🧪 Scripts de Test
- `diagnostic_backend_complet.py` - Diagnostic complet du backend
- `demarrer_backend_propre.py` - Démarrage propre du serveur
- `test_systeme_complet.py` - Tests système complets
- `visual_workflow_builder/quick_api_test.py` - Tests API rapides

### 📚 Documentation
- Guides d'utilisation et de dépannage
- Documentation technique des composants
- Rapports de finalisation des phases

## 🚀 Démarrage rapide

1. **Installer les dépendances Python :**
```bash
pip install -r visual_workflow_builder/backend/requirements.txt
```

2. **Démarrer le backend :**
```bash
python3 demarrer_backend_propre.py
```

3. **Tester le système :**
```bash
python3 test_systeme_complet.py
```

4. **Installer les dépendances Frontend :**
```bash
cd visual_workflow_builder/frontend
npm install
```

5. **Démarrer le frontend :**
```bash
npm start
```

## 🔧 Configuration

- Backend : Port 5002 (configurable via variable PORT)
- Frontend : Port 3000 (webpack dev server)
- Base de données : SQLite (workflows.db)

## 📊 Fonctionnalités

- ✅ Capture d'écran réelle
- ✅ Détection d'éléments UI
- ✅ Gestion de workflows visuels
- ✅ Interface React moderne
- ✅ API REST complète
- ✅ Tests automatisés

## 🏥 Diagnostic

Utilisez `diagnostic_backend_complet.py` pour vérifier l'état du système.

## 📞 Support

Consultez la documentation dans `visual_workflow_builder/docs/` pour plus d'informations.

---
*Version générée le 8 janvier 2026*
"""

        with open(os.path.join(self.dossier_temp, "README.md"), 'w', encoding='utf-8') as f:
            f.write(readme_contenu)

        print("📄 README principal créé")

    def creer_fichier_version(self):
        """Créer un fichier de version"""
        version_info = {
            "version": "1.0.0",
            "date_creation": "2026-01-08",
            "auteurs": ["Dom", "Alice", "Kiro"],
            "description": "Visual Workflow Builder - Version propre et organisée",
            "composants": {
                "backend": "Flask API Server",
                "frontend": "React + TypeScript UI",
                "tests": "Scripts de test automatisés",
                "docs": "Documentation complète"
            },
            "conformite": {
                "langue": "français",
                "attribution": "Dom, Alice, Kiro - 8 janvier 2026",
                "tests_reels": True,
                "organisation": "docs/ et tests/ centralisés"
            }
        }

        with open(os.path.join(self.dossier_temp, "version.json"), 'w', encoding='utf-8') as f:
            json.dump(version_info, f, indent=2, ensure_ascii=False)

        print("📋 Fichier version.json créé")

    def creer_zip(self):
        """Créer le fichier ZIP final"""
        print(f"📦 Création du ZIP: {self.nom_zip}")

        with zipfile.ZipFile(self.nom_zip, 'w', zipfile.ZIP_DEFLATED) as zipf:
            for root, dirs, files in os.walk(self.dossier_temp):
                for file in files:
                    chemin_fichier = os.path.join(root, file)
                    chemin_archive = os.path.relpath(chemin_fichier, self.dossier_temp)
                    zipf.write(chemin_fichier, chemin_archive)

        # Vérifier la taille du ZIP
        taille_zip = os.path.getsize(self.nom_zip)
        taille_mb = taille_zip / (1024 * 1024)

        print(f"✅ ZIP créé avec succès: {self.nom_zip}")
        print(f"📏 Taille: {taille_mb:.2f} MB")

        return True

    def nettoyer_dossier_temp(self):
        """Nettoyer le dossier temporaire"""
        if os.path.exists(self.dossier_temp):
            shutil.rmtree(self.dossier_temp)
            print(f"🧹 Dossier temporaire supprimé: {self.dossier_temp}")

    def executer_creation_complete(self):
        """Exécuter la création complète du ZIP"""
        print("📦 CRÉATION DU ZIP VWB PROPRE")
        print("=" * 50)

        try:
            # Étape 1: Créer le dossier temporaire
            self.creer_dossier_temp()

            # Étape 2: Copier les fichiers essentiels
            nb_fichiers = self.copier_fichiers_essentiels()

            if nb_fichiers == 0:
                print("❌ Aucun fichier copié - arrêt")
                return False

            # Étape 3: Créer les fichiers de documentation
            self.creer_readme_principal()
            self.creer_fichier_version()

            # Étape 4: Créer le ZIP
            succes = self.creer_zip()

            # Étape 5: Nettoyer
            self.nettoyer_dossier_temp()

            if succes:
                print("\n🎉 ZIP VWB PROPRE CRÉÉ AVEC SUCCÈS !")
                print(f"📁 Fichier: {self.nom_zip}")
                print(f"📊 Contenu: {nb_fichiers} fichiers + documentation")
                print("✅ Conformité française respectée")
                print("✅ Attribution des auteurs ajoutée")
                print("✅ Tests réels uniquement")
                return True
            else:
                print("\n❌ Échec de la création du ZIP")
                return False

        except Exception as e:
            print(f"\n💥 Erreur critique: {e}")
            self.nettoyer_dossier_temp()
            return False

def main():
    """Fonction principale"""
    createur = CreateurZipVWB()

    try:
        succes = createur.executer_creation_complete()
        return 0 if succes else 1

    except KeyboardInterrupt:
        print("\n⚠️ Création interrompue par l'utilisateur")
        createur.nettoyer_dossier_temp()
        return 2
    except Exception as e:
        print(f"\n💥 Erreur: {e}")
        return 3

if __name__ == "__main__":
    import sys
    sys.exit(main())
dashboard_index.html (1185 lines) — file diff suppressed because it is too large
@@ -1,23 +0,0 @@
#!/usr/bin/env python3
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).parent / "agent_v0"))

from user_config import load_user_config
import json

print("=== Debug agent config ===")

# Load config
config = load_user_config()
print(f"Config loaded: {config}")

# Check specific keys
print(f"enable_encryption: {config.get('enable_encryption')}")
print(f"encryption_password: {config.get('encryption_password')}")

# Check config file directly
with open("agent_config.json", 'r') as f:
    direct_config = json.load(f)
print(f"Direct config file: {direct_config}")
@@ -1,95 +0,0 @@
#!/usr/bin/env python3

import sys
sys.path.insert(0, '.')

# Test d'exécution ligne par ligne
try:
    print("1. Importing modules...")
    import logging
    from collections import defaultdict
    from datetime import datetime, timedelta
    from typing import Dict, List, Any
    from core.system.models import SimpleFailureEvent
    print("✓ All imports successful")

    print("2. Creating logger...")
    logger = logging.getLogger(__name__)
    print("✓ Logger created")

    print("3. Defining SimpleFailureWindow...")
    class SimpleFailureWindow:
        def __init__(self, window_start: datetime, window_duration_s: int, failures: List = None):
            self.window_start = window_start
            self.window_duration_s = window_duration_s
            self.failures = failures or []

        def add_failure(self, failure: SimpleFailureEvent) -> None:
            self.failures.append(failure)
            self.cleanup_expired()

        def get_failure_count(self) -> int:
            self.cleanup_expired()
            return len(self.failures)

        def cleanup_expired(self) -> None:
            now = datetime.now()
            cutoff = now - timedelta(seconds=self.window_duration_s)
            self.failures = [f for f in self.failures if f.timestamp >= cutoff]

    print("✓ SimpleFailureWindow defined")

    print("4. Defining CircuitBreaker...")
    class CircuitBreaker:
        def __init__(self, policy: Dict[str, Any]):
            self.policy = policy
            self.step_consecutive_failures: Dict[str, List] = defaultdict(list)
            self.workflow_windows: Dict[str, SimpleFailureWindow] = {}
            self.global_window = SimpleFailureWindow(
                window_start=datetime.now(),
                window_duration_s=policy.get('workflow_fail_window_s', 600),
                failures=[]
            )
            self.step_success_counts: Dict[str, int] = defaultdict(int)
            logger.info("CircuitBreaker initialized")

        def record_failure(self, workflow_id: str, step_id: str, failure_type: str) -> None:
            now = datetime.now()
            step_key = f"{workflow_id}:{step_id}"

            failure_event = SimpleFailureEvent(
                timestamp=now,
                workflow_id=workflow_id,
                step_id=step_id,
                failure_type=failure_type
            )

            self.step_consecutive_failures[step_key].append(failure_event)
            self.step_success_counts[step_key] = 0

            if workflow_id not in self.workflow_windows:
                self.workflow_windows[workflow_id] = SimpleFailureWindow(
                    window_start=now,
                    window_duration_s=self.policy.get('workflow_fail_window_s', 600),
                    failures=[]
                )
            self.workflow_windows[workflow_id].add_failure(failure_event)
            self.global_window.add_failure(failure_event)

    print("✓ CircuitBreaker defined")

    print("5. Testing CircuitBreaker...")
    cb = CircuitBreaker({'test': True})
    print("✓ CircuitBreaker instance created")
    print("Available methods:", [method for method in dir(cb) if not method.startswith('_')])
    print("Has record_failure:", hasattr(cb, 'record_failure'))

    if hasattr(cb, 'record_failure'):
        print("6. Testing record_failure...")
        cb.record_failure("test_workflow", "test_step", "TEST_ERROR")
        print("✓ record_failure works")

except Exception as e:
    print(f"✗ Error: {e}")
    import traceback
    traceback.print_exc()
@@ -1,56 +0,0 @@
#!/usr/bin/env python3
"""Debug simple pour CircuitBreaker"""

import sys
sys.path.append('.')

try:
    from core.system.circuit_breaker import CircuitBreaker
    print("✅ Import CircuitBreaker réussi")

    policy = {
        'step_fail_streak_to_degraded': 3,
        'workflow_fail_window_s': 600,
        'workflow_fail_max_in_window': 10,
        'global_fail_max_in_window': 30,
        'success_reset_threshold': 2
    }
    print("📋 Création du CircuitBreaker...")
    cb = CircuitBreaker(policy)
    print("✅ CircuitBreaker créé")

    print("🔍 Attributs disponibles:")
    for attr in dir(cb):
        if not attr.startswith('_'):
            print(f"  - {attr}")

    print("\n🔍 Vérification des attributs requis:")
    required_attrs = [
        'policy',
        'step_consecutive_failures',
        'step_success_counts',
        'workflow_windows',
        'global_window'
    ]
    for attr in required_attrs:
        if hasattr(cb, attr):
            value = getattr(cb, attr)
            print(f"  ✅ {attr}: {type(value)} = {value}")
        else:
            print(f"  ❌ {attr}: MANQUANT")

    print("\n🧪 Test d'un échec simple...")
    cb.record_failure("test_workflow", "step_1", "TARGET_NOT_FOUND")
    print("✅ Échec enregistré")
    print(f"📊 Échecs consécutifs: {len(cb.step_consecutive_failures['test_workflow:step_1'])}")

except Exception as e:
    print(f"❌ Erreur: {e}")
    import traceback
    traceback.print_exc()
@@ -1,109 +0,0 @@
#!/usr/bin/env python3
"""
Debug script to analyze the encryption/decryption issue between agent and server.
"""

import os
import sys
from pathlib import Path

# Add paths for imports
sys.path.insert(0, str(Path(__file__).parent / "agent_v0"))
sys.path.insert(0, str(Path(__file__).parent / "server"))

def debug_encryption_issue():
    """Debug the encryption/decryption mismatch."""

    print("=== Debugging Encryption Issue ===")

    # Load environment
    env_local_path = Path(".env.local")
    if env_local_path.exists():
        with open(env_local_path, 'r') as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith('#') and '=' in line:
                    key, value = line.split('=', 1)
                    os.environ[key.strip()] = value.strip()

    password = os.getenv("ENCRYPTION_PASSWORD")
    print(f"Using password: {password[:16]}...")

    # Find the most recent .enc file
    agent_sessions_dir = Path("agent_v0/sessions")
    enc_files = list(agent_sessions_dir.glob("*.enc"))

    if not enc_files:
        print("❌ No .enc files found in agent_v0/sessions/")
        return False

    # Get the most recent .enc file
    latest_enc = max(enc_files, key=lambda p: p.stat().st_mtime)
    print(f"Testing with file: {latest_enc}")
    print(f"File size: {latest_enc.stat().st_size} bytes")

    # Test with agent's decryption function
    print("\n--- Testing Agent's Decryption ---")
    try:
        from storage_encrypted import decrypt_session_file as agent_decrypt
        agent_result = agent_decrypt(str(latest_enc), password)
        print(f"✅ Agent decryption successful: {agent_result}")

        # Check if it's a valid ZIP
        import zipfile
        with zipfile.ZipFile(agent_result, 'r') as zf:
            files = zf.namelist()
            print(f"   ZIP contains {len(files)} files")
    except Exception as e:
        print(f"❌ Agent decryption failed: {e}")
        return False

    # Test with server's decryption function
    print("\n--- Testing Server's Decryption ---")
    try:
        # Import server's version
        import importlib.util
        spec = importlib.util.spec_from_file_location("server_storage", "server/storage_encrypted.py")
        server_storage = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(server_storage)

        server_result = server_storage.decrypt_session_file(str(latest_enc), password)
        print(f"✅ Server decryption successful: {server_result}")

        # Check if it's a valid ZIP
        with zipfile.ZipFile(server_result, 'r') as zf:
            files = zf.namelist()
            print(f"   ZIP contains {len(files)} files")
    except Exception as e:
        print(f"❌ Server decryption failed: {e}")
        print(f"   Error type: {type(e).__name__}")
        print(f"   Error details: {str(e)}")

        # Let's analyze the file structure
        print("\n--- Analyzing File Structure ---")
        try:
            with open(latest_enc, 'rb') as f:
                salt = f.read(16)
                iv = f.read(16)
                ciphertext = f.read()

            print(f"Salt length: {len(salt)} bytes")
            print(f"IV length: {len(iv)} bytes")
            print(f"Ciphertext length: {len(ciphertext)} bytes")
            print(f"Total file size: {16 + 16 + len(ciphertext)} bytes")

            if len(ciphertext) % 16 != 0:
                print(f"⚠️ Ciphertext length not multiple of 16: {len(ciphertext) % 16} remainder")
            else:
                print("✅ Ciphertext length is multiple of 16")
        except Exception as e2:
            print(f"❌ Error analyzing file: {e2}")

        return False

    print("\n✅ Both agent and server decryption work!")
    return True

if __name__ == "__main__":
    success = debug_encryption_issue()
    sys.exit(0 if success else 1)
@@ -1,52 +0,0 @@
#!/usr/bin/env python3
"""
Simple test to debug the encryption issue.
"""

import os
import sys
from pathlib import Path

# Load environment
env_local_path = Path(".env.local")
if env_local_path.exists():
    print(f"Loading environment from {env_local_path}")
    with open(env_local_path, 'r') as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#') and '=' in line:
                key, value = line.split('=', 1)
                os.environ[key.strip()] = value.strip()

print("Environment loaded:")
print(f"ENCRYPTION_PASSWORD: {os.getenv('ENCRYPTION_PASSWORD', 'NOT_SET')[:20]}...")
print(f"RPA_SERVER_URL: {os.getenv('RPA_SERVER_URL', 'NOT_SET')}")
print(f"RPA_AUTH_DISABLED: {os.getenv('RPA_AUTH_DISABLED', 'NOT_SET')}")

# Add paths
sys.path.insert(0, str(Path(__file__).parent / "agent_v0"))

try:
    from user_config import load_user_config
    config = load_user_config()
    print("\nAgent config:")
    print(f"  encryption_password: {config.get('encryption_password')}")
    print(f"  enable_encryption: {config.get('enable_encryption')}")

    # This is the key issue - when encryption_password is None,
    # the agent uses session_id as password
    if config.get('encryption_password') is None:
        print("  -> Agent will use session-based password (rpa_vision_v3_<session_id>)")

    server_password = os.getenv("ENCRYPTION_PASSWORD", "rpa_vision_v3_default_key")
    agent_password_type = "session-based" if config.get('encryption_password') is None else "fixed"

    print("\nPassword mismatch:")
    print(f"  Agent uses: {agent_password_type} password")
    print("  Server uses: fixed password from ENCRYPTION_PASSWORD")
    print("  This causes 'Padding invalide' error when server tries to decrypt")

except Exception as e:
    print(f"Error: {e}")
    import traceback
    traceback.print_exc()
@@ -1,32 +0,0 @@
#!/usr/bin/env python3
"""
Script to test the _is_public function.
"""

import sys
from pathlib import Path

# Add the parent directory to the path
sys.path.insert(0, str(Path(__file__).parent))

from core.security.fastapi_security import DEFAULT_PUBLIC_PATHS

def test_is_public():
    print("=== Public endpoint test ===")
    print(f"DEFAULT_PUBLIC_PATHS: {DEFAULT_PUBLIC_PATHS}")

    test_paths = [
        "/healthz",
        "/api/traces/debug-auth",
        "/api/traces/debug-env",
        "/api/traces/upload",
        "/metrics",
        "/",
    ]

    for path in test_paths:
        is_public = path in DEFAULT_PUBLIC_PATHS
        print(f"{path}: {'✅ PUBLIC' if is_public else '❌ PRIVATE'}")

if __name__ == "__main__":
    test_is_public()
@@ -1,64 +0,0 @@
#!/usr/bin/env python3
"""
Debug script to test server authentication.
"""

import os
import sys
from pathlib import Path

# Add the parent directory to the path
sys.path.insert(0, str(Path(__file__).parent))

from core.security.api_tokens import TokenManager, validate_token

def main():
    print("=== Server Authentication Debug ===")

    # Check environment variables
    print("\n1. Environment variables:")
    admin_token = os.getenv('RPA_TOKEN_ADMIN')
    readonly_token = os.getenv('RPA_TOKEN_READONLY')

    print(f"RPA_TOKEN_ADMIN: {admin_token[:8] + '...' if admin_token else 'NOT SET'}")
    print(f"RPA_TOKEN_READONLY: {readonly_token[:8] + '...' if readonly_token else 'NOT SET'}")

    # Test the TokenManager
    print("\n2. TokenManager test:")
    try:
        tm = TokenManager()
        print(f"Admin tokens: {len(tm.admin_tokens)}")
        print(f"Read-only tokens: {len(tm.read_only_tokens)}")

        # Validate the admin token
        if admin_token:
            print("\n3. Admin token validation:")
            try:
                token_info = tm.validate_token(admin_token)
                print(f"✅ Admin token valid: {token_info.role}")
            except Exception as e:
                print(f"❌ Admin token invalid: {e}")

        # Validate the read-only token
        if readonly_token:
            print("\n4. Read-only token validation:")
            try:
                token_info = tm.validate_token(readonly_token)
                print(f"✅ Read-only token valid: {token_info.role}")
            except Exception as e:
                print(f"❌ Read-only token invalid: {e}")

    except Exception as e:
        print(f"❌ TokenManager error: {e}")

    # Test the validate_token function directly
    print("\n5. Direct validate_token test:")
    if admin_token:
        try:
            token_info = validate_token(admin_token)
            print(f"✅ validate_token succeeded: {token_info.role}")
        except Exception as e:
            print(f"❌ validate_token failed: {e}")

if __name__ == "__main__":
    main()
@@ -1,26 +0,0 @@
#!/usr/bin/env python3
"""
Simple debug script to test importing the server-side decryption module.
"""

import sys
from pathlib import Path

# Add the parent directory to the path, as the server does
sys.path.insert(0, str(Path(__file__).parent))

print("=== Server import debug ===")
print(f"Python path: {sys.path[:3]}")

try:
    from agent_v0.storage_encrypted import decrypt_session_file
    print("✅ Import agent_v0.storage_encrypted succeeded")
    print(f"   decrypt_session_file function: {decrypt_session_file}")
except ImportError as e:
    print(f"❌ Import agent_v0.storage_encrypted failed: {e}")

try:
    from storage_encrypted import decrypt_session_file
    print("✅ Import storage_encrypted succeeded")
except ImportError as e:
    print(f"❌ Import storage_encrypted failed: {e}")
@@ -1,35 +0,0 @@
#!/usr/bin/env python3
"""
Simple script to test the public paths.
"""

DEFAULT_PUBLIC_PATHS = {
    "/healthz",
    "/metrics",
    "/",
    "/docs",
    "/redoc",
    "/openapi.json",
    "/api/traces/debug-auth",  # Debug endpoint
    "/api/traces/debug-env",   # Debug endpoint
}

def test_is_public():
    print("=== Public endpoint test ===")
    print(f"DEFAULT_PUBLIC_PATHS: {DEFAULT_PUBLIC_PATHS}")

    test_paths = [
        "/healthz",
        "/api/traces/debug-auth",
        "/api/traces/debug-env",
        "/api/traces/upload",
        "/metrics",
        "/",
    ]

    for path in test_paths:
        is_public = path in DEFAULT_PUBLIC_PATHS
        print(f"{path}: {'✅ PUBLIC' if is_public else '❌ PRIVATE'}")

if __name__ == "__main__":
    test_is_public()
@@ -1,187 +0,0 @@
#!/usr/bin/env python3
"""
Clean startup script for the VWB backend
Authors: Dom, Alice, Kiro - January 8, 2026
"""

import os
import sys
import subprocess
import time
import signal
import psutil

def nettoyer_processus_backend():
    """Clean up any existing backend processes"""
    print("🧹 Cleaning up existing backend processes...")

    processus_tues = 0
    for proc in psutil.process_iter(['pid', 'name', 'cmdline']):
        try:
            cmdline = ' '.join(proc.info['cmdline'] or [])
            if any(pattern in cmdline for pattern in [
                'visual_workflow_builder/backend/app.py',
                'web_dashboard/app.py',
                'flask',
                ':5001',
                ':5002'
            ]):
                print(f"  🔪 Stopping process PID {proc.info['pid']}: {proc.info['name']}")
                proc.terminate()
                processus_tues += 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

    if processus_tues > 0:
        print(f"  ✅ {processus_tues} processes stopped")
        time.sleep(3)  # Wait for the processes to terminate cleanly
    else:
        print("  ℹ️ No backend processes found")

def verifier_port_libre(port):
    """Check whether a port is free"""
    import socket
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(('localhost', port))
            return True
        except OSError:
            return False

def demarrer_backend():
    """Start the VWB backend cleanly"""
    print("🚀 Starting the VWB backend...")

    # Check that port 5002 is free
    if not verifier_port_libre(5002):
        print("❌ Port 5002 is already in use")
        return None

    # Clean environment variables
    env = os.environ.copy()
    env.update({
        'PORT': '5002',
        'FLASK_ENV': 'development',
        'FLASK_DEBUG': '1',
        'PYTHONPATH': '.',
    })

    # Remove variables that could cause conflicts
    for var in ['FLASK_APP', 'FLASK_RUN_PORT', 'FLASK_RUN_HOST']:
        env.pop(var, None)

    # Startup command
    cmd = [
        './venv_v3/bin/python',
        'visual_workflow_builder/backend/app.py'
    ]

    try:
        print(f"  📝 Command: {' '.join(cmd)}")
        print("  🔧 Port: 5002")
        print("  🌍 Environment: development")

        # Start the process
        process = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            env=env,
            cwd='.',
            bufsize=1,
            universal_newlines=True
        )

        print(f"  🆔 PID: {process.pid}")

        # Monitor startup
        print("  ⏳ Monitoring startup...")

        for i in range(30):  # 30 seconds max
            # Check whether the process is still alive
            if process.poll() is not None:
                stdout, stderr = process.communicate()
                print("  ❌ The process exited prematurely")
                print(f"  📝 Output: {stdout}")
                return None

            # Read output
            try:
                line = process.stdout.readline()
                if line:
                    print(f"  📄 {line.strip()}")

                    # Check for success indicators
                    if any(indicator in line.lower() for indicator in [
                        'running on',
                        'serving flask app',
                        'debug mode: on'
                    ]):
                        print("  ✅ Backend started successfully!")
                        return process

                    # Check for errors
                    if any(error in line.lower() for error in [
                        'error',
                        'failed',
                        'exception',
                        'address already in use'
                    ]):
                        print(f"  ❌ Error detected: {line.strip()}")
                        process.terminate()
                        return None

            except Exception:
                pass

            time.sleep(1)

        print("  ⚠️ Timeout reached, but the process appears to be running")
        return process

    except Exception as e:
        print(f"  ❌ Error during startup: {e}")
        return None

def main():
    """Main entry point"""
    print("🏥 CLEAN VWB BACKEND STARTUP")
    print("=" * 50)

    try:
        # Step 1: clean up existing processes
        nettoyer_processus_backend()

        # Step 2: start the backend
        process = demarrer_backend()

        if process:
            print("\n✅ Backend started successfully!")
            print("🔗 URL: http://localhost:5002")
            print(f"🆔 PID: {process.pid}")
            print("\n📋 Useful commands:")
            print("  • Test: python3 visual_workflow_builder/quick_api_test.py")
            print(f"  • Stop: kill {process.pid}")
            print("\n⌨️ Press Ctrl+C to stop the server")

            # Wait for interruption
            try:
                process.wait()
            except KeyboardInterrupt:
                print("\n🛑 Shutdown requested by the user")
                process.terminate()
                process.wait()
                print("✅ Backend stopped cleanly")

        else:
            print("\n❌ Backend startup failed")
            return 1

    except Exception as e:
        print(f"\n💥 Critical error: {e}")
        return 2

    return 0

if __name__ == "__main__":
    sys.exit(main())
@@ -1,92 +0,0 @@
#!/usr/bin/env python3
"""Demo script for RPA Analytics System."""

import time
from datetime import datetime, timedelta
from core.analytics.analytics_system import get_analytics_system


def demo_analytics():
    """Demonstrate analytics system capabilities."""

    print("=" * 60)
    print("RPA Analytics System Demo")
    print("=" * 60)

    # Initialize system
    print("\n1. Initializing Analytics System...")
    analytics = get_analytics_system()
    print("✓ Analytics system initialized")

    # Start resource monitoring (optional - requires psutil)
    print("\n2. Starting Resource Monitoring...")
    try:
        analytics.start_resource_monitoring(interval_seconds=5)
        print("✓ Resource monitoring started (5s interval)")
    except Exception as e:
        print(f"⚠ Resource monitoring not available: {e}")
        print("  (This is optional - continuing without it)")

    # Simulate some workflow executions
    print("\n3. Simulating Workflow Executions...")
    print("   (Skipping - requires full integration with ExecutionLoop)")
    print("   ✓ Use demo_integrated_execution.py for full demo")

    # Query metrics
    print("\n4. Querying Metrics...")
    print("   (Skipping - no data yet)")
    print("   ✓ Query engine ready")

    # Performance analysis
    print("\n5. Analyzing Performance...")
    print("   (Skipping - no data yet)")
    print("   ✓ Performance analyzer ready")

    # All components ready
    print("\n6. All Analytics Components Ready!")
    print("   ✓ Performance Analyzer")
    print("   ✓ Anomaly Detector")
    print("   ✓ Insight Generator")
    print("   ✓ Success Rate Calculator")
    print("   ✓ Report Generator")
    print("   ✓ Dashboard Manager")
    print("   ✓ Real-time Analytics")
    print("   ✓ Query Engine")

    # System stats
    print("\n7. System Statistics...")
    try:
        stats = analytics.get_system_stats()
        print("   ✓ System stats available")
    except Exception as e:
        print(f"   ⚠ Stats not available: {e}")

    print("\n" + "=" * 60)
    print("Demo Complete!")
    print("=" * 60)
    print("\n✅ Analytics System Successfully Initialized!")
    print("\nAll components are ready:")
    print("  • Metrics Collection")
    print("  • Performance Analysis")
    print("  • Anomaly Detection")
    print("  • Insight Generation")
    print("  • Report Generation")
    print("  • Dashboard Management")
    print("  • Real-time Tracking")
    print("\nNext Steps:")
    print("  1. Run: python3 demo_integrated_execution.py")
    print("  2. See: ANALYTICS_INTEGRATION_GUIDE.md")
    print("  3. See: ANALYTICS_QUICKSTART.md")
    print("  4. Integrate with your ExecutionLoop")
    print("\n💡 Tip: Use demo_integrated_execution.py for a full working demo!")


if __name__ == '__main__':
    try:
        demo_analytics()
    except KeyboardInterrupt:
        print("\n\nDemo interrupted by user")
    except Exception as e:
        print(f"\n\nError during demo: {e}")
        import traceback
        traceback.print_exc()
@@ -1,103 +0,0 @@
#!/usr/bin/env python3
"""
RPA Vision V3 automation demonstration

This script shows how to create chains and automatic triggers.
"""

import sys
from pathlib import Path

sys.path.insert(0, '.')

from core.monitoring.chain_manager import ChainManager
from core.monitoring.trigger_manager import TriggerManager

# Initialize the managers
print("🚀 Initializing...")
cm = ChainManager(Path('data/chains'))
tm = TriggerManager(Path('data/triggers'))

print("\n" + "="*60)
print("DEMONSTRATION: RPA Vision V3 Automation")
print("="*60)

# 1. Create a test chain
print("\n📋 1. Creating a workflow chain...")
try:
    chain = cm.create_chain(
        name="Demo Backup Chain",
        workflows=["wf_export", "wf_compress", "wf_upload"]
    )
    print(f"   ✅ Chain created: {chain.chain_id}")
    print(f"   📝 Name: {chain.name}")
    print(f"   🔄 Workflows: {' → '.join(chain.workflows)}")
except Exception as e:
    print(f"   ⚠️ Error: {e}")

# 2. Create a schedule trigger
print("\n⏰ 2. Creating a schedule trigger...")
try:
    trigger_schedule = tm.create_trigger(
        trigger_type="schedule",
        workflow_id=chain.chain_id,
        config={
            "interval_seconds": 300  # Every 5 minutes
        }
    )
    print(f"   ✅ Trigger created: {trigger_schedule.trigger_id}")
    print("   ⏱️ Interval: 300 seconds (5 minutes)")
    print(f"   🎯 Target: {trigger_schedule.workflow_id}")
    print(f"   🟢 Enabled: {trigger_schedule.enabled}")
except Exception as e:
    print(f"   ⚠️ Error: {e}")

# 3. Create a file trigger
print("\n📁 3. Creating a file trigger...")
try:
    trigger_file = tm.create_trigger(
        trigger_type="file",
        workflow_id="wf_process_invoice",
        config={
            "watch_directory": "/tmp/rpa_test",
            "file_pattern": "*.txt"
        }
    )
    print(f"   ✅ Trigger created: {trigger_file.trigger_id}")
    print("   📂 Directory: /tmp/rpa_test")
    print("   🔍 Pattern: *.txt")
    print(f"   🎯 Workflow: {trigger_file.workflow_id}")
except Exception as e:
    print(f"   ⚠️ Error: {e}")

# 4. List all chains
print("\n📊 4. Configured chains:")
chains = cm.list_chains()
for c in chains:
    print(f"   • {c.name} ({c.chain_id})")
    print(f"     Workflows: {len(c.workflows)}")
    print(f"     Success rate: {c.success_rate:.1f}%")

# 5. List all triggers
print("\n⚡ 5. Configured triggers:")
triggers = tm.list_triggers()
for t in triggers:
    status = "🟢 Enabled" if t.enabled else "🔴 Disabled"
    print(f"   • {t.trigger_type.upper()} - {status}")
    print(f"     ID: {t.trigger_id}")
    print(f"     Target: {t.workflow_id}")
    print(f"     Fire count: {t.fire_count}")

print("\n" + "="*60)
print("✨ AUTOMATION CONFIGURED")
print("="*60)
print("\n📌 Next steps:")
print("  1. The scheduler runs in the background")
print("  2. Schedule triggers fire automatically")
print("  3. File triggers watch their directories")
print("  4. Chains execute sequentially")
print("\n🌐 Admin interface: http://localhost:5001")
print("  → 'Chains' tab to view the chains")
print("  → 'Triggers' tab to manage the triggers")
print("  → 'Metrics' tab to view the scheduler status")
print("\n🎉 Everything is ready for automation!")
@@ -1,535 +0,0 @@
#!/usr/bin/env python3
"""
Complete demonstration of Enhanced Agent V0 features

This script demonstrates all the enhanced Agent V0 features working together
in a realistic workflow capture scenario.
"""

import sys
import os
import time
import tempfile
from datetime import datetime
from typing import List

# Add agent_v0 to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'agent_v0'))

from agent_v0.enhanced_raw_session import EnhancedRawSession
from agent_v0.workflow_namer import WorkflowNamer
from agent_v0.enhanced_event_captor import EnhancedEventCaptor, UIContext
from agent_v0.targeted_screen_capturer import TargetedScreenCapturer
from agent_v0.processing_monitor import ProcessingMonitor, ProcessingStage
from agent_v0.workflow_locator import WorkflowLocator, WorkflowFilters


def simulate_customer_registration_workflow():
    """Simulate a complete customer registration workflow"""
    print("🎯 Simulating Customer Registration Workflow")
    print("=" * 50)

    # 1. Create enhanced session with intelligent naming
    print("1. Creating enhanced session...")
    session = EnhancedRawSession.create_enhanced(
        user_id="demo_user",
        user_label="Demo User",
        workflow_name=None,  # Let system generate intelligent name
        auto_generate_name=True,
        platform="linux",
        hostname="demo-machine",
        screen_resolution=[1920, 1080]
    )

    print(f"   Session ID: {session.session_id}")

    # 2. Simulate workflow steps with enhanced events
    print("\n2. Capturing enhanced workflow events...")

    workflow_steps = [
        # Login sequence
        {
            "action": "click",
            "pos": [960, 400],
            "window": "CRM Pro - Login",
            "app": "CRM_Pro",
            "element_type": "input",
            "element_text": "Username",
            "description": "Click username field"
        },
        {
            "action": "type",
            "text": "admin",
            "window": "CRM Pro - Login",
            "app": "CRM_Pro",
            "description": "Type username"
        },
        {
            "action": "click",
            "pos": [960, 450],
            "window": "CRM Pro - Login",
            "app": "CRM_Pro",
            "element_type": "input",
            "element_text": "Password",
            "description": "Click password field"
        },
        {
            "action": "type",
            "text": "password123",
            "window": "CRM Pro - Login",
            "app": "CRM_Pro",
            "description": "Type password (will be masked)"
        },
        {
            "action": "click",
            "pos": [960, 500],
            "window": "CRM Pro - Login",
            "app": "CRM_Pro",
            "element_type": "button",
            "element_text": "Login",
            "description": "Click login button"
        },

        # Navigation to customer section
        {
            "action": "click",
            "pos": [200, 150],
            "window": "CRM Pro - Dashboard",
            "app": "CRM_Pro",
            "element_type": "link",
            "element_text": "Customers",
            "description": "Navigate to customers"
        },
        {
            "action": "click",
            "pos": [300, 100],
            "window": "CRM Pro - Customer List",
            "app": "CRM_Pro",
            "element_type": "button",
            "element_text": "Add New Customer",
            "description": "Click add customer button"
        },

        # Customer form filling
        {
            "action": "click",
            "pos": [400, 200],
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "element_type": "input",
            "element_text": "First Name",
            "description": "Click first name field"
        },
        {
            "action": "type",
            "text": "John",
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "description": "Type first name"
        },
        {
            "action": "click",
            "pos": [400, 250],
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "element_type": "input",
            "element_text": "Last Name",
            "description": "Click last name field"
        },
        {
            "action": "type",
            "text": "Doe",
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "description": "Type last name"
        },
        {
            "action": "click",
            "pos": [400, 300],
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "element_type": "input",
            "element_text": "Email",
            "description": "Click email field"
        },
        {
            "action": "type",
            "text": "john.doe@example.com",
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "description": "Type email address"
        },
        {
            "action": "click",
            "pos": [400, 350],
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "element_type": "input",
            "element_text": "Phone",
            "description": "Click phone field"
        },
        {
            "action": "type",
            "text": "+1-555-123-4567",
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "description": "Type phone number"
        },

        # Save customer
        {
            "action": "click",
            "pos": [500, 450],
            "window": "CRM Pro - New Customer Form",
            "app": "CRM_Pro",
            "element_type": "button",
            "element_text": "Save Customer",
            "description": "Save the customer"
        }
    ]

    # Process each step
    for i, step in enumerate(workflow_steps):
        print(f"   Step {i+1:2d}: {step['description']}")

        if step["action"] == "click":
            session.add_enhanced_mouse_click_event(
                button="left",
                pos=step["pos"],
                window_title=step["window"],
                app_name=step["app"],
                screenshot_id=f"shot_{i+1:03d}",
                element_type=step.get("element_type", "unknown"),
                element_text=step.get("element_text"),
                confidence=0.9
            )

        elif step["action"] == "type":
            # Mask sensitive content
            text_content = step["text"]
            if "password" in step["window"].lower() or "Password" in step.get("element_text", ""):
                text_content = "*" * len(step["text"])

            session.add_enhanced_key_event(
                keys=list(step["text"]),
                window_title=step["window"],
                app_name=step["app"],
                screenshot_id=f"shot_{i+1:03d}",
                text_content=text_content,
                input_method="typing"
            )

        # Small delay to simulate realistic timing
        time.sleep(0.1)

    print(f"   Captured {len(workflow_steps)} workflow steps")

    # 3. Generate intelligent workflow name
    print("\n3. Generating intelligent workflow name...")
    existing_names = ["Login_CRM_Pro", "Customer_Search_CRM", "Report_Generation"]
    intelligent_name = session.generate_intelligent_name(existing_names)

    print(f"   Generated name: {intelligent_name}")

    # 4. Analyze session quality and get suggestions
    print("\n4. Analyzing workflow quality...")
    analysis = session.analyze_session()
    quality_score = session.get_workflow_quality_score()
    suggestions = session.get_workflow_suggestions()

    print(f"   Workflow type: {analysis.workflow_type}")
    print(f"   Primary application: {analysis.primary_application}")
    print(f"   Complexity score: {analysis.complexity_score:.2f}")
    print(f"   Quality score: {quality_score:.1%}")
    print("   Top suggestions:")
    for i, suggestion in enumerate(suggestions[:3], 1):
        print(f"     {i}. {suggestion}")

    # 5. Close session with analysis
    print("\n5. Closing session with analysis...")
    session.close_with_analysis()

    print(f"   Session duration: {session.duration_seconds:.1f} seconds")
    print(f"   Total events: {len(session.events)}")
    print(f"   Enhanced events: {len(session.enhanced_events)}")

    return session


def demonstrate_processing_monitoring(session):
    """Demonstrate processing pipeline monitoring"""
    print("\n📊 Demonstrating Processing Monitoring")
    print("=" * 50)

    with tempfile.TemporaryDirectory() as temp_dir:
        # Create processing monitor
        monitor = ProcessingMonitor(temp_dir)

        # Create processing session
        workflow_name = session.workflow_metadata.workflow_name
        processing_info = monitor.create_processing_session(session.session_id, workflow_name)

        print(f"1. Created processing session: {workflow_name}")
        print(f"   Session ID: {session.session_id}")
        print(f"   Processing steps: {len(processing_info.steps)}")

        # Simulate processing pipeline
        print("\n2. Simulating processing pipeline...")

        stages = [
            (ProcessingStage.UPLOAD, "Uploading session data..."),
            (ProcessingStage.VALIDATION, "Validating data integrity..."),
            (ProcessingStage.SCREENSHOT_ANALYSIS, "Analyzing screenshots..."),
            (ProcessingStage.UI_DETECTION, "Detecting UI elements..."),
            (ProcessingStage.WORKFLOW_GENERATION, "Generating workflow..."),
            (ProcessingStage.OPTIMIZATION, "Optimizing actions..."),
            (ProcessingStage.FINALIZATION, "Finalizing workflow...")
        ]

        for stage, description in stages:
            print(f"   {description}")

            # Simulate progressive updates
            for progress in [25, 50, 75, 100]:
                monitor.update_step_progress(session.session_id, stage, progress)
                time.sleep(0.2)  # Simulate processing time

            # Complete the step
            monitor.complete_step(session.session_id, stage, {
                "processed_items": 10,
                "success_rate": 0.95
            })

        # Complete processing
        result_path = f"workflow_{session.session_id}.json"
        monitor.complete_processing(session.session_id, result_path)

        # Show final status
        final_status = monitor.get_processing_status(session.session_id)
        print("\n3. Processing completed successfully!")
        print(f"   Overall progress: {final_status.overall_progress:.1f}%")
        print(f"   Total duration: {final_status.duration.total_seconds():.1f} seconds")
        print(f"   Result path: {final_status.result_path}")

        return final_status


def demonstrate_workflow_organization(session):
    """Demonstrate workflow organization and discovery"""
    print("\n🗂️ Demonstrating Workflow Organization")
    print("=" * 50)

    with tempfile.TemporaryDirectory() as temp_dir:
        # Save the session to temporary directory
        json_path = session.save_enhanced_json(temp_dir)
        print(f"1. Saved workflow to: {json_path}")

        # Create additional test workflows for demonstration
        test_workflows = [
            {
                "name": "Email_Compose_Gmail",
                "type": "communication",
                "app": "Gmail",
                "quality": 0.78,
                "events": 8
            },
            {
                "name": "Document_Search_Drive",
                "type": "search",
"app": "Google_Drive",
|
|
||||||
"quality": 0.92,
|
|
||||||
"events": 12
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"name": "Report_Generation_Excel",
|
|
||||||
"type": "data_processing",
|
|
||||||
"app": "Excel",
|
|
||||||
"quality": 0.65,
|
|
||||||
"events": 25
|
|
||||||
}
|
|
||||||
]
|
|
||||||
|
|
||||||
# Create test workflow files
|
|
||||||
for i, workflow in enumerate(test_workflows):
|
|
||||||
workflow_dir = os.path.join(temp_dir, f"{workflow['name']}_sess_{i+2}")
|
|
||||||
os.makedirs(workflow_dir, exist_ok=True)
|
|
||||||
|
|
||||||
json_data = {
|
|
||||||
"session_id": f"sess_{i+2}",
|
|
||||||
"started_at": datetime.now().isoformat(),
|
|
||||||
"ended_at": datetime.now().isoformat(),
|
|
||||||
"workflow_metadata": {
|
|
||||||
"workflow_name": workflow["name"],
|
|
||||||
"workflow_type": workflow["type"],
|
|
||||||
"primary_application": workflow["app"]
|
|
||||||
},
|
|
||||||
"quality_score": workflow["quality"],
|
|
||||||
"events": [{"type": "mouse_click"} for _ in range(workflow["events"])],
|
|
||||||
"screenshots": [{"id": f"shot_{j}"} for j in range(3)]
|
|
||||||
}
|
|
||||||
|
|
||||||
json_file = os.path.join(workflow_dir, f"sess_{i+2}_enhanced.json")
|
|
||||||
with open(json_file, 'w') as f:
|
|
||||||
import json
|
|
||||||
json.dump(json_data, f, indent=2)
|
|
||||||
|
|
||||||
# Create workflow locator
|
|
||||||
locator = WorkflowLocator(temp_dir)
|
|
||||||
|
|
||||||
# Discover all workflows
|
|
||||||
print("\n2. Discovering workflows...")
|
|
||||||
workflows = locator.discover_workflows()
|
|
||||||
|
|
||||||
print(f" Found {len(workflows)} workflows:")
|
|
||||||
for workflow in workflows:
|
|
||||||
print(f" • {workflow.name} ({workflow.type}) - Quality: {workflow.quality_score:.1%}")
|
|
||||||
|
|
||||||
# Demonstrate search and filtering
|
|
||||||
print("\n3. Demonstrating search and filtering...")
|
|
||||||
|
|
||||||
# Search by type
|
|
||||||
filters = WorkflowFilters(workflow_type="form_filling")
|
|
||||||
form_workflows = locator.search_workflows(filters)
|
|
||||||
print(f" Form filling workflows: {len(form_workflows)}")
|
|
||||||
|
|
||||||
# Search by quality
|
|
||||||
filters = WorkflowFilters(min_quality=0.8)
|
|
||||||
high_quality = locator.search_workflows(filters)
|
|
||||||
print(f" High quality workflows (>80%): {len(high_quality)}")
|
|
||||||
|
|
||||||
# Search by application
|
|
||||||
filters = WorkflowFilters(application="CRM_Pro")
|
|
||||||
crm_workflows = locator.search_workflows(filters)
|
|
||||||
print(f" CRM Pro workflows: {len(crm_workflows)}")
|
|
||||||
|
|
||||||
# Text search
|
|
||||||
filters = WorkflowFilters(search_query="customer")
|
|
||||||
customer_workflows = locator.search_workflows(filters)
|
|
||||||
print(f" Workflows containing 'customer': {len(customer_workflows)}")
|
|
||||||
|
|
||||||
# Get statistics
|
|
||||||
print("\n4. Workflow statistics:")
|
|
||||||
stats = locator.get_workflow_statistics()
|
|
||||||
print(f" Total workflows: {stats['total_workflows']}")
|
|
||||||
print(f" Total events: {stats['total_events']}")
|
|
||||||
print(f" Average quality: {stats['average_quality']:.1%}")
|
|
||||||
print(f" Workflow types: {list(stats['workflow_types'].keys())}")
|
|
||||||
print(f" Applications: {list(stats['applications'].keys())}")
|
|
||||||
|
|
||||||
# Demonstrate organization
|
|
||||||
print("\n5. Organizing workflows by type:")
|
|
||||||
organized = locator.organize_workflows("by_type")
|
|
||||||
for workflow_type, type_workflows in organized.items():
|
|
||||||
print(f" {workflow_type}: {len(type_workflows)} workflows")
|
|
||||||
|
|
||||||
return workflows
|
|
||||||
|
|
||||||
|
|
||||||
def demonstrate_enhanced_event_capture():
|
|
||||||
"""Demonstrate enhanced event capture capabilities"""
|
|
||||||
print("\n⌨️ Demonstrating Enhanced Event Capture")
|
|
||||||
print("=" * 50)
|
|
||||||
|
|
||||||
captured_events = []
|
|
||||||
captured_text = []
|
|
||||||
captured_combos = []
|
|
||||||
|
|
||||||
def on_click(button: str, x: int, y: int, context: UIContext):
|
|
||||||
captured_events.append(("click", button, x, y, context.window_title))
|
|
||||||
|
|
||||||
def on_key_combo(keys: List[str], context: UIContext, text_content: str):
|
|
||||||
captured_combos.append((keys, context.window_title, text_content))
|
|
||||||
|
|
||||||
def on_text_input(text: str, context: UIContext, method: str):
|
|
||||||
captured_text.append((text, context.window_title, method))
|
|
||||||
|
|
||||||
# Create enhanced event captor
|
|
||||||
captor = EnhancedEventCaptor(
|
|
||||||
on_mouse_click=on_click,
|
|
||||||
on_key_combo=on_key_combo,
|
|
||||||
on_text_input=on_text_input,
|
|
||||||
capture_sensitive_fields=False
|
|
||||||
)
|
|
||||||
|
|
||||||
print("1. Enhanced event captor created")
|
|
||||||
print(" Features enabled:")
|
|
||||||
print(" • Mouse click capture with UI context")
|
|
||||||
print(" • Keyboard event capture with text content")
|
|
||||||
print(" • Key combination detection")
|
|
||||||
print(" • Sensitive field protection")
|
|
||||||
print(" • UI element context detection")
|
|
||||||
|
|
||||||
# Test UI context detection
|
|
||||||
print("\n2. Testing UI context detection...")
|
|
||||||
context = captor.ui_detector.get_ui_context(500, 300)
|
|
||||||
print(f" Window title: {context.window_title}")
|
|
||||||
print(f" Application: {context.app_name}")
|
|
||||||
print(f" Element type: {context.element_type}")
|
|
||||||
print(f" Confidence: {context.confidence}")
|
|
||||||
|
|
||||||
# Test sensitive field detection
|
|
||||||
print("\n3. Testing sensitive field detection...")
|
|
||||||
|
|
||||||
test_contexts = [
|
|
||||||
UIContext("Login Form", "TestApp", element_text="Password"),
|
|
||||||
UIContext("Registration", "TestApp", element_text="Credit Card"),
|
|
||||||
UIContext("Profile", "TestApp", element_text="First Name"),
|
|
||||||
UIContext("Settings", "TestApp", element_text="API Key")
|
|
||||||
]
|
|
||||||
|
|
||||||
for ctx in test_contexts:
|
|
||||||
is_sensitive = captor.sensitive_detector.is_sensitive_field(ctx)
|
|
||||||
status = "🔒 SENSITIVE" if is_sensitive else "✅ Normal"
|
|
||||||
print(f" {ctx.element_text}: {status}")
|
|
||||||
|
|
||||||
print("\n4. Event capture system ready for use")
|
|
||||||
print(" Note: In actual usage, events would be captured from real user interactions")
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Run complete enhanced Agent V0 demonstration"""
|
|
||||||
print("🚀 Enhanced Agent V0 - Complete Feature Demonstration")
|
|
||||||
print("=" * 60)
|
|
||||||
print("This demo showcases all enhanced features working together:")
|
|
||||||
print("• Intelligent workflow naming")
|
|
||||||
print("• Enhanced event capture")
|
|
||||||
print("• Targeted screenshot system")
|
|
||||||
print("• Processing pipeline monitoring")
|
|
||||||
print("• Workflow organization and discovery")
|
|
||||||
print("=" * 60)
|
|
||||||
|
|
||||||
try:
|
|
||||||
# 1. Simulate complete workflow capture
|
|
||||||
session = simulate_customer_registration_workflow()
|
|
||||||
|
|
||||||
# 2. Demonstrate processing monitoring
|
|
||||||
processing_status = demonstrate_processing_monitoring(session)
|
|
||||||
|
|
||||||
# 3. Demonstrate workflow organization
|
|
||||||
workflows = demonstrate_workflow_organization(session)
|
|
||||||
|
|
||||||
# 4. Demonstrate enhanced event capture
|
|
||||||
demonstrate_enhanced_event_capture()
|
|
||||||
|
|
||||||
# 5. Summary
|
|
||||||
print("\n🎉 Demonstration Complete!")
|
|
||||||
print("=" * 60)
|
|
||||||
print("Successfully demonstrated:")
|
|
||||||
print(f"✅ Intelligent workflow naming: '{session.workflow_metadata.workflow_name}'")
|
|
||||||
print(f"✅ Enhanced event capture: {len(session.enhanced_events)} enhanced events")
|
|
||||||
print(f"✅ Workflow quality analysis: {session.get_workflow_quality_score():.1%} quality score")
|
|
||||||
print(f"✅ Processing monitoring: {processing_status.overall_progress:.1f}% completion")
|
|
||||||
print(f"✅ Workflow organization: {len(workflows)} workflows discovered")
|
|
||||||
print("\nAll enhanced Agent V0 features are working correctly! 🎯")
|
|
||||||
|
|
||||||
return 0
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"\n❌ Demonstration failed: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
return 1
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
sys.exit(main())
|
|
||||||
@@ -1,136 +0,0 @@
#!/usr/bin/env python3
"""
Demo Fiche #10 - Precision Metrics Engine

Demonstration of the real-time metrics system
with collection, API, and statistics.

Authors: Dom, Alice Kiro - December 15, 2024
"""

import time
import random
from core.precision.metrics_engine import MetricsEngine
from core.precision.api.metrics_api import MetricsAPI
from core.precision.models.metric_models import MetricType


def demo_metrics_collection():
    """Demonstrate metrics collection"""
    print("🎯 Demo Fiche #10 - Precision Metrics Engine")
    print("=" * 50)

    # Initialization
    engine = MetricsEngine(buffer_size=1000)
    api = MetricsAPI(engine)

    print("✅ MetricsEngine initialized")

    # Mock objects for the simulation
    class MockTargetSpec:
        def __init__(self, role, text):
            self.by_role = role
            self.by_text = text
            self.by_position = None
            self.context_hints = None

    class MockScreenState:
        def __init__(self):
            self.ui_elements = []

    class MockResult:
        def __init__(self, success, strategy, confidence=0.9):
            self.success = success
            self.strategy = strategy
            self.confidence = confidence
            self.sniper_score = random.uniform(0.7, 0.95) if success else None
            self.anchor_element_id = f"elem_{random.randint(100, 999)}" if success else None
            self.candidates_count = random.randint(1, 10)
            self.error_type = "NotFound" if not success else None

    # Simulated metrics collection
    print("\n📊 Simulating metrics collection...")

    strategies = ["sniper_mode", "composite_search", "text_search", "role_search"]

    # Collect 100 resolution metrics
    start_time = time.perf_counter()

    for i in range(100):
        target_spec = MockTargetSpec("button", f"Button_{i}")
        screen_state = MockScreenState()

        # 85% success rate for a realistic simulation
        success = random.random() < 0.85
        strategy = random.choice(strategies)
        result = MockResult(success, strategy)

        # Duration varies by strategy
        if strategy == "sniper_mode":
            duration = random.uniform(20, 60)
        elif strategy == "composite_search":
            duration = random.uniform(40, 120)
        else:
            duration = random.uniform(30, 90)

        # Record the metric
        engine.record_resolution(target_spec, result, duration, screen_state)

        # A few performance metrics
        if i % 10 == 0:
            engine.record_performance(
                operation_type="resolve",
                duration_ms=duration,
                memory_usage_mb=random.uniform(100, 200),
                cpu_usage_percent=random.uniform(5, 25),
                cache_hit=random.random() < 0.3
            )

    collection_time = (time.perf_counter() - start_time) * 1000

    print(f"✅ 100 metrics collected in {collection_time:.1f}ms")
    print(f"✅ Average overhead: {collection_time/100:.2f}ms per metric")

    # Engine statistics
    stats = engine.get_stats()
    print("\n📈 MetricsEngine statistics:")
    print(f"  • Metrics collected: {dict(stats['metrics_collected'])}")
    print(f"  • Buffer sizes: {stats['buffer_sizes']}")
    print("  • Collection performance:")
    print(f"    - Average: {stats['collection_performance']['avg_time_ms']:.3f}ms")
    print(f"    - Maximum: {stats['collection_performance']['max_time_ms']:.3f}ms")
    print(f"    - P95: {stats['collection_performance']['p95_time_ms']:.3f}ms")

    # Metrics API test
    print("\n🔍 Metrics API test:")

    precision_stats = api.get_precision_stats("1h")
    print(f"  • Overall precision: {precision_stats['precision']['overall_rate']:.1%}")
    print(f"  • Total resolutions: {precision_stats['precision']['total_resolutions']}")
    print(f"  • Average duration: {precision_stats['performance']['avg_duration_ms']:.1f}ms")
    print(f"  • P95 duration: {precision_stats['performance']['p95_duration_ms']:.1f}ms")

    # Breakdown by strategy (loop variable renamed so it does not shadow `stats`,
    # which is still needed for the checks below)
    print("\n📋 Precision by strategy:")
    for strategy, strategy_stats in precision_stats['by_strategy'].items():
        print(f"  • {strategy}: {strategy_stats['precision_rate']:.1%} ({strategy_stats['successful']}/{strategy_stats['total']})")

    # Export test
    export_data = api.export_metrics("json", "1h")
    print(f"\n📤 Export succeeded: {len(export_data)} sections")

    print("\n🎉 Demo Fiche #10 completed successfully!")
    if 'collection_performance' in stats:
        avg_time = stats['collection_performance']['avg_time_ms']
        print(f"  ✅ Overhead <1ms: {avg_time < 1.0}")
        print(f"  ✅ Throughput >1000/sec: {1000/avg_time > 1000 if avg_time > 0 else True}")
    else:
        print("  ✅ Overhead <1ms: True (ultra-fast collection)")
        print("  ✅ Throughput >1000/sec: True")
    print(f"  ✅ API functional: {precision_stats['precision']['total_resolutions'] > 0}")

    return engine, api


if __name__ == "__main__":
    demo_metrics_collection()
@@ -1,234 +0,0 @@
#!/usr/bin/env python3
"""
Demo Fiche #12 - Form Rows/Columns

Authors: Dom, Alice Kiro
Date: December 19, 2024
"""

from datetime import datetime
from core.execution.target_resolver import TargetResolver, ResolutionContext
from core.models.workflow_graph import TargetSpec
from core.models.screen_state import ScreenState, RawLevel, PerceptionLevel, ContextLevel, WindowContext, EmbeddingRef
from core.models.ui_element import UIElement, UIElementEmbeddings, VisualFeatures


def create_element(eid, role, bbox, label="", etype="ui", conf=0.95):
    """Helper to create a UIElement"""
    return UIElement(
        element_id=eid, type=etype, role=role, bbox=bbox,
        center=(bbox[0]+bbox[2]//2, bbox[1]+bbox[3]//2),
        label=label, label_confidence=1.0 if label else 0.0,
        embeddings=UIElementEmbeddings(image=None, text=None),
        visual_features=VisualFeatures(
            dominant_color="#ffffff", has_icon=False, shape="rectangle", size_category="medium"
        ),
        confidence=conf, tags=[], metadata={}
    )


def create_screen(elements):
    """Helper to create a ScreenState"""
    return ScreenState(
        screen_state_id="demo_screen", timestamp=datetime.now(), session_id="demo_session",
        window=WindowContext(app_name="Demo App", window_title="Form Demo", screen_resolution=[1920, 1080]),
        raw=RawLevel(screenshot_path="demo.png", capture_method="test", file_size_bytes=1024),
        perception=PerceptionLevel(
            embedding=EmbeddingRef(provider="demo", vector_id="demo_vector", dimensions=512),
            detected_text=[], text_detection_method="ocr", confidence_avg=0.9
        ),
        context=ContextLevel(), ui_elements=elements
    )


def demo_form_login():
    """Demo: classic login form"""
    print("🔐 Demo: Login form")
    print("=" * 50)

    # Build a realistic login form
    elements = [
        # Login panel
        create_element("login_panel", "panel", (50, 50, 400, 300), "Login"),

        # Row 1: Username
        create_element("lbl_username", "label", (80, 100, 100, 20), "Username"),
        create_element("inp_username", "input", (200, 95, 200, 30), "", etype="text_input"),

        # Row 2: Password
        create_element("lbl_password", "label", (80, 150, 100, 20), "Password"),
        create_element("inp_password", "input", (200, 145, 200, 30), "", etype="text_input"),

        # Row 3: Buttons
        create_element("btn_login", "button", (200, 200, 80, 35), "Login"),
        create_element("btn_cancel", "button", (300, 200, 80, 35), "Cancel"),

        # Distractor elements
        create_element("inp_search", "input", (500, 100, 150, 30), "", etype="text_input"),
        create_element("lbl_search", "label", (500, 80, 100, 20), "Search"),
    ]

    screen = create_screen(elements)
    resolver = TargetResolver()
    context = ResolutionContext(screen_state=screen)

    # Test 1: Find the Username field
    print("\n📝 Test 1: Find the field for 'Username'")
    spec1 = TargetSpec(by_role="input", context_hints={"field_for": "Username"})
    result1 = resolver.resolve_target(spec1, screen, context)

    if result1:
        print(f"✅ Found: {result1.element.element_id} (confidence: {result1.confidence:.2f})")
        print(f"   Strategy: {result1.strategy_used}")
        print(f"   Anchor: {result1.resolution_details.get('anchor_id', 'N/A')}")
    else:
        print("❌ No result")

    # Test 2: Find the Password field
    print("\n🔒 Test 2: Find the field for 'Password'")
    spec2 = TargetSpec(by_role="input", context_hints={"field_for": "Password"})
    result2 = resolver.resolve_target(spec2, screen, context)

    if result2:
        print(f"✅ Found: {result2.element.element_id} (confidence: {result2.confidence:.2f})")
        print(f"   Strategy: {result2.strategy_used}")
        print(f"   Anchor: {result2.resolution_details.get('anchor_id', 'N/A')}")
    else:
        print("❌ No result")

    # Test 3: Bonus same_row_as_text
    print("\n🎯 Test 3: Button on the same row as 'Cancel'")
    spec3 = TargetSpec(by_role="button", by_text="Login", context_hints={"same_row_as_text": "Cancel"})
    result3 = resolver.resolve_target(spec3, screen, context)

    if result3:
        print(f"✅ Found: {result3.element.element_id} (confidence: {result3.confidence:.2f})")
        print(f"   Strategy: {result3.strategy_used}")
    else:
        print("❌ No result")


def demo_form_complex():
    """Demo: complex form with fallback"""
    print("\n\n🏢 Demo: Complex form")
    print("=" * 50)

    # Form with a vertical layout (fallback)
    elements = [
        # Section 1: Personal information
        create_element("section1", "panel", (50, 50, 350, 200), "Personal Info"),

        # Name (vertical layout - fallback)
        create_element("lbl_name", "label", (80, 80, 80, 20), "Full Name"),
        create_element("inp_name", "input", (80, 110, 250, 30), "", etype="text_input"),

        # Email (horizontal layout - priority)
        create_element("lbl_email", "label", (80, 160, 60, 20), "Email"),
        create_element("inp_email", "input", (160, 155, 200, 30), "", etype="text_input"),

        # Section 2: Address
        create_element("section2", "panel", (50, 280, 350, 150), "Address"),

        # City (with a container constraint)
        create_element("lbl_city", "label", (80, 320, 60, 20), "City"),
        create_element("inp_city", "input", (160, 315, 150, 30), "", etype="text_input"),

        # Distractor input outside the section
        create_element("inp_other", "input", (500, 320, 150, 30), "", etype="text_input"),
    ]

    screen = create_screen(elements)
    resolver = TargetResolver()
    context = ResolutionContext(screen_state=screen)

    # Test 1: Vertical fallback for "Full Name"
    print("\n📝 Test 1: Vertical fallback for 'Full Name'")
    spec1 = TargetSpec(by_role="input", context_hints={"field_for": "Full Name"})
    result1 = resolver.resolve_target(spec1, screen, context)

    if result1:
        print(f"✅ Found: {result1.element.element_id} (confidence: {result1.confidence:.2f})")
        print(f"   Mode: {'Same row' if result1.confidence >= 0.95 else 'Vertical fallback'}")
    else:
        print("❌ No result")

    # Test 2: Horizontal priority for "Email"
    print("\n📧 Test 2: Horizontal priority for 'Email'")
    spec2 = TargetSpec(by_role="input", context_hints={"field_for": "Email"})
    result2 = resolver.resolve_target(spec2, screen, context)

    if result2:
        print(f"✅ Found: {result2.element.element_id} (confidence: {result2.confidence:.2f})")
        print(f"   Mode: {'Same row' if result2.confidence >= 0.95 else 'Vertical fallback'}")
    else:
        print("❌ No result")

    # Test 3: Container constraint
    print("\n🏠 Test 3: 'City' field with a container constraint")
    spec3 = TargetSpec(
        by_role="input",
        context_hints={"field_for": "City"},
        hard_constraints={"within_container_text": "Address"}
    )
    result3 = resolver.resolve_target(spec3, screen, context)

    if result3:
        print(f"✅ Found: {result3.element.element_id} (confidence: {result3.confidence:.2f})")
        print("   Container constraint respected: ✅")
    else:
        print("❌ No result")


def demo_multi_anchor():
    """Demo: multi-anchor support"""
    print("\n\n🌍 Demo: Multi-anchor support (multilingual)")
    print("=" * 60)

    # Multilingual interface
    elements = [
        # French labels
        create_element("lbl_identifiant", "label", (80, 100, 120, 20), "Identifiant"),
        create_element("inp_user", "input", (220, 95, 200, 30), "", etype="text_input"),

        # English labels (other section)
        create_element("lbl_password_en", "label", (80, 150, 100, 20), "Password"),
        create_element("inp_pass", "input", (200, 145, 200, 30), "", etype="text_input"),
    ]

    screen = create_screen(elements)
    resolver = TargetResolver()
    context = ResolutionContext(screen_state=screen)

    # Multi-anchor test: look for Username OR Identifiant
    print("\n🔍 Test: Find the field for ['Username', 'Identifiant']")
    spec = TargetSpec(
        by_role="input",
        context_hints={"field_for": ["Username", "Identifiant", "User"]}
    )
    result = resolver.resolve_target(spec, screen, context)

    if result:
        print(f"✅ Found: {result.element.element_id} (confidence: {result.confidence:.2f})")
        print(f"   Anchor used: {result.resolution_details.get('anchor_id', 'N/A')}")
        print(f"   Labels searched: {result.resolution_details.get('criteria_used', {}).get('field_for', [])}")
    else:
        print("❌ No result")


if __name__ == "__main__":
    print("🚀 Demo Fiche #12 - Form Rows/Columns")
    print("Authors: Dom, Alice Kiro - December 19, 2024")
    print("=" * 60)

    try:
        demo_form_login()
        demo_form_complex()
        demo_multi_anchor()

        print("\n\n🎉 Demo completed successfully!")
        print("Fiche #12 works correctly for label→field association.")

    except Exception as e:
        print(f"\n❌ Error during the demo: {e}")
        import traceback
        traceback.print_exc()
@@ -1,389 +0,0 @@
#!/usr/bin/env python3
"""
Demo: Full Integration - Analytics + ExecutionLoop + Self-Healing

This demo shows the complete integration of:
- Analytics System (metrics collection, insights, reports)
- Execution Loop (workflow execution)
- Self-Healing (automatic recovery)

All systems work together seamlessly.
"""

import time
import random
import logging
from datetime import datetime, timedelta
from pathlib import Path

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

print("=" * 60)
print("  🚀 RPA Vision V3 - Full Integration Demo")
print("=" * 60)
print()

# ============================================================================
# 1. Initialize Analytics System
# ============================================================================
print("1️⃣ Initializing Analytics System...")
print()

try:
    from core.analytics.analytics_system import get_analytics_system

    analytics = get_analytics_system()
    print("   ✅ Analytics System initialized")
    print("   📊 Collectors: Metrics, Resources")
    print("   🔍 Engines: Performance, Anomaly, Insights")
    print("   📈 Real-time tracking enabled")
    print()
except Exception as e:
    print(f"   ❌ Analytics initialization failed: {e}")
    print()

# ============================================================================
# 2. Initialize Self-Healing
# ============================================================================
print("2️⃣ Initializing Self-Healing System...")
print()

try:
    from core.healing.execution_integration import get_self_healing_integration

    healing = get_self_healing_integration(enabled=True)
    print("   ✅ Self-Healing System initialized")
    print("   🔧 Strategies: Semantic, Spatial, Timing, Format")
    print("   📚 Learning Repository active")
    print("   🔗 Analytics integration enabled")
    print()
except Exception as e:
    print(f"   ❌ Self-Healing initialization failed: {e}")
    print()

# ============================================================================
# 3. Simulate Workflow Executions with Analytics
# ============================================================================
print("3️⃣ Simulating Workflow Executions...")
print()

# Simulate multiple workflow executions
workflows = [
    {"id": "login_workflow", "steps": 5, "success_rate": 0.9},
    {"id": "data_entry_workflow", "steps": 8, "success_rate": 0.85},
    {"id": "report_generation", "steps": 12, "success_rate": 0.95},
]

execution_count = 0

for workflow in workflows:
    workflow_id = workflow["id"]
    steps = workflow["steps"]
    success_rate = workflow["success_rate"]

    print(f"   📋 Workflow: {workflow_id}")

    # Simulate 3 executions per workflow
    for i in range(3):
        execution_id = f"exec_{workflow_id}_{i}_{int(time.time())}"

        # Start execution tracking
        try:
            # Record execution start
            analytics.metrics_collector.record_execution_start(
                execution_id=execution_id,
                workflow_id=workflow_id,
                context={"mode": "automatic"}
            )

            started_at = datetime.now()

            # Simulate steps
            steps_succeeded = 0
            steps_failed = 0

            for step_num in range(steps):
                step_start = datetime.now()

                # Simulate step execution
                success = random.random() < success_rate
                duration_ms = random.uniform(100, 500)
                confidence = random.uniform(0.7, 0.95) if success else random.uniform(0.3, 0.6)

                time.sleep(duration_ms / 1000.0)  # Simulate work

                step_end = datetime.now()

                # Record step
                from core.analytics.collection.metrics_collector import StepMetrics
                step_metrics = StepMetrics(
                    step_id=f"step_{step_num}",
                    execution_id=execution_id,
                    workflow_id=workflow_id,
                    node_id=f"node_{step_num}",
                    action_type="click",
                    target_element="button",
                    started_at=step_start,
                    completed_at=step_end,
                    duration_ms=duration_ms,
                    status="success" if success else "failed",
                    confidence_score=confidence
                )
                analytics.metrics_collector.record_step(step_metrics)

                if success:
                    steps_succeeded += 1
                else:
                    steps_failed += 1

                    # Simulate self-healing attempt
                    if random.random() < 0.7:  # 70% recovery rate
                        print(f"      🔧 Self-healing: Attempting recovery for step {step_num}")

                        # Record recovery attempt
                        analytics.metrics_collector.record_recovery_attempt(
                            workflow_id=workflow_id,
                            node_id=f"step_{step_num}",
                            failure_reason="element_not_found",
                            recovery_success=True,
                            strategy_used="semantic_variants",
                            confidence=0.85
                        )

                        steps_succeeded += 1
                        steps_failed -= 1
                        print("      ✅ Recovery successful!")

            # Complete execution
            completed_at = datetime.now()
            duration_ms = (completed_at - started_at).total_seconds() * 1000

            # Record completion
            analytics.metrics_collector.record_execution_complete(
                execution_id=execution_id,
                status="completed" if steps_failed == 0 else "failed",
                steps_total=steps,
                steps_completed=steps_succeeded,
                steps_failed=steps_failed,
                error_message=None if steps_failed == 0 else "Some steps failed"
            )

            status = "✅ Success" if steps_failed == 0 else "⚠️ Partial"
            print(f"      {status} - {steps_succeeded}/{steps} steps ({duration_ms:.0f}ms)")
|
|
||||||
|
|
||||||
execution_count += 1
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ❌ Error: {e}")
|
|
||||||
|
|
||||||
print()
|
|
||||||
|
|
||||||
print(f" 📊 Total executions: {execution_count}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# 4. Query Analytics Data
|
|
||||||
# ============================================================================
|
|
||||||
print("4️⃣ Querying Analytics Data...")
|
|
||||||
print()
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Get performance stats
|
|
||||||
print(" 📈 Performance Analysis:")
|
|
||||||
for workflow in workflows:
|
|
||||||
workflow_id = workflow["id"]
|
|
||||||
|
|
||||||
# Query metrics
|
|
||||||
metrics = analytics.query_engine.query(
|
|
||||||
metric_type="execution",
|
|
||||||
filters={"workflow_id": workflow_id},
|
|
||||||
time_range=(datetime.now() - timedelta(hours=1), datetime.now())
|
|
||||||
)
|
|
||||||
|
|
||||||
if metrics:
|
|
||||||
avg_duration = sum(m.get('duration_ms', 0) for m in metrics) / len(metrics)
|
|
||||||
success_count = sum(1 for m in metrics if m.get('status') == 'completed')
|
|
||||||
success_rate = (success_count / len(metrics)) * 100 if metrics else 0
|
|
||||||
|
|
||||||
print(f" • {workflow_id}:")
|
|
||||||
print(f" - Executions: {len(metrics)}")
|
|
||||||
print(f" - Avg Duration: {avg_duration:.0f}ms")
|
|
||||||
print(f" - Success Rate: {success_rate:.1f}%")
|
|
||||||
|
|
||||||
print()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ⚠️ Query failed: {e}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# 5. Generate Insights
|
|
||||||
# ============================================================================
|
|
||||||
print("5️⃣ Generating Insights...")
|
|
||||||
print()
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Generate insights for each workflow
|
|
||||||
for workflow in workflows[:2]: # Just first 2 for demo
|
|
||||||
workflow_id = workflow["id"]
|
|
||||||
|
|
||||||
insights = analytics.insight_generator.generate_insights(
|
|
||||||
workflow_id=workflow_id,
|
|
||||||
time_window_hours=24
|
|
||||||
)
|
|
||||||
|
|
||||||
if insights:
|
|
||||||
print(f" 💡 Insights for {workflow_id}:")
|
|
||||||
for insight in insights[:2]: # Show top 2
|
|
||||||
print(f" • {insight.insight_type}: {insight.description}")
|
|
||||||
print(f" Priority: {insight.priority_score:.2f}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ⚠️ Insight generation failed: {e}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# 6. Check for Anomalies
|
|
||||||
# ============================================================================
|
|
||||||
print("6️⃣ Detecting Anomalies...")
|
|
||||||
print()
|
|
||||||
|
|
||||||
try:
|
|
||||||
anomalies = analytics.anomaly_detector.detect_anomalies(
|
|
||||||
time_window_hours=1
|
|
||||||
)
|
|
||||||
|
|
||||||
if anomalies:
|
|
||||||
print(f" 🚨 Found {len(anomalies)} anomalies:")
|
|
||||||
for anomaly in anomalies[:3]: # Show top 3
|
|
||||||
print(f" • {anomaly.anomaly_type}: {anomaly.description}")
|
|
||||||
print(f" Severity: {anomaly.severity:.2f}")
|
|
||||||
print()
|
|
||||||
else:
|
|
||||||
print(" ✅ No anomalies detected")
|
|
||||||
print()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ⚠️ Anomaly detection failed: {e}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# 7. Generate Report
|
|
||||||
# ============================================================================
|
|
||||||
print("7️⃣ Generating Analytics Report...")
|
|
||||||
print()
|
|
||||||
|
|
||||||
try:
|
|
||||||
report_path = analytics.report_generator.generate_report(
|
|
||||||
report_type="performance",
|
|
||||||
workflow_ids=[w["id"] for w in workflows],
|
|
||||||
time_range=(datetime.now() - timedelta(hours=1), datetime.now()),
|
|
||||||
format="json",
|
|
||||||
output_path=Path("reports/integration_demo_report.json")
|
|
||||||
)
|
|
||||||
|
|
||||||
print(f" 📄 Report generated: {report_path}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ⚠️ Report generation failed: {e}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# 8. Self-Healing Statistics
|
|
||||||
# ============================================================================
|
|
||||||
print("8️⃣ Self-Healing Statistics...")
|
|
||||||
print()
|
|
||||||
|
|
||||||
try:
|
|
||||||
stats = healing.get_statistics()
|
|
||||||
|
|
||||||
if stats.get('enabled'):
|
|
||||||
print(" 🔧 Self-Healing Stats:")
|
|
||||||
print(f" • Total Attempts: {stats.get('total_attempts', 0)}")
|
|
||||||
print(f" • Successful: {stats.get('successful_recoveries', 0)}")
|
|
||||||
print(f" • Success Rate: {stats.get('success_rate', 0):.1f}%")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# Get insights
|
|
||||||
insights = healing.get_insights()
|
|
||||||
if insights:
|
|
||||||
print(" 💡 Self-Healing Insights:")
|
|
||||||
for insight in insights[:2]:
|
|
||||||
print(f" • {insight}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ⚠️ Self-healing stats failed: {e}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# 9. Real-time Metrics
|
|
||||||
# ============================================================================
|
|
||||||
print("9️⃣ Real-time Analytics...")
|
|
||||||
print()
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Get active executions
|
|
||||||
active = analytics.realtime_analytics.get_active_executions()
|
|
||||||
|
|
||||||
print(f" ⚡ Active Executions: {len(active)}")
|
|
||||||
|
|
||||||
# Get recent metrics
|
|
||||||
recent_metrics = analytics.realtime_analytics.get_recent_metrics(limit=5)
|
|
||||||
|
|
||||||
if recent_metrics:
|
|
||||||
print(f" 📊 Recent Metrics: {len(recent_metrics)} entries")
|
|
||||||
|
|
||||||
print()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ⚠️ Real-time metrics failed: {e}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# 10. Summary
|
|
||||||
# ============================================================================
|
|
||||||
print("=" * 60)
|
|
||||||
print(" ✅ Integration Demo Complete!")
|
|
||||||
print("=" * 60)
|
|
||||||
print()
|
|
||||||
print("🎯 What was demonstrated:")
|
|
||||||
print()
|
|
||||||
print(" ✅ Analytics System")
|
|
||||||
print(" • Automatic metrics collection")
|
|
||||||
print(" • Performance analysis")
|
|
||||||
print(" • Anomaly detection")
|
|
||||||
print(" • Insight generation")
|
|
||||||
print(" • Report generation")
|
|
||||||
print()
|
|
||||||
print(" ✅ Self-Healing Integration")
|
|
||||||
print(" • Automatic recovery attempts")
|
|
||||||
print(" • Analytics tracking of recoveries")
|
|
||||||
print(" • Learning from failures")
|
|
||||||
print()
|
|
||||||
print(" ✅ ExecutionLoop Integration")
|
|
||||||
print(" • Seamless analytics hooks")
|
|
||||||
print(" • Resource monitoring")
|
|
||||||
print(" • Real-time tracking")
|
|
||||||
print()
|
|
||||||
print(" ✅ End-to-End Flow")
|
|
||||||
print(" • Workflow execution → Analytics → Insights")
|
|
||||||
print(" • Failure → Self-Healing → Analytics")
|
|
||||||
print(" • Real-time monitoring → Reports")
|
|
||||||
print()
|
|
||||||
print("=" * 60)
|
|
||||||
print()
|
|
||||||
print("📚 Next Steps:")
|
|
||||||
print(" • Run: python demo_full_integration.py")
|
|
||||||
print(" • View reports in: reports/")
|
|
||||||
print(" • Check analytics DB: data/analytics/")
|
|
||||||
print(" • Monitor real-time: Use analytics API")
|
|
||||||
print()
|
|
||||||
print("=" * 60)
|
|
||||||
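The deleted script above drives an `analytics.metrics_collector` whose implementation the diff does not show. As a rough, self-contained sketch of its start/step/complete recording pattern — every name below is illustrative, not the real `core.analytics` API:

```python
# Minimal in-memory sketch of the execution-recording pattern used above.
# The real collector persists to a database; this one keeps a dict.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Execution:
    execution_id: str
    workflow_id: str
    steps: List[dict] = field(default_factory=list)
    status: str = "running"


class InMemoryMetricsCollector:
    """Toy collector: executions keyed by execution_id."""

    def __init__(self):
        self.executions: Dict[str, Execution] = {}

    def record_execution_start(self, execution_id: str, workflow_id: str) -> None:
        self.executions[execution_id] = Execution(execution_id, workflow_id)

    def record_step(self, execution_id: str, status: str, duration_ms: float) -> None:
        self.executions[execution_id].steps.append(
            {"status": status, "duration_ms": duration_ms}
        )

    def record_execution_complete(self, execution_id: str, status: str) -> None:
        self.executions[execution_id].status = status

    def success_rate(self, workflow_id: str) -> float:
        """Percentage of completed runs for one workflow."""
        runs = [e for e in self.executions.values() if e.workflow_id == workflow_id]
        done = [e for e in runs if e.status == "completed"]
        return 100.0 * len(done) / len(runs) if runs else 0.0


collector = InMemoryMetricsCollector()
collector.record_execution_start("exec_1", "login_workflow")
collector.record_step("exec_1", "success", 120.0)
collector.record_execution_complete("exec_1", "completed")
print(collector.success_rate("login_workflow"))  # 100.0
```

The real API additionally carries timestamps, confidence scores, and recovery attempts, but the keying by `execution_id` and the start/step/complete lifecycle are the essence of what the demo exercises.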
@@ -1,182 +0,0 @@
#!/usr/bin/env python3
"""
Demonstration of the user input validation system.

Requirement 7.2: Protection against SQL/NoSQL injection
Requirement 7.3: File path validation
Requirement 7.4: Sanitization of logged data
"""

import logging
import sys
from pathlib import Path

# Add the project root to the path
sys.path.insert(0, str(Path(__file__).parent))

# Direct import to avoid __init__.py issues
import core.security.input_validator as input_validator_module
from core.security.input_validator import InputValidationError

# Logging configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

logger = logging.getLogger(__name__)


def demo_string_validation():
    """Demonstrate string validation."""
    print("\n=== STRING VALIDATION DEMO ===")

    validator = input_validator_module.InputValidator(strict_mode=True)

    # Valid inputs
    valid_inputs = [
        "hello world",
        "user@example.com",
        "Document important.pdf",
        "Normal data 123"
    ]

    print("\n1. Valid inputs:")
    for input_data in valid_inputs:
        try:
            result = input_validator_module.validate_string_input(input_data, field_name="test_input")
            print(f"  ✓ '{input_data}' -> '{result}'")
        except InputValidationError as e:
            print(f"  ✗ '{input_data}' -> ERROR: {e}")

    # SQL injection attempts
    sql_injections = [
        "'; DROP TABLE users; --",
        "1' OR '1'='1",
        "admin'--",
        "UNION SELECT * FROM passwords",
        "1; EXEC xp_cmdshell('dir')"
    ]

    print("\n2. SQL injection attempts (should be rejected):")
    for injection in sql_injections:
        try:
            result = input_validator_module.validate_string_input(injection, field_name="malicious_input")
            print(f"  ⚠️ '{injection}' -> ACCEPTED: '{result}' (PROBLEM!)")
        except InputValidationError as e:
            print(f"  ✓ '{injection}' -> REJECTED: {str(e)[:80]}...")

    # NoSQL injection attempts
    nosql_injections = [
        '{"$where": "this.username == this.password"}',
        '{"$regex": ".*"}',
        'function() { return true; }',
        '{"$ne": null}',
        'this.username'
    ]

    print("\n3. NoSQL injection attempts (should be rejected):")
    for injection in nosql_injections:
        try:
            result = input_validator_module.validate_string_input(injection, field_name="nosql_input")
            print(f"  ⚠️ '{injection}' -> ACCEPTED: '{result}' (PROBLEM!)")
        except InputValidationError as e:
            print(f"  ✓ '{injection}' -> REJECTED: {str(e)[:80]}...")


def demo_file_path_validation():
    """Demonstrate file path validation."""
    print("\n=== FILE PATH VALIDATION DEMO ===")
    print("(Feature to be implemented)")


def demo_json_validation():
    """Demonstrate JSON validation."""
    print("\n=== JSON VALIDATION DEMO ===")
    print("(Feature to be implemented)")


def demo_logging_sanitization():
    """Demonstrate sanitization for logs."""
    print("\n=== LOG SANITIZATION DEMO ===")

    test_data = [
        "normal data",
        "very_long_password_that_should_be_hashed",
        {"username": "admin", "password": "secret123"},
        ["item1", "item2", "item3"],
        '<script>alert("xss")</script>',
        "data with special characters: <>&\"'",
        "x" * 300  # Very long data
    ]

    print("\n1. Sanitizing different data types:")
    for i, data in enumerate(test_data):
        sanitized = input_validator_module.sanitize_for_logging(data, f"field_{i}")
        print(f"  Original:  {str(data)[:50]}{'...' if len(str(data)) > 50 else ''}")
        print(f"  Sanitized: {sanitized}")
        print()


def demo_strict_vs_lenient_mode():
    """Demonstrate strict vs lenient modes."""
    print("\n=== STRICT VS LENIENT MODE DEMO ===")

    strict_validator = input_validator_module.InputValidator(strict_mode=True)
    lenient_validator = input_validator_module.InputValidator(strict_mode=False)

    test_cases = [
        "a" * 1500,  # Too long
        "'; DROP TABLE users; --"  # SQL injection
    ]

    for test_case in test_cases:
        print(f"\nTesting: '{test_case[:50]}{'...' if len(test_case) > 50 else ''}'")

        # Strict mode
        strict_result = strict_validator.validate_string(test_case, max_length=1000)
        print(f"  Strict mode: {'✓ VALID' if strict_result.is_valid else '✗ INVALID'}")
        if strict_result.errors:
            print(f"    Errors: {strict_result.errors}")
        if strict_result.warnings:
            print(f"    Warnings: {strict_result.warnings}")

        # Lenient mode
        lenient_result = lenient_validator.validate_string(test_case, max_length=1000)
        print(f"  Lenient mode: {'✓ VALID' if lenient_result.is_valid else '✗ INVALID'}")
        if lenient_result.errors:
            print(f"    Errors: {lenient_result.errors}")
        if lenient_result.warnings:
            print(f"    Warnings: {lenient_result.warnings}")


def main():
    """Main demonstration entry point."""
    print("🔒 INPUT VALIDATION SYSTEM DEMONSTRATION")
    print("=" * 60)

    try:
        demo_string_validation()
        demo_file_path_validation()
        demo_json_validation()
        demo_logging_sanitization()
        demo_strict_vs_lenient_mode()

        print("\n" + "=" * 60)
        print("✅ DEMONSTRATION COMPLETED SUCCESSFULLY")
        print("\nThe input validation system works as expected:")
        print("- Protection against SQL/NoSQL injection ✓")
        print("- File path validation ✓")
        print("- Sanitization of logged data ✓")
        print("- Strict and lenient modes ✓")

    except Exception as e:
        print(f"\n❌ ERROR DURING DEMONSTRATION: {e}")
        import traceback
        traceback.print_exc()
        return 1

    return 0


if __name__ == "__main__":
    sys.exit(main())
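The validator internals (`validate_string_input`, `InputValidator`) are not shown in this diff. A minimal, signature-based sketch of the SQL/NoSQL rejection idea the demo exercises — the patterns and function below are invented for illustration and are far from a complete validator:

```python
import re

# A few coarse signatures; a real validator needs far more than this.
SQL_PATTERNS = [
    r"(?i)\bunion\b.+\bselect\b",   # UNION ... SELECT
    r"(?i)\bdrop\s+table\b",        # DROP TABLE
    r"(?i)'\s*or\s*'1'\s*=\s*'1",   # classic tautology
    r"--\s*$",                      # trailing SQL comment
]
NOSQL_PATTERNS = [r'"\$where"', r'"\$ne"', r'"\$regex"']


def looks_malicious(value: str) -> bool:
    """Return True if the string matches a known injection signature."""
    return any(re.search(p, value) for p in SQL_PATTERNS + NOSQL_PATTERNS)


print(looks_malicious("'; DROP TABLE users; --"))  # True
print(looks_malicious("user@example.com"))         # False
```

Denylist matching like this is easy to bypass; the strict/lenient split in the script above exists precisely because such heuristics trade false positives against false negatives.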
@@ -1,282 +0,0 @@
#!/usr/bin/env python3
"""Demo of integrated execution with analytics and self-healing."""

import time
from datetime import datetime
from dataclasses import dataclass
from typing import Optional

from core.analytics.integration import get_analytics_integration


@dataclass
class MockNode:
    """Mock workflow node."""
    node_id: str
    action_type: str
    should_fail: bool = False


@dataclass
class MockWorkflow:
    """Mock workflow."""
    workflow_id: str
    nodes: list


@dataclass
class ExecutionResult:
    """Execution result."""
    success: bool
    error: Optional[str] = None


class IntegratedExecutionDemo:
    """Demo of integrated execution."""

    def __init__(self):
        """Initialize demo."""
        self.analytics = get_analytics_integration(enabled=True)
        self.current_execution_id = None
        self.current_workflow_id = None

        print("=" * 60)
        print("Integrated Execution Demo")
        print("=" * 60)

    def execute_workflow(self, workflow: MockWorkflow) -> bool:
        """
        Execute workflow with full analytics integration.

        Args:
            workflow: Workflow to execute

        Returns:
            True if successful
        """
        print(f"\n🚀 Executing workflow: {workflow.workflow_id}")
        print(f"   Total steps: {len(workflow.nodes)}")

        # 1. Start tracking
        self.current_workflow_id = workflow.workflow_id
        self.current_execution_id = self.analytics.on_execution_start(
            workflow_id=workflow.workflow_id,
            total_steps=len(workflow.nodes)
        )

        print(f"   Execution ID: {self.current_execution_id}")

        started_at = datetime.now()
        steps_completed = 0
        steps_failed = 0

        try:
            # 2. Execute steps
            for i, node in enumerate(workflow.nodes):
                success = self._execute_step(node, i + 1)

                if success:
                    steps_completed += 1
                else:
                    steps_failed += 1

                # Show live metrics
                live_metrics = self.analytics.get_live_metrics(self.current_execution_id)
                if live_metrics:
                    print(f"   Progress: {live_metrics['progress_percent']:.1f}%")

                time.sleep(0.5)  # Simulate work

            # 3. Complete successfully
            completed_at = datetime.now()
            duration = (completed_at - started_at).total_seconds()

            self.analytics.on_execution_complete(
                execution_id=self.current_execution_id,
                workflow_id=workflow.workflow_id,
                started_at=started_at,
                completed_at=completed_at,
                duration=duration,
                status='success',
                steps_completed=steps_completed,
                steps_failed=steps_failed
            )

            print("\n✅ Workflow completed successfully!")
            print(f"   Duration: {duration:.2f}s")
            print(f"   Steps completed: {steps_completed}")
            print(f"   Steps failed: {steps_failed}")

            return True

        except Exception as e:
            # 4. Complete with failure
            completed_at = datetime.now()
            duration = (completed_at - started_at).total_seconds()

            self.analytics.on_execution_complete(
                execution_id=self.current_execution_id,
                workflow_id=workflow.workflow_id,
                started_at=started_at,
                completed_at=completed_at,
                duration=duration,
                status='failed',
                error_message=str(e),
                steps_completed=steps_completed,
                steps_failed=steps_failed
            )

            print(f"\n❌ Workflow failed: {e}")
            print(f"   Duration: {duration:.2f}s")
            print(f"   Steps completed: {steps_completed}")
            print(f"   Steps failed: {steps_failed}")

            return False

    def _execute_step(self, node: MockNode, step_number: int) -> bool:
        """
        Execute a single step with analytics.

        Args:
            node: Node to execute
            step_number: Step number

        Returns:
            True if successful
        """
        print(f"\n   Step {step_number}: {node.node_id} ({node.action_type})")

        # Notify step start
        self.analytics.on_step_start(
            execution_id=self.current_execution_id,
            node_id=node.node_id,
            step_number=step_number
        )

        step_start = datetime.now()

        # Simulate execution
        time.sleep(0.3)

        # Determine result
        if node.should_fail:
            success = False
            error_msg = f"Step {node.node_id} failed (simulated)"
            print(f"   ❌ Failed: {error_msg}")
        else:
            success = True
            error_msg = None
            print("   ✅ Success")

        # Notify step complete
        step_end = datetime.now()
        self.analytics.on_step_complete(
            execution_id=self.current_execution_id,
            workflow_id=self.current_workflow_id,
            node_id=node.node_id,
            action_type=node.action_type,
            started_at=step_start,
            completed_at=step_end,
            duration=(step_end - step_start).total_seconds(),
            success=success,
            error_message=error_msg
        )

        return success

    def show_workflow_stats(self, workflow_id: str):
        """Show workflow statistics."""
        print(f"\n📊 Workflow Statistics: {workflow_id}")
        print("=" * 60)

        stats = self.analytics.get_workflow_stats(workflow_id, hours=1)

        if stats:
            perf = stats['performance']
            success = stats['success_rate']

            print("\nPerformance:")
            print(f"  Average Duration: {perf['avg_duration']:.2f}s")
            print(f"  Median Duration: {perf['median_duration']:.2f}s")
            print(f"  P95 Duration: {perf['p95_duration']:.2f}s")
            print(f"  P99 Duration: {perf['p99_duration']:.2f}s")

            print("\nSuccess Rate:")
            print(f"  Total Executions: {success['total_executions']}")
            print(f"  Successful: {success['successful_executions']}")
            print(f"  Failed: {success['failed_executions']}")
            print(f"  Success Rate: {success['success_rate']:.1f}%")
            print(f"  Reliability Score: {success['reliability_score']:.1f}")

            if success['failure_categories']:
                print("\nFailure Categories:")
                for category, count in success['failure_categories'].items():
                    print(f"  - {category}: {count}")
        else:
            print("  No statistics available yet")


def main():
    """Run demo."""
    demo = IntegratedExecutionDemo()

    # Create test workflows
    workflows = [
        MockWorkflow(
            workflow_id="demo_workflow_1",
            nodes=[
                MockNode("step_1", "click"),
                MockNode("step_2", "type"),
                MockNode("step_3", "click"),
            ]
        ),
        MockWorkflow(
            workflow_id="demo_workflow_1",
            nodes=[
                MockNode("step_1", "click"),
                MockNode("step_2", "type"),
                MockNode("step_3", "click"),
                MockNode("step_4", "wait"),
            ]
        ),
        MockWorkflow(
            workflow_id="demo_workflow_1",
            nodes=[
                MockNode("step_1", "click"),
                MockNode("step_2", "type", should_fail=True),  # This will fail
                MockNode("step_3", "click"),
            ]
        ),
    ]

    # Execute workflows
    for i, workflow in enumerate(workflows, 1):
        print(f"\n{'='*60}")
        print(f"Execution {i}/{len(workflows)}")
        print(f"{'='*60}")

        demo.execute_workflow(workflow)
        time.sleep(1)

    # Show statistics
    demo.show_workflow_stats("demo_workflow_1")

    print(f"\n{'='*60}")
    print("Demo Complete!")
    print(f"{'='*60}")
    print("\nNext Steps:")
    print("  1. Check the metrics database: data/analytics/metrics.db")
    print("  2. View analytics: python demo_analytics.py")
    print("  3. Generate reports: see ANALYTICS_QUICKSTART.md")
    print("  4. Integrate with your ExecutionLoop: see ANALYTICS_INTEGRATION_GUIDE.md")


if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print("\n\nDemo interrupted by user")
    except Exception as e:
        print(f"\n\nError during demo: {e}")
        import traceback
        traceback.print_exc()
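The bookkeeping shape in `execute_workflow` above — record a start event, run the steps, record a completion event even on failure — is a classic fit for a context manager. A toy sketch of that shape, with an invented `events` list standing in for the analytics hooks:

```python
from contextlib import contextmanager

# Invented stand-in for the analytics integration; the real hooks
# (on_execution_start / on_execution_complete) take many more fields.
events = []


@contextmanager
def tracked_execution(workflow_id: str):
    """Emit start/complete events around a workflow body, even on failure."""
    events.append(("start", workflow_id))
    try:
        yield
        events.append(("complete", workflow_id, "success"))
    except Exception as e:
        # Record the failure before re-raising so tracking never loses it.
        events.append(("complete", workflow_id, f"failed: {e}"))
        raise


try:
    with tracked_execution("demo_workflow_1"):
        raise RuntimeError("element_not_found")
except RuntimeError:
    pass

print(events[-1])  # ('complete', 'demo_workflow_1', 'failed: element_not_found')
```

Compared with the try/except pair in the class above, the context manager guarantees the completion event cannot be forgotten on a new exit path.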
@@ -1,32 +0,0 @@
{
  "timestamp": "2026-01-12 18:15:05",
  "demo_name": "Complete Step Properties Interface",
  "version": "January 12, 2026",
  "status": "WORKING",
  "components_implemented": [
    "PropertiesPanel with conditional rendering",
    "StandardParametersEditor with validation",
    "Extensible ParameterFieldRenderer",
    "Informative EmptyStateMessage",
    "LoadingState with elegant indicators",
    "RealScreenCapture with visual selection"
  ],
  "features_working": [
    "Working screen capture button",
    "Configuration fields for all step types",
    "Real-time validation",
    "Automatic saving with debouncing",
    "VWB action support",
    "Informative status messages",
    "Responsive and accessible interface"
  ],
  "field_types_supported": [
    "text (with variable support)",
    "number (with min/max validation)",
    "boolean (switches)",
    "select (dropdowns)",
    "visual (with screen capture)"
  ],
  "demo_url": "http://localhost:3000?demo=properties",
  "backend_url": "http://localhost:5003"
}
@@ -1,193 +0,0 @@
#!/usr/bin/env python3
"""
Demonstration of the persistent learning system - Fiche #18

This script demonstrates the "mixed" (JSONL + SQLite) persistent
learning system for UI target resolution.

Author: Dom, Alice Kiro - December 22, 2025
"""

import sys
import logging
from pathlib import Path
from datetime import datetime
from typing import Optional
from unittest.mock import Mock

# Add the project root to the path
sys.path.insert(0, str(Path(__file__).parent))

from core.learning.target_memory_store import TargetMemoryStore, TargetFingerprint
from core.execution.target_resolver import TargetResolver
from core.execution.screen_signature import screen_signature

# Logging configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


def create_mock_screen_state(window_title: str = "Test App") -> Mock:
    """Create a mock ScreenState for testing."""
    screen_state = Mock()
    screen_state.screen_state_id = f"state_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
    screen_state.timestamp = datetime.now()

    # Mock window
    screen_state.window = Mock()
    screen_state.window.window_title = window_title
    screen_state.window.screen_resolution = [1920, 1080]

    # Mock perception
    screen_state.perception = Mock()
    screen_state.perception.detected_text = ["Login", "Password", "Submit", "Cancel"]

    return screen_state


def create_mock_ui_elements() -> list:
    """Create mock UI elements for testing."""
    elements = []

    # "Email" label
    email_label = Mock()
    email_label.element_id = "lbl_email"
    email_label.bbox = (50, 100, 80, 25)
    email_label.role = "label"
    email_label.label = "Email"
    elements.append(email_label)

    # Email input
    email_input = Mock()
    email_input.element_id = "input_email"
    email_input.bbox = (150, 100, 200, 25)
    email_input.role = "input"
    email_input.type = "text_input"
    email_input.label = ""
    elements.append(email_input)

    # Submit button
    submit_btn = Mock()
    submit_btn.element_id = "btn_submit"
    submit_btn.bbox = (150, 200, 100, 35)
    submit_btn.role = "button"
    submit_btn.type = "submit"
    submit_btn.label = "Submit"
    elements.append(submit_btn)

    return elements


def create_mock_target_spec(by_role: Optional[str] = None, by_text: Optional[str] = None,
                            context_hints: Optional[dict] = None) -> Mock:
    """Create a mock TargetSpec."""
    spec = Mock()
    spec.by_role = by_role
    spec.by_text = by_text
    spec.by_position = None
    spec.context_hints = context_hints or {}
    return spec


def demo_basic_learning():
    """Basic demonstration of persistent learning."""
    print("\n" + "="*60)
    print("DEMO - Basic persistent learning")
    print("="*60)

    # Initialize the store
    store = TargetMemoryStore("data/learning_demo")

    # Create test data
    screen_state = create_mock_screen_state("Login Form")
    ui_elements = create_mock_ui_elements()

    # Generate the screen signature
    screen_sig = screen_signature(screen_state, ui_elements, mode="layout")
    print(f"📋 Screen signature generated: {screen_sig[:16]}...")

    # Create a TargetSpec for the Submit button
    target_spec = create_mock_target_spec(
        by_role="button",
        by_text="Submit"
    )

    print(f"🎯 TargetSpec: role={target_spec.by_role}, text={target_spec.by_text}")

    # Simulate several successful resolutions
    submit_element = next(e for e in ui_elements if e.element_id == "btn_submit")
    fingerprint = TargetFingerprint(
        element_id=submit_element.element_id,
        bbox=tuple(submit_element.bbox),
        role=submit_element.role,
        etype=submit_element.type,
        label=submit_element.label,
        confidence=0.95
    )

    print("\n📚 Learning - Recording 3 successful resolutions...")
    for i in range(3):
        store.record_success(
            screen_signature=screen_sig,
            target_spec=target_spec,
            fingerprint=fingerprint,
            strategy_used="by_role",
            confidence=0.90 + i * 0.03  # Varying confidences
        )
        print(f"  ✅ Success {i+1}/3 recorded (confidence: {0.90 + i * 0.03:.2f})")

    # Lookup test
    print("\n🔍 Lookup test...")
    result = store.lookup(screen_sig, target_spec, min_success_count=2)

    if result:
        print("  ✅ Element found in memory!")
|
|
||||||
print(f" - Element ID: {result.element_id}")
|
|
||||||
print(f" - Role: {result.role}")
|
|
||||||
print(f" - Label: {result.label}")
|
|
||||||
print(f" - BBox: {result.bbox}")
|
|
||||||
print(f" - Confiance: {result.confidence:.3f}")
|
|
||||||
else:
|
|
||||||
print(" ❌ Aucun élément trouvé en mémoire")
|
|
||||||
|
|
||||||
# Statistiques
|
|
||||||
stats = store.get_stats()
|
|
||||||
print(f"\n📊 Statistiques:")
|
|
||||||
print(f" - Entrées totales: {stats['total_entries']}")
|
|
||||||
print(f" - Succès totaux: {stats['total_successes']}")
|
|
||||||
print(f" - Échecs totaux: {stats['total_failures']}")
|
|
||||||
print(f" - Confiance moyenne: {stats['overall_confidence']:.3f}")
|
|
||||||
print(f" - Fichiers JSONL: {stats['jsonl_files_count']}")
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Fonction principale de démonstration"""
|
|
||||||
print("🚀 DÉMONSTRATION - Système d'apprentissage persistant RPA Vision V3")
|
|
||||||
print("Fiche #18 - Architecture 'mix' (JSONL + SQLite)")
|
|
||||||
print("Auteur: Dom, Alice Kiro - 22 décembre 2025")
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Créer le répertoire de démonstration
|
|
||||||
demo_dir = Path("data/learning_demo")
|
|
||||||
demo_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
|
|
||||||
# Exécuter les démonstrations
|
|
||||||
demo_basic_learning()
|
|
||||||
|
|
||||||
print("\n" + "="*60)
|
|
||||||
print("✅ DÉMONSTRATION TERMINÉE AVEC SUCCÈS")
|
|
||||||
print("="*60)
|
|
||||||
print(f"📁 Données de démonstration sauvegardées dans: {demo_dir.absolute()}")
|
|
||||||
print("🔍 Vous pouvez examiner les fichiers JSONL et la base SQLite générés")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Erreur durant la démonstration: {e}", exc_info=True)
|
|
||||||
print(f"\n❌ ERREUR: {e}")
|
|
||||||
return 1
|
|
||||||
|
|
||||||
return 0
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
sys.exit(main())
|
|
||||||
@@ -1,372 +0,0 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Complete demo of the real screen capture system - RPA Vision V3
Author: Dom, Alice, Kiro - January 8, 2026

Demo script exercising every feature of the real screen capture system.
"""

import time
import json
import requests
import sys
import os
from typing import Dict, List, Any

# Add the project path
sys.path.append(os.path.dirname(__file__))

from visual_workflow_builder.backend.services.real_screen_capture import RealScreenCaptureService


class RealScreenCaptureDemo:
    """Demo of the real screen capture system"""

    def __init__(self):
        self.service = RealScreenCaptureService()
        self.api_base_url = "http://localhost:5002/api/real-demo"

    def print_header(self, title: str):
        """Print a formatted header"""
        print(f"\n{'='*60}")
        print(f"  {title}")
        print(f"{'='*60}")

    def print_step(self, step: str):
        """Print a step"""
        print(f"\n🔹 {step}")

    def print_success(self, message: str):
        """Print a success message"""
        print(f"✅ {message}")

    def print_error(self, message: str):
        """Print an error message"""
        print(f"❌ {message}")

    def print_info(self, message: str):
        """Print an info message"""
        print(f"ℹ️  {message}")

    def demo_service_direct(self):
        """Demo of the direct service (no API)"""
        self.print_header("DIRECT SERVICE DEMO")

        try:
            self.print_step("Initializing the service")
            monitors = self.service.get_monitors()
            self.print_success(f"Service initialized - {len(monitors)} monitors detected")

            for monitor in monitors:
                print(f"  📺 Monitor {monitor['id']}: {monitor['width']}x{monitor['height']}")

            self.print_step("Selecting the primary monitor")
            if len(monitors) > 0:
                success = self.service.select_monitor(0)
                if success:
                    self.print_success("Monitor 0 selected")
                else:
                    self.print_error("Monitor selection failed")

            self.print_step("Starting capture (interval: 1s)")
            success = self.service.start_capture(interval=1.0)
            if success:
                self.print_success("Capture started")
            else:
                self.print_error("Capture failed to start")
                return

            self.print_step("Capturing and detecting for 10 seconds")
            for i in range(10):
                time.sleep(1)
                status = self.service.get_status()
                elements = self.service.get_detected_elements()
                screenshot = self.service.get_current_screenshot_base64()

                print(f"  Second {i+1:2d}: "
                      f"{len(elements):2d} elements detected, "
                      f"Screenshot: {'✓' if screenshot else '✗'}")

                # Show a few detected elements
                if elements and i % 3 == 0:  # Every 3 seconds
                    print("    📋 Recent elements:")
                    for elem in elements[:3]:  # Show the first 3
                        bbox = elem.get('bbox', {})
                        print(f"      - {elem.get('type', 'unknown')}: "
                              f"({bbox.get('x', 0)}, {bbox.get('y', 0)}) "
                              f"conf={elem.get('confidence', 0):.2f}")

            self.print_step("Stopping capture")
            success = self.service.stop_capture()
            if success:
                self.print_success("Capture stopped")
            else:
                self.print_error("Capture failed to stop")

            # Final statistics
            final_status = self.service.get_status()
            self.print_info(f"Final statistics:")
            print(f"  - Elements detected: {final_status['elements_detected']}")
            print(f"  - Monitors available: {final_status['monitors_count']}")
            print(f"  - Capture active: {final_status['is_capturing']}")

        except Exception as e:
            self.print_error(f"Error during the demo: {e}")

        finally:
            self.service.cleanup()

    def demo_api_endpoints(self):
        """Demo of the API endpoints"""
        self.print_header("REST API DEMO")

        try:
            self.print_step("API connectivity check")
            response = requests.get(f"{self.api_base_url}/capture/status", timeout=5)
            if response.status_code == 200:
                self.print_success("API reachable")
            else:
                self.print_error(f"API unreachable (status: {response.status_code})")
                return

        except requests.exceptions.RequestException as e:
            self.print_error(f"Could not connect to the API: {e}")
            self.print_info("Make sure the backend server is running on port 5002")
            return

        try:
            # Monitors
            self.print_step("Fetching monitors via the API")
            response = requests.get(f"{self.api_base_url}/monitors")
            if response.status_code == 200:
                data = response.json()
                monitors = data['monitors']
                self.print_success(f"{len(monitors)} monitors fetched")

                for monitor in monitors:
                    print(f"  📺 Monitor {monitor['id']}: {monitor['width']}x{monitor['height']}")

                # Monitor selection
                if len(monitors) > 0:
                    self.print_step("Selecting a monitor via the API")
                    response = requests.post(f"{self.api_base_url}/monitors/0/select")
                    if response.status_code == 200:
                        self.print_success("Monitor selected via the API")

            # Start capture
            self.print_step("Starting capture via the API")
            response = requests.post(
                f"{self.api_base_url}/capture/start",
                json={'interval': 1.5}
            )
            if response.status_code == 200:
                self.print_success("Capture started via the API")
            else:
                self.print_error(f"Start failed: {response.text}")
                return

            # Monitor for a few seconds
            self.print_step("Monitoring the capture via the API")
            for i in range(6):
                time.sleep(1)

                # Status
                response = requests.get(f"{self.api_base_url}/capture/status")
                if response.status_code == 200:
                    status = response.json()['status']

                # Elements
                response = requests.get(f"{self.api_base_url}/elements")
                elements_count = 0
                if response.status_code == 200:
                    elements_count = response.json()['count']

                print(f"  Second {i+1}: "
                      f"Capture: {'✓' if status['is_capturing'] else '✗'}, "
                      f"Elements: {elements_count}")

            # Screenshot test
            self.print_step("Fetching a screenshot via the API")
            response = requests.get(f"{self.api_base_url}/capture/screenshot")
            if response.status_code == 200:
                screenshot_data = response.json()
                self.print_success("Screenshot fetched")
                self.print_info(f"Elements in the screenshot: {len(screenshot_data['elements'])}")

                # Show a few elements
                for i, elem in enumerate(screenshot_data['elements'][:3]):
                    bbox = elem['bbox']
                    print(f"  {i+1}. {elem['type']}: "
                          f"pos=({bbox['x']}, {bbox['y']}) "
                          f"size={bbox['width']}x{bbox['height']} "
                          f"conf={elem['confidence']:.2f}")

            # Stop capture
            self.print_step("Stopping capture via the API")
            response = requests.post(f"{self.api_base_url}/capture/stop")
            if response.status_code == 200:
                self.print_success("Capture stopped via the API")

        except Exception as e:
            self.print_error(f"Error during the API tests: {e}")

    def demo_interaction_simulation(self):
        """Demo of simulated interactions"""
        self.print_header("SIMULATED INTERACTIONS DEMO")

        try:
            # Click by coordinates
            self.print_step("Click-by-coordinates test")
            response = requests.post(
                f"{self.api_base_url}/interact/click",
                json={'x': 100, 'y': 100}
            )
            if response.status_code == 200:
                result = response.json()
                self.print_success(f"Click simulated: {result['message']}")
            else:
                self.print_info("Click not performed (pyautogui unavailable or error)")

            # Typing
            self.print_step("Text input test")
            response = requests.post(
                f"{self.api_base_url}/interact/type",
                json={'text': 'Test RPA Vision V3 - Capture Réelle'}
            )
            if response.status_code == 200:
                result = response.json()
                self.print_success(f"Typing simulated: {result['message']}")
            else:
                self.print_info("Typing not performed (pyautogui unavailable or error)")

            # Emergency stop
            self.print_step("Emergency stop test")
            response = requests.post(f"{self.api_base_url}/safety/emergency-stop")
            if response.status_code == 200:
                self.print_success("Emergency stop tested")

        except Exception as e:
            self.print_error(f"Error during the interaction tests: {e}")

    def demo_workflow_execution(self):
        """Demo of a simple workflow execution"""
        self.print_header("SIMPLE WORKFLOW DEMO")

        try:
            # Simple workflow: click + type + wait
            workflow_actions = [
                {'type': 'click', 'x': 200, 'y': 200},
                {'type': 'wait', 'duration': 0.5},
                {'type': 'type', 'text': 'Bonjour RPA Vision V3'},
                {'type': 'wait', 'duration': 0.5},
                {'type': 'click', 'x': 300, 'y': 300}
            ]

            self.print_step("Running a simple workflow")
            self.print_info("Workflow actions:")
            for i, action in enumerate(workflow_actions):
                print(f"  {i+1}. {action['type']}: {action}")

            response = requests.post(
                f"{self.api_base_url}/workflow/execute",
                json={'actions': workflow_actions}
            )

            if response.status_code == 200:
                result = response.json()
                self.print_success("Workflow executed")

                summary = result['summary']
                print(f"  📊 Summary: {summary['successful_actions']}/{summary['total_actions']} "
                      f"actions succeeded ({summary['success_rate']:.1%})")

                # Per-action results
                for res in result['results']:
                    status = "✅" if res['success'] else "❌"
                    print(f"  {status} Action {res['action_index']+1} ({res['type']}): "
                          f"{res.get('message', res.get('error', 'N/A'))}")
            else:
                self.print_error(f"Workflow failed: {response.text}")

        except Exception as e:
            self.print_error(f"Error while executing the workflow: {e}")

    def run_complete_demo(self):
        """Run the full demo"""
        self.print_header("COMPLETE DEMO - REAL SCREEN CAPTURE SYSTEM")
        print("RPA Vision V3 - Real screen capture and interaction system")
        print("Author: Dom, Alice, Kiro - January 8, 2026")

        try:
            # 1. Direct service
            self.demo_service_direct()

            # Pause between demos
            self.print_info("Pausing 2 seconds between demos...")
            time.sleep(2)

            # 2. REST API
            self.demo_api_endpoints()

            # Pause
            time.sleep(1)

            # 3. Interactions
            self.demo_interaction_simulation()

            # Pause
            time.sleep(1)

            # 4. Workflow
            self.demo_workflow_execution()

            # Final summary
            self.print_header("DEMO FINISHED")
            self.print_success("All demos were executed")
            self.print_info("Features exercised:")
            print("  ✅ Real screen capture service")
            print("  ✅ Real-time UI element detection")
            print("  ✅ Full REST API")
            print("  ✅ Simulated interactions (click, typing)")
            print("  ✅ Simple workflow execution")
            print("  ✅ Safety controls")

            self.print_info("The real screen capture system is operational! 🚀")

        except KeyboardInterrupt:
            self.print_info("Demo interrupted by the user")
        except Exception as e:
            self.print_error(f"General error: {e}")
        finally:
            # Final cleanup
            try:
                self.service.cleanup()
                requests.post(f"{self.api_base_url}/safety/emergency-stop", timeout=2)
            except Exception:
                pass


def main():
    """Main entry point"""
    print("🎯 Starting the real screen capture demo")

    # Preliminary checks
    try:
        import mss
        print("✅ MSS available for screen capture")
    except ImportError:
        print("❌ MSS unavailable - screen capture limited")

    try:
        import pyautogui
        print("✅ PyAutoGUI available for interactions")
    except ImportError:
        print("⚠️  PyAutoGUI unavailable - simulated interactions only")

    # Run the demo
    demo = RealScreenCaptureDemo()
    demo.run_complete_demo()


if __name__ == "__main__":
    main()
@@ -1,167 +0,0 @@
#!/usr/bin/env python3
"""
Security validation demo

Shows how the system refuses to start with an insecure configuration in production.
"""

import os
import sys
from pathlib import Path

# Add current directory to path for imports
sys.path.insert(0, str(Path(__file__).parent))

from core.security import (
    validate_production_security,
    get_security_config,
    generate_secure_key,
    check_security_requirements,
    ProductionSecurityError
)


def demo_insecure_production():
    """Demonstrates rejection of an insecure configuration in production."""
    print("🚨 Demo: Insecure Production Configuration")
    print("=" * 50)

    # Simulate the production environment
    os.environ["ENVIRONMENT"] = "production"
    os.environ["ENCRYPTION_PASSWORD"] = "rpa_vision_v3_default_key"  # Default key
    os.environ["SECRET_KEY"] = "dev-key-change-in-production"  # Default key

    print("Environment: PRODUCTION")
    print("Encryption Password: rpa_vision_v3_default_key (DEFAULT)")
    print("Secret Key: dev-key-change-in-production (DEFAULT)")
    print()

    try:
        config = get_security_config()
        validate_production_security(config)
        print("❌ This should not happen - insecure config was accepted!")
    except ProductionSecurityError as e:
        print("✅ Security validation correctly REJECTED the insecure configuration:")
        print(f"   {e}")

    print()


def demo_secure_production():
    """Demonstrates acceptance of a secure configuration in production."""
    print("✅ Demo: Secure Production Configuration")
    print("=" * 50)

    # Generate secure keys
    secure_encryption_key = generate_secure_key(32)
    secure_secret_key = generate_secure_key(32)

    os.environ["ENVIRONMENT"] = "production"
    os.environ["ENCRYPTION_PASSWORD"] = secure_encryption_key
    os.environ["SECRET_KEY"] = secure_secret_key
    os.environ["LOG_SENSITIVE_DATA"] = "false"
    os.environ["STRICT_INPUT_VALIDATION"] = "true"

    print("Environment: PRODUCTION")
    print(f"Encryption Password: {secure_encryption_key[:8]}... (SECURE)")
    print(f"Secret Key: {secure_secret_key[:8]}... (SECURE)")
    print("Log Sensitive Data: false")
    print("Strict Input Validation: true")
    print()

    try:
        config = get_security_config()
        validate_production_security(config)
        print("✅ Security validation ACCEPTED the secure configuration")
    except ProductionSecurityError as e:
        print(f"❌ Secure configuration was rejected: {e}")

    print()


def demo_development_flexibility():
    """Demonstrates flexibility in the development environment."""
    print("🔧 Demo: Development Environment Flexibility")
    print("=" * 50)

    # Development environment with default keys
    os.environ["ENVIRONMENT"] = "development"
    os.environ["ENCRYPTION_PASSWORD"] = "rpa_vision_v3_default_key"
    os.environ["SECRET_KEY"] = "dev-key-change-in-production"

    print("Environment: DEVELOPMENT")
    print("Encryption Password: rpa_vision_v3_default_key (DEFAULT)")
    print("Secret Key: dev-key-change-in-production (DEFAULT)")
    print()

    try:
        config = get_security_config()
        validate_production_security(config)
        print("✅ Development environment allows default keys for convenience")
    except ProductionSecurityError as e:
        print(f"❌ Development should be flexible: {e}")

    print()


def demo_security_requirements():
    """Demonstrates the security requirements check."""
    print("📋 Demo: Security Requirements Check")
    print("=" * 50)

    # Check the requirements in production
    os.environ["ENVIRONMENT"] = "production"
    secure_key = generate_secure_key(32)
    os.environ["ENCRYPTION_PASSWORD"] = secure_key
    os.environ["SECRET_KEY"] = secure_key

    requirements = check_security_requirements()

    print("Security Requirements Status:")
    for requirement, status in requirements.items():
        status_icon = "✅" if status else "❌"
        print(f"  {status_icon} {requirement}: {status}")

    print()


def cleanup_environment():
    """Cleans up the environment variables."""
    test_vars = [
        "ENVIRONMENT",
        "ENCRYPTION_PASSWORD",
        "SECRET_KEY",
        "LOG_SENSITIVE_DATA",
        "STRICT_INPUT_VALIDATION"
    ]

    for var in test_vars:
        os.environ.pop(var, None)


def main():
    """Main demo entry point."""
    print("🎯 RPA Vision V3 - Security Validation Demo")
    print("=" * 60)
    print()

    try:
        # Demo 1: Insecure configuration in production
        demo_insecure_production()

        # Demo 2: Secure configuration in production
        demo_secure_production()

        # Demo 3: Development flexibility
        demo_development_flexibility()

        # Demo 4: Requirements check
        demo_security_requirements()

        print("🎉 Security validation demo completed!")
        print()
        print("Key takeaways:")
        print("  • Production environments require secure configuration")
        print("  • Default keys are rejected in production")
        print("  • Development environments are more flexible")
        print("  • Security requirements can be checked programmatically")

    finally:
        cleanup_environment()


if __name__ == "__main__":
    main()
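The `core.security` helpers this demo imports are not shown in the diff. Under the assumption that they behave as the demo expects (defaults rejected only in production, keys generated from a CSPRNG), a minimal standard-library stand-in could look like this; the default-key list and the 16-character minimum are illustrative assumptions.

```python
import secrets

# Assumed default keys, taken from the demo's own print statements
DEFAULT_KEYS = {"rpa_vision_v3_default_key", "dev-key-change-in-production"}


class ProductionSecurityError(Exception):
    """Raised when an insecure configuration is detected in production."""


def generate_secure_key(length: int = 32) -> str:
    # token_urlsafe(n) yields roughly 1.3 * n URL-safe characters of entropy
    return secrets.token_urlsafe(length)


def validate_production_security(env: dict) -> None:
    if env.get("ENVIRONMENT") != "production":
        return  # development stays flexible, matching the demo
    for var in ("ENCRYPTION_PASSWORD", "SECRET_KEY"):
        value = env.get(var, "")
        # Hypothetical policy: non-empty, not a known default, minimum length
        if not value or value in DEFAULT_KEYS or len(value) < 16:
            raise ProductionSecurityError(
                f"{var} is missing, too short, or still set to a default key"
            )
```

Taking the environment as a plain dict (rather than reading `os.environ` directly) keeps the validator trivially testable, which is presumably why the real code funnels it through `get_security_config()`.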
@@ -1,332 +0,0 @@
#!/usr/bin/env python3
"""
Demo script for Self-Healing Workflows system.

This script demonstrates the key features of the self-healing system.
"""

import sys
from pathlib import Path
from datetime import datetime

# Add project root to path
sys.path.insert(0, str(Path(__file__).parent))

from core.healing.healing_engine import SelfHealingEngine
from core.healing.learning_repository import LearningRepository
from core.healing.confidence_scorer import ConfidenceScorer
from core.healing.recovery_logger import RecoveryLogger
from core.healing.models import RecoveryContext, RecoveryResult
from core.healing.execution_integration import get_self_healing_integration


def print_header(title: str):
    """Print a formatted header."""
    print("\n" + "=" * 70)
    print(f"  {title}")
    print("=" * 70)


def demo_confidence_scorer():
    """Demonstrate confidence scoring."""
    print_header("1. Confidence Scorer Demo")

    scorer = ConfidenceScorer()

    # Test text similarity
    print("\n📊 Text Similarity:")
    pairs = [
        ("Submit", "Submit"),
        ("Submit", "Send"),
        ("Submit", "Cancel"),
    ]

    for text1, text2 in pairs:
        similarity = scorer._text_similarity(text1, text2)
        print(f"  '{text1}' vs '{text2}': {similarity:.3f}")

    # Test confidence calculation
    print("\n📊 Recovery Confidence:")
    context = RecoveryContext(
        original_action='click',
        target_element='Submit Button',
        failure_reason='element_not_found',
        screenshot_path='/tmp/test.png',
        workflow_id='demo_workflow',
        node_id='node_1',
        attempt_count=1
    )

    strategies = ['semantic_variant', 'spatial_fallback', 'timing_adaptation']
    for strategy in strategies:
        confidence = scorer.calculate_recovery_confidence(strategy, context, 0.8)
        print(f"  {strategy}: {confidence:.3f}")

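The body of `ConfidenceScorer._text_similarity` is not included in this diff. A common stand-in for this kind of label comparison is a normalized edit-similarity ratio from the standard library's `difflib`; the case-folding and the exact metric are assumptions about what the real scorer does.

```python
from difflib import SequenceMatcher


def text_similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1]; 1.0 means identical (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()
```

This reproduces the demo's expected ordering ("Submit" vs "Submit" scores highest, "Submit" vs "Cancel" lowest), though the production scorer may weight embeddings or token overlap instead.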
def demo_learning_repository():
    """Demonstrate learning repository."""
    print_header("2. Learning Repository Demo")

    import tempfile
    temp_dir = Path(tempfile.mkdtemp())
    repo = LearningRepository(temp_dir)

    # Store some patterns
    print("\n💾 Storing recovery patterns...")

    contexts_and_results = [
        (
            RecoveryContext(
                original_action='click',
                target_element='Submit',
                failure_reason='element_not_found',
                screenshot_path='/tmp/test1.png',
                workflow_id='workflow_1',
                node_id='node_1',
                attempt_count=1,
                metadata={'element_type': 'button'}
            ),
            RecoveryResult(
                success=True,
                strategy_used='semantic_variant',
                new_element='Send',
                confidence_score=0.85
            )
        ),
        (
            RecoveryContext(
                original_action='click',
                target_element='Login',
                failure_reason='element_moved',
                screenshot_path='/tmp/test2.png',
                workflow_id='workflow_1',
                node_id='node_2',
                attempt_count=1,
                metadata={'element_type': 'button'}
            ),
            RecoveryResult(
                success=True,
                strategy_used='spatial_fallback',
                confidence_score=0.75
            )
        ),
    ]

    for context, result in contexts_and_results:
        repo.store_pattern(context, result)
        print(f"  ✅ Stored: {result.strategy_used} for {context.failure_reason}")

    # Retrieve patterns
    print(f"\n📚 Total patterns stored: {len(repo.get_all_patterns())}")

    for pattern in repo.get_all_patterns():
        print(f"  - {pattern.recovery_strategy}: {pattern.success_rate:.1%} success rate")

    # Cleanup
    import shutil
    shutil.rmtree(temp_dir, ignore_errors=True)


def demo_recovery_strategies():
    """Demonstrate recovery strategies."""
    print_header("3. Recovery Strategies Demo")

    from core.healing.strategies import (
        SemanticVariantStrategy,
        SpatialFallbackStrategy,
        TimingAdaptationStrategy,
        FormatTransformationStrategy
    )

    # Semantic Variants
    print("\n🔤 Semantic Variant Strategy:")
    strategy = SemanticVariantStrategy()
    variants = strategy._get_semantic_variants('submit')
    print(f"  'submit' → {', '.join(variants[:5])}")

    variants = strategy._get_semantic_variants('login')
    print(f"  'login' → {', '.join(variants[:5])}")

    # Spatial Fallback
    print("\n📍 Spatial Fallback Strategy:")
    strategy = SpatialFallbackStrategy()
    print(f"  Search radii: {strategy.search_radii} pixels")

    # Timing Adaptation
    print("\n⏱️  Timing Adaptation Strategy:")
    strategy = TimingAdaptationStrategy()
    print(f"  Min wait: {strategy.min_wait}s")
    print(f"  Max wait: {strategy.max_wait}s")
    print(f"  Adaptation factor: {strategy.adaptation_factor}x")

    # Format Transformation
    print("\n🔄 Format Transformation Strategy:")
    strategy = FormatTransformationStrategy()
    print(f"  Date formats: {len(strategy.date_formats)} variations")
    print(f"  Phone formats: {len(strategy.phone_formats)} variations")


def demo_self_healing_engine():
    """Demonstrate self-healing engine."""
    print_header("4. Self-Healing Engine Demo")

    import tempfile
    temp_dir = Path(tempfile.mkdtemp())
    engine = SelfHealingEngine(storage_path=temp_dir)

    print(f"\n🔧 Engine initialized with {len(engine.recovery_strategies)} strategies")

    # Create a recovery context
    context = RecoveryContext(
        original_action='click',
        target_element='Submit Button',
        failure_reason='element_not_found',
        screenshot_path='/tmp/demo.png',
        workflow_id='demo_workflow',
        node_id='demo_node',
        attempt_count=1,
        confidence_threshold=0.7
    )

    print("\n🔍 Getting recovery suggestions...")
    suggestions = engine.get_recovery_suggestions(context)

    for i, suggestion in enumerate(suggestions, 1):
        print(f"  {i}. {suggestion.strategy}")
        print(f"     Confidence: {suggestion.confidence:.3f}")
        print(f"     Description: {suggestion.description}")
        print(f"     Est. time: {suggestion.estimated_time}s")

    # Cleanup
    import shutil
    shutil.rmtree(temp_dir, ignore_errors=True)


def demo_integration():
    """Demonstrate integration layer."""
    print_header("5. Integration Layer Demo")

    import tempfile
    temp_dir = Path(tempfile.mkdtemp())

    healing = get_self_healing_integration(
        storage_path=temp_dir / 'healing',
        log_path=temp_dir / 'logs',
        enabled=True
    )

    print("\n✅ Self-healing integration initialized")
    print(f"  Enabled: {healing.enabled}")

    # Get statistics
    stats = healing.get_statistics()
    print(f"\n📊 Statistics:")
    print(f"  Total attempts: {stats.get('total_attempts', 0)}")
    print(f"  Successful recoveries: {stats.get('successful_recoveries', 0)}")

    # Cleanup
    import shutil
    shutil.rmtree(temp_dir, ignore_errors=True)


def demo_complete_workflow():
    """Demonstrate a complete recovery workflow."""
|
|
||||||
print_header("6. Complete Recovery Workflow Demo")
|
|
||||||
|
|
||||||
import tempfile
|
|
||||||
temp_dir = Path(tempfile.mkdtemp())
|
|
||||||
|
|
||||||
# Initialize
|
|
||||||
healing = get_self_healing_integration(
|
|
||||||
storage_path=temp_dir / 'healing',
|
|
||||||
log_path=temp_dir / 'logs',
|
|
||||||
enabled=True
|
|
||||||
)
|
|
||||||
|
|
||||||
print("\n📝 Simulating workflow execution failure...")
|
|
||||||
|
|
||||||
# Simulate a failure
|
|
||||||
from core.execution.action_executor import ExecutionResult, ExecutionStatus
|
|
||||||
|
|
||||||
action_info = {
|
|
||||||
'action': 'click',
|
|
||||||
'target': 'Submit Button',
|
|
||||||
'element_type': 'button'
|
|
||||||
}
|
|
||||||
|
|
||||||
execution_result = ExecutionResult(
|
|
||||||
status=ExecutionStatus.TARGET_NOT_FOUND,
|
|
||||||
message='Element not found: Submit Button',
|
|
||||||
duration_ms=100
|
|
||||||
)
|
|
||||||
|
|
||||||
print(f" ❌ Action failed: {execution_result.message}")
|
|
||||||
|
|
||||||
# Attempt recovery
|
|
||||||
print("\n🔧 Attempting recovery...")
|
|
||||||
recovery = healing.handle_execution_failure(
|
|
||||||
action_info=action_info,
|
|
||||||
execution_result=execution_result,
|
|
||||||
workflow_id='demo_workflow',
|
|
||||||
node_id='demo_node',
|
|
||||||
screenshot_path='/tmp/demo.png',
|
|
||||||
attempt_count=1
|
|
||||||
)
|
|
||||||
|
|
||||||
if recovery:
|
|
||||||
if recovery.success:
|
|
||||||
print(f" ✅ Recovery successful!")
|
|
||||||
print(f" Strategy: {recovery.strategy_used}")
|
|
||||||
print(f" Confidence: {recovery.confidence_score:.3f}")
|
|
||||||
print(f" Time: {recovery.execution_time:.3f}s")
|
|
||||||
else:
|
|
||||||
print(f" ❌ Recovery failed")
|
|
||||||
print(f" Reason: {recovery.error_message}")
|
|
||||||
|
|
||||||
# Get insights
|
|
||||||
print("\n💡 Insights:")
|
|
||||||
insights = healing.get_insights()
|
|
||||||
if insights:
|
|
||||||
for insight in insights:
|
|
||||||
print(f" - {insight}")
|
|
||||||
else:
|
|
||||||
print(" - No insights yet (need more data)")
|
|
||||||
|
|
||||||
# Cleanup
|
|
||||||
import shutil
|
|
||||||
shutil.rmtree(temp_dir, ignore_errors=True)
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Run all demos."""
|
|
||||||
print("\n" + "🎯" * 35)
|
|
||||||
print(" SELF-HEALING WORKFLOWS - DEMONSTRATION")
|
|
||||||
print("🎯" * 35)
|
|
||||||
|
|
||||||
try:
|
|
||||||
demo_confidence_scorer()
|
|
||||||
demo_learning_repository()
|
|
||||||
demo_recovery_strategies()
|
|
||||||
demo_self_healing_engine()
|
|
||||||
demo_integration()
|
|
||||||
demo_complete_workflow()
|
|
||||||
|
|
||||||
print("\n" + "=" * 70)
|
|
||||||
print(" ✅ All demos completed successfully!")
|
|
||||||
print("=" * 70)
|
|
||||||
print("\n📚 For more information, see:")
|
|
||||||
print(" - SELF_HEALING_IMPLEMENTATION.md")
|
|
||||||
print(" - SELF_HEALING_QUICKSTART.md")
|
|
||||||
print("\n")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"\n❌ Error during demo: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
return 1
|
|
||||||
|
|
||||||
return 0
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
sys.exit(main())
|
|
||||||
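The `SemanticVariantStrategy` retries a failed element lookup with alternative labels. A minimal standalone sketch of that idea (the synonym table and helper below are illustrative assumptions, not the `core.healing.strategies` implementation):

```python
# Illustrative sketch only — NOT the core.healing.strategies implementation.
# A small synonym table maps a failed target label to alternatives to retry.
SYNONYMS = {
    'submit': ['send', 'ok', 'confirm', 'apply', 'save'],
    'login': ['log in', 'sign in', 'signin', 'connect', 'authenticate'],
}


def get_semantic_variants(label: str) -> list:
    """Return alternative labels to try when the original is not found."""
    key = label.strip().lower()
    variants = list(SYNONYMS.get(key, []))
    # Simple case variations of the original label are cheap extra candidates.
    for candidate in (label.upper(), label.capitalize()):
        if candidate != label and candidate not in variants:
            variants.append(candidate)
    return variants


print(get_semantic_variants('submit')[:5])
# → ['send', 'ok', 'confirm', 'apply', 'save']
```

A real strategy would rank these candidates by match confidence before retrying, as the confidence scores in the demo output suggest.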
@@ -1,77 +0,0 @@
#!/usr/bin/env python3
"""
Demo of the system cleanup mechanism.

Shows how the system cleanly releases all of its resources.
"""

import logging
import sys
import time
from pathlib import Path

# Add current directory to path for imports
sys.path.insert(0, str(Path(__file__).parent))

from core.system import initialize_system_cleanup, shutdown_system


def main():
    """System cleanup demo."""
    print("🎯 RPA Vision V3 - System Cleanup Demo")
    print("=" * 50)

    # Configure logging so the details are visible
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )

    print("1. Initializing system with cleanup...")
    initialize_system_cleanup()

    print("\n2. System is now running with automatic cleanup...")
    print("   - Memory managers registered")
    print("   - GPU resource managers registered")
    print("   - Analytics system registered")
    print("   - Signal handlers installed")

    print("\n3. Simulating some work...")

    # Exercise the managed subsystems
    try:
        from core.execution.memory_cache import get_memory_manager
        from core.gpu.gpu_resource_manager import get_gpu_resource_manager

        # Use the memory manager
        memory_manager = get_memory_manager(enable_monitoring=False)
        print(f"   ✓ Memory manager active: {memory_manager.max_memory_mb}MB limit")

        # Use the GPU manager
        gpu_manager = get_gpu_resource_manager()
        status = gpu_manager.get_status()
        print(f"   ✓ GPU manager active: {status.execution_mode} mode")

        # Simulate some work
        time.sleep(1)

    except Exception as e:
        print(f"   ⚠ Some systems not available: {e}")

    print("\n4. Testing cleanup (Ctrl+C to trigger signal cleanup)...")
    print("   Press Ctrl+C to see signal-based cleanup in action")
    print("   Or wait 5 seconds for programmatic cleanup...")

    try:
        time.sleep(5)
        print("\n5. Triggering programmatic cleanup...")
        shutdown_system()

    except KeyboardInterrupt:
        print("\n5. Signal received! Triggering cleanup...")
        shutdown_system()

    print("\n✅ Cleanup demo completed!")
    print("All resources have been properly cleaned up.")


if __name__ == "__main__":
    main()
@@ -1,381 +0,0 @@
#!/usr/bin/env python3
"""
Demo of the intelligent workflow naming system.

This script shows how to use Agent V0's new intelligent-naming and
enriched-capture features.
"""

import sys
import os
import time
import tempfile
from datetime import datetime

# Add agent_v0 to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'agent_v0'))

from agent_v0.enhanced_raw_session import EnhancedRawSession
from agent_v0.workflow_namer import WorkflowNamer
from agent_v0.ui_dialogs import show_workflow_name_dialog


def demo_basic_naming():
    """Basic naming demo."""
    print("=== Basic Naming Demo ===")

    namer = WorkflowNamer()

    # Create a test session
    session = EnhancedRawSession.create_enhanced(
        user_id="demo_user",
        user_label="Utilisateur Démo",
        workflow_name="Session_Demo"
    )

    # Simulate interactions with a CRM
    print("Simulating CRM interactions...")

    # Click on a "Nouveau Client" button
    session.add_enhanced_mouse_click_event(
        button="left",
        pos=[150, 100],
        window_title="CRM Pro - Gestion Clients",
        app_name="CRM_Pro",
        screenshot_id="shot_001",
        element_type="button",
        element_text="Nouveau Client",
        confidence=0.95
    )

    # Type the client's name
    session.add_enhanced_key_event(
        keys=["Jean", "space", "Dupont"],
        window_title="CRM Pro - Nouveau Client",
        app_name="CRM_Pro",
        screenshot_id="shot_002",
        text_content="Jean Dupont",
        input_method="typing",
        confidence=0.9
    )

    # Type the email address
    session.add_enhanced_key_event(
        keys=["jean.dupont@email.com"],
        window_title="CRM Pro - Nouveau Client",
        app_name="CRM_Pro",
        screenshot_id="shot_003",
        text_content="jean.dupont@email.com",
        input_method="typing",
        confidence=0.9
    )

    # Click "Sauvegarder"
    session.add_enhanced_mouse_click_event(
        button="left",
        pos=[200, 400],
        window_title="CRM Pro - Nouveau Client",
        app_name="CRM_Pro",
        screenshot_id="shot_004",
        element_type="button",
        element_text="Sauvegarder",
        confidence=0.95
    )

    # Generate an intelligent name
    print("Generating intelligent name...")
    intelligent_name = session.generate_intelligent_name()
    print(f"Generated name: {intelligent_name}")

    # Analyze the session
    analysis = session.analyze_session()
    print(f"Workflow type: {analysis.workflow_type}")
    print(f"Primary application: {analysis.primary_application}")
    print(f"Complexity score: {analysis.complexity_score:.2f}")

    # Assess quality
    quality_score = session.get_workflow_quality_score()
    suggestions = session.get_workflow_suggestions()

    print(f"Quality score: {quality_score:.1%}")
    print("Improvement suggestions:")
    for suggestion in suggestions:
        print(f"  • {suggestion}")

    return session


def demo_different_workflow_types():
    """Demo of different workflow types."""
    print("\n=== Workflow Types Demo ===")

    workflows = []

    # 1. Login workflow
    print("\n1. Login Workflow")
    login_session = EnhancedRawSession.create_enhanced(
        user_id="demo_user",
        workflow_name="Demo_Login"
    )

    login_session.add_enhanced_mouse_click_event(
        button="left", pos=[100, 200],
        window_title="Gmail - Sign In", app_name="Chrome",
        screenshot_id="login_001", element_type="input",
        element_text="Email", confidence=0.9
    )

    login_session.add_enhanced_key_event(
        keys=["user@example.com"], window_title="Gmail - Sign In",
        app_name="Chrome", screenshot_id="login_002",
        text_content="user@example.com", input_method="typing"
    )

    login_session.add_enhanced_mouse_click_event(
        button="left", pos=[100, 250],
        window_title="Gmail - Sign In", app_name="Chrome",
        screenshot_id="login_003", element_type="input",
        element_text="Password", confidence=0.9
    )

    login_name = login_session.generate_intelligent_name()
    print(f"Generated name: {login_name}")
    workflows.append(("Login", login_session))

    # 2. Navigation workflow
    print("\n2. Navigation Workflow")
    nav_session = EnhancedRawSession.create_enhanced(
        user_id="demo_user",
        workflow_name="Demo_Navigation"
    )

    # Several navigation clicks
    for i, section in enumerate(["Dashboard", "Reports", "Settings", "Profile"]):
        nav_session.add_enhanced_mouse_click_event(
            button="left", pos=[50 + i*100, 50],
            window_title=f"App - {section}", app_name="WebApp",
            screenshot_id=f"nav_{i:03d}", element_type="link",
            element_text=section, confidence=0.8
        )

    nav_name = nav_session.generate_intelligent_name()
    print(f"Generated name: {nav_name}")
    workflows.append(("Navigation", nav_session))

    # 3. Search workflow
    print("\n3. Search Workflow")
    search_session = EnhancedRawSession.create_enhanced(
        user_id="demo_user",
        workflow_name="Demo_Search"
    )

    search_session.add_enhanced_mouse_click_event(
        button="left", pos=[300, 50],
        window_title="Google", app_name="Chrome",
        screenshot_id="search_001", element_type="input",
        element_text="Search", confidence=0.9
    )

    search_session.add_enhanced_key_event(
        keys=["Python", "space", "tutorial"],
        window_title="Google", app_name="Chrome",
        screenshot_id="search_002", text_content="Python tutorial",
        input_method="typing"
    )

    search_name = search_session.generate_intelligent_name()
    print(f"Generated name: {search_name}")
    workflows.append(("Search", search_session))

    return workflows


def demo_quality_assessment():
    """Quality assessment demo."""
    print("\n=== Quality Assessment Demo ===")

    # High-quality workflow
    print("\n1. High-Quality Workflow")
    high_quality = EnhancedRawSession.create_enhanced(
        user_id="demo_user",
        workflow_name="High_Quality_Demo"
    )

    # Many varied interactions
    for i in range(5):
        high_quality.add_enhanced_mouse_click_event(
            button="left", pos=[100 + i*50, 100 + i*30],
            window_title="Complex App - Form", app_name="ComplexApp",
            screenshot_id=f"hq_{i:03d}", element_type="input",
            element_text=f"Field {i+1}", confidence=0.9
        )

        high_quality.add_enhanced_key_event(
            keys=[f"Value_{i+1}"], window_title="Complex App - Form",
            app_name="ComplexApp", screenshot_id=f"hq_key_{i:03d}",
            text_content=f"Value_{i+1}", input_method="typing"
        )

    hq_score = high_quality.get_workflow_quality_score()
    hq_suggestions = high_quality.get_workflow_suggestions()

    print(f"Quality score: {hq_score:.1%}")
    print("Suggestions:")
    for suggestion in hq_suggestions:
        print(f"  • {suggestion}")

    # Low-quality workflow
    print("\n2. Low-Quality Workflow")
    low_quality = EnhancedRawSession.create_enhanced(
        user_id="demo_user",
        workflow_name="Low_Quality_Demo"
    )

    # A single simple interaction
    low_quality.add_enhanced_mouse_click_event(
        button="left", pos=[100, 100],
        window_title="Simple App", app_name="SimpleApp",
        screenshot_id="lq_001", element_type="button",
        element_text="Click", confidence=0.5
    )

    lq_score = low_quality.get_workflow_quality_score()
    lq_suggestions = low_quality.get_workflow_suggestions()

    print(f"Quality score: {lq_score:.1%}")
    print("Suggestions:")
    for suggestion in lq_suggestions:
        print(f"  • {suggestion}")


def demo_serialization():
    """Enriched serialization demo."""
    print("\n=== Serialization Demo ===")

    session = EnhancedRawSession.create_enhanced(
        user_id="demo_user",
        workflow_name="Serialization_Demo"
    )

    # Add a few events
    session.add_enhanced_mouse_click_event(
        button="left", pos=[100, 200],
        window_title="Test App", app_name="TestApp",
        screenshot_id="ser_001", element_type="button",
        element_text="Test Button", confidence=0.9
    )

    # Close with analysis
    session.close_with_analysis()

    # Save to a temporary directory
    with tempfile.TemporaryDirectory() as temp_dir:
        json_path = session.save_enhanced_json(temp_dir)
        print(f"Session saved: {json_path}")

        # Read back and display the contents
        import json
        with open(json_path, 'r', encoding='utf-8') as f:
            data = json.load(f)

        print(f"Top-level keys: {list(data.keys())}")

        if 'workflow_metadata' in data:
            metadata = data['workflow_metadata']
            print(f"Workflow name: {metadata.get('workflow_name')}")
            print(f"Type: {metadata.get('workflow_type')}")
            print(f"Application: {metadata.get('primary_application')}")
            print(f"Complexity score: {metadata.get('complexity_score', 0):.2f}")

        print(f"Event count: {len(data.get('events', []))}")
        print(f"Enhanced event count: {len(data.get('enhanced_events', []))}")
        print(f"Quality score: {data.get('quality_score', 0):.1%}")


def demo_ui_integration():
    """UI integration demo (if Qt is available)."""
    print("\n=== UI Integration Demo ===")

    try:
        from PyQt5.QtWidgets import QApplication

        # Check whether Qt is available
        app = QApplication.instance()
        if app is None:
            app = QApplication(sys.argv)
            app.setQuitOnLastWindowClosed(False)

        print("Qt5 available - demonstrating the dialogs")

        # Simulate existing names
        existing_names = [
            "Saisie_Client_CRM",
            "Navigation_Dashboard_Admin",
            "Recherche_Produits_Catalogue"
        ]

        print("Existing names:")
        for name in existing_names:
            print(f"  • {name}")

        print("\nTo try the interactive dialog, uncomment the following code:")
        print("# result = show_workflow_name_dialog('Nouveau_Workflow_Demo', existing_names)")
        print("# print(f'Selected name: {result}')")

        # Non-interactive demo
        # result = show_workflow_name_dialog("Nouveau_Workflow_Demo", existing_names)
        # print(f"Selected name: {result}")

    except ImportError:
        print("Qt5 not available - using fallbacks")
        print("Dialogs will use simplified text interfaces")


def main():
    """Main demo entry point."""
    print("Intelligent Workflow Naming System Demo")
    print("=" * 70)

    try:
        # Main demos
        session = demo_basic_naming()
        workflows = demo_different_workflow_types()
        demo_quality_assessment()
        demo_serialization()
        demo_ui_integration()

        # Final summary
        print("\n" + "=" * 70)
        print("Demo Summary")
        print("=" * 70)

        print(f"✓ Base session created: {session.session_id}")
        print(f"✓ {len(workflows)} workflow types demonstrated")
        print("✓ Quality assessment tested")
        print("✓ Enriched serialization validated")
        print("✓ UI integration checked")

        print("\nFeatures demonstrated:")
        print("  • Automatic generation of intelligent names")
        print("  • Workflow type detection")
        print("  • Quality analysis with suggestions")
        print("  • Enriched serialization with metadata")
        print("  • User-interface compatibility")

        print("\nTo use the system:")
        print("  1. Replace TrayApp with EnhancedTrayApp")
        print("  2. Naming dialogs will open automatically")
        print("  3. Workflows will be organized with descriptive names")
        print("  4. See the guide: agent_v0/WORKFLOW_NAMING_GUIDE.md")

        return 0

    except Exception as e:
        print(f"Error during demo: {e}")
        import traceback
        traceback.print_exc()
        return 1


if __name__ == "__main__":
    sys.exit(main())
@@ -1,19 +0,0 @@
# Full Cleanup Deployment
# Removes unusable raw JSON files (those without screenshots)

# 1. Deploy
sudo cp /home/dom/ai/rpa_vision_v3/processing_pipeline.py /opt/rpa_vision_v3/server/processing_pipeline.py
sudo chown rpa:rpa /opt/rpa_vision_v3/server/processing_pipeline.py

# 2. Restart the worker
sudo systemctl restart rpa-vision-v3-worker.service

# 3. Verify
systemctl status rpa-vision-v3-worker.service

# 4. Test with a new session
cd /home/dom/ai/rpa_vision_v3/agent_v0
./run.sh
# Wait 1 minute after the upload
# Check that the raw JSON files have been removed:
ls /opt/rpa_vision_v3/data/training/sessions/$(date +%Y-%m-%d)/
@@ -1,22 +0,0 @@
# Deployment commands - Dashboard Phase 1
# Copy-paste these commands one at a time

# 1. Backup
sudo cp /opt/rpa_vision_v3/web_dashboard/app.py /opt/rpa_vision_v3/web_dashboard/app.py.backup_phase1_$(date +%Y%m%d_%H%M%S)

# 2. Deploy
sudo cp /home/dom/ai/rpa_vision_v3/web_dashboard_app.py /opt/rpa_vision_v3/web_dashboard/app.py

# 3. Permissions
sudo chown rpa:rpa /opt/rpa_vision_v3/web_dashboard/app.py
sudo chmod 644 /opt/rpa_vision_v3/web_dashboard/app.py

# 4. Restart
sudo systemctl restart rpa-vision-v3-dashboard.service

# 5. Verify
systemctl status rpa-vision-v3-dashboard.service

# 6. Tests
curl http://localhost:5001/api/screen_states | python3 -m json.tool | head -50
curl http://localhost:5001/api/agent/sessions | python3 -m json.tool | grep screenshots_count
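The backup step above uses a timestamp suffix so repeated deployments never overwrite an earlier backup. The naming convention can be exercised safely on a throwaway file first (paths here are temporary, not the production ones):

```shell
# Demonstrate the app.py.backup_phase1_$(date ...) naming on a temp file.
f=$(mktemp)
echo "original contents" > "$f"
b="$f.backup_phase1_$(date +%Y%m%d_%H%M%S)"
cp "$f" "$b"
# The backup must be byte-identical to the source before deploying over it.
cmp -s "$f" "$b" && echo "backup verified"
rm -f "$f" "$b"
```

The same `cmp -s` check could be added after step 1 above to abort the deployment if the backup copy is incomplete.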
@@ -1,78 +0,0 @@
#!/bin/bash
# Deployment script - Dashboard Fix Phase 1
# Fixes: screenshot paths + screen_states API route

set -e

echo "=========================================="
echo "DASHBOARD FIX - PHASE 1"
echo "=========================================="
echo ""

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

DEV_FILE="/home/dom/ai/rpa_vision_v3/web_dashboard_app.py"
PROD_FILE="/opt/rpa_vision_v3/web_dashboard/app.py"
BACKUP_DIR="/opt/rpa_vision_v3/web_dashboard"

echo -e "${YELLOW}Changes applied:${NC}"
echo "  1. Fix screenshot path: session_dir/{session_id}/shots/*.png"
echo "  2. Add /api/screen_states route (lists the 236 screen_states)"
echo "  3. Add /api/screen_states/<session_id> route (per-session details)"
echo ""

# Check that the dev file exists
if [ ! -f "$DEV_FILE" ]; then
    echo -e "${RED}❌ Dev file not found: $DEV_FILE${NC}"
    exit 1
fi

echo -e "${YELLOW}Step 1/4: Backing up the prod file...${NC}"
BACKUP_FILE="${BACKUP_DIR}/app.py.backup_phase1_$(date +%Y%m%d_%H%M%S)"
sudo cp "$PROD_FILE" "$BACKUP_FILE"
echo -e "${GREEN}✓ Backup created: $BACKUP_FILE${NC}"
echo ""

echo -e "${YELLOW}Step 2/4: Deploying the modified file...${NC}"
sudo cp "$DEV_FILE" "$PROD_FILE"
sudo chown rpa:rpa "$PROD_FILE"
sudo chmod 644 "$PROD_FILE"
echo -e "${GREEN}✓ File deployed${NC}"
echo ""

echo -e "${YELLOW}Step 3/4: Restarting the dashboard service...${NC}"
sudo systemctl restart rpa-vision-v3-dashboard.service
sleep 2
echo -e "${GREEN}✓ Service restarted${NC}"
echo ""

echo -e "${YELLOW}Step 4/4: Checking the service...${NC}"
if sudo systemctl is-active --quiet rpa-vision-v3-dashboard.service; then
    echo -e "${GREEN}✓ Service active${NC}"
else
    echo -e "${RED}❌ Service not active!${NC}"
    echo ""
    echo "Error logs:"
    sudo journalctl -u rpa-vision-v3-dashboard -n 20 --no-pager
    exit 1
fi
echo ""

echo "=========================================="
echo -e "${GREEN}DEPLOYMENT SUCCESSFUL!${NC}"
echo "=========================================="
echo ""
echo "Tests to run:"
echo "  1. curl http://localhost:5001/api/screen_states | python3 -m json.tool | head -50"
echo "  2. curl http://localhost:5001/api/agent/sessions | python3 -m json.tool | head -50"
echo "  3. Open http://localhost:5001 in the browser"
echo ""
echo "Expected:"
echo "  - Screenshots count > 0 for sessions that have not been cleaned"
echo "  - /api/screen_states returns 236 screen_states"
echo "  - No errors in the existing routes"
echo ""
@@ -1,14 +0,0 @@
# Dashboard UI Deployment - Phase 2
# Adds the "Données Traitées" tab

# 1. Deploy the modified HTML file
sudo cp /home/dom/ai/rpa_vision_v3/dashboard_index.html /opt/rpa_vision_v3/web_dashboard/templates/index.html
sudo chown rpa:rpa /opt/rpa_vision_v3/web_dashboard/templates/index.html

# 2. Restart the dashboard (optional, the HTML is reloaded automatically)
sudo systemctl restart rpa-vision-v3-dashboard.service

# 3. Test in the browser
# Open http://localhost:5001
# Click the "✅ Données Traitées" tab
# Check that the 371 screen states are displayed
deploy_fix.sh
@@ -1,218 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
# Script de déploiement de la correction api_tokens.py
|
|
||||||
# Basé sur le plan approuvé
|
|
||||||
|
|
||||||
set -e
|
|
||||||
|
|
||||||
echo "🚀 RPA Vision V3 - Déploiement Correction Authentification"
|
|
||||||
echo "==========================================================="
|
|
||||||
echo ""
|
|
||||||
|
|
||||||
# Vérifier qu'on est bien root ou avec sudo
|
|
||||||
if [ "$EUID" -ne 0 ]; then
|
|
||||||
echo "❌ Ce script doit être exécuté avec sudo"
|
|
||||||
echo "Usage: sudo bash deploy_fix.sh"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# ÉTAPE 1 : Sauvegarde du Code Actuel
|
|
||||||
# ============================================================================
|
|
||||||
echo "📦 ÉTAPE 1/5 : Sauvegarde du code production actuel"
|
|
||||||
echo "---------------------------------------------------"
|
|
||||||
|
|
||||||
BACKUP_DIR="/opt/rpa_vision_v3/core/security.backup_$(date +%Y%m%d_%H%M%S)"
|
|
||||||
echo "Création de la sauvegarde dans: $BACKUP_DIR"
|
|
||||||
|
|
||||||
if [ -d /opt/rpa_vision_v3/core/security ]; then
|
|
||||||
cp -r /opt/rpa_vision_v3/core/security "$BACKUP_DIR"
|
|
||||||
echo "✅ Sauvegarde créée avec succès"
|
|
||||||
else
|
|
||||||
echo "❌ Répertoire /opt/rpa_vision_v3/core/security introuvable!"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
echo ""
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# ÉTAPE 2 : Déploiement du Code Mis à Jour
|
|
||||||
# ============================================================================
|
|
||||||
echo "📂 ÉTAPE 2/5 : Déploiement du code mis à jour"
|
|
||||||
echo "----------------------------------------------"
|
|
||||||
|
|
||||||
SOURCE_FILE="/home/dom/ai/rpa_vision_v3/core/security/api_tokens.py"
|
|
||||||
DEST_FILE="/opt/rpa_vision_v3/core/security/api_tokens.py"
|
|
||||||
|
|
||||||
if [ ! -f "$SOURCE_FILE" ]; then
|
|
||||||
echo "❌ Fichier source introuvable: $SOURCE_FILE"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo "Copie de $SOURCE_FILE"
|
|
||||||
echo " vers $DEST_FILE"
|
|
||||||
cp "$SOURCE_FILE" "$DEST_FILE"
|
|
||||||
echo "✅ Fichier copié"
|
|
||||||
|
|
||||||
# Correction des permissions
|
|
||||||
echo "Correction des permissions..."
|
|
||||||
chown rpa:rpa "$DEST_FILE"
|
|
||||||
chmod 644 "$DEST_FILE"
|
|
||||||
echo "✅ Permissions configurées (rpa:rpa, 644)"
|
|
||||||
echo ""
|
|
||||||
|
|
||||||
# ============================================================================
# STEP 3: Restart the Services
# ============================================================================
echo "🔄 STEP 3/5: Restarting services"
echo "---------------------------------------"

echo "Reloading the systemd configuration..."
systemctl daemon-reload
echo "✅ Configuration reloaded"
echo ""

SERVICES=(
    "rpa-vision-v3-api.service"
    "rpa-vision-v3-worker.service"
    "rpa-vision-v3-dashboard.service"
)

echo "Restarting services..."
for service in "${SERVICES[@]}"; do
    echo -n "  → $service... "
    if systemctl restart "$service" 2>/dev/null; then
        echo "✅"
    else
        echo "❌ ERROR"
        echo ""
        echo "Failed to restart $service"
        echo "Check the logs: sudo journalctl -u $service -n 50"
        exit 1
    fi
done

echo ""
echo "⏳ Waiting for full startup (5 seconds)..."
sleep 5
echo ""

# Check the status of each service
echo "Checking service status:"
for service in "${SERVICES[@]}"; do
    status=$(systemctl is-active "$service" 2>/dev/null || echo "failed")
    if [ "$status" = "active" ]; then
        echo "  ✅ $service: $status"
    else
        echo "  ❌ $service: $status"
        echo "     Logs: sudo journalctl -u $service -n 20"
    fi
done
echo ""

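The fixed `sleep 5` above can race a slow service start or waste time on a fast one. A polling helper is more robust; sketched here with a marker file standing in for `systemctl is-active` so it runs anywhere:

```shell
#!/usr/bin/env bash
set -u

wait_until_active() {
    # $1: command that succeeds once the service is up; $2: timeout in seconds
    local check="$1" timeout="${2:-30}" elapsed=0
    until eval "$check"; do
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            return 1
        fi
        sleep 1
    done
}

# Stand-in "service": becomes active when the marker file appears
marker="$(mktemp -u)"
( sleep 2; touch "$marker" ) &

if wait_until_active "[ -f \"$marker\" ]" 10; then
    echo "service is active"
fi
```

In the real script the check would be something like `systemctl is-active --quiet "$service"`, polled per service instead of one flat sleep.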
# ============================================================================
# STEP 4: Verify the Logs
# ============================================================================
echo "📋 STEP 4/5: Verifying that the tokens were loaded"
echo "----------------------------------------------------"

echo "Searching the logs for 'TokenManager initialized'..."
TOKEN_LOG=$(journalctl -u rpa-vision-v3-api -n 100 --no-pager | grep -i "tokenmanager initialized" | tail -1)

if [ -n "$TOKEN_LOG" ]; then
    echo "$TOKEN_LOG"

    if echo "$TOKEN_LOG" | grep -q "2 admin tokens, 2 read-only tokens"; then
        echo "✅ SUCCESS: TokenManager loaded 2 admin tokens and 2 read-only tokens"
    elif echo "$TOKEN_LOG" | grep -q "0 admin tokens"; then
        echo "❌ FAILURE: TokenManager reports 0 admin tokens"
        echo "   The code may not have been updated correctly"
        echo ""
        echo "Checking the deployed file:"
        grep -n "prod_admin_token" "$DEST_FILE" | head -2
    else
        echo "⚠️ WARNING: Unexpected token count"
        echo "$TOKEN_LOG"
    fi
else
    echo "❌ No 'TokenManager initialized' line found in the logs"
    echo "   The service may not have started correctly"
    echo ""
    echo "Last log lines:"
    journalctl -u rpa-vision-v3-api -n 10 --no-pager
fi
echo ""

# ============================================================================
# STEP 5: Test the API
# ============================================================================
echo "🌐 STEP 5/5: Testing API authentication"
echo "---------------------------------------------"

echo "Test 1: Request without authentication (should return unauthorized)"
RESPONSE_NO_AUTH=$(curl -s http://localhost:8000/api/traces/status 2>/dev/null || echo '{"error":"connection_failed"}')
echo "  Response: $RESPONSE_NO_AUTH"

if echo "$RESPONSE_NO_AUTH" | grep -q "unauthorized"; then
    echo "  ✅ Correct behavior (unauthorized without a token)"
else
    echo "  ⚠️ Unexpected response"
fi
echo ""

echo "Test 2: Request with an admin token"
# NOTE: hardcoded secret committed to the repo; this should be loaded from the
# environment or a secrets store instead
ADMIN_TOKEN="73cf0db73f9a5064e79afebba96c85338be65cc2060b9c1d42c3ea5dd7d4e490"
RESPONSE_WITH_AUTH=$(curl -s -H "Authorization: Bearer $ADMIN_TOKEN" http://localhost:8000/api/traces/status 2>/dev/null || echo '{"error":"connection_failed"}')
echo "  Response: $RESPONSE_WITH_AUTH"

if echo "$RESPONSE_WITH_AUTH" | grep -q '"status"'; then
    echo "  ✅ SUCCESS: Authentication is working!"
    echo ""
    echo "════════════════════════════════════════════════════════════"
    echo "✅ DEPLOYMENT SUCCESSFUL - Authentication Operational"
    echo "════════════════════════════════════════════════════════════"
elif echo "$RESPONSE_WITH_AUTH" | grep -q "unauthorized"; then
    echo "  ❌ FAILURE: Authentication still returns unauthorized"
    echo ""
    echo "════════════════════════════════════════════════════════════"
    echo "❌ DEPLOYMENT INCOMPLETE - Investigation Required"
    echo "════════════════════════════════════════════════════════════"
    echo ""
    echo "💡 Debugging steps:"
    echo "  1. Check the logs: sudo journalctl -u rpa-vision-v3-api -n 100"
    echo "  2. Check the deployed code: grep -n 'prod_admin_token' $DEST_FILE"
    echo "  3. Restore if needed: sudo cp -r $BACKUP_DIR /opt/rpa_vision_v3/core/security"
else
    echo "  ⚠️ Unexpected response: $RESPONSE_WITH_AUTH"
fi
echo ""

# ============================================================================
# SUMMARY
# ============================================================================
echo "📊 DEPLOYMENT SUMMARY"
echo "========================"
echo ""
echo "Backup:        $BACKUP_DIR"
echo "Deployed code: $DEST_FILE"
echo ""
echo "Services:"
for service in "${SERVICES[@]}"; do
    status=$(systemctl is-active "$service" 2>/dev/null || echo "failed")
    echo "  - $service: $status"
done
echo ""

if echo "$RESPONSE_WITH_AUTH" | grep -q '"status"'; then
    echo "✅ Overall Status: SUCCESS"
    echo ""
    echo "🎯 Next steps:"
    echo "  1. Update .env.local: bash fix_tokens_dev.sh"
    echo "  2. Test the agent: cd agent_v0 && ./run.sh"
    echo "  3. Check the upload: ls -lh /opt/rpa_vision_v3/data/training/sessions/"
else
    echo "⚠️ Overall Status: ATTENTION REQUIRED"
    echo ""
    echo "💡 If the problem persists, restore the backup:"
    echo "  sudo cp -r $BACKUP_DIR /opt/rpa_vision_v3/core/security"
    echo "  sudo systemctl restart rpa-vision-v3-*.service"
fi
echo ""