feat(anonymisation): server-side PII blurring via EDS-NLP + local-first VLM

Server-side PII blurring (core/anonymisation/pii_blur.py):
- OCR pipeline (docTR) → NER (EDS-NLP + regex fallback)
- Targeted detection of first/last names, addresses, NIR, phone numbers, email addresses
- Explicit protection of CIM-10 and CCAM codes, € amounts, dates, and technical IDs
- Dual storage: shot_XXXX_full.png (raw) + _blurred.png (for display)
- 18 tests
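The detection side of the pipeline above can be sketched as follows. This is a minimal, hypothetical illustration of the regex fallback layer only (the primary EDS-NLP NER layer and the docTR OCR step are not reproduced); the pattern set and the `Span` type are assumptions, not the committed code:

```python
import re
from dataclasses import dataclass

# Hypothetical PII patterns for the regex fallback layer.
PII_PATTERNS = {
    "NIR": re.compile(r"\b[12]\d{2}(0[1-9]|1[0-2])\d{2}\d{3}\d{3}\d{2}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b0[1-9](?:[ .-]?\d{2}){4}\b"),
}

# Identifiers that must stay readable (CIM-10 / CCAM style codes).
PROTECTED = re.compile(r"\b[A-Z]\d{2}(\.\d)?\b|\b[A-Z]{4}\d{3}\b")

@dataclass
class Span:
    label: str
    start: int
    end: int
    text: str

def detect_pii(text: str) -> list[Span]:
    """Regex-fallback NER: return PII spans, skipping protected codes."""
    spans = []
    for label, pattern in PII_PATTERNS.items():
        for m in pattern.finditer(text):
            if PROTECTED.fullmatch(m.group()):
                continue  # CIM-10 / CCAM codes are never blurred
            spans.append(Span(label, m.start(), m.end(), m.group()))
    return sorted(spans, key=lambda s: s.start)
```

In the real pipeline, each returned span would be mapped back to its OCR bounding box and that region blurred before writing the `_blurred.png` variant.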

Client:
- RPA_BLUR_SENSITIVE=false by default (server-side blurring only)
- Zero overhead on the user's workstation
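The client-side gate can be sketched as a simple boolean env-var read; the function name is hypothetical, only the `RPA_BLUR_SENSITIVE` variable and its `false` default come from the commit:

```python
import os

def blur_enabled_client_side() -> bool:
    """Client-side blurring is opt-in; the server stays the
    authoritative anonymisation layer (RPA_BLUR_SENSITIVE=false default)."""
    return os.getenv("RPA_BLUR_SENSITIVE", "false").strip().lower() in ("1", "true", "yes")
```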

VLM config:
- vlm_config.py: gemma4:latest, with qwen3-vl:8b + UI-TARS as fallbacks
- think=false set automatically for gemma4 (Ollama 0.20.x bug)
- VWB VLM provider: local-first (Ollama), cloud opt-in via VLM_ALLOW_CLOUD
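The two policies above (automatic think=false for gemma4, cloud access only behind VLM_ALLOW_CLOUD) can be sketched together. This is an assumed shape, not the committed vlm_config.py; only the model names, the Ollama 0.20.x workaround, and the two env vars come from the commit:

```python
import os

def build_vlm_options(model: str) -> dict:
    """Assemble request options for a VLM call (hypothetical sketch)."""
    opts = {
        "model": model,
        "endpoint": os.getenv("VLM_ENDPOINT", "http://localhost:11434"),
    }
    if model.startswith("gemma4"):
        # Workaround: Ollama 0.20.x mishandles thinking mode for gemma4
        opts["think"] = False
    if os.getenv("VLM_ALLOW_CLOUD", "").lower() not in ("1", "true", "yes"):
        # Local-first: refuse any non-local endpoint unless explicitly opted in
        if "localhost" not in opts["endpoint"] and "127.0.0.1" not in opts["endpoint"]:
            raise RuntimeError("Cloud VLM endpoint blocked; set VLM_ALLOW_CLOUD=true to opt in")
    return opts
```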

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Dom
2026-04-14 16:48:23 +02:00
parent a9a99953dd
commit f7b8cddd2b
10 changed files with 1283 additions and 65 deletions


@@ -68,11 +68,11 @@ class SystemConfig:
clip_model: str = "ViT-B-32"
clip_pretrained: str = "openai"
clip_device: str = "cpu"
- vlm_model: str = "qwen3-vl:8b"
+ vlm_model: str = "gemma4:latest"
vlm_endpoint: str = "http://localhost:11434"
owl_model: str = "google/owlv2-base-patch16-ensemble"
owl_confidence_threshold: float = 0.1
# FAISS
faiss_dimensions: int = 512
faiss_index_type: str = "Flat"
@@ -211,7 +211,7 @@ class ConfigurationManager:
clip_model=os.getenv("CLIP_MODEL", "ViT-B-32"),
clip_pretrained=os.getenv("CLIP_PRETRAINED", "openai"),
clip_device=os.getenv("CLIP_DEVICE", "cpu"),
- vlm_model=os.getenv("VLM_MODEL", "qwen3-vl:8b"),
+ vlm_model=os.getenv("RPA_VLM_MODEL", os.getenv("VLM_MODEL", "gemma4:latest")),
vlm_endpoint=os.getenv("VLM_ENDPOINT", "http://localhost:11434"),
owl_model=os.getenv("OWL_MODEL", "google/owlv2-base-patch16-ensemble"),
owl_confidence_threshold=float(os.getenv("OWL_CONFIDENCE_THRESHOLD", "0.1")),
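The nested `os.getenv` call in the hunk above establishes a three-level precedence for the model name: `RPA_VLM_MODEL` overrides `VLM_MODEL`, which overrides the built-in default. A minimal standalone illustration (the helper name is hypothetical):

```python
import os

def resolve_vlm_model() -> str:
    """Precedence: RPA_VLM_MODEL > VLM_MODEL > built-in default,
    mirroring the nested os.getenv call in ConfigurationManager."""
    return os.getenv("RPA_VLM_MODEL", os.getenv("VLM_MODEL", "gemma4:latest"))
```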
@@ -435,7 +435,7 @@ class ModelConfig:
clip_model: str = "ViT-B-32"
clip_pretrained: str = "openai"
clip_device: str = "cpu"
- vlm_model: str = "qwen3-vl:8b"
+ vlm_model: str = "gemma4:latest"
vlm_endpoint: str = "http://localhost:11434"
owl_model: str = "google/owlv2-base-patch16-ensemble"
owl_confidence_threshold: float = 0.1
@@ -510,7 +510,7 @@ class FAISSConfig:
class GPUResourceConfig:
"""Configuration for GPU resource management - DEPRECATED: Use SystemConfig instead"""
ollama_endpoint: str = "http://localhost:11434"
- vlm_model: str = "qwen3-vl:8b"
+ vlm_model: str = "gemma4:latest"
clip_model: str = "ViT-B-32"
idle_timeout_seconds: int = 300
vram_threshold_for_clip_gpu_mb: int = 1024
@@ -599,7 +599,7 @@ UPLOADS_PATH=data/training/uploads
CLIP_MODEL=ViT-B-32
CLIP_PRETRAINED=openai
CLIP_DEVICE=cpu
- VLM_MODEL=qwen3-vl:8b
+ VLM_MODEL=gemma4:latest
VLM_ENDPOINT=http://localhost:11434
OWL_MODEL=google/owlv2-base-patch16-ensemble
OWL_CONFIDENCE_THRESHOLD=0.1