feat: multi-model LLM architecture + quality engine + benchmark

- Multi-model: 4 LLM roles (coding=gemma3:27b-cloud, cpam=gemma3:27b-cloud,
  validation=deepseek-v3.2:cloud, qc=gemma3:12b) resolved via get_model(role)
  (first sketch below)
- Externalized prompts: 7 templates in src/prompts/templates.py
- Ollama cache: model stored per entry (auto-migration of the old format;
  second sketch below)
- call_ollama(): new role= parameter (priority: model > role > global;
  first sketch below)
- Quality engine: veto_engine + decision_engine + rules_router (YAML-driven;
  third sketch below)
- Quality benchmark: scripts/benchmark_quality.py (A/B runs, CIM-10 metrics)
- Biology fix: qualitative values (e.g. negative troponin) are no longer
  filtered out
- CPAM fix: gemma3:27b-cloud instead of deepseek (its thinking output was
  truncating the JSON)
- CPAM max_tokens raised 4000→6000; admin viewer is now multi-model aware
- Benchmark on 10 case files: 100% valid DAS codes, 10/10 CPAM, 243 s/file
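
First sketch: the role → model resolution described above. Only get_model(role)
and the model > role > global priority come from this commit; MODEL_ROLES,
DEFAULT_MODEL, and the function bodies are assumptions.

    MODEL_ROLES = {
        "coding": "gemma3:27b-cloud",
        "cpam": "gemma3:27b-cloud",
        "validation": "deepseek-v3.2:cloud",
        "qc": "gemma3:12b",
    }
    DEFAULT_MODEL = "gemma3:27b-cloud"  # hypothetical global fallback

    def get_model(role=None):
        """Resolve the model for an LLM role, else the global default."""
        return MODEL_ROLES.get(role, DEFAULT_MODEL)

    def call_ollama(prompt, temperature=0.1, max_tokens=4000,
                    model=None, role=None):
        """Priority: explicit model > role mapping > global default."""
        resolved = model or get_model(role)
        ...  # dispatch the request to Ollama with `resolved` (elided)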
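Second sketch: the cache migration, assuming old entries stored the bare
response and the new format tags each entry with the model that produced it.
The field names "model" and "response" are hypothetical.

    def migrate_cache_entry(entry, legacy_model):
        """Upgrade an old-format cache entry; new-format entries pass through."""
        if isinstance(entry, dict) and "model" in entry:
            return entry  # already new format
        # old format: bare response, attributed to the previously configured model
        return {"model": legacy_model, "response": entry}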
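Third sketch: the quality-engine wiring, as a hypothetical illustration.
Only the veto_engine / decision_engine / rules_router split is from this
commit; the YAML schema, rule ids, and actions below are assumptions
(requires PyYAML).

    import yaml

    RULES = yaml.safe_load("""
    rules:
      - id: biologie_qualitative
        when: valeur_qualitative   # e.g. "troponine negative"
        action: keep               # qualitative biology values are not filtered
      - id: code_hors_cim10
        when: code_invalide
        action: veto               # escalated to veto_engine
    """)

    def route(finding):
        """Return the first matching rule's action; otherwise defer to decision_engine."""
        for rule in RULES["rules"]:
            if finding.get(rule["when"]):
                return rule["action"]
        return "decide"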

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: dom
Date:   2026-02-20 00:21:09 +01:00
parent 5c8c2817ec
commit 909e051cc9
39 changed files with 5092 additions and 574 deletions


@@ -407,7 +407,7 @@ class TestGenerateResponse:
]
call_count = {"n": 0}
-def ollama_side_effect(prompt, temperature=0.1, max_tokens=4000):
+def ollama_side_effect(prompt, temperature=0.1, max_tokens=4000, **kwargs):
call_count["n"] += 1
if call_count["n"] == 1:
return {"comprehension_contestation": "Extraction...", "elements_cliniques_pertinents": [], "points_accord_potentiels": [], "codes_en_jeu": {}}
@@ -448,7 +448,7 @@ class TestGenerateResponse:
mock_ollama.return_value = None
call_count = {"n": 0}
-def anthropic_side_effect(prompt, temperature=0.1, max_tokens=4000):
+def anthropic_side_effect(prompt, temperature=0.1, max_tokens=4000, **kwargs):
call_count["n"] += 1
if call_count["n"] == 1:
return {"comprehension_contestation": "Extraction Haiku...", "elements_cliniques_pertinents": [], "points_accord_potentiels": [], "codes_en_jeu": {}}
@@ -1155,7 +1155,7 @@ class TestExtractionPass:
"""L'orchestrateur appelle extraction + argumentation + validation."""
call_count = {"n": 0}
-def ollama_side_effect(prompt, temperature=0.1, max_tokens=4000):
+def ollama_side_effect(prompt, temperature=0.1, max_tokens=4000, **kwargs):
call_count["n"] += 1
if call_count["n"] == 1:
return {
@@ -1249,7 +1249,7 @@ class TestValidateAdversarial:
"""Incohérences détectées → avertissements dans le texte formaté."""
call_count = {"n": 0}
-def ollama_side_effect(prompt, temperature=0.1, max_tokens=4000):
+def ollama_side_effect(prompt, temperature=0.1, max_tokens=4000, **kwargs):
call_count["n"] += 1
if call_count["n"] == 1:
return {"comprehension_contestation": "Extraction", "elements_cliniques_pertinents": [], "points_accord_potentiels": [], "codes_en_jeu": {}}