fix: raise max_tokens for CPAM extraction and adversarial validation from 1500 to 3000

Both calls were systematically truncated (done_reason=length),
producing invalid JSON and adversarial false positives.
num_predict only caps output length: it has no impact on VRAM usage
and does not change short responses.
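
The truncation shows up directly in Ollama's response payload. Below is a minimal sketch of what a wrapper like the patched call_ollama might look like; the endpoint, model name, and exact signature are assumptions, and only the max_tokens→num_predict mapping and the done_reason=length symptom come from this commit.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint (assumed)

def call_ollama(prompt: str, temperature: float = 0.0,
                max_tokens: int = 3000, role: str = "") -> str | None:
    """Hypothetical wrapper: max_tokens maps to Ollama's num_predict,
    which only caps generated tokens and costs no extra VRAM."""
    resp = requests.post(OLLAMA_URL, json={
        "model": "llama3",  # assumed model name
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature, "num_predict": max_tokens},
    }, timeout=120)
    resp.raise_for_status()
    data = resp.json()
    # done_reason == "length" means the model hit the num_predict cap and
    # the reply was cut off mid-generation: this is what produced the
    # invalid JSON before this fix.
    if data.get("done_reason") == "length":
        return None
    return data.get("response")
```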

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
dom committed 2026-02-23 10:12:26 +01:00
commit cc642c1143, parent d192af74ec
3 changed files with 6 additions and 6 deletions


@@ -365,9 +365,9 @@ def _validate_adversarial(
     )
     logger.debug(" Validation adversariale")
-    result = call_ollama(prompt, temperature=0.0, max_tokens=1500, role="validation")
+    result = call_ollama(prompt, temperature=0.0, max_tokens=3000, role="validation")
     if result is None:
-        result = call_anthropic(prompt, temperature=0.0, max_tokens=1500)
+        result = call_anthropic(prompt, temperature=0.0, max_tokens=3000)
     if result is None:
         logger.warning(" Validation adversariale échouée — LLM indisponible")
         return None