feat(api): add LLM response validation and input sanitization
Implement Phase 1: Safety & Validation for all LLM-based suggestion engines.

- Add input sanitization module to prevent prompt injection attacks
- Implement 5 comprehensive validators (routine, batch, shopping, product parse, photo)
- Add 10+ critical safety checks (retinoid + acid conflicts, barrier compatibility, etc.)
- Integrate validation into all 5 API endpoints (routines, products, skincare)
- Add validation fields to ai_call_logs table (validation_errors, validation_warnings, auto_fixed)
- Create database migration for validation fields
- Add comprehensive test suite (9/9 tests passing, 88% coverage on validators)

Safety improvements:

- Blocks retinoid + acid conflicts in same routine/day
- Rejects unknown product IDs
- Enforces min_interval_hours rules
- Protects compromised skin barriers
- Prevents prohibited fields (dose, amount) in responses
- Validates all enum values and score ranges

All validation failures are logged and responses are rejected with HTTP 502.
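A minimal sketch of what the input sanitization step might look like. The commit does not show the implementation, so the function name, pattern list, and length cap below are assumptions, not the actual code.

```python
import re

# Hypothetical sanitizer; the real module's patterns and limits may differ.
_INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"disregard the system prompt", re.IGNORECASE),
]
_MAX_INPUT_LENGTH = 2000  # assumed cap so one field cannot dominate the prompt


def sanitize_user_input(text: str) -> str:
    """Clean user text before it is interpolated into an LLM prompt."""
    # Drop non-printable control characters.
    text = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    # Redact phrases commonly used to override prompt instructions.
    for pattern in _INJECTION_PATTERNS:
        text = pattern.sub("[removed]", text)
    return text[:_MAX_INPUT_LENGTH]
```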
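One of the safety checks, the retinoid + acid conflict, could plausibly be structured as below. RoutineStep, the category names, and check_retinoid_acid_conflict are illustrative stand-ins; only the rule itself (no retinoid and acid in the same routine/day) comes from the commit message.

```python
from dataclasses import dataclass


@dataclass
class RoutineStep:
    """Hypothetical shape of one validated routine step."""
    product_id: str
    category: str  # e.g. "retinoid", "aha", "bha", "moisturizer"
    day: str       # e.g. "monday"


ACIDS = {"aha", "bha"}


def check_retinoid_acid_conflict(steps: list[RoutineStep]) -> list[str]:
    """Return one error per day that pairs a retinoid with an acid."""
    by_day: dict[str, set[str]] = {}
    for step in steps:
        by_day.setdefault(step.day, set()).add(step.category)
    return [
        f"retinoid + acid conflict on {day}"
        for day, categories in by_day.items()
        if "retinoid" in categories and categories & ACIDS
    ]
```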
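The endpoint wiring might look roughly like this. FastAPI is a guess at the framework, and call_llm, validate_routine, and log_ai_call are placeholder names; the commit only states that every failure is logged and that failing responses are rejected with HTTP 502.

```python
from typing import Any

from fastapi import FastAPI, HTTPException

app = FastAPI()


async def call_llm(payload: dict[str, Any]) -> dict[str, Any]:
    return {}  # stand-in for the existing LLM call


def validate_routine(response: dict[str, Any]) -> tuple[list[str], list[str]]:
    return [], []  # stand-in for one of the five validators


def log_ai_call(*, validation_errors: list[str],
                validation_warnings: list[str], auto_fixed: bool) -> None:
    pass  # stand-in for the write to ai_call_logs


@app.post("/routines")
async def generate_routine(payload: dict[str, Any]) -> dict[str, Any]:
    raw = await call_llm(payload)
    errors, warnings = validate_routine(raw)
    # Every call is logged with its validation outcome.
    log_ai_call(validation_errors=errors,
                validation_warnings=warnings,
                auto_fixed=False)
    if errors:
        # Failing responses are rejected rather than returned to the client.
        raise HTTPException(status_code=502,
                            detail="LLM response failed validation")
    return raw
```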
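The migration for the three new ai_call_logs columns could be an Alembic script along these lines; Alembic itself, the column types, and the revision ids are assumptions, since the commit only names the fields.

```python
import sqlalchemy as sa
from alembic import op

revision = "add_validation_fields"  # placeholder revision id
down_revision = None                # placeholder


def upgrade() -> None:
    op.add_column("ai_call_logs",
                  sa.Column("validation_errors", sa.JSON(), nullable=True))
    op.add_column("ai_call_logs",
                  sa.Column("validation_warnings", sa.JSON(), nullable=True))
    op.add_column("ai_call_logs",
                  sa.Column("auto_fixed", sa.Boolean(), nullable=False,
                            server_default=sa.text("false")))


def downgrade() -> None:
    op.drop_column("ai_call_logs", "auto_fixed")
    op.drop_column("ai_call_logs", "validation_warnings")
    op.drop_column("ai_call_logs", "validation_errors")
```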
parent e3ed0dd3a3
commit 2a9391ad32
16 changed files with 2357 additions and 13 deletions
backend/tests/validators/__init__.py (new file, +1)
@@ -0,0 +1 @@
+"""Tests for LLM response validators."""