Implement Phase 1: Safety & Validation for all LLM-based suggestion engines.

- Add input sanitization module to prevent prompt injection attacks
- Implement 5 comprehensive validators (routine, batch, shopping, product parse, photo)
- Add 10+ critical safety checks (retinoid + acid conflicts, barrier compatibility, etc.)
- Integrate validation into all 5 API endpoints (routines, products, skincare)
- Add validation fields to ai_call_logs table (validation_errors, validation_warnings, auto_fixed)
- Create database migration for validation fields
- Add comprehensive test suite (9/9 tests passing, 88% coverage on validators)

Safety improvements:
- Blocks retinoid + acid conflicts in the same routine/day
- Rejects unknown product IDs
- Enforces min_interval_hours rules
- Protects compromised skin barriers
- Prevents prohibited fields (dose, amount) in responses
- Validates all enum values and score ranges

All validation failures are logged, and failing responses are rejected with HTTP 502.
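As an illustration of the first safety improvement listed above, a retinoid + acid conflict check for a single routine can be sketched roughly as follows. This is a minimal, hypothetical sketch: the conflict table, the `ConflictResult` shape, and the `check_routine` helper are all assumptions for illustration, not the validators actually implemented in this commit.

```python
from dataclasses import dataclass, field

# Hypothetical conflict table; the repo's real rules are not shown here.
CONFLICTS = {frozenset({"retinoid", "aha"}), frozenset({"retinoid", "bha"})}

@dataclass
class ConflictResult:
    errors: list = field(default_factory=list)

    @property
    def is_valid(self):
        return not self.errors

def check_routine(steps):
    """Flag conflicting active-ingredient pairs scheduled in the same routine."""
    result = ConflictResult()
    actives = [s["active"] for s in steps]
    for i, a in enumerate(actives):
        for b in actives[i + 1:]:
            if frozenset({a, b}) in CONFLICTS:
                result.errors.append(f"conflict: {a} + {b} in same routine")
    return result

# A routine mixing a retinoid with an AHA should come back invalid.
bad = check_routine([{"active": "retinoid"}, {"active": "aha"}])
```

Keeping the rules as an explicit pair table makes new conflicts a data change rather than a code change, which is one plausible way to grow past the 10+ checks mentioned above.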
17 lines · 661 B · Python
"""LLM response validators for safety and quality checks."""
|
|
|
|
from innercontext.validators.base import ValidationResult
|
|
from innercontext.validators.batch_validator import BatchValidator
|
|
from innercontext.validators.photo_validator import PhotoValidator
|
|
from innercontext.validators.product_parse_validator import ProductParseValidator
|
|
from innercontext.validators.routine_validator import RoutineSuggestionValidator
|
|
from innercontext.validators.shopping_validator import ShoppingValidator
|
|
|
|
__all__ = [
|
|
"ValidationResult",
|
|
"RoutineSuggestionValidator",
|
|
"ShoppingValidator",
|
|
"ProductParseValidator",
|
|
"BatchValidator",
|
|
"PhotoValidator",
|
|
]
|
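The package file above only re-exports the validator classes; the enum and score-range checking described in the commit message could look roughly like the sketch below. The `ValidationResult` shape (`is_valid` plus `errors`) and the `validate_scores` helper are assumptions for illustration only, since `innercontext.validators.base` is not shown in this commit.

```python
from dataclasses import dataclass, field

# Assumed result shape; the real one lives in innercontext.validators.base.
@dataclass
class ValidationResult:
    errors: list = field(default_factory=list)

    @property
    def is_valid(self):
        return not self.errors

def validate_scores(payload, score_fields, lo=0.0, hi=1.0):
    """Reject LLM responses whose score fields fall outside [lo, hi]."""
    result = ValidationResult()
    for name in score_fields:
        value = payload.get(name)
        if not isinstance(value, (int, float)) or not lo <= value <= hi:
            result.errors.append(f"{name}: {value!r} outside [{lo}, {hi}]")
    return result
```

In the flow the commit message describes, an endpoint would run a check like this on the parsed LLM output, log any `errors` to ai_call_logs, and answer HTTP 502 instead of returning the invalid payload.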