innercontext/backend/alembic
Piotr Oleszczyk c87d1b8581 feat(api): implement Phase 2 token optimization and reasoning capture
- Add tiered context system (summary/detailed/full) to reduce token usage by 70-80%
- Replace old _build_products_context with build_products_context_summary_list (Tier 1: ~15 tokens/product vs 150)
- Optimize function tool responses: exclude INCI list by default (saves ~15KB/product)
- Reduce actives from 24 to top 5 in function tools
- Add reasoning_chain field to AICallLog model for observability
- Implement _extract_thinking_content to capture LLM reasoning (MEDIUM thinking level)
- Strengthen prompt enforcement for prohibited fields (dose, amount, quantity)
- Update get_creative_config to use MEDIUM thinking level instead of LOW
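The Tier 1 idea above can be sketched as follows. This is a hypothetical illustration, not the project's actual code: the `Product` shape and the exact line format are assumptions; only the function name `build_products_context_summary_list`, the ~15-token-per-product budget, the INCI exclusion, and the top-5 actives cap come from the commit message.

```python
# Sketch of a Tier 1 (summary) context builder: one compact line per
# product (~15 tokens) instead of a full record (~150 tokens).
from dataclasses import dataclass, field

@dataclass
class Product:
    id: int
    name: str
    category: str
    actives: list = field(default_factory=list)  # e.g. ["niacinamide", ...]
    inci: list = field(default_factory=list)     # full INCI list, detailed tiers only

def build_products_context_summary_list(products):
    """Tier 1 context: short summary lines for the LLM prompt.
    INCI is excluded entirely, and only the top 5 actives are kept,
    mirroring the function-tool response optimization."""
    lines = []
    for p in products:
        top_actives = ", ".join(p.actives[:5])
        lines.append(f"[{p.id}] {p.name} ({p.category}): {top_actives}")
    return "\n".join(lines)
```

A detailed or full tier would then re-add fields (descriptions, the INCI list) only for the specific products a request actually needs.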

Token Savings:
- Routine suggestions: 9,613 → ~1,300 tokens (-86%)
- Batch planning: 12,580 → ~1,800 tokens (-86%)
- Function tool responses: ~15KB → ~2KB per product (-87%)
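As a sanity check on the percentages above, a minimal helper reproduces them from the before/after counts. The ~4-characters-per-token estimator is a common rough heuristic, not the model's real tokenizer; both function names are hypothetical.

```python
# Rough token estimate: ~4 characters per token (approximation only;
# exact counts come from the model's tokenizer).
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

# Percentage reduction between a before/after pair of token counts.
def savings(before: int, after: int) -> int:
    return round(100 * (before - after) / before)
```

With the commit's numbers: `savings(9613, 1300)` and `savings(12580, 1800)` both give 86, and `savings(15000, 2000)` gives 87, matching the -86% / -86% / -87% figures.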

Failures discovered in log analysis (ai_call_log.json):
- Lines 10, 27, 61, 78: LLM returned prohibited dose field
- Line 85: MAX_TOKENS failure (output truncated)
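Since prompt enforcement alone still let prohibited fields through in four logged calls, a response-side guard is a natural complement. This is a sketch under assumptions: the prohibited field names (dose, amount, quantity) come from the commit message, but the function name and the retry-on-violation idea are hypothetical, not the project's actual implementation.

```python
# Hypothetical guard: scan a parsed LLM JSON response for prohibited
# dosage-related keys so violations can be logged and the call retried.
PROHIBITED_FIELDS = {"dose", "amount", "quantity"}

def find_prohibited_fields(payload):
    """Recursively collect prohibited keys at any nesting depth."""
    found = []
    if isinstance(payload, dict):
        for key, value in payload.items():
            if key.lower() in PROHIBITED_FIELDS:
                found.append(key)
            found.extend(find_prohibited_fields(value))
    elif isinstance(payload, list):
        for item in payload:
            found.extend(find_prohibited_fields(item))
    return found
```

A caller could reject any response where this returns a non-empty list, turning the log-analysis finding into an automated check rather than a manual audit.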

Phase 2 complete. Next: two-phase batch planning with safety verification.
2026-03-06 10:26:29 +01:00
versions feat(api): implement Phase 2 token optimization and reasoning capture 2026-03-06 10:26:29 +01:00
env.py fix(backend): apply black/isort formatting and fix ruff noqa annotations 2026-03-01 17:27:07 +01:00
README feat(backend): add Alembic migrations 2026-02-28 20:14:57 +01:00
script.py.mako feat(backend): add Alembic migrations 2026-02-28 20:14:57 +01:00

Generic single-database configuration.