Add comprehensive token breakdown logging to understand MAX_TOKENS behavior and verify documentation claims about thinking tokens.

New fields added to ai_call_logs:

- thoughts_tokens: thinking tokens (thoughtsTokenCount), documented as separate from the output budget
- tool_use_prompt_tokens: tool use overhead (toolUsePromptTokenCount)
- cached_content_tokens: cached content tokens (cachedContentTokenCount)

Purpose: investigate the token counting mystery from production logs, where:

- prompt_tokens: 4400
- completion_tokens: 589
- total_tokens: 8489 — should be 4400 + 589 = 4989, so 3500 tokens are unaccounted for

According to the Gemini API docs (Polish translation), totalTokenCount = promptTokenCount + candidatesTokenCount, with thoughts NOT included in the total. Production logs nevertheless show a 3500-token gap. The new logging will reveal:

1. Are thinking tokens actually separate from the max_output_tokens limit?
2. Where did the 3500 missing tokens go?
3. Does the MEDIUM thinking level consume output budget despite what the docs say?
4. Are tool use tokens included in the total but not reported separately?

Changes:

- Added 3 new nullable integer columns to ai_call_logs
- Enhanced llm.py to capture all usage_metadata fields
- Used getattr() for safe access, since not all fields exist in every response
- Database migration: 7e6f73d1cc95

This will provide complete data for future LLM calls to diagnose MAX_TOKENS failures, token budget behavior, thinking token costs, and tool use overhead.
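As a concrete illustration of the getattr()-based capture described above, here is a minimal sketch of how llm.py might collect the counters. The helper name extract_token_usage is hypothetical, and the snake_case attribute names assume the google-genai Python SDK equivalents of the REST fields quoted above (thoughtsTokenCount → thoughts_token_count, and so on):

```python
# Hypothetical sketch, not the actual llm.py code: collect every token counter
# from a Gemini response, tolerating fields that are absent in some responses.
from typing import Any, Optional


def extract_token_usage(response: Any) -> dict[str, Optional[int]]:
    usage = getattr(response, "usage_metadata", None)

    def count(field: str) -> Optional[int]:
        # getattr() with a default keeps this safe when a field is missing.
        return getattr(usage, field, None) if usage is not None else None

    return {
        "prompt_tokens": count("prompt_token_count"),
        "completion_tokens": count("candidates_token_count"),
        "total_tokens": count("total_token_count"),
        # New ai_call_logs columns for the token-breakdown investigation:
        "thoughts_tokens": count("thoughts_token_count"),
        "tool_use_prompt_tokens": count("tool_use_prompt_token_count"),
        "cached_content_tokens": count("cached_content_token_count"),
    }
```

With all six counters logged per call, the gap can be checked directly: if total_tokens minus (prompt_tokens + completion_tokens) matches thoughts_tokens plus tool_use_prompt_tokens, the "missing" 3500 tokens are thinking and tool-use overhead that the documented formula does not account for.

The schema change itself is small; a sketch of what revision 7e6f73d1cc95 could look like follows (the down_revision value and column order are assumptions):

```python
"""Add token breakdown columns to ai_call_logs (sketch of 7e6f73d1cc95)."""
import sqlalchemy as sa
from alembic import op

revision = "7e6f73d1cc95"
down_revision = None  # assumption: replace with the actual previous revision id


def upgrade() -> None:
    # Nullable, so existing rows and older code paths keep working.
    op.add_column("ai_call_logs", sa.Column("thoughts_tokens", sa.Integer(), nullable=True))
    op.add_column("ai_call_logs", sa.Column("tool_use_prompt_tokens", sa.Integer(), nullable=True))
    op.add_column("ai_call_logs", sa.Column("cached_content_tokens", sa.Integer(), nullable=True))


def downgrade() -> None:
    op.drop_column("ai_call_logs", "cached_content_tokens")
    op.drop_column("ai_call_logs", "tool_use_prompt_tokens")
    op.drop_column("ai_call_logs", "thoughts_tokens")
```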
Repository contents:

- alembic
- innercontext
- jobs/2026-03-02__17-12-31
- tests
- .coverage
- .env.example
- .python-version
- alembic.ini
- db.py
- main.py
- pgloader.config
- pyproject.toml
- README.md
- skincare.yaml
- test_query.py
- uv.lock
See the root README for setup and usage instructions.