Guides
5-Minute Quickstart: Track Your First LLM Event
Create an ingestion key, send one usage event, and confirm it appears in your dashboard — in under five minutes.
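As a rough illustration of what the quickstart walks through, a single usage event amounts to one authenticated POST. The endpoint URL, header, and payload field names below are placeholders, not the documented CostLynx API; the guide has the real values.

```python
# Hypothetical sketch: endpoint, header, and field names are assumptions,
# not the documented CostLynx ingestion API.
import requests

resp = requests.post(
    "https://api.costlynx.example/v1/events",  # placeholder URL (assumption)
    headers={"Authorization": "Bearer <YOUR_INGESTION_KEY>"},
    json={
        "provider": "openai",
        "model": "gpt-4o-mini",
        "input_tokens": 512,
        "output_tokens": 128,
        "feature": "onboarding-chat",  # attribution labels (assumption)
        "user_id": "user_123",
    },
    timeout=5,
)
resp.raise_for_status()
```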
Tracking OpenAI Costs with CostLynx
Three ways to track OpenAI spend: automatic provider sync, the TypeScript SDK helper, and the Python SDK helper — with attribution per feature and user.
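For orientation, the SDK-helper path boils down to reading the usage block the OpenAI client already returns and forwarding it with attribution labels. In the sketch below, only the OpenAI call and usage fields are real; `track_llm_usage` is a stand-in for the CostLynx helper, whose actual name and arguments may differ.

```python
# Sketch only: track_llm_usage stands in for the CostLynx Python SDK helper
# (assumed interface). The OpenAI usage fields are real.
from openai import OpenAI

client = OpenAI()

def track_llm_usage(**event):
    """Placeholder for the CostLynx SDK helper (assumed interface)."""
    print("would send:", event)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our Q3 report."}],
)

usage = response.usage  # real field on OpenAI chat completions
track_llm_usage(
    provider="openai",
    model=response.model,
    input_tokens=usage.prompt_tokens,
    output_tokens=usage.completion_tokens,
    feature="report-summaries",  # attribution labels (assumption)
    user_id="user_123",
)
```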
Tracking Anthropic Claude Costs with CostLynx
Track Claude API spend per feature and project with the Python SDK or TypeScript SDK — including cache read tokens.
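A minimal sketch of the same idea for Claude: the Anthropic usage fields (`input_tokens`, `output_tokens`, `cache_read_input_tokens`) are real, while the event shape and the final send call are placeholders for the CostLynx SDK.

```python
# Sketch only: the tracking call is a placeholder; the Anthropic usage fields
# shown here are what the Messages API actually returns.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Draft a release note."}],
)

usage = message.usage
event = {
    "provider": "anthropic",
    "model": message.model,
    "input_tokens": usage.input_tokens,
    "output_tokens": usage.output_tokens,
    # cache read tokens are billed at a discount; default to 0 if absent
    "cache_read_tokens": getattr(usage, "cache_read_input_tokens", 0) or 0,
    "feature": "release-notes",  # attribution label (assumption)
}
print("would send:", event)  # replace with the CostLynx SDK call
```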
Tracking LLM Costs in FastAPI Applications
Instrument a FastAPI service to track LLM spend per endpoint, user, and feature — with async fire-and-forget tracking that never blocks responses.
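The fire-and-forget pattern the guide describes can be sketched with FastAPI's built-in `BackgroundTasks`, which runs queued work after the response has been sent; `send_usage_event` below is a placeholder for the actual tracking call.

```python
# Minimal sketch of fire-and-forget tracking with FastAPI's BackgroundTasks;
# send_usage_event is a placeholder for whatever ships the event to CostLynx.
from fastapi import BackgroundTasks, FastAPI
from openai import AsyncOpenAI

app = FastAPI()
client = AsyncOpenAI()

def send_usage_event(event: dict) -> None:
    """Placeholder: ship the event to your cost-tracking backend."""
    print("would send:", event)

@app.post("/summarize")
async def summarize(text: str, background_tasks: BackgroundTasks):
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    # Queue tracking to run after the response is returned,
    # so it never blocks the caller.
    background_tasks.add_task(
        send_usage_event,
        {
            "endpoint": "/summarize",
            "input_tokens": response.usage.prompt_tokens,
            "output_tokens": response.usage.completion_tokens,
        },
    )
    return {"summary": response.choices[0].message.content}
```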
Tracking LangChain LLM Costs with CostLynx
Add a CostLynxCallbackHandler to any LangChain LLM to automatically track token usage after every response — no changes to your chain logic.
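To show the underlying mechanism rather than the shipped handler, here is a bare-bones LangChain callback that reads token usage in `on_llm_end`; `CostLynxCallbackHandler` itself will differ internally.

```python
# Illustrative only: this shows the general LangChain callback pattern the
# guide relies on, not the shipped CostLynxCallbackHandler.
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult
from langchain_openai import ChatOpenAI

class UsageTrackingHandler(BaseCallbackHandler):
    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # For OpenAI chat models, aggregated token counts typically land
        # in llm_output under "token_usage".
        usage = (response.llm_output or {}).get("token_usage", {})
        print("would send:", usage)  # replace with the CostLynx ingestion call

llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[UsageTrackingHandler()])
llm.invoke("Explain prompt caching in one sentence.")
```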
Understanding Token Costs in Production LLM Systems
How input, output, cached, and reasoning tokens accumulate in production, and how to calculate per-request cost accurately across providers.
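A worked example of the per-request arithmetic, using made-up per-million-token prices: cached input is typically discounted relative to fresh input, and reasoning tokens, where a provider reports them, are generally billed at the output rate.

```python
# Worked example with illustrative per-million-token prices (not real rates):
# cached input is billed at a steep discount to fresh input.
PRICES = {"input": 2.50, "cached_input": 1.25, "output": 10.00}  # USD per 1M tokens

def request_cost(input_tokens, cached_tokens, output_tokens, prices=PRICES):
    fresh_input = input_tokens - cached_tokens
    return (
        fresh_input * prices["input"]
        + cached_tokens * prices["cached_input"]
        + output_tokens * prices["output"]
    ) / 1_000_000

# 8,000-token prompt, 6,000 of it served from cache, 500 output tokens
print(f"${request_cost(8_000, 6_000, 500):.6f}")  # ≈ $0.017500
```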
Building and Enforcing AI Budget Controls
How to structure org and project-level AI budgets, set threshold strategies, and maintain spend governance without blocking product delivery.
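Purely as an illustration of the hierarchy, not CostLynx's actual configuration format, an org budget split into project budgets with fractional alert thresholds might look like this:

```python
# Illustrative structure only (assumed shape, not CostLynx config):
# a monthly org budget split into project budgets, each with alert thresholds
# expressed as fractions of the budget at which notifications fire.
budgets = {
    "org_monthly_usd": 50_000,
    "projects": {
        "search-assistant": {"monthly_usd": 20_000, "alert_at": [0.5, 0.8, 1.0]},
        "support-copilot":  {"monthly_usd": 15_000, "alert_at": [0.5, 0.8, 1.0]},
        "internal-tools":   {"monthly_usd": 5_000,  "alert_at": [0.8, 1.0]},
    },
}

# Unallocated headroom kept at the org level: $10,000
print(budgets["org_monthly_usd"] - sum(p["monthly_usd"] for p in budgets["projects"].values()))
```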
Optimizing LLM Infrastructure Spend at Production Scale
Model selection, context management, caching, and request shaping strategies that reduce inference spend without degrading production quality.
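A back-of-envelope calculation, with made-up prices and volumes, of why model selection is usually the first lever: routing the simpler majority of requests to a cheaper model changes daily spend dramatically.

```python
# Illustrative prices and volumes only.
LARGE = 10.00  # USD per 1M output tokens for the premium model (made up)
SMALL = 0.60   # USD per 1M output tokens for the cheaper model (made up)

requests_per_day = 100_000
tokens_per_request = 400

all_large = requests_per_day * tokens_per_request * LARGE / 1e6
mixed = requests_per_day * tokens_per_request * (0.3 * LARGE + 0.7 * SMALL) / 1e6
print(f"${all_large:.0f}/day vs ${mixed:.0f}/day")  # $400/day vs $137/day
```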
Instrumenting LLM Usage Monitoring Across Your Stack
Step-by-step integration guide for capturing real-time usage events from OpenAI, Anthropic, and Google Gemini into a centralized cost tracking pipeline.
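The core of such a pipeline is normalizing each provider's usage payload into one event schema. The schema keys below are an assumption; the provider-side attribute names are the fields the OpenAI, Anthropic, and Gemini SDKs actually return.

```python
# Sketch of a provider-agnostic usage event: the dict keys are an assumed
# common schema; the attribute names read from `raw` are the real usage
# fields each provider's response exposes.
def normalize_usage(provider: str, raw) -> dict:
    if provider == "openai":      # chat.completions response.usage
        return {"input": raw.prompt_tokens, "output": raw.completion_tokens}
    if provider == "anthropic":   # messages response.usage
        return {"input": raw.input_tokens, "output": raw.output_tokens}
    if provider == "gemini":      # generate_content response.usage_metadata
        return {"input": raw.prompt_token_count, "output": raw.candidates_token_count}
    raise ValueError(f"unknown provider: {provider}")
```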
Configuring AI Spend Alerts and Anomaly Detection
How to design, configure, and tune threshold alerts and anomaly detection rules for production AI spend — including timing expectations and operational runbooks.
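As a toy example of an anomaly rule, not how CostLynx implements detection, flagging a day that exceeds the trailing mean by more than three standard deviations looks like this:

```python
# Toy anomaly rule for illustration only.
from statistics import mean, stdev

def is_spend_anomaly(trailing_daily_spend: list[float], today: float, k: float = 3.0) -> bool:
    mu, sigma = mean(trailing_daily_spend), stdev(trailing_daily_spend)
    return today > mu + k * sigma

history = [42.0, 45.5, 39.8, 44.2, 41.1, 46.3, 43.7]
print(is_spend_anomaly(history, today=95.0))  # True: roughly double the baseline
```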