
Guides

Step-by-step implementation guides for AI cost governance, usage monitoring, optimization, and alerting.
Setup · 5 min read

5-Minute Quickstart: Track Your First LLM Event

Create an ingestion key, send one usage event, and confirm it appears in your dashboard — in under five minutes. An illustrative request sketch follows this card.

What you need · Step 1: Create an ingestion key · Step 2: Send your first event · Step 3: Confirm in the dashboard
Read guide
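To make the quickstart concrete, here is a minimal sketch of sending a single usage event over HTTP. The endpoint URL, header, field names, and environment variable name are illustrative assumptions, not the documented CostLynx ingestion API.

```python
# Hypothetical sketch: endpoint URL, header, and field names are assumptions.
import os
import requests

event = {
    "provider": "openai",        # which LLM provider served the call
    "model": "gpt-4o-mini",      # model name as reported by the provider
    "input_tokens": 1200,        # prompt tokens
    "output_tokens": 350,        # completion tokens
    "feature": "support-chat",   # attribution: product feature that made the call
    "user_id": "user_123",       # attribution: end user who triggered it
}

resp = requests.post(
    "https://api.costlynx.example/v1/events",  # assumed ingestion endpoint
    headers={"Authorization": f"Bearer {os.environ['COSTLYNX_INGESTION_KEY']}"},
    json=event,
    timeout=5,
)
resp.raise_for_status()  # a 2xx here means the event should show up in the dashboard
```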
Setup · 8 min read

Tracking OpenAI Costs with CostLynx

Three ways to track OpenAI spend: automatic provider sync, the TypeScript SDK helper, and the Python SDK helper — with attribution per feature and user. See the sketch after this card.

Option A: Provider sync (OpenAI only) · Option B: Python SDK (recommended for Python apps) · Option C: TypeScript SDK · Attribution fields
Read guide
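As a rough picture of the SDK-helper option, the sketch below reads token usage off an OpenAI chat completion and forwards it with attribution fields. `costlynx.track` and its parameters are stand-ins for whatever the Python SDK actually exposes; only the OpenAI client calls are standard.

```python
import costlynx  # hypothetical SDK import; `track` is an assumed helper name
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this support ticket."}],
)

usage = response.usage
costlynx.track(
    provider="openai",
    model="gpt-4o-mini",
    input_tokens=usage.prompt_tokens,       # prompt-side tokens
    output_tokens=usage.completion_tokens,  # completion-side tokens
    feature="ticket-summaries",             # per-feature attribution
    user_id="user_123",                     # per-user attribution
)
```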
Setup · 7 min read

Tracking Anthropic Claude Costs with CostLynx

Track Claude API spend per feature and project with the Python SDK or TypeScript SDK — including cache read tokens. A short sketch follows below.

How Anthropic tracking works · Python SDK · TypeScript SDK · Async support
Read guide
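Along the same lines, here is a hedged sketch of tracking a Claude call, including cache read tokens from the Anthropic usage object. The `costlynx.track` helper and its field names are assumptions; the Anthropic client and usage fields are standard.

```python
import anthropic
import costlynx  # hypothetical SDK import; tracking field names are assumptions

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Draft a release note for v2.3."}],
)

usage = message.usage
costlynx.track(
    provider="anthropic",
    model=message.model,
    input_tokens=usage.input_tokens,
    output_tokens=usage.output_tokens,
    # Cache read tokens are reported separately when prompt caching is in use.
    cache_read_tokens=getattr(usage, "cache_read_input_tokens", 0) or 0,
    feature="release-notes",    # per-feature attribution
    project="docs-automation",  # per-project attribution
)
```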
Setup · 8 min read

Tracking LLM Costs in FastAPI Applications

Instrument a FastAPI service to track LLM spend per endpoint, user, and feature — with async fire-and-forget tracking that never blocks responses. A sketch of the pattern follows this card.

Install · Per-endpoint tracking · Auto-track all calls with lifespan middleware · Environment configuration
Read guide
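The fire-and-forget pattern the guide describes can be approximated with FastAPI's BackgroundTasks, as in the sketch below. The `costlynx.track` call is a hypothetical stand-in for the real SDK helper; the FastAPI and OpenAI pieces are standard.

```python
import costlynx  # hypothetical SDK import
from fastapi import BackgroundTasks, FastAPI
from openai import AsyncOpenAI

app = FastAPI()
client = AsyncOpenAI()

@app.post("/summarize")
async def summarize(text: str, background: BackgroundTasks):
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": text}],
    )
    # Fire-and-forget: the tracking call runs after the response is returned,
    # so it adds no latency to the endpoint itself.
    background.add_task(
        costlynx.track,
        provider="openai",
        model="gpt-4o-mini",
        input_tokens=response.usage.prompt_tokens,
        output_tokens=response.usage.completion_tokens,
        feature="summarize-endpoint",  # per-endpoint attribution
    )
    return {"summary": response.choices[0].message.content}
```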
Setup · 7 min read

Tracking LangChain LLM Costs with CostLynx

Add a CostLynxCallbackHandler to any LangChain LLM to automatically track token usage after every response — no changes to your chain logic. See the handler sketch below.

Install · CostLynxCallbackHandler · Attach to any LangChain LLM · LCEL / chain composition
Read guide
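For intuition, here is a rough sketch of what a callback-based tracker can look like. The class name matches the guide, but its internals and the `costlynx.track` call are assumptions; the LangChain callback hooks themselves are standard.

```python
import costlynx  # hypothetical SDK import
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI

class CostLynxCallbackHandler(BaseCallbackHandler):
    """Records token usage after every LLM response (illustrative internals)."""

    def __init__(self, feature: str):
        self.feature = feature

    def on_llm_end(self, response, **kwargs):
        # OpenAI chat models typically report token usage in llm_output.
        usage = (response.llm_output or {}).get("token_usage", {})
        costlynx.track(
            provider="openai",
            input_tokens=usage.get("prompt_tokens", 0),
            output_tokens=usage.get("completion_tokens", 0),
            feature=self.feature,
        )

# Attach to any LangChain LLM; chain logic stays unchanged.
llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[CostLynxCallbackHandler("faq-bot")])
llm.invoke("What is our refund policy?")
```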
Foundations · 12 min read

Understanding Token Costs in Production LLM Systems

How input, output, cached, and reasoning tokens accumulate in production, and how to calculate per-request cost accurately across providers. A worked calculation follows this card.

Cost per request: calculation · Input vs output cost dynamics · Prompt caching: when it applies · Reasoning tokens: the invisible cost layer
Read guide
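The per-request calculation the guide covers reduces to a small amount of arithmetic. The sketch below uses placeholder per-million-token prices, not the list price of any particular model; reasoning tokens, where a provider reports them, are typically billed at the output rate.

```python
# Placeholder prices in $ per 1M tokens -- illustrative, not any model's list price.
PRICE_PER_M_INPUT = 2.50    # uncached input tokens
PRICE_PER_M_CACHED = 1.25   # cached input tokens (discounted)
PRICE_PER_M_OUTPUT = 10.00  # output tokens (reasoning tokens usually bill at this rate)

def request_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Cost of one request: cached tokens at the cached rate, the rest at the input rate."""
    uncached = input_tokens - cached_tokens
    return (
        uncached * PRICE_PER_M_INPUT
        + cached_tokens * PRICE_PER_M_CACHED
        + output_tokens * PRICE_PER_M_OUTPUT
    ) / 1_000_000

# e.g. 12,000 input tokens (8,000 of them served from cache) and 900 output tokens
print(f"${request_cost(12_000, 8_000, 900):.4f}")  # -> $0.0290
```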
Governance · 14 min read

Building and Enforcing AI Budget Controls

How to structure org and project-level AI budgets, set threshold strategies, and maintain spend governance without blocking product delivery. An example hierarchy is sketched below.

Step-by-step: structuring a budget hierarchy · Mapping budgets to real workloads · Practical examples · Soft vs hard enforcement
Read guide
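One way to picture an org-to-project budget hierarchy with soft and hard enforcement is sketched below. This is an illustrative data shape, not CostLynx's actual configuration schema.

```python
# Illustrative structure only -- not CostLynx's configuration format.
budgets = {
    "org": {
        "monthly_limit_usd": 50_000,
        "enforcement": "soft",  # notify on breach, never block delivery
        "projects": {
            "support-assistant": {
                "monthly_limit_usd": 20_000,
                "thresholds": [0.5, 0.8, 1.0],  # notify at 50%, 80%, 100% of budget
                "enforcement": "soft",
            },
            "internal-tools": {
                "monthly_limit_usd": 5_000,
                "thresholds": [0.8, 1.0],
                "enforcement": "hard",  # block new requests once the limit is hit
            },
        },
    }
}
```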
Optimization · 16 min read

Optimizing LLM Infrastructure Spend at Production Scale

Model selection, context management, caching, and request shaping strategies that reduce inference spend without degrading production quality. A back-of-the-envelope caching example follows this card.

Lever 1: Model selection · Lever 2: Context window management · Lever 3: Prompt caching · Lever 4: Output shaping
Read guide
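As a quick illustration of the caching lever, the back-of-the-envelope calculation below shows how much of the input bill a reused, cached prompt prefix can remove. Prices and volumes are placeholders; the shape of the calculation is the point.

```python
# Placeholder figures -- the arithmetic, not the numbers, is what matters here.
PRICE_PER_M_INPUT = 2.50    # $ per 1M uncached input tokens
PRICE_PER_M_CACHED = 1.25   # $ per 1M cached input tokens

shared_prefix_tokens = 6_000  # system prompt + few-shot examples reused on every call
requests_per_day = 50_000

uncached = shared_prefix_tokens * requests_per_day * PRICE_PER_M_INPUT / 1_000_000
cached = shared_prefix_tokens * requests_per_day * PRICE_PER_M_CACHED / 1_000_000
print(f"daily spend on the shared prefix: ${uncached:,.0f} uncached vs ${cached:,.0f} cached")
```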
Setup · 18 min read

Instrumenting LLM Usage Monitoring Across Your Stack

Step-by-step integration guide for capturing real-time usage events from OpenAI, Anthropic, and Google Gemini into a centralized cost tracking pipeline. A retry-and-idempotency sketch follows below.

Step 1: Generate ingestion keys · Step 2: Create projects and environments · Step 3: Instrument your LLM call sites · Step 4: Handle retries and idempotency
Read guide
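Step 4's retry-and-idempotency concern can be pictured with the sketch below: one stable idempotency key per logical event, reused across bounded retries so a duplicate delivery can be deduplicated server-side. The endpoint, header name, and environment variable are assumptions.

```python
import os
import time
import uuid

import requests

def send_event(event: dict, max_attempts: int = 3) -> None:
    # One key per logical event, reused on every retry, so the backend can
    # deduplicate if a retry lands after a timed-out first attempt actually succeeded.
    idempotency_key = str(uuid.uuid4())
    for attempt in range(max_attempts):
        try:
            resp = requests.post(
                "https://api.costlynx.example/v1/events",  # assumed endpoint
                headers={
                    "Authorization": f"Bearer {os.environ['COSTLYNX_INGESTION_KEY']}",
                    "Idempotency-Key": idempotency_key,    # assumed header name
                },
                json=event,
                timeout=5,
            )
            resp.raise_for_status()
            return
        except requests.RequestException:
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff between attempts
```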
Operations · 15 min read

Configuring AI Spend Alerts and Anomaly Detection

How to design, configure, and tune threshold alerts and anomaly detection rules for production AI spend — including timing expectations and operational runbooks. Example rule shapes are sketched after this card.

Alert types and when to use each · Step-by-step: configuring your first alert rules · Example alert configurations · Notification design and routing
Read guide
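To give the rule types some shape, here is an illustrative pair of rule definitions: one threshold rule scoped to a project and one org-wide anomaly rule. This is not CostLynx's actual alert schema; the field names are assumptions.

```python
# Illustrative rule shapes only -- not the product's real configuration schema.
alert_rules = [
    {
        "name": "support-assistant daily spend",
        "type": "threshold",
        "scope": {"project": "support-assistant"},
        "window": "1d",                  # evaluate spend over a rolling day
        "threshold_usd": 800,
        "notify": ["#ai-spend-alerts"],  # e.g. a chat channel
    },
    {
        "name": "org-wide spend anomaly",
        "type": "anomaly",
        "scope": {"org": "acme"},
        "baseline_window": "14d",        # learn a normal daily pattern first
        "sensitivity": "medium",
        "notify": ["oncall@acme.example"],
    },
]
```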
Need API reference? Documentation · Looking for definitions? Glossary