Tracking Anthropic Claude Costs with CostLynx

Track Claude API spend per feature and project with the Python or TypeScript SDK, including prompt cache read tokens.

How Anthropic tracking works

Anthropic does not expose an API for pulling usage data, so CostLynx uses ingestion-only tracking: your application reports token counts after each Claude call. The SDK extracts input_tokens, output_tokens, and cache_read_input_tokens from the Anthropic response object automatically.
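The extraction the SDK performs can be sketched in a few lines. This is an illustration of the behaviour described above, not the SDK's actual source; the helper name and dictionary keys are hypothetical:

```python
from types import SimpleNamespace


def extract_usage(message):
    """Pull the three token counts CostLynx ingests from an Anthropic
    message object. cache_read_input_tokens may be absent or None on
    responses that did not touch the prompt cache."""
    usage = message.usage
    return {
        "input_tokens": usage.input_tokens,
        "output_tokens": usage.output_tokens,
        "cache_read_input_tokens": getattr(usage, "cache_read_input_tokens", None) or 0,
    }


# Stand-in for a real Anthropic response, for illustration only.
message = SimpleNamespace(
    usage=SimpleNamespace(input_tokens=900, output_tokens=120, cache_read_input_tokens=600)
)
print(extract_usage(message))
# {'input_tokens': 900, 'output_tokens': 120, 'cache_read_input_tokens': 600}
```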

Note

Prompt cache reads (cache_read_input_tokens) are tracked separately at the cached token rate, which is significantly cheaper than standard input tokens.
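To see why cache reads are worth tracking separately, consider a cost estimate with illustrative prices. The rates below are made up purely for the arithmetic; they are not CostLynx's or Anthropic's actual pricing, and the helper treats cached reads as a count separate from input_tokens:

```python
def estimate_cost(input_tokens, output_tokens, cached_tokens,
                  input_rate, output_rate, cached_rate):
    """Estimate call cost in USD. Rates are per million tokens;
    cached_tokens are billed at the (cheaper) cached_rate."""
    return (
        input_tokens * input_rate
        + cached_tokens * cached_rate
        + output_tokens * output_rate
    ) / 1_000_000


# Hypothetical rates: $15/M input, $75/M output, $1.50/M cache reads.
# A 10k-token prompt where 8k tokens are served from the cache:
with_cache = estimate_cost(2_000, 500, 8_000, 15.0, 75.0, 1.50)
no_cache = estimate_cost(10_000, 500, 0, 15.0, 75.0, 1.50)
print(f"${with_cache:.4f} with cache vs ${no_cache:.4f} without")
# $0.0795 with cache vs $0.1875 without
```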

Python SDK

Install
pip install "costlynx[anthropic]"
Python
import os
import anthropic
from costlynx import CostLynx

clx = CostLynx(
    ingestion_key=os.environ["COSTLYNX_INGESTION_KEY"],
    default_project="my-app",
    default_environment="prod",
)
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-opus-4-7",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarise this document..."}],
)

clx.track_anthropic_response(message, feature="summariser")
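If you call Claude through a layer that does not hand back the raw response object, you can report the counts yourself. The record below mirrors the fields the TypeScript clx.track call in this guide sends; the Python helper and exact field names are assumptions for illustration, not the SDK's documented API:

```python
def build_track_record(model, input_tokens, output_tokens,
                       cached_tokens=0, feature=None, request_id=None):
    """Assemble a manual ingestion record. Field names are assumed to
    match the TypeScript SDK's track() payload; check your SDK version."""
    record = {
        "provider": "anthropic",
        "model": model,
        "inputTokens": input_tokens,
        "outputTokens": output_tokens,
        "cachedTokens": cached_tokens,
    }
    if feature is not None:
        record["feature"] = feature
    if request_id is not None:
        record["requestId"] = request_id
    return record


record = build_track_record("claude-opus-4-7", 900, 120,
                            cached_tokens=600, feature="summariser")
```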

TypeScript SDK

TypeScript
import Anthropic from "@anthropic-ai/sdk";
import { CostLynx } from "@costlynx/sdk";

const clx = new CostLynx({ ingestionKey: process.env.COSTLYNX_INGESTION_KEY! });
const anthropic = new Anthropic();

const message = await anthropic.messages.create({
  model: "claude-opus-4-7",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});

await clx.track({
  provider: "anthropic",
  model: message.model,
  inputTokens: message.usage.input_tokens,
  outputTokens: message.usage.output_tokens,
  cachedTokens: message.usage.cache_read_input_tokens ?? 0,
  feature: "summariser",
  requestId: message.id,
});

Async support

Every Python SDK method has an async variant prefixed with a: atrack(), atrack_anthropic_response(). Use these in async frameworks such as FastAPI or other asyncio-based services.

Python async
import anthropic
from costlynx import CostLynx

clx = CostLynx(ingestion_key="clx_ingestion_...")
client = anthropic.AsyncAnthropic()

async def summarise(text: str) -> str:
    message = await client.messages.create(
        model="claude-opus-4-7",
        max_tokens=512,
        messages=[{"role": "user", "content": text}],
    )
    await clx.atrack_anthropic_response(message, feature="summariser")
    return message.content[0].text