Analyse and query meter usage (Private preview)
Learn how to query and analyse meter usage data.
Use the Meter Usage Analytics API to query and analyse your customers’ meter usage data. This enables you to build custom usage dashboards, generate reports and identify consumption patterns across your meters.
Query usage data
The Meter Usage Analytics API returns aggregated usage data for a customer within a specified time interval. You can query data by time period, filter by meter dimensions and query across multiple meters simultaneously.
Fetch usage for a single meter
Retrieve usage data for a specific customer and meter over a time range:
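The request below is a minimal sketch in JavaScript. The endpoint URL is a placeholder and the exact parameter encoding is an assumption; the customer, meter and time-range fields reflect the names used elsewhere on this page.

// Placeholder endpoint — substitute the Meter Usage Analytics URL from your preview documentation.
const METER_USAGE_URL = 'https://api.stripe.com/<meter-usage-analytics-endpoint>';

// Query one customer's usage on a single meter over a time range (Unix seconds).
const params = new URLSearchParams({
  customer: 'cus_123',              // example customer ID
  meter: 'mtr_1234567890',          // example meter ID
  start_time: '1735689600',
  end_time: '1735862400',
});

const response = await fetch(`${METER_USAGE_URL}?${params}`, {
  headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}` },
});
const usage = await response.json();
// usage.data contains buckets with bucket_start_time, bucket_end_time,
// bucket_value, meter_id and dimensions.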
Fetch usage for a meter filtered and grouped by meter dimension
Query usage data that’s filtered by premium tier and grouped by model:
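A sketch of the same request with a dimension filter and grouping applied. The dimension_filters and dimension_group_by_keys names come from this page, but the endpoint and exact parameter encoding are assumptions.

const METER_USAGE_URL = 'https://api.stripe.com/<meter-usage-analytics-endpoint>'; // placeholder

const params = new URLSearchParams({
  customer: 'cus_123',
  meter: 'mtr_1234567890',
  start_time: '1735689600',
  end_time: '1735862400',
  'dimension_filters[tier]': 'premium',   // only include premium-tier usage
  'dimension_group_by_keys[0]': 'model',  // break bucket values down by model
});

const response = await fetch(`${METER_USAGE_URL}?${params}`, {
  headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}` },
});
const usage = await response.json();
// Each bucket's dimensions object identifies the model it belongs to.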
Fetch usage across multiple meters
Query usage across multiple meters with different filters and groupings:
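A sketch of a multi-meter query. The per-meter parameter encoding below is an illustrative assumption, and the meter IDs are hypothetical.

const METER_USAGE_URL = 'https://api.stripe.com/<meter-usage-analytics-endpoint>'; // placeholder

const params = new URLSearchParams({
  customer: 'cus_123',
  start_time: '1735689600',
  end_time: '1735862400',
  // First meter: API calls, filtered to one model (illustrative shape)
  'meters[0][meter]': 'mtr_api_calls',
  'meters[0][dimension_filters][model]': 'gpt-4',
  // Second meter: storage, grouped by region (illustrative shape)
  'meters[1][meter]': 'mtr_storage',
  'meters[1][dimension_group_by_keys][0]': 'region',
});

const response = await fetch(`${METER_USAGE_URL}?${params}`, {
  headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}` },
});
const usage = await response.json();
// Use meter_id on each bucket to tell the meters' results apart.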
Build usage dashboards
You can use the API data to create visualisations, such as stacked charts that show usage across different dimensions. The following example demonstrates how you can structure data for a chart that shows API usage by model:
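The request itself isn't reproduced here; a sketch of what it might look like, assuming a daily grouping window and grouping by the model dimension (endpoint and parameter shapes are illustrative):

const METER_USAGE_URL = 'https://api.stripe.com/<meter-usage-analytics-endpoint>'; // placeholder

const params = new URLSearchParams({
  customer: 'cus_123',
  meter: 'mtr_1234567890',
  start_time: '1735689600',              // 2025-01-01 00:00:00 UTC
  end_time: '1735862400',                // 2025-01-03 00:00:00 UTC
  value_grouping_window: 'day',          // one bucket per day (assumed value)
  'dimension_group_by_keys[0]': 'model', // split each bucket by model
});

const stripeApiResponse = await fetch(`${METER_USAGE_URL}?${params}`, {
  headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}` },
}).then(r => r.json());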
An example response to this request looks like:
{ "data": [ { "bucket_start_time": 1735689600, "bucket_end_time": 1735776000, "bucket_value": 1500, "meter_id": "mtr_1234567890", "dimensions": { "model": "gpt-4" } }, { "bucket_start_time": 1735689600, "bucket_end_time": 1735776000, "bucket_value": 800, "meter_id": "mtr_1234567890", "dimensions": { "model": "gpt-3.5-turbo" } }, { "bucket_start_time": 1735776000, "bucket_end_time": 1735862400, "bucket_value": 2100, "meter_id": "mtr_1234567890", "dimensions": { "model": "gpt-4" } }, { "bucket_start_time": 1735776000, "bucket_end_time": 1735862400, "bucket_value": 950, "meter_id": "mtr_1234567890", "dimensions": { "model": "gpt-3.5-turbo" } } ] }
Use this example code to pull data from the API in your back end and display it to users as a stacked bar chart in your front end.
Your back end
// Step 1: Extract the data from the Stripe API response
const data = stripeApiResponse.data;

// Step 2: Create a dictionary to store the processed data
const processedData = {};

// Step 3: Iterate through the data and organize it by date and model
data.forEach(point => {
  const date = new Date(point.bucket_start_time * 1000).toISOString().split('T')[0];
  const model = point.dimensions.model;
  const value = point.bucket_value;

  if (!processedData[date]) {
    processedData[date] = {};
  }
  processedData[date][model] = value;
});

// Step 4: Create a list of unique models and sort them
const models = [...new Set(data.map(point => point.dimensions.model))].sort();

// Step 5: Prepare the data for charting
const chartData = [];
Object.keys(processedData).sort().forEach(date => {
  const dataPoint = { date };
  let cumulativeValue = 0;
  models.forEach(model => {
    const value = processedData[date][model] || 0;
    dataPoint[`${model}_start`] = cumulativeValue;
    cumulativeValue += value;
    dataPoint[`${model}_end`] = cumulativeValue;
    dataPoint[model] = value; // For simple stacked charts
  });
  chartData.push(dataPoint);
});

// Return chart data for front end chart library usage
return chartData;
Your front end
// Step 1: Fetch usage chart data from your back end
const chartData = await fetch('/api/customer_usage/:customer_id').then(r => r.json());

// Step 2: Extract unique models from the chart data
const models = Object.keys(chartData[0]).filter(key =>
  key !== 'date' && !key.endsWith('_start') && !key.endsWith('_end')
);

// Step 3: Use the chart data to create your stacked bar chart
// Example using D3 or Recharts:
createStackedChart({
  data: chartData.map(point => ({
    date: point.date,
    'gpt-4': point['gpt-4'] || 0,
    'gpt-3.5-turbo': point['gpt-3.5-turbo'] || 0
  })),
  stackKeys: models,
  xKey: 'date',
  title: 'Daily API Usage by Model'
});
Rate limits
The Meter Usage Analytics API has its own rate limit of 100 requests per second per account, which is separate from Stripe's overall API rate limits. If you exceed this limit, the API returns a 429 Too Many Requests status code.
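If your dashboards issue bursts of queries, a simple backoff loop helps you stay under the limit. A minimal sketch, assuming a fetchUsage helper of your own that calls the Meter Usage Analytics endpoint and returns the raw Response:

// Retry on 429 responses with exponential backoff.
async function fetchUsageWithRetry(params, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetchUsage(params); // hypothetical helper around the analytics endpoint
    if (response.status !== 429) {
      return response;
    }
    // Wait 500ms, 1s, 2s, 4s, ... before the next attempt
    await new Promise(resolve => setTimeout(resolve, 500 * 2 ** attempt));
  }
  throw new Error('Still rate limited after retries');
}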
Best practices
Handle data freshness
Usage data might have a slight delay. You can use the data_ field in the response to understand data freshness. Consider this latency when building real-time dashboards or alerts.
Customise your queries
Follow these best practices:
- Use appropriate value_grouping_window values to balance granularity with performance.
- Apply dimension_filters to reduce data volume when you only need specific segments.
- Query multiple meters in a single request when analysing related usage patterns.
Data size limits
To prevent overly large responses, the following limits apply per meter:
- A maximum of 2 dimension_group_by_keys
- A maximum of 10 dimension_filters
- A maximum of 3 tenant_filters
Handle errors
The API returns standard HTTP status codes and structured error responses:
{ "error": { "type": "invalid_request_error", "code": "invalid_time_range", "message": "Param start_time should not be greater than end_time" } }