Record usage for billing using S3 (Public preview)
Report usage events from your S3 bucket in bulk.
You can send meter usage events to Stripe from your S3 storage bucket. Stripe parses, validates, and transforms the file contents into meter events. After the events from your file are successfully uploaded, Stripe displays them on your subscription invoice.
You can upload your meter usage events in CSV, JSON, or JSONLINE file formats.
Include the following fields in your file and make sure that they follow the Meter Event schema.
- `identifier`: A unique identifier for the event. If not provided, Stripe generates one. We recommend using a globally unique identifier for this.
- `timestamp`: The time of the event, measured in seconds since the Unix epoch.
- `event_name`: The name of the meter event.
- `payload_columns`: The set of columns containing key names for customer and numerical usage values.
- `payload_stripe_customer_id`: The stripe_customer_id that the event gets created against.
- `payload_value`: The numerical usage value of the meter event. By default, the column name is `payload_value`. However, you can set it to match the field name that you set when creating the meter event (prepend it with `payload_`).
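For example, a CSV file that follows this schema might look like the following sketch, where the `api_requests` meter name and the customer IDs are placeholders:

```csv
identifier,timestamp,event_name,payload_stripe_customer_id,payload_value
evt_00001,1733825048,api_requests,cus_PlaceholderA,25
evt_00002,1733825112,api_requests,cus_PlaceholderB,12
```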
Need support for a different file format?
If you want to upload files with a different structure or in a custom format, contact us.
Before you begin
Make sure you have the following:
- An active AWS account and S3 bucket with access to the relevant files.
- Admin account access to the Stripe Dashboard.
Log in to your AWS account
You need access to the AWS Management Console during the configuration process, so sign in before you continue.
Prepare your files in Amazon S3
To validate your connection configuration, use well-formatted data in your S3 bucket. The configuration process shows you available files, and runs an initial sync when the connection is configured.
- Visit your Amazon S3 console
- Make sure that your files are stored in a designated S3 bucket and organized according to your import preferences.
- If you don’t currently have an S3 bucket, you can follow the AWS guidelines for creating your first bucket.
- Stripe has the following file requirements for successful retrieval:
- File names must adhere to S3 Object naming conventions.
- The maximum file size is 1 GB.
- Remember the bucket name and region because you need them for future steps.
- Keep your AWS Console open to configure an IAM role in future steps.
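If you script your uploads, a minimal sketch using boto3 looks like the following; the bucket name, region, and object key are placeholders:

```python
import boto3

# Placeholder bucket, region, and object key; substitute your own values.
s3 = boto3.client("s3", region_name="us-east-1")
s3.upload_file(
    Filename="meter_events_2024-12-10.csv",  # local file to upload
    Bucket="my-meter-events-bucket",
    Key="meter-events/meter_events_2024-12-10.csv",  # folder prefix Stripe reads from
)
```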
Configure the Stripe Amazon S3 Connector to import files
- In the Dashboard, go to Data Management > Connectors.
- Click + Add connector > Amazon S3.
- Provide a unique connector name, then click Next.
- Navigate to the IAM console in your Amazon console.
- Create a Custom trust policy, and create a role:
- In the navigation pane of the console, click Policies > Create policy.
- To create your permission policy, select JSON and replace the existing policy text by copying and pasting the provided code block. In the Resource section of the Policy editor code block, replace `USER_TARGET_BUCKET` with your intended bucket name (a sketch of a typical permission policy appears after this procedure). Click Next. Under Policy details, add a policy name, along with any tags (optional), then click Create policy.
- Return to the navigation pane of the console, then click Roles > Create role.
- Choose the Custom trust policy role type, copy and paste the provided code block, then click Next.
- To select your permission policy, locate the newly created permission policy in the list. Select the checkbox to enable the policy, scroll down, then click Next.
- To create a role name, copy and paste the provided role name, then click Create role.
- Establish the connection between your Amazon S3 bucket and Stripe:
- From the AWS Console, find and provide your AWS Account ID.
- Provide the Bucket Name and Region from your AWS Console.
- If you use folders to organize your files in your Amazon S3 bucket, specify a folder within the above bucket.
- If you specify a folder within the above bucket, we only fetch data from this folder, not the entire bucket.
- After you set up a new connector, Stripe fetches all data from the Amazon S3 bucket that was modified in the last 90 days.
- We fetch data every 5 minutes.
- Only objects with a `LastModified` date later than the last sync are imported for recurring imports.
- Make sure that the file name is under 255 characters and includes the appropriate extension, such as `.csv`, `.json`, or `.jsonl`.
- Preview the files available in the connected Amazon S3 bucket:
- The file preview validates that your credentials connect Stripe with the expected Amazon S3 bucket and folder.
- The data template associates this connection with an expected file format for initial and recurring imports.
- For JSON format files, choose the Billing Meter Event Transaction Template - JSON.
- For JSONLINE format files, choose the Billing Meter Event Transaction Template - JSONLINE.
- For CSV format files, choose the Billing Meter Event Transaction Template - CSV.
- Click Done to create an Active Data Connection and initiate the initial Data Import.
- After you upload a file to the S3 connector, the usage events appear within five minutes. If your bucket contains many unprocessed files, processing might take longer.
- You can check the status of processed files from your S3 bucket on the Import Set tab in the Dashboard. This page provides granular details, including the number of created records.
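For reference, the permission policy you create in the IAM console grants Stripe read access to your bucket. The following is only a sketch of what such a policy typically looks like; the code block provided in the Dashboard is authoritative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::USER_TARGET_BUCKET",
        "arn:aws:s3:::USER_TARGET_BUCKET/*"
      ]
    }
  ]
}
```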
Rate limits
The S3 connector processes your uploaded data at a rate of 10,000 events per second. If you upload large files or a high volume of files to your bucket, we poll and process the data to maintain this throughput rate.
For example, if you upload 100 files once a day, each containing 100,000 records, it takes approximately 17 minutes to process the entire dataset (10 million events).
Best practices
- In each run, the S3 connector polls a maximum of 50 files or up to 10 GB of data.
- You can upload any number of files and records to your S3 bucket. The S3 connector pulls the data according to the rate limits. We process any remaining files or data in subsequent runs on a first-come, first-served basis. It might take multiple runs to completely process all the data.
- Upload a file every 10 seconds or when the current file reaches one million records, whichever comes first. After upload, you can start adding events to a new file (see the sketch after this list).
- Avoid creating empty files, even if they are non-zero byte files. Examples include:
- CSV: Files containing only the header row.
- JSON: Files containing only [] (empty square brackets).
- JSONLINE: Files containing only {} (empty curly brackets).

Although S3 accepts these files, they increase the object and file count, which might cause delays in polling files.
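A minimal sketch of that batching pattern in Python, rotating to a new file every 10 seconds or one million records, whichever comes first; the upload function is a placeholder you supply:

```python
import time

MAX_RECORDS = 1_000_000  # rotate when the current batch reaches one million records
MAX_AGE_SECONDS = 10     # or every 10 seconds, whichever comes first

class BatchingUploader:
    """Buffers meter event rows and uploads them as rotated batches."""

    def __init__(self, upload_fn):
        self.upload_fn = upload_fn  # for example, a wrapper around S3 upload_file
        self.rows = []
        self.opened_at = time.monotonic()

    def add(self, row):
        self.rows.append(row)
        age = time.monotonic() - self.opened_at
        if len(self.rows) >= MAX_RECORDS or age >= MAX_AGE_SECONDS:
            self.flush()

    def flush(self):
        if self.rows:  # never upload an empty file
            self.upload_fn(self.rows)
        self.rows = []
        self.opened_at = time.monotonic()
```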
Error reporting and handling
The following sections describe failures that might occur when uploading a file to ingest meter events.
Format issues
These errors occur when the uploaded file’s contents contain formatting or data type issues. For example:
- Processing fails for the entire file if you omit a mandatory column, such as `event_name`.
- Processing fails for individual records with invalid formatting, such as a missing value for a mandatory field like `stripe_customer_id` or `event_name`. Valid records still process successfully.
Stripe processes files asynchronously by polling the files that you upload to the S3 bucket. If we detect errors during processing, Stripe notifies you using events. You can subscribe to the following events using a webhook endpoint and, based on the event type, implement your own logic to handle these errors.
| Event | Description | Payload type |
| --- | --- | --- |
| `data_management.import_set.failed` | This event occurs when processing fails for an entire file. | Snapshot |
| `data_management.import_set.succeeded` | This event identifies individual record failures in a partially processed file. | Snapshot |
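For example, a minimal sketch of a webhook handler for these snapshot events, assuming a Flask app and the stripe Python library; the route path and environment variable names are placeholders:

```python
import os

import stripe
from flask import Flask, request

app = Flask(__name__)
endpoint_secret = os.environ["STRIPE_WEBHOOK_SECRET"]  # from your endpoint settings

@app.route("/webhooks", methods=["POST"])
def handle_import_set_events():
    # Verify the signature and parse the snapshot event payload.
    event = stripe.Webhook.construct_event(
        request.data, request.headers["Stripe-Signature"], endpoint_secret
    )
    import_set = event["data"]["object"]
    if event["type"] == "data_management.import_set.failed":
        # The entire file failed; log the reason so you can fix and re-upload it.
        print(f"Import failed: {import_set['failed_reason']}")
    elif event["type"] == "data_management.import_set.succeeded":
        errors = (import_set.get("result") or {}).get("errors") or {}
        if errors.get("row_count"):
            # Partially processed: an error file lists the failed records.
            print(f"{errors['row_count']} record(s) failed; see {errors['file']}")
    return "", 200
```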
Invalid file format
Stripe creates a `data_management.import_set.failed` event when an entire file fails. As the following example shows, you can find the reason for failure under the `failed_reason` key and fix it before re-uploading.
{ "object": { "id": "impset_test_61RdoFsHlLDUVWcpq41043aNvT20bVhA", "object": "data_management.import_set", "created": 1733825048650, "failed_reason": "[Missing required keys - [event_name]]", "file": "file_1QUQIW043aNvT20b3NgYbtox", "livemode": false, "source_data_format": "gsdf_test_61RE2aHAeNhvGQdlZ5AwC", "status": "failed", "status_transitions": { "archived_at": null, "succeeded_at": null } }, "previous_attributes": null }
Invalid record format
For partially processed files, you can see the details of the failed records in the `result` parameter of the `data_management.import_set.succeeded` event, as shown in the following example.
{ "object": { "id": "impset_test_61RgPF3UUFYNqIiaa4103UU8Ng78OGv2", "object": "data_management.import_set", "created": 1734443881685, "failed_reason": null, "file": "file_1QX1Hh03UU8Ng78OHYE8hMTx", "livemode": false, "metadata": {}, "result": { "errors": { "file": "file_1QX1Hl03UU8Ng78O2v6hYQp3", "row_count": 1 }, "objects_created": 2, "rows_processed": 3, "skipped_by_filter": { "file": null, "row_count": 0 }, "skipped_duplicates": { "file": null, "row_count": 0 }, "successes": { "row_count": 2 } }, "source_data_format": "gsdf_test_61RE2aHAeNhvGQdlZ5AwC", "status": "succeeded_with_errors", "status_transitions": { "archived_at": null, "succeeded_at": 1734443886825 } }, "previous_attributes": null }
Check the `status` field in the event. A `succeeded_with_errors` status indicates that at least one record failed due to invalid formatting. The event’s `result.errors` gives the number of records that failed and the ID of the file containing the failed records. You can download this error file using the Files API for a complete list of the failed records and detailed error descriptions.
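A minimal sketch of downloading that error file with the stripe Python library and requests; the file ID comes from the event payload shown above:

```python
import os

import requests
import stripe

stripe.api_key = os.environ["STRIPE_API_KEY"]

# File ID from result.errors.file in the import set event payload.
error_file = stripe.File.retrieve("file_1QX1Hl03UU8Ng78O2v6hYQp3")

# File contents live at a files.stripe.com URL and require your secret key.
response = requests.get(error_file.url, auth=(stripe.api_key, ""))
response.raise_for_status()
print(response.text)  # the failed records with detailed error descriptions
```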
Data issues
Files with correct formatting can fail processing due to invalid data within the file, such as incorrect values for the `event_name` or `stripe_customer_id`. You can subscribe to the following events for detailed information about these failures.
| Event | Description | Payload type |
| --- | --- | --- |
| `v1.billing.meter.error_report_triggered` | This event occurs when a meter has invalid usage events. | Thin |
| `v1.billing.meter.no_meter_found` | This event occurs when usage events have missing or invalid meter IDs. | Thin |
Warning
To create an event destination that subscribes to thin events, enable Workbench in your Developer settings.
Example payloads
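The exact payloads vary by event type; a representative thin event envelope looks roughly like the following sketch (all identifiers are placeholders):

```json
{
  "id": "evt_placeholder",
  "object": "v2.core.event",
  "type": "v1.billing.meter.error_report_triggered",
  "created": "2024-12-10T10:00:00.000Z",
  "livemode": false,
  "related_object": {
    "id": "mtr_placeholder",
    "type": "billing.meter",
    "url": "/v1/billing/meters/mtr_placeholder"
  }
}
```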
Error codes
The `code` in the event’s `reason` object provides the error categorization that triggered the error. Possible error codes include:
- `meter_event_customer_not_found`
- `meter_event_no_customer_defined`
- `meter_event_dimension_count_too_high`
- `archived_meter`
- `timestamp_too_far_in_past`
- `timestamp_in_future`
- `meter_event_value_not_found`
- `meter_event_invalid_value`
- `no_meter` (supported only for the `v1.billing.meter.no_meter_found` event type)
Listen to events
Set up an event destination to listen to events.
On the Event destinations tab in Workbench, click Create new destination. Alternatively, use this template to configure a new destination in Workbench with the two event types pre-selected.
Click Show advanced options, then select the Thin payload style.
Select `v1.billing.meter.error_report_triggered` and `v1.billing.meter.no_meter_found` from the list of events.

Create a handler to process the event, as shown in the sketch below.
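A minimal sketch of such a handler, assuming Flask and the stripe Python library; it verifies the signature before trusting the thin payload, and the route path and environment variable names are placeholders:

```python
import json
import os

import stripe
from flask import Flask, request

app = Flask(__name__)
webhook_secret = os.environ["STRIPE_WEBHOOK_SECRET"]  # placeholder env var

@app.route("/webhooks", methods=["POST"])
def handle_meter_error_events():
    payload = request.data.decode("utf-8")
    # Verify that the request really came from Stripe before trusting it.
    stripe.WebhookSignature.verify_header(
        payload, request.headers["Stripe-Signature"], webhook_secret
    )
    event = json.loads(payload)
    if event["type"] in (
        "v1.billing.meter.error_report_triggered",
        "v1.billing.meter.no_meter_found",
    ):
        # Thin payloads carry identifiers only; fetch full details as needed,
        # then queue the invalid events for correction and re-upload.
        print(f"Received {event['type']}: {event.get('id')}")
    return "", 200
```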
Test your handler by configuring a local listener with the Stripe CLI to send events to your local machine for testing before deploying the handler to production.
- The `--forward-thin-to` flag specifies which URL to forward thin events to.
- The `--thin-events` flag specifies which thin events to forward to your application. You can forward all thin events with an asterisk (`*`), or a subset of thin events.

```
$ stripe listen --forward-thin-to localhost:4242/webhooks --thin-events "*"
```
Trigger test events to your handler. Run the following commands, which simulate the respective events in your account for testing.
```
$ stripe trigger v1.billing.meter.error_report_triggered --api-key <your-secret-key>
$ stripe trigger v1.billing.meter.no_meter_found --api-key <your-secret-key>
```
If you process events with a webhook endpoint, verify the webhook signatures to secure your endpoint and confirm that all requests come from Stripe.
Correct the invalid events, save them to a new file, and upload it to the S3 bucket for processing.