# Batch jobs

Process multiple API requests asynchronously with a single file upload.

The Batch API allows you to perform bulk operations on Stripe resources. Instead of making individual API calls for each operation, which can trigger rate limits, you can upload a file with all of your operations and let Stripe process them asynchronously. Use this for one-time migrations, bulk updates, or any operation that requires processing many resources.

## When to use batch jobs

Batch jobs work well for:

- **Bulk migrations**: Move large numbers of subscriptions to new billing modes.
- **Mass updates**: Update many accounts or subscriptions at once.

Batch jobs don’t work well for:

- Operations that require an immediate synchronous response.
- Real-time processing with tight timing requirements.
- A single asynchronous call.

To process a batch job, follow these steps:

1. [Create a batch job](https://docs.stripe.com/batch-api.md#create-a-batch-job) and specify the target API endpoint.
1. [Upload the input file](https://docs.stripe.com/batch-api.md#upload-the-input-file) with your batch requests.
1. [Monitor job status](https://docs.stripe.com/batch-api.md#monitor-job-status) through webhooks or polling.
1. [Download the results](https://docs.stripe.com/batch-api.md#download-the-results).

## Supported endpoints

The Batch Jobs API supports the following endpoints. Each batch job targets a single endpoint, and all requests in the batch go to that endpoint.

- [Migrate a subscription](https://docs.stripe.com/api/subscriptions/migrate.md): POST `/v1/subscriptions/:id/migrate`
- [Update a subscription](https://docs.stripe.com/api/subscriptions/update.md): POST `/v1/subscriptions/:id`
- [Update a customer](https://docs.stripe.com/api/customers/update.md): POST `/v1/customers/:id`

## Limitations

Review the following limitations:

- Batch files are limited to 5 GB. If you need to process a larger file for a higher volume of requests, split it into multiple batches.
- Batch jobs only support JSONL (newline-delimited JSON) files. Batch jobs don’t accept CSV or other formats.
- Requests in a batch can only use `POST` or `DELETE`. Batch jobs don’t support `GET`.
- All requests in a batch must target the same API endpoint.
- Batch jobs don’t guarantee the order of request processing.
- Batch jobs have a maximum processing duration of 24 hours. Jobs that exceed this limit transition to `timeout` status, with partial results available.
- Results are available for download for 7 days after the job completes.
- The upload URL expires 5 minutes after job creation. After that period, the job transitions to `upload_timeout` and you need to create a new batch job.
- Upload the file with a direct HTTP `PUT` request to the presigned URL.

## Create a batch job

To start, create a batch job by sending a `POST` request to `/v2/core/batch_jobs`. Specify the target endpoint and any processing options:

```bash
curl https://api.stripe.com/v2/core/batch_jobs \
  -u <>: \
  -H "Content-Type: application/json" \
  -H "Stripe-Version: 2026-03-25.preview" \
  -d '{
    "endpoint": {
      "path": "/v1/subscriptions/:id/migrate",
      "http_method": "post"
    },
    "maximum_rps": 10,
    "skip_validation": false
  }'
```

The request body is JSON, sent with `Content-Type: application/json`. This returns a batch job object with a `ready_for_upload` status. The upload URL and its expiration time are in the `status_details` field:

```json
{
  "id": "batchv2_AbCdEfGhIjKlMnOpQrStUvWxYz",
  "object": "v2.core.batch_job",
  "created": "2026-03-09T20:55:31.000Z",
  "maximum_rps": 10,
  "skip_validation": false,
  "status": "ready_for_upload",
  "status_details": {
    "ready_for_upload": {
      "upload_url": {
        "expires_at": "2026-03-09T21:00:31.000Z",
        "url": "https://stripeusercontent.com/files/upload/..."
      }
    }
  }
}
```

The `status_details` object changes shape based on the current status. When the job is `ready_for_upload`, it contains the presigned upload URL and its expiration timestamp.
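If you script this step, the request body can be assembled programmatically. A minimal Python sketch (the helper names are illustrative, not part of any Stripe SDK; send the payload with your usual HTTP client):

```python
def build_create_payload(path, maximum_rps=10, skip_validation=False):
    """Assemble the JSON body for POST /v2/core/batch_jobs."""
    return {
        "endpoint": {"path": path, "http_method": "post"},
        "maximum_rps": maximum_rps,
        "skip_validation": skip_validation,
    }

def upload_url_from(job):
    """Pull the presigned upload URL and its expiry out of a
    batch job object in the ready_for_upload status."""
    upload = job["status_details"]["ready_for_upload"]["upload_url"]
    return upload["url"], upload["expires_at"]
```

`json.dumps(build_create_payload("/v1/subscriptions/:id/migrate"))` produces the same body shown in the curl example above.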
### Parameters

| Parameter | Required | Description |
| --- | --- | --- |
| `endpoint.path` | Yes | The API endpoint to target (for example, `/v1/subscriptions/:id/migrate`). See [Supported endpoints](https://docs.stripe.com/batch-api.md#supported-endpoints). |
| `endpoint.http_method` | Yes | The HTTP method for the endpoint. Currently only `post` is supported. |
| `maximum_rps` | No | Maximum requests processed per second (1–100). Defaults to 10. |
| `skip_validation` | No | Set to `true` to skip input file validation and start processing immediately. Defaults to `false`. |
| `notification_suppression` | No | Controls whether webhooks from the underlying API operations are delivered. Set `{"scope": "all"}` to suppress operation-level webhooks. Batch-level events are always delivered regardless of this setting. Defaults to `{"scope": "none"}`. |
| `metadata` | No | Key-value pairs for your internal tracking. Metadata is included in batch job events, including failure events. |

Set `maximum_rps` based on your throughput needs; higher values process the batch faster, up to the cap of 100 requests per second. Batch processing uses a separate rate-limit pool from your main API requests.

## Upload the input file

After creating the batch job, upload your input file to the URL in `status_details.ready_for_upload.upload_url.url`. Use a `PUT` request with the file contents:

```bash
curl {UPLOAD_URL} \
  -X PUT \
  -T input.jsonl \
  -H "Content-Type: application/octet-stream"
```

The input file must be a JSONL file, and the content type of the `PUT` request must be `application/octet-stream`.

After the upload completes, Stripe automatically starts processing. There’s no separate `start` step.

The upload URL expires 5 minutes after batch job creation.
Check the `expires_at` field for the exact deadline. If the URL expires before you upload the file, the job status changes to `upload_timeout`, and you must create a new batch job. Generate the input file before you create the batch job so you can upload it promptly.

### Input file format

The file must be UTF-8 encoded and use JSONL format (newline-delimited JSON, one object per line). Each line represents a single API request to the target endpoint. CSV and other formats aren’t supported.

Each JSON object supports these fields:

| Field | Required | Description |
| --- | --- | --- |
| `id` | Yes | A unique identifier that correlates this request with its result. You’re free to choose the naming scheme, but each ID must match `/^[A-Za-z0-9_-]+$/`. |
| `path_params` | Conditional | Path parameters for the endpoint. Required when the endpoint path includes placeholders (for example, `:id`). The keys in `path_params` must match the placeholders in the endpoint path exactly. |
| `params` | No | Request body parameters for the API call. Accepted parameters vary by API method. |
| `context` | No | A Stripe account ID. Use this to execute the request against a specific account, such as a connected account. |
### Example input file

#### API Method - /v1/customers/:id

For the `/v1/customers/:id` endpoint:

```json
{"id": "req_001", "path_params": {"id": "cus_1AbCdEfGhIjKlMn"}, "params": {"name": "Jenny Rosen", "email": "jenny@example.com"}}
{"id": "req_002", "path_params": {"id": "cus_2BcDeFgHiJkLmNo"}, "params": {"name": "John Smith", "metadata": {"tier": "premium"}}}
{"id": "req_003", "context": "acct_1234567890", "path_params": {"id": "cus_3CdEfGhIjKlMnOp"}, "params": {"description": "Updated by batch"}}
```

#### API Method - /v1/subscriptions/:id/migrate

For the `/v1/subscriptions/:id/migrate` endpoint:

```json
{"id": "req_001", "path_params": {"id": "sub_1AbCdEfGhIjKlMn"}, "params": {"billing_cycle_anchor": "unchanged", "proration_behavior": "none"}}
{"id": "req_002", "path_params": {"id": "sub_2BcDeFgHiJkLmNo"}, "params": {"billing_cycle_anchor": "unchanged", "proration_behavior": "create_prorations"}}
```

Each `id` must be unique within the file. Stripe uses it to correlate requests with results, because the results file isn’t ordered the same way as the input file.

## Monitor job status

You can track your batch job by polling the retrieve endpoint or by listening for [webhook events](https://docs.stripe.com/batch-api.md#webhook-events). We recommend using webhook events for production integrations.

### Poll for status

```bash
curl https://api.stripe.com/v2/core/batch_jobs/{BATCH_JOB_ID} \
  -u <>: \
  -H "Stripe-Version: 2026-03-25.preview"
```

The response is a JSON batch job object. While the job is running, `status_details` includes real-time progress counts:

```json
{
  "status": "in_progress",
  "status_details": {
    "in_progress": {
      "success_count": "1",
      "failure_count": "0"
    }
  }
}
```

During the `validating` phase, `status_details` includes a `validated_count` field that shows how many rows Stripe has validated so far.

Batch job API calls appear in the Stripe Dashboard or Workbench request logs. The underlying API calls don’t appear in the request logs.
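For scripted monitoring, a polling loop can be sketched as follows. `fetch_job` is any callable that returns the batch job object as a dict, for example a wrapper around the retrieve endpoint above; the terminal status set mirrors the statuses documented in this guide:

```python
import time

# Statuses after which the batch job object stops changing.
TERMINAL_STATUSES = {
    "complete", "cancelled", "validation_failed",
    "batch_failed", "upload_timeout", "timeout",
}

def poll_until_done(fetch_job, interval_seconds=5.0, max_polls=1000):
    """Poll fetch_job() until the batch job reaches a terminal status,
    then return the final job object."""
    for _ in range(max_polls):
        job = fetch_job()
        if job["status"] in TERMINAL_STATUSES:
            return job
        time.sleep(interval_seconds)
    raise TimeoutError("batch job did not reach a terminal status")
```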
Use the retrieve endpoint or webhook events to monitor progress. To debug individual request failures, check the results file.

### Job lifecycle

After you upload the input file, the batch job progresses through these statuses:

| Status | Description |
| --- | --- |
| `ready_for_upload` | The batch job was created and is waiting for the input file. |
| `validating` | The input file was uploaded and Stripe is validating it. Skipped when `skip_validation` is `true`. |
| `in_progress` | Validation passed (or was skipped) and Stripe is processing requests. |
| `complete` | All requests have been processed. Results are available for download. |
| `cancelling` | A cancellation was requested. Stripe is finishing in-flight requests. |

### Terminal statuses

| Status | Description |
| --- | --- |
| `validation_failed` | The input file contains errors. No requests were processed. Check the batch job object for error details. Applies only when `skip_validation` is `false`. |
| `batch_failed` | An unexpected error occurred during processing. |
| `cancelled` | The batch job was cancelled. Partial results may be available. |
| `upload_timeout` | The upload URL expired before the file was uploaded. Create a new batch job. |
| `timeout` | The batch job exceeded the maximum processing duration of 24 hours. Partial results may be available. |

### Validation

When `skip_validation` is `false` (the default), Stripe validates the entire input file before processing any requests. This validation catches errors such as:

- Invalid JSON in any row.
- Missing or invalid `id` fields.
- Duplicate IDs.
- Missing required `path_params` for the target endpoint.
- Malformed parameters.
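You can approximate these checks locally to catch problems before uploading. A minimal Python sketch (the set of required path-parameter names is an assumption; adapt it to your target endpoint):

```python
import json
import re

# Allowed characters for request IDs, per the input file format.
ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]+$")

def validate_input_lines(lines, required_path_params=("id",)):
    """Return a list of (line_number, message) problems found in a
    JSONL input file; an empty list means these local checks passed."""
    errors = []
    seen_ids = set()
    for number, line in enumerate(lines, start=1):
        try:
            row = json.loads(line)
        except json.JSONDecodeError:
            errors.append((number, "row is not valid JSON"))
            continue
        request_id = row.get("id")
        if not isinstance(request_id, str) or not ID_PATTERN.match(request_id):
            errors.append((number, "missing or invalid id"))
        elif request_id in seen_ids:
            errors.append((number, f"duplicate id: {request_id}"))
        else:
            seen_ids.add(request_id)
        for key in required_path_params:
            if key not in row.get("path_params", {}):
                errors.append((number, f"missing path_params.{key}"))
    return errors
```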
If validation fails, the status changes to `validation_failed`, and Stripe doesn’t attempt any requests. The batch job object includes details about the first error it encounters.

When `skip_validation` is `true`, the job transitions directly from `ready_for_upload` to `in_progress` after upload. Errors in individual requests appear in the results file instead of blocking the entire batch.

## Download the results

When the batch job reaches `complete` status, the `status_details` field includes a summary of successes and failures, along with a presigned download URL for the output file:

```json
{
  "id": "batchv2_AbCdEfGhIjKlMnOpQrStUvWxYz",
  "object": "v2.core.batch_job",
  "created": "2026-03-09T20:55:31.000Z",
  "maximum_rps": 10,
  "skip_validation": true,
  "status": "complete",
  "status_details": {
    "complete": {
      "success_count": "2",
      "failure_count": "0",
      "output_file": {
        "content_type": "application/jsonlines",
        "size": "8514",
        "download_url": {
          "expires_at": "2026-03-09T22:05:31.000Z",
          "url": "https://stripeusercontent.com/files/download/..."
        }
      }
    }
  }
}
```

Download the file using the URL in `status_details.complete.output_file.download_url.url`. Stripe provides an output file when the batch job reaches any of these states:

- `complete`
- `cancelled`
- `timeout`
- `validation_failed`

Check the `expires_at` field for the download deadline.

The results file contains both successful and failed requests in a single file. To find failures, filter for rows where `status` isn’t `200`.

### Results file format

The output file uses JSONL format (one JSON object per line). Each line contains these fields:

| Field | Description |
| --- | --- |
| `id` | The request ID from the input file. Use this to correlate results with requests. |
| `response` | The full API response object. Contains the resource on success, or an error object on failure. |
| `status` | The HTTP status code as an integer (for example, `200`, `402`). |

### Example results file

Successful requests return the full API resource in the `response` field:

```json
{"id": "req_001", "response": {"id": "sub_1AbCdEfGhIjKlMn", "object": "subscription", "status": "active", "billing_cycle_anchor": 1710021331, "current_period_end": 1712613331, "current_period_start": 1710021331}, "status": 200}
{"id": "req_002", "response": {"id": "sub_2BcDeFgHiJkLmNo", "object": "subscription", "status": "active", "billing_cycle_anchor": 1710021331, "current_period_end": 1712613331, "current_period_start": 1710021331}, "status": 200}
```

Failed requests return an error object:

```json
{"id": "req_003", "response": {"error": {"message": "This subscription cannot be migrated because it is not active. Current status is canceled.", "type": "invalid_request_error", "code": "resource_invalid_state"}}, "status": 400}
```

Results aren’t returned in the same order as the input file. Use the `id` field to match each result to its corresponding request.

## Cancel a batch job

You can cancel a batch job that hasn’t completed yet by sending a `POST` request:

```bash
curl https://api.stripe.com/v2/core/batch_jobs/{BATCH_JOB_ID}/cancel \
  -u <>: \
  -X POST \
  -H "Stripe-Version: 2026-03-25.preview"
```

Cancellation is asynchronous. The job first transitions to `cancelling` while in-flight requests finish, then to `cancelled`. Any partial results from requests processed before cancellation are available in the results file.

## Webhook events

Batch jobs emit v2 thin events for every lifecycle transition. To receive these events, you must configure a [v2 event destination](https://docs.stripe.com/event-destinations.md).

Batch job events require v2 event destinations. They aren’t delivered to v1 webhook endpoints.
The following events are available:

| Event type | Description |
| --- | --- |
| `v2.core.batch_job.created` | A batch job was created. |
| `v2.core.batch_job.ready_for_upload` | The batch job is ready for file upload. |
| `v2.core.batch_job.validating` | File upload complete, validation in progress. |
| `v2.core.batch_job.validation_failed` | Input file validation failed. |
| `v2.core.batch_job.completed` | All requests have been processed. |
| `v2.core.batch_job.batch_failed` | The batch job failed unexpectedly. |
| `v2.core.batch_job.canceled` | The batch job was cancelled. |
| `v2.core.batch_job.timeout` | The batch job exceeded maximum processing duration. |
| `v2.core.batch_job.upload_timeout` | The upload URL expired before the file was uploaded. |
| `v2.core.batch_job.updated` | The batch job status or progress changed. |

All batch job events include the metadata you provided when creating the job. Use this to correlate events with your internal systems.

When `notification_suppression` is set to `{"scope": "all"}`, webhooks from the underlying API operations (for example, subscription update events) are suppressed. Batch-level events listed above are always delivered regardless of this setting.

## Common errors

### Upload URL expired

If you don’t upload the input file before the `expires_at` timestamp (5 minutes after job creation), the batch job transitions to `upload_timeout` status. Create a new batch job and upload the file promptly. Generate your input file before creating the batch job to avoid this.

### Invalid resource state

Individual requests can fail if the target resource isn’t in the expected state. For example, when using `/v1/subscriptions/:id/migrate`:

- **Subscription is not active**: The subscription must be in an `active` state before it can be migrated. Canceled or incomplete subscriptions return a `400` error with `resource_invalid_state`.
- **Subscription already migrated**: Attempting to migrate a subscription that has already been migrated returns an error.

These per-request errors appear in the results file with a non-`200` status code. The batch job itself still completes; processing continues even when individual lines fail.

### Path parameter mismatches

The keys in `path_params` must exactly match the placeholders in the endpoint path. For example, if your endpoint path is `/v1/subscriptions/:id/migrate`, your `path_params` must use `{"id": "sub_..."}`. A mismatch between the placeholder name and the key causes a validation error or a `400` status in the results file.

### Upload content type

The upload `PUT` request must use `Content-Type: application/octet-stream`. Other content types are rejected.

### File format errors

When `skip_validation` is `false`, these errors cause the entire batch to fail with `validation_failed` status:

- Rows that aren’t valid JSON
- Missing `id` field on any row
- Duplicate `id` values across rows
- IDs containing characters outside `A-Za-z0-9_-`

When `skip_validation` is `true`, file-level format errors can cause individual rows to fail rather than blocking the entire batch.

### Job processing timeout

Batch jobs that run longer than 24 hours transition to `timeout` status. Partial results from requests that completed before the timeout are available in the results file.

## Best practices

### Choose the right `maximum_rps`

The `maximum_rps` parameter controls how fast Stripe processes requests in your batch. Batch processing uses a separate rate limit pool from your main API requests, so batch jobs don’t affect your account’s regular API traffic.

- **Lower values**: 1–10 are suitable for non-urgent bulk operations.
- **Higher values**: 50–100 process batches faster and are suitable for independent operations across different resources.
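As a rough planning aid, you can back into a `maximum_rps` value from your row count and a target completion time. This heuristic isn’t part of the API; it only applies the documented 1–100 range and ignores per-request latency and retries:

```python
import math

def choose_maximum_rps(row_count, target_seconds, floor=1, cap=100):
    """Smallest maximum_rps that finishes row_count requests within
    target_seconds at a steady rate, clamped to the documented
    1-100 range. A back-of-envelope lower bound only."""
    needed = math.ceil(row_count / max(target_seconds, 1))
    return max(floor, min(cap, needed))
```

For example, 36,000 rows with a one-hour target works out to a `maximum_rps` of 10.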
### Use validation for critical operations

Keep `skip_validation` set to `false` (the default) for operations where partial processing would cause issues. Validation ensures your entire file is well-formed before any requests are executed.

Set `skip_validation` to `true` when you’ve already validated your input data and want faster job startup, or when processing partial results is acceptable.

### Split large workloads

If you have an input file larger than 5 GB, split it into multiple batch jobs. You can run multiple batch jobs concurrently.

### Verify resource state before batching

Confirm that all target resources are in the required state before submitting a batch. For example, subscriptions must be active before they can be migrated with `/v1/subscriptions/:id/migrate`. Batch jobs execute the target operation directly and don’t change resource state as a prerequisite.

### Handle errors in results

Always check the `status` field in each result line. Individual requests within a successful batch can still fail (for example, due to insufficient funds or invalid parameters). Build your integration to filter the results file for non-`200` statuses and handle failures accordingly.

### Prepare your file before creating the job

The upload URL expires 5 minutes after batch job creation. Generate and validate your input file before calling the create endpoint. If you need to prepare data dynamically, complete all data retrieval and file generation first, then create the batch job and upload immediately.
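To apply the error-handling practice above, a small helper can partition the downloaded results file by `status` and key everything by the request `id`:

```python
import json

def split_results(result_lines):
    """Partition a results JSONL file into successes and failures,
    keyed by the request id from the input file."""
    successes, failures = {}, {}
    for line in result_lines:
        row = json.loads(line)
        bucket = successes if row["status"] == 200 else failures
        bucket[row["id"]] = row["response"]
    return successes, failures
```

Feeding it the lines of the downloaded output file yields two dicts you can act on separately, for example retrying or reporting the failures.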