Why Use Batch Operations?
Batch operations provide significant advantages for high-volume use cases:
- Cost Efficiency: Up to 99% cost reduction through Merkle tree batching
- Performance: Single API call for multiple records
- Atomic Anchoring: All queued records are committed in a single blockchain transaction
- Simplified Integration: One webhook for entire batch
Batch Anchor Request
Use the /v1/anchor/batch endpoint to anchor multiple records:
Batch anchor request
curl -X POST https://api.anchora.io/v1/anchor/batch \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {
        "data": { "name": "Alice Smith", "certificateId": "CERT-001" },
        "encryptionKey": "key-for-alice-32-chars-minimum!!"
      },
      {
        "data": { "name": "Bob Johnson", "certificateId": "CERT-002" },
        "encryptionKey": "key-for-bob-32-chars-minimum!!!"
      },
      {
        "data": { "name": "Carol White", "certificateId": "CERT-003" },
        "encryptionKey": "key-for-carol-32-chars-minimum!!"
      }
    ],
    "webhookUrl": "https://your-app.com/webhooks/batch"
  }'
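The same request can be sketched in Python using only the standard library. The helper names `build_batch_payload` and `anchor_batch` are illustrative, not part of an official SDK; in production you would likely use an HTTP client with retries and timeouts.

```python
import json
import urllib.request

API_URL = "https://api.anchora.io/v1/anchor/batch"

def build_batch_payload(records, webhook_url):
    """Assemble the request body for /v1/anchor/batch.

    `records` is a list of (data_dict, encryption_key) tuples; each key
    must be at least 32 characters (see Limits and Constraints).
    """
    return {
        "records": [
            {"data": data, "encryptionKey": key} for data, key in records
        ],
        "webhookUrl": webhook_url,
    }

def anchor_batch(api_key, payload):
    """POST the batch and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keeping payload construction separate from the HTTP call makes it easy to validate records against the size limits before spending a request.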
Batch Response
Batch response
{
"success": true,
"batchId": "batch_xyz789abc123",
"status": "QUEUED",
"recordCount": 3,
"records": [
{ "recordId": "rec_abc123", "status": "QUEUED" },
{ "recordId": "rec_def456", "status": "QUEUED" },
{ "recordId": "rec_ghi789", "status": "QUEUED" }
],
"merkleRoot": "0x7f83b1657ff1fc53b92dc18148a1d65dfc...",
"estimatedConfirmation": "2024-01-31T10:31:00Z"
}
Limits and Constraints
| Parameter | Limit | Notes |
|---|---|---|
| Records per batch | 100 | Maximum records in a single request |
| Request size | 10 MB | Total payload size limit |
| Record size | 1 MB | Per-record data limit |
| Concurrent batches | 10 | Per API key |
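Client code can enforce these limits before submitting. The sketch below is a hypothetical client-side helper (not an API feature) that splits a record list into compliant batches; it approximates payload size from the serialized records and ignores the small JSON envelope overhead.

```python
import json

MAX_RECORDS_PER_BATCH = 100           # records per batch
MAX_RECORD_BYTES = 1 * 1024 * 1024    # 1 MB per record
MAX_REQUEST_BYTES = 10 * 1024 * 1024  # 10 MB total payload

def chunk_records(records):
    """Split records into batches that respect the documented limits.

    Each record is a dict ready for the `records` array. Raises if a
    single record exceeds the per-record limit.
    """
    batches, current, current_bytes = [], [], 0
    for rec in records:
        size = len(json.dumps(rec).encode())
        if size > MAX_RECORD_BYTES:
            raise ValueError(f"record exceeds 1 MB limit ({size} bytes)")
        # Start a new batch when either limit would be exceeded.
        if current and (len(current) >= MAX_RECORDS_PER_BATCH
                        or current_bytes + size > MAX_REQUEST_BYTES):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(rec)
        current_bytes += size
    if current:
        batches.append(current)
    return batches
```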
How Merkle Batching Works
Anchora uses Merkle trees to batch multiple records into a single blockchain transaction:
- Each record is hashed individually
- Hashes are combined into a Merkle tree
- Only the Merkle root is written to blockchain
- Individual records can still be verified using Merkle proofs
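The steps above can be sketched as follows. Anchora's exact hashing scheme is not specified here, so this illustration assumes SHA-256 and promotes an odd node unchanged to the next level, one common convention; a real verifier must match the service's actual tree construction.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def _next_level(level):
    """Hash adjacent pairs; promote a trailing odd node unchanged."""
    nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
    if len(level) % 2:
        nxt.append(level[-1])
    return nxt

def merkle_root(leaves):
    """Fold per-record hashes up to the single root that goes on-chain."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with left/right flags) proving one leaf's inclusion."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sib = index ^ 1
        if sib < len(level):
            proof.append((level[sib], sib < index))  # True = sibling on left
        level = _next_level(level)
        index //= 2
    return proof

def verify_proof(leaf, proof, root):
    """Recompute the path from one record's hash; compare with the root."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root
```

A proof contains only about log2(n) hashes, which is why any single record in a 100-record batch can be verified without the other 99 payloads.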
Cost Savings: Batching 100 records incurs the same blockchain fee as anchoring 1 record directly, cutting the per-record cost by 99% compared to individual blockchain transactions.
Error Handling
If some records fail validation, the batch still processes the valid records; recordCount reflects only the records that were successfully queued:
Partial success response
{
"success": true,
"batchId": "batch_xyz789abc123",
"status": "PARTIAL",
"recordCount": 2,
"records": [
{ "recordId": "rec_abc123", "status": "QUEUED" },
{ "recordId": null, "status": "FAILED", "error": "Invalid encryption key" },
{ "recordId": "rec_ghi789", "status": "QUEUED" }
],
"errors": [
{ "index": 1, "error": "Encryption key must be at least 32 characters" }
]
}
Note: Failed records are not retried automatically. Check the errors array and resubmit corrected records in a new batch.
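A minimal sketch of that resubmission flow, assuming you kept the originally submitted `records` array: the `errors[].index` values point back into it, and `fix` stands in for whatever correction your application applies (here, a hypothetical key repair).

```python
def failed_indices(response):
    """Indices into the submitted `records` array that did not queue."""
    return [e["index"] for e in response.get("errors", [])]

def collect_for_resubmission(submitted, response, fix):
    """Apply `fix` to each failed record; the result is ready to go
    into the `records` array of a fresh batch request."""
    return [fix(submitted[i]) for i in failed_indices(response)]
```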
Best Practices
- Optimize batch size: Larger batches amortize the blockchain fee over more records (up to the 100-record limit)
- Use unique encryption keys: Each record should have its own key
- Handle partial failures: Check response for failed records
- Implement idempotency: Use client-generated IDs to prevent duplicates
- Monitor batch status: Use webhooks or poll the batch status endpoint
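One way to apply the idempotency advice client-side, without assuming any deduplication support in the API itself: derive a deterministic ID from each record's content and track which IDs have already been submitted, so a retry after a timeout or crash does not anchor duplicates. `client_id` and `BatchDeduper` are illustrative names.

```python
import hashlib
import json

def client_id(record):
    """Deterministic ID: the same record content always maps to the
    same ID, so retries are recognizable."""
    canonical = json.dumps(record["data"], sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

class BatchDeduper:
    """Track submitted record IDs; in production, persist `seen`
    (e.g. in a database) so it survives process restarts."""
    def __init__(self):
        self.seen = set()

    def filter_new(self, records):
        """Return only records not previously submitted."""
        fresh = []
        for rec in records:
            cid = client_id(rec)
            if cid not in self.seen:
                self.seen.add(cid)
                fresh.append(rec)
        return fresh
```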