Why Use Batch Operations?
Batch operations provide significant advantages for high-volume use cases:
- Cost Efficiency: Up to 99% cost reduction through Merkle tree batching
- Performance: Single API call for multiple records
- Single Transaction: Valid records are anchored together in one blockchain transaction
- Simplified Integration: One webhook for entire batch
Batch Anchor Request
Use the /v1/anchor/batch endpoint to anchor multiple records:
curl -X POST https://api.anchora.co.in/v1/anchor/batch \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {
        "data": { "name": "Alice Smith", "certificateId": "CERT-001" },
        "encryptionKey": "key-for-alice-32-chars-minimum!!"
      },
      {
        "data": { "name": "Bob Johnson", "certificateId": "CERT-002" },
        "encryptionKey": "key-for-bob-32-chars-minimum!!!!"
      },
      {
        "data": { "name": "Carol White", "certificateId": "CERT-003" },
        "encryptionKey": "key-for-carol-32-chars-minimum!!"
      }
    ],
    "webhookUrl": "https://your-app.com/webhooks/batch"
  }'
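The same request could be issued from application code. The sketch below is illustrative, not an official SDK: `build_batch_payload` and `send_batch` are hypothetical helper names, and the payload shape simply mirrors the curl example above.

```python
# Illustrative Python client sketch for /v1/anchor/batch (not an official SDK).
import json
import urllib.request

API_URL = "https://api.anchora.co.in/v1/anchor/batch"

def build_batch_payload(records, webhook_url):
    """Assemble the request body: `records` is a list of (data, encryption_key) tuples."""
    return {
        "records": [
            {"data": data, "encryptionKey": key}
            for data, key in records
        ],
        "webhookUrl": webhook_url,
    }

def send_batch(payload, api_key):
    """POST the batch payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Building the payload separately from sending it makes the request body easy to validate (and unit-test) before any network call is made.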
Batch Response
Returns 201 when all records succeed, 207 for partial failures, or 400 if all fail.
{
"success": true,
"message": "Batch anchoring completed",
"batchId": "batch_1701590400000_abc123xyz",
"totalRecords": 3,
"results": [
{
"success": true,
"index": 0,
"recordId": "507f1f77bcf86cd799439011",
"hash": "a1b2c3d4...",
"status": "QUEUED",
"isEncrypted": true
},
{
"success": true,
"index": 1,
"recordId": "507f1f77bcf86cd799439012",
"hash": "f6e5d4c3...",
"status": "QUEUED",
"isEncrypted": true
},
{
"success": true,
"index": 2,
"recordId": "507f1f77bcf86cd799439013",
"hash": "d4c3b2a1...",
"status": "QUEUED",
"isEncrypted": true
}
],
"summary": {
"successful": 3,
"failed": 0
}
}
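After receiving a response in the shape above, a client typically pulls out the IDs of the records that were queued. A minimal sketch, assuming the response has already been parsed into a dict (`queued_record_ids` is an illustrative name):

```python
def queued_record_ids(batch_response):
    """Extract the recordIds of successfully queued records from a batch response."""
    return [
        r["recordId"]
        for r in batch_response["results"]
        if r["success"] and r["status"] == "QUEUED"
    ]
```

Storing these IDs lets you correlate later webhook deliveries or status polls back to your own data.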
Limits and Constraints
| Parameter | Limit | Notes |
|---|---|---|
| Records per batch | 100 | Maximum records in a single request |
| Request size | 10 MB | Total payload size limit |
| Record size | 1 MB | Per-record data limit |
| Concurrent batches | 10 | Per API key |
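When you have more records than a single request allows, you need to split them into multiple batches. A rough sketch of client-side chunking against the limits in the table (the function name and record shape are illustrative):

```python
import json

# Limits from the table above.
MAX_RECORDS_PER_BATCH = 100
MAX_REQUEST_BYTES = 10 * 1024 * 1024
MAX_RECORD_BYTES = 1 * 1024 * 1024

def chunk_records(records):
    """Split records into batches that respect the record-count and payload-size limits."""
    batches, current, current_bytes = [], [], 0
    for rec in records:
        size = len(json.dumps(rec).encode())
        if size > MAX_RECORD_BYTES:
            raise ValueError("record exceeds the 1 MB per-record limit")
        # Start a new batch if adding this record would breach either limit.
        if current and (len(current) == MAX_RECORDS_PER_BATCH
                        or current_bytes + size > MAX_REQUEST_BYTES):
            batches.append(current)
            current, current_bytes = [], 0
        current.append(rec)
        current_bytes += size
    if current:
        batches.append(current)
    return batches
```

Note this counts only the serialized records; real requests also carry envelope fields (`webhookUrl`, headers), so leaving headroom under 10 MB is prudent.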
How Merkle Batching Works
Anchora uses Merkle trees to batch multiple records into a single blockchain transaction:
- Each record is hashed individually
- Hashes are combined into a Merkle tree
- Only the Merkle root is written to blockchain
- Individual records can still be verified using Merkle proofs
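The steps above can be sketched in a few lines. This is a generic SHA-256 Merkle root construction for illustration; Anchora's actual leaf encoding and odd-node handling (here, the last node is duplicated) may differ.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each record, then pairwise-combine hashes level by level
    until a single root remains. Odd levels duplicate the last node."""
    level = [sha256(leaf) for leaf in leaves]  # step 1: hash each record
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        # step 2: combine adjacent pairs into the next level
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()  # step 3: only this root goes on-chain
```

Because only the root is written on-chain, the per-record cost is roughly the transaction cost divided by the batch size, which is where the cost reduction comes from; a Merkle proof (the sibling hashes along a leaf's path to the root) later lets anyone verify an individual record against that root.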
Error Handling
If some records fail validation, the valid records are still processed and the batch returns 207:
{
"success": true,
"message": "Batch anchoring completed",
"batchId": "batch_1701590400000_abc123xyz",
"totalRecords": 3,
"results": [
{
"success": true,
"index": 0,
"recordId": "507f1f77bcf86cd799439011",
"hash": "a1b2c3d4...",
"status": "QUEUED",
"isEncrypted": true
},
{
"success": false,
"index": 1,
"error": "Encryption key must be at least 32 characters",
"errorCode": "VALIDATION_ERROR"
},
{
"success": true,
"index": 2,
"recordId": "507f1f77bcf86cd799439013",
"hash": "d4c3b2a1...",
"status": "QUEUED",
"isEncrypted": true
}
],
"summary": {
"successful": 2,
"failed": 1
}
}
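A client handling this response needs to map failed entries back to the records it originally submitted. A minimal sketch, assuming you kept the submitted records in order (`records_to_retry` is an illustrative name):

```python
def records_to_retry(original_records, batch_response):
    """Pair each failed result with the originally submitted record,
    using `index` to look it up, so it can be corrected and resubmitted."""
    return [
        {
            "index": r["index"],
            "error": r.get("error"),
            "errorCode": r.get("errorCode"),
            "record": original_records[r["index"]],
        }
        for r in batch_response["results"]
        if not r["success"]
    ]
```

Fix the reported problem (here, a too-short encryption key) on each returned record before placing it in a new batch; resubmitting unchanged records will fail again.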
Check the results array for entries with "success": false and resubmit corrected records in a new batch.
Best Practices
- Optimize batch size: Larger batches amortize the single blockchain transaction over more records
- Use unique encryption keys: Each record should have its own key
- Handle partial failures: Check response for failed records
- Implement idempotency: Use client-generated IDs to prevent duplicates
- Monitor batch status: Use webhooks or poll the batch status endpoint
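For the idempotency recommendation, one common approach is deriving a deterministic client-side ID from each record's content, so a retried submission can be recognized as a duplicate. This is a generic sketch; whether and where the API accepts such an ID is not specified here, and `client_record_id` is a hypothetical helper name.

```python
import hashlib
import json

def client_record_id(data: dict) -> str:
    """Derive a deterministic ID from record content: serialize the dict in
    canonical form (sorted keys, no whitespace), then hash it. The same data
    always yields the same ID, regardless of key order."""
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

Keeping a record of issued IDs lets your retry logic skip records already submitted, even across process restarts.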