# Request Batching

Coalesce concurrent requests into a single HTTP round-trip.

## Overview
The `batching()` middleware queues concurrent calls and sends them together as a single `POST /__batch`, reducing network overhead for UIs that make many parallel requests. Individual call sites receive their own typed responses; the batching is completely transparent to application code.
**Prerequisite:** the server must have batching enabled via `ServerBuilder.useBatching()`.
## Quick Start

```ts
// Server
import { createServer } from '@cleverbrush/server';

await createServer()
  .useBatching() // enables POST /__batch
  .handleAll(mapping)
  .listen(3000);
```

```ts
// Client
import { batching } from '@cleverbrush/client/batching';

const client = createClient(api, {
  baseUrl: 'https://api.example.com',
  middlewares: [
    retry(),
    timeout(),
    batching({ maxSize: 10, windowMs: 10 }),
  ],
});

// These three concurrent calls become ONE HTTP request.
const [todos, user, stats] = await Promise.all([
  client.todos.list(),
  client.users.me(),
  client.stats.summary(),
]);
```

## How It Works
- The first queued request starts a `windowMs` timer.
- Additional requests arriving before the timer fires join the same batch.
- When the timer fires, or `maxSize` is reached, all queued requests are sent as a single `POST /__batch`.
- The server processes each sub-request through its full middleware and handler pipeline.
- Each caller receives its own reconstructed `Response`.

If only one request is queued at flush time, it is sent directly with no batch overhead.
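The queue-and-flush behavior above can be sketched independently of the library. The `MicroBatcher` class below is an illustrative model (its name, types, and structure are assumptions, not the actual middleware code): the first item starts the window timer, later items join the queue, and reaching `maxSize` flushes immediately.

```typescript
// Simplified model of the batching queue described above — not the library's code.
type Flush<T, R> = (items: T[]) => Promise<R[]>;

class MicroBatcher<T, R> {
  private queue: { item: T; resolve: (r: R) => void; reject: (e: unknown) => void }[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private flushFn: Flush<T, R>,
    private maxSize = 10,
    private windowMs = 10,
  ) {}

  enqueue(item: T): Promise<R> {
    return new Promise<R>((resolve, reject) => {
      this.queue.push({ item, resolve, reject });
      if (this.queue.length >= this.maxSize) {
        this.flush(); // maxSize reached: flush immediately
      } else if (!this.timer) {
        // The first queued item starts the windowMs timer.
        this.timer = setTimeout(() => this.flush(), this.windowMs);
      }
    });
  }

  private flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    const pending = this.queue.splice(0);
    // One flush call handles the whole batch; each caller gets its own slot back.
    this.flushFn(pending.map((p) => p.item))
      .then((results) => pending.forEach((p, i) => p.resolve(results[i])))
      .catch((err) => pending.forEach((p) => p.reject(err)));
  }
}
```

Three `enqueue()` calls made in the same tick resolve from a single `flushFn` invocation, mirroring how three concurrent client calls become one HTTP request.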
## Client Options (`BatchingOptions`)

| Option | Type | Default | Description |
|---|---|---|---|
| `maxSize` | `number` | `10` | Maximum requests per batch; flush immediately on reaching this limit |
| `windowMs` | `number` | `10` | Collection window in milliseconds |
| `batchPath` | `string` | `'/__batch'` | Batch endpoint path; must match the server config |
| `skip` | `(url, init) => boolean` | — | Return `true` to bypass batching for a specific request |
## Server Options (`ServerBatchingOptions`)

| Option | Type | Default | Description |
|---|---|---|---|
| `path` | `string` | `'/__batch'` | URL path for the batch endpoint |
| `maxSize` | `number` | `20` | Maximum sub-requests per batch (responds `400` if exceeded) |
| `parallel` | `boolean` | `true` | Process sub-requests in parallel; set `false` for sequential |
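The server-side dispatch implied by these options can be sketched as a plain function (a hypothetical helper, not the library's implementation; `SubRequest`, `SubResponse`, and `handleBatch` are illustrative names):

```typescript
// Sketch of server-side batch dispatch under the options above — illustrative only.
interface SubRequest { method: string; url: string; headers?: Record<string, string>; body?: string }
interface SubResponse { status: number; headers: Record<string, string>; body: string }

type Handler = (req: SubRequest) => Promise<SubResponse>;

async function handleBatch(
  requests: SubRequest[],
  handle: Handler,
  opts: { maxSize?: number; parallel?: boolean } = {},
): Promise<{ status: number; responses?: SubResponse[] }> {
  const { maxSize = 20, parallel = true } = opts;

  // Oversized batches are rejected with 400, per the maxSize option.
  if (requests.length > maxSize) return { status: 400 };

  let responses: SubResponse[];
  if (parallel) {
    // Default: run every sub-request concurrently.
    responses = await Promise.all(requests.map((r) => handle(r)));
  } else {
    // parallel: false — process one sub-request at a time, in order.
    responses = [];
    for (const r of requests) responses.push(await handle(r));
  }
  return { status: 200, responses };
}
```

Note that response slots stay in request order in both modes, so each client caller can be matched back to its own result.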
## Middleware Placement

Place `batching()` last in the `middlewares` array so that `retry()` and `timeout()` operate on each logical call independently, not on the single batch fetch.

```ts
// ✅ Correct — retry/timeout wrap individual call promises
middlewares: [retry(), timeout(), batching()],

// ⚠️ Wrong — retry wraps the entire batch fetch
middlewares: [batching(), retry(), timeout()],
```

## Skipping Specific Requests
```ts
batching({
  skip: (_url, init) => {
    // Never batch file uploads
    return init.body instanceof FormData;
  },
})
```

Requests that are never batched, regardless of the `skip` predicate:

- The batch endpoint itself (prevents infinite recursion)
- `FormData` or `ReadableStream` bodies
- Single-item queues at flush time (sent directly)
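The combination of built-in rules and the user predicate can be modeled as a single bypass check (illustrative; the function name, option shape, and ordering are assumptions about behavior described above, not the library's internals):

```typescript
// Illustrative predicate combining the built-in bypass rules with a user skip().
function shouldBypassBatching(
  url: string,
  init: { body?: unknown } = {},
  opts: {
    batchPath?: string;
    skip?: (url: string, init: { body?: unknown }) => boolean;
  } = {},
): boolean {
  const batchPath = opts.batchPath ?? "/__batch";

  // Built-in rule: never batch the batch endpoint itself (infinite recursion).
  if (url.endsWith(batchPath)) return true;

  // Built-in rule: streaming / multipart bodies cannot be serialized into a JSON batch.
  if (typeof FormData !== "undefined" && init.body instanceof FormData) return true;
  if (typeof ReadableStream !== "undefined" && init.body instanceof ReadableStream) return true;

  // Finally, defer to the user-supplied skip predicate, if any.
  return opts.skip?.(url, init) ?? false;
}
```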
## Wire Protocol

The batch request and response use plain JSON:

```jsonc
// POST /__batch → Request body
{
  "requests": [
    { "method": "GET", "url": "/api/todos", "headers": { "authorization": "Bearer ..." } },
    { "method": "POST", "url": "/api/todos", "headers": { "content-type": "application/json" }, "body": "{\"title\":\"Buy milk\"}" }
  ]
}
```

```jsonc
// 200 OK ← Response body
{
  "responses": [
    { "status": 200, "headers": { "content-type": "application/json" }, "body": "[{\"id\":1}]" },
    { "status": 201, "headers": { "content-type": "application/json" }, "body": "{\"id\":2,\"title\":\"Buy milk\"}" }
  ]
}
```

If one sub-request fails, its error status is returned in its own slot; the rest succeed normally.
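Given this wire format, each caller's `Response` can be rebuilt from its slot with the standard `Response` constructor. This is a sketch of the idea, not the library's code (`WireResponse` and `reconstructResponse` are illustrative names):

```typescript
// One slot of the "responses" array from the wire protocol above.
interface WireResponse {
  status: number;
  headers: Record<string, string>;
  body: string;
}

// Rebuild a standard Fetch Response object from a wire slot,
// so call sites can use .json(), .status, etc. as if it were a direct fetch.
function reconstructResponse(slot: WireResponse): Response {
  return new Response(slot.body, {
    status: slot.status,
    headers: slot.headers,
  });
}
```

Because statuses live per-slot, a failed sub-request simply reconstructs into a `Response` with its own error status while neighboring slots reconstruct normally.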