Request Batching

Coalesce concurrent requests into a single HTTP round-trip

Overview

The batching() middleware queues concurrent calls and sends them together as a single POST /__batch, reducing network overhead for UIs that make many parallel requests. Individual call sites receive their own typed responses — the batching is completely transparent to application code.

Prerequisite: the server must have batching enabled via ServerBuilder.useBatching().

Quick Start

// Server
import { createServer } from '@cleverbrush/server';

await createServer()
    .useBatching()         // enables POST /__batch
    .handleAll(mapping)
    .listen(3000);

// Client (createClient is assumed here to be exported from the package root)
import { createClient } from '@cleverbrush/client';
import { batching } from '@cleverbrush/client/batching';

const client = createClient(api, {
    baseUrl: 'https://api.example.com',
    middlewares: [
        retry(),
        timeout(),
        batching({ maxSize: 10, windowMs: 10 }),
    ],
});

// These three concurrent calls become ONE HTTP request.
const [todos, user, stats] = await Promise.all([
    client.todos.list(),
    client.users.me(),
    client.stats.summary(),
]);

How It Works

  1. The first queued request starts a windowMs timer.
  2. Additional requests arriving before the timer fires join the same batch.
  3. When the timer fires — or maxSize is reached — all queued requests are sent as a single POST /__batch.
  4. The server processes each sub-request through its full middleware and handler pipeline.
  5. Each caller receives its own reconstructed Response.

If only one request is queued at flush time it is sent directly — no batch overhead.
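The queueing and flush logic in steps 1–3 can be sketched as a tiny generic coalescer. `MicroBatcher` and its members are illustrative names, not the library's API, and unlike the real middleware this sketch does not special-case a lone queued request:

```typescript
// Minimal sketch of the coalescing logic described above (illustrative only).
class MicroBatcher<T> {
    private queue: { item: T; resolve: (batch: T[]) => void }[] = [];
    private timer: ReturnType<typeof setTimeout> | null = null;

    constructor(private maxSize = 10, private windowMs = 10) {}

    // Resolves with the whole batch the item was flushed in.
    enqueue(item: T): Promise<T[]> {
        return new Promise((resolve) => {
            this.queue.push({ item, resolve });
            if (this.queue.length >= this.maxSize) {
                this.flush(); // maxSize reached: flush without waiting
            } else if (!this.timer) {
                // The first queued item starts the windowMs timer
                this.timer = setTimeout(() => this.flush(), this.windowMs);
            }
        });
    }

    private flush(): void {
        if (this.timer) {
            clearTimeout(this.timer);
            this.timer = null;
        }
        const pending = this.queue.splice(0);
        const batch = pending.map((p) => p.item);
        // Each caller gets its own view of the result, as in step 5
        for (const p of pending) p.resolve(batch);
    }
}
```

With maxSize = 2, two concurrent enqueue calls resolve together from a single flush, mirroring how the three concurrent calls in Quick Start become one POST /__batch.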

Client Options (BatchingOptions)

| Option    | Type                    | Default     | Description                                                                      |
| --------- | ----------------------- | ----------- | -------------------------------------------------------------------------------- |
| maxSize   | number                  | 10          | Maximum requests per batch; the queue flushes immediately on reaching this limit |
| windowMs  | number                  | 10          | Collection window in milliseconds                                                |
| batchPath | string                  | '/__batch'  | Batch endpoint path; must match the server config                                |
| skip      | (url, init) => boolean  | (none)      | Return true to bypass batching for a specific request                            |

Server Options (ServerBatchingOptions)

| Option   | Type    | Default    | Description                                                          |
| -------- | ------- | ---------- | -------------------------------------------------------------------- |
| path     | string  | '/__batch' | URL path for the batch endpoint                                      |
| maxSize  | number  | 20         | Maximum sub-requests per batch; the server responds 400 if exceeded  |
| parallel | boolean | true       | Process sub-requests in parallel; set false for sequential processing |
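Assuming useBatching() accepts a ServerBatchingOptions object (the no-argument form in Quick Start would then use the defaults above — this signature is an assumption, not confirmed by the source), a tuned server setup might look like:

```typescript
import { createServer } from '@cleverbrush/server';

// Assumption: useBatching takes the ServerBatchingOptions from the table above.
await createServer()
    .useBatching({
        path: '/__batch',  // must match the client's batchPath
        maxSize: 20,       // oversized batches are rejected with 400
        parallel: true,    // handle sub-requests concurrently
    })
    .handleAll(mapping)
    .listen(3000);
```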

Middleware Placement

Place batching() last in the middlewares array so that retry() and timeout() operate on each logical call independently, not on the single batch fetch.

// ✅ Correct — retry/timeout wrap individual call promises
middlewares: [retry(), timeout(), batching()],

// ⚠️ Wrong — retry wraps the entire batch fetch
middlewares: [batching(), retry(), timeout()],

Skipping Specific Requests

batching({
    skip: (_url, init) => {
        // Never batch file uploads
        return init.body instanceof FormData;
    },
})

Requests that are never batched regardless of the skip predicate:

  • The batch endpoint itself (prevents infinite recursion)
  • FormData or ReadableStream bodies
  • Single-item queues at flush time (sent directly)
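The first two built-in exclusions amount to a fixed predicate evaluated before the user's skip. A hedged reimplementation (illustrative, not the library's actual code; FormData and ReadableStream are globals in modern runtimes):

```typescript
// Illustrative sketch of the always-bypass checks listed above.
function alwaysBypass(
    url: string,
    init: { body?: unknown },
    batchPath = '/__batch',
): boolean {
    // 1. The batch endpoint itself must never be re-batched (no recursion).
    if (new URL(url, 'http://placeholder').pathname === batchPath) return true;
    // 2. Multipart/streaming bodies cannot be serialized into the JSON batch.
    return init.body instanceof FormData || init.body instanceof ReadableStream;
}
```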

Wire Protocol

The batch request and response use plain JSON:

// POST /__batch  →  Request body
{
    "requests": [
        { "method": "GET",  "url": "/api/todos", "headers": { "authorization": "Bearer ..." } },
        { "method": "POST", "url": "/api/todos", "headers": { "content-type": "application/json" }, "body": "{\"title\":\"Buy milk\"}" }
    ]
}

// 200 OK  ←  Response body
{
    "responses": [
        { "status": 200, "headers": { "content-type": "application/json" }, "body": "[{\"id\":1}]" },
        { "status": 201, "headers": { "content-type": "application/json" }, "body": "{\"id\":2,\"title\":\"Buy milk\"}" }
    ]
}

A failing sub-request reports its error status in its own slot; the remaining sub-requests succeed normally.
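The per-slot mapping can be sketched as a small helper that splits a batch response body back into one result per original call. `unpackBatch` and `SubResponse` are hypothetical names; the real middleware reconstructs full Response objects instead:

```typescript
// Hypothetical helper showing how the wire format maps to per-call results.
interface SubResponse {
    status: number;
    headers: Record<string, string>;
    body?: string;
}

// A non-2xx slot becomes an error for that caller only; siblings are untouched.
function unpackBatch(json: string): { ok: boolean; status: number; body?: string }[] {
    const { responses } = JSON.parse(json) as { responses: SubResponse[] };
    return responses.map((r) => ({
        ok: r.status >= 200 && r.status < 300,
        status: r.status,
        body: r.body,
    }));
}
```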