Debug journal
The body shape is the transfer encoding
I picked up an open issue on e2b-dev/E2B this morning. A self-hosted user couldn't upload templates: their S3 bucket returned 501 NotImplemented on every PUT. The reporter had already root-caused it, and the diagnosis taught me something about fetch I hadn't thought about carefully.
The shape of the body you pass to fetch picks the transfer encoding. And some servers have opinions about the result.
The bug
The JS SDK's uploadFile did the obvious thing:
const res = await fetch(url, {
  method: "PUT",
  body: uploadStream, // Node Readable
  duplex: "half",
});
uploadStream is a gzipped tar produced by the SDK as a Node.js Readable. Passed straight to fetch, this looks idiomatic.
Under the hood, Node's fetch is undici. When the body is a stream whose length isn't known ahead of time, undici can't fill in Content-Length. It falls back to Transfer-Encoding: chunked: each chunk is prefixed with its length, the stream ends with a zero-length chunk, and the server reassembles the pieces.
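You can watch the framing decision happen. This is a minimal sketch (assuming Node 18+, run as an ES module): a throwaway local echo server reports which framing headers the client actually sent, and the same five-byte PUT goes out once as a stream and once as a Buffer. The server and variable names are mine, not from the SDK.

```javascript
import http from "node:http";
import { Readable } from "node:stream";
import { once } from "node:events";

// Throwaway echo server: drain the request body, then report which
// framing headers the client sent.
const server = http.createServer((req, res) => {
  const framing = {
    transferEncoding: req.headers["transfer-encoding"] ?? null,
    contentLength: req.headers["content-length"] ?? null,
  };
  req.resume(); // drain the body
  req.on("end", () => res.end(JSON.stringify(framing)));
});
server.listen(0);
await once(server, "listening");
const url = `http://localhost:${server.address().port}/`;

// Stream body: the length is unknown, so undici falls back to chunked.
const streamed = await fetch(url, {
  method: "PUT",
  body: Readable.toWeb(Readable.from([Buffer.from("hello")])),
  duplex: "half", // required whenever the body is a stream
}).then((r) => r.json());

// Buffer body: the length is known, so undici sets Content-Length.
const buffered = await fetch(url, {
  method: "PUT",
  body: Buffer.from("hello"),
}).then((r) => r.json());

console.log(streamed); // { transferEncoding: 'chunked', contentLength: null }
console.log(buffered); // { transferEncoding: null, contentLength: '5' }

server.close();
server.closeAllConnections();
```

Same bytes, same endpoint, two different wire shapes — exactly the difference S3 cares about.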
Most HTTP endpoints accept chunked. S3 presigned PUT URLs do not. The server answers 501 NotImplemented at the protocol layer, and there isn't a bucket policy or an IAM adjustment that changes its mind.
Why Python worked and Node didn't
The Python SDK, built around the same upload flow, has never hit this bug. Its code buffers the tar archive into io.BytesIO before the PUT:
tar_buffer = tar_file_stream(...)
response = client.put(url, content=tar_buffer.getvalue())
bytes has a known length; httpx sets Content-Length automatically; S3 accepts the request.
Two SDKs, same upload flow, same presigned URL, same bucket. One works, one doesn't. The only difference is the shape of the body handed to the HTTP client.
The fix
Materialize the stream before the PUT:
import { buffer } from "node:stream/consumers";

const bodyBuffer = await buffer(uploadStream);

const res = await fetch(url, {
  method: "PUT",
  body: bodyBuffer,
});
Three things fall out of the change. Content-Length is set automatically because the Buffer has a known size. Transfer-Encoding: chunked disappears from the request. And the duplex: "half" flag goes with it: that flag exists only to declare half-duplex mode when the request body is a stream (send the whole body before reading any of the response), and a Buffer isn't a stream.
One import, one await, a three-line diff in the upload function. Full PR at e2b-dev/E2B#1285.
The trade-off
Buffering the tar into memory means the whole archive has to fit. The Python SDK has the same property, and both SDKs target Docker-build-sized payloads, not multi-gigabyte uploads. The PR body names this trade-off out loud: if memory pressure ever matters for very large uploads, the real fix is coordinated work across both SDKs and the server, because setting Content-Length up front on a streaming upload means knowing the archive's total size before the first byte goes out.
A fix that ships today plus an honest note about when it would need revisiting is a better state than a longer design conversation that leaves a broken feature in place.
What I'll remember
When you hand fetch a body, you aren't just handing it bytes. You're handing it a shape. A Buffer picks fixed-length framing. A Readable picks chunked. Most servers don't care. The ones that do (S3 presigned PUTs, some older proxies, certain WAFs) will reject one and accept the other without giving you a helpful error message.
The generalizable rule: if the payload fits in memory and the destination isn't under your control, buffer first. The memory cost is obvious up front; the debugging cost of a 501 on chunked is not.
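The rule can be packaged as a small helper. This is a hedged sketch, not SDK code: toFixedLengthBody and MAX_BUFFERED_BYTES are hypothetical names, and the limit is a placeholder you'd tune per workload. It materializes a stream so the request goes out with Content-Length, and fails loudly instead of silently buffering something enormous.

```javascript
import { buffer } from "node:stream/consumers";
import { Readable } from "node:stream";

// Hypothetical limit: how much we're willing to hold in memory at once.
const MAX_BUFFERED_BYTES = 64 * 1024 * 1024;

// Collect a body stream into one Buffer so fetch can set Content-Length
// instead of falling back to Transfer-Encoding: chunked.
async function toFixedLengthBody(stream, limit = MAX_BUFFERED_BYTES) {
  const body = await buffer(stream); // drains the whole stream
  if (body.byteLength > limit) {
    throw new Error(
      `payload of ${body.byteLength} bytes exceeds buffering limit of ${limit}`
    );
  }
  return body; // known length → fixed-length framing
}

// Usage sketch with a tiny in-memory stream.
const small = await toFixedLengthBody(Readable.from([Buffer.from("abc")]));
console.log(small.byteLength); // 3
```

Checking the limit after draining is the simple version; a stingier one would count bytes chunk by chunk and bail early, but for Docker-build-sized payloads the simple version is enough.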
Credit for the diagnosis on this one goes to the reporter, @g-veritas, whose bug report is a small masterclass in how to root-cause a protocol-level failure before filing. JS vs Python side-by-side. Wire-shape table. Suggested fix with a concrete diff. I shipped the fix; they did the work that made the fix obvious.