
Migrated: Any facility for POST body compression? #56

Open
clelland opened this issue Nov 20, 2022 · 14 comments
@clelland
Collaborator

From clelland/page-unload-beacon#1:

Today, large beacon bodies are common, and compressing them is annoyingly hard. Could this be fixed here?

@ericlaw1979

@clelland
Collaborator Author

clelland commented Dec 8, 2022

So PendingBeacon can take a ReadableStream as input, which should plug directly into the output side of a CompressionStream. That would mean sending your data into the CompressionStream, and then feeding that stream into setData, which would read and then buffer the compressed data for later sending.

Implementation-wise, I think this would mean doing the compression on the renderer side. I don't know if there is any increased danger of data loss by moving to something less synchronous than just sending in a byte array. (e.g., if there would have been time before a renderer crash to send the raw data, but not to wait for the async compression to finish.)
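The renderer-side pipeline described above can be sketched with the standard Compression Streams API. PendingBeacon and setData are from this proposal and are not modeled here; the sketch only shows how a payload would be compressed and buffered into a single byte array for later sending.

```javascript
// Sketch of the pipeline described above, using the standard
// Compression Streams API (available in modern browsers and Node 18+).
// PendingBeacon/setData are from this proposal and not modeled here;
// this shows only how the compressed bytes would be buffered.
async function compressForBeacon(payload) {
  const compressed = new Blob([payload])
    .stream()
    .pipeThrough(new CompressionStream('gzip'));
  // Buffer the whole compressed stream into one array, as setData()
  // would need to do before the beacon can be sent.
  return new Uint8Array(await new Response(compressed).arrayBuffer());
}
```

A PendingBeacon that accepts a ReadableStream could instead be handed the compressed stream directly, moving this buffering step into the browser.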

@clelland
Collaborator Author

clelland commented Dec 8, 2022

It occurs to me, after writing that, that the HTTP stream wouldn't have compression applied as a content encoding; it would be a regular POST whose payload happens to be a compressed byte stream encoded in JSON. So we'd have to make sure that character escaping doesn't interfere with the goals of compression here.
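A quick illustration of the escaping cost, assuming the usual workaround of base64-encoding the compressed bytes before placing them in a JSON string:

```javascript
// Raw compressed bytes can't sit in a JSON string directly (arbitrary
// bytes aren't valid text), so they're typically base64-encoded first,
// which inflates the payload by a fixed ~33% regardless of content.
// Buffer is Node-specific; a browser would use btoa() or similar.
const bytes = Uint8Array.from({ length: 300 }, () =>
  Math.floor(Math.random() * 256) // incompressible random input
);
const b64 = Buffer.from(bytes).toString('base64');
console.log(bytes.length, b64.length); // prints "300 400"
```

That overhead eats into whatever the compression saved, which is part of why a real content encoding on the request would be preferable.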

@yoavweiss
Collaborator

/cc @nicjansma @ksylor

@yoavweiss
Collaborator

An alternative here would be browser-side compression of the entire body before sending. The downside of that approach is that there's no way for the browser to negotiate compression formats with the server, so the browser would have to trust the app's claim that the server receiving the data can handle the requested compression format.

@ksylor

ksylor commented Dec 8, 2022

@yoavweiss @clelland Could there be an optional parameter where the acceptable compression format(s) are passed into the options object, and if none are specified, the payload is not compressed? Also curious if this could be an option for GET as well as POST beacons?
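The negotiation suggested here could be as simple as picking the first requested format the browser supports, falling back to an uncompressed payload. Names and the supported set below are hypothetical, not from the proposal:

```javascript
// Hypothetical sketch of the option @ksylor describes: the page lists
// the encodings its server accepts; the browser uses the first one it
// can produce, or sends the payload uncompressed if there's no overlap.
const BROWSER_SUPPORTED = new Set(['gzip', 'deflate']); // hypothetical
function chooseContentEncoding(acceptable = []) {
  return acceptable.find((enc) => BROWSER_SUPPORTED.has(enc)) ?? null;
}
```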

@yoavweiss
Collaborator

I'm curious RE the use case for GET compression. Presumably whatever compression scheme we'd end up with would require backend work to enable the endpoint to decompress the data. Given that, any reasons why the payload can't be sent via a POST?

Also, one more complication: there's a good chance that arbitrary POST compression would mean that the request is not CORS-safe and hence would require a preflight. I'm not sure that's awful.
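The preflight concern follows from the Fetch spec's safelist: Content-Encoding is not a CORS-safelisted request header, so setting it on a cross-origin request forces an OPTIONS round trip first. A simplified check (the real safelist also constrains header values, e.g. which Content-Type values qualify):

```javascript
// Simplified from the Fetch spec: only these request headers (with
// value restrictions not modeled here) avoid triggering a preflight.
const CORS_SAFELISTED = new Set([
  'accept', 'accept-language', 'content-language', 'content-type',
]);
function triggersPreflight(headerNames) {
  return headerNames.some((h) => !CORS_SAFELISTED.has(h.toLowerCase()));
}
```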

@ksylor

ksylor commented Dec 8, 2022

I don't have a use case for GET compression, @yoavweiss; I was just curious why this was specifically for POST and not both.

@tunetheweb

> I don't have a use case for GET compression @yoavweiss was just curious why this was specifically for POST and not both

Headers (including the URL) are automatically compressed for HTTP/2 and HTTP/3 (and can't be compressed for HTTP/1.1) so this is mostly about compressing the body in a large POST message.

@yoavweiss
Collaborator

> Headers (including the URL) are automatically compressed for HTTP/2 and HTTP/3

I wouldn't rely on HPACK and QPACK doing a good job at compressing one-time arbitrary payloads here. They're good at compressing repeating header names or values, and beacon payloads won't be a good fit for them. Generally, I think we should steer people towards POST-based beacons.

@nicjansma

Also see w3c/beacon#72 for discussions on this in the context of sendBeacon().

As @ksylor mentions, for RUM I would also prefer a mechanism to specify desired compression encodings that could be applied, as supported, before beaconing (but not every time I call setData()).

For some RUM use cases, I think we'll be adding/updating content to a PendingBeacon multiple times prior to the page unload/beaconing, and it would be ideal to not spend the cost of compression on a stream until the very "final" payload gets sent.

RE: this triggering a CORS preflight, it's not awful but could increase the likelihood of beacons never arriving (not due to preflight denial, but due to the latency it adds).
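The "compress only once, at send time" shape described above could look like the following; the class and method names are hypothetical, and only the compression step uses a real API:

```javascript
// Hypothetical sketch: updates overwrite the raw payload cheaply, and
// gzip runs exactly once, when the beacon is finally dispatched.
class LazyCompressedBeacon {
  #raw = '';
  setData(data) { this.#raw = data; } // no compression cost per update
  async finalize() {                  // called once, at unload/send time
    const stream = new Blob([this.#raw])
      .stream()
      .pipeThrough(new CompressionStream('gzip'));
    return new Uint8Array(await new Response(stream).arrayBuffer());
  }
}
```

This avoids re-compressing on every setData() call, at the cost of doing the CPU work at the worst possible moment (page unload).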

@fergald
Collaborator

fergald commented Feb 20, 2023

@nicjansma Perhaps we could do the pre-flight at the time the beacon is created.

@yoavweiss
Collaborator

So, when talking about a Fetch-based API, I think we have 2 options here:

  1. The API will accept a CompressionStream, and compression would happen when the fetch is emitted. Any future update to the beacon would trigger re-compression of the data (maybe we can avoid that by recycling the CompressionStream for additions, but not for modifications).
  2. We'd rely on Content-Encoding on the request side and Accept-Encoding on the response side to perform the compression on the browser side. As @fergald said, we can send an empty request at the beacon creation time to test the server's ability to handle Content-Encoding. This option will not require re-compression for beacon changes.
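Option (2)'s creation-time probe could be sketched as below. The URL handling and the empty-body probe are assumptions, not spec; a real probe would likely need a well-formed empty gzip member as the body rather than zero bytes.

```javascript
// Sketch of the creation-time probe: POST an empty request carrying
// Content-Encoding: gzip; a 2xx reply suggests the server can decode
// compressed beacon bodies later. Details here are assumptions.
async function serverAcceptsGzip(url) {
  try {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Encoding': 'gzip' },
      body: new Uint8Array(0), // a real probe would send a valid empty gzip stream
    });
    return res.ok;
  } catch {
    return false; // network or CORS failure: assume unsupported
  }
}
```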

@yoavweiss
Collaborator

At the same time, for (2) it'd be a bit harder to integrate a post-compression quota.

@yoavweiss
Collaborator

A third option would be for the compression to happen on the browser side, as an opt-in from the user (e.g. adding a `compression: gz` field to the RequestInit).
