In #13 and #23, support was discussed -- and added -- for allowing the sender to specify the type of the data being parsed by the receiver. I contend this feature went too far, and this protocol should avoid confusing data format types with JavaScript language types.
Other web protocols only differentiate string from binary, as these are different ways of interpreting the data, rather than storing it. The choice between Blob and ArrayBuffer is then handled by the receiver, using binaryType. This protocol should work the same way.
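To illustrate (a minimal sketch, assuming an environment such as Node 18+ or a browser where Blob is global): the same wire bytes can be reified either way by the receiver, which is all binaryType decides -- the sender never needs to say which.

```javascript
// The bytes as they arrive off the wire.
const wireBytes = new Uint8Array([104, 101, 108, 108, 111]); // "hello"

// Receiver choice 1: reify as a Blob (what binaryType = "blob" yields).
const asBlob = new Blob([wireBytes]);

// Receiver choice 2: reify as an ArrayBuffer (binaryType = "arraybuffer").
const asArrayBuffer = wireBytes.slice().buffer;

// Same bytes, two purely local views -- nothing protocol-level changed.
console.log(asBlob.size, asArrayBuffer.byteLength); // 5 5
```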
This isn't just a plea for consistency: by specifying the receiver's language type at the sender, you tie this specification to a JavaScript API, and you turn how the receiver should handle large memory allocations into a protocol issue -- rather than an implementation concern.
If I am implementing this protocol in Rust or Go or C++, how am I to interpret being on the receiving end of a binary message that insists it be reified specifically as a Blob, or specifically as an ArrayBuffer? Outside JavaScript, the distinction is nonsensical.
Meanwhile, as a receiver, if someone sends me a very large message marked as ArrayBuffer, whether I work with the data incrementally or buffer the entire value into a single allocation should be my choice, not a protocol-level obligation imposed by the sender's wishes.
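As a hypothetical sketch of the kind of incremental handling that should remain an implementation choice (the chunking scheme and running checksum here are invented for illustration), a receiver can fold fragments into a running result without ever holding the full payload in one allocation:

```javascript
// A receiver that processes a message chunk-by-chunk, regardless of how the
// sender labeled it. It keeps O(1) state instead of buffering everything.
function makeIncrementalReceiver() {
  let bytesSeen = 0;
  let checksum = 0;
  return {
    onChunk(chunk) { // chunk: a Uint8Array fragment off the wire
      bytesSeen += chunk.length;
      for (const b of chunk) checksum = (checksum + b) % 0xffff;
    },
    result() {
      return { bytesSeen, checksum };
    },
  };
}

// The "sender" intended one big message; the receiver still streams it.
const receiver = makeIncrementalReceiver();
for (let i = 0; i < 4; i++) receiver.onChunk(new Uint8Array(1024).fill(i));
console.log(receiver.result().bytesSeen); // 4096
```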
The concept of binaryType should therefore be specific to the JavaScript API -- and thereby not part of the protocol -- the same way it works for both WebSocket and WebRTC Data Channels, where it serves the practical interests of the receiver implementation.
(This is, of course, assuming you continue down the road of providing both a WebRTC Data Channel API and a WebTransport API -- as, AFAIU, the latter considers even string encoding an e2e issue -- on top of separate network protocol layers... I, personally, hope you do not.)
After some discussions, including some with the WebRTC WG, I am also leaning towards just dropping the DataChannel API. I included it for ease of use but it seems less favored than I originally imagined.
I haven't made a proposal for this change yet because, in the meantime, I have been exploring ways to make the API even leaner by leveraging existing transport APIs/protocols rather than introducing new ones (at least for the LAN use case). The idea would be that the LP2P API gives you a URL & certificate for a local peer that can be used with existing WebRTC/WebTransport/... APIs/protocols, thereby being more open to existing efforts and creating less new implementation work. But that requires protocol-level support; w3c/openscreenprotocol#351 is an attempt at a design for that.
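A rough sketch of that handoff, under stated assumptions: suppose the LP2P API hands the page a URL plus a certificate hash for a local peer, and the page connects with the existing WebTransport API via its serverCertificateHashes option. The `peer.url` / `peer.certHashHex` shape is illustrative only, not from any spec.

```javascript
// Helper: decode a hex string into the bytes serverCertificateHashes expects.
function hexToBytes(hex) {
  const out = new Uint8Array(hex.length / 2);
  for (let i = 0; i < out.length; i++) out[i] = parseInt(hex.slice(i * 2, i * 2 + 2), 16);
  return out;
}

// Browser-only; shown here but not invoked. `peer` is a hypothetical object
// the LP2P API might return: { url, certHashHex }.
function connectToLocalPeer(peer) {
  return new WebTransport(peer.url, {
    serverCertificateHashes: [
      { algorithm: "sha-256", value: hexToBytes(peer.certHashHex) },
    ],
  });
}

console.log(hexToBytes("abcd").length); // 2
```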
https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/binaryType
https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel/binaryType
w3c/webrtc-pc#2170 (comment)