Reviewing #225, I wondered how the streaming protocol capabilities exposed in a `streaming-capabilities-response` map to the capabilities defined in the `AudioConfiguration` and `VideoConfiguration` constructs in the Media Capabilities API.

For video:

- The Open Screen Protocol has a `max-pixels-per-second` and a `supports-rotation` field, neither of which matches any property in the Media Capabilities API. When are these fields useful? If they are, should they be considered in the Media Capabilities API as well? If not, could they be dropped?
- On top of the HDR-related properties that will be added to the Open Screen Protocol, Media Capabilities also has `scalabilityMode`, which targets identifiers defined in WebRTC-SVC. Would that be worth adding to the Open Screen Protocol?
- Media Capabilities also defines `hasAlphaChannel`. That does not strike me as particularly useful in a remote streaming scenario, though.

For audio, the Media Capabilities API also has:

- `samplerate`. The Open Screen Protocol has `max-frames-per-second` for video capabilities, but no equivalent for audio.
- `spatialRendering`.
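For reference, the `AudioConfiguration` and `VideoConfiguration` dictionaries above are the inputs to `navigator.mediaCapabilities.decodingInfo()`. A minimal sketch of a sender-side query (the content types and numbers are illustrative; `mediaCapabilities` is typed `any` here so the sketch is environment-agnostic, but in a browser it would be `navigator.mediaCapabilities`):

```typescript
// Illustrative VideoConfiguration / AudioConfiguration values.
const videoConfig = {
  contentType: 'video/mp4; codecs="avc1.64001f"',
  width: 1920,
  height: 1080,
  bitrate: 6_000_000, // bits per second
  framerate: 30,
};

const audioConfig = {
  contentType: 'audio/mp4; codecs="mp4a.40.2"',
  channels: '2',
  samplerate: 48000,
  spatialRendering: false,
};

// Ask the UA whether this combination can be decoded smoothly.
async function canDecodeSmoothly(mediaCapabilities: any): Promise<boolean> {
  const info = await mediaCapabilities.decodingInfo({
    type: 'media-source',
    video: videoConfig,
    audio: audioConfig,
  });
  return info.supported && info.smooth;
}
```

The mapping question above is essentially: which of these dictionary members should a receiver be able to advertise (or a sender be able to query) over the protocol.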
For the streaming protocol capabilities, I would have to do some research to find out what the use cases are for `max-pixels-per-second` and `supports-rotation`.
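One plausible reading, sketched under the assumption that `max-pixels-per-second` bounds the receiver's decode throughput (the type and function names below are illustrative, not from either spec):

```typescript
// Hypothetical shape of a receiver's advertised video streaming capability,
// using the field names discussed in this issue.
interface ReceiverVideoCapability {
  maxPixelsPerSecond?: number; // OSP max-pixels-per-second
  supportsRotation?: boolean;  // OSP supports-rotation
}

// A candidate encoding the sender is considering.
interface CandidateEncoding {
  width: number;
  height: number;
  framerate: number;
}

// Reject encodings whose decode throughput (width * height * framerate)
// exceeds the receiver's limit; an absent limit is treated as "no limit".
function fitsThroughput(cap: ReceiverVideoCapability, enc: CandidateEncoding): boolean {
  if (cap.maxPixelsPerSecond === undefined) return true;
  return enc.width * enc.height * enc.framerate <= cap.maxPixelsPerSecond;
}
```

Under that reading, a receiver advertising `maxPixelsPerSecond` of 1080p60 (1920 × 1080 × 60 = 124,416,000) would accept 1080p30 but not 4K60; `supports-rotation` might similarly tell a sender whether it needs to pre-rotate frames for a receiver that cannot rotate at render time, though that is speculation.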
In general, if there are properties of the video decoder that are not captured in the "profile" part of the codec string and the sender needs to know them to figure out how to encode the video, then they should be reported as streaming capabilities by the receiver.
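For AVC, for example, the RFC 6381 codec string already encodes profile and level directly: `avc1.PPCCLL` carries `profile_idc`, constraint flags, and `level_idc` as hex bytes. A minimal sketch of extracting them (the function name is illustrative; real parsing needs more validation):

```typescript
// Parse an "avc1.PPCCLL" codec string into profile_idc and level_idc.
// Returns null for anything that doesn't match the avc1 hex-suffix form.
function parseAvcCodecString(codec: string): { profile: number; level: number } | null {
  const m = /^avc1\.([0-9a-fA-F]{6})$/.exec(codec);
  if (m === null) return null;
  return {
    profile: parseInt(m[1].slice(0, 2), 16), // profile_idc, e.g. 100 = High
    level: parseInt(m[1].slice(4, 6), 16),   // level_idc x10, e.g. 31 = Level 3.1
  };
}
```

Decoder properties that can't be derived from this string (or from the limits the level implies) are the ones that would need a dedicated streaming capability.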
For the additional capabilities reported through Media Capabilities, we'd have to consider them on a case-by-case basis. Audio `samplerate` probably makes sense, but `hasAlphaChannel` may not.
If we decide to open up an API to script to allow senders to query the MediaCapabilities of a remote playback device, then that would also impact what capabilities need to be reported through the protocol.
Note that there is an ongoing discussion on these audio configuration capabilities in the Media Capabilities repo w3c/media-capabilities#160.