The "considered alternatives" section should outline the tradeoffs between this design and getUserMedia(), with particular attention paid to the varied configuration options getUserMedia() takes and how we will ensure that developers are not confused in the future when trying to access the same devices through both interfaces. That is, if this API configures the device in a lower-resolution, higher-frame-rate mode, but getUserMedia() is called with different configuration options, what happens?
I think that's a valid concern, but it's not actually new for the proposed raw camera access feature. Even without that feature, an immersive-ar session on a device such as a smartphone will be using the same camera for AR tracking that would otherwise be accessible through getUserMedia().
Would it make sense to add a clarification here, for the WebXR AR module? Here's a somewhat handwavy proposal, the details would of course be subject to further discussion:
Interaction with getUserMedia
On some devices, the AR session is using a camera that is also potentially accessible through the getUserMedia() API.
A user agent MAY require exclusive camera access for immersive-ar sessions. In that case, getUserMedia() requests for that camera will fail with a NotReadableError while the AR session is in progress, and conversely requestSession('immersive-ar') will fail if the camera is already in use by getUserMedia().
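Under this exclusive-access model, a page could detect the busy-camera case and react gracefully. A minimal sketch, assuming the behavior proposed above; `classifyCameraError` and `startCameraPreview` are hypothetical helpers, not part of any spec, though the error names are the standard DOMException names:

```javascript
// Hypothetical helper mapping DOMException names from a failed
// getUserMedia() call to an app-level action. Assumes the exclusive-access
// behavior described above: NotReadableError while an immersive-ar
// session holds the camera.
function classifyCameraError(err) {
  if (err.name === 'NotReadableError') return 'camera-busy';
  if (err.name === 'NotAllowedError') return 'permission-denied';
  return 'unknown';
}

async function startCameraPreview() {
  try {
    return await navigator.mediaDevices.getUserMedia({ video: true });
  } catch (err) {
    if (classifyCameraError(err) === 'camera-busy') {
      // An immersive-ar session may be using the camera exclusively;
      // retry after the session ends rather than surfacing a hard error.
      return null;
    }
    throw err;
  }
}
```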
If the user agent supports concurrent camera access, it must ensure that both APIs work according to their respective standards. If the application makes a getUserMedia() request with a custom resolution and frame rate that doesn't match the requirements of the AR session, the user agent must reject the request with an OverconstrainedError if the capability is required, or keep using the AR session's parameters if the capability is an optional preference.

For privacy reasons, if the AR device is displaying a camera feed, the stream provided by getUserMedia() must either have a matching field of view, or the user agent must provide a clear indication to the user that the camera is capturing more than the user can see on their device.
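In MediaTrackConstraints terms, a capability is "required" when the constraint uses exact/min/max, while a bare value or ideal expresses an optional preference the user agent may ignore. A small sketch of that distinction; `isRequiredConstraint` is a hypothetical helper, not part of any spec:

```javascript
// A constraint value makes the capability "required" when it uses
// exact/min/max; a bare value or { ideal: ... } is an optional preference
// that the user agent may satisfy by keeping the AR session's parameters.
function isRequiredConstraint(value) {
  return typeof value === 'object' && value !== null &&
         ('exact' in value || 'min' in value || 'max' in value);
}

// Likely to be rejected with OverconstrainedError if it conflicts with
// the AR session's camera configuration:
const required = { video: { width: { exact: 1920 }, frameRate: { min: 60 } } };

// The user agent may instead keep the AR session's parameters:
const preferred = { video: { width: { ideal: 1920 } } };
```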
Another potential privacy concern for concurrent camera access is what happens when the AR session ends while a getUserMedia stream is still active for that camera. If the camera continues recording after the AR session has ended and the user is no longer seeing the camera feed, that may violate the principle of least surprise even if the device is still showing a "camera in use" indicator.
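One way a page could mitigate this would be to tie the stream's lifetime to the session via the XRSession `end` event. A sketch under that assumption; `stopStreamOnSessionEnd` is a hypothetical helper, not spec text:

```javascript
// Sketch: stop all tracks of a getUserMedia() stream when the AR session
// ends, so the camera does not keep recording after the user is no
// longer seeing the camera feed.
function stopStreamOnSessionEnd(session, stream) {
  session.addEventListener('end', () => {
    for (const track of stream.getTracks()) {
      track.stop();
    }
  });
}
```

XRSession fires `end` when the session ends for any reason, so this covers both explicit `session.end()` calls and system-initiated shutdowns.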
(FWIW, the handheld AR implementations I'm aware of all use exclusive camera access, so any potential ambiguity regarding concurrent access hasn't really come up before.)
(The comment and clarification proposal above were originally posted by @slightlyoff in immersive-web/raw-camera-access#14 (comment).)