From d87c18d3b4e5bfbd184b59711a8c00728ca68da7 Mon Sep 17 00:00:00 2001
From: Austin Sullivan
Date: Thu, 14 Nov 2024 12:48:46 -0800
Subject: [PATCH] point to #778

---
 mltensor-explainer.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mltensor-explainer.md b/mltensor-explainer.md
index 69557c2c..225de712 100644
--- a/mltensor-explainer.md
+++ b/mltensor-explainer.md
@@ -292,7 +292,8 @@ It's possible `compute()` may have a performance advantage on some platforms for
 
 ### Open Questions
 
-- How will errors be surfaced? Do we need a concept similar to [WebGPU's error scopes](https://www.w3.org/TR/webgpu/#error-scopes), or is [returning errors via a promise for select operations](https://github.com/webmachinelearning/webnn/issues/697#issuecomment-2195656878) and losing the `MLContext` sufficient? See [#477](https://github.com/webmachinelearning/webnn/issues/477)
+- How will errors be surfaced? Do we need a concept similar to [WebGPU's error scopes](https://www.w3.org/TR/webgpu/#error-scopes), or is [returning errors via a promise for select operations](https://github.com/webmachinelearning/webnn/issues/697#issuecomment-2195656878) and losing the `MLContext` sufficient?
+  - *Update: [#778](https://github.com/webmachinelearning/webnn/issues/778) is a proposal for reporting non-fatal errors from the WebNN timeline*
 - Does the user agent have enough information to appropriately allocate an `MLTensor` if an `MLDeviceType` or `GPUDevice` is not used to create an `MLContext`? See [#350](https://github.com/webmachinelearning/webnn/issues/350) and [#749](https://github.com/webmachinelearning/webnn/issues/749)
 - Should the `dispatch()` method be a part of the `MLGraph` interface rather than `MLContext`? Should `readTensor()` and `writeTensor()` exist on an `MLTensor`? See [#697](https://github.com/webmachinelearning/webnn/issues/697).
 - Is a sync variant of the `importExternalBuffer()` method feasible (1) on platforms where completion of ML compute can be signaled on a GPU timeline, or (2) when blocking WebGPU workloads which do not themselves block compositing.