Issue I discovered while working on #3384.
Tested on `main`, commit 0b46d11.

Normally the ONNX Bernoulli operator gets imported as `torch.operator "onnx.Bernoulli"`, but since ONNX provides a function for this operator, it's possible to make the importer pre-expand it (using the code added in #3409).

If we apply this patch
then the new importer output becomes
and the e2e tests for Bernoulli start failing:
[...]
When I investigated why this happens, it seems that the ONNX function interprets the input to the operator (let's call it p) in the opposite way to what these tests expect. p is always in [0, 1], but the ONNX function behaves as if (1 - p) were passed. So, where an all-ones result is expected, it gets all zeroes, and vice versa.
Looking at the importer output above, we can see ONNX's definition is very simple: generate random numbers (each in the range [0, 1], I believe), then compare them elementwise against p, with the comparison result (false or true) cast to an integer (0 or 1). To get the "expected" behavior, greater-than would have to be replaced with a different comparison (perhaps less-than-or-equal).
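To illustrate, here's a small NumPy model of the two comparison choices (hypothetical function names; this is just a sketch of the semantics described above, not torch-mlir's actual lowering). With a uniform sample u in [0, 1), `u > p` is true with probability 1 - p, while `u <= p` is true with probability p, which is why the two variants give opposite results at the endpoints:

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_onnx_style(p):
    # ONNX function expansion as described above:
    # uniform sample > p, result cast to integer.
    # P(u > p) = 1 - p, so ones appear with probability 1 - p.
    u = rng.random(p.shape)
    return (u > p).astype(np.int64)

def bernoulli_expected(p):
    # Behavior the e2e tests appear to expect:
    # u <= p gives ones with probability p.
    u = rng.random(p.shape)
    return (u <= p).astype(np.int64)

# At p == 1 the result is deterministic, since u is always < 1:
p = np.ones((4,))
print(bernoulli_onnx_style(p).tolist())  # all zeroes: u > 1 never holds
print(bernoulli_expected(p).tolist())    # all ones:   u <= 1 always holds
```

This matches the observed test failures: where the tests expect all ones (p = 1), the ONNX-style expansion produces all zeroes, and vice versa.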
To me, this clearly indicates a bug, but I'm not sure which implementation is "wrong" and which is "right".