When a data frame containing exact zeros is passed at training time, the resulting processed_x occasionally replaces 0 with 1.387e-17. This is
likely due to floating-point behavior in either the C++ logic or the wrapper APIs, and it has also led to flakiness in some exact tests.
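A minimal, self-contained sketch of the suspected mechanism (the scale/unscale round trip here is an assumption, not traced to the actual C++ or wrapper code). It shows how an inverse transform can turn an exact 0 into a value on the order of 1e-17, and why a tolerance-based comparison would avoid the flaky exact tests:

```r
# Not the package's actual code: a center/scale step followed by its inverse
# can return a tiny nonzero value in place of an exact 0.
x <- c(0, 0.25, 0.7, 1)

mu <- mean(x)
s  <- sd(x)

scaled   <- (x - mu) / s      # forward transform
restored <- scaled * s + mu   # inverse transform

restored[x == 0]              # may print something like 1e-17 instead of 0
identical(restored, x)        # exact comparison: can fail, hence flaky tests
all.equal(restored, x)        # tolerance-based comparison: robust
```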
Another issue that may be related to precision is that when predicting over the same dataset used in training, sometimes not all leaves are used.
There has been one case, with honesty=TRUE and scale=TRUE, where the number of unique leaves != the number of unique predictions when predicting over the data used for averaging.
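For diagnosing that mismatch, here is a small self-contained helper (not part of the package) that counts distinct values up to a tolerance; comparing it against `length(unique(...))` on the predictions over the averaging data would show whether the leaf/prediction discrepancy is just ulp-level noise of the kind described above:

```r
# Count values that still differ after allowing for floating-point noise.
n_unique_tol <- function(v, tol = 1e-12) {
  v <- sort(v)
  sum(c(TRUE, diff(v) > tol))
}

# Illustration on synthetic values mirroring the reported symptom: 0 and
# 1.387e-17 are two values exactly, but one value up to tolerance.
p <- c(0, 1.387e-17, 0.42, 0.9)
length(unique(p))   # 4: exact uniqueness counts the ulp-level pair twice
n_unique_tol(p)     # 3: tolerance-aware uniqueness merges 0 and 1.387e-17
```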