Currently, categorical variables are handled in the framework by one-hot encoding followed by projected gradient descent to restrict the proposed values to what is possible in the domain. This works very well for nominal variables, but underperforms for ordinal variables, especially when they have a large number of possible values, e.g., age.
I am not aware of any better methods to treat categorical variables, but in my own application using CounterfactualExplanations.jl, I applied the following heuristic: treat categorical (ordinal) variables as continuous and round all recommended value changes away from zero to the nearest integer. In code:
# Round the difference between the counterfactual (new) and factual (old) values away from zero
Δresult = round.(counterfactual - factual, RoundFromZero)
# Add it back to the factual to obtain the integer-valued counterfactual
factual += Δresult
This is only guaranteed to work for monotonic classifiers, i.e., those satisfying the property that increasing (decreasing) the value of a feature, or a subset of features, does not decrease (increase) the predicted score. In that case, rounding away from zero increases the magnitude of the proposed change but not its sign, so the predicted label must remain the same. In theory this increases the cost of the issued counterfactual, but in practice the original counterfactual could only have been implemented by rounding to the nearest integers anyway, because of the domain constraints.
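To illustrate why monotonicity is sufficient, here is a minimal Python sketch (the weights, threshold, and feature values are invented for illustration; the framework itself is Julia). The classifier's score is increasing in every feature, so rounding the change away from zero can only push the score further past the decision threshold:

```python
import numpy as np

# Toy monotonic classifier: the score is increasing in every feature.
# Weights and threshold are illustrative assumptions, not from the framework.
w = np.array([0.8, 1.2])
predict = lambda x: int(x @ w >= 5.0)

factual = np.array([2.0, 1.0])         # score 2.8 -> class 0
counterfactual = np.array([3.6, 2.3])  # score 5.64 -> class 1

# Round the proposed change away from zero to the nearest integer
delta = counterfactual - factual                    # [1.6, 1.3]
rounded = np.sign(delta) * np.ceil(np.abs(delta))   # [2.0, 2.0]
candidate = factual + rounded                       # [4.0, 3.0], score 6.8

# Rounding away from zero preserved the magnitude's direction, so the
# monotonic score moved further past the threshold and the label held.
assert predict(factual) == 0
assert predict(counterfactual) == predict(candidate) == 1
```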
Also, the above approach may work well even if the model is non-monotonic. In my own experiments on a non-monotonic model, rounding preserved the validity of 99.98% of counterfactuals. For the remaining 0.02%, I simply repeated the procedure, treating the original counterfactual as the factual, to push it even deeper into the region of space populated by the opposite class.
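The retry step can be sketched as follows, again in Python for illustration. The `predict` and `generate` functions are hypothetical stand-ins for the model and the counterfactual generator; the toy non-monotonic decision region is invented to show a case where naive rounding overshoots:

```python
import numpy as np

def round_from_zero(delta):
    # Round each component away from zero to the nearest integer.
    return np.sign(delta) * np.ceil(np.abs(delta))

def rounded_counterfactual(factual, counterfactual, predict, target,
                           generate, max_retries=1):
    candidate = factual + round_from_zero(counterfactual - factual)
    for _ in range(max_retries):
        if predict(candidate) == target:
            break
        # Repeat the procedure: re-run the search from the previous
        # counterfactual to push it deeper into the target class.
        counterfactual = generate(counterfactual)
        candidate = factual + round_from_zero(counterfactual - factual)
    return candidate

# Toy non-monotonic model (assumed): the target class occupies
# 2.5 < x < 3.4, so only the integer 3 is a valid recommendation.
predict = lambda x: int(2.5 < x[0] < 3.4)
# Hypothetical generator stand-in: pushes the counterfactual toward
# the centre of the target region.
generate = lambda cf: np.array([2.9])

factual = np.array([2.0])
counterfactual = np.array([3.2])  # valid, but rounds away to 4.0 (invalid)
result = rounded_counterfactual(factual, counterfactual, predict, 1, generate)
```

Here the first rounding lands on 4.0, outside the target region, and the single retry recovers the valid integer point 3.0.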
Ideally, this feature request would be satisfied by a new method for dealing with ordinal variables, but even the heuristic enhancement may be useful in many use cases. Explanations for Monotonic Classifiers may be informative.