Description:
I am currently working with a 2D dataset. Each sample is a point x = [f1, f2], where f1 and f2 are continuous real-valued features. I am training a linear regression model and using DiCE's gradient method for explanation. I have not applied any preprocessing transformations to the data. Here is my code:
import dice_ml  # x_train, model (a PyTorch linear regression), and query_instance are defined elsewhere

d = dice_ml.Data(dataframe=x_train, continuous_features=['f1', 'f2'], outcome_name='tag')
m = dice_ml.Model(model=model, backend='PYT')  # PyTorch backend
dice_exp = dice_ml.Dice(d, m, method='gradient')
cfs = dice_exp.generate_counterfactuals(query_instance, total_CFs=4, posthoc_sparsity_param=None)
cfs.visualize_as_dataframe(show_only_changes=True)
However, during counterfactual generation the loss increases rapidly, and no counterfactuals are returned.
Upon further investigation, I found that this appears to be related to the proximity_loss. Specifically, the feature_weight_list is computed from the continuous features after they have been scaled to the [0, 1] range; because the scaled MADs are small, the resulting inverse-MAD weights are very large, which makes proximity_loss extremely large.
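To illustrate what I mean, here is a minimal sketch (not DiCE's actual code, and with made-up feature values standing in for x_train[['f1', 'f2']]) of how min-max scaling shrinks the MAD and therefore inflates inverse-MAD weights, assuming the raw feature ranges are much larger than 1:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
features = pd.DataFrame({'f1': rng.uniform(0, 5000, 1000),   # hypothetical wide-range feature
                         'f2': rng.normal(100, 20, 1000)})   # hypothetical narrower feature

# Min-max scaling to [0, 1], as applied internally to continuous features
scaled = (features - features.min()) / (features.max() - features.min())

def mad(col):
    # median absolute deviation from the median
    return (col - col.median()).abs().median()

w_scaled = 1.0 / scaled.apply(mad)    # inverse-MAD weights from scaled data
w_raw = 1.0 / features.apply(mad)     # inverse-MAD weights from raw data
print(w_scaled)  # larger by roughly the feature range, inflating proximity_loss
print(w_raw)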
However, when I computed the Mean Absolute Deviation (MAD) on the original, unscaled data and built the feature_weight_list from that, the loss stayed in a normal range and counterfactual explanations were found successfully.
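For reference, this is roughly the workaround I used. As far as I can tell, generate_counterfactuals accepts a custom feature_weights dict in place of the default "inverse_mad", so I passed inverse-MAD weights computed from the unscaled training data (rounding to two decimals is just my choice):

# Inverse-MAD weights from the original, unscaled data
mads = {f: (x_train[f] - x_train[f].median()).abs().median() for f in ['f1', 'f2']}
weights = {f: round(1.0 / m, 2) for f, m in mads.items()}

cfs = dice_exp.generate_counterfactuals(query_instance,
                                        total_CFs=4,
                                        posthoc_sparsity_param=None,
                                        feature_weights=weights)
cfs.visualize_as_dataframe(show_only_changes=True)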
So I want to ask: is this way of calculating the feature weights reasonable? In other words, should MAD really be computed on the scaled values when the user has not specified a preprocessing transformer (i.e., func=None)?
(P.S. Some run details are shown below.)