Replies: 4 comments 1 reply
-
I got the answer from Jan: "the emulator is trained on both the parameter and the conditioning variables. At inference time you build a wrapper that fixes the conditions via the potential function wrapper. You could use the same mechanism to fix some of the actual parameters as well at inference time."
-
Thanks for your answer. I know how to fix a parameter to a specific value in the potential function wrapper. However, the key idea in my analysis is to force a parameter to be the same across two conditions.

Suppose we have a five-parameter competing accumulator (CA) model with parameters dI, I, k, a, ter. The experiment has a difficulty manipulation that only affects dI (the input difference), resulting in two experimental conditions. The behavioral data for one subject then has three columns: RTs, choices, and conditions. In the analysis, I am trying to build a CA model with six parameters: dI1 (for condition 1), dI2 (for condition 2), I, k, a, ter. That is, dI is free to vary across conditions while the others are shared (because the experimental manipulation is supposed to change only dI).

At the inference stage, MNLE needs to take the RTs, choices, and conditions into account and return the MAP estimate (or posterior samples) for dI1, dI2, I, k, a, ter. My intuition is that we should also specify the relationship between the parameters before training the network. Maybe we need to change the underlying network structure to achieve this?

I've attached a Jupyter notebook with the code for one condition and the implementation of the simulator wrapper. I would greatly appreciate any help with this issue, as fitting decision-making models to experimental data like this is critically important.

SBI with multiple experimental conditions.ipynb.zip

Best regards,
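For reference, one way to express this "shared parameters, condition-specific dI" structure without modifying the network is at the simulator-wrapper level: sample a 6-parameter theta plus a condition column, and let the wrapper pick dI1 or dI2 depending on the condition. The sketch below is an illustration only; `toy_ca_simulator` and the exact column layout are assumptions, not the real CA model.

```python
import torch

def toy_ca_simulator(parameters: torch.Tensor) -> torch.Tensor:
    # placeholder for the real 5-parameter CA simulator;
    # columns are (dI, I, k, a, ter); returns fake (RT, choice) pairs
    rt = parameters[:, 4:5] + parameters[:, 0:1].abs()
    choice = (parameters[:, 0:1] > 0).float()
    return torch.cat([rt, choice], dim=1)

def sim_wrapper_shared(theta: torch.Tensor) -> torch.Tensor:
    # theta columns: (dI1, dI2, I, k, a, ter, condition)
    # condition is 0 or 1 and selects which dI enters the simulator
    cond = theta[:, 6].long()
    dI = torch.where(cond == 0, theta[:, 0], theta[:, 1]).unsqueeze(1)
    # rebuild the 5-parameter vector the CA simulator expects
    params5 = torch.cat([dI, theta[:, 2:6]], dim=1)
    return toy_ca_simulator(params5)

# same shared parameters, two conditions: only dI differs between rows
theta = torch.tensor([[0.5, 1.0, 0.3, 0.2, 1.5, 0.25, 0.0],
                      [0.5, 1.0, 0.3, 0.2, 1.5, 0.25, 1.0]])
x = sim_wrapper_shared(theta)
```

The network is then trained on the full (dI1, dI2, I, k, a, ter, condition) input; the sharing constraint lives entirely in the wrapper, so the architecture itself does not need to change.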
-
Hi @Jiashun97 thanks for sharing your question here, and sorry for the delayed response. I am not sure I fully understand your setup. The initial CA model has 5 parameters: dI, I, k, a, ter.

Q1: Is […] At inference time, you would then define a prior only over […]?

Q2: Am I missing something in the setup?

Comment: I had a brief look at your notebook, and in cell 8 or 9, where you define the wrapper to simulate both parameters and conditions, the indexing into the theta tensor seems to overlap:

```python
# define a simulator wrapper in which the experimental conditions are contained
# in theta and passed to the simulator.
def sim_wrapper(theta):
    # simulate with experiment conditions
    return simul_LCA5(
        # we assume the first two parameters are beta and rho
        parameters=theta[:, :5],
        # we treat the third concentration parameter as an experimental condition
        # add 1 to deal with 0 values from Categorical distribution
        cond=theta[:, 2:] + 1,
    )
```

i.e., `theta[:, :5]` and `theta[:, 2:]` select overlapping columns of `theta`.

Cheers,
-
Hi @Jiashun97, thanks for sharing the details, and sorry, it always takes a while until I find the time to look into this.

I have now started implementing a feature that allows passing a batch of different condition values that correspond to a batch of observed i.i.d. trials. With this new feature it will be possible to do the following:

Only at inference time do you start to distinguish between parameters and conditions. Your observed data is now given (e.g., for a specific subject) as a table of i.i.d. trials.

```python
potential_fn = LikelihoodBasedPotential(estimator, proposal, x_o)
conditioned_potential_fn = potential_fn.condition_on(conditions, dims_to_sample=[0, 1])
```

This is the crucial step. You can then use the conditioned potential function to perform MCMC as usual. You need to repeat this step for each subject, or whenever the […] changes.

You can find the full code snippet here: https://github.com/sbi-dev/sbi/blob/multiple-conditions-in-iid-sampling/tutorials/Example_01_DecisionMakingModel.ipynb

Note that this is still WIP, and the exact API might change a bit. But I hope this helps you figure out how to set up your specific case.

Best wishes,
-
Hi there,
I am trying to apply MNLE to DDMs that need to allow some parameters to vary freely across experimental conditions. I noticed that sbi/tutorials/Example_01_DecisionMakingModel.ipynb provides a simulator wrapper to do so. If I understand it correctly, it makes all parameters free across conditions. However, in some behavioral experiments only one or two parameters are free while the others are fixed. I was wondering whether it is possible to implement this in MNLE. Thanks for your help!
Best regards,
Jiashun