
Commit

added analysis, experiment, figures, updated readme
sarahawu committed May 24, 2024
1 parent 1d7e8e6 commit af9539c
Showing 159 changed files with 22,545 additions and 24 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -158,3 +158,4 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
.Rproj.user
43 changes: 19 additions & 24 deletions README.md
@@ -4,7 +4,7 @@ Materials for the paper "[Whodunnit? Inferring what happened from multimodal evi

Sarah A. Wu*, Erik Brockbank*, Hannah Cha, Jan-Philipp Fränken, Emily Jin, Zhuoyi Huang, Weiyu Liu, Ruohan Zhang, Jiajun Wu, Tobias Gerstenberg.

- Presented at the *46th Annual Conference of the Cognitive Science Society* (2024; Rotterdam, Netherlands).
+ To be presented at the *46th Annual Conference of the Cognitive Science Society* (2024; Rotterdam, Netherlands).

```
@inproceedings{wu2024whodunnit,
@@ -19,10 +19,9 @@ Presented at the *46th Annual Conference of the Cognitive Science Society* (2024
**Contents:**

* [Overview](#overview)
- * [Experiment pre-registration & demo](#experiment-preregistration--demo)
+ * [Experiment](#experiment)
* [Repository structure](#repository-structure)
* [Experiment code](#experiment-code)
- * [GPT evaluation](#gpt-evaluation)
+ * [GPT-4 evaluations](#gpt4-evaluations)
* [Simulation model](#simulation-model)
* [Generating trials](#generating-trials)
* [CRediT author statement](#credit-author-statement)
Expand All @@ -45,45 +44,42 @@ Multimodal event reconstruction represents a challenge for current AI systems, a



- ## Experiment preregistration & demo
+ ## Experiment

- The experiment reported in these results was pre-registered on the [Open Science Framework](https://help.osf.io/article/158-create-a-preregistration).
- Our pre-registration can be found [here](https://osf.io/fzxre).
- A demo of the experiment can be found [here](). <!--TODO add a demo link here -->
+ The experiment reported in these results was pre-registered on the Open Science Framework [here](https://osf.io/fzxre).
+ It can be previewed [here](https://cicl-stanford.github.io/whodunnit_multimodal_inference/experiment)!


## Repository structure


```
├── code
│   ├── analysis
│   ├── generate_audio
│   ├── generate_visual
│   ├── gpt4
│   ├── model_data
│   │   └── ...
│   └── simulation_model
│       └── ...
├── data
├── docs
│   └── experiment
├── figures
└── writeup
```

- `/code`: This folder contains the code for various aspects of the experiment and analyses.
- - `/model_data`: This folder has trial data in the format used by the simulation model and by GPT-4 and GPT-4V evaluations. The models use a combination of evidence images, scene graph JSON files, and a CSV with transcribed audio evidence for each trial.
- `/analysis`: contains all the code for analyzing data and generating figures (view a rendered file [here](https://cicl-stanford.github.io/whodunnit_multimodal_inference)).
- `/generate_visual`: contains code to generate the images for each trial from JSON specifications.
- `/generate_audio`: contains code to generate the audio files for each trial.
+ - `/gpt4`: This folder contains code to run GPT-4 and GPT-4V evaluations.
+ - `/model_data`: This folder has trial data in the format used by the simulation model and for GPT-4 and GPT-4V evaluations. The models use a combination of evidence images, scene graph JSON files, and a CSV with transcribed audio evidence for each trial.
+ - `/simulation_model`: This folder has code and output for the [simulation model](#simulation-model).
- - `/data`:
- - `/docs`:
- - `/figures`:
- - `/writeup`:
+ - `/data`: contains anonymized participant data from the experiment as well as GPT-4 and GPT-4V evaluation results.
+ - `/docs/experiment`: contains all the behavioral experiment code. You can demo the experiment [here](https://cicl-stanford.github.io/whodunnit_multimodal_inference/experiment)!
+ - `/figures`: contains all the figures from the paper, generated using the script in `code/analysis`.
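The `model_data` format described above (evidence images, a scene graph JSON file, and a CSV of transcribed audio per trial) can be sketched as a small loader. This is a minimal illustration, not the repository's actual code; the file names `scene_graph.json` and `audio_transcript.csv` and the per-trial folder layout are assumptions, and the real layout in `code/model_data` may differ.

```python
import csv
import json
from pathlib import Path

def load_trial(trial_dir: Path) -> dict:
    """Gather one trial's multimodal evidence into a single dict.

    Assumes hypothetical file names: a scene graph JSON, rendered
    evidence images, and a CSV of transcribed audio.
    """
    # Scene graph: objects in the scene and their relations
    with open(trial_dir / "scene_graph.json") as f:
        scene_graph = json.load(f)

    # Visual evidence: rendered image files for this trial
    images = sorted(trial_dir.glob("*.png"))

    # Audio evidence: transcribed utterances, one row per line of dialogue
    with open(trial_dir / "audio_transcript.csv", newline="") as f:
        transcript = list(csv.DictReader(f))

    return {
        "scene_graph": scene_graph,
        "images": images,
        "transcript": transcript,
    }
```

An evaluation script could then iterate over the trial folders and hand each trial's evidence to GPT-4/GPT-4V or the simulation model.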


## Experiment code


- ## GPT evaluation
+ ## GPT-4 evaluations


## Simulation model
@@ -199,7 +195,6 @@ This CSV file is used to compare participants' responses on each trial to the mo
## Generating trials



## CRediT author statement

*[What is a CRediT author statement?](https://www.elsevier.com/researcher/author/policies-and-guidelines/credit-author-statement)*
File renamed without changes.
Binary file added code/analysis/cache/fit.gpt4.rds
Binary file added code/analysis/cache/fit.gpt4v.rds
Binary file added code/analysis/cache/fit.human.rds
Binary file added code/analysis/cache/fit.modalities.rds
