Official repository for "Jump Starting Bandits with LLM-Generated Prior Knowledge", appearing at EMNLP 2024 (Main Conference).

Authors: Parand A. Alamdari, Yanshuai Cao, Kevin H. Wilson

arXiv: https://arxiv.org/abs/2406.19317

Pretrain phase:

*(figure: overview of the pretrain phase)*

Online fine-tuning phase:

*(figure: overview of the online fine-tuning phase)*

Instructions to run the experiments

First Experiment: Personalized Email Campaign for Charity Donations

To generate virtual users and responses:

```
python generate_dataset.py
```
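For intuition, here is a minimal sketch of what this step produces: sample a persona from a few attribute values, then record a simulated donation response for each candidate email arm. The persona attributes, arm names, and the `simulate_response` heuristic are illustrative assumptions, not the repository's code; the actual `generate_dataset.py` presumably queries an LLM to role-play each virtual user.

```python
# Hypothetical sketch of virtual-user generation (not the repo's actual code).
# A simple heuristic stands in for the LLM call so the example stays runnable.
import random

PERSONA_FIELDS = {                                   # assumed attribute values
    "age": ["18-30", "31-50", "51+"],
    "income": ["low", "medium", "high"],
    "cause_affinity": ["health", "education", "environment"],
}

EMAIL_ARMS = ["emotional_appeal", "impact_statistics", "matching_gift"]  # assumed arm names


def sample_persona(rng: random.Random) -> dict:
    """Draw one virtual user by sampling each attribute uniformly."""
    return {field: rng.choice(values) for field, values in PERSONA_FIELDS.items()}


def simulate_response(persona: dict, arm: str, rng: random.Random) -> int:
    """Stand-in for the LLM role-play call: return 1 if the persona 'donates', else 0."""
    base = 0.15
    if arm == "impact_statistics" and persona["income"] == "high":
        base += 0.25
    if arm == "emotional_appeal" and persona["cause_affinity"] == "health":
        base += 0.20
    return int(rng.random() < base)


def build_dataset(n_users: int = 1000, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    rows = []
    for _ in range(n_users):
        persona = sample_persona(rng)
        for arm in EMAIL_ARMS:
            rows.append({**persona, "arm": arm, "reward": simulate_response(persona, arm, rng)})
    return rows


if __name__ == "__main__":
    data = build_dataset()
    print(f"Generated {len(data)} (persona, arm, reward) rows")
```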

To pretrain contextual bandit model and generate plots:

```
python pretrained-bandit.py -r 10
```
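The sketch below illustrates the general idea of both phases with a per-arm linear Thompson sampling model: it is warm-started ("pretrained") on simulated (context, arm, reward) triples and then keeps updating online. The context dimension, arm count, priors, and the toy reward simulator are assumptions made for illustration and are not taken from `pretrained-bandit.py`.

```python
# Minimal linear Thompson sampling sketch: warm-start on simulated data, then update online.
# Dimensions, priors, and the reward simulator are illustrative assumptions.
import numpy as np

N_ARMS, DIM = 3, 5  # assumed number of arms and context dimension


class LinearTS:
    def __init__(self, n_arms: int, dim: int, noise: float = 0.5):
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm precision matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward-weighted contexts
        self.noise = noise

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

    def select(self, x: np.ndarray, rng: np.random.Generator) -> int:
        """Sample a parameter vector per arm from its posterior and pick the best arm."""
        scores = []
        for A, b in zip(self.A, self.b):
            cov = self.noise ** 2 * np.linalg.inv(A)
            theta = rng.multivariate_normal(np.linalg.solve(A, b), cov)
            scores.append(theta @ x)
        return int(np.argmax(scores))


def simulated_reward(arm: int, x: np.ndarray, rng: np.random.Generator) -> float:
    """Toy environment standing in for real user feedback."""
    true_theta = np.linspace(0.1, 0.5, DIM) * (arm + 1) / N_ARMS
    return float(rng.random() < min(0.9, true_theta @ x))


rng = np.random.default_rng(0)
bandit = LinearTS(N_ARMS, DIM)

# Pretrain phase: replay simulated (context, arm, reward) triples as prior knowledge.
for _ in range(2000):
    x = rng.random(DIM)
    arm = int(rng.integers(N_ARMS))            # offline data covers all arms
    bandit.update(arm, x, simulated_reward(arm, x, rng))

# Online fine-tuning phase: act on posterior samples and keep learning from feedback.
conversions = 0
for t in range(1000):
    x = rng.random(DIM)
    arm = bandit.select(x, rng)
    r = simulated_reward(arm, x, rng)
    bandit.update(arm, x, r)
    conversions += r
print(f"Online conversions after warm start: {conversions:.0f}/1000")
```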

Second Experiment: A Choice-Based Conjoint Analysis with Real-world Data

To generate virtual users and responses:

```
cd conjoint_simulation
python generate_counterfactuals.py
```
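For readers unfamiliar with choice-based conjoint data, the sketch below shows the kind of counterfactual response a virtual respondent can produce: given two product profiles described by attribute levels, the respondent picks one according to a noisy (logit) utility comparison. The attributes, part-worth values, and choice rule are illustrative assumptions, not the contents of `generate_counterfactuals.py`.

```python
# Hypothetical choice-based conjoint sketch: a virtual respondent chooses between two profiles.
# Attribute levels and part-worth utilities are made-up values for illustration only.
import math
import random

PART_WORTHS = {                       # assumed per-level utilities
    "price": {"$10": 0.8, "$20": 0.2, "$30": -0.5},
    "brand": {"A": 0.4, "B": 0.0},
    "warranty": {"1yr": 0.0, "3yr": 0.6},
}


def utility(profile: dict) -> float:
    """Additive utility of one profile under the assumed part-worths."""
    return sum(PART_WORTHS[attr][level] for attr, level in profile.items())


def choose(profile_a: dict, profile_b: dict, rng: random.Random) -> int:
    """Logit choice: return 0 for profile_a, 1 for profile_b."""
    ua, ub = utility(profile_a), utility(profile_b)
    p_a = 1.0 / (1.0 + math.exp(ub - ua))
    return 0 if rng.random() < p_a else 1


rng = random.Random(0)
a = {"price": "$10", "brand": "B", "warranty": "1yr"}
b = {"price": "$20", "brand": "A", "warranty": "3yr"}
print("Chosen profile:", choose(a, b, rng))
```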

To pretrain contextual bandit model and generate plots:

```
cd conjoint_simulation
python conjoint-pretrained-bandit.py -r 10
```