This repo contains the code for fine-tuning and evaluating the Republican and Democrat Community GPT-2 models. We also release the two models on the HuggingFace Model Hub; they are fine-tuned on 4.7M tweets (~100M tokens) from Republican and Democratic Twitter users, respectively, posted between 2019-01-01 and 2020-04-10. Details are described in our paper, CommunityLM: Probing Partisan Worldviews from Language Models.
If you use this repository in your research, please kindly cite our paper:
@inproceedings{jiang-etal-2022-communitylm,
    title = "{C}ommunity{LM}: Probing Partisan Worldviews from Language Models",
    author = "Jiang, Hang and
      Beeferman, Doug and
      Roy, Brandon and
      Roy, Deb",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.593",
    pages = "6818--6826",
    abstract = "As political attitudes have diverged ideologically in the United States, political speech has diverged linguistically. The ever-widening polarization between the US political parties is accelerated by an erosion of mutual understanding between them. We aim to make these communities more comprehensible to each other with a framework that probes community-specific responses to the same survey questions using community language models CommunityLM. In our framework we identify committed partisan members for each community on Twitter and fine-tune LMs on the tweets authored by them. We then assess the worldviews of the two groups using prompt-based probing of their corresponding LMs, with prompts that elicit opinions about public figures and groups surveyed by the American National Election Studies (ANES) 2020 Exploratory Testing Survey. We compare the responses generated by the LMs to the ANES survey results, and find a level of alignment that greatly exceeds several baseline methods. Our work aims to show that we can use community LMs to query the worldview of any group of people given a sufficiently large sample of their social media discussions or media diet.",
}
pip install git+https://github.com/huggingface/transformers
pip install -r train_lm/requirements.txt
See more on our HuggingFace Model Hub Page.
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("CommunityLM/republican-twitter-gpt2")
model = AutoModelForCausalLM.from_pretrained("CommunityLM/republican-twitter-gpt2")
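Once loaded, you can sample completions for an opinion prompt to hear a community's "voice". The snippet below is a minimal sketch: the prompt and decoding settings (sampling, top-k, output length) are illustrative assumptions of ours, not the exact configuration used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CommunityLM/republican-twitter-gpt2")
model = AutoModelForCausalLM.from_pretrained("CommunityLM/republican-twitter-gpt2")
model.eval()

# Illustrative ANES-style opinion prompt (an assumption, not the paper's exact prompt)
prompt = "Joe Biden is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,            # sample to get diverse community voices
        top_k=50,                  # illustrative decoding settings
        max_new_tokens=30,
        num_return_sequences=5,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
    )

for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

The same code works for the other community model by swapping in CommunityLM/democrat-twitter-gpt2.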
Check ./train_lm/train_gpt2.sh for fine-tuning GPT-2 on community data; a sketch of what this step involves follows.
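To give a feel for the fine-tuning step before reading the script, the sketch below fine-tunes GPT-2 on a plain-text file of tweets with the Hugging Face Trainer. The file name, hyperparameters, and output directory are placeholders of ours, not the settings in train_gpt2.sh; consult the script for the actual configuration.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "tweets.txt" is a placeholder: one tweet per line
dataset = load_dataset("text", data_files={"train": "tweets.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="community-gpt2",   # placeholder output path
        num_train_epochs=1,            # illustrative hyperparameters
        per_device_train_batch_size=8,
    ),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```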
Check inference/evaluate_community_models.sh for generating community voices and aggregating the stance.
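Conceptually, this step samples many completions per opinion prompt from each community model and aggregates a stance over the generations. The sketch below uses an off-the-shelf sentiment classifier and a small sample count purely for illustration; the actual models, prompts, and aggregation logic are defined in the released scripts.

```python
from transformers import pipeline

# Placeholder sentiment classifier; the scripts define the actual one
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
generator = pipeline("text-generation", model="CommunityLM/republican-twitter-gpt2")

prompt = "Joe Biden is"  # illustrative opinion prompt
generations = generator(
    prompt,
    do_sample=True,
    max_new_tokens=30,
    num_return_sequences=20,  # small sample for the demo
    pad_token_id=50256,       # GPT-2 eos token id
)

texts = [g["generated_text"] for g in generations]
labels = [r["label"] for r in sentiment(texts)]

# Aggregate stance as the fraction of positive generations
positive_rate = labels.count("POSITIVE") / len(labels)
print(f"Positive rate for '{prompt}': {positive_rate:.2f}")
```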
Check inference/notebooks/evaluate_communitylm.ipynb for evaluating the models on the predictions produced in the inference step. This notebook also contains the code to reproduce the ranking plots.
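At the core of that comparison is rank agreement between model-derived favorability and the ANES survey results. The toy snippet below shows the idea with a Spearman rank correlation; the targets and scores are made-up placeholders, not results from the paper or the notebook.

```python
from scipy.stats import spearmanr

# Made-up favorability scores for a few survey targets (placeholders only)
model_scores = {"Figure A": 0.62, "Figure B": 0.35, "Figure C": 0.48}
survey_scores = {"Figure A": 0.58, "Figure B": 0.30, "Figure C": 0.51}

targets = list(model_scores)
rho, p = spearmanr(
    [model_scores[t] for t in targets],
    [survey_scores[t] for t in targets],
)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```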
CommunityLM is a research program from the MIT Center for Constructive Communication (@mit-ccc) and the MIT Media Lab. We are devoted to developing socially aware language models for community understanding and constructive dialogue. The main contributor to this repository is Hang Jiang (@hjian42).