CommunityLM: Probing Partisan Worldviews from Language Models

License: MIT | Paper: arXiv

This repo contains the code for fine-tuning and evaluating the Republican and Democrat community GPT-2 models. We also release both models on the HuggingFace Model Hub; each is fine-tuned on tweets authored by committed partisan Twitter users between 2019-01-01 and 2020-04-10 (the Republican model, for example, on 4.7M tweets, ~100M tokens). Details are described in CommunityLM: Probing Partisan Worldviews from Language Models.

References

If you use this repository in your research, please cite our paper:

@inproceedings{jiang-etal-2022-communitylm,
    title = "{C}ommunity{LM}: Probing Partisan Worldviews from Language Models",
    author = "Jiang, Hang  and
      Beeferman, Doug  and
      Roy, Brandon  and
      Roy, Deb",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.593",
    pages = "6818--6826",
    abstract = "As political attitudes have diverged ideologically in the United States, political speech has diverged linguistically. The ever-widening polarization between the US political parties is accelerated by an erosion of mutual understanding between them. We aim to make these communities more comprehensible to each other with a framework that probes community-specific responses to the same survey questions using community language models (CommunityLM). In our framework we identify committed partisan members for each community on Twitter and fine-tune LMs on the tweets authored by them. We then assess the worldviews of the two groups using prompt-based probing of their corresponding LMs, with prompts that elicit opinions about public figures and groups surveyed by the American National Election Studies (ANES) 2020 Exploratory Testing Survey. We compare the responses generated by the LMs to the ANES survey results, and find a level of alignment that greatly exceeds several baseline methods. Our work aims to show that we can use community LMs to query the worldview of any group of people given a sufficiently large sample of their social media discussions or media diet.",
}

Installation

pip install git+https://github.com/huggingface/transformers
pip install -r train_lm/requirements.txt

How to use the left and right CommunityLM models from HuggingFace

See more on our HuggingFace Model Hub Page.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CommunityLM/republican-twitter-gpt2")

model = AutoModelForCausalLM.from_pretrained("CommunityLM/republican-twitter-gpt2")
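
Once the tokenizer and model are loaded as above, the checkpoint can be prompted like any GPT-2 model. Below is a minimal generation sketch; the prompt and decoding settings are illustrative rather than the paper's exact configuration.

# Continuing from the snippet above: sample a few community "voices" for an opinion prompt.
inputs = tokenizer("Donald Trump is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=30,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))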

CommunityLM Framework

Training

Check ./train_lm/train_gpt2.sh for fine-tuning GPT-2 on community data.
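
For orientation, the sketch below shows one way to fine-tune GPT-2 on community tweets with the HuggingFace Trainer. It is an illustration only: the data path is hypothetical, and the actual hyperparameters and preprocessing are the ones defined in train_lm/train_gpt2.sh.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One tweet per line; the path is a placeholder.
dataset = load_dataset("text", data_files={"train": "data/republican_tweets.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="models/republican-twitter-gpt2",
        per_device_train_batch_size=8,
        num_train_epochs=3,
    ),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()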

Inference

Check inference/evaluate_community_models.sh for generating community voices and aggregating their stances.
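
Conceptually, the inference step samples continuations ("community voices") for ANES-style opinion prompts and aggregates a stance over them. The sketch below illustrates that idea with generation and sentiment pipelines; the prompt, sampling settings, and sentiment classifier are assumptions, not the repo's exact configuration.

from transformers import pipeline

generator = pipeline("text-generation", model="CommunityLM/republican-twitter-gpt2")
# Sentiment model chosen for illustration; the script may use a different classifier.
sentiment = pipeline("sentiment-analysis",
                     model="cardiffnlp/twitter-roberta-base-sentiment")

prompt = "Joe Biden is"  # illustrative ANES-style opinion prompt
voices = generator(prompt, do_sample=True, max_new_tokens=30, num_return_sequences=5)

# Aggregate the community stance as the share of positive continuations.
positive = 0
for v in voices:
    label = sentiment(v["generated_text"])[0]["label"]
    positive += label in ("POSITIVE", "LABEL_2")  # LABEL_2 = positive for the model above
print(f"Share of positive community voices: {positive / len(voices):.2f}")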

Evaluation

Check inference/notebooks/evaluate_communitylm.ipynb for evaluating the models on the predictions produced in the inference step. This notebook also contains the code to reproduce the ranking plots.
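
The evaluation compares the model-derived favorability ranking of public figures and groups with the ranking from the ANES 2020 survey. The sketch below illustrates one such comparison using rank correlations; the metric choice, names, and scores are placeholders rather than the notebook's actual contents.

from scipy.stats import kendalltau, spearmanr

# Placeholder favorability scores: ANES survey averages vs. aggregated model stances.
anes_scores  = {"Figure A": 62.0, "Figure B": 48.5, "Figure C": 35.1}
model_scores = {"Figure A": 0.71, "Figure B": 0.52, "Figure C": 0.30}

names = sorted(anes_scores)
rho, _ = spearmanr([anes_scores[n] for n in names], [model_scores[n] for n in names])
tau, _ = kendalltau([anes_scores[n] for n in names], [model_scores[n] for n in names])
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")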

Acknowledgement

CommunityLM is a research program from the MIT Center for Constructive Communication (@mit-ccc) and the MIT Media Lab. We are devoted to developing socially aware language models for community understanding and constructive dialogue. This repository is mainly maintained by Hang Jiang (@hjian42).
