
Leveraging Large Language Models to Identify the Values Behind Arguments.


Repository containing the experiments conducted for the paper:

Leveraging Large Language Models to Identify the Values Behind Arguments, Rithik Appachi Senthilkumar, Amir Homayounirad, Luciano Cavalcante Siebert

Status: Accepted at the VECOMP 2024 workshop, affiliated with the 27th European Conference on Artificial Intelligence (ECAI 2024).

Abstract: Human values capture what people and societies perceive as desirable, transcend specific situations and serve as guiding principles for action. People’s value systems motivate their positions on issues concerning the economy, society and politics among others, influencing the arguments they make. Identifying the values behind arguments can therefore help us find common ground in discourse and uncover the core reasons behind disagreements. Transformer-based large language models (LLMs) have exhibited remarkable performance across language generation and analysis. However, leveraging LLMs in sociotechnical systems that assist with discourse and argumentation necessitates systematically evaluating their ability to analyse and identify the values behind arguments, an under-explored research direction. Using a multi-level human value taxonomy inspired by the Schwartz Theory of Basic Human Values, we present a systematic and critical evaluation of GPT-3.5-turbo in human value identification from a dataset of multi-cultural arguments, across the zero-shot, few-shot and chain-of-thought prompting strategies, carrying forward from prior research on this task which leveraged a fine-tuned BERT model. We observe that prompting strategies exhibit performance levels close to, but still behind, fine-tuning for value classification. We also detail some challenges associated with value classification with LLMs, offering potential directions for future research.

This repository contains our experimental workflow and results.
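As a rough illustration of the zero-shot prompting setup described in the abstract, the sketch below queries GPT-3.5-turbo through the OpenAI chat completions API and asks it to name the value categories behind an argument. The value labels, prompt wording, and the `identify_values` helper are assumptions made for illustration only; they are not necessarily the label set, prompts, or scripts used in the experiments in this repository.

```python
# Minimal zero-shot value-identification sketch (illustrative; not the
# repository's actual experiment code).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Assumed subset of Schwartz-inspired value categories, for illustration.
VALUE_CATEGORIES = [
    "Self-direction", "Stimulation", "Hedonism", "Achievement", "Power",
    "Security", "Conformity", "Tradition", "Benevolence", "Universalism",
]

def identify_values(premise: str, stance: str, conclusion: str) -> str:
    """Ask the model which value categories an argument draws on."""
    prompt = (
        "Given the following argument, list which of these human value "
        f"categories it draws on: {', '.join(VALUE_CATEGORIES)}.\n\n"
        f"Premise: {premise}\n"
        f"Stance: {stance}\n"
        f"Conclusion: {conclusion}\n\n"
        "Answer with the category names only."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for evaluation
    )
    return response.choices[0].message.content

# Example argument in premise/stance/conclusion form, a common format in
# value-identification datasets.
print(identify_values(
    premise="Fast food contributes to public health costs.",
    stance="in favor of",
    conclusion="We should tax fast food.",
))
```

Few-shot and chain-of-thought variants follow the same pattern, with worked examples or explicit reasoning instructions added to the prompt before the target argument.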
