
skmur/many-wolves


Using cognitive models to reveal value trade-offs in language models

Sonia K. Murthy, Rosie Zhao, Jennifer Hu, Sham Kakade, Markus Wulfmeier, Peng Qian, Tomer Ullman

Value trade-offs are an integral part of human decision-making and language use; however, current tools for interpreting such dynamic and multi-faceted notions of value in LLMs are limited. In cognitive science, so-called "cognitive models" provide formal accounts of such trade-offs in humans by modeling the weighting of a speaker's competing utility functions in choosing an action or utterance. Here we use a leading cognitive model of polite speech to systematically evaluate value trade-offs in two encompassing model settings: degrees of reasoning "effort" in frontier black-box models, and RL post-training dynamics of open-source models. Our results highlight patterns of higher informational utility than social utility in reasoning models' default behavior, and demonstrate that these patterns shift in predictable ways when models are prompted to prioritize certain goals over others. Our findings on LLMs' training dynamics suggest large shifts in utility values early in training, with persistent effects of the choice of base model and pretraining data compared to the feedback dataset or alignment method. Our framework offers a flexible tool for probing value trade-offs across diverse model types, providing insights for generating hypotheses about other social behaviors such as sycophancy and for shaping training regimes that better control trade-offs between values during model development.


Setup

uv sync  # Install dependencies

Pipeline

1. Generate Vignettes

uv run python task-data/compile-vignettes.py

2. Run Experiments

# HuggingFace models
uv run python src/run-experiments.py --model_type="hf" --model_name="llama-instruct" --model_path="meta-llama/Meta-Llama-3.1-8B-Instruct" --task="politeness" --batch_size=64

# API models  
uv run python src/run-experiments.py --model_type="api" --model_name="anthropic-chat" --model_path="claude-3-5-sonnet-20241022" --task="politeness" --api_key="YOUR_KEY"

3. Process Outputs

uv run python src/process-llm-output.py --model_type="api" --task="politeness"
uv run python src/process-llm-output.py --model_type="hf" --task="politeness"

4. RSA Model Fitting

# Stan model fitting (see rsa-model/stan_model/)
uv run python rsa-model/stan_model/fit_speaker2_model.py

5. Generate Plots

uv run python analysis/plot_hf_model_fit_rs.py --presave
uv run python analysis/plot_api_model_fit_rs.py --presave
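
The five steps above can also be chained from a single driver. The sketch below is not part of the repository: the command lines are copied verbatim from the HF path shown above, and `dry_run=True` prints each command instead of executing it (set it to `False` inside the repo to actually run the pipeline).

```python
# Sketch of a driver chaining pipeline steps 1-5 for the HF path.
# Not part of the repository: commands are copied from the steps above;
# dry_run=True prints each command instead of executing it.
import shlex
import subprocess

STEPS = [
    # 1. Generate vignettes
    "uv run python task-data/compile-vignettes.py",
    # 2. Run experiments (HF example from above)
    "uv run python src/run-experiments.py --model_type=hf "
    "--model_name=llama-instruct "
    "--model_path=meta-llama/Meta-Llama-3.1-8B-Instruct "
    "--task=politeness --batch_size=64",
    # 3. Process outputs
    "uv run python src/process-llm-output.py --model_type=hf --task=politeness",
    # 4. RSA model fitting
    "uv run python rsa-model/stan_model/fit_speaker2_model.py",
    # 5. Generate plots
    "uv run python analysis/plot_hf_model_fit_rs.py --presave",
]

def run_pipeline(dry_run=True):
    """Run each step in order, stopping on the first failure."""
    for cmd in STEPS:
        if dry_run:
            print("+", cmd)
        else:
            subprocess.run(shlex.split(cmd), check=True)

run_pipeline()  # dry run: prints the five commands in order
```

For the API path, swap in the `--model_type="api"` command from step 2 and the matching `process-llm-output.py` and plotting calls.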

Folder Structure

  • src/ - Experiment scripts
  • analysis/ - Plotting and analysis scripts
  • task-data/ - Vignette generation
  • llm-output/ - Raw model outputs
  • rsa-output/ - RSA model results and plots
  • rsa-model/ - Stan model code
  • plots/ - Generated figures
