Prejudice Modeling Competition

Welcome to the prejudice modeling competition!


We recently published a paper developing two semi-formal, predictive models of different operationalizations of prejudice (Hehman & Neel, 2024).

They are:

Bias = B0 + 0.12(Threat_g) + 0.17(Threat_s) − 0.21(Contact_Q) − 0.12(Contact_N) + 0.53(Identification) + e

and:

Outgroup attitudes = B0 + 0.24(Threat_s) − 0.46(Contact_Q) − 0.12(Contact_N) − 0.13(Agreeable) + e
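To make the published equations concrete, here is a minimal sketch of the Bias model as a linear predictor. The coefficients are those shown above; the intercept (b0) and the example predictor values are hypothetical placeholders, and the error term is omitted since it is not part of a point prediction.

```python
# Coefficients from the Bias model above (Hehman & Neel, 2024).
COEFS = {
    "Threat_g": 0.12,
    "Threat_s": 0.17,
    "Contact_Q": -0.21,
    "Contact_N": -0.12,
    "Identification": 0.53,
}

def predict_bias(predictors, b0=0.0):
    """Return the model's predicted bias score (error term omitted).

    b0 is a hypothetical intercept; pass the estimated value if known.
    """
    return b0 + sum(COEFS[name] * value for name, value in predictors.items())

# Made-up standardized predictor values for one respondent:
example = {
    "Threat_g": 1.0,
    "Threat_s": 0.5,
    "Contact_Q": -0.5,
    "Contact_N": 0.0,
    "Identification": 1.0,
}
print(round(predict_bias(example), 3))  # 0.84
```

The second model works the same way with its own coefficient set; competition entries would presumably replace this fixed dictionary with whatever functional form the team fits.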

Paper for more detail: [here]; non-paywalled preprint [here]

The models are rudimentary, explaining ~40-60% of the variance. This competition is designed to develop superior models and advance the field.

We (Travis Lim, Eric Hehman, Becca Neel) have collected new data from a large sample of participants reporting their prejudices toward a variety of groups. We will give you 67% of that data, on which you and your team will build a model of prejudice. We will test your final submitted models on the remaining 33% of hold-out data and determine the winners based on several fit metrics.
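The evaluation protocol described above (a 67/33 train/hold-out split, scored on fit metrics) can be sketched as follows. This is an illustration only: the actual split, seed, and metrics are the organizers' choice, and R² here stands in for whichever fit metrics they use; the function names are hypothetical.

```python
import random

def train_holdout_split(rows, train_frac=0.67, seed=42):
    """Shuffle and split rows into ~67% training and ~33% hold-out sets."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def r_squared(observed, predicted):
    """One common fit metric: proportion of variance explained (R^2)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Toy data standing in for the real survey responses:
rows = list(range(100))
train, holdout = train_holdout_split(rows)
print(len(train), len(holdout))  # 67 33
```

Teams would fit their model on `train` only; the organizers then compare each model's predictions against the held-out responses with metrics like the one above.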

The winning team(s) will receive a $2,000 cash prize, and all submitting teams will earn authorship on the resulting paper.

We are going to advertise for 1 month prior to sharing the data and beginning the competition. Teams will then have 2 months to build and submit their models.

YOUR JOB, if you are interested in participating, is to go [here] and enter your team name, members (limited to 3), and email. We will send you information about the data and, when the competition begins, the training data itself.

Please go [here] for more detailed information about the competition.