A Bayesian Analysis of a Multiple Choice Test

In a multiple choice test, examinees gain points based on the number of correct responses. This traditional grading, however, implicitly assumes that all questions on the test are replications of one another. We instead apply an item response theory (IRT) model to estimate students' abilities while accounting for each item's characteristics, using data from a midterm test. Our Bayesian logistic item response theory model relates the probability of a correct response to three parameters: one measuring the student's ability, and two measuring an item's difficulty and its discriminatory power. In this model the ability and discrimination parameters are not identifiable. To address this issue, we construct a hierarchical Bayesian model that nullifies the effects of non-identifiability. A Gibbs sampler is used for inference and to obtain posterior distributions of the three parameters. For a "nonparametric" approach, we implement the item response theory model using a Dirichlet process mixture model. This approach enables us to grade students and cluster them by "ability" automatically. Although the Dirichlet process mixture model has very good clustering properties, it suffers from expensive and complicated computations; a slice sampling algorithm is proposed to alleviate this burden. We apply our methodology to a real dataset from a multiple choice test in WPI's Applied Statistics I course (Spring 2012), illustrating how a student's ability relates to the observed scores.
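The two-parameter logistic IRT model described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the variable names are illustrative, and the final lines demonstrate the ability/discrimination non-identifiability the abstract mentions (rescaling ability and difficulty while inversely rescaling discrimination leaves the response probability unchanged).

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic IRT model: probability that a student
    with ability `theta` answers correctly an item with
    discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A student whose ability equals the item's difficulty has a 50%
# chance of a correct response, regardless of discrimination.
print(p_correct(0.0, 1.2, 0.0))  # 0.5

# Non-identifiability: scaling theta and b by c while scaling a by
# 1/c gives the same probability, so ability and discrimination
# cannot be separated without further (e.g. hierarchical) structure.
print(p_correct(1.0, 2.0, 0.5) == p_correct(2.0, 1.0, 1.0))  # True
```

In the thesis this likelihood is embedded in a hierarchical Bayesian model and sampled with a Gibbs sampler, which resolves the scaling ambiguity shown above through the priors.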

Language
  • English
Identifier
  • etd-042413-163846
Year
  • 2013
Date created
  • 2013-04-24


Permanent link to this page: https://digital.wpi.edu/show/2z10wq351