Date of Award

2017

Document Type

Open Access Thesis

Department

Psychology

Sub-Department

College of Arts and Sciences

First Advisor

Amanda Fairchild

Abstract

Despite the prevalence of factor analysis (FA) in both research and application, misinterpretation continues to pervade its use in the psychological community for the development and evaluation of psychometric tools. Fundamental measurement questions, such as the number of response alternatives needed and the power to detect poor model fit in non-normal or misspecified data, remain in need of further investigation. For example, the power of the chi-square statistic used in structural equation modeling decreases as the absolute value of the excess kurtosis of the observed data increases. This issue is compounded with discrete variables, where kurtosis increases as the number of item response categories is reduced; in these cases, the fit of a confirmatory factor analysis (CFA) model will improve as the number of response categories decreases, regardless of the true underlying factor structure or the χ²-based fit index used to examine model fit. Such artifacts have critical implications for the assessment of model fit as well as for validation efforts. To gain additional insight into this phenomenon, a simulation study was conducted to evaluate the impact of distributional non-normality, model misspecification, and model estimator on tests of model fit when the true factor structure is known. Results indicate that the effects of excess kurtosis and the number of scale categories are exacerbated by model misfit. We discuss the results and provide substantive recommendations. We also present an empirical example of how the number of response options affects dimensionality assessment through an evaluation of the Beck Hopelessness Scale (BHS).
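The following is a minimal sketch, not the thesis's actual simulation code, illustrating one mechanism the abstract describes: when a continuous indicator is coarsened into fewer ordinal response categories, the absolute excess kurtosis of the observed item grows. The factor loading of 0.7, the evenly spaced thresholds on the latent scale, and the random seed are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2017)

# Continuous indicator generated from a one-factor model: y = 0.7*f + e,
# scaled so that Var(y) = 1 (0.7 is an illustrative loading, not a value
# from the thesis).
n = 100_000
factor = rng.standard_normal(n)
y = 0.7 * factor + np.sqrt(1 - 0.7**2) * rng.standard_normal(n)

for k in (2, 3, 5, 7, 11):
    # k response categories via k-1 interior thresholds, evenly spaced
    # over [-2.5, 2.5] on the latent scale (an assumed placement).
    thresholds = np.linspace(-2.5, 2.5, k + 1)[1:-1]
    item = np.digitize(y, thresholds)   # ordinal codes 0 .. k-1
    excess = kurtosis(item)             # Fisher (excess) kurtosis
    print(f"{k:2d} categories: excess kurtosis = {excess:+.2f}")
```

Under these assumptions, the printed excess kurtosis is largest in magnitude for two categories (near -2.0) and shrinks toward zero as categories are added, which is the distributional distortion the abstract links to the reduced power of χ²-based tests of model fit.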

Rights

© 2017, Alexander G. Hall
