Meta-Analysis of Item Generation Procedures Used in Selected Standardised Tests in Education
Abstract
Meta-analysis is a systematic process that combines the results of multiple studies to build a comprehensive understanding of a research question, using statistical methods to synthesise the findings of different studies into a quantitative summary of the overall effect. This study analysed differences in the procedures used in studies validating standardised tests in education. A meta-analysis of 130 empirical studies on the validation of standardised educational tests published between 1988 and 2017, analysed using counts and percentages, revealed that authors rely predominantly on literature reviews for item generation (80%), followed by theory reviews (60%) and expert reviews (38%). For reliability, most authors used Guttman split-half and internal-consistency estimates, with alpha-if-item-deleted the most commonly reported method (86.2%). Construct validity was the primary form of validity evidence reported (89.2%). Bartlett's test of sphericity was reported chiefly to establish the factorability of the correlation matrix (75.4%). Most authors reported their factor-retention criteria (60.8%) and used the Unweighted Least Squares approach (51.5%) for factor extraction; the most commonly reported rotation technique was orthogonal (Varimax) (59.2%). In conclusion, the review provides a comprehensive overview of test validation, underscoring the importance of establishing validity for ensuring trustworthy and meaningful test results. Future research should continue to examine the complexities of test validation, seeking to improve practice and to develop more standardised approaches that ensure the reliability and validity of tests across various contexts.
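Two of the statistics most frequently reported in the reviewed studies, internal-consistency reliability (Cronbach's alpha) and Bartlett's test of sphericity, can be illustrated with a minimal Python sketch. This is not drawn from any of the 130 studies (whose software is not specified here); the data and function names are hypothetical, and the formulas are the standard textbook ones.

```python
import numpy as np
from scipy.stats import chi2

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def bartlett_sphericity(items):
    """Bartlett's test of sphericity.

    H0: the correlation matrix is an identity matrix, i.e. the items are
    uncorrelated and the data are unsuitable for factor analysis.
    Returns the chi-square statistic and its p-value.
    """
    items = np.asarray(items, dtype=float)
    n, p = items.shape
    corr = np.corrcoef(items, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

# Illustrative use on simulated data: five items driven by one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + 0.5 * rng.normal(size=(200, 5))

alpha = cronbach_alpha(scores)
stat, p_value = bartlett_sphericity(scores)
```

A small (near-zero) Bartlett p-value, as most of the reviewed studies reported, indicates the correlation matrix departs from identity and factor extraction is defensible; alpha-if-item-deleted simply repeats the alpha computation with each column dropped in turn.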

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.