Calls for empirical investigations of the Common Core State Standards (CCSS) for English Language Arts have been widespread, particularly in the area of text complexity in the primary grades (e.g., Hiebert & Mesmer, Educational Researcher, 42(1), 44–51, 2013). The CCSS state that qualitative methods (such as Fountas and Pinnell) and quantitative methods (such as Lexiles) can be used to gauge text complexity (CCSS Initiative, 2010). However, researchers have questioned the validity of these tools for several decades (e.g., Hiebert & Pearson, 2010). In an effort to establish the criterion validity of these tools, individual studies have compared how well they correlate with actual student reading performance measures, most commonly reading comprehension and/or oral-reading fluency (ORF). ORF is a key aspect of reading success and is therefore often used for progress monitoring. To date, however, it has not been possible to compare different text complexity tools and their relation to reading outcomes across studies, because the pairwise meta-analytic model cannot synthesize several independent variables that differ both within and across studies. It therefore cannot answer pressing research questions in education, such as: which text complexity tool is most strongly correlated with student ORF (and thus a good measure of text difficulty)? This question is timely given that the CCSS explicitly mention various text complexity tools, yet the validity of such tools has been repeatedly questioned by researchers. This article provides preliminary evidence toward answering that question using an approach borrowed from the field of medicine: Network Meta-Analysis (NMA; Lumley, Statistics in Medicine, 21, 2313–2324, 2002). A systematic search yielded 5 studies using 19 different text complexity tools with ORF as the measured reading outcome. Both frequentist and Bayesian NMAs were conducted to pool the correlations of each text complexity tool with students' ORF. Although the results differed slightly across the two approaches, there is preliminary evidence supporting the hypothesis that text complexity tools incorporating more fine-grained sub-lexical variables are more strongly correlated with student outcomes. While the results of this example cannot be generalized because of the small number of included studies, this article shows that NMA is a promising new analytic tool for synthesizing educational research.
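As a rough illustration of the pooling step described in the abstract (and not the authors' actual analysis), a frequentist NMA of correlations can be sketched in a few lines of Python. Everything in the sketch is invented for illustration: the study labels, tool names, correlations, and sample sizes are hypothetical. Each row pairs a study with a text complexity tool, the reported correlation r between that tool's ratings and students' ORF, and the study sample size n; the correlations are Fisher z-transformed and pooled with an inverse-variance-weighted meta-regression in which study fixed effects absorb between-study differences and the tool coefficients estimate each tool's pooled effect relative to a reference tool.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data (invented for illustration): one row per (study, tool)
# pair, with the correlation r between the tool's complexity ratings and
# students' ORF, and the study sample size n.
data = pd.DataFrame({
    "study": ["s1", "s1", "s2", "s2", "s3", "s3"],
    "tool":  ["Lexile", "FountasPinnell", "Lexile", "ATOS",
              "FountasPinnell", "ATOS"],
    "r":     [0.55, 0.62, 0.48, 0.51, 0.66, 0.58],
    "n":     [120, 120, 85, 85, 60, 60],
})

# Fisher z-transform: stabilizes the variance of a correlation, which is
# then approximately 1 / (n - 3).
data["z"] = np.arctanh(data["r"])
data["var_z"] = 1.0 / (data["n"] - 3)

# Frequentist NMA as an inverse-variance-weighted meta-regression:
# study fixed effects absorb between-study differences, so the tool
# coefficients are pooled contrasts relative to the reference tool
# (the first level alphabetically, here ATOS).
fit = smf.wls("z ~ C(study) + C(tool)", data=data,
              weights=1.0 / data["var_z"]).fit()
print(fit.params.filter(like="tool"))

In practice, analyses of this kind are usually run with dedicated NMA software (for example, the netmeta or gemtc packages in R), and the Bayesian variant mentioned in the abstract would place priors on the tool effects; this sketch only shows the basic structure of the network model.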
Annals of Dyslexia – Springer Journals
Published: Jul 27, 2019