If We Build It, Will They Learn? An Analysis of Students’ Understanding in an Interactive Game During and After a Research Project


Publisher
Springer Journals
Copyright
Copyright © The Author(s), under exclusive licence to Springer Nature B.V. 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
ISSN
2211-1662
eISSN
2211-1670
DOI
10.1007/s10758-022-09617-7

Abstract

Studies of educational games often treat them as “black boxes” (Black and Wiliam in Phi Delta Kappan 80:139–148, 1998; Buckley et al. in Int J Learn Technol 5:166–190, 2010; Buckley et al. in J Sci Educ Technol 13:23–41, 2010) and measure their effectiveness by exposing a treatment group of students to the game and comparing their performance on an external assessment to that of a control group taught the same material by some other method. This precludes the possibility of monitoring, evaluating, and reacting to the actions of individual students as they progress through the game. To do that, however, one must know what to look for, because superficial measures of success are unlikely to identify unproductive behaviors such as “gaming the system” (Baker in Philipp Comput J, 2011; Downs et al. in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, USA, 2010). The research reported here advances the ultimate goal of creating educational games that can provide real-time, meaningful feedback on the progress of their users, enabling teachers or the game itself to intervene in a timely manner. We present the results of an in-depth analysis of students’ actions in Geniventure, an interactive digital game designed to teach genetics to middle and high school students. Geniventure offers a sequence of challenges of increasing difficulty and records students’ actions as they progress. We analyzed the resulting log files, taking into account not only whether a student achieved a certain goal, but also the quality of the student’s performance on each attempt. Using this information, we quantified students’ performance and correlated it with their learning gain as estimated by scores on identical multiple-choice tests administered before and after exposure to Geniventure. This analysis was performed in classes taught by teachers who had participated in professional development as part of a research project.
A two-tailed paired-sample t-test of mean pre-test and post-test scores in these classes indicates a significant positive difference with a large effect size. Multivariate regression analysis of log data finds no correlation between students’ post-test scores and their performance on “practice” challenges that invite experimentation, but a highly significant positive correlation with performance on “assessment” challenges, presented immediately after the practice challenges, that require students to invoke relevant mental models. We repeated this analysis, with similar results, using a second group of classes led by teachers who implemented Geniventure on their own after the conclusion of, and with no support from, the research project.
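The statistical pipeline the abstract names can be sketched as follows. This is a minimal illustration with invented data, not the authors' actual analysis: the sample size, score distributions, and the `practice`/`assessment` predictor variables are all assumptions made for demonstration.

```python
# Sketch: two-tailed paired-sample t-test on pre/post scores, a paired
# Cohen's d effect size, and an ordinary least-squares fit of post-test
# scores against two challenge-performance predictors. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40                                        # hypothetical class size
pre = rng.normal(10, 2, n)                    # simulated pre-test scores
post = pre + rng.normal(2, 1.5, n)            # simulated learning gain

# Paired t-test (two-tailed by default) and paired Cohen's d
t_stat, p_value = stats.ttest_rel(post, pre)
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

# Multivariate regression: post-test score on two performance metrics.
# 'practice' is constructed to be unrelated to outcomes; 'assessment'
# is constructed to track them, mirroring the pattern the study reports.
practice = rng.normal(0, 1, n)
assessment = 0.5 * (post - post.mean()) + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), practice, assessment])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
```

With this simulated data the paired test comes out significant with a large effect size, and only the `assessment`-style predictor carries a clearly positive coefficient; in the actual study these quantities were of course computed from real pre/post tests and log-derived performance scores.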

Journal

"Technology, Knowledge and Learning"Springer Journals

Published: Aug 5, 2022

Keywords: Modeling; Science education; Logging; Assessment
