
CSF: Formative Feedback in Autograding


Publisher: Association for Computing Machinery
Copyright: © 2021 Association for Computing Machinery.
ISSN: 1946-6226
eISSN: 1946-6226
DOI: 10.1145/3445983

Abstract

Autograding systems are being increasingly deployed to meet the challenges of teaching programming at scale. Studies show that formative feedback can greatly help novices learn programming. This work extends an autograder, enabling it to provide formative feedback on programming assignment submissions. Our methodology starts with the design of a knowledge map, which is the set of concepts and skills that are necessary to complete an assignment, followed by the design of the assignment and that of a comprehensive test suite for identifying logical errors in the submitted code. Test cases are used to test the student submissions and learn classes of common errors. For each assignment, we train a classifier that automatically categorizes errors in a submission based on the outcome of the test suite. The instructor maps the errors to corresponding concepts and skills and writes hints to help students find their misconceptions and mistakes. We apply this methodology to two assignments in our Introduction to Computer Science course and find that the automatic error categorization has a 90% average accuracy. We report and compare data from two semesters, one semester when hints are given for the two assignments and one when hints are not given. Results show that the percentage of students who successfully complete the assignments after an initial erroneous submission is three times greater when hints are given compared to when hints are not given. However, on average, even when hints are provided, almost half of the students fail to correct their code so that it passes all the test cases. The initial implementation of the framework focuses on the functional correctness of the programs as reflected by the outcome of the test cases. In our future work, we will explore other kinds of feedback and approaches to automatically generate feedback to better serve the educational needs of the students.
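The abstract does not specify the classifier used to categorize errors. As a hedged illustration only (not the paper's actual implementation), one simple way to map a submission's test-suite outcome to an error category is to encode the outcomes as a binary pass/fail vector and match it against instructor-labelled vectors, then surface the hint the instructor wrote for that category. All category names, outcome vectors, and hints below are hypothetical.

```python
# Sketch of test-outcome-based error categorization (1-nearest-neighbour
# over pass/fail vectors). Hypothetical data; not the paper's method.

def hamming(a, b):
    """Number of positions where two pass/fail vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def classify(outcome, labelled_outcomes):
    """Return the error category whose labelled outcome vector is
    closest (in Hamming distance) to this submission's outcomes."""
    return min(labelled_outcomes,
               key=lambda cat: hamming(outcome, labelled_outcomes[cat]))

# Instructor-labelled outcome vectors (1 = test passed, 0 = failed)
# for some hypothetical error classes, plus a hint for each class.
labelled = {
    "off_by_one":      (1, 1, 0, 1, 0),
    "wrong_base_case": (0, 1, 1, 0, 0),
    "correct":         (1, 1, 1, 1, 1),
}
hints = {
    "off_by_one": "Check your loop bounds: is the last element processed?",
    "wrong_base_case": "Re-examine what your code returns for empty input.",
    "correct": "All tests pass.",
}

submission = (1, 1, 0, 1, 0)  # pass/fail outcome of one submission
category = classify(submission, labelled)
print(category, "->", hints[category])  # off_by_one -> Check your loop ...
```

A decision tree or similar standard classifier trained on many labelled submissions would serve the same role at scale; the nearest-neighbour rule here just makes the outcome-vector-to-category mapping concrete.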

Journal

ACM Transactions on Computing Education (TOCE), Association for Computing Machinery

Published: May 10, 2021

Keywords: Autograding
