Digital Library

Title:      A GENERATIVE MODEL FOR NEXT-STEP CODE PREDICTION TOWARD PROACTIVE SUPPORT
Author(s):      Daiki Matsumoto, Atsushi Shimada and Yuta Taniguchi
ISBN:      978-989-8704-72-6
Editors:      Demetrios G. Sampson, Dirk Ifenthaler and Pedro Isaías
Year:      2025
Edition:      Single
Keywords:      Programming Process, Learner Modeling, Context-Aware Code Generation
Type:      Full Paper
First Page:      197
Last Page:      204
Language:      English
Paper Abstract:      Predicting learner actions and intentions is crucial for providing personalized real-time support and early intervention in programming education. This approach enables proactive, context-aware assistance that is difficult for human instructors to deliver, by foreseeing signs of potential struggles and misconceptions, or by inferring a learner's understanding and coding intent through early prediction of their intended solution. Traditional frameworks such as Knowledge Tracing are limited to predicting student performance on subsequent tasks, and thus do not model a learner's progress within the current task. Existing approaches to single-task prediction primarily aim to generate the final submitted code. Consequently, the development of models capable of predicting the step-by-step evolution of the code within a single task remains underexplored. This paper proposes a generative model that traces the historical evolution of a learner's code to predict the next code snapshot in a task. Specifically, we develop a deep learning model that encodes "learner context" by feeding a time-series of code snapshots into an LSTM. This context, combined with the current code, is used to decode the next code. In our evaluation using real-world data collected from programming exercise classes, our model achieves a BLEU score of 0.639. The results confirm that incorporating learner context is essential for improving prediction accuracy, yielding up to a 7.5% improvement over a baseline model. We also identify effective model architectures and fine-tuning techniques that contribute to performance gains.
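The data flow the abstract describes (an LSTM summarizes the time-series of past code snapshots into a "learner context" vector, which is then concatenated with the current code representation and passed to a decoder) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the snapshot embeddings, dimensions, and the `TinyLSTM` class are hypothetical stand-ins, and the decoder itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM encoder: folds a sequence of snapshot embeddings
    into a single context vector (the final hidden state)."""
    def __init__(self, d_in, d_hidden):
        # one stacked weight matrix for the four gates (input, forget, cell, output)
        self.W = rng.standard_normal((4 * d_hidden, d_in + d_hidden)) * 0.1
        self.b = np.zeros(4 * d_hidden)
        self.d_hidden = d_hidden

    def run(self, xs):
        h = np.zeros(self.d_hidden)
        c = np.zeros(self.d_hidden)
        for x in xs:  # one LSTM step per past code snapshot
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h  # the "learner context" vector

# Hypothetical sizes: 5 past snapshots, 16-dim snapshot embeddings, 8-dim context.
# In the paper, snapshots would be embedded code states, not random vectors.
snapshots = rng.standard_normal((5, 16))
current_code = rng.standard_normal(16)

encoder = TinyLSTM(d_in=16, d_hidden=8)
context = encoder.run(snapshots)

# A decoder would condition on [context ; current code] to generate the next snapshot.
decoder_input = np.concatenate([context, current_code])
print(decoder_input.shape)
```

The key design point mirrored here is that the context is computed only from the snapshot history, so the decoder's prediction of the next code is conditioned jointly on where the learner has been (the context) and where they are now (the current code).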
   
