Large Language Model-Driven Teaching Reform for Foundational Computer Science Courses in Universities: A Case Study of the Course “Discrete Mathematics”
DOI: https://doi.org/10.62381/H251310
Author(s)
Xu Cheng*, Bingzhao Ma
Affiliation(s)
College of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan, Hubei, China
*Corresponding Author
Abstract
Foundational computer science courses, exemplified by Discrete Mathematics, are cornerstones of students’ academic progression in the field, yet they frequently pose considerable challenges for both teaching and learning. This paper examines the potential of Large Language Models (LLMs) to address these challenges and drive meaningful pedagogical reform. Through a case study of the Discrete Mathematics curriculum, we investigate the integration of LLMs across several instructional dimensions: efficient generation of course materials, personalized learning pathways tailored to individual student needs, engaging interactive exercises that foster deeper understanding, and innovative assessment strategies. We discuss the principal advantages of this LLM-driven approach, acknowledge its inherent limitations, and outline promising directions for future research and implementation in this educational domain.
Keywords
Large Language Models; Discrete Mathematics; Computer Science Education; Teaching Reform; Personalized Learning; Artificial Intelligence in Education