Analysis of the Application of Artificial Intelligence Large Language Models in Higher Education
DOI: https://doi.org/10.62381/H251504
Author(s)
Yongyi Lin*
Affiliation(s)
School of Media Technology, Communication University of China Nanjing, Nanjing, Jiangsu, China
*Corresponding Author
Abstract
With the rapid development of Artificial Intelligence (AI) technology, Large Language Models (LLMs) are gradually being integrated into all aspects of higher education. LLMs possess powerful knowledge processing and text generation capabilities. This article outlines the definition and development history of AI, as well as the concept, characteristics, and development status of LLMs. It then focuses on analyzing specific applications of LLMs in higher education. The study shows that LLMs can effectively improve teaching efficiency, personalize learning experiences, and raise the utilization of educational resources. At the same time, this study also examines the challenges in applying LLMs, such as safety hazards and over-reliance. Through comprehensive analysis, this paper aims to provide theoretical guidance and practical references for the effective application of AI LLMs in higher education, and to offer insights for the future development of educational technology.
Keywords
Artificial Intelligence; Large Language Models; Higher Education