Generative technologies in higher education - assessment of the current state, essential skills, and a proposal for a didactic method

Andrzej Wodecki

Abstract

This article proposes the application of generative technologies, specifically large language models, in higher education. While such technologies present novel opportunities, they also raise concerns, including potential cognitive degradation, job displacement, and intellectual property issues. The first section of this paper introduces the essential concepts and methods of generative technologies, along with a discussion of the competencies needed to fully harness their potential. The next section proposes an extension of conventional teaching methods, using the 'Artificial Intelligence in Business' course as an example. This proposed enhancement incorporates a review of student work outcomes by systems powered by large language models. The underlying didactic principles of the course, sample system reports, and an illustrative diagram of the teaching process are presented. The paper concludes by considering the potential advantages and challenges posed by these technologies in pedagogy, along with recommendations for future research.

Keywords: generative technologies, language models, knowledge management, teaching methodology, evaluation

References

  • Bulathwela, S., Muse, H., & Yilmaz, E. (2023). Scalable educational question generation with pre-trained language models. In N. Wang, G. Rebolledo-Mendez, N. Matsuda, O. C. Santos, & V. Dimitrova (Eds.), Artificial Intelligence in Education: 24th International Conference, AIED 2023, Tokyo, Japan (pp. 327-339). https://doi.org/10.1007/978-3-031-36272-9_27
  • Cai, Y., Mao, S., Wu, W., Wang, Z., Liang, Y., Ge, T., Wu, C., You, W., Song, T., Xia, Y., Tien, J., & Duan, N. (2023). Low-code LLM: Visual programming over LLMs. arXiv. https://doi.org/10.48550/arXiv.2304.08103
  • Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K., Chen, H., Yi, X., Wang, C., Wang, Y., Ye, W., Zhang, Y., Chang, Y., Yu, P. S., Yang, Q., & Xie, X. (2023). A survey on evaluation of large language models. arXiv. https://doi.org/10.48550/arXiv.2307.03109
  • Cheng, D., Huang, S., Bi, J., Zhan, Y., Liu, J., Wang, Y., Sun, H., Wei, F., Deng, D., & Zhang, Q. (2023). UPRISE: Universal prompt retrieval for improving zero-shot evaluation. arXiv. https://doi.org/10.48550/arXiv.2303.08518
  • Ge, T., Hu, Dong, L., Mao, S., Xia, Y., Wang, X., Chen, S.-Q., & Wei, F. (2022). Extensible prompts for language models. arXiv. https://doi.org/10.48550/arXiv.2212.00616
  • Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., ... Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. https://doi.org/10.35542/osf.io/5er8f
  • Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepano, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), e0000198. https://doi.org/10.1371/journal.pdig.0000198
  • MetaAI. (2023, February 24). Introducing LLaMA: A foundational, 65-billion-parameter language model. https://ai.meta.com/blog/large-language-model-llama-meta-ai
  • Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback. arXiv. https://doi.org/10.48550/arXiv.2203.02155
  • Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F., Aslanides, J., Henderson, S., Ring, R., Young, S., Rutherford, E., Hennigan, T., Menick, J., Cassirer, A., Powell, R., Driessche, G. van den, Hendricks, L. A., Rauh, M., Huang, P.-S., ... Irving, G. (2022). Scaling language models: Methods, analysis and insights from training Gopher. arXiv. https://doi.org/10.48550/arXiv.2112.11446
  • The Vicuna Team. (2023, March 30). Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality. https://lmsys.org/blog/2023-03-30-vicuna

About the article

DOI: https://doi.org/10.15219/em100.1617

The article appears in the printed version on pages 51-60.


How to cite

Wodecki, A. (2023). Technologie generatywne w szkolnictwie wyższym - diagnoza sytuacji, przydatne kompetencje i propozycja metody. e-mentor, 3(100), 51-60. https://doi.org/10.15219/em100.1617