A Linguistically Developed Prompt Engineering Parameters Model for Enhancing AI’s Generation of Customized ESL Reading Texts

Document Type : Original Article

Author

An Assistant Professor of Linguistics & Translation – Dept. of English Language – Faculty of Education – Alexandria University, Egypt.

Abstract

The rapid advancement of artificial intelligence (AI), especially Large Language Models (LLMs), now offers unprecedented opportunities for creating personalized learning experiences in education, including the field of teaching English as a Second Language (ESL). The present study investigates the integration of prompt engineering with linguistic theories as a means of guiding LLMs to produce customized ESL reading material that is both linguistically accurate and pedagogically sound, addressing the diverse needs of ESL learners through a linguistically informed, user-friendly model of prompt parameters. Through this model, the study seeks to benefit educators regardless of their experience with prompt engineering and generative AI. The theoretical framework of the study rests on a comprehensive model of prompt engineering that integrates elements from three well-known linguistic theories: Transformational Generative Grammar, Systemic Functional Linguistics, and Global Englishes, along with basic prompt engineering elements. Using a mixed-method approach combining quantitative and qualitative analysis, the study evaluates the effectiveness of this model. To test it, six reading texts at different levels of the Common European Framework of Reference for Languages (CEFR) are generated by an LLM chatbot, Microsoft Copilot. These texts serve a variety of purposes and span different genres. The readability scores of the generated texts are analysed using a combination of three metrics, and a detailed qualitative analysis of each text is also undertaken. Together, these analyses reveal a general alignment between the texts and the targeted CEFR levels, as well as adherence to the elements of the employed linguistic theories as requested in the prompt devised for each generated text.
This demonstrates the developed model's effectiveness in enhancing the AI's ability to produce reading material responsive to the diverse language levels and needs of ESL learners, thereby contributing to more suitable learning experiences within ESL pedagogy and endorsing the integration of generative AI with linguistic theories to help teachers meet those needs.

Keywords

