Artificial intelligence-driven technologies and software programmes are now everywhere, affecting our thinking, practices, and behaviours. In higher education, it is almost impossible to ignore artificial intelligence (AI). AI tools are now used in teaching, research, and administrative work in academia as much as, if not more than, in other workplaces. In this article, I would like to address teaching and learning in academia in the AI age.
In academia, the main concern for different stakeholders is whether to use AI tools at all and, if so, to what extent. We can consider the following three positions regarding the use of AI in academia, especially for teaching and writing:
- Ban it fully and do not allow anyone, especially students, to use it;
- Leave it to professors and students to decide how to use it; or
- Use it responsibly and ethically.
The Case for Banning AI
The first stance sees problems with the use of AI tools. These concerns mostly relate to cheating (at worst) and to harming students’ creative and critical thinking (at best). In addition, people in this group raise privacy concerns: companies producing AI tools collect a great deal of information from users. These companies usually require users to register with identifiable information, such as an email address, which means their usage can be tracked.
Another point of concern is that such tools are not always trustworthy; they may fabricate academic information. In an assignment in which I asked my students to have ChatGPT-3 produce references related to their topics, almost 95 percent of the cases included fake or inaccurate sources. The references either did not exist or, when they did, were inaccurate with respect to the authors’ names, date of publication, journal name, etc.
Another issue raised by people in the first group is that such tools reproduce the biases present in the original texts used to train them and are not up-to-date. This viewpoint implies that we must have strong policies in place that prevent the use of AI tools, with specific penalties. This group believes we need to impose restrictions, at least in the short term, until there are principled and regulated ways of using such tools.
The Case for No Restrictions
The second stance goes to the opposite extreme, viewing any ban as both impossible and undesirable. Holders of this view regard AI tools as being like any other educational tools and believe there is no need to place any limitations on their use. They argue that in the long term, uses will be regulated autonomously. They also argue that since we were unable to restrict tools and facilities like calculators, computers, and the internet, we will not be able to restrict the use of AI tools either. They refer to sources like Jey Willmore, of the National Science Foundation in the United States, who has argued that AI is transforming how students learn to engage with the world around them and use new technologies to create solutions to real problems. Thus, they conclude, we must not restrict the use of AI tools and should even encourage their use, given that their benefits outweigh their harms.
A Third Approach
The third view considers the creative, critical, responsible, and ethical use of AI tools the optimal plan. The idea is that depriving academia of new educational tools will leave us behind. At the same time, this group argues that opening the scene to any and all uses of AI tools will be harmful, because it will lead to anarchy, with no guarantee that the outcomes will support effective learning.
The solution this group puts forth is to allow responsible and ethical use of such tools. They believe that the question is not whether to use AI tools, but how their use can benefit classrooms, including students and teachers, and what methods are best to inspire, engage, and teach future generations about AI. They believe it is not the time to panic; it is the time to update our course syllabi and assignments to incorporate such tools and the value we can offer students, and to talk to students about responsible and ethical uses of AI tools.
There is indeed no one-size-fits-all approach to responsible and ethical AI use in our classes. It is thus essential that the different stakeholders in each context foster a culture of responsible AI use among students and professors.
AI Affordances and Uses
Taking the third perspective as a plausible approach, there are different ways in which professors and students can use AI tools responsibly and ethically. I will briefly present some of these uses. Before that, we need to know what AI tools like ChatGPT can do in the context of universities. The following are some examples:
- Write essays.
- Write lesson plans.
- Design an outline for a course syllabus.
- Write policies for a class syllabus.
- Write learning objectives for specific courses.
- Design quiz/test questions.
- Write a script for a podcast or video.
- Design a rubric for assessment tasks.
- Provide directions for a learning activity.
- Write emails.
- Take notes on the text they are provided with.
- Provide a revised version of a text with improved grammar and spelling.
- Produce graded texts with specific levels of vocabulary for student reading.
- Write creatively, such as composing adventure stories.
With everyday advances in AI technology, many other things will be added to the above list.
There are creative ways for professors to use AI tools, for example, in redesigning their assessment tasks. Torrey Trust, a professor of learning technology at the University of Massachusetts at Amherst, suggests in a resource called “ChatGPT & Education” that instead of conventional essay assignments, we might now think about multimodal, higher-order thinking and learning activities, challenge-based learning, “Shark Tank” in the classroom, experiential learning, and makerspaces. The idea is to prepare assignments that cannot be fully completed by AI tools, although such tools might be used in critical and responsible ways.
Another example that Trust provides is to ask AI tools like ChatGPT to explain a concept to a 5-year-old, a college student, and an expert. Students in a class can then be asked to analyse and discuss the three resulting texts in terms of their communicative features, comprehensibility, and linguistic characteristics. Trust provides many additional sources that professors could use.
In a 2022 article titled “Update Your Course Syllabus for ChatGPT”, Ryan Watkins, a professor in educational technology leadership at George Washington University, in Washington, D.C., provides 10 ideas for creative assignments with ChatGPT that can be adapted to classrooms. There are other sources for using AI tools for course syllabus design. For example, readers are encouraged to check Boris Steipe’s Sentient Syllabus Project.
Consider the use of AI tools by students when writing, for example, essays. Writing source-based essays requires the following skills of students:
- Research skills (searching in different databases and identifying relevant materials).
- Domain knowledge (developing knowledge about a specific field or concept).
- Critical thinking (critically reviewing the materials and selecting appropriate ones for inclusion in the essay).
- Writing skills (synthesising the excerpts from different sources to develop a coherent essay).
- Language/Communication skills (deciding on the appropriate use of language and communicating the points in clear and effective ways).
No matter what AI tools like ChatGPT offer to users (in this case students), the above skills are still needed, even if we spend less time developing them. Toby Walsh, a professor of artificial intelligence at the University of New South Wales, in Australia, notes that education is an area where AI has much to offer. In an article for The Conversation titled “A Year of ChatGPT: 5 Ways the AI Marvel Has Changed the World”, he writes that “large language models such as ChatGPT can, for example, be fine-tuned into excellent Socratic tutors”. Intelligent tutoring systems, he adds, “can be infinitely patient when generating precisely targeted revision questions”.
Torrey Trust and Robert Maloy, a colleague of hers in the College of Education at UMass-Amherst, have argued that AI writing tools can free students from spending time trying to find basic, textbook-style information online (and potentially getting lost in the process) so they can spend more time thinking like historians and acting like writers. (See “Teaching History/Social Studies in the Era of AI Writing Tools”, a guest post by Trust and Maloy on a blog maintained by Rachelle Dené Poth, a U.S.-based education technology consultant, author, and teacher.)
Of course, this does not mean that students should trust whatever any AI writing tool produces as true and credible information (see Torrey Trust on the inherent untrustworthiness of large language models). OpenAI, the designer of ChatGPT, admits that the tool may provide harmful, biased, misleading, and false information, especially when asked about anything that happened after 2021. This is because ChatGPT is not connected to the internet, and the data used to build it was collected before 2021.
This makes the need for AI literacy, in addition to digital and information literacy skills, and the responsible and ethical use of AI tools more important than ever. In addition to AI literacy, institutional policies for responsible and ethical uses of AI tools are also critical.
For example, the Czech scholar Tomas Foltynek and co-authors have provided recommendations to European universities on the ethical use of artificial intelligence in higher education. These recommendations (cited by Carmela de Maio in a paper analysing institutional responses to ChatGPT in Australia) include the need to develop clear definitions in academic integrity policies of what constitutes appropriate and inappropriate use of AI tools. Also, in Australia, the federal government’s Tertiary Education Quality and Standards Agency (TEQSA) has updated guidelines and resources on the use of artificial intelligence for all of the country’s institutions of higher education, based on advice from academic integrity experts.
As Trust and Maloy rightly observed, ultimately, AI writing tools, like ChatGPT, are “simply just tools. They are tools that present information. They will not replace teachers, but they might spark a rethinking of what teaching really is and can be.”
Sir Ken Robinson, who was an internationally known figure in creativity and education, stated in a 2017 interview with Martyn Newman, a clinical psychologist and author specialising in emotional intelligence and mindfulness, that humans have always engaged intimately with technology in innovative and creative ways. According to Robinson, technological tools have done two things: first, they have extended our reach, and second, they have extended our minds so that we can think about things differently.
Accordingly, and in Trust and Maloy’s language, teaching with interactive digital tools “can be a means of empowering students to creatively construct their own knowledge, experiences, and understandings of the world and to rethink and re-envision research and writing in the era of AI writing tools”.
Conclusions
What we can conclude is that AI is causing a paradigm shift in (higher) education. Every paradigm-shifting technology raises both opportunities and challenges, especially when it is adopted as quickly as AI has been. It is crucial to enhance our teaching and student learning using AI tools without compromising academic integrity.
We need to redefine teaching and learning in light of AI tools’ affordances, develop AI literacy, and use AI tools for teaching and learning in creative, critical, responsible, and ethical ways. To this end, universities and institutes of higher education must consider enhancing teaching and learning through AI literacy and practice. In addition to appropriate policymaking and recommendations (a top-down strategy), taking a bottom-up approach and developing communities of practice for using AI is imperative. Such communities of practice can monitor the advent of new AI tools, evaluate them, investigate their affordances for enhancing education (e.g., personalised learning), and disseminate their knowledge to foster an AI-literacy culture.
There seems to be a need for endeavours at three levels:
- Redefining teaching and learning in light of new AI tools;
- Redefining assessment tasks; and
- Expecting more from students.
In a nutshell, we need to reiterate Trust and Maloy’s point that “teaching can be a means of empowering students to creatively construct their own knowledge, experiences, and understandings of the world and to rethink and re-envision research and writing in the era of AI writing tools.”
As educators, we need to think about and design project-based, problem-based, and challenge-based approaches to teaching and learning and to enable students to take control of their learning, becoming more autonomous learners with moral duties in using AI tools.
A. Mehdi Riazi is a professor, associate dean for research, and Ph.D. programme coordinator in the College of Humanities and Social Sciences at Hamad Bin Khalifa University, in Doha.