At the Center for Integrated Professional Development, we have curated this guide to help instructors navigate the presence of emerging artificial intelligence (AI) technologies in their classrooms. While we have provided a thorough exploration of AI-generated content below, here are our top-level conclusions:
What is AI-generated content? How might such content be problematic in higher education?
AI content generation services use artificial intelligence to produce content in natural human language. Drawing on search engines and databases, they can create a wide variety of written or symbolic documents, sometimes citing real references and sometimes fabricating them. Some services can also produce multimedia content such as images. Because these services can produce content so fluently, there is growing concern in the broader higher education community that students might use AI content generation to complete assignments, essays, or other activities in ways that raise academic integrity concerns.
In effect, students can use AI content generators to avoid producing their own work, much as they might pay someone else to write a paper or draw on an exam/assignment database maintained by their peers. This is problematic because it is difficult for instructors to detect and easy for students to access. Most importantly, it strips an assignment of its learning value.
Specific AI content generation services include ChatGPT, DALL-E, Jasper, and YouChat, all of which are built on OpenAI technology. New services are emerging at a rapid pace, with capabilities that could affect assignments across the curriculum.
What does the content generated by artificial intelligence look like?
Because AI content generators are designed to emulate natural language and because they learn from every query they are presented with, the characteristics of AI-generated content are constantly in flux in terms of quality and utility. Here are two examples of queries to ChatGPT, its responses, and brief commentary on the quality of the response:
The first example asks ChatGPT to solve the problem 5x6(9-2)x15.
In this example, the AI provides the correct order of operations that someone would use to solve the problem and arrives at the correct answer, 3150. However, the AI did not actually apply the order of operations correctly when it showed its work, as multiplication should be from left to right:
This is a good example of a response that arrives at the right answer but shows incorrect work along the way. That is particularly problematic for a student using the service even as a learning tool, because it may distort their own understanding of the order of operations. In addition, repeating the same query in the same system returned different answers and different orders of operations.
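The correct evaluation of the expression can be verified step by step. The sketch below (a minimal illustration, not part of the original query) applies the order of operations as a student should: parentheses first, then multiplication from left to right.

```python
# Evaluate 5 x 6(9 - 2) x 15 using the correct order of operations.
step1 = 9 - 2          # parentheses first: 7
step2 = 5 * 6          # leftmost multiplication: 30
step3 = step2 * step1  # 30 * 7 = 210
result = step3 * 15    # 210 * 15 = 3150
print(result)          # 3150
```

Working left to right or grouping the factors in any other order gives the same product here, which is why the AI reached the correct answer even while describing the procedure incorrectly.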
In this example, ChatGPT is asked to summarize the impact of the Civil War on women’s rights in America in 50 words or less.
Here, the AI correctly notes that the women’s rights movement did not advance per se during the Civil War, but it does not acknowledge the overlap between the women’s rights and abolitionist movements. The response would be difficult to distinguish from an actual student response. It is also 70 words long, even though the question asked for 50; ignoring articles and prepositions brings it down to approximately 50, suggesting that the bot counts only content words, or at least does not always honor the whole question.
In this example, ChatGPT is asked to provide citations for a paper about edible insects.
In this screenshot, you can see that ChatGPT provided five potential citations to learn more about edible insects. However, these citations tend to be inaccurate in part or whole. For instance, the first paper listed has an incorrect publication date, journal name, volume number, and page numbers. It should be cited as:
DeFoliart GR. Insects as food: why the western attitude is important. Annu Rev Entomol. 1999;44:21-50. doi: 10.1146/annurev.ento.44.1.21. PMID: 9990715.
How might I know if a student’s work is generated using artificial intelligence?
There is not yet a detection service that can definitively determine whether content was created using artificial intelligence. Some services, such as ChatGPT, have announced that they are working on features that would embed digital watermarks in text produced by their service, but this has not yet become a reality. As discussed in the previous question, AI-generated content is often structurally and stylistically close to natural human language, but it struggles to vary word choice and use idioms appropriately.
In particular, AI content generators struggle to create citations. They will sometimes identify an author who has written something appropriate to cite but invent a new title for that work, or fabricate both the author and the work. They may also fabricate quotes from real sources. If you use a service like Crossref (discussed later) to verify a student’s sources and many of them cannot be found, this may be a sign of an AI-generated paper.
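One way to check a suspect citation is Crossref’s public REST API, which accepts free-text bibliographic queries. The sketch below builds such a query URL; the endpoint and the `query.bibliographic` parameter are part of the Crossref API, while the helper function name and the example workflow are our own illustration.

```python
from urllib.parse import urlencode

def crossref_query_url(citation_text: str) -> str:
    """Build a Crossref works-search URL for a free-text citation.

    Fetching the returned URL (e.g. with urllib.request.urlopen)
    yields JSON whose top matches can be compared against the
    student's citation.
    """
    params = urlencode({"query.bibliographic": citation_text, "rows": "3"})
    return f"https://api.crossref.org/works?{params}"

url = crossref_query_url(
    "DeFoliart GR. Insects as food: why the western attitude is important."
)
print(url)
```

A genuine citation will usually surface as a close match among the top results; a citation with no close match, or one whose title, journal, and dates disagree with the best match, may be fabricated.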
Otherwise, we suggest using strategies like the ones you may already be using to validate the originality of student work:
How do I develop learning experiences that discourage the use of AI content generation services?
How do I develop learning experiences that integrate the use of AI-generated content into my course pedagogy?
How does student use of AI content generation interact with Illinois State's existing academic integrity policy? What do I do if I suspect a student has submitted work that is not theirs?
The Student Code of Conduct’s academic integrity policy has several provisions covering the use of any resource or service not authorized by, or acknowledged to, the instructor, and these apply to unauthorized use of AI content generation services. Such cases would be treated like students paying for essays or relying on test/paper archives maintained by peers. Using an AI content generation service to solve equations or write code would be an unauthorized use of assistance; using one to generate text for a written assignment would be plagiarism if that content were not cited as AI-generated.
If you are concerned that a student may have used AI content generation services to submit work that is not theirs, you should follow the same process you would during any other academic integrity concern. That process is laid out by Student Conduct and Community Responsibilities.
What sort of syllabus language might be appropriate to address students’ use of artificial intelligence in my class?
The Center provides suggested syllabus language for a variety of topics, including academic integrity. This language can be added to your course syllabus and discussed with your students at appropriate times throughout the semester. The suggested language on academic integrity was updated in January 2023 to include mention of the use of content produced through artificial intelligence services.
Other institutions are also examining the impact of these new technologies on learning.
Need additional help incorporating these suggestions into your particular course? Email ProDev@ilstu.edu to set up a consultation with a member of the Center's Scholarly Teaching team.