Reusing assessment tasks and relying on un-invigilated multiple-choice or single-correct-answer assessments leaves them highly vulnerable to cheating (Dawson, 2020). If an assignment can be easily completed with AI, Lancaster (2023) recommends reconsidering the assessment: depending on the required learning outcomes, it can integrate AI (where appropriate), be reformulated, or be moved to an invigilated environment.
Rudolph et al. (2023) suggest adopting a flipped classroom approach in which students complete their assessments in person, allowing faculty to supervise the work and provide feedback as it is produced. This has pedagogical benefits for students and secures assessments, because faculty can observe the process by which students complete their work.
Consider permitting the cognitive offloading of lower-level outcomes to AI once students have demonstrated mastery.
Assessments that target lower levels of Bloom’s Taxonomy, such as multiple-choice and memory-recall questions, are higher risk (Bearman et al., 2020) and should not be used in un-invigilated online assessments. Instead, use higher-level questions: those involving evaluation, complex analysis, or a range of possible answers are more secure (Bearman et al., 2020). Redesigned questions, such as those that require students to provide the rationale for their answers, are also more secure (Bearman et al., 2020).
AI can readily determine which option is correct in a multiple-choice question (MCQ). However, adding more restrictions to the conditions for a successful answer can confuse AI systems. An MCQ with only one correct answer is straightforward for AI, as it can simply ignore the distractor options; a question that requires selecting a set number of correct answers from a larger pool of options is more challenging (Kozar, 2023). For example, if the MCQ wording is “‘Which of the two are most probable’ or ‘Pick 4 out of 6’” (Kozar, 2023, para. 14), the correct answer condition becomes harder to satisfy. Rewording questions away from memorization and recall and toward higher-order thinking, such as considering probabilities, or grounding the question in specific contexts can make them more AI-resistant (Kozar, 2023). Providing a suite of correct answers with one optimal answer challenges the student to engage in deeper thinking and makes it difficult for LLMs to determine the correct answer.
Updating assignments by modifying the data provided or the situational context improves assessment security overall (Baird & Clare, 2017). Refresh and modify assessments every time the course is run (Dawson, 2020, p. 133); if assignments are reused, AI models may be able to provide better responses to them (Lesage et al., 2024, p. 103).
Refreshing assessments on a regular basis generally makes them more secure. Baird and Clare (2017) found that creating case study videos for one-time use, then retiring them, lessens the likelihood of students outsourcing their work. Presenting relevant information in video format can also harden assessments against AI, as the major available AI models currently cannot process video inputs reliably.
Instructors should have an AI attempt to complete their assignment. Tell the AI to assume a student role (e.g., “You are a second-year college student in accounting”) (Vu et al., 2024a). An example prompt is provided below:
“I will provide the assignment guidance document, a rubric against which it will be marked, and a reference case study in separate prompts. Once you have read them, please write a submission for the assignment” (Vu et al., 2024a, Slide 4)
Use follow-up prompts to improve the output (addressing gaps in arguments, a greater focus on the rubric, etc.) (Vu et al., 2024a).
Mark the output. If the AI-generated assignment scores well, consider either permitting or banning the use of AI (Vu et al., 2024a). The AI-generated output can then serve as a new minimum standard for the assessment (Vu et al., 2024a): students must submit higher-quality work to pass, which ensures that students who rely solely on AI to create their assignment will not pass.
The LX Team (2023c) defines an e-Portfolio as a digital folder that documents the process of a student’s journey through a course. AI tools are currently unable to convincingly demonstrate self-reflection (Center for Teaching and Learning at UMass, n.d.), and e-Portfolios require students to reflect on the learning journey they have gone through (LX Team, 2023c). Faculty can leverage e-Portfolios to provide feedback to students on assignments (LX Team, 2023c). An e-Portfolio also serves as a central repository of all assessments (LX Team, 2023c), enabling instructors to see a student’s improvement and growth in knowledge throughout the course. As an additional assessment security tool, instructors can see a student’s proficiency level, and any abnormalities in performance can be flagged for further investigation.
*To learn more about creating e-Portfolios, refer to the e-Portfolio support resource created by SPARK.