It is probably no surprise that many professors report an increase in cheating facilitated by artificial intelligence chatbots such as ChatGPT. This trend has prompted educators to reconsider how they teach and assess students. While they recognize the technology's potential for teaching and learning, they are concerned about creating tests and assignments that resist chatbot-assisted cheating. Some instructors are returning to paper exams and requiring students to submit editing histories and drafts to demonstrate their thought process.
However, some educators point out that cheating has always existed and that chatbots are simply the latest tool students use. The rising popularity of generative AI chatbots poses challenges for academics, who want students not only to produce correct answers but also to understand the work. One pressing issue is the reliability of AI detectors: current tools are highly inaccurate at identifying chatbot-generated text. As a result, damaging false accusations of AI cheating can occur, and it is difficult to determine whether students have dishonestly used AI-powered chatbots. Despite these challenges, educators are implementing stricter rules to prevent cheating, such as forbidding the use of artificial intelligence on assignments. Many college administrators are leaving the decision to individual professors, urging instructors to create and clarify rules on chatbot use. Some institutions, like Michigan State University, provide faculty with a library of statements (perhaps themselves generated by AI) to adapt for their syllabi.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…