AIB-3 Chatbots as co-conspirators: Why students turn to AI chatbots for cheating

Hunter J. Schrock, University of South Carolina - Upstate
Justin Travis, University of South Carolina - Upstate

Abstract

This presentation reviews extant research on artificial intelligence (A.I.), specifically the use of chatbots for cheating, plagiarism, and other intentional behaviors that shirk student duties and responsibilities, also known as Counterproductive Student Behaviors (CSBs). A.I. chatbots such as ChatGPT, Bing Copilot, and Google Gemini use large language models to generate text responses to user-created prompts. This functionality makes chatbots useful tools for learning and productivity, such as when students and faculty use them to refine and assist their work. While chatbots' effectiveness at generating educational content can benefit those working in academia, some students exploit these generative capabilities to evade plagiarism-detection techniques and submit dishonest work, or draw on a model's broad knowledge to find answers to online assignments. This novel issue has risen rapidly alongside the development and evolution of A.I.-generated content. Many studies and reviews already address the effect of chatbots on higher education, particularly students' use of them to source answers on online exams and to generate material for open-ended writing assignments (i.e., engaging in CSBs). Nevertheless, this nascent literature has often lacked individual-level theoretical guidance to explain why, and how, students may engage with these tools for counterproductive purposes. To this end, we consider the justifications for CSBs that involve chatbots, along with the predictors of these behaviors. Mistrust of A.I., preference for computer versus human interaction, and moral disengagement are all factors that may be conceptually relevant in determining whether A.I. is chosen over alternatives such as live-service websites that provide human assistance with assignments. Given the rapid advancement of A.I. technology, it is more important than ever to understand how it is used in academic settings and what leads students to use it dishonestly.

Apr 12th, 9:30 AM Apr 12th, 11:30 AM


University Readiness Center Greatroom
