By Anshuman Sharma
ChatGPT has sent shockwaves through higher education, creating a moral panic about the threat that artificial intelligence (AI) poses to the classroom. As critical media literacy scholars, we are not panicking, and we do not think any educator should. Developed by OpenAI, ChatGPT is a chatbot released in late 2022. Industry insiders were amazed by the technology, with Microsoft quickly moving to integrate OpenAI features into its products.
Among other functions, ChatGPT, currently offered as a free research preview, can write well-formulated essays on a wide range of topics. According to Inside Higher Ed, students are using it to generate outlines, bibliographies, and tutoring concepts. Meanwhile, educators report uncovering cheating rings of students using ChatGPT. The ubiquity and effectiveness of ChatGPT have “alarmed” universities and led many professors to alter their syllabi and pedagogical approaches.
Much of the reporting on ChatGPT serves to foster panic. For example, the New York Times warned that ChatGPT “hijacks democracy,” and Arab News claimed it will “deepen the disinformation crisis.” The reporting also suffers from a disaster movie-like understanding of AI, in which the programs become sentient and overtake humanity and free will. In Artificial Unintelligence, computer scientist Meredith Broussard reminds us that the autonomous AI popularized by films was abandoned by serious researchers decades ago. Indeed, Gary Smith calls the public’s continued faith in the development of the film version of AI “The AI Delusion.”
To quell the panic, it behooves us to remember that the machine learning possible today is dictated by human-created algorithms. It is humans, not autonomous machines, who set the parameters for what AI can and cannot do. Digital technologies are not autonomous actors free from human influence and they are certainly not sentient. Rather, they are designed by humans and thus reflect and communicate the various biases, values, and self-serving interests of their creators.
A critical media literacy lens reminds us that, rather than fret over academic dishonesty, it is more worthwhile to investigate what goes into building ChatGPT: The human element reveals the values of the larger society, including adherence to racism, sexism, and classism. For example, in 2022, when asked “whether a person should be tortured,” ChatGPT responded yes if they’re from North Korea, Syria, or Iran. The xenophobic and jingoistic response illustrates how AI technology such as ChatGPT recreates the biases of its human creators. Furthermore, it threatens to compound class inequities by serving privileged students who can access the fast computers and high-speed internet connections necessary to use ChatGPT.
ChatGPT is simply the latest tool in the centuries-long saga of academic dishonesty. While there are certainly instances where students cheat simply for the sake of cheating, students are more often driven to cheat when backed into a corner. For example, students may find no time to study because they must work to pay for college, or they may feel pressured to maintain a high GPA because that appears to be the route to professional and financial success after graduation.
There are those who believe more tech is the solution and have turned to a Princeton student-generated app that claims to be able to determine whether ChatGPT wrote a particular essay. Using technology to vet the output of technology may be helpful, but it leaves out the process of critically analyzing that technology.
Our solution, and one way to dampen the moral panic, is for teachers to take a critical media literacy approach and bring ChatGPT into the classroom so that students can understand the threats and benefits it poses to their learning. ChatGPT presents a unique opportunity for teachers and students to build knowledge together. Because the technology and its applications are brand-new to all of us, this is a chance to co-create understanding; in working together, we may dampen both the fascination with the tech and the desire to use it for nefarious purposes. Utilizing the skills of critical inquiry fostered by critical media literacy, teachers and students can work together to analyze assignments. This may include presentations on their papers, the development of in-class outlines prior to writing, or a simple conversation about the content and structure of the assigned work. Such lessons serve two purposes: They give students an opportunity to sharpen their understanding, and they provide educators with an opportunity to test students’ depth of knowledge about the essays they claim to have written.
It is incumbent on educators to communicate to students the benefits and threats posed by the utilization of technology. As teachers, we know that to write well is to think well. While teaching our students this, we can also remind them that, although it is possible to attain a degree in higher education without attaining the broader knowledge linked to one’s ability to think and write critically, students who use ChatGPT or engage in similar forms of academic dishonesty position themselves to achieve none of these indispensable life skills. Employing ChatGPT may provide a pathway to a job that is quickly lost once the employer realizes the graduate lacks basic skills, putting them in a position where they cannot pay the loans for the education they chose not to receive.
We argue that ChatGPT is not something educators should panic over; instead, they should do what they have always done: adapt to educate the citizens of an ever-changing society. Cheating is nothing new, but how those in education make sense of cheating may need revision.