Throughout various professional and educational fields, experts and professors at the University of Wisconsin-Madison grapple with the ramifications of ChatGPT and other artificial intelligence bots’ usage in academic settings.
The chatbot is advanced enough to create responses that many students could use to complete assignments, and at the university level, reactions are varied.
John Zumbrunnen, vice provost for teaching and learning at UW-Madison, said the university does not plan to write new academic policy for ChatGPT, which is covered by existing academic integrity policy, according to the Milwaukee Journal Sentinel.
Dr. Yonatan Mintz, an assistant professor in industrial and systems engineering at UW-Madison, explained how ChatGPT functions: its developers crawled an “English language corpus” from across the internet up to 2019 and used those webpages to train the model to reproduce patterns in English speech.
The chatbot isn’t actually looking at English words, according to Mintz. Instead, the transformer uses an encoding layer that converts each English word into a vector — that embedding process is the way ChatGPT understands language, Mintz said.
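The encoding step Mintz describes can be sketched as a simple lookup table. The tiny vocabulary, vector size and random values below are illustrative placeholders only — real models learn their embedding tables from a huge text corpus, with hundreds or thousands of dimensions per word.

```python
import random

# Toy vocabulary: each word maps to an integer id.
# "<unk>" stands in for any word the model hasn't seen.
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}

# Toy embedding table: one vector per vocabulary entry.
# Real systems learn these values during training; here they
# are just random numbers for illustration.
random.seed(0)
EMB_DIM = 4  # real models use far larger dimensions
embeddings = [[random.uniform(-1, 1) for _ in range(EMB_DIM)]
              for _ in vocab]

def embed(word):
    """Map an English word to its vector — the 'encoding layer' step."""
    idx = vocab.get(word.lower(), vocab["<unk>"])
    return embeddings[idx]

# Each word in a sentence becomes a list of numbers, which is
# what the rest of the transformer actually operates on.
sentence = ["The", "cat", "sat"]
vectors = [embed(w) for w in sentence]
print(len(vectors), len(vectors[0]))  # 3 words, each a 4-number vector
```

From this point on, the model never sees letters — only these vectors, which is why Mintz calls the process mathematics rather than human-style learning.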
“When we say machine learning, artificial intelligence — it’s all kinda nice euphemisms to explain complicated mathematics,” Mintz said. “It doesn’t necessarily map on to what you and I would identify as human learning.”
Despite producing advanced messages, ChatGPT does not understand the content of its output.
“It doesn’t mean that it sees meaning in these things; the meaning comes from all of us,” Mintz added.
What do professors think?
Dr. Larry Shapiro, a professor of philosophy at UW-Madison, was initially skeptical of ChatGPT’s ability to write cogently, but was surprised by the chatbot’s output and linguistic ability.
Shapiro contextualized the chatbot’s progress using the Turing test, created by British mathematician Alan Turing, which posits that if a computer can deceive a human conversational partner into believing they are talking to another human, the computer is intelligent.
“I think ChatGPT passes, or soon will pass the ‘Turing test,’ but I don’t think it is intelligent. This means that Turing was wrong,” explained Shapiro.
Mintz also mentioned the Turing test and compared the acuity of ChatGPT’s responses with earlier chatbots like Microsoft’s ill-fated “Tay.”
“All of those original models would be able to pass [the Turing test] at the level of a teenager, a 13-year-old or 16-year-old who doesn’t know much about the world,” said Mintz. “[But] ChatGPT is good enough, I believe, to pass at the level of someone with an undergraduate degree.”
Though experts may be able to pick out flaws in responses, like made-up citations or padding to fill character counts, the bot will appear accurate unless you know what you’re looking for, Mintz said.
“You could really be duped into believing ridiculous things,” Mintz explained, suggesting that more explicit falsehood warnings and tags from Microsoft, OpenAI and other AI developers on sensitive topics like vaccination may be a good step for identifying misinformation.
Some professors have begun using ChatGPT responses in their curricula. After discovering students had used the chatbot for coursework during finals, Dr. Joshua Calhoun, a professor of English at UW-Madison, investigated it further.
“Because they’re students, and because we’re all learners, maybe they’re still trying to figure out [what to do] with a new tool — how do we use it well,” he said.
Calhoun said he’s found many students unaware of ChatGPT, while others use it or are nervous about it. He employed GPT-generated responses in his course “English 433: Edmund Spenser,” where he had students analyze GPT poetry imitating the style of the Renaissance-era poet.
Calhoun added that the chatbot makes people think differently about information and knowledge, comparing it to the emergence of, and backlash against, Wikipedia.
On students using ChatGPT for classwork, Calhoun isn’t wholly worried.
“Good essay questions can’t simply be answered by typing into a chatbot,” said Calhoun. “If [a professor’s] essay question can be answered that way, even before an AI, I think there is some onus on the professor to think about what that prompt is asking students to demonstrate or to know.”
Where does AI go from here?
Though Mintz cautioned against predicting too far into the future of AI progress, he said one immediate use of chatbots is their ability to generate a rough draft without investing a lot of time.
“Technology is technology; the real ethics of how bad or how good it is is how you end up using it,” he said.
Educator perspectives on the bot vary, but Shapiro sees ways for it to be a tool or aspect of curriculum. One potential exercise he pondered would, in the spirit of the Turing test, have students examine several essays to determine which ones were written by classmates rather than the bot.
“I hope they see it as a tool that can make their lives easier while not, at the same time, replacing the need to develop skills on their own. Calculators didn’t put mathematicians or engineers out of business,” said Shapiro. “I expect that in time, once the novelty of ChatGPT has dissipated, we’ll view it in the same way as we do calculators, GPSs and other smart tools that make our lives easier.”
Mintz said that though he’s not sure about the economics, a conversation about AI’s impact on employment will be necessary.
“For the first time, we have something that’s really impacting more professional and white collar jobs, as opposed to more blue collar, manual labor,” he said. “I think this is something that’s definitely going to have to be discussed — we haven’t been in a situation where the college-educated are going to have their opportunities reduced by these technologies.”
Liam Beran is the former campus news editor for The Daily Cardinal and a third-year English major. He has written in-depth on higher-education issues and covered state news. He is now a summer LGBTQ+ news fellow with The Nation. Follow him on Twitter at @liampberan.