ChatGPT in the Classroom with Dr. Chris GauthierDickey

Author(s)

Aubrey Cox

Student Content Writer

aubrey.cox@du.edu

The rise of OpenAI and ChatGPT has reshaped thinking about Artificial Intelligence (AI) and its practical capabilities. The technology's ability to mimic human language has led much of the world, and higher education in particular, to grapple with the potential advantages and pitfalls of its use.

In the Department of Computer Science at the Ritchie School of Engineering and Computer Science, administrators are responding to the new technology's ability to churn out code, which could prove harmful if misused by budding computer scientists. As experts in the field, however, many in the department have been considering the possibilities for some time, and department chair Dr. Chris GauthierDickey is taking the concern in stride.

“We have been talking about AI for a while now, even before ChatGPT came out because the research that made ChatGPT possible has been out since 2016,” said Dr. GauthierDickey. 

The department’s current policy is simple: leave it up to the discretion of the professors. There is an ongoing push to assemble an ad hoc, department-level committee to delve into the complexities of AI’s use in the classroom. The committee’s task would be to determine how AI could be intentionally used to make the educational experience more seamless, enabling students to move quickly past simpler, time-consuming code to more complex, thought-provoking questions.

When it comes to academic misuse, Dr. GauthierDickey’s apprehension is middling. The computer science department currently prohibits the use of large language models (LLMs) within the Introduction to Computer Science sequence, but outside of that context, he considers LLMs more as a tool to be understood and applied.

Pieces of code derived from ChatGPT should be cited as any outside source would, according to policy, but he cautions that, like resources such as Stack Overflow, ChatGPT can often be incorrect or inaccessible to newer students working on ambitious projects.

“Ironically, you actually have to know how to program to use ChatGPT to write a lot of code for you.”

But even Dr. GauthierDickey, who teaches a section of the intro sequence, sometimes breaks his own rule. Even for students who are new to computer science, he believes that the technology can have a place in an academic setting.

“I told my students, ‘You should try it. First of all, do your assignments on your own, and then see what ChatGPT has to say about it, because if you prompt it right, if you know how to work with it, it can be really effective, like helping to explain code,’” Dr. GauthierDickey explained.

Assessment considerations aside, Dr. GauthierDickey sees the innovation of ChatGPT as yet another reason for everyone to learn to code. Given the prevalence of ChatGPT outside of the computer science field, a greater understanding of the technology would permit its manipulation to better fit the needs of any user. 

“The advantage of being a CS person is that we know how to call the APIs [Application Programming Interfaces]. We know how to use ChatGPT outside of the web interface, and you can change the behavior.”
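For readers curious what "changing the behavior" through the API looks like in practice, here is a minimal sketch. Everything in it is illustrative rather than drawn from the article: the endpoint URL, model name, and prompts are assumptions about OpenAI's Chat Completions API, and actually sending the request would additionally require an API key. The key idea is the "system" message, which steers the model's behavior in a way the standard web chat does not directly expose.

```python
import json

# Hypothetical endpoint for OpenAI's Chat Completions API (assumption,
# not taken from the article).
API_URL = "https://api.openai.com/v1/chat/completions"


def build_request(system_prompt: str, user_prompt: str,
                  model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body for a chat-style API call.

    The system message applies to every reply, letting a programmer
    change the assistant's behavior before the user says anything.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        # Lower temperature makes answers more deterministic.
        "temperature": 0.2,
    }


# Example: turn the model into a code-explaining tutor, echoing
# GauthierDickey's suggestion that it can help explain code.
body = build_request(
    "You are a patient tutor: explain code line by line, "
    "never just hand over a finished answer.",
    "What does this Python do?  [x * x for x in range(5)]",
)
print(json.dumps(body, indent=2))
```

Posting this body to the endpoint (with an `Authorization: Bearer <key>` header) would return the model's reply; the sketch stops at building the request so it runs without credentials.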

With a deeper understanding of programming, there comes a heightened awareness of the ethical issues that society will face outside of the classroom. Less perturbed by the prospect of academic misuse, Dr. GauthierDickey is worried about the bias that will undoubtedly crop up in technology that is trained on inherently biased data sets. 

“If you're a computer scientist, you have a responsibility to deal with bias in the software that you write. And in the software and the way that these are trained, they just didn't care when they did it. Because they cared about the end product…Obviously, society is biased, and because it is being trained on this information — this is the hard research question — how do you address that?” Dr. GauthierDickey said.

As for other concerns, GauthierDickey shares an anxiety common among experts in the field, including OpenAI itself, which recently released a blog post explaining its efforts to curtail misinformation: that AI could become an effective tool of nefarious influence if not properly implemented.

“I’m worried about our society being fooled by AI — that’s my only worry,” Dr. GauthierDickey said. “It’s getting so good at mimicking us, and there is a concern that because it’s so good at evaluating language, it might be really good at convincing us of things.”

He noted that the United States legal system is ill-prepared to handle the incoming questions about AI misuse, which currently lacks regulation. To combat this, Dr. GauthierDickey recommends learning to code, endorsing a full-term Common Curriculum requirement for computer science.

Dr. GauthierDickey will be offering a topics course this academic year on large language model applications, “3703/4703 - Topics: Large Language Models”, covering how to prompt properly, how to hook into the API, and more complicated fine-tuning of the system. If you’re interested in learning more, you can also check out the Ritchie School’s Computer Science department page.
