OpenAI is launching a version of ChatGPT for college students


A handful of college students who were part of OpenAI’s testing cohort—hailing from Princeton, Wharton, and the University of Minnesota—shared positive reviews of Study Mode, saying it did a good job of checking their understanding and adapting to their pace.

The learning approaches that OpenAI has programmed into Study Mode, which are based in part on the Socratic method, appear sound, says Christopher Harris, an educator in New York who has created a curriculum aimed at AI literacy. They might grant educators more confidence about allowing, or even encouraging, their students to use AI. “Professors will see this as working with them in support of learning as opposed to just being a way for students to cheat on assignments,” he says.

But there’s a more ambitious vision behind Study Mode. As demonstrated in OpenAI’s recent partnership with leading teachers’ unions, the company is trying to rebrand chatbots as tools for personalized learning rather than cheating. Part of that promise is that AI will act like the expensive human tutors that only the most well-off families can typically afford.

“We can begin to close the gap between those with access to learning resources and high-quality education and those who have been historically left behind,” says Leah Belsky, OpenAI’s head of education.

But painting Study Mode as an education equalizer obscures one glaring problem. Under the hood, it is not a tool trained exclusively on academic textbooks and other approved materials—it’s more like the same old ChatGPT, tuned with a new conversation filter that simply governs how it responds to students, encouraging fewer answers and more explanations.
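To see how thin that layer can be, consider a rough sketch of the approach. The snippet below wraps the base model in Socratic-style instructions using OpenAI’s public chat completions API; the prompt text and model choice are our own illustration, not OpenAI’s actual Study Mode implementation, which has not been published.

```python
# A minimal sketch of the "instruction layer" idea described above,
# using OpenAI's standard chat completions API. The system prompt is an
# illustrative guess at Socratic-style tutoring instructions, NOT
# OpenAI's actual (unpublished) Study Mode prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tutoring instructions: same base model, different behavior.
TUTOR_STYLE_PROMPT = """You are a patient tutor.
Do not give the final answer outright. Instead:
- Ask the student what they already know about the topic.
- Break the problem into small steps and check understanding after each.
- Offer hints before explanations, and explanations before answers."""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model for illustration
    messages=[
        {"role": "system", "content": TUTOR_STYLE_PROMPT},
        {"role": "user", "content": "Why does x^2 - 5x + 6 factor into (x - 2)(x - 3)?"},
    ],
)
print(response.choices[0].message.content)
```

The key point is that the underlying model, and everything it absorbed during training, is unchanged; only its conversational behavior is steered.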

This AI tutor therefore more closely resembles what you’d get if you hired a human tutor who has read every required textbook, but also every flawed explanation of the subject ever posted to Reddit, Tumblr, and the farthest reaches of the web. And because large language models generate plausible-sounding text rather than verified facts, you can’t expect it to reliably distinguish right information from wrong.

Professors who encourage their students to use it risk having it teach them to approach problems in the wrong way, or worse, teach them material that is fabricated or entirely false.


