
ChatGPT, the newest cool tech tool, can approximate some aspects of human creativity very well. But it, and other AI applications, won’t replace humans. Chatbots don’t make meaning; they present words in an order, crunching data and finding patterns. And teachers needn’t worry: ChatGPT won’t subvert education either.

Ananya Khera

Artificial intelligence is all around us, but it has been reassuringly clunky so far. AI tells terrible jokes, and its pick-up lines are absurd. But a new chatbot has left us reeling with its smoothness and competence. OpenAI’s ChatGPT, which launched last week, has gone viral around the world and is certainly a moment of lift for artificial intelligence. It creates text in response to prompts in conversational language, and it can write a decent movie script, ad, email, article or piece of marketing content. It can solve complex maths problems. It seems to have a sense of metaphor. It can counsel you on how to get over heartbreak, for instance, with nothing to give away the fact that it’s machine-generated.

It’s abundantly clear that AI will change education, journalism, research and any text-based work. But will it supplant workers in these fields? Will it end homework and ruin young minds? Probably not.

How the tech works

ChatGPT is a deep learning neural network that can process data in complex ways. “It is a large language model, trained on a huge variety of data that spans almost everything available on the internet,” says Bhasker Gupta, founder of Analytics India Magazine. It understands patterns, has had human coaching for reinforcement learning, and gets smarter with continued use and interaction.

In robotics, there’s a concept called the “uncanny valley”: that weird feeling one gets when something is almost human, but not quite. ChatGPT can almost overcome that problem, producing startling results that don’t seem synthetic or patched together.

Johannes Burgers, a professor of literature and digital humanities, jokingly asked it to write a poem on the dangers of technology in education, and to provide a solution that involved gulab jamuns. The poem it produced was freaky: not because it was high art, but because it read like a creative human response.

The famous Turing Test judges machine intelligence by asking whether a human can tell if they’re conversing with another human or a computer. ChatGPT is not the only impressive contender on this front. Google’s LaMDA 2 and Meta’s BlenderBot 3 are in the works; they can have open-ended conversations and take live feedback from users to improve outcomes, says Gupta.

How does ChatGPT think?

You and I use mental shortcuts and take imaginative leaps; we rely on gut feel and common sense. The algorithm, though, picks the words most likely to follow the previous ones, figuring out patterns. “It’s a very sophisticated language tool, but it doesn’t really think. It is presentation, rather than meaning-making,” says Padmini Ray Murray, an academic who works in design and digital humanities. Its performance depends on the material it’s fed, so when Murray prompted it for a take on ‘Brahmanical patriarchy’, for instance, a relatively niche concept from a different culture, the bot ran out of steam quickly, she says.
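For the technically curious, here is a toy sketch in Python of what “picking the word most likely to follow the previous one” amounts to. It is nothing like OpenAI’s actual system, just a made-up bigram counter over a few invented words, but it shows how text can be strung together from patterns alone, with no meaning-making involved.

# Toy illustration (not OpenAI's code): a tiny bigram "language model" that,
# like ChatGPT at a vastly larger scale, only appends the word most likely
# to follow the previous one, based on patterns in the text it has seen.
from collections import Counter, defaultdict

corpus = "the bot writes words the bot predicts words the reader makes meaning".split()

# Count which word follows which in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def continue_text(start, length=5):
    """Greedily append the most likely next word, with no sense of meaning."""
    words = [start]
    for _ in range(length):
        options = next_word_counts.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # prints "the bot writes words the bot"

The output is fluent-looking but empty: the program has no idea what a bot or a reader is, only which word tended to come next in its training text.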

How does ChatGPT affect teaching and learning?

Can a student simply feed in a prompt and get ChatGPT to do their work? “The problem is that it yokes together words, which works, but is not foolproof. So, while it may give students a plausible answer nine out of ten times, it may be completely off the tenth time, and they would not know unless they knew it already,” says Murray. Stack Overflow has temporarily banned its use because of its high error rate on coding problems. “It’s also obviously a plagiarism engine, since it spits out information without citation,” she says.

Another problem is that it is still open to saying biased, offensive things about specific groups, because of the material it has imbibed, despite OpenAI’s attempts to filter these responses. As with any piece of tech, human design choices determine its uses and its ethics. After all, “unfounded assumptions, bad advice and misinformation also abound on the Internet, and ChatGPT seems to make it easier for bad actors to add to the chaos,” says Gupta.

Of course, it makes instructors anxious when they cannot tell whether something was done by a person or a bot. ChatGPT can mimic writing styles, and it can produce different grades of work when asked. It poses problems for cultural gatekeeping: who decides what is good art? It makes you wonder how to parse a poem when you don’t know if it was written by a human or generated by a machine: is the creativity inherent in the work, or in the reader responding to it?

For teachers and evaluators, the challenge is to be more conceptually interesting, to gauge the student’s capacities and interests. “Large classes asking generic questions won’t work,” agrees Burgers.

“But it’s the oldest debate in the history of education, whether technology makes it better or worse. The reality is that what makes education work was figured out long ago too: a small teacher-student ratio, so that teachers have a sense of where each student is at,” he says. The original format to make sure something can’t be gamed or plagiarised is an oral exam.

Smartphones were once banned in classrooms, but then teachers figured that students were going to use them anyway, and now that the technology is assimilated, it makes the class more interactive.

Schools and universities will have to work around the machine. Another good parallel is the search engine and the impact it had: it didn’t replace anyone, but it transformed the way we worked.

How can AI help?

AI can also be a learning aid, when used right. It can make complex text simpler, or help students hamstrung by language and presentation skills. In a context where eloquence in English can affect the assessment of someone’s grasp of a subject, AI can come in handy. “If you use it like that, in the way Gmail’s predictive text can polish an email, it’s useful,” says Murray. There is a lot of writing that does not call for novelty and flair, and AI could fill these generic needs.

Meanwhile, an AI chatbot is a great way to play with ideas and to experiment easily. “In my class, students said they wished they had it when they were kids, so they could imagine stories and remake them, instead of just reading them,” says Burgers.

While AI might stir understandable anxieties, the much-hyped ChatGPT is clearly not doing what humans do. It can produce some marvels, but it is still pulling from a large pool of ideas and words, and its emotional effect is oddly flattened.

And after all, communication is not just words bolted together, it is one mind speaking to another.
