Yesterday, I drove through the Ball State University campus. In front of Emens Auditorium appeared to be a display of AI for students. But nothing about it on The Daily Student site.
My thinking this is not a good idea may be due to my being an old curmudgeonly Luddite, but it turns out my intuition has support in research.
Are we living in a golden age of stupidity? (Sophie McBain, The Guardian)
With some MIT colleagues, Kosmyna set up an experiment that used an electroencephalogram to monitor people’s brain activity while they wrote essays, either with no digital assistance, or with the help of an internet search engine, or ChatGPT. She found that the more external help participants had, the lower their level of brain connectivity, so those who used ChatGPT to write showed significantly less activity in the brain networks associated with cognitive processing, attention and creativity.
In other words, whatever the people using ChatGPT felt was going on inside their brains, the scans showed there wasn’t much happening up there.
The study’s participants, who were all enrolled at MIT or nearby universities, were asked, right after they had handed in their work, if they could recall what they had written. “Barely anyone in the ChatGPT group could give a quote,” Kosmyna says. “That was concerning, because you just wrote it and you do not remember anything.”
Yeah... I wish I could forget some of the things I've written; then I might not spend so much time revising those words.
A caution, or two, in the following; one about the experiment and one about human beings:
The experiment was small (54 participants) and has not yet been peer reviewed. In June, however, Kosmyna posted it online, thinking other researchers might find it interesting, and then she went about her day, unaware that she had just created an international media frenzy.
Alongside the journalist requests, she received more than 4,000 emails from around the world, many from stressed-out teachers who feel their students aren’t learning properly because they are using ChatGPT to do their homework. They worry AI is creating a generation who can produce passable work but don’t have any usable knowledge or understanding of the material.
The fundamental issue, Kosmyna says, is that as soon as a technology becomes available that makes our lives easier, we’re evolutionarily primed to use it. “Our brains love shortcuts, it’s in our nature. But your brain needs friction to learn. It needs to have a challenge.”
My mind stuck on this paragraph, too:
One issue is that our digital devices have not been designed to help us think more efficiently and clearly; almost everything we encounter online has been designed to capture and monetise our attention. Each time you reach for your phone with the intention of completing a simple, discrete, potentially self-improving task, such as checking the news, your primitive hunter-gatherer brain confronts a multibillion-pound tech industry devoted to throwing you off course and holding your attention, no matter what. To extend Christodoulou's metaphor, in the same way that one feature of an obesogenic society are food deserts – whole neighbourhoods in which you cannot buy a healthy meal – large parts of the internet are information deserts, in which the only available brain food is junk.
Perhaps it is past time to rethink the purpose of technology - not to make money, but to improve our lives. That will upset the tech lords and their finance bros, since improving lives is not the bright idea that brings in the short-term dollars. I have no idea what this kind of tech would be; I am too old, too bound to other things, for such things.
More cautions:
Michael Gerlich, head of the Centre for Strategic Corporate Foresight and Sustainability at SBS Swiss Business School, began studying the impact of generative AI on critical thinking because he noticed the quality of classroom discussions decline. Sometimes he’d set his students a group exercise, and rather than talk to one another they continued to sit in silence, consulting their laptops. He spoke to other lecturers, who had noticed something similar. Gerlich recently conducted a study, involving 666 people of various ages, and found those who used AI more frequently scored lower on critical thinking. (As he notes, to date his work only provides evidence for a correlation between the two: it’s possible that people with lower critical thinking abilities are more likely to trust AI, for example.)
But if you have low critical thinking abilities, what are you doing in college?
If you cannot imagine a different purpose for tech any more than I can, then what about requiring proof that future tech does not have detrimental side effects? The following makes a good argument for such a process:
Until the pandemic, many teachers were “rightly sceptical” about the benefits of introducing more technology into the classroom, Faith Boninger, a researcher at the University of Colorado, observes, but when lockdowns forced schools to go online, a new normal was created, and ed tech platforms such as Google Workspace for Education, Kahoot! and Zearn became ubiquitous. With the spread of generative AI came new promises that it could revolutionise education and usher in an era of personalised student learning, while also reducing the workload for teachers. But almost all the research that has found benefits to introducing tech in classrooms is funded by the ed-tech industry, and most large-scale independent research has found that screen time gets in the way of achievement. A global OECD study found, for instance, that the more students use tech in schools, the worse their results. “There is simply no independent evidence at scale for the effectiveness of these tools … in essence what is happening with these technologies is we’re experimenting on children,” says Wayne Holmes, a professor of critical studies of artificial intelligence and education at University College London. “Most sensible people would not go into a bar and meet somebody who says, ‘Hey, I’ve got this new drug. It’s really good for you’ – and just use it. Generally, we expect our medicines to be rigorously tested, we expect them to be prescribed to us by professionals. But suddenly when we’re talking about ed tech, which apparently is very beneficial for children’s developing brains, we don’t need to do that.”
AI, smartphones, even the internet I am using right now, are dangerous. Their threat to civil society has become clearer as the years go by. We impose safety requirements on automobiles - especially as we come to understand their dangers - so why not on the other products of our technological society?
sch 9:13 AM