Friday, June 14, 2024

AI Agnostic

Call me leery of AI before I read Navneet Alang's "AI Is a False God." Halfway through the article, I realized my leeriness was agnosticism.

But being able to do language without also thinking, feeling, willing, or being is probably why writing done by AI chatbots is so lifeless and generic. Because LLMs are essentially looking at massive sets of patterns of data and parsing how they relate to one another, they can often spit out perfectly reasonable-sounding statements that are wrong or nonsensical or just weird. That reduction of language to just a collection of data is also why, for example, when I asked ChatGPT to write a bio for me, it told me I was born in India, went to Carleton University, and had a degree in journalism—about which it was wrong on all three counts (it was the UK, York University, and English). To ChatGPT, it was the shape of the answer, expressed confidently, that was more important than the content, the right pattern mattering more than the right response.

All the same, the idea of LLMs as repositories of meaning that are then recombined does align with some assertions from twentieth-century philosophy about the way humans think, experience the world, and create art. French philosopher Jacques Derrida, building on the work of linguist Ferdinand de Saussure, suggested that meaning was differential—the meaning of each word depends on that of other words. Think of a dictionary: the meaning of words can only ever be explained by other words, which in turn can only ever be explained by other words. What is always missing is some sort of “objective” meaning outside of this never-ending chain of signification that brings it to a halt. We are instead forever stuck in this loop of difference. Some, like Russian literary scholar Vladimir Propp, theorized that you could break down folklore narratives into constituent structural elements, as per his seminal work, Morphology of the Folktale. Of course, this doesn’t apply to all narratives, but you can see how you might combine units of a story—a starting action, a crisis, a resolution, and so on—to then create a story about a sentient cloud.

It seems human because it deals with words, but words are not enough to write like a human.

OTOH, I know plenty of human beings who fail at pattern recognition.

The sense of there being a thinking thing behind AI chatbots is also driven by the now common wisdom that we don’t know exactly how AI systems work. What’s called the black box problem is often framed as mysticism—the robots are so far ahead or so alien that they are doing something we can’t comprehend. That is true, but not quite in the way it sounds. New York University professor Leif Weatherby suggests that the models are processing so many permutations of data that it is impossible for a single person to wrap their head around it. The mysticism of AI isn’t a hidden or inscrutable mind behind the curtain; it’s to do with scale and brute power. 

Do those people lack thought? Better not to go there. I am in a good mood this morning and not ready to be full-on cynical!

Dreams... desires... Even if we are not always capable of thinking, do we not have dreams? Ignore that fellow over there dreaming of electric sheep.

...Computers might in fact approach what we call thinking, but they don’t dream, or want, or desire, and this matters more than AI’s proponents let on—not just for why we think but what we end up thinking. When we use our intelligence to craft solutions to economic crises or to tackle racism, we do so out of a sense of morality, of obligation to those around us, our progeny—our cultivated sense that we have a responsibility to make things better in specific, morally significant ways.

AI may think faster and wider, but depth may elude it. So far, I find AI-generated writing jejune. More troubling, as history teaches us, are people.

Some Silicon Valley businessmen have taken tech solutionism to an extreme. It is these AI accelerationists whose ideas are the most terrifying. Marc Andreessen was intimately involved in the creation of the first web browsers and is now a billionaire venture capitalist who has taken up a mission to fight against the “woke mind virus” and generally embrace capitalism and libertarianism. In a screed published last year, titled “The Techno-Optimist Manifesto,” Andreessen outlined his belief that “there is no material problem—whether created by nature or by technology—that cannot be solved with more technology.” When writer Rick Perlstein attended a dinner at Andreessen’s $34 million (US) home in California, he found a group adamantly opposed to regulation or any kind of constraint on tech (in a tweet at the end of 2023, Andreessen called regulation of AI “the new foundation of totalitarianism”). When Perlstein related the whole experience to a colleague, he “noted a similarity to a student of his who insisted that all the age-old problems historians worried over would soon obviously be solved by better computers, and thus considered the entire humanistic enterprise faintly ridiculous.”

Andreessen’s manifesto also included a perfectly normal, non-threatening section in which he listed off a series of enemies. It included all the usual right-wing bugbears: regulation, know-it-all academics, the constraint on “innovation,” progressives themselves. To the venture capitalist, these are all self-evident evils. Andreessen has been on the board of Facebook/Meta—a company that has allowed mis- and disinformation to wreak havoc on democratic institutions—since 2008. However, he insists, apparently without a trace of irony, that experts are “playing God with everyone else’s lives, with total insulation from the consequences.”

sch 6/1 
