Saturday, March 18, 2023

Artificial Intelligence - But Is It Intelligent?

With ChatGPT getting so much attention, I have been taking a bit more interest.

The Point Magazine published a few pieces that seem relevant:

Intelligent Life by Rory O'Connell has much to say about thinking and machines:

What would it be to approach the question “Can machines think?” in a different way? Forget about machines for a moment. Instead, just think about thinking.

To think anything at all is already to expose yourself to the possibility of going either right or wrong—of your thinking being true or false. Although it is all too often absent from the discourse surrounding AI, the concept of truth is absolutely essential for understanding thought.

There are numerous paths by which to approach this fact. One of the most direct, however, starts from an ancient observation: one cannot think a contradiction. It is a fundamental principle of thought—a “law,” some call it—that one cannot think both “Alan Turing is alive” and “Alan Turing is not alive” at the same time. Someone who insisted that Turing is alive and dead would not simply be mistaken, as they would be if they thought Turing were alive. Someone who thinks Turing is alive is merely misinformed, but perfectly intelligible—whereas someone who earnestly asserts both that he is alive and that he is dead is not describing even a possible way the world could be. This is why the impossibility of believing a contradiction is a precondition of thought’s relation to the world. Nothing of course rules out that I unwittingly hold contradictory beliefs—this happens often enough—but that is not the same as consciously thinking those thoughts together. According to Aristotle, someone who really rejected the law of noncontradiction would be akin to a vegetable. More politely, we would say that they could not express, or form, a coherent thought.
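(A formal aside from me, not O'Connell: in symbols the law in question is just ¬(p ∧ ¬p), and in a proof assistant such as Lean it is a one-line theorem, valid for any proposition p whatsoever:

    -- Lean 4 sketch: no proposition can hold together with its negation.
    theorem no_contradiction (p : Prop) : ¬(p ∧ ¬p) :=
      fun h => h.2 h.1

Whether checking that proof mechanically counts as thinking it is, of course, exactly the question at issue.)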

What is the nature of the impossibility associated with thinking a contradiction? I find it hard in a world of technological aids to learn phone numbers by heart; still, there’s nothing problematic in the idea that someone could remember indefinitely many phone numbers. By contrast, it is not simply an idiosyncrasy of mine—or of human beings in general—that we cannot think a contradiction; it reflects the fact that thought concerns the world. A contradiction can’t be thought because a contradiction cannot be—it can’t be true of the world that Turing is alive and that he is dead.

For a machine to truly think, it too would have to be governed by the law of noncontradiction. A computer can easily be designed so as to never simultaneously “output” both a statement and its contradiction. In that case, the law of noncontradiction may be said to “govern” the machine’s thinking since its programming renders this outcome impossible.

But I do not think this will do. In genuine thinking the truth is freely acknowledged. We are “governed” by the law of noncontradiction only to the extent that we are capable of freely grasping its truth. This is not freedom of choice, since we do not simply decide what is true. It is the freedom characteristic of making up your own mind, of your judgments resting, and resting only, on your recognition of what considerations speak in their favor. In the machine, in place of the free acknowledgment thinking requires, we instead find a mechanism specified and implemented by a designer. But something that conforms to the law of noncontradiction out of mechanical necessity falls short of conducting itself—either in thought or in action—in light of the truth. 

That’s why machines, despite the increasingly complex tasks they will be able to perform, will not be able to think. It is tempting to suppose that it is an open question whether thought might eventually be recreated through better technology, programming or “deep learning,” even if we haven’t succeeded in doing so yet. But once we accept that thought is governed by its own principles, its own forms of explanation, we are not free to simultaneously reduce it to such mechanisms. Their modes of explanation are, properly understood, mutually exclusive.
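O'Connell's contrast between mechanism and free acknowledgment is easy to make concrete. Here is a toy Python sketch of my own (nothing from the essay) of a program "designed so as to never simultaneously output both a statement and its contradiction":

    # A toy version (mine, not O'Connell's) of a machine whose designer
    # hard-wired away the possibility of asserting a contradiction.
    class AssertingMachine:
        def __init__(self):
            self.asserted = set()  # everything the machine has "output" so far

        @staticmethod
        def negation(statement: str) -> str:
            # Crude syntactic negation; enough for the toy example.
            prefix = "not "
            if statement.startswith(prefix):
                return statement[len(prefix):]
            return prefix + statement

        def output(self, statement: str) -> bool:
            # The designer's rule: refuse any statement whose negation
            # has already been output. A lookup, not an insight.
            if self.negation(statement) in self.asserted:
                return False
            self.asserted.add(statement)
            return True

    machine = AssertingMachine()
    print(machine.output("Alan Turing is alive"))      # True
    print(machine.output("not Alan Turing is alive"))  # False: mechanically blocked

The False on the last line comes from a designer's rule, not from any recognition that the world cannot be both ways; that gap is precisely what the essay is pressing on.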

Intelligence should include thinking as well as knowledge. I used to be able to remember most of what I read; that talent has been declining over the past 13 years. That I could regurgitate facts does not mean I understood those facts. That I knew the rules of grammar did not mean I could write Hamlet. Intelligence leaps over facts; creativity does not balk at grammar (read the last section of Ulysses for an example).

The Liberal Patriot asks What is Artificial Intelligence For, Exactly? and opens with a discussion of the failure of the self-driving automobile before plunging into the wider topic.

As things stand, these programs and platforms amount to mere curios, ways for programmers, technology enthusiasts, and journalists to amuse and titillate themselves. They don’t have any apparent purpose or function at the moment, and they don’t fulfill any distinct or meaningful role as yet. We may be able to use ChatGPT or similar AI systems to write press releases and do other necessary drudgery for us, but it’s hard to believe that automating such boring tasks will prove worth all the time and effort—not to mention the large sums of money—poured into this particular enterprise.

In other words, it’s hard to know what exactly these artificial intelligence platforms are supposed to be for—and it’s hard to figure that out when a sort of technological mysticism pervades most discussions of the subject. Some rhetoric about artificial intelligence veers into the theological, complete with its own prophesied apocalypse and rapture; by the same token, comparatively tame fantasies that artificial intelligence will “do everything” obfuscate just as much. By focusing our attention on unlikely scenarios and quasi-religious pronouncements, these flights of fancy prevent us from thinking about various artificial intelligence platforms as tools crafted for specific purposes.

Indeed, ChatGPT and other chatbots come with their own set of specific problems that endless pondering about the alleged “existential risk” posed by general-purpose artificial intelligence won’t solve. As computer scientists Arvind Narayanan and Sayash Kapoor put it, “ChatGPT is the greatest bullshitter ever”—it produces convincing responses to questions without any reference to the truth. That leads psychologist and long-time artificial intelligence researcher Gary Marcus to warn that chatbots like ChatGPT could produce prodigious quantities of seemingly credible bullshit like fake medical and scientific studies. Marcus also raises the disturbing possibility that mindless chatbots will emotionally and psychologically manipulate the humans they interact with, even to the point of encouraging murder and suicide. The fact that ChatGPT’s function and purpose remain murky at best only makes these dangers more troubling; there’s no sense of what anyone gains from the platform given the potential costs involved.

The article goes a bit further with practical advice:

Above all else, we need to ask what a particular artificial intelligence platform is for—what purpose it’s intended to serve, what role it’ll fulfill, what function it’s supposed to assume. With autonomous weapons the answer to these questions might seem obvious, but it should not be taken for granted. For civil AI projects like ChatGPT, however, there needs to be more rigorous thinking about the ultimate purpose of the intended platforms and the research needed to build them. It may be fun and possibly even lucrative for programmers and companies to create various artificial intelligence platforms, but if there’s no practical reason for their existence—helping programmers write code more efficiently, for instance—they should probably remain confined to lab settings. Even so, it’s hard to say that that goal demands the sort of time and energy it’s receiving today.

For all the article's emphasis on discussing ends, I cannot help but recall a word I have not seen in quite a while: vaporware.

Finally, ZDNet published How does ChatGPT work? and I will let you read that in full. I just want to point out that it is just a fancy database:

In addition to Persona-Chat, there are many other conversational datasets that were used to fine-tune ChatGPT. Here are a few examples:

  • Cornell Movie Dialogs Corpus: a dataset containing conversations between characters in movie scripts. It includes over 200,000 conversational exchanges between more than 10,000 movie character pairs, covering a diverse range of topics and genres.
  • Ubuntu Dialogue Corpus: a collection of multi-turn dialogues between users seeking technical support and the Ubuntu community support team. It contains over 1 million dialogues, making it one of the largest publicly available datasets for research on dialog systems.
  • DailyDialog: a collection of human-to-human dialogues in a variety of topics, ranging from daily life conversations to discussions about social issues. Each dialogue in the dataset consists of several turns, and is labeled with a set of emotion, sentiment, and topic information.

In addition to these datasets, ChatGPT was trained on a large amount of unstructured data found on the internet, including websites, books, and other text sources. This allowed ChatGPT to learn about the structure and patterns of language in a more general sense, which could then be fine-tuned for specific applications like dialogue management or sentiment analysis.
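To make that less abstract, here is a minimal sketch of my own, not from the ZDNet piece, of what "fine-tuning on a conversational dataset" looks like in practice. It assumes the Hugging Face datasets and transformers libraries, with GPT-2 and DailyDialog standing in for OpenAI's closed pipeline:

    # Hedged sketch: fine-tune a small causal language model on a
    # conversational dataset. GPT-2 and DailyDialog are stand-ins here;
    # the model/dataset names assume the Hugging Face Hub.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    dialogs = load_dataset("daily_dialog", split="train")

    def flatten(example):
        # Join each multi-turn dialogue into a single training string.
        return {"text": " ".join(example["dialog"])}

    def tokenize(example):
        return tokenizer(example["text"], truncation=True, max_length=512)

    tokenized = (dialogs.map(flatten)
                        .map(tokenize,
                             remove_columns=dialogs.column_names + ["text"]))

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="dialog-gpt2", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

Scaled up by several orders of magnitude, that is roughly the process the article describes, which is why "fancy database" is not an unfair first approximation.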

So, Skynet is still a long way off. Engineers need to get a real life; maybe then they will stop hyping stuff the way they do.

And from The Scottish Review comes Bill Magee's From Bletchley Park to ChatGPT. AI has a Scottish angle, so there is history here, and a warning:

Microsoft has integrated OpenAI's latest model ChatGPT into its struggling Bing search engine and Edge web browser. It hopes to see massive revenue earners from sectors including entertainment, health, education, finance, e-commerce, news and politics. You name it. Many chatbots are already employed by businesses, running on your mobile's messaging apps or via SMS, and commonly used for business-to-consumer (B2C) customer service, sales and marketing. You know the sort of thing: a conversational online/mobile (ro)bot tells you: 'Your call is special to us,' then you're left for 15 minutes listening to the Supremes singing You Keep Me Hangin' On.

When it comes to ChatGPT, academics, cybersecurity researchers and AI experts have singled out this generation of chatbots from previous incarnations, collectively warning that they could be used by bad actors on social media 'to sow dissent and spread propaganda', terms that sound eerily familiar from a past age. The difference is, this time around, we cannot expect a group of Bletchley codebreakers to come to our rescue.

sch 3/10


 
