Monday, February 27, 2023

With All the Talk About ChatGPT, I Had to Do This

I ran across The Point and spent a little of this Sunday morning reading "Intelligent Life" by Rory O'Connell, which I decided was time well spent. It covers a good deal of the history of computing before turning to philosophy. It is far too long for an easy summary, so I will offer a few quotes of what impressed me most, in the hope you will read further:

Most experts acknowledge that we are a long way off from seeing what AI researchers have termed “artificial general intelligence”: the kind of intelligence that does not consist in performing highly circumscribed tasks, but which involves a unified conception of the world, and a capacity to learn and think about anything at all; the sort of intelligence, in other words, that we ourselves are thought to exhibit. Yet the present air of excitement surrounding AI cannot be chalked up simply to familiar tech boosterism. Even those skeptical of the new technology’s advance on the grail of genuine intelligence remain deeply agnostic on the question of whether, in principle, genuine artificial intelligence is achievable. This, in turn, reveals a radical transformation in the way we have come to understand ourselves.

###

For a machine to truly think, it too would have to be governed by the law of noncontradiction. A computer can easily be designed so as to never simultaneously “output” both a statement and its contradiction. In that case, the law of noncontradiction may be said to “govern” the machine’s thinking since its programming renders this outcome impossible.

But I do not think this will do. In genuine thinking the truth is freely acknowledged. We are “governed” by the law of noncontradiction only to the extent that we are capable of freely grasping its truth. This is not freedom of choice, since we do not simply decide what is true. It is the freedom characteristic of making up your own mind, of your judgments resting, and resting only, on your recognition of what considerations speak in their favor. In the machine, in place of the free acknowledgment thinking requires, we instead find a mechanism specified and implemented by a designer. But something that conforms to the law of noncontradiction out of mechanical necessity falls short of conducting itself—either in thought or in action—in light of the truth. 

That’s why machines, despite the increasingly complex tasks they will be able to perform, will not be able to think. It is tempting to suppose that it is an open question whether thought might eventually be recreated through better technology, programming or “deep learning,” even if we haven’t succeeded in doing so yet. But once we accept that thought is governed by its own principles, its own forms of explanation, we are not free to simultaneously reduce it to such mechanisms. Their modes of explanation are, properly understood, mutually exclusive.

###

Even if we insist on treating ourselves as tools, we cannot escape the question: What are we for? Every tool, after all, must have some purpose. To determine what “use” we are to be put, we would need some sense of what is actually worthwhile in the first place—what is worth pursuing, not simply as a means to something else, but for its own sake. This is an ethical question—one that reveals that we are not mere “instruments”—since in answering it we determine how we ought to live. Yet we lose our very ability to respond to such questions when the distinction between humans and artifacts is effaced.

sch 2/19

This feels related: Sci-Fi Magazine Clarkesworld Overwhelmed with Flood of AI-Generated Stories

sch 2/24