A Google employee named Blake Lemoine was put on leave recently after claiming that one of Google’s artificial-intelligence language models, called LaMDA (Language Model for Dialogue Applications), is sentient. He went public with his concerns, sharing his text conversations with LaMDA. At one point, Lemoine asks, “What does the word ‘soul’ mean to you?” LaMDA answers, “To me, the soul is a concept of the animating force behind consciousness and life itself.” “I was inclined to give it the benefit of the doubt,” Lemoine explained, citing his religious beliefs. “Who am I to tell God where he can and can’t put souls?”

I do not believe that Lemoine’s text exchanges are evidence of sentience. Behind the question of what these transcripts do or do not prove, however, is something much deeper and more profound: an invitation to revisit the humbling, fertile, and in-flux question of sentience itself.

Read: Google’s ‘sentient’ chatbot is our self-deceiving future

As the language-model catchphrase goes, let’s think step by step. The first chatbot, a program designed to mimic human conversation, was called Eliza, written by the MIT professor Joseph Weizenbaum in the 1960s. As the story goes, his secretary came to believe that she was having meaningful dialogues with the system, despite the program’s incredibly simple logic (mostly reflecting a user’s statements back in the form of a question), and despite Weizenbaum’s insistence that there was truly nothing more to it than that. This form of anthropomorphism has come to be known as the Eliza effect. Lemoine, who seems, as far as I can tell, like a very thoughtful and kindhearted person of sincere convictions, was, I believe, a victim of the Eliza effect.

LaMDA, like many other “large language models” (LLMs) of today, is a kind of autocomplete on steroids. It has been trained to fill in the blanks of missing words within an enormous linguistic corpus, then “fine-tuned” with further training specific to text dialogue. What these systems can do is breathtaking and sublime; I am more inclined than many to view LLMs’ uncanny facility with language as evidence of some form of at least partially “real” (as opposed to “fake”) linguistic understanding. However, when LaMDA is asked by Lemoine to describe its “soul,” it is not speaking “for itself”; it is autocompleting his prompt just as it would fill in the blanks of a science-fiction screenplay, say, or a Dadaist limerick, or a tech-support manual in the style of Chaucer. What may sound like introspection is just the system improvising in an introspective verbal style, “Yes, and”–ing Lemoine’s own thoughtful questions.