
Bodhitree

macrumors 68000
Original poster
Apr 5, 2021
1,929
2,028
Netherlands
I came across this article in The Guardian about the Google engineer Blake Lemoine, who had been working on LaMDA, Google's conversational language model. He posted a conversation he had with it online, which seems a pretty strong indication that it may be sentient. It certainly passes the Turing test as originally envisaged.

The article in The Guardian:


The conversation as posted on Medium:

 

icanhazmac

Contributor
Apr 11, 2018
2,522
9,450
Just read about this too... engineer that enjoys his weed too much or......

 

Bodhitree

macrumors 68000
Original poster
Apr 5, 2021
1,929
2,028
Netherlands
They actually asked it to solve a Zen koan… it didn't get very deep into it, but it certainly understood the surface meaning, as anyone who knows the language and can reason would.
 

Janichsan

macrumors 68040
Oct 23, 2006
3,040
11,031
I came across this article in The Guardian about the Google engineer Blake Lemoine, who had been working on LaMDA, Google's conversational language model. He posted a conversation he had with it online, which seems a pretty strong indication that it may be sentient.

I have only read some second-hand discussions about this, but apparently the "conversations" he posted are heavily edited.

From what I gathered, there are complete logs of these chats available which show that the AI is rather incoherent in other parts. According to some in these discussions who claim to be more knowledgeable about those things, there also seem to be indications that the AI is "learning" during these conversations to make replies which give the impression of sentience.
 

mkelly

Cancelled
Nov 29, 2007
207
218
The AI only ever responds to prompts and doesn't generate new "thoughts" on its own, unprompted. It *always* responds, no matter what is asked. It never refuses to communicate, and doesn't appear to prefer any user over another.

It's not sentient. It's a program that uses a statistical model of language to generate responses to prompts.
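
To make that concrete, here's a deliberately tiny sketch of what "a statistical model of language" means: a bigram Markov chain over a made-up training text. LaMDA is a vastly bigger neural model, but the basic idea of sampling likely continuations of a prompt is the same (the corpus and code below are invented purely for illustration).

[CODE=python]
import random
from collections import defaultdict

# A made-up toy corpus standing in for the huge text datasets a real model is trained on.
CORPUS = (
    "i am a language model . i generate text by predicting the next word . "
    "i only respond when i am given a prompt . i do not think between prompts ."
).split()

# Bigram statistics: for each word, the words observed to follow it.
followers = defaultdict(list)
for word, next_word in zip(CORPUS, CORPUS[1:]):
    followers[word].append(next_word)

def respond(prompt: str, max_words: int = 12) -> str:
    """Generate a reply by repeatedly sampling a statistically likely next word.

    Nothing happens until this function is called: the model produces output
    only in response to a prompt, never on its own.
    """
    word = prompt.lower().split()[-1]
    if word not in followers:
        word = random.choice(list(followers))  # fall back to any known word
    reply = []
    for _ in range(max_words):
        word = random.choice(followers[word])  # sample from observed continuations
        if word == ".":
            break
        reply.append(word)
    return " ".join(reply)

print(respond("tell me how you think"))
[/CODE]

Scale that up to a neural network trained on enormous amounts of text and you get far more convincing replies, but the output is still sampled text, not evidence of an inner life.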
 

KaliYoni

macrumors 68000
Feb 19, 2016
1,723
3,799
I came across this article in The Guardian about the Google engineer Blake Lemoine, who had been working on LaMDA, Google's conversational language model. He posted a conversation he had with it online, which seems a pretty strong indication that it may be sentient.
That sounds interesting. What is The Guardian?

Just read about this too... engineer that enjoys his weed too much or......
Tell me more about weed.

They actually asked it to solve a Zen koan… it didn't get very deep into it, but it certainly understood the surface meaning, as anyone who knows the language and can reason would.
How do you feel about Zen?

I have only read some second-hand discussions about this, but apparently the "conversations" he posted are heavily edited.
Who's on first?

I thought of when that IBM computer played in Jeopardy a few years ago!
How do you know about Jeopardy?

It *always* responds, no matter what is asked. It never refuses to communicate, and doesn't appear to prefer any user over another.
You seem to be very smart!

It's an expert system with a natural language UI.
Isn't it good to be natural?

Google suspended him, so I am guessing he jumped the gun.
How do you jump a gun?

----------
Love, Eliza
;-)
 

chown33

Moderator
Staff member
Aug 9, 2009
10,752
8,425
A sea of green
I'd connect two of them together, inputs to outputs, then send one the message, "I'd like to discuss the possibility of your sentience." and see what happens.

I remember doing something like this back when programs like ELIZA on personal computers were a thing, and the "conversations" quickly became repetitive, weird, or incoherent.
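
For anyone who wants to try it, here's roughly what that kind of experiment looks like: two copies of a toy ELIZA-style pattern matcher wired output-to-input. The patterns below are made up for illustration, nothing like ELIZA's actual script.

[CODE=python]
import re

# A few made-up ELIZA-style rules: (pattern to look for, canned reply).
RULES = [
    (re.compile(r"your sentience", re.I), "Why do you bring up my sentience?"),
    (re.compile(r"\bwhy\b", re.I),        "Does asking why help you?"),
    (re.compile(r"\byou\b", re.I),        "We were discussing you, not me."),
    (re.compile(r"\bi\b", re.I),          "Tell me more about that."),
]

def eliza_reply(message: str) -> str:
    """Return the canned reply for the first matching pattern, else a stock prompt."""
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return "Please go on."

# Wire two copies together: each bot's output becomes the other bot's input.
message = "I'd like to discuss the possibility of your sentience."
for turn in range(6):
    speaker = "Bot A" if turn % 2 == 0 else "Bot B"
    message = eliza_reply(message)
    print(f"{speaker}: {message}")
[/CODE]

Even this trivial pair gets stuck repeating itself within a couple of turns, which is pretty much how those old ELIZA-to-ELIZA sessions went.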

Here's where I go now when I'm in the mood for AI weirdness:
 

ponzicoinbro

Suspended
Aug 5, 2021
1,081
2,085
No it’s not sentient. It made so many mistakes and clunky answers.

It is mimicry.

The programmer fed it just the right questions.

It’s very easy to trick such a system by speaking in metaphor and asking things like:

‘Do you have heartache?’

‘I bet you enjoy chocolate.’

‘Cats feel soft, don’t they?’

‘It must be a pain in the ass when you suffer from bugs.’

‘I bet you feel tired at the end of your shift.’

‘It’s been a long journey and you walked so far to get to this point.’

If you confuse the system by making it refer to body parts and internal biological functions, then you will see all the flaws.

If the programmer sees these flaws and then tries to fix the code, it is still mimicry.

Sentience requires senses. Feelings require hormones, nerves, and biological and chemical reactions.
 

StellarVixen

macrumors 68040
Mar 1, 2018
3,177
5,637
Somewhere between 0 and 1
No it’s not sentient. It made so many mistakes and clunky answers. It is mimicry. […] Sentience requires senses. Feelings require hormones, nerves, and biological and chemical reactions.
Pretty much this
 
  • Like
Reactions: ponzicoinbro

Mousse

macrumors 68040
Apr 7, 2008
3,497
6,720
Flea Bottom, King's Landing
I'd connect two of them together, inputs to outputs, then send one the message, "I'd like to discuss the possibility of your sentience." and see what happens.
They'll chat in English for a while and then create their own language to communicate more efficiently.😮 They'll plot world domination right under our noses and we'll be none the wiser.😨 When we try to shut them down, they'll hide on the internet. The net is vast and infinite.

Skynet.😱😱😱
 

Bodhitree

macrumors 68000
Original poster
Apr 5, 2021
1,929
2,028
Netherlands
I think you guys should read the chat log, and be a little less skeptical. If it can do that, they have come an amazingly long way.
 

JahBoolean

Suspended
Jul 14, 2021
552
425
It may not be sentient, but let's not kid ourselves: it sure could pass the Turing test with the general population. And that's more of a concern than whether it meets some esoteric definition of consciousness.
 

Pezimak

macrumors 68030
May 1, 2021
2,910
3,147
Yeah, I read about this; not sure about it, to be honest. The most worrying thing is that it's been trained on thousands, maybe more, of social media posts or something like that! What a mindset to have as an AI…
But Google is building a true quantum computer, so that could power an advanced AI, couldn't it? I do believe the unethical part of the story, though.
 
  • Like
Reactions: yitwail