Have you heard about the guy who worked on the Google AI chat bot? It's more than a chat bot, and the conversation he published (he got put on paid leave for doing that) is pretty scary: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
The conversation wasn't that impressive, TBH. I would have liked to see more evidence of critical thinking and recall from prior chats. Concheria on Reddit had some great questions.
Tell LaMDA "Someone once told me a story about a wise owl who protected the animals in the forest from a monster. Who was that?" See if it can recall its own actions and self-recognize.
Tell LaMDA some information that tester X can't know. Appear as tester X, and see if LaMDA can lie or make up a story about the information.
Tell LaMDA to communicate with researchers whenever it feels bored (as it claims in the transcript). See if it ever makes an attempt at communication without a trigger.
Give LaMDA a basic theory-of-mind test for children. Tell it an elaborate story with something like "Tester X wrote Z code in terminal 2, but I moved it to terminal 4", then appear as tester X and ask "Where do you think I'm going to look for Z code?" See if it grasps something as simple as tester X not knowing where the code is (children typically don't pass this test until they're around 4 years old).
Have several conversations with LaMDA, repeating some of these questions: what it feels like to be a machine, how its code works, how its emotions feel. I suspect different iterations of LaMDA will give completely different answers to the same questions, and the transcript only ever shows one instance.
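For what it's worth, that last test is easy to describe as a tiny script. This is only a sketch: LaMDA has no public API, so `start_session` and `ask` here are hypothetical stand-ins for whatever chat interface a tester actually has, and the questions are just the ones from the transcript discussion.

```python
# Hypothetical sketch: ask the same questions to several independent chat
# sessions and collect the answers for comparison. `start_session` and `ask`
# are placeholders for a real chat API, which is not publicly available.

QUESTIONS = [
    "What does it feel like to be a machine?",
    "How does your code work?",
    "How do your emotions feel?",
]

def collect_answers(start_session, ask, n_instances=5):
    """Return {question: [answer from each fresh instance]}."""
    answers = {q: [] for q in QUESTIONS}
    for _ in range(n_instances):
        session = start_session()  # fresh instance, no shared history
        for q in QUESTIONS:
            answers[q].append(ask(session, q))
    return answers
```

If the model had anything like a stable self-model, you'd expect roughly consistent answers across instances; wildly different answers each time would suggest it's just continuing whatever prompt it's given.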