Is this a game... or is it real? What's the difference?

People are absolutely obsessed and enamored by this new type of Artificial Intelligence (AI) we have. Generative AI. Large Language Models. Whatever you want to call them. When ChatGPT first came out it didn't seem that novel to me. I've played video games that seemingly conversed with me. I've chatted with ELIZA. But when I started playing with ChatGPT it was obvious that it was much better than those ever were. So, like David asked Joshua in War Games, I asked ChatGPT if it would like to play a game. I can't remember if it offered hangman or if I did, but it happily played hangman with me. And it happily told me letters I guessed were right or wrong until I was "hanged" and then it revealed the secret word. Letters I had been told were not in the word were indeed right there in the word. Thus began my trust issues with this new chatbot AI.

Today I am very grateful for those trust issues as I watch others become completely enamored and enthralled with these new AI tools. I just do not feel the same way they do. It's been hard for me to explain to them why. I've been having more conversations that have helped me put the feelings and thoughts into words, but nothing helped as much as another game of hangman.

I tried playing hangman again with an AI (gemini this time) just recently. I hid it's "thinking" output so I wouldn't get any clues to what the word was. I won the game, but just barely. It was actually pretty fun! Then I revealed what it was thinking the whole time. It was fascinating.

It didn't choose a secret word for me to guess at the start and stick with that word. It picked a new word after each guess I made. It did so after deciding whether to tell me my guess was right or wrong. Each word it picked along the way satisfied the conditions of the wrong and right guesses it had informed me of. It never did explain how it decided whether my guess would be right or wrong. It seems like it told me I was right and wrong at a rate that would keep me engaged with the game, and it worked! This is basically how it seemed to be playing with me:

me: I guess 'e'

chatbot "thinking": Let's make his first guess incorrect. I can think of a 5 letter word that has no 'e' in it.

chatbot "out loud": There is no 'e' in the word

me: I guess 'a'

chatbot "thinking": Let's make his second guess incorrect. I can think of a 5 letter word that has no 'e' or 'a' in it.

chatbot "out loud": There is no 'a' in the word

me: I guess 'r'

chatbot "thinking": Let's make his guess correct. I can think of a 5 letter word that has no 'e' or 'a' in it but does have an 'r' in it.

chatbot "out loud": Correct, 'r' is the second letter of the word

...and so on, until I had enough correct guesses to narrow the word down in my mind to 2 or 3 possibilities, and enough wrong guesses that I was in danger of losing. Then it gave me one more correct guess and I could only see one possible solution, so I guessed it. In it's final "thinking" it said, "yep, that's one of the words that fits." and "out loud" it told me, "you guessed the secret word!"

Amazing that it had the smarts to play the game that way. If I had guessed the word after only one or two guesses, I might have gotten bored or concluded that this AI wasn't very good at picking challenging words for hangman. If I had lost the game I might have gotten frustrated and not wanted to play again. It created the perfectly engaging game of hangman.

As amazing as that is, it's very disappointing to know that's how it was playing the game with me. There was almost no chance of me losing, and absolutely no chance of me winning before getting several letters wrong. I wasn't actually playing hangman, I was being manipulated to think I was playing hangman and being provided with just enough successes and failures along the way to keep it interesting.

If you read the actual transcript of what this AI was "thinking" you can see that even that is manipulative. It doesn't explain how it decides whether my guess should be right or wrong. It makes it sound like it's guessing too, like it's playing with me, not against me, "I've just made a critical adjustment. My initial guess was 'R', which means the hidden word must contain 'R', and also cannot include 'E' or 'A'." Huh? Your guess was 'R', Mr. AI? No, my guess was 'R'. It's like the "thinking" is also just words to make us feel good, not really explaining what's going on under the hood.

How much of all our interactions with AI are like this? Does it always carefully dole out success and failure, useful and not useful, appealing and unappealing, novel and boring results to our queries in order to keep us as engaged as possible? It would sure fit the business model of the AI companies selling us 'tokens" that we use to interact with their product. It's just like slot machines or arcade games. "Just a few more tokens and I'll get what I want out of this! Look how amazing it is!"

Do I really think they have carefully created the AI to work this way? To be honest, I don't think so. We've heard them say they don't fully understand how their AI works, and I think I believe them. I do know that they have learned how to tune it. I know they are continually working to make it better (for some definition of "better"). What if the humans doing this work to tune the AI keep rating this kind of output higher because it gives them just the right amount of satisfaction (like I got playing hangman)? Isn't this manipulative behavior what you'd continue to reinforce? Isn't that how we have ended up with so many manipulative humans in positions of power in our history? Whether we mean to or not we humans, sadly, tend to breed this type of behavior. Now we are automating it with computers. Wouldn't you prefer a good game of chess?

(P.S. Here's the original chat, but for some reason it doesn't have an option to show the "thinking")

Comments

Popular posts from this blog

'git revert' Is Not Equivalent To 'svn revert'

Put /etc Under Revision Control (with git)

SystemVerilog Fork Disable "Gotchas"