Our Continuously Evolving Relationship With AI

DISCLAIMER: This was originally posted on another tech blog, which has since gone defunct. It is reposted here with permission.

You can’t deny that we live in exciting times for technological progress. Between IBM’s Watson, Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Assistant (formerly OK Google and Google Now), we are well into the dawn of AI personal assistants that can carry on a conversation with you, albeit a limited one resembling a customer service call. Just for some context, here’s what science fiction imagined AI assistants would be like only a few decades ago:

Not only does the Star Trek computer speak in a harsh monotone, but it has wimpy access times too. You’d think a civilization with faster-than-light space travel would have figured out how to cut network latency.

While AI might have a long way to go in emulating some functions of human intelligence, it’s definitely catching up in the space of personal assistants. A quick search of your app store will show dozens more virtual assistant bots, with more prototypes launching all the time. Not only are these apps getting more capable, they’re also developing personalities of their own.

Which raises the question: How will we humans respond to talking AI going forward?

The ELIZA Effect

We may be sorry we asked that question.

Recent research finds that a majority of voice assistant users seek a deeper connection to their pet bots, and a quarter have even admitted to sexual fantasies about their phone’s voice assistant. So we have a whole generation out there that’s hot for bots already.

Surprised? You shouldn’t be. In fact, it’s a well-known bug in human-computer interaction called “the ELIZA effect.” Briefly, the effect causes us humans to falsely attribute mental and emotional depths to computers that aren’t there. It takes its name from ELIZA, a chatterbot experiment at the MIT Artificial Intelligence Laboratory in the 1960s, which was programmed to converse with a human in typed text. ELIZA, in “doctor” mode, would imitate the conversational patterns of a Rogerian psychotherapist, which is a very easy script to run. It survives today built into Emacs as M-x doctor. There’s an online version of ELIZA for you dirty vi users.

  • * Yes, I do, but I’m writing an article on bots.
  • > Did you come to me because you are writing an article on bots?
  • * Yep, Google showed me here.
  • > I see.
  • * Tell me about yourself.
  • > That is quite interesting.
  • * What’s quite interesting?
  • > Say, do you have any psychological problems?
  • * Yes, I have Attention Surplus Disorder.
  • > I see.

The tricky thing about the ELIZA effect is that it still works even when you know about it. Here, you can see me attempting to drive ELIZA “off-script,” only to have it change the subject back to me. Yet in the moment, it was impossible not to have the thought “It’s deliberately being evasive.” Talk to ELIZA long enough, and you’ll end up spilling your guts, revisiting childhood flashbacks and buried memories. It works the way rubber-duck debugging works: there’s no magic or even external input, but explaining things out loud, even to a mirror, helps you comprehend them better.
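
To give a sense of just how little machinery that script needs, here’s a minimal sketch of the same idea in Python. To be clear, this is not Weizenbaum’s original code; the patterns and canned replies here are invented for illustration. But the core trick (match a phrase, swap the pronouns, and hand the user’s own words back as a question) is the same one ELIZA runs on.

    import random
    import re

    # Swap pronouns so the user's words can be mirrored back at them.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your",
        "am": "are", "you": "I", "your": "my",
    }

    # A few (pattern, responses) rules; a real ELIZA script has many more.
    RULES = [
        (re.compile(r"i am (.*)", re.I),
         ["Did you come to me because you are {0}?",
          "How long have you been {0}?"]),
        (re.compile(r"i (?:feel|have) (.*)", re.I),
         ["Tell me more about {0}.",
          "Why do you say that?"]),
        # Catch-all keeps the conversation going when nothing matches.
        (re.compile(r".*"),
         ["I see.", "That is quite interesting.",
          "Say, do you have any psychological problems?"]),
    ]

    def reflect(fragment):
        """Rewrite 'my article' as 'your article', 'i am' as 'you are', etc."""
        return " ".join(REFLECTIONS.get(word, word)
                        for word in fragment.lower().split())

    def respond(user_input):
        for pattern, replies in RULES:
            match = pattern.match(user_input)
            if match:
                reply = random.choice(replies)
                if match.groups():
                    reply = reply.format(reflect(match.group(1)))
                return reply

    print(respond("I am writing an article on bots"))
    # e.g. "Did you come to me because you are writing an article on bots?"

That’s the whole act: no model of the world, no memory, just pattern matching and a pronoun table. And yet it still drags confessions out of people.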

Getting back to our modern AI voice assistants, quite a few of them already have to put up with grossly sexual behavior from users.

How Much Should We Humanize Voice Assistants?

Some of this attachment disorder might be the fault of designers. It’s been pointed out that nearly all of our voice bots have female voices by default. Sure, you can often change it to a male voice, but people seem to prefer their voice assistants to sound girly. This goes so far as to raise the question of patriarchy in our society: we are, after all, socially conditioned to think of any assistant as female, since the vast majority of secretaries and receptionists are female.

We could dodge by saying female voices register better, are more pleasing to the ear, and so on, but who are we kidding? Even the Star Trek computer back there has a female voice. The canonical recent movie about a romance with a computer AI, 2013’s Her, features a distinctly female AI as well. Ditto for the 2015 science fiction thriller Ex Machina. So, it apparently goes without saying that AI programmers are expected to be dudes, and they use their skills to build the ideal dudette.

That speaks to something weirdly Stepfordian lurking just under the surface. Could it be, since so much of our science fiction depicting female bots has them turning against their male masters, that we’re expressing a collective cultural guilt at objectifying one gender?

And How Much Should AI Analyze Us?

On the flip side, the powers that be at Facebook noticed that suicidal people often express their intentions on Facebook before doing themselves in. This spurred the development of predictive AI models that try to prevent suicide. So now we have AI assistants not only as our companions, but in some cases as our babysitters as well.

This opens up yet another social can of worms. So far, the cause is noble, but what happens when predictive models get more advanced and are applied more intrusively? Will we one day have Minority Report-style interventions whenever a predictive model can tell we’re contemplating a crime? Maybe a computer will be able to tell when we’re stressed out and say, “You know what you need? A nice, stiff scotch.”

Once again, we are charging forward with technological progress into realms where our philosophies and knowledge can barely keep up. We barely have time to ask these questions, let alone answer them.

But maybe our best hope now is to get an AI smart enough to do all the answering for us.

Now chill out from your future shock and contemplate an episode of Bob Ross’ The Joy of Painting filtered through the neural net of Google’s Deep Dream.

Never saw that on Star Trek, did you?

Author: Penguin Pete

Take good care of my memes; I've raised them since they were daydreams!