Falling for Klara

I don't worry about Artificial Intelligence that much. I worry a lot about Human Intelligence.

Photo by Andy Kelly on Unsplash

Meet Klara

As Klara tells the story, this is how it began:

[Fourteen-year-old Josie and Mother walk into the store. They look around for a while. Mother strikes up a conversation with someone, but Josie is clearly unhappy.]

“But Mom, what’s the point?” she asks. “She is great, I know. But she is not who I want!”

“We can’t keep searching forever, Josie.”

I hear the Manager’s voice coming again, and there is something new in it.

“Excuse me, miss. Do I understand that you are looking for someone in particular? One that you’ve seen before?”

“Yes, ma’am. You had her in your window a while back. She was really cute, and really smart. Looked almost French? Short hair, quite dark, and all her clothes were a little dark too, and she had the kindest eyes, and she was so smart.”

“I think I know who you mean,” Manager said. “If you’d follow me, we’ll find out.”

I stepped into the Sun. When Josie saw me, her face filled with joy, and she quickened her stride.

“You’re still here! Mom, this is her! The one I’ve been looking for!”

Mother is looking at me carefully now. “Excuse me, Josie, let me speak with Klara alone, please.”

[…]

“Mom,” this time Josie’s voice was hushed. “Mom, please!”

“Very well. We’ll take her.”

Josie came hurrying to me. She put her arms around me and held me.

Who is Klara?

This is the story Klara tells in Kazuo Ishiguro’s genre-smashing novel, “Klara and the Sun.”

Ishiguro is not just any thinker – he is an original genius, a path-breaking winner of the Nobel Prize in Literature.

But I have a huge problem with this story. I think we all should.

Let me explain.

Klara is a robot. She is an advanced robot in the form of a girl. Mother buys her to be Josie’s Artificial Friend. Yes, you heard that right. Artificial Friend. AF for short.

Klara is indeed quite remarkable. She can make rich conversation. She exists to make Josie happy, and she is exceptionally good at reading Josie’s body language. A little frown here, a tiny pursing of the lips, a step away from this stack of photographs she was looking at, and Klara knows if the conversation is distressing Josie. She may change the subject, or go deeper. Klara can make decisions for Josie’s wellbeing.

Soon, even hard-working Mother falls for Klara. She confides in her. She even takes Klara on a road trip so Klara can “experience” the countryside for the first time ever. Because Klara – poor thing! – has no knowledge of the world. You see, she was assembled in a factory and lived in the store.

Oh, and did I mention? Josie herself is genetically “enhanced” for intelligence, as are most of her human friends. They no longer go to school; they are taught remotely – all alone. Do you see the need for an AF now? Do you really?

Why do we fall for Klara?

OK, OK, wait a minute!! If you’re not at least a little alarmed by now, I am not telling the story right! Let me try this:

Artificial friend!! Isn’t that a contradiction in terms? (Genius Ishiguro).

Most importantly, why would flesh-and-blood biological humans fall for a silicon robot that needs no food, no play, no hugs, just solar energy to subsist? How do we come to believe that it can fathom, as we do, the pure joy of living on a sunny, breezy summer day; the bottomless sadness of loss; the delight of an unexpected discovery? Or that it has the capacity to love us back?

I need to tackle this.

Falling for Klara just bolsters the idea that we are easy to manipulate psychologically! There are countless experiments in cognitive neuroscience showing how easily our dopamine reward system can be fired up, just by priming us with some warm and fuzzy human emotion.

We feel the need to connect, so we insist on humanizing everything. If it looks like a human, walks like a human, talks like a human – it must be human. Not so! In a way, we act just like little kids who cling to their dolls, security blankets, and toy cars. But those toys don’t make suggestions or decisions on our behalf! Soon, kids outgrow this static attachment and move on to messy, reciprocating human beings.

Photo by Bundo Kim on Unsplash

Further, we cannot dismiss Klara as an extreme science-fiction story, either.

Klara is just Siri 20.0, or Alexa Plus Plus, or a future Facebook, Google, and Amazon.

Can you see it? Or think of Samantha in the movie “Her”, where the depressed, lonely hero falls in love with his upgraded AI system (Samantha), which can soft-talk to him like a fusion of a therapist, a lover, and a best friend.

Siri’s voice today will be Samantha’s voice before you know it. Facebook’s algorithm for who you are – one that already knows you better than you know yourself (because it keeps track of your unconscious behaviors!) – will be behind Alexa’s suggestion for a weekend activity, or a mate to marry, or a candidate to vote for.

Make no mistake, artificial intelligence will get very good at emulating our behavior. It will even tell us that it has feelings just like we do – likes, pleasures, disappointments. It is mastering the glossary of love and pain as we speak.

But, in reality, no, it cannot feel.

Don’t fall for Klara

Let’s be clear: don’t fall in love with Klara. Whatever we – humanity – do, we must not fall for Klara. Or Siri, or Google.

There are many reasons humanity must get smarter about AI. UC Berkeley Professor of Computer Science and AI expert Stuart Russell argues many of them: the risks of emotional manipulation (as in neuromarketing); surveillance; loss of privacy; the dangers of hidden goals that need not benefit us; lost jobs; deadly weapons; extinction by a superior species that we let grow among us – just ask the gorillas how that might go!

But to me this is the killer issue: AI is definitely not human – don’t ever be confused. It can’t be. It lacks the two key components of humanity: Human Intelligence and Human Consciousness.

The human intelligence part is a long discussion for another post, but I want to spend a moment on the consciousness portion.

Klara can claim to know emotions. She can name them. But emotions require consciousness, i.e., awareness of subjective experience: the blood-thumping of anger, the feeling of the color blue, the ecstasy of love.

{Yes, there are theories (e.g., Integrated Information Theory) that claim consciousness can also emerge in other complex, feedback-based systems, such as silicon or other circuits, but that does not answer the question either! Their experience cannot be the same as the human wetware’s perception. (Another long post here.)}

Klara’s emotional world is a simulation

To today’s AI, though, all talk of emotions is a simulation. And a simulation is not the real thing. As Professor Russell puts it:

“The computer does not get wet when you simulate a storm system!”

It does not shake when you simulate an earthquake, and its electronic temperature does not rise when you simulate a fight.

Klara’s emotional world is a simulation. A particularly good one, with a specific aim – in this case, to observe Josie, and make sure she is “happy”. But Klara can only guess at happiness based on learned, algorithmic interpretations of Josie’s behavior.
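To make that “algorithmic interpretation” concrete, here is a deliberately crude, entirely hypothetical sketch – my own illustration, not anything from the novel or from any real product. The machine maps observed cues (a frown, a quickened stride) to an emotion label it has learned to associate with them. A label comes out; nothing is felt.

```python
# A toy sketch, entirely hypothetical: an "artificial friend" guessing an
# emotion label from observed behavioral cues. The output is only a label
# produced by pattern-matching; nothing in this program feels anything.

# Hypothetical cue-to-label associations the system might have "learned".
LEARNED_CUES = {
    "frown": "distressed",
    "pursed_lips": "distressed",
    "steps_away": "distressed",
    "smile": "happy",
    "quickened_stride": "happy",
}

def guess_emotion(observed_cues):
    """Return the emotion label most supported by the observed cues."""
    votes = {}
    for cue in observed_cues:
        label = LEARNED_CUES.get(cue)
        if label:
            votes[label] = votes.get(label, 0) + 1
    # When no learned cue matches, the guess defaults to "neutral".
    return max(votes, key=votes.get) if votes else "neutral"

print(guess_emotion(["frown", "steps_away"]))        # -> distressed
print(guess_emotion(["smile", "quickened_stride"]))  # -> happy
```

Swap the lookup table for the most sophisticated learned model you like; the shape of the answer stays the same: behavior in, label out.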

Many of my friends have read Klara and the Sun, and have come away with feelings of love and connection. Even some book critics believe this is a book about the power of love – as Klara bets on her Sun-god to save Josie.

To me it’s a huge, masterfully rung, complex and layered alarm bell. A story about being human versus being non-human; about the black hole on the road to Josie’s sterile, minimal, solitary world – where the human richness of experience has been engineered out of her brain by technology and culture. Is this our ambition?

We come to accept this simulated humanity of AI little by little – from Facebook ads, to Google listings, to Siri learning to understand our spoken language. I feel like we are being lowered into the pot, in all our lobster-like glory, without realizing how the heat will rise until it kills us.

Have we so given up on humanity that we already accept inevitable AI superiority? Google knows? Facebook cares? Alexa understands? Without even trying to grasp how it all works, without trying to compete, without stretching our own intelligence? The day we accept the black box of an Amazon recommendation, we are on the path to losing our agency.

Control the temperature!

Come on, people, if we like the warmth and comfort of the boiling pot, let’s at least make sure that WE control the temperature!

We must control the off switch. 

So, I don’t worry about Artificial Intelligence as much – it will do its thing. But I do worry about our resolve to shore up our Human Intelligence and make wise choices. We all need to practice asking AI questions – in a hurry: Is this good for me now? In the long run? Why? How exactly does it make its decisions? Show me.

Personally, while I vow to develop my AI-quotient, I will dig in my heels in favor of flesh-and-blood humans: imperfect, frustrating, confusing, yet indomitable. (I hope.)

Photo by Jason Leung on Unsplash
