Why we've entered AI hell and also why robots oughtn't umm when they speak

Before I begin, Mike Godwin put this in some much-needed scope:

So, before you read anything more… I agree. I’m not so much frustrated with Google as I am frustrated with the way we treat human-machine trust in general.

TLDR:

  • Duplex favors rich users who can afford to outsource speaking to a human;
  • Duplex enables the sorts of people who think they are too important to waste time talking with a mere maître d’, and we need to not enable those sorts of people;
  • By proxy, and BEAR WITH ME,
    • Duplex targets low-income individuals and folks who can’t afford to ignore all other humans;
  • This is clearly just a spam-bot wet dream
    • and anyone who says differently is assblowingly wrong;
  • Duplex is an abomination unto the almighty but it is too late and we are doomed

Furthermore:

Google introduced Duplex at I/O this past week, and — please, dear reader — let me emphasize first how blown away I was with this new technology.

To those who do not click links or read: Duplex is a service that conducts phonecalls on your behalf, scheduling haircuts or booking reservations using a human-like voice. It sounds like a human. When I discussed this article with others, some people misidentified which speaker on the recorded call was the robot and which was the human. That’s how good it is.

okay but just to be clear,

I work in the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory. Robots doing human things is what we do. Robots walking, robots driving, robots opening doors, robots being an arm CONTROLLED USING THOUGHT, robots drawing pictures… (It goes without saying that the views expressed herein do not reflect the views of my employer, but I’ll say it anyway.)

Here’s our cool stuff:

My point is that I’m not gun-shy about bots I SWEAR.

But Duplex is an abomination unto the almighty. Google’s socioeconomically-blind wax wings have flown us too close to the sun, and now we all have to fall and die of death I suppose

(Of note: I am not the first person to say that I feel this way.)

I have a lot of thinks about whyfor Duplex is a robot sin:

  • It was borne of some Googler’s laziness and unwillingness to speak on the phone with a human
    • and please, just talk to humans, it’s fine
  • It does not save humans time, it just saves the payer’s time at the expense of the target
    • listen to the ass-damn examples, the robot has synthetic dysfluencies (“ummmm”, “uhh”) which don’t convey meaning, they just waste someone’s time
  • Fucking send a fucking email for fuck’s sake
    • or pick up the phone, what the frog
      • it doesn’t take that long oh my GOAT

But I hope the most salient takeaway from this rant is this:

Robots should not pretend to be humans.

This isn’t codgy harumphing. This is optimistic, hopeful, hells-yes-let’s-do-this-thing whooping. Robots are amazing! And it’s turning out that machine learning is making robots better than humans at a lot of things. And this is amazing, and we should celebrate by making life easier and cooler!

Machines have standardized interfaces. When two machines need to speak to each other, they usually know exactly how to do it. Computer nerds spend a lot of time designing these standards so that — usually — they can communicate effectively and quickly. Not always! But usually. Duplex subverts that communication efficiency by mandating that some user has to pick up a phone.

https://xkcd.com/927/
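To make that contrast concrete, here is a minimal sketch of what a machine-to-machine reservation could look like as structured data; the field names and the booking endpoint mentioned in the comments are invented for illustration, not any real API:

```python
import json

# A reservation expressed as structured data instead of a spoken phonecall.
# Every field has an agreed-upon meaning, so neither machine has to parse
# small talk or synthetic "umm"s. (This schema is hypothetical.)
reservation = {
    "party_size": 4,
    "date": "2018-05-12",
    "time": "19:00",
    "name": "A. Human",
    "phone": "+1-555-0100",
}

payload = json.dumps(reservation)
print(payload)

# In a real machine-to-machine exchange, this payload would be POSTed to the
# restaurant's (hypothetical) booking endpoint, something like
# https://example.com/api/reservations, and answered with a structured
# confirmation: no voice synthesis, no hold music, no wasted human minutes.
```

An exchange like that takes milliseconds, and nobody on either end has to say “ummmm.”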

Machines move really really fast. When two machines need to interact, they do it fast. Phonecalls using voice are slow. Don’t do this.

I mean honestly, it’s like

Machines are better than people at things, and we should celebrate this. If I need to add two numbers, I punch them into a calculator. “Human-ness” is not a factor that I optimize for when adding numbers. I don’t want my calculator to sometimes make mistakes so I feel more comfortable.

Duplex wasting a listener’s time with dysfluencies demonstrates that Google wants you to think that it’s a human on the line. But this isn’t what we should care about! What I want to optimize for is efficacy of conversation. If I must have an automated call, then yes, by all means use voice-synthesis to make it pleasant to listen to. Make it understand my natural language and reply in a meaningful way. But don’t make it dumber to sound like a human!

Making Duplex sound like a stuttering human is optimizing for humanness. That’s like computer keyboards being soft and squishy and smelling like skin. Don’t, for the love of all that is good and holy in this world.

But this is serious, because

we’re not talking about making the world a better place here. And when technology doesn’t make the world a better place, something has gone wrong.

Software developers often talk about attack surface area: this is the exposure you create that makes you vulnerable to folks trying to take advantage of your stuff.

If you have one computer, you are a little vulnerable. If you have two computers, you’re more vulnerable. That’s more surface area for attack.

Duplex expands society’s vulnerable surface area immensely.

I don’t see a clear use-case for Duplex. Phoning in reservations to small businesses that don’t have a website and don’t use other services for this? That seems perhaps not that useful, and honestly not even that common. And what does this save you? The user saves a handful of minutes so they don’t have to call anyone anymore. But this costs the small business the same amount of time, and costs them even more when your ass-shit robot starts “umm”ing artificially.

But Google sells ads. You may have received a phonecall recently trying to sell you something. An ad phonecall! I can’t help but think — and, truly dear reader, I have tried elsewise — that armed with a Duplex subscription, it’s now cheaper, faster, more automated, and harder to detect when you get one of these calls. Please! Prove me wrong. I would love to not feel this way. But I don’t see any way that this gets used for greater good.

I don’t think that Google was consciously TRYING to deceive users, but that’s part of what bothers me. The designers of this program clearly didn’t think about the deceptive implications, or at least they’ve given no indication that they did. And that’s frustrating, because we want to, and in some regards need to, trust Google.

and there’s no going back

This rant isn’t meant to pick on Google, but it is true that Google often overlooks the value of individual humans in pursuit of ad revenue.

And we need to be cautious: we, as robot designers; we, as the trainers of learning machines; we, as the consumers of this technology.

We, as the inhabitants of the 21st century, have an immense responsibility to our posterity to point our technological weapons at disease; at hunger; at conflict. But for fuck’s sake, please do not point technological weapons directly at your eyes or ears. And absolutely do not point them at mine.

Written on May 9, 2018
Comments? Let's chat on Mastodon (or on Twitter if you absolutely must...)