Let’s work through this post by Sebastian Thrun. All of it.

You’re at the wheel, tired. You close your eyes, drift from your lane. This time you are lucky. You awaken, scared. If you are smart, you won’t drive again when you are about to fall asleep.

Well … people don’t always have a choice about this kind of thing. I mean, sometimes people drive when they’re about to fall asleep because they’ve been working a really long time and driving is the only way they can get home. But never mind. Proceed.

Through your mistakes, you learn. But other drivers won’t learn from your mistakes. They have to make the same mistakes by themselves — risking other people’s lives.

This is true. Also, when I learned to walk, to read, to hit a forehand, to drive a manual-transmission car, no one else but me learned from my mistakes. This seems to be how learning works, in general. However, some of the people who taught me these things explained them to me in ways that helped me to avoid mistakes; and often they were drawing on their own experience. People may even have told me to load up on caffeine before driving late at night. This kind of thing happens a lot among humans — the sharing of knowledge and experience.

Not so the self-driving car. When it makes a mistake, all the other cars learn from it, courtesy of the people programming them. The first time a self-driving car encountered a couch on the highway, it didn’t know what to do and the human safety driver had to take over. But just a few days later, the software of all cars was adjusted to handle such a situation. The difference? All self-driving cars learn from this mistake, not just one. Including future, “unborn” cars.

Okay, so the cars learn … but I guess the people in the cars don’t learn anything.

When it comes to artificial intelligence (AI), computers learn faster than people.

I don’t understand what “when it comes to” means in this sentence, but “Some computers learn some things faster than some people” would be closer to a true statement. Let’s stick with self-driving cars for a moment: you and I have no trouble discerning and avoiding a pothole, but Google’s cars can’t do that at all. You and I can tell when a policeman on the side of the road is signaling for us to slow down or stop, and can tell whether that’s a big rock in the road or just a piece of cardboard, but Google’s cars are clueless.

The Gutenberg Bible is a beautiful early example of a technology that helped humans distribute information from brain to brain much more efficiently. AI in machines like the self-driving car is the Gutenberg Bible, on steroids.

“On steroids”?

The learning speed of AI is immense, and not just for self-driving cars. Similar revolutions are happening in fields as diverse as medical diagnostics, investing, and online information access.

I wonder what simple, everyday tasks those systems are unable to perform.

Because machines can learn faster than people, it would seem just a matter of time before we will be outranked by them.

“Outranked”?

Today, about 75 percent of the United States workforce is employed in offices — and most of this work will be taken away by AI systems. A single lawyer or accountant or secretary will soon be 100 times as effective with a good AI system, which means we’ll need fewer lawyers, accountants, and secretaries.

What do you mean by “effective”?

It’s the digital equivalent of the farmers who replaced 100 field hands with a tractor and plow. Those who thrive will be the ones who can make artificial intelligence give them superhuman capabilities.

“Make artificial intelligence give them superhuman capabilities”? How?

But if people become so very effective on the job, you need fewer of them, which means many more people will be left behind.

“Left behind” in what way? Left behind to die on the side of the road? Or what?

That places a lot of pressure on us to keep up, to get lifelong training for the skills necessary to play a role.

“Lifelong training”? Perhaps via those MOOCs that have been working so well? And what does “play a role” mean? The role of making artificial intelligence give me superhuman capabilities?

The ironic thing is that with the effectiveness of these coming technologies we could all work one or two hours a day and still retain today’s standard of living.

How? No, seriously, how would that play out? How do I, in my job, get to “one or two hours a day”? How would my doctor do it? How about a plumber? I’m not asking for a detailed roadmap of the future, but just sketch out a path, dude. Otherwise I might think you’re just talking through your artificially intelligent hat. Also, do you know what “ironic” means?

But when there are fewer jobs — in some places the chances are smaller of landing a position at Walmart than gaining admission to Harvard —

That’s called lying with statistics: comparing the flood of casual, repeated applications a single new store receives with a university’s once-a-year admissions pool tells you nothing about anyone’s actual odds of finding work. But never mind, keep going.

— one way to stay employed is to work even harder. So we see people working more, not less.

If by “people” you mean “Americans,” then that is probably true — but these things have been highly variable throughout history. And anyway, how does “people working more” fit with your picture of the coming future?

Get ready for unprecedented times.

An evergreen remark, that one is.

We need to prepare for a world in which fewer and fewer people can make meaningful contributions.

Meaningful contributions to what?

Only a small group will command technology and command AI.

What do you mean by “command”? Does a really good plumber “command technology”? If not, why not? How important is AI in comparison to other technologies, like, for instance, farming?

What this will mean for all of us, I don’t know.

Finally, an honest and useful comment. Thrun doesn’t know anything about what he was asked to comment on, but that didn’t stop him from extruding a good deal of incoherent vapidity, nor did it stop an editor at Pacific Standard from presenting it to the world.

2 Comments

  1. "The first time a self-driving car encountered a couch on the highway, it didn’t know what to do and the human safety driver had to take over."

    That human safety driver must have some pretty sophisticated programming.

  2. Ack. So much wrongness …

    Computers neither know nor learn — even those that possess rudimentary AI or self-programming. It’s bone-crushingly obvious that information is exchanged in digital environments at the speed of whatever transmission medium is used, and that networks connect millions of devices simultaneously, but that is not at all the same thing as human learning and knowledge, nor nearly as flexible. Rather, it’s merely engineering/programming. Further, computers are good at crunching through lots of data and/or information but not at all good at performing, say, the tasks a typical fast food worker does. Of course, innovators are hard at work building robotics to do just that (disenfranchising workers the same way ATMs dispense with bank tellers), but the effort to design machines for every task performed in the material world, which is to say, outside of a processor, would be gargantuan compared to how reliably humans are already equipped to handle human needs. Human skill yoked to banal economic efficiency is a misapplication.

    And even further, the amplification of effort through tool use does not confer superhuman capabilities. Using a steam shovel (dated, ought to be hydraulics now) does not turn the operator into Superman, able to lift thousands of pounds with his own muscles. Rather, he’s controlling a tool that does the lifting. Big deal. Same with vehicles and computers, which don’t make humans run or think faster. Lastly, there is no such thing as lifelong training when the employment landscape is in constant flux. Some jobs are undoubtedly durable; others, not so much. Same as it ever was, except that the speed of change today is so discontinuous with the near past that a mere decade is enough for some skills to be rendered moribund.
