Opinion
Elon Musk Fires Back at Harvard Psychologist Steven Pinker Over the Future of Artificial Intelligence
By Melissa Schilling
Elon Musk has frequently expressed concerns about artificial intelligence (AI), warning, for example, that robots will be able to do everything better than humans and that machines could start a war. Harvard cognitive psychologist Steven Pinker challenged Musk's perspective this week in Episode 296 of Geek's Guide to the Galaxy, arguing that such concerns are as overblown as the dire predictions once made about the Y2K bug. He also questioned whether Musk's concerns were authentic, saying, "If Elon Musk was really serious about the AI threat he'd stop building those self-driving cars, which are the first kind of advanced AI that we're going to see," and adding later, "Hypocritically he [Musk] warns about AI but he's the world's most energetic purveyor of it."
Musk fired back with a tweet:
Wow, if even Pinker doesn't understand the difference between functional/narrow AI (eg. car) and general AI, when the latter *literally* has a million times more compute power and an open-ended utility function, humanity is in deep trouble
-- Elon Musk (@elonmusk) February 27, 2018
Musk's point is that autonomous cars (for now, at least) use "weak" or "narrow" artificial intelligence: software programmed to follow rules to achieve a narrowly defined task. Some programs can learn to improve at their task (just as Cortana gets better at understanding your voice commands and Google's search algorithm returns better matches over time), but they do not get to change their objective; they have only the objective for which they were built. Weak artificial intelligence is all around us: it deploys your airbag in a car crash, it turns off the dryer when the clothes are dry enough, and more.
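To make the distinction concrete, here is a minimal, purely illustrative Python sketch of a fixed-objective learner. Nothing in it comes from any real product; the dryer scenario, function names, and data are invented for illustration. The point is that the program can get better at its one task, but the objective itself is hard-coded by the programmer:

# Illustrative sketch only: a "narrow" AI with a fixed, programmer-chosen
# objective. It can learn (tune a parameter from past data), but it can
# never decide to pursue a different goal.

def dryness_error(threshold, moisture, clothes_were_dry):
    # Fixed objective: predict "dry" exactly when the clothes really are dry.
    predicted_dry = moisture < threshold
    return 0 if predicted_dry == clothes_were_dry else 1

def learn_threshold(history, candidates):
    # "Learning" here is just picking the candidate threshold that minimizes
    # the fixed error over past drying cycles; the task itself never changes.
    return min(candidates,
               key=lambda t: sum(dryness_error(t, m, dry) for m, dry in history))

# Invented sensor logs: (moisture reading, were the clothes actually dry?)
history = [(0.9, False), (0.6, False), (0.3, True), (0.2, True), (0.1, True)]

best = learn_threshold(history, candidates=[0.2, 0.4, 0.6, 0.8])
print("Learned shutoff threshold:", best)  # better at its one task, nothing more

The "open-ended utility function" in Musk's tweet refers to the opposite case, in which the objective itself is not pinned down in advance.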
Read the full article as published by Inc.
___
Melissa Schilling is a Professor of Management and Organizations at New York University's Stern School of Business.