Going Beyond Intelligent Machines

Written by: Meeri Kim

Artificial intelligence (AI) has always fascinated Hollywood, with the technology mostly depicted in a sinister, ominous light. In The Terminator, the U.S. military commissions the development of Skynet, an AI defense system that becomes self-aware and ends up attacking the human race. HAL 9000, the spacecraft computer in 2001: A Space Odyssey, kills members of the crew when faced with the prospect of disconnection.

These days, artificial intelligence appears in the news more often than it does onscreen, as AI technology progresses at a rapid—and some say alarming—rate. In 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter that highlights the potential pitfalls of AI, such as losing control of the systems we build. And earlier this year, Bill Gates called AI “both promising and dangerous,” akin to nuclear weapons and nuclear energy.

Recently, the Science & Entertainment Exchange spent an evening discussing all of the ways—both good and bad—that artificial intelligence might play a role in our lives. The event “Going Beyond Intelligent Machines” took place in the Brentwood home of Janet and Jerry Zucker in March. More than 100 science and entertainment professionals gathered on the back patio for tacos and drinks, followed by engaging talks from two thought leaders in artificial intelligence. The evening ended with a lively Q&A with the audience.


Jen Golbeck

“I do artificial intelligence and lots of stuff on social media. I basically build all those creepy algorithms that take your data and find out other stuff about you,” said Jen Golbeck, a professor in the College of Information Studies at the University of Maryland and the first speaker of the evening. “So things like personality traits, political leanings… [even] things that will be true in the future.”

For one of her studies, Golbeck used Twitter data to predict, with 85 percent accuracy, whether a person entering treatment for alcoholism would still be sober after 90 days. She built an algorithm that analyzes all of the words a given individual has tweeted for signs of maladaptive coping mechanisms, a social circle that revolves around alcohol, and other factors that help determine whether they will end up relapsing.
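In practical terms, a system like this is a text classifier trained on labeled tweet histories. The sketch below is a minimal, hypothetical version in Python using scikit-learn; the toy data, bag-of-words features, and logistic-regression model are illustrative stand-ins, not a reproduction of Golbeck's actual algorithm or study data.

```python
# Minimal sketch of a tweet-based relapse-risk classifier, loosely in the
# spirit of Golbeck's study. All data and modeling choices are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data: each entry is one person's tweets joined into a single string,
# labeled 1 if they were still sober after 90 days, 0 if they relapsed.
tweet_histories = [
    "one day at a time grateful for my sponsor meeting tonight",
    "cant wait for happy hour with the crew beers on me",
    "rough week but went for a run and talked it out instead",
    "everyones at the bar again guess ill just have one",
]
sober_at_90_days = [1, 0, 1, 0]

# Bag-of-words features stand in for the linguistic signals Golbeck
# describes (coping language, an alcohol-centered social circle, etc.).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(tweet_histories, sober_at_90_days)

# Score a new person's tweet history: estimated probability of staying sober.
print(model.predict_proba(["friday night pub crawl whos in"])[0][1])
```

The larger point survives the simplification: a model like this only learns patterns from whatever labeled examples humans feed it, which is exactly why Golbeck worries about its output being treated as ground truth.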

Sounds neat, right? But Golbeck warns that this type of AI, in the wrong hands, could end up punishing people for crimes they have yet to commit.

“In a lot of places, if you get a DUI, they’ll give you the option of going to [Alcoholics Anonymous (AA)] instead of going to jail for your first one,” she said. “You can imagine a judge or a legislator saying, ‘We’ll just run the algorithm, and if the algorithm says that AA will work, they can go to AA. If it won’t, we’ll just send them to jail.’”

In her view, the dangers of artificial intelligence do not have anything to do with Skynet or The Matrix. Instead, she foresees people in power—heads of corporations, government, and the police—upholding the results of flawed algorithms as the truth, when in reality, AI only learns to do what we as humans have already done.


Ben Shneiderman

Both speakers emphasized that artificial intelligence should not make all of our decisions for us. Ben Shneiderman, who took the stage after Golbeck, brought up the tragedies involving the Boeing 737 Max in Indonesia and Ethiopia as a prime example. As the Founding Director of the Human-Computer Interaction Laboratory at the University of Maryland, he studies how people use technology with the goal of redesigning devices and software to make them better.

“What happened on the Lion Air plane [in Indonesia] was the sensor that told the system of the position of the plane gave a faulty reading suggesting it was rising. And therefore, the autopilot—in its wisdom, dare I say—pointed the nose down,” said Shneiderman. “They all went down and 189 people died. Pretty much what I would call a deadly AI from excessive automation.”

While some engineers believe that maximizing automation makes for a more user-friendly experience, excessive automation can also endanger human safety. People tend to think of supervisory control as a one-dimensional scale, running from complete human control at one end to complete machine automation at the other. Shneiderman argues instead that the goal should be to ensure human control even as the level of automation increases.
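One way to make that design principle concrete is to require automation to hand control back to people whenever its inputs cannot be trusted. The sketch below is a hypothetical illustration in Python; it is not Boeing's actual MCAS logic, and the sensor names, thresholds, and behaviors are invented for the example.

```python
# Hypothetical sketch of keeping humans in control as automation increases.
# This is NOT Boeing's MCAS logic; all thresholds and behaviors are invented.

DISAGREEMENT_LIMIT_DEG = 5.0  # assumed tolerance between redundant sensors
STALL_RISK_AOA_DEG = 15.0     # assumed angle-of-attack threshold

def trim_decision(aoa_left: float, aoa_right: float) -> str:
    """Decide whether automation may act on angle-of-attack readings."""
    if abs(aoa_left - aoa_right) > DISAGREEMENT_LIMIT_DEG:
        # Redundant sensors disagree, so neither can be trusted. Rather
        # than act on bad data, hand control back to the pilots.
        return "disengage automatic trim; alert crew"
    mean_aoa = (aoa_left + aoa_right) / 2
    if mean_aoa > STALL_RISK_AOA_DEG:
        # Even with good data, the automation proposes rather than imposes.
        return "suggest nose-down trim; await pilot confirmation"
    return "no action"

# A single faulty sensor now triggers a handoff to the humans instead of
# an automatic, repeated nose-down command.
print(trim_decision(aoa_left=22.0, aoa_right=4.0))
```

The design choice mirrors Shneiderman's framing: automation increases (cross-checking, alerting, suggesting) without removing the human's final authority.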

Inspired by the National Transportation Safety Board, he wants to establish a National Algorithms Safety Board that would investigate accidents caused by or related to artificial intelligence in order to make the technology safer. He also foresees a system in which corporations and other institutions would have to follow a set of rules when creating their algorithms in order to gain clearance to use them.

“The reason aviation is so safe in general is because of a very open system where people can report about problems that happen, and there are retrospective public investigations about them,” said Shneiderman. “We need to open up the AI community to make more of that. We need to have an independent and public oversight.”

Photos by Zach Dripps.


The statements and opinions expressed in this piece are those of the event participants and do not necessarily reflect the views of any organization or agency that provided support for this event or of the National Academies of Sciences, Engineering, and Medicine.