Opinion
Why Algorithms Alone Can't Make the Internet Safe
Human judgment and oversight of the Internet platforms are essential.
By Michael Posner
Major Internet companies today find themselves in the crosshairs of European regulators and face increased public distrust in Europe and the U.S. One reason is that the full scope of harmful content online has become more evident. Until recently, Facebook, Google, and Twitter sought to downplay the magnitude of the problem, arguing that the amount of hate speech and political disinformation online was relatively tiny, a minor inconvenience compared with the overall volume and value of Internet communications.
Now, as the heat rises, the Internet platforms have begun to acknowledge a measure of responsibility for the deleterious content on their sites, a positive first step. Yet in responding, their first instinct is to revert to form, assuming that their engineers will build improved artificial-intelligence tools that can deal effectively with these challenges. Testifying about hate speech online before Congress in April, Facebook CEO Mark Zuckerberg reflected Silicon Valley’s reverence for machine-based solutions. “Over a five- to 10-year period,” he said, “we will have AI tools that can get into some of the linguistic nuances of different types of content to be more accurate in flagging things for our systems.”
In the coming days, researchers at Aalto University in Finland, along with counterparts at the University of Padua in Italy, will present a new study at a workshop on Artificial Intelligence and Security. As part of their research, they successfully evaded seven different algorithms designed to block hate speech. They concluded that all of the existing algorithms used to detect hate speech are vulnerable to easy manipulation, contributing to, rather than solving, the problem. Their work is part of a project called Deception Detection via Text Analysis.
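A toy example helps show why such manipulation is easy. The Python sketch below is purely illustrative and is not drawn from the study: the blocklist entry and function names are made up. It shows how inserting an invisible zero-width character into a flagged word lets it slip past a filter that matches tokens exactly, even though the word reads identically to a human; the published attacks apply comparable perturbations against trained classifiers rather than simple blocklists.

```python
# Illustrative sketch only (not the researchers' code): why exact-match
# content filtering is brittle against trivial text perturbation.

BLOCKLIST = {"hateword"}  # hypothetical stand-in for a real lexicon of flagged terms

def naive_filter(text: str) -> bool:
    """Flag text if any whitespace-separated token matches the blocklist exactly."""
    return any(token.lower() in BLOCKLIST for token in text.split())

def perturb(word: str) -> str:
    """Insert a zero-width space mid-word: invisible to readers, fatal to exact matching."""
    mid = len(word) // 2
    return word[:mid] + "\u200b" + word[mid:]

print(naive_filter("some hateword here"))                      # True  -> flagged
print(naive_filter("some " + perturb("hateword") + " here"))   # False -> slips through
```

The deeper point the study makes is that defeating one such trick only invites the next: an attacker who knows the detector's behavior can keep perturbing text until it passes, which is why the authors argue these systems need human oversight rather than algorithmic fixes alone.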
Read the full Forbes article.
___
Michael Posner is a Professor of Business and Society and Director of the NYU Stern Center for Business and Human Rights.