Research Highlights
Harmful Content: The Role of Internet Platform Companies In Fighting Terrorist Incitement and Politically Motivated Disinformation
The internet does a lot of good for the estimated 3.5 billion people who use it today. But increasingly, harmful content contaminates the web, threatening democratic institutions and human rights around the globe.
A new study from the NYU Stern Center for Business and Human Rights, "Harmful Content: The Role of Internet Platform Companies In Fighting Terrorist Incitement and Politically Motivated Disinformation," examines the steps Google, Facebook, Twitter, and Microsoft should take to curb such content. Written and published by the Center, the report grows out of discussions among members of the World Economic Forum Global Future Council on Human Rights, which is co-chaired by Professor Michael Posner, the Center’s director.
The study focuses on two types of dangerous online content: terrorist incitement and politically motivated disinformation. Though emanating from different sources, both seek to distort the truth, discredit liberal institutions, and, in the words of the European Parliament, undermine “democratic values, human rights, and the rule of law”.
Internet companies have resisted government content regulation, justifiably worried that many states would use it to suppress dissenting views and undermine free speech online. The danger is not hypothetical: a number of governments have already blocked Facebook, Twitter, and Google. In the absence of government regulation, however, it is incumbent on the major platforms to assume a more active self-governance role. Corporate leaders should take responsibility for protecting core societal interests, such as combating political disinformation and terrorist incitement, while elevating journalistic reporting and civil discourse.
The study concludes with seven recommendations, including:
- Enhance company governance: Conduct across-the-board internal assessments of vulnerabilities to terrorist content and political disinformation—and then act on the results.
- Increase human oversight: Devote a sufficient number of people to monitoring and evaluating content, while also giving users more tools to report harmful material.
- Identify government’s role: Promote media literacy—a mission government can take on without threatening free speech.