Opinion
Should We Regulate Digital Platforms?
—
An important lesson we have learned from the rise of digital platforms is that they disrupt industries by blurring the boundaries between them
By Vasant Dhar
In November 2016, I invited Phil Howard and Gillian Bolsover to guest edit this special issue of Big Data on “Computational Propaganda.” I am delighted by the collection of academic papers they have curated on the subject, which is front and center at the moment, as politicians, journalists, and scholars grapple with how, and to what extent, social-media platforms were used to manipulate the presidential election of the world's oldest democracy.
While the details were murky last year, it was becoming evident to us at Big Data that we were witnessing, for the first time, the vulnerability of social media to political mass manipulation. Despite a large-scale scientific study* it conducted in 2012 demonstrating that users' moods could be manipulated via the messages fed to them, Facebook continued to maintain a position of “algorithmic neutrality” on content. Other media reports had suggested that most people find it difficult to distinguish humans from bots on Twitter. While the evidence suggested that social-media platforms were ripe for manipulation, the platforms did not seem particularly worried about the potential consequences of these research findings. A year later, however, with mounting evidence of manipulation, it is time to ask whether we need to make changes to preempt potentially worse consequences, namely, digital platforms being used as weapons by malicious actors. Common sense suggests that we do. While there are no easy answers, we need to start asking whether our liberal free-market values are vulnerable to rogue governments or groups able to sow havoc with minimal financial resources. We need to start finding answers.
Read the full article as published by Big Data.
___
Vasant Dhar is a Professor of Information Systems.