“Strict Moderation?” The Impact of Increased Moderation on Parler Content and User Behavior
Date: 2022-05-18
Author: Kumarswamy, Nihal (ORCID: 0000-0002-0900-7634)
Abstract
Social media platforms have brought together people of different backgrounds, ethnicities, races, and genders to share ideas and opinions and to discuss news and other social events. Unfortunately, these platforms have also been a safe haven for abusive users who harass or bully other users, or who spread misinformation and disinformation. Social media platforms have a strong incentive to police these abusive users and keep them in check so that genuine users can continue to use the platform, and they employ several different content moderation techniques to do so. These techniques vary across platforms; Parler, for example, believes in applying the least restrictive moderation policies and keeping its discussion spaces open for its users. These policies were exploited by several of the people responsible for the 2021 US Capitol Riots.
On January 12, 2021, Parler, a social media platform popular among conservative users, was removed from the Apple App Store, the Google Play Store, and Amazon Web Services. The removal was attributed to Parler's refusal to take down posts inciting violence following the 2021 US Capitol Riots. To return to the app stores, Parler would have to modify its moderation policies drastically. Shortly before the platform was banned from Amazon Web Services, a Twitter user, donk_enby, published frameworks and a methodology for scraping Parler through its open API. Studies such as Aliapoulios et al. used this opportunity to collect a dataset of Parler posts and user information. After a month of downtime, Parler came back online with a new cloud service provider and a new set of user guidelines. Our study examines the moderation changes made by Parler and looks for noticeable differences in user behavior.
Using Google's Perspective API, we observe a decrease in the toxicity of content shared in posts, with similar trends in the other labels: identity attack, insult, severe toxicity, profanity, and threat. We identify the most popular topics discussed on Parler and compare them to uncover changes in the topics of discussion. Finally, using the Media Bias/Fact Check service, we check the factuality of a sample of the news websites being shared. We find an increase in the factuality of the news sites being shared, along with a decrease in the number of questionable sources and conspiracy or pseudoscience sources.
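As a sketch of how per-post toxicity scores of this kind can be obtained, the snippet below calls the publicly documented Perspective API comments:analyze endpoint. The six requested attributes match the labels listed above, but whether the study used exactly this request shape, and the API key handling, are assumptions for illustration.

```python
import json
from urllib import request

# Attribute names documented by the Perspective API; we assume the study
# requested the six labels discussed in the abstract.
ATTRIBUTES = ["TOXICITY", "SEVERE_TOXICITY", "IDENTITY_ATTACK",
              "INSULT", "PROFANITY", "THREAT"]

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key={key}")

def build_analyze_request(text):
    """Build the JSON body for a comments:analyze call."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
    }

def score_post(text, api_key):
    """POST one post to the Perspective API; return label -> score in [0, 1]."""
    body = json.dumps(build_analyze_request(text)).encode("utf-8")
    req = request.Request(API_URL.format(key=api_key), data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        result = json.load(resp)
    # summaryScore.value is the API's overall probability for each attribute.
    return {attr: result["attributeScores"][attr]["summaryScore"]["value"]
            for attr in ATTRIBUTES}
```

Scoring each post this way before and after the policy change, then comparing the score distributions, is one straightforward way to quantify the shifts the study reports.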