Social networking platforms have long faced the difficult task of balancing the desire to promote freedom of expression with the need to prevent abuse and harassment on their sites. One of social media's greatest challenges is to make platforms safe enough that users are not constantly bombarded with offensive content and threats (a recent Pew Research Center study reported that 40% of Internet users have experienced harassment), yet open enough to foster discussion of complex, and sometimes controversial, topics.

This past year, certain companies have made some noteworthy changes. Perhaps most notably, Twitter, long known for its relatively permissive stance on content regulation, introduced automatic filtering and strengthened the language of its policies on threats. Reddit, meanwhile, long regarded as the "Wild West" of the Internet, released a controversial new anti-harassment policy and took unprecedented proactive steps to regulate content by shutting down some of the site's more controversial forums.

According to some, these changes came in response to several recent, highly publicized targeted threat campaigns, such as "Gamergate," a campaign against female gaming journalists organized and perpetrated over Twitter, Reddit and other social media platforms. Below we summarize how some of the major social networking platforms are addressing these difficult issues.

Facebook

Facebook's anti-harassment policy and community standards have remained relatively stable over time. However, in March 2015, Facebook released a redesign of its Community Standards page to better explain its policies and make the page easier to navigate. This was largely a cosmetic change.

According to Monika Bickert, Facebook’s head of global policy management, “We’re just trying to explain what we do more clearly.”

The rules of conduct are now grouped into the following four categories:

  1. “Helping to keep you safe” details the prohibition of bullying and harassment, direct threats, criminal activity, etc.
  2. “Encouraging respectful behavior” discusses the prohibition of nudity, hate speech and graphic content.
  3. “Keeping your account and personal information secure” lays out Facebook’s policy on fraud and spam.
  4. “Protecting your intellectual property” encourages users to only post content to which they own the rights.

Instagram

After a series of highly publicized censorship battles, Instagram updated its community standards page in April 2015 to clarify its policies. These more-detailed standards for appropriate images posted to the site are aimed at curbing nudity, pornography and harassment.

According to Nicky Jackson Colaco, director of public policy, “In the old guidelines, we would say ‘don’t be mean.’ Now we’re actively saying you can’t harass people. The language is just stronger.”

The old guidelines comprised a relatively simple list of do's and don'ts—for example, the policy regarding abuse and harassment fell under Don't #5: "Don't be rude." By contrast, the new guidelines are much more fleshed out. They clearly state, "By using Instagram, you agree to these guidelines and our Terms of Use. We're committed to these guidelines and we hope you are too. Overstepping these boundaries may result in a disabled account."

According to Jackson Colaco, there was no one incident that triggered Instagram’s decision. Rather, the changes were catalyzed by continuous user complaints and confusion regarding the lack of clarity in content regulation. In policing content, Instagram has always relied on users to flag inappropriate content rather than actively patrolling the site for offensive material.

The language of the new guidelines now details several explicit rules, including the following:

  1. Nudity. Images of nudity and of an explicitly sexual nature are prohibited. However, Instagram makes an exception for “photos of post-mastectomy scarring and women actively breastfeeding.”
  2. Illegal activity. Offering sexual services and buying or selling drugs (as well as promoting recreational use) are prohibited. There is a zero-tolerance policy for sexual images of minors and revenge porn (including threats of posting revenge porn).
  3. Harassment. “We remove content that contains credible threats or hate speech, content that targets private individuals to degrade or shame them, personal information meant to blackmail or harass someone, and repeated unwanted messages…We carefully review reports of threats and consider many things when determining whether a threat is credible.”

Twitter

Twitter has made two major rounds of changes to its content regulation policies in the past year. These changes are especially salient given Twitter's previously permissive approach to content regulation.

In December 2014, Twitter announced a set of new tools to help users deal with harassment and unwanted messages. These tools allow users to more easily flag abuse and describe their reasons for blocking or reporting a Twitter account in more specific terms. While in the past Twitter had allowed users to report spam, the new tools allow users to report harassment, impersonations, self-harm, suicide and, perhaps most interestingly, harassment on behalf of others.

Within "harassment," Twitter allows the user to choose among multiple categories: "being disrespectful or offensive," "harassing me" or "threatening violence or physical harm." The new tools have also been designed to be more mobile-friendly.

Twitter also released a new blocked accounts page during this round of changes. This feature allows users to more easily manage the list of Twitter accounts they have blocked (rather than relying on third-party apps, as many did before). The company also changed how the blocking system operates. Before, blocked users could still tweet at and respond to the blocker; they simply could not follow that person. Now, blocked accounts cannot view the blocker's profile at all.

In April 2015, Twitter further cracked down on abuse and unveiled a new filter designed to automatically prevent users from seeing harassing and violent messages. For the first time, all users’ notifications will be filtered for abusive content. This change came shortly after an internal memo from CEO Dick Costolo leaked, in which he remarked, “We suck at dealing with abuse and trolls on the platform, and we’ve sucked at it for years.”

The new filter will be automatically turned on for all users and cannot be turned off. According to Shreyas Doshi, head of product management, “This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of the account itself, and the similarity of the Tweet to other content that our safety team has in the past independently determined to be abusive.”

Beyond the filter, Twitter also made two changes to its harassment policies. First, the rules against threatening language have been strengthened. While “direct, specific threats of violence against others” were always banned, that prohibition is now much broader and includes “threats of violence against others or promot[ing] violence against others.”

Second, users who breach the policies will now face heavier sanctions. Previously, the only options were to either ban an account completely or take no action (resulting in much of the threatening language not being sanctioned at all). Now, Twitter will begin to impose temporary suspensions for users who violate the rules but whose violation does not warrant a full ban.

Moreover, since Costolo’s statements, Twitter has tripled the size of its team handling abuse reports and added rules prohibiting revenge porn.

Reddit

In March 2015, Reddit prohibited the posting of several types of content, including anything copyrighted or confidential, violent personalized images and unauthorized photos or videos of nude or sexually excited subjects.

Two months later, Reddit unveiled a controversial new anti-harassment policy that represented a significant shift from Reddit’s long-time reputation as an online free-for-all. The company announced that it was updating its policies to explicitly ban harassment against users. Some found this move surprising, given Reddit’s laissez-faire reputation and the wide range of subject matter and tone it had previously allowed to proliferate on its site (for example, Reddit only expressly banned sexually explicit content involving minors three years ago after much negative PR).

In a blog post titled “promote ideas, protect people,” Reddit announced it would be prohibiting “attacks and harassment of individuals” through the platform. According to Reddit’s former CEO Ellen Pao, “We’ve heard a lot of complaints and found that even our existing users were unhappy with the content on the site.”

In March 2015, Reddit also moved to ban the posting of nude photos without the subjects’ consent (i.e., revenge porn). In discussing the changes in content regulation, Alexis Ohanian, executive chairman, said, “Revenge porn didn’t exist in 2005. Smartphones didn’t really exist in 2005…we’re taking the standards we had 10 years ago and bringing them up to speed for 2015.” Interestingly, rather than actively policing the site, Reddit will rely on members to report offensive material to moderators.

Reddit's new policy defines harassment as: "systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that Reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them."

As a result of the new policies, Reddit permanently removed five subreddits (forums) from the site: two dedicated to fat-shaming, one to racism, one to transphobia and one to harassing members of a progressive website. Apart from the expected criticisms of censorship, some commentators have condemned Reddit for the seemingly arbitrary selection of these specific subreddits. Even though these five were removed, many other offensive subreddits remain, including a violently anti-black subreddit and one dedicated to suggestive pictures of minors.

Google

In June 2015, Google took a major step in the battle against revenge porn, a form of online harassment that involves publishing private, sexually explicit photos of someone without that person’s consent. Adding to the damage, such photos may appear in Google search results for the person’s name. Google has now announced that it will remove such images from search results when the subject of the photo requests it.

Amit Singhal, senior vice president of Google Search, stated, “This is a narrow and limited policy, similar to how we treat removal requests for other highly sensitive personal information, such as bank account numbers and signatures, that may surface in our search results.” Some have questioned, though, why it took so long for Google to treat private sexual information similarly to other private information.

As social media grows up and becomes firmly ensconced in the mainstream, it is not surprising to see the major players striving to make their platforms safer and more comfortable for the majority of users. It will be interesting, though, to watch as the industry continues to wrestle with the challenge of instituting these new standards without overly restricting the free flow of content and ideas that made social media so appealing in the first place.