Twitter recently announced that it has updated its Twitter Rules “to clarify what we consider to be abusive behaviour and hateful conduct,” wrote Megan Cristina, Twitter’s head of Trust & Safety, in a Dec. 30, 2015 blog post.
According to Cristina’s blog post (“Fighting abuse to protect freedom of expression”), the policy changes are intended to more firmly emphasize that Twitter does not permit abusive behavior that is “intended to harass, intimidate, or use fear to silence another user’s voice.”
This announcement comes nearly a year after Twitter’s now former CEO, Dick Costolo, acknowledged that Twitter had been doing a poor job dealing with trolls and abusive and harassing behavior. Costolo said he was “ashamed” and took “full responsibility.”
According to the Twitter Rules, Twitter accounts found to be engaging in prohibited behaviors – such as users making violent threats, engaging in hateful conduct or wrongfully impersonating other parties – “may be temporarily locked and/or subject to permanent suspension.”
Twitter’s heightened emphasis on stopping bad actors will be a welcome change, as it is far too easy for people to engage in harmful behavior without much recourse. According to the blog post, Twitter has stepped up its investment in policy enforcement in recent months.
An increased investment is necessary if Twitter truly wants to crack down on harmful behavior, as Twitter can only be effective in curbing the bad actors if it has the personnel and resources in place.
Twitter, of course, receives countless reports of bad behavior. But the company has often been slow to process reports pertaining to blatantly abusive and harmful accounts and activity – probably unavoidable to an extent, given the volume of reports. Hopefully, the new rules and a greater commitment to enforcing them will improve matters going into 2016.
Nevertheless, bad actors will continue to troll or otherwise harass individuals and organizations on Twitter. And it will be interesting to see if this behavior increases, should Twitter expand its character limit, as rumored on Tuesday.
Reporting harassing and/or abusive behavior on Twitter
One common form of harassment and abuse on Twitter is through impersonation. Impersonation is defined in the Twitter Rules as behavior “intended to or [that] does mislead, confuse, or deceive others.”
Obviously, there are a number of very popular parody accounts on Twitter, and those are generally welcome, so long as they are not confusing or deceptive.
Sometimes, however, disgruntled parties will take impersonations to the level of harassment, specifically by engaging in a calculated targeted harassment campaign. A general example of this would be creating an account for someone and attributing extremely offensive or vulgar statements and characteristics to them (e.g. creating an account in someone’s name and holding out to the public that the person is racist or is a pedophile).
To report such harassment from an account (as opposed to an individual tweet, though that involves a similar process), the reporting party – whether the affected individual or company, or someone acting on that party’s behalf, such as an attorney – should click (or tap, if using the app) the “gear” symbol on the offending account’s profile and then select “Report.”
From there, the reporting party is given a list of options, including “They’re being abusive or harmful.” This selection produces another list of items, including “Pretending to be me or someone else.”
If reporting on one’s own behalf, Twitter just asks for information about the allegedly harmful behavior. If reporting on behalf of another (“Someone I represent”), such as a client, the reporting party will obviously need to identify that person.
Upon submission of the report, Twitter will follow up with an email (to the address affiliated with the reporting person’s account) asking for confirmation that he or she is authorized to represent the other individual. The reporting party (e.g., an attorney) can provide documentary proof, such as a driver’s license or passport or something else demonstrating that he or she has the authority to act on the other person’s behalf.
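The in-app flow described above is the primary route, but for context, Twitter’s REST API (v1.1) also exposes a basic programmatic reporting endpoint, POST users/report_spam, which is the closest API analogue to reporting an account. It covers spam/abuse reports only, not the full impersonation questionnaire described above. The sketch below simply assembles the POST parameters for that endpoint; OAuth credentials and the actual HTTP call are omitted, and the helper name is our own.

```python
# Sketch only: assembling parameters for Twitter's REST API v1.1
# "POST users/report_spam" endpoint. This endpoint reports an account
# as spam/abusive; it does not replicate the in-app impersonation flow.
# Authentication (OAuth 1.0a) is assumed and omitted here.

REPORT_ENDPOINT = "https://api.twitter.com/1.1/users/report_spam.json"

def build_report_params(screen_name, perform_block=True):
    """Assemble the POST parameters for reporting an account.

    screen_name may be given with or without the leading "@";
    perform_block also blocks the reported account when true.
    """
    return {
        "screen_name": screen_name.lstrip("@"),  # endpoint expects the bare handle
        "perform_block": "true" if perform_block else "false",
    }

params = build_report_params("@fake_account")
print(params["screen_name"])  # fake_account
```

In practice, these parameters would be sent as a signed POST request to the endpoint above; most reporting parties will instead use the in-app flow, which supports the representative-authorization step that the API does not.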
For a company or brand, a reporting party will need to provide the username of the company being impersonated as well as include a company email address. Of course, companies and brands complaining about impersonation might wish to report the account(s) as misuse of their trademarks, under Twitter’s Trademark Policy.
In this scenario, not only can Twitter limit the activity of the offending party, but it can also release the relevant username for the trademark holder’s use.
While it is too early to tell the extent of the progress Twitter is actually making in terms of enforcing its rules and policies, hopefully Twitter continues to show a true willingness to assist people and organizations being harmed through the website/app.
If it can be demonstrated that actual harassment or other abusive behavior is taking place on Twitter – not just negative comments (protected by free speech) – Twitter will take action and likely suspend the relevant accounts. One challenge to the company will continue to be having to wade through tons of “non-actionable” complaints to help the truly harmed parties.
While suspending accounts does not guarantee that bad actors will not simply create new accounts to continue their harassment, it is generally not worth the effort to keep creating new accounts (and potentially associated email addresses), so the harassment often stops there.
If not, further legal action can certainly be pursued and, in some circumstances, law enforcement might be willing to get involved.