Many of us are aware of the popular myth that ostriches believe burying their heads in the sand will make them invisible to predators. In other words, an “if I can’t see you, you can’t see me” approach. Even though scientists will tell you that this is not true, the approach seems to bear a resemblance to the online behaviour of some humans. Many of us have a false sense of security and anonymity when we sit behind a screen and publish or comment on social media, despite the fact that this content can often be viewed by anyone.
Many of us seem to forget that the consequences of online actions are no different from those in the brick-and-mortar world, and that we are liable for social media posts that amount to defamation, violation of privacy, hate speech, bullying, harassment or criminal activity. In addition, it has become easier to prove these claims because the evidence is published for everyone to read and share, with many of these posts going viral. Not only is it easy to prove that a harmful post was published, it is also evident how many people saw it and how they reacted to it – far more far-reaching and damaging than old-fashioned gossip!
Globally, courts have found people liable for harmful posts on social media even if they were not the authors of the post. A South African example is the case of Isparta v Richter, in which a person merely tagged in a defamatory post was found liable for it.
This raises the question: is there a legal obligation on a person or a company to monitor posts on their social media accounts?
South African statutory law has not been able to keep up with the rapid evolution of technology. The Electronic Communications and Transactions Act (“ECTA”), which governs electronic transactions, was promulgated in 2002, before Mark Zuckerberg launched Facebook in 2004. When these laws were written, the idea of social media, the vast impact it would have on the way members of society interact, and its legal implications could not have been comprehended.
These laws do, however, provide some guidance. According to section 78 of ECTA, there is no general obligation on any person or entity providing information system services (by way of connection, operation, access, transmission or processing of information systems or data, including the internet) to monitor data that it transmits or stores or to actively seek facts or circumstances indicating unlawful activity. However, this section needs to be read with section 75, which states that a service provider that provides a service that consists of the storage of data is not liable for damages arising from data stored, as long as it does not have actual knowledge that the data message or activity relating to the data message infringes on the rights of a third party and, upon receipt of a take-down notice, acts expeditiously to remove or disable access to the data.
Social media platforms would be deemed to be entities providing information system services or hosting services (the storage of data). Their obligations and liability regarding the monitoring and removal of content are relatively clear because, as “service providers”, the provisions of ECTA (and similar legislation globally) apply to them. Most popular social media platforms provide ways for users to report “abuse” or infringements, which the platform will consider and, if it deems necessary, remove, thereby avoiding liability.
However, are everyday users of these social media platforms also providing information system services by creating their own “mini” public platforms on these sites and, depending on individual privacy settings, allowing other people to post or comment on their profiles? It is plausible that the courts would apply the same principles as, for instance, those set out in ECTA to these personal social media platforms. In other words, there is no general obligation for individual users to monitor their social media platforms or to look for harmful posts, but there is an obligation to remove harmful posts when becoming aware of them or requested to remove them.
The above analysis is in line with rulings such as the Isparta case. Here, a woman published defamatory posts on Facebook about her husband’s ex-wife, tagging her husband. The court found the husband as liable as his wife because, although he was not the original author of the posts, he was aware of them, took no action to remove them from his profile and, in doing so, allowed his name to be coupled with that of the author. However, it is not clear how the court determined that the husband knew about the posts.
If a person’s “knowledge” of a post is one of the determining factors in establishing liability, how is it determined that a person had knowledge of the post? When a person is tagged or mentioned in a post, is it sufficient to assume that she/he had knowledge of it? Depending on a user’s settings, the user may receive email or push notifications of posts in which they were tagged or mentioned. However, not all users activate these alerts and may only become aware of the post once they log in. Even then, with hundreds of posts to sift through, it might be easy to miss, or not pay attention to, a post in which they were tagged or mentioned.
Notwithstanding these challenges, if someone was active on the social media platform after the harmful post was published, it would be difficult to claim that she/he was unaware of it.
Another determining factor for liability may be someone’s actions after they became aware of the harmful post. Did they remove the post or, where this is not possible, did they distance themselves from it?
It is important to use, manage and monitor social media accounts responsibly. This does not necessarily mean that these accounts need to be constantly watched, but it would be wise for social media users to pay attention to posts in which they are tagged or mentioned, and to be mindful of not only what they post, but also what they react to. Everyone contributing to a harmful post could be held liable, even if this is just by way of “liking”, “loving” or “sharing” it.
There is an even bigger responsibility on companies to manage their social media platforms, not only from a liability perspective, but also from a brand reputation perspective. Many companies employ social media monitoring tools to manage their social media platforms and those of employees. While monitoring employees’ social media platforms has to be done within the bounds of the law (in South Africa, this includes employment law, the Constitution and the Regulation of Interception of Communications and Provision of Communication-related Information Act, 2002), such monitoring is important because the line between personal and business posts is sometimes blurred. Companies could be held vicariously liable for employees’ posts, or even criminally liable where insider trading flows from seemingly innocent posts. This is one reason why companies should have social media policies in place to promote the responsible use of social media.