At a time when “fake news” is common parlance and tensions rise in response to the smallest media slight, is it time for algorithms to take the place of humans in moderating news? This New York Times article seems to think so. What role should algorithms play, and to what extent, in regulating and implementing everyday business ventures, routine government agency processes, health care management, and the like? Who should take responsibility for a problem or negative consequence when the work was done by an algorithm? And, importantly, what will enhanced monitoring of algorithms do to the progress and profitability of companies whose bottom line depends on the very algorithms that can cause unforeseen, sometimes very harmful, problems?

Though it can be defined in many ways, an algorithm is essentially a set of rules that precisely defines a sequence of operations or solves a problem, as in mathematics, computer programming, machinery, and elsewhere. Algorithms were first discussed around 825 A.D. by the Persian mathematician and scholar Al-Khwārizmī, from whose name the word derives. Their use has evolved to the present day, and they are now integral to many companies’ business models, from mammoths such as Google and Facebook all the way down to small business owners. Algorithms affect our daily lives, often without our even being aware of it. They decide which ads you see on your favorite websites and suggest products on sites you frequently order from (e.g., Amazon). They are used in online dating sites and even to auto-tune your favorite singer’s voice. This ubiquity in marketing and business makes algorithms a crucial underpinning of many a successful business model.
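To make that “set of rules” definition concrete, here is one of the oldest algorithms still in everyday use, Euclid’s method for finding the greatest common divisor, sketched in Python. (The code is purely illustrative; nothing in this example comes from any particular company’s system.)

```python
def gcd(a, b):
    """Euclid's algorithm: a precise, finite sequence of steps.

    Repeatedly replace the pair (a, b) with (b, a mod b) until the
    remainder is zero; the survivor is the greatest common divisor.
    """
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```

Every algorithm, from this thirty-century-old recipe to the ones ranking your news feed, has the same basic character: unambiguous steps, applied mechanically, that terminate with an answer.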

But no matter how useful and versatile, algorithms are still just that—tools. And like any tool, an algorithm can yield unwanted or unintended results even when otherwise working as designed. In one instance, an algorithm was tasked with judging an international beauty contest. Its creators fed it images of past pageant winners, intending it to recognize human beauty from those examples, but the algorithm ultimately ended up picking winners based solely on skin color, a clear instance of bias creeping into an algorithm’s output.
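The mechanism behind that failure is easy to reproduce in miniature. The sketch below uses entirely made-up data and a deliberately naive nearest-neighbor “judge”: because every winning example in the training set happens to share one irrelevant trait (here, a skin-tone value), the model learns that trait rather than the quality it was meant to measure.

```python
# Hypothetical data: each candidate is (skin_tone, symmetry) on a 0-1
# scale; label 1 means "past winner". The winners all happen to have
# high skin_tone values, so that trait dominates the learned rule.
training_data = [
    ((0.90, 0.4), 1), ((0.80, 0.5), 1), ((0.85, 0.6), 1),  # winners
    ((0.20, 0.9), 0), ((0.30, 0.8), 0), ((0.25, 0.7), 0),  # non-winners
]

def predict(candidate):
    """1-nearest-neighbor: copy the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda ex: dist(ex[0], candidate))
    return nearest[1]

# A highly symmetric candidate is rejected, while a less symmetric one
# is accepted, purely because of the skewed training examples.
print(predict((0.20, 0.95)))  # 0
print(predict((0.90, 0.30)))  # 1
```

Nothing in this code is malicious, and nothing is broken; the bias lives entirely in the training data the designers chose.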

In other cases, algorithms can go awry and cause serious consequences for those affected. A motorist in Massachusetts had his license wrongly revoked after a facial-recognition algorithm flagged his face as looking sufficiently similar to another Massachusetts driver’s. In another case, in California, a glitch in the Department of Health Services’ automated system unexpectedly cut off the benefits of thousands of low-income senior citizens and people with disabilities; as a result, Medicare canceled their health care coverage because their premiums went unpaid. Meanwhile, there is no shortage of other concerns about algorithms and the potential societal consequences of their use in capitalist markets, where the primary goal is maximizing profit.

Among the big-picture questions, there is no shortage of practical legal considerations for businesses that employ algorithms. Can a company be held legally accountable for an algorithm’s handiwork? It is easy to imagine scenarios where an improperly designed algorithm has fatal consequences, such as a self-driving car getting into an accident or failing to recognize certain road conditions because of faulty design. In those cases, if someone is injured or killed, who is responsible? In legal terms, an algorithm is not an entity that can be brought to court. Are the creators of the algorithm then legally responsible for the consequences of their product, even when those consequences were not intended? Sure, an algorithm is just a tool, but there is an argument for treating an algorithm as an artificial person for purposes of the law, and there is ample legal precedent for holding someone accountable for knowingly using a faulty tool. At some point, any company for whom algorithms do the heavy lifting will face questions of liability and intent. Up until now, saying “Whoops, the algorithm did it!” and claiming unintended consequences has sufficed in most cases (even if it comes with a bit of a PR hit). But every company executive should have contingencies in place for the day an individual, court, or agency decides that’s not enough.

As for determining exactly where that tipping point lies, that will likely depend on a host of shifting variables that include the technology involved, publicity received, seriousness of outcome and public mood. It sounds like just the thing a good algorithm could predict.