When we hear about artificial intelligence, we are frequently bombarded with notions of ultra-smart robots taking over the world, either destroying humans or at least leaving them in the developmental dust. The good news, at the time of this writing, is that humans do not currently face that AI existential threat. The bad news is that artificial intelligence nevertheless creates present and future safety concerns.

As a commentary recently posted on Informationweek.com accurately points out, the use of artificial intelligence raises the following risks:

  • AI can lead to privacy invasions;
  • Socioeconomic biases can be built into AI applications (whether intended or not);
  • AI algorithms may not always be transparent or fully interpretable by humans;
  • There may be no systems in place to hold the human creators of AI responsible and liable for algorithmic outcomes;
  • It is not clear that AI applications generally will be aligned with stakeholder values;
  • AI-driven decision-making may not be throttled even when the uncertainty is too great to support automated decisions;
  • Failsafe procedures that allow humans to take back control when AI applications reach the limits of their competency (or simply are not working properly) may never be implemented;
  • It is not certain that AI-driven applications will work in consistent, predictable patterns, free from unintended consequences;
  • Importantly, it is not known whether AI applications can be made impenetrable to adversarial attacks intended to exploit vulnerabilities; and
  • We do not know whether AI algorithms ultimately will fail gracefully rather than catastrophically at the end of their useful lives.

This is quite the list of AI safety concerns, and of course, we could think of many more.

What is the bottom line?

The bottom line here is that we should not currently worry about some distant future in which humans are the slaves of AI-robot masters. But the AI train has left the station: artificial intelligence is here and likely to stay. So we need to dedicate serious focus now to reducing, and even eliminating, AI safety risks.