For an organisation seeking to implement an AI solution to improve its business processes (whether for internal use or customer-facing), Amazon's recently leaked foray into this area provides a good case study.
How much do you know about the training data you are using and any biases it may contain? For an unsupervised algorithm (where there is no human teacher confirming that an output corresponds to a particular input), the likelihood is that you will not know much: this type of algorithm is generally used to spot trends which a human may have missed. The problem is that the algorithm's output will take on any biases present in the data.
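To see how this happens in practice, here is a deliberately simplified sketch (with hypothetical data, and nothing to do with Amazon's actual system): a naive scoring model learns keyword weights from past hiring decisions, and because the historical data is skewed, the learned weights quietly inherit that skew.

```python
# Illustrative sketch only: a toy keyword-scoring "model" trained on
# hypothetical, skewed historical hiring data.
from collections import Counter

# Hypothetical past resumes and outcomes (1 = short-listed, 0 = rejected).
history = [
    ("captain chess club", 1),
    ("led robotics team", 1),
    ("captain debate team", 1),
    ("women in tech mentor", 0),
    ("women chess club captain", 0),
    ("robotics team member", 1),
]

# "Learn" a weight per word: how often it appears in accepted vs rejected resumes.
accepted, rejected = Counter(), Counter()
for text, label in history:
    (accepted if label else rejected).update(text.split())

def score(resume: str) -> int:
    # Sum of per-word weights; words seen mostly in rejections score negative.
    return sum(accepted[w] - rejected[w] for w in resume.split())

# The word "women" ends up with a negative weight purely because of the
# skew in the historical data - no one programmed that rule explicitly.
print(score("chess club captain"))        # positive score
print(score("women chess club captain"))  # lower score, penalised by "women"
```

No one wrote a rule to penalise the word "women"; the bias emerged entirely from the pattern of past decisions, which is exactly why it can be hard to spot before deployment.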
This is less likely to be a problem in an AI solution intended to make a warehouse run more efficiently, or to reduce the electricity bill for your data centres.
But for any decisions being made about individuals, it is important that those individuals are not being unlawfully discriminated against on the basis of any protected characteristics, such as race or gender.
For Amazon, who were trying to use an AI solution to review applicants for software developer jobs and short-list candidates for interview, the biggest problem was the historical bias in the data towards men: to the extent that their AI model was actively penalising any use of the word "women" in resumes.
Fortunately, Amazon were able to spot and correct this issue before the algorithm was put to use. However, because Amazon could not be sure what other biases the AI may have picked up from the company's past decisions, they gave up and scrapped the whole thing.
Although this may sound like a failure, it actually shows that Amazon had good governance procedures in place to catch some of the biggest issues facing businesses wanting to make use of unsupervised algorithms: bad data in, bad data out; watch out for bias; and can you audit what your AI is "thinking"?
Amazon could see some of the biases, but in the end the lack of transparency into the AI's reasoning, in an area (recruitment) where bias and discrimination (even inadvertent) are major issues, led them to make an informed, risk-based decision not to go ahead.