As AI becomes an increasingly integral part of our daily lives, the growing use of AI systems raises concerns about “biases,” that is, prejudices and the discrimination that can result from them.
Machine learning algorithms
AI can be defined as the ability of a machine to exhibit human-like capabilities such as reasoning, learning, planning, and creativity; it is built on algorithms that learn through machine learning. With machine learning, it is possible to guide and “teach” an algorithm which results to generate, much as a child is taught the letters of the alphabet through illustrated books. In this case, the AI “learns” from a dataset and produces a predefined output (supervised machine learning).
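A minimal sketch of supervised learning may make this concrete (using scikit-learn; the features and labels here are invented purely for illustration):

```python
# A minimal supervised-learning sketch: the model is shown inputs
# together with the "correct" answers, like a child shown each letter
# alongside its name in an illustrated book.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: simplified shape features for letters.
X_train = [[0, 1, 1], [0, 1, 0], [1, 0, 1], [1, 0, 0]]
y_train = ["A", "A", "B", "B"]  # the predefined outputs we "teach"

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)        # the algorithm learns from labeled examples

print(model.predict([[0, 1, 1]]))  # -> ['A']: it reproduces what it was taught
```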
In other cases, starting from a dataset, the algorithm learns to identify complex processes and patterns without the careful guidance of a human mind (unsupervised machine learning). It is as if, instead of naming each letter for the child, we simply handed them the illustrated books: the child develops their “own reasoning,” generating words and phrases that are not predefined outputs.
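The unsupervised counterpart, again as a minimal sketch with invented data, gives the algorithm no labels at all; it invents its own grouping:

```python
# A minimal unsupervised-learning sketch: no "correct answers" are
# provided; the algorithm groups the data by patterns it finds itself.
from sklearn.cluster import KMeans

# Hypothetical, unlabeled data points.
X = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [8.1, 7.9]]

model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(X)  # the model devises its own grouping

print(labels)  # e.g. [1 1 0 0]: two clusters discovered without guidance
```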
Cognitive biases
It is in these scenarios, with AI models such as ChatGPT, that cognitive biases can be produced. Algorithms are nothing more than mathematical models that are “trained” on datasets provided by humans. Returning to our example of the child: if the letter “A” always appears in red in the books, the child becomes more likely to reach for the color red when reproducing the letter on a blank sheet of paper.
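The mechanism can be sketched in a few lines (toy data invented for illustration): when a feature merely co-occurs with the “right answer” in the training set, the model absorbs that co-occurrence as if it were evidence.

```python
# A toy sketch of how a correlation in the training data becomes a
# "belief" of the model: every "A" the model sees happens to be red,
# so color and the genuinely relevant feature are indistinguishable.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [is_red, has_crossbar]; label 1 = "A".
X_train = [[1, 1], [1, 1], [1, 1], [0, 0], [0, 0], [0, 0]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# A blue letter with a crossbar: a genuine "A", yet the model is
# roughly 50/50, because it partly learned "red" rather than "A".
print(model.predict_proba([[0, 1]]))  # ~[0.5, 0.5]
```

Nothing in the algorithm itself is prejudiced; the distortion lives entirely in the data it was shown.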
Likewise, through the initial datasets provided to it, an algorithm can reproduce “biases” that derive simply from the set of information it was given. Bias can creep in in various ways; in this context, we will focus on biases related to prejudices and opinions on ethnic, cultural, and social matters.
Personnel selection and insurance risk assessment using ML algorithms
One area where the use of AI can create great efficiency on the one hand, and raise concerns on the other, is the workplace, specifically personnel selection using machine learning algorithms. When a selection algorithm is trained on historical datasets of candidates who proved most successful in a given role, it may treat the attributes those candidates share as the most significant for that role. This is what happened to a well-known multinational company recruiting for IT roles: the algorithm automatically rejected female candidates because it relied on a dataset collected over the previous ten years, in which the majority of tech hires were male. The algorithm identified and amplified the biases of its own creators, demonstrating that training automated systems on biased data leads to non-neutral future decisions.
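A stylized reconstruction of that failure mode (all data invented here; this is not the company’s actual system) shows how a model trained on historically male hires learns gender itself as a success signal:

```python
# A stylized hiring sketch (invented data): the model is trained on
# past decisions in which nearly all hired candidates were male.
from sklearn.linear_model import LogisticRegression

# Features: [is_male, years_experience]; label 1 = hired historically.
X_hist = [
    [1, 5], [1, 3], [1, 6], [1, 4],  # past hires: all male
    [0, 5], [0, 6], [0, 4], [1, 2],  # past rejections: mostly female
]
y_hist = [1, 1, 1, 1, 0, 0, 0, 0]    # historical decisions, already biased

model = LogisticRegression().fit(X_hist, y_hist)

# Two candidates identical in every respect except gender:
male, female = [1, 5], [0, 5]
print(model.predict_proba([male])[0][1])    # high "hire" score
print(model.predict_proba([female])[0][1])  # markedly lower score
```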
In addition, in the insurance landscape, AI systems are increasingly used to provide personalized products and services at more competitive prices, ranging from health and life protection to underwriting and claims assessment. Here too, if not properly developed, AI systems can pose significant risks to people’s lives, including discrimination. For example, an insurance risk assessment algorithm could use customer data such as age, gender, income, profession, and health status to determine the insurance price and the level of risk associated with the customer, excluding certain individuals altogether.
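In caricature (hypothetical weights, not any insurer’s actual model), the problem is easy to see: once a protected attribute enters the pricing function, two otherwise identical customers receive different premiums.

```python
# A caricature of a risk-based pricing rule (hypothetical weights,
# not any real insurer's model) showing where discrimination enters.
BASE_PREMIUM = 500.0

def premium(age: int, is_female: bool, income: float, chronic_condition: bool) -> float:
    score = 1.0
    score += 0.01 * max(age - 30, 0)     # older -> riskier, per this rule
    score += 0.15 if is_female else 0.0  # protected attribute in the price
    score += 0.30 if chronic_condition else 0.0
    score -= 0.05 if income > 50_000 else 0.0
    return round(BASE_PREMIUM * score, 2)

# Two customers identical in every respect except gender:
print(premium(40, False, 60_000, False))  # 525.0
print(premium(40, True, 60_000, False))   # 600.0
```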
Technical and regulatory remedies for biases and discrimination
The risks generated by these cognitive biases can be reduced, or even avoided, through action on both the technical and the regulatory front. First and foremost, it is necessary to act on the algorithm itself. Algorithms must be trained on the most diverse and representative datasets possible, and the outputs they produce must be constantly monitored so that biases can be identified and corrected at the source. Additionally, it may be valuable to involve in the development and selection process not only technical experts but a variety of professionals, to prevent unintentional discrimination.
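Such output monitoring can start very simply. One common check, sketched minimally below, compares the model’s selection rates across groups (the 0.8 threshold is the conventional “four-fifths rule” from US employment-selection guidance, used here only as an illustrative yardstick):

```python
# A minimal output-monitoring sketch: compare the model's selection
# rates across groups (a "disparate impact" ratio; 1.0 = parity).
def selection_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical model outputs (1 = selected) for two candidate groups:
men   = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% selected
women = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5% selected

ratio = disparate_impact(men, women)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:  # the conventional "four-fifths" warning threshold
    print("warning: possible bias -- investigate the training data")
```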
On the other hand, clear regulatory provisions are also crucial. It should be noted that the AI systems described above already fall within the scope of the draft AI Act. The AI Act adopts a risk-based approach (similar to that of the GDPR), identifying four levels of risk (unacceptable, high, limited, and minimal). The systems described are listed in Annex III of the proposed Regulation, which covers, among others, systems used in the context of work and employment, including the selection and hiring of personnel.
For these systems, a series of obligations (e.g., risk management systems, transparency, human oversight) is foreseen, with which providers must comply from the design and development phase; compliance with these obligations must be carefully evaluated before the system is placed on the market.
However, these rules will only apply in the near future. For the time being, it is necessary to rely on the combined provisions of other regulations, which impose transparency obligations, grant the right to object to processing carried out by these systems, and provide the right to request human intervention in such processing. These provisions can be found in Article 22 of the GDPR and in the new transparency provisions introduced by Legislative Decree No. 104/2022 (the “Transparency Decree”), which establishes significant regulatory obligations whenever “automated decision-making or monitoring systems” are used in relation to workers.
Using AI in personnel selection, in the creation of personalized offers, and in access to certain services can lead to significant improvements in efficiency and accuracy. But it is important to pay attention to the risks of discrimination and the cognitive biases associated with these algorithms. Only through a combination of transparency, fairness, and clear regulatory provisions that impose specific obligations on the users of AI systems can we ensure that AI is used in a more responsible and equitable manner.