Modern HR programs, especially those using artificial intelligence, are advancing by leaps and bounds. But under what labor law conditions can innovations such as chatbots be used in recruitment?

What is a chatbot? Chatbots are programs that react independently to text or voice input and can thereby simulate a conversation with a human being. They are based on databases containing recognition patterns and matching answers. In the early stages of development, a chatbot's answers were usually predetermined by simple rule structures, which meant that every possible course of a conversation had to be anticipated and mapped out in advance. Today, however, there are also chatbots with artificial intelligence.
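As a purely illustrative sketch, not drawn from any particular product, an early rule-based chatbot of this kind can be pictured as a simple mapping from recognized text patterns to predetermined answers; all patterns and replies below are invented examples.

```python
import re

# Illustrative pattern-to-answer rules: each entry pairs a recognition pattern
# with a predetermined reply. Every possible turn of the conversation has to be
# anticipated and mapped out in advance.
RULES = [
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! Which position are you applying for?"),
    (re.compile(r"\b(salary|pay)\b", re.I), "Salary details are discussed in a later interview."),
    (re.compile(r"\b(working hours|part.?time)\b", re.I), "The role is full-time with flexible hours."),
]

FALLBACK = "I'm sorry, I did not understand that. Could you rephrase your question?"

def reply(user_input: str) -> str:
    """Return the predetermined answer for the first matching pattern."""
    for pattern, answer in RULES:
        if pattern.search(user_input):
            return answer
    return FALLBACK

print(reply("Hi, which position is this about?"))  # greeting rule matches
print(reply("Can I work remotely?"))               # no rule matches, generic fallback
```

Anything outside the predefined rules, such as the second question above, immediately falls back to a generic reply, which is exactly the limitation that AI-based chatbots are meant to overcome.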

There is as yet no standard, unambiguous definition of the term 'artificial intelligence'. Its most important feature, however, is that of an "autonomous system": the ability to make decisions and implement them in the outside world, independently of any external control or influence. This autonomy is purely technological, and its degree depends on how sophisticated and advanced the program is. Artificial intelligence is also characterized by the ability to adapt its behavior to its environment and to learn from interactions and its own experience. The aim of continually improving the artificial intelligence in chatbots is that the person concerned, in this case the applicant, no longer notices that he or she is talking to a program rather than to a human being.

In recruitment, chatbots can be used, for example, during initial interviews and when assessing CVs, and, combined with other data analyses, to make predictions about an applicant's suitability. Suitability can then be assessed in a way that gives equal weight to all of the data collected and available, and that rests solely on objective criteria such as logic and rationality. Application procedures can thus become more efficient for the employer, especially as chatbots can be used regardless of office or business hours and of staff availability.

The problem to date: Unpredictability

Notwithstanding the benefits that chatbots can offer, we must also ask what legal risks are involved. Although it is based on databases and programming, the autonomy of artificial intelligence is characterized by independent decisions. The behavior of autonomous systems is therefore currently neither completely predetermined nor predictable. The continuous development and optimization of artificial intelligence means ever greater independence and ever more complex processes and decision-making. Given the huge quantities of data processed by such programs, logging and storing the ongoing processes could become unworkable. In view of these developments, we must ask to what extent chatbots' behavior will remain comprehensible and controllable.

Current pilot projects and trials are showing that reactions during simulated conversations with a human cannot always be planned or predicted. Microsoft, for example, had to deactivate its chatbot "Tay" and take it offline because, after only a few hours, it had started making racist statements.

This problem raises legal issues, particularly in recruitment. If an employer uses a chatbot to conduct job interviews, it must comply with the rules on permissible and impermissible questions established by many years of Federal Labor Court (BAG) case law, and must observe the requirements of the General Equal Treatment Act (AGG) when choosing a candidate. It is questionable, however, whether the employer, particularly as a non-specialist in IT, can really ensure this in practice if the chatbot decides autonomously and rationally which questions to ask during the conversation. Judged by purely objective criteria, an existing pregnancy or disability could reduce an applicant's suitability for the position to be filled, so that the chatbot asks about it as a logical consequence, even though such a question is illegal. It may be possible to prevent this risk by programming in a block on illegal questions. However, the more advanced artificially intelligent chatbots become, and the more autonomously they operate and learn, the less predictable their decisions will be, and the greater the risk of infringing the AGG.
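One conceivable safeguard, sketched here purely for illustration and not based on any existing product, is a filter layer that checks every question the chatbot generates against a block list of protected topics before it reaches the applicant; the topic keywords and function names are assumptions, and a static keyword list cannot guarantee AGG compliance for a system that keeps learning new phrasings.

```python
# Illustrative "block" on impermissible interview questions: every question the
# chatbot generates is checked against a list of protected topics (pregnancy,
# disability, religion, ...) before it may be sent to the applicant. The
# keywords are examples only; a keyword filter will not catch every problematic
# phrasing an autonomously learning system might produce.
BLOCKED_TOPICS = {
    "pregnancy": ["pregnant", "pregnancy", "planning to have children"],
    "disability": ["disability", "disabled", "severely disabled"],
    "religion": ["religion", "religious", "church membership"],
}

def is_permissible(question: str) -> bool:
    """Return False if the generated question touches a blocked topic."""
    lowered = question.lower()
    return not any(
        keyword in lowered
        for keywords in BLOCKED_TOPICS.values()
        for keyword in keywords
    )

def safe_ask(generated_question: str) -> str | None:
    """Forward the question to the applicant only if it passes the filter."""
    if is_permissible(generated_question):
        return generated_question
    return None  # drop the question; the chatbot must generate another one

print(safe_ask("Are you currently pregnant?"))                # None - blocked
print(safe_ask("Which programming languages do you know?"))   # forwarded unchanged
```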

Who is responsible for chatbot errors?

What would happen if the chatbot were to ask an illegal question in the initial interview with an applicant who is later rejected? The applicant could use the illegal question as evidence (presumption under Sec. 22 AGG) that he or she was unlawfully discriminated against in the selection process and claim damages (Secs. 7 para. 1, 1 AGG, Sec. 15 para. 1 AGG) and/or compensation pursuant to Sec. 15 para. 2 AGG. The question then arises as to who is liable for this AGG violation: the chatbot manufacturer or developer, the employer as the program user, or even the chatbot itself?

Firstly, it is clear that even with artificial intelligence that can act autonomously, the chatbot has no legal personality and therefore cannot itself be liable. However, as the program's user, the employer could be liable for damages under Sec. 15 para. 1 AGG. What is decisive here is the employer's use of the chatbot software that asked the illegal question, as this suggests unlawful unequal treatment in the selection process. The employer has thus used illegal, and therefore faulty, software. Fault is presumed under Sec. 15 para. 1 AGG, and the employer may have difficulty refuting this presumption even if the software was used as intended. The applicant may then claim damages from the employer for discrimination during shortlisting. The same applies in principle to the no-fault claim for compensation: the employer has used faulty software, resulting in an AGG violation and thus in unlawful unequal treatment, and is liable to the applicant for non-material damage.

In view of this extensive liability when using a chatbot, the follow-up question for the employer is whether it can take recourse against the software manufacturer. This depends on the contractual relationship between the software manufacturer and the company using the software. It could be a purchase contract, if the parties have agreed on delivery of ready-made standard software. If, however, the object of the contract is individually produced software (a software development contract) or the adaptation of standard software to individual needs, such as in-house candidate selection guidelines, the relationship between the software manufacturer and the company using the chatbot is a contract for work. In both cases the employer can assert warranty rights under this contractual relationship (Secs. 434 et seqq. and/or 633 et seqq. BGB [German Civil Code]).

Legal safeguards are crucial

To safeguard these warranty claims, it is particularly advisable to agree a set of specifications with the software manufacturer. This is done through the functional specifications, which set out the individual requirements for the software. They could stipulate, for example, that the chatbot software must be programmed with a block on illegal questions, in order to prevent claims against the employer under Sec. 15 AGG. If the chatbot nevertheless asks illegal questions that lead to the employer being liable for damages or compensation, the software defect lies in the software not being of the agreed quality. The employer can then assert its own warranty claims against the software manufacturer without evidential difficulties, on the basis of their existing contractual relationship, and thus indemnify itself. This distribution of risk is also appropriate for the parties involved. There is no apparent reason why the employer, as the chatbot's user, should not be responsible towards the applicant for the "malfunction", just as with any other software, merely because an autonomous, artificially intelligent system is involved. Equally, recourse against the software manufacturer is justified, as the manufacturer alone has control over the software and can minimize the risk of software errors and remedy them.

In general, the debate over liability issues relating to autonomous systems is still in its infancy. Both the question of the liable party, i.e. who is liable for which violation, and the question of which liability regime applies (fault-based liability or strict liability) remain unanswered. In its resolution of 16 February 2017, the European Parliament called on the Commission to present a proposal for a legislative instrument on the legal questions raised by robotics and artificial intelligence foreseeable over the next 10 to 15 years. The European Parliament considers that this should be based either on strict liability or on a risk management approach, which would place liability on whoever is, under certain circumstances, best able to minimize the risks and deal with any adverse effects.

At national level too, many see a need for the legislator to act and to create comprehensive liability provisions of its own for artificial intelligence and comparable autonomous systems. For the labor law issues associated with using chatbots in recruitment, however, the question of liability can in any case be resolved sufficiently on the basis of the existing AGG provisions and the warranty rights arising from purchase or work contracts.

Data protection limits the possible uses

Using chatbots in the recruitment process raises not only liability issues but also data protection issues. When using the program, the employer must also comply with the requirements of the General Data Protection Regulation, or GDPR (DSGVO). The chatbot processes the applicant's personal data in order to carry out pre-contractual measures, so Art. 6 para. 1 lit. b GDPR serves as the legal basis. Personal data in the particularly sensitive categories (Art. 9 para. 1 GDPR) have no place in the application process, since questions about such data are prohibited in any case. Furthermore, when using the chatbot, the employer must comply with the information duties under Art. 12 et seqq. GDPR and ensure adequate IT security (Art. 32 GDPR).

From a data protection perspective, the prohibition in Art. 22 para. 1 GDPR on subjecting the data subject to a decision based solely on automated processing which produces legal effects concerning him or her is problematic. In the application process, this would be the decision on whether or not the applicant is hired. The prohibition does not apply if the decision is necessary for entering into or performing a contract, or if it is based on the data subject's explicit consent (Art. 22 para. 2 GDPR). The application process, however, merely serves to initiate a contract, so the first exception does not apply. For consent to be effective, it must be given voluntarily (Art. 4 No. 11 GDPR). From the applicant's position, however, refusing consent would mean being unable to take part in the application process and thus having no chance from the outset of being selected for the vacancy. The applicant therefore has no real choice free of negative consequences, so no effective consent is possible as an exception. The prohibition in Art. 22 para. 1 GDPR therefore limits the scope for using chatbots in the application process: the chatbot's assessment may only be used as a basis for decision-making. The final decision as to whether or not an applicant is hired must be made by a human, and therefore remains the remit of the Human Resources Department.