Lexology GTDT Market Intelligence provides a unique perspective on evolving legal and regulatory landscapes. This interview is taken from the Artificial Intelligence volume featuring discussion on various topics including government national strategies on AI, ethics and human rights, AI-related data protection and privacy issues, trade implications of AI and more, within key jurisdictions worldwide.
1 What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?
The Brazilian General Personal Data Protection Law No. 13,709/2018 (LGPD) governs decision-making processes that are totally automated, excluding any human influence on the outcome of the AI’s decisions (despite human inputs, such as in the supervised machine learning method). In supervised learning, the program is trained on a set of labelled data, in which the output for each data entry is already known. The data is labelled to tell the machine exactly what patterns to look for, so there is still no human influence on the outcome of the AI’s decisions.
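The supervised learning method described above, in which every training entry carries a known output, can be pictured with a toy sketch. This is purely illustrative; the data, labels and nearest-neighbour approach are hypothetical and not drawn from any system discussed in this interview:

```python
def nearest_neighbour_predict(labelled_data, query):
    """Toy supervised learning: each training point carries a known
    label, and the prediction for a new point is simply the label of
    the closest training point (1-nearest-neighbour)."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _features, label = min(labelled_data, key=lambda item: dist(item[0], query))
    return label

# Labelled training set: the output for each data entry is already known.
training = [
    ((1.0, 1.0), "low_risk"),
    ((8.0, 9.0), "high_risk"),
    ((2.0, 1.5), "low_risk"),
]
prediction = nearest_neighbour_predict(training, (7.5, 8.0))  # → "high_risk"
```

Once the labelled data is fixed, the outcome for any query is fully determined by the algorithm, which is the sense in which there is no human influence on the decision itself.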
According to the LGPD, data subjects are entitled to request a review of decisions based solely on automated processing of personal data that affect their interests, including decisions designed to define their personal, professional, consumer and credit profile or aspects of their personality. Moreover, the LGPD covers the right to explanation, determining that data controllers must provide data subjects, whenever requested, with clear and adequate information regarding the criteria and procedures used for the automated decision. However, unlike the EU General Data Protection Regulation (GDPR), the LGPD does not grant data subjects the right to not be subject to a decision based solely on automated processing that produces legal effects on them or significantly affects them. Moreover, the exceptions established in the GDPR and the LGPD regarding the duty to provide information about the automated decision-making, the algorithms’ logic and the envisaged consequences of such processing for the data subjects are different: under the LGPD, theoretically, data controllers may refuse to provide clear and adequate information regarding the criteria and procedures used for the automated decision if it is claimed that this information violates industrial or business secrets.
In March 2020, Brazil concluded a Public Consultation on the Brazilian Artificial Intelligence Strategy, aiming to collect contributions on how to enhance the benefits of AI in the country while mitigating any negative impacts.
The Ministry of Science, Technology, Innovation and Communication (MCTIC) has partnered with the United Nations Educational, Scientific and Cultural Organization (UNESCO) to carry out a specialised consultancy on AI. The material produced by this consultancy will inform the formulation of a national strategy for artificial intelligence. One of the products of the consultancy has been a survey of the state-of-the-art academic discussion and the main concepts of AI. In addition, an inventory was made of initiatives and projects in progress in Brazil related to the development and adoption of AI (national benchmarking).
Brazilian Decree No. 9,854/2019 institutes the National Internet of Things Plan (IOT) with the purpose of implementing and developing IOT in Brazil, based on free competition and free data circulation, in compliance with information security and personal data protection guidelines. The government will regulate the operation and development of innovations in the area of connected devices, including equipment using AI. The aim of this decree is to provide greater legal certainty to projects and initiatives based on IOT.
Further draft bills of law governing AI are also currently under discussion at the Brazilian Congress.
In terms of the level of regulation compared with that in other jurisdictions, it is noteworthy that more than 15 countries have already launched their national strategies to prepare for the implementation of artificial intelligence and its applications. Additionally, countries such as Australia, Denmark, Finland, India, Italy, Japan, Mexico and Sweden have already created or implemented national AI research centres. In this scenario, as analysed by the Institute of Technology and Society (ITS) in its contribution to the MCTIC Public Consultation, Brazil is lagging behind, although the examples of previous national plans will certainly contribute to the quality of the Brazilian national strategy, aided by the consultancy on AI carried out by the MCTIC in partnership with UNESCO, as well as the multi-stakeholder contributions presented during the public consultation on AI carried out by the MCTIC.
2 Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?
Brazil does not have an approved national plan or strategy for AI yet, but in March 2020, the MCTIC concluded a Public Consultation on the Brazilian Artificial Intelligence Strategy (the Public Consultation), aiming to collect contributions on how to enhance the benefits of AI to the country and mitigate any negative impacts.
The Public Consultation was divided into the following vertical axes:
- qualifications for a digital future;
- research, development, innovation and entrepreneurship;
- application in the public sector;
- application in the productive sectors; and
- public security.
It was also divided into transversal axes:
- legislation, regulation and ethical use;
- international aspects (AI governance); and
- a ‘priorities and objectives’ section.
The MCTIC announced an initiative to create eight AI laboratories in Brazil to manage and create policies focused on, for example, the IOT, cybersecurity and applied AI.
Regarding the efforts to share data, the specialised consultancy conducted by the MCTIC in partnership with UNESCO points out some sparse initiatives for the sharing of data generated from the application of AI. The first relates to the AI Victor, developed jointly by the Federal Supreme Court (STF) and the law, software engineering and computer science courses of the University of Brasília to streamline the assessment of appeals in relation to the main themes of general repercussion set by the STF, as well as to separate and classify the most relevant parts of the process. The AI Victor makes it possible to create a significant database in the Brazilian judiciary with information such as:
- who the most frequent litigants before the STF are, in the appeal scope;
- which issues of general repercussion have the highest volume of linked processes; and
- which constitutional issues have undergone greater judicialisation.
Second, the Rondônia Court of Justice developed the AI Synapses, which uses predictive models to point out the appropriate procedural step for a case, drawing on databases of similar previously judged cases and accessing more than 40,000 final and interlocutory decisions as training data. Third, the Federal Audit Court (TCU) also uses AI systems to analyse a large volume of state hiring processes and make this information available to entities such as the Public Ministry, the Federal Revenue Service and the courts of accounts through access to that algorithm.
In addition to more than 70 databases implemented by the TCU, the AI systems Alice, Sofia and Mônica include government account records, contracts that have public resources and information about public servants processed by control bodies, in addition to other information. The following initiatives are also pointed out by the MCTIC consultation:
- the Superior Labour Court (TST) also uses the AI system Bem-Te-Vi to analyse procedural deadlines;
- the Ministry of Economy uses a virtual assistant to combat fraud in the procurement processes of the public administration;
- the Comptroller General of the Union (CGU) uses AI systems, such as Rosie, to audit public accounts and assist in social control, as well as to identify possible signs of deviations in the performance of public servants and to inspect contracts and suppliers, providing risk analysis; and
- state initiatives in Paraná, Pernambuco, Rio de Janeiro, Santa Catarina and São Paulo.
Decree No. 10,046/2019 establishes the data sharing policy within the scope of the federal public administration and institutes the Citizen Base Register and the Central Data Governance Committee. The decree categorises data sharing at different levels and aims to reduce barriers to the sharing and cross-referencing of federal public administration databases, with the aim of eliminating duplicate information and inconsistencies in public databases. However, the decree adopts definitions different from those in the LGPD, making interpretation of the rules somewhat difficult, and also broadens the grounds on which data sharing may be justified.
In this way, the general parameters established in the LGPD should guide the use and sharing of data through the AI systems.
3 What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?
In the Report for Artificial Intelligence and Algorithm Regulation produced by Paulo Novais and Pedro Miguel Freitas – within the scope of the Sector Dialogues, a partnership between Brazil and the European Union – it is proposed that ethical issues be evaluated according to the complexity of the activity to be performed.
The public consultation held by the federal government regarding a national plan for AI resulted in many different approaches regarding the ethical issues of AI. For instance, the ITS proposed, in summary, that the development of AI applications should follow certain principles, such as:
- security and reliability; and
- technological understanding.
The ITS furthermore presented how other countries approach the matter, highlighting the importance of transparency in the use of AI and the need for researchers and entities who develop or use such technology to be able to provide explanations about the algorithmic decision-making process. Therefore, transparency and explanation of the algorithms (preferably public) are important and necessary in the pursuit of an ethical use of AI.
The ITS also reinforces the importance of a political arrangement to organise a strong legal scenario capable of implementing the principles mentioned. The government’s position on the matter should also encompass the selection of priorities and the adoption of basic principles in a national strategy for the development of AI.
On this matter, a debate arises about the need to create a specialist authority to assist, supervise and advise on the ethical use of AI. This authority should be an entity composed of specialists from different areas for a multidisciplinary approach.
The ethical issues presented by AI were indirectly addressed by the LGPD, which provided, in its original wording, that data subjects could require that the review of decisions made solely by automated means be carried out by a natural person. This provision offered a mechanism to minimise the risks triggered by the increasing use of algorithms to evaluate and classify people’s lives and behaviour, for which the criteria are not usually revealed. The guarantee of this right was an important tool to protect data subjects against decisions made by potentially biased AI systems. However, the provision was vetoed by the Brazilian president, so the review of the decision no longer needs to be done by a natural person. This change was not well received by the legal community, which perceived it as a loss to data subjects’ rights and to the ethical use of AI-based systems.
4 What is the government policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?
There is no specific government policy and strategy for managing the national security and trade implications of AI yet. However, in the public consultation held by the MCTIC, several entities proposed suggestions for the use of AI in Brazil and its regulation. For instance, the civil entity InternetLab proposed that whenever an AI-based system is being used for national security reasons, there must be constant evaluation of whether the algorithm is being used for the purpose for which it was intended. In addition, a report must be created and made available to the public, indicating:
- the AI solution’s purposes;
- the population that may be impacted;
- the mechanisms used to control possible biases;
- updates to the algorithm; and
- contact information for the exercise of individual rights.
The creation of an entity responsible for providing specific regulation on the matter and inspecting the use of AI in national security has also been suggested. This entity would be responsible for auditing and applying penalties to bodies that do not comply with the legislation, as well as for developing technical standards and binding good practices to reduce bias in the development and use of these systems.
Furthermore, InternetLab also suggested other requirements for the use of AI on national security, such as:
- affirmative actions for the presence of multiple racial and gender participants in the teams developing and maintaining these technologies;
- an obligation that any automated systems used for public security purposes are tested in accordance with the technical standards established by the competent authority before their implementation and public use;
- a guarantee of individual rights regarding decisions made or informed by AI systems, such as the right to human review, especially since there is the potential for risk to individual freedoms; and
- investment in academic research on the topic to update public awareness of the risks of technology and AI-based products.
In relation to trade restrictions that may apply to AI-based products, article 20, paragraph 1 of the LGPD institutes the right to explanation, derived from the transparency principle, determining that the data controller must provide data subjects, whenever requested, with clear and adequate information regarding the criteria and procedures used for the automated decision. Given the complexity of AI decision-making processes, this could, in practice, lead to an interpretation under which the use (and trading) of machine learning algorithms whose decision-making activity is inscrutable becomes factually impossible. However, such a solution would be counterproductive and not feasible from an economic standpoint. It will therefore be important to await possible regulations from the Brazilian Data Protection Supervisory Authority (ANPD) regarding its role in audits to verify discriminatory aspects in the automated processing of personal data. This type of audit has not yet been regulated by the ANPD.
5 How are AI-related data protection and privacy issues being addressed? Have these issues affected data sharing arrangements in any way?
Even though there is no specific AI-related data protection and privacy regulation, under the LGPD’s perspective, the design and development of AI systems will need to involve security, technical and administrative measures capable of protecting personal data from unauthorised access and from accidental or unlawful situations of destruction, loss, alteration, communication or any form of inappropriate or illicit processing of personal data. The ANPD may provide minimum technical standards in this regard, taking into account the nature of the information processed, the specific characteristics of the processing and the state-of-the-art technology, including the methods and techniques of machine learning available at the time of the design and development of the AI systems.
Data sharing arrangements will need to comply with the LGPD. Anonymisation is preferable whenever possible, and machine learning techniques such as generative neural networks can serve as an alternative in data sharing arrangements to the statistical technique of differential privacy, which is used to prevent details that are rare or unique to customers from being recorded, and to ensure that two machine learning models are indistinguishable whether or not a given customer’s data was used in their training. In broad terms, generative neural networks are a category of artificial neural networks that seek to emulate the functioning of the human brain through networks of decision-making neurons trained on data. By setting two neural networks against each other, generative neural networks create and progressively improve synthetic content during training.
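The differential privacy technique mentioned above can be illustrated with a minimal sketch. This is not an LGPD-mandated mechanism and the data, function names and parameters are hypothetical; it simply shows the core idea of adding calibrated noise to an aggregate statistic so that no single customer's record is identifiable in the shared output:

```python
import random

def dp_count(records, predicate, epsilon, sensitivity=1.0):
    """Differentially private count: the true count plus Laplace noise
    with scale sensitivity / epsilon. Smaller epsilon means stronger
    privacy (more noise); the contribution of any one record is hidden."""
    scale = sensitivity / epsilon
    true_count = sum(1 for r in records if predicate(r))
    # A Laplace(0, scale) sample is the difference of two iid
    # exponential samples with mean `scale`.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Hypothetical customer records (illustrative only).
customers = [{"age": a} for a in (23, 35, 41, 29, 52, 64, 37)]

# Share a noisy count instead of the raw data.
noisy = dp_count(customers, lambda c: c["age"] >= 40, epsilon=1.0)
```

The true count here is 3; the shared value fluctuates around it, so an observer cannot tell whether any individual customer was included, which is the indistinguishability property described in the text.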
6 How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?
Even though Brazil does not have any specific AI legislation yet, it has already created entities within other governmental institutions to operate on AI themes. A branch of the federal administration, the MCTIC, is responsible for, among other roles, dealing with national policies for scientific and technological research and the encouragement of innovation, such as the Public Consultation (see question 2), which aims to identify priority areas in the development and use of AI-related technologies where there is greater potential for obtaining benefits. However, the MCTIC is only responsible for providing guidance and strategies and does not hold enforcement and investigation powers.
In addition, the Prosecution Service of the Federal District and Territories created the Special Unit for Data Protection and Artificial Intelligence (ESPEC) in 2018, which so far has been operating on data protection, investigating data breaches and the misuse of personal data by companies. The ESPEC has been acting in cases of damage suffered by individuals, filed at the state or federal district courts, with potentially nationwide impact.
Furthermore, according to Brazilian Decree No. 9,854/2019 on the National Internet of Things Plan, the Brazilian government will regulate the operation and development of innovations in the area of connected devices, including equipment using AI.
7 Has your jurisdiction participated in any international frameworks for AI?
Brazil participates in the International Conference on Artificial Intelligence and Law (ICAIL) and, at the 2019 edition, two articles by Brazilian researchers were presented, one on formal models of legal interpretation and the other on automated decisions and data protection. The Superior Court of Justice (STJ) also participated in the 2019 edition with the Socrates Project, an AI system to provide subsidies to court rapporteurs to render decisions.
In addition, in December 2019, the University of São Paulo hosted the Regional Forum on Artificial Intelligence in Latin America and the Caribbean, promoted by UNESCO, the Ponto BR Information and Coordination Center and the federal government, through the Ministry of Foreign Affairs and the MCTIC.
8 What have been the most noteworthy AI-related developments over the past year in your jurisdiction?
In the cybersecurity area, the Technical Advisory Group, a platform developed in Brazil to offer vulnerability management in an integrated manner, helps to define action plans based on the criticality of security flaws and the sensitivity of assets.
The Socrates Project, mentioned in question 7, was developed by the Artificial Intelligence Advisory of the Superior Court of Justice and allows the examination of appeals and appealed judgments, informing case reporters whether the case falls within the scope of the court, the applied legislation and even similar processes with suggestions of decisions. The AI Victor (see question 2) is another noteworthy AI-related development in Brazil during the past year.
9 Which industry sectors have seen the most development in AI-based products and services in your jurisdiction?
According to the Brazilian Association of Artificial Intelligence, in 2017 there were around 40 start-ups dedicated exclusively to creating or applying AI solutions in Brazil to increase efficiency in sectors such as insurance, digital marketing, retail, agribusiness, education, health, legislation, transport, financial services and natural language. In addition, IBM, in partnership with the São Paulo State Research Support Foundation, selected the University of São Paulo as a partner institution for the creation of an advanced research centre in artificial intelligence engineering in Brazil. The research will be applied to different market sectors, with an emphasis on natural resources, agribusiness, environment, finance and health. In general, all these industry sectors have seen development in AI-based products and services in Brazil.
In relation to financial services, AI-based products have been extensively used for:
- customer support;
- credit analysis;
- profiling for the offering of products;
- bankruptcy risk prediction; and
- financial trading.
Operating within pre-defined parameters, these products analyse the stock market and provide more accurate reports on the risks of purchasing and selling assets.
Healthcare has also developed positively in the use of AI-based products. Many public and private hospitals have been investing in research on AI systems to provide a risk analysis on certain treatments for cancer patients. They have also been able to determine more thoroughly the chances of patients developing hereditary diseases and therefore approach them at an early stage. As a recent example, the Brazilian health technology company Portal Telemedicina launched a new model of AI to accelerate the process of diagnosing patients with suspected covid-19, with the promise of a faster, safer, more economical and scalable alternative for laboratories, clinics and hospitals that act on the front line in attending to patients.
Finally, as mentioned above, the public sector has implemented many new projects involving AI-based products that assist government entities on daily activities, such as those employed by the TCU, the Rondônia Court of Justice, the TST, the CGU and state initiatives in Paraná, Pernambuco, Rio de Janeiro, Santa Catarina and São Paulo.
10 Are there any pending or proposed legislative or regulatory initiatives in relation to AI?
Presently there are three draft bills under discussion at the Brazilian Congress.
Draft Bill No. 5,691/2019, proposed by Senator Styvenson Valentim, proposes the institution of a National Policy for Artificial Intelligence, with the objective of stimulating the formation of an environment favourable to the development of technologies in AI. The bill establishes:
- principles for the policy;
- requirements for AI solutions to be understandable and accessible, with mechanisms for human intervention if necessary, without discriminatory bias; and
- the possibility of federal government and its branches reaching agreements with Brazilian and foreign private and public entities to obtain technical, human or financial resources to support and strengthen the national policy.
The draft bill does not provide for a competent authority.
Senator Styvenson Valentim has also proposed Draft Bill No. 5,051/2020, establishing the principles for the use of AI in Brazil. One proposition with practical effect refers to the need for human supervision. However, in general, the draft bill does not address relevant aspects covered by the Public Consultation held by the MCTIC.
Draft Bill No. 21/20 was proposed by Deputy Eduardo Bismarck to create the legal framework for the development and use of AI in Brazil, establishing principles, rights and duties for the use of AI. In general, this draft bill conforms to the ethical principles of AI set by the Organization for Economic Co-operation and Development.
This draft bill establishes that the use of AI must be based on respect for human rights and democratic values, equality, non-discrimination, plurality, free initiative and data privacy. In addition, AI must have, as a principle, the guarantee of transparency in its use and operation.
This proposal, which originated in the Chamber of Deputies, foresees development agents, who are individuals or legal entities that participate in the planning and design, data collection and processing phases and construction of the AI model as well as verification and validation, and operating agents, who participate in the monitoring and operation phase of the AI systems. This bill of law imposes several duties on the development and operating agents, such as:
- to provide clear and adequate information about the criteria and procedures used by the AI system (a duty imposed on the processing agents in the LGPD);
- to respond, in accordance with the law, for decisions made by an AI system; and
- to provide for the continued protection of AI systems against cybersecurity threats.
In addition to establishing the rights of the stakeholders involved or affected, directly or indirectly, by AI, Draft Bill No. 21/20 also provides for the artificial intelligence impact report. This is defined as the documentation to be published by the AI development and operating agents at the request of the government branches, describing the life cycle of the AI system, as well as measures, safeguards and risk management and mitigation mechanisms related to each phase of the system, including security and privacy.
11 What best practices would you recommend to assess and manage risks arising in the deployment of AI?
Based on the studies, public consultation, draft bills of law discussed so far and the international plans for AI, we recommend assessing and managing risks arising in the deployment of AI through the application of technical and entrepreneurial measures (some of them proposed by Brent Daniel Mittelstadt in ‘The Ethics of Algorithms: Mapping the Debate’, Big Data & Society), supported by public policies, in the conception of the algorithms, such as:
- training AI systems with human-interpretable terms and storing data from each decision in order to probe the decision afterwards, providing information that is understandable and accessible to the public;
- establishing different levels of controllability according to the different machine learning methods (ie, supervised, unsupervised and reinforcement learning) and techniques, such as regression analysis and artificial neural networks;
- preferring the adoption of machine learning methods and techniques that facilitate control and understandability, allowing for the AI algorithms to use more complex logic only for a few cases that really need it (balancing transparency and business performance);
- establishing periodical reviews of the algorithms used to guarantee they are bias-free;
- designing updatable AI systems to train proxies to verify whether predictions of terms that cannot be determined in advance through the learning inputs (such as in a litigation process) are correlated with the automated decisions generated by the AI system;
- creating AI-based products with the possibility for human intervention when necessary;
- maintaining multi-racial and multi-gender teams of program developers and machine learners;
- keeping updated with guidance from MCTIC and investigations and decisions from ESPEC; and
- providing an efficient communication channel with data subjects, in order to receive and execute requests regarding their individual rights regarding AI decision-making processes.
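The first recommendation above (storing data from each decision so it can be probed afterwards) can be sketched as a minimal decision audit log. This is an illustrative sketch only, not a mechanism prescribed by the LGPD or any draft bill; all names and fields are hypothetical:

```python
import time

class DecisionAuditLog:
    """Minimal audit trail for automated decisions, so that each
    outcome can be reviewed afterwards, for example on a data
    subject's request for an explanation."""

    def __init__(self):
        self._entries = []

    def record(self, subject_id, inputs, decision, criteria):
        # Store what the system saw, what it decided, and why.
        entry = {
            "timestamp": time.time(),
            "subject_id": subject_id,
            "inputs": inputs,        # the features the model actually used
            "decision": decision,    # the automated outcome
            "criteria": criteria,    # human-readable decision criteria
        }
        self._entries.append(entry)
        return entry

    def explain(self, subject_id):
        # Every logged decision for one data subject, supporting a
        # right-to-explanation request.
        return [e for e in self._entries if e["subject_id"] == subject_id]

log = DecisionAuditLog()
log.record("user-42", {"income": 3500, "defaults": 0}, "credit_approved",
           "income above threshold and no prior defaults")
```

Keeping the criteria in human-readable form alongside the raw inputs is what makes the stored decisions understandable and accessible, rather than merely archived.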
The Inside Track
What skills and experiences have helped you to navigate AI issues as a lawyer?
Tatiana Campello: Being an IP and privacy lawyer helped me to navigate AI issues. In the end, we need to analyse subjects in a broader way, and each background and area of knowledge brought me expertise and material to understand AI.
Vanessa Ferro: Being a critical thinker and a curious individual helps me to navigate AI issues. During my master’s degree, I took a course on AI and focused my research on the technical literature to understand how machine learning functions.
Isabela Garcia de Souza: Studying and analysing the technical aspects of AI helps me with issues arising from it, allowing me to understand its functionalities and keep up with developments in the field.
Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?
TC: Everything related to privacy aspects (big data) and the discussions about authorship and ownership, more related to intellectual property.
VF: AI-generated artwork (so much so that I wrote a book about it). AI developments in healthcare, natural language processing, advertising and the impacts of deepfakes are also thrilling. The AI developments in healthcare will offer the greatest opportunities.
IGS: Research in the medical field for early detection and treatment of lethal diseases.
What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?
TC: One of the biggest challenges is society’s understanding of AI: its development, its regulation and the balance of its benefits and consequences.
VF: The black box phenomenon, as well as the impact of AI on the job market.
IGS: Biased algorithms and how they can negatively impact people’s lives.