Artificial intelligence (“AI”) raises ethical concerns for both individuals and organizations. Google, Facebook and Stanford University have invested in AI ethics research centers, and in 2018, France and Canada jointly sponsored an international panel to discuss the “responsible adoption” of AI. Earlier this year, the European Commission released its guidelines to encourage the ethical development of “trustworthy AI.”
And recently, the Korea Communications Commission (“KCC”) and the Korea Information Society Development Institute (“KISDI”), a global ICT policy institute, jointly announced similar principles to govern the creation and use of AI, focused on the proper protection of human dignity (“the AI Ethics Principles”). These basic rules are to be complied with by all members of society, including the government, corporations and users.
In December 2019, the KCC plans to host an international conference under the theme of “Trustworthy AI,” designed to introduce these principles and to attempt to reach an international consensus on the direction of user protection.
Moreover, the KCC plans to collect comments and opinions from various stakeholders, including users, corporations and subject matter experts, as well as from the international community, and to incorporate the feedback into the framework. Therefore, companies that develop AI technologies are advised to pay close attention to how these principles continue to be developed and, where appropriate, to share their constructive comments.
1. Background and Purpose
As new types of intelligent information technology (e.g., artificial intelligence, big data, the Internet of Things (“IoT”)) are applied to broadcasting and telecommunications services, the roles of service providers and users need to be re-established to maximize the benefits and advantages of AI technology while minimizing unexpected technical and social risks and consequences. To help establish an environment that is safe from such unintended risks, the KCC has released the AI Ethics Principles as the basic principles applicable to all stakeholders.
2. Summary of the AI Ethics Principles
- Human-centered service: Provision and use of AI services must be human-centered and promoted in a way to guarantee the fundamental freedom and rights of human beings and to protect human dignity.
- Transparency and explainability: If AI services have any material impact on a user, relevant information must be prepared in a manner the user can understand, but only to the extent that the company’s legitimate interests are not adversely affected. Further, in the event any user’s basic right is violated, an explanation of the main reasons used to estimate, recommend and determine the services must be provided (i.e., ensuring the traceability of AI systems).
- Responsibility: All members of the AI community must acknowledge the joint responsibility to ensure proper functions of AI services and human-centered value creation, and all members must comply with relevant laws and agreements.
- Safety: All members of the AI community must strive to develop and use safe and reliable AI services. Also, service providers and users must establish and operate an autonomous response system to respond to any damage that may be caused by AI services.
- Anti-discrimination: All members of the AI community should acknowledge that AI services may cause a social or economic gap or unfairness, and should make efforts to minimize discriminatory elements in every phase of the development and use of algorithms (i.e., considering a wide range of human abilities, skills and requirements and helping ensure accessibility).
- Participation: All members of the AI community may participate in the public comment process (i.e., the user policymaking process) without any discrimination. To this end, a regular communications channel must be established for service providers and users to effectively communicate their suggestions and opinions.
- Privacy and data governance: Personal information and privacy of citizens should be protected in the entire process of the development, provision and use of AI services. Also, all members of the AI community must engage in continuous exchange of opinions to maintain a balance between sharing of technical benefits and the protection of one’s privacy.
3. Implications for AI Companies
The principles recently announced by the KCC are intended to proactively address user protection in the era of intelligent information, in which AI is routinely used, and the future direction of regulation is highly likely to be determined based on these principles.
Therefore, companies engaged in the development of AI technology should continue to monitor policy and regulatory developments, and should consider actively participating in the ongoing discussions and sharing their input.