Introduction
In the United States (US), laws relating to artificial intelligence (AI) are emerging in several distinct contexts, including (1) Executive Orders, (2) proposed legislation, (3) new and existing regulations, (4) enforcement activities by federal, state and local authorities, and (5) private litigation. The subjects of legal activities also differ substantially, even within these categories; for instance, some are aimed at:
- technologies themselves (e.g., facial recognition or foundation models such as large language models (LLMs));
- higher impact or risk use cases (e.g., law enforcement, hiring, loan underwriting);
- civil rights (e.g., non-discrimination, non-surveillance);
- legal rights (e.g., privacy, child online safety, copyright);
- health and safety (e.g., minimum performance or safety standards); and
- business issues (e.g., liability, reliability).
This is a dynamic space. Lawyers in all corners of government, business, academia and civil society are starting to consider how existing laws will apply, where new laws or concepts will need to be developed and where policy is becoming actionable (for instance, the Biden administration's use of Executive Orders, and ongoing debate in Congress on specific proposals). In addition, frameworks and guidelines are multiplying, revealing common opportunities and concerns across industries and geographies.
Eventually, settled compliance practices will emerge around AI. Even now, however, counsel need to approach AI as a mixed discipline entailing the review of existing laws and guidance, strategy and risk management, 'agile governance' practices and first principles. AI laws emerging in other parts of the world (e.g., the EU, the United Kingdom, China, Singapore) will apply in some cases and be instructive in others; in short, for counsel in the US, this uneven terrain represents a new sort of practice.

