Artificial intelligence has been on Washington’s radar for decades, at least conceptually. More concretely, over the past few years the federal government has sought to keep up with the dizzying pace of advances by Big Tech and any number of smaller startups – not to mention international competitors, most notably China.
Congress and the executive branch – including the White House and a wide range of federal agencies in both the national security and civilian economy spheres – have increasingly supported direct investments, promoted incentives for stepped-up R&D, and worked to develop non-regulatory guidance for the public and private sectors in navigating the economic, technological and social implications of AI.
Ensuring a leading global role for the US in AI development and implementation is a prime motivator for American policymakers. To that end, Washington has been reluctant to adopt, or even propose, a sweeping EU-style regulatory regime governing applications and oversight of AI, for fear that it would slow innovation.
Across the Atlantic, however, the European Union has unveiled a strict new regulatory regime for AI. While Washington and Brussels are pursuing divergent approaches and philosophies toward a comprehensive AI regulatory framework, the nature of globalization means that European regulations are bound to have extraterritorial implications for American companies. US-based tech giants with huge global footprints will inevitably find themselves squarely in the crosshairs of European regulators. American officials could find themselves confronted with competing imperatives: defending the interests of homegrown innovators from Silicon Valley while at the same time working within the constraints of the EU regime. Indeed, the EU framework may well end up becoming the de facto global standard for policing AI – just as the General Data Protection Regulation (GDPR) became the de facto privacy standard for multinationals after it took effect in 2018.
The closest the US has come to regulating AI to date is draft guidance to the heads of federal agencies issued by the Trump Administration in early 2020, which advocated a lighter regulatory touch and favored non-regulatory pilot programs. “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” warns the January 7, 2020, Office of Management and Budget (OMB) memorandum to the heads of executive departments and agencies. The guidance further states, “Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.” And the Trump White House’s request for comments on the guidance further emphasized this laissez-faire approach: “OMB guidance on these matters seeks to support the U.S. approach to free-market capitalism, federalism, and good regulatory practices (GRPs).”
That being said, it is important to remember that landmark US laws like the Civil Rights Act of 1964 and the Equal Credit Opportunity Act of 1974 already prohibit forms of discrimination in employment and lending. Businesses that increasingly rely on advanced technologies such as AI in their operations and decision-making should be mindful of unintended outcomes that may run counter to existing law.
Today, more specific regulatory guidelines for AI are being promulgated on an agency-by-agency basis, but nothing drastic or prohibitive has been proposed. That could change under President Joe Biden, who has made fairness and equality central tenets of his presidency and recently appointed Lina Khan as chair of the Federal Trade Commission. Guidance may eventually surface that insists upon greater transparency and explainability to address concerns about algorithmic discrimination and bias, but overarching restrictions or bans remain unlikely at the federal level.
Early this year, the Food and Drug Administration (FDA) issued its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. Recognizing that AI applications are constantly evolving with new data, the FDA said its “plan outlines a holistic approach based on total product lifecycle oversight to further the enormous potential that these technologies have to improve patient care while delivering safe and effective software functionality that improves the quality of care that patients receive.”
The US Department of Transportation (DOT) is another federal agency pursuing research and adopting guidelines on AI initiatives, particularly with regard to automated driving systems (ADS) and unmanned aircraft systems (UAS, commonly known as drones). In January 2021, DOT issued its updated Automated Vehicles Comprehensive Plan, which among other goals seeks to “modernize regulations to remove unintended and unnecessary barriers to innovative vehicle designs, features, and operational models, and will develop safety focused frameworks and tools to assess the safe performance of ADS technologies.”
Meanwhile, the Federal Aviation Administration (FAA), an agency under DOT, has established the Alliance for System Safety of UAS through Research Excellence (ASSURE). Composed of leading research institutions and industry and government partners, ASSURE brands itself as the “go-to high-quality” center for “policy, regulations, standards, training, operations, and education.”
In Congress, the legislation that most closely resembles regulation of AI is the Algorithmic Accountability Act, which is expected to be re-introduced in the new Congress soon. The legislation would require companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions. Failure to comply would be considered an unfair or deceptive act under the Federal Trade Commission Act. Its primary author, Senator Ron Wyden (D-OR), is now the chair of the powerful Senate Finance Committee, increasing the bill’s prospects for consideration and passage, though its ultimate fate remains uncertain. Companies and other organizations that develop or use AI systems are well advised to be vigilant about the potential for biased decision-making and to take whatever steps they can to prevent these problems or to promptly address them if they arise.
President Biden has sought to put his own stamp on the emerging US AI strategy but has not yet signaled a major shift from this non-regulatory approach. He has established an interagency Task Force on Scientific Integrity within the National Science and Technology Council, pursuant to his January 27 memorandum to the heads of executive agencies and departments.
The President has also issued a series of executive orders aimed at boosting American manufacturing while securing and onshoring supply chains. While these policy developments are not AI-specific, they reference the need to fill gaps in policies regarding emerging technologies, including AI. President Biden has also raised the profile of key posts with jurisdiction over cyber and tech issues, and many, if not most, of his nominees and appointees cite more effective use of AI among their top priorities. And on May 6, the White House and Dr. Lynne Parker announced the launch of a central AI website that will serve as a focal point for actions taken to date on AI by US government agencies, the White House’s strategic vision for AI, and the foundational AI legislation and executive orders currently in place.
One area of continuity between the former and current Administrations is the tendency to view AI and other disruptive technologies through the lens of an emerging tech race with China. The US Innovation and Competition Act of 2021 – massive R&D and science-policy legislation with bipartisan support – moved through the Senate at an unusually swift pace. Proponents of that legislation have made clear that their animating goal is to counter China’s ascendancy in transformative innovations like AI, semiconductors, and robotics.
The Biden Administration has made the promotion of democratic ideals against authoritarianism a major policy focus and has framed the technological competition with China as part of this effort. President Biden and his team have sought to enlist America’s allies – most prominently the nations of the EU – as partners in a long-term campaign to ensure that international standards of AI practices and governance are consistent with these shared democratic values. US-EU cooperation on AI best practices could also lay the foundation of a transatlantic AI marketplace that would benefit hundreds of millions of users and consumers on both sides of the pond.
Based on the evidence so far, policymakers in Washington, at least in the Biden Administration’s first six months, appear primarily focused on innovation and competition with China, and it is difficult to envision a comprehensive domestic AI regulatory regime taking shape in either Congress or the executive branch in the near term. Over time, increased regulation and enforcement of narrow AI applications should be expected as new agency heads take their seats and staff up. Just don’t expect anything along the lines of the expansive regime across the pond.