Biden Administration Updates Roadmap for AI Research and Development
The National Science and Technology Council (NSTC) recently bolstered the Biden administration's continued focus on responsible artificial intelligence (AI) development by announcing an update to the National Artificial Intelligence Research and Development Strategic Plan (Strategic Plan). The Strategic Plan, which was first published in 2016 and later updated in 2019, lays out a roadmap for federal investments in AI research and development (R&D).
The Strategic Plan primarily describes federal R&D investment strategies to help maintain American leadership in AI research and innovation. While the updated Strategic Plan is largely consistent with the 2019 version, it places greater emphasis on responsible AI development and establishes a new approach to international collaboration in AI research. The Strategic Plan explicitly leaves discussions of AI regulation and governance to other federal documents, such as the Blueprint for an AI Bill of Rights and the AI Risk Management Framework.
Eight AI Strategies Reaffirmed
The Strategic Plan reaffirms the eight AI strategies outlined in previous versions of the NSTC's roadmap, which are to:
- Make long-term investments in fundamental and responsible AI research by funding foundational and use-inspired research to advance task-specific and scalable general-purpose AI systems, improved hardware, and sustainable computing systems.
- Develop effective methods for human-AI collaboration, including by measuring and improving performance, cultivating trust, and understanding user needs and requirements to enable safe and effective human-AI collaboration.
- Understand and address the ethical, legal, and societal implications of AI, including by:
  - Establishing an extensive R&D program to design values-aligned AI by proactively developing frameworks and metrics to protect security, accessibility, privacy, and accountability.
  - Identifying structures to help mitigate potential risks and implement AI that fosters public trust.
  - Soliciting stakeholder feedback and remaining mindful of both the possible harms and the distribution of the likely benefits of AI.
- Ensure the safety and security of AI systems by increasing research on safe human-AI interaction, the potential long-term risks of AI systems, and methods for improving AI system security.
- Develop shared public datasets and environments for AI training and testing to help democratize access to necessary AI development resources and produce innovative and equitable results.
- Measure and evaluate AI systems through standards and benchmarks, including in areas such as engineering, metrics, safety, usability, security, fairness, and privacy, to ensure AI systems meet critical objectives and bring credibility to AI advancements.
- Better understand the needs of the national AI R&D workforce by evaluating the current AI workforce and improving opportunities for workforce development.
- Expand public-private partnerships to accelerate advances in AI, including by creating and improving public-private R&D partnership mechanisms that leverage each partner's strengths and by expanding partnership opportunities to more diverse stakeholders.
The Newest Strategy: International Collaboration in AI R&D
In light of the increasing global attention on AI, the NSTC also adopted a new strategy to establish a principled and coordinated approach to international collaboration in AI research. This strategy aims to cement the United States' place as a central hub for AI R&D by facilitating international research collaborations and cultivating international standards and cross-border frameworks. Specifically, the Strategic Plan describes the following four areas of focus:
- Cultivating a global culture of developing and using trustworthy AI.
- Supporting development of global AI systems, standards, and frameworks.
- Facilitating international exchange of ideas and expertise.
- Encouraging AI development for global benefit.
The NSTC emphasizes that federal agencies pursuing this goal should weigh the benefits and risks of partnerships with countries that may not uphold democratic values or respect for human rights. Further, the NSTC encourages agencies to develop and establish rigorous data-sharing and data-privacy standards to safeguard sensitive data, individual privacy, and national security.
This new addition aligns with statements from the recent 49th G7 summit, where leaders of the United States, Japan, Canada, France, Germany, Italy, the United Kingdom, and the European Union announced support for the development and adoption of international technical standards for trustworthy AI and agreed to establish a G7 working group to discuss generative AI before the end of the year.
Looking Forward
The NSTC's Strategic Plan covers a broad range of issues with the goal of guiding U.S. science and technology policymaking toward responsible AI innovation that serves the public interest. In conjunction with the updated roadmap, the White House also announced a new request for information from the Office of Science and Technology Policy (OSTP) on national priorities in mitigating AI risk, and a report from the U.S. Department of Education's Office of Educational Technology on the risks and opportunities related to the use of AI in education.
AI remains at the forefront of the minds of regulators, and further developments can be expected throughout the year. The Artificial Intelligence & Machine Learning industry group at Perkins Coie will continue to monitor changes to the AI regulatory landscape to keep clients informed of potential legal and regulatory issues surrounding the development and use of AI products and services.
The authors wish to acknowledge Summer Associate Sydney Veatch's contributions to this Update.
© 2023 Perkins Coie LLP