Ten Considerations for Developing an Effective Generative AI Use Policy

This year's news has been full of stories about "generative" artificial intelligence (AI) applications. Generative AI tools create code, text, images, and other content in response to text prompts, queries, and other inputs. These tools have the potential to make research, writing, coding, graphic design, and other forms of content creation and manipulation much faster and easier. But as with other emerging technologies, rapid, widespread adoption in a still-developing legal and regulatory environment can give rise to risk.

To manage these risks, many companies are adopting an acceptable use policy (AUP) to govern their use of third-party generative AI tools, educating employees on that policy, and monitoring initial use cases and the quality, legality, and accuracy of outputs, particularly before generated content is published or otherwise used publicly.

Crafting an appropriate AUP for generative AI is a process that requires careful consideration and collaboration across multiple departments. Each policy will be different, reflecting the company's business needs and culture, the nature of the intended uses of such tools, and the company's level of risk tolerance in light of its industry and the applicable evolving legal and regulatory landscape. Policies should be flexible because generative AI tools—and the laws and regulations governing them—are developing.

Considerations When Crafting AUPs

For many companies, the following considerations are a good starting point for developing a generative AI AUP.

  1. Ensure alignment with the organization's business needs, values, and culture. AUPs for generative AI are not one-size-fits-all; they should reflect the voice of the organization. They may also address how and when employees can use AI applications in a manner that aligns with the company's priorities, culture, desired use cases, and risk tolerance. The nature of an organization's business and the sophistication of the intended users may also shape the policy. Involving a wide range of stakeholders from across the organization can produce a more thoughtful AUP by surfacing different points of view and use cases, and it also builds buy-in for the policy.
  2. Design for flexibility. AI applications are rapidly developing, as are the laws that apply to them. Organizations may have to adjust their approach to third-party AI applications to reflect these changes. To simplify updates, consider creating subsidiary implementation documents that, for example, list approved third-party applications, use cases that are preapproved or prohibited without exception, or categories of information that may not be disclosed to third-party applications (a hypothetical machine-readable version of such a list is sketched after this list).
  3. Provide guidance on human oversight. AI tools show enormous promise but are also prone to factual errors (often fancifully referred to as "hallucinations"). As a result, organizations should consider how the AUP will address requirements for human oversight of AI use, including review of AI-generated output.
  4. Consider the applicable regulatory environment. AI regulations are being proposed and adopted at a rapid rate domestically and abroad, including on topics such as automated decision-making, algorithmic bias, and transparency. Laws that are not specific to AI also govern how AI tools should or should not be used and the potential for adverse impacts. AUPs can help raise awareness within the organization of how laws applicable to its use cases may apply to the use of AI, provide guidance or resources to help users comply, or, where appropriate, direct users to obtain legal review before a use is permitted.
  5. Address the ethical and responsible use of AI. The ethical and responsible use of AI has become an important topic, and companies may want to consider including provisions in their AUP that address topics like transparency, privacy protection, accountability, and bias, even beyond what might be legally required.
  6. Consider the nature of the use. Some uses of generative AI are inherently riskier or may raise more issues than others. As a result, it may make sense to adapt the policy based on the nature of the use. For example, more permissive rules might apply to using generative AI tools for internal uses, such as for inspiration, that will not be seen outside the organization and will not be incorporated into products or key creative assets. On the other hand, uses for automated decision-making or in areas where accuracy is essential may merit a more restrictive approach.
  7. Ensure compliance with applicable agreements. Organizations should consider how contractual obligations may affect their use of generative AI and how this should be addressed in their AUP. For example, it may be appropriate to address compliance with contractual confidentiality obligations to third parties as well as the terms applicable to the specific AI tools being used. Organizations may want to require approval of specific tools before use to make sure that the applicable terms do not impose restrictions that would be difficult to comply with (e.g., limits on permitted inputs or on how output can be used) or create unfavorable results (e.g., regarding intellectual property (IP) ownership and privacy compliance).
  8. Address data privacy and security. Organizations should consider including guidance in AUPs to address compliance with applicable privacy policies and other privacy obligations when AI tools are used. For example, the policy might specify that employees cannot include personal information in text prompts or other inputs without prior approval (a simple screening sketch appears after this list).
  9. Consider IP infringement risk. Depending on the nature of the tool and how it is deployed, the use of generative AI tools may create infringement risk. Organizations should consider addressing such risks in an AUP and seek to provide useful, practical guidance tailored to the organization's likely use cases.
  10. Consider IP protection risk. There are also open questions around IP protection for the outputs of generative AI. Courts have held that the output from a generative AI tool is not patentable, and although there is no case law yet with respect to copyright protection, the U.S. Copyright Office has taken a very narrow view of the copyrightability of AI-generated output that seems to preclude protection for most such outputs. This potential lack of protection should be considered in drafting AUPs. For example, it may be appropriate to address when and how generative AI may (or may not) be used in areas that implicate IP rights. The protection of confidential information and trade secrets is another important issue to consider addressing in AUPs.
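As noted in point 2, subsidiary implementation documents can keep the AUP itself stable while approved tools and use cases change. Where such a list is kept in machine-readable form, it can also be checked programmatically. The following Python sketch is purely illustrative: the tool names, use-case labels, and the is_use_permitted helper are hypothetical examples, not references to any actual product or policy.

```python
# Illustrative sketch only: a hypothetical machine-readable allowlist that a
# subsidiary implementation document might maintain alongside the AUP.
# All tool names and use-case labels below are invented examples.

APPROVED_TOOLS = {
    # tool name -> use cases preapproved for that tool
    "ExampleChatTool": {"internal_brainstorming", "code_suggestions"},
    "ExampleImageTool": {"internal_mockups"},
}

# Use cases prohibited without exception, regardless of tool
PROHIBITED_USE_CASES = {"automated_decision_making"}

def is_use_permitted(tool: str, use_case: str) -> bool:
    """Return True only if the tool is approved and the use case is both
    preapproved for that tool and not prohibited outright."""
    if use_case in PROHIBITED_USE_CASES:
        return False
    return use_case in APPROVED_TOOLS.get(tool, set())

print(is_use_permitted("ExampleChatTool", "internal_brainstorming"))  # True
print(is_use_permitted("UnvettedTool", "internal_brainstorming"))     # False
```

Because the allowlist lives outside the policy text, adding or removing a tool becomes a data change rather than a policy amendment.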
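Similarly, as a rough illustration of the guardrail described in point 8, the sketch below screens prompt text for obvious personal-information patterns before it is sent to a third-party tool. The regexes and the screen_prompt helper are simplistic, hypothetical examples; any real deployment would need far more robust detection and an approval workflow.

```python
import re

# Illustrative sketch only: a naive screen for obvious personal information
# in a prompt before submission to a third-party generative AI tool.
# These regexes are simplistic examples, not a complete PII detector.

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of the PII patterns found in the prompt; an
    empty list means no obvious personal information was detected."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

hits = screen_prompt("Summarize the complaint filed by jane.doe@example.com")
if hits:
    print(f"Prompt blocked pending approval; detected: {', '.join(hits)}")
```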

Takeaways

Generative AI promises to be a game changer for many companies in many ways. But companies should carefully consider, from the outset, how these tools are used and how tailored internal policies can guide users and minimize risk. There is no one-size-fits-all AUP for generative AI. Rather, companies should reflect on their specific needs and the nature of the contemplated uses to determine the right strategy for governing generative AI tools. Perkins Coie has a team of lawyers advising companies on these issues and assisting with the drafting and implementation of AUPs.

© 2023 Perkins Coie LLP
