New York and New Jersey Make an Early Effort to Regulate Artificial Intelligence

Update: New York Governor Andrew Cuomo signed SB S3971B into law on July 24, 2019, creating a state commission to study and investigate how to regulate artificial intelligence, robotics and automation as described below.

Original text: In recent years, the use of artificial intelligence (AI) solutions in every sphere of the economy has increased dramatically. In response to this rapid growth, governments are scrambling to regulate the new technology. Most recently, New York and New Jersey proposed bills to study and regulate AI. The New York bill would establish a commission to study and make recommendations on issues relating to artificial intelligence, robotics and automation; in contrast, the New Jersey bill would impose substantive obligations on businesses to assess and disclose risks relating to their use of AI decision-making and high-risk data processing.

Summary of New York's Legislation

Under New York's AI legislation, a temporary state commission known as the New York State artificial intelligence, robotics and automation commission would study and make recommendations on issues such as:

  1. Current law within the state of New York addressing artificial intelligence, robotics and automation;
  2. Comparative state policies that have aided in the creation of regulatory structures for AI, robotics and automation;
  3. Criminal and civil liability regarding violations of law caused by entities utilizing AI, robotics and automation;
  4. The impact of AI, robotics and automation on employment in New York;
  5. The impact of AI, robotics and automation acquiring confidential information and the necessary disclosures;
  6. Potential restrictions on the use of AI, robotics and automation in weaponry;
  7. The potential impact of regulatory measures proposed by the commission on the technology industry; and
  8. Public sector applications of AI and cognitive technologies.

The commission must issue a final report to the governor and other senior members of New York's Senate and Assembly no later than thirty days before the expiration of the act on December 31, 2020.

The bill passed the New York Senate on May 19, 2019, and the Assembly on June 3, 2019, and is awaiting signature by the governor. [Update: New York Governor Andrew Cuomo signed SB S3971B into law on July 24, 2019.]

This proposed state commission follows a 2017 New York City law (Local Law 49), which established a task force to study "automated decision systems" that New York City utilizes to make decisions concerning rules, policies or actions that affect the public. The New York City automated decision systems task force was responsible for producing a report that assessed whether the use of automated decision systems disproportionately affects persons based upon protected status, e.g., age, race, creed, color, religion, etc. However, almost 18 months after Local Law 49 went into effect, New York City officials reported that the task force could not reach consensus on what constitutes an "automated decision system," casting doubt on whether the task force will meet the deadline to issue a complete report of policy recommendations by fall 2019.

It remains to be seen whether either the New York City task force or the commission contemplated by the broader New York state proposal on artificial intelligence, robotics and automation will produce a report and recommendations, even if the latter is enacted this session.

Summary of New Jersey's AI Legislation

If passed into law, the current draft of the New Jersey Algorithmic Accountability Act (NJAAA) would require businesses that meet revenue or data processing thresholds to conduct assessments designed to reduce the risks of using "high-risk" automated decision systems or "high-risk" information systems on New Jersey consumers.

Under the NJAAA, a system is deemed "high-risk" if, among other things, it poses significant risk to privacy or security, systematically monitors or evaluates consumers or publicly accessible areas, or results in inaccurate, biased or discriminatory decisions impacting consumers. More specifically, for systems that are deemed "high-risk" under the NJAAA, covered entities will be required to do the following:

  1. Conduct an automated decision system impact assessment, i.e., a study evaluating an automated decision system, including its design and the data used to train the automated decision system, as well as its impact on accuracy, fairness, bias, discrimination, privacy and security;
  2. Conduct a data protection impact assessment, i.e., a study evaluating the extent to which an information system protects the privacy and security of personally identifiable information that the system processes;
  3. Work with independent third parties, such as auditors and technology experts, in conducting the assessments described in (1) and (2);
  4. Record any racial or other bias, or any threat to the security of a consumer's personally identifiable information discovered through the assessments; and
  5. Provide any other information that the Director of the Division of Consumer Affairs in the Department of Law and Public Safety requires.

The NJAAA applies only to "covered entities," i.e., businesses that have at least one of the following characteristics: (1) greater than $50,000,000 in average annual gross receipts for the three-taxable-year period preceding the most recent fiscal year; (2) possession or control of personally identifiable information on more than 1,000,000 New Jersey consumers or 1,000,000 consumer computers or mobile telecommunications service devices; or (3) status as a data broker.

Although the NJAAA does not provide a private right of action, the Director of the Division of Consumer Affairs in the Department of Law and Public Safety may bring a civil action against a covered entity if, after reviewing the assessments detailed above, the director determines that the interests of residents are threatened or adversely affected by a practice that violates the act.

The NJAAA must clear several legislative hurdles before making its way to the governor's desk, so it is unclear whether it will ever become law.

New York and New Jersey Are Not Outliers in Their Quest for AI Regulation

In addition to New York and New Jersey, other states and cities such as Illinois and San Francisco, as well as the federal government, are also considering legislation to regulate AI and mitigate its perceived risks. On April 10, 2019, federal lawmakers introduced the Algorithmic Accountability Act of 2019, which is sponsored by New Jersey Senator Cory Booker and Oregon Senator Ron Wyden. The bill is currently circulating with the House Energy and Commerce Committee. The Algorithmic Accountability Act largely mirrors the requirements of the NJAAA, presumably as a result of Senator Booker's sponsorship of the bill. Notably, the federal legislation gives rulemaking authority to the Federal Trade Commission (FTC) and gives the FTC enforcement authority for violations of the act as an unfair or deceptive act or practice.

Interestingly, the proposed Algorithmic Accountability Act does not preempt state law. As a result, with various states, localities and the federal government promulgating regulations regarding AI and other automated decision-making systems, businesses and consumers could potentially be subject to disparate and overlapping requirements. We will continue to monitor and track these regulatory developments.

© 2019 Perkins Coie LLP
