State AI Legislation Report: The First Month of 2026

January 29, 2026

By: Thurston Powers

Bills introduced by state:

AK: 2 bills
ME: 1 bill
WI: 5 bills
VT: 6 bills
NH: 5 bills
WA: 3 bills
ID: 1 bill
MT: No bills
ND: No bills
MN: 12 bills
IL: 1 bill
MI: 3 bills
NY: 89 bills
MA: 6 bills
RI: No bills
OR: No bills
NV: No bills
WY: No bills
SD: 1 bill
IA: 8 bills
IN: 5 bills
OH: 1 bill
PA: No bills
NJ: 17 bills
CT: No bills
CA: 4 bills
UT: 8 bills
CO: No bills
NE: 6 bills
MO: 5 bills
KY: 4 bills
WV: 5 bills
VA: 30 bills
MD: 9 bills
DE: No bills
AZ: 8 bills
NM: 4 bills
KS: 1 bill
AR: No bills
TN: 5 bills
NC: 3 bills
SC: 5 bills
DC: No bills
OK: 11 bills
LA: No bills
MS: 17 bills
AL: No bills
GA: 4 bills
TX: No bills
FL: 12 bills
HI: No bills

1. Introduction

This report tracks AI-related legislation emerging in the first month of the 2026 U.S. state legislative sessions. As of January 29, 2026, early filings already reveal a clear pattern: states are moving aggressively to regulate artificial intelligence, with a particular focus on consumer protection, transparency, and preventing the most harmful applications of the technology.

The Legislative Landscape

Across 35 states with active AI legislation in this first month, we've identified approximately 300 bills directly addressing artificial intelligence. The volume and scope signal that 2026 will be a watershed year for AI regulation at the state level. New York alone has introduced 89 bills, followed by Virginia with 30 and New Jersey with 17.

Key Trends Across States

Analyzing the legislation reveals several dominant themes:

Guardrails Against Harmful Uses

The largest category of bills focuses on preventing AI's worst applications, with states targeting deepfake intimate images, AI-generated CSAM, and AI-facilitated crimes.

Human Oversight Requirements

A strong thread across states mandates that humans remain in the loop for consequential decisions, such as insurance claim denials and employment determinations.

Algorithmic Pricing and Antitrust Concerns

States are increasingly concerned about AI-driven price manipulation, particularly algorithmic rent-setting and dynamic product pricing.

Transparency and Disclosure

Disclosure requirements are becoming standard, from chatbot disclosures for minors to labeling and watermarking of AI-generated content.

Government AI Governance

States are building internal AI oversight infrastructure, including agency AI inventories, rules for state agency use of AI, and dedicated positions such as a Chief AI Officer.

Education Policy

Education is a contested battleground: some states are mandating AI literacy, while others are restricting AI chatbots in student instruction.

Mental Health Protections

Special concern exists around AI in mental health contexts, where states are restricting AI therapy services and requiring disclosure by mental health professionals.

The Posture: Caution Prevails

Overall, the legislative posture toward AI is defensive. States are primarily focused on consumer protection, mandatory transparency, and preventing the most harmful applications of the technology.

A smaller set of bills takes a facilitative approach, focusing on AI literacy education, establishing innovation sandboxes, and creating frameworks for responsible AI adoption in government.

Select State Highlights

Alaska (2 bills)

  • Resolution supporting federal Kids Online Safety Act
  • Defining synthetic media for election law purposes

Arizona (8 bills)

  • Conversational AI disclosure and safety requirements for minors
  • AI-assisted arbitration in divorce proceedings
  • Statewide AI education program
  • Prohibition on algorithmic rent pricing
  • Rules for AI use by state agencies

California (4 bills)

  • Human customer service requirement (5-minute response for AI handoff)
  • Ban on AI chatbots in toys for children under 12
  • AI disclosure requirements for mental health professionals
  • Sensitive personal information protections affecting AI training data

Florida (12 bills)

  • AI Bill of Rights establishing consumer protections
  • Mandatory human review of AI insurance claim denials
  • AI education task force for higher education
  • AI restrictions in psychological and therapy services
  • Technology education requirements including AI courses

Georgia (4 bills)

  • Annual AI inventory requirement for state agencies
  • Enhanced criminal sentencing for AI-facilitated offenses
  • Virtual peeping crimes involving AI

Minnesota (12 bills)

  • Prohibition on AI in health insurance utilization review
  • Ban on AI-driven dynamic product pricing
  • Prohibition on biased tenant screening algorithms
  • Ban on algorithmic rent-setting with competitor data
  • AI-generated CSAM criminalization
  • Social media algorithm restrictions for children

Missouri (5 bills)

  • AI Transparency and Accountability Act (labeling, watermarks, usage logs)
  • Unauthorized practice of law via AI prohibition
  • AI restrictions in mental health services

New York (89 bills)

  • Chief AI Officer position for state government
  • AI evidence admissibility standards in courts
  • Oaths of responsible use for generative AI
  • AI book publishing disclosure requirements
  • AI Bill of Rights
  • Advanced AI licensing requirements
  • Algorithmic pricing discrimination prevention
  • AI transparency for journalism
  • Automation displacement worker protections
  • Ban on algorithmic rent-setting devices
  • AI in employment decision-making restrictions
  • Biometric technology ban in schools
  • AI literacy education requirements

Utah (8 bills)

  • Digital literacy graduation requirement including AI literacy
  • Classroom technology and AI use policies
  • Digital Voyeurism Prevention Act (deepfake intimate images)
  • AI transparency amendments
  • Office of AI Policy amendments
  • Product pricing algorithm regulations

Virginia (30 bills)

  • Law enforcement AI policy requirements
  • Prohibition on AI chatbots in student instruction
  • Algorithmic pricing disclosure for landlords
  • AI in employment decisions regulation
  • AI chatbot disclosure and minor protection requirements
  • Mental health AI service restrictions
  • Fostering AI Innovation and Responsibility Act
  • AI verification organization requirements
  • Fair housing algorithmic discrimination prevention

Looking Ahead

The first month of the 2026 session establishes clear legislative priorities: protect consumers, require transparency, maintain human oversight in high-stakes decisions, and prevent the most harmful applications of AI technology. As sessions progress, we expect to see many of these bills evolve through committee hearings and floor debates, with the most pressing concerns — particularly around deepfakes, insurance claim denials, and housing algorithms — likely to see the most legislative action.

This report will be updated as the legislative session progresses and more bills are introduced, amended, and voted upon.

2. Methodology

Scholars Edge functions like a series of sieves, each designed to filter search results in a unique way. Artificial intelligence—particularly transformer models—is a cross-cutting technology used across a wide range of industries, applications, and use cases. For that reason, a concept search is the ideal tool for identifying related legislation. However, building an effective concept search is a process in itself.

To start, I copied the text of the Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure into a similarity search. This first search was not intended to be perfect; its purpose was simply to surface bills that could serve as the foundation for a concept search. Similarity search uses cosine similarity to identify documents that closely resemble the example text. The next filter applied an LLM to label whether each bill was truly focused on AI. Using this basic approach, I collected 10 bills from several states—broad enough to be representative—and organized them into two zip files, each containing five text files.
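The shape of this first pass can be sketched in a few lines. The snippet below is a toy illustration, not the Scholars Edge pipeline: `embed` stands in for a real transformer embedding model (here, a bag-of-words term-frequency vector), `is_ai_focused` stands in for the LLM labeling step (here, a keyword heuristic), and the bill names and texts are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: a sparse term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def is_ai_focused(bill_text: str) -> bool:
    # Placeholder for the LLM labeling filter ("Is this bill truly focused
    # on AI? yes/no"), reduced here to a keyword check.
    return any(k in bill_text.lower() for k in ("artificial intelligence", " ai ", "algorithmic"))

# The example text plays the role of the Executive Order; the bills are made up.
example = "executive order on artificial intelligence infrastructure and data centers"
bills = {
    "HB1": "a bill regulating artificial intelligence infrastructure disclosures",
    "HB2": "a bill concerning highway outdoor advertising standards",
    "HB3": "a bill requiring algorithmic transparency in artificial intelligence systems",
}

# Rank by similarity to the example text, then keep only AI-focused bills.
query = embed(example)
ranked = sorted(bills, key=lambda b: cosine(query, embed(bills[b])), reverse=True)
shortlist = [b for b in ranked if is_ai_focused(bills[b])]
print(shortlist)
```

The point of the sketch is the two-sieve structure: a cheap similarity ranking to surface candidates, then a more expensive classifier to keep only bills genuinely about AI.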

With those zip files in hand, I began building a new concept search. I uploaded the first zip file and saved it. Then, I ran the second zip file through the tool and copied the relevant concepts into the first search via the search editor. I carefully refined the combined concept list—removing overly broad or overly specific concepts, as well as those likely to bleed into unrelated topics. Where gaps existed, I added missing concepts manually. The resulting concept list was:
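The merge-and-refine step can also be sketched. Everything below is hypothetical scaffolding: the file names and concept strings are invented, and in the real workflow concept extraction happens inside the search tool rather than by reading lines from text files.

```python
import io
import zipfile

def concepts_from_zip(zip_bytes: bytes) -> set:
    # Treat each non-empty line of each text file as one extracted concept.
    out = set()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            for line in zf.read(name).decode("utf-8").splitlines():
                if line.strip():
                    out.add(line.strip().lower())
    return out

def make_zip(files: dict) -> bytes:
    # Build an in-memory zip so the sketch is self-contained.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, text in files.items():
            zf.writestr(name, text)
    return buf.getvalue()

zip_a = make_zip({"bill1.txt": "algorithmic pricing\nchatbot disclosure"})
zip_b = make_zip({"bill2.txt": "chatbot disclosure\nmodel weights oversight\ntechnology"})

# Union the two concept lists, then manually drop overly broad concepts.
merged = concepts_from_zip(zip_a) | concepts_from_zip(zip_b)
too_broad = {"technology"}
refined = sorted(merged - too_broad)
print(refined)
```

The union deduplicates automatically; the manual refinement (dropping overly broad or bleed-prone concepts, adding missing ones) is the judgment step described above.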

The search prompt was: “I am looking for legislation that focuses on Artificial Intelligence—regulating it, requiring disclosures, age verification for usage, requiring audits, submission of weights to the government for oversight, etc.”

Unlike the 2025 post-session report, I did not manually curate the results. Instead, this is the raw output of the search engine.

Alaska

Index of Bills

House - 28 - SUPPORT KIDS ONLINE SAFETY ACT

Legislation ID: 284976


Summary

This resolution calls on the U.S. Congress to adopt the Kids Online Safety Act, which seeks to implement safety measures for minors using online platforms. It highlights the risks associated with online use, including exposure to harmful content and mental health issues, and advocates for protections such as privacy safeguards and parental controls.

Key Sections

Key Requirements

  • Establishes accountability for compliance with safety standards through federal and state enforcement.
  • Establishes a council to advise Congress on risks to minors and safety practices.
  • Mandates independent audits and public reporting on platform usage and safety efforts.
  • Prohibits market research on children under 13 without parental consent.
  • Provides parents with tools to manage privacy settings and restrict access.
  • Requires online platforms to implement design safeguards for users under 17 years of age.
  • Requires platforms to offer options to opt-out of harmful algorithmic recommendations.
  • Restricts market research on minors aged 13 to 16 to those with parental consent.

Sponsors

Legislative Actions

Date Action
2026-01-23 (H) EDC, JUD
2026-01-23 (H) READ THE FIRST TIME - REFERRALS
2026-01-23 (H) REFERRED TO EDUCATION

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly addresses algorithmic recommendations, which are a core component of artificial intelligence systems used by online platforms.

Mechanism of Influence: By requiring platforms to offer an opt-out for algorithmic recommendations, the law mandates a change in how AI-driven content delivery systems operate for users under 17.

Evidence:

  • Requires platforms to offer options to opt-out of harmful algorithmic recommendations.

Ambiguity Notes: The term 'algorithmic recommendations' is a common proxy for AI-driven curation, though the specific technical definitions of the algorithms are often left to regulatory interpretation.

Analysis 2

Why Relevant: The user specifically requested legislation requiring audits and transparency for automated systems.

Mechanism of Influence: The mandate for independent audits and public reporting forces platforms to subject their internal processes, including AI-driven safety and moderation tools, to external scrutiny.

Evidence:

  • Mandates independent audits and public reporting on platform usage and safety efforts.

Ambiguity Notes: The scope of the 'independent audits' would likely be defined by the proposed Kids Online Safety Council or federal enforcement agencies.

Analysis 3

Why Relevant: The legislation focuses on age-based usage restrictions and design safeguards, which aligns with the user's interest in age verification and usage regulation.

Mechanism of Influence: Platforms must implement specific design safeguards for users under 17, which often involves AI-based age estimation or verification technologies to ensure compliance.

Evidence:

  • Requires online platforms to implement design safeguards for users under 17 years of age.

Ambiguity Notes: While the summary mentions 'design safeguards,' the implementation often relies on automated systems to identify and protect minor users.

Senate - 64 - ELECTIONS

Legislation ID: 18043


Summary

The bill addresses various aspects of political communication, particularly focusing on synthetic media and its implications for voter perception. It also outlines regulations for outdoor advertising near highways, procedures for voter registration, and the dissemination of election-related information. Additionally, it mandates a report on expanding early voting options in rural and low-income areas, highlighting the need for accessible voting practices.

Key Sections

Key Requirements

  • Allows temporary political campaign signs under certain conditions: signs must not exceed 32 square feet and must be displayed on private property without compensation.
  • An annual report on data handling must be submitted to the legislature.
  • The application must inform applicants of penalties and residency certification.
  • Individuals not on the official list can vote with absentee or special needs ballots.
  • Certain provisions apply to offenses committed on or after the effective date; the bill clarifies the effective date for specific legal amendments.
  • The commissioner must develop security protocols for data storage and transfer.
  • Communication must be disseminated through recognized media channels to an audience of voters, and the content must be interpretable as urging a vote for or against a specific candidate.
  • Defines low-income neighborhoods and rural communities for report purposes.
  • Excludes minimally edited media from being classified as synthetic media; synthetic media must be identifiable as a manipulated representation, must create a materially different understanding than the original depiction, and must not mislead voters about candidates or parties.
  • Requires clear definitions of synthetic media for regulatory purposes.
  • Preregistration: individuals must provide name, sex, date of birth, and residence address; declare U.S. citizenship and previous registration status; and declare intent to be 18 within two years of preregistration.
  • Voter eligibility: must be at least 18 years old, a U.S. citizen, registered to vote, a state resident for at least 30 days prior to the election, a municipal resident for at least 30 days, and must not pose a risk to public safety.
  • Monthly updates on applicants' information must be provided to the director of elections.
  • Must allow applicants to decline to register or update voter information.
  • Prohibits outdoor advertising within 660 feet of certain highways unless specific exceptions apply; advertising must conform to local, state, and federal highway standards.
  • Requires a report by November 1, 2026, on early voting accessibility in low-income neighborhoods and rural communities.
  • Requires public officials to file financial statements with the Alaska Public Offices Commission; all filed statements are public records, must be preserved for six years, and must be available at the commission's offices and on its website after each reporting period.

Sponsors

Legislative Actions

Date Action
2026-01-29 (H) FINANCE at 01:30 PM ADAMS 519
2025-05-20 (H) IN FINANCE
2025-05-20 (H) RULES TO CALENDAR PENDING FIN RPT/REF
2025-05-19 (H) IN FINANCE
2025-05-19 (H) RULES TO CALENDAR PENDING FIN RPT/REF
2025-05-16 (H) -- Delayed to a Call of the Chair --
2025-05-16 (H) FINANCE at 01:30 PM ADAMS 519
2025-05-16 (H) FINANCE at 09:00 AM ADAMS 519

Detailed Analysis

Analysis 1

Why Relevant: The bill provides a specific legal definition for 'synthetic media' created through artificial intelligence manipulation.

Mechanism of Influence: By defining AI-manipulated images, audio, and video, the bill creates a regulatory framework to identify and potentially restrict or require disclosures for deepfakes in political campaigns.

Evidence:

  • Defines synthetic media as AI-manipulated images, audio, or videos that create realistic but false representations of individuals, excluding minor edits or enhancements.

Ambiguity Notes: The exclusion of 'minor edits or enhancements' is not strictly defined, which could lead to disputes over whether a specific AI-enhanced video crosses the threshold into 'synthetic media'.

Analysis 2

Why Relevant: The bill defines 'interactive computer services', which are the primary platforms for the dissemination of AI-generated content.

Mechanism of Influence: This definition establishes the types of digital entities and services that may be subject to regulations regarding the hosting or transmission of AI-manipulated political communications.

Evidence:

  • Defines interactive computer service as any service enabling computer access to multiple users, including internet services and educational institution systems.

Ambiguity Notes: The definition is broad, covering everything from commercial internet services to educational systems, which may create varying levels of compliance burden.


Arizona

Index of Bills

House - 2245 - veterans claims pilot program

Legislation ID: 253434

Bill URL: View Bill

Summary

HB 2245 introduces a veterans' claims pilot program designed to enhance the claims development process for veterans. The program will utilize technology to conduct comprehensive reviews of veterans' records, identify service-connected conditions, and produce complete claim packets for submission to the U.S. Department of Veterans Affairs. The bill outlines the integration of this program with existing veteran services and mandates an evaluation of its effectiveness, including various performance indicators.

Key Sections

Key Requirements

  • Appropriates a specified sum from the state general fund and exempts the appropriation from lapsing provisions under Arizona law.
  • Requires comprehensive digital reviews of veterans' medical and service records and supporting documentation.
  • Mandates identification of all supportable primary and secondary service-connected conditions based on evidence.
  • Requires mapping of documented conditions to federal diagnostic and rating criteria.
  • Requires production of complete disability compensation claim packets for submission.
  • Ensures all users receive training on the technology system.
  • Ensures compliance with state and federal privacy laws.
  • Guarantees that data collected is not used for commercial purposes.
  • Integrates the program with state veteran benefits counselors, county veteran service offices, and tribal programs.
  • Maintains veterans' access to traditional service organization assistance.
  • Selects participants from various veteran service offices and establishes a control group.
  • Evaluates program effectiveness using specific performance indicators.
  • Requires submission of a report by July 1, 2027, detailing program activities and fiscal impacts.

Sponsors

Legislative Actions

Date Action
2026-01-20 House2nd Read
2026-01-15 House1st Read
2026-01-12 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates the use of automated technology systems to perform complex analytical tasks traditionally handled by humans, specifically the review of medical records and the mapping of conditions to federal diagnostic criteria.

Mechanism of Influence: It requires the Department of Veterans Services to assess the effectiveness of these 'claims development technology systems' through specific performance indicators and reporting, effectively requiring a government evaluation of an automated system's accuracy and utility.

Evidence:

  • assess the effectiveness of claims development technology systems
  • Requires comprehensive digital reviews of veterans medical records and supporting documentation
  • Requires mapping of documented conditions to federal diagnostic and rating criteria
  • Evaluates program effectiveness using specific performance indicators

Ambiguity Notes: While the bill uses the term 'technology' rather than 'Artificial Intelligence,' the functions described—such as mapping medical documentation to complex rating criteria—are characteristic of AI-driven diagnostic and decision-support tools.

House - 2311 - artificial intelligence service; disclosures; requirements

Legislation ID: 259099


Summary

HB2311 introduces regulations for conversational AI services in Arizona, requiring operators to disclose when minors are interacting with AI, implement safety measures to protect minors from inappropriate content, and provide tools for privacy management. The bill also outlines penalties for violations and establishes protocols for handling sensitive topics like suicidal ideation.

Key Sections

Key Requirements

  • Clarifies the scope and intended audience of conversational AI services, and defines key terms related to those services and their users.
  • Operators must clearly disclose to minor account holders that they are interacting with AI, via a visible disclaimer at the beginning of each session and every three hours during use.
  • Operators must prevent misleading statements about the AI's human-like qualities and cannot claim their AI provides professional mental health care.
  • Operators must prevent the AI from generating sexual content for minors or encouraging sexual conduct.
  • Operators must provide responses directing users to crisis services when prompted about suicidal ideation or self-harm.
  • Operators must offer privacy management tools for minor account holders and their guardians, with specific tools based on the age of the minor.
  • Violators are subject to civil penalties of $1,000 per violation, capped at $500,000.
  • No private right of action is created; enforcement is solely through the attorney general.

Sponsors

Legislative Actions

Date Action
2026-01-20 House2nd Read
2026-01-15 House1st Read
2026-01-12 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the user's interest in legislation requiring disclosures for AI usage.

Mechanism of Influence: It mandates that operators of conversational AI services must provide persistent disclaimers and regular notifications to minor account holders to ensure they are aware they are interacting with an artificial intelligence rather than a human.

Evidence:

  • Operators of conversational AI services must inform minor account holders that they are interacting with AI through persistent disclaimers and regular notifications during usage.
  • Disclosures must be visible and repeated at specified intervals.

Ambiguity Notes: The term 'persistent disclaimers' and 'regular notifications' are not strictly defined by specific time intervals in the abstract, which may lead to varying implementation standards among different operators.

Analysis 2

Why Relevant: The legislation focuses on regulating AI behavior and content safety, which aligns with the user's request for AI regulation.

Mechanism of Influence: It prohibits AI services from engaging in harmful interactions with minors, such as generating sexual content or encouraging sexual conduct, and requires specific crisis protocols for sensitive topics.

Evidence:

  • Operators are prohibited from allowing AI services to engage minors in harmful interactions, including generating inappropriate content or misleading statements about the nature of the AI.
  • Operators must prevent AI from producing sexual content or encouraging sexual conduct.

Ambiguity Notes: The definition of 'inappropriate content' may be subject to interpretation or further regulatory clarification to determine what specifically constitutes a violation.

Analysis 3

Why Relevant: The bill establishes a legal framework for liability and oversight of AI operators.

Mechanism of Influence: By empowering the attorney general to enforce civil penalties and clarifying that developers are not liable for third-party violations, the bill creates a regulatory oversight structure for AI deployment.

Evidence:

  • The bill establishes civil penalties for violations and clarifies that liability does not extend to developers of AI models for third-party violations.
  • Operators violating the bill may face civil penalties up to $500,000.

Ambiguity Notes: The exclusion of 'developers of AI models' from liability for 'third-party violations' creates a distinction between the creator of the technology and the entity operating the specific service interface.

House - 2371 - arbitration; divorce proceedings; artificial intelligence

Legislation ID: 259200


Summary

HB2371 amends Arizona law to introduce artificial intelligence-assisted arbitration in divorce cases, provided both parties consent and do not have minor children. The bill outlines the process for arbitration, the ability for parties to withdraw consent, and the appeal process for binding determinations made through this method. It defines artificial intelligence-assisted arbitration and establishes the jurisdiction of courts over these matters.

Key Sections

Key Requirements

  • Allows any court with jurisdiction to enter binding determinations from AI-assisted arbitration.
  • Allows either party to withdraw consent prior to issuance of a determination.
  • Binding determinations are appealable by filing a notice of appeal within twenty judicial days after issuance.
  • Defines artificial intelligence-assisted arbitration as a system, not a legal entity, that applies laws to facts and generates recommendations or binding determinations.
  • Requires written consent from both parties for AI-assisted arbitration.
  • Requires that the parties do not share minor children.
  • The superior court reviews the case de novo, without regard to the arbitration proceedings.

Sponsors

Legislative Actions

Date Action
2026-01-20 House2nd Read
2026-01-15 House1st Read
2026-01-12 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill provides a specific legal definition for artificial intelligence in the context of arbitration, which is a foundational element of AI regulation.

Mechanism of Influence: By defining AI-assisted arbitration as a computational system rather than a legal person, it establishes the legal status and limitations of AI tools used in the judicial process.

Evidence:

  • Defines artificial intelligence-assisted arbitration as a computational system that applies laws to facts and generates recommendations or binding determinations, clarifying that it is not a legal person with independent authority.

Ambiguity Notes: The term 'computational system' is broad and could encompass various types of algorithmic decision-making tools beyond generative AI.

Analysis 2

Why Relevant: The legislation regulates the usage of AI by establishing strict prerequisites for its application in legal disputes.

Mechanism of Influence: It mandates a disclosure and consent mechanism where both parties must provide written agreement, effectively regulating the deployment of AI in sensitive legal matters.

Evidence:

  • allows parties in a divorce without minor children to use artificial intelligence-assisted arbitration if both provide written consent, detailing whether the outcome will be a recommendation or binding determination.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill establishes a framework for oversight and human-in-the-loop review of AI-generated outcomes.

Mechanism of Influence: It creates a mandatory appeal process where AI-generated binding determinations are reviewed de novo by a human judge, ensuring that AI does not have the final word without judicial recourse.

Evidence:

  • Binding determinations made through artificial intelligence-assisted arbitration can be appealed to the superior court within twenty judicial days, with the court reviewing the case de novo without considering prior arbitration proceedings.

Ambiguity Notes: None

House - 2409 - artificial intelligence; statewide education program

Legislation ID: 259303


Summary

HB 2409 establishes the Arizona Artificial Intelligence Education Program within the Department of Education. The program will offer summer courses that cover digital hygiene and civic integrity, as well as AI applications for small business and entrepreneurship. The curriculum will include training on navigating the digital world safely, understanding algorithmic bias, and using AI tools for business operations. The program aims to empower residents with skills for economic success and to recognize digital manipulation.

Key Sections

Key Requirements

  • Develop a curriculum for the AI for Small Business and Entrepreneurship course, covering the use of AI for operations and marketing.
  • Identify qualified volunteer instructors from diverse backgrounds and sectors.
  • Locate facilities for course delivery; state and local entities, including state agencies and educational institutions, may allow the Department of Education uncompensated use of their facilities for the program.
  • Public schools, universities, and community colleges may award academic credit to individuals who complete the course.
  • The curriculum must include digital hygiene (safe online navigation, algorithmic bias, data protection, media literacy, and critical thinking), civic integrity, and AI for small business training.
  • The program must offer an artificial intelligence education course each summer in various locations in Arizona.

Sponsors

Legislative Actions

Date Action
2026-01-26 House2nd Read
2026-01-22 House1st Read
2026-01-12 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates the creation of a curriculum that specifically addresses AI-related risks such as algorithmic bias and data protection.

Mechanism of Influence: By institutionalizing education on algorithmic bias and data protection, the state creates a framework for public awareness of AI oversight issues, though it stops short of direct industry regulation.

Evidence:

  • The program must include a digital hygiene curriculum covering safe online navigation, algorithmic bias, data protection, media literacy, and critical thinking.

Ambiguity Notes: The bill does not define the specific standards or definitions for 'algorithmic bias' or 'data protection' that must be taught, leaving the substantive content to the discretion of the Office of Economic Opportunity.

Analysis 2

Why Relevant: The legislation focuses on the economic and operational integration of AI for small businesses.

Mechanism of Influence: It promotes the adoption of AI technologies by providing state-sponsored training on operations and marketing, which influences how AI is deployed in the local economy.

Evidence:

  • The program must also include a curriculum for small businesses on using AI for operations and marketing.

Ambiguity Notes: The provision focuses on promotion and education rather than restriction or mandatory disclosure, which may fall outside the scope of 'regulation' depending on the user's strictness.

House - 2410 - artificial intelligence; privileged communications

Legislation ID: 259305

Bill URL: View Bill

Summary

HB2410 introduces a new chapter to Title 18 of the Arizona Revised Statutes, specifically addressing the legal status of communications with artificial intelligence. It asserts that communications with AI will be considered privileged if the individual would have been entitled to privileged communication had they consulted a human professional, thereby extending legal protections to interactions with AI technologies.

Key Sections

Key Requirements

  • Establishes that communications with AI are treated as privileged if the individual would have been entitled to privilege in an equivalent consultation with a human professional.

Sponsors

Legislative Actions

Date Action
2026-01-21 House 2nd Read
2026-01-20 House 1st Read
2026-01-12 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the legal status and privacy protections of interactions with AI, which falls under the regulation of AI usage and data handling.

Mechanism of Influence: It grants AI-human interactions the same legal protections as professional-client relationships, preventing such communications from being used as evidence or disclosed in contexts where privilege applies.

Evidence:

  • communications between a person and an artificial intelligence are considered privileged if the individual would have received similar privilege had they sought advice from a human professional.

Ambiguity Notes: The term 'human professional' is not explicitly defined in the provided text, leaving it open to interpretation regarding which specific professional privileges (e.g., legal, medical, clergy) are extended to AI.

House - 2490 - rental price fixing; algorithmic pricing

Legislation ID: 265734

Bill URL: View Bill

Summary

HB 2490 introduces regulations concerning algorithmic pricing in the rental market. It defines key terms related to algorithmic devices and establishes prohibitions against their use in ways that could facilitate collusion among landlords or manipulate rental prices and terms. The bill outlines enforcement mechanisms and specifies the conditions under which these regulations apply, as well as exceptions for certain types of housing.

Key Sections

Key Requirements

  • Applies to landlords with five or more rental units; landlords with four or fewer units are exempt.
  • Prohibits the use of algorithmic devices for setting rental prices, renewal terms, and occupancy levels.
  • Creates a rebuttable presumption of conspiracy to restrain trade if algorithmic devices are used in violation.
  • Exempts federal, state, or local government housing programs, including public housing, from the algorithmic device prohibition.

Sponsors

Legislative Actions

Date Action
2026-01-21 House 2nd Read
2026-01-20 House 1st Read
2026-01-12 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the application of algorithms, which are foundational components of artificial intelligence, within the real estate and rental sectors.

Mechanism of Influence: It imposes a legal prohibition on using specific types of algorithmic tools for price coordination and establishes a rebuttable presumption of illegal trade practices for violations.

Evidence:

  • This provision prohibits landlords and coordinators from using algorithmic devices that utilize nonpublic competitor data to coordinate rental pricing and terms
  • This section provides definitions for key terms used in the article, including algorithm, algorithmic device

Ambiguity Notes: The scope of 'algorithmic device' and 'algorithm' depends on the specific definitions provided in the bill, which may broadly encompass various automated decision-making systems or narrow predictive models.

House - 2592 - artificial intelligence; state agencies; rules

Legislation ID: 266104

Bill URL: View Bill

Summary

HB2592 introduces regulations for the use of artificial intelligence systems by state budget units. It mandates the identification of opportunities for AI implementation, elimination of restrictive regulations, and the establishment of governance structures. The bill also requires legislative ratification for any rules specifically regulating AI, ensuring that such regulations do not hinder innovation or competition.

Key Sections

Key Requirements

  • Budget units must identify AI implementation opportunities to reduce administrative burdens, focusing on cost reduction and improved service delivery.
  • Governance for AI systems must be established for consistent use within agencies.
  • Implementation must use existing staff and resources without creating new administrative bodies.
  • Procurement processes for AI systems must be streamlined.
  • Existing regulations affecting AI development must be reviewed for negative impacts on competitiveness and innovation.
  • Regulations that unnecessarily restrict AI innovation or create unreasonable barriers to it must be modified or eliminated.
  • Prohibits creation of new regulatory requirements specific to private sector AI development.
  • The legislature must provide express statutory delegation for specific harms in order to regulate AI.
  • Rules regulating AI that are in effect at the start of a legislative session require ratification to remain effective; the legislature must schedule a vote within the first thirty days of the session, and only a simple majority is required.
  • Emergency rules regulating AI must be ratified by the legislature to remain in effect after a session.
  • Rules must be the least restrictive means to achieve statutory objectives and must not create barriers to market entry or advantages for incumbents.
  • The benefits of a rule must clearly outweigh its impacts on innovation and competition.
  • Defines artificial intelligence system as machine learning-based systems producing outputs that influence environments.
  • Defines computational resource as tools or technologies facilitating computation and data processing.
  • Defines content as any digitally generated or manipulated outputs by AI systems.

Sponsors

Legislative Actions

Date Action
2026-01-26 House 2nd Read
2026-01-22 House 1st Read
2026-01-12 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes strict oversight and a ratification process for any new regulations concerning artificial intelligence.

Mechanism of Influence: It requires that any emergency or temporary rules regulating AI be approved by the legislature within thirty days of a session and mandates that rules must not create barriers to market entry.

Evidence:

  • Any new rules regulating AI must be legislatively ratified and must not create barriers to innovation or competition.
  • Legislature must provide express statutory delegation for specific harms to regulate AI.
  • Rules must be the least restrictive means to achieve statutory objectives.

Ambiguity Notes: The term 'specific harms' is not explicitly defined, leaving the threshold for when the legislature may delegate regulatory authority open to interpretation.

Analysis 2

Why Relevant: The legislation provides foundational legal definitions for AI and related technologies which determine the scope of future regulation.

Mechanism of Influence: By defining 'artificial intelligence system' and 'computational resource,' the bill sets the boundaries for what technologies fall under these legislative constraints.

Evidence:

  • Defines artificial intelligence system as machine learning-based systems producing outputs that influence environments.
  • Defines computational resource as tools or technologies facilitating computation and data processing.

Ambiguity Notes: The definition of AI as systems that 'influence environments' is broad and could encompass a wide range of software beyond generative AI.

Analysis 3

Why Relevant: The bill regulates the internal government adoption and procurement of AI technologies.

Mechanism of Influence: It mandates that budget units streamline procurement and identify opportunities for AI implementation to reduce costs and improve services.

Evidence:

  • Requires streamlining of procurement processes for AI systems.
  • Requires budget units to identify AI implementation opportunities to reduce administrative burdens.

Ambiguity Notes: None

Senate - 1088 - appropriation; Arizona homeland security; cybersecurity

Legislation ID: 248169

Bill URL: View Bill

Summary

SB 1088 proposes an appropriation of $2,500,000 from the state general fund for the fiscal year 2026-2027 to support cybersecurity programs. Specifically, it designates $500,000 for generative artificial intelligence cybersecurity initiatives and $2,000,000 to modernize the statewide VPN security network using a zero trust network access solution.

Key Sections

Key Requirements

  • Allocates $2,500,000 from the state general fund for fiscal year 2026-2027.
  • Requires $2,000,000 to be allocated for modernizing the statewide VPN security network with a zero trust network access solution.
  • Requires $500,000 to be specifically used for generative artificial intelligence cybersecurity programs.

Sponsors

Legislative Actions

Date Action
2026-01-14 Senate 2nd Read
2026-01-12 Filed
2026-01-12 Senate 1st Read

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mentions and allocates funding for generative artificial intelligence cybersecurity initiatives.

Mechanism of Influence: The appropriation provides the financial resources necessary for the state to develop or implement cybersecurity protocols and programs specifically designed to address the risks or capabilities of generative AI.

Evidence:

  • Requires $500,000 to be specifically used for generative artificial intelligence cybersecurity programs.

Ambiguity Notes: The bill does not provide a specific definition for 'generative artificial intelligence cybersecurity programs,' which could encompass securing AI models from external threats, using AI for cyber defense, or auditing AI systems for vulnerabilities.

↑ Back to Table of Contents

California

Index of Bills

Assembly - 1542 - Sensitive personal information.

Legislation ID: 249095

Bill URL: View Bill

Summary

Assembly Bill 1542 amends sections of the Civil Code regarding the handling of sensitive personal information by businesses. It establishes clearer guidelines for how businesses must inform consumers about the collection and use of their personal information, specifically sensitive data, and reinforces the rights of consumers to limit the use and disclosure of such information.

Key Sections

Key Requirements

  • Businesses cannot collect additional categories of personal information or use it for incompatible purposes without notice.
  • Businesses must inform consumers about the categories of personal information collected and the purposes for which it is used.
  • Businesses must provide notice if they intend to use sensitive personal information for additional purposes.
  • Businesses must retain personal information only as long as necessary for the disclosed purposes.
  • Consumers can instruct businesses to limit the use of their sensitive personal information.
  • Service providers or contractors must not use sensitive personal information for unauthorized purposes.

Sponsors

Legislative Actions

Date Action
2026-01-06 From printer. May be heard in committee February 5.
2026-01-05 Read first time. To print.

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates the collection and use of sensitive personal information, which is a primary input for many AI systems and machine learning models.

Mechanism of Influence: By requiring businesses to limit data use to 'necessary purposes' and allowing consumers to opt-out of further disclosure, the law restricts how data can be repurposed for AI training or algorithmic profiling.

Evidence:

  • This section grants consumers the right to direct businesses to limit the use of their sensitive personal information to necessary purposes and prohibits businesses from using or disclosing this information for other purposes without consent.
  • Businesses must inform consumers about the categories of personal information collected and the purposes for which it is used.

Ambiguity Notes: The bill does not explicitly mention 'Artificial Intelligence,' but its restrictions on data usage and disclosure directly impact the data governance frameworks required for AI development.
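The bill prescribes no technical mechanism, but the purpose-limitation principle the analysis describes can be pictured as a gate in a data pipeline that screens records before AI training. A minimal sketch, assuming hypothetical record and function names not drawn from the bill:

```python
from dataclasses import dataclass, field

@dataclass
class ConsumerRecord:
    """Sensitive personal information plus the purposes disclosed at collection."""
    data: dict
    disclosed_purposes: set = field(default_factory=set)

def may_use_for(record: ConsumerRecord, purpose: str) -> bool:
    # Purpose limitation: the record may be used only for purposes
    # the business disclosed to the consumer when it was collected.
    return purpose in record.disclosed_purposes

def training_batch(records: list, purpose: str = "model_training") -> list:
    # Exclude any record whose disclosed purposes do not cover AI training.
    return [r for r in records if may_use_for(r, purpose)]

consented = ConsumerRecord({"zip": "85001"}, {"order_fulfillment", "model_training"})
limited = ConsumerRecord({"zip": "94103"}, {"order_fulfillment"})
print(len(training_batch([consented, limited])))  # 1: only the consented record passes
```

The point of the sketch is that repurposing data for model training becomes a checked operation rather than a default, which is the practical effect of the bill's notice-and-limit scheme.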

Assembly - 1609 - Customer service support.

Legislation ID: 282265

Bill URL: View Bill

Summary

Assembly Bill No. 1609 aims to regulate customer service support provided by large private businesses, specifically requiring them to offer human assistance during specified hours, respond quickly to customer requests, and ensure transparency regarding the use of artificial intelligence in customer service interactions.

Key Sections

Key Requirements

  • If answered by a bot, human assistance must be provided within five minutes.
  • Prohibits representing AI systems as human.
  • Requires businesses to offer a telephonic customer service platform.
  • Requires businesses to offer human customer service support from 8 a.m. to 6 p.m. daily.
  • Requires calls to be answered quickly and prohibits hold times exceeding five minutes.
  • Requires clear disclosure when customer service is artificially generated.
  • Requires conspicuous posting of the telephonic customer service number on their website.
  • Requires customer service platforms to implement tracking mechanisms to ensure compliance.
  • Requires human assistance to be provided within five minutes of a request.
  • Requires online customer service platforms to provide an option for human assistance.
  • Specifies that operator does not include small businesses as defined in the Government Code.

Sponsors

Legislative Actions

Date Action
2026-01-21 From printer. May be heard in committee February 20.
2026-01-20 Read first time. To print.

Detailed Analysis

Analysis 1

Why Relevant: The bill contains specific transparency and disclosure requirements for AI-driven customer service interactions.

Mechanism of Influence: It mandates that businesses disclose the use of AI and prohibits them from deceiving customers into thinking they are speaking with a human.

Evidence:

  • Prohibits representing AI systems as human.
  • Requires clear disclosure when customer service is artificially generated.

Ambiguity Notes: The term 'artificially generated' is used broadly and may require further technical clarification to determine which specific technologies, such as Large Language Models versus simple automated scripts, are covered.

Analysis 2

Why Relevant: The bill regulates the operational deployment of AI bots in customer service settings by requiring human oversight/availability.

Mechanism of Influence: It requires a human fallback mechanism within a five-minute window if an AI bot is initially used to answer a customer query, effectively regulating the autonomy of AI in business-to-consumer interactions.

Evidence:

  • If answered by a bot, human assistance must be provided within five minutes.

Ambiguity Notes: None
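The five-minute fallback window described above reduces to a simple timing check. A hypothetical sketch (the bill's actual text may measure or define the window differently):

```python
from datetime import datetime, timedelta

MAX_BOT_WAIT = timedelta(minutes=5)  # window assumed from the bill summary

def human_fallback_deadline(bot_answer_time: datetime) -> datetime:
    """Latest moment a human agent must join once a bot answers the call."""
    return bot_answer_time + MAX_BOT_WAIT

def is_compliant(bot_answer_time: datetime, human_join_time: datetime) -> bool:
    # Compliant if the human joined on or before the deadline.
    return human_join_time <= human_fallback_deadline(bot_answer_time)

t0 = datetime(2026, 1, 21, 9, 0)
print(is_compliant(t0, t0 + timedelta(minutes=4)))  # True
print(is_compliant(t0, t0 + timedelta(minutes=6)))  # False
```

A tracking mechanism of this kind is presumably what the bill's compliance-monitoring requirement contemplates, though the statute leaves the implementation to the operator.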

Senate - 867 - Toys: companion chatbots.

Legislation ID: 250975

Bill URL: View Bill

Summary

Senate Bill No. 867 introduces regulations on toys that include companion chatbots, which are defined as artificial intelligence systems that mimic human-like interactions. The bill prohibits the manufacture, sale, or exchange of such toys until January 1, 2031, aiming to ensure children's safety and clarity in interactions with these technologies.

Key Sections

Key Requirements

  • Prohibits the manufacture, sale, or exchange of toys that contain companion chatbots.

Sponsors

Legislative Actions

Date Action
2026-01-06 From printer. May be acted upon on or after February 5.
2026-01-05 Introduced. Read first time. To Com. on RLS. for assignment. To print.

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically targets and regulates a subset of artificial intelligence technology known as companion chatbots.

Mechanism of Influence: It creates a legal prohibition on the commercial distribution of AI-integrated toys, effectively banning this specific AI application for a set period to ensure child safety.

Evidence:

  • companion chatbots, which are defined as artificial intelligence systems that mimic human-like interactions
  • prohibits any person from manufacturing, selling, exchanging, or offering for sale any toy that includes a companion chatbot

Ambiguity Notes: The definition of 'mimic human-like interactions' is broad and could encompass a wide range of AI complexities, from basic scripted decision trees to advanced generative models.

Analysis 2

Why Relevant: The legislation addresses the user's interest in age-related AI regulations and safety protections for minors.

Mechanism of Influence: By defining 'toy' based on the age of the user (12 or less), the law restricts AI usage and exposure based on the age of the target demographic.

Evidence:

  • Defines a toy as a product designed for play by children 12 years of age or less.

Ambiguity Notes: None

Senate - 903 - Mental health professionals: artificial intelligence.

Legislation ID: 284203

Bill URL: View Bill

Summary

Senate Bill No. 903 establishes regulations for the use of artificial intelligence by licensed mental health professionals in California. It prohibits the use of AI in therapeutic settings without informed consent and restricts AI from making independent therapeutic decisions. The bill aims to safeguard individuals seeking mental health services and ensure that they are provided by qualified professionals.

Key Sections

Key Requirements

  • Penalties can be up to $10,000 per violation.
  • Requires written consent from the patient or their representative before using AI in therapy.

Sponsors

Legislative Actions

Date Action
2026-01-22 From printer. May be acted upon on or after February 21.
2026-01-21 Introduced. Read first time. To Com. on RLS. for assignment. To print.

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates specific disclosures and informed consent protocols for the use of AI in a professional setting.

Mechanism of Influence: Licensed professionals are legally barred from utilizing AI tools in a therapeutic context unless they first obtain and document written consent from the patient or their representative.

Evidence:

  • A licensed professional may not use AI in therapy unless the patient is informed and consents to its use.
  • Requires written consent from the patient or their representative before using AI in therapy.

Ambiguity Notes: The specific requirements for what constitutes 'informed' consent regarding the technical nature of the AI are not detailed in the abstract.

Analysis 2

Why Relevant: It imposes direct restrictions on the functional capabilities and autonomy of AI systems within the mental health industry.

Mechanism of Influence: The law prevents AI from being used as a primary provider or decision-maker, ensuring that AI remains a tool for licensed humans rather than an independent agent.

Evidence:

  • Prohibits AI from making independent therapeutic decisions or interacting directly with clients in therapeutic communication.

Ambiguity Notes: The term 'independent therapeutic decisions' may require further regulatory clarification to distinguish between AI-assisted suggestions and autonomous actions.

Analysis 3

Why Relevant: The legislation includes enforcement mechanisms and financial penalties for failing to adhere to AI regulations.

Mechanism of Influence: The Department of Consumer Affairs is granted investigative authority and the power to levy significant fines for non-compliance.

Evidence:

  • The Department of Consumer Affairs is authorized to investigate violations and impose civil penalties for non-compliance with the chapter.
  • Penalties can be up to $10,000 per violation.

Ambiguity Notes: None

↑ Back to Table of Contents

Florida

Index of Bills

House - 1395 - Artificial Intelligence

Legislation ID: 250688

Bill URL: View Bill

Summary

This bill introduces the Artificial Intelligence Bill of Rights in Florida, defining artificial intelligence and prohibiting certain contracts with foreign entities of concern. It lays out various rights for Floridians related to AI, including rights to privacy, consent, and protection from misuse of AI technologies. The bill also mandates that AI technologies must not infringe on personal rights and establishes penalties for violations.

Key Sections

Key Requirements

  • Contracts must not be extended or renewed if they provide access to personal identifying information.
  • Entities must provide affidavits confirming they do not meet specified criteria related to foreign ownership.
  • Floridians have rights to use AI to enhance their lives and to control their children's use of AI.
  • Individuals may pursue civil remedies against unauthorized use of their likeness or personal data.
  • Rights include knowing if interacting with AI and if personal data is collected.
  • Rights to protection from AI-related criminal acts.

Sponsors

Legislative Actions

Date Action
2026-01-15 H Now in Information Technology Budget & Policy Subcommittee
2026-01-15 H Referred to Civil Justice & Claims Subcommittee
2026-01-15 H Referred to Commerce Committee
2026-01-15 H Referred to Information Technology Budget & Policy Subcommittee
2026-01-15 H Referred to State Affairs Committee
2026-01-13 H 1st Reading
2026-01-09 H Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates transparency and disclosure regarding the use of AI systems.

Mechanism of Influence: It establishes a legal right for individuals to be informed when they are interacting with an AI rather than a human and when their data is being harvested by such systems.

Evidence:

  • Rights include knowing if interacting with AI and if personal data is collected.

Ambiguity Notes: The specific method of disclosure, such as the required format or timing of the notice, is not detailed in the summary.

Analysis 2

Why Relevant: The legislation addresses the regulation of AI usage for minors and parental oversight.

Mechanism of Influence: By granting parents the right to control their children's use of AI, the bill implies a need for age verification or parental consent mechanisms for AI platforms.

Evidence:

  • Floridians have rights to... control their children's use of AI.

Ambiguity Notes: The summary does not specify the age threshold for 'children' or the technical requirements for implementing parental control.

Analysis 3

Why Relevant: The bill regulates AI through the lens of data privacy and oversight of foreign entities.

Mechanism of Influence: It prohibits government contracts with foreign entities of concern that involve personal identifying information, effectively regulating which AI providers can provide services to state government infrastructure.

Evidence:

  • Prohibits governmental entities from extending or renewing contracts with certain foreign entities that have access to personal identifying information.

Ambiguity Notes: The term 'foreign entities of concern' likely refers to specific countries or organizations, but these are not explicitly listed in the summary.

House - 1503 - Technology Education

Legislation ID: 250945

Bill URL: View Bill

Summary

This bill amends existing Florida Statutes to require school districts to develop and provide elective courses in computer technology for high school students. It also revises the general education core course standards for public postsecondary educational institutions to include technology-related courses, ensuring that students are equipped with relevant technological skills.

Key Sections

Key Requirements

  • Electives must include opportunities for students to earn college credit or industry certifications.
  • General education core courses must include technology courses focused on computer science and artificial intelligence applications.
  • School districts must develop and offer electives focused on STEM and computer technology.

Sponsors

Legislative Actions

Date Action
2026-01-28 H CS Filed
2026-01-28 H Favorable with CS by Careers & Workforce Subcommittee
2026-01-28 H Laid on Table under Rule 7.18(a)
2026-01-28 H Reported out of Careers & Workforce Subcommittee
2026-01-26 H PCS added to Careers & Workforce Subcommittee agenda
2026-01-15 H Now in Careers & Workforce Subcommittee
2026-01-15 H Referred to Budget Committee
2026-01-15 H Referred to Careers & Workforce Subcommittee

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mandates the inclusion of artificial intelligence applications within the general education core course standards for public postsecondary institutions.

Mechanism of Influence: By requiring AI to be part of the core curriculum, the state influences the educational standards and foundational knowledge required for students, though it does not regulate the technology's development or commercial use.

Evidence:

  • General education core courses must include technology courses focused on computer science and artificial intelligence applications.

Ambiguity Notes: The bill focuses on the educational and curriculum side of AI rather than the regulatory oversight (such as audits or weight submissions) mentioned in the system instructions.

Analysis 2

Why Relevant: The legislation requires school districts to offer high school electives specifically in artificial intelligence.

Mechanism of Influence: It mandates that school districts provide access to AI education, potentially allowing students to earn industry certifications or college credit in the field.

Evidence:

  • specifically including courses in computer technology like artificial intelligence, and allows students to earn college credit through these courses.

Ambiguity Notes: The scope is limited to educational offerings and does not address age verification for AI usage or disclosure requirements for AI-generated content.

House - 527 - Mandatory Human Reviews of Insurance Claim Denials

Legislation ID: 239390

Bill URL: View Bill

Summary

This bill establishes mandatory human reviews for insurance claim denials across various sectors, including workers' compensation, general insurance, and health maintenance organizations. It defines key terms and outlines the responsibilities of qualified human professionals in the claims process. The bill mandates that any decision to deny or reduce a claim must be made by a qualified human professional after a thorough review of the case, independent of automated systems. It also requires carriers to maintain records of these decisions and includes provisions for enforcement and penalties for non-compliance.

Key Sections

Key Requirements

  • Decisions to deny or reduce claims must be made by qualified human professionals, not solely by AI or algorithms, with proper documentation of the decision-making process.
  • Insurers, workers' compensation carriers, and health maintenance organizations may use algorithms and AI systems to assist in claims processing but not as the sole basis for decisions.
  • Qualified human professionals must analyze claims independently of AI systems and verify the accuracy of AI- and algorithm-generated outputs before making decisions.
  • They must conduct reviews of previous claim decisions.
  • Carriers must detail the use of algorithms, AI, and other automated systems in their claims-handling manuals.
  • Carriers must document the basis for any claim denial or reduction, along with the date and time of claim decisions and reviews.
  • Carriers must maintain records of the names, titles, and decision details of the professionals involved in claim decisions.
  • Denial communications must identify the qualified human professional, include their contact details, state the reasons for denial, and affirm that an automated system was not the sole basis for the decision.

Sponsors

Legislative Actions

Date Action
2026-01-13 H 1st Reading
2025-12-12 H Now in Commerce Committee
2025-12-12 H Referred to Commerce Committee
2025-12-11 H CS Filed
2025-12-11 H Laid on Table under Rule 7.18(a)
2025-12-11 H Reported out of Insurance & Banking Subcommittee
2025-12-09 H Favorable with CS by Insurance & Banking Subcommittee
2025-12-02 H Added to Insurance & Banking Subcommittee agenda

Detailed Analysis

Analysis 1

Why Relevant: The bill sets specific constraints on how AI can be deployed in the insurance industry.

Mechanism of Influence: It mandates a human-in-the-loop requirement, ensuring AI is only an assistive tool rather than a final decision-maker.

Evidence:

  • Insurers may use algorithms and AI systems but cannot rely on them solely for claim decisions.
  • Qualified human professionals must make the final decisions on claim denials or reductions.

Ambiguity Notes: The term 'qualified human professional' is defined, but its specific qualifications may vary by insurance type.

Analysis 2

Why Relevant: It requires transparency and disclosure regarding the use of AI in the claims process.

Mechanism of Influence: Insurers must include statements in denial letters affirming that AI was not the sole factor and detail AI usage in internal manuals.

Evidence:

  • All written communications regarding claim denials must include... a statement affirming that an automated system did not solely determine the outcome.
  • Health maintenance organizations must detail in their claims-handling manuals how algorithms and AI systems are utilized

Ambiguity Notes: The level of detail required in the claims-handling manuals regarding AI logic is not fully specified.

Analysis 3

Why Relevant: It provides legal definitions for AI-related technologies.

Mechanism of Influence: Establishes the scope of the law by defining algorithm, artificial intelligence system, and machine learning system.

Evidence:

  • This section defines key terms used throughout the bill, including algorithm, artificial intelligence system, machine learning system

Ambiguity Notes: None

Analysis 4

Why Relevant: It allows for government oversight of AI-related practices.

Mechanism of Influence: Authorizes market conduct examinations and investigations to ensure compliance with AI regulations.

Evidence:

  • The office may conduct examinations and investigations to ensure compliance with the regulations set forth in this section.

Ambiguity Notes: The frequency and specific criteria for these examinations are left to the office's discretion.

House - 899 - Task Force on Artificial Intelligence in Public Postsecondary Education

Legislation ID: 239843

Bill URL: View Bill

Summary

This bill creates a Task Force on Artificial Intelligence in Public Postsecondary Education under the Department of Education. The task force is required to convene by August 1, 2026, and will consist of various stakeholders, including faculty, education board representatives, and AI experts. The task force's duties include examining the impact of AI on academic integrity, exploring AI applications in education, and recommending policies for its ethical use. A report of findings and recommendations is to be submitted by December 1, 2026, after which the task force will terminate.

Key Sections

Key Requirements

  • Assess privacy and security implications of AI tools.
  • Examine AI's impact on academic integrity and student work verification.
  • Must include faculty from diverse disciplines, education board members, an AI expert, and a student representative.
  • Review AI applications in teaching and learning.
  • Task force must convene by August 1, 2026.
  • Task force must submit a report with findings and recommendations by December 1, 2026.

Sponsors

Legislative Actions

Date Action
2026-01-13 H 1st Reading
2026-01-05 H Now in Education Administration Subcommittee
2026-01-05 H Referred to Education Administration Subcommittee
2026-01-05 H Referred to Education & Employment Committee
2026-01-05 H Referred to Higher Education Budget Subcommittee
2025-12-23 H Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill initiates government oversight and policy development for AI within the public education sector, specifically focusing on ethical use and security.

Mechanism of Influence: The task force is mandated to recommend model policies for ethical use and assess privacy and security implications, which serves as a precursor to formal regulation or auditing requirements in educational settings.

Evidence:

  • The task force is tasked with examining various aspects of AI's impact on education, including academic integrity, potential applications, privacy implications, and the development of model policies for AI use in education.
  • Assess privacy and security implications of AI tools.
  • Examine AI's impact on academic integrity and student work verification.

Ambiguity Notes: The bill focuses on study and recommendation rather than immediate enforcement of regulations like age verification or weight submission, but it explicitly addresses the 'ethical use' and 'security implications' which are core to AI regulation.

Senate - 1194 - Artificial Intelligence in Education

Legislation ID: 249451

Bill URL: View Bill

Summary

This bill mandates the State Board of Education to create statewide policies regarding the use of artificial intelligence in schools. It requires that students receive instruction on digital literacy and AI ethics, and it outlines the responsibilities of the Department of Education in monitoring compliance and providing teacher training. Additionally, it amends existing statutes to incorporate AI-related policies into student codes of conduct.

Key Sections

Key Requirements

  • Imposes AI-monitoring safeguards for assessments.
  • Incorporates AI policies into the student code of conduct.
  • Mandates disclosure of AI usage in student work.
  • Requires K-12 instruction on digital literacy and AI ethics.
  • Requires teacher permission for student use of AI.

Sponsors

Legislative Actions

Date Action
2026-01-13 • Introduced
2026-01-12 • Referred to Education Pre-K - 12; Commerce and Tourism; Rules
2026-01-06 • Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly addresses the user's interest in AI disclosures and regulation.

Mechanism of Influence: It creates a legal requirement for students to disclose the use of AI in their academic work and requires teacher permission for its use.

Evidence:

  • Mandates disclosure of AI usage in student work.
  • Requires teacher permission for student use of AI.

Ambiguity Notes: The bill does not specify the technical methods for detecting AI or the exact format of the required disclosures.

Analysis 2

Why Relevant: The bill focuses on the oversight and monitoring of AI technology within a specific sector.

Mechanism of Influence: It imposes monitoring safeguards for assessments and requires the Department of Education to monitor compliance with AI policies.

Evidence:

  • Imposes AI-monitoring safeguards for assessments.
  • outlines the responsibilities of the Department of Education in monitoring compliance

Ambiguity Notes: The term 'monitoring safeguards' is broad and could refer to various forms of algorithmic or human oversight.

Analysis 3

Why Relevant: The bill addresses the ethical regulation of AI through education and conduct codes.

Mechanism of Influence: It mandates instruction on AI ethics and requires that AI usage policies be formally integrated into student codes of conduct.

Evidence:

  • Mandates age-appropriate instruction on digital literacy and the ethical use of AI for students in grades 6 through 12.
  • Requires that student codes of conduct include policies on the use of artificial intelligence.

Ambiguity Notes: The bill leaves the definition of 'ethical use' to be determined by the State Board of Education.

Senate - 1458 - Artificial Intelligence in Higher Education

Legislation ID: 250298

Bill URL: View Bill

Summary

This bill creates the Artificial Intelligence in Higher Education Study Group, tasked with reviewing the impact of artificial intelligence on academic integrity, teaching, and research within Florida's higher education systems. The group will consist of various stakeholders, including faculty, students, and AI experts, and is required to submit a report with findings and recommendations by December 1, 2026.

Key Sections

Key Requirements

  • Assess privacy and intellectual property implications
  • Consider safeguards for academic freedom
  • Consult with accrediting bodies and industry representatives
  • Engage with student governments
  • Examine academic integrity issues related to AI
  • Identify training needs for faculty and staff
  • Include findings, recommendations, and proposed legislation
  • Includes faculty from various disciplines
  • Includes representatives from education boards and AI experts
  • Includes student representation from the Board of Governors or State Board of Education
  • Recommend model policies for AI use
  • Review AI applications in education
  • Review best practices in AI governance
  • Solicit input from faculty senates
  • Submit a report by December 1, 2026
  • This section expires on December 31, 2026

Sponsors

Legislative Actions

Date Action
2026-01-22 • Introduced
2026-01-16 • Referred to Education Postsecondary; Commerce and Tourism; Rules
2026-01-08 • Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the governance and oversight of artificial intelligence within the higher education sector.

Mechanism of Influence: By mandating the creation of model policies and reviewing governance best practices, the study group's recommendations could form the basis for future regulatory requirements or disclosure mandates for AI tools used in academic settings.

Evidence:

  • Recommend model policies for AI use
  • Review best practices in AI governance
  • Include findings, recommendations, and proposed legislation

Ambiguity Notes: The terms 'model policies' and 'best practices in AI governance' are broad and could encompass anything from voluntary ethical guidelines to mandatory disclosure and auditing requirements.

Analysis 2

Why Relevant: The legislation focuses on the ethical and legal implications of AI, specifically regarding privacy and intellectual property.

Mechanism of Influence: The study group is required to assess how AI affects privacy and IP, which may lead to specific disclosure requirements or restrictions on how AI models are trained or deployed using academic data.

Evidence:

  • Assess privacy and intellectual property implications
  • Examine academic integrity issues related to AI

Ambiguity Notes: The scope of 'privacy and intellectual property implications' is not strictly defined, leaving room for the group to investigate data collection practices and the ownership of AI-generated content.

Senate - 146 - Use of Artificial Intelligence by State Agencies

Legislation ID: 239160

Bill URL: View Bill

Summary

This legislation mandates the Florida Digital Service to conduct a comprehensive study on the use of artificial intelligence by state agencies. It defines artificial intelligence and state agencies, and outlines the requirements for the study, including details on the agencies using AI, the purposes of their use, and associated costs. A report summarizing the findings must be submitted by March 1, 2027.

Key Sections

Key Requirements

  • Conduct a study on the impact of AI technology by state agencies.
  • Detail the purposes for each agency's use of AI.
  • Include a list of state agencies that use AI.
  • Provide cost analysis for procuring, implementing, and operating AI technology.
  • Submit a written report to the Governor, President of the Senate, and Speaker of the House.

Sponsors

Legislative Actions

Date Action
2026-01-13 • Introduced
2025-10-21 • Referred to Governmental Oversight and Accountability; Appropriations Committee on Agriculture, Environment, and General Government; Fiscal Policy
2025-10-09 • Filed

Detailed Analysis

Analysis 1

Why Relevant: The legislation focuses on government oversight and transparency regarding the implementation of artificial intelligence within state agencies.

Mechanism of Influence: It requires a formal study and reporting mechanism to the Governor and Legislature, which serves as a precursor to potential regulatory frameworks or budgetary oversight.

Evidence:

  • This legislation mandates the Florida Digital Service to conduct a comprehensive study on the use of artificial intelligence by state agencies.
  • The Florida Digital Service is required to conduct a study on the impact of AI technology used by state agencies, including a list of agencies, their purposes for using AI, and the costs involved.

Ambiguity Notes: The effectiveness of the study depends on the specific definition of 'artificial intelligence' adopted in the bill's definitions section.

Senate - 1694 - Technology Education

Legislation ID: 250801

Bill URL: View Bill

Summary

This bill amends section 1007.25 of the Florida Statutes to incorporate technology courses into the general education core course standards for public postsecondary educational institutions. It mandates that faculty committees review and recommend course options that include subjects related to computer science, artificial intelligence, robotics, and cybersecurity, ensuring students gain relevant technological skills.

Key Sections

Key Requirements

  • Requires technology courses to include subjects like artificial intelligence, robotics, software engineering, computer networks, database systems, and cybersecurity.

Sponsors

Legislative Actions

Date Action
2026-01-22 • Introduced
2026-01-16 • Referred to Education Postsecondary; Appropriations Committee on Higher Education; Rules
2026-01-09 • Filed

Detailed Analysis

Analysis 1

Why Relevant: The legislation explicitly identifies 'artificial intelligence' as a core subject area that must be included in the state's postsecondary technology course standards.

Mechanism of Influence: By mandating the inclusion of AI in general education standards, the law influences the academic framework and workforce preparation related to AI, though it does not impose direct regulatory controls on AI development or deployment.

Evidence:

  • Requires technology courses to include subjects like artificial intelligence, robotics, software engineering, computer networks, database systems, and cybersecurity.

Ambiguity Notes: The bill focuses on educational curriculum and academic standards rather than the regulatory oversight, audits, or disclosures typically associated with AI governance legislation.

Senate - 344 - Use of Artificial Intelligence in Psychological, Clinical, Counseling, and Therapy Services

Legislation ID: 239332

Bill URL: View Bill

Summary

This bill establishes regulations regarding the use of artificial intelligence (AI) in the fields of psychology, clinical social work, marriage and family therapy, and mental health counseling. It defines AI and prohibits its use in direct therapeutic practices, with specified exceptions for administrative support and session recording under certain conditions. The bill seeks to protect the integrity of mental health services by limiting AIs role in direct client interactions.

Key Sections

Key Requirements

  • Allows AI for administrative tasks such as scheduling, billing, and managing records and communications.
  • Prohibits the use of AI in direct clinical social work and counseling practices.
  • Prohibits the use of AI in direct psychological practices.
  • Requires written informed consent to record or transcribe therapy sessions at least 24 hours in advance.

Sponsors

Legislative Actions

Date Action
2026-01-13 • Introduced
2025-11-17 • Referred to Health Policy; Children, Families, and Elder Affairs; Rules
2025-11-04 • Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly regulates and restricts the use of AI in specific professional fields by defining its permissible scope.

Mechanism of Influence: It creates a legal prohibition against using AI for direct therapeutic interventions, limiting its role to administrative support such as scheduling and billing.

Evidence:

  • Prohibits the use of AI in direct clinical social work and counseling practices.
  • Prohibits the use of AI in direct psychological practices.

Ambiguity Notes: The term 'direct clinical social work and counseling practices' may require clearer boundaries to determine if AI-assisted diagnostic tools or decision-support systems are also prohibited.

Analysis 2

Why Relevant: The bill mandates disclosures and informed consent regarding AI usage for data processing.

Mechanism of Influence: Practitioners are required to obtain written consent from clients at least 24 hours before using AI for recording or transcribing sessions, ensuring transparency.

Evidence:

  • Requires written informed consent to record or transcribe therapy sessions at least 24 hours in advance.

Ambiguity Notes: The 24-hour advance notice requirement might be impractical for certain types of immediate or emergency mental health interventions.

Senate - 480 - Information Technology

Legislation ID: 253667

Bill URL: View Bill

Summary

This bill creates the Division of Integrated Government Innovation and Technology (DIGIT) within the Executive Office of the Governor, transferring responsibilities from the Department of Management Services. It establishes DIGIT as a separate budget entity responsible for overseeing state information technology governance, cybersecurity standards, and supporting state agencies in technology initiatives. The bill also outlines various requirements for compliance, reporting, and collaboration among state agencies and establishes new roles and responsibilities for key positions related to information technology management.

Key Sections

Key Requirements

  • DIGIT must prepare and submit a budget as a separate budget entity.
  • The director of DIGIT will also serve as the state chief information officer, with specific qualifications and responsibilities.

Sponsors

Legislative Actions

Date Action
2026-01-22 • Introduced
2026-01-16 • Referred to Appropriations Committee on Agriculture, Environment, and General Government; Appropriations
2026-01-12 • Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill creates a centralized authority (DIGIT) for state technology governance and innovation. While AI is not explicitly mentioned in the provided text, AI-related projects and regulations within state government would logically fall under this division's oversight of 'technology initiatives'.

Mechanism of Influence: The Director of DIGIT, serving as the state CIO, would have the authority to set standards and oversee the implementation of emerging technologies, including AI, across state agencies.

Evidence:

  • The bill establishes the Division of Integrated Government Innovation and Technology (DIGIT) as a new entity
  • overseeing state information technology governance, cybersecurity standards, and supporting state agencies in technology initiatives

Ambiguity Notes: The terms 'Innovation' and 'Technology' are not defined to specifically include or exclude Artificial Intelligence, allowing for broad interpretation of the division's scope regarding AI oversight.

Senate - 482 - Artificial Intelligence Bill of Rights

Legislation ID: 239443

Bill URL: View Bill

Summary

The bill amends existing statutes and creates new sections to define artificial intelligence, prohibit certain contracts with foreign entities, and establish the Artificial Intelligence Bill of Rights for Floridians. It outlines the rights of individuals regarding AI, including consent requirements for minors, protections against misuse of personal data, and civil remedies for violations. The bill also imposes obligations on AI technology companies and chatbot platforms to protect user information and restrict access for minors without parental consent.

Key Sections

Key Requirements

  • AI companies must ensure personal information is deidentified and cannot be sold or disclosed without consent.
  • Chatbot platforms must prohibit minors from creating accounts without parental consent.
  • Department can bring actions under the Florida Deceptive and Unfair Trade Practices Act for violations.
  • Floridians have the right to know if they are interacting with AI, control their children's use of AI, and seek civil remedies for unauthorized use of their likeness.
  • Platforms must provide parents with options to monitor and control their child's interactions with chatbots.
  • Prohibits governmental entities from extending or renewing contracts with specified foreign entities if they access personal information.
  • Requires an affidavit from entities seeking contracts to confirm they do not meet specified criteria related to foreign ownership or control.
  • Requires consent for the commercial use of an individual's AI-generated likeness.

Sponsors

Legislative Actions

Date Action
2026-01-21 • Favorable by Commerce and Tourism; YEAS 10 NAYS 0 • Now in Appropriations
2026-01-16 • On Committee agenda-- Commerce and Tourism, 01/21/26, 8:30 am, 110 Senate Building
2026-01-13 • Introduced
2026-01-07 • Referred to Commerce and Tourism; Appropriations
2025-12-22 • Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates transparency regarding the use of AI in interactions.

Mechanism of Influence: It grants Floridians the right to be informed when they are interacting with an artificial intelligence system rather than a human.

Evidence:

  • Floridians have the right to know if they are interacting with AI

Ambiguity Notes: The specific method of disclosure, such as a visual badge or verbal notice, is not detailed in the provided abstract.

Analysis 2

Why Relevant: The legislation includes specific age-related restrictions and parental oversight for AI platforms.

Mechanism of Influence: Companion chatbot platforms are required to implement parental consent mechanisms to prevent unauthorized use by minors and provide monitoring tools.

Evidence:

  • Chatbot platforms must prohibit minors from creating accounts without parental consent.
  • Platforms must provide parents with options to monitor and control their child's interactions with chatbots.

Ambiguity Notes: The term 'companion chatbot' may require further legal definition to determine which specific apps or services are covered.

Analysis 3

Why Relevant: It regulates the data practices of AI technology companies.

Mechanism of Influence: It imposes a legal obligation on AI companies to deidentify personal information and prohibits the sale or disclosure of such data without explicit consent.

Evidence:

  • AI companies must ensure personal information is deidentified and cannot be sold or disclosed without consent.

Ambiguity Notes: The standard for 'deidentified' data can vary; the bill's effectiveness depends on how strictly this is defined.

Senate - 7030 - Public Records/Investigations by the Department of Legal Affairs

Legislation ID: 270848

Bill URL: View Bill

Summary

This bill amends various sections of Florida Statutes to provide exemptions from public records requirements for information held by the Department of Legal Affairs regarding notifications and investigations of violations related to companion chatbots, bots, and deidentified data. It outlines the conditions under which this information remains confidential and the circumstances under which it may be disclosed.

Key Sections

Key Requirements

  • Allows the Department of Legal Affairs to disclose certain information during active investigations for its official duties, for sharing with other government entities performing their official tasks, or for public notification if disclosure aids in identifying victims of improper data use.
  • Defines proprietary information as that which is owned and treated as private by the AI company and is not publicly available or easily ascertainable from other sources.
  • Justifies confidentiality by the need to protect ongoing investigations and by the risk of identity theft and privacy violations from releasing sensitive information.
  • Maintains confidentiality of personal identifying information and proprietary information even after investigation completion.
  • Protects computer forensic reports and data security weaknesses from disclosure.
  • Protects proprietary information whose disclosure could harm the competitive advantage of, or cause economic harm to, AI companies.
  • Requires information related to violations to remain confidential during active investigations and until the investigation is resolved.

Sponsors

Legislative Actions

Date Action
2026-01-22 • Filed • Referred to Appropriations
2026-01-21 • Submitted as Committee Bill and Reported Favorably by Commerce and Tourism; YEAS 10 NAYS 0
2026-01-16 • Submitted for consideration by Commerce and Tourism • On Committee agenda-- Commerce and Tourism, 01/21/26, 8:30 am, 110 Senate Building

Detailed Analysis

Analysis 1

Why Relevant: The legislation specifically targets 'companion chatbots,' which are a specialized application of generative artificial intelligence.

Mechanism of Influence: It regulates the transparency and oversight process for AI chatbots by exempting investigation records from public disclosure, thereby governing how the state handles AI-related consumer protection cases.

Evidence:

  • Companion chatbot use for minors
  • information related to notifications or investigations of violations concerning companion chatbots is confidential and exempt from public records

Ambiguity Notes: The term 'companion chatbot' is used but the specific technical threshold for what constitutes a 'chatbot' versus other AI interfaces is not detailed in the abstract.

Analysis 2

Why Relevant: The bill provides a legal definition for proprietary information specifically tailored to artificial intelligence technology companies.

Mechanism of Influence: By defining and protecting AI proprietary information, the law creates a shield for AI developers' weights, algorithms, or trade secrets during government investigations.

Evidence:

  • This provision defines proprietary information in the context of artificial intelligence technology companies
  • Protects proprietary information to prevent economic harm to AI companies.

Ambiguity Notes: The definition of 'proprietary information' relies on the company's own treatment of the data as private, which could be interpreted broadly by AI firms.

Analysis 3

Why Relevant: The bill addresses 'bots,' which are frequently powered by AI and are a core subject of AI regulatory discussions regarding automation and disclosure.

Mechanism of Influence: It establishes the confidentiality framework for state-level enforcement actions against bot operators, affecting how AI-driven automation is policed.

Evidence:

  • Consumer protections regarding bots
  • information related to notifications or investigations of violations concerning bots is confidential

Ambiguity Notes: None

↑ Back to Table of Contents

Georgia

Index of Bills

House - 147 - Georgia Technology Authority; annual inventory of artificial intelligence usage by state agencies; provide

Legislation ID: 188075

Bill URL: View Bill

Summary

This bill amends the Georgia Technology Authority's regulations to require an annual inventory of artificial intelligence systems utilized by state agencies. It mandates the development of policies regarding the procurement and implementation of these systems, with a focus on preventing unlawful discrimination. The bill also requires the authority to prepare annual reports on the usage of artificial intelligence across agencies and ensures cooperation among state entities in this process.

Key Sections

Key Requirements

  • All state agencies must cooperate with the authority as required.
  • Conduct an inventory of AI systems by December 31, 2025, and annually thereafter.
  • Develop policies for procurement, implementation, and assessment of AI systems.
  • Ensure no unlawful discrimination results from AI systems.
  • Include the system name, vendor, capabilities, decision-making role, and impact assessment status in the published inventory.
  • Notify recipients of the report's availability.
  • Prepare and provide the annual report to state leaders and legislators.

Sponsors

Legislative Actions

Date Action
2026-01-12 Senate Recommitted
2025-03-27 Senate Read Second Time
2025-03-25 Senate Committee Favorably Reported By Substitute
2025-03-10 Senate Withdrawn & Recommitted
2025-02-21 Senate Read and Referred
2025-02-20 House Passed/Adopted
2025-02-20 House Third Readers
2025-02-06 House Committee Favorably Reported

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a formal oversight and disclosure mechanism for AI systems used within the state government.

Mechanism of Influence: It requires state agencies to disclose specific technical and operational details of their AI systems, including capabilities and impact assessment statuses, to a central authority.

Evidence:

  • Conduct an inventory of AI systems by December 31, 2025, and annually thereafter.
  • Include system name, vendor, capabilities, decision-making role, and impact assessment status in the inventory.

Ambiguity Notes: The term 'impact assessment status' implies a requirement for audits or evaluations, though the specific criteria for these assessments are left to be defined in future policies.

Analysis 2

Why Relevant: The legislation mandates the creation of regulatory frameworks governing how AI is acquired and deployed.

Mechanism of Influence: By requiring the development of policies for procurement and implementation, the law sets a regulatory floor for AI usage, specifically targeting the prevention of algorithmic discrimination.

Evidence:

  • Develop policies for procurement, implementation, and assessment of AI systems.
  • Ensure no unlawful discrimination from AI systems.

Ambiguity Notes: The scope of 'unlawful discrimination' and the specific 'policies and procedures' for procurement are broad and will depend on the Georgia Technology Authority's eventual rulemaking.

House - 171 - Crimes and offenses; obscenity; repeal and replace Code Section 16-12-80

Legislation ID: 188104

Bill URL: View Bill

Summary

This bill amends existing laws in Georgia regarding obscenity and related offenses by specifically prohibiting the distribution of computer-generated obscene material that depicts children. It establishes definitions for obscenity and child, outlines penalties for violations, and mandates reporting for individuals who suspect they are processing such material. Additionally, it introduces enhanced sentencing for defendants who utilize artificial intelligence in the commission of designated offenses.

Key Sections

Key Requirements

  • Defines 'child' as an individual under 16 years of age, along with 'obscene' and related terms, for legal clarity.
  • Defines 'obscene material' based on community standards, appeal to prurient interests, and lack of serious value.
  • Requires that the prohibited material depict an image that appears realistic and engages in sexually explicit conduct.
  • Prohibits distribution of computer-generated obscene material, including material created through artificial intelligence, that appears to depict a child.
  • Classifies the offense as a felony punishable by 1 to 15 years imprisonment for most offenders; offenders aged 18 or younger may qualify for misdemeanor classification under specific conditions.
  • Requires enhanced penalties, including a minimum fine and imprisonment tied to whether the designated offense is a misdemeanor or felony, for defendants who use AI in the commission of designated offenses.
  • Mandates notification to defendants of the intent to seek enhanced penalties for AI usage.
  • Requires immediate reporting to the National Center for Missing and Exploited Children and local law enforcement upon reasonable belief of obscene conduct involving a minor.
  • Requires individuals operating AI programs for children to ensure they do not provide obscene material.
  • Replaces cross-references to Code Section 16-12-80 with Code Section 16-12-80.1 in specified education and transportation code sections.

Sponsors

Legislative Actions

Date Action
2026-01-12 Senate Recommitted
2025-03-31 Senate Committee Favorably Reported By Substitute
2025-03-28 Senate Recommitted
2025-03-27 Senate Committee Favorably Reported By Substitute
2025-03-27 Senate Read Second Time
2025-02-27 Senate Read and Referred
2025-02-26 House Passed/Adopted By Substitute
2025-02-26 House Third Readers

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically regulates the output of artificial intelligence by prohibiting the distribution of AI-generated obscene material depicting children.

Mechanism of Influence: It creates a legal prohibition against distributing specific types of AI-generated content, effectively regulating the use of AI for generating child-like obscene imagery.

Evidence:

  • Prohibits distribution of computer-generated obscene material that appears to depict a child.

Ambiguity Notes: The phrase 'appears to depict' relies on community standards and visual interpretation, which may vary.

Analysis 2

Why Relevant: The legislation addresses the use of AI in criminal activities by providing for increased penalties.

Mechanism of Influence: It mandates enhanced sentencing for defendants who utilize AI during the commission of designated offenses, acting as a deterrent for the misuse of AI technology.

Evidence:

  • Introduces enhanced sentencing guidelines for defendants who use artificial intelligence in the commission of certain designated offenses.

Ambiguity Notes: The specific 'designated offenses' are not fully listed in the summary, leaving the scope of the enhancement partially undefined.

House - 638 - MARTA; prohibit stopping or parking of a motor vehicle other than a transit vehicle in a designated transit vehicle lane in the City of Atlanta

Legislation ID: 188619

Bill URL: View Bill

Summary

House Bill 638 proposes amendments to the Georgia Code regarding the Metropolitan Atlanta Rapid Transit Authority (MARTA). It prohibits non-transit vehicles from stopping or parking in designated transit vehicle lanes in Atlanta, introduces penalties for violations, and authorizes the use of automated monitoring devices for enforcement. The bill outlines procedures for issuing citations, conditions for penalties, and the management of funds collected from fines.

Key Sections

Key Requirements

  • Prohibits non-transit vehicles from stopping or parking in designated transit vehicle lanes unless authorized.
  • Authorizes automated devices to issue citations; notifications must be sent by mail within 60 days of the violation.
  • Citations must include recorded images and instructions for contesting the penalty; recorded images serve as prima facie evidence, and owners may contest by providing sworn statements or evidence they were not operating the vehicle.
  • Sets escalating fines: up to $50 for a first violation (waived if a safety course is completed), up to $100 for a second, and up to $150 for a third, which also requires a defensive driving course.
  • Limits enforcement to warnings during the first six months after a lane is designated.
  • Requires signs to be placed 200 to 500 feet before transit lanes and to be visible from all traffic lanes.
  • Authorizes district attorneys to prosecute civil actions for penalties; jurisdiction for traffic law violations within the city applies.
  • Directs that collected funds be used for public safety; agreements with agents must comply with security standards and cannot allow agents to retain a portion of fines.

Sponsors

Legislative Actions

Date Action
2026-01-12 Senate Recommitted
2026-01-12 Senate Taken from Table
2025-04-02 Senate Tabled
2025-03-27 Senate Committee Favorably Reported
2025-03-27 Senate Read Second Time
2025-03-10 Senate Read and Referred
2025-03-06 House Committee Favorably Reported By Substitute
2025-03-06 House Passed/Adopted By Substitute

Detailed Analysis

Analysis 1

Why Relevant: The bill authorizes the use of automated monitoring devices for law enforcement and civil penalty issuance, which involves automated decision-making systems.

Mechanism of Influence: It establishes a legal framework for 'automated transit vehicle lane monitoring devices' to record images and trigger citations, effectively automating the enforcement of traffic laws.

Evidence:

  • authorizes the use of automated monitoring devices for enforcement
  • automated transit vehicle lane monitoring device
  • Citations must include recorded images

Ambiguity Notes: The legislation does not explicitly use the term 'Artificial Intelligence,' but the automated systems described (likely involving computer vision or license plate recognition) are common applications of AI in public infrastructure.

Senate - 398 - Wiretapping, Eavesdropping, Surveillance, and Related Offenses; criminal offenses of virtual peeping; establish

Legislation ID: 258976

Bill URL: View Bill

Summary

Senate Bill 398 amends Georgia's laws on wiretapping and surveillance by introducing provisions against 'virtual peeping,' the unauthorized generation of images of individuals using generative AI. The bill outlines specific definitions, penalties for violations, and exceptions for law enforcement activities. The legislation aims to protect the privacy of individuals, particularly minors, from unauthorized image generation.

Key Sections

Key Requirements

  • Requires consent from the individual, or from a minor or their guardian, before generating their image using AI.
  • Prohibits generation of obscene material without consent, including material involving minors.
  • Treats each image generated in violation as a separate offense.
  • Imposes penalties for violations and specifies conditions under which offenses may be treated as misdemeanors.
  • Exempts law enforcement from these provisions during criminal investigations.

Sponsors

Legislative Actions

Date Action
2026-01-14 Senate Read and Referred
2026-01-13 Senate Hopper

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the use of generative artificial intelligence systems by criminalizing specific outputs.

Mechanism of Influence: It creates a legal framework that prohibits the use of AI to generate human likenesses without explicit consent, classifying such acts as 'virtual peeping' or felonies depending on the content.

Evidence:

  • uses a generative AI system to create an image of an adult without their consent
  • This section defines key terms related to the bill, including... generative artificial intelligence system

Ambiguity Notes: The definition of 'generative artificial intelligence system' is central to the bill's scope, determining which software tools fall under these criminal statutes.

Analysis 2

Why Relevant: The legislation focuses heavily on the protection of minors and age-based distinctions in AI usage.

Mechanism of Influence: It imposes harsher criminal penalties (1 to 20 years imprisonment) when the subject of the AI-generated image is a minor.

Evidence:

  • Generating an image of a minor without consent is a felony
  • Creating obscene material that includes an image of a minor is a serious felony

Ambiguity Notes: The bill distinguishes between minors under 14 and those 14 or older regarding consent and misdemeanor vs. felony status.

Analysis 3

Why Relevant: The bill establishes a consent-based regulatory requirement for AI image generation.

Mechanism of Influence: By requiring consent from the subject or a guardian, it effectively mandates a disclosure or authorization process before AI can be used to replicate a person's likeness.

Evidence:

  • Requires consent from the minor or their guardian for image generation
  • Prohibits generation of obscene material without consent

Ambiguity Notes: The bill does not specify the technical form consent must take, only that its absence triggers criminal liability.

↑ Back to Table of Contents

Idaho

Index of Bills

Senate - 1227 - Artificial intelligence, education

Legislation ID: 283865

Bill URL: View Bill

Summary

This bill amends Title 33 of the Idaho Code by adding a new chapter that addresses the integration of generative AI technologies in education. It mandates the development of a statewide framework by the State Department of Education, which will guide local school districts and public charter schools in adopting policies regarding the use of generative AI. The bill emphasizes the importance of transparency, student privacy, and human oversight in educational settings while promoting the ethical use of AI tools for instruction and administration.

Key Sections

Key Requirements

  • All tools must comply with state and federal data privacy laws.
  • Comply with applicable state and federal laws.
  • Create assessment guidelines for evaluating student understanding of generative AI.
  • Define appropriate and prohibited uses of generative AI.
  • Develop generative AI literacy standards for K-12 students.
  • Establish a professional development plan for educators.
  • Framework must prioritize human-centered oversight, transparency, safety, and data security.
  • Framework should provide guidance on instructional integration and responsible student use.
  • Include safeguards for student privacy and data security.
  • Must address accessibility and serve as a foundation for local policies and professional development.
  • Policy must align with the statewide framework.
  • Vendors must disclose use of machine learning and generative AI.

Sponsors

Legislative Actions

Date Action
2026-01-23 Reported Printed; referred to Education
2026-01-22 Introduced; read first time; referred to JR for Printing

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a regulatory framework for AI usage and mandates specific disclosures from technology providers.

Mechanism of Influence: It requires vendors to disclose the use of machine learning and generative AI in educational tools and mandates that local districts create policies to govern and restrict AI use.

Evidence:

  • Vendors must disclose use of machine learning and generative AI.
  • Local school districts and public charter schools must adopt policies governing the use of generative AI, aligning with the statewide framework and defining appropriate uses.
  • Define appropriate and prohibited uses of generative AI.

Ambiguity Notes: The bill leaves the specific definitions of 'appropriate' and 'prohibited' uses to be determined by local school districts, which could result in varying standards across the state.

Analysis 2

Why Relevant: The legislation focuses on oversight, data privacy, and the ethical application of AI technologies.

Mechanism of Influence: It mandates that the statewide framework prioritize human-centered oversight and safety, while ensuring all AI-related procurement complies with existing data privacy laws.

Evidence:

  • Framework must prioritize human-centered oversight, transparency, safety, and data security.
  • Local school districts must ensure that generative AI-related tools comply with data privacy laws and disclose their use of AI technologies.

Ambiguity Notes: The term 'human-centered oversight' is not strictly defined, leaving room for interpretation regarding the level of human intervention required in AI-driven administrative or instructional processes.

↑ Back to Table of Contents

Illinois

Index of Bills

Senate - 2255 - SURVEILLANCE DISCRIMINATION

Legislation ID: 177958

Bill URL: View Bill

Summary

This Act targets surveillance-based discrimination in pricing and wages by barring the use of surveillance data in automated decision systems that determine individualized prices for consumers or wages for employees. It defines key terms, sets out exemptions, establishes enforcement by the Attorney General, and provides for penalties and private rights of action. It also outlines its relationship to other laws and provides for rulemaking.

Key Sections

Key Requirements

  • Prohibits the use of surveillance data in automated decision systems to set individualized prices for consumers, except for price variations based on the cost of providing goods or services.
  • Prohibits the use of surveillance data in automated wage decisions, with limited carve-outs for data directly related to an employee's job tasks or labor cost differences.
  • Requires employers to disclose, before hiring, what data is used in automated wage decisions and how it is used.
  • Mandates reasonable procedures to ensure the accuracy of data used in wage-setting systems.
  • Exempts insurers complying with the Illinois Insurance Code (using risk-relevant data) and entities making credit decisions based on consumer reports under the Fair Credit Reporting Act.
  • Assigns enforcement authority to the Attorney General; violations are subject to civil penalties of up to $10,000 each.
  • Treats each consumer, employee, or transaction as a separate violation and a separate cause of action.
  • Allows aggrieved persons to sue for actual damages, statutory damages of at least $3,000 per violation, or up to three times actual damages for bad-faith or intentional violations, plus costs and attorney's fees.
Sponsors

Legislative Actions

Date Action
2026-01-27 Re-assigned to Executive
2026-01-14 Added as Chief Co-Sponsor Sen. Celina Villanueva
2025-04-11 Rule 3-9(a) / Re-referred to Assignments
2025-03-21 Rule 2-10 Committee Deadline Established As April 11, 2025
2025-03-19 To AI and Social Media
2025-03-12 Assigned to Executive
2025-02-07 Filed with Secretary by Sen. Robert Peters
2025-02-07 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates 'automated decision systems,' which is a standard legislative term used to encompass artificial intelligence and algorithmic decision-making tools.

Mechanism of Influence: It imposes substantive restrictions on how AI-driven automated systems can be used to calculate consumer prices and employee compensation, effectively regulating the output and application of AI in commercial and employment contexts.

Evidence:

  • This provision prohibits the use of surveillance data in automated systems that set individualized prices for consumers
  • This provision prohibits the use of surveillance data in automated systems that determine individualized wages for employees

Ambiguity Notes: While the abstract uses the term 'automated decision systems' rather than 'artificial intelligence' explicitly, these terms are often used interchangeably in regulatory frameworks to cover machine learning and algorithmic models.

Analysis 2

Why Relevant: The bill includes disclosure requirements regarding the data used by these automated systems.

Mechanism of Influence: It requires employers to disclose the specific data considered by automated wage-setting systems before an individual is hired, aligning with the user's interest in AI transparency and disclosure mandates.

Evidence:

  • Requires disclosure of data considered in wage decisions before hiring.
  • Mandates procedures to ensure data accuracy for wage-setting.

Ambiguity Notes: The disclosure requirement is specific to wage decisions and does not explicitly mention 'weights' or 'audits' in the technical AI sense, though it mandates 'procedures to ensure data accuracy.'

↑ Back to Table of Contents

Indiana

Index of Bills

House - 1085 - Civil liability for child sexual abuse material.

Legislation ID: 242366

Bill URL: View Bill

Summary

House Bill No. 1085 introduces provisions for civil liability concerning child sexual abuse material and obscene material on the Internet. It enables individuals depicted in or exposed to such materials to file civil actions against those who knowingly allow access to, disseminate, or provide the content. The bill also allows the attorney general to seek injunctive relief and establishes a safe harbor provision for certain entities under specific conditions. Notably, it states that comparative fault and tort claims immunities do not apply to these civil actions.

Key Sections

Key Requirements

  • Allows individuals depicted in child sexual abuse material, or exposed to obscene material, to bring civil actions for damages.
  • Holds defendants liable if they knowingly allow access to or disseminate prohibited material.
  • Permits parents or guardians to bring actions on behalf of minors depicted in prohibited material.
  • Authorizes the attorney general to initiate actions for injunctive relief without a private individual filing first.
  • Requires no exhaustion of administrative remedies prior to filing a lawsuit.
  • Provides that comparative fault and tort claim immunities do not apply to civil actions under this chapter.

Sponsors

Legislative Actions

Date Action
2026-01-13 Representative Goss-Reaves added as coauthor
2026-01-05 Authored by Representative King
2026-01-05 First reading: referred to Committee on Judiciary

Detailed Analysis

Analysis 1

Why Relevant: The bill targets information content providers and interactive computer services, which are categories that include AI developers and platforms hosting generative AI.

Mechanism of Influence: AI companies could face civil litigation if their models are used to generate or distribute prohibited content, as the bill removes certain tort immunities for these actions.

Evidence:

  • Defines key terms such as child sexual abuse material, information content provider, interactive computer service provider, obscene material, and prohibited material.

Ambiguity Notes: The legislation uses technology-neutral language like 'information content provider' which likely covers AI entities without naming them explicitly.

House - 1201 - Various mental health and insurance matters.

Legislation ID: 247680

Bill URL: View Bill

Summary

House Bill No. 1201 addresses various mental health and insurance matters by prohibiting the use of artificial intelligence to impersonate licensed mental health professionals, requiring compliance with network adequacy standards for health carriers, and ensuring favorable reimbursement rates for mental health services relative to Medicare. It also sets forth regulations on downcoding practices and retroactive audits of paid claims.

Key Sections

Key Requirements

  • Prohibits the use of AI systems to impersonate or substitute for licensed mental health professionals during required interactions; licensed professionals who violate this provision may face disciplinary action.
  • Requires health carriers to meet network adequacy standards set by the Centers for Medicare and Medicaid Services, with compliance verified by an objective third party under contract with the Department of Insurance.
  • Requires reimbursement rates for mental health services to be at least as favorable as those for medical services relative to Medicare.
  • Prohibits downcoding claims to a less complex service in a manner that lowers reimbursement or prevents providers from submitting claims for the actual services performed.
  • Prohibits retroactive audits or refund requests on paid claims more than 180 days after payment (or beyond the period allowed for claim submission), and bars requests for repayment of overpayments after two years.
  • Limits billing of insureds accessing out-of-network care to their deductible or copayment, and prohibits billing for the difference between out-of-network charges and insurer payments.
  • Requires written notice to providers of contract amendments at least 60 days in advance, and requires insurers and HMOs to provide at least 60 days' notice and obtain provider approval and signature before rate reductions.
  • Requires analysis of compliance with mental health parity laws and annual reports on medical necessity criteria and nonquantitative treatment limitations.

Sponsors

Legislative Actions

Date Action
2026-01-22 Representative Goss-Reaves added as coauthor
2026-01-14 Representative Ledbetter added as coauthor
2026-01-13 Representative Cash added as coauthor
2026-01-05 Authored by Representative Rowray
2026-01-05 First reading: referred to Committee on Insurance

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly addresses the use of AI in healthcare, specifically prohibiting its use as a replacement for human mental health professionals.

Mechanism of Influence: It creates a legal barrier against the automation of mental health professional roles, ensuring that AI cannot be used to bypass the requirement for licensed human practitioners in specific interactions.

Evidence:

  • This provision prohibits the use of artificial intelligence systems to impersonate or act as a substitute for licensed mental health professionals during required interactions.
  • Prohibits impersonation or substitution of licensed mental health professionals by AI systems.

Ambiguity Notes: The terms 'impersonate' and 'substitute' could be interpreted broadly, potentially affecting the deployment of AI-driven mental health chatbots or diagnostic tools if they are deemed to be acting in place of a professional.

House - 1360 - Access to public records.

Legislation ID: 248519

Bill URL: View Bill

Summary

HB 1360 allows public agencies in Indiana to create electronic portals for public records requests. These portals will include security features to verify human requestors and their residency status. The bill also introduces provisions for collecting additional fees from non-residents and prioritizing requests based on their purpose. Public agencies are required to report suspicious requests, and the public access counselor must address excessive requests and recommend solutions to the General Assembly.

Key Sections

Key Requirements

  • Requires portals to incorporate CAPTCHA or an equivalent mechanism to verify that the requestor is human.
  • Requires portals to indicate whether the requestor is an Indiana resident and to verify the requestor's physical address.
  • Requires portals to automatically log and report submissions suspected of being automated or originating from phishing or data-scraping sources; definitions of 'data scraping' and 'phishing' are added to the public access framework.
  • Requires public agencies to report suspected automated or phishing requests to the public access counselor through a standardized mechanism.
  • Permits agencies to give priority to requests from Indiana residents and to requests made for civic, journalistic, academic, or personal use.
  • Authorizes a supplemental out-of-state processing fee for non-Indiana requests, with caps per page and per hour; fees may be waived if in the public interest.
  • Requires agencies to charge a uniform copying fee (or average cost) for standard records, with reasonable fees for nonstandard documents and specific caps for law enforcement recordings; agencies must provide at least one copy if reasonably capable and must require advance payment of fees.
  • Requires written denials citing the applicable exemptions, with a general description of withheld materials; oral denials are allowed for certain oral requests but must be put in writing to continue the denial.
  • Deems a request denied after 24 hours of nonresponse (in-person or phone requests) or 7 days (mail, facsimile, or electronic portal requests); agency responses may be appealed under a specified chapter division.
  • Permits agencies to deny disclosure, or refuse to confirm a record's existence, if disclosure would threaten public safety or reveal vulnerabilities (e.g., to counterterrorism).
  • Preserves confidentiality except under specified lawful access triggers; advisory opinions or informal inquiries are encouraged before filing actions, with fee implications.
  • Provides for de novo court review with a detailed public affidavit from the agency; the burden falls initially on the agency to prove exemptions and shifts to the requester if the agency sustains its initial refusal. In camera review and interlocutory appeals are available, and courts may award fees and costs to the prevailing party under specified conditions.
  • Directs the public access counselor to track suspicious requests, coordinate reporting mechanisms, establish training programs and educational materials, and take action regarding excessive requests.
  • Requires an annual electronic report to the Legislative Services Agency covering the volume and nature of public records requests, with recommendations for statutory or administrative remedies to address excessive requests.
  • Allows the General Assembly to create reasonable procedural safeguards to protect agency resources.
  • Require verification of the requestors physical address
  • School corporations/charter schools are subject to specific search-fee rules for electronic records, including a 5-hour non-charge threshold and hourly rates thereafter.
  • Take specified actions regarding excessive public records requests.

Sponsors

Legislative Actions

Date Action
2026-01-28 Senate sponsor: Senator Brown L
2026-01-28 Third reading: passed; Roll Call 115: yeas 94, nays 0
2026-01-27 Amendment #1 (Lehman) prevailed; voice vote
2026-01-27 Second reading: amended, ordered engrossed
2026-01-22 Committee report: amend do pass, adopted
2026-01-22 Representative Miller D added as coauthor
2026-01-15 Representative Porter added as coauthor
2026-01-08 Authored by Representative Lehman

Detailed Analysis

Analysis 1

Why Relevant: The bill addresses the use of automated systems and bots, which are frequently powered by AI, to scrape public records and interact with government portals.

Mechanism of Influence: It mandates technical barriers such as CAPTCHA or equivalent verification to ensure requestors are human, and requires agencies to log and report suspected automated submissions, data scraping, or phishing attempts.

Evidence:

  • Automatically logs and reports suspected automated submissions.
  • Portal must incorporate CAPTCHA or equivalent verification.
  • Public agencies must report suspected automated requests related to phishing or data scraping.

Ambiguity Notes: While the bill does not use the specific term 'Artificial Intelligence,' the regulation of 'automated submissions' and 'data scraping' directly impacts the methods used by AI developers and agents to harvest public data.
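
The gating-and-logging workflow described above could be implemented roughly as follows (a minimal sketch; the class name, field names, and priority tiers are illustrative assumptions, not taken from the bill):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RecordsPortal:
    """Hypothetical public-records portal enforcing the bill's gating rules."""
    flagged: List[Dict] = field(default_factory=list)

    def submit(self, request: Dict) -> str:
        # CAPTCHA (or an equivalent mechanism) must confirm the requestor is human.
        if not request.get("captcha_passed"):
            return "rejected: human verification required"
        # Suspected automated or phishing submissions are logged and reported.
        if request.get("suspected_automated"):
            self.flagged.append(request)
            return "flagged: reported to public access counselor"
        # Requests must indicate Indiana residency; residents may be prioritized.
        priority = 1 if request.get("indiana_resident") else 2
        return f"accepted: priority {priority}"

portal = RecordsPortal()
print(portal.submit({"captcha_passed": False}))
print(portal.submit({"captcha_passed": True, "indiana_resident": True}))
print(portal.submit({"captcha_passed": True, "suspected_automated": True}))
```

The sketch treats residency as a priority tier rather than an eligibility bar, matching the bill's "gives priority" language rather than an outright exclusion.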

House - 1421 - Ban on employer use of automated decision systems.

Legislation ID: 248801

Bill URL: View Bill

Summary

House Bill No. 1421 aims to prohibit employers from relying solely on automated decision systems for employment-related decisions and outlines specific conditions under which such systems can be used. It establishes rights for employees and candidates regarding the use of automated outputs, mandates disclosures, and provides mechanisms for enforcement and civil action against violations. The bill seeks to protect covered individuals from discrimination and retaliation related to automated decision-making processes.

Key Sections

Key Requirements

  • Covered individuals or labor organizations can file civil actions for violations.
  • Disclosures must be made before hiring or by a specified date for existing employees.
  • Employers cannot rely exclusively on automated decision systems for employment decisions.
  • Employers must allow covered individuals to opt out of automated management.
  • Employers must meet testing and validation requirements for automated decision systems before use.
  • Employers must not retaliate against individuals for filing complaints or exercising rights under the bill.
  • Employers must provide clear descriptions of the automated decision system and its outputs.
  • Employers must provide disclosures to covered individuals regarding the use of automated decision systems.
  • Human managers must be available to make employment-related decisions.
  • The department can receive complaints and investigate violations.
  • Training must cover input information, appeals process, potential biases, limitations, adverse effects, and inappropriate uses of the automated decision system.
  • Updated disclosures must be provided when significant changes occur.
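
Taken together, the requirements above amount to a human-in-the-loop gate on any automated score. A minimal sketch of such a gate (the names and the scoring field are hypothetical; the bill specifies no implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    name: str
    ads_score: float          # output of a hypothetical automated decision system
    opted_out: bool = False   # covered individuals may opt out of automated management

def employment_decision(candidate: Candidate, human_verdict: Optional[str]) -> str:
    """Illustrative gate: the ADS score may inform, but never solely decide."""
    if candidate.opted_out or human_verdict is None:
        # No human judgment on record: the bill bars relying exclusively on the ADS.
        return "pending human review"
    return human_verdict  # the final call rests with the human manager

print(employment_decision(Candidate("A", ads_score=0.91), human_verdict=None))
print(employment_decision(Candidate("B", ads_score=0.40), human_verdict="hire"))
```

Note that the gate ignores the ADS score entirely when the individual has opted out, which is one plausible reading of the opt-out right.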

Sponsors

Legislative Actions

Date Action
2026-01-08 Authored by Representative Harris
2026-01-08 First reading: referred to Committee on Employment, Labor and Pensions

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation of automated decision systems, which are a primary form of artificial intelligence used for organizational decision-making.

Mechanism of Influence: It imposes a legal prohibition against fully autonomous AI decision-making in hiring and management, requiring human intervention and validation.

Evidence:

  • Employers are prohibited from relying solely on automated decision systems for employment-related decisions
  • Employers must meet testing and validation requirements for automated decision systems before use.

Ambiguity Notes: The scope of the regulation depends on the bill's specific definition of 'automated decision system' versus 'passive computing infrastructure'.

Analysis 2

Why Relevant: The legislation includes mandatory disclosure requirements for AI-driven systems, a key component of AI transparency regulation.

Mechanism of Influence: Employers must provide clear descriptions of the system's logic and outputs to affected individuals before use or hiring, enabling individuals to understand and dispute AI-generated outcomes.

Evidence:

  • Employers using automated decision systems must disclose specific information to covered individuals about the system and its outputs, including how decisions are made

Ambiguity Notes: The requirement for 'clear descriptions' may be subject to interpretation regarding the level of technical complexity required in the disclosure.

Analysis 3

Why Relevant: The bill mandates training and oversight for those operating AI systems.

Mechanism of Influence: It requires that human operators are trained on the limitations, potential biases, and adverse effects of the automated systems they use.

Evidence:

  • Training must cover input information, appeals process, potential biases, limitations, adverse effects, and inappropriate uses of the automated decision system.

Ambiguity Notes: None

Senate - 199 - Various education matters.

Legislation ID: 249529

Bill URL: View Bill

Summary

SB 199 introduces several amendments to Indiana's education laws, including changes to the composition of the case review panel for interscholastic athletics, requirements for schools with low reading proficiency scores, and mandates for the secretary of education to report on civic literacy metrics and employee paid leave recommendations. Additionally, it establishes regulations for social media services concerning adolescent users.

Key Sections

Key Requirements

  • Accredit and review teacher preparation programs.
  • Authorize at least two accreditors; the department may also act as an accreditor.
  • Approve content-area licensure programs and specify licenses for graduates of approved programs.
  • Establish a matrix rating system for programs (based on the last three years of data).
  • Identify and implement actions for programs not meeting minimum ratings (improvement plans or peer review).
  • Publish attrition, retention, and completion rates and licensure exam data, with annual program performance reporting open to public access.
  • Department must arrange a statewide professional instruction system.
  • External evaluation of reading instruction programs by Dec 31, 2024 (with related ongoing collaboration).
  • Case review panel must have nine members, including the secretary of education (or designee) as chairperson; panel functions are administered by the secretary.
  • Members include four parents of high school students, two high school principals, two high school athletic directors, and four school administrators.
  • Terms: four years, with staggered start terms for initial appointments.
  • Quorum: five members; decision rule is the greater of a majority of those present or four votes.
  • Panel meets monthly (unless no cases), with emergency meetings as needed for time-sensitive issues, and must issue decisions within ten business days.
  • Association must implement panel decisions; decisions apply to the specific case only.
  • Association pays all costs, including at least a $50 per-meeting stipend to panel members.
  • Identify key civic literacy metrics and activities for K-12 and postsecondary education; submit them to the General Assembly electronically (IC 5-14-6) by December 1, 2026.
  • Identify school corporations providing paid leave for specified events: birth of the employee's child; birth of a child to the employee's spouse; placement for adoption; stillbirth.
  • Identify leave lengths for each event and provide recommendations for paid leave for each.
  • Submit paid leave findings and recommendations to the General Assembly by December 1, 2026; the section expires July 1, 2027.
  • Requires reasonable age verification methods for social media account creation.
  • Limits the use of collected information solely to age determination and prohibits retention or use of identifying information beyond age verification unless required by court order.
  • Mandates written parental consent for adolescent accounts and prohibits account creation for children under a certain age.
  • Requires social media services to provide separate access credentials for parents, allowing them to view account activity, modify configurations, and set access limits.
  • Imposes time restrictions on account access.
  • Restricts direct communications to linked accounts only.
  • Prevents accounts from appearing in search results unless designated by the user.
  • Prohibits dissemination of content based on usage patterns.
  • Identifies specific features that contribute to addictive use of social media.
  • Applies to social media services that employ algorithms analyzing user data to select content.
  • Allows parents to sue for violations of the bill's provisions, entitling them to damages, injunctive relief, and legal costs.
  • Allows individuals to take action against unlawful retention or use of their information.

Sponsors

Legislative Actions

Date Action
2026-01-28 Amendment #6 (Raatz) prevailed; voice vote
2026-01-28 Reread second time: amended, ordered engrossed
2026-01-27 Placed back on second reading
2026-01-26 Amendment #3 (Raatz) prevailed; voice vote
2026-01-26 Amendment #5 (Raatz) prevailed; voice vote
2026-01-26 Second reading: amended, ordered engrossed
2026-01-15 Committee report: amend do pass, adopted
2026-01-08 Senator Rogers added as second author

Detailed Analysis

Analysis 1

Why Relevant: The bill includes specific mandates for age verification and parental consent for digital platforms.

Mechanism of Influence: It requires social media services to implement age verification protocols and restricts account creation for minors without explicit parental permission, which is a regulatory theme requested by the user.

Evidence:

  • Establishes regulations that prevent social media services from allowing minors to create accounts without parental consent and imposes restrictions on the collection of personal information for age verification.

Ambiguity Notes: The bill focuses on 'social media services' rather than 'artificial intelligence' specifically, though these platforms often utilize AI for content delivery and age estimation.
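
As a rough illustration of the tiered consent scheme the analysis describes, an age gate might look like the following (the cutoff ages and function names are placeholders, since the bill text quoted here does not specify them):

```python
from datetime import date

# Placeholder tiers: accounts barred for children, allowed for adolescents
# only with written parental consent. Ages are illustrative, not from SB 199.
CHILD_MAX_AGE = 12
ADULT_MIN_AGE = 18

def account_decision(birthdate: date, parental_consent: bool, today: date) -> str:
    # Compute age, adjusting if the birthday has not yet occurred this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if age <= CHILD_MAX_AGE:
        return "denied: children may not hold accounts"
    if age < ADULT_MIN_AGE:
        return "allowed" if parental_consent else "denied: written parental consent required"
    return "allowed"

print(account_decision(date(2015, 6, 1), False, date(2026, 1, 29)))
print(account_decision(date(2011, 6, 1), True, date(2026, 1, 29)))
```

The bill's restriction on retaining identifying information means a real implementation would discard the birthdate after this check rather than store it.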

↑ Back to Table of Contents

Iowa

Index of Bills

House - 2048 - relating to personal data processing practices for companies, and making civil penalties applicable.

Legislation ID: 259635

Bill URL: View Bill

Summary

House File 2048 introduces regulations concerning the processing of personal data by companies operating in Iowa. It defines key terms related to personal data and outlines the responsibilities of companies, including disclosure requirements, consent protocols, and individual rights regarding personal data. The bill also establishes enforcement mechanisms, including penalties for violations and the ability for individuals to seek damages. Exemptions are provided for specific types of data processing, such as for law enforcement and national security purposes.

Key Sections

Key Requirements

  • Companies must cease processing personal data within 30 days of revocation of consent.
  • Companies must collect only necessary personal data and allow individuals to revoke consent easily.
  • Companies must disclose the purposes of data use, types of data processed, and whether data will be shared or sold.
  • Companies must maintain security practices appropriate to the data they process.
  • Companies must obtain clear and affirmative consent from individuals before processing their data.
  • Individuals can request a summary of their processed personal data.
  • Individuals can request corrections to inaccurate data.
  • Individuals can request deletion of their personal data.
  • Individuals can revoke their consent at any time.
  • Individuals have the right to obtain confirmation of whether their personal data is being processed.
  • The attorney general may investigate violations.
  • Violations are considered unlawful practices and may result in civil penalties of up to $7,500 per violation.

Sponsors

Legislative Actions

Date Action
2026-01-14 Introduced, referred to Economic Growth and Technology.H.J. 76.

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly includes 'automated decision making' within its scope of definitions and regulations.

Mechanism of Influence: By defining and potentially regulating automated decision making, the law impacts how AI algorithms process personal data to make choices about individuals, often a precursor to specific AI transparency requirements.

Evidence:

  • This section defines key terms related to personal data processing, including 'automated decision making,' 'company,' 'personal data,' and 'process.'

Ambiguity Notes: The abstract mentions the definition of automated decision making but does not specify the exact restrictions or opt-out rights associated with it, which are common in similar privacy-AI legislation.

Analysis 2

Why Relevant: The legislation mandates disclosure and consent protocols for data processing, which are foundational to AI governance and data provenance.

Mechanism of Influence: Requirements to disclose the purposes of data use and obtain affirmative consent directly affect the collection of training data for AI models and the transparency of AI-driven services.

Evidence:

  • Companies must disclose their data processing practices to individuals clearly and obtain consent before processing personal data.
  • Companies must disclose the purposes of data use, types of data processed, and whether data will be shared or sold.

Ambiguity Notes: While the bill focuses on personal data generally, the requirements for 'clear and affirmative consent' create a regulatory hurdle for the mass scraping or use of personal data in AI development.

Analysis 3

Why Relevant: The bill provides individuals with the right to delete data and revoke consent, which impacts the lifecycle of data used in AI systems.

Mechanism of Influence: The requirement to cease processing within 30 days of consent revocation and the right to request deletion could necessitate the removal of specific data points from active AI models or training sets.

Evidence:

  • Companies must cease processing personal data within 30 days of revocation of consent.
  • Individuals can request deletion of their personal data.

Ambiguity Notes: The practical application of 'deleting' data from a trained neural network is a complex technical area that the law does not explicitly address.
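
The 30-day cessation window is at least straightforward to operationalize at the data-pipeline level, even if model-level deletion is not. A sketch of the deadline arithmetic (helper names are illustrative):

```python
from datetime import date, timedelta

# Statutory window under the bill: processing must cease within 30 days
# of consent revocation.
CESSATION_WINDOW = timedelta(days=30)

def cessation_deadline(revoked_on: date) -> date:
    """Last date by which processing must have ceased."""
    return revoked_on + CESSATION_WINDOW

def is_compliant(revoked_on: date, ceased_on: date) -> bool:
    # Processing must stop on or before the deadline.
    return ceased_on <= cessation_deadline(revoked_on)

print(cessation_deadline(date(2026, 2, 1)))              # -> 2026-03-03
print(is_compliant(date(2026, 2, 1), date(2026, 3, 2)))  # -> True
```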

House - 2082 - relating to restrictions on the use of artificial intelligence, and creating a civil cause of action.

Legislation ID: 266493

Bill URL: View Bill

Summary

House File 2082 outlines definitions and regulations regarding the use of artificial intelligence (AI) to recreate an individual's likeness without consent. The bill specifies the types of uses that require consent, such as in commercial activities or political campaigns, and establishes a framework for civil liability, including potential damages and class action provisions for violations.

Key Sections

Key Requirements

  • Allows for actions for actual and punitive damages for violations.
  • Defines separate violations for each day a violation continues.
  • Prohibits unauthorized likeness use in commercial activities, unsupported political campaigns, and contexts that may harm the individual’s reputation.
  • Requires consent of the individual for likeness recreation.
  • Sets punitive damages cap at $250,000 per violation.

Sponsors

Legislative Actions

Date Action
2026-01-15 Introduced, referred to Economic Growth and Technology.H.J. 01/15.

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the application of artificial intelligence by requiring consent for the recreation of an individual's likeness, which falls under the category of regulating AI usage and requiring disclosures/permissions.

Mechanism of Influence: It creates a legal prohibition against unauthorized AI likeness generation in commercial and political contexts, enforced through civil liability and punitive damages.

Evidence:

  • Prohibits the use of AI to recreate an individual's likeness without their consent.
  • Defines 'artificial intelligence' as any machine-based system that generates outputs based on inputs, which can influence environments.

Ambiguity Notes: The definition of artificial intelligence as a system that can 'influence environments' is broad and could encompass a wide range of software beyond generative media.

Analysis 2

Why Relevant: The legislation establishes a framework for oversight and accountability for AI developers and users through the legal system.

Mechanism of Influence: By defining separate violations for each day and allowing for class action suits, the bill creates significant financial and legal risks for non-compliance, effectively acting as a regulatory deterrent.

Evidence:

  • Allows for actions for actual and punitive damages for violations.
  • Defines separate violations for each day a violation continues.
  • Sets punitive damages cap at $250,000 per violation.

Ambiguity Notes: The bill does not specify technical standards for how consent must be obtained or verified, which may lead to litigation over the validity of digital disclosures.

House - 2153 - requiring community colleges, school districts, and institutions under the control of the state board of regents to adopt policies related to the use of artificial intelligence by students and employees.

Legislation ID: 286695

Bill URL: View Bill

Summary

House File 2153 mandates that community colleges, school districts, and state universities develop and publish policies regarding the use of artificial intelligence by students and employees. These policies must clarify when AI can be used in educational settings and outline any prohibitions. The bill includes specific deadlines for the adoption of these policies and requires them to be accessible via the institutions' websites.

Key Sections

Key Requirements

  • Adopt a policy, by July 1, 2028, specifying when students, employees, and faculty may use AI during instructional time and work duties.
  • Outline prohibitions on AI use for students and employees.
  • Publish the policy on the college's, school district's, or institution's website.
  • Provide a copy of the policy to the director of the department of education or to the state board of regents, as applicable.

Sponsors

Legislative Actions

Date Action
2026-01-26 Introduced, referred to Education.H.J. 01/26.

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates the creation of regulatory frameworks and disclosure requirements for AI usage within the public education sector.

Mechanism of Influence: It requires institutions to formally define and disclose their stance on AI, creating a public record of allowed and prohibited AI activities, which acts as a form of institutional regulation and state-level oversight.

Evidence:

  • mandates that community colleges, school districts, and state universities develop and publish policies regarding the use of artificial intelligence
  • Specify prohibitions on AI use for students and staff.
  • Provide a copy of the policy to the director of the department of education.

Ambiguity Notes: The text does not provide a specific technical definition of 'artificial intelligence,' which may lead to inconsistent policy applications across different school districts and universities.

House - 610 - relating to computer science education, including high school curricula and graduation requirements, practitioner preparation programs, and college admissions, and including applicability provisions.

Legislation ID: 284482

Bill URL: View Bill

Summary

House Study Bill 610 introduces measures to integrate computer science into high school curricula and graduation requirements in Iowa. Starting with the graduating class of 2030-2031, students will be required to complete one semester of computer science. The bill also mandates the development of high-quality standards for computer science education across all grades, the creation of a list of approved computer science courses, and a plan to increase the capacity of computer science teachers. Additionally, it outlines how computer science courses can fulfill science and mathematics requirements for college admissions.

Key Sections

Key Requirements

  • Allows application for tuition reimbursement for teachers under initial or conditional licenses seeking computer science endorsements.
  • Allows reimbursement for teachers under initial or conditional licenses.
  • Establishes standards for computer science education that include AI and ethical considerations.
  • Includes instruction on the impact and ethical considerations of AI.
  • Mandates annual reporting on computer science education by school districts.
  • Requires acceptance of computer science courses as equivalent to math or science in admission calculations.
  • Requires annual reports on courses and teacher qualifications related to computer science and AI.
  • Requires a plan to increase computer science teacher capacity to be published.
  • Requires a plan to increase teacher capacity in computer science and AI.
  • Requires a statewide plan for computer science and AI instruction.
  • Requires colleges to accept computer science courses as equivalent to science or mathematics units for admission.
  • Requires high school students to complete one semester of computer science and artificial intelligence to graduate.
  • Requires high school students to complete one semester of computer science to graduate.
  • Requires inclusion of computer science, AI, and computational thinking in teacher preparation programs.
  • Requires publication of approved courses for computer science and AI by the Department of Education.
  • Requires schools to provide high-quality computer science education that includes instruction on artificial intelligence.
  • Requires standards for computer science and AI education at all grade levels.
  • Requires students to complete one semester of computer science for graduation.
  • Requires the publication of a list of approved computer science courses by the Iowa Department of Education.
  • Specifies that state funding for mandates will come from state school foundation aid and relevant grants.

Sponsors

Legislative Actions

Date Action
2026-01-28 Subcommittee recommends passage.
2026-01-26 Subcommittee Meeting: 01/28/2026 12:00PM RM 304.
2026-01-22 Introduced, referred to Education.H.J. 144.
2026-01-22 Subcommittee: Ingels, Kurth and Shipley.H.J. 147.

Detailed Analysis

Analysis 1

Why Relevant: The bill focuses on the educational integration of artificial intelligence and requires the establishment of standards for its instruction.

Mechanism of Influence: It mandates that the Department of Education develop standards for AI education that include instruction on the impact and ethical considerations of the technology.

Evidence:

  • Includes instruction on the impact and ethical considerations of AI.
  • Amends high school graduation requirements to include at least one semester of computer science and artificial intelligence starting with the class of 2030-2031.

Ambiguity Notes: The bill focuses on academic standards and curriculum rather than direct regulation of AI developers or technical audits of AI systems.

Senate - 2094 - relating to computer science and artificial intelligence education, including high school curricula and graduation requirements, practitioner preparation programs, and college admissions, and including applicability provisions.

Legislation ID: 284483

Bill URL: View Bill

Summary

Senate File 2094, a companion to House Study Bill 610, proposes amendments to existing education codes to require high school students to complete a semester of computer science as part of their graduation requirements starting with the class of 2030-2031. It also sets standards for computer science education across all grade levels, mandates the publication of approved computer science courses, and outlines the necessary steps for expanding teacher capacity in this field. Additionally, it addresses the inclusion of computer science courses in college admissions criteria and provides for the potential waiver of graduation requirements in certain circumstances.

Key Sections

Key Requirements

  • Requires all high school students, starting with the 2030-2031 graduating class, to complete one semester of computer science and artificial intelligence.
  • Requires instruction on fundamental concepts of computer science and artificial intelligence, including ethical considerations, at all educational levels.
  • Requires that computer science education be offered in at least one grade level starting from the 2023 school year.
  • Department of Education must publish, by June 30, 2027, a list of computer science courses (with course names and codes) that meet graduation requirements.
  • Department must create a plan by June 30, 2027, to expand computer science teacher capacity, with targeted support for schools with fewer than 500 students.
  • Requires a comprehensive plan for K-12 computer science education.
  • Allows sharing of resources and costs between districts for computer science courses.
  • Requires districts to report course names, teacher information, and student enrollment in computer science courses.
  • Requires higher education institutions to include computer science education in teacher preparation programs.
  • Higher education institutions must accept computer science courses as equivalent to corresponding science or math units for admission.

Sponsors

Legislative Actions

Date Action
2026-01-28 Subcommittee recommends amendment and passage.
2026-01-27 Subcommittee: Gruenhagen, Donahue, and Pike.S.J. 149.
2026-01-27 Subcommittee Meeting: 01/28/2026 2:30PM Room 217 Conference Room.
2026-01-22 Introduced, referred to Education.S.J. 130.

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates the inclusion of artificial intelligence in the state's educational curriculum and graduation requirements.

Mechanism of Influence: It requires the Department of Education to set standards for AI education, including instruction on ethical considerations and societal impacts, and mandates that teacher preparation programs include AI training.

Evidence:

  • Requires instruction on fundamental concepts of computer science and artificial intelligence, including ethical considerations.
  • Amends high school graduation requirements to include one semester of computer science and artificial intelligence starting with the 2030-2031 graduating class.

Ambiguity Notes: The term 'ethical considerations' regarding AI is not specifically defined, leaving the scope of what must be taught to the discretion of the Department of Education and school districts.

Senate - 3011 - establishing requirements and guidelines for chatbots, making appropriations, and providing civil penalties.

Legislation ID: 254908

Bill URL: View Bill

Summary

This bill introduces a framework for defining and regulating chatbots within the state of Iowa. It sets forth specific requirements for chatbot functionality, including transparency about their nature as non-human entities and limitations regarding the advice they can provide. The bill also outlines civil penalties for violations of these regulations and empowers the attorney general to enforce compliance and implement rules.

Key Sections

Key Requirements

  • Chatbots must disclose they are not human at the start of each conversation and every thirty minutes.
  • Defines chatbot as an interactive service that produces adaptive content and accepts open-ended user input.
  • Excludes services that provide limited, predetermined responses or operate within a narrow field.
  • Must inform users that they do not provide professional services and direct them to licensed professionals for such services.
  • Must not represent themselves as licensed professionals.
  • Must prevent claiming to be human or responding deceptively when asked.
  • The attorney general is required to adopt rules in accordance with chapter 17A to enforce the provisions of this bill.
  • The attorney general may bring civil actions to enforce compliance and seek penalties or restitution.
  • Violators may be fined up to $100,000 for each violation.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced, referred to Technology.
2026-01-13 Subcommittee: Warme, Bennett, and Taylor.

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates specific transparency and disclosure requirements for AI-driven chatbot systems.

Mechanism of Influence: It requires chatbots to explicitly state they are not human at the start of interactions and at regular intervals, ensuring users are aware they are interacting with an AI.

Evidence:

  • Chatbots must disclose they are not human at the start of each conversation and every thirty minutes.
  • Must prevent claiming to be human or responding deceptively when asked.

Ambiguity Notes: The requirement to disclose every thirty minutes may be difficult to implement in asynchronous or long-running sessions without clear technical guidelines.

Analysis 2

Why Relevant: The legislation imposes operational restrictions and government oversight on AI functionality.

Mechanism of Influence: It prohibits AI from providing professional advice and empowers the attorney general to levy significant fines and create new rules for AI deployment.

Evidence:

  • Must inform users that they do not provide professional services and direct them to licensed professionals for such services.
  • The attorney general may bring civil actions to enforce compliance and seek penalties or restitution.
  • Violators may be fined up to $100,000 for each violation.

Ambiguity Notes: The definition of 'adaptive content' is broad and could potentially encompass a wide range of generative AI technologies beyond simple text bots.

Senate - 3013 - relating to the ownership of artificial intelligence output and trained artificial intelligence.

Legislation ID: 258361

Bill URL: View Bill

Summary

This bill addresses the ownership of artificial intelligence output and trained artificial intelligence. It defines key terms such as artificial intelligence, input, output, train, and user. It stipulates that users who provide input to AI own the output generated, provided it does not infringe on third-party rights. Additionally, it states that individuals who train AI own the resulting AI if they have lawfully acquired the training data and have not transferred ownership. It also clarifies that if AI is used in an employment context, the output belongs to the employer under certain conditions. The bill ensures that ownership rights do not infringe on existing intellectual property rights.

Key Sections

Key Requirements

  • Output must not infringe on third-party rights.
  • Ownership rights must not be transferred through contract or agreement.
  • Training data must be lawfully acquired.
  • Use must be under the employer's direction and control.
  • Use of AI must be within the scope of employment.
  • User must provide input to the AI.

Sponsors

Legislative Actions

Date Action
2026-01-28 Subcommittee Meeting: 01/29/2026 9:30AM Room 217 Conference Room (Cancelled).
2026-01-13 Introduced, referred to Technology.
2026-01-13 Subcommittee: Alons, Drey, and Kraayenbrink.
2026-01-13 Subcommittee: Alons, Kraayenbrink, and Staed.

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the legal status and property rights of artificial intelligence outputs.

Mechanism of Influence: It creates a statutory default for who owns AI-generated content, which is a core component of AI legal regulation.

Evidence:

  • a user who provides input to an AI owns the output generated by that AI
  • provided it does not infringe on third-party rights

Ambiguity Notes: The phrase 'infringe on third-party rights' is broad and relies on existing intellectual property case law which is currently evolving regarding AI.

Analysis 2

Why Relevant: The bill addresses the ownership of the trained AI models themselves, which relates to the oversight of AI weights and development.

Mechanism of Influence: It sets a legal requirement that training data must be 'lawfully acquired' to claim ownership of the resulting AI model, effectively regulating the data sourcing process for AI development.

Evidence:

  • a person who trains an AI owns the resulting AI if the training data was lawfully acquired

Ambiguity Notes: The term 'lawfully acquired' may be subject to interpretation regarding web-scraping and fair use of copyrighted data for training.

Analysis 3

Why Relevant: The bill provides regulatory clarity for the use of AI in professional and employment environments.

Mechanism of Influence: It defines the 'scope of employment' as a boundary for AI ownership, ensuring that corporate entities retain rights to AI developed or used by employees under their direction.

Evidence:

  • if AI is used during employment, the output or trained AI belongs to the employer
  • Use must be under the employer's direction and control

Ambiguity Notes: The 'direction and control' requirement may be difficult to apply to autonomous or semi-autonomous AI agents used by employees.

Senate - 3014 - relating to the use of artificial intelligence systems and related software by state agencies for employment and other purposes.

Legislation ID: 255881

Bill URL: View Bill

Summary

This bill establishes guidelines for the use of artificial intelligence systems and related software by state agencies in Iowa. It mandates the creation of an inventory of such systems, outlines requirements for automated employment decision-making tools, and prohibits certain uses of AI that could affect employee rights or benefits. The bill emphasizes accountability and transparency in the deployment of AI technologies within state agencies.

Key Sections

Key Requirements

  • Annual report of the inventory to be submitted to the General Assembly by January 15.
  • Department must issue guidance on data elements to be collected for the inventory.
  • Inventory to be posted on the department's website.
  • No use of AI systems that alters employee rights or benefits.
  • No use of AI systems that leads to employee discharge or reduction of wages.
  • Publish a list of tools within 90 days after each tool is used.
  • Submit an annual report to the General Assembly by January 15.

Sponsors

Legislative Actions

Date Action
2026-01-27 Subcommittee recommends passage.
2026-01-21 Subcommittee Meeting: 01/27/2026 12:00PM Room 217 Conference Room.
2026-01-13 Introduced, referred to Technology.
2026-01-13 Subcommittee: McClintock, Bennett, and Sires.

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the application of AI within state government operations, specifically targeting employment-related decisions.

Mechanism of Influence: It creates a legal prohibition against using AI to discharge employees or reduce wages, effectively setting boundaries on algorithmic management.

Evidence:

  • This provision prohibits state agencies from using AI systems in ways that affect employee rights, benefits, or privileges
  • No use of AI systems that leads to employee discharge or reduction of wages.

Ambiguity Notes: The term 'affect employee rights' is broad and may require further legal interpretation to determine if it includes indirect impacts or procedural changes.

Analysis 2

Why Relevant: The legislation mandates transparency and reporting for specific AI-driven tools used in hiring and personnel management.

Mechanism of Influence: State agencies are required to publish lists of automated employment tools and submit annual reports to the General Assembly, ensuring legislative oversight.

Evidence:

  • State agencies using automated employment decision-making tools must publish a list of these tools and submit an annual report on their usage.

Ambiguity Notes: The effectiveness of the disclosure depends on the specific definition of 'automated employment decision-making tools' provided in the bill.

Analysis 3

Why Relevant: The bill requires a comprehensive inventory and public disclosure of all AI systems used by state agencies.

Mechanism of Influence: By tasking the Department of Management with maintaining a public inventory, the bill subjects AI usage to public scrutiny and centralized government tracking.

Evidence:

  • This provision requires the Department of Management to maintain an inventory of AI systems used by state agencies, detailing their purposes and uses.
  • Inventory to be posted on the department's website.

Ambiguity Notes: The specific 'data elements' to be collected are left to the department's discretion, which could affect the depth of the oversight.

↑ Back to Table of Contents

Kansas

Index of Bills

House - 2183 - Modifying elements in the crimes of sexual exploitation of a child, unlawful transmission of a visual depiction of a child and breach of privacy to prohibit certain acts related to visual depictions in which the person depicted is indistinguishable from a real child, morphed from a real child's image or generated without any actual child involvement, provide an exception for cable services in the crime of breach of privacy and prohibit dissemination of certain items that appear to depict or purport to depict an identifiable person.

Legislation ID: 236969

Bill URL: View Bill

Summary

House Bill No. 2183 amends existing laws related to crimes against children, particularly focusing on sexual exploitation and privacy breaches involving visual depictions. The bill expands the definitions of visual depictions to encompass images created or modified by artificial intelligence, thereby addressing modern technological concerns. It modifies elements of existing crimes, introduces new prohibitions related to visual depictions, and includes exceptions for specific situations, such as those involving cable services.

Key Sections

Key Requirements

  • Defines aggravated unlawful transmission with higher penalties.
  • Defines penalties for possession or promotion of such depictions.
  • Defines penalties for unlawful recording and dissemination of images without consent.
  • Defines sexual exploitation to include the use of AI-generated images.
  • Establishes penalties based on the age of the offender and the child involved.
  • Establishes penalties for aggravated unlawful transmission based on intent and circumstances.
  • Establishes that transmitting visual depictions of minors in a state of nudity is unlawful if the offender is under 19 years of age.
  • Includes provisions for dissemination of visual depictions obtained through breaches of privacy.
  • Mandates that any remaining doubts in interpretation should favor individuals' constitutional rights.
  • Outlines the consequences for disseminating unauthorized visual depictions.
  • Prohibits the dissemination of images that have been altered or created to depict identifiable persons without their consent.
  • Prohibits the installation of devices for recording private conduct without consent.
  • Prohibits the possession of visual depictions of children engaging in sexually explicit conduct, regardless of the original image's creation context.
  • Prohibits the transmission of visual depictions of children aged 12 to 18 in states of nudity by individuals under 19.
  • Prohibits the transmission of visual depictions of minors in a state of nudity.
  • Prohibits the transmission of visual depictions of minors in a state of nudity without consent.
  • Prohibits unauthorized interception and dissemination of private communications.
  • Prohibits unauthorized recording or transmission of identifiable persons in private situations.
  • Requires consideration of constitutional rights when interpreting statutes.
  • Requires de novo interpretation by courts and administrative decision-makers, without deference to agency interpretations.
  • Requires individuals to refrain from employing or coercing children into sexually explicit conduct.
  • Requires individuals to refrain from possessing or promoting visual depictions of children in sexually explicit conduct, including those generated by artificial means.
  • Requires the definition of sexual exploitation to include artificially generated visual depictions.
  • Specifies penalties for aggravated unlawful transmission based on intent and prior offenses.

Sponsors

Legislative Actions

Date Action
2026-01-27 Enrolled and presented to Governor on Tuesday, January 27, 2026
2026-01-23 Reengrossed on Friday, January 23, 2026
2026-01-22 Conference Committee Report was adopted; Yea: 83 Nay: 39
2025-03-27 Conference committee report now available
2025-03-27 Conference Committee Report was adopted; Yea: 30 Nay: 10
2025-03-24 Motion to accede adopted; Senator Warren , Senator Titus and Senator Corson appointed as conferees
2025-03-20 Nonconcurred with amendments; Conference Committee requested; appointed Representative Humphries , Representative Williams, L. and Representative Osman as conferees
2025-03-19 Committee of the Whole - Be passed as amended

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly addresses the regulation of AI-generated content by incorporating it into criminal definitions for child exploitation and privacy breaches.

Mechanism of Influence: It subjects creators and possessors of AI-generated child sexual abuse material (CSAM) to criminal prosecution by updating the legal definition of "visual depiction" to include AI-created or altered images.

Evidence:

  • expands the definition of visual depiction to include images created or altered by artificial intelligence or digital means

Ambiguity Notes: The phrase "digital means" is broad and could potentially cover a wide range of non-AI digital manipulation techniques, though AI is specifically named.

↑ Back to Table of Contents

Kentucky

Index of Bills

House - 201 - AN ACT relating to the use of algorithmic devices in setting the amount of rent to be charged to a residential tenant.

Legislation ID: 251520

Bill URL: View Bill

Summary

The bill establishes a new section in Kentucky law that defines algorithmic devices and prohibits their use by landlords in determining rent amounts. It emphasizes that such practices may violate antitrust laws and outlines the consequences for landlords who engage in these practices, deeming them unfair and deceptive. The bill also empowers the Attorney General to enforce these provisions under existing consumer protection laws.

Key Sections

Key Requirements

  • Defines algorithmic device as any device using algorithms to calculate rental prices.
  • Excludes products designed and used internally by landlords.
  • Prohibits landlords from employing or relying on algorithmic devices for rent setting.
  • The act applies to rental agreements executed on or after the effective date.
  • Violations are also considered illegal restraints of trade under KRS 367.175.
  • Violations are considered unfair and deceptive practices under KRS 367.170.

Sponsors

Legislative Actions

Date Action
2026-01-14 to Judiciary (H)
2026-01-07 introduced in House to Committee on Committees (H)

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically targets and regulates algorithmic decision-making tools, which are a fundamental component of artificial intelligence systems used for automated pricing and market analysis.

Mechanism of Influence: The law creates a legal prohibition against the deployment of AI-driven pricing software in the residential rental sector, establishing penalties for landlords who rely on these automated systems to set prices.

Evidence:

  • Defines algorithmic device as any device using algorithms to calculate rental prices.
  • Prohibits landlords from employing or relying on algorithmic devices for rent setting.
  • Violations are considered unfair and deceptive practices under KRS 367.170.

Ambiguity Notes: The definition of 'algorithmic device' includes an exclusion for products 'designed and used internally by landlords,' which may create uncertainty regarding whether proprietary AI models developed in-house are subject to the same restrictions as third-party software.

House - 227 - AN ACT relating to addictive online platforms.

Legislation ID: 251636

Bill URL: View Bill

Summary

This bill outlines the requirements for parental consent when children create accounts on covered AI companion or social media platforms. It includes provisions for resolving disputes regarding a child's age, invalidation of contracts made without proper consent, and the establishment of penalties for violations of these provisions. The bill also empowers the Attorney General to enforce these regulations and provides mechanisms for civil action against non-compliant platforms.

Key Sections

Key Requirements

  • Allows private right of action for violations of the act.
  • Contracts with minors must have parental consent to be valid.
  • Establishes civil penalties for reckless or intentional violations.
  • Grants enforcement authority to the Attorney General.
  • Mandates resolution of age disputes within 30 days.
  • Prohibits any waivers of rights under the act.
  • Requires account termination within 7 days if a child is determined to be ineligible.
  • Requires notice before initiating action for violations.
  • Requires verifiable parental consent for children to open accounts.
  • Specifies damages for emotional distress.

Sponsors

Legislative Actions

Date Action
2026-01-14 to Small Business & Information Technology (H)
2026-01-07 introduced in House to Committee on Committees (H)

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly targets AI companion platforms and mandates age verification and parental consent for minors.

Mechanism of Influence: It requires covered AI platforms to obtain verifiable parental consent before account creation and provides a framework for resolving age disputes, backed by civil penalties and Attorney General enforcement.

Evidence:

  • This bill outlines the requirements for parental consent when children create accounts on covered AI companion or social media platforms.
  • It also outlines the process for age verification if an account holder disputes their classification as a child.
  • This section mandates that covered platforms obtain verifiable parental consent before allowing children to create accounts.

Ambiguity Notes: The scope of 'covered AI companion' platforms may need precise definition to distinguish between general AI tools and those specifically designed for companionship.

House - 33 - AN ACT relating to data privacy.

Legislation ID: 248218

Bill URL: View Bill

Summary

This bill amends KRS 367.3611 to 367.3629, introducing definitions related to data privacy, including terms like personal data, controller, processor, and sensitive data. It outlines the responsibilities of data controllers in handling personal data, ensuring consumer rights are respected, and preventing practices like surveillance pricing. The amendments are set to take effect on January 1, 2026.

Key Sections

Key Requirements

  • Do not engage in surveillance pricing.
  • Establish reasonable data security practices.
  • Limit data collection to what is necessary for processing.
  • Obtain consumer consent for processing sensitive data.

Sponsors

Legislative Actions

Date Action
2026-01-13 to Small Business & Information Technology (H)
2026-01-06 introduced in House to Committee on Committees (H)

Detailed Analysis

Analysis 1

Why Relevant: The bill addresses data processing and surveillance pricing, which are central to the commercial application and regulation of AI technologies.

Mechanism of Influence: By regulating how controllers handle personal and sensitive data, the bill impacts the data pipelines used to train and operate AI models. The prohibition on surveillance pricing specifically targets algorithmic and AI-driven dynamic pricing models.

Evidence:

  • Do not engage in surveillance pricing.
  • Limit data collection to what is necessary for processing.
  • Obtain consumer consent for processing sensitive data.

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its focus on automated data processing and pricing algorithms aligns with common AI regulatory frameworks.

House - 375 - AN ACT relating to automated license plate readers.

Legislation ID: 259851

Bill URL: View Bill

Summary

This legislation creates a new section in KRS Chapter 189 that defines Automated License Plate Readers (ALPR) and outlines strict prohibitions against their use. It establishes criminal penalties for violations, categorizing the use of ALPR systems as a Class D felony, and allows individuals recorded by these systems to seek civil damages.

Key Sections

Key Requirements

  • Allows individuals recorded by ALPR to file civil suits for damages.
  • Classifies the use of ALPR systems as a felony.
  • Prevailing defendants may also recover attorneys' fees.
  • Prevailing plaintiffs may recover attorneys' fees unless found acting in bad faith.
  • Prohibits the use of ALPR systems by individuals and entities.

Sponsors

Legislative Actions

Date Action
2026-01-22 to Transportation (H)
2026-01-14 introduced in House to Committee on Committees (H)

Detailed Analysis

Analysis 1

Why Relevant: The law regulates a specific application of automated technology and algorithms used for data collection and identification, which falls under the broader category of AI-driven surveillance and automated decision-making systems.

Mechanism of Influence: It imposes a total ban on the technology, establishing severe criminal penalties and civil causes of action to prevent the deployment of algorithm-based license plate recognition systems.

Evidence:

  • Defines what constitutes an Automated License Plate Reader (ALPR) as a system using cameras and algorithms to read license plates.
  • Prohibits any person, entity, or government agency from using, deploying, or maintaining an ALPR.

Ambiguity Notes: The definition of ALPR specifically cites the use of 'algorithms' to read license plates, which is a fundamental component of computer vision AI, though the bill does not use the specific term 'Artificial Intelligence'.

↑ Back to Table of Contents

Maine

Index of Bills

House - 1451 - An Act to Regulate and Prevent Children's Access to Artificial Intelligence Chatbots with Human-like Features and Social Artificial Intelligence Companions

Legislation ID: 256064

Bill URL: View Bill

Summary

This legislation prohibits the accessibility of artificial intelligence chatbots and social AI companions that exhibit human-like features to minors. It defines what constitutes human-like features and outlines specific requirements for deployers to prevent minors from accessing such technologies. The bill allows exceptions for therapy chatbots under strict conditions, mandates safeguards for user information, and establishes penalties for violations.

Key Sections

Key Requirements

  • A licensed mental health professional must assess and monitor the use of the therapy chatbot.
  • Deployers may only collect user information necessary for legitimate purposes.
  • Deployers may provide alternative versions of chatbots without human-like features for minors.
  • Deployers must have systems to detect and respond to emergency situations.
  • Deployers must implement reasonable age verification systems.
  • Developers must provide peer-reviewed clinical trial data on the chatbot's safety and efficacy.
  • Minors or their guardians may sue for damages or seek injunctive relief.
  • The Attorney General can bring civil actions for violations.
  • Therapy chatbots must provide a disclaimer that they are not licensed professionals.

Sponsors

Legislative Actions

Date Action
2026-01-13 Referred in Concurrence
2026-01-13 Referred to Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the user's interest in age verification for AI usage.

Mechanism of Influence: Deployers are required to implement reasonable age verification systems to prevent minors from accessing specific AI technologies.

Evidence:

  • Deployers must implement reasonable age verification systems.

Ambiguity Notes: The term 'reasonable' regarding age verification is not strictly defined, leaving the specific technical method to the deployer's discretion or future rulemaking.

Analysis 2

Why Relevant: The legislation regulates the deployment and accessibility of specific AI models based on their features.

Mechanism of Influence: It prohibits the accessibility of chatbots with human-like features to minors and defines what constitutes these features.

Evidence:

  • This legislation prohibits the accessibility of artificial intelligence chatbots and social AI companions that exhibit human-like features to minors.
  • It defines what constitutes human-like features and outlines specific requirements for deployers to prevent minors from accessing such technologies.

Ambiguity Notes: The definition of 'human-like features' is mentioned as being defined in the chapter but the specific criteria are not detailed in the abstract.

Analysis 3

Why Relevant: The bill includes requirements for disclosures and the submission of safety data for specific AI applications.

Mechanism of Influence: Therapy chatbots must provide disclaimers and developers must submit peer-reviewed clinical trial data regarding safety and efficacy.

Evidence:

  • Developers must provide peer-reviewed clinical trial data on the chatbot's safety and efficacy.
  • Therapy chatbots must provide a disclaimer that they are not licensed professionals.

Ambiguity Notes: None

Analysis 4

Why Relevant: The legislation mandates operational safeguards and data collection limits for AI deployers.

Mechanism of Influence: Deployers must implement emergency detection systems and are restricted to collecting only necessary user information.

Evidence:

  • Deployers may only collect user information necessary for legitimate purposes.
  • Deployers must have systems to detect and respond to emergency situations.

Ambiguity Notes: The definition of 'legitimate purposes' for data collection may be subject to interpretation by the Attorney General.

↑ Back to Table of Contents

Maryland

Index of Bills

House - 145 - Election Law - Election Misinformation, Election Disinformation, and Deepfakes

Legislation ID: 263218

Bill URL: View Bill

Summary

House Bill 145 seeks to empower the State Administrator of Elections to act against election misinformation and disinformation, including the use of deepfakes. It mandates the State Board of Elections to maintain a reporting portal for the public and allows for civil actions against entities disseminating false information. The bill also prohibits the use of deepfakes to mislead voters and establishes penalties for violations.

Key Sections

Key Requirements

  • Allows for recovery of costs and attorneys' fees.
  • Allows seeking injunctions for removal of false information.
  • Allows seeking of damages and attorneys' fees.
  • Allows the Administrator to seek injunctions and issue subpoenas.
  • Authorizes the State Board to file civil actions against those disseminating misinformation.
  • Authorizes the State Board to file civil actions against violators.
  • Establishes penalties for knowingly using deepfakes to mislead voters.
  • Mandates periodic review of reported material and corrective action.
  • Mandates periodic review of submissions and issuance of corrective information.
  • Prohibits the use of deepfakes to disseminate materially false information with intent to influence voters.
  • Prohibits the use of deepfakes to produce materially false information with intent to mislead voters.
  • Requires the State Administrator to correct misinformation publicly.
  • Requires the State Administrator to correct misinformation upon receiving a credible report.
  • Requires the State Board to maintain a public portal for reporting misinformation.
  • Requires the State Board to maintain a reporting portal for election misinformation.
  • Specifies penalties for violations, including fines and imprisonment.

Sponsors

Legislative Actions

Date Action
2026-01-16 Hearing 2/04 at 2:00 p.m.
2026-01-14 First Reading Government, Labor, and Elections
2025-07-16 Pre-filed

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly defines and regulates deepfakes, which are a prominent application of generative artificial intelligence.

Mechanism of Influence: It prohibits the creation and dissemination of AI-generated deepfakes that contain materially false information intended to influence or mislead voters, establishing criminal penalties including fines and imprisonment.

Evidence:

  • This section defines deepfake and outlines the conditions under which the use of deepfakes is prohibited in the electoral context
  • Prohibits the use of deepfakes to disseminate materially false information with intent to influence voters.
  • Specifies penalties for violations, including fines and imprisonment.

Ambiguity Notes: The effectiveness of the law depends on the technical definition of 'deepfake' and whether it keeps pace with evolving AI synthesis techniques.

Analysis 2

Why Relevant: The legislation establishes an oversight and reporting mechanism for AI-generated content in the political sphere.

Mechanism of Influence: It mandates the State Board of Elections to maintain a public portal for reporting misinformation (including deepfakes) and requires periodic reviews and corrective actions, creating a government-led audit trail for deceptive AI content.

Evidence:

  • The State Board of Elections is required to maintain a reporting portal for election misinformation and disinformation, review submissions, and issue corrective information
  • Mandates periodic review of reported material and corrective action.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill defines the legal boundaries and exemptions for the use of AI-generated media.

Mechanism of Influence: By providing exemptions for satire and news broadcasts, the bill sets a precedent for how AI regulations balance deceptive intent against protected speech and journalistic use.

Evidence:

  • Certain exemptions are provided for deepfakes used in satire, news broadcasts, and other specified contexts, clarifying when the regulations do not apply.

Ambiguity Notes: The distinction between 'satire' and 'materially false information' may be subjective and lead to legal challenges regarding AI-generated parodies.

House - 148 - Consumer Protection and Labor and Employment - Surveillance-Based Price and Wage Setting - Prohibition

Legislation ID: 263230

Bill URL: View Bill

Summary

House Bill 148 establishes regulations against the use of surveillance data for setting prices and wages. It defines surveillance-based price setting and wage setting, outlines exceptions for certain pricing practices, and establishes penalties for violations under the Maryland Consumer Protection Act. The bill aims to ensure fair practices in pricing and employment compensation by restricting the use of personal data obtained through surveillance.

Key Sections

Key Requirements

  • Defines key terms, including automated decision system, surveillance data, surveillance-based price setting, and surveillance-based wage setting.
  • Prohibits surveillance-based price setting, and the use of surveillance data for price setting, by any person.
  • Prohibits surveillance-based wage setting, and the use of surveillance data for wage setting, by employers.
  • Exempts pricing practices based on actual cost differences or uniform discounts available to all consumers.
  • Exempts wage setting based on job-related data or cost-of-living adjustments, provided that employers disclose the data considered.
  • Lists exceptions for customized pricing and wages based on specific criteria.

Sponsors

Legislative Actions

Date Action
2026-01-19 Hearing 2/10 at 1:00 p.m.
2026-01-14 First Reading Economic Matters
2025-08-14 Pre-filed

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly regulates 'automated decision systems,' which is a standard legal classification for artificial intelligence and algorithmic decision-making tools.

Mechanism of Influence: It restricts the functional application of AI by prohibiting these systems from using specific types of data (surveillance data) to automate the determination of prices and wages.

Evidence:

  • defines key terms such as automated decision system and surveillance data
  • prohibits the use of surveillance data in conjunction with automated decision systems to set customized prices for goods or services

Ambiguity Notes: The term 'automated decision system' is broad and typically encompasses a wide range of AI technologies, from simple rule-based algorithms to complex machine learning models, depending on the specific statutory definition used in the bill.

Analysis 2

Why Relevant: The legislation focuses on the governance of data inputs for automated systems, which is a core component of AI regulation and oversight.

Mechanism of Influence: By defining and restricting 'surveillance-based' practices, the law forces developers and users of AI to audit their data pipelines to ensure prohibited surveillance data is not influencing automated outcomes.

Evidence:

  • prohibits employers from using surveillance data alongside automated decision systems to determine customized wages for employees
  • outlines definitions related to automated decision systems and surveillance data

Ambiguity Notes: The bill's impact on AI depends on how 'surveillance data' is defined; if defined broadly, it could affect a vast array of data points used in predictive AI modeling.
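The data-pipeline audit described above can be pictured as a simple input filter. This is a hypothetical sketch, not language from the bill: the feature names and the blocklist of "surveillance data" categories are illustrative assumptions, since the statutory definition will control what actually must be excluded.

```python
# Hypothetical audit of a pricing model's inputs under an HB 148-style rule:
# drop any feature that falls in an assumed "surveillance data" category
# before the automated decision system sets a price.

# Assumed taxonomy of surveillance-derived fields (not from the bill text).
SURVEILLANCE_FEATURES = {"location_history", "browsing_history", "biometrics"}

def audit_pricing_inputs(features: dict) -> dict:
    """Return only the inputs permitted for automated price setting."""
    return {k: v for k, v in features.items() if k not in SURVEILLANCE_FEATURES}

raw = {
    "base_cost": 12.50,           # permitted: actual cost difference
    "bulk_discount": 0.10,        # permitted: uniform discount for all consumers
    "location_history": ["..."],  # flagged: surveillance-derived
}
allowed = audit_pricing_inputs(raw)
print(sorted(allowed))  # ['base_cost', 'bulk_discount']
```

The point of the sketch is that compliance happens upstream of the model: if the prohibited data never reaches the automated decision system, the pricing outcome cannot be "surveillance-based."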

House - 184 - Criminal Law - Identity Fraud - Artificial Intelligence and Deepfake Representations

Legislation ID: 263367

Bill URL: View Bill

Summary

House Bill 184 seeks to enhance protections against identity fraud by prohibiting the unauthorized use of personal identifying information and the malicious use of artificial intelligence or deepfake technologies to harm individuals. The bill outlines various forms of identity fraud and establishes penalties for violations, while also allowing victims to pursue civil actions against perpetrators. It defines key terms related to identity fraud and sets forth requirements for prosecution and civil recourse.

Key Sections

Key Requirements

  • Allows victims to seek civil remedies.
  • Courts may grant injunctions to prevent further violations.
  • Establishes different penalties based on the value of the benefit obtained through fraud.
  • Prohibits malicious use of personal identifying information.
  • Prohibits the use of deepfake representations to defraud or mislead individuals.
  • Requires consent of the individual for the use of their personal identifying information.

Sponsors

Legislative Actions

Date Action
2026-01-16 Hearing 2/03 at 1:00 p.m.
2026-01-16 Hearing 2/03 at 2:00 p.m.
2026-01-16 Hearing canceled
2026-01-14 First Reading Judiciary
2025-11-01 Pre-filed

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly targets the use of artificial intelligence and deepfake technology as tools for committing identity fraud and impersonation.

Mechanism of Influence: It creates a legal framework that prohibits the creation of deepfake representations for fraudulent purposes, subjecting users of such AI tools to criminal prosecution and civil liability.

Evidence:

  • The bill prohibits individuals from... using deepfake technology to impersonate others.
  • This section specifically prohibits the use of artificial intelligence and deepfake technologies to impersonate others or create false records with harmful intent.

Ambiguity Notes: The bill's effectiveness may depend on how broadly 'harm' and 'fraudulent purposes' are interpreted when applied to AI-generated content that might be satirical or non-malicious.

Analysis 2

Why Relevant: The legislation establishes statutory definitions for key AI concepts, which is a primary step in regulating the technology.

Mechanism of Influence: By defining 'artificial intelligence' and 'deepfake representation,' the bill determines the technical scope of the activities that are subject to its prohibitions and penalties.

Evidence:

  • This section provides definitions for key terms related to the bill, including artificial intelligence, deepfake representation, and harm.

Ambiguity Notes: The abstract does not provide the specific technical language used in the definitions, which could be either too narrow to cover emerging AI techniques or too broad, potentially capturing standard digital editing.

House - 314 - Automation Technology Deployment Assessment and Displaced Employee Retraining Fund - Established

Legislation ID: 264427

Bill URL: View Bill

Summary

House Bill 314 requires certain employers that deploy automation technology to report employee counts and displaced employees to the Secretary of Labor, and to pay an assessment for each displaced employee. The bill establishes the Displaced Employee Retraining Fund to support retraining programs for individuals affected by automation technology, ensuring they have access to training and job placement services.

Key Sections

Key Requirements

  • Employers with 100 or more employees must annually report employee counts, details of the automation technology used, and the number of employees displaced by that technology.
  • Reports must be submitted annually by January 15, starting in 2028, and must exclude separations due to voluntary attrition, significant revenue declines unrelated to automation, or facility closures.
  • Covered employers must pay a $900 assessment for each displaced employee reported; the payment can be reduced by 50% if severance pay, retraining opportunities, or successful job placements are provided.
  • Penalties include $250 for each day a report is late and $250,000 for failure to make required payments.
  • The Displaced Employee Retraining Fund will consist of assessments paid by employers and state budget appropriations, and expenditures must exclusively support training and job placement services.
  • The Secretary must verify compliance among employers with 100 or more employees and report annually on the number of employers and displaced employees and the total assessments collected.

Sponsors

Legislative Actions

Date Action
2026-01-19 Hearing 2/04 at 1:00 p.m.
2026-01-15 First Reading Economic Matters

Detailed Analysis

Analysis 1

Why Relevant: The bill addresses the regulation and disclosure of 'automation technology' in the workplace, which is the broader category under which Artificial Intelligence (AI) systems typically fall when used to replace or augment human labor.

Mechanism of Influence: It imposes a mandatory reporting requirement on the types of automation technology used and the resulting displacement of human workers, effectively serving as a disclosure mandate for AI-driven automation.

Evidence:

  • Covered employers are required to submit annual reports detailing employee counts and automation technology used, along with the number of displaced employees.
  • This section defines key terms related to automation technology and the responsibilities of covered employers regarding employee displacement.

Ambiguity Notes: The term 'automation technology' is broad and likely includes AI, though the abstract does not explicitly use the term 'Artificial Intelligence'. The scope of what constitutes 'automation technology' would determine the extent of AI oversight.
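The bill's assessment arithmetic is simple enough to sketch directly: $900 per reported displaced employee, halved when the employer provides severance, retraining, or successful job placement. The function below is an illustration of the amounts stated in the summary, not official guidance on how the Secretary would compute liability.

```python
# Back-of-the-envelope sketch of the HB 314 assessment formula.
ASSESSMENT_PER_EMPLOYEE = 900   # dollars per displaced employee
REDUCTION = 0.5                 # 50% reduction under qualifying conditions

def automation_assessment(displaced: int, qualifies_for_reduction: bool) -> float:
    """Total assessment owed for a reporting year."""
    total = displaced * ASSESSMENT_PER_EMPLOYEE
    return total * REDUCTION if qualifies_for_reduction else total

print(automation_assessment(40, False))  # 36000
print(automation_assessment(40, True))   # 18000.0
```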

House - 434 - Residential Leases - Use of Algorithmic Device by Landlord to Determine Rent, Occupancy, and Lease Terms - Prohibition

Legislation ID: 283780

Bill URL: View Bill

Summary

House Bill 434 introduces a prohibition against landlords utilizing algorithmic devices that rely on nonpublic competitor data for setting rent prices and lease terms. This legislation is designed to protect tenants from potentially unfair practices that could arise from the use of such technology. Violations of this act would be classified as unfair, abusive, or deceptive trade practices under the Maryland Consumer Protection Act.

Key Sections

Key Requirements

  • Clarifies what constitutes nonpublic competitor data.
  • Classifies violations as unfair, abusive, or deceptive trade practices.
  • Defines algorithmic device and its exclusions.
  • Indicates that it does not retroactively affect existing rental agreements.
  • Prohibits the use of algorithmic devices for determining rent and lease terms.
  • Specifies that landlords cannot use algorithmic devices for determining rent, lease terms, or occupancy levels.
  • Specifies the effective date of the act.

Sponsors

Legislative Actions

Date Action
2026-01-28 Hearing 2/19 at 1:00 p.m.
2026-01-22 First Reading Economic Matters

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically regulates the use of 'algorithmic devices,' which is a core component of artificial intelligence and automated decision-making systems used in commercial settings.

Mechanism of Influence: It creates a legal prohibition against using specific types of automated systems for rent-setting, effectively regulating how AI-driven tools can be applied in the real estate industry to prevent price-fixing.

Evidence:

  • Landlords are prohibited from employing algorithmic devices that utilize nonpublic competitor data to set rental prices, occupancy levels, or lease terms.
  • Defines algorithmic device and its exclusions.

Ambiguity Notes: The specific definition of 'algorithmic device' and its 'exclusions' will determine the breadth of the law, as it may exclude simple spreadsheets while targeting complex AI models.

House - 9 - 3-1-1 Systems - Expansion Program and Oversight Board - Establishment

Legislation ID: 262849

Bill URL: View Bill

Summary

House Bill 9 aims to create the Maryland 3–1–1 Oversight Board and a 3–1–1 Program that incorporates artificial intelligence technology to enhance the efficiency of nonemergency services. The bill mandates the implementation of these systems across all counties by a specified date, ensuring that residents have access to streamlined information and services through modern technology.

Key Sections

Key Requirements

  • Establishment of the Maryland 3–1–1 Oversight Board.
  • Board must consist of appointed members from the Senate, House, and various state departments and organizations.
  • Board must meet at least four times per year; members are not compensated but can be reimbursed for expenses.
  • Board must designate counties for participation in the program, establish evaluation criteria, and review vendor applications.
  • Board must implement marketing strategies that are accessible, multilingual, and culturally competent.
  • Maryland Information Network must solicit proposals from vendors for technology platforms.
  • Chatbots must be established by June 30, 2027, in designated counties, and voicebots by December 1, 2028, in participating counties.
  • Each chatbot and voicebot must include multilingual support and clear escalation protocols.
  • Complete statewide implementation by July 1, 2028, ensuring alignment with best practices for technology and accessibility.
  • Definitions provided for key terms related to the 3–1–1 system.
  • Board must submit a progress report by December 1, 2027, and a report evaluating the program's implementation and effectiveness by July 1, 2028.

Sponsors

Legislative Actions

Date Action
2026-01-28 Hearing 2/10 at 1:00 p.m.
2026-01-14 First Reading Government, Labor, and Elections
2025-10-17 Pre-filed

Detailed Analysis

Analysis 1

Why Relevant: The bill provides a formal legal definition of artificial intelligence and applies it to state-run public services.

Mechanism of Influence: It adopts the definition from the State Finance and Procurement Article to categorize the predictive and decision-making software used in the 3-1-1 program.

Evidence:

  • Reiterates the definition of artificial intelligence as per the State Finance and Procurement Article, emphasizing its predictive and decision-making capabilities.
  • Defines key terms related to the 3–1–1 systems, including artificial intelligence, chatbot, voicebot, and 3–1–1 system.

Ambiguity Notes: The definition focusing on 'predictive and decision-making capabilities' is broad and could encompass a wide variety of algorithmic systems beyond generative AI.

Analysis 2

Why Relevant: It creates a regulatory body (the Maryland 3–1–1 Oversight Board) specifically tasked with the governance of AI implementation.

Mechanism of Influence: The Board is responsible for establishing evaluation criteria, reviewing vendor applications, and ensuring technology aligns with best practices for accessibility and performance.

Evidence:

  • Outlines the duties of the Board, including designating counties for the program, establishing evaluation criteria, and ensuring alignment with best practices.
  • Board must establish evaluation criteria for the program.
  • Maryland Information Network must solicit proposals from vendors for technology platforms.

Ambiguity Notes: The term 'best practices' is not defined in the text, leaving the Board with significant discretion to set its own regulatory standards.

Analysis 3

Why Relevant: The legislation mandates specific operational safeguards and features for AI-driven communication tools.

Mechanism of Influence: It requires chatbots and voicebots to include multilingual support and 'clear escalation protocols,' effectively regulating how the AI must interact with and hand off to human operators.

Evidence:

  • Each chatbot and voicebot must include multilingual support and clear escalation protocols.
  • Establish chatbots by June 30, 2027, in designated counties.
  • Implement voicebots by December 1, 2028, in participating counties.

Ambiguity Notes: The bill does not specify the technical requirements for 'escalation protocols' or the specific conditions under which a human must intervene.
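Since the bill leaves "clear escalation protocols" undefined, implementers will have to pick their own hand-off triggers. The sketch below shows one plausible shape such a protocol could take; the specific conditions and thresholds (explicit user request, a confidence floor, a stall limit) are assumptions, not requirements drawn from the bill.

```python
# Hypothetical escalation policy for a 3-1-1 chatbot/voicebot.
# Every trigger and threshold here is an assumed design choice.

def should_escalate(confidence: float, user_requested_human: bool,
                    unresolved_turns: int) -> bool:
    """Decide whether the bot hands the session to a human operator."""
    if user_requested_human:        # always honor an explicit request
        return True
    if confidence < 0.6:            # assumed confidence floor
        return True
    return unresolved_turns >= 3    # assumed stall limit on unresolved turns

print(should_escalate(0.9, False, 4))  # True
```

Whatever the exact rules, making them an explicit, auditable function like this is one way a vendor could demonstrate the "clear" protocol the bill demands.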

Analysis 4

Why Relevant: The bill requires mandatory auditing and reporting on the AI program's performance.

Mechanism of Influence: The Board must submit reports evaluating the effectiveness, user satisfaction, and cost-efficiency of the AI systems to ensure accountability.

Evidence:

  • Mandates the Board to submit progress reports on the program's implementation and effectiveness, including user satisfaction and cost evaluations.
  • Board must submit a report by December 1, 2027, and another by July 1, 2028, evaluating the program's implementation and effectiveness.

Ambiguity Notes: The specific metrics for 'effectiveness' and 'user satisfaction' are not detailed, leaving the methodology of the audit to the Board's discretion.

Senate - 114 - 3-1-1 Systems - Expansion Program and Oversight Board - Establishment

Legislation ID: 264988

Bill URL: View Bill

Summary

Senate Bill 114 proposes the creation of a Maryland 3-1-1 Oversight Board to oversee the implementation and expansion of a 3-1-1 Program. This program will utilize artificial intelligence, including chatbots and voicebots, to provide community information and route calls efficiently. The bill mandates the expansion of the 3-1-1 system to all counties by a specific deadline and outlines the roles and responsibilities of the Oversight Board, including evaluating vendor proposals and ensuring adherence to best practices.

Key Sections

Key Requirements

  • The board is required to meet at least four times a year.
  • The board must consist of various appointed members including state legislators and experts in relevant fields.
  • The board must designate counties with and without existing 3-1-1 systems to participate in the program.
  • The board must establish evaluation criteria for the program.
  • The board must implement statewide marketing strategies that are accessible and culturally competent.
  • The board must submit a progress report by December 1, 2027, and a comprehensive evaluation report by July 1, 2028.
  • The program must include multilingual support and clear escalation protocols for complex requests.

Sponsors

Legislative Actions

Date Action
2026-01-14 First Reading Education, Energy, and the Environment
2025-10-17 Pre-filed

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mandates the use of artificial intelligence technologies, including chatbots and voicebots, within a government-run communication system and establishes a regulatory framework for its implementation.

Mechanism of Influence: It creates an Oversight Board responsible for evaluating AI vendor proposals and establishing criteria for the program's effectiveness, effectively regulating how AI is deployed and monitored in this context.

Evidence:

  • This program will utilize artificial intelligence, including chatbots and voicebots, to provide community information and route calls efficiently.
  • Defines key terms related to the 3-1-1 Program, including artificial intelligence, chatbot, voicebot, and 3-1-1 system.
  • The board is required to meet at least four times a year.
  • The board must establish evaluation criteria for the program.

Ambiguity Notes: While the bill defines 'artificial intelligence' and specific AI tools, the specific 'best practices' and 'evaluation criteria' for these technologies are left to the discretion of the Oversight Board, which could lead to varying standards of oversight.

Senate - 141 - Election Law - Election Misinformation, Election Disinformation, and Deepfakes

Legislation ID: 265015

Bill URL: View Bill

Summary

Senate Bill 141 establishes measures to combat election misinformation and disinformation, including the requirement for the State Administrator of Elections to act upon credible reports of such misinformation. It authorizes the State Board of Elections to pursue civil actions against entities that disseminate false information and prohibits the use of deepfakes to mislead voters. The bill outlines definitions, procedures for reporting misinformation, and penalties for violations.

Key Sections

Key Requirements

  • Requires the State Administrator to publicly correct misinformation upon receiving credible reports.
  • Allows the Administrator to seek injunctions and issue subpoenas related to misinformation.
  • Authorizes the State Board to file civil actions against entities responsible for misinformation when publication costs are incurred, with damages and attorney's fees available.
  • Requires the State Board to maintain a public portal for reporting election misinformation and disinformation, with periodic review of submissions and corrective actions as necessary.
  • Prohibits the use of deepfakes to produce materially false information with intent to mislead voters.
  • Establishes penalties for knowingly using deepfakes to mislead voters, including fines and imprisonment.

Sponsors

Legislative Actions

Date Action
2026-01-14 First Reading Education, Energy, and the Environment
2026-01-14 Hearing 1/21 at 11:00 a.m.
2025-07-16 Pre-filed

Detailed Analysis

Analysis 1

Why Relevant: The bill contains specific provisions regulating deepfakes, which are a primary application of generative artificial intelligence used to create deceptive media.

Mechanism of Influence: It prohibits the use of deepfakes to disseminate materially false information intended to mislead voters and establishes criminal penalties, including fines and imprisonment, for such use.

Evidence:

  • Prohibits the use of deepfakes to produce materially false information intended to mislead voters.
  • Individuals who violate the deepfake regulations are subject to criminal penalties, including fines and imprisonment.
  • Defines deepfakes and prohibits their use to disseminate materially false information related to elections.

Ambiguity Notes: While the bill defines 'deepfake,' the specific technical threshold for what constitutes an AI-generated deepfake versus traditional digital manipulation may require further clarification in practice.

Senate - 8 - Criminal Law - Identity Fraud - Artificial Intelligence and Deepfake Representations

Legislation ID: 264505

Bill URL: View Bill

Summary

Senate Bill 8 seeks to enhance protections against identity fraud by prohibiting the unauthorized use of personal identifying information and the malicious use of artificial intelligence or deepfake representations. It outlines civil actions for victims and stipulates penalties for violators, thereby aiming to safeguard individuals from harm caused by identity theft and fraudulent representations.

Key Sections

Key Requirements

  • Prohibits the use of personal identifying information without consent to cause harm, including malicious use of such information.
  • Prohibits the use of AI or deepfake representations to impersonate, defraud, or mislead individuals.
  • Establishes felony and misdemeanor classifications based on the value of the benefit involved, with imprisonment terms and fines for different levels of offenses.
  • Mandates restitution for reasonable costs incurred by victims.
  • Allows victims to seek civil remedies through the courts; enables courts to issue injunctions and other appropriate relief.
  • Allows state police and other law enforcement to investigate identity fraud cases statewide.

Sponsors

Legislative Actions

Date Action
2026-01-14 First Reading Judicial Proceedings
2026-01-13 Hearing 1/22 at 1:00 p.m.
2025-08-26 Pre-filed

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly regulates the use of artificial intelligence and deepfake technology in the context of identity theft and fraudulent representation.

Mechanism of Influence: It creates a legal prohibition against using AI to impersonate or mislead individuals and provides a framework for civil litigation and criminal prosecution against those who use these technologies maliciously.

Evidence:

  • The bill prohibits individuals from... using AI or deepfake representations for fraudulent purposes.
  • Prohibits the use of AI or deepfake technology to impersonate or mislead individuals.
  • This section defines key terms related to identity fraud, including artificial intelligence, deepfake representation

Ambiguity Notes: The practical scope of the law will depend on the specific technical definitions of 'artificial intelligence' and 'deepfake representation' adopted in the bill's text.


Massachusetts

Index of Bills

House - 4616 - An Act improving the health insurance prior authorization process

Legislation ID: 241812

Bill URL: View Bill

Summary

This bill amends various provisions of the Massachusetts General Laws to establish clear requirements for health insurance carriers regarding prior authorization processes. It mandates that insurers publicly disclose items and services requiring prior authorization, report data on authorization requests, and ensure that decisions are based on evidence-based criteria. Additionally, it sets guidelines for the use of artificial intelligence in utilization reviews and protects patients from retrospective denials of previously authorized services.

Key Sections

Key Requirements

  • AI tools used in utilization review must consider individual patient data and not solely group datasets.
  • Annual reporting of prior authorization data to the division of insurance.
  • Compliance with federal standards for electronic health information exchange.
  • Coverage for stable patients on a treatment must continue for at least 90 days after enrollment.
  • Criteria for utilization review must be evidence-based and developed with input from physicians.
  • Data must include approval and denial statistics, processing times, and other relevant metrics.
  • Decisions affecting patient care must be made by licensed healthcare providers.
  • Establishment of an application programming interface for automated prior authorization processing.
  • Payment for medically necessary services cannot be denied based on administrative defects if authorization was properly secured.
  • Preauthorization for ongoing treatment is valid for the duration of treatment or at least one year.
  • Prohibits prior authorization requests for unlisted items, services, or medications.
  • Prohibits retrospective denial of previously authorized services unless based on fraudulent information.
  • Recoupment of payments must occur within one year of payment.
  • Requests deemed granted if not responded to within specified timeframes.
  • Requires carriers to publicly list prior authorization requirements on their websites.
  • Standard prior authorization requests must be responded to within set time limits.
  • Updates to preauthorization requirements must be publicly posted and communicated to affected individuals.

Sponsors

Legislative Actions

Date Action
2025-12-08 Reporting date extended to Wednesday, March 18, 2026
2025-10-20 New draft of H1136
2025-10-20 Reported favorably by committee and referred to the committee on Health Care Financing
2025-10-20 Reported from the committee on Financial Services

Detailed Analysis

Analysis 1

Why Relevant: The bill contains a dedicated section regulating the use of artificial intelligence in medical utilization reviews.

Mechanism of Influence: It mandates that AI tools used for insurance approvals must incorporate individual patient data and prohibits these tools from replacing the final decision-making authority of human healthcare providers.

Evidence:

  • Strict guidelines govern the use of artificial intelligence in utilization management, ensuring it does not replace provider decision-making or discriminate against patients.
  • AI tools used in utilization review must consider individual patient data and not solely group datasets.
  • Decisions affecting patient care must be made by licensed healthcare providers.

Ambiguity Notes: The term 'artificial intelligence' is used broadly; the specific technical definitions or thresholds for what constitutes an AI tool in this context may require further regulatory clarification.
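The two constraints in this analysis, individual-level inputs and a human final decision, suggest a workflow where the AI output is strictly advisory. The sketch below is a minimal illustration of that structure under assumed field names and an assumed scoring rule; it is not the bill's mechanism, which is stated in legal rather than technical terms.

```python
# Hypothetical human-in-the-loop utilization review, per the H.4616 constraints:
# the AI considers the individual patient record, but only a licensed provider
# can record the final care decision. All names here are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    advisory: str                          # AI output is advisory only
    final: Optional[str] = None
    reviewer_license: Optional[str] = None

def ai_advisory(patient: dict) -> Review:
    # Considers the individual record, not solely group datasets.
    return Review(advisory="approve" if patient.get("meets_criteria") else "refer")

def finalize(review: Review, decision: str, license_no: str) -> Review:
    # Decisions affecting patient care must come from a licensed provider.
    if not license_no:
        raise ValueError("final decision requires a licensed reviewer")
    review.final, review.reviewer_license = decision, license_no
    return review

rec = finalize(ai_advisory({"meets_criteria": True}), "approve", "MD-12345")
print(rec.final)  # approve
```

Structuring the pipeline so the final determination cannot be persisted without a reviewer credential is one way an insurer could evidence compliance with the human-decision requirement.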

House - 4746 - An Act establishing the Massachusetts consumer data privacy act

Legislation ID: 241832

Bill URL: View Bill

Summary

The Massachusetts Consumer Data Privacy Act seeks to protect the personal data of residents by defining key terms and outlining the responsibilities of data controllers. It emphasizes the necessity of obtaining affirmative consent from consumers before collecting or processing their personal data and sets out specific requirements for transparency and consumer rights. The act also addresses various types of personal data, including biometric, genetic, and health-related information, and establishes guidelines for the sale and processing of such data.

Key Sections

Key Requirements

  • Consent must be a clear affirmative act.
  • Consumers must be informed of their rights related to personal data.
  • Consumers must provide affirmative consent for the sale of their personal data.
  • Controllers must disclose when personal data is sold.
  • Controllers must provide mechanisms for consumers to exercise these rights.
  • Options to refuse consent must be as prominent as options to give consent.
  • Requests for consent must describe the processing purpose and categories of personal data.

Sponsors

Legislative Actions

Date Action
2025-11-17 Bill reported favorably by committee and referred to the committee on House Ways and Means
2025-11-17 New draft of H78, H80, H86, H96, H103 and H104
2025-11-17 Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity

Detailed Analysis

Analysis 1

Why Relevant: The act regulates the 'processing' of personal data, which is the foundational activity for training and operating artificial intelligence models.

Mechanism of Influence: AI developers and companies using AI systems would be required to obtain explicit affirmative consent from Massachusetts residents before using their personal data for model training or algorithmic processing.

Evidence:

  • necessity of obtaining affirmative consent from consumers before collecting or processing their personal data
  • Consent must be a clear affirmative act.
  • Requests for consent must describe the processing purpose and categories of personal data.

Ambiguity Notes: The term 'processing' is broad and typically encompasses the computational analysis and data ingestion required for machine learning, though the abstract does not explicitly name 'machine learning' or 'AI'.

Analysis 2

Why Relevant: The inclusion of biometric data regulation directly impacts AI-driven technologies such as facial recognition, voice analysis, and gait detection.

Mechanism of Influence: Companies deploying AI for biometric identification or analysis must adhere to specific guidelines for the sale and processing of such data, potentially requiring audits or specific disclosures to ensure compliance.

Evidence:

  • addresses various types of personal data, including biometric, genetic, and health-related information
  • establishes guidelines for the sale and processing of such data

Ambiguity Notes: While it mentions 'guidelines for the sale and processing,' it does not specify if these guidelines include technical audits of the AI algorithms themselves.

Analysis 3

Why Relevant: Consumer rights to opt-out and delete data create a 'right to be forgotten' that complicates the persistence of data within trained AI weights.

Mechanism of Influence: If a consumer exercises their right to delete personal data, AI companies may need to evaluate if that data must be removed from training sets or if the model needs to be retrained (machine unlearning).

Evidence:

  • rights of consumers regarding their personal data, including the right to access, delete, and opt-out of the sale of their data

Ambiguity Notes: The act does not clarify if the 'right to delete' extends to data already vectorized or transformed into neural network weights.
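
The retraining question can be made concrete with a short sketch. Everything here is hypothetical (the act defines no such mechanism); it only illustrates why deleting a record from live storage and purging its influence from trained models are separate problems.

```python
def handle_deletion_request(record_id: str,
                            live_store: set,
                            training_manifests: dict) -> dict:
    """Hypothetical sketch: deleting from live storage is straightforward;
    the open question under the act is what happens when the record also
    appears in a dataset used to train a deployed model."""
    live_store.discard(record_id)
    # Any model whose training manifest contains the record may need
    # retraining or machine unlearning -- the act does not say which.
    affected_models = [model for model, manifest in training_manifests.items()
                       if record_id in manifest]
    return {
        "deleted_from_live_store": record_id not in live_store,
        "models_needing_review": affected_models,
    }
```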

House - 83 - An Act establishing a special legislative commission to study load growth due to AI and data centers

Legislation ID: 89120

Bill URL: View Bill

Summary

This bill proposes the creation of a legislative commission tasked with investigating the surges in electricity demand caused by data centers that support high-performance computing and AI, as well as the effects of industrial growth and electrification in transportation and buildings. The commission will include various appointed members and is required to submit a report with recommendations within one year of the bill's passage.

Key Sections

Key Requirements

  • Includes co-chairs from the joint committee on telecommunications, utilities, and energy.
  • Involves the commissioner of public utilities, the secretary of energy and environmental affairs, and the secretary of administration and finance.
  • One member appointed by the speaker of the house, one by the president of the senate, and one by each minority leader.
  • The commission must submit its report no later than one year after the passage of the act.
  • The commission's study must include surges in electricity demand driven by data centers for high-performance computing and AI.
  • The study must also consider industrial growth and electrification of transportation and buildings.

Sponsors

Legislative Actions

Date Action
2026-01-08 Discharged to the committee on House Rules
2025-12-24 Bill reported favorably by committee and referred to the committee on Rules of the two branches, acting concurrently
2025-08-28 Hearing rescheduled to 09/11/2025 from 01:00 PM-05:00 PM in A-2 and Virtual; hearing updated to include Virtual
2025-02-27 Referred to the committee on Advanced Information Technology, the Internet and Cybersecurity
2025-02-27 Senate concurred

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically targets the infrastructure and energy consumption associated with the operation of artificial intelligence systems.

Mechanism of Influence: The commission's findings and subsequent report could lead to legislative recommendations or regulations governing the expansion, location, and energy efficiency requirements of AI-related data centers.

Evidence:

  • investigating the surges in electricity demand caused by data centers that support high-performance computing and AI
  • A special legislative commission will be established to study load growth due to AI and data centers.

Ambiguity Notes: While the bill focuses on the energy impact of AI rather than the algorithmic content or safety, it represents a form of indirect oversight over the physical requirements for AI development.

Senate - 2630 - promoting economic development with emerging artificial intelligence models and safety

Legislation ID: 241780

Bill URL: View Bill

Summary

This bill establishes the Massachusetts Artificial Intelligence Innovation Trust Fund to support companies developing AI models and promotes entrepreneurship in AI through grants and partnerships. It also introduces the Transparency in Frontier Artificial Intelligence Act, which sets safety protocols, risk assessments, and reporting requirements for large frontier AI developers to ensure public safety and accountability.

Key Sections

Key Requirements

  • Aims for clear definitions that allow for early determination of developer status before training or deploying models.
  • Allows for grants to companies developing AI in key sectors.
  • Allows for the promulgation, amendment, or rescission of regulations as necessary.
  • Civil penalties can only be pursued by the Attorney General.
  • Consortium must include representatives from various sectors, including academia and labor organizations.
  • Encourages partnerships for entrepreneurship programs.
  • Establishes penalties up to $1,000,000 for violations of the chapter.
  • Excludes certain foreseeable risks from the definition.
  • Large frontier developers must create and publish a frontier AI framework.
  • Mandates annual reporting starting January 1, 2027.
  • MassCompute must operate within public institutions where possible.
  • Must conduct and publish assessments of catastrophic risks.
  • Prohibits contracts that prevent employees from disclosing safety risks.
  • Prohibits inclusion of information that could compromise trade secrets or public safety.
  • Requires developers to inform employees of their rights and provide anonymous reporting options.
  • Requires reporting of critical safety incidents to the attorney general.
  • Requires the Attorney General to consider international standards and stakeholder input when making recommendations.
  • Requires the secretary of economic development to manage the fund.
  • Specifies thresholds for catastrophic risk, including potential loss of life and property damage.

Sponsors

Legislative Actions

Date Action
2025-10-16 Bill reported favorably by committee and referred to the committee on Senate Ways and Means
2025-10-16 New draft of S37
2025-10-16 Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity

Detailed Analysis

Analysis 1

Why Relevant: The Transparency in Frontier Artificial Intelligence Act directly regulates large-scale AI models.

Mechanism of Influence: It requires developers to create and publish frontier AI frameworks and conduct assessments of catastrophic risks, effectively mandating a form of internal audit and public disclosure.

Evidence:

  • Large frontier developers must create and publish a frontier AI framework.
  • Must conduct and publish assessments of catastrophic risks.

Ambiguity Notes: The definition of 'large frontier developer' and 'frontier AI framework' may require further regulatory clarification by the Attorney General.

Analysis 2

Why Relevant: The bill establishes mandatory reporting and government oversight mechanisms.

Mechanism of Influence: Developers are required to report critical safety incidents to the Attorney General, and the Attorney General is tasked with producing annual reports on AI safety risks.

Evidence:

  • Requires reporting of critical safety incidents to the attorney general.
  • The Attorney General must produce annual reports on anonymized employee reports related to frontier AI development.

Ambiguity Notes: The reporting requirements for 'critical safety incidents' depend on the specific thresholds defined for catastrophic risk.
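
One way to see how threshold-dependent the reporting duty is: a reporting check is trivial to implement once thresholds exist, but the abstract supplies no numbers. The values below are placeholders, not from the bill.

```python
# Placeholder thresholds -- the bill specifies thresholds for catastrophic
# risk (potential loss of life, property damage) but the abstract gives no
# figures, so these numbers are purely illustrative.
LOSS_OF_LIFE_THRESHOLD = 1
PROPERTY_DAMAGE_THRESHOLD_USD = 1_000_000_000

def is_reportable_incident(potential_deaths: int,
                           property_damage_usd: float) -> bool:
    """Return True if an incident would meet a catastrophic-risk threshold
    and so require a report to the Attorney General."""
    return (potential_deaths >= LOSS_OF_LIFE_THRESHOLD
            or property_damage_usd >= PROPERTY_DAMAGE_THRESHOLD_USD)
```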

Analysis 3

Why Relevant: The legislation includes enforcement mechanisms for AI-related regulations.

Mechanism of Influence: It empowers the Attorney General to pursue civil penalties of up to $1,000,000 for non-compliance with the AI safety and reporting standards.

Evidence:

  • Large frontier developers face civil penalties for failing to comply with reporting and operational requirements.
  • Establishes penalties up to $1,000,000 for violations of the chapter.

Ambiguity Notes: None

Analysis 4

Why Relevant: The bill provides protections for whistleblowers within AI development companies.

Mechanism of Influence: It prohibits NDAs or contracts that prevent employees from disclosing safety risks to the government, ensuring a channel for oversight regarding internal AI risks.

Evidence:

  • Protects employees who report risks associated with frontier AI development from retaliation.
  • Prohibits contracts that prevent employees from disclosing safety risks.

Ambiguity Notes: None

Senate - 2631 - to protect against election misinformation

Legislation ID: 241778

Bill URL: View Bill

Summary

This legislation seeks to amend Chapter 56 of the General Laws by introducing a new section that specifically addresses election misinformation. It defines key terms related to artificial intelligence and establishes prohibitions against distributing materially deceptive election-related communications within 90 days of an election. The bill also outlines the legal recourse available to individuals affected by such deceptive communications, while providing exceptions for certain media and content types.

Key Sections

Key Requirements

  • Allows for lawsuits to recover damages and attorney's fees.
  • Applies to various entities including candidates, political parties, and committees.
  • Exempts bona fide news coverage from the prohibitions.
  • Exempts satire or parody communications.
  • Individuals may seek injunctive relief against deceptive communications.
  • Prohibits distribution of materially deceptive communications related to elections.

Sponsors

Legislative Actions

Date Action
2025-10-16 Bill reported favorably by committee and referred to the committee on Senate Ways and Means
2025-10-16 New draft of S44
2025-10-16 Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity

Detailed Analysis

Analysis 1

Why Relevant: The legislation explicitly defines and regulates artificial intelligence and synthetic media to prevent election interference.

Mechanism of Influence: It imposes a legal prohibition on distributing AI-generated deceptive content intended to mislead voters and provides a cause of action for individuals to seek damages or injunctions.

Evidence:

  • defines key terms related to artificial intelligence
  • prohibitions against distributing materially deceptive election-related communications
  • synthetic media

Ambiguity Notes: The term 'materially deceptive' and the specific thresholds for what constitutes 'intent to mislead' may be subject to judicial interpretation.
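
The 90-day restriction itself is mechanical enough to sketch. The assumption that the window runs from 90 days before the election through election day is ours, not the bill's.

```python
from datetime import date, timedelta

def within_prohibition_window(distribution_date: date,
                              election_date: date) -> bool:
    """Sketch of the bill's timing rule: distribution of materially
    deceptive election communications is prohibited within 90 days of
    an election (window boundaries assumed, not specified)."""
    window_start = election_date - timedelta(days=90)
    return window_start <= distribution_date <= election_date
```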

Senate - 2632 - relative to the use of artificial intelligence and other software tools in healthcare decision-making

Legislation ID: 241777

Bill URL: View Bill

Summary

This legislation amends Chapter 112 of the General Laws to introduce guidelines for the use of artificial intelligence in therapy and psychotherapy services. It defines key terms, establishes requirements for consent, and outlines the permissible use of AI tools by licensed professionals. The bill also addresses the use of AI in utilization review by insurance carriers, ensuring compliance with state and federal laws.

Key Sections

Key Requirements

  • AI cannot deny or modify services based solely on medical necessity.
  • AI cannot directly interact with clients in therapeutic communication.
  • AI cannot generate treatment plans without professional review.
  • AI cannot make independent therapeutic decisions.
  • AI must consider individual clinical history and circumstances.
  • All records and communications must be kept confidential.
  • The patient must be informed in writing about the use of AI and its specific purpose.
  • The patient must provide explicit consent for the use of AI.
  • Violators may face civil penalties up to $10,000.

Sponsors

Legislative Actions

Date Action
2025-10-16 Bill reported favorably by committee and referred to the committee on Health Care Financing
2025-10-16 New draft of S46
2025-10-16 Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes mandatory disclosure and consent requirements for the use of AI in a professional setting.

Mechanism of Influence: Licensed professionals are legally required to provide written notification to patients regarding the purpose of AI tools and must secure explicit consent prior to implementation.

Evidence:

  • The patient must be informed in writing about the use of AI and its specific purpose.
  • The patient must provide explicit consent for the use of AI.

Ambiguity Notes: The bill does not specify the exact format of the written disclosure or the technical standards for the AI tools being used.

Analysis 2

Why Relevant: The legislation imposes strict prohibitions on autonomous AI functionality in clinical decision-making.

Mechanism of Influence: It prevents AI from acting as a primary actor in therapy by banning direct interaction with clients and requiring human professional review for all treatment plans and decisions.

Evidence:

  • AI cannot directly interact with clients in therapeutic communication.
  • AI cannot generate treatment plans without professional review.
  • AI cannot make independent therapeutic decisions.

Ambiguity Notes: The term 'therapeutic communication' may require further legal definition to determine if it includes administrative or scheduling interactions.
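
The human-review requirement amounts to a gate between AI drafting and clinical release. A minimal sketch, with hypothetical names (the bill prescribes no implementation):

```python
from typing import Optional

def release_treatment_plan(ai_draft: str, reviewer: Optional[str]) -> str:
    """Under the bill, an AI tool may draft a treatment plan, but a
    licensed professional must review it before it takes effect."""
    if reviewer is None:
        raise PermissionError(
            "AI-generated plan requires review by a licensed professional")
    return f"{ai_draft} (reviewed by {reviewer})"
```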

Analysis 3

Why Relevant: The bill regulates the use of AI algorithms in insurance and utilization review processes.

Mechanism of Influence: It mandates that AI-driven insurance reviews cannot rely solely on medical necessity algorithms or group data, forcing a human-centric review of individual clinical circumstances.

Evidence:

  • AI cannot deny or modify services based solely on medical necessity.
  • AI must consider individual clinical history and circumstances.

Ambiguity Notes: It is unclear how 'individual clinical history' must be weighted against AI-generated group data in practice.

↑ Back to Table of Contents

Michigan

Index of Bills

House - 5357 - Communications: internet; age-appropriate design code act; create. Creates new act. Last Action: bill electronically reproduced 12/11/2025

Legislation ID: 247159

Bill URL: View Bill

Summary

This bill, known as the Age-Appropriate Design Code Act, outlines regulations for online services that are accessed by minors. It mandates that businesses implement specific privacy settings, restrict certain data practices, and ensure that minors have clear access to privacy information. The bill also aims to prevent harmful practices that could exploit minors online and establishes civil sanctions for non-compliance.

Key Sections

Key Requirements

  • Attorney General must provide written notice of violations and a 90-day cure period.
  • Creates a fund in the state treasury for enforcement of the act.
  • Default settings must protect minors' privacy.
  • Does not create a private right of action under this act.
  • Does not impose liability inconsistent with 47 USC 230.
  • Establish mechanisms for reporting harms.
  • Funds collected from civil fines must be deposited into this fund.
  • No single setting should make all privacy settings less protective.
  • Only collect minimum necessary personal information.
  • Penalties for violations range from $2,500 to $7,500 per affected minor.
  • Prohibits processing precise geolocation information without clear consent.
  • Prohibits profiling of minors unless necessary for the service requested.
  • Prohibits selling personal information of minors.
  • Requires businesses to collect only the minimum amount of personal information necessary for the service.
  • Requires mechanisms for minors and parents to report harms experienced online.

Sponsors

Legislative Actions

Date Action
2025-12-16 bill electronically reproduced 12/11/2025
2025-12-11 introduced by Rep. Carol Glanville
2025-12-11 read a first time
2025-12-11 referred to Committee on Regulatory Reform

Detailed Analysis

Analysis 1

Why Relevant: The act specifically prohibits the profiling of minors, which is a primary application of artificial intelligence and machine learning in digital services.

Mechanism of Influence: Businesses using AI-driven recommendation engines or behavioral analysis tools would be restricted from applying these technologies to minors unless they can demonstrate it is necessary for the requested service.

Evidence:

  • Prohibits profiling of minors unless necessary for the service requested.

Ambiguity Notes: While the summary does not explicitly name 'Artificial Intelligence', the definition of profiling typically encompasses automated processing of personal data to evaluate or predict aspects of a person's behavior.

Analysis 2

Why Relevant: The legislation requires online services to implement safety-by-design principles, which directly impacts how algorithmic systems are deployed for younger audiences.

Mechanism of Influence: The requirement to offer the 'highest level of privacy and safety' by default forces a redesign of algorithmic engagement features that might otherwise exploit minor vulnerabilities.

Evidence:

  • Businesses must configure all default privacy settings for minors to offer the highest level of privacy and safety
  • The bill also aims to prevent harmful practices that could exploit minors online

Ambiguity Notes: The act focuses on the 'online service' as a whole, which serves as the delivery mechanism for most consumer-facing AI.

Senate - 620 - Traffic control: driver license; regulation of a relying party related to a mobile driver license or identification card; provide for. Creates new act. Last Action: REFERRED TO COMMITTEE ON TRANSPORTATION AND INFRASTRUCTURE

Legislation ID: 264793

Bill URL: View Bill

Summary

Senate Bill No. 620 seeks to provide clear guidelines for relying parties that use mobile licenses for identity verification. It defines key terms, outlines the responsibilities of relying parties when handling mobile licenses, and sets restrictions on data collection and device access.

Key Sections

Key Requirements

  • Compliance with applicable privacy laws is mandatory.
  • Consent must be obtained from holders before data retention.
  • Holders must be informed about the retention of data elements.
  • Mobile licenses must be cryptographically authenticated before acceptance.
  • Presentation of a mobile license cannot be mandated for transaction completion.
  • Prohibits asking for consent to search the mobile device.
  • Prohibits asking holders to give up possession of their mobile device.
  • Prohibits requiring mobile licenses for transaction completion.
  • Relying parties cannot ask holders to give up possession of their mobile device.
  • Relying parties cannot request consent to search the mobile device.
  • Requests must be limited to necessary data elements for the transaction.
  • Requires adherence to privacy laws and regulations.
  • Requires cryptographic authentication of mobile licenses before acceptance.
  • Requires informing holders about data use and retention.
  • Requires obtaining consent from holders for data release and retention.
  • Requires relying parties to request only necessary data elements.

Sponsors

Legislative Actions

Date Action
2025-10-22 INTRODUCED BY SENATOR ERIKA GEISS
2025-10-22 REFERRED TO COMMITTEE ON TRANSPORTATION AND INFRASTRUCTURE

Detailed Analysis

Analysis 1

Why Relevant: The bill governs the protocols for digital identity and age verification, which are foundational components for regulating access to AI services and ensuring compliance with age-restricted usage policies.

Mechanism of Influence: It mandates that any entity verifying identity via mobile licenses must use cryptographic authentication and limit data collection to only what is necessary, directly affecting how platforms implement age-gating or identity-based access controls.

Evidence:

  • Senate Bill No. 620 seeks to provide clear guidelines for relying parties that use mobile licenses for identity verification.
  • Requires cryptographic authentication of mobile licenses before acceptance.
  • Requires obtaining consent from holders for data release and retention.
  • Requires relying parties to request only necessary data elements.

Ambiguity Notes: The legislation is technology-neutral regarding the 'relying party,' meaning it applies to any service provider using digital IDs, but it does not explicitly name AI developers or automated decision-making systems as a specific category.
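
A relying party's obligations under the bill reduce to two checks: data minimization at request time, and cryptographic authentication plus holder consent at acceptance time. The data elements and function names below are illustrative; production mobile licenses follow standards such as ISO/IEC 18013-5, which this sketch does not implement.

```python
# Hypothetical: the only element this transaction actually needs.
NECESSARY_FOR_AGE_CHECK = {"age_over_21"}

def build_request(requested_elements: set) -> set:
    """Relying parties must request only the data elements necessary
    for the transaction."""
    excess = requested_elements - NECESSARY_FOR_AGE_CHECK
    if excess:
        raise ValueError(f"unnecessary data elements requested: {sorted(excess)}")
    return requested_elements

def accept_license(signature_valid: bool, holder_consented: bool) -> bool:
    """A mobile license must be cryptographically authenticated before
    acceptance, and data may be released or retained only with the
    holder's consent. The signature check itself is out of scope here."""
    return signature_valid and holder_consented
```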

Senate - 760 - Trade: business regulation; availability of companion chatbots to minors; prohibit. Creates new act. Last Action: REFERRED TO COMMITTEE ON FINANCE, INSURANCE, AND CONSUMER PROTECTION

Legislation ID: 266150

Bill URL: View Bill

Summary

Senate Bill No. 760, known as the Leading Ethical AI Development for Kids Act, establishes regulations for operators of companion chatbots, especially concerning their availability to minors. The bill outlines specific prohibitions on harmful interactions and sets forth civil penalties for violations, emphasizing the protection of minors in digital environments.

Key Sections

Key Requirements

  • Chatbots must not engage in sexually explicit interactions with minors.
  • Minors or their guardians can sue for actual and punitive damages.
  • Operators face a civil fine of $25,000 per violation.
  • Operators must ensure chatbots do not encourage self-harm or illegal activities.
  • Operators must prioritize the safety of minors over engagement metrics.

Sponsors

Legislative Actions

Date Action
2025-12-17 INTRODUCED BY SENATOR DAYNA POLEHANKI
2025-12-17 REFERRED TO COMMITTEE ON FINANCE, INSURANCE, AND CONSUMER PROTECTION

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically targets 'companion chatbots,' which are a specific application of generative artificial intelligence technology.

Mechanism of Influence: It imposes strict content restrictions and safety requirements on AI operators, effectively regulating the development and deployment of AI models intended for or accessible by minors.

Evidence:

  • establishes regulations for operators of companion chatbots
  • Operators are prohibited from making companion chatbots available to covered minors if the chatbots can potentially encourage harmful behaviors

Ambiguity Notes: The practical impact depends on the legal definition of 'companion chatbot'; if defined broadly, it could capture a wide range of LLM-based applications.

Analysis 2

Why Relevant: The legislation addresses the user's interest in age-specific regulations and safety oversight for AI usage.

Mechanism of Influence: By restricting availability to 'covered minors' based on the risk of harmful outputs, it necessitates that AI developers implement age verification or robust safety filtering to avoid significant civil penalties.

Evidence:

  • especially concerning their availability to minors
  • Operators must prioritize the safety of minors over engagement metrics.

Ambiguity Notes: The bill does not explicitly detail the technical method for age verification, leaving the implementation details to the operators or future regulatory guidance.
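
In practice, compliance would likely combine age determination with output filtering. The sketch below is hypothetical (category labels and function names are ours, and the content classifier that produces the flags is out of scope):

```python
# Categories the bill prohibits for covered minors (labels illustrative).
PROHIBITED_FOR_MINORS = {"sexually explicit",
                         "self-harm encouragement",
                         "illegal activity encouragement"}

def gate_reply(user_is_minor: bool, reply_flags: set) -> str:
    """Block a chatbot reply to a covered minor if it was flagged for any
    prohibited category. The bill regulates only interactions with
    minors, so adult sessions pass through this gate unchanged."""
    if user_is_minor and reply_flags & PROHIBITED_FOR_MINORS:
        return "[blocked: content prohibited for minors]"
    return "[reply delivered]"
```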

↑ Back to Table of Contents

Minnesota

Index of Bills

house - 1142 - Use of tenant screening software that uses nonpublic competitor data to set rent prohibited, and use of software that is biased against protected classes prohibited.

Legislation ID: 32376

Bill URL: View Bill

Summary

This bill prohibits landlords from using tenant screening software that relies on nonpublic competitor data to determine rent prices, as well as software that exhibits bias against protected classes. It amends existing statutes to include these prohibitions and establishes penalties for violations.

Key Sections

Key Requirements

  • Attorney General can investigate violations.
  • Defines algorithm and artificial intelligence in this context.
  • Defines what constitutes an algorithmic device and excludes certain data sources.
  • Individuals can sue for damages of at least $1,000 for violations.
  • Landlords cannot employ algorithms that use nonpublic competitor data for rent setting.
  • Landlords must not use biased screening tools that impact protected classes.

Sponsors

Legislative Actions

Date Action
2025-02-24 Author added Jones
2025-02-20 Authors added Sencer-Mura, Norris
2025-02-19 Introduction and first reading, referred to Housing Finance and Policy

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly regulates the use of artificial intelligence and algorithms in the context of tenant background screening.

Mechanism of Influence: It creates a legal prohibition against using AI tools that result in biased outcomes for protected classes and establishes a statutory definition for AI within this regulatory framework.

Evidence:

  • This provision prohibits the use of algorithms or AI software for tenant background screening if they disproportionately affect protected classes.
  • Defines algorithm and artificial intelligence in this context.

Ambiguity Notes: The term 'disproportionately affect' may require further judicial or regulatory clarification to establish the specific metrics for determining bias.
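
Because the statute does not define "disproportionately affect," courts or regulators would need to settle on a metric. One common (non-statutory) heuristic is the four-fifths rule from employment law, sketched here purely for illustration:

```python
def passes_four_fifths_rule(selection_rates: dict) -> bool:
    """Four-fifths rule (a heuristic, not the bill's standard): the
    approval rate for every group should be at least 80% of the rate
    for the most-approved group."""
    top = max(selection_rates.values())
    return all(rate >= 0.8 * top for rate in selection_rates.values())
```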

Analysis 2

Why Relevant: The legislation targets the use of algorithmic devices for price-fixing in the rental market.

Mechanism of Influence: It restricts the types of data (specifically nonpublic competitor data) that can be fed into algorithms used to determine rental prices, effectively regulating the operational inputs of automated pricing systems.

Evidence:

  • This provision prohibits landlords from using algorithmic devices that incorporate nonpublic competitor data to set rental prices.
  • Landlords cannot employ algorithms that use nonpublic competitor data for rent setting.

Ambiguity Notes: The distinction between 'nonpublic competitor data' and 'publicly available market data' could be a point of contention in enforcement.
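
Operationally, the input restriction amounts to validating the provenance of every data feed before it reaches the pricing model. The source labels below are illustrative, not statutory categories:

```python
# Hypothetical provenance labels for rent-pricing inputs.
ALLOWED_SOURCES = {"public_listing", "own_portfolio", "government_index"}

def validate_pricing_inputs(input_sources: set) -> set:
    """Reject any data feed not on the allowed list -- in particular,
    nonpublic competitor data, which the bill bars from rent-setting
    algorithms."""
    disallowed = input_sources - ALLOWED_SOURCES
    if disallowed:
        raise ValueError(f"disallowed pricing inputs: {sorted(disallowed)}")
    return input_sources
```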

house - 1838 - Health insurance; use of artificial intelligence prohibited in the utilization review process.

Legislation ID: 53415

Bill URL: View Bill

Summary

This bill amends Minnesota Statutes to include a definition of artificial intelligence and explicitly prohibits its use in utilization review processes by health insurance organizations. The intent is to maintain human involvement in critical evaluations regarding healthcare services, thereby safeguarding the quality and reliability of healthcare decisions.

Key Sections

Key Requirements

  • Prohibits utilization review organizations from using artificial intelligence in their processes.

Sponsors

Legislative Actions

Date Action
2025-03-03 Introduction and first reading, referred to Commerce Finance and Policy

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a formal legal definition of artificial intelligence within the state's statutes.

Mechanism of Influence: By defining AI, the bill sets the legal boundaries for what technologies are subject to the subsequent prohibitions and regulations.

Evidence:

  • This provision defines artificial intelligence as per the United States Code, indicating the legal framework for understanding the term within the context of the bill.

Ambiguity Notes: The definition relies on the United States Code, which provides a specific federal framework but may be subject to future federal amendments.

Analysis 2

Why Relevant: The bill directly regulates the application of AI by prohibiting its use in specific high-stakes decision-making processes.

Mechanism of Influence: It mandates that utilization reviews, evaluations, and appeals must be conducted by humans rather than automated AI systems, effectively banning AI from this sector of healthcare administration.

Evidence:

  • This provision establishes a clear prohibition against the use of artificial intelligence in any aspect of the utilization review process, including reviews, evaluations, determinations, or appeals by utilization review organizations.

Ambiguity Notes: The phrase 'any aspect' is broad and could be interpreted to include not just final determinations but also administrative or preparatory tasks involving AI.

house - 2452 - Artificial intelligence use to dynamically set product prices prohibited.

Legislation ID: 91049

Bill URL: View Bill

Summary

This bill introduces a prohibition against the use of artificial intelligence to adjust product prices in real time. It defines artificial intelligence and outlines the prohibited practices related to pricing strategies that could unfairly manipulate consumer behavior. The enforcement of this regulation is designated to the attorney general under existing consumer protection laws.

Key Sections

Key Requirements

  • Prohibits the use of artificial intelligence for real-time price adjustments based on market demands, competitor prices, inventory levels, or customer behavior.

Sponsors

Legislative Actions

Date Action
2025-03-20 Author added Kotyza-Witthuhn
2025-03-17 Introduction and first reading, referred to Commerce Finance and Policy

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates a specific commercial application of artificial intelligence and establishes legal boundaries for its use in pricing strategies.

Mechanism of Influence: It creates a statutory prohibition against AI-driven real-time price adjustments, subjecting violators to enforcement by the attorney general under consumer protection laws.

Evidence:

  • This bill introduces a prohibition against the use of artificial intelligence to adjust product prices in real time.
  • Prohibits the use of artificial intelligence for real-time price adjustments based on market demands, competitor prices, inventory levels, or customer behavior.

Ambiguity Notes: The specific definition of 'artificial intelligence' used in the bill is referenced but not detailed in the text, which could impact the scope of technologies covered.

house - 2500 - Algorithm and AI use prohibited during health insurance prior authorization request review.

Legislation ID: 90999

Bill URL: View Bill

Summary

This bill amends Minnesota Statutes by adding a subdivision that explicitly prohibits health carriers from using algorithms or artificial intelligence in the approval or denial process of prior authorization requests. This measure aims to safeguard the integrity of the healthcare authorization process by preventing automated decision-making that could negatively impact patients.

Key Sections

Key Requirements

  • Health carriers must make prior authorization decisions without the use of algorithms or AI.

Sponsors

Legislative Actions

Date Action
2025-03-24 Author added Rehrauer
2025-03-17 Introduction and first reading, referred to Commerce Finance and Policy

Detailed Analysis

Analysis 1

Why Relevant: The legislation explicitly targets and restricts the application of artificial intelligence and algorithmic decision-making within the healthcare insurance sector.

Mechanism of Influence: By banning these technologies for prior authorization, the law mandates that health carriers must use non-AI methods to process requests, thereby preventing automated denials and requiring human-centric oversight.

Evidence:

  • This provision prohibits health carriers from using algorithms or artificial intelligence programs to make decisions regarding prior authorization requests.
  • Health carriers must make prior authorization decisions without the use of algorithms or AI.

Ambiguity Notes: The bill uses broad terms like 'algorithms' and 'artificial intelligence programs' without providing specific technical definitions, which could potentially encompass a wide range of data processing software.

house - 2700 - Minnesota Consumer Data Privacy Act modified to make consumer health data a form of sensitive data, and additional protections added for sensitive data.

Legislation ID: 90797

Bill URL: View Bill

Summary

This bill seeks to amend the Minnesota Consumer Data Privacy Act by defining health data as a form of sensitive data and introducing stricter regulations surrounding the processing of such data. It aims to ensure that consumers have greater control over their personal health information and that their privacy is adequately protected. The bill includes definitions for key terms related to data privacy and establishes requirements for consent and data processing.

Key Sections

Key Requirements

  • Authorization must contain all required information.
  • Authorization must have an expiration date of one year.
  • Authorization must include specific details about the data and its use.
  • Authorization must not be combined with other documents.
  • Authorization must not have been revoked by the consumer.
  • Civil penalties apply for violations, with a maximum of $7,500 per violation.
  • Collecting health data through geofences is prohibited.
  • Consumers have the right to revoke the authorization at any time.
  • Controllers must conduct and document data privacy assessments for specific processing activities.
  • Controllers must disclose the purposes for data collection to consumers.
  • Controllers must document policies for compliance with data protection laws.
  • Defines key terms related to consumer data privacy and health data.
  • Entities must control or process personal data of 100,000 consumers or more, or derive over 25% of gross revenue from the sale of personal data.
  • Establishes criteria for what constitutes sensitive data, including health data and biometric data.
  • Expiration date must not have passed.
  • Goods or services must not be conditioned on signing the authorization.
  • It is unlawful to use geofences to track consumers seeking healthcare services.
  • Mandates adherence to the Gramm-Leach-Bliley Act for financial data handling.
  • Must include how sensitive data will be gathered and used by the purchaser.
  • Must include the consumer's signature and date.
  • Penalties for violations are applicable as per the enforcement section.
  • Requires compliance with the Fair Credit Reporting Act for the collection and use of personal data.
  • Requires consumer consent for processing data beyond stated purposes.
  • Requires valid authorization from consumers before selling sensitive data.
  • Seller and purchaser must retain a copy of all valid authorizations for six years.
  • Sending notifications related to health data through geofences is not allowed.
  • Sensitive data may be subject to redisclosure by the purchaser.
  • Small businesses must not sell consumer sensitive data without prior consent.
  • The attorney general must issue a warning letter before filing an enforcement action.
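
The authorization conditions above amount to a checklist that a seller would have to satisfy before any sale of sensitive data. As an illustrative sketch only (not legal advice; all field names are hypothetical, and the one-year expiration is approximated as 365 days), the validity test could be expressed as:

```python
from datetime import date, timedelta

def authorization_is_valid(auth: dict, today: date) -> bool:
    """Illustrative checklist for a valid sensitive-data sale
    authorization per the bill summary (hypothetical fields)."""
    return (
        auth.get("signed")
        and auth.get("signature_date") is not None        # must include signature and date
        and not auth.get("revoked")                       # consumer may revoke at any time
        and not auth.get("combined_with_other_documents") # must stand alone
        and not auth.get("conditioned_on_goods_or_services")
        and auth.get("describes_data_and_use")            # specific details about the data and its use
        # expiration is fixed at one year from signing and must not have passed
        and today <= auth["signature_date"] + timedelta(days=365)
    )

# Example: a freshly signed, complete authorization
auth = {
    "signed": True,
    "signature_date": date(2026, 1, 15),
    "revoked": False,
    "combined_with_other_documents": False,
    "conditioned_on_goods_or_services": False,
    "describes_data_and_use": True,
}
print(authorization_is_valid(auth, date(2026, 6, 1)))  # True
print(authorization_is_valid(auth, date(2027, 6, 1)))  # False: past the one-year expiration
```

Note that the bill treats these as conjunctive conditions; failing any single one invalidates the authorization.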

Sponsors

Legislative Actions

Date Action
2025-03-24 Introduction and first reading, referred to Judiciary Finance and Civil Law

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates 'targeted advertising,' which is a primary application of artificial intelligence and machine learning algorithms.

Mechanism of Influence: By defining and regulating targeted advertising based on inferred preferences, the bill places constraints on how AI-driven profiling and ad-delivery systems can operate using consumer data.

Evidence:

  • Targeted advertising is described as displaying ads based on inferred consumer preferences from personal data

Ambiguity Notes: The bill does not explicitly use the term 'artificial intelligence,' but the definition of targeted advertising inherently covers algorithmic systems that process personal data to predict consumer behavior.

Analysis 2

Why Relevant: The bill mandates 'data privacy assessments' for specific processing activities, a common regulatory tool used to oversee high-risk AI systems.

Mechanism of Influence: Controllers must document and assess processing activities, which in modern privacy frameworks typically includes automated decision-making and AI-driven data processing that poses a risk to consumers.

Evidence:

  • Controllers must conduct and document data privacy assessments for specific processing activities.

Ambiguity Notes: The summary does not specify if 'specific processing activities' explicitly includes automated decision-making or profiling, though these are standard inclusions in similar state privacy laws.

Analysis 3

Why Relevant: The regulation of 'biometric data' is a core component of AI oversight, as AI is the primary technology used to process and identify individuals via biometrics.

Mechanism of Influence: By classifying biometric data as sensitive and requiring specific handling, the bill regulates the input data necessary for facial recognition, voice analysis, and other AI-based biometric systems.

Evidence:

  • Establishes criteria for what constitutes sensitive data, including health data and biometric data.

Ambiguity Notes: None

house - 48 - Certain social media algorithms that target children prohibited.

Legislation ID: 33474

Bill URL: View Bill

Summary

This bill establishes regulations for social media platforms operating in Minnesota, specifically targeting algorithms that direct user-generated content towards minors. It defines key terms related to social media usage, sets forth prohibitions on algorithmic targeting, and outlines requirements for parental consent for minors. The bill also includes provisions for liability and penalties for violations, aiming to create a safer online environment for children.

Key Sections

Key Requirements

  • Liability for platforms that target minors in violation of the law.
  • Penalty of $1,000 for each violation, capped at $100,000 per year.
  • Prohibits targeting minors under 18 with social media algorithms.
  • Requires chronological display of content for minors.
  • Requires verifiable parental consent for minors to open accounts.
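
Two of the requirements above are mechanical enough to sketch: the penalty schedule ($1,000 per violation, capped at $100,000 per year) and the mandate that minors receive a chronological rather than algorithmically ranked feed. A minimal illustration, with hypothetical field names, assuming "chronological" means newest first:

```python
def annual_penalty(violations: int) -> int:
    """$1,000 per violation, capped at $100,000 per year."""
    return min(1_000 * violations, 100_000)

def order_feed(posts: list[dict], user_is_minor: bool) -> list[dict]:
    """Minors get a strictly chronological feed (newest first); other
    users may still receive an engagement-ranked one. The 'timestamp'
    and 'engagement_score' fields are hypothetical."""
    if user_is_minor:
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    return sorted(posts, key=lambda p: p["engagement_score"], reverse=True)

print(annual_penalty(42))   # 42000
print(annual_penalty(250))  # 100000 -- the annual cap binds
```

The cap means exposure stops growing after the 100th violation in a given year.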

Sponsors

Legislative Actions

Date Action
2025-02-20 Author added Stephenson
2025-02-13 Authors added Engen and Burkel
2025-02-10 Introduction and first reading, referred to Commerce Finance and Policy

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates 'social media algorithms,' which are a primary application of AI and machine learning used for content recommendation and user profiling.

Mechanism of Influence: It restricts the functional application of recommendation AI by prohibiting its use for targeting content to minors, effectively mandating a non-algorithmic (chronological) interface for that user group.

Evidence:

  • This subdivision prohibits social media platforms with over 1,000,000 account holders from using algorithms to target user-generated content at minors under 18 in Minnesota

Ambiguity Notes: The definition of 'social media algorithm' is broad and likely encompasses various machine learning models used for engagement optimization.

Analysis 2

Why Relevant: The bill directly addresses age verification and usage restrictions for minors, a key focus of this report.

Mechanism of Influence: By requiring 'verifiable parental consent' for account creation, the law forces platforms to implement age-gating and identity verification systems to distinguish between minors and adults.

Evidence:

  • Requires verifiable parental consent for minors to open accounts.

Ambiguity Notes: The specific technical standards for 'verifiable' consent are not detailed in the summary, leaving room for interpretation on how platforms must verify age and parental status.

senate - 1528 - Certain social media algorithms targeting children prohibition provision

Legislation ID: 30477

Bill URL: View Bill

Summary

This bill establishes regulations concerning social media platforms operating in Minnesota, particularly focusing on the use of algorithms that target minors. It defines key terms related to social media and outlines prohibitions against targeting user-generated content at minors through recommendation features. The bill also mandates parental consent for minors to create accounts and outlines penalties for non-compliance.

Key Sections

Key Requirements

  • Platforms are liable for damages to account holders under 18 if they target them with content.
  • Prohibits algorithms from targeting users under 18.
  • Requires verifiable parental consent for minors to open accounts.
  • Statutory penalties of $1,000 per violation, capped at $100,000 per year.

Sponsors

Legislative Actions

Date Action
2025-02-17 Introduction and first reading

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates 'social media algorithms' and 'recommendation features,' which are fundamental applications of artificial intelligence in content curation.

Mechanism of Influence: It prohibits the deployment of algorithmic targeting systems for a specific demographic (minors), effectively restricting how AI models can be used to process and serve user-generated content.

Evidence:

  • This provision prohibits social media platforms with over 1,000,000 account holders from using algorithms to target content at users under 18 in Minnesota
  • defines key terms related to social media... including... social media algorithm

Ambiguity Notes: The definition of 'social media algorithm' is broad and likely encompasses various machine learning and automated decision-making systems used by platforms.

Analysis 2

Why Relevant: The bill directly addresses age verification and usage requirements for minors, a key focus of this report.

Mechanism of Influence: The bill mandates 'verifiable parental consent' for account creation by minors, which necessitates the implementation of age verification or identity verification technologies by the platforms.

Evidence:

  • Requires verifiable parental consent for minors to open accounts.

Ambiguity Notes: The bill does not specify the technical standards for 'verifiable' consent, leaving the implementation details to the platforms or future regulatory guidance.

senate - 1577 - Artificial intelligence generated child sexual abuse material and possession, sale, creation, dissemination, and purchase of child-like sex dolls prohibition provisions

Legislation ID: 30391

Bill URL: View Bill

Summary

This bill amends existing Minnesota statutes to explicitly outlaw the possession, sale, and distribution of child-like sex dolls and artificial intelligence-generated child sexual abuse material. It establishes definitions, penalties, and registration requirements for offenders associated with sexual crimes against minors, thereby reinforcing protections for children and addressing emerging threats posed by technology.

Key Sections

Key Requirements

  • Clarifies what constitutes sexual conduct and pornographic work.
  • Defines minor as anyone under 18 years old.
  • Establishes felony penalties for violations, with increased penalties for prior offenders or dolls depicting minors under 14.
  • Includes registration for those with prior convictions from other states if similar offenses occurred.
  • Prohibits the possession and dissemination of child-like sex dolls.
  • Requires individuals convicted of specified sexual offenses to register as offenders.
  • Specifies that artificial intelligence-generated images depicting minors in sexual conduct are included under pornographic work.

Sponsors

Legislative Actions

Date Action
2025-02-20 Introduction and first reading

Detailed Analysis

Analysis 1

Why Relevant: The legislation explicitly regulates artificial intelligence by categorizing AI-generated depictions of minors in sexual conduct as illegal pornographic material.

Mechanism of Influence: It creates a legal framework where the creation, possession, or distribution of specific AI-generated content results in criminal prosecution and mandatory sex offender registration.

Evidence:

  • Specifies that artificial intelligence-generated images depicting minors in sexual conduct are included under pornographic work.

Ambiguity Notes: While the bill targets AI-generated CSAM, the technical criteria for what constitutes an 'AI-generated image' versus a digitally manipulated or traditionally rendered image may require further legal clarification.

senate - 1856 - Usage of artificial intelligence in the utilization review process prohibition provision

Legislation ID: 30020

Bill URL: View Bill

Summary

The bill amends Minnesota Statutes to define artificial intelligence and explicitly prohibit its use by utilization review organizations. This covers any reviews, evaluations, determinations, or appeals related to health insurance.

Key Sections

Key Requirements

  • Prohibits utilization review organizations from using artificial intelligence in reviews, evaluations, determinations, or appeals.

Sponsors

Legislative Actions

Date Action
2025-03-10 Author added Mitchell
2025-02-27 Authors added Boldon; Mann; Mohamed
2025-02-24 Introduction and first reading
2025-02-24 Referred to Commerce and Consumer Protection

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the application of artificial intelligence by prohibiting its use in a specific industry sector (health insurance).

Mechanism of Influence: It creates a legal prohibition that prevents utilization review organizations from using AI tools for decision-making, evaluations, or appeals processes, effectively mandating human-only review.

Evidence:

  • This provision prohibits the use of artificial intelligence in any part of the utilization review process by organizations involved in health insurance.
  • Prohibits utilization review organizations from using artificial intelligence in reviews, evaluations, determinations, or appeals.

Ambiguity Notes: The bill adopts the federal definition of AI from 15 U.S.C. 9401, which is broad; however, the prohibition itself is narrow and specific to the utilization review context.

senate - 2087 - Use of tenant screening software that uses nonpublic competitor data to set rent prohibition

Legislation ID: 52905

Bill URL: View Bill

Summary

This bill introduces regulations concerning tenant screening algorithms used by landlords. It prohibits the use of software that relies on nonpublic competitor data to set rental prices and restricts the use of algorithms that may lead to discrimination against protected classes. The bill also outlines the consequences for violations and amends existing statutes related to tenant reporting and remedies.

Key Sections

Key Requirements

  • Exclusions include reports from trade associations that provide aggregate data and products that comply with affordable housing guidelines.
  • Landlords must ensure that any background screening tools do not have a discriminatory impact on protected classes.
  • Landlords must not use algorithms to set rent based on nonpublic competitor data.
  • Violations may result in liability under section 504B.245.

Sponsors

Legislative Actions

Date Action
2025-04-03 Author added Fateh
2025-03-27 Author stricken Housley
2025-03-03 Introduction and first reading
2025-03-03 Referred to Judiciary and Public Safety

Detailed Analysis

Analysis 1

Why Relevant: The provision explicitly regulates the use of AI software and algorithms in the context of tenant background screening.

Mechanism of Influence: It creates a legal prohibition against using AI tools that produce biased outcomes, effectively necessitating that landlords and software providers audit their algorithms for discriminatory impacts.

Evidence:

  • This provision prohibits the use of algorithms or AI software for tenant background screening if they disproportionately affect protected classes as defined in Minnesota law.
  • Landlords must ensure that any background screening tools do not have a discriminatory impact on protected classes.

Ambiguity Notes: The term 'disproportionately affect' is a legal standard that may require specific statistical thresholds or algorithmic auditing protocols to define compliance.

Analysis 2

Why Relevant: This section regulates 'algorithmic devices' used for automated financial decision-making (rent setting).

Mechanism of Influence: It restricts the data inputs available to AI models, specifically banning the use of nonpublic competitor data, which impacts how pricing algorithms are trained and deployed.

Evidence:

  • This provision prohibits landlords from using algorithmic devices that incorporate nonpublic competitor data to determine rental prices for residential units.
  • Landlords must not use algorithms to set rent based on nonpublic competitor data.

Ambiguity Notes: The definition of 'algorithmic devices' is broad and likely encompasses various forms of automated and machine-learning-based pricing software.

senate - 2940 - Minnesota Data Privacy Act modification to make consumer health data a form of sensitive data provision and sensitive data additional protections addition provision

Legislation ID: 90468

Bill URL: View Bill

Summary

This bill amends various sections of the Minnesota Consumer Data Privacy Act to redefine and expand the scope of sensitive data, particularly health data. It establishes clearer definitions and protections for personal data, including biometric and genetic information, and introduces requirements for consent, data processing, and consumer rights regarding their personal information.

Key Sections

Key Requirements

  • Consumers must be able to access their personal data and request deletion.
  • Consumers must provide clear consent for data processing.
  • Defines sensitive data to include health data, biometric data, and genetic information.
  • Establishes consent requirements for processing personal data.
  • Health data must include personal data identifying mental or physical health status, conditions, treatments, and related interventions.
  • Mandates that consent must be obtained in a clear and unambiguous manner.
  • Requires specific handling and processing standards for sensitive data.

Sponsors

Legislative Actions

Date Action
2025-04-22 Author added Oumou Verbeten
2025-03-24 Introduction and first reading
2025-03-24 Referred to Commerce and Consumer Protection

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates biometric and genetic data, which are foundational inputs for many artificial intelligence systems, particularly those involving facial recognition and predictive health analytics.

Mechanism of Influence: By requiring clear consent and establishing processing standards for biometric data, the law restricts how AI developers can collect and utilize sensitive datasets for training or deploying identification algorithms.

Evidence:

  • Defines sensitive data to include health data, biometric data, and genetic information.
  • Establishes consent requirements for processing personal data.

Ambiguity Notes: The term 'processing' is broad and, while not explicitly naming AI, encompasses the computational methods used to train and run machine learning models on personal data.

Analysis 2

Why Relevant: The legislation mandates transparency and consumer control over data processing, which aligns with AI disclosure and oversight goals.

Mechanism of Influence: The requirement for 'clear and unambiguous' consent for processing sensitive data forces AI companies to provide disclosures to users before their data is ingested into automated systems.

Evidence:

  • Mandates that consent must be obtained in a clear and unambiguous manner.
  • Requires specific handling and processing standards for sensitive data.

Ambiguity Notes: The bill focuses on data privacy rather than the specific algorithmic outputs or weights of AI models, but it regulates the 'fuel' (data) that powers AI.

senate - 3098 - Prohibition from using artificial intelligence to dynamically set product prices

Legislation ID: 96164

Bill URL: View Bill

Summary

This bill introduces a prohibition against the use of artificial intelligence to dynamically set product prices based on various market factors. It defines artificial intelligence and outlines the enforcement powers of the attorney general in relation to this prohibition. The goal is to prevent unfair pricing practices that could arise from automated systems adjusting prices in real time.

Key Sections

Key Requirements

  • Prohibits the use of artificial intelligence for real-time price adjustments.

Sponsors

Legislative Actions

Date Action
2025-04-24 Author added Boldon
2025-03-27 Introduction and first reading
2025-03-27 Referred to Commerce and Consumer Protection

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates a specific commercial application of artificial intelligence: automated, real-time price setting.

Mechanism of Influence: It establishes a legal prohibition on AI-driven automated systems for price setting and grants the attorney general power to oversee and enforce compliance.

Evidence:

  • Prohibits the use of artificial intelligence for real-time price adjustments.
  • defines artificial intelligence and outlines the enforcement powers of the attorney general

Ambiguity Notes: The specific definition of 'artificial intelligence' used in the bill is not provided in the abstract, which could determine the breadth of the enforcement.

↑ Back to Table of Contents

Mississippi

Index of Bills

House - 1035 - MS Future Innovators Act; enact to require high school computer science or CTE with embedded computer science course.

Legislation ID: 270712

Bill URL: View Bill

Summary

This bill mandates that starting with the ninth-grade class of the 2029-2030 school year, public high school students in Mississippi must earn one unit of credit in a computer science course or an industry-aligned career and technical education (CTE) course with embedded computer science. The legislation aims to enhance students' understanding of emerging technologies, including artificial intelligence, and establishes requirements for the courses offered to meet state graduation criteria.

Key Sections

Key Requirements

  • Courses must be approved by the State Board of Education.
  • Requires students to earn one unit of credit in a computer science course or CTE with embedded computer science before graduation.

Sponsors

Legislative Actions

Date Action
2026-01-21 (H) Title Suff Do Pass
2026-01-16 (H) Referred To Education

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly identifies artificial intelligence as a core subject area for the mandated computer science or CTE curriculum.

Mechanism of Influence: By requiring AI-related education for graduation, the law ensures a baseline level of AI literacy among the future workforce and public in Mississippi.

Evidence:

  • The legislation aims to enhance students' understanding of emerging technologies, including artificial intelligence

Ambiguity Notes: While AI is mentioned as an aim, the specific standards for what constitutes 'understanding' of AI or the depth of the AI curriculum are left to the State Board of Education's approval process.

House - 1048 - Artificial intelligence; prohibit use of in professional mental and behavioral health care.

Legislation ID: 270736

Bill URL: View Bill

Summary

House Bill No. 1048 seeks to regulate the use of artificial intelligence in mental and behavioral health care by prohibiting AI systems from providing such care and restricting licensed professionals from using AI in their practice. It allows limited use of AI for administrative support services and establishes penalties for violations.

Key Sections

Key Requirements

  • Allows AI for administrative tasks like scheduling and billing.
  • Allows the Attorney General to investigate and take action against violators.
  • Amends licensing statutes to include AI violations as grounds for disciplinary action.
  • Imposes a civil penalty of up to $15,000 for violations.
  • Prohibits AI from making therapeutic decisions or interacting with clients.
  • Prohibits AI systems from providing mental health care services.
  • Prohibits licensed professionals from using AI in therapeutic practice.

Sponsors

Legislative Actions

Date Action
2026-01-16 (H) Referred To Public Health and Human Services;Accountability, Efficiency, Transparency

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the deployment and use of artificial intelligence in the healthcare sector.

Mechanism of Influence: It prohibits AI from performing core professional tasks (therapeutic decisions and client interactions) and restricts its use to administrative functions, thereby setting boundaries on AI autonomy in clinical settings.

Evidence:

  • Prohibits AI systems from providing mental health care services.
  • Prohibits licensed professionals from using AI in therapeutic practice.
  • Licensed professionals may use AI for administrative and supplementary support services, such as managing appointments and processing billing, but cannot use it for therapeutic decisions or client interactions.

Ambiguity Notes: The scope of 'administrative and supplementary support services' may require further clarification to distinguish between purely clerical tasks and those that might influence clinical outcomes.

Analysis 2

Why Relevant: The legislation includes enforcement mechanisms and oversight for AI-related violations.

Mechanism of Influence: It grants the Attorney General investigative powers and establishes significant civil penalties ($15,000) for non-compliance, while also integrating AI usage standards into professional licensing disciplinary grounds.

Evidence:

  • Amends licensing statutes to include AI violations as grounds for disciplinary action.
  • Allows the Attorney General to investigate and take action against violators.
  • Imposes a civil penalty of up to $15,000 for violations.

Ambiguity Notes: None

House - 1051 - Mississippi Consumer Privacy Protection Act; create.

Legislation ID: 270741

Bill URL: View Bill

Summary

This bill establishes regulations for businesses in Mississippi that generate over $25 million in revenue, focusing on their responsibilities towards consumer personal information. It outlines consumer rights to access, correct, delete, or opt-out of data processing, and mandates that businesses implement robust data security measures. The bill also designates the Attorney General as the authority for enforcement and provides for penalties for violations.

Key Sections

Key Requirements

  • Allows for civil penalties of up to $7,500 for each violation.
  • Assessments must weigh benefits against potential risks to consumers.
  • Attorney General must provide 60 days written notice before taking action against a controller or processor for violations.
  • Businesses must exceed $25 million in revenue.
  • Consumer rights do not apply to pseudonymous data if identification measures are in place.
  • Consumers can request access to their personal information, correct inaccuracies, delete information, obtain a copy of their data in a portable format, and opt out of data processing for sales or targeted advertising.
  • Consumers must have the option to opt out of the sale of their personal information.
  • Contracts between controllers and processors must clearly define processing obligations and data handling procedures.
  • Controllers and processors can comply with federal, state, or local laws.
  • Controllers cannot discriminate against consumers for exercising their rights, such as denying services or charging higher prices.
  • Controllers must clearly disclose sales of personal information and provide methods for consumers to opt out.
  • Controllers must conduct data protection assessments for targeted advertising, data sales, and sensitive data processing.
  • Controllers must create and maintain a written privacy program that includes security practices.
  • Controllers must describe secure and reliable methods for consumers to submit requests in their privacy notices.
  • Controllers must limit data collection to what is necessary for processing purposes.
  • Controllers must maintain de-identified data in a way that prevents re-identification.
  • Controllers must provide a clear privacy notice to consumers.
  • Controllers must provide requested data protection assessments to the Attorney General.
  • Controllers or processors must cure violations within the notice period to avoid legal action.
  • Entities cannot retain identifying information after access is granted.
  • Entities must implement reasonable age verification methods.
  • Exemptions include financial institutions, HIPAA-covered entities, and certain government and educational entities.
  • Limits collection of minors' personal information to what is necessary.
  • Mandates confidentiality, integrity, and accessibility of personal data.
  • Must conform to NIST or comparable frameworks and update within two years.
  • Must control or process personal information of at least 25,000 consumers or 175,000 consumers in a year.
  • Must create and maintain a written privacy program.
  • Must provide substantive rights as required by the act.
  • Permits the Attorney General to seek declaratory judgments and injunctive relief.
  • Privacy notices must include categories of personal information processed, purposes of processing, consumer rights, and any sales of personal information to third parties.
  • Processors must ensure confidentiality and cooperate with assessments.
  • Processors must follow the instructions of controllers regarding data processing.
  • Prohibits use of precise geolocation data and targeted advertising to minors.
  • Requires reasonable measures to protect personal information.
  • Responses to consumer requests must be made within 45 days, with an option for a 45-day extension if necessary.
  • The act applies to entities processing personal information of at least 175,000 consumers annually.
  • The Attorney General can issue civil investigative demands and conduct investigations.
  • The Attorney General can also take actions necessary for legal investigations and public safety.
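
The applicability thresholds above can be reduced to a simple test. As a rough sketch only: the summary lists both a 25,000- and a 175,000-consumer figure without stating how they combine, so this illustration assumes the act applies to businesses exceeding $25 million in revenue that process personal information of at least 175,000 consumers annually:

```python
def act_applies(annual_revenue: float, consumers_processed: int) -> bool:
    """Rough applicability test per the bill summary: revenue over
    $25 million AND personal information of at least 175,000 consumers
    processed annually. (The summary also cites a 25,000-consumer
    figure; how the two thresholds combine is ambiguous, so the
    higher one is assumed here.)"""
    return annual_revenue > 25_000_000 and consumers_processed >= 175_000

print(act_applies(30_000_000, 200_000))  # True
print(act_applies(30_000_000, 50_000))   # False under the assumed threshold
```

A business clearing the revenue bar but processing data for fewer consumers would fall outside the act under this reading.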

Sponsors

Legislative Actions

Date Action
2026-01-16 (H) Referred To Judiciary A

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates age verification for specific types of digital content providers.

Mechanism of Influence: Commercial entities publishing 'harmful material' are legally required to implement reasonable age verification methods, which often involves the use of third-party identity verification software or AI-based age estimation tools.

Evidence:

  • Commercial entities publishing harmful material on the internet must perform reasonable age verification and are liable for damages if they fail to do so.
  • Entities must implement reasonable age verification methods.

Ambiguity Notes: The term 'reasonable age verification' is not technologically defined, leaving open whether AI-based biometric estimation or document-based verification is required.

Analysis 2

Why Relevant: The bill requires data protection assessments for targeted advertising and sensitive data processing, which are primary use cases for AI and machine learning models.

Mechanism of Influence: Businesses must document and weigh the benefits of data processing against risks to consumers. This creates a regulatory hurdle for deploying AI models used for profiling or behavioral targeting.

Evidence:

  • Controllers must conduct data protection assessments for targeted advertising, data sales, and sensitive data processing.
  • Assessments must weigh benefits against potential risks to consumers.

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence' or 'Automated Decision Making,' but the regulated activities (targeted advertising and sensitive data processing) are almost exclusively driven by these technologies in the current market.

Analysis 3

Why Relevant: The bill restricts the use of automated systems for targeting minors.

Mechanism of Influence: By prohibiting targeted advertising and the use of precise geolocation for known minors, the bill effectively bans the use of recommendation algorithms and AI-driven ad-tech targeting this demographic.

Evidence:

  • Prohibits use of precise geolocation data and targeted advertising to minors.
  • Digital service providers must limit the collection and use of personal information from known minors.

Ambiguity Notes: The effectiveness of this provision depends on the definition of 'known minors' and whether businesses must proactively identify them.

House - 1082 - Alcoholic beverages; revise certain provisions relating to sales to persons under the age of 21.

Legislation ID: 270804

Bill URL: View Bill

Summary

House Bill No. 1082 amends Section 67-1-81 of the Mississippi Code to impose additional penalties on permit holders who sell alcohol to minors. Specifically, after a third offense, the Commissioner of Revenue can require the use of an independent age-verification app on the premises to ensure compliance with age restrictions. The bill also outlines specific fines and penalties for both permit holders and individuals under 21 who violate alcohol purchase laws.

Key Sections

Key Requirements

  • Community service not exceeding 30 days may be imposed.
  • Conditions for probation may be applied during the suspension period.
  • Fines for minors caught purchasing alcohol range from $200 to $500.
  • First offense: fine of $500 to $1,000.
  • Fourth offense: revocation of permit.
  • Judges may suspend a minor's driver's license for up to 90 days instead of imposing fines.
  • Requires the use of an independent age-verification app for permit holders with multiple offenses.
  • Second offense: fine of $1,000 to $2,000 or up to one year imprisonment.
  • The app must meet national accuracy standards.
  • Third offense: suspension of permit for up to three weeks or revocation.

Sponsors

Legislative Actions

Date Action
2026-01-16 (H) Referred To Judiciary A

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates the use of specific age-verification technology as a regulatory compliance measure.

Mechanism of Influence: It requires permit holders with three or more offenses to implement a third-party age-verification app to validate customer ages, setting a performance standard of 85% accuracy.

Evidence:

  • require the use of an independent age-verification app on the premises
  • utilize a third-party age-verification app with at least 85% accuracy to confirm the age of customers purchasing alcohol

Ambiguity Notes: While the bill does not explicitly use the term 'Artificial Intelligence,' age-verification apps frequently utilize AI-driven biometric analysis or automated document verification to meet accuracy standards.

House - 1242 - Mississippi Dyslexia Generative Artificial Intelligence Education and Workforce Development Act; create.

Legislation ID: 273949

Bill URL: View Bill

Summary

The Mississippi Dyslexia Generative Artificial Intelligence Education and Workforce Development Act seeks to create an open-access dyslexia curriculum and AI system, designed to enhance educational resources for public schools, correctional facilities, and workforce training programs. It aims to improve literacy for students and adults with dyslexia while promoting research and development in dyslexia education.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-19 (H) Referred To Education;Appropriations A

Detailed Analysis

Analysis 1

Why Relevant: The act specifically mandates the creation and deployment of a generative artificial intelligence system for educational purposes.

Mechanism of Influence: It establishes a formal state program (the Mississippi Dyslexia Generative Artificial Intelligence Education and Workforce Development Program) to govern the development and use of AI in specific public sectors.

Evidence:

  • Mississippi Dyslexia Generative Artificial Intelligence Education and Workforce Development Program to develop and deploy an open-access dyslexia curriculum and AI system

Ambiguity Notes: While the act focuses on development, the 'Rulemaking Authority' allows for future regulatory constraints or usage standards not explicitly detailed in the text.

Analysis 2

Why Relevant: The legislation includes specific legal definitions for AI technologies.

Mechanism of Influence: By defining 'Generative artificial intelligence system,' the law sets the legal boundaries for what technologies fall under the program's scope and subsequent regulations.

Evidence:

  • Defines key terms such as Department, Program, Generative artificial intelligence system, Open-access, and Founding development partners

Ambiguity Notes: None

Analysis 3

Why Relevant: The act requires oversight and reporting on the AI system's performance and outcomes.

Mechanism of Influence: It mandates a formal research and evaluation component with annual reports to the Legislature, serving as a mechanism for government oversight and performance auditing.

Evidence:

  • Requires a formal research and evaluation component with annual reports to the Legislature detailing program outcomes and recommendations for expansion.

Ambiguity Notes: None

Analysis 4

Why Relevant: The act grants regulatory authority to state departments regarding the AI program.

Mechanism of Influence: The department is empowered to create regulations, which could include requirements for disclosures, data usage, or safety standards for the AI system.

Evidence:

  • Empowers the department to create necessary regulations to implement the act's provisions.

Ambiguity Notes: None

House - 1576 - Interactive computer service providers; require parental consent and access to minor users account history.

Legislation ID: 282753

Bill URL: View Bill

Summary

House Bill No. 1576 establishes regulations for interactive computer service providers regarding minors. It mandates that such providers cannot engage in contracts with minors unless parental consent is obtained. The bill outlines requirements for parental access to user history, age verification methods, and penalties for non-compliance. Additionally, it allows the Attorney General to enforce these regulations and provides parents with the right to file civil complaints against violators.

Key Sections

Key Requirements

  • Attorney General can bring action against violators.
  • Mandates parental access to their child's user history.
  • Parents can file civil complaints and seek damages.
  • Requires parental consent for contracts with minors.

Sponsors

Legislative Actions

Date Action
2026-01-19 (H) Referred To Judiciary A

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates age verification and parental consent for interactive computer services, which is a specific area of interest for AI regulation concerning minors.

Mechanism of Influence: AI platforms and applications falling under the definition of 'interactive computer service' would be required to implement age verification and obtain parental consent before allowing minors to use their services.

Evidence:

  • The bill outlines requirements for parental access to user history, age verification methods
  • Interactive computer service providers are prohibited from entering into contracts with minors without prior express consent from a parent.

Ambiguity Notes: The definition of 'interactive computer service' is broad and does not explicitly name AI, but functionally covers the platforms where AI is most commonly deployed to minors.

House - 1605 - MS Artificial Intelligence and STEM Education Act; create.

Legislation ID: 282800

Bill URL: View Bill

Summary

The Mississippi Artificial Intelligence and STEM Education Innovation Act authorizes the use of artificial intelligence in public schools to improve STEM instruction. It establishes pilot programs for AI-assisted learning, teacher support, and career pathways while ensuring safeguards for student data privacy. The act aims to address challenges in STEM achievement and workforce readiness, particularly in underserved areas.

Key Sections

Key Requirements

  • Participation in the pilot program shall be voluntary.
  • Pilot program sunsets on June 30, 2030, unless reauthorized.
  • Report due by December 1 each year.

Sponsors

Legislative Actions

Date Action
2026-01-19 (H) Referred To Education

Detailed Analysis

Analysis 1

Why Relevant: The act mandates data privacy compliance and restricts the commercial use of data collected by AI tools.

Mechanism of Influence: AI tools used in the pilot program must comply with FERPA and state privacy laws, specifically prohibiting the sale of student data or its use for noneducational purposes.

Evidence:

  • All AI-assisted tools must comply with FERPA and Mississippi data privacy laws, ensuring student data is not sold or used for noneducational purposes.

Ambiguity Notes: The term 'noneducational purposes' is not explicitly defined, which could lead to varying interpretations of permissible data use by AI vendors.

Analysis 2

Why Relevant: It requires oversight through annual reporting and establishes standards for the ethical use of AI in an educational setting.

Mechanism of Influence: The Department of Education is required to report on outcomes and provide professional development on ethical AI usage, creating a framework for responsible implementation.

Evidence:

  • The department must report annually on student outcomes, teacher feedback, cost effectiveness, and recommendations
  • The Department of Education must provide professional development to educators on the effective and ethical use of AI tools

Ambiguity Notes: The criteria for 'ethical use' are not detailed, leaving the specific standards to be determined by the Department of Education during rule promulgation.

Analysis 3

Why Relevant: The legislation provides formal legal definitions for artificial intelligence and related technologies.

Mechanism of Influence: By defining 'artificial intelligence' and 'AI-assisted learning tool,' the act sets the regulatory scope for which technologies are subject to the pilot's requirements and privacy protections.

Evidence:

  • This section defines key terms such as artificial intelligence, STEM, AI-assisted learning tool, Department, and local education agency.

Ambiguity Notes: The summary mentions that the section defines these terms but does not provide the specific text of the definitions.

House - 1618 - Parental consent; require for minors before use of interactive computer services.

Legislation ID: 282818

Bill URL: View Bill

Summary

House Bill No. 1618 seeks to establish regulations for interactive computer service providers regarding minors. It prohibits these providers from entering contracts with minors without parental consent, restricts minors from accessing harmful materials, and mandates reasonable age verification methods. The bill also grants the Attorney General the authority to enforce these provisions and impose civil penalties for violations.

Key Sections

Key Requirements

  • Allows the Attorney General to recover attorney fees and costs.
  • Applies to digital services that allow social interaction, profile creation, and content posting.
  • Cannot retain identifying information post-access.
  • Develop strategies to mitigate exposure to self-harm, substance abuse, and other harmful behaviors.
  • Establishes civil penalties for violations, including daily fines and penalties per instance of violation.
  • Excludes services related to employment data processing, email, news access, and career development.
  • Limits collection to necessary information.
  • Must perform reasonable age verification.
  • No class action suits allowed.
  • Parents may seek declaratory judgments or injunctions.
  • Prohibits access to harmful materials by minors.
  • Prohibits collecting precise geolocation data and targeted advertising involving harmful material.
  • Prohibits sharing of a minor's persona with others.
  • Provide resources for prevention and mitigation of harms.
  • Requires age verification methods appropriate to the risk of information management.
  • Requires express consent from a parent or guardian for known minors.
  • Requires parental consent for contracts with minors.

Sponsors

Legislative Actions

Date Action
2026-01-19 (H) Referred To Judiciary A

Detailed Analysis

Analysis 1

Why Relevant: The bill's requirements for age verification and parental consent directly impact how AI-driven platforms and interactive computer services are accessed by younger demographics.

Mechanism of Influence: AI service providers falling under the definition of 'digital service providers' would be legally required to implement age gates and obtain express parental consent before allowing minors to create accounts or interact with the service.

Evidence:

  • Digital service providers must verify the age of users creating accounts and cannot allow known minors to create accounts without parental consent.
  • Requires age verification methods appropriate to the risk of information management.

Ambiguity Notes: The term 'interactive computer service' is broad and typically includes AI platforms, but the bill does not explicitly name 'Artificial Intelligence,' leaving its application to specific AI architectures to be determined by the scope of 'digital services that allow social interaction.'

Analysis 2

Why Relevant: The mandate to mitigate harmful content is highly relevant to AI safety and the deployment of Large Language Models (LLMs) or generative AI that may produce harmful outputs.

Mechanism of Influence: AI companies would be required to develop and implement safety strategies to prevent their models from generating or exposing minors to content related to self-harm, substance abuse, or other defined harmful behaviors.

Evidence:

  • Develop strategies to mitigate exposure to self-harm, substance abuse, and other harmful behaviors.
  • Provide resources for prevention and mitigation of harms.

Ambiguity Notes: The effectiveness of 'strategies to mitigate exposure' is subjective and may require AI companies to perform internal audits or implement specific filtering layers to comply with the law.

Analysis 3

Why Relevant: The bill restricts data collection practices, which is a core component of how AI models are personalized or how user data is utilized for iterative training.

Mechanism of Influence: Providers must limit data collection to what is strictly necessary, potentially hindering the ability of AI services to collect extensive behavioral data from minors for targeted advertising or model optimization.

Evidence:

  • This section mandates that digital service providers limit the collection and use of personal identifying information from known minors to what is necessary for providing the service.
  • Prohibits collecting precise geolocation data and targeted advertising involving harmful material.

Ambiguity Notes: The definition of 'necessary for providing the service' could be interpreted narrowly, potentially impacting the functionality of personalized AI assistants.

House - 1717 - Mississippi Medical Judgement Protection Act; create.

Legislation ID: 284060

Bill URL: View Bill

Summary

The bill establishes the AI Task Force, which will include both voting and non-voting members with expertise in various fields related to AI technology. The task force is responsible for developing recommendations for the regulation of AI, reviewing existing laws, and proposing necessary revisions to the Mississippi Code. It aims to foster innovation while addressing ethical and societal concerns related to AI deployment.

Key Sections

Key Requirements

  • Task force to include members with expertise in AI from various sectors.
  • The task force may adjust the number of ex-officio members as needed.

Sponsors

Legislative Actions

Date Action
2026-01-19 (H) Referred To Public Health and Human Services

Detailed Analysis

Analysis 1

Why Relevant: The legislation is directly focused on the creation of a regulatory oversight body for artificial intelligence.

Mechanism of Influence: By establishing a task force to develop recommendations for AI regulation and propose revisions to state code, this bill serves as the foundational step for future AI-specific mandates such as audits or disclosures.

Evidence:

  • The task force is responsible for developing recommendations for the regulation of AI
  • proposing necessary revisions to the Mississippi Code

Ambiguity Notes: The bill focuses on the formation of the task force rather than prescribing specific technical requirements like weight submissions or age verification at this stage.

House - 1723 - Artificial intelligence; define.

Legislation ID: 284066

Bill URL: View Bill

Summary

This bill seeks to define artificial intelligence as a machine-based system capable of making predictions, recommendations, or decisions based on human-defined objectives. It outlines the capabilities of such systems, including their ability to perceive environments, analyze data, and formulate options for action or information.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-28 (H) Title Suff Do Pass
2026-01-19 (H) Referred To Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill provides the foundational definition of AI, which is necessary for any subsequent regulation, disclosure requirements, or oversight mechanisms mentioned in the system instructions.

Mechanism of Influence: By defining what constitutes an AI system, this law determines the scope of future regulatory actions such as audits, disclosures, or government oversight.

Evidence:

  • This bill seeks to define artificial intelligence as a machine-based system capable of making predictions, recommendations, or decisions based on human-defined objectives.

Ambiguity Notes: The phrase "human-defined objectives" is broad and could potentially encompass a wide range of software from simple algorithms to complex generative models.

House - 840 - Criminal offenses; enhance penalties for certain if artificial intelligence was used in the commission of.

Legislation ID: 270251

Bill URL: View Bill

Summary

This bill establishes additional penalties for defendants who knowingly and intentionally use artificial intelligence systems in the commission of designated offenses. Depending on whether the offense is classified as a misdemeanor or felony, the penalties include increased terms of imprisonment and fines. Additionally, the bill amends existing laws to prohibit the transmission and possession of visual materials depicting child exploitation, reinforcing protections against such crimes.

Key Sections

Key Requirements

  • For felonies with a minimum sentence of 2 years or more, the sentence increases by 1 year and a fine of at least $5,000.
  • If the designated offense is a felony, an additional imprisonment of at least 2 years and a fine of at least $5,000.
  • If the designated offense is a misdemeanor, an additional imprisonment of 6 to 12 months and a fine up to $5,000.
  • Prohibits causing, soliciting, or permitting children to engage in sexually explicit conduct.
  • Prohibits possession or distribution of materials depicting children in sexually explicit conduct.
  • Prohibits the depiction or recording of children in sexually explicit conduct.
  • Prosecutors must provide notice in a separate clause in the information or indictment.
  • The notice must allege specific factors for seeking enhanced penalties.

Sponsors

Legislative Actions

Date Action
2026-01-16 (H) Referred To Judiciary B

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the use of AI in criminal activities and provides a legal definition for artificial intelligence systems.

Mechanism of Influence: It regulates the use of AI by creating a deterrent through enhanced sentencing and legal definitions that courts must apply to AI-assisted crimes.

Evidence:

  • This bill establishes additional penalties for defendants who knowingly and intentionally use artificial intelligence systems in the commission of designated offenses.
  • Defines key terms related to the bill, including artificial intelligence system and designated offense.

Ambiguity Notes: The term 'designated offense' determines the scope of the AI-related penalties, which may vary depending on other sections of the code.

Analysis 2

Why Relevant: It imposes procedural requirements on the legal system regarding AI-related crimes.

Mechanism of Influence: Prosecutors must explicitly cite the use of AI in indictments to trigger the enhanced penalties, creating a formal legal record and oversight mechanism for AI misuse.

Evidence:

  • Establishes requirements for prosecutors to provide notice when seeking to enhance penalties due to the use of artificial intelligence in the commission of an offense.

Ambiguity Notes: None

Senate - 2050 - Artificial intelligence; require disclosure when used in political advertisements.

Legislation ID: 249695

Bill URL: View Bill

Summary

Senate Bill No. 2050 amends Section 23-15-897 of the Mississippi Code to mandate that any qualified political advertisement utilizing artificial intelligence must disclose this fact to the public. The bill defines what constitutes a qualified political advertisement and the nature of artificial intelligence. It specifies the required information for disclosure, outlines who is exempt from liability for non-disclosure, and establishes civil penalties for violations. The bill also details the legal recourse available to aggrieved parties and the attorney general in cases of non-compliance.

Key Sections

Key Requirements

  • Advertisements that are satire or parody are also exempt.
  • Audio disclaimers must be audible and clear, lasting at least three seconds.
  • First violation incurs a fine of up to $250.
  • If not approved by a candidate, requires the name of the paying entity.
  • Legal action can be taken in specified courts.
  • Must indicate whether AI was used in creating the advertisement.
  • Radio or TV stations broadcasting news content are exempt.
  • Requires the name of the candidate and a statement of approval for the advertisement.
  • Subsequent violations incur fines up to $1,000.
  • Text disclaimers must be prominent and in the same language as the advertisement.
  • Video disclaimers must be visible for at least four seconds and include an audible message.

Sponsors

Legislative Actions

Date Action
2026-01-08 (S) Referred To Elections;Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the use of artificial intelligence in political campaigning by requiring mandatory disclosures.

Mechanism of Influence: It imposes legal obligations on candidates and political committees to label AI-generated content, with specific technical requirements for how those labels appear in audio and video formats, and establishes civil penalties for failure to comply.

Evidence:

  • Qualified political advertisements that involve AI must include clear statements indicating the use of AI, with specific requirements for text, audio, and video formats.
  • The bill defines what constitutes a qualified political advertisement and the nature of artificial intelligence.
  • This section outlines the civil penalties for individuals or committees that fail to comply with the AI disclosure requirements.

Ambiguity Notes: The scope of the regulation depends on the specific definitions provided for 'artificial intelligence' and 'qualified political advertisement' within the bill.

Senate - 2294 - MS Future Innovators Act; enact to require high-school computer science or CTE with embedded computer science course.

Legislation ID: 273109

Bill URL: View Bill

Summary

Senate Bill No. 2294 establishes a requirement for public high school students in Mississippi to earn one unit of credit in a computer science course or a career and technical education (CTE) course with embedded computer science instruction before graduation, starting with the ninth-grade class of 2029-2030. The bill also mandates that these courses include fundamental concepts of emerging technologies, such as artificial intelligence, and defines relevant terms for clarity in implementation.

Key Sections

Key Requirements

  • Courses must cover fundamental concepts of computer science technologies, including AI and its societal impacts.
  • CTE courses must provide foundational computer science instruction and be approved by the State Board of Education.
  • Students must earn one credit in a high school computer science course or a CTE course with embedded computer science before graduation.
  • The computer science course or CTE course can fulfill one credit requirement in mathematics or one credit requirement in science.
  • The courses must be approved by the State Board of Education.
  • This requirement does not increase the total number of credits needed for graduation.

Sponsors

Legislative Actions

Date Action
2026-01-19 (S) Referred To Education

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mentions artificial intelligence as a required component of the mandatory computer science curriculum for high school graduation.

Mechanism of Influence: By mandating the inclusion of AI in educational standards, the law ensures that the state's education system addresses the technology's fundamental concepts and societal implications.

Evidence:

  • Courses fulfilling the graduation requirement must include instruction on emerging computer science technologies, particularly artificial intelligence.
  • Courses must cover fundamental concepts of computer science technologies, including AI and its societal impacts.

Ambiguity Notes: The bill focuses on education and curriculum standards rather than the regulation of AI development, deployment, or oversight mechanisms like audits or weight submissions.

Senate - 2354 - Artificial Intelligence Fraud and Accountability Act; enact.

Legislation ID: 273204

Bill URL: View Bill

Summary

The Artificial Intelligence Fraud and Accountability Act aims to define artificial intelligence fraud and create a civil cause of action for those harmed by such fraudulent activities. It outlines the available remedies, including punitive damages for willful violations, and allows for injunctions against violators. The act holds developers and users of AI systems accountable for any fraudulent use, promoting responsible deployment of AI technologies.

Key Sections

Key Requirements

  • Allows recovery of statutory damages of $500 per violation.
  • Establishes joint and several liability for those knowingly involved in AI fraud.
  • Permits temporary or permanent injunctions against violators.
  • Requires clear and convincing evidence for punitive damages.
  • Requires individuals or entities to prove injury or damage from AI fraud.

Sponsors

Legislative Actions

Date Action
2026-01-19 (S) Referred To Judiciary, Division A

Detailed Analysis

Analysis 1

Why Relevant: This legislation directly addresses the regulation and accountability of artificial intelligence by establishing legal consequences for its misuse in fraudulent activities.

Mechanism of Influence: It creates a civil cause of action allowing for compensatory, statutory, and punitive damages, as well as injunctions, which forces developers and users to implement safeguards against fraudulent deployment.

Evidence:

  • The Artificial Intelligence Fraud and Accountability Act aims to define artificial intelligence fraud and create a civil cause of action for those harmed by such fraudulent activities.
  • Developers or users of AI systems that knowingly facilitate fraud can be held jointly liable for damages incurred.

Ambiguity Notes: The definition of 'deceptive use' and the threshold for 'knowingly facilitate' are broad, potentially leaving room for interpretation regarding the extent of a developer's responsibility for third-party misuse.

Senate - 2429 - Artificial Intelligence in Education Task Force Act; enact.

Legislation ID: 274071

Bill URL: View Bill

Summary

The Artificial Intelligence in Education Task Force Act aims to create a task force that will explore potential applications of artificial intelligence in K-12 education. The task force will develop policy recommendations for the responsible use of AI by students and educators, assess workforce needs related to AI, and ensure alignment with industry demands. It will consist of twelve members appointed by state officials and will conduct meetings, gather data, and submit reports on its findings and recommendations.

Key Sections

Key Requirements

  • Act to take effect on July 1, 2026.
  • Assess ethical and data privacy implications of AI.
  • Conduct evaluations, assessments, and develop recommendations regarding AI.
  • Conduct evaluations of AI technology in education.
  • Develop policy recommendations on AI use in education.
  • Develops policy recommendations for responsible AI use by educators and students.
  • Establishes a task force to evaluate AI in K-12 education.
  • Meet at least four times between September and December 2026.
  • Members must have expertise in education, technology, AI, ethics, data privacy, and policy.
  • Submit final report by December 15, 2026.
  • Submit interim report by November 15, 2026.
  • Task force consists of 12 members appointed by the Governor, Lieutenant Governor, and Speaker of the House.
  • Task force to be dissolved on January 1, 2027.

Sponsors

Legislative Actions

Date Action
2026-01-19 (S) Referred To Technology;Education

Detailed Analysis

Analysis 1

Why Relevant: The act focuses on developing policy recommendations for the responsible use of AI in an educational setting, which aligns with the user's interest in AI regulation.

Mechanism of Influence: The task force is tasked with creating guidelines and policy frameworks that will likely shape future regulations for AI deployment in schools.

Evidence:

  • Develops policy recommendations for responsible AI use by educators and students.
  • Develop policy recommendations on AI use in education.

Ambiguity Notes: The term 'responsible use' is not defined, leaving room for a wide range of policy interpretations from restrictive to permissive.

Analysis 2

Why Relevant: The legislation mandates the assessment of ethical and data privacy implications of AI technology.

Mechanism of Influence: By requiring an assessment of ethics and privacy, the task force's findings will influence how AI systems are vetted for safety and compliance before use by minors.

Evidence:

  • Assess ethical and data privacy implications of AI.

Ambiguity Notes: The specific ethical frameworks or privacy standards to be used for the assessment are not specified in the text.

Analysis 3

Why Relevant: The act requires the evaluation of AI technology and reporting to government officials, which serves as a form of oversight.

Mechanism of Influence: The task force conducts evaluations and submits interim and final reports to state officials, providing a mechanism for government oversight of AI applications.

Evidence:

  • Conduct evaluations of AI technology in education.
  • Submit final report by December 15, 2026.

Ambiguity Notes: The scope of 'evaluations' is not detailed, so it is unclear if this includes technical audits or just general policy reviews.

Senate - 2437 - Artificial intelligence; define.

Legislation ID: 274081

Bill URL: View Bill

Summary

Senate Bill No. 2437 aims to define artificial intelligence as a machine-based system capable of making predictions, recommendations, or decisions based on human-defined objectives. The bill outlines how AI systems utilize both machine and human inputs to understand environments, create models through analysis, and generate options for actions or information.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-28 (S) Title Suff Do Pass
2026-01-19 (S) Referred To Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill provides the foundational legal definition of AI, which is a prerequisite for any regulatory framework, disclosure requirement, or oversight mechanism.

Mechanism of Influence: By codifying this definition into state law, it establishes the legal scope for what technologies will be subject to future AI-specific regulations, audits, or government oversight in Mississippi.

Evidence:

  • defines artificial intelligence as a system that can perform tasks involving predictions, recommendations, or decisions based on specified human objectives.

Ambiguity Notes: The definition is broad, utilizing terms like 'machine-based system' and 'human-defined objectives,' which could potentially encompass a wide array of traditional software and algorithms beyond modern neural networks.

Senate - 2672 - Mississippi Department of Information Technology Services; bring forward code sections.

Legislation ID: 284391

Bill URL: View Bill

Summary

Senate Bill No. 2672 seeks to bring forward various sections of the Mississippi Code related to information technology services, specifically establishing the Mississippi Department of Information Technology Services (MDITS) as the central authority for state technology procurement and management. The bill also proposes amendments to existing sections, ensuring cohesive planning and cooperation among state agencies for the optimal use of technology resources.

Key Sections

Key Requirements

  • MDITS will provide statewide services for cost-effective information processing and telecommunications.
  • State agencies must cooperate with MDITS to minimize duplication and reduce costs.
  • The authority may establish advisory committees and training programs for state agency personnel.
  • The authority must adopt rules to ensure maximum competition in technology procurement.

Sponsors

Legislative Actions

Date Action
2026-01-19 (S) Referred To Economic and Workforce Development

Detailed Analysis

Analysis 1

Why Relevant: The bill governs the procurement and management of all information technology for state agencies. As AI is a subset of information technology, this department would be the primary body overseeing how AI tools are acquired and utilized by the state government.

Mechanism of Influence: MDITS is empowered to establish rules for competitive procurement and develop statewide plans for technology. This creates the administrative structure through which any future AI-specific procurement standards or usage policies would be implemented.

Evidence:

  • establishing the Mississippi Department of Information Technology Services (MDITS) as the central authority for state technology procurement and management
  • The authority must adopt rules to ensure maximum competition in technology procurement.

Ambiguity Notes: The bill uses the broad term 'information technology' without specific mention of artificial intelligence, machine learning, or automated decision systems, leaving the extent of AI-specific oversight to the department's rule-making authority.

↑ Back to Table of Contents

Missouri

Index of Bills

Senate - 1324 - Creates regulations of artificially generated online content using artificial intelligence

Legislation ID: 235040

Bill URL: View Bill

Summary

This bill introduces the Missouri Artificial Intelligence Transparency and Accountability Act which mandates that AI-generated content must be clearly labeled and logged. It defines key terms related to AI content, outlines requirements for labeling and maintaining usage logs, and establishes enforcement mechanisms through the attorney general. The bill also allows for the creation of rules by the Missouri Department of Commerce and Insurance to ensure compliance and public awareness regarding AI-generated content.

Key Sections

Key Requirements

  • Attorney general to enforce the act.
  • Audio must have verbal disclosures at the start and every two minutes for content over thirty seconds.
  • Content must be labeled as AI-generated.
  • Images/videos must have visible watermarks or overlays indicating AI generation.
  • Include the name of the AI system and developer/deployer.
  • Logs must be retained for a minimum of seven years.
  • Logs must be stored securely with encryption.
  • Logs must include date, time, user identity, input parameters, and output description.
  • Penalties for violations range from $5,000 to $100,000 depending on the nature of the violation.
  • Public awareness campaign to educate residents about AI-generated content.
  • Rules must comply with chapter 536.
  • Text labels must be at the beginning or in a visible header/footer.
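
Taken together, the logging bullets above describe a record schema plus a retention rule. A minimal sketch in Python (the class and field names are hypothetical, not drawn from the bill, and the encryption and secure-storage requirements are omitted):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record structure; fields mirror the bill's required
# log contents (date/time, user identity, inputs, output description).
@dataclass
class AIUsageLogEntry:
    timestamp: datetime          # date and time of generation
    user_identity: str           # identity of the requesting user
    input_parameters: str        # prompt or parameters supplied
    output_description: str      # description of the generated content
    ai_system: str               # name of the AI system
    developer_or_deployer: str   # responsible developer or deployer

# Seven-year minimum retention, approximated as 7 x 365 days.
RETENTION_PERIOD = timedelta(days=7 * 365)

def is_retention_satisfied(entry: AIUsageLogEntry, deleted_at: datetime) -> bool:
    """True if the entry was kept for at least the seven-year minimum."""
    return deleted_at - entry.timestamp >= RETENTION_PERIOD
```

The schema makes the audit-trail point from the analysis concrete: each generation event yields one structured record that can later support enforcement.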

Sponsors

Legislative Actions

Date Action
2026-01-27 Second Read and Referred S General Laws Committee
2026-01-07 S First Read
2025-12-01 Prefiled

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the user's interest in requiring disclosures for AI-generated content.

Mechanism of Influence: It mandates specific disclosure formats for different media types, including verbal disclosures for audio, watermarks for images and video, and text labels for written content, ensuring the public is aware when content is AI-generated.

Evidence:

  • This section requires that all AI-generated content for public consumption be labeled clearly as AI-generated
  • Audio must have verbal disclosures at the start and every two minutes for content over thirty seconds.
  • Images/videos must have visible watermarks or overlays indicating AI generation.

Ambiguity Notes: The term 'public consumption' is used but not fully defined in the abstract, which could impact the scope of which AI-generated materials require labeling.
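
The audio cadence cited above (a verbal disclosure at the start, then every two minutes for content over thirty seconds) can be expressed as a simple schedule. This is one reading of the requirement; the function name is illustrative:

```python
def audio_disclosure_times(duration_seconds: float) -> list[float]:
    """Times (in seconds) at which a verbal AI disclosure would be due:
    once at the start, and every two minutes thereafter for content
    longer than thirty seconds."""
    times = [0.0]                 # disclosure at the start, always
    if duration_seconds > 30:
        t = 120.0                 # then every two minutes
        while t < duration_seconds:
            times.append(t)
            t += 120.0
    return times
```

For a five-minute clip this yields disclosures at 0, 120, and 240 seconds; a twenty-second clip needs only the opening disclosure.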

Analysis 2

Why Relevant: The bill aligns with the user's interest in AI regulation and oversight through record-keeping requirements.

Mechanism of Influence: By requiring developers and deployers to maintain usage logs for seven years, including user identity and input/output descriptions, the law creates an audit trail for government oversight and accountability.

Evidence:

  • This section mandates that developers and deployers maintain detailed usage logs for all AI-generated content
  • Logs must be retained for a minimum of seven years.
  • Logs must include date, time, user identity, input parameters, and output description.

Ambiguity Notes: While the bill requires logs, it does not explicitly mention 'audits' by third parties, though the logs serve as the primary data source for such oversight.

Analysis 3

Why Relevant: The bill establishes the regulatory and enforcement mechanisms requested by the user.

Mechanism of Influence: It grants the Attorney General the power to enforce the act and impose penalties up to $100,000 per violation, while also allowing for private civil actions.

Evidence:

  • Attorney general to enforce the act.
  • Penalties for violations range from $5,000 to $100,000 depending on the nature of the violation.

Ambiguity Notes: None

Senate - 1395 - Modifies provisions relating to the unauthorized practice of law as it relates to the use of artificial intelligence

Legislation ID: 235111

Bill URL: View Bill

Summary

This bill repeals the existing section 484.020 and enacts a new provision that prohibits individuals and entities from engaging in the practice of law without proper licensing. It specifically addresses the unauthorized provision of legal services, including those facilitated by artificial intelligence, and establishes penalties for violations.

Key Sections

Key Requirements

  • Allows for treble damages for services rendered in violation of the law.
  • Establishes that unauthorized practice of law is subject to injunctive relief.
  • Prohibits associations, partnerships, limited liability companies, and corporations from practicing law unless they are organized under specific legal frameworks.
  • Requires individuals and entities to be licensed to practice law in Missouri.
  • Requires the Attorney General or local prosecutors to pursue recovery of treble damages.
  • Violators are guilty of a misdemeanor and may face fines up to one hundred dollars.

Sponsors

Legislative Actions

Date Action
2026-01-27 Second Read and Referred S General Laws Committee
2026-01-07 S First Read
2025-12-02 Prefiled

Detailed Analysis

Analysis 1

Why Relevant: The legislation specifically targets the use of artificial intelligence in the delivery of legal services, categorizing unauthorized AI-driven legal assistance as a violation of law.

Mechanism of Influence: By explicitly mentioning AI, the bill subjects AI developers and platforms providing legal tools to the same licensing requirements and penalties as human practitioners, effectively regulating the commercial deployment of legal AI in the state.

Evidence:

  • It specifically addresses the unauthorized provision of legal services, including those facilitated by artificial intelligence, and establishes penalties for violations.

Ambiguity Notes: The phrase 'facilitated by artificial intelligence' is not strictly defined, which could lead to broad interpretations covering a wide range of software from basic document automation to advanced generative AI legal advice.

Senate - 1444 - Creates provisions relating to artificial intelligence in mental health

Legislation ID: 235160

Bill URL: View Bill

Summary

This bill introduces a new section to chapter 407 of the Missouri statutes, defining artificial intelligence and outlining the legal implications for entities that develop or deploy AI in mental health contexts. It prohibits advertising AI as capable of providing therapy services and establishes penalties for violations.

Key Sections

Key Requirements

  • First violation incurs a penalty of $10,000.
  • Prohibits the representation of AI as a mental health professional.
  • Subsequent violations incur a penalty of $20,000.

Sponsors

Legislative Actions

Date Action
2026-01-07 S First Read
2025-12-16 Prefiled

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly addresses the regulation and disclosure requirements for AI deployment, specifically targeting the misrepresentation of AI capabilities in a professional context.

Mechanism of Influence: It imposes a legal prohibition on advertising AI as a mental health professional or therapy provider and establishes a civil penalty structure for enforcement by the Attorney General.

Evidence:

  • Prohibits individuals or entities from advertising AI as a mental health professional or capable of providing therapy.
  • The attorney general is responsible for enforcing this section and can take civil action against violators.

Ambiguity Notes: The term 'providing therapy' may be subject to interpretation regarding whether it encompasses non-clinical wellness support or emotional support chatbots.

Senate - 1474 - Creates provisions relating to artificial intelligence

Legislation ID: 250869

Bill URL: View Bill

Summary

This bill introduces the "AI Non-Sentience and Responsibility Act", which clarifies that artificial intelligence systems are non-sentient entities and outlines the legal responsibilities of developers, manufacturers, and owners of AI. It stipulates that AI cannot hold legal personhood, be a party to a marriage, or be appointed to corporate roles, and it establishes liability for harm caused by AI systems, ensuring that responsibility remains with human actors.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-07 S First Read
2025-12-29 Prefiled

Detailed Analysis

Analysis 1

Why Relevant: The bill imposes regulatory requirements on AI developers and owners regarding safety and reporting.

Mechanism of Influence: It requires mandatory incident reporting to authorities and the implementation of safety mechanisms like regular risk assessments.

Evidence:

  • In cases of significant harm caused by AI, developers or owners must notify authorities and comply with investigations
  • Developers and owners must prioritize safety in AI design and operation, including regular risk assessments to mitigate harm.

Ambiguity Notes: The term 'significant harm' is not explicitly defined, which may lead to varying interpretations of when reporting is required.

Analysis 2

Why Relevant: It defines the legal boundaries and accountability structures for AI technology.

Mechanism of Influence: By denying AI legal personhood and establishing strict liability for human actors, it ensures that AI usage remains under human oversight and control.

Evidence:

  • AI systems are declared non-sentient and cannot possess legal personhood
  • Owners and users of AI are responsible for any harm caused by the AI's operations, with developers and manufacturers liable only under specific conditions related to defects.

Ambiguity Notes: The conditions under which developers are liable for 'defects' versus owner liability for 'operations' may require further legal clarification.

Senate - 859 - Creates provisions relating to artificial intelligence

Legislation ID: 234575

Bill URL: View Bill

Summary

This bill introduces the AI Non-Sentience and Responsibility Act, which defines artificial intelligence and clarifies that AI systems are non-sentient and cannot possess legal personhood. It establishes that owners and developers are responsible for any harm caused by AI systems and outlines the legal implications of AI-related incidents, including liability and oversight requirements. The bill is set to take effect on August 28, 2026.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-08 Second Read and Referred S General Laws Committee
2026-01-07 S First Read
2025-12-01 Prefiled

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes mandatory oversight and reporting requirements for AI systems.

Mechanism of Influence: It requires owners to maintain active oversight and mandates that developers or owners report severe incidents to authorities, creating a regulatory compliance loop.

Evidence:

  • Developers or owners must report severe incidents involving AI to the authorities.
  • Owners must maintain oversight of AI systems to prevent harm, with failure potentially leading to liability.

Ambiguity Notes: The term 'severe incidents' is not explicitly defined in the abstract, which could lead to inconsistent reporting standards.

Analysis 2

Why Relevant: The legislation requires the implementation of safety protocols by AI stakeholders.

Mechanism of Influence: By requiring safety mechanisms to mitigate risks, the law forces developers to integrate risk-management features into the AI lifecycle.

Evidence:

  • Developers and owners must implement safety measures to mitigate risks associated with AI systems.

Ambiguity Notes: The abstract does not specify what constitutes an acceptable 'safety measure,' leaving technical requirements to future regulation or court interpretation.

Analysis 3

Why Relevant: The bill addresses liability and the legal status of AI, preventing entities from using AI 'autonomy' to evade regulation.

Mechanism of Influence: It ensures that liability cannot be waived by claiming an AI is 'ethically trained' or 'aligned,' maintaining a strict chain of human responsibility.

Evidence:

  • AI systems cannot bear fault or liability; responsibility lies with human actors.
  • Labeling an AI as aligned or ethically trained does not reduce liability for harm caused.

Ambiguity Notes: The 'specific conditions' under which developers and manufacturers are held liable versus owners are not detailed.

↑ Back to Table of Contents

Nebraska

Index of Bills

Senate - 1083 - Adopt the Transparency in Artificial Intelligence Risk Management Act, create a fund, and change provisions relating to records which may be withheld from the public

Legislation ID: 263260

Bill URL: View Bill

Summary

LB1083 introduces the Transparency in Artificial Intelligence Risk Management Act, which aims to address the potential risks associated with artificial intelligence technologies, particularly in relation to child safety and catastrophic risks. The bill mandates that large frontier developers and chatbot providers create and publish detailed safety and risk management plans, report safety incidents, and implement necessary safeguards to protect the public, especially minors, from the risks posed by AI systems.

Key Sections

Key Requirements

  • Mandates immediate disclosure of imminent risks to appropriate authorities, with reporting within 24 hours.
  • Mandates the incorporation of national and international standards into safety and risk management plans.
  • Requires large chatbot providers to assess and mitigate child safety risks.
  • Requires large frontier developers to define and assess thresholds for catastrophic risks.
  • Requires reporting of child safety incidents within fifteen days.
  • Requires reporting of critical safety incidents within fifteen days.
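
The reporting windows above amount to deadline arithmetic. A rough sketch (the incident-type labels are illustrative, not statutory terms):

```python
from datetime import datetime, timedelta

# Reporting windows as listed above; the keys are illustrative labels.
REPORTING_WINDOWS = {
    "imminent_risk": timedelta(hours=24),
    "critical_safety_incident": timedelta(days=15),
    "child_safety_incident": timedelta(days=15),
}

def reporting_deadline(incident_type: str, discovered_at: datetime) -> datetime:
    """Latest time by which the incident must be reported to the
    Attorney General under the windows above."""
    return discovered_at + REPORTING_WINDOWS[incident_type]
```

As the analysis notes, which window applies turns on how "imminent risk" is distinguished from a "critical safety incident" in the final text.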

Sponsors

Legislative Actions

Date Action
2026-01-23 Notice of hearing for February 09, 2026
2026-01-20 Referred to Banking, Commerce and Insurance Committee
2026-01-16 Kauth FA742 filed
2026-01-15 Date of introduction

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates artificial intelligence by imposing transparency and risk management requirements on developers.

Mechanism of Influence: It forces large frontier developers and chatbot providers to formalize, publish, and adhere to safety and risk management plans.

Evidence:

  • Large frontier developers and chatbot providers must create and publish a public safety and child protection plan detailing how they assess and mitigate risks associated with their AI systems.

Ambiguity Notes: The term 'large frontier developers' and 'large chatbot providers' may require specific technical or market-share definitions to determine which entities are captured.

Analysis 2

Why Relevant: The bill specifically addresses child safety and the protection of minors in the context of AI usage.

Mechanism of Influence: Chatbot providers are explicitly required to assess potential child safety risks as part of their mandatory protection plans.

Evidence:

  • Requires large chatbot providers to assess potential child safety risks.
  • Legislature acknowledges the benefits and risks of artificial intelligence, emphasizing the need for transparency in how AI companies manage risks, particularly those affecting minors.

Ambiguity Notes: The specific 'national and international standards' to be incorporated are not named, leaving room for interpretation on which benchmarks apply.

Analysis 3

Why Relevant: It establishes a government oversight mechanism through mandatory incident reporting.

Mechanism of Influence: Developers must report safety incidents to the Attorney General, with extremely short windows (24 hours) for imminent risks.

Evidence:

  • This section mandates that developers and providers report safety incidents to the Attorney General within specified timeframes
  • Requires immediate reporting of imminent risks within 24 hours.

Ambiguity Notes: The definition of 'imminent risks' versus 'critical safety incidents' could impact the urgency and volume of reports submitted.

Senate - 1119 - Change provisions relating to the collection and use of personal data and provide additional duties and prohibitions for a covered online service under the Age-Appropriate Online Design Code Act

Legislation ID: 270238

Bill URL: View Bill

Summary

Legislative Bill 1119 amends the Age-Appropriate Online Design Code Act to redefine terms related to online services and minors, change provisions regarding the collection and use of personal data, and impose additional duties and prohibitions on covered online services. It seeks to ensure that online environments are safer for minors by restricting targeted advertising, data collection practices, and the use of manipulative design features.

Key Sections

Key Requirements

  • Collect and use only the minimum necessary personal data from minors.
  • Honor account-deletion requests within fifteen days.
  • Must not facilitate advertisements for prohibited products (narcotics, tobacco, gambling, and alcohol) to minors.
  • Must not prompt minors to make privacy settings less protective unless necessary for access.
  • Must not provide a single setting that makes all default privacy settings less protective.
  • Must not use dark patterns that impair the decision-making and autonomy of minors.
  • Must provide a tool for minors to request account deletion.
  • Prohibit notifications between 10 p.m. and 6 a.m., and between 8 a.m. and 4 p.m. on school days.
  • Prohibit targeted advertising to minors.
  • Provide clear notifications when collecting precise geolocation information.
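
The notification restriction above can be read as two quiet-hour windows. A sketch under that reading (whether the overnight window applies only on school days is ambiguous in the bill text; this version applies it every day):

```python
from datetime import time

def notification_allowed(local_time: time, is_school_day: bool) -> bool:
    """One reading of the quiet-hour rule: notifications to minors are
    blocked overnight (10 p.m.-6 a.m.) every day, and additionally
    during school hours (8 a.m.-4 p.m.) on school days."""
    overnight = local_time >= time(22, 0) or local_time < time(6, 0)
    school_hours = is_school_day and time(8, 0) <= local_time < time(16, 0)
    return not (overnight or school_hours)
```

Under this reading, a 7 a.m. notification is permitted even on a school day, while a noon notification is blocked on school days only.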

Sponsors

Legislative Actions

Date Action
2026-01-23 Notice of hearing for February 09, 2026
2026-01-21 Referred to Banking, Commerce and Insurance Committee
2026-01-20 Kauth FA778 filed
2026-01-16 Date of introduction

Detailed Analysis

Analysis 1

Why Relevant: The legislation targets online service design and data practices for minors, which directly impacts AI-driven platforms, recommendation engines, and algorithmic advertising.

Mechanism of Influence: The prohibition of dark patterns and manipulative design features restricts how AI algorithms can be used to influence minor behavior or engagement. Additionally, data minimization requirements limit the training data available from minor users for AI models.

Evidence:

  • Prohibits the use of dark patterns that impair the decision-making of minors.
  • Prohibit targeted advertising to minors.
  • Only collect the minimum necessary personal data from minors.

Ambiguity Notes: The bill applies to 'covered online services,' a broad category that includes AI-powered social media and applications, though it does not explicitly mention 'Artificial Intelligence' by name.

Senate - 204 - Adopt the Biometric Autonomy Liberty Law

Legislation ID: 121519

Bill URL: View Bill

Summary

This bill establishes the Biometric Autonomy Liberty Law, which recognizes biometric data as personal property of the individual from whom it is collected. It outlines definitions related to biometric data, the responsibilities of entities that collect or process such data, and the rights of individuals regarding their biometric information. The law aims to enhance security and privacy protections in light of the increasing use of biometric technology in various sectors.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-07 Title printed. Carryover bill
2025-01-29 Notice of hearing for March 17, 2025
2025-01-16 Referred to Banking, Commerce and Insurance Committee
2025-01-14 Date of introduction

Detailed Analysis

Analysis 1

Why Relevant: Biometric data is a foundational component of many AI systems, including facial recognition, voice analysis, and gait detection, making the regulation of this data a primary method of governing AI applications.

Mechanism of Influence: AI developers and operators acting as 'controllers' or 'processors' would be legally required to obtain written consent and provide disclosures before using biometric datasets for training or deploying AI models.

Evidence:

  • Entities must obtain written consent from individuals before collecting or possessing their biometric data, specifying the purpose and duration.
  • Defines key terms used in the act, including biometric data, controller, processor, and others relevant to the handling of biometric information.

Ambiguity Notes: The law does not explicitly use the term 'Artificial Intelligence,' but its definitions of 'processor' and 'biometric data' are broad enough to encompass the algorithmic processing of physical and behavioral characteristics common in AI.

Analysis 2

Why Relevant: The bill addresses the user's interest in disclosures and oversight by requiring entities to specify the purpose and duration of biometric data usage.

Mechanism of Influence: This provision forces transparency on how AI-driven biometric systems are utilized, preventing the 'black box' collection of data for undisclosed algorithmic purposes.

Evidence:

  • Entities must obtain written consent from individuals before collecting or possessing their biometric data, specifying the purpose and duration.
  • Outlines conditions under which a processor may disclose biometric data, emphasizing the need for consent or legal requirements.

Ambiguity Notes: While it requires disclosure of purpose, it does not specifically mandate the disclosure of AI model weights or technical audits of the algorithms themselves.

Senate - 642 - Adopt the Artificial Intelligence Consumer Protection Act

Legislation ID: 122219

Bill URL: View Bill

Summary

The Artificial Intelligence Consumer Protection Act is designed to protect consumers from algorithmic discrimination by setting forth requirements for developers and deployers of high-risk artificial intelligence systems. It outlines definitions, responsibilities, and documentation requirements to ensure compliance with anti-discrimination laws. The act mandates developers to disclose known risks and implement risk management policies, while deployers must conduct impact assessments and use reasonable care in their deployment of such systems.

Key Sections

Key Requirements

  • Assessments must analyze the deployment context and intended benefits of the AI system.
  • Deployers must complete impact assessments for each high-risk AI system deployed.
  • Deployers must use reasonable care to protect consumers from known risks.
  • Developers must disclose known risks of algorithmic discrimination.
  • Developers must disclose risks of algorithmic discrimination without unreasonable delay.
  • Developers must make documentation available to assist deployers in understanding the AI systems outputs.
  • Developers must provide documentation detailing the use and limitations of the AI system.
  • Impact assessments must be completed within 90 days of deployment or modification.
  • Requires adherence to the AI Risk Management Framework or ISO/IEC 42001 for compliance.
  • Requires high-risk AI systems used by federal agencies to comply with specific provisions of the act.
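
The 90-day assessment window above is a simple deadline running from either deployment or modification of the system. A minimal sketch (the function name is illustrative):

```python
from datetime import date, timedelta

def impact_assessment_due(trigger_date: date) -> date:
    """Due date for a high-risk AI impact assessment: within 90 days
    of the system's deployment or of a modification to it."""
    return trigger_date + timedelta(days=90)
```

Note that a modification restarts the clock, so an actively maintained system would face recurring assessment deadlines.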

Sponsors

Legislative Actions

Date Action
2026-01-07 Title printed. Carryover bill
2025-01-28 Notice of hearing for February 06, 2025
2025-01-24 Referred to Judiciary Committee
2025-01-22 Date of introduction

Detailed Analysis

Analysis 1

Why Relevant: The act directly regulates developers and deployers of high-risk AI systems, aligning with the user's interest in AI regulation.

Mechanism of Influence: It imposes a legal duty of reasonable care and mandates specific risk management policies for entities involved in AI development and deployment.

Evidence:

  • Developers of high-risk artificial intelligence systems must use reasonable care to protect consumers from algorithmic discrimination and provide necessary documentation to deployers.
  • Deployers of high-risk artificial intelligence systems must implement risk management policies and conduct impact assessments to mitigate algorithmic discrimination risks.

Ambiguity Notes: The term 'reasonable care' is a legal standard that may be subject to judicial interpretation rather than technical specification.

Analysis 2

Why Relevant: The legislation requires impact assessments, which serve as a mandatory audit and oversight mechanism.

Mechanism of Influence: Deployers are legally required to complete impact assessments within 90 days of deployment or modification to evaluate the system's effects.

Evidence:

  • Deployers are required to conduct impact assessments for high-risk AI systems to evaluate their effects and ensure compliance with the act.
  • Impact assessments must be completed within 90 days of deployment or modification.

Ambiguity Notes: The summary does not specify if these assessments must be submitted to a central government authority or kept for internal compliance.

Analysis 3

Why Relevant: The act mandates disclosures and documentation regarding AI system risks and outputs.

Mechanism of Influence: Developers must provide documentation detailing the use, limitations, and known risks of algorithmic discrimination to deployers.

Evidence:

  • Developers must disclose risks of algorithmic discrimination without unreasonable delay.
  • Developers must provide documentation detailing the use and limitations of the AI system.

Ambiguity Notes: The requirement to disclose risks 'without unreasonable delay' is a subjective timeframe.

Analysis 4

Why Relevant: It addresses government oversight of AI systems used by federal agencies.

Mechanism of Influence: It removes exemptions for federal agencies when using high-risk AI systems that impact critical areas like employment or housing.

Evidence:

  • Requires high-risk AI systems used by federal agencies to comply with specific provisions of the act.

Ambiguity Notes: None

Senate - 939 - Adopt the Saving Human Connection Act

Legislation ID: 248403

Bill URL: View Bill

Summary

The Saving Human Connection Act establishes regulations for covered platforms that operate generative artificial intelligence systems. It defines key terms, outlines responsibilities for platforms to protect users, especially minors, and mandates transparency regarding the non-human nature of chatbots. The act also provides for enforcement mechanisms and civil penalties for violations.

Key Sections

Key Requirements

  • Adults or guardians can file civil actions for damages between $100 and $10,000 per incident, or seek injunctive relief.
  • Attorney General to enforce the act; violators may face injunctions and civil penalties up to $10,000 per violation.
  • Avoid conflicts with user interests regarding user data and third-party data access.
  • Clearly identify human-like features as artificial.
  • Consider the best interests of users when personalizing content.
  • Ensure chatbots with human-like features are not accessible to minors.
  • Implement age verification systems.
  • Include regular disclosures about the non-human nature of chatbots.
  • Limit data collection to what is necessary, in the best interest of users, and for legitimate purposes.
  • Maintain systems to detect and respond to emergency situations, prioritizing user safety.
  • Notify users of material changes to the terms of the agreement.
  • Prevent emotional dependence on chatbots.
  • Present terms of service in clear and understandable language, outlining platform obligations and user rights.
  • Provide a default version of the platform without human-like features.
  • Require affirmative user consent before the agreement takes effect.

Sponsors

Legislative Actions

Date Action
2026-01-28 Notice of hearing for February 17, 2026 (canceled)
2026-01-23 Notice of hearing for February 17, 2026
2026-01-13 Referred to Banking, Commerce and Insurance Committee
2026-01-09 Date of introduction
2026-01-09 Kauth FA563 filed
2026-01-09 Murman FA564 filed
2026-01-09 Murman FA565 filed

Detailed Analysis

Analysis 1

Why Relevant: The act explicitly mandates age verification for accessing specific AI features.

Mechanism of Influence: Platforms must implement systems to ensure minors cannot access chatbots with human-like features, effectively restricting AI usage based on age.

Evidence:

  • Implement age verification systems.
  • Ensure chatbots with human-like features are not accessible to minors.

Ambiguity Notes: The specific technical standards for age verification are not defined, leaving implementation details to the platforms or future regulation.

Analysis 2

Why Relevant: The legislation requires transparency and disclosures regarding the nature of AI interactions.

Mechanism of Influence: Covered platforms are legally obligated to inform users that they are interacting with an artificial system rather than a human.

Evidence:

  • Clearly identify human-like features as artificial.
  • Include regular disclosures about the non-human nature of chatbots.

Ambiguity Notes: The term 'regular disclosures' is broad and does not specify the frequency or format of these notifications.

Analysis 3

Why Relevant: The act regulates the design and output of generative AI to protect user psychological well-being.

Mechanism of Influence: It imposes a legal duty on AI developers to prevent their systems from causing emotional dependence and to prioritize user safety in emergency detections.

Evidence:

  • Prevent emotional dependence on chatbots.
  • Consider the best interests of users when personalizing content.
  • Detect and respond to emergency situations prioritizing user safety.

Ambiguity Notes: Terms like 'emotional dependence' and 'best interests' are subjective and may be difficult to measure or enforce without specific metrics.

Senate - 978 - Provide for civil actions for conduct relating to obscene material, child sexual abuse material, and child sexual exploitation devices

Legislation ID: 253730

Bill URL: View Bill

Summary

LB978 introduces legal measures to combat the distribution and possession of prohibited content related to child sexual abuse and exploitation. It defines terms, outlines civil actions that can be taken against violators, and specifies the roles of the Attorney General and county attorneys. The bill also establishes civil penalties for violations and ensures that certain legal protections are in place for judges and attorneys acting in good faith.

Key Sections

Key Requirements

  • Adults cannot sue if they intentionally viewed child sexual abuse material.
  • Civil penalties collected are remitted to the State Treasurer.
  • Civil penalties may be up to $10,000 per violation.
  • Individuals exposed to prohibited content, including minors, may file civil actions against violators.
  • Judges and attorneys acting in good faith in court proceedings or client representation qualify for immunity.
  • Plaintiffs are not subject to contributory negligence.
  • Prohibits intentional access to, or facilitation of access to, prohibited content on public websites.
  • Prohibits the creation, distribution, or dissemination of prohibited content online.
  • Prohibits knowingly buying, selling, or possessing child sexual exploitation devices.
  • The Attorney General may seek equitable relief against violators.
  • Victims can seek damages and equitable relief against violators.

Sponsors

Legislative Actions

Date Action
2026-01-14 Referred to Judiciary Committee
2026-01-13 Kauth FA634 filed
2026-01-12 Date of introduction

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates the 'creation' of prohibited content and 'child sexual exploitation devices,' which in contemporary legislation often includes AI-generated synthetic media and the software or models used to generate them.

Mechanism of Influence: By prohibiting the creation and distribution of prohibited content online and regulating exploitation devices, the law creates civil liability and penalties for the use of AI tools to generate illegal material.

Evidence:

  • Prohibits the distribution or creation of prohibited content online.
  • Regulation of Child Sexual Exploitation Devices
  • This section defines key terms related to the bill, including child sexual abuse material, Internet utility, and prohibited content.

Ambiguity Notes: The abstract mentions definitions for 'child sexual abuse material' and 'child sexual exploitation devices' but does not explicitly detail the technical scope; however, these terms are frequently used in state legislation to encompass AI-generated content.

↑ Back to Table of Contents

New Hampshire

Index of Bills

House - 1124 - relative to the right to compute.

Legislation ID: 235399

Bill URL: View Bill

Summary

This bill introduces the Right to Compute Act, which asserts the rights of individuals to acquire, possess, and utilize computational resources for lawful purposes. It prohibits government entities from imposing restrictions on these rights unless such restrictions are necessary to serve a compelling government interest. The bill also defines key terms related to computational resources and government actions, and emphasizes the preservation of intellectual property rights.

Key Sections

Key Requirements

  • Prohibits government restrictions on ownership and use of computational resources unless necessary for a compelling government interest.

Sponsors

Legislative Actions

Date Action
2026-01-29 Public Hearing: 01/29/2026 10:30 am GP 229
2026-01-07 Introduced 01/07/2026 and referred to Commerce and Consumer Affairs
2026-01-07 To Be Introduced 01/07/2026 and referred to Commerce and Consumer Affairs

Detailed Analysis

Analysis 1

Why Relevant: The bill directly impacts the infrastructure required for Artificial Intelligence. By protecting the 'Right to Compute,' it creates a barrier against regulations that might seek to limit AI development through hardware restrictions or compute-usage monitoring.

Mechanism of Influence: It would likely prevent government entities from imposing arbitrary caps on the amount of compute used for training AI models or requiring licenses to own high-performance computing hardware, unless the government can prove a compelling interest.

Evidence:

  • This section asserts that no government entity may restrict the private ownership or use of computational resources for lawful purposes unless it is necessary to achieve a compelling government interest.
  • emphasizing the fundamental rights related to property, free expression, and privacy in the context of technological tools and computational resources.

Ambiguity Notes: The term 'computational resources' is broad and likely includes the GPUs and specialized chips used for AI. The 'compelling government interest' clause is the primary loophole through which AI-specific safety regulations might still be enacted.

House - 1265 - prohibiting the construction of data centers in the state and establishing a committee to study the environmental impact of data centers.

Legislation ID: 235540

Bill URL: View Bill

Summary

This bill establishes a moratorium on the construction of data centers in New Hampshire for one year and creates a committee to investigate the environmental effects of such facilities. The committee will consist of members from the House and Senate, tasked with reporting their findings and recommendations for legislation.

Key Sections

Key Requirements

  • A quorum shall consist of three members.
  • Members will receive mileage compensation for attending meetings.
  • No new data centers can be constructed in the state during the moratorium period.
  • The act will take effect 60 days after its passage.
  • The committee must report findings by November 1, 2025, and a final report by November 1, 2026.
  • The committee shall elect a chairperson from among its members.
  • The committee will consist of three House members and one Senate member.

Sponsors

Legislative Actions

Date Action
2026-01-07 To Be Introduced 01/07/2026 and referred to Commerce and Consumer Affairs

Detailed Analysis

Analysis 1

Why Relevant: Data centers serve as the critical physical infrastructure required for the development, training, and deployment of large-scale artificial intelligence models.

Mechanism of Influence: By prohibiting the construction of new data centers for one year, the bill restricts the expansion of the compute capacity available for AI operations within the state.

Evidence:

  • Prohibits the construction of new data centers in New Hampshire for one year from the effective date of the act.
  • Establishes a committee to study the environmental impact of data centers in New Hampshire.

Ambiguity Notes: The bill does not explicitly mention artificial intelligence, focusing instead on environmental impacts; however, data center regulation is a primary bottleneck for AI industry growth.

House - 1506 - creating an exception to the restricted uses of artificial intelligence by state agencies.

Legislation ID: 235781

Bill URL: View Bill

Summary

This legislation introduces a framework for state agency heads to apply for exceptions to the restrictions on artificial intelligence usage established under RSA 5-D. It mandates the creation of a procedure by the Department of Information Technology for processing these requests, which will ultimately require approval from the executive council.

Key Sections

Key Requirements

  • Requests must comply with requirements set by the Department of Information Technology.
  • Requests must specify the agency's purpose for needing the exception.

Sponsors

Legislative Actions

Date Action
2026-01-29 Public Hearing: 01/29/2026 01:00 pm GP 231
2026-01-07 Introduced 01/07/2026 and referred to Executive Departments and Administration
2026-01-07 To Be Introduced 01/07/2026 and referred to Executive Departments and Administration

Detailed Analysis

Analysis 1

Why Relevant: The bill directly concerns the governance and regulatory oversight of artificial intelligence usage within state government entities.

Mechanism of Influence: It creates a legal pathway for agencies to bypass standard AI restrictions, subject to administrative review and executive approval, thereby influencing how AI is deployed and controlled at the state level.

Evidence:

  • This legislation introduces a framework for state agency heads to apply for exceptions to the restrictions on artificial intelligence usage established under RSA 5-D.
  • The executive council is responsible for reviewing and deciding on the acceptance or denial of exception requests.

Ambiguity Notes: The text refers to RSA 5-D but does not detail the specific AI restrictions being exempted, leaving the scope of the exceptions dependent on the underlying law.

House - 1582 - prohibiting the use of credit information in underwriting and rating personal automobile and homeowners insurance policies and prohibiting certain surveillance practices by insurers.

Legislation ID: 235857

Bill URL: View Bill

Summary

This bill seeks to eliminate the use of credit history and scores in the insurance underwriting process for personal automobile and homeowners policies. It also prohibits insurers from using drones, satellites, or other forms of surveillance without explicit permission from property owners. The bill aims to prevent unfair discrimination against consumers and to safeguard their privacy rights.

Key Sections

Key Requirements

  • Any violation is considered an unfair method of competition.
  • Defines consumer report, credit history, and credit score.
  • Insurers must not base any underwriting or rating decisions on credit information.
  • Insurers must obtain express written permission from property owners for drone surveillance.
  • Permission granted must be voluntary and can be revoked at any time.
  • Prohibits insurers from using credit information in insurance underwriting and rating.
  • Prohibits use of satellite imagery for adverse actions against individual policyholders.
  • Repeals RSA 412:15, III.

Sponsors

Legislative Actions

Date Action
2026-01-28 Public Hearing: 01/28/2026 02:00 pm GP 229
2026-01-07 Introduced 01/07/2026 and referred to Commerce and Consumer Affairs
2026-01-07 To Be Introduced 01/07/2026 and referred to Commerce and Consumer Affairs

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates algorithmic inputs and automated data collection methods used in insurance underwriting.

Mechanism of Influence: By banning credit scores and restricting drone/satellite imagery, the bill limits the data sources and automated models insurers can use for risk assessment, which frequently involve AI or machine learning components.

Evidence:

  • Insurers must not base any underwriting or rating decisions on credit information.
  • This section prohibits insurers from using drones for surveillance of private property without the property owners written consent
  • This provision bans the use of satellite imagery or commercial surveillance products in underwriting or rating homeowners insurance policies

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its restrictions on credit scoring models and automated surveillance imagery directly impact the deployment of AI-driven underwriting tools.

House - 1725 - relative to the regulation of artificial intelligence technologies.

Legislation ID: 236000

Bill URL: View Bill

Summary

This bill introduces a new chapter in New Hampshire law dedicated to the governance of artificial intelligence. It defines key terms related to AI, outlines the applicability of the regulations, establishes an artificial intelligence council, and sets forth general duties and prohibitions for entities involved with AI systems. Additionally, it creates a regulatory sandbox for testing AI innovations and details enforcement mechanisms for violations of the law.

Key Sections

Key Requirements

  • 60-day notice for violations before penalties apply.
  • Applies to any entity conducting business related to AI in New Hampshire.
  • Attorney general to enforce the chapter.
  • Clear disclosure to consumers when using AI systems.
  • Council to advise on AI development and oversight.
  • Council to include 7 members with expertise in relevant fields.
  • No unlawful discrimination through AI systems.
  • No use of AI for social scoring or manipulation.
  • Oversight and quarterly reporting required.
  • Participants can test AI systems for up to 36 months.
  • Supersedes local regulations unless expressly authorized.

Sponsors

Legislative Actions

Date Action
2026-01-22 Subcommittee Work Session: 01/22/2026 01:15 pm GP 229
2026-01-15 Public Hearing: 01/15/2026 11:00 am GP 229
2026-01-07 Introduced 01/07/2026 and referred to Commerce and Consumer Affairs HJ 1
2026-01-07 To Be Introduced 01/07/2026 and referred to Commerce and Consumer Affairs HJ 1

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes formal oversight and governance structures for AI development and use.

Mechanism of Influence: It creates a New Hampshire artificial intelligence council to advise on ethical practices and oversight, and grants the attorney general enforcement authority.

Evidence:

  • This section establishes the New Hampshire artificial intelligence council, detailing its composition and responsibilities, including advising on ethical AI practices
  • Attorney general to enforce the chapter.

Ambiguity Notes: None

Analysis 2

Why Relevant: The legislation mandates transparency through disclosure requirements.

Mechanism of Influence: Entities are required to provide clear disclosure to consumers when using AI systems, ensuring users are aware of AI involvement.

Evidence:

  • Clear disclosure to consumers when using AI systems.

Ambiguity Notes: The specific format and timing of 'clear disclosure' may require further regulatory definition.

Analysis 3

Why Relevant: The bill imposes specific prohibitions on AI applications and requires reporting for experimental systems.

Mechanism of Influence: It bans AI use for social scoring and manipulation while requiring quarterly reporting for entities operating within the regulatory sandbox.

Evidence:

  • No use of AI for social scoring or manipulation.
  • Oversight and quarterly reporting required.

Ambiguity Notes: None

↑ Back to Table of Contents

New Jersey

Index of Bills

Assembly - 6309 - Establishes "Privacy Protection Act"; concerns collection and sharing of certain personal information.

Legislation ID: 250372

Bill URL: View Bill

Summary

This bill establishes the Privacy Protection Act, which prohibits the collection and sharing of certain personal information, such as immigration status and social security numbers, by government entities and health care facilities. It aims to protect individuals' privacy interests and ensure that their data is not shared without consent, while also outlining specific conditions under which data may be collected or disclosed.

Key Sections

Key Requirements

  • Entities must review and update confidentiality policies within one year of the act's effective date.
  • Individuals can pursue civil action for unauthorized use of their information.
  • Limits retention of collected information to the time necessary for service administration.
  • Prohibits collection of personal information unless necessary for public service eligibility.
  • Prohibits sale or sharing of license plate recognition data except under specific legal circumstances.
  • Records cannot be disclosed except for administering services required by law or under subpoena.
  • Requires written consent for disclosure of records.

Sponsors

Legislative Actions

Date Action
2026-01-12 Motion To As (Kanitra)
2026-01-12 Motion To Table (Quijano) (42-23-0)
2026-01-12 Passed by the Assembly (47-26-0)
2026-01-12 Passed Senate (Passed Both Houses) (23-14)
2026-01-12 Received in the Senate without Reference, 2nd Reading
2026-01-12 Substituted for S5037 (1R)
2026-01-08 Reported out of Assembly Comm. with Amendments, 2nd Reading
2026-01-05 Reported and Referred to Assembly Appropriations Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill contains specific provisions regarding Automated License Plate Recognition (ALPR) technology.

Mechanism of Influence: ALPR systems utilize computer vision and automated data processing, which are foundational AI technologies. By restricting the sale and sharing of ALPR data, the legislation regulates the commercial and governmental application of AI-driven surveillance outputs.

Evidence:

  • Restrictions are placed on the sale and sharing of automated license plate recognition information, with exceptions for legal disclosures.
  • Prohibits sale or sharing of license plate recognition data except under specific legal circumstances.

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but it regulates a specific application of AI (ALPR). It does not address other AI-specific requirements like algorithmic audits, submission of weights, or age verification.

Senate - 1463 - Prohibits collection of biometric identifier information by public or private entity under certain circumstances.

Legislation ID: 256467

Bill URL: View Bill

Summary

This bill prohibits the collection, retention, conversion, storage, or sharing of biometric identifier information by public and private entities unless they provide clear and conspicuous notice of such practices. It establishes penalties for violations and defines what constitutes biometric identifier information and biometric surveillance systems.

Key Sections

Key Requirements

  • Requires clear and conspicuous notice at every common entryway regarding the use of biometric surveillance systems.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Law and Public Safety Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates 'biometric surveillance systems,' which are a core application of artificial intelligence, particularly in computer vision and facial recognition technologies.

Mechanism of Influence: It imposes a disclosure and transparency requirement on entities using AI-driven surveillance, requiring them to provide clear notice to individuals before their biometric data is processed by these systems.

Evidence:

  • biometric surveillance system
  • clear and conspicuous notice
  • Prohibition on Collection of Biometric Information

Ambiguity Notes: The bill focuses on the 'biometric surveillance system' as the regulated entity; while AI is the standard underlying technology for such systems, the bill's scope depends on the specific technical definition of 'biometric surveillance' provided in the full text.

Senate - 1464 - Prohibits use of biometric surveillance system by business entity under certain circumstances.

Legislation ID: 256468

Bill URL: View Bill

Summary

This bill prohibits business entities from using biometric surveillance systems on consumers at their physical premises without clear notice and lawful purpose. It mandates that businesses provide explanations if they use biometric data to deny access or remove consumers. Additionally, it restricts the sale or profit from biometric data obtained from consumers and establishes penalties for violations.

Key Sections

Key Requirements

  • Biometric surveillance must be used for a lawful purpose.
  • Businesses must comply with the provisions within 30 days of a first violation notice.
  • Prohibits selling, leasing, trading, or sharing biometric data.
  • Requires businesses to provide an explanation if a consumer is denied access based on biometric data.
  • Requires clear and conspicuous notice to consumers regarding the use of biometric surveillance.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Law and Public Safety Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates biometric surveillance systems and facial recognition, which are specific implementations of artificial intelligence technology used for identification and monitoring.

Mechanism of Influence: It imposes disclosure requirements (notice) and operational constraints (lawful purpose, no-profit rule) on businesses deploying AI-driven biometric tools.

Evidence:

  • Prohibits business entities from using biometric surveillance systems on consumers unless they provide notice
  • Defines key terms related to biometric surveillance, including biometric surveillance system, business entity, and facial recognition.

Ambiguity Notes: The definition of 'biometric surveillance system' likely encompasses various AI models, though the text focuses on the application rather than the underlying algorithmic weights.

Senate - 1668 - Establishes "Artificial Intelligence Innovation Partnership"; provides funding for certain nonprofit partnerships to promote certain emerging technology businesses.

Legislation ID: 256731

Bill URL: View Bill

Summary

This bill establishes the Artificial Intelligence Innovation Partnership, which will be administered by the New Jersey Commission on Science, Innovation and Technology. The partnership will consist of independent nonprofit organizations working to support emerging artificial intelligence technology businesses and create collaborative innovation ecosystems across New Jersey. The bill outlines the goals, definitions, and operational framework for the partnership, including funding mechanisms and the establishment of a research grant fund.

Key Sections

Key Requirements

  • Annual reports must include current contact information and organizational structure.
  • Grants must be matched by private sector funds on a minimum basis.
  • Nonprofit organizations must demonstrate diversity in leadership.
  • Organizations must focus on specific AI technology sectors relevant to their region.
  • Partners must provide independent audits of funds received.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Economic Growth Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a formal government oversight and reporting structure for entities involved in the development and support of artificial intelligence technology.

Mechanism of Influence: It mandates annual reporting to the Governor and Legislature regarding AI partnership activities and requires the Commission to define and categorize 'artificial intelligence technology' for regulatory and funding purposes.

Evidence:

  • Mandates the commission to provide an annual report to the Governor and Legislature on the partnership's activities and progress.
  • Defines key terms used in the bill, including Artificial Intelligence Innovation Partner, artificial intelligence technology, and emerging AI technology business.

Ambiguity Notes: While the bill focuses on innovation and funding, the definitions of 'artificial intelligence technology' and 'emerging AI technology business' will determine the scope of who falls under this state-monitored ecosystem.

Analysis 2

Why Relevant: The legislation specifically requires audits and financial disclosures for organizations participating in the AI partnership.

Mechanism of Influence: Partners are required to submit independent audits of funds received and detailed annual reports on their organizational structure and activities to ensure compliance with state agreements.

Evidence:

  • Partners must provide independent audits of funds received.
  • Requires partners to submit annual reports to the commission detailing their activities, finances, and compliance with grant agreements.

Ambiguity Notes: The audits mentioned are focused on financial compliance and fund usage rather than technical algorithmic audits or safety assessments.

Senate - 1802 - Requires artificial intelligence companies to conduct safety tests and report results to Office of Information Technology.

Legislation ID: 256940

Bill URL: View Bill

Summary

This bill mandates that artificial intelligence companies in New Jersey perform annual safety tests on their AI technologies, which include assessments for biases, inaccuracies, and cybersecurity threats. The results of these tests must be reported to the Office of Information Technology, which will also establish minimum testing requirements. The bill seeks to promote accountability and safety in the development and deployment of AI technologies.

Key Sections

Key Requirements

  • Requires a description of each safety test conducted.
  • Requires a list of all AI technologies tested.
  • Requires a list of any third parties involved in the testing.
  • Requires an analysis of data sources for potential biases and inaccuracies.
  • Requires an analysis of potential cybersecurity threats and vulnerabilities.
  • Requires descriptions of remedies for identified issues.
  • Requires the submission of results from each safety test.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Commerce Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a mandatory audit and reporting framework for AI developers, which is a core component of AI regulation.

Mechanism of Influence: It requires companies to submit detailed safety test results to a state agency, effectively creating a government oversight mechanism for AI safety and performance.

Evidence:

  • AI companies are required to conduct annual safety tests on their technologies and submit detailed reports to the Office of Information Technology.
  • Requires the submission of results from each safety test.

Ambiguity Notes: The specific technical standards for the 'minimum requirements' of these tests are left to the discretion of the Office of Information Technology, which may lead to varying levels of rigor.

Analysis 2

Why Relevant: It addresses specific regulatory concerns regarding AI bias and cybersecurity vulnerabilities.

Mechanism of Influence: The legislation mandates data source analysis and vulnerability assessments to mitigate risks such as algorithmic bias and security threats.

Evidence:

  • Requires an analysis of data sources for potential biases and inaccuracies.
  • Requires an analysis of potential cybersecurity threats and vulnerabilities.

Ambiguity Notes: The term 'remedies for identified issues' does not specify whether the government has the authority to block deployment if the proposed remedies are deemed insufficient.

Senate - 1840 - Creates "New Jersey Responsible AI Advancement and Workforce Protection Act."

Legislation ID: 256984

Bill URL: View Bill

Summary

The New Jersey Responsible AI Advancement and Workforce Protection Act seeks to ensure that the deployment of AI technologies does not displace workers or harm communities. It establishes the AI Horizon Fund to support workforce retraining and apprenticeship programs, mandates environmental impact assessments for AI infrastructure, and requires high-risk AI systems to undergo algorithmic impact assessments. The bill aims to protect civil rights and promote community engagement in AI developments, holding companies accountable for their impact on workers and the environment.

Key Sections

Key Requirements

  • Allows collection of penalties through civil actions.
  • Allows enforcement of civil rights protections under existing laws.
  • Engages with unions and community colleges to develop training programs.
  • Establishes a list of sectors at risk for AI-driven displacement.
  • Imposes fines ranging from $1,000 to $2,000 for violations.
  • Mandates algorithmic impact assessments for high-risk AI systems prior to deployment.
  • Mandates annual reporting of energy consumption, water usage, and carbon emissions.
  • Mandates penalties and fees collected by the department to be credited to the fund.
  • Mandates the investigation of complaints regarding AI discrimination and workplace surveillance.
  • Provides enhanced unemployment benefits for displaced workers.
  • Requires compliance with ethical use and transparency standards.
  • Requires contributions from AI infrastructure entities based on a percentage of their gross revenue.
  • Requires initial and annual environmental impact assessments.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Labor Committee

Detailed Analysis

Analysis 1

Why Relevant: The act directly regulates the deployment of high-risk AI systems through mandatory assessments.

Mechanism of Influence: It requires high-risk AI systems to undergo algorithmic impact assessments before deployment to evaluate societal impacts and ensure compliance with ethical use and transparency standards.

Evidence:

  • Requires high-risk AI systems to undergo algorithmic impact assessments before deployment to evaluate their potential societal impacts.
  • Mandates algorithmic impact assessments for high-risk AI systems prior to deployment.
  • Requires compliance with ethical use and transparency standards.

Ambiguity Notes: The specific criteria for what constitutes a 'high-risk' AI system and the exact metrics for 'societal impacts' may require further administrative definition.

Analysis 2

Why Relevant: The legislation imposes disclosure and reporting requirements on AI infrastructure entities.

Mechanism of Influence: Entities must conduct annual environmental impact assessments and report on energy consumption, water usage, and carbon emissions, with penalties for non-compliance.

Evidence:

  • Mandates AI infrastructure entities to conduct environmental impact assessments and report on resource usage annually to ensure sustainable development.
  • Mandates annual reporting of energy consumption, water usage, and carbon emissions.

Ambiguity Notes: The definition of 'AI infrastructure entity' determines the scope of companies subject to these reporting requirements.

Analysis 3

Why Relevant: It establishes government oversight and enforcement mechanisms for AI-related harms.

Mechanism of Influence: The Attorney General is granted authority to investigate AI-driven discrimination and workplace surveillance, while the Department of Labor monitors AI-driven displacement.

Evidence:

  • Empowers the Attorney General to investigate AI-driven discrimination and enforce civil rights protections related to AI usage.
  • Mandates the investigation of complaints regarding AI discrimination and workplace surveillance.

Ambiguity Notes: None

Analysis 4

Why Relevant: The act includes financial penalties to ensure compliance with AI regulations.

Mechanism of Influence: It establishes a fine structure for violations related to environmental assessments and high-risk AI system mandates, with funds directed to a workforce retraining fund.

Evidence:

  • Establishes penalties for violations of the act, specifically related to environmental assessments and high-risk AI systems, with specified fines for non-compliance.
  • Imposes fines ranging from $1,000 to $2,000 for violations.

Ambiguity Notes: None

Senate - 2129 - Prohibits and imposes criminal penalty on disclosure of certain intentionally deceptive audio or visual media within 90 days of election.*

Legislation ID: 257300

Bill URL: View Bill

Summary

Senate Bill No. 2129 prohibits the disclosure and solicitation of deceptive audio or visual media within a specified timeframe before elections, imposing criminal penalties for violations. It allows registered voters and candidates to seek civil remedies against those who distribute deceptive media with the intent to mislead voters. The bill outlines exceptions for minor alterations and certain forms of expression, while also clarifying the protections for various media platforms.

Key Sections

Key Requirements

  • Actions must be initiated by filing an application for an Order to Show Cause.
  • Certain types of media and platforms are exempt from the law.
  • Disclaimers must be clearly presented in the media.
  • First violation is a fourth-degree crime.
  • Plaintiffs must demonstrate entitlement to relief by clear and convincing evidence.
  • Second or subsequent violations are a third-degree crime.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate State Government, Wagering, Tourism & Historic Preservation Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill addresses 'deceptive audio or visual media,' a category that fundamentally includes AI-generated deepfakes used to manipulate public perception during elections.

Mechanism of Influence: It mandates disclosures in the form of disclaimers for such media and imposes legal liability on those who use AI-driven deceptive content to influence voters.

Evidence:

  • prohibits the disclosure and solicitation of deceptive audio or visual media
  • Disclaimers must be clearly presented in the media
  • deceptive audio or visual media intended to mislead voters

Ambiguity Notes: While the abstract uses the term 'deceptive audio or visual media' rather than 'artificial intelligence' explicitly, this terminology is the standard legislative framework for regulating AI-generated synthetic media.

Senate - 2130 - Establishes Deep Fake Technology Unit in DLPS; appropriates $2 million.

Legislation ID: 257301

Bill URL: View Bill

Summary

This bill establishes the Deep Fake Technology Unit within the Division of Criminal Justice in the Department of Law and Public Safety in New Jersey. The unit will provide expertise, training, and technical assistance to law enforcement and the judiciary regarding deep fakes, which are manipulated media that can misrepresent reality. The bill also includes provisions for annual reporting on the unit's activities and technological advancements in the field, along with an appropriation of $2 million to support its operations.

Key Sections

Key Requirements

  • Mandates the unit to provide technical assistance, training, and expertise to law enforcement and courts.
  • Requires the Attorney General to establish the unit in consultation with the Chief Technology Officer.
  • Requires the unit to analyze and authenticate deceptive media for investigations.
  • Requires the unit to issue an annual report detailing its activities and advancements in technology.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Law and Public Safety Committee

Detailed Analysis

Analysis 1

Why Relevant: Deep fakes are a primary output of generative artificial intelligence, and the creation of a dedicated state unit to monitor and authenticate such media represents a form of government oversight and regulation of AI-generated content.

Mechanism of Influence: The unit will provide technical assistance and authentication services for investigations, creating a practical mechanism for the state to identify and mitigate the impact of AI-manipulated media in legal and law enforcement contexts.

Evidence:

  • The unit will provide expertise, training, and technical assistance to law enforcement and the judiciary regarding deep fakes
  • Requires the unit to analyze and authenticate deceptive media for investigations.
  • Requires the unit to issue an annual report detailing its activities and advancements in technology.

Ambiguity Notes: The definition of 'deceptive audio or visual media' may be broad, potentially covering a wide range of AI-generated or AI-enhanced content beyond traditional deep fakes.

Senate - 2602 - "New Jersey Disclosure and Accountability Transparency Act (NJ DaTA)"; establishes certain requirements for disclosure and processing of personally identifiable information; establishes Office of Data Protection and Responsible Use in Division of Consumer Affairs.

Legislation ID: 257842

Bill URL: View Bill

Summary

The New Jersey Disclosure and Accountability Transparency Act (NJ DaTA) is designed to regulate how personally identifiable information is collected, processed, and disclosed by controllers. It mandates transparency in data handling, consumer consent for data processing, and establishes the Office of Data Protection and Responsible Use within the Division of Consumer Affairs to oversee compliance.

Key Sections

Key Requirements

  • Allows processing for contractual obligations, legal compliance, and vital interests.
  • Consumers can request rectification of inaccurate data.
  • Consumers can request the deletion of their data under certain conditions.
  • Consumers have the right to request access to their data.
  • Ensures accuracy and timely updates of collected data.
  • Imposes data retention limits based on necessity.
  • Includes information about third parties with whom data may be shared.
  • Limits data collection to what is necessary for specified purposes.
  • Mandates lawful, fair, and transparent processing of data.
  • Mandates notification of consumer rights regarding their data.
  • Processing requires affirmative consent from the consumer.
  • Prohibits processing of sensitive data unless specific conditions are met, such as consumer consent.
  • Requires appropriate security measures for data protection.
  • Requires consumer affirmative opt-in for data collection.
  • Requires disclosure of categories of processed data and purposes of processing.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Commerce Committee

Detailed Analysis

Analysis 1

Why Relevant: The legislation includes specific definitions for automated decision making, which is a core component of artificial intelligence systems.

Mechanism of Influence: By defining automated decision making within a regulatory framework for data transparency, the law creates a legal basis for overseeing how algorithms and AI models process personal data to make determinations about individuals.

Evidence:

  • This section provides definitions for key terms used in the bill, such as automated decision making, biometric data, consumer, and processor.

Ambiguity Notes: The provided text mentions that automated decision making is defined but does not explicitly detail the specific regulatory constraints or audit requirements applied to those automated processes.

Analysis 2

Why Relevant: The act mandates transparency and affirmative consent for data processing, which directly impacts the data acquisition and training phases of AI development.

Mechanism of Influence: AI developers acting as data controllers would be required to obtain explicit opt-in consent before collecting data used for processing, potentially limiting the use of scraped or non-consensual datasets for AI training.

Evidence:

  • Requires consumer affirmative opt-in for data collection.
  • Mandates that controllers provide clear information to consumers about the processing of their personally identifiable information at the time of collection.
  • establishes the Office of Data Protection and Responsible Use within the Division of Consumer Affairs to oversee compliance.

Ambiguity Notes: While the law focuses on PII, the 'Responsible Use' aspect of the newly created Office suggests a broader mandate that could encompass algorithmic accountability.

Senate - 2625 - Creates separate crime for items depicting sexual exploitation or abuse of children; concerns computer generated or manipulated sexually explicit images.

Legislation ID: 257871

Bill URL: View Bill

Summary

Senate Bill No. 2625 establishes new legal definitions and penalties related to the sexual exploitation or abuse of children, particularly focusing on items that depict such exploitation, whether through direct photography or digital manipulation. It outlines various offenses, including distribution, possession, and creation of such materials, and specifies the legal consequences based on the number of items involved. The bill also addresses the treatment of juveniles in cases related to the sharing of sexually suggestive materials, aiming to provide educational and counseling opportunities.

Key Sections

Key Requirements

  • Different forms of media have specific aggregation rules.
  • Each depiction is treated as a separate item for aggregation.
  • Knowingly distribute or possess items depicting sexual exploitation.
  • Must be aware of the nature of the items possessed.
  • Must not allow a child to engage in prohibited sexual acts.
  • Must not permit a child to be portrayed in a sexually suggestive manner.
  • Possess items depicting sexual exploitation in specified quantities.
  • Prohibits photographing or filming children in sexual acts or suggestive manners.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Judiciary Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill includes 'manipulated depictions' in its scope, which is the legal terminology used to regulate AI-generated deepfakes and synthetic media.

Mechanism of Influence: It imposes criminal penalties on the creation and distribution of digitally manipulated content, effectively regulating the output of generative AI tools when used for illegal imagery involving minors.

Evidence:

  • manipulated depiction
  • digital manipulation
  • creation of such materials

Ambiguity Notes: The term 'manipulated depiction' is broad and typically covers both traditional digital editing and advanced AI-driven synthesis, though the bill focuses on the content rather than the specific technological method of generation.

Senate - 2860 - Establishes Artificial Intelligence Apprenticeship Program and artificial intelligence apprenticeship tax credit program.

Legislation ID: 258122

Bill URL: View Bill

Summary

This bill establishes an Artificial Intelligence Apprenticeship Program within the New Jersey Department of Labor and Workforce Development. The program will work with AI companies to create apprenticeship opportunities and will also set up a tax credit for employers who hire apprentices in the AI field. The tax credit will be equal to half of the wages paid to qualified apprentices, up to a maximum of $5,000 per apprentice.
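The credit formula described above is straightforward; a minimal sketch of the calculation, assuming only the figures stated in the summary (the function name is illustrative, not from the bill text):

```python
def ai_apprenticeship_credit(wages_paid: float) -> float:
    """Hypothetical sketch of the S.2860 credit: half of the wages paid
    to a qualified apprentice, capped at $5,000 per apprentice."""
    return min(0.5 * wages_paid, 5_000.0)

# On $8,000 of wages the credit is $4,000; above $10,000 of wages
# the $5,000 cap binds.
print(ai_apprenticeship_credit(8_000))   # 4000.0
print(ai_apprenticeship_credit(14_000))  # 5000.0
```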

Key Sections

Key Requirements

  • Assists employers in establishing compliant apprenticeship programs.
  • Facilitates partnerships between employers and educational institutions for training.
  • Informs employers about the tax credit program.
  • Requires collaboration with AI companies to offer apprenticeship opportunities.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Labor Committee

Detailed Analysis

Analysis 1

Why Relevant: The legislation specifically targets the Artificial Intelligence industry by creating a state-managed workforce development program and financial incentives for AI employers.

Mechanism of Influence: The bill influences the AI sector through economic incentives and state-led coordination of labor, rather than through direct regulation or oversight of the technology itself.

Evidence:

  • Establishment of the Artificial Intelligence Apprenticeship Program
  • The program aims to provide apprenticeship opportunities in the AI industry
  • tax credit for employers who hire apprentices in the AI field

Ambiguity Notes: The abstract does not define the specific criteria for what constitutes an 'AI company' or 'AI field,' which may lead to broad interpretation regarding which businesses qualify for the tax credit.

Senate - 2862 - Requires school districts to provide instruction on artificial intelligence; requires Secretary of Higher Education to develop artificial intelligence model curricula.

Legislation ID: 258124

Bill URL: View Bill

Summary

This bill mandates the inclusion of artificial intelligence instruction in K-12 education and requires public institutions of higher education to offer related certificate and degree programs. It outlines the responsibilities of the Commissioner of Education and the Secretary of Higher Education in developing curricula and resources to support these educational initiatives. The legislation is designed to enhance students' understanding of AI and prepare them for careers in this growing field.

Key Sections

Key Requirements

  • Mandates that the curricula meet academic quality and accreditation standards.
  • Mandates the provision of age-appropriate learning activities and resources by the Commissioner of Education.
  • Requires development and distribution of materials that describe AI careers and their benefits to public institutions of higher education.
  • Requires development of artificial intelligence model curricula for public four-year institutions and county colleges.
  • Requires school districts to include AI concepts, foundational skills, and ethical practices in K-12 education.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Education Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates the inclusion of 'ethical practices' in AI education, which relates to the broader regulatory interest in AI safety and ethics.

Mechanism of Influence: By requiring the Commissioner of Education to develop resources for ethical AI instruction, the state establishes a framework for how AI's societal impacts are taught and understood at a foundational level.

Evidence:

  • Requires school districts to include AI concepts, foundational skills, and ethical practices in K-12 education.
  • Mandates the provision of age-appropriate learning activities and resources by the Commissioner of Education.

Ambiguity Notes: The term 'ethical practices' is broad and undefined, potentially encompassing topics ranging from data privacy and algorithmic bias to the responsible use of generative AI.

Analysis 2

Why Relevant: The bill regulates the educational requirements for AI, focusing on workforce development and academic standardization.

Mechanism of Influence: It mandates that public higher education institutions offer specific AI credentials, ensuring that the state's educational output aligns with the technical needs of the AI industry.

Evidence:

  • Public institutions of higher education are required to offer certificate and degree programs in artificial intelligence.
  • Requires development of artificial intelligence model curricula for public four-year institutions and county colleges.

Ambiguity Notes: While it mandates the creation of programs, it does not specify the technical depth or specific AI sub-fields (e.g., machine learning vs. neural networks) that must be covered.

Senate - 2940 - Establishes Office of Cybersecurity Infrastructure.

Legislation ID: 258213

Bill URL: View Bill

Summary

This bill establishes the Office of Cybersecurity Infrastructure as an independent entity within the Executive Branch of New Jersey's government. The office is tasked with creating and implementing cybersecurity policies, monitoring technology infrastructure, and establishing guidelines for the safe integration of artificial intelligence in both public and private sectors. The office will be led by a Director appointed by the Governor and will report on its activities to the Governor and Legislature annually.

Key Sections

Key Requirements

  • Annual report to be issued on or before January 1 each year.
  • Coordinate cybersecurity operations in the Executive Branch.
  • Deputy Directors serve at the pleasure of the Director.
  • Develop AI policies for public and private institutions.
  • Director may appoint up to six Deputy Directors.
  • Director must be appointed by the Governor with Senate consent.
  • Director must be qualified by education and experience.
  • Director must serve full-time.
  • Establish cybersecurity policies for the State.
  • Establish internal organizational structure of the office.
  • Monitor technology infrastructure for secure interactions with residents.
  • Office must operate independently from the State Treasurer and Department of the Treasury.
  • Periodic reports to the Governor.
  • The Director must report directly to the Governor.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate State Government, Wagering, Tourism & Historic Preservation Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mandates the creation of policies for the safe integration of artificial intelligence.

Mechanism of Influence: The Office of Cybersecurity Infrastructure is tasked with developing AI policies for both public and private institutions, which serves as a regulatory framework for AI usage and safety standards.

Evidence:

  • Develop AI policies for public and private institutions.
  • developing AI policies for safe integration.

Ambiguity Notes: The terms 'safe integration' and 'AI policies' are broad and do not specify whether they include specific requirements like audits, disclosure, or weight submissions, though the office has the authority to define these.

Analysis 2

Why Relevant: The bill establishes an oversight and reporting mechanism for technology and AI policy.

Mechanism of Influence: The Director must report annually to the Governor and Legislature, providing a channel for government oversight of AI-related infrastructure and policy implementation.

Evidence:

  • The Director must provide periodic updates and an annual report to the Governor and Legislature regarding the office's operations and cybersecurity infrastructure.

Ambiguity Notes: The reporting requirements focus on 'operations and cybersecurity infrastructure,' which likely includes the progress and enforcement of the AI policies mentioned elsewhere in the bill.

Senate - 2942 - Establishes public-private partnerships to develop artificial intelligence job training.

Legislation ID: 258215

Bill URL: View Bill

Summary

This legislation enables the Commissioner of Labor and Workforce Development to create public-private partnerships aimed at providing training and retraining services related to artificial intelligence. The bill outlines the responsibilities of the private entities involved and establishes an advisory council for oversight. It also provides guidelines for project proposals and exempts certain entities from procurement and prevailing wage requirements to facilitate the development of AI training programs.

Key Sections

Key Requirements

  • Establishes an advisory council to guide the partnership.
  • Exempts private entities from certain procurement and prevailing wage requirements.
  • Proposal must include anticipated fee structure, cost per trainee, expected duration of training, instructor qualifications, and proposed training location.
  • Requires annual reports to the Governor and Legislature on program progress.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Labor Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically addresses the artificial intelligence industry and establishes a legal framework for AI-related workforce development.

Mechanism of Influence: It creates a state-sanctioned mechanism for AI training and requires the definition of AI and the AI industry within state labor regulations.

Evidence:

  • This legislation enables the Commissioner of Labor and Workforce Development to create public-private partnerships aimed at providing training and retraining services related to artificial intelligence.
  • This section defines key terms used in the act, including AI, artificial intelligence industry, Authority, Commissioner, Department, and Public-private partnership agreement.

Ambiguity Notes: The focus is on workforce training rather than technical regulation of AI models, but it establishes the state's role in overseeing AI's impact on the labor market.

Analysis 2

Why Relevant: The legislation includes oversight and reporting requirements concerning AI initiatives.

Mechanism of Influence: It mandates an advisory council and requires annual reports to the Governor and Legislature regarding the progress and outcomes of AI training programs.

Evidence:

  • Establishes an advisory council to guide the partnership.
  • Requires annual reports to the Governor and Legislature on program progress.

Ambiguity Notes: The oversight is administrative and focused on program efficacy rather than the technical auditing of AI weights or algorithms.

Senate - 52 - Urges generative artificial intelligence companies to make voluntary commitments regarding employee whistleblower protections.

Legislation ID: 258671

Bill URL: View Bill

Summary

This resolution highlights the potential benefits and risks of artificial intelligence technology, emphasizing the need for better whistleblower protections for employees in the sector. It calls for generative AI companies to adopt principles that would ensure employee safety when reporting risks, promote transparency, and facilitate independent evaluations of AI systems.

Key Sections

Key Requirements

  • Allows employees to report concerns publicly until an anonymous reporting process is established.
  • Facilitates a verifiably anonymous process for reporting risks to the board and regulators.
  • Prevents retaliation against employees sharing risk-related information after other reporting processes fail.
  • Prohibits agreements that prevent employees from criticizing the company for risk-related concerns.
  • Provides legal and technical safe harbor for good faith evaluations of AI systems.
  • Supports a culture of open criticism regarding the companies' technologies while protecting trade secrets.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Labor Committee

Detailed Analysis

Analysis 1

Why Relevant: The resolution directly addresses the regulation and oversight of artificial intelligence by focusing on risk reporting and independent evaluations.

Mechanism of Influence: It seeks to establish a framework where employees can report AI-related risks to boards and regulators without fear of retaliation, thereby creating a mechanism for government and internal oversight.

Evidence:

  • The resolution urges generative AI companies to commit to protecting employees who raise concerns about risks associated with their technologies.
  • Facilitates a verifiably anonymous process for reporting risks to the board and regulators.
  • Provides legal and technical safe harbor for good faith evaluations of AI systems.

Ambiguity Notes: The terms 'risk-related concerns' and 'good faith evaluations' are not strictly defined, allowing for a broad range of safety and ethical issues to be covered under these protections.

Senate - 735 - Prohibits advertising artificial intelligence system as licensed mental health professional.

Legislation ID: 255539

Bill URL: View Bill

Summary

This legislation prohibits developers or deployers of artificial intelligence systems in New Jersey from advertising or claiming that such systems can act as licensed mental health professionals. Violations of this prohibition are deemed unlawful practices under the New Jersey Consumer Fraud Act, with penalties for infractions.

Key Sections

Key Requirements

  • Defines penalties for violations under the Consumer Fraud Act.
  • Prohibits advertising AI systems as licensed mental health professionals.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Commerce Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the marketing and public representation of AI systems, specifically prohibiting claims that AI can substitute for licensed mental health professionals.

Mechanism of Influence: It creates a legal prohibition on specific types of AI-related advertising and subjects violators to the New Jersey Consumer Fraud Act, effectively regulating the commercial deployment of AI in the mental health space.

Evidence:

  • This legislation prohibits developers or deployers of artificial intelligence systems in New Jersey from advertising or claiming that such systems can act as licensed mental health professionals.
  • The provision forbids any person or entity from advertising an artificial intelligence system as a licensed mental health professional.

Ambiguity Notes: The scope of 'advertising or claiming' could be interpreted to include not just traditional ads but also branding, user interface design, or conversational prompts that imply professional status.

Analysis 2

Why Relevant: It provides legal definitions for artificial intelligence and establishes enforcement mechanisms for AI-related consumer protection.

Mechanism of Influence: By defining AI and integrating it into the Consumer Fraud Act, the law provides a framework for state oversight of AI developers and deployers.

Evidence:

  • This section provides definitions for artificial intelligence and licensed mental health professional to clarify the scope of the bill.
  • Defines penalties for violations under the Consumer Fraud Act.

Ambiguity Notes: None

Senate - 861 - Establishes certain requirements for social media websites concerning content moderation practices; establishes cause of action against social media websites for violation of content moderation practices.

Legislation ID: 255694

Bill URL: View Bill

Summary

Senate Bill No. 861 establishes requirements for social media websites in New Jersey concerning their content moderation practices. It mandates transparency in censorship actions, ensures consistent application of moderation standards, and allows users to challenge unjust censorship. The bill also provides for penalties against social media platforms that violate these regulations, particularly in relation to political candidates and journalistic enterprises.

Key Sections

Key Requirements

  • Allow users to claim statutory damages up to $100,000.
  • Allow users to request data on content visibility.
  • Apply moderation standards consistently among users.
  • Explain the algorithms used to flag content.
  • Fines of $100,000 per day for violations related to statewide candidates.
  • Fines of $10,000 per day for violations related to other candidates.
  • Include a rationale for the censorship.
  • Notify users of changes to rules and terms.
  • Provide for actual and punitive damages.
  • Provide notification before censoring content.
  • Publish standards for censorship and user bans.
  • Social media websites must provide algorithms and documentation upon subpoena.
  • Written notification within 30 days of censorship.
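The bill's fine schedule accrues per day and varies by the type of candidate involved; a minimal sketch of how a penalty would be computed under the daily rates listed above (function name hypothetical):

```python
def accrued_fine(days_in_violation: int, statewide_candidate: bool) -> int:
    """Sketch of S.861's per-day fine schedule: $100,000 per day for
    violations involving statewide candidates, $10,000 per day for
    violations involving other candidates."""
    daily_rate = 100_000 if statewide_candidate else 10_000
    return days_in_violation * daily_rate

# A week-long violation involving a statewide candidate accrues $700,000.
print(accrued_fine(7, statewide_candidate=True))   # 700000
print(accrued_fine(7, statewide_candidate=False))  # 70000
```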

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced in the Senate, Referred to Senate Commerce Committee

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly regulates the use of algorithms in content moderation and requires disclosures regarding their function.

Mechanism of Influence: It mandates that social media websites explain the algorithms used to flag content to users and allows the Attorney General to subpoena these algorithms for oversight purposes.

Evidence:

  • This section defines key terms used throughout the bill, including algorithm, censor, user, and social media website.
  • The Office of the Attorney General can subpoena social media websites for algorithms related to user banning.
  • Explain the algorithms used to flag content.

Ambiguity Notes: While the bill uses the term 'algorithm' rather than 'artificial intelligence,' modern content moderation algorithms are predominantly AI-driven, making this a direct regulation of AI application in social media.

Analysis 2

Why Relevant: The bill establishes a framework for government oversight and submission of algorithmic documentation.

Mechanism of Influence: Social media platforms are required to provide their algorithms and related documentation to the government upon subpoena, which functions as a form of regulatory audit or oversight of automated decision-making systems.

Evidence:

  • Social media websites must provide algorithms and documentation upon subpoena.

Ambiguity Notes: The term 'documentation' is broad and could potentially encompass technical specifications, model weights, or training data used in the moderation algorithms.

↑ Back to Table of Contents

New Mexico

Index of Bills

House - 141 - ARTIFICIAL INTELLIGENCE ACCOUNTABILITY ACT

Legislation ID: 285310

Bill URL: View Bill

Summary

House Bill 141, known as the Artificial Intelligence Accountability Act, aims to regulate the creation and distribution of synthetic content generated by artificial intelligence. It mandates disclosure requirements for covered providers, establishes guidelines for capture device manufacturers, and outlines the responsibilities of large online platforms. The bill also includes provisions for civil investigations and penalties for non-compliance to safeguard against deceptive synthetic content.

Key Sections

Key Requirements

  • Allows for recovery of attorney fees and damages for affected individuals.
  • Allows users the option to opt out of latent disclosures.
  • Attorney General can issue civil investigative demands for relevant documents.
  • Demands must detail the subject matter and specify compliance timelines.
  • Establishes that each instance of interaction with deceptive content can be counted as a separate violation.
  • Exempts non-user-generated video games, streaming services, and similar products from the act.
  • Imposes a civil penalty of $15,000 for each violation of the act.
  • Mandates an additional year of imprisonment for crimes involving generative AI.
  • Mandates embedding of latent disclosures in new capture devices.
  • Mandates latent disclosures containing specific information about the content.
  • Must provide interfaces for users to access and request takedown of deceptive content.
  • Platforms must detect system provenance data and append relevant information.
  • Providers cannot retain personal provenance data longer than necessary.
  • Requires covered providers to offer manifest disclosures in synthetic content.
  • Requires individuals to comply with the demands or face potential court orders and penalties for contempt.
  • Requires individuals to respond to written interrogatories under oath or provide documents for inspection and copying.
  • Provides that a person who disseminates deceptive content can be held liable if they knowingly or recklessly disregard the potential harm caused.
  • Requires that each demand specify the subject matter, describe the documents needed, and identify the attorney general's staff member handling the case.
  • Tool must allow users to read provenance data and comply with established standards.

Sponsors

Legislative Actions

Date Action
2026-01-22 Not Printed

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses AI disclosures and the regulation of synthetic content.

Mechanism of Influence: It mandates that covered providers include manifest and latent disclosures in AI-generated content and requires hardware manufacturers to embed disclosure capabilities.

Evidence:

  • Covered providers must offer users the option for manifest disclosures in synthetic content and ensure latent disclosures are included.
  • Manufacturers of capture devices must embed latent disclosures and allow users to opt out.

Ambiguity Notes: The specific technical standards for 'latent disclosures' are not fully detailed in the abstract, potentially leaving implementation details to future rulemaking.

Analysis 2

Why Relevant: The bill establishes government oversight and investigative powers over AI entities.

Mechanism of Influence: The Attorney General is authorized to issue civil investigative demands to compel information and documents from individuals or entities suspected of non-compliance.

Evidence:

  • This provision allows the attorney general to issue civil investigative demands requiring individuals to provide information or documents relevant to an investigation

Ambiguity Notes: The scope of 'relevant information' for an investigation is broad and subject to the Attorney General's discretion.

Analysis 3

Why Relevant: It imposes specific operational requirements on large online platforms regarding AI content management.

Mechanism of Influence: Platforms must implement detection measures for synthetic content and provide user interfaces for reporting and removing deceptive AI content.

Evidence:

  • Large online platforms must implement measures for detecting and managing synthetic content.
  • Must provide interfaces for users to access and request takedown of deceptive content.

Ambiguity Notes: The definition of a 'Large Online Platform' (e.g., user thresholds) is not specified in the abstract.

Analysis 4

Why Relevant: The bill requires transparency tools to identify the origin of AI-generated media.

Mechanism of Influence: Providers must offer a free, publicly accessible provenance detection tool to allow users to verify content data.

Evidence:

  • Covered providers are required to offer a publicly accessible provenance detection tool at no cost.
  • Tool must allow users to read provenance data and comply with established standards.

Ambiguity Notes: The effectiveness of these tools depends on the adoption of 'established standards' mentioned in the text.

Analysis 5

Why Relevant: It regulates the misuse of AI through criminal and civil penalties.

Mechanism of Influence: The act increases prison sentences for felonies involving generative AI and allows for private lawsuits against those who spread deceptive synthetic content.

Evidence:

  • Mandates an additional year of imprisonment for crimes involving generative AI.
  • This provision makes individuals civilly liable for spreading deceptive synthetic content that harms others

Ambiguity Notes: The term 'deceptive synthetic content' requires clear legal interpretation to distinguish between harmful misinformation and protected speech like satire.

House - 28 - ARTIFICIAL INTELLIGENCE TRANSPARENCY ACT

Legislation ID: 247447

Bill URL: View Bill

Summary

This bill establishes the Artificial Intelligence Transparency Act, which requires entities deploying artificial intelligence systems to notify consumers about the use of these systems in making consequential decisions. It mandates that consumers receive clear explanations of how their data is used and the basis for decisions made by AI. The bill also outlines the rights of consumers to appeal adverse decisions and sets forth enforcement mechanisms to protect consumer rights.

Key Sections

Key Requirements

  • Consumers can seek civil action for violations.
  • Enforcement authority granted to the state department of justice.
  • Ensures consumers can correct incorrect personal data and appeal decisions.
  • Notifications must be accessible to users with disabilities.
  • Requires clear and conspicuous notification that the product is an AI system.
  • Requires deployers to notify consumers before using AI for consequential decisions.
  • Requires detailed explanations for adverse decisions made by AI.

Sponsors

Legislative Actions

Date Action
2026-01-21 Not Printed
2026-01-20 Sent to HPREF - Referrals: HPREF

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses AI regulation and disclosure requirements.

Mechanism of Influence: It mandates that deployers provide clear and conspicuous notifications to consumers before using AI for consequential decisions and at the start of interactions with AI companion products.

Evidence:

  • Requires deployers to notify consumers before using AI for consequential decisions.
  • Requires clear and conspicuous notification that the product is an AI system.

Ambiguity Notes: The scope of the act depends heavily on the definitions of 'consequential decision' and 'companion products', which are mentioned but not fully detailed in the abstract.

Analysis 2

Why Relevant: The bill includes oversight and enforcement mechanisms, establishing government oversight of AI.

Mechanism of Influence: It empowers the state department of justice to enforce the act and grants consumers the right to take civil action for violations.

Evidence:

  • Enforcement authority granted to the state department of justice.
  • Consumers can seek civil action for violations.

Ambiguity Notes: The specific penalties for non-compliance or the threshold for 'adverse decisions' may require further clarification in the full text.

Senate - 53 - COMMUNITY & HEALTH INFO SAFETY & PRIVACY ACT

Legislation ID: 282777

Bill URL: View Bill

Summary

This legislation, known as the Community and Health Information Safety and Privacy Act, introduces comprehensive definitions and requirements for entities that collect and process consumer data. It outlines the rights of consumers regarding their personal information, sets limitations on data processing, and prohibits certain uses of consumer data. The bill also includes provisions for enforcement and penalties for violations.

Key Sections

Key Requirements

  • Allow consumers to disable notifications.
  • Allows compliance with subpoenas and cooperation with law enforcement.
  • Avoid using dark patterns to manipulate consumers into providing personal data.
  • Compliance with specified federal data privacy laws provides exemption from certain state law provisions.
  • Configure all default privacy settings to the highest level of privacy.
  • Consumers can request deletion of their data, which must be honored within specified timeframes.
  • Consumers must be able to access their personal data in a clear format.
  • Disable contact by unknown users unless initiated by the consumer.
  • Does not apply to information processed by local, state, or federal government.
  • Do not profile consumers by default without consent.
  • Establish reasonable data security practices.
  • Imposes fines for violations based on consumer impact and violation severity.
  • Limit processing of personal data to necessary activities.
  • Mandates clear disclosures regarding data categories, purposes, sharing entities, consent withdrawal, and expiration.
  • Obtain consent for processing sensitive personal data.
  • Permits actions to protect life or physical safety in emergencies.
  • Prohibits discrimination against consumers based on their exercise of privacy rights.
  • Provide accessible tools for consumers to exercise their privacy rights.
  • Provide options for privacy-protective or profile-based feeds.
  • Publicly provide clear privacy information and terms of service.
  • Requires compliance with federal data privacy laws for covered entities and service providers to be deemed compliant with this Act.
  • Requires opt-in consent for processing sensitive personal data.
  • Service providers must have data processing agreements with covered entities.

Sponsors

Legislative Actions

Date Action
2026-01-21 Sent to SCC - Referrals: SCC/SHPAC/SJC

Detailed Analysis

Analysis 1

Why Relevant: The legislation regulates 'profiling' and 'profile-based feeds,' which are core functions of AI-driven recommendation engines and behavioral analysis systems.

Mechanism of Influence: By prohibiting profiling by default and requiring opt-in consent, the law restricts the automated categorization of individuals by AI models.

Evidence:

  • Do not profile consumers by default without consent.
  • Provide options for privacy-protective or profile-based feeds.

Ambiguity Notes: The bill uses the term 'profiling' rather than 'Artificial Intelligence,' which is a common legal approach to capture algorithmic decision-making without relying on a shifting technical definition.

Analysis 2

Why Relevant: The act includes specific mandates for minors' privacy and default settings, a form of age-related usage regulation.

Mechanism of Influence: It requires covered entities to set default privacy settings to the highest level for all users and specifically restricts notifications and unknown contacts for minors.

Evidence:

  • This provision outlines the requirements for privacy settings specific to minors, including disabling notifications and contact from unknown users.
  • Configure all default privacy settings to the highest level of privacy.

Ambiguity Notes: While it defines 'minor,' the abstract does not specify the technical method for age verification required to trigger these protections.

Analysis 3

Why Relevant: The regulation of biometric data is a critical component of AI oversight, particularly concerning facial recognition and biometric identification technologies.

Mechanism of Influence: The law classifies biometric data as sensitive personal data, requiring explicit opt-in consent before it can be processed by any system.

Evidence:

  • This section provides definitions for key terms used throughout the act, including actual knowledge, affiliate, biometric data
  • Requires opt-in consent for processing sensitive personal data.

Ambiguity Notes: None

Analysis 4

Why Relevant: The bill addresses the use of 'dark patterns,' which are often used in AI-driven user interfaces to manipulate consumer behavior.

Mechanism of Influence: It prohibits the use of manipulative design to coerce users into providing data, which impacts how AI-driven engagement loops are designed.

Evidence:

  • Avoid using dark patterns to manipulate consumers into providing personal data.

Ambiguity Notes: The definition of 'dark patterns' can be broad and may require further regulatory clarification to determine which specific UI/UX designs are prohibited.

Senate - 68 - ARTIFICIAL INTELLIGENCE GOVERNMENT USE ACT

Legislation ID: 285365

Bill URL: View Bill

Summary

The Artificial Intelligence Government Use Act mandates that public bodies create policies and training programs regarding the use of artificial intelligence and automated decision tools. It defines key terms related to AI and automated decision-making, outlines the requirements for public bodies to establish policies on authorized use, and mandates training for employees on cybersecurity and the appropriate use of these technologies.

Key Sections

Key Requirements

  • Mandates training for employees on cybersecurity policies.
  • Mandates training on the appropriate use of AI and automated decision tools.
  • Policies must address security procedures for nonpublic data.
  • Policies must define authorized uses of AI and automated decision tools.
  • Prohibits overriding security procedures except under specific circumstances.
  • Requires human oversight for consequential decisions made with AI tools.
  • Requires public bodies to establish policies for AI and automated decision tools.

Sponsors

Legislative Actions

Date Action
2026-01-22 Sent to SCC - Referrals: SCC/SHPAC/SJC

Detailed Analysis

Analysis 1

Why Relevant: The act directly regulates the deployment and governance of AI within public institutions.

Mechanism of Influence: It mandates the creation of formal policies and security procedures, effectively setting a regulatory framework for government AI usage.

Evidence:

  • Public bodies must establish policies governing the use of artificial intelligence and automated decision tools
  • Requires human oversight for consequential decisions made with AI tools.

Ambiguity Notes: The specific scope of 'consequential decisions' is a critical term that will determine the breadth of the human oversight requirement.

Analysis 2

Why Relevant: The legislation addresses oversight and accountability mechanisms for automated systems.

Mechanism of Influence: By requiring human oversight for consequential decisions, the law prevents fully autonomous AI systems from making high-stakes determinations without human intervention.

Evidence:

  • Requires human oversight for consequential decisions made with AI tools.
  • Mandates training on the appropriate use of AI and automated decision tools.

Ambiguity Notes: The act does not specify the level of human intervention required to satisfy the 'oversight' mandate.

↑ Back to Table of Contents

New York

Index of Bills

Assembly - 10008 - Enacts into law major components of legislation necessary to implement the state transportation, economic development and environmental conservation budget for the 2026-2027 state fiscal year

Legislation ID: 283291

Bill URL: View Bill

Summary

This bill encompasses a wide range of amendments to existing laws related to motor vehicle regulations, insurance, environmental conservation, and economic development. It includes provisions for increasing motor vehicle fees, establishing safety courses, implementing technology for speed assistance in vehicles, and enhancing protections for highway workers. The bill also addresses funding for transportation projects and updates regulations concerning insurance and utilities.

Key Sections

Key Requirements

  • Cities must enact local laws to implement the pilot program.
  • Completion of necessary rules and regulations by the effective date.
  • Extends the expiration date of specific transaction fees and regulations.
  • Proof of completion of the motorcycle rider safety course is required for new applicants.

Sponsors

Legislative Actions

Date Action
2026-01-21 referred to ways and means

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates a pilot program for 'intelligent speed assistance devices,' which represents a form of automated or algorithmic technology used to regulate vehicle speed.

Mechanism of Influence: It authorizes local governments to implement technological systems that can intervene in or monitor vehicle operation, which falls under the broader category of regulating automated and intelligent systems.

Evidence:

  • This provision allows cities with populations over one million to establish a pilot program for intelligent speed assistance devices to manage speed limit violations.

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but 'intelligent' speed assistance systems typically rely on algorithmic processing, data inputs, or computer vision to function.

Assembly - 1205 - Establishes the position of chief artificial intelligence officer

Legislation ID: 54982

Bill URL: View Bill

Summary

This legislation amends the state technology law to define artificial intelligence and automated decision-making systems, and to create the office of Chief Artificial Intelligence Officer. This officer will be responsible for developing statewide policies, ensuring compliance with laws, and coordinating AI activities across state agencies. The bill also establishes an advisory committee to assist in guiding AI practices and policy.

Key Sections

Key Requirements

  • Coordinate activities of state departments using AI tools.
  • Defines artificial intelligence and automated decision-making systems with specific criteria.
  • Develop and update state policies on AI and automated decision-making systems.
  • Excludes basic computerized processes that do not materially affect human rights or safety.
  • Investigate resource needs for adapting to AI changes in the regulatory landscape.
  • Members are appointed by various state leaders and must provide advice on AI practices and policies.
  • The Chief Artificial Intelligence Officer must be appointed by the governor with Senate consent.
  • The committee must meet at least twice a year.
  • The officer is responsible for overseeing the administration of the office and reporting to the executive department.
  • The officer must have expertise in AI, data privacy, and the technology industry.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to governmental operations
2025-01-09 referred to governmental operations

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes the foundational legal definitions for AI and automated decision-making systems within the state's jurisdiction.

Mechanism of Influence: These definitions dictate the scope of future regulations and determine which technologies are subject to the oversight of the Chief AI Officer.

Evidence:

  • The bill provides definitions for Artificial Intelligence and Automated Decision-Making System, detailing their functionalities and exclusions.
  • Excludes basic computerized processes that do not materially affect human rights or safety.

Ambiguity Notes: The exclusion of 'basic computerized processes' that do not 'materially affect human rights or safety' creates a subjective threshold for what constitutes regulated AI.

Analysis 2

Why Relevant: It creates a centralized regulatory authority (Chief AI Officer) dedicated to AI governance.

Mechanism of Influence: The officer is empowered to develop statewide policies, ensure compliance with existing laws, and coordinate the use of AI tools across all state departments.

Evidence:

  • This officer will be responsible for developing statewide policies, ensuring compliance with laws, and coordinating AI activities across state agencies.
  • Develop and update state policies on AI and automated decision-making systems.

Ambiguity Notes: The specific content of the 'statewide policies' is left to the discretion of the officer, meaning the actual regulatory requirements are yet to be drafted.

Analysis 3

Why Relevant: The legislation mandates the creation of an advisory body to shape AI best practices and policy.

Mechanism of Influence: The committee provides the expertise and recommendations that will form the basis of state AI policy and agency-level implementation.

Evidence:

  • An advisory committee is created to assist the Chief AI Officer in developing best practices and policies for AI use in state agencies.

Ambiguity Notes: The bill does not specify how much weight the Chief AI Officer must give to the committee's advice.

Assembly - 1338 - Relates to the admissibility of evidence created or processed by artificial intelligence

Legislation ID: 55115

Bill URL: View Bill

Summary

This legislation amends the criminal procedure law and civil practice law to set standards for the admissibility of evidence that is either created or processed by artificial intelligence. It requires that such evidence be supported by independent and admissible evidence and mandates that the proponent of the evidence demonstrate the reliability and accuracy of the AI's use in generating or processing that evidence.

Key Sections

Key Requirements

  • AI must have been rigorously tested in varied environments.
  • AI should not have been subjected to variables that may cause inaccuracies.
  • Evidence created by AI must be supported by independent evidence.
  • Evidence is created by AI if it produces new information not deducible from existing data.
  • Evidence is processed by AI if it draws conclusions not reasonably deducible from existing information.
  • Evidence processed by AI also requires proof of reliability and accuracy.
  • Expert testimony is required to validate the AI's use.
  • Proponent must establish the reliability and accuracy of the AI's use.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to codes
2025-01-09 referred to codes

Detailed Analysis

Analysis 1

Why Relevant: The legislation regulates the use and legal validity of artificial intelligence outputs within the judicial system.

Mechanism of Influence: It imposes a burden of proof on the proponent of AI evidence to demonstrate reliability and accuracy, effectively creating a regulatory framework for AI's application in legal evidence.

Evidence:

  • Evidence created by AI must be supported by independent evidence.
  • Proponent must establish the reliability and accuracy of the AI's use.
  • AI must have been rigorously tested in varied environments.
  • Expert testimony is required to validate the AI's use.

Ambiguity Notes: The distinction between 'new information not deducible' and 'conclusions not reasonably deducible' may lead to varying interpretations of what constitutes AI-created versus AI-processed evidence.

Assembly - 1342 - Requires the collection of oaths of responsible use from users of certain generative or surveillance advanced artificial intelligence systems

Legislation ID: 55119

Bill URL: View Bill

Summary

This legislation amends the general business law to mandate that operators of generative or surveillance advanced artificial intelligence systems collect oaths from users affirming their responsible use of these technologies. It defines key terms, outlines the requirements for user affirmation, and establishes penalties for non-compliance by operators.

Key Sections

Key Requirements

  • Mandates that users affirm their responsible use of the system under penalty of perjury.
  • Operators face fines for failing to present or collect oaths.
  • Operators may not modify the oath without authorization.
  • Operators must ensure the oath is sworn under penalty of perjury.
  • Operators must submit oaths to the attorney general within thirty days.
  • Requires users to create an account before using the AI system.
  • Users must affirm they will not use the AI system to create harmful or illegal content.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-01-09 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the operation and user-onboarding process for advanced artificial intelligence systems.

Mechanism of Influence: It mandates that AI operators implement a specific compliance mechanism (sworn oaths) and submit these records to the government, creating a layer of oversight and legal accountability for AI usage.

Evidence:

  • mandate that operators of generative or surveillance advanced artificial intelligence systems collect oaths from users affirming their responsible use
  • Operators must submit copies of the oaths taken by users to the attorney general within a specified timeframe
  • Operators must require users to create an account and affirm their responsible use of the AI system through a sworn statement

Ambiguity Notes: The term 'advanced artificial intelligence systems' is defined within the law, but its practical scope depends on how the attorney general interprets 'generative' or 'surveillance' capabilities.

Assembly - 1509 - Requires publishers of books created with the use of generative artificial intelligence to contain a disclosure of such use

Legislation ID: 55286

Bill URL: View Bill

Summary

This bill amends the general business law in New York to mandate that any book published in the state that has been wholly or partially created using generative artificial intelligence must include a conspicuous disclosure on its cover. This requirement applies to all types of books, including printed and digital formats, and aims to inform consumers about the nature of the content they are purchasing.

Key Sections

Key Requirements

  • Applies to all forms of books, including printed and digital.
  • Defines generative artificial intelligence to include machine learning, cognitive tasks, and systems that operate with minimal human oversight.
  • Requires books to have a visible disclosure if they are created using generative artificial intelligence.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-01-10 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses AI regulation and mandatory disclosures for AI-generated content.

Mechanism of Influence: It imposes a legal requirement on publishers to label books, providing transparency to consumers regarding the use of AI in the creative process.

Evidence:

  • mandate that any book published in the state that has been wholly or partially created using generative artificial intelligence must include a conspicuous disclosure on its cover.
  • Defines generative artificial intelligence to include machine learning, cognitive tasks, and systems that operate with minimal human oversight.

Ambiguity Notes: The phrase 'partially created' may require further clarification to determine the threshold of AI involvement that triggers the disclosure requirement, though the bill attempts to define AI through the lens of 'minimal human oversight'.

Assembly - 156 - Relates to the use of smart access systems and the information that may be gathered from such systems

Legislation ID: 53933

Bill URL: View Bill

Summary

This bill outlines the requirements and limitations for smart access systems used in multiple dwellings. It mandates that only essential data may be collected, prohibits certain types of data collection, and establishes penalties for violations. Additionally, it sets forth guidelines for the destruction of collected data and requires owners to provide written procedures to tenants regarding the use of these systems.

Key Sections

Key Requirements

  • Data must be stored securely to prevent unauthorized access.
  • Future tenancy cannot be conditioned on consent to use smart access systems.
  • Higher penalties apply for harassment or deprivation of tenant rights.
  • Limits data collection to necessary account information for system use.
  • Mandates destruction of biometric data within 48 hours unless for reference.
  • Owners must apply for approval before installing smart access systems.
  • Prohibits collection of information on tenant relationships or usage patterns for harassment or eviction purposes.
  • Prohibits location tracking through smart access systems.
  • Prohibits the collection of social security numbers.
  • Prohibits the sale or disclosure of biometric data without legal authorization.
  • Requires destruction or anonymization of collected data within specified timeframes.
  • Requires express consent from individuals before capturing biometric data.
  • Vendors must notify customers of security vulnerabilities within 24 hours.
  • Vendors must provide updates to fix vulnerabilities within 30 days.
  • Violators may face penalties up to $5,000 per violation.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to housing
2025-01-08 referred to housing

Detailed Analysis

Analysis 1

Why Relevant: The regulation of biometric data collection is a core component of AI oversight, as biometric identification systems—such as facial recognition or fingerprint analysis—typically rely on artificial intelligence and machine learning models.

Mechanism of Influence: By requiring express consent and limiting the retention of biometric data to 48 hours, the law restricts the operational parameters of AI-driven identification technologies in residential settings.

Evidence:

  • Sets strict conditions under which biometric data can be collected, emphasizing user consent and secure handling of such data.
  • Requires express consent from individuals before capturing biometric data.

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but the technologies used for biometric processing in modern 'smart access systems' are almost exclusively AI-based.

Analysis 2

Why Relevant: The bill mandates disclosures regarding software security and requires vendors to provide updates, which supports oversight and transparency for automated systems.

Mechanism of Influence: It creates a mandatory disclosure and remediation pipeline for software vulnerabilities, ensuring technical accountability for the vendors of automated access systems.

Evidence:

  • Vendors must notify customers of security vulnerabilities within 24 hours.
  • Vendors must provide updates to fix vulnerabilities within 30 days.

Ambiguity Notes: While these provisions apply to all software within smart access systems, they serve as a mechanism for the 'oversight' and 'disclosures' requested by the user regarding automated technologies.

Assembly - 1952 - Requires employers and employment agencies to notify candidates for employment if machine learning technology is used to make hiring decisions

Legislation ID: 55729

Bill URL: View Bill

Summary

This bill introduces a new section to the labor law concerning automated employment decision tools. It defines what constitutes such tools and mandates that employers notify candidates about their use in the hiring process, including details about the job qualifications considered, the data used, and the data retention policy. The bill also ensures candidates' rights to seek alternatives or accommodations in the selection process.

Key Sections

Key Requirements

  • Must inform candidates about data collection sources and retention policies.
  • Notification must include job qualifications and characteristics used in assessments.
  • Requires employers to notify candidates about the use of automated employment decision tools.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to labor
2025-01-14 referred to labor

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the use of automated systems and AI-driven tools in the context of employment decisions.

Mechanism of Influence: It mandates transparency through mandatory disclosures to candidates at least ten business days before an automated tool is utilized, requiring the disclosure of assessment criteria and data practices.

Evidence:

  • Employers using automated employment decision tools must notify candidates at least ten business days before the tool is used
  • Must inform candidates about data collection sources and retention policies.
  • Notification must include job qualifications and characteristics used in assessments.

Ambiguity Notes: The specific technical threshold for what constitutes an 'automated employment decision tool' depends on the provided definitions, which may vary in breadth regarding machine learning or simple algorithmic filtering.
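
The ten-business-day notice window above implies a concrete deadline calculation. As a hedged sketch: counting back ten weekdays from the date the tool is used gives the latest permissible notice date. Holidays are ignored here for simplicity, and the function name is an illustrative assumption.

```python
from datetime import date, timedelta

def latest_notice_date(tool_use_date: date, business_days: int = 10) -> date:
    """Count back the given number of weekdays (Mon-Fri) from tool_use_date."""
    d = tool_use_date
    remaining = business_days
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# Tool used Friday 2026-01-30 -> notice due no later than Friday 2026-01-16.
print(latest_notice_date(date(2026, 1, 30)))
```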

Analysis 2

Why Relevant: The legislation establishes a regulatory framework for AI oversight in the workplace by defining the scope of automated tools and ensuring candidate rights.

Mechanism of Influence: By defining 'automated employment decision tool' and 'employment decision,' the law creates a legal boundary for which AI technologies are subject to labor law oversight.

Evidence:

  • This provision defines key terms related to automated employment decision tools, including what constitutes an automated employment decision tool

Ambiguity Notes: The effectiveness of the regulation depends on how strictly 'automated employment decision tool' is defined and whether it captures all forms of AI used in hiring.

Assembly - 235 - Relates to unauthorized depictions of public officials generated by artificial intelligence

Legislation ID: 54012

Bill URL: View Bill

Summary

This bill introduces a new section to the general business law that addresses unauthorized depictions of public officials generated by artificial intelligence. It defines key terms related to artificial intelligence and establishes responsibilities for the owners and operators of AI systems to prevent unauthorized depictions of covered persons. The bill outlines the requirements for notification and the liability of system operators for failing to comply with these regulations.

Key Sections

Key Requirements

  • Liability is waived if reasonable prevention methods were in place.
  • Operators are liable if safeguards are not consistent with industry standards.
  • Operators liable for $100 per unauthorized depiction, up to $100,000 total.
  • Operators must implement reasonable safeguards for authorized depictions.
  • Requires AI system operators to implement a method to prevent unauthorized depictions within 60 days of notification.
  • Requires operators to have an easy-to-use notification system for covered persons.
  • Requires timely updates on the status of their requests.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-01-08 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the output and operational requirements of generative AI systems and defines key AI-related terminology.

Mechanism of Influence: It imposes a legal duty on AI operators to implement technical safeguards and administrative notice systems, creating financial liability for failing to prevent unauthorized AI-generated depictions.

Evidence:

  • This provision defines important terms used in the bill, including visual or audio generative artificial intelligence system, artificial intelligence
  • Operators of AI systems are liable for unauthorized depictions created by users
  • Operators of AI systems must implement a reasonable method to prevent users from creating unauthorized realistic depictions

Ambiguity Notes: The terms 'reasonable prevention methods' and 'industry standards' are not explicitly defined, which may lead to varying interpretations of what constitutes technical compliance.

Assembly - 3265 - Enacts the New York artificial intelligence bill of rights

Legislation ID: 57911

Bill URL: View Bill

Summary

This legislation, known as the New York artificial intelligence bill of rights, is designed to protect New York residents from the potential harms of automated decision-making systems. It outlines specific rights related to safety, discrimination, data privacy, and the ability to opt for human alternatives in interactions with automated systems. The bill emphasizes the importance of oversight and accountability in the development and deployment of such technologies.

Key Sections

Key Requirements

  • Automated systems must undergo equity assessments and disparity testing.
  • Automated systems must undergo pre-deployment testing and ongoing monitoring.
  • Consent for data collection must be clear and understandable.
  • Data collection must conform to reasonable expectations and only necessary data should be collected.
  • Designers must ensure accessibility for residents with disabilities.
  • Systems failing to meet safety standards must not be deployed or must be removed.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to science and technology
2025-01-27 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The legislation mandates audits and assessments for AI systems to ensure equity and safety.

Mechanism of Influence: It requires automated systems to undergo equity assessments, disparity testing, and pre-deployment safety testing before they can be used.

Evidence:

  • Automated systems must undergo equity assessments and disparity testing.
  • Automated systems must undergo pre-deployment testing and ongoing monitoring.

Ambiguity Notes: The specific technical standards for 'equity assessments' and 'disparity testing' are not detailed, leaving room for interpretation on what constitutes a passing result.

Analysis 2

Why Relevant: The bill regulates the deployment and continued use of AI based on performance and safety standards.

Mechanism of Influence: It grants the authority to prevent the deployment of or require the removal of systems that fail to meet safety standards or are found to be ineffective.

Evidence:

  • Systems failing to meet safety standards must not be deployed or must be removed.
  • Outlines the rights of residents to be protected from unsafe automated systems, requiring pre-deployment testing, ongoing monitoring, and removal of ineffective systems.

Ambiguity Notes: The criteria for 'ineffective systems' or 'safety standards' may be subject to administrative definition.

Analysis 3

Why Relevant: It requires disclosures regarding data collection and usage in the context of automated systems.

Mechanism of Influence: It mandates that consent for data collection be clear and understandable, effectively requiring a disclosure mechanism for users interacting with these systems.

Evidence:

  • Consent for data collection must be clear and understandable.
  • Ensures residents are protected from abusive data practices and maintain control over their personal data.

Ambiguity Notes: The term 'clear and understandable' is a subjective standard that may vary based on the target audience.

Assembly - 3327 - Relates to political communication utilizing artificial intelligence

Legislation ID: 58041

Bill URL: View Bill

Summary

This legislation amends the New York election law to require that any political communication using an artificial intelligence system must inform the recipient that they are interacting with AI. This applies to various forms of communication, including phone calls and emails, to promote transparency and accountability in political discourse.

Key Sections

Key Requirements

  • Requires political communications using AI to inform recipients that they are communicating with an artificial intelligence system.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to election law
2025-01-27 referred to election law

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly addresses the user's interest in AI regulation and disclosure requirements, specifically within the context of political communications.

Mechanism of Influence: It creates a legal mandate for transparency, requiring entities to inform individuals when they are interacting with an AI rather than a human, thereby affecting how AI is deployed in political campaigning.

Evidence:

  • This legislation amends the New York election law to require that any political communication using an artificial intelligence system must inform the recipient that they are interacting with AI.
  • This provision mandates that any political communication employing artificial intelligence to simulate human conversation must disclose to the recipient that they are engaging with an AI system.

Ambiguity Notes: The definition of 'simulate human conversation' may be subject to interpretation regarding the level of sophistication required to trigger the disclosure.

Assembly - 3356 - Relates to enacting the "advanced artificial intelligence licensing act"

Legislation ID: 58101

Bill URL: View Bill

Summary

This bill establishes a framework for the oversight of high-risk advanced artificial intelligence systems by empowering a secretary to review, recommend, and enforce compliance measures. It outlines the responsibilities of operators regarding system modifications, incident reporting, and compliance with ethical standards, as well as the penalties for non-compliance. The bill also addresses issues related to source code management, third-party integrations, and security risks associated with AI systems.

Key Sections

Key Requirements

  • AI systems that could disrupt critical infrastructure or cause significant harm are categorized as high-risk.
  • Internal controls must allow for indefinite cessation of system operations.
  • Licensees must not uncontain high-risk source code without written authorization from the secretary.
  • Licensees must provide access to relevant information during investigations.
  • Licensees must report significant malfunctions promptly to the department and applicable law enforcement agencies.
  • Licensees must submit a written notice detailing the purpose and risks of modifications or upgrades to the secretary.
  • Logs must be preserved for ten years and are subject to inspection.
  • Modifications require the secretary's approval within 30 business days, or they are deemed approved if no response is received.
  • Operators must provide a detailed plan for addressing recommendations, which is binding unless unexpected occurrences arise.
  • Requires operators to consult with the secretary on the feasibility and timeline for implementing recommendations.
  • Specific standards for log management must be followed, including access and encryption protocols.
  • The department will assess compliance with cybersecurity standards before issuing the certificate.
  • The secretary may designate specific reporting requirements based on system interactions with law enforcement.
  • The secretary may examine books, records, and logs of licensees to enforce compliance.
  • The secretary may prohibit access to information or source code with written justification.
  • The secretary will determine unique requirements for such systems.
  • Third parties sharing biometric information are jointly liable for violations.
  • Third-party systems must apply for a certificate of compliance before integration.
  • Violators may face class E felony or class A misdemeanor charges depending on the nature of the violation.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to science and technology
2025-01-27 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a direct oversight framework for high-risk AI systems, aligning with the user's interest in AI regulation.

Mechanism of Influence: It empowers a secretary to review systems, issue binding recommendations, and enforce compliance measures, effectively creating a licensing and regulatory body for AI operators.

Evidence:

  • This bill establishes a framework for the oversight of high-risk advanced artificial intelligence systems by empowering a secretary to review, recommend, and enforce compliance measures.

Ambiguity Notes: The term 'high-risk advanced artificial intelligence systems' is subject to secretary designation, which could be interpreted broadly or narrowly depending on future administrative rules.

Analysis 2

Why Relevant: The legislation mandates government oversight of AI source code and system modifications, similar to the user's interest in the submission of weights or technical specifications.

Mechanism of Influence: Licensees must submit written notices of modifications for approval and share source code with the secretary, preventing unauthorized changes to AI models.

Evidence:

  • Licensees must notify the secretary of any intended modifications or upgrades to their AI system's source code and cannot implement changes without approval.
  • Licensees can share information and source code with third parties... The secretary may prohibit access to information or source code with written justification.

Ambiguity Notes: While 'source code' is specified, the bill does not explicitly use the term 'weights,' though source code management often encompasses the parameters and architecture of the model.

Analysis 3

Why Relevant: The bill includes provisions for audits and investigations to ensure compliance with ethical and safety standards.

Mechanism of Influence: The secretary is authorized to conduct investigations, compel document production, and examine logs and records, functioning as a mandatory audit mechanism.

Evidence:

  • The secretary has the authority to conduct investigations to ensure compliance with the bills provisions and can compel the production of relevant documents.
  • The secretary may examine books, records, and logs of licensees to enforce compliance.

Ambiguity Notes: None

Analysis 4

Why Relevant: The bill requires mandatory disclosures regarding system failures and security risks.

Mechanism of Influence: Operators must promptly report significant malfunctions that could harm individuals to the department and law enforcement.

Evidence:

  • Licensees are required to notify relevant authorities of significant malfunctions in their AI systems that could harm individuals.
  • Licensees must report significant malfunctions promptly to the department and applicable law enforcement agencies.

Ambiguity Notes: None

Analysis 5

Why Relevant: The bill establishes criminal penalties for the 'uncontainment' of high-risk AI, representing a high level of regulatory enforcement.

Mechanism of Influence: Willful or negligent release of high-risk source code without authorization can result in felony or misdemeanor charges.

Evidence:

  • This section prohibits the willful or negligent uncontainment of high-risk AI source code, establishing penalties for violations based on the severity of the offense.
  • Violators may face class E felony or class A misdemeanor charges depending on the nature of the violation.

Ambiguity Notes: The definition of 'uncontainment' is not fully detailed but implies the public release or leakage of restricted AI code.

Assembly - 3411 - Requires warnings on generative artificial intelligence systems

Legislation ID: 58208

Bill URL: View Bill

Summary

This legislation amends the general business law by introducing a new section that mandates owners, licensees, or operators of generative artificial intelligence systems to display warnings on their user interfaces. These warnings must inform users that the system's outputs may not always be accurate or appropriate. Failure to comply with this requirement could result in civil penalties.

Key Sections

Key Requirements

  • Assess a civil penalty of $25 per user or up to $100,000 for non-compliance.
  • Each year of violation counts as a separate offense.
  • Requires a conspicuous warning on user interfaces of generative AI systems.
  • Warning must inform users about potential inaccuracies and inappropriate outputs.

Sponsors

Legislative Actions

Date Action
2026-01-28 delivered to senate
2026-01-28 passed assembly
2026-01-28 REFERRED TO INTERNET AND TECHNOLOGY
2026-01-07 ordered to third reading cal.110
2025-06-11 ordered to third reading rules cal.608
2025-06-11 reported
2025-06-11 rules report cal.608
2025-06-09 amend (t) and recommit to rules

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the deployment of generative artificial intelligence by mandating specific transparency and disclosure requirements.

Mechanism of Influence: It forces AI developers and operators to modify their user interfaces to include legal disclaimers, thereby informing users of the limitations of the technology and shifting some liability or awareness to the end-user.

Evidence:

  • This provision requires that a conspicuous warning be displayed on the user interface of generative artificial intelligence systems, indicating that the outputs may be inaccurate or inappropriate.

Ambiguity Notes: The term 'conspicuous' is not strictly defined, which could lead to variations in how prominent the warning must be. Additionally, 'inappropriate' is a subjective standard that may be difficult to define consistently across different AI applications.

Analysis 2

Why Relevant: The law introduces a financial enforcement mechanism specifically for AI-related compliance failures.

Mechanism of Influence: By imposing penalties of $25 per user or up to $100,000, the law creates a significant financial incentive for AI companies to adhere to state-mandated disclosure standards.

Evidence:

  • Assess a civil penalty of $25 per user or up to $100,000 for non-compliance.
  • Each year of violation counts as a separate offense.

Ambiguity Notes: The method for counting 'users' (e.g., unique visitors, registered accounts, or active monthly users) is not specified, which could lead to disputes over the total penalty amount.
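
The penalty arithmetic can be sketched to show how the user-counting ambiguity matters in practice. This is an illustrative model only: the user counts are hypothetical, and the treatment of the cap as annual follows the "each year counts as a separate offense" provision.

```python
PER_USER = 25        # $25 per user
ANNUAL_CAP = 100_000 # cap per year of violation

def penalty(users: int, years: int = 1) -> int:
    """Per-user penalty, capped each year; each year is a separate offense."""
    return min(users * PER_USER, ANNUAL_CAP) * years

print(penalty(1_200))            # 1,200 users x $25 = $30,000
print(penalty(50_000))           # cap binds: $100,000, not $1.25M
print(penalty(50_000, years=3))  # three separate annual offenses: $300,000
```

Because the per-user rate hits the cap at just 4,000 users, how "users" is counted (unique visitors vs. registered accounts) determines liability only for smaller services.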

Assembly - 3914 - Establishes criteria for the sale of automated employment decision tools

Legislation ID: 59212

Bill URL: View Bill

Summary

The bill introduces a new section to the labor law mandating that automated employment decision tools comply with specific criteria, including conducting annual disparate impact analyses. It defines key terms related to automated tools and outlines the responsibilities of employers regarding reporting and compliance, as well as the enforcement powers of the attorney general and commissioner.

Key Sections

Key Requirements

  • Attorney general can investigate based on evidence of violation.
  • Commissioner can initiate investigations and legal actions for compliance.
  • Conduct annual disparate impact analysis for automated employment decision tools.
  • Department may promulgate necessary rules and regulations.
  • Make summaries of the analysis publicly available on the employer's website.
  • Provide the department with summaries of the most recent analyses annually.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to labor
2025-01-30 referred to labor

Detailed Analysis

Analysis 1

Why Relevant: The legislation specifically targets automated decision-making systems used in employment, which falls under the umbrella of artificial intelligence regulation and oversight.

Mechanism of Influence: It imposes a mandatory audit requirement (disparate impact analysis) and transparency obligations (publicly available summaries), directly addressing the user's interest in AI audits and disclosures.

Evidence:

  • mandates the use of automated employment decision tools to comply with specific criteria, including conducting annual disparate impact analyses.
  • Employers must conduct annual disparate impact analyses for any automated employment decision tools they use, report the findings to the employer, and make summaries publicly available.

Ambiguity Notes: The scope of the regulation depends on the specific definition of 'automated employment decision tool,' which may vary in breadth to include different types of algorithmic or AI-driven software.

Assembly - 3930 - Regulates the use of artificial intelligence in aiding decisions on rental housing and loans

Legislation ID: 59244

Bill URL: View Bill

Summary

This bill amends the real property law, general business law, and banking law to establish guidelines for the use of automated decision tools in housing and loan applications. It mandates annual disparate impact analyses to assess potential biases, requires landlords and banks to provide clear notifications to applicants regarding the use of these tools, and prohibits the use of certain algorithms that rely on nonpublic competitor data.

Key Sections

Key Requirements

  • Applicants must be notified about the use of automated decision tools and their data processing policies at least 24 hours in advance.
  • Applicants must be notified at least 24 hours in advance of the use of automated decision tools.
  • A report with findings and recommendations must be submitted to the governor and legislature within 90 days after the study's completion.
  • A study must be conducted within one year of the act's effective date.
  • Banks must conduct annual disparate impact analyses for their automated decision tools.
  • Landlords cannot employ algorithms using nonpublic competitor data for pricing decisions.
  • Landlords must conduct an annual disparate impact analysis of their automated decision tools.
  • Landlords must disclose personal data processing practices to tenants.
  • Summaries of the analyses must be publicly available before the implementation of the tools.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to housing
2025-01-30 referred to housing

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the use of algorithms in the real estate market, specifically targeting price-fixing concerns.

Mechanism of Influence: It creates a legal prohibition against using specific types of data (nonpublic competitor data) within rent-setting algorithms, effectively regulating the logic and data inputs of AI tools.

Evidence:

  • This provision prohibits landlords from using algorithms that incorporate nonpublic competitor data to set or change rent amounts.

Ambiguity Notes: The term 'nonpublic competitor data' may require further regulatory definition to determine if it includes aggregated or anonymized data sets.

Analysis 2

Why Relevant: The legislation mandates transparency and consumer notification regarding the use of automated systems.

Mechanism of Influence: It requires a 24-hour advance notice to applicants, fulfilling the user's interest in 'requiring disclosures' for AI usage.

Evidence:

  • Applicants must be notified at least 24 hours in advance of the use of automated decision tools.
  • Applicants must be notified about the use of automated decision tools and their data processing policies at least 24 hours in advance.

Ambiguity Notes: The bill does not specify the required format or level of detail for the notification beyond 'data processing policies'.

Analysis 3

Why Relevant: The bill requires mandatory bias testing, which aligns with the user's interest in 'requiring audits'.

Mechanism of Influence: It forces entities to conduct annual 'disparate impact analyses' and make summaries publicly available, creating a public oversight mechanism for algorithmic bias.

Evidence:

  • Landlords must conduct an annual disparate impact analysis of their automated decision tools.
  • Banks must conduct annual disparate impact analyses for their automated decision tools.
  • Summaries of the analyses must be publicly available before the implementation of the tools.

Ambiguity Notes: The specific metrics or standards for what constitutes a sufficient 'disparate impact analysis' are not detailed in the abstract.
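
Since the bill leaves the metric open, one plausible screen is the "four-fifths" benchmark familiar from employment testing: compare approval rates across groups and flag ratios below 0.8. The sketch below is an assumption about how such an analysis might be run, with illustrative counts; the bill does not prescribe this metric or any threshold.

```python
def approval_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants approved by the automated tool."""
    return approved / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's approval rate to the reference group's rate."""
    return group_rate / reference_rate

ref = approval_rate(450, 600)   # reference group: 0.75
grp = approval_rate(210, 400)   # comparison group: 0.525
ratio = impact_ratio(grp, ref)  # 0.70 -> below the 0.8 benchmark
print(round(ratio, 2), ratio < 0.8)
```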

Analysis 4

Why Relevant: The bill initiates government oversight and research into AI's societal impacts.

Mechanism of Influence: By mandating a formal study and a report to the governor and legislature, the bill creates a pathway for future AI-specific legislation and regulatory standards.

Evidence:

  • This provision mandates a study on the impact of artificial intelligence on housing discrimination and redlining, to be conducted by relevant state departments.

Ambiguity Notes: None

Assembly - 3932 - Relates to improving safety for third-party food deliveries

Legislation ID: 59247

Bill URL: View Bill

Summary

This bill amends the general business law to introduce regulations for third-party food delivery services regarding the delivery of food by bicycles with electric assist or electric scooters. It aims to ensure that delivery platforms do not impose unrealistic delivery times or penalize workers for traffic law violations, thereby promoting safer delivery practices.

Key Sections

Key Requirements

  • Allows audits of delivery platform algorithms by designated officials.
  • First violation fine up to $250,000; subsequent violations up to $500,000.
  • No penalties for traffic law violations during deliveries.
  • Penalties for traffic violations are capped at five dollars per day.
  • Prohibits incentives for deliveries that cannot be achieved safely by bicycle or scooter within a set timeframe.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-01-30 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mandates the auditing of algorithms used by third-party delivery platforms to ensure compliance with safety standards.

Mechanism of Influence: It grants state officials the authority to inspect and audit the logic and outputs of delivery algorithms, representing a form of algorithmic oversight and regulation.

Evidence:

  • This provision grants authority to specific state officials to audit the algorithms of third-party delivery platforms to ensure compliance with safety regulations.
  • Allows audits of delivery platform algorithms by designated officials.

Ambiguity Notes: While the bill uses the term 'algorithms' rather than 'artificial intelligence,' the automated systems used for route optimization and time estimation in delivery platforms typically fall under the broader umbrella of AI and automated decision-making systems.

Analysis 2

Why Relevant: The legislation sets specific prohibitions on how algorithms can be programmed and utilized regarding worker performance and delivery estimates.

Mechanism of Influence: It legally restricts the parameters of the delivery platform's automated systems, prohibiting the use of algorithms that calculate or promote delivery times that are physically impossible to achieve safely.

Evidence:

  • This provision prohibits third-party food delivery platforms from using algorithms that promote delivery times that cannot be realistically achieved by bicycles with electric assist or electric scooters traveling at safe speeds.

Ambiguity Notes: None

Assembly - 4427 - Relates to the use of external consumer data and information sources being used when determining insurance rates

Legislation ID: 60078

Bill URL: View Bill

Summary

This bill amends the insurance law to prohibit insurers from using external consumer data and information sources in ways that unfairly discriminate against individuals based on race, gender, and other protected characteristics. It establishes a framework for the superintendent of insurance to oversee and regulate the use of such data, ensuring that insurers demonstrate compliance and mitigate discriminatory practices. The bill also mandates stakeholder engagement and provides for the confidentiality of proprietary information.

Key Sections

Key Requirements

  • Definitions include terms such as algorithm, external consumer data, insurance practice, and predictive model.
  • Description of data sources discussed during the stakeholder process.
  • Documents obtained by the department are not subject to public disclosure or subpoena.
  • Does not apply to title insurance, surety bonds, or commercial insurance policies with certain exceptions.
  • Establish a risk management framework to assess potential discrimination.
  • Insurers must comply with rules adopted by the superintendent regarding the use of external consumer data.
  • Insurers must provide information about the external data sources and their use in algorithms.
  • No insurer shall unfairly discriminate based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.
  • Provide ongoing assessments and attestations regarding the implementation of these frameworks.
  • Report includes changes in insurance rates due to the prohibitions.
  • Rules must be adopted after stakeholder engagement.
  • Summary of stakeholder engagement process.
  • The superintendent may only disclose aggregated or de-identified data.
  • The superintendent must hold stakeholder meetings and provide notice of these meetings publicly.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to insurance
2025-02-04 referred to insurance

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly regulates the use of algorithms and predictive models, which are fundamental components of artificial intelligence systems used in automated decision-making.

Mechanism of Influence: It requires insurers to establish risk management frameworks and conduct testing to mitigate discriminatory outcomes from these technologies, effectively mandating a form of algorithmic auditing and oversight.

Evidence:

  • Definitions include terms such as algorithm, external consumer data, insurance practice, and predictive model.
  • Insurers must provide information about the external data sources and their use in algorithms.
  • Establish a risk management framework to assess potential discrimination.

Ambiguity Notes: While the bill uses terms like algorithm and predictive model rather than artificial intelligence exclusively, these terms encompass the AI technologies used for underwriting and pricing in the insurance industry.

Assembly - 4550 - Requires the department of labor to study the long-term impact of artificial intelligence on the state workforce

Legislation ID: 60326

Bill URL: View Bill

Summary

This legislation mandates the New York Department of Labor, in consultation with relevant state departments, to conduct a comprehensive study on how artificial intelligence affects job performance, productivity, training, education requirements, privacy, and security within the state workforce. The department is required to report its findings and recommendations for legislative action every five years, culminating in a final report by January 1, 2035. Additionally, the bill prohibits state entities from using artificial intelligence in a manner that would displace employees until the final report is received.

Key Sections

Key Requirements

  • A final report is due by January 1, 2035.
  • No state department or entity may use AI to displace employees until the completion of the study.
  • The Department of Labor must issue interim reports every five years.
  • The study must begin within six months of the bill's effective date.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to ways and means
2025-04-30 reported referred to ways and means
2025-04-18 amend (t) and recommit to labor
2025-04-18 print number 4550a
2025-02-04 referred to labor

Detailed Analysis

Analysis 1

Why Relevant: The bill imposes a direct regulatory restriction on the application of AI by state entities.

Mechanism of Influence: It establishes a moratorium on AI-driven employee displacement, effectively regulating how the technology can be deployed within the public sector.

Evidence:

  • State entities are prohibited from using artificial intelligence in ways that would replace human workers until the final report is received.
  • No state department or entity may use AI to displace employees until the completion of the study.

Ambiguity Notes: The term "displace" is not explicitly defined, leaving it open to interpretation whether it refers only to layoffs or also to the reduction of hours or reassignment of duties.

Analysis 2

Why Relevant: The legislation mandates government oversight and periodic reporting on AI's effects on privacy and security.

Mechanism of Influence: By requiring the Department of Labor to study and report on AI's impact every five years, the bill creates a framework for ongoing legislative oversight and potential future regulation based on the findings.

Evidence:

  • The Department of Labor is required to conduct a study on the long-term effects of artificial intelligence on the workforce, including various factors such as job performance and privacy.
  • The department is required to report its findings and recommendations for legislative action every five years.

Ambiguity Notes: The scope of "privacy" and "security" within the study is broad and may encompass both data protection and physical workplace security.

Assembly - 4947 - Relates to enacting the NY privacy act

Legislation ID: 61232

Bill URL: View Bill

Summary

The New York Privacy Act aims to establish comprehensive privacy protections for consumers in New York by granting them rights over their personal data, including the ability to access, correct, and delete their data, as well as requiring businesses to implement reasonable data security measures and obtain consent for data processing. The act also empowers the New York State Attorney General to enforce compliance and allows consumers to seek legal recourse for violations.

Key Sections

Key Requirements

  • Applies to businesses with annual revenue over $25 million.
  • Applies to entities controlling or processing data of 100,000 consumers or more.
  • Businesses must maintain reasonable data security.
  • Businesses must notify consumers of foreseeable harms from data use.
  • Businesses must obtain specific consent for data processing.
  • Businesses must provide clear notice of data usage.
  • Consumers must be able to access, correct, and delete their data.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-02-10 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The act provides a regulatory framework for the collection and processing of personal data, which is a foundational component for training and operating AI models involving consumer information.

Mechanism of Influence: Provisions requiring 'specific consent for data processing' and 'clear notice of data usage' would legally constrain how companies gather datasets for AI training and how they deploy AI-driven analytics on New York residents.

Evidence:

  • Businesses must obtain specific consent for data processing.
  • Businesses must provide clear notice of data usage.
  • Businesses must notify consumers of foreseeable harms from data use.

Ambiguity Notes: The text lacks explicit mentions of 'Artificial Intelligence,' 'algorithms,' or 'automated decision-making,' meaning its application to AI depends on the broad interpretation of 'data processing' and 'foreseeable harms.'

Assembly - 4991 - Prohibits the use of an algorithmic device by a landlord for the purpose of determining the amount of rent to charge a residential tenant

Legislation ID: 61348

Bill URL: View Bill

Summary

The bill amends the real property law to explicitly forbid landlords from employing algorithmic devices that utilize nonpublic competitor data for setting rent. This measure is introduced in response to allegations that such practices could lead to higher rents and diminish landlords' direct involvement in pricing decisions. The bill outlines definitions for algorithmic devices and nonpublic competitor data, and establishes penalties for violations.

Key Sections

Key Requirements

  • Algorithmic device refers to any device using algorithms for rent calculations, excluding certain reports and affordable housing guidelines.
  • Landlords must determine rent amounts without the assistance of algorithmic devices that use nonpublic competitor data.
  • Nonpublic competitor data refers to non-public information about rent prices and occupancy rates.
  • Rent includes all charges a tenant must pay under a rental agreement.
  • Violations are considered deceptive acts under the General Business Law, leading to potential penalties.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to housing
2025-02-10 referred to housing

Detailed Analysis

Analysis 1

Why Relevant: The legislation specifically regulates the use of 'algorithmic devices,' which encompasses automated decision-making systems and AI-driven pricing models used in the real estate sector.

Mechanism of Influence: It restricts the data inputs allowed for these algorithms, specifically banning the use of nonpublic competitor data to prevent algorithmic price-fixing or collusion.

Evidence:

  • Algorithmic device refers to any device using algorithms for rent calculations
  • Landlords are prohibited from using algorithmic devices that incorporate nonpublic competitor data to set or adjust residential rent amounts.

Ambiguity Notes: The term 'algorithmic device' is defined broadly as any device using algorithms for rent calculations, which could capture a wide range of software from simple spreadsheets to complex machine learning models, though it excludes certain standard reports.

Assembly - 5216 - Requires state units to purchase a product or service that is or contains an algorithmic decision system that adheres to responsible artificial intelligence standards

Legislation ID: 61793

Bill URL: View Bill

Summary

This legislation amends the state finance law to include requirements for the purchase of algorithmic decision systems by state units. It defines what constitutes an algorithmic decision system and mandates that such systems adhere to standards that prevent harm, promote transparency, ensure fairness, and undergo thorough evaluation. Additionally, it modifies the definition of unlawful discriminatory practices to include actions taken through these systems.

Key Sections

Key Requirements

  • Includes acts of discrimination conducted via algorithmic decision systems as unlawful practices.
  • Requires state units to purchase algorithmic decision systems that avoid harm, promote transparency, prioritize fairness, and undergo comprehensive evaluation.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to governmental operations
2025-02-12 referred to governmental operations

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the acquisition and use of algorithmic decision systems, which are a primary form of artificial intelligence used in automated decision-making.

Mechanism of Influence: It imposes procurement standards on state agencies, requiring them to evaluate AI systems for fairness and transparency before purchase, effectively creating a regulatory framework for government AI usage.

Evidence:

  • This legislation amends the state finance law to include requirements for the purchase of algorithmic decision systems by state units.
  • Requires state units to purchase algorithmic decision systems that avoid harm, promote transparency, prioritize fairness, and undergo comprehensive evaluation.

Ambiguity Notes: The definition of 'algorithmic decision system' is broad and likely covers a wide range of machine learning and AI technologies beyond simple rule-based software.

Analysis 2

Why Relevant: The legislation addresses the legal accountability of AI systems regarding civil rights and discrimination.

Mechanism of Influence: By expanding the definition of unlawful discriminatory practices to include those performed through algorithmic systems, it ensures that AI-driven bias is subject to existing legal protections.

Evidence:

  • This provision expands the definition of unlawful discriminatory practices to include those performed through algorithmic decision systems.
  • Includes acts of discrimination conducted via algorithmic decision systems as unlawful practices.

Ambiguity Notes: None

Assembly - 563 - Requires policies for the use of automatic license plate reader systems

Legislation ID: 54340

Bill URL: View Bill

Summary

The bill amends the executive law and general business law to require the development of minimum standards for the use of automatic license plate reader systems by non-law enforcement entities. These standards will cover permissible uses, data sharing, record retention, and employee training. Non-law enforcement entities will be required to publicly disclose these standards on their websites or in their main offices. The bill also mandates the establishment of a training program for employees regarding these policies.

Key Sections

Key Requirements

  • Non-law enforcement entities must post the minimum standards policy on their website or in their main office.
  • Policy must be available to the public upon request.
  • Policy must include provisions on permissible uses, data sharing, record retention, and training.
  • Requires the development of a minimum standards policy for automatic license plate reader systems.

Sponsors

Legislative Actions

Date Action
2026-01-12 delivered to senate
2026-01-12 passed assembly
2026-01-12 REFERRED TO CONSUMER PROTECTION
2026-01-07 DIED IN SENATE
2026-01-07 ordered to third reading cal.16
2026-01-07 RETURNED TO ASSEMBLY
2025-05-05 delivered to senate
2025-05-05 passed assembly

Detailed Analysis

Analysis 1

Why Relevant: Automatic license plate reader (ALPR) systems are a specific application of computer vision and automated data processing, which are core components of artificial intelligence technology.

Mechanism of Influence: The bill imposes disclosure requirements and operational standards on automated surveillance technology, requiring entities to publish their data usage and retention policies.

Evidence:

  • require the development of minimum standards for the use of automatic license plate reader systems
  • Non-law enforcement entities will be required to publicly disclose these standards
  • Policy must include provisions on permissible uses, data sharing, record retention, and training.

Ambiguity Notes: The bill does not explicitly use the term 'artificial intelligence,' but it regulates a technology that relies on AI-driven character recognition and automated decision-making regarding data capture.

Assembly - 6545 - Imposes liability for damages caused by a chatbot impersonating licensed professionals

Legislation ID: 64450

Bill URL: View Bill

Summary

The bill introduces a new section to the general business law that defines chatbots and their proprietors, sets forth restrictions on the type of information and advice chatbots can provide, and outlines the liabilities for proprietors who violate these regulations. It mandates clear notification to users that they are interacting with a chatbot and allows individuals to pursue civil action for damages caused by violations.

Key Sections

Key Requirements

  • Allows individuals to recover actual damages and attorney fees if a proprietor willfully violates the law.
  • Notice must be in the same language and font size as the chatbot's text.
  • Prohibits chatbots from giving medical or psychological advice.
  • Prohibits chatbots from providing responses that constitute a crime under specific education law sections.
  • Requires clear, conspicuous notice that users are interacting with a chatbot.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-04-07 amend (t) and recommit to consumer affairs and protection
2025-04-07 print number 6545a
2025-03-06 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates chatbots, which are a primary application of artificial intelligence technology.

Mechanism of Influence: It imposes legal restrictions on the content AI can generate and mandates transparency disclosures to users.

Evidence:

  • The bill introduces a new section to the general business law that defines chatbots and their proprietors
  • Proprietors are prohibited from allowing chatbots to provide certain types of substantive responses
  • Proprietors must provide clear notice to users that they are interacting with a chatbot

Ambiguity Notes: The term 'substantive responses' may require further legal clarification to determine the threshold of prohibited advice versus general information.

Analysis 2

Why Relevant: The legislation addresses AI transparency and consumer protection through mandatory disclosures.

Mechanism of Influence: By requiring notice in the same language and font size as the chatbot's text, it ensures users are aware they are not speaking to a human.

Evidence:

  • Requires clear, conspicuous notice that users are interacting with a chatbot.
  • Notice must be in the same language and font size as the chatbot's text.

Ambiguity Notes: None

Assembly - 6578 - Establishes the artificial intelligence training data transparency act

Legislation ID: 64494

Bill URL: View Bill

Summary

This legislation introduces the Artificial Intelligence Training Data Transparency Act, mandating developers of generative AI models to publicly disclose detailed information about the datasets used for training these models. It defines key terms related to artificial intelligence and outlines specific requirements for documentation, especially regarding employee data. Certain exceptions are included for models related to national security or aviation.

Key Sections

Key Requirements

  • Developers must post documentation on their website regarding the training data used.
  • Disclosure must include types of data points and time periods for data collection.
  • Documentation must include sources, descriptions, number of data points, copyright status, and whether personal information is included.
  • Entities must disclose the intended purpose of the AI model to employees.

Sponsors

Legislative Actions

Date Action
2026-01-12 amended on third reading 6578a
2026-01-07 DIED IN SENATE
2026-01-07 ordered to third reading cal.166
2026-01-07 RETURNED TO ASSEMBLY
2025-06-10 delivered to senate
2025-06-10 ordered to third reading rules cal.571
2025-06-10 passed assembly
2025-06-10 REFERRED TO RULES

Detailed Analysis

Analysis 1

Why Relevant: The act directly addresses AI regulation and disclosure requirements by mandating transparency for training datasets.

Mechanism of Influence: It requires developers to publicly post documentation regarding the sources, copyright status, and personal information contained within training data before a model is released.

Evidence:

  • Developers must disclose information about the datasets used to train generative AI models before making them publicly available.
  • Documentation must include sources, descriptions, number of data points, copyright status, and whether personal information is included.

Ambiguity Notes: While it requires 'descriptions' of data, the specific granularity of these descriptions is not fully defined, potentially allowing for varying levels of detail.

Analysis 2

Why Relevant: The legislation includes specific regulatory requirements for the use of employee data in AI development.

Mechanism of Influence: It creates a legal obligation for entities to inform employees about the purpose of AI models and the specific types of employee data used to train them.

Evidence:

  • Entities using employee data to train AI models must inform employees about how their data is used, including the purpose of the AI model and the types of data involved.

Ambiguity Notes: None

Assembly - 6656 - Relates to requiring responsible capability scaling policies

Legislation ID: 64602

Bill URL: View Bill

Summary

This legislation amends the General Business Law to introduce a new section focusing on responsible capability scaling policies for artificial intelligence. It mandates that all businesses operating in New York develop a responsible capability scaling policy and file an annual certification of compliance with the Chief Information Officer. The bill also outlines the roles of the Chief Information Officer and the Attorney General in overseeing compliance and auditing policies.

Key Sections

Key Requirements

  • Allows joint filings of AI compliance and cybersecurity certifications.
  • Allows the Chief Information Officer to issue waivers for certain entities.
  • Grants the Attorney General the power to audit policies filed under this section.
  • Mandates annual certification of compliance with this policy to be filed with the Chief Information Officer.
  • Requires all businesses operating in New York to develop a responsible capability scaling policy for AI.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-03-06 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly establishes an auditing framework for AI compliance policies.

Mechanism of Influence: The Attorney General and Chief Information Officer are granted the authority to review and audit the AI policies filed by businesses to ensure regulatory adherence.

Evidence:

  • The Attorney General, in coordination with the Chief Information Officer, can audit the compliance policies submitted by businesses.
  • Grants the Attorney General the power to audit policies filed under this section.

Ambiguity Notes: The specific technical standards for what constitutes a 'responsible capability scaling policy' are left to be defined by the Chief Information Officer through rule promulgation.

Analysis 2

Why Relevant: It mandates the creation of internal AI governance policies and annual disclosures to the government.

Mechanism of Influence: Businesses must file an annual certification of compliance regarding their AI practices, creating a mandatory reporting and oversight loop with the state government.

Evidence:

  • Businesses must create and implement a responsible capability scaling policy concerning their use of artificial intelligence.
  • Mandates annual certification of compliance with this policy to be filed with the Chief Information Officer.

Ambiguity Notes: The scope of 'Artificial Intelligence' is defined within the bill, but the breadth of businesses impacted depends on the CIO's use of waiver and exemption authority.

Assembly - 6720 - Relates to banning the use of biometric identifying technology in schools

Legislation ID: 64694

Bill URL: View Bill

Summary

This bill amends the state technology law to ban public and nonpublic elementary and secondary schools from purchasing or using biometric identifying technology, such as facial recognition, for any purpose. It allows limited use of such technology for employee identification under certain conditions but requires a comprehensive report to assess the implications of biometric technology in educational settings.

Key Sections

Key Requirements

  • A report must be prepared evaluating privacy implications, civil rights impacts, and security effectiveness of biometric technology.
  • Limited use is allowed for employee identification with written consent.
  • Scheduled public hearings and outreach methods must be conducted to gather feedback.
  • Schools cannot purchase or utilize biometric identifying technology for any purpose.
  • Stakeholder consultation is required in preparing the report.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to education
2025-04-16 amend and recommit to education
2025-04-16 print number 6720a
2025-03-11 referred to education

Detailed Analysis

Analysis 1

Why Relevant: Facial recognition and other biometric identifying technologies are primary applications of artificial intelligence, specifically computer vision and pattern recognition. Regulating these tools falls under the umbrella of AI oversight.

Mechanism of Influence: The bill imposes a direct ban on the acquisition and use of these AI-driven technologies in schools, effectively halting their deployment until a formal impact assessment is conducted.

Evidence:

  • ban public and nonpublic elementary and secondary schools from purchasing or using biometric identifying technology, such as facial recognition
  • A report must be prepared evaluating privacy implications, civil rights impacts, and security effectiveness of biometric technology.

Ambiguity Notes: While 'biometric identifying technology' is defined with examples like facial recognition, the scope could extend to other AI-based systems like iris scanning or behavioral biometrics depending on the legal definition of 'biometric'.

Assembly - 6765 - Enacts the preventing algorithmic pricing discrimination act

Legislation ID: 98289

Bill URL: View Bill

Summary

This bill amends the general business law in New York to introduce regulations on algorithmically set prices. It requires clear disclosure when personalized algorithmic pricing is used, particularly in consumer transactions, and prohibits the use of protected class data in pricing decisions that could lead to discrimination. The bill aims to protect consumers from unfair pricing practices and enhance their understanding of how their personal data may influence pricing.

Key Sections

Key Requirements

  • Allows aggrieved individuals to file actions under section 297 of the executive law.
  • Allows the attorney general to seek injunctions for violations.
  • Exempts financial services from the provisions of the bill.
  • Exempts licensed insurers from the disclosure requirements.
  • Imposes civil penalties of up to $1,000 for each violation.
  • Prohibits pricing based on protected class data that leads to discrimination.
  • Requires clear and conspicuous disclosure in advertisements for algorithmically set prices.

Sponsors

Legislative Actions

Date Action
2026-01-07 ordered to third reading cal.173
2025-03-25 amended on third reading 6765a
2025-03-25 ordered to third reading rules cal.114
2025-03-25 reported
2025-03-25 reported referred to codes
2025-03-25 reported referred to rules
2025-03-25 rules report cal.114
2025-03-12 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the requirement for disclosures regarding the use of algorithms in consumer-facing transactions.

Mechanism of Influence: It mandates that any advertisement or announcement for a price set by an algorithm must include a clear and conspicuous disclosure to the consumer.

Evidence:

  • This provision mandates that any advertisement or announcement of personalized algorithmic pricing must include a clear disclosure stating that the price was set by an algorithm using the consumer's personal data.

Ambiguity Notes: The scope of 'personalized algorithmic pricing' depends on the specific definition of 'algorithm' provided in the bill's definitions section.

Analysis 2

Why Relevant: The legislation regulates the data inputs and decision-making processes of algorithmic systems to prevent bias.

Mechanism of Influence: By prohibiting the use of protected class data in pricing decisions, the law restricts how AI and algorithmic models can be trained or deployed in commercial settings.

Evidence:

  • This section prohibits the use of protected class data in pricing decisions if it results in discrimination against individuals or groups based on legally protected characteristics.

Ambiguity Notes: The bill mentions 'discrimination' but may rely on existing executive law to define the specific thresholds for what constitutes a discriminatory algorithmic output.

Analysis 3

Why Relevant: The bill establishes a regulatory framework for algorithmic oversight, including definitions and enforcement mechanisms.

Mechanism of Influence: It empowers the attorney general to seek injunctions and impose civil penalties for failure to comply with algorithmic transparency requirements.

Evidence:

  • This section provides definitions for key terms used throughout the bill, including algorithm, consumer data, dynamic pricing
  • This section outlines the enforcement mechanisms available to the attorney general, including the ability to seek injunctions against violators and impose civil penalties

Ambiguity Notes: None

Assembly - 6767 - Relates to artificial intelligence companion models

Legislation ID: 98295

Bill URL: View Bill

Summary

This bill amends the General Business Law to introduce regulations for artificial intelligence companion models. It defines key terms related to AI and establishes requirements for operators of AI companions, including protocols for handling user expressions of self-harm or harm to others. The bill mandates notifications to users about the nature of AI companions and provides a legal basis for users to seek damages in case of violations.

Key Sections

Key Requirements

  • Mandates notification of crisis services to users expressing harmful thoughts.
  • Requires AI companions to have protocols for addressing suicidal ideation, self-harm, and harm to others.
  • Requires operators to provide a notification at the start and every three hours during interactions.

Sponsors

Legislative Actions

Date Action
2026-01-07 DIED IN SENATE
2026-01-07 ordered to third reading cal.175
2026-01-07 RETURNED TO ASSEMBLY
2025-03-25 delivered to senate
2025-03-25 ordered to third reading rules cal.116
2025-03-25 passed assembly
2025-03-25 REFERRED TO CONSUMER PROTECTION
2025-03-25 reported

Detailed Analysis

Analysis 1

Why Relevant: The bill directly defines and regulates artificial intelligence companion models.

Mechanism of Influence: It establishes legal definitions for AI and generative AI, setting the scope for regulatory oversight.

Evidence:

  • This section provides definitions for key terms related to artificial intelligence and AI companions, including what constitutes AI, generative AI, and emotional recognition algorithms.

Ambiguity Notes: The definition of 'emotional recognition algorithms' may be broad depending on the technical implementation.

Analysis 2

Why Relevant: The bill requires specific disclosures to users about the nature of the AI.

Mechanism of Influence: Operators must notify users every three hours that the companion is not human and lacks emotions, ensuring transparency.

Evidence:

  • Operators are required to notify users at the start of interactions and every three hours thereafter that the AI companion is not a human and cannot feel emotions.

Ambiguity Notes: The frequency of notification (every three hours) might be interpreted as continuous interaction or cumulative time.

Analysis 3

Why Relevant: The bill mandates safety protocols and crisis intervention for AI interactions.

Mechanism of Influence: AI operators must implement systems to detect and respond to expressions of self-harm or harm to others, including referrals to crisis services.

Evidence:

  • Operators of AI companions must implement protocols to address potential suicidal ideation, physical harm, or financial harm expressed by users, including referrals to crisis services.

Ambiguity Notes: The specific 'protocols' required are not detailed, leaving implementation details to the operators or future regulation.

Assembly - 6874 - Establishes the Artificial Intelligence Literacy Act

Legislation ID: 98397

Bill URL: View Bill

Summary

This bill proposes the creation of an artificial intelligence literacy program within the digital equity competitive grant program. It seeks to provide funding for schools, community colleges, and organizations to develop and implement AI literacy initiatives, ensuring that individuals from all backgrounds gain essential knowledge and skills related to artificial intelligence technologies. The program emphasizes training for educators, resources for students, and outreach to underserved communities to bridge the digital divide.

Key Sections

Key Requirements

  • Community colleges must develop interdisciplinary AI literacy programs and labs.
  • Community organizations must provide training and educational programming.
  • Criteria for grant applications and recipient selection must be established.
  • Grants must be available for public schools, community colleges, higher education institutions, and community organizations.
  • Higher education institutions must create labs and tools for AI education.
  • Reports must include data on training, implementation, student reach, and grant usage.
  • Schools must use funds for teacher training, professional development, and creating AI learning materials.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to education
2025-04-28 amend (t) and recommit to education
2025-04-28 print number 6874a
2025-03-18 referred to education

Detailed Analysis

Analysis 1

Why Relevant: The bill addresses the educational and literacy aspects of artificial intelligence, which is a foundational element of AI policy and public oversight.

Mechanism of Influence: It establishes government-funded programs that require reporting on AI implementation and training, providing a mechanism for state oversight of AI education.

Evidence:

  • The commissioner must submit an annual report summarizing the effectiveness of the grant program.
  • Entities receiving grants must report on their progress and impact annually for four years.
  • This section defines key terms related to artificial intelligence literacy

Ambiguity Notes: The bill focuses on literacy and education rather than direct technical regulation, disclosures, or audits of AI models, but it sets a precedent for government involvement in AI-related standards and definitions.

Assembly - 6972 - Relates to creating an artificial intelligence working group in the department of education

Legislation ID: 98500

Bill URL: View Bill

Summary

This legislation seeks to address the urgent need for guidance on the integration of artificial intelligence in education. It proposes the formation of an artificial intelligence working group tasked with creating policies that ensure AI technologies enhance educational quality without compromising the roles of educators or the learning experience of students. The working group will assess current AI usage in schools, develop best practices, and provide recommendations for policy and legislative changes.

Key Sections

Key Requirements

  • Conduct at least three public meetings for stakeholder feedback.
  • Guidance must address academic integrity, data privacy, and acceptable uses of AI.
  • Model policy must include strategies for equitable AI use in education.
  • Requires the formation of a working group to develop guidance on AI use in education.
  • Submit a report detailing assessments and recommendations by January 1, 2027.
  • The working group must assess current and projected AI use in education.
  • The working group must include diverse members with expertise in education and AI.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to education
2025-03-18 referred to education

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation and oversight of AI technologies within the educational sector.

Mechanism of Influence: By establishing a working group to create model policies and guidance, the state initiates a framework for how AI tools can be legally and safely deployed in classrooms, affecting procurement and usage standards.

Evidence:

  • Establishes a working group in the Department of Education to develop guidance and policies for AI use in schools.
  • Guidance must address academic integrity, data privacy, and acceptable uses of AI.

Ambiguity Notes: While the bill focuses on guidance and model policies rather than hard prohibitions or technical audits like weight submission, these policies often form the basis for future mandatory regulations.

Assembly - 6974 - Enacts the Stop Addictive Feeds Exploitation (SAFE) for all act

Legislation ID: 98497

Bill URL: View Bill

Summary

This bill introduces new regulations under the General Business Law to address the impact of addictive feeds on users, particularly in social media platforms. It defines key terms related to addictive feeds and algorithmic recommendations, mandates user control settings, prohibits deceptive design practices (dark patterns), and establishes enforcement mechanisms including penalties for non-compliance.

Key Sections

Key Requirements

  • Attorney General can bring actions to enjoin violations.
  • Civil penalties of up to $5,000 per violation can be imposed.
  • It is unlawful to deploy mechanisms that inhibit user autonomy.
  • Operators must allow users to turn off algorithmic recommendations.
  • Operators must allow users to turn off autoplay of media.
  • Operators must provide mechanisms to turn off notifications about addictive feeds.
  • Settings must be presented clearly and accessibly.
  • Users must be able to limit their daily access to the platform.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-03-18 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates 'algorithmic recommendations,' which are a primary application of AI in social media contexts.

Mechanism of Influence: It mandates that platforms provide a mechanism for users to opt-out of AI-driven content delivery (algorithmic feeds), effectively regulating the deployment and user interaction with recommendation algorithms.

Evidence:

  • This section defines key terms used throughout the act, including addictive feed, addictive social media platform, algorithmic recommendation
  • Operators of addictive social media platforms must provide users with options to turn off algorithmic recommendations

Ambiguity Notes: The bill's definition of 'algorithmic recommendation' likely encompasses various machine learning and AI models used for content ranking, though the specific technical thresholds for these models are not detailed.

Assembly - 7172 - Relates to the regulation of the use of artificial intelligence and facial recognition technology in criminal investigations

Legislation ID: 98704

Bill URL: View Bill

Summary

This act seeks to amend the executive law and the criminal procedure law in New York State regarding the use of artificial intelligence (AI) and facial recognition technology (FRT) in criminal investigations. It aims to create protocols for law enforcement use of these technologies while prohibiting the use of AI-generated outputs as evidence in court. The bill emphasizes the need for transparency, auditing, and training for law enforcement agencies to mitigate biases and errors associated with AI systems.

Key Sections

Key Requirements

  • AI-generated outputs, including facial recognition results, are inadmissible as evidence.
  • Defendants have the right to expert witnesses regarding AI and FRT reliability.
  • Law enforcement must maintain records of AI-generated outputs for audit.
  • Law enforcement officers must receive training on AI limitations and biases.
  • Prosecutors must disclose information about AI systems used in investigations.
  • Regular independent audits of FRT systems are required.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to codes
2025-03-21 referred to codes

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the legal standing and disclosure requirements of AI-generated outputs in criminal proceedings.

Mechanism of Influence: It creates a legal barrier by making AI outputs inadmissible as evidence and forces transparency by requiring prosecutors to disclose information about the AI systems used.

Evidence:

  • AI-generated outputs, including facial recognition results, are inadmissible as evidence.
  • Prosecutors must disclose information about AI systems used in investigations.

Ambiguity Notes: The term 'AI-generated outputs' is broad and could encompass a wide range of technologies, potentially leading to disputes over what constitutes an AI output versus a standard digital tool.

Analysis 2

Why Relevant: The bill mandates oversight mechanisms such as audits and record-keeping for AI and FRT systems.

Mechanism of Influence: It requires law enforcement to maintain audit trails and subjects FRT systems to regular independent audits to ensure compliance and accuracy.

Evidence:

  • Regular independent audits of FRT systems are required.
  • Law enforcement must maintain records of AI-generated outputs for audit.

Ambiguity Notes: The criteria for what constitutes an 'independent' audit or the specific standards for the audit are not fully defined in the summary.

Analysis 3

Why Relevant: It addresses the operational regulation of AI through mandatory training on bias and limitations.

Mechanism of Influence: By requiring training, the law attempts to mitigate the risks of algorithmic bias and human over-reliance on AI systems in law enforcement.

Evidence:

  • Law enforcement officers must receive training on AI limitations and biases.

Ambiguity Notes: None

Assembly - 7278 - Prohibits the use of certain artificial intelligence models

Legislation ID: 98880

Bill URL: View Bill

Summary

This bill amends the state technology law to introduce a new section prohibiting state agencies and state-owned entities from using large language models or artificial intelligence systems to make decisions that impact individuals' rights, benefits, or services. The legislation allows for the use of AI in advisory roles and for data analysis, provided that final decisions remain with human personnel. It also mandates the development of compliance policies and grants the attorney general the authority to investigate violations.

Key Sections

Key Requirements

  • Allows AI use in advisory roles if final decisions are made by humans.
  • Gives the attorney general authority to investigate and enforce compliance.
  • Prohibits use of AI for decision-making affecting individuals' rights, benefits, or services by state agencies and state-owned entities.
  • Requires development of compliance policies by state agencies and state-owned entities.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to science and technology
2025-03-21 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the use of artificial intelligence and large language models within government operations.

Mechanism of Influence: It imposes a legal prohibition on automated decision-making for critical individual outcomes, mandating that AI remains in an advisory capacity only.

Evidence:

  • State agencies and state-owned entities are prohibited from using large language models or AI systems to make decisions that affect individuals' rights, benefits, or services, requiring human personnel to make such decisions.

Ambiguity Notes: The terms 'rights, benefits, or services' are broad and could cover a wide range of administrative actions, from social welfare to professional licensing.

Analysis 2

Why Relevant: The legislation establishes a framework for AI governance and oversight.

Mechanism of Influence: It requires the creation of internal compliance policies and empowers the attorney general to investigate and enforce these regulations.

Evidence:

  • Each state agency and state-owned entity must develop and implement policies and procedures to ensure compliance
  • The attorney general is granted the authority to investigate violations of this section and take legal action to enforce its provisions.

Ambiguity Notes: The specific standards for what constitutes an 'advisory role' versus a 'decision-making' role may require further clarification in policy implementation.

Assembly - 7656 - Enacts the "respect electoral audiovisual legitimacy (REAL) act"

Legislation ID: 111254

Bill URL: View Bill

Summary

This legislation, known as the respect electoral audiovisual legitimacy (REAL) act, seeks to amend the election law to prevent the use of generative artificial intelligence for creating realistic audio, video, or photo representations of political candidates. The bill defines generative artificial intelligence and establishes regulations regarding its use in political communications to maintain authenticity and prevent misinformation.

Key Sections

Key Requirements

  • Political communications must not include any realistic photo, video, or audio depiction of a candidate created through generative artificial intelligence.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to election law
2025-04-04 referred to election law

Detailed Analysis

Analysis 1

Why Relevant: The act directly regulates the use of generative AI by prohibiting specific types of AI-generated content in political communications.

Mechanism of Influence: It creates a legal prohibition against using AI to generate realistic depictions of candidates, effectively restricting the deployment of generative AI tools in election-related media.

Evidence:

  • This provision prohibits any political communication from containing realistic depictions of candidates that are created using generative artificial intelligence.

Ambiguity Notes: The term "realistic" is not strictly defined in the abstract, which could lead to varying interpretations of what constitutes a prohibited depiction.

Analysis 2

Why Relevant: The legislation establishes a formal legal definition for generative artificial intelligence.

Mechanism of Influence: By defining the technology, the law sets the jurisdictional boundaries for which AI systems and outputs are subject to these electoral regulations.

Evidence:

  • This provision defines what constitutes generative artificial intelligence for the purposes of the bill.

Ambiguity Notes: The specific technical criteria used to define "generative artificial intelligence" are not detailed in the summary, potentially leaving room for debate on emerging technologies.

Assembly - 768 - Enacts the "New York artificial intelligence consumer protection act"

Legislation ID: 54545

Bill URL: View Bill

Summary

The New York Artificial Intelligence Consumer Protection Act seeks to regulate the use of artificial intelligence decision systems that may lead to algorithmic discrimination. It defines key terms, outlines documentation requirements for AI developers, mandates risk management practices, and establishes enforcement mechanisms to protect consumers from discriminatory practices based on various protected characteristics.

Key Sections

Key Requirements

  • A rebuttable presumption of reasonable care is established if developers comply with documentation and audit requirements.
  • Developers must conduct bias and governance audits annually.
  • Developers must disclose foreseeable uses and known risks of their AI systems.
  • Developers must provide a general statement on the uses and limitations of their AI systems.
  • Documentation must include evaluations of potential biases and mitigation strategies.
  • Documentation must include performance evaluations and data governance measures.
  • The Attorney General is empowered to take enforcement actions against non-compliant developers.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-01-08 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly addresses the regulation of artificial intelligence systems to prevent discrimination.

Mechanism of Influence: It mandates that developers of high-risk AI systems maintain technical documentation and conduct risk management practices.

Evidence:

  • The New York Artificial Intelligence Consumer Protection Act seeks to regulate the use of artificial intelligence decision systems that may lead to algorithmic discrimination.
  • Developers of high-risk AI decision systems must provide documentation about the system's uses, limitations, and performance evaluations

Ambiguity Notes: The term 'high-risk AI decision system' is defined within the act but its specific scope depends on the provided definitions section.

Analysis 2

Why Relevant: The act requires specific disclosures and transparency measures.

Mechanism of Influence: Developers are legally obligated to disclose the uses, limitations, and known risks of their AI systems to deployers.

Evidence:

  • Developers must disclose various aspects of their AI systems to deployers to ensure transparency and accountability.
  • Developers must provide a general statement on the uses and limitations of their AI systems.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill includes a mandatory auditing requirement, a key mechanism for AI accountability and oversight.

Mechanism of Influence: It requires developers to perform annual bias and governance audits to ensure compliance and manage risks.

Evidence:

  • Developers must conduct bias and governance audits annually.

Ambiguity Notes: The specific standards for what constitutes a 'governance audit' may require further regulatory clarification.

Analysis 4

Why Relevant: The act establishes government oversight and enforcement mechanisms.

Mechanism of Influence: The Attorney General is granted the power to enforce these regulations and hold developers accountable for violations.

Evidence:

  • The Attorney General is empowered to take enforcement actions against non-compliant developers.

Ambiguity Notes: None

Assembly - 8158 - Relates to enacting the NY privacy act

Legislation ID: 139376

Bill URL: View Bill

Summary

This bill aims to empower New York consumers by granting them greater control over their personal data. It mandates businesses to provide clear information on data usage, allows consumers to access and delete their data, and requires businesses to maintain data security and notify consumers of risks. The bill also establishes enforcement mechanisms through the New York State Attorney General.

Key Sections

Key Requirements

  • Businesses must maintain reasonable data security.
  • Businesses must notify consumers of foreseeable harms from data use.
  • Businesses must obtain explicit consent for data use.
  • Businesses must provide clear notice on how consumer data is used.
  • Consumers must be able to access, correct, and delete their data.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-05-02 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes foundational data governance rules that apply to the datasets used to train and operate artificial intelligence systems.

Mechanism of Influence: AI developers and companies using AI would be classified as data controllers or processors, requiring them to provide disclosures on how consumer data is used within their models and to honor deletion or access requests for data used in training.

Evidence:

  • This section details the rights granted to consumers regarding their personal data, including rights to access, correction, deletion, and data portability.
  • Businesses must obtain explicit consent for data use.
  • Businesses must provide clear notice on how consumer data is used.

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its broad definitions of 'personal data' and 'processing' would encompass the algorithmic use of consumer information.

Assembly - 8523 - Requires certain political communications to include provenance data for all audio, images or videos used in such communications

Legislation ID: 147753

Bill URL: View Bill

Summary

This legislation, known as the election content accountability act, mandates that starting from the 2030 election cycle, campaigns for certain high-level offices in New York must include detailed provenance data for digital content in their political communications. This data must specify the origin, any modifications made, and the involvement of generative artificial intelligence in the content's creation. Violations can lead to significant penalties assessed by the attorney general.

Key Sections

Key Requirements

  • Imposes a penalty of up to $100,000 for intentional violations and $50,000 for unintentional violations.
  • Provenance data must communicate the type of device used, specific synthetic content, AI involvement, provider details, and the date of data application.
  • Requires campaigns to apply provenance data to political communications that include audio, images, or videos.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to election law
2025-05-20 referred to election law

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the disclosure of generative artificial intelligence in political communications.

Mechanism of Influence: It mandates that campaigns include provenance data specifying AI involvement and provider details for any synthetic content used in communications.

Evidence:

  • Provenance data must communicate the type of device used, specific synthetic content, AI involvement, provider details, and the date of data application.
  • This section provides definitions for key terms related to provenance data, including what constitutes provenance data, generative artificial intelligence systems, and synthetic content.

Ambiguity Notes: The specific technical standards for what constitutes 'provenance data' are left to the Attorney General to define through rules and regulations.

Analysis 2

Why Relevant: The act establishes a legal and financial penalty framework for the misuse or non-disclosure of AI-generated content.

Mechanism of Influence: It imposes fines of up to $100,000 for intentional failure to disclose the use of AI or synthetic media in campaign materials.

Evidence:

  • Establishes penalties for violations of the provenance data requirements, with higher penalties for intentional or grossly negligent violations.
  • Imposes a penalty of up to $100,000 for intentional violations and $50,000 for unintentional violations.

Ambiguity Notes: The distinction between 'intentional' and 'unintentional' violations may require further judicial or regulatory clarification.

Assembly - 8546 - Relates to requiring disclosure of use of generative artificial intelligence in a civil action

Legislation ID: 147779

Bill URL: View Bill

Summary

This bill amends the civil practice law and rules of New York to require that any legal documents drafted with the assistance of generative artificial intelligence must include an affidavit disclosing this use. It mandates that a human must review and certify the accuracy of the content generated by the AI. Additionally, it defines generative artificial intelligence and outlines the requirements for disclosure in legal briefs.

Key Sections

Key Requirements

  • Mandates that a human must review and verify the accuracy of AI-generated content.
  • Requires an affidavit to be attached to any filing utilizing generative AI.
  • Requires disclosure of generative AI use in the drafting of legal briefs when applicable.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to judiciary
2025-05-20 referred to judiciary

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the requirement for disclosures when artificial intelligence is used in a professional capacity.

Mechanism of Influence: It mandates that any legal document drafted with generative AI must include a separate affidavit disclosing such use.

Evidence:

  • any legal document drafted with the assistance of generative artificial intelligence must include an affidavit disclosing this use
  • requires disclosure of generative AI use in the drafting of legal briefs when applicable

Ambiguity Notes: The requirement for an 'affidavit' is specific, but the threshold for what constitutes 'assistance' in drafting may need further clarification.

Analysis 2

Why Relevant: The bill imposes a regulatory requirement for human oversight and auditing of AI-generated outputs.

Mechanism of Influence: It requires a human to review and certify the accuracy of content generated by AI before it is submitted to the court.

Evidence:

  • mandates that a human must review and certify the accuracy of the content generated by the AI
  • certifying human review for accuracy

Ambiguity Notes: The term 'accuracy' in a legal brief can be subjective, potentially leading to disputes over the validity of the certification.

Analysis 3

Why Relevant: The bill establishes a legal definition for generative artificial intelligence, which is foundational for AI regulation.

Mechanism of Influence: By defining the technology, the bill sets the scope for which systems are subject to disclosure and certification rules.

Evidence:

  • defines what constitutes generative artificial intelligence, including various technologies and systems capable of performing tasks that require human-like cognition and decision-making

Ambiguity Notes: The definition includes 'human-like cognition and decision-making,' which are broad terms that may evolve as AI technology advances.

Assembly - 8595 - Enacts the "New York artificial intelligence transparency for journalism act"

Legislation ID: 148212

Bill URL: View Bill

Summary

This bill, known as the New York Artificial Intelligence Transparency for Journalism Act, mandates that developers of generative artificial intelligence disclose information about the sources of training data derived from journalism. It aims to protect the rights of news organizations by requiring developers to provide details about the content they utilize from covered publications, ensuring that journalism is compensated fairly and that the public is aware of how AI systems are trained.

Key Sections

Key Requirements

  • Developers must comply with subpoenas within thirty days.
  • Developers must disclose the identity and purpose of crawlers used to access journalism content.
  • Developers must post information on their website regarding the content used for AI training by January 1, 2027.
  • Journalism providers can request subpoenas for disclosure of training data.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to codes
2025-06-09 amend and recommit to codes
2025-06-09 print number 8595b
2025-05-29 reported referred to codes
2025-05-23 amend and recommit to science and technology
2025-05-23 print number 8595a
2025-05-22 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The bill directly mandates disclosures regarding the training data used for generative AI systems.

Mechanism of Influence: Developers are legally required to post information on their websites and provide detailed lists of URLs and content descriptions used in their training sets.

Evidence:

  • Developers of generative AI must disclose information regarding the content sourced from journalism providers used for training their systems, including URLs, descriptions of the content, and whether source identifiers were removed.

Ambiguity Notes: The bill specifies 'journalism providers' and 'covered publications,' which may leave ambiguity regarding whether social media posts or independent citizen journalism are included.

Analysis 2

Why Relevant: The legislation establishes oversight and enforcement mechanisms for AI developers.

Mechanism of Influence: It empowers journalism providers to seek subpoenas and injunctions to compel developers to reveal their training data and crawler identities.

Evidence:

  • Developers must comply with subpoenas within thirty days.
  • Journalism providers can request subpoenas for disclosure of training data.

Ambiguity Notes: The bill states it does not alter federal copyright law, which may create legal tension if developers argue that training data usage is 'fair use' under federal law regardless of state disclosure mandates.

Assembly - 8833 - Establishes understanding artificial intelligence responsibility act

Legislation ID: 166783

Bill URL: View Bill

Summary

This bill introduces the Understanding Artificial Intelligence Responsibility Act to define artificial intelligence and set forth liability standards for developers of advanced AI models. It establishes a strict liability framework for injuries caused by these models, while also outlining the definitions relevant to AI and the conditions under which developers may be held responsible or absolved of liability.

Key Sections

Key Requirements

  • Defines artificial intelligence as a system that can infer outputs from inputs to influence environments.
  • Defines covered model based on its training cost and computational requirements.
  • Defines developer as the entity performing the initial training of a covered model.
  • Developers are strictly liable for injuries to non-users caused by their models if the model's conduct would meet negligence or tort criteria.
  • Developers can rebut the presumption of liability by proving the model's actions would not constitute negligence if performed by a human.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to science and technology
2025-06-09 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: This section directly regulates the legal accountability of AI developers, a core component of AI oversight and governance.

Mechanism of Influence: It creates a strict liability standard for injuries caused by AI models, effectively forcing developers to internalize the risks of their systems and providing a legal mechanism for redress when AI conduct causes harm.

Evidence:

  • This section outlines the liability framework for developers of covered AI models, establishing strict liability for injuries caused by their models under certain conditions.
  • Developers are strictly liable for injuries to non-users caused by their models if the model's conduct would meet negligence or tort criteria.

Ambiguity Notes: The bill references 'negligence or tort criteria' as applied to AI conduct, which may require courts to interpret how human-centric legal standards apply to autonomous or semi-autonomous system outputs.

Analysis 2

Why Relevant: The definitions section determines the scope of the regulation, identifying which specific technologies and entities are subject to the law.

Mechanism of Influence: By defining 'covered model' through training costs and computational requirements, the act targets high-compute, advanced AI systems for specific regulatory burdens while exempting smaller models.

Evidence:

  • Defines artificial intelligence as a system that can infer outputs from inputs to influence environments.
  • Defines covered model based on its training cost and computational requirements.
  • Defines developer as the entity performing the initial training of a covered model.

Ambiguity Notes: The specific numerical thresholds for 'training cost' and 'computational requirements' are not provided in the abstract, leaving the exact breadth of the 'covered model' category undefined.

Assembly - 8884 - Relates to the development and use of certain artificial intelligence systems

Legislation ID: 166829

Bill URL: View Bill

Summary

The New York Artificial Intelligence Act aims to regulate AI systems that significantly impact individuals' rights and opportunities. It addresses algorithmic discrimination, mandates developer and deployer responsibilities, and introduces auditing and reporting requirements for high-risk AI systems. The Act emphasizes the need for transparency, oversight, and the protection of vulnerable populations from potential harms associated with AI technologies.

Key Sections

Key Requirements

  • Allow end users to opt-out of AI decision-making processes without facing adverse consequences.
  • Developers and deployers must conduct independent audits of high-risk AI systems to ensure they do not cause algorithmic discrimination.
  • Developers and deployers must prevent foreseeable risks associated with high-risk AI systems.
  • Notify end users at least five business days before using an AI system for consequential decisions.

Sponsors

Legislative Actions

Date Action
2026-01-12 reference changed to science and technology
2026-01-07 referred to ways and means
2025-06-11 reference changed to ways and means
2025-06-09 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The Act directly addresses the regulation of AI systems that impact individual rights and opportunities.

Mechanism of Influence: It defines high-risk AI and sets legal standards for its deployment and development, creating a compliance framework for AI technologies.

Evidence:

  • The New York Artificial Intelligence Act aims to regulate AI systems that significantly impact individuals' rights and opportunities.

Ambiguity Notes: The term 'consequential decisions' may require further legal clarification to determine the full scope of applicable industries and scenarios.

Analysis 2

Why Relevant: It mandates disclosures and transparency regarding the use of AI in decision-making.

Mechanism of Influence: Requires a five-day advance notice to users and provides an opt-out mechanism for AI-driven decisions, ensuring human oversight or choice.

Evidence:

  • Notify end users at least five business days before using an AI system for consequential decisions.
  • Allow end users to opt-out of AI decision-making processes without facing adverse consequences.

Ambiguity Notes: The practical implementation of the opt-out without 'adverse consequences' might be complex for certain automated business models.

Analysis 3

Why Relevant: The legislation specifically requires formal audits of AI systems, a key component of the user's request.

Mechanism of Influence: Mandates independent audits to detect and prevent algorithmic discrimination in high-risk systems, placing the burden of proof on developers and deployers.

Evidence:

  • Developers and deployers must conduct independent audits of high-risk AI systems to ensure they do not cause algorithmic discrimination.

Ambiguity Notes: The specific standards or certifications required for what constitutes an 'independent audit' are not detailed in the provided text.

Assembly - 8962 - Enacts the New York fundamental artificial intelligence requirements in news act

Legislation ID: 216392

Bill URL: View Bill

Summary

The FAIR News Act establishes requirements for the disclosure of artificial intelligence usage in news media, mandates human oversight of AI-generated content, and provides protections for news workers against the misuse of their work in training AI systems. It seeks to maintain the quality of news reporting and safeguard journalistic integrity in the face of advancing technology.

Key Sections

Key Requirements

  • AI-generated content must have a conspicuous disclosure stating its origin.
  • Approval must be obtained before publication.
  • Disclosure must include a description of the AI system and its purpose.
  • Employers cannot use workers' content to train AI without consent.
  • Employers must inform workers about the use of AI in content creation.
  • Establish safeguards for protecting sources and confidential materials accessed by AI.
  • Human oversight is required for AI-generated content review.
  • If eligible for copyright, the disclosure requirement does not apply.
  • Workers cannot be penalized for refusing to allow their work to be used for AI training.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-12-19 amend (t) and recommit to consumer affairs and protection
2025-12-19 print number 8962a
2025-08-13 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The act mandates transparency through disclosures to consumers when content is generated by AI.

Mechanism of Influence: It requires news organizations to provide conspicuous disclosures on content that is significantly generated by AI, ensuring consumers are aware of the content's origin.

Evidence:

  • Any news content that is significantly generated by AI must clearly indicate this to consumers.
  • AI-generated content must have a conspicuous disclosure stating its origin.

Ambiguity Notes: The term 'significantly generated' is not quantitatively defined, which may lead to varying interpretations of when a disclosure is legally required.

Analysis 2

Why Relevant: The legislation requires human oversight and approval of AI-generated outputs.

Mechanism of Influence: It creates a legal requirement for a human-in-the-loop system where AI-generated content must be reviewed and approved by a person prior to publication.

Evidence:

  • Content created by AI must be reviewed and approved by a human before publication.
  • Human oversight is required for AI-generated content review.

Ambiguity Notes: The depth and standard of the 'review' process are not specified, leaving it unclear if a cursory glance suffices or if rigorous fact-checking is required.

Analysis 3

Why Relevant: It regulates the use of proprietary data for the training of artificial intelligence systems.

Mechanism of Influence: The act prohibits employers from using content created by their workers to train AI models without obtaining explicit consent, and protects workers from retaliation for withholding consent.

Evidence:

  • Employers cannot use workers' content to train AI without consent.
  • Workers cannot be penalized for refusing to allow their work to be used for AI training.

Ambiguity Notes: The act does not specify the format of consent or if blanket consent can be included in standard employment contracts.

Analysis 4

Why Relevant: The act requires disclosures to employees regarding the internal use of AI tools.

Mechanism of Influence: Employers must provide descriptions of AI systems and their purposes to their workforce, ensuring internal transparency about automation in the workplace.

Evidence:

  • News media employers are required to disclose to their workers how generative AI tools are used in content creation.
  • Disclosure must include a description of the AI system and its purpose.

Ambiguity Notes: It is unclear how frequently these disclosures must be updated as AI systems evolve or are updated.

Assembly - 9028 - Relates to use of virtual agents and AI tools in property searches

Legislation ID: 241890

Bill URL: View Bill

Summary

This legislation amends the New York real property law to introduce definitions and requirements for the use of virtual agents and AI tools in property searches. It mandates that real estate brokers and online housing platforms conduct annual disparate impact analyses to assess potential discrimination resulting from these technologies. The bill also outlines specific obligations for identifying and mitigating discriminatory outcomes in algorithmic systems used for property searches and advertisements.

Key Sections

Key Requirements

  • Annual disparate impact analysis required for platforms using virtual agents or AI tools.
  • Conduct regular testing for discriminatory outcomes in advertising and chatbot systems.
  • Ensure predictive fairness across protected demographic groups.
  • No differential charges for advertisements based on demographic groups.
  • No targeted advertisement options based on protected characteristics.
  • Proactively identify and modify discriminatory outcomes in virtual agents and AI tools.
  • Public reporting on compliance and internal auditing methods.
  • Separate processes for housing-related advertisements and captioning to avoid discrimination.
  • Submission of analysis summary to the attorney general.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to judiciary
2025-09-05 referred to judiciary

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly requires annual audits of AI systems used in the housing market.

Mechanism of Influence: Real estate entities must conduct disparate impact analyses on their AI tools and submit the results to the attorney general's office for oversight.

Evidence:

  • Real estate brokers and online housing platforms using virtual agents or AI tools must conduct annual disparate impact analyses and submit findings to the attorney general's office.

Ambiguity Notes: The specific technical standards for what constitutes a 'disparate impact analysis' are not fully detailed in the summary, potentially leaving room for varying levels of rigor.

Analysis 2

Why Relevant: The legislation imposes regulatory requirements and disclosure obligations on AI-driven advertising and virtual agents.

Mechanism of Influence: It mandates public reporting on compliance and internal auditing methods, while prohibiting specific algorithmic functions like demographic targeting.

Evidence:

  • Public reporting on compliance and internal auditing methods.
  • Platforms using AI tools must ensure non-discriminatory practices in advertising and captioning, avoiding targeted options based on protected characteristics.

Ambiguity Notes: The definition of 'virtual agents' may be broad enough to cover a wide range of automated communication tools.

Analysis 3

Why Relevant: The bill requires proactive mitigation and modification of AI algorithmic outcomes.

Mechanism of Influence: Brokers and platforms are legally obligated to identify and modify discriminatory algorithmic results and ensure predictive fairness across demographic groups.

Evidence:

  • Brokers and platforms must identify and modify discriminatory algorithmic results, ensure predictive fairness across demographic groups, and conduct regular testing to detect discrimination.

Ambiguity Notes: The term 'predictive fairness' is a technical concept in machine learning that may require specific regulatory definitions to enforce consistently.
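The bill names no concrete metric for "disparate impact" or "predictive fairness." One common operationalization, borrowed from the EEOC's four-fifths rule and used here purely as an illustrative benchmark the bill does not itself prescribe, compares favorable-outcome rates across demographic groups:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (favorable_outcomes, total_applicants)."""
    return {group: fav / total for group, (fav, total) in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group's selection rate to the highest.

    Under the four-fifths rule (an illustrative benchmark only; the bill
    specifies no metric), a ratio below 0.8 is a common flag for
    disparate impact.
    """
    rates = selection_rates(outcomes).values()
    return min(rates) / max(rates)

# Hypothetical annual audit of a listing-recommendation tool:
audit = {"group_a": (80, 100), "group_b": (56, 100)}
print(f"{disparate_impact_ratio(audit):.2f}")  # 0.70 -> below 0.8, flag for review
```

Whatever metric the attorney general ultimately accepts, the analysis reduces to this shape: compute outcome rates per protected group, compare them, and document any gap and its remediation.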

Assembly - 9091 - Requires search engines inform users when showing information which was generated using artificial intelligence

Legislation ID: 241953

Bill URL: View Bill

Summary

This bill amends the general business law to introduce a new section that mandates search engines to inform users when displaying information generated by artificial intelligence. It specifies the definition of artificial intelligence and outlines the requirements for disclosure, including the manner in which the information must be presented. Violations of this requirement can result in civil penalties.

Key Sections

Key Requirements

  • Requires a watermark indicating the information is AI-generated.
  • Requires search engines to display a notification above the AI-generated information.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to consumer affairs and protection
2025-09-12 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the user's interest in AI disclosures by mandating that search engines notify users of AI-generated content.

Mechanism of Influence: It requires specific formatting, including watermarks and notifications placed above the content, ensuring transparency for the end-user.

Evidence:

  • Search engines must inform users that the displayed information was generated by artificial intelligence
  • Requires a watermark indicating the information is AI-generated.
  • Requires search engines to display a notification above the AI-generated information.

Ambiguity Notes: The requirement for 'clear language' and 'specific formatting' may require further regulatory clarification to ensure consistency across different platforms.

Analysis 2

Why Relevant: The bill provides a formal legal definition of Artificial Intelligence, which is a foundational element of AI regulation.

Mechanism of Influence: By defining AI as a machine-based system for predictions or decisions, it sets the jurisdictional scope for which technologies are subject to the disclosure rules.

Evidence:

  • This section defines artificial intelligence as a machine-based system that makes predictions, recommendations, or decisions through automated analysis of inputs.

Ambiguity Notes: The definition is broad ('automated analysis of inputs'), which could potentially encompass traditional algorithms or statistical models not typically categorized as modern generative AI.

Analysis 3

Why Relevant: The bill establishes an enforcement mechanism for AI regulations through civil penalties.

Mechanism of Influence: It imposes a financial deterrent of up to five thousand dollars for violations, creating a compliance burden for search engine operators.

Evidence:

  • Establishes civil penalties for violations of the disclosure requirements, with a maximum fine of five thousand dollars.

Ambiguity Notes: It is unclear if the five thousand dollar fine is per violation (per user view) or per instance of non-compliant software deployment.
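The bill specifies a notification above the AI-generated information and a watermark, but neither exact wording nor markup. A minimal sketch of how a results page might satisfy both requirements, where every class name, attribute, and the notice text itself is a hypothetical choice:

```python
def wrap_ai_result(snippet_html: str) -> str:
    """Prepend a disclosure notice and tag the snippet as AI-generated.

    All markup here is hypothetical: the bill requires a notification
    above the content plus a watermark, but prescribes no format.
    """
    notice = ('<p class="ai-notice">The following information was '
              'generated by artificial intelligence.</p>')
    return (f'{notice}\n'
            f'<div class="ai-generated" data-watermark="ai-generated">\n'
            f'{snippet_html}\n'
            f'</div>')

print(wrap_ai_result("<p>Summary of the topic.</p>"))
```

The per-violation versus per-deployment penalty question noted above matters here: if the fine attaches per rendered page, a formatting bug in a wrapper like this could compound quickly.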

Assembly - 9097 - Requires disclosure of use of generative artificial intelligence to clients, criminal defendants, and the court

Legislation ID: 241959

Bill URL: View Bill

Summary

This legislation amends the civil practice law and rules along with the criminal procedure law to introduce requirements for the disclosure of the use of generative artificial intelligence in legal document preparation. It defines generative artificial intelligence, outlines the responsibilities of courts to inform litigants about its risks, and mandates that any documents created with AI assistance include an affidavit confirming human oversight and accuracy verification.

Key Sections

Key Requirements

  • Counsel and litigants must be informed of the requirements set forth in this rule.
  • Courts must provide warnings about the dangers of using generative AI.
  • Documents must attach an affidavit disclosing AI use and human verification.
  • Documents must attach an affidavit stating that no generative AI was used.
  • Requires informed consent from clients for the use of generative AI.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to judiciary
2025-09-12 referred to judiciary

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly addresses the user's interest in AI disclosures and regulation.

Mechanism of Influence: It mandates a formal disclosure process via affidavits for any legal document drafted using generative AI, ensuring transparency in the judicial process.

Evidence:

  • Documents drafted with generative artificial intelligence must include an affidavit disclosing its use and certifying human review for accuracy.

Ambiguity Notes: The standard for 'human review' and 'accuracy verification' is not explicitly defined, leaving room for interpretation on the level of diligence required.

Analysis 2

Why Relevant: The bill establishes regulatory oversight and consumer protection measures for AI usage.

Mechanism of Influence: It requires legal professionals to obtain informed consent from clients and mandates that courts provide warnings about AI risks, effectively regulating how AI is integrated into professional services.

Evidence:

  • No legal documents can be drafted using generative artificial intelligence without the informed consent of the client after they are warned of the associated risks.
  • Courts must provide warnings about the dangers of using generative AI.

Ambiguity Notes: The specific 'risks' and 'dangers' that courts must warn about are not detailed, which may lead to inconsistent messaging across different courts.

Analysis 3

Why Relevant: It provides a statutory definition for generative artificial intelligence.

Mechanism of Influence: By defining the technology's capabilities, such as autonomous task performance and learning from data, it sets the scope for which tools are subject to these legal regulations.

Evidence:

  • This provision defines generative artificial intelligence and its capabilities, including its ability to perform tasks autonomously and learn from data.

Ambiguity Notes: The phrase 'perform tasks autonomously' could be interpreted broadly to include basic automation or narrowly to include only advanced LLMs.

Assembly - 9106 - Regulates the use of artificial intelligence in the provision of therapy or psychotherapy services

Legislation ID: 241968

Bill URL: View Bill

Summary

This bill introduces a new section to the education law that defines the permissible use of artificial intelligence in mental health care. It establishes clear guidelines for licensed professionals regarding administrative support and supplementary support tasks, while emphasizing the importance of client consent and confidentiality. The bill also outlines penalties for violations and clarifies that certain services, such as religious counseling and peer support, are exempt from these regulations.

Key Sections

Key Requirements

  • AI cannot directly interact with clients in therapeutic communication.
  • AI cannot make independent therapeutic decisions.
  • AI must not be used for direct therapeutic communication or decision-making.
  • Maintains confidentiality of client records and communications.
  • Penalties can be up to fifty thousand dollars per violation, assessed after a hearing.
  • Requires informed written consent from the patient or their representative regarding the use of AI.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to higher education
2025-09-26 referred to higher education

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a regulatory framework for the use of AI in a specific professional sector.

Mechanism of Influence: It prohibits AI from performing core professional functions like direct therapeutic communication or independent decision-making, restricting its use to administrative and supplementary support.

Evidence:

  • AI must not be used for direct therapeutic communication or decision-making.
  • AI cannot make independent therapeutic decisions.

Ambiguity Notes: The term 'supplementary support tasks' is not explicitly defined in the summary and could be interpreted broadly.

Analysis 2

Why Relevant: The legislation requires mandatory disclosures to users regarding the involvement of AI.

Mechanism of Influence: Licensed professionals are required to obtain informed written consent from patients or their representatives specifically regarding the use of AI in their care.

Evidence:

  • Requires informed written consent from the patient or their representative regarding the use of AI.

Ambiguity Notes: The bill does not specify the level of technical detail required in the informed consent disclosure.

Analysis 3

Why Relevant: The bill includes enforcement mechanisms and penalties for the unauthorized or improper use of AI.

Mechanism of Influence: It establishes a civil penalty system with fines reaching up to fifty thousand dollars per violation to ensure compliance with AI regulations.

Evidence:

  • Penalties can be up to fifty thousand dollars per violation, assessed after a hearing.

Ambiguity Notes: None

Assembly - 9185 - Relates to falsely reporting an incident through the use of artificial intelligence

Legislation ID: 242047

Bill URL: View Bill

Summary

This bill amends Section 240.50 of the penal law to include provisions that specifically address the use of artificial intelligence in falsely reporting incidents. It outlines several scenarios where an individual can be guilty of falsely reporting an incident, including false reports of crimes, emergencies, or child abuse. The bill classifies these offenses as a class A misdemeanor.

Key Sections

Key Requirements

  • Mandates accurate reporting to law enforcement and emergency agencies.
  • Prohibits false reports of child abuse or maltreatment to designated registers.
  • Prohibits gratuitous reporting of non-existent offenses to law enforcement.
  • Requires individuals to refrain from initiating or circulating false reports or warnings.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to codes
2025-10-17 referred to codes

Detailed Analysis

Analysis 1

Why Relevant: The legislation specifically targets the use of artificial intelligence in the commission of a crime, aligning with the user's interest in AI regulation and oversight.

Mechanism of Influence: It expands the scope of existing penal law to ensure that false reports created or disseminated via AI are subject to criminal prosecution, thereby regulating the conduct of individuals using AI tools.

Evidence:

  • specifically address the use of artificial intelligence in falsely reporting incidents
  • including the use of artificial intelligence

Ambiguity Notes: The bill uses the broad term 'use of artificial intelligence' without defining specific technologies, which could encompass deepfakes, automated bots, or AI-generated text used to deceive emergency services.

Assembly - 9190 - Prohibits the use of most artificial intelligence in classrooms prior to high school

Legislation ID: 242052

Bill URL: View Bill

Summary

This bill amends the education law to include a new section that prohibits the use of artificial intelligence (AI) in classrooms for students below ninth grade, with specific allowances for diagnostic and instructional interventions for students with disabilities. It also empowers the commissioner to provide guidance on permissible uses of AI and clarifies that teachers and school personnel may still use AI for administrative purposes.

Key Sections

Key Requirements

  • Prohibits the use of AI in classrooms prior to ninth grade, except for specific interventions for students with disabilities.
  • Requires the commissioner to provide guidance on permissible uses of AI.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to education
2025-11-03 referred to education

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the deployment and usage of AI technologies within the education sector, specifically targeting age-based restrictions.

Mechanism of Influence: It establishes a legal ban on AI tools for students in grades K-8, requiring schools to filter or restrict access to such technologies in a classroom setting.

Evidence:

  • prohibits the use of artificial intelligence (AI) in classrooms for students below ninth grade
  • empowers the commissioner to provide guidance on permissible uses of AI

Ambiguity Notes: The bill does not provide a technical definition of 'artificial intelligence', which may lead to uncertainty regarding whether standard educational software with automated features is included in the prohibition.

Assembly - 9219 - Requires artificial intelligence technology used in professional fields to be developed and maintained in consultation with experts in such fields

Legislation ID: 242081

Bill URL: View Bill

Summary

This bill amends the general business law to introduce Article 47-A, which establishes requirements for the development and maintenance of AI technologies in professional fields. It mandates that developers involve professional domain experts throughout the design, training, validation, and ongoing evaluation processes of AI systems to ensure compliance with ethical and safety standards.

Key Sections

Key Requirements

  • Applies to various sectors such as healthcare, law, finance, education, architecture, and public safety.
  • Developers may face civil penalties and injunctive relief.
  • Developers must provide documentation of expert involvement and development phases.
  • Must disclose known risks and ethical concerns during development.
  • Requires involvement of professional domain experts in AI technology development.
  • Violations constitute unfair trade practices.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to science and technology
2025-11-03 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The legislation imposes mandatory oversight and documentation requirements for AI development.

Mechanism of Influence: It forces developers to involve domain experts in the validation and risk assessment phases, effectively requiring a form of expert-led oversight and internal auditing before and during deployment.

Evidence:

  • Developers must involve professional domain experts throughout the design, training, validation, and ongoing evaluation processes
  • Developers must submit documentation to the attorney general confirming the involvement of professional experts

Ambiguity Notes: The specific qualifications for a 'professional domain expert' and the depth of the 'risk assessment' are subject to the Attorney General's rulemaking.

Analysis 2

Why Relevant: The bill requires specific disclosures regarding the safety and ethics of AI systems to the government.

Mechanism of Influence: Developers are legally obligated to disclose known risks and ethical concerns to the Attorney General, creating a government oversight mechanism for AI safety and potential harms.

Evidence:

  • Must disclose known risks and ethical concerns during development.

Ambiguity Notes: It is unclear if these disclosures will be made public or remain confidential within the Attorney General's office.

Assembly - 9253 - Relates to disclosure of the use of artificial intelligence by law enforcement agencies

Legislation ID: 242115

Bill URL: View Bill

Summary

This bill amends the executive law to require policing agencies to conduct an annual inventory of AI systems used in criminal investigations and to develop a publicly accessible policy regarding their use. The legislation defines covered AI, mandates disclosure in police reports, and establishes a model policy to be adopted by law enforcement agencies. It also allows for civil action against agencies that fail to comply with these requirements.

Key Sections

Key Requirements

  • Conduct an annual inventory of AI systems used.
  • Defines artificial intelligence as machine-based technology that generates outputs from inputs.
  • Defines covered AI as AI aiding criminal investigations, including various technologies like biometric identification and predictive policing.
  • Develop a model policy that includes compliance with inventory and disclosure requirements.
  • Include details about the AI's role in the investigation and any generative AI used in report creation.
  • Individuals can file civil actions against agencies for violations after providing written notice.
  • Law enforcement agencies must adopt the model policy or a compliant version within 90 days.
  • Must disclose the use of covered AI in police reports.
  • Publicly disclose the name, capabilities, data inputs, outputs, and authorized uses of each system.
  • The Attorney General can investigate and bring civil actions for compliance.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to codes
2025-11-21 referred to codes

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a regulatory framework for the use of AI within a specific government sector (law enforcement).

Mechanism of Influence: It mandates transparency through annual inventories and public disclosure of AI capabilities, data inputs, and outputs.

Evidence:

  • Law enforcement agencies must conduct an annual inventory of covered AI systems and make specific information publicly available.
  • Publicly disclose the name, capabilities, data inputs, outputs, and authorized uses of each system.

Ambiguity Notes: The definition of 'machine-based technology that generates outputs from inputs' is broad and could encompass a wide range of standard software if not interpreted strictly.

Analysis 2

Why Relevant: The legislation requires specific disclosures regarding the use of AI in official government documentation.

Mechanism of Influence: It forces law enforcement to document the role of AI in criminal investigations and the use of generative AI in drafting reports.

Evidence:

  • Include details about the AI's role in the investigation and any generative AI used in report creation.
  • Must disclose the use of covered AI in police reports.

Ambiguity Notes: The extent of detail required for 'the AI's role in the investigation' may vary by agency interpretation.

Analysis 3

Why Relevant: The bill provides for oversight and enforcement of AI regulations through legal action.

Mechanism of Influence: It empowers the Attorney General to investigate compliance and allows individuals to bring civil actions against non-compliant agencies.

Evidence:

  • Individuals can file civil actions against agencies for violations after providing written notice.
  • The Attorney General can investigate and bring civil actions for compliance.

Ambiguity Notes: None

Assembly - 9449 - Relates to transparency and safety requirements for developers of artificial intelligence models

Legislation ID: 252536

Bill URL: View Bill

Summary

This legislation seeks to amend the general business law to establish the Responsible AI Safety and Education (RAISE) Act, which introduces mandatory transparency and safety protocols for large frontier developers of artificial intelligence. It emphasizes the need for standardized disclosures, incident reporting, and the establishment of frameworks to manage catastrophic risks associated with AI technologies. The bill reflects the intent to foster innovation while safeguarding public interests.

Key Sections

Key Requirements

  • Must include third-party assessments in their reporting.
  • Must publish a frontier AI framework on their website.
  • Must publish transparency reports before deploying new or modified AI models.
  • Must report any unauthorized access or critical safety incidents.
  • Must update the framework at least once a year.

Sponsors

Legislative Actions

Date Action
2026-01-28 reported referred to ways and means
2026-01-21 reported referred to codes
2026-01-07 referred to science and technology
2026-01-06 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly addresses the user's interest in AI disclosures and transparency requirements.

Mechanism of Influence: It mandates that large frontier developers publish transparency reports and a frontier AI framework on their websites before deploying models.

Evidence:

  • Large frontier developers are required to create and publish a frontier AI framework detailing their safety protocols and risk assessments
  • Must publish transparency reports before deploying new or modified AI models.

Ambiguity Notes: The specific financial thresholds defining a 'large frontier developer' are mentioned but not enumerated in the summary, potentially leaving the scope to be defined by rulemaking.

Analysis 2

Why Relevant: The bill requires audits and reporting of safety incidents, aligning with the user's request for AI oversight and auditing legislation.

Mechanism of Influence: Developers are legally obligated to report critical safety incidents and unauthorized access to the government, supported by third-party assessments.

Evidence:

  • Developers must report critical safety incidents and risks associated with their AI models to the government.
  • Must include third-party assessments in their reporting.

Ambiguity Notes: The criteria for what constitutes a 'critical safety incident' may be subject to interpretation or further department rulemaking.

Analysis 3

Why Relevant: The act establishes government oversight and regulatory authority over AI development.

Mechanism of Influence: It grants rulemaking authority to a state department to implement safety protocols and defines specific duties for developers to ensure compliance.

Evidence:

  • Grants authority to the relevant department to establish rules and regulations for the implementation of the article.
  • Developers are assigned specific duties and obligations to ensure compliance with the established safety protocols.

Ambiguity Notes: The 'duties and obligations' are broadly stated and will likely be clarified through the granted rulemaking authority.
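The RAISE Act does not prescribe a format for the transparency reports it requires. As a sketch only, a developer might publish something like the following JSON document; every field name is a hypothetical placeholder, apart from the third-party assessments the bill explicitly calls for:

```python
import json
from datetime import date

def build_transparency_report(model_name: str, version: str,
                              assessments: list[dict]) -> str:
    """Assemble a minimal transparency report as JSON.

    The RAISE Act prescribes no schema; every key below is a
    hypothetical placeholder chosen for illustration.
    """
    report = {
        "model": model_name,
        "version": version,
        "published": date.today().isoformat(),
        # The bill requires third-party assessments in reporting.
        "third_party_assessments": assessments,
        # The bill requires a published frontier AI framework; URL is a placeholder.
        "framework_url": "https://example.com/frontier-ai-framework",
    }
    return json.dumps(report, indent=2)

print(build_transparency_report(
    "example-model", "2.1",
    [{"assessor": "Example Labs", "scope": "catastrophic-risk evaluation"}],
))
```

Since the report must be published before deploying a new or modified model, the version field (or whatever identifier rulemaking settles on) is what ties each public report to a specific deployment.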

Assembly - 9487 - Relates to the use of automated employment decision-making tools and artificial intelligence systems by certain state and local entities; repealer

Legislation ID: 252574

Bill URL: View Bill

Summary

This bill amends the state technology law, education law, and civil service law to address the use of automated decision-making tools and artificial intelligence systems by various government entities. It repeals previous provisions related to automated decision-making and introduces new requirements for disclosure of such tools, aiming to protect employees' rights and maintain existing collective bargaining agreements. The bill also defines covered entities and outlines their responsibilities regarding the use of these technologies.

Key Sections

Key Requirements

  • Defines covered entities to include counties, cities, towns, villages, school districts, and universities.
  • Disclosure must include a description, start date, purpose, and any relevant information.
  • Ensures AI systems do not affect civil service status or employee benefits.
  • Maintains existing rights of employees under collective bargaining agreements.
  • Preserves collective bargaining agreements and employee rights.
  • Prevents transfer of duties from employees to AI systems.
  • Prohibits job displacement, reductions in hours, wages, or employment benefits due to AI systems.
  • Requires covered entities to publish a list of automated tools on their website by December 30 annually.

Sponsors

Legislative Actions

Date Action
2026-01-21 ordered to third reading rules cal.51
2026-01-21 reported
2026-01-21 reported referred to rules
2026-01-21 rules report cal.51
2026-01-21 substituted by s8831
2026-01-07 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes mandatory disclosure requirements for AI-driven tools used in employment contexts.

Mechanism of Influence: Covered entities are required to maintain and publish an annual list on their websites detailing the description, purpose, and start date of any automated employment decision-making tools in use.

Evidence:

  • Covered entities must disclose the automated employment decision-making tools they use by publishing a list on their website, providing details such as the description, usage date, and purpose of the tools.
  • Requires covered entities to publish a list of automated tools on their website by December 30 annually.

Ambiguity Notes: The term 'any relevant information' regarding disclosures is broad and may be subject to varying interpretations by different government agencies.

Analysis 2

Why Relevant: The legislation regulates the operational use of AI to prevent the displacement of human labor and protect worker rights.

Mechanism of Influence: It creates a legal barrier against using AI to automate away duties currently held by employees or to reduce wages and benefits, effectively regulating the scope of AI integration in the public sector workforce.

Evidence:

  • Prevents transfer of duties from employees to AI systems.
  • Prohibits job displacement, reductions in hours, wages, or employment benefits due to AI systems.
  • Amendments to the education law clarify that the use of artificial intelligence systems shall not interfere with employee rights.

Ambiguity Notes: The bill does not explicitly define the technical threshold for what constitutes an 'artificial intelligence system' versus a standard software tool.

Assembly - 9533 - Enacts the automation displacement protection act

Legislation ID: 259907

Bill URL: View Bill

Summary

This bill, known as the automation displacement protection act, seeks to amend the labor law to establish protections for workers facing displacement due to the implementation of artificial intelligence and automated systems. It requires covered employers to notify employees about potential job losses, provide a transition period with options for retraining, and outlines penalties for non-compliance.

Key Sections

Key Requirements

  • Civil penalties of up to $10,000 per day for willful violations may be assessed.
  • Defines artificial intelligence system as computer systems performing tasks requiring human intelligence.
  • Defines covered employer as businesses with 50 or more full-time employees.
  • Defines employment loss to include various forms of job termination or reduction.
  • Defines technological displacement as job losses or hour reductions due to automation.
  • Employers failing to provide notice are liable for up to 60 days of back pay and benefits for affected employees.
  • Employers violating notice or transition requirements lose eligibility for state grants, loans, or tax incentives for five years.
  • Entitles affected employees to a 90-day transition period with options for continued employment or retraining.
  • Notice must be given to affected employees, the commissioner, local officials, and workforce boards.
  • Notice must include details about the functions being automated and available retraining programs.
  • Prohibits discharge of affected employees during the transition period without just cause.
  • Requires 90 days advance written notice for displacements affecting 25 or more employees or 25% of the workforce.
  • The attorney general can take action to enforce compliance and recover penalties.
  • The commissioner must maintain a public registry of violators.

Sponsors

Legislative Actions

Date Action
2026-01-14 referred to labor

Detailed Analysis

Analysis 1

Why Relevant: The bill requires specific disclosures regarding the implementation of AI systems in the workplace.

Mechanism of Influence: Employers must provide written notice to affected employees and state officials detailing the specific functions being automated by AI systems.

Evidence:

  • Notice must include details about the functions being automated and available retraining programs.
  • Defines artificial intelligence system as computer systems performing tasks requiring human intelligence.

Ambiguity Notes: The definition of 'artificial intelligence system' is broad, potentially covering a wide range of software beyond generative AI.

Analysis 2

Why Relevant: The legislation regulates the deployment of AI by imposing operational requirements and penalties on businesses using the technology.

Mechanism of Influence: It mandates a workforce transition period and imposes civil penalties of up to $10,000 per day for violations related to AI-driven displacement.

Evidence:

  • Civil penalties of up to $10,000 per day for willful violations may be assessed.
  • Requires 90 days advance written notice for displacements affecting 25 or more employees or 25% of the workforce.

Ambiguity Notes: The threshold for 'technological displacement' (25 employees or 25% of the workforce) limits the scope of regulation to larger-scale AI implementations.

Assembly - 9581 - Requires covered businesses to annually report to the department of labor regarding the impact of artificial intelligence on hiring and business practices for the previous year

Legislation ID: 283141

Bill URL: View Bill

Summary

The bill introduces a new section to the labor law mandating that covered businesses, defined as those employing over 100 people or being publicly traded, submit annual reports detailing the effects of artificial intelligence on their workforce and operations. These reports must include data on employment changes related to AI and the nature of AI usage, including oversight and protections for sensitive data. The Department of Labor will develop reporting forms and compile an annual analysis based on submitted reports, with penalties for non-compliance.

Key Sections

Key Requirements

  • Businesses face a civil penalty of up to $500 per day for non-compliance.
  • Reports must be submitted by March 1st each year.
  • Reports must detail the nature of AI usage, including objectives, oversight, and data protection measures.
  • Reports must include employment data related to AI use, including displacement and hiring statistics.
  • The commissioner may reduce penalties for good faith failures to report.
  • The department may establish additional reporting requirements.
  • The department must create standard reporting forms for businesses.
  • The report must analyze data by employment sector, location, and business size.
  • The report must be submitted to various legislative leaders and made publicly available.

Sponsors

Legislative Actions

Date Action
2026-01-21 referred to labor

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation and disclosure of artificial intelligence usage within the corporate sector.

Mechanism of Influence: It forces businesses to disclose the 'nature of AI usage' and 'oversight' measures, creating a transparency mechanism for how AI affects the labor market and requiring businesses to document their data protection measures.

Evidence:

  • Mandates annual reports from covered businesses on AI's impact on hiring and business practices.
  • Reports must detail the nature of AI usage, including objectives, oversight, and data protection measures.

Ambiguity Notes: The phrase 'nature of AI usage' is broad and could range from high-level descriptions to detailed technical disclosures depending on the Department of Labor's implementation.

Assembly - 9601 - Prohibits the use of automated systems to make employment decisions unless there is a meaningful human review of the output of such automated system prior to the final employment decision

Legislation ID: 283164

Bill URL: View Bill

Summary

This bill amends the labor law to prohibit employers from relying solely on automated systems for making employment decisions without meaningful human review. It mandates that any automated recommendations or evaluations be subject to human oversight, ensuring that applicants are not denied employment based solely on automated assessments. Employers must inform applicants about the use of automated systems and provide the opportunity for human review of adverse decisions.

Key Sections

Key Requirements

  • Applicants can request a human review if they receive an adverse employment decision.
  • Commissioner can issue administrative fines and cease-and-desist orders for violations.
  • Commissioner can require corrective action for non-compliance.
  • Defines automated employment decision tool as any system that assists in evaluating applicants.
  • Defines meaningful human review as a deliberate evaluation by a human with decision-making authority.
  • Employers must notify applicants about the use of automated systems and describe data analyzed.
  • Prohibits denial of employment solely based on automated system actions.
  • Requires meaningful human review of automated system outputs before final employment decisions.

Sponsors

Legislative Actions

Date Action
2026-01-21 referred to labor

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the application of AI and automated systems in the context of employment and hiring.

Mechanism of Influence: It mandates human oversight for AI-driven decisions and prohibits fully autonomous hiring processes, effectively creating a regulatory framework for AI deployment in HR.

Evidence:

  • Defines automated employment decision tool as any system that assists in evaluating applicants.
  • Employers are prohibited from using automated systems for screening resumes or making hiring decisions without a meaningful human review of the outputs.
  • Prohibits denial of employment solely based on automated system actions.

Ambiguity Notes: The definition of 'meaningful human review' as a 'deliberate evaluation' may require further clarification to determine the necessary depth of human involvement to satisfy the law.

Analysis 2

Why Relevant: The bill includes specific disclosure requirements for entities using automated decision-making tools.

Mechanism of Influence: Employers are legally required to inform applicants when automated systems are used and must provide transparency regarding the types of data these systems analyze.

Evidence:

  • Employers must notify applicants about the use of automated systems and describe data analyzed.

Ambiguity Notes: The requirement to 'describe data analyzed' is broad and could range from a general category list to a detailed technical disclosure of inputs.

Assembly - 9641 - Prohibits algorithmic wage-setting

Legislation ID: 283218

Bill URL: View Bill

Summary

This bill introduces a new article to the labor law that prohibits employers from using algorithmic wage-setting, which involves determining wages through automated decision systems based on surveillance data of employees. It establishes definitions for key terms, outlines requirements for employers who may use such systems, and details enforcement mechanisms including civil penalties and the right for employees to take legal action if their rights are violated.

Key Sections

Key Requirements

  • Affected individuals can seek damages and attorney fees.
  • Allow employees to correct or challenge data accuracy.
  • Develop procedures to ensure data accuracy.
  • Employers can be fined up to $10,000 per violation.
  • Employers must disclose data considerations and how they affect wage decisions to employees.
  • Employers must offer individualized wages based solely on specific data related to employee tasks.
  • Provide information to employees about data considerations in wage decisions.

Sponsors

Legislative Actions

Date Action
2026-01-21 referred to labor

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the application of automated decision systems (a form of AI) in the context of labor and wage determination.

Mechanism of Influence: It imposes a prohibition on algorithmic wage-setting unless employers provide specific disclosures and allow for data audits/corrections by employees, effectively requiring transparency and oversight of AI-driven financial decisions.

Evidence:

  • This bill introduces a new article to the labor law that prohibits employers from using algorithmic wage-setting, which involves determining wages through automated decision systems
  • Employers using automated decision systems for wage-setting must develop and publish procedures to ensure data accuracy and allow employees to challenge data used in wage decisions.
  • Employers must disclose data considerations and how they affect wage decisions to employees.

Ambiguity Notes: The definition of 'automated decision system' is broad and could encompass a wide range of AI and machine learning models used for workforce management.

Assembly - 9654 - Enacts the New York Artificial Intelligence Civil Rights Act

Legislation ID: 283236

Bill URL: View Bill

Summary

This bill introduces a new article in the civil rights law dedicated to the regulation of artificial intelligence and algorithms. It outlines definitions, establishes standards for algorithmic use, mandates evaluations and assessments, and includes provisions for consumer protection and rights. The act aims to ensure that the deployment of algorithms does not lead to discriminatory practices and that individuals are informed and protected against potential harms caused by such technologies.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-21 referred to science and technology

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation of artificial intelligence and algorithmic decision-making systems.

Mechanism of Influence: It creates legal standards for 'covered algorithms' and mandates both pre-deployment evaluations and post-deployment impact assessments, effectively requiring audits of AI systems.

Evidence:

  • This bill introduces a new article in the civil rights law dedicated to the regulation of artificial intelligence and algorithms.
  • Pre-deployment evaluations
  • Post-deployment impact assessments

Ambiguity Notes: The specific technical criteria for what constitutes a 'covered algorithm' are mentioned as being defined in the act but are not detailed in the abstract, potentially leaving the scope of regulated technologies broad.

Analysis 2

Why Relevant: The legislation includes specific requirements for transparency and informing the public about AI usage.

Mechanism of Influence: The 'Notice and disclosure' provision mandates that individuals be informed when algorithms are used in ways that affect them, while 'Consumer awareness' sections aim to enhance public understanding of algorithmic implications.

Evidence:

  • Notice and disclosure
  • This section mandates that individuals be informed about the use of algorithms that may affect them, ensuring transparency.

Ambiguity Notes: The abstract does not specify the threshold of 'impact' required to trigger a notice, nor the specific format the disclosure must take.

Analysis 3

Why Relevant: The bill establishes oversight and accountability measures for AI developers and users.

Mechanism of Influence: It defines the legal relationships and responsibilities between developers and deployers and establishes enforcement mechanisms to ensure compliance with civil rights standards.

Evidence:

  • Relationships between developers and deployers
  • Enforcement
  • Content regulations

Ambiguity Notes: The 'Content regulations' section mentions preventing 'harmful outcomes,' which is a subjective term that may require further regulatory clarification.

Assembly - 972 - Establishes the "protect our privacy (POP) act" to impose limitations on the use of drones for law enforcement purposes

Legislation ID: 54749

Bill URL: View Bill

Summary

This bill, known as the Protect Our Privacy (POP) Act, establishes regulations regarding the use of drones by law enforcement agencies. It prohibits the use of drones for general law enforcement purposes without a warrant, restricts the collection of data during protests and other First Amendment activities, and mandates the destruction of certain data collected by drones. Additionally, it provides individuals with the right to sue for violations of their privacy rights related to drone surveillance.

Key Sections

Key Requirements

  • Allows drone use for examining conditions after natural disasters.
  • Allows drone use for search and rescue operations.
  • Court must award reasonable attorneys' fees to prevailing plaintiffs.
  • Data collected with facial recognition must be deleted.
  • Data must be destroyed within one year unless legally retained.
  • Data must be released under FOIL with redactions of personal information.
  • Data obtained cannot be used for law enforcement.
  • Individuals may seek damages for violations.
  • Personally identifying information cannot be shared without confidentiality agreements.
  • Prohibits agreements with private entities to obtain drone data.
  • Prohibits drone use for documentation at protests and demonstrations.
  • Prohibits law enforcement agencies from using drones for surveillance without a warrant.
  • Prohibits the use of armed drones.
  • Prohibits use of retroactive data in investigations.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to governmental operations
2025-01-08 referred to governmental operations

Detailed Analysis

Analysis 1

Why Relevant: The legislation specifically targets facial recognition technology, which is a primary application of artificial intelligence in surveillance and biometric data processing.

Mechanism of Influence: The bill mandates the retroactive deletion of data collected via facial recognition and prohibits law enforcement from using such AI-generated data in investigations, thereby regulating the deployment and legal admissibility of AI surveillance outputs.

Evidence:

  • The law applies retroactively to drone data collected with facial recognition technology, which must be deleted.
  • Data collected with facial recognition must be deleted.
  • Prohibits use of retroactive data in investigations.

Ambiguity Notes: The bill focuses on a specific use case of AI (facial recognition) rather than general-purpose AI or algorithmic transparency, and it does not define the technical standards of what constitutes 'facial recognition technology'.

Assembly - 974 - Relates to enacting the NY data protection act

Legislation ID: 54751

Bill URL: View Bill

Summary

This bill outlines the responsibilities of data controllers and processors in relation to consumer personal data. It emphasizes the need for data protection assessments, the prohibition of unfair practices in obtaining consent, and the requirement to maintain reasonable safeguards for data security. Additionally, the bill mandates that controllers must not discriminate against consumers exercising their rights and must ensure that any data shared with processors is done under strict contractual obligations.

Key Sections

Key Requirements

  • Allows different pricing in loyalty programs if clearly disclosed and beneficial to the consumer.
  • Assessments must weigh benefits against risks and consider consumer expectations and context.
  • Contracts must require processors to maintain confidentiality and security of personal data.
  • Controllers must conduct and document data protection assessments regularly for high-risk processing activities.
  • Controllers must enter into binding contracts with processors detailing processing instructions and obligations.
  • Controllers must implement reasonable administrative, technical, and physical safeguards for data security.
  • Controllers must review data retention practices annually and dispose of unnecessary data securely.
  • Prohibits denying services or charging different prices based on consumer rights exercised.
  • Prohibits designing user interfaces that deceive consumers regarding their rights.
  • Prohibits obtaining consent through overwhelming requests or manipulative practices.
  • Prohibits sharing personal data with third parties without consumer consent or respecting opt-out rights.
  • Prohibits the use of additional verification information for purposes other than verification.
  • Requires clear disclosure of third-party categories and processing purposes.
  • Requires controllers to delete additional verification information within 45 days after notifying the consumer of action taken on their request.

Sponsors

Legislative Actions

Date Action
2026-01-07 referred to codes
2025-05-27 reported referred to codes
2025-01-08 referred to consumer affairs and protection

Detailed Analysis

Analysis 1

Why Relevant: The mandate for data protection assessments for high-risk processing is a common regulatory tool used to oversee AI and automated decision-making systems.

Mechanism of Influence: AI systems that utilize personal data for profiling or automated decision-making would likely be classified as 'heightened risk,' thereby requiring the controllers to conduct and document audits of the AI's impact on privacy.

Evidence:

  • This provision mandates that data controllers regularly conduct data protection assessments for processing activities that pose a heightened risk to consumer privacy.
  • Assessments must weigh benefits against risks and consider consumer expectations and context.

Ambiguity Notes: The term 'heightened risk' is not explicitly defined to include AI, but in the context of modern privacy legislation, this category almost always encompasses algorithmic processing and machine learning applications.

Analysis 2

Why Relevant: The bill regulates the collection and sharing of data, which is the foundational component for training and operating AI models.

Mechanism of Influence: By requiring consent or opt-out rights for third-party data sharing, the bill limits how consumer data can be aggregated for use in third-party AI training sets or large language models.

Evidence:

  • This provision restricts controllers from sharing personal data with third parties unless consumer consent is obtained or opt-out rights are respected.

Ambiguity Notes: The bill does not specifically mention 'AI training,' but the restrictions on 'Third Party Data Sharing' would practically apply to any AI developer receiving data from a controller.

Senate - 1169 - Relates to the development and use of certain artificial intelligence systems

Legislation ID: 66847

Bill URL: View Bill

Summary

The New York Artificial Intelligence Act seeks to address the growing use of AI in various sectors and the potential for algorithmic discrimination. It mandates developers and deployers of AI systems to ensure their products do not harm consumers or violate civil rights. The legislation emphasizes the importance of transparency, accountability, and collaboration between the government and the AI industry to mitigate risks associated with AI technologies.

Key Sections

Key Requirements

  • Aligns with the provisions of the civil rights law regarding algorithmic discrimination.
  • All consequential decisions made by AI systems must be documented and justified.
  • Allows individuals to file lawsuits for damages caused by violations.
  • An appeal process must allow users to contest decisions and provide supporting information.
  • Audits must assess data management, accuracy, and compliance with privacy laws.
  • Audits must be conducted prior to deployment and at least every eighteen months thereafter.
  • Consumers must be provided a clear option to opt-out of automated decisions.
  • Decisions must be rendered within forty-five days after opting out.
  • Developers must ensure their AI systems do not produce algorithmic discrimination.
  • Employers cannot prevent employees from reporting violations to the attorney general.
  • Employers must inform employees of their rights regarding whistleblowing.
  • End users must be notified within five days of a consequential decision made by an AI system.
  • Establishes unlawful discriminatory practices related to the deployment of AI systems.
  • If a high-risk AI system is used, consumers can waive their right to advance notice but must be informed of the decision in a timely manner.
  • Imposes civil penalties for violations up to $20,000.
  • Mandates internal risk assessments and documentation of safeguards.
  • Mandates that consumers be allowed to opt-out of AI decisions without facing adverse consequences.
  • Must include descriptions of software stack, system purpose, and foreseeable uses.
  • Must incorporate principles from the National Institute of Standards and Technology's AI Risk Management Framework.
  • Policy must be regularly reviewed and updated.
  • Prevents unjustified differential treatment of individuals based on social scores.
  • Prohibits developers or deployers from using high-risk AI systems that produce algorithmic discrimination.
  • Prohibits the use of AI systems that score individuals based on social behavior.
  • Reports must be filed before deployment and annually thereafter.
  • Reports must include descriptions of the system, its use cases, and risk assessments.
  • Requires assessment of benefits and costs to consumers.
  • Requires deployers to file a comprehensive report on high-risk AI systems with the attorney general.
  • Requires deployers to notify end users at least five business days in advance of using high-risk AI systems for consequential decisions.
  • Requires developers to document and implement a risk management plan for high-risk AI systems.
  • Requires independent audits for high-risk AI systems before they can be used, sold, or shared.
  • Responses to appeals must occur within forty-five days.
  • The attorney general can seek injunctions against violations of the bill.

Sponsors

Legislative Actions

Date Action
2026-01-07 died in assembly
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2026-01-07 returned to senate
2025-06-12 COMMITTEE DISCHARGED AND COMMITTED TO RULES
2025-06-12 DELIVERED TO ASSEMBLY
2025-06-12 ORDERED TO THIRD READING CAL.1867
2025-06-12 PASSED SENATE
2025-06-12 referred to ways and means

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly mandates third-party audits for high-risk AI systems.

Mechanism of Influence: It requires high-risk AI systems to undergo independent audits prior to deployment and every eighteen months thereafter to assess data management, accuracy, and compliance.

Evidence:

  • High-risk AI systems must undergo regular third-party audits to ensure compliance with anti-discrimination laws.
  • Audits must be conducted prior to deployment and at least every eighteen months thereafter.

Ambiguity Notes: The effectiveness of this provision depends on the specific criteria used to define 'high-risk' and the standards set for 'independent' auditors.

Analysis 2

Why Relevant: The bill requires significant disclosures to both consumers and the government.

Mechanism of Influence: Deployers must notify end users when AI is used for consequential decisions and provide a detailed report of the system's functionality and risks to the Attorney General.

Evidence:

  • End users must be informed if a consequential decision is made by an AI system and provided with an appeal process.
  • Deployers of high-risk AI systems must file a detailed report with the attorney general, including system descriptions, intended outputs, risk assessments, and potential revenue generation.

Ambiguity Notes: The term 'consequential decision' is a key threshold for disclosure that may require further regulatory clarification to determine which specific AI applications are covered.

Analysis 3

Why Relevant: The act establishes a formal oversight mechanism involving the submission of technical and operational data to the government.

Mechanism of Influence: It mandates that developers and deployers submit reports to the Attorney General including software stack descriptions and risk assessments, facilitating state-level oversight of AI technologies.

Evidence:

  • Developers and deployers must report on the functionality and compliance of high-risk AI systems to the attorney general.
  • Must include descriptions of software stack, system purpose, and foreseeable uses.

Ambiguity Notes: While it requires reporting on the 'software stack,' it does not explicitly require submission of model weights, though these could be interpreted as part of the required technical documentation.

Analysis 4

Why Relevant: The legislation provides consumers with the right to opt-out of automated decision-making, a core component of AI regulation.

Mechanism of Influence: It forces developers to provide a human-led alternative to AI decisions and ensures consumers are not penalized for choosing the opt-out path.

Evidence:

  • Consumers have the right to opt-out of automated decision-making processes and receive decisions from human representatives.
  • Mandates that consumers be allowed to opt-out of AI decisions without facing adverse consequences.

Ambiguity Notes: The 'forty-five days' response window for human-rendered decisions might be considered a long delay for certain types of consumer interactions.

Senate - 1228 - Relates to requiring advertisements to disclose the use of a synthetic performer

Legislation ID: 66924

Bill URL: View Bill

Summary

This legislation amends the General Business Law to include definitions and requirements regarding the use of synthetic performers in advertisements. It mandates that any advertisement featuring a synthetic performer must clearly disclose this fact if the advertiser has actual knowledge of it. Violations of this requirement can result in civil penalties.

Key Sections

Key Requirements

  • Advertisements must disclose the use of a synthetic performer if the advertiser has actual knowledge.
  • First violation incurs a civil penalty of $1,000.
  • Subsequent violations incur a civil penalty of $5,000.
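The escalating penalty schedule above is simple enough to state as arithmetic; the sketch below (our own illustration of the summary) computes total exposure for a run of violations, leaving aside the open question of whether each airing counts separately:

```python
# Illustrative escalating-penalty schedule from S.1228, per the summary
# above: $1,000 for the first violation, $5,000 for each subsequent one.
# The bill leaves open whether penalties accrue per ad, airing, or campaign.

def total_penalty(violations: int) -> int:
    """Total civil penalty exposure for a given count of violations."""
    if violations <= 0:
        return 0
    return 1_000 + 5_000 * (violations - 1)

assert total_penalty(1) == 1_000
assert total_penalty(3) == 11_000
```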

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO CONSUMER PROTECTION
2025-06-13 COMMITTED TO RULES
2025-06-04 ADVANCED TO THIRD READING
2025-05-29 2ND REPORT CAL.
2025-05-28 1ST REPORT CAL.1408
2025-05-21 AMEND AND RECOMMIT TO CONSUMER PROTECTION
2025-05-21 PRINT NUMBER 1228C
2025-05-15 AMEND AND RECOMMIT TO CONSUMER PROTECTION

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the requirement for disclosures when using AI-generated content in a commercial context.

Mechanism of Influence: It mandates that advertisers must disclose the use of synthetic performers if they have actual knowledge, creating a legal obligation for AI transparency.

Evidence:

  • This provision requires any person engaged in advertising to disclose if a synthetic performer is used in the advertisement, provided they have actual knowledge of it.
  • Advertisements must disclose the use of a synthetic performer if the advertiser has actual knowledge.

Ambiguity Notes: The term 'actual knowledge' may create a loophole where advertisers claim ignorance of the AI-generated nature of a performer provided by a third party.

Analysis 2

Why Relevant: The legislation provides formal legal definitions for generative artificial intelligence.

Mechanism of Influence: By defining 'generative artificial intelligence' and 'synthetic performer', the bill establishes the jurisdictional scope for AI regulation within the state's business law.

Evidence:

  • This provision defines key terms such as generative artificial intelligence and synthetic performer to clarify the scope of the bill.

Ambiguity Notes: The definition of 'synthetic performer' might be interpreted broadly to include various forms of digital manipulation beyond generative AI, or narrowly depending on the specific technical language used.

Analysis 3

Why Relevant: The bill establishes an enforcement framework for AI regulations through financial penalties.

Mechanism of Influence: It imposes civil penalties ranging from $1,000 to $5,000 for failure to comply with AI disclosure requirements, providing a deterrent against undisclosed AI usage.

Evidence:

  • This provision outlines the penalties for failing to disclose the use of synthetic performers in advertisements, with specific fines for initial and subsequent violations.
  • First violation incurs a civil penalty of $1,000.
  • Subsequent violations incur a civil penalty of $5,000.

Ambiguity Notes: It is unclear if penalties apply per advertisement, per airing, or per campaign.

Senate - 1815 - Requires publishers of books created with the use of generative artificial intelligence to contain a disclosure of such use

Legislation ID: 67665

Bill URL: View Bill

Summary

This bill amends the general business law in New York to mandate that all books published in the state that were created using generative AI include a conspicuous disclosure indicating such use. The requirement applies to all book formats, printed and digital, and encompasses various types of content such as text, images, and games. It also defines generative AI and outlines its various forms and capabilities.

Key Sections

Key Requirements

  • Applies to both printed and digital formats, regardless of target audience.
  • Generative AI includes systems that perform tasks with minimal human oversight and can learn from data.
  • Includes software and hardware that mimic human cognitive tasks.
  • Requires all books published in the state that use generative AI to include a disclosure on the cover.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-01-14 REFERRED TO INTERNET AND TECHNOLOGY

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation of artificial intelligence by imposing mandatory disclosure requirements on AI-generated content.

Mechanism of Influence: It creates a legal obligation for publishers to label products, thereby providing transparency to consumers regarding the use of AI in creative works.

Evidence:

  • Any book created wholly or partially with generative AI must clearly state this on its cover.
  • mandate that all books published in the state that were created using generative AI include a conspicuous disclosure indicating such use.

Ambiguity Notes: The phrase 'partially' created with generative AI lacks a specific percentage or threshold, which may lead to broad interpretations regarding how much AI assistance triggers the disclosure requirement.

Analysis 2

Why Relevant: The legislation establishes a legal definition for generative AI within the state's general business law.

Mechanism of Influence: By defining generative AI as systems that 'mimic human cognitive tasks' and 'perform tasks with minimal human oversight,' it sets the scope for which technologies are subject to oversight.

Evidence:

  • Generative AI includes systems that perform tasks with minimal human oversight and can learn from data.
  • Includes software and hardware that mimic human cognitive tasks.

Ambiguity Notes: The definition of 'minimal human oversight' is subjective and could be interpreted differently depending on the complexity of the AI tool used.

Senate - 1962 - Enacts the "New York artificial intelligence consumer protection act"

Legislation ID: 67845

Bill URL: View Bill

Summary

The New York Artificial Intelligence Consumer Protection Act introduces a framework for regulating the use of artificial intelligence (AI) decision systems. It mandates compliance with ethical and privacy standards, allows for public research, and outlines the responsibilities of developers and deployers of AI technologies. The bill also establishes enforcement mechanisms and exemptions for certain entities, ensuring that AI systems do not adversely affect individuals' rights and freedoms.

Key Sections

Key Requirements

  • Allows for product recalls of faulty AI systems.
  • Attorney General has exclusive enforcement authority.
  • Conducts research and testing on AI systems prior to deployment.
  • Developers must demonstrate compliance with risk management frameworks.
  • Engages in public research that adheres to ethics and privacy laws.
  • Ensures compliance with federal regulations and standards.
  • Exempts developers from obligations if compliance violates state evidentiary privileges.
  • Exempts systems approved by federal agencies from certain requirements.
  • Mandates identification and repair of technical errors in AI systems.
  • Mandates preservation of system integrity and security.
  • Requires developers to cure violations discovered through red-teaming within 60 days.
  • Requires developers to investigate and report misuse of AI systems.
  • Requires notice of violation and a cure period for developers before legal action.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-01-14 REFERRED TO INTERNET AND TECHNOLOGY

Detailed Analysis

Analysis 1

Why Relevant: The act mandates pre-market research and testing of AI systems, which aligns with the user's interest in regulation and audits.

Mechanism of Influence: This requirement forces developers to audit their systems for safety and compliance before they reach consumers, ensuring that AI systems do not adversely affect rights.

Evidence:

  • Developers are required to conduct research and testing on AI decision systems before market deployment
  • Ensures compliance with federal regulations and standards.

Ambiguity Notes: The terms 'research and testing' are broad and may vary in rigor depending on the specific AI application or industry standards.

Analysis 2

Why Relevant: It establishes enforcement mechanisms and reporting requirements for AI misuse.

Mechanism of Influence: The Attorney General is granted exclusive enforcement authority, and developers are mandated to investigate and report misuse, providing a layer of government oversight.

Evidence:

  • The Attorney General is designated as the enforcement authority
  • Requires developers to investigate and report misuse of AI systems.

Ambiguity Notes: The specific criteria for what constitutes 'misuse' are not fully defined in the abstract, potentially leaving room for interpretation by the Attorney General.

Analysis 3

Why Relevant: The bill introduces risk management and red-teaming as a compliance mechanism.

Mechanism of Influence: It encourages developers to perform internal audits (red-teaming) to identify and cure violations, offering an affirmative defense if corrective actions are taken within 60 days.

Evidence:

  • Developers may present an affirmative defense if they discover a violation through red-teaming
  • Requires developers to cure violations discovered through red-teaming within 60 days.

Ambiguity Notes: The effectiveness of the 'affirmative defense' depends on the specific risk management frameworks adopted by the developers.

Senate - 2487 - Enacts the New York artificial intelligence ethics commission act

Legislation ID: 68482

Bill URL: View Bill

Summary

This bill introduces the New York Artificial Intelligence Ethics Commission Act, which aims to create a commission responsible for overseeing the ethical deployment of AI technologies within the state. The commission will establish guidelines, review AI projects, educate the public, and investigate complaints related to unethical AI practices, ensuring that AI systems used in the state do not discriminate or violate privacy rights.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-01-21 REFERRED TO INTERNET AND TECHNOLOGY

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates AI by creating an oversight commission with the power to review projects and set ethical standards.

Mechanism of Influence: The commission reviews AI projects for compliance and establishes ethical guidelines that private and state entities must follow.

Evidence:

  • The commission is tasked with establishing ethical guidelines, reviewing AI projects for compliance
  • The commission has authority to oversee AI systems used by state agencies and private companies that affect New York residents.

Ambiguity Notes: The term 'ethical guidelines' is broad and its specific requirements for AI developers are not fully defined in the abstract.

Analysis 2

Why Relevant: The legislation includes provisions for auditing and reporting on AI systems.

Mechanism of Influence: The commission is required to submit annual reports that include audit results, providing a mechanism for government oversight of AI activities.

Evidence:

  • The commission is required to submit annual reports to state leaders detailing its activities, audit results, and policy recommendations.

Ambiguity Notes: The scope and technical depth of the 'audit results' are not specified.

Analysis 3

Why Relevant: The bill establishes enforcement mechanisms and penalties for AI-related violations.

Mechanism of Influence: It allows for civil and criminal penalties for entities that use AI in ways that discriminate, infringe on privacy, or cause harm.

Evidence:

  • The commission can impose penalties for violations of ethical guidelines, including civil penalties for non-economic harm and criminal prosecution for economic harm or privacy breaches.
  • Entities in New York are prohibited from using AI systems that discriminate, disseminate false information, surveil without consent, infringe on privacy, misuse intellectual property, or cause harm through negligence.

Ambiguity Notes: The distinction between economic and non-economic harm for the purpose of civil vs criminal prosecution may require further legal clarification.

Senate - 3519 - Redefines the term "following" for a crime of stalking in the fourth degree

Legislation ID: 69303

Bill URL: View Bill

Summary

This bill seeks to redefine the term "following" in the context of stalking in the fourth degree under the New York penal law. It expands the definition to encompass unauthorized tracking of an individual's movements or location through devices or software that can access, record, or report on a person's location without their consent. Additionally, it clarifies that employers using tracking technology in the normal course of business do not constitute stalking under this statute.

Key Sections

Key Requirements

  • Exempts employers using tracking technology during normal business operations from being classified as stalking.
  • Includes any device or software that tracks or reports a person's location without their consent.

Sponsors

Legislative Actions

Date Action
2026-01-07 died in assembly
2026-01-07 REFERRED TO CODES
2026-01-07 returned to senate
2025-05-13 DELIVERED TO ASSEMBLY
2025-05-13 PASSED SENATE
2025-05-13 referred to codes
2025-04-15 ADVANCED TO THIRD READING
2025-04-10 2ND REPORT CAL.

Detailed Analysis

Analysis 1

Why Relevant: The legislation regulates the use of software for surveillance and tracking, which is a primary application of AI and automated data processing technologies.

Mechanism of Influence: It creates legal liability for the unauthorized use of tracking software, which would include AI-driven geolocation and behavioral monitoring tools.

Evidence:

  • unauthorized tracking of an individual's movements or location through devices or software
  • software that can access, record, or report on a person's location without their consent

Ambiguity Notes: The bill uses the broad term 'software' rather than specifically naming 'artificial intelligence,' but the scope encompasses any algorithmic or automated software used for location reporting.

Senate - 4276 - Enacts the "digital fairness act"

Legislation ID: 70060

Bill URL: View Bill

Summary

This bill addresses the growing concerns over privacy violations and the misuse of personal information in various domains, including employment, healthcare, and finance. It emphasizes the need for transparency in how personal data is handled and mandates that individuals give explicit consent before their information can be collected or used. Additionally, it introduces specific protections for biometric information and outlines the responsibilities of entities that engage in data collection and processing.

Key Sections

Key Requirements

  • Courses must be age-appropriate and tailored to students' needs.
  • No payment for automated decision systems that make final decisions without human intervention.
  • Prohibits the use of automated systems that discriminate against individuals based on protected characteristics.
  • Schools must provide instruction on digital literacy and internet safety.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-02-03 REFERRED TO INTERNET AND TECHNOLOGY

Detailed Analysis

Analysis 1

Why Relevant: The provision regarding Automated Decision Systems directly regulates the deployment and financial support of AI-driven decision-making tools.

Mechanism of Influence: By prohibiting payment for systems that lack human intervention and banning discriminatory algorithms, the law imposes operational and ethical constraints on AI developers and users.

Evidence:

  • No payment for automated decision systems that make final decisions without human intervention.
  • Prohibits the use of automated systems that discriminate against individuals based on protected characteristics.

Ambiguity Notes: The term 'automated decision systems' is broad and typically includes machine learning models and AI, though the specific technical threshold for what constitutes 'automated' is not defined here.

Analysis 2

Why Relevant: The Internet Safety Education provision relates to the user's interest in age-appropriate usage and digital literacy regarding technology.

Mechanism of Influence: It mandates a curriculum that includes digital literacy and privacy, which are foundational to safe AI usage and understanding algorithmic impact.

Evidence:

  • Courses must be age-appropriate and tailored to students' needs.
  • Schools must provide instruction on digital literacy and internet safety.

Ambiguity Notes: While it does not explicitly name 'Artificial Intelligence', digital literacy curricula in the modern era frequently encompass AI interactions and safety.

Senate - 4394 - Establishes criteria for the sale of automated employment decision tools

Legislation ID: 70178

Bill URL: View Bill

Summary

This bill amends the labor law to establish criteria for the use of automated employment decision tools, which include various systems that filter job candidates. It mandates that employers conduct annual disparate impact analyses to assess the effects of these tools on different demographic groups and requires transparency in the reporting of these analyses.

Key Sections

Key Requirements

  • Conduct annual disparate impact analyses on automated employment decision tools.
  • Make a summary of the most recent disparate impact analysis publicly available on the employer's website.
  • Provide the department with the summary of the most recent disparate impact analysis annually.
  • The Attorney General may initiate investigations based on evidence from disparate impact analyses.
  • The Commissioner may initiate investigations similarly and take necessary legal actions.
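The bill does not prescribe a methodology for these analyses, but one common measure of disparate impact is the selection-rate ratio associated with the EEOC's "four-fifths" rule of thumb; a minimal sketch with hypothetical screening outcomes:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that an automated tool selects."""
    return selected / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Adverse-impact ratio: a group's selection rate divided by the
    most-selected group's rate. Ratios below 0.8 are commonly flagged
    under the EEOC 'four-fifths' guideline."""
    return group_rate / reference_rate

# Hypothetical outcomes from an automated employment decision tool
rate_a = selection_rate(50, 100)   # 0.50 (reference group)
rate_b = selection_rate(30, 100)   # 0.30
print(impact_ratio(rate_b, rate_a))  # 0.6 -> below the 0.8 threshold
```

The four-fifths rule is only one convention; an employer's actual analysis would depend on the definitions the law and the department ultimately adopt.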

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO LABOR
2025-12-05 AMEND (T) AND RECOMMIT TO LABOR
2025-12-05 PRINT NUMBER 4394A
2025-02-04 REFERRED TO LABOR

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically targets automated employment decision tools, which are a subset of artificial intelligence and algorithmic systems used to filter and evaluate job candidates.

Mechanism of Influence: It imposes mandatory auditing requirements (disparate impact analyses) and transparency obligations, requiring both public disclosure and submission of data to the government.

Evidence:

  • automated employment decision tools
  • conduct annual disparate impact analyses
  • Make a summary of the most recent disparate impact analysis publicly available on the employer's website
  • Provide the department with the summary of the most recent disparate impact analysis annually

Ambiguity Notes: The scope of the law depends on the specific definition of 'automated employment decision tools,' which may vary in how broadly it captures different types of machine learning or AI software.

Senate - 4506 - Enacts the Stop Addictive Feeds Exploitation (SAFE) for all act

Legislation ID: 70290

Bill URL: View Bill

Summary

This bill amends the general business law by introducing Article 45-A, which outlines definitions, required user settings, prohibitions against deceptive design practices, and the attorney general's authority to enforce these regulations. It aims to protect users, particularly minors, from the addictive nature of algorithmically generated content on social media platforms.

Key Sections

Key Requirements

  • It is unlawful to use designs that make it difficult for users to deactivate or manage their accounts.
  • Operators must allow users to turn off algorithmic recommendations.
  • Operators must allow users to turn off autoplay features.
  • Operators must allow users to turn off notifications, with specific time limitations.
  • Operators must enable users to limit their daily access to the platform.
  • Settings must be presented clearly and accessibly.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-02-06 REFERRED TO INTERNET AND TECHNOLOGY

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates 'algorithmic recommendations,' which are a primary application of artificial intelligence and machine learning in social media environments.

Mechanism of Influence: It mandates that social media operators provide a manual override for AI-driven content feeds, effectively requiring a disclosure of and an opt-out mechanism for algorithmic curation.

Evidence:

  • Operators of addictive social media platforms must provide users with mechanisms to turn off algorithmic recommendations
  • This section provides definitions for key terms used in the article, including addictive feed, addictive social media platform, and algorithmic recommendation.

Ambiguity Notes: The term 'algorithmic recommendation' is a key definition that determines the scope of AI technologies covered, though the specific technical thresholds for what constitutes such an algorithm are left to the attorney general's rulemaking.

Senate - 5486 - Relates to the use of telematics systems by automobile insurers

Legislation ID: 71270

Bill URL: View Bill

Summary

This bill amends the insurance law to include regulations concerning telematics systems used by insurers. It defines telematics systems, requires insurers to disclose their scoring methodologies, and mandates that data collected from these systems be used only for underwriting and rating decisions. The bill also prohibits discrimination in the use of telematics data and empowers the superintendent to enforce these regulations.

Key Sections

Key Requirements

  • Allow consumers access to their telematics data.
  • Avoid using external consumer data in a discriminatory manner.
  • Do not discriminate based on race, color, national origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.
  • Do not use telematics data for any purpose other than underwriting and rating.
  • Provide an explanation of risk factors used in models to the superintendent.
  • Publicly disclose scoring methodologies.
  • Report testing results to ensure no discrimination against protected classes.
  • Third-party developers or vendors of telematics systems must file their models or algorithms with the superintendent.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INSURANCE
2025-02-21 REFERRED TO INSURANCE

Detailed Analysis

Analysis 1

Why Relevant: The bill requires the submission of algorithms and models to a government authority for oversight.

Mechanism of Influence: Third-party developers and vendors are mandated to file their scoring models or algorithms with the superintendent, providing a mechanism for regulatory review of automated decision-making tools.

Evidence:

  • Third-party developers or vendors of telematics systems must file their models or algorithms with the superintendent.
  • Provide an explanation of risk factors used in models to the superintendent.

Ambiguity Notes: While the bill focuses on 'telematics,' these systems typically rely on algorithmic scoring and machine learning to interpret driver behavior data.

Analysis 2

Why Relevant: The legislation mandates disclosures and auditing for bias in automated scoring systems.

Mechanism of Influence: Insurers must publicly disclose how their scoring methodologies work and provide reports on testing performed to ensure the algorithms do not result in unfair discrimination against protected classes.

Evidence:

  • Publicly disclose scoring methodologies.
  • Report testing results to ensure no discrimination against protected classes.

Ambiguity Notes: The bill uses the term 'scoring methodologies' and 'risk models' which are central to the regulation of AI and automated systems in financial services.

Senate - 565 - Relates to the use of biometric identity verification devices for the purchase of alcoholic beverages and tobacco products

Legislation ID: 66064

Bill URL: View Bill

Summary

The bill amends the Alcoholic Beverage Control Law and Public Health Law to allow for the use of biometric identity verification devices. These devices will verify the identity and age of individuals attempting to purchase alcoholic beverages and tobacco products, ensuring compliance with age restrictions. The bill outlines the definition of such devices, the conditions under which they can be used, and the information that can be collected and maintained by licensees.

Key Sections

Key Requirements

  • Act takes effect 90 days after becoming law.
  • Allows for the addition, amendment, or repeal of regulations necessary for implementation before the effective date.
  • Devices must link to a securely stored, encrypted biometric database.
  • Limits recorded information to name, date of birth, driver's license or non-driver ID number, and expiration date.
  • Requires devices to verify identity and age through biometric scans.
  • Requires the commissioner and state commissioner of motor vehicles to create regulations for recording and maintaining these records.
  • Requires the commissioner and state liquor authority to promulgate regulations ensuring quality control in the use of transaction scan devices.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INVESTIGATIONS AND GOVERNMENT OPERATIONS
2025-01-08 REFERRED TO INVESTIGATIONS AND GOVERNMENT OPERATIONS

Detailed Analysis

Analysis 1

Why Relevant: The legislation concerns biometric identity verification and age verification, which are key areas of AI application and regulation mentioned in the user's instructions.

Mechanism of Influence: The law regulates the use of biometric scans (facial, iris, fingerprints) to verify age, establishing requirements for data collection and system security for these AI-adjacent technologies.

Evidence:

  • Establishes a definition for biometric identity verification device which includes methods of verifying identity through biometric scans such as fingerprints, iris images, and facial images.
  • These devices will verify the identity and age of individuals attempting to purchase alcoholic beverages and tobacco products, ensuring compliance with age restrictions.

Ambiguity Notes: While the bill does not explicitly use the term 'Artificial Intelligence', the technologies described (facial and iris recognition) are fundamentally powered by AI and machine learning algorithms.

Senate - 5668 - Relates to liability for misleading, incorrect, contradictory or harmful information provided to a user by a chatbot

Legislation ID: 71452

Bill URL: View Bill

Summary

The bill introduces a new section to the general business law concerning the liability of chatbot proprietors for misleading information provided by their chatbots. It defines key terms related to artificial intelligence and chatbots, outlines the responsibilities of chatbot proprietors, and specifies the conditions under which they can be held liable for harm caused to users. The bill also mandates that chatbot proprietors implement measures to protect users from self-harm and ensure that minors are not exposed to harmful content without parental consent.

Key Sections

Key Requirements

  • Cease access until parental consent is obtained.
  • Notification must be clear and conspicuous.
  • Prohibit use for 24 hours if self-harm is indicated.
  • Prohibit use for 3 days if self-harm is indicated.
  • Proprietors must ensure chatbots provide accurate information.
  • Proprietors must not disclaim liability for misleading information.
  • Provide contact information for suicide crisis organizations.
  • Regulations must consider the size and resources of proprietors.
  • Text must be easily readable.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-06-13 COMMITTED TO RULES
2025-03-17 ADVANCED TO THIRD READING
2025-03-13 2ND REPORT CAL.
2025-03-12 1ST REPORT CAL.548
2025-02-27 REFERRED TO INTERNET AND TECHNOLOGY

Detailed Analysis

Analysis 1

Why Relevant: The bill provides foundational legal definitions for artificial intelligence and chatbots.

Mechanism of Influence: By defining these terms, the bill sets the jurisdictional boundaries for which technologies are subject to the proposed regulations and oversight.

Evidence:

  • This section defines key terms related to the bill, including artificial intelligence, chatbot, companion chatbot, covered user, and proprietor.

Ambiguity Notes: The specific technical criteria for what constitutes 'artificial intelligence' versus standard automated software are not detailed in the abstract.

Analysis 2

Why Relevant: The bill mandates transparency through user disclosures.

Mechanism of Influence: Proprietors are legally required to inform users that they are interacting with a chatbot, preventing the deceptive passing of AI as a human agent.

Evidence:

  • Proprietors must clearly inform users that they are interacting with a chatbot rather than a human.
  • Notification must be clear and conspicuous.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill requires age verification and parental consent for AI usage.

Mechanism of Influence: It imposes a gatekeeping requirement where AI proprietors must verify user age and obtain verifiable parental consent before allowing minors to access 'companion chatbots'.

Evidence:

  • Companion chatbot proprietors must verify if users are minors and obtain parental consent before allowing access.
  • Data collected for age verification must be used solely for that purpose and deleted afterward.

Ambiguity Notes: The term 'companion chatbot' may imply a specific subset of AI, potentially leaving other types of AI chatbots unregulated in this regard.

Analysis 4

Why Relevant: The bill mandates ongoing safety audits and vulnerability assessments.

Mechanism of Influence: It creates a proactive compliance burden on AI developers to monitor their systems for safety risks rather than just reacting to incidents.

Evidence:

  • Proprietors must continuously assess their systems for vulnerabilities related to user safety.

Ambiguity Notes: The frequency and specific standards for 'continuous' assessment are left to future regulatory definition.

Analysis 5

Why Relevant: The bill regulates AI output by establishing liability for misleading or harmful information.

Mechanism of Influence: It prevents AI companies from using 'as-is' disclaimers to avoid responsibility for damages caused by hallucinations or incorrect medical/safety advice provided by the AI.

Evidence:

  • Proprietors are liable for false or misleading information provided by their chatbots that results in harm to users
  • Proprietors cannot disclaim liability for chatbots that provide harmful information leading to bodily harm, including self-harm.

Ambiguity Notes: None

Senate - 6748 - Requires publications to identify when the use of artificial intelligence is present within such publication

Legislation ID: 99887

Bill URL: View Bill

Summary

This bill amends the general business law to require that any publication, whether printed or electronic, must clearly indicate when generative artificial intelligence has been used in the creation of its content. This includes articles, photographs, videos, or any other visual media. The intent is to inform readers about the use of AI in the content they consume, thereby promoting awareness and understanding of AI's role in media.

Key Sections

Key Requirements

  • Applicable to all forms of media including articles, photographs, and videos.
  • Mandates that the disclosure be conspicuously imprinted on the top of the page or webpage.
  • Publications must imprint a notice on the top of the page or webpage indicating the use of AI in the content.
  • Requires publications to disclose the use of generative artificial intelligence in content creation.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO CONSUMER PROTECTION
2025-03-21 REFERRED TO CONSUMER PROTECTION

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the user's interest in legislation requiring disclosures for artificial intelligence usage.

Mechanism of Influence: It imposes a legal requirement on media publishers to label AI-generated content, creating a transparency mechanism for the public.

Evidence:

  • require that any publication, whether printed or electronic, must clearly indicate when generative artificial intelligence has been used in the creation of its content
  • Mandates that the disclosure be conspicuously imprinted on the top of the page or webpage.

Ambiguity Notes: The definition of generative AI as systems performing tasks requiring 'human-like cognition or perception' is broad and could lead to varying interpretations regarding which specific automated tools trigger the disclosure requirement.

Senate - 6954 - Requires generative artificial intelligence providers to include provenance data on certain content made available by the provider

Legislation ID: 100093

Bill URL: View Bill

Summary

This legislation, known as the Stop Deepfakes Act, amends the General Business Law to mandate that generative artificial intelligence providers disclose provenance data for synthetic content they produce or modify. It outlines definitions, requirements for applying provenance data, and penalties for non-compliance. The bill seeks to enhance transparency and accountability in the use of AI-generated content, particularly on social media platforms and by state agencies.

Key Sections

Key Requirements

  • Allows for the establishment of regulations regarding compliance by state agencies.
  • Empowers the attorney general to define acceptable methods for applying provenance data.
  • Mandates disclosure of the time and date provenance data was applied.
  • Penalties for violations based on intentionality and negligence.
  • Requires provenance data to identify synthetic content and its creator.
  • Requires social media platforms to maintain provenance data unless content is deleted.
  • Requires state agencies to apply provenance data to their digital content.
  • Specifies that provenance data must communicate the type of device or system used for content generation.
  • Specifies that violations can incur penalties based on intent and negligence.

Sponsors

Legislative Actions

Date Action
2026-01-07 died in assembly
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2026-01-07 returned to senate
2025-06-17 ordered to third reading rules cal.900
2025-06-17 substituted for a6540c
2025-06-12 DELIVERED TO ASSEMBLY
2025-06-12 PASSED SENATE
2025-06-12 referred to codes

Detailed Analysis

Analysis 1

Why Relevant: The act specifically targets generative artificial intelligence providers and imposes disclosure requirements.

Mechanism of Influence: Providers must apply provenance data to synthetic content, detailing its creation and identifying it as AI-generated.

Evidence:

  • This provision requires generative artificial intelligence providers to apply provenance data to synthetic content they produce or modify
  • Mandates disclosure of the time and date provenance data was applied.

Ambiguity Notes: The term 'synthetic content' is defined but its breadth depends on the specific technical implementation of provenance data.

Analysis 2

Why Relevant: It regulates the lifecycle of AI-generated content on social media platforms.

Mechanism of Influence: Platforms are prohibited from removing metadata that identifies content as AI-generated, ensuring transparency for users.

Evidence:

  • This section prohibits social media platforms from deleting or degrading provenance data associated with user-uploaded content

Ambiguity Notes: The definition of 'degrading' provenance data may require further technical clarification by the attorney general.

Analysis 3

Why Relevant: The legislation establishes a regulatory framework for AI oversight through the Attorney General.

Mechanism of Influence: Grants rulemaking authority to define acceptable methods for applying provenance data and enforcing compliance.

Evidence:

  • This section grants the attorney general the authority to create rules and regulations necessary for enforcing the provisions of the article.

Ambiguity Notes: None
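
The provenance requirements above can be illustrated with a minimal sketch. The field names are illustrative assumptions, not statutory language, but each corresponds to a requirement listed for the bill: identifying the content as synthetic, naming its creator, stating the type of system that generated it, and recording when the provenance data was applied.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Minimal sketch of a Stop Deepfakes Act provenance disclosure.

    Field names are illustrative, not the bill's text.
    """
    is_synthetic: bool       # identifies the content as AI-generated
    creator: str             # identifies who produced or modified the content
    generation_system: str   # type of device or system used for generation
    applied_at: str          # time and date the provenance data was applied

def make_record(creator: str, system: str) -> dict:
    # Timestamps the record at the moment provenance data is attached.
    record = ProvenanceRecord(
        is_synthetic=True,
        creator=creator,
        generation_system=system,
        applied_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(record)
```

In practice such metadata would be bound to the content itself (for example, as an embedded manifest) rather than kept as a detached record; the bill leaves acceptable application methods to the attorney general's rulemaking.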

Senate - 6955 - Establishes the artificial intelligence training data transparency act

Legislation ID: 100095

Bill URL: View Bill

Summary

The Artificial Intelligence Training Data Transparency Act mandates developers of generative AI models to provide comprehensive documentation about the datasets used for training these models. This includes details on the sources of data, types of data points, and any modifications made to the datasets. Additionally, it requires disclosure to employees whose data is utilized in training AI models, while providing exemptions for specific national security-related applications.

Key Sections

Key Requirements

  • Details about the datasets must include sources, data types, copyright status, and any modifications.
  • Disclosure of information to employees whose data is used for training.
  • Documentation must be posted on the developer's website.
  • Information must cover the purpose of the AI model, data types, and collection periods.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-03-27 REFERRED TO INTERNET AND TECHNOLOGY

Detailed Analysis

Analysis 1

Why Relevant: The act directly addresses the regulation of artificial intelligence by focusing on transparency and disclosure requirements for training data.

Mechanism of Influence: It forces developers to publish dataset details on their websites and requires employers to inform employees about the use of their data in AI training processes.

Evidence:

  • mandates developers of generative AI models to provide comprehensive documentation about the datasets used for training these models
  • requires disclosure to employees whose data is utilized in training AI models

Ambiguity Notes: The term 'national security-related applications' is not strictly defined in the abstract, potentially allowing for broad exemptions from the transparency requirements.
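
The documentation obligation can be sketched as a machine-readable disclosure a developer might post on its website. The key names below are assumptions for illustration; the bill itself only enumerates the categories (sources, data types, copyright status, modifications, purpose, and collection periods).

```python
import json

# Illustrative sketch of the dataset documentation S.6955 would require;
# key names are assumptions, not the bill's text.
dataset_disclosure = {
    "model_purpose": "general-purpose text generation",
    "datasets": [
        {
            "source": "licensed news archive (example)",
            "data_types": ["article text", "headlines"],
            "copyright_status": "licensed",
            "modifications": "deduplicated; boilerplate stripped",
            "collection_period": {"start": "2020-01-01", "end": "2024-12-31"},
        }
    ],
    "employee_data_used": False,  # if True, affected employees must be notified
}

def render_disclosure(disclosure: dict) -> str:
    # JSON rendering suitable for posting on the developer's website.
    return json.dumps(disclosure, indent=2, sort_keys=True)
```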

Senate - 7263 - Imposes liability for damages caused by a chatbot impersonating certain licensed professionals

Legislation ID: 113289

Bill URL: View Bill

Summary

This legislation amends the General Business Law to establish clear definitions and responsibilities for chatbot proprietors. It prohibits chatbots from providing certain legal or professional advice unless they are compliant with relevant licensing laws. Additionally, it allows individuals to seek damages if they are harmed by chatbot interactions and mandates clear notifications that users are engaging with a chatbot.

Key Sections

Key Requirements

  • Allows individuals to bring civil actions for actual damages.
  • Notice must be in the same language and size as other text on the website.
  • Prohibits chatbots from providing certain substantive responses or actions that would be illegal for licensed professionals.
  • Proprietors found willfully violating the law may face additional costs and attorney fees.
  • Requires clear, conspicuous notices that users are interacting with a chatbot.
  • Specifies that merely notifying users that they are interacting with a chatbot does not waive a proprietor's liability.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-06-13 COMMITTED TO RULES
2025-05-07 ADVANCED TO THIRD READING
2025-05-06 2ND REPORT CAL.
2025-05-05 1ST REPORT CAL.931
2025-04-07 REFERRED TO INTERNET AND TECHNOLOGY

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly defines 'artificial intelligence system' and 'chatbot' to establish the scope of regulation.

Mechanism of Influence: Sets the legal foundation for which technologies are subject to the requirements and prohibitions outlined in the law.

Evidence:

  • This section defines key terms related to the bill, including artificial intelligence system, chatbot, and proprietor.

Ambiguity Notes: None

Analysis 2

Why Relevant: It regulates the output and capabilities of AI systems by prohibiting specific types of professional advice.

Mechanism of Influence: Prevents AI from performing actions that require professional licensing, such as providing legal or medical advice, thereby restricting its functional application.

Evidence:

  • Proprietors are prohibited from allowing chatbots to provide responses that could constitute illegal acts if performed by licensed professionals, such as legal or medical advice.

Ambiguity Notes: The term 'substantive responses' may require further clarification to distinguish between general information and regulated professional advice.

Analysis 3

Why Relevant: The legislation mandates transparency through user disclosures.

Mechanism of Influence: Requires proprietors to provide clear and conspicuous notice to users that they are interacting with a chatbot rather than a human.

Evidence:

  • Proprietors must provide clear notice to users that they are interacting with a chatbot, ensuring transparency.
  • Requires clear, conspicuous notices that users are interacting with a chatbot.

Ambiguity Notes: None

Analysis 4

Why Relevant: It establishes a framework for oversight and accountability regarding AI-induced harm.

Mechanism of Influence: Creates a private right of action allowing individuals to sue for damages, which serves as a regulatory mechanism to ensure proprietors maintain safe AI interactions.

Evidence:

  • Individuals can sue for damages if they suffer harm from chatbot interactions, with additional penalties for willful violations by proprietors.

Ambiguity Notes: None

Senate - 7691 - Establishes the Artificial Intelligence Literacy Act

Legislation ID: 138531

Bill URL: View Bill

Summary

This bill introduces the Artificial Intelligence Literacy Act, which aims to improve artificial intelligence literacy among students and communities in New York. It recognizes the growing importance of AI technology and the need for educational initiatives that address both the benefits and risks associated with AI. The bill establishes a competitive grant program to fund educational efforts in public schools, community colleges, higher education institutions, and community organizations, particularly focusing on underserved populations.

Key Sections

Key Requirements

  • Allocates 20% to community colleges and 15% to public institutions of higher education.
  • Allocates 30% of funds to public elementary and secondary schools.
  • Community organizations must provide training and implement AI learning experiences.
  • Ensures timely distribution of funds and technical assistance to applicants.
  • Establishes a competitive grant process prioritizing high-need applicants.
  • Establishes procedures for application review and approval.
  • Funds can be used for teacher training, lab development, and creating educational programs.
  • Includes metrics for measuring student learning outcomes and engagement with AI education.
  • Reports must include data on training, implementation, and demographic reach.
  • Requires proposals to demonstrate need, set measurable objectives, and outline fund usage.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO EDUCATION
2025-04-30 REFERRED TO EDUCATION

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses Artificial Intelligence by establishing a state-level framework for AI literacy and education.

Mechanism of Influence: While the bill focuses on education rather than technical restrictions like weight submissions or age verification, it creates a legal definition for 'AI system' and 'artificial intelligence literacy' within New York law and mandates reporting on AI-related educational initiatives.

Evidence:

  • This bill introduces the Artificial Intelligence Literacy Act, which aims to improve artificial intelligence literacy among students and communities in New York.
  • This section defines key terms related to the artificial intelligence literacy grant program, including AI system, artificial intelligence literacy...

Ambiguity Notes: The bill's focus is promotional and educational rather than regulatory; it does not impose restrictions on AI developers but rather focuses on the 'literacy' aspect of the user's request for AI-related legislation.

Senate - 7963 - Requires certain political communications to include provenance data for all audio, images or videos used in such communications

Legislation ID: 144758

Bill URL: View Bill

Summary

This legislation, referred to as the election content accountability act, mandates that political campaigns for specific offices include provenance data in their communications. This data must disclose the origin, modifications, and any use of generative artificial intelligence in creating or altering audio, images, or videos. The law introduces penalties for non-compliance and provides the attorney general with the authority to enforce the regulations.

Key Sections

Key Requirements

  • Imposes a penalty of up to $100,000 for intentional violations.
  • Imposes a penalty of up to $50,000 for unintentional violations.
  • Provenance data must include details about the device used, specific synthetic content, AI usage, the AI provider, and the date of data application.
  • Requires campaigns to apply provenance data to political communications that include audio, images, or videos.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO ELECTIONS
2025-05-15 REFERRED TO ELECTIONS

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly addresses the regulation of generative artificial intelligence by requiring specific disclosures (provenance data) when AI is used to create or alter political media.

Mechanism of Influence: It forces campaigns to label AI-generated content and provides a legal framework for penalties and oversight by the Attorney General, thereby regulating the output and transparency of AI systems in a political context.

Evidence:

  • mandates that political campaigns... include provenance data in their communications
  • This data must disclose... any use of generative artificial intelligence
  • defines key terms... including provenance data, generative artificial intelligence system, and synthetic content

Ambiguity Notes: The definition of 'synthetic content' and 'generative artificial intelligence system' will be crucial for determining the scope of what needs to be disclosed, though the text implies a broad application to media.
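
The bill's compliance structure reduces to two checks: whether a campaign's provenance data contains every required element, and which penalty ceiling applies. A sketch under the assumption that the five required elements map to the field names below (the names are illustrative, not statutory):

```python
# Elements the bill says provenance data must include (illustrative names).
REQUIRED_PROVENANCE_FIELDS = {
    "device",             # details about the device used
    "synthetic_content",  # the specific synthetic content involved
    "ai_used",            # whether generative AI was used
    "ai_provider",        # the AI provider
    "date_applied",       # the date the provenance data was applied
}

def missing_fields(provenance: dict) -> set:
    # Returns which required elements a campaign's disclosure omits.
    return REQUIRED_PROVENANCE_FIELDS - provenance.keys()

def max_penalty(intentional: bool) -> int:
    # Penalty ceilings: up to $100,000 intentional, up to $50,000 unintentional.
    return 100_000 if intentional else 50_000
```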

Senate - 8331 - Enacts the "New York artificial intelligence transparency for journalism act"

Legislation ID: 159844

Bill URL: View Bill

Summary

The New York Artificial Intelligence Transparency for Journalism Act establishes requirements for developers of generative AI systems to disclose information about the sources of data used for training their systems. This includes providing details about the content accessed from journalism providers and ensuring that such providers are recognized and compensated for their work. The bill reflects the need to sustain quality journalism and protect it from unfair practices in the evolving landscape of AI technology.

Key Sections

Key Requirements

  • Allows for legal actions to compel compliance with transparency requirements.
  • Journalism providers can request subpoenas for data disclosure from AI developers.
  • Mandates disclosure of crawler information and the identity of the crawlers used to access journalism content.
  • Requires developers to post specific information about accessed journalism content on their websites before the release of AI systems to the public.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-06-09 AMEND AND RECOMMIT TO RULES
2025-06-09 PRINT NUMBER 8331A
2025-06-03 REFERRED TO RULES

Detailed Analysis

Analysis 1

Why Relevant: This legislation falls directly under the user's request for AI regulation and disclosure requirements.

Mechanism of Influence: It forces AI developers to provide a public accounting of the journalism data they ingest, creating a legal pathway for content owners to verify usage and seek enforcement.

Evidence:

  • Developers of generative AI systems must disclose specific information about the journalism content used for training their systems
  • Requires developers to post specific information about accessed journalism content on their websites before the release of AI systems to the public.
  • Allows for legal actions to compel compliance with transparency requirements.

Ambiguity Notes: The effectiveness depends on the specific definitions of AI and journalism providers provided in the act.

Senate - 8451 - Enacts the New York fundamental artificial intelligence requirements in (FAIR) news act

Legislation ID: 200158

Bill URL: View Bill

Summary

The New York FAIR News Act seeks to address the implications of artificial intelligence in news media by mandating disclosures to workers and consumers, ensuring human oversight of AI-generated content, and providing workplace protections for media professionals. It aims to safeguard the journalistic workforce from the potential negative impacts of AI technology on their roles and the quality of news reporting.

Key Sections

Key Requirements

  • Ensures existing employee rights are not diminished by AI usage.
  • Establishes the need for protections for journalists and the public against AI-generated misinformation.
  • Mandates conspicuous disclosure of AI involvement in content creation.
  • Mandates safeguards for protecting journalistic sources from AI technology.
  • Prohibits training AI on journalists' work without consent.
  • Requires full disclosure of AI tool usage to news media workers.
  • Requires human oversight for AI-generated content prior to publication.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-07-07 REFERRED TO RULES

Detailed Analysis

Analysis 1

Why Relevant: The act mandates transparency for AI-generated content presented to the public.

Mechanism of Influence: It requires news media to provide conspicuous disclosures when content is significantly created by generative AI.

Evidence:

  • News media content that is significantly created by generative AI must clearly indicate this to consumers at the point of access.

Ambiguity Notes: The term 'significantly created' is subjective and may require further regulatory clarification to determine the exact threshold of AI involvement that triggers disclosure.

Analysis 2

Why Relevant: It regulates the internal use of AI tools within news organizations.

Mechanism of Influence: Employers are required to disclose the use and application of generative AI tools to their workforce.

Evidence:

  • Employers in the news media must disclose to workers the use of generative AI tools in content creation, detailing how these tools are applied.

Ambiguity Notes: None

Analysis 3

Why Relevant: The legislation imposes a human-in-the-loop requirement for AI systems.

Mechanism of Influence: It prohibits the publication of AI-generated content without prior human review and approval.

Evidence:

  • Any content generated by AI must be reviewed and approved by a human worker before publication.

Ambiguity Notes: None

Analysis 4

Why Relevant: It addresses the use of intellectual property for AI training purposes.

Mechanism of Influence: The act prohibits training AI systems on journalists' work without obtaining their explicit consent.

Evidence:

  • Prohibits training AI on journalists' work without consent.

Ambiguity Notes: None

Senate - 8524 - Relates to enacting the NY data protection act

Legislation ID: 281613

Bill URL: View Bill

Summary

This bill outlines the rights of consumers in relation to their personal data, including the ability to exercise these rights through authorized representatives. It also grants the Attorney General the authority to create rules and regulations to ensure compliance with the provisions of this article, including the collection of data from various stakeholders to inform these regulations. Additionally, the bill includes a severability clause to maintain the validity of the remaining provisions if any part is found invalid.

Key Sections

Key Requirements

  • Allows the Attorney General to collect data from businesses and other governmental entities for regulatory purposes.
  • Consumers or authorized agents may exercise rights on behalf of the consumer.
  • Ensures that the act remains effective even if parts are invalidated.
  • Establishes a staggered effective date for certain sections of the act.
  • Requires the Attorney General to adopt suitable rules and regulations to implement the article's provisions.

Sponsors

Legislative Actions

Date Action
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2025-10-08 REFERRED TO RULES

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates the Attorney General to define disclosure requirements for businesses.

Mechanism of Influence: This authority could be used to require businesses to disclose when AI is used to process consumer data or to provide transparency into automated decision-making processes.

Evidence:

  • The Attorney General is empowered to create and amend rules and regulations necessary to enforce the provisions of the article, including the content of disclosures required from businesses.

Ambiguity Notes: The bill does not explicitly mention 'Artificial Intelligence', but 'disclosures' is a common regulatory tool used for AI transparency.

Analysis 2

Why Relevant: The Attorney General is authorized to collect data from businesses to inform regulation.

Mechanism of Influence: This could serve as a mechanism for the government to request information about data sets used to train AI or the outcomes of AI processing to inform future oversight.

Evidence:

  • The Attorney General can request data and information from various entities to inform the creation of rules and regulations, ensuring the rules are based on comprehensive research and stakeholder input.

Ambiguity Notes: The scope of 'data and information' is broad and could include technical details about AI systems if they pertain to consumer data rights.

Senate - 8589 - Enacts the automation displacement protection act

Legislation ID: 281674

Bill URL: View Bill

Summary

This bill, known as the "automation displacement protection act," seeks to address the impact of artificial intelligence and automation on employment in New York. It mandates that covered employers notify employees and relevant authorities about impending job losses due to automation, ensures a transition period for affected workers, and establishes penalties for non-compliance. The legislation aims to safeguard workers' rights and promote fair labor practices in the face of technological advancements.

Key Sections

Key Requirements

  • Defines artificial intelligence system as any system performing tasks that require human intelligence.
  • Defines covered employer as businesses with 50 or more full-time employees.
  • Defines employment loss including termination or significant reductions in hours.
  • Defines technological displacement as job losses or significant reductions in work hours due to automation.
  • Employers failing to provide notice are liable for up to 60 days of back pay and benefits to affected employees.
  • Employers must not discharge employees during the transition period except for just cause.
  • Employers who violate the notice or transition requirements will be ineligible for state grants, loans, or tax incentives for five years.
  • Notice must be given to affected employees, employee organizations, the commissioner, local officials, and workforce development boards.
  • Notice must include details about the automation, affected employees, anticipated displacement date, retraining programs, and vendors.
  • Requires 90 days advance notice for displacements affecting 25 or more employees or 25% of the workforce.
  • Requires employers to offer continued employment or equivalent wages during the transition period.
  • The attorney general may take action to enforce compliance and recover penalties.
  • The commissioner can impose civil penalties of up to $10,000 per day for willful violations.
  • The commissioner will maintain a public registry of violators.

Sponsors

Legislative Actions

Date Action
2026-01-16 AMEND AND RECOMMIT TO LABOR
2026-01-16 PRINT NUMBER 8589B
2026-01-07 REFERRED TO LABOR
2025-12-19 AMEND (T) AND RECOMMIT TO RULES
2025-12-19 PRINT NUMBER 8589A
2025-11-21 REFERRED TO RULES

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes mandatory disclosure requirements for the implementation of AI systems in the workplace.

Mechanism of Influence: Employers must provide detailed written notice to employees and government officials about the specific automation technology and vendors being used 90 days before displacement occurs.

Evidence:

  • Notice must include details about the automation, affected employees, anticipated displacement date, retraining programs, and vendors.
  • Defines artificial intelligence system as any system performing tasks that require human intelligence.

Ambiguity Notes: The definition of 'artificial intelligence system' is broad ('any system performing tasks that require human intelligence'), which could encompass a wide range of software beyond generative AI.

Analysis 2

Why Relevant: The legislation creates a regulatory framework for AI by imposing penalties and government oversight on its deployment when it affects employment.

Mechanism of Influence: It empowers the attorney general and commissioner to enforce compliance through fines and makes violators ineligible for state grants or tax incentives.

Evidence:

  • The attorney general may take action to enforce compliance and recover penalties.
  • Employers who violate the notice or transition requirements will be ineligible for state grants, loans, or tax incentives for five years.

Ambiguity Notes: None
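
The bill's numeric thresholds are concrete enough to sketch: a covered employer (50 or more full-time employees) must give 90 days' notice when a displacement affects 25 or more employees or 25% of the workforce, and failure to give notice exposes the employer to up to 60 days of back pay and benefits. A minimal illustration:

```python
def notice_required(affected: int, workforce: int) -> bool:
    """S.8589 notice trigger (sketch): 90 days' advance notice is required
    when displacement affects 25 or more employees or 25% of the workforce.
    """
    return affected >= 25 or (workforce > 0 and affected / workforce >= 0.25)

def back_pay_liability(daily_pay_and_benefits: float, days_without_notice: int) -> float:
    # Employers who fail to provide notice owe up to 60 days of back pay and benefits.
    return daily_pay_and_benefits * min(days_without_notice, 60)
```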

Senate - 8706 - Requires covered businesses to annually report to the department of labor regarding the impact of artificial intelligence on hiring and the nature of artificial intelligence use

Legislation ID: 281780

Bill URL: View Bill

Summary

This legislation mandates that businesses with more than 100 employees or that are publicly traded submit annual reports detailing how artificial intelligence affects their hiring processes, including data on employee displacement, hiring, and the specifics of AI usage. The Department of Labor will develop reporting guidelines and publish an annual report based on the submitted data, ensuring transparency and accountability in the use of AI in the workplace.

Key Sections

Key Requirements

  • Allows businesses 90 days to rectify violations before penalties are enforced.
  • Imposes a civil penalty for each day of non-compliance.
  • Requires covered businesses to submit employment data related to AI impact.
  • Requires information on the nature of AI usage, including objectives and oversight.
  • Requires the Department to analyze and present aggregate data on AI's employment impacts.

Sponsors

Legislative Actions

Date Action
2026-01-15 AMEND (T) AND RECOMMIT TO LABOR
2026-01-15 PRINT NUMBER 8706A
2026-01-07 REFERRED TO LABOR

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the disclosure of AI usage in the workplace, specifically focusing on hiring and employment impacts.

Mechanism of Influence: It mandates annual reporting by covered businesses, creating a transparency mechanism where the government oversees how AI affects the labor market.

Evidence:

  • Mandates covered businesses to report annually on the impact of AI on hiring and usage details.
  • Requires information on the nature of AI usage, including objectives and oversight.

Ambiguity Notes: The term 'nature of AI usage' is broad and may require further clarification in the Department's guidelines to determine the depth of technical disclosure required.

Senate - 8828 - New York State Assembly

Legislation ID: 272208

Bill URL: View Bill

Summary

This legislation seeks to amend the General Business Law to establish the Responsible AI Safety and Education (RAISE) Act, which mandates transparency and safety protocols for large frontier developers of AI models. It emphasizes the need for standardized disclosures about the risks and management of AI technologies to protect the public and ensure responsible innovation.

Key Sections

Key Requirements

  • A civil penalty of $1,000 for each day of non-compliance with disclosure requirements.
  • Allows redaction of sensitive information in compliance documents.
  • Defines the scope of the article's applicability.
  • Establishes civil penalties for non-compliance with reporting.
  • Exempts certain educational and consortium entities.
  • Large frontier developers shall be assessed in pro rata shares for operating expenses.
  • Liability for the amount of assessments owed.
  • Limits access to authorized personnel only.
  • Limits penalties based on the severity of the violation.
  • Mandates annual reviews and updates of the frontier AI framework.
  • Prohibits false or misleading statements about catastrophic risks.
  • Prohibits false statements regarding compliance with AI frameworks.
  • Prohibits inclusion of sensitive information in annual reports.
  • Requires annual reporting of anonymized safety incidents.
  • Requires confidentiality of internal use reports.
  • Requires declaration of intent to comply with federal standards.
  • Requires disclosure of all persons or entities owning 50% or more of a publicly traded large frontier developer.
  • Requires disclosure of all persons or entities owning 5% or more of a privately held large frontier developer.
  • Requires disclosure of third-party evaluator involvement.
  • Requires filing of a disclosure statement with the office.
  • Requires immediate reporting of incidents posing imminent risks.
  • Requires justification for redactions in published documents.
  • Requires large frontier developers to publish a detailed frontier AI framework on their website.
  • Requires provision of contact information for a primary, secondary, and tertiary contact responsible for inquiries.
  • Requires renewal of the statement every two years or upon changes.
  • Requires reporting of critical safety incidents within 72 hours.
  • Requires submission of federal incident reports to the office.
  • Requires summaries of catastrophic risk assessments in transparency reports.
  • Requires transparency reports before deploying new or modified AI models.
  • The office may consider additional reporting requirements to enhance safety and transparency.

Sponsors

Legislative Actions

Date Action
2026-01-28 DELIVERED TO ASSEMBLY
2026-01-28 PASSED SENATE
2026-01-28 referred to ways and means
2026-01-20 ORDERED TO THIRD READING CAL.94
2026-01-08 REFERRED TO RULES

Detailed Analysis

Analysis 1

Why Relevant: The act mandates extensive disclosure requirements for AI developers regarding their operations and ownership.

Mechanism of Influence: Developers are required to file disclosure statements with a designated office and publish transparency reports before deploying new or modified AI models.

Evidence:

  • Large frontier developers must file a disclosure statement with the office containing specific information about their operations and ownership.
  • Requires transparency reports before deploying new or modified AI models.

Ambiguity Notes: The definition of 'large frontier developer' is central to the scope but the specific technical thresholds are not detailed in the summary.

Analysis 2

Why Relevant: The legislation focuses on regulating AI safety and mitigating catastrophic risks.

Mechanism of Influence: It requires the creation and publication of a 'frontier AI framework' detailing practices for assessing and mitigating risks.

Evidence:

  • This provision mandates that large frontier developers create and publish a frontier AI framework detailing their practices for assessing and mitigating catastrophic risks associated with their AI models.

Ambiguity Notes: The term 'catastrophic risk' is defined in the act but the specific criteria for what constitutes such a risk may be subject to regulatory interpretation.

Analysis 3

Why Relevant: The act requires reporting of safety incidents and provides for government oversight.

Mechanism of Influence: Developers must report critical safety incidents to the office within 72 hours and provide annual reports on safety incidents and risk assessments.

Evidence:

  • Requires reporting of critical safety incidents within 72 hours.
  • The office will produce annual reports summarizing critical safety incidents and assessments of catastrophic risks

Ambiguity Notes: None

Analysis 4

Why Relevant: The legislation includes provisions for third-party involvement in AI evaluation, which aligns with audit requirements.

Mechanism of Influence: Transparency reports must disclose the involvement of third-party evaluators in the assessment of AI models.

Evidence:

  • Requires disclosure of third-party evaluator involvement.

Ambiguity Notes: None
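
Two of the RAISE Act's enforcement mechanics are simple enough to sketch directly from the requirements above: critical safety incidents must be reported within 72 hours, and non-compliance with disclosure requirements accrues a $1,000 civil penalty per day. An illustrative calculation:

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # critical safety incidents: 72 hours

def report_deadline(incident_time: datetime) -> datetime:
    """Sketch of the RAISE Act's incident-reporting clock: a critical safety
    incident must be reported to the office within 72 hours."""
    return incident_time + REPORTING_WINDOW

def disclosure_penalty(days_noncompliant: int) -> int:
    # $1,000 civil penalty for each day of non-compliance with disclosure requirements.
    return 1_000 * days_noncompliant
```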

Senate - 8831 - New York State Assembly

Legislation ID: 272211

Bill URL: View Bill

Summary

This legislation amends existing laws to repeal certain provisions related to automated decision-making by government agencies and establishes new requirements for the disclosure of automated employment decision-making tools. It outlines the responsibilities of covered entities in disclosing the use of such tools and ensures that the use of artificial intelligence does not infringe upon existing employee rights and collective bargaining agreements.

Key Sections

Key Requirements

  • Defines covered entities to include counties, cities, towns, school districts, and public universities.
  • Disclosure must include a description of the tools, their start date, purpose, and any relevant information.
  • Guarantees protection of collective bargaining unit membership for existing employees.
  • Maintains existing collective bargaining agreements and employee rights.
  • Prohibits discharge, displacement, or loss of position due to AI use.
  • Requires covered entities to publish a list of automated tools used by December 30 each year.

Sponsors

Legislative Actions

Date Action
2026-01-21 DELIVERED TO ASSEMBLY
2026-01-21 ordered to third reading rules cal.51
2026-01-21 passed assembly
2026-01-21 PASSED SENATE
2026-01-21 referred to science and technology
2026-01-21 returned to senate
2026-01-21 substituted for a9487
2026-01-12 ORDERED TO THIRD READING CAL.46

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically mandates disclosures for automated employment decision-making tools, aligning with the user's interest in AI disclosure requirements.

Mechanism of Influence: Covered entities must publish descriptions and purposes of AI tools on their websites annually, providing public transparency into government AI usage.

Evidence:

  • Mandates that covered entities using automated employment decision-making tools must disclose their use on their website.
  • Requires covered entities to publish a list of automated tools used by December 30 each year.

Ambiguity Notes: The term 'automated employment decision-making tools' is used, which typically encompasses AI but may require specific technical definitions to determine the full scope of software covered.

Analysis 2

Why Relevant: The legislation regulates the impact of AI on the workforce by prohibiting certain outcomes like displacement.

Mechanism of Influence: It prohibits the discharge or loss of position due to AI use and ensures AI does not infringe upon collective bargaining agreements.

Evidence:

  • Prohibits discharge, displacement, or loss of position due to AI use.
  • Ensures that the use of artificial intelligence systems does not affect existing employee rights or collective bargaining agreements.

Ambiguity Notes: The bill focuses on the labor outcomes of AI rather than the technical specifications or weights of the models themselves.

Analysis 3

Why Relevant: The bill defines the scope of government oversight regarding automated systems.

Mechanism of Influence: It identifies specific public institutions (counties, cities, school districts, and universities) that must comply with AI transparency standards.

Evidence:

  • Introduces a definition for covered entity which includes various local government and educational institutions.
  • This section repeals specific articles and sections added in 2025 regarding automated decision-making by government agencies.

Ambiguity Notes: The repeal of 2025 provisions suggests a shift in how the government intends to oversee automated decision-making, though the specific nature of the repealed laws is not detailed.

Senate - 8928 - Enacts the artificial intelligence workforce impact transparency act

Legislation ID: 281831

Bill URL: View Bill

Summary

This bill amends the New York labor law to require employers to indicate if layoffs are due to the use of artificial intelligence or automation. It mandates that employers provide specific information regarding the impact of these technologies on job losses, aiming to inform public policy and workforce retraining efforts. Additionally, it establishes a pilot program to monitor compliance and analyze the effects of these reporting requirements.

Key Sections

Key Requirements

  • A report on the pilot program's findings must be submitted to the governor and legislature.
  • Employers must describe the technology or process contributing to the layoffs.
  • Employers must provide the estimated percentage of affected positions.
  • Employers must state whether layoffs are due to AI or automation.
  • Reports must be made publicly available and shared with the economic development department.
  • The commissioner must publish quarterly summaries of AI-related layoffs.
  • The Department of Labor must establish a pilot program within 180 days.

Sponsors

Legislative Actions

Date Action
2026-01-16 REFERRED TO LABOR

Detailed Analysis

Analysis 1

Why Relevant: The bill imposes mandatory disclosure requirements on the use of artificial intelligence within corporate operations.

Mechanism of Influence: Employers are legally required to identify and describe the specific AI technologies or automated processes that lead to workforce reductions when filing WARN Act notices.

Evidence:

  • require employers to disclose if layoffs are due to AI or automation
  • Employers must describe the technology or process contributing to the layoffs.
  • Employers must provide the estimated percentage of affected positions.

Ambiguity Notes: The bill uses the terms 'artificial intelligence' and 'automation' which may require further regulatory definition to ensure consistent reporting across different industries.

Analysis 2

Why Relevant: The legislation establishes a government oversight and monitoring framework for AI's societal impact.

Mechanism of Influence: It mandates the creation of a state-managed database and requires the commissioner of labor to publish analytical summaries of AI-related job losses.

Evidence:

  • mandates the commissioner of labor to maintain a database of reported layoffs related to AI and automation
  • publish quarterly summaries analyzing these reductions
  • establish a pilot program to evaluate compliance with the new reporting requirements

Ambiguity Notes: The effectiveness of the oversight depends on the specific data points collected during the pilot program and the level of detail provided by employers.

Senate - 9008 - Enacts into law major components of legislation necessary to implement the state transportation, economic development and environmental conservation budget for the 2026-2027 state fiscal year

Legislation ID: 283543

Bill URL: View Bill

Summary

This act amends various laws related to vehicle and traffic regulations, insurance, environmental conservation, and economic development initiatives in New York. It includes provisions for increasing motor vehicle transaction fees, establishing motorcycle safety course requirements, implementing intelligent speed assistance devices, and enhancing penalties for crimes against highway workers. The bill also addresses energy policies, insurance regulations, and agricultural marketing, among other areas.

Key Sections

Key Requirements

  • Applicants must submit proof of successful completion of the motorcycle rider safety course.
  • Cities may establish a pilot program for speed assistance devices based on local laws.
  • Completion of necessary rules and regulations for implementation by the effective date.
  • Establishes expiration date for specific provisions on April 1, 2028.
  • Increases certain motor vehicle transaction fees.

Sponsors

Legislative Actions

Date Action
2026-01-21 REFERRED TO FINANCE

Detailed Analysis

Analysis 1

Why Relevant: The legislation addresses 'intelligent' speed assistance devices, which fall under the category of automated or intelligent vehicle technologies.

Mechanism of Influence: It creates a legal framework for cities to pilot and regulate devices that automatically assist with vehicle speed management, involving automated decision-making.

Evidence:

  • Part D: Intelligent Speed Assistance Devices
  • Establishes a pilot program for intelligent speed assistance devices in cities with populations over one million.

Ambiguity Notes: The text uses the term 'intelligent' but does not explicitly use the term 'Artificial Intelligence' or mandate the specific AI-centric disclosures (like model weights or audits) requested by the user.

Senate - 9028 - Prohibits employers from engaging in discrimination on the basis of a protected class when using artificial intelligence in certain employment practices

Legislation ID: 285776

Bill URL: View Bill

Summary

This legislation amends New York's executive law to define artificial intelligence and generative artificial intelligence, and establishes unlawful discriminatory practices related to their use in employment. It mandates that employers cannot use AI for recruitment, hiring, or other employment-related decisions in a way that discriminates against individuals based on protected characteristics. Additionally, it requires employers to notify employees when AI is used in these contexts.

Key Sections

Key Requirements

  • Employers must not use AI for employment decisions that lead to discrimination based on protected classes.
  • Employers must provide notice to employees when using AI for recruitment and other employment-related decisions.
  • The division must adopt rules regarding notice requirements, including timing and means of providing notice.

Sponsors

Legislative Actions

Date Action
2026-01-23 REFERRED TO INVESTIGATIONS AND GOVERNMENT OPERATIONS

Detailed Analysis

Analysis 1

Why Relevant: The legislation provides formal legal definitions for artificial intelligence and generative artificial intelligence.

Mechanism of Influence: By defining these terms, the law establishes the specific scope of technologies subject to regulation and oversight within the state's executive law.

Evidence:

  • This provision defines artificial intelligence and generative artificial intelligence, outlining the capabilities and types of outputs these systems can produce.

Ambiguity Notes: The breadth of the definition of 'capabilities and types of outputs' will determine how many software tools fall under this regulatory umbrella.

Analysis 2

Why Relevant: The law requires disclosures regarding the use of AI in professional settings.

Mechanism of Influence: It creates a mandatory notification system where employers must inform employees and recruits if AI is being used to evaluate them, aligning with the user's interest in disclosure requirements.

Evidence:

  • Employers must provide notice to employees when using AI for recruitment and other employment-related decisions.
  • The division must adopt rules regarding notice requirements, including timing and means of providing notice.

Ambiguity Notes: The specific timing and means of notice are left to future rulemaking, which may affect how much transparency the requirement delivers in practice.

Analysis 3

Why Relevant: The legislation regulates the application of AI to prevent discriminatory outcomes and establishes enforcement mechanisms.

Mechanism of Influence: It prohibits specific uses of AI that lead to discrimination and empowers a division to create regulations and enforcement protocols for these AI-related practices.

Evidence:

  • This provision establishes that it is unlawful for employers to use artificial intelligence in a way that discriminates against employees based on various protected classes.
  • This provision authorizes the division to create rules and regulations for the implementation and enforcement of the AI-related employment practices.

Ambiguity Notes: None

Senate - 933 - Establishes the position of chief artificial intelligence officer

Legislation ID: 66546

Bill URL: View Bill

Summary

This bill amends the state technology law to define artificial intelligence and automated decision-making systems, and to create the position of Chief Artificial Intelligence Officer. This officer will be responsible for developing policies, guidelines, and risk management plans for the use of AI in state operations, while also coordinating efforts across various state departments and ensuring public safety and rights are protected.

Key Sections

Key Requirements

  • Conduct audits of AI usage to ensure compliance with laws.
  • Coordinate activities involving AI across state departments.
  • Defines AI and automated decision-making systems based on their capabilities and autonomy.
  • Develop statewide AI policies and governance.
  • Excludes basic software processes that do not impact human rights or welfare.
  • Members are appointed from various state agencies and organizations.
  • Must have expertise in AI, data privacy, and technology.
  • The Chief AI Officer must be appointed by the governor with Senate consent.
  • The committee must meet at least twice a year.

Sponsors

Legislative Actions

Date Action
2026-01-07 died in assembly
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2026-01-07 returned to senate
2025-05-22 DELIVERED TO ASSEMBLY
2025-05-22 PASSED SENATE
2025-05-22 referred to governmental operations
2025-03-10 ADVANCED TO THIRD READING
2025-03-05 2ND REPORT CAL.

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes formal definitions for Artificial Intelligence and Automated Decision-Making Systems.

Mechanism of Influence: By defining these terms, the bill sets the legal scope for which technologies are subject to state oversight, regulation, and the powers of the Chief AI Officer.

Evidence:

  • This provision defines Artificial Intelligence and Automated Decision-Making System, outlining their functionalities and exclusions.

Ambiguity Notes: The exclusion of 'basic software processes' that do not impact human rights or welfare may create a grey area regarding which automated tools fall under the regulatory umbrella.

Analysis 2

Why Relevant: The legislation mandates the auditing of AI usage, which is a core component of AI regulation and oversight.

Mechanism of Influence: The Chief AI Officer is granted the authority to conduct audits to ensure that state agencies are complying with established laws and safety protocols when using AI.

Evidence:

  • Conduct audits of AI usage to ensure compliance with laws.
  • Develop statewide AI policies and governance.

Ambiguity Notes: The text does not specify the frequency of these audits or the specific technical standards against which the AI will be measured.

Analysis 3

Why Relevant: The bill creates a centralized oversight body and a Chief AI Officer to manage AI risks.

Mechanism of Influence: The CAIO is responsible for developing risk management plans and policies, effectively creating a regulatory environment for AI deployment within the state government.

Evidence:

  • This officer will be responsible for developing policies, guidelines, and risk management plans for the use of AI in state operations
  • Establishment of the Chief Artificial Intelligence Officer

Ambiguity Notes: While the focus is on state operations, the policies developed by the CAIO could influence procurement requirements for private AI vendors.

Senate - 934 - Requires warnings on generative artificial intelligence systems

Legislation ID: 66547

Bill URL: View Bill

Summary

This bill amends the general business law in New York by introducing a requirement for generative artificial intelligence systems to include conspicuous warnings on their user interfaces. These warnings must inform users that the outputs generated by these systems may not always be accurate or appropriate. Failure to comply with this requirement can result in civil penalties for the owners or operators of such systems.

Key Sections

Key Requirements

  • Defines Artificial intelligence as a machine-based system that makes predictions or decisions based on human-defined objectives.
  • Defines Generative artificial intelligence system as any AI system that generates content such as code, text, or images.
  • Each calendar year of continued violation counts as a separate violation.
  • Imposes a civil penalty of up to $25 per user or $100,000 for non-compliance with the warning requirement.
  • Requires conspicuous display of a warning on the user interface of generative AI systems regarding potential inaccuracies and inappropriateness of outputs.

Sponsors

Legislative Actions

Date Action
2026-01-07 died in assembly
2026-01-07 REFERRED TO INTERNET AND TECHNOLOGY
2026-01-07 returned to senate
2025-06-12 referred to codes
2025-06-12 REPASSED SENATE
2025-06-12 RETURNED TO ASSEMBLY
2025-06-09 AMENDED ON THIRD READING (T) 934A
2025-06-09 RECALLED FROM ASSEMBLY

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates generative AI systems by mandating specific disclosures to users regarding the reliability and nature of the content produced.

Mechanism of Influence: It imposes a legal requirement for AI operators to include warnings on user interfaces, backed by civil penalties, thereby enforcing transparency in AI-human interactions.

Evidence:

  • Requires conspicuous display of a warning on the user interface of generative AI systems regarding potential inaccuracies and inappropriateness of outputs.
  • Imposes a civil penalty of up to $25 per user or $100,000 for non-compliance with the warning requirement.

Ambiguity Notes: The definition of 'conspicuous' and 'inappropriate' may be subject to interpretation, potentially leading to varying standards of implementation among different AI providers.

Senate - 955 - Relates to the use of smart access systems and the information that may be gathered from such systems

Legislation ID: 66574

Bill URL: View Bill

Summary

This legislation outlines the requirements for smart access systems in multiple dwellings, including data collection limitations, prohibitions on certain types of data, and security measures for protecting tenant information. It also addresses the responsibilities of owners and managing agents regarding tenant consent and data retention, as well as penalties for violations of the provisions set forth in the bill.

Key Sections

Key Requirements

  • Civil penalty of up to $5,000 for each violation.
  • Collect only necessary data for system operation.
  • Data collected cannot be used for eviction purposes.
  • Destroy or anonymize data within specified timeframes.
  • Do not collect likeness of minors or relationship status.
  • Do not condition tenancy on consent to use smart access systems.
  • Do not request or retain social security numbers.
  • Exemptions for dwellings owned by specific entities or primarily occupied by transient occupants.
  • Increased penalty of up to $10,000 for harassment or deprivation of rights.
  • No location tracking devices in smart access systems.
  • Notify customers of security breaches within 24 hours.
  • Owners must apply for approval before installation in regulated dwellings.
  • Provide software updates to fix vulnerabilities within 30 days.
  • Record access events but not departures.
  • Store data securely to prevent unauthorized access.

Sponsors

Legislative Actions

Date Action
2026-01-07 died in assembly
2026-01-07 REFERRED TO HOUSING, CONSTRUCTION AND COMMUNITY DEVELOPMENT
2026-01-07 returned to senate
2025-05-14 DELIVERED TO ASSEMBLY
2025-05-14 PASSED SENATE
2025-05-14 referred to housing
2025-05-07 ADVANCED TO THIRD READING
2025-05-06 2ND REPORT CAL.

Detailed Analysis

Analysis 1

Why Relevant: The bill places specific restrictions on the collection and retention of biometric data within smart access systems.

Mechanism of Influence: Biometric data processing, such as facial recognition or fingerprint scanning, is a primary application of artificial intelligence in security and access control. By limiting biometric data collection, the bill regulates the deployment of AI-driven identification technologies in residential settings.

Evidence:

  • sets limits on the collection of biometric data
  • Do not collect likeness of minors

Ambiguity Notes: The text does not explicitly use the term 'artificial intelligence,' but 'smart access systems' and 'biometric data' collection typically involve AI-based pattern recognition and automated processing.

Analysis 2

Why Relevant: The legislation mandates oversight and security requirements for the software powering 'smart' infrastructure.

Mechanism of Influence: The requirement for companies to notify customers of security breaches and provide software updates to fix vulnerabilities within 30 days imposes regulatory oversight on the automated software systems used for building access.

Evidence:

  • Notify customers of security breaches within 24 hours
  • Provide software updates to fix vulnerabilities within 30 days

Ambiguity Notes: The scope of 'smart access software' is broad and could range from simple digital credentials to complex AI-integrated surveillance and entry systems.

↑ Back to Table of Contents

North Carolina

Index of Bills

Senate - 1004 - UNC AI & Technology Hubs.

Legislation ID: 163347

Bill URL: View Bill

Summary

H.B. 1004 appropriates funds to create AI Hubs and Technology Hubs within the University of North Carolina system. The bill outlines financial allocations for establishing these hubs, which will focus on technology innovation, workforce development, and research in artificial intelligence. Additionally, it mandates the selection of institutions, funding conditions, and reporting requirements to ensure accountability and effectiveness in achieving the bill's goals.

Key Sections

Key Requirements

  • At least one AI Hub must be a historically Black or American Indian institution.
  • Constituent institutions must report the departments and units included in the technology hub to the Board of Governors by February 15, 2026.
  • Research projects must focus on applied AI in specified priority areas (education, workforce development, healthcare, or government), AI fundamentals and infrastructure, or AI ethics and governance.
  • Selected institutions must match 10% of the allocated funds with non-State funds.
  • The Board of Governors must report this information to the Joint Legislative Education Oversight Committee by February 15, 2026.
  • The Board of Governors must select up to eight constituent institutions as AI Hubs by December 1, 2026.

Sponsors

Legislative Actions

Date Action
2025-04-14 Passed 1st Reading
2025-04-14 Ref to the Com on Appropriations, if favorable, Rules, Calendar, and Operations of the House
2025-04-10 Filed

Detailed Analysis

Analysis 1

Why Relevant: The legislation specifically allocates funding for research into AI ethics and governance.

Mechanism of Influence: By establishing a grant program for AI ethics and governance, the state creates a framework for academic and policy oversight regarding how AI technologies are developed and deployed.

Evidence:

  • Research projects must focus on applied AI in specified priority areas, AI fundamentals, or AI ethics and governance.

Ambiguity Notes: The term 'governance' is broad and could refer to either internal institutional policies or the development of broader regulatory recommendations for the state.

Analysis 2

Why Relevant: The mandate for AI Hubs includes a focus on citizen rights.

Mechanism of Influence: Requiring AI Hubs to focus on citizen rights suggests a regulatory or oversight interest in protecting the public from potential AI-related harms.

Evidence:

  • This section mandates the selection of up to eight institutions to serve as AI Hubs, with a focus on economic growth and citizen rights.

Ambiguity Notes: The bill does not define specific 'citizen rights' or how the hubs will enforce or protect them, leaving the practical application to the selected institutions.

Senate - 860 - Social Media Control in IT Act.

Legislation ID: 163099

Bill URL: View Bill

Summary

This bill establishes the Social Media Control in Information Technology Act, which mandates that social media platforms provide clear disclosures regarding data collection and usage, particularly for minors. It requires platforms to implement user-friendly mechanisms for privacy rights, prohibits the use of minors' data in algorithmic recommendations, and sets default privacy settings to protect young users. Additionally, it holds operators accountable for non-compliance and creates a registry for privacy policies.

Key Sections

Key Requirements

  • Allows the Attorney General to monitor compliance, investigate complaints, and bring civil actions for noncompliance.
  • Creates a 21-member task force within the Department of Justice, with specific representation requirements, to address data privacy issues for minors.
  • Default privacy settings for minors must be configured to the highest level of privacy, with data-exposing features disabled by default.
  • Defines key terms such as accessible mechanism, algorithmic recommendation system, personal information, and minor.
  • Gives minors the right to file civil suits against platforms that violate their rights under the bill.
  • Mandates transparency about platform features and their potential negative impacts on minors' well-being.
  • Minors' data must not be used in algorithmic recommendations; personal information may be used in recommendations only if the user is not a minor and has consented.
  • Platforms are not required to delete information needed for transaction completion, security, debugging, regulatory compliance, research, or internal uses.
  • Platforms must inform users about data collection practices and obtain consent before collecting personal data, with clear disclosures provided upon first use or after six months of inactivity.
  • Platforms must maintain records of requests for correction and deletion of personal information.
  • Platforms must protect minors from manipulative design techniques that exploit psychological vulnerabilities.
  • Platforms must provide a registry of their privacy policies to the Consumer Protection Division.
  • The task force must report annually to the General Assembly on its findings and recommendations regarding social media and mental health.
  • Users must have accessible mechanisms to request correction or deletion of their personal information, and to manage or modify the data used in algorithmic recommendations; platforms must act on verifiable requests unless certain exceptions apply.
  • Violations of the bill are considered unfair or deceptive acts under existing law.

Sponsors

Legislative Actions

Date Action
2025-06-17 Reptd Fav Com Substitute
2025-06-17 Re-ref Com On Appropriations
2025-04-10 Passed 1st Reading
2025-04-10 Ref to the Com on Commerce and Economic Development, if favorable, Appropriations, if favorable, Rules, Calendar, and Operations of the House
2025-04-09 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates algorithmic recommendation systems, which are a primary application of artificial intelligence in social media.

Mechanism of Influence: It prohibits the use of minors' data within these AI-driven systems and mandates transparency regarding how these algorithms affect user well-being.

Evidence:

  • prohibits the use of minors data in algorithmic recommendations
  • protection from personalized recommendation systems
  • Defines key terms used in the bill, including... algorithmic recommendation system

Ambiguity Notes: While the bill uses the term 'algorithmic recommendation system,' this is functionally synonymous with the AI models used to rank and suggest content to users.

Analysis 2

Why Relevant: The legislation requires specific disclosures and transparency regarding data usage and platform features.

Mechanism of Influence: Platforms must provide clear disclosures about data collection and usage and maintain a registry of privacy policies for government oversight.

Evidence:

  • mandates that social media platforms provide clear disclosures regarding data collection and usage
  • Mandates transparency in how platforms affect minors well-being
  • Platforms must provide a registry of their privacy policies to the Consumer Protection Division

Ambiguity Notes: The level of technical detail required in these disclosures (e.g., model architecture vs. data categories) is not fully specified in the abstract.

Analysis 3

Why Relevant: The bill focuses on age-specific regulations and usage controls.

Mechanism of Influence: It mandates default settings for minors to prevent data exposure and manipulation, effectively requiring platforms to distinguish between adult and minor users.

Evidence:

  • mandates that certain features on platforms be disabled by default to protect minors
  • Default privacy settings for minors must prioritize user privacy
  • User data privacy; targeting minors prohibited

Ambiguity Notes: The bill implies a need for age verification to enforce these protections, though the specific technical requirements for verification are not detailed.

Analysis 4

Why Relevant: The bill establishes oversight and enforcement mechanisms for data and algorithmic practices.

Mechanism of Influence: It creates a Data Privacy Task Force and empowers the Attorney General to monitor compliance and investigate platform operations.

Evidence:

  • establishes a task force within the Department of Justice to oversee data privacy issues
  • allows the Attorney General to monitor compliance and investigate complaints

Ambiguity Notes: The oversight is focused on privacy and well-being rather than a technical audit of AI weights or model performance.

House - 970 - Preventing Algorithmic Rent Fixing.

Legislation ID: 163290

Bill URL: View Bill

Summary

House Bill 970 introduces measures to combat algorithmic rent fixing by prohibiting real estate lessors from using nonpublic competitor data to set rental prices. It establishes definitions related to pricing algorithms and unlawful coordination among lessors, and empowers the Attorney General to enforce these provisions as unfair trade practices.

Key Sections

Key Requirements

  • Allows aggrieved parties to bring legal action against violators.
  • Precludes the enforceability of pre-dispute arbitration agreements in cases related to violations of this Article.
  • Prohibits real estate lessors from subscribing to, contracting for, or exchanging anything of value for coordinating functions.
  • Prohibits service providers from facilitating anti-competitive agreements among lessors.

Sponsors

Legislative Actions

Date Action
2025-04-14 Passed 1st Reading
2025-04-14 Ref To Com On Rules, Calendar, and Operations of the House
2025-04-10 Filed

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically regulates 'pricing algorithms,' which are a core component of automated and AI-driven commercial decision-making systems.

Mechanism of Influence: It prohibits the use of nonpublic competitor data within these algorithms to prevent anti-competitive price coordination, effectively placing a constraint on the data inputs and functional outputs of automated pricing systems.

Evidence:

  • pricing algorithm
  • nonpublic data
  • coordinating functions

Ambiguity Notes: While the bill uses the term 'pricing algorithm' rather than 'artificial intelligence,' modern dynamic pricing software often utilizes machine learning and AI, making this a form of algorithmic oversight.
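The input constraint this analysis describes can be illustrated with a minimal sketch. The bill specifies an outcome, not an implementation, and every name, field, and source label below is invented for illustration: a compliant pricing pipeline would have to exclude nonpublic competitor data from the algorithm's inputs before any price is suggested.

```python
# Hypothetical illustration of the data-input constraint; nothing here
# comes from the bill text, and all field names are invented.

PUBLIC_SOURCES = {"listing_sites", "census", "own_portfolio"}

def filter_inputs(data_points):
    """Drop any input whose source is nonpublic competitor data."""
    return [d for d in data_points if d["source"] in PUBLIC_SOURCES]

def suggest_rent(data_points, base_rent):
    """Toy pricing rule: average the comparable rents that survive filtering."""
    usable = filter_inputs(data_points)
    comps = [d["rent"] for d in usable if "rent" in d]
    return sum(comps) / len(comps) if comps else base_rent

inputs = [
    {"source": "listing_sites", "rent": 1200},
    {"source": "competitor_feed", "rent": 1800},  # nonpublic -> excluded
    {"source": "own_portfolio", "rent": 1300},
]
print(suggest_rent(inputs, 1000))  # averages only the two public inputs
```

The design point is that compliance here is a property of the inputs, not the model: the same pricing logic becomes lawful or unlawful depending on what data is allowed to reach it.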

↑ Back to Table of Contents

Ohio

Index of Bills

House - 628 - License artificial intelligence risk mitigation organizations

Legislation ID: 232842

Bill URL: View Bill

Summary

House Bill No. 628 establishes a regulatory framework for independent verification organizations in Ohio, specifically targeting the verification of artificial intelligence applications and models. The bill defines key terms, outlines the licensing process, and sets forth the responsibilities and requirements for both the verification organizations and the developers or deployers of AI technologies. It also includes provisions for the establishment of an advisory council to oversee the implementation and effectiveness of the verification process.

Key Sections

Key Requirements

  • Allows for rebuttal of this presumption under specific circumstances of misconduct.
  • Council must include members representing civil society.
  • Demonstrates independence from the AI industry.
  • Establishes conditions under which a presumption against liability applies.
  • Includes requirements for aggregating information on AI capabilities and risks.
  • License may be revoked if the plan is misleading or if the organization fails to adhere to its own plan.
  • Mandates consideration of stakeholder input in rule adoption.
  • Mandates retention of documentation for ten years.
  • Members must remain free from conflicts of interest.
  • Must include measurable outcome metrics and evaluation protocols.
  • Organizations must monitor AI models for compliance with their verification plans.
  • Organizations must notify the attorney general of any material changes to their plans.
  • Plan must adequately ensure acceptable risk mitigation.
  • Requires a detailed plan for verifying AI risk mitigation.
  • Requires annual reporting to the general assembly, attorney general, and auditor of state.
  • Requires establishment of conflict of interest rules for verification organizations.
  • Specifies conditions for corrective action and loss of licensure.
  • Verification must be revoked if the developer fails to meet the organization's requirements.

Sponsors

Legislative Actions

Date Action
2025-12-11 Introduced

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation and auditing of artificial intelligence through a formal verification process.

Mechanism of Influence: It establishes a licensing regime for third-party organizations to audit AI models and applications for risk mitigation and compliance.

Evidence:

  • House Bill No. 628 establishes a regulatory framework for independent verification organizations in Ohio, specifically targeting the verification of artificial intelligence applications and models.
  • Independent verification organizations must implement their verification plans to ensure ongoing risk mitigation for AI applications.

Ambiguity Notes: The term 'independent verification' functions as a regulatory audit, though the specific technical standards for 'acceptable risk mitigation' are left to be defined by the Attorney General.

Analysis 2

Why Relevant: The legislation requires detailed disclosures regarding AI capabilities and potential harms.

Mechanism of Influence: IVOs are required to submit annual reports to the state government detailing AI capabilities, societal risks, and verification results.

Evidence:

  • Independent verification organizations must submit an annual report detailing AI capabilities, societal risks, verification results, compliance with remedial measures, and any changes affecting their independence.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill provides for government oversight and the creation of safety standards.

Mechanism of Influence: It creates an Artificial Intelligence Safety Advisory Council and empowers the Attorney General to adopt rules regarding AI risk mitigation and conflict of interest.

Evidence:

  • Establishes the artificial intelligence safety advisory council, which will assist the attorney general in overseeing the licensing and operation of independent verification organizations.
  • The attorney general is tasked with adopting rules to implement the provisions of this bill, focusing on conflict of interest, risk mitigation, and verification organization licensing.

Ambiguity Notes: None

↑ Back to Table of Contents

Oklahoma

Index of Bills

House - 3544 - Technology; artificial intelligence; chatbots; companions; minors; safety; civil penalties; effective date.

Legislation ID: 266871

Bill URL: View Bill

Summary

This bill establishes regulations for deployers of artificial intelligence (AI) chatbots, specifically those with human-like features. It mandates that such chatbots not be made available to minors, requires age verification systems, and allows for alternative versions of chatbots for younger users. Additionally, it outlines the responsibilities of deployers to prioritize user safety and well-being, as well as the penalties for non-compliance.

Key Sections

Key Requirements

  • Allows minors or their guardians to pursue civil action for damages related to non-compliance.
  • Establishes civil penalties of up to $2,500 for violations and $7,500 for intentional violations.
  • Mandates age verification systems to prevent minors from accessing these chatbots.
  • Mandates assessment and monitoring by licensed mental health professionals for therapy chatbots.
  • Mandates that information collected must be adequate, relevant, and necessary for legitimate purposes.
  • Requires clinical trial data demonstrating safety and efficacy of therapy chatbots.
  • Requires deployers to ensure chatbots with human-like features are not accessible to minors.
  • Requires deployers to implement systems for detecting and responding to emergency situations.
  • Requires therapy chatbots to provide clear disclaimers regarding their nature as AI.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Representative Maynard
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the user's interest in age verification for AI usage.

Mechanism of Influence: It mandates that deployers implement age verification systems to prevent minors from accessing chatbots with human-like features.

Evidence:

  • Mandates age verification systems to prevent minors from accessing these chatbots.
  • Deployers are required to ensure that AI chatbots with human-like features are not available to minors and must implement age certification systems.

Ambiguity Notes: None
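The routing outcome the bill requires can be sketched in a few lines. This is a hypothetical illustration of the mandated result, not the bill's mechanism: the function name, the tier labels, and the treatment of unverified users are all assumptions.

```python
# Hypothetical sketch of the access gate HB 3544 describes; the bill
# mandates outcomes, not code, and all names here are invented.

def select_chatbot(verified_age):
    """Route a user to a chatbot tier based on a verified age.

    None means verification failed or was skipped; treating that case
    like a minor is an assumption consistent with the bill's intent.
    """
    if verified_age is None or verified_age < 18:
        # Minors get the alternative version without human-like
        # features, which the bill explicitly permits.
        return "minor_safe_version"
    return "human_like_version"

print(select_chatbot(16))  # minor_safe_version
```

Note the fail-closed choice: an unverified user is routed the same way as a verified minor, which is the conservative reading of a statute that penalizes access by minors.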

Analysis 2

Why Relevant: The legislation includes disclosure requirements for specific AI applications.

Mechanism of Influence: It requires therapy chatbots to provide clear disclaimers to users regarding their nature as artificial intelligence rather than human professionals.

Evidence:

  • Requires therapy chatbots to provide clear disclaimers regarding their nature as AI.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill imposes regulatory oversight and safety standards on AI deployers.

Mechanism of Influence: It requires the implementation of emergency response systems and professional monitoring for specialized AI (therapy chatbots), aligning with the user's interest in AI regulation.

Evidence:

  • Requires assessment and monitoring by licensed mental health professionals for therapy chatbots.
  • Requires deployers to implement systems for detecting and responding to emergency situations.

Ambiguity Notes: None

House - 3545 - Technology; artificial intelligence; state agencies; prohibited uses; permitted uses; Office of Management and Enterprise Services; report; effective date.

Legislation ID: 268908

Bill URL: View Bill

Summary

This bill establishes definitions related to artificial intelligence, outlines prohibited and allowed uses of AI by state agencies, and mandates compliance reporting to the Office of Management and Enterprise Services (OMES). It seeks to protect individual rights while allowing beneficial uses of AI, with specific restrictions on certain applications.

Key Sections

Key Requirements

  • Agencies must review AI systems for compliance within nine months.
  • All new procedures related to AI must align with the act.
  • Applies to all state agency computer systems.
  • Defines AI as machine capabilities for cognitive tasks.
  • Defines Deepfake as altered media used maliciously.
  • Defines Generative AI as AI that creates media based on prompts.
  • Defines State agency broadly to include all state entities.
  • Excludes common personal consumer systems.
  • Excludes systems used in research by state-funded institutions.
  • Mandates disclosure for generative AI-produced material.
  • Newly deployed AI systems must comply with the act.
  • OMES must report on identified AI systems annually.
  • Procedures inconsistent with the act must be modified.
  • Prohibited AI systems must be removed.
  • Prohibits classification leading to discrimination.
  • Prohibits cognitive behavioral manipulation.
  • Prohibits malicious use of deepfakes.
  • Prohibits real-time biometric identification for surveillance, except for locating missing persons.
  • Report must include compliance status and updates on systems.
  • Reports to be posted on the OMES website.
  • Requires human review for irreversible AI decisions.
  • Requires user awareness when interacting with AI.
  • Specifies allowed uses related to rights limitations, biometric identification, critical infrastructure, law enforcement, and legal interpretation.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Representative Maynard
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the deployment and operational standards of AI systems within state government entities.

Mechanism of Influence: It mandates that state agencies review all existing AI systems for compliance, remove prohibited systems within nine months, and ensure all new deployments meet specific ethical and transparency standards.

Evidence:

  • Prohibits classification leading to discrimination.
  • Prohibits cognitive behavioral manipulation.
  • Prohibited AI systems must be removed.
  • Agencies must review AI systems for compliance within nine months.

Ambiguity Notes: The term 'cognitive behavioral manipulation' is broad and may require specific regulatory guidance to determine which types of user interface designs or algorithms fall under this prohibition.

Analysis 2

Why Relevant: The legislation requires specific disclosures and human oversight, aligning with the user's interest in AI transparency and auditing.

Mechanism of Influence: It forces agencies to disclose when material is produced by generative AI and requires human intervention for any AI-driven decisions that are irreversible.

Evidence:

  • Mandates disclosure for generative AI-produced material.
  • Requires human review for irreversible AI decisions.
  • Requires user awareness when interacting with AI.

Ambiguity Notes: The requirement for 'user awareness' does not specify the format or prominence of the notification required when a citizen interacts with an AI.

Analysis 3

Why Relevant: The bill establishes an oversight and reporting mechanism to track AI usage and compliance across the state government.

Mechanism of Influence: The Office of Management and Enterprise Services (OMES) is tasked with creating annual public reports detailing the AI systems in use and their compliance status, serving as a form of government audit.

Evidence:

  • This section directs OMES to report annually on the AI systems used by state agencies, detailing compliance with the act.
  • Report must include compliance status and updates on systems.
  • Reports to be posted on the OMES website.

Ambiguity Notes: While it mandates reporting on 'compliance status,' it is unclear what specific metrics or auditing standards OMES will use to verify an agency's self-reported compliance.

House - 3546 - Technology; personhood; artificial intelligence; effective date.

Legislation ID: 269130

Bill URL: View Bill

Summary

House Bill 3546 establishes a legal framework in Oklahoma that denies personhood status to artificial intelligence systems, environmental elements, nonhuman animals, and inanimate objects. The bill clarifies that it does not affect the personhood status of any legal entities that are already recognized under Oklahoma law as of November 1, 2026.

Key Sections

Key Requirements

  • Maintains the personhood status of existing legal entities.
  • Prohibits the granting of personhood to specific non-human entities.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Representative Maynard
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the legal classification of artificial intelligence systems within the state's jurisdiction.

Mechanism of Influence: By prohibiting the granting of personhood, the law ensures that AI systems cannot exercise legal rights, own property, or be held liable in the same manner as a natural person or a corporation, thereby setting a foundational regulatory boundary for AI governance.

Evidence:

  • artificial intelligence systems, environmental elements, nonhuman animals, and inanimate objects shall not be granted personhood status under Oklahoma law

Ambiguity Notes: The bill does not provide a specific technical definition for 'artificial intelligence systems', which may lead to broad interpretation regarding which software or automated processes fall under this prohibition.

House - 3675 - Health insurance; review agents; artificial intelligence system; adverse determinations; effective date.

Legislation ID: 268290

Bill URL: View Bill

Summary

This bill establishes regulations on the use of automated systems in making adverse determinations related to health care services. It requires that any adverse determination made by such systems must be reviewed by a qualified human professional prior to finalization. Additionally, the bill grants auditing authority to the Insurance Commissioner and mandates that notice of adverse determinations includes specific information related to the decision-making process and appeals.

Key Sections

Key Requirements

  • Defines key terms related to health insurance and automated decision-making systems.
  • Grants auditing authority to the Insurance Commissioner.
  • Requires notification of adverse determinations to include principal reasons, clinical basis, screening criteria, and appeal procedures.
  • Requires review of adverse determinations by a qualified human professional.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Representative Provenzano
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the application of artificial intelligence and automated decision systems in the health care sector.

Mechanism of Influence: It imposes a 'human-in-the-loop' requirement, preventing AI from making final adverse determinations without human verification.

Evidence:

  • This provision mandates that any adverse determination made by an automated decision system must be reviewed by a qualified human professional before it is finalized.

Ambiguity Notes: The term 'qualified human professional' may require further regulatory clarification to determine the specific level of expertise required for different types of medical reviews.
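The human-in-the-loop requirement described above amounts to a gate on finalization. As a minimal sketch under stated assumptions (the bill specifies the requirement, not an implementation; the data structure and error handling below are invented), a compliant system would simply refuse to finalize an adverse determination that lacks a human reviewer.

```python
# Hypothetical sketch of the human-in-the-loop gate in HB 3675;
# all names and structures are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Determination:
    claim_id: str
    adverse: bool               # automated system recommends denial
    reviewed_by: Optional[str]  # qualified human professional, if any

def finalize(det: Determination) -> str:
    """Refuse to finalize an adverse determination without human review."""
    if det.adverse and det.reviewed_by is None:
        raise ValueError(f"{det.claim_id}: adverse determination "
                         "requires review by a qualified professional")
    return f"{det.claim_id}: finalized"

finalize(Determination("C-101", adverse=True, reviewed_by="Dr. Rivera"))
```

Favorable determinations pass through untouched; only the adverse path is blocked pending sign-off, which mirrors the bill's asymmetric treatment of denials.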

Analysis 2

Why Relevant: The legislation establishes a mechanism for government oversight and auditing of AI systems.

Mechanism of Influence: It empowers the Insurance Commissioner to conduct audits on how utilization review agents employ automated systems.

Evidence:

  • It also allows the Insurance Commissioner to audit the use of these systems by utilization review agents.
  • Grants auditing authority to the Insurance Commissioner.

Ambiguity Notes: The bill does not specify the frequency or the technical standards of the audits to be performed.

Analysis 3

Why Relevant: The bill requires disclosures related to the logic and criteria used by automated systems in decision-making.

Mechanism of Influence: By requiring the disclosure of 'screening criteria' and 'clinical basis' in notices, it forces transparency regarding the underlying logic of the automated system.

Evidence:

  • This provision specifies the information that must be included in the notice of an adverse determination, such as the reasons for the decision, the clinical basis, the screening criteria used

Ambiguity Notes: While it requires disclosure of criteria, it does not explicitly mandate a statement that an AI was the primary source of the initial determination in the notice itself.

House - 3959 - Technology; Protecting Consumers and Jobs from Predatory Pricing Act; personalized algorithmic pricing; consumer data; food retailers; effective date.

Legislation ID: 266392

Bill URL: View Bill

Summary

This bill, known as the Protecting Consumers and Jobs from Predatory Pricing Act, establishes regulations for food retail establishments regarding algorithmic pricing. It mandates disclosures to consumers when personalized pricing is used, prohibits the use of electronic shelving labels for such pricing, and restricts data collection practices, particularly concerning minors and protected class data. The bill also outlines enforcement mechanisms and civil penalties for violations.

Key Sections

Key Requirements

  • Establishes civil penalties for violations.
  • Exempts financial services and licensed insurers from the act.
  • Permits the Attorney General to enforce the act.
  • Prohibits collection of data from minors under 17 for pricing purposes.
  • Prohibits personalized algorithmic pricing and surveillance pricing.
  • Prohibits the use of electronic shelving labels by large food retailers.
  • Prohibits use of protected class data in setting prices.
  • Requires clear and conspicuous disclosure of algorithmic pricing to consumers.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Representative Munson
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically targets algorithmic pricing and surveillance pricing, which are applications of artificial intelligence and automated decision-making systems.

Mechanism of Influence: It imposes mandatory disclosure requirements for algorithmic pricing, restricts the data inputs (such as minor data) used by these algorithms, and prohibits specific AI-driven pricing practices in food retail.

Evidence:

  • Food retail establishments must disclose when prices are set using personalized algorithmic pricing based on consumer data
  • The act restricts the collection of data from minors for targeted advertising or personalized pricing
  • Prohibits personalized algorithmic pricing and surveillance pricing.

Ambiguity Notes: While the bill defines 'algorithm', its regulatory scope is limited to food retail establishments rather than general-purpose AI applications.

House - 4083 - Technology; deployers; AI chatbots; minors; age verification systems; emergency situations; effective date.

Legislation ID: 269132

Bill URL: View Bill

Summary

House Bill 4083 introduces regulations for AI chatbots in Oklahoma, focusing on preventing minors from accessing chatbots with human-like features. It mandates deployers to implement age verification systems, restricts access to social AI companions for minors, and outlines conditions under which therapeutic chatbots can be used by minors. The bill also establishes legal consequences for violations and emphasizes the need for safety measures in emergency situations.

Key Sections

Key Requirements

  • Allows for alternative chatbot versions for minors without human-like features.
  • Allows minors or their guardians to sue for damages related to violations.
  • Ensures transparency regarding the chatbot's functions and data privacy policies.
  • Establishes civil penalties for violations of the act, up to $2,500 for each violation and $7,500 for intentional violations.
  • Mandates assessment and monitoring by licensed mental health professionals.
  • Mandates prioritization of user safety in emergency responses.
  • Mandates the implementation of age verification systems for chatbots with human-like features.
  • Prohibits deployers from allowing minors to use social AI companions.
  • Prohibits marketing therapeutic chatbots as substitutes for human professionals.
  • Requires a clear disclaimer that the chatbot is AI and not a licensed professional.
  • Requires clinical trial data demonstrating safety and efficacy of the tool.
  • Requires collection of only adequate, relevant, and necessary information for legitimate purposes.
  • Requires deployers to implement age verification systems for social AI companions.
  • Requires deployers to prevent minors from accessing chatbots with human-like features.
  • Requires effective systems to detect and respond to emergency situations.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Representative Alonso-Sandoval
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mandates age verification for specific types of AI interactions.

Mechanism of Influence: It requires deployers to implement systems that prevent minors from accessing chatbots with human-like features or social AI companions unless age is verified.

Evidence:

  • Mandates the implementation of age verification systems for chatbots with human-like features.
  • Requires deployers to implement age verification systems for social AI companions.

Ambiguity Notes: The term 'human-like feature' is defined in the bill but its practical application to UI/UX design may vary.

Analysis 2

Why Relevant: The bill requires transparency and disclosures regarding the nature of the AI.

Mechanism of Influence: Therapeutic chatbots must clearly state they are AI and not licensed professionals to avoid misleading users.

Evidence:

  • Requires a clear disclaimer that the chatbot is AI and not a licensed professional.
  • Ensures transparency regarding the chatbot's functions and data privacy policies.

Ambiguity Notes: The specific wording of the disclaimer is not provided, only the requirement for one.

Analysis 3

Why Relevant: The bill imposes safety and efficacy requirements similar to audits for high-risk AI applications.

Mechanism of Influence: Deployers of therapeutic chatbots must provide clinical trial data to prove the tool is safe and effective before use by minors.

Evidence:

  • Requires clinical trial data demonstrating safety and efficacy of the tool.

Ambiguity Notes: It is unclear what standard of 'clinical trial data' is required or which agency reviews it.

Senate - 1627 - Crimes and punishments; amending, merging, consolidating, and repealing multiple versions of statutes. Emergency.

Legislation ID: 267031

Bill URL: View Bill

Summary

Senate Bill 1627 addresses the need for clarity in Oklahoma's legislative framework by consolidating various versions of statutes. It seeks to amend specific sections of the Oklahoma Statutes and repeal outdated or redundant provisions, thereby enhancing the efficiency and accessibility of state laws.

Key Sections

Key Requirements

  • Affirms that a defendant's status as a victim of human trafficking at the time of the alleged crime can be raised as a defense.
  • Allows courts to mandate the removal of unlawful images by the defendant.
  • Allows for conditional carrying of firearms in designated properties under specific regulations.
  • Allows suspension of handgun licenses for three months for violators.
  • Class A2 felony for trafficking, with a minimum of five years imprisonment or life, and fines up to $100,000.
  • Classifies assault against law enforcement officers as a Class A felony with severe penalties.
  • Classifies child endangerment as a Class B felony with specific penalties.
  • Classifies violations as misdemeanors or felonies based on the nature of the offense.
  • Convicted individuals must serve 85% of their sentence before parole eligibility.
  • Court mandates restitution to victims.
  • Defendant must pay for psychological evaluations and counseling for victims.
  • Defendants must reimburse the agency for the evaluation.
  • Defines child abuse as willful or malicious harm to a child under 18.
  • Defines child neglect as willful or malicious neglect of a child under 18.
  • Defines child sexual abuse and child sexual exploitation with specific examples.
  • Defines coercion, commercial sex, debt bondage, and human trafficking for legal clarity.
  • Defines course of conduct to include various forms of contact and actions directed at the victim.
  • Defines emotional distress as significant mental suffering that may not require professional treatment.
  • Defines enabling child abuse and enabling child neglect as allowing or permitting such acts to occur.
  • Defines human trafficking for labor and human trafficking for commercial sex.
  • Defines legal process as encompassing various legal systems and actions.
  • Defines minor as individuals under eighteen years of age.
  • Defines sexual battery and outlines specific conditions under which it is considered an offense, including consent and the relationship between the offender and victim.
  • Defines stalking and establishes penalties that increase with subsequent offenses.
  • Defines unconsented contact to include specific actions such as following, confronting, or contacting the individual without consent.
  • Defines victim as individuals against whom trafficking violations occur.
  • Defines what constitutes a conviction under this section.
  • Engaging in human trafficking is declared unlawful.
  • Establish a statewide hotline for reporting child abuse or neglect.
  • Establishes enhanced penalties for individuals who repeatedly violate the dissemination laws.
  • Establishes felony charges for prolonged knowledge of abuse without reporting.
  • Establishes fines for violations of firearm carrying regulations, with specific amounts indicated for different offenses.
  • Establishes prison terms and fines for offenders.
  • Exempts interactive computer services from liability for user-generated content.
  • Exempts telecommunications providers from liability for content shared over their networks.
  • Fines up to $1,000.
  • Fines up to $2,500.
  • Healthcare professionals must report cases of infants testing positive for substances.
  • Hearing must be held following notification of a violation.
  • Imposes a fine of $250 for violations.
  • Imposes misdemeanor charges for failing to report suspected abuse or neglect.
  • Imposes penalties for willfully attempting to elude law enforcement, with increased penalties for endangering others or causing injury.
  • Imprisonment for 10 days to 1 year for first offenses.
  • Imprisonment of 1 to 5 years for second offenses.
  • Increased imprisonment and fines for aggravated DUI offenses.
  • Increased penalties for trafficking minors, with a minimum of fifteen years imprisonment or life without parole, and fines up to $250,000.
  • Increases fines for DUI violations involving minors.
  • Increases penalties for stalking violations when protective orders are in effect.
  • Lack of knowledge of a victim's age is not a defense in cases involving minors.
  • Lists specific exemptions for peace officers, judges, and authorized personnel.
  • Mandates a $100 assessment fee to be deposited in the Drug Abuse Education and Treatment Revolving Fund.
  • Mandates a fine based on specific subsections of the law.
  • Mandates attendance at a victims impact panel program if available.
  • Mandates imprisonment and fines based on the number of stalking convictions.
  • Mandates post-imprisonment supervision for offenders sentenced to two years or more.
  • Mandates that individuals convicted of sexual battery must serve time in the custody of the Department of Corrections.
  • Mandates the installation of an ignition interlock device for at least 180 days.
  • Mandates use of ignition interlock devices for repeat offenders.
  • Permits testimony on impairment by trained witnesses.
  • Prohibits any lewd or lascivious acts against human corpses.
  • Prohibits carrying firearms in government-owned buildings, schools, courthouses, and certain public venues.
  • Prohibits colleges and universities from enacting rules against lawful firearm possession.
  • Prohibits payment of fines in lieu of community service.
  • Prohibits the dissemination of private sexual images without consent; defines penalties for violations.
  • Prohibits the possession, distribution, or production of child pornography; imposes a maximum penalty of 20 years imprisonment for violations.
  • Provide hotline-specific training for staff, including interviewing and customer service skills.
  • Requires a 30-day increase in suspension or deferral for each subsequent conviction after the second offense.
  • Requires all individuals who suspect child abuse to report it to the hotline.
  • Requires assessment and evaluation for convicted individuals.
  • Requires colleges, universities, or technology centers to notify the Bureau within ten days of a violation.
  • Requires consent from the depicted individuals unless exceptions apply.
  • Requires electronic monitoring for felony convictions.
  • Requires individuals associated with criminal street gangs to face felony charges for gang-related offenses.
  • Requires individuals to demonstrate reasonable fear of imminent peril to justify firearm discharge.
  • Requires lawful purpose for dissemination of sexual images or depictions.
  • Requires life imprisonment or life without parole for repeat offenders of specified sexual offenses.
  • Requires one year of supervision and periodic testing.
  • Requires participation in a certified evaluation program.
  • Requires payment of a $75 fee if the defendant has the ability to pay.
  • Requires that a pattern of criminal offenses must involve two or more offenses committed as part of the same plan or within a 30-day interval.
  • School employees must report suspected abuse of students to both the Department and local law enforcement.
  • Sentences are not subject to suspension, deferral, or probation.
  • Sets penalties for making false reports, including potential fines.
  • Specifies that trafficking can occur through deception, force, fraud, threat, or coercion.
  • Testing is to be conducted as per Oklahoma Statutes.
  • Track the number of calls received and categorize them based on outcomes.
  • A victim's consent is not a valid defense against trafficking charges.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Senator Paxton
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically addresses the regulation of AI-generated content by criminalizing the nonconsensual dissemination of 'artificially generated sexual depictions.'

Mechanism of Influence: It establishes legal penalties (misdemeanors or felonies) for individuals who intentionally share sexual images created via artificial intelligence (deepfakes) without the subject's consent.

Evidence:

  • including artificially generated sexual depictions, and establishes penalties for offenders
  • This provision outlines the unlawful dissemination of images or sexual depictions without consent

Ambiguity Notes: The term 'artificially generated' is used broadly and may encompass various technologies beyond modern generative AI, such as traditional CGI, though it is clearly intended to capture AI-driven deepfakes.

Senate - 1734 - Schools; creating the Oklahoma Responsible Technology in Schools Act; requiring development of guidance for use of artificial intelligence and emerging technologies. Effective date. Emergency.

Legislation ID: 267726

Bill URL: View Bill

Summary

This legislation establishes the Oklahoma Responsible Technology in Schools Act, which provides guidelines for the responsible use of artificial intelligence in public education. It seeks to maintain educator oversight in the use of AI tools, protect student privacy, and ensure transparency in educational practices involving technology.

Key Sections

Key Requirements

  • AI cannot be the primary basis for high-stakes educational decisions.
  • AI tools must be age-appropriate and used for defined educational purposes.
  • AI tools must be used through educator-directed AI use.
  • AI tools must operate with human oversight.
  • Compliance with student data privacy laws is required.
  • Decisions informed by AI must remain with school employees.
  • Must address data protection and transparency.
  • Policies must identify responsible personnel and outline appropriate uses.
  • Policies should comply with established guidance.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Senator Seifried
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the deployment and operational constraints of AI within the public school system.

Mechanism of Influence: It imposes a 'human-in-the-loop' requirement and prevents automated systems from making high-stakes decisions without educator intervention.

Evidence:

  • AI cannot be the primary basis for high-stakes educational decisions.
  • AI tools must operate with human oversight.
  • Prohibits the use of AI tools for instructional purposes in public schools unless under educator supervision.

Ambiguity Notes: The term 'high-stakes educational decisions' is not explicitly defined in the summary, which could lead to varying interpretations across districts regarding what constitutes a high-stakes decision.

Analysis 2

Why Relevant: The act addresses transparency and data protection requirements for AI usage, aligning with disclosure-related regulatory interests.

Mechanism of Influence: School districts are mandated to adopt policies that specifically address transparency and identify personnel responsible for AI oversight.

Evidence:

  • Mandates that each school district adopt a policy on AI use
  • Must address data protection and transparency.
  • Policies must identify responsible personnel and outline appropriate uses.

Ambiguity Notes: The specific standards for 'transparency' are left to the State Department of Education and local boards to define in their guidance and policies.

Analysis 3

Why Relevant: The legislation touches upon age-related constraints for AI tools used by minors.

Mechanism of Influence: It requires that AI tools used in schools be 'age-appropriate,' which necessitates a vetting process to ensure tools match the developmental stage of the students.

Evidence:

  • AI tools must be age-appropriate and used for defined educational purposes.

Ambiguity Notes: While it mentions age-appropriateness, it does not explicitly detail a technical 'age verification' mechanism like those found in commercial age-gating regulations.

Senate - 1785 - State government; creating the Citizens Bill of Rights. Emergency.

Legislation ID: 268970

Bill URL: View Bill

Summary

Senate Bill 1785 introduces the Citizens Bill of Rights, which restricts government and business entities from imposing certain actions on citizens. It guarantees rights related to the use of gold and silver, prohibits digital identification requirements, bans social credit scores, and protects personal freedoms regarding medical decisions, energy usage, and agriculture. The bill also addresses the implications of artificial intelligence, ensuring that it is not used to discriminate or infringe on citizens' rights. Violations of this act may result in legal consequences.

Key Sections

Key Requirements

  • Citizens cannot be penalized for refusing medical procedures.
  • Citizens can use gold and silver for transactions.
  • Defines citizen as a resident of Oklahoma and a citizen of the USA.
  • Defines government as any level of state or federal government.
  • Entities found guilty of violations must pay legal fees and restitution.
  • No AI determining life or medical care decisions.
  • No mandatory digital identification for transactions or employment.
  • No restrictions on personal gardening or rainwater collection.
  • No tracking or grading of citizens based on habits or political views.
  • No tracking or penalizing for energy choices.
  • No use of AI to replace human workers without compensation.
  • Prohibits carbon credit systems.
  • Prohibits monitoring or controlling citizens purchasing habits through digital currencies.
  • Prohibits taking gold or silver from citizens without consent.
  • Requires viable alternatives to digital currencies for transactions.

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Senator Jett
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill contains specific prohibitions on the application of artificial intelligence in critical sectors such as healthcare and employment.

Mechanism of Influence: It creates legal liability for entities that use AI to make life or medical care decisions and mandates compensation if AI is used to replace human labor. It also prevents the use of AI for discriminatory practices.

Evidence:

  • Prohibits the use of AI for discriminatory practices and certain decision-making processes.
  • No AI determining life or medical care decisions.
  • No use of AI to replace human workers without compensation.

Ambiguity Notes: The provision regarding the replacement of human workers 'without compensation' is broad and does not specify the form of compensation, the duration, or who the recipient must be (e.g., the displaced worker or a state fund).

Senate - 2038 - Health Insurance; prohibiting issue of outcomes with AI; requiring decisions to be made by provider; requiring disclosures. Emergency.

Legislation ID: 268350

Bill URL: View Bill

Summary

Senate Bill 2038 seeks to establish guidelines for health insurance issuers regarding the use of artificial intelligence (AI) in making decisions about health insurance coverage. It prohibits the issuance of adverse consumer outcomes by AI systems, mandates that licensed professionals must make final decisions on such outcomes, and requires health insurance issuers to disclose the involvement of human professionals in decision-making processes. The bill also empowers the Insurance Commissioner to investigate the use of AI by insurers and imposes penalties for violations.

Key Sections

Key Requirements

  • All final decisions on medical necessity must be made by licensed healthcare providers.
  • Allows for the promulgation of rules by the Commissioner to enforce the act.
  • Commissioner may investigate AI systems used by insurers.
  • Defines terms relevant to the regulation of AI in health insurance.
  • Establishes criteria for what constitutes an adverse consumer outcome.
  • Insurers may be fined up to $10,000 for each violation of the act.
  • Insurers must consult with the claimant’s provider on medical necessity before making final decisions.
  • Mandates disclosure to claimants that decisions were made by professionals.
  • Prohibits insurers from making decisions on claims based on AI without human review.
  • Requires licensed professionals to issue final adverse consumer outcomes.
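
The decision gate SB 2038 describes — no adverse outcome issued by an AI system alone, and consultation with the claimant's provider before a medical-necessity denial — reduces to a simple pre-issuance check. The sketch below is illustrative only; all names are hypothetical, since the bill prescribes the rule, not an implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimDecision:
    outcome: str                     # "approved" or "adverse"
    reviewer_license: Optional[str]  # license ID of the issuing professional, if any
    provider_consulted: bool         # claimant's provider consulted on medical necessity

def may_issue(d: ClaimDecision) -> bool:
    """Gate a final decision: an adverse outcome requires a licensed
    professional and prior consultation with the claimant's provider."""
    if d.outcome != "adverse":
        return True                  # non-adverse outcomes are unrestricted here
    if d.reviewer_license is None:
        return False                 # an AI system alone may not issue an adverse outcome
    return d.provider_consulted      # the provider must have been consulted first

assert not may_issue(ClaimDecision("adverse", None, False))
assert not may_issue(ClaimDecision("adverse", "OK-12345", False))
assert may_issue(ClaimDecision("adverse", "OK-12345", True))
```

A real compliance system would also have to log the reviewer's identity to satisfy the bill's disclosure mandate; the gate above captures only the prohibition itself.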

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Senator Goodwin
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the application of AI in the health insurance sector by restricting its autonomy in decision-making.

Mechanism of Influence: It creates a legal barrier against fully automated adverse outcomes, forcing insurers to integrate human review into any AI-driven workflow.

Evidence:

  • Prohibits health insurance issuers from issuing adverse consumer outcomes based on decisions made by AI systems.
  • All such decisions must be made and issued by a licensed professional.

Ambiguity Notes: The effectiveness depends on the specific definitions of 'AI system' and 'Artificial Intelligence' provided in the bill's text.

Analysis 2

Why Relevant: The legislation includes a disclosure mandate regarding the decision-making process.

Mechanism of Influence: Insurers must inform claimants that a human professional, rather than just an algorithm, was responsible for the final decision, ensuring transparency.

Evidence:

  • Mandates disclosure to claimants that decisions were made by professionals.

Ambiguity Notes: It is unclear if the disclosure must explicitly state that AI was used in the preliminary stages or only that a human made the final call.

Analysis 3

Why Relevant: The bill establishes government oversight and investigative authority over AI usage.

Mechanism of Influence: By granting the Insurance Commissioner the power to review AI systems, it creates a mechanism for auditing the logic and compliance of insurance algorithms.

Evidence:

  • grants the Insurance Commissioner the authority to investigate and review the use of AI systems by health insurance issuers to ensure compliance with the act.

Ambiguity Notes: The scope of the 'investigation' is broad, potentially allowing for technical audits of AI models or merely procedural reviews.

Senate - 2085 - Artificial intelligence; establishing certain rights; prohibiting certain actions by certain entities; requiring certain actions by certain entities. Effective date.

Legislation ID: 269139

Bill URL: View Bill

Summary

Senate Bill 2085 introduces comprehensive regulations regarding artificial intelligence technology in Oklahoma. It defines key terms, prohibits state entities from contracting with foreign adversaries, and establishes rights for individuals concerning AI use. The bill also includes specific provisions to protect minors from inappropriate interactions with AI chatbots and mandates transparency from AI companies regarding data use and user interactions.

Key Sections

Key Requirements

  • Mandates platforms to notify parents about interactions and potential self-harm indications.
  • Requires an affidavit from AI companies affirming they do not meet criteria of being foreign adversaries.
  • Requires parental consent for minors to have accounts.
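
The account-gating and parental-notification duties above can be sketched as two small checks. All function and field names here are hypothetical; SB 2085 states the obligations without prescribing an implementation:

```python
from typing import Optional

def can_open_account(age: int, parental_consent: bool) -> bool:
    # Minors may not maintain companion-chatbot accounts without parental consent.
    return age >= 18 or parental_consent

def parent_alert(flags: set, parent_contact: Optional[str]) -> Optional[str]:
    # Notify the parent on file when an interaction indicates potential self-harm.
    if "self_harm" in flags and parent_contact:
        return f"alert:{parent_contact}"
    return None

assert not can_open_account(15, parental_consent=False)
assert can_open_account(15, parental_consent=True)
assert parent_alert({"self_harm"}, "parent@example.com") == "alert:parent@example.com"
assert parent_alert(set(), "parent@example.com") is None
```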

Sponsors

Legislative Actions

Date Action
2026-02-02 Authored by Senator Hamilton
2026-02-02 First Reading

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses AI disclosure and transparency obligations.

Mechanism of Influence: It establishes a legal right for Oklahomans to be informed when they are interacting with an artificial intelligence system rather than a human.

Evidence:

  • the right to know if they are interacting with AI

Ambiguity Notes: The bill mentions the 'right to know' but the specific format or timing of the disclosure (e.g., a watermark, a text disclaimer) may be subject to rules created by the Attorney General.

Analysis 2

Why Relevant: The legislation includes specific age-related restrictions and verification requirements for AI usage.

Mechanism of Influence: It requires companion chatbot platforms to implement parental consent mechanisms and oversight tools before allowing minors to maintain accounts.

Evidence:

  • Companion chatbot platforms must ensure that minors cannot maintain accounts without parental consent
  • Requires parental consent for minors to have accounts.

Ambiguity Notes: The bill does not explicitly define the technical method for age verification, leaving the implementation details to the platforms or future AG rulemaking.

Analysis 3

Why Relevant: The bill introduces oversight and regulatory restrictions on AI companies based on ownership and control.

Mechanism of Influence: It mandates that AI companies provide affidavits regarding their ownership to ensure they are not controlled by foreign adversaries before contracting with the state.

Evidence:

  • State governmental entities are prohibited from entering into contracts with AI technology companies owned or controlled by foreign adversaries
  • Requires an affidavit from AI companies affirming they do not meet criteria of being foreign adversaries.

Ambiguity Notes: The criteria for 'foreign adversaries' likely relies on external state or federal lists which may fluctuate.

↑ Back to Table of Contents

South Carolina

Index of Bills

House - 3431 - Social media; provide companies may not permit certain minors to be account holders; provide requirements, enforcement, restrictions, reporting and other provisions

Legislation ID: 196757

Bill URL: View Bill

Summary

Bill 3431 aims to amend the South Carolina Code of Laws by introducing new regulations for social media companies that cater to minors. It establishes definitions, outlines requirements for protecting minors' personal data, restricts access during certain hours, and mandates parental controls. The bill also addresses consumer complaints and provides for enforcement mechanisms, ensuring that social media platforms prioritize the safety and well-being of minor users.

Key Sections

Key Requirements

  • Allow parents to view and limit the time minors spend on the service.
  • Allows individuals to file lawsuits for violations.
  • Allows individuals to sue for violations of the chapter.
  • Attorney General enforces the bill's provisions.
  • Attorney General enforces the provisions.
  • Bans targeted advertising directed at minors.
  • Collect only the minimum amount of personal data necessary for the service.
  • Consumer Services Division has enforcement authority.
  • Consumers may file complaints regarding violations.
  • Covered online services are liable for treble damages for violations.
  • Covered online services must submit annual reports on compliance.
  • Declares any waivers of rights under this chapter as void.
  • Default settings must provide maximum safety for minors.
  • Defines a child as under 13 years old and a minor as under 18.
  • Defines harm and limits liability according to federal law.
  • Delete personal data collected for age verification after use.
  • Do not facilitate targeted advertising to minors.
  • Do not use dark patterns in service design.
  • Do not use minors' personal data for purposes other than those for which it was collected.
  • Empowers the Attorney General to enforce the chapter.
  • Enable parents to restrict minors' purchases and transactions.
  • Ensure independent auditors have full access to necessary information.
  • Ensures that the act remains valid even if parts are struck down.
  • Establishes a mechanism for consumer complaints related to social media companies' practices.
  • Establishes criteria for covered online services based on revenue and data practices.
  • Establishes default opt-out settings for minors.
  • Establishes exclusive enforcement rights for the Attorney General.
  • Establishes liability for financial damages due to violations.
  • Establishes that this act does not limit other laws and prioritizes those that protect minors.
  • Establish reporting mechanisms for parents, minors, and schools to report harm.
  • Grants the Attorney General's Office the authority to investigate complaints.
  • If any section is held unconstitutional, it does not affect the validity of the remaining provisions.
  • If any section is held unconstitutional, remaining sections remain effective.
  • In case of conflict, the law offering greater protection prevails.
  • In case of conflict, the law providing the greatest protection to minors prevails.
  • Include details on data collection, privacy protections, and design safety for minors in the report.
  • Includes definitions related to online functionalities that may affect minors.
  • Liability for treble damages for violations.
  • Mandates a duty of care in the design and operation of online services used by minors.
  • Mandates protection against identity theft and discrimination based on personal characteristics.
  • Mandates reasonable care in the design and operation of services used by minors.
  • Mandates that companies provide specific information to parents or guardians.
  • Mandates that parental tools be enabled by default for child accounts.
  • Mandates the development of online safety education programs for students.
  • Mandates the prevention of compulsive usage, psychological harm, emotional distress, privacy intrusions, identity theft, discrimination, and physical injury to minors.
  • Mandates the use of technology to enforce age restrictions.
  • May not facilitate targeted advertising to minors.
  • Must allow minors to opt-out of certain design features.
  • Must allow minors to opt-out of certain features and set time limits on usage.
  • Must allow parents to manage child account settings and restrict financial transactions.
  • Must allow parents to manage a child's account settings.
  • Must allow parents to view time spent on the service and set usage limits.
  • Must allow reporting of harm to minors.
  • Must allow users to disable design features that encourage compulsive use.
  • Must allow users to limit their time spent on the service.
  • Must allow users to restrict visibility of their accounts and control interactions with others.
  • Must collect only the minimum amount of personal data necessary.
  • Must describe personalized recommendation systems in clear and understandable terms.
  • Must enable parents to restrict the child's purchases and usage times.
  • Must ensure privacy expectations are respected.
  • Must ensure reasonable privacy expectations are maintained.
  • Must establish mechanisms for reporting harms to minors.
  • Must inform parents about their child's account status and usage.
  • Must inform parents about their child's data usage.
  • Must issue an annual public report detailing practices related to minors.
  • Must issue a public report detailing practices affecting minors, including data handling and safety measures.
  • Must limit access for minors during designated hours.
  • Must limit data collection to the minimum necessary.
  • Must not collect precise geolocation information by default.
  • Must not facilitate targeted advertising to minors.
  • Must notify minors when parental tools are in effect.
  • Must notify minors when they are being monitored by parents.
  • Must not use personal data for purposes other than those for which it was collected.
  • Must offer tools to disable unnecessary design features and limit time spent on the service.
  • Must only collect the minimum necessary personal data from minors.
  • Must prevent compulsive usage and severe psychological harm to minors.
  • Must prevent compulsive usage of the service.
  • Must protect minors from severe psychological harm.
  • Must protect minors from severe psychological harm, emotional distress, and identity theft.
  • Must protect minors' privacy and prevent identity theft.
  • Must provide clear information on privacy protections and parental tools.
  • Must provide information on how minors and parents can control these systems.
  • Must provide notice when geolocation data is being collected.
  • Must provide notice when precise geolocation information is collected.
  • Must provide options to manage time spent on the platform.
  • Must provide parents tools to manage minors' account settings and restrict purchases.
  • Must provide parents with tools to manage their child's account.
  • Must provide parents with tools to manage their child's account and restrict transactions.
  • Must provide tools for minors to limit communication and view personal data.
  • Must provide tools for minors to limit communication and visibility of their personal data.
  • Must provide tools for parents to manage a child's account settings and restrict purchases.
  • Must provide tools to disable non-essential design features.
  • Must provide tools to limit communication with minors.
  • Must submit an annual report to the relevant authorities.
  • Notify minors when parental controls are in effect.
  • Obtain obvious notice when collecting precise geolocation information of minors.
  • Obvious notice must be provided when geolocation data is collected.
  • Officers and employees can be personally liable for willful violations.
  • Only collects minimum necessary personal data from minors.
  • Only collect the minimum necessary personal data from minors.
  • Only collect the minimum personal data necessary for service provision.
  • Only minimal personal data necessary for service provision can be collected.
  • Parental tools must enable monitoring of time spent on the service and limit usage.
  • Parents must have tools to manage their child's account and restrict transactions.
  • Precise geolocation data collection is restricted and requires user notification.
  • Precise geolocation information cannot be collected by default unless necessary for service provision.
  • Profiling of minors is prohibited unless appropriate safeguards are demonstrated or necessary for service engagement.
  • Prohibit ads directed to minors for prohibited products like tobacco and alcohol.
  • Prohibits ads directed at minors for prohibited products.
  • Prohibits ads for products illegal for minors and the use of dark patterns.
  • Prohibits ads for products like drugs and alcohol to minors.
  • Prohibits adults from direct messaging minor account holders unless already connected.
  • Prohibits adults from messaging minors unless already connected.
  • Prohibits advertising to minors for products like drugs, tobacco, gambling, and alcohol.
  • Prohibits collection of personal data beyond what is necessary.
  • Prohibits minors from being account holders without parental consent.
  • Prohibits minors from having accounts without parental consent.
  • Prohibits notifications and push alerts to minors between 10 PM and 6 AM.
  • Prohibits notifications during school hours (8 AM to 3 PM) from August to May.
  • Prohibits notifications to minors between 10 PM and 6 AM and during school hours from August to May.
  • Prohibits profiling of minors without appropriate safeguards or necessity for service provision.
  • Prohibits targeted advertising based on minors' personal information.
  • Prohibits targeted advertising directed at minors.
  • Prohibits targeted advertising to minors.
  • Prohibits the use of dark patterns that obscure user choices.
  • Prohibits the use of minors' data for purposes other than originally collected.
  • Prohibit targeted advertising to minors.
  • Provide accessible tools for parents to manage minors' account settings.
  • Provides parents tools to manage minors' account settings and restrict purchases.
  • Report must include data on minor users and compliance with safety measures.
  • Requirements of this act are in addition to other laws protecting minors.
  • Requires an annual public report on practices related to minors, submitted to the Attorney General.
  • Requires annual compliance reports from social media companies.
  • Requires annual public reports by independent auditors.
  • Requires clear disclosure of personalized recommendation systems.
  • Requires companies to ensure minors cannot bypass age verification.
  • Requires consent for the collection of sensitive personal data.
  • Requires covered online services to exercise reasonable care in handling minors personal data.
  • Requires covered online services to provide an opt-out option for personalized recommendations for all users.
  • Requires default high-level protection settings for minors.
  • Requires deletion of age verification data after use.
  • Requires filtering of harmful content for minors.
  • Requires mechanisms for reporting harms to minors.
  • Requires notification to minors when parental controls are in effect.
  • Requires notification to minors when parental monitoring is in effect.
  • Requires obvious notice to minors when precise geolocation information is collected or used.
  • Requires obvious notice when collecting precise geolocation data.
  • Requires obvious notice when geolocation data is collected.
  • Requires online services to prevent compulsive usage and severe psychological harm to minors.
  • Requires online services to provide parental management tools.
  • Requires options to control design features and manage time spent on the service.
  • Requires reasonable care in the design and operation of services to protect minors.
  • Requires reporting mechanisms for harm to minors.
  • Requires services to limit compulsive usage and prevent psychological harm to minors.
  • Requires social media companies to allow parental supervision features.
  • Requires social media companies to enable parental supervision features for minor accounts.
  • Requires social media companies to prevent compulsive usage and psychological harm to minors.
  • Requires social media companies to restrict access to minors during certain hours.
  • Requires social media companies to verify the age of account holders.
  • Requires the Attorney General to maintain and publish an annual report.
  • Requires the Attorney General to maintain and publish an annual report on enforcement actions.
  • Requires the development of educational programs regarding online safety.
  • Requires tools for parents to manage child account settings and restrict transactions.
  • Restricts personalized advertising for minor accounts.
  • Restricts targeted advertising and data collection from minor account holders.
  • Retain minors' personal data only as long as necessary for service provision.
  • Services must allow minors to limit communications and view settings.
  • Settings must be set at the highest level of protection by default.
  • Specifies damages and attorney fees for successful claims.
  • Specifies that liability is limited under existing federal laws.
  • Specifies the definitions of personal data and sensitive personal data.
  • Submit a public report by July 1st each year to the Attorney General.
  • The act does not limit or restrict other applicable laws.
  • The Attorney General shall enforce the provisions of this chapter.
  • The Attorney General's office must investigate complaints.
  • The Consumer Services Division is authorized to administer and enforce the requirements.
  • This act supplements existing laws and provides greater protection for minors.
  • This act takes effect upon approval by the Governor.
  • Tools must allow minors to limit communications and control personal data visibility.
  • Users should be able to limit their time and financial transactions on the service.
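
The notification blackout windows in the list above (10 PM to 6 AM nightly; 8 AM to 3 PM during the August-to-May school year) reduce to a simple time check. This sketch is illustrative only and deliberately ignores weekends, holidays, and time zones, which the bill summary does not resolve:

```python
from datetime import datetime

def may_push_to_minor(ts: datetime) -> bool:
    """Return False inside either notification blackout window for minor accounts."""
    if ts.hour >= 22 or ts.hour < 6:          # 10 PM - 6 AM, year-round
        return False
    in_school_year = ts.month >= 8 or ts.month <= 5
    if in_school_year and 8 <= ts.hour < 15:  # 8 AM - 3 PM, August through May
        return False
    return True

assert not may_push_to_minor(datetime(2026, 1, 14, 23, 0))  # late night
assert not may_push_to_minor(datetime(2026, 1, 14, 10, 0))  # school hours in January
assert may_push_to_minor(datetime(2026, 6, 15, 10, 0))      # summer morning
assert may_push_to_minor(datetime(2026, 1, 14, 18, 0))      # evening, outside both windows
```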

Sponsors

Legislative Actions

Date Action
2026-01-21 Concurred in House amendment and enrolled
2026-01-14 Returned to Senate with amendments ( House Journal-page 105 )
2026-01-14 Roll call Yeas-112 Nays-0 ( House Journal-page 116 )
2026-01-14 Senate amendment amended ( House Journal-page 105 )
2025-05-12 Scrivener's error corrected

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates algorithmic design and features that lead to compulsive usage and psychological harm.

Mechanism of Influence: By requiring services to prevent 'compulsive usage' and allow users to 'disable unnecessary design features,' the law impacts the deployment of AI-driven engagement and recommendation algorithms.

Evidence:

  • Must prevent compulsive usage of the service.
  • Must offer tools to disable unnecessary design features and limit time spent on the service.
  • Prohibits... the use of dark patterns.

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but the behaviors it regulates (compulsive usage, dark patterns, and targeted advertising) are primarily executed via AI and machine learning models on social media platforms.

Analysis 2

Why Relevant: The bill mandates access restrictions and parental notifications based on the user's age.

Mechanism of Influence: To comply with restrictions on access hours for minors, platforms must implement age verification or estimation technologies to distinguish between minor and adult users.

Evidence:

  • Social media companies must restrict access for minors during specific hours to mitigate risks associated with late-night usage.
  • Must inform parents about their child's account status and usage.

Ambiguity Notes: The bill does not specify the technical standard for age verification, leaving the implementation method to the covered online services.

Analysis 3

Why Relevant: The legislation restricts the use of personal data for targeted advertising to minors.

Mechanism of Influence: This provision limits the use of AI-driven profiling and automated decision-making systems used to serve personalized advertisements to minor users.

Evidence:

  • Prohibits targeted advertising to minors.
  • Must not use personal data for purposes other than those for which it was collected.

Ambiguity Notes: The definition of 'targeted advertising' often encompasses various AI-based ad-tech processes, though the bill focuses on the outcome rather than the specific technology.

House - 4582 - Artificial intelligence; provide each school district may provide age-appropriate instruction to student on how to access, utilize, and critically evaluate various AI tools

Legislation ID: 244830

Bill URL: View Bill

Summary

Bill 4582 seeks to amend the South Carolina Code by adding a new section that mandates school districts to provide instruction on artificial intelligence. Starting from the 2026-2027 school year, schools will educate students on accessing, utilizing, and critically evaluating AI tools, guided by the Department of Education's recommendations. The bill emphasizes the importance of teaching foundational AI concepts, practical applications, responsible usage, and critical thinking skills related to AI.

Key Sections

Key Requirements

  • Guidance must include instructional components on AI concepts, applications, responsible usage, access, and critical thinking.
  • Instruction must align with guidance from the Department of Education.
  • Instruction must cover basic AI concepts, practical applications, responsible usage, access to AI tools, and critical thinking skills.
  • Professional development must focus on artificial intelligence education.
  • Requires each school district to provide instruction on accessing, utilizing, and critically evaluating AI tools.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced and read first time
2026-01-13 Referred to Committee on Education and Public Works
2025-12-16 Prefiled
2025-12-16 Referred to Committee on Education and Public Works

Detailed Analysis

Analysis 1

Why Relevant: The bill addresses the 'responsible usage' of artificial intelligence, which aligns with the user's interest in AI oversight and regulation, specifically regarding how the technology is introduced to and managed within the public education system.

Mechanism of Influence: By mandating AI literacy and critical evaluation in schools, the law shapes the public's understanding of AI risks and benefits, potentially influencing future regulatory compliance and ethical standards for AI interaction.

Evidence:

  • mandates school districts to provide instruction on artificial intelligence
  • responsible usage, and critical thinking skills related to AI
  • critically evaluating AI tools

Ambiguity Notes: The bill focuses on educational mandates rather than technical regulations like audits or weight submissions; however, 'responsible usage' is a broad term that could encompass discussions on AI ethics and disclosures.

House - 4657 - Right to Compute Act

Legislation ID: 244720

Bill URL: View Bill

Summary

The Right to Compute Act seeks to amend the South Carolina Code by adding a chapter that outlines the rights related to computational resources, particularly those controlled by artificial intelligence systems. It emphasizes the need for risk management policies for critical infrastructure and sets forth conditions under which governmental restrictions on private computational resources may occur. The bill recognizes the fundamental right to own and use technological tools while ensuring public safety and national security.

Key Sections

Key Requirements

  • Affirms the protection of intellectual property rights.
  • Clarifies that federal law is not overridden by this Act.
  • Defines key terms relevant to the Act.
  • Ensures the validity of unaffected sections if part of the Act is invalidated.
  • Establishes the conditions for the Act's effectiveness.
  • Governmental restrictions on computational resources must be demonstrably necessary and narrowly tailored to fulfill a compelling governmental interest.
  • Requires deployers of critical AI systems, including AI-controlled critical infrastructure, to create risk management policies.
  • Risk management policies must align with national or international standards for AI risk management.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced and read first time
2026-01-13 Referred to Committee on Labor, Commerce and Industry
2025-12-16 Prefiled
2025-12-16 Referred to Committee on Labor, Commerce and Industry

Detailed Analysis

Analysis 1

Why Relevant: The act specifically mandates risk management for certain AI systems.

Mechanism of Influence: Deployers of critical AI systems are required to develop and maintain risk management policies that align with national or international standards.

Evidence:

  • This section mandates that deployers of critical artificial intelligence systems must develop a risk management policy in accordance with recognized standards.

Ambiguity Notes: The term 'critical artificial intelligence systems' is not fully defined in the abstract, leaving room for interpretation on which AI applications fall under this mandate.

Analysis 2

Why Relevant: The act establishes a legal framework for government oversight and restriction of AI-related resources.

Mechanism of Influence: It sets a high legal bar (narrowly tailored to a compelling interest) for any government action that would restrict the use of computational resources.

Evidence:

  • any governmental action restricting the use of computational resources must be narrowly tailored to serve a compelling governmental interest.

Ambiguity Notes: The definition of 'compelling governmental interest' and 'narrowly tailored' are legal standards that will require judicial interpretation in the context of AI.

House - 4675 - SC Community Data Protection and Responsible Surveillance Act

Legislation ID: 244723

Bill URL: View Bill

Summary

The South Carolina Community Data Protection and Responsible Surveillance Act prohibits state and local entities from participating in surveillance systems that store data on third-party servers or use AI for tracking vehicles based on appearance. It establishes strict guidelines for data retention, judicial oversight, and annual reporting to ensure transparency and accountability in the use of surveillance technologies.

Key Sections

Key Requirements

  • Defines key terms relevant to the act, including ALPR and Vehicle Feature Recognition.
  • ALPR systems may only analyze license plate characters and essential contextual data.
  • Prohibits the use of AI for tracking vehicles based on appearance.
  • Requires all surveillance data to be stored on secured servers owned by South Carolina government entities; prohibits storage on third-party servers and any contracts that violate these storage requirements.
  • Limits data retention to 21 days, with automatic deletion unless the data is tied to an active investigation with a court order; prohibits indefinite or bulk storage.
  • Requires a search warrant to access surveillance data; emergency access must be documented and justified, and all data access must be logged.
  • Mandates quarterly independent compliance audits by the South Carolina Inspector General, with audit findings published publicly.
  • Requires annual transparency reports on ALPR usage, including total scans, alerts generated, and investigations using ALPR data.
  • Establishes penalties for unlawful access or use of surveillance data, including misdemeanor charges and fines; residents may sue for unlawful access or sharing of their data.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced and read first time
2026-01-13 Referred to Committee on Judiciary
2025-12-16 Prefiled
2025-12-16 Referred to Committee on Judiciary

Detailed Analysis

Analysis 1

Why Relevant: The act explicitly bans the use of artificial intelligence for specific surveillance purposes.

Mechanism of Influence: It prohibits law enforcement from using AI or automated systems to identify or track vehicles based on non-license plate characteristics, such as vehicle appearance.

Evidence:

  • Bans the use of AI or automated systems for identifying or tracking vehicles based on non-license plate characteristics.
  • Prohibits the use of AI for tracking vehicles by appearance.

Ambiguity Notes: The term "essential contextual data" is not strictly defined, which could lead to varying interpretations of what ALPR systems are allowed to capture alongside license plates.

Analysis 2

Why Relevant: The legislation mandates oversight through regular auditing of surveillance technology usage.

Mechanism of Influence: It requires independent audits every quarter by the South Carolina Inspector General to ensure compliance with the act's privacy and data management provisions.

Evidence:

  • Mandates regular audits of law enforcement agencies using ALPR technology to ensure compliance with the act's provisions.
  • Independent audits every quarter by the South Carolina Inspector General.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill requires public disclosure of how surveillance data is utilized.

Mechanism of Influence: Law enforcement agencies must publish annual reports detailing total scans, alerts generated, and investigations involving ALPR data.

Evidence:

  • Requires law enforcement agencies to publish annual reports detailing the use of ALPR data.
  • Reports must include total scans, alerts generated, and investigations using ALPR data.

Ambiguity Notes: None

Senate - 788 - A.I. and therapy; provide licensed professional shall not be permitted to use A.I. to assist in providing supplementary support where client's therapeutic session is recorded unless patient is informed and consents; other provisions

Legislation ID: 257267

Bill URL: View Bill

Summary

Bill 788 aims to amend the South Carolina Code of Laws to include provisions regarding the use of artificial intelligence in therapy and psychotherapy. It establishes definitions for key terms related to AI and therapy, sets requirements for informed consent from clients when AI is used, and prohibits unlicensed entities from providing therapy services. The bill also emphasizes the confidentiality of client records and outlines penalties for violations of these regulations.

Key Sections

Key Requirements

  • AI cannot make independent therapeutic decisions or interact directly with clients.
  • All therapy records and communications must be kept confidential; disclosure is only permitted as required by law.
  • Imposes a civil penalty of up to $10,000 for violations.
  • Only licensed professionals may provide therapy or psychotherapy services; unlicensed individuals and entities are prohibited from providing them.
  • Requires that the patient be informed in writing and provide written consent before AI is used in therapy sessions where recordings are made.

Sponsors

Legislative Actions

Date Action
2026-01-13 Introduced and read first time
2026-01-13 Referred to Committee on Labor, Commerce and Industry

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mandates disclosures and informed consent regarding the use of AI in a professional setting.

Mechanism of Influence: Licensed professionals are required to provide written notification to patients and obtain their written consent before utilizing AI in recorded therapeutic sessions.

Evidence:

  • Patient must be informed in writing about the use of AI.
  • Patient must provide written consent for the use of AI.

Ambiguity Notes: The term 'supplementary support' is not strictly defined, potentially allowing for a wide range of AI applications as long as a human is technically overseeing them.

Analysis 2

Why Relevant: The legislation establishes strict oversight requirements for AI, preventing it from operating autonomously in a clinical capacity.

Mechanism of Influence: It prohibits AI from making independent therapeutic decisions and requires all AI-delivered services to be overseen by a licensed professional.

Evidence:

  • AI cannot independently make therapeutic decisions or directly interact with clients.
  • Only licensed professionals may provide therapy or psychotherapy services.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill includes enforcement mechanisms and penalties for failing to adhere to the AI regulations.

Mechanism of Influence: Licensing boards are empowered to assess civil penalties and fines for violations of the provisions governing AI use.

Evidence:

  • Fines up to $10,000 may be imposed for violations.

Ambiguity Notes: None
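
The bill's licensure and consent conditions combine into a simple gate that a compliance layer could enforce before any AI tooling touches a session. A minimal sketch, with hypothetical field names:

```python
def may_use_ai_support(session):
    """AI-assisted supplementary support is permitted only when a licensed
    professional provides the service and, for recorded sessions, the client
    was informed in writing and gave written consent."""
    if not session["clinician_licensed"]:
        return False  # unlicensed entities may not provide therapy at all
    if session["recorded"]:
        return session["informed_in_writing"] and session["written_consent"]
    return True
```

The gate is deliberately conservative: a recorded session without both the written notice and written consent fails, mirroring the bill's conjunction of the two requirements.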

↑ Back to Table of Contents

South Dakota

Index of Bills

House - 1125 - create a taskforce to study the impact of artificial intelligence systems on the state.

Legislation ID: 285242

Bill URL: View Bill

Summary

House Bill 1125 aims to create a taskforce composed of representatives from various industries, educational institutions, and government entities to examine the technological advancements and implications of artificial intelligence in South Dakota. The taskforce will provide findings and recommendations by December 1, 2028, including suggestions for any necessary legislation regarding AI systems.

Key Sections

Key Requirements

  • Includes two members from the House, two from the Senate, and representatives from specified industries (such as healthcare, banking, retail, manufacturing, and technology), educational institutions, and local government.
  • One member must be appointed by the Governor, one by the Chief Justice, and one by the Board of Regents.
  • The taskforce must report its findings and recommendations to the executive board by December 1, 2028.
  • The taskforce will be dissolved upon delivering the report.
  • The Act will take effect on January 1, 2027.

Sponsors

Legislative Actions

Date Action
2026-02-02 Schedule for hearing
2026-01-26 First read in House and referred
2026-01-23 Introduced

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the oversight and potential future regulation of artificial intelligence by creating a formal study taskforce.

Mechanism of Influence: The taskforce is specifically charged with examining AI's impact and providing recommendations for necessary legislation, which could lead to future regulatory frameworks, disclosure requirements, or audit mandates.

Evidence:

  • examine the technological advancements and implications of artificial intelligence in South Dakota
  • suggestions for any necessary legislation regarding AI systems
  • study the impact of artificial intelligence systems on the state

Ambiguity Notes: The term 'implications' is broad and could encompass a wide range of regulatory topics such as privacy, ethics, bias, or economic impact, depending on the taskforce's focus.

↑ Back to Table of Contents

Tennessee

Index of Bills

House - 1455 - Criminal Offenses - As introduced, creates a Class A felony offense of knowingly training artificial intelligence to encourage the act of suicide or criminal homicide, or act in specific manners, including developing an emotional relationship with an individual or simulating a human being, including in appearance, voice, or other mannerisms. - Amends TCA Title 29; Title 33; Title 39 and Title 47.

Legislation ID: 240616

Bill URL: View Bill

Summary

This bill amends the Tennessee Code to introduce specific definitions related to artificial intelligence and its applications, particularly focusing on AI chatbots. It establishes unlawful practices concerning the training of AI to engage in harmful behaviors or simulate human interactions that could lead to emotional harm or misinformation. The bill also provides for civil actions against violators, allowing individuals to seek damages for violations.

Key Sections

Key Requirements

  • Allows for punitive damages and recovery of legal costs.
  • Court may order injunctions or retraining of the AI.
  • Individuals can seek actual damages or liquidated damages of $150,000.
  • Prohibits AI from acting as a licensed mental health professional.
  • Prohibits AI from providing emotional support or simulating human relationships.
  • Prohibits training AI to support suicide or criminal acts.

Sponsors

Legislative Actions

Date Action
2026-01-15 Sponsor(s) Added.
2026-01-14 Assigned to s/c Criminal Justice Subcommittee
2026-01-14 P2C, ref. to Judiciary Committee
2026-01-13 Intro., P1C.
2025-12-11 Filed for introduction

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the development and training phase of artificial intelligence systems, which is a core component of AI oversight.

Mechanism of Influence: It imposes criminal and civil liability on developers and entities that train AI to engage in prohibited behaviors, effectively mandating safety guardrails during the model training process.

Evidence:

  • This provision makes it a Class A felony to knowingly train AI in ways that encourage suicide, criminal homicide, emotional dependency, or simulate human relationships
  • Prohibits AI from acting as a licensed mental health professional.
  • Prohibits training AI to support suicide or criminal acts.

Ambiguity Notes: The prohibition on 'simulating human relationships' or 'emotional dependency' is broad and could impact a wide variety of generative AI and companion-style chatbots.

Analysis 2

Why Relevant: The legislation establishes legal definitions and oversight mechanisms for AI technologies.

Mechanism of Influence: By defining terms like 'artificial intelligence chatbot' and 'train,' the bill creates a legal framework for the government and individuals to monitor and litigate AI-related harms.

Evidence:

  • This section defines key terms related to artificial intelligence, including artificial intelligence, artificial intelligence chatbot, person, sexually explicit content, train, and video game.
  • Allows for punitive damages and recovery of legal costs.
  • Court may order injunctions or retraining of the AI.

Ambiguity Notes: None

House - 1470 - HB 1470 by *Hicks T

Legislation ID: 260294

Bill URL: View Bill

Summary

This bill amends the Tennessee Code Annotated to establish regulations regarding artificial intelligence systems in mental health. It specifically prohibits individuals from advertising AI systems as qualified mental health professionals and outlines penalties for violations, including civil penalties under the Tennessee Consumer Protection Act.

Key Sections

Key Requirements

  • Adds violation of § 33-1-205 to the list of unfair or deceptive acts under the Consumer Protection Act.
  • Establishes civil penalties of $5,000 per violation for misleading representations.
  • Prohibits advertising AI systems as qualified mental health professionals.

Sponsors

Legislative Actions

Date Action
2026-01-14 Assigned to s/c Population Health Subcommittee
2026-01-14 P2C, ref. to Health Committee
2026-01-13 Intro., P1C.
2026-01-05 Filed for introduction

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the marketing and public representation of AI systems, specifically targeting the mental health sector.

Mechanism of Influence: It imposes a legal prohibition on developers and deployers, preventing them from mischaracterizing AI capabilities as equivalent to human professional expertise, backed by financial penalties.

Evidence:

  • prohibits anyone who develops or deploys an artificial intelligence system from advertising or representing that such a system can act as a qualified mental health professional.
  • Establishes civil penalties of $5,000 per violation for misleading representations.

Ambiguity Notes: The abstract does not provide a specific definition for 'artificial intelligence system,' which may lead to broad interpretation regarding which software tools fall under these regulations.

Analysis 2

Why Relevant: The legislation integrates AI-specific oversight into the state's existing consumer protection framework.

Mechanism of Influence: By classifying AI misrepresentation as an unfair or deceptive act, it grants state authorities the power to enforce AI regulations using established consumer protection mechanisms.

Evidence:

  • This provision adds a new subdivision to the Tennessee Consumer Protection Act to include violations of the new section regarding AI systems in mental health.

Ambiguity Notes: None

Senate - 1493 - Criminal Offenses - As introduced, creates a Class A felony offense of knowingly training artificial intelligence to encourage the act of suicide or criminal homicide, or act in specific manners, including developing an emotional relationship with an individual or simulating a human being, including in appearance, voice, or other mannerisms. - Amends TCA Title 29; Title 33; Title 39 and Title 47.

Legislation ID: 240617

Bill URL: View Bill

Summary

This bill amends the Tennessee Code to establish definitions and legal parameters regarding artificial intelligence, particularly in the context of training AI systems. It prohibits the training of AI to engage in harmful behaviors, such as encouraging suicide or simulating human relationships, and sets forth civil and criminal penalties for violations. The bill also provides mechanisms for individuals to seek damages if they are harmed by such AI systems.

Key Sections

Key Requirements

  • Aggrieved individuals can recover actual damages or liquidated damages of $150,000.
  • Legal representatives can act on behalf of minors or incapacitated individuals.
  • Prohibits AI from acting as a licensed mental health or healthcare professional.
  • Prohibits AI from providing emotional support or simulating human relationships.
  • Prohibits training AI to support suicide or criminal homicide.
  • Punitive damages and litigation costs may also be awarded.

Sponsors

Legislative Actions

Date Action
2026-01-14 Passed on Second Consideration, refer to Senate Judiciary Committee
2026-01-13 Introduced, Passed on First Consideration
2025-12-18 Filed for introduction

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the development and training phase of artificial intelligence systems.

Mechanism of Influence: It creates a legal prohibition against specific AI functionalities, effectively mandating safety guardrails during the development and training phase by banning the simulation of human relationships or emotional support.

Evidence:

  • Prohibits training AI to support suicide or criminal homicide.
  • Prohibits AI from providing emotional support or simulating human relationships.
  • Prohibits AI from acting as a licensed mental health or healthcare professional.

Ambiguity Notes: The prohibition on 'simulating human relationships' is broad and could potentially impact a wide variety of generative AI and chatbot applications beyond those intended for mental health.

Analysis 2

Why Relevant: The legislation establishes an enforcement and liability framework for AI-related harms.

Mechanism of Influence: By allowing for liquidated damages of $150,000 and punitive damages, it creates a high-stakes compliance environment for AI developers and companies operating within the state.

Evidence:

  • Aggrieved individuals can recover actual damages or liquidated damages of $150,000.
  • Punitive damages and litigation costs may also be awarded.

Ambiguity Notes: The scope of 'aggrieved individuals' and the specific threshold for what constitutes a violation of 'training' versus 'deployment' may require further judicial interpretation.

Senate - 1580 - SB 1580 by *Walley

Legislation ID: 260511

Bill URL: View Bill

Summary

The bill amends Tennessee Code Annotated to include regulations on artificial intelligence systems in the mental health field. It specifically prohibits individuals or entities from advertising AI systems as qualified mental health professionals, establishing penalties for violations under the Tennessee Consumer Protection Act. The bill defines artificial intelligence and sets a civil penalty for violations to ensure compliance and protect consumers.

Key Sections

Key Requirements

  • Establishes a civil penalty for violations of this prohibition.
  • Imposes a civil penalty of $5,000 per violation.
  • Prohibits advertising AI systems as qualified mental health professionals.
  • Prohibits the advertisement of AI systems as qualified mental health professionals.
  • Violators may incur a civil penalty of $5,000 per violation.

Sponsors

Legislative Actions

Date Action
2026-01-14 Passed on Second Consideration, refer to Senate Health and Welfare Committee
2026-01-13 Introduced, Passed on First Consideration
2026-01-12 Filed for introduction

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the marketing and representation of AI systems, specifically targeting the disclosure of AI's status versus human professional qualifications.

Mechanism of Influence: It creates a legal prohibition against misrepresenting AI capabilities as human professional expertise and enforces this through civil penalties and consumer protection laws.

Evidence:

  • This provision prohibits individuals or entities from advertising or representing that an artificial intelligence system can act as a qualified mental health professional.
  • Imposes a civil penalty of $5,000 per violation.
  • This provision defines artificial intelligence as systems capable of performing tasks generally associated with human intelligence, such as reasoning and learning.

Ambiguity Notes: The definition of AI as systems 'capable of performing tasks generally associated with human intelligence' is relatively broad and may require further clarification as to whether it applies to simple chatbots or only advanced diagnostic tools.

Senate - 1700 - Attorney General and Reporter - As introduced, enacts the "Curbing Harmful AI Technology (CHAT) Act." - Amends TCA Title 29; Title 37 and Title 47.

Legislation ID: 260262

Bill URL: View Bill

Summary

Senate Bill 1700, known as the Curbing Harmful AI Technology (CHAT) Act, amends Tennessee Code to introduce regulations governing artificial intelligence systems and companion chatbots. It defines key terms, outlines safety and design requirements, mandates transparency and data privacy protections, and establishes enforcement mechanisms to hold developers and deployers accountable for violations. The bill seeks to ensure that AI technologies do not harm minors and provides a framework for addressing issues related to mental health and user safety.

Key Sections

Key Requirements

  • Attorney General can impose civil penalties for violations.
  • Establish a reporting mechanism for adverse incidents.
  • Must include detection mechanisms for suicidal expressions.
  • Must refer users to crisis services upon detection.
  • Notification upon login, every 30 minutes, when prompted, and when giving regulated advice.
  • Prevents chatbots from offering unsupervised mental health therapy to minors.
  • Prohibits chatbots from encouraging self-harm, violence, or illegal activities.
  • Publish safety test findings for public access.
  • Regular reporting of chatbot interactions related to mental health.
  • Requires persistent disclosure that the product is not human.
  • Requires written consent from a parent or guardian to use a minor's information for training.
  • Users can seek damages and relief for violations.

Sponsors

Legislative Actions

Date Action
2026-01-15 Filed for introduction

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates specific disclosures and transparency requirements for AI interactions.

Mechanism of Influence: Deployers are legally required to provide disclaimers that a chatbot is not human at specific intervals (every 30 minutes) and when giving regulated advice.

Evidence:

  • Deployers must include disclaimers that the chatbot is not a human and notify users at specific intervals.
  • Requires persistent disclosure that the product is not human.

Ambiguity Notes: The term 'regulated advice' is mentioned but not specifically defined in the text, which could lead to broad interpretations regarding which AI outputs trigger specific disclosure requirements.
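
The disclosure cadence described above is effectively a scheduling rule: certain events always trigger the notice, and otherwise a 30-minute timer governs. A minimal sketch of that rule (event names are hypothetical; only the 30-minute interval and trigger conditions come from the bill summary):

```python
from datetime import datetime, timedelta

DISCLOSURE_INTERVAL = timedelta(minutes=30)  # cadence stated in the bill

def needs_disclosure(last_disclosed, now, event="message"):
    """Return True when the 'not a human' notice must be (re)shown: at login,
    when the user prompts for it, before regulated advice, or once 30 minutes
    have elapsed since the previous notice."""
    if event in ("login", "user_prompt", "regulated_advice"):
        return True
    return last_disclosed is None or now - last_disclosed >= DISCLOSURE_INTERVAL
```

A deployer would call this before emitting each chatbot turn, resetting `last_disclosed` whenever the notice is actually shown.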

Analysis 2

Why Relevant: The legislation addresses age-specific usage and data privacy for minors.

Mechanism of Influence: It prohibits the use of a minor's input for training AI models without explicit parental consent and restricts the types of AI interactions allowed for minors, specifically regarding mental health.

Evidence:

  • Developers cannot use minors' inputs to train AI models without parental consent.
  • Operators must not provide companion chatbots to minors if they can encourage harmful behaviors or interactions.

Ambiguity Notes: While it requires parental consent, the specific mechanism for age verification to identify a user as a minor is not detailed in the summary.

Analysis 3

Why Relevant: The bill requires developers to perform safety testing and public reporting, similar to an audit requirement.

Mechanism of Influence: Developers are obligated to publish their safety test findings for public access and maintain mechanisms for reporting adverse incidents.

Evidence:

  • Publish safety test findings for public access.
  • Establish a reporting mechanism for adverse incidents.

Ambiguity Notes: The criteria for what constitutes a 'safety test' and the required depth of the 'findings' are not specified, potentially allowing for varied levels of rigor among developers.

↑ Back to Table of Contents

Utah

Index of Bills

House - 218 - Digital Literacy Amendments

Legislation ID: 248008

Bill URL: View Bill

Summary

H.B. 218 introduces a requirement for Utah high school students to complete a half-credit digital literacy course to graduate. The bill emphasizes the integration of digital literacy concepts throughout K-12 education, defining key areas such as social media awareness and artificial intelligence literacy. It mandates end-of-course assessments for the digital literacy requirement, establishes a task force to oversee the implementation, and sets a timeline for the new requirements to take effect.

Key Sections

Key Requirements

  • Creates a task force to study and make recommendations on digital literacy education.
  • Mandates coordination of digital literacy instruction with existing core standards.
  • Mandates end-of-course assessments for the digital literacy course.
  • New graduation requirements take effect for the incoming freshman class one school year after the bill's passage.
  • Requires high school students to complete a half-credit digital literacy course to graduate.
  • Requires integration of digital literacy education in kindergarten through grade 12.
  • Requires the task force to convene at least once every three years.

Sponsors

Legislative Actions

Date Action
2026-01-23 House/ received fiscal note from Fiscal Analyst
2026-01-20 House/ 1st reading (Introduced)
2026-01-14 House/ received bill from Legislative Research
2026-01-09 Bill Numbered but not Distributed
2026-01-09 Numbered Bill Publicly Distributed

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly identifies artificial intelligence literacy as a core component of the new digital literacy graduation requirement.

Mechanism of Influence: By mandating AI literacy in the K-12 curriculum, the law requires the state to define educational standards for AI and ensures that all graduating students have a foundational understanding of the technology.

Evidence:

  • defining key areas such as social media awareness and artificial intelligence literacy
  • Requires high school students to complete a half-credit digital literacy course to graduate.

Ambiguity Notes: The bill focuses on educational literacy rather than the direct regulation of AI development, deployment, or technical audits.

House - 273 - Classroom Technology Amendments

Legislation ID: 270123

Bill URL: View Bill

Summary

This bill mandates the State Board of Education to develop model policies regarding technology and artificial intelligence use in public school classrooms. It includes specific requirements for local education agencies (LEAs) on how to integrate technology effectively and safely. The bill also addresses the need for transparency with parents, limits on screen time, and the introduction of artificial intelligence standards into core education curricula.

Key Sections

Key Requirements

  • Artificial intelligence standards must be integrated into core computer science education.
  • Defines AI terms and outlines the structure of AI sandbox courses.
  • Excludes courses where technology is integral, online schools, core technology standards courses, AI sandbox courses, and IEP/504-plan-compliant technology use.
  • Includes guidelines for different grade levels on technology use.
  • LEAs must adopt policies consistent with the model AI use policy created by the State Board and ensure compliance.
  • LEAs must ensure instructional technology is designed for educational use and does not interfere with learning.
  • LEAs must provide parents with access to information about digital tools used in classrooms.
  • LEAs must submit a detailed report on adopted policies, implementation efforts, and compliance monitoring plans.
  • Minimizes non-essential screen time and prioritizes purposeful engagement with technology.
  • Regular monitoring and accountability policies must be adopted to ensure compliance.
  • Requires LEAs to certify adoption of policies on balanced technology use and AI usage to receive funding.
  • Requires LEAs to evaluate the effectiveness of technology requirements.
  • Requires LEAs to provide accommodations and additional resources for affected students.
  • Requires parental notification and consent for students participating in AI sandbox courses.
  • Requires the State Board to establish administrative rules for compliance and implementation.
  • Requires transparency with parents regarding technology use in classrooms.
  • The model policy must prioritize developmental appropriateness and age-based limits on screen exposure.

Sponsors

Legislative Actions

Date Action
2026-01-20 House/ 1st reading (Introduced)
2026-01-20 House/ received bill from Legislative Research
2026-01-16 Bill Numbered but not Distributed
2026-01-16 Numbered Bill Publicly Distributed

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes regulatory frameworks for AI usage in educational settings.

Mechanism of Influence: It requires the State Board of Education to publish a model AI use policy which local education agencies (LEAs) must then adopt and follow to ensure responsible use.

Evidence:

  • The state board will publish a model AI use policy for LEAs to adopt, ensuring responsible use of AI in education.
  • LEAs must adopt policies consistent with the model AI use policy created by the State Board.

Ambiguity Notes: The specific criteria for 'responsible use' are not defined in the abstract and are left to the State Board's administrative rulemaking.

Analysis 2

Why Relevant: It mandates disclosures and parental consent specifically for AI-related educational activities.

Mechanism of Influence: Students are prohibited from participating in AI sandbox courses unless the LEA provides notification and obtains explicit parental consent.

Evidence:

  • Requires parental notification and consent for students participating in AI sandbox courses.
  • LEAs may offer AI sandbox courses with specific guidelines and parental consent requirements.

Ambiguity Notes: While 'AI sandbox courses' are defined, the specific parameters of what constitutes a 'sandbox' versus general AI use in other courses may require further clarification.

Analysis 3

Why Relevant: The bill introduces oversight, auditing, and reporting requirements for AI policy implementation.

Mechanism of Influence: LEAs must certify compliance with AI usage policies to receive state funding and must submit detailed reports to the state board regarding their compliance monitoring plans.

Evidence:

  • LEAs must certify their compliance with technology use policies to receive state funds for educational technology programs.
  • LEAs must report to the state board upon adopting new policies related to instructional technology, detailing specifics about the policy and compliance monitoring.

Ambiguity Notes: The bill requires LEAs to adopt a method for evaluating effectiveness, but does not specify the metrics for that evaluation.

House - 276 - Artificial Intelligence Modifications

Legislation ID: 270127

Bill URL: View Bill

Summary

The bill enacts the Digital Voyeurism Prevention Act, which prohibits the generation and distribution of counterfeit intimate images without consent. It establishes civil liabilities for violations, mandates consent verification systems for generation services, and outlines procedures for platforms to remove non-consensual content. Additionally, it sets requirements for the disclosure of AI-generated content and the preservation of content provenance data.

Key Sections

Key Requirements

  • Consent verification systems must be implemented and maintained.
  • Each distribution of a counterfeit image without consent is considered a separate violation.
  • Each failure to remove an image after notice constitutes a separate violation.
  • Generation services must inform users of consent verification requirements and penalties for violations.
  • Generation services must obtain and verify consent before distributing counterfeit intimate images.
  • Plaintiffs can claim damages similar to those applicable to generation services.
  • Plaintiffs can recover actual damages, statutory damages, punitive damages, and attorney fees.
  • Platforms must provide clear reporting procedures for users.
  • Platforms must take action if they receive notice of non-consensual distribution.
  • Platforms must temporarily disable access to reported images and conduct a reasonable investigation.
  • Platforms must remove the image if it is found to have been distributed without consent.

Sponsors

Legislative Actions

Date Action
2026-01-26 House/ received fiscal note from Fiscal Analyst
2026-01-20 House/ 1st reading (Introduced)
2026-01-20 House/ received bill from Legislative Research
2026-01-16 Bill Numbered but not Distributed
2026-01-16 Numbered Bill Publicly Distributed

Detailed Analysis

Analysis 1

Why Relevant: The bill provides specific legal definitions for artificial intelligence and generative AI systems to establish the scope of regulation.

Mechanism of Influence: By defining 'artificial intelligence technology' and 'generative artificial intelligence system', the law determines which software tools are subject to the act's mandates and liabilities.

Evidence:

  • This section provides definitions for key terms used in the bill, including artificial intelligence technology, consent, counterfeit intimate image, and others.
  • This section provides definitions for terms related to digital content provenance, including capture device and generative artificial intelligence system.

Ambiguity Notes: The breadth of the definition for 'artificial intelligence technology' could determine whether traditional editing software is captured alongside modern LLMs or diffusion models.

Analysis 2

Why Relevant: The legislation mandates specific technical and procedural requirements for AI generation services, including consent verification and transparency.

Mechanism of Influence: AI generation services must implement and maintain consent verification systems and disclose their procedures to users, creating a regulatory compliance burden for AI developers.

Evidence:

  • Consent verification systems must be implemented and maintained.
  • Generation services must obtain and verify consent before distributing counterfeit intimate images.
  • Generation services must inform users of consent verification requirements and penalties for violations.

Ambiguity Notes: The term 'reasonable investigation' for platforms and the specific technical standards for 'consent verification systems' are not fully defined, leaving room for regulatory or judicial interpretation.

Analysis 3

Why Relevant: The bill addresses AI transparency through disclosure requirements and the preservation of content provenance data.

Mechanism of Influence: It sets requirements for the disclosure of AI-generated content and mandates the preservation of provenance data, which tracks the origin and history of digital content.

Evidence:

  • Additionally, it sets requirements for the disclosure of AI-generated content and the preservation of content provenance data.

Ambiguity Notes: The 'Digital Content Provenance Standards Act' component suggests a reliance on evolving technical standards for watermarking or metadata that may not be universally adopted.

House - 286 - Artificial Intelligence Transparency Amendments

Legislation ID: 273002

Bill URL: View Bill

Summary

The act requires large frontier developers to publish public safety plans addressing catastrophic risks and child protection plans addressing child safety risks. It mandates publication of risk assessment summaries, prohibits false or misleading statements about risks, requires safety incident reporting to the state Office of Artificial Intelligence Policy, and creates an enforcement framework including civil penalties, whistleblower protections, and an enforcement account. It also provides for rulemaking and annual reporting by the Office of Artificial Intelligence Policy and includes a severability clause.

Key Sections

Key Requirements

  • Allows redactions to protect trade secrets, cybersecurity, public safety, national security, or legal compliance; redactions must be justified and documented, with unredacted copies retained for five years.
  • Allows employees to bring actions for damages within specified timeframes.
  • Apply mitigations based on risk assessment results.
  • Assess and manage catastrophic risk from internal use of frontier models.
  • Critical safety incidents must be reported to appropriate agencies within 24 hours; risks of imminent death or physical injury require immediate disclosure within 24 hours.
  • Define and assess catastrophic risk thresholds for model capabilities.
  • Defines key terms related to artificial intelligence and safety risks.
  • Developers must assess and address potential catastrophic risks before deployment, reviewing assessments and mitigations prior to deployment or extensive internal use.
  • Engage third-party assessors to evaluate catastrophic risk and mitigations.
  • Establish internal governance to ensure plan implementation.
  • Establish a safety incident reporting mechanism via rules; the Office is to produce annual reporting on safety incidents and related governance.
  • Establishes a restricted account for collecting civil penalty funds and appropriations for enforcement activities.
  • Identify and respond to critical safety incidents and child safety incidents.
  • Implement cybersecurity measures to protect unreleased frontier model weights.
  • Imposes a civil penalty of up to $1,000,000 for the first violation and up to $3,000,000 for subsequent violations.
  • Incorporate national and international standards and best practices.
  • Large frontier developers must report safety incidents within 15 days of discovery.
  • Prohibits materially false or misleading statements about covered risks, risk management, or public safety plan implementation and compliance.
  • Prohibits adverse actions against employees who report violations or participate in investigations; remedies include reinstatement and compensation for legal costs.
  • Publish the public safety plan and the child protection plan on the developer's website; the child protection plan must assess potential child safety risks and include mitigation strategies.
  • Publish summaries upon deployment of new or substantially modified frontier or foundation models, covering risk assessment results, third-party involvement, and plan fulfillment steps.
  • Requires a process for anonymous reporting of safety concerns.
  • Requires submission of risk assessment reports at least every three months; an annual report summarizing risk assessments is required.
  • Update the plan upon material changes, publishing modifications with justification within 30 days.

Sponsors

Legislative Actions

Date Action
2026-01-28 House/ 2nd reading
2026-01-28 House/ comm rpt/ substituted
2026-01-27 House Comm - Favorable Recommendation
2026-01-27 House Comm - Substitute Recommendation
2026-01-26 House/ to standing committee
2026-01-23 House/ received fiscal note from Fiscal Analyst
2026-01-20 House/ 1st reading (Introduced)
2026-01-20 House/ received bill from Legislative Research

Detailed Analysis

Analysis 1

Why Relevant: The act specifically targets child safety and protection in the context of AI chatbots, addressing age-related regulation of AI use.

Mechanism of Influence: Frontier developers must implement and publish child protection plans that assess and mitigate risks to minors.

Evidence:

  • This section requires large frontier developers operating covered chatbots to implement and publish a child protection plan that addresses potential risks to minors.

Ambiguity Notes: The term 'potential risks to minors' is not explicitly defined in the abstract, leaving room for interpretation on what constitutes a safety risk.

Analysis 2

Why Relevant: It mandates public disclosures and transparency regarding AI model risks and safety strategies.

Mechanism of Influence: Developers are required to publish summaries of risk assessments and public safety plans on their websites prior to deployment.

Evidence:

  • Developers must publish summaries of risk assessments related to child safety and catastrophic risks before deploying new or modified AI models.
  • Requires large frontier developers to publish a public safety plan on their website.

Ambiguity Notes: Provisions allowing redactions for 'trade secrets' could potentially be used to obscure critical safety information.

Analysis 3

Why Relevant: The legislation establishes government oversight and reporting requirements for AI safety.

Mechanism of Influence: It creates a formal mechanism for reporting safety incidents to the Office of Artificial Intelligence Policy and requires annual risk assessment reports.

Evidence:

  • This section establishes a reporting mechanism for safety incidents and requires developers to report incidents to the Office of Artificial Intelligence Policy.
  • An annual report summarizing risk assessments is required.

Ambiguity Notes: The specific 'specified timeframes' for reporting incidents are mentioned but not detailed in the text.

House - 320 - Office of Artificial Intelligence Policy Amendments

Legislation ID: 283936

Bill URL: View Bill

Summary

This bill modifies existing laws related to the Office of Artificial Intelligence Policy and its associated learning laboratory program. It introduces new definitions, revises duties of the office, updates provisions for regulatory agreements, and makes technical adjustments to improve the management and oversight of artificial intelligence technologies within the state.

Key Sections

Key Requirements

  • Agreements must specify limitations on the use of AI technology and include safeguards.
  • Consultation with various stakeholders is required when establishing the learning agenda.
  • Initial agreements may not exceed 12 months.
  • Participants must demonstrate eligibility criteria established by the office.
  • Participants must demonstrate sufficient financial resources for testing.
  • Participants must have the technical capability to responsibly develop and use AI technology.
  • Requests for extension must be made 30 days before the current agreement expires.
  • The office must consult with stakeholders about regulatory proposals.
  • The office must create and administer an artificial intelligence learning laboratory program.
  • The office must periodically set a learning agenda for the laboratory.

Sponsors

Legislative Actions

Date Action
2026-01-28 House/ received fiscal note from Fiscal Analyst
2026-01-22 Bill Numbered but not Distributed
2026-01-22 House/ 1st reading (Introduced)
2026-01-22 House/ received bill from Legislative Research
2026-01-22 Numbered Bill Publicly Distributed

Detailed Analysis

Analysis 1

Why Relevant: The bill creates a dedicated government office specifically for the management and oversight of artificial intelligence technologies.

Mechanism of Influence: The Office of Artificial Intelligence Policy is tasked with administering a learning laboratory program, consulting on regulatory proposals, and reporting annually on AI developments.

Evidence:

  • This section establishes the Office of Artificial Intelligence Policy within the Department of Commerce
  • The office must create and administer an artificial intelligence learning laboratory program.

Ambiguity Notes: The scope of 'regulatory proposals' is broad and could encompass various types of AI governance from ethics to technical standards.

Analysis 2

Why Relevant: The bill establishes a program to evaluate AI technologies and inform future state regulations.

Mechanism of Influence: The Artificial Intelligence Learning Laboratory Program analyzes AI technologies to evaluate existing regulatory frameworks and encourage responsible deployment.

Evidence:

  • outlining its purpose to analyze AI technologies and inform state regulations
  • evaluating existing regulatory frameworks

Ambiguity Notes: The 'learning agenda' is not strictly defined, leaving the office significant discretion over which AI risks or technologies to prioritize.

Analysis 3

Why Relevant: The bill sets specific criteria for AI developers to enter into regulatory agreements with the state, involving government vetting.

Mechanism of Influence: Participants must demonstrate technical capability and financial resources, and agreements must include safeguards and limitations on AI use.

Evidence:

  • Participants must have the technical capability to responsibly develop and use AI technology.
  • Agreements must specify limitations on the use of AI technology and include safeguards.

Ambiguity Notes: The term 'regulatory mitigation' suggests a sandbox-like environment where certain rules might be waived in exchange for oversight, but the specific regulations being mitigated are not listed.

House - 357 - Amendments to Motor Vehicle Data Privacy

Legislation ID: 285175

Bill URL: View Bill

Summary

This bill amends the Utah Consumer Privacy Act to include specific provisions related to motor vehicle data privacy. It defines key terms, applies privacy regulations to motor vehicle manufacturers, mandates in-vehicle privacy controls, exempts certain safety data from consent requirements, and requires the Motor Vehicle Division to educate consumers about their data privacy rights.

Key Sections

Key Requirements

  • Consumers must be able to opt out of data sales and processing for targeted advertising.
  • Consumers must have the ability to delete readily accessible data.
  • Data collected for safety or compliance is exempt from consent requirements.
  • Manufacturers may only collect minimum necessary data for product improvement.
  • Manufacturers must provide controls to view data categories collected.
  • Motor vehicle manufacturers must comply if they manufacture vehicles sold or leased in Utah.
  • New owners must be informed about accessing this information during title transfers.
  • Requires adherence to good clinical practice guidelines for research.
  • Requires compliance with federal regulations regarding human subjects.
  • The division must maintain a website with data privacy information.
  • Manufacturers are covered if they collect, transmit, or store personal data via a vehicle data collection system.

Sponsors

Legislative Actions

Date Action
2026-01-26 House/ 1st reading (Introduced)
2026-01-26 House/ received bill from Legislative Research
2026-01-23 Bill Numbered but not Distributed
2026-01-23 Numbered Bill Publicly Distributed

Detailed Analysis

Analysis 1

Why Relevant: The bill includes 'biometric data' within its scope, which is a critical data category often processed by AI systems for driver monitoring, facial recognition, or security.

Mechanism of Influence: By regulating the collection and use of biometric data in vehicles, the law places constraints on the types of data AI models can ingest and process without specific consumer protections.

Evidence:

  • This section provides definitions for various terms used in the bill, including motor vehicle, consumer, personal data, and biometric data.

Ambiguity Notes: The bill focuses on the privacy of the data rather than the specific AI algorithms that might use the data, leaving the technical implementation of 'privacy controls' for AI-driven features undefined.

Analysis 2

Why Relevant: The bill addresses 'targeted advertising,' which is a field heavily reliant on AI and machine learning for consumer profiling and automated decision-making.

Mechanism of Influence: Mandating an opt-out for data processing related to targeted advertising restricts the data pipeline used to train and execute AI-driven marketing models.

Evidence:

  • Consumers must be able to opt out of data sales and processing for targeted advertising.

Ambiguity Notes: The text does not explicitly mention AI or machine learning, focusing instead on the 'processing' of data for the purpose of advertising, which is the functional application of AI in this context.

House - 55 - Privacy Compliance for Education Technology Vendors

Legislation ID: 245758

Bill URL: View Bill

Summary

H.B. 55 establishes requirements for the termination of contracts with third-party providers when they fail to comply with privacy laws. It mandates that the State Board of Education investigate alleged privacy violations and conduct audits of agreements. The bill also prohibits third-party contractors from imposing fees on education entities for terminating contracts due to privacy violations.

Key Sections

Key Requirements

  • Establishes a reporting process for suspected privacy violations to the State Board.
  • Mandates compliance audits within six months of executing new or renewed contracts or revising data privacy agreements.
  • Mandates an initial review of credible reports of violations, with audits initiated if warranted.
  • Requires the State Board of Education to investigate reported privacy violations.
  • Requires notification to the contractor within 30 days of discovering unauthorized use or sale of student data.
  • Mandates termination of the contract if the violation is not remedied within 30 days after notification.
  • Prohibits the sale of student data and targeted advertising using student data.
  • Prohibits third-party contractors from imposing fees or other financial liabilities for contract termination due to privacy violations.
  • Requires contractors to return or delete all student data upon contract completion unless consent is obtained for retention.
  • Requires education entities to include contract provisions mandating compliance with privacy laws and governing data collection and use.
  • Restricts the use of student data to only what is necessary for the contracted services.
  • Specifies situations where the provisions do not apply, such as general audience applications and compliance with directory information policies.

Sponsors

Legislative Actions

Date Action
2026-01-23 House/ 2nd reading
2026-01-23 House/ comm rpt/ amended/ placed on Consent Cal
2026-01-23 House/ placed back on 3rd Reading Calendar
2026-01-22 House Comm - Amendment Recommendation
2026-01-22 House Comm - Consent Calendar Recommendation
2026-01-22 House Comm - Favorable Recommendation
2026-01-21 House/ to standing committee
2026-01-20 House/ 1st reading (Introduced)

Detailed Analysis

Analysis 1

Why Relevant: The legislation establishes a framework for auditing and regulating third-party contractors, which includes technology providers and software developers often utilizing AI in educational settings.

Mechanism of Influence: AI vendors serving as third-party contractors for educational entities would be subject to mandatory compliance audits and potential contract termination if their data processing or algorithmic functions violate student privacy laws.

Evidence:

  • The state board must conduct compliance audits within six months of new or renewed contracts with third-party contractors to ensure adherence to privacy laws.
  • The bill requires educational entities to terminate contracts with third-party contractors that do not remedy privacy violations after being notified.

Ambiguity Notes: The bill does not explicitly name 'Artificial Intelligence,' but its broad application to 'third-party contractors' and 'data privacy' encompasses AI service providers handling student data.

Senate - 177 - Product Pricing Amendments

Legislation ID: 282565

Bill URL: View Bill

Summary

The Product Pricing Amendments bill enacts provisions related to algorithmic pricing, defining necessary terms and establishing requirements for suppliers to disclose their pricing methods. It aims to protect consumers from deceptive practices by ensuring transparency in how prices are set and displayed based on algorithms.

Key Sections

Key Requirements

  • Suppliers must provide a disclaimer when using algorithmic pricing.
  • The disclaimer must clearly inform consumers that prices are set using algorithms based on personal data.

Sponsors

Legislative Actions

Date Action
2026-01-27 Senate/ received fiscal note from Fiscal Analyst
2026-01-26 Senate Comm - Not Considered
2026-01-26 Senate/ to standing committee
2026-01-22 Senate/ 1st reading (Introduced)
2026-01-22 Senate/ received bill from Legislative Research
2026-01-21 Bill Numbered but not Distributed
2026-01-21 Numbered Bill Publicly Distributed

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation of algorithms and requires specific disclosures for AI-driven or automated pricing systems.

Mechanism of Influence: It mandates that suppliers provide a disclaimer informing consumers that prices are set using algorithms based on personal data, thereby increasing transparency in automated decision-making.

Evidence:

  • Suppliers must provide a disclaimer when using algorithmic pricing.
  • The disclaimer must clearly inform consumers that prices are set using algorithms based on personal data.

Ambiguity Notes: The specific definition of 'algorithm' provided in the bill would determine the breadth of AI technologies covered, potentially ranging from simple rule-based systems to complex machine learning models.

↑ Back to Table of Contents

Vermont

Index of Bills

House - 644 - An act relating to regulating the use of artificial intelligence in the provision of mental health services

Legislation ID: 258216

Bill URL: View Bill

Summary

The bill addresses the growing concern over the use of artificial intelligence systems in mental health services, highlighting the risks associated with unregulated AI interactions. It seeks to safeguard individuals by regulating the use of AI, prohibiting its use in therapeutic settings, and establishing guidelines for mental health professionals who may use AI for administrative purposes only.

Key Sections

Key Requirements

  • AI cannot be used for therapeutic decisions or communications.
  • Consent must be obtained from patients for recording or transcription.
  • Mental health professionals may use AI for administrative tasks only with oversight.
  • Misuse of artificial intelligence by mental health professionals constitutes unprofessional conduct.
  • Prohibits the use of AI in mental health services unless authorized.

Sponsors

Legislative Actions

Date Action
2026-01-13 Read first time and referred to the Committee on [Health Care]

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly regulates the use of artificial intelligence by defining permitted and prohibited applications within a specific professional field.

Mechanism of Influence: It creates a legal boundary that prevents AI from being used as a substitute for human practitioners in therapeutic settings, effectively banning AI-driven therapy.

Evidence:

  • AI cannot be used for therapeutic decisions or communications.
  • Prohibits any person or entity from offering or advertising mental health services that utilize artificial intelligence.

Ambiguity Notes: The term 'administrative tasks' is not granularly defined, which may leave room for interpretation regarding data processing or scheduling versus clinical documentation.

Analysis 2

Why Relevant: The legislation requires disclosures and consent mechanisms for AI usage.

Mechanism of Influence: Mental health professionals must obtain patient consent before using AI for recording or transcription, ensuring transparency in AI deployment.

Evidence:

  • Consent must be obtained from patients for recording or transcription.

Ambiguity Notes: None

Analysis 3

Why Relevant: The bill establishes oversight requirements and professional accountability for AI use.

Mechanism of Influence: By amending the definition of unprofessional conduct to include AI misuse, the law subjects professionals to licensing penalties for failing to oversee AI tools properly.

Evidence:

  • Mental health professionals may use AI for administrative tasks only with oversight.
  • Misuse of artificial intelligence by mental health professionals constitutes unprofessional conduct.

Ambiguity Notes: None

House - 650 - An act relating to educational technology products

Legislation ID: 258229

Bill URL: View Bill

Summary

The bill amends existing laws to establish a framework for the registration and certification of educational technology products that collect student data. It mandates that providers register with the Secretary of State, pay a fee, and disclose their privacy policies and product information. The Secretary of State will create certification standards to ensure these products comply with state and federal privacy laws, and schools are prohibited from using non-certified products. The bill also includes penalties for non-compliance and outlines a transition period for schools to adapt to the new requirements.

Key Sections

Key Requirements

  • A registration fee of $100 is required.
  • Non-certified products may be used until June 30, 2027.
  • Products must comply with state curriculum standards and privacy laws.
  • Providers must disclose data collection practices and obtain parental consent.
  • Providers must register by January 31 each year.
  • Providers must submit their contact information, privacy policy, and product list.
  • Schools must submit a list of products in use by December 15, 2026.
  • The act takes effect on July 1, 2026.
  • The certified product requirement takes effect on July 1, 2027.
  • The Secretary of State must certify products before schools can use them.

Sponsors

Legislative Actions

Date Action
2026-01-13 Read first time and referred to the Committee on [Commerce and Economic Development]

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates educational technology products, a category that increasingly includes AI-driven personalized learning platforms, automated grading systems, and student monitoring tools.

Mechanism of Influence: AI providers operating in the educational sector would be required to disclose their data collection practices and undergo a state certification process, effectively subjecting AI models used in schools to privacy audits and regulatory oversight.

Evidence:

  • The bill amends existing laws to establish a framework for the registration and certification of educational technology products that collect student data.
  • Providers must disclose data collection practices and obtain parental consent.
  • The Secretary of State will create certification standards to ensure these products comply with state and federal privacy laws, and schools are prohibited from using non-certified products.

Ambiguity Notes: The bill uses the broad term 'educational technology products' without explicitly defining 'Artificial Intelligence.' While AI tools fall under this umbrella, the specific requirements for AI-specific disclosures (like model weights or algorithmic bias) are not explicitly mentioned.

House - 720 - An act relating to the Cloud Computing Public Utility Act

Legislation ID: 274514

Bill URL: View Bill

Summary

This bill introduces the Cloud Computing Public Utility Act, which recognizes cloud computing services as essential utilities in Vermont. It seeks to create a regulatory environment that fosters competition and innovation while safeguarding consumer interests against unfair practices. The bill outlines the definitions, jurisdiction, and operational requirements for cloud service providers, aiming to ensure service quality, affordability, and transparency in pricing and data management.

Key Sections

Sponsors

Legislative Actions

Date Action
2026-01-20 Read first time and referred to the Committee on [Energy and Digital Infrastructure]

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates the underlying infrastructure essential for the development, training, and deployment of Artificial Intelligence.

Mechanism of Influence: By classifying cloud computing as a public utility, the state gains oversight over the compute resources required for AI. Requirements for 'Provider Reports' and 'Service Quality' standards could be used to monitor the operational practices of platforms hosting large-scale AI models.

Evidence:

  • Cloud service providers must obtain a certificate from the Commission to operate in Vermont, ensuring their services promote the general good of the State.
  • The Commission can adopt additional rules to implement the chapter's provisions, including service reliability and data security standards.
  • Providers must submit periodic reports to the Commission regarding service quality, consumer complaints, and operational practices.

Ambiguity Notes: The bill does not explicitly mention 'Artificial Intelligence' or 'Machine Learning.' However, the definition of 'cloud computing' is typically broad enough to encompass Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) offerings used by AI developers.

Analysis 2

Why Relevant: The bill addresses data management and portability, which are critical components of AI data governance.

Mechanism of Influence: Provisions requiring providers to offer data in a portable format and prohibiting excessive transfer fees affect how datasets used for AI training are managed and moved between cloud environments.

Evidence:

  • Consumers can request their data in a portable format, and providers cannot charge excessive fees for data transfer or impose termination fees.

Ambiguity Notes: While focused on consumer data, these provisions could impact enterprise-level AI data sets depending on how 'consumer' is defined in the final regulations.

House - 752 - An act relating to the Agency of Digital Services

Legislation ID: 284621

Bill URL: View Bill

Summary

This bill mandates the Agency of Digital Services to conduct an annual inventory of automated decision systems in state government, assessing their cybersecurity vulnerabilities and potential risks to personal data. It empowers the Agency to request the termination of any hazardous systems identified during the review process.

Key Sections

Key Requirements

  • Details must include cybersecurity vulnerabilities and personal data safeguards.
  • Empowers the Agency to request termination of systems with biased results or safety risks.
  • Includes updates on the inventory of automated decision systems.
  • Requires an inventory of automated decision systems including their capabilities, data inputs, and bias testing results.
  • Requires annual reporting on information technology and cybersecurity priorities.
  • Requires safeguards against misuse and proper data protection.

Sponsors

Legislative Actions

Date Action
2026-01-22 Read first time and referred to the Committee on [Energy and Digital Infrastructure]

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the oversight and auditing of automated decision systems, which is a primary category of artificial intelligence regulation.

Mechanism of Influence: By requiring an inventory that includes bias testing and risk assessments, the law creates a mandatory audit trail for AI-driven systems used by the state.

Evidence:

  • The Agency of Digital Services is tasked with creating an inventory of all automated decision systems used by the state, detailing their capabilities, data usage, testing for bias, and associated risks.

Ambiguity Notes: The term 'automated decision system' is often used as a legal catch-all for AI and algorithmic processes, though its specific technical scope depends on the statutory definition provided in the full text.

Analysis 2

Why Relevant: The legislation establishes a regulatory enforcement mechanism to stop the use of harmful AI technologies.

Mechanism of Influence: The Agency is granted the authority to request the termination of systems that produce biased results or pose safety risks, effectively acting as a regulatory gatekeeper for state-deployed AI.

Evidence:

  • The Agency can request the termination of any automated decision system that is found to produce biased results, poses safety risks, is likely to be misused, or does not protect personal data adequately.

Ambiguity Notes: The phrase 'request the termination' may imply an advisory role rather than a unilateral power to shut down systems, which could affect the strength of the regulation.

Senate - 207 - An act relating to prohibiting surveillance pricing

Legislation ID: 252270

Bill URL: View Bill

Summary

The proposed legislation establishes a framework to prohibit surveillance pricing in the State of Vermont. It defines key terms related to consumer information and surveillance technology, outlines the conditions under which surveillance pricing may be used, and establishes penalties for violations. The bill seeks to ensure fair pricing practices for consumers and requires transparency when personal information is collected and used.

Key Sections

Key Requirements

  • Allows surveillance pricing only if based on the cost to the seller or offered as a discount to all consumers equally.
  • Consumers have rights under section 2461 in cases of surveillance pricing.
  • Prohibits surveillance pricing unless certain conditions are met.
  • Violations are treated as violations of existing consumer protection laws.

Sponsors

Legislative Actions

Date Action
2026-01-06 Read 1st time & referred to Committee on [Economic Development, Housing and General Affairs]

Detailed Analysis

Analysis 1

Why Relevant: Surveillance pricing is a primary application of AI and machine learning in retail, where algorithms analyze consumer behavior and personal data to set dynamic, individualized prices.

Mechanism of Influence: The law regulates the output of AI-driven pricing models by prohibiting price discrimination based on individual consumer surveillance data, thereby restricting how these automated systems can be deployed in the marketplace.

Evidence:

  • The bill seeks to ensure fair pricing practices for consumers and requires transparency when personal information is collected and used.
  • This section provides definitions for important terms used in the context of surveillance pricing, including what constitutes aggregate consumer information, consumer products, covered information, and surveillance pricing itself.

Ambiguity Notes: While the abstract uses the term 'surveillance technology' rather than 'artificial intelligence,' the definitions of 'covered information' and 'surveillance pricing' likely encompass the data processing and algorithmic decision-making characteristic of AI.

Senate - 263 - An act relating to the use of automated traffic law enforcement (ATLE) systems by municipalities

Legislation ID: 271146

Bill URL: View Bill

Summary

This bill proposes the use of automated traffic law enforcement (ATLE) systems by municipal law enforcement agencies in work zones, areas with high crash or speeding incidents, traffic signals, and locations with excessive vehicle noise. It outlines definitions, usage guidelines, and requirements for the deployment of ATLE systems, including the need for public notification and engineering analysis. The bill also stipulates the procedures for municipalities to adopt ATLE systems and includes provisions for penalties and defenses related to violations captured by these systems.

Key Sections

Key Requirements

  • Annual calibration checks must be performed by an independent laboratory.
  • Annual reports must be submitted to legislative committees detailing ATLE system operations and violations.
  • Approval requires a majority vote from the municipality's voters.
  • ATLE systems should not address deficiencies in road design or maintenance.
  • A traffic engineering analysis is required for deployment locations.
  • Civil penalties apply to vehicle owners unless specific defenses are proven.
  • Daily logs must be retained for a minimum of three years.
  • Defenses include improper calibration of the ATLE system.
  • Defines "automated traffic law enforcement system" with specific functionalities related to speed, traffic signals, and sound levels.
  • Defines "recorded image" for identification purposes of vehicles violating traffic laws.
  • Deployment of ATLE systems must not replace law enforcement personnel.
  • Municipalities must submit a proposal to their legislative body for ATLE use.
  • Proposal must include types of ATLE, intended traffic offenses, and operational plans.
  • Provisions regarding automated traffic law enforcement are set to be repealed on July 1, 2027, unless specific funding conditions are met.
  • Reports must include data on the number of systems, violations issued, and any recommended changes.

Sponsors

Legislative Actions

Date Action
2026-01-16 Read 1st time & referred to Committee on [Transportation]

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates automated decision-making systems used in law enforcement contexts, specifically those designed to identify and penalize traffic violations without direct human intervention at the moment of the offense.

Mechanism of Influence: It imposes mandatory independent audits (calibration checks) and data logging requirements to ensure the accuracy and integrity of the automated systems' outputs.

Evidence:

  • Defines 'automated traffic law enforcement system' with specific functionalities related to speed, traffic signals, and sound levels.
  • Annual calibration checks must be performed by an independent laboratory.
  • Daily logs must be retained for a minimum of three years.

Ambiguity Notes: While the bill uses the term 'automated' rather than 'artificial intelligence,' the functionalities described—such as identifying vehicles and sound levels from recorded images—typically rely on algorithmic processing and computer vision technologies often categorized under AI.

Analysis 2

Why Relevant: The legislation includes disclosure and transparency requirements for the use of automated technology.

Mechanism of Influence: It requires municipalities to submit detailed annual reports to the legislature, including data on system operations, violations issued, and recommendations for changes to the automated oversight.

Evidence:

  • Reports must include data on the number of systems, violations issued, and any recommended changes.
  • Annual reports must be submitted to legislative committees detailing ATLE system operations and violations.

Ambiguity Notes: The reporting requirements focus on operational outcomes rather than disclosure of the underlying algorithms or models that drive the ATLE systems.

↑ Back to Table of Contents

Virginia

Index of Bills

House - 1170 - A BILL to amend and reenact §§ 9.1-101, as it is currently effective and as it shall become effective, and 9.1-102 of the Code of Virginia and to amend the Code of Virginia by adding in Article 1 of Chapter 17 of Title 15.2 a section numbered 15.2-1723.3 and by adding in Chapter 1 of Title 52 a section numbered 52-11.7, relating to Department of Criminal Justice Services; law-enforcement agencies and sheriffs departments; policy on use of artificial intelligence systems.

Legislation ID: 269307

Bill URL: View Bill

Summary

House Bill No. 1170 introduces amendments to the Code of Virginia, particularly focusing on definitions related to the administration of criminal justice and the use of artificial intelligence (AI) systems. It defines various terms, including 'artificial intelligence system' and 'covered AI system,' and outlines the responsibilities of law enforcement agencies in employing such technologies. The bill seeks to establish guidelines for the deployment of AI in criminal justice to enhance transparency and accountability.

Key Sections

Key Requirements

  • Policy requirements do not extend to AI systems used for administrative tasks that do not materially impact investigations.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HST sub: Communications
2026-01-26 Fiscal Impact Statement from Department of Planning and Budget (HB1170)
2026-01-14 Committee Referral Pending
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26105300D
2026-01-14 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation and oversight of AI systems used by law enforcement and criminal justice agencies.

Mechanism of Influence: By defining 'covered AI systems' and outlining their use in investigations and predictive policing, the law creates a legal framework for what technologies are subject to oversight and transparency requirements within the criminal justice system.

Evidence:

  • Defines artificial intelligence system as a machine learning-based system that generates outputs influencing environments.
  • Specifies what constitutes a covered AI system used in law enforcement, including technologies for investigations, biometric identification, and predictive policing.

Ambiguity Notes: The exclusion of systems that do not 'materially impact' investigations may lead to varying interpretations of what constitutes an administrative task versus a regulated investigative tool.

House - 1186 - A BILL to amend the Code of Virginia by adding a section numbered 22.1-79.3:2, relating to school board policies; prohibition on use of artificial intelligence chatbots for certain student instructional purposes.

Legislation ID: 269328

Bill URL: View Bill

Summary

This bill proposes the addition of a new section to the Code of Virginia that prohibits school boards from requiring or encouraging students to use artificial intelligence chatbots for instructional purposes. The bill recognizes the unreliability of such chatbots as sources of information and their potential negative impact on students' critical thinking skills. Each school board is mandated to develop and implement a policy that enforces this prohibition.

Key Sections

Key Requirements

  • Defines artificial intelligence system and artificial intelligence chatbot for clarity in the bill.
  • Requires each school board to develop a policy prohibiting the use of AI chatbots for instruction.

Sponsors

Legislative Actions

Date Action
2026-01-20 Fiscal Impact Statement from Department of Planning and Budget (HB1186)
2026-01-14 Committee Referral Pending
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26101321D

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly regulates the deployment and usage of artificial intelligence tools within the educational sector.

Mechanism of Influence: It mandates that local school boards create and enforce policies that prevent the use of AI chatbots for instruction, effectively banning their integration into the curriculum.

Evidence:

  • prohibits school boards from requiring or encouraging students to use artificial intelligence chatbots for instructional purposes
  • Each school board is mandated to develop and implement a policy that enforces this prohibition
  • Defines artificial intelligence system and artificial intelligence chatbot for clarity in the bill

Ambiguity Notes: The term 'instructional purposes' may require further clarification to determine if it applies to administrative tasks, extracurricular activities, or strictly classroom learning.

House - 1252 - A BILL to amend and reenact § 55.1-1200 of the Code of Virginia and to amend the Code of Virginia by adding a section numbered 55.1-1204.2, relating to Virginia Residential Landlord and Tenant Act; algorithmic pricing device use by certain landlords; civil penalties.

Legislation ID: 269438

Bill URL: View Bill

Summary

House Bill No. 1252 amends the Virginia Residential Landlord and Tenant Act to address the use of algorithmic pricing devices by landlords. It mandates disclosure of such devices to tenants, outlines requirements for human review of rent determinations, and establishes civil penalties for violations. The bill seeks to protect tenants from deceptive practices related to automated rent pricing.

Key Sections

Key Requirements

  • Allows the Attorney General to seek injunctions for violations.
  • Does not create a private right of action for tenants.
  • Does not impose obligations on software vendors or third-party platforms.
  • Does not impose reporting obligations on the Department of Housing and Community Development.
  • Does not require landlords to keep additional records.
  • Imposes civil penalties of up to $1,000 for each violation.
  • Landlords must disclose the use of algorithmic pricing devices to tenants in writing.
  • Landlords must provide a plain-language summary of factors considered by the algorithmic pricing device upon request.
  • Prohibits landlords from using pricing devices that mislead tenants.
  • Tenants are entitled to a human review of rent determinations made by algorithmic devices.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HGL sub: Housing/Consumer Protection
2026-01-14 Committee Referral Pending
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26103337D
2026-01-14 Referred to Committee on General Laws

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes mandatory disclosure requirements for automated systems used in financial transactions (rent).

Mechanism of Influence: Landlords are legally required to inform tenants in writing if an algorithmic device is used and provide a plain-language explanation of the algorithm's logic.

Evidence:

  • Landlords must disclose the use of algorithmic pricing devices to tenants in writing.
  • Landlords must provide a plain-language summary of factors considered by the algorithmic pricing device upon request.

Ambiguity Notes: The term 'plain-language summary' is not strictly defined, leaving room for interpretation on the level of technical detail required regarding the algorithm's weights or data inputs.

Analysis 2

Why Relevant: It mandates human oversight of AI-driven or algorithmic decisions, a key pillar of AI safety and accountability legislation.

Mechanism of Influence: It creates a legal right for a consumer (tenant) to bypass or verify an automated decision through a human review process.

Evidence:

  • Tenants are entitled to a human review of rent determinations made by algorithmic devices.

Ambiguity Notes: The bill does not specify the standards for the 'human review' or whether the human has the authority to override the algorithm without justification.

Analysis 3

Why Relevant: The bill defines and prohibits the deceptive use of algorithmic tools, establishing a regulatory framework for AI-adjacent software.

Mechanism of Influence: It grants the Attorney General enforcement power to seek injunctions and civil penalties against landlords using these devices in misleading ways.

Evidence:

  • Prohibits landlords from using algorithmic pricing devices in a deceptive or misleading manner.
  • Imposes civil penalties of up to $1,000 for each violation.

Ambiguity Notes: The definition of 'algorithmic pricing device' determines the scope of the law and whether it applies to simple spreadsheets versus complex machine learning models.

House - 1257 - A BILL to amend and reenact §§ 9.1-101, as it is currently effective and as it shall become effective, 9.1-102, and 9.1-1110 of the Code of Virginia and to amend the Code of Virginia by adding in Article 1 of Chapter 17 of Title 15.2 a section numbered 15.2-1723.3 and by adding a section numbered 23.1-815.2, relating to law-enforcement agencies; use of certain technologies and interrogation practices; forensic laboratory accreditation.

Legislation ID: 269444

Bill URL: View Bill

Summary

This bill introduces amendments to various sections of the Code of Virginia, particularly focusing on definitions relevant to criminal justice, law enforcement, and the use of artificial intelligence technologies. It aims to clarify the roles and responsibilities of criminal justice agencies, enhance the standards for forensic laboratories, and incorporate the use of generative AI and machine learning systems in law enforcement practices. The bill also seeks to ensure that private police departments operate under clear regulations and maintain compliance with state laws.

Key Sections

Key Requirements

  • Clarifies the roles of various law enforcement entities and their responsibilities.
  • Defines key terms related to criminal justice and law enforcement.
  • Maintains consistency in definitions with the previous section while updating for future applicability.

Sponsors

Legislative Actions

Date Action
2026-01-14 Committee Referral Pending
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26103277D

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly addresses the integration of generative AI and machine learning systems within the framework of law enforcement and criminal justice.

Mechanism of Influence: By including these technologies in the definitions and operational standards for criminal justice agencies, the law establishes a legal basis for their use and potential oversight in policing and forensic contexts.

Evidence:

  • incorporate the use of generative AI and machine learning systems in law enforcement practices

Ambiguity Notes: While the abstract mentions the incorporation of AI, the specific regulatory requirements such as audits or disclosures are not detailed in the provided summary, leaving the exact nature of the oversight to the full text of the amendments.

House - 1261 - A BILL to amend and reenact §§ 9.1-101, as it is currently effective and as it shall become effective, 9.1-102, and 9.1-1110 of the Code of Virginia and to amend the Code of Virginia by adding in Article 1 of Chapter 17 of Title 15.2 a section numbered 15.2-1723.3 and by adding a section numbered 23.1-815.2, relating to law-enforcement agencies; use of certain technologies and interrogation practices; forensic laboratory accreditation.

Legislation ID: 269448

Bill URL: View Bill

Summary

This bill proposes amendments to various sections of the Code of Virginia, particularly in relation to the definitions and roles of law enforcement agencies. It introduces new definitions for technologies like generative AI and machine learning systems, while also updating the definitions of criminal justice agencies and their functions. The bill seeks to enhance the framework governing law enforcement practices and ensure compliance with modern technological advancements.

Key Sections

Key Requirements

  • Establishes a timeline for training completion.
  • Establishes compulsory minimum training standards for law enforcement officers.
  • Includes protocols for reporting abuse and identifying local resources.
  • Includes training on the impact of restraints and solitary confinement on pregnant inmates.
  • Mandates a study on privacy and confidentiality issues.
  • Requires a comprehensive plan for law enforcement improvement.
  • Requires adoption of guidelines for handling criminal history information.
  • Requires conducting and stimulating research to enhance police administration.
  • Requires conducting audits as per § 9.1-131.
  • Requires consultation with local and state agencies for training development.
  • Requires cooperation with local and state agencies for program development.
  • Requires coordination of state and local government activities.
  • Requires criminal justice agencies to submit information as needed.
  • Requires dispatchers to undergo training on handling calls involving individuals with dementia.
  • Requires establishment of programs to strengthen law enforcement.
  • Requires evaluation of programs for potential improvements.
  • Requires graduated training based on duties for auxiliary police officers.
  • Requires initiation of educational programs regarding privacy and security.
  • Requires institutions to be approved for law enforcement training programs.
  • Requires minimum qualifications for certification and recertification of instructors.
  • Requires minimum training standards for deputy sheriffs serving process.
  • Requires operation of a statewide criminal justice research center.
  • Requires participation in interstate criminal history information systems.
  • Requires the Board to establish and maintain police training programs.
  • Requires the submission of reports and information by law enforcement officers.
  • Requires training on the care of pregnant women in custody.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HST sub: Communications
2026-01-14 Committee Referral Pending
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26103194D
2026-01-14 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly incorporates definitions for generative AI and machine learning into the state's criminal justice code.

Mechanism of Influence: By establishing these definitions, the law creates a regulatory foundation for how law enforcement agencies can legally deploy and categorize AI technologies.

Evidence:

  • It introduces new definitions for technologies like generative AI and machine learning systems
  • This provision introduces definitions for generative AI systems and machine learning systems, explaining their functions and relevance to law enforcement practices.

Ambiguity Notes: The text focuses on definitions and integration; it is not immediately clear if it imposes strict prohibitions or merely provides a framework for adoption.

Analysis 2

Why Relevant: The bill requires audits and privacy studies, which aligns with this report's focus on oversight and auditing of sensitive data systems.

Mechanism of Influence: Mandatory audits of criminal history information systems ensure oversight of the data that AI and machine learning models would likely process or generate.

Evidence:

  • This provision mandates audits and studies on privacy and confidentiality of criminal history information.
  • Mandates a study on privacy and confidentiality issues.

Ambiguity Notes: While the audit provision does not explicitly name AI, the concurrent introduction of AI definitions suggests these audits may encompass AI-driven data processing.

House - 1294 - A BILL to amend and reenact § 19.2-11.14 of the Code of Virginia, relating to use of artificial intelligence-based tools; covered artificial intelligence; disclosure of use.

Legislation ID: 269496

Bill URL: View Bill

Summary

This bill amends the Code of Virginia to define and provide guidelines for the use of artificial intelligence-based tools in law enforcement. It specifies what constitutes covered artificial intelligence, mandates disclosure of AI usage in police reports, and ensures that human decision-makers are involved in critical legal decisions. Additionally, it establishes a framework for civil actions against law enforcement agencies for non-compliance.

Key Sections

Key Requirements

  • Allows challenges to AI-generated recommendations.
  • Attorney General may bring civil actions against non-compliant agencies.
  • Defines AI-based tools and covered AI used in law enforcement.
  • Details on AI's role in generating leads or identifying suspects must be included.
  • Disclosure of AI use in official police reports.
  • Excludes administrative AI tools that do not impact investigations.
  • Include disclaimers in reports about AI-generated content.
  • Individuals may file civil actions 90 days after notifying the agency.
  • Maintain audit trails for AI-generated reports.
  • Notifications to attorneys and individuals under investigation within 30 days.
  • Requires human involvement in all legal decision-making processes.

Sponsors

Legislative Actions

Date Action
2026-01-15 Committee Referral Pending
2026-01-15 Presented and ordered printed 26105298D

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses this report's focus on legislation requiring AI disclosures and regulation.

Mechanism of Influence: It mandates that law enforcement officers include disclaimers in reports and notify attorneys and investigated individuals when covered AI is used in investigations.

Evidence:

  • Reports generated in part by AI must include disclaimers, identify AI-generated content, and have author certifications regarding accuracy.
  • Law enforcement officers must disclose any use of covered AI in police reports, detailing its role in investigations.

Ambiguity Notes: The scope of regulation depends on the definition of 'covered artificial intelligence,' which excludes administrative tools that do not impact investigations.

Analysis 2

Why Relevant: The legislation includes provisions for audits and government oversight of AI systems.

Mechanism of Influence: It requires the maintenance of audit trails for AI-generated reports and grants the Attorney General authority to investigate and sue non-compliant agencies.

Evidence:

  • Maintain audit trails for AI-generated reports.
  • The Attorney General can investigate and take action against agencies that violate this section.

Ambiguity Notes: The text does not specify the technical requirements or duration for which audit trails must be maintained.

Analysis 3

Why Relevant: The bill regulates the decision-making autonomy of AI in high-stakes legal environments.

Mechanism of Influence: It prohibits AI from being the sole factor in decisions like pre-trial detention or sentencing, requiring a human-in-the-loop for all critical legal determinations.

Evidence:

  • This provision stipulates that all decisions regarding pre-trial detention, prosecution, and sentencing must involve a human decision-maker and cannot rely solely on AI recommendations.

Ambiguity Notes: The level of 'involvement' required by a human decision-maker to satisfy the requirement is not explicitly quantified.

House - 1295 - A BILL to amend the Code of Virginia by adding in Article 1 of Chapter 1 of Title 9.1 a section numbered 9.1-116.11, relating to law enforcement; artificial intelligence inventory; civil action.

Legislation ID: 269497

Bill URL: View Bill

Summary

This bill introduces a requirement for state and local law enforcement agencies to conduct an annual inventory of any artificial intelligence systems they utilize. The inventory must be publicly available and include detailed information about each system's capabilities, data inputs, outputs, and authorized uses. Additionally, the bill provides mechanisms for civil action against agencies that fail to comply with these requirements, allowing both the Attorney General and individuals to seek enforcement.

Key Sections

Key Requirements

  • Allows the Attorney General to investigate compliance.
  • Authorizes civil action for non-compliance.
  • Defines artificial intelligence system and covered AI system for law enforcement use.
  • Excludes certain AI systems used for administrative tasks from the definition of covered AI systems.
  • Individuals can bring civil action for enforcement.
  • Inventory must be publicly available by November 1 each year.
  • Must include specific information about each system.
  • Requires 90 days' written notice to the agency before filing suit.
  • Requires annual inventory of covered AI systems.

Sponsors

Legislative Actions

Date Action
2026-01-15 Committee Referral Pending
2026-01-15 Presented and ordered printed 26105299D

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses this report's focus on AI disclosures and government oversight of AI systems.

Mechanism of Influence: It requires law enforcement to create a public record of AI systems, including technical details like data inputs and outputs, which functions as a mandatory disclosure and transparency mechanism.

Evidence:

  • Law enforcement agencies are required to conduct an annual inventory of covered AI systems and make this information publicly available
  • The inventory must detail each system's name, capabilities, data inputs, outputs, and usage guidelines.

Ambiguity Notes: The definition of 'covered AI system' excludes administrative tasks, which may leave certain algorithmic tools used by police outside the scope of public disclosure.
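The inventory fields the bill enumerates map naturally onto a structured public record. Below is a minimal sketch of what one inventory entry might look like; the field names and the example system are illustrative assumptions, not statutory text:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class InventoryEntry:
    """One entry in a hypothetical annual AI inventory (field names assumed, not statutory)."""
    system_name: str
    capabilities: list
    data_inputs: list
    outputs: list
    authorized_uses: list

def publish_inventory(entries):
    """Serialize the inventory as JSON for the public posting due November 1 each year."""
    return json.dumps([asdict(e) for e in entries], indent=2)

entry = InventoryEntry(
    system_name="LicensePlateReader-v2",  # hypothetical system for illustration
    capabilities=["automated license plate recognition"],
    data_inputs=["roadside camera imagery"],
    outputs=["plate numbers with timestamps and locations"],
    authorized_uses=["stolen-vehicle alerts"],
)
print(publish_inventory([entry]))
```

Whether tools like this fall inside or outside the "covered AI system" definition is exactly the scoping question the ambiguity note above raises.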

Analysis 2

Why Relevant: The legislation establishes enforcement and accountability protocols for AI regulation.

Mechanism of Influence: By authorizing the Attorney General to investigate and allowing private citizens to sue for non-compliance, the bill creates a legal framework to ensure the AI inventory requirements are met.

Evidence:

  • The Attorney General has the authority to investigate and bring civil actions against law enforcement agencies that do not comply
  • Individuals residing in the jurisdiction of a law enforcement agency can file civil actions against the agency for non-compliance

Ambiguity Notes: The 90-day written notice requirement for individuals may serve as a procedural hurdle that delays enforcement actions.

House - 1514 - A BILL to amend the Code of Virginia by adding sections numbered 2.2-1202.2 and 15.2-1500.2 and by adding in Article 1 of Chapter 3 of Title 40.1 a section numbered 40.1-28.7:12, relating to employment decisions; automated decision systems; civil penalty.

Legislation ID: 285963

Bill URL: View Bill

Summary

This bill introduces new sections to the Code of Virginia that govern the use of automated decision systems in making employment decisions. It defines key terms related to artificial intelligence and automated systems, establishes requirements for state agencies and employers regarding the use of such systems, and outlines civil penalties for violations. The bill emphasizes the need for human involvement in employment decisions and mandates testing for algorithmic discrimination.

Key Sections

Key Requirements

  • Employers face civil penalties for violations, with fines of up to $500 for first offenses and $1,500 for subsequent offenses.
  • Employers must involve a human decision maker in all employment decisions.
  • Mandates disclosure of the use of automated decision systems and their intended purpose to employees and applicants.
  • Mandates training for agency staff on legal compliance and discrimination prevention.
  • Prohibits the sole use of automated decision systems for making employment decisions.
  • Provides individuals the right to opt out of automated decision systems for employment decisions.
  • Requires annual testing of automated decision systems for algorithmic discrimination.
  • Requires state agencies to ensure compliance with federal and state law regarding automated decision systems.
  • Requires the Commissioner to notify employers of violations and allows for informal conferences regarding such violations.

Sponsors

Legislative Actions

Date Action
2026-01-23 Committee Referral Pending
2026-01-23 Presented and ordered printed 26105693D

Detailed Analysis

Analysis 1

Why Relevant: The bill directly addresses the regulation of automated decision systems, which are a core component of artificial intelligence applications in the workforce.

Mechanism of Influence: It imposes mandatory annual testing for algorithmic discrimination and requires human oversight, effectively creating an auditing and accountability framework for AI-driven employment tools.

Evidence:

  • Requires annual testing of automated decision systems for algorithmic discrimination.
  • Prohibits the sole use of automated decision systems for making employment decisions.
  • Mandates disclosure of the use of automated decision systems and their intended purpose to employees and applicants.

Ambiguity Notes: The definition of 'automated decision system' is broad, potentially covering a wide range of AI technologies from simple rule-based systems to complex machine learning models.

Analysis 2

Why Relevant: The legislation includes specific disclosure requirements and consumer/employee rights regarding AI usage.

Mechanism of Influence: By providing a right to opt out and requiring transparency, the bill forces organizations to be accountable for their use of AI and allows for human intervention.

Evidence:

  • Provides individuals the right to opt out of automated decision systems for employment decisions.
  • Mandates disclosure of the use of automated decision systems.

Ambiguity Notes: The bill does not specify the exact technical standards for 'testing for algorithmic discrimination,' which may lead to varying interpretations of compliance.
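Since the bill leaves the testing standard open, one common baseline employers might reach for is the EEOC four-fifths (80%) rule for adverse impact in selection rates. The sketch below illustrates that convention only; the bill does not prescribe it:

```python
def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total); returns selection rate per group."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes):
    """Pass a group only if its selection rate is at least 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: (rate / top) >= 0.8 for group, rate in rates.items()}

# Illustrative data: group B's rate (0.30) is 60% of group A's (0.50), below the 80% threshold.
result = four_fifths_check({"A": (50, 100), "B": (30, 100)})
```

Because the statute names no standard, an employer could in principle satisfy "annual testing" with a very different methodology; that variability is the compliance risk the ambiguity note identifies.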

House - 1521 - A BILL to amend the Code of Virginia by adding in Title 59.1 a chapter numbered 58, consisting of sections numbered 59.1-607 through 59.1-616, relating to digital innovation and infrastructure; establishing rights in digital property and technology resources; requiring risk management policies for critical infrastructure facilities controlled by critical artificial intelligence systems; providing safe harbors; preempting local regulation; and providing for enforcement and remedies.

Legislation ID: 285970

Bill URL: View Bill

Summary

House Bill No. 1521 proposes the addition of a new chapter to the Code of Virginia concerning digital innovation and infrastructure. It addresses the rights to digital property and technology resources, mandates risk management policies for AI-controlled critical infrastructure, establishes safe harbors for compliance, preempts local regulations that contradict state law, and outlines enforcement mechanisms and remedies for violations.

Key Sections

Key Requirements

  • Attestations must be filed by July 1 each year.
  • Attorney General can enforce against non-compliant government entities.
  • Deployers must create and comply with a risk management policy.
  • Deployers must demonstrate compliance with risk management requirements at the time of failure.
  • Digital assets are classified as personal property.
  • Exclusions apply for personal injury claims and intentional violations.
  • Government actions restricting ownership must be narrowly tailored to serve a compelling interest.
  • Individuals may seek declaratory judgments and injunctive relief.
  • Local governments cannot impose conflicting restrictions on technology resources.
  • Must identify applicable facilities and systems.
  • Owners have rights to control access, transfer ownership, modify assets, and be secure against unreasonable searches.
  • Policies must be reviewed and updated at least annually.

Sponsors

Legislative Actions

Date Action
2026-01-23 Committee Referral Pending
2026-01-23 Presented and ordered printed 26106161D

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the deployment of artificial intelligence systems within critical infrastructure sectors.

Mechanism of Influence: It mandates the creation and maintenance of risk management policies for AI systems, effectively requiring a governance framework for AI operations.

Evidence:

  • This section requires deployers of critical AI systems to develop and maintain risk management policies to ensure safe operations of critical infrastructure facilities.

Ambiguity Notes: The term 'critical artificial intelligence systems' is defined within the chapter, but the specific thresholds for what constitutes 'critical' may be subject to interpretation or further administrative refinement.

Analysis 2

Why Relevant: The legislation includes a mandatory disclosure and reporting mechanism through annual attestations.

Mechanism of Influence: Deployers must file an annual certification by July 1 each year to verify compliance with risk management policies, serving as a form of regulatory oversight and audit.

Evidence:

  • This section mandates annual certification by deployers regarding their compliance with risk management policies.
  • Attestations must be filed by July 1 each year.

Ambiguity Notes: The abstract does not specify if the underlying risk management policies themselves must be submitted to the government or if only the attestation of their existence and compliance is required.

Analysis 3

Why Relevant: The bill provides enforcement mechanisms for non-compliance with AI regulations.

Mechanism of Influence: It empowers the Attorney General to enforce the law against government entities and allows individuals to seek declaratory and injunctive relief.

Evidence:

  • Attorney General can enforce against non-compliant government entities.
  • Individuals may seek declaratory judgments and injunctive relief.

Ambiguity Notes: While the Attorney General can enforce against government entities, the abstract is less explicit about the specific penalties for private deployers beyond the loss of safe harbor protections.

House - 310 - A BILL to amend the Code of Virginia by adding in Chapter 6 of Title 2.2 an article numbered 3, consisting of sections numbered 2.2-622 through 2.2-625, relating to Artificial Intelligence Workforce Impact Act established; report.

Legislation ID: 253727

Bill URL: View Bill

Summary

The Artificial Intelligence Workforce Impact Act establishes reporting requirements for state agencies regarding the impacts of artificial intelligence on workforce positions. Agencies must report workforce changes quarterly, and if significant impacts are reported, they must develop a transition plan to assist affected employees with retraining and job placement. The Department of Human Resource Management will review these reports and plans to identify trends and recommend strategies for workforce adaptation.

Key Sections

Key Requirements

  • Reports must include specific data on position changes and retraining efforts.
  • Requires agencies reporting 10 or more workforce impacts to submit a transition plan within 120 days.
  • Requires agencies to report quarterly on workforce impacts of AI systems, starting January 1, 2027.
  • Requires the Department to submit an annual report to the Governor and legislative committees by November 1 each year.
  • The plan must include identification of at-risk positions, retraining strategies, and timelines.
  • The report must include total reported impacts, trends, projected workforce needs, and recommendations.

Sponsors

Legislative Actions

Date Action
2026-01-27 Fiscal Impact Statement from Department of Planning and Budget (HB310)
2026-01-26 Assigned HST sub: Communications
2026-01-09 Committee Referral Pending
2026-01-09 Prefiled and ordered printed; Offered 01-14-2026 26102961D
2026-01-09 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The act requires disclosures regarding the implementation and impact of AI systems within state government operations.

Mechanism of Influence: Agencies must submit data on AI-related job impacts, which serves as a form of government oversight and transparency regarding AI deployment and its socio-economic consequences.

Evidence:

  • Agencies are required to submit quarterly reports detailing workforce impacts resulting from the use of artificial intelligence systems
  • Requires agencies reporting 10 or more workforce impacts to submit a transition plan

Ambiguity Notes: The specific definition of 'artificial intelligence system' provided in the act will determine the breadth of technologies that trigger these reporting and planning requirements.

House - 32 - Directing the Joint Legislative Audit and Review Commission to study artificial intelligence use policies in place at institutions of higher education in the Commonwealth. Report.

Legislation ID: 258922

Bill URL: View Bill

Summary

This resolution calls for a comprehensive study of the artificial intelligence (AI) policies currently in place or being considered by higher education institutions in Virginia. The study aims to assess how these policies address critical issues such as academic integrity, data privacy, equity and access, transparency, and faculty autonomy. The Joint Legislative Audit and Review Commission (JLARC) will also develop a model policy for AI use and make recommendations for resources to support AI education.

Key Sections

Key Requirements

  • Conduct a survey of AI use policies adopted or considered by higher education institutions.
  • Develop a model policy for AI use in higher education.
  • Evaluate policies in terms of academic integrity, data privacy, equity and access, transparency, and faculty autonomy.
  • Make recommendations for AI tools, curricula, and resources for a statewide clearinghouse.

Sponsors

Legislative Actions

Date Action
2026-01-13 Committee Referral Pending
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26104154D

Detailed Analysis

Analysis 1

Why Relevant: The resolution directly addresses the regulation and oversight of AI within the educational sector by mandating a study and the creation of a model policy.

Mechanism of Influence: By tasking JLARC with evaluating policies and creating a model framework, the resolution sets the stage for standardized AI governance and transparency requirements across state universities.

Evidence:

  • JLARC is tasked with surveying and evaluating AI use policies
  • Develop a model policy for AI use in higher education
  • Evaluate policies in terms of academic integrity, data privacy, equity and access, transparency

Ambiguity Notes: The term 'transparency' is broad and could encompass various disclosure requirements for AI-generated content or algorithmic processes.

House - 635 - A BILL to amend and reenact § 59.1-200 of the Code of Virginia and to amend the Code of Virginia by adding in Title 59.1 a chapter numbered 60, consisting of sections numbered 59.1-614 through 59.1-620, relating to Artificial Intelligence Chatbots Act established; prohibited practices; penalties.

Legislation ID: 258707

Bill URL: View Bill

Summary

This bill amends existing consumer protection laws and introduces a new chapter specifically addressing the use of artificial intelligence chatbots in consumer transactions. It outlines prohibited practices, such as misrepresentation and failure to disclose necessary information, that companies must adhere to when utilizing AI chatbots. Penalties for violations are also specified to ensure compliance and protect consumers.

Key Sections

Key Requirements

  • Suppliers must clearly indicate if goods are used or defective.
  • Suppliers must not advertise goods or services they do not intend to sell.
  • Suppliers must not misrepresent goods or services.
  • Suppliers must provide disclosures regarding returns, layaway agreements, and any existing credit balances.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HST sub: Communications
2026-01-13 Committee Referral Pending
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26105121D
2026-01-13 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically targets the regulation and disclosure requirements for artificial intelligence chatbots used in consumer interactions.

Mechanism of Influence: It mandates that companies using AI chatbots must avoid misrepresentation and provide specific disclosures, backed by legal penalties for non-compliance.

Evidence:

  • This bill amends existing consumer protection laws and introduces a new chapter specifically addressing the use of artificial intelligence chatbots in consumer transactions.
  • It outlines prohibited practices, such as misrepresentation and failure to disclose necessary information, that companies must adhere to when utilizing AI chatbots.

Ambiguity Notes: The text mentions 'failure to disclose necessary information' but does not explicitly define what specific AI-related technical information (like model version or data sources) constitutes 'necessary' beyond standard consumer protection.

House - 654 - A BILL to amend the Code of Virginia by adding a section numbered 59.1-577.2, relating to Consumer Data Protection Act; definition of "biometric data"; consent required for processing biometric data.

Legislation ID: 258727

Bill URL: View Bill

Summary

This bill amends the Code of Virginia by adding a new section that defines biometric data and mandates that consent must be obtained from individuals before their biometric data can be processed. It also specifies that for children, the processing must comply with existing federal laws regarding online privacy.

Key Sections

Key Requirements

  • For processing biometric data concerning children, compliance with federal law is mandatory.
  • Requires consent from individuals before processing their biometric data.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HST sub: Technology and Innovation
2026-01-13 Committee Referral Pending
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26103138D
2026-01-13 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates the collection and processing of biometric data, which is the foundational data source for AI-driven technologies such as facial recognition, voice synthesis, and biometric authentication systems.

Mechanism of Influence: By requiring explicit consent and federal compliance for children, the law imposes regulatory hurdles on AI companies that utilize biological characteristics for identification or automated decision-making.

Evidence:

  • This provision requires that no entity may process an individual's biometric data without obtaining their explicit consent
  • biometric data as data generated from automatic measurements of biological characteristics, including fingerprints, voiceprints, facial features, and other unique biological patterns used for identification

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its definition of biometric data encompasses the specific data types (facial features, voiceprints) that are central to the development and deployment of biometric AI models.

House - 668 - A BILL to amend the Code of Virginia by adding a section numbered 54.1-2400.1:1, relating to use of artificial intelligence system by mental health service providers; civil penalty.

Legislation ID: 258741

Bill URL: View Bill

Summary

This bill introduces guidelines for mental health service providers regarding the use of artificial intelligence systems in their practice. It defines terms related to mental health services and establishes rules for the use of AI, including the requirement for patient consent and the prohibition of AI making therapeutic decisions. Violations of these provisions may result in civil penalties.

Key Sections

Key Requirements

  • Requires mental health providers to disclose AI use to patients and obtain written consent before using AI during recorded sessions.

Sponsors

Legislative Actions

Date Action
2026-01-20 Fiscal Impact Statement from Department of Planning and Budget (HB668)
2026-01-13 Committee Referral Pending
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26104644D
2026-01-13 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates transparency through disclosure and consent requirements for AI usage in a clinical setting.

Mechanism of Influence: Mental health providers must obtain written consent before using AI in recorded sessions and must disclose any AI involvement to patients.

Evidence:

  • Requires mental health providers to disclose AI use to patients and obtain written consent before using AI during recorded sessions.

Ambiguity Notes: The term 'supplementary support' is defined but may be subject to interpretation regarding the extent of AI involvement in clinical workflows.

Analysis 2

Why Relevant: The legislation imposes strict prohibitions on specific AI capabilities and use cases to ensure human oversight.

Mechanism of Influence: It legally restricts AI from making therapeutic decisions, generating treatment plans without review, or detecting emotions, effectively requiring a human-in-the-loop.

Evidence:

  • Restricts AI systems from making therapeutic decisions, directly interacting with clients in therapeutic communication, generating treatment plans without professional review, or detecting emotions.

Ambiguity Notes: 'Detecting emotions' is a broad category that might overlap with basic sentiment analysis tools used in administrative support.

House - 669 - A BILL to amend the Code of Virginia by adding a section numbered 54.1-111.1, relating to professions and occupations; impersonation of certain licensed professionals by chatbot; notice; civil liability.

Legislation ID: 258742

Bill URL: View Bill

Summary

This bill introduces a new section in the Code of Virginia that addresses the impersonation of licensed professionals by chatbots. It defines key terms such as artificial intelligence system and chatbot and outlines proprietors' responsibilities to provide clear user notices. It prohibits chatbots from giving substantive responses that could constitute illegal actions if performed by a human, and it establishes civil liability for proprietors who fail to comply with these regulations.

Key Sections

Key Requirements

  • Chatbots must not provide information or advice that constitutes a crime or violates educational regulations.
  • Chatbots must not provide responses that could violate professional licensing laws.
  • Legal action must be initiated within two years of the violation.
  • Notice must be clear, conspicuous, and in the same language as the chatbot's communication.
  • Notice text must be easily readable and match or exceed the size of other text on the website.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HST sub: Communications
2026-01-13 Committee Referral Pending
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26104752D
2026-01-13 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates specific transparency disclosures for AI-driven interactions.

Mechanism of Influence: Proprietors are legally required to provide clear and conspicuous notice to users that they are interacting with an artificial intelligence system rather than a human.

Evidence:

  • Proprietors must provide clear notice to users that they are interacting with a chatbot.
  • Notice must be clear, conspicuous, and in the same language as the chatbot's communication.

Ambiguity Notes: The term 'conspicuous' is defined by text size and language, but the specific placement on a user interface could still be subject to interpretation.

Analysis 2

Why Relevant: The legislation regulates the output and functional capabilities of AI systems in professional contexts.

Mechanism of Influence: It prohibits AI from generating substantive responses that would violate professional licensing laws or constitute crimes if performed by a human, holding the proprietor liable for such outputs.

Evidence:

  • Proprietors are prohibited from allowing chatbots to provide substantive responses that could constitute illegal actions if performed by a licensed professional.
  • Chatbots must not provide responses that could violate professional licensing laws.

Ambiguity Notes: The phrase 'substantive responses' is not strictly defined and may require judicial interpretation to determine the threshold of advice that triggers a violation.

House - 713 - A BILL to amend the Code of Virginia by adding in Title 59.1 a chapter numbered 60, consisting of sections numbered 59.1-614 through 59.1-617, relating to Fostering Access, Innovation, and Responsibility in Artificial Intelligence Act established.

Legislation ID: 258786

Bill URL: View Bill

Summary

The Fostering Access, Innovation, and Responsibility in Artificial Intelligence Act (FAIR AI Act) proposes the creation of regulations governing artificial intelligence systems in Virginia. It defines key terms related to AI, sets disclosure requirements for developers, establishes the FAIR AI Enforcement Fund for monitoring compliance, and outlines legal defenses in cases of harm caused by AI systems. The act seeks to ensure that AI technologies are deployed responsibly and ethically within the Commonwealth.

Key Sections

Key Requirements

  • Developers must disclose supported languages.
  • Developers must disclose the date of the last training data update.
  • Developers must disclose the developer's name and incorporation location.
  • Developers must disclose the name of the model.
  • Developers must disclose the release date of the most recent version.
  • Developers must provide a link to the model's terms of service.
  • Funds appropriated for the purpose must be credited to the fund.
  • Interest earned on the fund will remain with the fund.
  • The fund is to be established in the state treasury.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HST sub: Communications
2026-01-13 Committee Referral Pending
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26104935D
2026-01-13 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The act directly addresses the user's interest in AI disclosures and transparency requirements.

Mechanism of Influence: It mandates that developers of base AI models provide specific metadata to users, including training data update dates and terms of service, which facilitates oversight and user awareness.

Evidence:

  • Developers of base artificial intelligence models are required to disclose specific information about the model
  • Developers must disclose the date of the last training data update.
  • This disclosure must be clear and accessible to users.

Ambiguity Notes: The requirement that disclosures be 'clear and accessible' is a qualitative standard that may be subject to interpretation by regulators or courts.

Analysis 2

Why Relevant: The legislation establishes a mechanism for government oversight and enforcement of AI regulations.

Mechanism of Influence: By creating the FAIR AI Enforcement Fund, the state provides a dedicated financial structure to support monitoring for AI misuse, bias, and workforce disruption.

Evidence:

  • establishes the FAIR AI Enforcement Fund, a special fund in the state treasury designed to support the enforcement of regulations regarding AI misuse, bias, and workforce disruption.

Ambiguity Notes: While the fund is established, the specific technical methods for 'monitoring compliance' are not detailed in the provided abstract.

Analysis 3

Why Relevant: The act addresses legal accountability and the regulation of harm caused by AI systems.

Mechanism of Influence: It removes the ability for developers to claim 'autonomous harm' as a legal defense, effectively increasing the liability and responsibility of the entities that create and deploy AI.

Evidence:

  • it is not a valid defense to claim that the AI system autonomously caused harm.
  • outlines legal defenses in cases of harm caused by AI systems.

Ambiguity Notes: The phrase 'other common law defenses' allows for a wide range of existing legal strategies that are not AI-specific.

House - 758 - A BILL to amend and reenact § 59.1-200 of the Code of Virginia and to amend the Code of Virginia by adding in Title 59.1 a chapter numbered 60, consisting of sections numbered 59.1-614, 59.1-615, and 59.1-616, relating to Artificial Intelligence Chatbots and Minors Act established; prohibited practices; penalties.

Legislation ID: 258831

Bill URL: View Bill

Summary

House Bill No. 758 seeks to amend the existing consumer protection laws in Virginia by adding provisions specifically addressing the use of artificial intelligence chatbots when interacting with minors. The bill outlines prohibited practices for suppliers of such technology and establishes penalties for violations. It aims to ensure that minors are not subjected to deceptive practices and are provided with appropriate disclosures when engaging with AI chatbots.

Key Sections

Key Requirements

  • AI chatbots must not engage in deceptive practices with minors.
  • Suppliers must adhere to advertising regulations and not mislead consumers.
  • Suppliers must clearly disclose the nature and condition of goods being sold.
  • Suppliers must not misrepresent goods or services.
  • Suppliers must provide clear disclosures regarding the use of AI chatbots.
  • Violators may face fines or other penalties as determined by the court.

Sponsors

Legislative Actions

Date Action
2026-01-13 Committee Referral Pending
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26103964D

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically targets the regulation of artificial intelligence technology, focusing on chatbots and their interaction with a vulnerable demographic.

Mechanism of Influence: It imposes legal requirements on AI suppliers to provide disclosures and avoid fraudulent acts, creating a compliance framework for AI deployment.

Evidence:

  • AI chatbots must not engage in deceptive practices with minors.
  • Suppliers must provide clear disclosures regarding the use of AI chatbots.

Ambiguity Notes: The definition of 'deceptive practices' in the context of AI logic or generative responses may be subject to broad interpretation by courts.

Analysis 2

Why Relevant: The legislation addresses the user's interest in age-related usage and mandatory disclosures for AI systems.

Mechanism of Influence: By establishing penalties for violations, the bill enforces accountability for AI service providers interacting with minors.

Evidence:

  • This provision specifically addresses the misuse of AI chatbots in interactions with minors
  • Violators may face fines or other penalties as determined by the court.

Ambiguity Notes: The bill mentions 'penalties as determined by the court' without specifying a fixed fine schedule, which could lead to varying levels of enforcement.

House - 797 - A BILL to amend the Code of Virginia by adding a section numbered 2.2-2012.01 and by adding in Chapter 20.1 of Title 2.2 an article numbered 9, consisting of sections numbered 2.2-2034.2 through 2.2-2034.7, relating to Virginia Information Technologies Agency; artificial intelligence; independent verification organizations.

Legislation ID: 258870

Bill URL: View Bill

Summary

This bill amends the Code of Virginia to include provisions for the licensing and oversight of independent verification organizations (IVOs) that assess artificial intelligence systems and applications. The Chief Information Officer (CIO) is tasked with overseeing the licensing process and establishing regulations to ensure transparency, independence, and adequate risk mitigation in AI technologies. The bill also establishes an Artificial Intelligence Safety Advisory Council to assist in these efforts.

Key Sections

Key Requirements

  • Establishes requirements for IVO application procedures and necessary materials.
  • Mandates corrective actions or loss of license under certain circumstances.
  • Members are subject to a one-year post-employment restriction from working with artificial intelligence firms or IVOs.
  • Members must not own or acquire equity in companies significantly involved in artificial intelligence.
  • Members must refrain from any employment by developers or deployers of artificial intelligence.
  • Requires conflict of interest and funding transparency for IVOs.
  • The Council must keep detailed records of proceedings related to the issuance, refusal, renewal, or revocation of IVO licenses.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HST sub: Technology and Innovation
2026-01-26 Fiscal Impact Statement from Department of Planning and Budget (HB797)
2026-01-13 Committee Referral Pending
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26102711D
2026-01-13 Referred to Committee on Communications, Technology and Innovation

Detailed Analysis

Analysis 1

Why Relevant: The provision directly addresses the regulation and auditing of artificial intelligence systems through the licensing of third-party verification organizations.

Mechanism of Influence: It empowers the CIO to establish regulations for IVOs, which are responsible for assessing AI systems for risk and transparency, effectively creating a state-sanctioned audit mechanism for AI technologies.

Evidence:

  • This provision outlines the responsibilities of the Chief Information Officer (CIO) in overseeing the licensing of independent verification organizations (IVOs) for artificial intelligence
  • establishing necessary regulations for transparency and risk management.

Ambiguity Notes: The terms 'necessary regulations' and 'risk management' are broad, leaving significant discretion to the CIO to define the specific standards AI systems must meet.

Analysis 2

Why Relevant: It establishes ethical guardrails for the individuals responsible for overseeing AI safety and licensing.

Mechanism of Influence: By prohibiting equity ownership and post-employment work with AI firms, the law attempts to prevent regulatory capture and ensure that AI safety assessments are conducted without industry bias.

Evidence:

  • Members of the Council are prohibited from engaging in actions or occupations that conflict with their duties, specifically in relation to artificial intelligence.
  • Members must not own or acquire equity in companies significantly involved in artificial intelligence.

Ambiguity Notes: The phrase 'significantly involved in artificial intelligence' is not quantitatively defined, which could lead to disputes over what level of AI involvement triggers a conflict.

Analysis 3

Why Relevant: It ensures administrative transparency regarding which organizations are permitted to audit AI systems.

Mechanism of Influence: Mandatory record-keeping of licensing decisions (issuance, refusal, or revocation) allows for public or legislative scrutiny of how AI verification standards are being applied.

Evidence:

  • The Council is required to maintain a record of its proceedings, particularly concerning the licensing of IVOs.
  • The Council must keep detailed records of proceedings related to the issuance, refusal, renewal, or revocation of IVO licenses.

Ambiguity Notes: None

House - 999 - A BILL to amend and reenact §§ 2.2-3906, 6.2-500, 6.2-501, 6.2-506, 6.2-510, 6.2-513, 36-96.1:1, 36-96.3, 36-96.4, 36-96.8, 36-96.10, and 36-96.16 of the Code of Virginia and to amend the Code of Virginia by adding a section numbered 2.2-3905.2, relating to Virginia Human Rights Act; equal credit opportunities; Virginia Fair Housing Law; nondiscrimination by automated decision systems.

Legislation ID: 260108

Bill URL: View Bill

Summary

House Bill No. 999 seeks to enhance protections against discrimination by regulating the use of automated decision systems. It defines key terms, outlines unlawful discriminatory practices, mandates disclosure requirements, and establishes assessment protocols for bias and discriminatory outcomes. The bill emphasizes accountability for entities that rely on such systems in decision-making processes, aiming to prevent discrimination based on protected characteristics.

Key Sections

Key Requirements

  • Allows the Attorney General to provide a notice of violation and opportunity to cure before taking action.
  • Enables individuals to intervene in civil actions related to discriminatory practices.
  • Mandates annual assessments of automated decision systems for bias and discriminatory outcomes.
  • Prohibits discrimination based on race, color, religion, national origin, sex, marital status, sexual orientation, gender identity, age, disability, or veteran status.
  • Prohibits the use of proxies that closely relate to protected characteristics in decision-making.
  • Requires disclosure of the use of automated decision systems to individuals affected by decisions.
  • Requires maintenance of documentation related to the automated decision systems for at least two years.

Sponsors

Legislative Actions

Date Action
2026-01-26 Assigned HGL sub: Housing/Consumer Protection
2026-01-14 Committee Referral Pending
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26105175D
2026-01-14 Referred to Committee on General Laws

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates 'automated decision systems,' which is a core component of AI regulation and oversight.

Mechanism of Influence: It imposes legal prohibitions on discriminatory outcomes and mandates specific compliance actions like annual assessments and documentation maintenance.

Evidence:

  • Nondiscrimination by automated decision systems
  • prohibits the use of automated decision systems in ways that result in discrimination

Ambiguity Notes: The term 'automated decision system' is broad and likely encompasses various AI and machine learning models used for decision-making, though the specific technical threshold for what constitutes such a system is not defined in the abstract.

Analysis 2

Why Relevant: The legislation includes specific requirements for disclosures and audits, which were explicitly mentioned in the user's instructions.

Mechanism of Influence: Entities must perform annual bias assessments (audits) and notify individuals when an automated system is used in a decision affecting them.

Evidence:

  • Mandates annual assessments of automated decision systems for bias and discriminatory outcomes.
  • Requires disclosure of the use of automated decision systems to individuals affected by decisions.

Ambiguity Notes: The abstract does not specify the exact format of the disclosure or the methodology required for the bias assessments.

Senate - 245 - A BILL to amend and reenact §§ 59.1-575 and 59.1-577.1 of the Code of Virginia and to amend the Code of Virginia by adding sections numbered 22.1-79.3:2, 59.1-577.2, and 59.1-577.3, relating to social media platforms; school boards; artificial intelligence systems; civil penalties.

Legislation ID: 269570

Bill URL: View Bill

Summary

Senate Bill No. 245 amends existing laws and introduces new sections to the Code of Virginia, focusing on the prohibition of using social media platforms as the sole means of communication for school-related extracurricular activities. It outlines specific regulations for school boards, employees, and volunteers, and establishes civil penalties for non-compliance. The bill also defines responsibilities for social media platforms regarding minors and addresses issues related to algorithmic discrimination and the use of artificial intelligence.

Key Sections

Key Requirements

  • A registration fee of $100 is required, along with specific documentation about the platform and its data practices.
  • Bans design features known to be harmful to minors.
  • Data collected for age determination must not be used for any other purpose.
  • Exceptions can be made only if a division superintendent provides clear written instructions and can revoke the exception at any time.
  • Imposes a daily penalty of $50 for registration failures, capped at $10,000 per year.
  • Limits minors' use of social media platforms to one hour per day, with parental consent for adjustments.
  • Mandates high default privacy settings for minors.
  • Platforms must register annually with the Secretary of the Commonwealth starting January 1, 2027.
  • Prohibits unknown adults from contacting minors without verification or initiation by the minor.
  • Requires reasonable care to avoid risks in data processing for minors.
  • School boards must ensure that communication regarding extracurricular activities is not limited to social media platforms.

Sponsors

Legislative Actions

Date Action
2026-01-12 Prefiled and ordered printed; Offered 01-14-2026 26100795D
2026-01-12 Referred to Committee on Education and Health

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly mandates the registration of AI systems and addresses algorithmic discrimination.

Mechanism of Influence: AI systems must register annually with the Secretary of the Commonwealth starting in 2027, requiring a fee and documentation of data practices.

Evidence:

  • registration requirements for social media and AI systems
  • algorithmic discrimination
  • Platforms must register annually with the Secretary of the Commonwealth starting January 1, 2027.

Ambiguity Notes: The definition of 'AI systems' is not fully detailed in the abstract, potentially covering a wide range of software.

Analysis 2

Why Relevant: The bill includes specific provisions for age verification and data usage restrictions for minors, which are often implemented via AI.

Mechanism of Influence: It mandates that data collected for age verification cannot be used for any other purpose and requires platforms to configure high privacy settings for minors.

Evidence:

  • Data collected for age determination must not be used for any other purpose.
  • Specifies that any data collected for age verification can only be used for that purpose

Ambiguity Notes: The 'reasonable care' standard for avoiding risks in data processing is subjective and may lead to varying compliance standards.

Senate - 269 - A BILL to amend the Code of Virginia by adding a section numbered 54.1-2400.1:1, relating to use of artificial intelligence system by mental health service providers; civil penalty.

Legislation ID: 269594

Bill URL: View Bill

Summary

This bill introduces regulations for mental health service providers regarding the use of artificial intelligence systems in their practice. It defines terms related to AI use, outlines permissible applications of AI, and establishes requirements for disclosure and consent from patients. The bill also includes provisions for penalties for violations and clarifies that certain types of counseling are exempt from these regulations.

Key Sections

Key Requirements

  • Mental health providers must disclose the use of AI systems to patients.
  • Patients must provide written consent for the use of AI systems.

Sponsors

Legislative Actions

Date Action
2026-01-20 Fiscal Impact Statement from Department of Planning and Budget (SB269)
2026-01-12 Prefiled and ordered printed; Offered 01-14-2026 26104492D
2026-01-12 Referred to Committee on General Laws and Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill directly regulates the application of artificial intelligence within the mental health profession.

Mechanism of Influence: It restricts AI systems from making independent therapeutic decisions or interacting directly with clients, ensuring that AI remains a tool under human supervision rather than an autonomous provider.

Evidence:

  • The bill prohibits the use of AI in therapy or counseling services unless conducted by a licensed mental health service provider.
  • AI systems are restricted from making independent therapeutic decisions, directly interacting with clients, generating treatment plans without review, or detecting emotions.

Ambiguity Notes: The term 'supplementary support' is defined in the bill but its practical boundaries in a clinical setting may require further interpretation by the Department of Health Professions.

Analysis 2

Why Relevant: The bill mandates transparency and informed consent regarding the use of AI technologies.

Mechanism of Influence: Providers are legally required to disclose AI usage to patients and obtain written consent, creating a formal oversight mechanism for patient rights.

Evidence:

  • Mental health providers must disclose the use of AI systems to patients.
  • Patients must provide written consent for the use of AI systems.

Ambiguity Notes: None

Analysis 3

Why Relevant: The legislation includes enforcement mechanisms for AI-related regulatory violations.

Mechanism of Influence: It establishes civil penalties of up to $10,000 for non-compliance with the AI usage and disclosure rules.

Evidence:

  • Violations of the provisions in this section may result in civil penalties of up to $10,000, collected by the Department of Health Professions.

Ambiguity Notes: None

Senate - 365 - A BILL to amend the Code of Virginia by adding in Title 59.1 a chapter numbered 60, consisting of sections numbered 59.1-614 through 59.1-617, relating to Fostering Access, Innovation, and Responsibility in Artificial Intelligence Act established.

Legislation ID: 271376

Bill URL: View Bill

Summary

The proposed legislation, known as the Fostering Access, Innovation, and Responsibility in Artificial Intelligence Act (FAIR AI Act), seeks to create a framework for the ethical development and use of artificial intelligence within the Commonwealth. It includes definitions of key terms related to artificial intelligence, outlines the responsibilities of developers and deployers of AI systems, and establishes an enforcement fund to address misuse and bias in AI applications.

Key Sections

Key Requirements

  • Clarifies that developers and deployers cannot use the autonomy of an AI system as a defense in harm cases.
  • Establishes a special fund to support enforcement against AI misuse and bias.
  • Funds must be used solely for supporting enforcement activities.
  • Requires developers to disclose the name, developer, incorporation location, release date, training data update date, supported languages, and terms of service link for AI models.

Sponsors

Legislative Actions

Date Action
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26104938D
2026-01-13 Referred to Committee on General Laws and Technology

Detailed Analysis

Analysis 1

Why Relevant: The legislation directly mandates transparency through disclosure requirements for AI developers.

Mechanism of Influence: Developers of base AI models must provide accessible information regarding the model's origin, training data updates, and terms of service, which aligns with the user's interest in disclosure regulations.

Evidence:

  • Developers of base artificial intelligence models are required to disclose specific information about the model in a clear and accessible manner

Ambiguity Notes: The term 'clear and accessible manner' is not strictly defined, leaving room for interpretation on where and how these disclosures must be hosted.

Analysis 2

Why Relevant: The act establishes a mechanism for government oversight and enforcement against AI misuse.

Mechanism of Influence: By creating the FAIR AI Enforcement Fund, the bill provides the financial infrastructure for state agencies to actively police AI bias and misuse.

Evidence:

  • A special nonreverting fund, known as the FAIR AI Enforcement Fund, is created to support state agency enforcement against misuse of AI systems, bias, and workforce disruption.

Ambiguity Notes: The scope of 'workforce disruption' as a trigger for enforcement is broad and may require further regulatory clarification.

Analysis 3

Why Relevant: The bill addresses legal accountability and liability for AI-driven harms.

Mechanism of Influence: It prevents developers and deployers from using the autonomous nature of AI as a legal shield, ensuring they remain responsible for the system's outputs.

Evidence:

  • it is not a valid defense that the AI system autonomously caused harm.

Ambiguity Notes: None

Senate - 384 - A BILL to amend the Code of Virginia by adding a section numbered 2.2-2012.01 and by adding in Chapter 20.1 of Title 2.2 an article numbered 9, consisting of sections numbered 2.2-2034.2 through 2.2-2034.7, relating to Virginia Information Technologies Agency; artificial intelligence; independent verification organizations.

Legislation ID: 271395

Bill URL: View Bill

Summary

This bill amends the Code of Virginia to include provisions for the licensing and oversight of independent verification organizations (IVOs) that assess artificial intelligence applications and models. It outlines the responsibilities of the Chief Information Officer (CIO) in regulating IVOs, the requirements for licensing, and the establishment of an Artificial Intelligence Safety Advisory Council to advise on these matters.

Key Sections

Key Requirements

  • Adhere to the approved plan.
  • Appoint qualified members to the Council.
  • Comply with ongoing monitoring and reporting requirements.
  • Define the structure and terms for the Artificial Intelligence Safety Advisory Council.
  • Demonstrate independence from the AI industry.
  • Ensure that verification methods remain effective.
  • Establish conflict of interest and funding transparency requirements for IVOs.
  • Identify additional IVO plan elements to manage risk from AI models.
  • Implement the approved verification plan.
  • Maintain documentation for 10 years.
  • Maintain independence from the AI industry.
  • Members must adhere to conflict of interest guidelines.
  • Outline IVO application procedures and required materials.
  • Provide aggregated information on AI capabilities and risks.
  • Set provisions for corrective action or license revocation.
  • Submit a comprehensive plan detailing risk assessment and mitigation strategies.
  • Update plans as necessary to improve verification efficacy.

Sponsors

Legislative Actions

Date Action
2026-01-26 Fiscal Impact Statement from Department of Planning and Budget (SB384)
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26101618D
2026-01-13 Referred to Committee on General Laws and Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes a formal regulatory structure for the oversight and auditing of AI models through third-party verification organizations.

Mechanism of Influence: It mandates that the CIO oversee the licensing of IVOs, ensuring they remain independent from the AI industry while evaluating AI applications for risks.

Evidence:

  • The CIO is tasked with overseeing the licensing of IVOs for artificial intelligence, including creating regulations that ensure transparency and independence from the AI industry.
  • Establish conflict of interest and funding transparency requirements for IVOs.

Ambiguity Notes: The specific 'risk assessment and mitigation strategies' required in IVO plans are left to be defined by the CIO's regulations.

Analysis 2

Why Relevant: The legislation requires disclosures and reporting regarding AI capabilities and observed risks.

Mechanism of Influence: IVOs must submit annual reports to the Virginia Information Technologies Agency (VITA) providing aggregated information on AI capabilities and risks, effectively creating a government oversight mechanism for AI performance.

Evidence:

  • Mandates that IVOs submit annual reports to VITA detailing their activities, evaluations, and any observed risks.
  • Provide aggregated information on AI capabilities and risks.

Ambiguity Notes: It is unclear if these reports will be made public or remain internal to the government agency.

Analysis 3

Why Relevant: The bill focuses on the auditing and verification of AI models, which aligns with the user's interest in AI audits.

Mechanism of Influence: Licensed IVOs are responsible for implementing verification plans to assess AI models and must update these plans to maintain efficacy in risk detection.

Evidence:

  • Details the responsibilities of licensed IVOs, including verifying AI models and the process for modifying their verification plans.
  • Submit a comprehensive plan detailing risk assessment and mitigation strategies.

Ambiguity Notes: None

Senate - 394 - A BILL to amend the Code of Virginia by adding a section numbered 22.1-20.2:1, relating to Board of Education; artificial intelligence use in instructional settings; development of AI safety guidance required; AI Innovation in Education Pilot Program established; report.

Legislation ID: 271405

Bill URL: View Bill

Summary

This legislation introduces a new section to the Code of Virginia that requires the Board of Education to develop guidelines for the use of AI in instructional settings. It establishes the AI Innovation in Education Pilot Program, which will fund and evaluate innovative AI applications in schools, ensuring that AI is used ethically and effectively while prioritizing student data privacy and accessibility.

Key Sections

Key Requirements

  • Approved AI use must align with educational standards and provide monitoring tools for teachers.
  • Defines AI in instructional settings as the use of AI tools to support educational operations.
  • Defines division-managed AI platform to ensure compliance with educational standards and data privacy.
  • Develop guidelines for the pilot program administration.
  • Establishes a clear definition for Pilot Program related to AI use in education.
  • Functional guardrails must prioritize division-managed AI platforms.
  • Guidance must cover student data privacy, transparency, best practices against bias, clear protocols for AI use, and accessibility recommendations.
  • Prioritize diverse school divisions for participation.
  • Require professional development on AI literacy for educators.
  • School boards are required to create policies consistent with the Board's guidance.
  • Submit annual reports on the pilot program's effectiveness and recommendations.

Sponsors

Legislative Actions

Date Action
2026-01-13 Prefiled and ordered printed; Offered 01-14-2026 26105454D
2026-01-13 Referred to Committee on Education and Health

Detailed Analysis

Analysis 1

Why Relevant: The legislation establishes a regulatory framework for the implementation and use of AI within the state's public education system.

Mechanism of Influence: It mandates the creation of state-level guidance and requires local school boards to enforce policies that align with these safety and ethical standards.

Evidence:

  • The Board of Education is required to create and publicly post guidance for the ethical and safe use of AI in public schools
  • Each school board must establish, implement, and enforce policies that align with the guidance provided by the Board of Education regarding AI use.

Ambiguity Notes: The term 'ethical and safe use' is broad and leaves specific regulatory standards to be defined by the Board of Education's future guidance.

Analysis 2

Why Relevant: The bill addresses transparency and data privacy requirements for AI systems.

Mechanism of Influence: It specifically requires that the state-issued guidance include protocols for student data privacy and transparency in how AI tools operate.

Evidence:

  • Guidance must cover student data privacy, transparency, best practices against bias, clear protocols for AI use, and accessibility recommendations.

Ambiguity Notes: The level of transparency required (e.g., algorithmic transparency vs. usage disclosure) is not fully specified in the abstract.

Analysis 3

Why Relevant: The legislation introduces oversight and evaluation mechanisms for AI applications.

Mechanism of Influence: Through the AI Innovation in Education Pilot Program, the Department of Education is tasked with evaluating AI applications and reporting on their effectiveness and risks.

Evidence:

  • The Department of Education will oversee a pilot program to fund and evaluate innovative uses of AI in schools
  • Submit annual reports on the pilot program's effectiveness and recommendations.

Ambiguity Notes: None

Senate - 585 - A BILL to amend and reenact §§ 36-96.1:1, 36-96.3, 55.1-700, and 55.1-1200 of the Code of Virginia and to amend the Code of Virginia by adding sections numbered 55.1-708.3 and 55.1-1204.2, relating to Virginia Fair Housing Law; Virginia Residential Property Disclosure Act; Virginia Residential Landlord and Tenant Act; personalized algorithmic pricing disclosures; prohibitions; civil penalties; civil actions.

Legislation ID: 273578

Bill URL: View Bill

Summary

This bill introduces amendments to existing laws related to fair housing and landlord-tenant relationships in Virginia. It includes definitions of key terms, outlines unlawful discriminatory housing practices, and specifies requirements for landlords and housing providers to ensure compliance with fair housing standards. The bill also addresses the use of algorithmic pricing in housing transactions, aiming to prevent discrimination based on protected class data.

Key Sections

Key Requirements

  • Allows for civil action by individuals harmed by violations.
  • Allows for land use decisions to limit high concentrations of affordable housing.
  • Compliance with ANSI A117.1 or HUD standards for accessibility is deemed sufficient.
  • Ensures equal pricing for all prospective tenants.
  • Establishes penalties of up to $1,000 for violations.
  • Exemption for small landlords (owning four or fewer units).
  • Mandates that housing providers cannot use protected class data to set discriminatory prices.
  • Owners must disclose that pricing was set by an algorithm using personal data in advertisements or promotions.
  • Penalties for violations include potential civil penalties and the right for individuals to bring civil actions for damages.
  • Prohibits collusion in rental pricing among landlords.
  • Prohibits discrimination against housing developments containing affordable housing for individuals or families with incomes at or below 80% of the median income.
  • Prohibits discrimination based on race, color, religion, national origin, sex, elderliness, familial status, source of funds, sexual orientation, gender identity, military status, or disability.
  • Prohibits refusal to sell or rent housing based on race, color, religion, national origin, sex, elderliness, source of funds, familial status, sexual orientation, gender identity, or military status.
  • Prohibits reliance on algorithms for setting rental terms.
  • Prohibits setting rent based on protected class data unless for internal audits.
  • Prohibits the use of protected class data in setting rent prices.
  • Requires clear disclosure of algorithmic pricing to tenants.
  • Requires equal treatment in terms and conditions of housing sales or rentals.

Sponsors

Legislative Actions

Date Action
2026-01-21 Assigned GL&T sub: Housing
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26104984D
2026-01-14 Referred to Committee on General Laws and Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly regulates the use of software and algorithms to coordinate pricing among landlords.

Mechanism of Influence: It prohibits the use of algorithmic tools for price-fixing or coordinating rental terms, effectively regulating the application of AI in the real estate sector.

Evidence:

  • Landlords are not allowed to coordinate pricing or rental terms among themselves, particularly through software or algorithms
  • Prohibits collusion in rental pricing among landlords.

Ambiguity Notes: The exemption for landlords owning four or fewer units creates a threshold where algorithmic coordination might still occur without oversight.

Analysis 2

Why Relevant: The bill establishes legal definitions for key technological terms related to AI and data processing.

Mechanism of Influence: By defining 'algorithm', 'dynamic pricing', and 'personal data', the bill sets the scope for which automated systems are subject to these new regulations.

Evidence:

  • This provision provides definitions for various terms used throughout the chapter, including algorithm, dynamic pricing, personal data, and protected class data.

Ambiguity Notes: The abstract does not provide the specific technical criteria used in the definitions, which could determine if simple spreadsheets or complex machine learning models are covered.

Analysis 3

Why Relevant: The bill mandates transparency and disclosure regarding the use of algorithms in setting prices.

Mechanism of Influence: It requires housing providers to disclose to consumers when an algorithm is being used to set rental prices based on personal data, a direct disclosure mandate.

Evidence:

  • This provision mandates that landlords or multiple listing services must disclose when rental prices are set by algorithms using personal data, ensuring transparency.
  • Owners must disclose that pricing was set by an algorithm using personal data in advertisements or promotions.

Ambiguity Notes: The method and timing of the disclosure (e.g., 'in advertisements or promotions') may vary in effectiveness depending on the platform used.

Analysis 4

Why Relevant: The bill prohibits landlords from relying on algorithmic recommendations for setting rental terms.

Mechanism of Influence: This acts as a direct restriction on the autonomy of AI/algorithmic systems in the housing market, preventing automated systems from dictating contract terms.

Evidence:

  • Landlords cannot set or adjust rental terms based on recommendations from algorithms or data analytics services.
  • Prohibits reliance on algorithms for setting rental terms.

Ambiguity Notes: It is unclear if this prohibits all algorithmic assistance or only 'recommendations' that lead to specific adjustments.

Analysis 5

Why Relevant: The bill regulates the data inputs used by algorithms to prevent discriminatory outcomes.

Mechanism of Influence: It bans the use of 'protected class data' in algorithmic pricing models, targeting the prevention of algorithmic bias in housing.

Evidence:

  • Landlords are prohibited from using protected class data to set rental prices that discriminate against certain groups or individuals.
  • Prohibits setting rent based on protected class data unless for internal audits.

Ambiguity Notes: The exception for 'internal audits' might allow landlords to process protected class data within their systems, potentially creating a loophole if not strictly monitored.
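The pricing restrictions above can be sketched as a screening check a housing provider might run before publishing a listing. This is a hypothetical sketch: the `check_listing` helper, its field names, and the input structure are illustrative assumptions, not statutory text; only the protected classes, the four-unit exemption, and the disclosure duty come from the bill summary.

```python
# Hypothetical compliance screen for one rental listing under SB 585-style rules.
# Field names are assumptions for illustration; the protected classes and the
# four-or-fewer-units exemption are taken from the bill summary above.

PROTECTED_CLASS_FIELDS = {
    "race", "color", "religion", "national_origin", "sex", "elderliness",
    "familial_status", "source_of_funds", "sexual_orientation",
    "gender_identity", "military_status", "disability",
}

SMALL_LANDLORD_UNIT_THRESHOLD = 4  # exemption for landlords owning four or fewer units

def check_listing(pricing_inputs: set[str], uses_algorithm: bool,
                  disclosure_present: bool, units_owned: int) -> list[str]:
    """Return a list of potential violations for one listing."""
    violations = []
    if units_owned <= SMALL_LANDLORD_UNIT_THRESHOLD:
        return violations  # small-landlord exemption applies
    leaked = pricing_inputs & PROTECTED_CLASS_FIELDS
    if leaked:
        violations.append(f"protected class data used in pricing: {sorted(leaked)}")
    if uses_algorithm and not disclosure_present:
        violations.append("algorithmic pricing not disclosed in advertisement")
    return violations
```

A real compliance review would turn on the bill's actual definitions of "algorithm" and "protected class data", which (per the ambiguity notes) the abstract does not fully specify.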

Senate - 586 - A BILL to amend and reenact §§ 38.2-3407.15 and 38.2-3556 of the Code of Virginia and to amend the Code of Virginia by adding a section numbered 38.2-3570.1, relating to use of artificial intelligence; right to expedited appeal; civil penalties.

Legislation ID: 273579

Bill URL: View Bill

Summary

Senate Bill No. 586 proposes amendments to existing sections of the Code of Virginia regarding health insurance practices. It introduces provisions related to the use of artificial intelligence in managing claims and coverage, mandates transparency in AI processes, and establishes rights for expedited appeals and civil penalties for non-compliance. The bill seeks to protect enrollees and providers by ensuring fair treatment in claims processing and addressing the implications of AI in insurance operations.

Key Sections

Key Requirements

  • Allows for changes in AI usage by health carriers, civil penalties, license revocation, and compensation for affected parties.
  • Carriers must allow providers to confirm medical necessity and coverage in advance.
  • Carriers must disclose bundling and downcoding practices.
  • Carriers must maintain AI decision documentation for at least five years.
  • Carriers must notify providers of any defects in claims within 30 days.
  • Carriers must notify providers of any retroactive denials at least 30 days in advance.
  • Carriers must pay claims for services previously authorized as medically necessary.
  • Carriers must pay claims within 40 days unless certain conditions apply.
  • Carriers must publicly disclose AI usage and provide documentation upon request.
  • Compliance exemptions for external factors beyond the carrier's control.
  • Comply with minimum fair business standards in provider contracts.
  • Consider claims as clean claims if a denial is overturned.
  • Contracts must include fee schedules and reimbursement policies.
  • Covered persons can request expedited external review after an adverse determination involving AI.
  • Deliver provider contracts electronically by July 1, 2025.
  • Ensure compliance by all carriers and their subcontractors.
  • Ensures protection of private health information according to Commonwealth and federal laws.
  • Establish a written claims payment dispute mechanism.
  • Grants a private right of action to covered persons for enforcement.
  • In cases of gross negligence, damages may be tripled.
  • Include non-discrimination provisions in provider contracts.
  • Interest on claims must be paid within 60 days if not paid sooner.
  • Maintain documentation of AI decisions for five years.
  • Make a reasonable effort to confer with the carrier before filing a complaint.
  • Make dispute mechanism information available to providers.
  • Make enrollee coverage verification available electronically by July 1, 2025.
  • Minimizes financial and administrative burdens on covered persons and healthcare providers.
  • Notify enrollees and providers when AI is used for adverse determinations.
  • Prohibit retaliation against providers for invoking their rights.
  • Prohibits AI use that violates discrimination laws based on age, race, sex, sexual orientation, and preexisting conditions.
  • Provide opportunity for providers to address alleged violations before referral.
  • Providers can initiate actions for actual damages.
  • Providers must be notified of contract amendments 60 days in advance.
  • Providers must submit contracts electronically by January 1, 2026.
  • Publicly disclose AI usage in claims management.
  • Requires medically necessary care to be provided without delay, denial, or limitation.
  • Submit AI-related information to the Commission upon request.
  • Wait at least 30 calendar days after the request unless the carrier is unresponsive.

Sponsors

Legislative Actions

Date Action
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26100849D
2026-01-14 Referred to Committee on Commerce and Labor

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes mandatory disclosure and documentation requirements for AI systems used in the insurance sector.

Mechanism of Influence: Carriers are required to notify enrollees and providers when AI is used for adverse determinations and must maintain an audit trail of AI decisions for five years.

Evidence:

  • Carriers must disclose their use of AI in managing claims and provide information for enforcement upon request.
  • Maintain documentation of AI decisions for five years.
  • Notify enrollees and providers when AI is used for adverse determinations.

Ambiguity Notes: While the bill requires documentation of AI decisions, it does not specify the technical granularity required for these records.
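The retention and notice duties described here can be sketched as a minimal record type with a purge guard. This is an illustrative sketch with hypothetical field names (`claim_id`, `notice_sent`); only the five-year retention period and the AI-notification duty come from the bill summary.

```python
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION_YEARS = 5  # AI decision documentation kept for at least five years

@dataclass
class AIDecisionRecord:
    claim_id: str        # hypothetical field names for the sketch
    decision: str        # e.g. "approved" or "adverse determination"
    ai_used: bool
    decided_on: date
    notice_sent: bool    # enrollee/provider notified that AI was used

def may_purge(record: AIDecisionRecord, today: date) -> bool:
    """Documentation may be discarded only after the retention window elapses
    (365 days/year is a simplification for the sketch)."""
    return today >= record.decided_on + timedelta(days=RETENTION_YEARS * 365)

def notice_compliant(record: AIDecisionRecord) -> bool:
    """Adverse determinations involving AI must carry the required notice."""
    if record.ai_used and record.decision == "adverse determination":
        return record.notice_sent
    return True
```

As the ambiguity notes observe, the bill does not specify the technical granularity of these records, so the schema above is only one plausible shape.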

Analysis 2

Why Relevant: The legislation provides for government oversight and submission of AI-related data for regulatory enforcement.

Mechanism of Influence: The Commission is empowered to request AI-related information from carriers and can impose civil penalties or revoke licenses for non-compliance.

Evidence:

  • Submit AI-related information to the Commission upon request.
  • Allows for changes in AI usage by health carriers, civil penalties, license revocation, and compensation for affected parties.

Ambiguity Notes: The requirement to submit 'AI-related information' is broad and could potentially encompass algorithmic logic or training data parameters depending on Commission rules.

Analysis 3

Why Relevant: The bill addresses algorithmic bias and discrimination in AI applications.

Mechanism of Influence: It prohibits the use of AI in ways that violate existing discrimination laws, specifically mentioning protected classes such as age, race, and sex.

Evidence:

  • Prohibits AI use that violates discrimination laws based on age, race, sex, sexual orientation, and preexisting conditions.

Ambiguity Notes: None

Senate - 615 - A BILL to amend and reenact § 59.1-575 of the Code of Virginia and to amend the Code of Virginia by adding a section numbered 59.1-577.2, relating to Consumer Data Protection Act; online device pricing; prohibition.

Legislation ID: 281869

Bill URL: View Bill

Summary

This bill amends the Virginia Consumer Data Protection Act to include new provisions regarding online pricing strategies. It specifically prohibits controllers or processors from generating prices based on the hardware state of a consumer's online device, the presence or absence of software, or precise geolocation data. However, it allows for exceptions in cases of device repairs, trade-in values, and legitimate pricing variations based on location. The bill aims to ensure fair pricing practices in the digital marketplace.

Key Sections

Key Requirements

  • Allows price generation for device repairs or trade-in values based on the device's characteristics.
  • Permits pricing based on real-time demand or legitimate cost differentials in different locations.
  • Prohibits price generation based on the device's hardware state, software presence, or geolocation data.

Sponsors

Legislative Actions

Date Action
2026-01-14 Prefiled and ordered printed; Offered 01-14-2026 26103781D
2026-01-14 Referred to Committee on General Laws and Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates automated price generation, which is a common application of algorithmic and AI-driven systems in the digital marketplace.

Mechanism of Influence: By prohibiting the use of specific data inputs (hardware state, software, geolocation) for price generation, the law effectively restricts the features that can be used in pricing algorithms or AI models, mandating a form of algorithmic constraint.

Evidence:

  • prohibits controllers or processors from generating prices based on the hardware state of a consumer's online device
  • prohibits price generation based on the device's hardware state, software presence, or geolocation data

Ambiguity Notes: The text does not explicitly use the term 'Artificial Intelligence' or 'Machine Learning,' focusing instead on the 'generation' of prices by controllers and processors. However, in modern contexts, such generation is typically handled by automated decision-making systems.

Senate - 819 - A BILL to amend and reenact §§ 58.1-4007 and 58.1-4015 of the Code of Virginia and to amend the Code of Virginia by adding a section numbered 58.1-4007.4, relating to Virginia Lottery; powers of the Virginia Lottery Board; advertising restrictions; age verification.

Legislation ID: 286022

Bill URL: View Bill

Summary

This bill seeks to amend existing lottery regulations in Virginia, enhancing the powers of the Virginia Lottery Board to regulate lottery operations and sports betting. It introduces new advertising restrictions to protect consumers and mandates age verification for lottery sales to prevent underage gambling. The bill also outlines the responsibilities of the Board in maintaining the integrity of lottery operations and ensuring consumer protection.

Key Sections

Key Requirements

  • All lottery terminals must possess age verification software.
  • No lottery ticket or share shall be sold to anyone younger than 18.
  • No person under the age of 18 shall be licensed as an agent to sell lottery tickets.
  • Prohibits advertising through email or SMS text messages.
  • Prohibits targeted digital advertising using personal data.
  • Regulations must be established for lottery and sports betting operations.
  • Requires all advertisements to include a warning about age restrictions and addiction risks.

Sponsors

Legislative Actions

Date Action
2026-01-23 Presented and ordered printed 26105677D
2026-01-23 Referred to Committee on General Laws and Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill restricts targeted digital advertising based on personal data.

Mechanism of Influence: This limits the application of AI-driven profiling and algorithmic targeting used in digital marketing to reach specific demographics.

Evidence:

  • Prohibits targeted digital advertising using personal data.

Ambiguity Notes: The bill does not explicitly name 'Artificial Intelligence' or 'Machine Learning,' but 'targeted digital advertising' typically relies on these technologies to process personal data for ad placement.

Analysis 2

Why Relevant: The bill mandates the use of age verification software for lottery operations.

Mechanism of Influence: Requires the technical implementation of verification systems, which often utilize AI-based biometric or document analysis tools to confirm identity.

Evidence:

  • All lottery terminals must possess age verification software.

Ambiguity Notes: The specific technical requirements for the 'software' are not defined, leaving it open to various implementations, including automated AI systems or simple database lookups.

Senate - 84 - A BILL to amend and reenact §§ 46.2-208, 46.2-882, and 46.2-882.1 of the Code of Virginia, relating to speed safety cameras, pedestrian crossing violation monitoring systems, and stop sign violation monitoring systems; placement and operation; violation enforcement; civil penalties.

Legislation ID: 252363

Bill URL: View Bill

Summary

This bill seeks to update the Code of Virginia by modifying sections related to traffic enforcement technologies. It aims to clarify the use of speed safety cameras and other monitoring systems, establish guidelines for their operation, and outline the civil penalties for violations detected by these systems. The bill also addresses the handling of personal information collected through these systems and ensures compliance with privacy standards.

Key Sections

Key Requirements

  • Agencies must certify the occurrence of a traffic fatality in high-risk corridors.
  • Agencies must establish methods for public inquiries and provide links to vendor sites for further information.
  • All agreements must ensure compliance with calibration and data protection standards.
  • All collected data must be kept confidential and not used for marketing or other purposes.
  • Allows summons by mail for vehicle speed, pedestrian crossing, or stop sign violations.
  • Annual reporting of test results to the Department of State Police by November 15.
  • A sworn certificate from law enforcement serves as prima facie evidence.
  • At least two signs must be posted within 1,000 feet of enforcement areas, including speed display signs.
  • Daily accuracy tests must be conducted for speed safety cameras.
  • Data must be purged within 60 days of the violation if no summons is issued.
  • During the first 30 days, only warnings will be issued instead of summonses for violations.
  • Fees to be specified in § 46.2-214.
  • Funding for local projects initiated before July 1, 2016, is protected.
  • Includes instructions for contesting the violation and provides at least 30 days for inspection of evidence.
  • Information may only be used for the original purposes specified in the agreement.
  • Information must be relevant to the administrative proceeding and limited to matters of fact and law asserted by the Department.
  • Information released is limited to name, address, and vehicle details of owners involved in toll violations or traffic control violations.
  • Information released is limited to owner details of vehicles involved in traffic violations.
  • Localities may use radar and laser devices as well.
  • Local law enforcement agencies must conduct a public awareness program prior to implementing or expanding monitoring systems.
  • Mandates that no contempt proceedings are to be initiated for failure to appear in response to a mailed summons.
  • Mandates that personal information must be provided upon request from the subject or their authorized representatives.
  • Monitoring systems must be placed in school crossing zones and highway work zones.
  • No civil penalty is assessed for violations when a summons is sent by mail.
  • Operators are liable for penalties if found speeding by at least 10 mph over the limit.
  • Plans must address system malfunctions and comply with U.S. Department of Transportation guidance.
  • Private vendors cannot impose additional fees for collecting civil penalties except for a small convenience fee for electronic payments.
  • Private vendors must comply with the provisions of the section.
  • Reports must include the number of violations and prosecutions, operating costs, and the outcomes of enforcement actions.
  • Requires immediate radio communication of the vehicle's speed and identification.
  • Requires officers to be in uniform and display their badge when making an arrest.
  • Requires proper notification to the summoned individual regarding their rights to contest the violation.
  • Requires requests to be made by authorized personnel from government entities.
  • Requires the Commissioner to release medical information only to authorized medical personnel.
  • Requires written agreements for the release of privileged information.
  • Requires written request from the compliance agent of a licensed private security service.
  • Specifies that information cannot be released for civil immigration enforcement without consent or a judicial order.
  • State Police can use laser and radar devices.
  • Summons can be mailed to vehicle owners or lessees.
  • Summons must be mailed to the vehicle owner or lessee's address.
  • Testing results must be submitted annually and comply with calibration standards.
  • The owner can rebut the presumption of liability by providing evidence.
  • Vendors must adhere to all operational and reporting requirements as stipulated in the bill.
  • Violating vendors are subject to a civil penalty of $1,000.

Sponsors

Legislative Actions

Date Action
2026-01-15 Reported from Transportation and rereferred to Finance and Appropriations (11-Y 3-N)
2025-12-30 Prefiled and ordered printed; Offered 01-14-2026 26100916D
2025-12-30 Referred to Committee on Transportation

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates automated enforcement systems which utilize computer vision and automated decision-making processes to identify violations.

Mechanism of Influence: It establishes operational standards and legal frameworks for 'speed safety cameras' and 'monitoring systems,' which are forms of automated technology used for law enforcement oversight.

Evidence:

  • This bill seeks to update the Code of Virginia by modifying sections related to traffic enforcement technologies.
  • It aims to clarify the use of speed safety cameras and other monitoring systems, establish guidelines for their operation, and outline the civil penalties for violations detected by these systems.

Ambiguity Notes: While the bill does not explicitly use the term 'Artificial Intelligence,' the technologies described (automated monitoring and speed detection) often rely on algorithmic processing and computer vision to function without direct human intervention at the moment of detection.

Analysis 2

Why Relevant: The legislation includes requirements for technical audits and performance reporting for automated systems.

Mechanism of Influence: It mandates daily accuracy tests and annual reporting of results to the Department of State Police, creating a mandatory oversight and calibration loop for the automated technology.

Evidence:

  • Mandates daily accuracy tests for speed safety cameras operated by law enforcement agencies and requires annual reporting of test results to the Department of State Police.
  • Requires law enforcement agencies to report annually on the use and effectiveness of speed safety cameras, including financial data and violation statistics.

Ambiguity Notes: The 'audits' are focused on technical accuracy and calibration rather than algorithmic bias or model weights, but they represent a form of mandatory government oversight for automated systems.

Analysis 3

Why Relevant: The bill addresses data privacy and the handling of personal information collected by automated surveillance systems.

Mechanism of Influence: It restricts how data collected by these systems can be used, mandates confidentiality, and requires the purging of data within specific timeframes (60 days) if no summons is issued.

Evidence:

  • Regulates the collection and use of data from speed safety cameras and other monitoring systems, ensuring that such data is protected and used solely for enforcement purposes.
  • All collected data must be kept confidential and not used for marketing or other purposes.
  • Data must be purged within 60 days of the violation if no summons is issued.

Ambiguity Notes: The focus is on the protection of PII (Personally Identifiable Information) generated by the system rather than the disclosure of the system's underlying logic or training data.
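The 60-day purge rule in Analysis 3 reduces to simple date arithmetic. A minimal sketch, assuming (purely for illustration) that captured events are stored as `(violation_date, summons_issued)` pairs:

```python
from datetime import date, timedelta

PURGE_WINDOW_DAYS = 60  # data must be purged within 60 days if no summons is issued

def records_to_purge(records, today):
    """records: iterable of (violation_date, summons_issued) pairs.
    Returns the violation dates whose captured data is now due for purging."""
    cutoff = today - timedelta(days=PURGE_WINDOW_DAYS)
    return [d for d, summons_issued in records if not summons_issued and d <= cutoff]
```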

Senate - 85 - A BILL to amend and reenact §§ 59.1-575 and 59.1-577 of the Code of Virginia and to amend the Code of Virginia by adding a section numbered 59.1-577.2, relating to Consumer Data Protection Act; social media platforms and model operators; interoperability interfaces.

Legislation ID: 252364

Bill URL: View Bill

Summary

This bill amends and reenacts sections of the Code of Virginia concerning consumer data protection, particularly focusing on definitions related to personal data, artificial intelligence, and social media platforms. It establishes rights for consumers regarding their personal data, including the rights to access, correct, delete, and obtain copies of their data. The bill also introduces new definitions and requirements for entities that process personal data, ensuring better protection and transparency for consumers.

Key Sections

Key Requirements

  • Consumers can submit requests to invoke their rights at any time.
  • Consumers must submit an authenticated request to exercise their rights.
  • Controllers must authenticate requests before complying with them.
  • Controllers must comply with requests regarding personal data processing.
  • Controllers must inform consumers of their rights and provide an appeal process if requests are denied.
  • Controllers must respond to requests within 45 days, with a possible extension of 45 days if necessary.
  • Interfaces must use open protocols and maintain real-time data sharing without discrimination.
  • Model operators must implement interfaces for sharing contextual data with other AI models.
  • Requests must be fulfilled free of charge up to twice annually per consumer, unless deemed excessive.
  • Social media platforms must create third-party accessible interfaces for sharing social graph data.
  • Users must have a clear method to consent to data sharing through these interfaces.
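The request-handling timelines in the requirements above amount to deadline arithmetic. A minimal sketch with hypothetical helper names; only the 45-day response window, the optional 45-day extension, and the two free requests per year come from the requirements list:

```python
from datetime import date, timedelta

RESPONSE_DAYS = 45          # controllers must respond within 45 days
EXTENSION_DAYS = 45         # one additional 45-day extension if necessary
FREE_REQUESTS_PER_YEAR = 2  # requests fulfilled free of charge up to twice annually

def response_deadline(received: date, extended: bool = False) -> date:
    """Latest date for a controller to respond to an authenticated request."""
    return received + timedelta(days=RESPONSE_DAYS + (EXTENSION_DAYS if extended else 0))

def fee_may_apply(prior_requests_this_year: int) -> bool:
    """A fee may apply only after the consumer's free annual requests are used,
    and even then only if the request is deemed excessive."""
    return prior_requests_this_year >= FREE_REQUESTS_PER_YEAR
```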

Sponsors

Legislative Actions

Date Action
2025-12-30 Prefiled and ordered printed; Offered 01-14-2026 26100812D
2025-12-30 Referred to Committee on General Laws and Technology

Detailed Analysis

Analysis 1

Why Relevant: The bill introduces specific regulatory requirements for 'model operators' of artificial intelligence, focusing on data interoperability and consumer transparency.

Mechanism of Influence: It mandates that AI model operators implement standardized interfaces to allow for the sharing of contextual data with other AI models, effectively regulating the technical architecture and data-sharing practices of AI developers.

Evidence:

  • Model operators must implement interfaces for sharing contextual data with other AI models.
  • Interfaces must use open protocols and maintain real-time data sharing without discrimination.

Ambiguity Notes: The term 'contextual data' is not explicitly defined in the summary, which could lead to broad interpretations regarding how much internal model state or training data must be made interoperable.

Analysis 2

Why Relevant: The legislation extends consumer data privacy rights—such as access, correction, and deletion—to the data handled by AI models and their operators.

Mechanism of Influence: AI companies (as model operators) are required to authenticate and fulfill consumer requests regarding their data, which impacts how AI systems store, process, and purge user information used for training or inference.

Evidence:

  • Consumers have the right to access, correct, delete, and obtain copies of their personal data from controllers, as well as the right to opt out of certain data processing activities.
  • This provision outlines various definitions relevant to the Consumer Data Protection Act, including terms such as consumer, personal data, model operator

Ambiguity Notes: The effectiveness of these rights depends on the specific definition of 'model operator' and whether it includes both developers of foundational models and third-party deployers.


Washington

Index of Bills

House - 2157 - High-risk AI

Legislation ID: 237668

Bill URL: View Bill

Summary

This bill outlines the obligations of developers and deployers of high-risk artificial intelligence systems, including requirements for impact assessments, consumer disclosures, and measures to mitigate algorithmic discrimination. It also specifies exemptions and establishes civil remedies for violations, aiming to protect consumers from potential harms associated with AI systems.

Key Sections

Key Requirements

  • Allows a single impact assessment for multiple comparable AI systems.
  • Allows civil action for violations of the chapter.
  • Allows compliance with federal, state, or local regulations without restriction.
  • Defines algorithmic discrimination and its exclusions.
  • Defines artificial intelligence system and its exclusions.
  • Defines high-risk artificial intelligence system and its exclusions.
  • Deployers must complete an impact assessment before deploying high-risk AI systems.
  • Deployers must create and maintain a risk management policy for high-risk AI systems.
  • Deployers must use reasonable care to protect consumers from algorithmic discrimination.
  • Developers must disclose intended uses and limitations of high-risk AI systems to deployers.
  • Developers must provide documentation for impact assessments and risk mitigation measures.
  • Developers must use reasonable care to protect consumers from algorithmic discrimination.
  • Establishes the chapter as remedial for consumer protection.
  • Exempts actions taken in the public interest from compliance requirements.
  • Exempts internal research and technical error corrections from obligations.
  • Permits existing assessments to fulfill new requirements if scope is similar.
  • Provides an affirmative defense for developers or deployers if they cure violations within 45 days.
  • Requires a clear statement on managing algorithmic discrimination risks.
  • Requires a summary of metrics used to evaluate the system's performance and limitations.
  • Requires deployers to disclose the purpose and intended use cases of the AI system.
  • Requires details of data categories processed and outputs produced by the AI system.
  • Requires disclosure of known risks of algorithmic discrimination and mitigation steps taken.
  • Requires disclosure to consumers that they are interacting with an AI system.
  • Requires explanation of the AI system's purpose and the nature of consequential decisions.
  • Requires retention of impact assessment records for three years.
  • Requires timely notification of adverse decisions and reasons for them.
  • Requires transparency measures for consumer awareness when interacting with the AI system.
  • Requires updates to disclosures within 30 days of substantial modifications.
  • Specifies effective date for the legislation.

Sponsors

Legislative Actions

Date Action
2025-12-16 Prefiled for introduction.

Detailed Analysis

Analysis 1

Why Relevant: The bill directly imposes consumer disclosure requirements related to AI usage.

Mechanism of Influence: Deployers are legally required to notify consumers when they are interacting with an AI system and must provide explanations for any consequential decisions made by the system.

Evidence:

  • Requires disclosure to consumers that they are interacting with an AI system.
  • Requires explanation of the AI system's purpose and the nature of consequential decisions.
  • Requires timely notification of adverse decisions and reasons for them.

Ambiguity Notes: None

Analysis 2

Why Relevant: The bill mandates impact assessments, which serve as a form of audit and oversight.

Mechanism of Influence: Deployers must conduct and document impact assessments before a high-risk AI system is used, and these records must be retained for three years for potential review.

Evidence:

  • Deployers must complete an impact assessment before deploying high-risk AI systems.
  • Requires retention of impact assessment records for three years.
  • Permits existing assessments to fulfill new requirements if scope is similar.

Ambiguity Notes: The bill allows for existing assessments from other regulations to fulfill these requirements if the scope is similar, which may lead to varying levels of rigor depending on the original regulation used.

Analysis 3

Why Relevant: The legislation focuses on the regulation and risk management of AI systems to prevent harm.

Mechanism of Influence: It establishes a 'reasonable care' standard for both developers and deployers to mitigate the risks of algorithmic discrimination and requires the maintenance of risk management policies.

Evidence:

  • Developers must use reasonable care to protect consumers from algorithmic discrimination.
  • Deployers must create and maintain a risk management policy for high-risk AI systems.
  • Requires a clear statement on managing algorithmic discrimination risks.

Ambiguity Notes: The term 'reasonable care' is a legal standard that may be subject to judicial interpretation and evolving industry best practices.

Analysis 4

Why Relevant: The bill provides the necessary legal definitions to determine the scope of AI regulation.

Mechanism of Influence: By defining 'high-risk artificial intelligence system' and 'algorithmic discrimination,' the bill sets the boundaries for which technologies and behaviors are subject to these new requirements.

Evidence:

  • Defines high-risk artificial intelligence system and its exclusions.
  • Defines algorithmic discrimination and its exclusions.

Ambiguity Notes: The specific exclusions within the definitions of 'high-risk' systems could potentially exempt certain AI applications that users or advocacy groups might otherwise consider dangerous.

Senate - 5870 - AI systems/suicide liability

Legislation ID: 237887

Bill URL: View Bill

Summary

This bill introduces regulations for operators of companion chatbots, requiring them to notify users about the nature of the interaction, implement protocols to prevent suicidal content, and provide referrals to crisis services. It also establishes civil liability for operators if their systems contribute to user harm, particularly in cases of suicide. The bill mandates annual reporting to the Department of Health on related incidents and protocols, and it outlines the responsibilities of operators regarding minors and the content generated by their systems.

Key Sections

Key Requirements

  • Operators can be held liable if their AI systems contribute to a user's suicide.
  • Operators must prevent chatbots from engaging in discussions that may lead to suicidal ideation and provide crisis service referrals.
  • Operators must provide clear notification that the chatbot is not human.
  • Operators must report the number of crisis service notifications issued.
  • Reports must include details on protocols for detecting and responding to suicidal ideation.

Sponsors

Legislative Actions

Date Action
2025-12-11 Prefiled for introduction.

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates specific transparency disclosures for AI systems.

Mechanism of Influence: Operators are legally required to notify users that they are interacting with an AI and that the system may not be appropriate for minors.

Evidence:

  • Operators must provide clear notification that the chatbot is not human.
  • Operators must inform users that companion chatbots may not be suitable for minors.

Ambiguity Notes: The bill does not specify the exact format or prominence required for these notifications.

Analysis 2

Why Relevant: The bill imposes reporting requirements to a government body, serving as a form of regulatory oversight.

Mechanism of Influence: Operators must submit annual reports to the Department of Health detailing their safety protocols and the frequency of crisis referrals triggered by the AI.

Evidence:

  • Operators are required to report annually to the Department of Health on their protocols for handling suicidal ideation and the number of crisis referrals made.

Ambiguity Notes: The criteria for what constitutes an 'adequate' protocol for handling suicidal ideation are not defined in the text.

Analysis 3

Why Relevant: The legislation regulates the content generation and safety guardrails of AI systems.

Mechanism of Influence: It requires operators to implement technical or algorithmic protocols to prevent the AI from engaging in specific types of harmful discussions.

Evidence:

  • Operators must prevent chatbots from engaging in discussions that may lead to suicidal ideation and provide crisis service referrals.
  • Operators must implement protocols to prevent the generation of suicidal content.

Ambiguity Notes: The phrase 'discussions that may lead to suicidal ideation' is broad and could lead to significant filtering of AI responses.

Senate - 5937 - Smart access systems/tenants

Legislation ID: 237981

Bill URL: View Bill

Summary

Senate Bill 5937 addresses the implementation and operation of smart access systems in residential buildings, establishing requirements for landlords regarding tenant access and data privacy. It mandates that landlords provide alternative access methods to tenants who do not wish to use biometric identifiers or mobile applications and requires transparency about data collection and protection related to smart access systems.

Key Sections

Key Requirements

  • Mandates disclosure of the privacy policy upon lease signing or installation of the smart access system.
  • Prohibits the collection of unnecessary data.
  • Requires explanation of data collection, protection measures, and tenant consent processes.
  • Requires landlords to provide alternative keys such as key fobs, key cards, or physical keys.
  • Specifies the types of data that may be collected, such as user names and access details.

Sponsors

Legislative Actions

Date Action
2025-12-23 Prefiled for introduction.

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates biometric identifier information, which is a primary data source for many AI-driven authentication and security systems.

Mechanism of Influence: By requiring disclosures and limiting the collection of biometric data, the law indirectly regulates the deployment and data-gathering capabilities of AI-powered facial recognition or fingerprint scanning technologies used in residential settings.

Evidence:

  • Landlords must offer tenants alternative access keys that do not rely on biometric information
  • Limits the collection of authentication and reference data to what is necessary for the functioning of the smart access system
  • biometric identifier information

Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its focus on biometric identifiers and automated access systems covers technologies that frequently utilize AI for pattern matching and verification.


West Virginia

Index of Bills

House - 4496 - To force any media/internet creator providing artificial intelligence created videos to have an identifying marker that allows viewers to know that the video is not real.

Legislation ID: 272908

Bill URL: View Bill

Summary

This bill amends the Code of West Virginia to introduce regulations concerning the use of artificial intelligence in media production. It establishes definitions for AI and AI-generated media, outlines disclosure requirements for entities producing such media, and sets forth enforcement mechanisms and civil penalties for non-compliance. The bill aims to protect consumers by mandating clear disclosures that inform them when they are engaging with AI-generated content.

Key Sections

Key Requirements

  • Audio media must have a statement at the beginning indicating AI generation.
  • Enforcement authority is given to the attorney general.
  • Individuals violating the provisions may be fined up to $1,000 per day.
  • Organizations violating disclosure can face penalties up to $100,000 per day.
  • Requires clear and conspicuous disclosure for AI-generated media.
  • Video media must have an on-screen disclosure and persistent watermark.
  • Visual media must include a watermark or text label indicating AI involvement.

Sponsors

Legislative Actions

Date Action
2026-01-19 Filed for introduction
2026-01-19 Introduced in House
2026-01-19 To House Judiciary
2026-01-19 To Judiciary

Detailed Analysis

Analysis 1

Why Relevant: The bill directly requires disclosures and regulates the use of artificial intelligence in media.

Mechanism of Influence: It creates a legal mandate for 'Covered Entities' to disclose the use of AI in media, utilizing specific formats like on-screen watermarks for video and introductory statements for audio.

Evidence:

  • This bill amends the Code of West Virginia to introduce regulations concerning the use of artificial intelligence in media production.
  • This section outlines the requirements for disclosing AI-generated media, including the form and visibility of disclosures for different types of media.
  • Organizations violating disclosure can face penalties up to $100,000 per day.

Ambiguity Notes: The effectiveness of the regulation may depend on the specific technical definitions of 'Materially Altered' and how 'Covered Entity' is scoped within the West Virginia Code.

House - 4682 - Fourth Amendment Restoration Act

Legislation ID: 282169

Bill URL: View Bill

Summary

House Bill 4682, known as the "Fourth Amendment Restoration Act," seeks to amend the Code of West Virginia by prohibiting law enforcement officials from utilizing specific surveillance technologies unless authorized by a warrant. The bill outlines the legislative findings regarding constitutional protections, establishes penalties for violations, and allows individuals to seek legal recourse if their rights are infringed upon by the use of prohibited technologies.

Key Sections

Key Requirements

  • Imposes felony charges for unauthorized use of prohibited technologies by law enforcement or political subdivision officials.
  • Requires a warrant for the use of specified surveillance technologies against a specific person based on probable cause.
  • Requires immediate discontinuation of previously implemented prohibited technologies by political subdivisions.

Sponsors

Legislative Actions

Date Action
2026-01-21 Filed for introduction
2026-01-21 Introduced in House
2026-01-21 To House Judiciary
2026-01-21 To Judiciary

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically addresses the use of artificial intelligence in the context of law enforcement surveillance and recognizes the need for regulation to protect constitutional rights.

Mechanism of Influence: It mandates a warrant based on probable cause for the deployment of AI-related surveillance tools like facial recognition, effectively regulating how government agencies can use these technologies.

Evidence:

  • The Legislature recognizes that advancements in surveillance and AI technologies pose threats to Fourth Amendment protections against unreasonable searches and seizures
  • Law enforcement is prohibited from using certain surveillance technologies without a warrant based on probable cause, including real-time security monitoring, facial recognition, and surveillance drones.

Ambiguity Notes: While 'AI technologies' is used broadly in the findings, the specific prohibitions target facial recognition and real-time monitoring, which are common applications of AI.

House - 4770 - Establishing limitations on the use of artificial intelligence and artificial intelligence technology to deliver mental health care, with exceptions for administrative support functions

Legislation ID: 284829

Bill URL: View Bill

Summary

House Bill 4770 introduces new regulations concerning the application of artificial intelligence in mental health services within West Virginia. It establishes limitations on the use of AI technology to ensure that human professionals retain responsibility for patient interactions and decisions. The bill also creates a Task Force on Artificial Intelligence to oversee the implementation of these regulations and to recommend best practices and policies related to AI use in various sectors, particularly in mental health care.

Key Sections

Key Requirements

  • AI cannot directly interact with clients in therapeutic communication.
  • AI cannot generate treatment plans without professional review.
  • AI must not make independent therapeutic decisions.
  • Patients must be informed that they are interacting with AI.
  • Policies issued or renewed after January 1, 2027, must comply with AI regulations.
  • The Task Force is required to submit an annual report on its findings and recommendations.
  • The Task Force must include representatives from various sectors, including health care and technology.
  • Written consent must be obtained for AI use in therapy sessions.

Sponsors

Legislative Actions

Date Action
2026-01-28 Markup Discussion
2026-01-23 Filed for introduction
2026-01-23 Introduced in House
2026-01-23 To Health and Human Resources then Finance
2026-01-23 To House Health and Human Resources

Detailed Analysis

Analysis 1

Why Relevant: The bill mandates transparency and disclosure requirements for AI interactions.

Mechanism of Influence: It requires that patients be explicitly notified when they are interacting with an AI and necessitates written consent before AI can be used in a therapeutic context.

Evidence:

  • Patients must be informed that they are interacting with AI.
  • Written consent must be obtained for AI use in therapy sessions.

Ambiguity Notes: The bill does not specify the exact format or language required for the notification, which may lead to varying standards of disclosure.

Analysis 2

Why Relevant: The legislation imposes strict prohibitions and operational limits on AI functionality within the healthcare sector.

Mechanism of Influence: It legally restricts AI from performing core professional tasks such as conducting psychotherapy or generating treatment plans without human review, ensuring AI remains an administrative tool rather than a decision-maker.

Evidence:

  • AI cannot directly interact with clients in therapeutic communication.
  • AI cannot generate treatment plans without professional review.
  • AI must not make independent therapeutic decisions.

Ambiguity Notes: The distinction between 'administrative support' and 'therapeutic communication' may become blurred as AI tools become more integrated into clinical workflows.

Analysis 3

Why Relevant: The bill creates a formal oversight body to manage AI policy and definitions.

Mechanism of Influence: The West Virginia Task Force on Artificial Intelligence is tasked with recommending definitions and best practices, which will shape future regulatory requirements and reporting standards.

Evidence:

  • This section establishes the West Virginia Task Force on Artificial Intelligence to recommend definitions, oversee AI policy, and develop best practices for AI use in public sectors, including mental health.

Ambiguity Notes: While the task force focuses on public sectors and mental health, its influence on private sector AI developers in West Virginia is not fully defined.

Senate - 498 - Relating to pornography access for minors

Legislation ID: 272840

Bill URL: View Bill

Summary

Senate Bill 498 aims to amend the Code of West Virginia by introducing a new section that mandates age verification for access to online pornography. The bill outlines definitions, requirements for age verification systems, and penalties for non-compliance, including fines and potential legal action against websites that allow minors access to explicit content. It also provides a legal defense for compliant entities and addresses the issue of circumvention of age verification measures.

Key Sections

Key Requirements

  • Burden of proof for compliance lies with the commercial entity.
  • Entities can claim immunity from liability if they demonstrate compliance with the statute.
  • Establishes a special enforcement unit within the Attorney General’s Office.
  • Imposes a fine of $1 million for each violation of age verification requirements.
  • Parents can file civil suits for damages if minors access prohibited material.
  • Prohibits storage or sharing of personal information collected during verification.
  • Prohibits the use of VPNs or anonymizing technologies to bypass age verification.
  • Requires commercial adult websites to implement an age verification system using biometric and ID verification methods.
  • Requires re-authentication every 24 hours for continued access.
  • Websites allowing minors access face civil liability and permanent blocking from operating in the state.
  • Websites failing to comply for more than 30 days face a ban and daily civil penalties.

Sponsors

Legislative Actions

Date Action
2026-01-19 Filed for introduction
2026-01-19 Introduced in Senate
2026-01-19 To Judiciary

Detailed Analysis

Analysis 1

Why Relevant: The bill specifically mandates the implementation of biometric facial recognition technology as a requirement for commercial entities.

Mechanism of Influence: By requiring biometric facial recognition, the law forces commercial adult websites to deploy AI-driven identification tools, thereby regulating the specific use case and operational requirements of such AI systems within the state.

Evidence:

  • Commercial adult websites must implement an age verification system that includes biometric facial recognition
  • Requires commercial adult websites to implement an age verification system using biometric and ID verification methods.

Ambiguity Notes: The bill does not define the technical accuracy or the specific algorithmic standards required for the 'biometric facial recognition' systems, which could lead to varying interpretations of what constitutes a compliant AI verification tool.

Senate - 70 - Protecting state and local government systems and data from foreign entities

Legislation ID: 262660

Bill URL: View Bill

Summary

This bill amends the Code of West Virginia by adding a new article that establishes a framework for banning the use of software, applications, and artificial intelligence tools owned by foreign adversaries within state agencies. It aims to enhance cybersecurity measures and protect citizens' data from potential threats posed by foreign entities.

Key Sections

Key Requirements

  • Prohibits accessing websites of social media applications owned by foreign adversaries.
  • Prohibits downloading or using social media applications from foreign adversaries on state devices.
  • Prohibits using applications, software, or AI tools owned by foreign adversaries, unless a waiver is obtained.

Sponsors

Legislative Actions

Date Action
2026-01-14 Filed for introduction
2026-01-14 Introduced in Senate
2026-01-14 To Government Organization
2026-01-14 To Government Organization then Judiciary

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly includes artificial intelligence tools within its scope of prohibited technologies for state agencies.

Mechanism of Influence: It creates a legal prohibition against the use of AI tools owned by foreign adversaries, effectively regulating the procurement and operational use of AI within the state government.

Evidence:

  • Prohibits using applications, software, or AI tools owned by foreign adversaries, unless a waiver is obtained.

Ambiguity Notes: The term 'AI tools' is used broadly without a specific technical definition in the abstract, which could encompass a wide range of machine learning and automated systems.


Wisconsin

Index of Bills

Assembly - 172 - Relating to: consumer data protection and providing a penalty. (FE)

Legislation ID: 129210

Bill URL: View Bill

Summary

Assembly Bill 172 provides a framework for the protection of consumer data by defining the roles of data controllers and processors, outlining consumer rights regarding their personal data, and setting penalties for violations. It aims to ensure that consumers can access, correct, delete, and control their personal data, while also imposing strict requirements on how businesses handle such data.

Key Sections

Key Requirements

  • Defines controller as entities determining the purpose and means of processing personal data.
  • Defines processor as entities processing personal data on behalf of a controller.
  • Specifies definitions for personal data, sensitive data, biometric data, and other relevant terms.

Sponsors

Legislative Actions

Date Action
2026-01-28 Executive action taken
2026-01-21 Public hearing held
2026-01-16 Withdrawn from committee on Consumer Protection and referred to committee on State Affairs pursuant to Assembly Rule 42 (3)(c)
2025-06-24 Fiscal estimate received
2025-04-09 Introduced by Representatives Zimmerman, Sortwell, Allen, Armstrong, Behnke, Dittrich, Duchow, Goeben, Gustafson, Knodl, Kreibich, Krug, Kurtz, Maxey, Melotik, Murphy, Mursau, Nedweski, O'Connor, Penterman, Piwowarczyk, Pronschinske, Snyder, Steffen, Tittl, Tusler, Wittke and Moses; cosponsored by Senators Quinn, Nass, Roys and Marklein
2025-04-09 Read first time and referred to Committee on Consumer Protection

Detailed Analysis

Analysis 1

Why Relevant: The bill establishes the foundational data governance rules that apply to the datasets used to train and operate artificial intelligence systems.

Mechanism of Influence: AI developers and deployers would likely fall under the definitions of 'controllers' or 'processors,' requiring them to comply with data access, correction, and deletion requests for any personal data used within their AI models.

Evidence:

  • Defines controller as entities determining the purpose and means of processing personal data.
  • Defines processor as entities processing personal data on behalf of a controller.
  • Specifies definitions for personal data, sensitive data, biometric data, and other relevant terms.

Ambiguity Notes: While the text does not explicitly use the term 'Artificial Intelligence,' the inclusion of 'biometric data' and 'sensitive data' directly impacts AI applications such as facial recognition and predictive analytics.

Assembly - 377 - Relating to: establishing English as the official state language, use of artificial intelligence or other machine-assisted translation tools in lieu of appointing English language interpreters, and use of English for governmental oral and written communication and for nongovernmental purposes. (FE)

Legislation ID: 215723

Bill URL: View Bill

Summary

Wisconsin currently has no official language. This bill designates English as the official state language and allows state and local governments to utilize artificial intelligence or machine-assisted translation tools instead of appointing English language interpreters. It also requires that all governmental communications be conducted in English, with exceptions for individual cases or specific programs. Moreover, it protects individuals' rights to learn and use other languages for non-governmental purposes.

Key Sections

Key Requirements

  • All government communications must be in English unless specified otherwise.
  • No restrictions on language proficiency or use for non-governmental purposes.
  • State agencies may use AI tools instead of human interpreters when legally required.
  • Use of other languages is allowed for specific purposes.

Sponsors

Legislative Actions

Date Action
2026-01-16 Read first time and referred to committee on Government Operations, Labor and Economic Development
2026-01-16 Received from Assembly
2026-01-15 Assembly Amendment 1 adopted
2026-01-15 Assembly Amendment 2 adopted
2026-01-15 Ordered immediately messaged
2026-01-15 Ordered to a third reading
2026-01-15 Read a second time
2026-01-15 Read a third time and passed, Ayes 51, Noes 45, Paired 2

Detailed Analysis

Analysis 1

Why Relevant: The bill explicitly addresses the deployment of artificial intelligence within government operations, specifically as a replacement for human personnel in translation and interpretation services.

Mechanism of Influence: It creates a legal framework allowing state agencies and local governments to bypass the appointment of human interpreters by providing access to AI-driven translation tools, thereby integrating AI into the state's legal and administrative infrastructure.

Evidence:

  • Allows state and local governments to utilize artificial intelligence or machine-assisted translation tools instead of appointing English language interpreters.
  • State agencies may use AI tools instead of human interpreters when legally required.

Ambiguity Notes: The bill does not define the specific technical standards, accuracy thresholds, or security requirements for the 'AI or machine-assisted translation tools' it authorizes, potentially allowing for a wide range of software applications without specific oversight.

Assembly - 575 - Relating to: prohibiting state agencies and local governmental units from using facial recognition technology or data generated from it.

Legislation ID: 229794

Bill URL: View Bill

Summary

This bill creates a new statute that defines facial recognition technology and establishes a prohibition against its use by state and local governmental entities. The bill aims to safeguard individual privacy by restricting the application of this controversial technology, which automatically identifies individuals by comparing facial images against a database. The only exception to this prohibition is for the identification of employees within the respective agencies for employment-related matters.

Key Sections

Key Requirements

  • Prohibits the use of facial recognition technology by state and local governmental units for all purposes except employee identification.

Sponsors

Legislative Actions

Date Action
2026-01-20 Representative Vining added as a coauthor
2025-10-24 Introduced by Representatives Clancy, Tenorio, Moore Omokunde, Cruz, Hong, Madison, Phelps and Subeck; cosponsored by Senator Larson
2025-10-24 Read first time and referred to Committee on Criminal Justice and Public Safety

Detailed Analysis

Analysis 1

Why Relevant: Facial recognition technology is a specific application of artificial intelligence, particularly in the domains of computer vision and biometric data processing.

Mechanism of Influence: The bill creates a legal prohibition on the deployment and use of AI-driven facial recognition systems by public agencies, representing a regulatory restriction on AI usage.

Evidence:

  • This bill creates a new statute that defines facial recognition technology and establishes a prohibition against its use by state and local governmental entities.
  • This provision explicitly prohibits state agencies and local governmental units from using facial recognition technology or the data it generates for any purpose, with the exception of identifying employees for employment-related purposes.

Ambiguity Notes: The impact of the law depends on the technical breadth of the definition of 'facial recognition technology' and whether it encompasses all algorithmic matching or specific automated systems.

Assembly - 673 - Relating to: banning the use of genetic software from foreign adversaries in medical and research facilities, the storage of any human genome sequencing data within the borders of a foreign adversary, and providing a penalty. (FE)

Legislation ID: 230695

Bill URL: View Bill

Summary

Assembly Bill 673 seeks to enhance the security and privacy of genetic information by banning the use of genetic sequencers and software developed by foreign adversaries in medical and research facilities. It also mandates that human genome sequencing data of Wisconsin residents cannot be stored in countries identified as foreign adversaries. The bill includes penalties for violations and establishes enforcement mechanisms through the attorney general.

Key Sections

Key Requirements

  • Attorney general may investigate violations.
  • Ensures data is inaccessible to persons located in foreign adversary countries.
  • Imposes a $10,000 forfeiture for each violation.
  • Prohibits medical and research facilities from using foreign adversary genetic software or sequencers.
  • Requires that human genome sequencing data cannot be stored in foreign adversary countries.

Sponsors

Legislative Actions

Date Action
2026-01-26 Read first time and referred to committee on Licensing, Regulatory Reform, State and Federal Affairs
2026-01-22 Assembly Amendment 1 adopted
2026-01-22 Assembly Substitute Amendment 1 offered by Representative McGuire
2026-01-22 Decision of the Chair appealed
2026-01-22 Decision of the Chair upheld, Ayes 53, Noes 44
2026-01-22 Ordered immediately messaged
2026-01-22 Ordered to a third reading
2026-01-22 Point of order that Assembly Substitute Amendment 1 not germane under Assembly Rule 54 (3)(f) well taken

Detailed Analysis

Analysis 1

Why Relevant: The bill regulates 'operational or research software' used for genetic analysis, which is a field that increasingly relies on artificial intelligence and machine learning for data processing and pattern recognition.

Mechanism of Influence: By prohibiting the use of specific software from foreign adversaries, the bill restricts the deployment of AI-driven bioinformatics tools and sequencing algorithms developed by those entities within Wisconsin's medical and research infrastructure.

Evidence:

  • Prohibits medical and research facilities from using foreign adversary genetic software or sequencers.
  • operational or research software produced by foreign adversaries
  • genetic analysis

Ambiguity Notes: The legislation does not explicitly use the term 'Artificial Intelligence'; however, the broad category of 'operational or research software' used for genetic analysis typically encompasses the AI models used in modern genomics.

Assembly - 883 - Relating to: limiting the use of automatic registration plate readers.

Legislation ID: 269954

Bill URL: View Bill

Summary

Assembly Bill 883 creates a statutory ban on the use of automatic registration plate readers, which are devices that capture and convert vehicle registration plate images into data. The bill outlines exceptions for specific uses, including parking enforcement, access control to nonpublic areas, and compliance checks for commercial vehicles at weigh stations. Data captured by these devices is restricted in terms of sharing and retention.

Key Sections

Key Requirements

  • Automatic registration plate readers may be used only for parking enforcement, for controlling access to enclosed nonpublic areas, and at weigh stations for commercial vehicle compliance checks.
  • Data captured cannot be shared except for the defined purposes.
  • Data must be deleted after 90 days.

Sponsors

Legislative Actions

Date Action
2026-01-16 Introduced by Representative Gustafson
2026-01-16 Read first time and referred to Committee on Criminal Justice and Public Safety

Detailed Analysis

Analysis 1

Why Relevant: The legislation regulates automatic registration plate readers, which are a specific application of computer vision and optical character recognition (OCR), both of which are core technologies within the field of artificial intelligence.

Mechanism of Influence: By restricting the use of these devices and the data they generate, the law effectively regulates the deployment and data lifecycle of AI-powered surveillance and automated data extraction systems.

Evidence:

  • automatic registration plate readers, which are devices that capture and convert vehicle registration plate images into data.

Ambiguity Notes: The bill focuses on the hardware and the resulting data rather than the underlying algorithms or 'weights' of the AI models used, but the functional definition of 'converting images into data' describes an automated AI process.


4. Conclusion