This report tracks AI-related legislation emerging in the first month of the 2026 U.S. state legislative sessions. As of January 29, 2026, early filings already reveal a clear pattern: states are moving aggressively to regulate artificial intelligence, with a particular focus on consumer protection, transparency, and preventing the most harmful applications of the technology.
Across 35 states with active AI legislation in this first month, we've identified approximately 300 bills directly addressing artificial intelligence. The volume and scope signal that 2026 will be a watershed year for AI regulation at the state level. New York alone has introduced 89 bills, followed by Virginia with 30 and New Jersey with 17.
Analyzing the legislation reveals several dominant themes:
The largest category of bills focuses on preventing AI's worst applications. States are targeting:
A strong thread across states mandates that humans remain in the loop for consequential decisions:
States are increasingly concerned about AI-driven price manipulation:
Disclosure requirements are becoming standard:
States are building internal AI oversight infrastructure:
Education is a contested battleground:
Special concern exists around AI in mental health contexts:
Overall, the legislative posture toward AI is defensive. States are primarily focused on:
A smaller set of bills takes a facilitative approach, focusing on AI literacy education, establishing innovation sandboxes, and creating frameworks for responsible AI adoption in government.
The first month of the 2026 session establishes clear legislative priorities: protect consumers, require transparency, maintain human oversight in high-stakes decisions, and prevent the most harmful applications of AI technology. As sessions progress, we expect to see many of these bills evolve through committee hearings and floor debates, with the most pressing concerns — particularly around deepfakes, insurance claim denials, and housing algorithms — likely to see the most legislative action.
This report will be updated as the legislative session progresses and more bills are introduced, amended, and voted upon.
Scholars Edge functions like a series of sieves, each designed to filter search results in a unique way. Artificial intelligence—particularly transformer models—is a cross-cutting technology used across a wide range of industries, applications, and use cases. For that reason, a concept search is the ideal tool for identifying related legislation. However, building an effective concept search is a process in itself.
To start, I copied the text of the Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure into a similarity search. This first search was not intended to be perfect; its purpose was simply to surface bills that could serve as the foundation for a concept search. Similarity search uses cosine similarity to identify documents that closely resemble the example text. The next filter applied an LLM to label whether each bill was truly focused on AI. Using this basic approach, I collected 10 bills from several states—broad enough to be representative—and organized them into two zip files, each containing five text files.
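The first two filters hinge on cosine similarity between text embeddings: bills whose embeddings sit close to the example text are surfaced, then an LLM labels each candidate. A minimal sketch of the similarity-ranking step follows, using toy hand-written vectors in place of a real embedding model; the function names and the 0.75 threshold are illustrative assumptions, not Scholars Edge's actual internals:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, bill_vecs, threshold=0.75):
    """Return (bill_id, score) pairs above the threshold, best first.

    `bill_vecs` maps a bill ID to its embedding; in practice the
    embeddings would come from a text-embedding model, not be
    written by hand.
    """
    scored = [
        (bill_id, cosine_similarity(query_vec, vec))
        for bill_id, vec in bill_vecs.items()
    ]
    return sorted(
        [(b, s) for b, s in scored if s >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )

# Toy 3-dimensional "embeddings" for demonstration only.
query = [0.9, 0.1, 0.2]          # the executive order text
bills = {
    "HB2311": [0.8, 0.2, 0.1],   # close to the query
    "SB1088": [0.1, 0.9, 0.3],   # unrelated
}
print(rank_by_similarity(query, bills))
```

Only the bills that clear the threshold move on to the LLM-labeling stage, which decides whether each candidate is truly focused on AI.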
With those zip files in hand, I began building a new concept search. I uploaded the first zip file and saved it. Then, I ran the second zip file through the tool and copied the relevant concepts into the first search via the search editor. I carefully refined the combined concept list—removing overly broad or overly specific concepts, as well as those likely to bleed into unrelated topics. Where gaps existed, I added missing concepts manually. The resulting concept list was:
Unlike the 2025 post-session report, I did not manually curate the results. Instead, this is the raw output of the search engine.
This resolution calls on the U.S. Congress to adopt the Kids Online Safety Act, which seeks to implement safety measures for minors using online platforms. It highlights the risks associated with online use, including exposure to harmful content and mental health issues, and advocates for protections such as privacy safeguards and parental controls.
| Date | Action |
|---|---|
| 2026-01-23 | (H) EDC, JUD |
| 2026-01-23 | (H) READ THE FIRST TIME - REFERRALS |
| 2026-01-23 | (H) REFERRED TO EDUCATION |
Why Relevant: The legislation directly addresses algorithmic recommendations, which are a core component of artificial intelligence systems used by online platforms.
Mechanism of Influence: By requiring platforms to offer an opt-out for algorithmic recommendations, the law mandates a change in how AI-driven content delivery systems operate for users under 17.
Evidence:
Ambiguity Notes: The term 'algorithmic recommendations' is a common proxy for AI-driven curation, though the specific technical definitions of the algorithms are often left to regulatory interpretation.
Why Relevant: The user specifically requested legislation requiring audits and transparency for automated systems.
Mechanism of Influence: The mandate for independent audits and public reporting forces platforms to subject their internal processes, including AI-driven safety and moderation tools, to external scrutiny.
Evidence:
Ambiguity Notes: The scope of the 'independent audits' would likely be defined by the proposed Kids Online Safety Council or federal enforcement agencies.
Why Relevant: The legislation focuses on age-based usage restrictions and design safeguards, which aligns with the user's interest in age verification and usage regulation.
Mechanism of Influence: Platforms must implement specific design safeguards for users under 17, which often involves AI-based age estimation or verification technologies to ensure compliance.
Evidence:
Ambiguity Notes: While the summary mentions 'design safeguards,' the implementation often relies on automated systems to identify and protect minor users.
The bill addresses various aspects of political communication, particularly focusing on synthetic media and its implications for voter perception. It also outlines regulations for outdoor advertising near highways, procedures for voter registration, and the dissemination of election-related information. Additionally, it mandates a report on expanding early voting options in rural and low-income areas, highlighting the need for accessible voting practices.
| Date | Action |
|---|---|
| 2026-01-29 | (H) FINANCE at 01:30 PM ADAMS 519 |
| 2025-05-20 | (H) IN FINANCE |
| 2025-05-20 | (H) RULES TO CALENDAR PENDING FIN RPT/REF |
| 2025-05-19 | (H) IN FINANCE |
| 2025-05-19 | (H) RULES TO CALENDAR PENDING FIN RPT/REF |
| 2025-05-16 | (H) -- Delayed to a Call of the Chair -- |
| 2025-05-16 | (H) FINANCE at 01:30 PM ADAMS 519 |
| 2025-05-16 | (H) FINANCE at 09:00 AM ADAMS 519 |
Why Relevant: The bill provides a specific legal definition for 'synthetic media' created through artificial intelligence manipulation.
Mechanism of Influence: By defining AI-manipulated images, audio, and video, the bill creates a regulatory framework to identify and potentially restrict or require disclosures for deepfakes in political campaigns.
Evidence:
Ambiguity Notes: The exclusion of 'minor edits or enhancements' is not strictly defined, which could lead to disputes over whether a specific AI-enhanced video crosses the threshold into 'synthetic media'.
Why Relevant: The bill defines 'interactive computer services', which are the primary platforms for the dissemination of AI-generated content.
Mechanism of Influence: This definition establishes the types of digital entities and services that may be subject to regulations regarding the hosting or transmission of AI-manipulated political communications.
Evidence:
Ambiguity Notes: The definition is broad, covering everything from commercial internet services to educational systems, which may create varying levels of compliance burden.
HB 2245 introduces a veteran claims pilot program designed to enhance the claims development process for veterans. The program will utilize technology to conduct comprehensive reviews of veterans' records, identify service-connected conditions, and produce complete claim packets for submission to the U.S. Department of Veterans Affairs. The bill outlines the integration of this program with existing veteran services and mandates an evaluation of its effectiveness, including various performance indicators.
| Date | Action |
|---|---|
| 2026-01-20 | House 2nd Read |
| 2026-01-15 | House 1st Read |
| 2026-01-12 | Filed |
Why Relevant: The bill mandates the use of automated technology systems to perform complex analytical tasks traditionally handled by humans, specifically the review of medical records and the mapping of conditions to federal diagnostic criteria.
Mechanism of Influence: It requires the Department of Veterans Services to assess the effectiveness of these 'claims development technology systems' through specific performance indicators and reporting, effectively requiring a government evaluation of an automated system's accuracy and utility.
Evidence:
Ambiguity Notes: While the bill uses the term 'technology' rather than 'Artificial Intelligence,' the functions described—such as mapping medical documentation to complex rating criteria—are characteristic of AI-driven diagnostic and decision-support tools.
Legislation ID: 259099
HB2311 introduces regulations for conversational AI services in Arizona, requiring operators to disclose when minors are interacting with AI, implement safety measures to protect minors from inappropriate content, and provide tools for privacy management. The bill also outlines penalties for violations and establishes protocols for handling sensitive topics like suicidal ideation.
| Date | Action |
|---|---|
| 2026-01-20 | House 2nd Read |
| 2026-01-15 | House 1st Read |
| 2026-01-12 | Filed |
Why Relevant: The bill directly addresses the user's interest in legislation requiring disclosures for AI usage.
Mechanism of Influence: It mandates that operators of conversational AI services must provide persistent disclaimers and regular notifications to minor account holders to ensure they are aware they are interacting with an artificial intelligence rather than a human.
Evidence:
Ambiguity Notes: The term 'persistent disclaimers' and 'regular notifications' are not strictly defined by specific time intervals in the abstract, which may lead to varying implementation standards among different operators.
Why Relevant: The legislation focuses on regulating AI behavior and content safety, which aligns with the user's request for AI regulation.
Mechanism of Influence: It prohibits AI services from engaging in harmful interactions with minors, such as generating sexual content or encouraging sexual conduct, and requires specific crisis protocols for sensitive topics.
Evidence:
Ambiguity Notes: The definition of 'inappropriate content' may be subject to interpretation or further regulatory clarification to determine what specifically constitutes a violation.
Why Relevant: The bill establishes a legal framework for liability and oversight of AI operators.
Mechanism of Influence: By empowering the attorney general to enforce civil penalties and clarifying that developers are not liable for third-party violations, the bill creates a regulatory oversight structure for AI deployment.
Evidence:
Ambiguity Notes: The exclusion of 'developers of AI models' from liability for 'third-party violations' creates a distinction between the creator of the technology and the entity operating the specific service interface.
Legislation ID: 259200
HB2371 amends Arizona law to introduce artificial intelligence-assisted arbitration in divorce cases, provided both parties consent and do not have minor children. The bill outlines the process for arbitration, the ability for parties to withdraw consent, and the appeal process for binding determinations made through this method. It defines artificial intelligence-assisted arbitration and establishes the jurisdiction of courts over these matters.
| Date | Action |
|---|---|
| 2026-01-20 | House 2nd Read |
| 2026-01-15 | House 1st Read |
| 2026-01-12 | Filed |
Why Relevant: The bill provides a specific legal definition for artificial intelligence in the context of arbitration, which is a foundational element of AI regulation.
Mechanism of Influence: By defining AI-assisted arbitration as a computational system rather than a legal person, it establishes the legal status and limitations of AI tools used in the judicial process.
Evidence:
Ambiguity Notes: The term 'computational system' is broad and could encompass various types of algorithmic decision-making tools beyond generative AI.
Why Relevant: The legislation regulates the usage of AI by establishing strict prerequisites for its application in legal disputes.
Mechanism of Influence: It mandates a disclosure and consent mechanism where both parties must provide written agreement, effectively regulating the deployment of AI in sensitive legal matters.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill establishes a framework for oversight and human-in-the-loop review of AI-generated outcomes.
Mechanism of Influence: It creates a mandatory appeal process where AI-generated binding determinations are reviewed de novo by a human judge, ensuring that AI does not have the final word without judicial recourse.
Evidence:
Ambiguity Notes: None
Legislation ID: 259303
HB 2409 establishes the Arizona Artificial Intelligence Education Program within the Department of Education. The program will offer summer courses that cover digital hygiene and civic integrity, as well as AI applications for small business and entrepreneurship. The curriculum will include training on navigating the digital world safely, understanding algorithmic bias, and using AI tools for business operations. The program aims to equip residents with skills for economic success and the ability to recognize digital manipulation.
| Date | Action |
|---|---|
| 2026-01-26 | House 2nd Read |
| 2026-01-22 | House 1st Read |
| 2026-01-12 | Filed |
Why Relevant: The bill mandates the creation of a curriculum that specifically addresses AI-related risks such as algorithmic bias and data protection.
Mechanism of Influence: By institutionalizing education on algorithmic bias and data protection, the state creates a framework for public awareness of AI oversight issues, though it stops short of direct industry regulation.
Evidence:
Ambiguity Notes: The bill does not define the specific standards or definitions for 'algorithmic bias' or 'data protection' that must be taught, leaving the substantive content to the discretion of the Office of Economic Opportunity.
Why Relevant: The legislation focuses on the economic and operational integration of AI for small businesses.
Mechanism of Influence: It promotes the adoption of AI technologies by providing state-sponsored training on operations and marketing, which influences how AI is deployed in the local economy.
Evidence:
Ambiguity Notes: The provision focuses on promotion and education rather than restriction or mandatory disclosure, which may fall outside the scope of 'regulation' depending on the user's strictness.
Legislation ID: 259305
HB2410 introduces a new chapter to Title 18 of the Arizona Revised Statutes, specifically addressing the legal status of communications with artificial intelligence. It asserts that communications with AI will be considered privileged if the individual would have been entitled to privileged communication had they consulted a human professional, thereby extending legal protections to interactions with AI technologies.
| Date | Action |
|---|---|
| 2026-01-21 | House 2nd Read |
| 2026-01-20 | House 1st Read |
| 2026-01-12 | Filed |
Why Relevant: The bill directly addresses the legal status and privacy protections of interactions with AI, which falls under the regulation of AI usage and data handling.
Mechanism of Influence: It grants AI-human interactions the same legal protections as professional-client relationships, preventing such communications from being used as evidence or disclosed in contexts where privilege applies.
Evidence:
Ambiguity Notes: The term 'human professional' is not explicitly defined in the provided text, leaving it open to interpretation regarding which specific professional privileges (e.g., legal, medical, clergy) are extended to AI.
HB 2490 introduces regulations concerning algorithmic pricing in the rental market. It defines key terms related to algorithmic devices and establishes prohibitions against their use in ways that could facilitate collusion among landlords or manipulate rental prices and terms. The bill outlines enforcement mechanisms and specifies the conditions under which these regulations apply, as well as exceptions for certain types of housing.
| Date | Action |
|---|---|
| 2026-01-21 | House 2nd Read |
| 2026-01-20 | House 1st Read |
| 2026-01-12 | Filed |
Why Relevant: The bill directly regulates the application of algorithms, which are foundational components of artificial intelligence, within the real estate and rental sectors.
Mechanism of Influence: It imposes a legal prohibition on using specific types of algorithmic tools for price coordination and establishes a rebuttable presumption of illegal trade practices for violations.
Evidence:
Ambiguity Notes: The scope of 'algorithmic device' and 'algorithm' depends on the specific definitions provided in the bill, which may broadly encompass various automated decision-making systems or narrow predictive models.
Legislation ID: 266104
HB2592 introduces regulations for the use of artificial intelligence systems by state budget units. It mandates the identification of opportunities for AI implementation, elimination of restrictive regulations, and the establishment of governance structures. The bill also requires legislative ratification for any rules specifically regulating AI, ensuring that such regulations do not hinder innovation or competition.
| Date | Action |
|---|---|
| 2026-01-26 | House 2nd Read |
| 2026-01-22 | House 1st Read |
| 2026-01-12 | Filed |
Why Relevant: The bill establishes strict oversight and a ratification process for any new regulations concerning artificial intelligence.
Mechanism of Influence: It requires that any emergency or temporary rules regulating AI be approved by the legislature within thirty days of a session and mandates that rules must not create barriers to market entry.
Evidence:
Ambiguity Notes: The term 'specific harms' is not explicitly defined, leaving the threshold for when the legislature may delegate regulatory authority open to interpretation.
Why Relevant: The legislation provides foundational legal definitions for AI and related technologies which determine the scope of future regulation.
Mechanism of Influence: By defining 'artificial intelligence system' and 'computational resource,' the bill sets the boundaries for what technologies fall under these legislative constraints.
Evidence:
Ambiguity Notes: The definition of AI as systems that 'influence environments' is broad and could encompass a wide range of software beyond generative AI.
Why Relevant: The bill regulates the internal government adoption and procurement of AI technologies.
Mechanism of Influence: It mandates that budget units streamline procurement and identify opportunities for AI implementation to reduce costs and improve services.
Evidence:
Ambiguity Notes: None
Legislation ID: 248169
SB 1088 proposes an appropriation of $2,500,000 from the state general fund for the fiscal year 2026-2027 to support cybersecurity programs. Specifically, it designates $500,000 for generative artificial intelligence cybersecurity initiatives and $2,000,000 to modernize the statewide VPN security network using a zero trust network access solution.
| Date | Action |
|---|---|
| 2026-01-14 | Senate 2nd Read |
| 2026-01-12 | Filed |
| 2026-01-12 | Senate 1st Read |
Why Relevant: The bill explicitly mentions and allocates funding for generative artificial intelligence cybersecurity initiatives.
Mechanism of Influence: The appropriation provides the financial resources necessary for the state to develop or implement cybersecurity protocols and programs specifically designed to address the risks or capabilities of generative AI.
Evidence:
Ambiguity Notes: The bill does not provide a specific definition for 'generative artificial intelligence cybersecurity programs,' which could encompass securing AI models from external threats, using AI for cyber defense, or auditing AI systems for vulnerabilities.
Assembly Bill 1542 amends sections of the Civil Code regarding the handling of sensitive personal information by businesses. It establishes clearer guidelines for how businesses must inform consumers about the collection and use of their personal information, specifically sensitive data, and reinforces the rights of consumers to limit the use and disclosure of such information.
| Date | Action |
|---|---|
| 2026-01-06 | From printer. May be heard in committee February 5. |
| 2026-01-05 | Read first time. To print. |
Why Relevant: The bill regulates the collection and use of sensitive personal information, which is a primary input for many AI systems and machine learning models.
Mechanism of Influence: By requiring businesses to limit data use to 'necessary purposes' and allowing consumers to opt-out of further disclosure, the law restricts how data can be repurposed for AI training or algorithmic profiling.
Evidence:
Ambiguity Notes: The bill does not explicitly mention 'Artificial Intelligence,' but its restrictions on data usage and disclosure directly impact the data governance frameworks required for AI development.
Assembly Bill No. 1609 aims to regulate customer service support provided by large private businesses, specifically requiring them to offer human assistance during specified hours, respond quickly to customer requests, and ensure transparency regarding the use of artificial intelligence in customer service interactions.
| Date | Action |
|---|---|
| 2026-01-21 | From printer. May be heard in committee February 20. |
| 2026-01-20 | Read first time. To print. |
Why Relevant: The bill contains specific transparency and disclosure requirements for AI-driven customer service interactions.
Mechanism of Influence: It mandates that businesses disclose the use of AI and prohibits them from deceiving customers into thinking they are speaking with a human.
Evidence:
Ambiguity Notes: The term 'artificially generated' is used broadly and may require further technical clarification to determine which specific technologies, such as Large Language Models versus simple automated scripts, are covered.
Why Relevant: The bill regulates the operational deployment of AI bots in customer service settings by requiring human oversight/availability.
Mechanism of Influence: It requires a human fallback mechanism within a five-minute window if an AI bot is initially used to answer a customer query, effectively regulating the autonomy of AI in business-to-consumer interactions.
Evidence:
Ambiguity Notes: None
Senate Bill No. 867 introduces regulations on toys that include companion chatbots, which are defined as artificial intelligence systems that mimic human-like interactions. The bill prohibits the manufacture, sale, or exchange of such toys until January 1, 2031, aiming to ensure children's safety and clarity in interactions with these technologies.
| Date | Action |
|---|---|
| 2026-01-06 | From printer. May be acted upon on or after February 5. |
| 2026-01-05 | Introduced. Read first time. To Com. on RLS. for assignment. To print. |
Why Relevant: The bill specifically targets and regulates a subset of artificial intelligence technology known as companion chatbots.
Mechanism of Influence: It creates a legal prohibition on the commercial distribution of AI-integrated toys, effectively banning this specific AI application for a set period to ensure child safety.
Evidence:
Ambiguity Notes: The definition of 'mimic human-like interactions' is broad and could encompass a wide range of AI complexities, from basic scripted decision trees to advanced generative models.
Why Relevant: The legislation addresses the user's interest in age-related AI regulations and safety protections for minors.
Mechanism of Influence: By defining 'toy' based on the age of the user (12 or less), the law restricts AI usage and exposure based on the age of the target demographic.
Evidence:
Ambiguity Notes: None
Legislation ID: 284203
Senate Bill No. 903 establishes regulations for the use of artificial intelligence by licensed mental health professionals in California. It prohibits the use of AI in therapeutic settings without informed consent and restricts AI from making independent therapeutic decisions. The bill aims to safeguard individuals seeking mental health services and ensure that they are provided by qualified professionals.
| Date | Action |
|---|---|
| 2026-01-22 | From printer. May be acted upon on or after February 21. |
| 2026-01-21 | Introduced. Read first time. To Com. on RLS. for assignment. To print. |
Why Relevant: The bill mandates specific disclosures and informed consent protocols for the use of AI in a professional setting.
Mechanism of Influence: Licensed professionals are legally barred from utilizing AI tools in a therapeutic context unless they first obtain and document written consent from the patient or their representative.
Evidence:
Ambiguity Notes: The specific requirements for what constitutes 'informed' consent regarding the technical nature of the AI are not detailed in the abstract.
Why Relevant: It imposes direct restrictions on the functional capabilities and autonomy of AI systems within the mental health industry.
Mechanism of Influence: The law prevents AI from being used as a primary provider or decision-maker, ensuring that AI remains a tool for licensed humans rather than an independent agent.
Evidence:
Ambiguity Notes: The term 'independent therapeutic decisions' may require further regulatory clarification to distinguish between AI-assisted suggestions and autonomous actions.
Why Relevant: The legislation includes enforcement mechanisms and financial penalties for failing to adhere to AI regulations.
Mechanism of Influence: The Department of Consumer Affairs is granted investigative authority and the power to levy significant fines for non-compliance.
Evidence:
Ambiguity Notes: None
This bill introduces the Artificial Intelligence Bill of Rights in Florida, defining artificial intelligence and prohibiting certain contracts with foreign entities of concern. It lays out various rights for Floridians related to AI, including rights to privacy, consent, and protection from misuse of AI technologies. The bill also mandates that AI technologies must not infringe on personal rights and establishes penalties for violations.
| Date | Action |
|---|---|
| 2026-01-15 | H Now in Information Technology Budget & Policy Subcommittee |
| 2026-01-15 | H Referred to Civil Justice & Claims Subcommittee |
| 2026-01-15 | H Referred to Commerce Committee |
| 2026-01-15 | H Referred to Information Technology Budget & Policy Subcommittee |
| 2026-01-15 | H Referred to State Affairs Committee |
| 2026-01-13 | H 1st Reading |
| 2026-01-09 | H Filed |
Why Relevant: The bill mandates transparency and disclosure regarding the use of AI systems.
Mechanism of Influence: It establishes a legal right for individuals to be informed when they are interacting with an AI rather than a human and when their data is being harvested by such systems.
Evidence:
Ambiguity Notes: The specific method of disclosure, such as the required format or timing of the notice, is not detailed in the summary.
Why Relevant: The legislation addresses the regulation of AI usage for minors and parental oversight.
Mechanism of Influence: By granting parents the right to control their children's use of AI, the bill implies a need for age verification or parental consent mechanisms for AI platforms.
Evidence:
Ambiguity Notes: The summary does not specify the age threshold for 'children' or the technical requirements for implementing parental control.
Why Relevant: The bill regulates AI through the lens of data privacy and oversight of foreign entities.
Mechanism of Influence: It prohibits government contracts with foreign entities of concern that involve personal identifying information, effectively regulating which AI providers can provide services to state government infrastructure.
Evidence:
Ambiguity Notes: The term 'foreign entities of concern' likely refers to specific countries or organizations, but these are not explicitly listed in the summary.
This bill amends existing Florida Statutes to require school districts to develop and provide elective courses in computer technology for high school students. It also revises the general education core course standards for public postsecondary educational institutions to include technology-related courses, ensuring that students are equipped with relevant technological skills.
| Date | Action |
|---|---|
| 2026-01-28 | H CS Filed |
| 2026-01-28 | H Favorable with CS by Careers & Workforce Subcommittee |
| 2026-01-28 | H Laid on Table under Rule 7.18(a) |
| 2026-01-28 | H Reported out of Careers & Workforce Subcommittee |
| 2026-01-26 | H PCS added to Careers & Workforce Subcommittee agenda |
| 2026-01-15 | H Now in Careers & Workforce Subcommittee |
| 2026-01-15 | H Referred to Budget Committee |
| 2026-01-15 | H Referred to Careers & Workforce Subcommittee |
Why Relevant: The bill explicitly mandates the inclusion of artificial intelligence applications within the general education core course standards for public postsecondary institutions.
Mechanism of Influence: By requiring AI to be part of the core curriculum, the state influences the educational standards and foundational knowledge required for students, though it does not regulate the technology's development or commercial use.
Evidence:
Ambiguity Notes: The bill focuses on the educational and curriculum side of AI rather than the regulatory oversight (such as audits or weight submissions) mentioned in the system instructions.
Why Relevant: The legislation requires school districts to offer high school electives specifically in artificial intelligence.
Mechanism of Influence: It mandates that school districts provide access to AI education, potentially allowing students to earn industry certifications or college credit in the field.
Evidence:
Ambiguity Notes: The scope is limited to educational offerings and does not address age verification for AI usage or disclosure requirements for AI-generated content.
Legislation ID: 239390
This bill establishes mandatory human reviews for insurance claim denials across various sectors, including workers' compensation, general insurance, and health maintenance organizations. It defines key terms and outlines the responsibilities of qualified human professionals in the claims process. The bill mandates that any decision to deny or reduce a claim must be made by a qualified human professional after a thorough review of the case, independent of automated systems. It also requires carriers to maintain records of these decisions and includes provisions for enforcement and penalties for non-compliance.
| Date | Action |
|---|---|
| 2026-01-13 | H 1st Reading |
| 2025-12-12 | H Now in Commerce Committee |
| 2025-12-12 | H Referred to Commerce Committee |
| 2025-12-11 | H CS Filed |
| 2025-12-11 | H Laid on Table under Rule 7.18(a) |
| 2025-12-11 | H Reported out of Insurance & Banking Subcommittee |
| 2025-12-09 | H Favorable with CS by Insurance & Banking Subcommittee |
| 2025-12-02 | H Added to Insurance & Banking Subcommittee agenda |
Why Relevant: The bill sets specific constraints on how AI can be deployed in the insurance industry.
Mechanism of Influence: It mandates a human-in-the-loop requirement, ensuring AI is only an assistive tool rather than a final decision-maker.
Evidence:
Ambiguity Notes: The term 'qualified human professional' is defined, but its specific qualifications may vary by insurance type.
Why Relevant: It requires transparency and disclosure regarding the use of AI in the claims process.
Mechanism of Influence: Insurers must include statements in denial letters affirming that AI was not the sole factor and detail AI usage in internal manuals.
Evidence:
Ambiguity Notes: The level of detail required in the claims-handling manuals regarding AI logic is not fully specified.
Why Relevant: It provides legal definitions for AI-related technologies.
Mechanism of Influence: Establishes the scope of the law by defining algorithm, artificial intelligence system, and machine learning system.
Evidence:
Ambiguity Notes: None
Why Relevant: It allows for government oversight of AI-related practices.
Mechanism of Influence: Authorizes market conduct examinations and investigations to ensure compliance with AI regulations.
Evidence:
Ambiguity Notes: The frequency and specific criteria for these examinations are left to the office's discretion.
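As a rough sketch of the human-in-the-loop mandate (all names and fields here are hypothetical, not taken from the bill text), a compliant claims pipeline would treat the automated output as advisory and refuse to finalize a denial without a qualified human's independent determination:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimDecision:
    claim_id: str
    ai_recommendation: str           # assistive output only, e.g. "deny" or "approve"
    human_reviewer: Optional[str]    # ID of the qualified human professional
    human_determination: Optional[str]

def finalize_denial(decision: ClaimDecision) -> str:
    """Return the final decision; a denial requires an independent human review on record."""
    if decision.human_reviewer is None or decision.human_determination is None:
        # No qualified human review recorded: the AI recommendation alone cannot deny.
        raise ValueError("Denial blocked: no qualified human review recorded")
    return decision.human_determination
```

Note that the human determination, not the AI recommendation, is what the function returns: the automated system never decides.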
Legislation ID: 239843
Bill URL: View Bill
This bill creates a Task Force on Artificial Intelligence in Public Postsecondary Education under the Department of Education. The task force is required to convene by August 1, 2026, and will consist of various stakeholders, including faculty, education board representatives, and AI experts. The task force's duties include examining the impact of AI on academic integrity, exploring AI applications in education, and recommending policies for its ethical use. A report of findings and recommendations is to be submitted by December 1, 2026, after which the task force will terminate.
| Date | Action |
|---|---|
| 2026-01-13 | H 1st Reading |
| 2026-01-05 | H Now in Education Administration Subcommittee |
| 2026-01-05 | H Referred to Education Administration Subcommittee |
| 2026-01-05 | H Referred to Education & Employment Committee |
| 2026-01-05 | H Referred to Higher Education Budget Subcommittee |
| 2025-12-23 | H Filed |
Why Relevant: The bill initiates government oversight and policy development for AI within the public education sector, specifically focusing on ethical use and security.
Mechanism of Influence: The task force is mandated to recommend model policies for ethical use and assess privacy and security implications, which serves as a precursor to formal regulation or auditing requirements in educational settings.
Evidence:
Ambiguity Notes: The bill focuses on study and recommendation rather than immediate enforcement of regulations like age verification or weight submission, but it explicitly addresses 'ethical use' and 'security implications,' which are core to AI regulation.
This bill mandates the State Board of Education to create statewide policies regarding the use of artificial intelligence in schools. It requires that students receive instruction on digital literacy and AI ethics, and it outlines the responsibilities of the Department of Education in monitoring compliance and providing teacher training. Additionally, it amends existing statutes to incorporate AI-related policies into student codes of conduct.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced |
| 2026-01-12 | Referred to Education Pre-K - 12; Commerce and Tourism; Rules |
| 2026-01-06 | Filed |
Why Relevant: The bill explicitly addresses the user's interest in AI disclosures and regulation.
Mechanism of Influence: It creates a legal requirement for students to disclose the use of AI in their academic work and requires teacher permission for its use.
Evidence:
Ambiguity Notes: The bill does not specify the technical methods for detecting AI or the exact format of the required disclosures.
Why Relevant: The bill focuses on the oversight and monitoring of AI technology within a specific sector.
Mechanism of Influence: It imposes monitoring safeguards for assessments and requires the Department of Education to monitor compliance with AI policies.
Evidence:
Ambiguity Notes: The term 'monitoring safeguards' is broad and could refer to various forms of algorithmic or human oversight.
Why Relevant: The bill addresses the ethical regulation of AI through education and conduct codes.
Mechanism of Influence: It mandates instruction on AI ethics and requires that AI usage policies be formally integrated into student codes of conduct.
Evidence:
Ambiguity Notes: The bill leaves the definition of 'ethical use' to be determined by the State Board of Education.
Legislation ID: 250298
Bill URL: View Bill
This bill creates the Artificial Intelligence in Higher Education Study Group, tasked with reviewing the impact of artificial intelligence on academic integrity, teaching, and research within Florida's higher education systems. The group will consist of various stakeholders, including faculty, students, and AI experts, and is required to submit a report with findings and recommendations by December 1, 2026.
| Date | Action |
|---|---|
| 2026-01-22 | Introduced |
| 2026-01-16 | Referred to Education Postsecondary; Commerce and Tourism; Rules |
| 2026-01-08 | Filed |
Why Relevant: The bill directly addresses the governance and oversight of artificial intelligence within the higher education sector.
Mechanism of Influence: By mandating the creation of model policies and reviewing governance best practices, the study group's recommendations could form the basis for future regulatory requirements or disclosure mandates for AI tools used in academic settings.
Evidence:
Ambiguity Notes: The terms 'model policies' and 'best practices in AI governance' are broad and could encompass anything from voluntary ethical guidelines to mandatory disclosure and auditing requirements.
Why Relevant: The legislation focuses on the ethical and legal implications of AI, specifically regarding privacy and intellectual property.
Mechanism of Influence: The study group is required to assess how AI affects privacy and IP, which may lead to specific disclosure requirements or restrictions on how AI models are trained or deployed using academic data.
Evidence:
Ambiguity Notes: The scope of 'privacy and intellectual property implications' is not strictly defined, leaving room for the group to investigate data collection practices and the ownership of AI-generated content.
Legislation ID: 239160
Bill URL: View Bill
This legislation mandates the Florida Digital Service to conduct a comprehensive study on the use of artificial intelligence by state agencies. It defines artificial intelligence and state agencies, and outlines the requirements for the study, including details on the agencies using AI, the purposes of their use, and associated costs. A report summarizing the findings must be submitted by March 1, 2027.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced |
| 2025-10-21 | Referred to Governmental Oversight and Accountability; Appropriations Committee on Agriculture, Environment, and General Government; Fiscal Policy |
| 2025-10-09 | Filed |
Why Relevant: The legislation focuses on government oversight and transparency regarding the implementation of artificial intelligence within state agencies.
Mechanism of Influence: It requires a formal study and reporting mechanism to the Governor and Legislature, which serves as a precursor to potential regulatory frameworks or budgetary oversight.
Evidence:
Ambiguity Notes: The effectiveness of the study depends on the specific definition of 'artificial intelligence' adopted in the bill's definitions section.
This bill amends section 1007.25 of the Florida Statutes to incorporate technology courses into the general education core course standards for public postsecondary educational institutions. It mandates that faculty committees review and recommend course options that include subjects related to computer science, artificial intelligence, robotics, and cybersecurity, ensuring students gain relevant technological skills.
| Date | Action |
|---|---|
| 2026-01-22 | Introduced |
| 2026-01-16 | Referred to Education Postsecondary; Appropriations Committee on Higher Education; Rules |
| 2026-01-09 | Filed |
Why Relevant: The legislation explicitly identifies 'artificial intelligence' as a core subject area that must be included in the state's postsecondary technology course standards.
Mechanism of Influence: By mandating the inclusion of AI in general education standards, the law influences the academic framework and workforce preparation related to AI, though it does not impose direct regulatory controls on AI development or deployment.
Evidence:
Ambiguity Notes: The bill focuses on educational curriculum and academic standards rather than the regulatory oversight, audits, or disclosures typically associated with AI governance legislation.
Legislation ID: 239332
Bill URL: View Bill
This bill establishes regulations regarding the use of artificial intelligence (AI) in the fields of psychology, clinical social work, marriage and family therapy, and mental health counseling. It defines AI and prohibits its use in direct therapeutic practices, with specified exceptions for administrative support and session recording under certain conditions. The bill seeks to protect the integrity of mental health services by limiting AI's role in direct client interactions.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced |
| 2025-11-17 | Referred to Health Policy; Children, Families, and Elder Affairs; Rules |
| 2025-11-04 | Filed |
Why Relevant: The bill explicitly regulates and restricts the use of AI in specific professional fields by defining its permissible scope.
Mechanism of Influence: It creates a legal prohibition against using AI for direct therapeutic interventions, limiting its role to administrative support such as scheduling and billing.
Evidence:
Ambiguity Notes: The term 'direct clinical social work and counseling practices' may require clearer boundaries to determine if AI-assisted diagnostic tools or decision-support systems are also prohibited.
Why Relevant: The bill mandates disclosures and informed consent regarding AI usage for data processing.
Mechanism of Influence: Practitioners are required to obtain written consent from clients at least 24 hours before using AI for recording or transcribing sessions, ensuring transparency.
Evidence:
Ambiguity Notes: The 24-hour advance notice requirement might be impractical for certain types of immediate or emergency mental health interventions.
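The 24-hour consent rule reduces to a simple timestamp comparison. The sketch below is illustrative only (the function and parameter names are assumptions, not statutory language):

```python
from datetime import datetime, timedelta

def consent_is_valid(consent_signed_at: datetime,
                     session_starts_at: datetime,
                     notice_hours: int = 24) -> bool:
    """True only if written consent predates the session by the required notice period."""
    return session_starts_at - consent_signed_at >= timedelta(hours=notice_hours)
```

A check like this also makes the practicality concern concrete: for a same-day crisis session, no consent signed that day can ever satisfy the window.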
This bill creates the Division of Integrated Government Innovation and Technology (DIGIT) within the Executive Office of the Governor, transferring responsibilities from the Department of Management Services. It establishes DIGIT as a separate budget entity responsible for overseeing state information technology governance, cybersecurity standards, and supporting state agencies in technology initiatives. The bill also outlines various requirements for compliance, reporting, and collaboration among state agencies and establishes new roles and responsibilities for key positions related to information technology management.
| Date | Action |
|---|---|
| 2026-01-22 | Introduced |
| 2026-01-16 | Referred to Appropriations Committee on Agriculture, Environment, and General Government; Appropriations |
| 2026-01-12 | Filed |
Why Relevant: The bill creates a centralized authority (DIGIT) for state technology governance and innovation. While AI is not explicitly mentioned in the bill text, AI-related projects and regulations within state government would logically fall under this division's oversight of 'technology initiatives'.
Mechanism of Influence: The Director of DIGIT, serving as the state CIO, would have the authority to set standards and oversee the implementation of emerging technologies, including AI, across state agencies.
Evidence:
Ambiguity Notes: The terms 'Innovation' and 'Technology' are not defined to specifically include or exclude Artificial Intelligence, allowing for broad interpretation of the division's scope regarding AI oversight.
The bill amends existing statutes and creates new sections to define artificial intelligence, prohibit certain contracts with foreign entities, and establish the Artificial Intelligence Bill of Rights for Floridians. It outlines the rights of individuals regarding AI, including consent requirements for minors, protections against misuse of personal data, and civil remedies for violations. The bill also imposes obligations on AI technology companies and chatbot platforms to protect user information and restrict access for minors without parental consent.
| Date | Action |
|---|---|
| 2026-01-21 | Favorable by Commerce and Tourism, YEAS 10 NAYS 0; Now in Appropriations |
| 2026-01-16 | On Committee agenda: Commerce and Tourism, 01/21/26, 8:30 am, 110 Senate Building |
| 2026-01-13 | Introduced |
| 2026-01-07 | Referred to Commerce and Tourism; Appropriations |
| 2025-12-22 | Filed |
Why Relevant: The bill mandates transparency regarding the use of AI in interactions.
Mechanism of Influence: It grants Floridians the right to be informed when they are interacting with an artificial intelligence system rather than a human.
Evidence:
Ambiguity Notes: The specific method of disclosure, such as a visual badge or verbal notice, is not detailed in the bill's abstract.
Why Relevant: The legislation includes specific age-related restrictions and parental oversight for AI platforms.
Mechanism of Influence: Companion chatbot platforms are required to implement parental consent mechanisms to prevent unauthorized use by minors and provide monitoring tools.
Evidence:
Ambiguity Notes: The term 'companion chatbot' may require further legal definition to determine which specific apps or services are covered.
Why Relevant: It regulates the data practices of AI technology companies.
Mechanism of Influence: It imposes a legal obligation on AI companies to deidentify personal information and prohibits the sale or disclosure of such data without explicit consent.
Evidence:
Ambiguity Notes: The standard for 'deidentified' data can vary; the bill's effectiveness depends on how strictly this is defined.
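To see why the 'deidentified' standard matters, consider a minimal sketch (hypothetical, not from the bill text): replacing direct identifiers with salted hashes is pseudonymization, which may or may not satisfy a strict reading of deidentification.

```python
import hashlib

def deidentify(record: dict,
               direct_identifiers=("name", "email", "ssn"),
               salt: str = "per-deployment-secret") -> dict:
    """Replace direct identifiers with salted one-way hashes; keep other fields.

    Note: salted hashing is pseudonymization, a weaker standard than full
    deidentification -- exactly the definitional gap the bill leaves open,
    since quasi-identifiers (state, zip, age) pass through untouched.
    """
    out = {}
    for key, value in record.items():
        if key in direct_identifiers:
            out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out
```

How strictly the statute defines 'deidentified' determines whether a scheme like this, which leaves re-identification possible via the remaining fields, would comply.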
Legislation ID: 270848
Bill URL: View Bill
This bill amends various sections of Florida Statutes to provide exemptions from public records requirements for information held by the Department of Legal Affairs regarding notifications and investigations of violations related to companion chatbots, bots, and deidentified data. It outlines the conditions under which this information remains confidential and the circumstances under which it may be disclosed.
| Date | Action |
|---|---|
| 2026-01-22 | Filed; Referred to Appropriations |
| 2026-01-21 | Submitted as Committee Bill and Reported Favorably by Commerce and Tourism; YEAS 10 NAYS 0 |
| 2026-01-16 | Submitted for consideration by Commerce and Tourism; On Committee agenda: Commerce and Tourism, 01/21/26, 8:30 am, 110 Senate Building |
Why Relevant: The legislation specifically targets 'companion chatbots,' which are a specialized application of generative artificial intelligence.
Mechanism of Influence: It regulates the transparency and oversight process for AI chatbots by exempting investigation records from public disclosure, thereby governing how the state handles AI-related consumer protection cases.
Evidence:
Ambiguity Notes: The term 'companion chatbot' is used but the specific technical threshold for what constitutes a 'chatbot' versus other AI interfaces is not detailed in the abstract.
Why Relevant: The bill provides a legal definition for proprietary information specifically tailored to artificial intelligence technology companies.
Mechanism of Influence: By defining and protecting AI proprietary information, the law creates a shield for AI developers' weights, algorithms, or trade secrets during government investigations.
Evidence:
Ambiguity Notes: The definition of 'proprietary information' relies on the company's own treatment of the data as private, which could be interpreted broadly by AI firms.
Why Relevant: The bill addresses 'bots,' which are frequently powered by AI and are a core subject of AI regulatory discussions regarding automation and disclosure.
Mechanism of Influence: It establishes the confidentiality framework for state-level enforcement actions against bot operators, affecting how AI-driven automation is policed.
Evidence:
Ambiguity Notes: None
Legislation ID: 188075
Bill URL: View Bill
This bill amends the Georgia Technology Authority's regulations to require an annual inventory of artificial intelligence systems utilized by state agencies. It mandates the development of policies regarding the procurement and implementation of these systems, with a focus on preventing unlawful discrimination. The bill also requires the authority to prepare annual reports on the usage of artificial intelligence across agencies and ensures cooperation among state entities in this process.
| Date | Action |
|---|---|
| 2026-01-12 | Senate Recommitted |
| 2025-03-27 | Senate Read Second Time |
| 2025-03-25 | Senate Committee Favorably Reported By Substitute |
| 2025-03-10 | Senate Withdrawn & Recommitted |
| 2025-02-21 | Senate Read and Referred |
| 2025-02-20 | House Passed/Adopted |
| 2025-02-20 | House Third Readers |
| 2025-02-06 | House Committee Favorably Reported |
Why Relevant: The bill establishes a formal oversight and disclosure mechanism for AI systems used within the state government.
Mechanism of Influence: It requires state agencies to disclose specific technical and operational details of their AI systems, including capabilities and impact assessment statuses, to a central authority.
Evidence:
Ambiguity Notes: The term 'impact assessment status' implies a requirement for audits or evaluations, though the specific criteria for these assessments are left to be defined in future policies.
Why Relevant: The legislation mandates the creation of regulatory frameworks governing how AI is acquired and deployed.
Mechanism of Influence: By requiring the development of policies for procurement and implementation, the law sets a regulatory floor for AI usage, specifically targeting the prevention of algorithmic discrimination.
Evidence:
Ambiguity Notes: The scope of 'unlawful discrimination' and the specific 'policies and procedures' for procurement are broad and will depend on the Georgia Technology Authority's eventual rulemaking.
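The inventory-and-reporting mechanism lends itself to a simple sketch. The dataclass and summary function below are hypothetical (the bill leaves the exact schema to the Georgia Technology Authority's eventual rulemaking), but they show the kind of record each agency might file and how a central authority could roll it up for the annual report:

```python
from dataclasses import dataclass

@dataclass
class AISystemInventoryEntry:
    """One illustrative row of an agency's annual AI inventory filing."""
    agency: str
    system_name: str
    purpose: str
    capabilities: list
    impact_assessment_status: str   # e.g. "not started", "in progress", "complete"

def inventory_report(entries):
    """Summarize impact-assessment status across agencies for the annual report."""
    counts = {}
    for e in entries:
        counts[e.impact_assessment_status] = counts.get(e.impact_assessment_status, 0) + 1
    return counts
```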
Legislation ID: 188104
Bill URL: View Bill
This bill amends existing laws in Georgia regarding obscenity and related offenses by specifically prohibiting the distribution of computer-generated obscene material that depicts children. It establishes definitions for obscenity and child, outlines penalties for violations, and mandates reporting for individuals who suspect they are processing such material. Additionally, it introduces enhanced sentencing for defendants who utilize artificial intelligence in the commission of designated offenses.
| Date | Action |
|---|---|
| 2026-01-12 | Senate Recommitted |
| 2025-03-31 | Senate Committee Favorably Reported By Substitute |
| 2025-03-28 | Senate Recommitted |
| 2025-03-27 | Senate Committee Favorably Reported By Substitute |
| 2025-03-27 | Senate Read Second Time |
| 2025-02-27 | Senate Read and Referred |
| 2025-02-26 | House Passed/Adopted By Substitute |
| 2025-02-26 | House Third Readers |
Why Relevant: The bill specifically regulates the output of artificial intelligence by prohibiting the distribution of AI-generated obscene material depicting children.
Mechanism of Influence: It creates a legal prohibition against distributing specific types of AI-generated content, effectively regulating the use of AI for generating child-like obscene imagery.
Evidence:
Ambiguity Notes: The phrase 'appears to depict' relies on community standards and visual interpretation, which may vary.
Why Relevant: The legislation addresses the use of AI in criminal activities by providing for increased penalties.
Mechanism of Influence: It mandates enhanced sentencing for defendants who utilize AI during the commission of designated offenses, acting as a deterrent for the misuse of AI technology.
Evidence:
Ambiguity Notes: The specific 'designated offenses' are not fully listed in the summary, leaving the scope of the enhancement partially undefined.
Legislation ID: 188619
Bill URL: View Bill
House Bill 638 proposes amendments to the Georgia Code regarding the Metropolitan Atlanta Rapid Transit Authority (MARTA). It prohibits non-transit vehicles from stopping or parking in designated transit vehicle lanes in Atlanta, introduces penalties for violations, and authorizes the use of automated monitoring devices for enforcement. The bill outlines procedures for issuing citations, conditions for penalties, and the management of funds collected from fines.
| Date | Action |
|---|---|
| 2026-01-12 | Senate Recommitted |
| 2026-01-12 | Senate Taken from Table |
| 2025-04-02 | Senate Tabled |
| 2025-03-27 | Senate Committee Favorably Reported |
| 2025-03-27 | Senate Read Second Time |
| 2025-03-10 | Senate Read and Referred |
| 2025-03-06 | House Committee Favorably Reported By Substitute |
| 2025-03-06 | House Passed/Adopted By Substitute |
Why Relevant: The bill authorizes the use of automated monitoring devices for law enforcement and civil penalty issuance, which involves automated decision-making systems.
Mechanism of Influence: It establishes a legal framework for 'automated transit vehicle lane monitoring devices' to record images and trigger citations, effectively automating the enforcement of traffic laws.
Evidence:
Ambiguity Notes: The legislation does not explicitly use the term 'Artificial Intelligence,' but the automated systems described (likely involving computer vision or license plate recognition) are common applications of AI in public infrastructure.
Legislation ID: 258976
Bill URL: View Bill
Senate Bill 398 amends Georgia's laws on wiretapping and surveillance by introducing provisions against virtual peeping, which involves the unauthorized generation of images of individuals using generative AI. The bill outlines specific definitions, penalties for violations, and exceptions for law enforcement activities. The legislation aims to protect the privacy of individuals, particularly minors, from unauthorized image generation.
| Date | Action |
|---|---|
| 2026-01-14 | Senate Read and Referred |
| 2026-01-13 | Senate Hopper |
Why Relevant: The bill directly regulates the use of generative artificial intelligence systems by criminalizing specific outputs.
Mechanism of Influence: It creates a legal framework that prohibits the use of AI to generate human likenesses without explicit consent, classifying such acts as 'virtual peeping' or felonies depending on the content.
Evidence:
Ambiguity Notes: The definition of 'generative artificial intelligence system' is central to the bill's scope, determining which software tools fall under these criminal statutes.
Why Relevant: The legislation focuses heavily on the protection of minors and age-based distinctions in AI usage.
Mechanism of Influence: It imposes harsher criminal penalties (1 to 20 years imprisonment) when the subject of the AI-generated image is a minor.
Evidence:
Ambiguity Notes: The bill distinguishes between minors under 14 and those 14 or older regarding consent and misdemeanor vs. felony status.
Why Relevant: The bill establishes a consent-based regulatory requirement for AI image generation.
Mechanism of Influence: By requiring consent from the subject or a guardian, it effectively mandates a disclosure or authorization process before AI can be used to replicate a person's likeness.
Evidence:
Ambiguity Notes: The bill does not specify the technical form consent must take, only that its absence triggers criminal liability.
This bill amends Title 33 of the Idaho Code by adding a new chapter that addresses the integration of generative AI technologies in education. It mandates the development of a statewide framework by the State Department of Education, which will guide local school districts and public charter schools in adopting policies regarding the use of generative AI. The bill emphasizes the importance of transparency, student privacy, and human oversight in educational settings while promoting the ethical use of AI tools for instruction and administration.
| Date | Action |
|---|---|
| 2026-01-23 | Reported Printed; referred to Education |
| 2026-01-22 | Introduced; read first time; referred to JR for Printing |
Why Relevant: The bill establishes a regulatory framework for AI usage and mandates specific disclosures from technology providers.
Mechanism of Influence: It requires vendors to disclose the use of machine learning and generative AI in educational tools and mandates that local districts create policies to govern and restrict AI use.
Evidence:
Ambiguity Notes: The bill leaves the specific definitions of 'appropriate' and 'prohibited' uses to be determined by local school districts, which could result in varying standards across the state.
Why Relevant: The legislation focuses on oversight, data privacy, and the ethical application of AI technologies.
Mechanism of Influence: It mandates that the statewide framework prioritize human-centered oversight and safety, while ensuring all AI-related procurement complies with existing data privacy laws.
Evidence:
Ambiguity Notes: The term 'human-centered oversight' is not strictly defined, leaving room for interpretation regarding the level of human intervention required in AI-driven administrative or instructional processes.
This Act bars surveillance-based discrimination in pricing and wages by prohibiting the use of surveillance data in automated decision systems that determine individualized prices for consumers or wages for employees. It defines key terms, sets out exemptions, establishes enforcement by the Attorney General, and provides for penalties and a private right of action. It also outlines the Act's relationship to other laws and provides for rulemaking.
| Date | Action |
|---|---|
| 2026-01-27 | Re-assigned to Executive |
| 2026-01-14 | Added as Chief Co-Sponsor Sen. Celina Villanueva |
| 2025-04-11 | Rule 3-9(a) / Re-referred to Assignments |
| 2025-03-21 | Rule 2-10 Committee Deadline Established As April 11, 2025 |
| 2025-03-19 | To AI and Social Media |
| 2025-03-12 | Assigned to Executive |
| 2025-02-07 | Filed with Secretary by Sen. Robert Peters |
| 2025-02-07 | First Reading |
Why Relevant: The bill directly regulates 'automated decision systems,' which is a standard legislative term used to encompass artificial intelligence and algorithmic decision-making tools.
Mechanism of Influence: It imposes substantive restrictions on how AI-driven automated systems can be used to calculate consumer prices and employee compensation, effectively regulating the output and application of AI in commercial and employment contexts.
Evidence:
Ambiguity Notes: While the abstract uses the term 'automated decision systems' rather than 'artificial intelligence' explicitly, these terms are often used interchangeably in regulatory frameworks to cover machine learning and algorithmic models.
Why Relevant: The bill includes disclosure requirements regarding the data used by these automated systems.
Mechanism of Influence: It requires employers to disclose the specific data considered by automated wage-setting systems before an individual is hired, aligning with the user's interest in AI transparency and disclosure mandates.
Evidence:
Ambiguity Notes: The disclosure requirement is specific to wage decisions and does not explicitly mention 'weights' or 'audits' in the technical AI sense, though it mandates 'procedures to ensure data accuracy.'
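A minimal sketch of the pre-hire disclosure and a prohibited-data check (the field names and surveillance-data categories below are hypothetical, not taken from the Act):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WageSystemDisclosure:
    """Pre-hire notice of the data an automated wage-setting system considers.

    The Act requires disclosing the specific data considered, plus procedures
    to ensure its accuracy; this record type is purely illustrative.
    """
    employer: str
    system_name: str
    data_considered: tuple          # e.g. ("role", "location", "tenure")
    accuracy_procedure: str

def uses_surveillance_data(disclosure,
                           surveillance_fields=("browsing_history",
                                                "location_tracking",
                                                "biometrics")) -> bool:
    """Check a disclosure against categories an Act like this would prohibit."""
    return any(f in disclosure.data_considered for f in surveillance_fields)
```

The point of the sketch: once the data inputs must be enumerated up front, checking them against a prohibited-category list becomes mechanical, which is what makes the disclosure requirement enforceable.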
Legislation ID: 242366
Bill URL: View Bill
House Bill No. 1085 introduces provisions for civil liability concerning child sexual abuse material and obscene material on the Internet. It enables individuals depicted in or exposed to such materials to file civil actions against those who knowingly allow access to, disseminate, or provide the content. The bill also allows the attorney general to seek injunctive relief and establishes a safe harbor provision for certain entities under specific conditions. Notably, it states that comparative fault and tort claims immunities do not apply to these civil actions.
| Date | Action |
|---|---|
| 2026-01-13 | Representative Goss-Reaves added as coauthor |
| 2026-01-05 | Authored by Representative King |
| 2026-01-05 | First reading: referred to Committee on Judiciary |
Why Relevant: The bill targets information content providers and interactive computer services, which are categories that include AI developers and platforms hosting generative AI.
Mechanism of Influence: AI companies could face civil litigation if their models are used to generate or distribute prohibited content, as the bill removes certain tort immunities for these actions.
Evidence:
Ambiguity Notes: The legislation uses technology-neutral language like 'information content provider' which likely covers AI entities without naming them explicitly.
Legislation ID: 247680
Bill URL: View Bill
House Bill No. 1201 addresses various mental health and insurance matters by prohibiting the use of artificial intelligence to impersonate licensed mental health professionals, requiring compliance with network adequacy standards for health carriers, and ensuring favorable reimbursement rates for mental health services relative to Medicare. It also sets forth regulations on downcoding practices and retroactive audits of paid claims.
| Date | Action |
|---|---|
| 2026-01-22 | Representative Goss-Reaves added as coauthor |
| 2026-01-14 | Representative Ledbetter added as coauthor |
| 2026-01-13 | Representative Cash added as coauthor |
| 2026-01-05 | Authored by Representative Rowray |
| 2026-01-05 | First reading: referred to Committee on Insurance |
Why Relevant: The bill explicitly addresses the use of AI in healthcare, specifically prohibiting its use as a replacement for human mental health professionals.
Mechanism of Influence: It creates a legal barrier against the automation of mental health professional roles, ensuring that AI cannot be used to bypass the requirement for licensed human practitioners in specific interactions.
Evidence:
Ambiguity Notes: The terms 'impersonate' and 'substitute' could be interpreted broadly, potentially affecting the deployment of AI-driven mental health chatbots or diagnostic tools if they are deemed to be acting in place of a professional.
HB 1360 allows public agencies in Indiana to create electronic portals for public records requests. These portals will include security features to verify human requestors and their residency status. The bill also introduces provisions for collecting additional fees from non-residents and prioritizing requests based on their purpose. Public agencies are required to report suspicious requests, and the public access counselor must address excessive requests and recommend solutions to the General Assembly.
| Date | Action |
|---|---|
| 2026-01-28 | Senate sponsor: Senator Brown L |
| 2026-01-28 | Third reading: passed; Roll Call 115: yeas 94, nays 0 |
| 2026-01-27 | Amendment #1 (Lehman) prevailed; voice vote |
| 2026-01-27 | Second reading: amended, ordered engrossed |
| 2026-01-22 | Committee report: amend do pass, adopted |
| 2026-01-22 | Representative Miller D added as coauthor |
| 2026-01-15 | Representative Porter added as coauthor |
| 2026-01-08 | Authored by Representative Lehman |
Why Relevant: The bill addresses the use of automated systems and bots, which are frequently powered by AI, to scrape public records and interact with government portals.
Mechanism of Influence: It mandates technical barriers such as CAPTCHA or equivalent verification to ensure requestors are human, and requires agencies to log and report suspected automated submissions, data scraping, or phishing attempts.
Evidence:
Ambiguity Notes: While the bill does not use the specific term 'Artificial Intelligence,' the regulation of 'automated submissions' and 'data scraping' directly impacts the methods used by AI developers and agents to harvest public data.
Legislation ID: 248801
Bill URL: View Bill
House Bill No. 1421 aims to prohibit employers from relying solely on automated decision systems for employment-related decisions and outlines specific conditions under which such systems can be used. It establishes rights for employees and candidates regarding the use of automated outputs, mandates disclosures, and provides mechanisms for enforcement and civil action against violations. The bill seeks to protect covered individuals from discrimination and retaliation related to automated decision-making processes.
| Date | Action |
|---|---|
| 2026-01-08 | Authored by Representative Harris |
| 2026-01-08 | First reading: referred to Committee on Employment, Labor and Pensions |
Why Relevant: The bill directly addresses the regulation of automated decision systems, which are a primary form of artificial intelligence used for organizational decision-making.
Mechanism of Influence: It imposes a legal prohibition against fully autonomous AI decision-making in hiring and management, requiring human intervention and validation.
Evidence:
Ambiguity Notes: The scope of the regulation depends on the bill's specific definition of 'automated decision system' versus 'passive computing infrastructure'.
Why Relevant: The legislation includes mandatory disclosure requirements for AI-driven systems, a key component of AI transparency regulation.
Mechanism of Influence: Employers must provide clear descriptions of the system's logic and outputs to affected individuals before use or hiring, enabling individuals to understand and dispute AI-generated outcomes.
Evidence:
Ambiguity Notes: The requirement for 'clear descriptions' may be subject to interpretation regarding the level of technical complexity required in the disclosure.
Why Relevant: The bill mandates training and oversight for those operating AI systems.
Mechanism of Influence: It requires that human operators are trained on the limitations, potential biases, and adverse effects of the automated systems they use.
Evidence:
Ambiguity Notes: None
SB 199 introduces several amendments to Indiana's education laws, including changes to the composition of the case review panel for interscholastic athletics, requirements for schools with low reading proficiency scores, and mandates for the secretary of education to report on civic literacy metrics and employee paid leave recommendations. Additionally, it establishes regulations for social media services concerning adolescent users.
| Date | Action |
|---|---|
| 2026-01-28 | Amendment #6 (Raatz) prevailed; voice vote |
| 2026-01-28 | Reread second time: amended, ordered engrossed |
| 2026-01-27 | Placed back on second reading |
| 2026-01-26 | Amendment #3 (Raatz) prevailed; voice vote |
| 2026-01-26 | Amendment #5 (Raatz) prevailed; voice vote |
| 2026-01-26 | Second reading: amended, ordered engrossed |
| 2026-01-15 | Committee report: amend do pass, adopted |
| 2026-01-08 | Senator Rogers added as second author |
Why Relevant: The bill includes specific mandates for age verification and parental consent for digital platforms.
Mechanism of Influence: It requires social media services to implement age verification protocols and restricts account creation for minors without explicit parental permission.
Evidence:
Ambiguity Notes: The bill focuses on 'social media services' rather than 'artificial intelligence' specifically, though these platforms often utilize AI for content delivery and age estimation.
Legislation ID: 259635
Bill URL: View Bill
House File 2048 introduces regulations concerning the processing of personal data by companies operating in Iowa. It defines key terms related to personal data and outlines the responsibilities of companies, including disclosure requirements, consent protocols, and individual rights regarding personal data. The bill also establishes enforcement mechanisms, including penalties for violations and the ability for individuals to seek damages. Exemptions are provided for specific types of data processing, such as for law enforcement and national security purposes.
| Date | Action |
|---|---|
| 2026-01-14 | Introduced, referred to Economic Growth and Technology.H.J. 76. |
Why Relevant: The bill explicitly includes 'automated decision making' within its scope of definitions and regulations.
Mechanism of Influence: By defining and potentially regulating automated decision making, the law impacts how AI algorithms process personal data to make choices about individuals, often a precursor to specific AI transparency requirements.
Evidence:
Ambiguity Notes: The abstract mentions the definition of automated decision making but does not specify the exact restrictions or opt-out rights associated with it, which are common in similar privacy-AI legislation.
Why Relevant: The legislation mandates disclosure and consent protocols for data processing, which are foundational to AI governance and data provenance.
Mechanism of Influence: Requirements to disclose the purposes of data use and obtain affirmative consent directly affect the collection of training data for AI models and the transparency of AI-driven services.
Evidence:
Ambiguity Notes: While the bill focuses on personal data generally, the requirements for 'clear and affirmative consent' create a regulatory hurdle for the mass scraping or use of personal data in AI development.
Why Relevant: The bill provides individuals with the right to delete data and revoke consent, which impacts the lifecycle of data used in AI systems.
Mechanism of Influence: The requirement to cease processing within 30 days of consent revocation and the right to request deletion could necessitate the removal of specific data points from active AI models or training sets.
Evidence:
Ambiguity Notes: The practical application of 'deleting' data from a trained neural network is a complex technical area that the law does not explicitly address.
Legislation ID: 266493
Bill URL: View Bill
House File 2082 outlines definitions and regulations regarding the use of artificial intelligence (AI) in recreating an individual's likeness without consent. The bill specifies the types of uses that require consent, such as in commercial activities or political campaigns, and establishes a framework for civil liability, including potential damages and class action provisions for violations.
| Date | Action |
|---|---|
| 2026-01-15 | Introduced, referred to Economic Growth and Technology.H.J. 01/15. |
Why Relevant: The bill directly regulates the application of artificial intelligence by requiring consent for the recreation of an individual's likeness, which falls under the category of regulating AI usage and requiring disclosures/permissions.
Mechanism of Influence: It creates a legal prohibition against unauthorized AI likeness generation in commercial and political contexts, enforced through civil liability and punitive damages.
Evidence:
Ambiguity Notes: The definition of artificial intelligence as a system that can 'influence environments' is broad and could encompass a wide range of software beyond generative media.
Why Relevant: The legislation establishes a framework for oversight and accountability for AI developers and users through the legal system.
Mechanism of Influence: By defining separate violations for each day and allowing for class action suits, the bill creates significant financial and legal risks for non-compliance, effectively acting as a regulatory deterrent.
Evidence:
Ambiguity Notes: The bill does not specify technical standards for how consent must be obtained or verified, which may lead to litigation over the validity of digital disclosures.
Legislation ID: 286695
Bill URL: View Bill
House File 2153 mandates that community colleges, school districts, and state universities develop and publish policies regarding the use of artificial intelligence by students and employees. These policies must clarify when AI can be used in educational settings and outline any prohibitions. The bill includes specific deadlines for the adoption of these policies and requires them to be accessible via the institutions' websites.
| Date | Action |
|---|---|
| 2026-01-26 | Introduced, referred to Education.H.J. 01/26. |
Why Relevant: The bill mandates the creation of regulatory frameworks and disclosure requirements for AI usage within the public education sector.
Mechanism of Influence: It requires institutions to formally define and disclose their stance on AI, creating a public record of allowed and prohibited AI activities, which acts as a form of institutional regulation and state-level oversight.
Evidence:
Ambiguity Notes: The text does not provide a specific technical definition of 'artificial intelligence,' which may lead to inconsistent policy applications across different school districts and universities.
Legislation ID: 284482
Bill URL: View Bill
House Study Bill 610 introduces measures to integrate computer science into high school curricula and graduation requirements in Iowa. Starting with the graduating class of 2030-2031, students will be required to complete one semester of computer science. The bill also mandates the development of high-quality standards for computer science education across all grades, the creation of a list of approved computer science courses, and a plan to increase the capacity of computer science teachers. Additionally, it outlines how computer science courses can fulfill science and mathematics requirements for college admissions.
| Date | Action |
|---|---|
| 2026-01-28 | Subcommittee recommends passage. |
| 2026-01-26 | Subcommittee Meeting: 01/28/2026 12:00PM RM 304. |
| 2026-01-22 | Introduced, referred to Education.H.J. 01/22. |
| 2026-01-22 | Subcommittee: Ingels, Kurth and Shipley.H.J. 01/22. |
Why Relevant: The bill focuses on the educational integration of artificial intelligence and requires the establishment of standards for its instruction.
Mechanism of Influence: It mandates that the Department of Education develop standards for AI education that include instruction on the impact and ethical considerations of the technology.
Evidence:
Ambiguity Notes: The bill focuses on academic standards and curriculum rather than direct regulation of AI developers or technical audits of AI systems.
Legislation ID: 284483
Bill URL: View Bill
House Study Bill 610 proposes amendments to existing education codes to require high school students to complete a semester of computer science as part of their graduation requirements starting with the class of 2030-2031. It also sets standards for computer science education across all grade levels, mandates the publication of approved computer science courses, and outlines the necessary steps for expanding teacher capacity in this field. Additionally, it addresses the inclusion of computer science courses in college admissions criteria and provides for the potential waiver of graduation requirements in certain circumstances.
| Date | Action |
|---|---|
| 2026-01-28 | Subcommittee recommends amendment and passage. |
| 2026-01-27 | Subcommittee: Gruenhagen, Donahue, and Pike.S.J. 149. |
| 2026-01-27 | Subcommittee Meeting: 01/28/2026 2:30PM Room 217 Conference Room. |
| 2026-01-22 | Introduced, referred to Education.S.J. 130. |
Why Relevant: The bill mandates the inclusion of artificial intelligence in the state's educational curriculum and graduation requirements.
Mechanism of Influence: It requires the Department of Education to set standards for AI education, including instruction on ethical considerations and societal impacts, and mandates that teacher preparation programs include AI training.
Evidence:
Ambiguity Notes: The term 'ethical considerations' regarding AI is not specifically defined, leaving the scope of what must be taught to the discretion of the Department of Education and school districts.
Legislation ID: 254908
Bill URL: View Bill
This bill introduces a framework for defining and regulating chatbots within the state of Iowa. It sets forth specific requirements for chatbot functionality, including transparency about their nature as non-human entities and limitations regarding the advice they can provide. The bill also outlines civil penalties for violations of these regulations and empowers the attorney general to enforce compliance and implement rules.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced, referred to Technology. |
| 2026-01-13 | Subcommittee: Warme, Bennett, and Taylor. |
Why Relevant: The bill mandates specific transparency and disclosure requirements for AI-driven chatbot systems.
Mechanism of Influence: It requires chatbots to explicitly state they are not human at the start of interactions and at regular intervals, ensuring users are aware they are interacting with an AI.
Evidence:
Ambiguity Notes: The requirement to disclose every thirty minutes may be difficult to implement in asynchronous or long-running sessions without clear technical guidelines.
Why Relevant: The legislation imposes operational restrictions and government oversight on AI functionality.
Mechanism of Influence: It prohibits AI from providing professional advice and empowers the attorney general to levy significant fines and create new rules for AI deployment.
Evidence:
Ambiguity Notes: The definition of 'adaptive content' is broad and could potentially encompass a wide range of generative AI technologies beyond simple text bots.
Legislation ID: 258361
Bill URL: View Bill
This bill addresses the ownership of artificial intelligence output and trained artificial intelligence. It defines key terms such as artificial intelligence, input, output, train, and user. It stipulates that users who provide input to AI own the output generated, provided it does not infringe on third-party rights. Additionally, it states that individuals who train AI own the resulting AI if they have lawfully acquired the training data and have not transferred ownership. It also clarifies that if AI is used in an employment context, the output belongs to the employer under certain conditions. The bill ensures that ownership rights do not infringe on existing intellectual property rights.
| Date | Action |
|---|---|
| 2026-01-28 | Subcommittee Meeting: 01/29/2026 9:30AM Room 217 Conference Room (Cancelled). |
| 2026-01-13 | Introduced, referred to Technology. |
| 2026-01-13 | Subcommittee: Alons, Drey, and Kraayenbrink. |
| 2026-01-13 | Subcommittee: Alons, Kraayenbrink, and Staed. |
Why Relevant: The bill directly regulates the legal status and property rights of artificial intelligence outputs.
Mechanism of Influence: It creates a statutory default for who owns AI-generated content, which is a core component of AI legal regulation.
Evidence:
Ambiguity Notes: The phrase 'infringe on third-party rights' is broad and relies on existing intellectual property case law which is currently evolving regarding AI.
Why Relevant: The bill addresses the ownership of the trained AI models themselves, which relates to the oversight of AI weights and development.
Mechanism of Influence: It sets a legal requirement that training data must be 'lawfully acquired' to claim ownership of the resulting AI model, effectively regulating the data sourcing process for AI development.
Evidence:
Ambiguity Notes: The term 'lawfully acquired' may be subject to interpretation regarding web-scraping and fair use of copyrighted data for training.
Why Relevant: The bill provides regulatory clarity for the use of AI in professional and employment environments.
Mechanism of Influence: It defines the 'scope of employment' as a boundary for AI ownership, ensuring that corporate entities retain rights to AI developed or used by employees under their direction.
Evidence:
Ambiguity Notes: The 'direction and control' requirement may be difficult to apply to autonomous or semi-autonomous AI agents used by employees.
Legislation ID: 255881
Bill URL: View Bill
This bill establishes guidelines for the use of artificial intelligence systems and related software by state agencies in Iowa. It mandates the creation of an inventory of such systems, outlines requirements for automated employment decision-making tools, and prohibits certain uses of AI that could affect employee rights or benefits. The bill emphasizes accountability and transparency in the deployment of AI technologies within state agencies.
| Date | Action |
|---|---|
| 2026-01-27 | Subcommittee recommends passage. |
| 2026-01-21 | Subcommittee Meeting: 01/27/2026 12:00PM Room 217 Conference Room. |
| 2026-01-13 | Introduced, referred to Technology. |
| 2026-01-13 | Subcommittee: McClintock, Bennett, and Sires. |
Why Relevant: The bill directly regulates the application of AI within state government operations, specifically targeting employment-related decisions.
Mechanism of Influence: It creates a legal prohibition against using AI to discharge employees or reduce wages, effectively setting boundaries on algorithmic management.
Evidence:
Ambiguity Notes: The term 'affect employee rights' is broad and may require further legal interpretation to determine if it includes indirect impacts or procedural changes.
Why Relevant: The legislation mandates transparency and reporting for specific AI-driven tools used in hiring and personnel management.
Mechanism of Influence: State agencies are required to publish lists of automated employment tools and submit annual reports to the General Assembly, ensuring legislative oversight.
Evidence:
Ambiguity Notes: The effectiveness of the disclosure depends on the specific definition of 'automated employment decision-making tools' provided in the bill.
Why Relevant: The bill requires a comprehensive inventory and public disclosure of all AI systems used by state agencies.
Mechanism of Influence: By tasking the Department of Management with maintaining a public inventory, the bill subjects AI usage to public scrutiny and centralized government tracking.
Evidence:
Ambiguity Notes: The specific 'data elements' to be collected are left to the department's discretion, which could affect the depth of the oversight.
Legislation ID: 236969
Bill URL: View Bill
House Bill No. 2183 amends existing laws related to crimes against children, particularly focusing on sexual exploitation and privacy breaches involving visual depictions. The bill expands the definitions of visual depictions to encompass images created or modified by artificial intelligence, thereby addressing modern technological concerns. It modifies elements of existing crimes, introduces new prohibitions related to visual depictions, and includes exceptions for specific situations, such as those involving cable services.
| Date | Action |
|---|---|
| 2026-01-27 | Enrolled and presented to Governor on Tuesday, January 27, 2026 |
| 2026-01-23 | Reengrossed on Friday, January 23, 2026 |
| 2026-01-22 | Conference Committee Report was adopted; Yea: 83 Nay: 39 |
| 2025-03-27 | Conference committee report now available |
| 2025-03-27 | Conference Committee Report was adopted; Yea: 30 Nay: 10 |
| 2025-03-24 | Motion to accede adopted; Senator Warren, Senator Titus and Senator Corson appointed as conferees |
| 2025-03-20 | Nonconcurred with amendments; Conference Committee requested; appointed Representative Humphries, Representative Williams, L. and Representative Osman as conferees |
| 2025-03-19 | Committee of the Whole - Be passed as amended |
Why Relevant: The bill explicitly addresses the regulation of AI-generated content by incorporating it into criminal definitions for child exploitation and privacy breaches.
Mechanism of Influence: It subjects creators and possessors of AI-generated child sexual abuse material (CSAM) to criminal prosecution by updating the legal definition of "visual depiction" to include AI-created or altered images.
Evidence:
Ambiguity Notes: The phrase "digital means" is broad and could potentially cover a wide range of non-AI digital manipulation techniques, though AI is specifically named.
Legislation ID: 251520
Bill URL: View Bill
The bill establishes a new section in Kentucky law that defines algorithmic devices and prohibits their use by landlords in determining rent amounts. It emphasizes that such practices may violate antitrust laws and outlines the consequences for landlords who engage in these practices, deeming them unfair and deceptive. The bill also empowers the Attorney General to enforce these provisions under existing consumer protection laws.
| Date | Action |
|---|---|
| 2026-01-14 | to Judiciary (H) |
| 2026-01-07 | introduced in House to Committee on Committees (H) |
Why Relevant: The bill specifically targets and regulates algorithmic decision-making tools, which are a fundamental component of artificial intelligence systems used for automated pricing and market analysis.
Mechanism of Influence: The law creates a legal prohibition against the deployment of AI-driven pricing software in the residential rental sector, establishing penalties for landlords who rely on these automated systems to set prices.
Evidence:
Ambiguity Notes: The definition of 'algorithmic device' includes an exclusion for products 'designed and used internally by landlords,' which may create uncertainty regarding whether proprietary AI models developed in-house are subject to the same restrictions as third-party software.
Legislation ID: 251636
Bill URL: View Bill
This bill outlines the requirements for parental consent when children create accounts on covered AI companion or social media platforms. It includes provisions for resolving disputes regarding a child's age, invalidation of contracts made without proper consent, and the establishment of penalties for violations of these provisions. The bill also empowers the Attorney General to enforce these regulations and provides mechanisms for civil action against non-compliant platforms.
| Date | Action |
|---|---|
| 2026-01-14 | to Small Business & Information Technology (H) |
| 2026-01-07 | introduced in House to Committee on Committees (H) |
Why Relevant: The bill explicitly targets AI companion platforms and mandates age verification and parental consent for minors.
Mechanism of Influence: It requires covered AI platforms to obtain verifiable parental consent before account creation and provides a framework for resolving age disputes, backed by civil penalties and Attorney General enforcement.
Evidence:
Ambiguity Notes: The scope of 'covered AI companion' platforms may need precise definition to distinguish between general AI tools and those specifically designed for companionship.
This bill amends KRS 367.3611 to 367.3629, introducing definitions related to data privacy, including terms like personal data, controller, processor, and sensitive data. It outlines the responsibilities of data controllers in handling personal data, ensuring consumer rights are respected, and preventing practices like surveillance pricing. The amendments are set to take effect on January 1, 2026.
| Date | Action |
|---|---|
| 2026-01-13 | to Small Business & Information Technology (H) |
| 2026-01-06 | introduced in House to Committee on Committees (H) |
Why Relevant: The bill addresses data processing and surveillance pricing, which are central to the commercial application and regulation of AI technologies.
Mechanism of Influence: By regulating how controllers handle personal and sensitive data, the bill impacts the data pipelines used to train and operate AI models. The prohibition on surveillance pricing specifically targets algorithmic and AI-driven dynamic pricing models.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its focus on automated data processing and pricing algorithms aligns with common AI regulatory frameworks.
Legislation ID: 259851
Bill URL: View Bill
This legislation creates a new section in KRS Chapter 189 that defines Automated License Plate Readers (ALPR) and outlines strict prohibitions against their use. It establishes criminal penalties for violations, categorizing the use of ALPR systems as a Class D felony, and allows individuals recorded by these systems to seek civil damages.
| Date | Action |
|---|---|
| 2026-01-22 | to Transportation (H) |
| 2026-01-14 | introduced in House to Committee on Committees (H) |
Why Relevant: The law regulates a specific application of automated technology and algorithms used for data collection and identification, which falls under the broader category of AI-driven surveillance and automated decision-making systems.
Mechanism of Influence: It imposes a total ban on the technology, establishing severe criminal penalties and civil causes of action to prevent the deployment of algorithm-based license plate recognition systems.
Evidence:
Ambiguity Notes: The definition of ALPR specifically cites the use of 'algorithms' to read license plates, which is a fundamental component of computer vision AI, though the bill does not use the specific term 'Artificial Intelligence'.
Legislation ID: 256064
Bill URL: View Bill
This legislation prohibits making artificial intelligence chatbots and social AI companions that exhibit human-like features accessible to minors. It defines what constitutes human-like features and outlines specific requirements for deployers to prevent minors from accessing such technologies. The bill allows exceptions for therapy chatbots under strict conditions, mandates safeguards for user information, and establishes penalties for violations.
| Date | Action |
|---|---|
| 2026-01-13 | Referred in Concurrence |
| 2026-01-13 | Referred to Committee |
Why Relevant: The bill directly addresses age verification requirements for AI usage.
Mechanism of Influence: Deployers are required to implement reasonable age verification systems to prevent minors from accessing specific AI technologies.
Evidence:
Ambiguity Notes: The term 'reasonable' regarding age verification is not strictly defined, leaving the specific technical method to the deployer's discretion or future rulemaking.
Why Relevant: The legislation regulates the deployment and accessibility of specific AI models based on their features.
Mechanism of Influence: It prohibits making chatbots with human-like features accessible to minors and defines what constitutes these features.
Evidence:
Ambiguity Notes: The definition of 'human-like features' is mentioned as being defined in the chapter but the specific criteria are not detailed in the abstract.
Why Relevant: The bill includes requirements for disclosures and the submission of safety data for specific AI applications.
Mechanism of Influence: Therapy chatbots must provide disclaimers and developers must submit peer-reviewed clinical trial data regarding safety and efficacy.
Evidence:
Ambiguity Notes: None
Why Relevant: The legislation mandates operational safeguards and data collection limits for AI deployers.
Mechanism of Influence: Deployers must implement emergency detection systems and are restricted to collecting only necessary user information.
Evidence:
Ambiguity Notes: The definition of 'legitimate purposes' for data collection may be subject to interpretation by the Attorney General.
Legislation ID: 263218
Bill URL: View Bill
House Bill 145 seeks to empower the State Administrator of Elections to act against election misinformation and disinformation, including the use of deepfakes. It mandates the State Board of Elections to maintain a reporting portal for the public and allows for civil actions against entities disseminating false information. The bill also prohibits the use of deepfakes to mislead voters and establishes penalties for violations.
| Date | Action |
|---|---|
| 2026-01-16 | Hearing 2/04 at 2:00 p.m. |
| 2026-01-14 | First Reading Government, Labor, and Elections |
| 2025-07-16 | Pre-filed |
Why Relevant: The bill explicitly defines and regulates deepfakes, which are a prominent application of generative artificial intelligence.
Mechanism of Influence: It prohibits the creation and dissemination of AI-generated deepfakes that contain materially false information intended to influence or mislead voters, establishing criminal penalties including fines and imprisonment.
Evidence:
Ambiguity Notes: The effectiveness of the law depends on the technical definition of 'deepfake' and whether it keeps pace with evolving AI synthesis techniques.
Why Relevant: The legislation establishes an oversight and reporting mechanism for AI-generated content in the political sphere.
Mechanism of Influence: It mandates the State Board of Elections to maintain a public portal for reporting misinformation (including deepfakes) and requires periodic reviews and corrective actions, creating a government-led audit trail for deceptive AI content.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill defines the legal boundaries and exemptions for the use of AI-generated media.
Mechanism of Influence: By providing exemptions for satire and news broadcasts, the bill sets a precedent for how AI regulations balance deceptive intent against protected speech and journalistic use.
Evidence:
Ambiguity Notes: The distinction between 'satire' and 'materially false information' may be subjective and lead to legal challenges regarding AI-generated parodies.
Legislation ID: 263230
Bill URL: View Bill
House Bill 148 establishes regulations against the use of surveillance data for setting prices and wages. It defines surveillance-based price setting and wage setting, outlines exceptions for certain pricing practices, and establishes penalties for violations under the Maryland Consumer Protection Act. The bill aims to ensure fair practices in pricing and employment compensation by restricting the use of personal data obtained through surveillance.
| Date | Action |
|---|---|
| 2026-01-19 | Hearing 2/10 at 1:00 p.m. |
| 2026-01-14 | First Reading Economic Matters |
| 2025-08-14 | Pre-filed |
Why Relevant: The bill explicitly regulates 'automated decision systems,' which is a standard legal classification for artificial intelligence and algorithmic decision-making tools.
Mechanism of Influence: It restricts the functional application of AI by prohibiting these systems from using specific types of data (surveillance data) to automate the determination of prices and wages.
Evidence:
Ambiguity Notes: The term 'automated decision system' is broad and typically encompasses a wide range of AI technologies, from simple rule-based algorithms to complex machine learning models, depending on the specific statutory definition used in the bill.
Why Relevant: The legislation focuses on the governance of data inputs for automated systems, which is a core component of AI regulation and oversight.
Mechanism of Influence: By defining and restricting 'surveillance-based' practices, the law forces developers and users of AI to audit their data pipelines to ensure prohibited surveillance data is not influencing automated outcomes.
Evidence:
Ambiguity Notes: The bill's impact on AI depends on how 'surveillance data' is defined; if defined broadly, it could affect a vast array of data points used in predictive AI modeling.
Legislation ID: 263367
Bill URL: View Bill
House Bill 184 seeks to enhance protections against identity fraud by prohibiting the unauthorized use of personal identifying information and the malicious use of artificial intelligence or deepfake technologies to harm individuals. The bill outlines various forms of identity fraud and establishes penalties for violations, while also allowing victims to pursue civil actions against perpetrators. It defines key terms related to identity fraud and sets forth requirements for prosecution and civil recourse.
| Date | Action |
|---|---|
| 2026-01-16 | Hearing 2/03 at 1:00 p.m. |
| 2026-01-16 | Hearing 2/03 at 2:00 p.m. |
| 2026-01-16 | Hearing canceled |
| 2026-01-14 | First Reading Judiciary |
| 2025-11-01 | Pre-filed |
Why Relevant: The bill explicitly targets the use of artificial intelligence and deepfake technology as tools for committing identity fraud and impersonation.
Mechanism of Influence: It creates a legal framework that prohibits the creation of deepfake representations for fraudulent purposes, subjecting users of such AI tools to criminal prosecution and civil liability.
Evidence:
Ambiguity Notes: The bill's effectiveness may depend on how broadly 'harm' and 'fraudulent purposes' are interpreted when applied to AI-generated content that might be satirical or non-malicious.
Why Relevant: The legislation establishes statutory definitions for key AI concepts, which is a primary step in regulating the technology.
Mechanism of Influence: By defining 'artificial intelligence' and 'deepfake representation,' the bill determines the technical scope of the activities that are subject to its prohibitions and penalties.
Evidence:
Ambiguity Notes: The abstract does not provide the specific technical language used in the definitions, which could be either too narrow to cover emerging AI techniques or too broad, potentially capturing standard digital editing.
Legislation ID: 264427
Bill URL: View Bill
House Bill 314 requires certain employers that deploy automation technology to report employee counts and displaced employees to the Secretary of Labor, and to pay an assessment for each displaced employee. The bill establishes the Displaced Employee Retraining Fund to support retraining programs for individuals affected by automation technology, ensuring they have access to training and job placement services.
| Date | Action |
|---|---|
| 2026-01-19 | Hearing 2/04 at 1:00 p.m. |
| 2026-01-15 | First Reading Economic Matters |
Why Relevant: The bill addresses the regulation and disclosure of 'automation technology' in the workplace, which is the broader category under which Artificial Intelligence (AI) systems typically fall when used to replace or augment human labor.
Mechanism of Influence: It imposes a mandatory reporting requirement on the types of automation technology used and the resulting displacement of human workers, effectively serving as a disclosure mandate for AI-driven automation.
Evidence:
Ambiguity Notes: The term 'automation technology' is broad and likely includes AI, though the abstract does not explicitly use the term 'Artificial Intelligence'. The scope of what constitutes 'automation technology' would determine the extent of AI oversight.
Legislation ID: 283780
Bill URL: View Bill
House Bill 434 introduces a prohibition against landlords utilizing algorithmic devices that rely on nonpublic competitor data for setting rent prices and lease terms. This legislation is designed to protect tenants from potentially unfair practices that could arise from the use of such technology. Violations of this act would be classified as unfair, abusive, or deceptive trade practices under the Maryland Consumer Protection Act.
| Date | Action |
|---|---|
| 2026-01-28 | Hearing 2/19 at 1:00 p.m. |
| 2026-01-22 | First Reading Economic Matters |
Why Relevant: The bill specifically regulates the use of 'algorithmic devices,' which is a core component of artificial intelligence and automated decision-making systems used in commercial settings.
Mechanism of Influence: It creates a legal prohibition against using specific types of automated systems for rent-setting, effectively regulating how AI-driven tools can be applied in the real estate industry to prevent price-fixing.
Evidence:
Ambiguity Notes: The specific definition of 'algorithmic device' and its 'exclusions' will determine the breadth of the law, as it may exclude simple spreadsheets while targeting complex AI models.
Legislation ID: 262849
Bill URL: View Bill
House Bill 9 aims to create the Maryland 3–1–1 Oversight Board and a 3–1–1 Program that incorporates artificial intelligence technology to enhance the efficiency of nonemergency services. The bill mandates the implementation of these systems across all counties by a specified date, ensuring that residents have access to streamlined information and services through modern technology.
| Date | Action |
|---|---|
| 2026-01-28 | Hearing 2/10 at 1:00 p.m. |
| 2026-01-14 | First Reading Government, Labor, and Elections |
| 2025-10-17 | Pre-filed |
Why Relevant: The bill provides a formal legal definition of artificial intelligence and applies it to state-run public services.
Mechanism of Influence: It adopts the definition from the State Finance and Procurement Article to categorize the predictive and decision-making software used in the 3-1-1 program.
Evidence:
Ambiguity Notes: The definition focusing on 'predictive and decision-making capabilities' is broad and could encompass a wide variety of algorithmic systems beyond generative AI.
Why Relevant: It creates a regulatory body (the Maryland 3–1–1 Oversight Board) specifically tasked with the governance of AI implementation.
Mechanism of Influence: The Board is responsible for establishing evaluation criteria, reviewing vendor applications, and ensuring technology aligns with best practices for accessibility and performance.
Evidence:
Ambiguity Notes: The term 'best practices' is not defined in the text, leaving the Board with significant discretion to set its own regulatory standards.
Why Relevant: The legislation mandates specific operational safeguards and features for AI-driven communication tools.
Mechanism of Influence: It requires chatbots and voicebots to include multilingual support and 'clear escalation protocols,' effectively regulating how the AI must interact with and hand off to human operators.
Evidence:
Ambiguity Notes: The bill does not specify the technical requirements for 'escalation protocols' or the specific conditions under which a human must intervene.
Why Relevant: The bill requires mandatory auditing and reporting on the AI program's performance.
Mechanism of Influence: The Board must submit reports evaluating the effectiveness, user satisfaction, and cost-efficiency of the AI systems to ensure accountability.
Evidence:
Ambiguity Notes: The specific metrics for 'effectiveness' and 'user satisfaction' are not detailed, leaving the methodology of the audit to the Board's discretion.
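Since the bill leaves "clear escalation protocols" undefined, the kind of logic it gestures at can be sketched in a few lines. This is a hypothetical illustration, not language from the bill: the trigger words, confidence threshold, and retry limit are all assumptions about how an operator might satisfy such a requirement.

```python
# Hypothetical escalation protocol for a 3-1-1 chatbot. The bill mandates
# "clear escalation protocols" but does not specify technical requirements;
# these triggers and thresholds are illustrative assumptions only.

ESCALATION_TRIGGERS = {"agent", "human", "representative", "operator"}
CONFIDENCE_FLOOR = 0.6  # below this, the bot should not answer on its own

def route(user_message: str, model_confidence: float, failed_turns: int) -> str:
    """Decide whether the chatbot answers or hands off to a human operator."""
    text = user_message.lower()
    if any(word in text for word in ESCALATION_TRIGGERS):
        return "escalate: user requested a human"
    if model_confidence < CONFIDENCE_FLOOR:
        return "escalate: low confidence"
    if failed_turns >= 2:
        return "escalate: repeated failure to resolve"
    return "answer: handled by chatbot"

print(route("When is bulk trash pickup?", 0.9, 0))  # answer: handled by chatbot
print(route("Let me talk to an operator", 0.9, 0))  # escalate: user requested a human
```

A real protocol would also log each hand-off, which would feed directly into the effectiveness and user-satisfaction reporting the Board must submit.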
Legislation ID: 264988
Bill URL: View Bill
Senate Bill 114 proposes the creation of a Maryland 3-1-1 Oversight Board to oversee the implementation and expansion of a 3-1-1 Program. This program will utilize artificial intelligence, including chatbots and voicebots, to provide community information and route calls efficiently. The bill mandates the expansion of the 3-1-1 system to all counties by a specific deadline and outlines the roles and responsibilities of the Oversight Board, including evaluating vendor proposals and ensuring adherence to best practices.
| Date | Action |
|---|---|
| 2026-01-14 | First Reading Education, Energy, and the Environment |
| 2025-10-17 | Pre-filed |
Why Relevant: The bill explicitly mandates the use of artificial intelligence technologies, including chatbots and voicebots, within a government-run communication system and establishes a regulatory framework for its implementation.
Mechanism of Influence: It creates an Oversight Board responsible for evaluating AI vendor proposals and establishing criteria for the program's effectiveness, effectively regulating how AI is deployed and monitored in this context.
Evidence:
Ambiguity Notes: While the bill defines 'artificial intelligence' and specific AI tools, the specific 'best practices' and 'evaluation criteria' for these technologies are left to the discretion of the Oversight Board, which could lead to varying standards of oversight.
Legislation ID: 265015
Bill URL: View Bill
Senate Bill 141 establishes measures to combat election misinformation and disinformation, including the requirement for the State Administrator of Elections to act upon credible reports of such misinformation. It authorizes the State Board of Elections to pursue civil actions against entities that disseminate false information and prohibits the use of deepfakes to mislead voters. The bill outlines definitions, procedures for reporting misinformation, and penalties for violations.
| Date | Action |
|---|---|
| 2026-01-14 | First Reading Education, Energy, and the Environment |
| 2026-01-14 | Hearing 1/21 at 11:00 a.m. |
| 2025-07-16 | Pre-filed |
Why Relevant: The bill contains specific provisions regulating deepfakes, which are a primary application of generative artificial intelligence used to create deceptive media.
Mechanism of Influence: It prohibits the use of deepfakes to disseminate materially false information intended to mislead voters and establishes criminal penalties, including fines and imprisonment, for such use.
Evidence:
Ambiguity Notes: While the bill defines 'deepfake,' the specific technical threshold for what constitutes an AI-generated deepfake versus traditional digital manipulation may require further clarification in practice.
Legislation ID: 264505
Bill URL: View Bill
Senate Bill 8 seeks to enhance protections against identity fraud by prohibiting the unauthorized use of personal identifying information and the malicious use of artificial intelligence or deepfake representations. It outlines civil actions for victims and stipulates penalties for violators, thereby aiming to safeguard individuals from harm caused by identity theft and fraudulent representations.
| Date | Action |
|---|---|
| 2026-01-14 | First Reading Judicial Proceedings |
| 2026-01-13 | Hearing 1/22 at 1:00 p.m. |
| 2025-08-26 | Pre-filed |
Why Relevant: The bill explicitly regulates the use of artificial intelligence and deepfake technology in the context of identity theft and fraudulent representation.
Mechanism of Influence: It creates a legal prohibition against using AI to impersonate or mislead individuals and provides a framework for civil litigation and criminal prosecution against those who use these technologies maliciously.
Evidence:
Ambiguity Notes: The practical scope of the law will depend on the specific technical definitions of 'artificial intelligence' and 'deepfake representation' adopted in the bill's text.
Legislation ID: 241812
Bill URL: View Bill
This bill amends various provisions of the Massachusetts General Laws to establish clear requirements for health insurance carriers regarding prior authorization processes. It mandates that insurers publicly disclose items and services requiring prior authorization, report data on authorization requests, and ensure that decisions are based on evidence-based criteria. Additionally, it sets guidelines for the use of artificial intelligence in utilization reviews and protects patients from retrospective denials of previously authorized services.
| Date | Action |
|---|---|
| 2025-12-08 | Reporting date extended to Wednesday, March 18, 2026 |
| 2025-10-20 | New draft of H1136 |
| 2025-10-20 | Reported favorably by committee and referred to the committee on Health Care Financing |
| 2025-10-20 | Reported from the committee on Financial Services |
Why Relevant: The bill contains a dedicated section regulating the use of artificial intelligence in medical utilization reviews.
Mechanism of Influence: It mandates that AI tools used for insurance approvals must incorporate individual patient data and prohibits these tools from replacing the final decision-making authority of human healthcare providers.
Evidence:
Ambiguity Notes: The term 'artificial intelligence' is used broadly; the specific technical definitions or thresholds for what constitutes an AI tool in this context may require further regulatory clarification.
Legislation ID: 241832
Bill URL: View Bill
The Massachusetts Consumer Data Privacy Act seeks to protect the personal data of residents by defining key terms and outlining the responsibilities of data controllers. It emphasizes the necessity of obtaining affirmative consent from consumers before collecting or processing their personal data and sets out specific requirements for transparency and consumer rights. The act also addresses various types of personal data, including biometric, genetic, and health-related information, and establishes guidelines for the sale and processing of such data.
| Date | Action |
|---|---|
| 2025-11-17 | Bill reported favorably by committee and referred to the committee on House Ways and Means |
| 2025-11-17 | New draft of H78, H80, H86, H96, H103 and H104 |
| 2025-11-17 | Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity |
Why Relevant: The act regulates the 'processing' of personal data, which is the foundational activity for training and operating artificial intelligence models.
Mechanism of Influence: AI developers and companies using AI systems would be required to obtain explicit affirmative consent from Massachusetts residents before using their personal data for model training or algorithmic processing.
Evidence:
Ambiguity Notes: The term 'processing' is broad and typically encompasses the computational analysis and data ingestion required for machine learning, though the abstract does not explicitly name 'machine learning' or 'AI'.
Why Relevant: The inclusion of biometric data regulation directly impacts AI-driven technologies such as facial recognition, voice analysis, and gait detection.
Mechanism of Influence: Companies deploying AI for biometric identification or analysis must adhere to specific guidelines for the sale and processing of such data, potentially requiring audits or specific disclosures to ensure compliance.
Evidence:
Ambiguity Notes: While it mentions 'guidelines for the sale and processing,' it does not specify if these guidelines include technical audits of the AI algorithms themselves.
Why Relevant: Consumer rights to opt-out and delete data create a 'right to be forgotten' that complicates the persistence of data within trained AI weights.
Mechanism of Influence: If a consumer exercises their right to delete personal data, AI companies may need to evaluate if that data must be removed from training sets or if the model needs to be retrained (machine unlearning).
Evidence:
Ambiguity Notes: The act does not clarify if the 'right to delete' extends to data already vectorized or transformed into neural network weights.
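The "machine unlearning" tension the note raises can be made concrete with a toy sketch. The model, data, and names below are hypothetical, not from the bill; the point is only that deleting a raw record does not by itself undo that record's influence on already-trained parameters, so the simplest compliant strategy is retraining from scratch on the remaining data.

```python
# Toy illustration of the right-to-delete problem for trained models
# (hypothetical data; not drawn from the bill text).

def fit_mean_model(records):
    """A trivially simple 'model': its single parameter is the training mean."""
    values = list(records)
    return sum(values) / len(values)

training_set = {"alice": 4.0, "bob": 6.0, "carol": 8.0}
model = fit_mean_model(training_set.values())  # 6.0 -- alice's data is baked in

# Consumer 'alice' exercises a right to delete her record:
del training_set["alice"]
# Deleting the raw record alone leaves the deployed parameter unchanged;
# "exact unlearning" here means retraining on what remains.
model = fit_mean_model(training_set.values())  # 7.0 -- alice's influence removed
```

For large neural networks, full retraining is expensive, which is why approximate unlearning techniques exist; the act, as summarized, does not say which (if either) it requires.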
Legislation ID: 89120
Bill URL: View Bill
This bill proposes the creation of a legislative commission tasked with investigating the surges in electricity demand caused by data centers that support high-performance computing and AI, as well as the effects of industrial growth and electrification in transportation and buildings. The commission will include various appointed members and is required to submit a report with recommendations within one year of the bill's passage.
| Date | Action |
|---|---|
| 2026-01-08 | Discharged to the committee on House Rules |
| 2025-12-24 | Bill reported favorably by committee and referred to the committee on Rules of the two branches, acting concurrently |
| 2025-08-28 | Hearing rescheduled to 09/11/2025 from 01:00 PM-05:00 PM in A-2 and Virtual; hearing updated to include Virtual |
| 2025-02-27 | Referred to the committee on Advanced Information Technology, the Internet and Cybersecurity |
| 2025-02-27 | Senate concurred |
Why Relevant: The bill specifically targets the infrastructure and energy consumption associated with the operation of artificial intelligence systems.
Mechanism of Influence: The commission's findings and subsequent report could lead to legislative recommendations or regulations governing the expansion, location, and energy efficiency requirements of AI-related data centers.
Evidence:
Ambiguity Notes: While the bill focuses on the energy impact of AI rather than the algorithmic content or safety, it represents a form of indirect oversight over the physical requirements for AI development.
Legislation ID: 241780
Bill URL: View Bill
This bill establishes the Massachusetts Artificial Intelligence Innovation Trust Fund to support companies developing AI models and promotes entrepreneurship in AI through grants and partnerships. It also introduces the Transparency in Frontier Artificial Intelligence Act, which sets safety protocols, risk assessments, and reporting requirements for large frontier AI developers to ensure public safety and accountability.
| Date | Action |
|---|---|
| 2025-10-16 | Bill reported favorably by committee and referred to the committee on Senate Ways and Means |
| 2025-10-16 | New draft of S37 |
| 2025-10-16 | Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity |
Why Relevant: The Transparency in Frontier Artificial Intelligence Act directly regulates large-scale AI models.
Mechanism of Influence: It requires developers to create and publish frontier AI frameworks and conduct assessments of catastrophic risks, effectively mandating a form of internal audit and public disclosure.
Evidence:
Ambiguity Notes: The definition of 'large frontier developer' and 'frontier AI framework' may require further regulatory clarification by the Attorney General.
Why Relevant: The bill establishes mandatory reporting and government oversight mechanisms.
Mechanism of Influence: Developers are required to report critical safety incidents to the Attorney General, and the Attorney General is tasked with producing annual reports on AI safety risks.
Evidence:
Ambiguity Notes: The reporting requirements for 'critical safety incidents' depend on the specific thresholds defined for catastrophic risk.
Why Relevant: The legislation includes enforcement mechanisms for AI-related regulations.
Mechanism of Influence: It empowers the Attorney General to pursue civil penalties of up to $1,000,000 for non-compliance with the AI safety and reporting standards.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill provides protections for whistleblowers within AI development companies.
Mechanism of Influence: It prohibits NDAs or contracts that prevent employees from disclosing safety risks to the government, ensuring a channel for oversight regarding internal AI risks.
Evidence:
Ambiguity Notes: None
Legislation ID: 241778
Bill URL: View Bill
This legislation seeks to amend Chapter 56 of the General Laws by introducing a new section that specifically addresses election misinformation. It defines key terms related to artificial intelligence and establishes prohibitions against distributing materially deceptive election-related communications within 90 days of an election. The bill also outlines the legal recourse available to individuals affected by such deceptive communications, while providing exceptions for certain media and content types.
| Date | Action |
|---|---|
| 2025-10-16 | Bill reported favorably by committee and referred to the committee on Senate Ways and Means |
| 2025-10-16 | New draft of S44 |
| 2025-10-16 | Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity |
Why Relevant: The legislation explicitly defines and regulates artificial intelligence and synthetic media to prevent election interference.
Mechanism of Influence: It imposes a legal prohibition on distributing AI-generated deceptive content intended to mislead voters and provides a cause of action for individuals to seek damages or injunctions.
Evidence:
Ambiguity Notes: The term 'materially deceptive' and the specific thresholds for what constitutes 'intent to mislead' may be subject to judicial interpretation.
Legislation ID: 241777
Bill URL: View Bill
This legislation amends Chapter 112 of the General Laws to introduce guidelines for the use of artificial intelligence in therapy and psychotherapy services. It defines key terms, establishes requirements for consent, and outlines the permissible use of AI tools by licensed professionals. The bill also addresses the use of AI in utilization review by insurance carriers, ensuring compliance with state and federal laws.
| Date | Action |
|---|---|
| 2025-10-16 | Bill reported favorably by committee and referred to the committee on Health Care Financing |
| 2025-10-16 | New draft of S46 |
| 2025-10-16 | Reported from the committee on Advanced Information Technology, the Internet and Cybersecurity |
Why Relevant: The bill establishes mandatory disclosure and consent requirements for the use of AI in a professional setting.
Mechanism of Influence: Licensed professionals are legally required to provide written notification to patients regarding the purpose of AI tools and must secure explicit consent prior to implementation.
Evidence:
Ambiguity Notes: The bill does not specify the exact format of the written disclosure or the technical standards for the AI tools being used.
Why Relevant: The legislation imposes strict prohibitions on autonomous AI functionality in clinical decision-making.
Mechanism of Influence: It prevents AI from acting as a primary actor in therapy by banning direct interaction with clients and requiring human professional review for all treatment plans and decisions.
Evidence:
Ambiguity Notes: The term 'therapeutic communication' may require further legal definition to determine if it includes administrative or scheduling interactions.
Why Relevant: The bill regulates the use of AI algorithms in insurance and utilization review processes.
Mechanism of Influence: It mandates that AI-driven insurance reviews cannot rely solely on medical necessity algorithms or group data, forcing a human-centric review of individual clinical circumstances.
Evidence:
Ambiguity Notes: It is unclear how 'individual clinical history' must be weighted against AI-generated group data in practice.
Legislation ID: 247159
Bill URL: View Bill
This bill, known as the Age-Appropriate Design Code Act, outlines regulations for online services that are accessed by minors. It mandates that businesses implement specific privacy settings, restrict certain data practices, and ensure that minors have clear access to privacy information. The bill also aims to prevent harmful practices that could exploit minors online and establishes civil sanctions for non-compliance.
| Date | Action |
|---|---|
| 2025-12-16 | bill electronically reproduced 12/11/2025 |
| 2025-12-11 | introduced by Representative Carol Glanville |
| 2025-12-11 | read a first time |
| 2025-12-11 | referred to Committee on Regulatory Reform |
Why Relevant: The act specifically prohibits the profiling of minors, which is a primary application of artificial intelligence and machine learning in digital services.
Mechanism of Influence: Businesses using AI-driven recommendation engines or behavioral analysis tools would be restricted from applying these technologies to minors unless they can demonstrate it is necessary for the requested service.
Evidence:
Ambiguity Notes: While the summary does not explicitly name 'Artificial Intelligence', the definition of profiling typically encompasses automated processing of personal data to evaluate or predict aspects of a person's behavior.
Why Relevant: The legislation requires online services to implement safety-by-design principles, which directly impacts how algorithmic systems are deployed for younger audiences.
Mechanism of Influence: The requirement to offer the 'highest level of privacy and safety' by default forces a redesign of algorithmic engagement features that might otherwise exploit minor vulnerabilities.
Evidence:
Ambiguity Notes: The act focuses on the 'online service' as a whole, which serves as the delivery mechanism for most consumer-facing AI.
Legislation ID: 264793
Bill URL: View Bill
Senate Bill No. 620 seeks to provide clear guidelines for relying parties that use mobile licenses for identity verification. It defines key terms, outlines the responsibilities of relying parties when handling mobile licenses, and sets restrictions on data collection and device access.
| Date | Action |
|---|---|
| 2025-10-22 | INTRODUCED BY SENATOR ERIKA GEISS |
| 2025-10-22 | REFERRED TO COMMITTEE ON TRANSPORTATION AND INFRASTRUCTURE |
Why Relevant: The bill governs the protocols for digital identity and age verification, which are foundational components for regulating access to AI services and ensuring compliance with age-restricted usage policies.
Mechanism of Influence: It mandates that any entity verifying identity via mobile licenses must use cryptographic authentication and limit data collection to only what is necessary, directly affecting how platforms implement age-gating or identity-based access controls.
Evidence:
Ambiguity Notes: The legislation is technology-neutral regarding the 'relying party,' meaning it applies to any service provider using digital IDs, but it does not explicitly name AI developers or automated decision-making systems as a specific category.
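The two obligations the bill pairs, cryptographic authentication and collecting only what is necessary, can be sketched together. This is a simplified illustration with assumed names and data: production mobile licenses use asymmetric issuer signatures (not the shared-key HMAC used here for brevity), and the field names are hypothetical.

```python
# Simplified sketch of a relying party verifying a mobile-license claim.
# Assumptions: HMAC stands in for the asymmetric signatures real mobile
# driver's licenses use; claim fields and key are illustrative only.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-demo-key"  # placeholder for the issuer's real key material

def sign_claims(claims: dict) -> dict:
    """Issuer side: bind the claims to a verifiable authentication tag."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_and_extract(credential: dict, needed_field: str):
    """Relying party: authenticate first, then read ONLY the one field it
    actually needs (data minimization) and discard everything else."""
    expected = hmac.new(ISSUER_KEY, credential["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        raise ValueError("credential failed authentication")
    claims = json.loads(credential["payload"])
    return claims[needed_field]

cred = sign_claims({"age_over_21": True, "name": "J. Doe", "address": "..."})
print(verify_and_extract(cred, "age_over_21"))  # True -- name, address never retained
```

Under such a scheme an age-gated service can store a single boolean rather than a full identity record, which is the compliance posture the bill's collection restrictions point toward.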
Legislation ID: 266150
Bill URL: View Bill
Senate Bill No. 760, known as the leading ethical AI development for kids act, establishes regulations for operators of companion chatbots, especially concerning their availability to minors. The bill outlines specific prohibitions on harmful interactions and sets forth civil penalties for violations, emphasizing the protection of minors in digital environments.
| Date | Action |
|---|---|
| 2025-12-17 | INTRODUCED BY SENATOR DAYNA POLEHANKI |
| 2025-12-17 | REFERRED TO COMMITTEE ON FINANCE, INSURANCE, AND CONSUMER PROTECTION |
Why Relevant: The bill specifically targets 'companion chatbots,' which are a specific application of generative artificial intelligence technology.
Mechanism of Influence: It imposes strict content restrictions and safety requirements on AI operators, effectively regulating the development and deployment of AI models intended for or accessible by minors.
Evidence:
Ambiguity Notes: The practical impact depends on the legal definition of 'companion chatbot'; if defined broadly, it could capture a wide range of LLM-based applications.
Why Relevant: The legislation addresses the user's interest in age-specific regulations and safety oversight for AI usage.
Mechanism of Influence: By restricting availability to 'covered minors' based on the risk of harmful outputs, it necessitates that AI developers implement age verification or robust safety filtering to avoid significant civil penalties.
Evidence:
Ambiguity Notes: The bill does not explicitly detail the technical method for age verification, leaving the implementation details to the operators or future regulatory guidance.
Legislation ID: 32376
Bill URL: View Bill
This bill prohibits landlords from using tenant screening software that relies on nonpublic competitor data to determine rent prices, as well as software that exhibits bias against protected classes. It amends existing statutes to include these prohibitions and establishes penalties for violations.
| Date | Action |
|---|---|
| 2025-02-24 | Author added Jones |
| 2025-02-20 | Authors added Sencer-Mura, Norris |
| 2025-02-19 | Introduction and first reading, referred to Housing Finance and Policy |
Why Relevant: The bill explicitly regulates the use of artificial intelligence and algorithms in the context of tenant background screening.
Mechanism of Influence: It creates a legal prohibition against using AI tools that result in biased outcomes for protected classes and establishes a statutory definition for AI within this regulatory framework.
Evidence:
Ambiguity Notes: The term 'disproportionately affect' may require further judicial or regulatory clarification to establish the specific metrics for determining bias.
Why Relevant: The legislation targets the use of algorithmic devices for price-fixing in the rental market.
Mechanism of Influence: It restricts the types of data (specifically nonpublic competitor data) that can be fed into algorithms used to determine rental prices, effectively regulating the operational inputs of automated pricing systems.
Evidence:
Ambiguity Notes: The distinction between 'nonpublic competitor data' and 'publicly available market data' could be a point of contention in enforcement.
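One established way to operationalize "disproportionately affect" is the selection-rate ratio, the "four-fifths rule" familiar from U.S. employment-discrimination guidelines, under which a ratio below 0.8 flags potential disparate impact. The bill does not name this metric; the sketch below, with hypothetical numbers, only illustrates the kind of test a court or regulator might apply to screening-software outcomes.

```python
# Four-fifths (80%) rule sketch for auditing screening outcomes.
# Hypothetical counts; the bill does not prescribe this or any metric.

def adverse_impact_ratio(approved: dict, applicants: dict) -> float:
    """Ratio of the lowest group approval rate to the highest.
    Values below 0.8 are conventionally treated as evidence of
    potential disparate impact."""
    rates = {group: approved[group] / applicants[group] for group in applicants}
    return min(rates.values()) / max(rates.values())

applicants = {"group_a": 100, "group_b": 100}
approved   = {"group_a": 60,  "group_b": 30}

ratio = adverse_impact_ratio(approved, applicants)
print(ratio, ratio < 0.8)  # 0.5 True -> would warrant scrutiny under a 4/5 test
```

A landlord or vendor audit under this statute would likely run a test of this shape per protected class, which is precisely where the metric ambiguity the note identifies would bite.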
Legislation ID: 53415
Bill URL: View Bill
This bill amends Minnesota Statutes to include a definition of artificial intelligence and explicitly prohibits its use in utilization review processes by health insurance organizations. The intent is to maintain human involvement in critical evaluations regarding healthcare services, thereby safeguarding the quality and reliability of healthcare decisions.
| Date | Action |
|---|---|
| 2025-03-03 | Introduction and first reading, referred to Commerce Finance and Policy |
Why Relevant: The bill establishes a formal legal definition of artificial intelligence within the state's statutes.
Mechanism of Influence: By defining AI, the bill sets the legal boundaries for what technologies are subject to the subsequent prohibitions and regulations.
Evidence:
Ambiguity Notes: The definition relies on the United States Code, which provides a specific federal framework but may be subject to future federal amendments.
Why Relevant: The bill directly regulates the application of AI by prohibiting its use in specific high-stakes decision-making processes.
Mechanism of Influence: It mandates that utilization reviews, evaluations, and appeals must be conducted by humans rather than automated AI systems, effectively banning AI from this sector of healthcare administration.
Evidence:
Ambiguity Notes: The phrase 'any aspect' is broad and could be interpreted to include not just final determinations but also administrative or preparatory tasks involving AI.
Legislation ID: 91049
Bill URL: View Bill
This bill introduces a prohibition against the use of artificial intelligence to adjust product prices in real time. It defines artificial intelligence and outlines the prohibited practices related to pricing strategies that could unfairly manipulate consumer behavior. The enforcement of this regulation is designated to the attorney general under existing consumer protection laws.
| Date | Action |
|---|---|
| 2025-03-20 | Author added Kotyza-Witthuhn |
| 2025-03-17 | Introduction and first reading, referred to Commerce Finance and Policy |
Why Relevant: The bill directly regulates a specific commercial application of artificial intelligence and establishes legal boundaries for its use in pricing strategies.
Mechanism of Influence: It creates a statutory prohibition against AI-driven real-time price adjustments, subjecting violators to enforcement by the attorney general under consumer protection laws.
Evidence:
Ambiguity Notes: The specific definition of 'artificial intelligence' used in the bill is referenced but not detailed in the text, which could impact the scope of technologies covered.
Legislation ID: 90999
Bill URL: View Bill
This bill amends Minnesota Statutes by adding a subdivision that explicitly prohibits health carriers from using algorithms or artificial intelligence in the approval or denial process of prior authorization requests. This measure aims to safeguard the integrity of the healthcare authorization process by preventing automated decision-making that could negatively impact patients.
| Date | Action |
|---|---|
| 2025-03-24 | Author added Rehrauer |
| 2025-03-17 | Introduction and first reading, referred to Commerce Finance and Policy |
Why Relevant: The legislation explicitly targets and restricts the application of artificial intelligence and algorithmic decision-making within the healthcare insurance sector.
Mechanism of Influence: By banning these technologies for prior authorization, the law mandates that health carriers must use non-AI methods to process requests, thereby preventing automated denials and requiring human-centric oversight.
Evidence:
Ambiguity Notes: The bill uses broad terms like 'algorithms' and 'artificial intelligence programs' without providing specific technical definitions, which could potentially encompass a wide range of data processing software.
Legislation ID: 90797
Bill URL: View Bill
This bill seeks to amend the Minnesota Consumer Data Privacy Act by defining health data as a form of sensitive data and introducing stricter regulations surrounding the processing of such data. It aims to ensure that consumers have greater control over their personal health information and that their privacy is adequately protected. The bill includes definitions for key terms related to data privacy and establishes requirements for consent and data processing.
| Date | Action |
|---|---|
| 2025-03-24 | Introduction and first reading, referred to Judiciary Finance and Civil Law |
Why Relevant: The bill regulates 'targeted advertising,' which is a primary application of artificial intelligence and machine learning algorithms.
Mechanism of Influence: By defining and regulating targeted advertising based on inferred preferences, the bill places constraints on how AI-driven profiling and ad-delivery systems can operate using consumer data.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'artificial intelligence,' but the definition of targeted advertising inherently covers algorithmic systems that process personal data to predict consumer behavior.
Why Relevant: The bill mandates 'data privacy assessments' for specific processing activities, a common regulatory tool used to oversee high-risk AI systems.
Mechanism of Influence: Controllers must document and assess processing activities, which in modern privacy frameworks typically includes automated decision-making and AI-driven data processing that poses a risk to consumers.
Evidence:
Ambiguity Notes: The summary does not specify if 'specific processing activities' explicitly includes automated decision-making or profiling, though these are standard inclusions in similar state privacy laws.
Why Relevant: The regulation of 'biometric data' is a core component of AI oversight, as AI is the primary technology used to process and identify individuals via biometrics.
Mechanism of Influence: By classifying biometric data as sensitive and requiring specific handling, the bill regulates the input data necessary for facial recognition, voice analysis, and other AI-based biometric systems.
Evidence:
Ambiguity Notes: None
Legislation ID: 33474
Bill URL: View Bill
This bill establishes regulations for social media platforms operating in Minnesota, specifically targeting algorithms that direct user-generated content towards minors. It defines key terms related to social media usage, sets forth prohibitions on algorithmic targeting, and outlines requirements for parental consent for minors. The bill also includes provisions for liability and penalties for violations, aiming to create a safer online environment for children.
| Date | Action |
|---|---|
| 2025-02-20 | Author added Stephenson |
| 2025-02-13 | Authors added Engen and Burkel |
| 2025-02-10 | Introduction and first reading, referred to Commerce Finance and Policy |
Why Relevant: The bill directly regulates 'social media algorithms,' which are a primary application of AI and machine learning used for content recommendation and user profiling.
Mechanism of Influence: It restricts the functional application of recommendation AI by prohibiting its use for targeting content to minors, effectively mandating a non-algorithmic (chronological) interface for that user group.
Evidence:
Ambiguity Notes: The definition of 'social media algorithm' is broad and likely encompasses various machine learning models used for engagement optimization.
Why Relevant: The bill directly addresses age verification and usage restrictions for minors, a core area of platform and AI regulation.
Mechanism of Influence: By requiring 'verifiable parental consent' for account creation, the law forces platforms to implement age-gating and identity verification systems to distinguish between minors and adults.
Evidence:
Ambiguity Notes: The specific technical standards for 'verifiable' consent are not detailed in the summary, leaving room for interpretation on how platforms must verify age and parental status.
Legislation ID: 30477
Bill URL: View Bill
This bill establishes regulations concerning social media platforms operating in Minnesota, particularly focusing on the use of algorithms that target minors. It defines key terms related to social media and outlines prohibitions against targeting user-generated content at minors through recommendation features. The bill also mandates parental consent for minors to create accounts and outlines penalties for non-compliance.
| Date | Action |
|---|---|
| 2025-02-17 | Introduction and first reading |
Why Relevant: The bill directly regulates 'social media algorithms' and 'recommendation features,' which are fundamental applications of artificial intelligence in content curation.
Mechanism of Influence: It prohibits the deployment of algorithmic targeting systems for a specific demographic (minors), effectively restricting how AI models can be used to process and serve user-generated content.
Evidence:
Ambiguity Notes: The definition of 'social media algorithm' is broad and likely encompasses various machine learning and automated decision-making systems used by platforms.
Why Relevant: The bill imposes age verification and usage requirements on platforms serving minors.
Mechanism of Influence: The bill mandates 'verifiable parental consent' for account creation by minors, which necessitates the implementation of age verification or identity verification technologies by the platforms.
Evidence:
Ambiguity Notes: The bill does not specify the technical standards for 'verifiable' consent, leaving the implementation details to the platforms or future regulatory guidance.
Legislation ID: 30391
Bill URL: View Bill
This bill amends existing Minnesota statutes to explicitly outlaw the possession, sale, and distribution of child-like sex dolls and artificial intelligence-generated child sexual abuse material. It establishes definitions, penalties, and registration requirements for offenders associated with sexual crimes against minors, thereby reinforcing protections for children and addressing emerging threats posed by technology.
| Date | Action |
|---|---|
| 2025-02-20 | Introduction and first reading |
Why Relevant: The legislation explicitly regulates artificial intelligence by categorizing AI-generated depictions of minors in sexual conduct as illegal pornographic material.
Mechanism of Influence: It creates a legal framework where the creation, possession, or distribution of specific AI-generated content results in criminal prosecution and mandatory sex offender registration.
Evidence:
Ambiguity Notes: While the bill targets AI-generated CSAM, the technical criteria for what constitutes an 'AI-generated image' versus a digitally manipulated or traditionally rendered image may require further legal clarification.
Legislation ID: 30020
Bill URL: View Bill
The bill amends Minnesota Statutes to define artificial intelligence and explicitly prohibit its use in the utilization review processes conducted by organizations. This includes any reviews, evaluations, determinations, or appeals related to health insurance.
| Date | Action |
|---|---|
| 2025-03-10 | Author added Mitchell |
| 2025-02-27 | Authors added Boldon; Mann; Mohamed |
| 2025-02-24 | Introduction and first reading |
| 2025-02-24 | Referred to Commerce and Consumer Protection |
Why Relevant: The legislation directly regulates the application of artificial intelligence by prohibiting its use in a specific industry sector (health insurance).
Mechanism of Influence: It creates a legal prohibition that prevents utilization review organizations from using AI tools for decision-making, evaluations, or appeals processes, effectively mandating human-only review.
Evidence:
Ambiguity Notes: The bill adopts the federal definition of AI from 15 U.S.C. 9401, which is broad; however, the prohibition itself is narrow and specific to the utilization review context.
Legislation ID: 52905
Bill URL: View Bill
This bill introduces regulations concerning tenant screening algorithms used by landlords. It prohibits the use of software that relies on nonpublic competitor data to set rental prices and restricts the use of algorithms that may lead to discrimination against protected classes. The bill also outlines the consequences for violations and amends existing statutes related to tenant reporting and remedies.
| Date | Action |
|---|---|
| 2025-04-03 | Author added Fateh |
| 2025-03-27 | Author stricken Housley |
| 2025-03-03 | Introduction and first reading |
| 2025-03-03 | Referred to Judiciary and Public Safety |
Why Relevant: The provision explicitly regulates the use of AI software and algorithms in the context of tenant background screening.
Mechanism of Influence: It creates a legal prohibition against using AI tools that produce biased outcomes, effectively necessitating that landlords and software providers audit their algorithms for discriminatory impacts.
Evidence:
Ambiguity Notes: The term 'disproportionately affect' is a legal standard that may require specific statistical thresholds or algorithmic auditing protocols to define compliance.
Why Relevant: This section regulates 'algorithmic devices' used for automated financial decision-making (rent setting).
Mechanism of Influence: It restricts the data inputs available to AI models, specifically banning the use of nonpublic competitor data, which impacts how pricing algorithms are trained and deployed.
Evidence:
Ambiguity Notes: The definition of 'algorithmic devices' is broad and likely encompasses various forms of automated and machine-learning-based pricing software.
Legislation ID: 90468
Bill URL: View Bill
This bill amends various sections of the Minnesota Consumer Data Privacy Act to redefine and expand the scope of sensitive data, particularly health data. It establishes clearer definitions and protections for personal data, including biometric and genetic information, and introduces requirements for consent, data processing, and consumer rights regarding their personal information.
| Date | Action |
|---|---|
| 2025-04-22 | Author added Oumou Verbeten |
| 2025-03-24 | Introduction and first reading |
| 2025-03-24 | Referred to Commerce and Consumer Protection |
Why Relevant: The bill regulates biometric and genetic data, which are foundational inputs for many artificial intelligence systems, particularly those involving facial recognition and predictive health analytics.
Mechanism of Influence: By requiring clear consent and establishing processing standards for biometric data, the law restricts how AI developers can collect and utilize sensitive datasets for training or deploying identification algorithms.
Evidence:
Ambiguity Notes: The term 'processing' is broad and, while not explicitly naming AI, encompasses the computational methods used to train and run machine learning models on personal data.
Why Relevant: The legislation mandates transparency and consumer control over data processing, which aligns with AI disclosure and oversight goals.
Mechanism of Influence: The requirement for 'clear and unambiguous' consent for processing sensitive data forces AI companies to provide disclosures to users before their data is ingested into automated systems.
Evidence:
Ambiguity Notes: The bill focuses on data privacy rather than the specific algorithmic outputs or weights of AI models, but it regulates the 'fuel' (data) that powers AI.
Legislation ID: 96164
Bill URL: View Bill
This bill introduces a prohibition against the use of artificial intelligence to dynamically set product prices based on various market factors. It defines artificial intelligence and outlines the enforcement powers of the attorney general in relation to this prohibition. The goal is to prevent unfair pricing practices that could arise from automated systems adjusting prices in real time.
| Date | Action |
|---|---|
| 2025-04-24 | Author added Boldon |
| 2025-03-27 | Introduction and first reading |
| 2025-03-27 | Referred to Commerce and Consumer Protection |
Why Relevant: The bill directly regulates a specific application of artificial intelligence in the commercial sector, which falls under the user's request for legislation regulating AI.
Mechanism of Influence: It establishes a legal prohibition on AI-driven automated systems for price setting and grants the attorney general power to oversee and enforce compliance.
Evidence:
Ambiguity Notes: The specific definition of 'artificial intelligence' used in the bill is not provided in the abstract, which could determine the breadth of enforcement.
Legislation ID: 270712
Bill URL: View Bill
This bill mandates that, starting with the ninth-grade class of the 2029-2030 school year, public high school students in Mississippi must earn one unit of credit in a computer science course or an industry-aligned career and technical education (CTE) course with embedded computer science. The legislation aims to enhance students' understanding of emerging technologies, including artificial intelligence, and establishes requirements for the courses offered to meet state graduation criteria.
| Date | Action |
|---|---|
| 2026-01-21 | (H) Title Suff Do Pass |
| 2026-01-16 | (H) Referred To Education |
Why Relevant: The bill explicitly identifies artificial intelligence as a core subject area for the mandated computer science or CTE curriculum.
Mechanism of Influence: By requiring AI-related education for graduation, the law ensures a baseline level of AI literacy among the future workforce and public in Mississippi.
Evidence:
Ambiguity Notes: While AI is mentioned as an aim, the specific standards for what constitutes 'understanding' of AI or the depth of the AI curriculum are left to the State Board of Education's approval process.
Legislation ID: 270736
Bill URL: View Bill
House Bill No. 1048 seeks to regulate the use of artificial intelligence in mental and behavioral health care by prohibiting AI systems from providing such care and restricting licensed professionals from using AI in their practice. It allows limited use of AI for administrative support services and establishes penalties for violations.
| Date | Action |
|---|---|
| 2026-01-16 | (H) Referred To Public Health and Human Services;Accountability, Efficiency, Transparency |
Why Relevant: The bill directly regulates the deployment and use of artificial intelligence in the healthcare sector.
Mechanism of Influence: It prohibits AI from performing core professional tasks (therapeutic decisions and client interactions) and restricts its use to administrative functions, thereby setting boundaries on AI autonomy in clinical settings.
Evidence:
Ambiguity Notes: The scope of 'administrative and supplementary support services' may require further clarification to distinguish between purely clerical tasks and those that might influence clinical outcomes.
Why Relevant: The legislation includes enforcement mechanisms and oversight for AI-related violations.
Mechanism of Influence: It grants the Attorney General investigative powers and establishes significant civil penalties ($15,000) for non-compliance, while also integrating AI usage standards into professional licensing disciplinary grounds.
Evidence:
Ambiguity Notes: None
Legislation ID: 270741
Bill URL: View Bill
This bill establishes regulations for businesses in Mississippi that generate over $25 million in revenue, focusing on their responsibilities towards consumer personal information. It outlines consumer rights to access, correct, delete, or opt-out of data processing, and mandates that businesses implement robust data security measures. The bill also designates the Attorney General as the authority for enforcement and provides for penalties for violations.
| Date | Action |
|---|---|
| 2026-01-16 | (H) Referred To Judiciary A |
Why Relevant: The bill mandates age verification for specific types of digital content providers.
Mechanism of Influence: Commercial entities publishing 'harmful material' are legally required to implement reasonable age verification methods, which often involves the use of third-party identity verification software or AI-based age estimation tools.
Evidence:
Ambiguity Notes: The term 'reasonable age verification' is not technologically defined, leaving open whether AI-based biometric estimation or document-based verification is required.
Why Relevant: The bill requires data protection assessments for targeted advertising and sensitive data processing, which are primary use cases for AI and machine learning models.
Mechanism of Influence: Businesses must document and weigh the benefits of data processing against risks to consumers. This creates a regulatory hurdle for deploying AI models used for profiling or behavioral targeting.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence' or 'Automated Decision Making,' but the regulated activities (targeted advertising and sensitive data processing) are almost exclusively driven by these technologies in the current market.
Why Relevant: The bill restricts the use of automated systems for targeting minors.
Mechanism of Influence: By prohibiting targeted advertising and the use of precise geolocation for known minors, the bill effectively bans the use of recommendation algorithms and AI-driven ad-tech targeting this demographic.
Evidence:
Ambiguity Notes: The effectiveness of this provision depends on the definition of 'known minors' and whether businesses must proactively identify them.
Legislation ID: 270804
Bill URL: View Bill
House Bill No. 1082 amends Section 67-1-81 of the Mississippi Code to impose additional penalties on permit holders who sell alcohol to minors. Specifically, after a third offense, the Commissioner of Revenue can require the use of an independent age-verification app on the premises to ensure compliance with age restrictions. The bill also outlines specific fines and penalties for both permit holders and individuals under 21 who violate alcohol purchase laws.
| Date | Action |
|---|---|
| 2026-01-16 | (H) Referred To Judiciary A |
Why Relevant: The bill mandates the use of specific age-verification technology as a regulatory compliance measure.
Mechanism of Influence: It requires permit holders with three or more offenses to implement a third-party age-verification app to validate customer ages, setting a performance standard of 85% accuracy.
Evidence:
Ambiguity Notes: While the bill does not explicitly use the term 'Artificial Intelligence,' age-verification apps frequently utilize AI-driven biometric analysis or automated document verification to meet accuracy standards.
Legislation ID: 273949
Bill URL: View Bill
The Mississippi Dyslexia Generative Artificial Intelligence Education and Workforce Development Act seeks to create an open-access dyslexia curriculum and AI system, designed to enhance educational resources for public schools, correctional facilities, and workforce training programs. It aims to improve literacy for students and adults with dyslexia while promoting research and development in dyslexia education.
| Date | Action |
|---|---|
| 2026-01-19 | (H) Referred To Education;Appropriations A |
Why Relevant: The act specifically mandates the creation and deployment of a generative artificial intelligence system for educational purposes.
Mechanism of Influence: It establishes a formal state program (the Mississippi Dyslexia Generative Artificial Intelligence Education and Workforce Development Program) to govern the development and use of AI in specific public sectors.
Evidence:
Ambiguity Notes: While the act focuses on development, the 'Rulemaking Authority' allows for future regulatory constraints or usage standards not explicitly detailed in the text.
Why Relevant: The legislation includes specific legal definitions for AI technologies.
Mechanism of Influence: By defining 'Generative artificial intelligence system,' the law sets the legal boundaries for what technologies fall under the program's scope and subsequent regulations.
Evidence:
Ambiguity Notes: None
Why Relevant: The act requires oversight and reporting on the AI system's performance and outcomes.
Mechanism of Influence: It mandates a formal research and evaluation component with annual reports to the Legislature, serving as a mechanism for government oversight and performance auditing.
Evidence:
Ambiguity Notes: None
Why Relevant: The act grants regulatory authority to state departments regarding the AI program.
Mechanism of Influence: The department is empowered to create regulations, which could include requirements for disclosures, data usage, or safety standards for the AI system.
Evidence:
Ambiguity Notes: None
Legislation ID: 282753
Bill URL: View Bill
House Bill No. 1576 establishes regulations for interactive computer service providers regarding minors. It mandates that such providers cannot engage in contracts with minors unless parental consent is obtained. The bill outlines requirements for parental access to user history, age verification methods, and penalties for non-compliance. Additionally, it allows the Attorney General to enforce these regulations and provides parents with the right to file civil complaints against violators.
| Date | Action |
|---|---|
| 2026-01-19 | (H) Referred To Judiciary A |
Why Relevant: The bill mandates age verification and parental consent for interactive computer services, which is a specific area of interest for AI regulation concerning minors.
Mechanism of Influence: AI platforms and applications falling under the definition of 'interactive computer service' would be required to implement age verification and obtain parental consent before allowing minors to use their services.
Evidence:
Ambiguity Notes: The definition of 'interactive computer service' is broad and does not explicitly name AI, but functionally covers the platforms where AI is most commonly deployed to minors.
Legislation ID: 282800
Bill URL: View Bill
The Mississippi Artificial Intelligence and STEM Education Innovation Act authorizes the use of artificial intelligence in public schools to improve STEM instruction. It establishes pilot programs for AI-assisted learning, teacher support, and career pathways while ensuring safeguards for student data privacy. The act aims to address challenges in STEM achievement and workforce readiness, particularly in underserved areas.
| Date | Action |
|---|---|
| 2026-01-19 | (H) Referred To Education |
Why Relevant: The act mandates data privacy compliance and restricts the commercial use of data collected by AI tools.
Mechanism of Influence: AI tools used in the pilot program must comply with FERPA and state privacy laws, specifically prohibiting the sale of student data or its use for noneducational purposes.
Evidence:
Ambiguity Notes: The term 'noneducational purposes' is not explicitly defined, which could lead to varying interpretations of permissible data use by AI vendors.
Why Relevant: It requires oversight through annual reporting and establishes standards for the ethical use of AI in an educational setting.
Mechanism of Influence: The Department of Education is required to report on outcomes and provide professional development on ethical AI usage, creating a framework for responsible implementation.
Evidence:
Ambiguity Notes: The criteria for 'ethical use' are not detailed, leaving the specific standards to be determined by the Department of Education during rule promulgation.
Why Relevant: The legislation provides formal legal definitions for artificial intelligence and related technologies.
Mechanism of Influence: By defining 'artificial intelligence' and 'AI-assisted learning tool,' the act sets the regulatory scope for which technologies are subject to the pilot's requirements and privacy protections.
Evidence:
Ambiguity Notes: The summary mentions that the section defines these terms but does not provide the specific text of the definitions.
Legislation ID: 282818
Bill URL: View Bill
House Bill No. 1618 seeks to establish regulations for interactive computer service providers regarding minors. It prohibits these providers from entering contracts with minors without parental consent, restricts minors from accessing harmful materials, and mandates reasonable age verification methods. The bill also grants the Attorney General the authority to enforce these provisions and impose civil penalties for violations.
| Date | Action |
|---|---|
| 2026-01-19 | (H) Referred To Judiciary A |
Why Relevant: The bill's requirements for age verification and parental consent directly impact how AI-driven platforms and interactive computer services are accessed by younger demographics.
Mechanism of Influence: AI service providers falling under the definition of 'digital service providers' would be legally required to implement age gates and obtain express parental consent before allowing minors to create accounts or interact with the service.
Evidence:
Ambiguity Notes: The term 'interactive computer service' is broad and typically includes AI platforms, but the bill does not explicitly name 'Artificial Intelligence'; whether particular AI systems fall within scope will depend on the definition of 'digital services that allow social interaction.'
Why Relevant: The mandate to mitigate harmful content is highly relevant to AI safety and the deployment of Large Language Models (LLMs) or generative AI that may produce harmful outputs.
Mechanism of Influence: AI companies would be required to develop and implement safety strategies to prevent their models from generating or exposing minors to content related to self-harm, substance abuse, or other defined harmful behaviors.
Evidence:
Ambiguity Notes: The effectiveness of 'strategies to mitigate exposure' is subjective and may require AI companies to perform internal audits or implement specific filtering layers to comply with the law.
Why Relevant: The bill restricts data collection practices, which is a core component of how AI models are personalized or how user data is utilized for iterative training.
Mechanism of Influence: Providers must limit data collection to what is strictly necessary, potentially hindering the ability of AI services to collect extensive behavioral data from minors for targeted advertising or model optimization.
Evidence:
Ambiguity Notes: The definition of 'necessary for providing the service' could be interpreted narrowly, potentially impacting the functionality of personalized AI assistants.
Legislation ID: 284060
Bill URL: View Bill
The bill establishes the AI Task Force, which will include both voting and non-voting members with expertise in various fields related to AI technology. The task force is responsible for developing recommendations for the regulation of AI, reviewing existing laws, and proposing necessary revisions to the Mississippi Code. It aims to foster innovation while addressing ethical and societal concerns related to AI deployment.
| Date | Action |
|---|---|
| 2026-01-19 | (H) Referred To Public Health and Human Services |
Why Relevant: The legislation is directly focused on the creation of a regulatory oversight body for artificial intelligence.
Mechanism of Influence: By establishing a task force to develop recommendations for AI regulation and propose revisions to state code, this bill serves as the foundational step for future AI-specific mandates such as audits or disclosures.
Evidence:
Ambiguity Notes: The bill focuses on the formation of the task force rather than prescribing specific technical requirements like weight submissions or age verification at this stage.
This bill seeks to define artificial intelligence as a machine-based system capable of making predictions, recommendations, or decisions based on human-defined objectives. It outlines the capabilities of such systems, including their ability to perceive environments, analyze data, and formulate options for action or information.
| Date | Action |
|---|---|
| 2026-01-28 | (H) Title Suff Do Pass |
| 2026-01-19 | (H) Referred To Technology |
Why Relevant: The bill provides the foundational definition of AI, which is a prerequisite for any subsequent regulation, disclosure requirements, or oversight mechanisms.
Mechanism of Influence: By defining what constitutes an AI system, this law determines the scope of future regulatory actions such as audits, disclosures, or government oversight.
Evidence:
Ambiguity Notes: The phrase "human-defined objectives" is broad and could potentially encompass a wide range of software from simple algorithms to complex generative models.
Legislation ID: 270251
Bill URL: View Bill
This bill establishes additional penalties for defendants who knowingly and intentionally use artificial intelligence systems in the commission of designated offenses. Depending on whether the offense is classified as a misdemeanor or felony, the penalties include increased terms of imprisonment and fines. Additionally, the bill amends existing laws to prohibit the transmission and possession of visual materials depicting child exploitation, reinforcing protections against such crimes.
| Date | Action |
|---|---|
| 2026-01-16 | (H) Referred To Judiciary B |
Why Relevant: The bill directly addresses the use of AI in criminal activities and provides a legal definition for artificial intelligence systems.
Mechanism of Influence: It regulates the use of AI by creating a deterrent through enhanced sentencing and legal definitions that courts must apply to AI-assisted crimes.
Evidence:
Ambiguity Notes: The term 'designated offense' determines the scope of the AI-related penalties, which may vary depending on other sections of the code.
Why Relevant: It imposes procedural requirements on the legal system regarding AI-related crimes.
Mechanism of Influence: Prosecutors must explicitly cite the use of AI in indictments to trigger the enhanced penalties, creating a formal legal record and oversight mechanism for AI misuse.
Evidence:
Ambiguity Notes: None
Legislation ID: 249695
Bill URL: View Bill
Senate Bill No. 2050 amends Section 23-15-897 of the Mississippi Code to mandate that any qualified political advertisement utilizing artificial intelligence must disclose this fact to the public. The bill defines what constitutes a qualified political advertisement and the nature of artificial intelligence. It specifies the required information for disclosure, outlines who is exempt from liability for non-disclosure, and establishes civil penalties for violations. The bill also details the legal recourse available to aggrieved parties and the attorney general in cases of non-compliance.
| Date | Action |
|---|---|
| 2026-01-08 | (S) Referred To Elections;Technology |
Why Relevant: The bill directly regulates the use of artificial intelligence in political campaigning by requiring mandatory disclosures.
Mechanism of Influence: It imposes legal obligations on candidates and political committees to label AI-generated content, with specific technical requirements for how those labels appear in audio and video formats, and establishes civil penalties for failure to comply.
Evidence:
Ambiguity Notes: The scope of the regulation depends on the specific definitions provided for 'artificial intelligence' and 'qualified political advertisement' within the bill.
Legislation ID: 273109
Bill URL: View Bill
Senate Bill No. 2294 establishes a requirement for public high school students in Mississippi to earn one unit of credit in a computer science course or a career and technical education (CTE) course with embedded computer science instruction before graduation, starting with the ninth-grade class of 2029-2030. The bill also mandates that these courses include fundamental concepts of emerging technologies, such as artificial intelligence, and defines relevant terms for clarity in implementation.
| Date | Action |
|---|---|
| 2026-01-19 | (S) Referred To Education |
Why Relevant: The bill explicitly mentions artificial intelligence as a required component of the mandatory computer science curriculum for high school graduation.
Mechanism of Influence: By mandating the inclusion of AI in educational standards, the law ensures that the state's education system addresses the technology's fundamental concepts and societal implications.
Evidence:
Ambiguity Notes: The bill focuses on education and curriculum standards rather than the regulation of AI development, deployment, or oversight mechanisms like audits or weight submissions.
Legislation ID: 273204
Bill URL: View Bill
The Artificial Intelligence Fraud and Accountability Act aims to define artificial intelligence fraud and create a civil cause of action for those harmed by such fraudulent activities. It outlines the remedies available, including the possibility of punitive damages for willful violations and allows for injunctions against violators. The Act holds developers and users of AI systems accountable for any fraudulent use, promoting accountability in the deployment of AI technologies.
| Date | Action |
|---|---|
| 2026-01-19 | (S) Referred To Judiciary, Division A |
Why Relevant: This legislation directly addresses the regulation and accountability of artificial intelligence by establishing legal consequences for its misuse in fraudulent activities.
Mechanism of Influence: It creates a civil cause of action allowing for compensatory, statutory, and punitive damages, as well as injunctions, which forces developers and users to implement safeguards against fraudulent deployment.
Evidence:
Ambiguity Notes: The definition of 'deceptive use' and the threshold for 'knowingly facilitate' are broad, potentially leaving room for interpretation regarding the extent of a developer's responsibility for third-party misuse.
Legislation ID: 274071
Bill URL: View Bill
The Artificial Intelligence in Education Task Force Act aims to create a task force that will explore potential applications of artificial intelligence in K-12 education. The task force will develop policy recommendations for the responsible use of AI by students and educators, assess workforce needs related to AI, and ensure alignment with industry demands. It will consist of twelve members appointed by state officials and will conduct meetings, gather data, and submit reports on its findings and recommendations.
| Date | Action |
|---|---|
| 2026-01-19 | (S) Referred To Technology;Education |
Why Relevant: The act focuses on developing policy recommendations for the responsible use of AI in an educational setting, which aligns with the user's interest in AI regulation.
Mechanism of Influence: The task force is tasked with creating guidelines and policy frameworks that will likely shape future regulations for AI deployment in schools.
Evidence:
Ambiguity Notes: The term 'responsible use' is not defined, leaving room for a wide range of policy interpretations from restrictive to permissive.
Why Relevant: The legislation mandates the assessment of ethical and data privacy implications of AI technology.
Mechanism of Influence: By requiring an assessment of ethics and privacy, the task force's findings will influence how AI systems are vetted for safety and compliance before use by minors.
Evidence:
Ambiguity Notes: The specific ethical frameworks or privacy standards to be used for the assessment are not specified in the text.
Why Relevant: The act requires the evaluation of AI technology and reporting to government officials, which serves as a form of oversight.
Mechanism of Influence: The task force conducts evaluations and submits interim and final reports to state officials, providing a mechanism for government oversight of AI applications.
Evidence:
Ambiguity Notes: The scope of 'evaluations' is not detailed, so it is unclear if this includes technical audits or just general policy reviews.
Senate Bill No. 2437 aims to define artificial intelligence as a machine-based system capable of making predictions, recommendations, or decisions based on human-defined objectives. The bill outlines how AI systems utilize both machine and human inputs to understand environments, create models through analysis, and generate options for actions or information.
| Date | Action |
|---|---|
| 2026-01-28 | (S) Title Suff Do Pass |
| 2026-01-19 | (S) Referred To Technology |
Why Relevant: The bill provides the foundational legal definition of AI, which is a prerequisite for any regulatory framework, disclosure requirement, or oversight mechanism.
Mechanism of Influence: By codifying this definition into state law, it establishes the legal scope for what technologies will be subject to future AI-specific regulations, audits, or government oversight in Mississippi.
Evidence:
Ambiguity Notes: The definition is broad, utilizing terms like 'machine-based system' and 'human-defined objectives,' which could potentially encompass a wide array of traditional software and algorithms beyond modern neural networks.
Legislation ID: 284391
Bill URL: View Bill
Senate Bill No. 2672 seeks to bring forward various sections of the Mississippi Code related to information technology services, specifically establishing the Mississippi Department of Information Technology Services (MDITS) as the central authority for state technology procurement and management. The bill also proposes amendments to existing sections, ensuring cohesive planning and cooperation among state agencies for the optimal use of technology resources.
| Date | Action |
|---|---|
| 2026-01-19 | (S) Referred To Economic and Workforce Development |
Why Relevant: The bill governs the procurement and management of all information technology for state agencies. As AI is a subset of information technology, this department would be the primary body overseeing how AI tools are acquired and utilized by the state government.
Mechanism of Influence: MDITS is empowered to establish rules for competitive procurement and develop statewide plans for technology. This creates the administrative structure through which any future AI-specific procurement standards or usage policies would be implemented.
Evidence:
Ambiguity Notes: The bill uses the broad term 'information technology' without specific mention of artificial intelligence, machine learning, or automated decision systems, leaving the extent of AI-specific oversight to the department's rule-making authority.
Legislation ID: 235040
Bill URL: View Bill
This bill introduces the Missouri Artificial Intelligence Transparency and Accountability Act which mandates that AI-generated content must be clearly labeled and logged. It defines key terms related to AI content, outlines requirements for labeling and maintaining usage logs, and establishes enforcement mechanisms through the attorney general. The bill also allows for the creation of rules by the Missouri Department of Commerce and Insurance to ensure compliance and public awareness regarding AI-generated content.
| Date | Action |
|---|---|
| 2026-01-27 | Second Read and Referred S General Laws Committee |
| 2026-01-07 | S First Read |
| 2025-12-01 | Prefiled |
Why Relevant: The bill directly addresses the user's interest in requiring disclosures for AI-generated content.
Mechanism of Influence: It mandates specific disclosure formats for different media types, including verbal disclosures for audio, watermarks for images and video, and text labels for written content, ensuring the public is aware when content is AI-generated.
Evidence:
Ambiguity Notes: The term 'public consumption' is used but not fully defined in the abstract, which could impact the scope of which AI-generated materials require labeling.
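The media-specific disclosure formats described above can be illustrated with a short sketch (the function and label strings below are hypothetical, not drawn from the bill text):

```python
# Hypothetical sketch of the media-specific labeling scheme described above.
# The bill mandates verbal disclosures for audio, watermarks for images and
# video, and text labels for written content; the wording here is illustrative.

def required_disclosure(media_type: str) -> str:
    """Return the disclosure format the bill would require for a media type."""
    formats = {
        "audio": "verbal disclosure stating the content is AI-generated",
        "image": "embedded watermark identifying AI generation",
        "video": "embedded watermark identifying AI generation",
        "text": "clear text label stating the content is AI-generated",
    }
    try:
        return formats[media_type]
    except KeyError:
        raise ValueError(f"unrecognized media type: {media_type!r}")

print(required_disclosure("audio"))
```

The mapping makes the compliance question mechanical: a deployer first classifies the output medium, then applies the matching label format.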
Why Relevant: The bill aligns with the user's interest in AI regulation and oversight through record-keeping requirements.
Mechanism of Influence: By requiring developers and deployers to maintain usage logs for seven years, including user identity and input/output descriptions, the law creates an audit trail for government oversight and accountability.
Evidence:
Ambiguity Notes: While the bill requires logs, it does not explicitly mention 'audits' by third parties, though the logs serve as the primary data source for such oversight.
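The audit-trail requirement above can be sketched as a minimal record structure (field names and the purge helper are assumptions for illustration; the bill specifies only the retention period and the categories of information to be logged):

```python
# Hypothetical sketch of the usage-log record described above. The bill
# requires developers and deployers to retain logs for seven years, including
# user identity and descriptions of inputs and outputs; field names here are
# illustrative, not statutory.
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=7 * 365)  # seven-year retention period

@dataclass
class UsageLogEntry:
    timestamp: datetime
    user_identity: str
    input_description: str
    output_description: str

    def may_be_purged(self, now: datetime) -> bool:
        """An entry may only be deleted once the retention window has passed."""
        return now - self.timestamp > RETENTION

entry = UsageLogEntry(
    timestamp=datetime(2026, 1, 7),
    user_identity="user-123",
    input_description="prompt requesting a product image",
    output_description="AI-generated image, watermarked",
)
print(entry.may_be_purged(datetime(2030, 1, 1)))  # → False: still within retention
```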
Why Relevant: The bill establishes the regulatory and enforcement mechanisms requested by the user.
Mechanism of Influence: It grants the Attorney General the power to enforce the act and impose penalties up to $100,000 per violation, while also allowing for private civil actions.
Evidence:
Ambiguity Notes: None
Legislation ID: 235111
Bill URL: View Bill
This bill repeals the existing section 484.020 and enacts a new provision that prohibits individuals and entities from engaging in the practice of law without proper licensing. It specifically addresses the unauthorized provision of legal services, including those facilitated by artificial intelligence, and establishes penalties for violations.
| Date | Action |
|---|---|
| 2026-01-27 | Second Read and Referred S General Laws Committee |
| 2026-01-07 | S First Read |
| 2025-12-02 | Prefiled |
Why Relevant: The legislation specifically targets the use of artificial intelligence in the delivery of legal services, categorizing unauthorized AI-driven legal assistance as a violation of law.
Mechanism of Influence: By explicitly mentioning AI, the bill subjects AI developers and platforms providing legal tools to the same licensing requirements and penalties as human practitioners, effectively regulating the commercial deployment of legal AI in the state.
Evidence:
Ambiguity Notes: The phrase 'facilitated by artificial intelligence' is not strictly defined, which could lead to broad interpretations covering a wide range of software from basic document automation to advanced generative AI legal advice.
Legislation ID: 235160
Bill URL: View Bill
This bill introduces a new section to chapter 407 of the Missouri statutes, defining artificial intelligence and outlining the legal implications for entities that develop or deploy AI in mental health contexts. It prohibits advertising AI as capable of providing therapy services and establishes penalties for violations.
| Date | Action |
|---|---|
| 2026-01-07 | S First Read |
| 2025-12-16 | Prefiled |
Why Relevant: The legislation directly addresses the regulation and disclosure requirements for AI deployment, specifically targeting the misrepresentation of AI capabilities in a professional context.
Mechanism of Influence: It imposes a legal prohibition on advertising AI as a mental health professional or therapy provider and establishes a civil penalty structure for enforcement by the Attorney General.
Evidence:
Ambiguity Notes: The term 'providing therapy' may be subject to interpretation regarding whether it encompasses non-clinical wellness support or emotional support chatbots.
Legislation ID: 250869
Bill URL: View Bill
This bill introduces the "AI Non-Sentience and Responsibility Act", which clarifies that artificial intelligence systems are non-sentient entities and outlines the legal responsibilities of developers, manufacturers, and owners of AI. It stipulates that AI cannot hold legal personhood, cannot enter into marriage or be appointed to corporate roles, and establishes liability for harm caused by AI systems, ensuring that responsibility remains with human actors.
| Date | Action |
|---|---|
| 2026-01-07 | S First Read |
| 2025-12-29 | Prefiled |
Why Relevant: The bill imposes regulatory requirements on AI developers and owners regarding safety and reporting.
Mechanism of Influence: It requires mandatory incident reporting to authorities and the implementation of safety mechanisms like regular risk assessments.
Evidence:
Ambiguity Notes: The term 'significant harm' is not explicitly defined, which may lead to varying interpretations of when reporting is required.
Why Relevant: It defines the legal boundaries and accountability structures for AI technology.
Mechanism of Influence: By denying AI legal personhood and establishing strict liability for human actors, it ensures that AI usage remains under human oversight and control.
Evidence:
Ambiguity Notes: The conditions under which developers are liable for 'defects' versus owner liability for 'operations' may require further legal clarification.
Legislation ID: 234575
Bill URL: View Bill
This bill introduces the AI Non-Sentience and Responsibility Act, which defines artificial intelligence and clarifies that AI systems are non-sentient and cannot possess legal personhood. It establishes that owners and developers are responsible for any harm caused by AI systems and outlines the legal implications of AI-related incidents, including liability and oversight requirements. The bill is set to take effect on August 28, 2026.
| Date | Action |
|---|---|
| 2026-01-08 | Second Read and Referred S General Laws Committee |
| 2026-01-07 | S First Read |
| 2025-12-01 | Prefiled |
Why Relevant: The bill establishes mandatory oversight and reporting requirements for AI systems.
Mechanism of Influence: It requires owners to maintain active oversight and mandates that developers or owners report severe incidents to authorities, creating a regulatory compliance loop.
Evidence:
Ambiguity Notes: The term 'severe incidents' is not explicitly defined in the abstract, which could lead to inconsistent reporting standards.
Why Relevant: The legislation requires the implementation of safety protocols by AI stakeholders.
Mechanism of Influence: By requiring safety mechanisms to mitigate risks, the law forces developers to integrate risk-management features into the AI lifecycle.
Evidence:
Ambiguity Notes: The abstract does not specify what constitutes an acceptable 'safety measure,' leaving technical requirements to future regulation or court interpretation.
Why Relevant: The bill addresses liability and the legal status of AI, preventing entities from using AI 'autonomy' to evade regulation.
Mechanism of Influence: It ensures that liability cannot be waived by claiming an AI is 'ethically trained' or 'aligned,' maintaining a strict chain of human responsibility.
Evidence:
Ambiguity Notes: The 'specific conditions' under which developers and manufacturers are held liable versus owners are not detailed.
Legislation ID: 263260
Bill URL: View Bill
LB1083 introduces the Transparency in Artificial Intelligence Risk Management Act, which aims to address the potential risks associated with artificial intelligence technologies, particularly in relation to child safety and catastrophic risks. The bill mandates that large frontier developers and chatbot providers create and publish detailed safety and risk management plans, report safety incidents, and implement necessary safeguards to protect the public, especially minors, from the risks posed by AI systems.
| Date | Action |
|---|---|
| 2026-01-23 | Notice of hearing for February 09, 2026 |
| 2026-01-20 | Referred to Banking, Commerce and Insurance Committee |
| 2026-01-16 | Kauth FA742 filed |
| 2026-01-15 | Date of introduction |
Why Relevant: The legislation directly regulates artificial intelligence by imposing transparency and risk management requirements on developers.
Mechanism of Influence: It forces large frontier developers and chatbot providers to formalize, publish, and adhere to safety and risk management plans.
Evidence:
Ambiguity Notes: The term 'large frontier developers' and 'large chatbot providers' may require specific technical or market-share definitions to determine which entities are captured.
Why Relevant: The bill specifically addresses child safety and the protection of minors in the context of AI usage.
Mechanism of Influence: Chatbot providers are explicitly required to assess potential child safety risks as part of their mandatory protection plans.
Evidence:
Ambiguity Notes: The specific 'national and international standards' to be incorporated are not named, leaving room for interpretation on which benchmarks apply.
Why Relevant: It establishes a government oversight mechanism through mandatory incident reporting.
Mechanism of Influence: Developers must report safety incidents to the Attorney General, with extremely short windows (24 hours) for imminent risks.
Evidence:
Ambiguity Notes: The definition of 'imminent risks' versus 'critical safety incidents' could impact the urgency and volume of reports submitted.
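The tiered reporting windows can be sketched as follows. The 24-hour figure for imminent risks comes from the analysis above; the longer window for other critical safety incidents is a placeholder assumption, as the summary does not state it:

```python
# Sketch of the tiered incident-reporting deadlines described above.
# The 24-hour window for imminent risks is from the bill summary; the
# 15-day window for other incidents is a placeholder assumption.
from datetime import datetime, timedelta

def report_deadline(incident_time: datetime, imminent_risk: bool) -> datetime:
    """Compute when a safety incident must be reported to the Attorney General."""
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return incident_time + window

t = datetime(2026, 2, 9, 9, 0)
print(report_deadline(t, imminent_risk=True))  # 24 hours after the incident
```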
Legislation ID: 270238
Bill URL: View Bill
Legislative Bill 1119 amends the Age-Appropriate Online Design Code Act to redefine terms related to online services and minors, change provisions regarding the collection and use of personal data, and impose additional duties and prohibitions on covered online services. It seeks to ensure that online environments are safer for minors by restricting targeted advertising, data collection practices, and the use of manipulative design features.
| Date | Action |
|---|---|
| 2026-01-23 | Notice of hearing for February 09, 2026 |
| 2026-01-21 | Referred to Banking, Commerce and Insurance Committee |
| 2026-01-20 | Kauth FA778 filed |
| 2026-01-16 | Date of introduction |
Why Relevant: The legislation targets online service design and data practices for minors, which directly impacts AI-driven platforms, recommendation engines, and algorithmic advertising.
Mechanism of Influence: The prohibition of dark patterns and manipulative design features restricts how AI algorithms can be used to influence minor behavior or engagement. Additionally, data minimization requirements limit the training data available from minor users for AI models.
Evidence:
Ambiguity Notes: The bill applies to 'covered online services,' a broad category that includes AI-powered social media and applications, though it does not explicitly mention 'Artificial Intelligence' by name.
This bill establishes the Biometric Autonomy Liberty Law, which recognizes biometric data as personal property of the individual from whom it is collected. It outlines definitions related to biometric data, the responsibilities of entities that collect or process such data, and the rights of individuals regarding their biometric information. The law aims to enhance security and privacy protections in light of the increasing use of biometric technology in various sectors.
| Date | Action |
|---|---|
| 2026-01-07 | Title printed. Carryover bill |
| 2025-01-29 | Notice of hearing for March 17, 2025 |
| 2025-01-16 | Referred to Banking, Commerce and Insurance Committee |
| 2025-01-14 | Date of introduction |
Why Relevant: Biometric data is a foundational component of many AI systems, including facial recognition, voice analysis, and gait detection, making the regulation of this data a primary method of governing AI applications.
Mechanism of Influence: AI developers and operators acting as 'controllers' or 'processors' would be legally required to obtain written consent and provide disclosures before using biometric datasets for training or deploying AI models.
Evidence:
Ambiguity Notes: The law does not explicitly use the term 'Artificial Intelligence,' but its definitions of 'processor' and 'biometric data' are broad enough to encompass the algorithmic processing of physical and behavioral characteristics common in AI.
Why Relevant: The bill addresses the user's interest in disclosures and oversight by requiring entities to specify the purpose and duration of biometric data usage.
Mechanism of Influence: This provision forces transparency on how AI-driven biometric systems are utilized, preventing the 'black box' collection of data for undisclosed algorithmic purposes.
Evidence:
Ambiguity Notes: While it requires disclosure of purpose, it does not specifically mandate the disclosure of AI model weights or technical audits of the algorithms themselves.
Legislation ID: 122219
Bill URL: View Bill
The Artificial Intelligence Consumer Protection Act is designed to protect consumers from algorithmic discrimination by setting forth requirements for developers and deployers of high-risk artificial intelligence systems. It outlines definitions, responsibilities, and documentation requirements to ensure compliance with anti-discrimination laws. The act mandates developers to disclose known risks and implement risk management policies, while deployers must conduct impact assessments and use reasonable care in their deployment of such systems.
| Date | Action |
|---|---|
| 2026-01-07 | Title printed. Carryover bill |
| 2025-01-28 | Notice of hearing for February 06, 2025 |
| 2025-01-24 | Referred to Judiciary Committee |
| 2025-01-22 | Date of introduction |
Why Relevant: The act directly regulates developers and deployers of high-risk AI systems, aligning with the user's interest in AI regulation.
Mechanism of Influence: It imposes a legal duty of reasonable care and mandates specific risk management policies for entities involved in AI development and deployment.
Evidence:
Ambiguity Notes: The term 'reasonable care' is a legal standard that may be subject to judicial interpretation rather than technical specification.
Why Relevant: The legislation requires impact assessments, which serve as a mandatory audit and oversight mechanism.
Mechanism of Influence: Deployers are legally required to complete impact assessments within 90 days of deployment or modification to evaluate the system's effects.
Evidence:
Ambiguity Notes: The summary does not specify if these assessments must be submitted to a central government authority or kept for internal compliance.
Why Relevant: The act mandates disclosures and documentation regarding AI system risks and outputs.
Mechanism of Influence: Developers must provide documentation detailing the use, limitations, and known risks of algorithmic discrimination to deployers.
Evidence:
Ambiguity Notes: The requirement to disclose risks 'without unreasonable delay' is a subjective timeframe.
Why Relevant: It addresses government oversight of AI systems used by federal agencies.
Mechanism of Influence: It removes exemptions for federal agencies when using high-risk AI systems that impact critical areas like employment or housing.
Evidence:
Ambiguity Notes: None
The Saving Human Connection Act establishes regulations for covered platforms that operate generative artificial intelligence systems. It defines key terms, outlines responsibilities for platforms to protect users, especially minors, and mandates transparency regarding the non-human nature of chatbots. The act also provides for enforcement mechanisms and civil penalties for violations.
| Date | Action |
|---|---|
| 2026-01-28 | Notice of hearing for February 17, 2026 (cancel) |
| 2026-01-23 | Notice of hearing for February 17, 2026 |
| 2026-01-13 | Referred to Banking, Commerce and Insurance Committee |
| 2026-01-09 | Date of introduction |
| 2026-01-09 | Kauth FA563 filed |
| 2026-01-09 | Murman FA564 filed |
| 2026-01-09 | Murman FA565 filed |
Why Relevant: The act explicitly mandates age verification for accessing specific AI features.
Mechanism of Influence: Platforms must implement systems to ensure minors cannot access chatbots with human-like features, effectively restricting AI usage based on age.
Evidence:
Ambiguity Notes: The specific technical standards for age verification are not defined, leaving implementation details to the platforms or future regulation.
Why Relevant: The legislation requires transparency and disclosures regarding the nature of AI interactions.
Mechanism of Influence: Covered platforms are legally obligated to inform users that they are interacting with an artificial system rather than a human.
Evidence:
Ambiguity Notes: The term 'regular disclosures' is broad and does not specify the frequency or format of these notifications.
Why Relevant: The act regulates the design and output of generative AI to protect user psychological well-being.
Mechanism of Influence: It imposes a legal duty on AI developers to prevent their systems from causing emotional dependence and to prioritize user safety in emergency detections.
Evidence:
Ambiguity Notes: Terms like 'emotional dependence' and 'best interests' are subjective and may be difficult to measure or enforce without specific metrics.
Legislation ID: 253730
Bill URL: View Bill
LB978 introduces legal measures to combat the distribution and possession of prohibited content related to child sexual abuse and exploitation. It defines terms, outlines civil actions that can be taken against violators, and specifies the roles of the Attorney General and county attorneys. The bill also establishes civil penalties for violations and ensures that certain legal protections are in place for judges and attorneys acting in good faith.
| Date | Action |
|---|---|
| 2026-01-14 | Referred to Judiciary Committee |
| 2026-01-13 | Kauth FA634 filed |
| 2026-01-12 | Date of introduction |
Why Relevant: The bill regulates the 'creation' of prohibited content and 'child sexual exploitation devices,' which in contemporary legislation often includes AI-generated synthetic media and the software or models used to generate them.
Mechanism of Influence: By prohibiting the creation and distribution of prohibited content online and regulating exploitation devices, the law creates civil liability and penalties for the use of AI tools to generate illegal material.
Evidence:
Ambiguity Notes: The abstract mentions definitions for 'child sexual abuse material' and 'child sexual exploitation devices' but does not explicitly detail the technical scope; however, these terms are frequently used in state legislation to encompass AI-generated content.
This bill introduces the Right to Compute Act, which asserts the rights of individuals to acquire, possess, and utilize computational resources for lawful purposes. It prohibits government entities from imposing restrictions on these rights unless such restrictions are necessary to serve a compelling government interest. The bill also defines key terms related to computational resources and government actions, and emphasizes the preservation of intellectual property rights.
| Date | Action |
|---|---|
| 2026-01-29 | Public Hearing: 01/29/2026 10:30 am GP 229 |
| 2026-01-07 | Introduced 01/07/2026 and referred to Commerce and Consumer Affairs |
| 2026-01-07 | To Be Introduced 01/07/2026 and referred to Commerce and Consumer Affairs |
Why Relevant: The bill directly impacts the infrastructure required for Artificial Intelligence. By protecting the 'Right to Compute,' it creates a barrier against regulations that might seek to limit AI development through hardware restrictions or compute-usage monitoring.
Mechanism of Influence: It would likely prevent government entities from imposing arbitrary caps on the amount of compute used for training AI models or requiring licenses to own high-performance computing hardware, unless the government can prove a compelling interest.
Evidence:
Ambiguity Notes: The term 'computational resources' is broad and likely includes the GPUs and specialized chips used for AI. The 'compelling government interest' clause is the primary loophole through which AI-specific safety regulations might still be enacted.
Legislation ID: 235540
Bill URL: View Bill
This bill establishes a moratorium on the construction of data centers in New Hampshire for one year and creates a committee to investigate the environmental effects of such facilities. The committee will consist of members from the House and Senate, tasked with reporting their findings and recommendations for legislation.
| Date | Action |
|---|---|
| 2026-01-07 | To Be Introduced 01/07/2026 and referred to Commerce and Consumer Affairs |
Why Relevant: Data centers serve as the critical physical infrastructure required for the development, training, and deployment of large-scale artificial intelligence models.
Mechanism of Influence: By prohibiting the construction of new data centers for one year, the bill restricts the expansion of the compute capacity available for AI operations within the state.
Evidence:
Ambiguity Notes: The bill does not explicitly mention artificial intelligence, focusing instead on environmental impacts; however, data center regulation is a primary bottleneck for AI industry growth.
Legislation ID: 235781
Bill URL: View Bill
This legislation introduces a framework for state agency heads to apply for exceptions to the restrictions on artificial intelligence usage established under RSA 5-D. It mandates the creation of a procedure by the Department of Information Technology for processing these requests, which will ultimately require approval from the executive council.
| Date | Action |
|---|---|
| 2026-01-29 | Public Hearing: 01/29/2026 01:00 pm GP 231 |
| 2026-01-07 | Introduced 01/07/2026 and referred to Executive Departments and Administration |
| 2026-01-07 | To Be Introduced 01/07/2026 and referred to Executive Departments and Administration |
Why Relevant: The bill directly concerns the governance and regulatory oversight of artificial intelligence usage within state government entities.
Mechanism of Influence: It creates a legal pathway for agencies to bypass standard AI restrictions, subject to administrative review and executive approval, thereby influencing how AI is deployed and controlled at the state level.
Evidence:
Ambiguity Notes: The text refers to RSA 5-D but does not detail the specific AI restrictions being exempted, leaving the scope of the exceptions dependent on the underlying law.
Legislation ID: 235857
Bill URL: View Bill
This bill seeks to eliminate the use of credit history and scores in the insurance underwriting process for personal automobile and homeowners policies. It also prohibits insurers from using drones, satellites, or other forms of surveillance without explicit permission from property owners. The bill aims to prevent unfair discrimination against consumers and to safeguard their privacy rights.
| Date | Action |
|---|---|
| 2026-01-28 | Public Hearing: 01/28/2026 02:00 pm GP 229 |
| 2026-01-07 | Introduced 01/07/2026 and referred to Commerce and Consumer Affairs |
| 2026-01-07 | To Be Introduced 01/07/2026 and referred to Commerce and Consumer Affairs |
Why Relevant: The bill regulates algorithmic inputs and automated data collection methods used in insurance underwriting.
Mechanism of Influence: By banning credit scores and restricting drone/satellite imagery, the bill limits the data sources and automated models insurers can use for risk assessment, which frequently involve AI or machine learning components.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its restrictions on credit scoring models and automated surveillance imagery directly impact the deployment of AI-driven underwriting tools.
Legislation ID: 236000
Bill URL: View Bill
This bill introduces a new chapter in New Hampshire law dedicated to the governance of artificial intelligence. It defines key terms related to AI, outlines the applicability of the regulations, establishes an artificial intelligence council, and sets forth general duties and prohibitions for entities involved with AI systems. Additionally, it creates a regulatory sandbox for testing AI innovations and details enforcement mechanisms for violations of the law.
| Date | Action |
|---|---|
| 2026-01-22 | Subcommittee Work Session: 01/22/2026 01:15 pm GP 229 |
| 2026-01-15 | Public Hearing: 01/15/2026 11:00 am GP 229 |
| 2026-01-07 | Introduced 01/07/2026 and referred to Commerce and Consumer Affairs; HJ 1 |
| 2026-01-07 | To Be Introduced 01/07/2026 and referred to Commerce and Consumer Affairs; HJ 1 |
Why Relevant: The bill establishes formal oversight and governance structures for AI development and use.
Mechanism of Influence: It creates a New Hampshire artificial intelligence council to advise on ethical practices and oversight, and grants the attorney general enforcement authority.
Evidence:
Ambiguity Notes: None
Why Relevant: The legislation mandates transparency through disclosure requirements.
Mechanism of Influence: Entities are required to provide clear disclosure to consumers when using AI systems, ensuring users are aware of AI involvement.
Evidence:
Ambiguity Notes: The specific format and timing of 'clear disclosure' may require further regulatory definition.
Why Relevant: The bill imposes specific prohibitions on AI applications and requires reporting for experimental systems.
Mechanism of Influence: It bans AI use for social scoring and manipulation while requiring quarterly reporting for entities operating within the regulatory sandbox.
Evidence:
Ambiguity Notes: None
Legislation ID: 250372
Bill URL: View Bill
This bill establishes the Privacy Protection Act, which prohibits the collection and sharing of certain personal information, such as immigration status and social security numbers, by government entities and health care facilities. It aims to protect individuals' privacy interests and ensure that their data is not shared without consent, while also outlining specific conditions under which data may be collected or disclosed.
| Date | Action |
|---|---|
| 2026-01-12 | Motion To As (Kanitra) |
| 2026-01-12 | Motion To Table (Quijano) (42-23-0) |
| 2026-01-12 | Passed by the Assembly (47-26-0) |
| 2026-01-12 | Passed Senate (Passed Both Houses) (23-14) |
| 2026-01-12 | Received in the Senate without Reference, 2nd Reading |
| 2026-01-12 | Substituted for S5037 (1R) |
| 2026-01-08 | Reported out of Assembly Comm. with Amendments, 2nd Reading |
| 2026-01-05 | Reported and Referred to Assembly Appropriations Committee |
Why Relevant: The bill contains specific provisions regarding Automated License Plate Recognition (ALPR) technology.
Mechanism of Influence: ALPR systems utilize computer vision and automated data processing, which are foundational AI technologies. By restricting the sale and sharing of ALPR data, the legislation regulates the commercial and governmental application of AI-driven surveillance outputs.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but it regulates a specific application of AI (ALPR). It does not address other AI-specific requirements like algorithmic audits, submission of weights, or age verification.
Legislation ID: 256467
Bill URL: View Bill
This bill prohibits the collection, retention, conversion, storage, or sharing of biometric identifier information by public and private entities unless they provide clear and conspicuous notice of such practices. It establishes penalties for violations and defines what constitutes biometric identifier information and biometric surveillance systems.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Law and Public Safety Committee |
Why Relevant: The bill regulates 'biometric surveillance systems,' which are a core application of artificial intelligence, particularly in computer vision and facial recognition technologies.
Mechanism of Influence: It imposes a disclosure and transparency requirement on entities using AI-driven surveillance, requiring them to provide clear notice to individuals before their biometric data is processed by these systems.
Evidence:
Ambiguity Notes: The bill focuses on the 'biometric surveillance system' as the regulated entity; while AI is the standard underlying technology for such systems, the bill's scope depends on the specific technical definition of 'biometric surveillance' provided in the full text.
Legislation ID: 256468
Bill URL: View Bill
This bill prohibits business entities from using biometric surveillance systems on consumers at their physical premises without clear notice and lawful purpose. It mandates that businesses provide explanations if they use biometric data to deny access or remove consumers. Additionally, it restricts the sale or profit from biometric data obtained from consumers and establishes penalties for violations.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Law and Public Safety Committee |
Why Relevant: The bill regulates biometric surveillance systems and facial recognition, which are specific implementations of artificial intelligence technology used for identification and monitoring.
Mechanism of Influence: It imposes disclosure requirements (notice) and operational constraints (lawful purpose, no-profit rule) on businesses deploying AI-driven biometric tools.
Evidence:
Ambiguity Notes: The definition of 'biometric surveillance system' likely encompasses various AI models, though the text focuses on the application rather than the underlying algorithmic weights.
Legislation ID: 256731
Bill URL: View Bill
This bill establishes the Artificial Intelligence Innovation Partnership, which will be administered by the New Jersey Commission on Science, Innovation and Technology. The partnership will consist of independent nonprofit organizations working to support emerging artificial intelligence technology businesses and create collaborative innovation ecosystems across New Jersey. The bill outlines the goals, definitions, and operational framework for the partnership, including funding mechanisms and the establishment of a research grant fund.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Economic Growth Committee |
Why Relevant: The bill establishes a formal government oversight and reporting structure for entities involved in the development and support of artificial intelligence technology.
Mechanism of Influence: It mandates annual reporting to the Governor and Legislature regarding AI partnership activities and requires the Commission to define and categorize 'artificial intelligence technology' for regulatory and funding purposes.
Evidence:
Ambiguity Notes: While the bill focuses on innovation and funding, the definitions of 'artificial intelligence technology' and 'emerging AI technology business' will determine the scope of who falls under this state-monitored ecosystem.
Why Relevant: The legislation specifically requires audits and financial disclosures for organizations participating in the AI partnership.
Mechanism of Influence: Partners are required to submit independent audits of funds received and detailed annual reports on their organizational structure and activities to ensure compliance with state agreements.
Evidence:
Ambiguity Notes: The audits mentioned are focused on financial compliance and fund usage rather than technical algorithmic audits or safety assessments.
Legislation ID: 256940
Bill URL: View Bill
This bill mandates that artificial intelligence companies in New Jersey perform annual safety tests on their AI technologies, which include assessments for biases, inaccuracies, and cybersecurity threats. The results of these tests must be reported to the Office of Information Technology, which will also establish minimum testing requirements. The bill seeks to promote accountability and safety in the development and deployment of AI technologies.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Commerce Committee |
Why Relevant: The bill establishes a mandatory audit and reporting framework for AI developers, which is a core component of AI regulation.
Mechanism of Influence: It requires companies to submit detailed safety test results to a state agency, effectively creating a government oversight mechanism for AI safety and performance.
Evidence:
Ambiguity Notes: The specific technical standards for the 'minimum requirements' of these tests are left to the discretion of the Office of Information Technology, which may lead to varying levels of rigor.
Why Relevant: It addresses specific regulatory concerns regarding AI bias and cybersecurity vulnerabilities.
Mechanism of Influence: The legislation mandates data source analysis and vulnerability assessments to mitigate risks such as algorithmic bias and security threats.
Evidence:
Ambiguity Notes: The term 'remedies for identified issues' does not specify whether the government has the authority to block deployment if the proposed remedies are deemed insufficient.
Legislation ID: 256984
Bill URL: View Bill
The New Jersey Responsible AI Advancement and Workforce Protection Act seeks to ensure that the deployment of AI technologies does not displace workers or harm communities. It establishes the AI Horizon Fund to support workforce retraining and apprenticeship programs, mandates environmental impact assessments for AI infrastructure, and requires high-risk AI systems to undergo algorithmic impact assessments. The bill aims to protect civil rights, promote community engagement in AI development, and hold companies accountable for their impact on workers and the environment.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Labor Committee |
Why Relevant: The act directly regulates the deployment of high-risk AI systems through mandatory assessments.
Mechanism of Influence: It requires high-risk AI systems to undergo algorithmic impact assessments before deployment to evaluate societal impacts and ensure compliance with ethical use and transparency standards.
Evidence:
Ambiguity Notes: The specific criteria for what constitutes a 'high-risk' AI system and the exact metrics for 'societal impacts' may require further administrative definition.
Why Relevant: The legislation imposes disclosure and reporting requirements on AI infrastructure entities.
Mechanism of Influence: Entities must conduct annual environmental impact assessments and report on energy consumption, water usage, and carbon emissions, with penalties for non-compliance.
Evidence:
Ambiguity Notes: The definition of 'AI infrastructure entity' determines the scope of companies subject to these reporting requirements.
Why Relevant: It establishes government oversight and enforcement mechanisms for AI-related harms.
Mechanism of Influence: The Attorney General is granted authority to investigate AI-driven discrimination and workplace surveillance, while the Department of Labor monitors AI-driven displacement.
Evidence:
Ambiguity Notes: None
Why Relevant: The act includes financial penalties to ensure compliance with AI regulations.
Mechanism of Influence: It establishes a fine structure for violations related to environmental assessments and high-risk AI system mandates, with funds directed to a workforce retraining fund.
Evidence:
Ambiguity Notes: None
Legislation ID: 257300
Bill URL: View Bill
Senate Bill No. 2129 prohibits the disclosure and solicitation of deceptive audio or visual media within a specified timeframe before elections, imposing criminal penalties for violations. It allows registered voters and candidates to seek civil remedies against those who distribute deceptive media with the intent to mislead voters. The bill outlines exceptions for minor alterations and certain forms of expression, while also clarifying the protections for various media platforms.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate State Government, Wagering, Tourism & Historic Preservation Committee |
Why Relevant: The bill addresses 'deceptive audio or visual media,' a category that fundamentally includes AI-generated deepfakes used to manipulate public perception during elections.
Mechanism of Influence: It mandates disclosures in the form of disclaimers for such media and imposes legal liability on those who use AI-driven deceptive content to influence voters.
Evidence:
Ambiguity Notes: While the abstract uses the term 'deceptive audio or visual media' rather than 'artificial intelligence' explicitly, this terminology is the standard legislative framework for regulating AI-generated synthetic media.
Legislation ID: 257301
Bill URL: View Bill
This bill establishes the Deep Fake Technology Unit within the Division of Criminal Justice in the Department of Law and Public Safety in New Jersey. The unit will provide expertise, training, and technical assistance to law enforcement and the judiciary regarding deep fakes, which are manipulated media that can misrepresent reality. The bill also includes provisions for annual reporting on the unit's activities and technological advancements in the field, along with an appropriation of $2 million to support its operations.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Law and Public Safety Committee |
Why Relevant: Deep fakes are a primary output of generative artificial intelligence, and the creation of a dedicated state unit to monitor and authenticate such media represents a form of government oversight and regulation of AI-generated content.
Mechanism of Influence: The unit will provide technical assistance and authentication services for investigations, creating a practical mechanism for the state to identify and mitigate the impact of AI-manipulated media in legal and law enforcement contexts.
Evidence:
Ambiguity Notes: The definition of 'deceptive audio or visual media' may be broad, potentially covering a wide range of AI-generated or AI-enhanced content beyond traditional deep fakes.
Legislation ID: 257842
Bill URL: View Bill
The New Jersey Disclosure and Accountability Transparency Act (NJ DaTA) is designed to regulate how personally identifiable information is collected, processed, and disclosed by controllers. It mandates transparency in data handling, consumer consent for data processing, and establishes the Office of Data Protection and Responsible Use within the Division of Consumer Affairs to oversee compliance.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Commerce Committee |
Why Relevant: The legislation includes specific definitions for automated decision making, which is a core component of artificial intelligence systems.
Mechanism of Influence: By defining automated decision making within a regulatory framework for data transparency, the law creates a legal basis for overseeing how algorithms and AI models process personal data to make determinations about individuals.
Evidence:
Ambiguity Notes: The provided text notes that automated decision making is defined, but it does not detail the specific regulatory constraints or audit requirements applied to those automated processes.
Why Relevant: The act mandates transparency and affirmative consent for data processing, which directly impacts the data acquisition and training phases of AI development.
Mechanism of Influence: AI developers acting as data controllers would be required to obtain explicit opt-in consent before collecting data used for processing, potentially limiting the use of scraped or non-consensual datasets for AI training.
Evidence:
Ambiguity Notes: While the law focuses on PII, the 'Responsible Use' aspect of the newly created Office suggests a broader mandate that could encompass algorithmic accountability.
Legislation ID: 257871
Bill URL: View Bill
Senate Bill No. 2625 establishes new legal definitions and penalties related to the sexual exploitation or abuse of children, particularly focusing on items that depict such exploitation, whether through direct photography or digital manipulation. It outlines various offenses, including distribution, possession, and creation of such materials, and specifies the legal consequences based on the number of items involved. The bill also addresses the treatment of juveniles in cases related to the sharing of sexually suggestive materials, aiming to provide educational and counseling opportunities.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Judiciary Committee |
Why Relevant: The bill includes 'manipulated depictions' in its scope, which is the legal terminology used to regulate AI-generated deepfakes and synthetic media.
Mechanism of Influence: It imposes criminal penalties on the creation and distribution of digitally manipulated content, effectively regulating the output of generative AI tools when used for illegal imagery involving minors.
Evidence:
Ambiguity Notes: The term 'manipulated depiction' is broad and typically covers both traditional digital editing and advanced AI-driven synthesis, though the bill focuses on the content rather than the specific technological method of generation.
Legislation ID: 258122
Bill URL: View Bill
This bill establishes an Artificial Intelligence Apprenticeship Program within the New Jersey Department of Labor and Workforce Development. The program will work with AI companies to create apprenticeship opportunities and will also set up a tax credit for employers who hire apprentices in the AI field. The tax credit will be equal to half of the wages paid to qualified apprentices, up to a maximum of $5,000 per apprentice.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Labor Committee |
Why Relevant: The legislation specifically targets the Artificial Intelligence industry by creating a state-managed workforce development program and financial incentives for AI employers.
Mechanism of Influence: The bill influences the AI sector through economic incentives and state-led coordination of labor, rather than through direct regulation or oversight of the technology itself.
Evidence:
Ambiguity Notes: The abstract does not define the specific criteria for what constitutes an 'AI company' or 'AI field,' which may lead to broad interpretation regarding which businesses qualify for the tax credit.
Legislation ID: 258124
Bill URL: View Bill
This bill mandates the inclusion of artificial intelligence instruction in K-12 education and requires public institutions of higher education to offer related certificate and degree programs. It outlines the responsibilities of the Commissioner of Education and the Secretary of Higher Education in developing curricula and resources to support these educational initiatives. The legislation is designed to enhance students' understanding of AI and prepare them for careers in this growing field.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Education Committee |
Why Relevant: The bill mandates the inclusion of 'ethical practices' in AI education, which relates to the broader regulatory interest in AI safety and ethics.
Mechanism of Influence: By requiring the Commissioner of Education to develop resources for ethical AI instruction, the state establishes a framework for how AI's societal impacts are taught and understood at a foundational level.
Evidence:
Ambiguity Notes: The term 'ethical practices' is broad and undefined, potentially encompassing topics ranging from data privacy and algorithmic bias to the responsible use of generative AI.
Why Relevant: The bill regulates the educational requirements for AI, focusing on workforce development and academic standardization.
Mechanism of Influence: It mandates that public higher education institutions offer specific AI credentials, ensuring that the state's educational output aligns with the technical needs of the AI industry.
Evidence:
Ambiguity Notes: While it mandates the creation of programs, it does not specify the technical depth or specific AI sub-fields (e.g., machine learning vs. neural networks) that must be covered.
Legislation ID: 258213
Bill URL: View Bill
This bill establishes the Office of Cybersecurity Infrastructure as an independent entity within the Executive Branch of New Jersey's government. The office is tasked with creating and implementing cybersecurity policies, monitoring technology infrastructure, and establishing guidelines for the safe integration of artificial intelligence in both public and private sectors. The office will be led by a Director appointed by the Governor and will report on its activities to the Governor and Legislature annually.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate State Government, Wagering, Tourism & Historic Preservation Committee |
Why Relevant: The bill explicitly mandates the creation of policies for the safe integration of artificial intelligence.
Mechanism of Influence: The Office of Cybersecurity Infrastructure is tasked with developing AI policies for both public and private institutions, which serves as a regulatory framework for AI usage and safety standards.
Evidence:
Ambiguity Notes: The terms 'safe integration' and 'AI policies' are broad and do not specify whether they include specific requirements like audits, disclosure, or weight submissions, though the office has the authority to define these.
Why Relevant: The bill establishes an oversight and reporting mechanism for technology and AI policy.
Mechanism of Influence: The Director must report annually to the Governor and Legislature, providing a channel for government oversight of AI-related infrastructure and policy implementation.
Evidence:
Ambiguity Notes: The reporting requirements focus on 'operations and cybersecurity infrastructure,' which likely includes the progress and enforcement of the AI policies mentioned elsewhere in the bill.
Legislation ID: 258215
Bill URL: View Bill
This legislation enables the Commissioner of Labor and Workforce Development to create public-private partnerships aimed at providing training and retraining services related to artificial intelligence. The bill outlines the responsibilities of the private entities involved and establishes an advisory council for oversight. It also provides guidelines for project proposals and exempts certain entities from procurement and prevailing wage requirements to facilitate the development of AI training programs.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Labor Committee |
Why Relevant: The bill specifically addresses the artificial intelligence industry and establishes a legal framework for AI-related workforce development.
Mechanism of Influence: It creates a state-sanctioned mechanism for AI training and requires the definition of AI and the AI industry within state labor regulations.
Evidence:
Ambiguity Notes: The focus is on workforce training rather than technical regulation of AI models, but it establishes the state's role in overseeing AI's impact on the labor market.
Why Relevant: The legislation includes oversight and reporting requirements concerning AI initiatives.
Mechanism of Influence: It mandates an advisory council and requires annual reports to the Governor and Legislature regarding the progress and outcomes of AI training programs.
Evidence:
Ambiguity Notes: The oversight is administrative and focused on program efficacy rather than the technical auditing of AI weights or algorithms.
Legislation ID: 258671
Bill URL: View Bill
This resolution highlights the potential benefits and risks of artificial intelligence technology, emphasizing the need for better whistleblower protections for employees in the sector. It calls for generative AI companies to adopt principles that would ensure employee safety when reporting risks, promote transparency, and facilitate independent evaluations of AI systems.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Labor Committee |
Why Relevant: The resolution directly addresses the regulation and oversight of artificial intelligence by focusing on risk reporting and independent evaluations.
Mechanism of Influence: It seeks to establish a framework where employees can report AI-related risks to boards and regulators without fear of retaliation, thereby creating a mechanism for government and internal oversight.
Evidence:
Ambiguity Notes: The terms 'risk-related concerns' and 'good faith evaluations' are not strictly defined, allowing for a broad range of safety and ethical issues to be covered under these protections.
Legislation ID: 255539
Bill URL: View Bill
This legislation prohibits developers or deployers of artificial intelligence systems in New Jersey from advertising or claiming that such systems can act as licensed mental health professionals. Violations of this prohibition are deemed unlawful practices under the New Jersey Consumer Fraud Act, with penalties for infractions.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Commerce Committee |
Why Relevant: The bill directly regulates the marketing and public representation of AI systems, specifically prohibiting claims that AI can substitute for licensed mental health professionals.
Mechanism of Influence: It creates a legal prohibition on specific types of AI-related advertising and subjects violators to the New Jersey Consumer Fraud Act, effectively regulating the commercial deployment of AI in the mental health space.
Evidence:
Ambiguity Notes: The scope of 'advertising or claiming' could be interpreted to include not just traditional ads but also branding, user interface design, or conversational prompts that imply professional status.
Why Relevant: It provides legal definitions for artificial intelligence and establishes enforcement mechanisms for AI-related consumer protection.
Mechanism of Influence: By defining AI and integrating it into the Consumer Fraud Act, the law provides a framework for state oversight of AI developers and deployers.
Evidence:
Ambiguity Notes: None
Legislation ID: 255694
Bill URL: View Bill
Senate Bill No. 861 establishes requirements for social media websites in New Jersey concerning their content moderation practices. It mandates transparency in censorship actions, ensures consistent application of moderation standards, and allows users to challenge unjust censorship. The bill also provides for penalties against social media platforms that violate these regulations, particularly in relation to political candidates and journalistic enterprises.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced in the Senate, Referred to Senate Commerce Committee |
Why Relevant: The bill explicitly regulates the use of algorithms in content moderation and requires disclosures regarding their function.
Mechanism of Influence: It mandates that social media websites explain the algorithms used to flag content to users and allows the Attorney General to subpoena these algorithms for oversight purposes.
Evidence:
Ambiguity Notes: While the bill uses the term 'algorithm' rather than 'artificial intelligence,' modern content moderation algorithms are predominantly AI-driven, making this a direct regulation of AI application in social media.
Why Relevant: The bill establishes a framework for government oversight and submission of algorithmic documentation.
Mechanism of Influence: Social media platforms are required to provide their algorithms and related documentation to the government upon subpoena, which functions as a form of regulatory audit or oversight of automated decision-making systems.
Evidence:
Ambiguity Notes: The term 'documentation' is broad and could potentially encompass technical specifications, model weights, or training data used in the moderation algorithms.
House Bill 141, known as the Artificial Intelligence Accountability Act, aims to regulate the creation and distribution of synthetic content generated by artificial intelligence. It mandates disclosure requirements for covered providers, establishes guidelines for capture device manufacturers, and outlines the responsibilities of large online platforms. The bill also includes provisions for civil investigations and penalties for non-compliance to safeguard against deceptive synthetic content.
| Date | Action |
|---|---|
| 2026-01-22 | Not Printed |
Why Relevant: The bill directly addresses the user's interest in AI disclosures and regulation of synthetic content.
Mechanism of Influence: It mandates that covered providers include manifest and latent disclosures in AI-generated content and requires hardware manufacturers to embed disclosure capabilities.
Evidence:
Ambiguity Notes: The specific technical standards for 'latent disclosures' are not fully detailed in the abstract, potentially leaving implementation details to future rulemaking.
Why Relevant: The bill establishes government oversight and investigative powers over AI entities.
Mechanism of Influence: The Attorney General is authorized to issue civil investigative demands to compel information and documents from individuals or entities suspected of non-compliance.
Evidence:
Ambiguity Notes: The scope of 'relevant information' for an investigation is broad and subject to the Attorney General's discretion.
Why Relevant: It imposes specific operational requirements on large online platforms regarding AI content management.
Mechanism of Influence: Platforms must implement detection measures for synthetic content and provide user interfaces for reporting and removing deceptive AI content.
Evidence:
Ambiguity Notes: The definition of a 'Large Online Platform' (e.g., user thresholds) is not specified in the abstract.
Why Relevant: The bill requires transparency tools to identify the origin of AI-generated media.
Mechanism of Influence: Providers must offer a free, publicly accessible provenance detection tool to allow users to verify content data.
Evidence:
Ambiguity Notes: The effectiveness of these tools depends on the adoption of 'established standards' mentioned in the text.
Why Relevant: It regulates the misuse of AI through criminal and civil penalties.
Mechanism of Influence: The act increases prison sentences for felonies involving generative AI and allows for private lawsuits against those who spread deceptive synthetic content.
Evidence:
Ambiguity Notes: The term 'deceptive synthetic content' requires clear legal interpretation to distinguish between harmful misinformation and protected speech like satire.
This bill establishes the Artificial Intelligence Transparency Act, which requires entities deploying artificial intelligence systems to notify consumers about the use of these systems in making consequential decisions. It mandates that consumers receive clear explanations of how their data is used and the basis for decisions made by AI. The bill also outlines the rights of consumers to appeal adverse decisions and sets forth enforcement mechanisms to protect consumer rights.
| Date | Action |
|---|---|
| 2026-01-21 | Not Printed |
| 2026-01-20 | Sent to HPREF - Referrals: HPREF |
Why Relevant: The bill directly addresses the user's interest in AI regulation and disclosure requirements.
Mechanism of Influence: It mandates that deployers provide clear and conspicuous notifications to consumers before using AI for consequential decisions and at the start of interactions with AI companion products.
Evidence:
Ambiguity Notes: The scope of the act depends heavily on the definitions of 'consequential decision' and 'companion products', which are mentioned but not fully detailed in the abstract.
Why Relevant: The bill includes oversight and enforcement mechanisms, which aligns with the user's interest in government oversight of AI.
Mechanism of Influence: It empowers the state department of justice to enforce the act and grants consumers the right to take civil action for violations.
Evidence:
Ambiguity Notes: The specific penalties for non-compliance or the threshold for 'adverse decisions' may require further clarification in the full text.
Legislation ID: 282777
Bill URL: View Bill
This legislation, known as the Community and Health Information Safety and Privacy Act, introduces comprehensive definitions and requirements for entities that collect and process consumer data. It outlines the rights of consumers regarding their personal information, sets limitations on data processing, and prohibits certain uses of consumer data. The bill also includes provisions for enforcement and penalties for violations.
| Date | Action |
|---|---|
| 2026-01-21 | Sent to SCC - Referrals: SCC/SHPAC/SJC |
Why Relevant: The legislation regulates 'profiling' and 'profile-based feeds,' which are core functions of AI-driven recommendation engines and behavioral analysis systems.
Mechanism of Influence: By prohibiting profiling by default and requiring opt-in consent, the law restricts the automated categorization of individuals by AI models.
Evidence:
Ambiguity Notes: The bill uses the term 'profiling' rather than 'Artificial Intelligence,' which is a common legal approach to capture algorithmic decision-making without relying on a shifting technical definition.
Why Relevant: The act includes specific mandates for minors' privacy and default settings, aligning with the user's interest in age-related usage regulations.
Mechanism of Influence: It requires covered entities to set default privacy settings to the highest level for all users and specifically restricts notifications and unknown contacts for minors.
Evidence:
Ambiguity Notes: While it defines 'minor,' the abstract does not specify the technical method for age verification required to trigger these protections.
Why Relevant: The regulation of biometric data is a critical component of AI oversight, particularly concerning facial recognition and biometric identification technologies.
Mechanism of Influence: The law classifies biometric data as sensitive personal data, requiring explicit opt-in consent before it can be processed by any system.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill addresses the use of 'dark patterns,' which are often used in AI-driven user interfaces to manipulate consumer behavior.
Mechanism of Influence: It prohibits the use of manipulative design to coerce users into providing data, which impacts how AI-driven engagement loops are designed.
Evidence:
Ambiguity Notes: The definition of 'dark patterns' can be broad and may require further regulatory clarification to determine which specific UI/UX designs are prohibited.
The Artificial Intelligence Government Use Act mandates that public bodies create policies and training programs regarding the use of artificial intelligence and automated decision tools. It defines key terms related to AI and automated decision-making, outlines the requirements for public bodies to establish policies on authorized use, and mandates training for employees on cybersecurity and the appropriate use of these technologies.
| Date | Action |
|---|---|
| 2026-01-22 | Sent to SCC - Referrals: SCC/SHPAC/SJC |
Why Relevant: The act directly regulates the deployment and governance of AI within public institutions.
Mechanism of Influence: It mandates the creation of formal policies and security procedures, effectively setting a regulatory framework for government AI usage.
Evidence:
Ambiguity Notes: The specific scope of 'consequential decisions' is a critical term that will determine the breadth of the human oversight requirement.
Why Relevant: The legislation addresses oversight and accountability mechanisms for automated systems.
Mechanism of Influence: By requiring human oversight for consequential decisions, the law prevents fully autonomous AI systems from making high-stakes determinations without human intervention.
Evidence:
Ambiguity Notes: The act does not specify the level of human intervention required to satisfy the 'oversight' mandate.
Legislation ID: 283291
Bill URL: View Bill
This bill encompasses a wide range of amendments to existing laws related to motor vehicle regulations, insurance, environmental conservation, and economic development. It includes provisions for increasing motor vehicle fees, establishing safety courses, implementing technology for speed assistance in vehicles, and enhancing protections for highway workers. The bill also addresses funding for transportation projects and updates regulations concerning insurance and utilities.
| Date | Action |
|---|---|
| 2026-01-21 | referred to ways and means |
Why Relevant: The bill mandates a pilot program for 'intelligent speed assistance devices,' which represents a form of automated or algorithmic technology used to regulate vehicle speed.
Mechanism of Influence: It authorizes local governments to implement technological systems that can intervene in or monitor vehicle operation, which falls under the broader category of regulating automated and intelligent systems.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but 'intelligent' speed assistance systems typically rely on algorithmic processing, data inputs, or computer vision to function.
Legislation ID: 54982
Bill URL: View Bill
This legislation amends the state technology law to define artificial intelligence and automated decision-making systems, and to create the office of Chief Artificial Intelligence Officer. This officer will be responsible for developing statewide policies, ensuring compliance with laws, and coordinating AI activities across state agencies. The bill also establishes an advisory committee to assist in guiding AI practices and policy.
| Date | Action |
|---|---|
| 2026-01-07 | referred to governmental operations |
| 2025-01-09 | referred to governmental operations |
Why Relevant: The bill establishes the foundational legal definitions for AI and automated decision-making systems within the state's jurisdiction.
Mechanism of Influence: These definitions dictate the scope of future regulations and determine which technologies are subject to the oversight of the Chief AI Officer.
Evidence:
Ambiguity Notes: The exclusion of 'basic computerized processes' that do not 'materially affect human rights or safety' creates a subjective threshold for what constitutes regulated AI.
Why Relevant: It creates a centralized regulatory authority (Chief AI Officer) dedicated to AI governance.
Mechanism of Influence: The officer is empowered to develop statewide policies, ensure compliance with existing laws, and coordinate the use of AI tools across all state departments.
Evidence:
Ambiguity Notes: The specific content of the 'statewide policies' is left to the discretion of the officer, meaning the actual regulatory requirements are yet to be drafted.
Why Relevant: The legislation mandates the creation of an advisory body to shape AI best practices and policy.
Mechanism of Influence: The committee provides the expertise and recommendations that will form the basis of state AI policy and agency-level implementation.
Evidence:
Ambiguity Notes: The bill does not specify how much weight the Chief AI Officer must give to the committee's advice.
Legislation ID: 55115
Bill URL: View Bill
This legislation amends the criminal procedure law and civil practice law to set standards for the admissibility of evidence that is either created or processed by artificial intelligence. It requires that such evidence be supported by independent and admissible evidence and mandates that the proponent of the evidence demonstrate the reliability and accuracy of the AI's use in generating or processing that evidence.
| Date | Action |
|---|---|
| 2026-01-07 | referred to codes |
| 2025-01-09 | referred to codes |
Why Relevant: The legislation regulates the use and legal validity of artificial intelligence outputs within the judicial system.
Mechanism of Influence: It imposes a burden of proof on the proponent of AI evidence to demonstrate reliability and accuracy, effectively creating a regulatory framework for AI's application in legal evidence.
Evidence:
Ambiguity Notes: The distinction between 'new information not deducible' and 'conclusions not reasonably deducible' may lead to varying interpretations of what constitutes AI-created versus AI-processed evidence.
Legislation ID: 55119
Bill URL: View Bill
This legislation amends the general business law to mandate that operators of generative or surveillance advanced artificial intelligence systems collect oaths from users affirming their responsible use of these technologies. It defines key terms, outlines the requirements for user affirmation, and establishes penalties for non-compliance by operators.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-01-09 | referred to consumer affairs and protection |
Why Relevant: The legislation directly regulates the operation and user-onboarding process for advanced artificial intelligence systems.
Mechanism of Influence: It mandates that AI operators implement a specific compliance mechanism (sworn oaths) and submit these records to the government, creating a layer of oversight and legal accountability for AI usage.
Evidence:
Ambiguity Notes: The term 'advanced artificial intelligence systems' is defined within the law, but its practical scope depends on how the attorney general interprets 'generative' or 'surveillance' capabilities.
Legislation ID: 55286
Bill URL: View Bill
This bill amends the general business law in New York to mandate that any book published in the state that has been wholly or partially created using generative artificial intelligence must include a conspicuous disclosure on its cover. This requirement applies to all types of books, including printed and digital formats, and aims to inform consumers about the nature of the content they are purchasing.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-01-10 | referred to consumer affairs and protection |
Why Relevant: The bill directly addresses the user's interest in AI regulation and mandatory disclosures for AI-generated content.
Mechanism of Influence: It imposes a legal requirement on publishers to label books, providing transparency to consumers regarding the use of AI in the creative process.
Evidence:
Ambiguity Notes: The phrase 'partially created' may require further clarification to determine the threshold of AI involvement that triggers the disclosure requirement, though the bill attempts to define AI through the lens of 'minimal human oversight'.
Legislation ID: 53933
Bill URL: View Bill
This bill outlines the requirements and limitations for smart access systems used in multiple dwellings. It mandates that only essential data may be collected, prohibits certain types of data collection, and establishes penalties for violations. Additionally, it sets forth guidelines for the destruction of collected data and requires owners to provide written procedures to tenants regarding the use of these systems.
| Date | Action |
|---|---|
| 2026-01-07 | referred to housing |
| 2025-01-08 | referred to housing |
Why Relevant: The regulation of biometric data collection is a core component of AI oversight, as biometric identification systems—such as facial recognition or fingerprint analysis—typically rely on artificial intelligence and machine learning models.
Mechanism of Influence: By requiring express consent and limiting the retention of biometric data to 48 hours, the law restricts the operational parameters of AI-driven identification technologies in residential settings.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but the technologies used for biometric processing in modern 'smart access systems' are almost exclusively AI-based.
Why Relevant: The bill mandates disclosures regarding software security and requires vendors to provide updates, which aligns with the user's interest in oversight and transparency for automated systems.
Mechanism of Influence: It creates a mandatory disclosure and remediation pipeline for software vulnerabilities, ensuring technical accountability for the vendors of automated access systems.
Evidence:
Ambiguity Notes: While these provisions apply to all software within smart access systems, they serve as a mechanism for the 'oversight' and 'disclosures' requested by the user regarding automated technologies.
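The 48-hour retention limit described above is, mechanically, a purge rule over timestamped records. A minimal sketch, assuming records carry a capture timestamp (the record shape is hypothetical; the bill does not prescribe an implementation):

```python
from datetime import datetime, timedelta

# Retention window described in the analysis above.
RETENTION_LIMIT = timedelta(hours=48)

def purge_expired(records, now):
    """records: list of (captured_at, payload) tuples.

    Keep only entries still inside the 48-hour retention window;
    everything older must be destroyed under the bill's scheme.
    """
    return [(t, p) for (t, p) in records if now - t <= RETENTION_LIMIT]

now = datetime(2026, 1, 29, 12, 0)
records = [
    (datetime(2026, 1, 29, 0, 0), "entry-scan-a"),  # 12h old: kept
    (datetime(2026, 1, 27, 0, 0), "entry-scan-b"),  # 60h old: purged
]
kept = purge_expired(records, now)
assert [p for (_, p) in kept] == ["entry-scan-a"]
```

In practice such a purge would run on a schedule, so the effective maximum retention is the window plus the purge interval.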
Legislation ID: 55729
Bill URL: View Bill
This bill introduces a new section to the labor law concerning automated employment decision tools. It defines what constitutes such tools and mandates that employers notify candidates about their use in the hiring process, including details about the job qualifications considered, the data used, and the data retention policy. The bill also ensures candidates' rights to seek alternatives or accommodations in the selection process.
| Date | Action |
|---|---|
| 2026-01-07 | referred to labor |
| 2025-01-14 | referred to labor |
Why Relevant: The bill directly regulates the use of automated systems and AI-driven tools in the context of employment decisions.
Mechanism of Influence: It mandates transparency through mandatory disclosures to candidates at least ten business days before an automated tool is utilized, requiring the disclosure of assessment criteria and data practices.
Evidence:
Ambiguity Notes: The specific technical threshold for what constitutes an 'automated employment decision tool' depends on the provided definitions, which may vary in breadth regarding machine learning or simple algorithmic filtering.
Why Relevant: The legislation establishes a regulatory framework for AI oversight in the workplace by defining the scope of automated tools and ensuring candidate rights.
Mechanism of Influence: By defining 'automated employment decision tool' and 'employment decision,' the law creates a legal boundary for which AI technologies are subject to labor law oversight.
Evidence:
Ambiguity Notes: The effectiveness of the regulation depends on how strictly 'automated employment decision tool' is defined and whether it captures all forms of AI used in hiring.
Legislation ID: 54012
Bill URL: View Bill
This bill introduces a new section to the general business law that addresses unauthorized depictions of public officials generated by artificial intelligence. It defines key terms related to artificial intelligence and establishes responsibilities for the owners and operators of AI systems to prevent unauthorized depictions of covered persons. The bill outlines the requirements for notification and the liability of system operators for failing to comply with these regulations.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-01-08 | referred to consumer affairs and protection |
Why Relevant: The bill directly regulates the output and operational requirements of generative AI systems and defines key AI-related terminology.
Mechanism of Influence: It imposes a legal duty on AI operators to implement technical safeguards and administrative notice systems, creating financial liability for failing to prevent unauthorized AI-generated depictions.
Evidence:
Ambiguity Notes: The terms 'reasonable prevention methods' and 'industry standards' are not explicitly defined, which may lead to varying interpretations of what constitutes technical compliance.
Legislation ID: 57911
Bill URL: View Bill
This legislation, known as the New York artificial intelligence bill of rights, is designed to protect New York residents from the potential harms of automated decision-making systems. It outlines specific rights related to safety, discrimination, data privacy, and the ability to opt for human alternatives in interactions with automated systems. The bill emphasizes the importance of oversight and accountability in the development and deployment of such technologies.
| Date | Action |
|---|---|
| 2026-01-07 | referred to science and technology |
| 2025-01-27 | referred to science and technology |
Why Relevant: The legislation mandates audits and assessments for AI systems to ensure equity and safety.
Mechanism of Influence: It requires automated systems to undergo equity assessments, disparity testing, and pre-deployment safety testing before they can be used.
Evidence:
Ambiguity Notes: The specific technical standards for 'equity assessments' and 'disparity testing' are not detailed, leaving room for interpretation on what constitutes a passing result.
Why Relevant: The bill regulates the deployment and continued use of AI based on performance and safety standards.
Mechanism of Influence: It grants the authority to prevent the deployment of or require the removal of systems that fail to meet safety standards or are found to be ineffective.
Evidence:
Ambiguity Notes: The criteria for 'ineffective systems' or 'safety standards' may be subject to administrative definition.
Why Relevant: It requires disclosures regarding data collection and usage in the context of automated systems.
Mechanism of Influence: It mandates that consent for data collection be clear and understandable, effectively requiring a disclosure mechanism for users interacting with these systems.
Evidence:
Ambiguity Notes: The term 'clear and understandable' is a subjective standard that may vary based on the target audience.
Legislation ID: 58041
Bill URL: View Bill
This legislation amends the New York election law to require that any political communication using an artificial intelligence system must inform the recipient that they are interacting with AI. This applies to various forms of communication, including phone calls and emails, to promote transparency and accountability in political discourse.
| Date | Action |
|---|---|
| 2026-01-07 | referred to election law |
| 2025-01-27 | referred to election law |
Why Relevant: The legislation directly addresses the user's interest in AI regulation and disclosure requirements, specifically within the context of political communications.
Mechanism of Influence: It creates a legal mandate for transparency, requiring entities to inform individuals when they are interacting with an AI rather than a human, thereby affecting how AI is deployed in political campaigning.
Evidence:
Ambiguity Notes: The definition of 'simulate human conversation' may be subject to interpretation regarding the level of sophistication required to trigger the disclosure.
Legislation ID: 58101
Bill URL: View Bill
This bill establishes a framework for the oversight of high-risk advanced artificial intelligence systems by empowering a secretary to review, recommend, and enforce compliance measures. It outlines the responsibilities of operators regarding system modifications, incident reporting, and compliance with ethical standards, as well as the penalties for non-compliance. The bill also addresses issues related to source code management, third-party integrations, and security risks associated with AI systems.
| Date | Action |
|---|---|
| 2026-01-07 | referred to science and technology |
| 2025-01-27 | referred to science and technology |
Why Relevant: The bill establishes a direct oversight framework for high-risk AI systems, aligning with the user's interest in AI regulation.
Mechanism of Influence: It empowers a secretary to review systems, issue binding recommendations, and enforce compliance measures, effectively creating a licensing and regulatory body for AI operators.
Evidence:
Ambiguity Notes: The term 'high-risk advanced artificial intelligence systems' is subject to secretary designation, which could be interpreted broadly or narrowly depending on future administrative rules.
Why Relevant: The legislation mandates government oversight of AI source code and system modifications, similar to the user's interest in the submission of weights or technical specifications.
Mechanism of Influence: Licensees must submit written notices of modifications for approval and share source code with the secretary, preventing unauthorized changes to AI models.
Evidence:
Ambiguity Notes: While 'source code' is specified, the bill does not explicitly use the term 'weights,' though source code management often encompasses the parameters and architecture of the model.
Why Relevant: The bill includes provisions for audits and investigations to ensure compliance with ethical and safety standards.
Mechanism of Influence: The secretary is authorized to conduct investigations, compel document production, and examine logs and records, functioning as a mandatory audit mechanism.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill requires mandatory disclosures regarding system failures and security risks.
Mechanism of Influence: Operators must promptly report significant malfunctions that could harm individuals to the department and law enforcement.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill establishes criminal penalties for the 'uncontainment' of high-risk AI, representing a high level of regulatory enforcement.
Mechanism of Influence: Willful or negligent release of high-risk source code without authorization can result in felony or misdemeanor charges.
Evidence:
Ambiguity Notes: The definition of 'uncontainment' is not fully detailed but implies the public release or leakage of restricted AI code.
Legislation ID: 58208
Bill URL: View Bill
This legislation amends the general business law by introducing a new section that requires owners, licensees, or operators of generative artificial intelligence systems to display warnings on their user interfaces. These warnings must inform users that the systems' outputs may not always be accurate or appropriate. Failure to comply with this requirement could result in civil penalties.
| Date | Action |
|---|---|
| 2026-01-28 | delivered to senate |
| 2026-01-28 | passed assembly |
| 2026-01-28 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2026-01-07 | ordered to third reading cal.110 |
| 2025-06-11 | ordered to third reading rules cal.608 |
| 2025-06-11 | reported |
| 2025-06-11 | rules report cal.608 |
| 2025-06-09 | amend (t) and recommit to rules |
Why Relevant: The legislation directly regulates the deployment of generative artificial intelligence by mandating specific transparency and disclosure requirements.
Mechanism of Influence: It forces AI developers and operators to modify their user interfaces to include legal disclaimers, thereby informing users of the limitations of the technology and shifting some liability or awareness to the end-user.
Evidence:
Ambiguity Notes: The term 'conspicuous' is not strictly defined, which could lead to variations in how prominent the warning must be. Additionally, 'inappropriate' is a subjective standard that may be difficult to define consistently across different AI applications.
Why Relevant: The law introduces a financial enforcement mechanism specifically for AI-related compliance failures.
Mechanism of Influence: By imposing penalties of $25 per user or up to $100,000, the law creates a significant financial incentive for AI companies to adhere to state-mandated disclosure standards.
Evidence:
Ambiguity Notes: The method for counting 'users' (e.g., unique visitors, registered accounts, or active monthly users) is not specified, which could lead to disputes over the total penalty amount.
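The penalty arithmetic described above can be sketched directly, assuming the per-user amount accrues until the statutory cap binds (and noting, per the ambiguity above, that the bill does not define how 'users' are counted):

```python
PER_USER_PENALTY = 25   # dollars per affected user
PENALTY_CAP = 100_000   # statutory maximum

def estimated_penalty(user_count: int) -> int:
    """Capped penalty for a given user count.

    Whether 'users' means unique visitors, registered accounts, or
    monthly actives is unspecified in the bill; this just shows
    where the cap starts to bind.
    """
    return min(user_count * PER_USER_PENALTY, PENALTY_CAP)

# The cap binds once the count reaches 4,000 users.
assert estimated_penalty(3_999) == 99_975
assert estimated_penalty(4_000) == 100_000
assert estimated_penalty(1_000_000) == 100_000
```

Because the cap is reached at only 4,000 users, the counting-method ambiguity matters mainly for small operators; for any large platform the exposure is effectively a flat $100,000 per violation.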
Legislation ID: 59212
Bill URL: View Bill
The bill introduces a new section to the labor law that requires automated employment decision tools to meet specific criteria, including annual disparate impact analyses. It defines key terms related to automated tools and outlines the responsibilities of employers regarding reporting and compliance, as well as the enforcement powers of the attorney general and commissioner.
| Date | Action |
|---|---|
| 2026-01-07 | referred to labor |
| 2025-01-30 | referred to labor |
Why Relevant: The legislation specifically targets automated decision-making systems used in employment, which falls under the umbrella of artificial intelligence regulation and oversight.
Mechanism of Influence: It imposes a mandatory audit requirement (disparate impact analysis) and transparency obligations (publicly available summaries), directly addressing the user's interest in AI audits and disclosures.
Evidence:
Ambiguity Notes: The scope of the regulation depends on the specific definition of 'automated employment decision tool,' which may vary in breadth to include different types of algorithmic or AI-driven software.
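The bill mandates disparate impact analyses without prescribing a methodology. One common baseline in employment analytics is the EEOC "four-fifths rule," which flags any group whose selection rate falls below 80% of the highest group's rate; a hypothetical sketch of such a check (data and group labels are illustrative, not from the bill):

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate.

    outcomes: group -> (selected_count, applicant_count)
    """
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the
    highest group's rate (the EEOC four-fifths heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < 0.8 for g, r in rates.items()}

# Hypothetical screening outcomes: group -> (selected, applicants)
data = {"A": (50, 100), "B": (30, 100)}
flags = four_fifths_flags(data)
# B's rate (0.30) is 60% of A's (0.50), below the 0.8 threshold.
assert flags == {"A": False, "B": True}
```

Whether the statute would accept this heuristic, a statistical significance test, or something stricter is exactly the kind of detail the full text or subsequent rulemaking would have to settle.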
Legislation ID: 59244
Bill URL: View Bill
This bill amends the real property law, general business law, and banking law to establish guidelines for the use of automated decision tools in housing and loan applications. It mandates annual disparate impact analyses to assess potential biases, requires landlords and banks to provide clear notifications to applicants regarding the use of these tools, and prohibits the use of certain algorithms that rely on nonpublic competitor data.
| Date | Action |
|---|---|
| 2026-01-07 | referred to housing |
| 2025-01-30 | referred to housing |
Why Relevant: The bill directly regulates the use of algorithms in the real estate market, specifically targeting price-fixing concerns.
Mechanism of Influence: It creates a legal prohibition against using specific types of data (nonpublic competitor data) within rent-setting algorithms, effectively regulating the logic and data inputs of AI tools.
Evidence:
Ambiguity Notes: The term 'nonpublic competitor data' may require further regulatory definition to determine if it includes aggregated or anonymized data sets.
Why Relevant: The legislation mandates transparency and consumer notification regarding the use of automated systems.
Mechanism of Influence: It requires a 24-hour advance notice to applicants, fulfilling the user's interest in 'requiring disclosures' for AI usage.
Evidence:
Ambiguity Notes: The bill does not specify the required format or level of detail for the notification beyond 'data processing policies'.
Why Relevant: The bill requires mandatory bias testing, which aligns with the user's interest in 'requiring audits'.
Mechanism of Influence: It forces entities to conduct annual 'disparate impact analyses' and make summaries publicly available, creating a public oversight mechanism for algorithmic bias.
Evidence:
Ambiguity Notes: The specific metrics or standards for what constitutes a sufficient 'disparate impact analysis' are not detailed in the abstract.
Why Relevant: The bill initiates government oversight and research into AI's societal impacts.
Mechanism of Influence: By mandating a formal study and a report to the governor and legislature, the bill creates a pathway for future AI-specific legislation and regulatory standards.
Evidence:
Ambiguity Notes: None
Legislation ID: 59247
Bill URL: View Bill
This bill amends the general business law to introduce regulations for third-party food delivery services that deliver food by electric-assist bicycle or electric scooter. It aims to ensure that delivery platforms do not impose unrealistic delivery times or penalize workers for traffic law violations, thereby promoting safer delivery practices.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-01-30 | referred to consumer affairs and protection |
Why Relevant: The bill explicitly mandates the auditing of algorithms used by third-party delivery platforms to ensure compliance with safety standards.
Mechanism of Influence: It grants state officials the authority to inspect and audit the logic and outputs of delivery algorithms, representing a form of algorithmic oversight and regulation.
Evidence:
Ambiguity Notes: While the bill uses the term 'algorithms' rather than 'artificial intelligence,' the automated systems used for route optimization and time estimation in delivery platforms typically fall under the broader umbrella of AI and automated decision-making systems.
Why Relevant: The legislation sets specific prohibitions on how algorithms can be programmed and utilized regarding worker performance and delivery estimates.
Mechanism of Influence: It legally restricts the parameters of the delivery platform's automated systems, prohibiting the use of algorithms that calculate or promote delivery times that are physically impossible to achieve safely.
Evidence:
Ambiguity Notes: None
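A feasibility floor of the kind this prohibition implies reduces to simple arithmetic: a quoted delivery window is achievable only if it covers travel at a safe speed plus handling time. The speed and handling figures below are illustrative assumptions, not values from the bill:

```python
def min_safe_minutes(distance_km: float, safe_speed_kmh: float = 15.0,
                     handling_min: float = 5.0) -> float:
    """Lower bound on delivery time: travel at an assumed safe
    e-bike speed (15 km/h, hypothetical) plus pickup handling."""
    return distance_km / safe_speed_kmh * 60 + handling_min

def is_feasible(quoted_min: float, distance_km: float) -> bool:
    """A quoted delivery window is feasible only if it meets
    or exceeds the safe-speed lower bound."""
    return quoted_min >= min_safe_minutes(distance_km)

# 5 km at 15 km/h is 20 min of travel plus 5 min of handling,
# so a 15-minute quote cannot be achieved safely.
assert not is_feasible(15, 5.0)
assert is_feasible(30, 5.0)
```

An auditor applying the bill would presumably compare each platform's quoted windows against a bound like this, with the safe speed set by regulation rather than chosen by the platform.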
Legislation ID: 60078
Bill URL: View Bill
This bill amends the insurance law to prohibit insurers from using external consumer data and information sources in ways that unfairly discriminate against individuals based on race, gender, and other protected characteristics. It establishes a framework for the superintendent of insurance to oversee and regulate the use of such data, ensuring that insurers demonstrate compliance and mitigate discriminatory practices. The bill also mandates stakeholder engagement and provides for the confidentiality of proprietary information.
| Date | Action |
|---|---|
| 2026-01-07 | referred to insurance |
| 2025-02-04 | referred to insurance |
Why Relevant: The bill explicitly regulates the use of algorithms and predictive models, which are fundamental components of artificial intelligence systems used in automated decision-making.
Mechanism of Influence: It requires insurers to establish risk management frameworks and conduct testing to mitigate discriminatory outcomes from these technologies, effectively mandating a form of algorithmic auditing and oversight.
Evidence:
Ambiguity Notes: While the bill uses terms like 'algorithm' and 'predictive model' rather than 'artificial intelligence' exclusively, these terms encompass the AI technologies used for underwriting and pricing in the insurance industry.
Legislation ID: 60326
Bill URL: View Bill
This legislation mandates the New York Department of Labor, in consultation with relevant state departments, to conduct a comprehensive study on how artificial intelligence affects job performance, productivity, training, education requirements, privacy, and security within the state workforce. The department is required to report its findings and recommendations for legislative action every five years, culminating in a final report by January 1, 2035. Additionally, the bill prohibits state entities from using artificial intelligence in a manner that would displace employees until the final report is received.
| Date | Action |
|---|---|
| 2026-01-07 | referred to ways and means |
| 2025-04-30 | reported referred to ways and means |
| 2025-04-18 | amend (t) and recommit to labor |
| 2025-04-18 | print number 4550a |
| 2025-02-04 | referred to labor |
Why Relevant: The bill imposes a direct regulatory restriction on the application of AI by state entities.
Mechanism of Influence: It establishes a moratorium on AI-driven employee displacement, effectively regulating how the technology can be deployed within the public sector.
Evidence:
Ambiguity Notes: The term 'displace' is not explicitly defined, leaving it open to interpretation whether it refers only to layoffs or also to the reduction of hours or reassignment of duties.
Why Relevant: The legislation mandates government oversight and periodic reporting on AI's effects on privacy and security.
Mechanism of Influence: By requiring the Department of Labor to study and report on AI's impact every five years, the bill creates a framework for ongoing legislative oversight and potential future regulation based on the findings.
Evidence:
Ambiguity Notes: The scope of 'privacy' and 'security' within the study is broad and may encompass both data protection and physical workplace security.
The New York Privacy Act aims to establish comprehensive privacy protections for consumers in New York by granting them rights over their personal data, including the ability to access, correct, and delete it. The act requires businesses to implement reasonable data security measures and obtain consent for data processing, empowers the New York State Attorney General to enforce compliance, and allows consumers to seek legal recourse for violations.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-02-10 | referred to consumer affairs and protection |
Why Relevant: The act provides a regulatory framework for the collection and processing of personal data, which is a foundational component for training and operating AI models involving consumer information.
Mechanism of Influence: Provisions requiring 'specific consent for data processing' and 'clear notice of data usage' would legally constrain how companies gather datasets for AI training and how they deploy AI-driven analytics on New York residents.
Evidence:
Ambiguity Notes: The text lacks explicit mentions of 'Artificial Intelligence,' 'algorithms,' or 'automated decision-making,' meaning its application to AI depends on the broad interpretation of 'data processing' and 'foreseeable harms.'
Legislation ID: 61348
Bill URL: View Bill
The bill amends the real property law to explicitly forbid landlords from employing algorithmic devices that utilize nonpublic competitor data for setting rent. This measure responds to allegations that such practices could lead to higher rents and diminish landlords' direct involvement in pricing decisions. The bill defines algorithmic devices and nonpublic competitor data, and establishes penalties for violations.
| Date | Action |
|---|---|
| 2026-01-07 | referred to housing |
| 2025-02-10 | referred to housing |
Why Relevant: The legislation specifically regulates the use of 'algorithmic devices,' which encompasses automated decision-making systems and AI-driven pricing models used in the real estate sector.
Mechanism of Influence: It restricts the data inputs allowed for these algorithms, specifically banning the use of nonpublic competitor data to prevent algorithmic price-fixing or collusion.
Evidence:
Ambiguity Notes: The term 'algorithmic device' is defined broadly as any device using algorithms for rent calculations, which could capture a wide range of software from simple spreadsheets to complex machine learning models, though it excludes certain standard reports.
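The input restriction at the heart of this bill can be sketched in code. The following is a hypothetical illustration only; the feature names and source categories are assumptions for the sketch, not terms defined in the bill.

```python
# Hypothetical sketch: exclude nonpublic competitor data from a
# rent-recommendation tool's inputs, reflecting the bill's restriction.
# Category labels are illustrative assumptions, not statutory terms.

PROHIBITED_SOURCES = {"competitor_nonpublic"}  # e.g. private lease terms, occupancy data

def filter_pricing_inputs(features: dict) -> dict:
    """Drop any feature whose source is nonpublic competitor data."""
    return {
        name: meta
        for name, meta in features.items()
        if meta.get("source") not in PROHIBITED_SOURCES
    }

features = {
    "local_vacancy_rate": {"source": "public", "value": 0.04},
    "peer_lease_terms":   {"source": "competitor_nonpublic", "value": None},
}
allowed = filter_pricing_inputs(features)
```

Because the bill's definition of "algorithmic device" is broad, a filter like this would apply equally to a spreadsheet formula or a machine learning model.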
Legislation ID: 61793
Bill URL: View Bill
This legislation amends the state finance law to include requirements for the purchase of algorithmic decision systems by state units. It defines what constitutes an algorithmic decision system and mandates that such systems adhere to standards that prevent harm, promote transparency, ensure fairness, and undergo thorough evaluation. Additionally, it modifies the definition of unlawful discriminatory practices to include actions taken through these systems.
| Date | Action |
|---|---|
| 2026-01-07 | referred to governmental operations |
| 2025-02-12 | referred to governmental operations |
Why Relevant: The bill directly regulates the acquisition and use of algorithmic decision systems, which are a primary form of artificial intelligence used in automated decision-making.
Mechanism of Influence: It imposes procurement standards on state agencies, requiring them to evaluate AI systems for fairness and transparency before purchase, effectively creating a regulatory framework for government AI usage.
Evidence:
Ambiguity Notes: The definition of 'algorithmic decision system' is broad and likely covers a wide range of machine learning and AI technologies beyond simple rule-based software.
Why Relevant: The legislation addresses the legal accountability of AI systems regarding civil rights and discrimination.
Mechanism of Influence: By expanding the definition of unlawful discriminatory practices to include those performed through algorithmic systems, it ensures that AI-driven bias is subject to existing legal protections.
Evidence:
Ambiguity Notes: None
Legislation ID: 54340
Bill URL: View Bill
The bill amends the executive law and general business law to require the development of minimum standards for the use of automatic license plate reader systems by non-law enforcement entities. These standards will cover permissible uses, data sharing, record retention, and employee training. Non-law enforcement entities will be required to publicly disclose these standards on their websites or in their main offices. The bill also mandates the establishment of a training program for employees regarding these policies.
| Date | Action |
|---|---|
| 2026-01-12 | delivered to senate |
| 2026-01-12 | passed assembly |
| 2026-01-12 | REFERRED TO CONSUMER PROTECTION |
| 2026-01-07 | DIED IN SENATE |
| 2026-01-07 | ordered to third reading cal.16 |
| 2026-01-07 | RETURNED TO ASSEMBLY |
| 2025-05-05 | delivered to senate |
| 2025-05-05 | passed assembly |
Why Relevant: Automatic license plate reader (ALPR) systems are a specific application of computer vision and automated data processing, which are core components of artificial intelligence technology.
Mechanism of Influence: The bill imposes disclosure requirements and operational standards on automated surveillance technology, requiring entities to publish their data usage and retention policies.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'artificial intelligence,' but it regulates a technology that relies on AI-driven character recognition and automated decision-making regarding data capture.
Legislation ID: 64450
Bill URL: View Bill
The bill introduces a new section to the general business law that defines chatbots and their proprietors, sets forth restrictions on the type of information and advice chatbots can provide, and outlines the liabilities for proprietors who violate these regulations. It mandates clear notification to users that they are interacting with a chatbot and allows individuals to pursue civil action for damages caused by violations.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-04-07 | amend (t) and recommit to consumer affairs and protection |
| 2025-04-07 | print number 6545a |
| 2025-03-06 | referred to consumer affairs and protection |
Why Relevant: The bill directly regulates chatbots, which are a primary application of artificial intelligence technology.
Mechanism of Influence: It imposes legal restrictions on the content AI can generate and mandates transparency disclosures to users.
Evidence:
Ambiguity Notes: The term 'substantive responses' may require further legal clarification to determine the threshold of prohibited advice versus general information.
Why Relevant: The legislation addresses AI transparency and consumer protection through mandatory disclosures.
Mechanism of Influence: By requiring notice in the same language and font size as the chatbot's text, it ensures users are aware they are not speaking to a human.
Evidence:
Ambiguity Notes: None
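The disclosure mechanism described above can be sketched as follows. This is a hypothetical illustration; the notice wording and field names are assumptions, and the bill itself specifies only that the notice appear in the same language and font size as the chatbot's text.

```python
# Hypothetical sketch of the chatbot disclosure mandate: render the
# "you are talking to a bot" notice with the same language and font
# size as the chatbot's own message. Field names are illustrative.

def render_message(text: str, lang: str = "en", font_px: int = 14) -> dict:
    notices = {"en": "You are interacting with an automated chatbot, not a human."}
    return {
        "notice": {"text": notices[lang], "lang": lang, "font_px": font_px},
        "reply":  {"text": text, "lang": lang, "font_px": font_px},
    }

msg = render_message("Our store opens at 9 a.m.")
# The notice matches the reply's language and font size by construction.
```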
Legislation ID: 64494
Bill URL: View Bill
This legislation introduces the Artificial Intelligence Training Data Transparency Act, mandating developers of generative AI models to publicly disclose detailed information about the datasets used for training these models. It defines key terms related to artificial intelligence and outlines specific requirements for documentation, especially regarding employee data. Certain exceptions are included for models related to national security or aviation.
| Date | Action |
|---|---|
| 2026-01-12 | amended on third reading 6578a |
| 2026-01-07 | DIED IN SENATE |
| 2026-01-07 | ordered to third reading cal.166 |
| 2026-01-07 | RETURNED TO ASSEMBLY |
| 2025-06-10 | delivered to senate |
| 2025-06-10 | ordered to third reading rules cal.571 |
| 2025-06-10 | passed assembly |
| 2025-06-10 | REFERRED TO RULES |
Why Relevant: The act directly addresses AI regulation and disclosure requirements by mandating transparency for training datasets.
Mechanism of Influence: It requires developers to publicly post documentation regarding the sources, copyright status, and personal information contained within training data before a model is released.
Evidence:
Ambiguity Notes: While it requires 'descriptions' of data, the specific granularity of these descriptions is not fully defined, potentially allowing for varying levels of detail.
Why Relevant: The legislation includes specific regulatory requirements for the use of employee data in AI development.
Mechanism of Influence: It creates a legal obligation for entities to inform employees about the purpose of AI models and the specific types of employee data used to train them.
Evidence:
Ambiguity Notes: None
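A training-data disclosure under this act might look something like the manifest below. The schema and field names are illustrative assumptions; as noted above, the act requires "descriptions" of the data without fully defining their granularity.

```python
# Hypothetical sketch of a public training-data disclosure of the kind
# the Act would require before model release. All names and fields are
# illustrative assumptions, not a format the bill prescribes.
import json

manifest = {
    "model": "example-gen-model-v1",
    "datasets": [
        {
            "name": "news-corpus-2024",
            "source": "licensed publisher archive",
            "copyright_status": "licensed",
            "contains_personal_info": False,
        },
        {
            "name": "internal-support-tickets",
            "source": "employee communications",
            "copyright_status": "proprietary",
            "contains_personal_info": True,
            "employee_data_notice_sent": True,  # per the employee-data provision
        },
    ],
}
print(json.dumps(manifest, indent=2))
```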
Legislation ID: 64602
Bill URL: View Bill
This legislation amends the General Business Law to introduce a new section focusing on responsible capability scaling policies for artificial intelligence. It mandates that all businesses operating in New York develop and file an annual certification of compliance regarding their AI practices with the Chief Information Officer. The bill also outlines the roles of the Chief Information Officer and the Attorney General in overseeing compliance and auditing policies.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-03-06 | referred to consumer affairs and protection |
Why Relevant: The bill explicitly establishes an auditing framework for AI compliance policies.
Mechanism of Influence: The Attorney General and Chief Information Officer are granted the authority to review and audit the AI policies filed by businesses to ensure regulatory adherence.
Evidence:
Ambiguity Notes: The specific technical standards for what constitutes a 'responsible capability scaling policy' are left to be defined by the Chief Information Officer through rule promulgation.
Why Relevant: It mandates the creation of internal AI governance policies and annual disclosures to the government.
Mechanism of Influence: Businesses must file an annual certification of compliance regarding their AI practices, creating a mandatory reporting and oversight loop with the state government.
Evidence:
Ambiguity Notes: The scope of 'Artificial Intelligence' is defined within the bill, but the breadth of businesses impacted depends on the CIO's use of waiver and exemption authority.
Legislation ID: 64694
Bill URL: View Bill
This bill amends the state technology law to ban public and nonpublic elementary and secondary schools from purchasing or using biometric identifying technology, such as facial recognition, for any purpose. It allows limited use of such technology for employee identification under certain conditions but requires a comprehensive report to assess the implications of biometric technology in educational settings.
| Date | Action |
|---|---|
| 2026-01-07 | referred to education |
| 2025-04-16 | amend and recommit to education |
| 2025-04-16 | print number 6720a |
| 2025-03-11 | referred to education |
Why Relevant: Facial recognition and other biometric identifying technologies are primary applications of artificial intelligence, specifically computer vision and pattern recognition. Regulating these tools falls under the umbrella of AI oversight.
Mechanism of Influence: The bill imposes a direct ban on the acquisition and use of these AI-driven technologies in schools, effectively halting their deployment until a formal impact assessment is conducted.
Evidence:
Ambiguity Notes: While 'biometric identifying technology' is defined with examples like facial recognition, the scope could extend to other AI-based systems like iris scanning or behavioral biometrics depending on the legal definition of 'biometric'.
Legislation ID: 98289
Bill URL: View Bill
This bill amends the general business law in New York to introduce regulations on algorithmically set prices. It requires clear disclosure when personalized algorithmic pricing is used, particularly in consumer transactions, and prohibits the use of protected class data in pricing decisions that could lead to discrimination. The bill aims to protect consumers from unfair pricing practices and enhance their understanding of how their personal data may influence pricing.
| Date | Action |
|---|---|
| 2026-01-07 | ordered to third reading cal.173 |
| 2025-03-25 | amended on third reading 6765a |
| 2025-03-25 | ordered to third reading rules cal.114 |
| 2025-03-25 | reported |
| 2025-03-25 | reported referred to codes |
| 2025-03-25 | reported referred to rules |
| 2025-03-25 | rules report cal.114 |
| 2025-03-12 | referred to consumer affairs and protection |
Why Relevant: The bill directly addresses the requirement for disclosures regarding the use of algorithms in consumer-facing transactions.
Mechanism of Influence: It mandates that any advertisement or announcement for a price set by an algorithm must include a clear and conspicuous disclosure to the consumer.
Evidence:
Ambiguity Notes: The scope of 'personalized algorithmic pricing' depends on the specific definition of 'algorithm' provided in the bill's definitions section.
Why Relevant: The legislation regulates the data inputs and decision-making processes of algorithmic systems to prevent bias.
Mechanism of Influence: By prohibiting the use of protected class data in pricing decisions, the law restricts how AI and algorithmic models can be trained or deployed in commercial settings.
Evidence:
Ambiguity Notes: The bill mentions 'discrimination' but may rely on existing executive law to define the specific thresholds for what constitutes a discriminatory algorithmic output.
Why Relevant: The bill establishes a regulatory framework for algorithmic oversight, including definitions and enforcement mechanisms.
Mechanism of Influence: It empowers the attorney general to seek injunctions and impose civil penalties for failure to comply with algorithmic transparency requirements.
Evidence:
Ambiguity Notes: None
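The two obligations described above, conspicuous disclosure and a ban on protected-class inputs, can be sketched together. The attribute list and disclosure wording below are illustrative assumptions, not statutory text.

```python
# Hypothetical sketch: attach a conspicuous disclosure when a price is
# set by a personalized algorithm, and reject protected-class
# attributes as pricing inputs. Names and wording are assumptions.

PROTECTED_ATTRIBUTES = {"race", "religion", "national_origin", "sex", "age", "disability"}
DISCLOSURE = "This price was set by an algorithm using your personal data."

def price_advertisement(base_price: float, inputs: dict) -> dict:
    used = set(inputs) & PROTECTED_ATTRIBUTES
    if used:
        raise ValueError(f"prohibited pricing inputs: {sorted(used)}")
    return {"price": base_price, "disclosure": DISCLOSURE}

ad = price_advertisement(19.99, {"purchase_history": [], "zip_code": "10001"})
```

A violation surfaces as a hard failure rather than a silently adjusted price, mirroring the bill's enforcement posture of injunctions and civil penalties.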
Legislation ID: 98295
Bill URL: View Bill
This bill amends the General Business Law to introduce regulations for artificial intelligence companion models. It defines key terms related to AI and establishes requirements for operators of AI companions, including protocols for handling user expressions of self-harm or harm to others. The bill mandates notifications to users about the nature of AI companions and provides a legal basis for users to seek damages in case of violations.
| Date | Action |
|---|---|
| 2026-01-07 | DIED IN SENATE |
| 2026-01-07 | ordered to third reading cal.175 |
| 2026-01-07 | RETURNED TO ASSEMBLY |
| 2025-03-25 | delivered to senate |
| 2025-03-25 | ordered to third reading rules cal.116 |
| 2025-03-25 | passed assembly |
| 2025-03-25 | REFERRED TO CONSUMER PROTECTION |
| 2025-03-25 | reported |
Why Relevant: The bill directly defines and regulates artificial intelligence companion models.
Mechanism of Influence: It establishes legal definitions for AI and generative AI, setting the scope for regulatory oversight.
Evidence:
Ambiguity Notes: The definition of 'emotional recognition algorithms' may be broad depending on the technical implementation.
Why Relevant: The bill requires specific disclosures to users about the nature of the AI.
Mechanism of Influence: Operators must notify users every three hours that the companion is not human and lacks emotions, ensuring transparency.
Evidence:
Ambiguity Notes: The frequency of notification (every three hours) might be interpreted as continuous interaction or cumulative time.
Why Relevant: The bill mandates safety protocols and crisis intervention for AI interactions.
Mechanism of Influence: AI operators must implement systems to detect and respond to expressions of self-harm or harm to others, including referrals to crisis services.
Evidence:
Ambiguity Notes: The specific 'protocols' required are not detailed, leaving implementation details to the operators or future regulation.
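Since the bill leaves implementation to operators, the safeguards above could be sketched as follows. The keyword matching, reminder wording, and crisis referral text are illustrative assumptions only; 988 is the U.S. Suicide & Crisis Lifeline.

```python
# Hypothetical sketch of the companion-model safeguards: remind the user
# every three hours that they are talking to an AI, and refer
# expressions of self-harm to a crisis line. Details are assumptions.
import time

NOTICE_INTERVAL_S = 3 * 60 * 60  # every three hours of interaction
SELF_HARM_TERMS = {"hurt myself", "end my life"}
CRISIS_REFERRAL = "If you are in crisis, help is available: call or text 988."

class CompanionSession:
    def __init__(self):
        self.last_notice = float("-inf")

    def respond(self, user_text: str, now=None) -> list:
        now = time.monotonic() if now is None else now
        out = []
        if now - self.last_notice >= NOTICE_INTERVAL_S:
            out.append("Reminder: I am an AI companion, not a human, and I do not have emotions.")
            self.last_notice = now
        if any(term in user_text.lower() for term in SELF_HARM_TERMS):
            out.append(CRISIS_REFERRAL)
        out.append("<model reply>")
        return out
```

Note the ambiguity flagged above carries through: this sketch measures three hours of elapsed time, but the bill could equally be read as three hours of cumulative interaction.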
Legislation ID: 98397
Bill URL: View Bill
This bill proposes the creation of an artificial intelligence literacy program within the digital equity competitive grant program. It seeks to provide funding for schools, community colleges, and organizations to develop and implement AI literacy initiatives, ensuring that individuals from all backgrounds gain essential knowledge and skills related to artificial intelligence technologies. The program emphasizes training for educators, resources for students, and outreach to underserved communities to bridge the digital divide.
| Date | Action |
|---|---|
| 2026-01-07 | referred to education |
| 2025-04-28 | amend (t) and recommit to education |
| 2025-04-28 | print number 6874a |
| 2025-03-18 | referred to education |
Why Relevant: The bill addresses the educational and literacy aspects of artificial intelligence, which is a foundational element of AI policy and public oversight.
Mechanism of Influence: It establishes government-funded programs that require reporting on AI implementation and training, providing a mechanism for state oversight of AI education.
Evidence:
Ambiguity Notes: The bill focuses on literacy and education rather than direct technical regulation, disclosures, or audits of AI models, but it sets a precedent for government involvement in AI-related standards and definitions.
Legislation ID: 98500
Bill URL: View Bill
This legislation seeks to address the urgent need for guidance on the integration of artificial intelligence in education. It proposes the formation of an artificial intelligence working group tasked with creating policies that ensure AI technologies enhance educational quality without compromising the roles of educators or the learning experience of students. The working group will assess current AI usage in schools, develop best practices, and provide recommendations for policy and legislative changes.
| Date | Action |
|---|---|
| 2026-01-07 | referred to education |
| 2025-03-18 | referred to education |
Why Relevant: The bill directly addresses the regulation and oversight of AI technologies within the educational sector.
Mechanism of Influence: By establishing a working group to create model policies and guidance, the state initiates a framework for how AI tools can be legally and safely deployed in classrooms, affecting procurement and usage standards.
Evidence:
Ambiguity Notes: While the bill focuses on guidance and model policies rather than hard prohibitions or technical audits like weight submission, these policies often form the basis for future mandatory regulations.
Legislation ID: 98497
Bill URL: View Bill
This bill introduces new regulations under the General Business Law to address the impact of addictive feeds on users, particularly in social media platforms. It defines key terms related to addictive feeds and algorithmic recommendations, mandates user control settings, prohibits deceptive design practices (dark patterns), and establishes enforcement mechanisms including penalties for non-compliance.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-03-18 | referred to consumer affairs and protection |
Why Relevant: The bill directly regulates 'algorithmic recommendations,' which are a primary application of AI in social media contexts.
Mechanism of Influence: It mandates that platforms provide a mechanism for users to opt-out of AI-driven content delivery (algorithmic feeds), effectively regulating the deployment and user interaction with recommendation algorithms.
Evidence:
Ambiguity Notes: The bill's definition of 'algorithmic recommendation' likely encompasses various machine learning and AI models used for content ranking, though the specific technical thresholds for these models are not detailed.
Legislation ID: 98704
Bill URL: View Bill
This act seeks to amend the executive law and the criminal procedure law in New York State regarding the use of AI and facial recognition technology (FRT) in criminal investigations. It aims to create protocols for law enforcement use of these technologies while prohibiting the use of AI-generated outputs as evidence in court. The bill emphasizes the need for transparency, auditing, and training for law enforcement agencies to mitigate biases and errors associated with AI systems.
| Date | Action |
|---|---|
| 2026-01-07 | referred to codes |
| 2025-03-21 | referred to codes |
Why Relevant: The legislation directly regulates the legal standing and disclosure requirements of AI-generated outputs in criminal proceedings.
Mechanism of Influence: It creates a legal barrier by making AI outputs inadmissible as evidence and forces transparency by requiring prosecutors to disclose information about the AI systems used.
Evidence:
Ambiguity Notes: The term 'AI-generated outputs' is broad and could encompass a wide range of technologies, potentially leading to disputes over what constitutes an AI output versus a standard digital tool.
Why Relevant: The bill mandates oversight mechanisms such as audits and record-keeping for AI and FRT systems.
Mechanism of Influence: It requires law enforcement to maintain audit trails and subjects FRT systems to regular independent audits to ensure compliance and accuracy.
Evidence:
Ambiguity Notes: The criteria for what constitutes an 'independent' audit or the specific standards for the audit are not fully defined in the summary.
Why Relevant: It addresses the operational regulation of AI through mandatory training on bias and limitations.
Mechanism of Influence: By requiring training, the law attempts to mitigate the risks of algorithmic bias and human over-reliance on AI systems in law enforcement.
Evidence:
Ambiguity Notes: None
Legislation ID: 98880
Bill URL: View Bill
This bill amends the state technology law to introduce a new section prohibiting state agencies and state-owned entities from using large language models or artificial intelligence systems to make decisions that impact individuals' rights, benefits, or services. The legislation allows for the use of AI in advisory roles and for data analysis, provided that final decisions remain with human personnel. It also mandates the development of compliance policies and grants the attorney general the authority to investigate violations.
| Date | Action |
|---|---|
| 2026-01-07 | referred to science and technology |
| 2025-03-21 | referred to science and technology |
Why Relevant: The bill directly regulates the use of artificial intelligence and large language models within government operations.
Mechanism of Influence: It imposes a legal prohibition on automated decision-making for critical individual outcomes, mandating that AI remains in an advisory capacity only.
Evidence:
Ambiguity Notes: The terms 'rights, benefits, or services' are broad and could cover a wide range of administrative actions, from social welfare to professional licensing.
Why Relevant: The legislation establishes a framework for AI governance and oversight.
Mechanism of Influence: It requires the creation of internal compliance policies and empowers the attorney general to investigate and enforce these regulations.
Evidence:
Ambiguity Notes: The specific standards for what constitutes an 'advisory role' versus a 'decision-making' role may require further clarification in policy implementation.
Legislation ID: 111254
Bill URL: View Bill
This legislation, known as the Respect Electoral Audiovisual Legitimacy (REAL) Act, seeks to amend the election law to prevent the use of generative artificial intelligence for creating realistic audio, video, or photo representations of political candidates. The bill defines generative artificial intelligence and establishes regulations regarding its use in political communications to maintain authenticity and prevent misinformation.
| Date | Action |
|---|---|
| 2026-01-07 | referred to election law |
| 2025-04-04 | referred to election law |
Why Relevant: The act directly regulates the use of generative AI by prohibiting specific types of AI-generated content in political communications.
Mechanism of Influence: It creates a legal prohibition against using AI to generate realistic depictions of candidates, effectively restricting the deployment of generative AI tools in election-related media.
Evidence:
Ambiguity Notes: The term "realistic" is not strictly defined in the abstract, which could lead to varying interpretations of what constitutes a prohibited depiction.
Why Relevant: The legislation establishes a formal legal definition for generative artificial intelligence.
Mechanism of Influence: By defining the technology, the law sets the jurisdictional boundaries for which AI systems and outputs are subject to these electoral regulations.
Evidence:
Ambiguity Notes: The specific technical criteria used to define "generative artificial intelligence" are not detailed in the summary, potentially leaving room for debate on emerging technologies.
Legislation ID: 54545
Bill URL: View Bill
The New York Artificial Intelligence Consumer Protection Act seeks to regulate the use of artificial intelligence decision systems that may lead to algorithmic discrimination. It defines key terms, outlines documentation requirements for AI developers, mandates risk management practices, and establishes enforcement mechanisms to protect consumers from discriminatory practices based on various protected characteristics.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-01-08 | referred to consumer affairs and protection |
Why Relevant: The legislation directly addresses the regulation of artificial intelligence systems to prevent discrimination.
Mechanism of Influence: It mandates that developers of high-risk AI systems maintain technical documentation and conduct risk management practices.
Evidence:
Ambiguity Notes: The term 'high-risk AI decision system' is defined within the act but its specific scope depends on the provided definitions section.
Why Relevant: The act requires specific disclosures and transparency measures.
Mechanism of Influence: Developers are legally obligated to disclose the uses, limitations, and known risks of their AI systems to deployers.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill includes a mandatory auditing requirement for AI systems.
Mechanism of Influence: It requires developers to perform annual bias and governance audits to ensure compliance and manage risks.
Evidence:
Ambiguity Notes: The specific standards for what constitutes a 'governance audit' may require further regulatory clarification.
Why Relevant: The act establishes government oversight and enforcement mechanisms.
Mechanism of Influence: The Attorney General is granted the power to enforce these regulations and hold developers accountable for violations.
Evidence:
Ambiguity Notes: None
This bill aims to empower New York consumers by granting them greater control over their personal data. It mandates businesses to provide clear information on data usage, allows consumers to access and delete their data, and requires businesses to maintain data security and notify consumers of risks. The bill also establishes enforcement mechanisms through the New York State Attorney General.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-05-02 | referred to consumer affairs and protection |
Why Relevant: The bill establishes foundational data governance rules that apply to the datasets used to train and operate artificial intelligence systems.
Mechanism of Influence: AI developers and companies using AI would be classified as data controllers or processors, requiring them to provide disclosures on how consumer data is used within their models and to honor deletion or access requests for data used in training.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its broad definitions of 'personal data' and 'processing' would encompass the algorithmic use of consumer information.
Legislation ID: 147753
Bill URL: View Bill
This legislation, known as the Election Content Accountability Act, mandates that starting from the 2030 election cycle, campaigns for certain high-level offices in New York must include detailed provenance data for digital content in their political communications. This data must specify the origin, any modifications made, and the involvement of generative artificial intelligence in the content's creation. Violations can lead to significant penalties assessed by the attorney general.
| Date | Action |
|---|---|
| 2026-01-07 | referred to election law |
| 2025-05-20 | referred to election law |
Why Relevant: The legislation directly regulates the disclosure of generative artificial intelligence in political communications.
Mechanism of Influence: It mandates that campaigns include provenance data specifying AI involvement and provider details for any synthetic content used in communications.
Evidence:
Ambiguity Notes: The specific technical standards for what constitutes 'provenance data' are left to the Attorney General to define through rules and regulations.
Why Relevant: The act establishes a legal and financial penalty framework for the misuse or non-disclosure of AI-generated content.
Mechanism of Influence: It imposes fines of up to $100,000 for intentional failure to disclose the use of AI or synthetic media in campaign materials.
Evidence:
Ambiguity Notes: The distinction between 'intentional' and 'unintentional' violations may require further judicial or regulatory clarification.
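Because the act leaves the technical standard for provenance data to the attorney general, any concrete schema is speculative. The sketch below shows one plausible shape, recording origin, modifications, and generative-AI involvement; all field names and the provider name are illustrative assumptions.

```python
# Hypothetical sketch of "provenance data" attached to a campaign image,
# capturing origin, modifications, and generative-AI involvement as the
# act requires. The schema is an assumption, not a defined standard.
provenance = {
    "asset": "rally_photo.jpg",
    "origin": {"creator": "Campaign Media Team", "captured": "2030-09-14"},
    "modifications": [
        {"tool": "photo editor", "change": "color correction"},
    ],
    "generative_ai": {
        "used": True,
        "provider": "example-genai-provider",  # hypothetical provider name
        "scope": "background extension",
    },
}

def requires_ai_disclosure(p: dict) -> bool:
    """A record triggers the AI-disclosure obligation if generative AI was used."""
    return bool(p.get("generative_ai", {}).get("used"))
```

Existing industry work on content credentials (such as the C2PA standard) takes a broadly similar manifest-based approach, which regulators could adopt rather than invent anew.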
Legislation ID: 147779
Bill URL: View Bill
This bill amends the civil practice law and rules of New York to require that any legal documents drafted with the assistance of generative artificial intelligence must include an affidavit disclosing this use. It mandates that a human must review and certify the accuracy of the content generated by the AI. Additionally, it defines generative artificial intelligence and outlines the requirements for disclosure in legal briefs.
| Date | Action |
|---|---|
| 2026-01-07 | referred to judiciary |
| 2025-05-20 | referred to judiciary |
Why Relevant: The bill directly addresses the requirement for disclosures when artificial intelligence is used in a professional capacity.
Mechanism of Influence: It mandates that any legal document drafted with generative AI must include a separate affidavit disclosing such use.
Evidence:
Ambiguity Notes: The requirement for an 'affidavit' is specific, but the threshold for what constitutes 'assistance' in drafting may need further clarification.
Why Relevant: The bill imposes a regulatory requirement for human oversight and auditing of AI-generated outputs.
Mechanism of Influence: It requires a human to review and certify the accuracy of content generated by AI before it is submitted to the court.
Evidence:
Ambiguity Notes: The term 'accuracy' in a legal brief can be subjective, potentially leading to disputes over the validity of the certification.
Why Relevant: The bill establishes a legal definition for generative artificial intelligence, which is foundational for AI regulation.
Mechanism of Influence: By defining the technology, the bill sets the scope for which systems are subject to disclosure and certification rules.
Evidence:
Ambiguity Notes: The definition includes 'human-like cognition and decision-making,' which are broad terms that may evolve as AI technology advances.
Legislation ID: 148212
Bill URL: View Bill
This bill, known as the New York Artificial Intelligence Transparency for Journalism Act, mandates that developers of generative artificial intelligence disclose information about the sources of training data derived from journalism. It aims to protect the rights of news organizations by requiring developers to provide details about the content they utilize from covered publications, ensuring that journalism is compensated fairly and that the public is aware of how AI systems are trained.
| Date | Action |
|---|---|
| 2026-01-07 | referred to codes |
| 2025-06-09 | amend and recommit to codes |
| 2025-06-09 | print number 8595b |
| 2025-05-29 | reported referred to codes |
| 2025-05-23 | amend and recommit to science and technology |
| 2025-05-23 | print number 8595a |
| 2025-05-22 | referred to science and technology |
Why Relevant: The bill directly mandates disclosures regarding the training data used for generative AI systems.
Mechanism of Influence: Developers are legally required to post information on their websites and provide detailed lists of URLs and content descriptions used in their training sets.
Evidence:
Ambiguity Notes: The bill specifies 'journalism providers' and 'covered publications,' which may leave ambiguity regarding whether social media posts or independent citizen journalism are included.
Why Relevant: The legislation establishes oversight and enforcement mechanisms for AI developers.
Mechanism of Influence: It empowers journalism providers to seek subpoenas and injunctions to compel developers to reveal their training data and crawler identities.
Evidence:
Ambiguity Notes: The bill states it does not alter federal copyright law, which may create legal tension if developers argue that training data usage is 'fair use' under federal law regardless of state disclosure mandates.
Legislation ID: 166783
Bill URL: View Bill
This bill introduces the Understanding Artificial Intelligence Act to define artificial intelligence and set forth liability standards for developers of advanced AI models. It establishes a strict liability framework for injuries caused by these models, while also outlining the definitions relevant to AI and the conditions under which developers may be held responsible or absolved of liability.
| Date | Action |
|---|---|
| 2026-01-07 | referred to science and technology |
| 2025-06-09 | referred to science and technology |
Why Relevant: This section directly regulates the legal accountability of AI developers, a core component of AI oversight and governance.
Mechanism of Influence: It creates a strict liability standard for injuries caused by AI models, effectively forcing developers to internalize the risks of their systems and providing a legal mechanism for redress when AI conduct causes harm.
Evidence:
Ambiguity Notes: The bill references 'negligence or tort criteria' as applied to AI conduct, which may require courts to interpret how human-centric legal standards apply to autonomous or semi-autonomous system outputs.
Why Relevant: The definitions section determines the scope of the regulation, identifying which specific technologies and entities are subject to the law.
Mechanism of Influence: By defining 'covered model' through training costs and computational requirements, the act targets high-compute, advanced AI systems for specific regulatory burdens while exempting smaller models.
Evidence:
Ambiguity Notes: The specific numerical thresholds for 'training cost' and 'computational requirements' are not provided in the abstract, leaving the exact breadth of the 'covered model' category undefined.
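The two-pronged scoping approach (training cost or compute) can be illustrated with a minimal sketch. The dollar and FLOP cutoffs below are placeholders chosen for illustration only, since the bill's actual thresholds are not stated in the summary:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- the bill's actual figures for training cost
# and computational requirements are not provided in the abstract.
COST_THRESHOLD_USD = 100_000_000     # placeholder training-cost cutoff
COMPUTE_THRESHOLD_FLOPS = 1e26       # placeholder training-compute cutoff

@dataclass
class ModelProfile:
    training_cost_usd: float
    training_flops: float

def is_covered_model(m: ModelProfile) -> bool:
    """A model is 'covered' (subject to the strict liability framework)
    if it exceeds either the cost or the compute threshold."""
    return (m.training_cost_usd >= COST_THRESHOLD_USD
            or m.training_flops >= COMPUTE_THRESHOLD_FLOPS)

frontier = ModelProfile(training_cost_usd=3e8, training_flops=5e26)
small = ModelProfile(training_cost_usd=2e5, training_flops=1e21)
print(is_covered_model(frontier))  # True
print(is_covered_model(small))     # False
```

Whether the statute treats the two criteria as disjunctive or conjunctive is itself part of the undefined scope noted above.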
Legislation ID: 166829
Bill URL: View Bill
The New York Artificial Intelligence Act aims to regulate AI systems that significantly impact individuals' rights and opportunities. It addresses algorithmic discrimination, mandates developer and deployer responsibilities, and introduces auditing and reporting requirements for high-risk AI systems. The Act emphasizes the need for transparency, oversight, and the protection of vulnerable populations from potential harms associated with AI technologies.
| Date | Action |
|---|---|
| 2026-01-12 | reference changed to science and technology |
| 2026-01-07 | referred to ways and means |
| 2025-06-11 | reference changed to ways and means |
| 2025-06-09 | referred to science and technology |
Why Relevant: The Act directly addresses the regulation of AI systems that impact individual rights and opportunities.
Mechanism of Influence: It defines high-risk AI and sets legal standards for its deployment and development, creating a compliance framework for AI technologies.
Evidence:
Ambiguity Notes: The term 'consequential decisions' may require further legal clarification to determine the full scope of applicable industries and scenarios.
Why Relevant: It mandates disclosures and transparency regarding the use of AI in decision-making.
Mechanism of Influence: Requires a five-day advance notice to users and provides an opt-out mechanism for AI-driven decisions, ensuring human oversight or choice.
Evidence:
Ambiguity Notes: The practical implementation of the opt-out without 'adverse consequences' might be complex for certain automated business models.
Why Relevant: The legislation specifically requires formal audits of AI systems, a key component of the user's request.
Mechanism of Influence: Mandates independent audits to detect and prevent algorithmic discrimination in high-risk systems, placing the burden of proof on developers and deployers.
Evidence:
Ambiguity Notes: The specific standards or certifications required for what constitutes an 'independent audit' are not detailed in the provided text.
Legislation ID: 216392
Bill URL: View Bill
The FAIR News Act establishes requirements for the disclosure of artificial intelligence usage in news media, mandates human oversight of AI-generated content, and provides protections for news workers against the misuse of their work in training AI systems. It seeks to maintain the quality of news reporting and safeguard journalistic integrity in the face of advancing technology.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-12-19 | amend (t) and recommit to consumer affairs and protection |
| 2025-12-19 | print number 8962a |
| 2025-08-13 | referred to consumer affairs and protection |
Why Relevant: The act mandates transparency through disclosures to consumers when content is generated by AI.
Mechanism of Influence: It requires news organizations to provide conspicuous disclosures on content that is significantly generated by AI, ensuring consumers are aware of the content's origin.
Evidence:
Ambiguity Notes: The term 'significantly generated' is not quantitatively defined, which may lead to varying interpretations of when a disclosure is legally required.
Why Relevant: The legislation requires human oversight and approval of AI-generated outputs.
Mechanism of Influence: It creates a legal requirement for a human-in-the-loop system where AI-generated content must be reviewed and approved by a person prior to publication.
Evidence:
Ambiguity Notes: The depth and standard of the 'review' process are not specified, leaving it unclear if a cursory glance suffices or if rigorous fact-checking is required.
Why Relevant: It regulates the use of proprietary data for the training of artificial intelligence systems.
Mechanism of Influence: The act prohibits employers from using content created by their workers to train AI models without obtaining explicit consent, and protects workers from retaliation for withholding consent.
Evidence:
Ambiguity Notes: The act does not specify the format of consent or if blanket consent can be included in standard employment contracts.
Why Relevant: The act requires disclosures to employees regarding the internal use of AI tools.
Mechanism of Influence: Employers must provide descriptions of AI systems and their purposes to their workforce, ensuring internal transparency about automation in the workplace.
Evidence:
Ambiguity Notes: It is unclear how frequently these disclosures must be updated as AI systems evolve or are updated.
Legislation ID: 241890
Bill URL: View Bill
This legislation amends the New York real property law to introduce definitions and requirements for the use of virtual agents and AI tools in property searches. It mandates that real estate brokers and online housing platforms conduct annual disparate impact analyses to assess potential discrimination resulting from these technologies. The bill also outlines specific obligations for identifying and mitigating discriminatory outcomes in algorithmic systems used for property searches and advertisements.
| Date | Action |
|---|---|
| 2026-01-07 | referred to judiciary |
| 2025-09-05 | referred to judiciary |
Why Relevant: The bill explicitly requires annual audits of AI systems used in the housing market.
Mechanism of Influence: Real estate entities must conduct disparate impact analyses on their AI tools and submit the results to the attorney general's office for oversight.
Evidence:
Ambiguity Notes: The specific technical standards for what constitutes a 'disparate impact analysis' are not fully detailed in the summary, potentially leaving room for varying levels of rigor.
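One established way to operationalize a disparate impact analysis is the "four-fifths rule" from U.S. employment law: each group's selection rate should be at least 80% of the most-favored group's rate. The bill does not prescribe this particular test; the sketch below is one plausible reading of what an annual analysis might compute:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def disparate_impact_ratios(outcomes, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group.
    Returns (ratio, flagged) per group; flagged groups fall below the
    four-fifths threshold and would warrant mitigation."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}

# Illustrative data: listings shown per 1,000 searches, by group
audit = disparate_impact_ratios({
    "group_a": (450, 1000),
    "group_b": (300, 1000),
})
print(audit)  # group_b's ratio is ~0.667, below 0.8, so it is flagged
```

A flagged ratio would trigger the bill's obligation to identify and modify the discriminatory algorithmic result before the next reporting cycle.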
Why Relevant: The legislation imposes regulatory requirements and disclosure obligations on AI-driven advertising and virtual agents.
Mechanism of Influence: It mandates public reporting on compliance and internal auditing methods, while prohibiting specific algorithmic functions like demographic targeting.
Evidence:
Ambiguity Notes: The definition of 'virtual agents' may be broad enough to cover a wide range of automated communication tools.
Why Relevant: The bill requires proactive mitigation and modification of AI algorithmic outcomes.
Mechanism of Influence: Brokers and platforms are legally obligated to identify and modify discriminatory algorithmic results and ensure predictive fairness across demographic groups.
Evidence:
Ambiguity Notes: The term 'predictive fairness' is a technical concept in machine learning that may require specific regulatory definitions to enforce consistently.
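"Predictive fairness" admits several competing formal definitions in machine learning (demographic parity, equalized odds, calibration), and they cannot generally all hold at once, which is why a regulatory definition matters. As one illustration, assuming a regulator adopted the equal-opportunity criterion, a platform could check that true positive rates are comparable across groups:

```python
def true_positive_rate(y_true: list[int], y_pred: list[int]) -> float:
    """Fraction of actual positives the model correctly predicts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    pos = sum(y_true)
    return tp / pos if pos else 0.0

def equal_opportunity_gap(groups: dict[str, tuple[list[int], list[int]]]):
    """groups maps group name -> (y_true, y_pred). Returns the largest
    pairwise TPR difference and the per-group TPRs; a small gap suggests
    comparable predictive treatment across demographic groups."""
    tprs = {g: true_positive_rate(yt, yp) for g, (yt, yp) in groups.items()}
    vals = list(tprs.values())
    return max(vals) - min(vals), tprs

gap, tprs = equal_opportunity_gap({
    "group_a": ([1, 1, 0, 1], [1, 1, 0, 0]),  # TPR = 2/3
    "group_b": ([1, 1, 1, 0], [1, 1, 1, 0]),  # TPR = 1.0
})
print(round(gap, 3))  # 0.333
```

What gap counts as acceptable, and which of the candidate metrics governs, are exactly the questions the noted ambiguity leaves to regulators.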
Legislation ID: 241953
Bill URL: View Bill
This bill amends the general business law to introduce a new section that mandates search engines to inform users when displaying information generated by artificial intelligence. It specifies the definition of artificial intelligence and outlines the requirements for disclosure, including the manner in which the information must be presented. Violations of this requirement can result in civil penalties.
| Date | Action |
|---|---|
| 2026-01-07 | referred to consumer affairs and protection |
| 2025-09-12 | referred to consumer affairs and protection |
Why Relevant: The bill directly addresses the user's interest in AI disclosures by mandating that search engines notify users of AI-generated content.
Mechanism of Influence: It requires specific formatting, including watermarks and notifications placed above the content, ensuring transparency for the end-user.
Evidence:
Ambiguity Notes: The requirement for 'clear language' and 'specific formatting' may require further regulatory clarification to ensure consistency across different platforms.
Why Relevant: The bill provides a formal legal definition of Artificial Intelligence, which is a foundational element of AI regulation.
Mechanism of Influence: By defining AI as a machine-based system for predictions or decisions, it sets the jurisdictional scope for which technologies are subject to the disclosure rules.
Evidence:
Ambiguity Notes: The definition is broad ('automated analysis of inputs'), which could potentially encompass traditional algorithms or statistical models not typically categorized as modern generative AI.
Why Relevant: The bill establishes an enforcement mechanism for AI regulations through civil penalties.
Mechanism of Influence: It imposes a financial deterrent of up to five thousand dollars for violations, creating a compliance burden for search engine operators.
Evidence:
Ambiguity Notes: It is unclear if the five thousand dollar fine is per violation (per user view) or per instance of non-compliant software deployment.
Legislation ID: 241959
Bill URL: View Bill
This legislation amends the civil practice law and rules along with the criminal procedure law to introduce requirements for the disclosure of the use of generative artificial intelligence in legal document preparation. It defines generative artificial intelligence, outlines the responsibilities of courts to inform litigants about its risks, and mandates that any documents created with AI assistance include an affidavit confirming human oversight and accuracy verification.
| Date | Action |
|---|---|
| 2026-01-07 | referred to judiciary |
| 2025-09-12 | referred to judiciary |
Why Relevant: The legislation directly addresses the user's interest in AI disclosures and regulation.
Mechanism of Influence: It mandates a formal disclosure process via affidavits for any legal document drafted using generative AI, ensuring transparency in the judicial process.
Evidence:
Ambiguity Notes: The standard for 'human review' and 'accuracy verification' is not explicitly defined, leaving room for interpretation on the level of diligence required.
Why Relevant: The bill establishes regulatory oversight and consumer protection measures for AI usage.
Mechanism of Influence: It requires legal professionals to obtain informed consent from clients and mandates that courts provide warnings about AI risks, effectively regulating how AI is integrated into professional services.
Evidence:
Ambiguity Notes: The specific 'risks' and 'dangers' that courts must warn about are not detailed, which may lead to inconsistent messaging across different courts.
Why Relevant: It provides a statutory definition for generative artificial intelligence.
Mechanism of Influence: By defining the technology's capabilities, such as autonomous task performance and learning from data, it sets the scope for which tools are subject to these legal regulations.
Evidence:
Ambiguity Notes: The phrase 'perform tasks autonomously' could be interpreted broadly to include basic automation or narrowly to include only advanced LLMs.
Legislation ID: 241968
Bill URL: View Bill
This bill introduces a new section to the education law that defines the permissible use of artificial intelligence in mental health care. It establishes clear guidelines for licensed professionals regarding administrative support and supplementary support tasks, while emphasizing the importance of client consent and confidentiality. The bill also outlines penalties for violations and clarifies that certain services, such as religious counseling and peer support, are exempt from these regulations.
| Date | Action |
|---|---|
| 2026-01-07 | referred to higher education |
| 2025-09-26 | referred to higher education |
Why Relevant: The bill establishes a regulatory framework for the use of AI in a specific professional sector.
Mechanism of Influence: It prohibits AI from performing core professional functions like direct therapeutic communication or independent decision-making, restricting its use to administrative and supplementary support.
Evidence:
Ambiguity Notes: The term 'supplementary support tasks' is not explicitly defined in the summary and could be interpreted broadly.
Why Relevant: The legislation requires mandatory disclosures to users regarding the involvement of AI.
Mechanism of Influence: Licensed professionals are required to obtain informed written consent from patients or their representatives specifically regarding the use of AI in their care.
Evidence:
Ambiguity Notes: The bill does not specify the level of technical detail required in the informed consent disclosure.
Why Relevant: The bill includes enforcement mechanisms and penalties for the unauthorized or improper use of AI.
Mechanism of Influence: It establishes a civil penalty system with fines reaching up to fifty thousand dollars per violation to ensure compliance with AI regulations.
Evidence:
Ambiguity Notes: None
Legislation ID: 242047
Bill URL: View Bill
This bill amends Section 240.50 of the penal law to include provisions that specifically address the use of artificial intelligence in falsely reporting incidents. It outlines several scenarios where an individual can be guilty of falsely reporting an incident, including false reports of crimes, emergencies, or child abuse. The bill classifies these offenses as a class A misdemeanor.
| Date | Action |
|---|---|
| 2026-01-07 | referred to codes |
| 2025-10-17 | referred to codes |
Why Relevant: The legislation specifically targets the use of artificial intelligence in the commission of a crime, aligning with the user's interest in AI regulation and oversight.
Mechanism of Influence: It expands the scope of existing penal law to ensure that false reports created or disseminated via AI are subject to criminal prosecution, thereby regulating the conduct of individuals using AI tools.
Evidence:
Ambiguity Notes: The bill uses the broad term 'use of artificial intelligence' without defining specific technologies, which could encompass deepfakes, automated bots, or AI-generated text used to deceive emergency services.
Legislation ID: 242052
Bill URL: View Bill
This bill amends the education law to include a new section that prohibits the use of artificial intelligence (AI) in classrooms for students below ninth grade, with specific allowances for diagnostic and instructional interventions for students with disabilities. It also empowers the commissioner to provide guidance on permissible uses of AI and clarifies that teachers and school personnel may still use AI for administrative purposes.
| Date | Action |
|---|---|
| 2026-01-07 | referred to education |
| 2025-11-03 | referred to education |
Why Relevant: The bill directly regulates the deployment and usage of AI technologies within the education sector, specifically targeting age-based restrictions.
Mechanism of Influence: It establishes a legal ban on AI tools for students in grades K-8, requiring schools to filter or restrict access to such technologies in a classroom setting.
Evidence:
Ambiguity Notes: The bill does not provide a technical definition of 'artificial intelligence', which may lead to uncertainty regarding whether standard educational software with automated features is included in the prohibition.
Legislation ID: 242081
Bill URL: View Bill
This bill amends the general business law to introduce Article 47-A, which establishes requirements for the development and maintenance of AI technologies in professional fields. It mandates that developers involve professional domain experts throughout the design, training, validation, and ongoing evaluation processes of AI systems to ensure compliance with ethical and safety standards.
| Date | Action |
|---|---|
| 2026-01-07 | referred to science and technology |
| 2025-11-03 | referred to science and technology |
Why Relevant: The legislation imposes mandatory oversight and documentation requirements for AI development.
Mechanism of Influence: It forces developers to involve domain experts in the validation and risk assessment phases, effectively requiring a form of expert-led oversight and internal auditing before and during deployment.
Evidence:
Ambiguity Notes: The specific qualifications for a 'professional domain expert' and the depth of the 'risk assessment' are subject to the Attorney General's rulemaking.
Why Relevant: The bill requires specific disclosures regarding the safety and ethics of AI systems to the government.
Mechanism of Influence: Developers are legally obligated to disclose known risks and ethical concerns to the Attorney General, creating a government oversight mechanism for AI safety and potential harms.
Evidence:
Ambiguity Notes: It is unclear if these disclosures will be made public or remain confidential within the Attorney General's office.
Legislation ID: 242115
Bill URL: View Bill
This bill amends the executive law to require policing agencies to conduct an annual inventory of AI systems used in criminal investigations and to develop a publicly accessible policy regarding their use. The legislation defines covered AI, mandates disclosure in police reports, and establishes a model policy to be adopted by law enforcement agencies. It also allows for civil action against agencies that fail to comply with these requirements.
| Date | Action |
|---|---|
| 2026-01-07 | referred to codes |
| 2025-11-21 | referred to codes |
Why Relevant: The bill establishes a regulatory framework for the use of AI within a specific government sector (law enforcement).
Mechanism of Influence: It mandates transparency through annual inventories and public disclosure of AI capabilities, data inputs, and outputs.
Evidence:
Ambiguity Notes: The definition of 'machine-based technology that generates outputs from inputs' is broad and could encompass a wide range of standard software if not interpreted strictly.
Why Relevant: The legislation requires specific disclosures regarding the use of AI in official government documentation.
Mechanism of Influence: It forces law enforcement to document the role of AI in criminal investigations and the use of generative AI in drafting reports.
Evidence:
Ambiguity Notes: The extent of detail required for 'the AI's role in the investigation' may vary by agency interpretation.
Why Relevant: The bill provides for oversight and enforcement of AI regulations through legal action.
Mechanism of Influence: It empowers the Attorney General to investigate compliance and allows individuals to bring civil actions against non-compliant agencies.
Evidence:
Ambiguity Notes: None
Legislation ID: 252536
Bill URL: View Bill
This legislation seeks to amend the general business law to establish the Responsible AI Safety and Education (RAISE) Act, which introduces mandatory transparency and safety protocols for large frontier developers of artificial intelligence. It emphasizes the need for standardized disclosures, incident reporting, and the establishment of frameworks to manage catastrophic risks associated with AI technologies. The bill reflects the intent to foster innovation while safeguarding public interests.
| Date | Action |
|---|---|
| 2026-01-28 | reported referred to ways and means |
| 2026-01-21 | reported referred to codes |
| 2026-01-07 | referred to science and technology |
| 2026-01-06 | referred to science and technology |
Why Relevant: The legislation directly addresses the user's interest in AI disclosures and transparency requirements.
Mechanism of Influence: It mandates that large frontier developers publish transparency reports and a frontier AI framework on their websites before deploying models.
Evidence:
Ambiguity Notes: The specific financial thresholds defining a 'large frontier developer' are mentioned but not enumerated in the summary, potentially leaving the scope to be defined by rulemaking.
Why Relevant: The bill requires audits and reporting of safety incidents, aligning with the user's request for AI oversight and auditing legislation.
Mechanism of Influence: Developers are legally obligated to report critical safety incidents and unauthorized access to the government, supported by third-party assessments.
Evidence:
Ambiguity Notes: The criteria for what constitutes a 'critical safety incident' may be subject to interpretation or further department rulemaking.
Why Relevant: The act establishes government oversight and regulatory authority over AI development.
Mechanism of Influence: It grants rulemaking authority to a state department to implement safety protocols and defines specific duties for developers to ensure compliance.
Evidence:
Ambiguity Notes: The 'duties and obligations' are broadly stated and will likely be clarified through the granted rulemaking authority.
Legislation ID: 252574
Bill URL: View Bill
This bill amends the state technology law, education law, and civil service law to address the use of automated decision-making tools and artificial intelligence systems by various government entities. It repeals previous provisions related to automated decision-making and introduces new requirements for disclosure of such tools, aiming to protect employees' rights and maintain existing collective bargaining agreements. The bill also defines covered entities and outlines their responsibilities regarding the use of these technologies.
| Date | Action |
|---|---|
| 2026-01-21 | ordered to third reading rules cal.51 |
| 2026-01-21 | reported |
| 2026-01-21 | reported referred to rules |
| 2026-01-21 | rules report cal.51 |
| 2026-01-21 | substituted by s8831 |
| 2026-01-07 | referred to science and technology |
Why Relevant: The bill establishes mandatory disclosure requirements for AI-driven tools used in employment contexts.
Mechanism of Influence: Covered entities are required to maintain and publish an annual list on their websites detailing the description, purpose, and start date of any automated employment decision-making tools in use.
Evidence:
Ambiguity Notes: The term 'any relevant information' regarding disclosures is broad and may be subject to varying interpretations by different government agencies.
Why Relevant: The legislation regulates the operational use of AI to prevent the displacement of human labor and protect worker rights.
Mechanism of Influence: It creates a legal barrier against using AI to automate away duties currently held by employees or to reduce wages and benefits, effectively regulating the scope of AI integration in the public sector workforce.
Evidence:
Ambiguity Notes: The bill does not explicitly define the technical threshold for what constitutes an 'artificial intelligence system' versus a standard software tool.
Legislation ID: 259907
Bill URL: View Bill
This bill, known as the automation displacement protection act, seeks to amend the labor law to establish protections for workers facing displacement due to the implementation of artificial intelligence and automated systems. It requires covered employers to notify employees about potential job losses, provide a transition period with options for retraining, and outlines penalties for non-compliance.
| Date | Action |
|---|---|
| 2026-01-14 | referred to labor |
Why Relevant: The bill requires specific disclosures regarding the implementation of AI systems in the workplace.
Mechanism of Influence: Employers must provide written notice to affected employees and state officials detailing the specific functions being automated by AI systems.
Evidence:
Ambiguity Notes: The definition of 'artificial intelligence system' is broad, potentially covering a wide range of software beyond generative AI.
Why Relevant: The legislation regulates the deployment of AI by imposing operational requirements and penalties on businesses using the technology.
Mechanism of Influence: It mandates a workforce transition period and imposes civil penalties of up to $10,000 per day for violations related to AI-driven displacement.
Evidence:
Ambiguity Notes: The threshold for 'technological displacement' (25 employees or 25% of the workforce) limits the scope of regulation to larger-scale AI implementations.
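The scoping rule described above (at least 25 employees, or at least 25% of the workforce) is straightforward to express. This sketch assumes the two conditions are disjunctive, which the summary implies but does not state outright:

```python
def triggers_displacement_protections(displaced: int, workforce: int) -> bool:
    """True if an AI-driven workforce reduction meets the bill's stated
    threshold for 'technological displacement': at least 25 employees,
    or at least 25% of the employer's workforce."""
    return displaced >= 25 or displaced / workforce >= 0.25

print(triggers_displacement_protections(30, 500))  # True  (>= 25 employees)
print(triggers_displacement_protections(10, 30))   # True  (~33% of workforce)
print(triggers_displacement_protections(5, 400))   # False (below both cutoffs)
```

Under this reading, a small employer automating a third of its staff is covered even though fewer than 25 people are displaced, which matters for how narrowly the regulation actually applies.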
Legislation ID: 283141
Bill URL: View Bill
The bill introduces a new section to the labor law mandating that covered businesses, defined as those employing over 100 people or being publicly traded, submit annual reports detailing the effects of artificial intelligence on their workforce and operations. These reports must include data on employment changes related to AI and the nature of AI usage, including oversight and protections for sensitive data. The Department of Labor will develop reporting forms and compile an annual analysis based on submitted reports, with penalties for non-compliance.
| Date | Action |
|---|---|
| 2026-01-21 | referred to labor |
Why Relevant: The bill directly addresses the regulation and disclosure of artificial intelligence usage within the corporate sector.
Mechanism of Influence: It forces businesses to disclose the 'nature of AI usage' and 'oversight' measures, creating a transparency mechanism for how AI affects the labor market and requiring businesses to document their data protection measures.
Evidence:
Ambiguity Notes: The phrase 'nature of AI usage' is broad and could range from high-level descriptions to detailed technical disclosures depending on the Department of Labor's implementation.
Legislation ID: 283164
Bill URL: View Bill
This bill amends the labor law to prohibit employers from relying solely on automated systems for making employment decisions without meaningful human review. It mandates that any automated recommendations or evaluations be subject to human oversight, ensuring that applicants are not denied employment based solely on automated assessments. Employers must inform applicants about the use of automated systems and provide the opportunity for human review of adverse decisions.
| Date | Action |
|---|---|
| 2026-01-21 | referred to labor |
Why Relevant: The legislation directly regulates the application of AI and automated systems in the context of employment and hiring.
Mechanism of Influence: It mandates human oversight for AI-driven decisions and prohibits fully autonomous hiring processes, effectively creating a regulatory framework for AI deployment in HR.
Evidence:
Ambiguity Notes: The definition of 'meaningful human review' as a 'deliberate evaluation' may require further clarification to determine the necessary depth of human involvement to satisfy the law.
Why Relevant: The bill includes specific disclosure requirements for entities using automated decision-making tools.
Mechanism of Influence: Employers are legally required to inform applicants when automated systems are used and must provide transparency regarding the types of data these systems analyze.
Evidence:
Ambiguity Notes: The requirement to 'describe data analyzed' is broad and could range from a general category list to a detailed technical disclosure of inputs.
This bill introduces a new article to the labor law that prohibits employers from using algorithmic wage-setting, which involves determining wages through automated decision systems based on surveillance data of employees. It establishes definitions for key terms, outlines requirements for employers who may use such systems, and details enforcement mechanisms including civil penalties and the right for employees to take legal action if their rights are violated.
| Date | Action |
|---|---|
| 2026-01-21 | referred to labor |
Why Relevant: The bill directly regulates the application of automated decision systems (a form of AI) in the context of labor and wage determination.
Mechanism of Influence: It imposes a prohibition on algorithmic wage-setting unless employers provide specific disclosures and allow for data audits/corrections by employees, effectively requiring transparency and oversight of AI-driven financial decisions.
Evidence:
Ambiguity Notes: The definition of 'automated decision system' is broad and could encompass a wide range of AI and machine learning models used for workforce management.
Legislation ID: 283236
Bill URL: View Bill
This bill introduces a new article in the civil rights law dedicated to the regulation of artificial intelligence and algorithms. It outlines definitions, establishes standards for algorithmic use, mandates evaluations and assessments, and includes provisions for consumer protection and rights. The act aims to ensure that the deployment of algorithms does not lead to discriminatory practices and that individuals are informed and protected against potential harms caused by such technologies.
| Date | Action |
|---|---|
| 2026-01-21 | referred to science and technology |
Why Relevant: The bill directly addresses the regulation of artificial intelligence and algorithmic decision-making systems.
Mechanism of Influence: It creates legal standards for 'covered algorithms' and mandates both pre-deployment evaluations and post-deployment impact assessments, effectively requiring audits of AI systems.
Evidence:
Ambiguity Notes: The specific technical criteria for what constitutes a 'covered algorithm' are mentioned as being defined in the act but are not detailed in the abstract, potentially leaving the scope of regulated technologies broad.
Why Relevant: The legislation includes specific requirements for transparency and informing the public about AI usage.
Mechanism of Influence: The 'Notice and disclosure' provision mandates that individuals be informed when algorithms are used in ways that affect them, while 'Consumer awareness' sections aim to enhance public understanding of algorithmic implications.
Evidence:
Ambiguity Notes: The abstract does not specify the threshold of 'impact' required to trigger a notice, nor the specific format the disclosure must take.
Why Relevant: The bill establishes oversight and accountability measures for AI developers and users.
Mechanism of Influence: It defines the legal relationships and responsibilities between developers and deployers and establishes enforcement mechanisms to ensure compliance with civil rights standards.
Evidence:
Ambiguity Notes: The 'Content regulations' section mentions preventing 'harmful outcomes,' which is a subjective term that may require further regulatory clarification.
Legislation ID: 54749
Bill URL: View Bill
This bill, known as the Protect Our Privacy (POP) Act, establishes regulations regarding the use of drones by law enforcement agencies. It prohibits the use of drones for general law enforcement purposes without a warrant, restricts the collection of data during protests and other First Amendment activities, and mandates the destruction of certain data collected by drones. Additionally, it provides individuals with the right to sue for violations of their privacy rights related to drone surveillance.
| Date | Action |
|---|---|
| 2026-01-07 | referred to governmental operations |
| 2025-01-08 | referred to governmental operations |
Why Relevant: The legislation specifically targets facial recognition technology, which is a primary application of artificial intelligence in surveillance and biometric data processing.
Mechanism of Influence: The bill mandates the retroactive deletion of data collected via facial recognition and prohibits law enforcement from using such AI-generated data in investigations, thereby regulating the deployment and legal admissibility of AI surveillance outputs.
Evidence:
Ambiguity Notes: The bill focuses on a specific use case of AI (facial recognition) rather than general-purpose AI or algorithmic transparency, and it does not define the technical standards of what constitutes 'facial recognition technology'.
Legislation ID: 54751
Bill URL: View Bill
This bill outlines the responsibilities of data controllers and processors in relation to consumer personal data. It emphasizes the need for data protection assessments, the prohibition of unfair practices in obtaining consent, and the requirement to maintain reasonable safeguards for data security. Additionally, the bill mandates that controllers must not discriminate against consumers exercising their rights and must ensure that any data shared with processors is done under strict contractual obligations.
| Date | Action |
|---|---|
| 2026-01-07 | referred to codes |
| 2025-05-27 | reported referred to codes |
| 2025-01-08 | referred to consumer affairs and protection |
Why Relevant: The mandate for data protection assessments for high-risk processing is a common regulatory tool used to oversee AI and automated decision-making systems.
Mechanism of Influence: AI systems that utilize personal data for profiling or automated decision-making would likely be classified as 'heightened risk,' thereby requiring the controllers to conduct and document audits of the AI's impact on privacy.
Evidence:
Ambiguity Notes: The term 'heightened risk' is not explicitly defined to include AI, but in the context of modern privacy legislation, this category almost always encompasses algorithmic processing and machine learning applications.
Why Relevant: The bill regulates the collection and sharing of data, which is the foundational component for training and operating AI models.
Mechanism of Influence: By requiring consent or opt-out rights for third-party data sharing, the bill limits how consumer data can be aggregated for use in third-party AI training sets or large language models.
Evidence:
Ambiguity Notes: The bill does not specifically mention 'AI training,' but the restrictions on 'Third Party Data Sharing' would practically apply to any AI developer receiving data from a controller.
Legislation ID: 66847
Bill URL: View Bill
The New York Artificial Intelligence Act seeks to address the growing use of AI in various sectors and the potential for algorithmic discrimination. It mandates developers and deployers of AI systems to ensure their products do not harm consumers or violate civil rights. The legislation emphasizes the importance of transparency, accountability, and collaboration between the government and the AI industry to mitigate risks associated with AI technologies.
| Date | Action |
|---|---|
| 2026-01-07 | died in assembly |
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2026-01-07 | returned to senate |
| 2025-06-12 | COMMITTEE DISCHARGED AND COMMITTED TO RULES |
| 2025-06-12 | DELIVERED TO ASSEMBLY |
| 2025-06-12 | ORDERED TO THIRD READING CAL.1867 |
| 2025-06-12 | PASSED SENATE |
| 2025-06-12 | referred to ways and means |
Why Relevant: The legislation directly mandates third-party audits for high-risk AI systems, which is a specific area of interest for the user.
Mechanism of Influence: It requires high-risk AI systems to undergo independent audits prior to deployment and every eighteen months thereafter to assess data management, accuracy, and compliance.
Evidence:
Ambiguity Notes: The effectiveness of this provision depends on the specific criteria used to define 'high-risk' and the standards set for 'independent' auditors.
Why Relevant: The bill requires significant disclosures to both consumers and the government.
Mechanism of Influence: Deployers must notify end users when AI is used for consequential decisions and provide a detailed report of the system's functionality and risks to the Attorney General.
Evidence:
Ambiguity Notes: The term 'consequential decision' is a key threshold for disclosure that may require further regulatory clarification to determine which specific AI applications are covered.
Why Relevant: The act establishes a formal oversight mechanism involving the submission of technical and operational data to the government.
Mechanism of Influence: It mandates that developers and deployers submit reports to the Attorney General including software stack descriptions and risk assessments, facilitating state-level oversight of AI technologies.
Evidence:
Ambiguity Notes: While it requires reporting on the 'software stack,' it does not explicitly mention the submission of 'model weights' as requested by the user, though this could be interpreted as part of the technical documentation required.
Why Relevant: The legislation provides consumers with the right to opt-out of automated decision-making, a core component of AI regulation.
Mechanism of Influence: It forces developers to provide a human-led alternative to AI decisions and ensures consumers are not penalized for choosing the opt-out path.
Evidence:
Ambiguity Notes: The 'forty-five days' response window for human-rendered decisions might be considered a long delay for certain types of consumer interactions.
Legislation ID: 66924
Bill URL: View Bill
This legislation amends the General Business Law to include definitions and requirements regarding the use of synthetic performers in advertisements. It mandates that any advertisement featuring a synthetic performer must clearly disclose this fact if the advertiser has actual knowledge of it. Violations of this requirement can result in civil penalties.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO CONSUMER PROTECTION |
| 2025-06-13 | COMMITTED TO RULES |
| 2025-06-04 | ADVANCED TO THIRD READING |
| 2025-05-29 | 2ND REPORT CAL. |
| 2025-05-28 | 1ST REPORT CAL.1408 |
| 2025-05-21 | AMEND AND RECOMMIT TO CONSUMER PROTECTION |
| 2025-05-21 | PRINT NUMBER 1228C |
| 2025-05-15 | AMEND AND RECOMMIT TO CONSUMER PROTECTION |
Why Relevant: The bill directly addresses the requirement for disclosures when using AI-generated content in a commercial context.
Mechanism of Influence: It mandates that advertisers must disclose the use of synthetic performers if they have actual knowledge, creating a legal obligation for AI transparency.
Evidence:
Ambiguity Notes: The term 'actual knowledge' may create a loophole where advertisers claim ignorance of the AI-generated nature of a performer provided by a third party.
Why Relevant: The legislation provides formal legal definitions for generative artificial intelligence.
Mechanism of Influence: By defining 'generative artificial intelligence' and 'synthetic performer', the bill establishes the jurisdictional scope for AI regulation within the state's business law.
Evidence:
Ambiguity Notes: The definition of 'synthetic performer' might be interpreted broadly to include various forms of digital manipulation beyond generative AI, or narrowly depending on the specific technical language used.
Why Relevant: The bill establishes an enforcement framework for AI regulations through financial penalties.
Mechanism of Influence: It imposes civil penalties ranging from $1,000 to $5,000 for failure to comply with AI disclosure requirements, providing a deterrent against undisclosed AI usage.
Evidence:
Ambiguity Notes: It is unclear if penalties apply per advertisement, per airing, or per campaign.
Legislation ID: 67665
Bill URL: View Bill
This bill amends the general business law in New York to mandate that any book published in the state that was created using generative AI must include a conspicuous disclosure indicating such use. The requirement applies to all book formats, printed and digital, and encompasses various types of content, including text, images, and games. It also defines generative AI and outlines its various forms and capabilities.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-01-14 | REFERRED TO INTERNET AND TECHNOLOGY |
Why Relevant: The bill directly addresses the regulation of artificial intelligence by imposing mandatory disclosure requirements on AI-generated content.
Mechanism of Influence: It creates a legal obligation for publishers to label products, thereby providing transparency to consumers regarding the use of AI in creative works.
Evidence:
Ambiguity Notes: The phrase 'partially' created with generative AI lacks a specific percentage or threshold, which may lead to broad interpretations regarding how much AI assistance triggers the disclosure requirement.
Why Relevant: The legislation establishes a legal definition for generative AI within the state's general business law.
Mechanism of Influence: By defining generative AI as systems that 'mimic human cognitive tasks' and 'perform tasks with minimal human oversight,' it sets the scope for which technologies are subject to oversight.
Evidence:
Ambiguity Notes: The definition of 'minimal human oversight' is subjective and could be interpreted differently depending on the complexity of the AI tool used.
Legislation ID: 67845
Bill URL: View Bill
The New York Artificial Intelligence Consumer Protection Act introduces a framework for regulating the use of artificial intelligence (AI) decision systems. It mandates compliance with ethical and privacy standards, allows for public research, and outlines the responsibilities of developers and deployers of AI technologies. The bill also establishes enforcement mechanisms and exemptions for certain entities, ensuring that AI systems do not adversely affect individuals' rights and freedoms.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-01-14 | REFERRED TO INTERNET AND TECHNOLOGY |
Why Relevant: The act mandates pre-market research and testing of AI systems, which aligns with the user's interest in regulation and audits.
Mechanism of Influence: This requirement forces developers to audit their systems for safety and compliance before they reach consumers, ensuring that AI systems do not adversely affect rights.
Evidence:
Ambiguity Notes: The terms 'research and testing' are broad and may vary in rigor depending on the specific AI application or industry standards.
Why Relevant: It establishes enforcement mechanisms and reporting requirements for AI misuse.
Mechanism of Influence: The Attorney General is granted exclusive enforcement authority, and developers are mandated to investigate and report misuse, providing a layer of government oversight.
Evidence:
Ambiguity Notes: The specific criteria for what constitutes 'misuse' are not fully defined in the abstract, potentially leaving room for interpretation by the Attorney General.
Why Relevant: The bill introduces risk management and red-teaming as a compliance mechanism.
Mechanism of Influence: It encourages developers to perform internal audits (red-teaming) to identify and cure violations, offering an affirmative defense if corrective actions are taken within 60 days.
Evidence:
Ambiguity Notes: The effectiveness of the 'affirmative defense' depends on the specific risk management frameworks adopted by the developers.
Legislation ID: 68482
Bill URL: View Bill
This bill introduces the New York Artificial Intelligence Ethics Commission Act, which aims to create a commission responsible for overseeing the ethical deployment of AI technologies within the state. The commission will establish guidelines, review AI projects, educate the public, and investigate complaints related to unethical AI practices, ensuring that deployed AI systems do not discriminate or violate privacy rights.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-01-21 | REFERRED TO INTERNET AND TECHNOLOGY |
Why Relevant: The bill directly regulates AI by creating an oversight commission with the power to review projects and set ethical standards.
Mechanism of Influence: The commission reviews AI projects for compliance and establishes ethical guidelines that private and state entities must follow.
Evidence:
Ambiguity Notes: The term 'ethical guidelines' is broad and its specific requirements for AI developers are not fully defined in the abstract.
Why Relevant: The legislation includes provisions for auditing and reporting on AI systems.
Mechanism of Influence: The commission is required to submit annual reports that include audit results, providing a mechanism for government oversight of AI activities.
Evidence:
Ambiguity Notes: The scope and technical depth of the 'audit results' are not specified.
Why Relevant: The bill establishes enforcement mechanisms and penalties for AI-related violations.
Mechanism of Influence: It allows for civil and criminal penalties for entities that use AI in ways that discriminate, infringe on privacy, or cause harm.
Evidence:
Ambiguity Notes: The distinction between economic and non-economic harm for the purpose of civil vs criminal prosecution may require further legal clarification.
Legislation ID: 69303
Bill URL: View Bill
This bill seeks to redefine the term 'following' in the context of stalking in the fourth degree under the New York penal law. It expands the definition to encompass unauthorized tracking of an individual's movements or location through devices or software that can access, record, or report on a person's location without their consent. Additionally, it clarifies that an employer's use of tracking technology in the normal course of business does not constitute stalking under this statute.
| Date | Action |
|---|---|
| 2026-01-07 | died in assembly |
| 2026-01-07 | REFERRED TO CODES |
| 2026-01-07 | returned to senate |
| 2025-05-13 | DELIVERED TO ASSEMBLY |
| 2025-05-13 | PASSED SENATE |
| 2025-05-13 | referred to codes |
| 2025-04-15 | ADVANCED TO THIRD READING |
| 2025-04-10 | 2ND REPORT CAL. |
Why Relevant: The legislation regulates the use of software for surveillance and tracking, which is a primary application of AI and automated data processing technologies.
Mechanism of Influence: It creates legal liability for the unauthorized use of tracking software, which would include AI-driven geolocation and behavioral monitoring tools.
Evidence:
Ambiguity Notes: The bill uses the broad term 'software' rather than specifically naming 'artificial intelligence,' but the scope encompasses any algorithmic or automated software used for location reporting.
This bill addresses the growing concerns over privacy violations and the misuse of personal information in various domains, including employment, healthcare, and finance. It emphasizes the need for transparency in how personal data is handled and mandates that individuals give explicit consent before their information can be collected or used. Additionally, it introduces specific protections for biometric information and outlines the responsibilities of entities that engage in data collection and processing.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-02-03 | REFERRED TO INTERNET AND TECHNOLOGY |
Why Relevant: The provision regarding Automated Decision Systems directly regulates the deployment and financial support of AI-driven decision-making tools.
Mechanism of Influence: By prohibiting payment for systems that lack human intervention and banning discriminatory algorithms, the law imposes operational and ethical constraints on AI developers and users.
Evidence:
Ambiguity Notes: The term 'automated decision systems' is broad and typically includes machine learning models and AI, though the specific technical threshold for what constitutes 'automated' is not defined here.
Why Relevant: The Internet Safety Education provision relates to the user's interest in age-appropriate usage and digital literacy regarding technology.
Mechanism of Influence: It mandates a curriculum that includes digital literacy and privacy, which are foundational to safe AI usage and understanding algorithmic impact.
Evidence:
Ambiguity Notes: While it does not explicitly name 'Artificial Intelligence', digital literacy curricula in the modern era frequently encompass AI interactions and safety.
Legislation ID: 70178
Bill URL: View Bill
This bill amends the labor law to establish criteria for the use of automated employment decision tools, which include various systems that filter job candidates. It mandates that employers conduct annual disparate impact analyses to assess the effects of these tools on different demographic groups and requires transparency in the reporting of these analyses.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO LABOR |
| 2025-12-05 | AMEND (T) AND RECOMMIT TO LABOR |
| 2025-12-05 | PRINT NUMBER 4394A |
| 2025-02-04 | REFERRED TO LABOR |
Why Relevant: The bill specifically targets automated employment decision tools, which are a subset of artificial intelligence and algorithmic systems used to filter and evaluate job candidates.
Mechanism of Influence: It imposes mandatory auditing requirements (disparate impact analyses) and transparency obligations, requiring both public disclosure and submission of data to the government.
Evidence:
Ambiguity Notes: The scope of the law depends on the specific definition of 'automated employment decision tools,' which may vary in how broadly it captures different types of machine learning or AI software.
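The "disparate impact analysis" this bill mandates is commonly operationalized as a comparison of selection rates across demographic groups, for example under the EEOC's four-fifths rule of thumb. A minimal sketch of that calculation (the function names, sample data, and 0.8 threshold here are illustrative conventions, not drawn from the bill text):

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Under the four-fifths rule of thumb, a ratio below 0.8 is
    commonly treated as evidence of adverse (disparate) impact.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-tool outcomes: group A selected 40/100, group B 20/100.
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80
rates = selection_rates(outcomes)
ratio = four_fifths_ratio(rates)  # 0.2 / 0.4 = 0.5, below the 0.8 threshold
```

An annual report of this kind would typically publish the per-group rates and the resulting ratio for each tool in use.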
Legislation ID: 70290
Bill URL: View Bill
This bill amends the general business law by introducing Article 45-A, which outlines definitions, required user settings, prohibitions against deceptive design practices, and the attorney general's authority to enforce these regulations. It aims to protect users, particularly minors, from the addictive nature of algorithmically generated content on social media platforms.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-02-06 | REFERRED TO INTERNET AND TECHNOLOGY |
Why Relevant: The bill directly regulates 'algorithmic recommendations,' which are a primary application of artificial intelligence and machine learning in social media environments.
Mechanism of Influence: It mandates that social media operators provide a manual override for AI-driven content feeds, effectively requiring a disclosure of and an opt-out mechanism for algorithmic curation.
Evidence:
Ambiguity Notes: The term 'algorithmic recommendation' is a key definition that determines the scope of AI technologies covered, though the specific technical thresholds for what constitutes such an algorithm are left to the attorney general's rulemaking.
Legislation ID: 71270
Bill URL: View Bill
This bill amends the insurance law to include regulations concerning telematics systems used by insurers. It defines telematics systems, requires insurers to disclose their scoring methodologies, and mandates that data collected from these systems be used only for underwriting and rating decisions. The bill also prohibits discrimination in the use of telematics data and empowers the superintendent to enforce these regulations.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INSURANCE |
| 2025-02-21 | REFERRED TO INSURANCE |
Why Relevant: The bill requires the submission of algorithms and models to a government authority for oversight.
Mechanism of Influence: Third-party developers and vendors are mandated to file their scoring models or algorithms with the superintendent, providing a mechanism for regulatory review of automated decision-making tools.
Evidence:
Ambiguity Notes: While the bill focuses on 'telematics,' these systems typically rely on algorithmic scoring and machine learning to interpret driver behavior data.
Why Relevant: The legislation mandates disclosures and auditing for bias in automated scoring systems.
Mechanism of Influence: Insurers must publicly disclose how their scoring methodologies work and provide reports on testing performed to ensure the algorithms do not result in unfair discrimination against protected classes.
Evidence:
Ambiguity Notes: The bill uses the term 'scoring methodologies' and 'risk models' which are central to the regulation of AI and automated systems in financial services.
Legislation ID: 66064
Bill URL: View Bill
The bill amends the Alcoholic Beverage Control Law and Public Health Law to allow for the use of biometric identity verification devices. These devices will verify the identity and age of individuals attempting to purchase alcoholic beverages and tobacco products, ensuring compliance with age restrictions. The bill outlines the definition of such devices, the conditions under which they can be used, and the information that can be collected and maintained by licensees.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INVESTIGATIONS AND GOVERNMENT OPERATIONS |
| 2025-01-08 | REFERRED TO INVESTIGATIONS AND GOVERNMENT OPERATIONS |
Why Relevant: The legislation concerns biometric identity verification and age verification, which are key areas of AI application and regulation mentioned in the user's instructions.
Mechanism of Influence: The law regulates the use of biometric scans (facial, iris, fingerprints) to verify age, establishing requirements for data collection and system security for these AI-adjacent technologies.
Evidence:
Ambiguity Notes: While the bill does not explicitly use the term 'Artificial Intelligence', the technologies described (facial and iris recognition) are fundamentally powered by AI and machine learning algorithms.
Legislation ID: 71452
Bill URL: View Bill
The bill introduces a new section to the general business law concerning the liability of chatbot proprietors for misleading information provided by their chatbots. It defines key terms related to artificial intelligence and chatbots, outlines the responsibilities of chatbot proprietors, and specifies the conditions under which they can be held liable for harm caused to users. The bill also mandates that chatbot owners implement measures to protect users from self-harm and ensure that minors are not exposed to harmful content without parental consent.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-06-13 | COMMITTED TO RULES |
| 2025-03-17 | ADVANCED TO THIRD READING |
| 2025-03-13 | 2ND REPORT CAL. |
| 2025-03-12 | 1ST REPORT CAL.548 |
| 2025-02-27 | REFERRED TO INTERNET AND TECHNOLOGY |
Why Relevant: The bill provides foundational legal definitions for artificial intelligence and chatbots.
Mechanism of Influence: By defining these terms, the bill sets the jurisdictional boundaries for which technologies are subject to the proposed regulations and oversight.
Evidence:
Ambiguity Notes: The specific technical criteria for what constitutes 'artificial intelligence' versus standard automated software are not detailed in the abstract.
Why Relevant: The bill mandates transparency through user disclosures.
Mechanism of Influence: Proprietors are legally required to inform users that they are interacting with a chatbot, preventing the deceptive passing of AI as a human agent.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill requires age verification and parental consent for AI usage.
Mechanism of Influence: It imposes a gatekeeping requirement where AI proprietors must verify user age and obtain verifiable parental consent before allowing minors to access 'companion chatbots'.
Evidence:
Ambiguity Notes: The term 'companion chatbot' may imply a specific subset of AI, potentially leaving other types of AI chatbots unregulated in this regard.
Why Relevant: The bill mandates ongoing safety audits and vulnerability assessments.
Mechanism of Influence: It creates a proactive compliance burden on AI developers to monitor their systems for safety risks rather than just reacting to incidents.
Evidence:
Ambiguity Notes: The frequency and specific standards for 'continuous' assessment are left to future regulatory definition.
Why Relevant: The bill regulates AI output by establishing liability for misleading or harmful information.
Mechanism of Influence: It prevents AI companies from using 'as-is' disclaimers to avoid responsibility for damages caused by hallucinations or incorrect medical/safety advice provided by the AI.
Evidence:
Ambiguity Notes: None
Legislation ID: 99887
Bill URL: View Bill
This bill amends the general business law to require that any publication, whether printed or electronic, clearly indicate when generative artificial intelligence has been used in the creation of its content. This includes articles, photographs, videos, or any other visual media. The intent is to inform readers about the use of AI in the content they consume, thereby promoting awareness and understanding of AI's role in media.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO CONSUMER PROTECTION |
| 2025-03-21 | REFERRED TO CONSUMER PROTECTION |
Why Relevant: The bill directly addresses the user's interest in legislation requiring disclosures for artificial intelligence usage.
Mechanism of Influence: It imposes a legal requirement on media publishers to label AI-generated content, creating a transparency mechanism for the public.
Evidence:
Ambiguity Notes: The definition of generative AI as systems performing tasks requiring 'human-like cognition or perception' is broad and could lead to varying interpretations regarding which specific automated tools trigger the disclosure requirement.
Legislation ID: 100093
Bill URL: View Bill
This legislation, known as the Stop Deepfakes Act, amends the General Business Law to mandate that generative artificial intelligence providers disclose provenance data for synthetic content they produce or modify. It outlines definitions, requirements for applying provenance data, and penalties for non-compliance. The bill seeks to enhance transparency and accountability in the use of AI-generated content, particularly on social media platforms and by state agencies.
| Date | Action |
|---|---|
| 2026-01-07 | died in assembly |
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2026-01-07 | returned to senate |
| 2025-06-17 | ordered to third reading rules cal.900 |
| 2025-06-17 | substituted for a6540c |
| 2025-06-12 | DELIVERED TO ASSEMBLY |
| 2025-06-12 | PASSED SENATE |
| 2025-06-12 | referred to codes |
Why Relevant: The act specifically targets generative artificial intelligence providers and imposes disclosure requirements.
Mechanism of Influence: Providers must apply provenance data to synthetic content, detailing its creation and identifying it as AI-generated.
Evidence:
Ambiguity Notes: The term 'synthetic content' is defined but its breadth depends on the specific technical implementation of provenance data.
Why Relevant: It regulates the lifecycle of AI-generated content on social media platforms.
Mechanism of Influence: Platforms are prohibited from removing metadata that identifies content as AI-generated, ensuring transparency for users.
Evidence:
Ambiguity Notes: The definition of 'degrading' provenance data may require further technical clarification by the attorney general.
Why Relevant: The legislation establishes a regulatory framework for AI oversight through the Attorney General.
Mechanism of Influence: Grants rulemaking authority to define acceptable methods for applying provenance data and enforcing compliance.
Evidence:
Ambiguity Notes: None
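Provenance data of the kind the act describes is typically implemented as metadata bound to the content by a cryptographic hash (the C2PA Content Credentials standard takes this general approach). A minimal illustrative sketch, assuming a simple JSON-style manifest rather than the bill's (unspecified) prescribed format:

```python
import hashlib

def attach_provenance(content: bytes, generator: str, modified: bool) -> dict:
    """Build a provenance manifest binding metadata to the content's hash.

    Any change to the content bytes invalidates the recorded digest,
    which is what makes stripping or degrading the provenance data
    detectable downstream.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": generator,
        "modified": modified,
    }

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the manifest still matches the content it was issued for."""
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"\x89PNG synthetic pixel data"
manifest = attach_provenance(image, generator="example-model", modified=False)
ok = verify_provenance(image, manifest)            # True for untouched content
tampered = verify_provenance(image + b"x", manifest)  # False after any edit
```

The platform-side obligation (not removing metadata) corresponds to carrying the manifest alongside the content unchanged; verification then works for any downstream consumer.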
Legislation ID: 100095
Bill URL: View Bill
The Artificial Intelligence Training Data Transparency Act mandates developers of generative AI models to provide comprehensive documentation about the datasets used for training these models. This includes details on the sources of data, types of data points, and any modifications made to the datasets. Additionally, it requires disclosure to employees whose data is utilized in training AI models, while providing exemptions for specific national security-related applications.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-03-27 | REFERRED TO INTERNET AND TECHNOLOGY |
Why Relevant: The act directly addresses the regulation of artificial intelligence by focusing on transparency and disclosure requirements for training data.
Mechanism of Influence: It forces developers to publish dataset details on their websites and requires employers to inform employees about the use of their data in AI training processes.
Evidence:
Ambiguity Notes: The term 'national security-related applications' is not strictly defined in the abstract, potentially allowing for broad exemptions from the transparency requirements.
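The documentation the act requires resembles the "datasheets for datasets" pattern: a structured, published record of sources, data types, and modifications. A hypothetical sketch of what such a disclosure record might look like (all field names are illustrative, not taken from the act's text):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetDisclosure:
    """Illustrative training-data disclosure record for a generative model."""
    name: str
    sources: list                      # where the data came from
    data_types: list                   # e.g. text, images, audio
    contains_employee_data: bool       # triggers employee notification
    modifications: list = field(default_factory=list)  # cleaning, redaction, etc.

disclosure = DatasetDisclosure(
    name="example-corpus-v1",
    sources=["licensed news archive", "public-domain books"],
    data_types=["text"],
    contains_employee_data=False,
    modifications=["deduplication", "PII redaction"],
)

# The act would have the developer publish a record like this on its website.
published = json.dumps(asdict(disclosure), indent=2)
```

If `contains_employee_data` were true, the separate employee-notification duty would attach; the national-security exemption would presumably suppress publication entirely for covered applications.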
Legislation ID: 113289
Bill URL: View Bill
This legislation amends the General Business Law to establish clear definitions and responsibilities for chatbot proprietors. It prohibits chatbots from providing certain legal or professional advice unless they are compliant with relevant licensing laws. Additionally, it allows individuals to seek damages if they are harmed by chatbot interactions and mandates clear notifications that users are engaging with a chatbot.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-06-13 | COMMITTED TO RULES |
| 2025-05-07 | ADVANCED TO THIRD READING |
| 2025-05-06 | 2ND REPORT CAL. |
| 2025-05-05 | 1ST REPORT CAL.931 |
| 2025-04-07 | REFERRED TO INTERNET AND TECHNOLOGY |
Why Relevant: The bill explicitly defines 'artificial intelligence system' and 'chatbot' to establish the scope of regulation.
Mechanism of Influence: Sets the legal foundation for which technologies are subject to the requirements and prohibitions outlined in the law.
Evidence:
Ambiguity Notes: None
Why Relevant: It regulates the output and capabilities of AI systems by prohibiting specific types of professional advice.
Mechanism of Influence: Prevents AI from performing actions that require professional licensing, such as providing legal or medical advice, thereby restricting its functional application.
Evidence:
Ambiguity Notes: The term 'substantive responses' may require further clarification to distinguish between general information and regulated professional advice.
Why Relevant: The legislation mandates transparency through user disclosures.
Mechanism of Influence: Requires proprietors to provide clear and conspicuous notice to users that they are interacting with a chatbot rather than a human.
Evidence:
Ambiguity Notes: None
Why Relevant: It establishes a framework for oversight and accountability regarding AI-induced harm.
Mechanism of Influence: Creates a private right of action allowing individuals to sue for damages, which serves as a regulatory mechanism to ensure proprietors maintain safe AI interactions.
Evidence:
Ambiguity Notes: None
Legislation ID: 138531
Bill URL: View Bill
This bill introduces the Artificial Intelligence Literacy Act, which aims to improve artificial intelligence literacy among students and communities in New York. It recognizes the growing importance of AI technology and the need for educational initiatives that address both the benefits and risks associated with AI. The bill establishes a competitive grant program to fund educational efforts in public schools, community colleges, higher education institutions, and community organizations, particularly focusing on underserved populations.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO EDUCATION |
| 2025-04-30 | REFERRED TO EDUCATION |
Why Relevant: The bill directly addresses Artificial Intelligence by establishing a state-level framework for AI literacy and education.
Mechanism of Influence: While the bill focuses on education rather than technical restrictions like weight submissions or age verification, it creates a legal definition for 'AI system' and 'artificial intelligence literacy' within New York law and mandates reporting on AI-related educational initiatives.
Evidence:
Ambiguity Notes: The bill's focus is promotional and educational rather than regulatory; it imposes no restrictions on AI developers, addressing only AI literacy.
Legislation ID: 144758
Bill URL: View Bill
This legislation, referred to as the election content accountability act, mandates that political campaigns for specific offices include provenance data in their communications. This data must disclose the origin, modifications, and any use of generative artificial intelligence in creating or altering audio, images, or videos. The law introduces penalties for non-compliance and provides the attorney general with the authority to enforce the regulations.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO ELECTIONS |
| 2025-05-15 | REFERRED TO ELECTIONS |
Why Relevant: The legislation directly addresses the regulation of generative artificial intelligence by requiring specific disclosures (provenance data) when AI is used to create or alter political media.
Mechanism of Influence: It forces campaigns to label AI-generated content and provides a legal framework for penalties and oversight by the Attorney General, thereby regulating the output and transparency of AI systems in a political context.
Evidence:
Ambiguity Notes: The definition of 'synthetic content' and 'generative artificial intelligence system' will be crucial for determining the scope of what needs to be disclosed, though the text implies a broad application to media.
Legislation ID: 159844
Bill URL: View Bill
The New York Artificial Intelligence Transparency for Journalism Act establishes requirements for developers of generative AI systems to disclose information about the sources of data used for training their systems. This includes providing details about the content accessed from journalism providers and ensuring that such providers are recognized and compensated for their work. The bill reflects the need to sustain quality journalism and protect it from unfair practices in the evolving landscape of AI technology.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-06-09 | AMEND AND RECOMMIT TO RULES |
| 2025-06-09 | PRINT NUMBER 8331A |
| 2025-06-03 | REFERRED TO RULES |
Why Relevant: This legislation imposes direct AI regulation through disclosure requirements on generative AI developers.
Mechanism of Influence: It forces AI developers to provide a public accounting of the journalism data they ingest, creating a legal pathway for content owners to verify usage and seek enforcement.
Evidence:
Ambiguity Notes: The effectiveness depends on the specific definitions of AI and journalism providers provided in the act.
Legislation ID: 200158
Bill URL: View Bill
The New York FAIR News Act seeks to address the implications of artificial intelligence in news media by mandating disclosures to workers and consumers, ensuring human oversight of AI-generated content, and providing workplace protections for media professionals. It aims to safeguard the journalistic workforce from the potential negative impacts of AI technology on their roles and the quality of news reporting.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-07-07 | REFERRED TO RULES |
Why Relevant: The act mandates transparency for AI-generated content presented to the public.
Mechanism of Influence: It requires news media to provide conspicuous disclosures when content is significantly created by generative AI.
Evidence:
Ambiguity Notes: The term 'significantly created' is subjective and may require further regulatory clarification to determine the exact threshold of AI involvement that triggers disclosure.
Why Relevant: It regulates the internal use of AI tools within news organizations.
Mechanism of Influence: Employers are required to disclose the use and application of generative AI tools to their workforce.
Evidence:
Ambiguity Notes: None
Why Relevant: The legislation imposes a human-in-the-loop requirement for AI systems.
Mechanism of Influence: It prohibits the publication of AI-generated content without prior human review and approval.
Evidence:
Ambiguity Notes: None
Why Relevant: It addresses the use of intellectual property for AI training purposes.
Mechanism of Influence: The act prohibits training AI systems on journalists' work without obtaining their explicit consent.
Evidence:
Ambiguity Notes: None
Legislation ID: 281613
Bill URL: View Bill
This bill outlines the rights of consumers in relation to their personal data, including the ability to exercise these rights through authorized representatives. It also grants the Attorney General the authority to create rules and regulations to ensure compliance with the provisions of this article, including the collection of data from various stakeholders to inform these regulations. Additionally, the bill includes a severability clause to maintain the validity of the remaining provisions if any part is found invalid.
| Date | Action |
|---|---|
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2025-10-08 | REFERRED TO RULES |
Why Relevant: The bill directs the Attorney General to define disclosure requirements for businesses.
Mechanism of Influence: This authority could be used to require businesses to disclose when AI is used to process consumer data or to provide transparency into automated decision-making processes.
Evidence:
Ambiguity Notes: The bill does not explicitly mention 'Artificial Intelligence', but 'disclosures' is a common regulatory tool used for AI transparency.
Why Relevant: The Attorney General is authorized to collect data from businesses to inform regulation.
Mechanism of Influence: This could serve as a mechanism for the government to request information about data sets used to train AI or the outcomes of AI processing to inform future oversight.
Evidence:
Ambiguity Notes: The scope of 'data and information' is broad and could include technical details about AI systems if they pertain to consumer data rights.
Legislation ID: 281674
Bill URL: View Bill
This bill, known as the "automation displacement protection act," seeks to address the impact of artificial intelligence and automation on employment in New York. It mandates that covered employers notify employees and relevant authorities about impending job losses due to automation, ensures a transition period for affected workers, and establishes penalties for non-compliance. The legislation aims to safeguard workers' rights and promote fair labor practices in the face of technological advancements.
| Date | Action |
|---|---|
| 2026-01-16 | AMEND AND RECOMMIT TO LABOR |
| 2026-01-16 | PRINT NUMBER 8589B |
| 2026-01-07 | REFERRED TO LABOR |
| 2025-12-19 | AMEND (T) AND RECOMMIT TO RULES |
| 2025-12-19 | PRINT NUMBER 8589A |
| 2025-11-21 | REFERRED TO RULES |
Why Relevant: The bill establishes mandatory disclosure requirements for the implementation of AI systems in the workplace.
Mechanism of Influence: Employers must provide detailed written notice to employees and government officials about the specific automation technology and vendors being used 90 days before displacement occurs.
Evidence:
Ambiguity Notes: The definition of 'artificial intelligence system' is broad ('any system performing tasks that require human intelligence'), which could encompass a wide range of software beyond generative AI.
Why Relevant: The legislation creates a regulatory framework for AI by imposing penalties and government oversight on its deployment when it affects employment.
Mechanism of Influence: It empowers the attorney general and commissioner to enforce compliance through fines and makes violators ineligible for state grants or tax incentives.
Evidence:
Ambiguity Notes: None
Legislation ID: 281780
Bill URL: View Bill
This legislation mandates that businesses with more than 100 employees or that are publicly traded submit annual reports detailing how artificial intelligence affects their hiring processes, including data on employee displacement, hiring, and the specifics of AI usage. The Department of Labor will develop reporting guidelines and publish an annual report based on the submitted data, ensuring transparency and accountability in the use of AI in the workplace.
| Date | Action |
|---|---|
| 2026-01-15 | AMEND (T) AND RECOMMIT TO LABOR |
| 2026-01-15 | PRINT NUMBER 8706A |
| 2026-01-07 | REFERRED TO LABOR |
Why Relevant: The legislation directly regulates the disclosure of AI usage in the workplace, specifically focusing on hiring and employment impacts.
Mechanism of Influence: It mandates annual reporting by covered businesses, creating a transparency mechanism where the government oversees how AI affects the labor market.
Evidence:
Ambiguity Notes: The term 'nature of AI usage' is broad and may require further clarification in the Department's guidelines to determine the depth of technical disclosure required.
This legislation seeks to amend the General Business Law to establish the Responsible AI Safety and Education (RAISE) Act, which mandates transparency and safety protocols for large frontier developers of AI models. It emphasizes the need for standardized disclosures about the risks and management of AI technologies to protect the public and ensure responsible innovation.
| Date | Action |
|---|---|
| 2026-01-28 | DELIVERED TO ASSEMBLY |
| 2026-01-28 | PASSED SENATE |
| 2026-01-28 | referred to ways and means |
| 2026-01-20 | ORDERED TO THIRD READING CAL.94 |
| 2026-01-08 | REFERRED TO RULES |
Why Relevant: The act mandates extensive disclosure requirements for AI developers regarding their operations and ownership.
Mechanism of Influence: Developers are required to file disclosure statements with a designated office and publish transparency reports before deploying new or modified AI models.
Evidence:
Ambiguity Notes: The definition of 'large frontier developer' is central to the scope but the specific technical thresholds are not detailed in the summary.
Why Relevant: The legislation focuses on regulating AI safety and mitigating catastrophic risks.
Mechanism of Influence: It requires the creation and publication of a 'frontier AI framework' detailing practices for assessing and mitigating risks.
Evidence:
Ambiguity Notes: The term 'catastrophic risk' is defined in the act but the specific criteria for what constitutes such a risk may be subject to regulatory interpretation.
Why Relevant: The act requires reporting of safety incidents and provides for government oversight.
Mechanism of Influence: Developers must report critical safety incidents to the office within 72 hours and provide annual reports on safety incidents and risk assessments.
Evidence:
Ambiguity Notes: None
Why Relevant: The legislation includes provisions for third-party involvement in AI evaluation, which aligns with audit requirements.
Mechanism of Influence: Transparency reports must disclose the involvement of third-party evaluators in the assessment of AI models.
Evidence:
Ambiguity Notes: None
This legislation amends existing laws to repeal certain provisions related to automated decision-making by government agencies and establishes new requirements for the disclosure of automated employment decision-making tools. It outlines the responsibilities of covered entities in disclosing the use of such tools and ensures that the use of artificial intelligence does not infringe upon existing employee rights and collective bargaining agreements.
| Date | Action |
|---|---|
| 2026-01-21 | DELIVERED TO ASSEMBLY |
| 2026-01-21 | ordered to third reading rules cal.51 |
| 2026-01-21 | passed assembly |
| 2026-01-21 | PASSED SENATE |
| 2026-01-21 | referred to science and technology |
| 2026-01-21 | returned to senate |
| 2026-01-21 | substituted for a9487 |
| 2026-01-12 | ORDERED TO THIRD READING CAL.46 |
Why Relevant: The bill specifically mandates disclosures for automated employment decision-making tools, a core form of AI transparency regulation.
Mechanism of Influence: Covered entities must publish descriptions and purposes of AI tools on their websites annually, providing public transparency into government AI usage.
Evidence:
Ambiguity Notes: The term 'automated employment decision-making tools' is used, which typically encompasses AI but may require specific technical definitions to determine the full scope of software covered.
Why Relevant: The legislation regulates the impact of AI on the workforce by prohibiting certain outcomes like displacement.
Mechanism of Influence: It prohibits the discharge or loss of position due to AI use and ensures AI does not infringe upon collective bargaining agreements.
Evidence:
Ambiguity Notes: The bill focuses on the labor outcomes of AI rather than the technical specifications or weights of the models themselves.
Why Relevant: The bill defines the scope of government oversight regarding automated systems.
Mechanism of Influence: It identifies specific public institutions (counties, cities, school districts, and universities) that must comply with AI transparency standards.
Evidence:
Ambiguity Notes: The repeal of 2025 provisions suggests a shift in how the government intends to oversee automated decision-making, though the specific nature of the repealed laws is not detailed.
Legislation ID: 281831
Bill URL: View Bill
This bill amends the New York labor law to require employers to indicate if layoffs are due to the use of artificial intelligence or automation. It mandates that employers provide specific information regarding the impact of these technologies on job losses, aiming to inform public policy and workforce retraining efforts. Additionally, it establishes a pilot program to monitor compliance and analyze the effects of these reporting requirements.
| Date | Action |
|---|---|
| 2026-01-16 | REFERRED TO LABOR |
Why Relevant: The bill imposes mandatory disclosure requirements on the use of artificial intelligence within corporate operations.
Mechanism of Influence: Employers are legally required to identify and describe the specific AI technologies or automated processes that lead to workforce reductions when filing WARN Act notices.
Evidence:
Ambiguity Notes: The bill uses the terms 'artificial intelligence' and 'automation' which may require further regulatory definition to ensure consistent reporting across different industries.
Why Relevant: The legislation establishes a government oversight and monitoring framework for AI's societal impact.
Mechanism of Influence: It mandates the creation of a state-managed database and requires the commissioner of labor to publish analytical summaries of AI-related job losses.
Evidence:
Ambiguity Notes: The effectiveness of the oversight depends on the specific data points collected during the pilot program and the level of detail provided by employers.
Legislation ID: 283543
Bill URL: View Bill
This act amends various laws related to vehicle and traffic regulations, insurance, environmental conservation, and economic development initiatives in New York. It includes provisions for increasing motor vehicle transaction fees, establishing motorcycle safety course requirements, implementing intelligent speed assistance devices, and enhancing penalties for crimes against highway workers. The bill also addresses energy policies, insurance regulations, and agricultural marketing, among other areas.
| Date | Action |
|---|---|
| 2026-01-21 | REFERRED TO FINANCE |
Why Relevant: The legislation addresses 'intelligent' speed assistance devices, which fall under the category of automated or intelligent vehicle technologies.
Mechanism of Influence: It creates a legal framework for cities to pilot and regulate devices that automatically assist with vehicle speed management, involving automated decision-making.
Evidence:
Ambiguity Notes: The text uses the term 'intelligent' but does not explicitly use the term 'Artificial Intelligence' or mandate AI-specific disclosures such as model weights or audits.
Legislation ID: 285776
Bill URL: View Bill
This legislation amends New York's executive law to define artificial intelligence and generative artificial intelligence, and designates certain uses of these technologies in employment as unlawful discriminatory practices. It mandates that employers cannot use AI for recruitment, hiring, or other employment-related decisions in a way that discriminates against individuals based on protected characteristics. Additionally, it requires employers to notify employees when AI is used in these contexts.
| Date | Action |
|---|---|
| 2026-01-23 | REFERRED TO INVESTIGATIONS AND GOVERNMENT OPERATIONS |
Why Relevant: The legislation provides formal legal definitions for artificial intelligence and generative artificial intelligence.
Mechanism of Influence: By defining these terms, the law establishes the specific scope of technologies subject to regulation and oversight within the state's executive law.
Evidence:
Ambiguity Notes: The breadth of the definition of 'capabilities and types of outputs' will determine how many software tools fall under this regulatory umbrella.
Why Relevant: The law requires disclosures regarding the use of AI in professional settings.
Mechanism of Influence: It creates a mandatory notification requirement: employers must inform employees and recruits when AI is being used to evaluate them.
Evidence:
Ambiguity Notes: The specific timing and means of notice are left to be determined by future rulemaking, which may affect the transparency's effectiveness.
Why Relevant: The legislation regulates the application of AI to prevent discriminatory outcomes and establishes enforcement mechanisms.
Mechanism of Influence: It prohibits specific uses of AI that lead to discrimination and empowers a division to create regulations and enforcement protocols for these AI-related practices.
Evidence:
Ambiguity Notes: None
Legislation ID: 66546
Bill URL: View Bill
This bill amends the state technology law to define artificial intelligence and automated decision-making systems, and to create the position of Chief Artificial Intelligence Officer. This officer will be responsible for developing policies, guidelines, and risk management plans for the use of AI in state operations, while also coordinating efforts across various state departments and ensuring public safety and rights are protected.
| Date | Action |
|---|---|
| 2026-01-07 | died in assembly |
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2026-01-07 | returned to senate |
| 2025-05-22 | DELIVERED TO ASSEMBLY |
| 2025-05-22 | PASSED SENATE |
| 2025-05-22 | referred to governmental operations |
| 2025-03-10 | ADVANCED TO THIRD READING |
| 2025-03-05 | 2ND REPORT CAL. |
Why Relevant: The bill establishes formal definitions for Artificial Intelligence and Automated Decision-Making Systems.
Mechanism of Influence: By defining these terms, the bill sets the legal scope for which technologies are subject to state oversight, regulation, and the powers of the Chief AI Officer.
Evidence:
Ambiguity Notes: The exclusion of 'basic software processes' that do not impact human rights or welfare may create a grey area regarding which automated tools fall under the regulatory umbrella.
Why Relevant: The legislation mandates the auditing of AI usage, which is a core component of AI regulation and oversight.
Mechanism of Influence: The Chief AI Officer is granted the authority to conduct audits to ensure that state agencies are complying with established laws and safety protocols when using AI.
Evidence:
Ambiguity Notes: The text does not specify the frequency of these audits or the specific technical standards against which the AI will be measured.
Why Relevant: The bill creates a centralized oversight body and a Chief AI Officer to manage AI risks.
Mechanism of Influence: The CAIO is responsible for developing risk management plans and policies, effectively creating a regulatory environment for AI deployment within the state government.
Evidence:
Ambiguity Notes: While the focus is on state operations, the policies developed by the CAIO could influence procurement requirements for private AI vendors.
Legislation ID: 66547
Bill URL: View Bill
This bill amends the general business law in New York by introducing a requirement for generative artificial intelligence systems to include conspicuous warnings on their user interfaces. These warnings must inform users that the outputs generated by these systems may not always be accurate or appropriate. Failure to comply with this requirement can result in civil penalties for the owners or operators of such systems.
| Date | Action |
|---|---|
| 2026-01-07 | died in assembly |
| 2026-01-07 | REFERRED TO INTERNET AND TECHNOLOGY |
| 2026-01-07 | returned to senate |
| 2025-06-12 | referred to codes |
| 2025-06-12 | REPASSED SENATE |
| 2025-06-12 | RETURNED TO ASSEMBLY |
| 2025-06-09 | AMENDED ON THIRD READING (T) 934A |
| 2025-06-09 | RECALLED FROM ASSEMBLY |
Why Relevant: The bill directly regulates generative AI systems by mandating specific disclosures to users regarding the reliability and nature of the content produced.
Mechanism of Influence: It imposes a legal requirement for AI operators to include warnings on user interfaces, backed by civil penalties, thereby enforcing transparency in AI-human interactions.
Evidence:
Ambiguity Notes: The definition of 'conspicuous' and 'inappropriate' may be subject to interpretation, potentially leading to varying standards of implementation among different AI providers.
Legislation ID: 66574
Bill URL: View Bill
This legislation outlines the requirements for smart access systems in multiple dwellings, including data collection limitations, prohibitions on certain types of data, and security measures for protecting tenant information. It also addresses the responsibilities of owners and managing agents regarding tenant consent and data retention, as well as penalties for violations of the provisions set forth in the bill.
| Date | Action |
|---|---|
| 2026-01-07 | died in assembly |
| 2026-01-07 | REFERRED TO HOUSING, CONSTRUCTION AND COMMUNITY DEVELOPMENT |
| 2026-01-07 | returned to senate |
| 2025-05-14 | DELIVERED TO ASSEMBLY |
| 2025-05-14 | PASSED SENATE |
| 2025-05-14 | referred to housing |
| 2025-05-07 | ADVANCED TO THIRD READING |
| 2025-05-06 | 2ND REPORT CAL. |
Why Relevant: The bill places specific restrictions on the collection and retention of biometric data within smart access systems.
Mechanism of Influence: Biometric data processing, such as facial recognition or fingerprint scanning, is a primary application of artificial intelligence in security and access control. By limiting biometric data collection, the bill regulates the deployment of AI-driven identification technologies in residential settings.
Evidence:
Ambiguity Notes: The text does not explicitly use the term 'artificial intelligence,' but 'smart access systems' and 'biometric data' collection typically involve AI-based pattern recognition and automated processing.
Why Relevant: The legislation mandates oversight and security requirements for the software powering 'smart' infrastructure.
Mechanism of Influence: The requirement for companies to notify customers of security breaches and provide software updates to fix vulnerabilities within 30 days imposes regulatory oversight on the automated software systems used for building access.
Evidence:
Ambiguity Notes: The scope of 'smart access software' is broad and could range from simple digital credentials to complex AI-integrated surveillance and entry systems.
H.B. 1004 appropriates funds to create AI Hubs and Technology Hubs within the University of North Carolina system. The bill outlines financial allocations for establishing these hubs, which will focus on technology innovation, workforce development, and research in artificial intelligence. Additionally, it mandates the selection of institutions, funding conditions, and reporting requirements to ensure accountability and effectiveness in achieving the bill's goals.
| Date | Action |
|---|---|
| 2025-04-14 | Passed 1st Reading |
| 2025-04-14 | Ref to the Com on Appropriations, if favorable, Rules, Calendar, and Operations of the House |
| 2025-04-10 | Filed |
Why Relevant: The legislation specifically allocates funding for research into AI ethics and governance.
Mechanism of Influence: By establishing a grant program for AI ethics and governance, the state creates a framework for academic and policy oversight regarding how AI technologies are developed and deployed.
Evidence:
Ambiguity Notes: The term 'governance' is broad and could refer to either internal institutional policies or the development of broader regulatory recommendations for the state.
Why Relevant: The mandate for AI Hubs includes a focus on citizen rights.
Mechanism of Influence: Requiring AI Hubs to focus on citizen rights suggests a regulatory or oversight interest in protecting the public from potential AI-related harms.
Evidence:
Ambiguity Notes: The bill does not define specific 'citizen rights' or how the hubs will enforce or protect them, leaving the practical application to the selected institutions.
This bill establishes the Social Media Control in Information Technology Act, which mandates that social media platforms provide clear disclosures regarding data collection and usage, particularly for minors. It requires platforms to implement user-friendly mechanisms for privacy rights, prohibits the use of minors' data in algorithmic recommendations, and sets default privacy settings to protect young users. Additionally, it holds operators accountable for non-compliance and creates a registry for privacy policies.
| Date | Action |
|---|---|
| 2025-06-17 | Reptd Fav Com Substitute |
| 2025-06-17 | Re-ref Com On Appropriations |
| 2025-04-10 | Passed 1st Reading |
| 2025-04-10 | Ref to the Com on Commerce and Economic Development, if favorable, Appropriations, if favorable, Rules, Calendar, and Operations of the House |
| 2025-04-09 | Filed |
Why Relevant: The bill directly regulates algorithmic recommendation systems, which are a primary application of artificial intelligence in social media.
Mechanism of Influence: It prohibits the use of minors' data within these AI-driven systems and mandates transparency regarding how these algorithms affect user well-being.
Evidence:
Ambiguity Notes: While the bill uses the term 'algorithmic recommendation system,' this is functionally synonymous with the AI models used to rank and suggest content to users.
Why Relevant: The legislation requires specific disclosures and transparency regarding data usage and platform features.
Mechanism of Influence: Platforms must provide clear disclosures about data collection and usage and maintain a registry of privacy policies for government oversight.
Evidence:
Ambiguity Notes: The level of technical detail required in these disclosures (e.g., model architecture vs. data categories) is not fully specified in the abstract.
Why Relevant: The bill focuses on age-specific regulations and usage controls.
Mechanism of Influence: It mandates default settings for minors to prevent data exposure and manipulation, effectively requiring platforms to distinguish between adult and minor users.
Evidence:
Ambiguity Notes: The bill implies a need for age verification to enforce these protections, though the specific technical requirements for verification are not detailed.
Why Relevant: The bill establishes oversight and enforcement mechanisms for data and algorithmic practices.
Mechanism of Influence: It creates a Data Privacy Task Force and empowers the Attorney General to monitor compliance and investigate platform operations.
Evidence:
Ambiguity Notes: The oversight is focused on privacy and well-being rather than a technical audit of AI weights or model performance.
House Bill 970 introduces measures to combat algorithmic rent fixing by prohibiting real estate lessors from using nonpublic competitor data to set rental prices. It establishes definitions related to pricing algorithms and unlawful coordination among lessors, and empowers the Attorney General to enforce these provisions as unfair trade practices.
| Date | Action |
|---|---|
| 2025-04-14 | Passed 1st Reading |
| 2025-04-14 | Ref To Com On Rules, Calendar, and Operations of the House |
| 2025-04-10 | Filed |
Why Relevant: The bill specifically regulates 'pricing algorithms,' which are a core component of automated and AI-driven commercial decision-making systems.
Mechanism of Influence: It prohibits the use of nonpublic competitor data within these algorithms to prevent anti-competitive price coordination, effectively placing a constraint on the data inputs and functional outputs of automated pricing systems.
Evidence:
Ambiguity Notes: While the bill uses the term 'pricing algorithm' rather than 'artificial intelligence,' modern dynamic pricing software often utilizes machine learning and AI, making this a form of algorithmic oversight.
Legislation ID: 232842
Bill URL: View Bill
House Bill No. 628 establishes a regulatory framework for independent verification organizations in Ohio, specifically targeting the verification of artificial intelligence applications and models. The bill defines key terms, outlines the licensing process, and sets forth the responsibilities and requirements for both the verification organizations and the developers or deployers of AI technologies. It also includes provisions for the establishment of an advisory council to oversee the implementation and effectiveness of the verification process.
| Date | Action |
|---|---|
| 2025-12-11 | Introduced |
Why Relevant: The bill directly addresses the regulation and auditing of artificial intelligence through a formal verification process.
Mechanism of Influence: It establishes a licensing regime for third-party organizations to audit AI models and applications for risk mitigation and compliance.
Evidence:
Ambiguity Notes: The term 'independent verification' functions as a regulatory audit, though the specific technical standards for 'acceptable risk mitigation' are left to be defined by the Attorney General.
Why Relevant: The legislation requires detailed disclosures regarding AI capabilities and potential harms.
Mechanism of Influence: IVOs are required to submit annual reports to the state government detailing AI capabilities, societal risks, and verification results.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill provides for government oversight and the creation of safety standards.
Mechanism of Influence: It creates an Artificial Intelligence Safety Advisory Council and empowers the Attorney General to adopt rules regarding AI risk mitigation and conflict of interest.
Evidence:
Ambiguity Notes: None
Legislation ID: 266871
Bill URL: View Bill
This bill establishes regulations for deployers of artificial intelligence (AI) chatbots, specifically those with human-like features. It mandates that such chatbots not be made available to minors, requires age verification systems, and allows for alternative versions of chatbots for younger users. Additionally, it outlines the responsibilities of deployers to prioritize user safety and well-being, as well as the penalties for non-compliance.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Representative Maynard |
| 2026-02-02 | First Reading |
Why Relevant: The bill directly addresses age verification for AI usage.
Mechanism of Influence: It mandates that deployers implement age certification systems to prevent minors from accessing chatbots with human-like features.
Evidence:
Ambiguity Notes: None
Why Relevant: The legislation includes disclosure requirements for specific AI applications.
Mechanism of Influence: It requires therapy chatbots to provide clear disclaimers to users regarding their nature as artificial intelligence rather than human professionals.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill imposes regulatory oversight and safety standards on AI deployers.
Mechanism of Influence: It requires the implementation of emergency response systems and professional monitoring for specialized AI (therapy chatbots).
Evidence:
Ambiguity Notes: None
Legislation ID: 268908
Bill URL: View Bill
This bill establishes definitions related to artificial intelligence, outlines prohibited and allowed uses of AI by state agencies, and mandates compliance reporting to the Office of Management and Enterprise Services (OMES). It seeks to protect individual rights while allowing beneficial uses of AI, with specific restrictions on certain applications.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Representative Maynard |
| 2026-02-02 | First Reading |
Why Relevant: The bill directly regulates the deployment and operational standards of AI systems within state government entities.
Mechanism of Influence: It mandates that state agencies review all existing AI systems for compliance, remove prohibited systems within nine months, and ensure all new deployments meet specific ethical and transparency standards.
Evidence:
Ambiguity Notes: The term 'cognitive behavioral manipulation' is broad and may require specific regulatory guidance to determine which types of user interface designs or algorithms fall under this prohibition.
Why Relevant: The legislation requires specific disclosures and human oversight, addressing AI transparency and auditing concerns.
Mechanism of Influence: It forces agencies to disclose when material is produced by generative AI and requires human intervention for any AI-driven decisions that are irreversible.
Evidence:
Ambiguity Notes: The requirement for 'user awareness' does not specify the format or prominence of the notification required when a citizen interacts with an AI.
Why Relevant: The bill establishes an oversight and reporting mechanism to track AI usage and compliance across the state government.
Mechanism of Influence: The Office of Management and Enterprise Services (OMES) is tasked with creating annual public reports detailing the AI systems in use and their compliance status, serving as a form of government audit.
Evidence:
Ambiguity Notes: While it mandates reporting on 'compliance status,' it is unclear what specific metrics or auditing standards OMES will use to verify an agency's self-reported compliance.
Legislation ID: 269130
Bill URL: View Bill
House Bill 3546 establishes a legal framework in Oklahoma that denies personhood status to artificial intelligence systems, environmental elements, nonhuman animals, and inanimate objects. The bill clarifies that it does not affect the personhood status of any legal entities that are already recognized under Oklahoma law as of November 1, 2026.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Representative Maynard |
| 2026-02-02 | First Reading |
Why Relevant: The bill directly addresses the legal classification of artificial intelligence systems within the state's jurisdiction.
Mechanism of Influence: By prohibiting the granting of personhood, the law ensures that AI systems cannot exercise legal rights, own property, or be held liable in the same manner as a natural person or a corporation, thereby setting a foundational regulatory boundary for AI governance.
Evidence:
Ambiguity Notes: The bill does not provide a specific technical definition for 'artificial intelligence systems', which may lead to broad interpretation regarding which software or automated processes fall under this prohibition.
Legislation ID: 268290
Bill URL: View Bill
This bill establishes regulations on the use of automated systems in making adverse determinations related to health care services. It requires that any adverse determination made by such systems must be reviewed by a qualified human professional prior to finalization. Additionally, the bill grants auditing authority to the Insurance Commissioner and mandates that notice of adverse determinations includes specific information related to the decision-making process and appeals.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Representative Provenzano |
| 2026-02-02 | First Reading |
Why Relevant: The bill directly regulates the application of artificial intelligence and automated decision systems in the health care sector.
Mechanism of Influence: It imposes a 'human-in-the-loop' requirement, preventing AI from making final adverse determinations without human verification.
Evidence:
Ambiguity Notes: The term 'qualified human professional' may require further regulatory clarification to determine the specific level of expertise required for different types of medical reviews.
Why Relevant: The legislation establishes a mechanism for government oversight and auditing of AI systems.
Mechanism of Influence: It empowers the Insurance Commissioner to conduct audits on how utilization review agents employ automated systems.
Evidence:
Ambiguity Notes: The bill does not specify the frequency or the technical standards of the audits to be performed.
Why Relevant: The bill requires disclosures related to the logic and criteria used by automated systems in decision-making.
Mechanism of Influence: By requiring the disclosure of 'screening criteria' and 'clinical basis' in notices, it forces transparency regarding the underlying logic of the automated system.
Evidence:
Ambiguity Notes: While it requires disclosure of criteria, it does not explicitly mandate a statement that an AI was the primary source of the initial determination in the notice itself.
Legislation ID: 266392
Bill URL: View Bill
This bill, known as the Protecting Consumers and Jobs from Predatory Pricing Act, establishes regulations for food retail establishments regarding algorithmic pricing. It mandates disclosures to consumers when personalized pricing is used, prohibits the use of electronic shelving labels for such pricing, and restricts data collection practices, particularly concerning minors and protected class data. The bill also outlines enforcement mechanisms and civil penalties for violations.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Representative Munson |
| 2026-02-02 | First Reading |
Why Relevant: The bill specifically targets algorithmic pricing and surveillance pricing, which are applications of artificial intelligence and automated decision-making systems.
Mechanism of Influence: It imposes mandatory disclosure requirements for algorithmic pricing, restricts the data inputs (such as minor data) used by these algorithms, and prohibits specific AI-driven pricing practices in food retail.
Evidence:
Ambiguity Notes: While the bill defines 'algorithm', its regulatory scope is limited to food retail establishments rather than general-purpose AI applications.
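The disclosure mandate for personalized pricing can be sketched as follows. This is a minimal illustration under assumed names; the disclosure wording and the comparison against a posted base price are assumptions, not statutory text:

```python
def personalized_price_offer(base_price: float, offered_price: float) -> dict:
    """Build a price offer; if the price is personalized (differs from
    the posted base price), attach the required consumer disclosure."""
    offer = {"price": offered_price}
    if offered_price != base_price:
        offer["disclosure"] = (
            "This price was set by an algorithm using personal data."
        )
    return offer

print(personalized_price_offer(5.00, 4.50))  # personalized: carries a disclosure
print(personalized_price_offer(5.00, 5.00))  # uniform price: no disclosure needed
```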
Legislation ID: 269132
Bill URL: View Bill
House Bill 4083 introduces regulations for AI chatbots in Oklahoma, focusing on preventing minors from accessing chatbots with human-like features. It mandates deployers to implement age verification systems, restricts access to social AI companions for minors, and outlines conditions under which therapeutic chatbots can be used by minors. The bill also establishes legal consequences for violations and emphasizes the need for safety measures in emergency situations.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Representative Alonso-Sandoval |
| 2026-02-02 | First Reading |
Why Relevant: The bill explicitly mandates age verification for specific types of AI interactions.
Mechanism of Influence: It requires deployers to implement systems that prevent minors from accessing chatbots with human-like features or social AI companions unless age is verified.
Evidence:
Ambiguity Notes: The term 'human-like feature' is defined in the bill but its practical application to UI/UX design may vary.
Why Relevant: The bill requires transparency and disclosures regarding the nature of the AI.
Mechanism of Influence: Therapeutic chatbots must clearly state they are AI and not licensed professionals to avoid misleading users.
Evidence:
Ambiguity Notes: The specific wording of the disclaimer is not provided, only the requirement for one.
Why Relevant: The bill imposes safety and efficacy requirements similar to audits for high-risk AI applications.
Mechanism of Influence: Deployers of therapeutic chatbots must provide clinical trial data to prove the tool is safe and effective before use by minors.
Evidence:
Ambiguity Notes: It is unclear what standard of 'clinical trial data' is required or which agency reviews it.
Legislation ID: 267031
Bill URL: View Bill
Senate Bill 1627 addresses the need for clarity in Oklahoma's legislative framework by consolidating various versions of statutes. It seeks to amend specific sections of the Oklahoma Statutes and repeal outdated or redundant provisions, thereby enhancing the efficiency and accessibility of state laws.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Senator Paxton |
| 2026-02-02 | First Reading |
Why Relevant: The bill specifically addresses the regulation of AI-generated content by criminalizing the nonconsensual dissemination of 'artificially generated sexual depictions.'
Mechanism of Influence: It establishes legal penalties (misdemeanors or felonies) for individuals who intentionally share sexual images created via artificial intelligence (deepfakes) without the subject's consent.
Evidence:
Ambiguity Notes: The term 'artificially generated' is used broadly and may encompass various technologies beyond modern generative AI, such as traditional CGI, though it is clearly intended to capture AI-driven deepfakes.
Legislation ID: 267726
Bill URL: View Bill
This legislation establishes the Oklahoma Responsible Technology in Schools Act, which provides guidelines for the responsible use of artificial intelligence in public education. It seeks to maintain educator oversight in the use of AI tools, protect student privacy, and ensure transparency in educational practices involving technology.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Senator Seifried |
| 2026-02-02 | First Reading |
Why Relevant: The legislation directly regulates the deployment and operational constraints of AI within the public school system.
Mechanism of Influence: It imposes a 'human-in-the-loop' requirement and prevents automated systems from making high-stakes decisions without educator intervention.
Evidence:
Ambiguity Notes: The term 'high-stakes educational decisions' is not explicitly defined in the summary, which could lead to varying interpretations across districts regarding what constitutes a high-stakes decision.
Why Relevant: The act addresses transparency and data protection requirements for AI usage, aligning with disclosure-related regulatory interests.
Mechanism of Influence: School districts are mandated to adopt policies that specifically address transparency and identify personnel responsible for AI oversight.
Evidence:
Ambiguity Notes: The specific standards for 'transparency' are left to the State Department of Education and local boards to define in their guidance and policies.
Why Relevant: The legislation touches upon age-related constraints for AI tools used by minors.
Mechanism of Influence: It requires that AI tools used in schools be 'age-appropriate,' which necessitates a vetting process to ensure tools match the developmental stage of the students.
Evidence:
Ambiguity Notes: While it mentions age-appropriateness, it does not explicitly detail a technical 'age verification' mechanism like those found in commercial age-gating regulations.
Legislation ID: 268970
Bill URL: View Bill
Senate Bill 1785 introduces the Citizens Bill of Rights, which restricts government and business entities from imposing certain actions on citizens. It guarantees rights related to the use of gold and silver, prohibits digital identification requirements, bans social credit scores, and protects personal freedoms regarding medical decisions, energy usage, and agriculture. The bill also addresses the implications of artificial intelligence, ensuring that it is not used to discriminate or infringe on citizens' rights. Violations of this act may result in legal consequences.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Senator Jett |
| 2026-02-02 | First Reading |
Why Relevant: The bill contains specific prohibitions on the application of artificial intelligence in critical sectors such as healthcare and employment.
Mechanism of Influence: It creates legal liability for entities that use AI to make life or medical care decisions and mandates compensation if AI is used to replace human labor. It also prevents the use of AI for discriminatory practices.
Evidence:
Ambiguity Notes: The provision regarding the replacement of human workers 'without compensation' is broad and does not specify the form of compensation, the duration, or who the recipient must be (e.g., the displaced worker or a state fund).
Legislation ID: 268350
Bill URL: View Bill
Senate Bill 2038 seeks to establish guidelines for health insurance issuers regarding the use of artificial intelligence (AI) in making decisions about health insurance coverage. It prohibits the issuance of adverse consumer outcomes by AI systems, mandates that licensed professionals must make final decisions on such outcomes, and requires health insurance issuers to disclose the involvement of human professionals in decision-making processes. The bill also empowers the Insurance Commissioner to investigate the use of AI by insurers and imposes penalties for violations.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Senator Goodwin |
| 2026-02-02 | First Reading |
Why Relevant: The bill directly regulates the application of AI in the health insurance sector by restricting its autonomy in decision-making.
Mechanism of Influence: It creates a legal barrier against fully automated adverse outcomes, forcing insurers to integrate human review into any AI-driven workflow.
Evidence:
Ambiguity Notes: The effectiveness depends on the specific definitions of 'AI system' and 'Artificial Intelligence' provided in the bill's text.
Why Relevant: The legislation includes a disclosure mandate regarding the decision-making process.
Mechanism of Influence: Insurers must inform claimants that a human professional, rather than just an algorithm, was responsible for the final decision, ensuring transparency.
Evidence:
Ambiguity Notes: It is unclear if the disclosure must explicitly state that AI was used in the preliminary stages or only that a human made the final call.
Why Relevant: The bill establishes government oversight and investigative authority over AI usage.
Mechanism of Influence: By granting the Insurance Commissioner the power to review AI systems, it creates a mechanism for auditing the logic and compliance of insurance algorithms.
Evidence:
Ambiguity Notes: The scope of the 'investigation' is broad, potentially allowing for technical audits of AI models or merely procedural reviews.
Legislation ID: 269139
Bill URL: View Bill
Senate Bill 2085 introduces comprehensive regulations regarding artificial intelligence technology in Oklahoma. It defines key terms, prohibits state entities from contracting with foreign adversaries, and establishes rights for individuals concerning AI use. The bill also includes specific provisions to protect minors from inappropriate interactions with AI chatbots and mandates transparency from AI companies regarding data use and user interactions.
| Date | Action |
|---|---|
| 2026-02-02 | Authored by Senator Hamilton |
| 2026-02-02 | First Reading |
Why Relevant: The bill directly addresses AI disclosures and transparency.
Mechanism of Influence: It establishes a legal right for Oklahomans to be informed when they are interacting with an artificial intelligence system rather than a human.
Evidence:
Ambiguity Notes: The bill mentions the 'right to know' but the specific format or timing of the disclosure (e.g., a watermark, a text disclaimer) may be subject to rules created by the Attorney General.
Why Relevant: The legislation includes specific age-related restrictions and verification requirements for AI usage.
Mechanism of Influence: It requires companion chatbot platforms to implement parental consent mechanisms and oversight tools before allowing minors to maintain accounts.
Evidence:
Ambiguity Notes: The bill does not explicitly define the technical method for age verification, leaving the implementation details to the platforms or future AG rulemaking.
Why Relevant: The bill introduces oversight and regulatory restrictions on AI companies based on ownership and control.
Mechanism of Influence: It mandates that AI companies provide affidavits regarding their ownership to ensure they are not controlled by foreign adversaries before contracting with the state.
Evidence:
Ambiguity Notes: The criteria for 'foreign adversaries' likely relies on external state or federal lists which may fluctuate.
Legislation ID: 196757
Bill URL: View Bill
Bill 3431 aims to amend the South Carolina Code of Laws by introducing new regulations for social media companies that cater to minors. It establishes definitions, outlines requirements for protecting minors' personal data, restricts access during certain hours, and mandates parental controls. The bill also addresses consumer complaints and provides for enforcement mechanisms, ensuring that social media platforms prioritize the safety and well-being of minor users.
| Date | Action |
|---|---|
| 2026-01-21 | Concurred in House amendment and enrolled |
| 2026-01-14 | Returned to Senate with amendments |
| 2026-01-14 | Returned to Senate with amendments ( House Journal-page 105 ) |
| 2026-01-14 | Roll call Yeas-112 Nays-0 |
| 2026-01-14 | Roll call Yeas-112 Nays-0 ( House Journal-page 116 ) |
| 2026-01-14 | Senate amendment amended |
| 2026-01-14 | Senate amendment amended ( House Journal-page 105 ) |
| 2025-05-12 | Scriveners error corrected |
Why Relevant: The bill regulates algorithmic design and features that lead to compulsive usage and psychological harm.
Mechanism of Influence: By requiring services to prevent 'compulsive usage' and allow users to 'disable unnecessary design features,' the law impacts the deployment of AI-driven engagement and recommendation algorithms.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but the behaviors it regulates (compulsive usage, dark patterns, and targeted advertising) are primarily executed via AI and machine learning models on social media platforms.
Why Relevant: The bill mandates access restrictions and parental notifications based on the user's age.
Mechanism of Influence: To comply with restrictions on access hours for minors, platforms must implement age verification or estimation technologies to distinguish between minor and adult users.
Evidence:
Ambiguity Notes: The bill does not specify the technical standard for age verification, leaving the implementation method to the covered online services.
Why Relevant: The legislation restricts the use of personal data for targeted advertising to minors.
Mechanism of Influence: This provision limits the use of AI-driven profiling and automated decision-making systems used to serve personalized advertisements to minor users.
Evidence:
Ambiguity Notes: The definition of 'targeted advertising' often encompasses various AI-based ad-tech processes, though the bill focuses on the outcome rather than the specific technology.
Legislation ID: 244830
Bill URL: View Bill
Bill 4582 seeks to amend the South Carolina Code by adding a new section that mandates school districts to provide instruction on artificial intelligence. Starting from the 2026-2027 school year, schools will educate students on accessing, utilizing, and critically evaluating AI tools, guided by the Department of Education's recommendations. The bill emphasizes the importance of teaching foundational AI concepts, practical applications, responsible usage, and critical thinking skills related to AI.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced and read first time |
| 2026-01-13 | Referred to Committee on Education and Public Works |
| 2025-12-16 | Prefiled |
| 2025-12-16 | Referred to Committee on Education and Public Works |
Why Relevant: The bill addresses the 'responsible usage' of artificial intelligence, aligning with broader AI oversight and regulation concerns, specifically regarding how the technology is introduced to and managed within the public education system.
Mechanism of Influence: By mandating AI literacy and critical evaluation in schools, the law shapes the public's understanding of AI risks and benefits, potentially influencing future regulatory compliance and ethical standards for AI interaction.
Evidence:
Ambiguity Notes: The bill focuses on educational mandates rather than technical regulations like audits or weight submissions; however, 'responsible usage' is a broad term that could encompass discussions on AI ethics and disclosures.
The Right to Compute Act seeks to amend the South Carolina Code by adding a chapter that outlines the rights related to computational resources, particularly those controlled by artificial intelligence systems. It emphasizes the need for risk management policies for critical infrastructure and sets forth conditions under which governmental restrictions on private computational resources may occur. The bill recognizes the fundamental right to own and use technological tools while ensuring public safety and national security.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced and read first time |
| 2026-01-13 | Referred to Committee on Labor, Commerce and Industry |
| 2025-12-16 | Prefiled |
| 2025-12-16 | Referred to Committee on Labor, Commerce and Industry |
Why Relevant: The act specifically mandates risk management for certain AI systems.
Mechanism of Influence: Deployers of critical AI systems are required to develop and maintain risk management policies that align with national or international standards.
Evidence:
Ambiguity Notes: The term 'critical artificial intelligence systems' is not fully defined in the abstract, leaving room for interpretation on which AI applications fall under this mandate.
Why Relevant: The act establishes a legal framework for government oversight and restriction of AI-related resources.
Mechanism of Influence: It sets a high legal bar (narrowly tailored to a compelling interest) for any government action that would restrict the use of computational resources.
Evidence:
Ambiguity Notes: The definition of 'compelling governmental interest' and 'narrowly tailored' are legal standards that will require judicial interpretation in the context of AI.
Legislation ID: 244723
Bill URL: View Bill
The South Carolina Community Data Protection and Responsible Surveillance Act prohibits state and local entities from participating in surveillance systems that store data on third-party servers or use AI for tracking vehicles based on appearance. It establishes strict guidelines for data retention, judicial oversight, and annual reporting to ensure transparency and accountability in the use of surveillance technologies.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced and read first time |
| 2026-01-13 | Referred to Committee on Judiciary |
| 2025-12-16 | Prefiled |
| 2025-12-16 | Referred to Committee on Judiciary |
Why Relevant: The act explicitly bans the use of artificial intelligence for specific surveillance purposes.
Mechanism of Influence: It prohibits law enforcement from using AI or automated systems to identify or track vehicles based on non-license plate characteristics, such as vehicle appearance.
Evidence:
Ambiguity Notes: The term 'essential contextual data' is not strictly defined, which could lead to varying interpretations of what ALPR systems are allowed to capture alongside license plates.
Why Relevant: The legislation mandates oversight through regular auditing of surveillance technology usage.
Mechanism of Influence: It requires independent audits every quarter by the South Carolina Inspector General to ensure compliance with the act's privacy and data management provisions.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill requires public disclosure of how surveillance data is utilized.
Mechanism of Influence: Law enforcement agencies must publish annual reports detailing total scans, alerts generated, and investigations involving ALPR data.
Evidence:
Ambiguity Notes: None
Legislation ID: 257267
Bill URL: View Bill
Bill 788 aims to amend the South Carolina Code of Laws to include provisions regarding the use of artificial intelligence in therapy and psychotherapy. It establishes definitions for key terms related to AI and therapy, sets requirements for informed consent from clients when AI is used, and prohibits unlicensed entities from providing therapy services. The bill also emphasizes the confidentiality of client records and outlines penalties for violations of these regulations.
| Date | Action |
|---|---|
| 2026-01-13 | Introduced and read first time |
| 2026-01-13 | Referred to Committee on Labor, Commerce and Industry |
Why Relevant: The bill explicitly mandates disclosures and informed consent regarding the use of AI in a professional setting.
Mechanism of Influence: Licensed professionals are required to provide written notification to patients and obtain their written consent before utilizing AI in recorded therapeutic sessions.
Evidence:
Ambiguity Notes: The term 'supplementary support' is not strictly defined, potentially allowing for a wide range of AI applications as long as a human is technically overseeing them.
Why Relevant: The legislation establishes strict oversight requirements for AI, preventing it from operating autonomously in a clinical capacity.
Mechanism of Influence: It prohibits AI from making independent therapeutic decisions and requires all AI-delivered services to be overseen by a licensed professional.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill includes enforcement mechanisms and penalties for failing to adhere to the AI regulations.
Mechanism of Influence: Licensing boards are empowered to assess civil penalties and fines for violations of the provisions governing AI use.
Evidence:
Ambiguity Notes: None
Legislation ID: 285242
Bill URL: View Bill
House Bill 1125 aims to create a taskforce composed of representatives from various industries, educational institutions, and government entities to examine the technological advancements and implications of artificial intelligence in South Dakota. The taskforce will provide findings and recommendations by December 1, 2028, including suggestions for any necessary legislation regarding AI systems.
| Date | Action |
|---|---|
| 2026-02-02 | Schedule for hearing |
| 2026-01-26 | First read in House and referred |
| 2026-01-23 | Introduced |
Why Relevant: The bill directly addresses the oversight and potential future regulation of artificial intelligence by creating a formal study taskforce.
Mechanism of Influence: The taskforce is specifically charged with examining AI's impact and providing recommendations for necessary legislation, which could lead to future regulatory frameworks, disclosure requirements, or audit mandates.
Evidence:
Ambiguity Notes: The term 'implications' is broad and could encompass a wide range of regulatory topics such as privacy, ethics, bias, or economic impact, depending on the taskforce's focus.
Legislation ID: 240616
Bill URL: View Bill
This bill amends the Tennessee Code to introduce specific definitions related to artificial intelligence and its applications, particularly focusing on AI chatbots. It establishes unlawful practices concerning the training of AI to engage in harmful behaviors or simulate human interactions that could lead to emotional harm or misinformation. The bill also provides for civil actions against violators, allowing individuals to seek damages for violations.
| Date | Action |
|---|---|
| 2026-01-15 | Sponsor(s) Added. |
| 2026-01-14 | Assigned to s/c Criminal Justice Subcommittee |
| 2026-01-14 | P2C, ref. to Judiciary Committee |
| 2026-01-13 | Intro., P1C. |
| 2025-12-11 | Filed for introduction |
Why Relevant: The bill directly regulates the development and training phase of artificial intelligence systems, which is a core component of AI oversight.
Mechanism of Influence: It imposes criminal and civil liability on developers and entities that train AI to engage in prohibited behaviors, effectively mandating safety guardrails during the model training process.
Evidence:
Ambiguity Notes: The prohibition on 'simulating human relationships' or 'emotional dependency' is broad and could impact a wide variety of generative AI and companion-style chatbots.
Why Relevant: The legislation establishes legal definitions and oversight mechanisms for AI technologies.
Mechanism of Influence: By defining terms like 'artificial intelligence chatbot' and 'train,' the bill creates a legal framework for the government and individuals to monitor and litigate AI-related harms.
Evidence:
Ambiguity Notes: None
This bill amends the Tennessee Code Annotated to establish regulations regarding artificial intelligence systems in mental health. It specifically prohibits individuals from advertising AI systems as qualified mental health professionals and outlines penalties for violations, including civil penalties under the Tennessee Consumer Protection Act.
| Date | Action |
|---|---|
| 2026-01-14 | Assigned to s/c Population Health Subcommittee |
| 2026-01-14 | P2C, ref. to Health Committee |
| 2026-01-13 | Intro., P1C. |
| 2026-01-05 | Filed for introduction |
Why Relevant: The bill directly regulates the marketing and public representation of AI systems, specifically targeting the mental health sector.
Mechanism of Influence: It imposes a legal prohibition on developers and deployers, preventing them from mischaracterizing AI capabilities as equivalent to human professional expertise, backed by financial penalties.
Evidence:
Ambiguity Notes: The abstract does not provide a specific definition for 'artificial intelligence system,' which may lead to broad interpretation regarding which software tools fall under these regulations.
Why Relevant: The legislation integrates AI-specific oversight into the state's existing consumer protection framework.
Mechanism of Influence: By classifying AI misrepresentation as an unfair or deceptive act, it grants state authorities the power to enforce AI regulations using established consumer protection mechanisms.
Evidence:
Ambiguity Notes: None
Legislation ID: 240617
Bill URL: View Bill
This bill amends the Tennessee Code to establish definitions and legal parameters regarding artificial intelligence, particularly in the context of training AI systems. It prohibits the training of AI to engage in harmful behaviors, such as encouraging suicide or simulating human relationships, and sets forth civil and criminal penalties for violations. The bill also provides mechanisms for individuals to seek damages if they are harmed by such AI systems.
| Date | Action |
|---|---|
| 2026-01-14 | Passed on Second Consideration, refer to Senate Judiciary Committee |
| 2026-01-13 | Introduced, Passed on First Consideration |
| 2025-12-18 | Filed for introduction |
Why Relevant: The bill directly regulates the development and training phase of artificial intelligence systems.
Mechanism of Influence: It creates a legal prohibition against specific AI functionalities, effectively mandating safety guardrails during the development and training phase by banning the simulation of human relationships or emotional support.
Evidence:
Ambiguity Notes: The prohibition on 'simulating human relationships' is broad and could potentially impact a wide variety of generative AI and chatbot applications beyond those intended for mental health.
Why Relevant: The legislation establishes an enforcement and liability framework for AI-related harms.
Mechanism of Influence: By allowing for liquidated damages of $150,000 and punitive damages, it creates a high-stakes compliance environment for AI developers and companies operating within the state.
Evidence:
Ambiguity Notes: The scope of 'aggrieved individuals' and the line separating a 'training' violation from a 'deployment' violation may require judicial interpretation.
The bill amends Tennessee Code Annotated to include regulations on artificial intelligence systems in the mental health field. It specifically prohibits individuals or entities from advertising AI systems as qualified mental health professionals, establishing penalties for violations under the Tennessee Consumer Protection Act. The bill defines artificial intelligence and sets a civil penalty for violations to ensure compliance and protect consumers.
| Date | Action |
|---|---|
| 2026-01-14 | Passed on Second Consideration, refer to Senate Health and Welfare Committee |
| 2026-01-13 | Introduced, Passed on First Consideration |
| 2026-01-12 | Filed for introduction |
Why Relevant: The bill directly regulates the marketing and representation of AI systems, specifically targeting the disclosure of AI's status versus human professional qualifications.
Mechanism of Influence: It creates a legal prohibition against misrepresenting AI capabilities as human professional expertise and enforces this through civil penalties and consumer protection laws.
Evidence:
Ambiguity Notes: The definition of AI as systems 'capable of performing tasks generally associated with human intelligence' is relatively broad and may require further clarification as to whether it applies to simple chatbots or only advanced diagnostic tools.
Legislation ID: 260262
Bill URL: View Bill
Senate Bill 1700, known as the Curbing Harmful AI Technology (CHAT) Act, amends Tennessee Code to introduce regulations governing artificial intelligence systems and companion chatbots. It defines key terms, outlines safety and design requirements, mandates transparency and data privacy protections, and establishes enforcement mechanisms to hold developers and deployers accountable for violations. The bill seeks to ensure that AI technologies do not harm minors and provides a framework for addressing issues related to mental health and user safety.
| Date | Action |
|---|---|
| 2026-01-15 | Filed for introduction |
Why Relevant: The bill mandates specific disclosures and transparency requirements for AI interactions.
Mechanism of Influence: Deployers are legally required to provide disclaimers that a chatbot is not human at specific intervals (every 30 minutes) and when giving regulated advice.
Evidence:
Ambiguity Notes: The term 'regulated advice' is mentioned but not specifically defined in the text, which could lead to broad interpretations regarding which AI outputs trigger specific disclosure requirements.
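Some of these mandates are concrete enough to sketch in code. The following is a minimal, hypothetical compliance sketch in Python; nothing here comes from the bill text beyond the 30-minute interval and the regulated-advice trigger, and the session-start disclosure is an assumption:

```python
import time

# "Every 30 minutes" per the bill summary; the interval is the one concrete
# number the text gives.
DISCLAIMER_INTERVAL_SECONDS = 30 * 60

class DisclaimerTracker:
    """Tracks when a 'this chatbot is not human' disclaimer is due."""

    def __init__(self):
        self.last_shown = None  # timestamp of the last disclaimer; None = never shown

    def disclaimer_due(self, now=None, giving_regulated_advice=False):
        now = time.time() if now is None else now
        if giving_regulated_advice:
            return True  # disclosure also triggers whenever regulated advice is given
        if self.last_shown is None:
            return True  # disclose at the start of a session (assumed behavior)
        return now - self.last_shown >= DISCLAIMER_INTERVAL_SECONDS

    def mark_shown(self, now=None):
        self.last_shown = time.time() if now is None else now
```

A deployer would check `disclaimer_due` before emitting each response and call `mark_shown` after rendering the disclaimer; how the disclaimer is worded and displayed is left open by the summary.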
Why Relevant: The legislation addresses age-specific usage and data privacy for minors.
Mechanism of Influence: It prohibits the use of a minor's input for training AI models without explicit parental consent and restricts the types of AI interactions allowed for minors, specifically regarding mental health.
Evidence:
Ambiguity Notes: While it requires parental consent, the specific mechanism for age verification to identify a user as a minor is not detailed in the summary.
Why Relevant: The bill requires developers to perform safety testing and public reporting, similar to an audit requirement.
Mechanism of Influence: Developers are obligated to publish their safety test findings for public access and maintain mechanisms for reporting adverse incidents.
Evidence:
Ambiguity Notes: The criteria for what constitutes a 'safety test' and the required depth of the 'findings' are not specified, potentially allowing for varied levels of rigor among developers.
H.B. 218 introduces a requirement for Utah high school students to complete a half-credit digital literacy course to graduate. The bill emphasizes the integration of digital literacy concepts throughout K-12 education, defining key areas such as social media awareness and artificial intelligence literacy. It mandates end-of-course assessments for the digital literacy requirement, establishes a task force to oversee the implementation, and sets a timeline for the new requirements to take effect.
| Date | Action |
|---|---|
| 2026-01-23 | House/ received fiscal note from Fiscal Analyst |
| 2026-01-20 | House/ 1st reading (Introduced) |
| 2026-01-14 | House/ received bill from Legislative Research |
| 2026-01-09 | Bill Numbered but not Distributed |
| 2026-01-09 | Numbered Bill Publicly Distributed |
Why Relevant: The bill explicitly identifies artificial intelligence literacy as a core component of the new digital literacy graduation requirement.
Mechanism of Influence: By mandating AI literacy in the K-12 curriculum, the law requires the state to define educational standards for AI and ensures that all graduating students have a foundational understanding of the technology.
Evidence:
Ambiguity Notes: The bill focuses on educational literacy rather than the direct regulation of AI development, deployment, or technical audits.
This bill mandates the State Board of Education to develop model policies regarding technology and artificial intelligence use in public school classrooms. It includes specific requirements for local education agencies (LEAs) on how to integrate technology effectively and safely. The bill also addresses the need for transparency with parents, limits on screen time, and the introduction of artificial intelligence standards into core education curricula.
| Date | Action |
|---|---|
| 2026-01-20 | House/ 1st reading (Introduced) |
| 2026-01-20 | House/ received bill from Legislative Research |
| 2026-01-16 | Bill Numbered but not Distributed |
| 2026-01-16 | Numbered Bill Publicly Distributed |
Why Relevant: The bill establishes regulatory frameworks for AI usage in educational settings.
Mechanism of Influence: It requires the State Board of Education to publish a model AI use policy which local education agencies (LEAs) must then adopt and follow to ensure responsible use.
Evidence:
Ambiguity Notes: The specific criteria for 'responsible use' are not defined in the abstract and are left to the State Board's administrative rulemaking.
Why Relevant: It mandates disclosures and parental consent specifically for AI-related educational activities.
Mechanism of Influence: Students are prohibited from participating in AI sandbox courses unless the LEA provides notification and obtains explicit parental consent.
Evidence:
Ambiguity Notes: While 'AI sandbox courses' are defined, the specific parameters of what constitutes a 'sandbox' versus general AI use in other courses may require further clarification.
Why Relevant: The bill introduces oversight, auditing, and reporting requirements for AI policy implementation.
Mechanism of Influence: LEAs must certify compliance with AI usage policies to receive state funding and must submit detailed reports to the state board regarding their compliance monitoring plans.
Evidence:
Ambiguity Notes: The bill requires LEAs to adopt a method for evaluating effectiveness, but does not specify the metrics for that evaluation.
The bill enacts the Digital Voyeurism Prevention Act, which prohibits the generation and distribution of counterfeit intimate images without consent. It establishes civil liabilities for violations, mandates consent verification systems for generation services, and outlines procedures for platforms to remove non-consensual content. Additionally, it sets requirements for the disclosure of AI-generated content and the preservation of content provenance data.
| Date | Action |
|---|---|
| 2026-01-26 | House/ received fiscal note from Fiscal Analyst |
| 2026-01-20 | House/ 1st reading (Introduced) |
| 2026-01-20 | House/ received bill from Legislative Research |
| 2026-01-16 | Bill Numbered but not Distributed |
| 2026-01-16 | Numbered Bill Publicly Distributed |
Why Relevant: The bill provides specific legal definitions for artificial intelligence and generative AI systems to establish the scope of regulation.
Mechanism of Influence: By defining 'artificial intelligence technology' and 'generative artificial intelligence system', the law determines which software tools are subject to the act's mandates and liabilities.
Evidence:
Ambiguity Notes: The breadth of the definition for 'artificial intelligence technology' could determine whether traditional editing software is captured alongside modern LLMs or diffusion models.
Why Relevant: The legislation mandates specific technical and procedural requirements for AI generation services, including consent verification and transparency.
Mechanism of Influence: AI generation services must implement and maintain consent verification systems and disclose their procedures to users, creating a regulatory compliance burden for AI developers.
Evidence:
Ambiguity Notes: The term 'reasonable investigation' for platforms and the specific technical standards for 'consent verification systems' are not fully defined, leaving room for regulatory or judicial interpretation.
Why Relevant: The bill addresses AI transparency through disclosure requirements and the preservation of content provenance data.
Mechanism of Influence: It sets requirements for the disclosure of AI-generated content and mandates the preservation of provenance data, which tracks the origin and history of digital content.
Evidence:
Ambiguity Notes: The 'Digital Content Provenance Standards Act' component suggests a reliance on evolving technical standards for watermarking or metadata that may not be universally adopted.
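Provenance preservation is ultimately a metadata problem. As a rough illustration only, assuming Python and an invented schema (the bill summary mandates preserving provenance data but does not fix one; field names here are loosely inspired by metadata standards such as C2PA), a minimal provenance record might look like:

```python
import hashlib
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str, model_version: str) -> dict:
    """Build a minimal provenance record for AI-generated content.

    Field names are illustrative assumptions, not drawn from the bill.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # tamper-evident content hash
        "ai_generated": True,                                   # the disclosure flag itself
        "generator": generator,                                 # tool or service that produced it
        "model_version": model_version,
        "created_at": datetime.now(timezone.utc).isoformat(),   # when it was generated
    }

record = make_provenance_record(b"example image bytes", "example-tool", "1.0")
```

The hash ties the record to one exact piece of content, so downstream platforms can verify that preserved provenance data still matches what they are hosting.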
Legislation ID: 273002
Bill URL: View Bill
The act requires large frontier developers to publish public safety plans addressing catastrophic risks and child protection plans addressing child safety risks. It mandates publication of risk assessment summaries, prohibits false or misleading statements about risks, requires safety incident reporting to a state office, and creates an enforcement framework including civil penalties, whistleblower protections, and an enforcement account. It also provides for rulemaking and annual reporting by the Office of Artificial Intelligence Policy and includes a severability clause.
| Date | Action |
|---|---|
| 2026-01-28 | House/ 2nd reading |
| 2026-01-28 | House/ comm rpt/ substituted |
| 2026-01-27 | House Comm - Favorable Recommendation |
| 2026-01-27 | House Comm - Substitute Recommendation |
| 2026-01-26 | House/ to standing committee |
| 2026-01-23 | House/ received fiscal note from Fiscal Analyst |
| 2026-01-20 | House/ 1st reading (Introduced) |
| 2026-01-20 | House/ received bill from Legislative Research |
Why Relevant: The act specifically targets child safety and protection obligations for frontier AI developers.
Mechanism of Influence: Frontier developers must implement and publish child protection plans that assess and mitigate risks to minors.
Evidence:
Ambiguity Notes: The term 'potential risks to minors' is not explicitly defined in the abstract, leaving room for interpretation on what constitutes a safety risk.
Why Relevant: It mandates public disclosures and transparency regarding AI model risks and safety strategies.
Mechanism of Influence: Developers are required to publish summaries of risk assessments and public safety plans on their websites prior to deployment.
Evidence:
Ambiguity Notes: Provisions allowing redactions for 'trade secrets' could potentially be used to obscure critical safety information.
Why Relevant: The legislation establishes government oversight and reporting requirements for AI safety.
Mechanism of Influence: It creates a formal mechanism for reporting safety incidents to the Office of Artificial Intelligence Policy and requires annual risk assessment reports.
Evidence:
Ambiguity Notes: The specific 'specified timeframes' for reporting incidents are mentioned but not detailed in the text.
Legislation ID: 283936
Bill URL: View Bill
This bill modifies existing laws related to the Office of Artificial Intelligence Policy and its associated learning laboratory program. It introduces new definitions, revises duties of the office, updates provisions for regulatory agreements, and makes technical adjustments to improve the management and oversight of artificial intelligence technologies within the state.
| Date | Action |
|---|---|
| 2026-01-28 | House/ received fiscal note from Fiscal Analyst |
| 2026-01-22 | Bill Numbered but not Distributed |
| 2026-01-22 | House/ 1st reading (Introduced) |
| 2026-01-22 | House/ received bill from Legislative Research |
| 2026-01-22 | Numbered Bill Publicly Distributed |
Why Relevant: The bill creates a dedicated government office specifically for the management and oversight of artificial intelligence technologies.
Mechanism of Influence: The Office of Artificial Intelligence Policy is tasked with administering a learning laboratory program, consulting on regulatory proposals, and reporting annually on AI developments.
Evidence:
Ambiguity Notes: The scope of 'regulatory proposals' is broad and could encompass various types of AI governance from ethics to technical standards.
Why Relevant: The bill establishes a program to evaluate AI technologies and inform future state regulations.
Mechanism of Influence: The Artificial Intelligence Learning Laboratory Program analyzes AI technologies to evaluate existing regulatory frameworks and encourage responsible deployment.
Evidence:
Ambiguity Notes: The 'learning agenda' is not strictly defined, leaving the office significant discretion over which AI risks or technologies to prioritize.
Why Relevant: The bill sets specific criteria for AI developers to enter into regulatory agreements with the state, involving government vetting.
Mechanism of Influence: Participants must demonstrate technical capability and financial resources, and agreements must include safeguards and limitations on AI use.
Evidence:
Ambiguity Notes: The term 'regulatory mitigation' suggests a sandbox-like environment where certain rules might be waived in exchange for oversight, but the specific regulations being mitigated are not listed.
This bill amends the Utah Consumer Privacy Act to include specific provisions related to motor vehicle data privacy. It defines key terms, applies privacy regulations to motor vehicle manufacturers, mandates in-vehicle privacy controls, exempts certain safety data from consent requirements, and requires the Motor Vehicle Division to educate consumers about their data privacy rights.
| Date | Action |
|---|---|
| 2026-01-26 | House/ 1st reading (Introduced) |
| 2026-01-26 | House/ received bill from Legislative Research |
| 2026-01-23 | Bill Numbered but not Distributed |
| 2026-01-23 | Numbered Bill Publicly Distributed |
Why Relevant: The bill includes 'biometric data' within its scope, which is a critical data category often processed by AI systems for driver monitoring, facial recognition, or security.
Mechanism of Influence: By regulating the collection and use of biometric data in vehicles, the law places constraints on the types of data AI models can ingest and process without specific consumer protections.
Evidence:
Ambiguity Notes: The bill focuses on the privacy of the data rather than the specific AI algorithms that might use the data, leaving the technical implementation of 'privacy controls' for AI-driven features undefined.
Why Relevant: The bill addresses 'targeted advertising,' which is a field heavily reliant on AI and machine learning for consumer profiling and automated decision-making.
Mechanism of Influence: Mandating an opt-out for data processing related to targeted advertising restricts the data pipeline used to train and execute AI-driven marketing models.
Evidence:
Ambiguity Notes: The text does not explicitly mention AI or machine learning, focusing instead on the 'processing' of data for the purpose of advertising, which is the functional application of AI in this context.
Legislation ID: 245758
Bill URL: View Bill
H.B. 55 establishes requirements for the termination of contracts with third-party providers when they fail to comply with privacy laws. It mandates that the State Board of Education investigate alleged privacy violations and conduct audits of agreements. The bill also prohibits third-party contractors from imposing fees on education entities for terminating contracts due to privacy violations.
| Date | Action |
|---|---|
| 2026-01-23 | House/ 2nd reading |
| 2026-01-23 | House/ comm rpt/ amended/ placed on Consent Cal |
| 2026-01-23 | House/ placed back on 3rd Reading Calendar |
| 2026-01-22 | House Comm - Amendment Recommendation |
| 2026-01-22 | House Comm - Consent Calendar Recommendation |
| 2026-01-22 | House Comm - Favorable Recommendation |
| 2026-01-21 | House/ to standing committee |
| 2026-01-20 | House/ 1st reading (Introduced) |
Why Relevant: The legislation establishes a framework for auditing and regulating third-party contractors, which includes technology providers and software developers often utilizing AI in educational settings.
Mechanism of Influence: AI vendors serving as third-party contractors for educational entities would be subject to mandatory compliance audits and potential contract termination if their data processing or algorithmic functions violate student privacy laws.
Evidence:
Ambiguity Notes: The bill does not explicitly name 'Artificial Intelligence,' but its broad application to 'third-party contractors' and 'data privacy' encompasses AI service providers handling student data.
The Product Pricing Amendments bill enacts provisions related to algorithmic pricing, defining necessary terms and establishing requirements for suppliers to disclose their pricing methods. It aims to protect consumers from deceptive practices by ensuring transparency in how prices are set and displayed based on algorithms.
| Date | Action |
|---|---|
| 2026-01-27 | Senate/ received fiscal note from Fiscal Analyst |
| 2026-01-26 | Senate Comm - Not Considered |
| 2026-01-26 | Senate/ to standing committee |
| 2026-01-22 | Senate/ 1st reading (Introduced) |
| 2026-01-22 | Senate/ received bill from Legislative Research |
| 2026-01-21 | Bill Numbered but not Distributed |
| 2026-01-21 | Numbered Bill Publicly Distributed |
Why Relevant: The bill directly addresses the regulation of algorithms and requires specific disclosures for AI-driven or automated pricing systems.
Mechanism of Influence: It mandates that suppliers provide a disclaimer informing consumers that prices are set using algorithms based on personal data, thereby increasing transparency in automated decision-making.
Evidence:
Ambiguity Notes: The specific definition of 'algorithm' provided in the bill would determine the breadth of AI technologies covered, potentially ranging from simple rule-based systems to complex machine learning models.
Legislation ID: 258216
Bill URL: View Bill
The bill addresses the growing concern over the use of artificial intelligence systems in mental health services, highlighting the risks of unregulated AI interactions. It seeks to safeguard individuals by prohibiting the use of AI in therapeutic settings and establishing guidelines for mental health professionals, who may use AI for administrative purposes only.
| Date | Action |
|---|---|
| 2026-01-13 | Read first time and referred to the Committee on [Health Care] |
Why Relevant: The bill explicitly regulates the use of artificial intelligence by defining permitted and prohibited applications within a specific professional field.
Mechanism of Influence: It creates a legal boundary that prevents AI from being used as a substitute for human practitioners in therapeutic settings, effectively banning AI-driven therapy.
Evidence:
Ambiguity Notes: The term 'administrative tasks' is not granularly defined, which may leave room for interpretation regarding data processing or scheduling versus clinical documentation.
Why Relevant: The legislation requires disclosures and consent mechanisms for AI usage.
Mechanism of Influence: Mental health professionals must obtain patient consent before using AI for recording or transcription, ensuring transparency in AI deployment.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill establishes oversight requirements and professional accountability for AI use.
Mechanism of Influence: By amending the definition of unprofessional conduct to include AI misuse, the law subjects professionals to licensing penalties for failing to oversee AI tools properly.
Evidence:
Ambiguity Notes: None
Legislation ID: 258229
Bill URL: View Bill
The bill amends existing laws to establish a framework for the registration and certification of educational technology products that collect student data. It mandates that providers register with the Secretary of State, pay a fee, and disclose their privacy policies and product information. The Secretary of State will create certification standards to ensure these products comply with state and federal privacy laws, and schools are prohibited from using non-certified products. The bill also includes penalties for non-compliance and outlines a transition period for schools to adapt to the new requirements.
| Date | Action |
|---|---|
| 2026-01-13 | Read first time and referred to the Committee on [Commerce and Economic Development] |
Why Relevant: The bill regulates educational technology products, a category that increasingly includes AI-driven personalized learning platforms, automated grading systems, and student monitoring tools.
Mechanism of Influence: AI providers operating in the educational sector would be required to disclose their data collection practices and undergo a state certification process, effectively subjecting AI models used in schools to privacy audits and regulatory oversight.
Evidence:
Ambiguity Notes: The bill uses the broad term 'educational technology products' without explicitly defining 'Artificial Intelligence.' While AI tools fall under this umbrella, the specific requirements for AI-specific disclosures (like model weights or algorithmic bias) are not explicitly mentioned.
Legislation ID: 274514
Bill URL: View Bill
This bill introduces the Cloud Computing Public Utility Act, which recognizes cloud computing services as essential utilities in Vermont. It seeks to create a regulatory environment that fosters competition and innovation while safeguarding consumer interests against unfair practices. The bill outlines the definitions, jurisdiction, and operational requirements for cloud service providers, aiming to ensure service quality, affordability, and transparency in pricing and data management.
| Date | Action |
|---|---|
| 2026-01-20 | Read first time and referred to the Committee on [Energy and Digital Infrastructure] |
Why Relevant: The bill regulates the underlying infrastructure essential for the development, training, and deployment of Artificial Intelligence.
Mechanism of Influence: By classifying cloud computing as a public utility, the state gains oversight over the compute resources required for AI. Requirements for 'Provider Reports' and 'Service Quality' standards could be used to monitor the operational practices of platforms hosting large-scale AI models.
Evidence:
Ambiguity Notes: The bill does not explicitly mention 'Artificial Intelligence' or 'Machine Learning.' However, the definition of 'cloud computing' is typically broad enough to encompass Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) offerings used by AI developers.
Why Relevant: The bill addresses data management and portability, which are critical components of AI data governance.
Mechanism of Influence: Provisions requiring providers to offer data in a portable format and prohibiting excessive transfer fees affect how datasets used for AI training are managed and moved between cloud environments.
Evidence:
Ambiguity Notes: While focused on consumer data, these provisions could impact enterprise-level AI data sets depending on how 'consumer' is defined in the final regulations.
Legislation ID: 284621
Bill URL: View Bill
This bill mandates the Agency of Digital Services to conduct an annual inventory of automated decision systems in state government, assessing their cybersecurity vulnerabilities and potential risks to personal data. It empowers the Agency to request the termination of any hazardous systems identified during the review process.
| Date | Action |
|---|---|
| 2026-01-22 | Read first time and referred to the Committee on [Energy and Digital Infrastructure] |
Why Relevant: The bill directly addresses the oversight and auditing of automated decision systems, which is a primary category of artificial intelligence regulation.
Mechanism of Influence: By requiring an inventory that includes bias testing and risk assessments, the law creates a mandatory audit trail for AI-driven systems used by the state.
Evidence:
Ambiguity Notes: The term 'automated decision system' is often used as a legal catch-all for AI and algorithmic processes, though its specific technical scope depends on the statutory definition provided in the full text.
Why Relevant: The legislation establishes a regulatory enforcement mechanism to stop the use of harmful AI technologies.
Mechanism of Influence: The Agency is granted the authority to request the termination of systems that produce biased results or pose safety risks, effectively acting as a regulatory gatekeeper for state-deployed AI.
Evidence:
Ambiguity Notes: The phrase 'request the termination' may imply an advisory role rather than a unilateral power to shut down systems, which could affect the strength of the regulation.
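An annual inventory with risk assessments is essentially a structured record per system. As a hypothetical Python sketch of what one inventory row and a simple flagging rule might look like (the bill summary names cybersecurity vulnerabilities and personal-data risk; the field names and the flagging rule itself are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class ADSInventoryEntry:
    """One row in an annual automated-decision-system inventory.

    Fields beyond cybersecurity risk and personal data are illustrative.
    """
    system_name: str
    agency: str
    processes_personal_data: bool
    cybersecurity_risks: list[str] = field(default_factory=list)
    bias_tested: bool = False

    def flagged_for_review(self) -> bool:
        # Flag systems that touch personal data while carrying open
        # cybersecurity risks, or that have never been bias tested.
        personal_data_risk = self.processes_personal_data and bool(self.cybersecurity_risks)
        return personal_data_risk or not self.bias_tested
```

Under such a scheme, the Agency's termination requests would target the flagged entries rather than the inventory as a whole.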
Legislation ID: 252270
Bill URL: View Bill
The proposed legislation establishes a framework to prohibit surveillance pricing in the State of Vermont. It defines key terms related to consumer information and surveillance technology, outlines the conditions under which surveillance pricing may be used, and establishes penalties for violations. The bill seeks to ensure fair pricing practices for consumers and requires transparency when personal information is collected and used.
| Date | Action |
|---|---|
| 2026-01-06 | Read 1st time & referred to Committee on [Economic Development, Housing and General Affairs] |
Why Relevant: Surveillance pricing is a primary application of AI and machine learning in retail, where algorithms analyze consumer behavior and personal data to set dynamic, individualized prices.
Mechanism of Influence: The law regulates the output of AI-driven pricing models by prohibiting price discrimination based on individual consumer surveillance data, thereby restricting how these automated systems can be deployed in the marketplace.
Evidence:
Ambiguity Notes: While the abstract uses the term 'surveillance technology' rather than 'artificial intelligence,' the definitions of 'covered information' and 'surveillance pricing' likely encompass the data processing and algorithmic decision-making characteristic of AI.
Legislation ID: 271146
Bill URL: View Bill
This bill authorizes municipal law enforcement agencies to use automated traffic law enforcement (ATLE) systems in work zones, areas with high crash or speeding rates, at traffic signals, and in locations with excessive vehicle noise. It outlines definitions, usage guidelines, and requirements for deploying ATLE systems, including public notification and engineering analysis. The bill also stipulates the procedures for municipalities to adopt ATLE systems and includes provisions for penalties and defenses related to violations captured by these systems.
| Date | Action |
|---|---|
| 2026-01-16 | Read 1st time & referred to Committee on [Transportation] |
Why Relevant: The bill regulates automated decision-making systems used in law enforcement contexts, specifically those designed to identify and penalize traffic violations without direct human intervention at the moment of the offense.
Mechanism of Influence: It imposes mandatory independent audits (calibration checks) and data logging requirements to ensure the accuracy and integrity of the automated systems' outputs.
Evidence:
Ambiguity Notes: While the bill uses the term 'automated' rather than 'artificial intelligence,' the functionalities described—such as identifying vehicles and sound levels from recorded images—typically rely on algorithmic processing and computer vision technologies often categorized under AI.
Why Relevant: The legislation includes disclosure and transparency requirements for the use of automated technology.
Mechanism of Influence: It requires municipalities to submit detailed annual reports to the legislature, including data on system operations, violations issued, and recommendations for changes to the automated oversight.
Evidence:
Ambiguity Notes: The reporting requirements focus on operational outcomes rather than disclosure of the underlying algorithms or model weights.
Legislation ID: 269307
Bill URL: View Bill
House Bill No. 1170 amends the Code of Virginia, focusing on definitions related to the administration of criminal justice and the use of artificial intelligence (AI) systems. It defines terms such as 'artificial intelligence system' and 'covered AI system' and outlines the responsibilities of law enforcement agencies that employ such technologies. The bill seeks to establish guidelines for the deployment of AI in criminal justice to enhance transparency and accountability.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HST sub: Communications |
| 2026-01-26 | Fiscal Impact Statement from Department of Planning and Budget (HB1170) |
| 2026-01-14 | Committee Referral Pending |
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26105300D |
| 2026-01-14 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The bill directly addresses the regulation and oversight of AI systems used by law enforcement and criminal justice agencies.
Mechanism of Influence: By defining 'covered AI systems' and outlining their use in investigations and predictive policing, the law creates a legal framework for what technologies are subject to oversight and transparency requirements within the criminal justice system.
Evidence:
Ambiguity Notes: The exclusion of systems that do not 'materially impact' investigations may lead to varying interpretations of what constitutes an administrative task versus a regulated investigative tool.
Legislation ID: 269328
Bill URL: View Bill
This bill proposes the addition of a new section to the Code of Virginia that prohibits school boards from requiring or encouraging students to use artificial intelligence chatbots for instructional purposes. The bill cites the unreliability of such chatbots as sources of information and their potential negative impact on students' critical thinking skills. Each school board is mandated to develop and implement a policy that enforces this prohibition.
| Date | Action |
|---|---|
| 2026-01-20 | Fiscal Impact Statement from Department of Planning and Budget (HB1186) |
| 2026-01-14 | Committee Referral Pending |
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26101321D |
Why Relevant: The legislation directly regulates the deployment and usage of artificial intelligence tools within the educational sector.
Mechanism of Influence: It mandates that local school boards create and enforce policies that prevent the use of AI chatbots for instruction, effectively banning their integration into the curriculum.
Evidence:
Ambiguity Notes: The term 'instructional purposes' may require further clarification to determine if it applies to administrative tasks, extracurricular activities, or strictly classroom learning.
Legislation ID: 269438
Bill URL: View Bill
House Bill No. 1252 amends the Virginia Residential Landlord and Tenant Act to address the use of algorithmic pricing devices by landlords. It mandates disclosure of such devices to tenants, outlines requirements for human review of rent determinations, and establishes civil penalties for violations. The bill seeks to protect tenants from deceptive practices related to automated rent pricing.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HGL sub: Housing/Consumer Protection |
| 2026-01-14 | Committee Referral Pending |
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26103337D |
| 2026-01-14 | Referred to Committee on General Laws |
Why Relevant: The bill establishes mandatory disclosure requirements for automated systems used in financial transactions (rent).
Mechanism of Influence: Landlords are legally required to inform tenants in writing if an algorithmic device is used and provide a plain-language explanation of the algorithm's logic.
Evidence:
Ambiguity Notes: The term 'plain-language summary' is not strictly defined, leaving room for interpretation on the level of technical detail required regarding the algorithm's weights or data inputs.
Why Relevant: It mandates human oversight of AI-driven or algorithmic decisions, a key pillar of AI safety and accountability legislation.
Mechanism of Influence: It creates a legal right for a consumer (tenant) to bypass or verify an automated decision through a human review process.
Evidence:
Ambiguity Notes: The bill does not specify the standards for the 'human review' or whether the human has the authority to override the algorithm without justification.
Why Relevant: The bill defines and prohibits the deceptive use of algorithmic tools, establishing a regulatory framework for AI-adjacent software.
Mechanism of Influence: It grants the Attorney General enforcement power to seek injunctions and civil penalties against landlords using these devices in misleading ways.
Evidence:
Ambiguity Notes: The definition of 'algorithmic pricing device' determines the scope of the law and whether it applies to simple spreadsheets versus complex machine learning models.
Legislation ID: 269444
Bill URL: View Bill
This bill introduces amendments to various sections of the Code of Virginia, particularly focusing on definitions relevant to criminal justice, law enforcement, and the use of artificial intelligence technologies. It aims to clarify the roles and responsibilities of criminal justice agencies, enhance the standards for forensic laboratories, and incorporate the use of generative AI and machine learning systems in law enforcement practices. The bill also seeks to ensure that private police departments operate under clear regulations and maintain compliance with state laws.
| Date | Action |
|---|---|
| 2026-01-14 | Committee Referral Pending |
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26103277D |
Why Relevant: The bill explicitly addresses the integration of generative AI and machine learning systems within the framework of law enforcement and criminal justice.
Mechanism of Influence: By including these technologies in the definitions and operational standards for criminal justice agencies, the law establishes a legal basis for their use and potential oversight in policing and forensic contexts.
Evidence:
Ambiguity Notes: While the abstract mentions the incorporation of AI, the specific regulatory requirements such as audits or disclosures are not detailed in the provided summary, leaving the exact nature of the oversight to the full text of the amendments.
Legislation ID: 269448
Bill URL: View Bill
This bill proposes amendments to various sections of the Code of Virginia, particularly in relation to the definitions and roles of law enforcement agencies. It introduces new definitions for technologies like generative AI and machine learning systems, while also updating the definitions of criminal justice agencies and their functions. The bill seeks to enhance the framework governing law enforcement practices and ensure compliance with modern technological advancements.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HST sub: Communications |
| 2026-01-14 | Committee Referral Pending |
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26103194D |
| 2026-01-14 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The bill explicitly incorporates definitions for generative AI and machine learning into the state's criminal justice code.
Mechanism of Influence: By establishing these definitions, the law creates a regulatory foundation for how law enforcement agencies can legally deploy and categorize AI technologies.
Evidence:
Ambiguity Notes: The text focuses on definitions and integration; it is not immediately clear if it imposes strict prohibitions or merely provides a framework for adoption.
Why Relevant: The bill requires audits and privacy studies, strengthening oversight and auditing of sensitive data systems.
Mechanism of Influence: Mandatory audits of criminal history information systems ensure oversight of the data that AI and machine learning models would likely process or generate.
Evidence:
Ambiguity Notes: While the audit provision does not explicitly name AI, the concurrent introduction of AI definitions suggests these audits may encompass AI-driven data processing.
Legislation ID: 269496
Bill URL: View Bill
This bill amends the Code of Virginia to define and provide guidelines for the use of artificial intelligence-based tools in law enforcement. It specifies what constitutes covered artificial intelligence, mandates disclosure of AI usage in police reports, and ensures that human decision-makers are involved in critical legal decisions. Additionally, it establishes a framework for civil actions against law enforcement agencies for non-compliance.
| Date | Action |
|---|---|
| 2026-01-15 | Committee Referral Pending |
| 2026-01-15 | Presented and ordered printed 26105298D |
Why Relevant: The bill directly imposes AI disclosure and regulation requirements on law enforcement.
Mechanism of Influence: It mandates that law enforcement officers include disclaimers in reports and notify attorneys and investigated individuals when covered AI is used in investigations.
Evidence:
Ambiguity Notes: The scope of regulation depends on the definition of 'covered artificial intelligence,' which excludes administrative tools that do not impact investigations.
Why Relevant: The legislation includes provisions for audits and government oversight of AI systems.
Mechanism of Influence: It requires the maintenance of audit trails for AI-generated reports and grants the Attorney General authority to investigate and sue non-compliant agencies.
Evidence:
Ambiguity Notes: The text does not specify the technical requirements or duration for which audit trails must be maintained.
Why Relevant: The bill regulates the decision-making autonomy of AI in high-stakes legal environments.
Mechanism of Influence: It prohibits AI from being the sole factor in decisions like pre-trial detention or sentencing, requiring a human-in-the-loop for all critical legal determinations.
Evidence:
Ambiguity Notes: The level of 'involvement' required by a human decision-maker to satisfy the requirement is not explicitly quantified.
Legislation ID: 269497
Bill URL: View Bill
This bill introduces a requirement for state and local law enforcement agencies to conduct an annual inventory of any artificial intelligence systems they utilize. The inventory must be publicly available and include detailed information about each system's capabilities, data inputs, outputs, and authorized uses. Additionally, the bill provides mechanisms for civil action against agencies that fail to comply with these requirements, allowing both the Attorney General and individuals to seek enforcement.
| Date | Action |
|---|---|
| 2026-01-15 | Committee Referral Pending |
| 2026-01-15 | Presented and ordered printed 26105299D |
Why Relevant: The bill directly mandates AI disclosures and government oversight of AI systems.
Mechanism of Influence: It requires law enforcement to create a public record of AI systems, including technical details like data inputs and outputs, which functions as a mandatory disclosure and transparency mechanism.
Evidence:
Ambiguity Notes: The definition of 'covered AI system' excludes administrative tasks, which may leave certain algorithmic tools used by police outside the scope of public disclosure.
Why Relevant: The legislation establishes enforcement and accountability protocols for AI regulation.
Mechanism of Influence: By authorizing the Attorney General to investigate and allowing private citizens to sue for non-compliance, the bill creates a legal framework to ensure the AI inventory requirements are met.
Evidence:
Ambiguity Notes: The 90-day written notice requirement for individuals may serve as a procedural hurdle that delays enforcement actions.
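A public inventory of this kind would translate naturally into a structured record per covered system. As an illustration only (the bill prescribes no format, and every field name below is an assumption, not bill text), one inventory entry might look like:

```python
import json

# Hypothetical inventory record for one covered AI system; field names
# track the bill's categories (capabilities, data inputs, outputs,
# authorized uses) but are otherwise illustrative.
inventory_entry = {
    "system_name": "License Plate Reader Network",
    "vendor": "Example Vendor, Inc.",
    "capabilities": ["vehicle identification", "location logging"],
    "data_inputs": ["roadside camera imagery"],
    "data_outputs": ["plate numbers", "timestamps", "locations"],
    "authorized_uses": ["stolen-vehicle alerts", "AMBER alerts"],
    "last_inventoried": "2026-07-01",
}

# Publishing the annual inventory could be as simple as serializing
# a list of such records to a public endpoint.
print(json.dumps(inventory_entry, indent=2))
```

A schema like this would also make the Attorney General's compliance checks mechanical: a missing field is a detectable violation rather than a judgment call.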
Legislation ID: 285963
Bill URL: View Bill
This bill introduces new sections to the Code of Virginia that govern the use of automated decision systems in making employment decisions. It defines key terms related to artificial intelligence and automated systems, establishes requirements for state agencies and employers regarding the use of such systems, and outlines civil penalties for violations. The bill emphasizes the need for human involvement in employment decisions and mandates testing for algorithmic discrimination.
| Date | Action |
|---|---|
| 2026-01-23 | Committee Referral Pending |
| 2026-01-23 | Presented and ordered printed 26105693D |
Why Relevant: The bill directly addresses the regulation of automated decision systems, which are a core component of artificial intelligence applications in the workforce.
Mechanism of Influence: It imposes mandatory annual testing for algorithmic discrimination and requires human oversight, effectively creating an auditing and accountability framework for AI-driven employment tools.
Evidence:
Ambiguity Notes: The definition of 'automated decision system' is broad, potentially covering a wide range of AI technologies from simple rule-based systems to complex machine learning models.
Why Relevant: The legislation includes specific disclosure requirements and consumer/employee rights regarding AI usage.
Mechanism of Influence: By providing a right to opt out and requiring transparency, the bill forces organizations to be accountable for their use of AI and allows for human intervention.
Evidence:
Ambiguity Notes: The bill does not specify the exact technical standards for 'testing for algorithmic discrimination,' which may lead to varying interpretations of compliance.
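Because the bill leaves the testing methodology open, one baseline a compliance audit might adopt is the EEOC's four-fifths heuristic: the selection rate for any group should be at least 80% of the highest group's rate. A minimal sketch under that assumption (the threshold, group labels, and numbers are illustrative, not drawn from the bill):

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag each group by whether its selection rate is at least
    `threshold` times the highest group's rate (EEOC four-fifths rule)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

# Hypothetical hiring-tool outcomes: (candidates advanced, candidates screened)
results = four_fifths_check({"group_a": (50, 100), "group_b": (30, 100)})
print(results)  # group_b's 30% rate is only 60% of group_a's 50% rate
```

The four-fifths rule is only one possible operationalization; absent statutory standards, employers could equally argue for statistical-significance tests or calibration metrics, which is exactly the interpretive gap the Ambiguity Notes flag.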
Legislation ID: 285970
Bill URL: View Bill
House Bill No. 1521 proposes the addition of a new chapter to the Code of Virginia concerning digital innovation and infrastructure. It addresses the rights to digital property and technology resources, mandates risk management policies for AI-controlled critical infrastructure, establishes safe harbors for compliance, preempts local regulations that contradict state law, and outlines enforcement mechanisms and remedies for violations.
| Date | Action |
|---|---|
| 2026-01-23 | Committee Referral Pending |
| 2026-01-23 | Presented and ordered printed 26106161D |
Why Relevant: The bill directly regulates the deployment of artificial intelligence systems within critical infrastructure sectors.
Mechanism of Influence: It mandates the creation and maintenance of risk management policies for AI systems, effectively requiring a governance framework for AI operations.
Evidence:
Ambiguity Notes: The term 'critical artificial intelligence systems' is defined within the chapter, but the specific thresholds for what constitutes 'critical' may be subject to interpretation or further administrative refinement.
Why Relevant: The legislation includes a mandatory disclosure and reporting mechanism through annual attestations.
Mechanism of Influence: Deployers must file an annual certification by July 1 each year to verify compliance with risk management policies, serving as a form of regulatory oversight and audit.
Evidence:
Ambiguity Notes: The abstract does not specify if the underlying risk management policies themselves must be submitted to the government or if only the attestation of their existence and compliance is required.
Why Relevant: The bill provides enforcement mechanisms for non-compliance with AI regulations.
Mechanism of Influence: It empowers the Attorney General to enforce the law against government entities and allows individuals to seek declaratory and injunctive relief.
Evidence:
Ambiguity Notes: While the Attorney General can enforce against government entities, the abstract is less explicit about the specific penalties for private deployers beyond the loss of safe harbor protections.
Legislation ID: 253727
Bill URL: View Bill
The Artificial Intelligence Workforce Impact Act establishes reporting requirements for state agencies regarding the impacts of artificial intelligence on workforce positions. Agencies must report workforce changes quarterly, and if significant impacts are reported, they must develop a transition plan to assist affected employees with retraining and job placement. The Department of Human Resource Management will review these reports and plans to identify trends and recommend strategies for workforce adaptation.
| Date | Action |
|---|---|
| 2026-01-27 | Fiscal Impact Statement from Department of Planning and Budget (HB310) |
| 2026-01-26 | Assigned HST sub: Communications |
| 2026-01-09 | Committee Referral Pending |
| 2026-01-09 | Prefiled and ordered printed; Offered 01-14-2026 26102961D |
| 2026-01-09 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The act requires disclosures regarding the implementation and impact of AI systems within state government operations.
Mechanism of Influence: Agencies must submit data on AI-related job impacts, which serves as a form of government oversight and transparency regarding AI deployment and its socio-economic consequences.
Evidence:
Ambiguity Notes: The specific definition of 'artificial intelligence system' provided in the act will determine the breadth of technologies that trigger these reporting and planning requirements.
Legislation ID: 258922
Bill URL: View Bill
This resolution calls for a comprehensive study of the artificial intelligence (AI) policies currently in place or being considered by higher education institutions in Virginia. The study aims to assess how these policies address critical issues such as academic integrity, data privacy, equity and access, transparency, and faculty autonomy. The Joint Legislative Audit and Review Commission (JLARC) will also develop a model policy for AI use and make recommendations for resources to support AI education.
| Date | Action |
|---|---|
| 2026-01-13 | Committee Referral Pending |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26104154D |
Why Relevant: The resolution directly addresses the regulation and oversight of AI within the educational sector by mandating a study and the creation of a model policy.
Mechanism of Influence: By tasking JLARC with evaluating policies and creating a model framework, the resolution sets the stage for standardized AI governance and transparency requirements across state universities.
Evidence:
Ambiguity Notes: The term 'transparency' is broad and could encompass various disclosure requirements for AI-generated content or algorithmic processes.
Legislation ID: 258707
Bill URL: View Bill
This bill amends existing consumer protection laws and introduces a new chapter specifically addressing the use of artificial intelligence chatbots in consumer transactions. It outlines prohibited practices, such as misrepresentation and failure to disclose necessary information, that companies must adhere to when utilizing AI chatbots. Penalties for violations are also specified to ensure compliance and protect consumers.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HST sub: Communications |
| 2026-01-13 | Committee Referral Pending |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26105121D |
| 2026-01-13 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The bill specifically targets the regulation and disclosure requirements for artificial intelligence chatbots used in consumer interactions.
Mechanism of Influence: It mandates that companies using AI chatbots must avoid misrepresentation and provide specific disclosures, backed by legal penalties for non-compliance.
Evidence:
Ambiguity Notes: The text mentions 'failure to disclose necessary information' but does not explicitly define what specific AI-related technical information (like model version or data sources) constitutes 'necessary' beyond standard consumer protection.
Legislation ID: 258727
Bill URL: View Bill
This bill amends the Code of Virginia by adding a new section that defines biometric data and mandates that consent must be obtained from individuals before their biometric data can be processed. It also specifies that for children, the processing must comply with existing federal laws regarding online privacy.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HST sub: Technology and Innovation |
| 2026-01-13 | Committee Referral Pending |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26103138D |
| 2026-01-13 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The bill regulates the collection and processing of biometric data, which is the foundational data source for AI-driven technologies such as facial recognition, voice synthesis, and biometric authentication systems.
Mechanism of Influence: By requiring explicit consent and federal compliance for children, the law imposes regulatory hurdles on AI companies that utilize biological characteristics for identification or automated decision-making.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its definition of biometric data encompasses the specific data types (facial features, voiceprints) that are central to the development and deployment of biometric AI models.
Legislation ID: 258741
Bill URL: View Bill
This bill introduces guidelines for mental health service providers regarding the use of artificial intelligence systems in their practice. It defines terms related to mental health services and establishes rules for the use of AI, including the requirement for patient consent and the prohibition of AI making therapeutic decisions. Violations of these provisions may result in civil penalties.
| Date | Action |
|---|---|
| 2026-01-20 | Fiscal Impact Statement from Department of Planning and Budget (HB668) |
| 2026-01-13 | Committee Referral Pending |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26104644D |
| 2026-01-13 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The bill mandates transparency through disclosure and consent requirements for AI usage in a clinical setting.
Mechanism of Influence: Mental health providers must obtain written consent before using AI in recorded sessions and must disclose any AI involvement to patients.
Evidence:
Ambiguity Notes: The term 'supplementary support' is defined but may be subject to interpretation regarding the extent of AI involvement in clinical workflows.
Why Relevant: The legislation imposes strict prohibitions on specific AI capabilities and use cases to ensure human oversight.
Mechanism of Influence: It legally restricts AI from making therapeutic decisions, generating treatment plans without review, or detecting emotions, effectively requiring a human-in-the-loop.
Evidence:
Ambiguity Notes: 'Detecting emotions' is a broad category that might overlap with basic sentiment analysis tools used in administrative support.
Legislation ID: 258742
Bill URL: View Bill
This bill introduces a new section in the Code of Virginia that addresses the impersonation of licensed professionals by chatbots. It defines key terms such as artificial intelligence system and chatbot, and outlines the responsibilities of proprietors in terms of providing clear user notices. It prohibits chatbots from giving substantive responses that could constitute illegal actions if performed by a human, and establishes civil liability for proprietors who fail to comply with these regulations.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HST sub: Communications |
| 2026-01-13 | Committee Referral Pending |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26104752D |
| 2026-01-13 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The bill mandates specific transparency disclosures for AI-driven interactions.
Mechanism of Influence: Proprietors are legally required to provide clear and conspicuous notice to users that they are interacting with an artificial intelligence system rather than a human.
Evidence:
Ambiguity Notes: The term 'conspicuous' is defined by text size and language, but the specific placement on a user interface could still be subject to interpretation.
Why Relevant: The legislation regulates the output and functional capabilities of AI systems in professional contexts.
Mechanism of Influence: It prohibits AI from generating substantive responses that would violate professional licensing laws or constitute crimes if performed by a human, holding the proprietor liable for such outputs.
Evidence:
Ambiguity Notes: The phrase 'substantive responses' is not strictly defined and may require judicial interpretation to determine the threshold of advice that triggers a violation.
Legislation ID: 258786
Bill URL: View Bill
The Fostering Access, Innovation, and Responsibility in Artificial Intelligence Act (FAIR AI Act) proposes the creation of regulations governing artificial intelligence systems in Virginia. It defines key terms related to AI, sets disclosure requirements for developers, establishes the FAIR AI Enforcement Fund for monitoring compliance, and outlines legal defenses in cases of harm caused by AI systems. The act seeks to ensure that AI technologies are deployed responsibly and ethically within the Commonwealth.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HST sub: Communications |
| 2026-01-13 | Committee Referral Pending |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26104935D |
| 2026-01-13 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The act directly imposes AI disclosure and transparency requirements on developers.
Mechanism of Influence: It mandates that developers of base AI models provide specific metadata to users, including training data update dates and terms of service, which facilitates oversight and user awareness.
Evidence:
Ambiguity Notes: The requirement that disclosures be 'clear and accessible' is a qualitative standard that may be subject to interpretation by regulators or courts.
Why Relevant: The legislation establishes a mechanism for government oversight and enforcement of AI regulations.
Mechanism of Influence: By creating the FAIR AI Enforcement Fund, the state provides a dedicated financial structure to support monitoring for AI misuse, bias, and workforce disruption.
Evidence:
Ambiguity Notes: While the fund is established, the specific technical methods for 'monitoring compliance' are not detailed in the provided abstract.
Why Relevant: The act addresses legal accountability and the regulation of harm caused by AI systems.
Mechanism of Influence: It removes the ability for developers to claim 'autonomous harm' as a legal defense, effectively increasing the liability and responsibility of the entities that create and deploy AI.
Evidence:
Ambiguity Notes: The phrase 'other common law defenses' allows for a wide range of existing legal strategies that are not AI-specific.
Legislation ID: 258831
Bill URL: View Bill
House Bill No. 758 seeks to amend the existing consumer protection laws in Virginia by adding provisions specifically addressing the use of artificial intelligence chatbots when interacting with minors. The bill outlines prohibited practices for suppliers of such technology and establishes penalties for violations. It aims to ensure that minors are not subjected to deceptive practices and are provided with appropriate disclosures when engaging with AI chatbots.
| Date | Action |
|---|---|
| 2026-01-13 | Committee Referral Pending |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26103964D |
Why Relevant: The bill specifically targets the regulation of artificial intelligence technology, focusing on chatbots and their interaction with a vulnerable demographic.
Mechanism of Influence: It imposes legal requirements on AI suppliers to provide disclosures and avoid fraudulent acts, creating a compliance framework for AI deployment.
Evidence:
Ambiguity Notes: The definition of 'deceptive practices' in the context of AI logic or generative responses may be subject to broad interpretation by courts.
Why Relevant: The legislation addresses age-related usage restrictions and mandatory disclosures for AI systems.
Mechanism of Influence: By establishing penalties for violations, the bill enforces accountability for AI service providers interacting with minors.
Evidence:
Ambiguity Notes: The bill mentions 'penalties as determined by the court' without specifying a fixed fine schedule, which could lead to varying levels of enforcement.
Legislation ID: 258870
Bill URL: View Bill
This bill amends the Code of Virginia to include provisions for the licensing and oversight of independent verification organizations (IVOs) that assess artificial intelligence systems and applications. The Chief Information Officer (CIO) is tasked with overseeing the licensing process and establishing regulations to ensure transparency, independence, and adequate risk mitigation in AI technologies. The bill also establishes an Artificial Intelligence Safety Advisory Council to assist in these efforts.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HST sub: Technology and Innovation |
| 2026-01-26 | Fiscal Impact Statement from Department of Planning and Budget (HB797) |
| 2026-01-13 | Committee Referral Pending |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26102711D |
| 2026-01-13 | Referred to Committee on Communications, Technology and Innovation |
Why Relevant: The provision directly addresses the regulation and auditing of artificial intelligence systems through the licensing of third-party verification organizations.
Mechanism of Influence: It empowers the CIO to establish regulations for IVOs, which are responsible for assessing AI systems for risk and transparency, effectively creating a state-sanctioned audit mechanism for AI technologies.
Evidence:
Ambiguity Notes: The terms 'necessary regulations' and 'risk management' are broad, leaving significant discretion to the CIO to define the specific standards AI systems must meet.
Why Relevant: It establishes ethical guardrails for the individuals responsible for overseeing AI safety and licensing.
Mechanism of Influence: By prohibiting equity ownership and post-employment work with AI firms, the law attempts to prevent regulatory capture and ensure that AI safety assessments are conducted without industry bias.
Evidence:
Ambiguity Notes: The phrase 'significantly involved in artificial intelligence' is not quantitatively defined, which could lead to disputes over what level of AI involvement triggers a conflict.
Why Relevant: It ensures administrative transparency regarding which organizations are permitted to audit AI systems.
Mechanism of Influence: Mandatory record-keeping of licensing decisions (issuance, refusal, or revocation) allows for public or legislative scrutiny of how AI verification standards are being applied.
Evidence:
Ambiguity Notes: None
Legislation ID: 260108
Bill URL: View Bill
House Bill No. 999 seeks to enhance protections against discrimination by regulating the use of automated decision systems. It defines key terms, outlines unlawful discriminatory practices, mandates disclosure requirements, and establishes assessment protocols for bias and discriminatory outcomes. The bill emphasizes accountability for entities that rely on such systems in decision-making processes, aiming to prevent discrimination based on protected characteristics.
| Date | Action |
|---|---|
| 2026-01-26 | Assigned HGL sub: Housing/Consumer Protection |
| 2026-01-14 | Committee Referral Pending |
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26105175D |
| 2026-01-14 | Referred to Committee on General Laws |
Why Relevant: The bill directly regulates 'automated decision systems,' which is a core component of AI regulation and oversight.
Mechanism of Influence: It imposes legal prohibitions on discriminatory outcomes and mandates specific compliance actions like annual assessments and documentation maintenance.
Evidence:
Ambiguity Notes: The term 'automated decision system' is broad and likely encompasses various AI and machine learning models used for decision-making, though the specific technical threshold for what constitutes such a system is not defined in the abstract.
Why Relevant: The legislation includes specific requirements for disclosures and audits of automated decision systems.
Mechanism of Influence: Entities must perform annual bias assessments (audits) and notify individuals when an automated system is used in a decision affecting them.
Evidence:
Ambiguity Notes: The abstract does not specify the exact format of the disclosure or the methodology required for the bias assessments.
Legislation ID: 269570
Bill URL: View Bill
Senate Bill No. 245 amends existing laws and introduces new sections to the Code of Virginia, focusing on the prohibition of using social media platforms as the sole means of communication for school-related extracurricular activities. It outlines specific regulations for school boards, employees, and volunteers, and establishes civil penalties for non-compliance. The bill also defines responsibilities for social media platforms regarding minors and addresses issues related to algorithmic discrimination and the use of artificial intelligence.
| Date | Action |
|---|---|
| 2026-01-12 | Prefiled and ordered printed; Offered 01-14-2026 26100795D |
| 2026-01-12 | Referred to Committee on Education and Health |
Why Relevant: The bill explicitly mandates the registration of AI systems and addresses algorithmic discrimination.
Mechanism of Influence: AI systems must register annually with the Secretary of the Commonwealth starting in 2027, requiring a fee and documentation of data practices.
Evidence:
Ambiguity Notes: The definition of 'AI systems' is not fully detailed in the abstract, potentially covering a wide range of software.
Why Relevant: The bill includes specific provisions for age verification and data usage restrictions for minors, which are often implemented via AI.
Mechanism of Influence: It mandates that data collected for age verification cannot be used for any other purpose and requires platforms to configure high privacy settings for minors.
Evidence:
Ambiguity Notes: The 'reasonable care' standard for avoiding risks in data processing is subjective and may lead to varying compliance standards.
Legislation ID: 269594
Bill URL: View Bill
This bill introduces regulations for mental health service providers regarding the use of artificial intelligence systems in their practice. It defines terms related to AI use, outlines permissible applications of AI, and establishes requirements for disclosure and consent from patients. The bill also includes provisions for penalties for violations and clarifies that certain types of counseling are exempt from these regulations.
| Date | Action |
|---|---|
| 2026-01-20 | Fiscal Impact Statement from Department of Planning and Budget (SB269) |
| 2026-01-12 | Prefiled and ordered printed; Offered 01-14-2026 26104492D |
| 2026-01-12 | Referred to Committee on General Laws and Technology |
Why Relevant: The bill directly regulates the application of artificial intelligence within the mental health profession.
Mechanism of Influence: It restricts AI systems from making independent therapeutic decisions or interacting directly with clients, ensuring that AI remains a tool under human supervision rather than an autonomous provider.
Evidence:
Ambiguity Notes: The term 'supplementary support' is defined in the bill but its practical boundaries in a clinical setting may require further interpretation by the Department of Health Professions.
Why Relevant: The bill mandates transparency and informed consent regarding the use of AI technologies.
Mechanism of Influence: Providers are legally required to disclose AI usage to patients and obtain written consent, creating a formal oversight mechanism for patient rights.
Evidence:
Ambiguity Notes: None
Why Relevant: The legislation includes enforcement mechanisms for AI-related regulatory violations.
Mechanism of Influence: It establishes civil penalties of up to $10,000 for non-compliance with the AI usage and disclosure rules.
Evidence:
Ambiguity Notes: None
Legislation ID: 271376
Bill URL: View Bill
The proposed legislation, known as the Fostering Access, Innovation, and Responsibility in Artificial Intelligence Act (FAIR AI Act), seeks to create a framework for the ethical development and use of artificial intelligence within the Commonwealth. It includes definitions of key terms related to artificial intelligence, outlines the responsibilities of developers and deployers of AI systems, and establishes an enforcement fund to address misuse and bias in AI applications.
| Date | Action |
|---|---|
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26104938D |
| 2026-01-13 | Referred to Committee on General Laws and Technology |
Why Relevant: The legislation directly mandates transparency through disclosure requirements for AI developers.
Mechanism of Influence: Developers of base AI models must provide accessible information regarding the model's origin, training data updates, and terms of service, making transparency a baseline obligation for model development.
Evidence:
Ambiguity Notes: The term 'clear and accessible manner' is not strictly defined, leaving room for interpretation on where and how these disclosures must be hosted.
Why Relevant: The act establishes a mechanism for government oversight and enforcement against AI misuse.
Mechanism of Influence: By creating the FAIR AI Enforcement Fund, the bill provides the financial infrastructure for state agencies to actively police AI bias and misuse.
Evidence:
Ambiguity Notes: The scope of 'workforce disruption' as a trigger for enforcement is broad and may require further regulatory clarification.
Why Relevant: The bill addresses legal accountability and liability for AI-driven harms.
Mechanism of Influence: It prevents developers and deployers from using the autonomous nature of AI as a legal shield, ensuring they remain responsible for the system's outputs.
Evidence:
Ambiguity Notes: None
Legislation ID: 271395
Bill URL: View Bill
This bill amends the Code of Virginia to include provisions for the licensing and oversight of independent verification organizations (IVOs) that assess artificial intelligence applications and models. It outlines the responsibilities of the Chief Information Officer (CIO) in regulating IVOs, the requirements for licensing, and the establishment of an Artificial Intelligence Safety Advisory Council to advise on these matters.
| Date | Action |
|---|---|
| 2026-01-26 | Fiscal Impact Statement from Department of Planning and Budget (SB384) |
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26101618D |
| 2026-01-13 | Referred to Committee on General Laws and Technology |
Why Relevant: The bill establishes a formal regulatory structure for the oversight and auditing of AI models through third-party verification organizations.
Mechanism of Influence: It mandates that the CIO oversee the licensing of IVOs, ensuring they remain independent from the AI industry while evaluating AI applications for risks.
Evidence:
Ambiguity Notes: The specific 'risk assessment and mitigation strategies' required in IVO plans are left to be defined by the CIO's regulations.
Why Relevant: The legislation requires disclosures and reporting regarding AI capabilities and observed risks.
Mechanism of Influence: IVOs must submit annual reports to the Virginia IT Agency (VITA) providing aggregated information on AI capabilities and risks, effectively creating a government oversight mechanism for AI performance.
Evidence:
Ambiguity Notes: It is unclear if these reports will be made public or remain internal to the government agency.
Why Relevant: The bill focuses on the auditing and verification of AI models, establishing third-party review as a statutory function.
Mechanism of Influence: Licensed IVOs are responsible for implementing verification plans to assess AI models and must update these plans to maintain efficacy in risk detection.
Evidence:
Ambiguity Notes: None
Legislation ID: 271405
Bill URL: View Bill
This legislation introduces a new section to the Code of Virginia that mandates the Board of Education to develop guidelines for the use of AI in instructional settings. It establishes the AI Innovation in Education Pilot Program, which will fund and evaluate innovative AI applications in schools, ensuring that AI is used ethically and effectively while prioritizing student data privacy and accessibility.
| Date | Action |
|---|---|
| 2026-01-13 | Prefiled and ordered printed; Offered 01-14-2026 26105454D |
| 2026-01-13 | Referred to Committee on Education and Health |
Why Relevant: The legislation establishes a regulatory framework for the implementation and use of AI within the state's public education system.
Mechanism of Influence: It mandates the creation of state-level guidance and requires local school boards to enforce policies that align with these safety and ethical standards.
Evidence:
Ambiguity Notes: The term 'ethical and safe use' is broad and leaves specific regulatory standards to be defined by the Board of Education's future guidance.
Why Relevant: The bill addresses transparency and data privacy requirements for AI systems.
Mechanism of Influence: It specifically requires that the state-issued guidance include protocols for student data privacy and transparency in how AI tools operate.
Evidence:
Ambiguity Notes: The level of transparency required (e.g., algorithmic transparency vs. usage disclosure) is not fully specified in the abstract.
Why Relevant: The legislation introduces oversight and evaluation mechanisms for AI applications.
Mechanism of Influence: Through the AI Innovation in Education Pilot Program, the Department of Education is tasked with evaluating AI applications and reporting on their effectiveness and risks.
Evidence:
Ambiguity Notes: None
Legislation ID: 273578
Bill URL: View Bill
This bill introduces amendments to existing laws related to fair housing and landlord-tenant relationships in Virginia. It includes definitions of key terms, outlines unlawful discriminatory housing practices, and specifies requirements for landlords and housing providers to ensure compliance with fair housing standards. The bill also addresses the use of algorithmic pricing in housing transactions, aiming to prevent discrimination based on protected class data.
| Date | Action |
|---|---|
| 2026-01-21 | Assigned GL&T sub: Housing |
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26104984D |
| 2026-01-14 | Referred to Committee on General Laws and Technology |
Why Relevant: The bill explicitly regulates the use of software and algorithms to coordinate pricing among landlords.
Mechanism of Influence: It prohibits the use of algorithmic tools for price-fixing or coordinating rental terms, effectively regulating the application of AI in the real estate sector.
Evidence:
Ambiguity Notes: The exemption for landlords owning four or fewer units creates a threshold where algorithmic coordination might still occur without oversight.
Why Relevant: The bill establishes legal definitions for key technological terms related to AI and data processing.
Mechanism of Influence: By defining 'algorithm', 'dynamic pricing', and 'personal data', the bill sets the scope for which automated systems are subject to these new regulations.
Evidence:
Ambiguity Notes: The abstract does not provide the specific technical criteria used in the definitions, which could determine if simple spreadsheets or complex machine learning models are covered.
Why Relevant: The bill mandates transparency and disclosure regarding the use of algorithms in setting prices.
Mechanism of Influence: It requires housing providers to disclose to consumers when an algorithm is being used to set rental prices based on personal data.
Evidence:
Ambiguity Notes: The method and timing of the disclosure (e.g., 'in advertisements or promotions') may vary in effectiveness depending on the platform used.
Why Relevant: The bill prohibits landlords from relying on algorithmic recommendations for setting rental terms.
Mechanism of Influence: This acts as a direct restriction on the autonomy of AI/algorithmic systems in the housing market, preventing automated systems from dictating contract terms.
Evidence:
Ambiguity Notes: It is unclear if this prohibits all algorithmic assistance or only 'recommendations' that lead to specific adjustments.
Why Relevant: The bill regulates the data inputs used by algorithms to prevent discriminatory outcomes.
Mechanism of Influence: It bans the use of 'protected class data' in algorithmic pricing models, targeting the prevention of algorithmic bias in housing.
Evidence:
Ambiguity Notes: The exception for 'internal audits' might allow landlords to process protected class data within their systems, potentially creating a loophole if not strictly monitored.
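As a rough illustration of the input restriction, a deployer could strip protected-class fields before any record reaches a rent-setting model. The field names below are hypothetical; the statute's actual definition of 'protected class data' would control.

```python
# Attributes the bill would bar from pricing models (illustrative list;
# the statutory definition of 'protected class data' governs in practice).
PROTECTED_FIELDS = {"race", "religion", "national_origin", "sex",
                    "familial_status", "disability"}

def sanitize_pricing_inputs(applicant: dict) -> dict:
    """Drop protected-class fields before the record reaches any
    rent-setting algorithm."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED_FIELDS}

record = {"income": 58000, "credit_score": 710, "race": "X", "sex": "F"}
print(sanitize_pricing_inputs(record))  # {'income': 58000, 'credit_score': 710}
```

Note that field-dropping alone does not prevent proxy discrimination through correlated variables, which is one reason the bill pairs input limits with outcome-based prohibitions.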
Legislation ID: 273579
Bill URL: View Bill
Senate Bill No. 586 proposes amendments to existing sections of the Code of Virginia regarding health insurance practices. It introduces provisions related to the use of artificial intelligence in managing claims and coverage, mandates transparency in AI processes, and establishes rights for expedited appeals and civil penalties for non-compliance. The bill seeks to protect enrollees and providers by ensuring fair treatment in claims processing and addressing the implications of AI in insurance operations.
| Date | Action |
|---|---|
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26100849D |
| 2026-01-14 | Referred to Committee on Commerce and Labor |
Why Relevant: The bill establishes mandatory disclosure and documentation requirements for AI systems used in the insurance sector.
Mechanism of Influence: Carriers are required to notify enrollees and providers when AI is used for adverse determinations and must maintain an audit trail of AI decisions for five years.
Evidence:
Ambiguity Notes: While the bill requires documentation of AI decisions, it does not specify the technical granularity required for these records.
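One way a carrier might satisfy the five-year audit-trail requirement is an append-only decision log with a retention-window prune, sketched below. The record fields and class name are assumptions, since the bill does not prescribe a format or granularity.

```python
import time

FIVE_YEARS_S = 5 * 365 * 24 * 3600  # retention window in seconds

class DecisionLog:
    """Append-only record of AI-assisted determinations, kept for
    five years as the bill would require (hypothetical schema)."""
    def __init__(self):
        self.entries = []

    def record(self, claim_id, model_version, outcome, rationale, now=None):
        self.entries.append({
            "ts": now if now is not None else time.time(),
            "claim_id": claim_id,
            "model_version": model_version,
            "outcome": outcome,
            "rationale": rationale,
        })

    def prune(self, now=None):
        """Discard entries older than the retention window."""
        cutoff = (now if now is not None else time.time()) - FIVE_YEARS_S
        self.entries = [e for e in self.entries if e["ts"] >= cutoff]

log = DecisionLog()
log.record("claim-001", "model-v3", "denied", "exceeds policy limit")
print(len(log.entries))  # 1
```

Recording the model version alongside each outcome is what makes the trail useful when the Commission later requests AI-related information.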
Why Relevant: The legislation provides for government oversight and submission of AI-related data for regulatory enforcement.
Mechanism of Influence: The Commission is empowered to request AI-related information from carriers and can impose civil penalties or revoke licenses for non-compliance.
Evidence:
Ambiguity Notes: The requirement to submit 'AI-related information' is broad and could potentially encompass algorithmic logic or training data parameters depending on Commission rules.
Why Relevant: The bill addresses algorithmic bias and discrimination in AI applications.
Mechanism of Influence: It prohibits the use of AI in ways that violate existing discrimination laws, specifically mentioning protected classes such as age, race, and sex.
Evidence:
Ambiguity Notes: None
Legislation ID: 281869
Bill URL: View Bill
This bill amends the Virginia Consumer Data Protection Act to include new provisions regarding online pricing strategies. It specifically prohibits controllers or processors from generating prices based on the hardware state of a consumer's online device, the presence or absence of software, or precise geolocation data. However, it allows for exceptions in cases of device repairs, trade-in values, and legitimate pricing variations based on location. The bill aims to ensure fair pricing practices in the digital marketplace.
| Date | Action |
|---|---|
| 2026-01-14 | Prefiled and ordered printed; Offered 01-14-2026 26103781D |
| 2026-01-14 | Referred to Committee on General Laws and Technology |
Why Relevant: The bill regulates automated price generation, which is a common application of algorithmic and AI-driven systems in the digital marketplace.
Mechanism of Influence: By prohibiting the use of specific data inputs (hardware state, software, geolocation) for price generation, the law effectively restricts the features that can be used in pricing algorithms or AI models, mandating a form of algorithmic constraint.
Evidence:
Ambiguity Notes: The text does not explicitly use the term 'Artificial Intelligence' or 'Machine Learning,' focusing instead on the 'generation' of prices by controllers and processors. However, in modern contexts, such generation is typically handled by automated decision-making systems.
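The prohibition amounts to a constraint on model inputs. A minimal sketch, assuming hypothetical signal names, of a pricing function that ignores the barred device-derived data:

```python
# Signals the amendment would bar from price generation
# (hypothetical field names, for illustration only).
PROHIBITED_SIGNALS = {"hardware_state", "installed_software", "precise_geolocation"}

def quote_price(base_price: float, context: dict) -> float:
    """Compute an offer using only permitted signals; prohibited
    device-derived inputs are dropped before any pricing logic runs."""
    allowed = {k: v for k, v in context.items() if k not in PROHIBITED_SIGNALS}
    # Illustrative adjustment: a loyalty discount from permitted data only.
    discount = 0.05 if allowed.get("loyalty_member") else 0.0
    return round(base_price * (1 - discount), 2)

print(quote_price(100.0, {"loyalty_member": True,
                          "precise_geolocation": (37.5, -77.4)}))  # 95.0
```

Filtering at the input boundary, rather than inside the model, makes compliance auditable regardless of how the pricing logic itself is implemented.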
Legislation ID: 286022
Bill URL: View Bill
This bill seeks to amend existing lottery regulations in Virginia, enhancing the powers of the Virginia Lottery Board to regulate lottery operations and sports betting. It introduces new advertising restrictions to protect consumers and mandates age verification for lottery sales to prevent underage gambling. The bill also outlines the responsibilities of the Board in maintaining the integrity of lottery operations and ensuring consumer protection.
| Date | Action |
|---|---|
| 2026-01-23 | Presented and ordered printed 26105677D |
| 2026-01-23 | Referred to Committee on General Laws and Technology |
Why Relevant: The bill restricts targeted digital advertising based on personal data.
Mechanism of Influence: This limits the application of AI-driven profiling and algorithmic targeting used in digital marketing to reach specific demographics.
Evidence:
Ambiguity Notes: The bill does not explicitly name 'Artificial Intelligence' or 'Machine Learning,' but 'targeted digital advertising' typically relies on these technologies to process personal data for ad placement.
Why Relevant: The bill mandates the use of age verification software for lottery operations.
Mechanism of Influence: It requires the technical implementation of verification systems, which often utilize AI-based biometric or document analysis tools to confirm identity.
Evidence:
Ambiguity Notes: The specific technical requirements for the 'software' are not defined, leaving it open to various implementations, including automated AI systems or simple database lookups.
Legislation ID: 252363
Bill URL: View Bill
This bill seeks to update the Code of Virginia by modifying sections related to traffic enforcement technologies. It aims to clarify the use of speed safety cameras and other monitoring systems, establish guidelines for their operation, and outline the civil penalties for violations detected by these systems. The bill also addresses the handling of personal information collected through these systems and ensures compliance with privacy standards.
| Date | Action |
|---|---|
| 2026-01-15 | Reported from Transportation and rereferred to Finance and Appropriations (11-Y 3-N) |
| 2025-12-30 | Prefiled and ordered printed; Offered 01-14-2026 26100916D |
| 2025-12-30 | Referred to Committee on Transportation |
Why Relevant: The bill regulates automated enforcement systems which utilize computer vision and automated decision-making processes to identify violations.
Mechanism of Influence: It establishes operational standards and legal frameworks for 'speed safety cameras' and 'monitoring systems,' which are forms of automated technology used for law enforcement oversight.
Evidence:
Ambiguity Notes: While the bill does not explicitly use the term 'Artificial Intelligence,' the technologies described (automated monitoring and speed detection) often rely on algorithmic processing and computer vision to function without direct human intervention at the moment of detection.
Why Relevant: The legislation includes requirements for technical audits and performance reporting for automated systems.
Mechanism of Influence: It mandates daily accuracy tests and annual reporting of results to the Department of State Police, creating a mandatory oversight and calibration loop for the automated technology.
Evidence:
Ambiguity Notes: The 'audits' are focused on technical accuracy and calibration rather than algorithmic bias or model weights, but they represent a form of mandatory government oversight for automated systems.
Why Relevant: The bill addresses data privacy and the handling of personal information collected by automated surveillance systems.
Mechanism of Influence: It restricts how data collected by these systems can be used, mandates confidentiality, and requires the purging of data within specific timeframes (60 days) if no summons is issued.
Evidence:
Ambiguity Notes: The focus is on the protection of PII (Personally Identifiable Information) generated by the system rather than the disclosure of the system's underlying logic or training data.
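The 60-day purge rule can be read as a simple retention predicate: keep a capture only if a summons issued or the window has not lapsed. A sketch with a hypothetical record shape:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=60)  # purge window when no summons is issued

def purge_records(records, now):
    """Keep a capture only if a summons issued, or it is still
    inside the 60-day window (record fields are illustrative)."""
    return [r for r in records
            if r["summons_issued"] or now - r["captured_at"] <= RETENTION]

now = datetime(2026, 3, 1)
records = [
    {"plate": "ABC1234", "captured_at": datetime(2025, 12, 1), "summons_issued": False},
    {"plate": "XYZ9876", "captured_at": datetime(2025, 12, 1), "summons_issued": True},
    {"plate": "LMN4567", "captured_at": datetime(2026, 2, 20), "summons_issued": False},
]
print([r["plate"] for r in purge_records(records, now)])  # ['XYZ9876', 'LMN4567']
```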
Legislation ID: 252364
Bill URL: View Bill
This bill amends and reenacts sections of the Code of Virginia concerning consumer data protection, particularly focusing on definitions related to personal data, artificial intelligence, and social media platforms. It establishes rights for consumers regarding their personal data, including the rights to access, correct, delete, and obtain copies of their data. The bill also introduces new definitions and requirements for entities that process personal data, ensuring better protection and transparency for consumers.
| Date | Action |
|---|---|
| 2025-12-30 | Prefiled and ordered printed; Offered 01-14-2026 26100812D |
| 2025-12-30 | Referred to Committee on General Laws and Technology |
Why Relevant: The bill introduces specific regulatory requirements for 'model operators' of artificial intelligence, focusing on data interoperability and consumer transparency.
Mechanism of Influence: It mandates that AI model operators implement standardized interfaces to allow for the sharing of contextual data with other AI models, effectively regulating the technical architecture and data-sharing practices of AI developers.
Evidence:
Ambiguity Notes: The term 'contextual data' is not explicitly defined in the summary, which could lead to broad interpretations regarding how much internal model state or training data must be made interoperable.
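Since the bill does not specify the interface, the sketch below shows one plausible shape for a standardized contextual-data export. The schema, envelope fields, and class name are all assumptions.

```python
import json

class ContextExporter:
    """Minimal sketch of a standardized export interface for the
    'contextual data' a model operator holds about a user
    (schema and field names are hypothetical)."""
    SCHEMA_VERSION = "1.0"

    def __init__(self, operator_name):
        self.operator_name = operator_name
        self._contexts = {}

    def put(self, user_id, key, value):
        self._contexts.setdefault(user_id, {})[key] = value

    def export(self, user_id) -> str:
        """Serialize one user's context in a portable envelope that
        another operator's model could ingest."""
        return json.dumps({
            "schema": self.SCHEMA_VERSION,
            "operator": self.operator_name,
            "user_id": user_id,
            "context": self._contexts.get(user_id, {}),
        }, sort_keys=True)

ex = ContextExporter("example-operator")
ex.put("u1", "preferred_language", "en")
print(json.loads(ex.export("u1"))["context"])  # {'preferred_language': 'en'}
```

A versioned, self-describing envelope like this is the usual way interoperability mandates get implemented, because it lets receiving systems validate what they ingest.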
Why Relevant: The legislation extends consumer data privacy rights—such as access, correction, and deletion—to the data handled by AI models and their operators.
Mechanism of Influence: AI companies (as model operators) are required to authenticate and fulfill consumer requests regarding their data, which impacts how AI systems store, process, and purge user information used for training or inference.
Evidence:
Ambiguity Notes: The effectiveness of these rights depends on the specific definition of 'model operator' and whether it includes both developers of foundational models and third-party deployers.
This bill outlines the obligations of developers and deployers of high-risk artificial intelligence systems, including requirements for impact assessments, consumer disclosures, and measures to mitigate algorithmic discrimination. It also specifies exemptions and establishes civil remedies for violations, aiming to protect consumers from potential harms associated with AI systems.
| Date | Action |
|---|---|
| 2025-12-16 | Prefiled for introduction. |
Why Relevant: The bill imposes direct disclosure requirements on AI usage.
Mechanism of Influence: Deployers are legally required to notify consumers when they are interacting with an AI system and must provide explanations for any consequential decisions made by the system.
Evidence:
Ambiguity Notes: None
Why Relevant: The bill mandates impact assessments, which serve as a form of audit and oversight.
Mechanism of Influence: Deployers must conduct and document impact assessments before a high-risk AI system is used, and these records must be retained for three years for potential review.
Evidence:
Ambiguity Notes: The bill allows for existing assessments from other regulations to fulfill these requirements if the scope is similar, which may lead to varying levels of rigor depending on the original regulation used.
Why Relevant: The legislation focuses on the regulation and risk management of AI systems to prevent harm.
Mechanism of Influence: It establishes a 'reasonable care' standard for both developers and deployers to mitigate the risks of algorithmic discrimination and requires the maintenance of risk management policies.
Evidence:
Ambiguity Notes: The term 'reasonable care' is a legal standard that may be subject to judicial interpretation and evolving industry best practices.
Why Relevant: The bill provides the necessary legal definitions to determine the scope of AI regulation.
Mechanism of Influence: By defining 'high-risk artificial intelligence system' and 'algorithmic discrimination,' the bill sets the boundaries for which technologies and behaviors are subject to these new requirements.
Evidence:
Ambiguity Notes: The specific exclusions within the definitions of 'high-risk' systems could potentially exempt certain AI applications that users or advocacy groups might otherwise consider dangerous.
This bill introduces regulations for operators of companion chatbots, requiring them to notify users about the nature of the interaction, implement protocols to prevent suicidal content, and provide referrals to crisis services. It also establishes civil liability for operators if their systems contribute to user harm, particularly in cases of suicide. The bill mandates annual reporting to the Department of Health on related incidents and protocols, and it outlines the responsibilities of operators regarding minors and the content generated by their systems.
| Date | Action |
|---|---|
| 2025-12-11 | Prefiled for introduction. |
Why Relevant: The bill mandates specific transparency disclosures for AI systems.
Mechanism of Influence: Operators are legally required to notify users that they are interacting with an AI and that the system may not be appropriate for minors.
Evidence:
Ambiguity Notes: The bill does not specify the exact format or prominence required for these notifications.
Why Relevant: The bill imposes reporting requirements to a government body, serving as a form of regulatory oversight.
Mechanism of Influence: Operators must submit annual reports to the Department of Health detailing their safety protocols and the frequency of crisis referrals triggered by the AI.
Evidence:
Ambiguity Notes: The criteria for what constitutes an 'adequate' protocol for handling suicidal ideation are not defined in the text.
Why Relevant: The legislation regulates the content generation and safety guardrails of AI systems.
Mechanism of Influence: It requires operators to implement technical or algorithmic protocols to prevent the AI from engaging in specific types of harmful discussions.
Evidence:
Ambiguity Notes: The phrase 'discussions that may lead to suicidal ideation' is broad and could lead to significant filtering of AI responses.
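A minimal illustration of the notify-and-refer protocol follows. This is a toy keyword screen; a production operator would use trained classifiers and clinically reviewed protocols, and the term list and function names here are purely illustrative.

```python
# Toy illustration only: real systems use trained classifiers with
# clinical review, not keyword lists.
CRISIS_TERMS = {"suicide", "kill myself", "end my life"}
CRISIS_REFERRAL = ("You may be going through something serious. "
                   "You can reach the 988 Suicide & Crisis Lifeline "
                   "by calling or texting 988.")

def screen_message(user_text: str):
    """Return (needs_referral, response_prefix) per the bill's
    notify-and-refer protocol."""
    lowered = user_text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return True, CRISIS_REFERRAL
    return False, ""
```

Counting how often the referral branch fires would also feed the bill's annual reporting requirement on crisis-referral frequency.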
Senate Bill 5937 addresses the implementation and operation of smart access systems in residential buildings, establishing requirements for landlords regarding tenant access and data privacy. It mandates that landlords provide alternative access methods to tenants who do not wish to use biometric identifiers or mobile applications and requires transparency about data collection and protection related to smart access systems.
| Date | Action |
|---|---|
| 2025-12-23 | Prefiled for introduction. |
Why Relevant: The bill regulates biometric identifier information, which is a primary data source for many AI-driven authentication and security systems.
Mechanism of Influence: By requiring disclosures and limiting the collection of biometric data, the law indirectly regulates the deployment and data-gathering capabilities of AI-powered facial recognition or fingerprint scanning technologies used in residential settings.
Evidence:
Ambiguity Notes: The bill does not explicitly use the term 'Artificial Intelligence,' but its focus on biometric identifiers and automated access systems covers technologies that frequently utilize AI for pattern matching and verification.
Legislation ID: 272908
Bill URL: View Bill
This bill amends the Code of West Virginia to introduce regulations concerning the use of artificial intelligence in media production. It establishes definitions for AI and AI-generated media, outlines disclosure requirements for entities producing such media, and sets forth enforcement mechanisms and civil penalties for non-compliance. The bill aims to protect consumers by mandating clear disclosures that inform them when they are engaging with AI-generated content.
| Date | Action |
|---|---|
| 2026-01-19 | Filed for introduction |
| 2026-01-19 | Introduced in House |
| 2026-01-19 | To House Judiciary |
| 2026-01-19 | To Judiciary |
Why Relevant: The bill directly mandates disclosures and regulates the use of artificial intelligence in media production.
Mechanism of Influence: It creates a legal mandate for 'Covered Entities' to disclose the use of AI in media, utilizing specific formats like on-screen watermarks for video and introductory statements for audio.
Evidence:
Ambiguity Notes: The effectiveness of the regulation may depend on the specific technical definitions of 'Materially Altered' and how 'Covered Entity' is scoped within the West Virginia Code.
House Bill 4682, known as the "Fourth Amendment Restoration Act," seeks to amend the Code of West Virginia by prohibiting law enforcement officials from utilizing specific surveillance technologies unless authorized by a warrant. The bill outlines the legislative findings regarding constitutional protections, establishes penalties for violations, and allows individuals to seek legal recourse if their rights are infringed upon by the use of prohibited technologies.
| Date | Action |
|---|---|
| 2026-01-21 | Filed for introduction |
| 2026-01-21 | Introduced in House |
| 2026-01-21 | To House Judiciary |
| 2026-01-21 | To Judiciary |
Why Relevant: The bill specifically addresses the use of artificial intelligence in the context of law enforcement surveillance and recognizes the need for regulation to protect constitutional rights.
Mechanism of Influence: It mandates a warrant based on probable cause for the deployment of AI-related surveillance tools like facial recognition, effectively regulating how government agencies can use these technologies.
Evidence:
Ambiguity Notes: While 'AI technologies' is used broadly in the findings, the specific prohibitions target facial recognition and real-time monitoring, which are common applications of AI.
Legislation ID: 284829
Bill URL: View Bill
House Bill 4770 introduces new regulations concerning the application of artificial intelligence in mental health services within West Virginia. It establishes limitations on the use of AI technology to ensure that human professionals retain responsibility for patient interactions and decisions. The bill also creates a Task Force on Artificial Intelligence to oversee the implementation of these regulations and to recommend best practices and policies related to AI use in various sectors, particularly in mental health care.
| Date | Action |
|---|---|
| 2026-01-28 | Markup Discussion |
| 2026-01-23 | Filed for introduction |
| 2026-01-23 | Introduced in House |
| 2026-01-23 | To Health and Human Resources then Finance |
| 2026-01-23 | To House Health and Human Resources |
Why Relevant: The bill mandates transparency and disclosure requirements for AI interactions.
Mechanism of Influence: It requires that patients be explicitly notified when they are interacting with an AI and necessitates written consent before AI can be used in a therapeutic context.
Evidence:
Ambiguity Notes: The bill does not specify the exact format or language required for the notification, which may lead to varying standards of disclosure.
Why Relevant: The legislation imposes strict prohibitions and operational limits on AI functionality within the healthcare sector.
Mechanism of Influence: It legally restricts AI from performing core professional tasks such as conducting psychotherapy or generating treatment plans without human review, ensuring AI remains an administrative tool rather than a decision-maker.
Evidence:
Ambiguity Notes: The distinction between 'administrative support' and 'therapeutic communication' may become blurred as AI tools become more integrated into clinical workflows.
Why Relevant: The bill creates a formal oversight body to manage AI policy and definitions.
Mechanism of Influence: The West Virginia Task Force on Artificial Intelligence is tasked with recommending definitions and best practices, which will shape future regulatory requirements and reporting standards.
Evidence:
Ambiguity Notes: While the task force focuses on public sectors and mental health, its influence on private sector AI developers in West Virginia is not fully defined.
Senate Bill 498 aims to amend the Code of West Virginia by introducing a new section that mandates age verification for access to online pornography. The bill outlines definitions, requirements for age verification systems, and penalties for non-compliance, including fines and potential legal action against websites that allow minors access to explicit content. It also provides a legal defense for compliant entities and addresses the issue of circumvention of age verification measures.
| Date | Action |
|---|---|
| 2026-01-19 | Filed for introduction |
| 2026-01-19 | Introduced in Senate |
| 2026-01-19 | To Judiciary |
Why Relevant: The bill specifically mandates that commercial entities implement biometric facial recognition technology.
Mechanism of Influence: By requiring biometric facial recognition, the law forces commercial adult websites to deploy AI-driven identification tools, thereby regulating the specific use case and operational requirements of such AI systems within the state.
Evidence:
Ambiguity Notes: The bill does not define the technical accuracy or the specific algorithmic standards required for the 'biometric facial recognition' systems, which could lead to varying interpretations of what constitutes a compliant AI verification tool.
Legislation ID: 262660
Bill URL: View Bill
This bill amends the Code of West Virginia by adding a new article that establishes a framework for banning the use of software, applications, and artificial intelligence tools owned by foreign adversaries within state agencies. It aims to enhance cybersecurity measures and protect citizens' data from potential threats posed by foreign entities.
| Date | Action |
|---|---|
| 2026-01-14 | Filed for introduction |
| 2026-01-14 | Introduced in Senate |
| 2026-01-14 | To Government Organization |
| 2026-01-14 | To Government Organization then Judiciary |
Why Relevant: The bill explicitly includes artificial intelligence tools within its scope of prohibited technologies for state agencies.
Mechanism of Influence: It creates a legal prohibition against the use of AI tools owned by foreign adversaries, effectively regulating the procurement and operational use of AI within the state government.
Evidence:
Ambiguity Notes: The term 'AI tools' is used broadly without a specific technical definition in the abstract, which could encompass a wide range of machine learning and automated systems.
Legislation ID: 129210
Bill URL: View Bill
Assembly Bill 172 provides a framework for the protection of consumer data by defining the roles of data controllers and processors, outlining consumer rights regarding their personal data, and setting penalties for violations. It aims to ensure that consumers can access, correct, delete, and control their personal data, while also imposing strict requirements on how businesses handle such data.
| Date | Action |
|---|---|
| 2026-01-28 | Executive action taken |
| 2026-01-21 | Public hearing held |
| 2026-01-16 | Withdrawn from committee on Consumer Protection and referred to committee on State Affairs pursuant to Assembly Rule 42 3(c) |
| 2025-06-24 | Fiscal estimate received |
| 2025-04-09 | Introduced by Representatives Zimmerman, Sortwell, Allen, Armstrong, Behnke, Dittrich, Duchow, Goeben, Gustafson, Knodl, Kreibich, Krug, Kurtz, Maxey, Melotik, Murphy, Mursau, Nedweski, O'Connor, Penterman, Piwowarczyk, Pronschinske, Snyder, Steffen, Tittl, Tusler, Wittke and Moses; cosponsored by Senators Quinn, Nass, Roys and Marklein |
| 2025-04-09 | Read first time and referred to Committee on Consumer Protection |
Why Relevant: The bill establishes the foundational data governance rules that apply to the datasets used to train and operate artificial intelligence systems.
Mechanism of Influence: AI developers and deployers would likely fall under the definitions of 'controllers' or 'processors,' requiring them to comply with data access, correction, and deletion requests for any personal data used within their AI models.
Evidence:
Ambiguity Notes: While the text does not explicitly use the term 'Artificial Intelligence,' the inclusion of 'biometric data' and 'sensitive data' directly impacts AI applications such as facial recognition and predictive analytics.
Legislation ID: 215723
Bill URL: View Bill
Wisconsin currently has no official language. This bill designates English as the official state language and allows state and local governments to utilize artificial intelligence or machine-assisted translation tools instead of appointing English language interpreters. It also requires that all governmental communications be conducted in English, with exceptions for individual cases or specific programs. Moreover, it protects individuals' rights to learn and use other languages for non-governmental purposes.
| Date | Action |
|---|---|
| 2026-01-16 | Read first time and referred to committee on Government Operations, Labor and Economic Development |
| 2026-01-16 | Received from Assembly |
| 2026-01-15 | Assembly Amendment 1 adopted |
| 2026-01-15 | Assembly Amendment 2 adopted |
| 2026-01-15 | Ordered immediately messaged |
| 2026-01-15 | Ordered to a third reading |
| 2026-01-15 | Read a second time |
| 2026-01-15 | Read a third time and passed, Ayes 51, Noes 45, Paired 2 |
Why Relevant: The bill explicitly addresses the deployment of artificial intelligence within government operations, specifically as a replacement for human personnel in translation and interpretation services.
Mechanism of Influence: It creates a legal framework allowing state agencies and local governments to bypass the appointment of human interpreters by providing access to AI-driven translation tools, thereby integrating AI into the state's legal and administrative infrastructure.
Evidence:
Ambiguity Notes: The bill does not define the specific technical standards, accuracy thresholds, or security requirements for the 'AI or machine-assisted translation tools' it authorizes, potentially allowing for a wide range of software applications without specific oversight.
Legislation ID: 229794
Bill URL: View Bill
This bill creates a new statute that defines facial recognition technology and establishes a prohibition against its use by state and local governmental entities. The bill aims to safeguard individual privacy by restricting the application of this controversial technology, which automatically identifies individuals by comparing facial images against a database. The only exception to this prohibition is for the identification of employees within the respective agencies for employment-related matters.
| Date | Action |
|---|---|
| 2026-01-20 | Representative Vining added as a coauthor |
| 2025-10-24 | Introduced by Representatives Clancy, Tenorio, Moore Omokunde, Cruz, Hong, Madison, Phelps and Subeck; cosponsored by Senator Larson |
| 2025-10-24 | Read first time and referred to Committee on Criminal Justice and Public Safety |
Why Relevant: Facial recognition technology is a specific application of artificial intelligence, particularly in the domains of computer vision and biometric data processing.
Mechanism of Influence: The bill creates a legal prohibition on the deployment and use of AI-driven facial recognition systems by public agencies, representing a regulatory restriction on AI usage.
Evidence:
Ambiguity Notes: The impact of the law depends on the technical breadth of the definition of 'facial recognition technology' and whether it encompasses all algorithmic matching or specific automated systems.
Legislation ID: 230695
Bill URL: View Bill
Assembly Bill 673 seeks to enhance the security and privacy of genetic information by banning the use of genetic sequencers and software developed by foreign adversaries in medical and research facilities. It also mandates that human genome sequencing data of Wisconsin residents cannot be stored in countries identified as foreign adversaries. The bill includes penalties for violations and establishes enforcement mechanisms through the attorney general.
| Date | Action |
|---|---|
| 2026-01-26 | Read first time and referred to committee on Licensing, Regulatory Reform, State and Federal Affairs |
| 2026-01-22 | Assembly Amendment 1 adopted |
| 2026-01-22 | Assembly Substitute Amendment 1 offered by Representative McGuire |
| 2026-01-22 | Decision of the Chair appealed |
| 2026-01-22 | Decision of the Chair upheld, Ayes 53, Noes 44 |
| 2026-01-22 | Ordered immediately messaged |
| 2026-01-22 | Ordered to a third reading |
| 2026-01-22 | Point of order that Assembly Substitute Amendment 1 not germane under Assembly Rule 54 (3)(f) well taken |
Why Relevant: The bill regulates 'operational or research software' used for genetic analysis, which is a field that increasingly relies on artificial intelligence and machine learning for data processing and pattern recognition.
Mechanism of Influence: By prohibiting the use of specific software from foreign adversaries, the bill restricts the deployment of AI-driven bioinformatics tools and sequencing algorithms developed by those entities within Wisconsin's medical and research infrastructure.
Evidence:
Ambiguity Notes: The legislation does not explicitly use the term 'Artificial Intelligence'; however, the broad category of 'operational or research software' used for genetic analysis typically encompasses the AI models used in modern genomics.
Legislation ID: 269954
Bill URL: View Bill
Assembly Bill 883 creates a statutory ban on the use of automatic registration plate readers, which are devices that capture and convert vehicle registration plate images into data. The bill outlines exceptions for specific uses, including parking enforcement, access control to nonpublic areas, and compliance checks for commercial vehicles at weigh stations. Data captured by these devices is restricted in terms of sharing and retention.
| Date | Action |
|---|---|
| 2026-01-16 | Introduced by Representative Gustafson |
| 2026-01-16 | Read first time and referred to Committee on Criminal Justice and Public Safety |
Why Relevant: The legislation regulates automatic registration plate readers, which are a specific application of computer vision and optical character recognition (OCR), both of which are core technologies within the field of artificial intelligence.
Mechanism of Influence: By restricting the use of these devices and the data they generate, the law effectively regulates the deployment and data lifecycle of AI-powered surveillance and automated data extraction systems.
Evidence:
Ambiguity Notes: The bill focuses on the hardware and the resulting data rather than the underlying algorithms or 'weights' of the AI models used, but the functional definition of 'converting images into data' describes an automated AI process.