
U.S. regulations on hiring with AI: a state-by-state guide

AI tools bring efficiency and objectivity, from resume screening algorithms to interview chatbots. However, they also raise critical questions about fairness, transparency, and bias. How does the law in the U.S. respond?

JamieLee Watson

JamieLee is Workable’s HR Advisor and in-house compliance expert.


AI is here to make our lives easier and more productive, and it represents one of the most transformative technological advancements of our time.

The following guide surveys regulations on hiring with AI across U.S. jurisdictions, highlighting the steps lawmakers are taking to keep AI-driven hiring fair, transparent, and accountable.

Why are AI regulations important?

Regulations on hiring with AI are crucial for ensuring that AI technologies are developed and used ethically, fairly, and transparently. 

They play a vital role in preventing bias and discrimination, protecting privacy, and securing data. By mandating measures such as bias audits and impact assessments, regulations help reduce the risk of discriminatory outcomes and promote fairness in AI decision-making processes. 

These regulations also ensure that organizations handle personal data responsibly, obtaining consent and maintaining transparency about how AI systems operate and the data they collect. This helps build public trust and confidence in AI technologies, essential for their widespread adoption.

Moreover, regulations on hiring with AI foster accountability and transparency, holding organizations responsible for the actions and decisions of their AI systems. 

Ultimately, AI regulations ensure that AI technologies contribute positively to society, aligning with ethical principles and societal values.

Does the US regulate AI?

The regulation of AI in the U.S. is fragmented and evolving, with no single federal law overseeing it comprehensively. 

Various federal agencies address AI-related issues through existing frameworks; for example, the Federal Trade Commission (FTC) handles consumer protection and antitrust concerns, while the National Institute of Standards and Technology (NIST) is developing guidelines for AI. 

At the state level, some states, like California, have enacted regulations such as the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), which influence how AI interacts with data privacy. 

Additionally, sector-specific regulations impact AI differently across industries, such as healthcare’s adherence to HIPAA. There are also ongoing legislative efforts, like the proposed Algorithmic Accountability Act, aimed at requiring companies to evaluate the impacts of their algorithms. 

Related: Compliance in AI for recruitment

State-by-state regulations of AI in 2024

In the United States, the regulatory landscape governing the use of AI in hiring is evolving to address these concerns. 

Regulatory bodies, including the Equal Employment Opportunity Commission (EEOC) and state legislatures, are beginning to establish guidelines to ensure that AI-driven hiring practices are fair, non-discriminatory, and transparent.

Understanding U.S. regulations on AI in hiring is crucial for HR professionals: compliance is non-negotiable, and organizations need to navigate an increasingly complex regulatory environment.

Below is a list of the jurisdictions that have implemented AI protections in hiring.

New York City AI regulation

Legislation: INT 1894-2020
Effective Date: January 1, 2023
Affected Employers: All Employers

Protections: Bias Discrimination
New York City’s INT 1894-2020 legislation addresses bias discrimination by mandating regular audits and transparency in the use of automated employment decision tools. This law includes several critical requirements to ensure fair and unbiased hiring practices.

Audit Required
Employers must have an independent bias audit conducted on their automated employment decision tools at least annually. These audits are essential for identifying and mitigating any discriminatory effects of AI technologies.

Penalties
If a bias audit has not been conducted within the past year, employers must stop using the automated employment decision tool until one is completed. This penalty ensures compliance and accountability in the use of AI.

Law Requirements:

  1. Bias Audits and Results:
    Employers are required to perform bias audits on their automated employment decision tools. The results of these audits must be posted publicly and must disclose selection or scoring rates across different gender and race or ethnicity categories. This transparency helps monitor and address potential biases in AI-driven hiring processes.
  2. Notice to Employees and Candidates:
    Employers must provide specific notices about the use of these tools at least 10 business days before they are used to assess a candidate. This notice must be given to employees or job candidates who reside in the city. Additionally, candidates must be given the opportunity to request an alternative selection process or accommodation. This ensures that all applicants are informed and can opt out of automated decisions if they prefer.

New York City’s INT 1894-2020 legislation enforces stringent guidelines on the use of automated employment decision tools to prevent bias discrimination. Through mandatory annual audits, public disclosure of results, and clear notices to candidates, this law promotes transparency, fairness, and accountability in AI-driven hiring practices.
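The law itself does not prescribe any code, and the precise audit methodology comes from the city's implementing rules, but the disclosure requirement above essentially boils down to comparing selection rates across demographic categories. The sketch below is a minimal Python illustration of that idea, using made-up data and hypothetical function names rather than anything mandated by INT 1894-2020.

```python
from collections import defaultdict

def selection_rates(candidates):
    """Selection rate per demographic category.

    `candidates` is a list of (category, selected) tuples, where
    `selected` is True if the tool advanced the candidate.
    This input format is hypothetical, for illustration only.
    """
    totals = defaultdict(int)
    advanced = defaultdict(int)
    for category, selected in candidates:
        totals[category] += 1
        if selected:
            advanced[category] += 1
    return {c: advanced[c] / totals[c] for c in totals}

def impact_ratios(rates):
    """Impact ratio = a category's rate divided by the highest rate."""
    top = max(rates.values())
    return {c: rate / top for c, rate in rates.items()}

# Example with made-up numbers:
data = [("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", True)]
rates = selection_rates(data)
print(rates)                 # {'group_a': 0.5, 'group_b': 1.0}
print(impact_ratios(rates))  # {'group_a': 0.5, 'group_b': 1.0}
```

A real bias audit must be performed by an independent auditor and cover the categories and metrics the city's rules require, so treat this only as a conceptual illustration of what the published selection-rate figures represent.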

Illinois AI regulation

Legislation: 820 ILCS 42
Effective Date: January 1, 2020
Affected Employers: All Employers

Protections: Video Recording of Interviews
Illinois legislation 820 ILCS 42 focuses on regulating the use of AI in the hiring process, specifically concerning the video recording of interviews. This law includes several key requirements to ensure the privacy and fairness of job applicants.

Deletion Request
Employers must delete an applicant's interview recordings within 30 days of the applicant's request. This provision helps protect the privacy of job applicants and ensures that their data is not retained longer than necessary.

Consent Required
Employers are required to obtain consent from applicants prior to recording their interviews. This ensures that candidates are aware of and agree to the recording process.

Video/Recording Restrictions
There are strict limitations on sharing applicant videos. Employers may only share these videos with individuals whose expertise or technology is essential for evaluating the candidate’s fitness for the position. This restriction safeguards the privacy of applicants and limits the exposure of their recorded interviews.

Reporting Requirements
Employers who rely solely on AI for hiring decisions must collect and report race and ethnicity data of all applicants selected for in-person interviews following AI analysis. This data must be reported annually to the Illinois Department of Commerce and Economic Opportunity. These reporting requirements are designed to monitor and address potential biases in AI-driven hiring processes.
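The statute asks for aggregate race and ethnicity data rather than a particular file format or toolchain, so how an employer compiles the annual figures is up to them. A minimal Python sketch of that aggregation, assuming a hypothetical applicant record with a race_ethnicity field, might look like this:

```python
from collections import Counter

def annual_demographics_report(interviewed_applicants):
    """Tally the race/ethnicity of applicants advanced to in-person
    interviews after AI screening, for an annual summary.

    `interviewed_applicants` is a list of dicts with a
    'race_ethnicity' field -- a hypothetical schema, not one the
    statute defines.
    """
    return dict(Counter(a["race_ethnicity"] for a in interviewed_applicants))

# Example with made-up records:
applicants = [
    {"race_ethnicity": "Hispanic or Latino"},
    {"race_ethnicity": "White"},
    {"race_ethnicity": "White"},
]
print(annual_demographics_report(applicants))
# {'Hispanic or Latino': 1, 'White': 2}
```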

Illinois’ 820 ILCS 42 legislation sets forth clear guidelines for the use of AI in the hiring process. By emphasizing consent, privacy, and accountability, this law aims to ensure that AI technologies are used fairly and ethically in employment practices.

Maryland AI regulation

Legislation: House Bill 1202
Effective Date: October 1, 2020
Affected Employers: All Employers

Protections: Facial Recognition
Maryland’s House Bill 1202 specifically addresses the use of facial recognition technology in employment interviews. This legislation introduces several important requirements to protect job applicants.

Waiver Required for AI Use
Employers must obtain a waiver with specific requirements before using facial recognition services to create a facial template during an interview. This waiver ensures that applicants are fully informed and give explicit consent for the use of this technology.

Consent Required
Consent must be obtained from applicants prior to using facial recognition technology in interviews. This requirement ensures that candidates are aware of and agree to the use of such technology.

Video/Recording Restrictions
Employers face restrictions on the use of video or recording technologies, especially concerning facial recognition services. These restrictions are in place to protect the privacy and rights of job applicants.

Reporting Requirements
Employers must report annually on their use of facial recognition technology in hiring processes. These reports help monitor and regulate the use of AI, ensuring it is used fairly and ethically.

Facial Recognition Waiver
To use facial recognition technology, employers must have applicants sign a waiver. The waiver must include the applicant’s name, date of the interview, a statement of consent, and a signature indicating that the waiver has been read and understood. This waiver process ensures transparency and informed consent from applicants.
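House Bill 1202 describes what the waiver must contain rather than how it should be recorded, so any data format is the employer's choice. As a purely illustrative sketch, the hypothetical Python structure below captures the fields listed above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FacialRecognitionWaiver:
    """The waiver contents described above, captured as a record.

    Field names are hypothetical; the statute lists what the waiver
    must contain, not how it should be stored.
    """
    applicant_name: str
    interview_date: date
    consent_statement: str  # the applicant's statement of consent
    signed: bool            # signature confirming the waiver was read and understood

# Example with made-up details:
waiver = FacialRecognitionWaiver(
    applicant_name="Jane Doe",
    interview_date=date(2024, 11, 5),
    consent_statement="I consent to the use of a facial recognition "
                      "service to create a facial template during my interview.",
    signed=True,
)
print(waiver.applicant_name, waiver.interview_date)
```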

Maryland’s House Bill 1202 sets strict guidelines for the use of facial recognition technology in job interviews. By requiring waivers, consent, and annual reporting, this legislation aims to protect the privacy and rights of job applicants while ensuring ethical use of AI in employment practices.

Colorado AI regulation

Legislation: SB 24-205
Effective Date: February 1, 2026
Affected Employers: Those that deploy “high-risk AI”

Protections: Algorithmic Discrimination
Colorado’s SB 24-205 legislation aims to combat algorithmic discrimination by regulating the deployment of high-risk AI systems. This law mandates several key requirements for employers and organizations using such AI technologies.

Risk Management Policy
Employers must implement a risk management policy to address the potential risks associated with high-risk AI deployments. This policy is designed to ensure that AI systems are used responsibly and ethically.

Impact Assessments
Impact assessments must be conducted annually and within 90 days after any intentional and substantial modification to the AI system. These assessments help evaluate the implications and performance of the AI systems, ensuring they do not perpetuate biases or discrimination.
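SB 24-205 does not dictate how deployers track these deadlines, but the cadence described above (annually, plus within 90 days of a modification) is easy to express as a simple scheduling helper. The Python sketch below is a simplified, hypothetical illustration, not a compliance tool:

```python
from datetime import date, timedelta
from typing import Optional

def next_assessment_due(last_assessment: date,
                        last_modification: Optional[date] = None) -> date:
    """Earliest upcoming impact-assessment deadline under the cadence
    described above: annually, and within 90 days of a modification.

    A simplified, hypothetical helper; the statute's actual triggers
    and definitions are more detailed.
    """
    annual_due = last_assessment + timedelta(days=365)
    if last_modification is not None:
        modification_due = last_modification + timedelta(days=90)
        return min(annual_due, modification_due)
    return annual_due

# Example with made-up dates:
print(next_assessment_due(date(2026, 2, 1), date(2026, 6, 1)))
# 2026-08-30 -- the 90-day post-modification deadline comes first
```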

Discrimination Reporting
Organizations must report any instances of algorithmic discrimination to the State Attorney General within 90 days. This requirement ensures transparency and accountability in the use of AI technologies.

Video/Recording Restrictions
There are no specific restrictions on video or recording related to AI deployment in this legislation.

Information to Consumers
Consumers must be notified about the deployment of AI systems. Additionally, employers and organizations must publish statements disclosing the AI systems they deploy and the information collected by these systems. This transparency allows consumers to be aware of the AI technologies affecting them.

Consumer Corrective Measures
Consumers are given the right to correct any incorrect personal data processed by an AI system. They also have the ability to appeal adverse consequential decisions made by AI. This provision ensures that individuals can address and rectify errors or biases that may arise from AI decisions.

Colorado’s SB 24-205 is a comprehensive legislative measure aimed at regulating high-risk AI deployments to prevent algorithmic discrimination. It emphasizes transparency, accountability, and consumer rights, ensuring that AI technologies are used responsibly and ethically.

The landscape of regulations on hiring with AI is still taking shape, with significant developments expected in the coming months and years. As AI technologies rapidly evolve and become more pervasive, the need for comprehensive regulatory frameworks grows more urgent. 

Early adopters like Colorado, Illinois, Maryland, and New York City are setting the groundwork by implementing measures to ensure ethical use, fairness, and transparency in AI applications.

Disclaimer: This article is meant to provide general guidelines and should be used as a reference. It may not take into account all relevant local, state or federal laws and is not a legal document. Neither the author nor Workable will assume any legal liability that may arise from the use of this guide.
