AI in HR: What’s Changing This Year

Artificial intelligence is no longer something your human resources team has to think about “eventually.” It’s something you need to deal with now, as it creates both compliance risks and new opportunities.

In 2025, HR teams adopted AI for hiring, screening, scheduling, and workforce management.

In 2026, those businesses will build on these use cases, and many more organizations will join them.

However, regulatory bodies at the local, state, and federal levels are still playing catch-up. This year, you could face new disclosure rules, bias-audit requirements, and other scrutiny, especially at the state level.

If your business is still in the early stages of AI HR adoption, don’t let these new regulations deter you. Incorporating artificial intelligence technologies into human resources is essential for staying efficient and competitive. The organizations that thrive will learn to balance AI innovation with disciplined compliance and ethical guardrails.
 

AI Regulation Is Moving Faster (and Closer) to HR Teams

Initially, regulators provided little to no guidance regarding how businesses should use AI in HR workflows. Then, they shifted to broad guidance. Now, they are working on enforceable laws.

The most visible example is New York City Local Law 144 of 2021, which regulates the use of automated employment decision tools in hiring and promotion. The law requires employers to subject those tools to annual independent bias audits and to clearly notify candidates when AI tools are used in employment decisions.

Other states are following suit. California lawmakers have been working on AI-focused employment legislation. Federal agencies have issued guidance reinforcing that existing civil rights laws apply to AI hiring, retention, and employment-related decision-making technologies.

Compliance Tip: Maintain a list of each jurisdiction where your business uses AI tools and which state or local laws apply.
 

Bias Audits Are Becoming a Baseline Expectation

Bias audits are becoming mandatory in some jurisdictions. Even where they aren't legally required, they may be requested during investigations or litigation involving allegations of unfair hiring practices.

A bias audit evaluates whether an AI system disproportionately impacts a protected class based on one or more of the following:

  • Age
  • Race
  • Gender
  • Religion
  • Any other legally protected characteristic
     

Regulators expect your audits to be independent and repeatable.
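To make the audit concept concrete, here is a minimal sketch of the core calculation behind many bias audits, including those under NYC Local Law 144: compare each group's selection rate to the highest group's rate (the "impact ratio"). The applicant counts and group names below are hypothetical, and a real audit involves far more than this single metric.

```python
# Minimal sketch: selection rates and impact ratios.
# The counts below are hypothetical illustration data.

def impact_ratios(selected, applied):
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

selected = {"group_a": 40, "group_b": 24}   # candidates advanced by the tool
applied = {"group_a": 100, "group_b": 100}  # candidates screened

ratios = impact_ratios(selected, applied)
for group, ratio in ratios.items():
    # A ratio below 0.80 (the EEOC "four-fifths" rule of thumb)
    # is a common flag for potential disparate impact.
    flag = "review" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

An independent auditor will run this kind of analysis across every legally protected characteristic, which is why you need the underlying applicant data retained and accessible.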

Compliance Tip: If your vendor conducts audits, ask them to send you a written report, as well as a breakdown of their validation methodologies. Confirm who is legally responsible if an issue is discovered.
 

Transparency Is Now a Legal and Cultural Requirement

Candidates want to know whether you are using AI to make hiring decisions, and so do your existing employees. Who can blame them? After all, their careers are at stake.

Compliance Tip: Create standardized AI disclosures to build trust and protect your business from claims of biased hiring.
 

Ethical AI Use Requires Internal Governance, Not Just Tools

Ethical AI isn’t achieved through software alone. Your human resources team needs documented internal guidelines that outline how AI can and cannot be used in employment decisions. Give your HR staff clear rules to play by so they can support the organization’s compliance posture.

Compliance Tip: Get multiple teams involved to promote cross-functional ownership for AI governance and revisit policies annually as tools and laws evolve.
 

Vendor Risk Is Now Employer Risk

HR teams increasingly rely on vendors for screening, scheduling, and workforce analytics. They are also turning outward for compliance automation tools. But regulators are clear that outsourcing artificial intelligence does not insulate the business from liability.

Choose your vendors wisely and carefully review contracts to ensure they address key areas of concern.

Compliance Tip: Review AI vendor contracts with legal counsel and require written assurances of compliance with applicable employment laws.
 

Preparing for State-Level Audits and Disclosure Requests

States and municipalities are increasingly empowered to request documentation related to AI use in employment decisions. Your HR team should be prepared to demonstrate:

  • Where AI is used
  • How decisions are made
  • What safeguards are in place
  • How compliance is monitored
     

Centralized, real-time compliance tools help your staff members respond confidently without scrambling to track down documents.

Compliance Tip: Maintain a centralized record of AI tools, policies, disclosures, and audit reports to support fast, accurate responses to regulators.
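As a rough illustration of what "centralized record" can mean in practice, here is a sketch of a structured AI-tool inventory. Every field name, tool, vendor, and path below is a hypothetical example, not a prescribed schema; the point is that structured records let you answer a regulator's "where is AI used?" question in one query.

```python
# Sketch of a centralized AI-tool inventory (all values hypothetical).
import json

inventory = [
    {
        "tool": "ResumeScreener",           # hypothetical tool name
        "vendor": "ExampleVendor Inc.",     # hypothetical vendor
        "used_for": "resume screening",
        "jurisdictions": ["NYC", "CA"],
        "last_bias_audit": "2025-11-01",
        "disclosure_posted": True,
        "audit_report_path": "audits/resume_screener_2025.pdf",
    },
]

# Example query: which tools are in use in a given jurisdiction?
nyc_tools = [record["tool"] for record in inventory
             if "NYC" in record["jurisdictions"]]
print(json.dumps(nyc_tools))
```

Whether you keep this in a spreadsheet, a database, or a compliance platform matters less than keeping it current and in one place.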
 

Turning Winter Into Readiness

Artificial intelligence can absolutely help your HR team move faster and work smarter, but only when you back it up with clear compliance processes and sound ethics. Winter challenges engagement and momentum, but the start of 2026 presents a prime opportunity to hit the ground running.

VirgilHR helps HR teams stay ahead of AI-related employment law changes with timely guidance. Our staff members help insulate you from risk while giving you the tools you need to make informed decisions that drive the business forward.

Schedule a demo today to see how VirgilHR can support compliant, people-first HR decisions all year long.
 

Sources:

https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page