As many as 83% of employers, and up to 99% of Fortune 500 companies, use some type of automated tool in their hiring processes, according to the Equal Employment Opportunity Commission (EEOC).
Artificial intelligence (AI) is rapidly transforming the way many business functions operate, and HR departments aren’t immune. AI is being used to automate tasks, improve decision-making, and provide personalized experiences for employees; however, as with any new technology, there are compliance issues that need to be addressed when using AI in HR. And we all know the HR department is certainly no stranger to compliance.
Compliance challenges can vary depending on the specific AI application, but some common concerns include:
Bias: AI algorithms are trained on data, and if that data is biased, the algorithm will be biased as well. This could lead to discrimination against certain groups of people.
Privacy: AI systems collect and analyze a lot of data about employees. This data could be used to track employees’ behavior, monitor their productivity, or even predict their future performance. This raises privacy concerns about how this data will be used and who will have access to it.
Transparency: AI systems are often complex and opaque, making it difficult to understand how they make decisions. That opacity complicates compliance with laws and regulations, such as those governing equal employment opportunity and data privacy.
Accountability: When AI systems make decisions that have a negative impact on employees, it can be difficult to hold anyone accountable. This is because the decisions are made by the AI system, not by a human being.
Potential HR Compliance Snares
Navigating the world of AI in HR can sometimes feel like walking a tightrope. Take the algorithmic bias challenge mentioned above. If an AI background check tool is trained on data that includes individuals’ criminal records, there’s a risk of inadvertently discriminating against candidates based on race, color, or national origin, in violation of Title VII of the Civil Rights Act of 1964.
It’s also important to remember that many states have adopted “ban the box” laws, which limit how employers can consider a person’s criminal history in making employment decisions.
The balancing act here is ensuring your AI tool helps you make fair and unbiased hiring decisions while keeping you compliant with the law.
Another consideration when hiring is trying to find employees who are just like your current all-stars. Who wouldn’t want to hire a team of super performers? So, you let an AI tool do the heavy lifting, analyzing data about your top performers. Now, imagine if the data used comes from a department largely populated by White men. Despite instructions to avoid selecting based on race or gender, the AI may still develop a bias towards these demographics due to the data it’s been fed.
Then, there’s a different but equally important form of bias, one that centers on access for candidates with disabilities. For instance, if an AI assessment is used during the hiring process, it might present challenges for applicants with visual or auditory impairments.
Here’s the key takeaway: Employers need to be aware of how their AI tools could potentially affect people with disabilities. The EEOC’s May 2022 guidance on AI and the Americans with Disabilities Act is full of useful examples and pitfalls to watch for.
Applicants should be informed about the type of AI-powered tool being used in candidate selection and the criteria it’s assessing. This level of openness in your job descriptions ensures that applicants know what they’re getting into, promoting a fair and equitable hiring process.
Businesses and AI Discrimination Controversies
Discrimination resulting from the use of AI has become a real hot-button issue over the past several years, with lawsuits and negative news articles popping up left and right.
Workday Inc., a maker of AI applicant-screening software, is in the middle of a class action lawsuit alleging that its products promote hiring discrimination. The suit, filed in February 2023, claims that Workday engaged in illegal age, disability, and race discrimination by selling its customers applicant-screening tools that rely on biased AI algorithms.
HireVue uses AI to assess job candidates via comparison to more than a million video interviews. The AI is designed to determine how a candidate will perform as a successful employee based on their mannerisms—such as gestures, tone of voice, and cadence—during the interview, as well as their responses. The result is what HireVue calls an “employability score,” which recruiters and managers use to determine which applicants move forward in the hiring process. The problem? The tool could be biased against people with disabilities who might not produce the “right” gestures, tone, or cadence compared to the ideal interview videos.
And, as far back as 2014, Amazon was at the forefront of developing a recruiting platform that used AI. Developers trained the tool to evaluate job candidates by recognizing patterns in the resumes of people who were hired over a 10-year period. It scanned the resumes and scored them on a scale of 1 to 5 stars (just like an Amazon review). There was one major flaw in the system, however. Most of those resumes were submitted by men, meaning the tool was automatically biased against women by virtue of the data that it was trained on.
How to Address AI Compliance Issues in HR
As you can see, bias can be a big challenge with AI systems. Businesses can take a number of steps to address potential compliance pitfalls when it comes to the use of AI in HR. These steps include:
Conducting a risk assessment. A risk assessment will help you identify the potential compliance issues associated with the specific AI application you’re considering. It should cover the type of data the AI system will collect, the types of decisions it will make, and the potential impact of those decisions on employees.
Assessing the likelihood and impact of each risk. Once you have identified the potential risks, assess how likely each one is to occur and how severe its impact would be. This helps you prioritize the risks and focus your mitigation efforts on the most significant ones (see the short risk-register sketch below).
Developing mitigation strategies. Once you have assessed the likelihood and impact of each risk, you need to develop mitigation strategies. These strategies should be designed to reduce the likelihood and/or impact of each risk.
Implementing the mitigation strategies. Once you have developed mitigation strategies, you need to implement them. This may involve making changes to the AI system, the data that the system collects, or the way that the system makes decisions.
Monitoring the effectiveness of the mitigation strategies. After implementing the mitigation strategies, you need to monitor their effectiveness. This will help ensure that the strategies are working as intended and that they are truly reducing or eliminating the risks.
Reviewing and updating the risk assessment on a regular basis. The risks associated with AI change over time: as the AI system is used more, it may collect more data and make more decisions, and the risks it poses may grow. Regular reviews keep the assessment current and focused on the most significant risks.
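To make the prioritization step concrete, here’s a minimal risk-register sketch that scores each risk by likelihood times impact. The risks, scores, and 1–5 scale are all hypothetical, a starting point rather than a prescribed methodology.

```python
# Hypothetical risk register: each risk scored 1-5 for likelihood and impact.
risks = [
    {"risk": "Biased screening recommendations", "likelihood": 4, "impact": 5},
    {"risk": "Over-collection of employee data",  "likelihood": 3, "impact": 4},
    {"risk": "Opaque decision criteria",          "likelihood": 4, "impact": 3},
]

# Prioritize by likelihood x impact so mitigation effort goes to the top risks.
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f'{r["likelihood"] * r["impact"]:>2}  {r["risk"]}')
```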
Employing safeguards. Once you have identified the potential risks, you can implement safeguards to mitigate those risks. These safeguards may include things like:
- Using a diverse dataset to train the AI system. A more diverse dataset will help to ensure that the AI system is not biased against any particular group of people.
- Implementing privacy protections. This includes protections such as encrypting data, limiting access to data, and providing employees with the ability to access and correct their data.
- Making the AI system more transparent. Provide employees with information about how the AI system works and how it makes decisions.
- Holding the AI system accountable for its decisions. A human should review the AI system’s results or evaluations, and a process should be in place for employees to appeal the AI system’s decisions.
- Conducting regular system audits. This will help ensure the system is working as intended and isn’t causing unintended consequences; a common starting point is an adverse-impact check like the one sketched after this list.
- Creating a clear policy for the use of AI in HR. The policy should outline the purpose of AI in HR, the types of decisions that the AI system will make, and the safeguards that will be in place to protect employees.
- Training employees on the use of AI in HR. Successful training will help employees understand how the AI system works and how it can be used to their benefit.
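As one concrete example of a regular audit, many teams start with an adverse-impact check based on the EEOC’s four-fifths rule of thumb, which flags any group whose selection rate falls below 80% of the highest group’s rate. The sketch below is a minimal illustration, not a compliance determination: the outcome data and function names are hypothetical, and a real audit should involve counsel and a qualified analyst.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the EEOC 'four-fifths' rule of thumb)."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items() if rate / top < threshold}

# Hypothetical screening outcomes: (demographic group, advanced to interview)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]
print(four_fifths_check(outcomes))  # {'B': 0.375} -> B's rate is 37.5% of A's
```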
Monitoring the system. Once the AI system is in place, it’s critical to monitor it to ensure that it is working as intended and not causing any unintended consequences. This monitoring should include items such as:
- Data collection and usage. Make sure that the AI system is only collecting and using data that is necessary for its intended purpose.
- Decision-making. Make sure that the AI system is making decisions in a fair and equitable manner.
- Accountability. Make sure that there are clear procedures in place for holding the AI system accountable for its decisions, including human review of results and a way for employees to appeal (see the decision-logging sketch after this list).
- Bias. Make sure that the AI system is not biased against any particular group of people.
- Security. Make sure that the AI system is secure and that the data that it collects is protected by using encryption, firewalls, and other security measures.
- Compliance. Ensure that the AI system complies with all applicable laws and regulations. This may include laws governing equal employment opportunity, data privacy, and other areas.
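For the accountability and decision-making items above, one practical pattern is to log every automated decision alongside its inputs and flag adverse outcomes for human review, so no rejection rests on the model alone. This is a minimal sketch assuming a hypothetical scoring model, threshold, and log file name.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_decisions.jsonl"  # hypothetical append-only audit trail

def record_decision(candidate_id, features, score, threshold=0.5):
    """Log an AI screening decision and flag it for human review when
    the outcome is adverse to the candidate."""
    decision = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "inputs": features,          # what the model saw
        "score": score,              # what the model produced
        "advanced": score >= threshold,
        "needs_human_review": score < threshold,  # adverse outcomes get a second look
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(decision) + "\n")
    return decision

# Example: a hypothetical model score for one applicant
print(record_decision("cand-042", {"years_experience": 6}, score=0.41))
```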
By taking these steps, businesses can help to ensure that they are using AI in HR in a way that complies with all applicable laws and regulations.
States with Artificial Intelligence Laws
Several states, along with New York City, already have laws on the books that touch on the use of artificial intelligence and personal data in the workplace:
| State | Law | Description |
| --- | --- | --- |
| California | California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA) | Gives consumers, including California employees and job applicants, the right to know what personal information is collected about them, how it is used, and who it is shared with, plus rights to delete and correct it. The CPRA also directs state regulators to issue rules governing businesses’ use of automated decision-making technology. |
| Colorado | Colorado Privacy Act (CPA) | Gives consumers rights to access, correct, and delete their personal data and to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects, a category that includes employment decisions. |
| Illinois | Biometric Information Privacy Act (BIPA) and Artificial Intelligence Video Interview Act | BIPA prohibits the collection, use, or disclosure of biometric identifiers (fingerprints, face scans, voiceprints) or information derived from them without informed consent. The AI Video Interview Act requires employers to notify applicants, explain how the AI works, and obtain consent before using AI to analyze video interviews. |
| Maryland | House Bill 1202 | Prohibits employers from using a facial recognition service to create a facial template during a job interview unless the applicant signs a consent waiver. |
| New Jersey | New Jersey Data Privacy Act (NJDPA) | Gives consumers rights to access, correct, and delete their personal data and to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects. |
| New York | New York City Local Law 144 | Requires employers using automated employment decision tools to obtain an annual independent bias audit, publish a summary of the results, and notify candidates that the tool is being used. (A statewide New York Privacy Act has been proposed but not enacted.) |
| Texas | Texas Data Privacy and Security Act (TDPSA) | Gives consumers rights to access, correct, and delete their personal data and to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects. |
| Utah | Utah Consumer Privacy Act (UCPA) | Gives consumers rights to access and delete their personal data and to opt out of targeted advertising and the sale of their data. |
| Virginia | Virginia Consumer Data Protection Act (VCDPA) | Gives consumers rights to access, correct, and delete their personal data and to opt out of targeted advertising, data sales, and profiling in furtherance of decisions that produce legal or similarly significant effects. |
| Washington | My Health My Data Act | Requires consumer consent before collecting or sharing consumer health data, a category that includes biometric data. (The broader Washington Privacy Act was proposed repeatedly but never enacted.) |
Confidentiality & Data Privacy
Keeping your business’s confidential information secure is a top priority. In the era of AI tools like ChatGPT, it’s important to consider potential risks in data privacy and confidentiality.
Although tools like ChatGPT aren’t designed to retain specific details from individual conversations, the text you enter may be stored and, depending on your settings, used to train future models. While most AI tools offer a measure of data security, complete confidentiality in internet-based interactions can’t be guaranteed. This means your proprietary, confidential, or trade secret data could be exposed if it’s shared in a conversation with an AI tool.
As an extra precaution, it’s wise to include a clause in your employee confidentiality agreements and policies. This clause should prohibit employees from discussing or inputting sensitive company information into AI chatbots or large language models such as ChatGPT.
And because ChatGPT draws on a vast amount of online data to produce its responses, there could be cases where employees receive output that reproduces material that is copyrighted, trademarked, or the intellectual property of another party. This poses a legal risk for your organization.
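One practical safeguard that complements a confidentiality clause is to redact sensitive data before it ever reaches an external chatbot. Here’s a minimal sketch assuming a few simple regex patterns; the patterns and placeholders are illustrative, and production systems typically rely on dedicated PII-detection tools rather than hand-rolled rules.

```python
import re

# Hypothetical patterns; real deployments use dedicated PII-detection tools.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # likely card numbers
]

def redact(text: str) -> str:
    """Replace sensitive substrings before text leaves the company network."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize: Jane (jane.doe@example.com, SSN 123-45-6789) was hired."
print(redact(prompt))
# Summarize: Jane ([EMAIL], SSN [SSN]) was hired.
```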
How Paycor Helps
Informed, cautious use of AI technology, whether for resume evaluation, candidate assessment, or social media posts, will help keep your business secure and legally compliant as these tools continue to evolve.
Artificial intelligence can help reduce budget waste in HR and recruiting departments by automating sourcing and engagement, bringing in extra interviews, and freeing up recruiters’ time. If you’re interested in dipping a toe into AI, a good place to start is Paycor Smart Sourcing. Learn more about how AI recruiting can help you optimize your time, budget, and team.