Organizations are increasingly turning to artificial intelligence tools in the recruitment process. One of the main reasons for this is their potential to contribute to diversity, equity and inclusion by removing the element of human bias from the equation during the initial vetting process.
However, there are many questions and concerns when it comes to AI’s actual effectiveness in this area. In fact, in a paper published in Philosophy and Technology, two professors from the University of Cambridge’s Centre for Gender Studies argued that AI tools may have the opposite result. AI tools and programs “may entrench cultures of inequality and discrimination by failing to address systemic problems within organizations,” they said.
Francesca Profeta, a research analyst at SIA, describes how the use of these tools needs to evolve. “Despite technological advancements and progression made within AI, a one-size-fits-all recruitment process is unlikely to be successful when considering candidates from a diverse talent pool,” Profeta says. “Regardless of the sophistication of said technology, the outcome is only as good as the data on which the algorithm is being trained.”
Profeta points to data from the World Economic Forum that states about 78% of global professionals with AI skills are male. “For the solution to be completely unbiased, it must incorporate diverse perspectives to mitigate these forces.” Additionally, she notes that AI tools might be inclined toward a group of people solely based on sex, gender or age, but other groups may be negatively affected by its use as well. “We are losing sight of physical and non-visual disabilities,” she says. “Would AI make reasonable adjustments for candidates during the interview process or provide reassurance or advice as a great recruiter would?”
AI should be a supportive tool in the process, but people still need to be involved, says Ben Schiller, senior marketing manager at ConverzAI, an AI-based candidate engagement platform. “Organizations should maintain human decision-making in their recruitment processes, while AI technologies exist to support those decisions with factual and current data.”
The Contingent Connection
CW program managers whose programs or staffing providers engage AI in their processes should “learn where AI is being implemented and how it interacts with candidates or operates on candidate data,” Schiller advises. “Adopt AI technologies that are built to prioritize candidate experience and that do not have any biases.”
To address some of these concerns, governments and organizations around the world are developing regulatory frameworks and codes aimed at ensuring recruitment processes are fair and nondiscriminatory.
Contingent workforce program managers and staffing providers that plan to implement AI tools into their systems should keep up to date on existing and developing regulatory frameworks.
Ethical code. For example, the World Employment Confederation announced in March that its members have agreed to a set of principles to guide the deployment of AI in the recruitment and employment industry. The “Code of Ethical Principles in the Use of Artificial Intelligence” is a living set of principles that will be adapted as AI evolves.
“Fairness, nondiscrimination, diversity, inclusiveness and privacy — principles that WEC members also abide by in their overall practice of HR services — are also principles to be followed to guarantee ethical use of AI in recruitment and employment,” the WEC states. “As for the principles enshrined in WEC’s overall Code of Conduct, WEC members have a duty to apply those ethical principles in their use of AI.”
John W. Healy, VP and chair of the taskforce on digitalization at the WEC, says, “As there is a variety of governance frameworks related to AI-based systems, we began by seeking the collective guidance of our membership, both amongst the global corporate members of the World Employment Confederation as well as the individual National Federations within our membership.
“From there, we extended the conversation to include the position of a wide array of our commercial partners (many of whom operate within the HR technology and education technology sectors) as well as the many national and international policy-making organizations who also are exploring the role of AI in the process of connecting individuals with work,” Healy adds.
Legislative Action
Meanwhile, new laws and proposals are focusing on candidate screening tools, with some guidance deeming these tools to be high risk and requiring a conformity assessment before they can be used.
Globally, national and local governments have begun to adopt strategies and issue guidelines for the ethical use of AI. According to the “Using AI: Risks and Challenges” report published by SIA in 2022, these will soon be followed by legislation, with the EU and China leading the way.
Here are some of the recent regulatory and policy actions on AI and AI-driven discrimination being taken by governments and organizations worldwide.
United States. Earlier this year, the US Equal Employment Opportunity Commission turned its attention to AI tools. The draft Strategic Enforcement Plan covering fiscal years 2023-2027 states that the EEOC will focus on recruitment and hiring practices and policies that discriminate against racial, ethnic and religious groups; older workers; women; pregnant workers and those with pregnancy-related medical conditions; LGBTQ+ individuals; and people with disabilities.
These include the “use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.”
Furthermore, the plan states that the agency will also focus on “screening tools or requirements that disproportionately impact workers based on their protected status, including those facilitated by artificial intelligence or other automated systems, pre-employment tests and background checks.”
The EEOC’s guidance noted how AI use in the recruitment process could lead to discrimination.
In 2022, the White House also published the “Blueprint for an AI Bill of Rights,” a framework with a set of principles to help guide the use of automated systems and to protect the public. It includes guidance on algorithmic discrimination protections.
Additional national legislation includes the Algorithmic Accountability Act of 2022, which was introduced in both houses of Congress in February 2022. In response to reports that AI systems can lead to biased and discriminatory outcomes, the proposed legislation would direct the Federal Trade Commission to create regulations that mandate “covered entities,” including businesses meeting certain criteria, to perform impact assessments when using automated decision-making processes. This would specifically include those derived from AI or machine learning.
Elsewhere, in a recent notable development, officials in New York City are reviewing Local Law 144, which requires employers and employment firms that use automated employment decision tools (AEDT) within the city to conduct independent audits of such tools for bias and provide disclosures to candidates and employees at least 10 business days prior to using AEDT. The Department of Consumer and Worker Protection finalized the rule in April 2023, with enforcement scheduled to begin July 5.
Canada. In June 2022, the government of Canada proposed the Artificial Intelligence and Data Act as part of Bill C-27, the Digital Charter Implementation Act. The proposed act would set the foundation for the responsible design, development and deployment of AI systems that impact the lives of Canadians. Under the AIDA, businesses would be held responsible for the AI activities under their control. They would be required to implement new governance mechanisms and policies that consider and address the risks of their AI systems and give users enough information to make informed decisions.
The bill is currently undergoing a second reading in the House of Commons.
European Union. The AI Act, a proposed European law that would apply to all 27 member states, assigns applications of AI to three risk categories.
United Kingdom. In the UK, the government in late March published a white paper on AI in which it takes the view that “rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances. Instead, the principles will be issued on a non-statutory basis and implemented by existing regulators.”
“Existing regulators will be expected to implement the framework underpinned by five values-focused cross-sectoral principles: 1. Safety, security and robustness; 2. Appropriate transparency and explainability; 3. Fairness; 4. Accountability and governance; and 5. Contestability and redress.”
The government adds that “without regulatory oversight, AI technologies could pose risks to our privacy and human dignity, potentially harming our fundamental liberties.”
John Buyers, head of AI at the law firm Osborne Clarke, commented on the white paper, telling CNBC that the move to delegate responsibility for supervising the technology among regulators risks creating a “complicated regulatory patchwork full of holes.”
“The risk with the current approach is that a problematic AI system will need to present itself in the right format to trigger a regulator’s jurisdiction, and moreover, the regulator in question will need to have the right enforcement powers in place to take decisive and effective action to remedy the harm caused and generate a sufficient deterrent effect to incentivize compliance in the industry,” Buyers told CNBC via email.
Japan. In Japan, the Ministry of Economy, Trade and Industry issued its Governance Guidelines for Implementation of AI Principles Ver. 1.1 in 2021. Updated in 2022, the paper states, “While the discussion on AI governance is developing in Japan and around the world, it is not easy to design actual AI governance.”
It adds, “Legally binding horizontal requirements for AI systems is deemed unnecessary at the moment. Even if discussions on legally binding horizontal requirements are held in the future, risk assessment should be implemented in consideration of not only risks but also potential benefits.”
China. Meanwhile, the China Academy of Information and Communications Technology issued the White Paper on AI Governance, which lays out ethical standards for using AI, including that algorithms should protect individual rights. The paper proposed that “AI should treat all users equally and in a non-discriminatory fashion and that all processes involved in AI design should also be nondiscriminatory.”
It adds, “AI must be trained using unbiased data sets representing different population groups, which entails considering potentially vulnerable persons and groups, such as workers, persons with disabilities, children and others at risk of exclusion.”
As organizations globally look to implement AI tools in their recruitment processes, contingent workforce program managers and staffing providers should keep up to date on the latest imposed and in-progress regulations on AI.
Danny Romero is an associate editor at Staffing Industry Analysts.