
Ensuring Fairness in AI-Assisted Hiring: Lessons Learned from the NYC Bias Audit Law

20/11/2024
As businesses increasingly leverage AI to streamline recruitment, quickly sifting through vast applicant pools, a clear-eyed view of fairness is essential. New York City’s Bias Audit Law, also known as Local Law 144, is an unprecedented regulation that fosters fairness by mandating bias audits of AI-assisted hiring tools. It marks a crucial step towards ensuring AI does not reinforce societal biases but instead upholds equitable hiring standards, enabling companies to build trustworthy hiring practices that serve both business goals and social equity. However, ensuring fairness in AI-assisted hiring requires businesses to address specific testing considerations: going beyond “tick-the-box” compliance to capture real business benefits demands know-how and advanced AI testing capabilities.

The Necessity for Fairness in AI-Assisted Hiring

AI-assisted hiring tools may be affected by inherent biases and a lack of fairness across demographics. Common AI biases include historical data bias, where past hiring patterns shape what the model learns, and algorithmic bias, where inadequately tested algorithms produce unintended skewed outcomes. Demographic unfairness, in turn, may disadvantage groups of a particular age, gender, race, or socioeconomic background, unintentionally excluding qualified candidates.
If unchecked, these biases can lead to significant business risks:
    1. Violating employment discrimination laws can result in lawsuits and regulatory fines, as seen in cases pursued by the Equal Employment Opportunity Commission (EEOC).
    2. Discriminatory practices can harm a company’s reputation, eroding customer, employee, and stakeholder trust.
    3. Homogeneous workforces can limit creativity and innovation, affecting adaptability and growth.
    4. Biased hiring decisions can increase turnover and operational costs due to higher recruitment and training expenses.
    5. Unfair hiring practices may demotivate current employees, decreasing morale and productivity.
These risks prompted the enactment of the NYC Bias Audit Law, which requires employers to proactively monitor their AI tools. Besides NYC Local Law 144, the EU AI Act also emphasizes fairness and transparency in employment-related AI systems, classifying them as “high-risk.”

Lessons Learned from the NYC Bias Audit Law

Despite the legislature’s good intentions, challenges tend to hide in the implementation details, and NYC Local Law 144 is no exception.
Studies have unearthed several issues that legislators and businesses need to consider.
    1. Ambiguities in definitions: The law lacks clear definitions for critical terms such as “automated employment decision tools” (AEDTs) and “independent auditors.” This vagueness has led to varied interpretations and inconsistent application among employers and auditors. Moreover, although the Law is named the Bias Audit Law, it is in essence concerned with demographic fairness.
    2. Data access and methodological issues: Auditors have faced difficulties accessing the necessary data and selecting appropriate methodologies to assess bias, leading to inconsistencies in audit quality and effectiveness.
    3. Audit controls suitability: The auditing and testing controls required to satisfy the Law’s criteria are basic and do not comprehensively cover the fairness dimension.
Because of these limitations, the law’s intended benefits for job applicants have been diminished. The problem is amplified by the difficulty applicants face in accessing and understanding audit reports and transparency notices.

Measures that Promote Fairness in AI Hiring Tools

Organizations must adopt proactive measures to enhance fairness and mitigate bias in AI-assisted hiring. Key strategies are:
  • Developing a Comprehensive AI Policy: Creating a trustworthy AI policy establishes guidelines for responsible AI usage in HR. This policy should outline fairness, transparency, and accountability principles, providing a framework for continuous improvement.
  • Conducting Regular Bias Tests & Audits: Regular audits are essential to detect and address biases in AI tools. Model drift or even small data changes may necessitate model retraining or a shift in approach. It is equally important to check for demographic imbalances that may affect a model’s performance, since their impact is often too subtle to register as outright model drift. For companies under strict compliance requirements, third-party audits might be required semi-annually, with internal monthly checks becoming a standard part of business operations rather than an emergency procedure (see the sketch after this list).
  • Diversifying Training Data: AI models trained on diverse datasets are less likely to produce biased outcomes. Employers should ensure that the data used to train AI reflects the diversity they seek in their workforce.
  • Implementing Transparency in AI Decision-Making: Transparency allows for more precise insight into AI decisions, fostering trust among applicants. Employers can implement explainable AI solutions, helping applicants understand why they were selected and reducing the opacity often associated with automated hiring.
  • Maintaining Human Oversight: Integrating human review in AI-driven decisions ensures that the nuances AI may miss are addressed. Human oversight catches any biases that AI might inadvertently perpetuate.
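As a rough illustration of what such a routine internal check could look like, the sketch below compares one period’s per-group selection rates against an audited baseline and flags unexpected shifts. The pandas-based layout, the column names (“race”, “selected”), and the tolerance threshold are all illustrative assumptions, not a prescribed methodology.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of applicants with a positive outcome, per demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def drift_report(baseline: pd.DataFrame, current: pd.DataFrame,
                 group_col: str = "race", outcome_col: str = "selected",
                 tolerance: float = 0.05) -> pd.DataFrame:
    """Compare this period's per-group selection rates against an audited
    baseline and flag groups whose rate shifted by more than `tolerance`."""
    base = selection_rates(baseline, group_col, outcome_col)
    cur = selection_rates(current, group_col, outcome_col)
    report = pd.DataFrame({"baseline_rate": base, "current_rate": cur})
    report["shift"] = (report["current_rate"] - report["baseline_rate"]).abs()
    report["flagged"] = report["shift"] > tolerance
    return report
```

A flagged group in such a report would not prove unfairness on its own; it is a trigger for the deeper analysis discussed below.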

Beyond Ticking the Box: Comprehensive Fairness and Bias Testing

The 4/5 Rule

A commonly used statistical principle for fairness, the 4/5 Rule is helpful for identifying explicit biases but can oversimplify bias analysis: by the time the 4/5 Rule flags a fairness issue, the bias is typically so pronounced that an analyst would spot it at a glance in the reported results. While the Law references the 4/5 Rule for its accessibility and as a foundational step in fairness assessments, it does not suffice for a comprehensive fairness analysis.
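For concreteness, here is a minimal sketch of the rule’s arithmetic: each group’s selection rate is divided by the most selected group’s rate, and any ratio below 0.8 is flagged. The data layout and function names are hypothetical.

```python
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, was_selected) pairs. Returns each group's selection
    rate divided by the most selected group's rate (its 'impact ratio')."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# 100 men with 40 selected, 100 women with 28 selected:
# women's impact ratio = 0.28 / 0.40 = 0.70 < 0.80, so the 4/5 Rule flags it.
sample = [("men", i < 40) for i in range(100)] + [("women", i < 28) for i in range(100)]
assert abs(impact_ratios(sample)["women"] - 0.7) < 1e-9
```

The example also shows the rule’s coarseness: it reduces an entire hiring pipeline to one ratio per group, which is exactly why it should be a starting point rather than the whole analysis.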

Fairness vs Bias

It is also essential to move beyond the confines of fairness (and the limitations of the law) and examine bias in its own right, because fairness and bias are not the same thing.
While there is no universally agreed-upon definition of fairness, the term most often describes the absence of prejudice or preference towards an individual or group based on specific characteristics, e.g., demographics in human populations. Fairness applies to an analytical or testing process as a whole: the absence of statistical bias for a specific population does not by itself ensure a fair process. Conversely, the presence of bias is a strong indicator that the process may be unfair, yet even when a bias cannot be technically resolved, mitigation actions applied on top of the analytical or modelling stage can render the process as a whole fair.
Bias, in its statistical sense, may be described as the intentional or unintentional failure of a statistical analysis or data and entity modelling process to depict “reality” accurately (e.g., by omitting specific qualities of the experiment population). There are many types of statistical bias and multiple root causes for them; each type may appear at one or more steps of such a process.
In natural language, bias carries a negative connotation, but some forms of bias occur naturally in the world and cannot be “solved” against the laws of nature. What matters in each experiment or analytical process is first anticipating where bias may occur and understanding the different types, their meaning, and their impact on the specific process, and then mitigating or resolving them where applicable.

Thorough Bias Testing

A more comprehensive bias analysis, extending beyond demographics, can reveal whether the model’s performance can be improved in ways that benefit the organization.
For instance, organizations should analyze recruitment data when there is a noticeable trend of hiring candidates whose characteristics don’t align with business goals. Such an analysis might examine why the system flagged certain candidates as suitable who were ultimately rejected, or why certain hires left the company sooner than expected. These insights can reveal talent loss, financial impacts, and operational inefficiencies.
Another critical factor is the far-reaching impact of bias on recruitment processes and the teams responsible for designing them. Biases introduced early—such as during data collection or labeling—can evolve into more complex issues, like inductive or survival biases, during model training. Over time, these biases can become deeply embedded, influencing how the model evaluates candidates and perpetuating flawed outcomes.
Without regular mitigation, these issues make it harder for teams to step back, assess the model objectively, and determine whether it truly supports the company’s goals. Ultimately, this limits the ability to evaluate the model’s performance and usefulness effectively.
To that end, more thorough testing, including metrics such as group benefit, disparate impact, equal opportunity, and demographic parity, is essential. Companies may outsource that burden to third-party auditors, such as code4thought, to conduct bias tests, ensure objectivity and fairness, and receive specialized insights that can be invaluable for maintaining compliance. Third-party auditors are valuable not only for their independence, which is a regulatory requirement under the NYC Bias Law, but also because they possess the expertise to perform advanced testing and analyses.
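As a rough sketch of how some of these metrics reduce to a few lines of array arithmetic, the snippet below computes demographic parity, disparate impact, and equal opportunity for two groups, assuming binary model recommendations and a ground-truth “qualified” label; the exact definitions and thresholds used in a real audit would need to be agreed with the auditor.

```python
import numpy as np

def fairness_metrics(y_true: np.ndarray, y_pred: np.ndarray,
                     group: np.ndarray, a: str, b: str) -> dict[str, float]:
    """Compare groups `a` and `b`. y_true marks truly qualified candidates,
    y_pred marks candidates the model recommended, group holds demographics."""
    sel = lambda g: y_pred[group == g].mean()                    # selection rate
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()  # rate among qualified
    return {
        "demographic_parity_diff": sel(a) - sel(b),  # gap in selection rates
        "disparate_impact_ratio": sel(b) / sel(a),   # the 4/5 Rule's ratio, generalized
        "equal_opportunity_diff": tpr(a) - tpr(b),   # gap among qualified candidates
    }
```

Note how equal opportunity conditions on actual qualification while demographic parity does not; the two can disagree, which is one reason a single metric is never enough.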

The Business Benefits of Bias-Free and Fair AI Hiring Systems

Addressing bias in AI systems yields significant business benefits beyond the obvious regulatory compliance and risk mitigation, as evidenced by various studies and reports:
    1. Enhanced Financial Performance: Companies with diverse executive teams are 25% more likely to achieve above-average profitability. This underscores the financial advantages of fostering diversity through unbiased AI hiring practices.
    2. Increased Innovation: Organizations prioritizing diversity and inclusion are 1.7 times more likely to be innovation leaders in their market. By mitigating AI bias, businesses can cultivate diverse teams that drive creative solutions and maintain a competitive edge.
    3. Improved Employee Retention: Companies with inclusive cultures experience a 22% lower turnover rate. Implementing unbiased AI systems contributes to a fairer workplace, enhancing employee satisfaction and reducing recruitment costs.

Conclusion

The rising use of AI in hiring presents both opportunities and challenges. Fairness in AI-assisted hiring is more than a legal and moral requirement for companies striving for an inclusive workforce: it is also a business imperative that makes the hiring process trustworthy and truly value-adding for an organization. To foster trust and credibility, businesses must prioritize transparency, regular testing and periodic audits, and human oversight in AI-driven hiring processes.
As AI’s influence on HR grows, a commitment to trustworthy AI practices will become a defining characteristic of responsible organizations.

How we help

code4thought’s structured AI Quality Testing & Audit service has been adapted to the legal requirements of the NYC Bias Law for bias testing and offers the ideal solution, NYC Bias Audit, not only for compliance but also as a first step toward reliable and trustworthy AI systems.
What’s more, iQ4AI, our proprietary AI Quality Testing platform, offers in-house teams robust and user-friendly tools for in-depth analysis of AI models and data, fully aligned with international standards and regulations.
Contact us to help you harness AI’s capabilities without sacrificing equity.