NYC AI Bias Audit

NYC AI Bias Audit Law Solution
Reliable AI
Bias Testing
- Disparate impact analysis on persons (e.g., candidates, employees)
- Per protected categories (e.g., gender, ethnicity, race)
Findings Report with Mitigation Measures
Summary of Results for publishing

Why us?
NYC Local Law 144 Summary
Bias audit
Published Results
Notice to Candidates
Penalties for non-compliance
Frequently Asked Questions
All employers and employment agencies who meet the criteria below must conduct or comply with a bias audit by April 15, 2023:
- They use an automated employment decision tool (e.g., resume screening) whose output, such as a score, classification, or recommendation, is used
- to evaluate candidates or employees
- who are seeking a position or promotion
- and who reside in New York City (this also includes remote work positions).
The term “automated employment decision tool” means any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons. “To substantially assist or replace discretionary decision making” means:
- to rely solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered; or
- to use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set; or
- to use a simplified output to overrule conclusions derived from other factors including human decision-making.
The term “machine learning, statistical modeling, data analytics, or artificial intelligence” means a group of mathematical, computer-based techniques:
- that generate a prediction, meaning an expected outcome for an observation, such as an assessment of a candidate’s fit or likelihood of success, or that generate a classification, meaning an assignment of an observation to a group, such as categorizations based on skill sets or aptitude; or
- for which a computer at least in part identifies the inputs, the relative importance placed on those inputs, and, if applicable, other parameters for the models in order to improve the accuracy of the prediction or classification.
Examples of AEDTs include tools that screen resumes, recommend whether a candidate should be given an interview, or score applicants for “culture fit” or other assessments, such as game-based, image-based, or psychometric tools. Other examples include rating a candidate’s estimated technical skills, categorizing a candidate’s resume based on keywords, assigning a skill or trait to a candidate, and arranging a list of candidates based on how well their cover letters match the job description, among others.
“Candidate for employment” means a person who has applied for a specific employment position by submitting the necessary information or items in the format required by the employer or employment agency.
“Screen” means to make a determination about whether a candidate for employment or employee being considered for promotion should be selected or advanced in the hiring or promotion process.
The term “employment decision” means to screen candidates for employment or employees for promotion within the city.
“Selection rate” is calculated by dividing the number of individuals in the category moving forward or assigned a classification by the total number of individuals in the category who applied for a position or were considered for promotion.
Example
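To illustrate the definition above, here is a minimal sketch in Python; the numbers are hypothetical and not drawn from any real audit:

```python
def selection_rate(moved_forward: int, total_in_category: int) -> float:
    """Selection rate: individuals in a category moving forward (or assigned
    a classification) divided by the total individuals in that category."""
    return moved_forward / total_in_category

# Hypothetical figures: 40 of 100 applicants in one category moved forward.
print(selection_rate(40, 100))  # 0.4
```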
“Scoring Rate” means the rate at which individuals in a category receive a score above the sample’s median score, where the score has been calculated by an AEDT.
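To make the median-based definition concrete, here is a small sketch with made-up scores, purely for illustration:

```python
import statistics

sample_scores = [55, 60, 62, 70, 75, 80, 85, 90]  # all individuals scored by the AEDT
category_scores = [60, 75, 85, 90]                # scores of individuals in one category

sample_median = statistics.median(sample_scores)  # 72.5
above_median = sum(1 for s in category_scores if s > sample_median)
scoring_rate = above_median / len(category_scores)
print(scoring_rate)  # 0.75 (three of the four category scores exceed the median)
```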
“Impact ratio” means the selection or scoring rate for a category divided by the selection or scoring rate of the most selected or highest-scoring category, respectively.
Example
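Continuing with hypothetical selection rates (illustrative values only), the impact ratio can be sketched as:

```python
# Hypothetical selection rates per sex category.
rates = {"Male": 0.60, "Female": 0.45}

def impact_ratios(rates_by_category: dict) -> dict:
    """Impact ratio: each category's rate divided by the rate of the
    most selected (or highest-scoring) category."""
    best = max(rates_by_category.values())
    return {cat: rate / best for cat, rate in rates_by_category.items()}

for category, ratio in impact_ratios(rates).items():
    print(category, round(ratio, 2))  # Male 1.0, then Female 0.75
```

The most selected category always has an impact ratio of 1.0, since it is divided by itself.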

The bias audit must separately calculate the selection/scoring rate and impact ratio of the AEDT for:
- Sex categories (e.g., impact ratio for selection of male candidates vs. female candidates)
- Race/ethnicity categories (e.g., impact ratio for selection of Hispanic or Latino candidates vs. Black or African American [Not Hispanic or Latino] candidates)
- Intersectional categories of sex, ethnicity, and race (e.g., impact ratio for selection of Hispanic or Latino male candidates vs. Not Hispanic or Latino Black or African American female candidates)
Example
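A sketch of an intersectional calculation, using fabricated records purely for illustration (a real audit would use the employer's historical data across all required categories):

```python
from collections import defaultdict

# Hypothetical applicant records: (sex, race/ethnicity, moved forward?).
records = [
    ("Male", "Hispanic or Latino", True),
    ("Male", "Hispanic or Latino", True),
    ("Male", "Hispanic or Latino", False),
    ("Female", "Black or African American [Not Hispanic or Latino]", True),
    ("Female", "Black or African American [Not Hispanic or Latino]", False),
    ("Female", "Black or African American [Not Hispanic or Latino]", False),
]

def intersectional_impact_ratios(records):
    """Impact ratio per intersectional (sex, race/ethnicity) category."""
    moved, total = defaultdict(int), defaultdict(int)
    for sex, race, moved_forward in records:
        total[(sex, race)] += 1
        moved[(sex, race)] += int(moved_forward)
    rates = {cat: moved[cat] / total[cat] for cat in total}
    best = max(rates.values())
    return {cat: rates[cat] / best for cat in rates}

for category, ratio in intersectional_impact_ratios(records).items():
    print(category, round(ratio, 2))
```

Here the Hispanic or Latino male category has a selection rate of 2/3 and the highest rate overall, so its impact ratio is 1.0, while the Not Hispanic or Latino Black or African American female category (rate 1/3) has an impact ratio of 0.5.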
The number of individuals assessed by the AEDT that are not included in the calculations because they fall within an unknown category should be mentioned in a respective note in the summary of results.
Example Note: The AEDT was also used to assess 250 individuals with an unknown sex or race/ethnicity category. Data on those individuals was not included in the calculations above.
“Independent auditor” means a person or group capable of exercising objective and impartial judgment on all issues within the scope of a bias audit of an AEDT. For an auditor to be considered independent, they must:
- not be involved in using, developing, or distributing the AEDT;
- not have an employment relationship with the employer, the employment agency, or the AEDT software vendor at any point during the bias audit; and
- have no financial interest in the employer, the employment agency, or the AEDT software vendor at any point during the bias audit.
A bias audit conducted must use historical data of the AEDT. If insufficient historical data is available to conduct a statistically significant bias audit, test data may be used instead. However, if a bias audit uses test data, the summary of results of the bias audit must explain why historical data was not used and describe how the test data used was generated and obtained. A bias audit of an AEDT used by multiple employers or employment agencies may use the historical data of any employers or employment agencies that use the AEDT. However, an employer or employment agency may rely on a bias audit of an AEDT that uses the historical data of other employers or employment agencies only if it has also provided its own historical data from AEDT use to the auditor for the bias audit or if it has never used the AEDT.
As a first step, a kick-off session with the Client takes place, followed by technical interviews with the Client’s team. Based on the information and data gathered, the code4thought team proceeds with the AI system testing and the respective technical analysis, which is presented to and validated with the Client in a dedicated session.
This phase (the Analysis Phase) usually takes 1-3 weeks, depending on the project.
The Reporting Phase follows, during which code4thought prepares the results, presents them to the Client for validation, and concludes with a final report session.
The Reporting Phase usually takes 1-3 weeks as well.
WE'D LOVE TO HELP YOU