New York City Proposes New Rules on Employers’ Use of Automated Employment Decision Tools

This time, the goal is to clarify implementation details and key terms in the AI Law that comes into effect in January 2023.

New York City passed a law last year requiring firms doing business in the city to conduct annual audits of their AI tools to ensure that biases do not manifest in the technologies. Under Local Law Int. No. 1894-A (also known as Local Law 144), employers in New York City who use “automated employment decision tools” in their hiring and promotion procedures are required to obtain a “bias audit” of those tools. Fines for each violation will range from US$500 to US$1,500.

The law, which comes into force on January 1, 2023, forbids employers and employment agencies from using automated employment decision tools unless certain bias audit and notice criteria are satisfied, including carrying out bias audits and providing notices to applicants and employees. The proposed regulations clarify several key terms, the prerequisites for a bias audit, the obligations for publishing the audit results, and the notices that must be given to employees and job candidates.

In order to give the AI Law reliable direction and clarity before it comes into force, New York City has now released new guidance. The New York City Department of Consumer and Worker Protection (DCWP) published proposed rules to implement the city’s AEDT law (Local Law 144) on September 23, 2022.

In other words, the newly proposed rules fill critical gaps in Local Law 144, which was enacted in December 2021. The comment period is already open, with a public hearing scheduled for October 24. Interested parties may submit written comments through the DCWP website. Comments can also be made at the public hearing, but those who wish to speak must register in advance by calling (212) 436-0396. Speakers will have three minutes to share their thoughts.

The AI Law itself defines AEDTs as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, and that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” According to the proposed rules, “to substantially assist or replace discretionary decision making” means that the covered tool (a) relies solely on a simplified output, (b) uses a simplified output as one of a set of criteria where the output is weighted more than any other criterion in the set, or (c) uses a simplified output to overrule or modify conclusions derived from other factors, including human decision-making.
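The three prongs can be read as a simple decision test. Below is a toy sketch, in Python, of how that coverage logic might be encoded; all names are invented for illustration, and real legal coverage analysis will not reduce to a boolean function.

```python
# Toy encoding of the proposed rules' three-prong test for whether a tool
# "substantially assists or replaces" discretionary decision making.
# All names here are invented for illustration, not legal guidance.
from dataclasses import dataclass

@dataclass
class ToolUsage:
    relies_solely_on_output: bool           # prong (a)
    output_weighted_above_all_others: bool  # prong (b)
    output_overrides_other_factors: bool    # prong (c), incl. human judgment

def is_covered_use(usage: ToolUsage) -> bool:
    """True if any prong of the test is satisfied."""
    return (usage.relies_solely_on_output
            or usage.output_weighted_above_all_others
            or usage.output_overrides_other_factors)

# Example: a resume screener whose score outweighs every other criterion
# would meet prong (b) and thus fall within the law's scope.
print(is_covered_use(ToolUsage(False, True, False)))  # True
```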

The Proposed Rules appear to make clear that an employer can comply with the AI Law’s standards by relying on a bias audit that was commissioned by a vendor and based on historical vendor data. There seems to be no requirement for employers to undertake their own independent analysis of a tool’s effects on their specific candidate pool. However, the Proposed Rules do not address the situation, common among AEDT vendors, where the vendor also acts as the developer.

According to the proposed rules, a “candidate for employment” is a person who has applied for a specific position by providing the necessary information and/or materials in the format requested by the employer or employment agency. The proposed rules thus make clear that the new law will not apply to prospective candidates who have not yet submitted an application for a position.

Here are the key takeaways from the proposed rules on conducting an audit (a sketch of the underlying calculation follows this list):

  1. An AEDT is subject to the law if it is solely in charge of making an employment decision, if its decision is weighted more heavily than other factors, or if it supersedes a decision made by another process, including a human;
  2. Based on the EEOC framework, audits must offer data that allows for disparate impact analysis. This entails calculating the “selection rate” for each protected gender, race, and ethnicity group and comparing it with the rate for the group that is most frequently selected;
  3. An “independent” auditor is someone who was not involved in “using or designing” the relevant AEDT.
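As a minimal sketch of that calculation, assuming hypothetical candidate data (neither the law nor the proposed rules prescribe any particular code or tooling), the selection-rate comparison might look like this in Python:

```python
# Minimal sketch of a selection-rate comparison in the spirit of the
# EEOC disparate-impact framework. The data and column names are
# hypothetical, invented for illustration.
import pandas as pd

# One row per candidate screened by the AEDT; selected = 1 means the
# tool advanced the candidate.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   1,   1,   0],
})

# Selection rate per group: number selected / number considered.
rates = df.groupby("group")["selected"].mean()

# Compare each group's rate to that of the most frequently selected group.
impact_ratios = rates / rates.max()

print(rates.round(3))          # A: 0.75, B: 0.333, C: 0.667
print(impact_ratios.round(3))  # A: 1.0,  B: 0.444, C: 0.889
```

Ratios well below 1.0 for a protected group are the kind of disparity a bias audit is meant to surface.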

Regarding the requirement to notify job applicants or current employees before using an AEDT, the rules state that covered employers must do so 10 business days in advance, as follows:

  1. For job candidates who reside in the city, by publishing notice of the AEDT on the careers or jobs section of the corporate website, by including it in the job posting itself, or by sending the candidate notice directly via email or U.S. mail;
  2. For current employees who reside in the city, by including information about the AEDT in a written policy or procedure provided to employees, by including it in a job posting, or by distributing it via email or U.S. mail.

The notice must tell both candidates and employees how to request an alternative selection process or an accommodation. The proposed rules do not, however, specifically state that a covered employer must offer an alternative selection process. Additionally, within 30 days of receiving a written request for such information, a covered employer must post a notice on the careers or jobs section of its website or deliver written notice in person, via U.S. mail, or by email. This notice must also specify the job qualifications and characteristics that the AEDT will consider when assessing a candidate or employee. Further, if the information is not already available on its website, the employer must disclose the type of data collected for the automated employment decision tool, the source of that data, and the employer’s or employment agency’s data retention policy.

The proposed rules also define the following key terms that were not specified in the AI Law:

  1. To “screen” someone means to make a determination about whether they should be selected or advance in the hiring or promotion process.
  2. The “selection rate” refers to the rate at which members of a category are either chosen to advance in the hiring process or assigned a classification by an AEDT. This rate can be calculated by dividing the number of people in the category who advanced or were classified by the total number of people in the category who applied for the position or were considered for the promotion.
  3. The “impact ratio” refers to (i) the selection rate for a category divided by the selection rate of the most selected category, or (ii) the average score of all individuals in a category divided by the average score of individuals in the highest-scoring category (see the sketch below).
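The second, score-based formula can be illustrated the same way. Here is a minimal sketch with invented scores, assuming an AEDT that outputs numeric scores rather than a pass/fail decision:

```python
# Sketch of the rules' second impact-ratio formula: the average score of
# a category divided by the average score of the highest-scoring
# category. The scores below are invented for illustration.
from statistics import mean

scores = {
    "Category A": [78, 85, 90, 66],
    "Category B": [72, 68, 80, 75],
}

averages = {group: mean(vals) for group, vals in scores.items()}
top = max(averages.values())
impact_ratios = {group: avg / top for group, avg in averages.items()}

print(averages)       # {'Category A': 79.75, 'Category B': 73.75}
print(impact_ratios)  # Category A -> 1.0, Category B -> ~0.925
```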

By regulating companies’ use of AI to reduce hiring and promotion bias, New York City is following in the footsteps of Illinois. Though Illinois has restricted the use of AI analysis of video interviews since 2020, New York City will be the first jurisdiction in the nation to cover the employment process as a whole. The law is intended to address concerns raised by the United States Equal Employment Opportunity Commission and the United States Department of Justice that “blind reliance” on AI technologies in the recruiting process might lead companies to violate the Americans with Disabilities Act.

AI recruitment solutions are designed to help HR teams at every stage of the hiring process, from posting job advertisements on job boards to screening candidate resumes to choosing the right compensation to offer. AI can rapidly screen every application for prerequisites like educational background or experience level, and it can help counteract a recruiter’s unconscious prejudice for or against a candidate. HR teams can also use it to find information gaps in job descriptions that would otherwise prevent hiring the best applicant. Naturally, the objective behind employing AI-based hiring tools (automated employment decision tools) was to assist employers in finding candidates with the appropriate training and expertise.

While the general idea was that AI would equip firms to sift through large volumes of resumes more cost-effectively, strategically, and efficiently, in practice these tools can foster biased hiring by learning from prejudiced selection patterns tied to factors like language and demographics. At the same time, there is a dearth of publicly available data showing how such technologies, by developing biases of their own, undermine attempts to increase diversity in hiring.

For instance, Amazon reportedly discontinued a machine learning-based hiring initiative in 2018 when it was discovered that the algorithm was prejudiced against women. Amazon’s AI model had been developed to screen applicants by looking for patterns in resumes the business had received over a ten-year period. Since men had made up the majority of those applicants, the algorithm inferred that men were preferable to women, resulting in bias.

When training data is tainted with subtle, overlapping biases, a small data sample makes it hard for AI to learn what the preferred outcome should be, producing ambiguous results. Moreover, AI may become more adept at these skewed behaviors the more often it performs them. This creates a cycle that institutionalizes bias, which defeats the purpose of relying on AI tools for hiring.

This is why companies like Walmart, Nike, CVS Health, and others are working to eradicate bias from their own hiring algorithms as part of the Data & Trust Alliance.

With the new New York City regulation going into effect next January, HR experts hope that bias from automated employment decision tools can be kept in check to a reasonable extent.

Preetipadma K
Preeti is an Artificial Intelligence aficionado and a geek at heart. When she is not busy reading about the latest tech stories, she will be binge-watching Netflix or F1 races!
