
New York City Proposes New Rules on Employers’ Use of Automated Employment Decision Tools


Last year, New York City enacted a law requiring employers doing business in the city to conduct annual audits of their AI tools to ensure the technologies do not manifest bias. Under Local Law Int. No. 1894-A (also known as Local Law 144), employers in New York City who use “automated employment decision tools” (AEDTs) in their hiring and promotion procedures must commission a “bias audit” of those tools. Fines will range from US$500 to US$1,500 for each violation.

The law, which comes into force on January 1, 2023, forbids employers and employment agencies from using automated employment decision tools unless certain bias audit and notice requirements are satisfied, including carrying out bias audits and providing notifications to applicants and employees. The proposed regulations clarify many key terms, the prerequisites for a bias audit, the obligations for publishing audit results, and the notices that must be given to employees and job applicants.

To give the AI Law trustworthy direction and clarity before it takes effect, New York City has now released new guidance: on September 23, 2022, the New York City Department of Consumer and Worker Protection (DCWP) presented proposed rules to implement the city’s AEDT law (Local Law 144).

In other words, the newly proposed rules fill critical gaps in Local Law 144, which was enacted in December 2021. The comment period is already open, with a public hearing scheduled for October 24. The DCWP website allows interested parties to submit written comments. Comments may also be made at the public hearing, but those who wish to speak must register in advance by calling (212) 436-0396. Speakers will have three minutes to share their thoughts.

The AI Law defines an AEDT as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, and that is used to significantly assist or replace discretionary decision making for making employment decisions that impact natural persons.” According to the proposed rules, “to significantly assist or replace discretionary decision making” means that the covered tool (a) relies solely on a simplified output, (b) uses a simplified output as one of a set of criteria where the output is weighted more heavily than any other criterion in the set, or (c) uses a simplified output to overrule or modify conclusions derived from other factors, including human decision-making.

Read More: Iranian Government Admits using Facial Recognition to Identify Women Violating Hijab Rule

The proposed rules appear to make clear that an employer can comply with the AI Law’s standards by relying on a bias audit that was commissioned by a vendor and based on historical vendor data. There seems to be no requirement for employers to undertake their own independent analysis of a tool’s effects on their specific candidate pool. However, as with many AEDT arrangements, the proposed rules do not address the situation where the vendor simultaneously acts as the developer.

According to the proposed rules, a “candidate for employment” is a person who has applied for a specific position by providing the appropriate information and/or materials in the manner requested by the employer or employment agency. The proposed rules make clear that, as a result, the new law will not apply to prospective candidates who have not yet submitted an application for a position.

To conduct an audit, here are key takeaways from the proposed rules:

  1. An AEDT is subject to the law if it is solely responsible for an employment decision, if its decision is weighted more heavily than other factors, or if it supersedes a decision made by another process, including a human;
  2. Based on the EEOC framework, audits must provide data that allows for disparate impact analysis. This entails calculating the “selection rate” for each protected gender, race, and ethnicity group and comparing it with the rate for the group that is most frequently selected;
  3. An “independent” auditor is someone who was not involved in “using or designing” the relevant AEDT.

Regarding the obligation to notify job applicants or current employees before using an AEDT, the rules state that covered employers must provide notice 10 business days in advance, as follows:

  1. For candidates who reside in the city, by posting notice of the AEDT on the careers or jobs section of the employer’s website, in the job posting itself, or by sending the candidate notice directly via email or U.S. mail;
  2. For current employees who reside in the city, by including information about the AEDT in a written policy or procedure distributed to employees, by mentioning it in a job posting, or by distributing it via email or U.S. mail.

The notice must give both candidates and employees information on how to request an alternative selection process or accommodation. The rules do not, however, specifically state that a covered employer must offer an alternative selection process. Additionally, within 30 days of receiving a written request for such information, a covered employer must post a notice on the careers or jobs section of its website or provide written notice in person, via U.S. mail, or by email. This notice should also specify the job qualifications and characteristics that the AEDT will use when assessing a candidate or employee. Further, if the information is not already available on its website, the employer must disclose the type of data collected for the automated employment decision tool, the source of that data, and the employer’s or employment agency’s data retention policy.

The proposed rules also define the following key terms that the AI Law left unspecified:

  1. To “screen” someone means to decide whether they should be chosen or progress in the recruitment or promotion process.
  2. “Selection rate” refers to the frequency with which members of a category are either chosen to advance in the hiring process or assigned a classification by an AEDT. This rate can be calculated by dividing the number of people in the category who advanced or were classified by the total number in the category who applied for a position or were considered for a promotion.
  3. “Impact ratio” refers to either (i) the selection rate for a category divided by the selection rate of the most selected category, or (ii) the average score of all individuals in a category divided by the average score of individuals in the highest-scoring category.
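To make the arithmetic concrete, here is a minimal Python sketch of the selection-rate and impact-ratio calculations as the proposed rules define them; the category names and counts are hypothetical, not drawn from any actual audit:

```python
def selection_rates(applicants, selected):
    """Selection rate per category: number who advanced or were
    classified, divided by the number who applied or were considered."""
    return {cat: selected[cat] / applicants[cat] for cat in applicants}

def impact_ratios(rates):
    """Impact ratio per category: its selection rate divided by the
    selection rate of the most frequently selected category."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical counts for two demographic categories
applicants = {"group_a": 200, "group_b": 100}
selected = {"group_a": 80, "group_b": 20}

rates = selection_rates(applicants, selected)   # group_a: 0.4, group_b: 0.2
ratios = impact_ratios(rates)                   # group_a: 1.0, group_b: 0.5
```

Under the EEOC’s four-fifths rule of thumb, an impact ratio below 0.8 — as for group_b in this hypothetical — would typically flag potential disparate impact for closer review.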

By regulating companies’ use of AI to reduce hiring and promotion bias, New York City is following in the footsteps of Illinois. Though Illinois has restricted the use of AI analysis of video interviews since 2020, New York City will be the first jurisdiction in the nation to cover the employment process as a whole. The law is intended to address concerns raised by the United States Equal Employment Opportunity Commission and the United States Department of Justice that “blind reliance” on AI technologies in recruiting might lead corporations to violate the Americans with Disabilities Act.

AI recruitment solutions are designed to help HR teams at every stage of the hiring process, from posting job advertisements on job boards to screening candidate resumes to deciding what compensation to offer. AI can rapidly screen every application for prerequisites like educational background or experience level, and it can eliminate a recruiter’s unconscious prejudice for or against a candidate. It can also help HR teams find information gaps in job descriptions that get in the way of hiring the best applicant. Naturally, the objective behind AI-based hiring tools (automated employment decision tools) was to help employers find candidates with the appropriate training and expertise.

Read More: How do geo-tagged data on Har Ghar Tiranga Website threaten an Orwellian Reality?

While the general idea was that AI would equip firms to sift large volumes of resumes more cost-effectively, strategically, and efficiently, in practice it can foster biased hiring because it picks up on signals, such as language and demographics, that encode unconscious prejudice. Compounding the problem, there is a dearth of publicly available data demonstrating how these technologies develop biases of their own and undermine attempts to increase diversity in hiring.

For instance, Amazon reportedly discontinued a machine learning-based hiring initiative in 2018 when it was discovered that the algorithm was biased against women. Amazon’s AI model had been trained to screen applicants by looking for patterns in resumes the business received over a ten-year period. Since men had made up the majority of those applicants, the algorithm inferred that male candidates were preferred over women, resulting in bias.

When training data is tainted with small, overlapping biases, a limited sample makes it hard for the AI to learn what the preferred outcome should be, producing ambiguous results. Worse, the more often the AI carries out these skewed behaviors, the more adept at them it may become. The result is a cycle that institutionalizes bias, defeating the purpose of relying on AI tools for hiring.
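The feedback cycle can be illustrated with a toy simulation; the update rule and numbers below are purely illustrative, not a model of any real hiring system. If each “retraining” round nudges the model’s preference further in the direction of its existing skew, a small initial imbalance compounds:

```python
def bias_feedback(initial_pref, gain, rounds):
    """Toy feedback loop: each retraining round pushes the model's
    preference for one group further from parity (0.5) by an amount
    proportional to its current skew."""
    pref = initial_pref
    trajectory = []
    for _ in range(rounds):
        pref = min(1.0, max(0.0, pref + gain * (pref - 0.5)))
        trajectory.append(pref)
    return trajectory

# A 55% initial preference grows steadily (from ~0.575 toward ~0.88
# over five rounds) even though no new evidence was added.
traj = bias_feedback(0.55, gain=0.5, rounds=5)
```

The point of the sketch is that the amplification comes entirely from the loop structure — training on the system’s own past selections — rather than from anything in the underlying candidate pool.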

This is why companies like Walmart, Nike, CVS Health, and others are eradicating bias from their own hiring algorithms as part of The Data & Trust Alliance.

With the new New York City regulation going into effect next January, HR experts hope bias due to automated employment decision tools can be kept in check to an agreeable extent.


Neosoma HGG receives 510(k) clearance from the FDA


Neosoma HGG has received 510(k) clearance from the Food and Drug Administration (FDA). Neosoma HGG is an artificial intelligence-powered technology that may facilitate greater accuracy in the assessment of high-grade gliomas (HGGs) on brain magnetic resonance imaging (MRI).

Offering longitudinal tracking of patients with HGGs, Neosoma HGG provides tumor segmentation, facilitates imaging for 3D geometric analysis, and performs volumetric measurements, according to Neosoma, the manufacturer of Neosoma HGG.

In performance testing, Neosoma said Neosoma HGG outperformed individual neuroradiologists, achieving a 95.5 percent accuracy rate in measuring the volume of HGGs at different points during a patient’s treatment course.

Read More: Meta’s ‘Horizon Worlds’ Inundated With Several Quality Issues

The company said the detailed objective measurements provided by Neosoma HGG aid in operative planning and the assessment of post-op progress, as well as the monitoring of chemotherapy treatment effectiveness.

“Clinicians usually debate the results of brain MRIs and whether the brain tumor is stable, responding to treatment, or progressing. Neosoma HGG will give us the objectivity needed to make our decisions easier and more accurate,” added Isabelle M. Germano, MD, MBA, FACS, a professor of neurosurgery and the director of the Comprehensive Brain Tumor Program at Mount Sinai Medical Center.


AWS Launches EC2 Trn1 Instances Powered by AWS-Designed Trainium Chips


Amazon Web Services (AWS), a leading cloud service provider, has launched its EC2 Trn1 instances, built for high-performance machine learning training and powered by AWS-designed Trainium chips. For workloads like NLP, semantic search, recommendation engines, and fraud detection, Trn1 instances offer the quickest time to train popular machine learning models on AWS.

Many companies have developed, trained, and deployed machine learning models to power applications that revolutionize their operations and customer experiences. To increase accuracy, these machine learning models are getting more complicated and consuming larger amounts of training data. 

The models are run across thousands of accelerators, and as a result, the cost of training them increases. With up to 50% lower cost to train deep learning models than the latest GPU-based P4d instances, the new Trn1 instances powered by Trainium processors deliver the best price performance and the fastest machine learning model training on AWS.

Read More: Cadence Plans to Apply Big Data to Optimize Workloads

Trn1 instances are built on the AWS Nitro System, a combination of AWS-designed hardware and software advancements that streamline the delivery of isolated multi-tenancy, private networking, and quick local storage. 

David Brown, VP of Amazon EC2, said that the company is looking forward to enhancing AWS Inferentia, its machine learning chip, and AWS Trainium, its 2nd-gen machine learning chip. He also said, “Trn1 instances powered by AWS Trainium will help our customers reduce their training time from months to days while being more cost efficient.”


Amazon Halted Testing its Robot Scouts


E-commerce giant Amazon has halted testing of its home delivery robot “Scout,” the company’s latest attempt at autonomous delivery. According to Amazon, the battery-powered robots are part of a plan to reduce greenhouse gas emissions from its delivery operations.

However, the company is scaling back its experimental efforts due to the slump in sales. As per CEO Andy Jassy, the company is simply reacting to slowed growth in its retail segment by delaying some of its less essential initiatives.

Scout, the six-wheeled, cooler-sized vehicle intended to deliver goods to the front door, will reportedly be placed on hold for the time being, but Amazon has said it may revisit the concept in the future. The company declined to elaborate on why the halt occurred, although multiple witnesses stated that the robots had trouble getting around debris and other small objects on the pavement.

Read More: Google and Amazon criticize Microsoft over cloud computing changes

During testing, the robots were designed to stop at a front door and snap open their lids so a customer could pick up a package. Amazon began testing the cooler-sized robots on the streets of suburban Seattle before extending trials to Southern California and Tennessee.

Alisa Carroll, an Amazon Spokesperson, said that the company learned through feedback that the program was not meeting customer expectations. 
The Scout robots are not Amazon’s only cutback: the company also discontinued Glow, its video-calling device with interactive games for children, and is planning to shut down Amazon Care, its telehealth service, after acquiring One Medical.


Google AI Introduced Frame Interpolation for Large Motion (FILM)


Google AI has been working on frame interpolation and has introduced a new neural network, Frame Interpolation for Large Motion (FILM). Frame interpolation is the process of synthesizing in-between images from pre-existing ones. The technique is frequently used for temporal up-sampling to accelerate video refresh rates or produce slow-motion effects.

Google published “FILM: Frame Interpolation for Large Motion” at ECCV 2022, presenting a new technique for generating high-quality slow-motion videos from near-duplicate photos. FILM is efficient for both large and small motions, with state-of-the-art results.

At inference time, Google iteratively invokes the model to output multiple in-between images.
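That iterative strategy can be sketched as recursive midpoint bisection; the `model(a, b)` callable below is a hypothetical stand-in for FILM that returns the temporal midpoint of two frames:

```python
def interpolate_recursive(model, frame_a, frame_b, depth):
    """Recursively bisect the interval between two frames, invoking a
    two-input midpoint model to synthesize each in-between frame.
    Returns 2**depth + 1 evenly spaced frames, endpoints included."""
    if depth == 0:
        return [frame_a, frame_b]
    mid = model(frame_a, frame_b)
    left = interpolate_recursive(model, frame_a, mid, depth - 1)
    right = interpolate_recursive(model, mid, frame_b, depth - 1)
    return left + right[1:]  # drop the midpoint duplicated in both halves

# Stand-in "model" that averages two scalar frames; a real FILM model
# would take and return image tensors instead.
frames = interpolate_recursive(lambda a, b: (a + b) / 2, 0.0, 1.0, depth=3)
# frames == [0.0, 0.125, 0.25, ..., 1.0] — nine frames in total
```

Three levels of recursion turn a two-frame input into a nine-frame sequence, which is how a pair of near-duplicate photos becomes a smooth slow-motion clip.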

The FILM model generates a middle image from two input images. It has three components:

  1. A feature extractor that summarizes each input image with deep multi-scale (pyramid) features.
  2. A bi-directional motion estimator that computes pixel-wise motion (i.e., flows) at each pyramid level.
  3. A fusion module that generates the final interpolated image.

Read More: Google AI digitizes sense of smell by mapping scent of molecules

Typically, multi-resolution feature pyramids and hierarchical motion estimates are used to accommodate significant motion. Small, swiftly moving objects challenge this approach, as they tend to vanish at the deepest pyramid levels.

The above components help solve this problem by using a shared motion estimator and creating a network with fewer weights. Shared weights increase the number of pixels available for large motion supervision by enabling the interpretation of minor motions at deeper levels to be the same as large motions at shallow levels.

Following feature extraction, FILM uses pyramid-based residual flow estimates to determine the flows from the center image—which has not yet been predicted—to the two inputs. After estimating the bi-directional flows, the model aligns the two feature pyramids. Stacking the two aligned feature maps, the bi-directional flows, and the input images at each pyramid level creates a concatenated feature pyramid.


Tesla and BYD break monthly records for deliveries in China


Tesla and its Chinese rival BYD have each broken their monthly records for deliveries of electric vehicles in China as the global competition between the world’s biggest makers of new-energy autos intensifies.

In September, Tesla, the world’s biggest EV maker, delivered more than 83,000 Model 3s and Model Ys from its recently upgraded Shanghai plant. Tesla had been ahead of BYD in China before Covid-19 outbreaks disrupted production.

BYD made almost 95,000 EV deliveries in September, a record high for the Shenzhen-based company. BYD’s sales, including hybrids, totaled 201,000 units in September, also a record. The rivalry between the world’s leading EV companies intensified this year after BYD abandoned traditional gasoline-powered vehicle production to focus entirely on new-energy cars.

Read More: Tesla To Remove Ultrasonic Sensors From EVs Amid Scrutiny

BYD has dominated the Chinese domestic market this year, defying supply-chain disruptions and shortages of chips and raw materials for batteries that have plagued other manufacturers, including Tesla. The company’s monthly sales of electric and plug-in hybrid vehicles have risen more than threefold on average this year.

Behind the growth is the company’s ability to produce its own batteries and many of the parts its vehicles use, ensuring stability along its supply chain. Tesla, meanwhile, lost ground after suffering production hiccups from Covid-19 lockdowns in Shanghai earlier this year.

In July, Tesla suspended operations for several days to upgrade its assembly lines for increased production capacity. The company said that its Shanghai plant can now crank out more than 750,000 units a year.

Last week, Tesla said it delivered 343,830 EVs globally during the third quarter. According to calculations based on industry association data, vehicles from Shanghai made up about 54% of its global deliveries, up from 44% in the second quarter.


China’s AI investment is expected to reach $26.69 billion in 2026


According to IDC Worldwide Artificial Intelligence Spending Guide, China’s AI investment is expected to reach $26.69 billion in 2026, accounting for almost 8.9% of global investment and ranking second in the world. 

In recent years, more enterprises are getting involved in the Digintelligence Era. They have started deploying digital transformation (DX) and intelligent upgrading, which has thus spawned more demand for AI. 

For the next five years, the hardware market will be the most significant primary market in China’s AI market. It will account for more than half of the gross AI investment. According to IDC, China’s IT investment in the artificial intelligence hardware market will exceed $15 billion in 2026, almost reaching the AI hardware market size of the US. 

Read More: Global Metaverse Market Expected To Reach $996 Bn In 2030

With the steady improvement of AI infrastructure construction, hardware growth will gradually slow, with the five-year CAGR remaining around 16.5%. The server market, the central part of the hardware market, will account for over 80% of hardware spending over the five-year forecast period.

Simultaneously, the services market will expand faster, with a five-year CAGR of almost 29.6%. Total investment in the services market is estimated to exceed $4 billion in 2026, nearly four times the 2021 figure. The AI services market is dominated mainly by the IT services segment, which IDC predicts will lead services market growth at a five-year CAGR of 31.0%.
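As a quick sanity check on these figures, the total growth multiple implied by a compound annual growth rate follows directly from compounding; at a 29.6% CAGR, five years of growth yields roughly a 3.7x increase, consistent with the “near 4x” claim:

```python
def implied_multiple(cagr, years):
    """Total growth multiple implied by compounding a constant
    annual growth rate over the given number of years."""
    return (1 + cagr) ** years

services_multiple = implied_multiple(0.296, 5)     # ~3.66x overall services growth
it_services_multiple = implied_multiple(0.310, 5)  # ~3.86x for the IT services segment
```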

IDC predicts that the AI-related spending in the four major endpoint industries, viz. professional services, government, finance, and telecom, will continue to lead over the five-year forecast period, collectively exceeding 60% of the total spending of China’s AI market.


Activeloop.ai Launches Deep Lake, a Data Lake for Deep Learning


Activeloop.ai, a company building data infrastructure for deep learning, has launched Deep Lake, a data lake designed for deep learning workloads. Deep Lake stores complex data, such as photos, videos, annotations, embeddings, and tabular data, in the form of tensors, and rapidly streams it over the network to its Tensor Query Language, an in-browser visualization engine, and deep learning frameworks, all without compromising GPU utilization.

A data lake is a centralized storage where companies store data for governance, analysis, and management. First-generation data lakes collect data into distributed storage platforms like HDFS or AWS S3. 

The second generation of data lakes, led by Delta, Iceberg, and Hudi, emerged after unorganized data collections turned many first-generation data lakes into “data swamps.” These data lakes readily connect to query engines to run analytical queries.

Read More: Snowflake invests in Domino Data Lab to provide deeper integrations

Over the past ten years, deep learning algorithms have effectively handled complex and unstructured data, including text, images, videos, and audio.

Deep Lake maintains the advantages of a typical data lake with one notable difference: it stores complex data as tensors and streams them quickly to deep learning frameworks over the network without reducing GPU utilization. Deep Lake also provides a seamless interface to deep learning frameworks such as PyTorch, TensorFlow, and JAX. All of its resources are available on its GitHub page.


Reserve Bank of India to soon test ‘E-Rupee’ Digital Currency 


India will soon test the ‘E-Rupee,’ a digital currency backed by the country’s central bank, in limited pilot launches. The Reserve Bank of India (RBI) announced a phased pilot of its version of a Central Bank Digital Currency (CBDC) in a paper released on Friday.

In what was referred to as a “concept note,” RBI outlined its vision for a digital version of the rupee, calling it the e-rupee. RBI also explained its rationale for implementing a central bank digital currency and how it would be tested in distinct phases.

The RBI plans on rolling out the e-rupee in limited pilot launches, with the intent of implementing it as an additional form of currency issued alongside paper money. The paper states the e-rupee will also serve as an alternative to cryptocurrencies.

Read More: Kim Kardashian Fined $1.26m For Promoting Cryptocurrency On Instagram

CBDCs will provide the public with the advantages of virtual currencies while ensuring consumer protection by avoiding the damaging economic and social consequences of private virtual currencies.

The central bank is considering the release of two versions of a CBDC: one that people would use for making retail payments and another that would be used for settling transfers between banks and wholesale transactions. According to the RBI, a CBDC could make payments more efficient, robust, and trusted. 

The Indian government first announced its plans to launch a CBDC in February, stating the technology would substantially boost the country’s economy. 


Meta’s ‘Horizon Worlds’ inundated with several quality issues


The flagship metaverse product of Meta, Horizon Worlds, is reportedly inundated with several quality issues. According to a report in The Verge citing internal memos, Meta’s VR social network, Horizon Worlds, holds little promise in its current avatar. Even those building the virtual reality social network at the company are barely using it.

Meta’s VP of Metaverse, Vishal Shah, reportedly told employees that the metaverse team will remain in a “quality lockdown” for the remainder of the year to ensure that quality gaps and performance issues are fixed before Horizon is opened to more users.

In August, Meta CEO Mark Zuckerberg posted new screenshots on Instagram and Facebook presenting a more lifelike version of himself after memes taunted his poorly designed metaverse avatar.

Read More: Mark Zuckerberg Responds To The Criticism Of Meta’s Newest Project Horizon Worlds

In the same month, Vivek Sharma, VP of Meta’s Horizon social media virtual reality (VR) platform, departed at a time when Zuckerberg doubled down on his US$10 billion metaverse dream.

Sharma’s team now reports to Shah. In an internal memo, Shah wrote that the feedback from the team’s users, creators, and playtesters is that the aggregate weight of stability issues, bugs, and papercuts is making it difficult for the community to experience the magic of Horizon.

Horizon Worlds is a social VR experience where users can discover new places with friends, build their own unique worlds, and form teams to compete in action-packed games. The platform is available only on the company’s Quest VR headsets. Zuckerberg has said that significant updates to Horizon and avatar graphics are coming soon.
