New York City to Require Bias Audits of AI-Type HR Technology


New York City passed a first-of-its-kind law that will prohibit employers from using AI and algorithm-based technologies for recruiting, hiring or promotion without those tools first being audited for bias.

Outgoing New York City Mayor Bill de Blasio allowed the legislation to become law without his signature on Dec. 10. It takes effect Jan. 2, 2023, and applies only when the candidates being screened for employment, or the employees being considered for promotion, are residents of New York City, but it is a harbinger of things to come for employers across the country.

If New York City employers are “using an AI-informed selection tool, whether it’s a pre-employment assessment or video interviews scored by AI or some other selection tool using AI, it is likely subject to this new ordinance,” said Mark Girouard, an attorney in the Minneapolis office of Nilan Johnson Lewis who, as part of his practice, advises employers on pre-employment assessments.

He added that “they will need to start engaging a third party to conduct bias audits of those tools to test for disparate impact (discrimination that results when a neutral policy or practice disproportionately screens out a protected group) on the basis of race, ethnicity or sex.”

The law defines automated employment decision tools as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence” that scores, classifies or otherwise makes a recommendation regarding candidates and is used to assist or replace an employer’s decision-making process.

“The definition is very broad,” Girouard said. “It’s not clear if the statute captures only pure AI tools or sweeps in a broader set of selection tools. If an employer uses a traditional pre-employment personality test, for example, which is scored by an algorithm based on weighting and a combination of components, it could be included—we’re not certain.”

Matthew Jedreski, an attorney in the Seattle office of Davis Wright Tremaine and a member of the firm’s artificial intelligence group, added that the law could “capture innumerable technologies used by many employers, including software that sources candidates, performs initial resume reviews, helps rank applicants or tracks employee performance.”

Provisions of the Law

Under the law, employers will be prohibited from using an AI-type tool to screen job candidates or evaluate employees unless the technology has been audited for bias no more than one year before its use and a summary of the audit’s results has been made publicly available on the employer’s website.

Girouard said that it’s unclear when and how often the bias audit would need to be updated and whether the audit is meant to cover the employer’s hiring process in conjunction with the tool, or the tool itself more generally.

Employers that fail to comply may be fined up to $500 for a first violation and between $500 and $1,500 per day for each subsequent violation.

Frida Polli, co-founder and CEO of Pymetrics, a talent-matching platform that uses behavioral science and AI, has been a vocal advocate for reducing bias in hiring technology.

To that end, her company works to ensure that the platform’s algorithms do not have a disparate impact.

“We have a process that the algorithms go through before they are built that ensures that they are above the threshold that constitutes disparate impact,” she said. “We test for that and continue to monitor it once it is deployed.”
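To illustrate what such a threshold check can look like in practice: a common benchmark in U.S. employment law is the EEOC’s “four-fifths rule,” under which the selection rate for any protected group should be at least 80 percent of the rate for the group selected at the highest rate. The Python sketch below shows that style of check; the function names, sample data and 0.8 cutoff are illustrative assumptions, not Pymetrics’ actual process.

```python
from collections import defaultdict

# Minimal sketch of a disparate impact check based on the EEOC
# "four-fifths rule": each group's selection rate should be at least
# 80% of the highest group's rate. Illustrative only; this is not
# any vendor's actual audit methodology.

FOUR_FIFTHS_THRESHOLD = 0.8  # common rule-of-thumb cutoff

def adverse_impact_ratios(outcomes):
    """outcomes: iterable of (group, selected) pairs; selected is a bool."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / total[g] for g in total}
    best_rate = max(rates.values())
    # Ratio of each group's selection rate to the highest group's rate.
    return {g: rate / best_rate for g, rate in rates.items()}

# Hypothetical screening outcomes for two applicant groups:
# group_a is selected at a 60% rate, group_b at 40%.
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40
            + [("group_b", True)] * 40 + [("group_b", False)] * 60)

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "OK" if ratio >= FOUR_FIFTHS_THRESHOLD else "potential disparate impact"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, group_b’s impact ratio is 0.40 / 0.60, or about 0.67, which falls below the 0.8 threshold, so the tool would be flagged for further review before deployment.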

Girouard said that the law also requires employers to provide notice to candidates before using the technology and disclose the qualifications or characteristics that the tool is evaluating.

“The requirement to explain how the tool analyzes different characteristics is likely to raise particular challenges for employers that use vendor-made software, as these vendors often protect how their tools work as trade secrets or under confidentiality agreements,” Jedreski said.

Candidates and employees may also request an “alternative process or accommodation” instead of being assessed by the technology.

Girouard said the New York City law illustrates prevalent trends employers must be aware of: a focus on transparency; the importance of explainability (there must be some job-related characteristic or qualification being scored); and, increasingly, the concept of informed consent, whether as an ability to opt out or at least a requirement to notify candidates that AI is being used.

The New York City law is only the tip of the iceberg, Girouard said. The U.S. Equal Employment Opportunity Commission recently signaled it would step up its examination of AI-type tools, but until the agency publishes formal guidelines, states and municipalities will continue to fill the gap, he said.

‘A Good Step’

The approved final version of the New York City law drew a variety of responses, even among proponents of greater scrutiny of AI technology who had advocated for it from the beginning.

Polli is supportive of the law, calling it “a good step in the right direction.”

The law contains several important elements, she said, including provisions on candidate notification, transparency regarding what data is being evaluated, and testing for disparate impact.

“Right now, there is no place to go to see the disparate impact thresholds of different platforms and how they compare,” she said. “There is no public scoring of these products in any way. Public reporting will be helpful for employers making better decisions and in informing the public in general about where a lot of these technologies fall.”

Julia Stoyanovich, a professor of computer and data science at New York University and the founding director of the school’s Center for Responsible AI, also sees the law as a “substantial positive development,” particularly its disclosure components, which inform candidates about what is being done.

“The law supports informed consent, which is crucial, and it’s been utterly lacking to date,” she said. “And it also supports at least a limited form of recourse, allowing candidates to seek accommodations or to challenge the process.”

‘Deeply Flawed’

But some digital rights activists expressed disappointment with the final legislative product.

The Center for Democracy and Technology (CDT) in Washington, D.C., called it a “deeply flawed” and “weakened” standard that doesn’t go far enough to curb AI technology bias in employment.  

“The New York City bill could have been a model for jurisdictions around the country to follow, but instead, it is a missed opportunity that fails to hold companies accountable, and leaves important forms of discrimination unaddressed,” said Matthew Scherer, senior policy counsel for worker privacy at CDT.

Scherer said the worst aspect of the revised legislation, compared with its original draft, is that it requires companies to audit only for discrimination on the basis of race or gender, ignoring other forms of discrimination. Rather than requiring an assessment of a tool’s compliance with all anti-discrimination laws covering the full range of protected traits, the law doesn’t require employers to do anything they aren’t already obligated to do, he said.

“The main effect of the revisions is, therefore, to relieve employers of any incentive to check for other forms of discrimination, such as discrimination against disabled, older or LGBTQ+ workers,” he said.

Polli countered that the challenge with including age and disability in disparate impact testing is that “we don’t get that information from candidates—it is illegal to ask a candidate about a disability. I agree that there should be better safeguards in place, and we have created a workaround at Pymetrics, using past rates of people selecting accommodations versus those who have not, but these are real challenges we need to figure out how to overcome,” she said.

The final version of the law also has a narrower scope for who and what it covers, said Ridhi Shetty, a policy counsel at the CDT. “It applies only to hiring and promotion,” she said. “This leaves many substantial employment decisions that dramatically impact workers’ lives, including those relating to compensation, scheduling, and working conditions, outside the law’s scope. It also applies only to workers who are residents of New York City, rather than to all employees of New York City-based employers. Given the sheer volume of non-New York City residents employed by New York City employers, this represents a significant narrowing of the law’s applicability.”

The CDT also believes that the notice and disclosure requirements are vague. “There’s no mechanism to ensure alternative tests or requests for accommodation are seriously considered, much less fairly offered,” Scherer said.

Stoyanovich said she understands the arguments critical of the law but sees its passage “as an opportunity for all of us, collectively, to figure out how to operationalize the very necessary requirements of bias auditing, and of public disclosure in the algorithmic hiring domain. We can only figure this out by acting, not by sitting back and allowing algorithmic hiring tools to continue to be used, without any oversight and any accountability.”




