New York City has enacted a groundbreaking law aimed at addressing bias in AI-driven hiring processes. The legislation, which comes into effect on July 5, 2023, requires employers to be transparent about their use of AI and algorithmic tools in hiring and promotions. It also mandates annual audits to detect potential bias in the technology. However, critics argue that the law, along with similar regulations, falls short in safeguarding job candidates from bias, as advances in AI continue to outpace regulation.
The law, passed in 2021, obliges companies utilizing AI and algorithmic tools for hiring and promotions to disclose this information to all candidates. Candidates also have the right to inquire about the collection of their personal data. Many US employers rely on automated technology for employment decisions, with estimates suggesting that as many as four out of five use some form of AI-based system. These tools range from automated resume screeners and matchmaking algorithms to social media scrapers, AI chatbots, video platforms, and logic games that assess job applicants.
The vast market for these tools, coupled with the lack of transparency from vendors, makes regulation challenging. Some AI-powered hiring technologies have demonstrated erratic and discriminatory decision-making, resulting in biased outcomes for job candidates. New York City’s law requires companies to hire independent auditors to evaluate their AI tools for bias annually. The auditors assess the technology’s impact on hiring in terms of race, ethnicity, and gender, but not other protected groups. Violations by employers will be subject to fines.
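To make the audit requirement concrete, the kind of metric such audits report can be sketched as an impact-ratio calculation: each group's selection rate divided by the highest group's selection rate, with low ratios flagging possible adverse impact. The sketch below is illustrative only; the group names and numbers are hypothetical, and the 0.8 threshold is the familiar "four-fifths" rule of thumb from employment-selection guidance, not necessarily the exact standard any given auditor applies.

```python
# Illustrative bias-audit sketch: compute selection rates and impact ratios
# for an automated screening tool. All data below is hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (candidates advanced, candidates assessed)."""
    return {group: advanced / assessed for group, (advanced, assessed) in outcomes.items()}

def impact_ratios(outcomes):
    """Impact ratio = a group's selection rate divided by the highest
    selection rate among all groups (so the top group scores 1.0)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical screening outcomes: (advanced, assessed) per group
outcomes = {
    "group_a": (45, 100),  # 45% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

for group, ratio in impact_ratios(outcomes).items():
    # Ratios below 0.8 are commonly treated as a signal worth reviewing
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, group_b's ratio is 0.30 / 0.45 ≈ 0.67, below the 0.8 threshold, so an auditor would flag the tool for closer review.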
Critics express doubts about the effectiveness of NYC’s hiring law, arguing that it does not provide sufficient protection for job candidates. One concern is that the law’s measurement standards can be gamed, allowing developers to sidestep meaningful audits. Transparency and accountability are critical, but there are reservations about whether audits and impact assessments deliver them in practice. Previous cases, such as HireVue, demonstrated that audits may not provide comprehensive information or enhance equity in the hiring experience.
New York City’s law has faced criticism, with revisions viewed as diluting its effectiveness. Some public interest advocates believe the law could become a national template, warranting higher expectations from policymakers. Several other states, including California, New Jersey, New York, and Vermont, as well as the District of Columbia, are developing laws to regulate AI in hiring, while Illinois and Maryland have already enacted regulations related to AI video hiring software.
While these laws have their flaws, they bring some transparency to AI-based hiring processes. They represent a concrete step towards regulating AI rather than leaving it unbounded. However, to close potential loopholes, advocates suggest allocating resources to support auditors and ensure their access to the necessary tools and protocols. This would enhance trust in the hiring process and provide job seekers with the assurance that independent and unbiased technology is being employed. Without such measures, candidates and regulators must rely on the companies’ assurances regarding the use of AI tools.
The whytry.ai article you just read is a brief synopsis of the original article.