As the job marketplace goes increasingly digital and workforces become more distributed, the stacks of resumes recruiters and hiring managers sift through become larger and more complex. Streamlining hiring could save companies tens of thousands of dollars in recruitment costs, but can human bias be eliminated from human-built algorithms?
Garbage In, Garbage Out
As artificial intelligence and automation moved into the realm of human resources and recruitment, early attempts at augmented hiring went horribly awry. As a rule, AI is only as good as the data it is fed, and resumes, especially in fields dominated by white men, overwhelmingly produce data that teaches AI models to prefer more of the same.
This was the case with Amazon’s famously failed recruitment AI. Amazon began trying to mechanize hiring in 2014, but after years of training its AI on resumes, the application taught itself to favor men over women based on the data it was receiving and had to be scrapped.
Other AI programs have met similar fates. Microsoft, for example, had to pull its Tay chatbot after it turned into a foul-mouthed Nazi within a day of going live on Twitter. Still, developers aren’t giving up on the quest to create “ethical AI,” despite the fact that machine-learning algorithms can perpetuate or even exacerbate unconscious bias.
Censia and Ethical AI
Censia’s commitment to ethical AI has been present since inception. Embracing human augmentation in hiring rather than the replacement of humans by machines, Censia continually assesses the impact of incorrect predictions and develops processes to monitor, understand, and document bias. The result is an AI that consistently provides the rationale, context, and weighted factors that underpin its decisions.
Censia also works to develop and improve AI infrastructure that is dependable, reproducible, and can be improved in perpetuity. Accuracy and cost metrics are aligned to domain-specific applications, and workforces are supported through the documentation of relevant information for business change processes, which can mitigate the impact of job automation through upskilling and reskilling initiatives.
Finally, Censia has developed and implemented End User License Agreements and internal Data Management Policies that are ISO 27001 certified and are periodically audited, re-evaluated, and updated to ensure maximum security and privacy.
AI Recruitment and Regulatory Compliance
Increasing scrutiny of AI hiring, and of unconscious bias leaking into algorithms, has led to legislation and regulatory action focused on greater oversight of AI in human resources.
Deloitte notes that AI-driven video interviewing can reduce the number of interview questions from 200 to five and significantly increase the likelihood that a hire is completed after just one interview. However, Illinois has enacted legislation regulating the use of AI in video job interviews, requiring companies that use such technology to notify candidates that it will be used, explain how it works, and obtain candidate consent to AI evaluation before the interview.
New Jersey and Washington have also introduced varying levels of AI-related legislation, and New York City has a bill in play that would regulate the use of AI in hiring, compensation, and other HR-related decisions, and could require AI tools to be audited for bias before they may be sold to companies in the city.
The Office of Federal Contract Compliance Programs (OFCCP) holds contractors and subcontractors doing business with federal government entities responsible for taking affirmative action and not discriminating on the basis of race, color, sex, sexual orientation, gender identity, religion, national origin, disability, or status as a protected veteran.