AirAsia, the renowned (or infamous) airline, receives between 1 and 1.25 million job applications a year on average. Of those, only 6,000 new employees were hired in 2019, roughly 0.6 percent of a million applicants.
It was revealed that 75 percent of AirAsia recruiters' time is spent solely on screening those applicants, yet 80 percent of profiles, or 800,000 applications, were still lost every year, not accounting for the time spent scheduling interviews, making offers, and engaging with potential candidates.
To cut down on this manual labor, AirAsia engaged an AI service to screen candidates, match them to jobs based on their qualifications, and schedule interviews with them. As a result, recruiters spent 48 percent less time scheduling interviews and 60 percent less time screening candidates.
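To make concrete the kind of screening such a service automates, here is a minimal sketch of qualification matching by skill overlap. The actual system AirAsia used is proprietary; every name, skill label, and threshold below is purely hypothetical, and real screeners are far more sophisticated.

```python
# Toy sketch of automated CV screening: score candidates against a job's
# required skills by simple set overlap. All data here is invented.

def match_score(candidate_skills, required_skills):
    """Jaccard similarity between a candidate's skills and a job's requirements."""
    c, r = set(candidate_skills), set(required_skills)
    return len(c & r) / len(c | r) if c | r else 0.0

def shortlist(candidates, required_skills, threshold=0.5):
    """Return names of candidates whose match score meets the threshold, best first."""
    scored = [(match_score(skills, required_skills), name) for name, skills in candidates]
    return [name for score, name in sorted(scored, reverse=True) if score >= threshold]

candidates = [
    ("A", ["python", "sql", "airline-ops"]),
    ("B", ["excel", "marketing"]),
    ("C", ["python", "sql", "aviation", "airline-ops"]),
]
print(shortlist(candidates, ["python", "sql", "airline-ops"]))  # ['A', 'C']
```

Even this toy version illustrates the article's concern: a hard cutoff silently rejects candidate B outright, and a candidate with equivalent skills described in different words would score zero, which is exactly how bias and lost profiles creep in at scale.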
For AirAsia at least, that’s a win, right? But think about the screening criteria taught to that AI, whom to accept and whom to reject, and consider the ethical implications of algorithmic bias when it determines the future of a person applying for a job.
How do you contest an AI that is simply programmed to reject one CV while approving another? What cost are we paying in service to automation, and how can we leverage AI ethically?
Combating biases in AI
Richard Brady, the CEO of Mentis, a talent technology and consultancy company, questions the use of AI when screening large volumes of applications:
“You’re (actually) closing the door to opportunities. [Companies] don’t have time to do the full recruitment process with everybody so how do you then move that down to a manageable manner ethically and in an evidence-based way?
A good standardized interview can predict about 10 percent of job performance. Personality tests do the same, but there’s a huge amount we can’t measure, that’s called luck. Then there’s the social quotient, but we don’t [measure] that in AI.”
Reports of bias in AI working against the recruitment process are constant: Amazon scrapped an AI tool that was biased against women; an AI that screened video interviews and selected candidates based on detected mannerisms was denounced as pseudoscience and withdrawn; and recruitment ads have been shown to skew toward certain subsets of people, such as Facebook ads for taxi drivers being served mainly to a Black audience.
As companies prosper and expand, there is a temptation to rush the recruitment process, but employers must be aware of the biases they may be introducing into their workforce. These biases can result in a lack of diversity, and, if the numbers are to be believed, a diverse staff can deliver 2.3 times higher cash flows and a 70 percent higher likelihood of capturing new markets.
Employers need to understand that AI is not a magic bullet for hiring. Though the time savings are tangible, the costs of algorithmic bias are not; they may go unseen until long after biased decisions have shaped who enters an organization.
Humanizing AI for development and recruitment of talent
In terms of using AI in recruitment, Richard thinks it should be a two-way process: “My hope is that more attention will be paid to the relevance of the use of information so it’s less big brother. Use AI to identify opportunities, and there’s an opportunity here for recruitment companies to change their model and get candidates to complete their profiles, nurture them, and present profiles to employers in more of a matched way,” he says.
As for talent development, “[To me] it’s more about making people aware of their strengths to optimize and their weaknesses to neutralize,” says Richard. “Our (Mentis) use case is for powerful people networks to [help] talented people within them succeed. Identify the talent and use them, because a lot of people feel they’re not engaged in their development process. AI can play an important role here.”
Matched not only by qualification, but also by personality, culture, and aspiration. On top of that, another part of humanizing AI for better recruitment and development is transparency. Rather than a black box that simply churns out results, AI algorithms need to be presented in a way that is understandable and verifiable by the public.
An AI system that understands each candidate and presents them in the best possible light, to employers for recruitment or to managers for development, puts workers’ needs first and is also crucial to a company’s growth. Happy employees make profitable workplaces.
This kind of AI can also level the playing field between recruiters and candidates by democratizing data. Machine learning can crunch large amounts of constantly changing data: the top programming languages required by companies over the past six months, the jobs most in demand in a particular country or region, or salary data, which for tech talent in Singapore shifts constantly due to ever-higher salary increments.
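A minimal sketch of this kind of data crunching might simply aggregate demand and salary figures across job postings. The postings below are invented for illustration; a real pipeline would draw on a large, continuously updated dataset rather than a hard-coded list.

```python
from collections import Counter

# Hypothetical job postings collected over the past six months.
postings = [
    {"country": "SG", "languages": ["python", "go"], "salary": 8500},
    {"country": "SG", "languages": ["python", "java"], "salary": 9200},
    {"country": "MY", "languages": ["javascript"], "salary": 4000},
    {"country": "SG", "languages": ["go"], "salary": 8800},
]

def top_languages(postings, n=3):
    """Most frequently required programming languages across all postings."""
    counts = Counter(lang for p in postings for lang in p["languages"])
    return counts.most_common(n)

def median_salary(postings, country):
    """Median advertised salary for a given country."""
    salaries = sorted(p["salary"] for p in postings if p["country"] == country)
    mid = len(salaries) // 2
    if len(salaries) % 2:
        return salaries[mid]
    return (salaries[mid - 1] + salaries[mid]) / 2

print(top_languages(postings))        # [('python', 2), ('go', 2), ('java', 1)]
print(median_salary(postings, "SG"))  # 8800
```

Publishing aggregates like these, rather than keeping them inside recruiters' tools, is one concrete way the data democratization described above could work in practice.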
Though the use of AI to empower talent is not widely publicized, Gartner notes that the second most common use of AI in companies is monitoring employee engagement. A Gartner research vice president cited a case in which a dip in engagement among a group of employees was traced to their work uniforms.
Similarly, IBM used AI to find jobs for people without a college degree: its SkillsBuild platform was an upskilling and reskilling initiative targeting vulnerable populations. Companies like Catalytle use AI to identify the potential of individuals and train them to join the tech workforce.
Ethical use of AI for successful hires
For employers seeking to expand their teams rapidly, remember that aggressive filtering algorithms might surface qualified candidates faster, but you may miss many more suitable candidates who didn’t tailor their CVs to appeal to an AI screener.
AI-based prediction is still in its infancy, but AI’s ability to efficiently distill large volumes of data into meaningful insights is not new. Machine learning can help move HR systems from spreadsheets to the cloud and surface data for faster decision-making.
When used ethically and in an evidence-based way, AI can be a force for good in recruitment and give HR managers and recruiters an objective edge that is so often lacking in the hiring process.
Chia Yei is the Co-Founder and Chief Operating Officer at TalentX. Our thanks to Richard Brady, the CEO of Mentis, for his opinions on the article. Chia Yei reimagines AI in HR as the COO of TalentX after accumulating grounded experience across a global portfolio in Total Rewards, HR Analytics, and HR Systems Implementation. Reach out to CY directly on LinkedIn to discuss how you can use the same AI technology to improve the way you hire for your organization.
TechNode Global publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.