According to the World Economic Forum, it will take 58 years for countries in the East Asia and Pacific region to close the gender gap. Urgent action is needed.

Emerging technologies like AI are often expected to accelerate or catalyze change on perennial issues. The rapid adoption of generative AI tools like ChatGPT has brought new focus to AI’s potential in talent processes.

AI’s proliferation has raised important questions about its impacts—both positive and negative—on equitable talent processes.

The state of play in Southeast Asia

Southeast Asia still has some way to go to improve gender parity in the talent landscape compared to developed economies. To illustrate, in Malaysia, women comprise only 20 percent of leadership roles (Vice President or higher) in companies listed on the FTSE Bursa Malaysia KLCI. Representation is even lower in Vietnam and Indonesia, with women holding 12.5 percent of similar positions on the VN30 and a mere 0.2 percent on the IDX 80.

Southeast Asia faces a specific challenge with AI bias, which is amplified by several factors, including the “democracy deficit”: the lack of public participation in AI decision-making. This bias shows up in different ways, from speech and image recognition that does not work equally well for everyone to biased credit risk assessments.

The potential for bias to creep into both data sets and decision-making algorithms is a major risk when using AI in talent systems. For example, some tools utilize HRIS (human resources information systems) company-level data to identify who has been successful in jobs based on performance data analysis and individual profile data, found on sites like LinkedIn. If the HRIS workforce performance data includes a gap in promotion rates between certain demographic groups (e.g., men vs. women), the existing bias represented in the company data may be perpetuated when using analytics to predict successful profiles.
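To make the mechanism concrete, here is a minimal, hypothetical sketch using entirely synthetic data: a naive "success profile" scorer trained on historical promotion labels reproduces the promotion-rate gap between two demographic groups, and a standard reweighing-style preprocessing step (assigning sample weights so the label becomes independent of group membership) neutralizes it. The groups, rates, and records below are invented for illustration only.

```python
# Synthetic illustration: a promotion gap in historical HRIS-style data
# is inherited by a naive model, then neutralized by reweighing.
# All data below is invented for demonstration purposes.
from collections import defaultdict

# Synthetic records: (group, promoted). Group A was promoted at 30%,
# group B at 15% -- the kind of differential described above.
history = ([("A", True)] * 30 + [("A", False)] * 70
           + [("B", True)] * 15 + [("B", False)] * 85)

def promotion_rates(records, weights=None):
    """Weighted promotion rate per group."""
    totals = defaultdict(float)
    promos = defaultdict(float)
    for i, (group, promoted) in enumerate(records):
        w = 1.0 if weights is None else weights[i]
        totals[group] += w
        promos[group] += w * promoted
    return {g: promos[g] / totals[g] for g in totals}

naive = promotion_rates(history)
# A model fit to these labels inherits the gap: A scores twice B.
assert naive["A"] == 0.30 and naive["B"] == 0.15

# Reweighing: weight each (group, label) cell so the promotion label
# becomes independent of group, removing the differential before training.
overall = sum(p for _, p in history) / len(history)  # overall rate = 0.225
weights = []
for group, promoted in history:
    expected = overall if promoted else 1 - overall
    observed = naive[group] if promoted else 1 - naive[group]
    weights.append(expected / observed)

balanced = promotion_rates(history, weights)
print(balanced)  # both groups now sit at the overall rate of 0.225
```

The point is not the arithmetic but the workflow: the correction is a deliberate design choice made before the model is trained, which is exactly the kind of intervention discussed below.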

There is a risk that unconscious bias from decision-makers gets transferred and baked into AI systems. Not only are existing biases and inequities inherited from human decision-makers, but they can become further reinforced and harder to detect inside opaque algorithms. These algorithmic problems can lead to unfair outcomes, especially for marginalized groups.

There’s still room for AI

Despite the debates around AI algorithm bias, AI still has a role to play in addressing gender gap issues in the workplace. While some leaders are concerned that AI has a propensity to perpetuate bias at scale, others believe AI can effectively accelerate de-biasing efforts.

We believe that AI has a role in disrupting the bias; the key to disruption is to ensure diversity of perspectives in the design and testing phases. In the example of combining external profile data with company performance data to generate predictive success profiles, organizations can pre-emptively correct the promotion differentials that may exist between demographic groups within the algorithm. When applying AI to screen resumes against job specifications, bias disruption techniques might include scoping the job description to include only “must have” vs. “nice to have” job requirements, which opens the aperture and optimizes potential applicant diversity.
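The "must have" versus "nice to have" scoping idea can be shown in a few lines. In this hypothetical sketch (all skills, candidate IDs, and requirement lists are invented), screening only on essential requirements admits a wider pool than treating every listed requirement as mandatory:

```python
# Hypothetical screening sketch: scoping a role to "must have" skills
# widens the eligible candidate pool. All names below are invented.
MUST_HAVE = {"sql", "stakeholder management"}
NICE_TO_HAVE = {"python", "mba"}

candidates = {
    "c1": {"sql", "stakeholder management", "python", "mba"},
    "c2": {"sql", "stakeholder management", "python"},
    "c3": {"sql", "stakeholder management"},
    "c4": {"python", "mba"},
}

def screen(pool, required):
    """Return candidates possessing every required skill."""
    return sorted(c for c, skills in pool.items() if required <= skills)

strict = screen(candidates, MUST_HAVE | NICE_TO_HAVE)  # everything mandatory
scoped = screen(candidates, MUST_HAVE)                 # must-haves only

print(strict)  # ['c1']
print(scoped)  # ['c1', 'c2', 'c3'] -- a wider, potentially more diverse pool
```

The design choice here is made by humans before any automation runs: deciding which requirements are genuinely essential determines how wide the aperture opens.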

AI also has a role in helping to embed equity. Modern approaches embed equity practices into the full talent management life cycle. For example, in talent development, a desired outcome could be to ensure that all demographic groups are adequately represented in overall leadership development programs.

Organizational leaders can embed equity steps into the sourcing process by leveraging AI to identify a wide range of participants, efficiently scan the relevant employee population, systematically assess applicants by matching assessment data with future potential roles, and tabulate individual calibrations to provide a selection recommendation that is not influenced by group think.

Human-centricity and the human touch remain non-negotiable

We have seen the adoption of hiring technology platforms that leverage conversational AI, video interviewing, assessments, and automated scheduling. Having a candidate record their responses on video, which the hiring panel then reviews, promotes a more consistent assessment of the candidate. We can see how this can improve and scale a more consistent candidate experience, but it also lacks the human touch. We cannot simply rely on technology and assume that it will equalize everything.

Human intervention is still needed to ensure that the composition of the hiring panel is diverse, and that the panel undergoes unconscious bias training before the hiring process begins, so that panelists are aware of their biases and can take intentional action to mitigate them.

Biases baked into job description design will still infect the candidate screening process. The robust exercise of challenging hiring managers to create a broadened, inclusive candidate pool, which includes evaluating what is required versus what is merely nice to have, and reducing bias by considering culture add rather than culture fit, is still hard to supplant with technology. These are some of the practices we intentionally embed in client advisory work to improve inclusivity and equity in search.

Decision makers need to question the value of a candidate experience that is AI-dominant rather than people-dominant. At the end of the day, human decision-making is paramount in making the judgment call and maintaining that balance.

AI, used with care, is a powerful tool for human capital innovation

While technology of the past was designed to serve a singular purpose, generative AI can be deployed to solve myriad problems in an organization.

Any credible generative AI approach must be built upon a human-centric foundation: technology needs to be grounded in an organization’s people and culture. AI might supply a new framework, but human judgment, creativity, and strategic thinking remain crucial to deploying it effectively.

As leaders seek more equitable talent outcomes, AI can be a powerful tool for human capital innovation.


Ulrike Wieduwilt is Managing Director and Singapore Country Lead, Russell Reynolds Associates. Ulrike focuses on senior-level assignments for consumer products clients. Based in Hamburg, she specializes in conducting assignments for clients in fast-moving consumer goods, both in industry and in trade/retail. Ulrike has completed numerous high-profile searches for board members and CEOs of German and European companies. Ulrike previously led the Shanghai office and has extensive knowledge of the Chinese market.

Shoon Lim is Global DEI Lead, Russell Reynolds Associates. Leading the Diversity, Equity & Inclusion Capability at Russell Reynolds Associates, Shoon Lim advises and supports clients in implementing, scaling, and sustaining their DE&I strategy by developing senior leaders who practice intentional inclusion, establishing robust governance that drives equity in talent processes, and building inclusive cultures where all employees feel a sense of belonging and shared accountability.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.