The artificial intelligence (AI) revolution will send seismic waves across every sector. For non-profit organizations (NPOs) like the Special Olympics, which serve hard-to-reach marginalized communities, it can be an invaluable tool.

However, people with intellectual disabilities, like many excluded populations, face a particular challenge: AI bias. This bias shows up in different ways, from unrepresentative speech and image recognition to biased credit risk assessments. These systemic algorithmic problems can lead to unfair outcomes, especially for people who are already sidelined.

The AI bias problem

Biases are embedded in AI tools. These biases not only negatively impact the technological landscape, but also reflect prevailing societal misconceptions about the capabilities and contributions of marginalized communities.

Recently, our colleagues in Latin America conducted an experiment asking an AI tool to generate images of athletes playing sports.

The resulting AI responses excluded images of athletes with disabilities participating in sports. This omission reveals the prevalent exclusion of people with disabilities from the very definition of what it means to be an athlete in the AI domain. This reflects a broader bias in society that inadvertently seeps into technology, creating a limited perception of who can be considered an athlete or actively engaged in physical activities.

In Southeast Asia, for example, AI bias is compounded by several factors, reinforcing the “democracy deficit” – the lack of public participation in AI decision-making.

The silver lining for the inclusion revolution

Even before AI exploded onto the scene, technology was a significant game-changer for the non-profit sector. It has revolutionized processes and streamlined operations, bridging the inclusion gap between the marginalized communities that NPOs champion and the larger society.

An estimated 54 million people live with intellectual disabilities in the Asia Pacific region, and they are among the world’s most socially isolated, misunderstood, stigmatized, and underserved populations. Social stigma in schools and workplaces is still prevalent, and more must be done to address it while plugging gaps in areas like healthcare and employment. The potential for AI to accelerate the work NPOs like us do every day is transformative.

Education

Many education systems claim to be inclusive but in practice follow segregated or integrated models, missing the mark on meaningful inclusion. Inclusive education requires systemic change, revisiting principles such as whole-system inclusion, learning-friendly environments, curricula adapted to student needs, differentiated learning, and universal design for learning, among others.

A recently released study from the Special Olympics Global Center for Inclusion in Education reveals that 64 percent of educators and 77 percent of parents of students with intellectual and developmental disabilities (IDD) view AI as a potentially powerful mechanism to promote more inclusive classrooms and close educational gaps between students with and without IDD.

That said, only 35 percent of educators surveyed believe that AI developers currently account for the needs and priorities of students with IDD, highlighting the critical need for more disability-inclusive tools.

Given the scarcity of perspectives from persons with intellectual disabilities (PWIDs) on AI, I strongly urge technology companies and developers to listen to marginalized voices so that their tools more accurately reflect the needs and perspectives of this community. A failure to listen would sideline as many as 200 million people globally from important, ground-breaking technology.

Inclusive healthcare

Persons with intellectual disabilities (PWIDs) are often denied equal access to quality healthcare despite their urgent needs and higher health risks. A University of New South Wales study found that PWIDs are twice as likely to die a preventable death, owing to delays or problems in investigating, diagnosing, and receiving appropriate care. This gap is systemic: the healthcare community is neither consistently equipped nor trained to treat PWIDs.

Herein lies huge potential for AI to enhance the health and lives of PWIDs, through tools such as real-time captioning, robotic assistance, and brain-to-computer interfaces, which can advance inclusive health, enable independence, and empower lives.

Inclusive employment

The UN reports that the unemployment rate for PWIDs is at least double that of the general population, and in some places as high as 80 percent or more.

Hiring based on diversity and inclusion has proven to be a spark for innovation. According to Great Place To Work, a diverse and inclusive workplace can deliver higher revenue growth, greater readiness to innovate, an increased ability to hire from a diverse talent pool, and 5.4 times higher employee retention.

A workplace ecosystem needs creativity, innovation, adaptability, and flexibility to flourish, and a diverse workforce plays a crucial role in attaining them. I have seen firsthand how inclusive hiring can transform organizations and empower individuals, especially those with intellectual disabilities.

By casting a wider net in the hiring process, workplaces can tap into a vast pool of skilled and motivated workers with different backgrounds and experiences. I believe AI recruitment tools – leveraged appropriately – can enable fairer hiring practices and help workplaces eliminate implicit, unconscious bias. They can support organizations in evaluating candidates on ability and skill sets, instead of relying purely on personal judgment and a narrow talent pool.

Building an ecosystem of partners

UNESCO started the ball rolling with its Recommendation on the Ethics of Artificial Intelligence, adopted in 2021 by 193 UNESCO Member States. The Recommendation calls for better data governance, gender equality, and attention to other important aspects of AI applications in education, culture, labor markets, the environment, communication and information, health and social welfare, and the economy.

The challenge now is for all countries to take concrete action, champion ethical standards, and partner with like-minded governments, corporates, and organizations so that AI is ultimately at our service, not at our expense, leaving no one behind.


Dipak Natali is the Regional President and Managing Director of Special Olympics, Asia Pacific.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.
