Disability and Peace-centered AI Policy Is Multipolar, Multiagent and Reflects Historical Complexity
“Algorithms do not create biases by themselves; they perpetuate societal inequities and cultural prejudices. Technological problems are thus social and historical first, and only then algorithmic.”
The European AI Act and other rapidly emerging regulations, such as recent documents from the science and technology authorities of Japan and China and from the US White House, are important examples of how national laws attempt to classify AI systems by risk and to establish compliance frameworks, definitions, and explanations. These mechanisms are designed to bring safety, privacy, and individual and group protection to AI systems while imposing necessary regulations and limitations.
However, the more AI becomes a cornerstone of national strategies, directly tied to economic and social objectives such as GDP and economic performance, job creation, human capacity development, and infrastructure deployment, the more AI and technology come to reflect regional specifics: culture, tradition, customs, historical context, unique actors, and vocabulary. AI policy thus inevitably involves the multipolar, multi-stakeholder, multi-agent nature of society, in which each agent must bring its own agency and participation.
This becomes even more critical when AI is used to address historically excluded and discriminated-against groups, the access and participation of people with disabilities, the challenges of autonomous weapons, warfare, and peace, and socially critical infrastructure in healthcare, education, and work. It is an important reminder that AI policy cannot follow a “one-size-fits-all” route; it requires multipolar representation and agency that address perpetuated institutional, structural, and social biases, distortions, and exclusions.
Algorithms, society and access
People with disabilities, including physical, cognitive, and sensory impairments, are a notable example of a group whose social and historical context directly shapes modern statistics, data sets, models, and systems. Historically, individuals with disabilities were excluded from the workplace, the educational system, and adequate medical support. Around 50-80% of people with disabilities are not employed full-time; for cognitive impairments the figure may reach 85%. Individuals with disabilities are also disproportionately affected by unjust law enforcement, violence, and brutality. In conflicts and crises, they are recognized as among the most marginalized and at-risk populations.
Intersectionality and geographic and social context, such as gender, ethnicity, language, and socioeconomic status, add further complexity to today’s research and policy affecting individuals with disabilities. In the past, women were frequently excluded from public research, which still affects modern statistics and data sets. Girls with cognitive disabilities were often misdiagnosed or diagnosed at much lower rates because their conditions manifest differently. Particular ethnicities, social groups, and communities were excluded from medical research or had less access to facilities and services. The social tendency to “normalize” and “rationalize” medical and statistical models has led to numerous inhumane practices, past and present, including forced sterilization, still practiced across 13 European countries and globally (an issue raised by the European Disability Forum).
These factors inevitably affect existing institutions, statistics, systems, and practices. They are amplified by a lack of access to target communities and their data; the tendency of AI models to “narrow” and “optimize” objectives and outcomes; the use of statistical generalizations and normalized assumptions (“proxies”); the lack of area- and condition-specific categorization and practices for high- and unacceptable-risk systems (e.g., biometrics and emotion recognition applied to individuals with disabilities and facial differences); insufficient infrastructure and data silos; and the lack of specialized vocabulary, social studies, and stakeholder involvement, including communities, families, and caregivers. A minimal sketch of the “proxy” mechanism follows below.
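To make the “proxy” mechanism concrete, here is a minimal synthetic sketch (all feature names and numbers are hypothetical, not drawn from any real data set): a screening model never sees disability status, yet a feature correlated with historical exclusion lets it reproduce the bias baked into past decisions.

```python
# Minimal synthetic sketch of "proxy" bias. The model never sees disability
# status, but a correlated feature lets it reproduce historically biased
# hiring labels. All names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
disability = rng.random(n) < 0.15   # protected attribute, NOT a model input
qualified = rng.random(n) < 0.5     # what we actually want to predict

# Historical exclusion from the workplace shows up as longer employment gaps,
# independent of actual qualification.
employment_gap = rng.normal(np.where(disability, 3.0, 0.5), 1.0)
skill_score = 2.0 * qualified + rng.normal(0.0, 1.0, n)

# Training labels are past hiring decisions, which penalized disability:
hired = qualified & (rng.random(n) > np.where(disability, 0.6, 0.1))

X = np.column_stack([skill_score, employment_gap])
model = LogisticRegression().fit(X, hired)
screened_in = model.predict(X).astype(bool)

# The harm: qualified candidates with disabilities are screened out more often.
for name, group in [("with disability", disability), ("without disability", ~disability)]:
    mask = group & qualified
    print(f"{name}: qualified-but-rejected rate = {np.mean(~screened_in[mask]):.1%}")
```

The specific model is beside the point: any objective optimized against historically biased labels will recruit whatever proxies the data offers.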
As a result of these factors, AI systems are known to misidentify individuals with facial differences or asymmetry, atypical gestures or gesticulation, speech impairments, or particular communication patterns. There are directly life-threatening scenarios in which police and autonomous security systems, or military AI, falsely recognize assistive devices as weapons or dangerous objects, or misread facial or speech patterns. These concerns have been raised by the UN Special Rapporteur on the Rights of Persons with Disabilities.
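One practical implication can be sketched as a routing rule: a high-risk detection should never trigger an autonomous response. The class names and threshold below are illustrative assumptions, not taken from any deployed system; any “weapon” detection that is low-confidence, or confusable with an assistive device, goes to a human reviewer.

```python
# Illustrative policy-as-code sketch: route high-risk detections to a human.
# Class names and the threshold are hypothetical, not from a real system.
from dataclasses import dataclass

# Object classes known to be visually confusable with weapons.
ASSISTIVE_CLASSES = {"white_cane", "crutch", "wheelchair_part", "prosthetic_limb"}
REVIEW_THRESHOLD = 0.99  # deliberately strict for a life-critical decision

@dataclass
class Detection:
    label: str          # top-1 predicted class
    confidence: float   # model confidence for the top-1 class
    runner_up: str      # second-best class

def route(det: Detection) -> str:
    """Return the downstream action for a single detection."""
    if det.label != "weapon":
        return "no_action"
    if det.confidence < REVIEW_THRESHOLD or det.runner_up in ASSISTIVE_CLASSES:
        # Ambiguity with an assistive device: a person must review the frame.
        return "human_review"
    # Even a confident detection only raises an alert; it never fires an
    # autonomous response.
    return "alert_operator"

print(route(Detection("weapon", 0.97, "white_cane")))  # -> human_review
print(route(Detection("weapon", 0.995, "backpack")))   # -> alert_operator
```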
Addressing historical and social complexity
Individuals with disabilities are only one example; social and historical complexity shapes modern systems and policies for many groups and subgroups. Addressing these historical challenges and layers is a question of protected legal status, participation, and regional and global sovereignty.
- Legal status, vocabulary and social layers. Society is not monolithic, so policy needs vocabulary, definitions, and practices that reflect particular groups and subgroups: gender (e.g., UNICEF’s work on AI for girls with disabilities), age (e.g., UNICEF’s AI for Children), ability (e.g., the UN Special Rapporteur on the Rights of Persons with Disabilities), and the identification of socioeconomic parameters and criteria.
- Regional policies. AI connects directly with regional economic and social objectives such as economic performance, job creation and access, and human capacity development. Some medical conditions or manifestations are specific to particular regions or social groups (e.g., the prevalence of diabetes, neuropathy, and some genetic disorders in the Middle East), which defines region-specific requirements for infrastructure, skills and literacy, policing, and social protection.
- Resources and infrastructure. The private sector is not always capable of addressing particular social challenges or areas, such as deploying accessible infrastructure, investing in assistive technologies, providing literacy training at scale, or building medical facilities. This points to the increasing role of sovereign funds and authorities in investing in and deploying AI infrastructure and related policy (e.g., China, the UAE, Saudi Arabia, or Egypt’s Vision 2030).
- Social studies and historical protection. Active deployment of AI across cities, communities, and social mobility inevitably involves social studies, including language-specific technologies and AI focused on the recreation and protection of particular historical practices and customs (e.g., Philippines-based AI centers and researchers working on the history of calligraphy).
- Global literacy and bottom-up participation. As more agents become involved in the use and creation of AI algorithms, widespread digital and data literacy, accessible through bottom-up participation and cooperation, grows in importance (e.g., UNESCO’s public call for data literacy or the WHO’s digital health competence framework). For individuals with disabilities, this also includes families and caregivers.
- Global and regional ethical agents, peace. A rapidly decentralizing world multiplies the ethical agents and institutions involved in regulating, governing, and overseeing algorithmic research and deployment. Given the challenges of autonomous weapons, warfare, and fragile peacebuilding, and the growth of group-specific frameworks addressing epidemics or the protection of children and women, this inevitably involves more area-specific agencies, institutions, organizations, and communities.
- Silos and multi-stakeholder participation. Finally, emerging corporate data silos, which tend to concentrate power and access around data and the deployment of large-scale AI models, make it necessary to democratize opportunities for multi-stakeholder participation by researchers, academia, and technologists, specifically across the spectrum of emerging assistive and accessibility technologies.
Way Forward – Reflecting The Past, Affecting The Present
Algorithms do not create biases by themselves; they perpetuate societal inequities and cultural prejudices. Technological problems are thus social and historical first, and only then algorithmic. Responsibly developed AI algorithms bring hope to individuals with disabilities and impairments, making workplaces and education more accessible: they power and augment smart wheelchairs, walking sticks, and geolocation and smart-city tools; bionic limbs, exoskeletons, and rehabilitation technologies; social robotics for individuals with autism and cognitive impairments; facial and gesture recognition for sign-language identification in support of deaf individuals; and computer vision algorithms that interpret images and videos into braille for visually impaired individuals.
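As one concrete illustration, the last step of such an “images to braille” pipeline fits in a few lines. The sketch below assumes an off-the-shelf image-captioning model produces the sentence; it covers only uncontracted (Grade 1) letters, while real braille translation also handles numbers, punctuation, and contractions.

```python
# Minimal sketch: map a caption (assumed to come from an image-to-text model)
# to Unicode braille patterns that a refreshable braille display can render.
_DOTS = {  # standard dot numbers for letters a-z (uncontracted braille)
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
    "j": (2, 4, 5), "k": (1, 3), "l": (1, 2, 3), "m": (1, 3, 4),
    "n": (1, 3, 4, 5), "o": (1, 3, 5), "p": (1, 2, 3, 4), "q": (1, 2, 3, 4, 5),
    "r": (1, 2, 3, 5), "s": (2, 3, 4), "t": (2, 3, 4, 5), "u": (1, 3, 6),
    "v": (1, 2, 3, 6), "w": (2, 4, 5, 6), "x": (1, 3, 4, 6),
    "y": (1, 3, 4, 5, 6), "z": (1, 3, 5, 6),
}

def to_braille(text: str) -> str:
    """Map lowercase Latin text into the Unicode braille-patterns block (U+2800)."""
    out = []
    for ch in text.lower():
        if ch in _DOTS:
            # each raised dot k sets bit k-1 of the codepoint offset
            offset = sum(1 << (d - 1) for d in _DOTS[ch])
            out.append(chr(0x2800 + offset))
        elif ch == " ":
            out.append("\u2800")  # blank braille cell
    return "".join(out)

# e.g. a hypothetical caption produced by the vision model:
print(to_braille("a dog crossing the street"))
```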
However, the more AI systems become involved in critical social processes such as law enforcement, policing, or warfare, or are classified as high- or unacceptable-risk systems, the greater the need to reflect the multipolar and multi-agent nature of society, where historical context and the past define the present, including how we identify purpose and practice. To tackle the challenges of algorithms, we must therefore address the social challenges of the past and present first.