Bias in AI Safety Systems as a Governance Crisis

By Karine M. Yengo, Author and Founder, The Crossroads Movement

January 12, 2026

Picture a facial recognition system deployed across multiple shopping centers: public spaces meant to welcome people from diverse backgrounds and ethnicities. In a world that aspires to diversity, equality, and inclusion for all, this system fails to recognize people of some races as reliably as others, because of how it was trained.

When this happens, who is responsible for this system’s error?

We tell ourselves it was a “code” problem: the data used to train the model was biased. True. But does accountability rest only with the engineers, or do we, collectively, play a role in allowing such systems to be deployed without scrutiny?

When the System Gets It Wrong

Over the past year, I’ve observed conversations and reactions about this topic, and I realized that we still talk about bias in AI safety systems as though it were a technical glitch waiting to be debugged.

In the example above, the failure was not just bias in the code. It was equally in the choices of what to measure, test, and optimize, and in who sat in the rooms where those decisions were made.

Governance decides these outcomes. So why do we celebrate the system and overlook the steering wheel behind it? Isn’t this the same as being excited about the speed of a sports car while being careless about its direction?

Every biased safety system shares a common origin story: “This was trained on biased data.” But framing this as a data-quality issue misses the point entirely, because the data reflects reality, and that reality is shaped by structural inequality that varies across contexts but appears almost everywhere.

History as the Blueprint

When we train a loan-approval algorithm on historical lending data from Kenya, Colombia, Indonesia, or India, for example, we encode patterns from a time when certain communities faced systematic exclusion, a time when identical financial profiles could receive different outcomes based on gender, ethnicity, or geography.
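To make the mechanism concrete, here is a minimal, hypothetical sketch: the records and the deliberately naive model are invented for illustration, not drawn from any real lending system. A model fitted to historically biased approval decisions learns group membership as a decision factor, so identical financial profiles receive different outcomes.

```python
# Hypothetical illustration only: synthetic records and a deliberately naive
# model, showing how historically biased approvals get reproduced.

from collections import defaultdict

# Synthetic "historical" records: (credit_score, group, approved).
# Group B applicants with the same score were mostly rejected.
history = [
    (700, "A", True), (700, "A", True), (700, "A", True), (700, "A", False),
    (700, "B", False), (700, "B", False), (700, "B", True), (700, "B", False),
]

# Naive model: approve if the historical approval rate for this
# (score, group) combination is at least 50%.
counts = defaultdict(lambda: [0, 0])  # (score, group) -> [approvals, total]
for score, group, approved in history:
    counts[(score, group)][0] += int(approved)
    counts[(score, group)][1] += 1

def predict(score, group):
    approvals, total = counts[(score, group)]
    return total > 0 and approvals / total >= 0.5

# Identical financial profiles, different outcomes:
print(predict(700, "A"))  # True  -- approved
print(predict(700, "B"))  # False -- rejected, purely because of group history
```

Nothing in this sketch is exotic; the disparity follows directly from the training records, which is exactly why “the data was biased” is a governance problem and not just an engineering one.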

As a society, we’ve come a long way on matters of equality, diversity, and inclusion, and we still have a long way to go. While on this path, if we build detection systems trained on heavily biased data that favours certain races, ethnicities, cultures, or regions, doesn’t that take us back decades, to a time when those exclusions were still treated as normal?

Even with occurrences like these, we still tell ourselves that “the numbers do not lie” and “algorithms do not discriminate.” In fact, they do, and today’s AI safety systems carry an aura of impartiality, which is precisely what makes them dangerous.

As we build a more advanced future not just for ourselves, but for future generations, do we really want them to repeat the same cycle all over again?

If the answer at this point is no, then now is the time to treat ethical, inclusive, and sustainable practice as a necessity in AI systems. The future undoubtedly will not function without AI; it is already here. So why is governance still treated as an afterthought while technology keeps accelerating?

Governance has its drawbacks; it can, for instance, be weaponized as a tool for power. Even so, it confronts us with an uncomfortable truth: some safety applications should not be left to algorithmic decision-making, especially when the data itself is already shaped by historical and systemic bias.

The Part We Keep Avoiding: Accountability

Effective governance sets clear chains of accountability, creates real mechanisms to challenge decisions, and demands transparency about how systems work, thereby asking the right questions at the right time. And when biased safety systems cause harm, the questions shift from being purely technical to including questions of accountability.

Who is responsible?
Is it the engineers who built it?
Is it the executives who deployed it?
Or the governance committee that approved it?

When no one can answer, the ambiguity becomes structural. And without accountability there is no responsibility, not only at the organizational or governmental level but at the individual level too.

A handful of jurisdictions demonstrate what governance looks like in practice, because they have moved, at least to some extent, beyond procedural oversight. Some examples:

In the European Union, the AI Act entered into force on 1 August 2024, setting enforceable risk categories and legal limits on how certain systems can be deployed, with key obligations phased in over time.

In Canada, the federal Directive on Automated Decision-Making requires a mandatory Algorithmic Impact Assessment before automated decision systems are used in government.

In Asia, Singapore has pushed governance through practical guidance, including a Model AI Governance Framework for Generative AI released for public consultation in 2024, outlining expectations around accountability and oversight for higher-risk use cases.

South Korea has gone further into legislation: its AI Basic Act was passed in December 2024 and is set to take effect in January 2026, with requirements that include impact assessments for “high-impact” AI.

These frameworks are still young, but they establish the principle that deployment without governance review has consequences.

These cases are exceptions, of course. But notably, what distinguishes them is not better technology but governance with the authority to halt deployment, demand transparency, and reverse harm as rapidly as possible.

What We Deploy Today Becomes the Future

At this point, the question is not whether we can build fairer safety systems; we can.

Technical remedies exist and are widely available: data audits, representative sampling, human oversight, appeal mechanisms, and constrained deployment can all reduce harm. These tools are well documented, so when they are applied inconsistently, the failure points to a governance gap rather than a technical one.
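As one concrete illustration of the first remedy on that list, here is a minimal sketch of a data audit. The decision log, the group labels, and the 0.8 threshold (the widely cited “four-fifths” rule of thumb) are assumptions made for the sketch, not a statutory test: it simply computes selection rates per group and flags large gaps for review.

```python
# Hypothetical illustration: auditing a deployed system's decision log.
# Group labels, data, and the 0.8 threshold are assumptions for this sketch.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> {group: rate}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Synthetic decision log: group A approved 80% of the time, group B 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20 +
             [("B", True)] * 50 + [("B", False)] * 50)

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)                   # {'A': 0.8, 'B': 0.5}
print(f"ratio = {ratio:.2f}")  # ratio = 0.62
if ratio < 0.8:
    print("Gap exceeds the four-fifths threshold: flag for governance review.")
```

An audit like this takes minutes to run; whether anyone is required to run it, and whether its findings can halt a deployment, is the governance question.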

The question, however, is whether governance will evolve from theoretical theatre into actual implementation.

Will boards reject biased systems even when they are profitable?
Will regulation carry real consequences for unethical deployment?
Will organizations value equity enough to accept the friction that comes with it?

And individually, will we practice ethical, sustainable use of these systems?

Evidence from 2024 and early 2025 suggests we are at an inflection point, and the responsibility lies with all of us to make sure governance is enforced at every level.

The systems being deployed now will define access and protection not only in our future, but for generations to come. Whether these deployments replicate past inequality or interrupt it depends less on engineering capacity than on our collective will. And that will cannot be exercised without governance: the resolve to apply constraints, demand accountability, and take responsibility.

Article authored by Karine M. Yengo, Author and Founder, The Crossroads Movement