The Horasis Council for the Ethical Practice of AI

By David Bruemmer, Chief Strategy Officer, NextDroid, USA

October 21, 2021

A specter haunts the server farms of Silicon Valley: the call for ethical AI, the notion that citizens of every nation have a right to just, transparent treatment by the algorithms that influence their lives. The impact of AI is keenly felt, but its algorithmic sources and inner workings remain largely in shadow. Recent US elections illustrate how algorithms operating behind the scenes (e.g., on Facebook) shape the political and social landscape. AI subtly influences which companies get funding, whom we vote for and choose to date, what we read, and how we think about our bodies, careers and health. The mechanisms behind this influence are purposefully opaque, so it is not surprising that the average person does not clearly understand the issues at stake. Most remain more worried about the threat of science-fiction robots fifty years in the future than about the AI system that just denied their home loan application.

The exponential growth of AI has become a central theme of recent Horasis meetings. To foster better understanding and help guide that growth, Horasis has established a global council on the ethical practice of artificial intelligence. Founders include Alex Bates, Dr. Julie Marble, LTC Manuel Ugarte, Agata Ciesielski and David Bruemmer, each of whom brings a different perspective. The goal is to foster trustworthy, intuitive AI grounded in accountability and measurable impact. A number of efforts around the world have emerged to develop frameworks for classifying and measuring ethics in AI. The Horasis council takes a different approach: its goal is to express the problems we see in narrative form, using our experiences and those of our friends and colleagues to share stories that help us understand the human phenomenon of interacting with AI… in addition to its intellectual, economic and political dimensions. At the start, there is no ready-made conclusion and no thesis statement by which to convince the reader of a particular course of action or policy. The initial goal is simply to acknowledge the need for the ethical practice of AI and the subsequent need for discussion and study. The foundational principle is that AI should empower rather than limit us… elevate our humanity and help us find peace.

When agave plants bloom, shooting the physical expression of their algorithms impossibly high, they give us confidence that the universe is well-structured, beautiful and exploding with opportunity. How do we get AI algorithms to do the same? Can machine intelligence speak to our souls and tell us that we are surrounded by beauty, cared for and safe? Is this too much to ask? The human experience of technology is not limited to how it works; it is also about how our interaction with it makes us feel and how it impacts our world. Over the long haul, AI promises to ease our burden, to drive our cars and to enhance our humanity. Yet today, AI is increasingly used to influence and control us. Without a focus on ethics, we may become the robots we are being promised. We are told our AI will take us anywhere, yet we spend hours glued to a screen.

It all comes down to control. Who has control, and who is being controlled? Is it possible to coordinate control of every car on the highway? Would we like the result? A growing number of self-driving cars, autonomous drones and adaptive factory robots are making these questions pertinent. Would you want a master program operating in Silicon Valley to control your car? If you think that is far-fetched, think again. You may not realize it, but large corporations have already made a choice about what kind of control they want, and it has less to do with smooth, efficient motion than with monetizing the vehicle (and you) as part of their system. Embedding high-level artificial intelligence into your car means there is now an individualized sales associate on board. It also allows remote servers to influence where your car goes and how it moves. That link can be hacked, or used to control us in ways we don’t want.

The question is not whether we need better control, but where this control should lie. Does it lie with me, with my vehicle, or with the servers that own and orchestrate the data? Addressing these questions may require a new approach, based not on rubrics or bureaucracy but on respect for individual choice and the desire for transparency. This starts by enabling the average person to understand what is really happening to them and their data. The algorithms may be abstruse, but their intent and impact can be made much clearer.

In this spirit, the Horasis Council for the Ethical Practice of AI wants to analyze the impact of AI along several dimensions: 1) functional performance, 2) human-centered design, 3) empowerment and social justice, and 4) equitable use of human and financial resources. With attention to these factors, AI can evolve to maximize human joy and minimize injustice. The first factor is functional performance. When AI fails to perform in the way we expect, it can affect us deeply. Sometimes we want to throw the device across the room. Today there is no standard metric for measuring this frustration. When the AI assistant fails to understand your question for the fifth time, it ceases to be merely a technical issue; at some point it begins to degrade your dignity, and perhaps your humanity. We want to be understood, and we need to know that a system we are devoting time to will meet certain basic standards.

How do we develop a metric for measuring this functional performance? Traditionally, performance is not considered an ethical issue, but what if a company intentionally makes some features hard to use in order to incentivize the purchase of more expensive tools? What if an electronics company purposefully makes it difficult to edit text on a cell phone to ensure that people will also buy a laptop? Have you noticed that some AI fails to function reliably after a certain period of time, prompting the purchase of an upgrade? Most AI systems today are less than perfect, so the question should not be whether the system is optimal, but whether its benefits outweigh its adverse impacts. For now, that last question remains a subjective assessment.
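To make the difficulty concrete, consider what even the crudest version of such a metric might look like. The sketch below (in Python, with dimension scores, weights and names that are purely illustrative assumptions, not anything the council has proposed) scores a system on the four dimensions listed above:

```python
from dataclasses import dataclass, asdict

@dataclass
class EthicalAssessment:
    """Hypothetical scores in [0, 1] for the four dimensions named above."""
    functional_performance: float  # does the system work as users expect?
    human_centered_design: float   # how does interacting with it make users feel?
    empowerment: float             # agency, transparency, social justice
    resource_equity: float         # human, financial and environmental cost

# Illustrative equal weights; any real framework would have to argue for these.
WEIGHTS = {
    "functional_performance": 0.25,
    "human_centered_design": 0.25,
    "empowerment": 0.25,
    "resource_equity": 0.25,
}

def overall_score(assessment: EthicalAssessment) -> float:
    """Weighted average: higher means benefits more plausibly outweigh harms."""
    return sum(WEIGHTS[name] * value
               for name, value in asdict(assessment).items())

# An imagined voice assistant: performs adequately, but erodes user agency.
assistant = EthicalAssessment(
    functional_performance=0.8,
    human_centered_design=0.6,
    empowerment=0.2,
    resource_equity=0.4,
)
print(f"overall: {overall_score(assistant):.2f}")  # overall: 0.50
```

The exercise mostly demonstrates the limits of a rubric: collapsing dignity, agency and energy use into a single number discards exactly the texture the council hopes to capture through stories.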

The second factor in determining ethical AI is the measure of human-centered design. What are the intangible impacts of using the AI on how we think, feel and emote? How does the AI make users feel about themselves? Does it make them feel inadequate, or dependent on technology for their future well-being? Is it addictive? Many applications of AI, including video games and social media, are designed to be addictive and to play on individual ego in a harmful way. To face these issues head-on, we ask the following questions. Is human attention being unduly monitored or “tugged on”? What is the impact of the AI on users over time? The questions and examples are too many to list. In fact, the purpose of the Horasis council is to tackle these many issues over time, story by story, until the bigger picture comes into focus.

The third factor in measuring the ethical practice of AI pertains to empowerment of the individual and what, across a larger population, we might call social justice. Does the individual retain agency when using the technology? How transparent is the function of the AI? Can individuals retain control of their own data, or are they coerced into sharing it? Does the AI treat people unfairly on the basis of race, sex or age? The fourth factor is the use of resources. How much human capital, intelligence and time is used to create the AI and then to interact with it over its useful lifespan? Candy Crush is a great example of an application that, on the face of it, has no negative impact on the individual or the social fabric. It is relaxing and asks very little of the user… except hundreds and hundreds of hours. Do we have a means to measure the way those brain cycles are going down the drain?

When we consider human resources, the impact of AI on employment and the workplace experience is a paramount concern. When we consider financial resources, we should ask whether AI affects different socio-economic groups differently. Does it promote equitable distribution of wealth? Can individuals own their own data and preserve their privacy? Does corporate control of monetization adversely impact society? Environmental concerns loom just as large. Rarely do companies explain how their use of large data farms impacts the environment, even though these computers consume an enormous amount of energy and contribute significantly to greenhouse-gas emissions and climate change.

AI will be an increasingly significant aspect of our daily lives, so we should think about the ethical model we want to use when designing it. That model may not be human ethics at all: for some time, AI practitioners, roboticists and ethicists alike have struggled with the possibility that human ethics is the problem rather than the solution. This engenders some difficult discussions about trust. When might we trust an AI system over a human? Perhaps the best approach is to build trust through a relationship, responding to AI as we do to humans: interacting with systems and watching how they move, talk and deliver on their promises. AI practitioners and roboticists often work very hard to get people to trust the system, but perhaps it would be better to engender appropriate distrust. Can we imagine a future where AI under-promises and over-delivers?

Ideally, people would always be happy to see a robot coming towards them, but all too often people are terrified to see unmanned vehicles fly over their heads. As a roboticist, I always wanted people to be happy to see my robots, but this wasn’t always the case. Wired magazine ran an article on our drug-tunnel robotics work entitled “Bots vs. Smugglers: Drug Tunnel Smackdown.” After several gut-wrenching days of sending robots into tunnels and hoping for the best, I started thinking that this was rough duty for both me and my little robot. The entrepreneurial element in me began wondering whether there might be more money in using robots to smuggle drugs than in finding drug smugglers. Often, ethical behavior makes less money and takes more effort. With this trend at work, AI will continue down a swiftly tilting slope toward corporate control and the dehumanization of the individual. The Horasis Council for the Ethical Practice of AI wants to upend this trend.

We can discuss the moral choices of humanoid robots decades into the future, but let’s also discuss the money-grabbing debasement of humanity today. How do we discern empowering AI and differentiate it from the soul-sucking variety? While working on a project involving the highly dexterous Robonaut system, we joked that we could rob banks by placing the humanoid torso on a balancing Segway and driving it into a bank holding a sign that said “This is a robot stick-up!” in one hand and a brown burlap sack in the other. Robonaut was destined for the space station, and its behaviors were centered on collaborating with astronauts, but we thought we could put it to some earthly use before it escaped into space. We told ourselves we could make the first criminal robot, but really it would have just been a crime committed robotically. Either way, it would have been a recognizably unethical practice of AI. We did mount Robonaut on a Segway, on which it careened, formidable, around the space station mock-up in Houston. Fortunately, Robonaut never rolled into a bank. Unfortunately, not all unethical practices of AI are as easy to spot as a robot bank robbery. It turns out that the imagined plight of our Robonaut, a stand-in for greedy humans, is exactly what is happening every hour of every day. We may think that it is the machines, rather than the corporations, that we should be worried about, but machines are merely reflections of human intent. Too often, AI is being used as a computational engine of human greed.

In and of itself, the pursuit of AI is neither good nor bad, right nor wrong. Applied with a clean heart, AI can redeem our world and make right much that has gone wrong; John Hagel has argued that it gives us the opportunity to redefine work and reclaim our humanity. The AI we want empowers the individual. The AI we don’t want controls and monetizes the individual without regard to their humanity. That kind of AI is theft masquerading as something other than human greed. The servers in Silicon Valley are hidden and will never saunter into a bank on a Segway, but what they do may be no less a crime if we let them steal our time and volition. By sharing our stories, the Horasis Council for the Ethical Practice of AI will strive to find the right balance between public safety, individual convenience and personal privacy. How do we manage the economic effects now rippling across our communities? Are we creating a world that will have jobs for our children? Are we creating a world that we can even control? The answers to these questions must start with an appreciation of how technology is changing, based in part on an understanding of how it has changed in the past. Horasis invites you to participate with us. Write and send us your stories, and expect an article each month as you join us on this journey.