Why we need a US tech ethics board
Big tech’s attitude of “move fast and break things” is finally hitting a wall, as business leaders and consumers realize just how much has been broken in the name of good intentions. Steps are being taken to backtrack on abusive tactics. Facebook is, perhaps surprisingly, declaring that it welcomes new regulations (as long as they don’t put US companies at a disadvantage). Apple has made the monumental decision to hand users more control over how apps share their personal data. And in response to tech leaders opposing the move (read: Mark Zuckerberg), Apple CEO Tim Cook made a compelling argument:
“The path of least resistance is rarely the path of wisdom,” Cook said. That’s true. As is the fact that the tech industry can’t be its own judge, jury and executioner. We wouldn’t allow members of the public to declare themselves medical doctors, pat themselves on the back and set their own rules. Google tried this with its AI ethics board, which, ironically, was shut down within days after outrage over one of the board members’ questionable ethics.
Yet tech is a fundamental part of the fabric of society, with a huge potential for disruption. Whenever ethical values are ignored, tech will progress down that road at an alarming rate. We’ve all probably experienced the great harm this can cause, from discriminatory AI to fake news during a presidential election.
This downward spiral is openly permitted because we have no true ethical standard, overseer, or enforcer. Sure, we have ethics-related laws that affect tech, but there’s no arm of the government enforcing them. When our representatives bring tech giants in to testify at congressional hearings, the outcome is negligible, especially compared to concrete measures like the EU’s privacy and security law, the General Data Protection Regulation (GDPR).
The consensus in tech is that people don’t want to do harm. The industry just needs a steady hand.
The time is right for a US tech ethics board. Our new president — who has personal experience living with a disability — is leading the charge for inclusiveness, and is likely to take drastic action against big tech’s most damaging practices.
While this proposal for a board is just one of many routes we could take towards better tech ethics, the hope is to get the ball rolling and the conversation moving.
Who: public but with private influence
A tech ethics board can’t be all bark and no bite — it needs to execute. So, it inevitably has to be a public body, created and protected by law. While there are a few government bodies covering technology and science, we’re far from an institution with real leverage over how technology is used in the private sector.
The US government has several independent bodies, like NASA, the SEC and the FTC. These report to Congress — not the president — and are relatively bipartisan, self-regulating and protected from presidential influence.
If one of these independent bodies were a Tech Ethics Board, it would be able to create federal regulations, and advocate and enforce policies. Like the Federal Trade Commission (FTC), it would have law enforcement powers and educate companies on regulations. And, like NASA, it could have an external council, made up of private sector advisors in tech, academia, innovation and business.
Board members shouldn’t all be nominated by the president, and should represent a diversity of genders, ages, ethnicities, abilities, and politics. It’s imperative that they have an expert understanding of new and emerging technologies, mobile apps, and social media — areas that feel underrepresented in government advisory boards, not to mention in Congress. Nor should they all be tech buffs; many should have experience working in fields drastically impacted by tech, including finance, education, and health.
That independence is important. We already have the US Digital Service (USDS) and the White House Office of Science and Technology Policy (OSTP), but both report directly to the president. While they may be mission-driven and focused on ethics, these bodies generally have neither carrots nor sticks to regulate ethical behavior in tech. If autonomous, this ethics board would be a trusted, non-partisan bridge between the public and private sectors.
What: regulations and education in the tech world
The board should cover the major issues of today and tomorrow, primarily the following:
– Accessibility, meaning products and the latest innovations are built for people of all abilities — from choosing visuals that consider people with speech or limb differences, to curating content that is considerate to people’s mental health.
– Handing users more control over their privacy and protecting people’s data.
– Eliminating bias from tech products and from the development of new AI technologies, and encouraging more equal representation in the industry.
– Tackling deceptive and dark patterns to prioritize user well-being.
– Educating businesses on fulfilling these regulations, providing the advice and resources to do so properly. It will also encourage more ethics education in training for tech professionals. This can be done in collaboration with organizations dedicated to these areas.
– Building a solid foundation for ethical behavior: laws and regulations.
Some laws already exist that, if followed, would make tech products better for everyone. Take the Americans with Disabilities Act (ADA), which requires services to be accessible to people with disabilities, from visual impairments to physical ones. That doesn’t mean everybody complies, though.
Step one for the board will be lobbying for laws where there are none. Technology always moves faster than the law can adapt, from the gray area around accessibility on mobile apps to the biased AI that judges use in sentencing. Step two would be writing concrete regulations that tech companies must follow in order to comply with existing laws.
Compliance is a big word in the tech world. Lack of it exposes you to lawsuits and public backlash, so businesses may have the will to comply but lack the know-how. They should be able to rely on the board to coordinate education around legislation.
So the next big question is: how would all of this be enforced?
How: enabling rules to be followed
When they hear “ethics board,” some might wrongly imagine an enforcement arm whose sole purpose is to stifle progress in tech. But this is more of a firm hand guiding a transition to a more acceptable, ethical, and equitable system — one that ensures all members of society can access innovation while their rights and existence remain respected. That’s why a tech ethics board should work towards a “strive for this” rather than a “don’t do this,” which will in turn avoid excluding people from the process itself.
Armed with clear-cut regulations, the board’s officials would have oversight powers, including inspecting private businesses to ensure compliance. They’d run a much tighter ship when it comes to quality assurance: products could be fined and/or blocked from going to market if they don’t abide by the regulations. Ideally, there would be an agreement with mainstream marketplaces like the App Store (though Apple already has quite a solid QA process) and Google Play to temporarily remove sub-par products from the platform.
But who bears responsibility for the products being put out into the world? As company leaders, we are ultimately the ones making the core decisions, and we should be held to much higher standards than the rest of the team.
In the absence of an actual license that legally allows you to be an entrepreneur (which in this day and age also means being a cybersecurity director, psychologist, private data manager, and more), CEOs should at the very least take some kind of professional oath. Like doctors’ Hippocratic oath, it wouldn’t be legally binding, but it would encourage awareness of, and pride in, the profession’s ethical standards. Breaking that promise to the people you serve, however, should mean reporting to the ethics board and potentially getting hit with fines.
When non-compliance has tough real-world consequences, that effect will quickly trickle down into professional tech education, because demand will surge for employees with solid knowledge of accessibility, compliance, and ethics. Over time, academic and even less formal training courses will weave in these areas of study. By specifying the rules for the desired outcome rather than, for example, demanding a license for all tech employees, we avoid excluding people from entering and diversifying the industry.
Those are the essentials. But the board would have so much more weight if it had enough resources to also encourage stakeholder responsibilities. That is to say, pushing other major players to be ethically accountable, and encouraging a domino effect.
The biggest stakeholder of all is the customer. We know the public is keen to denounce companies’ bad practices, and the board could provide a platform other than the cage fight that is social media. One option is an anonymous, private reporting system (which would hopefully encourage internal complaints) that could trigger investigations if enough serious complaints are made against a particular product. This suggestion comes with its shortfalls — like being heavy on the “paper”work — but if we put enough heads together, we can come up with better solutions on this premise.
Somewhere between their vision, intentions, and ambition, many founders forget that their first responsibility is to their customers. This is a human problem, and it won’t quietly disappear unless our leaders act assertively. Whether or not people agree on the need for better ethics is almost beside the point: unless we take action soon, our greatest innovations will advance so fast that we’ll lose sight of what needed regulating in the first place.
Let’s hope we haven’t reached that point yet.
This article was authored by Cat Noone, co-founder and CEO of Stark