Artificial Intelligence in a Post-Industrial World

By Nuno Neves Cordeiro, Senior Advisor, Strategy and Artificial Intelligence, Switzerland

November 10, 2023

The novelty of AI and its applications seems capable of creating a new world: a post-industrial world with technology at its core and a distinct modus operandi. Our panel was entrusted with discussing the impact of AI in that post-industrial world, particularly how it will affect the nature of work and society as well as our conceptions of privacy and freedom. In addressing some of those questions, our panel looked to draw parallels with previously adopted general-purpose technologies such as electricity, the computer, and the internet.

Our panel comprised Becky Wong, CEO, Globex Capital Partners, Hong Kong; Marina Gracia March, Founder, AlmAware, Spain; Muhammad Lawan Zanna, CEO, Oryo, Nigeria; Rufus Lidman, Founder, AIAR EdTech, Singapore; Yonah A. Welker, Founder, Yonah Fund, USA; and myself as panelist and moderator. We set ourselves the goal of addressing three concrete questions regarding AI’s potential impact, risks, and role in mitigating those same risks:

Question 1: With the benefit of hindsight regarding the adoption of previous general-purpose technologies and their impact on the nature of work and society – Will AI be any different? If so, how?

Question 2: As the adoption of AI increases, the risk of its abuse by governments, corporations, and individuals in ways that impact our individual freedoms is unquestionable – How likely or real is that risk? How severe could it be?

Question 3: Considering such risk of abuse – Can AI be used to assess the fairness of government, corporate, and individual behaviors (AI-enabled or otherwise) to safeguard our individual freedoms?

Nuno Neves Cordeiro, Senior Advisor on Strategy and AI in Financial Services, Switzerland

Question 1: Impact

The panel’s views on this question diverged. Some panelists observed that there are considerable differences between AI and previous technologies, particularly in terms of its (learning) autonomy, its (high) dimensionality, and its (unpredictable) functional forms.

Further, they believe those differences position AI as a technology that augments humans by performing tasks we cannot possibly perform, in ways we do not always understand. Previous general-purpose technologies, by contrast, augmented humans by performing tasks humans could already perform, only more efficiently and effectively.

Therefore, they view AI as having the potential not only to improve how humans perform the tasks that make up any given role (augmenting the human) but also to alter the role itself (augmenting the role), thereby significantly changing the nature of work.

Furthermore, panelists elaborated on the potential impact of AI for people with disabilities and the role of the technology in offsetting sensory, cognitive, and physical differences. Because AI elevates a certain functional baseline for humans in general, its potential impact can be even more significant for the cross-section of the population whose functional baseline is distinct – not only regarding the nature of work, but across many other aspects of everyday life.

Panelists holding the alternative view – that AI is likely to follow patterns similar to those of previous technologies – believe that, as with those previously adopted technologies, AI will affect the nature of work by demanding new knowledge areas, by creating new roles in organizations to deploy those knowledge areas, by displacing roles that are narrowly defined and have high automation potential, and by teaming up with and augmenting humans in roles that remain relevant.

Rufus Lidman, Founder, AIAR EdTech, Singapore

Question 2: Risks

There was more consensus on this question: the panel agreed there is a risk of the technology being abused by governments, corporations, and individuals – “AI is a tool just like a hammer, and if I am a bad person holding that hammer, I can cause serious harm,” noted a panelist.

In discussing this question, our panel implicitly drew a distinction between intentional and unintentional risks – even when not purposeful, abuse may arise from the subpar development of AI use cases. Unintentional risks become far less likely, however, if good development practices are followed when creating AI use cases: paying close attention to the data sourcing stage (ensuring the data’s statistical parity), to the training stage (creating bias penalties via constraints), and to the deployment stage (creating relevant monitoring mechanisms).
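
To make those three checkpoints concrete, the sketch below walks through them on synthetic data: a statistical-parity check at sourcing, a demographic-parity penalty added to a toy logistic regression at training, and a decision-rate monitor at deployment. Everything here – the data, the penalty form, the 0.05 alert threshold – is an illustrative assumption rather than a method the panel endorsed.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- Data sourcing: check statistical parity of the raw labels ---
    # 'group' stands in for a hypothetical protected attribute (0/1).
    n = 2000
    group = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 3)) + 0.3 * group[:, None]  # mild group shift
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float)

    def rate_gap(labels, groups):
        """Absolute gap in positive rates between the two groups."""
        return abs(labels[groups == 0].mean() - labels[groups == 1].mean())

    print("label-rate gap in sourced data:", rate_gap(y, group))

    # --- Training: logistic regression with a parity penalty ---
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.zeros(X.shape[1])
    lam, lr = 2.0, 0.1  # penalty strength and step size (assumed values)
    g1, g0 = group == 1, group == 0
    for _ in range(500):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n  # standard log-loss gradient
        # Add the gradient of lam * gap^2, where gap is the difference in
        # the groups' mean predicted scores; sigmoid' = p * (1 - p).
        gap = p[g1].mean() - p[g0].mean()
        s1 = X[g1].T @ (p[g1] * (1 - p[g1])) / g1.sum()
        s0 = X[g0].T @ (p[g0] * (1 - p[g0])) / g0.sum()
        w -= lr * (grad + lam * 2 * gap * (s1 - s0))

    # --- Deployment: monitor the decision-rate gap over time ---
    decisions = (sigmoid(X @ w) > 0.5).astype(float)
    gap = rate_gap(decisions, group)
    print("decision-rate gap after training:", gap)
    if gap > 0.05:  # alert threshold is an assumption, not a standard
        print("ALERT: demographic-parity gap exceeds tolerance")

In a real pipeline, the deployment check would of course run continuously on live decisions rather than once on the training sample.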

Finally, panelists also offered a nuanced view of the intentional risks by distinguishing between the risk of abuse by governments, corporations, and individuals. Under this view, corporations have a strong economic incentive to use AI for good, i.e., to better understand and react to their competitive environment, to better enable their employees, and to better understand and serve their clients. In addition to that economic incentive, regulation is being launched in both the EU and the USA to create oversight of corporations’ use of AI. Therefore, corporations present lower levels of risk than some may expect.

Becky Wong, Chief Executive Officer, Globex Capital Partners, Hong Kong SAR

Question 3: Mitigation

The panel’s views on this question were again largely in concordance: in the same way that AI can be used as a tool for bad, it can also be used as a tool for good. AI can monitor whether the AI-enabled decisions being made by governments and corporations are representative and unbiased – “This would be AI controlling AI,” rightly noted a member of the audience. This potential oversight role of AI would naturally extend to decisions made by those same actors that are not AI-enabled.

In its oversight role, AI, with its unparalleled pattern-recognition abilities, would be leveraged to ensure that government and corporate decisions affecting us all are made based on fair and unbiased criteria – in other words, to ensure that unwarranted constituent-specific criteria (e.g., gender, ethnicity, age, religion, nationality) carry no weight in explaining those decisions.
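
As a hedged illustration of what such an audit could look like in practice, the sketch below tests whether a stream of decisions can be predicted from protected attributes alone: if attribute profiles explain the decisions materially better than a simple majority baseline, those criteria are carrying weight they should not. The audit log, the attribute encoding, and the 0.02 tolerance are all assumptions made up for the example.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical audit log: encoded protected attributes (e.g. gender,
    # age band, nationality) and the observed binary decisions.
    n = 5000
    attrs = rng.integers(0, 2, size=(n, 3))
    decisions = rng.integers(0, 2, n)  # 1 = approved, 0 = denied

    def attribute_only_accuracy(A, d):
        """Accuracy of predicting each attribute profile's majority decision.

        With a few binary attributes we can enumerate profiles directly;
        this upper-bounds how much the attributes explain the decisions.
        """
        correct = 0
        for prof in {tuple(row) for row in A}:
            mask = (A == prof).all(axis=1)
            majority = d[mask].mean() >= 0.5
            correct += (d[mask] == majority).sum()
        return correct / len(d)

    acc = attribute_only_accuracy(attrs, decisions)
    base = max(decisions.mean(), 1 - decisions.mean())  # majority baseline
    print(f"attribute-only accuracy {acc:.3f} vs. baseline {base:.3f}")
    if acc - base > 0.02:  # tolerance chosen for illustration only
        print("FLAG: protected attributes help explain these decisions")
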

Marina Gracia March, Founder, AlmAware, Spain

Conclusion

After sharing their views on the three questions, the panelists addressed questions from the audience. Those were mostly concerned with the potential risk brought upon us by AI and with trying to understand “How bad can it become?”, as one audience member framed it. The panelists’ view was that, in truth, we cannot possibly know how bad it can become, given we are still at a relatively early stage of adoption, but that, though the risks of AI are undeniable, its potential benefits for humans in general, and for certain cross-sections of the population in particular, will likely outweigh the risks – risks that are already being addressed through regulation. Indeed, like the general-purpose technologies before it, the adoption of AI in work and society is an inevitability, and it is up to us to apply the lessons of the past during this critical, foundational stage of adoption.