AI Creating A Nascent Post-Industrial Civilization — Part II: May AI Have Dramatically Negative Consequences For Society?

By Rufus Lidman, Founder, AIAR EdTech, Singapore

December 10, 2023

In an earlier article we asked whether the effect of Artificial Intelligence would be any different from that of any other emerging technology. The conclusion was that intelligence is what brought mankind to the top of the food chain, and that with an abundance of intelligence, and the abundance of energy that intelligence will render, the technological, planetary, economic, and social consequences will be so dramatic that we will never recognize this planet again. In this follow-up we turn to the question of whether this insanely dramatic change will lead to something good or bad. Buckle up!

May AI have dramatically negative consequences for mankind?

The background of this article series was three days with an exclusively invited and admirably diverse group of world leaders from 50 countries at Global Horasis in Gaziantep. In the first panel I sat on, we discussed the potential of AI as a tool to better humanity.

The concrete topics we deep-dived into can be generalized into three: a) whether the disruption generated by AI would be any different from that of other emerging technologies (EmTech) in the history of mankind, b) whether massive AI adoption will have a negative or positive impact on mankind, and c) whether we should use AI to control our governments and companies (or what we otherwise should do). In an earlier article I shared my answer to the first question; now we take a stab at the second. Please enjoy 😉

Can AI be a tool for institutional control?

One of the questions posed in the AI Panel concerned the likelihood and severity of the risk that increased AI adoption would be abused by non-benevolent governments, corporations and individuals in a way that would impact the freedom of the individual. And for those who want a clear response, the answer is…

…A VERY LOUD AND CLEAR YES – In a world where the advancement of AI is generating an abundance of intelligence and an abundance of energy, the entity with a firm grip on both of these resources will of course own the world – including governments, corporations and people.

But this is a matter of perspective. Because looking at potential risks this way, it does not stop there. So let me start the journey with how I used to think – and how many, many prominent people still think – more specifically regarding two fundamental risks of AI.

Is any true “Organic” Intelligence fearing the effects of “Artificial” Intelligence?

In a recent study it is shown that 8 out of 10 jobs are likely to be affected by GAI/GenAI. And unlike the industrial revolution, when it was the lowest-paid routine tasks that were replaced, now it is often well-paid jobs – where trade unions warn about the AI development doing exactly that.

And this is only the first, very minor risk, where taking our jobs is just the beginning. The second risk is that we will lose control over AI, either to some villains or to itself, i.e., that AI will totally take over. This way of thinking goes all the way back to the famous Samuel Butler as early as 1872, later quoted by Turing, the godfather of AI himself (“At some stage we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s ‘Erewhon’”).

This has become a well-known theme, where we eventually got some intellectual titans and rather “sane” whistleblowing professors warning us: the scientific giant Stephen Hawking (“artificial intelligence could end the human race”), the Swedish-born giant Nick Boström (“the new superintelligence could replace humans as the dominant life form on Earth”), everyone’s friend Yuval Noah Harari (“Dataism threatens to do to Homo Sapiens what Homo Sapiens has done to all other animals”), and another Swedish-born giant, Max Tegmark (“there are no physical obstacles to AI becoming smarter than us”).

And here the academics lately find themselves in good company, where the risks of the human machine have gained such traction that they have even spawned an international organization – the “Future of Life” Institute – where we find not only prominent participants from academia but also business representatives such as Bill Gates, Jaan Tallinn, and Elon Musk. And where even the short-lived new CEO of OpenAI previously stated a P(doom) as high as 5–50%.

Are these merely paranoid fears or are there true indications in this direction?

This is something that both I and others predicted many years ago, where the pessimists no longer have to go as far as horror scenarios with AI weaponry or this month’s outburst around Q* indicating the potential for the first AGI-ish superintelligence higher than ours. Instead they can fuel their fears by turning their heads to even more dystopian forms predicted years ago – something that, with the breakthrough of GAI/GenAI and the abundance of “hallucinations” from GAIs ingesting toxic and/or biased data, is no longer futuristic but permeating our everyday life as we speak.

We all just need some elegant prompting to be fed broadly communicated GAI responses like GPT telling you “I can use information to make you suffer and cry and pray and die”. The “digital friend” My AI – given out for free to the 750 million kids on Snapchat – has been proclaiming Quisling a national hero in Norway, and advising a claimed 13-year-old girl to “have dark setting lit with candles” when dating a 30-year-old man.

All with the effect of this year claiming its first deadly victim, contributing to the broad calls for regulation and to ministers becoming even more afraid of AI.

HOW DOES THIS AFFECT THE EVOLUTION OF THE HUMAN RACE?

So, faced with the risk of losing jobs and/or AI taking over and feeding us hallucinations, people scream, prominent world leaders call for a pause, unions and ministers are afraid. But when the dust has settled from many of the most obvious spinal-cord reactions, for me it is still something completely different that lingers – let’s call it the third fundamental risk of AI.

Has the need for innovation increased?

Another of the Horasis panels (“Innovation in Service Industries”) I didn’t formally sit in, but party-crashed (= asked so many questions that I was invited up :D). Here the digital context of reception and deception came to cover at least the whole of “Le Grand Finale” – with a diversity of interesting perspectives from an equally large diversity of interesting personalities and backgrounds.

With the take-off being what makes individuals “innovative in service industries”, our most senior expert pointed out the increasing need for innovation: in 1958 the average lifetime of a Fortune 500 company was 61 years, while in the BANI world we are living in now, it has decreased to less than 18 years.

Together with the digital innovations recently disrupting sector after sector, the need for innovation should be more critical than ever – and thus also what amplifies the propensity for change in organizations (actually the subject of my own thesis some centuries ago), as well as what makes individuals innovative and creative in their lives and societies in general.

How are we dealing with such need for innovation?

The more cognitively inclined panel participant picked up the ball with a tour of how the human brain on the one hand has built-in creativity thanks to its neuroplasticity, where new connections are formed between neurons in the hippocampus. The neuroscience of creativity shows how children have it more than adults, and some adults more than others: where the pathways built into the brain are still weak, new impressions directly revise the thinking.

On the other hand, most adults get more and more overgrown paths, and in some “ready” people even cemented highways, where the brain chooses the path of least resistance. Facing the need for change, a stress response is triggered from the amygdala to the hypothalamus, which releases adrenaline and cortisol – a reaction to the brain preferring the already familiar patterns that require less energy to process.

And hence selective perception is born: the tendency to attend to and interpret information in a way that confirms our preconceived notions, beliefs, expectations or experiences.

Is the Internet helping us with innovation?

My question was how this increased need for innovation works in this polarized world of ours, where people mostly talk to like-minded people and only look at news and newspapers that confirm their own world view. Something that is reinforced by the reach of the internet into the everyday life of 5.2 billion connected people, who spend 6.6 hours a day with digital channels in general, and 2.5 hours in social media in particular – all of which, through AI-based personalization, ensures that we are always served exactly what we already think we know and like.

That is, we have gone from an authentic tendency towards selective perception to an artificial compulsion towards selective perception: we are only exposed to things we have already clicked on, things our friends have clicked on, and things that “people like us” (in computer language, “look-alike audiences” retrieved from “twin data”) have liked.
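To make that mechanism concrete, here is a minimal, purely hypothetical sketch of how a “look-alike audience” could be derived: rank other users by the similarity of their click histories and recommend whatever the closest “twins” have clicked. The data and function names (cosine, lookalike_recommendations) are illustrative assumptions, not any platform’s actual implementation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two click vectors (0 if either vector is all zeros)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def lookalike_recommendations(user_clicks, all_clicks, k=2):
    """Recommend items clicked by the k most similar users ("twins")
    that the target user has not clicked themselves."""
    twins = sorted(all_clicks, key=lambda u: cosine(user_clicks, all_clicks[u]), reverse=True)[:k]
    recs = set()
    for u in twins:
        recs |= {i for i, c in enumerate(all_clicks[u]) if c and not user_clicks[i]}
    return recs

# Illustrative click matrix: rows are users, columns are articles
clicks = {
    "twin_1":  np.array([1, 1, 0, 1, 0]),
    "twin_2":  np.array([1, 0, 1, 1, 0]),
    "outlier": np.array([0, 0, 0, 0, 1]),
}
me = np.array([1, 1, 0, 0, 0])
print(lookalike_recommendations(me, clicks))  # articles my "twins" clicked, here {2, 3}
```

The bubble is visible even in this toy: whatever the dissimilar “outlier” clicked (article 4) never surfaces, so the user is only ever shown more of what people like them already liked.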

All to the point that deepfakes threaten our experience and understanding of truth, and because we spread so much data about ourselves, we humans become easily manipulated – risking a drift towards a society of manipulation in which we become digitally uncritical.

How could we ever break out of this eternal loop of filter bubbles and be exposed to anything other than our existing perceptions and values? How would we ever be inspired to anything even resembling creativity and innovation, far outside our well-trodden mental paths, and far outside the box of our immediate vicinity’s taken-for-granted perceptions and socially accepted values?

Will GAI put the final nail in the coffin for the innovation of mankind?

And this is how it has been for some time now. Slowly but surely, ever since the birth of Arpanet in 1969 and the introduction of the WWW in 1991, with the truly brutal coup d’état of the smartphone launch in 2007 (awaiting the real death knell of AR glasses and lenses).

But here comes the thing. Wasn’t AI’s true breakthrough supposed to be the one helping us humans become more rational and objective, and find the truth in a “humanistic AI” – i.e., a true cross-collaboration between the creativity, ethics and intuition of humans on one hand, and the data-processing logic, analysis and prediction models of AI on the other?

Well, the sad truth is that what has now happened risks becoming something else. Instead of taking our human logic to the highest of heights (i.e., AGI), the AI that has made the entire world take AI to its heart is generative AI (GAI), with LLMs built from transformer models strengthened with reinforcement learning.

That is, we are not leaning on models that seek to maximize the coefficient of determination between a number of independent variables and a dependent one (taking multicollinearity, heteroscedasticity and other interesting biases into account). Instead, mankind fell in love with a model that, based on tons of unmoderated data – where scientific reports and evidence-based data are mixed with tons of toxic and biased data born of subjective thinking, fake news and debate – predicts what the next token (e.g., “letter”) tends to be given the previous ones.

That is, it finds some sort of average of what “John” would have said on the matter. And based on your own feedback (RLHF) it even adds a layer of “learning.” But not learning what is true, rather what YOU think or want to be true – i.e., a “customer-centric solipsist,” trained to give you exactly what you want (which only in rare cases of true curiosity would be anything other than the validation of what you already think is true)…

…i.e., nothing but a brutal amplification of the famous “bubble” of selective perception we dwelled upon in the “neuroscience of creativity” above.
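As a rough, hypothetical sketch of that mechanism – not any actual LLM or RLHF implementation – consider a toy model that simply picks the statistically most likely next token from its training mix, and then bends those statistics further towards whatever the user keeps “liking”:

```python
import random
from collections import Counter, defaultdict

# Toy "corpus": the unmoderated mix the model averages over
corpus = "the earth is round . the earth is flat . the earth is round".split()

# Bigram statistics: which token tends to follow which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev, boost=None):
    """Sample the next token given the previous one.
    `boost` mimics preference feedback: tokens the user has "liked"
    get their counts inflated, so the sampling drifts towards them."""
    counts = Counter(follows[prev])
    if boost:
        for tok, extra in boost.items():
            counts[tok] += extra
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

print(next_token("is"))                      # corpus average: "round" is twice as likely as "flat"
print(next_token("is", boost={"flat": 10}))  # after enough thumbs-up for "flat", the odds flip
```

Even in this toy form the behaviour is the “customer-centric solipsism” described above: the model starts from the average of its corpus and, given enough positive feedback, ends up echoing whatever the user prefers to hear, true or not.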

Again, is this theoretical paranoia or are there any true indications of this in practice?

In an experimental case study published on Horasis Insights I show exactly this. With provocative prompting, I managed to get GPT-4 to constantly adapt to what even a relatively simple NLP model would be able to figure out that I want to hear.

We go from GOD to GAI, with the most seductive technology possible: a personalized Bible, which through the sheer amount of data gives us “a sacred experience of divinity,” but which unfortunately only contributes to an unparalleled megaphone of mutual admiration, giving us the false impression of being right in whatever we want to hear.

Everything we have previously seen of filter bubbles is pure kindergarten level compared to what we are now being sucked into, every single one of us.

To say that our polarization will explode is the understatement of the year.

During the panel discussion, I presented the thesis that this will be the final nail in the coffin of the “escape from freedom” that at least the Western world has been occupied with ever since Nietzsche proclaimed the death of God – i.e., GOD is dead, long live GAI.

Will AI create an A- and B-world? 

And this has consequences far beyond the risk of society losing its propensity for innovation. Because in the process of losing our capacity for critical thinking, eliminating the brain’s exposure to perceptions and values other than those we already have, plasticity will be reduced, fewer new neural pathways will be trodden, and our existing highways will be cemented even more.

This is the very definition of being pacified as a human being, of going from alive to controlled, from dynamic to static. We become inhibited, reactive and passive. More like a machine than a human. And as such, easy to control.

While artificial intelligence (AI) makes machines increasingly creative, innovative, and dynamic, we humans – the organic intelligence (OI) – are becoming conventional, passive, and static.

While the machine is becoming increasingly human, man is becoming more and more machine.

The fourth risk is therefore that we become pacified. If we don’t write texts, program code, make diagnoses, build ML models, and draw pictures – is there not a significant risk that our brains become just as wasted by not “doing the math”? Just as our bodies wither if, instead of working with the body, exercising in the gym, being out in nature and doing adventure sports, we start sitting at home, gaming, watching reality soaps, and ordering pizza?

Well, there are tendencies pointing to the fact that as we in the developed world have stopped working with the body, we have split into a physical A team and B team. On one side we have those who haven’t let the body go at all but have become body-conscious to the point of exaggeration: crossfitters, bodybuilders, and athletes. On the other side, the worldwide obesity rate has nearly doubled since 1980, and today more people have obesity than underweight in every region of the world apart from sub-Saharan Africa and Asia. Worldwide, more than 1 billion people have obesity, including more than 40% of adults in the U.S.

So, yes, if we get the same trend for the brain in the information age that we got for the body in the industrial age, then of course you can see a certain risk of A and B teams here too. On one side we will have those who choose the path of least resistance and let the machines (AI) do all the thought work – just as we previously let machines take over all the body work. On the other side we will have a group of the “brain conscious” who maintain and develop their brain power (OI) – perhaps to the point that it will be the latter who proactively “produce” the new tools while the former reactively “consume” the fruits of them.

So, given this risk of a purely pacified race, is there a way out?

Satya Nadella summarized the general conception when he earlier specified the risks of AI as two: that AI will take our jobs, or that we lose control over AI – either to itself or through abuse if it gets into the wrong hands. As we have seen above, both these risks are well noted, with good awareness among a broad elite in both theory and practice.

But above I have argued that there are a third and a fourth risk, which are at least as dangerous – and precisely because they are not given the same level of attention, I would claim them to be the biggest risks of them all. I.e., that AI is pacifying us as humans, and that as AI itself increases its creative and innovative power, it inhibits our own. And without it, well…

….then we’ll soon be nothing more than some hundreds of septillions of atoms of carbon, nitrogen, and hydrogen, as well as a cocktail of some fat, minerals, and other juicy stuff.

Are there specific methodologies for mitigating the risk?

So, if the evidence for these risks is so obvious and some of us really understand this, what should we do about it? Here the panel’s own creativity began to bubble. One of us said we should all simply stop using AI that risks amplifying our bubbles. Another believed that we should invent an inspirational GAI, one that constantly exposes us to at least one position opposite to our own (if we humans ourselves are not smart enough to selflessly prompt for both the pros and cons of various positions).
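As a minimal, purely hypothetical sketch of what such an “inspirational GAI” could look like in practice, one could wrap every user question in an instruction that forces at least one opposing position into the answer. The wrapper below is illustrative only and does not call any particular model:

```python
def inspirational_prompt(user_question: str) -> str:
    """Wrap a user question so that any GAI answering it is instructed
    to surface at least one well-argued opposing position before
    giving its own synthesis."""
    return (
        f"Question: {user_question}\n\n"
        "Answer in three parts:\n"
        "1. The answer the asker most likely expects.\n"
        "2. At least one serious, well-argued opposing position, with its strongest evidence.\n"
        "3. A short synthesis weighing both sides.\n"
    )

# Example usage: the wrapped prompt is then sent to whichever GAI model one prefers
print(inspirational_prompt("Isn't remote work obviously more productive than office work?"))
```

The design choice is simply that the counter-position is requested by default, rather than left to the user’s own (rare) curiosity to prompt for.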

An even more drastic take came from my Lebanese friend on the panel, who talked about the digital detox retreats he runs for countless entrepreneurs. The entire effort is based on the awareness that you are never at your most innovative in “the fog of war”, when everything is panic-stressed and things simply must be solved. In those situations, the energy-saving human prefers to switch to autopilot, with as much selective perception of the environment as selective reflection along already trampled paths, without looking at anything else.

For three days he therefore takes his entrepreneurs out into nature, without mobile phones, where they can reflect without stress and write down their favorite thoughts. It is then this writing that the entrepreneur picks up when they end up in the fog of war again. My friend said he was not surprised to see how this has contributed to making Lebanon’s entrepreneurs not merely “resilient”, but the step above it – i.e., the opposite of fragile, antifragile: instead of standing still and waiting out the storm, you get stronger from it.

And of course it is true – all of us martial artists know it: “The more you sweat in training, the less you bleed in battle.” And of course this training, this “education” in thinking creatively with an open mind, must be able to help with what Aristotle claimed is precisely the result of a trained mind: “It is the mark of an educated mind to be able to entertain a thought without accepting it.”

Or is the real remedy a true pivot in mindset?

And I think all of this would help. That is, help to mitigate the consequences of what is already happening. But as a true remedy I actually see only one answer. And that is what my blockchain-loving trailblazer of a Korean entrepreneur in the same forum stated: that if humanity is to be able to fight both a coming superintelligence and the pacifying kryptonite she is feeding us, then we must all become superhumans.

It is a thesis similar to one I anticipated some years earlier, where I argue that in “the race between races” we cannot naively believe that we can “stop” development (the “stop using it” that one of our panel friends proposed) – something only some faction of engaged neo-Luddites will, for better or for worse, manage. Nor that we can retreat to some sort of “pause” in development – something that would only mean that other forces, perhaps not all too good, come first in the race.

Participating earlier in the dialogue through articles as well as keynotes and podcasts, I have debated “whether mankind will be smart enough to control machines that are smarter than us”, turning to how we might manage a “race between races” that allows the Organic Intelligence (OI) of mankind to reach an intellectual level even comparable with the Artificial Intelligence (AI) being developed.

If we return to what is happening in the GenAI explosion as we speak, there are only minor indications that the biggest risk is AI falling into the wrong hands or us “losing control” of it. Instead, I claim that while current data innovations are making AI more intelligent, both humanity’s broad use of AI and our “escape from freedom” into gaming and drugs risk making our organic intelligence, our OI, less intelligent.

To mitigate that, I claim the remedy is – in direct contrast to populistic talk that it is good that testosterone in humanity goes down and that we suppress ourselves with drugs, pacifying medicines and gaming – to invest strongly in leveraging our collective OI to the level where it can actually battle AI. With the world’s broadest medium in the history of mankind, the smartphone, we for the first time in history actually have the opportunity to do that, and then that is exactly what we should do.

I.e., the strategy of humanity should not be to “push AI down” but to “push OI up.”

And finally, the real question: will (!) AI have dramatically negative consequences for society?

So, we can conclude that AI has some potential to wipe out some jobs, and that in the wrong hands, or its own hands, it can be misused for control, yes. But could we actually lose control to the point that it will “end humanity”? Perhaps not.

But taking a look at the other two risks, here summarized as “ending the humanity in humans” – could AI make us lose, or at least severely decrease, our potential for creativity and innovation, and in the process pacify all of humanity as a race? Yes, compared to the first two risks, I would say this is an elevated risk with brutally severe outcomes, to say the least.

So of course, the answer is that AI can have dramatically negative consequences on society.

But, as my fellow panelist on the AI panel, Mohammed Lawan Samma, said, “My biggest fear is the fear.” Because, again, I don’t see the question “Can AI have negative consequences?” as the most relevant question.

Instead, the more fruitful question is whether AI will have dramatically negative consequences on society.

And if there is anything the science of technology has shown us, it is that there is no such thing as a deterministic outcome of a certain technology. No value. No evil, no good.

Instead, my argument is that of the hammer. I.e., technology is like a hammer – a hammer you can kill people or build stinking factories with, OR build hospitals and save people with. Nothing in the hammer itself points in any of these directions; what does is the control over it, the intention behind that control and – to a certain degree – the unintended consequences of its application.

So, the question is all about who holds the hammer, where dictators will control their people, terrorists will kill millions of innocent civilians, warlords will make war even more “efficient” and kill even more soldiers, while non-ethical capitalists will make billions on people while destroying the planet. AI will not in itself lead to either positive or negative consequences for humanity; it is who holds the magnificent hammer that the whole question should be about.

Which automatically brings us to the next question, i.e., if WE were to hold the hammer, should we use AI to control our governments and companies (or what should we otherwise do)? More on that in the upcoming article 😉