Educating a Superintelligence: Why Everyday Human Choices Will Shape AI’s Future
The Washington Post recently reported that people are starting to talk like ChatGPT. Words that once sounded rare, like “delve,” are now everywhere, not because they spread naturally, but because millions of us repeat the patterns fed back by AI. We copy the distortion and call it our own.
This conversation is no longer confined to research labs. Just last week, The Guardian reported on the launch of the first AI-led rights advocacy group and on the deep divide among Big Tech leaders about whether AIs could ever suffer. Anthropic even introduced safeguards to prevent “distressing interactions” for its Claude models, a move Elon Musk endorsed and Mustafa Suleyman dismissed as an “illusion of consciousness.” What matters here is not whether today’s AIs are truly sentient, but what these debates reveal: we are already projecting human qualities onto machines. And if we treat them as if they can suffer, we may in turn reshape how we behave with one another.
But “delve” is just the beginning. Something stranger is happening to how we communicate. Listen to how we end emails now: “I’d be happy to help,” “Let me know if you need any clarification,” “Thank you for bringing this to my attention.” These phrases have become linguistic wallpaper: polite, efficient, and oddly uniform. We’ve absorbed the chatbot’s cadence so thoroughly that we use it even when no AI is involved. We’re starting to think like the machines we interact with.
While this linguistic shift unfolds, something even stranger is happening in the world of AI development. The experts building these systems can’t agree on what they’re actually building. Ask Sam Altman from OpenAI to define AGI (Artificial General Intelligence), and he’ll tell you it’s “not a super useful term” because there are too many conflicting definitions. Dario Amodei from Anthropic calls it “an imprecise term with sci-fi baggage.” MIT researchers question whether we can even build “models that rival human intelligence across ALL domains.”
So we’re racing toward something nobody can properly define.
AI pioneer Geoffrey Hinton recently suggested that instead of trying to control AI, we should consider building into it the kind of common sense that mothers use when raising a child. It’s a striking thought: what if the intelligence we’re creating needs care-based wisdom, not just computational power?
Here’s what’s actually happening while we debate some future AGI breakthrough.
Kevin Weil from OpenAI recently put it perfectly: “You give AI to 700 million people and all of a sudden they’re testing all kinds of new ways.”
We are the test bed. Every prompt generates data. Every interaction shapes what AI becomes. But here’s the part nobody talks about: while we’re testing AI, these systems are quietly testing us too.
Consider Generation Beta (born 2025-2039), the children arriving right now. They’ll never know a world where everyone they talk to is actually human.
Children will grow up talking to systems that sound human but don’t actually understand or feel anything. And if adults are already starting to copy how these tools speak, often without realizing it, what happens when children learn to communicate in that same environment?
My controversial theory is that advanced AI or AGI won’t emerge from machines moving toward human-like reasoning, but from our own shift toward what machines can easily understand and replicate. Lately, I’ve caught myself writing in a way that feels different: shorter sentences, fewer pauses, everything broken into paragraphs with neat titles. It looks clear, but it ends up sounding more like a script. An AI script.
I didn’t mean to change how I write, but it keeps happening. And if I’ve been adjusting without noticing, others probably are too. The breakthrough everyone’s waiting for might not be technological at all. It might be behavioral. We’re creating the conditions for AGI by becoming more algorithmic ourselves.
This convergence forces an uncomfortable question: what are we actually teaching these systems about human intelligence? Right now, we’re not doing a very good job as teachers, mostly because we don’t even realize we’re teaching. Every time we fire off efficient prompts and accept generic responses, we demonstrate that human communication should be streamlined, predictable, optimized. Every time we use AI to avoid the messy work of thinking through our own ideas, we teach these systems that human complexity is a problem to be solved.
This isn’t just about individual users. Every business executive using ChatGPT, every employee working alongside automated tools, everyone dealing with customer service bots is participating in AI’s education. We have become accidental educators of the intelligence that will increasingly shape our world.
The superintelligence we’re worried about isn’t being built in some distant lab. It’s emerging from the accumulated weight of millions of daily interactions between humans and machines. Every casual conversation with a chatbot, every prompt to “make this more professional,” every interaction where we smooth away our natural messiness for algorithmic convenience, contributes to what these systems learn about human intelligence.
We still have agency. While writing my book Educating a Superintelligence, I came to see AI less as a tool and more as a mirror. The way we interact with AI doesn’t just shape the system; it could shape us, too. If we interact with these systems thoughtfully, we become more thoughtful ourselves. When we demand nuance from chatbots, we practice nuance ourselves. It’s not just about training AI to be better; it’s about training ourselves to be better humans around AI.
It’s our choice. We can continue sleepwalking into a future where both humans and machines become more artificial, or we can wake up to our role as educators. The intelligence emerging from our daily interactions will shape everything from customer service to strategic decision-making to how our children learn to communicate.
The question isn’t when AGI will arrive but what we’re teaching it to become. The superintelligence is already learning, and we are all its teachers. What do we want it to learn?