
Are we raising Artificial Intelligence well? A wake-up call for parents, creators and citizens

Artificial intelligence is no longer a futuristic concept. It is already shaping the way we search, read, shop, communicate and even think. But after attending a recent conference, Artificial Intelligence – The New Frontiers of Value Creation, organised by the Consulate General of Italy in London at the elegant Nobu Hotel London Portman Square, I left with a question that has stayed with me for days. Not who will control AI. Not even how powerful it will become. The real question is far more uncomfortable: are we raising artificial intelligence well enough to serve humanity, or are we accidentally training it to undermine it?

Artificial Intelligence – The New Frontiers of Value Creation: Prof De Kai and Monica Costa (London Mums Magazine collage)

Why I waited before writing about this

In the days following the event, social media filled with conference selfies, smiling photos with speakers and glamorous shots of the venue. But very little analysis. Those who know me know I prefer to think before posting.

Artificial intelligence is too important a subject to reduce to a handful of hashtags and a few pictures with famous people.

As a journalist and editor who grew up long before AI tools existed, I have experienced creativity in its purest form: writing, editing, researching and refining ideas without algorithmic assistance.

Today I happily use AI to speed up editing tasks such as checking grammar, punctuation and structure. In that sense, AI is a fantastic tool. But the conversations I heard that evening made me realise that the future unfolding before us is far more complex than simply “AI helping humans”.

The alarming reality of algorithmic influence

One of the most striking elements of the discussion was how deeply artificial intelligence already influences our perception of reality. Three statistics illustrate why this conversation matters:

• Studies suggest that more than 70% of the content people see online is filtered or prioritised by algorithms, not human editors.

• Researchers estimate that false information spreads up to six times faster than factual news on social media, often because algorithms reward emotionally charged content.

• Recommendation systems now drive over 80% of what people watch on major streaming platforms, shaping cultural consumption in ways most users barely notice.

These systems are powered by AI. And their incentives are not necessarily aligned with human wellbeing.

Algorithms are designed to maximise engagement, which often means amplifying outrage, fear or controversy. Over time, this can quietly reshape the information environment in which societies function.

“The danger is that humanity destroys humanity”

Among the speakers was AI pioneer Professor De Kai, whose book Raising AI, published by MIT Press, explores a powerful idea: artificial intelligence should not be seen simply as software. It is more like a child humanity is raising. AIs learn from data. They learn from human behaviour online. They learn from the digital environments we create. This means the systems shaping our future are absorbing the values embedded in today’s internet. During his presentation, De Kai offered a stark warning:

“The danger is that humanity is going to destroy humanity before AIs even get the chance to.”

It is a statement that initially sounds dramatic. But when one considers how misinformation, polarisation and digital manipulation already affect public discourse, the warning becomes harder to ignore.

Is AI just a tool?

Another compelling perspective came from Federico Marchetti, a pioneer in digital innovation and sustainable fashion. He reminded the audience that the story of civilisation is largely the story of tools. Human progress has always depended on technological breakthroughs: from the engineering techniques Brunelleschi used to raise his Renaissance dome in Florence to the industrial innovations of companies like Olivetti. AI, he argued, began as another powerful tool. But something about it feels different. Unlike previous tools, artificial intelligence has the ability to improve itself through machine learning, sometimes with minimal direct human intervention. That raises a critical question: is AI simply a tool?

Or is it becoming something more?

Marchetti also offered a reassuring insight for creative industries.

In the future, he suggested, true value will come from those who invest in human creativity, craftsmanship and artisanship.

Technology may accelerate processes, but authenticity remains deeply human.

Can AI have ethics?

The philosophical dimension of the debate emerged through the reflections of Paolo Benanti, professor at Luiss Guido Carli University. His perspective was strikingly simple: Artificial intelligence does not have ethics. Humans do. Technology itself cannot be moral or immoral. Ethics lies in the choices made by those designing, deploying and regulating these systems.

Father Benanti also spoke about the “displacement of power” that technology can create, shifting influence away from traditional institutions and towards digital infrastructures that operate largely behind the scenes. And that raises another unsettling question:

Are we still living in the same reality we once did? Or is technology gradually reshaping the very nature of reality itself?

Why this matters for parents

For readers of London Mums Magazine, the implications are profound. Children growing up today will experience a world where AI influences almost every aspect of daily life:

• what information they encounter online

• what videos appear in their feeds

• what news reaches them first

• what recommendations shape their tastes.

This means that teaching children critical thinking is becoming more important than ever. Digital literacy is no longer just about knowing how to use devices. It is about understanding how technology shapes perception. Parents may not need to become AI experts, but they can encourage curiosity, questioning and independent thought: qualities that remain uniquely human.

The real question: How do we raise AI well?

After the conference I had the chance to speak with Professor De Kai and learn more about his ideas. The conversation ended with an unexpected decision: I signed up to help create a London-based network of critical thinkers he informally calls the “De Kai Army.” Despite its dramatic name, this is not a militant movement.

It is a peaceful community of people who believe that the future of AI must be shaped by human wisdom, ethics and responsibility.

Because the most important question is not who controls artificial intelligence. The real question is:

How can we raise AI well so that it improves our lives instead of quietly damaging our societies?

The answer will depend on the choices we make today.

Our values.

Our priorities.

Our willingness to think critically.

Artificial intelligence may be the most powerful technology humanity has ever created. But the true value of AI will ultimately depend on something far older: human judgement.

AI is here to stay – but so are we

The future of artificial intelligence has not been written yet.

It will be shaped by innovators, policymakers, educators, parents and citizens alike. That means we all have a role to play. AI can help solve extraordinary challenges, from medical research to climate science. But it must remain what it was meant to be:

a tool that serves humanity, not a system that quietly reshapes it without our awareness.

Three realisations I had after the AI conference

After several days reflecting on the discussions at the conference, three thoughts kept returning to me.

1. Artificial Intelligence is already shaping reality. Most people still think of AI as something futuristic. But in reality, it already shapes what we see online every day: the news stories we encounter, the posts that appear in our feeds, the products we are recommended and the information that reaches us first. The influence is subtle, but it is everywhere.

2. The real risk is not superintelligence. It is manipulation. When people talk about AI risks, they often imagine science-fiction scenarios. But the real danger may be much simpler: systems that quietly amplify fear, division and misinformation because those emotions keep people clicking. Technology does not need to become conscious to change society. It only needs to shape the information we consume.

3. The future of AI is still being written. Despite the concerns, one message from the evening was ultimately hopeful. Artificial intelligence is not an unstoppable force beyond human influence.

The systems being built today are still learning from us.

Which means the future of AI will reflect the choices humanity makes now: our values, our priorities and our willingness to think critically.