
From Social Care to Social Change: Revolutionising Online Safety

When Geraint Thomas first stepped into the world of social care, he wasn’t chasing titles or bigger pay cheques – he was driven by a simple but powerful mission: to make a lasting impact. For over two decades, he dedicated himself to supporting people often left behind by traditional systems. But there was another passion that always ran alongside his work in care: a deep love for technology.

It was this unlikely pairing – social care and tech – that ultimately led him to help shape one of the most innovative digital platforms for people with learning disabilities and autism. More importantly, it sparked something revolutionary: the creation of an AI-powered safeguarding system that doesn’t just protect users, but empowers them to thrive online.

This is the story of how a chance encounter with Big Life Adventure – introduced to him through the company he was working with – inspired Geraint to jump on board as CTO, and how that move led him to create Big Life Buddy, the AI safeguarding system that would transform online safety for vulnerable communities.

A Heart in Care, A Mind in Tech

Q: Can you tell us about your journey to Big Life Adventure?
A: I’ve spent over two decades in social care – it’s where my heart truly lies. Sure, I could have worked in other sectors and earned more money, but I’ve always been committed to helping those who need it most. At the same time, I’ve always been a tech enthusiast – I love computers and gadgets. When I started working in care, it became clear to me that the sector was falling behind in terms of technology. That’s when I decided to make it my mission to help organisations catch up, with technology that actually worked for them.

Over the years, I moved into roles such as Head of Digital and Head of Transformation. Eventually, I launched my own company, Guided Innovation, to extend that work and bring technological improvements to the sector on a larger scale.

Q: How did Big Life Adventure come into the picture?
A: It came to me through the company I was working with at the time. When Big Life Adventure was presented to me, it immediately clicked with everything I’d been working towards – a digital platform built for people with learning disabilities and autism, designed with care from the start. I jumped on board as CTO.

From Moderation to Prevention: Rethinking Online Safety

Q: When did AI safeguarding become part of the vision?
A: The idea of AI safeguarding actually came from me, and it was rooted in real concern. I’ve been an advocate for AI for years, even before it became mainstream. I’ve always seen it as a game-changer for decision-making, connection, and support. But at the same time, I’ve always been sceptical about the big social media platforms.

Their business model is designed to keep users online as long as possible – more clicks, more likes, more scrolling – all just to serve more ads. Even harmful behaviour gets rewarded because it drives engagement. One day, it hit me: “You’re in a position now where you can build a platform like this your way.” That’s when I began seriously considering AI as a safeguarding tool.

Q: What makes AI safeguarding different from traditional online safety measures?
A: The key challenge for any online platform – especially one for a vulnerable community – is ensuring safety in real time.

Traditional moderation systems rely on users reporting issues after they’ve happened. By the time something’s been flagged, someone might already be hurt or upset. It’s not realistic to have moderators reviewing every post and comment as it happens unless you have a massive team working around the clock. But Big Life Buddy can step in instantly. It’s like a gentle, invisible guardian that reads every interaction as it occurs. Instead of simply blocking harmful content, it intervenes to offer support when it’s needed most.

Big Life Buddy can identify bullying, inappropriate content, or even signs that someone might be distressed. But rather than simply removing content, it provides helpful suggestions or connects people with support.
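
To picture how that kind of real-time screening might work under the hood, here is a minimal sketch in TypeScript. Everything in it – the function names, the safety categories, the threshold – is an illustrative assumption, not Big Life Buddy’s actual code.

```typescript
// Illustrative sketch only: `classifyMessage`, the categories, and the
// threshold are assumptions, not Big Life Buddy's real API.

type SafetyCategory = "ok" | "bullying" | "inappropriate" | "distress";

interface ScreeningResult {
  category: SafetyCategory;
  confidence: number; // 0..1, how sure the model is
}

// Stand-in for a call to an AI moderation model.
declare function classifyMessage(text: string): Promise<ScreeningResult>;

type Action =
  | { kind: "publish" }                              // nothing concerning found
  | { kind: "suggestRephrase"; explanation: string } // gentle pre-publish nudge
  | { kind: "offerSupport"; resources: string[] };   // possible signs of distress

const INTERVENE_THRESHOLD = 0.7; // assumed tuning knob

// Screen every message *before* it is shared, instead of waiting
// for other users to report it afterwards.
async function screenBeforePublish(text: string): Promise<Action> {
  const result = await classifyMessage(text);

  if (result.category === "distress") {
    return {
      kind: "offerSupport",
      resources: ["Talk to someone you trust", "Contact the support team"],
    };
  }

  const harmful =
    result.category === "bullying" || result.category === "inappropriate";
  if (harmful && result.confidence >= INTERVENE_THRESHOLD) {
    return {
      kind: "suggestRephrase",
      explanation: "This message might upset someone. Would you like help rewording it?",
    };
  }

  return { kind: "publish" }; // err on the side of letting people speak
}
```

The design point the sketch tries to capture is the order of operations: the check runs before anything is published, which is what lets the intervention be supportive rather than punitive.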

Empowerment, Not Policing

Q: How do you make AI safeguarding feel supportive rather than intrusive?
A: That’s a great question – and it’s absolutely central to what we do.

We never wanted Big Life Buddy to feel like a watchdog. It should feel like someone on your side.

For example, if someone types a message with bullying or harassment, Big Life Buddy steps in before it’s shared publicly. Instead of just blocking the message, it gently explains why it might not be okay and suggests a better way to phrase it. It might even recommend speaking to someone trusted if the person seems upset or frustrated.

Big Life Buddy steps in at the moment of creation, not after damage is done. It’s like having a thoughtful friend who taps you on the shoulder and says, “Maybe try saying it this way instead?” We worked hard to make the tone friendly and encouraging – not controlling.
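
Continuing the sketch above – and with the same caveat that these hooks (`askUser`, `post`, `reopenEditor`) are hypothetical stand-ins, not the platform’s real implementation – the “tap on the shoulder” moment could look something like this: the message is held at the point of creation, the member sees a friendly suggestion, and they stay in control of what finally gets posted.

```typescript
// Hypothetical platform hooks – assumptions for illustration only.
declare function askUser(prompt: string, options: string[]): Promise<string>;
declare function post(text: string): Promise<void>;
declare function reopenEditor(draft: string): void;

interface Suggestion {
  reason: string;    // plain-language explanation of the concern
  rewording: string; // a kinder way to say the same thing
}

// Hold the message at the moment of creation and offer a friendly choice.
// Nothing is ever posted without the member's say-so.
async function interveneBeforePosting(draft: string, s: Suggestion): Promise<void> {
  const choice = await askUser(
    `${s.reason}\nMaybe try saying it this way instead?\n"${s.rewording}"`,
    ["Use the suggestion", "Edit my message", "Talk to someone I trust"],
  );

  if (choice === "Use the suggestion") {
    await post(s.rewording); // the kinder version goes out
  } else if (choice === "Edit my message") {
    reopenEditor(draft); // hand the draft back so the member can rework it
  }
  // "Talk to someone I trust" (or dismissing the prompt) posts nothing:
  // the original message never reaches anyone while it could still hurt.
}
```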

Q: What role does AI play beyond just safety?
A: This is an important point – because for me, AI was never just about safety. It was about something bigger.

I want Big Life Buddy to notice if someone seems lonely or disconnected. I want it to nudge people toward new hobbies or interests. If one person loves rock climbing, and someone else posts about it frequently, I want those people to connect. But we need to be careful. The last thing we want is for anyone to feel like they’re being watched. The goal is to help people connect, thrive, and enjoy their online experience.
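
That balance – suggesting connections without anyone feeling profiled – could be approached with something as simple as opt-in, self-declared interests. A sketch, again in TypeScript and again purely illustrative; the data shapes are assumptions rather than the platform’s real model.

```typescript
// Illustrative only: connect people who share an interest, but never
// consider anyone who hasn't explicitly chosen to take part.

interface MemberProfile {
  id: string;
  optedIntoSuggestions: boolean; // consent first: no opt-in, no matching
  interests: Set<string>;        // self-declared tags, e.g. "rock climbing"
}

// Suggest connections only between members who share an interest
// and who have *both* opted in to suggestions.
function suggestConnections(
  member: MemberProfile,
  others: MemberProfile[],
): MemberProfile[] {
  if (!member.optedIntoSuggestions) return [];

  return others.filter(
    (other) =>
      other.id !== member.id &&
      other.optedIntoSuggestions &&
      [...member.interests].some((i) => other.interests.has(i)),
  );
}
```

Keeping the matching to declared interests, gated on mutual opt-in, is one way to make the “nudge” feel like an invitation rather than surveillance.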

Safety by Design

The conversation with Geraint reveals a profound vision for the future of online safety. Big Life Adventure isn’t just another social platform; it’s a glimpse into what the internet could look like if designed with care and empathy from the start.

In most social platforms, safety is an afterthought. Moderation is reactive, punitive, and often too late. But with Big Life Buddy, safety is the starting point: a baked-in support system designed to intervene before harm, empower instead of punish, and connect instead of isolate.

For individuals with learning disabilities and autism, Big Life Adventure offers a much-needed space that acknowledges their right to participate fully in the digital world. It’s proof that with the right mindset and the right tools, we can build an online world where everyone belongs – and where everyone is safe enough to be themselves. And when people feel safe enough to be themselves, that’s when the real magic of community and connection happens.

What’s Next?

As Geraint and the Big Life Adventure team set out to create a safeguarding tool that felt supportive and human-centred, one question guided every decision: In a world where online harm can be subtle, fast-moving, and deeply personal, how do you build a space that’s not only inclusive – but truly safe?

In our next article, we go behind the scenes with Geraint – one of the minds behind this pioneering approach – to explore how AI can protect without policing, teach without shaming, and respect without compromise. And just as importantly, we look at the challenges of building a tool that gets it right.