This is what I’ve built. Each initiative addresses a different layer of the same structural problem: the people experiencing technology’s consequences have no infrastructure to make those experiences count.
SHIELD exists because the people closest to online harms are developing responses that work in their contexts. Community organizers understand local vulnerabilities to online exploitation. Parents navigate AI tutors and chatbot companions with their children. Advocates design technology that serves rather than exploits. Youth leaders teach critical thinking.
But these initiatives remain disconnected from each other and invisible to the rooms where tech decisions get made. Well-resourced policy conversations happen without the people who know what actually works on the ground.
SHIELD bridges that gap. We connect grassroots leaders to each other, help proven initiatives access resources to scale, and ensure their expertise shapes the technology that shapes their lives.
What we’ve built:
What this solves: Isolation. Practitioners working on digital safety are often the only person in their organization, their region, or their language doing this work. SHIELD is the network that allows them to find each other, learn from each other, and build together.
SIGNAL is SHIELD’s response to an intelligence failure in online safety.
Practitioners see patterns of harm every day, but these observations are fragmented and stay local. They don’t aggregate into the kind of evidence that decision-makers are required to act on.
SIGNAL is a system that allows practitioners to report what they’re seeing in standardized formats that can be aggregated across communities, geographies, and contexts. The data flows back to the communities that generate it as usable analysis, and builds a global evidence base for corporate risk assessments, regulatory bodies, and policy processes.
Current status: In development with practitioner partners across four continents.
What this solves: Companies conduct risk assessments using internal data, academic proxies, and filtered advocacy. The knowledge that would make those assessments credible sits with practitioners, parents, educators, and community builders.
SIGNAL is the mechanism that allows that knowledge to travel.
AI is changing how we learn, work, and live, but most people haven’t been equipped to navigate it. GEN:R gives them the frameworks to engage with AI safely, ethically, and with confidence.
Developed in collaboration with Lena Chauhan, Dr. Caitlin Bentley of King’s College London, and the UKRI Centre for Doctoral Training in Safe and Trusted AI, our programs are academically grounded, CPD-certified, and current as of 2026.
What we’ve created:
What this solves: The preparation gap. We’ve invested heavily in protecting people from AI. We’ve invested far less in preparing them to use it responsibly. GEN:R builds that capacity.
In today’s AI-driven digital world, keeping children safe has become dramatically harder for parents. We have to protect children from cyberbullying, online predators, inappropriate content, apps misusing data, and misinformation, in an environment where AI is breaking down traditional walls of protection at an unprecedented pace.
Data Girl, Ayla AI Girl, and the Everyday Digital Defenders are designed as tools for parents to discuss key issues related to AI and online safety with their children. Our approach builds on children’s existing knowledge and experiences, fosters critical thinking, and builds a relationship of trust between adults and young people.
What we’ve created:
What this solves: The gap between what children encounter online and what adults have prepared them for. Parents need tools that work with shorter attention spans and build understanding through conversation, not lectures.