ANGELINE CORVAGLIA

My Work

This is what I’ve built. Each initiative addresses a different layer of the same structural problem: the people experiencing technology’s consequences have no infrastructure to make those experiences count.


A global network working to make the online world safer.

SHIELD exists because the people closest to online harms are developing responses that work in their contexts. Community organizers understand local vulnerabilities to online exploitation. Parents navigate AI tutors and chatbot companions with their children. Advocates design technology that serves rather than exploits. Youth leaders teach critical thinking.

But these initiatives remain disconnected from each other and invisible to the rooms where tech decisions get made. Well-resourced policy conversations happen without the people who know what actually works on the ground.

SHIELD bridges that gap. We connect grassroots leaders to each other, help proven initiatives access resources to scale, and ensure their expertise shapes the technology that shapes their lives.

What we’ve built:

  • A global network of 500+ practitioners, organizations, and advocates from 40 countries
  • Annual Global Online Safety Conference (80 speakers, 3 days, 25 countries represented in 2026)
  • A 111-page reference document synthesizing practitioner observations, what’s working, what’s failing, and why
  • Project Catalyst Initiative connecting proven community solutions with resources to scale
  • Voices of SHIELD Program creating platforms for grassroots leaders to share expertise where it influences outcomes
  • A curated newsletter and reference materials documenting frameworks, tools, and evidence from the field

What this solves: Isolation. Practitioners working on digital safety are often the only people in their organization, their region, or their language doing this work. SHIELD is the network that allows them to find each other, learn from each other, and build together.


SIGNAL: Turning ground-level observations into evidence that decision-makers cannot ignore.

SIGNAL is SHIELD’s response to an intelligence failure in online safety.

Practitioners see patterns of harm every day, but these observations are fragmented and stay local. They don’t aggregate into the kind of evidence that decision-makers are required to act on.

SIGNAL is a system that allows practitioners to report what they’re seeing in standardized formats that can be aggregated across communities, geographies, and contexts. The data flows back to the communities that generate it as usable analysis, and builds a global evidence base for corporate risk assessments, regulatory bodies, and policy processes.

Current status: In development with practitioner partners across four continents. 

What this solves: Companies conduct risk assessments using internal data, academic proxies, and advocacy filtered through intermediaries. The knowledge that would make those assessments credible sits with practitioners, parents, educators, and community builders.

SIGNAL is the mechanism that allows that knowledge to travel.


CPD-accredited AI literacy training for students, educators, parents, and professionals.

AI is changing how we learn, work, and live, but most people haven’t been equipped to navigate it. GEN:R gives them the frameworks to engage with AI safely, ethically, and with confidence.
Developed in collaboration with Lena Chauhan, Dr. Caitlin Bentley of King's College London, and the UKRI Centre for Doctoral Training in Safe and Trusted AI, our programs are academically grounded, CPD-certified, and current as of 2026.

What we’ve created:

  • AI & the Future of Work: CPD-accredited programs helping students and young professionals understand how AI is shaping careers
  • AI Literacy for Schools: Year-group programs giving students practical frameworks for safe, ethical AI use while helping parents and schools align
  • AI Essentials for Educators: CPD-certified training focused on responsible AI use educators can start using today, available self-paced or facilitated

What this solves: The preparation gap. We’ve invested heavily in protecting people from AI. We’ve invested far less in preparing them to use it responsibly. GEN:R builds that capacity.


Empowering families to safely and confidently navigate the AI-filled digital world.

In a digital, AI-driven world, keeping children safe has become dramatically harder for parents. We have to protect children from cyberbullying, online predators, inappropriate content, apps that misuse data, and misinformation, all in an environment where AI is breaking down traditional walls of protection at an incredible pace.

Data Girl, Ayla AI Girl, and the Everyday Digital Defenders are designed as tools for parents to discuss key issues related to AI and online safety with their children. Our approach builds on children's existing knowledge and experiences, fosters critical thinking, and builds a relationship of trust between adults and young people.

What we’ve created:

  • Bytes of Digital Adventures podcast: Stories about AI, privacy, and safety for kids
  • Discovery Squad materials for children stepping into the digital world for the first time
  • Digital Navigators materials for youth already navigating the digital landscape
  • Classroom lessons and activities tailored to age and digital experience
  • Family workshops organized with community groups
  • Train-the-trainer sessions for educators and community leaders
  • Educational videos and short stories

What this solves: The gap between what children encounter online and what adults have prepared them for. Parents need tools that work with shorter attention spans and build understanding through conversation, not lectures.


This work addresses the same question at different scales: who gets to shape the technology shaping our lives?