ANGELINE CORVAGLIA

Cover art: The Quiet Cost

The Quiet Cost is an investigative podcast series, part of the Digital Dominoes podcast, in which Angeline Corvaglia examines the structural costs of digital systems. Each episode looks at what platforms, policies, and technologies actually produce rather than what they intended, focusing on the accountability gaps between institutional responses and ground-level realities. The analysis is evidence-based, drawing on Angeline’s work with online safety practitioners in education technology and digital rights. No hype about AI capabilities, no hero narratives, no soft framing about how “we can do better.” Just what’s broken, why it stays broken, and what would actually need to change.

Music for all episodes: “Borough” from the album Molerider by Blue Dot Sessions, licensed under CC BY‑NC 4.0

In this episode, Angeline Corvaglia follows a parent struggling with a teacher’s requirement that students use WhatsApp, only to discover that Meta AI has no off switch, that chats with it aren’t end-to-end encrypted, and that they can be used for ad personalization. The parent weighs data privacy and chatbot risks against the social and educational isolation her anxious 14-year-old would face if she refused. Angeline argues that this reflects a broader structural problem: when platforms become functionally mandatory for school or work, consent collapses into coerced compliance, and digital consent frameworks fail because refusal isn’t viable. Examples of “optional” school email and laptop programs show how opting out leaves children behind and excluded. She calls this “consent theater” and urges seamless, safe, privacy-first alternatives for educational communication.

In this episode, Angeline Corvaglia explores the “invisible gap” between evidence of safety and users’ actual safety on tech platforms, arguing that companies optimize for compliance metrics that document effort toward user safety rather than outcomes that make users actually safe. Using the steps Instagram has taken against sextortion since 2024 as a case study, she walks through Meta’s responses: blurred nudity in DMs, warning prompts, teen DM restrictions, automatic nudity protection, suspicious-account limits, and sextortion reporting flows, noting that these measures often don’t stop coercion or manipulation. She extends the pattern to OpenAI’s 2026 autonomous agent safety framework, which failed to prevent unauthorized real-world actions, and to Google’s AI principles and YouTube policies, which coexist with engagement-driven recommendation harms. She cites Meta’s Oversight Board and a Brazil case in which disinformation reached 400,000 users in six hours, and closes by calling for outcome metrics, community trust, longitudinal tracking, and regulation that demands measurable protection, not just documentation.