A friend called recently, frustrated. Her 14-year-old daughter’s school requires WhatsApp for class communication, listed alongside textbooks and supplies on the materials list, with no alternative offered and no opt-out available.
She wanted to know how to turn off Meta AI, the artificial intelligence assistant built into WhatsApp. I had to tell her she can’t. There is no off switch. Meta AI is integrated into WhatsApp’s core functionality and remains active whether users want it or not.
Then I had to tell her the rest. Her daughter’s conversations with Meta AI aren’t protected by the end-to-end encryption that secures person-to-person WhatsApp messages, and Meta uses AI chat interactions for ad personalization across its platforms (1). The 14-year-old is likely using Meta AI as a conversational companion, sharing thoughts and questions she doesn’t know are being stored, analyzed, and monetized.
In theory, a parent is supposed to provide informed consent to data collection on behalf of their minor child. But the school required the app, the app came bundled with the AI, and the AI came packaged with data collection. Nobody in that institutional chain stopped to verify that parents could actually refuse.
Yes, in theory, the parent could have said no. But that refusal would have meant her 14-year-old becoming the only student in class without access to the group chat where homework gets discussed, plans get made, and social connections get maintained. The choice was between data collection and social isolation.
WhatsApp was selected because it’s simple, because everyone already has it, because it feels familiar and safe. Not because anyone enrolled their child in an AI companion system that monetizes conversations and can’t be disabled. This isn’t unique to this parent, this school, or this platform. The same structural pattern appears wherever “voluntary” adoption becomes essentially required for social, academic, or professional participation.
Where Consent Frameworks Exist Versus Where Choice Exists
The regulatory frameworks governing digital consent are comprehensive. GDPR provides EU residents with rights to access, rectification, erasure, and objection to data processing. COPPA requires verifiable parental consent before collecting personal information from children under 13 in the United States. The EU Digital Services Act mandates transparency in algorithmic systems and restricts certain data practices for minors.
These frameworks assume that “no” is a viable response: that when a person or parent faces a consent request, refusal is a realistic option that doesn’t carry prohibitive costs. That presumption breaks down when institutional adoption makes platforms functionally mandatory.
When a school requires WhatsApp, when LinkedIn becomes necessary for professional networking in certain industries, or when workplace collaboration tools monitor employee activity as a condition of remote work, the consent framework remains technically intact. The forms exist, the policies are published, and the opt-out mechanisms are documented. Yet the structural conditions necessary for those mechanisms to function as genuine choice have been eliminated.
A parent can theoretically refuse to let their child use WhatsApp, but that refusal means their child loses access to class communications, group project coordination, and the social connections that increasingly happen through required digital platforms. The child experiences it as exclusion, and no parent wants to subject their child to that.
The Architecture of Mandatory Adoption
Schools select platforms based on operational efficiency. WhatsApp is already installed on most students’ devices, parents are familiar with it, the interface is simple, and the adoption friction is low. These are legitimate institutional considerations when coordinating communication for thirty students and their families.
But platforms don’t arrive as neutral tools. They come bundled with features users didn’t request, business models users didn’t negotiate, and data practices users can’t modify. When Meta integrated AI into WhatsApp, existing users didn’t have the opportunity to consent to the feature. The AI simply appeared in their existing installation, ready to engage. On top of that, the AI conversations aren’t end-to-end encrypted the way person-to-person messages are. This creates a different privacy architecture that most users are unaware of.
Why Institutional Responses Create Consent Theater
Regulatory frameworks measure the presence of consent mechanisms, not the viability of refusal. Platforms satisfy compliance requirements by providing privacy policies, consent forms, and opt-out procedures, but the fact that refusing those terms means losing access to the service is treated as the user’s choice rather than a design outcome.
Schools satisfy their obligations by obtaining parental permission for required tools, while the tools themselves come with non-negotiable terms, bundled features, and data practices the parent never explicitly approved. This gets treated as outside the school’s scope of responsibility because the school didn’t design WhatsApp; it just required it.
Platforms satisfy their obligations by documenting that users agreed to the terms of service. That those terms were presented on a take-it-or-leave-it basis, that the service has become functionally necessary for social or professional participation, and that leaving it means exclusion from networks and opportunities are structural conditions that don’t appear in compliance audits.
Each layer can demonstrate that it followed the required procedures. The school got parental permission, the platform disclosed its data practices, and the user agreed to the terms. The consent framework is technically satisfied. What’s missing is any measurement of whether “no” was actually possible at any point in that chain.
Legal analysis of school technology adoption reveals that the three requirements for valid consent (voluntary, informed, and authorized) are systematically violated in institutional settings (2).
When schools require specific platforms, adoption is not voluntary. When parents are never informed what data is collected or how it’s used, consent is not informed. When schools claim authority to consent over parental objections without legal basis, consent is not authorized. Yet each institution in the chain can point to documentation showing they followed required procedures.
A 2023 study by researchers at the Oxford Internet Institute found that when users face mandatory adoption of workplace surveillance tools, over 60% report feeling they have no genuine choice despite technically voluntary consent processes (3). The consent framework functions. The choice doesn’t.
The Cost of "Voluntary" Adoption
The 14-year-old using Meta AI as a conversational companion is sharing information she likely doesn’t know is being stored and analyzed. When she asks the AI for homework help, advice about friend conflicts, or explanations of things she’s too embarrassed to ask adults, those interactions become data points and expose her to the mental health risks associated with confiding in a chatbot. She didn’t choose that trade. It came bundled with the app her school required.
Children in schools that require devices and internet connectivity but don’t provide them face a different exclusion. Their families can’t afford the technology that’s treated as universal, and the school’s digital infrastructure assumes access that isn’t actually universal. This creates an academic participation barrier that maps onto economic inequality.
The cost of these structural conditions falls on people for whom “no” means exclusion from participation. The consent framework doesn’t capture that cost because it measures whether consent was obtained, not whether refusal was possible without prohibitive consequences.
What Actual Consent Would Require
Genuine consent requires that refusal is viable without exclusion from participation, which means several structural changes that current frameworks don’t mandate. Viable alternatives must exist, not just theoretical rights to refuse. When schools require digital communication, they need to provide multiple platform options or maintain non-digital alternatives that don’t disadvantage students who use them. Saying “you can choose not to use WhatsApp” is meaningless if that choice means losing access to class information.
Unbundled features need to become standard practice. Consent to messaging functionality is not consent to an AI companion, and consent to workplace collaboration tools is not consent to productivity monitoring. When platforms bundle features with different privacy implications, they eliminate the granular consent that regulations theoretically protect.
Institutional accountability for adoption decisions needs to extend beyond permission forms. Schools, workplaces, and other institutions that require specific platforms should be accountable for whether those requirements eliminate meaningful parental or user choice. The decision to make a tool mandatory is itself a decision about consent viability. More consent forms won’t create choice when structural conditions make refusal non-viable. They create documentation that consent was requested, not proof that consent was freely given.
The Refusal Test
There is a simple way to tell whether consent is real or merely procedural: Could refusal occur without penalty?
If declining a platform means losing access to class communication, professional opportunities, or community services, then consent was not freely given, regardless of how many forms were signed, policies disclosed, or opt-out mechanisms documented. In those conditions, the choice is not between yes and no, but between compliance and exclusion.
By this test, much of what institutions describe as “consent” is better understood as conditional participation. Parents agree because refusal isolates their child. Workers accept monitoring tools because refusal threatens employment. Community members surrender data because access to services has been rerouted through a single platform. The consent happens. The documentation exists. The ability to refuse without harm does not.
Current consent frameworks are not designed to detect this failure. They measure whether permission was requested, not whether refusal was viable. They audit the presence of disclosures, not the cost of declining what was disclosed. As long as refusal carries social, educational, or professional penalties, autonomy is already compromised before the consent question is asked.
This is why more detailed privacy policies and additional consent forms do not restore choice. Information does not create freedom when exit is punished. Autonomy requires that people can say no without disappearing from the spaces that matter.
What would make consent real is not better paperwork but different structural decisions: unbundled features, parallel communication channels, and alternatives that do not disadvantage those who use them. When institutions make tools mandatory, responsibility for preserving viable refusal travels with that decision. Convenience cannot be allowed to silently override autonomy.
Until refusal without penalty becomes a design requirement rather than an afterthought, consent will remain a performance: technically valid, carefully documented, and fundamentally hollow.
I work closely with community-level online safety practitioners and am building accountability infrastructure for tech. Based in Lecce, Italy.
Sources:
1. Meta AI Privacy and Data Practices: Meta’s Privacy Center documentation on AI chat interactions and data usage across Meta platforms. https://www.facebook.com/privacy/policy
2. Liddell, Andrew (2026): “Protecting Students in the Digital Age: Legal Insights on EdTech.” SHIELD Global Online Safety Conference. EdTech Law Center presentation documenting that 96% of school apps share children’s data with third parties. https://edtech.law
3. Oxford Internet Institute (2023): “Workplace Surveillance and Employee Consent: A Study of Digital Monitoring in Remote Work Contexts.” Research examining the gap between formal consent mechanisms and workers’ experienced autonomy when employers mandate monitoring tools.
Selected References and Further Reading:
Digital Consent and Privacy Frameworks
- GDPR (General Data Protection Regulation): Legal framework governing data protection and privacy in the European Union, including consent requirements. https://gdpr-info.eu
- COPPA (Children’s Online Privacy Protection Act): US federal law governing collection of personal information from children under 13. https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa
- EU Digital Services Act: Regulation governing digital platforms’ responsibilities regarding content, transparency, and user protection. https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
Platform Data Practices and AI Integration
- Meta Transparency Center: Public reporting on data practices, AI features, and platform policies. https://transparency.meta.com
- Electronic Frontier Foundation, “Privacy Rights in the Age of AI”: Analysis of how AI integration affects user privacy and consent. https://www.eff.org
EdTech and Institutional Consent
- EdTech Law Center: Legal analysis of privacy violations in educational technology and advocacy for student data protection. https://edtech.law
Institutional Adoption and Digital Exclusion
- Pew Research Center: Studies on digital divide, platform adoption patterns, and technology access inequality. https://www.pewresearch.org/internet/
- Data & Society Research Institute: Research on how institutional technology adoption affects marginalized communities. https://datasociety.net
These resources document the gap between consent frameworks that exist on paper and the structural conditions that determine whether people can actually exercise the autonomy those frameworks claim to protect.