Family Connect
In the world of protecting kids and teens online, so much discussion today is focused on how to best identify underage users. But even assuming you’ve identified them properly, there remains a critical and often overlooked question: what do you do next?
A frequent refrain in the world of kids’ online privacy and safety is that “self-declaration” – i.e., asking a user directly to volunteer their age information – is generally ineffective, due to the ease with which a child or teen can lie about their age to access content or features that are not age-appropriate.1 A whole host of technological “age assurance” solutions have emerged in recent years to try to combat the problem of kids lying about their age.
But let’s assume for the sake of this post that you’ve solved the age assurance problem. You now know with a reasonable degree of confidence who your adults are, who your teens are, and who your kids are. Ok…now what?
Broadly speaking, we’re seeing the industry respond to the existence of kids on their platforms in a few ways, some better than others. We’ll go through each option in turn.
The first option, doing nothing with the age information you’ve learned, is obviously the riskiest approach. To use the US as one example: even if a service is NOT considered “child-directed” under COPPA, the law still applies if an operator learns (through self-declaration, age assurance, or some other method) that a specific user on their otherwise adult-oriented service is a child.2 The FTC enforced against Yelp in 2014 based on this principle: even though the FTC acknowledged that Yelp’s app was not intended for or even especially appealing to children, it nonetheless alleged that Yelp had actual knowledge that some users were underage and failed to take action: “People who registered on the app were asked for a date of birth, but regardless of what they entered, the Yelp app allowed them to sign up and gave them full access to all features.” This Yelp precedent is almost a decade old now, but its lesson is still crystal clear: failing to act on actual knowledge of children on your platform constitutes a legal violation.
The second option is to simply kick underage users off the service once they are identified. Companies that take this approach often argue that providing an age-appropriate experience would be too expensive or cumbersome, given that the underage users are not their “intended audience.” Putting aside that in many industries (e.g., video games and social media), kicking out underage users is likely to be bad business – it alienates what might be a game’s most engaged and vocal demographic, and violates a number of platform policies3 – this strategy is also legally risky in many cases. For example:
Of course, there is nuance here: there are obviously some experiences that children can and should be blocked from participating in. (No one is saying that children need to be able to access things like dating services, for example!) That said, the bar for “likely to appeal to children” may be lower than many companies expect. Looking at some of the FTC’s past enforcements is instructive:
As part of each of these settlements, the companies involved had to comply with COPPA’s parental notice and consent requirements – they couldn’t simply kick out the kids and call it a day.
The third option, and the one we believe is best, is dynamically tailoring the online experience based on each child’s age and jurisdiction. A number of video game companies already do this to some degree.
Even for companies taking this approach, though, there are nuances and open questions: for example, how many jurisdictions and local laws does your internal compliance logic support? How do you adapt when laws change? Do you simply provide a “child-lite” experience where anything risky for children is turned off, or do you actually loop in the parent and get their consent to individual game features? What happens when a user has a birthday and is no longer considered a child in their jurisdiction?
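To make this more concrete, here is a minimal sketch, in TypeScript, of what age-and-jurisdiction gating logic can look like. Everything in it is an illustrative assumption rather than a description of any particular law or product: the jurisdiction list, the consent-age thresholds, and the names FeaturePolicy and resolvePolicy are all hypothetical, and a real deployment would track far more jurisdictions, feature categories, and consent states.

```typescript
// Hypothetical sketch of age- and jurisdiction-based feature gating.
// All names and threshold values are illustrative assumptions, not legal advice.

type Jurisdiction = "US" | "UK" | "DE" | "KR";

interface FeaturePolicy {
  chatEnabled: boolean;
  personalizedAds: boolean;
  purchasesEnabled: boolean;
  parentalConsentRequired: boolean;
}

// Illustrative ages below which parental consent is assumed to be required.
const DIGITAL_CONSENT_AGE: Record<Jurisdiction, number> = {
  US: 13,
  UK: 13,
  DE: 16,
  KR: 14,
};

// Age is computed at request time, so a user who has a birthday is
// automatically re-evaluated the next time a policy is resolved.
function ageInYears(birthDate: Date, now: Date = new Date()): number {
  const yearDiff = now.getFullYear() - birthDate.getFullYear();
  const hadBirthdayThisYear =
    now.getMonth() > birthDate.getMonth() ||
    (now.getMonth() === birthDate.getMonth() &&
      now.getDate() >= birthDate.getDate());
  return hadBirthdayThisYear ? yearDiff : yearDiff - 1;
}

function resolvePolicy(
  birthDate: Date,
  jurisdiction: Jurisdiction,
  hasParentalConsent: boolean
): FeaturePolicy {
  const age = ageInYears(birthDate);
  const isChild = age < DIGITAL_CONSENT_AGE[jurisdiction];

  return {
    // Riskier features stay off for children unless a parent has consented.
    chatEnabled: !isChild || hasParentalConsent,
    // In this sketch, personalized ads stay off for all minors regardless of consent.
    personalizedAds: age >= 18,
    purchasesEnabled: !isChild || hasParentalConsent,
    parentalConsentRequired: isChild && !hasParentalConsent,
  };
}
```

In practice, the hard-coded threshold table would be replaced by a regularly updated rules source (or a third-party service), and re-running a check like this at the start of each session is one simple way to handle the “user has a birthday” case.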
Answering these questions has historically required bespoke, expensive technological solutions, not to mention hundreds of thousands of dollars in legal fees to stay current on a constantly expanding patchwork of local and international rules and regulations around kids’ data. However, third-party solutions are starting to emerge that distill all this complexity into a simple format, making compliance accessible to as many companies and consumers as possible.
There’s no one-size-fits-all approach here, and every company’s situation and risk tolerance is different. Still, we at k-ID believe that the next generation of online services can do better than ignoring kids or treating them as undesirables to be removed. Instead, we think the best approach is to empower underage users by giving them a way to participate meaningfully in online worlds while still shielding them from riskier features as appropriate.
References: