All Internet users—businesses and consumers—want to feel safe online. But are we? Businesses are provided with state-of-the-art security, while consumers—and especially teenagers—are often served up state-of-the-art insecurity.
What is security?
The need for online security originated with payment networks. (After all, people were doing electronic financial transactions long before the Internet was popular.) For businesses, security hasn’t fundamentally changed over time: the focus is on keeping “bad actors” from penetrating corporate networks to steal sensitive data, usually for financial gain. To some degree, this extends to consumers; for example, your bank prevents your Visa card number from being intercepted. But when we talk about teenagers being safe online, we’re delving into a much trickier area. Dangers such as cyberbullying, the oversharing of personal information, smartphone addiction, and increased anxiety are not addressed by the security products that businesses purchase. To make an analogy, if we use video cameras to curb shoplifting at a retail store, that’s great—but if the store itself specializes in selling cigarettes to kids, we’ve got a bigger problem to solve.
Business-grade security
The state of the art in business security is quite robust, going far beyond encryption and data protection into role-based access control for users. For example, a network security product I helped launch last year, based on Zero Trust Network Access (ZTNA), authenticates users every time they connect, and grants access based on the user’s role in their company. Think of traditional security as a moat around the castle; ZTNA is like putting locks on the doors to different rooms within it. Thus, a hacker gaining access by stealing a user’s credentials would no longer have the keys to the kingdom … just to a few rooms. ZTNA, by default, disallows access to everything (hence “zero trust”), with exceptions being configured one at a time. Corporate users can’t get to, say, Facebook unless the Chief Security Officer decides there is a valid reason to enable that access (e.g., only for marketing department users). Wouldn’t parents of teenagers love to have this kind of access control? Alas, the parental control software they’re given is not nearly so powerful.
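To make the deny-by-default idea concrete, here is a toy sketch in Python. This is purely illustrative—the roles, destinations, and rules are hypothetical examples, not the configuration of any real ZTNA product:

```python
# Illustrative sketch of a deny-by-default, role-based access policy
# in the spirit of ZTNA. Roles and destinations are hypothetical.

# Exceptions are configured one at a time; anything not listed is denied.
ACCESS_RULES = {
    ("marketing", "facebook.com"): True,
    ("engineering", "github.com"): True,
    ("finance", "erp.internal"): True,
}

def is_allowed(role: str, destination: str) -> bool:
    """Zero trust: deny everything unless an explicit rule grants access."""
    return ACCESS_RULES.get((role, destination), False)

# A stolen credential only opens the rooms granted to that one role:
print(is_allowed("marketing", "facebook.com"))    # True
print(is_allowed("engineering", "facebook.com"))  # False
print(is_allowed("intern", "erp.internal"))       # False
```

The key point is the last line of `is_allowed`: the default answer is always "no," and every "yes" had to be added deliberately.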
Consumer-grade security
When it comes to protecting consumers, especially teenagers, the lack of sophistication of security products is not the central problem … it’s only the tip of the iceberg. Businesses pay for security with money—and they get what they pay for. But users of social media don’t pay the providers with money; they pay with their attention, their data, and their willingness to give up privacy. And where safety is concerned, they’re not getting a good deal at all. Paradoxically, many of the problems teenagers face online—e.g., phone addiction, FOMO, content that inflames them, the risk of an embarrassing moment going viral—are not the result of bad actors gaining unauthorized access to the platform. They are the result of the platform working as designed.

I recently read a New Yorker article, “Has Social Media Fuelled a Teen-Suicide Crisis?” that I found deeply disturbing. It tells harrowing stories about depressed teenagers interacting with the content delivery algorithms that shape their online experience. Teens who expressed suicidal ideation were rewarded, by the algorithm, with suicide-themed content (e.g., a video of a person pretending to hang herself). Why didn’t the algorithm provide a suicide hotline number instead? Well, if it could talk, the algorithm would say, “That’s not my job.” The issue isn’t that providers don’t have the tools to suppress troubling content; it’s that they don’t want to. Their algorithm gets its talons into users and doesn’t want to let them go. It leverages insecurity—a user’s fixation on likes, comments, re-posts, visits—to increase dwell time. Dwell time, after all, is what drives ad revenue. So the threats facing teenagers aren’t something we can easily solve by installing a security product. Such a product does not exist, and may never.
So there you have it: businesses get security, and teenagers get insecurity. O brave new world!
How do we fix this?
One way to address this fundamental disconnect is through legislation: for example, holding social media companies accountable for the addictiveness they build into their platforms, and reining in their algorithms. But I see at least two problems: first, legislation is a very slow process to begin with; second, these companies have a vested interest in fighting such legislation, and are well armed to do so.
Another way forward is for parents to take a larger role, rather than hoping or assuming the social media industry will ever regulate itself. A number of the blog posts at My Digital TAT2 provide advice on how parents can be more involved and build rapport with their kids, so parents and kids can navigate digital media use together. As a parent, I have had some success with this, and can offer up three suggestions you might consider:
1. If you can, begin the dialogue about the Internet early, before your child is even online. My kids, when they were very young, had some trepidation about the big, wide Internet, so they were still pretty receptive. And as they venture online, continue to check in: ask simple questions about their favorite activities, what they'd like to learn more about, and anything they see that concerns them. This early dialogue sets your kids up to learn about digital media from you, not just their friends. On the flip side, if you wait until bad habits are forming to talk about Internet use, kids may sense that their privileges are in jeopardy, and clam up.
2. In addition to setting limits on your child's online time as their early habits are forming, have them log their use (start time, end time, what they were doing). This doesn't need to be a high-tech process; I employed a simple paper chart. The idea is for kids to be aware of their time and their trends, so that going online doesn't become a reflex.
3. Think carefully about when to get your child their first cell phone, and their first smartphone. I didn't get my kids flip phones until high school, and didn't offer a smartphone until college. (No, they didn't like this, but I set the expectation when they were still in grade school!)
In my case, I was also able to curate my kids’ online experience using a corporate-grade firewall. All this being said, making online safety the parents’ responsibility is problematic, as it won’t work for every family. My wife and I were lucky to have the time and energy to have these conversations with our kids, and I could leverage my background in tech. A parent-focused “DIY security” approach wouldn’t work so well for, say, the child of a single parent who works two jobs to make ends meet.
A final way forward is through education and outreach, with organizations like My Digital TAT2 fostering a public dialogue to help users, particularly teenagers, develop the right habits as they navigate the Internet. Schools could play a larger role in areas where parents aren’t necessarily well equipped (following the model of sexual health education). A solid course in what we might call “digital health” ought to be an integral part of the health education kids get in schools.
Business users are well served by a mature, robust security industry. Technology could be applied to extend better security to teens and other users of social media. But before this will happen, we need to hold providers responsible for essentially hacking the minds of the very individuals they ought to protect.
The views and opinions expressed in this blog post belong solely to the author and may not represent the official stance or perspectives of Health Connected and My Digital TAT2. This content is shared to encourage diverse dialogue on digital wellness topics.
About Dana Albert
Dana Albert is the father of two grown daughters, helps coach a high school mountain bike team, and has worked in tech for almost thirty years, specializing in networks and online security.