
The problem with studies saying phones are bad for you

Some studies investigating the side effects of screen time might not actually be measuring what we think they are


 
Illustration by Alex Castro / The Verge

Every other week, there’s a new report about what staring at a screen is doing to your brain. It’s hard to know what to trust, and that could be because scientists haven’t been measuring screen time correctly.

Screen time is an all-encompassing term that could mean messaging friends, playing video games, or scrolling through Twitter. We hear that excessive phone use might be making people depressed and anxious, or maybe it isn’t. Some studies have suggested too much screen time is linked to ADHD. And the psychology community is fighting about whether compulsively playing video games is actually a mental health condition. This back-and-forth hasn’t stopped scaremongering comparisons between digital media and digital heroin, nor has it kept Silicon Valley parents from telling The New York Times that “the devil lives in our phones and is wreaking havoc on our children.”

“When you ask someone to estimate their screen time, they’re really crummy at it.”

The actual research hasn’t come to one neat conclusion, and that may be because the field has relied on self-reports. It’s possible to measure how much time you spend on your phone; it’s just that most research — some 90 percent of it, estimates David Ellis, a lecturer in computational social science at Lancaster University — hasn’t. People are notoriously unreliable reporters of their own behavior: they misremember, forget, or fudge their responses to make themselves look better. We’ve seen it before with food diaries; we’re bad at remembering or even noticing how much we eat. Sometimes we lie to ourselves and, as a result, to our food diaries, too. The unreliability of self-reports has been a major problem for nutrition research.

So it’s reasonable to worry that people aren’t accurately telling researchers how much time they’re spending on their phones. The latest strike against self-reports was published last month on the preprint server PsyArXiv, first reported by New Scientist. The study hasn’t been peer-reviewed yet, but it adds to a growing body of evidence that the foundation for smartphone scaremongering is shaky. “We have actually known for quite a while that when you ask someone to estimate their screen time, they’re really crummy at it,” says Andrew Przybylski, director of research at the Oxford Internet Institute. “We’re coming to a time when it’s easier for psychologists to know how crummy that measurement is.”

“The relationships between that actual behavior and that survey-based assessment of that behavior is quite far off.”

The study, led by Lancaster University’s Ellis, asked 238 people to self-report the data recorded by Apple’s Screen Time app, which logs things like how often people pick up their phones and how much time they spend on the devices. That’s at least a little more reliable than asking for estimates; if you’ve been shocked by your Screen Time stats, you know why. The researchers compared those numbers against questionnaires that asked people to estimate things like how much time they spend on their phones, or how often they check their devices. The researchers also asked participants to respond to scales published in the literature. These are intended to assess how attached or addicted people are to their phones, how worried they feel about their phone use, and whether they find themselves mindlessly checking their phones without knowing why.

The team found that Apple’s Screen Time data don’t track well at all with the scales the field has been using. If self-reports and the more objective measurements matched perfectly, the correlation would be 1.0. If they weren’t related at all, the correlation would be 0. The correlation the team found between a one-time estimate of screen time and what the Screen Time app logged fell right in between. Responses on the scales that asked people to evaluate their own smartphone use tracked even less closely with the Screen Time data. So these self-reports aren’t measuring exactly the same thing as Apple’s Screen Time app. “It’s not as if there’s no relationship there,” Ellis says. “But the relationships between that actual behavior and that survey-based assessment of that behavior is quite far off.”
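To make that correlation language concrete, here is a minimal sketch of the kind of comparison the study is making. It is not drawn from the paper itself: the numbers are invented, and the only claim is the arithmetic. A value near 1.0 would mean self-reports and logged screen time rise and fall together almost perfectly; a value near 0 would mean knowing one tells you nothing about the other.

```python
# Hypothetical illustration only: compares made-up self-reported screen time
# against made-up logged screen time for ten people (hours per day).
import numpy as np

self_reported = np.array([2.0, 5.0, 3.5, 1.0, 6.0, 4.0, 2.5, 7.0, 3.0, 5.5])
logged        = np.array([3.1, 4.2, 5.0, 2.5, 7.3, 3.8, 4.0, 6.1, 4.4, 4.9])

# Pearson correlation: 1.0 means the two measures agree perfectly,
# 0.0 means no relationship at all. Studies like this one tend to
# report something in between.
r = np.corrcoef(self_reported, logged)[0, 1]
print(f"correlation between self-report and logged time: {r:.2f}")
```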

“It’s amazing that such a simple study could undermine almost the entire foundation of the fear against cellphones.”

The study still needs to be evaluated by experts before it can be published, but its results align with similar findings from researchers who analyzed server logs to track calls and texts, or client logs of internet activity. And they all suggest that relying on self-reported measurements of phone use can distort a study’s results. “It’s an embarrassingly simple study,” says Patrick Markey, a professor at Villanova University who studies violent video games and who was not involved in the research. “It’s amazing that such a simple study could undermine almost the entire foundation of the fear against cellphones.”

Jean Twenge, a professor of psychology at San Diego State University and author of the book iGen: Why Today’s Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy — and Completely Unprepared for Adulthood, isn’t as convinced. “It seems like a tempest in a teapot. We always knew that self-reports of time use were imperfect,” says Twenge, who says her own research has relied on self-reported time estimates rather than scales of addiction or attachment. “This is how science works. If you wait for the perfect study, you’re going to wait forever.”

Twenge hopes that more studies use the Screen Time app, especially to track different uses and find out if there’s a difference in well-being between time spent on social media versus time spent, say, watching videos on YouTube. “That would be a great study,” she says. “I hope it’s done.”

“I don’t see how you couldn’t be a little bit ashamed of the fact that you do research on something that you can measure and yet you are content to guess.”

It makes sense to use the smartphone to do the measuring, rather than relying on people’s fallible memories. “A phone — as much as anything else in modern life — is a data collection and dissemination and display device,” says James Heathers, a postdoctoral scientist studying personal health informatics at Northeastern University. “I don’t see how you couldn’t be a little bit ashamed of the fact that you do research on something that you can measure and yet you are content to guess.” Jeffrey Boase, a professor at the University of Toronto who was pointing out discrepancies between self-reports and more objective measures of cellphone use all the way back in 2012, says that it’s not that easy. There are major technical and ethical, not to mention financial, barriers to getting at the data our devices and apps collect about us.

On the technical side, the people who can publish social science research tend to be at research institutions, not the tech companies — so they might not have access to the data these companies have been collecting, Boase says. Sure, researchers can build apps to monitor how people are using their devices, but that presents its own difficulties. For one thing, there can be problems with device compatibility. In a previous study, Ellis’ team built an app to log smartphone activations. But the app only worked for certain Android devices: all told, they could only monitor 23 people. That’s why the team was so thrilled when Apple’s Screen Time app came along, Ellis says. “Apple is Apple. All of the resources are there. And so we just thought, ‘Right, we’ll go for it.’”

“Right, we’ll go for it.”

And even if you do build an app, companies can change their policies and permissions in ways that undo that investment. Boase, for example, was recently notified about new restrictions in the Google Play Store for apps that require access to logs of phone calls and texts. He was part of a team that designed an app to collect, among other things, anonymized call and text log data from Android users who’d provided informed consent. “The changes would mean my app is no longer valid,” he says. Fortunately, he’d already collected all the data he needed. But had the change come sooner, the timing could have been disastrous. Still, Boase called the move reasonable, citing user privacy. “I’m not saying tech companies should just let researchers always have data,” he says. But, he says, it creates a barrier for researchers who want to study the effects of tech on consenting participants.

There’s another problem: if you’re using an app that one of those tech giants built to collect objective measurements, there’s little transparency into its inner workings. Przybylski, for example, says that Apple’s Screen Time app counts his podcast app as social media, which it isn’t. (That probably has more to do with how the developer categorized the app.) “As a scientist, I shouldn’t just take Apple’s word for it. It’s its own kind of noisy measurement,” Przybylski says.

“I don’t want to give you the impression that you can just merrily sail into it.”

Then there are the ethical issues, Heathers points out. “There are a lot of problems and challenges for installing something on someone’s phone that, if it wasn’t for research, would essentially be considered spyware,” he says. “I don’t want to give you the impression that you can just merrily sail into it.” And even if researchers can access, say, social media profile or smartphone data collected by a major tech company, should they use it? One effort to study “emotional contagion” kicked up an ethical kerfuffle when the researchers manipulated what 700,000 Facebook users saw, in part because it wasn’t clear that participants had truly provided informed consent. And earlier this year, voter-profiling firm Cambridge Analytica made headlines when a whistleblower revealed that the company had collected data on 50 million Facebook users in order to target political ads to particular personalities.

That’s why it’s key to make sure study participants know what they’re signing up for, Boase says. “If you explain to people what you’re doing in enough detail but also in clear language, and you ask if they’d like to participate, and they’re adults, and they agree to that, then you’re okay,” he says. Researchers also have a responsibility to keep the data they collect private, which is why Boase’s team doesn’t collect the content of messages or the names of people that the study participants communicate with.

Using smartphone data is technically and ethically challenging, so where does that leave researchers? Some, like Przybylski, are trying to figure out the most accurate way to collect self-reports. He’s comparing the results when people track their screen time on a moment-to-moment basis versus when they estimate at the end of the day. The responses, which haven’t been published yet, match up about 4 percent of the time, he says. So subjective data can be more or less reliable depending on how researchers ask for it.
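The article doesn’t spell out what counts as a “match” between those two kinds of self-report, so the following is only a sketch of the general shape of that comparison: the data are made up, and the 30-minute tolerance is an arbitrary stand-in for whatever criterion the real analysis uses.

```python
# Hypothetical illustration: compare an end-of-day estimate against the sum of
# moment-to-moment reports for the same day, and count how often they agree
# within an arbitrary tolerance. All numbers are invented.
momentary_totals   = [3.2, 4.8, 2.1, 5.5, 3.9]   # hours, summed from in-the-moment logs
end_of_day_guesses = [2.0, 5.0, 2.0, 4.0, 6.0]   # hours, recalled that evening

TOLERANCE_HOURS = 0.5  # arbitrary cutoff for calling the two reports a "match"

matches = sum(
    abs(tracked - guessed) <= TOLERANCE_HOURS
    for tracked, guessed in zip(momentary_totals, end_of_day_guesses)
)
print(f"{100 * matches / len(momentary_totals):.0f}% of days match within the tolerance")
```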

“We’re desperately trying to clean the lens. And other people don’t give a crap.”

These kinds of studies that compare self-reports to other types of self-reports or to more objective measurements help ensure researchers are measuring what they think they’re measuring. And while the field sorts that out, the findings we’ve seen so far suggest that we need to be cautious about what we believe about the consequences of screen time. “Let’s imagine that there were a bunch of studies of how things interacted under a microscope, and you found out that there was a bunch of Vaseline on the lens,” Przybylski says. “Some of us know there’s Vaseline on the lens, and we’re desperately trying to clean the lens. And other people don’t give a crap.”