- Academic Debate: Recent social science research presents conflicting findings regarding the impact of social media usage on the cognitive development of children.
- Primary Study: An analysis using Adolescent Brain Cognitive Development data suggests that increased social media consumption during tween years correlates with lower verbal and spatial memory scores.
- Methodological Critique: Independent data analysis by Jordan Lasker disputes these findings, asserting that alternative statistical models show little evidence of negative cognitive effects.
- Research Methodology: The original research utilized longitudinal data to track cognitive performance fluctuations alongside changing social media habits across three usage-intensity groups.
- Comparative Analysis: Critics argue for more rigorous evaluation methods, such as sibling comparisons and within-individual longitudinal assessments, to control for familial and home environment variables.
- Scientific Uncertainty: Re-evaluating the data with more granular models renders most previous findings statistically insignificant, highlighting the difficulty in establishing definitive causal links.
- Variable Interpretations: Even within recalculated models, minor negative associations persist, though researchers suggest these may stem from broader family-level trends rather than direct screen-time impact.
- Ongoing Investigation: The lack of consensus among researchers underscores the complexity of utilizing large datasets to measure the long-term cognitive consequences of digital device exposure.
Kids’ cell-phone use is one of the hotter social-science topics these days, and one we’ve touched on here before. A recent kerfuffle shows why the debate is sure to keep raging for a long time.
The story begins with a study published in The Lancet Regional Health, which got some extra attention thanks to a tweet by Jonathan Haidt—who’s led the intellectual charge against kids’ overuse of tech devices.
In the study’s analysis, kids who most heavily increased their social-media use during their tween years tended to have weaker cognitive skills, including verbal and spatial memory, than similar kids who stayed off social networks. The study drew an almost immediate response, however, from the prominent data blogger Jordan Lasker, better known as Crémieux, who ran different models on the same dataset and argued there’s little sign of an effect. His blog post is here and a longer paper is posted here.
The episode is an interesting lesson in how science can move quickly online—if not so much in formal academic journals—and how different ways of analyzing a given dataset can produce wildly different conclusions.
The original paper drew its data from the Adolescent Brain Cognitive Development Study, which began following its thousands of subjects in the mid-to-late 2010s, when the kids were 8 to 11 years old. Thanks to further data collection over the following two years, the authors can sort the kids not just according to their overall social media use, but according to how this use changed over time.
More than half of the sample used very little social media at any point. About 40 percent fell into a second group: kids who started as light users but increased their use over time; by age 12, members of this group used social media for close to an hour a day. The third group, about 6 percent of the sample, comprised the heaviest users: even the nine-year-olds in this group used social media close to an hour a day, and the almost-teenagers used it more than three hours a day.
The authors check to see how these groups fared on cognitive tests at the study’s two-year follow-up. They include a variety of statistical controls, including the kids’ baseline cognitive performance—which should help to address issues of self-selection, where smarter or duller kids may be more likely to take up social media—as well as basic demographics and non-social media screen time.
These models answer the question: if two kids started out with the same cognitive scores, and also share numerous other traits available in the data, and yet they had two different trajectories of social media use, does the heavier social-media user tend to fare worse on the later cognitive test?
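As a rough sketch of this kind of adjusted comparison (using made-up synthetic data, not the ABCD dataset, and with only the baseline score as a control rather than the paper’s full set), one can regress follow-up scores on trajectory-group dummies plus the baseline score:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-in for the data: a baseline cognitive score and a
# usage-trajectory group (low / increasing / high), roughly matching the
# group shares described in the study.
baseline = rng.normal(0, 1, n)
group = rng.choice([0, 1, 2], n, p=[0.54, 0.40, 0.06])

# Simulate small negative effects for the heavier-use trajectories.
followup = (0.7 * baseline
            - 0.15 * (group == 1) - 0.25 * (group == 2)
            + rng.normal(0, 0.5, n))

# Design matrix: intercept, baseline score, dummies for the two
# heavier-use groups (the light-use group is the reference category).
X = np.column_stack([
    np.ones(n),
    baseline,
    (group == 1).astype(float),
    (group == 2).astype(float),
])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(coef)  # coef[2] and coef[3] are the adjusted gaps vs. the light-use group
```

Here the group dummies recover the simulated gaps; the point is that “controlling for baseline score” just means the group comparison is made among kids who started at the same level.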
The study’s answer was yes. Across four different cognitive tests, there was always a measurable gap between the lightest and heaviest users. And for three of the four, there was also a gap between the lightest users and the medium group, the kids who’d started out light but increased over time. These are not enormous effects—generally falling between a tenth and a quarter of a standard deviation—but they are certainly worrisome given the ubiquity of screens for kids (including those at older ages than the study focused on).
Enter Crémieux. The blogger pointed out that the data in question facilitate a much more rigorous analysis than the authors had conducted.
Instead of comparing totally different kids with each other and relying on statistical adjustments to make that comparison more apples-to-apples, one could look at siblings—who share a family background and home environment—to see if the heavier-using siblings fared worse. One could also study outcomes from both survey waves, instead of focusing on performance at the two-year follow-up. One could even look within individuals, to see if kids’ own cognitive scores fluctuated along with their social-media use.
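The within-individual approach can be sketched in a few lines (again with synthetic data, not the actual ABCD panel): subtracting each child’s own mean from both variables removes every time-invariant child and family characteristic, so the remaining slope reflects whether a child’s score moved with their own social-media use.

```python
import numpy as np

rng = np.random.default_rng(1)
n_kids, n_waves = 2000, 2

# Synthetic two-wave panel: each child has a fixed "ability" plus
# wave-specific social-media use (hours/day) and a cognitive score.
ability = rng.normal(0, 1, n_kids)
use = rng.exponential(0.5, (n_kids, n_waves))
score = ability[:, None] - 0.1 * use + rng.normal(0, 0.3, (n_kids, n_waves))

# Within transformation: demean within each child, which wipes out
# ability and any other stable child- or family-level trait.
use_d = use - use.mean(axis=1, keepdims=True)
score_d = score - score.mean(axis=1, keepdims=True)

# Pooled slope of demeaned score on demeaned use = the fixed-effects estimate.
beta = (use_d * score_d).sum() / (use_d ** 2).sum()
print(beta)  # ≈ -0.1 per extra hour/day in this simulation
```

The same demeaning logic, applied at the family level instead of the child level, is what a sibling comparison does: it differences out the shared household environment.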
With these approaches, the results are underwhelming, though still debatable. Most of Crémieux’s estimates are statistically insignificant, but two of the within-individual models still suggest negative effects.1 Lasker’s paper argues that, since these effects weren’t evident in the family-based analysis, they might reflect “family-level time trends” rather than a real effect.
The debate over the impact of kids’ tech use—on cognitive skills and everything else—is far from over. And it’s too bad that even the fanciest statistical tools can’t unambiguously tell us what’s happening.
From the Manhattan Institute
- Jennifer Weber takes a hard look at math achievement in New York State schools.
Other Work of Note
- When Texas shipped illegal immigrants to blue areas with sanctuary policies, the receiving counties shifted toward Trump.
- Why don’t voters respond more strongly when candidates moderate their positions?
- When various policy changes—such as the 1990s welfare reform—pushed low-income moms to work, they became less likely to vote but more conservative.
- Conservatives and liberals differ in whom they see as victims. And Republicans and Democrats differ in how they dole out pork spending.
- This analysis of austerity measures imposed by the International Monetary Fund has produced some strenuous arguments on X.
- Elite colleges have generally found standardized tests quite helpful in predicting kids’ performance. A new study of a large public university system, though, finds that tests don’t add much beyond high-school GPA.
- Lots of new research on federal student loan policy.
- When Dallas schools started paying teachers based on effectiveness, student math achievement improved.
- Has teacher turnover finally stabilized?
- Another review of the returns to additional education. More bullish on school than the one I wrote up here a while back.
- A calculator from the Joint Economic Committee on the fiscal effects of immigration policy. But don’t forget about MI’s own!
- Options for Social Security reform from the folks who manage the Penn Wharton Budget Model.
- Meanwhile, there’s a new Financial Report of the U.S. Government.
- Is it possible to raise a nation’s overall level of happiness?
- How international college students affect their domestic peers academically.
- What do Americans think about assisted suicide?
- Some interesting research based on the “Words Can Harm Scale,” which asks respondents whether they agree with a variety of melodramatic statements about how speech can traumatize.
- Tyler Cowen has his new book online for free with an AI assistant: “Beginning with the 1871 Marginal Revolution and ending with the AI tools transforming research today, this is a book about how ideas are born, why they take so long to arrive, and what happens when machines begin to see around corners that humans cannot.”
- There’s long been debate about how much improvements in trauma care have affected the homicide rate. This paper finds a 4 percent reduction in overall firearm mortality from the opening of a Level 1 trauma center on the South Side of Chicago.
- The (predictable) impact of California’s $20 fast-food minimum wage on prices.
- The Supreme Court has become less deferential to executive agencies’ interpretations of the laws that are supposed to constrain them, and this seems to be affecting how agency rulemakers do their jobs.
- Did AI slow the growth of coding jobs? And what are CEOs saying about how AI affects their businesses? Plus an interesting paper on “new work,” or novel types of jobs that emerge with technological changes.
- Some new results on the impact of food-stamp work requirements on single parents. These requirements are generally seen as relatively weak: a job search, registration with a job-search system, and in some cases participation in an Employment and Training program. The study finds they reduce enrollment but don’t induce parents to work more.
- Building public hospitals was good for babies in the 1950s and 1960s.
- Communist revolution was bad for Cuba.
- “Higher caffeinated coffee intake was significantly associated with lower risk of dementia.”
1. It’s tempting to think that even these results are far smaller than the original study’s, but note that the new models calculate the effect of one hour of social-media use rather than the difference between the high and low trajectories.








