[Image: demyelinating disorder, via Wikipedia]

Nicholas Carr, who expanded his “Is Google Making Us Stoopid?” routine into the book “The Shallows,” recently wrote a post about a PLoS ONE article entitled “Microstructure Abnormalities in Adolescents with Internet Addiction Disorder,” in which Chinese scientists studied 18 adolescents with something they term “Internet addiction disorder” (IAD) and compared their brain scans to similar scans from “normal” subjects. The authors claim that IAD led to structural changes in the white and gray matter of the brains of those suffering from it. Carr “cautiously” touts the findings: he concludes that the results need to be confirmed by further studies, but you can almost hear him cheering in the background.

There are a few big problems with the study. Let’s go through them.

While the authors claim to be able to detect changes in the brains of the IAD subjects, they only detect differences between those they diagnose with IAD and those they don’t, while also noting that IAD is “not yet officially codified within a psychopathological framework.” So, not only do they lack before-and-after evidence of change, but they are using a framework that is probably edging toward a garbage-can diagnosis. Curious about this, I went to the seven references the authors cite to bolster their statement about IAD’s validity. Of these seven, only one resolved to an actual PubMed or Google Scholar citation. Now, maybe PLoS ONE is having trouble with CrossRef, PubMed, and Google Scholar linking, but it seems odd that all three wouldn’t work, especially for straightforward search engine queries.

Because there’s no before-and-after comparison, there’s no way to claim cause-and-effect. Are people with certain gray and white matter brain compositions and more depressive natures drawn to video games and the Internet as ways to cope? Here’s one sentence from the paper:

IAD resulted in impaired individual psychological well-being, academic failure and reduced work performance among adolescents.

Let’s rearrange that to reverse the assumption of causation:

Adolescents with impaired psychological well-being, academic failure, and reduced work performance retreated into the Internet.

Which one do you find more plausible?

Then there’s the whole bias toward the medicalization of differences: you spend a lot of time on the Internet, so you must be sick. If that were the case, I should be hospitalized immediately, along with nearly every other person who works in an office. The threshold for “sick” was roughly 10 (±2) hours of Internet use per day, about six days each week. Please. At this point, I think there’s something wrong with the others, the ones deemed “healthy”: adolescents who used the Internet for less than two hours per day. And I’ll wager that these researchers themselves probably qualify on a couple of the IAD criteria.

Carr also points gleefully to a Scientific American article covering the study and cherry-picks quotes that support his worldview. Let me cherry-pick (and tell you I’m cherry-picking) one that gives more than a little pause:

The reason why Internet addiction isn’t a widely recognized disorder is a lack of scientific evidence.

This is from neuroscientist Nora Volkow of the National Institute on Drug Abuse. We can also cherry-pick the last paragraph of the article:

In the end all of the researchers interviewed by Scientific American emphasized significance only goes so far in making a case for IAD as a true disorder with discrete effects on the brain. “It’s very important that results are confirmed, rather than simply mining data for whatever can be found,” Goldin says.

Goldin is Rebecca Goldin, a mathematician at George Mason University.

So, is there something here? Maybe, maybe not. But we need to raise the bar for skepticism. Too many times, we see a p-value and think everything’s OK, when the real problems may emanate from things p-values can’t measure: citing references that can’t be found, proposing causality when it was impossible to measure, basing illness on criteria that aren’t validated, and, on top of all that, medicalizing something that has become normal for most people.
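To make the causality point concrete, here is a minimal simulation (all numbers invented for illustration, not drawn from the paper) showing how a cross-sectional comparison can produce a vanishingly small p-value even when the “exposure” never touches the outcome, because the groups self-selected on a confounding trait:

```python
# A toy model, not the paper's data: a pre-existing trait drives both
# the brain measure and who ends up in the "heavy use" group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 10_000
trait = rng.normal(0, 1, n)          # hypothetical pre-existing trait
brain = trait + rng.normal(0, 1, n)  # brain measure depends only on the trait

heavy = trait > 1.0                  # self-selected "IAD" group
light = trait < -1.0                 # self-selected "healthy controls"

t, p = stats.ttest_ind(brain[heavy], brain[light])
print(f"t = {t:.1f}, p = {p:.2g}")
# The group difference is hugely "significant," yet internet use never
# enters the model: the p-value reflects the selection effect, not causation.
```

The p-value here is perfectly real; it just answers a different question than the causal one being asked.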

And I’ll harp again on one of my major concerns about how the spigot for scientific papers continues to open wider: here we have a “methodologically sound” paper with huge conceptual problems, published in a journal that accepts the majority of the papers it receives. The combination creates the illusion that rigorous and interesting science has been conducted and has made it through a tight filter.

Maybe Google is making us stoopid — but maybe the “us” is looking back from the mirror.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

10 Thoughts on "A New Study Asks a Troubling Question: Are We Losing Our Minds?"

A very thoughtful post, Kent.
PLoS ONE is not the only journal with relaxed peer review and editorial standards. And I don’t think anyone really knows at this time whether “OA mega journals” (Peter Binfield’s term) are publishing material that would never have seen the light of day under a different publishing model, although I do have a proposed study in mind that may answer this question.

What is surprising is the amount of press coverage some of their questionably “methodologically sound” papers receive, and I think part of your angst is that their increasing centrality in the voice of science may be sending the wrong message to the public.

Is PLoS ONE really responsible for the public perception of science, or does their responsibility stop with their own brand?

I don’t think PLoS ONE alone is responsible for how the public perception of science has degraded; it has taken a long time and a lot of bad practices, from the pimping of poster exhibits to the press, to cold fusion, to vaccines/autism, and so on. But I think there is a new permanence and leniency about publishing materials in “journals,” and an inherent public and media trust in “journals,” that is allowing these things to slip through. Given that there are too many incentives for repositories with light review and different standards to call themselves “journals,” it’s up to others to gatekeep and provide skepticism. “Journals” are eroding their place in the world, and something new needs to pick up the gauntlet. Unfortunately, I don’t know what that is, because the attempts at it so far have been a bit incestuous and vain.

I am not surprised at all by the amount of press coverage questionable stories receive. Just turn on the Discovery Channel (a major portion of the paleontology “documentaries” highlight unpublished, and often never-to-be-published, “research”), or flip through the top science “news” stories on Google. Sensationalism sells, and this started loooong before PLoS ONE existed (as Kent notes). And questionable stories from all major journals (e.g., arsenic life in Science, or unholy matings between velvet worms and butterflies as the origin of caterpillars in PNAS) frequently make it into the news even today (although the PNAS case was at least part of the impetus for the change in their editorial standards).

It might also be worth quantifying what percentage of papers at PLoS ONE only receive “relaxed peer review” (whatever that is). As a volunteer academic and section editor at the journal (all opinions and statements expressed here are my own, of course), I can say that all of the papers I handle receive at least as many reviews as they would at other journals (and in some cases, more). However, I would assume that behavior varies between editors and disciplines (at least in paleo, rigorous review seems to be the standard).

(I should also state that I do agree that this looks to be a pretty questionable study, lest I come across as a total fanboy defender.)

Kent,

Obviously, studies like this one provide hints, not conclusions. But you misrepresent the study in two crucial ways. First, you imply that the researchers used the (admittedly vague) diagnosis of IAD to choose their subjects. Wrong. They chose subjects not based on a diagnosis of IAD but rather on actual behavior, particularly the time they spent online. The test subjects spent about 8-12 hours online daily, while the control subjects spent less than 2 hours online daily. So the question of defining what IAD means or doesn’t mean, while interesting, is irrelevant to the actual construction of the experiment.

Second, you suggest that the study lacks before-and-after comparisons and hence doesn’t show cause and effect. That’s true, but a crucial finding that you fail to report is that the researchers did measure the amount of time that the subjects had engaged in intensive web use, and they found that the degree of the brain changes increased with the duration of Net use. That still doesn’t prove causality, but it suggests causality in a way that’s much stronger than you indicate.

It’s important to read these studies with skepticism, but it’s also important to read them with care.

Best,

Nick Carr

In the methods it clearly states:

According to the modified Young Diagnostic Questionnaire for Internet addiction (YDQ) criteria by Beard and Wolf [16], [29], eighteen freshman and sophomore students with IAD (12 males, mean age = 19.4±3.1 years, education 13.4±2.5 years) were engaged in our study

They picked students with IAD.

I may be reading the paper incorrectly, but with regard to your point about changes correlating with length of ‘addiction,’ I believe there is no such data. What there is, is an attempt to relate the single-timepoint observations to the subjects’ recollection of the history of their internet usage. It’s a clever idea, but it’s pushing at the outer limits of the data, to put it mildly. Actually, I won’t be mild. It’s a junk piece of analysis predicated on individual recollection. There are so many biases, assumptions, and confounding variables in play that it’s not appropriate to draw such conclusions. You would need to do a longitudinal cohort study to see if there was any correlation.

Finally, 18 subjects. Not a statistically significant sample size.

It’s important to read these studies with skepticism. That’s what scientists do.

David,

You’re right that I garbled the IAD point (my mistake for not going back to the paper). But the point is that there was a behavioral distinction between the test subjects (10 ± 2 hours of internet use daily) and the controls (<2 hours of net use daily), as explained in the paper:

“The IAD subjects spent 10.2±2.6 hours per day on online gaming. The days of internet use per week was 6.3±0.5 … Eighteen age- and gender-matched (p>0.01) healthy controls (12 males, mean age = 19.5±2.8 years, education 13.3±2.0 years) with no personal or family history of psychiatric disorders also participated in our study. According to a previous IAD study [19], we chose healthy controls who spent less than 2 hours per day on the internet.”

In other words, you could remove all references to IAD from the study, and you’d still have a study comparing the brains of heavy Net users with relatively light Net users, which strikes me as the important factor.

It’s true that the subjects’ Net use was based on self-reporting (backed up by interviews with parents and acquaintances). The researchers clearly identify this as a limitation to the study:

“With regard to the relationship between the structural changes and duration of IAD, the months of IAD is a gross characterization by the recollection of the IAD subjects. We asked the subjects to recall their life-style when they were initially addicted to the internet. To guarantee that they were suffering from internet addiction, we retested them with the YDQ criteria modified by Beard and Wolf. We also confirmed the reliability of the self-reports from the IAD subjects by talking with their parents over the telephone. The brain structural changes in accordance with the addiction process may be more crucial in understanding the disease, hence the correlation between duration and the brain structural measures was carried out. These correlations suggested that cumulative effects were found in the reduced gray matter volume of the right DLPFC, the right SMA, the left rACC and increased white matter FA in the left PLIC.”

As you know, self-reporting of behavior is a common technique in studies, particularly when researchers have no access to other data on past behavior (which is often the case). I assume you wouldn’t dismiss all studies that involve self-reporting of behavioral data, but perhaps that is your view. If so, I’d like to hear your full rationale as to why all studies involving self-reporting of behavior should be considered “junk” and whether you think that yours is a common view.

As to your claim that in a brain imaging study such as this one 18 subjects is “not a statistically significant sample size,” I’m curious as to your thoughts on the comment by the University College London imaging neuroscientist Karl Friston in the Scientific American article: “Friston says the techniques used to analyze brain tissue density in the new study are extremely strict. ‘It goes against intuition, but you don’t need a large sample size. That the results show anything significant at all is very telling,’ Friston notes.” I assume you don’t think Friston knows what he’s talking about. I’m wary of joining you in that opinion, but perhaps you can convince me that you know more about this than Friston.
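Friston’s intuition can be sketched with a rough power calculation: the stricter the significance threshold, the larger an effect has to be before a small sample can detect it at all. The thresholds below are illustrative assumptions, not the corrected voxelwise thresholds actually used in the paper:

```python
# A back-of-the-envelope sketch, not the paper's analysis: the minimum
# effect size (Cohen's d) detectable with ~80% power at n = 18 per group,
# for progressively stricter two-sided significance thresholds.
import numpy as np
from scipy import stats

def power_two_sample(d, n, alpha):
    """Power of a two-sided, two-sample t-test at effect size d, n per group."""
    df = 2 * n - 2
    ncp = d * np.sqrt(n / 2)                   # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

n = 18
for alpha in (0.05, 0.001, 1e-5):              # loose -> very strict threshold
    d_min = next(d for d in np.linspace(0.1, 4.0, 400)
                 if power_two_sample(d, n, alpha) >= 0.8)
    print(f"alpha = {alpha:g}: need roughly d >= {d_min:.2f}")
```

In other words, if the thresholds really were strict, anything that survives them in a sample of 18 implies a large group difference, which is the sense in which showing anything significant at all is “very telling.” Whether that difference means what the authors say it means is, of course, the separate question.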

No one is claiming that this is a definitive study, certainly not the researchers. But it does seem to be a serious, rigorous study that found significant evidence of brain abnormalities. I’m curious as to why you’re so eager to dismiss the evidence as “junk.” It seems to me that we’d be wise to view the study as one piece in a very large puzzle that is still being put together.

Nick
