Recently, a scandal broke in the field of Alzheimer’s research. Sylvain Lesné, an associate professor at the University of Minnesota, was accused of falsifying image data in a series of research articles, including a seminal paper that identified a particular molecule as instrumental in cognitive decline. Much of the coverage of this latest scandal has focused on something called the “Amyloid Hypothesis,” with some pretty dramatic headlines implying that decades of work, including numerous failed drug trials, were based on a single paper written by a fraudster. Things seem to be calming down a little now, with cooler heads and more measured coverage; see, for example, this piece on AlzForum.

Accusations of research fraud are not new. Almost a decade ago, Paul Brookes, an associate professor at the University of Rochester, had a short but dramatic career as a self-appointed, anonymous image sleuth before he was doxed and stopped blogging. Today, a number of individuals voluntarily investigate the work of others to identify possible misconduct, the best known of whom is Elisabeth Bik, who publishes her work on her blog Science Integrity Digest.

So why did this case set off what looked like a bit of a moral panic? Perhaps it’s to do with the high-profile nature of Alzheimer’s disease and frustration at the failure to develop truly disease-modifying treatments. Perhaps it’s that, with an aging population, there is fear that the clock is ticking on a public health crisis that will impose a societally crippling financial burden. These factors matter because they add even more pressure to researchers struggling to make names and careers for themselves in the face of a toxic mix of perverse incentives that erodes research reproducibility and integrity.

I propose that there is another structural problem, beyond the reproducibility crisis, that inflamed this particular situation: a form of scientific tribalism between two groups that disagree on which protein is the true cause of dementia in Alzheimer’s.

The alleged fraud and its overstated consequences

The research article in question was published in 2006. Lesné was the first author on it, working at the time in Karen Ashe’s lab at the University of Minnesota; he is currently an associate professor with his own lab, also at Minnesota. It is worth noting that Ashe herself is not under investigation, and no image manipulation has been found in any of her papers on which Lesné was not an author. Lesné was studying Tg2576 mice, which were developed by Ashe. They overexpress a mutant form of Amyloid Precursor Protein (APP), which leads to the accumulation of protein deposits in the brains of the mice and an associated loss of cognitive function. The importance of APP and of the protein deposits, whose main component is called Amyloid-β (Aβ), was not novel to Lesné’s or Ashe’s work. The so-called Amyloid Hypothesis emerged in the early 90s. In fact, the first observation of deposits, or plaques, in the spaces between neurons (along with deposits inside neurons called tau tangles) was made by Alois Alzheimer and Emil Kraepelin at the beginning of the 20th century.

What Lesné claimed to have observed is that a version of Aβ, called Aβ*56, which can float around and diffuse through people’s brains, was the ‘smoking gun’ that was poisoning neurons and causing cognitive decline. He claimed to have demonstrated this by showing that concentrations of Aβ*56 in the brains of Tg2576 mice directly correlated with loss of memory function. The thing is, he’s far from the only person claiming that soluble oligomeric species of Aβ are the root cause of the disease. Particularly at the time, it was a hot topic in the field, with active discussions about whether it was dimers, trimers, or some other species that was responsible.

In short, even if Lesné had never run a western blot in his life, we’d still have a significant body of research pursuing the Amyloid Hypothesis, looking at oligomers, and trying to design drugs to target Amyloid. The fraud of which he’s accused is terrible if eventually proven true, but to pin decades of blind alleys on this one piece of research is to overstate its importance.

A schism in Alzheimer’s research

Along with Aβ, the other protein that causes so much trouble in Alzheimer’s disease is called “tau”. It’s the major constituent of the tau tangles inside neurons that I mentioned earlier. Ever since those first observations by Alzheimer and Kraepelin, researchers have been trying to understand the role of each of these proteins and how they might be connected. Something very strange has happened along the way.

Driven by a need to be right, to publish frequently, and to have impact in order to impress funding, hiring, and tenure committees, researchers are incentivized to set out to prove themselves right about their ideas. Since the way to rigorously test a hypothesis is to try to prove it wrong, these conditions create downward pressure on rigor. They also incentivize political behaviors, like aligning yourself with those in power and tearing down competitors’ ideas, rather than approaching all work in a spirit of open inquiry. The result can be tribalism.

I was personally involved in Alzheimer’s research at the time when Amyloid oligomers were gaining traction, and there was already a divide in the field. An astute observer of human behavior whose name is lost to history coined the labels βaptists and Tauists to describe the two groups. The religious connotations of those nicknames are obviously not accidental. Researchers who work on tau have in the past publicly referred to the Amyloid ‘cabal’ or even the Amyloid ‘mafia’. Some researchers have complained of a stranglehold on the field that prevents promising ideas involving tau from getting funded. I don’t know how much truth there is to such accusations, but the fact that they’re made at all, sometimes in public venues, is evidence of unhealthy dynamics that certainly aren’t accelerating progress towards curing the disease.

What can be done about tribalism?

I don’t wish to paint an overly critical picture of the field of Alzheimer’s research. Over time, the schism has become less fractious, and models that seek to account for the roles of both proteins are increasingly favored. On the other hand, when internal disagreements within a field become heated enough that people lose objectivity, arguments spill out onto social media and into the press, undermining both progress and public confidence in science.

Something needs to change, but what? It’s tempting to argue that more transparency, open data, or better quality control will prevent fraud, but while the incentives stay as they are, tribalism, lack of rigor, and even fraud, all symptoms of the same underlying structural problems, are almost inevitable. What needs to happen is a change in incentives that supports and enables cultural change. When researchers are rewarded solely for the rigor and transparency of their work, rather than also being required to be lucky, cut enough corners, or aggressively defend their tribe’s ideas, perhaps we’ll see fewer of these scandals.

Phill Jones

Phill Jones is a co-founder of MoreBrains Consulting Cooperative. MoreBrains works in open science, research infrastructure and publishing. As part of the MoreBrains team, Phill supports a diverse range of clients from funders to communities of practice, on a broad range of strategic and operational challenges. He's worked in a variety of senior and governance roles in editorial, outreach, scientometrics, product and technology at such places as JoVE, Digital Science, and Emerald. In a former life, he was a cross-disciplinary research scientist at the UK Atomic Energy Authority and Harvard Medical School.

Discussion

15 Thoughts on "Tribalism, Fraud, and the Loss of Perspective in Alzheimer’s Disease Research"

And yet thousands of researchers don’t engage in fraud even though they too work in the same system, under the same incentives, etc.

That’s true: lots of people manage careers in science without ever committing fraud. That doesn’t mean that broken incentives can’t make fraud more frequent.

I appreciate your clarifying/distinguishing existence of fraud from frequency of fraud. I find myself now reflecting on your reply to Roger, specifically “I personally think that at many -not all- institutions, particularly soft-money institutions, the expectations of productivity are unrealistically high.” It led me to wonder if there’s been an analysis correlating these conditions and the others you discuss with increased likelihood/frequency of fraud? If these are not the conditions everywhere, then we should see differential effects re where they are and where they are not?

Absolutely, that’s why I wrote “… perhaps we’ll see fewer of these scandals” rather than claiming that all fraud would be eliminated.

Your question about whether we can see patterns in undesirable behaviours to try to get a better grasp on the causes is an interesting one. That’s certainly an approach used in other parts of society where there are difficult-to-tackle social problems.

Personally, I think that looking at the most extreme cases of fraud, while serving as a wake-up call, might be less informative than looking at the wider issues of ‘sloppy’ research, poor methodology, and sub-fraud misconduct. The big cases are outliers by their nature, so perhaps not so informative about the larger issues.

Another complication is that accurate estimates of the prevalence of fraud are hard to come by. Is it 1 in every 100,000 (https://doi.org/10.1007/pl00022268), 1 in every 10,000 (https://doi.org/10.1126/science.290.5497.1662), or nearly 1 in 12 (https://www.science.org/content/article/landmark-research-integrity-survey-finds-questionable-practices-are-surprisingly-common)?

With no defined population of ‘researchers that commit fraud’, it would be quite difficult to get a handle on it.

On the other hand, looking for sloppy methodology is something that is being done by metascientists. It’s a growing field. Researchers use a variety of techniques, including systematic reviews of the quality of reporting and statistical analyses that can demonstrate skew in meta-analyses indicative of selective reporting.
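
To give a flavor of the statistical side, below is a minimal sketch of Egger’s regression test, one common way of detecting funnel-plot asymmetry suggestive of selective reporting. Everything here is an illustrative assumption: the studies are simulated, and the effect size, error ranges, and publication rule are invented for the example.

```python
# A minimal sketch of Egger's regression test for funnel-plot asymmetry,
# a common statistical signal of selective reporting in meta-analyses.
# All data are simulated for illustration; nothing comes from a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate 200 studies of a true effect of 0.2 with varying precision.
n_studies = 200
se = rng.uniform(0.05, 0.5, n_studies)   # per-study standard errors
effects = rng.normal(0.2, se)            # observed effect sizes

# Crude model of selective reporting: small studies only get "published"
# when their result looks impressive (z > 1.96); large studies always do.
published = (effects / se > 1.96) | (se < 0.15)
eff_pub, se_pub = effects[published], se[published]

# Egger's test: regress the standardized effect (effect / SE) on
# precision (1 / SE). An intercept far from zero indicates asymmetry.
res = stats.linregress(1.0 / se_pub, eff_pub / se_pub)
t_stat = res.intercept / res.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(eff_pub) - 2)

print(f"Egger intercept: {res.intercept:.2f} (p = {p_value:.3g})")
```

With a biased sample like this, the intercept should land well above zero, which is the kind of skew a meta-analyst would flag; in real metascience the same test is applied to effect sizes extracted in systematic reviews rather than simulated ones.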

Here’s a blog post from 2016 by Prof Emily Sena (don’t be fooled by the byline; she’s been promoted since she wrote this) on her systematic approach to detecting discipline-wide inadequacies in methodology and reporting in pre-clinical research.
https://blogs.bmj.com/bmj/2016/06/20/emily-sena-too-many-drugs-too-few-medicines-the-translational-failure-of-animal-research/

I think that looking at patterns in that sort of data might be more informative.

Phill, Thanks for your effort to shed light on these dynamics given your deep experience in this field. No doubt what you call tribalism is pervasive in many fields. I particularly appreciate your identifying incentives as being a part of the problem.

That said, I see fraud such as that which has corrupted at least one line of Alzheimer’s research as closer to, or perhaps appropriately seen as, a crime, and not just the understandable, if unfortunate, byproduct of today’s research culture. While I agree that there is an important role for positive incentives, I have also come to believe that there is a need to expand negative incentives, writing: “The ultimate solution probably requires incentives that provide enough deterrence to eliminate such misconduct proactively rather than treating it reactively” ( https://scholarlykitchen.sspnet.org/2021/11/01/is-scientific-communication-fit-for-purpose/ ).

I wonder if you see any way to ramp up deterrence in the near term rather than, or perhaps in addition to, the longer term project of rethinking the competitive dynamics that have done so much to motivate scientific discovery?

Hi Roger,
I’m sure you’re right that this isn’t the only case of its type in research. Outside of research, it’s a fairly prevalent behaviour as well. Perhaps it’s part of human nature to form groups of like-minded people and at least partially define ourselves in opposition to some other group. We see it in everything from sports to politics. I think it’s worth noting that it’s only toxic and damaging when it rises to the level at which membership of the tribe becomes more important than rationality and objectivity.

I understand the desire to punish those who commit research fraud. Looking at high-profile cases, the outcomes can be very strange. Andrew Wakefield was publicly shamed, fired, and lost his license to practice medicine. No less than he deserved, but he has re-emerged as a darling of the far-right anti-vax movement in the US, propelled to renewed prominence during the Trump administration. Hwang Woo-Suk received an 18-month suspended sentence (although, weirdly, not for fraud). He nearly ended up in Libya as part of a project to build stem cell capability in North Africa, a project that was cancelled due to civil war. He’s now apparently working with a Chinese biotech that is cloning cows on an industrial scale.

I have a feeling that, for anybody but the most shameless among us, the catastrophic loss of reputation associated with being caught, coupled with effectively becoming unemployable within mainstream science, would be a pretty serious punishment. I ask myself what compels anybody to take such a huge risk, and what would stop them.

Why does it happen? I personally think that at many -not all- institutions, particularly soft-money institutions, the expectations of productivity are unrealistically high. At an institution I worked at, the widely acknowledged, unwritten standard was a paper in a respected journal every year for each postdoc in your lab. I personally saw people whose contracts were not renewed because they failed to meet those productivity standards through no fault of their own.

That’s not to take away personal responsibility from researchers who cross the line. Sometimes, probably rarely, researchers commit heinous acts of scientific fraud and should be punished. My fear is that with any systemic issue, there’s a temptation to look at it purely through the lens of enforcement, because that feels like strong action, even if fixing the underlying problem is too difficult. When that impulse is allowed to take over, the consequences of the attempted cure can end up being just as destructive as the problem itself. Take violent crime, for example. There are many examples of cities trying to tackle violent crime with violence: arming officers and increasing enforcement. That approach often doesn’t seem to work very well compared to approaches like reducing poverty and treating violent crime as a public health issue. Glasgow in Scotland is known for successfully taking the latter approach, and cities around the world are starting to copy it.

So I agree, people who are caught should face consequences, but there will never be a shortage of people calling for punitive measures against a visible enemy. What I’m worried about are the structural problems that make offending more likely. That’s a discussion people are often uncomfortable with because it involves addressing difficult and complex truths.

There is indeed a long line of research (in social psychology) that explains how we define ourselves on the basis of our group memberships and in opposition to other groups – social identity theory. Essentially, the “in-group” will enhance its own self-image through negative assessments and stereotypes of the out-group (and okay, my husband has been one of the key developers of this theory over the past few decades!!).

Is it possible that the public is frustrated about the lack of progress on Alzheimer’s treatment while the scholars argue about the interpretation of data, or whatever, on mice?
Every sentence about the hypothesis etc. should include the information that it is based on mice, not humans.

Hi Melissa,

Thanks for your comment.

I wouldn’t blame patient groups, advocacy groups, and the general public for becoming frustrated that we still don’t have a good, disease-modifying treatment, much less a cure, for this dreadful disease. As I mentioned, I used to be a researcher in the field, and one of the things often asked of any room full of people was for folks to put their hand up if their family had been affected by dementia. As you would probably guess, invariably, almost everybody raised their hand, including me. My grandmother eventually reached what’s sometimes called stage 7. I’ve seen it first hand, and it’s horrible.

Every researcher I know is fiercely committed to finding a solution to the problem. It’s a different discussion for a different time, but I often feel that that desire can be so strong as to itself become an incentive to move too quickly, resulting in blind alleys and drugs going to trial, and in some cases even to market, that don’t work.

As for the Amyloid hypothesis, or any of the hypotheses, being solely based on mice: I’m sorry, but they aren’t. The first observations of plaques and tangles were in a human brain. Her name was Auguste Deter, and she was a patient of Alois Alzheimer whose husband agreed to let her brain be dissected after her death in exchange for free medical care.

Today, researchers use a combination of human histology, cell culture, animal models, computational models, and more to try to understand the basic biology of what’s happening and to test promising ideas. The field isn’t perfect, but some experiments can only be done in animal models, and while no disease-modifying treatment yet exists, if it weren’t for mouse work, we’d know a lot less about what happens in human brains when people develop Alzheimer’s disease. I personally think we don’t do enough basic science, and that’s why we end up swinging and missing so often.

Thanks, Phill – very helpful to have your perspective on this particular issue. But as you note, the problems are deeper and wider than this one case. While Lisa is absolutely correct that the vast majority of researchers don’t commit fraud, the culture is one in which a perverse set of incentives to be “first” or “novel” encourages – and at the very least, doesn’t punish – cutting corners. This was very clear in the final report on the research culture in psychology after the fraud case with Diederik Stapel a decade ago (see https://www.science.org/content/article/final-report-stapel-affair-points-bigger-problems-social-psychology) which points to a “sloppy” research culture. While psychology has done much to clean up its act since then, the problems are more pervasive and not ones that publishers or journals should be expected to solve alone. We’re easy scapegoats (even though many of us are spending more and more on pub ethics teams), but where is institutional accountability and responsibility? Phill’s conclusions are spot on but this requires much bigger systemic change.

Hi Alison,

I’m sure you’re absolutely right about that. In my experience, research exists on a spectrum from the most rigorous to the most fraudulent, with many subtle levels of ‘sloppiness’, rule-bending, and poor practice in between. Much as we might want to see people as either good or bad, nobody is perfect; people are products of the cultures they interact with, and everybody makes compromises.

You’re also correct that publishers can’t tackle this problem alone. Publishers aren’t responsible for the way in which research is incentivised. It’s fascinating to me that getting published has become the gatekeeping process for a successful career, but as far as I can tell, publishers never set out to make it so. I think there is a problem with researchers being too dependent on publishing for career advancement, and in some cases just to stay employed, but that dependency developed at a time when many publishers knew far less about their authors than they do today, well before author-pays models, when most publishers saw the library as their primary customer.

In some respects, I think publishers are caught in the middle of this. I’m also very glad that some publishers, PLOS included, are taking problems like this seriously and working with other stakeholders to try to make things better.

The only reason publishers can’t tackle this under the current scholcomm structure is that they have no financial leverage because they get the work for free (or even get paid under gold/hybrid OA).
If publishers actually had to pay authors for their very expensive work, they would have an incentive to make sure that work was worth their money and would also have some leverage to make authors provide documentation about the research processes, raw data, whatever they would need to be more sure the quality was appropriate.

You put your finger on an interesting issue, Phill, and one that (I think) probably doesn’t have a clear solution. As much as we value science for its rigor and objectivity, as long as this endeavor is designed and carried out by humans, it will still be vulnerable to an array of cognitive biases—biases that lead scientists to expect certain answers to be right (and to therefore look askance at data that suggests otherwise), that lead scientists to believe certain opinions more than others, and so on. These biases eventually manifest as tribalism, the way you describe, and they have happened not only today but throughout the history of science (read The Nature of the Book by Adrian Johns for a fascinating account of the decades-long knife fight between Newton, Hooke, Flamsteed, Halley, and others who were present at the birth of “science”—disagreements that had more to do with shifting tribal alliances than with following the evidence to the truth). Today, this tribalism is probably most harmful when it comes to funding—NIH and NSF awards, for example, tend to heavily favor established researchers pursuing established trails; well-established PIs can create fiefdoms that suffocate innovation and different thinking. Funding innovation might be one way of breaking this lock (e.g., funding lotteries); innovation in impact evaluation might also help (e.g., REF). Helping scientists stay aware of their cognitive biases might help too. I’m not sure what this would look like. Open peer review? More robust collaboration networks?
