Research Findings: Can We Trust Them? Part II

Two years ago I wrote about the problems we face in trusting research findings. To bring you up to date, here is my follow-up post. Unfortunately, the situation hasn’t improved much.

The information, or misinformation, we face daily

You may have read about the recent study out of Warwick Medical School in the UK suggesting that kids from families that moved frequently when the child was young (resulting in the child often changing schools) have an increased risk of psychosis. I imagine there are plenty of parents now feeling guilty that they may have “caused” their child’s mental-health issues because they moved frequently years ago. This type of interpretation, or misinterpretation, is all too common; hardly a day goes by when we don’t hear another such claim from the news media. Why might this be off-base? Because this is correlational, observational research (not randomized or double-blinded); it’s not cause-and-effect. It simply indicates an apparent association between two things: moving, and later evidence of psychosis. There are ample alternative explanations. For example, given that schizophrenia is predominantly genetic in origin, and can lead to job and housing instability, it’s reasonable to assume that families with a higher genetic loading for schizophrenia are more apt to move, and also more likely to have a child who later shows some signs of psychosis. One may not have “caused” the other, as the news reports would have you believe.
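To see how that alternative explanation could play out, here is a small, purely illustrative simulation (the numbers are invented, not taken from the Warwick study): a hidden family-level risk factor raises both the chance of frequent moves and the chance of later psychosis, and the two end up associated even though one never causes the other.

```python
# Illustrative simulation only: invented numbers, not data from the Warwick study.
# A latent family-level factor raises BOTH the chance of frequent moves and the
# chance of later psychosis. Moving has no causal effect here, yet the two
# variables still end up associated in the "observed" data.
import random

random.seed(0)
n = 100_000
moved_and_psychosis = moved = psychosis = 0

for _ in range(n):
    liability = random.random() < 0.10      # hypothetical latent familial risk
    p_move = 0.40 if liability else 0.20    # the risk factor also predicts frequent moves
    p_psych = 0.08 if liability else 0.01   # ...and predicts later psychosis
    m = random.random() < p_move
    p = random.random() < p_psych
    moved += m
    psychosis += p
    moved_and_psychosis += (m and p)

rate_if_moved = moved_and_psychosis / moved
rate_if_not = (psychosis - moved_and_psychosis) / (n - moved)
print(f"psychosis rate, movers:     {rate_if_moved:.3%}")
print(f"psychosis rate, non-movers: {rate_if_not:.3%}")
# Movers show a higher rate even though moving causes nothing in this model.
```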

What made all the difference?

How did we come to stop drawing such connections between likely unrelated events? Historically, that capacity is relatively recent. Think about it: how did we come to stop using bloodletting to “cure” illnesses? What finally made the medical community realize those approaches were ineffective, and how did we come to know that later medical treatments actually worked better than doing nothing (or draining the blood out of somebody)?

Randomization

Yes, that simple word, though not so simple a process, saved the day for medicine and for every treatment approach since. The concept is relatively recent: first hypothesized and documented in the 1930s, but not used to assess surgical treatment until the 1960s. In the absence of randomization and, even better, single- or double-blind controls, all sorts of things can make you think a treatment works when it doesn’t, or doesn’t work when it does. These things are called confounding variables, and they wreak havoc on a study’s, or a “treatment’s,” apparent effect. Keep in mind that bloodletting was the treatment of choice for 2,000 years and continued into the late 19th century. So much for basing a treatment on one’s clinical observation.
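Here is a second toy simulation, again with made-up numbers, showing why randomization matters: a treatment that does nothing at all (think bloodletting) looks impressively effective when patients self-select into it, and correctly looks useless once a coin flip decides who gets treated.

```python
# Toy illustration with made-up numbers: why an ineffective treatment can look
# effective without randomization. The treatment does nothing, but in the
# "observational" run mildly ill patients are the ones who tend to receive it,
# so the treated group recovers more often anyway. Randomizing assignment
# removes that distortion.
import random

random.seed(1)
n = 50_000

def recovery(mild):
    # Recovery depends only on how sick the patient is, never on treatment.
    return random.random() < (0.80 if mild else 0.40)

def trial(randomized):
    treated_rec, treated_n, untreated_rec, untreated_n = 0, 0, 0, 0
    for _ in range(n):
        mild = random.random() < 0.5
        if randomized:
            treated = random.random() < 0.5                      # coin-flip assignment
        else:
            treated = random.random() < (0.7 if mild else 0.3)   # self-selection
        rec = recovery(mild)
        if treated:
            treated_n += 1
            treated_rec += rec
        else:
            untreated_n += 1
            untreated_rec += rec
    return treated_rec / treated_n, untreated_rec / untreated_n

for label, flag in [("observational", False), ("randomized", True)]:
    t, u = trial(flag)
    print(f"{label:13s} recovery: treated {t:.1%} vs untreated {u:.1%}")
# The observational comparison flatters the useless treatment; the randomized one does not.
```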

How much havoc?

Well, here’s the sad truth: when it comes to the predictive value of studies, randomized trials have a positive predictive value (PPV) of about 85%. Once you leave the world of randomization, though, it gets really bleak, really fast, with PPV dropping to somewhere between 20% and 0.1% for nonrandomized epidemiological studies (you know the ones, announced daily on the news, saying that if you eat some particular food you’re going to get some horrible malady, or that if you move, your child may become psychotic). This led the prominent researcher Dr. John Ioannidis to assert that half of all research findings are false (even worse, he suggested that 90% of all medical research is inaccurate, and that 50% of the research deemed ‘most reliable,’ in the most reputable journals, is inaccurate). In that regard, it doesn’t matter whether the research comes from the most reputable of journals; it was still found to be flawed (see hormone-replacement therapy, vitamin D for heart disease, and coronary stents, among countless other research topics).
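For those curious where figures like these come from, Ioannidis models a field of research by the pre-study odds R that a tested claim is true, the study’s statistical power, and the significance threshold alpha; ignoring bias, the positive predictive value works out to PPV = power x R / (power x R + alpha). The sketch below plugs in illustrative values of my own choosing (not figures from his paper) to show how quickly PPV collapses when the hypotheses being tested are long shots.

```python
# Illustrative only: my own example numbers, not figures from Ioannidis's essay.
# PPV here is the chance that a statistically significant ("positive") finding
# is actually true, given the pre-study odds R that the tested claim is true,
# the study's power, and the significance threshold alpha.
# No-bias form of the formula: PPV = power * R / (power * R + alpha).

def ppv(pre_study_odds, power, alpha=0.05):
    r = pre_study_odds
    return power * r / (power * r + alpha)

scenarios = [
    ("well-powered RCT, plausible hypothesis (R = 1:1)",  1.0,   0.80),
    ("underpowered exploratory study (R = 1:10)",         0.10,  0.20),
    ("data-dredging across many associations (R = 1:1000)", 0.001, 0.20),
]

for label, r, power in scenarios:
    print(f"{label}: PPV ~ {ppv(r, power):.1%}")
# Bias (flexible analyses, selective reporting) pushes every one of these lower still.
```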

It’s also common to find self-serving statistical sloppiness. In a 2011 analysis, Dr. Wicherts and Marjan Bakker, at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that, once corrected, changed a reported finding, almost always to the detriment of the authors’ hypothesis. These errors have far-reaching implications. For example, claims based on fMRI brain-scan studies are increasingly being allowed into court in both criminal and civil cases, yet a 2009 study found that about half of such studies published in prominent scientific journals were so “seriously defective” that they amounted to “voodoo science” and “should not be believed.”
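How does anyone catch that kind of sloppiness? Largely by redoing the arithmetic. Below is a minimal sketch, using an invented example result, of the sort of consistency check Bakker and Wicherts performed: recompute the p-value implied by a paper’s reported test statistic and degrees of freedom, then compare it with the p-value the paper claims.

```python
# A minimal sketch (my own invented example values) of a reporting-consistency
# check: recompute the p-value implied by a reported test statistic and degrees
# of freedom, and compare it with the p-value the paper actually reports.
from scipy import stats

# Hypothetical reported result: "t(28) = 2.05, p < .01"
t_value, df, reported_p = 2.05, 28, 0.01

recomputed_p = 2 * stats.t.sf(abs(t_value), df)   # two-tailed p from t and df
print(f"recomputed p = {recomputed_p:.4f}")        # about 0.05, not below .01

if recomputed_p > reported_p:
    print("Reported p-value is smaller than the test statistic supports.")
```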

What to do?

We’re bombarded daily with news of the ‘latest research’ asserting one thing or another. What can we believe? I wish I had an easy answer for you. All I can say, as emphatically as possible, is that if the research is not based on randomization, it’s a crapshoot. Factor in, as well, the all-too-common politicization of research findings, which biases results even further. Bottom line: always be skeptical, always look below the surface, study the research design, don’t take the news reports at face value, and don’t take the researcher’s findings, as reported in the study itself, at face value. Plenty of researchers will report findings that sound convincing (they want to get published, get tenure, and be seen on 60 Minutes) but that are based on correlational or even purely observational designs, both of which are ripe for error. To make matters worse, even randomized designs can have problems that skew results in a favorable light (see “enriched” designs).

Where do we go from here?

We have a few options:

1.) read and accept research results, as the mainstream press and journals would prefer,

2.) believe nothing and remain skeptical about everything you read and hear,

3.) learn how to effectively analyze research, or

4.) don’t read anything and turn off your TV.

Option 4 doesn’t sound so bad, but I suggest options 2 and 3. It’s not easy, but the alternative is, in my opinion, worse.

If you want some resources to learn about effectively interpreting research, email me at jcarosso@cpcwcare.com.

God bless you in your ongoing pursuit of the truth.

Dr. John Carosso

Dr. Carosso has more than 30 years of experience as a licensed Child Clinical Psychologist and Certified School Psychologist working in private, inpatient, outpatient, residential, school, and home settings. He is Clinical Director of Community Psychiatric Centers (cpcwecare.com), a licensed Behavioral Health Outpatient Clinic, and operates both the Autism Center of Pittsburgh (autismcenterofpittsburgh.com) and the Dyslexia Diagnostic and Treatment Center (dyslexiatreaters.com).