In our paper, “Studies Show,” we focused on what makes good science and the things that can influence an outcome and turn good science into junk science.
Now if you think the best way to stop a researcher with bad science is by a researcher with good science, you might be onto something. However, we’re taking this a few steps further, pointing to a recent study that showed that 9 out of 10 scientists know about one (or more) of their colleagues faking their results [Ref].
But Professor Neil deGrasse Tyson! You told us that the good thing about science is that it’s true whether or not you believe it.
The thing is, Prof. Tyson is in a field that is least affected by money. In theoretical physics, the results of any theory being tested will not make or lose billions of dollars. But there is still a money influence in theoretical physics, and it comes down to something every professor knows about: Publish or Perish. You can’t get grants if you don’t publish, you might not get tenure if you don’t publish, and even the prestige of the journal in which you publish can boost or diminish your career, depending on that journal’s impact factor (the average number of citations to articles published therein). [Ref]
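For the curious, the impact factor is just an average, and the standard version uses a two-year window. A quick sketch with made-up numbers for a hypothetical journal:

```python
# The standard two-year impact factor: citations received this year
# to articles published in the previous two years, divided by the
# number of citable items published in those two years.
# All the numbers below are invented for a hypothetical journal.
citations_in_2024_to_2022_2023_articles = 1200
citable_items_published_2022_2023 = 400

impact_factor = (citations_in_2024_to_2022_2023_articles
                 / citable_items_published_2022_2023)
print(impact_factor)  # 3.0
```

A journal with an impact factor of 3 gets cited, on average, three times per article; the higher that number, the shinier the gold star.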
Anyone who’s ever spent time on a university campus has heard that phrase (publish or perish). An associate professor won’t make full professor without publishing, and publishing with results.
As we pointed out in our paper on Louis Pasteur, even he faked a few results, and like everyone else, the reason he did it was to get more grant money. Studies do not fund themselves.
In the case of the Duke University study brought to light by a whistleblower, it was all about the money.
Duke University has admitted that one of its lab technicians falsified or fabricated research data on respiratory illnesses that were used to get large grants from the Environmental Protection Agency. [EPA-funded lab faked research results on respiratory illnesses, whistleblower lawsuit claims]
Now get this. One study on fraud was conducted by sending out surveys that scientists were to fill out and return anonymously. These were sent out between 1986 and 2005.
Not surprisingly, two-thirds reported colleagues who invented their findings, but as for themselves, just 2% admitted to fudging. [Scientists faking results and omitting unwanted findings in research]
If you really want to get into the guts of the matter (you might want to brush up on your statistics first), you can read How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data.
You can guess from the term Meta-Analysis that they’re taking a lot of those surveys and crunching their numbers, but even they start out saying “The frequency with which scientists fabricate and falsify data, or commit other forms of scientific misconduct is a matter of controversy.”
It sure is, and by the end of their paper, after you’ve been bombarded with all the facts and figures, you arrive at the conclusion that they’ve not really learned much from all this work because the surveys were not standardized.
But they did hit on a few interesting points . . . that might just crack you up.
They determined that 1 out of every 100 scientists commits fraud, or 1 out of 10, depending on how you’re counting (personally, I use both fingers and toes). Retractions (where a paper is pulled because there was some naughty-naughty going on) in the PubMed library have a frequency of 0.02%, which led the authors to speculate that 0.02% to 0.2% of papers at PubMed are fraudulent.
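That jump from 0.02% to 0.2% comes from assuming that most fraud never gets caught. Here’s a back-of-the-envelope sketch; the detection rates are my own illustrative assumptions, not figures from the paper:

```python
# If 0.02% of PubMed papers are retracted, the true fraud rate depends
# on what fraction of fraud is ever detected. These detection rates are
# illustrative assumptions, not numbers from the meta-analysis.
retraction_rate = 0.0002  # 0.02% of papers get retracted

for detection_rate in (1.0, 0.5, 0.1):  # fraction of fraud actually caught
    estimated_fraud_rate = retraction_rate / detection_rate
    print(f"if {detection_rate:.0%} of fraud is caught, "
          f"~{estimated_fraud_rate:.2%} of papers are fraudulent")
```

Catch everything and the fraud rate equals the retraction rate (0.02%); catch only 1 in 10 and you’re at 0.2%, the top of their range.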
Eight out of 800 papers submitted to The Journal of Cell Biology had digital images that had been improperly manipulated, suggesting a 1% frequency. Finally, routine data audits conducted by the US Food and Drug Administration between 1977 and 1990 found deficiencies and flaws in 10–20% of studies, and led to 2% of clinical investigators being judged guilty of serious scientific misconduct.
Then they go on to say that because their stats are based upon those “frauds” that have been caught, the numbers are “underestimates.”
Next they go into the mathematics they used to get their conclusion, which you’ve seen already, so let’s move on to another study, conducted in 2015, called Data fraud in clinical trials.
These people are much nicer. They start out with:
Honesty and truthfulness are bedrock principles of scientific research. Adherence to these principles is essential both for the progress of science and the public perception of scientific results. Deviations from these principles may generally be considered scientific misconduct or fraud, although the US Public Health Service defines research misconduct more narrowly, restricting it to the most egregious practices.
And they even distinguish between deliberately fabricating results and “fudging” of some figures, as Isaac Newton “might have.”
The history of science is replete with high profile cases of known or suspected scientific misconduct. Indeed, some of the giants of science are not immune from suspicion of questionable practices, including Claudius Ptolemy, who is suspected of reporting work by others as his own direct observations; Isaac Newton, who may have falsified some data to make them agree more closely to his theories; and Gregor Mendel, who is suspected of some selective reporting of results or even data falsification. In these and other examples, there is often no direct proof of fraud, only statistical evidence that the observed results are too close to their theoretically expected value to be compatible with the random play of chance that affects actual experimental data.
So there’s been a lot of famous fudging. Take the lipid hypothesis, that silly concept the doctors all repeat, that high cholesterol is the cause of heart disease (which it ain’t). Any respectable scientist can look at the data that resulted in the lipid hypothesis and tell you that the guy cherry-picked his data. Or take Pasteur, who at times never admitted to failure, preferring instead to make up new explanations for it.
But the research we’ve seen on actual fraud is not about fudging. Though no one can pin down the percentage of scientists who’ve committed actual fraud, the estimates are around 30% and possibly higher. The costs of fraud, too, are starting to pile up.
The Duke example above involved over $200 million, not to mention years of research now being suspect. In the end, the researcher involved pled guilty to embezzlement for siphoning off more than $25,000 to buy stuff at Amazon, Target, and Walmart while faking receipts. I guess someone good at faking data must also be good at faking receipts.
She got off with a fine, probation, and community service while poor Duke is being sued for three times the cost of the research, with a few million more going to the whistleblower.
The question here is, if fraud is so widespread and costly, what are we doing to catch it?
You’re gonna like this: Artificial Intelligence.
Two Stanford researchers have spotted patterns in published papers that can alert us to possible fraud.
It’s in the writing patterns.
Now, how we lie (and we do lie, every one of us; when someone tells you they never lie, that’s a lie) has been studied up, down, and inside out. You can find articles telling you how to spot a liar, how to tell whether your spouse is cheating, or why your kids will tell you they didn’t get into the cookies with crumbs all over their faces. The data on lying is piling up. It’s a human trait, and when you lie, you’re just being human.
These two researchers learned that when a liar lies, three common things appear in the lies.
The researchers uncovered these clues by going over “retracted” papers archived on PubMed (where we get many of the studies we point to).
They then created an “obfuscation index,” because you can’t have statistics without numbers.
This was achieved through a summary score of causal terms, abstract language, jargon, positive emotion terms, and a standardized ease-of-reading score. [Ref]
They surmised that since people writing up fraudulent research don’t want to get caught, they’ll obscure parts of the paper. This would be difficult for most of us to spot because, to outsiders, most research papers look obfuscated to begin with; anyone sleuthing through a research paper looking for fraud had better be familiar with the subject.
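To make the idea concrete, here’s a toy sketch of how such an index might be computed. The word lists, the weighting, and the crude syllable counter are all my own stand-ins; the researchers’ actual lexicons and formula aren’t reproduced here:

```python
import re

# Invented stand-in lexicons; the real index used published word lists.
CAUSAL   = {"because", "therefore", "thus", "hence", "consequently"}
JARGON   = {"paradigm", "stochastic", "orthogonal", "canonical", "heuristic"}
POSITIVE = {"novel", "robust", "remarkable", "encouraging", "important"}

def words(text):
    return re.findall(r"[a-z']+", text.lower())

def syllables(word):
    # Crude estimate: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def reading_ease(text):
    # The standard Flesch reading-ease formula (higher = easier to read).
    ws = words(text)
    if not ws:
        return 100.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return (206.835
            - 1.015 * (len(ws) / sentences)
            - 84.6 * (sum(syllables(w) for w in ws) / len(ws)))

def obfuscation_index(text):
    ws = words(text)
    n = max(1, len(ws))
    rate = lambda lexicon: sum(w in lexicon for w in ws) / n
    # One plausible weighting: more causal terms and jargon push the
    # score up; positive-emotion terms and easy reading pull it down.
    return (rate(CAUSAL) + rate(JARGON) - rate(POSITIVE)
            - reading_ease(text) / 1000)

print(obfuscation_index("These robust results are encouraging."))
print(obfuscation_index("The stochastic paradigm is, hence, orthogonal."))
```

The second sentence scores higher: more jargon, more causal connectives, and no chest-puffing.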
They immediately discovered that fraudulent (retracted) papers contained 1.5% more jargon.
However, one of the biggest clues to a fraudulent paper was a lack of positive emotion terms, terms normally used to praise sound data or sound discoveries. A paper that is not based on fraudulent figures is quite proud of its accomplishment and tends to puff its chest out. Naughty-naughty papers are a bit shyer and tend to look down at their feet when you talk to them.
The researchers concluded that an artificial intelligence program designed from their findings would be the thing to catch fraudulent papers before they’re published. Of course, there are caveats concerning false positives: say, a paper written by the Queen, who rarely speaks in the first person (we are not amused), or by someone in love with jargon who feels scholarship isn’t scholarship unless it’s obfuscated with words found only in my favorite book: Dictionary of Difficult Words.
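Here’s a minimal sketch of how such a screening program might be wired together, using scikit-learn and invented toy numbers; the researchers’ actual model, features, and training corpus are assumptions on my part:

```python
# Toy data: each row is [jargon rate, positive-emotion rate, reading ease]
# for one paper. All of these numbers are invented for illustration.
from sklearn.linear_model import LogisticRegression

X = [
    [0.09, 0.00, 18.0],  # retracted: jargon-heavy, joyless, hard to read
    [0.07, 0.01, 22.0],  # retracted
    [0.03, 0.05, 48.0],  # never retracted: plainer and prouder
    [0.02, 0.04, 52.0],  # never retracted
]
y = [1, 1, 0, 0]  # 1 = retracted, our stand-in label for fraud

model = LogisticRegression().fit(X, y)

new_paper = [[0.08, 0.00, 20.0]]
print(model.predict_proba(new_paper))  # [P(clean), P(looks retracted)]
```

Feed it the obfuscation features of a freshly submitted paper and it spits out a probability, which an editor could use to flag a manuscript for a closer look rather than to convict anyone outright.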
As people who sometimes get sick and have to see doctors, we know a fraudulent study can be hazardous to our health. How many died because data about Vioxx and heart disease never made it into the final publications? 50,000? 60,000? (We discovered this connection two years before the FDA published it because we read a bunch of unknown journals, the sort that doesn’t get you a gold star no matter what you publish in them.)
Retractions aren’t going to bring the dead back to life. And since our medical system is profit based, profit is the bottom line. When money is in the equation, the chances of fraud increase dramatically.
If you are familiar with our work here, you’ll notice that I don’t often refer to myself in the first person singular. I call myself “we.” People have noticed that I’ve done this for a long time. They’ve also noticed that I refer to myself as just David, rather than David Bonello.
These traits are simply something I’ve picked up in my years of recovery from PTSD, from my meditation practices, and from reading and studying subjects that have enlightened me. I often tell people to come up and visit us on my farm. Well, the “us” referred to the dogs, the cats, the chickens, and me.
I don’t have a very needy ego. Oh, it’s there. Can’t run away from it. It’s just not needy. And so I use “we” instead of “I” a lot. There are others who do help out here, and the “we” often includes them. I’m just not all hopped up on the first person singular.
So, according to the study above, because I don’t use the first person singular (those first-person pronouns), I must be lying.
Shucks, you caught me again.
Are there a lot of first-person singular object pronouns, or is it just me?