Editor’s Note: There has been a page on our site (sites) called “Studies Show” for years. We thought it was about time to rewrite it, for a number of reasons. We hope you enjoy it. There’s a lot here to learn and even more to ponder.
It seems that everyone everywhere on some occasion wants to prove something to us. We are bombarded with commercials telling us that this phone plan is cheaper than that one, followed by a commercial from a competing company that tells us the exact opposite. Coke® wants us to think their cola tastes better than Pepsi®, and Pepsi® wants us to think its product tastes better than Coke®. Our government wants us to believe all sorts of things: that it is in charge, that we are all free, that our best interests and safety are its prime motivation. And Science (with a capital “S”) is constantly proving that our previous generation of thought was, if not completely inaccurate, at least slightly off.
I recall the words of an old and wise professor when I uttered the term “scientific proof.” He quickly, but tactfully, pointed out that the true scientist never uses the word prove, for the true scientist realizes that s/he can demonstrate something only for the moment, and that in the future, someone might discover a method of demonstrating the exact opposite, or at least, something quite different.
The word “prove” means something completely different to a scientist than it does to a lay person.
Allow me to show you something really special. You philologists out there will love this.
I have The Compact Edition of the Oxford English Dictionary. It cost me, I think, $99.00 when I bought it years ago. You need to read it with a magnifying glass, because it’s over 20 volumes shrunk down (in print size) and placed in two thick volumes (which I’ve found at AbeBooks for just a couple of bucks).
Here is the first mention of the word prove:
On the sixth line down, you can see that it originates from the Latin word meaning “to test.”
What I love about this dictionary is that it was put together by everyone. At one time, not that long ago, around the time of the Civil War, a person could read nearly every book ever written (in their modern tongue).
The construction of the Oxford English Dictionary started in 1857. People everywhere, including scholars still reading Old English and Middle English, would send a letter to the editors whenever they found a word they felt to be significant, noting the date of the book, its title and author, and the sentence in which the word appeared. It was like a contest to find the earliest usage of a particular word (with a particular connotation), but the winner got nothing, except that if the editors thought the citation significant, it would be published.
In the photo above, you can see that the first cited sentence dates from 1297, in Middle English, and that the meaning, as noted above, was “to test.”
And now you know the significance (and meaning) of the old cliché, “The exception proves the rule,” in that the exception “tests” the rule.
I hope you enjoyed that little excursion.
You could look this up on the web, but most of you who’ve not studied statistical analysis won’t get much out of it. I have studied this field, but because of my brain damage (from PTSD), I’m now horrible at math. The thing is, I can still remember key points in statistical analysis, and if I can understand them, then I’m pretty sure I can explain them to you.
We are going to work our way from the lowest (least trustworthy) to the highest (this you can believe). However, I want you to stay flexible enough to realize that exceptions exist all around us and that the qualification of what we can be sure of is never quite enough to be absolutely assured.
In other words, only very studied individuals realize that they know nothing at all, while the masses are often too stupid to realize how stupid they are.
As Einstein said, “Whoever undertakes to set himself up as a judge of Truth and Knowledge is shipwrecked by the laughter of the gods.”
What we know now to be true can always be shown to be only partially true in the future, given future findings. What we learn by experiment are limits, and, as Einstein also said, “Once we accept our limits, we go beyond them.”
The question, “Of what can I be sure?” has to be answered by constantly questioning. Whenever I engage in a discussion (argument in the highest sense), I often run into people who’ve never questioned their basic belief system, thus I realize that no matter what conclusions they arrive at, they will never really know if they are right or wrong, but will always assume they are right.
The path to knowledge begins with doubt, but where that path will lead and where it will end is determined by our creativity, not by our education. Education is like a flashlight; it shows us the way. But ask anyone who’s dealt in breakthrough science and they’ll tell you that intuition and imagination are your greatest tools, though most important is to never stop trying.
Direct observation is the least trustworthy form of evidence because of the huge variable called “the observer.” An observation is very subjective. Two people can witness the same event and yet report it completely differently.
The classic study on this is called “They Saw A Game — A Case Study.”
To sum that study up: people from both colleges were asked about the game, and their responses showed a huge difference, which the researchers attributed to “selective memory.” Yet it was more than just memory; it involved what they saw and how their subjectivity determined what they saw.
From another angle, we can view the art of observation as what oriental philosophy refers to as “The Tao.” The Tao was the world’s first science, and it was the science of observation. There are many translations of “Tao,” but the best I’ve found are “The Way” and “The Way of All Things.”
We look at direct observation as the least trustworthy, but over many centuries the Tao grew into a system of medicine that has worked quite well throughout Asia.
Without today’s high technology, they were able to find and map the meridian system of energy, to read that energy, and to modify it through manipulation, diet, acupuncture, and qigong.
Through centuries of observation, the Tao was molded and shaped as they observed the nature of things and the things in nature. Because this system has been in place for thousands of years, you can pretty much be sure that they’ve gotten a few things right, because, let’s face it, if something isn’t working, it doesn’t take a thousand years to notice it’s not working and to change it.
Focusing on the “life force,” or energy, practitioners of Traditional Chinese Medicine never studied cadavers, because cadavers contain no life force. However, when introduced to the principles of Western medicine, they quickly picked them up and added to their own medicine those things they could see were working and advantageous. This attitude is very much unlike the Western mindset, which looked down upon their practices as primitive and superstitious without bothering to investigate.
“There is a principle which is a bar against all information, which is proof against all arguments, and which cannot fail to keep a man in everlasting ignorance—that principle is contempt prior to investigation.”
(attributed to Herbert Spencer)
Today, because we’ve actually tested parts of TCM, Westerners are beginning to accept bits of this ancient wisdom that were brought to us through simple “direct observation.”
And it is here that your humble editor must toss in his two cents:
Western science, like American medicine, is an elitist system. It looks down upon other systems. I’ve often heard Traditional Chinese Medicine referred to as primitive, when, in fact, since it’s been around so much longer than our medicine, it is quite sophisticated. It’s just not well understood by the Western mind. For the longest time, Western science thought acupuncture quaint and curious. A few even decided to study it. When they discovered that the meridians actually existed, and that acupuncture proved itself in the laboratory (using the “double blind”), they “slowly” began to admit to the value of acupuncture in a clinical setting.
However, they still did not know how it worked, and when Chinese physicians admitted that they too did not know how it worked, but only how to work it, again western medicine looked upon eastern medicine with a bit of disdain, which was hypocritical because they still had to admit that it worked.
Centuries of observation actually means something, not just to the oriental mind, but to an open mind.
Note that the Case Report is singular. Case Studies, or as the scientific community refers to them, Case-Control Studies, are a bit different, and we’ll talk about them later.
Case Reports are often referred to as anecdotal. Being anecdotal, people who think they are being “scientific” tend to write them off, and yet they exist for very substantial reasons.
They are used in sociology, psychology, and medicine. They tend to point out exceptions and anomalies, and give therapists an insight into treatment options. The study listed above, “They Saw a Game,” was, as you noted, a case study.
From the British Medical Journal, here is a list of the types of case reports they have published:
Even though they lack statistical sampling and controls, case studies have admittedly been useful in research and in “evidence-based medicine.”
Guidelines for reporting case studies in journals are continually being developed and refined, because publishing too many can erode a journal’s reputation, and publishing a sloppy one can destroy it.
Finally, there are quite a few very famous case studies, such as Frederick Treves’s report on “The Elephant Man” or even Christiaan Barnard’s report of the world’s first heart transplant.
The cross-sectional survey is probably the geekiest of the bunch, since it involves a lot of statistical analysis. These surveys exist to sort out the variety of causes of a thing (especially in economics) and to rule out variables, while discovering which variables are independent and which are dependent.
In medicine, the cross-sectional survey exists to provide data on an entire population.
From Wikipedia we get: “Cross-sectional studies involve data collected at a defined time. They are often used to assess the prevalence of acute or chronic conditions, or to answer questions about the causes of disease or the results of intervention.”
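To make “prevalence” concrete, here is a minimal sketch in Python (the survey size and counts are invented purely for illustration) of how a cross-sectional survey estimates prevalence along with a margin of error:

```python
import math

# Hypothetical cross-sectional survey: 2,400 people examined at one point
# in time, 312 of whom currently have the condition of interest.
n_surveyed = 2400
n_with_condition = 312

# Prevalence is simply the proportion with the condition at that moment.
prevalence = n_with_condition / n_surveyed

# Rough 95% confidence interval using the normal approximation (z = 1.96).
standard_error = math.sqrt(prevalence * (1 - prevalence) / n_surveyed)
margin_of_error = 1.96 * standard_error

print(f"Prevalence: {prevalence:.1%}")
print(f"95% CI: {prevalence - margin_of_error:.1%} to {prevalence + margin_of_error:.1%}")
```

That is the whole trick: a snapshot of a population at one moment, with no follow-up over time.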
Of course, there are advantages and disadvantages, though this one really should be left to sociology and economics, because in medicine you’re never really sure of your results, and little, if anything, translates into the cause of an illness or into patient care.
So we’ll leave this one right here.
Case-control studies are much like the previous type, except that instead of discovering something about the general population, a case-control study compares individuals with a specific characteristic to a small sample from the rest of the population. In medicine, a case-control study compares a group of people with a condition to a group of people without that condition; beyond that difference, the two groups have a lot in common.
Here is probably the best example:
One group has lung cancer. Another group doesn’t have lung cancer.
Each group contains smokers. If there are more smokers in the lung cancer group, then you have shown (but not conclusively) a “possible” connection between smoking and lung cancer.
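To give you a feel for the arithmetic behind that comparison, here is a minimal Python sketch of the odds ratio, the statistic most often reported for case-control studies. The numbers in the 2x2 table are invented purely for illustration:

```python
# Invented 2x2 table from a hypothetical case-control study:
#                    smokers   non-smokers
# lung cancer           180          70
# no lung cancer         90         160

cases_exposed, cases_unexposed = 180, 70        # the "cases" (lung cancer)
controls_exposed, controls_unexposed = 90, 160  # the "controls" (no lung cancer)

# Compare the odds of smoking among cases to the odds of smoking among controls.
odds_in_cases = cases_exposed / cases_unexposed
odds_in_controls = controls_exposed / controls_unexposed
odds_ratio = odds_in_cases / odds_in_controls

print(f"Odds ratio: {odds_ratio:.2f}")
# An odds ratio well above 1 suggests (but does not prove) an association
# between the exposure (smoking) and the outcome (lung cancer).
```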
Proving cause is the hardest thing in medicine, especially when corporations don’t want us to associate their product with death and destruction.
The cohort study is my personal favorite. Why? Because with the advent of artificial intelligence, relationships and findings come out of cohort studies that are totally unexpected and could never have been found by mere humans examining the data. Artificial intelligence is the original “way out of the box” thinker.
In fact, AI (artificial intelligence) has come so far that there are AI programs for your spreadsheets. These programs will show you relationships and conclusions that you and your business partners would have never seen even if you stared at your data for months.
The word “cohort” means a group of people banded together with a common characteristic or common experience. But this definition is loose. For example, Baby Boomers and Millennials could form two cohorts. Take Baby Boomers and Millennials with Multiple Sclerosis, and you have another cohort.
You could, if you wanted, expand the cohort to include all living humans, or those who showed up for testing. It can be a pretty loose definition.
Cohort studies are used in medicine (and nursing), the earth sciences, insurance, business, psychology, and sociology. What they do is gather lots of data to find patterns.
Insurance companies know your odds of getting lung cancer, or bladder cancer, or kidney cancer based on where you live (from cohort studies using your location to form the cohort). Businesses determine their best audience (using data collected from cookies on your computer and from Google searches). You belong to many, many cohorts once you go online, e.g., people who use Google, people who use social media, people who purchase online, etc., etc., etc.
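As a toy illustration of that kind of pattern hunting, here is a minimal Python/pandas sketch that scans a cohort-style table for its strongest correlations. The column names and numbers are invented; real cohort analyses (and the AI tools mentioned above) are far more sophisticated, but the idea is the same: let the machine sift the data for relationships a human might never think to look for.

```python
import numpy as np
import pandas as pd

# Invented cohort-style data: one row per subject, columns are measurements.
cohort = pd.DataFrame({
    "age":            [34, 51, 47, 62, 29, 55, 41, 68],
    "hours_of_sleep": [7.5, 6.0, 6.5, 5.5, 8.0, 6.0, 7.0, 5.0],
    "systolic_bp":    [118, 135, 128, 142, 112, 138, 124, 150],
    "daily_steps":    [9000, 4000, 6500, 3000, 11000, 4500, 7000, 2500],
})

# Pairwise correlations between every variable and every other variable.
corr = cohort.corr()

# Keep each pair once (upper triangle), then rank by strength of association.
upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)
pairs = corr.where(upper).stack().sort_values(key=abs, ascending=False)

print(pairs.head(3))  # the three strongest relationships in this toy cohort
```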
Cohort studies are like case studies, but with a lot of people. They’re somewhat like cross-sectional surveys in that they too go through a bit of statistical analysis, though, in the hierarchy of evidence, they are not considered the be-all and end-all, because of biases, because the data is collected from human beings whose memories are untrustworthy, and because of other factors that cannot be controlled.
However, take it from someone who has read hundreds of cohort studies, if you write them off, you’re missing the point of them. It’s only the “sticklers” who look for a study’s faults without considering the “nearly magical” findings. Let’s face it: in the history of medicine, accidental discovery has shown itself to be just as valid as difficult, unceasing work. And the things cohort studies have found, through the use of AI, are mind-expanding.
I could list fascinating findings from cohort studies for hours, but here are just a few highlights from some very famous cohort studies:
The British Doctors Study — conducted from 1951 to 2001, though by 1956 it had pretty much concluded that smoking caused lung cancer.
Framingham Heart Study — we’ve talked about this one a lot. It’s the longest-running heart study ever. It started in 1948, and initial findings linked hypertension and high cholesterol to heart disease. Later it would suggest that high cholesterol was a symptom and not a cause, but that didn’t seem to get out, because the sugar industry was paying off universities to blame fats, rather than the inflammatory nature of sugar, for cardiovascular problems. The study is still continuing, only today its focus has expanded to include cancer, arthritis, dementia, osteoporosis, and hearing and sight disorders.
Nurses’ Health Study — one of the drawbacks of cohort studies is the cost and the trouble of gathering huge numbers of subjects together to fill out questionnaires. But like the doctors in the British study above, nurses are accessible, and giving them a bit more paperwork doesn’t seem to be a problem. (Where’s my sarcasm font?) This particular study started in 1976, continues today, and is one of the best studies ever conducted on risk factors for chronic diseases in women.
Physicians’ Health Study — just like the nurses, physicians are a captive audience (of subjects) who just need to fill out a bit more paperwork. This study started in 1983, was extended as the Physicians’ Health Study II, and ended in 2007. The conclusion was that supplementation with a general multivitamin (they used horrible supplements, many synthetic) did, in fact, significantly reduce the overall risk of cancer. [JAMA, November 14, 2012]
The drawbacks to cohort studies are that they are expensive, they take a long, long time to complete, they are worthless for rare diseases or sudden outbreaks, and, though they can find clues to the origin of an illness, they do not provide definitive proof.
However, in many cases they are more ethical than Randomized Controlled Trials, as you will see.
Computer modeling is very much related to the Cohort Study, in that it draws on data gathered from cohort studies, from case studies, from cross-sectional surveys, and from still more past studies. We got this from the NIH (National Institutes of Health):
Computational modeling is the use of computers to simulate and study complex systems using mathematics, physics and computer science. A computational model contains numerous variables that characterize the system being studied. Simulation is done by adjusting the variables alone or in combination and observing the outcomes. Computer modeling allows scientists to conduct thousands of simulated experiments by computer. The thousands of computer experiments identify the handful of laboratory experiments that are most likely to solve the problem being studied.
Today’s computational models can study a biological system at multiple levels. Models of how disease develops include molecular processes, cell to cell interactions, and how those changes affect tissues and organs. Studying systems at multiple levels is known as multiscale modeling (MSM). [Ref]
Computer modeling is fascinating, and it is growing in popularity among scientists because it allows predictions, and because nobody gets hurt: these studies use data, not subjects. The results are also available much more quickly, where an actual RCT (below) could take years.
In medicine and health care, computer models can be crucial and indispensable. Take the recent (2020) pandemic. Computer modeling was used to confirm that masks and social distancing work. You’ve got quacks on YouTube blowing smoke through masks to show they’re ineffective, but computer modeling, using the statistics gathered early on, demonstrated the benefits of masks and social distancing.
Computer modeling is used to track infectious diseases, determine clinical decisions, and even predict a drug’s side effects. Epidemiologists could not make their predictions without the aid of computer modeling. And in the end, computer modeling in medicine just improves the health care of everyone. [Ref]
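To give a flavor of what an epidemiological computer model actually looks like, here is a deliberately tiny Python sketch of the classic SIR (Susceptible-Infected-Recovered) model. The population size, transmission rate, and recovery rate below are invented for illustration; the models used during the pandemic were vastly more detailed, but they answer the same kind of “what if” question.

```python
# Minimal SIR epidemic model, stepped one day at a time.
# All parameters are invented for illustration only.
population = 1_000_000
susceptible, infected, recovered = population - 10, 10.0, 0.0

beta = 0.30   # transmission rate: infectious contacts per person per day
gamma = 0.10  # recovery rate: about 1/gamma = 10 days spent infectious

for day in range(1, 181):
    new_infections = beta * susceptible * infected / population
    new_recoveries = gamma * infected

    susceptible -= new_infections
    infected += new_infections - new_recoveries
    recovered += new_recoveries

    if day % 30 == 0:
        print(f"Day {day:3d}: {infected:9.0f} currently infected")

# Re-run with a smaller beta (masks, distancing) and the peak shrinks and
# arrives later; that comparison is the kind of result a simulation provides.
```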
I figured I’d better discuss RCTs with definitive results and RCTs with non-definitive results together, since they’re both randomized controlled trials, or what many call double-blind, randomized controlled studies, which sometimes have the added “with cross-overs.”
However, some RCTs cannot be double blind for a number of reasons. For example, let’s say one group gets a drug and the other group gets a placebo. If the drug has an odor to it, and the placebo does not, it’s hard to keep the researchers administering the drug in the dark, so only the subjects are unaware of which group they’re in, and this makes it simply a “blind” study.
Sometimes an RCT is called double blind with cross-overs. A cross-over is a person in the control group who, after a designated period of time, is crossed over into the experimental group, usually without the observers or the subject knowing about it. In this way, we get an even better look at how a specific medicine or procedure works, and, if the drug or procedure is successful, the control group gets it too.
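As a rough illustration of the mechanics of randomization and blinding, here is a minimal Python sketch. The subject IDs are invented; the point is that subjects are assigned to coded arms, and the key that says which code is the drug and which is the placebo is held by a third party until the trial ends.

```python
import random

# Invented subject IDs, purely for illustration.
subjects = [f"S{n:03d}" for n in range(1, 13)]

random.seed(42)  # fixed seed only so this example is reproducible

# Randomly assign each subject to a coded arm, "A" or "B".
allocation = {subject: random.choice(["A", "B"]) for subject in subjects}

# The sealed key is kept by a third party, never by the researchers or subjects,
# which is what makes the study "double blind."
sealed_key = {"A": "drug", "B": "placebo"}

for subject, arm in allocation.items():
    print(subject, "-> arm", arm)  # all that study staff and subjects ever see
```

Real trials usually use block randomization to keep the arms the same size, but the blinding principle is the same: nobody who interacts with the subjects can tell the groups apart until the code is broken.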
Concerning the two different outcomes, definitive and non-definitive, 99.99999999999999% of you won’t give a hoot as to the mathematics behind determining them, but I must bring it up.
I’ve been sent studies by people wanting to sell me stuff and in the old days, before my brain damage (from PTSD), I loved statistical analysis, and I would write back that the study they sent me was not definitive. I lost a few readers back then because of this.
Today, statistics are just jumbled numbers to me. To get a grasp on any particular study, I have to pass it on to friends who still work in the field.
But I am going to “try” to sum up the difference between these two in an oversimplified nutshell.
A study has what’s known as a confidence level or confidence interval. According to Wikipedia, “Confidence intervals were introduced to statistics by Jerzy Neyman in a paper published in 1937.”
For our discussion here, it means something like, “We are 90% confident that the true result lies somewhere inside this range.” That range is the confidence interval.
And next I have to introduce you to threshold and uncertainty. In polls, they call this a “margin of error.”
Look at a threshold as a detection limit: what it takes to “detect” the “thing” a study is looking for, or its “quantitation” limit. (You can substitute quantification for quantitation if you wish.) This is just being able to “count” something, or, one could say, it’s “big enough to note” or, of course, “it’s significant.”
Thus, a study that is deemed non-definitive is one in which the “confidence intervals” overlap the clinically significant threshold.
If they don’t overlap, then you’ve got yourself an RCT with definitive results.
That’s the determination in a nut-shell. If you don’t understand, don’t worry. There won’t be a test.
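For those who do like to see the nutshell in numbers, here is a minimal Python sketch (all figures invented) that computes a 95% confidence interval for a treatment effect and then checks whether it overlaps a clinically significant threshold, which is the definitive versus non-definitive distinction described above:

```python
import math

# Invented trial result: average improvement in some clinical score for the
# treatment group minus the control group, with its standard error.
effect_estimate = 4.2
standard_error = 1.5

# Invented threshold: the smallest improvement doctors would call meaningful.
clinical_threshold = 2.0

# 95% confidence interval using the normal approximation (z = 1.96).
margin = 1.96 * standard_error
ci_low, ci_high = effect_estimate - margin, effect_estimate + margin

print(f"Estimated effect: {effect_estimate} (95% CI {ci_low:.1f} to {ci_high:.1f})")

if ci_low > clinical_threshold:
    print("Definitive: the entire interval clears the clinical threshold.")
elif ci_high < clinical_threshold:
    print("Definitive (in the negative): the entire interval falls below it.")
else:
    print("Non-definitive: the confidence interval overlaps the threshold.")
```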
The RCT is considered the gold standard of how we know what we know.
It’s pretty easy to see the advantages of this sort of study. It tries to eliminate all those factors that would invalidate a study, such as human error or human prejudice. But still, there are drawbacks.
The first is the observer effect, often loosely called the Uncertainty Principle: the observer affects the observed, and what happens in an experiment does not necessarily translate into real life. This is just something every scientist has to put up with.
Next is “conflict of interest.” Conflict of interest can affect a study very subtly, from the methodology to the statistical analysis.
One final note on RCTs is they are considered unethical if they kill off the control group.
For example, you have in your study people with cancer. You give the experimental group the new drug and you give your control group sugar pills. You are, in effect, killing off the control group. Believe it or not, this was once standard practice.
There are ways to get around using a live control group, which usually involve creating a theoretical group based upon data collected in the past. We already know what happens to people who have stage four cancer and take sugar pills; we don’t have to kill cancer patients to know this. This is where computer modeling (see above) comes in: creating a computer model is more ethical than an RCT when the control group would consist of people with a terminal illness, simply because the control group isn’t human but theoretical.
One more thing we should point out: small studies get reported quickly in our media because journalists are dying to write about something of interest. However, most initial studies are taken way too seriously by the masses.
A good RCT requires a huge sample and an equally huge control group.
And that leads us to our final stop at the top of the hierarchy of how we know what we know to be true.
Meta-analyses (the plural of meta-analysis) take a whole bunch of RCTs and put them together to make one grand statement.
When everyone’s findings agree, then we can be sure that we’ve arrived at some sort of truth.
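For the curious, here is a minimal Python sketch of the simplest way RCTs are pooled: an inverse-variance, fixed-effect meta-analysis, in which each study’s result is weighted by how precise it is. The four study results below are invented, and real meta-analyses also check heterogeneity, publication bias, and much more.

```python
import math

# Invented results from four hypothetical RCTs: (effect estimate, standard error).
studies = [
    (0.8, 0.40),
    (1.1, 0.30),
    (0.6, 0.50),
    (0.9, 0.25),
]

# Inverse-variance weights: more precise studies (smaller SE) count for more.
weights = [1 / (se ** 2) for _, se in studies]

pooled_effect = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled_effect:.2f}")
print(f"95% CI: {pooled_effect - 1.96 * pooled_se:.2f} to {pooled_effect + 1.96 * pooled_se:.2f}")
```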
Let me ask you: Once a meta-analysis is published, does that put the subject to rest?
Not when you live in a corporatocracy. If your studies show that glyphosate (the active ingredient in Monsanto’s Roundup) is carcinogenic, you’ve not proved anything as far as Monsanto is concerned. They’re going to come at you with everything they’ve got. And just as I finished writing this paragraph, I discovered that the World Health Organization has just released a study showing that glyphosate is not carcinogenic.
Now what are we really to believe? First they publish that glyphosate “Is ‘Probably Carcinogenic,’” and then they publish “Glyphosate unlikely to pose risk to humans.” Which are we to believe, and why the sudden change?
One thing I know I’ll never forget is the seven CEOs of the tobacco industry testifying before Congress that nicotine is not addictive. These executives lied, bold-faced, before Congress, and got away with it without even a slap on the wrist.
Bobby Kennedy Jr edited the book Thimerosal: Let The Science Speak. For the longest time, we’ve been told there was no connection between vaccines and autism, but the industry finally removed thimerosal from its vaccines [no, they did not; we were just told they had]. This book is a meta-analysis of all the studies on thimerosal. Is it accepted today as fact? Not on your life. They can remove the mercury, but they will argue for years that the studies were cherry-picked, that the studies in the book were invalid, etc., etc., etc.
This paper is about to go deeper into the battles and the outside influences, both on studies themselves and on how studies are reported.
But first our conclusion.
So there you have it: the Hierarchy of Evidence. All of these methods have been used to determine the truth, but not everything can be put into a laboratory or made into an experiment. Sure, parts of certain things can be tested, but sometimes one has to rely on mathematics or extrapolation to come to a conclusion. Let’s face it; Einstein’s theory of Relativity could not be tested in a double blind study. Mathematically it was sound, given the premise, but to truly test it, its predictions had to be tested.
You see, Einstein described gravitation quite differently from his predecessors. He said that space is curved near a mass, and the greater the mass, the more pronounced the curvature of space around it. Thus his theory predicted that as the sun passes through the heavens (actually we’re moving, but “relatively” speaking, the sun appears to be moving), the light from stars passing near it should follow that curvature of space, making those stars appear slightly farther from the sun than they actually are. The only problem is that we can’t see the light from stars when the sun is out, with one exception: during a total eclipse. So in 1919, during a total eclipse, photographs showed that Einstein was correct; hence his theory “proved” to be sound.
I put quotation marks around the word “proved” for a reason. People use it quite loosely today, but as I pointed out at the top of this paper, scientists don’t. They know that nothing proves anything in today’s vernacular, because tomorrow we could learn something new that blows all that previous stuff away.
The word “prove” to a scientist means to test.
Testing is important, but not all things can be tested in a double-blind, randomized controlled study.
Neil deGrasse Tyson says, “The good thing about science is that it’s true whether or not you believe it.”
But not so fast there, Prof. Science isn’t always “true” or in fact, “correct.” Tyson’s field happens to be one that is the least influenced by money, but it’s also a field in which what we know and accept today isn’t written in stone, and can, with new evidence, be shown to be inaccurate tomorrow.
At one time it was thought by the scientific community that the planets’ orbits around the sun were circular. Today we know them to be elliptical.
And it wasn’t long ago that we discovered that the distances to stars, galaxies, and all those things were a bit off because of a mathematical error that had gone unnoticed for years. And most recently, October 2016, we learned that the universe has almost 10 times more galaxies than previously thought.
So, obviously, Tyson’s quotation has to be taken with a grain of Celtic sea salt.
Those of us who’ve actually studied epistemology, the study of how we know what we know, know how wrong we’ve been in the past and that the ultimate truth is always something we are working toward. Every discovery just brings us a little closer, and the truth is not the ultimate goal but a journey.
However, that journey is for the philosophers, not for real humans in real-life situations. In real life, we have to know that a physician has the skills to repair an aneurysm, that the rocket on the launch pad will actually make it into space without killing everyone involved, or that a traffic computer will control the street lights in such a fashion as to relieve some of the congestion and not screw up, creating four- and five-hour delays. We have to rely on the engineers who’ve taken the whole of collective knowledge in a group of cohesive subjects and refined it all down to a simple click, followed by the results expected and desired. A reality in flux is a difficult one to live within. We need stability.
So now that you know how we arrive at the “truth,” it’s time to show you how we arrive at the truth corporate America (with the help of the FDA and a lot of pseudo-scientific organizations) wants us to believe.
First off, destroy the competition. Our article, Health Care for Dummies, is one of the most important articles at this site. It took years of research to write, and it is all about how medicine in America became a monopoly.
The final straw, as they say, came in 1962, when pharmaceutical industry lobbyists pressured Congress to pass the Kefauver-Harris Drug Amendments, which basically raised the cost of “proving” a drug to billions of dollars.
Nobody is going to spend a billion dollars (actually today it’s much more) that can never be recouped, proving a plant that grows along the side of the road can cure something.
This is how you screw with the truth.
Sure, small studies can show that cannabis kills cancer cells, but the FDA requires a lot of studies that can cost upwards of 5 billion dollars, and ironically, the cannabis industry might actually be able to compete in the arena, but the FDA and pushback from the Pharm Lobby can make sure these studies are never completed.
There is a substance used in a local burn unit in the Twin Cities (Minneapolis/St. Paul) called Willard Water® (see article below). It’s amazing on burns, but the FDA refuses to test it. The inventor was told (off the record, by a representative of the FDA) that even if it passed testing, it would not be approved.
“Peer Review” is the first refuge of the skeptic.
Whenever someone makes a claim and there happens to be a skeptic around, the skeptic will often call for “peer reviewed” studies. I love that term because it’s absolutely meaningless. Who is the peer, and what is the review? The term has been bandied about so often that it is, for all intents and purposes, meaningless. But it certainly does give the skeptic the veneer of intellectual high ground.
In the Journal of the Royal Society of Medicine, Richard Smith points out the following:
So we have little evidence on the effectiveness of peer review, but we have considerable evidence on its defects. In addition to being poor at detecting gross defects and almost useless for detecting fraud it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.
The term “peer review” has a nice, sciencey ring to it, but it turns out that when “reviewed,” it’s an empty vessel with a perfect fit for skeptics and anal retentives.
I point out conflicts of interest at this site at a number of places. Here’s just one example we’ve posted:
An ABC news report from June 12, 2002 illustrated this problem when it revealed that drug studies funded by the pharmaceutical interests have a 90% chance of showing effectiveness, while studies funded by sources outside the industry have only a 50% chance of favorable results. [McKenzie J. “Conflict of interest? Medical Journal Changes Policy of Finding Independent Doctors to Write” (transcript). ABC News. June 12, 2002]
One reason I love to repeat this particular study is that you would never think a news program would report this sort of thing, because news programs are financed by advertising, and it’s during the news hour that pharmaceutical companies do most of their advertising. You don’t often see this kind of “biting the hand that feeds you.”
But it is specifically this conflict of interest that keeps us from ever getting to the truth about anything where there are huge profits (or losses).
95% of scientists agree that global climate change is not only real, it’s here. And they all agree that humankind has a hand in this.
The problem is, the money in the energy industry (oil, coal, gas) influences think tanks and opinions and many people buy into their conclusions that it’s all a conspiracy theory.
I’ve heard from an individual that he’ll believe global climate change is real when 100% of scientists agree.
The thing is, in the world of science, a 95% agreement is, in reality, a 100% agreement because there are always those who, because they’ve been paid off, will disagree with the rest.
Now there are parts of climate change that scientists disagree on. This is science. There are always parts of any agreement within which there will be dynamic debates. Those engaged in these debates do not disagree overall, but they do argue points. There is always a better way to gather data or a better way to extrapolate the data or subtle conclusions that some overlook. This is the nature of epistemology. We learn not by agreeing, but by a vibrant conversation between researchers.
For example, in evolution theory, there are those who disagree with the conclusion that the mechanics of evolution are 100% mechanical. Some, looking at the math, feel that “randomness” just is not the answer; that evolutionary changes happen way too often for the changes to be random.
Sure, both groups have labeled the opposing groups with fun names, like “whackos” or “nutty,” but they don’t call them “science deniers.”
Excuse the digression: Additionally, if you study the history of science, you’ll learn that science is very conservative. It does not accept new ideas, concepts, and solutions easily. That, however, has changed over the years and scientists are readier today to accept change.
It’s only in the realm of “vaccinations” that if you ever question anything about vaccines, you are called a science denier. Robert F Kennedy Jr vaccinates his family, but still, because he edited the book, Thimerosal: Let The Science Speak, he’s called an anti-vaxer; a science denier.
Do you remember from above that the only thing “higher” than the gold-standard RCT (Randomized Controlled Trial) was a meta-analysis of many RCTs with definitive results? Kennedy’s book is exactly that, and because of the almighty dollar, he’s been attacked endlessly. The moment he says he’s not an anti-vaxer, trolls say, “Yeah, yeah, all anti-vaxers claim they’re not anti-vaxers.”
Because of the huge influence of money, we will never get to the truth surrounding vaccinations. That truth is that, with better research, less influence of money, and objective trials, we can make them safer and more effective. But that is not the goal of pharmaceutical medicine despite a popular opinion that it is. The number one goal of pharmaceutical medicine, especially in the US, is profit. Medicine’s bottom line is the almighty dollar.
There’s a saying among those who work with addicts, especially at the treatment centers here in Minnesota (we seem to be the Mecca of Treatment Centers), that you can’t drop something you’re not holding.
If you don’t admit you have a problem, you can’t solve the problem, or another way of saying this, you can’t solve a problem you don’t have.
Even the Journal of American Physicians and Surgeons came under fire when they published this article: Combining Childhood Vaccines at One Visit Is Not Safe. And thus far, we’ve read reviews both praising and condemning this article. Until someone figures out who is right in this debate, all we can do is present the facts.
There are just some things we will never know because the moneyed don’t want us to know, they’ve got enough money to keep us stupid, and for them, there is no problem.
Few realize that the outcome of polls is determined by the questions asked.
Take this image from PBS.org:
The results of the first question tell us that at least 88% of scientists feel GMOs are safe to eat. The question did not mention glyphosate. Just GMOs. If the question had been, “Is corn that is drenched in glyphosate safe to eat?” you’d get a much lower number saying GMOs are ok to eat.
And the question on childhood vaccines just shows us the difference between the actual science behind vaccinations and what people hear on the web or from friends.
But wouldn’t it have been interesting to ask these “scientists” how they feel about the safety and the testing of vaccinations, the conflicts of interest in that testing, or even the heavy schedule of vaccines delivered all at once to a baby with a rapidly forming immune system?
Outcomes of polls are determined by the questions asked, and if you don’t know the questions asked, you really don’t know the meaning of the outcome.
So, going back to our title, Studies Show: it should be obvious by now that studies show exactly what the moneyed people behind them want them to show, and that with enough money, we can show that up is down, right is left, and the water in Flint, Michigan is potable.
Does this make us anti-science? No, it makes us aware that conflicts of interest determine outcome, and it makes us just a little more anti-bullshit.
Never be afraid to question.
This Study Shows I’m Lying to You!