Attitude - Baloney detection
 
From: EdGlaze, Jan-2 2:33 PM
To: All  (1 of 10) 
 2206.1 

Baloney Detection
How to draw boundaries between science and pseudoscience
by Michael Shermer

When lecturing on science and pseudoscience at colleges and universities, I am inevitably asked, after challenging common beliefs held by many students, "Why should we believe you?" My answer: "You shouldn't."

I then explain that we need to check things out for ourselves and, short of that, at least to ask basic questions that get to the heart of the validity of any claim. This is what I call baloney detection, in deference to Carl Sagan, who coined the phrase "Baloney Detection Kit." To detect baloney — that is, to help discriminate between science and pseudoscience — I suggest 10 questions to ask when encountering any claim.

1. How reliable is the source of the claim?
Pseudoscientists often appear quite reliable, but when examined closely, the facts and figures they cite are distorted, taken out of context or occasionally even fabricated. Of course, everyone makes some mistakes. And as historian of science Daniel Kevles showed so effectively in his book The Baltimore Affair, it can be hard to detect a fraudulent signal within the background noise of sloppiness that is a normal part of the scientific process. The question is, Do the data and interpretations show signs of intentional distortion? When an independent committee established to investigate potential fraud scrutinized a set of research notes in Nobel laureate David Baltimore's laboratory, it revealed a surprising number of mistakes. Baltimore was exonerated because his lab's mistakes were random and nondirectional.
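Baltimore's exoneration rested on a checkable statistical idea: honest sloppiness produces errors scattered in both directions, while fraud tends to push them one way. As a rough illustration (with invented error counts, not the committee's actual data), a two-sided sign test asks how likely a lopsided split would be if each error direction were equally probable:

```python
from math import comb

def sign_test_two_sided(favorable: int, total: int) -> float:
    """Probability of an error split at least this lopsided, under the
    null hypothesis that each error direction is equally likely."""
    k = max(favorable, total - favorable)
    tail = sum(comb(total, i) for i in range(k, total + 1)) / 2 ** total
    return min(1.0, 2 * tail)

# Hypothetical tally: 14 of 16 mistakes happen to favor the claimed result.
print(f"p = {sign_test_two_sided(14, 16):.4f}")  # ~0.004: looks directional
print(f"p = {sign_test_two_sided(9, 16):.4f}")   # ~0.80: ordinary sloppiness
```

A p-value near 1 is what random, nondirectional mistakes look like; a tiny one is a red flag that the errors lean toward the desired conclusion.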

2. Does this source often make similar claims?
Pseudoscientists have a habit of going well beyond the facts. Flood geologists (creationists who believe that Noah's flood can account for many of the earth's geologic formations) consistently make outrageous claims that bear no relation to geological science. Of course, some great thinkers do frequently go beyond the data in their creative speculations. Thomas Gold of Cornell University is notorious for his radical ideas, but he has been right often enough that other scientists listen to what he has to say. Gold proposes, for example, that oil is not a fossil fuel at all but the by-product of a deep, hot biosphere (microorganisms living at unexpected depths within the crust). Hardly any earth scientists with whom I have spoken think Gold is right, yet they do not consider him a crank. Watch out for a pattern of fringe thinking that consistently ignores or distorts data.

3. Have the claims been verified by another source?
Typically pseudoscientists make statements that are unverified or verified only by a source within their own belief circle. We must ask, Who is checking the claims, and even who is checking the checkers? The biggest problem with the cold fusion debacle, for instance, was not that Stanley Pons and Martin Fleischmann were wrong. It was that they announced their spectacular discovery at a press conference before other laboratories verified it. Worse, when cold fusion was not replicated, they continued to cling to their claim. Outside verification is crucial to good science.

4. How does the claim fit with what we know about how the world works?
An extraordinary claim must be placed into a larger context to see how it fits. When people claim that the Egyptian pyramids and the Sphinx were built more than 10,000 years ago by an unknown, advanced race, they are not presenting any context for that earlier civilization. Where are the rest of the artifacts of those people? Where are their works of art, their weapons, their clothing, their tools, their trash? Archaeology simply does not operate this way.

5. Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?
This is the confirmation bias, or the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive and almost impossible for any of us to avoid. It is why the methods of science that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are so critical.

 

 
From: EdGlaze, Jan-2 2:36 PM
To: All  (2 of 10) 
 2206.2 in reply to 2206.1 

When exploring the borderlands of science, we often face a "boundary problem" of where to draw the line between science and pseudoscience. The boundary is the line of demarcation between geographies of knowledge, the border defining countries of claims. Knowledge sets are fuzzier entities than countries, however, and their edges are blurry. It is not always clear where to draw the line. Continuing with the baloney-detection questions, we see that in the process we are also helping to solve the boundary problem of where to place a claim.

6. Does the preponderance of evidence point to the claimant's conclusion or to a different one?

The theory of evolution, for example, is proved through a convergence of evidence from a number of independent lines of inquiry. No one fossil, no one piece of biological or paleontological evidence has "evolution" written on it; instead tens of thousands of evidentiary bits add up to a story of the evolution of life. Creationists conveniently ignore this confluence, focusing instead on trivial anomalies or currently unexplained phenomena in the history of life.

7. Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?

A clear distinction can be made between SETI (Search for Extraterrestrial Intelligence) scientists and UFOlogists. SETI scientists begin with the null hypothesis that ETIs do not exist and require concrete evidence before making the extraordinary claim that we are not alone in the universe. UFOlogists begin with the positive hypothesis that ETIs exist and have visited us, then employ questionable research techniques to support that belief, such as hypnotic regression (revelations of abduction experiences), anecdotal reasoning (countless stories of UFO sightings), conspiratorial thinking (governmental cover-ups of alien encounters), low-quality visual evidence (blurry photographs and grainy videos), and anomalistic thinking (atmospheric anomalies and visual misperceptions by eyewitnesses).

8. Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation?

This is a classic debate strategy — criticize your opponent and never affirm what you believe to avoid criticism. It is next to impossible to get creationists to offer an explanation for life (other than "God did it"). Intelligent Design (ID) creationists have done no better, picking away at weaknesses in scientific explanations for difficult problems and offering in their stead "ID did it." This stratagem is unacceptable in science.

9. If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did?

Many HIV/AIDS skeptics argue that lifestyle, not HIV, causes AIDS. Yet their alternative theory does not explain nearly as much of the data as the HIV theory does. To make their argument, they must ignore the convergence of evidence supporting HIV as the causal vector in AIDS, including the significant rise in AIDS among hemophiliacs shortly after HIV was inadvertently introduced into the blood supply.

10. Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?

All scientists hold social, political and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice? Usually the peer-review process roots out such biases and beliefs, or the paper or book is rejected.

Clearly, there are no foolproof methods of detecting baloney or drawing the boundary between science and pseudoscience. Yet there is a solution: science deals in fuzzy fractions of certainties and uncertainties, where evolution and big bang cosmology may be assigned a 0.9 probability of being true, and creationism and UFOs a 0.1 probability of being true. In between are borderland claims: we might assign superstring theory a 0.7 and cryonics a 0.2. In all cases, we remain open-minded and flexible, willing to reconsider our assessments as new evidence arises. This is, undeniably, what makes science so fleeting and frustrating to many people; it is, at the same time, what makes science the most glorious product of the human mind.

_________

YouTube - The Baloney Detection Kit - Michael Shermer

 

 
From: EdGlaze, Jan-2 2:40 PM
To: All  (3 of 10) 
 2206.3 in reply to 2206.2 

How Thinking Goes Wrong:
Twenty-five Fallacies That Lead Us to Believe Weird Things

by Michael Shermer

Excerpt from his 1997 book "Why People Believe Weird Things". Categorized as Problems of Scientific Thinking, Problems of Pseudoscientific Thinking, Logical Problems in Thinking and Psychological Problems in Thinking. Shermer endorses "Spinoza's Dictum": "I have made a ceaseless effort not to ridicule, not to bewail, not to scorn human actions, but to understand them." [6 Apr 04]

[For a discussion of each item go to the link above]

Problems in Scientific Thinking

1. Theory Influences Observations

2. The Observer Changes the Observed

3. Equipment Constructs Results

Problems in Pseudoscientific Thinking

4. Anecdotes Do Not Make a Science

5. Scientific Language Does Not Make a Science

6. Bold Statements Do Not Make Claims True

7. Heresy Does Not Equal Correctness

8. Burden of Proof

9. Rumors Do Not Equal Reality

10. Unexplained Is Not Inexplicable

11. Failures Are Rationalized

12. After-the-Fact Reasoning

13. Coincidence

14. Representativeness

Logical Problems in Thinking

15. Emotive Words and False Analogies

16. Ad Ignorantiam (appeal to ignorance)

17. Ad Hominem and Tu Quoque (Literally "to the man" and "you also")

18. Hasty Generalization

19. Overreliance on Authorities

20. Either-Or

21. Circular Reasoning

22. Reductio ad Absurdum and the Slippery Slope

Psychological Problems in Thinking

23. Effort Inadequacies and the Need for Certainty, Control, and Simplicity

24. Problem-Solving Inadequacies

25. Ideological Immunity, or the Planck Problem

 

 
From: EdGlaze, Jan-2 2:52 PM
To: All  (4 of 10) 
 2206.4 in reply to 2206.3 

CARL SAGAN'S BALONEY DETECTION KIT


From the website of The Planetary Society's Australian Volunteer Co-ordinators.

Based on the book The Demon-Haunted World by Carl Sagan


The following are suggested as tools for testing arguments and detecting fallacious or fraudulent arguments:

 

  • Wherever possible there must be independent confirmation of the facts.
  • Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
  • Arguments from authority carry little weight (in science there are no "authorities").
  • Spin more than one hypothesis - don't simply run with the first idea that caught your fancy.
  • Try not to get overly attached to a hypothesis just because it's yours.
  • Quantify, wherever possible.
  • If there is a chain of argument, every link in the chain must work.
  • "Occam's razor" - if there are two hypothesis that explain the data equally well choose the simpler.
  • Ask whether the hypothesis can, at least in principle, be falsified (shown to be false by some unambiguous test). In other words, is it testable? Can others duplicate the experiment and get the same result?

Additional issues are common fallacies of logic and rhetoric:

  • Conduct control experiments - especially "double blind" experiments, where the person taking the measurements does not know which are the test subjects and which are the controls.
  • Check for confounding factors - separate the variables.
  • Ad hominem - attacking the arguer and not the argument.
  • Argument from "authority".
  • Argument from adverse consequences (putting pressure on the decision maker by pointing out dire consequences of an "unfavourable" decision).
  • Appeal to ignorance (absence of evidence is not evidence of absence).
  • Special pleading (typically referring to god's will).
  • Begging the question (assuming an answer in the way the question is phrased).
  • Observational selection (counting the hits and forgetting the misses).
  • Statistics of small numbers (such as drawing conclusions from inadequate sample sizes; see the sketch after this list).
  • Misunderstanding the nature of statistics (President Eisenhower expressing astonishment and alarm on discovering that fully half of all Americans have below average intelligence!)
  • Inconsistency (e.g. military expenditures based on worst case scenarios but scientific projections on environmental dangers thriftily ignored because they are not "proved").
  • Non sequitur - "it does not follow" - the logic falls down.
  • Post hoc, ergo propter hoc - "it happened after so it was caused by" - confusion of cause and effect.
  • Meaningless question ("what happens when an irresistible force meets an immovable object?").
  • Excluded middle - considering only the two extremes in a range of possibilities (making the "other side" look worse than it really is).
  • Short-term v. long-term - a subset of excluded middle ("why pursue fundamental science when we have so huge a budget deficit?").
  • Slippery slope - a subset of excluded middle - unwarranted extrapolation of the effects (give an inch and they will take a mile).
  • Confusion of correlation and causation.
  • Straw man - caricaturing (or stereotyping) a position to make it easier to attack.
  • Suppressed evidence or half-truths.
  • Weasel words - for example, use of euphemisms for war such as "police action" to get around limitations on Presidential powers. "An important art of politicians is to find new names for institutions which under old names have become odious to the public."
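The "statistics of small numbers" item above rewards a quick simulation. This sketch (hypothetical coin-flip "studies," standard-library Python only) counts how often a perfectly fair coin looks convincingly biased at various sample sizes:

```python
import random

random.seed(42)

def fluke_rate(sample_size: int, trials: int = 10_000) -> float:
    """Fraction of simulated studies in which a fair coin shows
    70% or more heads purely by chance."""
    flukes = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads / sample_size >= 0.7:
            flukes += 1
    return flukes / trials

for n in (10, 30, 100):
    print(f"n={n:3d}: {fluke_rate(n):6.2%} of fair coins look biased")
```

At ten flips, roughly one fair coin in six comes up heads 70 percent of the time or more; at a hundred flips such flukes essentially vanish. Small samples manufacture impressive-looking effects out of nothing.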
 

 
 
From: EdGlaze, Jan-2 3:20 PM
To: All  (6 of 10) 
 2206.6 in reply to 2206.5 
 
Seven Warning Signs of Bogus Science
by Robert L. Park, Ph.D.

The National Aeronautics and Space Administration is investing close to a million dollars in an obscure Russian scientist's antigravity machine, although it has failed every test and would violate the most fundamental laws of nature. The Patent and Trademark Office recently issued Patent 6,362,718 for a physically impossible motionless electromagnetic generator, which is supposed to snatch free energy from a vacuum. And major power companies have sunk tens of millions of dollars into a scheme to produce energy by putting hydrogen atoms into a state below their ground state, a feat equivalent to mounting an expedition to explore the region south of the South Pole.

There is, alas, no scientific claim so preposterous that a scientist cannot be found to vouch for it. And many such claims end up in a court of law after they have cost some gullible person or corporation a lot of money. How are juries to evaluate them?

Before 1993, court cases that hinged on the validity of scientific claims were usually decided simply by which expert witness the jury found more credible. Expert testimony often consisted of tortured theoretical speculation with little or no supporting evidence. Jurors were bamboozled by technical gibberish they could not hope to follow, delivered by experts whose credentials they could not evaluate.

In 1993, however, with the Supreme Court's landmark decision in Daubert v. Merrell Dow Pharmaceuticals, Inc., the situation began to change. The case involved Bendectin, the only morning-sickness medication ever approved by the Food and Drug Administration. It had been used by millions of women, and more than 30 published studies had found no evidence that it caused birth defects. Yet eight so-called experts were willing to testify, in exchange for a fee from the Daubert family, that Bendectin might indeed cause birth defects.

In ruling that such testimony was not credible because of lack of supporting evidence, the court instructed federal judges to serve as "gatekeepers," screening juries from testimony based on scientific nonsense. Recognizing that judges are not scientists, the court invited judges to experiment with ways to fulfill their gatekeeper responsibility.

Justice Stephen G. Breyer encouraged trial judges to appoint independent experts to help them. He noted that courts can turn to scientific organizations, like the National Academy of Sciences and the American Association for the Advancement of Science, to identify neutral experts who could preview questionable scientific testimony and advise a judge on whether a jury should be exposed to it. Judges are still concerned about meeting their responsibilities under the Daubert decision, and a group of them asked me how to recognize questionable scientific claims. What are the warning signs?

I have identified seven indicators that a scientific claim lies well outside the bounds of rational scientific discourse. Of course, they are only warning signs — even a claim with several of the signs could be legitimate.

1. The discoverer pitches the claim directly to the media.

The integrity of science rests on the willingness of scientists to expose new ideas and findings to the scrutiny of other scientists. Thus, scientists expect their colleagues to reveal new findings to them initially. An attempt to bypass peer review by taking a new result directly to the media, and thence to the public, suggests that the work is unlikely to stand up to close examination by other scientists.

One notorious example is the claim made in 1989 by two chemists from the University of Utah, B. Stanley Pons and Martin Fleischmann, that they had discovered cold fusion — a way to produce nuclear fusion without expensive equipment. Scientists did not learn of the claim until they read reports of a news conference. Moreover, the announcement dealt largely with the economic potential of the discovery and was devoid of the sort of details that might have enabled other scientists to judge the strength of the claim or to repeat the experiment. (Ian Wilmut's announcement that he had successfully cloned a sheep was just as public as Pons and Fleischmann's claim, but in the case of cloning, abundant scientific details allowed scientists to judge the work's validity.)

Some scientific claims avoid even the scrutiny of reporters by appearing in paid commercial advertisements. A health-food company marketed a dietary supplement called Vitamin O in full-page newspaper ads. Vitamin O turned out to be ordinary saltwater.

2. The discoverer says that a powerful establishment is trying to suppress his or her work.

The idea is that the establishment will presumably stop at nothing to suppress discoveries that might shift the balance of wealth and power in society. Often, the discoverer describes mainstream science as part of a larger conspiracy that includes industry and government. Claims that the oil companies are frustrating the invention of an automobile that runs on water, for instance, are a sure sign that the idea of such a car is baloney. In the case of cold fusion, Pons and Fleischmann blamed their cold reception on physicists who were protecting their own research in hot fusion.

 

 
From: EdGlaze, Jan-2 3:20 PM
To: All  (7 of 10) 
 2206.7 in reply to 2206.6 

3. The scientific effect involved is always at the very limit of detection.

Alas, there is never a clear photograph of a flying saucer, or the Loch Ness monster. All scientific measurements must contend with some level of background noise or statistical fluctuation. But if the signal-to-noise ratio cannot be improved, even in principle, the effect is probably not real and the work is not science.

Thousands of published papers in parapsychology, for example, claim to report verified instances of telepathy, psychokinesis, or precognition. But those effects show up only in tortured analyses of statistics. The researchers can find no way to boost the signal, which suggests that it isn't really there.
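Park's point has a quantitative core: averaging N independent measurements shrinks random noise roughly as 1/sqrt(N), so a real effect, however weak, sharpens as data accumulate, while a nonexistent one keeps wandering around zero. A toy illustration with invented numbers:

```python
import random
import statistics

random.seed(0)

def measured_mean(true_effect: float, n: int) -> float:
    """Average of n noisy measurements (noise standard deviation 1.0)."""
    return statistics.fmean(random.gauss(true_effect, 1.0) for _ in range(n))

# A weak but real effect (0.1) emerges as n grows; a null effect does not.
for n in (10, 1_000, 100_000):
    print(f"n={n:>6}: real -> {measured_mean(0.1, n):+.3f}   "
          f"null -> {measured_mean(0.0, n):+.3f}")
```

A claimed effect that never sharpens no matter how much data is collected is behaving like the null column, which is exactly the warning sign described here.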

4. Evidence for a discovery is anecdotal.

If modern science has learned anything in the past century, it is to distrust anecdotal evidence. Because anecdotes have a very strong emotional impact, they serve to keep superstitious beliefs alive in an age of science. The most important discovery of modern medicine is not vaccines or antibiotics, it is the randomized double-blind test, by means of which we know what works and what doesn't. Contrary to the saying, "data" is not the plural of "anecdote."

5. The discoverer says a belief is credible because it has endured for centuries.

There is a persistent myth that hundreds or even thousands of years ago, long before anyone knew that blood circulates throughout the body, or that germs cause disease, our ancestors possessed miraculous remedies that modern science cannot understand. Much of what is termed "alternative medicine" is part of that myth.

Ancient folk wisdom, rediscovered or repackaged, is unlikely to match the output of modern scientific laboratories.

6. The discoverer has worked in isolation.

The image of a lone genius who struggles in secrecy in an attic laboratory and ends up making a revolutionary breakthrough is a staple of Hollywood's science-fiction films, but it is hard to find examples in real life. Scientific breakthroughs nowadays are almost always syntheses of the work of many scientists.

7. The discoverer must propose new laws of nature to explain an observation.

A new law of nature, invoked to explain some extraordinary result, must not conflict with what is already known. If we must change existing laws of nature or propose new laws to account for an observation, the observation is almost certainly wrong.

I began this list of warning signs to help federal judges detect scientific nonsense. But as I finished the list, I realized that in our increasingly technological society, spotting voodoo science is a skill that every citizen should develop.

_______________

Dr. Park is a professor of physics at the University of Maryland at College Park and director of public information for the American Physical Society. He is also the author of Voodoo Science: The Road From Foolishness to Fraud (Oxford University Press, 2002). This article was originally published in The Chronicle of Higher Education, Jan 31, 2003.

Quackwatch Home Page

_______________

From: Scientific Method 101

 

 
From: EdGlaze, Jan-7 5:16 AM
To: All  (8 of 10) 
 2206.8 in reply to 2206.7 

From What do you believe without proof?:

_____________

15 ways to tell if that science news story is hogwash
by Susannah Locke
Apr 22, 2014

Just because a study has been published in a scientific journal doesn't mean that it's perfect — there are plenty of flawed studies out there. But how can we spot them?

This excellent chart, "A Rough Guide to Spotting Bad Science," isn't meant to be an exhaustive list, and not all of these flaws are necessarily fatal. But it's a great guide to what to look for when reading science news and scientific studies. Here's a more detailed breakdown:

1) Sensationalized headlines:
Behind sensationalized headlines are often sensationalized stories. Be wary.

2) Misinterpreted results:
Sometimes the study is fine, but the press has completely messed it up. Try to stick to news sources that are particularly trustworthy.

3) Conflicts of interest:
Who funded the research in question? If you see a study claiming that drinking grape juice helps your memory and it's funded by the grape industry, then think about that a bit. (That happens all the time. Lots of studies on random foods being good for you, funded by random food councils.)

Be careful: some journals require researchers to reveal conflicts of interest and funding sources, but many do not. And not all conflicts of interest involve funding. For example, be a bit suspicious of someone testing a medical device who consults for free for a company that sells medical devices.

4) Correlation and causation:
Just because two things are correlated doesn't mean that one caused the other. If you want to really find out whether something causes something else, you have to set up a controlled experiment. (Compound Interest's infographic brings up the fabulous example of the correlation between fewer pirates over time and increasing global temperature. It's almost certain that fewer pirates did not cause global temperatures to rise [or vice versa], but the two are still correlated.)
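The pirates example is easy to reproduce. The sketch below (entirely fabricated numbers) builds two series that share nothing except a trend over time, then shows that their Pearson correlation is nonetheless close to -1:

```python
import random

random.seed(1)

years = range(2000, 2020)
# Two unrelated quantities that both happen to trend over time.
pirates = [5_000 - 200 * (y - 2000) + random.gauss(0, 150) for y in years]
temps = [14.0 + 0.02 * (y - 2000) + random.gauss(0, 0.05) for y in years]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"r = {pearson_r(pirates, temps):+.2f}")  # strongly negative, zero causation
```

A correlation this strong between causally unrelated series is exactly why "r is big" can never substitute for a controlled experiment.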

5) Speculative language:
You can say anything with the word "could" and it could be true. Jelly beans could be the reason that the average global temperature is increasing. Unicorns might cause cancer. And pygmy marmosets may be living in the middle of black holes.

6) Small sample sizes:
Did the researchers study a large enough group to know that the results aren't just a fluke? That is, did they treat cancer in two people or in 200? Or 2,000? Was that brain scanning psych study on just seven people?

7) Unrepresentative samples:
If a researcher wants to make claims about how all people think, but she only studies the college students who show up to her university lab, well, then she can only really draw conclusions about how those college students think. One cultural group can't tell you about all of humanity. This is just one example, but it's a pervasive issue.

8) No control group used:
Why would anyone even waste their time doing a study like this?

9) No blind testing used:
The placebo (and nocebo) effects are strong. (Check out this awesome, three-minute video on the crazy effects of placebos.)

In medical and psychology studies, participants should not be aware of whether they're in the experimental group or the comparison group (often called a "control"). Otherwise, their expectations can muddle the outcomes. And, if at all possible, the researchers who interact with the participants should also be unaware of who is in the control group. Studies should be performed under these double-blind conditions unless there is some really good reason that it cannot be done that way.
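Mechanically, double-blinding is disciplined bookkeeping: randomize the assignments, then make sure everyone who collects data sees only opaque codes. A minimal sketch of that bookkeeping (hypothetical participant IDs, standard library only):

```python
import random

random.seed(3)

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs

# Balanced random allocation to the two arms.
arms = ["treatment"] * 10 + ["placebo"] * 10
random.shuffle(arms)

# Data collectors see only the opaque code, never the arm itself.
code_for = {"treatment": "A", "placebo": "B"}
assignment = {p: code_for[arm] for p, arm in zip(participants, arms)}

# The unblinding key is held by a third party until the data are locked.
unblinding_key = {"A": "treatment", "B": "placebo"}

print(assignment["P01"])  # "A" or "B": the measurer cannot tell which arm it is
```

Only after all measurements are recorded does the key come out, so neither participants' nor researchers' expectations can leak into the numbers.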

10) "Cherry-picked" results:
Ignoring the results that don't support your hypothesis will not make them go away. It's possible that the worst cherry-picking happens before a study is published. There's all kinds of hidden data that the scientific community and the public will never see.

11) Unreplicable results:
If one lab discovers something once, it's sort of interesting. However, that lab could have some random result or — rarely, but possibly — be filled with liars. If someone else can replicate it, then it becomes far more real.

12) Journals and citations:
That something was published in a fancy scientific journal or has been cited many times by others doesn't mean that it's perfect research, or even good research.

13) Check for peer review:
Just because you saw it in a news story doesn't mean that it's been looked over by an independent group of scientists. Maybe the results were presented at a conference that doesn't review presentations. Maybe it went straight from the operating table to the press, like recent uterus transplants.

14) Results not statistically significant:
Generally, researchers want to see a statistical analysis showing a less than 5 percent probability that results as extreme as theirs could have arisen from chance alone (p < 0.05). Some fields are even more strict than that. This is so there's a reasonable degree of certainty that you're looking at a real result, not just a stroke of good luck.
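That threshold cuts both ways: by construction, about one null experiment in twenty will clear p < 0.05 by luck alone, which is why cherry-picking (point 10) and failed replication (point 11) matter so much. A rough simulation with fair-coin "experiments":

```python
import random

random.seed(7)

def looks_significant(n: int = 100) -> bool:
    """One null experiment: a fair coin 'looks biased' when its head
    count strays 10 or more from 50, roughly p < 0.05 for n = 100."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return abs(heads - 50) >= 10

runs = 10_000
rate = sum(looks_significant() for _ in range(runs)) / runs
print(f"{rate:.1%} of null experiments cross the threshold")  # roughly 5%
```

Run twenty independent null experiments and the odds are good that at least one "succeeds," so a single significant result means little until someone replicates it.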

15) Confounding variables:
Might something else be causing the effect that you see? Did the statistical analysis take that into account?

 

 