Saturday, 10 February 2018

The 'JAMES H RANDI' Framework: Assessing Science Reporting.

It's vital to acknowledge that most people do not get their information about the sciences from peer-reviewed papers; instead, they rely on the media to disseminate it. There's nothing intrinsically wrong with this, but most journalists are not qualified in the sciences, and this includes actual science correspondents and content writers for science periodicals like New Scientist and Scientific American. Headlines are often vastly exaggerated or outright false. The articles themselves may not reflect the findings of the paper or study they concern because the journalist failed to understand what they were reading, took the information from a secondary source that was incorrect, or blatantly misrepresented the findings to fit some other narrative or belief. Of course, there's another possible reason for an article being wrong: it may accurately represent the study it reports, but that study may itself be deeply flawed. So it's vital to review how to assess a news article that relates scientific findings.

In what follows I'm going to draw heavily on the work of Kevin McConway and David Spiegelhalter (1), two statisticians who, after getting tired of hearing bogus medical claims on the morning radio, developed a framework to assess the reporting of medical studies in the press. At points, I'm going to generalise so that the framework applies beyond the medical sciences. My adapted framework contains eleven questions divided into two categories: study quality and the standard of reporting.

Scoring your article

All the questions in the framework can be answered 'yes' or 'no', but you'll notice that they are sometimes worded in a quite ungainly fashion. This is because one point is awarded for a 'yes' and zero points for a 'no'. Thus, the higher an article's score, the less trustworthy it is. An article with a score of seven or above should be considered deeply flawed; an article with 10 or more, utter bunkum.
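For the programmatically minded, the scoring scheme can be sketched in a few lines of Python. The one-point-per-'yes' rule and the thresholds of seven and ten come from the framework above; the function name and the example answers are my own illustration:

```python
def score_article(answers):
    """Score an article against the framework.

    answers: dict mapping each question to True ('yes') or False ('no').
    One point per 'yes'; a higher score means a less trustworthy article.
    """
    score = sum(answers.values())
    if score >= 10:
        verdict = "utter bunkum"
    elif score >= 7:
        verdict = "deeply flawed"
    else:
        verdict = "passable"
    return score, verdict

# Hypothetical answers for an imaginary article.
example = {
    "Just observational?": True,
    "Another single study?": True,
    "Might there be another explanation?": False,
    # ...the remaining eight questions, answered 'no' here.
}
print(score_article(example))  # (2, 'passable')
```

Nothing about the arithmetic is deep, of course; the value of the framework is in answering the eleven questions honestly in the first place.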

Quality of study.

1. Just Observational? 

Have the researchers made any attempt to control for other variables, or have they simply observed a process without interference? Whilst a lack of experimenter tampering may sound like a good thing, failing to apply proper controls makes it extremely difficult to link a cause to an effect. Imagine testing a medical intervention but failing to control for other treatments. How can we tell which intervention caused an observed improvement?

2. Another Single study? 

What journalists often fail to realise is that scientific consensus cannot be built upon the outcome of one study. We should establish if the study in question has been successfully replicated, or if the results found reflect those found in other, similar investigations into the same phenomena. 

3. Might there be another explanation for the observed effect?

Is there a confounder that might explain the observed results? The experimental controls should allow the researchers to eliminate plausible alternative explanations for an observed effect. Imagine experimenters are testing a new cold remedy. They select two groups, men and women. They give the women the new drug, but not the men. They find that the women in the group administered the medicine tend to recover more quickly than the men and report less extreme symptoms. They conclude the remedy is successful, but they have failed to control for gender. The experiment is confounded.

We should also consider any systematic bias: have the researchers introduced an element into the study that will skew the results in favour of one particular outcome? A striking example of this is a recent survey issued by the Trump administration comparing voters' opinions of the first year of Trump's first term with the first year of Obama's first term (below). You'll immediately notice that the first question has an option missing. Subjects are unable to rate Trump poorly, whereas that option is available in the second question, which asks subjects to rate Obama's first year (2). This quite laughable omission means that a side-by-side analysis is unsuitable.

4. Extrapolating Small sample sizes?

We should be extremely wary of studies with small sample sizes, especially those with subjects numbering in the tens rather than the hundreds or even thousands. There are mathematical ways to calculate appropriate sample sizes, but often it's easy enough to judge intuitively: you can't draw conclusions about millions of people based on a study of tens. For example, consider Andrew Wakefield's retracted Lancet study, which attempted to establish a link between the MMR vaccine and autism. Wakefield's study group contained only twelve children, clearly not enough to draw conclusions about the millions who had received the MMR vaccine. We need to be even more concerned when the conclusions of a series of tests are extrapolated to a much larger population.

In relation to sample sizes, it's also important to be wary of large studies of rare events. For example, a study of a rare illness may involve following millions of people, yet only an extremely small number of that sample will develop the illness in question, so the effective sample is far smaller than the headline figure suggests.
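A rough way to see why sample size matters, not part of the original framework: the uncertainty in an observed proportion shrinks only with the square root of the sample size. A quick Python sketch (the 40% figure is invented, and the normal approximation used here is itself dubious at very small n):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed
    in a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Invented example: 40% of subjects show some effect.
for n in (12, 100, 1000):
    print(f"n={n}: 40% +/- {margin_of_error(0.4, n):.1%}")
```

With a dozen subjects the margin of error (roughly plus or minus 28 percentage points here) swamps almost any plausible effect; only in the hundreds or thousands does it shrink to something tolerable.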

5. Samples not varied enough? 

Related to the previous point, it's not suitable to draw conclusions about a large population based on a sample that isn't varied enough. A good example of this is the study I looked at with the Spooktator crew early last year, which proposed to show that individuals with strong religious or supernatural beliefs have poor cognitive abilities. The problem was that not only were the sample sizes extremely small, but the vast majority of those studied were female and aged under 25. It's not possible to draw conclusions about millions of believers of all ages and both sexes from such a small, unvaried sample.

Standard of reporting.

6. Half (or less) of the story?

Are the reporters telling you everything? If they are reporting on the harmful effects of a medicine are they pointing out the benefits as well, or vice versa? Are they highlighting a small part of the research and ignoring the bigger picture? Researchers will normally point out flaws with their studies and suggest avenues for further research. Are these elements being covered in the report?

7. Representing risk in a misleading way? 

Watch out for the phrase "higher risk" in a report. If you are told that exposure to a substance doubles your risk of a certain ailment or illness, that sounds quite bad. But what if your risk was incredibly low to begin with? Unfortunately, "X doubles the risk of Y" makes a fantastic attention-grabbing headline. The same applies to a stated effect: if some variable makes a positive outcome more likely, we need to know how likely that outcome was in the first place to judge whether the change matters. Look for absolute numbers; their presence is a sign of good science reporting.
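As a concrete toy calculation (the baseline figure is invented for illustration): suppose the baseline risk of an ailment is 1 in 10,000, and a headline screams that some exposure "doubles" it.

```python
baseline_risk = 1 / 10_000        # invented: a 0.01% baseline chance
relative_risk = 2.0               # the headline: "risk doubled!"

exposed_risk = baseline_risk * relative_risk
absolute_increase = exposed_risk - baseline_risk

print("Relative: the headline's scary '100% increase'")
print(f"Absolute: {baseline_risk:.2%} -> {exposed_risk:.2%}, "
      f"about {absolute_increase * 10_000:.0f} extra case per 10,000 exposed")
```

The relative figure is arresting; the absolute figure, one extra case per 10,000 people exposed, is what actually matters to the reader.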

8. An Exaggerated headline? 

Headlines can be difficult to construct. This sometimes means important details are omitted; worse still, they can be abandoned in favour of hyperbole. Does the headline actually reflect what is said in the report, or is it misrepresentative or manipulated?

A great example of this is the coverage of the International Agency for Research on Cancer's (IARC) decision to list the radiation from mobile phones as "possibly carcinogenic to humans", or, in specific terms, to classify it as a group 2B carcinogen (3). The 2B category is used when there is no specific evidence of a substance or material posing an actual risk, but correlations have been made in the past. Other 2B carcinogens include fuels, laundry detergents and aloe vera.

The Daily Express clearly weren't interested in these details when they reported the IARC's report with the headline: "Shock Warning: Mobile phones can give you cancer" (4).

This headline completely strips the IARC report of its subtlety in favour of hyperbole and blind panic.

9. No Independent Comment?

When considering a scientific study it's vital to remember our first point: single studies do not make a scientific consensus. That means we should look for independent comment from someone in the field who was not involved with the research in question, to put the study in context. If an article omits this, it's likely based on promotional material issued by the institution that produced the research, an institution with a vested interest that may well not be as even-handed as one could hope. This doesn't mean the comments have to be negative, but they should be present.

This leads us to...

10. Does the report rely on public relations puff pieces, or are there considerable personal interests involved?

Are there elements of the report that imply the study is just PR? Who sponsored the research? What was the ultimate aim of the study? Does it fit into a wider scientific context? The answers to these questions are likely to tell you whether you should take the report with a pinch of salt or a shovel. This isn't to say that research that has been paid for by a company or corporation should be immediately disregarded, but it should be viewed with some skepticism. Likewise, research conducted by individuals with considerable personal interests in the research should be considered with suspicion.

For example, Martin Pall's research (5) on the dangers of electromagnetism should be weighed against the fact that he sells a range of supplements that he claims strengthen the very biological systems his research says EMF 'attacks' (6). Is this conflict of interest mentioned in the report, or even in the original paper?


11. Is the original research unavailable?

A great deal of the time, you'll find that the article you're reading doesn't even link to the original study. This means you're going to have to do the legwork yourself. The original study should be searchable by title; if that gets you no results, try searching by selected keywords. You're far more likely to find success using an academic search engine such as Google Scholar (7). When you find the paper, it may well be hidden behind a paywall. Don't despair: even if this is the case, the abstract will be available for free. More likely than not, you'll find this alone is sufficient to find errors in sloppy articles, especially if the author didn't bother to read even the abstract, as is often the case!


If you're worried that all this may be difficult to remember, fear not: I've formed it into a handy mnemonic, JAMES H RANDI, after my skeptical hero. You could always rework the framework to spell out the name of your own hero of science or skepticism. I've also turned the questions into a rudimentary scorecard, which you can see below and download by following the link in the sources (8). Hopefully, it should make assessing science articles much easier.


Tuesday, 6 February 2018

A Quick Look At Another of David Rountree's Academic Claims.

When this title says quick, it's no exaggeration.

By this point, David Rountree has been so fundamentally debunked that there's very little else to say about the man. His reputation is in tatters, his claims of academic success have been shredded so many times they should be served in pancakes with Hoisin sauce.

But this turkey keeps on cluckin'.

A friend led me to a recent conversation Rountree had with his followers, and amidst the anger, denial and threats of retribution, I found the final nugget that David seems to be offering as evidence of his academic success. Rountree is repeatedly reposting links to his '' website. That's where Rountree initially uploaded his laughable "Wormhole theory of the Paranormal" paper.

Now, to Rountree's dwindling band of followers, who bizarrely continue to massage his ego, the fact Rountree has an 'academic profile' may seem impressive. It may even imply he has some form of credibility.

To disprove that, here's my Academia.Edu profile. I don't have any academic qualifications, having not yet finished my degree. That wasn't a problem in setting up the site. Nor was the fact that I didn't even use my real name!

They even send me e-mails to tell me I've been cited despite having never published there!

The only reason my profile isn't visible to the public is I won't pay the money that Academia.Edu require for me to finalise the account!

There is legitimately nothing to stop you setting an Academia.Edu account up for your hamster. Meaningless to anyone who hasn't taken a big old swig of Rountree's Kool-Aid.

If you want more meat on Rountree you could read this which collects the hard work of a group of folks who were determined not to let him get away with this kind of bullshit.


I said "Plugs. Plugs. Plugs." Dammit!

As this has been a quick post, I'm going to throw in two extremely quick plugs. Firstly, in addition to writing here, I have also been writing for an up-and-coming news website called Scisco media, as I've mentioned before. My writer's page can be found here:

My writer's page.

And without blowing my own trumpet, I think the last post I wrote for Scisco is possibly the best thing I've ever written.

Gravitational Waves: A new way to ‘see’ the universe

Also a plug for my new t-shirt shop. Yes, I have shilled out and am now selling what my children term 'merch' through the website Spreadshirt.  At the shop, you'll find some sciencey, some skeptical t-shirts, badges, bags and all sorts of merchandise. Purchasing from the shop is a nice way to support what I do here and hopefully moves me closer to devoting more time to writing.

There are about six designs at the moment and they are all available in a range of styles.

The Null Hypothesis@spreadshirt. 

Friday, 2 February 2018

Money For Huffin'. Steve Huff's Patreon Exposes Just How Sleazy He Is.

It's late, and you're preparing for bed when there's a familiar ping from your phone. A little red bubble appears over your messenger app. A friend who keeps tabs on the worst scumbags in the paranormal field has something to show you. You open the message with the feeling you get when you pull open the door of the seediest bar in your town. That's what happened to me tonight when my friend directed me to Steve Huff's Patreon (0). Just when you think that a human being could not possibly sink any lower than selling broken radios to the gullible at massively inflated prices, and just when you believe that this same person couldn't do anything more tasteless than 'contacting the spirits' of recently deceased celebrities...

He surprises you on both counts.

Let's get something out of the way first. I have no objection to people opening a Patreon. Heck, when I mulled over the move from blogging to making YouTube videos and podcasts, I considered doing the same. The time demands of making videos meant I might well have had to cut down my working hours, and I thought Patreon would help me do this without impacting my family. I never made that move, mainly because I enjoy writing and the time constraints of recording audio and editing videos were just too limiting. But opening a Patreon isn't something I rule out, nor is it something I frown upon others for doing.

So why do I have a problem with Huff doing this?

Essentially, it comes down to the fact that Huff uses the videos and content he creates in order to sell his ridiculous range of broken or at best, poorly functioning radios. He's already receiving a revenue stream from the videos he creates, which are essentially just advertisements. But he's not content with that. He wants his followers to pay him to create his advertising material. Obviously, Huff can't outright say this, so let's unpack his justification for soliciting money from his followers.

Huff tells his followers that by donating to him through Patreon they are helping him to conduct his 'research'.

His "drive is stronger than ever before"? I guess a lot can change in a few months, because only last year Huff quit the paranormal 'field' after claiming he was under sustained attack from demons (1)! As for Huff's claim that he is conducting 'research': where is it? In the introduction video on his Patreon, he claims that people have never even seen the majority of his research.

Two questions. Why is that? And given that is the case, why the fuck should anyone pay to support Huff conducting more research, just for him to lash the results in the fucking garbage?

Sitting in your bedroom playing with neon lights and broken radios is not research. It's tinkering, and he's got a damn nerve asking others to pay to allow him to continue doing it. Now that he's receiving money expressly for research, doesn't Huff have a responsibility to his donors to properly and publicly present that research?

I believe so, but he gives no indication that he is going to do this.

In the above video, Huff declares that he is broke and that he cannot continue his 'research' without donations. But surely he's got a revenue stream from selling his junk? In fact, only on January 25th he posted his latest products on his website, costing between $900 and $3,995! (2)

And finally, there's the model that costs over $3,000. It will set you back over $4,200 if you want it shipped abroad!

Now, Huff can't have much money tied up in stock of this rubbish, for the simple reason that he requires you to accept that he won't build your box until he's got your cash in hand.

There's another interesting condition that Huff asks his customers to agree to: he asks them to accept that there is no guarantee his boxes will actually work!

Can you imagine any other type of retailer selling a product that they don't guarantee will work? Huff may actually be falling afoul of a piece of US legislation, the Sales and Storage of goods framework (3), which states:

This requires that the seller knows an item is fit for purpose at the time of sale; Huff is expressly telling his customers that he does not know that!

Back to Huff's Patreon: as well as crying poverty, you'll notice that Huff brings heaven and hell into the equation of whether you choose to support him. Huff tells us he has categorically discovered that there is a heaven and a hell, and that his research is helping to discover who goes where when they die. In this respect, Huff has made himself indistinguishable from televangelists like Jim Bakker, who threatened that viewers' grandchildren would go to hell unless they bought a $60 bucket of pancake mix (4). Well, there is one difference. Give your money to Bakker and you get some pancake mix.

So what is Huff offering to his donors?

It's a lot sleazier than pancake mix.

Steve Huff. Ghoul for Gold.

So, for your monthly contribution, Huff is offering to attempt to contact your deceased loved one. Of course, as with everything Huff offers, he's quick to point out there are no guarantees. And when it does 'work', it's very likely because the patron has provided Huff with the name or names of the person they want contacted, something which makes audio pareidolia and suggestibility much easier for Huff to exploit.

Clearly, in the pursuit of the almighty dollar, there is no level to which Huff will not stoop. He is quite happy to manipulate his patrons' grief and pain, something that shills in the paranormal circus have been doing for years.

Just like mediums or anyone else who proposes to contact the dead, Huff is a ghoul. Pure and simple. It angers me greatly to read the messages from patrons on Huff's page requesting him to contact their loved ones. 

No-one deserves to be exploited in this way.

No one.

So what can we do? The first step I'd suggest is going to Patreon directly. I have to say that they aren't particularly responsive, but if enough people register their disdain perhaps we can help shut down this thinly veiled exploitation. Next, raise awareness, whether by spreading this post around social media or by writing your own posts and articles.

One thing should be abundantly clear here: Huff is a fraud and a charlatan. He has shown himself to have absolutely no boundaries in what he will do for money.