2. If they don't show it, it is because the data would get them in trouble.
They're reviewing the research done on the leukaemia data over a few different prior studies. It's not exactly commonplace to reproduce the raw data from said studies when you're doing this.
You, and they, are not producing evidence, but making claims about what authority claims: authority telling us what authority tells us. That is not science, and on a question as politically dangerous as the effects of low-dose radiation, authority is not evidence. If authority had evidence for the politically correct view, they would show us that evidence, instead of assuring us it exists somewhere, in some unspecified place, in some form we are insufficiently learned to understand.
Radon gas is extremely patchy, and the patches have no correlation with other population characteristics that might affect cancer. If radon gas caused cancer, the correct approach is to look at populations exposed to high levels of radon gas because of the rocks that happen to be underneath them, not because of lifestyle or employment characteristics. That is the only kind of survey that counts, and all such surveys show no harmful radiation effects, and arguably beneficial radiation effects. Using any other kind of data is deliberately looking for bad data to grind your axe - just as looking at responders rather than evacuees is deliberately looking for bad data to grind your axe.
And if you are going to use bad data, show us the bad data. Don't show us what authority says about what authority says about what authority says about the bad data. Bad data is bad science, but what authority says about bad data is not even science.
Meanwhile, what do you make of this, the most thorough review I've yet found of the other data on low-dose (specifically, 100 mSv) effects on leukaemia risk? http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3095477/?tool=pubmed
This is not a review. It is a meta-analysis. Meta-analyses are notoriously worthless even on entirely uncontroversial topics. On a controversial topic, everything that makes meta-analyses worthless makes them not merely worthless, but of negative value.
Even in a non-political meta-analysis of non-political data, publication bias in favor of significant results is going to produce bogus results:
Here is one problem with even non-political meta-analysis: Suppose a penny coming up ten heads in a row is statistically significant, and therefore publishable, but random behavior is unpublishable. You get one hundred published papers each reporting a penny coming up ten heads in a row, and no published papers showing undramatic penny behavior. You do a meta-analysis, and now you have a published paper showing a penny coming up one thousand heads in a row! Someone else does a meta-analysis, including that previous meta-analysis as original independent data, when in fact it is re-analyzing the same data, and now you have a published paper showing a penny coming up two thousand heads in a row! Rinse and repeat!
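To make the penny example concrete, here is a minimal simulation; the number of labs, the ten-flip studies, and the "only ten heads in a row gets published" rule are all invented to mirror the illustration above, not drawn from any real literature:

```python
# Minimal sketch of the penny example. Every coin is fair, but only the
# dramatic ten-heads runs get published, and the naive meta-analysis then
# pools nothing but the published flips. All numbers here are invented.
import math
import random

random.seed(0)

FLIPS_PER_STUDY = 10
NUM_STUDIES = 100_000            # labs quietly flipping fair pennies

published_runs = 0               # only ten heads in a row is "publishable"
for _ in range(NUM_STUDIES):
    heads = sum(random.random() < 0.5 for _ in range(FLIPS_PER_STUDY))
    if heads == FLIPS_PER_STUDY:             # dramatic result, gets published
        published_runs += 1

pooled_flips = FLIPS_PER_STUDY * published_runs
pooled_heads = pooled_flips      # every published study was all heads

# Naive pooled p-value: the chance of all heads in that many flips of a
# fair coin, reported on a log scale to dodge floating-point underflow.
log10_p = -pooled_flips * math.log10(2)

print(f"published studies: {published_runs} of {NUM_STUDIES}")
print(f"pooled 'finding': {pooled_heads} heads in {pooled_flips} flips")
print(f"naive pooled p-value: about 10**{log10_p:.0f} (every coin was fair)")
```

With a hundred thousand fair pennies, roughly a hundred all-heads runs get published, so the pooled "finding" is on the order of a thousand heads in a row with an absurdly small p-value; feed that meta-analysis into the next one as if it were original data and the absurdity doubles.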
A meta-analysis generates bogus statistical significance, fabricating strong statistical significance from a multitude of analyses each with weak statistical significance. People use meta-analyses when they cannot get analyses that show strong significance - but the usual reason that they can never get strong significance is that there is really no correlation at all. If low-level radiation actually caused leukemia, no one would be reduced to doing meta-analyses on each other's analyses, because they would be able to attain statistical significance without being reduced to such extremes.
Thus the very existence of meta-analysis papers showing a correlation between x and y is evidence for the complete lack of any correlation between x and y. When people use statistical methods that are well known to be bad, it is evidence that they are hard up for evidence that shows what they want to show.
As with evidence for flying saucers, the abundance of fishy-smelling evidence is itself evidence for the absence of non-fishy evidence.
If people provide meta-analysis as evidence for x, that is itself evidence against x. Because of various forms of publication bias, meta-analysis is naughty, and well known to be naughty. If people do naughty things to prove x is true, x is probably untrue.
The most thorough review I've yet found of the other data on low-dose (specifically, 100 mSv) effects on leukaemia risk? http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3095477/?tool=pubmed
Another meta-analysis. Funny thing, that.
I repeat: meta-analyses are well known to be bad evidence. Reliance on bad evidence is itself evidence for the absence of good evidence. If low doses caused leukemia, there would be good evidence.
Meh. You're leaving aside the evidence linked to in that review.
See here for a meta-analysis and summary of 13 European studies
Again with the meta-analysis.
If anyone had one good survey showing that radon caused harmful effects at low doses, they would present that survey. Since they don't present any one specific survey that shows what they want to show, we may conclude that no such survey exists.
If there were non-fishy, statistically significant surveys showing what they wanted, they would not be reduced to meta-analysis.
(particularly figure 2): http://www.ncbi.nlm.nih.gov/pmc/articles/PMC546066/?tool=pmcentrez
which tells us:
Collectively, though not separately, these studies show appreciable hazards from residential radon, particularly for smokers and recent ex-smokers, and indicate that it is responsible for about 2% of all deaths from cancer in Europe.
Firstly. Meta-analysis is bad.
Secondly. Ill-ventilated houses will have higher radon levels than well-ventilated houses, and if those houses also have smokers ...
The correct approach is not house by house, but area by area. The question that should be asked is: Do people have higher cancer rates if they live in houses built over rocks that tend to emit radon, as compared to houses built over rocks that do not tend to emit radon?
And when this question, the right question, is asked, we get the result that radon is good for your health.
To get the desired result, that radon is bad for health, we have to ask questions that do not directly reflect radiation levels alone. To get the desired result, we have to ask a different question.
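To illustrate the confounding point with made-up numbers: in the sketch below, cancer risk is driven only by indoor smoke and never by radon, yet a house-by-house comparison still "finds" a radon effect, because poor ventilation raises both indoor radon and indoor smoke. Comparing area by area, by the radon the underlying rocks emit, finds nothing. Every parameter is invented purely for illustration:

```python
# Toy confounding sketch: cancer here depends only on indoor smoke,
# never on radon, but measured indoor radon = geology / ventilation,
# and smoke exposure also rises as ventilation falls.
import random

random.seed(1)

AREAS = 40                # areas differ in the radon their rocks emit
HOUSES_PER_AREA = 500

houses = []               # (measured indoor radon, got cancer)
areas = []                # (geology radon, area cancer rate)

for _ in range(AREAS):
    geology = random.uniform(0.5, 2.0)           # radon emitted by the rocks
    cases = 0
    for _ in range(HOUSES_PER_AREA):
        ventilation = random.uniform(0.25, 2.0)  # low value = ill ventilated
        radon = geology / ventilation            # what a house survey measures
        smoker = random.random() < 0.3
        smoke = (1.0 / ventilation) if smoker else 0.0
        risk = 0.02 + 0.05 * smoke               # the truth: only smoke matters
        cancer = random.random() < risk
        cases += cancer
        houses.append((radon, cancer))
    areas.append((geology, cases / HOUSES_PER_AREA))

def split_by_median(pairs):
    """Mean outcome below vs above the median of the first coordinate."""
    pairs = sorted(pairs)
    half = len(pairs) // 2
    mean = lambda chunk: sum(y for _, y in chunk) / len(chunk)
    return mean(pairs[:half]), mean(pairs[half:])

lo_h, hi_h = split_by_median(houses)
lo_a, hi_a = split_by_median(areas)
print(f"house by house: cancer rate {lo_h:.3f} (low radon) vs {hi_h:.3f} (high radon)")
print(f"area by area:   cancer rate {lo_a:.3f} (low radon) vs {hi_a:.3f} (high radon)")
```

Under these made-up numbers the house-by-house split typically shows a visibly higher cancer rate in the high-radon houses, while the area-by-area split shows essentially no difference, even though radon does nothing in either case.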
I've looked into the methodology of the two main meta-analyses behind the conclusions reached in that review on household exposure and no, they really didn't.
Yet oddly, when I just looked into the methodology, I found the use of notoriously unreliable methods that tend to magnify and rest upon publication bias (meta-analysis), and the deliberate search for radiation data that is likely to reflect factors other than radiation (house-by-house radon levels rather than area-by-area radon levels, and the use of radiation responders rather than radiation evacuees).
When you pool data from different studies operating on different principles, the number of ways of "analyzing" the data increases combinatorially, guaranteeing that you can find an analysis that shows whatever you want.
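A sketch of that combinatorial explosion, on purely invented data: "exposure" and "cancer" below are generated independently, so the true effect is exactly zero, yet with enough optional choices about which studies to pool, which subgroup to keep, and where to put the exposure cutoff, some combination usually comes out "significant":

```python
# Sketch of analyst degrees of freedom: exposure and cancer are generated
# independently (true effect exactly zero), then many pooled analyses are
# tried. Study count, subgroups, and cutoffs are all invented.
import itertools
import math
import random

random.seed(2)

STUDIES, SUBJECTS = 6, 300

# Each subject: (study, smoker, exposure, cancer), all drawn independently.
subjects = [(s, random.random() < 0.3, random.random(), random.random() < 0.1)
            for s in range(STUDIES) for _ in range(SUBJECTS)]

def p_value(exposed, unexposed):
    """Two-sided two-proportion z-test (normal approximation) on cancer rates."""
    na, nb = len(exposed), len(unexposed)
    if na < 30 or nb < 30:
        return 1.0
    pa = sum(x[3] for x in exposed) / na
    pb = sum(x[3] for x in unexposed) / nb
    pool = (pa * na + pb * nb) / (na + nb)
    se = math.sqrt(pool * (1 - pool) * (1 / na + 1 / nb)) or 1e-9
    return math.erfc(abs(pa - pb) / se / math.sqrt(2))

best_p, best_choice, tried = 1.0, None, 0
for k in range(3, STUDIES + 1):                        # which studies to pool
    for pooled in itertools.combinations(range(STUDIES), k):
        for subgroup in ("all", "smokers", "nonsmokers"):
            for cutoff in (0.3, 0.5, 0.7):             # where "exposed" begins
                rows = [x for x in subjects if x[0] in pooled]
                if subgroup == "smokers":
                    rows = [x for x in rows if x[1]]
                elif subgroup == "nonsmokers":
                    rows = [x for x in rows if not x[1]]
                p = p_value([x for x in rows if x[2] >= cutoff],
                            [x for x in rows if x[2] < cutoff])
                tried += 1
                if p < best_p:
                    best_p, best_choice = p, (pooled, subgroup, cutoff)

print(f"analyses tried: {tried}")
print(f"best p-value found on pure noise: {best_p:.4f}")
print(f"with choices (studies, subgroup, cutoff): {best_choice}")
```

With a few hundred analyses to choose from, the minimum p-value on pure noise usually lands well under 0.05; report only that one and the pooled "effect" looks real.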
The risk is not so high as when you don't pool the data from all the valid studies
Pooling data from different studies is well known to be bad practice, for numerous reasons, most notoriously publication bias, even on questions that are not politically sensitive. It is a well-known problem.
Quoting from data pooled from multiple different studies done in multiple different ways as evidence for the danger of low-dose radiation is like quoting from mental patients as evidence for visits from flying saucers. That one is reduced to such evidence is pretty good evidence for lack of visits.
Some substantial populations get 250 mSv per year. [of natural background radiation]
Source, please.
http://www.ncbi.nlm.nih.gov/pubmed/11769138