More relevantly to the doses in question, though: you're talking about small effects, and limited scope to observe them without close observation.
That is a criticism of all studies before Chernobyl. Chernobyl was an experiment on a gigantic scale. If there were low dose effects, they would show up.
If there were low dose effects, they should have killed about four thousand people, a large proportion of them in a few small towns near Chernobyl. That would be statistically detectable.
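As a rough sketch of why a cluster of that size would be statistically detectable (the population and baseline figures below are assumed purely for illustration, not taken from any study in this thread): deaths in a population are roughly Poisson-distributed, so the noise on a baseline count is about its square root, and a few thousand excess deaths would stand out by many standard deviations.

```python
import math

# Hypothetical numbers for illustration only - not from any study.
# Suppose ~4,000 excess cancer deaths fell in an exposed population
# whose baseline expectation is ~20,000 cancer deaths.
baseline = 20_000   # expected cancer deaths absent any radiation effect (assumed)
excess = 4_000      # hypothesised extra deaths under a linear low-dose model

# Poisson counts: the standard deviation of the baseline count is sqrt(baseline).
z = excess / math.sqrt(baseline)   # standard deviations above expectation

print(f"z-score: {z:.1f}")   # roughly 28 sigma - far beyond any conventional cutoff
```

Even with the assumed numbers off by a factor of several, an effect like that would not hide in the noise.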
What are you basing this statistic on?
The Chernobyl results are: no indication of a low dose effect below 200 mSv. Questionable evidence of a low dose effect for thyroid cancer from 200 mSv up - a cluster which, supposing it to be real rather than a result of overdiagnosis, killed nine people.
Where are you getting these stats from? And fine, the cluster didn't kill that many people, because thyroid cancer doesn't generally kill that many people, but you're still talking about a cluster of 5,000 cases of thyroid cancer. Some of which may have been overdiagnosis, but a lot of which may well have been due to the simple fact that Chernobyl released radioactive iodine, and iodine is straightaway taken up by the thyroid (which is why giving radioactive iodine is one option for killing off hyperactive thyroid glands).
Above 1000 mSv, people start dying all over the place. There is no doubt whatever that 1000 mSv or more is really seriously bad for you. If the effect were linear between 200 mSv and 1000 mSv, we would see plenty of excess deaths in the 200 mSv - 1000 mSv range, which is roughly what most of the evacuees got. We don't.
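For a sense of the scale of that prediction, here is the back-of-envelope linear arithmetic. The cohort size and mean dose below are assumed for illustration only; the ~5% fatal-cancer risk per sievert is an ICRP-style nominal coefficient, not a figure from this thread.

```python
# Linear no-threshold back-of-envelope: excess deaths scale as
# population x mean dose x risk coefficient.
population = 100_000    # assumed evacuee-sized cohort (illustrative)
mean_dose_sv = 0.3      # assumed mean dose of 300 mSv (illustrative)
risk_per_sv = 0.05      # ~5% fatal-cancer risk per Sv (ICRP-style nominal value)

excess_deaths = population * mean_dose_sv * risk_per_sv
print(round(excess_deaths))   # -> 1500 predicted excess deaths under linearity
```

The point being argued is that a linear model predicts thousands of excess deaths in the evacuee dose range, a number large enough that its absence is itself evidence.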
The figures you're giving for what the populations affected by Chernobyl got are all significantly larger than anything I'm seeing when I look this up (e.g. here http://www.who.int/ionizing_radiation/chernobyl/backgrounder/en/index.html). Where are you getting them from?
Which close observation there seems to have been, to a better extent, with the Japanese Survivor Survey (covering those exposed in another very large natural experiment, and which is what the linear model is based on).
But the Japanese survivor survey does not support the linear model: from the survivor survey, it looks like radiation protects against cancer and improves general health at doses of 50 - 100 mSv, begins to have harmful effects on one's chances of cancer at around 100 - 200 mSv, and increases linearly thereafter. It supports the linear model only above 100 mSv, with a threshold around 100 mSv.
However, in the Japanese survivor survey, radiation shows no linear effect on total deaths, with an increase of total deaths setting in at about 1000 mSv, consistent with the conjecture that doses of radiation above 100 mSv result in a small increased risk of being diagnosed with cancer and having one's death blamed on cancer, and a decreased risk of having one's death blamed on something else.
Source, please. What you're saying would seem to contradict the paper I linked to before on this data.
Anyone claiming a linear effect is refuted by Chernobyl, an unintended experiment carried out on a gigantic scale. If the effect were linear, then we would see enormous patches of excess deaths.
It would seem to contradict this data (where they saw a significant increase in leukaemia among those with 150-300 mSv of exposure) http://www.bmj.com/rapid-response/2011/10/29/elevated-leukemia-rates-chernobyl-accident-liquidators
There is no liquidator subpopulation that is known to have been exposed to only 150-300 mSv. The radiation exposure of liquidators was guesswork - uncertain and unreliable.
OK, so you're challenging this dose estimate. What's your evidence that it was higher than that estimated by the study in question?
The liquidator study gives "bordering level of significance" - in other words not really significant.
No, the results weren't merely borderline. From the study results: "Both the external (SIR=2.5, P<0.001) and internal comparisons (RR=2.2, P=0.03) indicate significantly elevated risks in the 150-300 mGy dose group."
p<0.001 is a very strong level of evidence - not borderline at all. p=0.03, meanwhile, is still pretty strong.
Further, the significance assumes that "dose reconstruction" is precise, but the dose reconstruction was guesswork, and guesswork made by people who knew the outcome. If someone is suffering from leukemia, people will be more apt to recall higher dose events, and more apt to interpret events as high dose events. If someone is in fine health, people will be less apt to recall higher dose events, and less apt to interpret events as high dose.
Recall bias is always a potential issue, and might explain away some of the internal comparison, but not so much the external comparison (the excess risk between liquidators and non-liquidators).
Even assuming no such bias, some of those estimated as being exposed to x amount of radiation will in fact have been exposed to 10x that amount, and others to 0.1x, smearing out any threshold effect.
Random errors in dose estimation are at least as likely to bias a study's findings towards the null as they are to bias them away from it.
If indeed those estimated at 150-300 mSv have higher rates of leukemia (and the result was not statistically significant), it may well be because some of those estimated at 300 mSv in fact had 1000 mSv.
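The smearing argument can be sketched with a toy simulation. Everything here is illustrative and assumed - the true threshold, the dose range, and the error model are not estimated from any data. A sharp true threshold combined with large multiplicative errors in dose reconstruction produces apparent excess risk even in the low estimated-dose bins:

```python
import random
import statistics

random.seed(0)

THRESHOLD = 300.0   # assumed true threshold in mSv (illustrative only)

def true_risk(dose):
    # Step response: no excess risk below the threshold, full excess above.
    return 1.0 if dose >= THRESHOLD else 0.0

# True doses spread around the threshold; estimated doses carry a
# multiplicative lognormal error (guesswork spanning roughly x0.1 to x10).
observed = {}   # estimated-dose bin (lower edge, mSv) -> list of true risks
for _ in range(100_000):
    true_dose = random.uniform(50, 600)
    error = random.lognormvariate(0, 0.8)
    est_dose = true_dose * error
    bin_lo = int(est_dose // 100) * 100
    observed.setdefault(bin_lo, []).append(true_risk(true_dose))

for lo in sorted(b for b in observed if b < 700):
    mean_risk = statistics.mean(observed[lo])
    print(f"estimated dose {lo:3d}-{lo + 100} mSv: apparent excess risk {mean_risk:.2f}")
```

The printed table shows the sharp step at 300 mSv dissolving into a gradual slope across estimated-dose bins, which is the sense in which unreliable dosimetry can disguise a threshold as something closer to a linear response.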
Comparing studies that give different results, we see a consistent pattern: the better the data, the higher the threshold. The threshold is at least 100 mSv, and quite likely 1000 mSv.
Please show me a study claiming a safe threshold of 1000 mSv.
By the way, I did some more research, and found another, more recent paper summing up what's been found among all the different cohorts of liquidators. Not sure if you'll find it interesting or not, but here you are: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2904977/?tool=pubmed
Their conclusions were pretty conservative: "A significantly elevated OR was seen for all hematological malignancies combined at doses of 200 mGy and above." (For our purposes, so far as I understand it, we can treat grays as basically equivalent to sieverts.)
I don't know the facts about routine general medical and dental X-rays. Yet I still can't help but cringe when I see children in particular with a cellular phone smashed against their skull. Or using notebooks with high power wireless in their laps. Why not just have them stand in front of a worn out microwave oven?
I wouldn't worry about it. Mobile phones emit radio waves, not gamma or x-rays, and large epidemiological studies have shown no link whatsoever between use and cancer statistics.
As for medical X-rays etc.: back in the old days, medical X-rays and their sources could be properly dangerous (both Roentgen and his wife dying from exposure, for instance), but they are kept to really very safe levels now. Please see here for a rough comparison of exposure levels: http://www.hpa.org.uk/Topics/Radiation/UnderstandingRadiation/UnderstandingRadiationTopics/DoseComparisonsForIonisingRadiation/
Note that the debate between me and Sam has been largely on the risks of exposures over 100 mSv. Below that, there isn't much evidence of risk, which is what the threshold hypothesis is all about, and advocates of hormesis claim there may even be some benefit. And medical scans range from, say, 0.005 mSv for a dental X-ray (meaning that you'd need, oh, about 20,000 of them to top 100 mSv in exposure) to 10 mSv for a full body CT scan (which would very rarely be carried out anyway, and generally only if you had something much more serious than low-dose radiation exposure to worry about).
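To make that arithmetic explicit (using only the dose figures quoted above - 0.005 mSv per dental X-ray, 10 mSv per full-body CT, and the 100 mSv level the debate has centred on):

```python
# Rough dose arithmetic from the figures quoted above.
dental_xray_msv = 0.005   # typical dental X-ray dose (mSv)
body_ct_msv = 10.0        # typical full-body CT dose (mSv)
debate_floor_msv = 100.0  # dose below which this debate sees little evidence of risk

print(round(debate_floor_msv / dental_xray_msv))  # dental X-rays to reach 100 mSv: 20000
print(round(debate_floor_msv / body_ct_msv))      # full-body CT scans to reach 100 mSv: 10
```

So even the highest-dose routine scan sits an order of magnitude below the level at which the two sides of this thread start disagreeing.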