Epidemiological case-control studies examining the risk of brain cancer from exposure to mobile phone radiation are a job that has not been done well – at least so far…
Last night and today I got several messages informing me that Lennart Hardell has published a new analysis of his data (http://ije.oxfordjournals.org/content/early/2010/12/17/ije.dyq246.extract). It is a re-analysis of his old data, but with a “twist”. He has extracted from his old data only the subset that matches the parameters and boundaries of the Interphone study (e.g. age of the subjects, duration of mobile phone use, and exclusion of non-mobile-phone exposures). The analysis of this “refined” Hardell data has now shown that the Hardell data and the Interphone data lead to very similar end results.
It is an interesting development because, for a very long time, researchers were “puzzled” as to why Hardell’s studies and the Interphone studies showed different risks. What makes it interesting is that in the recently published ICNIRP evaluation of the epidemiological data, the scientists stated that they have no clue why there is a difference between Hardell’s results and the other published results (https://betweenrockandhardplace.wordpress.com/2009/08/06/how-reliable-is-the-epidemiological-evidence-on-mobile-phones-and-cancer/). It might be that Hardell has now shown what the reason was.
However, there is a very worrying development caused by the publication of Hardell’s re-analysis. It seems that some people have already jumped to conclusions. They think that if Hardell’s re-analyzed data agree with the Interphone data, then it is ultimate proof that the conclusions drawn from both data-sets are correct and that they conclusively demonstrate an increased brain cancer risk in people highly exposed to mobile phone radiation.
Hold your horses. NO! It is not so.
We are still dealing with two studies that are, in part, based on unreliable data. In both studies, the exposure of the subjects was evaluated retrospectively by asking people what they remember. This means that both data-sets carry the same exposure (recall) bias. Hence, the final analyses are equally unreliable in both studies.
The fact that the results of one unreliable study (Interphone) are being confirmed by the other unreliable study (Hardell’s re-analysis) does not make either of them more reliable. They both contain the same biases and, therefore, it can be expected that they lead to similar results. Adding up two erroneous answers does not produce a correct answer.
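The logic of this argument can be sketched with a toy simulation (hypothetical numbers, not the actual Interphone or Hardell data): if two independent studies share the same recall bias, they will agree with each other even when the true risk is nil.

```python
# Toy illustration of shared recall bias in two case-control studies.
# All parameters here are invented for the sketch; they do not come
# from the Interphone or Hardell data.
import random

def simulate_study(n_cases=1000, n_controls=1000, true_exposure=0.5,
                   case_overreport=0.15, seed=0):
    """Simulate one case-control study with differential recall.

    True exposure prevalence is identical in cases and controls,
    so the true odds ratio is 1 (no real effect). Unexposed cases
    over-report exposure with probability `case_overreport` -- the
    recall bias shared by both simulated studies.
    """
    rng = random.Random(seed)

    def reported_exposure(is_case):
        exposed = rng.random() < true_exposure
        if is_case and not exposed and rng.random() < case_overreport:
            exposed = True  # biased recall among cases
        return exposed

    a = sum(reported_exposure(True) for _ in range(n_cases))      # exposed cases
    c = sum(reported_exposure(False) for _ in range(n_controls))  # exposed controls
    b, d = n_cases - a, n_controls - c                            # unexposed
    return (a * d) / (b * c)  # observed odds ratio

or1 = simulate_study(seed=1)  # "study 1"
or2 = simulate_study(seed=2)  # "study 2": same bias, different subjects
print(or1, or2)  # both inflated above 1.0, and similar to each other
```

Both simulated studies report an elevated odds ratio, and they agree with each other, yet the true odds ratio was set to 1. Agreement between the two says nothing about whether either is correct.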
So, let us stop crunching and re-crunching the data from Interphone and Hardell. As the proverb says: “It is useless to flog a dead horse.”
Let us agree that both data sets are biased, that however they are re-evaluated the biases will not disappear, and that the re-analyzed data will still be insufficient to draw any scientifically reliable conclusions.