Melbourne, Australia; Jan. 29, 2013:
Reliable scientific data are the Α & Ω of any science. Reliable data are the only way to obtain reliable conclusions. No matter how “fancy” the performed data analysis is, if the underlying scientific data are not reliable, the outcome of the analysis will also be unreliable.
Some of the epidemiologists doing research on mobile phones and cancer seem to forget this simple truth. Some of them imagine, and impose this mistaken view on others, that unreliable scientific data can be made reliable by statistical analyses.
It is not so, and it is finally necessary to say it aloud and to stand up to the misleading claims, and the waste of funding, by some of the epidemiologists.
I think, and I know that I am not alone among scientists in the RF field, that the importance of the epidemiological evidence is hugely overrated, and that the epidemiologists who produced the RF-related epidemiological studies seem to be detached from reality and live in their own “parallel universe”.
WHO, IARC, ICNIRP and ICES consider epidemiological studies to be the providers of the most important scientific evidence concerning the impact of mobile phone radiation on human health. However, due to the low sensitivity of epidemiology in detecting effects within the population, it is unlikely that this approach will ever be able to conclusively determine whether a weak stimulus, such as mobile phone radiation, causes a disease as rare as brain cancer (ca. 10–20 cases per 100,000 people).
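The low-sensitivity point can be made concrete with a standard sample-size calculation for comparing two proportions. The sketch below is illustrative only (the relative risk of 1.5 and the 80% power target are my assumptions, not from the article); it shows how many subjects per group an epidemiological study would need to detect a modest effect on a disease this rare.

```python
# Illustrative sample-size sketch: subjects per group needed to detect
# a relative risk of 1.5 for a disease with baseline incidence of
# ~15 per 100,000 (the brain-cancer range mentioned above).
# Uses the standard two-proportion sample-size formula.
from math import sqrt
from statistics import NormalDist

def n_per_group(p0, rr, alpha=0.05, power=0.80):
    """Approximate subjects per group for a two-proportion comparison."""
    p1 = p0 * rr                                  # incidence in exposed group
    z_a = NormalDist().inv_cdf(1 - alpha / 2)     # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)             # ~0.84 for 80% power
    p_bar = (p0 + p1) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return num / (p1 - p0) ** 2

n = n_per_group(15 / 100_000, 1.5)
print(f"{n:,.0f} subjects per group")  # roughly half a million per group
```

With these assumed numbers the calculation lands at hundreds of thousands of subjects per group, which is why rare diseases push epidemiology to the limit of its sensitivity.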
There are numerous biases involved in the estimation of health risk by epidemiology, such as selection bias, misclassification bias, recall bias, and the effect of the developing disease itself on the person’s mobile phone use. Furthermore, there are methodological problems in epidemiological studies that remain unsolved at the moment. The most important of them is evidence-based exposure dosimetry.
A further complication with the epidemiological evidence is the long latency period (over 10 or even over 20 years) between the induction of brain cancer and its clinical diagnosis. Therefore, it is no surprise that the majority of the executed epidemiological studies, covering at most the first 10–15 years after the start of mobile phone use, cannot be expected to show a link between brain cancer and mobile phone radiation, even if one existed.
Yet the epidemiologists who executed studies examining the existence of a causal link between mobile phone radiation and brain cancer, in spite of the limitation of the long latency period, obvious to everybody, and in spite of the lack of reliable dosimetry, staunchly claim that their research, looking at short-term use of mobile phones, shows that there is no risk of brain cancer and that it is unlikely that mobile phone radiation will cause any health problems in the future.
It sounds like a “fairytale” from the “parallel universe”.
We should remember that epidemiology generates “dirty data”, as one prominent epidemiologist said to me during the Monte Verita meeting in 2012.
To better understand how “dirty” the epidemiological data are, let’s look at an extreme example, of course exaggerated to make the point:
When scientists perform an animal study, they use inbred mice or rats. This means that all animals have the same genetic makeup, and that with high probability the scientists can expect the animals to react in a similar way to the same stimulus.
If the same scientist were to perform an animal study but, instead of using inbred animals, collected mice or rats from the city sewers, nobody would accept the results of such a study, and the scientist would be called a dilettante outright and sent back to school.
But this is exactly what epidemiology is about. It generates “dirty data” that would not be accepted in any other science. The only reason we have to use the “dirty data” of epidemiology is that we cannot do anything about it: every person in an epidemiological study differs genetically from the others, and the environment and way of life are different for every person. That is why the results of epidemiological studies provide “dirty data” that should be viewed with much greater caution than epidemiologists let the general public and the decision makers understand.
Scientists, decision makers and the general public alike forget, or are not even aware of, this serious limitation of epidemiology and look up to epidemiological evidence as if it were the provider of ultimate proof. Epidemiological evidence does not provide reliable proof if it is not supported by human volunteer studies, animal studies and in vitro laboratory studies.
It was 1999 when the largest case-control epidemiological study, INTERPHONE, was planned. At that time, optimists hoped that by the end of this project in 2004 we would know whether cell phone radiation causes brain cancer. After long delays, INTERPHONE published the results of the glioma brain cancer study in 2010. The results were confusing, to say the least. Use of the cell phone for less than 10 years seemed to have a “protective” effect, whereas use of the cell phone for more than 10 years seemed to increase glioma incidence.
By design, the INTERPHONE study was unable to detect brain cancer induced by cell phone radiation because of its long (over 10 years) latency period. At the time of the execution of INTERPHONE (2000–2004), cell phones had been in common use for only a few years. There was not enough time for the development and diagnosis of brain cancer, even if it was caused by cell phone radiation.
However, there was an even more important design flaw. The information about the extent of exposure to cell phone radiation was based on the individual recollections of the subjects in the study. Therefore, by design, INTERPHONE compared reliable information on diagnosed cancers with entirely unreliable information about exposures. Such a comparison cannot produce a reliable result, as was seen in the confusing results published by INTERPHONE in 2010.
Unfortunately, it is necessary to honestly admit that the same unreliable experimental set-up that was used in the INTERPHONE study was also used in the studies of the Hardell group in Sweden.
In 2011, the Danish Cohort, the largest cohort study, published another update. Similarly to INTERPHONE, the Danish Cohort update compared reliable information on diagnosed brain cancers with absolutely unreliable information about exposures: the exposure information was based solely on the length of the subscription with the network operator. The study also contaminated the control group with cell phone users.
My staunch criticism of the Danish Cohort update was noticed by a prominent epidemiologist who sent me the following e-mail message:
“…I appreciate your critical comments but with your comments about the Danish Cohort study, you do not yourself a favor. I strongly recommend to read an epidemiological textbook to understand what are the limitations and the advantages of the cohort. It is much better than you think. Exposure assessment in epidemiology is sometimes more complex than people assume. As long as you have not fully understood concepts like Berkson errors, etc, I recommend you not too harshly criticize epidemiological as I do not understand the full complexity of omics studies. …”
This message, to me, is a very good illustration of the “parallel universe” of epidemiologists who forget that the first step in any data analysis is correct data. In the case of the Danish Cohort update study there is no information about the exposures to radiation, and the information used instead (subscription periods) is wholly unsuitable as an exposure equivalent. No fancy statistical test will turn wrong data into a correct result.
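The argument that wrong exposure data cannot be rescued by analysis can be illustrated with a small simulation. The sketch below is mine, not from the article, and the numbers (a true odds ratio of 3 and a 30% exposure misclassification rate) are assumptions chosen only to make the point: when exposure is recorded incorrectly at random (non-differential misclassification, of the kind a subscription-based proxy could produce), the estimated odds ratio is pulled toward 1, and a real effect is hidden.

```python
# Illustrative simulation: non-differential exposure misclassification
# attenuates an odds ratio toward the null. All parameters are assumptions.
import random

random.seed(1)

def odds_ratio(cases_exp, cases_unexp, ctrls_exp, ctrls_unexp):
    """Odds ratio from a 2x2 case-control table."""
    return (cases_exp * ctrls_unexp) / (cases_unexp * ctrls_exp)

def simulate(n, misclass_rate):
    # counts indexed by *recorded* exposure (0 = unexposed, 1 = exposed)
    tbl = {"case": [0, 0], "ctrl": [0, 0]}
    for _ in range(n):
        exposed = random.random() < 0.5
        risk = 0.03 if exposed else 0.01          # true odds ratio ~ 3
        status = "case" if random.random() < risk else "ctrl"
        # with probability misclass_rate, exposure is recorded wrongly
        recorded = exposed if random.random() > misclass_rate else not exposed
        tbl[status][int(recorded)] += 1
    return odds_ratio(tbl["case"][1], tbl["case"][0],
                      tbl["ctrl"][1], tbl["ctrl"][0])

print("OR with correct exposure data:  ", round(simulate(200_000, 0.0), 2))
print("OR with 30% misclassification:  ", round(simulate(200_000, 0.3), 2))
```

Under these assumptions the true odds ratio of about 3 shrinks to roughly 1.5 once a third of the exposure records are wrong; no statistical test applied afterwards can recover the information that was never collected.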
In the “parallel universe” of some epidemiologists, exposure dosimetry and cancer latency seem not to matter…
As a consequence, we are left to deal with a trail of failed epidemiological studies that are wrongly heralded as ultimate proof that mobile phones are safe. The evidence is not there, because the studies are based on unreliable data, and the epidemiologists who designed them should finally accept these failures and learn from them.
As the prominent Danish epidemiologist Jørn Olsen said about the INTERPHONE study (Bioelectromagnetics 2011, 32:164–167):
“…The worst-case scenario is that long-term use of cell phones does carry health risks but the Interphone Study dried up available resources for funding and made the public and funding agencies immune to the epidemiological results…”.
What we need now is to jump-start non-epidemiological research, because the predictive value of epidemiology in the case of mobile phone radiation has been demonstrated to be a big fat zero.