The “Parallel Universe” of Epidemiologists

Melbourne, Australia; Jan. 29, 2013:

Reliable scientific data are the Α & Ω of any science. Reliable data are the only way to reach reliable conclusions. No matter how “fancy” the data analysis, if the underlying scientific data are not reliable, the outcome of the analysis will be unreliable too.

Some of the epidemiologists doing research on mobile phones and cancer seem to forget this simple truth. Some of them imagine, and impose this mistaken view on others, that unreliable scientific data can be made reliable by statistical analyses.

It is not so, and it is finally necessary to say so aloud and to stand up to the misleading claims, and the waste of funding, by some of these epidemiologists.

I think, and I know that I am not alone among scientists in the RF field, that the importance of the epidemiological evidence is hugely overrated, and that the epidemiologists who produced the RF-related studies seem detached from reality, living in their own “parallel universe”.

WHO, IARC, ICNIRP and ICES consider epidemiological studies the providers of the most important scientific evidence concerning the impact of mobile phone radiation on human health. However, given the low sensitivity of epidemiology in detecting effects within a population, it is unlikely that this approach will ever be able to conclusively determine whether a weak stimulus, such as mobile phone radiation, causes a disease as rare as brain cancer (ca. 10-20 cases per 100,000 people).
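To make this sensitivity problem concrete, here is a rough, back-of-the-envelope calculation of the cohort sizes it implies. This is only an illustrative sketch, assuming a simple two-group cohort design, a baseline incidence of 15 per 100,000 and the standard two-proportion normal-approximation sample-size formula; the numbers are mine for illustration, not taken from any particular study:

```python
from scipy.stats import norm

def cohort_size_per_group(p0, rr, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to detect relative risk `rr`
    over baseline incidence `p0` (two-proportion normal approximation)."""
    p1 = p0 * rr
    p_bar = (p0 + p1) / 2
    z_a = norm.ppf(1 - alpha / 2)  # two-sided significance level
    z_b = norm.ppf(power)          # desired statistical power
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p0 * (1 - p0) + p1 * (1 - p1)) ** 0.5) ** 2
         / (p1 - p0) ** 2)
    return int(n) + 1

p0 = 15 / 100_000  # ~15 brain cancer cases per 100,000 people
for rr in (1.2, 1.5, 2.0):
    print(f"RR = {rr}: ~{cohort_size_per_group(p0, rr):,} subjects per group")
```

Under these assumptions, detecting a relative risk of 1.5 over such a rare baseline requires roughly half a million subjects per group; a weak effect on a rare disease simply drowns in the noise of any realistically sized study.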

There are numerous biases involved in the estimation of health risk by epidemiology: selection bias, misclassification bias, recall bias, and the effect of the developing disease itself on a person’s mobile phone use. Furthermore, there are methodological problems in epidemiological studies that remain unsolved. The most important of them is evidence-based exposure dosimetry.
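To see why recall-driven misclassification matters so much, a small simulation helps. Again only a minimal sketch; the true odds ratio of about 2 and the 25% misreporting rate are assumptions chosen for illustration, not estimates from any real study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

exposed = rng.random(n) < 0.5              # true exposure status
p_case = np.where(exposed, 0.002, 0.001)   # disease risk, true OR ~ 2
case = rng.random(n) < p_case

# Nondifferential recall error: 25% of subjects misreport exposure.
flip = rng.random(n) < 0.25
reported = exposed ^ flip

def odds_ratio(exp, cas):
    a = np.sum(exp & cas)    # exposed cases
    b = np.sum(exp & ~cas)   # exposed non-cases
    c = np.sum(~exp & cas)   # unexposed cases
    d = np.sum(~exp & ~cas)  # unexposed non-cases
    return (a * d) / (b * c)

print(f"OR from true exposure:     {odds_ratio(exposed, case):.2f}")
print(f"OR from reported exposure: {odds_ratio(reported, case):.2f}")
```

With a quarter of the subjects misreporting, a true odds ratio of 2 comes out as roughly 1.4. Nondifferential misclassification of this kind drags estimates toward the null, which is exactly why a “no effect” finding built on recalled exposure proves very little.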

A further complication with the epidemiological evidence is the long latency period (over 10, or even over 20, years) between the induction of brain cancer and its clinical diagnosis. It is therefore no surprise that the majority of the epidemiological studies executed so far, covering at most the first 10-15 years of mobile phone use, cannot be expected to show a link between brain cancer and mobile phone radiation even if one exists.

Yet the epidemiologists who executed studies examining the existence of a causal link between mobile phone radiation and brain cancer, in spite of the limitation of the long latency period obvious to everybody, and in spite of the lack of reliable dosimetry, staunchly claim that their research, which looked only at short-term use of mobile phones, shows that there is no risk of brain cancer and that it is unlikely that mobile phone radiation will cause any health problems in the future.

It sounds like a “fairytale” from the “parallel universe”.

We should remember that epidemiology generates “dirty data”, as one prominent epidemiologist said to me during the Monte Verita meeting in 2012.

To better understand how “dirty” epidemiological data are, let’s look at an extreme example, exaggerated, of course, to make the point:

When scientists perform an animal study, they use inbred mice or rats. This means that all the animals have the same genetic makeup, so the scientist can expect, with high probability, that the animals will react in a similar way to the same stimulus.

If the same scientist were to perform an animal study but, instead of using inbred animals, collected mice or rats from the city sewers, nobody would accept the results of such a study; the scientist would be called an outright dilettante and sent back to school.

But this is exactly what epidemiology is about. It generates “dirty data” that would not be accepted in any other science. The only reason we have to use the “dirty data” of epidemiology is that we cannot do anything about it – every person in an epidemiological study differs genetically from every other, and everyone’s environment and way of life are different too. That is why the results of epidemiological studies should be viewed with much greater caution than epidemiologists let the general public and the decision makers understand.

Scientists, decision makers and the general public alike forget, or are not even aware of, this serious limitation of epidemiology, and look to epidemiological evidence as if it were the provider of ultimate proof. Epidemiological evidence does not provide reliable proof unless it is supported by human volunteer studies, animal studies and in vitro laboratory studies.

It was in 1999 that the largest case-control epidemiological study, INTERPHONE, was planned. At that time, optimists hoped that by the end of this project in 2004 we would know whether cell phone radiation causes brain cancer. After long delays, INTERPHONE published the results of its glioma brain cancer study in 2010. The results were confusing, to say the least. Use of a cell phone for less than 10 years seemed to have a “protective” effect, whereas use of a cell phone for more than 10 years seemed to increase glioma incidence.

By design, the INTERPHONE study was unable to detect brain cancer induced by cell phone radiation because of the cancer’s long (over 10 years) latency period. At the time INTERPHONE was executed (2000-2004), cell phones had been in common use for only a few years. There had not been enough time for brain cancer to develop and be diagnosed, even if it were caused by cell phone radiation.

However, there was an even more important design flaw. The information about the extent of exposures to cell phone radiation was based on the individual recollections of the subjects in the study. Therefore, by design, INTERPHONE compared reliable information on diagnosed cancers with entirely unreliable information about exposures. Such a comparison cannot produce a reliable result, as was seen in the confusing findings INTERPHONE published in 2010.

Unfortunately, it must be honestly admitted that the same unreliable experimental set-up used in the INTERPHONE study was also used in the studies of the Hardell group in Sweden.

In 2011, another update of the Danish Cohort, the largest cohort study, was published. Like INTERPHONE, the Danish Cohort update compared reliable information on diagnosed brain cancers with absolutely unreliable information about exposures: the exposure information was based solely on the length of a subscription with the network operator. The study also contaminated the control group with cell phone users.
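What contaminating the control group does to a cohort estimate can also be illustrated with a few lines of simulation. This is again a minimal sketch under assumed numbers, a true relative risk of 2 and 40% of genuine users counted as unexposed; the values are chosen only to show the direction of the bias:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

user = rng.random(n) < 0.6               # true phone users
p_case = np.where(user, 0.002, 0.001)    # disease risk, true RR ~ 2
case = rng.random(n) < p_case

# Contamination: 40% of true users are filed among the "unexposed"
# (e.g. they use a phone but hold no personal subscription).
misfiled = user & (rng.random(n) < 0.4)
classified = user & ~misfiled            # only these count as exposed

def relative_risk(exp, cas):
    return cas[exp].mean() / cas[~exp].mean()

print(f"RR with true user status:      {relative_risk(user, case):.2f}")
print(f"RR with contaminated controls: {relative_risk(classified, case):.2f}")
```

Filing genuine users among the controls raises the control group’s apparent incidence and dilutes the estimate toward 1; in more extreme scenarios, where the heaviest users land in the control group, the estimate can even dip below 1 and produce an apparent “protective” effect.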

My staunch criticism of the Danish Cohort update was noticed by a prominent epidemiologist who sent me the following e-mail message:

“…I appreciate your critical comments but with your comments about the Danish Cohort study, you do not yourself a favor. I strongly recommend to read an epidemiological textbook to understand what are the limitations and the advantages of the cohort. It is much better than you think. Exposure assessment in epidemiology is sometimes more complex than people assume. As long as you have not fully understood concepts like Berkson errors, etc, I recommend you not too harshly criticize epidemiological as I do not understand the full complexity of omics studies. …”

This message, to me, is a very good illustration of the “parallel universe” of epidemiologists who forget that the first step in any data analysis is correct data. In the case of the Danish Cohort update there is no information about exposures to radiation, and the information used instead (subscription periods) is simply wrong as an exposure surrogate. No fancy statistical test will turn wrong data into a correct result.

In the “parallel universe” of some epidemiologists, exposure dosimetry and cancer latency seem not to matter…

As a consequence, we are left with a trail of failed epidemiological studies that are wrongly heralded as ultimate proof that mobile phones are safe. The evidence is not there, because the studies are based on unreliable data, and the epidemiologists who designed them should finally accept these failures and learn from them.

As the prominent Danish epidemiologist Jørn Olsen said about the INTERPHONE study (Bioelectromagnetics 2011, 32:164-167):

“…The worst-case scenario is that long-term use of cell phones does carry health risks but the Interphone Study dried up available resources for funding and made the public and funding agencies immune to the epidemiological results…”.

What we need now is to jump-start non-epidemiological research, because the predictive value of epidemiology in the case of mobile phone radiation has been demonstrated to be a big fat zero.


20 thoughts on “The “Parallel Universe” of Epidemiologists”

  1. popa,

    Proper dosimetry:
    – the dosimetry presented in the Danish Cohort is simply rubbish – it is not proper dosimetry
    – the dosimetry in the Interphone and Hardell studies is “better”, but far from proper.

    Nobody remembers how long they used a cell phone years back. When researchers ask specifically how many minutes per day or week someone used a phone a few years ago, it is obvious that the answer will be unrealistic. Much better would be call records from the mobile operators. However, at the time Interphone was set up, the mobile operators considered such information a business secret and did not agree to reveal the records. So the scientists went for “reality” and got nowhere.

    Accepting reality is one thing, but scientific validity is another. I can accept that the dosimetry in the Interphone and Hardell studies was the best the scientists were able to do at the time. But you will not convince me that this is a reason to say that the results of the Interphone or Hardell studies are superior and should be accepted because they are based on “reality”.

    Lastly, who are you, and what qualifications do you have to question my expertise, when you are just hiding behind “popa”?

  2. Dariusz

    What do you mean by “proper” dosimetry in epidemiology?
    Don’t throw things at people when you offer no real alternative.
    It is very easy to rule out something that is not your expertise.
    What you suggested is not an end point of a brain tumor. If you don’t have an alternative, then you need to accept the fact that epidemiology has the limitations of reality, because epidemiology deals with real-life conditions and communication with real people, and neither real-life conditions nor people are perfect.
    Studies will not be prospective for 20 years, even if that would provide better assessment. One must be realistic.

    popa

  3. There is no perfect way to address such a complex issue. We can follow brain cancer trends in the population, but if there is any rise, we might still not know what is responsible for it. One way would be to expose human brain cells and look for changes at the level of gene expression and at the level of protein expression and function. This way it might be possible to determine whether any signalling pathways known to be involved in brain cancer are activated… If so, this might give us an indication that brain cancer might be caused by RF (not proof yet).

    Of course, proper dosimetry in epidemiological studies would help a lot to make them more believable…

  4. Yes, many studies have “imperfect” dosimetry. However, dosimetry methodology per se is in a “bad mess” and in bad need of improvement. The models used are outdated and do not take real cell physiology into consideration. I spoke about this in my talk at the “Science & Wireless 2012” event in Melbourne last year. Also, one of my recent blog posts, about the PNAS study discussing hot spots in the brain, referred to my earlier writings about outdated and unrealistic dosimetry models.

  5. Dariusz, I am in complete agreement with your comments on the unreliability of exposure assessment and dosimetry used in all epidemiology studies to date – whether showing effect or no effect. Even studies that are otherwise very well conducted are a big fat zero without proper exposure assessment. Furthermore, man-made EMF is so pervasive that finding unexposed controls is already next to impossible.

    But I think you could go further and extend the same admonition to ALL scientific investigation of RF-EMF effects. We have seen many studies that paid wholly inadequate attention to this vital aspect and, while being well thought out in all other respects, are ultimately uninformative. Meanwhile, many of these studies are exploited for propaganda purposes – claimed as proving this or that, while actually proving nothing. If we don’t yet know for sure – let’s just say so!

    Well-defined exposure conditions are essential for producing reproducible and scientifically valuable data on the biological effects of RF. Of course, in vivo studies approximating humans will be required so that blood flow, thermoregulation effects, etc., can be accounted for. Rodents just won’t do; what about pigs?

  6. Dariusz

    Do you suggest studying brain tumors from mobile phones in volunteer studies? You obviously realize it is not possible. Please clarify how you suggest advancing brain tumor research.

    popa

  7. Yes, of course… the conditions of an experiment are controlled by the researcher “here and now”… no need to remember the past…

  8. Dariusz,

    You suggested volunteer studies; is it because you think that people in volunteer studies have no recall bias?

    popa

  9. popa,

    As I said, reliable data are of paramount importance. If one half of the data is reliable (cancer pathology) but the other half is severely biased by recall (exposure to radiation), then conclusions drawn from such data are not reliable and should not be used as proof of an effect, no matter how “fancy” the statistical analyses applied to the data afterwards. Unreliable data will remain unreliable even after the “fancy” statistics… At most, conclusions based on such biased data can be used as warning signs that justify precaution and further research.

  10. Dariusz

    You compare me with those who prefer the Interphone because you miss the whole methodology issue: it is nothing new that, methodologically, Hardell’s studies are far better than the Interphone. You are not sensitive at all to the difference between the two studies, and there is a big difference. That is why you should read Levis’ analysis and understand, first, why the court accepted the case.

    popa

  11. It is sufficient to guide and support the implementation of a certain policy – the Precautionary Principle.

  12. You say “not much use”, but that’s only with respect to proving an association (or the lack of one) – do you feel that the data as they stand are too weak to have any impact on guiding policy?

  13. Graham,

    Unfortunately, the data collected so far are of not much use, as they lack reliable exposure information. Whatever analysis is done on such data will be unreliable.

    I consider both Interphone and Hardell studies as red warning flags. Neither of them proves danger. Better studies, with proper dosimetry, should be executed to confirm or to dismiss these red warning flags.

    Finally, I have very little faith in epidemiology because of the biases and low sensitivity of the method. Mobile phone radiation is a weak stimulus, so the effects might be too weak to be conclusively detected by such a blunt tool as epidemiology.

    That is my opinion…

  14. Hi Dariusz, long time no speak!

    A very nice blog post again. One thing you didn’t mention (perhaps because it doesn’t add much more to your points) is that the cohort study’s misclassification was a serious problem, but perhaps not the study’s worst. Obviously, recategorising the heaviest user group into the control _is_ probably bad enough in its own right to leave the paper with little credibility, but there is also the fact that anyone who took out a subscription after the year 1995 (that is, most people with a phone subscription) was classified in the control section of the cohort as well. Pay-as-you-go people are misclassified in the same way too, but one would guess that they would be somewhat less intense users of their phones.

    I do find the Hardell data the most compelling of the epidemiological studies so far, and that’s not because of their conclusions or their pristine data quality. Recall bias is obviously a concern (although Interphone’s own work indicates that it’s perhaps not as damaging as the critics would claim), but by and large their data collection seems to be as reasonable as one could hope for from retrospective questionnaires. The reason I find them the most compelling is that the rest of the epidemiology on mobile phones and cancer is so downright awful, even measured against the limitations of what epidemiology can manage.

    However, the fact that the quality of the data is so limited doesn’t avoid the question of “what do we do about the data collected so far?” Having had a good conversation with you last year, I know your answer to this question, but I’ll ask for the sake of posterity anyway: at what point do the data oblige policy makers / legislators to take action (from guidance to minimise usage to updated laws on restrictions), and how does the current collection of mobile phone / health data apply to their decision making?

    Is the overall collected pool of data still sufficiently poor with regard to predictive power that it can or should be disregarded with respect to making policy? Does it indicate more strongly that a cancer risk is real, or that a null effect is real? Do we wait until good-quality laboratory studies start clearly demonstrating replicable causal pathways before acting?

    My biggest frustration with these parallel-universe epidemiologists isn’t their extreme faith in the quality of their data; it’s that they insist on stating that these questions are not a scientist’s problem on one hand, and then answer them (saying that there is no risk) on the other. Maybe there is a lack of awareness on their part, but being so categorical (to a lay person) about the lack of risk, when the truth is that _there is a lack of good enough quality data to identify a true risk_, is in effect already making the political decision for the rest of us.

  15. popa (whoever you are),

    You seem to selectively accept evidence that fits your assumptions.

    There are two sets of studies done using the same flawed protocol and seriously affected by the same recall bias (Interphone & Hardell). However, you think that Hardell is OK but Interphone not so much. Not surprisingly, others do the same as you, but they prefer Interphone…

    We should be fair. Interphone and Hardell provide red flags, warnings of a possible effect. These red flags should be followed up with better-designed studies. Neither of them proves anything; the reason is recall bias and the lack of reliable exposure data. It is a fairytale to say that you remember well your phone usage 10 years ago, or 5, or even a year ago. It is biased.

    Picking evidence that fits one’s assumptions and disregarding inconvenient evidence is pseudo-science.

  16. Dariusz,

    Not *everything* is distorted by recall bias! People are not so stupid that they don’t know where they hold their phone, and usually it is one dominant side.
    The Interphone researchers emphasized recall bias so much because they tried, in any case, to downplay their risk results towards no risk! And their study was indeed biased in many ways.
    But why don’t you ask yourself how it is that, in different studies from different countries, a risk was actually found even when they tried to downplay it? Epidemiology has limitations, but what you are doing is throwing the baby out with the bathwater, and that is not justified.

  17. Thank you for this analytic article. I would agree that the data are not perfect, but I also understand that it would be quite a difficult task to obtain quality exposure data over such a large sample size. It is not possible to equip everybody in the sample with a dosimeter. I’m not an epidemiologist, so I gather my data by means of questionnaires and on-site measurements. However, these methods also have a great uncertainty margin, since I cannot be there each day taking measurements, and no matter how good the questionnaire is, it is never perfect. But I do agree with many of your points. Have a nice day.

  18. popa,

    In order to find out whether there is a causal link between brain cancer and mobile phone radiation, epidemiologists need two sets of data: one about the prevalence of brain cancer and the other about exposures to mobile phone radiation.

    In the studies executed so far, the first data set was provided.

    The second data set about exposures to mobile phone radiation is either non-existent (Danish Cohort) or severely biased by recall (INTERPHONE and Hardell studies).

    Comparing real data on cancer with non-existent data on exposures will not tell us whether radiation is, or is not, responsible for brain cancer.

    Epidemiologists should finally admit that they made a mistake in designing their studies, because the studies do not properly account for radiation exposure.

  19. Dariusz

    Why do you claim it’s a “mistake”? Nobody is going to admit a “mistake” that produces profits.

    Since the first study of Hardell, many years have passed, more studies have been published, and the latency period covered has increased; there is no comparison between the Hardell and Interphone studies today.
    If you have any doubt, why don’t you just read the epidemiological analysis prepared for the Italian court?

    Environment affects genes. The weight of genes in cancer is considered to be around 5%, with the rest being lifestyle and environment; read about epigenetics. You are stretching the “dirty data” issue with your own interpretations to make everything a big fat zero, but that does not prove this is the real situation.
    It’s as if someone said that all animal studies are worthless because animals are not humans and also have different genetics. Only lately, the claim against the French study that found cancer from GMOs was that this kind of mouse is more prone to cancer anyway…

  20. The “prominent epidemiologist” commenting on the Danish Cohort should be sent to bed without supper because it doesn’t require a Ph.D. to realize:
    a) the study made no exposure assessment at all.
    b) it’s no wonder that you’ll find a “protective effect” if your study design transfers all the heaviest users into the control group.
