• It’s the epidemiology, stupid!

The phrase “It’s the economy, stupid!”, familiar to many, was brought into politics by James Carville, adviser to Bill Clinton’s 1992 presidential campaign. Since then the phrase has entered the mainstream media and is used in various contexts, with the word “economy” replaced by some other term. It points to something so obvious that we do not always remember it.

Here, I have replaced the word “economy” with “epidemiology” to talk about the obvious reason for the confusion among mobile phone users and decision makers about the potential risk of brain cancer induced by mobile phone radiation. In fact, the two largest epidemiological studies are not only uninformative; they are both misleading, and one of them contains such a grave error that it should never have passed the peer review of reviewers and editors and should never have been published in its current form.

To lay readers, a brief reminder: epidemiology is the science that examines what happens in a human population exposed to agents that might, or might not, affect health. Epidemiology attempts to determine the probability, and I stress the word “probability”, that the examined agent can, or cannot, cause harm to human health. What lay readers (including journalists) do not always seem to remember are the limitations of the epidemiological method.

The major problem of epidemiological studies is the heterogeneity of the human population. We differ from each other because of our inherited genes and because of differences in how we live in our surrounding environment. That is why epidemiologists cannot determine “certainty” but only “probability”.

In spite of these limitations, in the world of scientific evidence for health effects and risks, epidemiological studies are considered the most important and are given the most weight in risk-related decisions.

That is why we all pay the most attention to the outcomes of epidemiological studies, and that is why I think it is important to discuss and explain to a lay audience the shortcomings of these studies, because “all that glitters is not gold”… even if the authors try to convince us of the opposite.

Because of the above-mentioned heterogeneity of the human population, epidemiological studies need to be large to be even remotely “reliable”, with enough cases for statistical evaluation. That is why the two largest studies on mobile phone radiation and brain cancer, the case-control study INTERPHONE and the cohort study from Denmark (hereafter the Danish Cohort), have received a lot of attention in the news media, among decision makers, and from the industry.

The shortcomings of INTERPHONE I have described in my science blog on several occasions, and I am not going to repeat them here. Instead, I will just quote a few sentences from the critical commentary on INTERPHONE published in the journal Bioelectromagnetics by Professor Jørn Olsen, a well-known epidemiologist:

  • “…This commentary questions the wisdom in choosing this design and argues that funding could and should have been used better by setting up a large-scale cohort study that could address other potential endpoints besides cancer...”
  • “…The Interphone Study had a double-digit budget in millions of Euros, even without including national funding sources. This was a large case–control study with 6420 cases and 7658 controls in the first published combined study. Still, the funding far exceeded what we normally see in case–control studies without attached expensive laboratory analyses...”
  • “…Random misclassification of the exposure was, however, large and will drive estimates of association toward null values...”
  • “…The methodological problems were smaller than could be expected but still large enough to limit the conclusions that can be drawn from the first part of the Interphone Study addressing glioma and meningiomal tumors…”
  • “…We may also conclude that the methodological problems after all were large enough to prohibit any conclusion about increased or decreased risk...”
  • “…The worst-case scenario is that long-term use of cell phones does carry health risks but the Interphone Study dried up available resources for funding and made the public and funding agencies immune to the epidemiological results…”

So, summa summarum, the INTERPHONE study, according to Professor Olsen, cannot provide any estimate of the existence or absence of a human brain cancer risk, and it was a substantial waste of funds and public trust.

I have reached similar conclusions in my earlier blog texts.

Now, let us look at the other of the two largest studies, the recent update of the Danish Cohort. It is the largest study because the authors started with a population of 723 421 subscription records in Denmark for the years 1982–1995. However, its information on people’s exposure to mobile phone radiation is even worse than INTERPHONE’s. As a reminder: in INTERPHONE, people were asked how much they had used the phone over a period of up to 10 years, which introduced an enormous bias – who remembers how much they talked on the phone not 10 years ago, but even just a few weeks ago? So the exposure information in INTERPHONE was, in practice, unreliable, if not outright scientifically non-existent.

The Danish Cohort is even worse. The only information the scientists had was how long a person had owned a mobile phone subscription with a provider. There was no estimation of exposure at all, not even one as poor as the recollection from memory in INTERPHONE. This way of gathering “exposure data” in the Danish Cohort leads to paradoxes.

In the Danish Cohort, two persons, one of whom spends many hours per week on the phone and the other just a few minutes per week, are analyzed as belonging to the same exposure group as long as they have owned a subscription for the same length of time. This means that highly exposed and nearly unexposed persons are mixed together in the same exposure group. Such “data” certainly cannot provide any scientifically valid information.

Even though the Danish Cohort study started with over 700 000 mobile phone subscribers, after some exclusions (more on this below) it used only 358 043 of them, and the statistical evaluations presented in tables 1, 2 and 3 of the article are largely based on just a few, or a few tens of, cases. This means the statistical results are worthless: the numbers are simply too low. In practice, tables 1, 2 and 3 are not informative at all.
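To get a rough sense of why tables based on a handful of cases are uninformative, here is a small illustrative sketch (the counts below are made up by me, not taken from the paper’s tables) using the standard approximate 95% confidence interval for an odds ratio, Woolf’s logit method: with single-digit case counts, the interval is so wide that it is simultaneously compatible with a strong protective effect and with a more-than-doubled risk.

```python
import math

def or_confidence_interval(a, b, c, d):
    """Odds ratio and approximate 95% CI from a 2x2 table
    (a, b = exposed cases/controls; c, d = unexposed cases/controls),
    using Woolf's logit method."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    return odds_ratio, lower, upper

# only 4 exposed cases: the interval spans more than a 10-fold range,
# so it says essentially nothing about risk
print(or_confidence_interval(4, 1000, 5, 1200))

# a few hundred exposed cases: the interval becomes narrow enough to be useful
print(or_confidence_interval(400, 100000, 500, 120000))
```

The point is not these particular numbers but the shape of the formula: the standard error is dominated by the reciprocals of the smallest cells, so a table built on a few cases can hardly exclude anything.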

The two examples of shortcomings above should already be enough to preclude publication of the Danish Cohort study. However, there is more. There is a big and unforgivable error in the study design, an error so grave that university students should study it as an example of how not to do research. Namely:

From the starting cohort of 723 421 mobile phone subscribers, all corporate subscribers (200 507 of them) were excluded, because information about them was a “corporate secret”. So the people who were most likely the heaviest users were excluded from the study. In the years 1982–1995, using a mobile phone was expensive for private persons. But this did not apply to business people, who, due to their professional needs, used mobile phones extensively. So the most exposed group was excluded. But hold on: this is an error, yet it is not the unforgivable design error I mentioned above. That error is the following, and I quote a fragment of the discussion section of the Danish Cohort study:

“…Because we excluded corporate subscriptions, mobile phone users who do not have a subscription in their own name will have been misclassified as unexposed…”

It means that some of the highly exposed corporate users who did not have personal phone subscriptions (nobody knows how many) ended up as unexposed controls.

Let me give an example of how grave this error is:

If a scientist performing experiments on cells in the laboratory were, before measuring the results, to take some of his most exposed cells, mix them with the unexposed cells, and use this mixed sample as the control sample, such conduct would be condemned, such a study would never be published, and the scientist would have a lot of explaining to do.

The same thing has been done, by design, in the Danish Cohort study. It is unbelievable that this study passed the peer review of the appointed reviewers and journal editors.
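The attenuating effect of this mixing can be sketched numerically. Below is a minimal, purely illustrative simulation (all numbers are my own assumptions, not values from either study) showing how labelling some truly exposed people as “unexposed” drags the observed odds ratio toward the null value of 1, the same direction of bias that Professor Olsen’s quoted remark about random misclassification describes:

```python
import random

random.seed(42)

def observed_odds_ratio(n=500_000, true_or=2.0, misclassified=0.0):
    """Simulate a population where a fraction of the truly exposed
    are labelled 'unexposed', and return the odds ratio computed
    from the labels rather than from the true exposure."""
    base_risk = 0.01                     # disease risk among the truly unexposed
    exposed_risk = base_risk * true_or   # rare disease, so OR ~ relative risk
    a = b = c = d = 0                    # 2x2 table: labelled exposure x case/control
    for _ in range(n):
        truly_exposed = random.random() < 0.3
        risk = exposed_risk if truly_exposed else base_risk
        case = random.random() < risk
        # the design flaw: some truly exposed end up labelled unexposed
        labelled_exposed = truly_exposed and random.random() >= misclassified
        if labelled_exposed:
            a += case
            b += not case
        else:
            c += case
            d += not case
    return (a * d) / (b * c)

print(observed_odds_ratio(misclassified=0.0))  # close to the true OR of 2
print(observed_odds_ratio(misclassified=0.5))  # pulled toward the null value of 1
```

In the Danish Cohort the same dilution happens by design: the excluded corporate users, likely the heaviest users, sit silently inside the “unexposed” comparison group.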

But hold on, this is not yet everything that is wrong with the control sample of the Danish Cohort study. Any person who got a subscription after the cut-off year of the study (1995) counted as unexposed by the “standards” of the study. What does this mean? For example:

A person who was diagnosed with brain cancer in 2007 and had had a phone subscription since 1996 was, by the design of the Danish Cohort study, an unexposed person who got cancer. Whereas, in reality, this was a person exposed for 11 years who got cancer.

The control sample of the Danish Cohort study is unbelievable rubbish! Pardon my expression, but nothing better comes to mind. The authors simply cannot know what it contains. It is a mix of unexposed and exposed persons.

The Danish Cohort study should never have been published; since it was, those responsible for its publication should explain themselves, and the study should be immediately withdrawn, with apologies.

But the reality is (still) different. The “botched” Danish Cohort study, in spite of the numerous criticisms published by the same journal as rapid responses, is still a “valid” peer-reviewed article. It seems as if epidemiology had its own set of scientific rules…

But in the real world it is not uncommon for flawed studies to be retracted, either by journal editors or by the authors themselves. A most extreme case happened just this September, when a journal editor took the blame and responsibility for the publication of a flawed study and resigned from his Editor-in-Chief position.

So, what is our problem with mobile phone radiation and brain cancer? What is causing confusion, messing things up, and preventing future research by draining funds and providing worthless results?

It’s the epidemiology, stupid!




13 thoughts on “• It’s the epidemiology, stupid!”

  1. I completely agree with this Dariusz Leszczynski article about the Interphone and the Danish Cohort studies. In December 2008, two years before the global release of Interphone, I published a paper in the French medical daily Quotidien du médecin in which I explained that Interphone could not provide clear answers to the mobile phone hazard question.
    The authors of the French part of Interphone knew precisely, already in 2007 (it was published in “Environnement Risques et Santé”, Vol. 6, n° 2, 2007, p. 104), that the answers to questions about the number of mobile calls and their length were not reliable, even for calls made six months earlier. I quote:
    “The analysis of the concordance between the data estimated in 2001 by the subjects and those measured by the operators shows a rather mediocre (j = 0.34) but significant (p < 0.01) concordance for the average numbers of calls. On the other hand, there is no concordance at all between the actual durations and the durations estimated during the first interview (j = 0.18). … the correlation between the estimated and measured numbers [of calls], and even more so that of the durations, is very poor.” (translated from the French)
    This was, however, not insurmountable. After all, we can only do statistics with the available data. But they used logistic regression and did not take the data uncertainty into account.
    This has two consequences: an underestimated odds ratio and an underestimated (too narrow) OR confidence interval. Since the scarce “significant” OR confidence intervals published are very close to 1, they would probably include 1 if the data uncertainty had not been ignored.
    Much Ado About Nothing ?

  2. I think that if we apply the rules of science, epidemiology can provide information about a problem. However, this is undermined by the two studies mentioned. Anyone who reads these studies will identify major problems and will then ask questions such as: how was it possible to develop such criteria, to exclude the heavy mobile users, to reach such conclusions, etc.? And the sad part is that such conclusions are later used by regulatory bodies: everything is okay, no problems. Such an event occurred in the Greek parliament in the past, when one of these studies was mentioned. Is this scientific?

  3. Indeed. I, as Co-Chair (with Dr Nam Kim) of the Technical Program Committee, invited Prof. Olsen to speak in South Korea at the BEMS meeting, and then I asked him to submit his commentary to Bioelectromagnetics.

  4. I disagree with Lloyd on Interphone and agree with Jorn on Interphone (his commentary in Bioelectromagnetics): based on the Interphone results it is not possible to say anything about risk, neither that it exists nor that it does not exist.

  5. Yep, I clearly said that I speak about the two largest studies: the largest case-control and the largest cohort… Nothing wrong with that? It was not meant to be a review article 😉

  6. Professor Jorn Olsen, who is quoted in the blog, is Chairman of the Department of Epidemiology at the University of California, Los Angeles (UCLA). He made these statements at the Bioelectromagnetics Society (BEMS) meeting in Seoul, Korea, in June 2010, and they were also later published in the journal Bioelectromagnetics.

  7. “In spite of the limitations, in the world of the scientific evidence for health effects/risks, epidemiological studies are considered to be the most important and giving the most weight to the risk-related decisions.” While this is true, the only epidemiology studies mentioned were the INTERPHONE and Danish Cohort studies, which both had serious problems. The unmentioned elephant in this blog’s room is the set of Swedish epidemiology studies led by Dr. Lennart Hardell, which were void of serious problems.

    “So, summa summarum, the INTERPHONE study, according to Professor Olsen, can not provide any estimates towards the existence or absence of human brain cancer risk and it was substantial waste of funds and public trust.”

    “I have reached similar conclusions in my earlier blog texts.” I would suggest that this is a serious misinterpretation of the INTERPHONE study. Yes, it has copious problems, all of which lead to an underestimation of risk. This underestimation is a systemic problem resulting from selection bias (10% underestimation), a truncated age range (49% underestimation), and treating cordless phone use as a non-exposure (26% underestimation).

    This systemic underestimation results in the INTERPHONE study, for light exposures (<10 years of use and <1,600 cumulative hours of use), reporting multiple statistically significant findings of protection against brain tumors from cellphone use. Yet, for heavy exposures, the INTERPHONE study finds a more than 2-fold statistically significant risk. Because of this systemic underestimation of risk, all risks found in INTERPHONE are larger than the published risks. I would suggest that the systemic underestimation of risk is the result of yet another bias, financial bias, as the cellphone industry provided substantial funding. Nevertheless, the INTERPHONE study is important and should be given substantial weight in risk-related decisions.

    I concur with the statement that “The Danish Cohort study should never been published, and since it was those responsible for publication should explain themselves and the study should be immediately withdrawn, with apologies.” This study was entirely funded by cellphone companies and the International Epidemiology Institute, which designed the study. While the problems delineated in this blog are all true, a far simpler way to understand how fundamentally flawed this study was is to realize that 100% of the statistically significant findings from the 5 studies published to date (talk about beating a dead horse!) were statistically significant protections from numerous cancers and neurological diseases. In effect, if you believe the Danish Cohort study, then being a Danish cellphone subscriber is the most impressive health elixir known to mankind!

    The Swedish epidemiology studies have found statistically significant risks of brain tumors from both cellphones and cordless phones, as would be expected if cellphones are a risk due to exposure to microwave radiation. Further, these studies show an internal consistency with the hypothesis that wireless phone use is a risk for brain tumors:
    • The higher the cumulative hours of use, the higher the risk;
    • The longer the time since first use, the higher the risk;
    • The higher the radiated power (analog cellphones and rural use of digital cellphones), the higher the risk;
    • The closer the tissue to the cellphone (ipsilateral versus contralateral use), the higher the risk; and
    • The younger the user at first use, the higher the risk.
    Thus the elephant in the room should not have been left out of this discussion.

  8. “In spite of the limitations, in the world of the scientific evidence for health effects/risks, epidemiological studies are considered to be the most important and giving the most weight to the risk-related decisions.” While this is true, the only epidemiology studies mentioned were the Interphone and Danish Cohort studies, which both had serious problems.

  9. Dear Michael,
    As I understand from your comment, it is not epidemiology that is to be blamed but the epidemiologists… If so, I agree. In fact, the “original” title of my blog had the word “epidemiologists”, not “epidemiology”, but at the last moment I opted for the less teasing option. However, epidemiology too, and our perception of it, is largely to blame. It is often accepted, often without hesitation, that epidemiological results are the ultimate truth about effects on the human population. This is the problem, and it is perpetuated by epidemiologists themselves. So, in a broader sense, epidemiology is to be blamed, and there would be no epidemiology without epidemiologists, and so on…
    Best, Dariusz

  10. Dear Dariusz,
    Although I agree with many of your specific comments, I don’t think epidemiology can be blamed. That would mean throwing out the baby with the bathwater. The Interphone consortium had a hard time over the years finally agreeing on an intellectually and methodologically very modest version of a manuscript, of which the companion editorial correctly said that it contains Delphian statements that will lead to confusion. More importantly, a number of methodological decisions were made (e.g. narrowing down the age range, and largely ignoring the selection bias – only in the appendix was it addressed in a way that is in line with epidemiological recommendations) that led to wrong interpretations. The Danish cohort study could have been analyzed in a way that considers the ‘contamination’ of the comparison cohort. It has not been done. In my opinion, the reason for all these shortcomings is the prejudiced position of some authors. Epidemiology is not to be blamed. Observational studies almost always have shortcomings, but epidemiology has developed a lot of procedures to cope with them. Of course, some shortcomings may be too severe to make meaningful analyses feasible. But in my opinion both studies have their merits and, conditional on a meaningful analysis, could contribute to risk assessment.
    Best wishes,

  11. Thanks Henrik. You are correct – I mixed up two persons, Jørgen and Jørn, who have the same surname, Olsen. I have removed the erroneous reference. Apologies.

  12. Dear Prof. Leszczynski,

    I’ve just read your latest blog post with great interest. I see you arrived at the same conclusions regarding the flaws in the Danish Cohort.

    In your post you mention a Professor “Jørn Olsen” and his comments re Interphone. You also write that he is co-author of Danish Cohort.
    I suspect his name is spelled wrong.
    There is Jørgen H. Olsen from the Danish Cancer Society, who is a co-author of the cohort:
    and there is a Jørn Olsen who is an epidemiology professor at Aarhus Univ. and consultant to the Danish Health Board:
    Which one of the two did you mean?

    Recently, on the Danish Health Board website, Jørn Olsen called the Danish Cohort “the best analysis to date” while admitting only one of its confounders.
    If you read Swedish, as many Finns do, you can look it up here:
    You can also run it through google translate: http://goo.gl/VMv5P

    Best regards,
    Henrik Eiriksson

    PS: Their names are similar in writing and in speech since the “g” in Jørgen is mute in Danish, so Jørgen and Jørn essentially sound the same.

  13. Hi Dariusz, another excellent post.

    I’d be interested, if you have them, in any comments you may have on our BMJ letter on the updated Danish Cohort (http://www.bmj.com/content/343/bmj.d6387?tab=responses – Philips and Lamburn). The authors have responded, but I don’t feel they’ve addressed all of our key issues (and were incorrect in some regards in the issues they did address, such as assuming that other exposures were all far-field effects, which DECT cordless phones most certainly are not).

    Our title is perhaps a bit strong, but it seems like you broadly agree on the quality of the Frei paper itself?

    Best Regards,
    – Graham Lamburn
