The phrase "It's the economy, stupid!", familiar to many, was brought into politics by James Carville, adviser to Bill Clinton's 1992 presidential campaign. Since then it has entered the mainstream media and is used in many different contexts, with the word "economy" replaced by some other term. The phrase points to something so obvious that we do not always remember it.
Here, I have replaced the word "economy" with the word "epidemiology" to point to the obvious cause of confusion among mobile phone users and decision makers about the potential risk of mobile phone radiation-induced brain cancer. In fact, the two largest epidemiological studies are not only uninformative; they are both misleading, and one of them contains such a grave error that it should never have passed the peer review of reviewers and editors and should never have been published in its current form.
To lay readers, just a brief reminder: epidemiology is the science that examines what happens in a human population exposed to agents that might, or might not, affect health. Epidemiology attempts to determine the probability, and I stress the word "probability", that the examined agent can, or cannot, harm human health. What lay readers (including journalists) do not always seem to remember are the limitations of the epidemiological method.
The major problem of epidemiological studies is the heterogeneity of the human population. We differ from each other because of our inherited genes and because of differences in how we live in our surrounding environment. That is why epidemiologists cannot determine "certainty" but only "probability".
In spite of these limitations, in the world of scientific evidence for health effects and risks, epidemiological studies are considered the most important and carry the most weight in risk-related decisions.
That is why we all pay the most attention to the outcomes of epidemiological studies, and that is why I think it is important to discuss and explain to a lay audience the shortcomings of these studies, because "all that glitters is not gold"… even if the authors try to convince us of the opposite.
Because of the above-mentioned heterogeneity of the human population, epidemiological studies need to be large if they are to be even remotely "reliable", with enough cases for statistical evaluation. That is why the two largest studies on mobile phone radiation and brain cancer, the case-control study INTERPHONE and the cohort study from Denmark (hereafter the Danish Cohort), have received a lot of attention in the news media, among decision makers, and from the industry.
I have described the shortcomings of INTERPHONE in my science blog on several occasions, and I am not going to repeat them here. Instead, I will just quote a few sentences from the critical commentary on INTERPHONE published in the journal Bioelectromagnetics by Professor Jørn Olsen, a well-known epidemiologist:
- “…This commentary questions the wisdom in choosing this design and argues that funding could and should have been used better by setting up a large-scale cohort study that could address other potential endpoints besides cancer...”
- “…The Interphone Study had a double-digit budget in millions of Euros, even without including national funding sources. This was a large case–control study with 6420 cases and 7658 controls in the first published combined study. Still, the funding far exceeded what we normally see in case–control studies without attached expensive laboratory analyses...”
- “…Random misclassification of the exposure was, however, large and will drive estimates of association toward null values...”
- “…The methodological problems were smaller than could be expected but still large enough to limit the conclusions that can be drawn from the first part of the Interphone Study addressing glioma and meningiomal tumors…”
- “…We may also conclude that the methodological problems after all were large enough to prohibit any conclusion about increased or decreased risk...”
- “…The worst-case scenario is that long-term use of cell phones does carry health risks but the Interphone Study dried up available resources for funding and made the public and funding agencies immune to the epidemiological results…”
So, summa summarum, according to Professor Olsen the INTERPHONE study cannot provide any estimate of the existence or absence of a human brain cancer risk, and it was a substantial waste of funds and of public trust.
I have reached similar conclusions in my earlier blog texts.
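The quoted point that "random misclassification of the exposure… will drive estimates of association toward null values" can be illustrated with a small simulation. All numbers below are my own illustrative assumptions, not values taken from INTERPHONE: I assume a true relative risk of 2.0 and a 30% chance that a person's exposure status is recorded wrongly in either direction (non-differential misclassification).

```python
import random

random.seed(42)

# Illustrative assumptions only -- not values from INTERPHONE.
N = 200_000        # cohort size
P_EXPOSED = 0.5    # half of the people are truly heavy users
BASE_RISK = 0.002  # disease probability among the unexposed
TRUE_RR = 2.0      # assumed true relative risk
MISCLASS = 0.3     # 30% chance the recorded exposure status is wrong

# case counts and group sizes per *recorded* exposure status
cases = {True: 0, False: 0}
totals = {True: 0, False: 0}

for _ in range(N):
    exposed = random.random() < P_EXPOSED
    diseased = random.random() < BASE_RISK * (TRUE_RR if exposed else 1.0)
    # non-differential (random) misclassification of exposure
    recorded = exposed if random.random() >= MISCLASS else not exposed
    cases[recorded] += diseased
    totals[recorded] += 1

observed_rr = (cases[True] / totals[True]) / (cases[False] / totals[False])
print(f"true RR = {TRUE_RR}, observed RR = {observed_rr:.2f}")
```

With these assumptions the observed relative risk comes out well below the true value of 2.0 (around 1.3): mixing up who is exposed and who is not "waters down" any real effect, exactly as Professor Olsen warns.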
Now, let us look at the other of the largest studies, the recent update of the Danish Cohort. It is the largest study because the authors started with a population of 723 421 subscription records in Denmark for the years 1982 – 1995. However, their information on people's exposure to mobile phone radiation is even worse than in INTERPHONE. Just as a reminder: in INTERPHONE, people were asked how much they had used the phone over a period of up to 10 years, which introduced an enormous recall bias – who remembers how much they talked on the phone not 10 years ago but even just a few weeks ago? So the exposure information in INTERPHONE was, in practice, unreliable if not outright scientifically non-existent.
The Danish Cohort is even worse. The only information the scientists had was how long a person had owned a mobile phone subscription with a provider. There was no estimation of exposure at all, not even one as poor as the recollection from memory in INTERPHONE. This way of gathering "exposure data" in the Danish Cohort leads to paradoxes.
In the Danish Cohort, two persons, one of whom spends many hours per week on the phone and the other just a few minutes per week, are analyzed as belonging to the same exposure group as long as they have owned their subscriptions for the same length of time. This means that highly exposed and nearly unexposed persons are mixed together in the same exposure group. Such "data" certainly cannot provide any scientifically valid information.
Even though the Danish Cohort study started with over 700 000 mobile phone subscribers, after some exclusions (more about this below) it used only 358 043 subscribers, and the statistical evaluations presented in Tables 1, 2 and 3 of the article are largely based on just a few, or a few tens, of cases. This means that the statistical results are worthless because the numbers are too low. In practice, Tables 1, 2 and 3 are not informative at all.
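To see why results based on a handful of cases are so weak, one can compute the width of a 95% confidence interval for a rate ratio with the usual log-normal approximation. The case counts and person-years below are hypothetical round numbers of my own, chosen only to contrast small and large case counts; they are not figures from the Danish Cohort tables.

```python
import math

def rate_ratio_ci(cases_a, person_years_a, cases_b, person_years_b):
    """Rate ratio and its 95% CI via the log-normal approximation."""
    rr = (cases_a / person_years_a) / (cases_b / person_years_b)
    se = math.sqrt(1 / cases_a + 1 / cases_b)  # SE of log(rate ratio)
    return rr, rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)

# hypothetical subgroup with few cases: 8 vs 10 cancers
rr_small, lo_small, hi_small = rate_ratio_ci(8, 50_000, 10, 60_000)
# same underlying rates, but 100x more cases
rr_big, lo_big, hi_big = rate_ratio_ci(800, 5_000_000, 1000, 6_000_000)

print(f"few cases : RR = {rr_small:.2f}, 95% CI {lo_small:.2f}-{hi_small:.2f}")
print(f"many cases: RR = {rr_big:.2f}, 95% CI {lo_big:.2f}-{hi_big:.2f}")
```

With only 8 versus 10 cases, the confidence interval runs from roughly 0.4 to 2.4: the same data are compatible with the exposure halving the risk or more than doubling it, so the result tells us essentially nothing.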
Already the two shortcomings above should be enough to preclude publication of the Danish Cohort study. However, there is more. There is a big and unforgivable error in the study design – an error that university students should study as an example of how not to do research. Namely:
From the starting cohort of 723 421 mobile phone subscribers, all corporate subscribers (200 507 subscriptions) were excluded because the information about them was a "corporate secret". Thus the people who were most likely the heaviest users were excluded from the study. In the years 1982 – 1995, using a mobile phone was expensive for private persons. This did not apply to business people, who, because of their professional needs, used mobile phones extensively. So the most exposed group was excluded. But hold on: this is an error, yet it is not the unforgivable error in the study design that I mentioned above. That error is this, and I quote a fragment of the discussion section of the Danish Cohort study:
“…Because we excluded corporate subscriptions, mobile phone users who do not have a subscription in their own name will have been misclassified as unexposed…”
It means that some of the highly exposed corporate users who did not have personal phone subscriptions (nobody knows how many) ended up as unexposed controls.
Let me give an example of how grave this error is:
If a scientist performing experiments on cells in a laboratory were, before measuring the results, to take some of his most exposed cells, mix them with the unexposed cells, and use such a mixed sample as the control sample, this activity would be condemned, such a study would never be published, and the scientist would have a lot of explaining to do.
The very same "thing" has been done, by design, in the Danish Cohort study. It is unbelievable that this study passed the peer review of the appointed reviewers and journal editors.
But, hold on, this is not yet everything that is wrong with the control sample of the Danish Cohort study. Any person who got a subscription after the cut-off year of the study (1995) counted as unexposed by the "standards" of the study. What does this mean? For example:
A person diagnosed with brain cancer in 2007 who had had a phone subscription since 1996 was, by the design of the Danish Cohort study, an unexposed person who got cancer. In reality, this was a person exposed for 11 years who got cancer.
The control sample of the Danish Cohort study is unbelievable rubbish! Pardon my expression, but nothing better comes to mind. The authors simply cannot know what it contains. It is a mix of unexposed and exposed persons.
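How much such a contaminated control group can hide a real effect is easy to show with simple arithmetic. The numbers below are purely illustrative assumptions of mine (an assumed true relative risk and an assumed fraction of truly exposed people sitting in the "unexposed" group); they are not estimates from the study.

```python
# Illustrative assumptions only -- not values from the Danish Cohort.
BASE_RATE = 10.0  # cancer cases per 100 000 person-years, truly unexposed
TRUE_RR = 1.5     # assumed true relative risk from heavy phone use
CONTAMINATION = 0.25  # assumed share of the "unexposed" group who were
                      # actually exposed (corporate users, post-1995 subscribers)

exposed_rate = BASE_RATE * TRUE_RR
# the "unexposed" comparison group mixes truly unexposed and exposed people
control_rate = (1 - CONTAMINATION) * BASE_RATE + CONTAMINATION * exposed_rate
observed_rr = exposed_rate / control_rate

print(f"true RR = {TRUE_RR}, observed RR = {observed_rr:.2f}")
```

Because the exposed people hidden in the control group raise its cancer rate, the observed relative risk shrinks from the assumed 1.5 toward 1.0 (to about 1.33 with these numbers); the worse the contamination, the closer to "no effect" the study will report, whatever the truth is.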
The Danish Cohort study should never have been published, and since it was, those responsible for its publication should explain themselves, and the study should be immediately withdrawn, with apologies.
But the reality is (still) different. The "botched" Danish Cohort study, in spite of the numerous criticisms published by the same journal as rapid responses, is still a "valid" peer-reviewed article. It is as if epidemiology had its own set of scientific rules…
But in the real world it is not uncommon for flawed studies to be retracted by either journal editors or the authors themselves. A most extreme case happened just this September, when a journal editor took the blame and responsibility for the publication of a flawed study and resigned from his Editor-in-Chief position.
So, what is our problem with mobile phone radiation and brain cancer? What is causing confusion, making a mess, and hampering future research by draining funds and delivering worthless results?
It’s the epidemiology, stupid!