Below are my responses to comments made by Mike Repacholi in his guest blog. For easy reading, I am quoting Mike’s text…
Comments on the relative importance of epidemiology studies
“…The latest review of RF fields, HPA (2012), has set the scene for the importance of in vitro studies by stating “…a cellular change does not imply an effect in the whole organism, and neither a change at the cellular level nor a change of the whole organism necessarily results in a health effect.” So we cannot extrapolate effects found in cells to whole organisms…”
It is an overstatement that HPA 2012 set the stage. Any serious scientist has always known that the results of in vitro studies cannot be applied directly to whole organisms.
What is missing from this comment of Mike’s is that HPA 2012 knowingly omitted numerous in vitro studies published during the period it reviewed. HPA did not give reasons for omitting these studies. It would be interesting to get an explanation from HPA.
“…The advantage of in vitro studies is that they allow effects in a simplified model to be found, but then these effects must be investigated in vivo to determine whether they occur in the more complex whole organism. Further, the in vitro models allow mechanisms of interaction to be investigated that then should also be investigated in vivo. HPA gives a reason for this: “… the main disadvantage is that isolated cells do not experience the many interactions that would normally take place in a whole organism and hence their response to stimuli is not necessarily the same as it would be in an experimental animal or human.”…”
These statements are obvious. HPA did not “discover America” here, though it may be that not all non-scientists are aware of it. But I would not credit HPA with such a “revelation”.
“…This is why public health authorities rely on epidemiological studies to assess health risks. They also rely on the results of animal studies to support the epidemiology studies. In fact IARC uses as a guide, if cancer is found in two different animal species then that cancer most likely occurs in humans…”
Indeed it is so. However, there are two major problems with this procedure.
- Problem #1: epidemiological studies have low sensitivity and are prone to a variety of biases. It is like an animal study performed on rats taken from city sewers. We have no other choice for epidemiological studies, but designating them the most important evidence in classifying human health effects is simply wrong.
- Problem #2: if cancer is found in two different species, then it is possible to suspect that humans might be vulnerable. However, when a carcinogen has no effects on rats or mice, does it mean that humans are safe? Interpreters of cell phone radiation animal studies seem to think so when they bring up studies where life-time exposure of rats or mice did not cause significant health effects such as embryonic deformation, cancer or mortality. But this does not mean that humans are safe. We have the same genes as mice or rats, but these genes might work differently in humans than in animals. The best example is the family of ras genes, known to be involved in carcinogenesis: different ras-family genes regulate carcinogenesis in humans than in mice. It means that some tumors that mice develop, humans will not, and vice versa. Therefore, if an animal does not respond to a carcinogen, it does not automatically mean that humans are safe.
“…We all know the many problems associated with epidemiological studies. They are prone to many biases and have serious problems assessing a person’s exposure, especially to EMF. We are all living in a sea of EMF so it is difficult to distinguish between the exposed and control groups. Because of this, my concern is that there is an over-reliance on epidemiology studies…”
I absolutely agree with Mike that, in evaluating human health hazards, various organizations, including IARC, over-rely on epidemiological evidence.
“…Given that animal studies can be conducted with high dosimetric precision public health authorities might do well to use the guide; if epidemiology studies show an effect but overwhelmingly the animal studies don’t, then the assessment should be that there is a problem with the epidemiology studies…”
I am not so certain about the dosimetric precision of animal studies; I think it is overstated. Though I agree that the dosimetry is better than in epidemiological studies.
However, I very strongly disagree with treating animal studies as better guidance, especially in the situation where animals do not respond to a carcinogen and the carcinogen cannot be given at a high dose, as RF cannot be due to heating effects. Negative animal studies performed with low doses of a carcinogen do not provide any information about the possible human risk.
“…This is the case with RF fields recently being classified by IARC as “possibly carcinogenic to humans”. In my opinion the definition that IARC uses for this classification is flawed…”
I disagree that the IARC classification is flawed.
- Epidemiological studies suggested a possible increase in brain cancer among long-term avid users. Both Hardell and Interphone showed such a trend, though the size of the risk increase differed.
- Animal studies where RF was used alone showed no effect, but this does not mean that humans are safe.
- Animal studies where RF was used in addition to another carcinogen indicated the possibility of additive or synergistic effects; RF seemed to potentiate the effects of other carcinogens.
- Although neither epidemiological nor animal studies provided reliable proof of harm, these studies provided sufficiently important “red flags” that could not be ignored. This was reflected in the voting: 28 out of 30 members of the IARC Working Group voted for the 2B classification, including all ICNIRP members who served on the Working Group. Calling it a “flaw” is not correct.
Comments on the weight of evidence
“…There is widespread misunderstanding about the “weight of evidence” approach when used for health risk assessments. Weight of evidence is NOT counting the number of positive and negative studies and then concluding there are more positive study results than negative, or vice versa. A true weight of evidence approach requires that each study, both positive and negative, be evaluated for quality, similar to what was used in the systematic review of head cancers from cell phone use…”
I agree that the true weight of evidence should evaluate both positive and negative studies. However, this does not happen in practice. Even ICNIRP does not evaluate the quality of negative studies but simply accepts them “automatically”.
In my first-ever blog post, “From China with Love”, in 2009, I wrote the following, and I still think the same way:
“…Another issue mentioned at the conference was the “weight of evidence”. To me this term is abused by those who wish to disregard scientific studies showing that mobile phone radiation can induce biological effects. We continuously hear that thousands of studies have been done on mobile phone radiation. However, this number is grossly exaggerated because it refers to research at all microwave frequencies. For example, results obtained using the radiation frequency of microwave ovens might not necessarily be directly applicable to mobile phone-emitted microwaves. There is still an ongoing discussion whether it is possible to transpose results of experiments done with one frequency of microwaves to other frequencies. To me, in order to be relevant, the studies should be performed using actual mobile phone-emitted microwaves. The number of such studies is available from the EMF-Portal database (http://www.emf-portal.de/), maintained by the Research Center for Bioelectromagnetic Interaction at the University Hospital of the Aachen University in Germany. This specialized database listed, as of May 15th, 2009, a total of 499 studies that explicitly investigated the biological and health effects of mobile phone-related microwave frequencies. Therefore, in my opinion, the number of executed studies is not sufficiently large to create a reliable basis for any conclusive statements about the existence or absence of a health risk associated with the use of mobile phones. These 499 studies include studies that show no biological effects of mobile phone radiation but also studies that show induction of such effects. However, because the majority of the published studies (those thousands of articles covering all microwave frequencies) show no effect, it is commonly suggested that this “weight of evidence” supports the notion that there are no biological effects and no health risk.
This issue was also mentioned in a presentation in Hangzhou. One renowned scientist, C. K. Chou of Motorola, had stated that the newly designed large animal study, about to start in the USA, is unlikely to have an impact on the science concerning mobile phone effects because of the “weight of evidence” provided by the earlier published studies. In short it means that, in his opinion, even a well-designed, well-executed, state-of-the-art study with the best available radiation exposure dosimetry is not sufficient to cause any change in thinking about mobile phone radiation effects. Why? Because the earlier published studies, of which many were poorly designed or executed or had poor dosimetry, provide a “weight of evidence” against any effects. In the discussion period, my question to Dr. Chou was whether, in order to make any impact, we need to produce another large number of new studies to overcome the already existing “weight of evidence”. I did not get any straight answer, just a defensive statement that the “weight of evidence” is a commonly used approach. Yes, it is commonly used and commonly abused. A single well-done study is not enough, but a bunch of poor studies should not be enough either…”
“…Quality assessment criteria for all study types (See Repacholi et al 2011; online appendix) are well known and studies can be given more or less weight, where those studies that conducted experiments correctly according to these criteria are given more weight or believability in the outcome, than those deemed low quality. All “blue-ribbon” reviews use this approach. WHO has used this approach for over 50 years and it is a very well accepted, tried and true method for assessing health risks from any biological, chemical or physical agent…”
This is not the practice with negative studies. And it is common practice that large numbers of negative studies are used to “discredit” any positive findings in other studies simply by the sheer size of the evidence. It is a common phrase – “the majority of studies show no effect” – that is used to support the no-effect opinion in “blue-ribbon” reviews. This is a really flawed practice in the RF area.
Comments on ICNIRP
“…ICNIRP works closely with WHO since it is a formally recognised NGO of WHO for NIR. As part of this relationship ICNIRP uses exactly the same weight of evidence approach as WHO and other leading national public health authorities in the NIR field when conducting their literature reviews and assessing the scientific evidence on which to base their guidelines…”
Indeed, this is the current reality. Is it good and reliable? I have serious doubts. I expressed them in my recent column in The Washington Times Communities, where I commented on the possible reasons for ICNIRP’s unwillingness to engage in debate with BioInitiative, and possibly with other entities holding an opposing view on the meaning of the current scientific evidence.
“…If one assesses the quality of studies referenced in the BioInitiative report it becomes very obvious they almost all fit into the low quality category that have not been replicated. It is very apparent that the authors of the BioInitiative report do not quote leading public health authorities such as WHO or the HPA in their review because they only want to summarise any study that supports their opinion and omit studies that don’t…”
I tend to disagree. Both ICNIRP and BioInitiative have weak points in their reviews. ICNIRP over-values negative studies independently of their quality; BioInitiative over-values positive studies independently of their quality. This, however, does not mean that everything in the ICNIRP review or in the BioInitiative review is bad science. Whoever thinks so is oversimplifying, misleading and simply falsifying reality.
The comment reminds me of a slide from a presentation by Vijayalakshmi at the URSI meeting in New Delhi a few years ago. In her review of the genotoxicity evidence she had a slide where all positive studies were listed on one side of a balance and all negative studies on the other. The balance was tipped towards the negative studies. The comment was that all the good-quality studies were the negative ones and all the poor-quality studies were the positive ones. It was an outrageous simplification that caused the chair of the session, Jim Lin, to intervene and remind the presenter that such a presentation of the data is improper… So, let’s not say that the BioInitiative report overall is bad and of poor quality…
As Mark Elwood said at BEMS 2012, and as I tweeted immediately from the conference room:
BEMS in Brisbane: Mark Elwood’s review of WHO and BioInitiative reports 2007: both have valid thoughts and should not be easily dismissed
— Dariusz Leszczynski (@blogBRHP) June 21, 2012
“…With this approach there is no basis for discussion between ICNIRP and the BioInitiative group. ICNIRP has to maintain high quality standards in their approach to EMF protection to keep its very high credibility with national and international authorities who use their guidelines and recommendations…”
Mike, here you are absolutely wrong. Debate is necessary, and dismissing BioInitiative outright is wrong. The same, of course, should be said of BioInitiative itself: refusal to debate is wrong.
Who will win? Lawyers, because concerned people will go to the courts and the validity of the science will be decided there. It will be a very sad development for real science.