Below are my responses to the comments made by Mike Repacholi in his guest blog. For easy reading, I quote Mike’s text…
Comments on the relative importance of epidemiology studies
“…The latest review of RF fields, HPA (2012), has set the scene for the importance of in vitro studies by stating “…a cellular change does not imply an effect in the whole organism, and neither a change at the cellular level nor a change of the whole organism necessarily results in a health effect.” So we cannot extrapolate effects found in cells to whole organisms…”
It is an overstatement that HPA 2012 set the stage. Any serious scientist has always known that the results of in vitro studies cannot be applied directly to whole organisms.
What is missing from this comment of Mike’s is any mention that HPA 2012 knowingly omitted numerous in vitro studies published during the period it reviewed. HPA did not give reasons for omitting these studies. It would be interesting to get an explanation from HPA.
“…The advantage of in vitro studies is that they allow effects in a simplified model to be found, but then these effects must be investigated in vivo to determine whether they occur in the more complex whole organism. Further, the in vitro models allow mechanisms of interaction to be investigated that then should also be investigated in vivo. HPA gives a reason for this: “… the main disadvantage is that isolated cells do not experience the many interactions that would normally take place in a whole organism and hence their response to stimuli is not necessarily the same as it would be in an experimental animal or human.”…”
These statements are obvious ones. HPA did not “discover America” here, though it may be that not all non-scientists are aware of this. Still, I would not credit HPA with such a “revelation”.
“…This is why public health authorities rely on epidemiological studies to assess health risks. They also rely on the results of animal studies to support the epidemiology studies. In fact IARC uses as a guide, if cancer is found in two different animal species then that cancer most likely occurs in humans…”
Indeed it is so. However, there are two major problems with this procedure.
- Problem #1: epidemiological studies have low sensitivity and are prone to a variety of biases. It is like an animal study performed on rats taken from city sewers. We have no other choice for epidemiological studies, but treating them as the most important evidence when classifying human health effects is simply wrong.
- Problem #2: if cancer is found in two different species, it is possible to suspect that humans might be vulnerable. However, when a carcinogen has no effects on rats or mice, does it mean that humans are safe? Interpreters of cell phone radiation animal studies seem to think so when they bring up studies where life-time exposure of rats or mice did not cause significant health effects such as embryonic deformation, cancer or mortality. But this does not mean that humans are safe. We have the same genes as mice and rats, but these genes might work differently in humans and in animals. The best example is the family of ras genes, known to be involved in carcinogenesis: different ras-family genes regulate carcinogenesis in humans and in mice. This means that some tumors that mice develop, humans will not, and vice versa. Therefore, if an animal does not respond to a carcinogen, it does not automatically mean that humans are safe.
“…We all know the many problems associated with epidemiological studies. They are prone to many biases and have serious problems assessing a person’s exposure, especially to EMF. We are all living in a sea of EMF so it is difficult to distinguish between the exposed and control groups. Because of this, my concern is that there is an over-reliance on epidemiology studies…”
I absolutely agree with Mike that, in evaluating human health hazards, different organizations, including IARC, over-rely on epidemiological evidence.
“…Given that animal studies can be conducted with high dosimetric precision public health authorities might do well to use the guide; if epidemiology studies show an effect but overwhelmingly the animal studies don’t, then the assessment should be that there is a problem with the epidemiology studies…”
I am not so certain about the dosimetric precision of animal studies. I think it is an overstatement, though I agree that their dosimetry is better than that of epidemiological studies.
However, I very strongly disagree with considering animal studies as the better guidance, especially in the situation where animals do not respond to a carcinogen and the carcinogen cannot be given in a high dose, as is the case for RF because of heating effects. Negative animal studies performed with low doses of a carcinogen do not provide any information about the possible human risk.
“…This is the case with RF fields recently being classified by IARC as “possibly carcinogenic to humans”. In my opinion the definition that IARC uses for this classification is flawed…”
I disagree that the IARC classification is flawed.
- Epidemiological studies suggested a possible increase in brain cancer among long-term avid users. Both the Hardell and Interphone studies showed such a trend, though the size of the risk increase differed.
- Animal studies where RF was used alone showed no effect, but this does not mean that humans are safe.
- Animal studies where RF was used in combination with another carcinogen indicated a possibility of additive or synergistic effects; RF seemed to potentiate the effects of other carcinogens.
- Although neither the epidemiological nor the animal studies provided reliable proof of harm, they provided sufficiently important “red flags”, and such “red flags” could not be ignored. This was reflected in the voting: of the 30 members of the IARC Working Group, 28 voted for the 2B classification, including all of the ICNIRP members who served on the Working Group. Calling this a “flaw” is not correct.
Comments on the weight of evidence
“…There is widespread misunderstanding about the “weight of evidence” approach when used for health risk assessments. Weight of evidence is NOT counting the number of positive and negative studies and then concluding there are more positive study results than negative, or vice versa. A true weight of evidence approach requires that each study, both positive and negative, be evaluated for quality, similar to what was used in the systematic review of head cancers from cell phone use…”
I agree that a true weight-of-evidence approach should evaluate both positive and negative studies. However, this does not happen in practice. Even ICNIRP does not provide a quality evaluation of the negative studies but simply accepts them “automatically”.
In my first ever blog “From China with Love” in 2009 I wrote the following, and I still think the same way:
“…Another issue, mentioned at the conference, was the “weight of evidence”. To me this term is abused by those who wish to disregard scientific studies showing that mobile phone radiation can induce biological effects. We continuously hear that thousands of studies have been done on mobile phone radiation. However, this number is grossly exaggerated because it refers to research at all microwave frequencies. For example, results obtained using the radiation frequency of microwave ovens might not necessarily be directly applicable to mobile-phone-emitted microwaves. There is still an ongoing discussion of whether it is possible to transpose the results of experiments done with one microwave frequency to other frequencies. To me, in order to be relevant, studies should be performed using actual mobile-phone-emitted microwaves. The number of such studies is available from the EMF-Portal database (http://www.emf-portal.de/), maintained by the Research Center for Bioelectromagnetic Interaction at the University Hospital of Aachen University in Germany. As of May 15th, 2009, this specialized database listed a total of 499 studies that explicitly investigated the biological and health effects of mobile phone-related microwave frequencies. Therefore, in my opinion, the number of executed studies is not sufficiently large to create a reliable basis for any conclusive statements about the existence or the absence of a health risk associated with the use of mobile phones. These 499 studies include studies that show no biological effects of mobile phone radiation as well as studies that show the induction of such effects. However, because the majority of the published studies (those thousands of articles covering all microwave frequencies) show no effect, it is commonly suggested that this “weight of evidence” supports the notion that there are no biological effects and no health risk.
This issue was also mentioned in a presentation in Hangzhou. One renowned scientist, C. K. Chou of Motorola, stated that the newly designed large animal study, about to start in the USA, is unlikely to have an impact on the science concerning mobile phone effects because of the “weight of evidence” provided by the earlier published studies. In short, this means that, in his opinion, even a well-designed, well-executed, state-of-the-art study with the best available radiation exposure dosimetry is not sufficient to cause any change in thinking about mobile phone radiation effects. Why? Because the earlier published studies, of which many were poorly designed or executed or had poor dosimetry, provide the “weight of evidence” against any effects. In the discussion period, my question to Dr. Chou was whether, in order to make any impact, we would need to produce another large number of new studies to overcome the already existing “weight of evidence”. I did not get any straight answer, just a defensive statement that the “weight of evidence” is a commonly used approach. Yes, it is commonly used, and commonly abused. A single well-done study is not enough, but a bunch of poor studies should not be enough either…”
“…Quality assessment criteria for all study types (See Repacholi et al 2011; online appendix) are well known and studies can be given more or less weight, where those studies that conducted experiments correctly according to these criteria are given more weight or believability in the outcome, than those deemed low quality. All “blue-ribbon” reviews use this approach. WHO has used this approach for over 50 years and it is a very well accepted, tried and true method for assessing health risks from any biological, chemical or physical agent…”
This is not the practice with negative studies. It is common practice to use large numbers of negative studies to “discredit” positive findings in other studies simply by the sheer volume of the evidence. The common phrase “the majority of studies show no effect” is used to support the no-effect opinion in “blue-ribbon” reviews. This is a really flawed practice in the RF area.
Comments on ICNIRP
“…ICNIRP works closely with WHO since it is a formally recognised NGO of WHO for NIR. As part of this relationship ICNIRP uses exactly the same weight of evidence approach as WHO and other leading national public health authorities in the NIR field when conducting their literature reviews and assessing the scientific evidence on which to base their guidelines…”
Indeed, this is the current reality. Is it good and reliable? I have serious doubts. I expressed them in my recent column in The Washington Times Communities, where I commented on the possible reasons for ICNIRP’s unwillingness to engage in debate with BioInitiative, and possibly with other entities holding an opposing view of the meaning of the current scientific evidence.
“…If one assesses the quality of studies referenced in the BioInitiative report it becomes very obvious they almost all fit into the low quality category that have not been replicated. It is very apparent that the authors of the BioInitiative report do not quote leading public health authorities such as WHO or the HPA in their review because they only want to summarise any study that supports their opinion and omit studies that don’t…”
I tend to disagree. Both ICNIRP and BioInitiative have weak points in their reviews: ICNIRP over-values negative studies regardless of their quality, and BioInitiative over-values positive studies regardless of their quality. This, however, does not mean that everything in the ICNIRP review or in the BioInitiative review is bad science. Whoever thinks so is oversimplifying, misleading and simply falsifying reality.
The comment reminds me of a slide from a presentation by Vijayalakshmi at the URSI meeting in New Delhi a few years ago. In her review of the genotoxicity evidence she showed a slide with all positive studies listed on one side of a balance and all negative studies on the other. The balance tipped towards the negative studies, and her comment was that all the good-quality studies were the negative ones and all the poor-quality studies were the positive ones. It was such an outrageous simplification that the chair of the session, Jim Lin, intervened to remind the presenter that such a presentation of the data is improper… So, let’s not say that the BioInitiative report is, overall, bad and of poor quality…
As Mark Elwood said at BEMS 2012, and as I tweeted immediately from the conference room:
BEMS in Brisbane: Mark Elwood’s review of WHO and BioInitiative reports 2007: both have valid thoughts and should not be easily dismissed
— Dariusz Leszczynski (@blogBRHP) June 21, 2012
“…With this approach there is no basis for discussion between ICNIRP and the BioInitiative group. ICNIRP has to maintain high quality standards in their approach to EMF protection to keep its very high credibility with national and international authorities who use their guidelines and recommendations…”
Mike, here you are absolutely wrong. Debate is necessary; dismissing BioInitiative is wrong. The same, of course, should be said of BioInitiative: refusal to debate is wrong.
Who will win? Lawyers, because concerned people will go to the courts and the validity of the science will be decided there. It will be a very sad development for real science.
The two studies funded by USAF are not repetitions of the experiment reported by Chou.
Both studies used different animals than Chou did (mammary tumor-prone mice instead of normal rats). Furthermore, the exposure in Frei et al., “Chronic exposure of cancer-prone mice to low-level 2450 MHz radiofrequency radiation,” Bioelectromagnetics, 1998, was done with continuous-wave (CW) radiation, which is drastically different from the pulsed, RADAR-like radiation used by Chou. The study by Toler, …, Frei et al., “Long-term, low level exposure of mice prone to mammary tumors to 435 MHz radiofrequency radiation,” Radiat. Res., used a much lower frequency and a somewhat different modulation. It also does not present or analyze in detail the total cancer rates, which are the most striking finding in the Chou paper, but rather concentrates on mammary cancer; only partial data and qualitative statements are presented on other cancer types.
Thus the two papers are not a repetition of the Chou experiment; they differ in the animals used, in the exposure type and in the analysis. No wonder the results are different.
It is a breach of any code of ethics to expose people to the same radiation that raised the risk of cancer among rats more than two-fold in the Chou experiments. Regretfully, the IEEE C95.1-2005 and ICNIRP standards still permit it. These standards need a major revision.
Read IEEE C95.1-2005, Section B.7.1.1 Long-term animal bio-assays. The references include the two studies funded by USAF, namely Toler et al. (R621) and Frei et al. (R637). There are several others, all worth your attention. Also, see Joe Elder’s 2010 summary report. Read it all, twice!
Good. The statistically significant results in the Chou paper can be explained only by a carcinogenic influence of the pulsed RF radiation, or by some error in the experiment, which is not very likely since the experiment looks to be of very high quality. It is not an outlier; the statistics show it is almost impossible to get those results by chance.
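The claim that such results are “almost impossible to get by chance” is something any reader can check with a short calculation. Below is a minimal sketch using a one-sided Fisher exact test on a 2×2 table of tumour counts; the counts used are purely illustrative, NOT the actual numbers from the Chou et al. paper:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test on a 2x2 table:
       exposed: a with tumour, b without; control: c with tumour, d without.
       Returns P(exposed tumours >= a) given the fixed table margins."""
    n1, n2 = a + b, c + d          # group sizes
    k = a + c                      # total tumour-bearing animals
    total = comb(n1 + n2, n1)      # ways to pick the exposed group
    # hypergeometric tail: probability of a result at least this extreme
    p = sum(comb(k, x) * comb(n1 + n2 - k, n1 - x)
            for x in range(a, min(k, n1) + 1)) / total
    return p

# Illustrative counts only (not the Chou et al. data):
# 18/100 tumours in exposed animals vs 5/100 in sham controls.
p = fisher_one_sided(18, 82, 5, 95)
print(f"one-sided p = {p:.4f}")   # well below 0.05 for these counts
```

With the real numbers from table 2 of the paper, the calculation is identical; only the four counts change.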
The IARC report cites the Chou paper and mentions no follow-up studies testing the same pulsed, RADAR-like exposure on the same type of rats. Such a repeated experiment should indeed be done. If you can provide links to such reports, it would be interesting to read them.
You do make a good point – sort of. This provocative finding is referred to in the discussion section of the Chou paper. As a result, the US Air Force funded two more studies specifically on this question. The follow-up studies did not show increased cancer. Furthermore, IEEE C95.1-2005 also contains a detailed analysis of the cancer issue. It is important to look at all of the evidence, rather than just at outliers and artifacts that support your own view.
In my opinion, the data in the last line of table 2 of the full paper, and also in table 4, indicate very clearly a cancer risk excess ratio greater than 2, caused solely by the microwave radiation. Furthermore, the paper discusses this and does not offer any alternative hypothesis that could explain those results.
I think that one nice thing about this forum is that the participants are quite capable of reading the full paper and evaluating the results presented there. I paste the link again for convenience and rest my case.
Click to access Chou-Guy_Rats_Bioelectro%20magaetics-1992.pdf
The conclusions are fully explained by CK Chou, and are indeed obvious from a detailed analysis of ALL the data. Only your chronic confirmation bias is preventing you from seeing this.
No, I did not consider contacting him or anyone else at HPA. Once a report is out, it is too late… I do not believe they would consider updating it.
Have you considered contacting Zenon Sienkiewicz regarding the HPA “missing” in vitro studies? He’s normally pretty reasonable in communications with me ….
The animal model does show a dramatic carcinogenic influence:
There is at least one report stating that microwave exposure increases cancer rates in normal rats dramatically, by a factor of 2 to 4 and with reliable statistics. See Chou CK, Guy AW, Kunz LL, et al. Long-term, low level microwave irradiation of rats. Bioelectromagnetics 1992; 13: 469–96. Amazingly, this is downplayed in the abstract of the paper; see the end of table 2 in the full paper, available for free at
Click to access Chou-Guy_Rats_Bioelectro%20magaetics-1992.pdf
I find it hard to understand Mike Repacholi’s statement about there being no known effects on animals.
The planet will lose, because of this technology, especially with the upcoming 8×8 MIMO (LTE), which will be the first real 4G network.
It has never been tested for biological effects, and the ICNIRP approach, “if it doesn’t barbecue you in 6 minutes, it is harmless”, is a sad joke!
More and more people will become sick, with EHS and the like. Just for some fast Internet access on the road? With the money put into those wireless networks, it would easily be possible to bring fiber to each and every house, delivering high-speed Internet without any radiation.
But it is mostly about the money, and there is a lot to earn in the wireless industry, while I see more and more people getting highly addicted to the technology. Recently, over a couple of kilometers, three cars nearly crashed into mine; with a little luck I was able to avoid a multiple crash! And guess what, all those drivers had their cell phones stuck to their brains!
Good catch! elp-ICNIRP yranoituacerP.
(Although, it is long past the point of talking about that anyway, more appropriate to speak now of things like avoidance, replacement, protection…)
Dr. Leszczynski, thank you very much for your detailed response to Mike Repacholi’s statements.
By Mr. Repacholi’s standard of routinely dismissing positive studies, there would be no need to regulate most of the IARC carcinogens. Following his advice means that, until the MAJORITY of animals (humans?) die of or get sick from a toxic substance, we shouldn’t worry and should continue to feed that substance to the ENTIRE population.
Nice spin on public health policy. He is spelling the Precautionary Principle backwards.