This is the first guest blog on BRHP. The opinions expressed in it are those of Mike Repacholi himself. Publication of these opinions in BRHP does not mean that BRHP agrees with or endorses them. However, publication of this and subsequent guest blogs is an attempt to start an open debate and a free exchange of opinions on RF and health.
Guest Blog by Dr Mike Repacholi, Visiting Professor, University of Rome, Italy
In this Guest Blog I would like to address a few issues related to assessing health risk.
1. The relative importance of epidemiology studies
The latest review of RF fields, HPA (2012), has set the scene for the importance of in vitro studies by stating "…a cellular change does not imply an effect in the whole organism, and neither a change at the cellular level nor a change of the whole organism necessarily results in a health effect." So we cannot extrapolate effects found in cells to whole organisms. The advantage of in vitro studies is that they allow effects to be found in a simplified model, but these effects must then be investigated in vivo to determine whether they occur in the more complex whole organism. Further, in vitro models allow mechanisms of interaction to be investigated, which should then also be investigated in vivo. HPA gives a reason for this: "… the main disadvantage is that isolated cells do not experience the many interactions that would normally take place in a whole organism and hence their response to stimuli is not necessarily the same as it would be in an experimental animal or human." This is why public health authorities rely on epidemiological studies to assess health risks. They also rely on the results of animal studies to support the epidemiology studies. In fact, IARC uses as a guide the principle that if cancer is found in two different animal species, then that cancer most likely occurs in humans.
We all know the many problems associated with epidemiological studies. They are prone to many biases and have serious difficulty assessing a person's exposure, especially to EMF. We are all living in a sea of EMF, so it is difficult to distinguish between the exposed and control groups. Because of this, my concern is that there is an over-reliance on epidemiology studies. Given that animal studies can be conducted with high dosimetric precision, public health authorities might do well to use the following guide: if epidemiology studies show an effect but the animal studies overwhelmingly do not, then the assessment should be that there is a problem with the epidemiology studies. This is the case with RF fields, recently classified by IARC as "possibly carcinogenic to humans". In my opinion, the definition that IARC uses for this classification is flawed.
2. Weight of evidence
There is widespread misunderstanding about the "weight of evidence" approach when used for health risk assessments. Weight of evidence is NOT counting the number of positive and negative studies and then concluding that there are more positive study results than negative, or vice versa. A true weight of evidence approach requires that each study, both positive and negative, be evaluated for quality, similar to what was done in the systematic review of head cancers from cell phone use (Repacholi et al 2012). Quality assessment criteria for all study types (see Repacholi et al 2012; online appendix) are well known, and studies can be given more or less weight: those studies whose experiments were conducted correctly according to these criteria are given more weight, or believability in the outcome, than those deemed low quality. All "blue-ribbon" reviews use this approach. WHO has used it for over 50 years, and it is a very well accepted, tried and true method for assessing health risks from any biological, chemical or physical agent.
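The distinction drawn above, between simply counting positive and negative studies and weighting each study by its assessed quality, can be sketched in code. This is purely an illustrative toy, not taken from the blog or from any WHO protocol: the study labels, outcomes and quality scores below are hypothetical, and a real assessment applies multi-criteria worksheets, not a single number.

```python
# Toy contrast between naive vote-counting and a quality-weighted
# "weight of evidence" aggregation. All data here are hypothetical.

studies = [
    # (study id, outcome: +1 = effect found, -1 = no effect, quality score 0-1)
    ("A", +1, 0.2),  # low-quality positive study
    ("B", +1, 0.3),
    ("C", +1, 0.2),
    ("D", -1, 0.9),  # high-quality negative study
    ("E", -1, 0.8),
]

def vote_count(studies):
    """Naive approach: count positives minus negatives, ignoring quality."""
    return sum(outcome for _, outcome, _ in studies)

def weighted_evidence(studies):
    """Weight each outcome by its quality score and normalise."""
    total_quality = sum(q for _, _, q in studies)
    return sum(outcome * q for _, outcome, q in studies) / total_quality

print(vote_count(studies))                    # 1  -> "more positive studies"
print(round(weighted_evidence(studies), 2))   # -0.42 -> evidence leans negative
```

The point of the toy is that three low-quality positive studies outnumber two high-quality negative ones, yet the quality-weighted score goes the other way, which is exactly why vote-counting is not a weight of evidence assessment.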
ICNIRP works closely with WHO, since it is a formally recognised NGO of WHO for NIR. As part of this relationship, ICNIRP uses exactly the same weight of evidence approach as WHO and other leading national public health authorities in the NIR field when conducting its literature reviews and assessing the scientific evidence on which to base its guidelines. If one assesses the quality of the studies referenced in the BioInitiative report, it becomes very obvious that almost all of them fall into the low-quality category and have not been replicated. It is very apparent that the authors of the BioInitiative report do not quote leading public health authorities such as WHO or the HPA in their review, because they only want to summarise studies that support their opinion and omit studies that do not. With this approach there is no basis for discussion between ICNIRP and the BioInitiative group. ICNIRP has to maintain high quality standards in its approach to EMF protection to keep its very high credibility with the national and international authorities who use its guidelines and recommendations.
HPA (2012) Health effects from radiofrequency electromagnetic fields. Report of the independent Advisory Group on Non-ionising Radiation. Download from: http://www.hpa.org.uk/webw/HPAweb&HPAwebStandard/HPAweb_C/1317133826368
ICNIRP. See their web site at: http://www.icnirp.de
Repacholi et al (2012) Systematic review of wireless phone use and brain cancer and other head tumors. Bioelectromagnetics 33: 187-206.
Repacholi et al (2012) Online appendix. Protocol by which all studies were assessed: http://onlinelibrary.wiley.com/store/10.1002/bem.20716/asset/supinfo/bem_20716_sm_SupplInfData.doc?v=1&s=e9450ec8b002bf4a7fe126bd226c7602802b0ee6 Worksheets used for assessing study quality: http://onlinelibrary.wiley.com/store/10.1002/bem.20716/asset/supinfo/BEM_20716_sm_Suppl-Appendix-Figs.pdf?v=1&s=8ffe056df9570710bc7266c16ed2e836482686da