I just finished reading the 2013 SCENIHR Report and was left with an overwhelming feeling of desperation. The evaluation of the scientific evidence is being distorted, and SCENIHR provides an aura of “legitimacy” to this distortion.
The SCENIHR report runs over 200 pages, and it is not possible to mention all of its problems in this short blog post. Here are a few of the graver problems with the SCENIHR report.
Membership of the working group
I do not know what procedure was used to assemble the membership of the SCENIHR working group.
What is clearly visible is that the vast majority of scientists involved in the working group are known for the opinion that the current scientific evidence shows that RF exposures do not cause detrimental effects to human health. Such a composition of the working group is, by itself, a reason for serious concern about possible bias in the evaluation of the scientific evidence.
Chairmanship of the working group analyzing biological and health effects was put in the hands of a dosimetry expert. This is not an optimal arrangement. A chairman lacking in-depth expertise in the analysis of biological effects must rely entirely on the opinions of others, which substantially limits the chairman’s guiding role in the evaluation proceedings.
In the past, the only committee/working group to which experts expressing diverse opinions were invited was the 2011 IARC evaluation of the carcinogenicity of cell phone radiation. The vast majority of well-known expert groups assembled either before the 2011 IARC evaluation or after it consist of experts holding the same opinions. Experts with diverse opinions are excluded. This applies, e.g., to WHO, ICNIRP, ICES, AGNIR, ICEMS, and BioInitiative, just to name a few.
This is a very bad trend, leading to distortion of the scientific evidence by all of the above-mentioned groups. There is a need for scientific debate in which diverse opinions clash and spar over the validity of the scientific evidence.
“Innocent” cherry picking
Scientific evidence is being distorted by a new form of old-fashioned cherry picking. I call it “innocent” because it has the appearance of a legitimate selection of the evaluated evidence, imposed by the time-frame of the analysis. This “innocent” cherry picking has happened, is happening, and will happen, not only in the SCENIHR report but also in numerous other reports.
This new kind of “innocent” cherry picking occurs when preparing updates to old reports. Commonly, the old report’s data is not included in the new report’s overall analysis. It is as if the old report’s science and the new report’s science were analyzed in a vacuum, in separation from each other, and not as parts of the same body of scientific evidence.
SCENIHR has done exactly this. Its previous report, in 2009, analyzed the scientific evidence available up to 2008. The present report analyzes evidence gathered between 2008 and 2013. However, there is no attempt to provide an overall analysis. The new studies are analyzed in separation from the previous research, which seriously distorts the overall picture and the conclusions. Evidence provided by a few studies published before 2008 might be considered insufficient to draw any conclusion. A few more studies published after 2008, analyzed in disconnection from the studies published before 2008, might likewise be insufficient. This is the approach SCENIHR takes towards in vitro studies. Combining old and new studies would say more, but the SCENIHR report seems not to combine the old with the new.
Another cherry-picking problem is the omission of certain studies (page 10, lines 14-17):
“…Not all identified studies are necessarily included in the opinion. On the contrary, a main task is to evaluate and assess the articles and the scientific weight that is to be given to each of them. Only studies that are considered relevant for the task are commented upon in the opinion…”.
At first glance this statement seems justified, but a closer look… a closer look is not possible.
We know only which articles were used by the working group, because they are listed as references. When an article is not on the list, we do not know whether the working group was unaware of it or excluded it as irrelevant.
For reports such as SCENIHR’s, which will directly affect policy decisions, it should be obligatory to publish, in an annex, the list of articles that were examined but excluded, together with the reasons for their exclusion. This is of paramount importance when judging the accuracy of the report.
Unfortunately, this is a problem of all such reports, not only the SCENIHR report. The authors say that they used only relevant studies, but they do not say which studies they looked at or for what particular reasons certain studies were considered irrelevant. Listing the excluded studies in an appendix would go a long way towards demonstrating an unbiased approach. Furthermore, by comparing the lists of references used and excluded, one could easily determine whether the working group was unaware of, and thus omitted, any relevant research.
Irrelevant (?) dosimetry
Accurate dosimetry is an integral part of a good-quality RF study. Sometimes the description of the dosimetry is poor, and readers cannot be certain how the exposures were performed. In such cases the working group excluded the study from the analysis. While such an approach is acceptable for researchers writing a review article for a peer-reviewed journal, it should not be permitted for groups writing reports of such importance as SCENIHR’s.
There is a simple solution to an inadequate description of the dosimetry: the working group should be obliged to go the “extra mile”. It is always possible to contact the authors of a study and ask them for clarification of the exposure conditions. Only if the authors are unresponsive, or provide an unsatisfactory explanation, should the study be discarded.
Another dosimetry problem is the use of regular cell phones to expose cells or animals. Studies in which the exposures were provided by regular cell phones were considered by SCENIHR to be irrelevant (page 10, lines 23-26):
“…In the last few years there have been a number of in vivo and in vitro studies dealing with exposure directly from a mobile phone. In almost all cases these experiments are without relevance, since they do not describe the factual exposure…”.
Contrary to SCENIHR’s opinion, studies using regular cell phones do use the factual exposure. Let us not forget that all epidemiological studies rely on exactly such exposures: a person exposed to the radiation emitted by a regular phone.
I must admit that I too, in the past, considered in vitro and in vivo studies using exposures from regular cell phones to be inadequate. I changed my opinion after listening to the arguments of the dosimetry guru, Niels Kuster, at the IARC meeting in Lyon in 2011. He argued that positive results should be considered and not automatically disregarded.
What is problematic about studies using a regular cell phone for exposures is the impossibility of recreating the same exposure conditions, whether in a new series of experiments in the same laboratory or in a different laboratory. On the other hand, we can be certain that using a regular cell phone will not cause any thermal effects. This means that if any effects are observed in such an experiment, they are certainly of a non-thermal nature. Therefore, no positive result from studies using an actual cell phone should be automatically discarded; it should be considered in the context of similar findings in other studies.
Misleading epidemiology and/or epidemiologists
Epidemiological studies executed after the 2009 SCENIHR report are considered, in the new report, to provide evidence against any link between RF and cancer (page 4, lines 42-43):
“…Based on the most recent cohort and incidence time trend studies, it appears that the evidence for an increased risk of glioma became weaker…”
However, does the mentioned cohort study really weaken the evidence? The quality criteria for including an epidemiological study in the analysis are stated by SCENIHR as follows (page 10, lines 29-34):
“…The minimum requirement for exposure assessment for an epidemiological study to be informative is to include reasonably accurate individual exposure characterization over a relevant period of time capturing all major sources of exposure for the pertinent part of the body. Valid exposure assessment allows a researcher to distinguish between sub-groups of the population with contrasting exposure levels.…”
The “most recent cohort” referred to in the first quote is the update of the Danish Cohort. This study clearly does not meet the validity criteria presented in the second quote, because of its completely unreliable dosimetry and the contamination, of undetermined size, of the control group with highly exposed subjects (link).
I warned earlier that, if not retracted because of its inadequate scientific quality, the updated Danish Cohort study would be used to mislead and to falsely claim that it proves the lack of a causal link between cell phone radiation and brain cancer.
The Danish Cohort does not prove it, because exposed and unexposed persons are mixed up in the same groups. SCENIHR perpetuates the false impression that the Danish Cohort proves something it does not.
Negative attitude towards the positive findings
In my first blog post, titled “From China with Love” (then published on the STUK website and later moved to the current BRHP site), I made the following comment concerning attitudes towards positive and negative results observed in bioelectromagnetics studies:
“…It is not only my personal observation that the negative studies seem to get accepted as such, without too much scrutiny, whereas the positive studies are examined in every detail to determine why the result is positive. Hence, the positive studies are not treated equally with the negative ones, even though also the negative studies might include erroneous results or interpretations. Moreover, only the positive studies are demanded to be replicated before they can be accepted as valid evidence. This replication requirement is of course the correct approach, but it should be applied, at least to some degree, also to negative studies. At least the negative studies that are considered as providing the crucial evidence of no-effect should be replicated. An error in study design, execution, data analysis or interpretation might lead not only to positive but also to negative result. Furthermore, many of the positive studies are not even being attempted to be replicated and of course negative studies are not replicated at all. However, if the replication of the positive study is attempted then, commonly, the protocol of the replication study has so many modifications, introduced to improve the quality, that the outcome of such study is difficult, if not impossible, to compare with the original one. As often happens, the outcome of the so-called replication study differs from that of the original study. However, the failed replication might be either because of incorrect (unreliable) result of the original study or because of the modifications introduced in replication study. Usually, this question remains unanswered but the final result is claimed to be – in summary, the original study has not been replicated (= is not valid evidence).…”
The authors of the SCENIHR report seem to follow the same line of thinking: negative results are obvious and correct, whereas positive results must be scrutinized because they are likely false positives. This statement in the SCENIHR report confirms the “negative attitude towards positive findings” (page 21, lines 33-34):
“…some studies use multiple end-points, which are equally prone to false positive results…”
Here is another quote, from my letter to the editors of the journal Radiation Research, concerning multiple-endpoint examination and the probability of false positive effects:
“…the observation that the number of affected genes was lower than the number of expected false positives does not automatically mean that every gene appearing as affected by mobile phone radiation is a false positive. This calculation (the number of expected false positives) shows the probability that the affected genes might be false positives, but it does not mean that all of them are indeed false positives.…”
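The arithmetic behind this argument can be sketched in a few lines. All numbers below are hypothetical, chosen only for illustration; they are not taken from any of the studies discussed:

```python
# Hypothetical illustration of the expected-false-positives argument.
ALPHA = 0.05      # per-test significance level (assumed)
N_TESTS = 1000    # number of genes tested in one experiment (assumed)

# With no true effects at all, chance alone is expected to flag
# roughly ALPHA * N_TESTS genes as "significant".
expected_false_positives = ALPHA * N_TESTS  # 0.05 * 1000 = 50.0

# Suppose such an experiment reports 40 "affected" genes.
observed_hits = 40

# 40 is below 50, so chance alone COULD account for all of them...
assert observed_hits < expected_false_positives

# ...but the expected-false-positive count is a property of the whole
# batch of tests, not a verdict on any individual gene. A genuinely
# affected gene and a chance fluctuation produce the same kind of
# small p-value, so this calculation cannot identify WHICH of the
# 40 hits, if any, are false positives.
print(expected_false_positives, observed_hits)
```

In other words, the comparison of observed hits against the expected false-positive count bounds what chance could explain; it does not demonstrate that chance does explain every individual hit, which is precisely the distinction the quote draws.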
There is too much of an “automatic” approach in the SCENIHR report: an approach that discards studies without going the extra mile to clarify whether the problem is real or not.
Grave error concerning the evidence from animal studies
It is puzzling how a group of experienced scientists, such as SCENIHR’s, can make the following statement concerning animal studies:
“…A considerable number of well-performed in vivo studies using a wide variety of animal models have been mostly negative in outcome. These studies are considered to provide strong evidence for the absence of an effect.…”
Such reasoning applies to studies in which animals are exposed to high overdoses of a chemical or of radiation. If a high overdose does not cause any harm to the animal, it can be assumed that humans will not be affected either. However, this logic does not apply to low doses of chemicals or radiation, where the responses of animals and humans may differ.
In RF research, animals are exposed to radiation doses similar to those that humans encounter. It is not possible to expose animals to high overdoses of RF radiation, because that would induce harmful thermal effects. The lack of response of animals to low-dose exposures does not prove that humans are safe; it does not show at all whether humans will or will not respond. Many of the chemicals tested in animal studies, if used at the low doses that humans encounter in real life, would not cause effects either. The fact that an animal does not react to a low-dose exposure does not prove that humans will respond the same way. This is a major error in the SCENIHR report’s interpretation of the meaning of the results obtained in animal studies. It is a scientifically false claim.
These are just a few of the many problems with the SCENIHR report. I eagerly await the outcome of the public consultation and the impact it will have on the final version of the SCENIHR report.