Below is the next in a series of Guest Blogs on BRHP. The opinions expressed in this Guest Blog are of Bardo Frings himself. Publication of these opinions in BRHP does not imply that BRHP automatically agrees with or endorses these opinions. Publication of this, and other guest blogs, facilitates an open debate and free exchange of opinions on wireless technology and health.
Scientific writings of Bardo Frings are available on his website, including two papers he wrote on radio-frequency electromagnetic fields and health research while studying at Leiden University.
[DL: One can wonder why the Health Council of the Netherlands is so hesitant to discuss the possibility of errors in its reports. Read also: Significant discrepancy of opinions on 5G and health between ICNIRP and the Health Council of the Netherlands]
Some Occurrences of Curious Errors in Reports on Mobile Phones and Cancer by the Health Council of the Netherlands
by Bardo Frings
A couple of years ago, I re-analyzed the first three reports from the Health Council of the Netherlands on mobile phones and cancer. To my surprise at the time, these reports, especially the first one published in 2013, appear to contain numerous errors and inaccuracies.
In light of the newly published opinion of the Health Council of the Netherlands on 5G, these findings should be of interest again.
Perhaps entirely coincidentally, most of the erroneous statements I came across were directed at discrediting research from the Hardell group and related studies, and are subsequently used to put less weight on these studies in the final conclusions of the report. To quote:
‘In summary, there is doubt on the internal and external consistency of the Hardell data on account of (1) the increased risk observed already with very short usage times; (2) the unusually high response rates in the controls; and (3) the increased risks observed for cordless phone use, again in some cases for very short usage time. For these reasons, in combination with the lower numbers of subjects, the Committee has given the Hardell et al. studies less weight than the INTERPHONE studies in the overall analysis and conclusions.’ (Health Council of the Netherlands, 2013, p. 108).
These points are addressed in more detail throughout this report (and later reports) of the Health Council. However, none of the arguments appears to hold much truth, if any. I will break them down one by one:
- According to the first report (Health Council of the Netherlands, 2013, pp. 33–34) and the third report (Health Council of the Netherlands, 2016, p. 31), there is no explanation for the early increases in risk of brain tumors with cell phone use observed in the studies of Hardell et al. However, the Hardell group has indicated in several papers that these early increases in risk could be caused by promotion effects. For example: ‘The shortened latency period may be consistent with a tumor promoting effect from microwaves’ (Hardell et al., 2003). For those unfamiliar with promotion effects: this means that mobile phone radiation may accelerate the growth of pre-existing tumors, causing these tumors to be detected earlier. Despite these explanations, promotion effects are never considered in the Health Council’s discussion of the Hardell papers.
- The criticism of response rates is likewise not properly substantiated. The Council notes that the response rates reported by Hardell et al. – ranging between 85% and 91% for cases and 84% and 92% for controls – are unusually high. To support this observation, the Council picks out four other Swedish studies with comparable designs from the same period as the Hardell studies, showing lower response rates (Health Council of the Netherlands, 2013, p. 65). However, these are four hand-picked studies, which might not give a clear overall picture of the trends in Swedish case-control studies. I therefore did a short search myself and quickly found an extensive review paper discussing international trends in response rates. This study focused on labor force surveys and found overall response rates for Sweden of 94% in 1983, declining to 87% in 1996 (de Heer, 1999). These rates are not far off from the response rates Hardell et al. achieved among controls, although they might not be directly comparable due to possible differences between the types of studies. Another study cites response rates of 95–97% for the National Health Interview Survey (NHIS) in the US from the 1960s through the 1980s; since then the NHIS has seen response rates decline, with 91.8% reported in 1997 and 86.9% in 2004 (Galea et al., 2007). These rates are more in accord with those the Hardell group achieved, making the Council’s concerns harder to uphold. Additionally, the Council makes it seem as though the Hardell group did not follow common procedures in calculating their response rates. However, looking into the scientific discourse, it turns out response rates are a tricky topic on which there is little agreement: ‘The term ‘‘response rate’’ has become freighted with conflicting meaning, much of which is frequently incomprehensible to any but the most careful reader of a particular epidemiologic paper. 
Unfortunately, there is no such thing as a simple ‘‘response rate,’’ with different modalities of data collection embedding particular, but important, elements, each of which may contribute to the calculation of several ‘‘response rates’’ that may give us an indication about participation in a particular study.’ (Galea et al., 2007) Thus, it seems rather unfair to criticize a research group for applying one (in the case of Hardell et al. well-described) procedure to calculate response rates that differs from other studies (some of which, such as INTERPHONE, were set up later), when there is no common way to do this. To their credit, the third report of the Health Council acknowledges that the response rates of the newer Hardell studies are not remarkably high: ‘In several of the studies from other groups discussed in the current report, similar high response rates have been obtained as in the more recent Hardell studies. Therefore the Committee does not consider the response rates in these recent Hardell studies as unrealistically high.’ (Health Council of the Netherlands, 2016, p. 30). However, there appears to be no awareness of the historical trends explaining the higher response rates of the earlier Hardell studies.
- To argue against the assessment of cordless house phone use, the first report (Health Council of the Netherlands, 2013, pp. 102–103) cherry-picks one paper from Redmayne et al. (2010) that supposedly misinterpreted a paper by Vrijheid et al. (2009). Redmayne et al. pointed out that mobile phones use a technology to reduce the transmission strength when a caller is not speaking, and that this reduction is not taken into account in the measurements of Vrijheid et al. The Health Council states that this observation is wrong: ‘Redmayne et al. (2010) discussed the exposure by cordless phones and compared that with the data for mobile phones as assessed by Vrijheid et al. (2009). Vrijheid et al. state that “Analyses included data recorded during speech communication only.” This means: not during texting, but for the entire duration of a call, both during speaking and listening. However, Redmayne et al. (2010) erroneously interpreted this statement that power was only registered during speaking and not during listening.’ (Health Council of the Netherlands, 2013, pp. 102–103) However, reading the quote from the paper of Vrijheid et al. in its entire context actually shows the opposite: ‘Analyses included data recorded during speech communication only. The SMPs did not record information about DTX’ (Vrijheid et al., 2009, emphasis added). It is exactly DTX that reduces the transmission power while the user is not speaking, see for example this quote from Kelsh et al. (2011): ‘To address the effect of voice activated discontinuous transmission (DTX), which reduces output power when there is no speech, all phones were subjected to continuous rock music to generate a high proportion of sound to the phones. This could have caused a relatively higher average output power than what would be observed with normal conversation on a mobile phone.’ (Kelsh et al., 2011). This paper is cited by the Health Council in the same section, so this should have been known to them.
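The response-rate dispute above turns on the fact that the same fieldwork can yield several different ‘response rates’ depending on which groups end up in the denominator, which is exactly the point Galea et al. make. A minimal sketch with hypothetical recruitment numbers (the function names and figures are my own illustration, not taken from any of the cited studies):

```python
# Illustrative sketch, hypothetical numbers: the same recruitment outcome
# yields different "response rates" depending on the chosen denominator.

def minimum_response_rate(completed, total_approached):
    """Conservative convention: completed interviews over everyone approached."""
    return completed / total_approached

def cooperation_rate(completed, refused):
    """Laxer convention: completed interviews over those actually reached."""
    return completed / (completed + refused)

# Hypothetical outcome for one study arm
completed, refused, unreachable = 850, 90, 60
total = completed + refused + unreachable  # 1000 approached

print(f"minimum response rate: {minimum_response_rate(completed, total):.1%}")   # 85.0%
print(f"cooperation rate:      {cooperation_rate(completed, refused):.1%}")      # 90.4%
```

With identical fieldwork, one convention reports 85% and another over 90% – so comparing headline rates across studies that use different (even if individually well-described) conventions says little by itself.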
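The DTX point above is essentially arithmetic: averaging full transmission power over an entire call overstates exposure whenever output is reduced during the user’s silent periods. A back-of-the-envelope sketch, with an assumed peak power and an assumed DTX reduction factor chosen purely for illustration (not values from the cited papers):

```python
# Back-of-the-envelope sketch, hypothetical numbers: why ignoring DTX
# overestimates the time-averaged output power of a mobile phone.
# DTX lowers output power while the user is not speaking.

def average_power(peak_mw, talk_fraction, dtx_factor):
    """Time-averaged output power over a call.

    talk_fraction: share of the call spent speaking (full power)
    dtx_factor:    fraction of peak power emitted during silence
    """
    return peak_mw * (talk_fraction + (1 - talk_fraction) * dtx_factor)

peak = 250.0     # mW, assumed peak output
talking = 0.5    # assume the user speaks half the time

without_dtx = average_power(peak, talking, dtx_factor=1.0)  # silence at full power
with_dtx = average_power(peak, talking, dtx_factor=0.1)     # silence at ~10% power

print(f"without DTX: {without_dtx:.0f} mW")
print(f"with DTX:    {with_dtx:.0f} mW")
```

Under these assumptions the DTX-aware average is roughly 45% lower, which is why measurements that do not record DTX cannot simply be carried over as exposure estimates for the whole call.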
It doesn’t end here, unfortunately. I found more inaccuracies, erroneous statements and unsubstantiated claims, which would be too extensive to address in this blog, but you can find my analyses in full on my website if interested (comments and critique on these analyses are welcome as well): https://www.bardofrings.nl/portfolio/papers/
The degree of misrepresentation of these studies should cast doubt on the reliability of these reports. If the observations that I’ve laid out here are correct, these reports should by no means form the final advice for governments to base their regulations of RF-EMF on.
Interestingly, the new opinion of the Health Council on 5G differs from the previous reports in that it is mostly devoid of any in-depth analysis. Therefore, little can be scrutinized at this level, at least not without diving into the cited (draft) reports from the WHO and SSM (on which most of the new report of the Health Council is based), and the earlier reports of the Health Council itself. The published documents supporting the new opinion only contain lists of studies, their main findings (reduced to 4 categories), and an overall conclusion per field of research (Health Council of the Netherlands, 2020).
Unfortunately, I haven’t yet been able to spend a sufficient amount of time on the WHO draft report and the SSM reports, but my first impression is that these could use some additional scrutiny as well.
To conclude, a few important observations can be made:
- The many curious errors and inaccuracies that occur in the reports of the Health Council indicate that no scientific expert group should consider themselves free from error, even when their main activity is hunting for errors in publications from other scientists.
- The biases that these errors and inaccuracies support show the need for a greater plurality of opinion in these committees. A higher plurality of opinion can feed a healthier debate and support more appropriate in-depth scrutiny of publications and findings supporting different outcomes.
- Most of all, these reports themselves appear to be in dire need of a well-designed peer review process, one that is at least as thorough as, if not more thorough than, that applied to any scientific publication.
With a background primarily in new media art and philosophy of science and technology, I have mostly remained an outsider in the field of RF-EMF research. Therefore, at the time I did these analyses, I didn’t feel confident enough to share my findings with the Health Council of the Netherlands itself, or to publish them in a related scientific journal. However, with the new report coming up, a couple of months ago I decided to contact Hans Kromhout, the current chairman of the Electromagnetic Fields Committee of the Health Council of the Netherlands. To my disappointment, Kromhout responded that they didn’t have time to read my unpublished commentary, but advised me to contact the general secretary of the Health Council of the Netherlands if I really wanted to share my findings with them. I decided to summarize my most important findings in a mail to the Health Council, and offered to send my complete analysis if they were interested in reading it in full. The Health Council thanked me for my contribution, but expressed no interest in my full analysis, and never bothered to refute or address any of the observations I shared. Finally, I contacted Eric van Rongen and asked whether the Health Council had considered my commentary. He gave a very short response in the end, with a formal ‘thanks’ for my contribution, adding that they did not want to discuss my commentary with me in any way.
I was a bit surprised by this disinterested reaction, but was hoping some of my comments would still be considered in one way or another for the new report on 5G. Unfortunately, the conclusions of the earlier reports have remained unchanged, and are used to maintain the same recommendations for the frequency range between 700 and 3500 MHz.
The reports of the Health Council of the Netherlands always look impressive and extensive at first. However, closer scrutiny reveals that looks can be greatly deceiving.
- de Heer, W. (1999). International Response Trends: Results of an International Survey. Journal of Official Statistics, 15(2), 129–142
- Galea, S., Tracy, M. (2007). Participation Rates in Epidemiologic Studies. Annals of Epidemiology, 17(9), 643–653. doi: 10.1016/j.annepidem.2007.03.013
- Hardell, L., Mild, K. H., Carlberg, M. (2003). Further aspects on cellular and cordless telephones and brain tumours. International Journal of Oncology, 22, 399–407
- Health Council of the Netherlands (2013). Mobile phones and cancer. Part 1: Epidemiology of tumours in the head. The Hague, The Netherlands: Health Council of the Netherlands
- Health Council of the Netherlands (2016). Mobile phones and cancer. Part 3: Update and overall conclusions from epidemiological and animal studies. The Hague, The Netherlands: Health Council of the Netherlands
- Health Council of the Netherlands (2020). Achtergronddocument bij: 5G en gezondheid [Background document to: 5G and health], 2020/16. The Hague, The Netherlands: Health Council of the Netherlands
- Kelsh, M. A. et al (2011). Measured radiofrequency exposure during various mobile-phone use scenarios. Journal of Exposure Science and Environmental Epidemiology 21, 343–354. doi: 10.1038/jes.2010.12
- Redmayne, M. et al (2010). Cordless telephone use: implications for mobile phone research. Journal of Environmental Monitoring, 12, 809–812. doi: 10.1039/b920489j
- Vrijheid, M. et al (2009). Determinants of mobile phone output power in a multinational study: implications for exposure assessment. Occupational Environmental Medicine, 66, 664–671. doi:10.1136/oem.2008.043380