Peter Wiedemann responds to my post on the ‘Letter to the Editor’

In one of my previous blog posts I criticized a ‘Letter to the Editor’ recently published in the journal Bioelectromagnetics. Later on, the ‘Letter to the Editor’ was also vigorously debated in the BioEM2014 Cape Town discussion group on LinkedIn.

My opinion about this publication remains unchanged because I have not yet heard arguments sufficient to reverse it. In my opinion, this publication should be retracted and returned to the authors for revisions. The authors should decide what they wish to publish: a ‘Letter to the Editor’, a ‘brief communication’, or both, but separately, because the current mix contains too little data and too much speculation.

One sentence in the abstract of this ‘Letter to the Editor’ is especially disturbing. It claims the following:

There have been considerable doubts that non-experts and experts alike fully understood what IARC’s categorization actually meant, as “possibly carcinogenic” can be interpreted in many ways.”[emphasis added DL]

The ‘Letter to the Editor’ does not present any evidence to justify the claim that the experts did not understand what the ‘possibly carcinogenic’ classification means. This claim appears in the abstract, proverbially, “out of the blue”. But, again proverbially, “read my lips”: if this ‘Letter to the Editor’ is not withdrawn and rewritten to remove unfounded claims, it will soon be quoted as peer-reviewed proof that the IARC Working Group experts made a mistake when they classified cell phone radiation as a possible carcinogen.

As expected, Peter Wiedemann strongly disagreed with me, and he expressed this in the discussion on LinkedIn. Peter also submitted a response to my blog post and asked for it to be published on BRHP. Here it is, in the spirit of open scientific debate that I have always endorsed. The text is copied directly from Peter’s Word document (only my misspelled name was corrected).

**************************

Response to pseudo-science

Peter Wiedemann

Critique among peers contributes to better science. This is my personal belief; however, in some cases I doubt that critique is helpful. One example is Dariusz’s recent blog post on pseudoscience, which focused on a paper in Bioelectromagnetics that I published together with F. Boerner and M. Repacholi.

I would like to address here three issues that result in a distorted understanding of our paper: (1) a skewed view of the intention of our study, (2) a lack of understanding of social science methods, and (3) poor reading and misreadings.

First, concerning the biased perception of the objectives of our study: Dariusz’s comments do not address the actual content of our paper. The idea for this study was born after the IARC published its press release on the 2B classification of RF EMF. My first impression was this: from a risk communication perspective, the wording used in the IARC press release is far from ideal. Of course, my opinion had nothing to do with the quality of the assessment or with the expertise of the scientists who conducted it. It was solely about communication.

It is easy to detect two major problems: first, the use of qualitative uncertainty phrases to describe conclusions about the hazardousness of RF EMF (like “possibly carcinogenic”), and second, the use of relative risk expressions (like “40% risk increase”). Even a preliminary look at the guidelines on evidence-based patient information gives sufficient reason to assume that such risk communication is problematic (see Bunge et al. 2010 [1] and the Cochrane Review 2011 [2]).
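To see why a bare relative risk figure can mislead, consider a minimal arithmetic sketch. The baseline risk below is a purely hypothetical illustration, not a number from the IARC assessment: the same “40% risk increase” can correspond to a very small change in absolute risk when the baseline is low.

```python
# Hypothetical illustration only; the baseline risk is an assumed value,
# not taken from the IARC assessment or the Wiedemann et al. paper.
baseline_risk = 0.001       # assumed baseline risk: 0.1%
relative_increase = 0.40    # the reported "40% risk increase"

exposed_risk = baseline_risk * (1 + relative_increase)
absolute_increase = exposed_risk - baseline_risk

print(f"Baseline risk:      {baseline_risk:.4%}")
print(f"Risk with exposure: {exposed_risk:.4%}")
print(f"Absolute increase:  {absolute_increase:.4%}")
```

With these assumed numbers, the “40% increase” amounts to an absolute change of only 0.04 percentage points, which is exactly the kind of distinction that evidence-based patient information guidelines ask communicators to make explicit.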

Second, concerning methods: we conducted an online survey to check whether our first impression about IARC’s communication problems was correct. Usually, you choose an approach that provides the evidence you need. In our case, we focused on lay people’s subjective interpretation of (1) “possibly carcinogenic” and (2) “40% risk increase”. Fortunately, we did not need to reinvent the wheel. Research was already available from the psychologist David Budescu on the communication of the United Nations’ Intergovernmental Panel on Climate Change (IPCC) [3]. He showed that lay people systematically misunderstand what the IPCC means when it uses verbal probability phrases such as “likely” or “very likely” to describe the evidential strength of its evaluations. We followed his approach.

Furthermore, the study was conducted as an online survey. Of course, this approach can be criticized. However, online studies are becoming more frequently used in social science. The question is, are they reliable? Some validation research indicates that the results of surveys depend partly on the mode of questioning, i.e. face-to-face, telephone, or online. However, the findings are trickier than one would assume. A recent validation study [4] concludes: “Our results suggest that differences between online and telephone surveys depend in part on the type of questions being compared. We observed the clearest differences between modes with cognitively demanding knowledge questions (where Web respondents were more likely to give correct answers) and with batteries of attitude questions presented online (where the Web respondents gave less differentiated answers).” In this line of thinking, our approach, which focused on cognitively demanding “knowledge issues”, can be viewed as suitable. In addition, the work of Budescu indicates consistency. His three studies (a student sample from a single US university, a representative sample of Americans, and a large international study in 24 countries across the globe) come to consistent findings: people misunderstand verbal descriptions of uncertainty.

With respect to the extrapolation of our results, we argue that if well-educated people such as university students have problems interpreting IARC’s 2B category and difficulties understanding one of IARC’s core messages, the expression “40% risk increase”, then people who are less educated will have at least similar problems.

Third, what are the misreadings? Dariusz claims that our paper says things it does not say. First, our paper is not about IARC’s cancer assessment of RF EMF. We do not examine whether the 2B classification is justified or not. Any reader may easily become aware of this (you can find the paper on my LinkedIn page).

Second, we do not question the ability of any expert to use the IARC classification system for grading the available evidence. Nevertheless, it would be an interesting study to analyze how experts agree or disagree about the interpretation of “possibly carcinogenic.” Finally, we do not attack IARC; however, we conclude that IARC’s communication needs improvement. This is our core message, no more and no less.

References

  1. Bunge M, Mühlhauser I, Steckelberg A (2010). What constitutes evidence-based patient information? Overview of discussed criteria. Patient Educ Couns 78: 316–328.
  2. Akl EA, Oxman AD, Herrin J, Vist GE, Terrenato I, Sperati F, Costiniuk C, Blank D, Schünemann H (2011). Using alternative statistical formats for presenting risks and risk reductions. Cochrane review, prepared and maintained by The Cochrane Collaboration and published in The Cochrane Library 2011, Issue 3.
  3. Intergovernmental Panel on Climate Change (IPCC)
  4. Fricker S, Galesic M, Tourangeau R, Yan TI (2005). An experimental comparison of web and telephone surveys. Public Opinion Quarterly 69(3): 370–392.

****************************
