Trying to have it both ways.

Posted on October 25, 2011

Last week, once again, a paper on the carcinogenic risk of mobile phones was published, this time in the British Medical Journal (link to open access paper here). It was widely cited in international media (just to give you a taste of the quality: Fox News, Daily Mail & of course the Daily Express) as “the largest study ever done” and, because it came out negative (i.e. no association between mobile phone use and brain tumour risk), was also widely criticised by others in newspaper articles and on websites (for example 1,2). So quite interesting stuff yet again.

Before addressing my point, however, some details about the study to get us all on the same page. Essentially, this study was a further follow-up (after the 1996 study (link) and the 2002 first follow-up (link)) of a Danish nationwide cohort of 420,095 people who had signed a mobile phone contract with a phone company anytime from 1982 (the introduction of mobile phones in Denmark) to 1995. This update to 2007 increased the amount of follow-up, especially among long-term users (from 170,000 to 1,200,000 person-years), and also made it possible to combine these data with data from another cohort to obtain socioeconomic information (i.e. education and income) for a large proportion of people in the cohort. So although this reduced the number of people in the study by about 60,000, reasoned the authors, the extra information would be worth the effort.

I would say that “nothing” is a good summary of the results: no real increase in risk for the population as a whole, and none for long-term users either. Nor were any dose-response associations with years of subscription observed.

Of course, as with all observational epidemiological research, this study also had a number of limitations, all of which have been honestly described by the authors in their paper.

An important limitation is that business users were excluded and classified as “un-exposed”. It has been argued in an accompanying editorial (link) that “long term users who did not hold personal subscriptions would make up a small proportion of the reference population” and would therefore not affect the results very much, but I am personally not convinced it is that simple. Regardless, it would be good to see some figures on the possible impact this may have had. Although this is indeed a big limitation of the study, I very much doubt that we are looking at a conspiracy here. Something far more mundane is much more likely to be responsible; something as boring (but nonetheless important, and hopefully solved in the future) as “these were contracts with companies, and providers are not allowed to share this private information without the specific consent of each company”. I am just speculating here, so if somebody knows the actual reason, please post it here.
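
Just to illustrate the kind of figures I mean, here is a back-of-the-envelope sketch in Python. Every number in it (the true rate ratio, the baseline rate, the share of business users hiding in the reference group) is invented for illustration, not taken from the paper; the point is simply that truly exposed people sitting in the “un-exposed” group pull the observed rate ratio towards 1.

    # Purely illustrative: hypothetical numbers, not figures from the paper.
    true_rr = 1.5          # assumed true rate ratio for mobile phone use
    baseline_rate = 10.0   # assumed cases per 100,000 person-years when un-exposed
    contamination = 0.10   # assumed share of the reference group actually exposed

    rate_exposed = baseline_rate * true_rr
    # The reference group is a mix of the truly un-exposed and the
    # misclassified (business) users:
    rate_reference = (1 - contamination) * baseline_rate + contamination * rate_exposed

    print(f"true RR = {true_rr}, observed RR = {rate_exposed / rate_reference:.2f}")
    # -> true RR = 1.5, observed RR = 1.43

With these made-up numbers, 10% contamination already waters a true rate ratio of 1.5 down to about 1.43, and the dilution grows with the contaminated share.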

Another limitation was that although the number of years of subscription was known, no actual information on the amount of use could be included. That is unfortunate, because without this information it is not possible to identify “real” high-end users, and the epidemiologically important dose-response associations cannot be calculated (except for duration). In addition to having no information on usage, the researchers also had no information about other, similar types of exposure; most notably cordless phones and car phones. The authors argue that the misclassification resulting from these factors is likely to be non-differential, which, in contrast to differential misclassification, leads to a dilution of the effect. So yes, this may be a reason why no results were found. The authors, however, show using data from an earlier validation study (link) that despite these limitations their study should still be able to detect moderate to large risks related to mobile phone use. Or in other words: if there is an effect of mobile phone use on brain cancer risk at all, it is going to be small (admittedly, most people could have told you that, given that the amount of mindless jabbering on public transport has not exactly been dropping off rapidly).
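
For those unfamiliar with the term, a quick simulation shows why misclassification that hits cases and non-cases equally drags a risk ratio towards the null. All parameters here (flip probability, exposure prevalence, true risk ratio, baseline risk) are again made up purely for illustration.

    import random

    random.seed(1)
    n = 1_000_000       # simulated subjects
    true_rr = 2.0       # assumed true risk ratio
    p_exposed = 0.3     # assumed exposure prevalence
    base_risk = 0.001   # assumed disease risk among the truly un-exposed
    flip = 0.2          # probability of recording exposure status wrongly

    counts = {}
    for _ in range(n):
        exposed = random.random() < p_exposed
        disease = random.random() < base_risk * (true_rr if exposed else 1.0)
        # Non-differential: the same flip probability for cases and non-cases.
        recorded = exposed if random.random() >= flip else not exposed
        counts[(recorded, disease)] = counts.get((recorded, disease), 0) + 1

    risk_e = counts[(True, True)] / (counts[(True, True)] + counts[(True, False)])
    risk_u = counts[(False, True)] / (counts[(False, True)] + counts[(False, False)])
    print(f"true RR = {true_rr}, observed RR = {risk_e / risk_u:.2f}")
    # -> observed RR comes out around 1.5 rather than 2: diluted, not reversed.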

There is another limitation in this study, though (again, it is listed in the paper, so I am not actually telling you something you didn’t know already); one that results in a very interesting contradiction in discussions about this topic in newspaper comments, on “watchdog” websites and on blogs. And that is the issue of exposure lag; in other words, the time between the relevant exposure and the clinical manifestation of the tumour. This comes up in all scientific work on cancer risks, but with respect to studies on mobile phone use it has also been discussed (and critiqued (1,2)) for a paper I was involved in earlier this year (link). Unfortunately, we don’t know exactly how long this period is and, even if we did, it would not be the same for everyone; this makes it a complicated issue.

But back to the study. The authors mention that data on mobile phone subscriptions were only available up to 1995. If you got your subscription in 1996 or later (well, and you lived in Denmark), you would have been classified as ‘un-exposed’. This issue can subsequently be exploited in the following argument: “Well, if you ignore that much exposure from more recent years, of course you are not going to find anything! Ergo, useless study!”
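
To make the mechanics concrete, here is a minimal sketch with hypothetical subjects. Only the 1995 data cutoff and the 2007 end of follow-up come from the study; the subscription years and the ten-year “long-term use” criterion are my own, purely illustrative choices.

    DATA_CUTOFF = 1995     # last year for which subscription data were available
    FOLLOW_UP_END = 2007   # end of follow-up in this update

    def classified_exposed(subscription_year):
        # Exposure as the study could assign it: only subscriptions
        # started by the data cutoff count as exposed.
        return subscription_year <= DATA_CUTOFF

    def long_term_user(subscription_year, years_of_use=10):
        # A hypothetical 'relevant exposure' criterion: at least
        # `years_of_use` years of use accrued by the end of follow-up.
        return FOLLOW_UP_END - subscription_year >= years_of_use

    for year in (1990, 1994, 1996, 2000):
        print(year, "classified exposed:", classified_exposed(year),
              "| >= 10 years of use by 2007:", long_term_user(year))
    # A 1996 subscriber has 11 years of use by 2007 yet counts as
    # 'un-exposed' -- which is exactly the point of the criticism above.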

Interestingly, for other studies on mobile phones, such as the one I have been involved in, which looked at trends in cancer rates over the previous decades (brain (link) and parotid gland (link)) and which also did not provide any evidence for an effect of mobile phone use on those rates, the alternative reasoning goes like this: “Not enough exposure time has elapsed! You are only looking at exposure in the previous 5-15 years, so of course you are not going to find anything. It takes longer for the cancers to show up! Ergo, useless study!”

Although I am sympathetic (well, a little) to both lines of reasoning, they seem mutually exclusive. Recent exposure (as in within the decade, give or take, before disease) is either important for detecting increased risks, or it isn’t; you can’t have it both ways depending on what you are trying to prove.

Unless, of course, this is in fact not so much a contradiction as a paradox. It may, yet again, not be that simple. And I (and I bet others) would be very much interested in any comments and discussion.
