About one and a half months ago I became involved in the discussion that started after the publication of a study claiming that the implementation of a workplace smoking ban in the Netherlands had saved thousands of lives in the following years: 'Effect of smoke-free legislation on the incidence of sudden circulatory arrest in the Netherlands', de Korte-de Boer et al., Heart (July 2012). The head of the research team, Prof. Onno van Schayck, could even be seen on the main news programme telling viewers that more than 16,000 sudden circulatory arrests (SCA) had probably been prevented in the four and a half years after the ban, based on an extrapolation of their findings to the whole Dutch population. This figure of 16,000 prevented SCA cases also featured prominently in the press release of Maastricht University and was picked up by many newspapers.
Not everybody was convinced, though. Science journalist Maarten Keulemans, who works for one of the main newspapers, contacted me and asked if I would like to have a look at the statistics used in this study. He himself had already written a critical blog post, which had started something of a row with the authors.
Keulemans points out that in the study period there was no decrease in SCA incidence at all! The authors only show that the trend in SCA incidence seems to 'turn around' at the time the workplace smoking ban was implemented. The later smoking ban in cafes and restaurants didn't show a positive effect in their data; on the contrary, SCA incidence even rose slightly.
Keulemans raises other relevant questions as well: is the population of this small part of the country (South Limburg) representative of the Netherlands as a whole? How about the number of pensioners and unemployed in the sample population? And so on. But then he points to what I think is the main problem: the whole result may be driven by the unexplained increase in SCA incidence in the period before the smoking ban! Hadn't Van Schayck and his team simply stumbled upon a statistical oddity and given it the wrong (but welcome) interpretation?
I focused on the modelling of the assumed trends in the study. Although the parameters of the Poisson regression they performed were not reported with high precision, I managed to match their main model and the resulting graph quite well. And then it was easy to 'show' what the extrapolation was really about.
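The model in question is essentially an interrupted time-series Poisson regression: a log-linear trend in weekly counts that is allowed to bend at the date of the ban. Since neither the paper's code nor its data are available to me, here is a minimal sketch of that model class on invented weekly counts; the IRLS fitting routine and every number below are my own illustration, not the authors' method or data.

```python
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Minimal Poisson regression (log link) via iteratively reweighted
    least squares; a stand-in for a standard GLM routine."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())          # start near the overall mean
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu    # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

# Invented weekly SCA counts: a mild upward trend that bends downward at
# the ban (week 104). Nothing here uses the study's real data.
rng = np.random.default_rng(0)
weeks = np.arange(260.0)
t_ban = 104
log_mu = np.log(20) + 0.002 * weeks - 0.004 * np.clip(weeks - t_ban, 0, None)
y = rng.poisson(np.exp(log_mu))

# Design: intercept, overall trend, and change in trend after the ban.
X = np.column_stack([np.ones_like(weeks),
                     weeks,
                     np.clip(weeks - t_ban, 0, None)])
beta = fit_poisson(X, y)
print(beta)  # beta[1] > 0 (pre-ban rise), beta[2] < 0 (post-ban bend)
```

Fitting the bend term and then reading its sign as a causal effect is exactly the step the rest of this post questions.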
Now I could reconstruct the extrapolation that was supposed to yield the more than 16,000 prevented SCA cases: you simply take the difference between the lines of the two graphs on 01-07-2008 and multiply by the figures for the population of interest. I discovered that the authors made a mistake: to get the 16,638 which they mention in their article as the extrapolation for the whole country, they put the total population figure into their formula, not just the figure for the group between 20 and 75 years old! With the correct input it comes down to about 12,000 SCA cases, still a big number.
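The size of that correction can be checked on the back of an envelope. The population figures below are my own rough round estimates for 2008, not the paper's exact inputs: rescaling the published 16,638 by the share of the population the model actually covers lands close to the corrected figure.

```python
# Rough 2008 figures; my own round estimates, not the study's exact inputs.
pop_total = 16.4e6    # whole Dutch population (what the authors plugged in)
pop_20_75 = 11.8e6    # roughly the population aged 20-75 (what the model covers)

claimed = 16638                                  # figure from the article
corrected = claimed * pop_20_75 / pop_total      # rescale to the covered group
print(round(corrected))  # close to 12,000
```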
But my main concern was not this input error. I think it's rather obvious from the graph that this way of extrapolating is rather silly, for a couple of reasons. First, you have to ask yourself whether it makes sense to extrapolate 4.5 years ahead on the basis of a two-year trend, especially given the rather erratic incidence pattern. Another problem is that the first trend is the result of modelling the whole period; if you want a proper prediction model, you should build it differently. And last but not least: I think it's a very bad habit not to report any confidence interval on such a figure. Had they made that calculation, it would surprise me if they had still called it a serious extrapolation. Reporting a difference without the numbers used in the subtraction is not very wise here either.
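To illustrate why a confidence interval matters so much here, a quick simulation (entirely my own construction, with invented round numbers and a crude log-linear fit instead of the study's Poisson model) shows how wildly an extrapolated "cases prevented" figure can swing when a short, in truth trendless, series is projected 4.5 years ahead:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, horizon = 104, 234     # ~2 years of weekly data, then a 4.5-year horizon
base = 20.0                   # invented mean weekly SCA count; the truth is flat

diffs = []
t = np.arange(n_pre)
for _ in range(2000):
    y = rng.poisson(base, n_pre)
    # least-squares trend on log counts: a crude stand-in for the Poisson fit
    slope, intercept = np.polyfit(t, np.log(y + 0.5), 1)
    extrap = np.exp(intercept + slope * (n_pre + horizon))  # projected weekly count
    diffs.append((extrap - base) * horizon)   # 'prevented' cases over the horizon

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(round(lo), round(hi))  # a huge interval straddling zero
```

Even with no real trend at all, the sketch produces "prevented cases" figures ranging over thousands in both directions, which is exactly why a point estimate without an interval is meaningless at this horizon.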
What caused this self-deception of the authors? I had to agree with Keulemans on this: it's the unexplained trend over 2002 and 2003. The authors mention "a small but significant increase in SCA incidence during the pre-ban period (+0.20% cases per week, p=0.044)". I don't think it is small, and I think it needed investigation before being used as the basis for any extrapolation, whether 4.5 years ahead or even just one year.
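How "small" is +0.20% per week? Compounding it over the pre-ban period makes that easy to judge (simple arithmetic, assuming roughly two years, i.e. about 104 weeks, of pre-ban data):

```python
weekly_increase = 0.0020    # the pre-ban trend reported in the paper
weeks = 104                 # roughly two pre-ban years (my assumption)
total = (1 + weekly_increase) ** weeks - 1
print(f"{total:.1%}")       # about a 23% rise over the pre-ban period
```

A rise of that order in two years is anything but small, and it is the hinge on which the whole "turnaround" rests.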
I published my findings on kloptdatwel.nl, and about a week later Prof. Van Schayck contacted me by phone. When he first called, we only agreed to speak later that day at a more suitable time. In the meantime his team had probably figured out that I had at least been right about the error they made in the extrapolation. He also explained that the extrapolation had been added to their draft at the request of a reviewer, and he made it sound as if he had not been happy with that. But then why accept it? And why did they let this figure play such a major role in the media coverage of this study? They even put it in their own press release.
On July 13th, some weeks later, the authors published an addendum to their article: 'Extrapolation put in perspective'. They acknowledge that the extrapolation probably asked a bit too much of the presented data. We have to give them credit for correcting this mistake. I'm not so sure, though, whether they would have made the same sort of statement if the calculation error I found had been a bit more 'debatable'. And we have to assume that they still maintain that they found a significant turnaround in the SCA incidence trend, because the addendum says nothing about that issue.
On the other hand, I was pleasantly surprised to see that Dutch public broadcaster NOS did pay attention to this 'retraction' of 'the more than 16,000'. The initially uncritical reporting on this study had only added to the scepticism of science journalists about whether the news programme, which has no serious science journalist on its staff, can present any scientific result in a decent way. At least on their website they show that they are really trying to improve.
I doubt that the last word on this study has been said or written. It's bound to become one of those studies that pro-smoking and tobacco lobbyists ridicule as another example of 'foul play' by anti-smoking groups. Some scientists, too, don't seem to bother looking into the details before delivering their critique. Prof. Michael Siegel, who has criticized many studies of this type, also has doubts about this one. But his comments miss the main problem, I think. He even misread the study: it was about SCA, not heart attacks.
To me it has become even clearer that the debate on smoking bans is heavily politicized. Have we reached the point where scientists and other experts dealing with smoking bans can no longer treat this issue objectively? Whom to trust?
PS my opinion on smoking: it really stinks!