
Another Smoking Cessation Study Fails To Understand Smoking Cessation


Carl V. Phillips, Contributor

Given how much time and money they spend on it, tobacco controllers understand remarkably little about smoking cessation. The latest example is a study by serial junk scientist and alleged sexual harasser Stanton Glantz and colleagues, in which they claim to show that vaping hinders smoking cessation. In his broadside about the study, Glantz tries to spin it as a rebuke to recent pro-vaping messaging from the UK government and its quangos. In reality, it serves only as a good teaching exercise in identifying what researchers did wrong.

The study looked at European survey data from 2014, and compared the prevalence of vaping between current and former smokers. The authors observed that those who vaped or had tried vaping were more likely to be current smokers, rather than former smokers, when compared to those who had never tried vaping. From this they concluded — in what could only be seen as a parody of bad scientific inference were it not coming from tobacco controllers — that vaping inhibits smoking cessation.

One of the errors in this analysis is a common fatal flaw in research about smoking cessation: It fails to account for differences in smokers’ motivation to quit and their difficulty in doing so. Most successful smoking cessation is unaided — without use of any substitute product, drug or formal program — often immediate (“cold turkey”) but sometimes gradual. But this obviously does not mean that unaided quitting “works better.” Those who quit unaided would not have failed to quit if they had tried vaping or attended counseling as part of their quit attempt. Unaided quitters succeed because they are sufficiently motivated (and thus do not feel they need any tools) and their dependence on nicotine or other aspects of smoking is not insurmountable (so they do not want a substitute).

Every cessation tool — NRT, counseling, vaping, etc. — looks bad when a study, like Glantz’s, throws together unaided quitters and those who seek out some aid. Being less likely to quit (due to motivation and dependence) makes someone more likely to try an aid. Merely dropping the unaided quitters from the analysis, and comparing success only across aid methods, goes a long way toward correcting for this. When that is done, switching to vaping (and also to smokeless tobacco) looks very good compared to the other options. There will still be differences in the underlying chance of quitting across different aid methods, but this simple step alone solves much of the problem.
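A toy simulation makes the point concrete. Everything below is an illustrative assumption (a population split into “easy” and “hard” quitters, and an aid that genuinely helps everyone who uses it), not an estimate from any real survey; the point is only that self-selection alone produces the pattern Glantz interprets as harm.

```python
# Minimal sketch under assumed numbers: smokers who find quitting hard are the
# ones most likely to reach for an aid. Even though the aid raises every
# user's chance of quitting, the naive aided-vs-unaided comparison makes the
# aid look worse than nothing.
import random

random.seed(0)

N = 100_000
AID_BOOST = 0.15  # assumed true benefit of using an aid

tallies = {"aided": [0, 0], "unaided": [0, 0]}  # [quits, quit attempts]

for _ in range(N):
    hard_case = random.random() < 0.5            # low motivation / high dependence
    base_quit = 0.15 if hard_case else 0.60      # chance of quitting with no aid
    uses_aid = random.random() < (0.7 if hard_case else 0.1)
    p_quit = base_quit + (AID_BOOST if uses_aid else 0.0)
    group = "aided" if uses_aid else "unaided"
    tallies[group][0] += random.random() < p_quit   # True counts as 1
    tallies[group][1] += 1

for group, (quits, attempts) in tallies.items():
    print(f"{group}: quit rate {quits / attempts:.1%}")
# Roughly 36% aided vs 49% unaided: the aid "looks" harmful purely because
# of who chooses to use it (confounding by indication).
```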

A more glaring and comical error is the choice of outcome measures. Anyone who was a former smoker as of 2014 was counted as a success. Many, presumably most, of them had stopped smoking five or more years earlier. They could not have vaped before they quit smoking, but they are still counted as “successfully quit without vaping.” (Those familiar with epidemiology methods might recognize this as similar to the “immortal person-time” error.) The error is akin to observing that most of history’s great writing was not done on a computer and concluding we should therefore use typewriters or quill pens.
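To see how badly this stacks the deck, consider a deliberately simple back-of-the-envelope calculation (all numbers below are made up for illustration). Even if vaping doubles the per-attempt success rate, the large stock of people who quit before e-cigarettes existed, all necessarily coded as “quit without vaping,” makes vaping look like a hindrance.

```python
# Toy arithmetic with assumed numbers: legacy quitters could not have vaped
# when they quit, but the cross-sectional design counts them as successes of
# "not vaping."
legacy_quitters = 40_000            # quit smoking before e-cigarettes existed
recent_attempts_vaping = 10_000     # recent quit attempts made while vaping
recent_attempts_no_vaping = 10_000  # recent quit attempts without vaping

success_vaping = 0.30     # assume vaping-assisted attempts succeed twice as often
success_no_vaping = 0.15

former_vapers = int(recent_attempts_vaping * success_vaping)                 # 3,000
current_vapers = recent_attempts_vaping - former_vapers                      # 7,000

recent_quit_no_vaping = int(recent_attempts_no_vaping * success_no_vaping)   # 1,500
former_nonvapers = legacy_quitters + recent_quit_no_vaping                   # 41,500
current_nonvapers = recent_attempts_no_vaping - recent_quit_no_vaping        # 8,500

print(f"former-smoker share among ever-vapers:  "
      f"{former_vapers / (former_vapers + current_vapers):.0%}")
print(f"former-smoker share among never-vapers: "
      f"{former_nonvapers / (former_nonvapers + current_nonvapers):.0%}")
# 30% vs 83%: vaping appears to hinder cessation even though, per quit
# attempt, it doubled the success rate in this made-up population.
```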

A few former smokers might have tried vaping long after they had quit smoking, but most would not have. Moreover, this further illustrates the errors of the analysis. If someone quit smoking and then tried vaping, their vaping should not affect the results of an analysis of smoking cessation at all. But it does in this case.

The previous problem probably creates the most bias in the results, but there is an even more obvious error in the study methods. We know that everyone who actually quit genuinely wanted to quit. We also know that everyone who employs anti-smoking drugs or cessation counseling wants to quit. But that is not true about vaping. Sometimes vaping is an attempt to quit smoking, but some smokers just want to try it out of curiosity, or are trying to find a partial substitute. So of course vapers are less likely to quit than people who actually quit.
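Again, a little made-up arithmetic shows how much this matters. Suppose (purely as an assumption for illustration) that most smokers who have “ever vaped” only tried it out of curiosity and never made a quit attempt with it.

```python
# Toy numbers, assumed for illustration: lumping curious triers in with
# genuine quit attempters manufactures a high "failure" rate for vaping.
curious_triers = 6_000        # vaped a few times, never intended to quit
quit_attempters = 4_000       # actually used vaping in a quit attempt
attempt_success_rate = 0.40   # assumed success rate among genuine attempters

quit = int(quit_attempters * attempt_success_rate)        # 1,600
still_smoking = curious_triers + (quit_attempters - quit)
ever_vapers = curious_triers + quit_attempters

print(f"apparent failure rate among all ever-vapers: {still_smoking / ever_vapers:.0%}")
print(f"failure rate among those actually trying to quit: {1 - attempt_success_rate:.0%}")
# 84% vs 60%: the study's comparison group guarantees that vaping looks bad.
```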

The previous sentence is not an overstatement of the absurdity of the Glantz methods. It is often useful to think about what the same methods would “show” if applied to different data. Imagine a marketer for Starbucks running a similar analysis and concluding that visiting an indy coffee shop makes someone more likely to be a regular Starbucks customer. Of course the observed association exists: Interest in drinking coffee (however it is obtained) varies; most Starbucks devotees were getting coffee somewhere before a Starbucks opened in their neighborhood; and committed Starbucks customers will probably try a new indy even if they have no expectation of switching. The reality is, of course, that indies take away business from Starbucks, despite the association in the data.

If the marketer pushed Starbucks to advertise for its indy competitors, claiming that this would increase business, he probably would not last long in his job. Offering bad advice based on comically bad research is seldom appreciated. Except in tobacco control, where it is basically the job description.

Follow Dr. Phillips on Twitter