
Science Lesson: Clinical Trials Are A Terrible Way To Assess Vaping


Carl V. Phillips, Contributor

There is a common myth that epidemiology study types can be ranked generically in terms of how informative they are. Clinical trials – also known as randomized controlled trials or RCTs, and known outside of health science as experiments – are invariably at the top of the list. A study is an experiment if the researcher assigns exposures to subjects, as opposed to observational epidemiology where people make their own choices about exposure. Assigning exposures has one huge advantage, but that advantage does not always make experiments the more informative choice.

By assigning exposures, experiments can eliminate the problem of confounding. As explained in a previous science lesson, confounding means that those who choose one exposure are more likely to have the outcome than those who choose another exposure, even apart from any effect of the exposure. Confounding makes it very difficult to determine how much of the association between exposure and outcome is actually caused by the exposure. (Technically, experiments replace systematic confounding with random confounding, whose distribution is then described by random error statistics like confidence intervals.)
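To make that point concrete, here is a minimal simulation sketch. It is not from the article, and every number and name in it (the "motivation" trait, the 0.10 true effect, the 0.20 baseline) is invented purely for illustration. An unobserved trait drives both the choice of exposure and the outcome, which inflates the naive observational comparison, while random assignment recovers something close to the assumed true effect.

```python
# Illustrative simulation only: an unobserved trait ("motivation") raises both
# the chance of choosing the exposure and the chance of the outcome, creating
# confounding. Random assignment breaks the link between trait and exposure.
import random

random.seed(0)
N = 100_000
TRUE_EFFECT = 0.10   # assumed causal boost from the exposure (made up)
BASE_RATE = 0.20     # assumed baseline outcome probability (made up)

def outcome(exposed, motivation):
    p = BASE_RATE + 0.30 * motivation + (TRUE_EFFECT if exposed else 0.0)
    return random.random() < p

# Observational world: motivated people choose the exposure more often.
obs = []
for _ in range(N):
    motivation = random.random()             # unobserved confounder
    exposed = random.random() < motivation   # choice depends on motivation
    obs.append((exposed, outcome(exposed, motivation)))

# Experimental world: exposure assigned by coin flip, independent of motivation.
exp = []
for _ in range(N):
    motivation = random.random()
    exposed = random.random() < 0.5
    exp.append((exposed, outcome(exposed, motivation)))

def risk_difference(data):
    exposed_rate = sum(o for e, o in data if e) / sum(1 for e, o in data if e)
    control_rate = sum(o for e, o in data if not e) / sum(1 for e, o in data if not e)
    return exposed_rate - control_rate

print("observational estimate:", round(risk_difference(obs), 3))  # inflated by confounding (~0.20)
print("randomized estimate:   ", round(risk_difference(exp), 3))  # close to the true 0.10
```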

When confounding is a big challenge, doing an experiment is particularly advantageous. For example, all attempts to date to measure a gateway effect from vaping have failed because they could not deal with the huge confounding. If we could assign a group of randomly selected teenagers to start vaping, and compare their smoking uptake with a group that was assigned not to vape, the confounding problem would be solved. Of course, it seems rather unlikely this will be done.

Another advantage of experiments for studying clinical interventions is that the experimenters can ensure the intervention is always the same (e.g., dosage does not change) and the outcome is measured in the same way. When these details vary, as they do in the real world, they complicate the study results.

However, when the challenges in study design are listed (confounding, variations in the intervention, selection bias, etc.), the single biggest challenge is almost always omitted: Is the study really measuring what it claims to be measuring?

Psychology experiments provide cartoon-level illustrations of this. Experimenters recruit some college students, show each a series of images on a computer screen, and then give them a questionnaire. The resulting press release says, “Study shows how to redecorate your bedroom to revive your sex life.” The careful and controlled experiment may provide a very solid estimate of how different images change responses to the questionnaire. This may offer some vague suggestions about decor choices. But the headline interpretation differs wildly from what was actually measured.

Similarly, clinical trials of vaping and smoking cessation are terrible measures of the real experience of switching to vaping. These trials are often criticized because they offer inferior vapor products. But this is only a small part of the problem. Clinical trials of smoking cessation – or any other consumer choice – are only good measures of the effects of the clinical intervention. This does not resemble how most people make that choice.

Moreover, clinical interactions, along with their placebo effects, can produce very different results from real-world exposures. For example, tobacco controllers claim that NRT is much more effective for smoking cessation when it is part of a clinical intervention rather than just purchased over-the-counter. We might want to know the effects of clinical NRT therapy because clinical NRT therapy is a real thing. It is also reasonable to use clinical intervention results to estimate the effect of just buying NRT and following the instructions, since these are fairly similar experiences. It turns out that the “much more effective” claim translates into NRT failing nearly always in clinical interventions, which is a good approximation of it failing approximately always when used over-the-counter.

Vaping is a completely different story. Vaping is not prescribed in clinics and there are no particular dosage instructions. It is obviously not a standardized medicine. In the rare cases where enlightened clinicians recommend vaping for smoking cessation, they do not hand the consumer a box and detailed usage instructions. Ideally they would offer a sample, information about where to find and try a variety of products, and advice about how to seek an optimal nicotine dosage.

It would be possible to perform an experiment to test the effects of doing that, assigning some consumers to the advice about vaping and comparing them to others. Indeed, there is some such data. But notice that some of the celebrated advantages of clinical trials no longer apply. There is no confounding if we just look at the assignment, and the instructions are standardized. But the actual experiences of the subjects — e.g., how much effort they put in to find products and optimize their usage — are not standardized and may vary based on propensity to quit smoking, reintroducing confounding.

We can still eliminate the confounding using what is called “intention to treat” analysis, which compares outcomes based on the assignment regardless of what people actually did. This offers a measure of whether making that clinical assignment affects smoking cessation. We might want to know that. But while the clinical trial is a good measure of something, it is not a good measure of the effect of smokers choosing to try vaping in the real world. While the difference between the experiment and the real-world question is not so great as with the psychology labs, it is much greater than for NRT.
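As a rough illustration of what “intention to treat” means in practice, here is a toy sketch; the records are hypothetical and not data from any trial. Outcomes are grouped by assignment regardless of what subjects actually did, which preserves the benefit of randomization, while grouping by actual behavior lets self-selection, and therefore confounding, back in.

```python
# Toy sketch of "intention to treat" (ITT) with hypothetical records, just to
# show the arithmetic. Each record is
# (assigned_to_vaping_advice, actually_tried_vaping, quit_smoking).
subjects = [
    (True,  True,  True),
    (True,  False, False),   # assigned but never tried vaping
    (True,  True,  False),
    (True,  False, True),
    (False, True,  True),    # not assigned, tried vaping on their own
    (False, False, False),
    (False, False, False),
    (False, False, True),
]

def quit_rate(rows):
    return sum(quit for *_, quit in rows) / len(rows)

# ITT: group purely by assignment. This measures the effect of making the
# clinical assignment, and randomization keeps the comparison unconfounded.
itt_treated = [r for r in subjects if r[0]]
itt_control = [r for r in subjects if not r[0]]
print("ITT effect:", quit_rate(itt_treated) - quit_rate(itt_control))

# "As treated": group by what people actually did. Self-selection creeps back
# in, so confounding returns.
at_treated = [r for r in subjects if r[1]]
at_control = [r for r in subjects if not r[1]]
print("As-treated difference:", quit_rate(at_treated) - quit_rate(at_control))
```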

People who try vaping are self-selected for having a better chance it will work. They are familiar with vaping and probably have a friend who has explained and demonstrated it. They consider finding a smoking-like substitute to be a particularly attractive option for quitting smoking, or at least a reasonable option; anyone who hates the idea of a substitute will not consider switching to vaping.

The theory behind clinical trials says this is a problem, that it creates confounding and selection bias. But that is a naive misunderstanding of how science works. We only want to eliminate these complications if the question is “what if every smoker tried switching to vaping by following a particular script?” The question usually being asked, however, is “does vaping promote smoking cessation among those who choose it, however they choose to go about it?” The variations that clinical trials strip away are not sources of error to be eliminated; they are an integral part of the actual phenomenon we want to study. Eliminating them creates study bias rather than reducing it. Thus, observing the real-world experiences, with all their complications, is far more informative than artificial experiments.

In addition to this, clinical trials can only measure whether vaping increases the probability of success of a smoking cessation attempt. As explained in a previous science lesson, vaping causes smoking cessation not just because it increases this probability, but also because it causes cessation attempts that would not have otherwise happened. Furthermore, it prevents those who have quit smoking from returning to smoking. These effects can be measured with the right kind of study, but not a clinical trial.

There is no generically “best” type of study because the quality of an answer is highly dependent on what the question is. This truism, despite being rather obvious when stated, is not understood by most health researchers.

Follow Dr. Phillips on Twitter
