Nagging 101

A glossary of research and marketing terms to help you become a savvy Nag yourself:


  • Biomedical or basic research:  studies done on cells, tissues or animals, often in a laboratory. These findings may or may not be relevant to humans, and often only suggest directions for further research.
  • Epidemiological research:  studies of groups of people (populations) to look at patterns in the rates, distribution, prevention and control of disease, injuries, and other health-related events.
  • Social and behavioural research: studies that look at people’s attitudes, needs and practices.
  • Clinical research:  studies of the diagnosis, treatment, prevention, and outcomes of diseases or conditions. These include intervention studies, such as clinical trials and community-based trials, that look at whether a strategy, drug, or medical treatment is safe and works to treat or prevent a health-related event. Multi-centre trials are clinical trials conducted at more than one hospital or clinic and are more likely to be valid than studies conducted at a single site.
  • Health services and operational research:  studies how health care is delivered and how people access health care.

Two general kinds of studies

  • Quantitative studies are the kind most often used in medical research, especially when looking at drugs and other treatments. These studies ask questions such as “how many?” or “how much?” They count, measure, and compare things. For example, a quantitative study may count how many women taking a drug to prevent heart disease actually developed heart disease. Or, it may measure how much of a drug is needed to prevent heart disease. Another study might compare two drugs that prevent heart disease to see which one works better and has fewer side effects.
  • Qualitative studies ask questions like “why?” and “how?” This kind of research often uses interviews, surveys or other ways to observe people to find out how people think, feel, or act. For example, a qualitative study may ask: “Why do girls start smoking?” Qualitative studies ask questions that are tough to answer with numbers.

Levels of evidence in studies

  • Systematic Reviews and Meta-Analysis:  Comprehensive and structured review of all studies on a specific topic, sometimes combining the data or statistics from the individual studies (meta-analysis)
  • Randomized Controlled Studies:  Randomly assign study participants into two groups, those who get the treatment (the treatment group) and those who do not (the control group), then look for differences between the two groups. The chance of being in either group is 50/50; it does not depend on things like how much a person needs treatment. A stronger form of this type of study is the Randomized Controlled Double-Blind Study, which adds the extra step of making sure that neither the researchers nor the study participants know who is getting the treatment and who is getting the placebo until the end of the study.
  • Cohort or Panel Studies:  Observe a clearly defined group of people either forward in time (prospective) or looking back in time (retrospective) at previous medical records and other information. In Observational Studies, individuals are observed or certain outcomes are measured, but no attempt is made to affect the outcome (for example, no treatment is given). Observational studies can’t prove cause and effect. They can show a strong statistical association, but cannot prove benefit or risk reduction.
  • Case-Control Studies:  Compare two groups of people: people with a disease or condition (cases) to other people who have the same characteristics (such as age and sex) who do not have that condition (controls)
  • Case Series:  Describe a number of cases
  • Case Reports:  Report on a single case
  • Animal Research
  • In Vitro (Test Tube) research
  • Ideas, editorials, and opinions
  • What my neighbours tell me they learned while watching the Dr. Oz show

Funding for research

  • usually comes from government departments, foundations, universities or corporations
  • ask where the money comes from, because some funders may have a financial interest in what the study says. For example, a drug company paying for research on its own drug wants the research to show that the drug it has spent hundreds of millions of dollars developing is effective and will be a big seller. Researchers are now more frequently disclosing their ‘conflict of interest’ issues right on the research paper or journal article. Read the fine print.
  • Check the author affiliations listed to see if any of the authors are full-time employees of the drug/device company.

The people studied

  • Did the researchers study men only, women only, or both women and men (and was the sample group equally divided between genders or significantly more one-sided)?
  • Were the study participants from a wide range of ethnicities, backgrounds, ages?
  • If the researchers studied only certain types of women, their findings may not apply to all women. Ditto for men.
  • Until recently, medical research has tended to study men and not recognize that there are differences between men’s and women’s health.
  • Researchers are now making more of an effort to understand those differences, using gender-based analysis or “GBA”.
  • Was the group studied large enough? When a quantitative study looks at whether one treatment is better than another, or has dangers or side effects, researching a large population over a long period of time gives results that are more meaningful.
  • Small groups are often used in qualitative studies. These studies look at an issue in-depth and can take a lot of time, so it is not always possible to have a large study group.


Statistics

  • Statistics are supposed to show whether something just happens by chance or because of the treatment that is being studied.
  • Saying that results are statistically significant means that the research shows a meaningful difference between the groups in the study, and that the difference is so big that it is probably not due to chance. The difference between the two groups is then due either to the treatment being studied, or to an error or bias in the study.
  • Statistics can also be used to show whether two things occur together, how large a difference is, the combined effects of many factors, or how something changes over time.
  • See also number needed to treat (NNT) under Definitions, below.
  • See also absolute risk vs relative risk, under Definitions, below.
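For readers who want to see how easily chance alone creates differences, here is a minimal sketch in Python (all numbers are made up for illustration, not drawn from any real trial). It simulates a useless treatment: both groups have the same underlying 10% risk, yet sizeable gaps between them still appear regularly, by luck alone.

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def simulate_trial(n_per_group, true_risk):
    """Simulate one trial of a treatment that does NOTHING:
    both groups share exactly the same underlying risk."""
    treated = sum(random.random() < true_risk for _ in range(n_per_group))
    control = sum(random.random() < true_risk for _ in range(n_per_group))
    return treated, control

# Run 1,000 pretend trials (100 people per group, 10% risk in both groups)
big_gaps = 0
for _ in range(1000):
    treated, control = simulate_trial(n_per_group=100, true_risk=0.10)
    # Count trials where the groups differ by 5 or more cases purely by chance
    if abs(treated - control) >= 5:
        big_gaps += 1

print(f"Trials with a gap of 5+ cases by chance alone: {big_gaps} of 1000")
```

Statistical tests exist precisely to estimate how often a gap this large would appear if chance were the only thing at work.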


Bias

A bias is anything that influences the results of a research study. Different sources of bias can affect the study design, the data analysis, or the interpretation of the study results.

  • Medical research may focus on one disease or condition and only look at how some aspects of patients’ lives affect the condition.
  • Some people feel better when they think they are receiving treatment, even if they are only getting a placebo, the so-called placebo effect.
  • People’s memories are selective and volunteers can under- or overestimate answers to research questions. People sometimes give answers that they think are the ‘right’ ones or what the researchers want to hear. Last year, a group of 45 international nutrition scientists launched a campaign to end the use of one of their most commonly-used research tools: the self-reported food diary.  These scientists now claim that “dietary recall is skewed towards healthier behaviour.”
  • Sometimes, observations and conclusions may be influenced by stereotypes about groups of people.
  • There might be problems in the way that people were chosen for, or volunteered for, the study, so that the study participants are not representative of the larger society.
  • The more quantities researchers try to measure (multiple endpoints), the more likely it is that one will be statistically significant merely by chance, even if the experimental treatment does not work. This is why drug companies are required to report their primary study goals in advance – yet many still do not. Since 2008 in the U.S. for example, the FDA has required results of all clinical trials to be posted within a year of completion of the study. However, an audit published in 2012 has shown that 80% of trials failed to comply with this law. See selective outcome reporting (under Definitions, below) or more on this from the AllTrials campaign.
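The multiple-endpoints problem in that last point is plain arithmetic: if each endpoint has some small chance of a fluke “significant” result, measuring many endpoints compounds that chance. A quick sketch in Python, assuming the conventional 5% false-positive threshold (an illustrative figure, not taken from any particular study):

```python
# If each endpoint has a 5% chance of a false positive, the chance that
# at least ONE of several endpoints looks "significant" purely by luck
# grows quickly as more endpoints are measured.
false_positive_rate = 0.05

for n_endpoints in (1, 5, 10, 20):
    chance_of_at_least_one = 1 - (1 - false_positive_rate) ** n_endpoints
    print(f"{n_endpoints:2d} endpoints -> "
          f"{chance_of_at_least_one:.0%} chance of a fluke 'significant' result")
# With 20 endpoints, the chance of at least one fluke is roughly 64%
```

This is why pre-registering one primary endpoint matters: it stops researchers from quietly shopping among twenty outcomes for the one that looks good.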

Questions to ponder:

  • Did the study do what it set out to do?
  • Do the conclusions follow from the information presented?
  • Does the study overestimate benefits or underestimate dangers?
  • If the study suggests drugs or therapies for prevention, was there long-term follow-up on the effects?
  • Do the results fit with older, more established evidence? Sometimes new research findings challenge established evidence.

Be suspicious of research presented at scientific meetings

  • information presented in talks at scientific meetings may be incomplete
  • research abstracts presented at prominent scientific meetings often receive substantial media attention before the validity of the work has been established in the scientific community
  • many of the abstracts receiving media attention have weak designs, are small, or are based on animal or laboratory studies
  • this work is generally not ready for public consumption: results change, fatal problems emerge, and hypotheses fail to pan out
  • 25% of these abstracts remained unpublished more than three years after the meeting
  • presentations that receive front-page coverage are no more likely to be published than abstracts receiving less prominent coverage
  • results are frequently presented as scientifically sound evidence rather than as preliminary findings with still uncertain validity
  • most findings have not undergone final peer review, have yet to be independently vetted, and may change
  • unlike research presented at scientific meetings, the strength of evidence in studies published in peer-reviewed journals can be evaluated
  • physicians, confronted with preliminary research findings, must be able to answer some fundamental questions, such as: “What is the rush?”
  • develop a healthy skepticism about the ‘breakthroughs’ you repeatedly encounter in the news
Sources include: Canadian Women’s Health Network and Health News Review



Definitions

  • adverse event: any undesirable experience associated with the use of a medical product in a patient. This includes both obvious drug side effects and other new health problems that occur during a drug trial that may not be drug related. An adverse event is considered severe if it leads to hospitalization, disability or death.
  • astroturfing: political, advertising, or public relations campaigns that are formally planned by an organization or company, but designed to mask their true origins to create the impression of being spontaneous, popular “grassroots” behaviour. (The term refers to AstroTurf™, a brand of synthetic carpeting designed to look like natural grass).  Astroturfing campaigns are widely considered by us PR types to be behind the growing trend towards noisy health care protests and town hall meetings in the U.S.
  • conflict of interest: this happens if a research study author or team member is now receiving or has previously received money or any other benefits from the manufacturer of the drug or device being studied, or has anything to gain if the study is positive towards the manufacturer.  It’s generally suspected that you don’t bite the hand that feeds you. Some medical journals (not all) ask the authors of scientific papers being published in their journals to disclose any conflicts of interest, usually listed at the end of the article.
  • data mining: happens when a research study doesn’t show what researchers or funders wanted, and so they look at subgroups to tweak the data every which way until they get something that looks more positive.  See also: publication bias, file drawer effect and selective outcome reporting
  • drug reps: sales employees of pharmaceutical companies who visit physicians in order to convince them to prescribe their company’s drugs, or to up the dosage of these drugs
  • file drawer effect:  when negative research studies are not submitted for publication. See also publication bias, data mining and selective outcome reporting
  • guest authors: the names of well-known physicians, academics or scientists added to the published list of authors in a journal article even though they did not write the article; often included as a ‘tribute’ to a department chair or the person who successfully arranged funding for the research because the ‘guest author’ gets to take credit for the title on a career list of publications even if he/she didn’t do the work.  See also: medical ghostwriting
  • marketing:  the process of interesting potential customers and clients in products and/or services; involves researching, promoting, selling, and distributing products or services.
  • medical ghostwriting: when someone makes substantial contributions to a manuscript without being credited. “It is considered bad publication practice in the medical sciences, and some argue it is scientific misconduct,” according to Danish researcher Dr. Peter Gøtzsche. “At its extreme, medical ghostwriting involves pharmaceutical companies hiring professional writers to produce papers promoting their products – but hiding those contributions and instead naming academic physicians or scientists as the authors.”  See also ‘guest authors’.
  • number needed to treat, or NNT:  offers a measurement of the impact of a medicine or therapy by estimating the number of patients that need to be treated in order to have an impact on just one person. The concept is statistical but intuitive, because we know that not everyone is helped by a medicine or intervention; some benefit, some are harmed, and some are unaffected. The NNT tells us how many people must be treated for one person to benefit. This website from a bunch of very brainy physicians working in emergency medicine has the most helpful explanation of NNT that I’ve seen yet, along with a list of credible resource links to check NNT stats on several therapies currently recommended for a number of different conditions.
  • optics:  in the public relations field, this has “nothing to do with the eyes, but it has everything to do with the way the public sees things” as The Globe and Mail called this description of perception back in 1983.
  • peer review: when a medical research paper is submitted to a scientific journal, editors send it to experts in the field who anonymously look for problems in the conduct and analysis of the study and overstatements or leaps of logic in the paper. These peer reviews recommend whether the paper should be accepted, rejected outright or sent back for major revisions.
  • placebo: looks like medicine, but isn’t; used to be called a ‘sugar pill’
  • primary endpoint: the primary goal of a trial; examples include prolonging survival or curing infections. It’s decided before a research trial begins. If a trial doesn’t succeed in meeting its primary endpoint, it is considered a failure, even if there is evidence of other beneficial effects.
  • public relations (or PR): the actions of a corporation, business,  government, individual in promoting goodwill between itself and the public, the community, employees, customers, etc;  the art, technique, or profession of promoting such goodwill.
  • publication bias: when academic or medical journals are less likely to publish negative studies. See also file drawer effect, selective outcome reporting and data mining.
  • risk:  research distinguishes two important types of risk, absolute risk and relative risk. Absolute risk is stated without any context. You have a 50% chance of flipping a coin and getting heads, or a 1% (one in a hundred) chance of getting lung cancer even if you have never smoked. These risks are not compared to any other risk – they are just the probability of something occurring. Relative risk, however, compares different risk levels. For example, your relative risk for lung cancer is (approximately) 10 if you have ever smoked, compared to a non-smoker. This means you are 10 times more likely to get lung cancer. If the risk is about 1% for a non-smoker, this translates to about 10% for a person who has smoked (but even higher for heavy smokers). Remember that the relative risk (or risk ratio) is NOT the same as an increase in risk.
  • selective outcome reporting: when study authors cherry-pick research results by publishing the good ones favourable to the product being studied, while ignoring the bad ones. See also: publication bias, file drawer effect, data mining, as well as the AllTrials campaign to force drug/device companies to report all results.
  • shills: this word originally, maybe as far back as 1914, denoted a carnival worker who pretended to be a member of the audience in an attempt to elicit interest in an attraction.  Both illegal and legal gambling industries use shills to make winning at games appear more likely than it actually is. In online forums, shills may express specific opinions in order to further the interests of an organization in which they have an interest, such as an employee, commercial vendor or special interest group. Sometimes shills may be used to downplay legitimate complaints posted by users on the internet forum.
  • sock puppetry: an online scam usually seen in community forums or letters to the editors in which people sign on as one user soliciting recommendations for a specific product or service, and then sign on as a different user pretending to be a satisfied customer of a specific company. In many jurisdictions and circumstances, this type of activity may be and should be illegal. See also: Sock Puppetry, Astroturfing and the Marketing Shill Game
  • surrogate endpoint (also sometimes called secondary or intermediate endpoint): measurements that are not the main goal of the clinical trial. They are usually used as supporting data about a medicine’s efficacy or to spell out potential side effects. See also: Your Health, Ball Possession, and the World Cup
  • thought leader:  a flattering term that industry – particularly drug and medical device companies – likes to use when recruiting physicians or academics to take money in exchange for educating their peers in order to help boost sales of their products. See also: Is your doctor a “thought leader”?
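To tie together the risk and NNT definitions above, here is a few-line sketch in Python. The lung-cancer figures are the rough illustrative ones from the risk entry; the drug-trial numbers are entirely made up for illustration.

```python
# Absolute vs. relative risk, using the rough lung-cancer figures above:
# about a 1% baseline risk, and a relative risk of about 10 for smokers.
baseline_risk = 0.01          # absolute risk for a non-smoker
relative_risk = 10            # smokers compared to non-smokers
smoker_risk = baseline_risk * relative_risk
print(f"Smoker's absolute risk: {smoker_risk:.0%}")  # about 10%

# Number needed to treat (NNT): how many people must take a treatment
# for one person to benefit. Suppose (made-up numbers) a drug cuts the
# risk of an event from 4% to 3% - an absolute risk reduction of 1%,
# even though marketers could call it a "25% relative risk reduction".
risk_without_drug = 0.04
risk_with_drug = 0.03
absolute_risk_reduction = risk_without_drug - risk_with_drug
nnt = 1 / absolute_risk_reduction
print(f"NNT: about {round(nnt)} people treated for one to benefit")
```

The same trial result sounds very different as “cuts risk by 25%” versus “100 people must take the drug for one to benefit”, which is exactly why the NNT is worth asking about.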

3 thoughts on “Nagging 101”

  1. Pingback: Dr. J.

  2. I think that “thought leaders” are even more nefarious – the leaders are often chosen to comment on the basis of recommendations of the authors. The thought leaders often receive incredible sums of money from big pharma.

  3. Pingback: CBA
