Women’s Health: a 7-class series
Curriculum includes female hormones, menstruation, PMS, endometriosis, fibroids, hysterectomy, pregnancy, contraceptives, menopause, the new guidelines for HRT, mammography and breast health, Pap testing, the HPV vaccine, UTIs and more.
Class dates (each class 2 hours): March 9 and 23; Apr 6 and 20; May 4, 18, and 27; all classes at 8:00PM eastern
All classes will be recorded for later viewing
Tuition: $420 if you plan to complete homework assignments; $280 for audit
Research and Writing 2026
Course Objectives: to teach you how to research and to evaluate scientific and health information in order to arrive at accurate conclusions concerning diet, health and medicine.
Expectations: The course begins with classes during which you will learn basic principles and practices. You will be given an assignment at the end of each class which is due the week before the next class. The classes are deliberately spaced one month apart to give you enough time to complete assignments. Note: all classes take place at 8:00PM eastern time
Tues Feb 24, Mar 24, Apr 28, May 26, Jun 30, July 28, Aug 25, Sept 29
Tuition $1195
Sleep Apnea and other Sleep Issues and Disorders
Includes: causes of sleep disorders, consequences of poor or inadequate sleep, insomnia, circadian rhythm sleep disorders (difficulty getting to sleep and waking up on time), restless leg syndrome, narcolepsy, sleep apnea, sleep walking and talking, shift work sleep disorder, discussion of how much sleep people need, the best hours to sleep AND an evaluation of commonly prescribed drugs and supplements for sleep.
Friday Mar 20, 7:30-10:00PM; Saturday Mar 21, 10:30AM-12:30PM and 2:00-4:30PM
Free for Annual Pass Members, $295 all other members/$495 nonmembers
Evaluating Research Studies
Pamela A. Popper, President, Wellness Forum Health
Question from a subscriber: Can you share some information about how to determine which research studies are reliable vs which are not? A little about me: I’m a healthcare provider who realized a few years ago that conventional medicine had lost its collective mind and turned to alternative medicine. I’m starting to believe that the alternative docs may be just as misguided.
Thank you for writing and your thinking is on the right track. Unfortunately, many providers have left conventional medicine and taken their bad habits with them. While there are exceptions, they tend to adopt practices based on recommendations or testimonials from other providers or even their patients. Many never learned how to critically review research studies, and the result is unwarranted enthusiasm for supplements and protocols for which there is little evidence. A great example is The Vitamin D Hoax, which was enthusiastically and equally embraced by both conventional MDs working in institutions and alternative practitioners, who continue to prescribe useless and, in some cases, dangerous doses of it.
I teach an 8-class in-depth research course once per year, but here are a few of the rules that we use when evaluating research:
Differentiate between correlation and cause-and-effect relationships.
Correlation means that two factors co-exist, but this does not mean that one factor causes the other. It is much easier to establish a correlation than to prove a cause-and-effect relationship. For example, research shows that in countries in which more women are getting drivers’ licenses, the breast cancer rate is increasing. But no sane person would claim that rescinding drivers’ licenses would reduce the breast cancer rate. A thorough dive into the data reveals that westernization results in more women driving AND more people adopting westernized diets, and this diet change is the cause of the increased cancer risk, not driving. Observations and relationships are interesting and may provide justification for more research, but should not, by themselves, be the basis for making any health-related decisions.
There is a difference between data expressed in relative vs absolute terms.
The benefits of drugs and supplements are often reported in relative terms instead of absolute terms because relative reporting makes many things seem like “medical breakthroughs” when they often have very little impact on health. For example, let’s assume that in a clinical trial for a new drug that reduces the risk of bone fractures, the incidence of fractures in the placebo group is 2% and the incidence of fractures in the group taking the drug is 1%. Expressed in relative terms, the new drug reduces the risk of fracture by 50% ((2% - 1%) ÷ 2% = 50%), and 50% sounds impressive. But expressed in absolute terms, which is the real benefit to you, the patient considering taking the drug, the risk reduction is only 1 percentage point (2% - 1% = 1%). This does not sound nearly as impressive, which is why relative numbers are commonly used. After factoring in the significant side effects associated with most drugs, many consumers conclude that such small benefits are hardly worth it.
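The arithmetic in the fracture example above can be sketched in a few lines (the 2% and 1% figures are the hypothetical trial numbers from the paragraph, not data from a real study):

```python
# Hypothetical fracture-trial numbers from the example above.
placebo_risk = 0.02   # 2% of the placebo group had a fracture
drug_risk = 0.01      # 1% of the drug group had a fracture

# Absolute risk reduction: the real-world benefit to an individual patient.
arr = placebo_risk - drug_risk                    # 1 percentage point

# Relative risk reduction: the headline-friendly number.
rrr = (placebo_risk - drug_risk) / placebo_risk   # "50% reduction"

print(f"Absolute risk reduction: {arr:.1%}")   # prints "Absolute risk reduction: 1.0%"
print(f"Relative risk reduction: {rrr:.1%}")   # prints "Relative risk reduction: 50.0%"
```

The same pair of trial results produces both numbers; only the framing differs, which is why it pays to check which one a headline is quoting.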
Some findings are statistically significant but who cares?
Statistical significance refers to sorting out whether differences between groups in a study are real or due to chance. When a difference between groups is statistically significant, the findings are considered worth reporting, but this does not necessarily mean that the results are important or worth acting on. For example, a study that looked at the effects of adding olive oil or nuts to the daily diet on the risk of cardiac events concluded that eating nuts reduced the risk of heart attack by 1.0% and eating olive oil reduced the risk of heart attack by 0.6%. While the results were statistically significant, a 0.6%-1.0% reduction in risk is virtually meaningless to a person trying to avoid a heart attack.
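To see how a tiny difference can still be “statistically significant,” here is a sketch using a standard two-proportion z-test. The sample size and event counts are invented for illustration (they are not the figures from the olive oil/nut study); the point is that with enough participants, even a fraction-of-a-percent difference clears the usual significance threshold:

```python
import math

# Assumed numbers for illustration only: a very large trial in which the
# absolute difference between groups is small.
n = 100_000              # participants per arm (assumed)
events_control = 3_000   # 3.0% event rate in the control group
events_diet = 2_400      # 2.4% event rate in the diet group

p1 = events_control / n
p2 = events_diet / n
p_pool = (events_control + events_diet) / (2 * n)

# Two-proportion z-test: difference divided by its pooled standard error.
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p1 - p2) / se

print(f"absolute difference: {p1 - p2:.1%}")  # prints "absolute difference: 0.6%"
print(f"z = {z:.1f}")  # far beyond 1.96, so "statistically significant"
```

A z-score above about 1.96 is conventionally reported as significant at the 5% level, yet the absolute benefit here is only 0.6 percentage points; significance tells you the difference is probably real, not that it matters.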
Pay attention to study design.
While there are many researchers with great integrity, it has become a common practice to structure studies to show a particular pre-determined outcome. For example, a study that involves lowering fat consumption from 40.0% to 30.0% may show no difference in health outcomes, but the reason is not that lowering fat is not important, it’s that fat consumption must be reduced significantly more to result in health improvement. A good analogy would be looking at the effects of speed on death rates in automobile accidents. If studies show that accidents taking place when cars are traveling 90 miles per hour almost always result in death and the same is true for accidents involving speeds of 80 miles per hour, one could report that driving slower does not matter. It really does matter, but only when speed is reduced significantly more, say to 30 miles per hour.
Improvement in surrogate markers may not have anything to do with long-term health.
Many drugs and supplements lower cholesterol, fasting glucose, blood pressure, or reduce pain. However, most do not change health outcomes. For example, statin drugs lower plasma cholesterol levels, but the average reduction in the risk of heart attack or stroke from taking them for primary prevention is less than 1.0%. In other words, statins improve the results of lab tests, but not health outcomes for most people. This is important because what most people want is long-term health and longer life, not just better test results.
Reductionism is pervasive.
Reductionism involves focusing on the effect of single foods or single nutrients, and reporting that one food or nutrient can change health outcomes. Study design can sometimes create the illusion that a single nutrient or food impacts health, usually by measuring its effect on surrogate markers. But generalizing these results to infer that individual foods or nutrients will have significant benefits for the general public is very misleading. An example would be a study showing that taking vitamin C pills increases plasma vitamin C levels, and then reporting that since people who do not have cancer have higher plasma vitamin C levels, taking vitamin C pills can reduce the risk of cancer.
Single studies don’t mean much; look at the preponderance of the evidence.
Unfortunately, the medical journals are cluttered with studies supporting almost any claim one wants to make, and this unfortunate state of affairs has made it easy to mislead people. For example, some studies published in reputable medical journals show that cigarette smoking does not increase the risk of lung cancer. The problem is that most studies show that smoking does increase the risk, so citing a few studies supporting a claim while ignoring most studies on a particular topic is misleading. The dairy industry is famous for this: sponsored research produces one study showing a particular outcome, and the industry then uses that study to promote its products even though almost every other study conducted on the same topic shows a different result.
The media often misreports study results.
I’ll be kind here and attribute some of the misreporting in the media to the fact that journalists cannot be experts on everything; since most have little knowledge about nutrition, health, and medicine, it is difficult for them to do more than report what they are told. This means that it is very important to read at least the abstract of a study covered in consumer-oriented media before forming an opinion or taking any action. Most journals publish abstracts online for free. When I check articles against the underlying studies, I am regularly flabbergasted at how badly research findings are misreported, not only by journalists but by health professionals who write articles and blogs and create other materials. Don’t rely on a report of the study; read the study itself.
These and many other factors skewing medical data have resulted in the suggestion that perhaps published research has become meaningless. I agree that there is a lot of misbehavior and that both providers and consumers should develop a healthy skepticism when looking at research. But I’m not ready to throw the baby out with the bathwater yet. There is much good information to be found in scientific journals, and new and valuable information is being added every day. The key is to be a discerning consumer of health and medical information by using better rules to filter information.