THINK YOU KNOW SCIENCE? CHALLENGE YOURSELF WITH THIS 15-WEEK LIVE INTERACTIVE COURSE
Bart Kay is an exceptional skeptic. Like me, he appreciates the role that the limits of knowledge play in framing our understanding of what we'd like to call knowledge. Like me, he sees the role that science could play as a way of knowing.
I asked Bart to work with me to put together a course for the public to help them become conversant in the ways that observational studies can mislead. After our constructive collaboration, we are pleased to offer, via IPAK-EDU.org,
IPAK-EDU ANALYTICS 201
Critical Evaluation of Studies in Nutrition Science
The public is fast becoming the watchdog of science. An educated public is an empowered public.
Topics: Study design basics, reproducibility crisis, confirmation bias, sample size & power, confounding, measurement bias, p-hacking, overadjustment, systematic reviews & meta-analysis.
Syllabus
Instructor: Bart Kay
Live Lecture/Discussion: Tuesdays, 3 pm EST
Video will be available thereafter.
Scientific studies influence the decisions we make in our daily lives and, ultimately, our long-term health. Proper design, analysis, and interpretation of scientific studies are key to the formation of public views on nutrition – a key component of health. In this course, we will examine studies whose conclusions and interpretations may, or may not, be supported by the study design or the analysis actually executed.
There is no book. Students are encouraged to recommend additional readings.
Accessing the literature: The instructor will share PDFs of each reading to the extent that copyright law permits fair use. Students are encouraged to download the Unpaywall extension to help access available online resources.
Week/Topic
1. Overview
This week we will introduce the course, set roles and expectations, and disclose existing biases and opinions. We'll also do a cursory run through the topics as a primer. Specific readings will be provided on a week-by-week basis, rather than all up front. It is hoped that this will assist in keeping the focus on the logical pathway as it unfolds sequentially.
2. Study Design Basics
How do we know what we know? What is the scientific method? Where has nutrition science gone off the rails? Topics include: Science is not only observational; it is also experimental. Correlation does not establish causality. What is a control in science? When should control be exerted? Positive and negative controls. Delimitation vs. generalization. Extrapolation.
3. Reproducibility Crisis
95% of peer-reviewed experimental science is not reproducible. This is a serious limitation to the veracity of this body of work. What has led to this vast lack of corroboration? Almost no studies report on the reproducibility of measurements even within a study, let alone between studies. Today's example is the GI (glycemic index) scale in nutrition 'science'.
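One way to see the problem concretely: the reproducibility of a measurement within a study can be summarised by the coefficient of variation across repeated measurements. A minimal Python sketch, using invented glycemic-index values purely for illustration:

```python
import numpy as np

# Hypothetical repeated glycemic-index measurements of the same food
# in the same subjects (values invented for illustration only).
gi_repeats = np.array([
    [62, 71, 55],   # subject 1, three test days
    [48, 66, 80],   # subject 2
    [70, 52, 63],   # subject 3
])

# Within-subject coefficient of variation: spread of repeat measurements
# relative to each subject's own mean.
within_cv = gi_repeats.std(axis=1, ddof=1) / gi_repeats.mean(axis=1)
print("Within-subject CV (%):", np.round(100 * within_cv, 1))
# A large CV means the measurement itself is poorly reproducible,
# before any between-study comparison is even attempted.
```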
4. Confirmation Bias
What is confirmation bias? Why does confirmation bias occur? What is publication bias? Why does publication bias occur? How do we combat selection biases? Why do we need to avoid biases?
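Why publication bias distorts the record can be sketched in a short simulation (the effect size, sample sizes, and publication rule below are all assumptions chosen for illustration): when only 'significant' results get published, the published literature overstates the true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, n_studies = 0.2, 30, 2000   # small true effect, modest samples

all_effects, published = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    diff = treatment.mean() - control.mean()
    _, p = stats.ttest_ind(treatment, control)
    all_effects.append(diff)
    if p < 0.05 and diff > 0:        # only positive, "significant" results published
        published.append(diff)

print("True effect:           ", true_effect)
print("Mean of ALL studies:   ", round(np.mean(all_effects), 2))
print("Mean of published only:", round(np.mean(published), 2))
# The published literature alone overstates the effect, because the
# significance filter selects the studies that overshot by chance.
```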
5. Sample size, Power
A little bit of unavoidable math today, but don't worry, it's all pretty simple stuff! Basically, sample size and statistical power are factors researchers must understand in order to meet the benchmarks required for 'evidence', whilst also keeping the accountants and ethicists happy.
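As a taste of that simple math, the standard normal-approximation formula for a two-group comparison gives the sample size needed per group for a chosen effect size, alpha, and power. A minimal Python sketch (the effect size and thresholds below are arbitrary illustrative choices):

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison of means.

    effect_size is Cohen's d: the expected difference in means divided
    by the (assumed common) standard deviation.
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A "medium" effect (d = 0.5) at the usual 5% alpha and 80% power:
print(round(n_per_group(0.5)))   # roughly 63 participants per group
```

Halve the effect size and the required sample roughly quadruples, which is exactly the tension between 'evidence', accountants, and ethicists noted above.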
6. Confounding
What is a confounder in experimental science? How do we spot a confounder? What can happen if we do not spot a confounder? What's the difference between a confounder and a co-risk factor? Today's example is statin medications.
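A confounder is easy to demonstrate with simulated data (all numbers below are invented): age drives both an exposure and an outcome, so the two correlate even though neither causes the other, and the association fades once age is held roughly constant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

age = rng.uniform(30, 80, n)                        # the confounder
supplement_use = age / 100 + rng.normal(0, 0.2, n)  # older people take more supplements
heart_events = age / 50 + rng.normal(0, 0.3, n)     # older people have more events
# Note: supplement_use has NO effect on heart_events in this simulation.

crude_corr = np.corrcoef(supplement_use, heart_events)[0, 1]
print("Crude correlation:", round(crude_corr, 2))   # clearly non-zero

# Look within a narrow age band and the association largely disappears:
band = (age > 50) & (age < 55)
stratified_corr = np.corrcoef(supplement_use[band], heart_events[band])[0, 1]
print("Correlation within ages 50-55:", round(stratified_corr, 2))  # near zero
```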
7. Measurement Bias
Isn’t this just a euphemism for poor measurement? Measurement errors in experimental science come from technical error, from random error, and from human bias. These sources of error will be discussed, and examples from the peer-reviewed literature will serve as the readings.
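Even purely random error is not harmless: it attenuates associations toward zero (so-called regression dilution). A minimal Python sketch with invented intake data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

true_intake = rng.normal(2000, 300, n)             # true daily calories
outcome = 0.01 * true_intake + rng.normal(0, 3, n)

# Self-reported intake = truth + random error (food-frequency
# questionnaires are far noisier than this in practice).
reported_intake = true_intake + rng.normal(0, 300, n)

slope_true = np.polyfit(true_intake, outcome, 1)[0]
slope_reported = np.polyfit(reported_intake, outcome, 1)[0]
print("Slope using true intake:    ", round(slope_true, 4))     # ~0.010
print("Slope using reported intake:", round(slope_reported, 4)) # ~0.005, attenuated
```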
8. P-Hacking
What is P-hacking? How is P-hacking a form of bias? Why is P-hacking so prevalent? How do we combat P-hacking? Let us also consider statistical vs. clinical significance.
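The mechanics can be shown in a few lines of simulation (assuming, for illustration, 20 independent and truly null outcomes per study): test enough outcomes and something will cross p < 0.05 by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_studies, n_outcomes, n = 10_000, 20, 50

false_positive_studies = 0
for _ in range(n_studies):
    # 20 outcomes, none of which is actually affected by the "treatment"
    treatment = rng.normal(0, 1, (n_outcomes, n))
    control = rng.normal(0, 1, (n_outcomes, n))
    _, pvals = stats.ttest_ind(treatment, control, axis=1)
    if (pvals < 0.05).any():          # report whichever outcome "worked"
        false_positive_studies += 1

print("Share of null studies reporting a 'significant' finding:",
      round(false_positive_studies / n_studies, 2))   # ~0.64, not 0.05
```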
9. Overadjustment Bias
‘Adjustment’ is the procedure by which multivariate analysis is used to justify reporting outcome data that has NOT ACTUALLY BEEN OBSERVED. Today we dissect the procedure (conceptually) and we point to its total contraindication in science. Today's example is an Adventist study making claims on the basis of ‘adjusted’ data.
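Mechanically, 'adjustment' replaces the observed difference between groups with a coefficient from a regression model. A minimal sketch with invented data, showing how the crude (observed) comparison and the 'adjusted' (modelled) figure can differ:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

smoker = rng.integers(0, 2, n)                  # covariate to be "adjusted for"
meat_eater = (rng.random(n) < 0.3 + 0.4 * smoker).astype(float)  # correlated with smoking
mortality = 2.0 * smoker + rng.normal(0, 1, n)  # driven by smoking in this simulation

# Crude (observed) difference in outcome between diet groups:
crude = mortality[meat_eater == 1].mean() - mortality[meat_eater == 0].mean()

# "Adjusted" estimate: coefficient on diet from a model that also includes smoking.
X = np.column_stack([np.ones(n), meat_eater, smoker])
coef, *_ = np.linalg.lstsq(X, mortality, rcond=None)
print("Crude difference:   ", round(crude, 2))    # the comparison actually observed
print("'Adjusted' estimate:", round(coef[1], 2))  # a model output, not an observation
```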
10. Challenge Study 1
Now you get to try out your new skills! Challenge studies are your opportunity to critique existing studies according to the logical pathway set out in this course so far. Four challenge studies will be provided to you in advance, and we will construct our critical evaluation of each study as a group (one per week).
11. Challenge Study 2
12. Challenge Study 3
13. Systematic Reviews
Today's discussion is around systematic reviews. Where do they sit in terms of knowledge acquisition? What value is provided by systematic reviews? How do we know if a systematic review is a good one or a poor one?
14. Meta-Analysis
Meta-analysis is the synthesis of a number of studies into (in effect) a much larger one. Why are meta-analyses undertaken? What additional value do meta-analyses provide? How do we spot publication bias in a meta-analysis? Meta-analysis in epidemiology: is this science or evidence?
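At its core, a fixed-effect meta-analysis is an inverse-variance weighted average of the individual study estimates. A minimal sketch with invented study results:

```python
import numpy as np

# Hypothetical study results: effect estimates and their standard errors.
effects = np.array([0.30, 0.10, 0.45, 0.20, 0.05])
std_errs = np.array([0.15, 0.10, 0.25, 0.12, 0.08])

# Fixed-effect (inverse-variance) pooling: precise studies get more weight.
weights = 1.0 / std_errs**2
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print("Pooled effect:", round(pooled, 3))
print("95% CI: ({:.3f}, {:.3f})".format(pooled - 1.96 * pooled_se,
                                         pooled + 1.96 * pooled_se))
# Publication bias is usually probed by plotting each study's effect against
# its precision (a funnel plot): a gap where small, null studies should sit
# suggests that such studies were run but never published.
```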
15. Final Challenge study
This article first appeared on jameslyonsweiler.com.