You’ve probably heard the statistic that 97% of climate scientists agree that humans are causing global warming.
Have you ever asked where that number comes from, or whether it is true? The source is a paper by Cook et al., and no, it is not true.
First, the 97% figure is the percentage of abstracts expressing a position on AGW that implicitly or explicitly endorsed it. However, 66% of all abstracts reviewed expressed no opinion on AGW at all. A recently published review of the Cook paper found “just 0.3% endorsement of the standard definition of consensus: that most warming since 1950 is anthropogenic.”
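The arithmetic behind the headline number is easy to check. A minimal sketch, using the two shares quoted above (66% no position, 97% of the remainder endorsing); the calculation is illustrative, not a reanalysis of the paper:

```python
# Share of ALL reviewed abstracts that endorsed AGW, given the
# figures quoted above: 66% expressed no position, and 97% of the
# remainder endorsed it implicitly or explicitly.

no_position_share = 0.66            # abstracts expressing no opinion on AGW
endorse_share_of_positioned = 0.97  # the famous "97%"

share_of_all = (1 - no_position_share) * endorse_share_of_positioned
print(f"Endorsing abstracts as a share of all abstracts: {share_of_all:.1%}")
# prints "Endorsing abstracts as a share of all abstracts: 33.0%"
```

In other words, even taking the paper at face value, only about a third of all abstracts reviewed endorsed AGW.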
Second, the 97% is of abstracts, not scientists. Abstracts are brief summaries of papers; to really understand a study, you have to read more than just the abstract. Each abstract can have multiple authors who may or may not agree with every conclusion in the paper, so you can’t even just multiply the number of papers by the number of authors.
Relying on published abstracts is also biased. Challenging a popular position means that research is less likely to be funded and papers are less likely to get published. To know what scientists believe about any topic would require asking them in a systematic, unbiased, anonymous manner.
And even then consensus doesn’t matter. 99.999% of people believing something does not make it true. For example, physicians used to believe that gastric ulcers were caused by spicy foods and stress until two Australian scientists proved that most are caused by H. pylori bacteria.
TL;DR: No AGW consensus and consensus doesn’t equal fact.
After writing yesterday’s post, “All men must die”, it seemed fortuitous today to read an editorial in The Medical Post entitled “You gotta die of something”. Dr. Murray Waldman’s essay notes that gains in life expectancy are limited by the human life span. The oldest human on record died at 122. We are all going to die. What will we die from, if not heart disease or cancer or infection?
The answer is neurodegenerative diseases. These include Alzheimer’s disease, ALS (Lou Gehrig’s disease), and similar currently uncommon conditions. All these diseases share two characteristics: they have no effective treatment, and their incidence rises extremely sharply after the age of 90.
This should be an alarm bell to those working in the field of life-prolonging technologies. If there is an absolute limit to human life expectancy and that limit appears to be approaching quickly, should we not pause and re-examine what we are trying to achieve?
Living long enough to die of Alzheimer’s or some similar disease is a fate most of us would not wish on ourselves or our loved ones.
I don’t want to live to 120 if the last 30 years are spent suffering from dementia and wearing diapers in a nursing home. No thank you.
While reading an article in the Globe and Mail about skin cancer, something about the article combined with the graphic for “What is the likelihood that you will get cancer?” bothered me. The bottom of the graphic emphasized lifetime probability of dying from cancer, 24% for women and 29% for men. The implication is that too many people get cancer and we need to reduce the number of people dying from cancer.
Here’s the problem: everybody dies eventually. Everybody. You can kind of pick what you are more likely to die from, but immortality is not an option. Reducing the number of people dying from one cause will increase the number dying from another. I found a CDC graph which perfectly illustrates this.
If fewer people die from infectious diseases, more will die from heart disease. If fewer people die from heart disease, more people will die from cancer.
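The substitution effect can be shown with toy numbers (entirely made up here): cause-of-death shares always sum to 100%, so shrinking one share necessarily inflates the others.

```python
# Toy illustration: everyone dies of something, so causes of death
# are shares of a fixed total. Halving one cause redistributes those
# deaths among the remaining causes. All numbers are hypothetical.

causes = {"heart disease": 0.30, "cancer": 0.27, "other": 0.43}

removed = causes["heart disease"] * 0.5   # halve heart-disease deaths
causes["heart disease"] -= removed

# The "saved" individuals still die; redistribute proportionally.
rest = causes["cancer"] + causes["other"]
for k in ("cancer", "other"):
    causes[k] += removed * causes[k] / rest

print({k: round(v, 3) for k, v in causes.items()})
# shares still sum to 1.0; cancer's share rises from 0.27 to ~0.328
```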
Here’s the headline that I read today:
No. No. No. A thousand times no.
This article is based on a study of an MS drug in MICE published in Nature Neuroscience. I have two biology degrees and a PhD in epidemiology and I can’t understand the abstract. The headline seems to be derived from a chat with the paper’s author:
The possibility that the drug may treat PTSD is “one thing that comes to mind,” Spiegel said in a telephone interview. “That is a potential implication.”
Do not put faith in mouse studies. Ever.
According to the National Post headline, “Eating junk food before getting pregnant spikes risk of premature birth: researchers”.
No, it doesn’t.
The study “Preconception dietary patterns in human pregnancies are associated with preterm delivery” (JA Grieger, LE Grzeskowiak, and VL Clifton) was e-published ahead of print in the Journal of Nutrition (April 30, 2014). The authors concluded that nutrition before pregnancy was associated with pre-term birth. This was a retrospective, cross-sectional study, which means that the 309 pregnant women were surveyed one time about what they ate during the 12 months before they got pregnant. They were never asked what they ate during pregnancy, although the authors cite other studies showing that the two are typically similar.
Problem 1: The results could just reflect that diet during pregnancy affects risk of premature delivery.
After a lot of complicated analyses, the researchers identified three dietary patterns (high-protein/fruit, high-fat/sugar/takeaway, vegetarian-type) and assigned each woman a score based on how she compared to the average. They didn’t categorize mothers into separate groups and compare the incidence of pre-term delivery between them. The overall pre-term rate is 10%. What is the risk for those eating “junk food”? We don’t know. This type of analysis could produce completely different findings with a different group of moms.
Problem 2: Even the researchers acknowledge in their discussion that the results of this study “may not be generalizable to other populations”.
The article and the headline clearly overstate the findings of this study. I wouldn’t equate an odds ratio of 1.5 to a “spike” in risk. As long as associations are being used to make dietary recommendations, they could have emphasized that the meat-based, high-protein diet was associated with lower risk of pre-term birth while the vegetarian diet showed no benefit.
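To see how modest an odds ratio of 1.5 is on a 10% baseline, here is a quick back-of-envelope conversion. Both numbers come from the discussion above; this is an illustration, not a reanalysis of the study:

```python
# Convert an odds ratio to an absolute risk, given a baseline risk.
# The 10% baseline pre-term rate and OR of 1.5 are the figures cited above.

baseline_risk = 0.10
odds_ratio = 1.5

baseline_odds = baseline_risk / (1 - baseline_risk)  # 0.111...
exposed_odds = baseline_odds * odds_ratio
exposed_risk = exposed_odds / (1 + exposed_odds)     # back to a probability

print(f"Baseline risk: {baseline_risk:.1%}; exposed risk: {exposed_risk:.1%}")
# prints "Baseline risk: 10.0%; exposed risk: 14.3%"
```

Going from a 10% risk to roughly 14% is a real difference, but calling it a “spike” is a stretch.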
Health reporters need to make much more liberal use of the words “may”, “might”, “suggests”, etc. and the headline writers need to stop being so sensationalistic.
A survey funded by EMD Inc. Canada, a drug company which sells infertility drugs, and “conducted by Conceivable Dreams in partnership with other infertility patient advocacy groups” found that people believe that government should fund in vitro fertilization ($6,000+). The people surveyed also believe that family doctors should educate patients about fertility. How are they supposed to squeeze that into a visit?
Conceivable Dreams does an excellent job of clearly stating who was surveyed and who paid for the study but the news articles leave out some or almost all of that information.
Online survey conducted by a special interest group funded by a drug company with a vested interest in results? I hope governments don’t act based on these results.
I’m actually surprised at the high level of knowledge about fertility given the fact that most people will not experience it.
A conclusion is only as good as the data that it is based upon and I am always curious about the data.
One of today’s big “news” topics was a story about how unfit Canadian kids are, with the typical headline “Canadian kids continue to get failing fitness grade”. (I feel like this needs a dramatic sound effect.) Canadian kids are not fit? How do they know? What does that mean?
Well … all of the news reports are a result of the release of the “2014 Active Healthy Kids Canada Report Card on Physical Activity for Children and Youth“. Active Healthy Kids Canada is a charity whose purpose is to increase kids’ activity levels. I would be surprised if they said anything other than “kids need to be more active”.
You can download a long version of the report card on their site and it includes the data sources that they used.
One example: do children in various age groups meet the daily recommendation of at least 60 minutes of moderate-to-vigorous physical activity? This data came from the 2011-12 Canadian Health Measures Survey from Statistics Canada. The questions asked included “Over a typical or usual week, on how many days is he physically active for a total of at least 60 minutes per day?” (maximum answer: 4 or more days) and “About how many hours a week does he usually take part in physical activity that makes him out of breath or warmer than usual?”. The questions and answer choices appear to be carefully written, but the problem comes with summarizing the results. For moderate-to-vigorous exercise, the answers must have come from the second type of question, asked once for each of four settings (school/home, organized/free play), and then converted to an average number of hours per day. Two full weekend days playing sports does not equal 60+ minutes each day, but it would have to be classified that way given the available data.
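The averaging problem is easy to demonstrate with hypothetical numbers: a child active only on weekends can still average more than 60 minutes per day.

```python
# Hypothetical child: two 4-hour weekend days of sports, nothing
# during the week. Weekly minutes averaged per day clear the
# 60-minute daily bar even though activity happened on 2 of 7 days.

weekend_minutes_per_day = 240   # 4 hours of sports each weekend day
weekly_minutes = 2 * weekend_minutes_per_day
avg_per_day = weekly_minutes / 7

print(f"Average: {avg_per_day:.0f} minutes/day, from only 2 active days")
# prints "Average: 69 minutes/day, from only 2 active days"
```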
Other categories for report card marks relied upon completely different data sources including surveys of parents, surveys of students, surveys of schools, clinical exams, and direct measurement of steps per day. Data was from a few different years.
Comparisons were then made to 14 other countries ranging from Australia and Ireland to Mozambique and Nigeria. I can only assume that these countries also use a wide range of data sources of varying quality.
TL;DR: Statements on report card may not be 100% accurate.
And don’t even get me started on that headline. None of the variables in this report actually measured fitness. None of them.
Also, big surprise, non-government organizations promoting physical activity received an A-! Are those the same groups that support Active Healthy Kids Canada and use this report card to promote their agendas? How convenient.