Hiding the Signal: The Danish Aluminum Study
A Masterclass on how to design a study that "proves" no danger by hiding it
In this article I will discuss a large study of 1.2 million children, published last week, which examined the association between aluminum exposure from vaccines and chronic disease. It has already been appropriately critiqued elsewhere. I have written this summary of the study’s failings for several colleagues in medicine who recently asked my opinion. In my opinion the study serves as an excellent example of why we as medical professionals can no longer simply read the abstract and results, trusting that peer review has validated the findings.
Let’s say you were a researcher who was interested in knowing whether there was an association between aluminum exposure through vaccines and chronic diseases. You have access to a huge database of vaccination records and health outcomes for each person in the database. How would you approach the question?
The obvious approach would be to examine the health records of the kids who got the most vaccines and compare their health records to those who got none. That would give you the best chance of detecting a risk if one existed.
Let’s see how the authors of the recently published paper in the Annals of Internal Medicine, “Aluminum-Adsorbed Vaccines and Chronic Diseases in Childhood: A Nationwide Cohort Study,” put the question to rest. (The results made immediate headlines in legacy media, which touted this study as even more proof that the aluminum in vaccines has no appreciable adverse effect on health.)
Step 1: Deciding which children would be examined.
The older the children are, the more vaccines they would have had, so perhaps you would want to look at adolescents first, right?
Instead the authors chose to only look at kids who received aluminum-containing vaccines before the age of two.
Step 2: Deciding on the observational window.
The authors had access to records going back 28 years. Instead of extending the observational period to age 10 or 15, they chose to examine health outcomes until the age of 5 in their primary analysis (secondary analyses extended to age 8).
Step 3: Deciding on the exclusion criteria.
Some kids were lost to follow-up because they left the country. Some children died before the age of 2. Some had incomplete medical records. These kids were not included in the analysis. That makes sense.
However, the authors also chose to exclude 35,547 children from their analysis because they received “Too many registered vaccinations containing aluminum during their first 2 y of life”.
These are the children they should be most interested in because if there was an association it would most likely manifest in this subgroup. Wouldn’t it make sense to analyze this group for health outcomes before excluding them so that they could justify their decision? It would, but the authors didn’t do that.
Instead, the authors explain that these kids received an “implausible” number of vaccines. Why? Because they received more vaccines than were recommended. In other words, the vaccination records must be inaccurate.
How do they know that? Inherent in this assumption is that vaccination records are not always accurate. So they are applying a double standard: the excluded kids have inaccurate vaccination records, but everyone who remains does not.
Next, the researchers also excluded children who were showing symptoms of the very conditions under study before the age of 2. For at least one of these conditions, 466,047 kids were excluded. How did they know that these kids were not the most susceptible to chronic diseases from aluminum toxicity? They didn’t. Wouldn’t it make sense to see what their aluminum exposure was? It would, but the authors didn’t do any subgroup analysis to address this.
Step 4: Adjusting for confounders
Properly adjusting for confounders is the most difficult challenge in retrospective studies like this one, where a heterogeneous population is being analyzed. Some subgroups are more susceptible to the outcomes being examined than others for reasons other than aluminum exposure.
In this case the authors recognize many, with varying degrees of influence over the health of the child independent of aluminum exposure:
birth year and season
sex
maternal age at delivery
maternal place of birth
maternal smoking during pregnancy
parity
preterm birth
birthweight
number of visits to a general practitioner before age 2 years
maternal prescription drug use during pregnancy
selected maternal conditions within 5 years before the child’s birth date
parental household income
These “adjustments” are designed to obtain a clearer picture of the influence of aluminum exposure by eliminating differences in the study population that would otherwise produce differences in the risk of developing chronic diseases. That’s good; however, one of these variables is not a confounder that can be adjusted for.
Confounders are conditions present in the population being studied that affect the outcome of interest (the incidence of chronic disease). They must also not themselves be affected by the independent variable (aluminum exposure). For example, a given child’s aluminum exposure has no influence over maternal health, maternal place of birth, the child’s birthweight, or how many pregnancies his or her mother had, yet these factors are known to contribute to the risk of developing chronic diseases.
The investigators can adjust for confounders in a number of ways. For example, if we know how much each of these confounders contributes to chronic illness, their effects can be effectively subtracted so that what is left must be the contribution of the independent variable (aluminum exposure) on chronic illness. This is called Multivariable Regression analysis.
Another way to approach it would be to simply compare groups with the same proportion of confounders but differ only with respect to aluminum exposure and see if there is a difference in the incidence of chronic illness. If there were a difference it would be attributed to the difference of aluminum exposure. This is called Matching.
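Both approaches can be sketched with simulated data. The following is a minimal illustration with invented numbers, not the authors’ model: a hypothetical confounder that raises both exposure and outcome biases the crude regression estimate, and including it in the model recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical data-generating process (illustrative numbers, not from the study):
# a confounder C raises both the exposure X and the outcome Y.
C = rng.normal(size=n)                        # confounder (e.g. a maternal risk score)
X = 0.5 * C + rng.normal(size=n)              # exposure (centered aluminum dose)
Y = 0.3 * X + 0.8 * C + rng.normal(size=n)    # outcome; true exposure effect is 0.3

def slope_on_first(y, *covs):
    """OLS fit with intercept; returns the coefficient on the first covariate."""
    A = np.column_stack([np.ones(len(y)), *covs])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1]

crude = slope_on_first(Y, X)        # biased upward: picks up C's effect through X
adjusted = slope_on_first(Y, X, C)  # close to the true 0.3
print(f"crude: {crude:.2f}, adjusted: {adjusted:.2f}")
```

Matching would reach the same answer a different way: compare children with similar values of the confounder and attribute any remaining outcome difference to the exposure.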
The authors stated that they adjusted their models to account for the variables listed above. However, they also adjusted for the “number of visits to a general practitioner before age 2 years”. The problem here is that if a child is experiencing health issues from early aluminum exposure, parents will seek medical attention more often before the observational period even begins.
In other words, this factor may very well be dependent upon the independent variable in the study (aluminum exposure). In fact, that is the point of the entire study, to see if there is a connection between aluminum exposure and chronic diseases.
By adjusting for this the authors would be erasing the very signal they should be looking for.
Let’s look at this more closely. Could the researchers determine what effect the number of visits to a general practitioner has on the incidence of chronic illness? First, note that an increasing number of doctor visits doesn’t cause chronic disease; it’s the other way around. There is, however, an association between the propensity to seek medical care and the general health of the patient, and likely a very strong one that could be calculated.
What happens if they “correct” for this calculated association by subtracting its influence upon the incidence of chronic disease later (Multivariable Regression)? They would be erasing at least some of the signal they were purportedly searching for.
To see this clearly let us assume that aluminum exposure causes chronic disease and this leads to frequent doctor visits before the age of two. More exposure always leads to more doctor visits. A properly designed study should have no problem detecting an association between aluminum and disease if this was what was happening.
Vaccinated kids would have sought medical care more often than unvaccinated kids, exactly as the previously established association predicts. But the researchers decide to “correct” for this “confounder” because doctor visits are associated with the development of chronic diseases too. Can you see the problem? They would effectively be treating some of the kids who developed disease from aluminum exposure as kids who developed disease because they sought medical attention more often. Those outcomes would therefore be removed from the calculation. It’s a crafty way to hide the signal you are looking for.
What if they instead observed children who had the same number of doctor visits separately and determined the incidence of chronic disease as a function of aluminum exposure (Matching)? Can you see the problem here? Within each matched group, the kids would all have similar aluminum exposure and similar chronic disease risk, so the risk of aluminum would be buried. Because matching doesn’t allow any comparison between groups of kids who had a different number of doctor visits, a connection to aluminum exposure won’t be apparent. Another slick maneuver.
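The numbers make this concrete. In the minimal simulation below (invented values, not the authors’ data), exposure raises disease risk and sicker children visit the doctor more often, so visits sit downstream of the outcome. Adjusting for visits, whether by regression or by comparing only within strata of similar visit counts, shrinks the estimated exposure effect toward zero even though the true effect never changes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical setup (illustrative numbers only): exposure causes disease,
# and disease drives doctor visits -- visits are downstream of the outcome.
X = rng.normal(size=n)                   # exposure (centered aluminum dose)
Y = 0.3 * X + rng.normal(size=n)         # disease burden; true effect of X is 0.3
V = 0.8 * Y + 0.5 * rng.normal(size=n)   # visits: sicker children visit more

def slope_on_first(y, *covs):
    """OLS fit with intercept; returns the coefficient on the first covariate."""
    A = np.column_stack([np.ones(len(y)), *covs])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1]

unadjusted = slope_on_first(Y, X)      # recovers roughly 0.3
regression = slope_on_first(Y, X, V)   # "adjusting" for visits crushes the estimate

# Matching analogue: estimate the slope only within strata of similar visit counts.
edges = np.quantile(V, np.linspace(0, 1, 6))
stratum = np.digitize(V, edges[1:-1])  # assigns each child to one of 5 visit strata
within = np.mean([slope_on_first(Y[stratum == k], X[stratum == k]) for k in range(5)])

print(f"unadjusted: {unadjusted:.2f}, "
      f"regression-adjusted: {regression:.2f}, within-strata: {within:.2f}")
```

Nothing about the disease process changes between the three numbers; only the analyst’s choice of what to “correct” for does.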
I simply cannot accept the idea that the authors of the study would do this inadvertently. This was done intentionally. And the reviewers didn’t catch it. The implications are disquieting.
Step 5: Analyze and report your findings
So what did they do with their data? Did they compare health outcomes of the least vaccinated to the most vaccinated? No, they did not. Instead they calculated hazard ratios for each of the specified outcomes per mg of aluminum exposure. They did exactly the opposite of what we would have predicted; they looked for differences between kids who had small differences in exposure. Why would they take pains to examine 1.2 million children whose aluminum exposure ranged from none to over 4 mg, then decide to see whether a single mg of aluminum resulted in any appreciable difference in health?
The authors summarized their findings in a forest plot above.
The vertical dashed line is centered on a Hazard Ratio of 1, i.e. no increased risk or benefit per 1 mg difference in aluminum exposure. The black diamonds represent the hazard ratios for each of the specified outcomes. The horizontal lines attached to each diamond represent the 95% confidence intervals. In other words, if a confidence interval extends across the vertical line, the effect of a mg of extra aluminum exposure through vaccination on that particular chronic disease is not statistically significant.
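Reading a forest plot reduces to one check per row: does the 95% confidence interval around the hazard ratio include 1? A small sketch of that arithmetic, using made-up example numbers rather than values from the paper:

```python
import math

def hr_confint(hr, se_log, z=1.96):
    """95% confidence interval for a hazard ratio, given its log-scale standard error."""
    return (math.exp(math.log(hr) - z * se_log),
            math.exp(math.log(hr) + z * se_log))

def crosses_one(hr, se_log):
    """True when the interval spans HR = 1, i.e. the finding is not statistically significant."""
    lo, hi = hr_confint(hr, se_log)
    return lo <= 1.0 <= hi

# Made-up illustrations:
print(hr_confint(0.93, 0.02))    # narrow interval entirely below 1: a significant decrease
print(crosses_one(0.93, 0.02))
print(crosses_one(1.05, 0.10))   # wide interval spanning 1: not significant
```

Because the interval is symmetric only on the log scale, significance is always judged against the log of the hazard ratio, which is why the helper works in log units.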
Step 6: Discussing your findings
If there were a statistically significant increased risk of developing a chronic disease from a mg of aluminum, the black diamond and its entire confidence interval would appear on the right side of the vertical line. As you can see, none of the conditions are associated with more aluminum exposure. In fact most are centered on the left side (Hazard Ratio < 1) of the dashed line. Hence the authors conclude that aluminum exposure is not associated with any of the chronic diseases they identified.
However the authors neglect to mention the most glaring finding: several of the outcomes have a statistically significant decrease in their hazard ratio. These include asthma, angioedema, unspecified allergies, food allergy, autism, ADHD and Neurodevelopmental outcomes as a whole.
The problem here is that there is no plausible reason why increasing exposure to aluminum, an adjuvant that stimulates an inflammatory response, would lead to a decrease in these autoimmune and inflammatory conditions. The authors are not proving that there is no increased risk of these conditions with increasing aluminum exposure; they are proving that their study is confounded at some level.
How do they explain this startling finding? They don’t. Worse, their peer-reviewers didn’t demand that they do so either.
The reality is that there is very little published evidence against which to judge the plausibility of the authors’ results. Either there is no prior evidence of an association between vaccine load and chronic diseases, or evidence of an association has been filtered out of medical journals.
Unfortunately for those who hail these findings as definitive proof of aluminum safety, there is at least one study that completely upends the Danish study. Here are the results from its primary analysis:
The study examined 9 year olds from the Florida Medicaid system during the approximate years of the Danish study. But instead of grouping all the children together and calculating differences in outcomes as a function of small increments in vaccine exposure, the authors did what we would expect: compare those with no exposure to those with increasing exposure as evidenced by the number of vaccination visits.
A single vaccination visit was associated with a 70% increase in Autism Spectrum Disorder. With more visits came a greater risk, demonstrating dose dependency, a hallmark of causality.
The control group (something conspicuously absent from the Danish study) comprised about five thousand children who received no vaccinations. In other words, this study was tiny compared to the Danish study, yet the results carried high statistical significance. This means the signal was very strong. The absence of the signal in the much bigger Danish study is a testament to the impact of the authors’ inexcusable and nonsensical analytic approach.
The Danish study did, in fact, include a cohort of children with zero aluminum exposure. The authors note that
“Only 15 237 children (1.2%) did not receive any aluminum-adsorbed vaccines before age 2 years.”
There were three times as many unvaccinated children in their study compared to the Florida study. How did they handle them? They chose not to examine them independently as a control group. Instead they address them in their secondary analysis in two ways.
First they recalculated their adjusted hazard ratios after excluding them from the analysis and found no appreciable difference. No surprise there; these children represented only a tiny fraction of the entire population examined.
Next, they divided the entire study population into three groups based on low, medium and high exposure and compared their outcomes. The unvaccinated were included with a larger group of low exposed children.
In other words, in both cases they mixed them into larger groups of vaccinated kids instead of examining them separately. And, as if that weren’t enough, they also inappropriately corrected for the number of doctor visits in these analyses as well. Why would they do something like this?
What were they thinking?
They either weren’t thinking, or they were once again being strategic in their approach to hide a safety signal. I cannot explain it in any other way. If you have a better explanation, please share it in the comments.
Conclusion
Briefly, the authors chose to
examine very young children (who therefore had relatively little aluminum exposure from vaccination)
only consider outcomes early in life
exclude kids who were potentially showing signs of aluminum toxicity early
exclude kids who potentially received the most vaccinations of all
confine the primary analysis to searching for outcome differences as a function of small differences in exposure
inappropriately adjust for propensity to seek medical care and
bury what could have served as a large control group among a much larger group of vaccinated kids.
James Lyons-Weiler, PhD summarized the study in his substack newsletter “Popular Rationalism”:
“The study is compromised not by a single flaw, but by an ensemble of methodological maneuvers—each subtle, plausible-sounding, and common in pharmacoepidemiology—that, when combined, render the entire exercise a façade of scientific inquiry.”
He is correct. The authors aren’t incompetent; they were crafty—crafty enough to fool legacy media who proclaimed their work as definitive. But the media weren’t the only ones who were hoodwinked.
STAT News quoted Dr. Edward Belongia, recently retired infectious disease and research scientist from the Marshfield Clinic Research Institute, who called the new paper “the largest and most definitive observational study on the safety of vaccine-related aluminum exposure in children” ever conducted.
Belongia went on:
“This is a rigorous and well-designed study that should put to rest any lingering doubts about the potential risks to children from cumulative aluminum exposure in vaccines.”
Belongia, incidentally, also served on the Centers for Disease Control and Prevention’s Advisory Committee on Immunization Practices (ACIP). Once again, the implications are disquieting.
Another former Advisory Committee on Immunization Practices (ACIP) advisor and mainstream media spokesperson for the wildly profitable vaccine industry, Dr. Paul Offit, appeared on CNN within a few days, calling the study “amazing,” assuring the public, “if there is such a thing as a definitive study, this is the definitive study.”
Offit went further, warning us that DHHS Secretary Robert F. Kennedy Jr. will most certainly respond with a “bogus” study that shows harm from aluminum salts in vaccines.
MedPageToday, a widely read publication among medical professionals, lauded the study in an article titled “Aluminum in Vaccines Not Culprit in Kids' Chronic Diseases, Study Shows” with absolutely no comment from any critics.
This study wasn’t proof of the safety of vaccination-driven aluminum exposure; it was proof of the devious methods researchers will employ to hide potential safety signals, and of the incompetence of medical newsletters like STAT News, whose journalists report “from the frontiers of Health and Medicine”.
It was also proof of the failure of peer-review. I am just an anesthesiologist who chose to read the study thoroughly. At the very least the peer reviewers should have demanded that the authors respond to at least some of the obvious problems I identified in their discussion.
This study is reminiscent of another that made headlines three years ago involving the use of Ivermectin to prevent hospitalization from Covid-19. Instead of hiding the potential danger of aluminum exposure from vaccination, that study was designed to hide the benefit of Ivermectin in treating Covid.
The study examined a relatively small number of patients (fewer than 2,000) in Brazil during the spring of 2021, when the pandemic response was in full gear and our authorities were stressing full vaccine compliance.
At the time there were already dozens of published studies demonstrating significant benefits of Ivermectin treatment with regard to hospitalization, especially if treated early. Yet this study, “Effect of Early Treatment with Ivermectin among Patients with Covid-19” published in the highly regarded New England Journal of Medicine, found no benefit of this extremely cheap, safe and fully licensed drug:
Once again, we have a forest plot showing how subgroups fared when given Ivermectin rather than placebo as outpatients. A black square to the left of the vertical line means that subgroup had a lower chance of requiring hospitalization when treated with Ivermectin instead of a placebo.
The confidence intervals extended across the point of no effect for all subgroups. In other words, the benefit was not statistically significant. The NYTimes wasted no time in reporting the “failure” of Ivermectin.
Why did Ivermectin fail if so many prior studies showed benefit? One reason is that the study was too small to show a statistically significant benefit. In fact, the results are quite promising: most subgroups seemed to do better even with a relatively small number of people enrolled.
Another reason the study failed to show a benefit of Ivermectin is that the participants were underdosed. The protocol used by the researchers delivered only about 40% of the dose that had previously been used to treat Covid effectively. It’s very easy to show “no effect” for a therapeutic agent: just administer it at a sub-therapeutic dose.
If we lived in a world where researchers always sought meaningful results for the public they would have used a dosing regimen that was known to be effective.
If we lived in a world where media appropriately critiqued controversial topics the New York Times would have cited opposing opinions, like the one I wrote for The Defender the day this study was released.
If we lived in a world where our agencies of public health were doing their best for, say, public health, they would have recognized the promising results from this small study and done the obvious: conduct another study, a bigger one which used an appropriate dosing regimen. After all, we were in a global pandemic! Why would they instead deem this medicine as “horse paste” and spur public ridicule of anyone who suggested its potential benefit?
Studies appearing in respected, peer-reviewed journals are not what we think they are. Some, like this one and the Danish one, are sophisticated propaganda curated specifically for medical professionals who implicitly trust the editorial boards of the publications they have been taught to believe.
The medical orthodoxy is in a sense-making crisis. The first step is to recognize that we are in one because those we have entrusted to be rigorous and objective aren’t. Hopefully studies like these will help make this blatantly obvious.
Please leave your comments.