2015-05-17

Microbiome Nonsense: response to "Chowing Down On Meat"

As the claim that animal protein and saturated fat are unhealthy becomes less and less tenable, those who have the intuition that animal-based nutrition must be bad for you are looking elsewhere.

There was great excitement at the end of 2014 about a study published in Nature demonstrating rapid changes in human gut microbes in response to animal-based vs. plant-based diets [1]. The paper is very interesting, and it has a lot of original data of a kind we've often wished for. The authors, however, go on to interpret their findings without apparent restraint.

An NPR report on the study, Chowing Down On Meat, Dairy Alters Gut Bacteria A Lot, And Quickly, gets right to the point:

"Looks like Harvard University scientists have given us another reason to walk past the cheese platter at holiday parties and reach for the carrot sticks instead: Your gut bacteria will thank you."

and finally:

""I mean, I love meat," says microbiologist Lawrence David, who contributed to the study and is now at Duke University. "But I will say that I definitely feel a lot more guilty ordering a hamburger ... since doing this work," he says."

That's right. The excitement in the blogosphere was not so much about the clear results — that the changes in the gut flora in response to diet are fast and large — but about the authors' opinions that the observed changes support a link between meat consumption and inflammatory bowel disease (IBD).

We take exception to these claims: they are not well founded by the data in this study or any other, and do not warrant the conclusions drawn. We consider it irresponsible at best to suggest that a dietary practice is harmful to health when the evidence is weak, especially when one is in a position of authority and subject to high publicity.

We address the following points in turn.

The Claims about Inflammatory Bowel Disease

Here are some quotes from the paper stressing the possible dangers of a carnivorous diet, based on a supposed link to IBD — inflammatory bowel disease. Notice that they use language that implies the claims are proven when, as we will show, they are not.

"increases in the abundance and activity of Bilophila wadsworthia on the animal-based diet support a link between dietary fat, bile acids and the outgrowth of microorganisms capable of triggering inflammatory bowel disease [6]" — Abstract

"Bile acids have been shown to cause inflammatory bowel disease in mice by stimulating the growth of the bacterium Bilophila[6], which is known to reduce sulphite to hydrogen sulphide via the sulphite reductase enzyme DsrA (Extended Data Fig. 10)." — from figure 5, page 4.

"Mouse models have also provided evidence that inflammatory bowel disease can be caused by B. wadsworthia, a sulphite-reducing bacterium whose production of H2S is thought to inflame intestinal tissue [6]. Growth of B. wadsworthia is stimulated in mice by select bile acids secreted while consuming saturated fats from milk. Our study provides several lines of evidence confirming that B. wadsworthia growth in humans can also be promoted by a high-fat diet. First, we observed B. wadsworthia to be a major component of the bacterial cluster that increased most while on the animal-based diet (cluster 28; Fig. 2 and Supplementary Table 8). This Bilophila-containing cluster also showed significant positive correlations with both long-term dairy (P , 0.05; Spearman correlation) and baseline saturated fat intake (Supplementary Table 20), supporting the proposed link to milk-associated saturated fats[6]. Second, the animal-based diet led to significantly increased faecal bile acid concentrations (Fig. 5c and Extended Data Fig. 9). Third, we observed significant increases in the abundance of microbial DNA and RNA encoding sulphite reductases on the animal-based diet (Fig. 5d, e). Together, these findings are consistent with the hypothesis that diet-induced changes to the gut microbiota may contribute to the development of inflammatory bowel disease." — last paragraph, emphasis ours.

This concern is prominent in the paper; they start with it and end with it. It is based on a single citation to a study in mice.

Reasons those claims are not warranted

Let's look at that study (Dietary-fat-induced taurocholic acid promotes pathobiont expansion and colitis in Il10−/− mice [2]):

Here's the abstract (emphasis ours):

"The composite human microbiome of Western populations has probably changed over the past century, brought on by new environmental triggers that often have a negative impact on human health1. Here we show that consumption of a diet high in saturated (milk-derived) fat, but not polyunsaturated (safflower oil) fat, changes the conditions for microbial assemblage and promotes the expansion of a low-abundance, sulphite-reducing pathobiont, Bilophila wadsworthia2. This was associated with a pro-inflammatory T helper type 1 (TH1) immune response and increased incidence of colitis in genetically susceptible Il10−/−, but not wild-type mice. These effects are mediated by milk-derived-fat-promoted taurine conjugation of hepatic bile acids, which increases the availability of organic sulphur used by sulphite-reducing microorganisms like B. wadsworthia. When mice were fed a low-fat diet supplemented with taurocholic acid, but not with glycocholic acid, for example, a bloom of B. wadsworthia and development of colitis were observed in Il10−/− mice. Together these data show that dietary fats, by promoting changes in host bile acid composition, can markedly alter conditions for gut microbial assemblage, resulting in dysbiosis that can perturb immune homeostasis. The data provide a plausible mechanistic basis by which Western-type diets high in certain saturated fats might increase the prevalence of complex immune-mediated diseases like inflammatory bowel disease in genetically susceptible hosts."

Translation:

They took some mice that were particularly susceptible to colitis, along with some regular mice, and fed them one of three diets: a low-fat diet (if we're reading it correctly, they used the AIN-93M Purified Diet from Harlan, which is about 10% fat), or a diet with 37% fat, either polyunsaturated (safflower oil) or saturated milk fat. They didn't specify the amounts of carbohydrate or protein, but we assume the diets were about 10-15% protein, leaving roughly 50% carbohydrate.
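To make that assumption explicit, here's a minimal sketch of the arithmetic; the 37% fat figure is from the paper, while the protein range (and the script itself) is our own guess, not anything the authors report:

```python
# Back-of-the-envelope macronutrient split for the high-fat mouse diets.
# The 37% fat share is reported in the paper; the protein share is assumed.
fat_pct = 37.0

for protein_pct in (10.0, 15.0):  # our assumed range, not from the paper
    carb_pct = 100.0 - fat_pct - protein_pct
    print(f"assuming {protein_pct:.0f}% protein -> ~{carb_pct:.0f}% carbohydrate")

# assuming 10% protein -> ~53% carbohydrate
# assuming 15% protein -> ~48% carbohydrate
```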

The mice that got the high milk-fat diet had a significant increase in the gut bacterium called Bilophila wadsworthia. The susceptible mice on the high milk-fat diet got colitis at a high rate (more than 60% in 6 months). The other susceptible mice, those on the low-fat or polyunsaturated-fat diets, also got colitis, but at a lower rate (25-30%). The regular mice didn't get colitis, even on the high milk-fat diet.

What's the problem with knockout mice?

The mice that got colitis were susceptible because they were genetically manipulated to not function normally. Specifically, they couldn't produce something called interleukin-10 (IL-10). IL-10 has many complex actions, including fighting against inflammation in multiple ways.

The argument made by the scientists is that Bilophila wadsworthia must induce inflammation, and that colitis probably comes about in people who are less effective at fighting that inflammation, just like the knockout mice. This seems intuitive, but it is certainly not proven by the experiment.

Look at it this way:

Suppose we didn't know the cause of phenylketonuria, a genetic disorder that makes the victim unable to make the enzyme necessary to process the amino acid phenylalanine. We could knock out that gene in an animal, feed it phenylalanine, watch it suffer retardation and seizures, and conclude that phenylalanine must promote brain problems. This would be a mistake, of course. Phenylalanine is an essential amino acid occurring in breast milk. As far as we know, there is nothing unhealthy about it, as long as you don't have a genetic mutation interfering with its metabolism.

It is, of course, possible that Bilophila wadsworthia inflames the colon. As a hypothesis, based on this study, it is not by itself objectionable.

What we object to is the leap to citing Bilophila wadsworthia as a cause of colitis, as in the second excerpt above, which we repeat here:

"Bile acids have been shown to cause inflammatory bowel disease in mice by stimulating the growth of the bacterium Bilophila[6], which is known to reduce sulphite to hydrogen sulphide via the sulphite reductase enzyme DsrA (Extended Data Fig. 10)." — from figure 5, page 4.

In fact, Bilophila did not appear to affect the normal mice at all!

There is no claim that the genetic mutation in the mice has any relation to genetic susceptibility to IBD in humans, yet it is implied that natural human susceptibility might work the same way.

Hydrogen Sulfide

In the knockout-mice study, a second experiment was done to determine whether the increase in Bilophila wadsworthia seen in the milk-fat condition came from a particular bile acid, taurocholic acid. They fed the knockout mice a low-fat diet supplemented with either taurocholic acid (TC) or glycocholic acid (GC). They confirmed that Bilophila wadsworthia was increased by taurocholic acid and not by glycocholic acid.

What else do we know about taurocholic acid?

According to the authors of this study, it is "a rich source of organic sulphur, […] resulting in the formation of H2S [hydrogen sulfide]". In one figure they even demonstrated the presence of Bilophila wadsworthia by the presence of H2S.

But H2S can be beneficial:

  • There is emerging evidence that H2S has diverse anti-inflammatory effects, and that its pro-inflammatory effects may occur only at very high levels [3].
  • The levels needed for harm are probably higher than those that occur naturally [4].
  • H2S levels in the blood are associated with high HDL, low LDL, and high adiponectin in humans [5], all considered good things.

Moreover, there is now evidence that colon cells in particular can use H2S as fuel, and lots of it. Other researchers have used a similar argument in the opposite direction: they claim that eating fiber is healthy because of the butyrate generated from it in the colon, which colon cells then use as fuel. While we have problems with that argument, the contrast reveals a pervasive bias: the fuel argument is used when it supports plants, but ignored when it doesn't.

Taking all this into account, it is not at all clear that the higher levels of sulfite-reducing bacteria seen in the meat and cheese eaters were unhealthy.

What would happen if a human sufferer of IBS went on an animal-foods-only diet?

It's clear that these researchers were not studying IBS at all. They were studying gut bacteria, found an association, and cherry-picked one study suggesting that what they found in the animal-diet results might be unhealthy.

If they were studying IBS, they might have noticed reasons to hypothesize that a diet low in fiber [6], [7], carbohydrates [8], or fermentable carbohydrates [9] would help IBS sufferers. If humans who are susceptible to IBS were susceptible in the same way as the knockout mice in the cited study, these results would be surprising. Instead, these results, in combination with the animal-based diet paper, should further decrease our belief that the mouse results have any relevance at all.

Moreover, unless the authors are advocating a diet of low-fiber, low-carb plants (we can't think of any plants like that off the tops of our heads...), they are encouraging IBS sufferers to eat foods that may worsen their condition.

We don't know what would happen in an all meat trial for IBS, but we'd love to find out.

In Sum

The supposed link between the animal diet and inflammatory bowel disease is composed of a chain of weak links:

A kind of bacterium they found in those eating meat and cheese was also found in a mouse study that suggested a link between the bacterium and IBD.

However:

  • It used animals that were genetically engineered to not function normally.
  • It did not and cannot establish causality between the observed gut bacteria changes and the increased level of disease.
  • It was merely an observation of the two coinciding, along with a plausible mechanism, i.e., a clever story about how this might be a causal relationship.

This plausible mechanism is not as clean a story as it appears. Presenting it as such is downright misleading.

References

[1]

Diet rapidly and reproducibly alters the human gut microbiome

Lawrence A. David, Corinne F. Maurice, Rachel N. Carmody, David B. Gootenberg, Julie E. Button, Benjamin E. Wolfe, Alisha V. Ling, A. Sloan Devlin, Yug Varma, Michael A. Fischbach, Sudha B. Biddinger, Rachel J. Dutton & Peter J. Turnbaugh
Nature (2013) doi:10.1038/nature12820
[2]

Dietary-fat-induced taurocholic acid promotes pathobiont expansion and colitis in Il10−/− mice

Suzanne Devkota, Yunwei Wang, Mark W. Musch, Vanessa Leone, Hannah Fehlner-Peach, Anuradha Nadimpalli, Dionysios A. Antonopoulos, Bana Jabri & Eugene B. Chang
Nature (2012) doi:10.1038/nature11225
[3]

Evidence type: review and non-human animal experiment

Wallace JL.
Trends Pharmacol Sci. 2007 Oct;28(10):501-5. Epub 2007 Sep 19.

"The notion of H2S being beneficial at physiological concentrations but detrimental at supraphysiological concentrations bears similarity to the situation with nitric oxide (NO), another gaseous mediator, which shares many biological effects with H2S. Also in common with NO, there is emerging evidence that physiological concentrations of H2S produce anti-inflammatory effects, whereas higher concentrations, which can be produced endogenously in certain circumstances, can exert pro-inflammatory effects [5]. Here, I focus on the anti-inflammatory effects of H2S, and on the concept that these effects can be exploited in the development of more effective and safer anti-inflammatory drugs."

[4]

Evidence type: review and non-human animal experiment

Wallace JL.
Trends Pharmacol Sci. 2007 Oct;28(10):501-5. Epub 2007 Sep 19.

(Emphasis ours)

"How much H2S is physiological? "H2S is present in the blood of mammals at concentrations in the 30–100 m M range, and in the brain at concentrations in the 50–160 m M range [1–3]. Even after systemic administration of H2S donors at doses that produce pharmacological effects, plasma H2S concentrations seldom rise above the normal range, or do so for only a very brief period of time [24,27]. This is, in part, due to the efficient systems for scavenging, sequestering and metabolizing H2S. Metabolism of H2S occurs through methylation in the cytosol and through oxidation in mitochondria, and it is mainly excreted in the urine [1]. It can be scavenged by oxidized glutathione or methemoglobin, and can bind avidly to hemoglobin. Exposure of certain external surfaces andtissues to H2S can trigger inflammation [28], perhaps because of a relative paucity of the above-mentioned scavenging, metabolizing and sequestering systems. The highest concentrations of H2S in the body occur in the lumen of the colon, although there is some disagreement [29] as to whether theconcentrations of ‘free’ H2S really reach the millimolar concentrations that have been reported in some studies [30,31]. Although often alluded to [32,33], there is no direct evidence that H2S causes damage to colonic epithelial cells. Indeed, colonocytes seem to be particularly well adapted to use H2S as a metabolic fuel [4]. "There have been several suggestions that H2S might trigger mutagenesis, particularly in the colon. For example, one recent report [33] suggested that the concentrations of H2S in ‘the healthy human and rodent colon’ are genotoxic. Despite the major conclusion of that study, the authors observed that exposure of cultured colon cancer epithelial cells (i.e. transformed cells) to concentrations of Na2S as high as 2 mM for 72 hours did not cause any changes consistent with a genotoxic effect (nor cell death). It was only when the experiments were performed in the presence of two inhibitors of DNA repair, and only with a concentration of 2 mM, that they were able to detect a significant genotoxic signal. It is also important to bear in mind that the concentrations of H2S used in studies such as that described above are often referred to as those found in the ‘healthy’ colon. Clearly, if concentrations of H2S in the healthy colon do reach the levels reported, and if H2S has the capacity to produce genotoxic changes and/or to reduce epithelial viability, there must be systems in place to prevent the putative untoward effects of this gaseous mediator – otherwise, the colon would probably not be ‘healthy’"

[5]

Evidence type: observational

Jain SK, Micinski D, Lieblong BJ, Stapleton T.
Atherosclerosis. 2012 Nov;225(1):242-5. doi: 10.1016/j.atherosclerosis.2012.08.036. Epub 2012 Sep 10.

"Hydrogen sulfide (H2S) is an important signaling molecule whose blood levels have been shown to be lower in certain disease states. Increasing evidence indicates that H2S plays a potentially significant role in many biological processes and that malfunctioning of H2S homeostasis may contribute to the pathogenesis of vascular inflammation and atherosclerosis. This study examined the fasting blood levels of H2S, HDL-cholesterol, LDL-cholesterol, triglycerides, adiponectin, resistin, and potassium in 36 healthy adult volunteers. There was a significant positive correlation between blood levels of H2S and HDL-cholesterol (r=0.49, p=0.003), adiponectin (r=0.36, p=0.04), and potassium (r=0.34, p=0.047), as well as a significant negative correlation with LDL/HDL levels (r= -0.39, p=0.02). "

[6]

Evidence type: preliminary experiment

J. T. Woolner and G. A. Kirby
Journal of Human Nutrition and Dietetics Volume 13, Issue 4, pages 249–253, August 2000

"Abstract

Introduction High-fibre diets are frequently advocated for the treatment of irritable bowel syndrome (IBS) although there is little scientific evidence to support this. Experience of patients on low-fibre diets suggests that this may be an effective treatment for IBS, warranting investigation.

Methods Symptoms were recorded for 204 IBS patients presenting in the gastroenterology clinic. They were then advised on a low-fibre diet with bulking agents as appropriate. Symptoms were reassessed by postal questionnaire 4 weeks later. Patients who had improved on the diet were advised on the gradual reintroduction of different types of fibre to determine the quantity and type of fibre tolerated by the individual.

Results Seventy-four per cent of questionnaires were returned. A significant improvement (60–100% improvement in overall well-being) was recorded by 49% of patients.

Conclusion This preliminary study suggests that low-fibre diets may be an effective treatment for some IBS patients and justifies further investigation as a full clinical trial."

[7]

Evidence type: Review

Eswaran S, Muir J, Chey WD.
Am J Gastroenterol. 2013 May;108(5):718-27. doi: 10.1038/ajg.2013.63. Epub 2013 Apr 2.

"Abstract

Despite years of advising patients to alter their dietary and supplementary fiber intake, the evidence surrounding the use of fiber for functional bowel disease is limited. This paper outlines the organization of fiber types and highlights the importance of assessing the fermentation characteristics of each fiber type when choosing a suitable strategy for patients. Fiber undergoes partial or total fermentation in the distal small bowel and colon leading to the production of short-chain fatty acids and gas, thereby affecting gastrointestinal function and sensation. When fiber is recommended for functional bowel disease, use of a soluble supplement such as ispaghula/psyllium is best supported by the available evidence. Even when used judiciously, fiber can exacerbate abdominal distension, flatulence, constipation, and diarrhea."

[8]

Evidence type: uncontrolled experiment

Austin GL, Dalton CB, Hu Y, Morris CB, Hankins J, Weinland SR, Westman EC, Yancy WS Jr, Drossman DA.
Clin Gastroenterol Hepatol. 2009 Jun;7(6):706-708.e1. doi: 10.1016/j.cgh.2009.02.023. Epub 2009 Mar 10.

"Abstract Background & Aims

Patients with diarrhea-predominant IBS (IBS-D) anecdotally report symptom improvement after initiating a very low-carbohydrate diet (VLCD). This is the first study to prospectively evaluate a VLCD in IBS-D. Methods

Participants with moderate to severe IBS-D were provided a 2-week standard diet, then 4 weeks of a VLCD (20 grams of carbohydrates/day). A responder was defined as having adequate relief (AR) of gastrointestinal symptoms for 2 or more weeks during the VLCD. Changes in abdominal pain, stool habits, and quality of life (QOL) were also measured. Results

Of the 17 participants enrolled, 13 completed the study and all met the responder definition, with 10 (77%) reporting AR for all 4 VLCD weeks. Stool frequency decreased (2.6 ± 0.8/day to 1.4 ± 0.6/day; p<0.001). Stool consistency improved from diarrheal to normal form (Bristol Stool Score: 5.3 ± 0.7 to 3.8 ± 1.2; p<0.001). Pain scores and QOL measures significantly improved. Outcomes were independent of weight loss. Conclusion

A VLCD provides adequate relief, and improves abdominal pain, stool habits, and quality of life in IBS-D."

[9]

Evidence type: review

Suma Magge, MD and Anthony Lembo, MD
Gastroenterol Hepatol (N Y). 2012 Nov; 8(11): 739–745.

"Summary

A low-FODMAP diet appears to be effective for treatment of at least a subset of patients with IBS. FODMAPs likely induce symptoms in IBS patients due to luminal distention and visceral hypersensitivity. Whenever possible, implementation of a low-FODMAP diet should be done with the help of an experienced dietician. More research is needed to determine which patients can benefit from a low-FODMAP diet and to quantify the FODMAP content of various foods, which will help patients follow this diet effectively."

2015-05-15

Ornish Diet Worsens Heart Disease Risk: Part I

Dr. Dean Ornish has come under a lot of criticism lately for his misleading statements about diet and heart disease. See, for example: Critique of Dean Ornish Op-ed, by Nina Teicholz, and Why Almost Everything Dean Ornish Says about Nutrition Is Wrong, from Scientific American.

Ornish made his name with a study that claimed to actually reverse heart disease [1]. There are at least three problems with the study.

First, it included several confounders to the dietary regimen. For example, the intervention group spent an hour a day on stress-management techniques, such as meditation, and exercised three hours a week.

Second, although it was touted as the first study to look at "actual" heart disease results, it made no measurements of cardiac events! Instead, it was based on measuring stenosis — the degree of narrowing of coronary arteries. Considering that stenosis is only a predictor of cardiac events, it seems disingenuous to call it a direct measure of heart disease.

Stenosis is used to predict heart disease (though the blockages that ultimately cause cardiac events are often not the ones previously identified [2]). However, the measurement has a lot of variability, so differences in measurements over time need to be quite large to show a true progression or regression rather than error. We found three studies attempting to pinpoint the minimum difference in measurements that justifies such a claim; they recommended 15%, 9.3%, and 7.8%, respectively [3], [4], [5].

So how much reduction of stenosis was there in Ornish's study?

"The average percentage diameter stenosis decreased from 40.0 (SD 16.9)% to 37.8 (16.5)% in the experimental group yet progressed from 42.7 (15.5)% to 46.11 (18.5)% in the control group (p = 0.001, two-tailed)."

That's the extent of the success after a year: a -2.2 percentage-point change supporting the claim of "regression" vs. a +3.4-point change for the claim of "progression". Neither change reaches even the most lenient of the recommended thresholds (7.8%), given the variability of the measurement tool.
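To spell out the comparison, here is a minimal sketch using the figures quoted above and the three thresholds from [3], [4], [5]; the arithmetic is ours, not the trial's:

```python
# Stenosis changes reported in the Lifestyle Heart Trial, in percentage points.
experimental_change = 37.8 - 40.0   # -2.2 (claimed "regression")
control_change = 46.1 - 42.7        # +3.4 (claimed "progression")

# Minimum changes recommended to call progression/regression real [3], [4], [5].
thresholds = [15.0, 9.3, 7.8]

for name, change in [("experimental", experimental_change),
                     ("control", control_change)]:
    cleared = [t for t in thresholds if abs(change) >= t]
    print(f"{name}: {change:+.1f} points; clears {len(cleared)} of 3 thresholds")

# experimental: -2.2 points; clears 0 of 3 thresholds
# control: +3.4 points; clears 0 of 3 thresholds
```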

Fortunately, there were other measurements taken that are also predictors of cardiac events: blood lipids. Even the AHA, an association that changes its mind slowly in response to evidence, considers triglycerides above 100 mg/dL to be higher than optimal [6]. Low HDL is a strong marker of heart disease, with HDL below 40 mg/dL considered by the AHA a "major heart disease risk factor" [7]. The intervention group went from an average triglyceride level of 211 mg/dL to 258 mg/dL, and their average HDL from 39 mg/dL to 38 mg/dL. This shows that the intervention actually worsened the participants' risk factors!

Moreover, although not acknowledged by the AHA, we know that the ratio of triglycerides to HDL is a very strong predictor of heart disease; among the best [8]. A triglyceride-to-HDL ratio of less than 2 is considered ideal. Over 4 is considered risky. Over 6 is considered very high risk. The intervention group's average triglyceride-to-HDL ratio leapt from 5.4 to 6.8! It went from bad to worse. Thus, the third problem with the study is that by other important measures it actually showed a worsening of heart disease risk.
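The ratio arithmetic is easy to check directly. In this sketch the band boundaries follow the cutoffs just described; the "moderate" label for the 2-4 range is our own placeholder, not an AHA term:

```python
# Triglyceride:HDL ratios for the Ornish intervention group (mg/dL values
# quoted above). Band boundaries follow the cutoffs in the text.
def risk_band(ratio):
    if ratio < 2:
        return "ideal"
    elif ratio <= 4:
        return "moderate"      # our label for the range between cutoffs
    elif ratio <= 6:
        return "risky"
    else:
        return "very high risk"

for label, tg, hdl in [("baseline", 211, 39), ("after one year", 258, 38)]:
    ratio = tg / hdl
    print(f"{label}: {ratio:.1f} ({risk_band(ratio)})")

# baseline: 5.4 (risky)
# after one year: 6.8 (very high risk)
```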

The bottom line is that Ornish's study never showed what it claimed to show.

After a year of intervention, even with other lifestyle changes incorporated, the subjects on his diet had a higher risk of heart disease than before they started.


References

[1]

Ornish, Dean, et al. "Can lifestyle changes reverse coronary heart disease?: The Lifestyle Heart Trial." The Lancet 336.8708 (1990): 129-133.
[2]

Evidence type: experiment

Little WC, Constantinescu M, Applegate RJ, Kutcher MA, Burrows MT, Kahl FR, Santamore WP.
Circulation. 1988 Nov;78(5 Pt 1):1157-66.

Abstract

To help determine if coronary angiography can predict the site of a future coronary occlusion that will produce a myocardial infarction, the coronary angiograms of 42 consecutive patients who had undergone coronary angiography both before and up to a month after suffering an acute myocardial infarction were evaluated. Twenty-nine patients had a newly occluded coronary artery. Twenty-five of these 29 patients had at least one artery with a greater than 50% stenosis on the initial angiogram. However, in 19 of 29 (66%) patients, the artery that subsequently occluded had less than a 50% stenosis on the first angiogram, and in 28 of 29 (97%), the stenosis was less than 70%. In every patient, at least some irregularity of the coronary wall was present on the first angiogram at the site of the subsequent coronary obstruction. In only 10 of the 29 (34%) did the infarction occur due to occlusion of the artery that previously contained the most severe stenosis. Furthermore, no correlation existed between the severity of the initial coronary stenosis and the time from the first catheterization until the infarction (r2 = 0.0005, p = NS). These data suggest that assessment of the angiographic severity of coronary stenosis may be inadequate to accurately predict the time or location of a subsequent coronary occlusion that will produce a myocardial infarction.

[3]

Evidence type: experiment

Abstract

BACKGROUND:

Clinical trials with angiographic end points have been used to assess whether interventions influence the evolution of coronary atherosclerosis because sample size requirements are much smaller than for trials with hard clinical end points. Further studies of the variability of the computer-assisted quantitative measurement techniques used in such studies would be useful to establish better standardized criteria for defining significant change.

METHODS AND RESULTS:

In 21 patients who had two arteriograms 3-189 days apart, we assessed the reproducibility of repeat quantitative measurements of 54 target lesions under four conditions: 1) same film, same frame; 2) same film, different frame; 3) same view from films obtained within 1 month; and 4) same view from films 1-6 months apart. Quantitative measurements of 2,544 stenoses were also compared with an experienced radiologist's interpretation. The standard deviation of repeat measurements of minimum diameter from the same frame was very low (0.088 mm) but increased to 0.141 mm for measurements from different frames. It did not increase further for films within 1 month but increased to 0.197 mm for films 1-6 months apart. Diameter stenosis measurements were somewhat more variable. Measurement variability for minimum diameter was independent of vessel size and stenosis severity. Experienced radiologists did not systematically overestimate or underestimate lesion severity except for mild overestimation (mean 3.3%) for stenoses ≥70%. However, the variability between visual and quantitative measurements was two to three times higher than the variability of paired quantitative measurements from the same frame.

CONCLUSIONS:

Changes of 0.4 mm or more for minimum diameter and 15% or more for stenosis diameter (e.g., 30-45%), measured quantitatively, are recommended as criteria to define progression and regression. Approaches to data analysis for coronary arteriographic trials are discussed.

[4]

Evidence type: experiment

Brown BG, Hillger LA, Lewis C, Zhao XQ, Sacco D, Bisson B, Fisher L.
Circulation. 1993 Mar;87(3 Suppl):II66-73.

Abstract

BACKGROUND:

Imaging trials using arteriography have been shown to be effective alternatives to clinical end point studies of atherosclerotic vascular disease progression and the effect of therapy on it. However, lack of consensus on what end point measures constitute meaningful change presents a problem for quantitative coronary arteriographic (QCA) approaches. Furthermore, standardized approaches to QCA studies have yet to be established. To address these issues, two different arteriographic approaches were compared in a clinical trial, and the degree of concordance between disease change measured by these two approaches and clinical outcomes was assessed.

METHODS AND RESULTS:

In the Familial Atherosclerosis Treatment Study (FATS) of three different lipid-lowering strategies in 120 patients, disease progression/regression was assessed by two arteriographic approaches: QCA and a semiquantitative visual approach (SQ-VIS). Lesions classified with SQ-VIS as "not," "possibly," or "definitely" changed were measured by QCA to change by ≥10% stenosis in 0.3%, 11%, and 81% of cases, respectively. The "best" measured value for distinguishing definite from no change was identified as 9.3% stenosis by logistic regression analysis. The primary outcome analysis of the FATS trial, using a continuous variable estimate of percent stenosis change, gave almost the same favorable result whether by QCA or SQ-VIS.

CONCLUSIONS:

The excellent agreement between these two fundamentally different methods of disease change assessment and the concordance between disease change and clinical outcomes greatly strengthens confidence both in these measurement techniques and in the overall findings of the study. These observations have important implications for the design of clinical trials with arteriographic end points.

[5]

Evidence type: experiment

Gibson CM, Sandor T, Stone PH, Pasternak RC, Rosner B, Sacks FM.
Am J Cardiol. 1992 May 15;69(16):1286-90.

Abstract

The purpose of this study was (1) to determine a threshold for categorizing individual coronary lesions as either significantly progressing or regressing, (2) to determine whether multiple lesions within individual patients progress at independent rates, and (3) to calculate sample sizes for atherosclerosis regression trials. Seventeen patients with 46 significant lesions (2.7 lesions/patient) underwent repeat coronary arteriography 3.0 years apart. With use of the standard error of the mean change in diameter from initial to repeat catheterization across 5 pairs of consecutive end-diastolic frames, individual lesions were categorized as either significantly (p less than 0.01) progressing or regressing if there was a 0.27 mm change in minimum diameter or a 7.8 percent point change in percent stenosis. The mean diameter change of a sample of lesions can also be analyzed as a continuous variable using either the lesions or the patient as the primary unit of analysis. A lesion-specific analysis can be accomplished using a multiple regression model that accounts for the intraclass correlation (rho) in the degree of change among multiple lesions within individual patients. The intraclass correlations in percent stenosis (rho = 0.01) and minimum diameter (rho = -0.24) were low, indicating that disease progression in different lesions within individual patients is nearly independent. With use of this model, 50 patients per treatment group would permit the detection of a 5.5% difference between treatment group means in the change in minimum diameter and a 2.7% percentage point (not percent) difference in the change in percent stenosis.

[6]

From The American Heart Association's "Scientific Statement"

"New clinical recommendations include reducing the optimal triglyceride level from <150 mg/dL to <100 mg/dL, and performing non-fasting triglyceride testing as an initial screen."

[7]

From Levels of Cholesterol

HDL less than 40 mg/dL for men, less than 50 mg/dL for women: major heart disease risk factor

HDL 60 mg/dL or higher: gives some protection against heart disease

[8]

Evidence type: observational

Gaziano JM, Hennekens CH, O'Donnell CJ, Breslow JL, Buring JE.
Circulation. 1997 Oct 21;96(8):2520-5.

Abstract

BACKGROUND:

Recent data suggest that triglyceride-rich lipoproteins may play a role in atherogenesis. However, whether triglycerides, as a marker for these lipoproteins, represent an independent risk factor for coronary heart disease remains unclear, despite extensive research. Several methodological issues have limited the interpretability of the existing data.

METHODS AND RESULTS:

We examined the interrelationships of fasting triglycerides, other lipid parameters, and nonlipid risk factors with risk of myocardial infarction among 340 cases and an equal number of age-, sex-, and community-matched control subjects. Cases were men or women of <76 years of age with no prior history of coronary disease who were discharged from one of six Boston area hospitals with the diagnosis of a confirmed myocardial infarction. In crude analyses, we observed a significant association of elevated fasting triglycerides with risk of myocardial infarction (relative risk [RR] in the highest compared with the lowest quartile=6.8; 95% confidence interval [CI]=3.8 to 12.1; P for trend <.001). Results were not materially altered after control for nonlipid coronary risk factors. As expected, the relationship was attenuated after adjustment for HDL but remained statistically significant (RR in the highest quartile=2.7; 95% confidence interval [CI]=1.4 to 5.5; P for trend=.016). Furthermore, the ratio of triglycerides to HDL was a strong predictor of myocardial infarction (RR in the highest compared with the lowest quartile=16.0; 95% CI=7.7 to 33.1; P for trend <.001).

CONCLUSIONS:

Our data indicate that fasting triglycerides, as a marker for triglyceride-rich lipoproteins, may provide valuable information about the atherogenic potential of the lipoprotein profile, particularly when considered in context of HDL levels.

2015-04-30

What about the sugars in breast milk?

Something that nearly always comes up when we talk about babies naturally being in ketosis is the fact that breast milk contains sugars — as much as 40% of its energy [1].

Some people have even argued with us that therefore babies are not in ketosis!

That objection is, of course, reasoning backwards — objecting to a fact because it doesn't fit a theory. That healthy, breastfed babies live in a state of ketosis and use the ketogenic metabolism for energy and growth is not a hypothesis; it is an empirical fact. See our article on ketogenic babies for details.

However, the fact that babies are in ketosis even while consuming a diet relatively high in carbohydrates does pose a mystery that deserves investigation. In this article, we're going to suggest one possible explanation for the mystery, but remember that this possible explanation is just a hypothesis, until someone does an experiment to test it.

In brief

We can't conclude, just because breast milk has a relatively high proportion of carbohydrates, that babies are burning a lot of carbohydrates for fuel.

  • Breast milk is full of components that are good for building brains. Infancy is a period of intense brain growth.
  • The sugars in breast milk are mostly from lactose, with small amounts in the form of oligosaccharides. Both lactose and oligosaccharides are replete with components that are crucial building blocks of brains.
  • In addition to providing materials for growing brains, the oligosaccharides at least have other non-fuel functions, including serving as prebiotics and fighting infection.
  • Insofar as some parts of the milk are being used for other purposes, they can't also be used as fuel.

Therefore, a plausible explanation for how babies are in ketosis while consuming a relatively high-carbohydrate food is that those carbohydrates are not being used as fuel, but rather as building blocks for the brain and, to a lesser extent, for feeding gut bacteria and fighting infections.

Lactose

Most of the carbohydrate in breast milk is lactose, which is broken down by digestion into glucose and galactose. Galactose is an important component of some glycoproteins and glycolipids, including cerebrosides — glycolipids in the brain and nervous system. Cerebrosides made of galactose are a major component of brain tissue [2]. They are also such a large component of myelin that cerebroside synthesis has been used as a measure of myelination or remyelination [2].

It is therefore plausible that much of the galactose in breast milk is used for brain tissue and myelin synthesis [3]. In fact, glucose is itself also used for making glycolipids for brain tissue [4], [5], although ketone bodies seem to be preferred [6], [7].
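If that hypothesis is right, the carbohydrate actually available as fuel could be much smaller than the nominal figure. Here's a back-of-the-envelope sketch: the ~40% energy share comes from [1], lactose yields glucose and galactose in roughly equal parts, and the diverted fractions are purely illustrative assumptions:

```python
# Back-of-the-envelope: how much of breast milk's carbohydrate energy
# is left for fuel if galactose is diverted to brain/myelin synthesis?
# The 40% figure is from [1]; the diversion fractions are assumptions.
carb_energy_pct = 40.0   # ~40% of milk energy as carbohydrate [1]
galactose_share = 0.5    # lactose -> roughly equal parts glucose + galactose

for diverted in (0.5, 0.8, 1.0):  # assumed fraction of galactose not burned
    fuel_pct = carb_energy_pct * (1 - galactose_share * diverted)
    print(f"{diverted:.0%} of galactose diverted -> "
          f"~{fuel_pct:.0f}% of energy from carbs as fuel")

# 50% diverted -> ~30%; 80% -> ~24%; 100% -> ~20%
```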

Oligosaccharides

After lactose and fat, oligosaccharides are the largest component of breast milk [8]. Oligosaccharides are unique to human breast milk — other animals produce almost no oligosaccharides in their milk [9].

Oligosaccharides are thought not to function as fuel. Some have been shown to have a prebiotic role [10], [11]. Many of the oligosaccharides pass intact through the infant's digestive tract and probably have an immune function [12], [13]. Oligosaccharides also contain sialic acid [14], an important component in the brain, used for cell-to-cell interactions, neuronal outgrowth, modifying synaptic connectivity, and memory formation [15].

Bottom line

The main point to take from all this is that many of the components of breast milk that one might presume to be used as “calories” are actually being used for other things, especially to make brains with. That includes glucose, galactose, proteins, fats, and even ketone bodies.

This could explain the fact that infants are in mild ketosis while breastfed, even though breast milk provides more carbohydrate than would support a ketogenic metabolism in an adult.


References

[1]

Calculating the macronutrients in breast milk is made very complex by not only the variation among individuals but diurnal variations, and variations over longer periods of time. It is a huge simplification to report a single value for the amount of some component of breast milk:

Whitehead RG.
Am J Clin Nutr. 1985 Feb;41(2 Suppl):447-58.

“It should be recognized, however, that we have all been guilty of adopting an oversimplified approach insofar as relating energy needs to milk volumes is concerned. The energy composition of milk is not the constant factor we have all tacitly assumed. Fat is the major energy-donating component and its concentrations vary considerably. At the beginning of each feed, from either breast, the fat content of the milk the baby receives is low, the exact level being determined by the extent to which that breast was emptied during the previous feed. As the baby feeds, fat content then rises by an amount that can be as much as 3-4-fold but the extent is very variable. There is also evidence that average fat levels vary at different times during the day in a cyclical manner. Even after one has taken account of these variables, it is still apparent that individual women have characteristically different fat concentrations in their breast milk. These complications have been extensively studied by Prentice in rural Gambian women (8, 9), and for the purpose of calculating breast milk requirements, they are almost impossible to untangle.”

Nonetheless, the standard reported amount of carbohydrate is 38–41%:

Olivia Ballard, JD, PhD (candidate) and Ardythe L. Morrow, PhD, MSc
Pediatr Clin North Am. Feb 2013; 60(1): 49–74.
https://lh5.googleusercontent.com/-4RhjKvSSG4k/VLV0rP3Ib8I/AAAAAAAAJmI/Sslu4-6mx8k/w873-h565-no/breast-milk-comp.png
[2]

From Wikipedia:

“Galactosylceramide is the principal glycosphingolipid in brain tissue. Galactosylceramides are present in all nervous tissues, and can compose up to 2% dry weight of grey matter and 12% of white matter. They are major constituents of oligodendrocytes.”

“Monogalactosylceramide is the largest single component of the myelin sheath of nerves. Cerebroside synthesis can therefore give a measurement of myelin formation or remyelination.”

[3]

I first heard this idea from this blog post: What can we learn from breast milk? Part 1: Macronutrients

“…the carbohydrate source is lactose, made of glucose and galactose. Now galactose is very special, it's not used as an energy fuel like glucose, it's used for myelin synthesis (that is making nerve insulation), this is why human breast milk is so high in lactose, for the galactose! So that ~15% becomes ~7% of calories coming from carbs for an adult (~38g @ 2000 calories).”

[4]

Evidence type: review

Edmond J.
Can J Physiol Pharmacol. 1992;70 Suppl:S118-29.

“Many studies in the decade, 1970-1980, in human infants and in the rat pup model show that both glucose and the ketone bodies, acetoacetate and D-(-)-3-hydroxybutyrate, are taken up by brain and used for energy production and as carbon sources for lipogenesis. Products of fat metabolism, free fatty acids, ketone bodies, and glycerol dominate metabolic pools in early development as a consequence of the milk diet. This recognition of a distinctive metabolic environment from the well-fed adult was taken into consideration within the last decade when methods became available to obtain and study each of the major cell populations, neurons, astrocytes, and oligodendrocytes in near homogeneous state in primary cultures. Studies on these cells made it possible to examine the distinctive metabolic properties and capabilities of each cell population to oxidize the metabolites that are available in development. Studies by many investigators on these cell populations show that all three can use glucose and the ketone bodies in respiration and for lipogenesis.”

[5]

Evidence type: non-human animal experiment

“The incorporation of 14C-label from subcutaneously injected [3-14C]acetoacetate and [U-14C]glucose into phospholipids and sphingolipids in different regions of developing rat brain was determined. In all regions, phosphatidylcholine was the lipid synthesized most readily from either substrate. The percentages of radioactivity in other phospholipids and most sphingolipids remained relatively constant throughout postnatal development. An exceptional increase in the percentage of radioactivity incorporated into cerebroside, coinciding with a decrease of incorporation into phosphatidylcholine, was first noted on day 12 and continued until a maximal level was reached between days 18 and 20 of postnatal age. These developmental changes in preferential synthesis of lipids were associated with increased demands for phospholipids and cerebroside during the early and late postnatal stages, respectively. There was no difference in accumulation of radioactivity from acetoacetate, expressed as dpm of [14C]acetoacetate recovered in phospholipids plus sphingolipids per g of tissue, among all brain regions during the first 5 days of life. During active myelination (12 to 20 days of age); however, the amount of 14C-label was highest in brain stem, ranging from 1.9- to 2.3-fold greater than values for cerebrum and thalamus. The region with the next highest accumulation was cerebellum, followed by midbrain. During the same period, brain stem was likewise the most active site of accumulation of radioactivity from 14C-labeled glucose. Higher amounts of [14C]acetoacetate label accumulated in lipids of brain stem and cerebellum, relative to midbrain, thalamus, and cerebrum, coincide with evidence that active myelination begins in the hindbrain and proceeds rostrally toward the forebrain. Ketone bodies could therefore serve as a potential source of phospholipids and sphingolipids for brain growth and maturation.”

[6]

Evidence type: non-human animal experiment

Yeh YY, Sheehan PM.
Fed Proc. 1985 Apr;44(7):2352-8.

(Emphasis ours)

“Persistent mild hyperketonemia is a common finding in neonatal rats and human newborns, but the physiological significance of elevated plasma ketone concentrations remains poorly understood. Recent advances in ketone metabolism clearly indicate that these compounds serve as an indispensable source of energy for extrahepatic tissues, especially the brain and lung of developing rats. Another important function of ketone bodies is to provide acetoacetyl-CoA and acetyl-CoA for synthesis of cholesterol, fatty acids, and complex lipids. During the early postnatal period, acetoacetate (AcAc) and beta-hydroxybutyrate are preferred over glucose as substrates for synthesis of phospholipids and sphingolipids in accord with requirements for brain growth and myelination. Thus, during the first 2 wk of postnatal development, when the accumulation of cholesterol and phospholipids accelerates, the proportion of ketone bodies incorporated into these lipids increases. On the other hand, an increased proportion of ketone bodies is utilized for cerebroside synthesis during the period of active myelination. In the lung, AcAc serves better than glucose as a precursor for the synthesis of lung phospholipids. The synthesized lipids, particularly dipalmityl phosphatidylcholine, are incorporated into surfactant, and thus have a potential role in supplying adequate surfactant lipids to maintain lung function during the early days of life. Our studies further demonstrate that ketone bodies and glucose could play complementary roles in the synthesis of lung lipids by providing fatty acid and glycerol moieties of phospholipids, respectively. The preferential selection of AcAc for lipid synthesis in brain, as well as lung, stems in part from the active cytoplasmic pathway for generation of acetyl-CoA and acetoacetyl-CoA from the ketone via the actions of cytoplasmic acetoacetyl-CoA synthetase and thiolase.”

[7]

Evidence type: non-human animal experiment

Edmond J, Auestad N, Robbins RA, Bergstrom JD.
Fed Proc. 1985 Apr;44(7):2359-64.

(Emphasis ours)

“In the course of mammalian development milk has evolved with unique characteristics as has the capacity of the neonatal rat to process this nutrient source. The primary carbon source in milk is fat, which provides two readily utilized metabolites, acetoacetate and D(-)-3-hydroxybutyrate (ketone bodies), as well as free fatty acids and glycerol. Carbohydrate provides less than 12% of the caloric content of rat milk and glucose has to be produced by the suckling rat to maintain glucose homeostasis. One would predict that glucose would be used sparingly and in pathways that cannot be satisfied by other readily available metabolites. Studies of the uptake of metabolites and the development of key enzymes for the utilization of glucose and ketone bodies by developing brain support the concept that ketone bodies are preferred substrates for the supply of carbon to respiration and lipogenesis. Astrocytes, oligodendrocytes, and neurons from developing brain all have an excellent capacity to use ketone bodies for respiration. By contrast, glucose is utilized preferentially in the hexose monophosphate shunt by all three cell populations. We are examining the requirement for ketone bodies by developing brain with the application of a system to rear rat pups artificially on a milk substitute that promotes a hypoketonemia.”

[8]

Evidence type: review

Gudiel-Urbano M, Goñi I.
Arch Latinoam Nutr. 2001 Dec;51(4):332-9.

(Emphasis ours)

“Breast-feeding is the optimal mode of feeding for the normal full-term infant. Human milk composition knowledge has been basis for recommended dietary allowances for infants. Few studies about human milk carbohydrates have been done until the last decade. However, carbohydrates provide approximately 40-50% of the total energy content of breast milk. Quantitatively oligosaccharides are the third largest solute in human milk after lactose and fat. Each individual oligosaccharide is based on a variable combination of glucose, galactose, sialic acid, fucose and N-acetylglucosamine with many and varied linkages between them, thus accounting for the enormous number of different oligosaccharides in human milk. The oligosaccharides content in human milk varies with the duration of lactation, diurnally and with the genetic makeup of the mother. At present, a great interest in the roles of human milk oligosaccharides is raising. They act as a the soluble fibre in breast milk and their structure is available to act as competitive ligands protecting the breast-fed infant from pathogens and act as well as prebiotic. They may also act as source of sialic acid and galactose, essential for brain development. This is why today there is an increasing health and industrial interest in human milk oligosaccharides content, with the main purpose of incorporating them as new ingredients in infant nutrition.”

[9]

Evidence type: review

McVeagh P, Miller JB.
J Paediatr Child Health. 1997 Aug;33(4):281-6.

"Abstract

"Over 100 years ago it was first deduced that a major component of human milk must be an unidentified carbohydrate that was not found in cows milk. At first this was thought to be a form of lactose and was called gynolactose. We now know that this was not a single carbohydrate but a complex mixture of approximately 130 different oligosaccharides. Although small amounts of a few oligosaccharides have been found in the milk of other mammals, this rich diversity of sugars is unique to human milk. The oligosaccharide content of human milk varies with the infant's gestation, the duration of lactation, diurnally and with the genetic makeup of the mother. Milk oligosaccharides have a number of functions that may protect the health of the breast fed infant. As they are not digested in the small intestine, they form the 'soluble' fibre of breast milk and their intact structure is available to act as competitive ligands protecting the breast-fed infant from pathogens. There is a growing list of pathogens for which a specific oligosaccharide ligand has been described in human milk. They are likely to form the model for future therapeutic and prophylactic anti-microbials. They provide substrates for bacteria in the infant colon and thereby contribute to the difference in faecal pH and faecal flora between breast and formula-fed infants. They may also be important as a source of sialic acid, essential for brain development."

[10]

Evidence type: review

Coppa GV, Bruni S, Morelli L, Soldi S, Gabrielli O.
J Clin Gastroenterol. 2004 Jul;38(6 Suppl):S80-3.

“The development of intestinal microflora in newborns is strictly related to the kind of feeding. Breast-fed infants, unlike the bottle-fed ones, have an intestinal ecosystem characterized by a strong prevalence of bifidobacteria and lactobacilli. Data available so far in the literature show that, among the numerous substances present in human milk, oligosaccharides have a clear prebiotic effect. They are quantitatively one of the main components of human milk and are only partially digested in the small intestine, so they reach the colon, where they stimulate selectively the development of bifidogenic flora. Such results have been recently proved both by characterization of oligosaccharides in breast-fed infant feces and by the study of intestinal microflora using new techniques of molecular analysis, confirming that human milk oligosaccharides represent the first prebiotics in humans.”

[11]

Evidence type: review

Coppa GV, Zampini L, Galeazzi T, Gabrielli O.
Dig Liver Dis. 2006 Dec;38 Suppl 2:S291-4.

“The microbic colonization of human intestine begins at birth, when from a sterile state the newborn is exposed to an external environment rich in various bacterial species. The kind of delivery has an important influence on the composition of the intestinal flora in the first days of life. Thereafter, the microflora is mainly influenced by the kind of feeding: breast-fed infants show a predominance of bifidobacteria and lactobacilli, whereas bottle-fed infants develop a mixed flora with a lower number of bifidobacteria. The “bifidogenic effect” of human milk is not related to a single growth-promoting substance, but rather to a complex of interacting factors. In particular the prebiotic effect has been ascribed to the low concentration of proteins and phosphates, the presence of lactoferrin, lactose, nucleotides and oligosaccharides. The real prebiotic role of each of these substances is not yet clearly defined, with the exception of oligosaccharides which undoubtedly promote a bifidobacteria-dominant microflora.”

[12]

Evidence type: review

McVeagh P, Miller JB.
J Paediatr Child Health. 1997 Aug;33(4):281-6.

(Emphasis ours)

“Over 100 years ago it was first deduced that a major component of human milk must be an unidentified carbohydrate that was not found in cows milk. At first this was thought to be a form of lactose and was called gynolactose. We now know that this was not a single carbohydrate but a complex mixture of approximately 130 different oligosaccharides. Although small amounts of a few oligosaccharides have been found in the milk of other mammals, this rich diversity of sugars is unique to human milk. The oligosaccharide content of human milk varies with the infant's gestation, the duration of lactation, diurnally and with the genetic makeup of the mother. Milk oligosaccharides have a number of functions that may protect the health of the breast fed infant. As they are not digested in the small intestine, they form the 'soluble' fibre of breast milk and their intact structure is available to act as competitive ligands protecting the breast-fed infant from pathogens. There is a growing list of pathogens for which a specific oligosaccharide ligand has been described in human milk. They are likely to form the model for future therapeutic and prophylactic anti-microbials. They provide substrates for bacteria in the infant colon and thereby contribute to the difference in faecal pH and faecal flora between breast and formula-fed infants. They may also be important as a source of sialic acid, essential for brain development.”

[13]

Evidence type: experiment

Survival of human milk oligosaccharides in the intestine of infants.
Chaturvedi P, Warren CD, Buescher CR, Pickering LK, Newburg DS.
Adv Exp Med Biol. 2001;501:315-23.

(Emphasis ours)

“Several human milk oligosaccharides inhibit human pathogens in vitro and in animal models. In an infant, the ability of these oligosaccharides to offer protection from enteric pathogens would require that they withstand structural modification as they pass through the alimentary canal or are absorbed and excreted in urine. We investigated the fate of human milk oligosaccharides during transit through the alimentary canal by determining the degree to which breast-fed infants' urine and fecal oligosaccharides resembled those of their mothers' milk. Oligosaccharide profiles of milk from 16 breast-feeding mothers were compared with profiles of stool and urine from their infants. Results were compared with endogenous oligosaccharide profiles obtained from the urine and feces of age-, parity-, and gender-matched formula-fed infants. […] Among breast-fed infants, concentrations of oligosaccharides were higher in feces than in mothers' milk, and much higher in feces than in urine. Urinary and fecal oligosaccharides from breast-fed infants resembled those in their mothers' milk. Those from formula-fed infants did not resemble human milk oligosaccharides, were found at much lower concentrations, and probably resulted from remodeling of intestinal glycoconjugates or from intestinal bacteria. Most of the human milk oligosaccharides survived transit through the gut, and some were absorbed and then excreted into the urine intact, implying that inhibition of intestinal and urinary pathogens by human milk oligosaccharides is quite likely in breast-fed infants.”

[14]

Evidence type: experiment

Nakano T, Sugawara M, Kawakami H.
Acta Paediatr Taiwan. 2001 Jan-Feb;42(1):11-7.

“Breast milk is the best nutrient source for infants. It contains all elements needed for a normal growth and development of infants. Human milk contains a large amount of sialic acid compared with bovine milk. Sialic acid contained in oligosaccharides, glycolipids and glycoproteins in milk is considered to play important roles in physiological functions in infancy. Thus, we have investigated the sialic acid composition and the functions of sialylated compounds in human milk. Sialic acids comprise a family of neuraminic acid derivatives present in secretions, fluids and tissues of mammals. In milk, sialic acid is present in different sialoglycoconjugate compounds such as oligosaccharides, glycolipids and glycoproteins, not in a free form. Human milk contains 0.3-1.5 mg/ml of sialic acid. Sialic acid bound to oligosaccharides accounts for about 75% of the total sialic acid contained in human milk. Most of the sialic acid contained in human milk is found in the form of sialyllactose, an oligosaccharide formed from lactose and sialic acid. In milk, gangliosides, sialic acid-containing glycolipid, occur mainly as monosialoganglioside 3 (GM3) and disialoganglioside 3 (GD3). The concentration of GM3 in human milk increases, while that of GD3 concentration decreases during lactation. Because the brain and central nervous system contain considerable level of sialic acid in infancy, it is considered to play important roles on the expression and development of their functions. Moreover, we found that some sialylated compounds had inhibited the adhesion of toxins, bacteria and viruses to the receptors on the surface of epithelial cells. Additionally, we found that some sialylated compounds had growth-promoting effects on bifidobacteria and lactobacilli, predominantly present in the intestinal flora of infants fed with human milk. The results suggested that sialylated compounds in human milk possibly behaved as a physiological component in the intestinal tract of infants to protect them against enteric infections.”

[15]

Evidence type: review

Wang B.
Annu Rev Nutr. 2009;29:177-222. doi: 10.1146/annurev.nutr.28.061807.155515.

“The rapid growth of infant brains places an exceptionally high demand on the supply of nutrients from the diet, particularly for preterm infants. Sialic acid (Sia) is an essential component of brain gangliosides and the polysialic acid (polySia) chains that modify neural cell adhesion molecules (NCAM). Sia levels are high in human breast milk, predominately as N-acetylneuraminic acid (Neu5Ac). In contrast, infant formulas contain a low level of Sia consisting of both Neu5Ac and N-glycolylneuraminic acid (Neu5Gc). Neu5Gc is implicated in some human inflammatory diseases. Brain gangliosides and polysialylated NCAM play crucial roles in cell-to-cell interactions, neuronal outgrowth, modifying synaptic connectivity, and memory formation. In piglets, a diet rich in Sia increases the level of brain Sia and the expression of two learning-related genes and enhances learning and memory. The purpose of this review is to summarize the evidence showing the importance of dietary Sia as an essential nutrient for brain development and cognition.”

2015-03-29

Meat is best for growing brains

There are multiple lines of evidence that an animal-based diet best supports human brain development in infants and young children.

Human fetuses and infants rely on ketones for brain building.

In a previous post, we wrote about the known (but little-spoken-of) fact that human infants are in mild ketosis all the time, especially when breastfed. In other words, ketosis is a natural, healthy state for infants. Infancy is a critical time for brain growth, so we expect that ketosis is advantageous for a growing brain; otherwise, there would have been a selective advantage to reduced ketosis in infancy. This species-critical, rapid brain growth continues well past weaning. For that reason, we suggested in that post that weaning onto a ketogenic diet would probably be preferable to weaning away from ketosis.

In response to that post, a reader sent us a paper called Survival of the fattest: fat babies were the key to evolution of the large human brain [1]. The authors discuss the apparently unique human trait of having extremely fat babies, and explain it in terms of the exceptional need to grow extremely large brains.

A key point they make is that a baby's ample fat provides more than simply a large energy supply (much larger than could be stored as glycogen or protein; by their calculations, more than 20 times larger): the ketone bodies derived from that fat were themselves important for human brain evolution.

They repeat the usual unwarranted assumption that adult brains run mainly on glucose by default, and that ketone bodies are merely an alternative brain fuel. Nonetheless, when talking about fetuses, they are willing to say that the use of ketones is not merely an "alternative":

In human fetuses at mid-gestation, ketones are not just an alternative fuel but appear to be an essential fuel because they supply as much as 30% of the energy requirement of the brain at that age (Adam et al., 1975).

Second, ketones are a key source of carbon for the brain to synthesize the cholesterol and fatty acids that it needs in the membranes of the billions of developing nerve connections.

[...]

Ketones are the preferred carbon source for brain lipid synthesis and they come from fatty acids recently consumed or stored in body fat. This means that, in infants, brain cholesterol and fatty acid synthesis are indirectly tied to mobilization and catabolism of fatty acids stored in body fat.

In other words, the claim is that ketones are the best source of certain brain-building materials, and specifically, that fetuses use them for that purpose.

Moreover, the thesis is that the extra body fat on human babies is there specifically for the purpose of supporting extra brain growth after birth, through continued use of ketones.

Weaning onto meat increases brain growth.

[ Please note that by convention, weaning refers to the gradual process of transitioning from exclusive breastfeeding (starting with the first foods introduced, while breastfeeding is still ongoing) to the end of breastfeeding, not just the end itself. ]

We aren't the only ones who have thought weaning onto meat would be a good idea. A couple of studies have compared weaning onto meat with weaning onto cereal.

One showed a larger increase in head circumference in the meat-fed group [2]; head circumference is a good index of brain growth in infants [3] and young children [4]. Moreover, greater increases in head circumference in infancy are correlated with higher intelligence, independently of head circumference at birth [5]. In other words, the amount of brain growth after birth is a better predictor of intelligence than the amount of brain growth in gestation.

That study also found that the meat-fed infants had better zinc status, and good iron status despite receiving no supplemental iron, which the cereal arm did [2]. Zinc and iron are abundant in the brain, and zinc deficiency is implicated in learning disorders and other brain development problems [6]. Iron deficiency is a common risk for infants in our culture because of our dietary practices, which is why infant cereal is fortified with it [7].

Another study showed better growth in general in babies weaned onto primarily meat [8].


Weaning onto meat is easy. Here's how I did it.

It is believed likely that early humans fed their babies pre-chewed meat [9]. I did that, too, although that wasn't my first weaning step. Influenced by baby-led weaning, I waited until he was expressing clear interest in my food, and then simply shared it with him. At the time this meant:

  • Broth on a spoon, increasingly with small fragments of meat in it.
  • Bones from steaks and chops, increasingly with meat and fat left on them.
  • Homemade plain, unseasoned jerky, which he teethed on, or sucked until it disintegrated.
  • Beef and chicken liver, which have a soft, silky texture and are extremely nutrient-dense.

--Amber


The brain is an energy-intensive organ that required an animal-based diet to evolve.

In 1995, anthropologists Leslie C. Aiello and Peter Wheeler posed the following problem [10]:

  • Brains require an enormous amount of energy.
  • Humans have much larger brains than other primates.
  • However, human basal metabolic rates are no higher than would be predicted from body mass.

Where do we get the extra energy required to fuel our brains, and how could this have evolved?

Aiello and Wheeler explain this by noting that at the same time as our brains were expanding, our intestines (estimated to be comparably energy-intensive) were shrinking by almost exactly the corresponding amount, thereby freeing up the extra metabolic energy needed for the brain. Both adaptations, a large brain and small guts, independently required our ancestors to adopt a "high-quality" diet, for different reasons.
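To see why the accounting works out, here is a minimal sketch of the arithmetic in Python. The organ masses and tissue metabolic rates below are approximate figures of the kind discussed for a 65 kg human; treat them as our illustrative assumptions, not the paper's exact table:

    # Rough sketch of the brain/gut energy trade-off (illustrative values only).
    BRAIN_W_PER_KG = 11.2  # assumed metabolic cost of brain tissue, watts per kg
    GUT_W_PER_KG = 12.2    # assumed metabolic cost of gut tissue, watts per kg

    brain_expected, brain_actual = 0.45, 1.3  # kg: primate-expected vs. human brain
    gut_expected, gut_actual = 1.9, 1.1       # kg: primate-expected vs. human gut

    extra_brain_cost = (brain_actual - brain_expected) * BRAIN_W_PER_KG  # ~9.5 W
    gut_savings = (gut_expected - gut_actual) * GUT_W_PER_KG             # ~9.8 W

    print(extra_brain_cost, gut_savings)  # the two figures nearly cancel

Under these assumptions, the extra running cost of the enlarged brain is almost exactly offset by the savings from the reduced gut, which is the heart of the hypothesis.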

Let's mince no words; "high-quality" means meat [11]. Meat is more nutrient-dense than plants, in terms of both protein and vitamins. Plants are simply too fibrous, too low in protein and calories, and too seasonal to have been relied on for such an evolutionary change [11], [12]. Indeed, it is the mainstream view in anthropology that meat became an important part of our diets during this transition [13].

Although the need for protein and brain-building nutrients is often cited as a reason for needing meat in the evolutionary diet, energy requirements are also important to consider. It would have been difficult to meet caloric needs from plants (especially before cooking) [13], because they were so fibrous. Herbivores with specialized guts (such as ruminants like cows, with their "four stomachs") and primates with much larger intestines than ours actually use gut bacteria to turn significant amounts of fiber into fat (see e.g. [14]). This strategy is not available to a gut as small as ours [11], [15], which is why we had to find food that was energy-dense as is.

Fortunately, insofar as we were already using animal sources to get protein and nutrients, we also had access to an abundance of fat. The animals we hunted were unlikely to have been as lean as modern game. Evidence supports the hypothesis that human hunting was the most likely cause of the extinction of many megafauna (large animals that were much fatter than the leaner game we have left today) [16]. Humans, like carnivores, prefer to hunt larger animals whenever they are available [17]. It has been proposed that the disappearance of the fatter megafauna exerted a strong evolutionary pressure on humans, who were already fat-dependent, to become more skilled hunters of the small game we have today, to rely more on the fat from eating brains and marrow, and to learn to find the fattest animals among the herds [18].

Animal fat and animal protein provided the energy, protein, and nutrients necessary for large brains, especially given the constraint of small guts.

Because humans wean early, and human brain growth is extended past weaning, the post-weaning diet must support fetal-like brain growth.

Humans wean much earlier than other primates, and yet our brains continue growing rapidly long past weaning. Our intelligence has been our primary selective advantage. Therefore it is critical, from an evolutionary standpoint, that the diet infants were weaned onto supported this brain growth.

In a (fascinating and well-written) paper on weaning and evolution, Kennedy puts it this way:

"[A]lthough this prolonged period of development i.e., ‘‘childhood’’ renders the child vulnerable to a variety of risks, it is vital to the optimization of human intelligence; by improving the child’s nutritional status (and, obviously, its survival), the capability of the adult brain is equally improved. Therefore, a child’s ability to optimize its intellectual potential would be enhanced by the consumption of foods with a higher protein and calorie content than its mother’s milk; what better foods to nourish that weanling child than meat, organ tissues (particularly brain and liver), and bone marrow, an explanation first proposed by Bogin (1997)."

...

"Increase in the size of the human brain is based on the retention of fetal rates of brain growth (Martin, 1983), a unique and energetically expensive pattern of growth characteristic of altricial [ born under-developed ] mammals (Portmann, 1941; Martin, 1984). This research now adds a second altricial trait—early weaning—to human development. The metabolically expensive brain produced by such growth rates cannot be sustained long on maternal lactation alone, necessitating an early shift to adult foods that are higher in protein and calories than human milk."

The only food higher in protein and calories than breast milk is meat.

A high-fat animal-based diet best supports brain growth.

Taking these facts together:

  • Even modern fetuses and breastfed infants are in ketosis, which uniquely supports brain growth.
  • Infants who are weaned onto meat get essential brain-building nutrients: nutrients that are deficient in our plant-centric diets today. Moreover, experiments have found that their brains actually grow more than those of babies weaned onto cereal.
  • Human brains continue to grow at a fast rate even past weaning.
  • It is likely that in order to evolve such large, capable brains, human babies were weaned onto primarily meat.

A meat-based, inherently ketogenic diet is not only likely to be our evolutionary heritage, it is probably the best way to support the critical brain growth of the human child.

Acknowledgements

We would like to thank Matthew Dalby, a researcher at the University of Aberdeen, for helpful discussions about short-chain fatty acid production in the large intestines.

References

[1]

Hypothesis paper

Cunnane SC, Crawford MA.
Comp Biochem Physiol A Mol Integr Physiol. 2003 Sep;136(1):17-26.
[2]

Evidence type: experiment

Krebs NF, Westcott JE, Butler N, Robinson C, Bell M, Hambidge KM.
J Pediatr Gastroenterol Nutr. 2006 Feb;42(2):207-14.

(Emphasis ours)

"OBJECTIVE:

"This study was undertaken to assess the feasibility and effects of consuming either meat or iron-fortified infant cereal as the first complementary food.

"METHODS:

"Eighty-eight exclusively breastfed infants were enrolled at 4 months of age and randomized to receive either pureed beef or iron-fortified infant cereal as the first complementary food, starting after 5 months and continuing until 7 months. Dietary, anthropometric, and developmental data were obtained longitudinally until 12 months, and biomarkers of zinc and iron status were measured at 9 months.

"RESULTS:

"Mean (+/-SE) daily zinc intake from complementary foods at 7 months for infants in the meat group was 1.9 +/- 0.2 mg, whereas that of the cereal group was 0.6 +/- 0.1 mg, which is approximately 25% of the estimated average requirement. Tolerance and acceptance were comparable for the two intervention foods. Increase in head circumference from 7 to 12 months was greater for the meat group, and zinc and protein intakes were predictors of head growth. Biochemical status did not differ by feeding group, but approximately 20% of the infants had low (<60 microg/dL) plasma zinc concentrations, and 30% to 40% had low plasma ferritin concentrations (<12 microg/L). Motor and mental subscales did not differ between groups, but there was a trend for a higher behavior index at 12 months in the meat group.

"CONCLUSIONS:

"Introduction of meat as an early complementary food for exclusively breastfed infants is feasible and was associated with improved zinc intake and potential benefits. The high percentage of infants with biochemical evidence of marginal zinc and iron status suggests that additional investigations of optimal complementary feeding practices for breastfed infants in the United States are warranted."

[3]

Evidence type: authority

(Emphasis ours)

"Today the close correlation between head circumference growth and brain development in the last weeks of gestation and in the first two years of life is no longer disputed. A recently developed formula even allows for calculations of brain weight based upon head circumference data. Between the ages of 32 postmenstrual weeks and six months after expected date of delivery there is a period of very rapid brain growth in which the weight of the brain quadruples. During this growth spurt there exists an increased vulnerability by unfavorable environmental conditions, such as malnutrition and psychosocial deprivation. The erroneous belief still being prevalent that the brain of the fetus and young infant is spared by malnutrition, can be looked upon as disproved by new research results. Severe malnutrition during the brain growth spurt is thought to be a very important non-genetic factor influencing the development of the central nervous system (CNS) and therewith intellectual performance. In the past a permanent growth retardation of head circumference and a reduced intellectual capacity usually was observed in small-for-gestational age infants (SGA). Nowadays, however, there can be found also proofs of successful catch-up growth of head circumference and normal intellectual development after early and high-energy postnatal feeding of SGA infants. The development of SGA infants of even very low birth weight can be supported in such a way that it takes a normal course by providing good environmental conditions, such as appropriate nutrition - especially during the early growth period - and a stimulating environment with abundant attention by the mother."

[4]

Evidence type: experiment

Bartholomeusz HH, Courchesne E, Karns CM.
Neuropediatrics. 2002 Oct;33(5):239-41.

(Emphasis ours)

"OBJECTIVE:

"To quantify the relationship between brain volume and head circumference from early childhood to adulthood, and quantify how this relationship changes with age.

"METHODS:

"Whole-brain volume and head circumference measures were obtained from MR images of 76 healthy normal males aged 1.7 to 42 years.

"RESULTS:

"Across early childhood, brain volume and head circumference both increase, but from adolescence onward brain volume decreases while head circumference does not. Because of such changing relationships between brain volume and head circumference with age, a given head circumference was associated with a wide range of brain volumes. However, when grouped appropriately by age, head circumference was shown to accurately predict brain volume. Head circumference was an excellent prediction of brain volume in 1.7 to 6 years old children (r = 0.93), but only an adequate predictor in 7 to 42 year olds.

"CONCLUSIONS:

"To use head circumference as an accurate indication of abnormal brain volume in the clinic or research setting, the patient's age must be taken into account. With knowledge of age-dependent head circumference-to-brain volume relationship, head circumference (particularly in young children) can be an accurate, rapid, and inexpensive indication of normalcy of brain size and growth in a clinical setting.

[5]

Evidence type: experiment

Gale CR, O'Callaghan FJ, Godfrey KM, Law CM, Martyn CN.
Brain. 2004 Feb;127(Pt 2):321-9. Epub 2003 Nov 25.

"Head circumference is known to correlate closely with brain volume (Cooke et al., 1977; Wickett et al., 2000) and can therefore be used to measure brain growth, but a single measurement cannot provide a complete insight into neurological development. Different patterns of early brain growth may result in a similar head size. A child whose brain growth both pre‐ and postnatally followed the 50th centile might attain the same head size as a child whose brain growth was retarded in gestation but who later experienced a period of rapid growth. Different growth trajectories may reflect different experiences during sensitive periods of brain development and have different implications for later cognitive function.

"We have investigated whether brain growth during different periods of pre‐ and postnatal development influences later cognitive function in a group of children for whom serial measurements of head growth through foetal life, infancy and childhood were available."

[...]

"We found no statistically significant associations between head circumference at 18 weeks’ gestation or head circumference at birth SDS and IQ at the age of 9 years."

[...]

"In contrast, there were strong statistically significant associations between measures of postnatal head growth and IQ. After adjustment for sex, full‐scale IQ rose by 2.59 points (95% CI 0.87 to 4.32) for each SD increase in head circumference at 9 months of age, and by 3.85 points (95% CI 1.96 to 5.73) points for each SD increase in head circumference at 9 years; verbal IQ rose by 2.66 points (95% CI 0.49 to 4.83) for each SD increase in head circumference at 9 months of age, and by 3.76 points (95% CI 1.81 to 5.72) for each SD increase in head circumference at 9 years; performance IQ rose by 2.88 points (95% CI 0.659 to 5.11) for each SD increase in head circumference at 9 months of age, and by 3.16 points (95% CI 1.16 to 5.16) for each SD increase in head circumference at 9 years."

[...]

"[W]e interpret these findings as evidence that postnatal brain growth is more important than prenatal brain growth in determining higher mental function. This interpretation is supported by the finding that head growth in the first 9 months of life and head growth between 9 months and 9 years of age are also related to cognitive function, regardless of head size at the beginning of these periods."

[6]

Evidence type: review

Pfeiffer CC, Braverman ER.
Biol Psychiatry. 1982 Apr;17(4):513-32.

"The total content of zinc in the adult human body averages almost 2 g. This is approximately half the total iron content and 10 to 15 times the total body copper. In the brain, zinc is with iron, the most concentrated metal. The highest levels of zinc are found in the hippocampus in synaptic vesicles, boutons, and mossy fibers. Zinc is also found in large concentrations in the choroid layer of the retina which is an extension of the brain. Zinc plays an important role in axonal and synaptic transmission and is necessary for nucleic acid metabolism and brain tubulin growth and phosphorylation. Lack of zinc has been implicated in impaired DNA, RNA, and protein synthesis during brain development. For these reasons, deficiency of zinc during pregnancy and lactation has been shown to be related to many congenital abnormalities of the nervous system in offspring. Furthermore, in children insufficient levels of zinc have been associated with lowered learning ability, apathy, lethargy, and mental retardation. Hyperactive children may be deficient in zinc and vitamin B-6 and have an excess of lead and copper. Alcoholism, schizophrenia, Wilson's disease, and Pick's disease are brain disorders dynamically related to zinc levels. Zinc has been employed with success to treat Wilson's disease, achrodermatitis enteropathica, and specific types of schizophrenia."

[7]

Evidence type: authority

From the CDC:

"Who is most at risk?

Young children and pregnant women are at higher risk of iron deficiency because of rapid growth and higher iron needs.

Adolescent girls and women of childbearing age are at risk due to menstruation.

Among children, iron deficiency is seen most often between six months and three years of age due to rapid growth and inadequate intake of dietary iron. Infants and children at highest risk are the following groups:

  • Babies who were born early or small.
  • Babies given cow's milk before age 12 months.
  • Breastfed babies who after age 6 months are not being given plain, iron-fortified cereals or another good source of iron from other foods.
  • Formula-fed babies who do not get iron-fortified formulas.
  • Children aged 1–5 years who get more than 24 ounces of cow, goat, or soymilk per day. Excess milk intake can decrease your child's desire for food items with greater iron content, such as meat or iron fortified cereal.
  • Children who have special health needs, for example, children with chronic infections or restricted diets.
[8]

Evidence type: experiment

(Emphasis ours)

"Background: High intake of cow-milk protein in formula-fed infants is associated with higher weight gain and increased adiposity, which have led to recommendations to limit protein intake in later infancy. The impact of protein from meats for breastfed infants during complementary feeding may be different.

"Objective: We examined the effect of protein from meat as complementary foods on growths and metabolic profiles of breastfed infants.

"Design: This was a secondary analysis from a trial in which exclusively breastfed infants (5–6 mo old from the Denver, CO, metro area) were randomly assigned to receive commercially available pureed meats (MEAT group; n = 14) or infant cereal (CEREAL group; n = 28) as their primary complementary feedings for ∼5 mo. Anthropometric measures and diet records were collected monthly from 5 to 9 mo of age; intakes from complementary feeding and breast milk were assessed at 9 mo of age.

"Results: The MEAT group had significantly higher protein intake, whereas energy, carbohydrate, and fat intakes from complementary feeding did not differ by group over time. At 9 mo of age mean (± SEM), intakes of total (complementary feeding plus breast-milk) protein were 2.9 ± 0.6 and 1.4 ± 0.4 g ⋅ kg−1 ⋅ d−1, ∼17% and ∼9% of daily energy intake, for MEAT and CEREAL groups, respectively (P < 0.001). From 5 to 9 mo of age, the weight-for-age z score (WAZ) and length-for-age z score (LAZ) increased in the MEAT group (ΔWAZ: 0.24 ± 0.19; ΔLAZ: 0.14 ± 0.12) and decreased in the CEREAL group (ΔWAZ: −0.07 ± 0.17; ΔLAZ: −0.27 ± 0.24) (P-group by time < 0.05). The change in weight-for-length z score did not differ between groups. Total protein intake at 9 mo of age and baseline WAZ were important predictors of changes in the WAZ (R2 = 0.23, P = 0.01).

"Conclusion: In breastfed infants, higher protein intake from meats was associated with greater linear growth and weight gain but without excessive gain in adiposity, suggesting potential risks of high protein intake may differ between breastfed and formula-fed infants and by the source of protein."

[9]

From Wikipedia:

"Breastmilk supplement

"Premastication is complementary to breastfeeding in the health practices of infants and young children, providing large amounts of carbohydrate and protein nutrients not always available through breast milk,[3] and micronutrients such as iron, zinc, and vitamin B12 which are essential nutrients present mainly in meat.[25] Compounds in the saliva, such as haptocorrin also helps increase B12 availability by protecting the vitamin against stomach acid.

"Infant intake of heme iron

"Meats such as beef were likely premasticated during human evolution as hunter-gatherers. This animal-derived bioinorganic iron source is shown to confer benefits to young children (two years onwards) by improving growth, motor, and cognitive functions.[26] In earlier times, premastication was an important practice that prevented infant iron deficiency.[27]

"Meats provide Heme iron that are more easily absorbed by human physiology and higher in bioavailability than non-heme irons sources,[28][29] and is a recommended source of iron for infants.[30]"

[10]

Hypothesis paper

Leslie C. Aiello and Peter Wheeler
Current Anthropology, Vol. 36, No. 2 (Apr., 1995), pp. 199-221
[11]

Evidence type: review

Milton K.
J Nutr. 2003 Nov;133(11 Suppl 2):3886S-3892S.

(The whole paper is worth reading, but these highlights serve our point.)

"Without routine access to ASF [animal source foods], it is highly unlikely that evolving humans could have achieved their unusually large and complex brain while simultaneously continuing their evolutionary trajectory as large, active and highly social primates. As human evolution progressed, young children in particular, with their rapidly expanding large brain and high metabolic and nutritional demands relative to adults would have benefited from volumetrically concentrated, high quality foods such as meat."

[...]

"If the dietary trajectory described above was characteristic of human ancestors, the routine, that is, daily, inclusion of ASF in the diets of children seems mandatory as most wild plant foods would not be capable of supplying the protein and micronutrients children require for optimal development and growth, nor could the gut of the child likely provide enough space, in combination with the slow food turnover rate characteristic of the human species, to secure adequate nutrition from wild plant foods alone. Wild plant foods, though somewhat higher in protein and some vitamins and minerals than their cultivated counterparts (52), are also high in fiber and other indigestible components and most would have to be consumed in very large quantity to meet the nutritional and energetic demands of a growing and active child."

[...]

"Given the postulated body and brain size of the earliest humans and the anatomy and kinetic pattern characteristics of the hominoid gut, turning increasingly to the intentional consumption of ASF on a routine rather than fortuitous basis seems the most expedient, indeed the only, dietary avenue open to the emerging human lineage (2,3,10,53)."

[...]

"Given the probable diet, gut form and pattern of digestive kinetics characteristic of prehuman ancestors, it is hypothesized that the routine inclusion of animal source foods in the diet was mandatory for emergence of the human lineage. As human evolution progressed, ASF likely achieved particular importance for small children due to the energetic demands of their rapidly expanding large brain and generally high metabolic and nutritional demands relative to adults."

[12]

Evidence type: review

Kennedy GE.
J Hum Evol. 2005 Feb;48(2):123-45. Epub 2005 Jan 18.

"Although some researchers have claimed that plant foods (e.g., roots and tubers) may have played an important role in human evolution (e.g., O’Connell et al., 1999; Wrangham et al., 1999; Conklin-Brittain et al., 2002), the low protein content of ‘‘starchy’’ plants, generally calculated as 2% of dry weight (see Kaplan et al., 2000: table 2), low calorie and fat content, yet high content of (largely) indigestible fiber (Schoeninger et al., 2001: 182) would render them far less than ideal weaning foods. Some plant species, moreover, would require cooking to improve their digestibility and, despite claims to the contrary (Wrangham et al., 1999), evidence of controlled fire has not yet been found at Plio-Pleistocene sites. Other plant foods, such as the nut of the baobab (Adansonia digitata), are high in protein, calories, and lipids and may have been exploited by hominoids in more open habitats (Schoeninger et al., 2001). However, such foods would be too seasonal or too rare on any particular landscape to have contributed significantly and consistently to the diet of early hominins. Moreover, while young baobab seeds are relatively soft and may be chewed, the hard, mature seeds require more processing. The Hadza pound these into flour (Schoeninger et al., 2001), which requires the use of both grinding stones and receptacles, equipment that may not have been known to early hominins. Meat, on the other hand, is relatively abundant and requires processing that was demonstrably within the technological capabilities of Plio-Pleistocene hominins. Meat, particularly organ tissues, as Bogin (1988, 1997) pointed out, would provide the ideal weaning food."

[13]

Plants can become more nutrient dense through cooking. That is the basis of Wrangham's hypothesis:

(From Wikipedia)

"Wrangham's latest work focuses on the role cooking has played in human evolution. He has argued that cooking food is obligatory for humans as a result of biological adaptations[9][10] and that cooking, in particular the consumption of cooked tubers, might explain the increase in hominid brain sizes, smaller teeth and jaws, and decrease in sexual dimorphism that occurred roughly 1.8 million years ago.[11] Most anthropologists disagree with Wrangham's ideas, pointing out that there is no solid evidence to support Wrangham's claims.[11][12] The mainstream explanation is that human ancestors, prior to the advent of cooking, turned to eating meats, which then caused the evolutionary shift to smaller guts and larger brains.[13]"

[14]

Evidence type: review

Popovich DG, Jenkins DJ, Kendall CW, Dierenfeld ES, Carroll RW, Tariq N, Vidgen E.
J Nutr. 1997 Oct;127(10):2000-5.

(Emphasis ours)

"We studied the western lowland gorilla diet as a possible model for human nutrient requirements with implications for colonic function. Gorillas in the Central African Republic were identified as consuming over 200 species and varieties of plants and 100 species and varieties of fruit. Thirty-one of the most commonly consumed foods were collected and dried locally before shipping for macronutrient and fiber analysis. The mean macronutrient concentrations were (mean ± SD, g/100 g dry basis) fat 0.5 ± 0.4, protein 11.8 ± 8.2, available carbohydrate 7.7 ± 6.3 and dietary fiber 74.0 ± 12.9. Assuming that the macronutrient profile of these foods was reflective of the whole gorilla diet and that dietary fiber contributed 6.28 kJ/g (1.5 kcal/g), then the gorilla diet would provide 810 kJ (194 kcal) metabolizable energy per 100 g dry weight. The macronutrient profile of this diet would be as follows: 2.5% energy as fat, 24.3% protein, 15.8% available carbohydrate, with potentially 57.3% of metabolizable energy from short-chain fatty acids (SCFA) derived from colonic fermentation of fiber. Gorillas would therefore obtain considerable energy through fiber fermentation. We suggest that humans also evolved consuming similar high foliage, high fiber diets, which were low in fat and dietary cholesterol. The macronutrient and fiber profile of the gorilla diet is one in which the colon is likely to play a major role in overall nutrition. Both the nutrient and fiber components of such a diet and the functional capacity of the hominoid colon may have important dietary implications for contemporary human health."

We disagree, of course, with the authors' suggested interpretation that humans, too, could make good use of the same dietary strategy, as we haven't the colons for it.

[15]

The maximum amount of fat humans could get from fermenting fibre in the gut is unknown. The widely cited value of 10% of calories comes from:

E. N. Bergman
Physiological Reviews Published 1 April 1990 Vol. 70 no. 2, 567-590

"The value of 6-10% for humans (Table 3) was calculated on the basis of a typical British diet where 50-60 g of carbohydrate (15 g fiber and 35-50 g sugar and starch) are fermented per day (209). It is pointed out, however, that dietary fiber intakes in Africa or the Third World are up to seven times higher than in the United Kingdom (55). It is likely, therefore, that much of this increased fiber intake is fermented to VFA and even greater amounts of energy are made available by large intestinal fermentation."

However, it should not be concluded that SCFA production could rise to 70% of energy requirements!

For one thing, as a back-of-the-envelope calculation, you can get up to about 2 kcal worth of SCFA per gram of fermentable carbohydrate. That would come from soluble plant fiber, resistant starch and regular starch that escapes digestion. To get 70% of calories this way on a 2000 kcal/day diet, you'd need to ingest 700g of fibre.
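For concreteness, here is the same back-of-the-envelope calculation in a few lines of Python, assuming (as above) about 2 kcal of SCFA per gram of fermentable carbohydrate and a 2000 kcal/day requirement:

    # Back-of-the-envelope from the text: ~2 kcal of SCFA per gram of
    # fermentable carbohydrate, on a 2000 kcal/day energy requirement.
    KCAL_PER_G = 2.0
    DAILY_KCAL = 2000.0

    def fiber_grams_needed(fraction_of_energy):
        # grams/day of fermentable carbohydrate to supply this share of energy
        return fraction_of_energy * DAILY_KCAL / KCAL_PER_G

    print(fiber_grams_needed(0.70))  # 700.0 g/day -- the 70% scenario
    print(fiber_grams_needed(0.10))  # 100.0 g/day -- about 10% of energy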

Even if you achieved this, it is unlikely you could absorb it all, and in the process of trying, you would experience gastrointestinal distress, including cramping, diarrhea or constipation, gas, and perhaps worse. Indeed, this would probably happen even at 100 g/day, which would provide only about 10% of energy in a 2000 kcal/day diet. Moreover, it would interfere with mineral absorption, rendering it an unviable evolutionary strategy. Even the ADA, which extols the virtues of fiber, cautions against exceeding its recommendation of 20-35 g/day. See Position of the American Dietetic Association: health implications of dietary fiber.

[16]

Evidence type: review

"As the mathematical models now seem quite plausible and the patterns of survivors versus extinct species seem inexplicable by climate change and easily explicable by hunting (7,11), it is worth considering comparisons to other systems. Barnosky et al. note that on islands, humans cause extinctions through multiple synergistic effects, including predation and sitzkrieg, and “only rarely have island megafauna been demonstrated to go extinct because of environmental change without human involvement,” while acknowledging that the extrapolation from islands to continents is often disputed (7). The case for human contribution to extinction is now much better supported by chronology (both radiometric and based on trace fossils like fungal spores), mathematical simulations, paleoclimatology, paleontology, archaeology, and the traits of extinct species when compared with survivors than when Meltzer and Beck rejected it in the 1990s, although the blitzkrieg model which assumes Clovis-first can be thoroughly rejected by confirmation of pre-Clovis sites. Grayson and Meltzer (12) argue that the overkill hypothesis has become irrefutable, but the patterns by which organisms went extinct (7,11), the timing of megafauna population reductions and human arrival when compared with climate change (5), and the assumptions necessary to make paleoecologically informed mathematical models for the extinctions to make accurate predictions all provide opportunities to refute the overkill hypothesis, or at least make it appear unlikely. However, all of these indicate human involvement in megafauna extinctions as not only plausible, but likely."

[17]

Evidence type: review

William J. Ripple and Blaire Van Valkenburgh
BioScience (July/August 2010) 60 (7): 516-526.

"Humans are well-documented optimal foragers, and in general, large prey (ungulates) are highly ranked because of the greater return for a given foraging effort. A survey of the association between mammal body size and the current threat of human hunting showed that large-bodied mammals are hunted significantly more than small-bodied species (Lyons et al. 2004). Studies of Amazonian Indians (Alvard 1993) and Holocene Native American populations in California (Broughton 2002, Grayson 2001) show a clear preference for large prey that is not mitigated by declines in their abundance. After studying California archaeological sites spanning the last 3.5 thousand years, Grayson (2001) reported a change in relative abundance of large mammals consistent with optimal foraging theory: The human hunters switched from large mammal prey (highly ranked prey) to small mammal prey (lower-ranked prey) over this time period (figure 7). Grayson (2001) stated that there were no changes in climate that correlate with the nearly unilinear decline in the abundance of large mammals. Looking further back in time, Stiner and colleagues (1999) described a shift from slow-moving, easily caught prey (e.g., tortoises) to more agile, difficult-to-catch prey (e.g., birds) in Mediterranean Pleistocene archaeological sites, presumably as a result of declines in the availability of preferred prey."

[18]

Evidence type: review

Ben-Dor M, Gopher A, Hershkovitz I, Barkai R.
PLoS One. 2011;6(12):e28689. doi: 10.1371/journal.pone.0028689. Epub 2011 Dec 9.

"The disappearance of elephants from the diet of H. erectus in the Levant by the end of the Acheulian had two effects that interacted with each other, further aggravating the potential of H. erectus to contend with the new dietary requirements:

"The absence of elephants, weighing five times the weight of Hippopotami and more than eighty times the weight of Fallow deer (Kob in Table 3), from the diet would have meant that hunters had to hunt a much higher number of smaller animals to obtain the same amount of calories previously gained by having elephants on the menu.

"Additionally, hunters would have had to hunt what large (high fat content) animals that were still available, in order to maintain the obligatory fat percentage (44% in our model) since they would have lost the beneficial fat contribution of the relatively fat (49% fat) elephant. This ‘large animal’ constraint would have further increased the energetic cost of foraging."

[...]

"Comparing the average calories per animal at GBY and Qesem Cave might lead to the conclusion that Qesem Cave dwellers had to hunt only twice as many animals than GBY dwellers. This, however, is misleading as obligatory fat consumption complicates the calculation of animals required. With the obligatory faunal fat requirement amounting to 49% of the calories expected to be supplied by the animal, Fallow deer with their caloric fat percentage of 31% (Kob in Table 3) would not have supplied enough fat to be consumed exclusively. Under dietary constraints and to lift their average fat consumption, the Qesem Cave dwellers would have been forced to hunt aurochs and horses whose caloric fat ratio amounts to 49% (the equivalent of buffalo in Table 3). The habitual use of fire at Qesem Cave, aimed at roasting meat [23], [45], may have reduced the amount of energy required for the digestion of protein, contributing to further reduction in DEE. The fact that the faunal assemblage at Qesem Cave shows significantly high proportions of burnt and fractured bones, typical of marrow extraction, is highly pertinent to the point. In addition, the over-representation of fallow deer skulls found at the site [9], [45] might imply a tendency to consume the brain of these prey animals at the cave. Combined, these data indicate a continuous fat-oriented use of prey at the site throughout the Acheulo-Yabrudian (400-200 kyr).

"However, the average caloric fat percentage attributed to the animals at Qesem Cave – 40% – is still lower than the predicted obligatory fat requirements of faunal calories for H. sapiens in our model, amounting to 49% (Table 2). This discrepancy may have disappeared should we have considered in our calculations in Table 3 the previously mentioned preference for prime-age animals that is apparent at Qesem Cave [9], [45]. The analysis of Cordain's Caribou fat data ([124]: Figure 5) shows that as a strategy the selective hunting of prime-age bulls or females, depending on the season, could, theoretically, result in the increase of fat content as the percentage of liveweight by 76% from 6.4% to 11.3%, thus raising the caloric percentage of fat from animal sources at Qesem Cave. Citing ethnographic sources, Brink ([125]:42) writes about the American Indians hunters: “Not only did the hunters know the natural patterns the bison followed; they also learned how to spot fat animals in a herd. An experienced hunter would pick out the pronounced curves of the body and eye the sheen of the coat that indicated a fat animal”. While the choice of hunting a particular elephant would not necessarily be significant regarding the amount of fat obtained, this was clearly not the case with the smaller game. It is apparent that the selection of fat adults would have been a paying strategy that required high cognitive capabilities and learning skills."