Thursday, January 27, 2011

Lack of Correlation Does Not Show Lack of Causation

From XKCD
I'm sure many of you feel that it is disappointingly easy to become embarrassed for humanity whenever reading a discussion of correlations.  In academia's greatest charade, every Stats 101 class or Epidemiology 101 class or heck even a Psych 101 class will emphatically declare that correlation does not imply causation.  Then most people graduate and spend their entire lives reading causation into correlations.  Especially if they become epidemiologists.

Observational studies are entirely legitimate forms of evidence, and correlations are entirely useful statistics.  No one can question this.  However, these correlations simply show a relationship and tell us nothing about the explanation of that relationship.  

This doesn't change just because an explanation is biologically plausible.  Nothing ever changes it.  A correlation raises the possibility of a cause-and-effect relationship, but no more or less than it raises the possibility of a non-causal relationship.  

Nevertheless, many people who understand this may still believe that a lack of correlation can rule out a cause-and-effect relationship.

But it can't.  It can't even come close.

In fact, a lack of correlation tells us much less than a correlation does.  This is because a correlation at least tells us there is a relationship, even if it tells us nothing about why the relationship exists.  A lack of correlation, by contrast, does not even tell us that there isn't a relationship.

While "no relationship" is one possible explanation for a lack of correlation, there are several others:

  • Lack of statistical power.  The study may have needed a larger sample to detect the correlation.  This would not affect the correlation coefficient, but it would affect whether it is statistically significant.


  • Lack of sufficient range of variation.  Even if you have a perfectly linear and very strong correlation, if you limit the range of variation you study, the correlation coefficient will decrease.  If you limit your range narrowly enough, the correlation coefficient will essentially disappear.  For example, if study time increases test scores linearly over the range of zero to ten hours per week, the relationship may be perfectly linear between eight and ten hours per week, but because you have limited the range of variation, you will decrease the correlation coefficient.  As you narrow the range further, the correlation coefficient shrinks toward zero (see the simulation sketch after this list).  This would make it look like the study was not actually underpowered, because there would not even be a non-significant correlation.  But it's just an illusion.


  • Lack of linearity.  Conventional correlation coefficients look for linear relationships.  As X increases, Y increases.  Or as X increases, Y decreases.  As Ned Kock frequently points out, many relationships found in nature are U-shaped or otherwise non-linear.  For example, Paul and Shou-Ching Jaminet suggest in Perfect Health Diet that there is an optimal range of carbohydrate consumption.   The risk of disease, according to their hypothesis, will be lowest in this range, and will increase as one departs from it either by increasing or decreasing carbohydrate intake.  If you're looking for a straight line when nature provides a U, you aren't going to find your line.


  • Incomplete or inappropriate adjustment for confounding factors. There may be other factors that affect the relationship that are not being taken into account.  On the other hand, perhaps there were statistical adjustments that were made that shouldn't have been made.  Assuming that the stats "have been adjusted for all the confounding factors" assumes that our knowledge of what may affect the relationship is complete or nearly complete.  In fact, our knowledge of what affects the relationship could be closer to a grain of sand in an entire seashore.  Moreover, understanding what is a true confounding factor requires understanding the cause-and-effect relationship -- and usually this is uncertain and controversial.  Failure to make the right adjustments results in a failure to make the relationship manifest, while making the wrong adjustments can hide a true relationship.
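To make the range-of-variation and linearity points concrete, here is a minimal simulation sketch in Python (using only NumPy).  The study-time and carbohydrate numbers are invented for illustration and do not come from any real dataset; the point is simply that restricting the range of the exposure, or looking for a straight line where the relationship is U-shaped, drives the ordinary correlation coefficient toward zero even though a real relationship exists.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Range restriction: a genuinely linear relationship with modest noise.
hours = rng.uniform(0, 10, 5000)                 # study time, 0-10 hours/week
score = 5 * hours + rng.normal(0, 10, 5000)      # test score rises with study time

full_r = np.corrcoef(hours, score)[0, 1]

narrow = (hours >= 8) & (hours <= 10)            # only look at 8-10 hours/week
restricted_r = np.corrcoef(hours[narrow], score[narrow])[0, 1]

print(f"r over the full range:   {full_r:.2f}")        # strong (around 0.8)
print(f"r over the narrow range: {restricted_r:.2f}")  # much weaker (around 0.3)

# 2. Non-linearity: a U-shaped relationship gives a near-zero *linear* correlation.
carbs = rng.uniform(0, 100, 5000)                     # % of calories as carbohydrate
risk = (carbs - 50) ** 2 + rng.normal(0, 200, 5000)   # lowest risk near the middle

print(f"r for the U-shaped relationship: {np.corrcoef(carbs, risk)[0, 1]:.2f}")  # ~0
```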

Thus, lack of correlation certainly does not imply lack of causation.  

Back to our regularly scheduled genetics series -- with a likely wheat interlude coming soon.

Enjoy the night!

Read more about the author, Chris Masterjohn, PhD, here.

Monday, January 24, 2011

Yummy Fermented Whole Food Donuts -- I Repeat, "Yummy"

by Chris Masterjohn

No donut will ever be a health food, but the following donut recipe rocks.  I invented it a number of years ago, and it's a great idea for a once-in-a-blue-moon treat.

These are fermented whole grain donuts, but in my experience you can give them to the uninitiated without telling them and they'll have no idea.  They are, quite simply, delicious.

This recipe is not gluten-free, but I'd be interested to hear back from people in the comments if they manage to replicate this with a gluten-free flour.

The recipe calls for home-made kefir.  It's possible that making the kefir from half cream instead of whole milk would reduce the water content and make the donuts easier to form.  I would be interested to hear back from anyone who manages to replicate this with yogurt to the same end.  Or, if anyone manages to make a casein-free version.

Well folks, here it is!  Enjoy!

=========================


Makes 20 donuts.
3 and 1/3 cups whole wheat pastry flour
1/3 cup raw honey
1/3 cup maple syrup
3 tbsp butter
3/4 cup kefir
1/2 tsp cinnamon
1/4 tsp nutmeg
3 tsp baking powder
1/2 tsp sea salt
2 eggs
tub-o-lard

Mix half the flour with the maple syrup, raw honey, butter, kefir, and spices. Do not include the eggs, baking powder, or salt. Let sit in a Pyrex bowl overnight.

The following day, take the batter, which should be thick and stiff, and blend it together with the eggs, salt, and baking powder. Flour a board with the whole wheat pastry flour. Roll about a tablespoon of the batter into a long piece. Curl it into a circle and pinch the ends together. Make the donut about HALF the size you want it. Ideally it should fit completely on your spatula before you cook it. Continue. This made 20 donuts for me.

Heat lard to about medium in a saucepan on the stovetop. Use a medium saucepan at largest, or a small one, to avoid needing enormous amounts of lard to get sufficient depth. You'll need about 3 cups, and will probably have to add more and eventually use a quart. The good news is the donuts will soak up at least half of this, so it's not going to waste. You have to be careful not to let the temperature get so high that the donuts brown on the outside before they cook on the inside. I used medium; medium-high might be better.

Spatula the donuts into the lard one at a time. They'll sink, then rise. Flip over. Cook about one minute on each side. Monitor the donuts for your desired brown-ness to fine-tune your cooking time.

Shake in a bag with rapadura or sucanat. Back and forth, up and down. Flip it, and repeat. Your donut is done.

Mmm, good :-D

Read more about the author, Chris Masterjohn, PhD, here.

Friday, January 21, 2011

The New Genetics -- Part I: How Our B Cells Create Their Own Antibody Genes

Among the cells of our immune system, the B cells make our antibodies.  Each B cell makes a different antibody.  We make an estimated one trillion different antibodies, giving us the capacity to respond to virtually any pathogen.  

Even more remarkably, each of these antibodies reacts with a variety of different antigens (an antigen is something that produces an antibody response) with relatively low specificity and affinity (stickiness) at first.  Once we are exposed to a pathogen, we begin making highly specific antibodies that are incredibly "sticky" towards their antigens.  

Thus, while we might make a trillion different types of antibodies at any given time, over our lifetime we make far more different types of antibodies than that.

But wait!  Antibodies are proteins, and proteins are encoded by genes.  We only have about 25,000 genes, so how on earth can we make trillions of different antibodies?

Moreover, how can the nature of these antibodies change after we are exposed to pathogens?

The answer is fascinating for two reasons. First, there exists within our lymph organs a microcosm of Darwinian evolution by natural selection where we see generations upon generations of evolution within a single immune response. Second, we see a rather "intelligent" response of the cell where it actually deliberately creates its own antibody gene in order to make a "perfect fit" for the antigen it encounters.

What follows comes from chapter 25 of the most recent edition (2008) of Molecular Biology of the Cell.  Since they have scary copyright warnings, however, the images are my own creations.

Most of the research that is discussed in this post was the basis for Susumu Tonegawa's reception of the 1987 Nobel Prize in Physiology or Medicine.

First, let's take a look at the basic structure of an antibody:

The antibody is Y-shaped and consists of four protein chains.  Two of them, shown in green, are called "heavy chains," while the other two, shown in purple, are called "light chains."  The solid colors show the "constant" regions of these chains while the yellow stripes mark the "variable" regions.

The variable regions are what gives antibodies their diversity.  Each B cell creates its own antibody gene that has a unique variable region by assembling different "gene segments" together.  This is called combinatorial diversification, and is illustrated in the following diagram.

The above diagram shows part of a coding region for a human light chain.  There are forty V (variable) segments, shown in blue, though only a few of them are depicted.  There are five J (joining) segments shown in green, and one C (constant) segment shown in yellow.

In between the V segments and in between the J segments there are "recombination sequences."  These are specific DNA sequences that tell our enzymes where to cut the DNA.  The recombination sequences between V segments are different from those between J segments, so that the cell never makes a mistake by combining two V segments or two J segments together.

Two proteins called RAG1 and RAG2 combine to make the RAG complex, which is like a pair of scissors, and is only found in our B cells and T cells.  The RAG complex cuts out a region between the chosen V segment and the chosen J segment.  Then, the ordinary DNA repair enzymes that are found in all of our cells engage in a process called site-specific recombination in order to join the DNA back together.  During this joining, a few nucleotides are allowed to fall off and are often replaced with different nucleotides, allowing further diversification. The cell has now created its own antibody gene.

During this process, the cell also edits the DNA surrounding this new "gene" so that specific DNA sequences called promoters and enhancers are positioned correctly to allow the gene to be expressed.  Thus, a B cell can only make an antibody after it engages in this DNA-editing, gene-creating process.

There are some excess V and J segments left over that flank the desired portion of the gene.  When the cell uses the gene template to make an RNA molecule that will later serve as a template to make the final protein, this extra portion of the RNA is removed.

For heavy chains, the process is similar, except that heavy chains also have a set of 25 D (diversity) segments, have six J segments instead of five, and have five different C segments that can be selected depending on whether the B cell will specialize in making IgM, IgD, IgE, IgA, or IgG antibodies.  All of these classes would bind to the same antigen, but each has unique functions that determine what the antibody does with the antigen it has bound.

This process of combinatorial diversification is also called V(D)J joining.  This allows B cells to make hundreds of different light chains and thousands of different heavy chains.  The same basic process underlies the production of T cell receptors in T cells.
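As a rough sense of the numbers, here is a small back-of-the-envelope sketch in Python using the segment counts given above (40 V and 5 J segments for the light chain; 25 D and 6 J segments for the heavy chain).  The heavy-chain V-segment count is not given in this post, so the figure of 40 used below is an assumption for illustration only, and the calculation ignores junctional diversity and somatic hypermutation, which multiply the final numbers enormously.

```python
# Rough combinatorial arithmetic for V(D)J joining, using the segment counts
# from the post; the heavy-chain V count (40) is an assumed, illustrative number.

light_V, light_J = 40, 5
heavy_V, heavy_D, heavy_J = 40, 25, 6       # heavy_V is an assumption

light_chains = light_V * light_J             # 200 -> "hundreds of light chains"
heavy_chains = heavy_V * heavy_D * heavy_J   # 6,000 -> "thousands of heavy chains"

# Each antibody pairs one light chain with one heavy chain.
antibody_combinations = light_chains * heavy_chains

print(f"light chains: {light_chains:,}")
print(f"heavy chains: {heavy_chains:,}")
print(f"pairings:     {antibody_combinations:,}")   # ~1.2 million before junctional
                                                     # diversity and hypermutation
```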

However, B cells undergo a further process that T cells do not, called somatic hypermutation.  This occurs in lymph follicles after B cells have been activated by antigens and "helped" by helper T cells.  The B cells form clusters where they reproduce rapidly but deliberately ramp up their mutation rate in these variable regions to a rate that is one million-fold higher than the normal rate.

They express antibodies on their cell surface that act as "receptors" for antigens.  The cells that do not bind well to the antigens commit suicide, while the cells that bind the best proliferate rapidly.  This process is called affinity maturation and is responsible for the increasing immunity we receive after we have been exposed to a pathogen.

This is rather remarkable in that it seems to be a microcosm of Darwinian natural selection.  

Only there are several important differences.  First, the B cells that bind poorly to the antigen are not slain by starvation or an inability to find a mate as they are out-competed by their more successful brethren.  Instead, they sacrifice themselves and commit suicide.  Second, there is a clear communication of binding affinity for the antigen from the cell surface to the B cell's genes, and the cells commit this altruistic suicide only after receiving that communication.  Third, the mutation rate is deliberately increased one million-fold by a special enzymatic process.

In this process, an enzyme called "activation-induced deaminase" converts cytosine to uracil.  Since uracil does not belong in DNA, the DNA repair enzymes replace it with legitimate nucleotides.  This special mutational process happens only in the V regions of antibody genes, so the mutation is not truly "random."

Thus, while there are superficial similarities to Darwinian concepts of evolution by natural selection of random variation, the process is more indicative of a natural genetic engineering within the cell.

We must then deal with the problem of autoimmunity.  After all, if we can produce trillions of antibodies and respond to practically anything, why don't we usually destroy our own tissues?  The first defense against this danger is called "receptor editing."  B and T cells that react to our own tissues undergo a second round of V(D)J joining in order to further edit the antibody/T cell receptor genes.  If they are still self-reactive, they will either be destroyed, permanently inactivated, or indefinitely suppressed.  The eventual failure of this suppression in some cases is believed to be related to autoimmune disorders.

How do B and T cells tell the difference between antigens they should destroy and those they should not?  They require signals to interpret whether the antigen they bind to is a "good guy" or "bad guy."  Scientists debate whether these signals result from certain molecular patterns that look like "pathogens," or instead from patterns that indicate general "danger," whether they originate from pathogens or our own body.  But when all is in working order, we regulate our vast array of immune potential to use it to our advantage.

Of course, there may still be a degree of "genetic determinism" to our immune response.  While our B cells can make antibodies to any antigen and our T cells can make T cell receptors for any antigen, important cells called antigen-presenting cells do not have that luxury.  They have a number of major histocompatibility complex (MHC) proteins, which are called human leukocyte antigens (HLA) in humans.  These are thought to be limited because if they were not, a great number more T and B cells that are otherwise useful would become self-reactive and we would have to delete them.  This would compromise our defense against pathogens rather than boost it.

Antigen-presenting cells use these proteins to present digested fragments of antigens to T cells.  The T cell is then  activated, and corresponding helper T cells "help" B cells to become activated and start making antibodies.  Since the antigens are digested into many fragments, and since we generally have 12 or more different HLA proteins, at least one of these proteins is usually able to present at least one of these fragments to a T cell.  Thus, we can respond to virtually any pathogen.

However, many scientists believe that once in a great while when an epidemic sweeps through a population killing many people, it is because most of the people in that population did not have the right MHC proteins to handle that pathogen's unusual protein fragments.  And some scientists also believe we are attracted to people with different HLA genes than we have precisely so our children will be sure to have a diverse array of them and thus be maximally resistant to pathogens.

Fascinating?  I think so.  But let me know what you think in the comments.





Thursday, January 20, 2011

The New Genetics -- Introduction

by Chris Masterjohn

The rapid advance of the sequencing of the genome of humans and other species has revealed how little we actually understand about how our cells operate.

Consider this passage from the fifth edition of Molecular Biology of the Cell (2008, p. 207):
Accurate gene identification requires approaches that extract information from the inherently low signal-to-noise ratio of the human genome.  We shall describe some of them in Chapter 8.  Here we discuss only one general approach, which is based on the observation that sequences that have a function are relatively conserved during evolution, whereas those without a function are free to mutate randomly.  The strategy is therefore to compare the human sequence with that of the corresponding regions of a related genome, such as that of the mouse.  Humans and mice are thought to have diverged from a common mammalian ancestor about 80 × 10^6 years ago, which is long enough for the majority of nucleotides in their genomes to have been changed by random mutational events.  Consequently the only regions that will have remained closely similar in the two genomes are those in which mutations would have impaired function and put the animals carrying them at a disadvantage, resulting in their elimination from the population by natural selection.  Such closely similar regions are known as conserved regions.  The conserved regions include both functionally important exons and regulatory DNA sequences.  In contrast, nonconserved regions represent DNA whose sequence is unlikely to be critical for function.

The power of this method can be increased by comparing our genome with the genomes of additional animals whose genomes have been completely sequenced, including the rat, chicken, chimpanzee, and dog.  By revealing in this way the results of a very long natural "experiment," lasting for hundreds of millions of years, such comparative DNA sequencing studies have highlighted the most interesting regions in these genomes.  The comparisons reveal that roughly 5% of the human genome consists of "multi-species conserved sequences," as discussed in detail near the end of this chapter.  Unexpectedly, only about one-third of these sequences code for proteins.  Some of the conserved noncoding sequences correspond to clusters of protein-binding sites that are involved in gene regulation, while others produce RNA molecules that are not translated into protein.  But the function of the majority of these sequences remains unknown.  This unexpected discovery has led scientists to conclude that we understand much less about the cell biology of vertebrates than we had previously imagined.  Certainly, there are enormous opportunities for new discoveries, and we should expect many surprises ahead.
Molecular Biology of the Cell  is referred to by many as the "Bible" of molecular biology.  Its lead author, Bruce Alberts, served as President of the U.S. National Academy of Sciences for 12 years from 1993-2005.  This is about as "mainstream science" as molecular biology gets.

Let us take for granted, for a moment, their summary dismissal of 95% of the genome as functionally unimportant, or at least unlikely to be critically important.

The unexpected realization in the last few years that "we understand much less about the cell biology of vertebrates than we had previously imagined" should lead us to wonder, if our knowledge is but a drop in an ocean, how did it come to pass that we understand this particular drop and not another?  If it is but a grain of sand in a seashore, how did we come to understand this particular grain, and not another?

The reality is that while the scientific method is, under ideal conditions, an objective method of acquiring knowledge, it is always imperfect humans with biases, financial interests, and ambitions who wield it.  

Even as many scientists may struggle to keep their personal interests and preferences in check, science over the course of the twentieth century has been patronized by for-profit, non-profit, and governmental institutions with more explicit social goals and interests.  

While these interests may not generally stand in the way of the ideally objective operation of the scientific method, and thus rarely if ever dictate the answers to scientific questions, they certainly have their hand in which questions are asked, and to a lesser but nevertheless meaningful degree how scientists go about finding the answers.

As the renowned historian of biology Lily Kay detailed in her 1993 book, The Molecular Vision of Life: Caltech, the Rockefeller Foundation, and the Rise of the New Biology (which I have reviewed here), the science of molecular biology and molecular genetics emerged under the patronage of the Rockefeller Foundation and related interests with eugenics and the science of "social control" as their primary aim.

The term "molecular biology," in fact, was coined in 1938 by Warren Weaver, director of the Rokcefeller's natural sciences division, to rename for the third time the program originally known as "pscyhobiology," the aim of which was "the rationalization of human behavior."

Edward Allsworth Ross coined the term "social control" in 1894 in response to what he saw as the inevitable class conflict that engendered debates between socialism and capitalism.  Ross's "social control" would nationalize not the means of production and distribution but rather the thoughts, feelings and desires that would drive the private sector.  In 1925, F.E. Lumley defined it as "the practice of putting forth directive stimuli or wish-patterns, their accurate transmission to, and adoption by, others whether voluntarily or involuntarily."

Scientists who followed Thomas Huxley's 1864 "protoplasmic theory of life" that attributed all physical and mental attributes of life to the physical substance within the cell and shared John B. Watson's 1913 theoretical goal of "the prediction and control of behavior" would see the elucidation of the physicochemical foundations of life as the preeminent means of developing a science of social control.

Eugenics offered the first means of making social control an exact science, and by 1940 over 30,000 forced sterilizations had been performed in the United States.

The scientific credibility of eugenics waxed and waned.  The rediscovery of Mendel's laws of heredity around 1900 suggested that each trait and behavior was controlled by a single gene.  But the eugenics movement suffered a number of setbacks when scientists began to realize that many traits are affected by more than one gene and many genes affect more than one trait.  Arguments about race and other sensitive topics, and ultimately the Holocaust, tarnished the ethical reputation of the movement.

Nevertheless, even in the post-WWII era many scientists openly flirted with eugenics.  For example, in the 1950s Linus Pauling stated the following:
It will not be enough just to develop ways of treating the hereditary defects. We shall have to find some way to purify the pool of human germ plasm so that there will not be so many seriously defective children born . . . We are going to have to institute birth control, population control.
Pauling even thought for a time during the 1960s that we should be marked on the forehead with our genetic defects:
There should be tattooed on the forehead of every young person a symbol showing possession of the sickle-cell gene or whatever other similar gene . . . It is my opinion that legislation along this line, compulsory testing for defective genes before marriage, and some form of semi-public display of this possession, should be adopted.
These types of attitudes shaped our initial understanding of genetics, where we came to see a gene as something that exerts fundamental deterministic control over an organism.

As time went on, however, a new eugenics founded not on sterilization of the unfit but rather on genetic modification of the less-than-perfect would take the stage.  Under its patronage, we have learned about the extensive abilities of cells to modify their own genomes, a glimpse of a radically different view of genetics.

Joshua Lederberg, co-discoverer of genetic recombination in bacteria, saw this new eugenics as the popular view among scientists:
[T]he ultimate application of molecular biology would be the direct control of nucleotide sequences in human chromosomes, coupled with recognition, selection and integration of the desired genes, of which the existing population furnishes a considerable variety. These notions of a future eugenics are, I think, the popular view of the distant role of molecular biology in human evolution.
Caltech's Robert Sinsheimer suggested this new eugenics would be a more democratic one than the old:
The old eugenics was limited to a numerical enhancement of the best of our existing gene pool. The new eugenics would permit in principle the conversion of all the unfit to the highest genetic level.
We must then wonder, how would we have come to see genetics if we were asking radically different questions?

In the next post, I will discuss the fascinating ability of our B cells to create their own antibody genes, and then to edit these genes in direct response to their environment, honing the affinity of the antibodies they produce for the antigens encountered by their host.  

Without doubt, genes do carry heritable information and genes do contribute to virtually all of our traits, and on occasion even determine them.  

My purpose in this series is not to deny these facts, but simply to shed some light on what else we can say about genes and DNA, and to give a glimpse of some fascinating science out there that suggests our popular conception of what "genetic" means might be very different had our scientific establishment's initial foray into molecular biology been intended to understand how it is that we can generate high-affinity antibodies to virtually any pathogen we encounter, rather than to understand how to control human behavior.

We must remember that while objective science will always allow the facts to contribute to its discoveries, the questions we ask just as powerfully determine the answers we get.


Now, on to the series...


The New Genetics -- Part I: How Our B Cells Create Their Own Antibody Genes

The New Genetics -- Part II: Some Biological Heredity Is Neither Genetic Nor Epigenetic

The New Genetics -- Part III: Genes Don't Express Themselves

The New Genetics -- Part IV: Who's In the Driver's Seat? How Cells Regulate the Expression of Their Genes

The New Genetics -- Part V: Is the Intestinal Microbiome Part of Our Genome?



Read more about the author, Chris Masterjohn, PhD, here.

Wednesday, January 12, 2011

Widely Publicized Studies Show Purified Diets Hurt Rodents But Blame It On "Fat" -- Another Response to Bix Weber

by Chris Masterjohn

A reader named "blob" asked me to respond to a recent post by the Fanatic Cook Bix Weber, "Two Studies That Link Dietary Fat to Cancer."

Both of these studies were conducted by the same group led by Philippe G. Frank, Ph.D., Assistant Professor in the Department of Stem Cell Biology and Regenerative Medicine at Thomas Jefferson University in Philadelphia.  This group had a rather clever way to blame the ravages of refined, purified diets on "fat."

A recent article  that several others had forwarded to me explained Dr. Frank's reasoning for turning to animal models:
Dietary fat and cholesterol have been shown to be important risk factors in the development and progression of a number of tumor types, but diet-based studies in humans have reached contradictory conclusions. This has led Dr. Frank to turn to animal models of human cancer to examine links between cholesterol, diet, and cancer.
This statement would have made a lot of sense if a few words were switched around.  For example, if the journalist had written that Dr. Frank turned to experimental research because the observational research was conflicting, this would have sounded a bit like the scientific method, which states that we test our hypotheses, which we have generated in an attempt to explain our observations.

As stated, however, it sounds rather silly.  If human studies conflict, what on earth can we learn from mice, as if it were easier to generalize from mice to humans than from some humans to others?

Indeed, this little bit of silliness emphasizes the point that these authors used mice that were genetically engineered to develop spontaneous prostate cancers in one study and mice genetically engineered to develop spontaneous breast cancers in the other study, but never did anything specific to make these mice respond to dietary factors like humans do.  As such, studies like these might be interesting for the light they shed on certain mechanisms of the development of the disease, but not for extrapolating dietary effects from rodents to humans.

Nevertheless, let's give this research group the benefit of the doubt on this point and assume that the journalist had very little training or even casual interest in science.

Still, these authors clearly claimed in their own words that their "data suggest that dietary fat and cholesterol play an important role in the development of prostate cancer."

Really?  The authors did, in fact, find that a so-called "Western diet" that was rich in fat and cholesterol increased the number and weight of tumors in both models.

But can we blame this on "fat" or "cholesterol"?  Let's take a look at their diets.

Here are the ingredients in the high-fat, high-cholesterol "Western" diet:
Sucrose, milk fat, casein, maltodextrin, powdered cellulose, dextrin, RP Mineral Mix #10 (adds 1.29% fiber), RP Vitamin Mix (adds 1.94% sucrose), DL-methionine, choline chloride, cholesterol, ethoxyquin (a preservative).
Sorry to use the same joke twice, but can you spot the food?



Of course not, because there is none.  There's just a series of purified ingredients.

According to the manufacturer's web site, that diet corresponds to Basal Diet 5755, which is made of similarly purified ingredients but is lower in sucrose and contains more dextrin instead of milk fat or cholesterol.

Ah, well if we wanted to control our variables that would be a perfect diet to use!

But such a diet, marketed for the very purpose of being a control for the Western diet they were using, did not satisfy Dr. Frank's group.  Instead they used a "chow diet."

Let's take a look at the ingredients:
Ground corn, dehulled soybean meal, wheat middlings, fish meal, ground wheat, wheat germ, brewers dried yeast, ground oats, dehydrated alfalfa meal, calcium carbonate, porcine animal fat preserved with BHA, ground soybean hulls, soybean oil, salt, dried beet pulp, [vitamin and mineral supplements].
Well whaddyaknow, it has food!

As I pointed out in my previous post, "They Did the Same Thing to the Lab Rats That They Did to Us," rats started developing all kinds of problems like fatty liver and excessive bleeding when the American Institute of Nutrition first started encouraging the use of chemically defined, purified diets.  They resolved some of these problems by decreasing the sucrose content and increasing the content of certain vitamins, but some of the problems never resolved.  Consider this reason the AIN gave in 1993 for adding certain "ultratrace elements" that were not known to be essential:

Many of the ultratrace elements are found in plentiful quantities in the natural ingredients that make up cereal-based diets, but their concentrations in purified diets are often very low, and in chemically defined diets, they may be completely absent.  Purified diets without added ultratrace elements support growth and reproduction, but investigators have noted that animals exposed to stress, toxins, carcinogens or diet imbalances display more negative effects when fed purified diets than when fed cereal-based diets (Bounous 1987, Boyd 1972 and 1983, Evers 1982, Gans 1982, Hafez and Kratzer 1976, Longnecker 1981).  This suggests that detrimental effects may occur with the omission of some substances found in the more natural, cereal-based diets; some of these substances may be the ultratrace elements.

I'm afraid all that these studies might be showing is what has already been known for decades — that chemically defined, purified diets are much worse for rodents than natural food-based diets.

And perhaps they hint at what many people including myself today believe — that humans should also be eating food.  The fat included.

Read more about the author, Chris Masterjohn, PhD, here.

Tuesday, January 11, 2011

Eating Fat and Diabetes -- Response to Bix Weber

by Chris Masterjohn

Melissa McEwen recently brought to my attention a blog post by Bix Weber, the Fanatic Cook, "Diabetes is a Disorder of Fat Metabolism."

Weber cites a 2009 study published in The Journal of Clinical Investigation entitled "Mitochondrial H2O2 emission and cellular redox state link excess fat intake to insulin resistance in both rodents and humans" that purports to show that eating too much fat contributes to insulin resistance not only in our furry little lab rat friends but also in humans.

The paper contains an animal study and a human study.  The animal study is useful and informative, while the human study is poorly designed.  This paper is important and does shed some light on the causes of insulin resistance, but to conclude from this paper that humans will become diabetic from eating too much fat is a serious misuse and misinterpretation of the paper.

The researchers fed rats a control diet or a 60% lard diet from Research Diets.  The authors don't state what the control diet was, but if it was similar to the standard control diet that Research Diets offers, the 60% lard diet would yield not only a high content of saturated fat, but a 50% increase in total PUFA.  They fed this diet with or without an antioxidant that targets the mitochondria, scavenges reactive oxygen species, and prevents the oxidative destruction of PUFA.

The high-fat diet did not produce any oxidative destruction of PUFA in rats:
No evidence of mitochondrial dysfunction or oxidative stress, at least with respect to the levels of the lipid peroxide derivative 4-hydroxy-nonenal (data not shown) was found in muscle of high-fat diet-fed rats with or without SS31 [the antioxidant] treatment.
However, when they isolated the muscles from these rats and provided them with energy sources, there was a 2-3-fold increase in the maximal production of hydrogen peroxide, which is used as a signaling molecule but can induce oxidative damage at high doses or when combined with certain metal ions.  This effect was abolished when the rats were treated with the mitochondrial antioxidant.

The high-fat diet also reduced concentrations of the master antioxidant of the cell, glutathione, and led to impaired glucose tolerance.  It is typical for this type of high-fat diet to produce these metabolic effects in rats, so this is not surprising.  Treatment with the mitochondrial antioxidant helped prevent the decrease in glutathione and completely abolished the impairment of glucose tolerance.

When they repeated the experiment in mice, they genetically engineered some of them to produce more of the enzyme catalase, which converts hydrogen peroxide to water.  Overproduction of catalase completely prevented the negative metabolic effects of the high-fat diet.

Quite obviously, this paper shows that high-fat diets are not inherently harmful to rodents.  Simply providing an antioxidant nearly abolishes all their negative effects.  

Health-conscious humans do not eat high-fat diets made of refined, purified ingredients or obtain 60% of their calories as lard.  A diet based on organ meats such as liver, shellfish, muscle meats, fish, fruits, vegetables, starches, and animal fats from healthy animals or selected traditional plant oils bears no resemblance to this type of diet and is instead loaded with antioxidants.

Nevertheless, I actually really like this paper.  In the discussion section, they point out that glutathione is not just an antioxidant, but is actually a master control switch responsible for regulating the activity of a whole host of different proteins and that insulin resistance is not so much a result of damage to the organism, but a way of homeostatically regulating energy balance.  I'll write more on this topic in the future.

Their hypothesis is basically that the supply of fat exceeds the metabolic demand for fat, and that the cell responds by creating a more oxidized environment in order to deliberately reduce its sensitivity to insulin, which will stop the flood of more incoming fuel in the form of glucose.

By providing an antioxidant, they increase the mitochondria's ability to process the fats, and thus increase the cell's tolerance for incoming fuel.  So the cell will take up more glucose in response to insulin.

The only problem with the paper is that they never explain why fat would constitute "excess" whereas carbohydrate would not.  It's possible that fats just burn a little less cleanly than glucose.  For example, a large excess of energy provided to the mitochondria will tend to cause glucose to be converted to fat, but could also increase the burning of fatty acids in the endoplasmic reticulum, which generates a lot more oxidative stress than the mitochondria do.  They provided no evidence of this, however.

But it's also quite possible that the 50% excess PUFA made all fuels burn less cleanly by increasing the mitochondrial content of vulnerable PUFAs.  Since they didn't find oxidative destruction of PUFAs, this hypothesis is not very well supported.

It's also possible that this simply reflects an adaptation to fat-burning.  As the cell adapts to fat, it stops taking up glucose. 

Regardless, the antioxidant improved mitochondrial efficiency enough to handle the fat and to be able to respond to insulin sensitively.

The big problem with this paper is that they try to extrapolate this to humans with an incredibly poorly designed study.

Here is what they report for methods:

Nine healthy lean (BMI, <25 kg/m2) men (aged 18-25 years) of a variety of races participated in an acute high-fat diet study.  Subjects reported to the laboratory following a 12-hour overnight fast.  After muscle samples were obtained, subjects consumed a single high-fat meal (35% daily kcal intake; >60% kcal from fat), and a second muscle biopsy was taken 4 hours later.  Subjects then consumed a high-fat diet (isocaloric; >60% kcal from fat) for 5 days and returned 12-hour fasted on the morning of the sixth day, when a final muscle biopsy was obtained.

Hmm, can you spot the control group?

Picture borrowed from here.
I think Waldo might be in there, but there's no control group in this study.  Nor is there a control trial where the people consumed a low-fat diet.

Nor is the diet described.

Their animal study would seem to suggest that the effects of the high-fat diet — if in fact there were any effects — could be mitigated simply by including appropriate antioxidants, perhaps the type that are included abundantly in traditional, nutrient-dense fatty foods.

Read more about the author, Chris Masterjohn, PhD, here.

Saturday, January 8, 2011

The Great Unknown: Using Statistics to Explore the Secret Depths of Unpublished Research

by Chris Masterjohn

I spent a large portion of the day today trying to figure out why a couple of papers I have show that EGCG, a component of green tea, increases glucose uptake into isolated skeletal muscle cells, while another shows the opposite.

The methods of these papers were a little different, and it's possible to speculate that some of the differences — for example, the amount of glucose in the culture dish — may have been responsible.

But it also occurred to me that perhaps EGCG doesn't have any effect on this phenomenon at all.  For all I know, perhaps a hundred times so far different groups have tested the effect of EGCG on uptake of glucose into skeletal muscle cells and 100 times it had no effect, and the research seemed like a waste of time and went down the memory hole.  But then a couple flukes occurred, and they were exciting, and the researchers wrote up the reports and got them published.

Melissa McEwen recently shared a New Yorker article with me, "The Truth Wears Off."  It discusses the "decline effect" — a scientist stumbles upon an exciting phenomenon, publishes about it, but then over time as others attempt to replicate it, or even as that very scientist tries to replicate it, the phenomenon seems to wear off, becoming much less true than it originally seemed.

Part of the decline effect is likely due to publication bias.  Negative findings just aren't very interesting and don't get published.  Later on, when everyone believes something to be true, it suddenly becomes interesting to publish contrary research.

The New Yorker article mentions John Ioannidis, a crusader against junk science who wrote a 2005 paper that had floated around the Native Nutrition list back in the day, entitled "Why Most Published Research Findings Are False."  Ioannidis is a fascinating character, and you can read more about him in the Atlantic article, "Lies, Damned Lies, and Medical Science."

Statistics is an important weapon in Ioannidis's myth-busting arsenal, and I think the proliferation of statistics within experiments has had a positive effect on science. 

On the other hand, I think the proliferation of epidemiology, which uses even more advanced statistics, has in some ways had a negative impact because scientists and media people are too tempted to inflate the importance of these types of studies by abandoning everything taught in Stats 101 and Epi 101 like "correlation does not imply causation." 

Nevertheless, despite the rampant peddling of hypothesis-as-fact, statisticians are increasingly developing methods to detect publication and reporting bias.  These will be critical to cutting through all the junk to find the truth.

Back to EGCG for a minute.

It's important to realize that every experiment has some sampling error.  Statistical tests designed to determine whether a difference is likely to be real ("statistically significant") or just a random fluke are based on the concept of a sampling distribution.  The sampling distribution is based on the idea that if you repeated an experiment a hundred times, you'd get a hundred different results, but the results should all hover around the "true" value.

For example, let's say that the "true" effect of EGCG on uptake of glucose into skeletal muscle is zero, zip, zilch, nada.  It just doesn't do anything.  In that case, the "true" difference in glucose uptake between skeletal muscle cells that have been treated with EGCG and those that haven't should be zero.  Thus, we can construct a theoretical sampling distribution that has a mean, or average, of zero:


The sampling distribution will be a bell-shaped curve that has as its center the "true" value, and we are offering the hypothesis that this value is zero because we are testing the hypothesis that EGCG has no effect. 

The bell-shaped curve represents all the possible results of experiments.  As the blue line rises, experiments are more likely to come up with that result.  The curve is shaped like a bell, that is, raised in the middle, because results closer to the "true" effect — which we are hypothesizing is no effect, or zero — will be more common than results further away from the true effect.

Just how wide the curve is will depend on the statistical precision of the experiments. The greater the sample size and the lower the variability, the narrower the curve will be, meaning each experiment is likely to give a result closer to the "true" result.  Nearly all of the curve falls within about three standard deviations of its center, and these standard deviations are marked off in units along the horizontal axis.  The standard deviation of this distribution is usually called the standard error.

For the sake of simplicity, let's say that if we repeat the experiment 100 times, 95 times we'll get a result within two standard errors. 

Now suppose our experiment suggests that EGCG really does affect uptake of glucose into muscle.  If the result falls outside of that range, in the tails of the distribution, we say it is statistically significant, because the chance of getting such a result if EGCG truly has no effect is less than 5%.

But, what if negative results never get published?  Say there are five papers published on the topic.  What if 100 people tried the experiment and 95 of them got no result so never published their findings?  EGCG could have no effect, but what you might see is three papers showing a positive effect and two showing a negative effect, and 95 experiments lost down the memory hole, never making it further than some lab technician's notebook paper.
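Here is a minimal sketch of that scenario in Python (NumPy only).  The group sizes, variability, and the two-standard-error cutoff are arbitrary choices for illustration, not anything from the EGCG literature: we simulate 100 experiments in which EGCG truly does nothing, flag the ones whose results land more than about two standard errors from zero, and imagine that only those "exciting" results ever get written up.

```python
import numpy as np

rng = np.random.default_rng(1)

n_experiments = 100
n_per_group = 10        # cells per group in each hypothetical experiment
true_effect = 0.0       # EGCG truly does nothing in this simulation

published = []
for _ in range(n_experiments):
    control = rng.normal(0, 1, n_per_group)
    treated = rng.normal(true_effect, 1, n_per_group)
    diff = treated.mean() - control.mean()
    # standard error of the difference between two means
    se = np.sqrt(control.var(ddof=1) / n_per_group + treated.var(ddof=1) / n_per_group)
    if abs(diff) > 2 * se:          # crude "statistically significant" cutoff
        published.append(diff)

print(f"'significant' flukes out of {n_experiments}: {len(published)}")
print("effects that would get published:", np.round(published, 2))
# Typically around five of the 100 null experiments clear the bar -- some showing
# a "positive" effect, some a "negative" one -- and only those see print.
```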

Thankfully, there are some incentives against this phenomenon.  Among them, researchers don't want to waste time and money failing to generate publications.  There are a great many papers suggesting that green tea can help prevent diabetes, obesity, and fatty liver disease.  As a result, many labs will dedicate themselves to studying this effect and trying to explain it.  This will allow them to mention negative findings to "rule out" mechanisms as part of larger papers with positive findings that render the paper "interesting" to a journal editor.

Nevertheless, publication bias is likely to be a serious problem, especially in clinical trials where there is a strong incentive to show that a drug or other treatment has an effect.

Statisticians have developed a cool tool called the funnel plot in order to detect publication bias.  Here's a recent example of a funnel plot showing clinical trials of long-chain omega-3 fatty acids EPA and DHA in the treatment of depression:

Martins JG.  EPA but Not DHA Appears To Be Responsible for the Efficacy of Omega-3 Long Chain Polyunsaturated Fatty Acid Supplementation in Depression: Evidence from a Meta-Analysis of Randomized Controlled Trials.  J Am Coll Nutr. 2009;28(5):525-42.


In the above diagram, all the white circles represent trials that were actually published.  As you move up along the vertical axis, the statistical precision increases.  The plot should be funnel shaped and appear symmetrical, so that studies with more statistical precision hover more tightly around the average and studies with less statistical precision gradually spread out toward a base at the bottom.

However, as you can see above, the distribution of white circles is not symmetrical.  This indicates publication bias.  In order to construct the funnel plot, the authors trim off the asymmetrical portion and then draw a line down the middle of the symmetrical portion.  This is the estimated "true" mean of the trials.  Then, the authors estimate how many trials weren't published by throwing onto the diagram enough black circles to make the plot a symmetrical funnel shape.  These black circles represent hypothetical "unpublished trials."

Here, the authors estimated that there were nine trials that found EPA or DHA actually increased depression scores but were never published.
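For readers curious how such a plot is put together, here is a rough sketch in Python (NumPy and matplotlib).  The trial sizes, the true effect of zero, and the crude rule for which trials go unpublished are all invented for illustration and have nothing to do with the actual EPA/DHA meta-analysis or its trim-and-fill procedure; the sketch just shows why suppressing unfavorable, non-significant trials produces a lopsided funnel.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

true_effect = 0.0
sample_sizes = rng.integers(10, 200, 60)          # 60 hypothetical trials

effects, ses = [], []
for n in sample_sizes:
    se = 1.0 / np.sqrt(n)                          # precision improves with size
    effects.append(rng.normal(true_effect, se))    # each trial's observed effect
    ses.append(se)
effects, ses = np.array(effects), np.array(ses)

# Crude publication bias: unfavorable, non-significant trials stay unpublished.
published = (effects > 0) | (np.abs(effects) > 2 * ses)

plt.scatter(effects[published], ses[published], label="published")
plt.scatter(effects[~published], ses[~published], facecolors="none",
            edgecolors="gray", label="unpublished")
plt.axvline(true_effect, linestyle="--", color="black")
plt.gca().invert_yaxis()                           # most precise trials at the top
plt.xlabel("observed effect size")
plt.ylabel("standard error")
plt.legend()
plt.show()
```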

Obviously, there is something inherently impossible to validate about this method.  We can never know what is being kept in secret, so we can never actually prove this method works.  

Nevertheless, there are many methods we use in the looser sciences that we cannot validate.  For example, food frequency questionnaires (FFQs) can be kind-of-sort-of validated with weighed dietary records, but people may eat differently when they are weighing their food, and weighed dietary records can never be done over the full course of time that an FFQ is meant to apply to, often a year.  It is even more difficult, in fact completely impossible, to validate radiometric dating over the time course we use it for.  But we use reasonable assumptions to make estimations with these tools because that is better than throwing our hands up in the air and giving up.

So while funnel plots can never prove publication bias, I hope that they will bring the probability of publication bias to light, and thereby encourage the scientific community to be more transparent.

One of the moves towards transparency currently happening is to create a database of clinical trials where they are registered before they are conducted.  This is an incredibly important development that will help prevent publication bias.

Another important bias is reporting bias.  If the researcher measures 100 things and reports five of them, then how much credibility can we give to the "statistical significance" of those five things?  If you measure 100 things, you are bound to get five or so that are "statistically significant" but are really random flukes.
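The arithmetic behind that worry is easy to check with a few lines of Python; the 100 outcomes and the 0.05 cutoff below are simply the numbers used in the paragraph above.

```python
import numpy as np

rng = np.random.default_rng(3)

n_outcomes = 100      # things measured
alpha = 0.05          # conventional significance cutoff

# Under the null hypothesis, a p-value is just a uniform random number in [0, 1].
p_values = rng.uniform(0, 1, n_outcomes)
false_positives = np.sum(p_values < alpha)
print(f"'significant' outcomes by chance alone: {false_positives}")    # about 5

# Probability of at least one false positive among 100 truly null tests:
print(f"P(at least one fluke) = {1 - (1 - alpha) ** n_outcomes:.4f}")  # ~0.994
```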

Statisticians are currently working on methods to detect this type of bias in reporting.  I have no idea how they'll work, but I can't wait to find out.

The New Yorker article I linked to above is a good reminder that the marks of a true scientist are not just a willingness to think rationally and question authority, but also patience, open-mindedness, and enough humility to say "I don't know" sometimes.

Read more about the author, Chris Masterjohn, PhD, here.

 

Wednesday, January 5, 2011

Wheat: In Search of Scientific Objectivity and New Year's Resolutions

by Chris Masterjohn

Well it's that time again, so Happy New Year!

January is a great time for trying new things to improve our lives and make them a bit better than they were the year before.  A number of people in the blogosphere have offered some great dietary ideas for January.  Stephan Guyenet recently passed on Matt Lentzner's call for a Gluten-Free January.  If you're up for a challenge, and a potentially bigger bang for a bigger buck, take on John Durant's 2011 Paleo Challenge and go "Paleo" for January. 

As Melissa McEwen recently pointed out in her post about the 2011 Paleo Challenge, "Choosing plant foods because of their history without taking biochemistry into account is dogma, not science." 

She was referring to the disproportionate demonization of white potatoes, but I'd like to take this opportunity to poke a few holes in the disproportionate demonization of wheat of all foods, while nevertheless supporting the concept of the January gluten-free and Paleo challenges.  

In particular, I'd like to provide a critical review of a study widely cited to show that wheat causes intestinal inflammation in people who do not have celiac disease, which in fact did nothing of the sort.

Since there are no validated tests for non-celiac gluten sensitivity (yes, I'm very prepared to defend this statement), going gluten-free for a while and reintroducing gluten is the best way to see whether you're sensitive.  If you feel a lot better while gluten-free, but regress to how you felt before when you reintroduce gluten, it's pretty reasonable to conclude you are likely to be gluten-sensitive.

The main caveat to this approach is that most gluten products on the market are processed in ways that make them more toxic instead of less toxic, so getting rid of these nasty products doesn't necessarily indicate a problem with gluten per se.

Another caveat is that gluten is among the most difficult proteins to digest, so anyone with digestive problems caused by something else is likely to have problems with gluten, and it may be the case that such a person could tolerate gluten at a later time.

Nevertheless, better to get rid of the nasty stuff now and sort out the details later.

I went gluten-free, casein-free (GFCF) for a year and a half.  My health got worse during that period.  In particular, I developed my first panic attack in years, and I experienced periodic jitters that seemed like they might be related to blood sugar or cortisol problems.  

I don't think I have enough evidence to definitively attribute these problems to the GFCF diet, and I was never able to separate the effects of going gluten-free from the effects of going casein-free, but I've been eating properly prepared gluten- and casein-containing foods again for years now and I no longer have these problems.  So I think I am justified in concluding that going GFCF is not essential and probably isn't even beneficial to my health.

Nevertheless, many other people report benefits from going gluten-free, and whether you're sensitive is a question you are much more equipped to answer than your doctor is, so if you haven't done it already, I'd recommend going gluten-free.  And while you're at it, might as well throw in a stab at a stricter Paleo diet as well.

While we're all being open-minded and non-dogmatic about this, I'd like to offer a brief critical review about a study that has been widely cited as showing that gluten causes intestinal inflammation in non-celiacs (1), when in fact it did nothing of the sort.

In this study, the researchers took intestinal biopsies from six individuals without celiac and then cultured them in laboratory dishes, and showed that adding wheat gluten or several difficult-to-digest fragments of the protein increased the production of interleukin-15 (IL15), an inflammatory signal.  All of the non-celiac subjects were sick with problems like hiatal hernia and chronic gastritis.

If we're going to use this study to hate on wheat, we should start hating on coconut oil, because that ol' "one meal of saturated fat will practically kill you" study (2) was of much higher quality than this one.  At least in that study they actually fed people coconut oil.  Nevertheless, their interpretation of the study was enormously flawed, and I published an extensive critique on my web site and published a much shorter letter in the Journal of the American College of Cardiology criticizing the authors' conclusions (3).

For the sake of objectivity, I'll have to offer a few critiques of the wheat study too, so here goes.

Here's a brief list of my problems with this study:
  1. This is not an in vivo study.
  2. There is no full report of the methods or data.
  3. The images supposedly demonstrating the data look awful.
  4. The study is completely uncontrolled, and there is no way to conclude that the effect is unique to gluten and no way to conclude that the effect is even attributable to gluten rather than to the solvent or to any inflammatory contaminants.
1.  This is not an in vivo study.

No one fed any wheat to anyone in this study.  Concluding something about eating wheat from this study is therefore complete nonsense.

That's not to say that such a finding wouldn't be interesting.  If it were convincing (and it isn't, for reasons described below), it would be a good reason to conduct a study feeding wheat to non-celiacs to see if it causes any inflammation.  Then again, they should conduct such a study anyway.

The investigators found six people who did not have celiac, but were otherwise pretty sick.  They took intestinal biopsies, incubated the cells in laboratory dishes, and challenged them with three proteins.  One was the full gluten protein (gliadin), one was a synthetic imitation fragment of this protein 19 amino acids in length (19-mer), and one was a fragment 33 amino acids in length (33-mer) that had been treated with the enzyme transglutaminase in order to render it toxic.

Had they fed wheat to these people, the people would have substantially digested the gliadin molecule.  Although the 19-mer and 33-mer fragments are particularly difficult to digest and are therefore thought to play a role in celiac, recent evidence suggests that microbial enzymes from bacteria in our mouths digest 33-mer (4) and evidence dating back seven years has suggested that in people who do not have active celiac disease, both 19-mer and 33-mer are totally degraded once they enter the cells that line the intestine (5).

On top of this, the authors state that the version of the 33-mer they used was deamidated.  This means that it had been treated with an enzyme called tissue transglutaminase (TG).

Ordinarily, TG remains within our cells in an inactive form, but when our tissues are damaged, the cells activate it and release it so that it can begin repairing the damaged tissue.  However, TG also modifies the 33-mer fragment of the gluten protein by stripping nitrogen from the side chain of the amino acid glutamine, thereby converting it to the amino acid glutamate.  In all or almost all cases, this deamidation is required to make the 33-mer "immunogenic."  In other words, the immune system will only mount a response to the gluten fragment after it has been processed by TG (6).
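
For those who like to see the chemistry spelled out, here is a toy sketch, entirely my own illustration and not anything from the paper, of what deamidation amounts to at the sequence level: particular glutamine (Q) residues are converted to glutamate (E).  The peptide shown is the commonly cited alpha-gliadin 33-mer sequence, but the positions deamidated below are chosen arbitrarily; the real enzyme targets specific glutamines determined by the surrounding sequence.

# Toy illustration of deamidation: glutamine (Q) -> glutamate (E) at chosen positions.
# The positions used below are arbitrary, not the residues TG actually targets.

ALPHA_GLIADIN_33MER = "LQLQPFPQPQLPYPQPQLPYPQPQLPYPQPQPF"  # commonly cited 33-mer sequence

def deamidate(peptide, positions):
    """Return the peptide with Q -> E substitutions at the given 0-based positions."""
    residues = list(peptide)
    for i in positions:
        if residues[i] != "Q":
            raise ValueError(f"position {i} is {residues[i]}, not glutamine")
        residues[i] = "E"  # the glutamine side chain loses its amide nitrogen, yielding glutamate
    return "".join(residues)

print("native:     ", ALPHA_GLIADIN_33MER)
print("deamidated: ", deamidate(ALPHA_GLIADIN_33MER, [9, 16, 23]))  # arbitrary example positions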

Surely, we can all see now why this study is worthless for telling us how these patients would have responded had they actually eaten wheat.  Perhaps, since these patients were all sick and were not in any remote sense a random sample of the population, their damaged tissue would have contained some active TG.  Or perhaps they would have fully digested the gluten and all of its protein fragments with a mixture of microbial and endogenous enzymes, beginning in the mouth and ending inside the intestinal cell.

2.  There Is No Full Report of the Methods or Data

This study was published as a brief report, almost like a letter to the editor with pictures.

This suggests that the editors of the journal found the study either 1) only moderately interesting, or 2) too inconclusive to merit the space for a full report.

If this study were the first of its kind to definitively show that wheat causes intestinal inflammation in non-celiacs, I would think that would be of great interest.  The lack of detail in this short format makes the study very difficult to evaluate critically.  Nevertheless, there are some things that stick out and suggest the study is almost uninterpretable.

3.  The Images Supposedly Demonstrating the Data Look Awful

You can take a look at their figure on the second page here.  Most of it looks like a blurry mess.  

Ideally what you would want to see in a Western blot is a clear, distinct, relatively thin black line demonstrating the protein, in this case IL15.  On one side you want to see a "ladder" showing various molecular weights and on the other a clean line representing your protein of interest, sometimes shown alongside a "positive control" that was purchased commercially. 

Here's an example of a good Western blot (7):

On the right we see a ladder with protein fragments of known molecular weight, each shown as a black line.  On the left, we see several proteins that the authors isolated.  We are confident they are the correct proteins because 1) the staining comes from an antibody with demonstrated specificity for what they are looking for, and 2) the lines appear at the correct molecular weight, as judged by the ladder on the right.
 
In our study, what we would like to see is a clear absence of the line in our control cells and the clear presence of a line indicating our protein of interest in the gluten-treated cells.  Is that what we see?  In some cases, kind of.  In others, all I see is a blur.

Let's take a look:
Rather than a ladder, we just have a positive control in the upper left, marked IL15.  A ladder takes up a lot of space, so that's OK.  But the positive control should be the best-looking line, and instead it looks like a blur.  The best-looking line is actually in the third column, showing a biopsy from a celiac patient treated with gliadin.  But is there a response to gliadin in the non-celiac individual shown in the upper right?  I just see a massive blur.

In the second row, we see the cells treated with the 19-mer and 33-mer fragments.  It kind of looks like there are some lines there, but they look pretty horrible.

Sometimes it's hard to get these images to reproduce well in electronic files or in print.  Densitometry software can produce objective, quantitative data that can be presented as a bar graph next to the blot images, but we don't have that here, perhaps because the journal wouldn't give the authors the space.  Still, I'd feel more confident in their findings if they could show us that the software can see these lines, because they look pretty fuzzy to me.
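
For the curious, here is a minimal sketch of what that kind of quantification involves.  This is my own toy example, not the authors' method, and the image, lane coordinates, and background rule are made up: densitometry simply integrates pixel darkness over a band's region of interest, subtracts a background estimate, and reports a number for each lane that can then be plotted as a bar graph.

import numpy as np

def band_density(blot, rows, cols):
    """Integrate pixel darkness in a rectangular region of a grayscale blot image.

    Assumes 0 = black and 255 = white, so darker (more protein) means lower values.
    A crude local background is estimated from the lightest pixels in the region.
    """
    roi = 255.0 - blot[rows, cols].astype(float)       # convert to a darkness signal
    background = np.percentile(roi, 10)                # rough background estimate
    return float(np.clip(roi - background, 0, None).sum())

# Fabricated image: mostly light background with one dark "band" in a treated lane.
rng = np.random.default_rng(0)
fake_blot = rng.integers(200, 256, size=(100, 300)).astype(np.uint8)
fake_blot[40:50, 120:160] = 60                          # simulated IL15 band

control = band_density(fake_blot, slice(40, 50), slice(20, 60))    # untreated lane
treated = band_density(fake_blot, slice(40, 50), slice(120, 160))  # gliadin-treated lane
print(f"control lane: {control:.0f}   treated lane: {treated:.0f}")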

Worst of all, you always submit your best images for publication.  That means the others were probably even worse.

4.  The Study Is Completely Uncontrolled

Even if we suspend our skepticism of the fuzzy lines and take the authors at their word that gliadin and its protein fragments caused the intestinal biopsies taken from sick non-celiac patients to produce IL15, the study is still completely uninterpretable because the authors did not use any appropriate controls.

Is this an effect specific to gluten and gluten fragments, or would virtually any protein or protein fragment have caused this effect?  We don't know, because they didn't use a negative protein control.

Actually, the authors state that they expected 33-mer to have no effect on IL15:
and, although not expected, the "non-toxic" immunodominant 33-mer was also able to induce an innate response.
One way of looking at this is that their negative control failed and turned out positive.  33-mer is known for stimulating the immune system in other ways, but they only expected 19-mer to cause the production of IL15.  Maybe many other proteins would elicit the same response.

But it gets worse.  Was this even an effect of the protein, or was it an effect of the solvent?  We don't know, because there was no vehicle control.  In fact, since there is no "methods" section, we don't even know what they dissolved the proteins in!

Or, was it an effect of contaminants?  The authors state that they discarded any contaminating endotoxin, but they do not state clearly whether they discarded it from the synthetic peptides or from the intestinal biopsies, or how they discarded it.  Sometimes endotoxin purification can introduce other contaminants.  

On the other hand, sometimes proteins purchased commercially are themselves already contaminated with endotoxin, which could produce an inflammatory response.  In fact, this is so potentially problematic that any study showing an inflammatory effect of incubating cells with a protein purchased commercially should be viewed with extreme skepticism if the authors do not verify that it is free of endotoxin.
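
To pull these objections together, here is a hypothetical sketch of what a properly controlled version of this experiment might include.  None of this comes from the paper; the condition names are my own, and the point is simply which comparisons would be needed before a rise in IL15 could be pinned on gluten peptides specifically.

# Hypothetical design for the biopsy-culture experiment (my illustration, not the authors').
CONDITIONS = [
    ("untreated",          "baseline IL15 production by the cultured biopsy"),
    ("vehicle only",       "rules out an effect of the solvent the peptides were dissolved in"),
    ("irrelevant protein", "rules out a response to just any protein or peptide"),
    ("gliadin",            "the exposure of interest"),
    ("19-mer",             "difficult-to-digest fragment of interest"),
    ("deamidated 33-mer",  "TG-treated fragment of interest"),
]
# Every peptide preparation would also need to be verified endotoxin-free before use.

for name, purpose in CONDITIONS:
    print(f"{name:20s} {purpose}")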

Conclusion: Consider Going Gluten-Free or Paleo Anyway!

I hope I've convinced some of you that this particular study absolutely cannot be used to justify any dietary conclusions, let alone "wheat is inherently toxic and evil."

However, if we waited for conclusive scientific evidence for everything we believe in or act on, we would vegetate.  If it's possible that you're gluten-sensitive and you still have some unresolved health problems, why not go gluten-free?  Heck, why not go full Paleo?  It's just for January.  If you feel better, stick with it and see what happens!

What's more, share your results with the rest of us!  Let's fix up published literature and personal and clinical experience on a blind date.  Once they embrace, we'll all be better off.

Read more about the author, Chris Masterjohn, PhD, here.

References

1.  Bernardo D, Garrote JA, Fernandez-Salazar L, Riestra S, Arranz E.  Is gliadin really safe for non-coeliac individuals?  Production of interleukin 15 in biopsy culture from non-coeliac individuals challenged with gliadin peptides.  Gut. 2007;56(6):889-90. [pubmed link]

2.  Nicholls SJ, Lundman P, Harmer JA, Cutri B, Griffiths KA, Rye KA, Barter PJ, Celermajer DS.  Consumption of saturated fat impairs the anti-inflammatory properties of high-density lipoproteins and endothelial function.  J Am Coll Cardiol. 2006;48(4):715-20. [pubmed link]

3.  Masterjohn C.  The anti-inflammatory properties of safflower oil and coconut oil may be mediated by their respective concentrations of vitamin E.  J Am Coll Cardiol. 2007;49(17):1825-6. [pubmed link]

4.  Helmerhorst EJ, Zamakhchari M, Schuppan D, Oppenheim FG.  Discovery of a novel and rich source of gluten-degrading microbial enzymes in the oral cavity.  PLoS One. 2010;5(10):e13264. [pubmed link]

5.  Matysiak-Budnik T, Candalh C, Dugave C, Namane A, Cellier C, Cerf-Bensussan N, Heyman M.  Alterations of the intestinal transport and processing of gliadin peptides in celiac disease.  Gastroenterology. 2003;125(3):696-707. [pubmed link]

6.  Tjon JM, van Bergen J, Koning F.  Celiac disease: how complicated can it get?  Immunogenetics. 2010;62(10):641-51. [pubmed link]

7.  Moron B, Cebolla A, Manyani H, Alvarez-Maqueda M, Megias M, Thomas Mdel C, Lopez MC, Sousa C.  Sensitive detection of cereal fractions that are toxic to celiac disease patients by using monoclonal antibodies to a main immunogenic wheat peptide.  Am J Clin Nutr. 2008;87(2):405-14. [pubmed link]