All posts by Raymond

DARK ENERGY

In science, we first observe phenomena. Then we try to account for why these things happen by creating a Hypothesis. Then we test the hypothesis by seeing how many different phenomena it can explain. If it explains a lot of disparate things, it is promoted to a Theory.

Any theory can be dislodged by new observations that contradict it. However, since any individual experiment has a small chance of being wrong, or wrongly interpreted, when a theory is challenged, we weigh the evidence pro vs. con. The more evidence favoring the theory, the more contrary evidence it takes to overthrow it. All this is just common sense.

One of the most well-supported theories, if not the very most, is Gravitation. All objects possessing mass, no matter how small, attract all other objects in the Universe. From infancy we see dozens of events daily that support gravitation. It was Isaac Newton who first recognized its universality, and although neither he nor anyone else understands how objects not touching can pull or push each other, we all recognize that it somehow happens all the time, with no exceptions.

Newton’s great triumph was to show that Gravitation operates not only locally, but also throughout the heavens. Not only does the Earth attract any unsupported object so that it falls to the ground, but the sun attracts the planets, and stars and galaxies attract one another.

In 1929, the astronomer Edwin Hubble reported that all heavenly bodies (save those very near) appeared to be flying away from us. Furthermore, the speed of this flight was proportional to their distance from us. Thus it appeared that everything in the heavens was flying away from everything else, and that at some time in the past all of them were in the same place (Hubble’s Law). What set them all flying apart was a colossal explosion whose date can be determined from how long it has taken them to get where they are. This date was about 13.8 billion years ago. This explosion is popularly called the Big Bang. On that date the present universe was born, and what existed before then is unknown.
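
As a back-of-the-envelope illustration of how that date falls out of the proportionality between speed and distance (the Hubble constant of ~70 km/s per megaparsec used below is an assumed round figure, not a number from this post), every object traces back to a common starting point a time 1/H0 ago:

```python
# A rough sketch of the Big Bang date implied by Hubble's Law: with v = H0 * d,
# every object left a common starting point a time t = d / v = 1 / H0 ago.
# H0 ~ 70 km/s per megaparsec is an assumed round value.

KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

H0 = 70.0                                # assumed Hubble constant, km/s per Mpc
H0_per_second = H0 / KM_PER_MPC          # same constant expressed in 1/s
hubble_time_years = 1.0 / H0_per_second / SECONDS_PER_YEAR

print(f"1/H0 ~ {hubble_time_years / 1e9:.1f} billion years")   # ~14 billion years
```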

We determine the speed away from us of all heavenly bodies by the Doppler Effect. When anything that emits any kind of wave – say, sound or light – moves toward or away from us, we observe a shift of the wavelength. The waves from an approaching object are compressed because it catches up a bit with its own waves as it moves, so that they appear to be shorter to us than they appear to a passenger riding along on the moving object. Waves from a departing object appear stretched out to us, but not to the passenger. An example that all of us have observed is an ambulance or fire truck whose siren is on. As it approaches us, the pitch is steady, but as it passes us it falls, and when it moves further away it is once more steady but lower than it was during approach. The wavelength of sound determines the pitch; shorter wavelength denotes higher pitch.
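
For readers who want the arithmetic, here is a minimal sketch of the classical Doppler formula for sound; the siren frequency and ambulance speed below are invented for illustration:

```python
# Doppler shift for sound: the observer hears f_src * v_sound / (v_sound - v_source)
# while the source approaches, and f_src * v_sound / (v_sound + v_source) as it recedes.

V_SOUND = 343.0      # speed of sound in air, m/s (room temperature)
f_src = 700.0        # assumed siren frequency, Hz
v_ambulance = 25.0   # assumed ambulance speed, m/s (~56 mph)

f_approaching = f_src * V_SOUND / (V_SOUND - v_ambulance)   # pitch raised
f_receding    = f_src * V_SOUND / (V_SOUND + v_ambulance)   # pitch lowered

print(f"approaching: {f_approaching:.0f} Hz, receding: {f_receding:.0f} Hz")
```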

Light waves behave in the same way, but because light travels enormously faster than sound, the moving light-emitter must move at tremendous speeds for us to perceive a Doppler effect. Stars and galaxies indeed move at such speeds. Various wavelengths of light are perceived as different colors. Light from approaching objects is shifted to shorter wavelengths, i.e., toward the blue end of the spectrum. Light from departing objects is red-shifted. We can measure these shifts by means of features of the spectra of stars and galaxies whose wavelengths we know with high precision. What Hubble saw was that nearby heavenly objects moved relatively slowly in random directions, i.e. some blue- and some red-shifted, but all others not so nearby exhibited red shifts, which were proportional to their distances.
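
A small worked example of how such a shift is read off a spectrum (the shifted wavelength below is invented; the rest wavelength is the well-known hydrogen-alpha line, one of those precisely known spectral features):

```python
# Turning a red shift into a recession speed. For speeds well below light speed,
# z = (observed - emitted) / emitted wavelength and v ~ c * z.

C = 299_792.458            # speed of light, km/s
lambda_rest = 656.3        # rest wavelength of hydrogen-alpha, nm
lambda_observed = 662.9    # assumed measured wavelength, nm

z = (lambda_observed - lambda_rest) / lambda_rest
v = C * z                  # non-relativistic approximation, valid for small z
print(f"z = {z:.4f}, recession speed ~ {v:.0f} km/s")
```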

After the Big Bang, everything moved apart at random speeds, and the Universe expanded rapidly. Presumably, the force of gravitation acts as a brake on the expansion, slowing it down bit by bit, although until now no braking has been detectable.

The redshift tells us the speeds of these objects, but how do we know their distance? Stars and galaxies glow brightly, but their brightness varies randomly as far as we know. Dimmer stars might be dimmer because they are far away, or because they are intrinsically less bright. But there is a class of very bright objects (Type Ia supernovas), termed Standard Candles because they are all believed to be the same brightness. Therefore, we can gauge their distance from how bright they appear to us. They provide the most reliable data supporting Hubble’s Law.
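
The arithmetic behind the standard-candle method is just the inverse-square law: a candle of fixed luminosity that appears N² times dimmer must be N times farther away. A minimal sketch, with invented numbers:

```python
# Standard candles: if all Type Ia supernovas share the same intrinsic luminosity L,
# apparent brightness falls off as 1/d^2, so d = sqrt(L / (4 * pi * F)).
# Only the ratio of fluxes matters for comparing two candles.
import math

def distance_ratio(flux_near, flux_far):
    """Two candles of equal luminosity: the dimmer one is farther by sqrt(F_near / F_far)."""
    return math.sqrt(flux_near / flux_far)

# Example: a supernova appearing 10,000 times dimmer than an identical one
# sits 100 times farther away.
print(distance_ratio(1.0, 1.0e-4))   # -> 100.0
```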

However, an anomaly has been discovered recently. The most distant of these supernovas, about 13.5 billion light-years away, are slightly dimmer than they ought to be. It appears that they are a little further away than they would be by Hubble’s Law. This has been interpreted in terms of Dark Energy, a mysterious opponent of our picture of gravitation, possibly the most solidly supported theory in science. Normally, we discard established theories only reluctantly, on the basis of powerful evidence, which is clearly absent here. Why the rush onto this bandwagon?

Presumably all credible explanations have been considered and found wanting by someone. Herein I try to supply some more parsimonious explanations that must be refuted before Dark Energy can be enthroned. We must remember that the supernovas in question are among the furthest known objects. The light we receive from them has been traveling for ~13 billion years over a distance of 7.6×10^22 miles. Thus the objects in question are seen as they were 13 billion years ago, when the universe was very young.
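
As a quick check of that figure, 13 billion light-years converted to miles:

```python
# One light-year is roughly 5.88 trillion miles.
miles_per_light_year = 5.88e12
distance_miles = 13e9 * miles_per_light_year
print(f"{distance_miles:.1e} miles")    # ~7.6e+22 miles, matching the figure in the text
```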

What if light loses brightness as it travels? As far as we know it does not, but perhaps it loses energy so slowly that only over distances as large as 7.6×10^22 miles can it be detected. Of course this ad hoc explanation has no experimental support, but neither has it been experimentally refuted. Therefore we may regard it as silly, but it cannot be ruled out with certainty.

The next hypothesis has more meat on it. How do we know that standard candles had the same luminosity 13 billion years ago that they have in more recent times? I suggest that in fact they cannot have the same luminosity because their composition is significantly different from all the later ones. In particular, the earliest stars and supernovas contained no metals, as all later ones do. It would in fact be astonishing if the presence or absence of metals did not affect supernovas’ luminosity. Then why invent Dark Energy?

There is yet another possible solution for the problem. One of the tenets of quantum theory is that uncertainty applies not only to the existence of particles but also to their nonexistence. Therefore even a vacuum contains swarms of particles perpetually entering and leaving existence. I have been assured that these particles do not interact with light, but of course this assurance rests solidly on nothing. Even if we cannot detect quantum particles in the lab, when the lab is 7.6×10^22 miles long we cannot rule out the possibility that quantum particles do interfere with light to an infinitesimal but real degree. This may be happening with all the starlight we receive and we would not know it. If it does, then assuredly the supernovas in question will show a greater effect than nearer standard candles. Once again, Dark Energy is not necessary.

Finally, there is another reason for skepticism about Dark Energy. We know from the redshift how fast any object was flying away from us when the light we receive was emitted (~13.5 billion years ago). If there were no acceleration or deceleration we would know how far from us it was at that time. From this we would know how bright a Standard Candle would appear. Heretofore, Hubble’s Law has been confirmed again and again, showing no measurable gravitational deceleration until now. If these most distant supernovas have experienced a little gravitational deceleration, then earlier in their journey they must have been going faster than their redshift-determined speed. Therefore, they are actually further from us than we deduce from Hubble’s Law, and consequently dimmer than expected. That is what we do see! If Dark Energy were real, the most distant supernovas would have experienced gradual acceleration all this time, and they would be closer than predicted from Hubble’s Law, and therefore brighter. This is not what we see.

This raises the question, will the Universe’s expansion keep slowing down until it stops, and then the whole thing reverses itself until it all comes together in another Big Bang? This is attractive but not inevitable, because it is possible that the speed of expansion is so great that even the gravitational attraction of the whole thing is insufficient to bring it all back. Is it possible that the Universe will expand and collapse like a yo-yo forever, despite the prediction of the Second Law of thermodynamics that all disparities in energy will eventually disappear, leaving everything at the same temperature, the so-called “heat death”? Yes. The Second Law says that a bouncing ball must gradually surrender its kinetic energy to its surroundings, and stop bouncing. But the Universe has no surroundings. As far as we know, it’s all there is.

There is no limit to the number of hypotheses people can cook up to explain any phenomenon. They must all be allowed if they cannot be refuted. Therefore, to believe in Dark Energy because “what else could it be?” is not justified in the absence of corroborating experimental facts.

HEALTH STANDARDS FOR YOUNG AND OLD DIFFER LESS THAN IT SEEMS

Abstract: There is a recent trend toward relaxing the health standards for people over ~70 for weight, blood pressure and LDL. Recent studies find no increase in mortality when weight rises from BMI 22 to 27, BP rises from 140 to 150, and LDL rises to >100. I hold that these conclusions are fallacious because the studies make no distinction between robust and frail cohorts of the old.


In recent months I’ve come across medical advice for the old that seems sensible, but isn’t. In several areas, experimental data are presented that seem contrary to common sense although the data are not wrong. What’s wrong is the uncritical way in which the results are accepted at face value.

At this writing, I know of three areas in which I object to the interpretation of the data: optimum weight, hypertension, and the safe level of serum LDL (low-density lipoprotein).

Overweight has been recognized as a problem for decades, causing increased mortality from hypertension, diabetes, cancer and cardiovascular disease. Until recently, the ideal BMI (body mass index) for maximum longevity has seemed to be about 22, but recent publications have raised the ideal, first to 24, and recently to 27 in large cohorts of elderly people. Comparing mortality rates with body weight looks simple and foolproof, doesn’t it? But it isn’t.
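
For reference, BMI is simply weight in kilograms divided by the square of height in metres; the example person below is invented:

```python
# Body mass index: weight (kg) / height (m) squared.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(70.0, 1.78), 1))   # 70 kg at 1.78 m -> about 22.1
```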

Over the years, the target SBP (systolic blood pressure) for hypertension has been gradually focussed on 140 mm or less. Not that this is the ideal – all agree that lower is better (above ~90 mm) – but it’s an achievable maximum level for most people. However, recently there has been a movement to increase the suggested maximum to 150 mm for older people, on the ground that cohorts older than ~75 do not seem to benefit in longevity at 140 vis-à-vis 150 mm. Recently, a committee of cardiologists came out for 150, although a subset of these doctors emphatically disagreed, holding out for 140.

Similarly, the optimum upper recommendation for serum LDL to reduce atherosclerosis to a tolerable level has been reduced every decade. Presently it’s 100, an arbitrary number to be sure, since mortality goes down with LDL even to zero. However, we cannot achieve zero LDL, while most people can get to 100 easily with diet and, if necessary, drugs. Once again, mortality in older cohorts seems less sensitive to LDL in a range somewhat over 100.

It doesn’t make sense that a level of SBP, or BMI, or LDL that is harmful to young people is harmless to older ones. There’s no obvious reason that the old should be more resistant to noxious influences than the young. I hold that there’s no reason at all, and that it’s not true.

The fault in these studies is not that the data are incorrect, but that simple data collection is not sophisticated enough to provide meaningful insight into human health.

In any group of old persons selected solely for their age there is a major difference between two cohorts, the robust and the frail. The robust take care of themselves by keeping fit, eating nutritious diets, avoiding overweight and having good genes. The frail are also slim, but for unhealthful reasons: lack of exercise, cardiovascular disease, poor eating habits, wasting diseases such as cancer, poor genes and the like. The robust have a long life expectancy, but the frail have a short one, so that the robust push up the viability of low BMI while the frail push it down. Likewise, the robust push up the viability of low SBP and low LDL and the frail do the opposite.

Therefore, the sensible thing is to report on these two cohorts separately. The inevitable result will be that the healthiest BMI, SBP and LDL will be well below the observed “normal” values calculated by averaging everyone at a given age.

There’s another, more subtle reason not to believe that older people are no longer subject to the dangers of high BMI, SBP or LDL. When you plot a graph of some physical measurement vs. age (or anything else), there is always a little “noise” or uncertainty in the relationship, owing to inevitable imperfections in the data, and also to subjects who are outliers (who vary from the average person and therefore don’t fit on the smooth curve). This noise shows up as data points that are above or below the “average” line, and the degree of uncertainty can be calculated according to known laws of probability.

This uncertainty obviously grows with the age of the cohort because they are closer to life’s end, so with age a point is reached for each different characteristic where the noise is comparable in magnitude to the data themselves. Beyond this age we can no longer distinguish the effect of this characteristic on mortality within the precision of the data, but it is a fallacy to say that the effect on mortality has therefore vanished. It’s still there.
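
A toy simulation can make the point concrete. In the sketch below (all numbers are assumed, not taken from any study), a fixed, real effect of a risk factor on an outcome is estimated twice: once with the modest noise of a younger cohort and once with the much larger noise of an elderly one. The effect never changes, but its estimate drowns in the uncertainty:

```python
# A real, constant effect becomes statistically invisible when the noise grows,
# even though the effect itself has not vanished.
import random
import statistics

random.seed(0)
TRUE_EFFECT = 1.0   # assumed true increase in outcome per unit of the risk factor

def estimated_effect(noise_sd, n=200):
    """Fit a simple least-squares slope through noisy (risk factor, outcome) data;
    return the slope and its rough standard error."""
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [TRUE_EFFECT * x + random.gauss(0, noise_sd) for x in xs]
    mean_x = statistics.fmean(xs)
    mean_y = statistics.fmean(ys)
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sxx
    residuals = [y - mean_y - slope * (x - mean_x) for x, y in zip(xs, ys)]
    se = statistics.stdev(residuals) / sxx ** 0.5
    return slope, se

for noise in (2.0, 50.0):   # young cohort (little noise) vs. old cohort (much noise)
    slope, se = estimated_effect(noise)
    print(f"noise {noise:>5}: estimated effect {slope:6.2f} +/- {se:.2f}")
```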

Conclusion: When we read that the old are no longer bound by the strictures of younger people, take it with a grain of salt.

THE ROOTS OF MORAL BEHAVIOR

Most people adhere to some moral code, which abjures, for example (under most circumstances), killing, injuring, stealing from, lying to, cheating and enslaving other people. To a large extent we all agree on the rules. Where do these rules come from, and how do we each learn them? Many words have been spilled over these questions.

Religion — Bibles, the Golden Rule etc. — is one possible source of morality. God-fearing people believe this, and we all know devout individuals who really feel that they must do good because God wants them to. However, it is questionable how many follow this path. I know many people who profess deep sanctity within their church or synagogue, but then go out and daily deal falsely and even worse in business and personal relationships. The Crusades, the Hundred Years’ War, the Holocaust, Israel vs. the Palestinians, the Sunni-Shiite enmity, the Buddhist-Muslim hatred and many other events show us brutal mass killings incited by religious differences that are sometimes tiny. So it is questionable whether religion inspires good or evil on the whole.

One school of philosophers holds that our ethical principles arise because we are genetically programmed in that direction. They say that natural selection favors goodness rather than the rule of tooth and claw because in the long run that’s what leads to better adaptability, the criterion of evolutionary success. Numerous studies provide examples of ethical behavior in animals and very young children, which are taken to mean that since the behavior could not have been learned, it must have been inborn. No doubt there’s some truth in this, but it seems to me inadequate to be the main story.

In the NY Review of Books, July 10, 2014, there is a review by Samuel Freeman of Bernard Williams’ book “Essays and Reviews, 1959-2002,” which probes deeply into the roots of moral behavior. Williams criticizes our culture’s morality system as a “powerful misconception of life.” We think we are “free, autonomous, morally responsible agents” and that morality is “a separate domain of unconditional principles that override all other considerations.” He derides “our notion of developed morality” as a myth, and says that “the daily experience of choice” is where morality should begin. I think he’s right, but he fails to identify exactly where our moral sense arises.

I believe that what the daily experience of choice fundamentally means is that our moral sense arises from putting ourselves into another person’s place. How often have we heard, “How would you feel if you were in that situation”? Most of us learn to instinctively ask ourselves that question, and from that flows our sense of right and wrong.

This is akin to the Golden Rule, but not identical with it. “Do unto others…” tells us what to do. This sounds good but often leads to immoral behavior, for sometimes others do not want to be treated as you like to be treated. We don’t feed someone else something we like but he dislikes. Missionaries used to force native women to wear bras, which we now recognize as presumptuous behavior. A better rule is Hillel’s Rule: “Do not do unto others what you would not have done unto you.” The negatives protect others from overzealous do-gooders. That’s the real root of moral behavior.

EVOLUTION AND DARWINISM


There is much discussion about evolution and Darwinism, which many people erroneously think are the same thing. There is overwhelming evidence for evolution taking place over at least a 500 million year span: there is an orderly sequence of life forms gradually evolving from ancient to modern forms, all firmly placed in time by radioactive dating. So evolution is an incontrovertible fact.

In contrast, Darwin’s theory for how evolution occurred is separate from evolution itself, and is only one of several conceivable explanations, albeit by far the most convincing; it accounts for the facts of evolution in a consistent, logical manner, which no other theory does.

Its chief rival is a teleological theory called Intelligent Design that posits an overall purpose, usually taken to mean God’s purpose. Agnostics reject any notion of purpose because they reject God herself, and even some believers reject teleology because they hold that God started the Universe, but has not interfered thereafter.

I and many other scientific-minded people, some believers and some not, dislike Intelligent Design because there is no evidence for it, and extremists like Dawkins say it is simply impossible. However, Dawkins’ viewpoint is scientifically untenable, for a truly scientific viewpoint must allow anything to be possible until it is proven wrong by experiment, which has not been done for Intelligent Design. If a heavenly hand had pushed a few nucleotides around from time to time, or even today, how could we ever know it?

I believe that the furor arises, not because evolution happened – it definitely did – but because Darwinism holds that evolution is a purely stochastic process, i.e., a succession of purely random events. But if life itself is governed by chance, then we sentient, emotional creatures amount to nothing more than the sum of our atoms interacting mindlessly, and consciousness and free will are only illusions. This conflict between hard facts and our feeling of free will cannot be escaped.

This picture frightens people, myself included, and that’s the driving force behind anti-Darwinism. Those of us, believers or not, who are skeptical of the Heavenly Hand must come to terms with the fact that Darwinism, brilliant though it is, is incomplete.

AGE OF THE EARTH AND THE UNIVERSE


It is distressing how many people have opted out of reality. One way is to declare that the Bible is true verbatim, and that the Earth was created about 6000 years ago. But there is rock-solid evidence that the Earth is much older than this. It has to be older than its oldest rocks, which can be dated by radioactive decay.

Radioactive elements decay by explosion of individual atomic nuclei, which occurs at a perfectly steady rate, measured as half-lives, i.e., how long it takes for half of the atoms to decay. Half-lives are different for each element, and range from days or even seconds to billions of years. We know exactly how fast each radioactive element decays because it is easy to measure. Explosion of a nucleus produces a burst of energy plus a new nucleus of a different element. The energy bursts can be precisely counted with a Geiger counter, and the appearance of the new element can be measured by chemical separation. When the rock is still molten, the newly created atoms are not fixed to the place where they were born, and separate from the other unexploded atoms. But after the rock has solidified, the new atoms and the unexploded ones are stuck together, so we can tell precisely how many half-lives have taken place since solidification. This gives us the exact age of the rock since it solidified, which is the minimum age of the Earth. The oldest known rock analyzed in this way was found to be 4.5 billion years old. Therefore, the Earth must be at least this old, and not 6000 years. This is unambiguous and certain.
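
The arithmetic is simple enough to show in a few lines. In the sketch below, the isotope's half-life and the measured daughter-to-parent ratio are assumed for illustration:

```python
# Radiometric dating: once the rock has solidified, every decayed parent atom
# leaves a daughter atom trapped beside it, so the daughter/parent ratio tells
# us how many half-lives have elapsed.
import math

def age_from_ratio(daughter_per_parent: float, half_life_years: float) -> float:
    """Age = half-life * log2(1 + D/P), from N = N0 * (1/2)^(t / half-life)."""
    return half_life_years * math.log2(1.0 + daughter_per_parent)

# Example: a mineral in which the decayed (daughter) atoms equal the surviving
# parent atoms of an isotope with a 4.5-billion-year half-life is one half-life old.
print(f"{age_from_ratio(1.0, 4.5e9) / 1e9:.1f} billion years")   # -> 4.5
```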

The age of the Universe must be greater than this. It was discovered in this way: There are millions of heavenly bodies out there, traveling at sometimes small, sometimes enormous speeds relative to us. These speeds are easily determined by the Doppler effect. We have all noticed that when, say, an ambulance travels toward or away from us, its siren’s pitch is higher when approaching, and lower when going away. This is because, when coming toward us, each sound vibration is emitted closer to us than the one before. Thus we receive more vibrations per second than if the siren were stationary, which raises the apparent pitch to our ears. Similarly, the pitch of a departing siren is apparently lower. We can calculate the ambulance’s speed by measuring the change in pitch. The Doppler effect is detectable only when the ambulance’s speed is not too slow compared with the speed of sound.

If a star travels toward us at a speed not too far below the speed of light, the light waves reaching us are compressed via the Doppler effect just like the sound waves of the ambulance, so that we see more waves per second than the star actually emits. This shifts all colors toward the blue end of the spectrum. If the star is traveling away, all colors are shifted toward the red end. Thousands of stars have been examined for color shifts. The closest ones travel at random speeds toward or away from us, but all the others are traveling only away from us, i.e. their light is red-shifted. If the Universe were constant in size, we would expect random motion among all the heavenly bodies, close or distant. The occurrence of 99% red shifts means that almost all visible objects are traveling away from us in all directions, therefore away from each other as well. If there had been a gigantic explosion long ago this is what we would see.

We can measure not only the velocities of stars, but also how far away they are, by measuring their brightness. It has been found that each star’s velocity away from us is directly proportional to its distance. This means that if we imagine this explosion in the reverse direction, everything would go back to the same point, i.e., there was only a single explosion which gave rise to the Universe we see. This has been dubbed the “big bang.” We can also calculate how long ago this explosion occurred, which was about 13.7 billion years ago. Although we cannot tell what came before, this date cannot be far wrong.


THEORY AND EXPERIMENT IN SCIENCE

In 1600, Giordano Bruno was burned at the stake for espousing heliocentricity, which had been proposed by Copernicus and was later buttressed by Kepler’s analysis of Tycho Brahe’s observational data, because it contradicted Church dogma. In 1632, Galileo was threatened with torture by the Inquisition for doing the same thing. Galileo chose discretion over valor, and lived, but under lifelong house arrest. In the 1930s, anyone in Russia who questioned Lysenko’s (false) genetic theories on experimental grounds was sent to the Gulag to perish. These are examples of dogma overpowering experiment with life-and-death power.

Since the Enlightenment, however, the modern scientific method has held sway in the West. Its core is that one creates theory and does experiments, and when the two clash, primacy is given to experimental data. Since both experiments and theory are fallible, both must be verified by further experimentation. The important thing is that nothing can be believed without experimental support. A famous example comes from Albert Einstein. Regarding general relativity, before Eddington’s results in 1919 on the bending of starlight were known, Einstein courageously declared that if his prediction that starlight would bend near massive objects was not confirmed, the entire theory must be abandoned. Clearly he regarded experiment, not theory, as the arbiter.

In recent decades, however, we have witnessed some return to pre-Enlightenment attitudes toward science. In “The Trouble with Physics” (2007), the physicist Lee Smolin complained that the world’s top brains in physics were enamored of string theory to the exclusion of other opinions, despite the existence of not one iota of experimental support.

Now (2013) Richard Dawid, in “String Theory and the Scientific Method”, defends string theory and multiverses despite the lack of evidence, on two grounds: the no-alternatives argument, that they are “the only available route to a unified theory” (i.e., what else can it be?), and the meta-inductive argument, that the program has a record of successful predictions. Successful predictions make a real emotional impact on us, but sooner or later they must be substantiated by experiment or lose sway. The reviewer, George Ellis, summed it up well in describing these indefensible theories as “leaving the realm of science” (Science 342, 934, 2014).

The reason for my interest in the roots of science is that I have been embroiled for many years in a controversy in chemistry between theory and experiment. I have maintained, on the basis of purely experimental data, that many cycloadditions that are permitted by the Woodward-Hoffmann Rules to proceed with both new bonds forming simultaneously, nevertheless proceed with the two new bonds forming one at a time, with a diradical intermediate intervening. The Rules permit, but never compel, concert.

During a half century of papers on both sides of the issue, I frequently had manuscripts rejected by referees, not because my data or my interpretation of them were wrong, but because quantum calculations by the world’s most Eminent Chemists proved that all these cycloadditions were concerted, Q.E.D. My latest (No. 3 above on this blog site) contains 88 irrefutable examples of diradicals, which were all dismissed without discussion because they contradicted the results of the calculations.

Thus Science has returned, at least in part, to the Middle Ages. One consolation at least is that I no longer have to worry about an auto-da-fe.

THE CHIMP-HUMAN GAP

The advent of complete genomic analysis has given rise to some fundamental questions in genetics. Among them are: (1) the human genome has only about 20,000 genes. If this is the complete blueprint, how can a complex body arise from such meager information? (2) the human and chimpanzee genomes differ by only about 1%. How can such a wide species gap in the phenome arise from a genomic difference of only 200 genes?

The latter is the principal question in Steven Mithen’s review of Thomas Suddendorf’s book “The Gap”, and also plays a role in Svante Paabo’s book “Neanderthal Man” in the same review (NY Rev. Books 4.3.14). Mithen says “ultimately the gap must derive from differences in the genomes.” This seems obvious but avoids the two questions above.

I suggest that we are more than the sum of our genes acting individually, i.e. that in addition to traits coded in the DNA, extra-genetic traits not coded there are also essential, and more numerous. “Extra-genetic” includes epigenetic, i.e. chemical modifications in genes such as methylation that preserve the gene sequence but modify their effects, and also emergent properties, i.e. phenotypic modifications that emerge from combinations of genes rather than just the sum of traits from individual genes. An example is the observation that implanting a mouse eye gene into flies produces flies with extra eyes in the wrong places, but they are fly eyes, not mouse eyes. Clearly, while the mouse eye gene is active in flies, it doesn’t act alone, but rather in concert with numerous other unmodified genes which together create an unprecedented new morphology, ectopic eyes. There are innumerable possible combinations of 200 new genes with all the other genes shared by humans and chimps, and this could easily account for the huge interspecies gap.
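
A back-of-the-envelope count, using the post's own figures of roughly 200 human-specific genes against a shared background of about 20,000, shows how quickly the combinations multiply:

```python
# Counting the combinations mentioned above: even restricting attention to small
# groups, the number of distinct combinations dwarfs the 200 genes counted individually.
from math import comb

new_genes, shared_genes = 200, 19_800

pairs   = new_genes * shared_genes            # one new gene acting with one shared gene
triples = new_genes * comb(shared_genes, 2)   # one new gene acting with two shared genes

print(f"{pairs:,} pairwise combinations")      # 3,960,000
print(f"{triples:,} three-gene combinations")  # ~39 billion
```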

There is another, totally different possible contributor to this gap: the mother. Do signals from her body, independent of the fetus’s genome, influence morphology during gestation? If they do, then the species gap resides not only in differences in our genes, but also in what species the mother is. To suggest that the mother influences our morphology is not absurd, for we know that tiny amounts of substances, e.g. thalidomide, transmitted through the placenta profoundly influence the phenotype but not the genotype of the baby. Since the mother normally transmits thousands of substances to the fetus, perhaps many of them influence its phenotype independently of its DNA.

A test of this question would require implantation of a trophoblast into a female other than the mother, one who differs from her in some way, and then seeing whether any of the foster mother’s traits appear in the newborn.

“SPIKES” IN NUTRITION

For the past decade, we have increasingly learned that what and how much we eat are not the whole story. It’s also important to think about how fast what we eat appears in the bloodstream. We are less than 500 generations after humans settled down to a farming existence, a span too short for major genetic changes to have taken place. Therefore we must consider how well our bodies handle whatever we eat. We can now, for the first time ever, easily ingest some food substances in amounts that are enormous compared to the amounts normally present in food. When this happens, these compounds temporarily reach blood concentrations far beyond any available just from food, which are limited by the size of the stomach. Even good compounds become harmful at supernormal concentrations because the body, prepared only for amounts achievable for millions of years, cannot deal with the excess. Every substance, no matter how essential in nutrition, has a point beyond which it becomes harmful or even lethal. Vitamins A, D and K are well-known in this category, as well as all metals and, of course, glucose. Even an excess of water can be harmful.

Vitamin C. Unlike many other mammals, we cannot make our own, so we must eat it in food (or nowadays pills). Synthetic VC is identical with natural. It is widespread in foods, especially fruits and some vegetables. However, there is nothing in Nature that contains a high concentration. Therefore we must eat a fair amount of VC-containing foods to keep our bodies replete, which matters because it is the most important antioxidant, protecting us from numerous diseases. We need about 200 mg/day, which is normally ingested bit by bit all day. The blood level seldom moves far above or below the optimum. But now we can take all 200 mg at once in a pill. This is enough for a whole day, but right after eating the pill, there is a huge influx into the bloodstream, a spike in concentration never before experienced by humans. The body is not equipped to handle it, and does several things: it sends most of it out in the urine within an hour, and also up-regulates an enzyme that destroys it, and doubtless other things we don’t yet know. The upshot is that most of the 200 mg is wasted, leaving us with insufficient VC for the rest of the day, and the up-regulated anti-VC enzyme now attacks even normal amounts, reducing absorption of VC from food. Even more worrisome are the “things we don’t know about”, which might be anything at all. Those who choose to take supplemental VC should buy slow-release pills, which permit absorption by the body only slowly, over a span of time long enough to prevent spikes.
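
A toy calculation illustrates the spike. The sketch below uses a standard one-compartment absorption-elimination model with invented rate constants (none of the numbers come from this post); the same 200 mg dose gives a far higher peak blood level when absorbed quickly than when released slowly:

```python
# Same dose, very different peak level, depending only on how fast it is absorbed.
import math

DOSE = 200.0          # mg
K_ELIM = 0.7          # assumed elimination rate constant, 1/h

def concentration(t_hours, k_abs):
    """Bateman-type curve for a single oral dose with first-order absorption and elimination."""
    return DOSE * k_abs / (k_abs - K_ELIM) * (math.exp(-K_ELIM * t_hours) - math.exp(-k_abs * t_hours))

for label, k_abs in (("plain pill (fast absorption)", 6.0), ("slow-release pill", 0.3)):
    peak = max(concentration(t / 10.0, k_abs) for t in range(1, 241))   # scan 0-24 h
    print(f"{label:30s} peak ~ {peak:6.1f} (arbitrary units)")
```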

Calcium. Like VC it normally appears in foods in moderate concentrations, and blood levels do not rise above a small bump. But when we eat a 500 mg pill, we get a huge spike that spreads out after a short time, but which meanwhile upsets the body’s metabolism. Very recently it has been reported that people who get their calcium from food are OK, but those who get it from pills are ~20% more likely to get a heart attack. How can a calcium atom in your blood know whether it came from food or a pill? It can’t, but your body can if there’s a spike. Must we then avoid calcium from supplements? No, we just have to take calcium only on a full stomach. By stretching out the time to digest, we stretch out the time it takes to absorb the calcium. That way, instead of a sharp tall spike, we get only a long-lived but relatively low bump in concentration. That’s exactly what happens with calcium in food, so by avoiding the spike we avoid the danger.

Sugar. There is no natural source of pure sugar; the closest is honey. When we drink a sweetened soda, coffee or tea – and also when we eat white bread or white rice – all of the sugar and starch (which is polymerized sugar) appears in the blood as sugar in a huge spike within a few minutes. This, repeated again and again, gives rise to diabetes and also, as recently discovered, to heart disease. Our bodies simply cannot handle these short but intense spikes. We speak of the glycemic index (GI) as the rate at which sugar or starch appears as sugar in the blood. A high GI appears as a narrow tall spike; a low GI produces a broad low bump, which we can handle. There are two ways to deal with high GI: eat sugar and white starches in much lower amounts, and eat them only when there is a lot of fiber already in the stomach. Fiber temporarily binds sugar and thereby slows down its absorption, lowering not its amount but its GI. Therefore eat only brown bread and brown rice, whose fiber is protective, and eat sweets only in moderation and after a big meal, not before. Note that a low GI does not mean that we get fewer calories, only that we get them more slowly.

Carotene. This comes in foods only as a mixture of many types of carotene: alpha-, beta-, etc.-carotenes. They are all similar but not identical. This mixture is the only form of carotene we have been able to eat since the beginning of time. Suddenly we are offered pure beta-carotene, an excellent antioxidant and therefore protective against degenerative diseases, e.g. cardiovascular, inflammatory and geriatric ones. Unfortunately our bodies cannot deal with pure beta-carotene in large amounts, and big pills that appear in the blood as tall spikes are somewhat detrimental to health. Yet moderately robust beta-carotene concentrations in the blood from food are beneficial. Why only from food? Because (1) the amounts cannot be too large owing to the limited capacity of the stomach, and (2) beta-carotene from food is always admixed with all the other carotenes. Each carotene plays a different role in nutrition, so our bodies are not prepared for pure beta-carotene alone. The lesson: if you want carotene pills, make sure they’re small, taken on a full stomach, and also that they’re full-spectrum carotenes. Beta-carotene alone in small doses is safe because it’s converted to vitamin A in the body.

Vitamin E. The story is similar to that of carotenes. VE in Nature consists of eight similar but distinct substances, four tocopherols and four tocotrienols. All are excellent antioxidants that team up with VC for maximum protection. Nowadays most VE pills are only alpha-tocopherol, which is cheap. Many recent studies have come up negative with VE because (1) the spikes from large pills are too high, and (2) the other seven components of VE are not there. In contrast, in populations getting their VE from food, high alpha-tocopherol blood levels correlate with good health, because they signal the presence of all eight components. The lesson: eat foods rich in VE, and if you want to insure yourself with pills, choose pills that are not too large and that contain all eight components, taken after meals.

There’s another special danger with alpha-tocopherol. It can exist in two forms, d- and l-, which have the same structure but are mirror images of each other, just like left and right hands. Only the d-form occurs in Nature, but the cheapest form is dl-alpha-tocopherol, a 50-50 mixture of the two forms. In Nature, usually only one mirror form of anything is active, and the other is either inactive or even harmful. Whether l-alpha-tocopherol is merely inactive or actually harmful is not known. Therefore it is prudent to avoid ingesting l- or dl-alpha-tocopherol.

Drugs. Pills containing drugs can also produce spikes in blood concentration. When spikes are dangerous, manufacturers make pills that dissolve slowly, to stretch out the period of absorption. Even these, however, do not provide perfectly smooth, constant blood levels because the rate of release of the drug, which is proportional to the pills’ surface area, is greater at first, diminishing as the pill gets smaller. Usually this is OK but sometimes perfect smoothness is required, and more complex formulations are needed. One way is to make the pills in layers like an onion, such that their rate of dissolution increases from outside to inside layers. Another is to encase the drug in an insoluble, harmless plastic shell pierced by a tiny hole that allows only slow exit of the drug during passage through the GI tract. Here there are no spikes at all. There is growing interest in these advanced drug delivery methods.
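
The surface-area point can be sketched in a few lines: for a plain spherical pill whose dissolution rate is proportional to its surface area, the radius shrinks at a constant rate, so the release rate keeps falling as the pill gets smaller. All numbers below are assumed:

```python
# Shrinking-sphere dissolution: dV/dt is proportional to the surface area 4*pi*r^2,
# which makes the radius decrease at a constant rate while the release rate falls.
import math

r = 5.0               # initial pill radius, mm
SHRINK_RATE = 0.5     # assumed radius loss, mm per hour

for hour in range(0, 10, 3):
    radius = max(r - SHRINK_RATE * hour, 0.0)
    release_rate = 4 * math.pi * radius ** 2 * SHRINK_RATE   # volume dissolved per hour
    print(f"hour {hour}: radius {radius:.1f} mm, release rate {release_rate:6.1f} mm^3/h")
```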