The top 30 problems with the Big Bang
‘Cosmologists are often in error, but never in doubt.’ — Lev Landau
(1) Static universe models fit observational data better than expanding universe models.
Static universe models match most observations with no adjustable parameters. The Big Bang can match each of the critical observations, but only with adjustable parameters, one of which (the cosmic deceleration parameter) requires mutually exclusive values to match different tests. [,] Without ad hoc theorizing, this point alone falsifies the Big Bang. Even if the discrepancy could be explained, Occam’s razor favors the model with fewer adjustable parameters – the static universe model.
(2) The microwave “background” makes more sense as the limiting temperature of space heated by starlight than as the remnant of a fireball.
The expression “the temperature of space” is the title of chapter 13 of Sir Arthur Eddington’s famous 1926 work. [] Eddington calculated the minimum temperature any body in space would cool to, given that it is immersed in the radiation of distant starlight. With no adjustable parameters, he obtained 3°K (later refined to 2.8°K []), essentially the same as the observed, so-called “background”, temperature. A similar calculation, although with less certain accuracy, applies to the limiting temperature of intergalactic space because of the radiation of galaxy light. [] So the intergalactic matter is like a “fog”, and would therefore provide a simpler explanation for the microwave radiation, including its blackbody-shaped spectrum.
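Eddington-style estimates of this kind are a one-line blackbody inversion: given an energy density u of ambient starlight, the equivalent temperature follows from u = aT⁴. A minimal sketch in Python (the starlight energy density is Eddington's own published estimate; treat the numbers as illustrative):

```python
# Back-of-envelope version of Eddington's "temperature of space" estimate.
# If space is filled with starlight of energy density u, the equivalent
# blackbody temperature follows from u = a * T**4, i.e. T = (u / a) ** 0.25.

a_rad = 7.566e-16        # radiation constant, J m^-3 K^-4
u_starlight = 7.67e-14   # Eddington's 1926 estimate of the starlight energy
                         # density (~7.67e-13 erg/cm^3), converted to J/m^3

T = (u_starlight / a_rad) ** 0.25
print(f"Equivalent blackbody temperature: {T:.2f} K")
```

With these inputs the result comes out near 3.2 K, which is the figure Eddington quoted and is close to the observed microwave temperature.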
Such a fog also explains the otherwise troublesome ratio of infrared to radio intensities of radio galaxies. [] The amount of radiation emitted by distant galaxies falls with increasing wavelengths, as expected if the longer wavelengths are scattered by the intergalactic medium. For example, the brightness ratio of radio galaxies at infrared and radio wavelengths changes with distance in a way which implies absorption. Basically, this means that the longer wavelengths are more easily absorbed by material between the galaxies. But then the microwave radiation (between the two wavelengths) should be absorbed by that medium too, and has no chance to reach us from such great distances, or to remain perfectly uniform while doing so. It must instead result from the radiation of microwaves from the intergalactic medium. This argument alone implies that the microwaves could not be coming directly to us from a distance beyond all the galaxies, and therefore that the Big Bang theory cannot be correct.
None of the predictions of the background temperature based on the Big Bang were close enough to qualify as successes, the worst being Gamow’s upward-revised estimate of 50°K made in 1961, just two years before the actual discovery. Clearly, without a realistic quantitative prediction, the Big Bang’s hypothetical “fireball” becomes indistinguishable from the natural minimum temperature of all cold matter in space. But none of the predictions, which ranged between 5°K and 50°K, matched observations. [] And the Big Bang offers no explanation for the kind of intensity variations with wavelength seen in radio galaxies.
(3) Element abundance predictions using the Big Bang require too many adjustable parameters to make them work.
The universal abundances of most elements were predicted correctly by Hoyle in the context of the original Steady State cosmological model. This worked for all elements heavier than lithium. The Big Bang co-opted those results and concentrated on predicting the abundances of the light elements. Each such prediction requires at least one adjustable parameter unique to that element prediction. Typically, the adjustment amounts to positing that the element was created, destroyed, or both to just the right degree following the Big Bang. When you take away these degrees of freedom, no genuine prediction remains. The best the Big Bang can claim is consistency with observations using the various ad hoc models to explain the data for each light element. Examples: [,] for helium-3; [] for lithium-7; [] for deuterium; [] for beryllium; and [,] for overviews. For a full discussion of an alternative origin of the light elements, see [].
(4) The universe has too much large scale structure (interspersed “walls” and voids) to form in a time as short as 10-20 billion years.
The average speed of galaxies through space is a well-measured quantity. At those speeds, galaxies would require roughly the age of the universe to assemble into the largest structures (superclusters and walls) we see in space [], and to clear all the voids between galaxy walls. Even that assumes the initial directions of motion were special, e.g., directed away from the centers of voids. To get around this problem, one must propose that galaxy speeds were initially much higher and have since slowed due to some sort of “viscosity” of space. Building up the needed motions through gravitational acceleration alone would instead take in excess of 100 billion years. []
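The timescale argument here is simple kinematics: distance divided by speed. A rough sketch, using an illustrative ~10 Mpc void-radius scale and a ~600 km/s peculiar velocity (representative round numbers, not values from the text):

```python
# How long does a galaxy moving at a typical peculiar velocity take to
# cross a void-scale distance?  Straight-line (ballistic) estimate only,
# ignoring gravitational acceleration entirely.

MPC_KM = 3.086e19        # kilometres in one megaparsec
YEAR_S = 3.156e7         # seconds in one year

def crossing_time_gyr(distance_mpc, speed_km_s):
    """Ballistic crossing time in Gyr."""
    seconds = distance_mpc * MPC_KM / speed_km_s
    return seconds / YEAR_S / 1e9

# Illustrative numbers: ~10 Mpc void radius, ~600 km/s peculiar velocity.
t = crossing_time_gyr(10, 600)
print(f"Crossing time: {t:.0f} Gyr")
```

The result is of order 16 Gyr, already comparable to the claimed age of the universe; requiring the motions to be built up by gravity from rest only lengthens the timescale.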
(5) The average luminosity of quasars must decrease with time in just the right way so that their average apparent brightness is the same at all redshifts, which is exceedingly unlikely.
According to the Big Bang theory, a quasar at a redshift of 1 is roughly ten times as far away as one at a redshift of 0.1. (The redshift-distance relation is not quite linear, but this is a fair approximation.) If the two quasars were intrinsically similar, the high redshift one would be about 100 times fainter because of the inverse square law. But it is, on average, of comparable apparent brightness. This must be explained as quasars “evolving” their intrinsic properties so that they get smaller and fainter as the universe evolves. That way, the quasar at redshift 1 can be intrinsically 100 times brighter than the one at 0.1, explaining why they appear (on average) to be comparably bright. It isn’t as if the Big Bang has a reason why quasars should evolve in just this magical way. But that is required to explain the observations using the Big Bang interpretation of the redshift of quasars as a measure of cosmological distance. See [,].
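The brightness bookkeeping in this argument is just the inverse-square law under the (approximately linear) redshift-distance relation assumed above. A minimal sketch:

```python
# If distance scales roughly linearly with redshift, a quasar at z = 1 sits
# about 10x farther away than one at z = 0.1, so the inverse-square law
# should dim it by about a factor of 100.

def dimming_factor(z_far, z_near):
    """Flux ratio (near/far) for intrinsically identical sources,
    assuming distance proportional to redshift."""
    return (z_far / z_near) ** 2

factor = dimming_factor(1.0, 0.1)
print(f"Expected dimming: {factor:.0f}x")
# Since the observed average brightness is comparable at both redshifts,
# the Big Bang reading requires the z = 1 quasar to be ~100x more
# luminous intrinsically.
```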
By contrast, the relation between apparent magnitude and distance for quasars is a simple, inverse-square law in alternative cosmologies. In [], Arp shows great quantities of evidence that large quasar redshifts are a combination of a cosmological factor and an intrinsic factor, with the latter dominant in most cases. Most large quasar redshifts (e.g., z > 1) therefore have little correlation with distance. A grouping of 11 quasars close to NGC 1068, having nominal ejection patterns correlated with galaxy rotation, provides further strong evidence that quasar redshifts are intrinsic. []
(6) The ages of globular clusters appear older than the universe.
Even though the data have been stretched toward resolving this since the “top ten” list first appeared, the error bars on the Hubble age of the universe (12±2 Gyr) still do not quite overlap the error bars on the oldest globular clusters (16±2 Gyr). Astronomers have studied this for the past decade, but resist the “observational error” explanation because that would almost certainly push the Hubble age older (as Sandage has argued for years), which creates several new problems for the Big Bang. In other words, the cure is worse than the illness for the theory. In fact, a new, relatively bias-free observational technique has gone the opposite way, lowering the Hubble age estimate to 10 Gyr and making the discrepancy worse again. [,]
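The non-overlap claim is easy to make explicit: 12±2 Gyr spans 10-14 Gyr, while 16±2 Gyr spans 14-18 Gyr, so the two ranges meet only at the single point 14 Gyr. A tiny sketch:

```python
# Do the quoted error intervals actually overlap?

def interval(center, err):
    return (center - err, center + err)

hubble_age = interval(12, 2)     # (10, 14) Gyr
cluster_age = interval(16, 2)    # (14, 18) Gyr

# Overlap width: negative means a gap, zero means they only touch.
overlap = min(hubble_age[1], cluster_age[1]) - max(hubble_age[0], cluster_age[0])
print(f"Overlap: {overlap} Gyr")
```

The overlap comes out to exactly zero Gyr: the ranges barely touch, and any revision of the Hubble age downward opens a real gap.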
(7) The local streaming motions of galaxies are too high for a finite universe that is supposed to be everywhere uniform.
In the early 1990s, we learned that the average redshift for galaxies of a given brightness differs on opposite sides of the sky. The Big Bang interprets this as the existence of a puzzling group flow of galaxies relative to the microwave radiation on scales of at least 130 Mpc. Earlier, the existence of this flow led to the hypothesis of a “Great Attractor” pulling all these galaxies in its direction. But in newer studies, no backside infall was found on the other side of the hypothetical feature. Instead, there is streaming on both sides of us out to 60-70 Mpc in a consistent direction relative to the microwave “background”. The only Big Bang alternative to the apparent result of large-scale streaming of galaxies is that the microwave radiation is in motion relative to us. Either way, this result is trouble for the Big Bang. [,,,,]
(8) Invisible dark matter of an unknown but non-baryonic nature must be the dominant ingredient of the entire universe.
The Big Bang requires sprinkling galaxies, clusters, superclusters, and the universe with ever-increasing amounts of this invisible, not-yet-detected “dark matter” to keep the theory viable. Overall, over 90% of the universe must be made of something we have never detected. By contrast, Milgrom’s model (the alternative to “dark matter”) provides a one-parameter explanation that works at all scales and requires no “dark matter” to exist at any scale. (I exclude the additional 50%-100% of invisible ordinary matter inferred to exist by, e.g., MACHO studies.) Some physicists don’t like modifying the law of gravity in this way, but a finite range for natural forces is a logical necessity (not just theory) spoken of since the 17th century. [,]
Milgrom’s model requires nothing more than that finite range. It is an operational model rather than one based on fundamentals, but it is consistent with more complete models invoking a finite range for gravity. Milgrom’s model therefore provides a basis to eliminate the need for “dark matter” in the universe at any scale, which represents one more Big Bang “fudge factor” no longer needed.
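To see how Milgrom's single parameter does the work, here is a rough sketch of the deep-MOND prediction for flat rotation curves, v⁴ = G·M·a0, where a0 ≈ 1.2×10⁻¹⁰ m/s² is Milgrom's acceleration scale. The galaxy mass used is an illustrative round number for a Milky Way-like galaxy, not a fitted value:

```python
# Asymptotic (flat) rotation speed predicted by Milgrom's MOND in the
# low-acceleration regime: v**4 = G * M * a0, independent of radius.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10         # Milgrom's acceleration scale, m/s^2
M_SUN = 1.989e30     # solar mass, kg

def flat_rotation_speed_kms(mass_solar):
    """Deep-MOND circular speed in km/s for a galaxy of given baryonic mass."""
    v = (G * mass_solar * M_SUN * A0) ** 0.25   # m/s
    return v / 1e3

# Illustrative baryonic mass: ~1e11 solar masses.
v = flat_rotation_speed_kms(1e11)
print(f"Flat rotation speed: {v:.0f} km/s")
```

The result is roughly 200 km/s, in the range of observed flat rotation curves, with the visible mass alone and the one universal parameter a0.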
(9) The most distant galaxies in the Hubble Deep Field show insufficient evidence of evolution, with some of them having higher redshifts (z = 6-7) than the highest-redshift quasars.
The Big Bang requires that stars, quasars and galaxies in the early universe be “primitive”, meaning mostly metal-free, because it requires many generations of supernovae to build up metal content in stars. But the latest evidence suggests lots of metal in the “earliest” quasars and galaxies. [,,] Moreover, we now have evidence for numerous ordinary galaxies in what the Big Bang expected to be the “dark age” of evolution of the universe, when the light of the few primitive galaxies in existence would be blocked from view by hydrogen clouds. []
(10) If the open universe we see today is extrapolated back near the beginning, the ratio of the actual density of matter in the universe to the critical density must differ from unity by just one part in 10^59. Any larger deviation would result in a universe already collapsed on itself or already dissipated.
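The huge exponent comes from running the Friedmann equation backward: in a decelerating universe |Ω − 1| grows with time (roughly in proportion to t during radiation domination), so a density ratio anywhere near unity today forces an extraordinarily fine-tuned value early on. A rough order-of-magnitude sketch, treating the growth as linear in t back to an illustrative very early epoch (the exact exponent depends on the epoch chosen and on the simplified scaling):

```python
# Flatness problem, order-of-magnitude version.  During radiation
# domination |Omega - 1| grows roughly in proportion to t, so today's
# near-critical density implies extreme fine-tuning at early times.
# (Linear-in-t scaling throughout is a deliberate simplification.)

T_NOW = 4.3e17        # rough present age of the universe, seconds
T_EARLY = 1e-43       # illustrative very early epoch (~Planck time), seconds

omega_dev_now = 1.0   # generous bound on |Omega - 1| today
omega_dev_early = omega_dev_now * (T_EARLY / T_NOW)

print(f"|Omega - 1| at t ~ 1e-43 s: ~{omega_dev_early:.0e}")
```

The bound works out to roughly one part in 10^60: the early density had to match the critical density to dozens of decimal places, which is the fine-tuning that inflation was invented to explain.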
Inflation failed to achieve its goal when many observations went against it. To maintain consistency and salvage inflation, the Big Bang has now introduced two new adjustable parameters: (1) the cosmological constant, which has a major fine-tuning problem of its own because theory suggests it ought to be of order 10^120, and observations suggest a value less than 1; and (2) “quintessence” or “dark energy”. [,] This latter theoretical substance solves the fine-tuning problem by introducing invisible, undetectable energy sprinkled at will as needed throughout the universe to keep consistency between theory and observations. It can therefore be accurately described as “the ultimate fudge factor”.
Anyone doubting the Big Bang in its present form (which includes most astronomy-interested people outside the field of astronomy, according to one recent survey) would have good cause for that opinion and could easily defend such a position. This is a fundamentally different matter than proving the Big Bang did not happen, which would be proving a negative – something that is normally impossible. (E.g., we cannot prove that Santa Claus does not exist.) The Big Bang, much like the Santa Claus hypothesis, no longer makes testable predictions wherein proponents agree that a failure would falsify the hypothesis. Instead, the theory is continually amended to account for all new, unexpected discoveries. Indeed, many young scientists now think of this as a normal process in science! They forget or were never taught that a model has value only when it can predict new things that differentiate the model from chance and from other models before the new things are discovered. Explanations of new things are supposed to flow from the basic theory itself with at most an adjustable parameter or two, and not from add-on bits of new theory.
Thanks to Andrew F for the link. The full list is at the link.
“The ages of globular clusters appear older than the universe.”
“the theory is continually amended to account for all new, unexpected discoveries. Indeed, many young scientists now think of this as a normal process in science! They forget or were never taught that a model has value only when it can predict new things that differentiate the model from chance and from other models before the new things are discovered. Explanations of new things are supposed to flow from the basic theory itself with at most an adjustable parameter or two, and not from add-on bits of new theory.”