I have noticed a resurgence of fine-tuning arguments lately, both within the scientific literature and in the general public. In short, the argument goes something like this:
The universe we find ourselves in appears uniquely tuned to the existence of life (or intelligence or matter…). Should the fundamental constants of the universe be even slightly adjusted, life (intelligence, matter, …) would not have occurred. The universe must, therefore, be finely tuned – perhaps by an active agent – to these values in order for life to occur.
The more I think about the argument, the less satisfied I am and the more surprised I am at its popularity. I must start by saying that the argument may be viewed in three ways.
First, fine-tuning could be a personal statement about surprise and wonder. As such, I think it is great. I am in daily awe of the beauty, intricacy, complexity, and (frankly) the slightly perverse humor of the universe. I marvel at my own existence and that of those around me. I delight in nature and community. I call this the aesthetic argument.
Second, Leibniz once argued that we live in the best of all possible worlds. I agree. For him, this was a logical deduction on the basis of belief in an omnipotent benevolent God. This is a rational argument based on faith. There are other rational arguments based on other foundations. Here, I am not attempting to critique any of them. There are numerous starting points from which the fine-tuned universe falls out deductively.
Third, and I believe currently popular, is the idea that empirical (scientific) observation of the universe interpreted through probability tells us something we didn’t know before – namely that our universe is in some way special. Both science and probability hold particular weight in the popular imagination because they are capable of teaching us things; they are capable of telling us something we did not know already, perhaps even something we didn’t want to know. Science and probability give us “inconvenient truths.” You may be familiar with this in science, perhaps in the context of the Earth going around the Sun or the fact that a feather and a rock fall at the same rate (in a vacuum). Most people are unaware that probability also provides such challenging results. If you are among them, I really recommend trying to work out the Monty Hall Paradox. [NB: It’s not really a paradox. It is a counterintuitive result. If that’s too simple (grin), move on to matters of type I and type II error and stochastics.]
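If you doubt that probability can be counterintuitive, the Monty Hall result is easy to check for yourself. Here is a minimal simulation sketch (standard setup: three doors, one prize, the host always opens a non-prize door you didn’t pick):

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)    # prize hides behind one of three doors
        choice = random.randrange(3)   # contestant picks a door at random
        # Host opens a door that is neither the contestant's pick nor the prize
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the single remaining unopened door
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # ≈ 1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # ≈ 2/3
```

Staying wins about a third of the time; switching wins about two thirds – exactly the answer most people refuse to believe until they see the frequencies.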
Science and probability spur us to deeper wisdom about what we know and don’t know. Because of this, in both fields we expressly reject arguments that start with “it seems reasonable.” Don’t get me wrong; we start by brainstorming. It’s just that we always move on to testing. Often what seems reasonable is not, in fact, reasonable, or even true. This essay is largely not about good reasoning. It is about reasoning from premises we know to be true. For that reason, when assessing fine-tuning arguments (only those of type three above), I’m going to hold us accountable to strict standards.
With that foundation, I’d like to present four reasons I find fine-tuning (as science and probability) arguments troubling.
1. The Datum
That’s right. There is exactly one data point. In general, scientists will not work with one data point. They go out and find more. In this case, that’s not possible.
1.a. Aren’t cosmologists always working with one datum?
Yes and no. Generally cosmologists work with a number of converging data points. How do we extrapolate back in time from billions of galaxies moving away from each other? How do we extrapolate back in time from ~115 elements to a primordial plasma? How do we extrapolate from 4 observed forces back to just one? In all of those cases, we have billions of data points and from them conclude that something big-bang-like occurred.
There is an edge of physics that struggles with symmetry breaking in the very, very early times of our universe. They can, to my mind, get from broken symmetry now to unbroken symmetry then. They cannot (empirically) get from broken symmetry here to broken symmetry there (another universe). That would be a different type of problem. Science requires observations. If you can observe it, it’s in our universe. If you can’t, it isn’t science.
1.b. Doesn’t science call for the simplest explanation? In other words, shouldn’t we assume that if it works that way inside the universe, it should work that way outside the universe too?
Yes. This is exactly my point. If we buy into the cosmological principle (a.k.a. Copernican principle, a.k.a. symmetry), then, in the absence of data, we assume things are uniform everywhere. That means we should assume everything outside our universe runs on the same natural laws as things inside our universe, including fine-tuned constants. The simplest explanation is to say that (if other universes exist) all universes are fundamentally like ours. To argue otherwise you would need data on the other universes. This is precisely what we lack.
You cannot argue from universal internal laws to pan-universal laws (on the basis of simplicity) and then turn around and argue that the pan-universe has a distribution of internal laws (on the basis of complexity?).
1.c. Don’t scientists frequently argue from limited data? After all, you work with what you’ve got.
Yes, and. Scientists often apply external expertise to a specific limited problem. In biology for example, I might look at a newly discovered animal, observe it has fur, and come to the conclusion that it is a mammal. This sort of reasoning happens all the time and it works because of what we call “independent data.” I know, from observations of other animals, that mammals have fur. I apply that external, independent knowledge to this particular, data-limited case.
Sometimes we call this “scientific intuition.” After looking at hundreds, if not thousands, of samples, experts in observation start to develop both conscious and unconscious tools for interpreting that data. The key, however, comes from familiarity. These scientists are familiar with multiple examples. They have done this type of reasoning under slightly different conditions over and over and over again. When they find themselves in a new situation, they can process the data really well.
No one is familiar with multiple universes. We are only familiar with this one. There have been no opportunities for scientists to develop multiverse intuition.
[I might consider the possibility that universes within the multiverse represent a large-scale parallel of some small-scale familiar phenomenon. In order for this line of reasoning to be compelling, the analogy would need to be spelled out. So far I’m only familiar with the “universe as Darwinian replicator” analogy and I don’t find it at all convincing.]
2. The Probable
Over the centuries, philosophers and theologians have spilt much ink over the question of possible worlds. As a scientist, I lean toward the “actualist” camp that says only the actual universe can be used as data. There do not appear to be objective ways of describing phenomena that may occur, but do not. Of course, there are many attempts to do exactly this, and there is an extensive philosophy of probability literature dealing with such questions.
Frequentists argue that the only good probabilistic arguments have to do with observed actual frequencies. I know I put 98 green balls in that urn along with 2 red balls. The frequency of red balls is 2/100 or 2%; therefore, the probability of drawing a red ball is 2%. The frequentist probability that a universe will have life in it is 1/1 or 100%. Note that most frequentists say you just can’t do probability with only one datum; if you forced their hand, though, they would say 100%.
Subjectivists argue that probabilities reflect subjective statements about our confidence in a given outcome. When the meteorologist says 30% chance of rain, she has not visited all 10 possible futures and found that exactly three of them have rain in them. (I wish meteorology worked this way.) She means that according to her models, she is 30% confident it will rain. Confidence need not, however, be entirely subjective. It can be based on data. Subjectivist approaches include likelihood and Bayesian camps.
Likelihoodists argue that you should consider all possible models for understanding a given phenomenon and compare the probability of observed events given the model. Note that likelihood is the probability of your data given the hypothesis. It is not equal to the probability of your hypothesis given the data. In our weather example, the meteorologist has tested a number of toy models (perfectly controlled simulations of the weather), of which 3/10 have a 100% likelihood of rain.
Likelihoodist arguments for fine-tuning go something like this. The probability of our universe given a creator God is higher than the probability of our universe given pure chance; therefore we should believe in a creator God. My major objections relate to how one understands “creator God” and “pure chance”, each of which I will deal with below. Philosophers, however, have raised another objection.
Imagine you have a pet cat and a pet golden retriever. You hear huge thumps from the attic and want to know which pet has gotten up there. The golden retriever is clumsier and more massive, so it is more likely to be the culprit. Likelihoodists are only limited by their imagination. What else could be up there? Perhaps it is a burglar or a gremlin or your dead aunt Millie. Gremlins are terribly noisy, so the likelihood (probability of noise given gremlin) is very high, even though the “prior probability” (your willingness to consider a gremlin) is very low.
Bayesians argue that the probability of a model (given the data) is equal to the likelihood (data given the model) times the prior (willingness to consider the model) divided by the sum, over all possible models, of likelihood times prior. All of this follows necessarily from Bayes’ Theorem. Bayesians don’t like gremlins. They also don’t like it when you don’t know how many possibilities to consider.
From the Bayesian perspective, the question of fine-tuning is almost entirely resolved by your priors. If your prior for a creator God is much greater than your prior for pure chance, you will end up saying God is more probable. And vice versa.
The key to Bayesianism is figuring out just how many possible models you should cover. In an ideal case, you could replace your subjective willingness with a frequentist prior and end up with a logically impeccable way of updating your knowledge from old data (the prior) to new data (the likelihood). All three camps would love for probability to work this way. The arguments usually arise when you know you can’t get there for some reason.
3. The Possible
Bayesian and likelihoodist analyses come down to a simple question. How many possible models are there? How many models need to be considered in the denominator of your equation? It’s true that cosmologists can imagine other fundamental constants and other natural laws. What we need to know is:
3.a. Are all imaginable universes possible?
For example, it may be that universes with higher values for the strong constant don’t arise. We have no data.
3.b. Are all possible universes imaginable?
There may be other fundamental constants that we’ve never thought to question. Perhaps we will in the future. Perhaps we are completely incapable. We know that the history of science involves new discoveries and inconvenient truths. It seems reasonable (grin) to expect such to arise in the future.
3.c. How do we assign priors to other possible universes?
There must be some sort of distribution. Frequentists like to suggest a “flat” prior, attributing equal priors to all possible models. We know this works really well for cases when you have tons of data to update your model with, when you iterate in a Bayesian fashion. It works very well because data eventually swamps the priors.
We have no data, so the priors in this case remain subjective. It is not even obvious what constitutes a flat prior. Consider an imaginary set of 3 cubes. We could put a flat prior on the length of their edges, the area of their sides, or their volumes, and end up with 3 very different sets of cubes.
[flat edge distribution: edges: 1, 2, 3 in; sides: 1, 4, 9 sq. in; volumes: 1, 8, 27 cu. in.]
[flat side distribution: edges: 1, 1.4, 1.7 in; sides: 1, 2, 3 sq. in; volumes: 1, 2.8, 5.2 cu. in.]
[flat volume distribution: edges: 1, 1.3, 1.4 in; sides: 1, 1.6, 2.1 sq. in; volumes: 1, 2, 3 cu. in.]
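You can generate all three sets of cubes from first principles, since edge, side (face area), and volume are linked by e, e², and e³. This short sketch shows how differently the “same” flat prior comes out depending on which quantity you flatten:

```python
# Three ways to put a "flat" (equally spaced) prior on a family of 3 cubes:
edges_flat  = [1.0, 2.0, 3.0]                   # uniform in edge length
sides_flat  = [s ** 0.5 for s in (1, 2, 3)]     # uniform in face area -> edge = sqrt(side)
volume_flat = [v ** (1 / 3) for v in (1, 2, 3)] # uniform in volume -> edge = cube root

for name, edges in [("edge", edges_flat), ("side", sides_flat), ("volume", volume_flat)]:
    rows = [(round(e, 2), round(e ** 2, 2), round(e ** 3, 2)) for e in edges]
    print(f"flat {name}: (edge, side, volume) = {rows}")
```

Three flat priors, three different populations of cubes. Nothing in the mathematics tells you which parameterization is the “natural” one to flatten.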
If constants vary over universes, how do we know which flat distribution to use? Are all possible strengths of the strong constant equally probable? What about negative values?
4. The Preferred
We are usually not interested in the abstract question of life, but in the concrete question of Earth life (and human intelligence). We do not know that life of some sort would not arise in universes with other fundamental constants, only that life like us (carbon based, water rich, …) could not arise. A much better definition of life will need to be developed to make these arguments strong. The number one contenders currently – based on evolution by natural selection and the ability to locally resist the universal trend to disorder (see my book Life in Space for details) – are not only conceivable, but are probably necessary in all universes remotely resembling our own with regard to physical laws.
You have to think there is something preferable about matter (over non-matter), life (over non-life), and intelligence (over non-intelligence) in order to infer that someone preferred it. Scientifically, I see no reason to believe that any of these traits need trump the property of quidness which arose in some universe X, but not here. How do we know that something special does not occur in every universe? Perhaps ours is the only one that has exactly one intelligent species, while every other universe has at least 5. This is my problem with “pure chance.” This is not a lottery (unless the universe really is queerer than we suppose). We don’t know what the prize is.
Theologically a much worse problem arises, particularly for the likelihoodists. Remember that they argued our universe was likely given a creator God. What I should have said was that our universe is highly likely given a creator God who values a universe exactly like ours. Why would a good creator God not prefer a universe without scarcity, suffering, and decay? That kind of God would make our universe less likely. Why would a rational creator God not prefer a Newtonian universe without quantum indeterminacy and subjective relativity? Likelihood arguments rest on the circular assumption that a God who prefers X must have created X because God prefers X. This ends up reflecting poorly on God or on our reasoning skills.
So, there you have my top 4 reasons for thinking that fine-tuning fails as an empirical or scientific argument (with a few theological and philosophical remarks thrown in). If you have better arguments, I’d love to hear them and update my own understanding of the question.
[Interested readers might enjoy Elliott Sober’s book Evidence and Evolution and Roger White’s article “Fine-Tuning and Multiple Universes,” Noûs 34: 260–276.]