Troubles with Fine-Tuning

I have noticed a resurgence of fine-tuning arguments lately, both within the scientific literature and in the general public. In short, the argument goes something like this:

The universe we find ourselves in appears uniquely tuned to the existence of life (or intelligence or matter…). Should the fundamental constants of the universe be even slightly adjusted, life (intelligence, matter, …) would not have occurred. The universe must, therefore, be finely tuned – perhaps by an active agent – to these values in order for life to occur.

The more I think about the argument, the less satisfied I am and the more surprised I am at its popularity. I must start by saying that the argument may be viewed in three ways.

First, fine-tuning could be a personal statement about surprise and wonder. As such, I think it is great. I am in daily awe of the beauty, intricacy, complexity, and (frankly) the slightly perverse humor of the universe. I marvel at my own existence and that of those around me. I delight in nature and community. I call this the aesthetic argument.

Second, Leibniz once argued that we live in the best of all possible worlds. I agree. For him, this was a logical deduction on the basis of belief in an omnipotent benevolent God. This is a rational argument based on faith. There are other rational arguments based on other foundations. Here, I am not attempting to critique any of them. There are numerous starting points from which the fine-tuned universe falls out deductively.

Third, and I believe currently popular, is the idea that empirical (scientific) observation of the universe interpreted through probability tells us something we didn’t know before – namely that our universe is in some way special. Both science and probability hold particular weight in the popular imagination because they are capable of teaching us things; they are capable of telling us something we did not know already, perhaps even something we didn’t want to know. Science and probability give us “inconvenient truths.” You may be familiar with this in science, perhaps in the context of the Earth going around the Sun or the fact that a feather and a rock fall at the same rate (in a vacuum). Most people are unaware that probability also provides such challenging results. If you are among them, I really recommend trying to work out the Monty Hall Paradox. [NB: It’s not really a paradox. It is a counterintuitive result. If that’s too simple (grin), move on to matters of type I and type II error and stochastics.]
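
If you would rather see the counterintuitive answer than take it on faith, here is a minimal simulation sketch (in Python; the setup, names, and trial count are mine, purely for illustration). Switching doors wins about two-thirds of the time; staying wins about one-third.

```python
# Minimal Monty Hall simulation: an illustrative sketch, not any standard library routine.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)          # the car is behind one door
    pick = random.choice(doors)           # the contestant's first pick
    # The host opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.33
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.67
```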

Science and probability spur us to deeper wisdom about what we know and don’t know. Because of this, in both fields we expressly reject arguments that start with “it seems reasonable.” Don’t get me wrong; we start by brainstorming. It’s just that we always move on to testing. Often what seems reasonable is not, in fact, reasonable, or even true. This essay is largely not about good reasoning in general; it is about reasoning from premises we know to be true. For that reason, when assessing fine-tuning arguments (only those of type three above), I’m going to hold us accountable to strict standards.

With that foundation, I’d like to present four reasons I find fine-tuning (as science and probability) arguments troubling.

 

1. The Datum

That’s right. There is exactly one data point. In general, scientists will not work with one data point; they go out and find more. In this case, that’s not possible.

1.a. Aren’t cosmologists always working with one datum?

Yes and no. Generally cosmologists work with a number of converging data points. How do we extrapolate back in time from billions of galaxies moving away from each other? How do we extrapolate back in time from ~115 elements to a primordial plasma? How do we extrapolate from 4 observed forces back to just one? In all of those cases, we have billions of data points, and from them we conclude that something Big Bang-like occurred.

There is an edge of physics that struggles with symmetry breaking in the very, very early times of our universe. They can, to my mind, get from broken symmetry now to unbroken symmetry then. They cannot (empirically) get from broken symmetry here to broken symmetry there (another universe). That would be a different type of problem. Science requires observations. If you can observe it, it’s in our universe. If you can’t, it isn’t science.

1.b. Doesn’t science call for the simplest explanation? In other words, shouldn’t we assume that if it works that way inside the universe, it should work that way outside the universe too?

Yes. This is exactly my point. If we buy into the cosmological principle (a.k.a. Copernican principle, a.k.a. symmetry), then, in the absence of data, we assume things are uniform everywhere. That means we should assume everything outside our universe runs on the same natural laws as things inside our universe, including fine-tuned constants. The simplest explanation is to say that (if other universes exist) all universes are fundamentally like ours. To argue otherwise you would need data on the other universes. This is precisely what we lack.

You cannot argue from universal internal laws to pan-universal laws (on the basis of simplicity) and then turn around and argue that the pan-universe has a distribution of internal laws (on the basis of complexity?).

1.c. Don’t scientists frequently argue from limited data? After all, you work with what you’ve got.

Yes, and. Scientists often apply external expertise to a specific limited problem. In biology for example, I might look at a newly discovered animal, observe it has fur, and come to the conclusion that it is a mammal. This sort of reasoning happens all the time and it works because of what we call “independent data.” I know, from observations of other animals, that mammals have fur. I apply that external, independent knowledge to this particular, data-limited case.

Sometimes we call this “scientific intuition.” After looking at hundreds, if not thousands, of samples, experts in observation start to develop both conscious and unconscious tools for interpreting that data. The key, however, comes from familiarity. These scientists are familiar with multiple examples. They have done this type of reasoning under slightly different conditions over and over and over again. When they find themselves in a new situation, they can process the data really well.

No one is familiar with multiple universes. We are only familiar with this one. There have been no opportunities for scientists to develop multiverse intuition.

[I might consider the possibility that universes within the multiverse represent a large-scale parallel of some small-scale familiar phenomenon. In order for this line of reasoning to be compelling, the analogy would need to be spelled out. So far I’m only familiar with the “universe as Darwinian replicator” analogy and I don’t find it at all convincing.]

2. The Probable

Over the centuries, philosophers and theologians have spilt much ink over the question of possible worlds. As a scientist, I lean toward the “actualist” camp that says only the actual universe can be used as data. There do not appear to be objective ways of describing phenomena that may occur, but do not. Of course, there are many attempts to do exactly this, and there is an extensive philosophy of probability literature dealing with such questions.

Frequentists argue that the only good probabilistic arguments have to do with observed actual frequencies. I know I put 98 green balls in that urn along with 2 red balls. The frequency of red balls is 2/100 or 2%; therefore, the probability of drawing a red ball is 2%. The frequentist probability that a universe will have life in it is 1/1 or 100%. Note most frequentists say you just can’t do probability with only one datum; if you forced their hand, though, they would say 100%.
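
As a quick illustration of that frequentist reading (a sketch of my own; the numbers come from the urn above), simulated draws converge on the observed frequency:

```python
# Frequentist sketch: probability as the long-run observed frequency of red draws.
import random

urn = ["green"] * 98 + ["red"] * 2
draws = 100_000
reds = sum(random.choice(urn) == "red" for _ in range(draws))
print(reds / draws)  # hovers around 0.02 as the number of draws grows
```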

Subjectivists argue that probabilities reflect subjective statements about our confidence in a given outcome. When the meteorologist says 30% chance of rain, she has not visited all 10 possible futures and found that exactly three of them have rain in them. (I wish meteorology worked this way.) She means that, according to her models, she is 30% confident it will rain. Confidence need not, however, be entirely subjective. It can be based on data. Subjectivist approaches include the likelihood and Bayesian camps.

Likelihoodists argue that you should consider all possible models for understanding a given phenomenon and compare the probability of the observed events given each model. Note that likelihood is the probability of your data given the hypothesis. It is not equal to the probability of your hypothesis given the data. In our weather example, the meteorologist has tested a number of toy models (perfectly controlled simulations of the weather), of which 3/10 have a 100% likelihood of rain.

Likelihoodist arguments for fine-tuning go something like this. The probability of our universe given a creator God is higher than the probability of our universe given pure chance; therefore we should believe in a creator God. My major objections relate to how one understands “creator God” and “pure chance”, each of which I will deal with below. Philosophers, however, have raised another objection.

Imagine you have a pet cat and a pet golden retriever. You hear huge thumps from the attic and want to know which pet has gotten up there. The golden retriever is clumsier and more massive, so it is more likely to be the culprit. Likelihoodists are only limited by their imagination. What else could be up there? Perhaps it is a burglar or a gremlin or your dead aunt Millie. Gremlins are terribly noisy, so the likelihood (probability of noise given gremlin) is very high, even though the “prior probability” (your willingness to consider a gremlin) is very low.

Bayesians argue that the probability of a model (given the data) is equal to the likelihood (data given the model) times the prior (willingness to consider the model), divided by the sum, over all the models considered, of likelihood times prior. All of this follows necessarily from Bayes’ Theorem. Bayesians don’t like gremlins. They also don’t like it when you don’t know how many possibilities to consider.
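
To make the arithmetic concrete, here is a sketch of the attic example run through Bayes’ Theorem. The likelihoods and priors are made-up numbers of my own, chosen only to show how the posterior works out as likelihood times prior, normalized over the models considered.

```python
# Bayes' Theorem on the attic example: posterior = likelihood * prior / (sum over models).
# All numbers below are illustrative assumptions, not measurements.
likelihood = {"retriever": 0.8, "cat": 0.2, "gremlin": 0.99}    # P(loud thumps | culprit)
prior      = {"retriever": 0.6, "cat": 0.399, "gremlin": 0.001} # willingness to consider

unnormalized = {h: likelihood[h] * prior[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: round(p / total, 3) for h, p in unnormalized.items()}
print(posterior)  # {'retriever': 0.856, 'cat': 0.142, 'gremlin': 0.002}
```

The gremlin’s enormous likelihood is swamped by its tiny prior, which is exactly the Bayesian complaint about likelihood-only reasoning.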

From the Bayesian perspective, the question of fine-tuning is almost entirely resolved by your priors. If your prior for a creator God is much greater than your prior for pure chance, you will end up saying God is more probable. And vice versa.

The key to Bayesianism is figuring out just how many possible models you should cover. In an ideal case, you could replace your subjective willingness with a frequentist prior and end up with a logically impeccable way of updating your knowledge from old data (the prior) to new data (the likelihood). All three camps would love for probability to work this way. The arguments usually arise when you know you can’t get there for some reason.

3. The Possible

Bayesian and likelihoodist analyses come down to a simple question. How many possible models are there? How many models need to be considered in the denominator of your equation? It’s true that cosmologists can imagine other fundamental constants and other natural laws. What we need to know is:

3.a. Are all imaginable universes possible?

For example, it may be that universes with higher values for the strong constant don’t arise. We have no data.

3.b. Are all possible universes imaginable?

There may be other fundamental constants that we’ve never thought to question. Perhaps we will in the future. Perhaps we are completely incapable. We know that the history of science involves new discoveries and inconvenient truths. It seems reasonable (grin) to expect such to arise in the future.

3.c. How do we assign priors to other possible universes?

There must be some sort of distribution. Frequentists like to suggest a “flat” prior, attributing equal priors to all possible models. We know this works really well when you have tons of data with which to update your model, iterating in a Bayesian fashion. It works very well because data eventually swamps the priors.
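
Here is a small sketch of that swamping effect (my own toy example with a coin rather than a universe; the Beta prior is a standard conjugate choice, and the specific numbers are illustrative). Two observers start with wildly different priors about a coin’s bias and end up close together once enough flips are observed.

```python
# Two very different Beta priors on a coin's heads probability converge after many flips.
# The coin, the priors, and the flip count are illustrative assumptions.
import random

random.seed(0)
true_p = 0.7
flips = [random.random() < true_p for _ in range(1000)]

def posterior_mean(alpha: float, beta: float, data) -> float:
    # A Beta(alpha, beta) prior updated on binary data gives Beta(alpha + heads, beta + tails).
    heads = sum(data)
    tails = len(data) - heads
    return (alpha + heads) / (alpha + beta + heads + tails)

print(posterior_mean(1, 99, flips))   # sceptical prior (mean 0.01): pulled up toward 0.7
print(posterior_mean(99, 1, flips))   # credulous prior (mean 0.99): pulled down toward 0.7
# With more flips, the two answers converge further; the data swamp the priors.
```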

We have no data, so the priors in this case remain subjective. It is not even obvious what constitutes a flat prior. Consider an imaginary set of 3 cubes. We could put a flat prior on the length of their edges, the area of their sides, or their volumes, and end up with 3 very different sets of cubes.

[flat edge distribution: edges: 1, 2, 3 in; sides: 1, 4, 9 sq. in; volumes: 1, 8, 27 cu. in.]

[flat side distribution: edges: 1, 1.4, 1.7 in; sides 1, 2, 3 sq. in; volumes: 1, 2.8, 5.2 cu. in.]

[flat volume distribution: edges: 1, 1.3, 1.4 in; sides 1, 1.6, 2.1 sq. in; volumes: 1, 2, 3 cu. in.]
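
The same ambiguity shows up if you simulate it (a sketch of my own; the ranges are taken from the cube example above). A prior that is flat on edge length is decidedly not flat on volume:

```python
# A flat prior on edge length (1 to 3 in) is not flat on volume (1 to 27 cu. in).
# The ranges mirror the cube example above; the sample size is illustrative.
import random

random.seed(1)
edges = [random.uniform(1.0, 3.0) for _ in range(100_000)]  # flat over edge length
volumes = [e ** 3 for e in edges]                            # implied volumes

# Half of the volume range lies below 14 cu. in., but far more than half the probability does:
frac_small = sum(v < 14 for v in volumes) / len(volumes)
print(frac_small)  # roughly 0.70, where a prior flat on volume would give 0.50
```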

If constants vary over universes, how do we know which flat distribution to use? Are all possible strengths of the strong constant equally probable? What about negative values?

 

4. The Preferred

We are usually not interested in the abstract question of life, but in the concrete question of Earth life (and human intelligence). We do not know that life of some sort would not arise in universes with other fundamental constants, only that life like us (carbon based, water rich, …) could not arise. A much better definition of life will need to be developed to make these arguments strong. The leading current contenders – based on evolution by natural selection and the ability to locally resist the universal trend toward disorder (see my book Life in Space for details) – are not only conceivable, but are probably necessary in all universes remotely resembling our own with regard to physical laws.

You have to think there is something preferable about matter (over non-matter), life (over non-life), and intelligence (over non-intelligence) in order to infer that someone preferred it. Scientifically, I see no reason to believe that any of these traits need trump the property of quidness, which arose in some universe X but not here. How do we know that something special does not occur in every universe? Perhaps ours is the only one that has exactly one intelligent species, while every other universe has at least 5. This is my problem with “pure chance.” This is not a lottery (unless the universe really is queerer than we suppose). We don’t know what the prize is.

Theologically, a much worse problem arises, particularly for the likelihoodists. Remember that they argued our universe was likely given a creator God. What I should have said was that our universe is highly likely given a creator God who values a universe exactly like ours. Why would a good creator God not prefer a universe without scarcity, suffering, and decay? That kind of God would make our universe less likely. Why would a rational creator God not prefer a Newtonian universe without quantum indeterminacy and subjective relativity? Likelihood arguments rest on the circular assumption that a God who prefers X must have created X because God prefers X. This ends up reflecting poorly on God or on our reasoning skills.

So, there you have my top 4 reasons for thinking that fine-tuning fails as an empirical or scientific argument (with a few theological and philosophical remarks thrown in). If you have better arguments, I’d love to hear them and update my own understanding of the question.

[Interested readers might enjoy Elliott Sober’s book Evidence and Evolution and Roger White’s article “Fine-Tuning and Multiple Universes,” Noûs 34: 260-276.]


7 thoughts on “Troubles with Fine-Tuning”

  1. Thank you, Lucas! The theological reframing was particularly helpful to me. I would be curious to hear your thoughts on whether we should be doing any work on multiverse theory or if it is necessarily out-of-bounds to scientists by definition. Also, would you accept some form of mathematical empiricism, where we get a picture of the data in the math prior to observing it with instruments? This has been suggested as an argument for the multiverse as well, and has a history in physics (e.g. discovery of the Higgs boson). I think laypeople assume scientists were scared of fine-tuning, so they came up with the multiverse theory, when in reality it was suggested by the math.

    • Dear Friend,
      Thanks! I think that multiverse theory is really interesting (as opposed to many worlds, which never really worked for me). The point, though, is not whether I think it’s a good theory, but whether I think we should pursue it. The answer is yes! Smart people proposed it for some really elegant reasons. That said, I consider it interesting model construction for the time being. Relativity used to be in this category, along with the cosmological constant. In those cases, they became science as soon as they made concrete predictions, and they had scientific support once those predictions were borne out in observation. Phlogiston and aether were also once interesting models. They made predictions that were not borne out.
      I hope no one reads me as against experiment. I just want to make a clear case for distinguishing assumptions from conclusions so that we can make the best experiments possible.
      I think math is created, rather than discovered. I think we are discovering the logical consequences of our form of communication. Francis Bacon called them Idols of the Marketplace. I do not think we are discovering new things about the universe, so the phrase “mathematical empiricism” confuses me. Along the lines of my first paragraph, though, I think math is a great way to think clearly and propose models. I hope that makes things clearer.
      -Lucas

  2. You say that “in both [science and probability] we expressly reject arguments that start with “it seems reasonable.””

    But this isn’t true. In science, at least, nearly every conclusion includes reliance, somewhere, on some claim that is merely reasonable. For instance, every set of data is compatible with more than one explanatory theory. Concluding that some given theory truly explains some data requires relying on the merely reasonable assumptions that the theoretical virtues—simplicity, fecundity, conservativeness, unity, etc.—guide us to the true theories.

    This is just one example—I could give many more. For instance, science assumes that the universe didn’t just pop into existence five minutes ago, complete with the apparent memories and records that we seem to have. I think this is a safe assumption, but it isn’t empirically verifiable. Its justification is just that it seems reasonable.

    Many people roll their eyes at points like this last one, because they think that the reality of the past is too obvious to question. I agree–I’m not questioning it. But since this obviousness rests only on the reasonableness of that assumption, this shows that knowing or accepting things on the basis of mere reasonableness can be scientifically acceptable.

    You also say that “There is an edge of physics that struggles with symmetry breaking in the very, very early times of our universe. They can, to my mind, get from broken symmetry now to unbroken symmetry then. They cannot (empirically) get from broken symmetry here to broken symmetry there (another universe). That would be a different type of problem. Science requires observations. If you can observe it, it’s in our universe. If you can’t, it isn’t science.”

    I don’t think this is true. The reasoning used by the cosmologists to get from now to then is the same used to get from here to there. That is, they get from now to then, in part, by knowing enough about the laws of nature to be able to figure out a bunch of “if-then” claims, i.e., if such and such value is this much lower, then this other value will be this much higher. Philosophers call these “if-then” claims “conditionals”. The important thing here is that these conditionals hold true merely because of the laws of nature. So, anywhere the laws are the same, the same conditionals will hold true. Physicists learn enough about the laws to figure out how we got from then to now by figuring out the right conditional truths. And once they’ve figured these out, the same conditionals are true of other possible universes where the laws are the same.

    • Dear Tom,
      Thanks for the comment. There are axiomatic claims that go into science, and there is evidential support that comes out. I never denied the assumptions; I just don’t think you can plug an assumption in and then get it out as support at the end. Further, the assumptions get you some things but not others; we need to be transparent in our reasoning so we can see exactly which assumptions go in – and exactly which data go in. Fine-tuning, as far as I can tell, requires either

      strong assumptions, and consequently strong conclusions which are based on the assumptions rather than the data

      OR

      weak assumptions, and consequently weak conclusions based on the data.

      I don’t think you can get strong conclusions from weak assumptions and data. That would be interesting.

      Once again, the laws of nature do not apply outside the universe. We can infer that they do TO THE EXACT EXTENT that we can infer that the physical constants are the same. We cannot be surprised that our universe is unique while claiming that other universes are just like ours.

      Anyway, that’s my take.
      -Lucas

  3. Yes, that makes things clearer for me, except for the last part. When you say we are “discovering the logical consequences of our form of communication,” this seems to suggest there is a logical structure to reality. It seems odd to accept one form of logic as descriptive of reality and to claim that another form of logic (mathematics) is artificial or created rather than discovered. Can’t we safely assume that some form of mathematics, if only arithmetic, holds in every possible and actual universe? Once you grant that, mathematical empiricism would simply be logical deduction. -Kelly

  4. Dear Kelly,
    I think of logic as the form of reasoning that gets you from premises to conclusions. For me, it includes things like syllogisms, modus ponens, and reductio ad absurdum. Of course, it’s much broader than that, but those are basic pieces. I do not see logic as producing anything new. Science is all about observing external reality and allowing the observations, through logic, to update your worldview. Science uses logic, but it is much more than logic. Math, on the other hand, is just logic. It requires premises, but works out the consequences of the premises. In the case of statistics and set theory, it’s rather remarkable how much can be derived from how little. You might be interested in googling “Synthetic a priori.” Stanford Encyclopedia of Philosophy usually has a good treatment.

    Oddly Enough,
    -Lucas

  5. Lucas, it’s your 3.c. that has always given me pause with respect to fine-tuning arguments. How do we know that the value of some particular constant can be anything other than what it is? I think that’s what you’re pointing out there. If the probability of a given constant being what it is is exactly 1, we get nowhere. Also, the anthropic principle in its weak version is not particularly remarkable. We’re here. If the constants were different, we wouldn’t be here.
