The group met on 24 March to talk about how the word “random” is used in biology and theology. We read a chapter by Ian Barbour about Quantum Mechanics, but spoke more about the implications for biology.
Barbour, Ian (2000). *When Science Meets Religion*. Harper. Chapter 3: The Implications of Quantum Physics.
The word “random” takes a number of different definitions, and it is important to be clear about which one you are using when communicating about biology.
- Without purpose, intent, or order – contrasted with design. “Well, that was random.” Note that randomness can be observer-specific (I Kings 22:34). Gamblers rely on this version of randomness.
- Operating with statistical but not deterministic regularity. My favorite description holds that predictions may be made for a class of events but not for individual events. Also called probabilistic or stochastic. Casinos rely on this version of randomness.
- Operating with statistical regularity in a uniform distribution. All possible outcomes are equally probable, as with a fair die roll or a coin toss.
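The contrast between definitions 2 and 3 can be illustrated with a short simulation (a hypothetical sketch, not part of the original discussion). Both a fair and a loaded die are stochastic in the sense of definition 2: individual rolls are unpredictable while class-level frequencies are predictable. Only the fair die is uniform in the sense of definition 3.

```python
# Hypothetical illustration of definitions 2 and 3 above.
# Both dice are stochastic (definition 2): no single roll can be predicted,
# but class-level frequencies can. Only the fair die is uniform (definition 3).
import random

random.seed(0)  # fixed seed so the sketch is reproducible
n = 60_000

fair = [random.randint(1, 6) for _ in range(n)]
loaded = random.choices(range(1, 7), weights=[3, 1, 1, 1, 1, 1], k=n)

fair_freq = {face: fair.count(face) / n for face in range(1, 7)}
loaded_freq = {face: loaded.count(face) / n for face in range(1, 7)}

print(fair_freq)    # every face near 1/6
print(loaded_freq)  # face 1 near 3/8, the other faces near 1/8
```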
Only definition 2 applies in modern evolutionary biology. Variation and drift follow stochastic rules. Selection, per se, does not. Christian critiques of evolution often focus on definition 1.
Newtonian Atomism (Mechanism) was:
- Deterministic, in the sense that knowledge of the complete state of the system now gives you full knowledge of the state of the system at any point in the future (see Laplace’s Demon).
- Reductionistic, in the sense that knowledge of the complete state of the fundamental particles (at the time, atoms) is sufficient to explain all interactions at all higher levels.
- Realistic, in the sense that the phenomena described are independent of their observation (cp. Cartesian Dualistic Mind from meetings 4.1 and 4.2).
Quantum mechanics calls all three into question. Beginning in the 1920s, physicists (including Planck, Bohr, Heisenberg, and Schrödinger) began describing physical events in terms of discrete packets of energy (“quanta”) behaving according to stochastic rules. Further, those rules suggest some things cannot be known – for example, we apparently cannot simultaneously know the momentum and position of a particle beyond a certain precision (the product of their standard deviations must be at least ħ/2).
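The uncertainty relation mentioned above is usually written in the following standard textbook form (not quoted from Barbour’s chapter):

```latex
\sigma_x \, \sigma_p \geq \frac{\hbar}{2}
```

where \(\sigma_x\) and \(\sigma_p\) are the standard deviations of position and momentum measurements, and \(\hbar\) is the reduced Planck constant.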
This methodological stochasticity may be interpreted in (at least) four ways.
Hidden Variable approaches hold that more knowledge will yield a deterministic system and deterministic predictions. Virtual interference (the ability of a single photon or electron to interfere with itself by traveling multiple paths) makes this an unpopular option. [Einstein]
Conceptual or Experimental Limitation approaches hold that, whether or not the system is deterministic, we only have stochastic ways of modeling it. This limitation may or may not be fixable. Atomic decay (the breakdown of radioactive nuclei according to an exponential distribution) suggests that something is going on in the thing itself, independent of observation or conceptualization, making this a less popular option. [Bohr]
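The atomic-decay example can be sketched numerically (an illustrative simulation with invented parameters, not a claim about any particular isotope). No individual decay time can be predicted, yet the class of events follows the exponential law so reliably that about half the population survives each half-life.

```python
# Illustrative sketch of stochastic atomic decay (parameters are invented).
# Individual decay times are unpredictable; the class of events follows
# an exponential distribution with a fixed half-life.
import math
import random

random.seed(1)  # fixed seed so the sketch is reproducible
half_life = 10.0                # arbitrary time units
lam = math.log(2) / half_life   # decay constant lambda

# Each nucleus gets an independent decay time drawn from Exp(lambda).
decay_times = [random.expovariate(lam) for _ in range(100_000)]

# Class-level regularity: about 1/2 of the nuclei survive past one
# half-life, about 1/4 past two half-lives, and so on.
survive_1 = sum(t > half_life for t in decay_times) / len(decay_times)
survive_2 = sum(t > 2 * half_life for t in decay_times) / len(decay_times)
print(survive_1)  # near 0.50
print(survive_2)  # near 0.25
```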
Objective Indeterminacy approaches (e.g., the Copenhagen Interpretation) hold that an object, or the universe as a whole, has an inherent and probabilistic propensity to act in particular ways rather than a deterministic reliability to act always in a fixed way. This makes the universe time-irreversibly contingent. Multiple potential outcomes exist until one actual outcome occurs. Note that one popular interpretation of fitness follows these lines, constructing fitness as a propensity to survive and reproduce. Observed fitness and propensity fitness differ according to contingent factors. [Heisenberg]
Many-Worlds approaches hold that all outcomes appear deterministically in some universe. What appears to be a collapse into contingency is really movement of the observer into one of two or more branching universes. Besides being an affront to Occam’s Razor (it multiplies entities to infinity), the Many-Worlds interpretation also fails to answer why we, as observers, follow the trajectory we do. For these reasons it remains a minority opinion. [Everett]
Experimental evidence points toward Objective Indeterminacy, but it is not falsifiable or provable in physics, as we have no empirical access to potential but non-actual worlds.
The Mechanical Philosophy (Gassendi, Descartes, Boyle)
Enlightenment thinkers, in reducing the physical world to particles and forces, made physics far more tractable, but risked being called atheists by an establishment that connected final causes to inherent dispositions and God’s will working in things. They replied that God was not only necessary in their system, but demonstrable, as someone was needed to make and enforce the “laws” (a term just becoming popular). Theologians thus began to understand determinism as proof of God’s existence and goodness. When science lost determinism in the early 20th century, some theologians felt science was moving away from God. Others felt that pinning arguments for God on science was a bad idea in the first place and no harm had been done. In my opinion, too much theological reliance on science in the 17th century was the problem, not too little in the 20th.
This began a movement toward Deism, in which God made the laws, made them self-sustaining, and then walked away.
In evolutionary biology, one might claim that God is the hidden variable, or that God operates with a different epistemology than humans, preserving human uncertainty without making God uncertain (Denis Alexander). Or one might claim that God created stochastic processes and lets them run (Arthur Peacocke). Or one might claim that stochastic processes exclude divine action (Jacques Monod) or at least operate separately from divine action (René Descartes).
For more on randomness, see the second semester (Fall 2010) of discussions from the Forum on Chance, Purpose, and Progress at the University of Arizona. https://chancepurposeprogress.wordpress.com/