Making us wonder whether co-authors Keith Richards and Mick Jagger will someday be Nobelized in Physics, like Bob Dylan in Literature for his beautiful songs.
For “I can’t get no…o… Satisfactio…on!”, the chorus of their famous song, may turn out to be the lament of our universe, whose cosmological problem is… to be SATisfied.
Why so? Because SATisfaction, more precisely random SATisfaction – as when you don’t know whether to go surfing the sea waves or snowboarding the slopes – is proven in Complexity theory to compute Consistency. Wouldn’t the universe therefore be content to be consistent?
Actually, long ago, the universe was perhaps content before even having content; a sort of being before having, if you will. So, in the ‘Satisfiable Cosmological model’ initiated in works and papers already published or submitted for publication, but mostly still to be published – perhaps even before the Big Crunch, if any – the expansion resulting from its primarily spatial, minimally constrained random variables appears to match quite well what is observed, cf. below.
A word for our happy reader so far: random K-SATisfiability is a Constraint Satisfaction Problem (CSP) technique which has proven efficient at modeling diverse Statistical Physics phenomena. It is here, more precisely than in our previous papers and other works, applied to Cosmology, cf. past and future posts here or there.
Consistency of the Universe versus a ‘Human Singularity’
Theoretical issues
In Section 2.5, “Implications”, of their cosmological survey, George Ellis and Henk van Elst emphasize the “key issue” of the general consistency of the constraints with the evolution equations of a “1+3 covariant”, General Relativity grounded universe. More broadly, is the universe (logically) consistent? Then, after Kurt Gödel, how could it also be complete? And yet Quantum Mechanics was proven complete, and so should a Quantum Gravity integrated universe – e.g. from a path integral formulation, such as the Hartle-Hawking model – be understood.
The completeness of the Copenhagen interpretation, however, comes at a price: Niels Bohr’s complementarity. Meanwhile, could the model be extended up to the impact of an observer of the universe (taken as the quantum system with its cosmological wave function) upon which a measurement is performed? This is scrutinized by John Barrow and Frank Tipler from Chapter 7 of their masterwork, where they study this interpretation and extend it to the cosmos, along with others, such as Hugh Everett’s Many-worlds interpretation. There, the role of what we model as a “Human Singularity” is indirectly at stake when they discuss “the Final Observation by the Ultimate Observer”, who should stand “either at the final singularity or at future timelike infinity”. But would the universe then appear, currently so termed and then so terminated, consistent? That would require it to be de-terminated, which is for instance what the path integral might yield at the final time T of the wave-function formula, at this end where, say, Penrose’s U (Unitary evolution) versus R (Quantum Reduction) should be reconciled.
The question is indirectly considered by Jean-Yves Girard from a mathematical logic (consistency) perspective, referring to a “transcendence of the next axiom”. Again this seems to call for an impossible reconciliation: “a definition, whatsoever, of the next axiom, contradicts incompleteness”. Indeed not only a final observation, but also an unpredictable next axiom – i.e. one outside whatever wave function has been accrued so far – is required. The natural idea is to choose at random, were randomness somehow definable.
Call it otherwise another clause in an unlimited series of constraints, which at the edges would not be limited to those of the “1+3 covariance”: there comes the use of Constraint Satisfaction Problems (CSP), and in particular of the random CSPs well known in Computational Complexity. A particularly efficient reduction of these to random K-SATisfiability – commented and formulated in some previous and upcoming posts and papers – was justified by Stephen Cook, later refined notably by Fortnow, and then successfully applied by researchers, such as those quoted below, to a widening range of Physics problems.
In these approaches random sets of N Boolean variables are constrained by random sets of M clauses (each linking K variables from the pool of N) in order to consider whether consistency easily arises (SATISFIABILITY regime) or not (UNSATISFIABILITY). So, considering the issue of the consistency of the universe as a whole – whether entirely evolving at random and yet under a growing set of constraints and new axioms, or much more framed – could such an approach tell us something new about the universe currently observed and measured, with its cosmological parameters? In other words, is the universe consistent, and as such SATisfiable, including with its final observers so far… us?
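For the reader who prefers to touch the phenomenon: the N-variables / M-clauses setup above can be sketched as a toy experiment. This is a minimal illustration of our own (the brute-force check and parameter choices are not from the quoted papers, and the quoted threshold value is the commonly cited statistical-physics estimate): random 3-CNF formulas well below the threshold ratio M/N are almost always satisfiable, and well above it almost never.

```python
import itertools
import random

def random_3cnf(n_vars, n_clauses, rng):
    """A random 3-CNF formula: each clause picks 3 distinct variables,
    each appearing as a positive or negated literal (var, is_positive)."""
    formula = []
    for _ in range(n_clauses):
        picked = rng.sample(range(n_vars), 3)
        formula.append([(v, rng.choice([True, False])) for v in picked])
    return formula

def is_satisfiable(formula, n_vars):
    """Brute force over all 2^n assignments (fine for toy-sized n)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        # CNF: every clause (OR of literals) must hold, all joined by ANDs.
        if all(any(bits[v] == pos for v, pos in clause) for clause in formula):
            return True
    return False

rng = random.Random(0)
n = 12
results = {}
# alpha = M/N well below vs well above the predicted 3-SAT threshold (~4.27).
for alpha in (2.0, 7.0):
    m = int(alpha * n)
    results[alpha] = sum(is_satisfiable(random_3cnf(n, m, rng), n)
                         for _ in range(20))
    print(f"alpha = {alpha}: {results[alpha]}/20 instances satisfiable")
```

Even at this tiny size the two regimes separate sharply, a shadow of the ‘sharp transition’ discussed below.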
Observational hints
Diverse series of observations appear to have recently driven the scientific community toward the idea that the so-called ‘ΛCDM consensus’ model is not able, or at least not sufficient, to account for the observed and measured behavior and parameters at its edges, meaning for instance the largest scales or the latest rhythm of expansion.
This is particularly the result of the consistent, determined effort of Adam Riess et al. to stick to observation, with increasingly precise measurements of H(z) over the last two decades, from which came their conclusion of an enduring discrepancy with the CMB-rooted, hence more ΛCDM-model-based, calculation of the then assumed ‘constant’ for Now, H0. The difference was then envisaged as a ‘late’ late acceleration, with H(z) apparently nearly constant since z ≈ 0.2, which would seem the kind of behavior of a universe already entering a de Sitter exponential expansion. However, this would at first only be expected when the energy density ratio Ω_Λ – whether sourced in a cosmological constant or a sort of ‘dark energy’ – would, if any, come closer to 1. Unless neither Λ nor dark energy is needed for a constant and consistent… ‘flatness’ (this seems to be the intuition of Roger Penrose, for instance as he summarizes some aspects of his Twistor theory at the end of his comprehensive survey of Physics). Hence the question: what might take up the role interpreted as a repulsive energy?
Another stream of observations and simulations has focused through recent decades on the evolution of the ‘cosmic voids’, with a series of conclusions about what currently appears to make up the majority of the volume of the universe. Still expanding – and in fact carrying the expansion of the universe, as opposed to the rather contracting clusters – the largest voids build a sort of configuration of gigantic, quite spherical empty bubbles with flat cores and then a “sharp transition” to dense domain walls. These walls appear even “overdense” according to Hamaus et al. and Sutter et al. (pinpointing the coincident development of large cosmic voids with the dark energy effect), so that the landscape looks like “separate universes”. All this might after all seem quite consistent with the scheme mentioned above (AIP # 1018).
Finally, the third and unfortunately still less publicly acclaimed stream of observations and simulations, but of primary interest here, has over recent decades related Computational Complexity – itself a fascinating scientific domain owing particularly to Alan Turing – to Condensed Matter Physics, also known as Statistical Physics.
This is particularly well summarized in a paper of Marc Mézard’s about the amazing power of the random K-SAT approach, modeling the consistency of a typical combinatorial problem of (logically, or consistently) SATisfying a growing series of constraints: in the Conjunctive Normal Form (CNF) case, all clauses of a defined dimensionality K are joined through ANDs while, within each clause, the K variables are linked through ORs – as opposed to the converse Disjunctive Normal Form (DNF).
The theoretical value traces back to the seminal paper of Stephen Cook, where the reducibility of Non-deterministic problems to DNF, and even 3-DNF (and later to 3-CNF, hence K = 3), was proven through two theorems. The practical value has then arisen with the observation [10] of an effect labeled as a “sharper acceleration” at low redshift, hence in very recent cosmology, while a recent “sharper transition between core and boundaries” of cosmic voids is analyzed.
Meanwhile, in Complexity/Statistical Physics, a ‘sharp transition’ was observed at some thresholds of the ratio α = M/N of the number M of random clauses to the N random binary variables allocated to them: several papers have since modeled this behavior in terms of global energy and entropy cost or goal functions, interpreted it through hypergraphs with clustering, sphere jamming and condensation effects, and narrowed the values α_d, α_c, α_s… where the transitions occur. This was mentioned in a previous post, together with the interest of looking at the ratio β = 1 − 1/α, which ranges within [0, 1[ when α spans [1, +∞[: the interest of a threshold at β_d ≈ π/√18, fitting the Hales-Kepler ‘orange stacking’ threshold, was pinpointed in previous posts and communications.
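The β mapping is simple enough to check numerically. A minimal sketch, assuming α_d ≈ 3.86 (the commonly quoted statistical-physics estimate of the clustering threshold for random 3-SAT, which the reader should treat as our assumption here): applying β = 1 − 1/α to it lands within about 5 × 10⁻⁴ of the Hales-Kepler packing density π/√18 ≈ 0.7405.

```python
import math

def beta(alpha):
    """Map the clause density alpha = M/N, spanning [1, +inf[, onto beta in [0, 1[."""
    return 1.0 - 1.0 / alpha

kepler = math.pi / math.sqrt(18)   # Hales-Kepler sphere-packing density, ~0.74048
alpha_d = 3.86                     # assumed: quoted 3-SAT clustering threshold

print(f"pi/sqrt(18)      = {kepler:.5f}")
print(f"beta(alpha_d)    = {beta(alpha_d):.5f}")
print(f"alpha at packing = {1.0 / (1.0 - kepler):.4f}")  # inverse mapping
```

Whether this numerical proximity is a coincidence or the fit claimed above is exactly what future, sharper estimates of α_d could decide.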
The quoted authors conclude that there is ‘easy’ SATisfiability – meaning that the typical 3-CNF formula linking the M 3-clauses through ANDs, with a random allocation of the N variables, easily ends up ‘True’ – for α below the first threshold, then a ‘hard’ regime, and finally UNSATisfiability.
Now the question may have become, for the sake of an overall consistency: ‘How might this be of any relevance for the behavior of the universe as a whole?’ And for the Human phenomenon (as also emphasized by Barrow & Tipler)?
Satisfying a basic universe and more… up to the Human Singularity
Applying this to the universe itself appears to deliver a picture of a naturally flat universe, hence requiring neither a cosmological constant nor dark energy, and in fact, at first glance, no more dark matter per se. Arguably the technical details, still to be published, will need to be shaken out (there are probably already countless such models in the air), but the resulting landscape would seem to evolve as observed and in particular attuned to the very brief summary of observations above. A first test might be that the ‘equivalent’ Ω_Λ, in the years or decades to come, converges to the π/√18 sphere-packing threshold of the Hales-Kepler theorem.
Then what about its links with us, the ‘Homo Saps’, as summarized by Arthur C. Clarke when envisioning that the heirs of our good old transistors, once become Robots and possibly sorts of Terminators, send us to join ‘Dino’ in the dustbins of (by then their) history? Well, assuming the gravitational universe is proven consistent – hence satisfied, hence our proposed model – and accordingly a matter- and life-filled universe, even if only locally, what are the odds that Humans be themselves consistent when they seem so often UNSATisfied?
This drives us back to the position of any observer, in the diverse (Copenhagen or Many-worlds, physical) senses considered by Barrow & Tipler, but also of any author of a ‘next axiom’, even if, in both cases, each in their local space-time: extending a cosmological SAT into matter and life seems natural, even promising, but the fascinating potential consequences come when it is pushed into the realm of Natural Language, and Consciousness.
Conclusion
Anyway, this post is a poor and mere hint at what the full and detailed model requires, but its unfolding seems to require neither dark energy nor dark matter as specific components. It remains to be seen whether its quantifiable prediction is comforted or refuted by incoming observations: the energy density ratios usually interpreted as dark energy or cosmological constant (Ω_Λ), and as dark + baryonic matter (Ω_M), becoming nearly constant, as strange as it may appear (technical content awaiting publication).
More widely, the question of a universe both consistent and complete remains, and so far we, Homo Saps, are the terms wherefrom it is de-terminate. Will some piece of logical quantum gravity satisfy us? Otherwise it will fall as another bit of quantum logic in the grave. But if it brings us SATisfaction, then its system of random and then perhaps integral paths should somehow reach us… the ‘Human Singularity’.
