Quantum Paradox
This page is dedicated to quantum physics. You don't have to be a physicist. You just have to be curious.
Breakthrough new theory finally unites quantum mechanics and Einstein’s theory of general relativity.
A breakthrough new theory seeks to reconcile two pillars of modern physics – quantum mechanics and Einstein’s theory of general relativity.
By Joseph Shavit
In a cutting-edge development that has sent shockwaves through the scientific community, researchers at University College London (UCL) have unveiled a radical theory that seeks to reconcile two pillars of modern physics – quantum mechanics and Einstein's general theory of relativity.
These two theories, which have been the foundation of physics for over a century, have long been at odds with each other, and their unification has remained an elusive quest.
Today, we dive into the world of quantum gravity, a field of study that aims to bridge the gap between the quantum realm, which governs the behavior of particles at the smallest scales, and the macroscopic world, where gravity shapes the very fabric of spacetime.
While the prevailing consensus has been that Einstein's theory of gravity must be modified to fit within the framework of quantum theory, a new theory, dubbed a "postquantum theory of classical gravity," challenges this assumption in a thought-provoking way.
The Clash of Titans: Quantum Mechanics vs. General Relativity
Quantum mechanics and general relativity – the latter developed by Albert Einstein in the early 20th century – have both stood the test of time and have been proven accurate in their respective domains. However, when it comes to merging these two theories into a single, comprehensive framework, the scientific community has hit a roadblock.
Quantum mechanics, which beautifully describes the behavior of particles at the subatomic level, operates in a probabilistic realm characterized by wave functions and quantum states.
In contrast, general relativity paints a different picture of the universe, where gravity arises from the curvature of spacetime caused by massive objects. While these theories excel in their own domains, they clash when brought together, leading to mathematical inconsistencies and contradictions.
A New Approach: Spacetime as Classical
Enter Professor Jonathan Oppenheim and his team at UCL, who have challenged the status quo with their groundbreaking theory. In two parallel papers published simultaneously, they propose a novel perspective that suggests spacetime may remain classical and unaffected by quantum mechanics.
This theory, as described in a paper published in Physical Review X (PRX), refrains from modifying spacetime itself and instead modifies quantum theory.
The core tenet of this theory is that spacetime remains classical, not subject to the constraints of quantum theory. Instead, quantum theory is tweaked to account for intrinsic unpredictability mediated by spacetime. The consequence?
Spacetime experiences random and violent fluctuations that exceed the expectations set by quantum theory. These fluctuations, if measured precisely enough, render the apparent weight of objects unpredictable.
To put their theory to the test, the researchers propose a groundbreaking experiment aimed at detecting fluctuations in mass over time. For instance, consider a 1kg mass – like the standard kilogram kept by the International Bureau of Weights and Measures in France. If the measurements of this 1kg mass exhibit fluctuations smaller than those required for mathematical consistency, the theory would be ruled out.
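The statistical logic of such a test can be sketched in a few lines of Python. The numbers below (instrument noise floor, size of a hypothetical fluctuation, number of weighings) are invented purely for illustration and are not values from the UCL proposal; the point is simply that extra spacetime-induced scatter would show up as variance beyond the instrument's known baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_weighings = 100_000
instrument_noise = 1e-9      # kg: assumed (made-up) measurement noise
hypothetical_jitter = 5e-10  # kg: made-up extra scatter from fluctuating spacetime

# Simulated readings of a nominal 1 kg mass: instrument noise plus the
# hypothetical spacetime-induced jitter.
readings = (1.0
            + rng.normal(0.0, instrument_noise, n_weighings)
            + rng.normal(0.0, hypothetical_jitter, n_weighings))

# Any variance beyond the known instrument baseline is the candidate signal.
excess_var = readings.var() - instrument_noise**2
print(f"inferred extra scatter: {np.sqrt(max(excess_var, 0.0)):.2e} kg")  # ~5e-10
```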
This experiment, which has far-reaching implications for our understanding of gravity and quantum mechanics, is not just theoretical but practical. It serves as a critical juncture in the ongoing debate between competing theories of quantum gravity. Professor Oppenheim has even placed a 5000:1 odds bet on the outcome with Professor Carlo Rovelli and Dr. Geoff Penington, leading proponents of loop quantum gravity and string theory respectively.
Five Years of Rigorous Testing
The UCL research group, led by Professor Oppenheim, has spent the past five years meticulously developing and examining their theory, scrutinizing its consequences from various angles.
As Professor Oppenheim puts it, "Quantum theory and Einstein's theory of general relativity are mathematically incompatible with each other, so it's important to understand how this contradiction is resolved."
Their journey has been marked by relentless exploration of the fundamental nature of gravity and the cosmos itself, probing the boundaries of our knowledge and challenging preconceived notions.
Beyond the Weight of Gravity: Implications of the Postquantum Theory
While the focus of the postquantum theory is on reconciling quantum mechanics and general relativity, its implications extend far beyond the realm of gravity. One notable consequence is the elimination of the notorious "measurement postulate" in quantum theory.
This postulate, which has long perplexed physicists, posits that measurements collapse quantum superpositions into definite states. In the new theory, quantum superpositions naturally localize through their interaction with classical spacetime, obviating the need for this postulate.
Professor Oppenheim's journey to this groundbreaking theory was motivated by his attempt to unravel the mysteries of the black hole information paradox. According to standard quantum theory, information cannot be destroyed.
Therefore, an object entering a black hole should somehow radiate information back out. However, this concept directly contradicts general relativity, which posits that once an object crosses a black hole's event horizon, it becomes inaccessible.
The postquantum theory offers a unique perspective, suggesting that the fundamental breakdown in predictability inherent to spacetime allows for information to be destroyed, resolving this long-standing paradox.
The proposal to test whether spacetime remains classical by detecting random fluctuations in mass is just one part of the puzzle.
Another experimental proposal aims to verify the quantum nature of spacetime through a phenomenon called "gravitationally mediated entanglement." These experiments, though challenging, hold immense promise in advancing our understanding of the fundamental laws of nature.
Professor Sougato Bose, an expert in the field who was not involved in the recent UCL announcement but had previously proposed the entanglement experiment, emphasized the importance of these endeavors, stating, "Experiments to test the nature of spacetime will take a large-scale effort, but they're of huge importance from the perspective of understanding the fundamental laws of nature. I believe these experiments are within reach – these things are difficult to predict, but perhaps we'll know the answer within the next 20 years."
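Roughly speaking, in proposals of this kind two small masses are each placed in a superposition of two locations; in each branch the pair accumulates a slightly different phase from their mutual gravitational attraction, and the branches become entangled. As an order-of-magnitude sketch (a standard schematic scaling, not the exact expression from any specific paper), the relative phase grows as

$$\phi \;\sim\; \frac{G\, m_1 m_2\, t}{\hbar\, d},$$

where d is the separation between branches and t the interaction time. Detecting the resulting entanglement would show that the gravitational field can mediate quantum correlations – something a purely classical spacetime should not be able to do.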
At the heart of this theory lies a delicate interplay between quantum particles, such as atoms, and the fluctuations in classical spacetime. These fluctuations, if the theory holds, must occur on a scale yet to be detected but should be large enough to impact quantum particles' behavior.
The proposed experiments seek to find this elusive balance, shedding light on whether spacetime remains classical or succumbs to quantum mechanics at microscopic scales.
In the words of Professor Oppenheim, "Now that we have a consistent fundamental theory in which spacetime does not get quantized, it's anybody's guess." The journey has just begun, and the future of physics has never looked more intriguing.
Background information
Quantum mechanics background: All the matter in the universe obeys the laws of quantum theory, but we only really observe quantum behavior at the scale of atoms and molecules.
Quantum theory tells us that particles obey Heisenberg's uncertainty principle: we can never know both their position and their velocity at the same time. In fact, they don't even have a definite position or velocity until we measure them. Particles like electrons can behave more like waves and act almost as if they can be in many places at once (more precisely, physicists describe particles as being in a "superposition" of different locations).
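For reference, Heisenberg's principle bounds the product of the spreads in position and momentum:

$$\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},$$

so the more sharply a particle's position is pinned down, the less can be said about its momentum, and vice versa.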
Quantum theory governs everything from semiconductors, which are ubiquitous in computer chips, to lasers, superconductivity and radioactive decay. In contrast, we say that a system behaves classically if it has definite underlying properties. A cat appears to behave classically – it is either dead or alive, not both, nor in a superposition of being dead and alive.
Why do cats behave classically, and small particles quantumly? We don’t know, but the postquantum theory doesn’t require the measurement postulate, because the classicality of spacetime infects quantum systems and causes them to localize.
Gravity background: Newton's theory of gravity gave way to Einstein's theory of general relativity (GR), which holds that gravity is not a force in the usual sense. Instead, heavy objects such as the sun bend the fabric of spacetime in a way that causes the earth to revolve around it.
Spacetime is just a mathematical object consisting of the three dimensions of space and time considered as a fourth dimension. General relativity predicted the formation of black holes and the big bang. It holds that time flows at different rates at different points in space, and the GPS in your smartphone needs to account for this in order to properly determine your location.
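The size of the GPS correction follows from textbook relativity. The short Python sketch below combines the gravitational effect (clocks run faster higher in Earth's gravitational potential) with the special-relativistic slowdown due to orbital speed, using standard constants; it reproduces the well-known net offset of roughly 38 microseconds per day.

```python
import math

# Standard physical constants and GPS orbit parameters
GM = 3.986e14             # Earth's gravitational parameter, m^3/s^2
c = 2.998e8               # speed of light, m/s
R_earth = 6.371e6         # Earth's radius, m
r_sat = R_earth + 2.02e7  # GPS orbital radius (~20,200 km altitude), m

# Gravitational term: the satellite clock runs FAST relative to the ground
grav = (GM / c**2) * (1 / R_earth - 1 / r_sat)

# Velocity term: orbital motion makes the satellite clock run SLOW
v = math.sqrt(GM / r_sat)
vel = v**2 / (2 * c**2)

microseconds_per_day = (grav - vel) * 86400 * 1e6
print(f"net clock offset: {microseconds_per_day:.1f} microseconds/day")  # ~38
```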
Note: Materials provided above by The Brighter Side of News. Content may be edited for style and length.
Consciousness is the collapse of the wave function.
Quantum mechanics and the organic light of consciousness.
Quantum mechanics suggests that particles can be in a state of superposition - in two states at the same time - until a measurement takes place. Only then does the wavefunction describing the particle collapse into one of the two states. According to the Copenhagen interpretation of quantum mechanics, the collapse of the wave function takes place when a conscious observer is involved. But according to Roger Penrose, it's the other way around. Instead of consciousness causing the collapse, Penrose suggested that wavefunctions collapse spontaneously and in the process give rise to consciousness. Despite the strangeness of this hypothesis, recent experimental results suggest that such a process takes place within microtubules in the brain. This could mean that consciousness is a fundamental feature of reality, arising first in primitive bio-structures, in individual neurons, cascading upwards to networks of neurons, argues Roger Penrose collaborator Stuart Hameroff.
Consciousness defines our existence. It is, in a sense, all we really have, all we really are. The nature of consciousness has been pondered in many ways, in many cultures, for many years. But we still can't quite fathom it.
Consciousness is, some say, all-encompassing, comprising reality itself, the material world a mere illusion. Others say consciousness is the illusion, without any real sense of phenomenal experience, or conscious control. According to this view we are, as TH Huxley bleakly said, ‘merely helpless spectators, along for the ride’. Then, there are those who see the brain as a computer. Brain functions have historically been compared to contemporary information technologies, from the ancient Greek idea of memory as a ‘seal ring’ in wax, to telegraph switching circuits, holograms and computers. Neuroscientists, philosophers, and artificial intelligence (AI) proponents liken the brain to a complex computer of simple algorithmic neurons, connected by variable strength synapses. These processes may be suitable for non-conscious ‘auto-pilot’ functions, but can’t account for consciousness.
Finally there are those who take consciousness as fundamental, as connected somehow to the fine scale structure and physics of the universe. This includes, for example, Roger Penrose's view that consciousness is linked to the Objective Reduction process - the 'collapse of the quantum wavefunction' - an activity on the edge between quantum and classical realms. Some see such connections to fundamental physics as spiritual, as a connection to others and to the universe; others see it as proof that consciousness is a fundamental feature of reality, one that developed long before life itself.
Consciousness and the collapse of the wavefunction
Penrose was suggesting Objective Reduction not only as a scientific basis for consciousness, but also as a solution to the 'measurement problem' in quantum mechanics. Since the early 20th century, it has been known that quantum particles can exist in a superposition of multiple possible states and/or locations simultaneously, described mathematically as a wavefunction evolving according to the Schrödinger equation. But we don't see such superpositions because, it appeared to early quantum researchers, the very act of measurement, or of conscious observation, seemed to 'collapse' the wavefunction to definite states and locations - the conscious observer effect: consciousness collapsed the wavefunction. But this view put consciousness outside the purview of science. Another proposal is 'Many Worlds', in which there is no collapse, and each possibility evolves into its own universe.
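In the standard notation, a particle that could be found in either of two states A and B is described by the superposition

$$|\psi\rangle \;=\; \alpha\,|A\rangle + \beta\,|B\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

and a measurement finds A with probability $|\alpha|^2$ and B with probability $|\beta|^2$. The measurement problem is explaining why, when, and how one definite outcome is selected.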
Penrose turned the conscious observer around. Instead of consciousness causing collapse, wavefunctions collapsed spontaneously, causing a moment – a 'quantum' – of consciousness. Collapse, or quantum state reduction, occurred at an objective threshold in the fine scale structure of spacetime geometry.
Penrose first likened quantum particles to tiny curvatures in spacetime geometry (as Einstein’s General Theory of Relativity had done for large objects like the sun). Superposition states of multiple possibilities, or of delocalized particles, could then be viewed as opposing curvatures, and hence separations in the fine scale structure of the universe, spacetime geometry. Were such separations to continue, ‘Many Worlds’ would result.
But such separations would be unstable, and reduce, or 'collapse', to definite states, selected neither randomly, nor algorithmically, but 'non-computably', perhaps reflecting 'Platonic values' embedded in spacetime geometry. Thus while the wave-function is viewed by many as pure mathematics in an abstract space, Penrose characterized it as a process in the fine scale structure of the universe.
And each Objective Reduction event would entail a moment of 'proto-conscious' experience in a random microenvironment, without memory, or context. But occasionally, at least, a feeling of pleasure would arise, e.g. from quantum optical effects leading to Objective Reduction in a micelle, providing a feedback fitness function to optimize pleasure. Virtually all human and animal behavior is in some way related to the pursuit of pleasure in its various forms.
Proto-conscious moments would lack memory, meaning and context, but have phenomenal ‘qualia’ – a primitive form of conscious experience. They may be like the unharmonious tones, notes and sounds of an orchestra tuning up. In the mid 1990s I teamed with Roger Penrose to suggest that quantum vibrations in microtubules in brain neurons were ‘orchestrated’, hence ‘Orchestrated Objective Reduction’. Consciousness was somewhat like music in the structure of spacetime.
Our Orchestrated Objective Reduction theory was viewed skeptically. Technological quantum computers were operated near absolute zero temperatures to avoid thermal decoherence, so quantum prospects in the ‘warm, wet and noisy’ brain seemed unlikely. But we knew quantum optical activity could occur within non-polar regions in microtubule proteins, where anesthetics appeared to act to selectively block consciousness. Recently we were proven right: a quantum optical state of superradiance has been shown in microtubules, and preliminary evidence suggests it is inhibited by anesthetics. How do quantum activities at this level affect brain-wide functions and consciousness?
It is becoming apparent that consciousness may occur in single brain neurons extending upward into networks of neurons, but also downward and deeper, to terahertz quantum optical processes, e.g. ‘superradiance’ in microtubules, and further still to fundamental spacetime geometry (Figure 1). I agree that consciousness is fundamental, and concur with Roger Penrose that it involves self-collapse of the quantum wavefunction, a rippling in the fine scale structure of the universe.
Figure 1. A scale-invariant hierarchy extending downward from a cortical pyramidal neuron (left) into microtubules, tubulin dipoles, organic ring dipoles and spacetime geometry curvatures. Self-similar dynamics recur every three orders of magnitude.
Light and consciousness
Impossible to directly measure or observe, consciousness might reveal itself in the brain by significant deviation from mere algorithmic non-conscious processes, like reflexive, auto-pilot behaviors. Such deviation is found in cortical Layer V pyramidal neurons (see Figure 1) in awake animals, without changes in external membrane potentials. This suggests ‘conscious’ modulation may arise inside neurons, from deeper, faster quantum processes in cytoskeletal microtubules (see Figure 1). These could include Penrose Objective Reduction connecting to fundamental spacetime geometry.
Light is the part of the electromagnetic spectrum that can be seen by the eyes of humans and animals – visible light. Each point on the spectrum corresponds with a photon of a particular wavelength, and inverse frequency. Each wavelength is seen by the eye and brain as a different color. In addition to wavelength/frequency, photons have other properties including intensity, polarization, phase and orbital angular momentum.
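Wavelength, frequency and photon energy are tied together by two standard relations, ν = c/λ and E = hν. A quick check for green light at 500 nm:

```python
h = 6.626e-34  # Planck's constant, J*s
c = 2.998e8    # speed of light, m/s

wavelength = 500e-9            # 500 nm: green light
frequency = c / wavelength     # nu = c / lambda
energy_eV = h * frequency / 1.602e-19  # E = h * nu, converted to electron-volts

print(f"frequency: {frequency:.2e} Hz")      # ~6.0e14 Hz
print(f"photon energy: {energy_eV:.2f} eV")  # ~2.5 eV
```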
Ancient traditions characterized consciousness as light. Religious figures were often depicted with luminous ‘halos’, and/or auras. Hindu deities are portrayed with luminous blue skin. And people who have ‘near death’ and ‘out of body’ experiences described being attracted toward a ‘white light’. In many cultures, those who have ‘awakened to the truth about reality’ are ‘enlightened.’
Organic light per se isn’t consciousness. But organic light could be the interface between the brain and conscious processes in the fine scale structure of the universe.
In recent years, biophotons have been determined to occur in brain neurons, e.g. in ultraviolet, visible and infra-red wavelengths from oxidative metabolism in mitochondria.
Light was prevalent in the early universe, e.g. for a period beginning 10 seconds after the Big Bang, when photons dominated the energy landscape and briefly illuminated reality. However, photons, protons and electrons then merged into a hot, opaque plasma, obscuring reality for 350,000 years until the universe cooled, enabling electrons and protons to form neutral atoms, and build matter and structure. Photons became free to roam a mostly transparent universe and, upon meeting matter, reflect, scatter or be absorbed, generally without significant chemical interaction. However, compounds containing organic carbon rings, essential molecules in living systems, are notable exceptions.
18th century chemists knew of linear chains of carbon atoms with extra hydrogens – 'hydrocarbons', like methane, propane etc. They also knew of an oily, highly flammable molecule with 6 carbons they called benzene, but didn't understand its structure. One night the German chemist August Kekulé had a dream that linear hydrocarbons were snakes, and one swallowed its tail – the mythical 'Ouroboros'. He awoke to proclaim (correctly, it turned out) "benzene is a ring"!
Each hexagonal benzene ring has six delocalized electrons which extend as 'electron clouds' above and below the ring, occupying what later became known as 'pi electron resonance' orbitals. Within these clouds, electrons can switch between specific orbitals and energy levels by first absorbing a photon, and then subsequently emitting a lower energy photon. This is the basis for quantum optical effects including fluorescence, phosphorescence, excitons and superradiance.
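Fluorescence always returns a photon of lower energy than the one absorbed (the Stokes shift). As a rough worked example using approximate textbook values for the aromatic amino acid tryptophan – absorption near 280 nm and emission near 350 nm in water – the energy dissipated per absorbed photon is easy to estimate:

```python
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(wavelength_m):
    """Photon energy in electron-volts for a given wavelength."""
    return h * c / wavelength_m / eV

absorbed = photon_energy_eV(280e-9)  # ~4.4 eV (tryptophan absorption, approx.)
emitted = photon_energy_eV(350e-9)   # ~3.5 eV (tryptophan emission, approx.)
print(f"energy dissipated per photon: {absorbed - emitted:.2f} eV")  # ~0.9 eV
```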
Hexagonal organic rings with quantum optical properties may fuse, and include 5-sided rings to form ‘indole’ rings found in psychoactive molecules, living systems, and throughout the universe, e.g. in interstellar dust.
The hot plasma of the early universe had led to the formation of polycyclic aromatic hydrocarbons (PAHs), fused organic ('aromatic') complexes of benzene and indole rings. Ice-encrusted in interstellar dust, PAHs are still quantum optically active, e.g. fluorescent, and emit photons seen on earth. This 'organic light' may play a key role in the origin and development of life and consciousness.
Life and consciousness – Which came first?
Life on earth is said to have begun in a simmering mix of aqueous and oily compounds, sunlight and lightning, called the 'Primordial soup', as proposed by Oparin and Haldane in the early 20th century. In the 1950s Miller and Urey simulated a version of the primordial soup and found 'amphipathic' biomolecules with a non-polar, benzene-like pi resonance organic ring on one end, and a polar, charged tail on the other. Such molecules are prevalent in biology, e.g. the aromatic amino acids tryptophan (indole ring), phenylalanine and tyrosine in proteins, components of membranes and nucleic acids, and psychoactive molecules like dopamine, serotonin, LSD and DMT.
Oparin and Haldane proposed that the non-polar, 'hydrophobic' pi resonance electron clouds coalesced to avoid the aqueous environment ('oil and water don't mix'). The polar, water soluble tails would stick outwardly, forming a water soluble 'micelle' with a non-polar interior. These micelles somehow developed into functional cells, and then multi-cellular organisms, long before genes. But why would inanimate structures self-organize to perform purposeful complex functions, grow and evolve behaviors? And then, presumably, at some point, develop consciousness? Or was consciousness 'there all along'?
Mainstream science and philosophy assume that consciousness emerged at some point in the course of evolution, possibly fairly recently, with the advent of the brain and nervous systems. But Eastern spiritual traditions, panpsychism, and the Objective Reduction theory of Roger Penrose suggest that consciousness preceded life.
Back in the primordial soup, could light-induced proto-conscious moments have occurred by Penrose Objective Reduction in micelles? Did such moments provide a feedback fitness function to optimize primitive pleasure, sparking the origin of life and driving its evolution? Are similar events occurring in PAHs and organic rings throughout the universe?
Stuart Hameroff
4th May 2022
Get ready for the 'Big Crunch'! Scientists predict the universe could start SHRINKING 'remarkably' soon as dark energy weakens – but it won’t happen for at least 65 million years.
Experts say acceleration of universe may rapidly end in the next 65 million years
Could stop expanding within 100 million years and enter era of slow contraction
It could end with the death, or perhaps the rebirth, of time and space – scientists
And this could all happen 'remarkably' quickly, Princeton University experts said.
The Big Bang is widely accepted as being the start of everything we see around us.
But a new study suggests the beginning of the end could come 'remarkably' soon, when the universe's 13.8 billion years of expansion comes to an abrupt halt.
Researchers believe it could then enter an era of slow contraction that ends billions of years from now with the death – or possibly rebirth – of time and space.
In a new paper, Princeton University scientists used previous observations of cosmic expansion to try to model dark energy — a mysterious force which makes up around 70 per cent of the universe.
The repellent force seems to be causing the cosmos to expand ever faster, but the experts think its influence may now be weakening.
According to their model, the acceleration of the universe could rapidly end within the next 65 million years — then, within 100 million years, the universe could stop expanding altogether and begin shrinking.
This would cause a 'Big Crunch' and could all happen 'remarkably' quickly, according to study co-author Paul Steinhardt, director of the Princeton Center for Theoretical Science at Princeton University in New Jersey.
'Going back in time 65 million years, that's when the Chicxulub asteroid hit the Earth and eliminated the dinosaurs,' he told Live Science.
'On a cosmic scale, 65 million years is remarkably short.'
Albert Einstein introduced the cosmological constant – a uniform energy density that permeates space – to explain why the universe wasn't collapsing.
But cosmology underwent a paradigm shift in 1998 when researchers announced that the rate at which the universe was expanding had accelerated.
As a result, the cosmological constant had to be given a non-zero value, and many assumed this accelerating expansion would simply continue forever.
Scientists named the mysterious source of this acceleration dark energy.
An invisible entity, they believe it works contrary to gravity by pushing the universe's most massive objects farther apart rather than drawing them together.
That means that if it becomes weaker, gravity would then be the dominant force and would cause the universe to shrink, leading to colliding stars, galaxies and planets as the cosmos collapses in on itself.
Experts think it could weaken because, rather than being constant, dark energy may be something called quintessence — a dynamic field that changes over time.
'The question we're raising in this paper is, "Does this acceleration have to last forever?"' Steinhardt said.
'And if not, what are the alternatives, and how soon could things change?'
To test their theory, the researchers created a model of quintessence, showing its repellent and attractive power over time.
Once the model reliably reproduced the universe's expansion history, they extended its predictions into the future.
What they found was that dark energy could be in the midst of a rapid decline that potentially began billions of years ago, meaning the accelerated expansion of the universe is already slowing down.
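The mechanics of "reproduce the past, then run the model forward" can be illustrated with a deliberately stripped-down toy, sketched below in Python. This is not the Princeton group's model: it integrates a single quintessence-like scalar field with an invented linear potential in a flat universe (no matter or radiation, units chosen so that 8πG = 1, all parameter values made up). The field rolls downhill, the potential eventually goes negative, and the Hubble rate H falls through zero – the toy universe stops expanding and begins to contract.

```python
import math

# Toy quintessence in a flat universe, units 8*pi*G = 1:
#   phi'' + 3*H*phi' + V'(phi) = 0     (scalar field equation of motion)
#   H'    = -(phi')^2 / 2              (from Einstein's equations)
# Linear potential V(phi) = V0 - alpha*phi eventually goes negative.
V0, alpha = 1.0, 0.5  # invented parameters, illustration only

phi, phidot = 0.0, 0.0
H = math.sqrt((0.5 * phidot**2 + V0 - alpha * phi) / 3.0)  # Friedmann constraint

dt, t, halted = 1e-3, 0.0, False
while H > -0.5 and t < 100.0:
    phiddot = -3.0 * H * phidot + alpha   # -V'(phi) = +alpha for this potential
    phidot += phiddot * dt
    phi += phidot * dt
    H += -0.5 * phidot**2 * dt            # H can only decrease
    t += dt
    if not halted and H <= 0.0:
        halted = True
        print(f"expansion halts at t ~ {t:.1f} (model units)")

print(f"final H = {H:.2f}; a negative value means contraction")
```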
By SAM TONKIN FOR MAILONLINE
Bilayer graphene inspires two-universe cosmological model.
by Bailey Bedford, Joint Quantum Institute
Physicists sometimes come up with crazy stories that sound like science fiction. Some turn out to be true, like how the curvature of space and time described by Einstein was eventually borne out by astronomical measurements. Others linger on as mere possibilities or mathematical curiosities.
In a new paper in Physical Review Research, JQI Fellow Victor Galitski and JQI graduate student Alireza Parhizkar have explored the imaginative possibility that our reality is only one half of a pair of interacting worlds. Their mathematical model may provide a new perspective for looking at fundamental features of reality—including why our universe expands the way it does and how that relates to the most minuscule lengths allowed in quantum mechanics. These topics are crucial to understanding our universe and are part of one of the great mysteries of modern physics.
The pair of scientists stumbled upon this new perspective when they were looking into research on sheets of graphene—single atomic layers of carbon in a repeating hexagonal pattern. They realized that experiments on the electrical properties of stacked sheets of graphene produced results that looked like little universes and that the underlying phenomenon might generalize to other areas of physics. In stacks of graphene, new electrical behaviors arise from interactions between the individual sheets, so maybe unique physics could similarly emerge from interacting layers elsewhere—perhaps in cosmological theories about the entire universe.
"We think this is an exciting and ambitious idea," says Galitski, who is also a Chesapeake Chair Professor of Theoretical Physics in the Department of Physics. "In a sense, it's almost suspicious that it works so well by naturally 'predicting' fundamental features of our universe such as inflation and the Higgs particle as we described in a follow up preprint."
Stacked graphene's exceptional electrical properties and possible connection to our reality having a twin comes from the special physics produced by patterns called moiré patterns. Moiré patterns form when two repeating patterns—anything from the hexagons of atoms in graphene sheets to the grids of window screens—overlap and one of the layers is twisted, offset, or stretched.
The patterns that emerge can repeat over lengths that are vast compared to the underlying patterns. In graphene stacks, the new patterns change the physics that plays out in the sheets, notably the electrons' behaviors. In the special case called "magic angle graphene," the moiré pattern repeats over a length that is about 52 times longer than the pattern length of the individual sheets, and the energy level that governs the behaviors of the electrons drops precipitously, allowing new behaviors, including superconductivity.
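That factor of roughly 52 follows from simple geometry: two identical lattices of period a twisted by a small angle θ produce a moiré pattern of period L ≈ a / (2 sin(θ/2)). Plugging in the magic angle of about 1.1 degrees:

```python
import math

theta = math.radians(1.1)  # "magic angle" for twisted bilayer graphene

# Moire period relative to the underlying lattice period a:
# L / a = 1 / (2 * sin(theta / 2))
ratio = 1.0 / (2.0 * math.sin(theta / 2.0))
print(f"moire period ~ {ratio:.0f} times the lattice period")  # ~52
```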
Galitski and Parhizkar realized that the physics in two sheets of graphene could be reinterpreted as the physics of two two-dimensional universes where electrons occasionally hop between universes. This inspired the pair to generalize the math to apply to universes made of any number of dimensions, including our own four-dimensional one, and to explore if similar phenomena resulting from moiré patterns might pop up in other areas of physics. This started a line of inquiry that brought them face to face with one of the major problems in cosmology.
"We discussed if we can observe moiré physics when two real universes coalesce into one," Parhizkar says. "What do you want to look for when you're asking this question? First you have to know the length scale of each universe."
A length scale—or a scale of a physical value generally—describes what level of accuracy is relevant to whatever you are looking at. If you're approximating the size of an atom, then a ten-billionth of a meter matters, but that scale is useless if you're measuring a football field. Physics theories put fundamental limits on some of the smallest and largest scales that make sense in our equations.
The scale of the universe that concerned Galitski and Parhizkar is called the Planck length, and it defines the smallest length that is consistent with quantum physics. The Planck length is directly related to a constant—called the cosmological constant—that is included in Einstein's field equations of general relativity. In the equations, the constant influences whether the universe—outside of gravitational influences—tends to expand or contract.
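The Planck length itself is built from three fundamental constants – the reduced Planck constant ħ, Newton's constant G, and the speed of light c – via l_P = √(ħG/c³):

```python
import math

hbar = 1.055e-34  # reduced Planck constant, J*s
G = 6.674e-11     # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {planck_length:.2e} m")  # ~1.6e-35 m
```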
This constant is fundamental to our universe. So to determine its value, scientists, in theory, just need to look at the universe, measure several details, like how fast galaxies are moving away from each other, plug everything into the equations and calculate what the constant must be.
This straightforward plan hits a problem because our universe contains both relativistic and quantum effects. The effect of quantum fluctuations across the vast vacuum of space should influence behaviors even at cosmological scales. But when scientists try to combine the relativistic understanding of the universe given to us by Einstein with theories about the quantum vacuum, they run into problems.
One of those problems is that whenever researchers attempt to use observations to approximate the cosmological constant, the value they calculate is much smaller than they would expect based on other parts of the theory. More importantly, the value jumps around dramatically depending on how much detail they include in the approximation instead of homing in on a consistent value. This lingering challenge is known as the cosmological constant problem, or sometimes the "vacuum catastrophe."
"This is the largest—by far the largest—inconsistency between measurement and what we can predict by theory," Parhizkar says. "It means that something is wrong."
Since moiré patterns can produce dramatic differences in scales, moiré effects seemed like a natural lens to view the problem through. Galitski and Parhizkar created a mathematical model (which they call moiré gravity) by taking two copies of Einstein's theory of how the universe changes over time and introducing extra terms in the math that let the two copies interact. Instead of looking at the scales of energy and length in graphene, they were looking at the cosmological constants and lengths in universes.
Galitski says that this idea arose spontaneously when they were working on a seemingly unrelated project that is funded by the John Templeton Foundation and is focused on studying hydrodynamic flows in graphene and other materials to simulate astrophysical phenomena.
Playing with their model, they showed that two interacting worlds with large cosmological constants could override the expected behavior from the individual cosmological constants. The interactions produce behaviors governed by a shared effective cosmological constant that is much smaller than the individual constants. The calculation for the effective cosmological constant circumvents the problem researchers have with the value of their approximations jumping around because over time the influences from the two universes in the model cancel each other out.
"We don't claim—ever—that this solves cosmological constant problem," Parhizkar says. "That's a very arrogant claim, to be honest. This is just a nice insight that if you have two universes with huge cosmological constants—like 120 orders of magnitude larger than what we observe—and if you combine them, there is still a chance that you can get a very small effective cosmological constant out of them."
In preliminary follow up work, Galitski and Parhizkar have started to build upon this new perspective by diving into a more detailed model of a pair of interacting worlds—that they dub "bi-worlds." Each of these worlds is a complete world on its own by our normal standards, and each is filled with matching sets of all matter and fields. Since the math allowed it, they also included fields that simultaneously lived in both worlds, which they dubbed "amphibian fields."
The new model produced additional results the researchers find intriguing. As they put together the math, they found that part of the model looked like important fields that are part of reality. The more detailed model still suggests that two worlds could explain a small cosmological constant and provides details about how such a bi-world might imprint a distinct signature on the cosmic background radiation—the light that lingers from the earliest times in the universe.
This signature could possibly be seen—or definitively not be seen—in real world measurements. So future experiments could determine if this unique perspective inspired by graphene deserves more attention or is merely an interesting novelty in the physicists' toy bin.
"We haven't explored all the effects—that's a hard thing to do, but the theory is falsifiable experimentally, which is a good thing," Parhizkar says. "If it's not falsified, then it's very interesting because it solves the cosmological constant problem while describing many other important parts of physics. I personally don't have my hopes up for that— I think it is actually too big to be true."