“The idea is definitely crazy, but is it crazy enough to be true? That remains to be seen.”

**Key takeaways**

- Physicist Vitaly Vanchurin suggests that the entire universe operates like a massive neural network, potentially explaining fundamental physical phenomena.
- Vanchurin believes that neural networks can reconcile quantum mechanics and general relativity, two major theories in physics that are currently not unified.
- To disprove Vanchurin’s theory, one would need to find a physical phenomenon that cannot be described by a neural network, a task made difficult by our limited understanding of neural networks and machine learning.
- The theory posits that stable structures within this neural network survive and evolve, leading to the complexity observed in particles, atoms, cells, and possibly life itself.
- While Vanchurin’s theory suggests we live in a neural network, it does not mean we’re in a simulation, though distinguishing between the two might be impossible.

It isn’t every day that we come across a document that seeks to redefine reality.

However, in a provocative preprint posted to arXiv this summer, Vitaly Vanchurin, a physics professor at the University of Minnesota Duluth, attempts to reframe reality in a particularly eye-opening way, claiming that we live inside a massive neural network that governs everything around us. In other words, he writes in the paper that there’s a “possibility that the entire universe on its most fundamental level is a neural network.”

For years, physicists have tried to reconcile quantum mechanics with general relativity. The former asserts that time is universal and absolute, whereas the latter contends that time is relative and woven into the fabric of space-time.

In his research, Vanchurin claims that artificial neural networks may “exhibit approximate behaviors” of both universal theories. Because quantum mechanics “is a remarkably successful paradigm for modeling physical phenomena on a wide range of scales,” according to him, “it is widely believed that on the most fundamental level the entire universe is governed by the rules of quantum mechanics and even gravity should somehow emerge from it.”

“We are not just saying that the artificial neural networks can be useful for analyzing physical systems or for discovering physical laws, we are saying that this is how the world around us actually works,” according to the paper’s discussion. “With this respect it could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong.”

The proposal is so audacious that the majority of physicists and machine learning experts we contacted declined to speak on the record, citing reservations about the paper’s results. However, in a Q&A with Futurism, Vanchurin delved into the topic – and revealed more about his notion.

**Futurism: Your paper contends that the cosmos may be fundamentally a neural network. How would you communicate your logic to someone who knows nothing about neural networks or physics?**

**Vitaly Vanchurin**: There are two possible answers to your question.

The first approach is to start with a precise model of neural networks and then to study the behavior of the network in the limit of a large number of neurons. What I have shown is that the equations of quantum mechanics describe quite well the behavior of the system near equilibrium, and the equations of classical mechanics describe pretty well how the system behaves farther away from equilibrium. Coincidence? Perhaps, but as far as we know, quantum and classical mechanics accurately describe how the physical world operates.
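Loosely speaking, the training of a network can itself be viewed as a dynamical system relaxing toward equilibrium. The following toy sketch (generic gradient descent on a one-parameter loss, not Vanchurin’s actual model) illustrates the distinction: update steps are large far from equilibrium and shrink as the minimum is approached.

```python
# Toy sketch, not Vanchurin's model: gradient descent as a dynamical
# system relaxing toward equilibrium. A single weight w is pulled toward
# the minimum of the loss L(w) = (w - 2)^2; far from the minimum the
# updates are large, near it they become small.

def train(w0, lr=0.1, steps=50):
    """Run gradient descent on L(w) = (w - 2)^2 and return the trajectory."""
    w = w0
    path = [w]
    for _ in range(steps):
        grad = 2.0 * (w - 2.0)  # dL/dw
        w -= lr * grad
        path.append(w)
    return path

path = train(w0=10.0)
first_step = abs(path[1] - path[0])   # large: far from equilibrium
last_step = abs(path[-1] - path[-2])  # tiny: near equilibrium
```

In this picture, the “near equilibrium” regime is where the steps become vanishingly small, which is the regime Vanchurin associates with quantum-like behavior.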

The second option is to start with physics. We know that quantum mechanics works well on small scales and general relativity works well on large scales, but we have yet to integrate the two theories into a coherent framework. This is known as the “quantum gravity problem.” Clearly, we are missing something important, and to make matters worse, we have no idea how to deal with observers. This is referred to as the measurement problem in quantum mechanics and the measure problem in cosmology, respectively.

Then one may argue that three phenomena must be unified: quantum mechanics, general relativity, and observers. Ninety-nine percent of physicists would agree that quantum mechanics is the most fundamental, and that everything else should somehow emerge from it, but nobody knows exactly how. In this study, I consider another possibility: that a microscopic neural network is the fundamental structure, from which everything else originates, including quantum mechanics, general relativity, and macroscopic observers. So far, things look promising.

**What prompted you to come up with this idea?**

First, I wanted to better understand how deep learning works, so I wrote a paper titled “Towards a theory of machine learning.” The basic goal was to use statistical mechanics methods to analyze neural network behavior, but it turned out that, in certain limits, the learning (or training) dynamics of neural networks are extremely similar to the quantum dynamics seen in physics. At the time, I was (and still am) on sabbatical and decided to investigate the idea that the physical universe is a neural network. The concept is undoubtedly strange, but is it strange enough to be true? That remains to be seen.

**In the paper you said that to prove the theory wrong, “all that is needed is to find a physical phenomenon which cannot be described by neural networks.” What exactly do you mean by that? Why is it “easier said than done”?**

Well, there are numerous “theories of everything,” and most of them must be wrong. In my theory, everything you see around you is a neural network, and so to prove it wrong all that is needed is to find a phenomenon which cannot be described by a neural network. But if you think about it, that is a really difficult task, partly because we know so little about how neural networks behave and how machine learning actually works. That is why I tried to develop a theory of machine learning in the first place.

The idea is definitely crazy, but is it crazy enough to be true? That remains to be seen.

**How does your study connect to quantum physics, and does it deal with the observer effect?**

There are two main schools of thought on quantum mechanics: Everett’s (or many-worlds) interpretation and Bohm’s (or hidden variables) interpretation. I don’t have anything new to say regarding the many-worlds interpretation, but I believe I can offer something to hidden variable theories. In the emergent quantum mechanics model I studied, the hidden variables are the states of individual neurons, whereas the trainable variables (such as the bias vector and weight matrix) are the quantum variables. It is worth noting that the hidden variables can be highly non-local, hence violating Bell’s inequalities. An approximate space-time locality is expected to emerge, although strictly speaking every neuron can be connected to every other neuron, and so the system need not be local.
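For readers unfamiliar with the terminology, the split between hidden and trainable variables maps onto an ordinary feed-forward layer: the neuron activations are the state (hidden) variables, while the weight matrix and bias vector are the trainable ones. A minimal NumPy sketch (purely illustrative; the variable names and sizes are mine, not from the paper):

```python
import numpy as np

# Illustrative sketch only: in a standard feed-forward layer the *state*
# of the neurons (activations x, h) is distinct from the *trainable*
# variables (weight matrix W and bias vector b). In Vanchurin's proposal
# the neuron states play the role of hidden variables, while W and b
# behave as the quantum (trainable) variables.

rng = np.random.default_rng(0)

W = rng.normal(size=(3, 4))   # trainable: weight matrix
b = rng.normal(size=3)        # trainable: bias vector
x = rng.normal(size=4)        # hidden: state of the input neurons

# One step of the network dynamics: each output neuron's state depends on
# every input neuron, so the couplings need not be spatially local.
h = np.tanh(W @ x + b)        # hidden: state of the next layer
```

The dense weight matrix `W` is what makes the “non-local” remark concrete: nothing in the architecture restricts a neuron to interacting only with nearby neurons.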

**Would you mind expanding on how this idea connects to natural selection? How does natural selection influence the evolution of complex structures/biological cells?**

What I’m saying is really simple. Some structures (or subnetworks) in the microscopic neural network are more stable, whereas others are unstable. The more stable structures would survive the evolution, and the less stable structures would die out. On the smallest scales I expect that natural selection should produce some very low-complexity structures such as chains of neurons, but on larger scales the structures would be more sophisticated. I don’t see why this process should be confined to a particular length scale, so the claim is that everything we see around us (e.g. particles, atoms, cells, observers, etc.) is the outcome of natural selection.

**I was intrigued by your first email when you said you might not understand everything yourself. What did you mean by that? Were you referring to the complexity of the neural network itself, or to something more philosophical?**

Yes, I was referring only to the complexity of the neural networks. I have not even had time to think about what the philosophical implications of the results might be.

**I need to ask: would this theory mean we’re living in a simulation?**

No, we live in a neural network, but we might never know the difference.