Thanks to technological advances, massive amounts of biological information are now at scientists' disposal, but much of it goes unanalyzed because researchers lack the tools to process such large, interconnected data networks. However, researchers say they've found a way to decode one such system — the neural network — by using a new model to locate the brain's so-called edge of chaos.
By identifying this moment — a critical transition point between randomness and order — in neural networks, the authors of a study published Friday in Nature Communications say scientists can decode the way brains work, and perhaps even predict animal behavior patterns, making use of the abundance of priceless neural network data at their fingertips.
Data is often described as having surpassed oil to become the world's most valuable resource. It's produced and stored in unimaginable amounts every second. But data can also be disorderly, intertwined and continuously changing; this is often the case for biological information. Such networks of data are extremely difficult to analyze, a problem that's especially evident when examining neural networks.
“In neuroscience, people often think of brains as computers that process inputs into outputs, but brains are a closed loop, and we really try to focus on them not as a static machine, but as a dynamic system,” Miguel Aguilera, the study's lead author, told The Academic Times. “I think that it's very important to see biological systems or brains as things in continuous interaction with their environment.”
Along with colleagues from Kyoto University, Aguilera, a researcher from the University of Sussex's department of informatics, designed a way to understand ever-changing brain patterns by using models to locate the particular moment in a neural system that scientists call the edge of chaos. This long-standing concept holds that certain systems on Earth — for example, plant evolution and wind dynamics — thrive when they sit exactly on the line between order and disorder.
This is a balance that brains must maintain, Aguilera explained: They have to be incredibly receptive to random external stimuli, but also coherent enough to manage thoughts related to those stimuli. If the balance is off, exposure to new information might trigger a seizure, because the lack of order could cause a cascade of neuron signals.
“Brains make that [balance] by finding this special region of behavior, which combines order and chaos. It combines integration and sensitivity to perturbations,” he said.
If this edge-of-chaos moment in a neural network is isolated, scientists would be able to scrutinize the data from that moment and make simulations based on the information, the study says. The simulations would be able to predict behaviors of the neural network, and by extension, the brain.
That could shed light on the mysterious process of how neurons interact. It also has the potential to deepen understanding of how the brain might respond in various situations, such as in emotional responses to certain conversations, or physical reactions to harsh environmental changes.
The brain is an excellent example of unused analytic potential, Aguilera believes. He notes that current data-science literature fails to accommodate dynamic network systems, even though their data points are already available.
“We believe that we don't yet have adequate tools to process the massive amount of data we can get from recording the neurons of living animals,” Aguilera said. “The methods we have are not really compatible with brains.”
Scientists are accustomed to analyzing large data systems by simplifying them into average quantities. This approach, called mean-field theory, is a popular method among physicists for studying the behavior of materials at equilibrium. But because traditional mean-field methods assume a static system whose statistics do not change over time (a fixed list of phone numbers, for example), scientists hit a roadblock when dissecting points in transformative data systems like neural networks.
“It’s very difficult to compute how all the parts of a system are interacting, but [physicists] can compute how they behave on average,” Aguilera said. “What we did is try to develop specific mean-field methods that can cope with these special regimes of behavior.”
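The averaging idea Aguilera describes can be illustrated with the textbook version of mean-field theory, where every unit in a large system is replaced by a single self-consistent average. The sketch below is a generic illustration of that idea, not the paper's method; the function name and parameter values are invented for the example.

```python
import math

def mean_field_magnetization(beta, coupling, neighbors, iters=200):
    """Solve the classic mean-field self-consistency equation
    m = tanh(beta * coupling * neighbors * m) by fixed-point iteration.
    Instead of tracking every unit, the whole system is summarized
    by one average value m."""
    m = 0.5  # arbitrary initial guess
    for _ in range(iters):
        m = math.tanh(beta * coupling * neighbors * m)
    return m

# At strong coupling the average settles at a nonzero, ordered value;
# at weak coupling the order is lost and m shrinks toward zero.
ordered = mean_field_magnetization(beta=1.0, coupling=1.0, neighbors=2)
disordered = mean_field_magnetization(beta=0.2, coupling=1.0, neighbors=2)
```

The limitation the researchers point to is visible here: the calculation returns one number that never changes once it converges, so it says nothing about a system whose behavior keeps evolving in time.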
Along with his team, Aguilera formulated a way of adjusting conventional mean-field methods to be applicable for systems in flux, namely the brain. In addition to incorporating the overall average of the data set, his method also includes the average correlations between points.
Applied to the brain, his models, referred to as asymmetric kinetic Ising models, could compute both the average activity of each neuron and the average correlation between each pair of neurons. That could allow researchers to observe variations in activity and to identify the edge of chaos, as indicated by specific fluctuations. These fluctuations are not a standard component of Ising models, which are normally used to detect equilibrium phase transitions — a magnet abruptly losing its magnetization as it heats up, for example.
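A minimal simulation can show what "asymmetric" and "kinetic" mean here, and why both the averages and the correlations matter. The sketch below is a toy illustration assuming standard Glauber-style update dynamics, not a reproduction of the study's mean-field equations; the coupling values and function name are invented for the example.

```python
import math
import random

def simulate_kinetic_ising(J, steps=5000, seed=0):
    """Simulate a kinetic Ising model: at each time step every unit takes
    state +1 or -1 with a probability set by a weighted sum of the previous
    states. The coupling matrix J need not be symmetric, so the system never
    settles into equilibrium; it is a driven, ever-changing process.
    Returns the average activity of each unit and the delayed correlations
    between each pair of units."""
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    mean = [0.0] * n                       # running average activity per unit
    corr = [[0.0] * n for _ in range(n)]   # average of s_i(t+1) * s_j(t)
    for _ in range(steps):
        h = [sum(J[i][j] * s[j] for j in range(n)) for i in range(n)]
        new_s = [1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * h[i])) else -1
                 for i in range(n)]
        for i in range(n):
            mean[i] += new_s[i] / steps
            for j in range(n):
                corr[i][j] += new_s[i] * s[j] / steps
        s = new_s
    return mean, corr

# An asymmetric coupling: unit 0 drives unit 1, but not the other way around.
J = [[0.0, 0.0],
     [0.8, 0.0]]
mean, corr = simulate_kinetic_ising(J)
```

In this toy run the average activities hover near zero, so averages alone reveal nothing; the asymmetry only shows up in the correlations, where unit 1 tracks unit 0's past state but not vice versa. That is the kind of information a mean-field method restricted to averages would miss.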
“It's more accurate, because it can account for the average activity of the system, but can also account for the fluctuations in the system,” Aguilera explained. “It’s promising that we can develop specific models at the edge of chaos that can capture these fluctuations.”
Beyond neural networks, Aguilera highlights that his method can be applied to a variety of other projects, like helping artificial intelligence learn the way humans do. If simulations of the edge of chaos can be constructed, scientists could potentially run them along with artificial intelligence, making the machines’ thought processes closer to a human's.
“The model we use can be used for modeling biological networks or data, but they're also models heavily used in machine learning or deep learning applications,” Aguilera said.
The team's next step is to use its calculations to study the neural patterns of live zebrafish, because current neural-recording studies typically rely on unmoving, anesthetized animals.
The paper, “A unifying framework for mean-field theories of asymmetric kinetic Ising systems,” was authored by Miguel Aguilera, University of Sussex; and S. Amin Moosavi and Hideaki Shimazaki, Kyoto University.