In Part 2 of this series, we examine what your brain's activity really looks like. In the first part, I discussed the idea that we only use 10% of our brains.
I showed recent neuroscientific evidence that a substantial fraction of neurons have no function we can discern.
The proportion of unused neurons is probably less than 90%; I suggested it may be closer to 50%. But I still think the 10% figure is a useful reference point.
Why?
The 10% myth is helpful because there is a subtler and more consequential sense in which it is closer to the truth than 100%. It has to do with a phrase I used in the previous post: brain usage over time. This brings us to an important concept called sparseness, a term coined by Cornell neuroscientist David Field to describe large-scale patterns of neural activity.
Sparseness
Sparseness is about patterns that go beyond the mere sum of activity. It is really about how activity is distributed across neurons and over time.
In a sparse system, a few units in the population are very active at a time, while the rest are quiet. Over time, most units participate. This pattern of activity would be said to have high population sparseness. In contrast, a system that uses most units at least a little bit at any given moment would have low population sparseness.
The same framework is used for characterizing single neurons over time. In this case, high lifetime sparseness means that a neuron is only active in rare bursts of high activity, while low lifetime sparseness would indicate that a neuron is active at a low level almost all the time.
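To make these two measures concrete, here is a minimal Python sketch using a simple "fraction active" proxy. The toy activity matrix and the activity threshold are invented for illustration; they are not taken from any of the studies discussed here.

```python
import numpy as np

# Toy activity matrix: rows are neurons, columns are time bins.
# The values and the threshold below are illustrative, not real data.
rng = np.random.default_rng(0)
activity = rng.exponential(scale=1.0, size=(100, 500)) * (rng.random((100, 500)) < 0.2)

threshold = 0.5  # call a unit "active" in a bin if it exceeds this level
active = activity > threshold

# Population sparseness (proxy): fraction of neurons active in each time bin.
# Small values mean few neurons are active at once.
print("mean fraction of neurons active at a time:", active.mean(axis=0).mean())

# Lifetime sparseness (proxy): fraction of time bins in which each neuron is active.
# Small values mean each neuron fires only in rare bursts.
print("mean fraction of time a neuron is active:", active.mean(axis=1).mean())
```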
It’s helpful here to make an analogy with written language. We can communicate a given thought by writing it in English or, more or less equivalently, in Chinese. Yet the coding systems of these two languages are quite different. To express almost any idea in written English, we would use nearly all of the coding units (the 26 letters) at least once. If letters stand in for neurons, this is like using all the neurons in the brain at a low level for a given task. We can say that a system like written English has low sparseness.
In Chinese, only a handful of the thousands of characters would be needed to express the same thought. Each character’s use would be emphatic since characters come in a wide variety of quite distinct forms, and most are not repeated in a single thought. This is like using a few neurons at a time at a high level. Chinese, then, corresponds to high sparseness.
Neither system is inherently better than the other—both have advantages and disadvantages. What we want to know is, which written language is most like the pattern of activity in the brain?
Through empirical and theoretical research over the last 35 years, it has become clear that the brain’s activity is quite sparse.
Why is the brain’s activity so sparse?
It behaves this way for two main reasons.
First, it is a consequence of the way neurons work: the biochemical and biophysical rules that govern their operation.
This is the case both across neural populations and over time. Limitations on blood flow—which restrict how much energy can be delivered to many neurons—push the system toward high population sparseness. In the time domain, neural machinery is fundamentally based around emitting a brief, sharp burst of pent-up energy, then spending longer periods recharging for another burst.
Neurons can’t be just a little active most of the time. The best theoretical estimates of the situation, though they rest on substantial assumptions and simplifications of a very complex system, converge on a limit of about 10% usage in the mammalian brain (1, 2, 3). Sound familiar?
Second, sparseness is demanded because the world is sparse.
In our environment—and, correspondingly, in our heads—things happen in bursts, rather than at a low level all the time. This includes events, objects, attention, action, meaning, and decisions.
David Field, working with UC Berkeley neuroscientist Bruno Olshausen, followed this line of thinking in a landmark 1996 paper about sparseness in the visual system. They used computational models to show that our brains assume sparse structure in the spatial patterns of the world around us.
Essentially, they built a computational system that was forced to learn a sparse code. The code or “alphabet” is much like a written language, except it encodes pictures (small chunks of natural images) rather than words. What they found was that the visual alphabet that the computer model learned strongly resembled the alphabet used in our visual system.
In other words, in a system trained to simply be sparse, the brain’s basic strategy for analyzing the visual world pops out “for free” and without being explicitly pre-programmed. As it turns out, sparse coding also has strong connections to the current “deep learning” revolution in artificial intelligence, which I will discuss in future posts.
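For readers who want to see the flavor of such a model, here is a minimal Python sketch of the general sparse coding idea. It uses scikit-learn's dictionary learner rather than Olshausen and Field's original algorithm, and random patches stand in for the natural-image patches that the original result requires.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Each row of `patches` would be a small natural-image patch (e.g., 8x8 pixels,
# flattened to 64 numbers). Random data is used here only so the sketch runs.
rng = np.random.default_rng(0)
patches = rng.standard_normal((1000, 64))
patches -= patches.mean(axis=1, keepdims=True)  # remove each patch's mean

# Learn a "visual alphabet" under an L1 sparsity penalty: each patch must be
# reconstructed from only a few active dictionary elements at a time.
learner = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = learner.fit_transform(patches)

# With real natural-image patches, the learned elements (learner.components_)
# come out localized and oriented, much like receptive fields in visual cortex.
print("fraction of nonzero coefficients:", np.mean(codes != 0))
```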
Getting back to the recent study led by Saskia de Vries of the Allen Institute, which I mentioned in the first post, one of the researchers’ main goals was to measure sparseness. With this information, we can estimate what proportion of the brain is active at the same time.
Across the visual areas they sampled, and in different layers of the cortex, the de Vries data imply that around 20% of neurons are typically active at the same time. Although about three-quarters of visual neurons respond regularly, only about one in five is active at any given moment. And over time, individual neurons were active during only about 20% of the length of the recordings. Clearly, these values are closer to the 10% figure I suggested as a rule of thumb than they are to the 100% myth.
What does all this have to do with your internet brain? The key is that the internet is also sparse. It is active in bursts, both across the network at a given time and within communication channels over time.
You can get a sense of this activity by monitoring your computer’s network traffic. Macintosh users can open the application Activity Monitor (in the Utilities folder) and examine the Network tab. You will see a plot of message chunks (packets) sent and received by your computer over time. Unless you are using a great deal of bandwidth, the trace will generally appear sparse and bursty.
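For readers who prefer a script to Activity Monitor, a rough equivalent can be written in a few lines of Python. This sketch assumes the third-party psutil package is installed; the one-second sampling interval and the bar scaling are arbitrary choices.

```python
import time
import psutil  # third-party package; install with `pip install psutil`

# Sample total bytes sent and received once per second and print the change,
# giving a crude text version of Activity Monitor's network trace.
prev = psutil.net_io_counters()
for _ in range(30):
    time.sleep(1)
    now = psutil.net_io_counters()
    delta = (now.bytes_sent - prev.bytes_sent) + (now.bytes_recv - prev.bytes_recv)
    prev = now
    print(f"{delta:>10d} B/s " + "#" * min(60, delta // 2048))
```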
Brains and the internet share sparse operating conditions. But we can go further. Another lesson of the internet metaphor for the brain is that occasional brief signals are crucial for making the system work. We are naturally attracted to strong, consistent signals in the brain, and to the bright flashes when the brain “lights up.” But following the internet metaphor, short spurts of activity are also crucial.
On the internet, there are a variety of brief signal bursts that allow routers to stay in contact. Importantly, these signal bursts don’t carry any message content. The signals include ACKs or acknowledgments, which tell a sending router that a set of messages was received at its destination. Routers also periodically send keep-alives, which are small messages that tell network neighbors that a router is ready to transmit messages.
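As a small illustration of the keep-alive idea, the sketch below enables TCP keep-alive on an ordinary Python socket; the operating system then sends tiny, content-free probes to confirm the peer is still reachable (ACKs are likewise generated automatically by the TCP stack, not by the application). The target host is arbitrary, and the script needs network access to run.

```python
import socket

# Enable TCP keep-alive: the OS will periodically send small, payload-free
# probes on this connection just to confirm the other end is still reachable.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.connect(("example.com", 80))  # arbitrary target for illustration
print("keep-alive enabled:", bool(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)))
sock.close()
```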
If the brain used a similar strategy, these kinds of tiny messages would be missed or misunderstood if we were only concerned with situations where lots of neurons are active together. Like the internet, many signals in the brain probably relate to keeping the communication system working, rather than performing a specific task or behavior. It is about keeping the whole brain network “reachable,” which is something the internet excels at.
The brain has the same limitations on activity as the internet—in both systems, we can’t have all components active at once, even at a low level.
Crucially, both systems are fundamentally designed to pass messages across a vast and highly interconnected network.
I explore this parallel between the internet and the brain (and many more) in my new book, An Internet in Your Head. I will discuss some of these parallels in future posts.
So what level of sparseness should we aim for?
Though the de Vries study is a big step forward, we are still a long way from having a good picture of large-scale brain activity in our own brains. At present, we can’t say what the ideal value of sparseness would be for humans, let alone how to achieve it. The most we can say now is that some substantial degree of sparseness is required.
But perhaps less is more? I’ll conclude with a short passage from An Internet in Your Head:
David Field, who was my Ph.D. advisor, has taken the principle of sparse coding to heart, or to brain as it were. Given the limit of 10 percent or fewer cells highly active at a time in the brain, David likes to joke that he is trying to get his personal total down to 5 percent. He may be on to something.
References
- Attwell, D., & Laughlin, S. B. (2001). An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow & Metabolism, 21(10), 1133-1145.
- de Vries, S. E., Lecoq, J. A., Buice, M. A., Groblewski, P. A., Ocker, G. K., Oliver, M., … & Koch, C. (2020). A large-scale standardized physiological survey reveals functional organization of the mouse visual cortex. Nature Neuroscience, 23(1), 138-151.
- Lennie, P. (2003). The cost of cortical computation. Current Biology, 13(6), 493-497.
- Levy, W. B., & Baxter, R. A. (1996). Energy efficient neural codes. Neural Computation, 8(3), 531-543.
- Olshausen, B. A., & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583), 607-609.
Written by: Daniel Graham, Ph.D., author of An Internet in Your Head: A New Paradigm for How the Brain Works. Originally appeared on Psychology Today. Republished with permission.