Critical Sensation

by Cosma Shalizi on July 24, 2006

First off, I should thank Henry and the rest of the Timberites for the kind invitation to guest-post, and that very warm introduction. In exchange, I’m going to blog more or less as I usually would, only here. This means some big bricks of posts about “complex systems”, so called, which is or was my scientific field, more or less; and also any miscellaneous outrages which catch my eye this week. Mounting my usual hobby-horses on this stage is a poor exchange for their generosity, but mounting hobby-horses is why I started blogging in the first place, and anyway I’m big on conscienceless nomothetic exploitation of cooperators.

Today I want to talk (below the fold) about some recent work in the statistical mechanics of disordered systems, which might help explain how our sense organs work, and actually involves some good uses of self-organized criticality and power laws; tomorrow or the day after I’ll get to the smoldering question of “Why Oh Why Can’t We Have Better Econophysics?”

Folklore says that the dark-adapted human eye can detect a single photon; this isn’t quite true, but we can consciously detect a few tens of photons, and some species are that sensitive. Of course, we can see not only in the dark but also during broad daylight, but then the number of photons falling on every part of the retina is huge; the eye isn’t overwhelmed and saturated, though now one or ten photons more or less makes no discernible difference. In the jargon, the eye, and the other sensory organs, have both a large “dynamic range” (we can see in the dark and in the daylight) and “nonlinear response” (changes which are noticeable in the dark make no difference against a high-intensity background). Some version of these facts, including the basic (power-law) form of the relationship between physical stimulus intensity and perceived sensory magnitude, has been known since the nineteenth century. This makes it all the more puzzling that individual sensory neurons show a linear response over a narrow dynamic range, beyond which they saturate.

You could evade this difficulty by having lots of neurons with different operating ranges, so that raising stimulus intensity saturated some but activated others. The problem is that there doesn’t seem to be that wide a spectrum of operating ranges for individual neurons. In a recent paper, Osame Kinouchi and Mauro Copelli (who blog together at Semciência) offer another way, which has to do with the way sensory neurons interact with each other in a network.

Osame Kinouchi and Mauro Copelli, “Optimal dynamical range of excitable networks at criticality”, Nature Physics 2 (2006): 348–351; free preprint version at q-bio.NC/0601037 *

Abstract: A recurrent idea in the study of complex systems is that optimal information processing is to be found near phase transitions. However, this heuristic hypothesis has few (if any) concrete realizations where a standard and biologically relevant quantity is optimized at criticality. Here we give a clear example of such a phenomenon: a network of excitable elements has its sensitivity and dynamic range maximized at the critical point of a non-equilibrium phase transition. Our results are compatible with the essential role of gap junctions in olfactory glomeruli and retinal ganglionar cell output. Synchronization and global oscillations also emerge from the network dynamics. We propose that the main functional role of electrical coupling is to provide an enhancement of dynamic range, therefore allowing the coding of information spanning several orders of magnitude. The mechanism could provide a microscopic neural basis for psychophysical laws.

Neurons, like muscle cells, are “excitable”, in that the right stimulus will get them to suddenly expend a lot of energy in a characteristic way — muscle cells twitch, and neurons produce an electrical current called an action potential or spike. Kinouchi and Copelli use a standard sort of model of an excitable medium of such cells, which distinguishes between the excited state, a sequence of “refractory” states where the neuron can’t spike again after it’s been excited, and a resting or quiescent state where the right input could get it to fire. (These models have a long history in neurodynamics, the study of heart failure, cellular slime molds, etc.) Normally, in these models the cells are arrayed in some regular grid, and the probability that a resting cell becomes excited goes up as it has more excited neighbors. This is still true in Kinouchi and Copelli’s model, only the arrangement of cells is now a simple random graph. Resting cells also get excited at a steady random rate, representing the physical stimulus.
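To make the setup concrete, here is a minimal Python sketch of a model in this family. The state encoding, parameter names, and the simple random neighbor lists are my own choices for illustration, not code from the paper:

```python
import random

def simulate(n=500, k=10, sigma=1.0, n_states=5, rate=0.005,
             steps=200, seed=0):
    """One run of an excitable network in the Kinouchi-Copelli style.

    States: 0 = resting, 1 = excited (spiking), 2..n_states-1 = refractory.
    Each cell listens to k random neighbors; an excited neighbor fires a
    resting cell with probability sigma/k, so on average one excited cell
    excites sigma others -- sigma is the branching ratio.  `rate` is the
    per-step probability of external (stimulus-driven) excitation.
    Returns the mean fraction of excited cells over the run.
    """
    rng = random.Random(seed)
    # Simple random graph: each cell draws k random input neighbors.
    neighbors = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    state = [0] * n
    p = sigma / k                       # per-link transmission probability
    activity = []                       # fraction of excited cells per step
    for _ in range(steps):
        new = [0] * n
        for i in range(n):
            if state[i] == 0:
                # External stimulus, or excitation by an excited neighbor.
                if rng.random() < rate or any(
                        state[j] == 1 and rng.random() < p
                        for j in neighbors[i]):
                    new[i] = 1
            elif state[i] < n_states - 1:
                new[i] = state[i] + 1   # advance through refractory states
            # else: refractory period over, back to resting (stays 0)
        state = new
        activity.append(sum(1 for s in state if s == 1) / n)
    return sum(activity) / len(activity)
```

With the branching ratio below one, the mean activity just tracks the weak external drive; above one, the network lights up and sustains activity on its own, which is exactly the sub- versus super-critical distinction discussed below.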

Kinouchi and Copelli argue that the key quantity in their model is how many cells are stimulated into firing, on average, by a single excited cell. If this “branching ratio” is less than one, an external stimulus will tend to produce a small, short-lived burst of excitation, and there will be no spontaneous activity; the system is sub-critical. If the branching ratio is greater than one, outside stimuli produce very large, saturating waves of excitation, and there’s a lot of self-sustained activity, making it hard to use a super-critical network as a detector. At the critical point, however, where each excited cell produces, on average, exactly one more excited cell, waves of excitation eventually die out, but they tend to be very long-lived, and in fact their distribution follows a power law.

(People who teach courses on random processes are very fond of branching processes, because the basic model can be solved exactly with hundred-year-old math, but there are endless ramifications, and some of the applications are very technically sweet. Like most mathematical scientists, Kinouchi has certain tools he tends to return to, and critical branching processes are one of them.)
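The hundred-year-old math in question is the Galton–Watson extinction calculation: the probability that a cascade of excitation eventually dies out is the smallest fixed point of the offspring probability generating function, which simple iteration finds. A sketch, assuming (my choice, for concreteness) Poisson-distributed offspring with mean equal to the branching ratio:

```python
import math

def extinction_probability(m, iters=200):
    """Probability that a branching process with Poisson(m) offspring
    eventually dies out: the smallest root of q = exp(m*(q - 1)),
    found by iterating the generating function starting from q = 0."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(m * (q - 1.0))
    return q
```

For a branching ratio at or below one the answer is 1 (every burst eventually dies out), while above one it drops below 1 and some bursts run away forever — the sub-/super-critical distinction in one line of algebra.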

As Kinouchi and Copelli say in their abstract, the idea that the critical point, where things are just about to go unstable, is a useful place for processing or transmitting information is a persistent theme of complex systems. (You could, arguably, even trace a version of the idea back to William James’s Principles of Psychology.) It has also, before this, been one of the weakest of our ideas. The original work from the 1980s on “evolving to the edge of chaos” has proved impossible to replicate; I would even say experimentally refuted. (Why the phrase and idea continue to propagate is another question for another time.) Stu Kauffman‘s studies of models of gene regulatory networks certainly suggest that information moves through these networks most easily near their critical point, but I don’t think anyone has done a careful information-theoretic analysis of that. In any case, E. coli doesn’t care about the bandwidth of its regulatory network: it cares about reliably making lactase when it only has lactose to eat, i.e., specific adaptive functions. Prior to this, I can only think of one situation where the idea has been made precise and has strong evidence to back it up (namely, this paper), but that’s a purely mathematical exercise of no biological relevance.

What Kinouchi and Copelli have done is very different: they’ve actually identified something biologically important which is maximized at the critical branching ratio, namely the dynamic range. The network as a whole responds to the stimulus, and its dynamic range can be many orders of magnitude wider than that of its component cells. It is this enhancement which is maximized at the critical branching ratio, and falls off sharply for networks which are even a little sub- or super- critical. As a bonus, the shape of the response function is of the correct power-law form, though, in their model, the exact exponent isn’t right. Modifying the network structure, or some model details, changes the exponent, but the dynamic range is still sharply peaked at the critical branching ratio.
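The mean-field version of this maximization fits in a few lines. Everything below is my own minimal parameterization for illustration (n states per cell, K inputs per cell, the conventional 10%–90% definition of dynamic range), not code from the paper, but in this approximation the dynamic range, measured in decades of stimulus intensity, does peak at branching ratio sigma = 1:

```python
import math

def steady_state(r, sigma, n=5, K=10):
    """Stationary excited fraction rho of the mean-field map
    rho -> (1 - (n-1)*rho) * (1 - (1-r)*(1 - sigma*rho/K)**K),
    i.e. (resting fraction) x (chance a resting cell fires),
    located by bisection on the fixed-point condition."""
    def g(rho):
        resting = 1.0 - (n - 1) * rho
        fire = 1.0 - (1.0 - r) * (1.0 - sigma * rho / K) ** K
        return resting * fire - rho
    lo, hi = 0.0, 1.0 / (n - 1)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def dynamic_range(sigma, n=5, K=10):
    """Delta = 10*log10(r90/r10): how many decades of stimulus are
    mapped onto the middle 10%-90% of the network's response."""
    f0 = steady_state(0.0, sigma, n, K)     # spontaneous activity
    fmax = steady_state(1.0, sigma, n, K)   # saturated response
    def r_at(x):   # stimulus rate producing response level x of the range
        target = f0 + x * (fmax - f0)
        lo, hi = 1e-15, 1.0
        for _ in range(80):
            mid = math.sqrt(lo * hi)        # bisect on log(r)
            if steady_state(mid, sigma, n, K) < target:
                lo = mid
            else:
                hi = mid
        return math.sqrt(lo * hi)
    return 10.0 * math.log10(r_at(0.9) / r_at(0.1))
```

Sweeping sigma over, say, 0.5 to 1.5 shows the dynamic range rising to a sharp maximum at the critical branching ratio and falling off on either side, just as in the paper.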

There are a lot of other nice things about this paper, which I won’t get into, lest I repeat it all, but I will point out one thing: while their central qualitative results are pretty robust to small tweaks, there are some details of their model which make it a fair caricature of some excitable media, but not all. These are quite deliberately matched to properties of the olfactory system and the retina, but wouldn’t work in, say, the cortex, where the dynamics of excitation are different. So this isn’t an “over-universal” model, but one of particular phenomena produced by particular mechanisms. In fact, looking at olfaction, they are able to make a prediction about the effects of knocking out specific genes which are involved in the fast, symmetrical electric couplings they assume. Nobody seems to have done those experiments yet, but, at least to this non-biologist, it seems feasible, and, now, very interesting.

*: Here’s an anecdote illustrating how broken academic publishing is. Kinouchi and Copelli work at the University of São Paulo, which doesn’t, for reasons of economy, subscribe to Nature Physics. To get an electronic copy of their own published paper, they were forced to write correspondents at other universities. I couldn’t help them, because my school doesn’t feel like it can afford to subscribe to Nature Physics either.

{ 11 comments }

1

NL 07.24.06 at 7:24 pm

To be fair, NP is a brand-new journal, and many places simply haven’t gotten around to subscribing yet.

2

John Quiggin 07.24.06 at 7:30 pm

As a general way of looking at this, is it helpful to make the (trivial) observation that for optimisation problems lacking interior solutions, the objective will be maximized (if at all) at the boundary of the domain? This would lead me to look at expressing the problem in a way that permits the use of convex analysis.

3

Slocum 07.24.06 at 8:15 pm

I’d suggest that a possible analogy would be ‘the wave’ at sports stadiums. Each person standing and then sitting back down is equivalent to a neuron spike. If each standing person tended to result in, say, 2 standing persons, before long, everybody in the stadium would be standing. If each standing person resulted in fewer than 1 standing person, in time, nobody would be standing. With a branching factor of 1, on average, the wave persists indefinitely. Makes sense.

That’s simple enough–but where’s the dynamic range? Is the strength of the signal encoded in the amplitude (which would correspond to the percentage of the crowd that is participating in the wave)? Yes, my question would probably be answered by reading the paper, but…

4

eudoxis 07.24.06 at 8:59 pm

There must be multiple mechanisms of adaptation that work on different time scales. For vision, for example, there is a slower, physiologic, adaptive response to stimulus involving modulation of GABA receptors.

5

cosma 07.24.06 at 9:38 pm

Slocum: That’s a good analogy, and I wish I’d thought of it. Kinouchi and Copelli do indeed assume that the signal is encoded in the amplitude. They’re also assuming rate coding, rather than just time-coding, but my understanding is that this isn’t a bad assumption for the early stages of sensory processing.

Eudoxis: Yes, a more realistic model would definitely have to incorporate mechanisms for adaptation. (A slow fall in the susceptibility to external input suggests itself.) This would make for an interesting follow-up paper…

6

econgeek 07.24.06 at 10:58 pm

Why on earth is this appearing in NP instead of plain old Nature? It definitely seems Nature-worthy to me. (I am not a biologist, but I have seen much less interesting work than this in my field (economics) published in Nature or Science.)

7

taion 07.25.06 at 2:48 am

I think part of the reason might be that the paper has some flaws — consider “Our results are consistent with the reduction in sensitivity, dynamical range and synchronization recently observed in retinal ganglion cell response of connexin-36 knockout mice [28].” If you actually look at the reference, you’ll note that while C36 KO mice don’t have the gap junctions among AII cells that might presumably enable those cells to act as an excitable network, they also lack the gap junctions between the AII cells and the appropriate retinal ganglion cells from which the measurements were taken. This essentially means that the RGCs from which the measurements were taken get no signal at all from rods, which leads to the observed effects on the dynamic range/&c. regardless of the excitable network properties of AII cells.

Why do I sound so bitter about this? I’m currently working on a biophysical model of mammalian retinas (cone pathways only though!), and this paper made me spend the entire day chasing around references believing that I’d made a crucial omission that I hadn’t. I guess this is what I get for reading CT when I should be working, though!

8

mitch 07.25.06 at 3:13 am

As Eudoxis said, I believe there is adaptation occurring within a cell to shift its linear response range. Something I have seen recently (I know there’s more, I’m just tired at the moment): I’m pretty sure this retinal model uses such adaptation techniques.

9

Ken C. 07.25.06 at 7:47 am

The explanation I remember for wide dynamic range involved lateral inhibition: spikes in one cell inhibit spikes in nearby cells, so that a kind of spatial derivative is computed. No?

10

beajerry 07.25.06 at 9:07 am

The whole is greater than the parts?

11

Osame Kinouchi 07.26.06 at 10:39 am

Nice comments… Thanks econgeek, we tried, my friend, we tried…
Taion is right in the sense that the link to the retina results is not quite direct, because of the complexity of the retinal network. Interesting that Taion, who seems to be a retina specialist, points out that the amacrine network plays a structural role similar to the olfactory glomeruli. We hope that experiments in the olfactory glomeruli would be more direct.
Thanks to Cosma for the nice review. Unhappily, Cosma, the model is, hummm… somewhat over-universal: directed bonds instead of symmetrical ones also work… and sometimes gap junctions are asymmetrical also.
Of course, sensory systems are very adaptive, and their dynamic range is very large, with several mechanisms working together. Our proposal is more modest: to explain the dynamic range of the olfactory glomeruli, not the entire dynamic range of animal olfaction. But if the Stevens psychophysical exponent were, if not determined, at least affected by the value of the directed percolation exponent, it would be nice, wouldn’t it?

Comments on this entry are closed.