04 October 2008

CNS: Computational NeuroScience

Computational Neuroscience: modeling the mind


When people gesture while talking, it is usually for one of two reasons. If they are sure of themselves, hand movements can bring emotion and conviction to the words. If they are not so sure, they might be using the hand waving to convince you that they are right in general, and that the details are not important anyway. The latter is often the case when people talk about the brain.

It is not that what goes on beneath our scalp is a complete mystery. Modern neuroscience began over 100 years ago, when pioneering neuroanatomists started to unearth the basic architecture of the central nervous system. Since then, the field has grown significantly and today its largest conference, the Society for Neuroscience Annual Meeting, attracts over 30,000 attendees. Thanks to the popularity of the subject and the seemingly never-ending technical advances, scientists are now churning out masses of data at every level of analysis. Every day, they fill databases with gene sequences and protein interactions, map out networks of nerve cells, and even record the role each brain region plays in behaviour.

So why all the uncertainty? If all these experts are working so hard, why have we not found a cure for Alzheimer’s, understood how you can tell a cat from a dog in a split second, or explained why I feel like a person, and not just “a pack of neurons”, as suggested by Francis Crick? The answer lies in the sheer complexity of the brain. A human adult has about 100 billion neurons inside their head, all working away at their own little chores. Scientists have extensive knowledge about the different cell types, their make-up, how they are wired together, and ideas about what most of them are doing. But the leap from this to something that can do a crossword puzzle is a big one. It is not an impossible problem, exactly. Just a hard one.

Like other scientific disciplines before it, neuroscience is now reaching a stage where enough facts are known to start building general, and maybe even mathematical, theories about how it all works. Computational neuroscience is the field that develops and tests these theories. That is not to say that experiments will ever become obsolete. Even relatively mature fields, such as physics, need the constant challenge of real-life experiments to show who is right and who is wrong. The aim of computational neuroscience right now is to gather existing experimental data, try to fit it together in some coherent way, and go on to make suggestions and predictions for future experiments.

So how does someone tapping away at a computer in a dusty old office study the brain? They do it by trying to build theoretical models. A good example is a popular method called compartmental modelling, often used to examine the behaviour of single neurons (nerve cells). Since each neuron in our brain computes and transmits information using electrical signals, it is possible to think of it as a small, individual electrical circuit, made from the same basic elements - resistors, capacitors and the like - that control your mobile phone. In principle I could go, soldering iron in hand, and physically make a model neuron with these building blocks. Some people do. The downside is that it is a very time- and resource-sapping process. It is much easier to build a virtual circuit on your computer.
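To make this concrete, here is a minimal sketch of such a virtual circuit in Python (my own illustration, not from the article; all parameter values are purely illustrative). A single passive compartment is just a resistor and a capacitor in parallel, and its voltage response to an injected current takes only a few lines to simulate:

    import numpy as np

    # Passive single-compartment neuron: a leaky (RC) membrane driven by
    # an injected current. Parameter values are illustrative.
    C_m   = 1.0    # membrane capacitance (uF/cm^2)
    g_L   = 0.1    # leak conductance (mS/cm^2)
    E_L   = -65.0  # leak reversal potential (mV)
    I_inj = 1.0    # injected current (uA/cm^2)

    dt, T = 0.01, 100.0          # time step and duration (ms)
    t = np.arange(0.0, T, dt)
    V = np.empty_like(t)
    V[0] = E_L

    # Forward-Euler integration of C_m * dV/dt = -g_L * (V - E_L) + I_inj
    for i in range(1, len(t)):
        V[i] = V[i-1] + dt * (-g_L * (V[i-1] - E_L) + I_inj) / C_m

    print(f"Steady-state voltage: {V[-1]:.1f} mV")  # settles at E_L + I_inj/g_L = -55 mV

A full compartmental model chains many of these circuits together, one per small section of dendrite or axon, letting the simulated voltage spread along the shape of the real cell.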

With enough constraining experimental data, these types of single-cell models can become quite detailed and include ion channels, complex molecular interactions and the varying shapes of real neurons. Once you have set up your model, you can test your hand-waving ideas explicitly and see if they fit together in a logical way. Another great advantage of this approach is that you can also do experiments on your virtual cell that would be difficult, impossible or even immoral in real life, keeping animal rights activists happy in the process.
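To illustrate the kind of virtual experiment this makes possible (again a sketch of my own; the potassium-like conductance and all values are hypothetical), you can "block" a channel simply by setting its conductance to zero and rerunning the simulation - no pharmacological blocker required:

    # A "virtual experiment": compare the same model cell with and without
    # an extra potassium-like conductance. All values are illustrative.
    def steady_state_voltage(g_K, dt=0.01, T=100.0):
        C_m, g_L, E_L, E_K, I_inj = 1.0, 0.1, -65.0, -80.0, 1.0
        V = E_L
        for _ in range(int(T / dt)):
            I_leak = g_L * (V - E_L)
            I_K = g_K * (V - E_K)     # the conductance we will "block"
            V += dt * (-I_leak - I_K + I_inj) / C_m
        return V

    print(f"control: {steady_state_voltage(g_K=0.05):.1f} mV")  # channel present
    print(f"blocked: {steady_state_voltage(g_K=0.0):.1f} mV")   # channel removed

In the real world, the "blocked" condition might demand a drug with messy side effects, or might not be achievable at all; in the model it is one changed parameter.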

With enough data, these models can make specific statements about the real world. For example, an elegant study by Agmon-Snir and colleagues (Nature, 1998) looked at time-difference-detecting neurons in the auditory brainstem. Imagine you are watching a tennis match from the stands. The grunting noise from the player on your left will reach your left ear slightly earlier than your right ear. The further to the left the player is, the bigger the time difference. Your brain uses this information to tell which direction a sound came from. One puzzle was why, among the neurons involved, the ones that respond to higher-frequency sounds are smaller than those that deal with lower-frequency sounds. Agmon-Snir and colleagues built realistic computational models of these cells and showed that higher-frequency sound signals are optimally handled by neurons with shorter branches, because of noisy signal transmission.
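To get a feel for the timescales these neurons work with, here is a rough back-of-the-envelope calculation (my own illustration, not taken from the study; it uses the simple straight-path approximation ITD ≈ d·sin(angle)/c):

    import math

    # Rough interaural time difference (ITD) for a distant sound source.
    # Straight-path approximation; both values are illustrative.
    d = 0.20   # distance between the ears (m)
    c = 343.0  # speed of sound in air (m/s)

    for angle_deg in (0, 30, 90):
        itd = d * math.sin(math.radians(angle_deg)) / c
        print(f"{angle_deg:3d} degrees -> ITD = {itd * 1e6:4.0f} microseconds")

Even for a sound coming from the far left, the difference between the two ears is around half a millisecond, which is why these cells need to be such precise coincidence detectors.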

Of course this single-cell example looks at just one of the many levels at which the brain could be studied. David Marr, an influential early theorist, defined three levels at which we can analyse a computational system such as the brain: the computational level, the algorithmic level and the implementation level. The first level identifies the computations that are to be performed. An example in the visual system would be motion detection. The second level determines the strategy used to perform this task. A computer programmer would call this the choice of algorithm. The third level looks at the physical implementation of this strategy, which in the case of the brain is the network of neurons. To a certain extent, experimental neuroscience has focused on this last ‘nuts-and-bolts’ level. Theoretical neuroscientists, however, have been working on all three levels: everything from the detailed biophysics of ion channels to more abstract full-brain models.

After defining a problem at a certain level, the theorist must design the model, taking into account several factors. Firstly, a model that is too detailed can be just as difficult to draw conclusions from as the real thing, which would render it mostly useless. To paraphrase Einstein, a model should include just enough detail to explore the question at hand and no more. Secondly, in many cases little experimental data is available to constrain the model and ensure it reflects reality. The data must also be of high enough quality; substandard data will give you substandard results (a principle known among computer programmers as GIGO: Garbage In, Garbage Out). Thirdly, even with the extraordinary speed of modern computers, some simulations can take days or even weeks to run. For this reason, modellers may not want to include all the details, and may instead use approximations. Fortunately, computer processing power continues to increase every year. Many modern studies are based on ideas that others had decades ago but simply lacked the computational resources to implement at the time.

Neuroscience used to be a divided field, with the experimentalists complaining that the theorists were fiddling around with abstract ideas that would never work in a real brain, and the theorists criticising the experimentalists for filling the literature with reams of boring data. These divisions are rapidly fading. Many researchers are realising that steady progress will require a two-way flow of ideas, and more and more scientists are actively blurring the lines by adopting methods from both approaches. This interdisciplinary outlook will ensure that exciting times lie ahead for our understanding of the brain and, in many ways, of ourselves.


Refs:
  • Agmon-Snir H, Carr CE, Rinzel J. The role of dendrites in auditory coincidence detection. Nature (1998) 393(6682): 268-272. PMID: 9607764
  • Sejnowski TJ, Koch C, Churchland PS. Computational neuroscience. Science (1988) 241(4871): 1299-1306. PMID: 3045969
