The aim of this paper is to propose an interdisciplinary evolutionary connectionism approach for the study of the evolution of modularity. It is argued that neural networks, as a model of the nervous system, and genetic algorithms, as simulative models of biological evolution, allow us to formulate a clear and operative definition of module and to simulate the different evolutionary scenarios proposed for the origin of modularity. I will present a recent model in which the evolution of primate cortical visual streams is possible starting from non-modular neural networks. Simulation results not only confirm the existence of the phenomenon of neural interference in non-modular network architectures but also, for the first time, reveal the existence of another kind of interference at the genetic level, i.e. genetic interference, a new population genetic mechanism that is independent of the network architecture. Our simulations clearly show that genetic interference reduces the evolvability of visual neural networks and that sexual reproduction can at least partially solve the problem of genetic interference. Finally, it is shown that entrusting the task of finding the neural network architecture to evolution and that of finding the network connection weights to learning is a way to completely avoid the problem of genetic interference. On the basis of this evidence, it is possible to formulate a new hypothesis on the origin of structural modularity, and thus to overcome the traditional dichotomy between innatist and empiricist theories of mind.
The study of the modular organization of organisms is of interest in different disciplines: the biology of development, evolutionary biology, neurosciences and psychology. The terminology used when referring to modules varies from discipline to discipline: beginning with Darwin, evolutionary biologists have used the term ‘homologues’ when referring to individualized parts of two organisms that have been inherited, with more or less modification, from an equivalent organ in the common ancestor (e.g. the five-toed state in humans and iguanas; Futuyma 1998, p. 109); neuroscientists use the term ‘brain areas’ when referring to parts of the brain that are specialized in performing a specific task (e.g. Broca's area; see Hirsch et al. 2000).
It is now more than 20 years since Fodor (1983) published his small but seminal book, The modularity of mind, and the debate about the architecture of mind remains central in cognitive science. When cognitivists study the mind and wish to refer to modules, they adopt so-called ‘boxes and arrows’ models. In these models, each ‘box’ represents a module with a specific function, and the ‘arrows’ connecting boxes represent relationships between modules. The dual-route model for the English regular and irregular past tense is one of the most frequently cited models in the cognitive science literature. In this model, one nervous pathway is involved in the production of the past tense of regular verbs and the other pathway is involved in the production of the past tense of irregular verbs (Pinker & Prince 1988; see also Calabretta & Parisi 2005).
On one hand, this way of describing the modular organization of mind could be helpful, at least, for formalizing experimental data. On the other hand, this approach has some important disadvantages, the first of which is that it does not take into account the real structure of the brain. In other words, using ‘boxes and arrows’ models, it becomes difficult to respond to the following simple questions: what is the specific box made of and what part of the brain is it located in? What are the other parts of the brain that the box considered exchanges information with? What about the evolutionary origin of brain modules?
Briefly, there are two main questions for cognitive science to answer: what are modules? How and why do they emerge? The first step towards answering these two questions is to define the term ‘module’ precisely. The next step is to recreate and study the evolution of modules by taking into account the real structure of the brain. The several attempts aimed at finding definitive answers to these questions have not been successful.
In this paper, I would like to propose a new way of finding correct answers to the previously stated questions, i.e. using neural networks as simulative models of the brain (Rumelhart & McClelland 1986) and genetic algorithms as a simulative model of biological evolution (Holland 1992). This approach to the study of the modularity of mind, which uses the simulative tools of artificial life, has been called evolutionary connectionism (Calabretta 2002a; Calabretta & Parisi 2005). Using the evolutionary connectionism approach, it is thus possible to simulate the different evolutionary scenarios proposed to explain the origin of modularity; to disentangle the roles of evolution, development and learning; to explain why some species have a modular brain architecture while other species have a non-modular brain architecture; to understand the phylogenetic and ontogenetic dynamics involved in the evolution of modularity; to identify mechanisms that facilitate the evolution of modularity and mechanisms that prevent it; to determine the relationship between modularity and genetic duplication; and to settle the debate between cognitivists, who claim that the brain is modular and brain modules are innate, and empiricists, who claim that brain modules are, more than anything else, the result of development and learning. The aim of this paper is to propose an interdisciplinary evolutionary connectionism approach for the study of the evolution of modularity.
In §2, I will show how using artificial neural networks as a brain model allows us to formulate an unambiguous definition of modularity. In §3, I will review a recent evolutionary connectionism study that takes inspiration from the real structure of the primate brain and in which, given the preceding definition of modularity, the evolution of modular neural networks is possible starting from non-modular neural networks. I will show how the simulative results obtained allowed some of the previously stated questions to be answered. These results suggest a hypothesis concerning the roles of evolution and learning in the origin of modules (Di Ferdinando et al. 2000, 2001). Moreover, they not only confirm the existence of the phenomenon of neural interference in non-modular network architectures, but also for the first time reveal the existence of another kind of interference at the genetic level, i.e. genetic interference, a new population genetic mechanism that is independent of the network architecture and had not previously been documented in the artificial life literature (Calabretta et al. 2003a). In §4, some general conclusions will be drawn.
2. Definition of modularity
While the definition of module seems to be quite simple from an intuitive point of view, the attempt to formulate a more rigorous definition, which would allow us to perform an interdisciplinary comparative study, is much more difficult. In comparative biology and systematics, the homology concept is elusive (Wagner 1995a) and ‘the proper definition of homologues is problematic because there is no consensus about their biological role’ (Wagner 1995b). Bates (1994) refers to modularity as a contentious word in that the term ‘is used in markedly different ways by neuroscientists and behavioural scientists, a fact that has led to considerable confusion and misunderstanding in interdisciplinary discussions of brain and language’. According to Bates (1994), in fact, when a neuroscientist uses the word module ‘s/he is usually trying to underscore the conclusion that brains are structured, with cells, columns, layers and/or regions that divide up the labour of information processing in a variety of ways’.
In cognitive science and linguistics, we have a different definition of modularity. According to Fodor (1983), ‘a module is a specialized, encapsulated mental organ that has evolved to handle specific information types of enormous relevance to the species’. According to him, modules are cognitive systems, most of all input systems (e.g. language and face recognition in humans and other primates, echo location in bats or fly detection in the frog) that exhibit all or almost all of the following nine features as defined by Fodor himself:
Input systems are domain specific (p. 47): that is, they are specialized in the type of information they can deal with.
The operation of input systems is mandatory (p. 52): that is, it is ‘mediated by automatic processes which are obligatorily applied’.
There is only limited central access to the mental representations that input systems compute (p. 55): that is, ‘input representations are, typically, relatively inaccessible to consciousness’.
Input systems are fast (p. 61): that is, the speed of input processes is very high.
Input systems are informationally encapsulated (p. 64): that is, operations of input systems on information are not affected by higher level information.
Input analysers have ‘shallow’ output (p. 86): that is, the information that the outputs of input systems are assumed to encode is limited.
Input systems are associated with fixed neural architecture (p. 98): that is, there is a characteristic neural architecture associated with each of the input systems.
Input systems exhibit characteristic and specific breakdown patterns (p. 99): that is, they have patterned failures of functioning.
The ontogeny of input systems exhibits a characteristic pace and sequencing (p. 100): that is, input systems develop according to specific, endogenously determined patterns.
Many years later, in his book The mind doesn't work that way, Fodor (2000) replies to Pinker's (1997) How the mind works and to Plotkin's (1997) Evolution in mind regarding adaptationism, the idea that many mind modules are the result of adaptive pressures faced by our ancestors. In other words, Fodor criticizes the pan-adaptationist view of evolutionary psychology, i.e. the view of the brain as a system of modules shaped by natural selection (Barkow et al. 1992; Carruthers 2003). Significantly, Fodor also adopts a stance on the so-called massive modularity (MM) thesis (e.g. Sperber 2002), the idea that most or all of cognition is modular: according to him ‘there are good reasons to doubt that MM is true’ (Fodor 2000, p. 55).
The positions described above represent only a small part of all the relatively independent doctrines regarding the modularity thesis in cognitive science literature. Here, I would only like to stress that nowadays there is no shared model that can be used to clarify the many different hot issues regarding the modularity of mind. One possible solution to this problem would be to attempt to simulate the evolution of modularity. In order to do that, we need a clear operative definition of modularity and a well-known model of study that can be simulated. As mentioned in §1, in order to describe brain architecture, cognitivists use information flow diagrams in which modules are represented by boxes connected by arrows. Since cognitivist modules are not inspired by the brain's physical structure and way of functioning, this kind of description is of no use in simulating the evolution of modularity. With regard to ‘boxes and arrows’ models, artificial life models offer at least three main advantages: (i) they draw their inspiration from the real structure and mode of functioning of the brain and allow us to give a more plausible definition of modularity; (ii) they allow us to adopt computer simulation for testing evolutionary scenarios; (iii) they allow us to take into account chance and other non-adaptive evolutionary factors.
Calabretta & Parisi (2005) give a general definition of both modularity and non-modularity: ‘modular systems can be defined as systems made up of structurally and/or functionally distinct parts. While non-modular systems are internally homogeneous, modular systems are segmented into modules, i.e. portions of a system having a structure and/or function different from the structure or function of other portions of the system’. The authors give a more specific and rigorous definition of modularity and non-modularity in the same paper.
In a nonmodular architecture one and the same connection weight may be involved in two or more tasks. In a modular architecture each weight is always involved in a single task: modules are sets of ‘proprietary’ connections that are only used to accomplish a single task. (Calabretta & Parisi 2005, fig. 14.4)
According to this definition of modularity, a neural module is composed of neural units that are physically linked to units involved in the same task and are not linked to units involved in different tasks.
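This definition lends itself to a simple operational test. The sketch below is an illustrative toy, not the code of any of the models discussed: the network sizes (four hidden units, two tasks) and the use of a binary connectivity mask are my own assumptions, made only to show how ‘proprietary’ connections can be checked mechanically.

```python
import numpy as np

# Hidden-to-output connectivity as a binary mask: rows are hidden units,
# columns are the two tasks. A 1 means the hidden unit projects to that
# task's output units.
modular_mask = np.array([
    [1, 0],   # hidden unit 0: task A only ("proprietary" to A)
    [1, 0],   # hidden unit 1: task A only
    [0, 1],   # hidden unit 2: task B only
    [0, 1],   # hidden unit 3: task B only
])
nonmodular_mask = np.array([
    [1, 0],
    [1, 1],   # shared unit: its weights are involved in both tasks
    [1, 1],   # shared unit
    [0, 1],
])

def is_modular(mask):
    """A network is modular iff every hidden unit (and hence every
    hidden-to-output weight) serves at most one task."""
    return bool(np.all(mask.sum(axis=1) <= 1))

print(is_modular(modular_mask))     # True
print(is_modular(nonmodular_mask))  # False
```

On this reading, modularity is a property of the connectivity pattern alone, which is what makes it encodable in a genotype and hence evolvable.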
Once a definition of modularity is established, the simulation of the evolution of modular structures becomes possible thanks to simulative models of neural networks and genetic algorithms. In most connectionist simulations, neural networks have a fixed architecture and are non-modular. Modular neural networks are considered in only a few connectionist studies, and even in these few cases the network's modular architecture is fixed and decided a priori by the researcher (e.g. Jacobs et al. 1991; Murre 1992). The study by Jacobs & Jordan (1992) differs from the others in that the modular architecture of the neural network is not decided a priori by the researcher, but emerges as a result of the organism's development. This model is based on the preference of real neurons for establishing connections with spatially close neurons rather than with more distant ones. Owing to this simple developmental rule, the architectures that emerge in this simulation are modular rather than non-modular. However, this result is also made possible by the hardwired spatial location of network units, i.e. by certain decisions taken a priori by the researchers (see Di Ferdinando et al. 2001).
In §3, I will present the first simulative study in which modular neural networks evolve in a truly spontaneous fashion starting from non-modular neural networks.
3. Evolution of modularity
Earlier I answered the question ‘what are modules?’ by giving a definition of modularity, i.e. modules are sets of ‘proprietary’ connections that are only used to accomplish a single task. I will now try to answer the second question: how and why do they emerge?
Once the existence of modules in the brain has been postulated, the question of their origin naturally follows. In other words, it becomes natural to ask whether the information about modules is coded into the genes and therefore transmitted over generations, or is the end result of a complicated process of brain development during life, or is the result of the interplay between the two processes. A first step towards answering this second question (the origin of brain modules) could be to identify modules that are claimed to exist in the brains of real organisms. The second step would then be to attempt to replicate the evolution of these modules by simulation.
In 1982, Ungerleider and Mishkin suggested that two cortical ‘streams’ of projections in the brain of primates, the ventral and dorsal streams, were respectively involved in the ‘what’ and ‘where’ tasks (Ungerleider & Mishkin 1982; Milner & Goodale 1995). The what and where tasks consist of recognizing the identity (‘what’) and the spatial location (‘where’) of objects that appear on the organism's retina. In 1989, Rueckl, Cave and Kosslyn reproduced the two tasks in a simulative context using a neural network that has to learn to perform the two tasks by means of a backpropagation procedure, i.e. by means of a learning algorithm (Rueckl et al. 1989). In this study, Rueckl, Cave and Kosslyn compare the performance of different modular and non-modular architectures in learning the what and where tasks. All the network architectures have three layers of units: the input layer is composed of 25 units corresponding to a 5×5 retina; the hidden layer is composed of 18 units; the output layer is composed of 18 units, 9 of which are involved in the what task and the other 9 in the where task (for simulative details, see Rueckl et al. 1989; see also Di Ferdinando et al. 2001 and Calabretta et al. 2003a, figs 1 and 2). Network architectures vary in the number of hidden units that are connected either to the what output units, or to the where output units, or to both.
Network modularity and non-modularity are as defined in §2. In modular networks, each weight is always involved in a single task: some connections are involved in accomplishing the first task, while the other connections are involved in accomplishing the second task. In non-modular networks, some connections are involved in accomplishing the first task, some in accomplishing the second task, and some others are involved in accomplishing both the tasks. The fact that in non-modular architecture some connections are involved in both tasks is important in that it raises the problem of neural interference. The interference derives from the fact that the correct accomplishment of the first task may require the initial weight value of the connection to be increased during learning, while the correct accomplishment of the second task may require the initial value to be decreased (Plaut & Hinton 1987; Jacobs et al. 1991; see Calabretta & Parisi 2005, fig. 14.4).
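The mechanism of neural interference can be shown with hypothetical numbers, chosen purely for illustration and not taken from the simulations: a single connection weight shared by the two tasks receives error gradients of opposite sign, so its combined learning update can cancel out.

```python
# Toy illustration of neural interference on one shared connection weight.
# The numbers are invented; only the sign conflict matters.
w = 0.5
lr = 0.1
grad_what = +0.8    # the what task wants w decreased (update: w -= lr * grad)
grad_where = -0.8   # the where task wants w increased

update_what = -lr * grad_what      # negative: pull w downwards
update_where = -lr * grad_where    # positive: pull w upwards
combined = update_what + update_where

print(combined)  # 0.0: the two tasks' demands cancel and neither improves
```

In a modular architecture no weight receives gradients from more than one task, so this sign conflict cannot arise.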
The main result of this simulative study is that the architecture with the best performance in the two tasks is a modular one in which there are more resources (i.e. hidden units) for the more complex task, the what task, than for the simpler task, the where task. More precisely, the best architecture is that in which 14 hidden units are dedicated to the what task and 4 to the where task. Rueckl et al. (1989) have put forward a few hypotheses to account for these results and have argued that the difference in performance between modular and non-modular architectures might explain the evolutionary reason for which these two separate neural pathways evolved for the two tasks. A very simple test for this hypothesis would be to modify Rueckl et al.'s (1989) model setup by adding a genetic algorithm as a model of evolution (Holland 1992). In this way, it becomes possible to simulate the evolution of modular architectures starting from non-modular architectures. This was done by Di Ferdinando et al. (2000, 2001) with quite surprising results. In a first phase of the research, the genetic algorithm was used to evolve both the architecture and the weights of a population of neural networks. Each neural network represents an organism whose reproductive chances across generations depend on its performance in the what and where tasks. Each organism is provided with a genotype into which the architecture and connection weight values are coded. The architectures in the population share the same number of layers, input units, hidden units and output units, as well as the same pattern of connectivity between input and hidden layers (all the input units are connected with all the hidden units). Network architectures vary only in the pattern of connections between hidden and output units.
At the first generation, a population of 100 individuals is created and each individual is characterized by random values for both the weights and the pattern of connectivity between the hidden units and output units. Each individual is presented with the 81 input patterns of the what and where task (nine objects are presented in nine different positions), and the individual's fitness is measured as minus the summed squared error on these patterns. In the first generation, fitness is very low because both the architecture and the weights are random; the 20 individuals with highest fitness (smallest total error) reproduce asexually by generating five offspring each, which inherit the architecture and the weights of their single parent with the addition of randomly generated architecture and weight mutations. This process is repeated for 10 000 generations. In summary, the simulations show that the genetic algorithm fails to find an architecture and weights that are appropriate for solving both the where and the what task. More specifically, at the end of the simulation, networks are able to solve the easier where task but not the more complex what task. Moreover, the architectures emerging at the end of evolution are different from Rueckl et al.'s (1989) optimal modular architecture.
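The selection scheme just described can be sketched schematically as follows. Only the population structure (20 parents, five clonal offspring each, mutation, rank selection on minus the summed squared error) follows the description above; the genotype, the error function and the mutation size are simplified stand-ins for the original model's.

```python
import numpy as np

rng = np.random.default_rng(0)

POP, PARENTS, OFFSPRING = 100, 20, 5   # 20 parents x 5 offspring = 100

def fitness(genotype):
    # Stand-in objective: in the original model, fitness is minus the
    # summed squared error of the decoded network on the 81 patterns.
    return -np.sum((genotype - 1.0) ** 2)

# First generation: random genotypes, hence very low fitness.
population = [rng.normal(0.0, 1.0, size=10) for _ in range(POP)]

for generation in range(200):   # 10 000 generations in the original study
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:PARENTS]
    # Asexual reproduction: each parent clones itself with random mutations.
    population = [parent + rng.normal(0.0, 0.1, size=parent.shape)
                  for parent in parents for _ in range(OFFSPRING)]

best = max(population, key=fitness)
```

After a few hundred generations the best genotype sits close to the optimum of this toy objective; the point of the sketch is the truncation-selection loop, not the objective itself.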
How do we explain these results? Why is the genetic algorithm not able to evolve the appropriate network architecture and weights for the two tasks? There are three main reasons (Di Ferdinando et al. 2001):
The coevolution of both the neural network architecture and weights is made more difficult by the fact that the adapted set of weights can suddenly become maladaptive if the network architecture changes due to mutations.
The architecture that evolves is one in which more resources are dedicated to the easier task, the where task, and fewer resources to the more complex task, the what task. This kind of resource allocation is the opposite of that found by Rueckl et al. (1989) to be optimal for solving both tasks.
Even in the few replications of the simulation in which the genetic algorithm is able to evolve architectures similar to the optimal one, performance in the what task is low due to another interference mechanism: genetic interference. This type of interference can result from the linkage between advantageous and deleterious mutations falling on different modules that are encoded in the same genotype and are inherited together.
These intriguing results induced us to investigate the mechanism of genetic interference more thoroughly by collaborating with Gunter Wagner, an evolutionary biologist at Yale University. With Gunter Wagner, we have started an interdisciplinary research project on the evolution of modularity. Our evolutionary connectionist project was the first example where the origin of modularity was simulated in a computational model (Calabretta et al. 1997) and led to the discovery of a new mechanism for the origin of modularity: modularity as a side-effect of genetic duplication (Calabretta et al. 1998a,b, 2000; see also Wagner et al. 2005). The project was successful beyond expectations (G. P. Wagner 2000, personal communication) and recently José B. Pereira-Leal and Sarah A. Teichmann of the ‘Laboratory of Molecular Biology, Structural Studies Division’ at Cambridge University, by using an analysis of module duplication in Saccharomyces cerevisiae protein complexes, provided strong support for our hypothesis on the evolution of modularity (see Calabretta et al. 2000; Pereira-Leal & Teichmann 2006).
In order to study genetic interference, we carried out a long series of simulations in which we decoupled the two processes going on in our simulations: the process of change of the neural network architecture and the process of change of the neural network weights (Calabretta et al. 2003a). In this series of simulations, we varied the mutation rate, the fitness formula and the nature of reproduction. The connection weights are encoded in the inherited genotype and the network architecture is fixed. The architecture is the optimal modular architecture, with 14 hidden units dedicated to the what task and 4 hidden units dedicated to the where task. The genetic algorithm modifies only the evolving connection weights. At the beginning of the simulation, a population of 100 individuals is created and each individual inherits a genotype with random values for the weight genes. The values of the weight genes are randomly chosen and can vary across generations owing to mutations. The simulation is terminated after 50 000 generations (instead of the 10 000 generations used in Di Ferdinando et al. 2001).
According to our simulations and analyses, genetic interference is a new population genetic mechanism that reduces the efficiency of the selection process across generations. In asexual populations, conflicting mutations (some favourable and others unfavourable) may fall on different but genetically linked portions of the genotype encoding distinct neural modules. In these cases, there is no way of retaining favourable mutations while eliminating unfavourable ones (see Calabretta et al. 2003a, fig. 4). Different analyses showed that interference at the genetic level is a general phenomenon that does not depend on differences in the difficulty of the two tasks. In fact, additional simulation results showed that this interference comes into play also in simulations in which the genotype codes for multiple separate neural modules involved in tasks that can be identical (i.e. two ‘what’ tasks or two ‘where’ tasks) or different, with either the same or different difficulty. Our simulation model differs from classical population genetic models of genetic linkage in that it involves the optimization of two distinct tasks or functions. Our analyses indicate that genetic interference requires both genetic linkage between advantageous and deleterious mutations affecting different functions, and high mutation rates. Genetic interference appears only above a critical mutation rate, which cannot be predicted a priori but requires an analysis of mutational effect distributions (see Calabretta et al. 2003a, fig. 6).
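A toy numerical example shows why linkage defeats selection under asexual reproduction. The fitness contributions below are invented purely for illustration; only the bookkeeping matters.

```python
# One genotype encodes two neural modules, A and B. A single offspring
# carries a favourable mutation on module A (+2 fitness) and a deleterious
# one on module B (-3 fitness), inherited together on the same genotype.
parent_fitness = (5, 5)                  # (module A contribution, module B contribution)
offspring_fitness = (5 + 2, 5 - 3)

total_parent = sum(parent_fitness)       # 10
total_offspring = sum(offspring_fitness) # 9

# Selection sees only total fitness: the offspring loses the comparison,
# and the favourable module-A mutation is eliminated along with the
# deleterious module-B mutation. There is no way to keep one without the other.
print(total_offspring < total_parent)  # True
```

Under asexual reproduction the two mutations cannot be separated, which is exactly the situation recombination addresses.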
We also investigated the following question: can sexual reproduction eliminate the problem of genetic interference? We carried out a set of simulations in which the network architecture was fixed and was the one found to be optimal by Rueckl et al. (1989). In these simulations, the individuals reproduce sexually and the task of the genetic algorithm is to find the network weights only. The results of these simulations show that sexual reproduction can mitigate but does not eliminate the problem of genetic interference. Sexual reproduction reduces the negative effects of genetic linkage ‘by allowing the decoupling of portions of genotypes affected by favourable and unfavourable mutations and the recombining together of genetic segments in new genotypes. In this way, sexual reproduction can find new genotypes that include only favourable mutations or only unfavourable mutations and this may increase the general efficiency of the evolutionary selection process’ (Calabretta et al. 2003a).
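How recombination decouples linked modules can be sketched as follows. The two-block gene representation is an assumption made for illustration, not the encoding used in the simulations.

```python
# A genotype consists of two linked blocks, one per neural module. Under
# asexual reproduction the blocks are always inherited together; sexual
# recombination at the module boundary can separate them.
parent1 = {"A": "A+", "B": "B-"}   # favourable mutation on module A, deleterious on B
parent2 = {"A": "A",  "B": "B"}    # no mutations

def recombine(p1, p2):
    # One-point crossover at the module boundary: module A from one
    # parent, module B from the other.
    return {"A": p1["A"], "B": p2["B"]}

child = recombine(parent1, parent2)
print(child)  # {'A': 'A+', 'B': 'B'}: the favourable mutation is retained
              # while the deleterious one is shed
```

Of course, recombination only helps when crossover happens to fall between the conflicting mutations, which is why in the simulations it mitigates genetic interference without eliminating it.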
In another set of simulations, we used a backpropagation procedure as a model of learning. In these simulations, the genetic algorithm modifies the architecture of neural networks, while the backpropagation procedure takes care of the weights. The simulated setup was the same as the previous one except for the fact that the neural weights of each neural network were randomly chosen in each generation and modified during the individuals' lifetime by means of the learning algorithm. The results proved very interesting. Evolution was able to find the optimal modular network architecture for performing the two tasks, while learning was able to find the appropriate connection weights for this architecture. As a result, at the end of the evolutionary process, the neural networks were able to correctly perform both the what and the where tasks.
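The division of labour just described can be sketched on a toy linear problem. This is my own construction, not the what-and-where setup: plain gradient descent stands in for backpropagation, and the genotype encodes only a binary connectivity mask. Because weight values are never inherited, weight mutations cannot become genetically linked across generations.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(81, 4))   # 81 input patterns, 4 input units
T = X[:, :2].copy()            # two tasks: output 0 tracks input 0, output 1 tracks input 1

def lifetime_fitness(mask, lr=0.1, epochs=100):
    """Evolution supplies only the architecture (a binary mask over
    input-to-output connections); weights start random each lifetime and
    are learned by gradient descent. Fitness is minus the final summed
    squared error, as in the model described above."""
    W = rng.normal(0.0, 0.1, size=(4, 2))
    for _ in range(epochs):
        error = X @ (W * mask) - T
        W -= lr * (X.T @ error) / len(X) * mask   # learning respects the architecture
    return -np.sum((X @ (W * mask) - T) ** 2)

# The genetic algorithm searches over architectures only.
pop = [rng.integers(0, 2, size=(4, 2)) for _ in range(20)]
for generation in range(10):
    pop.sort(key=lifetime_fitness, reverse=True)
    pop = [np.where(rng.random(p.shape) < 0.05, 1 - p, p)   # mutate connectivity bits
           for p in pop[:5] for _ in range(4)]

best = max(pop, key=lifetime_fitness)
print(best.shape)  # (4, 2)
```

In this sketch no weight genes exist at all, so favourable and deleterious weight mutations can never be inherited together: the condition for genetic interference simply disappears, mirroring the result of the simulations.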
These results offer us a way of overcoming the traditional dichotomy between innatist and empiricist theories of mind, and allow us to formulate a new hypothesis on the origin of structural modularity: structural modularity as a result of cooperation between genetic adaptation and individual learning (Calabretta et al. 2003a; see also Wagner et al. 2005).
Four-day-old newborns are already able to differentiate their native tongue from other languages, while three-month-old babies show surprise if two solid objects seem to occupy the same position in space. According to some researchers (e.g. Spelke 1994), these and many other data suggest the innateness of some brain modules that are specialized for sounds, mechanical relationships between objects, numbers, etc. At the other extreme, there are researchers who interpret these data differently and stress the role of development and learning in the acquisition of these capacities (e.g. Karmiloff-Smith 2000). The possibility of recreating the evolutionary scenarios responsible for the evolution of modularity would be very useful for the purpose of settling the question of nature versus nurture. Clearly, the cognitivists' ‘boxes and arrows’ models are of no use in achieving this goal, while the evolutionary connectionism approach using neural networks and genetic algorithms allows us to simulate different evolutionary scenarios (Calabretta & Parisi 2005).
In the present paper, it is argued that neural networks as a model of the nervous system would allow us, first of all, to formulate a plausible definition of module, albeit a simplified one. A clear definition of modular and non-modular neural networks is the conditio sine qua non for simulating the evolution of modular architectures starting from non-modular ones using a genetic algorithm as a model of biological evolution. In this way, the different roles of evolution, learning, development, chance and other non-adaptive evolutionary factors could also be determined by simulation. To test the validity of this approach, the evolution of brain ventral and dorsal streams was chosen as a case study. These two separate cortical streams in the brain of primates are thought to be involved in the tasks of recognizing object identity and of identifying spatial location of visually perceived objects, respectively.
What did we learn from these evolutionary connectionist simulations? The results obtained in Di Ferdinando et al. (2001) have, first of all, confirmed the existence of neural interference. This interference is present in non-modular neural networks that have to learn multiple tasks. It was also confirmed that a modular architecture, in which each module is dedicated to the solution of a specific task, is a solution to the problem of neural interference. A deeper analysis of the results showed that further interference was active at the genetic level in both modular and non-modular architectures. This result becomes more significant considering that the negative effect of genetic linkage reported in our studies is potentially more serious than other negative effects of genetic linkage reported in population genetic models (Felsenstein 1974; Haigh 1978; De Visser et al. 1999; Waxman & Peck 1999). For the first time in the literature, it was shown that this form of interference, genetic interference, ‘can completely prevent the adaptation of a character due to genetic linkage with deleterious mutations affecting another character’ (Calabretta et al. 2003a).
The evolvability of a system has been defined by Wagner & Altenberg (1996) as ‘the genome's ability to produce adaptive variants when acted upon by the genetic system’. In other words, the issue is about understanding the conditions under which mutation, recombination and selection can lead to complex adaptations. Our simulations clearly show that genetic interference is a new population genetic mechanism that reduces the evolvability of modular and non-modular visual neural networks. It was also shown how sexual reproduction can at least partially solve the problem of genetic interference. This important result could help us to explain the genetic and adaptive advantages of sexual reproduction and therefore to explain the evolutionary prevalence of sexual reproduction among higher organisms.
Finally, it was shown that entrusting the task of finding the neural network architecture to evolution and that of finding the network connection weights to learning completely solves the problem of genetic interference. Interestingly, this finding suggests that evolution and learning may have played different roles in the origin of brain modularity. As stressed in Calabretta et al. (2003a), ‘the reason for these different roles of evolution and learning is that evolution is guided by a global evaluation signal (fitness) and seems not to care about the specific capacities of individuals but only about their total performance, whereas learning during life can use distinct evaluation signals (teaching inputs) for each separate neural module’.
From a more general point of view, it is important to stress that this simulative approach has allowed many different analyses to be performed. One important feature of our model is in fact the co-occurrence of the different levels of biological organization (genetic, neural and cognitive). This is an essential model component for understanding the dynamics of complex systems, such as the interactions and interference among different levels (Bar-Yam 2000). In particular, modelling the genotypic level together with a phenotypic level composed of the two tasks to be optimized made it possible to understand the mechanism of genetic interference. Significantly, this kind of interference had not been described in the literature as occurring in other models of genetic linkage (Calabretta et al. 2003a; see also Wagner & Mezey 2004).
Using this evolutionary connectionist model of the primate cortical visual ‘streams’ yielded further unexpected results: analyses of the effects of sequential versus simultaneous learning of multiple tasks in modular and non-modular architectures (Calabretta et al. 2002, 2003b); and a study of the generalization of the ‘what’ and ‘where’ tasks, with a new hypothesis for the ‘what’ generalization subtask, namely that organisms do not recognize the identity of an object directly but, by moving their eyes, first change the position of the object on the retina in order to bring it into the fovea (Calabretta et al. 2004).
With regard to the evolution of modularity, many questions remain to be answered. Schematically: does the evolution of brain modules differ from the evolution of body modules? What is the role of development? What is the relationship between modularity and evolvability? In what ways do the nature and complexity of the tasks to be performed shape the modular or non-modular structure of the brain? What is the role of non-modularity? These are all very difficult questions to answer, and it is easy to predict that the nature–nurture debate will continue for a long time (for example, see Marcus 2006). In his book Synaptic self, the neuroscientist LeDoux (2002) claims that both nature and nurture contribute to synaptic connectivity and, therefore, to personality. Presenting his more recent research on the mechanisms and neural circuits involved in the genesis of the emotion ‘fear’, LeDoux (1996, 2002) stresses that the amygdala plays a key role in this process, and maintains that in order to understand how the mind works, we have to study the interaction among the different dedicated synaptic systems for cognitive, emotional and motivational functions (for a science fiction novel related to this topic, see Calabretta 2006). Stimulated by the work of LeDoux, we can ask two questions: is the collection of nuclei in the telencephalon called the amygdala a module for the emotion ‘fear’? And if so, how and why did it evolve (see Barton et al. 2003)? According to LeDoux, we are our synapses; the evolutionary connectionism approach allows us to simulate the communication between neurons at the synaptic level. The integration of the simulative models of evolutionary connectionism with the more traditional methods of biology (Wagner et al. 2005; compare also Calabretta et al. 2000; Pereira-Leal et al. 2006) and the emergence of new paradigms for dealing with complex systems (Calabretta 2002b) represent one of the few practicable roads towards success in the difficult but fascinating enterprise of answering the two questions above: is the amygdala a module for the emotion ‘fear’? And if so, how and why did it evolve?
A preliminary version of this paper has been presented at the ‘XXXI Seminario sulla evoluzione biologica e i grandi problemi della biologia’ organized by the Accademia dei Lincei, ‘Centro Liceo Interdisciplinare Beniamino Segre’, held in Rome, Italy, in February 2004.
One contribution of 15 to a Theme Issue ‘The use of artificial neural networks to study perception in animals’.
- © 2007 The Royal Society