## Abstract

We review the principal assumptions underlying the application of phase-response curves (PRCs) to synchronization in neuronal networks. The PRC measures how much a given synaptic input perturbs spike timing in a neural oscillator. Among other applications, PRCs make explicit predictions about whether a given network of interconnected neurons will synchronize, as is often observed in cortical structures. Regarding the assumptions of the PRC theory, we conclude: (i) The assumption of noise-tolerant cellular oscillations at or near the network frequency holds in some but not all cases. (ii) Reduced models for PRC-based analysis can be formally related to more realistic models. (iii) Spike-rate adaptation limits PRC-based analysis but does not invalidate it. (iv) The dependence of PRCs on synaptic location emphasizes the importance of improving methods of synaptic stimulation. (v) New methods can distinguish between oscillations that derive from mutual connections and those arising from common drive. (vi) It is helpful to assume linear summation of effects of synaptic inputs; experiments with trains of inputs call this assumption into question. (vii) Relatively subtle changes in network structure can invalidate PRC-based predictions. (viii) Heterogeneity in the preferred frequencies of component neurons does not invalidate PRC analysis, but can annihilate synchronous activity.

## 1. Introduction

Oscillatory behaviour is ubiquitous in physiology (Glass & Mackey 1988; Strogatz & Stewart 1993; Glass 2001; Winfree 2001) and a great diversity of rhythms arise in the central nervous system (Buzsáki 2006). A large body of evidence suggests that many of the prominent nervous system rhythms come about through complex interactions between a number of cellular elements and, in particular, through the process of synchronization, by definition a network phenomenon (Gray 1994; Ritz & Sejnowski 1997; Glass 2001; Strogatz 2003). Synchronized neural activity of various frequencies has been correlated with both normal physiological function (Winson 1978; Steriade *et al*. 1993, 1996; Roelfsema *et al*. 1997; Fries 2005) and linked to disease states including Parkinson's disease (Murer *et al*. 2002; Brown 2003; Hammond *et al*. 2007), schizophrenia (Spencer *et al*. 2004), autism (Uhlhaas & Singer 2006) and epilepsy (Milton & Jung 2003). The formation of neuron assemblies defined by their mutual synchrony has been hypothesized to be fundamental to cognitive function and consciousness (von der Malsburg & Schneider 1986; Singer 1999; Fries 2005). These observations have motivated a large amount of theoretical and experimental work focusing on the mechanisms of synchronization (Kuramoto 1984; Strogatz 2000; Winfree 2001; Ermentrout & Chow 2002; Kopell & Ermentrout 2002; Izhikevich 2007*a*) and the important network parameters that might lead to the modulation of synchrony in the normal and pathological state. When examining endogenous oscillators, the phase-response curve (PRC) provides a prominent and powerful tool because it is an experimentally assessable characteristic that has direct implications for testing the applicability of a large body of neuron and neuron network dynamical theory (Pavlidis 1973; Winfree 2001; Ermentrout & Chow 2002). 
Because of the key position of the PRC in the theory of synchronous network function, it is crucial to keep in mind both the limitations of its experimental measurement and the assumptions that underlie PRC-based network analysis.

The PRC quantifies an oscillator's phase shifts in response to perturbations delivered at different phases of the cycle (figure 1). The phases of the original and perturbed cycles are measured relative to some salient event that occurs at a consistent phase in the free-running oscillation, such as the action potential. Sometimes this information is presented as new phase versus old phase, giving a so-called phase transition curve (PTC) (Winfree 2001). In general, the shape of the PRC depends both on the shape of the stimulus and on the dynamics of the oscillator being perturbed. In the limit of a vanishingly small stimulus, the PRC becomes the infinitesimal PRC, a mathematical construct that reflects only the dynamics of the oscillator. When the oscillator is an individual neuron, the infinitesimal PRC is correlated with the bifurcation type responsible for the onset of action potential firing (Hansel *et al*. 1995; Ermentrout 1996). Under specific sets of conditions, the PRC of a neuron can be used to gain insight into entrainment, pacemaker mechanisms and the conditions for phase-locking between pairs and groups of neurons. Because the PRC theoretically links the dynamics of the neuronal elements making up a network to the synchronization properties of the network, it represents an experimentally assessable bridge between neuron biophysics and network dynamics.

PRC theory is a subdiscipline of nonlinear dynamical systems theory that focuses on oscillators. The applicability of nonlinear dynamical analysis to neuron dynamics is well established (Rinzel & Ermentrout 1998; Izhikevich 2007*b*), as is the link between different types of PRC and specific neuronal dynamic characteristics (Hansel *et al*. 1995; Ermentrout 1996). When the underlying biophysics of the oscillation is well understood, as in the case of the Hodgkin–Huxley model for the generation of action potentials in the squid axon, the use of the PRC theory combined with an accurate biophysically motivated computational model can provide specific, counterintuitive and verifiable predictions. For example, the PRC theory predicted that a small transient current injection (i.e. a perturbation) could extinguish an ongoing spiking oscillation resulting from specific levels of constant current injection in the squid giant axon (Best 1979). This prediction was subsequently verified in experiments (Guttman *et al*. 1980). There was also early success in the theoretical prediction of counterintuitive entrainment properties between neurons, such as the existence of entrainment regimes in which the frequency of the entrained neuron increases with the increased frequency of an *inhibitory* input (Moore *et al*. 1963; Perkel *et al*. 1964). These types of successes promote confidence in a theory, but as these theoretical tools have more recently been applied to the oscillations arising in more complicated neural networks, it has become clear that the existence or absence of synchrony, as well as important characteristics of synchrony such as stability, relative phase and frequency, depend sensitively on a number of network parameters. Some of these parameters describe the nature of the coupling (synaptic excitatory, synaptic inhibitory, electrical), delays in coupling, the level of network heterogeneity, noise and network geometry. 
These parameters can interact in such a way that there are no general rules linking such basic classifications as synaptic excitation or inhibition and the resulting dynamic behaviour emerging from a neuronal network. As a result of this observation, it has been suggested that each new network may require individual study and subsequent classification as ‘synchronizing’ or ‘antisynchronizing’ (Kopell 1988; Hansel *et al*. 1995). For neuroscientists not directly involved in the development of this theoretical perspective, it may not be clear what the limitations are or when it is appropriate to make use of these methods. The answers to both of these issues are bound up in the assumptions used to develop the various network theories. The purpose of this review is to make explicit these assumptions, and to discuss their apparent validity, with the goals of making the PRC theory more accessible and highlighting areas in which further study is merited.

## 2. Some basics on nonlinear dynamical theory and PRCs

This section is intended as a self-contained, somewhat detailed review of the PRC theory and its assumptions.

### (a) Nonlinear dynamics of the stable oscillation

The most fundamental assumption for the application of the PRC theory is the existence of one or more endogenous oscillators, because the PRC measures the phase response of an ongoing oscillation to perturbations at different time points during the natural period. If an ongoing oscillation does not exist, then the PRC theory is not applicable. The oscillator assumption remains valid for inter-spike interval variability of at least up to 10 per cent (Galan *et al*. 2005). Neurons and networks of neurons are dynamical systems, meaning that they have state variables that evolve continuously in time at rates dependent on the state of the system, and can therefore be described with differential equations. The assumption of a stable oscillation is equivalent to the assumption of the existence of a limit cycle, a term from the study of nonlinear differential equations. A limit cycle is a periodic solution to a system of differential equations that is stable to small perturbations, meaning that transient changes to the original form of the oscillation quickly decay away, leaving the original oscillatory activity intact. However (and importantly), the oscillator returns with a phase shift relative to the unperturbed cycle. Many physiological oscillations in nature have this quality, such as the beating of the heart, the respiratory rhythm and the oscillations of the nervous system (Winfree 2001). The existence of a limit cycle requires that the system of differential equations be nonlinear (a requirement for the limit cycle to be stable) and have at least two variables (i.e. be at least second order) to create the interplay between variables that can give rise to an oscillation, generally a positive action followed by delayed negative feedback. A system of two differential equations can be written in the general form:
$$\frac{\mathrm{d}X}{\mathrm{d}t} = f(X, Y; \alpha), \qquad \frac{\mathrm{d}Y}{\mathrm{d}t} = g(X, Y; \alpha),$$

where *X* and *Y* are the *state variables* evolving in time with rates of change defined by the functions *f* and *g*, and α is a *parameter* or set of parameters that remain fixed. In a typical conductance-based neural model, state variables include membrane potential and ion-channel gating variables; parameters include maximal conductances and half-activation voltages of voltage-gated conductances. It should be stressed here that when we observe an oscillatory behaviour in nature that is stable to small perturbations, it is not an assumption that it can be modelled by the solution to a set of differential equations; this is true by definition. However, many fundamentally different types of dynamical systems can give rise to limit cycle solutions, and where we may go wrong as scientists is in misapplying a particular form to the problem at hand. The situation is improved by results from the computational neuroscience literature demonstrating that the large variety of natural dynamical systems can be classified by a few underlying mathematical mechanisms (Hoppensteadt & Izhikevich 1997; Rinzel & Ermentrout 1998; Ermentrout & Chow 2002). We will discuss in the following sections how a few different classes of dynamics give rise to fundamentally different emergent phenomena in networks of neurons.
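The general two-variable form can be made concrete in a few lines of code. The sketch below instantiates *f* and *g* with the FitzHugh–Nagumo equations, a standard reduced neuron model chosen here (rather than the Morris–Lecar model used in the figures) because its fast positive feedback and slow negative feedback match the interplay just described; the parameter values are the common textbook ones and are purely illustrative.

```python
import numpy as np

# A concrete instance of dX/dt = f(X, Y; alpha), dY/dt = g(X, Y; alpha):
# the FitzHugh-Nagumo model.  X provides fast positive feedback,
# Y the delayed negative feedback; alpha plays the role of injected drive.
def f(X, Y, alpha):
    return X - X**3 / 3 - Y + alpha      # fast "activator" variable

def g(X, Y, alpha):
    eps, a, b = 0.08, 0.7, 0.8           # slow recovery-variable parameters
    return eps * (X + a - b * Y)         # delayed negative feedback

def integrate(alpha, X0=-1.0, Y0=1.0, dt=0.01, steps=100_000):
    """Forward-Euler integration; returns the trajectories of both states."""
    X = np.empty(steps)
    Y = np.empty(steps)
    X[0], Y[0] = X0, Y0
    for k in range(steps - 1):
        X[k + 1] = X[k] + dt * f(X[k], Y[k], alpha)
        Y[k + 1] = Y[k] + dt * g(X[k], Y[k], alpha)
    return X, Y

X, Y = integrate(alpha=0.5)              # this drive destabilizes the rest state
```

With `alpha = 0.5` the fixed point is unstable and the trajectory settles onto a limit cycle; with `alpha = 0` the same system relaxes to rest, illustrating that the oscillation is a property of the parameters, not of the equations alone.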

Nonlinear differential equations in general do not have analytic solutions (i.e. there are no solutions in the form of an explicit equation). Because of this, techniques have been developed that use a combination of analytic, graphical and numerical methods to solve and visualize solutions (Strogatz 1994; Guckenheimer & Holmes 1997; Ermentrout 2002). A great benefit of these techniques is that visualization is more conducive to intuition. A standard technique for visualizing the solutions to nonlinear differential equations is the phase plane (figure 2), not to be confused with the phase of an oscillator. A phase plane plots the state variables against each other instead of the standard plot of one variable as a function of time. In a phase plot, time is implicit, and the system may progress more quickly or more slowly along different parts of the trajectory. Figure 2*a* is an example of a trajectory in a phase plane corresponding to an oscillation of a neuron involving an action potential. This particular example uses a two-dimensional Morris–Lecar model (Morris & Lecar 1981) that captures many important dynamics of higher dimensional Hodgkin–Huxley conductance-based models. Reducing the dimension of systems of equations, making them more tractable to analysis, is a common practice (Rinzel & Ermentrout 1998), and the arguments for such an approach will be discussed in a following section. The Morris–Lecar model incorporates a voltage-gated Ca^{2+} channel and a voltage-gated delayed rectifier K^{+} channel. The state variables are the voltage, *V*, and the gating variable for the potassium channel, *w*. As the solution depicted in figure 2 is periodic, both the voltage and the gating variable repeat in time and the trajectory is therefore a closed loop. The phase *of the oscillator* is a parametrization with respect to time. Therefore, the phase progresses linearly with time (i.e. 
the period is divided into equal phase increments). However, when the state variables of an oscillator are plotted against each other illustrating the limit cycle, the phase of the oscillator can advance more or less rapidly depending on the region of the trajectory. For example, the green, yellow and orange region of the trajectory in figure 2 advances through a relatively large amount of phase even though it covers a small region of the limit cycle. Conversely, during the rapid changes occurring during the action potential, the state variables cover a small amount of phase but a large region of the limit cycle. The time courses of the gating variable and the voltage are also displayed in figure 2*b*,*c*, and the colour code maps the corresponding sections of the phase-plane trajectory onto the traditional time series, demonstrating the variability of the rate of change of each variable as it travels around the trajectory. Examples of perturbed trajectories off a simple limit cycle are shown with black curves, demonstrating the attraction back to the limit cycle (red) and confirming its stability (figure 2*d*). Figure 2*d* illustrates a second fundamental assumption necessary for the proper application of the PRC theory: the perturbations (figure 2*d*, black arrows) used to construct the PRC must result in trajectories (figure 2*d*, black traces) that decay back to the limit cycle (figure 2*d*, red trace) within one period. Both perturbations illustrated in figure 2*d* meet this assumption. Although specific formulations of the PRC theory exist for dealing with perturbations whose effects carry into a second cycle (Oprisan *et al*. 2004), the large majority of computational and experimental work using the PRC theory requires the rapid-decay assumption to be valid. Effects beyond the second cycle become intractable because of the complexity of the bookkeeping.
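The stability property illustrated in figure 2*d* can be checked numerically. The sketch below integrates the Morris–Lecar model with the commonly quoted Hopf-regime parameter set (values taken as assumptions here, not necessarily those used for the figures), kicks the voltage once mid-run, and confirms that the late-time oscillation has the same amplitude as the unperturbed one, i.e. the perturbed trajectory has relaxed back onto the limit cycle, retaining only a phase shift.

```python
import numpy as np

# Morris-Lecar model, Hopf-regime parameter set (assumed values, as commonly
# quoted from Rinzel & Ermentrout 1998); time in ms, voltage in mV.
P = dict(C=20.0, gL=2.0, VL=-60.0, gCa=4.4, VCa=120.0,
         gK=8.0, VK=-84.0, V1=-1.2, V2=18.0, V3=2.0, V4=30.0, phi=0.04)

def ml_rhs(V, w, I):
    """Right-hand side: voltage and K+ gating-variable derivatives."""
    m_inf = 0.5 * (1 + np.tanh((V - P['V1']) / P['V2']))
    w_inf = 0.5 * (1 + np.tanh((V - P['V3']) / P['V4']))
    tau_w = 1.0 / np.cosh((V - P['V3']) / (2 * P['V4']))
    dV = (I - P['gL'] * (V - P['VL']) - P['gCa'] * m_inf * (V - P['VCa'])
          - P['gK'] * w * (V - P['VK'])) / P['C']
    dw = P['phi'] * (w_inf - w) / tau_w
    return dV, dw

def run(I=120.0, T=2000.0, dt=0.05, kick_at=None, kick_mV=10.0):
    """Euler integration; optionally kick V once at time `kick_at` (ms)."""
    n = int(T / dt)
    V, w = np.empty(n), np.empty(n)
    V[0], w[0] = -30.0, 0.1
    for k in range(n - 1):
        dV, dw = ml_rhs(V[k], w[k], I)
        V[k + 1] = V[k] + dt * dV
        w[k + 1] = w[k] + dt * dw
        if kick_at is not None and abs((k + 1) * dt - kick_at) < dt / 2:
            V[k + 1] += kick_mV          # brief depolarizing perturbation
    return V

V_free = run()                           # unperturbed limit cycle
V_kick = run(kick_at=1000.0)             # perturbed once, halfway through
amp_free = np.ptp(V_free[-5000:])        # late-time oscillation amplitude
amp_kick = np.ptp(V_kick[-5000:])        # should match: cycle is stable
```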

Phase-plane analysis is powerful because it makes obvious some fundamental qualitative properties of the dynamical system. Examining the system of two first-order differential equations used in figure 2*a*–*c*, we can set the rate of change of each state variable equal to zero:

$$f(X, Y; \alpha) = 0, \qquad g(X, Y; \alpha) = 0.$$

When these two curves are plotted, they give the nullclines of the phase plane (figure 3), the curves along which the derivative of each state variable is zero. The intersection of the nullclines defines a fixed point, or constant state, of the system, at which the rates of change of both state variables are zero. An example of a fixed point is the resting membrane potential of a neuron. Furthermore, the stability of the fixed point can be determined by linearizing the nonlinear equations around the fixed point and examining the properties of the linear system. Additionally, the existence of limit cycles and their stability can be determined using either analytical or numerical techniques (Hale 1969). Variation of *parameters* can cause changes in the properties of these fixed points and limit cycles, and at special transition points called bifurcations, a specific parameter change can alter the qualitative properties of the dynamical system (i.e. create or destroy fixed points and limit cycles). A common example of a bifurcation in neuronal dynamics is the transition of a neuron from rest to a spiking oscillation, which the experimenter can cause by controlling a parameter, the holding current. Figure 3 shows a series of phase plots demonstrating the bifurcation that causes the shift from rest (the stable fixed point becomes unstable) to a stable oscillation in response to increasing current injection into the model neuron. The nature of the bifurcation involved in moving a neuron from rest to oscillatory firing is important in determining whether, and under what conditions, populations of neurons can give rise to synchrony.
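Finding a fixed point as the intersection of the nullclines is easy to demonstrate numerically. The sketch below uses the FitzHugh–Nagumo nullclines (a stand-in two-variable model; the figures in this article use Morris–Lecar) and a simple bisection on their difference; all parameter values are illustrative.

```python
a, b, I = 0.7, 0.8, 0.0                  # illustrative FitzHugh-Nagumo values

def x_nullcline(X):                      # dX/dt = 0  ->  Y = X - X^3/3 + I
    return X - X**3 / 3 + I

def y_nullcline(X):                      # dY/dt = 0  ->  Y = (X + a)/b
    return (X + a) / b

def fixed_point(lo=-3.0, hi=3.0, tol=1e-10):
    """Bisection on the difference of the nullclines: their crossing is the
    fixed point, where both state variables have zero rate of change."""
    d = lambda X: x_nullcline(X) - y_nullcline(X)
    assert d(lo) * d(hi) < 0, "bracket must straddle the intersection"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d(lo) * d(mid) > 0:
            lo = mid                     # crossing lies in the upper half
        else:
            hi = mid                     # crossing lies in the lower half
    return 0.5 * (lo + hi)

X_star = fixed_point()
Y_star = y_nullcline(X_star)             # the model's resting state
```

For these parameter values the intersection sits near (−1.2, −0.62), the familiar resting state of the FitzHugh–Nagumo model.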

### (b) Type I and type II membrane excitability

Hodgkin was the first to characterize some fundamental differences in the way the squid axon preparation transitioned from rest to firing in response to current injection (Hodgkin 1948). In particular, Hodgkin found that some axons showed a continuous transition from zero frequency to arbitrarily low frequencies of firing, whereas others showed an abrupt onset of repetitive firing at a non-zero frequency. These two forms of membrane excitability have been given the names type I, for the arbitrarily low onset frequency, and type II, for the non-zero onset frequency. Subsequent work has established that type I and type II membrane excitability are associated with distinct bifurcations: a saddle-node bifurcation for type I and a Hopf bifurcation for type II (Rinzel & Ermentrout 1998). There are a number of other bifurcation types that can arise in neuron and neuronal network dynamics (Hoppensteadt & Izhikevich 1997; Izhikevich 2007*b*), but experimental evidence for these two types arises frequently and they dominate the computational neuroscience literature. We will focus on these two types to highlight the important restrictions regarding the application of the PRC theory.
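The defining difference between the two excitability types is visible in an f–I curve. The sketch below uses the quadratic integrate-and-fire model, the canonical model for type I (saddle-node) excitability (cf. Ermentrout & Kopell 1986), with illustrative reset and threshold values: its firing rate rises continuously from zero as the current crosses threshold (here *I* = 0), whereas a type II membrane would instead begin firing at a non-zero frequency.

```python
def qif_rate(I, v_reset=-10.0, v_peak=10.0, dt=1e-3, T=200.0):
    """Mean firing rate (spikes per unit time) of a quadratic
    integrate-and-fire neuron, dv/dt = v^2 + I, via Euler integration."""
    v, spikes = v_reset, 0
    for _ in range(int(T / dt)):
        v += dt * (v * v + I)
        if v >= v_peak:                  # spike: register it and reset
            v = v_reset
            spikes += 1
    return spikes / T

currents = (0.05, 0.5, 2.0)
rates = [qif_rate(I) for I in currents]  # rises monotonically from near zero
```

Close to threshold the rate scales roughly as the square root of the distance to the bifurcation, so arbitrarily low firing frequencies are reachable, which is the type I signature Hodgkin observed.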

Type I and type II membrane excitability are associated with other important dynamic characteristics of neurons. Type I membranes have a constant action potential amplitude, while type II membranes can have a graded action potential amplitude. Type I membranes are conducive to coincidence detection and are often referred to as integrators. Type II membranes can oscillate at subthreshold potentials, are sensitive to certain frequencies of input and are referred to as resonators. These distinctions are being substantiated by increasing amounts of experimental evidence. Specific subtypes of cortical pyramidal neurons (Reyes & Fetz 1993; Gutkin & Ermentrout 1998; Mattei & Schmied 2002; Nowak *et al*. 2003; Tateno *et al*. 2004; Tateno & Robinson 2007*a*) and cortical interneurons (Mancilla *et al*. 2007; Tsubo *et al*. 2007) appear biased toward type I or type II membrane excitability depending on neuron type and cortical layer. These observations figure prominently in hypotheses about the function of these different neuron types, especially the interneurons. Interneurons in intact animals fire robustly and synchronously at a variety of frequencies (Bragin *et al*. 1995; Ylinen *et al*. 1995), and *in vitro* interneurons alone can support oscillations (Whittington *et al*. 1995), making them good candidate networks for the source of oscillations observed in the cortex. However, variability of the type of membrane excitability within a neuron class complicates the picture (Mancilla *et al*. 2007), and type II membrane excitability might be rarer than initially thought (Rush & Rinzel 1995; Ermentrout & Chow 2002). Further complications arise from experimental evidence showing that membrane excitability type within single hippocampal pyramidal neurons can change in response to modulation of membrane currents such as *I*_{m} (Ermentrout *et al*. 2001) and variations in background synaptic activity (Prescott *et al*. 2008), with important implications for the neuromodulation of network dynamics (Crook *et al*. 1998*b*; Prescott *et al*. 2006, 2008).

### (c) The PRC and the PTC

The PRC characterizes the phase shift of a limit cycle oscillation in response to perturbations, as a function of where in the natural period the perturbation occurs (figure 1). For an oscillator to be able to synchronize with a periodic perturbation, which could come from other oscillators in a network, the oscillator must respond differently to the perturbation at different phases of its own oscillation. Because of this condition for synchrony, the PRC is a natural way to display oscillator behaviour and to quickly gain insight into the potential for synchrony. Figure 1 shows an experiment in which a neuronal oscillation is perturbed at various phases through intracellular injection of a small transient current. The perturbation can cause the period to shorten, a positive phase shift (phase advance), or to lengthen, a negative phase shift (phase delay). Experimentally, the shift is generally calculated by comparing the length of the cycle in which the perturbation occurs with that of the previous cycle. Two types of PRC are often encountered in neural oscillators: (i) PRCs that have only positive phase shifts (phase advances), called type I PRCs, and (ii) PRCs that have both positive and negative phase shifts, called type II PRCs (Hansel *et al*. 1995). We will distinguish PRC types from excitability types by referring to type I or II PRCs and type I or II membranes, respectively (Ermentrout 1996). The information in a PRC is often replotted as new phase versus old phase, giving the PTC. Computational models have correlated type I membranes and type II membranes with type I PRCs and type II PRCs, respectively, linking the biophysics of individual neurons to the synchronization properties of networks (Hansel *et al*. 1995; Ermentrout 1996), a mnemonically fortuitous result.
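Measuring a PRC in simulation follows the same recipe as the experiment in figure 1: find the unperturbed period, deliver a small kick at a range of phases, and record the normalized change in cycle length. The sketch below does this for the quadratic integrate-and-fire oscillator (illustrative parameters); because a depolarizing kick can only advance its next spike, the resulting PRC is purely positive, i.e. type I.

```python
import numpy as np

def time_to_spike(I=1.0, kick_time=None, kick=1.0,
                  v_reset=-10.0, v_peak=10.0, dt=1e-4):
    """Time from reset to the next spike of a quadratic integrate-and-fire
    neuron, dv/dt = v^2 + I, with an optional instantaneous kick to v."""
    v, t, kicked = v_reset, 0.0, False
    while v < v_peak:
        if kick_time is not None and not kicked and t >= kick_time:
            v += kick                    # small depolarizing perturbation
            kicked = True
        v += dt * (v * v + I)
        t += dt
    return t

T0 = time_to_spike()                     # natural period of the oscillator
phases = np.linspace(0.05, 0.95, 10)
# phase shift = (natural period - perturbed period) / natural period,
# so a shortened cycle registers as a positive shift (phase advance)
prc = np.array([(T0 - time_to_spike(kick_time=p * T0)) / T0 for p in phases])
```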

## 3. Validity of crucial assumptions: single neurons

In this section, we discuss the validity of the underlying assumptions about single neurons in the phase-response theory.

### (a) Are neurons oscillators?

It is a truism that application of coupled oscillator theory in neuroscience requires the neurons in question to be intrinsic oscillators. One might legitimately question this assumption, given the commonly perceived irregularity of *in vivo* spike trains. However, in the hippocampus, which has well-documented, 4–12 Hz ‘theta’ oscillatory fields (Gray *et al*. 1989; O'Keefe 1993; Buzsáki 2006), detailed analysis reveals that identifiable populations of interneurons fire regularly and at a consistent phase with respect to the field (Klausberger *et al*. 2003). Furthermore, given sufficiently dense synaptic coupling, some degree of irregularity can be tolerated. For example, in synchronous activity among simulated interneurons, individual cells can drop in and out of the firing population without fundamentally altering the population rhythm (White *et al*. 1998*a*).

In contrast with interneurons, hippocampal pyramidal cells fire highly irregularly during the theta rhythm and fire at phases that differ systematically with the animal's position within the pyramidal cell's spatial receptive field (O'Keefe & Recce 1993). In this case, the pyramidal cells appear to be providing excitation to the network, but it seems unlikely that the phase-response characteristics of sparsely firing pyramidal cells are important. In simulations, synchronous fields can arise from populations of cells in which no neurons fire regularly (Brunel & Wang 2003; Geisler *et al*. 2005). In these cases, other forms of analysis are more appropriate than phase-response curves. These analyses quantify the contributions of synaptic time constants and properties of the postsynaptic membrane to the probability of spiking at a given time after a synaptic input, and use a self-consistency argument to find frequencies of synchronization in sparsely firing networks (Brunel & Wang 2003; Geisler *et al*. 2005). Other classes of excitatory neurons, like spiny stellate cells of the entorhinal cortex, can fire as theta-oscillators and act as ideal subjects for PRC analysis related to generation of the theta rhythm (Netoff *et al*. 2005*a*,*b*).

Phase-response techniques have been used in conjunction with sinusoidal forcing to ensure periodic firing (Lengyel *et al*. 2005). In this case, we would argue that phase-response techniques are of limited value. The neuron is not an autonomous oscillator, and while its spike times can be perturbed within a given cycle, the cell will inexorably return to the phase of firing set by the forcing input.

### (b) Are neurons too noisy to be described by phase-response theory?

Sources of noise in neurons include background network activity, channel noise and synaptic noise (Matsumura *et al*. 1988; White *et al*. 2000; Steriade *et al*. 2001; Destexhe *et al*. 2003; Faisal *et al*. 2008). This noise generally adds to neuronal variability and therefore to the variability of our experimental measures (Faisal *et al*. 2008). As a result, data acquired for PRCs can be quite scattered (Galan *et al*. 2005; Gutkin *et al*. 2005; Netoff *et al*. 2005*b*). Such scatter could make it difficult to determine the shape of the PRC, type I or type II, which is a crucially important factor in how neurons synchronize. Galan *et al*. (2005) have suggested techniques for validating a noisy experimentally determined PRC, and the PRC can be recovered even in the presence of slower non-stationarities, provided the inter-spike interval is short (Gutkin *et al*. 2005). Beyond the scatter it adds to the PRC, noise can alter the actual dynamics of network function. For example, noise can enhance signal detection through the mechanism of stochastic resonance and can stabilize synchrony (Douglass *et al*. 1993; Braun *et al*. 1994). Alternatively, from a random dynamical systems viewpoint, theoretical work has shown that noise can destabilize a limit cycle and replace it with a stochastic equilibrium (Tass 1999). Therefore, depending on the context, noise can either stabilize or destabilize oscillations.
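One practical response to scatter, in the spirit of Galan *et al*.'s approach (the exact estimator in that paper may differ; this is only a sketch of the underlying idea), is to exploit the fact that the true PRC is a smooth periodic function of phase and keep only its lowest Fourier modes. Here the "data" are synthetic: an assumed type II PRC plus Gaussian scatter.

```python
import numpy as np

rng = np.random.default_rng(0)
phases = np.linspace(0.0, 1.0, 200, endpoint=False)
# synthetic "true" PRC (type II: both advances and delays) plus scatter;
# amplitudes and noise level are illustrative assumptions
true_prc = 0.3 * np.sin(2 * np.pi * phases) + 0.1 * np.sin(4 * np.pi * phases)
noisy_prc = true_prc + rng.normal(0.0, 0.2, phases.size)

def fourier_smooth(samples, n_modes=3):
    """Keep only the lowest n_modes Fourier modes of a periodic signal."""
    coeffs = np.fft.rfft(samples)
    coeffs[n_modes + 1:] = 0.0           # discard high-frequency scatter
    return np.fft.irfft(coeffs, n=samples.size)

smoothed = fourier_smooth(noisy_prc)
rms_noisy = np.sqrt(np.mean((noisy_prc - true_prc) ** 2))
rms_smooth = np.sqrt(np.mean((smoothed - true_prc) ** 2))
```

Because only a handful of Fourier bins are retained, most of the noise power is discarded while the smooth type II shape, with both positive and negative lobes, survives.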

In entorhinal stellate cells, intrinsic neuronal noise has been suggested to *generate* cellular oscillations by providing irregular drive to a resonant membrane (White *et al*. 1998*b*; Dorval & White 2005). In such cases, channel noise appears necessary for subthreshold oscillations and is an important shaper of responses to weak periodic drive (White *et al*. 1998*b*; Dorval & White 2005). The effects of noise on periodic spiking in entorhinal cells are less well studied, but in a modelling study, Acker *et al*. (2003) saw that realistic levels of noise altered the basins of attraction of synchronous and antisynchronous activity among simulated coupled neurons. In the hypothetical case of near-periodic firing that appears only in the presence of noise, it is not clear how the network might behave, but such a case would be a clear violation of the assumption of a stable limit cycle that is a crucial underpinning of the PRC theory.

### (c) Are reduced models relevant to real-world neurons?

The figures presented have used a reduced model of membrane excitability, the Morris–Lecar model (Morris & Lecar 1981). Because it has two state variables instead of four, it is conducive to the two-dimensional phase plots we have been using. Experimentalists are often alarmed at the casting away of so much apparent detail. Are we throwing the baby out with the bathwater just for the convenience of graphics? What is lost in such reductions? The four-dimensional Hodgkin–Huxley equations as originally formulated and parametrized exhibit type II membrane excitability (Rinzel & Ermentrout 1998). Other conductance-based models and different parametrizations exhibit type I (Connor *et al*. 1977; Morris & Lecar 1981; Rinzel & Ermentrout 1998; Okun & Lampl 2008). It can be shown mathematically that there are certain canonical models exhibiting the same qualitative dynamical properties (Ermentrout & Kopell 1986; Gutkin & Ermentrout 1998; Izhikevich 1999). They are called canonical because any system of differential equations with the same qualitative dynamics can be transformed into the canonical equations while preserving those dynamics. These simpler formulations are more conducive to extending theoretical analysis and to gaining intuition about dynamics (Morris & Lecar 1981; Ermentrout & Kopell 1998). What is lost are the quantitative details of the original system. The specific trajectories of the original system can therefore no longer be predicted, but the qualitative behaviour (types of stable states, types of bifurcations) still can, and it is these characteristics that are important for determining the conditions for synchrony in networks of elements that are themselves dynamical systems. What about the reduction of the number of state variables? Bifurcation points are special points in dynamical systems theory. 
Near these points, the dynamics of some of the dimensions become trivial and the remaining state variables define the dynamics of the system. Therefore, when reduced models (i.e. models with fewer variables) of neuron dynamics are used to infer the dynamics of networks of neurons, the fundamental assumption is that the neurons operate near a bifurcation point, the special point at which the dynamics of some of the variables become unimportant for determining qualitative dynamical behaviour (Hoppensteadt & Izhikevich 1997). As neurons routinely switch between a resting state and a firing state, a transition requiring a bifurcation, this assumption seems reasonable. Experimental evidence suggesting balanced excitation and inhibition in neuronal circuits also supports it (Okun & Lampl 2008), but it is not guaranteed to hold. When in doubt, numerical studies of networks using full conductance-based models do not require this assumption.

A frequently used reduced model that is even simpler than the two-dimensional models discussed so far is the integrate-and-fire model, which reduces the dynamics of each neuron to a single equation. In this model, a variable, such as voltage, monotonically increases to a threshold at which it is reset to a lower level. The moment of reset is a discontinuity in the dynamics. This approximation is reasonable because of the dynamics of a relaxation oscillator, in which one variable changes much more quickly than the other; the integrate-and-fire model corresponds to the limit in which the fast variable changes instantaneously relative to the slow one. The simplicity of this model makes it more tractable theoretically and it has been used in the derivation of a number of important results regarding networks of coupled oscillators (Mirollo & Strogatz 1990; Van Vreeswijk *et al*. 1994). However, important consequences can arise from the discontinuity. For example, Ermentrout and Kopell have shown that, for many networks of coupled oscillators, strong connections can quench network oscillations (Ermentrout & Kopell 1990), a result that Mirollo and Strogatz did not see owing to the discontinuity in their integrate-and-fire model (Mirollo & Strogatz 1990). Furthermore, subthreshold dynamics play a critical role in shaping the PRC, and thus in synchrony, and these dynamics are absent in this reduced model.
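A minimal leaky integrate-and-fire implementation makes the discontinuity explicit: a single state variable relaxes toward the drive, and the spike is replaced by an instantaneous reset. All parameter values below are illustrative.

```python
import numpy as np

def lif_spike_times(I=2.0, tau=10.0, v_th=1.0, v_reset=0.0, dt=0.01, T=200.0):
    """Leaky integrate-and-fire: dv/dt = (-v + I)/tau, reset on threshold."""
    v, spikes = v_reset, []
    for k in range(int(T / dt)):
        v += dt * (-v + I) / tau         # smooth leaky integration...
        if v >= v_th:
            spikes.append(k * dt)        # ...punctuated by a discontinuous
            v = v_reset                  # reset that stands in for the spike
    return np.array(spikes)

spikes = lif_spike_times()
isis = np.diff(spikes)                   # inter-spike intervals
```

For constant suprathreshold drive the inter-spike interval is constant, close to the analytic value tau·ln((I − v_reset)/(I − v_th)); the model oscillates, but with no subthreshold dynamics beyond a single exponential relaxation, which is exactly what the text identifies as its limitation for PRC work.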

### (d) Does spike-rate adaptation invalidate PRC analysis?

Theoretical results regarding network dynamics usually assume that the parameters describing neuronal elements are not time dependent and, therefore, that the PRCs are not time dependent. However, the details of the shape of the PRC depend on the intrinsic ionic conductances of the neurons and on the firing rate (Clay *et al*. 1990; Demir *et al*. 1997; Ermentrout *et al*. 2001; Acker *et al*. 2003; Pfeuty *et al*. 2003; Gutkin *et al*. 2005). Adapting currents such as the M-type potassium current (*I*_{m}) and the Ca^{2+}-activated potassium current (*I*_{AHP}), responsible for spike frequency adaptation in many neurons, can alter the shape of the PRC in response to ongoing changes in membrane potential (Ermentrout *et al*. 2001; Pfeuty *et al*. 2003; Gutkin *et al*. 2005; Prescott *et al*. 2006; Prescott & Sejnowski 2008). As a result, the shape of the PRC, and with it the synchronization properties of the neuron, becomes dependent on the firing rate of the neuron and on the background activity of the network (Pfeuty *et al*. 2003; Gutkin *et al*. 2005; Prescott *et al*. 2006; Prescott & Sejnowski 2008).

Theoretical work suggests that adaptation-mediated changes in the shape of the PRC can be quite important, changing neurons from resonators (type II) to integrators (type I) and altering the conditions under which synchronization occurs (Crook *et al*. 1998*b*; Ermentrout *et al*. 2001; van Vreeswijk & Hansel 2001; Prescott *et al*. 2008). In a modelling study, Prescott and Sejnowski showed that *I*_{m} and *I*_{AHP} can have opposite effects on neural coding, with *I*_{m} optimizing spike-time encoding and *I*_{AHP} optimizing spike-rate encoding, owing to their different biophysical mechanisms of activation (Prescott & Sejnowski 2008). As both of these currents can exist in the same neuron, it becomes crucial to understand which operating regime the neuron is in. For example, the same group found experimentally that pyramidal neurons acted as integrators *in vitro*, but changed into resonators when *in vivo* activity was mimicked (Prescott *et al*. 2008). Neuromodulators such as acetylcholine and dopamine are known to modulate adaptation, and theoretical and experimental results have confirmed the sensitivity of membrane excitability, and therefore of the PRC, to modulation by acetylcholine (Stiefel *et al*. 2008, 2009).

The dependence of the form of the PRC, and the subsequent implications for network synchrony, on the state of adaptation of the neuron substantially complicates the use of the PRC theory for understanding network function. The required parameters now include the current dynamical state of the network of which the neuron is a part, as well as the recent history of the membrane potential. One experimental strategy for dealing with this complication is to measure the PRC at firing frequencies thought to be meaningful in normal function. In this case, adaptation is allowed to stabilize before the PRC is measured, yielding a frequency-dependent PRC (Mancilla *et al*. 2007; Tsubo *et al*. 2007). Alternatively, Cui *et al*. have suggested measuring what they call the functional PRC (Cui *et al*. 2009). In this experimental paradigm, the perturbation occurs at a fixed interval after a marker event and thereby incorporates adaptation (Pinsker & Kandel 1977; Cui *et al*. 2009). In either case, one must operate under the important constraint that the derived PRC is valid only for a neuron in the particular state of measurement. In neurons that adapt to a large degree, and are subject to substantial variations in drive (and thus level of adaptation) *in vivo*, the conclusions that one can legitimately draw from PRC analysis are limited.
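The need to let adaptation settle before measuring a frequency-dependent PRC can be illustrated with a toy adapting integrate-and-fire model (the equations and parameter values below are illustrative, not a fit to any particular neuron). Inter-spike intervals lengthen over the first several spikes as the adaptation variable builds up, and only after they stabilize does the cell have a well-defined operating point at which a PRC could meaningfully be measured:

```python
def adapting_lif(n_spikes=20, I=2.0, b=0.3, tau_w=2.0, v_th=1.0, dt=1e-4):
    """Integrate-and-fire with spike-triggered adaptation:
    dv/dt = I - v - w,  dw/dt = -w/tau_w,  w -> w + b at each spike.
    Returns the sequence of inter-spike intervals."""
    v, w, t, t_last = 0.0, 0.0, 0.0, 0.0
    isis = []
    while len(isis) < n_spikes:
        v += dt * (I - v - w)
        w -= dt * w / tau_w
        t += dt
        if v >= v_th:
            v = 0.0
            w += b                 # adaptation builds with each spike
            isis.append(t - t_last)
            t_last = t
    return isis
```

In this sketch the later intervals are longer than the first and converge to a steady value; a PRC measured before that convergence would mix the perturbation's effect with the ongoing drift in excitability.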

### (e) Does it matter whether inputs are delivered to the soma or dendrites?

In most experimental measurements of PRCs, researchers have applied the input at the soma, where the recording electrode is located. This choice of experimental convenience may yield inaccurate results in cases where the true input arrives dendritically. In modelling studies (Crook *et al*. 1998*a*; Goldberg *et al*. 2007), moving inputs onto active or passive dendritic cables dramatically changes the resulting PRCs, and hence the effects of those inputs on neuronal synchronization. These findings are supported by experimental data as well (Goldberg *et al*. 2007), suggesting that experimental protocols should be modified to include more realistic input locations, delivered via dendritic patch electrode, nerve shock, iontophoresis, uncaging or some other method.

## 4. Validity of crucial assumptions: network properties

### (a) Network organization: coupled oscillators versus common oscillatory drive

The discussion of the PRC theory so far has concentrated on predicting synchrony in connected neuronal networks, in which synchrony is thought to be an emergent property of the dynamics and connectivity of the neurons making up the network. However, synchrony could also arise in subpopulations of neurons with similar dynamics (Mancilla *et al*. 2007; Tateno & Robinson 2007*b*; Tsubo *et al*. 2007) owing to correlated input, which could represent noise or structured sensory input (Ermentrout *et al*. 2008). When neuronal network firing is correlated owing to an unknown correlated input, what appears to be an internally generated network phenomenon is simply dynamically similar neurons responding to that shared input (Teramae & Tanaka 2004; Galan *et al*. 2006; Schoppa 2006). This phenomenon has been observed experimentally in the noise-induced synchrony of the mitral cells of the olfactory bulb (Galan *et al*. 2006). Theoretical results suggest that the type of membrane excitability is related to susceptibility to this form of synchronization, with type II membrane excitability more susceptible than type I (Gutkin & Ermentrout 1998; Robinson & Harsch 2002; Galán *et al*. 2007; Marella & Ermentrout 2008; Abouzeid & Ermentrout 2009). When studying the emergent network oscillations of the brain such as theta, gamma and alpha, we therefore need to be able to distinguish between synchrony emerging from the dynamics of the immediate network being examined and synchrony arising from correlated input. In this situation, the entire oscillating population could be treated as a single oscillator and the PRC theory used to study synchronization between several such populations (Ko & Ermentrout 2009). This type of PRC experiment has been done in the context of epilepsy, treating the cortex and thalamus as linked population oscillators (Perez Velazquez *et al*. 2007).
If the dynamics arise locally in the oscillator being stimulated, perturbations will shift its phase; if the dynamics are driven by a distant oscillator, the phase will not shift. This problem is of practical importance, as there is hope of using stochastic phase resetting (Tass 1999) to destabilize pathological rhythms.

### (b) Networks: weak and strong coupling

In interconnected networks of oscillators, the most fundamental assumption is that the perturbed trajectory returns to the limit cycle before the next perturbation occurs. The goal of the PRC theory applied to networks is to predict conditions under which natural oscillators become entrained to periodic forcing or to other oscillators coupled to them; the oscillator will therefore receive a periodic train of perturbations. The strategy for predicting the result of coupling is to use the PRC of the oscillator to derive a phase shift in response to the first perturbation, then from this new phase the response to the second perturbation, and so on. If at some point the phase shift stabilizes to a constant value for all subsequent perturbations, then a phase-locked state between the coupled oscillators has been achieved. However, if the trajectory does not make it back to the limit cycle before the next perturbation, then this strategy does not work and the majority of the PRC theory is no longer applicable. This assumption can be tested experimentally by measuring the periods after the one containing the perturbation: if the assumption is valid, only the period containing the perturbation should be modified, and the following period should return to the natural period of the oscillator. There are strategies for dealing with the failure of this assumption that involve secondary PRCs related to the period following the one containing the perturbation (Oprisan *et al*. 2004), but these will not be considered in detail in the present review.
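This iterate-the-PRC strategy is simple to write down. In the sketch below, the PRC is a hypothetical sinusoid (not measured data) and the oscillator receives pulses whose period exceeds its natural period by two per cent; the map is iterated until the phase at which each pulse arrives stops changing, which is the phase-locked state:

```python
import numpy as np

a = 0.05          # hypothetical PRC amplitude (illustrative, not measured)
delta = 0.02      # detuning: forcing period exceeds natural period by 2%

def prc(phi):
    """Hypothetical PRC: normalized phase advance caused by one pulse."""
    return a * np.sin(2 * np.pi * phi)

phi = 0.1                                  # phase of the first pulse
history = [phi]
for _ in range(200):
    phi = (phi + delta + prc(phi)) % 1.0   # phase at the next pulse
    history.append(phi)
```

With this PRC the iteration settles to a fixed point φ* satisfying prc(φ*) = −delta, i.e. the phase advance exactly cancels the detuning on every cycle; if delta exceeded the maximum of |prc|, no fixed point would exist and the oscillators would not lock.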

Related to the assumption of rapid return to the limit cycle is the assumption of *weak coupling*. Oscillators interacting via weak coupling affect each other only through phase shifts defined by the PRC. The interpretation of this assumption is that the perturbation affects only the phase of the oscillation and not the shape of the limit cycle; in other words, the amplitude and natural period of the oscillation remain the same while the phase is shifted by an amount defined by the PRC. The assumption of rapid return to the limit cycle follows automatically in this case, as the trajectory is never significantly perturbed away from the limit cycle in the first place. The assumption of *weak coupling* is very attractive to theoreticians because it makes available a number of powerful mathematical techniques that enable analytic solutions and proofs instead of merely numerical results that suggest proofs exist. When the weak coupling condition holds, the coupled dynamical systems of neurons can be represented by simplified equations using just the phase variables of the coupled oscillators (Kuramoto 1984; Hoppensteadt & Izhikevich 1997; Strogatz 2000).
d*θ*_{1}/d*t* = *ω*_{1} + iPRC(*θ*_{1}) *S*(*θ*_{2}) and d*θ*_{2}/d*t* = *ω*_{2} + iPRC(*θ*_{2}) *S*(*θ*_{1}).

Here, iPRC is the infinitesimal PRC describing the oscillator response to vanishingly small perturbations dependent on the phase of the perturbed oscillator and *S* is the form of the perturbation dependent on the phase of the driving oscillator. Because the individual neural dynamics are now just defined by their phases, *θ*_{1} and *θ*_{2}, this greatly reduces the dimensionality of the problem and for this reason this technique is often called phase reduction. An additional condition is that the natural frequencies of the oscillators, *ω*_{1} and *ω*_{2}, are not too divergent, an assumption that will be examined further in a later section on network heterogeneity. Assuming the perturbation is short relative to the period of the oscillator, which is consistent with the weak coupling assumption, averaging theory (Ermentrout & Kopell 1990; Guckenheimer & Holmes 1997) can be used to transform the equation above to a simpler form

d*θ*_{1}/d*t* = *ω*_{1} + *H*(*θ*_{2} − *θ*_{1}) and d*θ*_{2}/d*t* = *ω*_{2} + *H*(*θ*_{1} − *θ*_{2}).

Here, *H* is the coupling function and completely characterizes the interaction between the two oscillators, which can be single neurons or larger networks, as a function of a *difference in phases*. These equations can be solved for synchronous solutions. This approach, while based on networks of two oscillators, generalizes straightforwardly to larger networks, at least for the synchronous case. It has been shown that for weak coupling, the function *H* is equivalent to the PRC, and can be derived from the iPRC as originally described by Winfree (1967).
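For two identical oscillators, the phase difference *φ* = *θ*_{2} − *θ*_{1} evolves as d*φ*/d*t* = *H*(−*φ*) − *H*(*φ*), and synchrony (*φ* = 0) is stable when the odd part of *H* has positive slope at zero. A minimal numerical check, using a hypothetical Fourier-truncated coupling function rather than one derived from a real neuron:

```python
import numpy as np

def H(phi):
    """Hypothetical odd coupling function (illustrative Fourier truncation)."""
    return np.sin(phi) + 0.2 * np.sin(2 * phi)

def settle(phi0, dt=1e-3, steps=20000):
    """Integrate dphi/dt = H(-phi) - H(phi) for the phase difference phi."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (H(-phi) - H(phi))
    return phi
```

Starting on either side of the unstable antiphase state, the phase difference relaxes to synchrony (0 or, equivalently, 2π), as the slope condition predicts.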

PRC(*φ*) = ∫_{0}^{*T*} iPRC(*φ* + *t*) *S*(*t*) d*t*.

This is the convolution integral from linear systems theory and implies that the PRC should scale linearly with stimulus amplitude and duration; the PRC of an oscillator in response to a perturbation of arbitrary shape, *S*, can therefore be determined by the sum represented by the convolution integral. This result implies that iPRCs can be used to find the PRCs for an arbitrary stimulus, like a single synaptic conductance or a string of them, and the PRC in turn can be used in coupling equations that can be solved for synchronous states. This is all robust if the assumptions are valid. Are real-world neurons weakly coupled?

The experimental literature supporting the assumption of weak coupling has been reviewed by Hoppensteadt & Izhikevich (1997) and includes observations that postsynaptic potentials are small relative to the excitation necessary to fire action potentials. A strong verification of weak coupling requires perturbing neuronal oscillators with stimuli of various strengths and shapes and measuring the associated PRCs. If the coupling is indeed weak for these various stimulus shapes, then deconvolving each stimulus with its associated PRC should yield the *same* iPRC. This experiment has been done in *Aplysia* neurons (Preyer & Butera 2005): over a range of stimulus sizes the coupling was found to be weak, although it did break down for larger stimuli. Similarly, Netoff *et al*. (2005*a*) verified the weak coupling assumption over a range of stimulus magnitudes for stellate cells of the entorhinal cortex. This result was somewhat surprising in light of previous theoretical work suggesting that physiologically realistic coupling in these neurons was probably strong (Acker *et al*. 2003).
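The deconvolution test described above can be sketched numerically. Here the 'true' iPRC and the two stimulus waveforms are synthetic stand-ins; in a weakly coupled (linear) regime, deconvolving each measured PRC with its own stimulus must return the same iPRC:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 512, 1.0 / 512                    # one period, discretized

phase = np.arange(n) * dt
Z_true = 1 - np.cos(2 * np.pi * phase)    # hypothetical "true" iPRC

def measure_prc(Z, s, dt):
    """Weak-coupling (linear) prediction: PRC(phi) = sum_t Z(phi + t) s(t) dt,
    computed for all onset phases at once via the FFT."""
    return dt * np.fft.ifft(np.fft.fft(Z) * np.conj(np.fft.fft(s))).real

def recover_iprc(prc, s, dt):
    """Deconvolve a measured PRC by its stimulus to recover the iPRC."""
    return np.fft.ifft(np.fft.fft(prc / dt) / np.conj(np.fft.fft(s))).real

s1 = rng.random(n)                        # two broadband stimulus waveforms
s2 = rng.random(n)
Z1 = recover_iprc(measure_prc(Z_true, s1, dt), s1, dt)
Z2 = recover_iprc(measure_prc(Z_true, s2, dt), s2, dt)
```

Here Z1 and Z2 agree with each other (and with Z_true) because linearity was built in; for a real neuron, disagreement between the iPRCs recovered from different stimulus shapes or sizes is the signature of weak coupling breaking down.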

The assumption of weak coupling, while at first glance a technical detail of concern only to theoreticians, is actually important for many foundational results in the network synchronization literature that are often cited by both theoreticians and experimentalists. One of these important theoretical results is that neurons connected via *inhibition* in an all-to-all network geometry are generally more conducive to synchronization than neurons connected via *excitation* (Van Vreeswijk *et al*. 1994; Hansel *et al*. 1995; Ermentrout 1996; Izhikevich 1999). Moreover, there are stereotyped interactions between excitation/inhibition, the kinetics of the synapse and neuron dynamics in the weak coupling case. Inhibition is more stabilizing and excitation more destabilizing for synchrony in networks of neurons with type I dynamics, while excitation can be stabilizing for synchrony in networks of neurons with type II dynamics when synapses are fast. The kinetics of the synapse has generally been found to be important, as numerous analytic (Van Vreeswijk *et al*. 1994; Ernst *et al*. 1995; Hansel *et al*. 1995; Gerstner *et al*. 1996) and numerical models (Jefferys *et al*. 1996; Wang & Buzsaki 1996) using homogeneous networks of inhibitory neurons have shown that stable synchrony occurs when inhibition is slow relative to firing rates, while antisynchrony dominates when synaptic coupling is fast (Perkel & Mulloney 1974; Friesen 1994; Skinner *et al*. 1994; Van Vreeswijk *et al*. 1994). Given these results, experimentalists may justifiably be excited about a potential theoretical framework in which to view normal and pathological synchronous activity in terms of neuronal firing dynamics (increasingly known) and type of synapse (increasingly measured). However, this framework depends on the validity of weak coupling.

Even in cases for which the effects of individual inputs appear weak, the weak coupling assumption may be violated for trains of inputs of various timing and magnitude. The weak coupling assumption implies that all inputs can be convolved with the iPRC to predict a response, but there is experimental evidence that for some inputs the summation is nonlinear (Netoff *et al*. 2005*a*; Preyer & Butera 2005). Additionally, an early observation in theoretical work was that strong coupling could destabilize inhibition-mediated synchrony (Ermentrout & Kopell 1990). Therefore, as the weak coupling assumption fails, the predictions may not merely acquire quantitative errors; failure of the assumption can fundamentally alter their qualitative properties (Bressloff & Coombes 2000; Acker *et al*. 2003; Maran & Canavier 2008; Oh & Matveev 2009). If coupling is indeed strong, return maps of inter-spike intervals that are derived from PRCs measured using stimuli approximating the natural stimuli can be used to examine synchrony (Jones *et al*. 2000; Winfree 2001; Goel & Ermentrout 2002; Kopell & Ermentrout 2002; Acker *et al*. 2003; Oprisan *et al*. 2004; Pervouchine *et al*. 2006). In some situations, theoretical reasoning about the emergent properties of the network suggests strong coupling. For example, it has been noted that the weak coupling paradigm has difficulty explaining the rapid synchronization often seen in the brain, as it requires many cycles for synchronization to occur (Achuthan & Canavier 2009); strong coupling and the return map are theoretically capable of fast synchronization. However, the return map approach, like weak coupling, requires that the perturbed trajectory return to the limit cycle before the next perturbation arrives.

### (c) Chemical or electrical synapses?

The majority of theoretical and experimental results discussed so far refer to interactions mediated by chemical synapses. However, the observation that gap junctions couple cortical interneuron networks of the same neuronal type (Hestrin & Galarreta 2005) has stimulated work on the effects of electrical coupling on synchrony (Chow & Kopell 2000; Lewis & Rinzel 2003; Nomura *et al*. 2003; Pfeuty *et al*. 2003; Di Garbo *et al*. 2005). Electrical synapses differ fundamentally from chemical synapses in that the coupling function depends on the voltage, and therefore the dynamics, of both coupled neurons. Theoretical work predicts that synchrony dominates, but that antisynchrony can exist as well (Chow & Kopell 2000; Lewis & Rinzel 2003; Nomura *et al*. 2003; Pfeuty *et al*. 2003; Di Garbo *et al*. 2005). Combined chemical and electrical coupling can have complementary or antagonistic effects, depending on details including coupling sign, strength and spiking dynamics (Pfeuty *et al*. 2005; Ermentrout 2006). Experimental evidence suggests that synchrony dominates over a range of coupling strengths in the limited cases that have been studied (Merriam *et al*. 2005; Mancilla *et al*. 2007). These results suggest that, when examining physiological networks containing gap junctions, it is important to include their effects in theories of network function.

### (d) Network geometry assumptions

The classic phase reduction derivations assume all-to-all connectivity (Kuramoto 1984; Strogatz 2000; Winfree 2001). However, there are strong reasons to believe that the specifics of network connectivity (i.e. the anatomy) are an important parameter in ultimately determining the existence of synchrony. Ring and chain geometries have long been examined as network geometries conducive to consistently producing various phase delays between oscillators, appropriate for gait control in tetrapods and undulation in marine animals (Kopell 1988, 1995; Canavier *et al*. 1997, 1999; Dror *et al*. 1999). Since the publication of two seminal papers in the late 1990s on small-world networks (Watts & Strogatz 1998) and networks with power-law scaling of connectivity patterns (Barabasi & Albert 1999), a more general approach to deriving network dynamics from network geometry has emerged (Sporns *et al*. 2000; Stam & Reijneveld 2007). Theoretical studies using model neurons with dynamics qualitatively similar to those used in the development of the PRC theory have found that connectivity patterns in general predict the dynamic patterns of network activity (Galan 2008). Theoretical and computational models informed by the anatomies of the different regions of the hippocampus can predict the region-specific pathological rhythms arising in excitable hippocampal slice models (Netoff *et al*. 2004). Ultimately, the results derived from dynamical models of neurons using the PRC perspective and those derived from the network geometry perspective should be consistent. Perhaps the resolution will be that specific geometries, types of synapse or action potential patterns make emergent synchrony robust to variations in the other parameters (Bush & Sejnowski 1996; Belykh *et al*. 2005). Work in this area is still in its initial stages.

### (e) Heterogeneity of network elements

Much of the early theoretical work on coupled oscillators (Kuramoto 1984; Strogatz 2000; Winfree 2001) assumed identical oscillator dynamics for the same reason that all-to-all coupling was assumed: to make the mathematics tractable and to gain some intuition into a very hard problem. Subsequent work has relaxed these conditions in order to better model the biological situation, which undoubtedly has some variability. A parameter that has fundamental impact on synchronization dynamics when heterogeneity is introduced is the frequency of the individual oscillators. Winfree demonstrated that for networks of coupled simple phase oscillators (‘simple clocks’), synchronization always occurs if all the frequencies are the same and the coupling is appropriate. For a distribution of frequencies, however, a threshold phenomenon appears, requiring the synchronization of a subpopulation of a specific size, with the threshold proportional to the range of periods of the individual oscillators (Winfree 1967, 2001). The same phenomenon was shown for pulse-coupled integrate-and-fire oscillators (Kuramoto 1991). An analogous thresholding phenomenon was observed in a network of all-to-all excitatory integrate-and-fire neurons when the heterogeneity was in the external drive (Tsodyks *et al*. 1993).
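The threshold phenomenon described by Winfree and Kuramoto is easy to reproduce in the classic all-to-all Kuramoto model. In the sketch below (population size, frequency spread and coupling values are illustrative), the order parameter *r* ≈ 0 signals incoherence and *r* ≈ 1 signals synchrony; weak coupling leaves a Gaussian spread of frequencies incoherent, while coupling above threshold synchronizes most of the population:

```python
import numpy as np

def kuramoto_r(K, N=200, t_end=30.0, dt=0.01, seed=1):
    """All-to-all Kuramoto model; returns the final order parameter r."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)       # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(int(t_end / dt)):
        z = np.mean(np.exp(1j * theta))   # mean field: r * exp(i * psi)
        theta += dt * (omega + K * (z * np.exp(-1j * theta)).imag)
    return abs(np.mean(np.exp(1j * theta)))
```

For a unit-variance Gaussian frequency distribution the critical coupling is roughly K ≈ 1.6, so K = 0.3 stays incoherent while K = 4 synchronizes most of the population; below threshold only a sufficiently large synchronized subpopulation could pull the rest in, which is the threshold effect discussed above.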

A similar thresholding phenomenon is observed when biophysical conductance-based models are used, but the non-synchronous dynamics are more diverse. Golomb & Rinzel found synchronous, asynchronous and partially synchronous regimes when excitability parameters were randomized (Golomb & Rinzel 1993). In a model of a hippocampal interneuron network, heterogeneity was found to restrict synchronous frequencies to the gamma range (Wang & Buzsaki 1996). Systematic studies of the effect of heterogeneity in oscillator frequencies on the existence of stable synchronous states (Chow 1998; White *et al*. 1998*a*) have found that synchrony can be quite sensitive to heterogeneities and that the stability of the synchronous state depends on an interaction between heterogeneity, membrane excitability type and the characteristics of the synapse. This observation certainly complicates the prediction of synchronous states, given that variability in PRCs has been verified experimentally (Mancilla *et al*. 2007; Tateno & Robinson 2007*a*). Theoretical work complicates the picture further by indicating that delays between neurons can be very important (Gerstner *et al*. 1996) and that the stability of synchronous states depends on an interplay between delay, synaptic characteristics and the dynamics of the neuron. Indeed, delay has been a crucial element in models of synchronous activity in the hippocampus (Kopell *et al*. 2000). In summary, the above work suggests that synchrony might be quite fragile in the face of heterogeneity, and it has been hypothesized that synchrony could be stabilized by heterogeneity or adaptation in synaptic coupling (Li *et al*. 2003; Zhigulin *et al*. 2003) or by gap junctions (Pfeuty *et al*. 2005; Mancilla *et al*. 2007).

## 5. Conclusion

This review has focused on the applicability of coupled oscillator theory and PRCs to the study of neuronal synchronization. Although we have focused mostly on the question of whether the PRC approach, applied to single neuronal oscillators, can be used to understand synchronization in interconnected networks of such neurons, the approach can be generalized to account for cases we have not emphasized. For example, PRCs have also been used to describe pools of neurons, rather than individual neurons, as the ‘fundamental’ oscillator (Perez Velazquez *et al*. 2007; Ko & Ermentrout 2009). PRC-based approaches can also be used to study non-synchronous and even non-phase-locked behaviour. For example, PRCs can be used to study and predict travelling waves across networks and the clustering of subpopulations of neurons within networks (Goel & Ermentrout 2002; Jeong & Gutkin 2007; Achuthan & Canavier 2009). This more general applicability is important because it is not universally agreed that network synchrony is important for neural computation or that it is even prevalent in cortical circuits (Shadlen & Movshon 1999; Engel *et al*. 2001; Niebur *et al*. 2002; Brown 2003; Fries 2005; Nowotny *et al*. 2008). The conditions we describe for the validity of the PRC theory apply regardless of whether neuronal synchronization is the object of study.

The philosophical perspective of computational neuroscience is one that seeks to distill from the complexity of biology the key features and abstract them in the form of mathematics. The mathematics can in turn be manipulated to make testable predictions. The fundamental assumption here is the existence of simplified abstractions that still represent fundamental dynamical principles of the system we wish to study.
> The best model of a cat is a cat; preferably the same cat. (Wiener 1945)

No one could be accused of oversimplifying if their model of the cat were the cat itself, but we would probably get lost in the details of a model as complicated as the original system. Additionally, reduced models have been found to give rise to a counterintuitive array of dynamics, changing our perspective on what even simple biological networks are capable of and highlighting the importance of studying all levels of complexity. It is also becoming clear that some framework needs to be developed to study neurophysiology at the network level, as theoreticians have proved that even for simple networks in which both the elements and their connections are completely defined, there are emergent properties that cannot be inferred (i.e. the problem of predicting emergent properties from a purely reductive perspective is undecidable) (Anderson 1972; Gu *et al*. 2009).
> A scientific theory should be as simple as possible, but no simpler. (Einstein 1933)

The experimental evidence provided to date strongly suggests that the PRC theory is a productive way to abstract synchronous neural network activity. However, it is also clear from both theory and experiment that a number of anatomical and dynamical details of the network are crucial for the proper modelling, prediction and understanding of network dynamics. From the experimentalist's point of view, looking in through the window at the great diversity of theoretical work being done on neural network theory, it is easy to be at a loss about where to begin in testing these ideas. Ultimately, the validity of the various PRC approaches used to predict or understand emergent network synchrony depends on the accuracy of the underlying assumptions made to derive the theory. All of the theoretical models make use of various assumptions with important implications regarding a variety of neural network characteristics that have concrete form in the real physiology.

The assumptions important for the PRC theory discussed in this review vary in their implications for the applicability of the theory. The assumptions of the existence of a limit cycle and of rapid return to the limit cycle following perturbation are fundamental to the application of the theory; failure to meet them implies the theory is not applicable or requires special reformulation. The assumption of weak coupling is not required for the application of the PRC theory but enables the use of a wider array of mathematical tools. Most of the assumptions discussed, including network geometry, heterogeneity, noise and stability of network elements, are not fundamental, in the sense that variation in these characteristics does not render the PRC theory inapplicable. However, the predictions that the PRC theory makes when these factors are incorporated into the mathematics may be radically different, and failure to properly define these important characteristics of the network can lead to substantial misunderstanding. These assumptions are therefore just as important to verify experimentally as the fundamental ones. Hopefully this review, by making these assumptions more explicit, provides some perspective from which experimentalists can better select the most relevant models for their specific work, and encourages additional experiments explicitly supporting or refuting these assumptions.

## Footnotes

One contribution of 8 to a Theme Issue ‘Neuronal network analyses: progress, problems, and uncertainties’.

- © 2010 The Royal Society