Nocturnal insects have evolved remarkable visual capacities, despite small eyes and tiny brains. They can see colour, control flight and land, react to faint movements in their environment, navigate using dim celestial cues and find their way home after a long and tortuous foraging trip using learned visual landmarks. These impressive visual abilities occur at light levels at which only a trickle of photons is absorbed by each photoreceptor, raising the question of how the visual system nonetheless generates the reliable signals needed to steer behaviour. In this review, I attempt to provide an answer to this question. Part of the answer lies in their compound eyes, which maximize light capture. Part lies in the slow responses and high gains of their photoreceptors, which improve the reliability of visual signals. And a very large part lies in the spatial and temporal summation of these signals in the optic lobe, a strategy that substantially enhances contrast sensitivity in dim light and allows nocturnal insects to see a brighter world, albeit a slower and coarser one. What is abundantly clear, however, is that during their evolution insects have overcome several serious potential visual limitations, endowing them with truly extraordinary night vision.
This article is part of the themed issue ‘Vision in dim light’.
… no one supposes that the intellect of any two animals or of any two men can be accurately gauged by the cubic contents of their skulls. It is certain that there may be extraordinary mental activity with an extremely small absolute mass of nervous matter: thus the wonderfully diversified instincts, mental powers, and affections of ants are notorious, yet their cerebral ganglia are not so large as the quarter of a small pin's head. Under this point of view, the brain of an ant is one of the most marvellous atoms of matter in the world, perhaps more so than the brain of a man.
—Charles Darwin [1, p. 145].
When Charles Darwin wrote these words, almost 150 years ago, our understanding of the nervous systems of animals was rudimentary. Neurons had been discovered a mere 34 years earlier—by the great Czech anatomist Jan Evangelista Purkinje. Yet by 1871, when Darwin wrote The Descent of Man, it was already accepted that a larger brain relative to body size is likely connected to higher mental capacities, a line of reasoning that Darwin himself endorsed. So his insight concerning the brains of ants was truly visionary, arguably the first recognition that the sheer mass of nervous tissue is not the sole arbiter of behavioural or cognitive performance (as the remarkable higher cognitive abilities of ‘small-brained’ birds such as corvids are now confirming). And today, almost a century and a half later, Darwin's words about insects ring truer than ever—our sophisticated modern methods for recording from single neurons, for studying the functions of specific brain regions, and for tracking the responses of insects in cleverly designed behavioural paradigms all point to the same conclusion. Insects and other arthropods are capable of extraordinary behavioural feats, seemingly disproportionate in sophistication compared with their small bodies and tiny brains.
The visual behaviours of nocturnal insects are an excellent case in point. At light levels where humans are essentially blind, many nocturnal insects reveal a visual performance that seems almost to defy the physically possible. The tropical nocturnal sweat bee Megalopta genalis, which has one of the best-studied visual systems of all nocturnal insects, emerges from its nest at night and departs on a foraging trip through a dense and tangled rainforest, later navigating without error back to its nest—an inconspicuous hollowed-out stick suspended within the forest understorey. In this complex rainforest environment—even during daylight hours—a person who strays just a few metres from a well-worn trail risks becoming hopelessly disoriented and eventually lost. Yet, despite small eyes and a tiny brain, Megalopta achieves its feat of rainforest navigation in the dead of night, a feat we now know is accomplished when fewer than five photons are absorbed by each of its photoreceptors every second—a vanishingly small visual signal. And even more impressively, it has recently been shown that the humble nocturnal cockroach Periplaneta americana is able to detect the fleeting movements of objects in its environment when fewer than one photon is absorbed by each of its photoreceptors every 10 s!
This astonishing visual performance allows nocturnal insects to use vision for all aspects of daily life, including negotiating obstacles during locomotion, identifying mates, food and predators and for orienting in the environment [5,6]. But exactly how is this performance achieved? What sets the visual systems of nocturnal insects apart from those of insects active during the day? In this review, I will attempt to address both these questions. Part of their answer lies in the optical designs of compound eyes in nocturnal insects, which are typically orders of magnitude more sensitive to light than those of their diurnal relatives. Part of their answer also lies in specialized neural adaptations within the retina and optic lobe that increase the visual signal-to-noise ratio (SNR), although invariably only at the expense of spatial and temporal resolution. Admittedly part of their answer also remains unknown—the remarkable visual performance of nocturnal insects cannot entirely be explained by our current understanding of their eyes and their optic lobes. As yet unknown neural circuits, both within the optic lobes as well as in other areas of the brain, are undoubtedly of vital importance for improving visual performance to the levels revealed by behaviour.
2. The challenge of seeing well in dim light
To see well in dim light is not trivial, for essentially two reasons. First, eyes strain to collect enough light to provide a sufficient visual signal. Second, even if the eye collects enough light to resolve a visual image, physiological noise present in the photoreceptors of the retina may contaminate or even drown this visual signal, making it highly unreliable or even invisible. Not surprisingly, nocturnal (and deep-sea) animals have evolved mechanisms for reducing or eliminating both of these problems—eyes have evolved with optical designs that maximize light capture, and neural mechanisms have evolved that reduce the impact of physiological noise. But before describing these solutions in detail, I will first describe the problems.
(a) The problem of the visual stimulus itself: photon shot noise
The main task of every eye is to resolve the spatial details existing within a visual scene, and when we talk about ‘spatial details’ we are invariably referring to contrast details, whether they be luminance contrasts, colour contrasts or polarization contrasts. Such contrasts establish the boundaries of objects, or define their internal details, revealing their locations in visual space and disclosing whether they are stationary or moving. When such a scene is imaged onto a retina, the underlying matrix of photoreceptors is responsible for sampling contrast details as accurately as possible. Like the sensor pixels of a digital camera, each photoreceptor samples a single ‘pixel’ of the image, and thus a single ‘pixel’ of the outside world. And just as in a digital camera, the more photoreceptors there are, and the more tightly packed they are, the more finely the visual scene (and its intricate contrast details) can be sampled. But there is a limit to this—as the pixel of the visual world sampled by each photoreceptor becomes smaller, the amount of light it contains will also decline. Eventually, the pixel will contain too little light for the photoreceptor to reliably measure its intensity.
This undeniable fact reveals the most fundamental trade-off in vision, that between resolution and sensitivity. In an eye adapted for bright daylight, when each visual pixel in the scene contains a large amount of light, the photoreceptor packing density can afford to be high—diurnal animals, with no need for high sensitivity, can (and commonly do) have eyes with high spatial resolution and the ability to discriminate fine contrast details. But at night, when light levels are low, the pixels seen by each photoreceptor need to be a lot larger—only then will they contain enough light to be reliably discriminated. As a consequence, the photoreceptor packing density must be a lot lower, restricting vision only to coarser contrast details. In the quest for greater sensitivity, spatial resolution is sacrificed.
Thus, a low photon catch in dim light limits the ability of the photoreceptor matrix to reliably distinguish the contrasts of spatial details. Much of the reason for this derives from the sporadic and random nature of the stimulus itself. During successive time intervals of identical duration, the number of photons striking a photoreceptor is not constant—sometimes fewer than average arrive, sometimes more. In other words, the photoreceptor's measure of the average rate of photon arrivals comes with a degree of uncertainty, an uncertainty referred to as ‘photon shot noise’. As the arrival of photons is a random process obeying Poisson statistics, this noise is given by the standard deviation in the average photon catch N, and this is simply √N—thus, the photoreceptor will absorb a sample of N ± √N photons per unit time. If we now define the visual signal as N and the noise as √N, then the visual SNR is simply N/√N, or √N. In other words SNR—a measure of the reliability of vision—increases with the square root of the photon catch, that is, with the square root of light intensity. This famous relationship is known as the ‘Rose–De Vries’ or ‘square root’ law of visual discrimination [7,8].
To see what this means for the discrimination of visual contrast in dim light, imagine an eye straining to distinguish a fine contrast border seen by two neighbouring photoreceptors, one viewing the darker side of the border, the other the brighter side. Imagine also that the brighter side reflects twice as many photons as the darker side. To see the contrast clearly, the photon catch N in each photoreceptor would need to differ significantly, in other words by well more than √N. To see the effect of a very low level of illumination, imagine that the two photoreceptors respectively absorb (during one visual integration time) six photons from the brighter side of the border and three photons from the darker side. Owing to shot noise and the square root law, the two photoreceptors would thus sample 6 ± 2.4 photons and 3 ± 1.7 photons—that is, the noise levels would be around half the magnitudes of the signals, with SNR ≈ 2! At this light intensity, the contrast border would no doubt be drowned by noise and thus invisible. But imagine instead a much higher intensity, with the two photoreceptors instead absorbing 6 million and 3 million photons, respectively. Now the two samples will be 6 000 000 ± 2449 photons and 3 000 000 ± 1732 photons, with SNR ≈ 2000. These noise levels are a tiny fraction of the signal levels (less than 0.1%) and there is no question that, at these higher light intensities, the contrast border would be easily discriminated. In fact, even borders of considerably lower contrast would be discriminated.
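This arithmetic is easy to check. The short sketch below (a minimal illustration, not from the original article) reproduces the ± figures quoted above directly from the square-root law:

```python
import math

def shot_noise(n):
    """Poisson shot noise: the standard deviation of a mean photon catch n."""
    return math.sqrt(n)

def snr(n):
    """Rose-De Vries (square root) law: SNR = N / sqrt(N) = sqrt(N)."""
    return n / shot_noise(n)

# The worked example above: 6 vs 3 photons, then 6 vs 3 million photons
for bright, dark in ((6, 3), (6_000_000, 3_000_000)):
    print(f"{bright} ± {round(shot_noise(bright), 1)} vs "
          f"{dark} ± {round(shot_noise(dark), 1)}  (SNR ≈ {round(snr(bright), 1)})")
```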
This unpredictable nature of photon arrival is nicely illustrated in Pirenne's well-known diagram showing a matrix of 400 photoreceptors that receives an image of a perfectly dark circular object on a bright background (figure 1). At the dimmest light level (figure 1a), the sporadic and random arrival of photons from the background results in their absorption by only six photoreceptors—in this situation the object remains indistinguishable. Even at a 10 times higher background light level (10×: figure 1b) the object is still disguised, hidden in the random ‘noise’ of photon absorptions. After a further 10 times increase in light intensity (100×: figure 1c), the dark object becomes noticeable, but not without a degree of uncertainty. It is not until light levels are increased by yet a further 10 times that the object can be distinguished clearly (1000×: figure 1d).
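Pirenne's demonstration can be re-created numerically. The sketch below is a hypothetical miniature of the figure: a 20 × 20 receptor mosaic views a dark disc on a bright background, and the background intensity is raised in 10× steps; the mean photon catches are invented for illustration only:

```python
import math
import random

def poisson_count(mean, rng):
    """One Poisson-distributed photon count (Knuth's method)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def pirenne_grid(mean_per_receptor, rng, size=20, disc_radius=5):
    """Photon absorptions across a size x size receptor mosaic viewing a
    perfectly dark disc (zero photons) on a bright background."""
    c = (size - 1) / 2
    return [[0 if (x - c) ** 2 + (y - c) ** 2 <= disc_radius ** 2
             else poisson_count(mean_per_receptor, rng)
             for x in range(size)] for y in range(size)]

def lit_receptors(grid):
    """Number of receptors that absorbed at least one photon."""
    return sum(1 for row in grid for count in row if count > 0)

# Raise the background intensity in 10x steps, as in Pirenne's figure:
rng = random.Random(0)
for mean in (0.02, 0.2, 2.0, 20.0):   # hypothetical mean catches per receptor
    print(mean, lit_receptors(pirenne_grid(mean, rng)))
```

At the dimmest level only a scattering of receptors light up and the disc is invisible; at the brightest, essentially every background receptor responds and the dark disc stands out clearly.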
In the absence of all other sources of noise, photon shot noise—an external or ‘extrinsic’ source of noise—would be the only type of noise limiting the discrimination of visual contrasts in dim light, setting the upper limit to the SNR. In reality though, two further sources of noise—each internal, or ‘intrinsic’, to the photoreceptors themselves—limit contrast discrimination even further.
(b) The problem of photon detection: physiological noise
A remarkable property of all photoreceptors is their ability to detect single photons of light. Referred to as ‘bumps’ in the insect literature (figure 2), the responses of photoreceptors to single photons were first recorded almost 60 years ago in the photoreceptors of the horseshoe crab Limulus. Moreover, work a few years later established a very important principle governing these responses: the 1 : 1 relationship between transduced photons and bumps. Simply put, a single bump results from the absorption and transduction of no more than a single photon, and a single transduced photon leads to no more than a single bump. What also became very obvious was that despite being responses to an invariant stimulus—single photons—bumps were highly variable. Their amplitudes, latencies and time courses were not constant (figure 2a). This inability of the photoreceptors to produce an identical electrical response to each absorbed photon introduces a type of physiological noise known as ‘transducer noise’. This source of noise, originating in the biochemical processes leading to signal amplification, has the potential to degrade the reliability of vision [14–16]. However, recent work in the fruit fly Drosophila indicates that during stimulation with naturalistic light stimuli these stochastic variations in bump waveform—which turn out to depend on adaptation state and thus on the history of stimulation—can actually increase the visual SNR and enhance visual information transfer [17,18]. It is very likely that nocturnal insects also benefit from the same adaptive stochastic sampling strategy for improving visual reliability in dim light, implying that the negative impacts of transducer noise are probably not as great as previously thought.
A second type of physiological noise—known as ‘dark noise’—degrades visual reliability even further. Dark noise arises because the biochemical pathways responsible for transduction are occasionally activated, even in perfect darkness. There are two components of dark noise that have been identified in recordings from photoreceptors (figure 2b): (i) a continuous low-amplitude fluctuation in measured electrical activity (sometimes called membrane noise or channel noise) and (ii) discrete ‘dark events’, electrical responses that are indistinguishable from those produced by real photons. The continuous component arises from spontaneous thermal activation of rhodopsin molecules or of intermediate components in the phototransduction chain (such as phosphodiesterase). The amplitude of this membrane noise is negligible in insects, but can be quite significant in vertebrate photoreceptors, particularly cones. ‘Dark events’ also arise due to spontaneous thermal activations of rhodopsin molecules. In those animals where dark events have been measured they are rare (e.g. insects, crustaceans, toads and primates [11,14,21–24]). At their most frequent they occur around once per minute at 20°C, although in most species they occur a lot less frequently. As dark noise is the result of thermal activation of the molecular processes of transduction, it is also more pronounced at higher retinal temperatures. At very low light levels both components of dark noise can significantly contaminate visual signals, and even set the ultimate limit to visual sensitivity (as found in the nocturnal toad Bufo bufo [26,27]).
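The cost of dark events can be sketched with one simple assumption: if photon arrivals and dark events are both independent Poisson processes, their variances add, so the SNR falls from √N to N/√(N + D). The numbers below are hypothetical:

```python
import math

def snr_with_dark_noise(photons, dark_events):
    """SNR when a mean photon catch N is contaminated by D spurious dark
    events per integration time. Both are assumed to be independent
    Poisson processes, so their variances add:
        SNR = N / sqrt(N + D)."""
    return photons / math.sqrt(photons + dark_events)

# At starlight-level catches dark events matter; in bright light they
# are negligible (all numbers invented for illustration):
print(round(snr_with_dark_noise(5, 0), 2))       # shot noise only
print(round(snr_with_dark_noise(5, 5), 2))       # dark rate equal to photon rate
print(round(snr_with_dark_noise(100_000, 5), 2)) # bright light
```

This is why dark noise can set the ultimate sensitivity limit at very low light levels while being irrelevant by day.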
Taken together, these three sources of noise—photon shot noise, transducer noise and dark noise—severely limit the ability of the visual system to discriminate contrasts in dim light, thus degrading the reliability of vision. The first of these can be overcome by having an eye design that captures as much of the available light as possible, while the degrading effects of the other two can be minimized by physiological specializations in the photoreceptors and by higher-order neural strategies in the optic lobe. These solutions—which turn out to be highly effective—are the topic to which we turn next.
3. Solutions for meeting the challenge
(a) Sensitive compound eyes
To see well in dim light, nocturnal and deep-sea animals have evolved eyes that capture as much of the available light as possible [28–30]. Nocturnal insects—with their highly sensitive compound eyes (figure 3a)—are no exception.
The building blocks of compound eyes—the tube-like ommatidia—each contain a corneal facet lens (or ‘facet’, figure 3c,d) that focuses light from a narrow region of space onto a bundle of photoreceptors that lies directly beneath. These photoreceptors each contribute a photoreceptive segment to the light-sensitive ‘rhabdom’ (figure 3b), a rod-like structure composed of membranous microvilli that houses the rhodopsin molecules (which absorb and process the incoming light). Each ommatidium samples an individual ‘pixel’ of the visual world—two neighbouring ommatidia thus sample two neighbouring pixels. The more densely packed these ommatidia, the more finely the visual scene is sampled, and the higher the potential spatial resolution.
There are two main types of compound eyes (figure 4): apposition eyes and superposition eyes. In an apposition eye, each ommatidium is isolated from its neighbours by a sleeve of light-absorbing screening pigment, thus preventing light reaching the photoreceptors from all but its own small corneal facet lens. This tiny lens—typically about 30 µm across—represents the pupil aperture of the apposition eye. Such a tiny pupil captures only very little light, and not surprisingly, apposition eyes are most common in day-active insects like butterflies, bees, wasps, ants, dragonflies and grasshoppers. Remarkably though, there are some cockroaches, wasps, ants and bees—such as the bee M. genalis mentioned earlier—which are strictly nocturnal and yet have apposition eyes, seeing remarkably well nonetheless [3,4,32–36]. We will return to some of these insects later.
Superposition eyes, by contrast, are typical of nocturnal insects such as moths and beetles. In these eyes the pigment sleeve is withdrawn, and a wide optically transparent area, the clear zone (cz in figure 4), is interposed between the lenses and the retina. This clear zone—together with specially modified crystalline cones—allows light from a narrow region of space, collected by as many as 2000 ommatidial facets (together comprising the ‘superposition aperture’), to be focused onto a single photoreceptor in the underlying retina. This represents a massive improvement in sensitivity over an apposition eye while still producing a reasonably sharp image [37,38]. We now know that these eyes allow nocturnal insects to distinguish colours at night, to analyse the dim pattern of polarized light formed around the moon and to use it as a navigational cue [40,41], and to process optic flow cues to control flight at night. Again, we will return to some of these insects later.
The advantage of the superposition design for vision in dim light is readily seen by considering the optical sensitivity S (µm² sr) of an eye to an extended source of broad-spectrum light [43,44]:

S = (π/4)² A² (d/f)² [kl/(2.3 + kl)],    (3.1)

where A is the diameter of the aperture (µm), f the focal length of the eye (µm), and d, l and k the diameter (µm), length (µm) and absorption coefficient (µm⁻¹) of the photoreceptors, respectively. This equation shows that good sensitivity to an extended scene results from an aperture of large area (πA²/4) and photoreceptors each viewing a large solid angle of visual space (πd²/4f² steradians) and absorbing a substantial fraction of the incident light (kl/(2.3 + kl) for broad-spectrum terrestrial daylight). Larger apertures, shorter focal lengths and wider and longer rhabdoms all increase sensitivity.
Because the aperture A is so much larger in a superposition eye than in an apposition eye, the optical sensitivity is also a lot greater. As an example, consider the superposition eye of the nocturnal hawkmoth Deilephila elpenor (A = 940 µm) and the apposition eye of the nocturnal bee M. genalis (A = 36 µm): for Deilephila S is 69 µm² sr, whereas for Megalopta it is 25 times lower (2.7 µm² sr). Nonetheless, as far as apposition eyes go, those of Megalopta are still very sensitive compared with most, a requirement of its nocturnal lifestyle and its formidable nocturnal visual abilities. Compared with its near relative, the day-active sweat bee Lasioglossum leucozonium, Megalopta's facet lenses are almost twice as wide (A = 36 µm versus 20 µm, figure 3c,d) and its rhabdoms five times as wide (d = 8 µm versus 1.6 µm), giving the eyes of Megalopta 27 times the optical sensitivity of Lasioglossum [3,31].
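Equation (3.1) is straightforward to evaluate. In the sketch below, only the aperture and rhabdom diameters are taken from the comparison above; the focal length, rhabdom length and absorption coefficient are placeholder values and are set equal for the two bees, which is why the ratio comes out near 81 rather than the published factor of 27—the real eyes differ in focal length and rhabdom length as well:

```python
import math

def optical_sensitivity(A, f, d, l, k):
    """Optical sensitivity S (um^2 sr) of an eye to an extended
    broad-spectrum source, following equation (3.1):
        S = (pi/4)^2 * A^2 * (d/f)^2 * k*l / (2.3 + k*l)
    A: aperture diameter (um); f: focal length (um);
    d, l: rhabdom diameter and length (um); k: absorption coeff (um^-1)."""
    return (math.pi / 4) ** 2 * A ** 2 * (d / f) ** 2 * (k * l) / (2.3 + k * l)

# Only A and d come from the text; f, l and k are placeholders:
megalopta    = optical_sensitivity(A=36, f=100, d=8.0, l=300, k=0.0067)
lasioglossum = optical_sensitivity(A=20, f=100, d=1.6, l=300, k=0.0067)
print(round(megalopta / lasioglossum, 1))  # sensitivity ratio with equal f, l, k
```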
(b) Slow photoreceptors with high gain
Once the optics of the compound eye has maximized the photon catch N, and thus the visual signal, several adaptations within the photoreceptors then minimize the effects of intrinsic noise and provide the highest possible visual SNR prior to higher processing.
Firstly, photoreceptors adapted for vision in dim light typically respond more slowly to light. Over 60 years ago, the great German sensory physiologist Hansjochem Autrum discovered that the eyes of insects can be classified as ‘fast’ or ‘slow’. By measuring extracellular responses (ERGs) to light flashes in the eyes of insects, he showed that fast eyes are correlated with rapidly moving (and often diurnal) insects and slow eyes with slowly moving (and often nocturnal) insects, a finding later confirmed by intracellular photoreceptor recordings in a range of different insects [47–49]. These differences, it turns out, are largely due to differences in photoreceptor size and in the numbers and types of potassium (K+) channels found in the photoreceptor membranes of fast and slow eyes [50–52], the exact complement of channels creating a sensory filter matched to the speed of locomotion and/or to the light intensity niche—nocturnal or diurnal—that the insect is active within.
In the fast-flying but nocturnal sweat bee Megalopta, the dark-adapted photoreceptor responses are slow, with a time course around three times longer than those of its diurnal near-relative Lasioglossum. As both bees fly at similar speeds, their difference in temporal properties can only be related to the difference in light intensity experienced by the two species. The nocturnal Megalopta has thus clearly evolved slower vision, and this is of significant advantage at night. Slower vision in dim light—despite compromising temporal resolution—is beneficial because it increases the visual SNR and improves contrast discrimination at lower temporal frequencies by suppressing photon noise at frequencies that are too high to be reliably resolved.
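The benefit of slower vision can be illustrated with a toy model of temporal summation: lengthening the integration window gathers more photons per sample, and the empirical SNR rises roughly as the square root of the window. The photon rate and integration times below are hypothetical:

```python
import math
import random

def poisson_count(mean, rng):
    """One Poisson-distributed photon count (Knuth's method)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def snr_for_integration_time(photon_rate, T, trials=4000, seed=42):
    """Empirical SNR (mean/std) of the photon count gathered over an
    integration time T (arbitrary units) at a given mean photon rate.
    Lengthening T is the temporal analogue of 'slower vision'."""
    rng = random.Random(seed)
    counts = [poisson_count(photon_rate * T, rng) for _ in range(trials)]
    mu = sum(counts) / trials
    var = sum((c - mu) ** 2 for c in counts) / trials
    return mu / math.sqrt(var)

# SNR grows roughly as sqrt(T): reliability is bought with speed.
for T in (1, 4, 16):
    print(T, round(snr_for_integration_time(5.0, T), 1))
```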
Secondly, the photoreceptor's dark-adapted responses to single photons (i.e. bumps) tend to be much larger in nocturnal than in diurnal arthropods (e.g. crane flies, cockroaches and spiders [49,50,55–57]), indicating that the nocturnal photoreceptor's gain of transduction is greater. This can readily be seen in the nocturnal bee Megalopta, whose bumps are significantly larger than those in the diurnal bee Lasioglossum (figure 2a,b). The higher transduction gain manifests itself as a higher contrast gain, that is, in a greater photoreceptor voltage response per unit change in light intensity (or contrast). In dim light, Megalopta's visual gain is about five times higher than in Lasioglossum, and this results in greater signal amplification (and larger bumps). Unfortunately, this amplification of the signal in dim light also amplifies the noise, and thus on its own, the higher gain does not alter the visual SNR. However, because the noise (including photon shot noise) is uncorrelated between different photoreceptors and ommatidia, a subsequent spatial summation of signals generated in neighbouring ommatidia has the potential to average out the noise and amplify the signal even further, thereby greatly improving the visual SNR, albeit at the cost of spatial resolution. Thus, a high visual gain, followed by spatial summation, could represent a significant strategy for vision in dim light. Our most recent data—from the motion vision pathway of the nocturnal elephant hawkmoth D. elpenor—suggest that this is indeed the case. Summation of visual signals in space—and also in time—turns out to be of crucial importance for vision in dim light.
(c) Summation of light in time and space
Despite all of the optical and neural adaptations that have evolved within nocturnal compound eyes to improve vision in dim light, there is still an enormous performance gap between the visual periphery and the actual behaviours of nocturnal insects. As I mentioned earlier, nocturnal insects such as bees and cockroaches [4,58] display phenomenal visual performance even when the rate of photon absorption in each of their photoreceptors is negligible (in cockroaches fewer than one photon per 10 s). At night, through a perishingly dark and tangled rainforest, the nocturnal bee Megalopta is able to recall learned visual landmarks to find its way home to its small and inconspicuous nest when fewer than five photons are being absorbed by each of its photoreceptors per second. At the same light levels, this bee also uses optic flow cues to control its flight and to land on its nest with an ease and precision no different to that of a diurnal bee in bright light. Even though their photon absorption rates have yet to be measured, the conclusion is likely to be the same for many other nocturnal insects. Dung beetles, for instance, are able to use dim nocturnal celestial cues for short-distance navigation [40,41,61–63], while nocturnal moths, many of which are capable of migrating over enormous distances at night (such as the Australian Bogong moth Agrotis infusa), also very likely use visual cues for controlling flight and navigating.
To understand the extent of the performance gap, one need only consider the optical sensitivities of the compound eyes in our two sweat bees, the nocturnal Megalopta and the diurnal Lasioglossum. Even though they experience light levels that differ by as much as eight orders of magnitude, their optical sensitivities (as we calculated earlier) differ by little more than a factor of 10. The slow, high-gain photoreceptors of Megalopta do of course improve sensitivity somewhat more, but certainly nowhere near enough to bridge the remaining seven orders of magnitude difference in retinal light intensity the two bees would experience.
(i) Summation: a missing link
What then is missing? The answer—at least partly—is the summation of photons in time and space [65–68]. We have actually already discussed summation in time above: when light gets dim, the photoreceptors of nocturnal insects can improve visual reliability by responding more slowly. Additional temporal summation can also occur at higher levels of visual processing. But this only comes at a price: the resolution of events occurring rapidly in time, such as the passage of a fast-moving object, can be drastically degraded, potentially disastrous for a fast-flying nocturnal animal that needs to negotiate obstacles! Not surprisingly, substantial temporal summation is more likely to be employed by slowly moving animals.
Summation of photons in space can also improve visual reliability. Instead of each visual channel collecting photons in isolation from a single small ‘pixel’ of the visual scene (as in bright light), the transition to dim light could activate specialized laterally spreading neurons which couple the channels together into groups. Each summed group—themselves now defining the channels—could collect considerably more photons over a much wider visual angle, that is, from a considerably larger and brighter ‘pixel’. Again, this strategy only comes at a cost: a simultaneous and unavoidable loss of spatial resolution. Despite being much brighter, the image becomes necessarily coarser.
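A minimal sketch of this spatial pooling, assuming a hypothetical photon catch per channel and noise that is uncorrelated across channels (so the pooled SNR rises roughly as the square root of the number of channels in the group):

```python
import math
import random

def poisson_count(mean, rng):
    """One Poisson-distributed photon count (Knuth's method)."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def pooled_snr(mean_per_channel, n_channels, trials=4000, seed=7):
    """Empirical SNR (mean/std) of the summed photon catch of a group of
    neighbouring visual channels. With uncorrelated Poisson noise,
    pooling n channels improves the SNR roughly as sqrt(n) -- at the
    cost of one larger, coarser 'pixel'."""
    rng = random.Random(seed)
    sums = [sum(poisson_count(mean_per_channel, rng) for _ in range(n_channels))
            for _ in range(trials)]
    mu = sum(sums) / trials
    var = sum((s - mu) ** 2 for s in sums) / trials
    return mu / math.sqrt(var)

# Hypothetical catch of 2 photons per channel per integration time:
for n in (1, 4, 16, 60):
    print(n, round(pooled_snr(2.0, n), 1))
```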
(ii) Evidence for spatial and temporal summation
Evidence for the existence of spatial and temporal summation in insects, and its benefits for vision in dim light, has come from two main lines of enquiry, the first of which is anatomical. As spatial summation in nocturnal insects is likely to occur early in visual processing, it was hypothesized that the necessary cells—with their wide lateral dendritic branches coupling visual channels together—would be found in the lamina ganglionaris, the first optic ganglion of the brain, located directly behind each compound eye. The lamina is built from an array of ‘cartridges’, tube-like divisions of the neural tissue that each process visual signals arriving from the single ommatidium lying directly above. Each cartridge in turn houses around 15–20 cells, including several classes of lamina monopolar cells (or LMCs) which are responsible for processing visual signals delivered to the lamina by the photoreceptors. It turned out that some classes of LMCs had precisely the morphology predicted for performing spatial summation, and again as predicted, only in nocturnal insects—in both nocturnal bees [69,70] and nocturnal hawkmoths [71,72] these LMCs are widely branched compared with the LMCs of diurnal near relatives (figure 5a). Such widely branching LMCs have also been found in nocturnal cockroaches and fireflies. In the hawkmoths D. elpenor and Manduca sexta, two species active in very dim light, three of their four classes of LMCs (types 2, 3 and 4) have dendrites that branch to a significantly greater number of lamina cartridges than in their diurnal relative Macroglossum stellatarum (figure 5b). If the wide lateral dendritic branches of LMCs in nocturnal insects are indeed summing visual signals from neighbouring cartridges (as many as 60 cartridges in Manduca: figure 5b), theoretical modelling suggests that these insects would be capable of resolving spatial details at much lower intensities than in the absence of summation.
The second line of evidence for the existence of summation comes from behavioural and physiological investigations of the motion vision pathways responsible for controlling flight and for generating the optomotor response, the reflex-like turning response of an animal that views a rotating pattern. The cells of the motion pathway are found in the third optic neuropil of the optic lobe, the lobula plate.
Motion-detecting cells in insects reveal outstanding sensitivity in dim light, even those of diurnal insects like flies. The well-known wide-field movement detector H1 of the fly reveals quantal sensitivity at threshold despite being at least three synapses distant from the photoreceptors: single action potentials generated in H1 are correlated with the absorption of single photons in the photoreceptors. Because H1 and the other motion-detecting cells of the fly lobula plate are thought to steer optomotor behaviour, such quantal sensitivity even has the potential to occur at the level of the behaving animal. This is indeed the case: a housefly (Musca domestica) tethered within a rotating optomotor drum lined with vertical stripes reacts to the movements of the stripes when as few as two or three photons reach each photoreceptor every second [21,78–80]. And as mentioned earlier (§1), recent work has shown that nocturnal cockroaches subjected to exactly the same type of optomotor stimulus continue to react to the movements of an optomotor drum when as few as one photon every 10 s is absorbed by each of their photoreceptors.
Much of this remarkable behavioural performance can be ascribed to significant amounts of spatial and temporal summation occurring somewhere between the retina and the lobula plate [4,81–83]. And as we intimated above, the lamina is a prime location for (at least) spatial summation. Indeed, using behavioural methods, Dubs and colleagues measured the threshold optomotor response of tethered flies that viewed a wide-field grating stimulus, and in parallel recorded the rates of bump production at the same threshold intensity, both in the photoreceptors and in the first-order interneurons to which they connect. Using a point source centred in the field of view, the interneuron bump rate was found to be six times that of the photoreceptors—exactly the ratio expected, because six photoreceptors synapse onto one interneuron. However, when the point source was exchanged for the dim extended grating stimulus at threshold intensity, the interneuron bump rate increased to between 18 and 20 times the photoreceptor rate, implying that signals from several neighbouring ommatidia were being summed at the interneuron (possibly via presynaptic summation between receptors).
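The arithmetic behind that inference can be made explicit. The back-of-envelope sketch below (not the authors' own analysis) divides the measured bump-rate ratio by the six photoreceptors feeding each cartridge, giving the apparent number of cartridges pooled laterally:

```python
def implied_pooling(interneuron_to_receptor_ratio, receptors_per_cartridge=6):
    """Apparent number of cartridges pooled by the interneuron: the
    measured interneuron-to-photoreceptor bump-rate ratio divided by
    the six photoreceptors that converge on each cartridge."""
    return interneuron_to_receptor_ratio / receptors_per_cartridge

# Point source: a ratio of 6 means no lateral summation at all.
print(implied_pooling(6))    # 1.0
# Extended grating at threshold: ratios of 18-20 imply ~3 cartridges pooled.
print(implied_pooling(18))   # 3.0
print(implied_pooling(20))   # ~3.33
```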
However, it is recent work in the lobula plate of the nocturnal elephant hawkmoth D. elpenor that has most convincingly revealed the extent of spatial and temporal summation at different light levels and quantified its benefits for vision in dim light. Like a hummingbird, this beautiful insect is a hoverer, effortlessly holding station in front of breeze-tossed flowers at night while sucking nectar on the wing, a behaviour that relies heavily on vision [84,85]. With its exquisitely sensitive superposition eyes, it is also able to see its world in colour, an ability thought possible only with the help of summation.
To investigate spatial and temporal summation, moths were stimulated with moving grating patterns of sinusoidally modulated black-and-white stripes while simultaneously recording from either the photoreceptors or the wide-field motion neurons of the lobula plate. The photoreceptors—which code visual stimuli with graded potentials—respond with a sinusoidal rise and fall in amplitude as the alternating black-and-white stripes pass through the photoreceptor's narrow receptive field. By contrast, the motion cells respond with a brisk train of action potentials as the stripes move in the cell's ‘preferred direction’—when they move in the opposite ‘null’ direction the cell is instead strongly inhibited. During an experiment, the stripes of the grating could be made finer and finer (i.e. their spatial frequency could be increased) and/or moved faster and faster (i.e. their temporal frequency could be increased) at any one of several grating mean light levels, ranging from late afternoon intensities down to overcast starlight intensities. At any given grating intensity, the following experiment was performed in both photoreceptors and motion-sensitive neurons—for a grating of specific spatial frequency and temporal frequency, the cell's response was recorded while the contrast of the moving grating gradually increased from zero towards its maximum contrast of 100%. At low contrasts both photoreceptors and motion cells fail to see the moving grating, but as contrast is ramped up, a contrast Co is eventually reached where the cell begins to react with a response that is significantly greater than the background noise level. This criterion threshold contrast defines the cell's contrast sensitivity (CS) to the specific grating pattern that stimulated it: CS = 1/Co. In other words, the lower the contrast of the grating when the cell is just able to detect it, the higher the cell's contrast sensitivity.
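The reciprocal relationship between threshold contrast and contrast sensitivity is trivial to state in code, which also makes the units explicit (contrast as a fraction between 0 and 1):

```python
def contrast_sensitivity(threshold_contrast):
    """CS = 1/Co: the reciprocal of the threshold contrast Co (as a
    fraction, 0-1) at which the cell just detects the moving grating."""
    return 1.0 / threshold_contrast

# A 5.5% threshold contrast corresponds to a peak CS of about 18,
# the dusk value reported below for Deilephila's motion cells:
print(round(contrast_sensitivity(0.055), 1))  # 18.2
```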
To test the spatial and temporal properties of motion-sensitive neurons at dimmer and dimmer light levels, cells were stimulated with gratings covering a wide range of spatial and temporal frequencies (all possible combinations were tested) and for each grating the cell's CS was measured (figure 6a). The resulting contrast sensitivity surface (heat map) at each grating intensity (from sunset to an overcast starlit night, figure 6a) shows a peak CS (figure 6d) at a single spatial (figure 6b) and temporal (figure 6c) frequency (intersection of solid white lines in left panel of figure 6a). Corner frequencies are also indicated in figure 6b,c—these are the spatial and temporal frequencies at which CS falls to 50% of its maximum value. The contrast sensitivity surfaces remain broad down to moonlight levels of intensity and even display regions of inhibition (blue regions in the heat maps of figure 6a). Moreover, because the pigment pupil opens during the transition from sunset to dusk light levels (from 100 to 1 cd m−2: leftmost two heat maps in figure 6a), retinal illumination actually increases—during dusk the CS peaks at its maximum value of around 18 (figure 6d), meaning that the lowest contrast threshold recorded was 1/18 ≈ 5.5%. For grating intensities lower than moonlight levels, the contrast sensitivity surfaces narrow, with peak (and corner) spatial and temporal frequencies both declining more rapidly (figure 6b,c). The motion cells cease to respond at grating intensities dimmer than overcast starlight levels.
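Extracting peak and corner frequencies from such measurements is a simple operation on the sampled CS values. The sketch below does it for a single spatial-frequency slice using entirely hypothetical numbers (the real data form a 2D surface over spatial and temporal frequency):

```python
def peak_and_corner(freqs, cs):
    """Return the peak frequency (maximum CS) and the corner frequency:
    the frequency above the peak at which CS falls to 50% of its
    maximum, found by linear interpolation between samples."""
    peak_i = max(range(len(cs)), key=lambda i: cs[i])
    half = cs[peak_i] / 2.0
    for i in range(peak_i, len(cs) - 1):
        if cs[i] >= half >= cs[i + 1]:
            # interpolate between samples i and i+1
            t = (cs[i] - half) / (cs[i] - cs[i + 1])
            return freqs[peak_i], freqs[i] + t * (freqs[i + 1] - freqs[i])
    return freqs[peak_i], freqs[-1]

# Hypothetical CS readings along one spatial-frequency slice (cycles/deg):
freqs = [0.05, 0.1, 0.2, 0.4, 0.8]
cs = [6.0, 10.0, 8.0, 4.0, 1.0]
print(peak_and_corner(freqs, cs))  # peak at 0.1, corner near 0.35
```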
When the same experiment is performed on Deilephila's photoreceptors, their resulting spatial and temporal properties reveal cut-off frequencies (dashed white lines in figure 6a) that account for the spatial and temporal properties of the motion-sensitive neurons at all light intensities down to moonlight levels (especially in the spatial domain). However, at lower light intensities, the photoreceptors retain much greater spatial and temporal resolution than the motion neurons, indicating that the responses of motion neurons are being maintained by significant amounts of spatial and temporal summation. This conclusion is reinforced by modelling motion vision in dim light using a standard Reichardt-type correlator model [42,86,87]. In this model, nearest-neighbour visual channels are summed together more strongly than next nearest neighbours, and so on—the exact summation profile is Gaussian, with a spatial half-width of Δρs degrees (figure 7a). Temporal summation is implemented by an exponential low-pass filter with a time constant of τs milliseconds (figure 7b).
The extent of spatial and temporal summation (represented by Δρs and τs), at any given light level, can now be directly compared with the spatial and temporal properties of the photoreceptors, represented by the size of the photoreceptor's spatial receptive field (roughly Gaussian, with half-width Δρ degrees) and the length of its visual integration time (Δt, milliseconds). The model shows that in Deilephila the extent of spatial summation remains insignificant down to moonlight levels (Δρs < Δρ), and then suddenly increases (figure 7a). In overcast starlight, it is estimated that in each visual channel visual signals are spatially summed from around 109 ommatidia! By contrast, the model shows that significant temporal summation (figure 7b) is present at all light levels, but like spatial summation, suddenly increases at intensities lower than moonlight levels (with τs about eight times longer than Δt). A slow visual system not only has advantages for visual reliability in dim light, but also for slow hovering flight, even at brighter light levels, and for holding station in front of wind-tossed flowers at night [84,85].
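The model's ingredients can be sketched in a few lines of code: a Gaussian spatial summation profile of half-width Δρs (here in units of the inter-channel spacing), an exponential low-pass filter with time constant τs, and the correlator itself, which multiplies each input with the delayed (low-pass-filtered) signal from its neighbour and subtracts the two mirror-symmetric arms. This is a generic, minimal Reichardt detector with illustrative parameter values, not the fitted model of the study:

```python
import math

def gaussian_weights(n, half_width):
    """Spatial summation profile: Gaussian weights over channel offsets
    -n..n, with full width at half maximum half_width (in units of the
    inter-ommatidial spacing), normalized to sum to 1."""
    sigma = half_width / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-n, n + 1)]
    total = sum(w)
    return [x / total for x in w]

def lowpass(signal, tau, dt):
    """First-order exponential low-pass filter with time constant tau:
    the 'delay' arm of the correlator (and the temporal summation step)."""
    alpha = dt / (tau + dt)
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def reichardt(left, right, tau, dt):
    """Minimal Reichardt correlator: each arm multiplies one input with
    the low-pass-filtered version of its neighbour; subtracting the two
    mirror-symmetric arms gives a signed, direction-selective output."""
    dl, dr = lowpass(left, tau, dt), lowpass(right, tau, dt)
    return [dl[i] * right[i] - dr[i] * left[i] for i in range(len(left))]

# A sinusoidal pattern drifting from the left channel to the right one
# (the right input lags by 20 ms) yields a positive mean response,
# while the reversed (null) direction yields a negative one:
dt, tau, f = 0.001, 0.020, 5.0
t = [i * dt for i in range(2000)]
left = [math.sin(2 * math.pi * f * x) for x in t]
right = [math.sin(2 * math.pi * f * (x - 0.020)) for x in t]
print(sum(reichardt(left, right, tau, dt)) / len(t) > 0)   # True (preferred)
print(sum(reichardt(right, left, tau, dt)) / len(t) < 0)   # True (null)
```

In the full model the left and right inputs would themselves be weighted sums over neighbouring channels using gaussian_weights, which is precisely how widening Δρs trades spatial resolution for sensitivity, while lengthening τs trades temporal resolution for the same gain.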
Thus in dim light, spatial and temporal summation in Deilephila are both significant and work together to increase contrast sensitivity as light levels fall. In fact, for all light intensities dimmer than dusk levels, contrast sensitivity is substantially higher with summation than without it, extending vision to light intensities at least 100 times dimmer (figure 6d). In other words, despite sacrifices in both spatial and temporal resolution, summation dramatically enhances the reliability of the coarser and slower features of the visual world in dim light. There is thus no question that spatial and temporal summation is a major strategy in nocturnal vision, especially in animals with small eyes like insects.
Despite having tiny eyes and small brains, nocturnal insects have remarkable visual abilities in dim light, performing sophisticated visual behaviours when the photoreceptors each absorb only a weak trickle of photons. Nocturnal insects can see colour [33,39], control flight and land [59,60], react to faint movements in their environment, navigate using dim celestial cues [40,41,61–63] and find their way home after a long and tortuous foraging trip using learned visual landmarks [3,32,35,36,89]. Part of this ability lies in the optical designs of their sensitive compound eyes which maximize light capture. Part lies in the slow response speed and high gain of their photoreceptors, which improve the reliability of the visual signal as it leaves the retina for further processing. And a very large part lies in the spatial and temporal summation of these signals in the optic lobe, a strategy that substantially enhances contrast sensitivity in dim light and allows nocturnal insects to see a brighter world, albeit a slower and coarser one.
Whether these three major strategies are sufficient on their own to entirely bridge the gap between the surprisingly rare arrivals of photons on the retina and the spectacular visual behaviours these photons sustain, remains to be seen. Higher processes still—occurring more centrally in the brain—might further improve the reliability of visual signals before they are ultimately used as inputs to motor pathways that drive behaviour. But whichever strategies remain to be discovered, there is no question that insects have evolved truly extraordinary nocturnal visual powers, partly due to brains that are, as Charles Darwin would have entirely agreed, ‘the most marvellous atoms of matter in the world’.
I declare I have no competing interests.
This study was supported by Vetenskapsrådet, Kungliga Fysiografiska Sällskapet i Lund, Knut och Alice Wallenbergs Stiftelse and Air Force Office of Scientific Research.
The author is particularly grateful to the Swedish Research Council, the United States Air Force Office of Scientific Research, the Knut and Alice Wallenberg Foundation and the Royal Physiographic Society of Lund for their valuable and ongoing support.
One contribution of 17 to a theme issue ‘Vision in dim light’.
- Accepted August 25, 2016.
- © 2017 The Author(s)
Published by the Royal Society. All rights reserved.