
Where do spatial dimensions enter in single compartment neuronal models?


I am trying to understand how the length and diameter of a compartment are specified. For example, in the Hodgkin-Huxley model, we only have conductances specified in $\mathrm{mS/cm^2}$. How do you specify that a compartment is, say, $100~\mu\mathrm{m}$ long?


In a single-compartment model, you do not have spatial dimensions. Sometimes this is not a problem; for example, in the stomatogastric ganglion of C. borealis, the neurons have a mechanism that scales the signal with distance by modifying the "effective reversal potential" (Otopalik et al., 2017). For a model of one of these cells, it would be sensible either to approximate the cell surface area (e.g., Liu et al., 1998) or to use parameters normalized by the surface area, whose explicit numerical value we can then remain agnostic about. One does this by specifying the membrane capacitance in microfarads per square centimeter (typically about unity) and the maximal conductances in millisiemens per square centimeter.

This way, the natural units for current divided by capacitance are mV · (mS/cm²) · (cm²/µF), which reduces to mV/ms; the surface area cancels out entirely.
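
To make this concrete, here is a minimal sketch (mine, not part of the original answer) of a leak-only single-compartment model in which every parameter is specified per unit area; the actual surface area never appears in the update equation, and the parameter values are purely illustrative.

```python
import numpy as np

# All parameters are specific (per cm^2), so the compartment's actual
# surface area never enters the equations.
C_m   = 1.0    # membrane capacitance, uF/cm^2
g_L   = 0.3    # leak conductance, mS/cm^2
E_L   = -65.0  # leak reversal potential, mV
I_ext = 5.0    # injected current density, uA/cm^2

dt = 0.01                        # ms
t = np.arange(0.0, 50.0, dt)     # ms
V = np.empty_like(t)
V[0] = E_L

for n in range(1, len(t)):
    # uA/cm^2 divided by uF/cm^2 gives mV/ms: the cm^2 cancels.
    dVdt = (I_ext - g_L * (V[n-1] - E_L)) / C_m
    V[n] = V[n-1] + dt * dVdt

print(f"steady-state V ~ {V[-1]:.2f} mV (expected {E_L + I_ext/g_L:.2f} mV)")
```

If you later need an absolute current for a compartment of known geometry, you would multiply the current density by the membrane area (for instance, the surface of a cylinder 100 µm long with a chosen diameter); that is exactly where spatial dimensions re-enter the model.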

If you wanted to model multiple neurons with a decay factor, you could either use the cable equation and explicitly inject the correspondingly attenuated current into the downstream neuron, or build a neuron model with multiple compartments.


How Does p Develop?

If p is quantitatively distributed in the population, with extreme scores signaling neuroticism, emotion dysregulation, intellectual impairments, and disordered thought, what marks its developmental progression? One possibility that we hypothesized (24) is that many young children exhibit diffuse emotional and behavioral problems, fewer go on to manifest a brief episode of an individual disorder, still fewer progress to develop a persistent internalizing or externalizing syndrome, and only a very few individuals progress to the extreme elevation of p, ultimately emerging with a psychotic condition, most likely during late adolescence or early adulthood. This hypothesized developmental progression is supported by evidence from a unique prospective adoption study that showed that biological mothers’ p factor predicted their adopted-away children’s internalizing and externalizing problems by age 3, suggesting the early-life emergence of pleiotropic genetic effects (53, 54). This developmental progression would also require evidence that brief episodes of single disorders are widespread in the population, which is supported by the high lifetime prevalence rates of individuals with disorders over years of follow-up in longitudinal studies (55, 56). A developmental progression would also require that individuals who manifest psychosis have an extensive prior history of many other disorders, which has been reported (57, 58). And, moreover, a developmental progression would anticipate that when individuals are followed long enough, those with the most severe liability to psychopathology will tend to move in and out of diagnostic categories. Today’s patient with schizophrenia was yesterday’s boy with conduct disorder or girl with social phobia (and tomorrow’s elderly person with severe depression). This developmental progression hypothesis is consistent with evidence that sequential comorbidity is the rule rather than the exception (59) and that individuals experiencing sequentially comorbid disorders also exhibit more severe psychopathology (16). To the best of our knowledge, this entire developmental progression—from mild, diffuse emotional and behavioral problems to persistent syndromes to extreme, impairing comorbid conditions—has not been described in the same individuals followed over time, and predictors of age-graded transitions along the hypothesized progression have yet to be evaluated.


Analysis of Residuals

To determine where the PCA-based representational similarity analysis was failing to account for differences between mental states, we constructed a representational dissimilarity matrix of the residuals from a multiple regression featuring the three significant dimensions (Fig. S4). Additionally, we calculated the average residual for each mental state and correlated these averaged residuals with the three significant PCs. The rationality of a mental state did not predict whether its pattern was chronically predicted to be more or less different from that of other states (r = −0.03). The pattern dissimilarity between negative states tended to be slightly overestimated (r = 0.18). Finally, the pattern dissimilarity between highly socially impactful states tended to be substantially underestimated (r = −0.66).


Materials and Methods

Perceptual decision making paradigm

We adapted a 2-AFC paradigm of face versus car discrimination, using a set of 12 face (Max Planck Institute face database) and 12 car grayscale images. The car image database was the same as that used in Philiastides and Sajda (2006) and Philiastides et al. (2006); it was constructed by taking images from the internet, segmenting the car from the background, converting the image to grayscale, and then resizing it to be comparable to the face images. The pose of the faces and cars was also matched across the entire database and was sampled at random (left, right, center) for the training and test cases. All the images (512 × 512 pixels, 8 bits/pixel) were equated for spatial frequency, luminance, and contrast. The phase spectra of the images were manipulated using the weighted mean phase method (Dakin et al., 2002) to introduce noise, resulting in a set of images graded by phase coherence. Specifically, we computed the 2D Fourier transform of each image and constructed the average magnitude spectrum by averaging across all images. The phase spectrum of each image was then constructed by computing a weighted sum of the phase spectrum of the original image (φ_image) and that of random noise (φ_noise).
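
A simplified reconstruction of that manipulation might look like the following; the exact weighting and normalization of Dakin et al. (2002) may differ, and the function and variable names here are illustrative.

```python
import numpy as np

def phase_coherence_image(img, w, avg_magnitude, rng=np.random.default_rng(0)):
    """Mix the image's phase spectrum with random phase while keeping the
    magnitude spectrum averaged over the whole image set (simplified sketch)."""
    F = np.fft.fft2(img)
    phi_image = np.angle(F)
    phi_noise = rng.uniform(-np.pi, np.pi, size=img.shape)
    # Weighted combination of original and random phase; w is the coherence.
    phi_mixed = w * phi_image + (1.0 - w) * phi_noise
    # Recombine the mixed phase with the average magnitude spectrum.
    F_mixed = avg_magnitude * np.exp(1j * phi_mixed)
    return np.real(np.fft.ifft2(F_mixed))

# avg_magnitude would be computed once over the whole image set, e.g.:
# avg_magnitude = np.mean([np.abs(np.fft.fft2(im)) for im in images], axis=0)
```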

Each image subtended 2° × 2° of visual angle, and the background screen was set to a mean-luminance gray. The image size was set to match the size of the V1 model, which covered 4 mm² of cortical sheet. Figure 1 shows examples of the face and car images used in the experiment, as well as the effect of varying the phase coherence on the discriminability of the image class.

The stimulus set for the 2-AFC perceptual decision making task. (A) Shown are 12 face and 12 car images at phase coherence 55%. (B) One sample face and one sample car image, at phase coherences varying from 20 to 55%. (C) Design and timing of the simulated psychophysics experiment for the model.

The sequence of images was input to the model: an image was flashed for 50 ms, followed by a gray mean-luminance image, with an inter-stimulus interval (ISI) of 200 ms (Figure 1C). Since simulating the model is computationally expensive, we minimized the simulation time by choosing an ISI that was as small as possible yet did not result in network dynamics leaking across trials. Pilot experiments showed that network activity settled to background levels approximately 200 ms after stimulus offset. We ran the simulation for each of the two classes, face and car, at different coherence levels (20, 25, 30, 35, 40, 45, 55%). Each image was repeated for 30 trials in the simulation, and the sequence of trials was randomly generated. In each simulation, we randomized the order of the different images, making sure not to push the model into a periodic response pattern.

In parallel with simulating the model response, we conducted human psychophysics experiments. Ten volunteer subjects were recruited. All participants provided written informed consent, as approved by the Columbia University Institutional Review Board. All subjects were healthy with corrected visual acuity of 20/20. Psychophysics testing was administered monocularly. Images of different phase coherences were randomized in the psychophysics experiment. During the experiment, subjects were instructed to fixate at the center of the images and to decide, as quickly as possible, whether they saw a face or a car by pressing one of two buttons with their right hand. The ISI for the human psychophysics experiments was longer and randomized between 2500 and 3000 ms, in order to allow a comfortable reaction time and to reduce the subjects' ability to predict the time of the next image. A Dell computer with an nVIDIA GeForce4 MX 440 AGP8X graphics card and E-Prime software controlled the stimulus presentation.

Model summary

An overview of the model architecture and decoding is illustrated in Figure 2. We modeled the early visual pathway with a feedforward lateral geniculate nucleus (LGN) input and a recurrent spiking neuron network of the input layers (4Cα/β) of primary visual cortex (V1). We model the short-range connectivity within the V1 layer, without feedback from higher areas. We simulated a magnocellular version of the model, the details of which have been described previously (Wielaard and Sajda, 2006a,b, 2007). Note that our model is a variant of an earlier V1 model (McLaughlin et al., 2000; Wielaard et al., 2001).

Summary of the model architecture. (A) The model comprises encoding and decoding components. (B) Architecture of the V1 model, where receptive fields and LGN axon targets are viewed in visual space (left) and cortical space (right). Details can be found in Wielaard and Sajda (2006a).

In brief, the model consists of a layer of N (4096) conductance-based integrate-and-fire point neurons (one compartment each), representing about a 2 × 2 mm² piece of a V1 input layer (layer 4C). Our model of V1 consists of 75% excitatory neurons and 25% inhibitory neurons. In the model, 30% of both the excitatory and inhibitory cell populations receive LGN input. In agreement with experimental findings, the LGN neurons are modeled as rectified center-surround linear spatio-temporal filters. Sizes for center and surround were taken from experimental data (Hicks et al., 1983; Derrington and Lennie, 1984; Shapley, 1990; Spear et al., 1994; Croner and Kaplan, 1995; Benardete and Kaplan, 1999). Noise, cortical interactions, and LGN input are assumed to act additively in contributing to the total conductance of a cell. The noise term is modeled as a Poisson spike train convolved with a kernel comprising a fast AMPA component and a slow NMDA component (see Supplementary Materials in Wielaard and Sajda, 2006a).

The LGN receptive field (RF) centers were organized on a square lattice. The lattice spacing and the consequent LGN receptive field densities imply LGN cellular magnification factors that are in the range of the experimental data available for macaque (Malpeli et al., 1996). The connection structure between LGN cells and cortical cells is made so as to establish ocular dominance bands and a slight orientation preference organized in pinwheels (Blasdel, 1992). It is further constructed under the constraint that the LGN axonal arbor sizes in V1 do not exceed the anatomically established value of 1.2 mm (Blasdel and Lund, 1983; Freund et al., 1989).

In constructing the model, our objective was to keep the parameters as deterministic and uniform as possible. This enhances the transparency of the model while at the same time providing insight into what factors may be essential for the considerable diversity observed in the responses of V1 cells.

Sparse decoding

We used a linear decoder to map the spatio-temporal activity in the V1 model to a decision on whether the input stimulus is a face or a car. We employed a sparsity constraint on the decoder in order to control the dimension of the effective feature space. Sparse decoding has been previously investigated for decoding real electrophysiological data, for instance by Chen et al. (2006), Palmer et al. (2007), and Quiroga et al. (2007).

Since a primary purpose of using the decoder is to identify informative dimensions in the neurodynamics, we estimate new decoder parameters at each stimulus noise level (coherence level) independently. Alternatively we could train a decoder at the highest coherence level and test the decoder at each coherence level. In this paper we focus on the first approach, since we view our decoder as a tool for analyzing the information content in the neurodynamics and how downstream neurons might best decode this information for discrimination.

We constructed an optimal decoder to read out the information in our spiking neuron model, fully exploiting its spatio-temporal dynamics. The spike train for each neuron in the population is $s_{i,k}(t) = \sum_l \delta(t - t_{i,k,l})$, where $t \in [0, 250]$ ms, i = 1 … N indexes neurons, k = 1 … M indexes trials, and l = 1 … P indexes spikes. Based on the population spike trains, we estimated the firing rate on each trial by counting the number of spikes within a time bin of width τ, resulting in a spike count matrix $r_{i,j,k} = \int_{(j-1)\tau + 1}^{j\tau} s_{i,k}(t)\,dt$, where i = 1 … N represents the ith neuron, j = 1 … T/τ represents the jth time bin, and k = 1 … M represents the kth trial. Note that we explored decoding using time bins of different lengths. When τ = 25 ms, we assume that information is encoded in both neuron identity and time, since the firing rate is closer to an instantaneous firing rate; when τ = 250 ms, we integrate the spiking activity over the entire trial, leading to a rate-based representation of information. A separate post hoc analysis showed that 25 ms was in fact the bin width that yielded the highest discrimination accuracy (bin width varied from 5 to 250 ms). The class label of each sample $b_k$ takes a value in $\{-1, +1\}$, representing either face or car, with M being the number of trials. In order to explore the information within the spatio-temporal dynamics, we compute a weighted sum of firing rates over different neurons and time bins. This leads to seeking the solution of the following constrained minimization problem,

$$\min_{w, v} \; \sum_{k=1}^{M} \log\!\left(1 + \exp\!\left(-b_k (w^T x_k + v)\right)\right) + \lambda J(w),$$
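
The binning step can be written out as follows; this is an illustrative sketch (not the authors' code), assuming `spike_times[i][k]` holds the spike times in ms of neuron i on trial k.

```python
import numpy as np

def spike_count_matrix(spike_times, n_neurons, n_trials, T=250.0, tau=25.0):
    """Bin spike times into an (N, T/tau, M) spike-count array r[i, j, k]."""
    n_bins = int(T // tau)
    r = np.zeros((n_neurons, n_bins, n_trials))
    edges = np.arange(0.0, T + tau, tau)     # bin edges 0, tau, 2*tau, ..., T
    for i in range(n_neurons):
        for k in range(n_trials):
            counts, _ = np.histogram(spike_times[i][k], bins=edges)
            r[i, :, k] = counts
    return r

# Stacking neurons and time bins into one feature vector per trial:
# x[:, k] = r[:, :, k].reshape(-1), which corresponds to the stacked index l.
```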

where the first term is the empirical logistic loss function and the second term is the regularization function, with λ > 0 as the regularization parameter. We create a stacked version of the spike count matrix, $x_{l,k} = r_{i,j,k}$ with $l = (i - 1)N + j$, i.e., stacking the neuron and time-bin dimensions together. The resulting linear decoder can be geometrically interpreted as a hyperplane that separates the face and car classes, where w represents the weights of the linear decoder and v is the offset. In the case of the sparse decoder, we use an L1 regularization term $J(w) = \|w\|_1$; alternatively, for the non-sparse decoder, we use the L2 regularization $J(w) = \|w\|_2^2$. In the language of Bayesian analysis, the logistic loss term comes from maximum likelihood, L1 corresponds to a Laplacian prior, and L2 corresponds to a Gaussian prior. L1-regularized logistic regression results in a sparse solution for the weights (Krishnapuram et al., 2005; Koh et al., 2007; Meier et al., 2008). So-called "sparse logistic regression" serves as an approach for feature selection, in which the features most informative about the classification survive in the form of non-zero weights (Ng, 2004). We developed an efficient and accurate method to solve this optimization problem (Shi et al., 2010, 2011). Once we have learned the hyperplane, for any new image we can predict the image category via the sign of $w^T x_k + v$.
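
The optimization itself is solved with the authors' own method (Shi et al., 2010, 2011), which is not reproduced here; as a rough stand-in, an equivalent L1-regularized logistic regression can be fit with scikit-learn. The feature matrix X (trials by stacked neuron/time-bin counts), the labels y in {-1, +1}, and the value of λ are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_sparse_decoder(X, y, lam=1.0):
    """L1-regularized logistic regression; scikit-learn uses C = 1/lambda."""
    clf = LogisticRegression(penalty="l1", C=1.0 / lam, solver="liblinear")
    clf.fit(X, y)
    w, v = clf.coef_.ravel(), clf.intercept_[0]
    return w, v

def predict_class(w, v, x_new):
    # Decision via the sign of w^T x + v, as described above.
    return np.sign(w @ x_new + v)
```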

Figure 3 provides a geometric intuition for why L1 and L2 regularization lead to sparse and non-sparse solutions, respectively. The solution of L1- or L2-regularized logistic regression is the intersection of the regularization geometry and a hyperplane. Figure 3A shows that the L1 regularization corresponds to the diamond-shaped ball centered at the origin. As one increases the regularization parameter λ, the L1 ball grows, and the solution is the point at which it hits the hyperplane. Given the geometry of the L1 ball, this point of contact tends to fall on a vertex, where most coordinates are zero, so the solution is more likely to be sparse. Figure 3B shows the L2-regularized logistic regression, where the geometry of the L2 ball is a sphere, therefore leading to a non-sparse solution.

A schematic illustration of how different regularization terms lead to sparse and non-sparse solutions in the linear classifier. (A) L1 regularization corresponds to the diamond shaped ball centered around the origin. (B) L2 regularization corresponds to the spherical ball centered around the origin.

Cross validation

Training and testing were carried out on different sets of images, each containing six face images and six car images, with 30 trials per image. Ten-fold cross-validation was used on the training set, while the final weights applied to the testing set were estimated using jackknife estimation to reduce bias. A regularization path was also employed, in which a family of λ values is used. Given that different values of λ offer different levels of sparsity, we chose the λ that maximized discrimination accuracy on the training dataset after cross-validation. We then used this hyperparameter on the testing dataset to calculate the final discrimination accuracy. In order to identify the time windows that are critical for reading out information in the V1 model, we used two approaches. The first was a heuristic approach, in which we only consider dynamics during t ∈ [50, 150] ms, given that the V1 model has a delay of 50 ms after stimulus onset and the length of activation is about 100 ms. In the second approach, we optimized the temporal window with an adaptive technique, searching for the window that results in the best decoding performance. In the adaptive technique, we systematically varied the latency and width of the window and computed the corresponding Az (area under the ROC curve) values through cross-validation. The best window is the one that results in the highest Az value.
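
A sketch of the adaptive window search, under the assumption that it amounts to a grid search over window latency and width scored by cross-validated Az; the grid values and the use of scikit-learn are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def best_window(r, y, tau=25.0, latencies=range(0, 150, 25), widths=range(25, 250, 25)):
    """Grid-search window latency/width (ms), scoring by 10-fold cross-validated AUC.
    r: (N, n_bins, M) spike-count array; y: trial labels."""
    best = (None, -np.inf)
    for lat in latencies:
        for width in widths:
            j0, j1 = int(lat // tau), int((lat + width) // tau)
            if j1 <= j0 or j1 > r.shape[1]:
                continue
            X = r[:, j0:j1, :].reshape(-1, r.shape[2]).T   # trials x features
            clf = LogisticRegression(penalty="l1", C=1.0, solver="liblinear")
            az = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
            if az > best[1]:
                best = ((lat, width), az)
    return best   # ((latency_ms, width_ms), Az)
```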

Measuring sparseness

We characterize the sparseness of the neural representation in the population spike trains in both the temporal and spatial domains. Following Willmore and Tolhurst (2001), lifetime sparseness describes the activity of a single neuron over time, while population sparseness characterizes the activity of a population of neurons within a given time window. We estimate instantaneous firing rates using a Gaussian window 25 ms wide with a standard deviation of 5 ms. Sparseness in firing rates can be measured by kurtosis (Olshausen and Field, 2004), namely the fourth moment relative to the variance squared.
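
Kurtosis-based sparseness can be computed directly from the smoothed firing rates; the helper below is an illustrative sketch, applicable to a single neuron's rate over time (lifetime sparseness) or to the population's rates in one window (population sparseness).

```python
import numpy as np

def kurtosis_sparseness(rates):
    """Fourth moment about the mean divided by the squared variance.
    Subtracting 3 would give excess kurtosis; the raw ratio is returned here."""
    rates = np.asarray(rates, dtype=float)
    mu, var = rates.mean(), rates.var()
    return np.mean((rates - mu) ** 4) / var ** 2
```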

Using the sparse decoding framework, we are able to identify the informative dimensions that are critical for our specific decision-making task. We define "informative dimensions" as the number of non-zero weights in the decoder, which is equal to the cardinality of the weight vector. Informative dimensions thus reflect the number of non-zeros in the spatio-temporal "word." Note that one neuron can be selected by the decoder at multiple time bins; therefore, we define "informative neurons" as the number of neurons having at least one non-zero weight across the different time bins.
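
Counting informative dimensions and informative neurons from a fitted weight vector is straightforward; the sketch below assumes the stacked weights can be reshaped neuron-major (one row of time bins per neuron), which is an assumption about the stacking layout rather than a detail given above.

```python
import numpy as np

def count_informative(w, n_neurons, n_bins, tol=1e-12):
    """Informative dimensions = non-zero decoder weights; informative neurons =
    neurons with at least one non-zero weight across the time bins."""
    W = np.asarray(w).reshape(n_neurons, n_bins)   # assumes neuron-major stacking
    nonzero = np.abs(W) > tol
    return int(nonzero.sum()), int(nonzero.any(axis=1).sum())
```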

Statistical tests

We used a likelihood ratio test to evaluate the goodness of fit. We fit a single Weibull curve jointly to the psychometric and neurometric datasets (dof = 4), as well as two Weibull curves independently to the two datasets (dof = 8). We computed the likelihood ratio using $D = -2 \ln\!\left(l_j / (l_p l_n)\right)$, where $l_j$ is the likelihood of the joint fit and $l_p$ and $l_n$ are the likelihoods of the independent psychometric and neurometric fits. The null hypothesis is that the psychometric and neurometric data can be described by the same curve, and the decision rule is based on the χ² statistic: if p > 0.05, we do not reject the null hypothesis; otherwise, we reject it.
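
The decision rule can be expressed compactly in terms of the maximized log-likelihoods; the function below is an illustrative sketch, with the degrees of freedom of the χ² test taken as the difference between the two fits (8 - 4 = 4).

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_joint, ll_psych, ll_neuro, dof_joint=4, dof_indep=8):
    """D = -2 ln( L_joint / (L_psych * L_neuro) ), written with log-likelihoods."""
    D = -2.0 * (ll_joint - (ll_psych + ll_neuro))
    p = chi2.sf(D, df=dof_indep - dof_joint)
    # Do not reject the null (same curve) if p > 0.05; otherwise reject it.
    return D, p
```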


Summary and future directions

Studies of adaptation continue to reveal surprising and complex forms of plasticity in sensory systems, from peripheral receptors to central mechanisms coding highly abstract properties of the stimulus. The finding that vision adapts in such similar ways to such a diverse array of perceptual attributes suggests that adaptation is an intrinsic feature of visual coding that is manifest throughout the visual stream. However, we still understand little about the dynamics and mechanisms of these adjustments, how they operate over different timescales, and whether they serve common or distinct roles in calibrating our perceptions.


5 Answers

There are 2 problems you might face.

Your neural net (in this case a convolutional neural net) cannot physically accept images of different resolutions. This is usually the case if it has fully-connected layers; however, if the network is fully-convolutional, then it should be able to accept images of any dimension. Fully-convolutional means that it contains no fully-connected layers, only convolutional, max-pooling, and batch-normalization layers, all of which are invariant to the size of the image.

Exactly this approach was proposed in the ground-breaking paper Fully Convolutional Networks for Semantic Segmentation. Keep in mind that their architecture and training methods might be slightly outdated by now. A similar approach was used in the widely used U-Net: Convolutional Networks for Biomedical Image Segmentation, and in many other architectures for object detection, pose estimation, and segmentation.
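
As a quick illustration (a toy sketch, not the FCN or U-Net architectures cited above), a network built only from convolutions, pooling, and normalization produces a per-pixel output for any input resolution:

```python
import torch
import torch.nn as nn

# A toy fully-convolutional segmentation head: no fully-connected layers,
# so it accepts inputs of any spatial size.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.Conv2d(32, 2, kernel_size=1),            # 2 output classes, per pixel
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
)

for h, w in [(128, 128), (200, 304)]:
    out = net(torch.randn(1, 3, h, w))
    print(out.shape)   # (1, 2, h, w): spatial size follows the input
```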

Convolutional neural nets are not scale-invariant. For example, if one trains on cats of the same size in pixels, on images of a fixed resolution, the net will fail on images containing smaller or larger cats. To overcome this problem, I know of two methods (there might be more in the literature):

multi-scale training of images of different sizes in fully-convolutional nets in order to make the model more robust to changes in scale and

having multi-scale architecture.

Assuming you have a large dataset and it is labeled pixel-wise, one hacky way to solve the issue is to preprocess the images to the same dimensions by inserting horizontal and vertical margins up to your desired dimensions; for the labels, you add a dummy extra output class for the margin pixels, so that when calculating the loss you can mask the margins out.
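
A sketch of that padding-and-masking idea, using PyTorch for illustration; the target size and the dummy label value are arbitrary choices.

```python
import torch
import torch.nn.functional as F

IGNORE = 255  # dummy label assigned to margin pixels

def pad_to(img, mask, H=512, W=512):
    """Pad an image (C, h, w) and its label mask (h, w) with margins up to (H, W)."""
    h, w = mask.shape
    pad = (0, W - w, 0, H - h)                # (left, right, top, bottom)
    img = F.pad(img, pad, value=0.0)
    mask = F.pad(mask, pad, value=IGNORE)     # margins get the dummy label
    return img, mask

# During training, the margin pixels do not contribute to the loss:
# loss = F.cross_entropy(logits, mask_batch, ignore_index=IGNORE)
```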

Try resizing the image to the input dimensions of your neural network architecture (keeping it fixed to something like 128×128 in a standard 2D U-Net architecture) using nearest-neighbour interpolation. This is because resizing with any other interpolation may tamper with the ground-truth labels. This is particularly a problem in segmentation; you won't face such a problem in classification.
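
A sketch of such a resize, using PyTorch's interpolation for illustration; the tensor shapes and target size are assumptions.

```python
import torch
import torch.nn.functional as F

def resize_pair(img, mask, size=(128, 128)):
    """img: (1, C, h, w) float tensor; mask: (1, 1, h, w) integer label map."""
    # Nearest-neighbour interpolation, as suggested above, so label values
    # stay valid class indices (no blending of neighbouring labels).
    img = F.interpolate(img, size=size, mode="nearest")
    mask = F.interpolate(mask.float(), size=size, mode="nearest").long()
    return img, mask
```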

As you want to perform image segmentation, you can use U-Net, which has no fully connected layers; it is a fully convolutional network, which makes it able to handle inputs of any dimension. You should read the linked papers for more info.

You could also have a look at the paper Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition (2015), where SPP-net is proposed. SPP-net is based on a "spatial pyramid pooling" layer, which eliminates the requirement of having fixed-size inputs.


Computational models of spatial updating in peri-saccadic perception

Perceptual phenomena that occur around the time of a saccade, such as peri-saccadic mislocalization or saccadic suppression of displacement, have often been linked to mechanisms of spatial stability. These phenomena are usually regarded as errors in processes of trans-saccadic spatial transformations and they provide important tools to study these processes. However, a true understanding of the underlying brain processes that participate in the preparation for a saccade and in the transfer of information across it requires a closer, more quantitative approach that links different perceptual phenomena with each other and with the functional requirements of ensuring spatial stability. We review a number of computational models of peri-saccadic spatial perception that provide steps in that direction. Although most models are concerned with only specific phenomena, some generalization and interconnection between them can be obtained from a comparison. Our analysis shows how different perceptual effects can coherently be brought together and linked back to neuronal mechanisms on the way to explaining vision across saccades.




Pouget P., Stepniewska I., Crowder E. A., Leslie M. W., Emeric E. E., Nelson M. J.& Schall J. D.

. 2009 Visual and motor connectivity and the distribution of calcium-binding proteins in macaque frontal eye field: implications for saccade target selection . Front. Neuroanat. 3, 2.doi:

. 2003 Effects of stimulus-response compatibility on neural selection in frontal eye field . Neuron 38, 637–648.doi:

Shepherd M., Findlay J. M.& Hockey R. J.

. 1986 The relationship between eye movements and spatial attention . Q. J. Exp. Psychol. 38, 475–491. Crossref, Google Scholar

Hoffman J. E.& Subramaniam B.

. 1995 The role of visual attention in saccadic eye movements . Percept. Psychophys. 57, 787–795. Crossref, PubMed, Google Scholar

Kowler E., Anderson E., Dosher B.& Blaser E.

. 1995 The role of attention in the programming of saccades . Vis. Res. 35, 1897–1916.doi:

. 2002 Endogenous saccades are preceded by shifts of visual attention: evidence from cross-saccadic priming effects . Acta Psychol. 110, 83–102.doi:

Peterson M. S., Kramer A. F.& Irwin D. E.

. 2004 Covert shifts of attention precede involuntary eye movements . Percept. Psychophys. 66, 398–405. Crossref, PubMed, Google Scholar

. 2003 The reentry hypothesis: linking eye movements to visual perception . J. Vis. 3, 808–816.doi:

. 2004 A dynamic model of how feature cues guide spatial attention . Vis. Res. 44, 501–521.doi:

Dubois J., Hamker F. H.& VanRullen R.

. 2009 Attentional selection of noncontiguous locations: the spotlight is only transiently ‘split’ . J. Vis. 9, 3.doi:

Reynolds J. H., Pasternak T.& Desimone R.

. 2000 Attention increases sensitivity of V4 neurons . Neuron 26, 703–714.doi:

Martínez-Trujillo J.& Treue S.

. 2002 Attentional modulation strength in cortical area MT depends on stimulus contrast . Neuron 35, 365–370. Crossref, PubMed, Google Scholar

Williford T.& Maunsell J. H. R.

. 2006 Effects of spatial attention on contrast response functions in macaque area V4 . J. Neurophysiol. 96, 40–54.doi:

Richard A., Churan J., Guitton D. E.& Pack C. C.

. 2009 The geometry of perisaccadic visual perception . J. Neurosci. 29, 10160–10170.doi:

. 1961 The representation of the visual field on the cerebral cortex in monkeys . J. Physiol. 159, 203–221. Crossref, PubMed, Google Scholar

Van Essen D. C., Newsome W. T.& Maunsell J. H.

. 1984 The visual field representation in striate cortex of the macaque monkey: asymmetries, anisotropies, and individual variability . Vis. Res. 24, 429–448. Crossref, PubMed, Google Scholar

. 2003 The representation of retinal blood vessels in primate striate cortex . J. Neurosci. 23, 5984–5997. Crossref, PubMed, Google Scholar

Schira M. M., Wade A. R.& Tyler C. W.

. 2007 Two-dimensional mapping of the central and parafoveal visual field to human visual cortex . J. Neurophysiol. 97, 4284–4295.doi:

Zirnsak M., Lappe M.& Hamker F. H.

. 2010 The spatial distribution of receptive field changes in a model of peri-saccadic perception: predictive remapping and shifts towards the saccade target . Vis. Res. 50, 1328–1337.doi:

. 2004 A simple translation in cortical log-coordinates may account for the pattern of saccadic localization errors . Biol. Cybern. 91, 131–137.doi:

Ross J., Morrone M. C., Goldberg M. E.& Burr D. C.

. 2001 Changes in visual perception at the time of saccades . Trends Neurosci. 24, 113–121.doi:

Moore T., Tolias A. S.& Schiller P. H.

. 1998 Visual representations during saccadic eye movements . Proc. Natl Acad. Sci. USA 95, 8981–8984.doi:

. 2006 V4 receptive field dynamics as predicted by a systems-level model of visual attention using feedback from the frontal eye field . Neural Netw. Official J. Int. Neural Netw. Soc. 19, 1371–1382. Crossref, PubMed, Google Scholar

Mazzoni P., Andersen R. A.& Jordan M. I.

. 1991 A more biologically plausible learning rule than backpropagation applied to a network model of cortical area 7a . Cereb. Cortex 1, 293–307.doi:

Pouget A., Deneve S.& Duhamel J.-R.

. 2002 A computational perspective on the neural basis of multisensory spatial representations . Nat. Rev. Neurosci. 3, 741–747.doi:

Deneve S., Latham P. E.& Pouget A.

. 2001 Efficient computation and cue integration with noisy population codes . Nat. Neurosci. 4, 826–831.doi:

Denève S., Duhamel J.-R.& Pouget A.

. 2007 Optimal sensorimotor integration in recurrent cortical networks: a neural implementation of Kalman filters . J. Neurosci. 27, 5744–5756.doi:

. 2000 Memory activity of LIP neurons for sequential eye movements simulated with neural networks . J. Neurophysiol. 84, 651–665. Crossref, PubMed, Google Scholar

Hamker F. H., Zirnsak M.& Lappe M.

. 2008 About the influence of post-saccadic mechanisms for visual stability on peri-saccadic compression of object location . J. Vis. 8, 1–13.doi:

De Pisapia N., Kaunitz L.& Melcher D.

. 2010 Backward masking and unmasking across saccadic eye movements . Curr. Biol. 20, 613–617.doi:

. 2007 Predictive remapping of visual features precedes saccadic eye movements . Nat. Neurosci. 10, 903–907.doi:

Merriam E. P., Genovese C. R.& Colby C. L.

. 2003 Spatial updating in human parietal cortex . Neuron 39, 361–373.doi:

Merriam E. P., Genovese C. R.& Colby C. L.

. 2007 Remapping in human visual cortex . J. Neurophys. 97, 1738–1755.doi:

. 2008 Electrophysiological correlates of presaccadic remapping in humans . Psychophysiology 45, 776–783.doi:

. 2005 A computational model of visual stability and change detection during eye movements in realworld scenes . Vis. Cogn. 12, 1161–1176.doi:

. 2008 Trans-saccadic perception . Trends Cogn. Sci. 12, 466–473.doi:

. 2009 Selective attention and the active remapping of object features in trans-saccadic perception . Vis. Res. 49, 1249–1255.doi:

Gottlieb J. P., Kusunoki M.& Goldberg M. E.

. 1998 The representation of visual salience in monkey parietal cortex . Nature 391, 481–484.doi:

Stanford T. R., Shankar S., Massoglia D. P., Costello M. G.& Salinas E.

. 2010 Perceptual decision making in less than 30 milliseconds . Nat. Neurosci. 13, 379–385. Crossref, PubMed, Google Scholar

. 2001 The time course of perceptual choice: the leaky, competing accumulator model . Psychol. Rev. 108, 550–592.doi:

Mazurek M. E., Roitman J. D., Ditterich J.& Shadlen M. N.

. 2003 A role for neural integrators in perceptual decision making . Cereb. Cortex 13, 1257–1269.doi:

. 2004 Psychology and neurobiology of simple decisions . Trends Neurosci. 27, 161–168.doi:

. 2007 The mechanisms of feature inheritance as predicted by a systems-level model of visual attention and decision making . Adv. Cogn. Psychol. 3, 111–123.doi:

. 2001 Seeing properties of an invisible object: feature inheritance and shine-through . Proc. Natl Acad. Sci. USA 98, 4271–4275.doi:

. 2008 Brain circuits for the internal monitoring of movements . Annu. Rev. Neurosci. 31, 317–338.doi:


Background

It has been reported that mania may be associated with superior cognitive performance. In this study, we test the hypothesis that manic symptoms in youth separate along two correlated dimensions and that a symptom constellation of high energy and cheerfulness is associated with superior cognitive performance.

Method

We studied 1755 participants of the IMAGEN study, of average age 14.4 years (SD = 0.43), 50.7% girls. Manic symptoms were assessed using the Development and Wellbeing Assessment by interviewing parents and young people. Cognition was assessed using the Wechsler Intelligence Scale for Children (WISC-IV) and a response inhibition task.

Results

Manic symptoms in youth formed two correlated dimensions: one termed exuberance, characterized by high energy and cheerfulness, and one termed undercontrol, characterized by distractibility, irritability, and risk-taking behavior. Only the undercontrolled, but not the exuberant, dimension was independently associated with measures of psychosocial impairment. In multivariate regression models, the exuberant, but not the undercontrolled, dimension was positively and significantly associated with verbal IQ by both parent- and self-report; conversely, the undercontrolled, but not the exuberant, dimension was associated with poor performance in a response inhibition task.

Conclusions

Our findings suggest that manic symptoms in youth may form dimensions with distinct correlates. The results are in keeping with previous findings about superior performance associated with mania. Further research is required to study etiological differences between these symptom dimensions and their implications for clinical practice.



Microelectrode arrays serve as an indispensable tool in electrophysiological research to study the electrical activity of neural cells, enabling both single-cell measurements and analysis of network communication. Recent experimental studies have reported that the neuronal geometry has an influence on electrical signaling and extracellular recordings. However, the corresponding mechanisms are not yet fully understood and require further investigation. Allowing systematic parameter studies, computational modeling provides the opportunity to examine the underlying effects that influence extracellular potentials. In this letter, we present an in silico single cell model to analyze the effect of geometrical variability on the extracellular electric potentials. We describe finite element models of a single neuron with varying geometric complexity in three-dimensional space. The electric potential generation of the neuron is modeled using Hodgkin-Huxley equations. The signal propagation is described with electro-quasi-static equations, and results are compared with corresponding cable equation descriptions. Our results show that both the geometric dimensions and the distribution of ion channels of a neuron are critical factors that significantly influence both the amplitude and shape of extracellular potentials.

In vitro cultures of neuronal networks on microelectrode arrays (MEA) provide an approximated biophysical environment of in vivo conditions inside the brain. MEA are being extensively used in various applications such as basic research, pharmaceutical testing, and as a research platform for implant development. Moreover, recently introduced high-density MEA allow for measurement of the neuronal activity on a subcellular level (Berdondini et al., 2005; Franke et al., 2012). Using such setups, significant variations of the measured extracellular action potentials (EAP) in both shape and amplitude with respect to the electrode-neuron position are observed (see Obien, Deligkaris, Bullmann, Bakkum, & Frey, 2015). Yet the underlying mechanisms causing these variations are still not entirely understood. Since neuronal cell cultures are very sensitive to any experimental manipulations, we chose to investigate this issue further using computational in silico models. With the ability to define certain aspects of individual neurons or neuronal networks, computational models have proven to be beneficial for addressing specific questions in this area.

For example, the models of Agudelo-Toro and Neef (2013) and Joucla, Glière, & Yvert (2014) assess the effect of extracellular electric stimuli on neuronal electrical activity. The influence of ephaptic coupling in electrically active neurons is studied in Xylouris and Wittum (2015), and the models of Bauer et al. (2013) have shown the effect of inhomogeneous extracellular medium on extracellular electric fields. The traditional method to model neurons employs a cable theory-based description of action potential (AP) propagation. It is typically implemented on multicompartment geometries defined by quasi-one-dimensional cylinders (Brette et al., 2007; Einevoll, Kayser, Logothetis, & Panzeri, 2013; Gold, Henze, Koch, & Buzsáki, 2006; Holt & Koch, 1999). While initial models were described in lower dimensions, mainly to reduce computational complexity, several simulations with a three-dimensional description of extracellular space have been presented recently. To be able to address problems on a three-dimensional level while utilizing cable theory, hybrid 1D/3D models have been introduced. Such models calculate the electrophysiological activity of one or several neurons in a quasi-one-dimensional multicompartment model (Bauer et al., 2013; Grein, Stepniewski, Reiter, Knodel, & Queisser, 2014; Joucla et al., 2014), for example, using the software NEURON developed by Hines and Carnevale (1997). Here, AP propagation inside the neuron is governed by the cable equation (CE) (Rall, 1962), which is solved individually for each compartment. The corresponding results are subsequently transferred into a three-dimensional description of extracellular space (e.g., by using point or line source approximation methods). The extracellular potential can then be obtained by solving the corresponding electro-quasi-static equations. An alternative approach to approximate the extracellular potential is the method of images (MOI) (Jackson, 1998). Due to a significantly reduced computational complexity, it is particularly suitable for larger model dimensions (e.g., for calculating the extracellular potential of neuronal slices). Yet its accuracy decreases in small dimensions—for example, if the distance between an adherent neuron and the surface is only a few micrometers (Ness et al., 2015).
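
To make the point/line source step concrete, here is a minimal Python sketch of the point-source approximation often used in such hybrid pipelines, based on the standard relation $V_e = I/(4\pi\sigma r)$ for a point current in an infinite homogeneous medium. It is an illustration under assumed values (conductivity, currents, positions), not code from any of the cited models.

    import numpy as np

    def point_source_potential(i_m, src_pos, probe_pos, sigma=0.3):
        # Extracellular potential (in volts) from point currents:
        #   V_e = sum_n I_n / (4 * pi * sigma * r_n)
        # i_m: (n,) currents in A; src_pos: (n, 3) positions in m;
        # probe_pos: (3,) electrode position in m; sigma: conductivity in S/m (assumed).
        r = np.linalg.norm(src_pos - probe_pos, axis=1)
        r = np.maximum(r, 1e-6)  # guard against the singularity at r -> 0
        return np.sum(i_m / (4.0 * np.pi * sigma * r))

    # Toy usage: two compartment currents, electrode 20 um above the first one.
    currents = np.array([1e-9, -1e-9])                          # A
    positions = np.array([[0.0, 0.0, 0.0], [50e-6, 0.0, 0.0]])  # m
    electrode = np.array([0.0, 0.0, 20e-6])                     # m
    print(point_source_potential(currents, positions, electrode))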

A restriction of the original CE that cannot be circumvented is its constraint to cylindrical geometries for a single compartment. More elaborate geometries can be realized with multicompartment approaches, allowing for a discretized geometry approximation. Mathematical adaptations presented in Holt and Koch (1999), Foster et al. (2010) and Herrera-Valdez and Suslov (2010) further allow for a description of rotational symmetric geometries (e.g., spheres and cones). Nevertheless, this still limits the applicability of cable theory for modeling more complex neuronal shapes.

As an alternative approach, full 3D models have been recently introduced that simulate intra- and extracellular space in parallel. Here, AP propagation is described with varying implementations of the electro-quasi-static Poisson's equation (van Rienen, Flehr, Schreiber, & Motrescu, 2003; Flehr, 2007; Agudelo-Toro & Neef, 2013; Appali, 2013; Joucla et al., 2014; Xylouris & Wittum, 2015). This allows for neuron descriptions similar to CE-based models, but also more detailed approaches, for example, by coupling Poisson's equation with the Nernst-Planck equation (Pods, Schönke, & Bastian, 2013). In contrast to CE, this mathematical approach does not pose any restrictions on neuron shape and allows for an approximation of general nonsymmetric model geometries.

Computational models have addressed many aspects of the interaction between neurons and extracellular space. Yet the specific effect of geometry and inhomogeneous ion channel distribution of an individual neuron on extracellular electric potentials has not been assessed in detail. This topic is particularly interesting in the case of in vitro cultures formed by dissociated neurons. Such cultures form a monolayer network of adherent cells, and neuronal electric activity can be measured (e.g., by electrodes of an MEA). Adhering to the surface alters the geometry of the cell. Furthermore, the extracellular gap between cell and substrate or electrode is very small, with distances below 100 nm (Braun & Fromherz, 1998). As both aspects may have a significant effect on the resulting extracellular potentials, the goal of this work is to evaluate the dependence of EAP on morphologic aspects using finite element method simulations.

For a systematic analysis, three distinct geometric models are developed in a three-dimensional simulation environment. While a cylindrical axon model is used for basic evaluation, geometric complexity increases in a second model by introducing a spherical soma geometry. Finally, a soma geometry with a planar face at the bottom is implemented to mimic an adherent neuron on an electrically insulating surface. For all models, AP generation is described using the well-known Hodgkin-Huxley model (Hodgkin & Huxley, 1952), while an electro-quasi-static equation system (EQS) is used to describe AP propagation. In addition, the results of the EQS-based models are compared with analogous CE-based models for confirmation. Extracellular space is described by an electro-quasi-static approach similar to the EQS-based equation system used for AP propagation in all models. The electrical properties of the extracellular domain, as well as its boundary conditions, are defined to resemble the conditions of in vitro cultures formed by dissociated neurons. For spatial discretization, the finite element method (FEM) is used, as it tends to be more suitable for dealing with nonlinear boundary conditions (e.g., based on the Hodgkin-Huxley model) than the finite volume method (FVM) (Wendt et al., 2009).
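
For readers unfamiliar with the membrane model used for AP generation, the following is a minimal, self-contained Python sketch of a single-compartment Hodgkin-Huxley integration with the classic 1952 parameters and a forward-Euler step. It is only an illustration of the point-neuron dynamics, not the FEM/EQS implementation described in this work, and the stimulus amplitude is an assumed value.

    import numpy as np

    # Classic Hodgkin-Huxley parameters (units: mV, ms, mS/cm^2, uA/cm^2, uF/cm^2).
    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
    E_Na, E_K, E_L = 50.0, -77.0, -54.4

    def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
    def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
    def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

    dt, t_stop, i_stim = 0.01, 50.0, 10.0        # ms, ms, uA/cm^2 (assumed stimulus)
    v, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting state
    v_trace = []
    for _ in range(int(t_stop / dt)):
        i_ion = (g_Na * m**3 * h * (v - E_Na)
                 + g_K * n**4 * (v - E_K)
                 + g_L * (v - E_L))
        v += dt * (i_stim - i_ion) / C_m         # membrane equation
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        v_trace.append(v)
    print(max(v_trace))                          # peak of the evoked spikes (mV)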



Dendritic spines are small, with spine head volumes ranging from 0.01 μm³ to 0.8 μm³. Spines with strong synaptic contacts typically have a large spine head, which connects to the dendrite via a membranous neck. The most notable classes of spine shape are "thin", "stubby", "mushroom", and "branched". Electron microscopy studies have shown that there is a continuum of shapes between these categories. [3] The variable spine shape and volume is thought to be correlated with the strength and maturity of each spine-synapse.

Distribution

Dendritic spines usually receive excitatory input from axons, although sometimes both inhibitory and excitatory connections are made onto the same spine head. Excitatory axon proximity to dendritic spines is not sufficient to predict the presence of a synapse, as demonstrated by the Lichtman lab in 2015. [4]

Spines are found on the dendrites of most principal neurons in the brain, including the pyramidal neurons of the neocortex, the medium spiny neurons of the striatum, and the Purkinje cells of the cerebellum. Dendritic spines occur at a density of up to 5 spines/1 μm stretch of dendrite. Hippocampal and cortical pyramidal neurons may receive tens of thousands of mostly excitatory inputs from other neurons onto their equally numerous spines, whereas the number of spines on Purkinje neuron dendrites is an order of magnitude larger.

Cytoskeleton and organelles

The cytoskeleton of dendritic spines is particularly important in their synaptic plasticity; without a dynamic cytoskeleton, spines would be unable to rapidly change their volumes or shapes in response to stimuli. These changes in shape might affect the electrical properties of the spine. The cytoskeleton of dendritic spines is primarily made of filamentous actin (F-actin). Tubulin monomers and microtubule-associated proteins (MAPs) are present, and organized microtubules are present. [5] Because spines have a cytoskeleton of primarily actin, this allows them to be highly dynamic in shape and size. The actin cytoskeleton directly determines the morphology of the spine, and actin regulators, small GTPases such as Rac, RhoA, and CDC42, rapidly modify this cytoskeleton. Overactive Rac1 results in consistently smaller dendritic spines.

In addition to their electrophysiological activity and their receptor-mediated activity, spines appear to be vesicularly active and may even translate proteins. Stacked discs of the smooth endoplasmic reticulum (SERs) have been identified in dendritic spines. Formation of this "spine apparatus" depends on the protein synaptopodin and is believed to play an important role in calcium handling. "Smooth" vesicles have also been identified in spines, supporting the vesicular activity in dendritic spines. The presence of polyribosomes in spines also suggests protein translational activity in the spine itself, not just in the dendrite.

The morphogenesis of dendritic spines is critical to the induction of long-term potentiation (LTP). [6] [7] The morphology of the spine depends on the states of actin, either in globular (G-actin) or filamentous (F-actin) forms. The role of Rho family of GTPases and its effects in the stability of actin and spine motility [8] has important implications for memory. If the dendritic spine is the basic unit of information storage, then the spine's ability to extend and retract spontaneously must be constrained. If not, information may be lost. Rho family of GTPases makes significant contributions to the process that stimulates actin polymerization, which in turn increases the size and shape of the spine. [9] Large spines are more stable than smaller ones and may be resistant to modification by additional synaptic activity. [10] Because changes in the shape and size of dendritic spines are correlated with the strength of excitatory synaptic connections and heavily depend on remodeling of its underlying actin cytoskeleton, [11] the specific mechanisms of actin regulation, and therefore the Rho family of GTPases, are integral to the formation, maturation, and plasticity of dendritic spines and to learning and memory.

RhoA pathway

One of the major Rho GTPases involved in spine morphogenesis is RhoA, a protein that also modulates the regulation and timing of cell division. In the context of activity in neurons, RhoA is activated in the following manner: once calcium has entered a cell through NMDA receptors, it binds to calmodulin and activates CaMKII, which leads to the activation of RhoA. [9] The activation of the RhoA protein will activate ROCK, a RhoA kinase, which leads to the stimulation of LIM kinase, which in turn inhibits the protein cofilin. Cofilin's function is to reorganize the actin cytoskeleton of a cell; namely, it depolymerizes actin segments and thus inhibits the growth of growth cones and the repair of axons. [12]

A study conducted by Murakoshi et al. in 2011 implicated the Rho GTPases RhoA and Cdc42 in dendritic spine morphogenesis. Both GTPases were quickly activated in single dendritic spines of pyramidal neurons in the CA1 region of the rat hippocampus during structural plasticity brought on by long-term potentiation stimuli. Concurrent RhoA and Cdc42 activation led to a transient increase in spine growth of up to 300% for five minutes, which decayed into a smaller but sustained growth for thirty minutes. [9] The activation of RhoA diffused around the vicinity of the spine undergoing stimulation, and it was determined that RhoA is necessary for the transient phase and most likely the sustained phase as well of spine growth.

Cdc42 pathway

Cdc42 has been implicated in many different functions including dendritic growth, branching, and branch stability. [13] Calcium entering the cell through NMDA receptors binds to calmodulin and activates the Ca2+/calmodulin-dependent protein kinase II (CaMKII). In turn, CaMKII activates Cdc42, after which no feedback signaling occurs upstream to calcium and CaMKII. If tagged with monomeric-enhanced green fluorescent protein, one can see that the activation of Cdc42 is limited to just the stimulated spine of a dendrite. This is because the molecule is continuously activated during plasticity and immediately inactivates after diffusing out of the spine. Despite its compartmentalized activity, Cdc42 is still mobile out of the stimulated spine, just like RhoA. Cdc42 activates PAK, which is a protein kinase that specifically phosphorylates and, therefore, inactivates ADF/cofilin. [14] Inactivation of cofilin leads to increased actin polymerization and expansion of the spine's volume. Activation of Cdc42 is required for this increase in spine volume to be sustained.

Observed changes in structural plasticity

Murakoshi, Wang, and Yasuda (2011) examined the effects of Rho GTPase activation on the structural plasticity of single dendritic spines, elucidating differences between the transient and sustained phases. [9]

Transient changes in structural plasticity

Applying a low-frequency train of two-photon glutamate uncaging to a single dendritic spine can elicit rapid activation of both RhoA and Cdc42. During the next two minutes, the volume of the stimulated spine can expand to 300 percent of its original size. However, this change in spine morphology is only temporary; the volume of the spine decreases after five minutes. Administration of C3 transferase, a Rho inhibitor, or glycyl-H1152, a ROCK inhibitor, inhibits the transient expansion of the spine, indicating that activation of the Rho-ROCK pathway is required in some way for this process. [9]

Sustained changes in structural plasticity

After the transient changes described above take place, the spine's volume decreases until it is elevated by 70 to 80 percent of the original volume. This sustained change in structural plasticity lasts about thirty minutes. Once again, administration of C3 transferase and glycyl-H1152 suppressed this growth, suggesting that the Rho-ROCK pathway is necessary for more persistent increases in spine volume. In addition, administration of the Cdc42 binding domain of Wasp or of the inhibitor targeting Pak1 activation-3 (IPA3) decreases this sustained growth in volume, demonstrating that the Cdc42-Pak pathway is needed for this growth in spine volume as well. This is important because sustained changes in structural plasticity may provide a mechanism for the encoding, maintenance, and retrieval of memories. These observations may suggest that Rho GTPases are necessary for these processes. [15]

Receptor activity

Dendritic spines express glutamate receptors (e.g. AMPA receptor and NMDA receptor) on their surface. The TrkB receptor for BDNF is also expressed on the spine surface, and is believed to play a role in spine survival. The tip of the spine contains an electron-dense region referred to as the "postsynaptic density" (PSD). The PSD directly apposes the active zone of its synapsing axon and comprises roughly 10% of the spine's membrane surface area; neurotransmitters released from the active zone bind receptors in the postsynaptic density of the spine. Half of the synapsing axons and dendritic spines are physically tethered by calcium-dependent cadherin, which forms cell-to-cell adherens junctions between two neurons.

Glutamate receptors (GluRs) are localized to the postsynaptic density, and are anchored by cytoskeletal elements to the membrane. They are positioned directly above their signalling machinery, which is typically tethered to the underside of the plasma membrane, allowing signals transmitted by the GluRs into the cytosol to be further propagated by their nearby signalling elements to activate signal transduction cascades. The localization of signalling elements to their GluRs is particularly important in ensuring signal cascade activation, as GluRs would be unable to affect particular downstream effects without nearby signallers.

Signalling from GluRs is mediated by the presence of an abundance of proteins, especially kinases, that are localized to the postsynaptic density. These include calcium-dependent calmodulin, CaMKII (calmodulin-dependent protein kinase II), PKC (Protein Kinase C), PKA (Protein Kinase A), Protein Phosphatase-1 (PP-1), and Fyn tyrosine kinase. Certain signallers, such as CaMKII, are upregulated in response to activity.

Spines are particularly advantageous to neurons by compartmentalizing biochemical signals. This can help to encode changes in the state of an individual synapse without necessarily affecting the state of other synapses of the same neuron. The length and width of the spine neck has a large effect on the degree of compartmentalization, with thin spines being the most biochemically isolated spines.

Plasticity

Dendritic spines are very "plastic"; that is, spines change significantly in shape, volume, and number over short time courses. Because spines have a primarily actin cytoskeleton, they are dynamic, and the majority of spines change their shape within seconds to minutes because of the dynamicity of actin remodeling. Furthermore, spine number is very variable and spines come and go; in a matter of hours, 10-20% of spines can spontaneously appear or disappear on the pyramidal cells of the cerebral cortex, although the larger "mushroom"-shaped spines are the most stable.

Spine maintenance and plasticity are both activity-dependent [16] and activity-independent. BDNF partially determines spine levels, [17] low levels of AMPA receptor activity are necessary to maintain spine survival, and synaptic activity involving NMDA receptors encourages spine growth. Furthermore, two-photon laser scanning microscopy and confocal microscopy have shown that spine volume changes depending on the types of stimuli that are presented to a synapse.

Importance to learning and memory

Evidence of importance

Spine plasticity is implicated in motivation, learning, and memory. [18] [19] [20] In particular, long-term memory is mediated in part by the growth of new dendritic spines (or the enlargement of pre-existing spines) to reinforce a particular neural pathway. Because dendritic spines are plastic structures whose lifespan is influenced by input activity, [21] spine dynamics may play an important role in the maintenance of memory over a lifetime.

Age-dependent changes in the rate of spine turnover suggest that spine stability impacts developmental learning. In youth, dendritic spine turnover is relatively high and produces a net loss of spines. [1] [22] [23] This high rate of spine turnover may characterize critical periods of development and reflect learning capacity in adolescence—different cortical areas exhibit differing levels of synaptic turnover during development, possibly reflecting varying critical periods for specific brain regions. [19] [22] In adulthood, however, most spines remain persistent, and the half-life of spines increases. [1] This stabilization occurs due to a developmentally regulated slow-down of spine elimination, a process which may underlie the stabilization of memories in maturity. [1] [22]

Experience-induced changes in dendritic spine stability also point to spine turnover as a mechanism involved in the maintenance of long-term memories, though it is unclear how sensory experience affects neural circuitry. Two general models might describe the impact of experience on structural plasticity. On the one hand, experience and activity may drive the discrete formation of relevant synaptic connections that store meaningful information in order to allow for learning. On the other hand, synaptic connections may be formed in excess, and experience and activity may lead to the pruning of extraneous synaptic connections. [1]

In lab animals of all ages, environmental enrichment has been related to dendritic branching, spine density, and overall number of synapses. [1] In addition, skill training has been shown to lead to the formation and stabilization of new spines while destabilizing old spines, [18] [24] suggesting that the learning of a new skill involves a rewiring process of neural circuits. Since the extent of spine remodeling correlates with success of learning, this suggests a crucial role of synaptic structural plasticity in memory formation. [24] In addition, changes in spine stability and strengthening occur rapidly and have been observed within hours after training. [18] [19]

Conversely, while enrichment and training are related to increases in spine formation and stability, long-term sensory deprivation leads to an increase in the rate of spine elimination [1] [22] and therefore impacts long-term neural circuitry. Upon restoring sensory experience after deprivation in adolescence, spine elimination is accelerated, suggesting that experience plays an important role in the net loss of spines during development. [22] In addition, other sensory deprivation paradigms—such as whisker trimming—have been shown to increase the stability of new spines. [25]

Research on neurological diseases and injuries sheds further light on the nature and importance of spine turnover. After stroke, a marked increase in structural plasticity occurs near the trauma site, and a five- to eightfold increase from control rates in spine turnover has been observed. [26] Dendrites disintegrate and reassemble rapidly during ischemia—as with stroke, survivors showed an increase in dendritic spine turnover. [27] While a net loss of spines is observed in Alzheimer's disease and cases of intellectual disability, cocaine and amphetamine use have been linked to increases in dendritic branching and spine density in the prefrontal cortex and the nucleus accumbens. [28] Because significant changes in spine density occur in various brain diseases, this suggests a balanced state of spine dynamics in normal circumstances, which may be susceptible to disequilibrium under varying pathological conditions. [28]

There is also some evidence for loss of dendritic spines as a consequence of aging. One study using mice has noted a correlation between age-related reductions in spine densities in the hippocampus and age-dependent declines in hippocampal learning and memory. [29]

Importance contested

Despite experimental findings that suggest a role for dendritic spine dynamics in mediating learning and memory, the degree of structural plasticity's importance remains debatable. For instance, studies estimate that only a small portion of spines formed during training actually contribute to lifelong learning. [24] In addition, the formation of new spines may not significantly contribute to the connectivity of the brain, and spine formation may not bear as much of an influence on memory retention as other properties of structural plasticity, such as the increase in size of spine heads. [30]

Theoreticians have for decades hypothesized about the potential electrical function of spines, yet our inability to examine their electrical properties had, until recently, stopped theoretical work from progressing very far. Recent advances in imaging techniques, along with increased use of two-photon glutamate uncaging, have led to a wealth of new discoveries; we now suspect that there are voltage-dependent sodium, [31] potassium, [32] and calcium [33] channels in the spine heads. [34]

Cable theory provides the theoretical framework behind the most "simple" method for modelling the flow of electrical currents along passive neural fibres. Each spine can be treated as two compartments, one representing the neck, the other representing the spine head. The compartment representing the spine head alone should carry the active properties.
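
A toy numerical sketch of this two-compartment picture (all parameter values are assumptions chosen only for illustration, not taken from any published spine model): the dendritic voltage is held fixed, the neck is reduced to an axial resistance, and the head is an RC compartment receiving a brief synaptic conductance. Increasing R_neck in this sketch increases the head depolarization, which is the qualitative point at issue.

    # Two-compartment spine sketch: passive neck (axial resistance only) coupling
    # a dendrite at fixed potential to an RC spine head with a 1 ms synaptic input.
    # All values below are illustrative assumptions (SI units).
    R_neck = 500e6                  # neck resistance, ohm
    C_head = 1e-14                  # head capacitance, F (~0.1 um^3 head at ~1 uF/cm^2)
    g_leak = 1e-10                  # head leak conductance, S
    E_leak, E_syn, V_dend = -70e-3, 0.0, -70e-3   # V

    dt, t_stop = 1e-6, 5e-3         # s
    v_head = E_leak
    trace = []
    for step in range(int(t_stop / dt)):
        t = step * dt
        g_syn = 0.5e-9 if 1e-3 <= t < 2e-3 else 0.0   # synaptic conductance pulse, S
        i_total = (g_leak * (E_leak - v_head)
                   + g_syn * (E_syn - v_head)
                   + (V_dend - v_head) / R_neck)       # axial current through the neck
        v_head += dt * i_total / C_head
        trace.append(v_head)
    print(max(trace))               # peak head voltage; grows with larger R_neck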

Baer and Rinzel's continuum model

To facilitate the analysis of interactions between many spines, Baer & Rinzel formulated a new cable theory in which the distribution of spines is treated as a continuum. [35] In this representation, spine head voltage is the local spatial average of membrane potential in adjacent spines. The formulation maintains the feature that there is no direct electrical coupling between neighboring spines; voltage spread along dendrites is the only way for spines to interact.

Spike-diffuse-spike model

The SDS model was intended as a computationally simple version of the full Baer and Rinzel model. [36] It was designed to be analytically tractable and have as few free parameters as possible while retaining those of greatest significance, such as spine neck resistance. The model drops the continuum approximation and instead uses a passive dendrite coupled to excitable spines at discrete points. Membrane dynamics in the spines are modelled using integrate and fire processes. The spike events are modelled in a discrete fashion with the wave form conventionally represented as a rectangular function.

Modeling spine calcium transients

Calcium transients in spines are a key trigger for synaptic plasticity. [37] NMDA receptors, which have a high permeability for calcium, only conduct ions if the membrane potential is sufficiently depolarized. The amount of calcium entering a spine during synaptic activity therefore depends on the depolarization of the spine head. Evidence from calcium imaging experiments (two-photon microscopy) and from compartmental modelling indicates that spines with high-resistance necks experience larger calcium transients during synaptic activity. [34] [38]

Dendritic spines can develop directly from dendritic shafts or from dendritic filopodia. [39] During synaptogenesis, dendrites rapidly sprout and retract filopodia, small membranous protrusions that lack membrane organelles. Recently, the I-BAR protein MIM was found to contribute to the initiation process. [40] During the first week after birth, the brain is dominated by filopodia, which eventually develop synapses. However, after this first week, filopodia are replaced by spiny dendrites, but also by small, stubby spines that protrude from spiny dendrites. In the development of certain filopodia into spines, filopodia recruit presynaptic contact to the dendrite, which encourages the production of spines to handle specialized postsynaptic contact with the presynaptic protrusions.

Spines, however, require maturation after formation. Immature spines have impaired signaling capabilities and typically lack "heads" (or have very small heads), possessing only necks, while mature spines maintain both heads and necks.

Cognitive disorders such as ADHD, Alzheimer's disease, autism, intellectual disability, and fragile X syndrome may result from abnormalities in dendritic spines, especially in the number of spines and their maturity. [41] [42] The ratio of mature to immature spines is important in their signaling, as immature spines have impaired synaptic signaling. Fragile X syndrome is characterized by an overabundance of immature spines that have multiple filopodia in cortical dendrites.

Dendritic spines were first described at the end of the 19th century by Santiago Ramón y Cajal on cerebellar neurons. [43] Ramón y Cajal then proposed that dendritic spines could serve as contacting sites between neurons. This was demonstrated more than 50 years later thanks to the emergence of electron microscopy. [44] Until the development of confocal microscopy on living tissues, it was commonly assumed that spines were formed during embryonic development and then remained stable after birth. In this paradigm, variations of synaptic weight were considered sufficient to explain memory processes at the cellular level. Over roughly the past decade, however, confocal microscopy has demonstrated that dendritic spines are indeed motile and dynamic structures that undergo constant turnover, even after birth. [45] [46] [39]


5 Answers

There are 2 problems you might face.

Your neural net (in this case a convolutional neural net) cannot physically accept images of different resolutions. This is usually the case if one has fully-connected layers; however, if the network is fully-convolutional, then it should be able to accept images of any dimension. Fully-convolutional implies that it doesn't contain fully-connected layers, but only convolutional, max-pooling, and batch normalization layers, all of which are invariant to the size of the image.

Exactly this approach was proposed in the ground-breaking paper Fully Convolutional Networks for Semantic Segmentation. Keep in mind that their architecture and training methods might be slightly outdated by now. A similar approach was used in the widely used U-Net: Convolutional Networks for Biomedical Image Segmentation, and in many other architectures for object detection, pose estimation, and segmentation.
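
As a hedged illustration of the fully-convolutional point (a toy PyTorch sketch, not the architecture of either paper): because every layer is convolutional, pooling, or normalization, the same network runs unchanged on inputs of different heights and widths.

    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        # Minimal fully-convolutional segmentation net; no fully-connected layers,
        # so the spatial input size is not fixed.
        def __init__(self, in_ch=3, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            )
            # A 1x1 convolution replaces the fully-connected classifier head.
            self.classifier = nn.Conv2d(32, n_classes, 1)
            self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

        def forward(self, x):
            return self.up(self.classifier(self.features(x)))

    net = TinyFCN().eval()
    with torch.no_grad():
        print(net(torch.randn(1, 3, 128, 128)).shape)   # torch.Size([1, 2, 128, 128])
        print(net(torch.randn(1, 3, 96, 160)).shape)    # torch.Size([1, 2, 96, 160])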

Convolutional neural nets are not scale-invariant. For example, if one trains on cats of the same size in pixels on images of a fixed resolution, the net will fail on images of smaller or larger sizes of cats. To overcome this problem, I know of two methods (there might be more in the literature):

multi-scale training of images of different sizes in fully-convolutional nets in order to make the model more robust to changes in scale and

having multi-scale architecture.

Assuming you have a large dataset and it's labeled pixel-wise, one hacky way to solve the issue is to preprocess the images to have the same dimensions by inserting horizontal and vertical margins according to your desired dimensions. As for the labels, you add a dummy extra output for the margin pixels, so when calculating the loss you can mask out the margins.
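
A rough sketch of that padding-and-masking idea (the helper name, the canvas size, and the ignore-label value are my own assumptions): pad every image and label map to a common canvas, give the margin pixels a dummy label, and tell the loss to ignore that label.

    import numpy as np
    import torch
    import torch.nn.functional as F

    IGNORE = 255   # assumed dummy label for margin pixels

    def pad_to_canvas(img, mask, H, W):
        # Pad an (h, w, c) image and its (h, w) label mask to a fixed (H, W) canvas.
        h, w = mask.shape
        img_p = np.pad(img, ((0, H - h), (0, W - w), (0, 0)), mode='constant')
        mask_p = np.pad(mask, ((0, H - h), (0, W - w)), constant_values=IGNORE)
        return img_p, mask_p

    img = np.zeros((100, 120, 3), dtype=np.float32)
    mask = np.zeros((100, 120), dtype=np.int64)
    img_p, mask_p = pad_to_canvas(img, mask, 128, 128)

    logits = torch.randn(1, 2, 128, 128)                 # stand-in network output
    target = torch.from_numpy(mask_p)[None]              # shape (1, 128, 128)
    loss = F.cross_entropy(logits, target, ignore_index=IGNORE)  # margins are masked out
    print(loss.item())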

Try resizing the image to the input dimensions of your neural network architecture (keeping it fixed to something like 128*128 in a standard 2D U-Net architecture) using nearest-neighbor interpolation. This is because if you resize your image using any other interpolation, it may result in tampering with the ground truth labels. This is particularly a problem in segmentation; you won't face such a problem when it comes to classification.
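
A small Pillow/NumPy sketch of why the interpolation choice matters for the label map (the mask contents here are synthetic): bilinear resizing invents intermediate "label" values along region borders, while nearest-neighbor keeps the original label set.

    import numpy as np
    from PIL import Image

    mask = np.zeros((100, 150), dtype=np.uint8)
    mask[40:60, 50:100] = 3                       # a region carrying class label 3

    nearest = np.array(Image.fromarray(mask).resize((128, 128), Image.NEAREST))
    bilinear = np.array(Image.fromarray(mask).resize((128, 128), Image.BILINEAR))

    print(np.unique(nearest))    # [0 3]                  -> labels preserved
    print(np.unique(bilinear))   # values between 0 and 3 -> corrupted "labels"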

As you want to perform image segmentation, you can use U-Net, which does not have fully connected layers, but it is a fully convolutional network, which makes it able to handle inputs of any dimension. You should read the linked papers for more info.

You could also have a look at the paper Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition (2015), where the SPP-net is proposed. SPP-net is based on the use of "spatial pyramid pooling", which eliminates the requirement of having fixed-size inputs.


Analysis of Residuals

To determine where the PCA-based representational similarity analysis was failing to account for differences between mental states, we constructed a representational dissimilarity matrix of the residuals from a multiple regression featuring the three significant dimensions (Fig. S4). Additionally, we calculated the average residual for each mental state and correlated these averaged residuals with three significant PCs. The rationality of a mental state did not predict whether its pattern was chronically predicted to be more or less different from that of other states (r = −0.03). The pattern dissimilarity between negative states tended to be slightly overestimated (r = 0.18). Finally, pattern dissimilarity between highly socially impactful states tended to be substantially underestimated (r = −0.66).


Materials and Methods

Perceptual decision making paradigm

We adapted a 2-AFC paradigm of face and car discrimination, where a set of 12 face (Max Planck Institute face database) and 12 car grayscale images were used. The car image database was the same used in Philiastides and Sajda (2006) and Philiastides et al. (2006), which was constructed by taking images from the internet, segmenting the car from the background, converting the image to grayscale, and then resizing to be comparable to the face images. The pose of the faces and cars was also matched across the entire database and was sampled at random (left, right, center) for the training and test cases. All the images (512 × 512 pixels, 8 bits/pixel) were equated for spatial frequency, luminance, and contrast. The phase spectra of the images were manipulated using the weighted mean phase method (Dakin et al., 2002) to introduce noise, resulting in a set of images graded by phase coherence. Specifically, we computed the 2D Fourier transform of each image and constructed the average magnitude spectrum by averaging across all images. The phase spectrum of a noisy image was constructed by computing a weighted sum of the phase spectrum of the original image ($\phi_{image}$) and that of random noise ($\phi_{noise}$).
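
The following NumPy sketch illustrates the general idea of grading images by phase coherence; it uses a simple linear mixing of phase angles under assumed array sizes, not the exact circular weighting of the published weighted mean phase method.

    import numpy as np

    def phase_coherence_image(img, coherence, mean_magnitude, rng):
        # Mix an image's phase spectrum with random phase at a given coherence
        # (1.0 keeps the original phase, 0.0 is pure phase noise) and combine it
        # with the magnitude spectrum averaged over the whole image set.
        phase = np.angle(np.fft.fft2(img))
        noise = rng.uniform(-np.pi, np.pi, size=img.shape)
        mixed = coherence * phase + (1.0 - coherence) * noise   # simplified weighting
        return np.fft.ifft2(mean_magnitude * np.exp(1j * mixed)).real

    rng = np.random.default_rng(0)
    images = [rng.standard_normal((64, 64)) for _ in range(12)]          # stand-in images
    mean_mag = np.mean([np.abs(np.fft.fft2(im)) for im in images], axis=0)
    noisy = phase_coherence_image(images[0], 0.45, mean_mag, rng)
    print(noisy.shape)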

Each image subtended 2° × 2° of visual angle, and the background screen was set to a mean luminance gray. The image size was set to match the size of the V1 model, which covered 4 mm² of cortical sheet. Figure 1 shows examples of the face and car images used in the experiment as well as the effect on the discriminability of the image class when varying the phase coherence.

The stimulus set for the 2-AFC perceptual decision making task. (A) Shown are 12 face and 12 car images at phase coherence 55%. (B) One sample face and one sample car image, at phase coherences varying from 20 to 55%. (C) Design and timing of the simulated psychophysics experiment for the model.

The sequence of images was input to the model, where each image was flashed for 50 ms, followed by a gray mean luminance image with an inter-stimulus interval (ISI) of 200 ms (Figure 1C). Since simulating the model is computationally expensive, we minimized the simulation time by choosing an ISI which was as small as possible yet did not result in network dynamics leaking across trials. We conducted pilot experiments that showed that network activity settled to background levels approximately 200 ms after stimulus offset. We ran the simulation for each of the two classes, face and car, at different coherence levels (20, 25, 30, 35, 40, 45, 55%) respectively. Each image was repeated for 30 trials in the simulation, where the sequence of trials was randomly generated. In each simulation, we randomized the order of different images, making sure not to push the model into a periodic response pattern.

Parallel to simulating the model response, we conducted human psychophysics experiments. Ten volunteer subjects were recruited. All participants provided written informed consent, as approved by the Columbia University Institutional Review Board. All the subjects were healthy with corrected visual acuity of 20/20. Psychophysics testing was administered in a monocular manner. Images of different phase coherences were randomized in the psychophysics experiment. During the experiment subjects were instructed to fixate at the center of the images, and to make a decision on whether they saw a face or car, as soon as possible, by pressing one of two buttons with their right hand. The ISI for human psychophysics experiments was longer and randomized between 2500 and 3000 ms in order to provide for a comfortable reaction time and to reduce the subjects’ ability to predict the time of the next image. A Dell computer with an nVIDIA GeForce4 MX 440 AGP8X graphics card and E-Prime software controlled the stimulus presentation.

Model summary

An overview of the model architecture and decoding is illustrated in Figure 2. We modeled the early visual pathway with a feedforward lateral geniculate nucleus (LGN) input and a recurrent spiking neuron network of the input layers (4Cα/β) of primary visual cortex (V1). We model the short-range connectivity within the V1 layer, without feedback from higher areas. We simulated a magnocellular version of the model, the details of which have been described previously (Wielaard and Sajda, 2006a,b, 2007). Note our model is a variant of an earlier V1 model (McLaughlin et al., 2000; Wielaard et al., 2001).

Summary of the model architecture. (A) The model comprises encoding and decoding components. (B) Architecture of the V1 model, where receptive fields and LGN axon targets are viewed in visual space (left) and cortical space (right). Details can be found in Wielaard and Sajda (2006a).

In brief, the model consists of a layer of N (4096) conductance-based integrate-and-fire point neurons (one compartment), representing about a 2 × 2 mm² piece of a V1 input layer (layer 4C). Our model of V1 consists of 75% excitatory neurons and 25% inhibitory neurons. In the model, 30% of both the excitatory and inhibitory cell populations receive LGN input. In agreement with experimental findings, the LGN neurons are modeled as rectified center-surround linear spatio-temporal filters. Sizes for center and surround were taken from experimental data (Hicks et al., 1983; Derrington and Lennie, 1984; Shapley, 1990; Spear et al., 1994; Croner and Kaplan, 1995; Benardete and Kaplan, 1999). Noise, cortical interactions, and LGN input are assumed to act additively in contributing to the total conductance of a cell. The noise term is modeled as a Poisson spike train convolved with a kernel which comprises a fast AMPA component and a slow NMDA component (see Supplementary Materials in Wielaard and Sajda, 2006a).

The LGN RF centers were organized on a square lattice. This lattice spacing and the consequent LGN receptive field densities imply LGN cellular magnification factors that are in the range of the experimental data available for macaque (Malpeli et al., 1996). The connection structure between LGN cells and cortical cells is made so as to establish ocular dominance bands and a slight orientation preference which is organized in pinwheels (Blasdel, 1992). It is further constructed under the constraint that the LGN axonal arbor sizes in V1 do not exceed the anatomically established value of 1.2 mm (Blasdel and Lund, 1983; Freund et al., 1989).

In the construction of the model, our objective was to keep the parameters as deterministic and uniform as possible. This enhances the transparency of the model while at the same time providing insight into what factors may be essential for the considerable diversity observed in the responses of V1 cells.

Sparse decoding

We used a linear decoder to map the spatio-temporal activity in the V1 model to a decision on whether the input stimulus is a face or a car. We employed a sparsity constraint on the decoder in order to control the dimension of the effective feature space. Sparse decoding has been previously investigated for decoding real electrophysiological data, for instance by Chen et al. (2006), Palmer et al. (2007), and Quiroga et al. (2007).

Since a primary purpose of using the decoder is to identify informative dimensions in the neurodynamics, we estimate new decoder parameters at each stimulus noise level (coherence level) independently. Alternatively we could train a decoder at the highest coherence level and test the decoder at each coherence level. In this paper we focus on the first approach, since we view our decoder as a tool for analyzing the information content in the neurodynamics and how downstream neurons might best decode this information for discrimination.

We constructed an optimal decoder to read out the information in our spiking neuron model, fully exploring the spatio-temporal dynamics. The spike train for each neuron in the population is $s_{i,k}(t) = \sum_{l} \delta(t - t_{i,k,l})$, where $t \in [0, 250]$ ms, $i = 1 \ldots N$ is the index for neurons, $k = 1 \ldots M$ is the index for trials, and $l = 1 \ldots P$ is the index for spikes. Based on the population spike trains, we estimated the firing rate on each trial by counting the number of spikes within a time bin of width τ, resulting in a spike count matrix $r_{i,j,k} = \int_{(j-1)\tau}^{j\tau} s_{i,k}(t)\,dt$, where $i = 1 \ldots N$ represents the ith neuron, $j = 1 \ldots T/\tau$ represents the jth time bin, and $k = 1 \ldots M$ represents the kth trial. Note that we explored decoding using time bins of different lengths. When τ is small, we assume that information is encoded across both neurons and time, since the binned rate approaches the instantaneous firing rate; when τ = 250 ms, we integrate the spiking activity over the entire trial, leading to a rate-based representation of the information. A separate post hoc analysis showed that 25 ms was in fact the bin width that yielded the highest discrimination accuracy (bin width varied from 5 to 250 ms). The class label of each sample, $b_k \in \{-1, +1\}$, represents either face or car, with M being the number of trials. In order to explore the information within the spatio-temporal dynamics, we compute a weighted sum of the firing rate over different neurons and time bins. This leads to seeking the solution of the following constrained minimization problem,

$\min_{\mathbf{w}, v} \; \sum_{k=1}^{M} \log\left(1 + \exp\left(-b_k (\mathbf{w}^{T} \mathbf{x}_k + v)\right)\right) + \lambda\, J(\mathbf{w}),$

where the first term is the empirical logistic loss function and the second term is the regularization function, with λ > 0 as the regularization parameter. We create a stacked version of the spike count matrix, $x_{l,k} = r_{i,j,k}$ with $l = (i - 1)N + j$, i.e., stacking the neuron and time-bin dimensions together. The resulting linear decoder can be geometrically interpreted as a hyperplane that separates the classes of face and car, where w represents the weights of the linear decoder and v is the offset. For the sparse decoder, we use the L1 regularization term $J(w) = \|w\|_1$; alternatively, for the non-sparse decoder, we use the L2 regularization $J(w) = \|w\|_2^2$. In the language of Bayesian analysis, the logistic loss term comes from maximum likelihood, L1 corresponds to a Laplacian prior, and L2 corresponds to a Gaussian prior. L1-regularized logistic regression results in a sparse solution for the weights (Krishnapuram et al., 2005; Koh et al., 2007; Meier et al., 2008). So-called “sparse logistic regression” serves as an approach for feature selection, where the features that are most informative for the classification survive in the form of non-zero weights (Ng, 2004). We developed an efficient and accurate method to solve this optimization problem (Shi et al., 2010, 2011). Once we learn the hyperplane, for any new image we can predict the image category via the sign of $w^T x_k + v$.
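As a rough illustration of the decoding pipeline described above, the sketch below bins spike trains into the count matrix $r_{i,j,k}$, stacks the neuron and time-bin dimensions into one feature vector per trial, and fits an L1-regularized logistic decoder. It uses scikit-learn's liblinear solver as a stand-in for the authors' own optimization method (Shi et al., 2010, 2011); the helper names and the 25 ms default bin width are chosen for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def spike_count_matrix(spike_times, T=250.0, tau=25.0):
    """Bin spikes into r[i, j, k]: neuron i, time bin j, trial k.

    spike_times[i][k] is an array of spike times (in ms) for neuron i, trial k."""
    N, M = len(spike_times), len(spike_times[0])
    edges = np.arange(0.0, T + tau, tau)
    r = np.zeros((N, len(edges) - 1, M))
    for i in range(N):
        for k in range(M):
            r[i, :, k], _ = np.histogram(spike_times[i][k], bins=edges)
    return r

def fit_sparse_decoder(r, b, lam=1.0):
    """Stack neuron and time-bin dimensions, then fit L1-regularized
    logistic regression (scikit-learn's C is the inverse of lambda)."""
    N, n_bins, M = r.shape
    X = r.reshape(N * n_bins, M).T              # trials x features
    clf = LogisticRegression(penalty='l1', C=1.0 / lam, solver='liblinear')
    clf.fit(X, b)                               # b in {-1, +1}: face vs. car
    return clf.coef_.ravel(), clf.intercept_[0]

def decode(w, v, x):
    """Predict face (+1) or car (-1) from the sign of w^T x + v."""
    return np.sign(x @ w + v)
```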

Figure 3 provides geometric intuition for why L1 and L2 regularization lead to sparse and non-sparse solutions, respectively. The solution of L1- or L2-regularized logistic regression lies at the intersection of the regularization geometry and a hyperplane. Figure 3A shows that L1 regularization corresponds to a diamond-shaped ball centered at the origin. As the constraint ball grows, the solution is the point at which it first touches the hyperplane; given the geometry of the L1 ball, this point is likely to be sparse. Figure 3B shows the L2-regularized case, where the geometry of the L2 ball is a sphere, which therefore leads to a non-sparse solution.

A schematic illustration of how different regularization terms lead to sparse and non-sparse solutions in the linear classifier. (A) L1 regularization corresponds to the diamond shaped ball centered around the origin. (B) L2 regularization corresponds to the spherical ball centered around the origin.

Cross validation

Training and testing were carried out on different sets of images, each containing six face images and six car images, with 30 trials per image. Tenfold cross validation was used on the training set, while the final weights applied to the testing set were estimated using jackknife estimation to reduce bias. A regularization path was also employed, in which a family of λ values is used. Given that different values of λ yield different levels of sparsity, we chose the λ that maximized discrimination accuracy on the training dataset after cross validation, and used this hyperparameter on the testing dataset to calculate the final discrimination accuracy. In order to identify the time windows that are critical for reading out information in the V1 model, we used two approaches. The first was a heuristic approach, in which we considered only the dynamics during t ∈ [50, 150] ms, given that the V1 model has a delay of 50 ms after stimulus onset and the length of activation is about 100 ms. In the second approach, we optimized the temporal window with an adaptive technique, searching for the window that yields the best decoding performance. In this adaptive technique, we systematically varied the latency and width of the window and computed the corresponding Az (area under the ROC curve) values through cross validation. The best window is the one that results in the highest Az value.
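The following sketch illustrates one way the adaptive window search and regularization path could be combined, scoring every candidate latency, width, and λ by cross-validated Az. The λ grid, the exhaustive loop over windows, and the omission of the jackknife step are simplifications for illustration, not the exact procedure; the spike count matrix r is assumed to come from the earlier sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def best_window_and_lambda(r, b, tau=25.0, lambdas=(0.1, 1.0, 10.0)):
    """Grid search over window latency/width and regularization strength,
    scoring each setting by cross-validated Az (area under the ROC curve).

    r : (N, n_bins, M) spike count matrix; b : (M,) labels in {-1, +1}."""
    N, n_bins, M = r.shape
    best_az, best_setting = -np.inf, None
    for start in range(n_bins):
        for width in range(1, n_bins - start + 1):
            # Restrict to the candidate window, then stack into trials x features
            X = r[:, start:start + width, :].reshape(-1, M).T
            for lam in lambdas:
                clf = LogisticRegression(penalty='l1', C=1.0 / lam,
                                         solver='liblinear')
                # Tenfold cross-validated decision values, scored by Az
                scores = cross_val_predict(clf, X, b, cv=10,
                                           method='decision_function')
                az = roc_auc_score(b, scores)
                if az > best_az:
                    best_az = az
                    best_setting = dict(latency_ms=start * tau,
                                        width_ms=width * tau, lam=lam)
    return best_az, best_setting
```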

Measuring sparseness

We characterize the sparseness of the neural representation in the population spike trains in both the temporal and spatial domains. According to Willmore and Tolhurst (2001), lifetime sparseness describes the activity of a single neuron over time, while population sparseness characterizes the activity of a population of neurons within a given time window. We estimate instantaneous firing rates using a Gaussian window 25 ms wide with a standard deviation of 5 ms. Sparseness in firing rates can be measured by kurtosis (Olshausen and Field, 2004), namely the fourth moment relative to the variance squared.
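A minimal sketch of this measure: instantaneous rates are obtained by Gaussian smoothing (5 ms standard deviation, truncated to a 25 ms window), and sparseness is computed as the fourth moment relative to the variance squared. The bin width dt and the exact truncation of the smoothing window are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def instantaneous_rate(binned_spikes, dt_ms=1.0, sigma_ms=5.0, window_ms=25.0):
    """Smooth a binned spike train (spikes per dt_ms bin) with a Gaussian kernel
    of standard deviation sigma_ms, truncated to an overall width of window_ms."""
    rate_hz = binned_spikes / (dt_ms * 1e-3)
    truncate = (window_ms / 2.0) / sigma_ms       # half-width in units of sigma
    return gaussian_filter1d(rate_hz, sigma=sigma_ms / dt_ms, truncate=truncate)

def kurtosis_sparseness(rates):
    """Sparseness as the fourth moment relative to the variance squared
    (higher kurtosis indicates a sparser, heavier-tailed rate distribution).

    For lifetime sparseness, pass one neuron's rate over time; for population
    sparseness, pass all neurons' rates within a given time window."""
    r = np.asarray(rates, dtype=float)
    mu, var = r.mean(), r.var()
    return np.mean((r - mu) ** 4) / var ** 2
```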

Using the sparse decoding framework, we are able to identify the informative dimensions that are critical for our specific decision-making task. We define “informative dimensions” as the number of non-zero weights in the decoder, which is equal to the cardinality of the support of the weight vector. Informative dimensions thus reflect the number of non-zeros in the spatio-temporal “word.” Note that one neuron can be selected by the decoder in multiple time bins; therefore, we define “informative neurons” as the number of neurons having at least one non-zero weight across time bins.
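Given a learned weight vector, these two counts can be computed as below. The sketch assumes the weights were stacked neuron-major (as in the earlier decoding sketch), so that reshaping to (N, number of time bins) recovers the per-neuron, per-bin weights.

```python
import numpy as np

def informative_counts(w, N, n_bins, tol=1e-10):
    """Count informative dimensions (non-zero weights) and informative neurons
    (neurons with at least one non-zero weight across time bins)."""
    W = np.asarray(w).reshape(N, n_bins)          # assumes neuron-major stacking
    nonzero = np.abs(W) > tol
    informative_dimensions = int(nonzero.sum())
    informative_neurons = int(nonzero.any(axis=1).sum())
    return informative_dimensions, informative_neurons
```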

Statistical tests

We used a likelihood ratio test to evaluate the goodness of fit. We fit a single Weibull curve jointly to both the psychometric and neurometric datasets (dof = 4), as well as two Weibull curves independently to each dataset (dof = 8). We computed the likelihood ratio statistic $D = -2\ln\!\left(L_j / (L_p L_n)\right)$, where $L_j$ is the likelihood of the joint fit and $L_p$ and $L_n$ are the likelihoods of the independent psychometric and neurometric fits. The null hypothesis is that the psychometric and neurometric data can be described by the same curve, and the decision rule is based on the chi-square statistic χ². If p > 0.05, we do not reject the null hypothesis; otherwise, we reject it.
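A sketch of this test under a 4-parameter Weibull parameterization (threshold, slope, guess rate, lapse rate) is shown below. The parameterization, the starting values, and the use of SciPy's Nelder-Mead optimizer are assumptions for illustration; the inputs are arrays of correct/total trial counts aligned on the same coherence levels for the psychometric and neurometric data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def weibull(c, alpha, beta, gamma=0.5, lam=0.0):
    """Weibull psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1.0 - gamma - lam) * (1.0 - np.exp(-(c / alpha) ** beta))

def nll(params, c, n_correct, n_total):
    """Negative binomial log-likelihood (combinatorial constants dropped)."""
    alpha, beta, gamma, lam = params
    if alpha <= 0 or beta <= 0 or not (0 <= gamma <= 1) or not (0 <= lam < 0.5):
        return np.inf
    p = np.clip(weibull(c, alpha, beta, gamma, lam), 1e-6, 1 - 1e-6)
    return -np.sum(n_correct * np.log(p) + (n_total - n_correct) * np.log(1 - p))

def likelihood_ratio_test(c, psy_correct, psy_total, neu_correct, neu_total,
                          x0=(0.1, 2.0, 0.5, 0.02)):
    """D = -2 ln( L_joint / (L_psy * L_neu) ), compared to chi-square, dof = 8 - 4."""
    fit = lambda nc, nt: minimize(nll, x0, args=(c, nc, nt), method='Nelder-Mead')
    joint = fit(psy_correct + neu_correct, psy_total + neu_total)   # one curve (dof = 4)
    sep_p = fit(psy_correct, psy_total)                             # two curves (dof = 8)
    sep_n = fit(neu_correct, neu_total)
    D = max(2.0 * (joint.fun - sep_p.fun - sep_n.fun), 0.0)
    p_value = chi2.sf(D, df=8 - 4)
    return D, p_value
```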


Summary and future directions

Studies of adaptation continue to reveal surprising and complex forms of plasticity in sensory systems, from peripheral receptors to central mechanisms coding highly abstract properties of the stimulus. The finding that vision adapts in such similar ways to such a diverse array of perceptual attributes suggests that adaptation is an intrinsic feature of visual coding that is manifest throughout the visual stream. However, we still understand little about the dynamics and mechanisms of these adjustments, how they operate over different timescales, and whether they serve common or distinct roles in calibrating our perceptions.

