Date of Award

Spring 2021

Document Type


Degree Name

Doctor of Philosophy (PhD)


Department

Computer Science

First Advisor

Zucker, Steven

Abstract

The rapid development of multi-electrode and imaging techniques is producing a data explosion in neuroscience, opening the possibility of truly understanding the organization and function of our visual systems. At the same time, the need for more natural visual stimuli greatly increases the complexity of the data. Together, these pose a challenge for machine learning, and our goal in this thesis is to develop a technique that meets it. The central pillar of our contribution is the design of a manifold of neurons, together with an algorithmic approach to inferring it. The manifold is functional, in the sense that nearby neurons on it respond similarly (in time) to similar aspects of the stimulus ensemble. Because it organizes the neurons, our manifold differs from the standard manifolds used in visual neuroscience, which instead organize the stimuli.

Our contributions to the machine learning component of the thesis are twofold. First, we develop a tensor representation of the data, adopting a multilinear view of the potential circuitry; tensor factorization then provides an intermediate representation between the neural data and the manifold. We find that the rank of the neural factor matrix can be used to select an appropriate number of tensor factors. Second, to apply manifold learning techniques, a similarity kernel on the data must be defined. Like many others, we employ a Gaussian kernel, but refine it with a proposed graph sparsification technique, which makes the resulting manifolds less sensitive to the choice of bandwidth parameter.

We apply this method to data recorded from the retina and primary visual cortex of the mouse. For the algorithm to work, however, the underlying circuitry must be exercised as fully as possible. To this end, we develop an ensemble of flow stimuli that simulate what the mouse would 'see' while running through a field.
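To make the tensor-factorization step concrete, here is a minimal NumPy sketch of a CP (canonical polyadic) decomposition fit by alternating least squares on a synthetic "neurons × space × time" tensor. The thesis's actual pipeline and data are not reproduced here; the tensor shapes, the `cp_als` routine, and the synthetic factors are illustrative assumptions. The neural factor matrix `A` returned below is the kind of object whose rank the thesis proposes inspecting to choose the number of factors.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: row (j, k), column r equals U[j, r] * V[k, r]."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def cp_als(X, rank, n_iter=300, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor X via alternating least
    squares: X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]. (Illustrative
    sketch, not the thesis's implementation.)"""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = X.reshape(I, J * K)                     # mode-0 (neuron) unfolding
    X1 = X.transpose(1, 0, 2).reshape(J, I * K)  # mode-1 unfolding
    X2 = X.transpose(2, 0, 1).reshape(K, I * J)  # mode-2 unfolding
    for _ in range(n_iter):
        A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic check: a noiseless rank-2 tensor should be recovered almost exactly.
rng = np.random.default_rng(1)
A_true = rng.standard_normal((8, 2))   # hypothetical per-neuron factors
B_true = rng.standard_normal((6, 2))
C_true = rng.standard_normal((5, 2))
X = np.einsum('ir,jr,kr->ijk', A_true, B_true, C_true)
A, B, C = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

In this spirit, the rank heuristic described in the abstract amounts to refitting at several candidate ranks and examining the singular-value spectrum of the neural factor matrix `A` for a sharp drop.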
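The second contribution — a Gaussian kernel refined by graph sparsification — can be sketched as follows. The thesis's specific sparsification technique is not given in this abstract, so the sketch below uses a common k-nearest-neighbor style sparsification as a stand-in: build the dense Gaussian affinity matrix over per-neuron signatures, keep only each point's `k` strongest neighbors, and symmetrize. The function name `gaussian_knn_affinity` and the data shapes are assumptions for illustration.

```python
import numpy as np

def gaussian_knn_affinity(Y, sigma, k):
    """Gaussian similarity kernel on the rows of Y, sparsified by keeping
    only each point's k strongest neighbors, then symmetrized.
    Y: (n_points, n_features) array of per-neuron signatures.
    (A stand-in for the thesis's sparsification, not its actual method.)"""
    sq = np.sum(Y**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (Y @ Y.T)  # pairwise squared distances
    np.maximum(d2, 0.0, out=d2)                       # clip round-off negatives
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)                          # no self-edges
    # keep the k largest entries in each row, zero the rest
    keep = np.argsort(W, axis=1)[:, -k:]
    mask = np.zeros_like(W)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    W_sparse = W * mask
    return np.maximum(W_sparse, W_sparse.T)           # symmetrize (union of edges)

rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 4))   # 30 hypothetical neurons, 4-dim signatures
W = gaussian_knn_affinity(Y, sigma=1.0, k=5)
```

The intuition behind sparsifying before embedding is that only the strongest similarities survive, so moderate changes in the bandwidth `sigma` rescale the retained edges without rewiring the graph — consistent with the abstract's claim of reduced sensitivity to the bandwidth parameter.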
Applying the algorithm to the retina reveals that neurons form clusters corresponding to known retinal ganglion cell types. In the cortex, by contrast, a continuous manifold emerges, indicating that, from a functional-circuit point of view, there may be a continuum of cortical function types. Interestingly, both manifolds share similar global coordinates, hinting at what the key ingredients of vision might be. Lastly, we turn to perhaps the most widely used model of the cortex: deep convolutional networks. Their feedforward architecture yields manifolds that are even more clustered than the retina's, and not at all like that of the cortex. This suggests that they may not suffice as general models for Artificial Intelligence.