Natural vision is dynamic: as an animal moves, its visual input changes dramatically. How can the visual system reliably extract local features from an input dominated by self-generated signals? In Drosophila, diverse local visual features are represented by a group of projection neurons with distinct tuning properties. Here, we describe a connectome-based volumetric imaging strategy to measure visually evoked neural activity across this population. We show that local visual features are jointly represented across the population, and a shared gain factor improves trial-to-trial coding fidelity. A subset of these neurons, tuned to small objects, is modulated by two independent signals associated with self-movement: a motor-related signal and a visual motion signal associated with rotation of the animal. These two inputs adjust the sensitivity of these feature detectors across the locomotor cycle, selectively reducing their gain during saccades and restoring it during intersaccadic intervals. This work reveals a strategy for reliable feature detection during locomotion.

Sighted animals frequently move their bodies, heads, and eyes to achieve their behavioral goals and to actively sample the environment. As a result, the image on the retina is frequently subject to self-generated motion. This presents a challenge for the visual system, as visual circuitry must extract and represent specific features of the external visual scene in a rapidly changing context where the dominant sources of visual change on the retina may be self-generated. While this problem has been well studied in the context of motion estimation (Borst et al., 2010; Britten, 2008), the broader question of how visual neurons might extract local features of the scene under naturalistic viewing conditions is relatively poorly understood. How do visual neurons selectively encode local features of interest under these dynamic conditions?

Local feature detection during self-motion presents unique challenges. For detecting widefield motion, or large static features of the scene like oriented edges and landmarks, the visual scene is intrinsically redundant: many neurons distributed across the visual field can encode information relevant to the feature of interest even as the scene moves. Conversely, local features like prey, conspecifics, or approaching predators engage only a small part of the visual field, dramatically reducing the redundancy of the visual input. In addition, neurons that selectively respond to small features could also be activated by high spatial frequency content in the broader scene, potentially corrupting their responses under naturalistic viewing conditions.

Neurons that respond selectively to local visual features have been described in many species, including flies, amphibians, rodents, and primates (Keleş and Frye, 2017; Kerschensteiner, 2022; Klapoetke et al., 2022; Lettvin et al., 1959; Pasupathy and Connor, 2001; Piscopo et al., 2013). However, these studies have typically been conducted either in non-behaving animals or under conditions of visual fixation. Here, we explore the neural mechanisms by which local feature detection is made robust to the visual inputs and behavioral signals associated with natural vision.

Strategies for reliable visual feature detection during self-motion fall into one of at least three categories. First, behavioral strategies can help mitigate the impact of self-motion on visual feature encoding by changing the nature of the neural encoding task at hand.