Workshop on
Neural Systems - Science and Engineering
January 23 - 25, 2019
Faculty Hall, Indian Institute of Science, Bangalore
Sponsored by the Pratiksha Trust and Indian Institute of Science
Understanding the processes involved in brain functions such as audition and vision is an important and growing area of interdisciplinary research. The problem has acted as a melting pot, drawing researchers from disparate disciplines together to address one of the grandest challenges of the 21st century. The scale of the challenge, and the diversity of expertise it demands, means that such endeavors require synergistic interactions among neurobiologists, electrical engineers, and computer scientists. Over the past two decades, neurobiologists have made significant conceptual advances in our understanding of the brain through technical breakthroughs that have yielded unprecedented opportunities to gather large-scale structural and functional data. Learning and understanding these tools would enable computer scientists and data analysts to develop exceptional tools for machine learning and signal processing, tools that are not only helpful in emulating brain function but are also radically transforming many applications in information and communication technologies. This workshop, titled "Neural Systems - Science and Engineering", falls under the broader theme of activities at IISc under the banner of Brain Computation and Learning (BCL). The current workshop aims to create this useful dialogue between neurobiologists and computer scientists and to educate research students of each area in relevant topics of the other.
A prominent goal of this workshop is to promote synergistic interactions among neuroscientists, electrical engineers, and computer scientists. The workshop will allow young researchers to understand the diverse themes of research and to appreciate the close relationships between these apparently distinct themes.
This workshop is funded by a generous endowment from the Pratiksha Trust, which has been significantly promoting fundamental and translational neuroscience research within the country through the establishment of research centres and chair professorships at the Indian Institute of Science, Bangalore.
The venue is the Faculty Hall (inside the IISc Main Building), IISc Bangalore. Map of IISc and workshop venue. You can click on the icons for more information.
Additional visitor information is available at the IISc website and NMI.
Lori Holt
Carnegie Mellon University Understanding how humans interpret the complexity of spoken language
Experience deeply shapes how human listeners perceive spoken language. We learn long-term phonetic representations and words that respect the sound structure of our native language and, yet, we maintain enough flexibility to make sense of experience with nonnative accents or speech from imperfect computer synthesis. There are rich behavioral-science literatures that speak to the many ways that experience shapes speech perception. Yet, for the most part, contemporary neurobiological models of spoken language are oriented toward characterization of the system in a stable state. We are just beginning to understand the learning mechanisms involved in supporting successful human speech communication. I will describe how experience shapes speech perception at different time scales - from the influence of a single precursor sound, to distributions of sounds across seconds, to statistical regularities in acoustics experienced across multiple training sessions.
Mounya Elhilali
Johns Hopkins University Reverse-engineering auditory computations in the brain
The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge, to humans and animals alike, of parsing complex acoustic information arising from multiple sound sources into separate auditory streams. While the task seems effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remains a challenge for both brain science and engineering.
Shantanu Chakrabartty
Washington University St. Louis Neuromorphic Computing at Cross-roads & Neuromorphic Sensing: ways to approach energy-efficiency limits
Talk 1: As an isolated signal processing unit, a biological neuron is not optimized for energy efficiency. Constrained by the idiosyncrasies of ion-channel dynamics, a relatively large membrane capacitance and propagation artifacts through axonal pathways, a neuron typically dissipates an order of magnitude more energy than a highly optimized silicon neuron. In spite of such disparity, populations of biological neurons serve as marvels of energy-optimized systems. The biological basis for such energy-efficient and robust representation might lie in the nature of the spatiotemporal network dynamics, in the physics of noise-exploitation and in the use of neural oscillations. On the other hand, most synthetic and large-scale neuromorphic systems ignore these network dynamics, focusing instead on a single neuron and building the network bottom-up. From this approach, it is not evident how the shape, the nature and the dynamics of each individual spike relate to the overall system objective, or how a population of neurons, when coupled together, can self-optimize to produce an emergent spiking or population response, for instance spectral noise-shaping or synchrony. Other well-established synthetic neural network formulations (for example deep neural networks and support vector machines) follow a top-down synthesis approach, starting with a system objective function and then reducing the problem to a model of a neuron that inherently does not exhibit any spiking or complex dynamics. This talk will provide an overarching view of the discipline of neuromorphic computing and discuss new perspectives on how to combine machine learning principles with biologically relevant neural dynamics.
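As background for readers new to the area (a generic textbook sketch, not a model from the talk), the single-neuron abstraction contrasted with biological dynamics above is often a leaky integrate-and-fire unit, which can be simulated in a few lines:

```python
import numpy as np

def lif_spikes(current, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron.

    The membrane voltage leaks toward rest while integrating the input
    current; crossing the threshold emits a spike and resets the voltage.
    All parameters are illustrative, not from any specific hardware.
    """
    v, spikes = v_reset, []
    for i, drive in enumerate(current):
        v += dt / tau * (-v + drive)   # leaky integration step
        if v >= v_th:
            spikes.append(i)           # record spike time (in steps)
            v = v_reset
    return spikes

# Constant supra-threshold drive produces regular, periodic spiking.
times = lif_spikes(np.full(2000, 2.0))
```

With constant input the interspike intervals are identical, which is exactly the kind of single-neuron regularity that, as the abstract notes, says little by itself about population-level dynamics.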
Barbara Shinn-Cunningham
Carnegie Mellon University Role of attention mechanisms in listening
Understanding speech in natural environments depends not just on decoding the speech signal, but on extracting the speech signal from a mixture of sounds. In order to achieve this, the listener must be able to 1) parse the scene, determining what sound energy belongs to the speech signal and what energy is from a competing source (perform auditory scene analysis), and 2) filter out the competing source energy and focus on the speech. Together, these processes allow a listener to focus attention on the speech and analyze its content in detail. In Part I of my presentation, I will illustrate these issues, including which acoustic features support auditory scene analysis and which features allow a listener to focus attention. In Part II, I will describe the different brain networks that control auditory attention, and how we measure the effects of attention on neural processing.
Ying Xu
Western Sydney University A Digital Neuromorphic Auditory Pathway
This talk gives an overview of my work on the development of a digital binaural cochlear system, and its applications to a "where" pathway and a "what" pathway model. The binaural cochlear system models the basilar membrane, the outer hair cells, the inner hair cells and the spiral ganglion cells. The "where" pathway model uses a deep convolutional neural network to analyse correlograms from the binaural cochlear system to obtain sound source location. The "what" pathway model uses an event-based unsupervised feature extraction approach to investigate the acoustic characteristics embedded in auditory spike streams from the binaural cochlear system.
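The correlograms feeding the "where" pathway build on interaural cross-correlation. As a minimal illustration (a standard textbook computation, not code from the talk), the interaural time difference can be estimated from the lag that maximizes the cross-correlation of the two ear signals:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (in seconds) as the lag
    maximizing the cross-correlation of the two ear signals.
    Negative values mean the sound reached the left ear first."""
    corr = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    return lags[np.argmax(corr)] / fs

# A noise burst reaching the right ear 0.5 ms (24 samples) after the left.
fs, delay = 48000, 24
src = np.random.default_rng(0).standard_normal(4800)
left = np.concatenate([src, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), src])
itd = estimate_itd(left, right, fs)   # -0.0005 s: left ear heard it first
```

A real correlogram computes this correlation per cochlear frequency channel; the single-channel version above just shows the core operation.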
Neeraj Sharma
Carnegie Mellon University and Indian Institute of Science Talker Change Detection: Humans, Machines, and the Gap
Studies on natural selection suggest that it is not the strongest of the species that survives, but rather the one most adaptable to change. A similar strategy might be in play while listening to a multi-talker conversation, composed of multiple talkers speaking in turns. On the listener's side, the perception of conversational speech demands quick perception of and adaptation to talker changes in order to support communication. The mechanism in play is open for research, and understanding it will benefit the design of automatic systems for the flagship problem of conversational speech analysis. In this talk, I will present a study examining human talker change detection (TCD) in multi-party speech utterances using a behavioral paradigm in which listeners indicate the moment of perceived talker change. Modeling the behavioral data shows that human reaction time can be well estimated from the distance between acoustic features before and after the change instant. Further, the estimate improves when longer durations of speech prior to the talker change are incorporated. A performance comparison of humans with a few state-of-the-art machine TCD systems indicates a gap yet to be filled by machines.
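The feature-distance idea can be sketched minimally (a toy illustration with invented names and synthetic "talkers", not the study's actual features or model): compare mean acoustic-feature vectors on either side of a candidate change point, with a context window controlling how much prior speech is used.

```python
import numpy as np

def change_score(features, t, context):
    """Distance between mean feature vectors before and after frame t.

    `features` is a (frames x dims) array of per-frame acoustic features
    (e.g. MFCCs); `context` is how many frames on each side to average.
    A larger distance suggests a more salient talker change.
    """
    before = features[max(0, t - context):t].mean(axis=0)
    after = features[t:t + context].mean(axis=0)
    return np.linalg.norm(after - before)

# Two synthetic "talkers" with different feature statistics,
# changing at frame 100.
rng = np.random.default_rng(1)
talker_a = rng.normal(0.0, 1.0, size=(100, 13))
talker_b = rng.normal(3.0, 1.0, size=(100, 13))
feats = np.vstack([talker_a, talker_b])
```

The score at the true change point is much larger than the score well inside a single talker's turn, and averaging over more context frames stabilizes the estimate, mirroring the benefit of longer prior durations noted above.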
Shayan Garani Srinivasa
Indian Institute of Science Spatio-temporal Memories
Inspired by the functioning of the brain, content-addressable memories such as Kohonen self-organizing maps and their variants have been proposed and widely used in data science applications. However, this paradigm is 'static', in the sense that the input signal dynamics are not reflected within the memory of the neural network. Inspired by the seminal work of Alan Turing on morphogenesis, which explains the formation of patterns in animals, we develop an analogous theoretical model for storing and recalling spatio-temporal patterns from first principles. The spatio-temporal memory is neurobiologically inspired, and the neurons exhibit the 'temporal plasticity' effect during recall. Future research directions and applications of this model will be highlighted towards the end of the talk.
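For readers unfamiliar with the 'static' baseline mentioned above, one Kohonen self-organizing-map update looks like this (a generic textbook SOM step in numpy, not the spatio-temporal model of the talk):

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One Kohonen self-organizing-map update.

    Find the best-matching unit (BMU) for input x, then pull every unit
    toward x with a strength that decays with grid distance from the BMU
    (a Gaussian neighborhood function). Units lie on a 1-D grid here.
    """
    n = weights.shape[0]
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    grid_dist = np.abs(np.arange(n) - bmu)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(0)
w = rng.standard_normal((10, 2))      # ten 2-D prototype vectors
x = np.array([2.0, 2.0])              # one input pattern
w_new = som_step(w, x)                # BMU and neighbors move toward x
```

Note that nothing in this update depends on the temporal order of inputs, which is precisely the limitation the talk's spatio-temporal memory addresses.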
Shihab Shamma
University of Maryland Cortical Mechanisms for Auditory Selective Attention and Decision Making
S. P. Arun
Indian Institute of Science Compositionality as the key to object perception
Compositionality refers to the premise that the whole can be understood in terms of its parts. This is a fundamental question in our quest to simplify neural representations. In the case of vision, it is widely believed that our brain has evolved highly specialized feature detectors whose response is "more than the sum of their parts", thereby violating compositionality. A classic example is the idea of a grandmother cell, which responds to any image containing your grandmother, whether small or large, rotated towards you or away, and so on. Such feature processing, it is believed, is what makes our brain so good at vision compared to the best computers today. Identifying these highly specialized features then becomes extremely difficult, because a given image might contain a large number of features, and finding the right combination of features involves searching through a combinatorial explosion of possible feature subsets. In my lab we are investigating these fundamental questions using a combination of experimental techniques. I will present a series of results from our lab that challenge these widely held beliefs about how higher-order visual processing works. Our key conceptual advance is that while identifying complex features is difficult, understanding how such features combine is in fact tractable. I will present a series of results showing that visual object representations are highly compositional in nature at both behavioral and neural levels. In particular, I will show that the response to the whole object is systematically related to its parts, but the definition of parts requires careful elaboration. Further, these systematic relationships can explain complex percepts like symmetry and visual word processing. Thus, it may be more insightful to understand how features combine than to identify the features themselves.
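As a toy illustration of what "compositional" means operationally (an invented example, not data or a model from the lab): if the response to a whole object is a weighted sum of part responses, then the per-part contributions are recoverable from responses to part combinations by linear regression.

```python
import numpy as np

# Hypothetical setup: 40 objects, each a combination of 6 binary parts,
# with responses generated as a weighted sum of the parts present.
rng = np.random.default_rng(0)
n_parts, n_objects = 6, 40
parts = rng.integers(0, 2, size=(n_objects, n_parts))  # part present/absent
true_w = rng.normal(size=n_parts)                      # per-part contribution
resp = parts @ true_w                                  # compositional responses

# If responses are compositional, least squares recovers the part weights.
w_hat, *_ = np.linalg.lstsq(parts, resp, rcond=None)
```

A genuinely "more than the sum of its parts" detector would leave large residuals under such a fit, which is one simple way the compositional and non-compositional views make different predictions.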
For queries, please write to: bcl20xx@gmail.com