Multisensory Integration

Humans are constantly confronted with sensory stimuli that vary in their reliability. Every piece of incoming information, whether sight, sound, taste, scent, or touch, has a certain signal-to-noise ratio that influences how the brain processes it. If the brain seeks to extract as much useful information as possible from a given set of stimuli, it must reduce the processing power, or weight, given to unreliable stimuli and increase the weight given to reliable stimuli. However, it remains unclear exactly how the brain carries out this process of weighting. Some studies found that individuals were able to adjust their weights optimally according to stimulus reliability, while other studies showed sub-optimal reweighting.
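For intuition, the statistically optimal rule (maximum-likelihood cue combination) weights each cue in inverse proportion to its noise variance. The sketch below uses illustrative numbers rather than values from any particular study, and simply shows how a reliable visual estimate and a noisier auditory estimate would be combined under that rule.

```python
# Minimal sketch of reliability-weighted (inverse-variance) cue combination.
# The cue means and noise levels are illustrative, not taken from the study.
import numpy as np

visual_estimate, visual_sigma = 10.0, 1.0      # reliable cue (low noise)
auditory_estimate, auditory_sigma = 14.0, 3.0  # unreliable cue (high noise)

weights = np.array([1 / visual_sigma**2, 1 / auditory_sigma**2])
weights /= weights.sum()                       # normalize weights to sum to 1

combined = weights @ np.array([visual_estimate, auditory_estimate])
print(weights, combined)  # the reliable visual cue dominates the combined estimate
```

With these numbers the normalized weights come out to 0.9 and 0.1, so the combined estimate (10.4) sits close to the reliable visual cue.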

In Dr. Brandon Turner’s model-based cognitive neuroscience lab at Ohio State University, we found rich individual differences in multisensory integration: as auditory and visual stimuli varied in reliability, some participants integrated them in an optimally adaptive manner, while others did not.


We also proposed the Averaging Diffusion Model (ADM) to explain how multisensory integration occurs over time. Unlike other evidence accumulation models, such as the Diffusion Decision Model (DDM), the ADM assumes that evidence for two choices is continuously averaged, not summed. This implies that decision-making is not a process by which noisy evidence eventually sums to a given threshold, but rather a noise-filtering process, whereby the variability of accumulated evidence decreases with time.
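As a rough illustration of the difference, the sketch below (not the published model code; the drift and noise values are arbitrary) simulates many trials of summed versus averaged evidence and compares their across-trial variability early and late in a trial.

```python
# Minimal sketch (illustrative parameters, not the published model) contrasting
# summed evidence (DDM-style) with averaged evidence (ADM-style).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_steps, drift, noise = 1000, 200, 0.05, 1.0

# Noisy momentary evidence favoring one choice over the other
samples = drift + noise * rng.standard_normal((n_trials, n_steps))

summed = np.cumsum(samples, axis=1)            # running sum of evidence
averaged = summed / np.arange(1, n_steps + 1)  # running average of evidence

# Across-trial variability: grows for the sum, shrinks for the average
print("summed   var, step 10 vs 200:", summed[:, 9].var(), summed[:, -1].var())
print("averaged var, step 10 vs 200:", averaged[:, 9].var(), averaged[:, -1].var())
```

Under these assumptions the averaged trajectory converges toward the underlying drift as more samples arrive, so a decision can be framed as waiting for the estimate's uncertainty to fall below a criterion rather than for a running sum to reach a bound.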


The full study can be found here (paywall).