You can't call it a dictionary just yet, but University of Delaware neuroscientist Joshua Neunuebel is starting to break the code mice use to communicate with each other.
So far, it's all action-specific. Mice sound one way when they are being chased, quite another when they are the chaser, not much at all when they are not in motion.
He knows this because he and his research team have found a way to identify precisely which mouse is making which sound, where and when.
Their findings, which were just published in Nature Neuroscience, provide a foundation for examining the neural circuits that link sensory cues, specifically these ultrasonic mouse calls, to social behavior.
"This is fundamental science that will allow us to potentially get at more complicated problems," Neunuebel said. That includes a broad range of communication disorders, including autism.
The work is supported by the Foundation for the National Institutes of Health, the University of Delaware Research Foundation and Delaware's General University Research Program.
Humans can't hear the majority of mouse-to-mouse vocal interactions at all because they happen on a scale our ears don't catch. This is likely one of life's hidden blessings, since mice like to scurry around in our walls, attics, basements and other human habitats.
But studying their communication patterns can help researchers understand the neurobiology of social behavior and bring valuable insight, not just into the secret life of rodents, but possibly into the mechanics of human communication. Research shows that about 98 percent of human genes are shared by mice.
To study these mouse interactions, Neunuebel's team gathered data as four mice (two males, two females) got acquainted. The mice interacted for five hours at a time in a chamber fitted with eight microphones and a video camera. Researchers recorded 10 similar encounters using different mice each time, studying a total of 44 mice.
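Attributing a call to one mouse among several relies on the fact that a sound reaches each of the eight microphones at a slightly different time. The following is a minimal, hypothetical sketch of that general idea, localization by time difference of arrival (TDOA), using made-up microphone positions and a four-mic arena; the study's actual geometry and algorithm are not described here.

```python
# Hypothetical sketch: locating a sound source from the time differences
# with which its call arrives at an array of microphones (TDOA).
# Mic layout, arena size, and the grid-search method are illustrative.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

# Four microphones at the corners of a 1 m x 1 m arena (meters).
mics = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)

def tdoa(source, mics, ref=0):
    """Arrival-time difference at each mic, relative to mic `ref`."""
    dist = np.linalg.norm(mics - source, axis=1)  # distance to each mic
    t = dist / SPEED_OF_SOUND                     # time of flight
    return t - t[ref]

def localize(observed, mics, step=0.01):
    """Grid search for the point whose predicted TDOAs best match."""
    coords = np.arange(0.0, 1.0 + step, step)
    best, best_err = None, np.inf
    for x in coords:
        for y in coords:
            cand = np.array([x, y])
            err = np.sum((tdoa(cand, mics) - observed) ** 2)
            if err < best_err:
                best, best_err = cand, err
    return best

true_source = np.array([0.3, 0.7])            # a calling mouse's position
estimate = localize(tdoa(true_source, mics), mics)
print(estimate)                               # close to [0.3, 0.7]
```

With the arrival-time differences in hand, the position that best explains them can be matched against the video track of each animal to decide which mouse was calling.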
They collected enormous amounts of data, with each microphone capturing 250,000 audio samples per second and the video camera capturing 30 frames per second. Each five-hour encounter produced more than 100 gigabytes of data.
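The audio alone accounts for most of that volume. A back-of-the-envelope check, assuming 16-bit (2-byte) samples since the actual recording format isn't stated:

```python
# Rough data-volume check for one five-hour encounter, assuming
# 16-bit (2-byte) audio samples; the study's format may differ.
mics = 8
audio_rate = 250_000       # samples per second per microphone
bytes_per_sample = 2       # assumed 16-bit samples
seconds = 5 * 60 * 60      # five hours

audio_bytes = mics * audio_rate * bytes_per_sample * seconds
print(audio_bytes / 1e9)   # 72.0 -> about 72 GB of raw audio alone
```

Adding the video stream on top of roughly 72 GB of raw audio is consistent with the more than 100 gigabytes quoted per encounter.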
Using machine-learning programs along with other computational approaches, they were able to show that specific sounds were associated with distinct behaviors.
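As an illustration only (not the study's actual pipeline), associating calls with behaviors can be framed as a classification problem: each vocalization is reduced to acoustic features and labeled with the behavior under way when it was emitted. Here is a toy nearest-centroid sketch on synthetic data; the feature names and numbers are invented.

```python
# Illustrative sketch, not the published method: a minimal nearest-centroid
# classifier linking hypothetical acoustic features of a call
# (peak frequency in kHz, duration in s) to the behavior during the call.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calls": rows are vocalizations, columns are the two features.
# The two behavior classes are given slightly different typical features.
n = 200
chase_calls = rng.normal([70.0, 0.05], [2.0, 0.005], size=(n, 2))
flee_calls = rng.normal([60.0, 0.03], [2.0, 0.005], size=(n, 2))

centroids = {
    "chasing": chase_calls.mean(axis=0),
    "being chased": flee_calls.mean(axis=0),
}

def classify(call):
    """Label a call by the behavior whose centroid it lies closest to."""
    return min(centroids, key=lambda b: np.linalg.norm(call - centroids[b]))

print(classify(np.array([69.5, 0.05])))   # a chase-like call
print(classify(np.array([60.5, 0.03])))   # a flee-like call
```

Real pipelines would use richer spectral features and far more capable models, but the framing is the same: sound in, behavior label out.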
"To make sense of the mountain of data, we wrote a lot of computer programs. Everybody in the lab now writes code, and that's a huge attribute of what my lab does. I think it's essential for deciphering very complex behavior," Neunuebel said.