What Using Our Hands While Speaking Reveals About Our Brains

Have you ever studied a person's gestures as they spoke to you? Or noticed how you move your hands subconsciously when speaking? Gesturing comes naturally to many of us and is often encouraged as a powerful form of communication that enhances the words we speak.

But why? You can argue that hand gestures emphasize or augment our meaning. In a literal sense, yes. But why does it work?

Scientists from the Max Planck Institute for Psycholinguistics at the Radboud University in Nijmegen, Netherlands, set out to observe and interpret the why behind our speaking gestures.

"Most language research focuses on speech only. However, most language use happens during face-to-face conversations, in which we also communicate with bodily signals, such as facial expressions, eye gaze, head gestures, and hand gestures," said Marlijn ter Bekke, the PhD candidate who led the study. "If we really want to understand how language works, we also need to take these bodily signals into account."

Why we use our hands while we speak may go back to a fundamental survival instinct: We need to communicate to other humans quickly and efficiently, and hand gestures aid that.

But ter Bekke's research suggests something that makes that communication even faster: Our brains are designed to not just interpret but to predict.

Iconic Gestures: How the Study Worked

Using a virtual avatar paired with human-recorded audio, the scientists could precisely control how a speaker spoke and gestured, ensuring consistency across the groups of participants.

The study involved two distinct experiments.

In the first, participants watched videos of the virtual avatar asking them questions. Before each question was finished, the video stopped and prompted the participant to type how they thought the question would end. Participants only watched each video once and had to rely on their intuition to fill in the blanks or predict the target word.

For each question, there were three versions of the video: one where the avatar made an "iconic gesture," one where the avatar made a control hand movement, and one where the avatar made no hand movements.

"Iconic gestures are hand gestures that people make while talking and that depict concrete concepts," said ter Bekke. These can include actions (eg, bringing the hand to the mouth as if holding a glass to depict drinking), objects (eg, pretending to hold a ball to depict a ball), events (eg, a fast movement with the hand from left to right to depict how a car just raced by), and spatial relations (eg, holding two hands next to each other with the palms facing down to indicate how closely two cars are parked next to each other).

The control hand movements, on the other hand, were simply for grooming, and included an elbow scratch, jaw wipe, neck scratch, palm rub, forearm scratch, and hand scratch. The iconic gestures and control movements started and ended in the same rest position on the avatar's lap and were performed at the same sections of a given question.

Videos with control movements made it possible to isolate whether listeners were discerning meaning from the iconic gestures rather than merely reacting to hand movement. Videos with no hand movements provided a baseline, confirming that the control movements themselves had no effect on prediction.

Overall, 12.9% of the target words were predicted correctly. Prediction accuracy was 7.8% for videos with control or no hand movements and 23% for videos with preceding iconic gestures, ie, nearly three times higher. There was no discernible difference in prediction accuracy between the control and no-movement videos.

The second experiment tested a separate group of people, recording their brain activity with EEG as they watched and listened to the iconic gesture and control movement videos in full. Before the target word, there was a short, silent pause to collect enough clean oscillation data.

During the pauses, the team observed reduced alpha and beta power in the listeners' neural oscillations following iconic gestures vs control movements, a notable marker of anticipation.
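To make the EEG measure concrete: "alpha and beta power" refers to the strength of neural oscillations in roughly the 8-12 Hz and 13-30 Hz frequency bands. The sketch below, using synthetic data and SciPy's Welch estimator, illustrates how band power is typically quantified from a signal segment. It is a minimal illustration of the general technique, not the study's actual analysis pipeline; the sampling rate, window length, and band edges are assumptions.

```python
# Illustrative sketch: quantifying alpha/beta band power from a signal segment
# via Welch's power spectral density estimate. Synthetic data, assumed parameters.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 500  # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)  # a 2-second analysis window

# Synthetic "EEG": a 10 Hz (alpha) rhythm, a weaker 20 Hz (beta) rhythm, plus noise
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 20 * t)
       + 0.2 * rng.standard_normal(t.size))

# Power spectral density via Welch's method (1-second segments -> 1 Hz resolution)
freqs, psd = welch(eeg, fs=fs, nperseg=fs)

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over a frequency band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return trapezoid(psd[mask], freqs[mask])

alpha = band_power(freqs, psd, 8, 12)   # alpha band: 8-12 Hz
beta = band_power(freqs, psd, 13, 30)   # beta band: 13-30 Hz
print(f"alpha power: {alpha:.4f}, beta power: {beta:.4f}")
```

In a study like ter Bekke's, power computed this way would be compared between conditions (iconic gesture vs control movement) during the silent pause; lower alpha/beta power in the gesture condition is what marks anticipation.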

The Argument for the Brain as a 'Prediction Machine'

Based on ter Bekke's study alone, can we assume that we gesture to prompt people to get what we are saying, even before we say it? Perhaps, but the answer is not so simple.

"The fact that listeners can use early gestures to predict upcoming meaning does not automatically mean that speakers produce gestures to help the listener predict," said ter Bekke.

"There are all kinds of beneficial effects of gestures, both for the speaker and for the listener: This effect on prediction we found is of course not the only thing that gestures do."

"I adopt Andy Clark's view that our brains are essentially prediction machines," said Jack Wilson, PhD, researcher and lecturer in English Language at the University of Salford, Salford, England, who has also studied this subject but is not involved in this study. "The job of our brains is to make sure that we survive, and one of the ways they do this is by making predictions about what we are going to experience."

The same phenomenon can be applied to our speaking and gesturing behaviors. "When I speak, I am predicting what I am going to say and gesture. I am also predicting how you will respond," Wilson said. "The person I am speaking to is using their predictions of how they speak and gesture to comprehend what I am saying."

"So the point is not that we produce gestures in order to help our interlocutors predict what we are going to say but that the whole process of speaking and gesturing is grounded in prediction," Wilson concluded. "Helping our interlocutors predict what we are about to say is just an outcome of the fact that our brains are prediction machines."

This plays out whether a person gestures expansively or is more reserved. Even subtle gestures can help the other person predict what's coming in the conversation.

A good visualizer may gesture while speaking based on their reliance on mental imagery, whereas a poor visualizer may gesture in order to rely on the live visual information gesturing provides "in the same way that having a pad of paper can help us when we are trying to think about complex ideas," said Wilson.

On top of this, there are cross-cultural and cross-linguistic differences to acknowledge that can influence how and how frequently people gesture. Ter Bekke's study, for example, "focused only on Dutch language processing in a sample of Western, educated, industrialized, rich, and democratic (WEIRD) participants." Participant samples that are representative of various cultures and languages may reveal additional insights.

"The most important thing to recognize is that all communication is multimodal," said Wilson. "Even in communicative acts that appear to be purely linguistic, there are nonlinguistic features that affect understanding," and that includes how our brains can help us figure out -- possibly via fast-acting prediction -- what it all means.
