
Why are melodies/harmonies perceived as pleasurable by humans?


Is there any evolutionary advantage to finding melodies or harmonies pleasurable? Does the ear pick up these particular oscillating waves differently from other sounds, and if so, how does that affect our perception of pleasure? I'm looking for some sort of signalling pathway (most likely involving neurotransmitters I realize).


There are strong connections between the auditory cortex and the limbic system, which includes such structures as the hippocampus and the amygdala.

A recent paper [1] builds on earlier notions of the emotional "significance" of music without any lyrics. It adds lyrics into the mix, giving a perspective on which portions of the brain react to which components of the music.

Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca's area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics.

One of the limitations of this particular study is that the subjects self-selected their own pieces, which may limit the reliability of the results. Of course, defining "happy" or "sad" for every individual is slightly subjective and difficult. They cited an earlier "pioneering" study which standardized the musical selection between subjects. Without consideration of the lyrics:

The first pioneer study using functional magnetic resonance imaging (fMRI) by Khalfa et al. (2005) chose a controlled manipulation of two musical features (tempo and mode) to vary the happy or sad emotional connotations of 34 instrumental pieces of classical music, lasting 10s each. Sad pieces in minor-mode contrasted with happy pieces in major mode produced activations in the left medial frontal gyrus (BA 10) and the adjacent superior frontal gyrus (BA 9). These regions have been associated with emotional experiences, introspection, and self-referential evaluation (Jacobsen et al., 2006; Kornysheva et al., 2010).

As an aside, to answer your final thought: in cases like this, I think trying to jam everything under the umbrella of one "neurotransmitter system" or another can make things overly simplistic, to the point where you lose sight of the diversity of receptors expressed. You can say a system is driven by dopamine, but D1 and D2 receptors have exactly opposite effects on the neuron.

[1] Brattico, E., Alluri, V., et al (2011) A functional MRI study of happy and sad emotions in music with and without lyrics. Frontiers in Psychology, 2: 308. doi: 10.3389/fpsyg.2011.00308 (free pdf)

(see also, http://www.sciencedirect.com/science/article/pii/S0028393206003083 and related)


In music, harmonies are simultaneous combinations of tones or chords that are concordant.

In physics, each note is a vibration with a defined wavelength, and concordance can be explained in mathematical terms, for instance with regard to the coincidence of phase oscillations.

In physiology, the ear perceives air vibrations and sends them to the brain by means of trains of pulses.

According to some scientists, music providing regular trains of pulses (like harmonic and rhythmic music) should be more pleasing, probably because of stimulation of the limbic system, as the other answer explains.

Source: Ushakov et al. 2011, Physical Review Letters, DOI 10.1103/PhysRevLett.107.108103

Lay explanation: Why harmony pleases the brain, New Scientist, Sept 2011


Musical perception: nature or nurture?

This is the subject of research by Juan Manuel Toro (ICREA) and Carlota Pagès Portabella, researchers at the Center for Brain and Cognition, published in the journal Psychophysiology as part of an H2020 project.

Universitat Pompeu Fabra - Barcelona

IMAGE: Topographic map of how the brain reacts in musicians and non-musicians.

From a general perspective, harmony in music is the balance of the proportions between the different parts of a whole, which causes a feeling of pleasure. "When we listen to music, each sound we hear helps us to imagine what is coming next. If what we expect is fulfilled, we feel satisfied. But if not, we may be pleasantly surprised or upset", comments Carlota Pagès Portabella, a researcher with the Language and Comparative Cognition research group (LCC) at the Center for Brain and Cognition (CBC).

A study by Juan M. Toro, director of the LCC and ICREA research professor at the Department of Information and Communication Technologies (DTIC) at UPF, and Carlota Pagès Portabella, published in the journal Psychophysiology, examines human musical perception by comparing how the brain reacts when the musical sequences perceived do not finish as might be expected. The study is part of an international European H2020 project which the CBC is conducting with Fundació Bial to understand the bases of musical cognition.

The results of the study have shown that although the perception of music is universal, training in music alters its perception. To reach this conclusion, the researchers used electroencephalographic recordings to capture what happened in the brains of 28 people, with and without musical training, when they listened to melodies with various unexpected endings.

A specific response to any irregularity

First, the researchers showed that regardless of the subjects' musical training, in the event of any irregularity in musical sequences the brain produces a specific response known as early right anterior negativity (ERAN).

Furthermore, the authors observed that people with no musical training do not distinguish between a simply unexpected and a musically unacceptable ending. Nevertheless, when the musically trained participants heard an ending that was utterly unacceptable with regard to harmony, their brains showed a stronger response than when they were presented with simply unexpected endings.

These results show that while the perception of music is a relatively universal experience, musical training alters how humans perceive music. The brains of musicians distinguish between different types of musical irregularities that untrained listeners do not differentiate.



When Houston’s song focuses on notes 1, 3, and 5, it’s focusing on the notes that rank highest in what music theorists call the “tonal hierarchy.” Hughes describes it as such: “The tonal hierarchy is this idea that certain notes are sort of more important than others.” In the major key, we generally talk about four levels of note importance.

Level one:

Hughes says, “So if you’re in the key of C major, at the top of the hierarchy is the note C. Because that’s the most important note in the key of C major.” That’s the root or tonic of the scale, and the name of the scale — and so it’s the most important note.

Level two:

He goes on: “And the next two notes on the hierarchy are E and G.” Those are the third and fifth notes of the C major scale; they make up the basis for the C major chord, and so they are of secondary importance.

Level three:

According to Hughes, “The remaining notes, on the next rung down, are all of the notes left in the key of C major that are not C, E, or G. So that would be D, F, A, and B.” These are the diatonic notes that produce other chords in need of resolving back to the root.

Level four:

Lastly, we have the lowest level of hierarchical importance. This would be all of the notes in Western harmony that are not contained within the C major scale, like D♭, E♭, A♭, and B♭.

If we changed those notes to their respective diatonic scale degrees, it would look like this:

  1. Level one: C (1)
  2. Level two: E (3) G (5)
  3. Level three: D (2) F (4) A (6) B (7)

And then of course, all the notes that aren’t in the scale would be numbered and identified in relation to their function in a chord, but we won’t go into that right now. This all came about as a result of music psychologist Carol Krumhansl’s experiments on how average listeners judged the placement of a “probe tone” in a short melodic excerpt. These tests would later be known as “the probe tone experiments.”
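
To make the mapping concrete, here is a small Python sketch (our illustration, not something from Hughes or Cui) that assigns each of the twelve pitch classes to one of the four hierarchy levels in C major:

```python
# A minimal sketch of the four-level tonal hierarchy described above,
# using the key of C major as in Hughes' example.

MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def tonal_hierarchy_level(pitch_class: int, tonic: int = 0) -> int:
    """Return 1-4: tonic, chord tones (3rd/5th), other diatonic notes, chromatic notes."""
    degree = (pitch_class - tonic) % 12
    if degree == 0:
        return 1                      # level one: the tonic (C)
    if degree in (4, 7):
        return 2                      # level two: major third and perfect fifth (E, G)
    if degree in MAJOR_SCALE_STEPS:
        return 3                      # level three: remaining diatonic notes (D, F, A, B)
    return 4                          # level four: notes outside the key (Db, Eb, ...)

for pc, name in enumerate(NOTE_NAMES):
    print(f"{name:>2}: level {tonal_hierarchy_level(pc)}")
```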

Cui and Dr. Hughes have each done their own different variations on the probe tone experiments.

[AC]: “If you imagine you are participating in one of these experiments, you’ll get played a short melodic excerpt, say, a scale, then you get played a tone. And then you have to rate on a scale of 1-7 how well you think the tone fits with the music that was played before. Based on those ratings, you can order basically how well people think various tones fit in with the same context.”

[BH:] “What Krumhansl found was that the ratings, you know, ‘How well each note fit’ reflected exactly the tonal hierarchy.”

[AC]: “I think the most interesting thing about this is that even participants or listeners who don’t have any musical training show the similar patterns. So in your head, I’m sure you’re not really thinking, ‘How important is that tone?’ You’re giving it a rating based on your gut feeling.”
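
As an illustration of how such ratings are typically summarized, here is a hedged Python sketch that correlates one hypothetical listener's 1-7 ratings with the widely cited Krumhansl & Kessler (1982) major-key profile. The profile values are approximate and the ratings are invented; this is not data from the studies discussed here.

```python
# Summarizing probe-tone ratings: correlate a listener's goodness-of-fit ratings
# with the Krumhansl & Kessler (1982) major-key profile (values approximate).
import numpy as np

KK_MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                             2.52, 5.19, 2.39, 3.66, 2.29, 2.88])  # C, C#, ..., B

ratings = np.array([7, 2, 4, 2, 5, 4, 3, 6, 2, 4, 2, 3])  # hypothetical 1-7 responses

# How closely this listener's ratings track the tonal hierarchy:
r = np.corrcoef(ratings, KK_MAJOR_PROFILE)[0, 1]
print(f"correlation with C-major profile: {r:.2f}")
```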

So what gives us that gut feeling?

Krumhansl proposed that we’ve heard enough songs in major keys to be able to pick up on what songs in major do, and how they should sound. Just like when you watch enough spy movies, you can basically predict what’s supposed to happen in the spy movie you’re about to watch. Cui says, “I’m assuming that most people hearing ‘I Wanna Dance With Somebody’ would know that it’s in major even though they might not know it’s called major.”

And according to the probe tone experiments, most people also probably recognize that the notes 1, 3, and 5 are also going to be pretty important in the melody. Houston’s song “I Wanna Dance With Somebody” is not in the aforementioned key of C major, but it is in a different major, the key of G♭ major, and this tonal hierarchy works in any key, so let’s use the numbers and look at it the way we’d look at any song on Earth.

Remember above when we identified that Houston uses the notes 1, 3, and 5 more in the chorus than the verse? Those notes are hierarchically more important, and so they appear in the most important section of a song: the chorus. The chorus is hierarchically more important from a structural standpoint, so part of the reason this song is so effective at creating a memorable musical experience is that it joins predictable notes with their predictable placement in the song.

That’s pleasurable to us because of something cognitive scientists call the “fluency heuristic,” a psychological shortcut our brains use that’s associated with pleasure. In other words, the human brain likes things it can process faster. And with good reason! With so much going on every second, your brain has to focus on the things it can process quickly just to keep up.

Cui reiterates the results of the probe tone experiments and explains that “Tones that fit well often are also easier to process.” Not only do the tonic tones fit the best, they help our brain process all of the information faster.

The verses of “I Wanna Dance With Somebody” contain those three top-tiered notes (1, 3, and 5) 57% of the time and the pre-chorus contains them 50% of the time. But the chorus uses these notes 85% of the time, meaning that it’s both pleasurable and predictable when it repeats and allows us to sing along in our heads (or in our showers).
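
The kind of tally behind those percentages can be sketched in a few lines of Python; the scale-degree lists below are invented for illustration and are not a transcription of Houston’s melody.

```python
# Count how often a melody uses the top-tier scale degrees 1, 3, and 5.

def top_tier_share(scale_degrees):
    """Fraction of melody notes that are scale degree 1, 3, or 5."""
    top = sum(1 for d in scale_degrees if d in (1, 3, 5))
    return top / len(scale_degrees)

verse_degrees = [1, 2, 3, 2, 5, 6, 5, 3, 4, 2, 1, 5]         # hypothetical
chorus_degrees = [1, 3, 5, 5, 3, 1, 5, 3, 1, 3, 5, 1, 2, 5]  # hypothetical

print(f"verse:  {top_tier_share(verse_degrees):.0%}")
print(f"chorus: {top_tier_share(chorus_degrees):.0%}")
```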

This isn’t to say that we only like songs because our brains are lazy. It has to do with how our brains process new information as it fits into a given context, in this case, how melodic notes fit into a key.

Sometimes, I’ll mention “I Wanna Dance With Somebody” and someone in the room will just start singing the chorus immediately. Part of that has to do with this melodic context stuff and the tonal hierarchy of certain notes that dominate that section, but it also has to do with other stuff like lyrical repetition in the chorus, tonal resolution, the rhythm and meter, and even with personal memories we might attribute to that song. Cognitive science can explain a portion of this, but not all of it, as Cui is sure to mention.

[AC]: “That’s fundamentally the trouble of trying to apply specific scientific experiments to songs. The whole idea of scientific experiments is to try to control as many things as possible, and sometimes that ends up happening by stripping away some of the things that happen in real life: like lyrics, like meter, and so on. And now you have this song, which has all these extra things that are not part of music cognition experiments — anything you say that the scientific experiments might predict are confounded by the fact that in real life, there are all these other things that weren’t part of the experiment.”

This song is a great example of how music theory and psychology can help the songwriting process. In essence, you want to try to structure how listeners bring their sense of joy through the song, with the ultimate high point being in the chorus where lyrics and melodies are all repeated for better recall. Now you’ve got a tonal hierarchy to work with to make that section, and the others leading up to it, even stronger.


Hunter Farris runs the Song Appeal podcast, which focuses on the psychology behind why we like the music we like. His podcast on music theory and music psychology has appealed broadly enough for Hunter to speak at Comic-Con 2018, and is instructive enough to be used as homework by a music theory professor. He currently teaches people to play piano by ear and make their own arrangements of other people’s music.



Pitch

Sounds consist of waves of air molecules that vibrate at different frequencies. These waves travel to the basilar membrane in the cochlea of the inner ear. Different frequencies of sound will cause vibrations in different locations of the basilar membrane. We are able to hear different pitches because each sound wave with a unique frequency is correlated to a different location along the basilar membrane. This spatial arrangement of sounds and their respective frequencies being processed in the basilar membrane is known as tonotopy. When the hair cells on the basilar membrane move back and forth due to the vibrating sound waves, they release neurotransmitters and cause action potentials to occur down the auditory nerve. The auditory nerve then leads to several layers of synapses at numerous clusters of neurons, or nuclei, in the auditory brainstem. These nuclei are also tonotopically organized, and the process of achieving this tonotopy after the cochlea is not well understood. [1] This tonotopy is in general maintained up to primary auditory cortex in mammals. [2]
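
As a rough illustration of this frequency-to-place mapping, the sketch below uses Greenwood's (1990) human cochlear map with its commonly quoted constants; it is an approximation for illustration, not part of the cited studies.

```python
# Greenwood's human cochlear map F(x) = A(10^(a*x) - k), with the commonly quoted
# constants A ~ 165.4 Hz, a ~ 2.1, k ~ 0.88 (x = 0 at the apex, 1 at the base).

def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Characteristic frequency (Hz) at fractional distance x along the basilar membrane."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> ~{greenwood_frequency(x):7.0f} Hz")
```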

A widely postulated mechanism for pitch processing in the early central auditory system is the phase-locking and mode-locking of action potentials to frequencies in a stimulus. Phase-locking to stimulus frequencies has been shown in the auditory nerve, [3] [4] the cochlear nucleus, [3] [5] the inferior colliculus, [6] and the auditory thalamus. [7] By phase- and mode-locking in this way, the auditory brainstem is known to preserve a good deal of the temporal and low-passed frequency information from the original sound; this is evident when measuring the auditory brainstem response using EEG. [8] This temporal preservation is one way to argue directly for the temporal theory of pitch perception, and to argue indirectly against the place theory of pitch perception.
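
To make the idea of phase-locking concrete, here is a toy Python sketch (not a physiological model) that generates spikes jittered around one phase of a 440 Hz tone and computes vector strength, a standard index of phase-locking.

```python
# Toy illustration of phase-locking and vector strength.
import numpy as np

rng = np.random.default_rng(0)
freq = 440.0                      # stimulus frequency in Hz
period = 1.0 / freq

# One spike per cycle, locked to phase 0 with ~0.05-cycle timing jitter.
cycles = np.arange(200)
spike_times = cycles * period + rng.normal(0, 0.05 * period, size=cycles.size)

phases = 2 * np.pi * freq * spike_times          # spike phases relative to the stimulus
vector_strength = np.abs(np.mean(np.exp(1j * phases)))
print(f"vector strength: {vector_strength:.2f}   (1 = perfect phase-locking, 0 = none)")
```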

The right secondary auditory cortex has finer pitch resolution than the left. Hyde, Peretz and Zatorre (2008) used functional magnetic resonance imaging (fMRI) in their study to test the involvement of right and left auditory cortical regions in frequency processing of melodic sequences. [9] As well as finding superior pitch resolution in the right secondary auditory cortex, specific areas found to be involved were the planum temporale (PT) in the secondary auditory cortex, and the primary auditory cortex in the medial section of Heschl's gyrus (HG).

Many neuroimaging studies have found evidence of the importance of right secondary auditory regions in aspects of musical pitch processing, such as melody. [10] Many of these studies, such as one by Patterson, Uppenkamp, Johnsrude and Griffiths (2002), also find evidence of a hierarchy of pitch processing. Patterson et al. (2002) used spectrally matched sounds which produced no pitch, fixed pitch, or melody in an fMRI study and found that all conditions activated HG and PT. Sounds with pitch activated more of these regions than sounds without. When a melody was produced, activation spread to the superior temporal gyrus (STG) and planum polare (PP). These results support the existence of a pitch processing hierarchy.

Absolute pitch

Absolute pitch (AP) is defined as the ability to identify the pitch of a musical tone or to produce a musical tone at a given pitch without the use of an external reference pitch. [11] [12] Neuroscientific research has not discovered a distinct activation pattern common for possessors of AP. Zatorre, Perry, Beckett, Westbury and Evans (1998) examined the neural foundations of AP using functional and structural brain imaging techniques. [13] Positron emission tomography (PET) was utilized to measure cerebral blood flow (CBF) in musicians possessing AP and musicians lacking AP. When presented with musical tones, similar patterns of increased CBF in auditory cortical areas emerged in both groups. AP possessors and non-AP subjects demonstrated similar patterns of left dorsolateral frontal activity when they performed relative pitch judgments. However, in non-AP subjects activation in the right inferior frontal cortex was present whereas AP possessors showed no such activity. This finding suggests that musicians with AP do not need access to working memory devices for such tasks. These findings imply that there is no specific regional activation pattern unique to AP. Rather, the availability of specific processing mechanisms and task demands determine the recruited neural areas.

Melody

Studies suggest that individuals are capable of automatically detecting a difference or anomaly in a melody such as an out of tune pitch which does not fit with their previous music experience. This automatic processing occurs in the secondary auditory cortex. Brattico, Tervaniemi, Naatanen, and Peretz (2006) performed one such study to determine if the detection of tones that do not fit an individual's expectations can occur automatically. [14] They recorded event-related potentials (ERPs) in nonmusicians as they were presented unfamiliar melodies with either an out of tune pitch or an out of key pitch while participants were either distracted from the sounds or attending to the melody. Both conditions revealed an early frontal error-related negativity independent of where attention was directed. This negativity originated in the auditory cortex, more precisely in the supratemporal lobe (which corresponds with the secondary auditory cortex) with greater activity from the right hemisphere. The negativity response was larger for pitch that was out of tune than that which was out of key. Ratings of musical incongruity were higher for out of tune pitch melodies than for out of key pitch. In the focused attention condition, out of key and out of tune pitches produced late parietal positivity. The findings of Brattico et al. (2006) suggest that there is automatic and rapid processing of melodic properties in the secondary auditory cortex. [14] The findings that pitch incongruities were detected automatically, even in processing unfamiliar melodies, suggests that there is an automatic comparison of incoming information with long term knowledge of musical scale properties, such as culturally influenced rules of musical properties (common chord progressions, scale patterns, etc.) and individual expectations of how the melody should proceed.

Rhythm

The belt and parabelt areas of the right hemisphere are involved in processing rhythm. [15] Rhythm is a strong repeated pattern of movement or sound. When individuals are preparing to tap out a rhythm of regular intervals (1:2 or 1:3) the left frontal cortex, left parietal cortex, and right cerebellum are all activated. With more difficult rhythms such as 1:2.5, more areas in the cerebral cortex and cerebellum are involved. [16] EEG recordings have also shown a relationship between brain electrical activity and rhythm perception. Snyder and Large (2005) [17] performed a study examining rhythm perception in human subjects, finding that activity in the gamma band (20 – 60 Hz) corresponds to the beats in a simple rhythm. Two types of gamma activity were found by Snyder & Large: induced gamma activity and evoked gamma activity. Evoked gamma activity was found after the onset of each tone in the rhythm; this activity was found to be phase-locked (peaks and troughs were directly related to the exact onset of the tone) and did not appear when a gap (missed beat) was present in the rhythm. Induced gamma activity, which was not found to be phase-locked, was also found to correspond with each beat. However, induced gamma activity did not subside when a gap was present in the rhythm, indicating that induced gamma activity may possibly serve as a sort of internal metronome independent of auditory input.
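
As a minimal illustration of how the 20 – 60 Hz band examined by Snyder and Large can be isolated, the sketch below band-pass filters a synthetic signal (a made-up mixture with an assumed 1000 Hz sampling rate, not real EEG).

```python
# Isolating the 20-60 Hz "gamma" band from a synthetic signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
signal = (np.sin(2 * np.pi * 40 * t)          # a 40 Hz gamma-like component
          + np.sin(2 * np.pi * 2 * t)         # slow drift
          + 0.5 * np.random.default_rng(1).normal(size=t.size))  # noise

# 4th-order Butterworth band-pass, 20-60 Hz
b, a = butter(4, [20, 60], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, signal)

print(f"band-limited power: {np.mean(gamma ** 2):.3f}")
```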

Tonality

Tonality describes the relationships between the elements of melody and harmony – tones, intervals, chords, and scales. These relationships are often characterized as hierarchical, such that one of the elements dominates or attracts another. They occur both within and between every type of element, creating a rich and time-varying perception between tones and their melodic, harmonic, and chromatic contexts. In one conventional sense, tonality refers to just the major and minor scale types – examples of scales whose elements are capable of maintaining a consistent set of functional relationships. The most important functional relationship is that of the tonic note (the first note in a scale) and the tonic chord (the first note in the scale with the third and fifth note) with the rest of the scale. The tonic is the element which tends to assert its dominance and attraction over all others, and it functions as the ultimate point of attraction, rest and resolution for the scale. [18]

The right auditory cortex is primarily involved in perceiving pitch, and parts of harmony, melody and rhythm. [16] One study by Petr Janata found that there are tonality-sensitive areas in the medial prefrontal cortex, the cerebellum, the superior temporal sulci of both hemispheres and the superior temporal gyri (which show a skew towards the right hemisphere). [19]

Motor control functions

Musical performance usually involves at least three elementary motor control functions: timing, sequencing, and spatial organization of motor movements. Accuracy in timing of movements is related to musical rhythm. Rhythm, the pattern of temporal intervals within a musical measure or phrase, in turn creates the perception of stronger and weaker beats. [20] Sequencing and spatial organization relate to the expression of individual notes on a musical instrument.

These functions and their neural mechanisms have been investigated separately in many studies, but little is known about their combined interaction in producing a complex musical performance. [20] The study of music requires examining them together.

Timing

Although the neural mechanisms involved in timing movement have been studied rigorously over the past 20 years, much remains controversial. The ability to phrase movements in precise time has been attributed to a neural metronome or clock mechanism in which time is represented through oscillations or pulses. [21] [22] [23] [24] An opposing view holds that timing is an emergent property of the kinematics of movement itself. [23] [24] [25] Kinematics is defined as the parameters of movement through space without reference to forces (for example, direction, velocity and acceleration). [20]

Functional neuroimaging studies, as well as studies of brain-damaged patients, have linked movement timing to several cortical and sub-cortical regions, including the cerebellum, basal ganglia and supplementary motor area (SMA). [20] Specifically the basal ganglia and possibly the SMA have been implicated in interval timing at longer timescales (1 second and above), while the cerebellum may be more important for controlling motor timing at shorter timescales (milliseconds). [21] [26] Furthermore, these results indicate that motor timing is not controlled by a single brain region, but by a network of regions that control specific parameters of movement and that depend on the relevant timescale of the rhythmic sequence. [20]

Sequencing

Motor sequencing has been explored in terms of either the ordering of individual movements, such as finger sequences for key presses, or the coordination of subcomponents of complex multi-joint movements. [20] Implicated in this process are various cortical and sub-cortical regions, including the basal ganglia, the SMA and the pre-SMA, the cerebellum, and the premotor and prefrontal cortices, all involved in the production and learning of motor sequences but without explicit evidence of their specific contributions or interactions amongst one another. [20] In animals, neurophysiological studies have demonstrated an interaction between the frontal cortex and the basal ganglia during the learning of movement sequences. [27] Human neuroimaging studies have also emphasized the contribution of the basal ganglia for well-learned sequences. [28]

The cerebellum is arguably important for sequence learning and for the integration of individual movements into unified sequences, [28] [29] [30] [31] [32] while the pre-SMA and SMA have been shown to be involved in organizing or chunking of more complex movement sequences. [33] [34] Chunking, defined as the re-organization or re-grouping of movement sequences into smaller sub-sequences during performance, is thought to facilitate the smooth performance of complex movements and to improve motor memory. [20] Lastly, the premotor cortex has been shown to be involved in tasks that require the production of relatively complex sequences, and it may contribute to motor prediction. [35] [36]

Spatial organization

Few studies of complex motor control have distinguished between sequential and spatial organization, yet expert musical performances demand not only precise sequencing but also spatial organization of movements. Studies in animals and humans have established the involvement of parietal, sensory–motor and premotor cortices in the control of movements, when the integration of spatial, sensory and motor information is required. [37] [38] Few studies so far have explicitly examined the role of spatial processing in the context of musical tasks.

Auditory-motor interactions

Feedforward and feedback interactions

An auditory–motor interaction may be loosely defined as any engagement of or communication between the two systems. Two classes of auditory-motor interaction are "feedforward" and "feedback". [20] In feedforward interactions, it is the auditory system that predominately influences the motor output, often in a predictive way. [39] An example is the phenomenon of tapping to the beat, where the listener anticipates the rhythmic accents in a piece of music. Another example is the effect of music on movement disorders: rhythmic auditory stimuli have been shown to improve walking ability in Parkinson's disease and stroke patients. [40] [41]

Feedback interactions are particularly relevant in playing an instrument such as a violin, or in singing, where pitch is variable and must be continuously controlled. If auditory feedback is blocked, musicians can still execute well-rehearsed pieces, but expressive aspects of performance are affected. [42] When auditory feedback is experimentally manipulated by delays or distortions, [43] motor performance is significantly altered: asynchronous feedback disrupts the timing of events, whereas alteration of pitch information disrupts the selection of appropriate actions, but not their timing. This suggests that disruptions occur because both actions and percepts depend on a single underlying mental representation. [20]

Models of auditory–motor interactions

Several models of auditory–motor interactions have been advanced. The model of Hickok and Poeppel, [44] which is specific for speech processing, proposes that a ventral auditory stream maps sounds onto meaning, whereas a dorsal stream maps sounds onto articulatory representations. They and others [45] suggest that posterior auditory regions at the parieto-temporal boundary are crucial parts of the auditory–motor interface, mapping auditory representations onto motor representations of speech, and onto melodies. [46]

Mirror/echo neurons and auditory–motor interactions

The mirror neuron system has an important role in neural models of sensory–motor integration. There is considerable evidence that neurons respond to both actions and the accumulated observation of actions. A system proposed to explain this understanding of actions is that visual representations of actions are mapped onto our own motor system. [47]

Some mirror neurons are activated both by the observation of goal-directed actions, and by the associated sounds produced during the action. This suggests that the auditory modality can access the motor system. [48] [49] While these auditory–motor interactions have mainly been studied for speech processes, and have focused on Broca's area and the vPMC, as of 2011, experiments have begun to shed light on how these interactions are needed for musical performance. Results point to a broader involvement of the dPMC and other motor areas. [20]

Certain aspects of language and melody have been shown to be processed in nearly identical functional brain areas. Brown, Martinez and Parsons (2006) examined the neurological structural similarities between music and language. [50] Utilizing positron emission tomography (PET), the findings showed that both linguistic and melodic phrases produced activation in almost identical functional brain areas. These areas included the primary motor cortex, supplementary motor area, Broca's area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus and posterior cerebellum. Differences were found in lateralization tendencies, as language tasks favoured the left hemisphere, but the majority of activations were bilateral, producing significant overlap across modalities. [50]

Syntactical information mechanisms in both music and language have been shown to be processed similarly in the brain. Jentschke, Koelsch, Sallat and Friederici (2008) conducted a study investigating the processing of music in children with specific language impairments (SLI). [51] Children with typical language development (TLD) showed ERP patterns different from those of children with SLI, which reflected their challenges in processing music-syntactic regularities. Strong correlations between the ERAN (Early Right Anterior Negativity—a specific ERP measure) amplitude and linguistic and musical abilities provide additional evidence for the relationship of syntactical processing in music and language. [51]

However, production of melody and production of speech may be subserved by different neural networks. Stewart, Walsh, Frith and Rothwell (2001) studied the differences between speech production and song production using transcranial magnetic stimulation (TMS). [52] Stewart et al. found that TMS applied to the left frontal lobe disturbs speech but not melody, supporting the idea that they are subserved by different areas of the brain. The authors suggest that a reason for the difference is that speech generation can be localized well but the underlying mechanisms of melodic production cannot. Alternatively, it was also suggested that speech production may be less robust than melodic production and thus more susceptible to interference. [52]

Language processing is a function more of the left side of the brain than the right side, particularly Broca's area and Wernicke's area, though the roles played by the two sides of the brain in processing different aspects of language are still unclear. Music is also processed by both the left and the right sides of the brain. [50] [53] Recent evidence further suggests shared processing between language and music at the conceptual level. [54] It has also been found that, among music conservatory students, the prevalence of absolute pitch is much higher for speakers of tone language, even controlling for ethnic background, showing that language influences how musical tones are perceived. [55] [56]

Differences

Brain structure differs distinctly between musicians and non-musicians. Gaser and Schlaug (2003) compared the brain structures of professional musicians with those of non-musicians and discovered gray matter volume differences in motor, auditory and visual-spatial brain regions. [57] Specifically, positive correlations were discovered between musician status (professional, amateur and non-musician) and gray matter volume in the primary motor and somatosensory areas, premotor areas, anterior superior parietal areas and in the inferior temporal gyrus bilaterally. This strong association between musician status and gray matter differences supports the notion that musicians' brains show use-dependent structural changes. [58] Given the distinct differences in several brain regions, it is unlikely that these differences are innate; rather, they likely reflect the long-term acquisition and repetitive rehearsal of musical skills.

Brains of musicians also show functional differences from those of non-musicians. Krings, Topper, Foltys, Erberich, Sparing, Willmes and Thron (2000) utilized fMRI to study brain area involvement of professional pianists and a control group while performing complex finger movements. [59] Krings et al. found that the professional piano players showed lower levels of cortical activation in motor areas of the brain. It was concluded that fewer neurons needed to be activated in the piano players because of long-term motor practice, which results in different cortical activation patterns. Koeneke, Lutz, Wustenberg and Jancke (2004) reported similar findings in keyboard players. [60] Skilled keyboard players and a control group performed complex tasks involving unimanual and bimanual finger movements. During task conditions, strong hemodynamic responses in the cerebellum were shown by both non-musicians and keyboard players, but non-musicians showed the stronger response. This finding indicates that different cortical activation patterns emerge from long-term motor practice. This evidence supports previous data showing that musicians require fewer neurons to perform the same movements.

Musicians have been shown to have a significantly more developed left planum temporale, and have also been shown to have greater word memory. [61] Chan's study controlled for age, grade point average and years of education and found that when given a 16-word memory test, the musicians averaged one to two more words than their non-musical counterparts.

Similarities

Studies have shown that the human brain has an implicit musical ability. [62] [63] Koelsch, Gunter, Friederici and Schoger (2000) investigated the influence of preceding musical context, task relevance of unexpected chords and the degree of probability of violation on music processing in both musicians and non-musicians. [62] Findings showed that the human brain unintentionally extrapolates expectations about impending auditory input. Even in non-musicians, the extrapolated expectations are consistent with music theory. The ability to process information musically supports the idea of an implicit musical ability in the human brain. In a follow-up study, Koelsch, Schroger, and Gunter (2002) investigated whether ERAN and N5 could be evoked preattentively in non-musicians. [63] Findings showed that both ERAN and N5 can be elicited even in a situation where the musical stimulus is ignored by the listener indicating that there is a highly differentiated preattentive musicality in the human brain.

Minor neurological differences regarding hemispheric processing exist between brains of males and females. Koelsch, Maess, Grossmann and Friederici (2003) investigated music processing through EEG and ERPs and discovered gender differences. [64] Findings showed that females process music information bilaterally and males process music with a right-hemispheric predominance. However, the early negativity of males was also present over the left hemisphere. This indicates that males do not exclusively utilize the right hemisphere for musical information processing. In a follow-up study, Koelsch, Grossman, Gunter, Hahne, Schroger and Friederici (2003) found that boys show lateralization of the early anterior negativity in the left hemisphere but found a bilateral effect in girls. [65] This indicates a developmental effect as early negativity is lateralized in the right hemisphere in men and in the left hemisphere in boys.

It has been found that subjects who are left-handed, particularly those who are also ambidextrous, perform better than right-handers on short-term memory for pitch. [66] [67] It was hypothesized that this handedness advantage is due to the fact that left-handers have more duplication of storage in the two hemispheres than do right-handers. Other work has shown that there are pronounced differences between right-handers and left-handers (on a statistical basis) in how musical patterns are perceived when sounds come from different regions of space. This has been found, for example, in the octave illusion [68] [69] and the scale illusion. [70] [71]

Musical imagery refers to the experience of replaying music by imagining it inside the head. [72] Musicians show a superior ability for musical imagery due to intense musical training. [73] Herholz, Lappe, Knief and Pantev (2008) investigated the differences in neural processing of a musical imagery task in musicians and non-musicians. Utilizing magnetoencephalography (MEG), Herholz et al. examined differences in the processing of a musical imagery task with familiar melodies in musicians and non-musicians. Specifically, the study examined whether the mismatch negativity (MMN) can be based solely on imagery of sounds. The task involved participants listening to the beginning of a melody, continuing the melody in their heads, and finally hearing a correct/incorrect tone as a further continuation of the melody. The imagery of these melodies was strong enough to obtain an early preattentive brain response to unanticipated violations of the imagined melodies in the musicians. These results indicate that similar neural correlates are relied upon for trained musicians' imagery and perception. Additionally, the findings suggest that modification of the imagery mismatch negativity (iMMN) through intense musical training results in a superior ability for imagery and preattentive processing of music.

Perceptual musical processes and musical imagery may share a neural substrate in the brain. A PET study conducted by Zatorre, Halpern, Perry, Meyer and Evans (1996) investigated cerebral blood flow (CBF) changes related to auditory imagery and perceptual tasks. [74] These tasks examined the involvement of particular anatomical regions as well as functional commonalities between perceptual processes and imagery. Similar patterns of CBF changes provided evidence supporting the notion that imagery processes share a substantial neural substrate with related perceptual processes. Bilateral neural activity in the secondary auditory cortex was associated with both perceiving and imagining songs. This implies that within the secondary auditory cortex, processes underlie the phenomenological impression of imagined sounds. The supplementary motor area (SMA) was active in both imagery and perceptual tasks suggesting covert vocalization as an element of musical imagery. CBF increases in the inferior frontal polar cortex and right thalamus suggest that these regions may be related to retrieval and/or generation of auditory information from memory.

Music is able to create an incredibly pleasurable experience that can be described as "chills". [75] Blood and Zatorre (2001) used PET to measure changes in cerebral blood flow while participants listened to music that they knew gave them the "chills" or any sort of intensely pleasant emotional response. They found that as these chills increase, many changes in cerebral blood flow are seen in brain regions such as the amygdala, orbitofrontal cortex, ventral striatum, midbrain, and the ventral medial prefrontal cortex. Many of these areas appear to be linked to reward, motivation, emotion, and arousal, and are also activated in other pleasurable situations. [75] The resulting pleasure responses enable the release of dopamine, serotonin, and oxytocin. The nucleus accumbens (a part of the striatum) is involved in both music-related emotions and rhythmic timing.

According to the National Institutes of Health, children and adults who are suffering from emotional trauma have been able to benefit from the use of music in a variety of ways. [76] Music has been essential in helping children who struggle with focus, anxiety, and cognitive function when it is used in a therapeutic way. Music therapy has also helped children cope with autism, pediatric cancer, and pain from treatments.

Emotions induced by music activate similar frontal brain regions compared to emotions elicited by other stimuli. [58] Schmidt and Trainor (2001) discovered that valence (i.e. positive vs. negative) of musical segments was distinguished by patterns of frontal EEG activity. [77] Joyful and happy musical segments were associated with increases in left frontal EEG activity whereas fearful and sad musical segments were associated with increases in right frontal EEG activity. Additionally, the intensity of emotions was differentiated by the pattern of overall frontal EEG activity. Overall frontal region activity increased as affective musical stimuli became more intense. [77]
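
A common way such frontal EEG patterns are quantified is an asymmetry index of the form ln(right alpha power) - ln(left alpha power); the sketch below illustrates only the arithmetic, with invented power values, and is not Schmidt and Trainor's specific pipeline.

```python
# Frontal asymmetry index: ln(right alpha) - ln(left alpha). Because alpha power is
# inversely related to cortical activity, a positive index suggests relatively
# greater left-frontal activation. Power values below are invented for illustration.
import math

def frontal_asymmetry_index(left_alpha: float, right_alpha: float) -> float:
    """Positive values indicate relatively greater left-frontal activation."""
    return math.log(right_alpha) - math.log(left_alpha)

# hypothetical alpha-band power (arbitrary units) at a left/right frontal electrode pair
print(f"happy excerpt: {frontal_asymmetry_index(left_alpha=3.1, right_alpha=4.0):+.2f}")
print(f"sad excerpt:   {frontal_asymmetry_index(left_alpha=4.2, right_alpha=3.0):+.2f}")
```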

When unpleasant melodies are played, the posterior cingulate cortex activates, which indicates a sense of conflict or emotional pain. [16] The right hemisphere has also been found to be correlated with emotion, which can also activate areas in the cingulate in times of emotional pain, specifically social rejection (Eisenberger). This evidence, along with observations, has led many musical theorists, philosophers and neuroscientists to link emotion with tonality. This seems almost obvious because the tones in music seem like a characterization of the tones in human speech, which indicate emotional content. The vowels in the phonemes of a song are elongated for a dramatic effect, and it seems as though musical tones are simply exaggerations of the normal verbal tonality.

Neuropsychology of musical memory

Musical memory involves both explicit and implicit memory systems. [78] Explicit musical memory is further differentiated between episodic (where, when and what of the musical experience) and semantic (memory for music knowledge including facts and emotional concepts). Implicit memory centers on the 'how' of music and involves automatic processes such as procedural memory and motor skill learning – in other words skills critical for playing an instrument. Samson and Baird (2009) found that the ability of musicians with Alzheimer's Disease to play an instrument (implicit procedural memory) may be preserved.

Neural correlates of musical memory

A PET study looking into the neural correlates of musical semantic and episodic memory found distinct activation patterns. [79] Semantic musical memory involves the sense of familiarity of songs. The semantic memory for music condition resulted in bilateral activation in the medial and orbital frontal cortex, as well as activation in the left angular gyrus and the left anterior region of the middle temporal gyri. These patterns support the functional asymmetry favouring the left hemisphere for semantic memory. Left anterior temporal and inferior frontal regions that were activated in the musical semantic memory task produced activation peaks specifically during the presentation of musical material, suggesting that these regions are somewhat functionally specialized for musical semantic representations.

Episodic memory of musical information involves the ability to recall the former context associated with a musical excerpt. [79] In the condition invoking episodic memory for music, activations were found bilaterally in the middle and superior frontal gyri and precuneus, with activation predominant in the right hemisphere. Other studies have found the precuneus to become activated in successful episodic recall. [80] As it was activated in the familiar memory condition of episodic memory, this activation may be explained by the successful recall of the melody.

When it comes to memory for pitch, a dynamic and distributed brain network appears to subserve pitch memory processes. Gaab, Gaser, Zaehle, Jancke and Schlaug (2003) examined the functional anatomy of pitch memory using functional magnetic resonance imaging (fMRI). [81] An analysis of performance scores in a pitch memory task revealed a significant correlation between good task performance and activity in the supramarginal gyrus (SMG) as well as the dorsolateral cerebellum. Findings indicate that the dorsolateral cerebellum may act as a pitch discrimination processor and the SMG may act as a short-term pitch information storage site. The left hemisphere was found to be more prominent in the pitch memory task than right hemispheric regions.

Therapeutic effects of music on memory

Musical training has been shown to aid memory. Altenmuller et al. studied the difference between active and passive musical instruction and found that, over a longer (but not short) period of time, the actively taught students retained much more information than the passively taught students. The actively taught students were also found to have greater cerebral cortex activation. The passively taught students weren't wasting their time; they, along with the active group, displayed greater left hemisphere activity, which is typical in trained musicians. [82]

Research suggests we listen to the same songs repeatedly because of musical nostalgia. One major study, published in the journal Memory & Cognition, found that music enables the mind to evoke memories of the past. [83]

Treder et al. [84] identified neural correlates of attention when listening to simplified polyphonic music patterns. In a musical oddball experiment, they had participants shift selective attention to one out of three different instruments in music audio clips, with each instrument occasionally playing one or several notes deviating from an otherwise repetitive pattern. Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument could be classified offline with high accuracy. This indicates that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for building more ergonomic music-listening based brain-computer interfaces. [84]
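
The offline classification idea can be sketched schematically as follows; the data are synthetic and the classifier generic, so this is an illustration of the general approach rather than Treder et al.'s actual pipeline.

```python
# Schematic sketch: separate "attended" from "unattended" epochs with a linear classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_epochs, n_features = 200, 32            # e.g. mean amplitudes per channel/time window

# Synthetic epochs: attended deviants carry a small added P300-like component.
X = rng.normal(size=(n_epochs, n_features))
y = rng.integers(0, 2, size=n_epochs)     # 1 = attended instrument, 0 = unattended
X[y == 1, :8] += 0.8                      # the discriminative component

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```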

Musical four-year-olds have been found to have greater left hemisphere intrahemispheric coherence. [82] Musicians have been found to have more developed anterior portions of the corpus callosum in a study by Cowell et al. in 1992. This was confirmed by a study by Schlaug et al. in 1995 that found that classical musicians between the ages of 21 and 36 have significantly greater anterior corpora callosa than non-musical controls. Schlaug also found a strong correlation between musical exposure before the age of seven and a great increase in the size of the corpus callosum. [82] These fibers join the left and right hemispheres and indicate increased relaying between both sides of the brain. This suggests a merging of the spatial-emotiono-tonal processing of the right brain and the linguistic processing of the left brain. This large relaying across many different areas of the brain might contribute to music's ability to aid memory function.

Focal hand dystonia

Focal hand dystonia is a task-related movement disorder associated with occupational activities that require repetitive hand movements. [85] Focal hand dystonia is associated with abnormal processing in the premotor and primary sensorimotor cortices. An fMRI study examined five guitarists with focal hand dystonia. [86] The study reproduced task-specific hand dystonia by having guitarists use a real guitar neck inside the scanner as well as performing a guitar exercise to trigger abnormal hand movement. The dystonic guitarists showed significantly more activation of the contralateral primary sensorimotor cortex as well as a bilateral underactivation of premotor areas. This activation pattern represents abnormal recruitment of the cortical areas involved in motor control. Even in professional musicians, widespread bilateral cortical region involvement is necessary to produce complex hand movements such as scales and arpeggios. The abnormal shift from premotor to primary sensorimotor activation directly correlates with guitar-induced hand dystonia.

Music agnosia

Music agnosia, an auditory agnosia, is a syndrome of selective impairment in music recognition. [87] Three cases of music agnosia were examined by Dalla Bella and Peretz (1999): C.N., G.L., and I.R. All three of these patients suffered bilateral damage to the auditory cortex which resulted in musical difficulties while speech understanding remained intact. Their impairment is specific to the recognition of once-familiar melodies. They are spared in recognizing environmental sounds and in recognizing lyrics. Peretz (1996) has studied C.N.'s music agnosia further and reports an initial impairment of pitch processing and spared temporal processing. [88] C.N. later recovered in pitch processing abilities but remained impaired in tune recognition and familiarity judgments.

Musical agnosias may be categorized based on the process which is impaired in the individual. [89] Apperceptive music agnosia involves an impairment at the level of perceptual analysis, that is, an inability to encode musical information correctly. Associative music agnosia reflects an impaired representational system which disrupts music recognition. Many of the cases of music agnosia have resulted from surgery involving the middle cerebral artery. Patient studies have amassed a large amount of evidence demonstrating that the left side of the brain is more suitable for holding long-term memory representations of music and that the right side is important for controlling access to these representations. Associative music agnosias tend to be produced by damage to the left hemisphere, while apperceptive music agnosia reflects damage to the right hemisphere.

Congenital amusia

Congenital amusia, otherwise known as tone deafness, is a term for lifelong musical problems which are not attributable to mental retardation, lack of exposure to music or deafness, or brain damage after birth. [90] Amusic brains have been found in fMRI studies to have less white matter and thicker cortex than controls in the right inferior frontal cortex. These differences suggest abnormal neuronal development in the auditory cortex and inferior frontal gyrus, two areas which are important in musical-pitch processing.

Studies on those with amusia suggest different processes are involved in speech tonality and musical tonality. Congenital amusics lack the ability to distinguish between pitches and so are, for example, unmoved by dissonance or by the playing of a wrong key on a piano. They also cannot be taught to remember a melody or to recite a song; however, they are still capable of hearing the intonation of speech, for example, distinguishing between "You speak French" and "You speak French?" when spoken.


A musician, composer, and neuroscientist, Mark Tramo studies how the brain perceives music and responds to it emotionally. The dark stripe on the model brain he holds marks an area particularly sensitive to rhythm, melody, and harmony. (Staff photo by Justin Ide)

Babies come into the world with musical preferences. They begin to respond to music while still in the womb. At the age of 4 months, dissonant notes at the end of a melody will cause them to squirm and turn away. If they like a tune, they may coo.

Scientists cite such responses as evidence that certain rules for music are wired into the brain, and musicians violate them at the risk of making their audiences squirm. Even the Smashing Pumpkins, a hard-rock group, play by some of the same rules of harmony that Johann Sebastian Bach did in the 18th century.

“Music is in our genes,” says Mark Jude Tramo, a musician, prolific songwriter, and neuroscientist at the Harvard Medical School. “Many researchers like myself are trying to understand melody, harmony, rhythm, and the feelings they produce, at the level of individual brain cells. At this level, there may be a universal set of rules that governs how a limited number of sounds can be combined in an infinite number of ways.”

“All humans come into the world with an innate capability for music,” agrees Kay Shelemay, professor of music at Harvard. “At a very early age, this capability is shaped by the music system of the culture in which a child is raised. That culture affects the construction of instruments, the way people sound when they sing, and even the way they hear sound. By combining research on what goes on in the brain with a cultural understanding of music, I expect we’ll learn a lot more than we would by either approach alone.”

Besides increasing basic understanding, Tramo believes that studying the biology of music can lead to practical applications related to learning, deafness, and personal improvement. For example, there’s evidence that music can help lower blood pressure and ease pain.

Looking for a music center

A human brain is divided into two hemispheres, and the right hemisphere has been traditionally identified as the seat of music appreciation. However, no one has found a “music center” there, or anywhere else. Studies of musical understanding in people who have damage to either hemisphere, as well as brain scans of people taken while listening to tunes, reveal that music perception emerges from the interplay of activity in both sides of the brain.

Some brain circuits respond specifically to music but, as you would expect, parts of these circuits participate in other forms of sound processing. For example, the region of the brain dedicated to perfect pitch is also involved in speech perception.

Music and other sounds entering the ears go to the auditory cortex, assemblages of cells just above both ears. The right side of the cortex is crucial for perceiving pitch as well as certain aspects of melody, harmony, timbre, and rhythm. (All the people tested were right-handed, so brain preferences may differ in lefties.)

The left side of the brain in most people excels at processing rapid changes in frequency and intensity, both in music and words. Such rapid changes occur when someone plucks a violin string versus running a bow across it.

Both left and right sides are necessary for complete perception of rhythm. For example, both hemispheres need to be working to tell the difference between three-quarter and four-quarter time.

The front part of your brain (frontal cortex), where working memories are stored, also plays a role in rhythm and melody perception.

“It’s not clear what, if any, part these hearing centers play in ‘feeling’ music,” Tramo notes. “Other areas of the brain deal with emotion and pleasure. There is a great deal of effort going on to map connections between the auditory cortex and parts of the brain that participate in emotion.”

Researchers have found activity in brain regions that control movement even when people just listen to music without moving any parts of their bodies. “If you’re just thinking about tapping out a rhythm, parts of the motor system in your brain light up,” Tramo notes.

“Music is as inherently motor as it is auditory,” he continues. “Many of us ‘conduct’ while listening to classical music, hum along with show tunes, or dance to popular music. Add the contributions of facial expressions, stage lights, and emotions, and you appreciate the complexity of what our brain puts together while we listen and interact with music in a concert hall or mosh pit.”

Practical applications

Understanding the biology of music could allow people to use it better in medical and other areas where evidence indicates music produces benefits beyond entertainment.

Following heart bypass surgery, patients often experience erratic changes in blood pressure. Such changes are treated with drugs. Studies show that those in intensive care units where background music is played need lower doses of these drugs compared with patients in units where no music is played.

Scientists and medical doctors are investigating the value of music-like games to aid dyslexics. When dyslexics play a game that calls for responses to tones that come in very fast succession, it reportedly helps them read better. “The approach is controversial,” Tramo admits, “but there’s enough favorable evidence for researchers to continue testing it.”

Some hospitals play soft background music in intensive care units for premature babies. Researchers have found that such music, as well as a nurse’s or mother’s humming, helps these babies gain weight faster and leave the unit earlier than preemies who don’t hear these sounds.

On the other end of the age scale, music has been used to calm Alzheimer’s patients. At mealtime in nursing homes or hospitals these people may be difficult to organize. Fights even occur. The right kind of music, it has been demonstrated, reduces confusion and disagreements.

Investigators have also found that music lowers blood pressure in certain situations, and it seems to increase the efficiency of oxygen consumption by the heart. “One study showed that the heart muscle of people exercising on treadmills didn’t work as hard when people listened to music as it did when they exercised in silence,” Tramo notes.

Then there are endless anecdotes about athletes using music to enhance their performance. Pitcher Trevor Hoffman of the San Diego Padres, for example, listens to AC/DC to get psyched up in a game. Tramo ran to “Brown Sugar” by the Rolling Stones when he won a gold medal in the 100-yard dash in high school. To determine how much difference music makes, however, the performance of an athlete who listens to music would have to be compared with that in games when he or she didn’t listen.

Tramo believes that music and dancing preceded language. Archaeologists have discovered flutes made from animal bones by Neanderthals living in Eastern Europe more than 50,000 years ago. No human culture is known that does not have music.

“Despite this, large gaps exist in our knowledge about the underlying biology,” Tramo points out. We don’t know how the brain decides whether music is consonant or dissonant. We don’t know whether practicing music helps people master other skills, such as math or reading diagrams, although the evidence that merely listening to Mozart in the womb improves IQ scores is weak or nonexistent.

Tramo made a choice between composing music and studying its biology at the end of medical school. When he and his roommate at Yale recorded a demonstration album called “Men With Tales,” both RCA and Columbia Records said they wanted to hear more. But Tramo decided to stay with medicine. He didn’t quit music though. Recently, he and his band recorded a song, “Living in Fantasy,” which ranks in the top 40 of MP3 (accessible by computer) recordings made in Boston.

“I’m working on the neurobiology of harmony,” Tramo says, “but I find time to compose and play music. Bringing the two together is like bringing together work and play.”



The primary way that listening to music affects us is by changing our stress response. For example, in one study, participants were randomly assigned to either listen to music or take anti-anxiety drugs. The patients who listened to music had less anxiety and lower cortisol than people who took drugs. Music is arguably less expensive than drugs, is easier on the body, and doesn't have side effects (Finn & Fancourt, 2018).

Eerola T, Vuoskoski JK, Peltola HR, Putkinen V, Schäfer K. (2018). An integrative review of the enjoyment of sadness associated with music. Phys Life Rev, 25: 100-121.

Finn S, Fancourt D. (2018). The biological impact of listening to music in clinical and nonclinical settings: a systematic review. Prog Brain Res, 237: 173-200.

Huron D, Margulis EH. (2011). Music expectancy and thrills. In: Juslin PN, Sloboda JA (eds), Handbook of Music and Emotion: Theory, Research, Applications. New York: Oxford University Press, 575-604.

Juslin PN. (2013). What does music express? Basic emotions and beyond. Front Psychol, 4: 596.

Kawakami A, Furukawa K, Katahira K, Okanoya K. (2013). Sad music induces pleasant emotion. Front Psychol, 4: 311.

Sachs ME, Damasio A, Habibi A. (2015). The pleasures of sad music: a systematic review. Front Hum Neurosci, 9: 404.


Here’s why human screams make your skin crawl

The human scream triggers a range of emotions. It’s one of the few primal responses we share with other animals. Few sounds rank as powerful as the first cry of a newborn. But the shrieks of that same infant will one day rattle the nerves of fellow airplane travelers.

A new study shines a light on how our brains and bodies respond to this sound that grips and consumes us. Neuroscientist Luc Arnal of the University of Geneva and colleagues show that screams possess a unique sound property that exists outside the boundaries of human speech. Regardless of loudness or the words used, this acoustic feature shocks our core fear centers. The study was published Thursday in the journal Current Biology.

All sound comes from the vibration of objects, whether those objects are drums or your vocal cords. The rate of vibration, known as frequency, determines the pitch of the sound. When you hear a high-pitched squeal, your ears and brain are actually perceiving a sound with a high vibration rate.

Though two human voices can sound exceedingly different — think Gilbert Gottfried versus James Earl Jones — humans (and animals) use a limited set of sound frequencies when communicating. When biologists like Arnal measure these sound patterns — using a model for organizing the volume and frequency called a “modulation power spectrum” — they find that our speech isn’t erratic. Instead, it features a uniform melody of frequencies and intensities, which both people and animals use over and over when communicating — typically it’s “low sounds with fine harmonies.” In fact, all natural sounds fall within this universal range of noises.

Figure from the study: prior work shows that voices always use the same patterns of sound, with gender-related tones and intensities falling in one region and the meaning of words in another; screams produce an acoustic property called roughness that falls outside the bounds of normal speech, whether a sentence is screamed or spoken normally. (Courtesy of Arnal et al., 2015, Current Biology.)

But when Arnal examined the sound spectra of sentences spoken or screamed by 19 adults, he noticed something unusual. Unlike speech, screams cycle through a wide range of sound in a very short time frame. The result is an acoustic phenomenon akin to an uncomfortable rattle, known as "roughness", which falls outside the bounds of normal speech (the study's "zona incognita").
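
To get a feel for what "roughness" means acoustically, here is a minimal, purely illustrative sketch (not code from the study): it amplitude-modulates a plain 440 Hz tone at a slow, speech-like rate and at a fast rate of the kind the researchers associate with screams. The specific modulation rates (4 Hz and 70 Hz) and the use of Python with NumPy and SciPy are my own assumptions for illustration.

    # Illustrative sketch only: "roughness" as rapid loudness fluctuation.
    # The 4 Hz and 70 Hz modulation rates are assumptions chosen to contrast
    # slow, speech-like modulation with the fast modulation described for screams.
    import numpy as np
    from scipy.io import wavfile  # only needed to write audible output

    SR = 44100                                   # sample rate (Hz)
    t = np.linspace(0, 1.0, SR, endpoint=False)  # one second of samples
    carrier = np.sin(2 * np.pi * 440 * t)        # plain 440 Hz tone

    def am_tone(mod_rate_hz, depth=0.9):
        """Amplitude-modulate the carrier at the given rate (Hz)."""
        envelope = 1.0 - depth * 0.5 * (1.0 + np.sin(2 * np.pi * mod_rate_hz * t))
        return carrier * envelope

    smooth = am_tone(4)    # slow modulation, roughly the rhythm of syllables in speech
    rough = am_tone(70)    # fast modulation: the tone takes on a rattling, "rough" quality

    for name, signal in [("smooth.wav", smooth), ("rough.wav", rough)]:
        wavfile.write(name, SR, (signal * 32767).astype(np.int16))

Played back to back, the second file has the grating, alarm-like character described here, even though both files contain nothing but the same 440 Hz tone.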

“Roughness is well known, but it has never been considered to be important for communication,” Arnal said. “Our work is the first to show that roughness is useful to convey information, specifically about danger in the environment.”

Arnal’s team asked 20 subjects to rate screams on a scale from neutral (1 point) to fearful (5 points), and found that the scariest screams almost always corresponded with high roughness: the roughest sounds made the scariest screams.

The team then studied how the human brain responds to roughness using fMRI brain scanners. As expected, after hearing a scream, activity increased in the brain’s auditory centers — where sound coming into the ears is processed. But the scans also lit up in the amygdala, the brain’s fear center.

The amygdala gauges whether a threat is real and regulates our emotional and physiological response to danger: we become alarmed, adrenaline rises, and vision sharpens. This study found that screams have a similar influence on the body.

“We found that roughness improves behavior in various ways,” said Arnal, such as by increasing a subject’s reaction time to alarms and refining their perception of sounds.

His team also found that roughness isn’t heard when we speak naturally, regardless of language, but it is rampant in artificial sounds. The most aggravating alarm clocks, car horns and fire alarms possess high degrees of roughness, according to the study.

“It isn’t explicitly stated anywhere that people should use roughness to create alarm signals. Sound engineers have been tapping into roughness by accident, just by trial and error,” said Arnal.

Figure from the study: screams elevate activity not only in the auditory cortex, which processes sound, but also in the amygdala, the brain’s fear center, which may explain why they command our attention. (Courtesy of Arnal et al., 2015, Current Biology.)

The responses that roughness provokes extend beyond the purely negative. Some people enjoy the fear triggered by a bloodcurdling scream in a horror movie, for example. This is because stimulating the amygdala increases not only adrenaline, but also natural painkillers called endorphins that create sensations of pleasure.

The team found that dissonant tones used by musicians — two harmonic tones that clash — exhibit roughness too.

“Dissonance is used a lot in rock music with saturated guitars, and we might add these unpleasant sounds because they move us,” Arnal said.



III. Benefits of Learning Music

Music’s influence on the brain is significant, and includes therapeutic improvements, healing, educational, and cognitive benefits. According to Campbell (2011b), author of the book Healing at the Speed of Sound: How What We Hear Transforms Our Brains and Our Lives, “A child who is moving, dancing and singing learns coordination between their eye, ear and sound early on. And [the experience of participating in music education] helps integrate the social, the emotional and the real context of what we’re learning. There are studies that show children who play music have higher SAT scores, that learning to control rhythm and tempo not only help them get along with others but plants seeds for similar advantages when we get much older.”

Music not only helps increase children’s verbal memory and reduce memory loss during aging, but also helps people heal faster after a stroke, reduces stress and anxiety, improves memory retention, supports transplant recipients, and soothes pain.

Music shows a positive impact on a person’s

  • vision, body awareness, and gross and fine motor skills
  • directionality—moving expressively in response to directions and use of musical instruments
  • acquisition of receptive and expressive language, voice in singing
  • cognitive abilities such as memorization, sequencing, imitation, and classification; making relationships and choices, which affects each child’s ability to create new lyrics, melodies, harmonies, and rhythms and to express perceptions of dynamics, mood, form, and timbre
  • and ability to pay attention.

In a 2006 study, Tallal et al. suggest relationships between musical training, auditory processing, language, and literacy skills. The study shows that music training and musical aptitude improve or correlate positively with:

  • Music Processing (melody, rhythm, meter, timbre, harmony, etc.)
  • General Auditory Processing (pitch discrimination, pitch memory, auditory rapid spectrotemporal processing)
  • Language and Literacy Skills (reading, phonological awareness, pitch processing in speech, prosody perception, verbal memory, verbal fluency)

The study also indicates that after musical training there were improvements in attention, sequencing skills, and the processing of language components such as syllables, along with broader language and literacy skills.

A two- to three-year-long study concluded that children attending a musical play school exhibited significant differences in auditory discrimination and attention compared with children not involved in music. Children exposed to more musical activities showed more mature processing of auditory features and heightened sensitivity to the temporal aspects of sounds, and surprising sounds were less likely to distract their attention (Putkinen et al., 2013).

Study after study records significant findings regarding brain changes in musicians, particularly in instrumental musicians’ motor, auditory, and visual-spatial regions (Gaser & Schlaug, 2003). These same brain changes appear at very early ages in young children who play music: children with only 15 months of musical training showed structural brain changes in early childhood, which correlated with improvements in the relevant motor and auditory skills (Hyde et al., 2009).

  1. What does music have to do with creativity? This TED talk by Charles Limb discusses just that and more.
  2. “How music changes our brains”: An article on how music affects the brain.
  3. An incredible video showing a three-year-old child conducting Beethoven.
  4. An article and video on the psychological effects of music on health and to help the body sleep.

Math Behind Music Theory

Musical notes are simply the names given to particular sound-wave frequencies; higher-pitched notes have higher frequencies. The distance between two notes is called an interval, and much of music’s aesthetic effect comes from these intervals. When note spacings are chosen to be as pleasing as possible, first the 7 notes of the major scale and then the full set of 12 notes emerge. In other words, note frequencies are selected so that the distances between them contain the maximum number of consonant (harmonious) intervals.

The most consonant interval is the octave. A note and its lower or upper octave sound like the "same" note to the ear, only higher or lower. The reason for this concordance is the simplicity of the frequency ratio: the simpler the ratio between two frequencies, the more pleasurable the brain tends to find the two notes played together or melodies built from them (a tendency often linked to dopamine release). For the octave this ratio is 2:1, the simplest of all intervals. If A4 is 440 hertz, then A3, one octave lower, is 220 hertz, and A5, one octave higher, is 880 hertz.
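
As a concrete illustration of this octave arithmetic, here is a small sketch using the standard 12-tone equal-temperament formula (textbook material rather than anything taken from the text above, with A4 = 440 Hz as the conventional reference): a note n semitones above A4 has frequency 440 × 2^(n/12).

    # Standard equal-temperament formula, shown only to make the octave
    # arithmetic concrete; A4 = 440 Hz is the conventional reference pitch.
    A4 = 440.0

    def note_freq(semitones_from_a4: int) -> float:
        """Frequency of the note the given number of semitones above (or below) A4."""
        return A4 * 2 ** (semitones_from_a4 / 12)

    print(note_freq(-12))  # A3: 220.0 Hz (one octave down, ratio 1:2)
    print(note_freq(0))    # A4: 440.0 Hz
    print(note_freq(12))   # A5: 880.0 Hz (one octave up, ratio 2:1)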

The octave interval always returns the same note, so we need other intervals to generate different notes. After the octave, the most consonant intervals are the perfect fifth and the perfect fourth, and most melodies, songs, and classical pieces make heavy use of them. At the other extreme are the dissonant intervals: the minor and major seconds and the minor and major sevenths, spanning one, two, ten, and eleven semitones respectively. These intervals are hard on the ear, but composers use the tension and unease they create to heighten the sense of resolution when consonant intervals follow. Consonant intervals arriving after dissonant ones make music more memorable and more emotionally charged; a piece built only from consonant intervals sounds pleasant, but little more.
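
To make the "simple ratio" idea concrete, the sketch below compares the simple just-intonation ratios conventionally given for the consonant intervals with the more complex ratios behind the dissonant seconds and sevenths, alongside their equal-tempered sizes. The ratios are standard textbook figures, not numbers taken from the text above.

    # Illustrative comparison: consonant intervals have simple frequency ratios,
    # dissonant ones do not. Equal-tempered sizes are computed as 2**(semitones/12).
    just_ratios = {
        "octave (12 semitones)":        (2, 1),
        "perfect fifth (7 semitones)":  (3, 2),
        "perfect fourth (5 semitones)": (4, 3),
        "major seventh (11 semitones)": (15, 8),
        "minor second (1 semitone)":    (16, 15),
    }

    for name, (p, q) in just_ratios.items():
        semitones = int(name.split("(")[1].split()[0])
        equal_tempered = 2 ** (semitones / 12)
        print(f"{name:30s} just ratio {p}/{q} = {p / q:.4f}, "
              f"equal-tempered {equal_tempered:.4f}")

The octave and fifth line up almost exactly with their simple ratios, while the seconds and sevenths only approximate much more complex ones, which is one common way of framing why they sound tense.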


Music and therapy

Research on groove also has potential therapeutic applications. For example, the use of rhythmic music to treat motor symptoms of Parkinson’s disease, such as problems with gait, has shown promising results. Groove research could clarify the connections between music, movement, and pleasure that may be crucial to understanding and improving rhythm-based therapies. It may also help maximise the enjoyability of the music used in such therapy, which could increase patient motivation and enhance the therapeutic experience.

This article is republished from The Conversation under a Creative Commons license.
