The Musical Brain

Music is found in human groups everywhere. Musical information consists of pitch, loudness, timbre, location, and movement of the sound source. A combination of sounds of different pitches produces harmony; a sequence of pitches becomes melody. Timbre describes the harmonics in a sound that give it recognizable qualities. The range of timbres in human voices allows individuals to be identified by sound. You can identify who is talking from voice timbre and intonation, just as you can identify a trumpet, an oboe or a violin.
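The idea that timbre is carried by a sound's harmonics can be illustrated with a small sketch. This is not from the text; the function names and harmonic weights are my own illustrative assumptions. Two tones built on the same 220 Hz fundamental, but with different harmonic amplitudes, share a pitch yet have different waveforms, which is the physical basis of timbre:

```python
import math

def tone_sample(t, fundamental_hz, harmonic_amps):
    """Sum of sinusoidal harmonics: harmonic_amps[k] scales harmonic k+1."""
    return sum(a * math.sin(2 * math.pi * fundamental_hz * (k + 1) * t)
               for k, a in enumerate(harmonic_amps))

SAMPLE_RATE = 8000
times = [n / SAMPLE_RATE for n in range(200)]

# Same 220 Hz pitch, different harmonic recipes -> different timbres.
bright = [tone_sample(t, 220.0, [1.0, 0.8, 0.6, 0.4]) for t in times]  # strong overtones
mellow = [tone_sample(t, 220.0, [1.0, 0.1, 0.0, 0.0]) for t in times]  # mostly fundamental

# Both repeat at the 220 Hz period, but the waveform shapes differ.
different = any(abs(b - m) > 0.5 for b, m in zip(bright, mellow))
print("waveforms differ:", different)
```

The same fundamental frequency means a listener hears the same pitch; the differing overtone mix is what the ear labels as a trumpet-like versus a flute-like quality.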

Formal music is assembled into language-equivalent structures, suggesting phonemes, syntax and semantics. The elements of music began millions of years ago in other animals. We humans are just recent practitioners of the art of sound communication.

The general plan of communication using sounds and written symbols involves a supramodal, movement-modeling capacity that creates and retains schemas of action in the world. Some of these schemas are expressions we refer to as emotions, some as language and some as music. It is not surprising that these three modes often merge in the most dramatic and moving forms of human communication.

The central feature of intelligence is the ability to understand what is really going on out there and to respond to events with successful and adaptive behavior. Praxis is skillful movement and is central to intelligent behavior. If you add mimesis to praxis, you can start building a meaningful model of intelligence.

Music in the original sense is communication, part of group assemblies that featured drumming, vocalization and dance. In an evolved sense, music became attached to rituals, celebrations, theatre and entertainment. Active group participation in creating music and dance has often become passive as audiences collect to sit and listen to professional musicians perform.

Our brains have evolved to detect and evaluate discrete low-volume sounds. Everyone who has spent time in natural environments will know that little sounds are ubiquitous in nature. Loud sounds are unusual and signal danger. A nature person will be able to identify birds, insects, and other animals by their characteristic sounds. Wind sounds inform about weather changes. Some trees can be identified by the sounds of their leaves vibrating in the wind. A sailor can determine wind direction and velocity by moving his head slightly to hear changes in pitch and timbre as the wind blows around his head.

The human brain extracts several kinds of information from the components of sounds: pitch, loudness, timbre, location and direction of movement. Animal communication begins with sounds that declare specific meanings such as the alarm cries of squirrels and monkeys, bird songs that regulate mating and social activity and human grunts, shouts and cries that attract attention, signal danger and express emotion.

The auditory system is organized into spatial and nonspatial processing streams. In the monkey, the posterolateral auditory cortex is more responsive to the spatial features of stimuli than the anterolateral region, which is more selective for vocalizations. Single neurons in these cortical areas respond differentially to features of the auditory input.

Neurons selectively responsive to vocalizations were found in the ventral prefrontal cortex. Neurons responsive to spatial features were found in the dorsal prefrontal cortex. The responsiveness of auditory neurons in both the prefrontal and parietal cortices is dependent on the significance of the stimulus. The superior temporal sulcus in humans exhibits selective activation for voices.

Neurons detect the start and end of sounds through separate channels; onset is detected by neurons close to the sensory receptors. On and off decisions are made in cortical areas responsible for language processing. Leek explained that the distinctions between 'chop' and 'shop,' or between 'stay' and 'say,' are based on short, transient differences at the beginning of the words: "One of the major challenges is to note precise timing information in speech that permits localization of sound in space, separation of sound sources that are occurring simultaneously, and the suppression of redundancy such as echoes in a highly-reverberant space." People have sound recognition problems when the auditory cortex fails to encode audio frequencies and timing cues.
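The timing-cue point above can be made concrete with a toy onset detector. The signal values and threshold here are illustrative assumptions, not from the text: a sharp transient, like the plosive start of 'chop,' crosses an energy threshold earlier than a gradual swell, like the softer start of 'shop.'

```python
def onset_index(samples, threshold=0.2):
    """Return the first index where absolute amplitude crosses the threshold,
    a crude stand-in for the onset channel described above."""
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            return i
    return None

# Two toy "word beginnings": one abrupt transient, one gradual rise.
abrupt  = [0.0, 0.0, 0.9, 0.8, 0.7, 0.5]    # sharp attack, plosive-like
gradual = [0.0, 0.05, 0.1, 0.15, 0.3, 0.6]  # slow swell, softer onset

print(onset_index(abrupt))   # prints 2: crosses early
print(onset_index(gradual))  # prints 4: crosses later
```

A real auditory system encodes these millisecond-scale differences in onset timing; when that encoding fails, as the passage notes, words that differ only in their first transient become hard to tell apart.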

Parietal Lobes

Learned skills that require constant monitoring and adjustment are the basis of musical performances. The parietal lobes have been described as memory and multimodal skill processors. Each parietal lobe sits between the visual brain (occipital lobe) behind, the frontal lobe in front and the temporal lobe below. In the simplest terms, there are two functional regions: 1. the postcentral gyrus is the sensory cortex that receives data from the body and face via the thalamus; 2. the remainder of the parietal lobe is an associative memory device that integrates body sensory input with visual information to create awareness of a body moving in space.

Damage to the parietal lobes can often be recognized as loss of motor skills and abnormalities in body image and spatial relations. Damage on the left side may cause right-left confusion, verbal memory deficits, difficulty with writing (agraphia), difficulty with mathematics (acalculia), other disorders of language (aphasia) and the inability to identify and use objects (agnosia). Right parietal lobe damage causes neglect of part of the body, difficulty making things (constructional apraxia), and denial of deficits (anosognosia). Bilateral parietal lobe disease may be recognized by the inability to control eye movements (ocular apraxia), inability to resolve visual information into recognizable objects (simultanagnosia), and the inability to reach for an object with visual guidance (optic ataxia).

The inferior parietal lobe creates a multimodal map of auditory experiences processed initially in the right temporal lobe, which is dominant in the perception of timbre, chords, tone, pitch, loudness, and melody. When the right temporal lobe is damaged, deficits appear in singing, melody recognition, and the evaluation of loudness and timbre. The capacity to enjoy music may be lost (amusia). Right temporal-parietal damage also impairs comprehension of verbal prosody and emotional speech. Itoh et al. demonstrated that the left parietal lobe plays a significant role in piano performance.

From the Sound of Music by Stephen Gislason
