The Effect of Simple Melodic Lines on Aesthetic Experience: Brain Response to Structural Manipulations

Advances in Neuroscience, Hindawi Publishing Corporation
Volume 2014 (2014), Article ID 482126, 9 pages
http://dx.doi.org/10.1155/2014/482126

Research Article

Stefania Ferri,1 Cristina Meini,2 Giorgio Guiot,3 Daniela Tagliafico,4 Gabriella Gilli,5 and Cinzia Di Dio1,5

1 Department of Neuroscience, Università di Parma, Via Volturno 39/E, 43100 Parma, Italy
2 Department of Humanistic Studies, Università del Piemonte Orientale, Via Manzoni 8, 13100 Vercelli, Italy
3 Associazione Cantabile, Via Campana 2, 10125 Turin, Italy
4 Department of Philosophy, Università degli Studi di Torino, Via Sant'Ottavio 20, 10125 Turin, Italy
5 Department of Psychology, Università Cattolica del Sacro Cuore, 20123 Milan, Italy

Received 20 June 2014; Revised 5 December 2014; Accepted 8 December 2014; Published 30 December 2014

Academic Editor: Notger G. Mueller

Copyright © 2014 Stefania Ferri et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This fMRI study investigates the effect of melody on aesthetic experience in listeners naïve to formal musical knowledge. Using simple melodic lines whose syntactic structure was manipulated, we created systematic acoustic dissonance. Two stimulus categories were created: canonical (syntactically "correct" in the Western tradition) and modified (an altered version of the canonical melodies). The stimuli were presented under two tasks: listening and aesthetic judgment. Data were analyzed as a function of stimulus structure (canonical and modified) and stimulus aesthetics, as appraised by each participant during scanning.
The critical contrast of modified versus canonical stimuli produced enhanced activation of deep temporal regions, including the parahippocampus, suggesting that the melody manipulation induced feelings of unpleasantness in the listeners. This was supported by our behavioral data indicating decreased aesthetic preference for the modified melodies. Medial temporal activation could also have been evoked by the structural novelty of the stimuli, which would increase memory load for the modified stimuli. The analysis of melodies judged as beautiful revealed that aesthetic judgment of simple melodies relied on a fine-structural analysis of the stimuli, subserved by left frontal activation, and, possibly, on meaning attribution by the right superior temporal sulcus for increasingly pleasurable stimuli.

1. Introduction

Music is simultaneously art and science: it allows artists to express their inner world through sounds, which are linked to one another by stringent rules that are strongly influenced by culture. These rules are hallmarks that, on one side, constrain the composer's freedom to choose associations and successions of sounds and, on the other, offer a context within which all elements gain a meaning. Traditionally, there has been a strong tendency to emphasize the dominance of compositional structures in outlining the aesthetic character of a musical piece. In the present study, we investigated this relationship by exploring the aesthetics of melody, that is, the capacity of simple musical structures to evoke an aesthetic experience in listeners naïve to formal musical knowledge.

Music is made of rules that govern the relations between notes and of a dynamic dimension that defines its tempo and rhythm. As far as the succession of sounds is concerned, the founding rules of a musical piece are also referred to as syntactic rules (a denomination that implicitly underlines the similarities between music and language). Music syntax is basically constituted by melody (horizontal syntax) and harmony (vertical syntax). Melody consists of a distribution of notes on scales that are organized into "modes" (e.g., minor and major) by our musical tradition. Harmony, on the other hand, establishes the criteria by which chords are built and associated in time. The syntactic rules of music are not absolute; on the contrary, they vary across musical styles. For example, the rules forming the basis of classical music are different from those characterizing soul music or blues, and dodecaphonic music arises in opposition to the norms of classical music, which are paradigmatically expressed by traditional "Mozartian" music.

Recently, the growing interest of neuroscience in music has dealt with the way our brain processes the temporal and syntactic structure of music. Some evidence suggests that the neural processing of music syntax involves the activation of areas that are also involved in language processing and in motor planning [1–6]. Tillmann and colleagues [6], for example, showed that the processing of a chord unrelated to the musical context modulates the activity of the inferior frontal gyrus (IFG). Similarly, Levitin and Menon [7] found enhanced activation of IFG (BA47) in the contrast between musical pieces and their scrambled versions, showing that this brain area may be involved in the coding of fine stimulus structure.
A more recent study showed that altered musical structures cause perceived dissonance even in newborns, involving activation of the inferior frontal cortex [8].

Another aspect of music that has been investigated concerns the neural correlates of the aesthetic experience evoked by music and, specifically, its emotional dimension. A PET study by Blood and Zatorre [9] showed that the intensity of emotional experience elicited by familiar musical pieces positively correlated with signal change in subcortical structures, including the ventral striatum, and in limbic structures, including the insular cortex, orbitofrontal cortex, and anterior cingulate cortex. In their fMRI study, Koelsch et al. [10] reported bilateral activation of the primary auditory cortex, IFG, and anterior insula while listening to pleasant music compared with unpleasant music (see also [11]). Altogether, these studies emphasize the role of emotional centres during the aesthetic experience of music.

In the present study, we aimed at breaking music down into one of its building structural dimensions, namely, melody, and at clarifying whether aesthetic experience can be evoked by this single component alone in naïve listeners (neither music experts nor players). Differently from the above studies, which used complex music excerpts characterized by a rich harmonic and rhythmic structure as experimental stimuli, in the present study we used simple melodic lines. The effect of melody on aesthetic experience was investigated by systematically manipulating the syntactic structure of the stimuli. In fact, violation of the syntactic rules building a musical system creates acoustic dissonance that, phenomenally, could translate into an unpleasant emotional feeling. For this purpose, two categories of melodies were presented: canonical, that is, syntactically "correct," and modified, that is, made of an altered version of the canonical melodies. In order to evaluate whether the structural alteration of the melodies modulates aesthetic experience, canonical and modified stimuli were presented in two experimental tasks, listening and aesthetic judgment. During the listening task, participants had to merely listen to the presented melodies; during the aesthetic judgment task, participants were required to overtly express a pleasantness evaluation of the same stimuli.

2. Methods

2.1. Participants

Nineteen healthy right-handed Italian native speakers (9 males and 8 females; mean age 24.3 years) participated in the fMRI study. They were undergraduate and graduate students without musical expertise: they did not play any musical instrument, nor were they able to read a piano score. They were unfamiliar with the presented melodies. After receiving an explanation of the experimental procedures, they gave their written informed consent. The study was approved by the Local Ethics Committee of Parma, Italy.

2.2. Stimuli

Simple tonal melodies played on piano were used in this study. The stimuli were presented in a canonical, syntactically "correct," version (CAN) and in a modified, syntactically "incorrect," version (MOD) of the canonical stimuli. In total, 10 stimuli (5 CAN and 5 MOD melodies) were selected on the basis of a preliminary behavioral study, in which a sample of 20 listeners naïve to formal musical knowledge (10 males, mean age = 28.8 yrs; 10 females, mean age = 28.2 yrs), different from the sample undergoing fMRI, was asked to evaluate a set of stimuli composed of 12 CAN and 12 MOD melodies.
For each stimulus, participants were required to rate aesthetic preference and syntactic accuracy. The CAN and MOD versions showing the highest discrepancy in aesthetic ratings were chosen; moreover, the syntactic alterations of the selected MOD melodies had to be clearly perceivable. Four CAN and 4 MOD stimuli were created by extracting pure melodic lines from unfamiliar excerpts written by illustrious classical composers (F. Chopin: Prelude No. 20; Gershwin: Oh, I Can't Sit Down (Porgy and Bess); I Wonder as I Wander (American folk song); N. Morali: Notturno). In some instances, variations to the original excerpts were made to adapt the melodies to a piano composition and to equalize all melodies in terms of duration. The fifth CAN stimulus and its MOD version were created from scratch (G. Guiot: Melodia). During postscanning debriefing, we ascertained that participants were unfamiliar with the presented melodies.

The melodies were created with the electronic music program NUENDO, using a piano timbre and keeping reverberation low to avoid a superimposition of adjoining notes that could create a harmonic dimension. The modified versions of the canonical stimuli were created through ascending alterations of the fifth note of the musical scale, as exemplified in Figure 1. This type of alteration involves raising the fifth degree of the musical scale by one semitone. The fifth degree of a musical scale, named the "dominant," is the most frequent note in a melodic line and represents a keynote determining the stability of the composition. Therefore, this alteration represents the most disruptive intervention that can alter the perception of a melody (see Supplementary Material for the scores of the melodies used in this study, available online at http://dx.doi.org/10.1155/2014/482126).

Figure 1: Example of a melody used in this study (canonical version, upper part; modified version, bottom part). The modified version was created by alteration of the fifth note of the musical scale, raising the fifth degree by one semitone.

Each melody was presented to the participants for 12 s and contained, on average, 5 alterations.
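To make the manipulation concrete, the minimal sketch below illustrates this kind of alteration under the assumption that a melody is encoded as a list of MIDI note numbers. It is not the authors' NUENDO procedure, and the tonic argument and the example fragment are invented for illustration only.

```python
# Illustrative sketch (not the authors' NUENDO workflow): create a "modified"
# melody by raising every occurrence of the fifth scale degree (the dominant,
# 7 semitones above the tonic) by one semitone, as described in Section 2.2.

def modify_melody(midi_notes, tonic=60):
    """Return a copy of the melody with each dominant raised by one semitone.

    midi_notes : list[int]  MIDI pitches of the canonical (CAN) melody
    tonic      : int        MIDI reference pitch for the key (e.g., 60 = C4)
    """
    dominant_pc = (tonic + 7) % 12          # pitch class of the fifth degree
    modified = []
    for note in midi_notes:
        if note % 12 == dominant_pc:        # the note is a dominant, in any octave
            modified.append(note + 1)       # ascending alteration: +1 semitone
        else:
            modified.append(note)
    return modified

# Example: a short C-major fragment; every G (dominant of C) becomes G sharp.
canonical = [60, 62, 64, 67, 65, 64, 67, 60]    # C D E G F E G C
print(modify_melody(canonical, tonic=60))        # [60, 62, 64, 68, 65, 64, 68, 60]
```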
2.3. Procedure

During scanning, participants wore digital visors (VisuaSTIM, 500,000 px × 0.25-square-inch resolution, horizontal eye field of 30°) applied directly to the volunteers' face. The visors displayed the instructions, a fixation cross, and a question mark (see below). The participants were also provided with earphones delivering the musical stimuli and with a response box placed under their right hand.

The stimuli were presented in two experimental tasks: listening (L) and aesthetic judgment (AJ). The tasks were presented in separate fMRI runs; each run/task lasted about 8 minutes. The task order was fixed across participants, with the listening task first and the aesthetic judgment task last. By keeping the listening task first, we aimed at measuring unbiased brain responses to the stimuli. Each melody was presented twice within each task, totaling 10 stimulus presentations for each category (10 CAN and 10 MOD) per task. At the beginning of each run, a 20 s visual instruction informed the volunteers about the upcoming task. Each experimental trial began with the musical stimulus, which lasted 12 s, followed by 6 s of white noise (WN) used as explicit baseline and by a question mark that instructed the participants to respond to the music stimulus using the response box placed inside the scanner. The trials were separated by a jittered intertrial interval (mean ITI duration 3.5 s; range 2.5–4.5 s). During music stimulation and white noise presentation, the volunteers were instructed to fixate on a cross whose position on the visor screen changed randomly across trials. The fixation point was used to reduce eye movements; its changing spatial location across trials was intended to maintain attention and avoid eyestrain.

After the white noise presentation, a question mark instructed the participants to respond to the stimulus. During the listening task (L), the participants were instructed to press one of the 4 buttons of the response box in a random fashion. During the aesthetic judgment task (AJ), they had to express a judgment about each musical stimulus using a 4-point scale; therefore, both tasks (L and AJ) required a motor response from the participants. The scale ranged from "aesthetically pleasant" to "aesthetically unpleasant." For half of the participants, "pleasant" corresponded to 1 and "unpleasant" to 4; more specifically, they had to answer the question "How pleasant do you find it?" (1 = very pleasant; 2 = pleasant; 3 = moderately pleasant; 4 = not pleasant at all). For the other half of the participants, the scale was set in the opposite order ("pleasant" corresponded to 4 and "unpleasant" to 1). Each finger corresponded to one specific response: the thumb, index, middle, and ring fingers produced responses 1, 2, 3, and 4, respectively. The distribution of scores ascribed to each melody is summarized in the response-frequency Table S1 in Supplementary Material.

2.4. fMRI Data Acquisition and Statistical Analysis

Functional images were acquired with a General Electric scanner operating at 3 T using an 8-channel head coil. Blood oxygenation level dependent (BOLD) contrasts were obtained using echo-planar T2*-weighted imaging (EPI). Each of the 185 volumes acquired was composed of 40 transverse slices, which provided coverage of the whole cerebral cortex with the exception of the primary visual cortex and the posterior part of the cerebellum (TR = 2500 ms, TE = 30 ms, flip angle = 85°, FOV = 240 × 240 mm, interslice gap = 0.5 mm, slice thickness = 4 mm, and in-plane resolution = ). Immediately after the functional scanning, a high-resolution T1-weighted anatomical scan (150 slices, TR = 600 ms, TE = 20 ms, slice thickness = 1 mm, and in-plane resolution = 1 × 1 mm) was acquired for each participant.

Image preprocessing and statistical analysis were performed using SPM8 (Wellcome Department of Cognitive Neurology, http://www.fil.ion.ucl.ac.uk/spm/), implemented in Matlab v7.6 (Mathworks, Inc., Sherborn, MA) [12]. The first four image volumes of each run were discarded to allow for stabilization of longitudinal magnetization. For each participant, the volumes were spatially realigned [13] to the first volume of the first session to correct for between-scan motion and unwarped [14]. A mean image from the realigned volumes was created. Acquisition time was then corrected using the middle slice as reference. To allow intersubject analysis, images were normalized to Montreal Neurological Institute (MNI) standard space [15] using the mean of the functional images. All images were smoothed using an isotropic Gaussian kernel (6 mm).
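For readers who want to assemble a comparable preprocessing stream, the sketch below chains the same steps (realignment, slice-timing correction with the middle slice as reference, normalization to MNI space, 6 mm smoothing) through nipype's SPM interfaces. It is not the authors' original SPM8 batch: file names, the slice order, and other unspecified parameters are assumptions made for illustration, and an SPM installation reachable by nipype is assumed.

```python
# Illustrative preprocessing sketch mirroring the steps of Section 2.4.
# File names and the slice order are placeholders, not values from the study.
from nipype.interfaces import spm

func_runs = ['run_listening.nii', 'run_judgment.nii']   # hypothetical 4D EPI files

# 1. Realign all volumes to the first volume of the first session
#    (the authors additionally applied unwarping).
realign = spm.Realign(in_files=func_runs, register_to_mean=False)
realigned = realign.run()

# 2. Slice-timing correction using the middle slice as reference
#    (TR = 2.5 s, 40 slices; an ascending slice order is assumed here).
st = spm.SliceTiming(
    in_files=realigned.outputs.realigned_files,
    num_slices=40,
    time_repetition=2.5,
    time_acquisition=2.5 - 2.5 / 40,
    slice_order=list(range(1, 41)),
    ref_slice=20,
)
timed = st.run()

# 3. Spatial normalization to MNI space, driven by the mean functional image.
norm = spm.Normalize12(
    image_to_align=realigned.outputs.mean_image,
    apply_to_files=timed.outputs.timecorrected_files,
)
normalized = norm.run()

# 4. Smoothing with an isotropic 6 mm Gaussian kernel.
smooth = spm.Smooth(in_files=normalized.outputs.normalized_files, fwhm=[6, 6, 6])
smooth.run()
```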
Two types of fMRI data analyses were performed. The "structure" analysis accounted for the effect evoked by the canonical (CAN) and modified (MOD) melodic structures on the listeners' brain, independently of the participants' explicit aesthetic response to them. The second analysis ("aesthetic" analysis) categorized each excerpt as pleasant or unpleasant according to the behavioral responses measured during the AJ runs, independently of melody modification (CAN, MOD). Statistical inference was based on a random-effect approach [13] comprising two steps: a subject-level analysis (first-level analysis) and an intersubject analysis (group analysis).

With respect to the structure analysis, at the first level the fMRI data were best fitted (least-squares fit) at every voxel using a linear combination of the effects of interest. The effects of interest were modelled as a function of the following: stimulus category (CAN, MOD), the question mark that cued overt responses, and the white noise, considered as explicit baseline, plus six regressors obtained from motion correction during the realignment process. All event types were convolved with the SPM8 standard hemodynamic response function (HRF). By means of linear contrasts, activation associated with WN presentation was subtracted from the activation associated with the two stimulus categories in each task (CAN-WN and MOD-WN in both L and AJ tasks). These contrasts were produced in order to isolate the specific effects of the musical stimuli, partialling out the mere effect of sound. The second step of the statistical analysis comprised one flexible factorial model that included the contrast images created for each subject in the first step (CAN-WN and MOD-WN in both L and AJ tasks). This model considered the pattern of activation specific for each stimulus category in the listening and aesthetic judgment tasks. The following contrasts were tested: first, CAN versus white noise and MOD versus white noise, in order to evaluate the positive effects of music on brain activation; second, CAN versus MOD within each condition, to highlight specific effects of stimulus structure on brain activation.

The aesthetic analysis, carried out on data from the AJ task only, examined the regional modulation of signal change induced by different levels of aesthetic judgment. As described above, judgments were recorded on a scale ranging from 1 to 4 (see Table S1 for details regarding score frequencies). Like the structure analysis, the aesthetic analysis comprised two steps. At the first level of analysis, the fMRI data were best fitted (least-squares fit) at every voxel using a linear combination of the effects of interest. The effects of interest, modelled for each participant, were as follows: the presentation time of the question mark that cued overt responses, the presentation time of the white noise, and the presentation times of the music stimuli (regardless of the type of melodic structure, CAN or MOD), plus six regressors obtained from motion correction during the realignment process. All event types were convolved with the SPM8 standard hemodynamic response function (HRF). At the intersubject level, a one-sample t-test was carried out to define the brain areas modulated by increasing aesthetic rating of the music stimuli regardless of stimulus type. For all these analyses, SPM maps were thresholded at P-corrected = 0.05 at the cluster level (cluster size estimated with a voxel-level threshold of P-uncorrected = 0.001).
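The authors implemented these models in SPM8. The single-subject sketch below shows how the same design logic (CAN, MOD, white-noise, and question-mark regressors convolved with the SPM HRF, white noise subtracted from each category, and a rating-driven parametric regressor for the aesthetic analysis) can be expressed with nilearn. File names, onsets, and ratings are hypothetical placeholders; this is not the original analysis code.

```python
# Simplified single-subject sketch of the two first-level models described
# above, using nilearn instead of the authors' SPM8 pipeline.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

run_img = 'sub01_aj_run_preprocessed.nii.gz'          # preprocessed 4D EPI (placeholder)
motion = pd.read_csv('sub01_motion_params.txt', sep=r'\s+', header=None,
                     names=['tx', 'ty', 'tz', 'rx', 'ry', 'rz'])

# --- Structure analysis: CAN, MOD, white-noise, and question-mark regressors
events = pd.DataFrame({
    'onset':      [20.0, 41.5, 63.0, 84.5, 32.0, 53.5, 38.5, 60.0],
    'duration':   [12.0, 12.0, 12.0, 12.0,  6.0,  6.0,  1.0,  1.0],
    'trial_type': ['CAN', 'MOD', 'CAN', 'MOD', 'WN', 'WN', 'question', 'question'],
})
glm = FirstLevelModel(t_r=2.5, hrf_model='spm', noise_model='ar1')
glm = glm.fit(run_img, events=events, confounds=motion)

# White noise is subtracted from each stimulus category, and the critical
# modified-versus-canonical comparison is then computed.
can_vs_wn  = glm.compute_contrast('CAN - WN')
mod_vs_wn  = glm.compute_contrast('MOD - WN')
mod_vs_can = glm.compute_contrast('MOD - CAN')

# --- Aesthetic analysis: music events parametrically modulated by rating ----
# One unmodulated "music" regressor plus a regressor whose amplitude follows
# the mean-centred aesthetic rating of each trial (an SPM-style parametric
# modulation expressed through nilearn's 'modulation' column).
ratings = [2.0, 4.0, 1.0, 3.0]                        # hypothetical AJ responses
centred = [r - sum(ratings) / len(ratings) for r in ratings]
onsets  = [20.0, 41.5, 63.0, 84.5]
music_events = pd.concat([
    pd.DataFrame({'onset': onsets, 'duration': 12.0,
                  'trial_type': 'music', 'modulation': 1.0}),
    pd.DataFrame({'onset': onsets, 'duration': 12.0,
                  'trial_type': 'music_by_rating', 'modulation': centred}),
], ignore_index=True)

aesthetic_glm = FirstLevelModel(t_r=2.5, hrf_model='spm').fit(
    run_img, events=music_events, confounds=motion)
rating_effect = aesthetic_glm.compute_contrast('music_by_rating')
```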
Because the acquisition plane cut off the posterior portion of the brain, it was not possible to determine whether activation in its proximity constituted independent clusters or belonged to more extended activation. For this reason, the activation found in the occipitotemporal visual regions and in the cerebellum is not discussed.

3. Results

3.1. Response-Based Results

To assess the aesthetic ratings provided by each participant during fMRI scanning as a function of the type of melody (CAN or MOD), a repeated-measures GLM analysis, with two levels of stimulus category (CAN, MOD) and two levels of stimulus repetition (R1, R2), was carried out on the responses recorded during the AJ task. The data file containing the participants' responses to the stimuli is provided in Supplementary Material (see SDataFile.xls). The results showed that canonical stimuli were rated as more pleasant than their modified counterparts (F(1,18) = 8.5; partial η² = .31; δ = .79), whereas there was no effect of stimulus repetition on aesthetic appraisal. These results indicate that the acoustic dissonance created by the atypical musical syntax characterizing the modified stimuli negatively affected aesthetic preference.
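For illustration, a repeated-measures analysis of this kind can be run with statsmodels. The sketch below assumes a long-format table with hypothetical column names, and it assumes the responses of the reversed-scale group have already been recoded to a common direction (e.g., 5 minus the raw response); neither the file nor the column names come from the study.

```python
# Minimal sketch of a 2 (structure: CAN, MOD) x 2 (repetition: R1, R2)
# repeated-measures analysis of the aesthetic ratings, using statsmodels
# rather than the authors' original software.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per subject x structure x repetition x trial,
# e.g. built from the supplementary response file (hypothetical columns).
ratings = pd.read_csv('aj_ratings_long.csv')   # columns: subject, structure, repetition, rating

res = AnovaRM(
    data=ratings,
    depvar='rating',
    subject='subject',
    within=['structure', 'repetition'],
    aggregate_func='mean',      # average repeated trials within each cell
).fit()
print(res)                       # F tests for structure, repetition, and their interaction
```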
3.2. fMRI Results

3.2.1. Structure Analysis

Overall Effect of Melody Listening. The fMRI analysis was carried out by first assessing the overall activation elicited by melody, contrasting, separately, CAN and MOD (minus WN) versus baseline across the listening (L) and aesthetic judgment (AJ) tasks. The contrast CAN (minus WN) versus baseline produced activation in the superior occipital area, in the superior temporal gyrus (STG), and in the dorsal premotor cortex (dPM). Temporal activation included the primary auditory cortex and its neighbouring associative auditory regions, including BA 22, BA 21, and the superior part of BA 38. All activation was bilateral (Figure 2(a), Table 1).

Table 1: Activation reflecting the effect of canonical stimuli (versus white noise).

Figure 2: Activation observed in (a) the contrast CAN versus WN and (b) the contrast MOD versus WN, averaging activation across the two experimental tasks (listening and aesthetic judgment). Group-averaged statistical parametric maps are rendered onto the MNI brain template (P-corr. < 0.05).

As shown in Figure 2(b) (Table 2), the contrast MOD (minus WN) versus baseline revealed activation similar to that observed for the contrast CAN (minus WN) versus baseline.

Table 2: Activation reflecting the effect of modified stimuli (versus white noise).

Canonical versus Modified Melodies. The direct contrast CAN versus MOD was carried out for each task (L and AJ) separately to evaluate whether the structure of the melodies is an element affecting the listeners' aesthetic experience. The results revealed no significant activation evoked by canonical stimuli with respect to the modified ones in either the listening or the aesthetic judgment task. The opposite contrast, MOD versus CAN, assessed the neural effects of unpleasantness due to syntax alteration within each experimental task (L and AJ). During L, the contrast MOD versus CAN showed differential activation in the right dorsal premotor cortex and postcentral gyrus (Table 3(a)). With respect to AJ, differential activation between modified and canonical stimuli was observed in the right middle temporal gyrus, right parahippocampus, and precuneus bilaterally, whereas, in the left hemisphere, enhanced activation was observed in the middle occipital lobe and fusiform gyrus (Figure 3, Table 3(b)).

Table 3: Activation reflecting the contrast MOD versus CAN during the listening (L) and aesthetic judgment (AJ) tasks.

Figure 3: Activation observed in the contrast MOD versus CAN during the aesthetic judgment task (AJ). Group-averaged statistical parametric maps are rendered onto the MNI brain template (P-corr. < 0.05). The bars show the activity profile within the right parahippocampal gyrus in the contrast MOD versus CAN during the AJ task, in arbitrary units (a.u.).

3.2.2. Aesthetic Analysis: Parametric Effect of Aesthetic Judgment

To test whether explicit aesthetic judgment modulated brain activation independently of melody structure, we carried out a parametric analysis based on the participants' responses given during the AJ task, independently of stimulus type (CAN, MOD). Increasing aesthetic rating was associated with greater activation in the right superior temporal sulcus (STS; maxima: 62, −26, 0; P-corr. < 0.05) and in the left IFG pars triangularis, corresponding to BA 44/45 (maxima: −44, 34, 4; P-uncorr. = 0.02) (Figure 4). Decreasing aesthetic rating, on the other hand, was associated with greater activation in the right precuneus (maxima: 6, −78, 30).

Figure 4: Activation observed as a function of increasing aesthetic rating (parametric analysis) in the right superior temporal sulcus and left IFG pars triangularis. Activation is rendered onto the MNI brain template.

4. Discussion

The neuroscience of music has mostly dealt with the way our brain processes and responds to the temporal and syntactic structure of music. The aim of the present study was to isolate one of the syntactic forms of music, namely, melody, to explore its independent effect on aesthetic experience in listeners naïve to formal musical knowledge. For this purpose, we used simple melodic lines whose syntactic structure was systematically manipulated to create acoustic dissonance. Two categories of melodies were presented to participants: canonical (syntactically "correct") and modified, that is, made of an altered version of the canonical melodies. In what we termed the structure analysis, we evaluated the effect on brain activation exerted by syntactic structural alterations of the melodies by comparing canonical and modified stimuli in two experimental tasks: listening and aesthetic judgment. Moreover, an aesthetic analysis, based on the listeners' responses recorded during the AJ task, was carried out to evaluate the brain regions involved in aesthetic judgment independently of structural modifications.

Our results highlighted some important aspects of the neural processing underlying melody listening. First, the contrast analyses comparing canonical and modified stimuli with white noise showed that processing melody, regardless of structural modification and experimental task, involves activation of the dorsal premotor cortex (dPM) and superior temporal gyrus (STG) bilaterally. The activation of the dorsal premotor cortex is in line with findings showing its implication in rhythm processing (e.g., [2]). In a melody, rhythm is given by its temporal structure and phrasing, which are characterized by the pitch relationship of one note to the next [16]. In fact, melodic processing incorporates intervals between individual notes and the overall contour of the sequence, as shown by studies investigating melody or pitch perception and discrimination ([17, 18]; for a review, see [19]). The posterior part of the STG, including Heschl's gyrus (HG) and the planum temporale (PT), is involved in acoustic-stimulus processing.
While HG represents the first cortical step of auditory analysis, it has been proposed that the PT carries out an auditory scene analysis [20] that allows one to segregate different sounds heard simultaneously and to match them with stored patterns. The output of this high-level processing should provide information about the acoustic environment that is not available from the stimulus analyses carried out at previous levels [21]. This region has also been found to be crucial for music processing. In a study including epileptic patients who had undergone unilateral temporal resection and healthy controls, Liégeois-Chauvel and colleagues [22] found that the posterior STG is involved in the extraction of both contour and temporal information of melodies. The functional data of Patterson and colleagues' study [23] further clarified that the cortical processing of pitch is hierarchical: it recruits not only the posterior but also the anterior part of this region (planum polare, PP) as the interval information of the acoustic stimulus becomes more complex. Consistent with these data, the bilateral activation of the STG found in the present study for both the CAN versus WN and the MOD versus WN contrasts may reflect the hierarchical neural processing of melodies.

The temporal cluster extended into the posterior third of the insular cortex. This is a granular region and, as shown by several anatomical studies (e.g., [24, 25]), is connected with the medial geniculate nucleus of the thalamus, with Heschl's gyrus, and with the superior temporal sulcus. It has been shown that the posterior insula might preprocess the auditory stimulus before the primary auditory cortex [25], and some neuropsychological work indicates that lesions of the posterior part of the insula are associated with auditory deficits, such as agnosia. The posterior insula might then mediate a precortical phase of auditory analysis.

Direct comparisons between stimulus types (canonical and modified) highlighted the areas specifically involved in the syntactic processing of melodies. The direct contrast CAN versus MOD did not produce any differential activation, suggesting that there was no specific processing associated with canonical compared to modified structures. The opposite contrast, modified versus canonical stimuli, revealed on the other hand a signal increase in deep temporal regions, particularly in the right parahippocampal cortex. The critical role of the parahippocampal cortex in processing the emotional valence of dissonance has been shown in several works. A PET study by Blood et al. [26] showed that increasing dissonance of the stimuli (and the related judgments of unpleasantness) correlated with activation of the right parahippocampal gyrus and precuneus, also found activated in the present study. Koelsch and colleagues [10] found activation of the parahippocampal gyrus, hippocampus, amygdala, and temporal pole by contrasting dissonant stimuli judged as unpleasant with consonant classical excerpts judged as pleasant. Gosselin and coworkers [27] clarified the role of mediotemporal structures in the processing of emotional responses to dissonance by studying aesthetic judgments of classical and dissonant music excerpts in both patients with lesions to the medial temporal lobe and healthy subjects. While both groups gave positive aesthetic judgments to the classical excerpts, the patients judged the dissonant music as slightly pleasant, unlike the healthy subjects.
It was concluded that the parahippocampal cortex is specific for processing judgments of unpleasantness due to dissonance, because the volume of this region, and not of other surrounding structures (such as the amygdala or hippocampus), correlated with the judgments given by the patients to the dissonant stimuli. Since the behavioral analysis of the present study showed a link between negative aesthetic judgment and modified melodies, the activation of the parahippocampal cortex found in the contrast MOD versus CAN melodies suggests a role of this region in processing the negative emotional value of melodies driven by structural dissonance.

An alternative interpretation of the parahippocampal activation favors the idea that it could have been evoked by the structural novelty of the stimuli. The role of the hippocampus and surrounding areas in memory encoding and processing is well known (for reviews, see, e.g., [28, 29]). In this light, it is plausible to suggest that the activation of the parahippocampal cortex was determined by a stronger effort to decode and retain the new structures intrinsic to the MOD melodies compared with the CAN ones (increased memory load for the MOD stimuli). This interpretation of the data does not automatically discount the former emotion-related explanation for the parahippocampal activation, and it can serve as a suggestion for future investigations. On the whole, the lack of enhanced brain activation for the canonical stimuli with respect to the modified ones, and the presence of a signal increase for the opposite contrast, suggests that modified melodic structures exert a stronger effect on brain processing (in terms of negative emotional valence and/or mnemonic-related processing) than melodies that respect a structural canon, at least within the Western culture.

Aesthetic preference for music, although related to a certain extent to melody structure as shown by our behavioral data, may also be guided by idiosyncratic criteria. In the present study, we attempted to capture this aspect by carrying out an aesthetic analysis based on the responses of each participant during the AJ task, independently of stimulus structure. This analysis revealed activation of the right STS and inferior frontal gyrus (IFG) associated with increasing pleasantness expressed for the melodies, independently of structural modification (CAN, MOD). The STS cluster included BA 22, which represents the homologue of Wernicke's area in the right hemisphere. Recent findings suggest that the frontotemporal regions of the right hemisphere play an important role in the semantic processing of language, in contrast to the traditional view that highlights the role of the left hemisphere only. Additionally, it has been shown that the Wernicke homologue in the right hemisphere is involved in metaphor understanding [30, 31]. In a TMS study, Harpaz and colleagues [32] showed a crucial implication of right BA 22 in associating words with their remote meaning. In accordance with this evidence, a complex model of semantic language processing has been advanced, which considers the different contributions of the left and right frontotemporal regions to semantic processing. In this model, semantic processing is described as highly distributed in both hemispheres, but the right regions are described as crucial for coarser semantic coding compared with the left ones [33]. As language does, music conveys meaningful information.
Using the N400 as a marker of meaning processing, it has been shown that long or short music excerpts are able to prime the processing of subsequent target words [34–37]. Moreover, in an EEG-fMRI study, Steinbeis and Koelsch [37] found that the right posterior STS has a key role in processing the meaning of music, as it does for coarse aspects of language. Although melody meaningfulness was not directly assessed in the present study, a tentative explanation for our results is that there may be a link between aesthetic preference and the coding of music meaning, in that aesthetic preference was accorded to melodies that were somehow more meaningful to the listeners or, alternatively, to which the listeners were able to ascribe a meaning. Of course, other interpretations of the STS activation may account for the observed data. For example, some intrinsic properties of the pleasurable stimuli may have enhanced the participants' attention, thereby modulating the activity within the STS cluster. In fact, as discussed in Himmelbach et al. [38], STG/STS seem to be involved in attentional orienting towards potentially relevant events or stimuli [39]. Additionally, the superior temporal cortex has been shown to be a site of multimodal sensory convergence, and neuronal populations in the STS encode object properties as well as spatial positions [40], orienting attention towards salient stimuli.

The results of the parametric analysis further revealed a modulatory effect of the expressed aesthetic pleasure on the right inferior frontal gyrus pars orbitalis, corresponding to BA 47, and on the left IFG pars triangularis, corresponding to BA 44/45. With respect to the activation of Broca's area, several works have found that Broca's area (left IFG) is important for processing both harmonic and syntactic errors [5, 6]; likewise, in the present work it may be involved in syntax coding. In this study, Broca's area activation was associated with listening to pleasant melodies, suggesting that syntactic coding of the canonical stimuli facilitated the ascription of an aesthetic judgment as requested by the task (AJ). Since no emotion-related activation was found in association with aesthetically pleasurable melodies, it is possible that aesthetic judgment of the presented melodies was based on the more formal aspects of stimulus processing, namely, syntactic analysis.

In general, in contrast with other studies that found a neural correlation between aesthetic pleasure for music and activation of emotion-related structures (see, e.g., [9–11, 41–43]), our results suggest that aesthetic preference for simple melodic pieces is mediated by a structural-syntactic and, possibly, semantic analysis of the stimuli. We suggest that the main difference across these diverging findings may rest on the type of stimuli used and on the specific alterations introduced, which in our study differed from the intact, rich stimuli (painting and sculpture images or famous musical excerpts) used in other studies. In fact, we produced a single, highly salient syntactic error, without altering the original melody in a gross and overwhelming way. Additionally, we isolated melody from a harmonic context, whose violation would have intensified melodic dissonances. Likewise, we did not alter any other extramelodic parameter, such as rhythm, timbre, or intensity, with the aim of producing results reflecting the capability of melody alone to evoke an aesthetic experience in the listeners.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to thank Dr. Rachel Wood, Diego Lisfera, and Edoardo Acotto for help in stimulus preparation and Professor Giacomo Rizzolatti and Professor Vittorio Gallese for theoretical and methodological suggestions; finally, they are grateful to Fondazione Cassa di Risparmio di Parma (CARIPARMA) for providing the infrastructures that made it possible to conduct this study.

References

1. J. L. Chen, R. J. Zatorre, and V. B. Penhune, "Interactions between auditory and dorsal premotor cortex during synchronization to musical rhythms," NeuroImage, vol. 32, no. 4, pp. 1771–1781, 2006.
2. J. L. Chen, V. B. Penhune, and R. J. Zatorre, "Listening to musical rhythms recruits motor regions of the brain," Cerebral Cortex, vol. 18, no. 12, pp. 2844–2854, 2008.
3. P. Janata and S. T. Grafton, "Swinging in the brain: shared neural substrates for behaviors related to sequencing and music," Nature Neuroscience, vol. 6, no. 7, pp. 682–687, 2003.
4. B. Maess, S. Koelsch, T. C. Gunter, and A. D. Friederici, "Musical syntax is processed in Broca's area: an MEG study," Nature Neuroscience, vol. 4, no. 5, pp. 540–545, 2001.
5. B. Tillmann, S. Koelsch, N. Escoffier et al., "Cognitive priming in sung and instrumental music: activation of inferior frontal cortex," NeuroImage, vol. 31, no. 4, pp. 1771–1782, 2006.
6. B. Tillmann, S. Koelsch, N. Escoffier et al., "Cognitive priming in sung and instrumental music: activation of inferior frontal cortex," NeuroImage, vol. 31, no. 4, pp. 1771–1782, 2006.
7. D. J. Levitin and V. Menon, "The neural locus of temporal structure and expectancies in music: evidence from functional neuroimaging at 3 tesla," Music Perception, vol. 22, no. 3, pp. 563–575, 2005.
8. D. Perani, M. C. Saccuman, P. Scifo et al., "Functional specializations for music processing in the human newborn brain," Proceedings of the National Academy of Sciences of the United States of America, vol. 107, no. 10, pp. 4758–4763, 2010.
9. A. J. Blood and R. J. Zatorre, "Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion," Proceedings of the National Academy of Sciences of the United States of America, vol. 98, no. 20, pp. 11818–11823, 2001.
10. S. Koelsch, T. Fritz, D. Y. von Cramon, K. Müller, and A. D. Friederici, "Investigating emotion with music: an fMRI study," Human Brain Mapping, vol. 27, no. 3, pp. 239–250, 2006.
11. S. Koelsch, S. Skouras, T. Fritz et al., "The roles of superficial amygdala and auditory cortex in music-evoked fear and joy," NeuroImage, vol. 31, pp. 1771–1782, 2013.
12. K. J. Worsley and K. J. Friston, "Analysis of fMRI time-series revisited—again," NeuroImage, vol. 2, no. 3, pp. 173–181, 1995.
13. K. J. Friston, "Bayesian estimation of dynamical systems: an application to fMRI," NeuroImage, vol. 16, no. 2, pp. 513–530, 2002.
14. J. L. R. Andersson, C. Hutton, J. Ashburner, R. Turner, and K. Friston, "Modeling geometric deformations in EPI time series," NeuroImage, vol. 13, no. 5, pp. 903–919, 2001.
15. D. L. Collins, P. Neelin, T. M. Peters, and A. C. Evans, "Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space," Journal of Computer Assisted Tomography, vol. 18, no. 2, pp. 192–205, 1994.
16. C. J. Limb, "Structural and functional neural correlates of music perception," Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology, vol. 288, no. 4, pp. 435–446, 2006.
17. R. J. Zatorre, A. C. Evans, and E. Meyer, "Neural mechanisms underlying melodic perception and memory for pitch," Journal of Neuroscience, vol. 14, no. 4, pp. 1908–1919, 1994.
18. A. R. Halpern and R. J. Zatorre, "When that tune runs through your head: a PET investigation of auditory imagery for familiar melodies," Cerebral Cortex, vol. 9, no. 7, pp. 697–704, 1999.
19. I. Peretz and R. J. Zatorre, "Brain organization for music processing," Annual Review of Psychology, vol. 56, pp. 89–114, 2005.
20. A. S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound, The MIT Press, Cambridge, Mass, USA, 1990.
21. T. D. Griffiths and J. D. Warren, "The planum temporale as a computational hub," Trends in Neurosciences, vol. 25, no. 7, pp. 348–353, 2002.
22. C. Liégeois-Chauvel, I. Peretz, M. Babaï, V. Laguitton, and P. Chauvel, "Contribution of different cortical areas in the temporal lobes to music processing," Brain, vol. 121, no. 10, pp. 1853–1867, 1998.
23. R. D. Patterson, S. Uppenkamp, I. S. Johnsrude, and T. D. Griffiths, "The processing of temporal pitch and melody information in auditory cortex," Neuron, vol. 36, no. 4, pp. 767–776, 2002.
24. J. R. Augustine, "Circuitry and functional aspects of the insular lobe in primates including humans," Brain Research Reviews, vol. 22, no. 3, pp. 229–244, 1996.
25. D.-E. Bamiou, F. E. Musiek, and L. M. Luxon, "The insula (Island of Reil) and its role in auditory processing: literature review," Brain Research Reviews, vol. 42, no. 2, pp. 143–154, 2003.
26. A. J. Blood, R. J. Zatorre, P. Bermudez, and A. C. Evans, "Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions," Nature Neuroscience, vol. 2, no. 4, pp. 382–387, 1999.
27. N. Gosselin, S. Samson, R. Adolphs et al., "Emotional responses to unpleasant music correlates with damage to the parahippocampal cortex," Brain, vol. 129, no. 10, pp. 2585–2592, 2006.
28. S.-H. Wang and R. G. M. Morris, "Hippocampal-neocortical interactions in memory formation, consolidation, and reconsolidation," Annual Review of Psychology, vol. 61, pp. 49–79, 2010.
29. N. M. van Strien, N. L. M. Cappaert, and M. P. Witter, "The anatomy of memory: an interactive overview of the parahippocampal-hippocampal network," Nature Reviews Neuroscience, vol. 10, no. 4, pp. 272–282, 2009.
30. G. Bottini, R. Corcoran, R. Sterzi et al., "The role of the right hemisphere in the interpretation of figurative aspects of language: a positron emission tomography activation study," Brain, vol. 117, no. 6, pp. 1241–1253, 1994.
31. M. Sotillo, L. Carretié, J. A. Hinojosa et al., "Neural activity associated with metaphor comprehension: spatial analysis," Neuroscience Letters, vol. 373, no. 1, pp. 5–9, 2005.
32. Y. Harpaz, Y. Levkovitz, and M. Lavidor, "Lexical ambiguity resolution in Wernicke's area and its right homologue," Cortex, vol. 45, no. 9, pp. 1097–1103, 2009.
33. M. Jung-Beeman, "Bilateral brain processes for comprehending natural language," Trends in Cognitive Sciences, vol. 9, no. 11, pp. 512–518, 2005.
34. S. Koelsch, E. Kasper, D. Sammler, K. Schulze, T. Gunter, and A. D. Friederici, "Music, language and meaning: brain signatures of semantic processing," Nature Neuroscience, vol. 7, no. 3, pp. 302–307, 2004.
35. S. Koelsch, T. Fritz, K. Schulze, D. Alsop, and G. Schlaug, "Adults and children processing music: an fMRI study," NeuroImage, vol. 25, no. 4, pp. 1068–1076, 2005.
36. J. Daltrozzo and D. Schön, "Is conceptual processing in music automatic? An electrophysiological approach," Brain Research, vol. 1270, pp. 88–94, 2009.
37. N. Steinbeis and S. Koelsch, "Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns," Cerebral Cortex, vol. 18, no. 5, pp. 1169–1178, 2008.
38. M. Himmelbach, M. Erb, and H.-O. Karnath, "Exploring the visual world: the neural substrate of spatial orienting," NeuroImage, vol. 32, no. 4, pp. 1747–1759, 2006.
39. J. Downar, A. P. Crawley, D. J. Mikulis, and K. D. Davis, "A cortical network sensitive to stimulus salience in a neutral behavioral context across multiple sensory modalities," Journal of Neurophysiology, vol. 87, no. 1, pp. 615–620, 2002.
40. H.-O. Karnath, "New insights into the functions of the superior temporal cortex," Nature Reviews Neuroscience, vol. 2, no. 8, pp. 568–576, 2001.
41. S. Koelsch, "Towards a neural basis of music-evoked emotions," Trends in Cognitive Sciences, vol. 14, no. 3, pp. 131–137, 2010.
42. C. Di Dio and V. Gallese, "Neuroaesthetics: a review," Current Opinion in Neurobiology, vol. 19, no. 6, pp. 682–687, 2009.
43. D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, "Music and emotion: electrophysiological correlates of the processing of pleasant and unpleasant music," Psychophysiology, vol. 44, no. 2, pp. 293–304, 2007.
The Effect of Simple Melodic Lines on Aesthetic Experience: Brain Response to Structural Manipulations

Loading next page...
 
/lp/hindawi-publishing-corporation/the-effect-of-simple-melodic-lines-on-aesthetic-experience-brain-8T4uOCfUVd

References

References for this paper are not available at this time. We will be adding them shortly, thank you for your patience.

Publisher
Hindawi Publishing Corporation
Copyright
Copyright © 2014 Stefania Ferri et al.
ISSN
2356-6787
Publisher site
See Article on Publisher Site

Abstract

The Effect of Simple Melodic Lines on Aesthetic Experience: Brain Response to Structural Manipulations div.banner_title_bkg div.trangle { border-color: #171A2F transparent transparent transparent; opacity:0.8; /*new styles start*/ -ms-filter:"progid:DXImageTransform.Microsoft.Alpha(Opacity=80)" ;filter: alpha(opacity=80); /*new styles end*/ } div.banner_title_bkg_if div.trangle { border-color: transparent transparent #171A2F transparent ; opacity:0.8; /*new styles start*/ -ms-filter:"progid:DXImageTransform.Microsoft.Alpha(Opacity=80)" ;filter: alpha(opacity=80); /*new styles end*/ } div.banner_title_bkg div.trangle { width: 248px; } #banner { background-image: url('http://images.hindawi.com/journals/aneu/aneu.banner.jpg'); background-position: 50% 0;} Hindawi Publishing Corporation Home Journals About Us Advances in Neuroscience About this Journal Submit a Manuscript Table of Contents Journal Menu About this Journal · Abstracting and Indexing · Advance Access · Aims and Scope · Article Processing Charges · Articles in Press · Author Guidelines · Bibliographic Information · Citations to this Journal · Contact Information · Editorial Board · Editorial Workflow · Free eTOC Alerts · Publication Ethics · Reviewers Acknowledgment · Submit a Manuscript · Subscription Information · Table of Contents Open Special Issues · Special Issue Guidelines Abstract Full-Text PDF Full-Text HTML Full-Text ePUB Linked References How to Cite this Article Supplementary Material Advances in Neuroscience Volume 2014 (2014), Article ID 482126, 9 pages http://dx.doi.org/10.1155/2014/482126 Research Article The Effect of Simple Melodic Lines on Aesthetic Experience: Brain Response to Structural Manipulations Stefania Ferri , 1 Cristina Meini , 2 Giorgio Guiot , 3 Daniela Tagliafico , 4 Gabriella Gilli , 5 and Cinzia Di Dio 1,5 1 Department of Neuroscience, Università di Parma, Via Volturno 39/E, 43100 Parma, Italy 2 Department of Humanistic Studies, Università del Piemonte Orientale, Via Manzoni 8, 13100 Vercelli, Italy 3 Associazione Cantabile, Via Campana 2, 10125 Turin, Italy 4 Department of Philosophy, Università Degli Studi di Torino, Via Sant’Ottavio 20, 10125 Turin, Italy 5 Department of Psychology, Università Cattolica del Sacro Cuore, 20123 Milan, Italy Received 20 June 2014; Revised 5 December 2014; Accepted 8 December 2014; Published 30 December 2014 Academic Editor: Notger G. Mueller Copyright © 2014 Stefania Ferri et al. This is an open access article distributed under the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Abstract This fMRI study investigates the effect of melody on aesthetic experience in listeners naïve to formal musical knowledge. Using simple melodic lines, whose syntactic structure was manipulated, we created systematic acoustic dissonance. Two stimulus categories were created: canonical (syntactically “correct,” in the Western culture) and modified (made of an altered version of the canonical melodies). The stimuli were presented under two tasks: listening and aesthetic judgment. Data were analyzed as a function of stimulus structure (canonical and modified) and stimulus aesthetics, as appraised by each participant during scanning. The critical contrast modified versus canonical stimuli produced enhanced activation of deep temporal regions, including the parahippocampus, suggesting that melody manipulation induced feelings of unpleasantness in the listeners. 
This was supported by our behavioral data indicating decreased aesthetic preference for the modified melodies. Medial temporal activation could also have been evoked by stimulus structural novelty determining increased memory load for the modified stimuli. The analysis of melodies judged as beautiful revealed that aesthetic judgment of simple melodies relied on a fine-structural analysis of the stimuli subserved by a left frontal activation and, possibly, on meaning attribution at the charge of right superior temporal sulcus for increasingly pleasurable stimuli. 1. Introduction Music is simultaneously art and science: it allows the artist to express his/her inner world through sounds, which are linked one to another by stringent rules that are strongly influenced by culture. These rules represent hallmarks that, on one side, constrain the composer’s freedom to choose associations and successions of sounds and, on the other, offer a context, within which all elements gain a meaning. Traditionally, there has been a strong tendency to emphasize the dominance of compositional structures in outlining the aesthetic character of a musical piece. In the present study, we investigated this relationship by exploring the aesthetics of melody, that is, the capacity of simple musical structures to evoke an aesthetic experience in listeners naïve to formal musical knowledge. Music is made of rules that govern the relation between notes and of a dynamic dimension that defines its tempo and rhythm. As far as the succession of sounds is concerned, the founding rules of a musical piece are also referred to as syntactic rules (this denomination implicitly underlines the similarities between music and language). Music syntax is basically constituted by melody (horizontal syntax) and harmony (vertical syntax). Melody consists of a distribution of notes on scales that are organized into “modes” (e.g., minor and major) by our musical tradition. Harmony, on the other hand, establishes the criteria upon which chords are built and associated in time. The syntactic rules of music are not absolute; contrarily, they vary in relation to the different musical styles. For example, the rules forming the base of classic music are different from those characterizing soul music or blues. Still, dodecaphonic music arises in opposition to the norms of classical music, which are paradigmatically expressed by traditional “mozartian” music. Recently, the growing interest of neuroscience for music has dealt with the way our brain processes the temporal and syntactic structure of music. Some evidence suggests that the neural processing of music syntax involves the activation of areas that are also involved in language processing and in motor planning [ 1 – 6 ]. Tillmann and colleagues [ 6 ], for example, showed that the processing of a chord unrelated to musical context modulates the activity of the inferior frontal gyrus (IFG). Similarly, Levitin and Menon [ 7 ] found enhanced activation of IFG (BA47) in the contrast between musical pieces and their scrambled versions, showing that this brain area may be involved in the coding of fine stimulus structure. A more recent study showed that even in newborn children altered music structures cause perceived dissonance, which involves the activation of the inferior frontal cortex [ 8 ]. Another aspect of music that has been investigated concerns the neural correlates of aesthetic experience evoked by music and, specifically, to its emotional dimension. 
A PET study by Blood and Zatorre [ 9 ] showed that the intensity of the emotional experience elicited by familiar musical pieces positively correlated with signal change in subcortical structures, including the ventral striatum, and in limbic structures, including the insular cortex, orbitofrontal cortex, and anterior cingulate cortex. In their fMRI study, Koelsch et al. [ 10 ] reported bilateral activation of the primary auditory cortex, IFG, and anterior insula for listening to pleasant compared with unpleasant music (see also [ 11 ]). Altogether, these studies emphasize the role of emotional centres during the aesthetic experience of music. In the present study, we aimed to break music down into one of its basic structural dimensions, namely, melody, and to clarify whether an aesthetic experience can be evoked by this single component alone in naïve listeners (neither music experts nor players). Differently from the above studies, which used complex musical excerpts with a rich harmonic and rhythmic structure as experimental stimuli, in the present study we used simple melodic lines. The effect of melody on aesthetic experience was investigated by systematically manipulating the syntactic structure of the stimuli. In fact, violating the syntactic rules on which a musical system is built creates acoustic dissonance that, phenomenologically, may translate into an unpleasant emotional feeling. For this purpose, two categories of melodies were presented: canonical, that is, syntactically "correct," and modified, that is, altered versions of the canonical melodies. In order to evaluate whether the structural alteration of the melodies modulates aesthetic experience, canonical and modified stimuli were presented in two experimental tasks, listening and aesthetic judgment. During the listening task, participants merely listened to the presented melodies; during the aesthetic judgment task, they were required to overtly express a pleasantness evaluation of the same stimuli.

2. Methods

2.1. Participants

Nineteen healthy right-handed Italian native speakers (9 males and 8 females; mean age 24.3 years) participated in the fMRI study. They were undergraduate and graduate students naïve to formal musical training: they did not play any musical instrument, nor were they able to read a piano score. They were unfamiliar with the presented melodies. After receiving an explanation of the experimental procedures, they gave their written informed consent. This study was approved by the Local Ethics Committee of Parma, Italy.

2.2. Stimuli

Simple tonal melodies played on the piano were used in this study. The stimuli were presented in a canonical, syntactically "correct," version (CAN) and in a modified, syntactically "incorrect," version (MOD) of the canonical stimuli. In total, 10 stimuli (5 CAN and 5 MOD melodies) were selected on the basis of a preliminary behavioral study, in which a sample of 20 listeners naïve to formal musical knowledge (10 males, mean age = 28.8 yrs; 10 females, mean age = 28.2 yrs), different from the sample undergoing fMRI, was asked to evaluate a set of stimuli composed of 12 CAN and 12 MOD melodies. For each stimulus, participants were required to rate aesthetic preference and syntactic accuracy. The CAN and MOD versions that showed the highest discrepancy in aesthetic ratings were chosen; moreover, the syntactic alterations of the selected MOD melodies had to be clearly perceived.
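For illustration, the selection criterion can be sketched as follows. This is a minimal Python sketch, not the procedure actually scripted by the authors; the file name, column names, and the detection threshold are hypothetical assumptions.

```python
# Minimal sketch (not the authors' code) of the preliminary stimulus selection:
# keep the CAN/MOD pairs with the largest gap in mean aesthetic ratings,
# restricted to pairs whose syntactic alteration was reliably detected.
import pandas as pd

ratings = pd.read_csv("pilot_ratings.csv")   # hypothetical: one row per listener x stimulus

# Mean aesthetic rating per melody and version (columns become CAN, MOD).
means = (ratings
         .groupby(["melody", "version"])["aesthetic"]
         .mean()
         .unstack("version"))

# Proportion of listeners who noticed the alteration in each MOD melody.
detect = (ratings[ratings["version"] == "MOD"]
          .groupby("melody")["alteration_detected"]
          .mean())

means["discrepancy"] = means["CAN"] - means["MOD"]
eligible = means[detect > 0.8]               # "clearly perceived" criterion (assumed value)
selected = eligible.sort_values("discrepancy", ascending=False).head(5)
print(selected)
```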
Four CAN and four MOD stimuli were created by extracting pure melodic lines from unfamiliar excerpts by well-known composers (F. Chopin: Prelude No. 20; G. Gershwin: Oh, I Can't Sit Down, from Porgy and Bess; I Wonder as I Wander, an American folk song; N. Morali: Notturno). In some instances, variations were made to the original excerpts to adapt the melodies to a piano setting and to equalize their duration. The fifth CAN stimulus and its MOD version were created from scratch (G. Guiot: Melodia). During postscanning debriefing, we ascertained that participants were unfamiliar with the presented melodies. The melodies were produced with the music software "NUENDO," using a piano timbre and low reverberation to avoid a superimposition of adjacent notes that could create a harmonic dimension. The modified versions of the canonical stimuli were created through ascending alterations of the fifth note of the musical scale, that is, by raising the fifth degree by one semitone, as exemplified in Figure 1. The fifth degree of a musical scale, called the "dominant," is the most frequent note in a melodic line and is a key note determining the stability of the composition. This alteration therefore represents the most disruptive intervention on the perception of a melody (see Supplementary Material for the scores of the melodies used in this study, available online at http://dx.doi.org/10.1155/2014/482126 ). Figure 1: Example of a melody used in this study (canonical version, upper part; modified version, lower part). The modified version was created by raising the fifth degree of the musical scale by one semitone. Each melody was presented to the participants for 12 s and contained, on average, 5 alterations.
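For illustration, the manipulation can be expressed in terms of MIDI pitch numbers: every occurrence of the fifth scale degree (the dominant, 7 semitones above the tonic) is raised by one semitone. The sketch below is a minimal Python illustration, not the stimulus-generation procedure actually used; the tonic and the toy melody are invented for the example.

```python
# Illustrative sketch of the fifth-degree (dominant) alteration described above.
TONIC = 60  # MIDI note number for C4 (assumed key of the toy example)

def raise_dominant(melody, tonic=TONIC):
    """Return a 'modified' melody: each dominant (5th degree) sharpened by one semitone."""
    out = []
    for pitch in melody:
        if (pitch - tonic) % 12 == 7:   # scale degree 5 in any octave
            out.append(pitch + 1)       # ascending alteration by one semitone
        else:
            out.append(pitch)
    return out

canonical = [60, 64, 67, 65, 64, 62, 67, 60]   # toy melody: C E G F E D G C
modified = raise_dominant(canonical)            # every G (67) becomes G sharp (68)
print(modified)
```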
2.3. Procedure

During scanning, participants were provided with digital visors (VisuaSTIM, 500,000 px × 0.25-square-inch resolution, horizontal eye field of 30°) that were applied directly to the volunteers' faces. The visors displayed the instructions, a fixation cross, and the question mark (see below). The participants were also provided with earphones delivering the musical stimuli and a response box placed under their right hand. The stimuli were presented in two experimental tasks: listening (L) and aesthetic judgment (AJ). The tasks were presented in separate fMRI runs; each run/task lasted about 8 minutes. The task order was fixed across participants, with the listening task first and the aesthetic judgment task second; by presenting the listening task first, we aimed at measuring brain responses to the stimuli unbiased by an explicit evaluative set. Each melody was presented twice within each task, totaling 10 stimulus presentations for each category (10 CAN and 10 MOD) per task. At the beginning of each run, a 20 s visual instruction informed the volunteers about the upcoming task. Each experimental trial began with the 12 s musical stimulus, followed by 6 s of white noise (WN), used as an explicit baseline, and by a question mark that instructed the participants to respond to the musical stimulus using the response box placed inside the scanner. The trials were separated by a jittered intertrial interval (ITI mean duration 3.5 s; range 2.5–4.5 s). During music stimulation and white noise presentation, the volunteers were instructed to fixate on a cross whose position on the visor screen varied randomly across trials. The fixation point served to reduce eye movements, and its changing spatial location across trials was intended to maintain attention and to avoid eyestrain. After the white noise presentation, a question mark instructed the participants to respond to the stimulus. During the listening task (L), the participants were instructed to press one of the four buttons of the response box at random. During the aesthetic judgment task (AJ), they had to express a judgment of each musical stimulus on a 4-point scale. Both tasks (L and AJ) therefore required a motor response from the participants. The scale ranged from "aesthetically pleasant" to "aesthetically unpleasant." For half of the participants, "pleasant" corresponded to 1 and "unpleasant" to 4. More specifically, they had to respond to the following question: "How pleasant do you find it?" (1 = very pleasant; 2 = pleasant; 3 = moderately pleasant; 4 = not pleasant at all). For the other half of the participants, the scale was set in the opposite order ("pleasant" corresponded to 4 and "unpleasant" to 1). Each finger corresponded to one specific response: the thumb, index, middle, and ring fingers produced responses 1, 2, 3, and 4, respectively. The distribution of scores ascribed to each melody is summarized in the response frequency Table S1 in Supplementary Material.

2.4. fMRI Data Acquisition and Statistical Analysis

Functional images were acquired with a General Electric scanner operating at 3T using an 8-channel head coil. Blood oxygenation level dependent (BOLD) contrasts were obtained using T2*-weighted echo-planar imaging (EPI). Each of the 185 volumes acquired comprised 40 transverse slices, covering the whole cerebral cortex with the exception of the primary visual cortex and the posterior part of the cerebellum (TR = 2500 ms, TE = 30 ms, flip angle = 85°, FOV = 240 × 240 mm, interslice gap = 0.5 mm, slice thickness = 4 mm). Immediately after the functional scanning, a high-resolution T1-weighted anatomical scan (150 slices, TR = 600 ms, TE = 20 ms, slice thickness = 1 mm, and in-plane resolution = 1 × 1 mm) was acquired for each participant. Image preprocessing and statistical analysis were performed using SPM8 (Wellcome Department of Cognitive Neurology, http://www.fil.ion.ucl.ac.uk/spm/ ), implemented in Matlab v7.6 (MathWorks, Inc., Sherborn, MA) [ 12 ]. The first four image volumes of each run were discarded to allow for stabilization of longitudinal magnetization. For each participant, the volumes were spatially realigned [ 13 ] to the first volume of the first session to correct for between-scan motion and unwarped [ 14 ]. A mean image from the realigned volumes was created. Slice acquisition times were then corrected using the middle slice as reference. To allow intersubject analysis, images were normalized to Montreal Neurological Institute (MNI) standard space [ 15 ], using the mean of the functional images. All images were smoothed using an isotropic Gaussian kernel (6 mm).
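Because the direction of the 4-point scale was counterbalanced across participants (Section 2.3), any analysis based on the pleasantness responses first requires recoding the raw button presses to a common scale. The following is a minimal Python sketch under an assumed data layout, not the authors' code.

```python
# Recode a raw button press (1-4) to a common pleasantness scale where
# higher values always mean "more pleasant".
def recode_response(button, scale_reversed):
    """button: 1-4 as pressed; scale_reversed: True if 1 = 'not pleasant at all'."""
    pleasantness = button if scale_reversed else 5 - button
    return pleasantness  # 1 = not pleasant at all ... 4 = very pleasant

# Example: for a participant whose scale had 1 = "very pleasant",
# a button press of 1 is recoded to the maximum pleasantness of 4.
print(recode_response(1, scale_reversed=False))  # -> 4
```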
Two types of fMRI data analyses were performed. The "structure" analysis accounted for the effect evoked by the canonical (CAN) and modified (MOD) melodic structures on the listeners' brain, independently of the participants' explicit aesthetic response to them. The second ("aesthetic") analysis categorized each excerpt as pleasant or unpleasant according to the behavioral responses recorded during the AJ runs, independently of melody modification (CAN, MOD). Statistical inference was based on a random-effects approach [ 13 ] comprising two steps: a within-subject (first-level) analysis and an intersubject (group) analysis. For the structure analysis, at the first level the fMRI data were fitted (least-squares fit) at every voxel using a linear combination of the effects of interest. The effects of interest were modelled as a function of the following: stimulus category (CAN, MOD), the question mark that cued overt responses, and the white noise, considered as explicit baseline, plus six regressors obtained from motion correction during the realignment process. All event types were convolved with the standard SPM8 hemodynamic response function (HRF). Using linear contrasts, the activation associated with WN presentation was subtracted from the activation associated with the two stimulus categories in each task (CAN-WN and MOD-WN in both L and AJ tasks). These contrasts were computed in order to isolate the specific effects of the musical stimuli, partialling out the mere effect of sound. The second step of the statistical analysis comprised one flexible factorial model that included the contrast images created for each subject in the first step (CAN-WN and MOD-WN in both L and AJ tasks). This model considered the pattern of activation specific to each stimulus category in the listening and aesthetic judgment tasks. The following contrasts were tested: first, CAN versus white noise and MOD versus white noise, in order to evaluate the positive effects of music on brain activation; second, CAN versus MOD within each task, to highlight specific effects of stimulus structure on brain activation. The aesthetic analysis, carried out on data from the AJ task only, examined the regional modulation of signal change induced by different levels of aesthetic judgment. As described above, judgments were recorded on a scale ranging from 1 to 4 (see Table S1 for details regarding score frequencies). Like the structure analysis, the aesthetic analysis included two steps. At the first level, the fMRI data were fitted (least-squares fit) at every voxel using a linear combination of the effects of interest. The effects of interest, modelled for each participant, were as follows: the presentation times of the question mark that cued overt responses, of the white noise, and of the musical stimuli (regardless of the type of melodic structure, CAN or MOD), plus six regressors obtained from motion correction during the realignment process. All event types were convolved with the standard SPM8 HRF. At the intersubject level, a one-sample t-test was carried out to identify the brain areas modulated by increasing aesthetic rating of the musical stimuli, regardless of stimulus type. For all these analyses, SPM maps were thresholded at P corrected = 0.05 at the cluster level (cluster size estimated with a voxel-level threshold of P uncorrected = 0.001).
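The logic of these first-level models can be illustrated schematically: each 12 s melody is modelled as a boxcar convolved with a canonical-like HRF, and, for the aesthetic analysis, the same onsets can be weighted by the mean-centred pleasantness ratings to form a parametrically modulated regressor. The Python sketch below uses a standard double-gamma approximation of the canonical HRF; the onsets and ratings are invented, and this is a schematic illustration rather than the SPM8 pipeline actually used in the study.

```python
# Schematic sketch (not the authors' SPM8 code) of an event regressor and a
# parametrically modulated regressor for the designs described above.
import numpy as np
from scipy.stats import gamma

TR, n_scans, dt = 2.5, 185, 0.1                 # TR (s) and volume count from Section 2.4
step = int(round(TR / dt))                      # high-resolution samples per scan

t = np.arange(0, 32, dt)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6    # double-gamma approximation of the canonical HRF
hrf /= hrf.sum()

def regressor(onsets, weights, duration=12.0):
    """Weighted boxcar per event, convolved with the HRF, sampled at scan times."""
    hires = np.zeros(n_scans * step)
    for onset, w in zip(onsets, weights):
        i0 = int(round(onset / dt))
        i1 = int(round((onset + duration) / dt))
        hires[i0:i1] += w
    conv = np.convolve(hires, hrf)[:len(hires)]
    return conv[::step]

onsets = np.array([20.0, 45.0, 70.0, 95.0])     # hypothetical melody onsets (s)
ratings = np.array([4, 2, 3, 1], dtype=float)   # hypothetical pleasantness ratings

main_effect = regressor(onsets, np.ones_like(ratings))    # melody-versus-baseline regressor
parametric = regressor(onsets, ratings - ratings.mean())  # aesthetic-rating modulator
```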
Because the acquisition plane cut off the posterior portion of the brain, it was not possible to determine whether activation in its proximity constituted independent clusters or belonged to more extended activation; for this reason, the activation found in the occipitotemporal visual regions and in the cerebellum is not discussed.

3. Results

3.1. Response-Based Results

To assess the aesthetic ratings provided by each participant during fMRI scanning as a function of the type of melody (CAN or MOD), a repeated-measures GLM analysis, with two levels of stimulus category (CAN, MOD) and two levels of stimulus repetition (R1, R2), was carried out on the responses recorded during the AJ task. The data file containing the participants' responses to the stimuli is provided in Supplementary Material (SDataFile.xls). The results showed that canonical stimuli were rated as more pleasant than their modified counterparts (F(1,18) = 8.5; partial η² = 0.31; δ = 0.79), whereas there was no effect of stimulus repetition on aesthetic appraisal. These results indicate that the acoustic dissonance created by the atypical musical syntax of the modified stimuli negatively affected aesthetic preference.
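For illustration, the 2 × 2 repeated-measures design of this behavioral analysis can be sketched as follows in Python with statsmodels; the data file and column names are hypothetical assumptions, not the authors' analysis script.

```python
# Minimal sketch of the 2 x 2 repeated-measures model: stimulus category
# (CAN, MOD) x repetition (R1, R2), with participants as the repeated factor.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one mean pleasantness score per subject x category x repetition.
df = pd.read_csv("aj_ratings_long.csv")     # hypothetical file with the columns used below
res = AnovaRM(df, depvar="pleasantness", subject="subject",
              within=["category", "repetition"]).fit()
print(res)                                   # F-tests for category, repetition, interaction
```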
3.2. fMRI Results

3.2.1. Structure Analysis

Overall Effect of Melody Listening. The fMRI analysis was carried out by first assessing the overall activation elicited by melody, contrasting, separately, CAN and MOD (minus WN) versus baseline across the listening (L) and aesthetic judgment (AJ) tasks. The contrast CAN (minus WN) versus baseline produced activation in the superior occipital area, the superior temporal gyrus (STG), and the dorsal premotor cortex (dPM). Temporal activation included the primary auditory cortex and its neighbouring associative auditory regions, including BA 22, BA 21, and the superior part of BA 38. All activation was bilateral (Figure 2(a), Table 1). Table 1: Activation reflecting the effect of canonical stimuli (versus white noise). Figure 2: Activation observed in (a) the contrast CAN versus WN and (b) the contrast MOD versus WN, averaging activation across the two experimental tasks (listening and aesthetic judgment). Group-averaged statistical parametric maps are rendered onto the MNI brain template (P corr. < 0.05). As shown in Figure 2(b) (Table 2), the contrast MOD (minus WN) versus baseline revealed activation similar to that observed for the contrast CAN (minus WN) versus baseline. Table 2: Activation reflecting the effect of modified stimuli (versus white noise). Canonical versus Modified Melodies. The direct contrast CAN versus MOD was carried out for each task (L and AJ) separately to evaluate whether the structure of the melodies affects the listeners' aesthetic experience. The results revealed no significant activation evoked by canonical stimuli with respect to the modified ones in either the listening or the aesthetic judgment task. The opposite contrast, MOD versus CAN, assessed the neural effects of unpleasantness due to syntax alteration within each experimental task (L and AJ). During L, the contrast MOD versus CAN showed differential activation in the right dorsal premotor cortex and postcentral gyrus (Table 3(a)). With respect to AJ, differential activation between modified and canonical stimuli was observed in the right middle temporal gyrus, the right parahippocampus, and the precuneus bilaterally, whereas, in the left hemisphere, enhanced activation was observed in the middle occipital lobe and the fusiform gyrus (Figure 3, Table 3(b)). Table 3: Activation reflecting the contrast MOD versus CAN during listening (L) and aesthetic judgment (AJ) tasks. Figure 3: Activation observed in the contrast MOD versus CAN during the aesthetic judgment task (AJ). Group-averaged statistical parametric maps are rendered onto the MNI brain template (P corr. < 0.05). The bars show the activity profile within the right parahippocampal gyrus in the contrast MOD versus CAN during the AJ task, in arbitrary units (a.u.).

3.2.2. Aesthetic Analysis: Parametric Effect of Aesthetic Judgment

To test whether explicit aesthetic judgment modulated brain activation independently of melody structure, we carried out a parametric analysis based on the participants' responses given during the AJ task, independently of stimulus type (CAN, MOD). Increasing aesthetic rating was associated with greater activation in the right superior temporal sulcus (STS; maxima: 62, −26, 0; P corr. < 0.05) and the left IFG pars triangularis, corresponding to BA 44/45 (maxima: −44, 34, 4; P uncorr. = 0.02) (Figure 4). Decreasing aesthetic rating, on the other hand, was associated with greater activation in the right precuneus (maxima: 6, −78, 30). Figure 4: Activation observed as a function of increasing aesthetic rating (parametric analysis) in the right superior temporal sulcus and the left IFG pars triangularis. Activation is rendered onto the MNI brain template.

4. Discussion

The neuroscience of music has mostly dealt with the way our brain processes and responds to the temporal and syntactic structure of music. The aim of the present study was to isolate one of the syntactic forms of music, namely, melody, to explore its independent effect on aesthetic experience in listeners naïve to formal musical knowledge. For this purpose, we used simple melodic lines whose syntactic structure was systematically manipulated to create acoustic dissonance. Two categories of melodies were presented to participants: canonical (syntactically "correct") and modified, that is, altered versions of the canonical melodies. In what we termed the structure analysis, we evaluated the effect on brain activation exerted by syntactic structural alterations of the melodies by comparing canonical and modified stimuli in two experimental tasks: listening and aesthetic judgment. Moreover, an aesthetic analysis, based on the listeners' responses recorded during the AJ task, was carried out to evaluate the brain regions involved in aesthetic judgment, independently of structural modifications. Our results highlighted some important aspects of the neural processing underlying melody listening. First, the contrast analysis comparing canonical and modified stimuli with white noise showed that processing melody, regardless of structural modification and experimental task, involves activation of the dorsal premotor cortex (dPM) and the superior temporal gyrus (STG) bilaterally. The activation of the dorsal premotor cortex is in line with findings showing its involvement in rhythm processing (e.g., [ 2 ]). In a melody, this is conveyed by its temporal structure and phrasing, which are characterized by the pitch relationship of one note to the next [ 16 ]. In fact, melodic processing incorporates the intervals between individual notes and the overall contour of the sequence, as shown by studies investigating melody or pitch perception and discrimination ([ 17 , 18 ]; for a review, see [ 19 ]). The posterior part of the STG, including Heschl's gyrus (HG) and the planum temporale (PT), is involved in acoustic-stimulus processing.
While HG represents the first cortical step of auditory analysis, the PT has been proposed to perform an auditory scene analysis [ 20 ] that allows the listener to segregate different sounds heard simultaneously and to match them with stored patterns. The output of this high-level processing informs about the acoustic environment, information that is not available from the stimulus analyses carried out at earlier levels [ 21 ]. This region has also been found to be crucial for music processing. In a study including epileptic patients who had undergone unilateral temporal lobectomy and healthy controls, Liégeois-Chauvel and colleagues [ 22 ] found that the posterior STG is involved in the extraction of both contour and temporal information from melodies. The functional data of Patterson and colleagues [ 23 ] further clarified that cortical pitch processing is hierarchical: it recruits not only the posterior but also the anterior part of this region (planum polare, PP) as the interval information of the acoustic stimulus becomes more complex. Consistent with these data, the bilateral STG activation found in the present study for both the CAN versus WN and the MOD versus WN contrasts may reflect the hierarchical neural processing of melodies. The temporal cluster extended into the posterior third of the insular cortex. This is a granular region that, as shown by several anatomical studies (e.g., [ 24 , 25 ]), is connected with the medial geniculate nucleus of the thalamus, with Heschl's gyrus, and with the superior temporal sulcus. It has been suggested that the posterior insula may preprocess the auditory stimulus before the primary auditory cortex [ 25 ], and some neuropsychological studies indicate that lesions of the posterior insula are associated with auditory deficits, such as auditory agnosia. The posterior insula might therefore mediate an early stage of auditory analysis. Direct comparisons between stimulus types (canonical and modified) highlighted the areas specifically involved in the syntactic processing of melodies. The direct contrast CAN versus MOD did not produce any differential activation, suggesting that there was no specific processing associated with canonical compared with modified structures. The opposite contrast, modified versus canonical stimuli, revealed, on the other hand, a signal increase in deep temporal regions, particularly the right parahippocampal cortex. The critical role of the parahippocampal cortex in processing the emotional valence of dissonance has been shown in several studies. A PET study by Blood et al. [ 26 ] showed that increasing stimulus dissonance (and the corresponding judgments of unpleasantness) correlated with activation of the right parahippocampal gyrus and the precuneus, both also activated in the present study. Koelsch and colleagues [ 10 ] found activation of the parahippocampal gyrus, hippocampus, amygdala, and temporal pole by contrasting dissonant stimuli judged as unpleasant with consonant classical excerpts judged as pleasant. Gosselin and coworkers [ 27 ] clarified the role of mediotemporal structures in the processing of the emotional response to dissonance by studying aesthetic judgments of classical and dissonant music excerpts in patients with medial temporal lobe lesions and in healthy subjects. While both groups gave positive aesthetic judgments to the classical excerpts, the patients judged the dissonant music as slightly pleasant, unlike the healthy subjects.
It was concluded that the parahippocampal cortex is specifically involved in judgments of unpleasantness due to dissonance, because the volume of this region, and not of other surrounding structures (such as the amygdala or hippocampus), correlated with the judgments given by the patients to the dissonant stimuli. Since the behavioral analysis of the present study showed a link between negative aesthetic judgment and modified melodies, the activation of the parahippocampal cortex found in the contrast MOD versus CAN suggests a role of this region in processing the negative emotional value of melodies driven by structural dissonance. An alternative interpretation is that parahippocampal activation could have been evoked by stimulus structural novelty. The role of the hippocampus and surrounding areas in memory encoding and processing is well known (for reviews, see, e.g., [ 28 , 29 ]). In this light, it is plausible that the activation of the parahippocampal cortex reflected a greater effort to decode and retain the novel structures of the MOD melodies compared with the CAN ones (an increased memory load for the MOD stimuli). This interpretation does not rule out the former, emotion-related explanation of parahippocampal activation, and it may serve as a suggestion for future investigations. On the whole, the lack of enhanced brain activation for the canonical stimuli relative to the modified ones, together with the signal increase in the opposite contrast, suggests that modified melodic structures exert a stronger effect on brain processing (in terms of negative emotional valence and/or memory-related processing) than melodies that respect a structural canon, at least within Western culture. Aesthetic preference for music, although related to a certain extent to melody structure as shown by our behavioral data, may also be guided by idiosyncratic criteria. In the present study, we attempted to capture this aspect by carrying out an aesthetic analysis based on each participant's responses during the AJ task, independently of stimulus structure. This analysis revealed activation of the right STS and the inferior frontal gyrus (IFG) associated with increasing pleasantness ratings of the melodies, independently of structural modification (CAN, MOD). The STS cluster included BA 22, the right-hemisphere homologue of Wernicke's area. Recent findings suggest that the frontotemporal regions of the right hemisphere play an important role in the semantic processing of language, in contrast to the traditional view that attributes this role to the left hemisphere only. Additionally, the right-hemisphere homologue of Wernicke's area has been shown to be involved in metaphor understanding [ 30 , 31 ]. In a TMS study, Harpaz and colleagues [ 32 ] showed a crucial involvement of right BA 22 in associating words with their more remote meanings. In accordance with this evidence, a model of semantic language processing has been advanced that considers the different contributions of left and right frontotemporal regions. In this model, semantic processing is highly distributed across both hemispheres, but the right-hemisphere regions are crucial for coarser semantic coding than the left ones [ 33 ]. As language does, music conveys meaningful information.
Using the N400 as a marker of meaning processing, it has been shown that both long and short musical excerpts can prime the processing of subsequent target words [ 34 – 37 ]. Moreover, in an EEG-fMRI study, Steinbeis and Koelsch [ 37 ] found that the right posterior STS plays a key role in processing musical meaning, as it does for coarse aspects of language. Although melodic meaningfulness was not directly assessed in the present study, a tentative explanation of our results is that there is a link between aesthetic preference and the coding of musical meaning, in that preference was accorded to melodies that were somehow more meaningful to the listeners or, alternatively, to which the listeners were able to ascribe a meaning. Of course, other interpretations of the STS activation may account for the observed data. For example, some intrinsic properties of the pleasurable stimuli may have enhanced the participants' attention, thereby modulating activity within the STS cluster. In fact, as discussed by Himmelbach et al. [ 38 ], the STG/STS seems to be involved in attentional orienting towards potentially relevant events or stimuli [ 39 ]. Additionally, the superior temporal cortex has been shown to be a site of multimodal sensory convergence, and neuronal populations in the STS encode object properties as well as spatial positions [ 40 ], orienting attention towards salient stimuli. The results of the parametric analysis further revealed a modulatory effect of the expressed aesthetic pleasure on the right inferior frontal gyrus pars orbitalis, corresponding to BA 47, and on the left IFG pars triangularis, corresponding to BA 44/45. With respect to Broca's area, several studies have found that Broca's area (left IFG) is important for processing both harmonic and syntactic errors [ 5 , 6 ]; likewise, in the present work it may be involved in syntax coding. In this study, Broca's area activation was associated with listening to pleasant melodies, suggesting that the syntactic coding of the canonical stimuli facilitated the ascription of an aesthetic judgment, as required by the task (AJ). Since no emotion-related activation was found in association with aesthetically pleasurable melodies, it is possible that aesthetic judgment of the presented melodies was based on the more formal aspects of stimulus processing, namely, syntactic analysis. In general, in contrast with other studies that found a neural correlation between aesthetic pleasure for music and activation of emotion-related structures (see, e.g., [ 9 – 11 , 41 – 43 ]), our results suggest that aesthetic preference for simple melodic pieces is mediated by a structural-syntactic and, possibly, semantic analysis of the stimuli. We suggest that the main difference across these diverging findings may rest on the type of stimuli used and on the specific alterations introduced, which in our study differed from the intact, rich stimuli (painting and sculpture images or famous musical excerpts) used in other studies. In fact, we introduced a single, highly salient syntactic error, without grossly altering the original melody. Additionally, we isolated melody from any harmonic context, whose violation would have intensified the melodic dissonances. Likewise, we did not alter any other extramelodic parameter, such as rhythm, timbre, or intensity, with the aim of producing results reflecting the capability of melody alone to evoke an aesthetic experience in the listeners.
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to thank Dr. Rachel Wood, Diego Lisfera, and Edoardo Acotto for their help in stimulus preparation and Professor Giacomo Rizzolatti and Professor Vittorio Gallese for theoretical and methodological suggestions; finally, they are grateful to Fondazione Cassa di Risparmio di Parma (CARIPARMA) for providing the infrastructures that made it possible to conduct this study.

References

[1] J. L. Chen, R. J. Zatorre, and V. B. Penhune, "Interactions between auditory and dorsal premotor cortex during synchronization to musical rhythms," NeuroImage, vol. 32, no. 4, pp. 1771–1781, 2006.
[2] J. L. Chen, V. B. Penhune, and R. J. Zatorre, "Listening to musical rhythms recruits motor regions of the brain," Cerebral Cortex, vol. 18, no. 12, pp. 2844–2854, 2008.
[3] P. Janata and S. T. Grafton, "Swinging in the brain: shared neural substrates for behaviors related to sequencing and music," Nature Neuroscience, vol. 6, no. 7, pp. 682–687, 2003.
[4] B. Maess, S. Koelsch, T. C. Gunter, and A. D. Friederici, "Musical syntax is processed in Broca's area: an MEG study," Nature Neuroscience, vol. 4, no. 5, pp. 540–545, 2001.
[5] B. Tillmann, S. Koelsch, N. Escoffier et al., "Cognitive priming in sung and instrumental music: activation of inferior frontal cortex," NeuroImage, vol. 31, no. 4, pp. 1771–1782, 2006.
[6] B. Tillmann, S. Koelsch, N. Escoffier et al., "Cognitive priming in sung and instrumental music: activation of inferior frontal cortex," NeuroImage, vol. 31, no. 4, pp. 1771–1782, 2006.
[7] D. J. Levitin and V. Menon, "The neural locus of temporal structure and expectancies in music: evidence from functional neuroimaging at 3 Tesla," Music Perception, vol. 22, no. 3, pp. 563–575, 2005.
[8] D. Perani, M. C. Saccuman, P. Scifo et al., "Functional specializations for music processing in the human newborn brain," Proceedings of the National Academy of Sciences of the United States of America, vol. 107, no. 10, pp. 4758–4763, 2010.
[9] A. J. Blood and R. J. Zatorre, "Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion," Proceedings of the National Academy of Sciences of the United States of America, vol. 98, no. 20, pp. 11818–11823, 2001.
[10] S. Koelsch, T. Fritz, D. Y. V. Cramon, K. Müller, and A. D. Friederici, "Investigating emotion with music: an fMRI study," Human Brain Mapping, vol. 27, no. 3, pp. 239–250, 2006.
[11] S. Koelsch, S. Skouras, T. Fritz et al., "The roles of superficial amygdala and auditory cortex in music-evoked fear and joy," NeuroImage, vol. 31, pp. 1771–1782, 2013.
[12] K. J. Worsley and K. J. Friston, "Analysis of fMRI time-series revisited—again," NeuroImage, vol. 2, no. 3, pp. 173–181, 1995.
[13] K. J. Friston, "Bayesian estimation of dynamical systems: an application to fMRI," NeuroImage, vol. 16, no. 2, pp. 513–530, 2002.
[14] J. L. R. Andersson, C. Hutton, J. Ashburner, R. Turner, and K. Friston, "Modeling geometric deformations in EPI time series," NeuroImage, vol. 13, no. 5, pp. 903–919, 2001.
[15] D. L. Collins, P. Neelin, T. M. Peters, and A. C. Evans, "Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space," Journal of Computer Assisted Tomography, vol. 18, no. 2, pp. 192–205, 1994.
[16] C. J. Limb, "Structural and functional neural correlates of music perception," Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology, vol. 288, no. 4, pp. 435–446, 2006.
[17] R. J. Zatorre, A. C. Evans, and E. Meyer, "Neural mechanisms underlying melodic perception and memory for pitch," Journal of Neuroscience, vol. 14, no. 4, pp. 1908–1919, 1994.
[18] A. R. Halpern and R. J. Zatorre, "When that tune runs through your head: a PET investigation of auditory imagery for familiar melodies," Cerebral Cortex, vol. 9, no. 7, pp. 697–704, 1999.
[19] I. Peretz and R. J. Zatorre, "Brain organization for music processing," Annual Review of Psychology, vol. 56, pp. 89–114, 2005.
[20] A. S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound, The MIT Press, Cambridge, Mass, USA, 1990.
[21] T. D. Griffiths and J. D. Warren, "The planum temporale as a computational hub," Trends in Neurosciences, vol. 25, no. 7, pp. 348–353, 2002.
[22] C. Liégeois-Chauvel, I. Peretz, M. Babaï, V. Laguitton, and P. Chauvel, "Contribution of different cortical areas in the temporal lobes to music processing," Brain, vol. 121, no. 10, pp. 1853–1867, 1998.
[23] R. D. Patterson, S. Uppenkamp, I. S. Johnsrude, and T. D. Griffiths, "The processing of temporal pitch and melody information in auditory cortex," Neuron, vol. 36, no. 4, pp. 767–776, 2002.
[24] J. R. Augustine, "Circuitry and functional aspects of the insular lobe in primates including humans," Brain Research Reviews, vol. 22, no. 3, pp. 229–244, 1996.
[25] D.-E. Bamiou, F. E. Musiek, and L. M. Luxon, "The insula (Island of Reil) and its role in auditory processing: literature review," Brain Research Reviews, vol. 42, no. 2, pp. 143–154, 2003.
[26] A. J. Blood, R. J. Zatorre, P. Bermudez, and A. C. Evans, "Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions," Nature Neuroscience, vol. 2, no. 4, pp. 382–387, 1999.
[27] N. Gosselin, S. Samson, R. Adolphs et al., "Emotional responses to unpleasant music correlates with damage to the parahippocampal cortex," Brain, vol. 129, no. 10, pp. 2585–2592, 2006.
[28] S.-H. Wang and R. G. M. Morris, "Hippocampal-neocortical interactions in memory formation, consolidation, and reconsolidation," Annual Review of Psychology, vol. 61, pp. 49–79, 2010.
[29] N. M. van Strien, N. L. M. Cappaert, and M. P. Witter, "The anatomy of memory: an interactive overview of the parahippocampal-hippocampal network," Nature Reviews Neuroscience, vol. 10, no. 4, pp. 272–282, 2009.
[30] G. Bottini, R. Corcoran, R. Sterzi et al., "The role of the right hemisphere in the interpretation of figurative aspects of language: a positron emission tomography activation study," Brain, vol. 117, no. 6, pp. 1241–1253, 1994.
[31] M. Sotillo, L. Carretié, J. A. Hinojosa et al., "Neural activity associated with metaphor comprehension: spatial analysis," Neuroscience Letters, vol. 373, no. 1, pp. 5–9, 2005.
[32] Y. Harpaz, Y. Levkovitz, and M. Lavidor, "Lexical ambiguity resolution in Wernicke's area and its right homologue," Cortex, vol. 45, no. 9, pp. 1097–1103, 2009.
[33] M. Jung-Beeman, "Bilateral brain processes for comprehending natural language," Trends in Cognitive Sciences, vol. 9, no. 11, pp. 512–518, 2005.
[34] S. Koelsch, E. Kasper, D. Sammler, K. Schulze, T. Gunter, and A. D. Friederici, "Music, language and meaning: brain signatures of semantic processing," Nature Neuroscience, vol. 7, no. 3, pp. 302–307, 2004.
[35] S. Koelsch, T. Fritz, K. Schulze, D. Alsop, and G. Schlaug, "Adults and children processing music: an fMRI study," NeuroImage, vol. 25, no. 4, pp. 1068–1076, 2005.
[36] J. Daltrozzo and D. Schön, "Is conceptual processing in music automatic? An electrophysiological approach," Brain Research, vol. 1270, pp. 88–94, 2009.
[37] N. Steinbeis and S. Koelsch, "Shared neural resources between music and language indicate semantic processing of musical tension-resolution patterns," Cerebral Cortex, vol. 18, no. 5, pp. 1169–1178, 2008.
[38] M. Himmelbach, M. Erb, and H.-O. Karnath, "Exploring the visual world: the neural substrate of spatial orienting," NeuroImage, vol. 32, no. 4, pp. 1747–1759, 2006.
[39] J. Downar, A. P. Crawley, D. J. Mikulis, and K. D. Davis, "A cortical network sensitive to stimulus salience in a neutral behavioral context across multiple sensory modalities," Journal of Neurophysiology, vol. 87, no. 1, pp. 615–620, 2002.
[40] H.-O. Karnath, "New insights into the functions of the superior temporal cortex," Nature Reviews Neuroscience, vol. 2, no. 8, pp. 568–576, 2001.
[41] S. Koelsch, "Towards a neural basis of music-evoked emotions," Trends in Cognitive Sciences, vol. 14, no. 3, pp. 131–137, 2010.
[42] C. Di Dio and V. Gallese, "Neuroaesthetics: a review," Current Opinion in Neurobiology, vol. 19, no. 6, pp. 682–687, 2009.
[43] D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, "Music and emotion: electrophysiological correlates of the processing of pleasant and unpleasant music," Psychophysiology, vol. 44, no. 2, pp. 293–304, 2007.
