361, P = 0.03) and mid-lateral (r = 0.331, P = 0.049) sites. The above sections have listed all significant results of the study. Here we summarize them, focusing on the findings that bear directly on the main questions of the study and that are evaluated further in the Discussion. These findings are as follows. Behavioral measures revealed that all participants were faster and more accurate when classifying vocal as compared with musical
sounds, both standards and deviants. Musicians were overall more accurate when making sound duration judgments. They responded equally accurately to vocal and musical deviants, while non-musicians were less accurate and slower in their responses to music as compared with voice deviants. Electrophysiological measures showed a significantly larger N1 peak amplitude in musicians, regardless of the nature of the stimulus (standard vs. deviant, voice vs. music, or natural vs. spectrally-rotated). This group difference was present across a larger number of electrodes over the right as compared with the left hemisphere. The N1 peak amplitude to NAT sounds was positively correlated with self-rated music proficiency and performance on the MAP test. The two groups did not differ in the mean
amplitude of the P3a and P3b components. However, musicians showed a marginally larger RON. The mean amplitude of RON was significantly greater over the right hemisphere. We asked whether early sensory encoding of vocal and completely novel sounds may be enhanced in amateur musicians compared with non-musicians (e.g. Pantev et al., 1998; Shahin et al., 2003, 2004; Fujioka et al., 2006). We compared the N1 peak amplitude and peak latency elicited by musical and vocal sounds, as well as by their spectrally-rotated versions, as a measure of such sensory encoding. We found that musicians had a significantly larger N1 peak amplitude. This effect did not interact either with sound type (voice, music) or with naturalness (NAT, ROT). Instead, it was present across the
board, even in response to completely novel, never-before-heard spectrally-rotated sounds. The lack of timbre specificity in our results suggests that the enhancement in the N1 component shown by musicians is not due to the perceptual similarity between musical and vocal timbres; instead, it is likely that musical training leads to a more general enhancement in the encoding of at least some acoustic features that are shared by perceptually dissimilar but acoustically complex sound categories. One of the acoustic features whose perception may be fine-tuned by musical training is spectral complexity. For example, Shahin et al. (2005) manipulated the number of harmonics in a piano note and reported a larger P2m to tones with a higher number of harmonics in trained pianists.
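The brain-behavior relationships reported above (e.g. N1 peak amplitude correlating with self-rated music proficiency and MAP performance) rest on Pearson's product-moment correlation. As a minimal sketch of that computation, assuming invented data (the amplitude and proficiency values below are illustrative only, not the study's data):

```python
# Sketch of the Pearson correlation used for the N1-amplitude vs.
# proficiency analyses. Data values are invented for illustration.
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant values: N1 peak magnitude (|uV|) and
# self-rated proficiency score (arbitrary scale).
n1_magnitude = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9]
proficiency = [3.0, 4.5, 2.5, 6.0, 5.0, 4.0]

r = pearson_r(n1_magnitude, proficiency)  # positive r: larger N1 with higher proficiency
```

In practice the significance test (the reported P values) would come from a statistics package such as `scipy.stats.pearsonr`, which returns both r and the two-tailed P value.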