==Physiological and neural architecture of language and speech==
Speaking is the default modality for language in all cultures. The production of spoken language depends on sophisticated capacities for controlling the lips, tongue, and other components of the vocal apparatus, the ability to acoustically decode speech sounds, and the neurological apparatus required for acquiring and producing language.<ref>{{harvcoltxt|Trask|1999|pp=11–14, 105–113}}</ref> The study of the [[genetics|genetic]] bases for human language is at an early stage: the only gene that has definitely been implicated in language production is [[FOXP2]], which may cause a kind of [[Developmental verbal dyspraxia|congenital language disorder]] if affected by [[mutation]]s.<ref>{{harvcoltxt|Fisher|Lai|Monaco|2003}}</ref>

===The brain===
{{main|Neurolinguistics|Language processing in the brain}}
[[File:Brain Surface Gyri.SVG|thumb|Language areas of the brain. {{legend|#ffae76|[[Angular gyrus]]}} {{legend|#fcfb99|[[Supramarginal gyrus]]}} {{legend|#b5d9ed|[[Broca's area]]}} {{legend|#b7d09e|[[Wernicke's area]]}} {{legend|#f7a8b7|[[Primary auditory cortex]]}}]]
The brain is the coordinating center of all linguistic activity; it controls both the cognitive production of meaning and the mechanics of speech production. Nonetheless, knowledge of the neurological bases for language remains quite limited, though it has advanced considerably with the use of modern imaging techniques. The discipline of linguistics dedicated to studying the neurological aspects of language is called [[neurolinguistics]].<ref name="Lesser205">{{harvcoltxt|Lesser|1989|pp=205–206}}</ref>

Early work in neurolinguistics involved the study of language in people with brain lesions, to see how lesions in specific areas affect language and speech. In this way, neuroscientists in the 19th century discovered that two areas in the brain are crucially implicated in language processing. The first is [[Wernicke's area]], in the posterior section of the [[superior temporal gyrus]] in the dominant cerebral hemisphere. People with a lesion in this area of the brain develop [[receptive aphasia]], a condition in which there is a major impairment of language comprehension, while speech retains a natural-sounding rhythm and a relatively normal [[syntax|sentence structure]]. The second is [[Broca's area]], in the posterior [[inferior frontal gyrus]] of the dominant hemisphere. People with a lesion in this area develop [[expressive aphasia]], meaning that they know what they want to say but cannot get it out.<ref>{{harvcoltxt|Trask|1999|pp=105–107}}</ref> They are typically able to understand what is being said to them, but unable to speak fluently. Other symptoms that may be present in expressive aphasia include problems with [[word repetition]]. The condition affects both spoken and written language. Those with this aphasia also exhibit ungrammatical speech and an inability to use syntactic information to determine the meaning of sentences.
Both expressive and receptive aphasia also affect the use of sign language in ways analogous to how they affect speech: expressive aphasia causes signers to sign slowly and with incorrect grammar, whereas a signer with receptive aphasia will sign fluently but make little sense to others and have difficulty comprehending others' signs. This shows that the impairment is specific to the ability to use language, not to the physiology used for speech production.<ref>{{harvcoltxt|Trask|1999|p=108}}</ref><ref>{{harvcoltxt|Sandler|Lillo-Martin|2001|p=554}}</ref>

With technological advances in the late 20th century, neurolinguists have also incorporated non-invasive techniques such as [[functional magnetic resonance imaging]] (fMRI) and [[electrophysiology]] to study language processing in individuals without impairments.<ref name="Lesser205"/>

===Anatomy of speech===
{{main|Speech production|Phonetics|Articulatory phonetics}}
{{multiple image | align = right | direction = vertical | width = 200 | image1 = Illu01 head neck.jpg | caption1 = The human vocal tract | image2 = Spectrogram -iua-.png | caption2 = [[Spectrogram]] of American English vowels {{IPA|[i, u, ɑ]}} showing the formants ''f''<sub>1</sub> and ''f''<sub>2</sub> | image3 = Real-time MRI - Speaking (Chinese).ogv | caption3 = Real-time [[MRI scan]] of a person speaking in Mandarin Chinese }}
Spoken language relies on the human physical ability to produce [[sound]], which is a longitudinal wave propagated through the air at a frequency capable of vibrating the [[ear drum]]. This ability depends on the physiology of the human speech organs. These organs consist of the lungs, the voice box ([[larynx]]), and the upper vocal tract – the throat, the mouth, and the nose. By controlling the different parts of the speech apparatus, the airstream can be manipulated to produce different speech sounds.<ref>{{harvcoltxt|MacMahon|1989|p=2}}</ref>

The sound of speech can be analyzed into a combination of [[Segment (linguistics)|segmental and suprasegmental]] elements. The segmental elements are those that follow each other in sequences and are usually represented by distinct letters in alphabetic scripts, such as the Roman script. In free-flowing speech, there are no clear boundaries between one segment and the next, nor are there usually any audible pauses between them. Segments are therefore distinguished by their distinct sounds, which result from their different articulations, and can be either vowels or consonants. Suprasegmental phenomena encompass such elements as [[Stress (linguistics)|stress]], [[phonation]] type, voice [[timbre]], and [[Prosody (linguistics)|prosody]] or [[Intonation (linguistics)|intonation]], all of which may have effects across multiple segments.<ref name="MacMahon5">{{harvcoltxt|MacMahon|1989|pp=3}}</ref>

[[Consonant]]s and [[vowel]] segments combine to form [[syllable]]s, which in turn combine to form utterances; these can be distinguished phonetically as the space between two inhalations. [[Acoustics|Acoustically]], these different segments are characterized by different [[formant]] structures that are visible in a [[spectrogram]] of the recorded sound wave. Formants are the amplitude peaks in the frequency spectrum of a specific sound.<ref name="MacMahon5"/><ref name="IPA">{{harvcoltxt|International Phonetic Association|1999|pp=3–8}}</ref>

Vowels are those sounds that have no audible friction caused by the narrowing or obstruction of some part of the upper vocal tract.
They vary in quality according to the degree of mouth aperture and the placement of the tongue within the oral cavity.<ref name="MacMahon5"/> Vowels are called ''[[Close vowel|close]]'' when the mouth is relatively closed, as in the pronunciation of the vowel {{ipa|[i]}} (English "ee"), or ''[[open vowel|open]]'' when the mouth is relatively open, as in the vowel {{ipa|[a]}} (English "ah"). If the tongue is located towards the back of the mouth, the quality changes, creating vowels such as {{ipa|[u]}} (English "oo"). The quality also changes depending on whether the lips are [[Roundedness|rounded]] as opposed to unrounded, creating distinctions such as that between {{ipa|[i]}} (unrounded front vowel such as English "ee") and {{ipa|[y]}} ([[rounded front vowel]] such as German "ü").<ref>{{harvcoltxt|MacMahon|1989|pp=11–15}}</ref>

Consonants are those sounds that have audible friction or closure at some point within the upper vocal tract. Consonant sounds vary by place of articulation, i.e. the place in the vocal tract where the airflow is obstructed, commonly at the lips, teeth, [[alveolar ridge]], [[palate]], [[Soft palate|velum]], [[uvula]], or [[glottis]]. Each place of articulation produces a different set of consonant sounds, which are further distinguished by [[manner of articulation]], or the kind of constriction: full closure, in which case the consonant is called an ''[[occlusive]]'' or ''[[stop consonant|stop]]'', or different degrees of aperture, creating ''[[fricative]]s'' and ''[[approximant consonant|approximants]]''. Consonants can also be either ''[[Voice (phonetics)|voiced or unvoiced]]'', depending on whether the vocal cords are set in vibration by airflow during the production of the sound. Voicing is what distinguishes English {{ipa|[s]}} in ''bus'' ([[sibilant|unvoiced sibilant]]) from {{ipa|[z]}} in ''buzz'' ([[Voiced alveolar sibilant|voiced sibilant]]).<ref>{{harvcoltxt|MacMahon|1989|pp=6–11}}</ref>

Some speech sounds, both vowels and consonants, involve release of the airflow through the nasal cavity, and these are called ''[[Nasal consonant|nasals]]'' or ''[[Nasalization|nasalized]]'' sounds. Other sounds are defined by the way the tongue moves within the mouth, such as the l-sounds (called ''[[Lateral consonant|laterals]]'', because the air flows along the sides of the tongue) and the r-sounds (called ''[[rhotics]]'').<ref name="IPA"/>

By using these speech organs, humans can produce hundreds of distinct sounds: some appear very often in the world's languages, whereas others are much more common in certain language families or language areas, or are even specific to a single language.<ref name="LadefogedMaddieson">{{harvcoltxt|Ladefoged|Maddieson|1996}}</ref>