
Hearing Loss: Causes, Prevention, and Treatment
Ebook · 844 pages · 7 hours


About this ebook

Hearing Loss: Causes, Prevention, and Treatment covers hearing loss, its causes and prevention, its treatments, and future directions in the field, and also looks at the cognitive problems that can develop.

To avoid the “silent epidemic” of hearing loss, it is necessary to promote early screening, use hearing protection, and change public attitudes toward noise. Successful treatments of hearing loss restore hearing sensitivity via hearing aids or implantable devices, including cochlear, brainstem, or midbrain implants. Both the technical aspects and the effects of these devices on quality of life are discussed.

The integration of all aspects of hearing, hearing loss, prevention, and treatment makes this a perfect one-volume course in audiology at the graduate level. It is also a valuable reference for established audiologists, ear surgeons, neurologists, and pediatric and geriatric professionals.

  • Presents an in-depth overview of hearing loss, causes and prevention, treatments, and future directions in the field
  • Written for researchers and clinicians, such as auditory neuroscientists, audiologists, neurologists, speech pathologists, pediatricians, and geriatricians
  • Presents the benefits and problems with hearing aids and cochlear implants
  • Includes important quality of life issues
Language: English
Release date: Feb 22, 2017
ISBN: 9780128093498
Author

Jos J. Eggermont

Dr. Jos J. Eggermont is an Emeritus Professor in the Departments of Physiology and Pharmacology, and Psychology at the University of Calgary in Alberta, Canada. Dr. Eggermont is one of the most renowned scientists in the field of the auditory system and his work has contributed substantially to the current knowledge about hearing loss. His research comprises most aspects of audition with an emphasis on the electrophysiology of the auditory system in experimental animals. He has published over 225 scientific articles, authored/edited 10 books, and contributed to over 100 book chapters all focusing on the auditory system.


    Book preview

    Hearing Loss - Jos J. Eggermont


    Part I

    The Basics

    Outline

    Chapter 1 Hearing Basics

    Chapter 2 Brain Plasticity and Perceptual Learning

    Chapter 3 Multisensory Processing

    Chapter 1

    Hearing Basics

    Abstract

    New findings about the structure and function of the auditory system continue to emerge. In this chapter, we describe, besides the classical textbook knowledge on the action of the cochlea and central nervous system, recent insights into the effects of subclinical noise exposures on the ribbon synapses, and into the protection that the efferent system provides against these subclinical exposures. This protective effect does not seem to extend to traumatic noise exposure, although conditioning with moderate-level sound, which presumably engages the efferent system, does induce such protection. We also pay close attention to new findings regarding the type II synapses innervating the outer hair cells. The growing knowledge about the role of the efferent system in preventing or protecting against noise-induced and age-related hearing loss is extensively reviewed. The advent of powerful imaging techniques applied to the human auditory cortex has allowed comparison with the detailed functional knowledge of these areas in nonhuman primates and suggests very strong similarities between them.

    Keywords

    Cochlear potentials; auditory nerve fibers; compound action potentials; ribbon synapses; parallel processing; efferent system; central auditory system; neural imaging

    Hearing loss comprises reduced sensitivity for pure tones (the audiogram) and problems in the understanding of speech. The loss of sensitivity results from deficits in the transmission of sound via the middle ear and/or loss of transduction of mechanical vibrations into electrical nerve activity in the inner ear. Problems of speech understanding mainly result from deficits in the synchronization of auditory nerve fibers’ (ANFs) and central nervous system activity. This can be the result of problems in the auditory periphery but may also occur in the presence of nearly normal audiometric hearing. In order to appreciate the interaction of the audibility and understanding aspects of hearing loss, I will, besides presenting a condensed review of the auditory system, pay detailed attention to new findings pertaining to the important role of the ribbon synapses in the inner hair cells (IHCs), parallel processing in the ascending auditory system, and finally the importance of the efferent system.

    1.1 Hearing Sensitivity in the Animal Kingdom

    Every animal that grunts, croaks, whistles, sings, barks, meows, or speaks can hear. Most of the hearing animal species that we are familiar with are vertebrates; however, insects also have keen hearing. For cicadas and crickets this comes as no surprise, as they form choruses to enliven our nights. The humble fruit fly, Drosophila, a favorite of geneticists whose song is barely audible (Shorey, 1962), may also turn out to be important in elucidating the genetics of hearing loss (Christie et al., 2013).

    A common way to quantify hearing sensitivity (or loss) in humans is the audiogram—a plot of the threshold level of hearing at a fixed series of (typically octave-spaced) frequencies, between 125 Hz and 8 kHz in clinical settings. In research settings, a wider and more finely spaced range of frequencies is employed (Fig. 1.1), and the just audible sound pressure level (dB SPL) is plotted, whereas in clinical settings the loss of sensitivity relative to a normalized value is represented (dB HL). To avoid confusion we call the research representation the hearing field.

    Figure 1.1 Representative hearing fields from the five vertebrate classes: hardhead catfish, bullfrog, sparrow, cat, bobtail skink, and human. Data of hardhead catfish from Popper, A.N., Fay, R.R., 2011. Rethinking sound detection by fishes. Hear. Res. 273, 25–36; bullfrog, sparrow, and cat data from Fay, R.R., 1988. Hearing in Vertebrates. A Psychophysics Databook. Hill-Fay Associates, Winnetka, IL; bobtail skink data from Manley, G.A., 2002. Evolution of structure and function of the hearing organ of lizards. J. Neurobiol. 53, 202–211; human data from Heffner, H.E., Heffner, R.S., 2007. Hearing ranges of laboratory animals. J. Am. Assoc. Lab. Anim. Sci. 46, 20–22.
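
    The dB SPL versus dB HL distinction above amounts to subtracting a frequency-specific normal-hearing reference from the measured threshold. A minimal Python sketch; the reference values below are illustrative placeholders, not calibrated audiometric standards:

```python
# Sketch: converting a research "hearing field" threshold (dB SPL) into a
# clinical audiogram value (dB HL) by subtracting a frequency-specific
# normal-hearing reference. These reference values are ILLUSTRATIVE
# placeholders, not calibrated audiometric standards.
REFERENCE_SPL = {125: 45.0, 250: 25.5, 500: 13.5, 1000: 7.5,
                 2000: 9.0, 4000: 12.0, 8000: 15.5}

def spl_to_hl(freq_hz: int, threshold_db_spl: float) -> float:
    """dB HL = measured threshold (dB SPL) minus the normal-hearing reference."""
    return threshold_db_spl - REFERENCE_SPL[freq_hz]

# A listener whose 1 kHz threshold lies 40 dB above the reference value
# has a 40 dB HL hearing loss at that frequency.
print(spl_to_hl(1000, 47.5))  # → 40.0
```

    A normal-hearing listener thus sits near 0 dB HL at every audiogram frequency, even though the corresponding dB SPL thresholds vary with frequency.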

    As Fig. 1.1 shows, hearing sensitivity differs considerably between vertebrates, even between mammals. Small mammals often have better high-frequency hearing than humans, with the 60 dB SPL upper limits of the hearing field ranging from 34.5 kHz for the Japanese macaque, and about 60 kHz for the cat to more than 120 kHz for the horseshoe bat (Heffner and Heffner, 2007). One reason for this variation may be that small mammals need to hear higher frequencies than larger mammals do in order to make use of sound localization cues provided by the frequency-dependent attenuating effect of the head and pinnae on sound. As a result, mammals with small heads generally have better high-frequency hearing than mammals with large heads, such as the elephant. Almost all mammals have poorer low-frequency hearing than humans, with the 60 dB lower limits ranging from 28 Hz for the Japanese macaque to 2.3 kHz for the domestic mouse (Heffner and Heffner, 2007; not shown). Only the Indian elephant, with a 60-dB low-frequency limit of 17 Hz, is known to have significantly better low-frequency hearing than humans, reaching into the infrasound range (Garstang, 2004).

    Birds are among the most vocal vertebrates and have excellent hearing sensitivity. However, a striking feature of bird hearing is that the high-frequency limit, which falls between 6 and 12 kHz—even for small birds—is well below that of most mammals, including humans. Fig. 1.1 shows a typical bird audiogram, represented by the sparrow. Among reptiles, lizards such as the bobtail skink (Fig. 1.1) are the best-hearing species and are up to 30 dB more sensitive than alligators and crocodiles.

    Anurans (frogs and toads) are very vocal amphibians: in specific parts of the year, depending on the species, their behavior is dominated by sound. As I wrote earlier (Eggermont, 1988): Sound guides toads and frogs to breeding sites, sound is used to advertise the presence of males to other males by territorial calls, and sound makes the male position known to females through mating or advertisement calls. To have the desired effect these calls must be identified as well as localized. Frogs and toads are remarkably good at localizing conspecific males, especially when we take into account their small head size and the fact that hardly any of the territorial or mating calls has sufficient energy in the frequency region above 5 kHz to be audible at some distance. Notably, the bullfrog’s threshold is relatively low, at 10 dB SPL around 600 Hz (Fig. 1.1).

    Teleost fishes, the largest group of living vertebrates, include both vocal and nonvocal species. Vocality is especially evident in some species from their intense sound production during the breeding season (Bass and McKibben, 2003). Except for the hardhead catfish (Fig. 1.1), which hears sounds at approximately 20 dB SPL at 200 Hz, most fishes have thresholds around 40 dB SPL and, with few exceptions, do not hear sounds above 2 kHz (Popper and Fay, 2011).

    Nearly all insects have high-frequency hearing (Fonseca et al., 2000). For instance, the hearing ranges for crickets are 0.1–60 kHz, for grasshoppers 0.2–50 kHz, for flies 1–40 kHz, and for cicadas 0.1–25 kHz. Tiger moths are typically most sensitive to ultrasound frequencies between 30 and 50 kHz. The frequency sensitivity of the ears of moth species is often matched to the sonar emitted by the bats preying upon them (Conner and Corcoran, 2012).

    1.2 The Mammalian Middle Ear

    The auditory periphery of mammals is one of the most remarkable examples of a biomechanical system. It is highly evolved, with tremendous mechanical complexity (Puria and Steele, 2008).

    Transmission of sound energy from air to fluid typically results in considerable loss as a result of reflection from the fluid surface, estimated at about 99.7% of the incoming energy. This is compensated by the pressure gain provided by the ratio of the areas of the tympanic membrane (TM) (typically 0.55 cm²) and the stapes footplate (typically 0.032 cm²) in humans, which is approximately 17, and by the lever action of the middle ear bones, which contributes a factor of approximately 1.3 (Dallos, 1973). This would theoretically result in a combined gain of a factor of 22 (about 27 dB). In practice, the gain is considerably less, maximally between 20 and 25 dB in the 800–1500 Hz range (Dallos, 1973).
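
    The combined gain quoted from Dallos (1973) follows directly from these numbers; a quick check in Python:

```python
import math

# Middle-ear pressure gain from the values quoted in the text (human):
# the area ratio of the tympanic membrane to the stapes footplate,
# multiplied by the ossicular lever ratio.
tm_area_cm2 = 0.55          # tympanic membrane area, cm^2
footplate_area_cm2 = 0.032  # stapes footplate area, cm^2
lever_ratio = 1.3           # lever action of the middle ear bones

area_ratio = tm_area_cm2 / footplate_area_cm2  # ~17
pressure_gain = area_ratio * lever_ratio       # ~22
gain_db = 20 * math.log10(pressure_gain)       # ~27 dB

print(round(area_ratio, 1), round(pressure_gain, 1), round(gain_db, 1))
# → 17.2 22.3 27.0
```

    The 20·log10 conversion applies because the gain is a pressure (amplitude) ratio, not a power ratio.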

    Merchant et al. (1997) extensively described the middle ear action as the result of two mechanisms: ossicular and acoustic coupling. Ossicular coupling incorporates the gain in sound pressure that occurs through the TM and ossicular chain. In the normal middle ear, sound pressure in the ear canal results in TM motion transmitted through the malleus and incus to produce a force at the stapes head (Fig. 1.2). This force applied over the area of the footplate produces a pressure PS. PS represents the ossicular coupling of sound to the cochlea. Acoustic coupling refers to the difference in sound pressures acting directly on the oval window (OW), POW, and round window (RW), PRW. In normal ears, acoustic coupling, ΔP=(POW−PRW), is negligibly small, but it can play a significant role in some diseased middle ears (Peake et al., 1992).

    Figure 1.2 Illustration of ossicular and acoustic coupling from ambient sound to the cochlea. In the normal middle ear, sound pressure in the ear canal results in TM motion transmitted through the malleus and incus to produce a force at the stapes head (red path). This force applied over the area of the footplate produces a pressure PS. PS represents the ossicular coupling of sound to the cochlea. TM motion also compresses and dilates the air in the middle ear cavity, creating sound pressure in the middle ear (blue paths), which just outside the OW equals (POW) and at the RW (PRW). The difference between these two acoustic pressures, ΔP=POW−PRW represents the acoustic coupling of sound to the cochlea. In the normal ear, this acoustic coupling is negligibly small.

    1.3 The Mammalian Inner Ear

    Until 1971, the basilar membrane (BM) was considered to be a linear device with broad mechanical tuning, as originally found in the 1940s by von Békésy (1960). The bridge to the narrow ANF tuning was long thereafter considered the result of a second filter (Evans and Klinke, 1982). Thus, it took a while before the results from Rhode (1971), indicating that the BM was a sharply tuned nonlinear filtering device, were accepted. Appreciating these dramatic changes in viewing the working of the cochlea, Davis (1983) wrote: “We are in the midst of a major breakthrough in auditory physiology. Recent experiments force us, I believe, to accept a revolutionary new hypothesis concerning the action of the cochlea, namely, that an active process increases the vibration of the basilar membrane by energy provided somehow in the organ of Corti.” Then came another crucial discovery: the outer hair cells (OHCs), in response to depolarization, were capable of producing a mechanical force on the BM (Brownell, 1984), later called the cochlear amplifier, which is, in essence, the second filter.

    1.3.1 Basilar Membrane Mechanics

    The BM presents the first level of frequency analysis in the cochlea because of its changing stiffness and mass from base to apex. High-frequency sound produces maximal BM movement at the base of the cochlea (near the stapes) whereas low-frequency sound also activates the apical parts of the BM. Thus each site on the BM has a characteristic frequency (CF), to which it responds maximally in a strict tonotopic order (Robles and Ruggero, 2001). BM movements produce motion of hair cell stereocilia, which open and close transduction channels therein. This results in the generation of hair cell receptor potentials and the excitation of ANFs.

    In a normal ear the movement of the BM is nonlinear, i.e., the amplitude of its movement is not proportional to the SPL of the sound, but increases proportionally less for increments in higher SPLs. In a deaf ear, the BM movement is called passive (Békésy-labeled wave and envelope in Fig. 1.3) as it just reacts to the SPL, without cochlear amplification. Davis (1983) proposed a model for the activation of IHCs that combined a passive BM movement and an active one resulting from the action of a cochlear amplifier. The passive BM movement only activates the IHCs at levels of approximately 40 dB above normal hearing threshold (von Békésy, 1960). At lower sound levels and up to about 60 dB above threshold, the cochlear amplifier provides a mechanical amplification of BM movement in a narrow segment of the BM near the apical end of the passive traveling wave envelope (Fig. 1.3). The OHC motor action provides this amplification. Davis (1983) noted that both the classical high-intensity system and the active low-level cochlear amplifier system compress the large dynamic range of hearing into a much narrower range of mechanical movement of BM and consequently the cilia of the IHCs. Robles and Ruggero (2001) found that the high sensitivity and sharp-frequency tuning, as well as compression and other nonlinearities (two-tone suppression and intermodulation distortion), are highly labile, suggesting the action of a vulnerable cochlear amplifier. Davis (1983) underestimated the effect of the cochlear amplifier considerably (Fig. 1.3) as the next section shows.

    Figure 1.3 A traveling wave, as described by von Békésy (1960), is shown by the dashed line and its envelope by the heavy full line. The wave travels from right (base) to left (apex). The form of the envelope and the phase relations of the traveling wave are approximately those given by von Békésy (1960). To the envelope is added, near its left end, the effect of the cochlear amplifier. A tone of 2000 Hz thereby adds a peak at about 14 mm from the (human) stapes. The peak corresponds to the tip of the tuning curve for CF=2000 Hz. Reprinted from Davis, H., 1983. An active process in cochlear mechanics. Hear. Res. 9, 79–90, with permission from Elsevier.

    1.3.2 The Cochlear Amplifier

    Because of the high-frequency selectivity of the auditory system, Gold (1948) predicted that active feedback mechanisms must amplify passive BM movements induced by sound in a frequency-selective way. This active cochlear amplification process depends critically on OHCs, which are thought to act locally in the cochlea. When a pure tone stimulates a passive BM a resonance occurs at a unique location and activates the OHCs. These activated OHCs feed energy back into the system thereby enhancing the BM vibration. Because of saturation, the cochlear amplifier shows a compressive nonlinearity so that the lowest SPL sounds are amplified substantially more than high SPL ones (Müller and Gillespie, 2008). Hudspeth (2008) detailed this active process as characterized by amplification, frequency selectivity, compressive nonlinearity, and the generation of spontaneous otoacoustic emissions (SOAEs) (Fig. 1.4).

    Figure 1.4 Characteristics of the ear’s active process. (A) An input–output relation for the mammalian cochlea relates the magnitude of vibration at a specific position along the BM to the frequency of stimulation at a particular intensity. Amplification by the active process renders the actual cochlear response (red) over 100-fold as great as the passive response (blue). Note the logarithmic scales in this and the subsequent panels. (B) As a result of the active process, the observed BM response (red) is far more sharply tuned to a specific frequency of stimulation, the natural frequency, than is a passive response driven to the same peak magnitude by much stronger stimulation (blue). (C) Each time the amplitude of stimulation is increased 10-fold, the passive response distant from the natural frequency grows by an identical amount (green arrows). For the natural frequency at which the active process dominates, however, the maximal response of the BM increases by only 10¹/³, a factor of about 2.15 (orange arrowheads). This compressive nonlinearity implies that the BM is far more sensitive than a passive system at low stimulus levels, but approaches the passive level of responsiveness as the active process saturates for loud sounds. (D) The fourth characteristic of the active process is SOAE, the unprovoked production of one or more pure tones by the ear in a very quiet environment. For humans and many other species, the emitted sounds differ between individuals and from ear to ear but are stable over months or even years. Reprinted from Hudspeth, A.J., 2008. Making an effort to listen: mechanical amplification in the ear. Neuron 59, 530–545, with permission from Elsevier.
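
    The compressive growth described in panel (C) can be checked with a toy power-law model; the cube-root exponent comes from the 10¹/³ figure above and is only an idealization of the real input–output curve:

```python
# Toy model of the compressive nonlinearity sketched in panel (C): near the
# characteristic frequency, BM response grows roughly as the cube root of
# stimulus amplitude, so a 10-fold increase in input yields only a
# 10**(1/3) ≈ 2.15-fold increase in response. A pure power law is an
# idealization; real input-output curves are linear at very low levels and
# approach the passive response again for loud sounds.
def bm_response(stimulus_amplitude: float, exponent: float = 1 / 3) -> float:
    return stimulus_amplitude ** exponent

growth = bm_response(10.0) / bm_response(1.0)
print(round(growth, 2))  # → 2.15
```

    It is this compression that squeezes the roughly 120 dB dynamic range of hearing into a much narrower range of BM motion, as Davis (1983) noted above.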

    The OHC electromotile response is also nonlinear and works in a cycle-by-cycle mode up to a frequency of at least 70 kHz (Dallos and Fakler, 2002). Recently, a gene that is specifically expressed in OHCs was isolated and termed Prestin (Zheng et al., 2000). The action of prestin is orders of magnitude faster than that of any other cellular motor protein. Note that gene names are commonly indicated in italics, whereas the expressed protein, which may have the same name, is indicated in normal font. Prestin is required for normal auditory function (Fig. 1.5) because prestin knockout (KO), or knockin (KI), mice do not exhibit OHC electromotility (Liberman et al., 2002; Dallos et al., 2008) and thus do not show cochlear amplification. A gene KO is a genetic technique in which one of an organism’s genes is made inoperative (knocked out of the organism). A gene KI refers to a genetic engineering method that involves the insertion of a protein-coding cDNA sequence at a particular locus in an organism’s chromosome.

    Figure 1.5 Average (±SD) compound action potential masking tuning curves from an RW electrode for KI and wild-type mice. Probe tone frequency: 12 kHz. Reprinted from Dallos, P., Wu, X., Cheatham, M.A., Gao, J., Zheng, J., Anderson, C.T., et al., 2008. Prestin-based outer hair cell motility is necessary for mammalian cochlear amplification. Neuron 58, 333–339, with permission from Elsevier.

    Loss of this nonlinear amplification process results in hearing loss, loudness recruitment, reduced frequency selectivity, and changes in temporal processing. This manifests itself in hearing-impaired listeners as difficulties in speech understanding, especially in complex acoustic backgrounds (Oxenham and Bacon, 2003).

    van der Heijden and Versteegh (2015) recently challenged the involvement of the cochlear amplifier in BM movement, as sketched above. They recorded the vibrations at adjacent positions on the BM in sensitive gerbil cochleas with a single-point laser vibrometer to measure the velocity of the BM. From this measurement they derived a putative power amplification by the action of the OHCs and the local wave propagation on the BM. No local power amplification of soft sounds was evident, combined with strong local attenuation of intense sounds. van der Heijden and Versteegh (2015) also reported that: “The waves slowed down abruptly when approaching their peak, causing an energy densification that quantitatively matched the amplitude peaking, similar to the growth of sea waves approaching the beach.”

    1.3.3 Mechanoelectrical Transduction

    “The [BM] displacement is transferred to the hair bundles by means of the tectorial membrane, which contacts the OHC stereocilia and produces fluid movements that displace the IHC stereocilia. Movement of the hairs in the excitatory direction (i.e., toward the tallest row) depolarizes the hair cells whilst opposite deflections hyperpolarize them” (Furness and Hackney, 2008).

    There are two hair cell types in the cochlea: IHCs and OHCs. The IHCs receive up to 95% of the auditory nerve’s afferent innervation (Spoendlin and Schrott, 1988), but are fewer in number than the OHCs by a factor of 3–4 (He et al., 2006). As we have seen, the OHCs provide a frequency-dependent boost to the BM motion, which enhances the mechanical input to the IHCs, thereby promoting enhanced tuning and amplification. This occurs as follows, as labeled in Fig. 1.6: (1) A pressure difference across the cochlear partition causes the BM to move up (purple arrow). (2) The upward BM movement causes rotation of the organ of Corti toward the modiolus and shear of the reticular lamina relative to the tectorial membrane, which deflects OHC stereocilia in the excitatory direction (green arrow). (3) This stereocilia deflection opens mechanoelectrical transduction channels, which increases the receptor current driven into the OHC (blue arrow) by the potential difference between the endocochlear potential (+100 mV) and the OHC resting potential (−40 mV). This depolarizes the OHC. (4) OHC depolarization causes conformational changes in prestin molecules that induce a reduction in OHC length (red arrows). The OHC contraction pulls the BM upward toward the reticular lamina, which amplifies BM motion when the pull on the BM is in the correct phase. In contrast to OHCs, which are displacement detectors, IHCs are sensitive to the velocity of the fluid surrounding the stereocilia (Guinan et al., 2012).

    Figure 1.6 Steps in the cochlear amplification of BM motion for BM movement toward scala vestibuli. TM, tectorial membrane; RL, reticular lamina; IP, inner pillar. Reprinted from Guinan, J.J., Salt, A., Cheatham, M.A., 2012. Progress in cochlear physiology after Békésy. Hear. Res. 293, 12–20, with permission from Elsevier.

    1.3.4 Cochlear Microphonics and Summating Potentials

    Both IHC and OHC generate receptor potentials in response to sound (Russell and Sellick, 1978; Dallos et al., 1982). It has long been known that population responses from the cochlea can be recorded at remote sites such as the round window, tympanic membrane, or even from the scalp and can be used clinically (Eggermont et al., 1974; Fig. 1.7). These responses are called the cochlear microphonic (CM) and the summating potential (SP). The CM is produced almost exclusively from OHC receptor currents and when recorded from the RW membrane is dominated by the responses of OHCs in the basal turn. The SP is a direct-current component resulting from the nonsymmetric depolarization–hyperpolarization response of the cochlea, which can be of positive or negative polarity, and is likely also generated dominantly by the OHCs (Russell, 2008). The compound action potential (CAP) is mixed in with the CM and SP and will be described in Section 1.4.3.

    Figure 1.7 Sound-evoked gross potentials in the cochlea. In response to short tone bursts three stimulus-related potentials can be recorded from the cochlea. These potentials, CM, SP, and CAP, appear intermingled in the recorded response from the promontory. By presenting the stimulus alternately in phase and counter phase and averaging the recorded response, a separation can be obtained between CM on the one hand and CAP and SP on the other. High-pass filtering provides a separation between SP and CAP. This can also be obtained by increasing the repetition rate of the stimuli, which results in an adaptation of the CAP but leaves the SP unaltered. From Eggermont, J.J., 1974. Basic principles for electrocochleography. Acta Otolaryngol. Suppl. 316, 7–16.
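
    The polarity-alternation separation described in this caption is easy to sketch numerically: the CM flips sign with stimulus polarity while SP and CAP do not, so averaging the two recordings cancels the CM and half their difference recovers it. A toy example with made-up sample values:

```python
# Toy illustration of the polarity-alternation trick: the CM follows the
# stimulus polarity, while SP and CAP do not. The sample values below are
# made up for illustration only.
cm_cond = [0.5, -0.5, 0.5, -0.5]  # CM response to the condensation stimulus
sp_cap = [0.1, 0.3, 0.2, 0.0]     # polarity-invariant SP + CAP contribution

# The two recorded responses: in-phase and counter-phase stimulation.
cond = [c + s for c, s in zip(cm_cond, sp_cap)]
rare = [-c + s for c, s in zip(cm_cond, sp_cap)]

# Averaging cancels the CM; half the difference recovers it.
sp_cap_est = [round((a + b) / 2, 6) for a, b in zip(cond, rare)]
cm_est = [round((a - b) / 2, 6) for a, b in zip(cond, rare)]

print(sp_cap_est)  # → [0.1, 0.3, 0.2, 0.0]
print(cm_est)      # → [0.5, -0.5, 0.5, -0.5]
```

    In practice the averaged responses still mix SP and CAP, which the caption notes are then separated by high-pass filtering or by raising the stimulus repetition rate.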

    1.3.5 Otoacoustic Emissions

    Unlike other sensory receptor systems, the inner ear appears to generate signals of the same type as it is designed to receive. These sounds, called otoacoustic emissions (OAEs), have long been considered byproducts of the cochlear amplifier, the process that makes cochlear mechanics active by adding mechanical energy at the same frequency as a stimulus tone in a positive feedback process. This feature of the inner ear is one of the most important distinctions from other sensory receptors (Siegel, 2008).

    Kemp (1978) discovered that sound could evoke echoes from the ear. These echoes, called OAEs, result from the action of the cochlear amplifier. Guinan et al. (2012) described their generation as follows: “As a traveling wave moves apically [along the BM] it generates distortion due to cochlear nonlinearities (mostly from nonlinear characteristics of the OHC [mechanoelectrical transduction] channels, the same source that produces the nonlinear growth of BM motion), and encounters irregularities due to variations in cellular properties. As a result, some of this energy travels backwards in the cochlea and the middle ear to produce OAEs.” Normal human ears generally exhibit SOAEs. SOAEs arise from multiple reflections of forward and backward traveling waves that are powered by cochlear amplification, likely via OHC-stereocilia resonance (Shera, 2003). OAEs can be measured with a sensitive microphone in the ear canal and provide a noninvasive measure of cochlear amplification. There are two main types of OAEs in clinical use. Transient-evoked OAEs (TEOAEs) are evoked using a click stimulus; the evoked response from a click covers the frequency range up to around 4 kHz. Distortion product OAEs (DPOAEs) are evoked using a pair of primary tones f1 and f2 (f1 < f2).
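
    For DPOAEs, the distortion product most commonly measured clinically lies at 2f1 − f2; a trivial helper (the 1000/1200 Hz primary pair is chosen only as an example):

```python
# DPOAE primaries are two tones f1 < f2 (f2/f1 typically near 1.2); the
# distortion product most often measured clinically appears at 2*f1 - f2,
# below both primaries.
def dpoae_frequency(f1_hz: float, f2_hz: float) -> float:
    assert f1_hz < f2_hz, "by convention f1 is the lower primary"
    return 2 * f1_hz - f2_hz

# Example primary pair (values chosen only for illustration):
print(dpoae_frequency(1000.0, 1200.0))  # → 800.0
```

    Because the distortion product falls at a frequency where no stimulus energy is delivered, it can be picked out of the ear-canal recording as evidence of cochlear nonlinearity.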

    1.4 The Auditory Nerve

    1.4.1 Type I and Type II Nerve Fibers

    The cell bodies of auditory afferent neurons in mammals form the spiral ganglion, which runs along the modiolar edge of the organ of Corti. The peripheral axons of type I afferents (also known as radial fibers) contact only a single IHC (Robertson, 1984). However, each mammalian IHC provides input to 5–30 type I afferents (depending on the species), allowing parallel processing of sound-induced activity (Rutherford and Roberts, 2008). Type I neurons constitute 90–95% of cochlear nerve afferents (Spoendlin, 1969; Liberman, 1982). Both the peripheral axons (also called dendrites) and the central axons, as well as the cell bodies, of type I afferent neurons in mammals are myelinated. This increases conduction velocity, reduces temporal jitter, and decreases the probability of conduction failure across the cell body during high-frequency action-potential firing (Hossain et al., 2005).

    A small population of afferent axons in the mammalian cochlea (type II) are unmyelinated (Liberman et al., 1990), and each type II axon synapses with many OHCs. They may be monitoring (like muscle spindles) the state of the motor aspects of the OHCs, but may not contribute to the perception of sound. Like the type I afferents, they may initiate action potentials near their distal tips (Hossain et al., 2005). OHC synapses typically release single neurotransmitter vesicles with low probability so that extensive summation is required to reach the relatively high action potential initiation threshold. Modeling suggests that neurotransmitter release from at least six OHCs is required to trigger an action potential in a type II neuron (Weisz et al., 2014).
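
    The summation requirement reported by Weisz et al. (2014) can be illustrated with a toy Monte Carlo model; the release probability, threshold, and trial count below are made-up illustrative numbers, not physiological estimates:

```python
import random

# Toy Monte Carlo sketch of the summation requirement: each OHC synapse
# releases a single vesicle with low probability, and the type II fiber
# spikes only when at least `threshold` synapses release together.
# The probability and trial count are made-up illustrative numbers.
random.seed(1)

def type2_spike_fraction(n_ohc: int, p_release: float = 0.1,
                         threshold: int = 6, trials: int = 10_000) -> float:
    """Fraction of trials in which summed release reaches the spike threshold."""
    spikes = 0
    for _ in range(trials):
        releases = sum(random.random() < p_release for _ in range(n_ohc))
        if releases >= threshold:
            spikes += 1
    return spikes / trials

# With few contacting OHCs a spike is essentially never triggered; with many
# OHCs summation makes it merely possible, consistent with extensive
# summation being required.
print(type2_spike_fraction(10) < type2_spike_fraction(60))  # → True
```

    The steep dependence on the number of contacting OHCs is the point: a low-probability, single-vesicle synapse only drives the fiber when many inputs sum.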

    Recently Flores et al. (2015) suggested that type II cochlear afferents might be involved in the detection of noise-induced tissue damage. They implied that this represents a novel form of sensation, termed auditory nociception, potentially related to pain hyperacusis (Tyler et al., 2014). Using immunoreactivity to c-Fos, Flores et al. (2015) recorded neuronal activation in the brainstem of Vglut3−/− mice, in which the type I afferents were silenced. In these deaf mice, they found responses to hair-cell-damaging noise, but not to non-traumatic noise, in cochlear nucleus neurons. This response originated in the cochlea. Flores et al. (2015) concluded that their findings imply the existence of an alternative neuronal pathway from cochlea to brainstem that is activated by tissue-damaging noise and does not require glutamate release from IHCs. Corroboration of this idea that type II afferents are nociceptors comes from Liu et al. (2015), who showed that type II afferents are activated when OHCs are damaged. This response depends on purinergic receptors, binding ATP released from nearby supporting cells in response to hair cell damage. They found that selective activation of the metabotropic purinergic receptors increased type II afferent excitability, a potential mechanism for putative pain hyperacusis.

    1.4.2 Type I Responses

    Nearly all recordings of action potentials in mammalian ANFs are from axons of type I neurons, and nearly all type I neurons fire action potentials spontaneously. The spontaneous firing rate (SFR) ranges from less than 5 to approximately 100 spikes/s, irrespective of the ANF’s CF (Kiang et al., 1965; Tsuji and Liberman, 1997). The SFRs are correlated with both axon diameter and ribbon synapse location in the IHCs (Liberman, 1980; Merchan-Perez and Liberman, 1996; Tsuji and Liberman, 1997). In these studies, ANFs with high SFRs were found in spiral ganglion neurons (SGNs) with large-diameter peripheral axons that contacted IHCs predominantly on the inner pillar face (cf. Fig. 1.6). Low- and intermediate-SFR fibers contact the modiolar face and have synapses with larger ribbons and more vesicles than synapses with high-SFR fibers. Because the 5–30 afferent synapses on each IHC display this range of characteristics, and because ANFs of the same CF differ in SFR, it is likely that each IHC synapses with high-SFR as well as medium- and low-SFR ANFs (Rutherford and Roberts, 2008). High-SFR neurons have low thresholds and their driven firing rate saturates at low SPL. Medium- and low-SFR neurons have higher thresholds and typically do not show saturating firing rates (Fig. 1.8).

    Figure 1.8 Rate–intensity functions of primary auditory nerve fibers, including a fiber with threshold above CAP threshold, zero spontaneous rate, and a straight rate–intensity function. Reprinted from Müller, M., Robertson, D., Yates, G.K., 1991. Rate-versus-level functions of primary auditory nerve fibres: evidence for square law behaviour of all fibre categories in the guinea pig. Hear. Res. 55, 50–56, with permission from Elsevier.
