
Sound

From Wikipedia, the free encyclopedia


This article is about audible acoustic waves. For other uses, see Sound (disambiguation).

A drum produces sound via a vibrating membrane.


Sound is a mechanical wave that is an oscillation of pressure transmitted through a solid, liquid, or gas, composed of frequencies within the range of hearing and of a level sufficiently strong to be heard, or the sensation stimulated in organs of hearing by such vibrations.[1]

Contents

1 Propagation of sound
2 Perception of sound
3 Physics of sound
  3.1 Longitudinal and transverse waves
  3.2 Sound wave properties and characteristics
  3.3 Speed of sound
  3.4 Acoustics
  3.5 Noise
4 Sound pressure level
5 Equipment for dealing with sound
6 Sound measurement
7 See also
8 References
9 External links

Propagation of sound
Sound is a sequence of waves of pressure that propagates through compressible media such as air or water. (Sound can propagate through solids as well, but there are additional modes of propagation). Sound that is perceptible by humans has frequencies from about 20 Hz to 20,000 Hz. In air at standard temperature and pressure, the corresponding wavelengths of sound waves range from 17 m to 17 mm. During propagation, waves can be reflected, refracted, or attenuated by the medium.[2] The behavior of sound propagation is generally affected by three things:

A relationship between density and pressure. This relationship, affected by temperature, determines the speed of sound within the medium.

Propagation is also affected by the motion of the medium itself; for example, sound moving through wind. Independent of the sound's own motion through the medium, if the medium is moving, the sound is carried along with it.

The viscosity of the medium also affects the motion of sound waves. It determines the rate at which sound is attenuated. For many media, such as air or water, attenuation due to viscosity is negligible.

When sound is moving through a medium that does not have constant physical properties, it may be refracted (either dispersed or focused).[2]

Perception of sound

Human ear

The perception of sound in any organism is limited to a certain range of frequencies. For humans, hearing is normally limited to frequencies between about 20 Hz and 20,000 Hz (20 kHz),[3] although these limits are not definite. The upper limit generally decreases with age. Other species have different ranges of hearing. For example, dogs can perceive vibrations higher than 20 kHz, but are deaf to anything below 40 Hz. As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, and marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone, and radio) that allow them to generate, record, transmit, and broadcast sound. The scientific study of human sound perception is known as psychoacoustics.

Physics of sound

Spherical compression waves


The mechanical vibrations that can be interpreted as sound are able to travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through a vacuum.

Longitudinal and transverse waves


Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress at right angles to the direction of propagation.

Matter in the medium is periodically displaced by a sound wave, and thus oscillates. The energy carried by the sound wave converts back and forth between the potential energy of the extra compression (in case of longitudinal waves) or lateral displacement strain (in case of transverse waves) of the matter and the kinetic energy of the oscillations of the medium.

Sound wave properties and characteristics

Sinusoidal waves of various frequencies; the bottom waves have higher frequencies than those above. The horizontal axis represents time.
Sound waves are often simplified to a description in terms of sinusoidal plane waves, which are characterized by these generic properties:

Frequency, or its inverse, the period
Wavelength
Wavenumber
Amplitude
Sound pressure
Sound intensity
Speed of sound
Direction

Sometimes speed and direction are combined as a velocity vector; wavenumber and direction are combined as a wave vector. Transverse waves, also known as shear waves, have an additional property, polarization, which is not a characteristic of longitudinal sound waves.
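As an illustration of how several of these properties interrelate, here is a minimal Python sketch, assuming dry air at 20 °C; the constant and function names are ours, not from any standard library:

```python
import math

SPEED_OF_SOUND_AIR = 343.0  # m/s, dry air at 20 degrees C (approximate)

def wave_properties(frequency_hz, speed=SPEED_OF_SOUND_AIR):
    """Derive period, wavelength, and wavenumber from frequency and speed."""
    period = 1.0 / frequency_hz              # seconds per cycle
    wavelength = speed / frequency_hz        # metres
    wavenumber = 2.0 * math.pi / wavelength  # radians per metre
    return period, wavelength, wavenumber

period, wavelength, wavenumber = wave_properties(440.0)
print(f"period     = {period * 1000:.3f} ms")  # ~2.273 ms
print(f"wavelength = {wavelength:.3f} m")      # ~0.780 m
print(f"wavenumber = {wavenumber:.2f} rad/m")  # ~8.06 rad/m
```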

Speed of sound

U.S. Navy F/A-18 breaking the sound barrier. The white halo is formed by condensed water droplets thought to result from a drop in air pressure around the aircraft (see Prandtl-Glauert Singularity).
Main article: Speed of sound

The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. In general, the speed of sound is proportional to the square root of the ratio of the elastic modulus (stiffness) of the medium to its density. Those physical properties and the speed of sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In air at 20 °C (68 °F) at sea level, the speed of sound is approximately 343 m/s (1,230 km/h; 767 mph), using the formula v = (331 + 0.6 T) m/s, where T is the temperature in degrees Celsius.[4][5] In fresh water, also at 20 °C, the speed of sound is approximately 1,482 m/s (5,335 km/h; 3,315 mph). In steel, the speed of sound is about 5,960 m/s (21,460 km/h; 13,330 mph).[6] The speed of sound is also slightly sensitive (a second-order anharmonic effect) to the sound amplitude, which means that there are nonlinear propagation effects, such as the production of harmonics and mixed tones not present in the original sound (see parametric array).
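The quoted temperature formula is simple enough to compute directly; a minimal Python sketch (the function name and sample temperatures are ours):

```python
def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air: v = (331 + 0.6 T) m/s."""
    return 331.0 + 0.6 * temp_celsius

for t in (0, 20, 35):
    print(f"{t:3d} C: {speed_of_sound_air(t):5.1f} m/s")
# 0 C: 331.0 m/s, 20 C: 343.0 m/s, 35 C: 352.0 m/s
```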

Acoustics
Main article: Acoustics

Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustics technology may be called an acoustical or audio engineer. The application of acoustics can be seen in almost all aspects of modern society, the most obvious being the audio and noise control industries.

Noise
Main article: Noise

Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal.

Sound pressure level

Main article: Sound pressure

Sound pressure is the difference, in a given medium, between the average local pressure and the pressure in the sound wave. The square of this difference (i.e., the square of the deviation from the equilibrium pressure) is usually averaged over time and/or space, and the square root of this average provides a root mean square (RMS) value. For example, 1 Pa RMS sound pressure (94 dB SPL) in atmospheric air implies that the actual pressure in the sound wave oscillates between (1 atm − √2 Pa) and (1 atm + √2 Pa), that is, between 101323.6 and 101326.4 Pa. Such a tiny (relative to atmospheric) variation in air pressure at an audio frequency is perceived as a deafening sound, and can cause hearing damage. As the human ear can detect sounds with a wide range of amplitudes, sound pressure is often measured as a level on a logarithmic decibel scale. The sound pressure level (SPL), or Lp, is defined as

Lp = 20 log10(p / pref) dB,

where p is the root-mean-square sound pressure and pref is a reference sound pressure. Commonly used reference sound pressures, defined in the standard ANSI S1.1-1994, are 20 µPa in air and 1 µPa in water. Without a specified reference sound pressure, a value expressed in decibels cannot represent a sound pressure level. Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that the measured level matches perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to noise, and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels.
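As a worked example of the definition above, a minimal Python sketch; the function name is ours, and the 20 µPa air reference is the ANSI S1.1-1994 value cited in the text:

```python
import math

P_REF_AIR = 20e-6  # reference sound pressure in air: 20 micropascals

def spl_db(p_rms, p_ref=P_REF_AIR):
    """Sound pressure level: Lp = 20 log10(p / pref), in dB."""
    return 20.0 * math.log10(p_rms / p_ref)

print(f"{spl_db(1.0):.1f} dB SPL")    # 1 Pa RMS -> ~94.0 dB, as in the text
print(f"{spl_db(20e-6):.1f} dB SPL")  # the reference pressure itself -> 0 dB
```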

Equipment for dealing with sound


Equipment for generating or using sound includes musical instruments, hearing aids, sonar systems and sound reproduction and broadcasting equipment. Many of these use electro-acoustic transducers such as microphones and loudspeakers.

What is frequency?
The number of cycles per unit of time is called the frequency. For convenience, frequency is most often measured in cycles per second (cps) or the interchangeable hertz (Hz) (60 cps = 60 Hz), named after the 19th-century physicist Heinrich Hertz. 1000 Hz is often referred to as 1 kHz (kilohertz) or simply '1k' in studio parlance.

The range of human hearing in the young is approximately 20 Hz to 20 kHz; the higher number tends to decrease with age (as do many other things). It may be quite normal for a 60-year-old to hear a maximum of 16,000 Hz.
Amazing factoid #2: For comparison, it is believed that many whales and dolphins can create and perceive sounds in the 175 kHz range. Bats use slightly lower frequencies for their echo-location system.

Frequencies above and below the range of human hearing are also commonly used in computer music studios. We refer to these ranges as:
<20 Hz          sub-audio rate
20 Hz-20 kHz    audio rate
>20 kHz         ultrasonic
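These boundaries translate into a trivial classification rule, sketched below in Python; the function name is ours, and the treatment of the exact 20 Hz and 20 kHz boundaries is an arbitrary choice:

```python
def classify_frequency(freq_hz):
    """Label a frequency by the studio ranges listed above."""
    if freq_hz < 20:
        return "sub-audio rate"
    if freq_hz <= 20_000:
        return "audio rate"
    return "ultrasonic"

for f in (5, 440, 20_000, 44_100):
    print(f"{f} Hz -> {classify_frequency(f)}")
```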

Sub-audio signals are used as controls (since we can't hear them) in synthesis to produce effects like vibrato. The lowest 32' organ pipes also produce fundamental frequencies below our ability to hear them (the lowest, the C four octaves below 'middle C', is 16.4 Hz); we may sense the vibrations with our body or extrapolate the fundamental pitch from the higher audible frequencies (discussed below), but these super-low ranks are usually doubled with higher ranks which reinforce their partials.

The perceived pitch of a sound is our ear/mind's subjective interpretation of its frequency. As will be discussed in a later chapter, an increased frequency is perceived by us as a higher pitch, although not linearly: a frequency lowered by 400 Hz will not be perceived as a change equivalent to a pitch raised by 400 Hz. Therefore, frequency and pitch should not be considered interchangeable terms. Frequency is directly related to wavelength, often represented by the Greek letter lambda (λ). The wavelength is the distance in space required to complete a full cycle of a frequency; the wavelength of a sound is inversely proportional to its frequency. The formula is: wavelength (λ) = speed of sound / frequency
Example: A440 Hz (the frequency many orchestras tune to) in a dry, sea-level, 68 °F room would create a waveform that is ~2.56 ft long (2.56 = 1128 (feet/sec) / 440). Be certain to measure the speed of sound and the wavelength in the same units. Notice that if the speed of sound changed due to temperature, altitude, humidity, or conducting medium, so too would the wavelength.
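The same arithmetic as a minimal Python sketch (the function name is ours):

```python
def wavelength(speed_of_sound, frequency_hz):
    """Wavelength = speed of sound / frequency (use consistent units)."""
    return speed_of_sound / frequency_hz

# The A440 example from the text: dry air at sea level, 68 F (~1128 ft/sec)
print(f"{wavelength(1128.0, 440.0):.2f} ft")  # ~2.56 ft
# The same pitch in metric units at 20 C (~343 m/s)
print(f"{wavelength(343.0, 440.0):.3f} m")    # ~0.780 m
```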

As can be seen from the above formula, lower frequencies have longer wavelengths. We are able to hear lower frequencies around a corner because the longer wavelengths refract, or bend, around objects more easily than do shorter ones. Longer wavelengths are also harder for us to locate directionally, which is why you can put your Surround Sound subwoofer almost anywhere in a room except perhaps underneath you. At 20 °C, sound waves in the human hearing spectrum have wavelengths from 0.0172 m (0.68 inches) to 17.2 meters (56.4 feet).

One particularly interesting frequency phenomenon is the Doppler effect or Doppler shift. You've no doubt seen movies where a police siren or train whistle seems to drop in pitch as it passes the listener. In actuality, the wavelengths of sound waves from a moving source are compressed ahead of the source and expanded behind it, creating the sensation of a higher and then lower frequency than is actually being produced by the source. This is the same phenomenon used by astronomers with light wavelengths to calculate the speed and distance of a receding star: the light wavelengths of stars moving away are shifted toward the red end of the spectrum, hence the term red shift.

The formula for an approaching sound source is:

f_observed = f_source × v / (v − v_source)

The formula for a receding sound source is:

f_observed = f_source × v / (v + v_source)

where f_observed is the frequency we hear, f_source is the frequency of the source, v is the speed of sound, and v_source is the speed of the approaching or receding sound source. Example: At 20 °C the speed of sound (v) is 343.7 m/s. An oboist in a convertible traveling at 29 m/s (v_source), or 65 mi/hr, is tuning to A440. As the convertible approaches your position, you hear 480 Hz, and as the car passes and moves away, you hear 405 Hz.
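A minimal Python sketch of the two formulas, reproducing the oboist example (the function and parameter names are ours):

```python
def doppler_observed(f_source, v_source, v_sound=343.7, approaching=True):
    """Frequency heard by a stationary listener from a moving source."""
    if approaching:
        return f_source * v_sound / (v_sound - v_source)
    return f_source * v_sound / (v_sound + v_source)

# Oboist tuning to A440 in a convertible at 29 m/s, speed of sound 343.7 m/s
print(f"{doppler_observed(440, 29, approaching=True):.1f} Hz")   # ~480.5 Hz
print(f"{doppler_observed(440, 29, approaching=False):.1f} Hz")  # ~405.8 Hz
```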

It is particularly important for musicians to have their hearing tested regularly; audiograms may depict both the maximum frequency heard and any loss of sensitivity in certain frequency ranges. Many audiologists will test only up to 8 kHz, because that is considered the high end of what is necessary for speech perception. Composers should insist on tests for the full hearing spectrum if possible.

What is a Sound Spectrum?


Most sounds are made up of a complicated mixture of vibrations. (There is an introduction to sound and vibrations in the document "How woodwind instruments work".) If you are reading this on the web, you can probably hear the sound of the fan in your computer, perhaps the sound of the wind outside, the rumble of traffic - or perhaps you have some music playing in the background, in which case there is a mixture of high notes and low notes, and some sounds (such as drum beats and cymbal crashes) which have no clear pitch.

A sound spectrum is a representation of a sound, usually a short sample of a sound, in terms of the amount of vibration at each individual frequency. It is usually presented as a graph of either power or pressure as a function of frequency. The power or pressure is usually measured in decibels, and the frequency is measured in vibrations per second (or hertz, abbreviation Hz) or thousands of vibrations per second (kilohertz, abbreviation kHz). You can think of the sound spectrum as a sound recipe: take this amount of that frequency, add this amount of that frequency, etc., until you have put together the whole, complicated sound. Today, sound spectra (the plural of spectrum is spectra) are usually measured using:

a microphone, which measures the sound pressure over a certain time interval,
an analogue-to-digital converter, which converts this to a series of numbers (representing the microphone voltage) as a function of time, and
a computer, which performs a calculation upon these numbers.

Your computer probably has the hardware to do this already (a sound card). Many software packages for sound analysis or sound editing can take a short sample of a sound recording, perform the calculation to obtain a spectrum (a discrete Fourier transform, or DFT), and display it in 'real time' (i.e. after a brief delay). If you have these, you can learn a lot about spectra by singing sustained notes (or playing notes on a musical instrument) into the microphone and looking at their spectra. If you change the loudness, the size (or amplitude) of the spectral components gets bigger. If you change the pitch, the frequency of all of the components increases. If you change a sound without changing its loudness or its pitch, then you are, by definition, changing its timbre. (Timbre has a negative definition: it is the sum of all the qualities that are different in two different sounds which have the same pitch and the same loudness.) One of the things that determines the timbre is the relative size of the different spectral components. If you sing "ah" and "ee" at the same pitch and loudness, you will notice that there is a big difference between the spectra.
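To make the 'recipe' idea concrete, here is a minimal Python sketch that builds a synthetic harmonic tone and computes its spectrum with NumPy's FFT; the tone and its harmonic amplitudes are invented for illustration:

```python
import numpy as np

rate = 44100                # samples per second
t = np.arange(rate) / rate  # one second of time points

# A 220 Hz fundamental plus three harmonics of decreasing amplitude
signal = sum(a * np.sin(2 * np.pi * f * t)
             for f, a in [(220, 1.0), (440, 0.5), (660, 0.25), (880, 0.125)])

spectrum = np.fft.rfft(signal)                  # the DFT of the sample
freqs = np.fft.rfftfreq(len(signal), 1 / rate)  # frequency of each bin
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

# Print the strongest components: the harmonic "recipe" of the sound
for i in np.argsort(power_db)[-4:][::-1]:
    print(f"{freqs[i]:6.0f} Hz  {power_db[i]:6.1f} dB")
```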
Velocity of Sound is an album by The Apples in Stereo. It was the group's fifth album, released in October 2002. The American release has an orange album cover, while the European version is green and the Japanese version is blue. The bonus track is also different for each version. The album is somewhat louder and more aggressive than the band's four previous studio albums, with more emphasis on electric instruments (such as the distorted sounds of an electric guitar) than acoustic.

Speed of sound
From Wikipedia, the free encyclopedia

For other uses, see Speed of sound (disambiguation).

Pressure-pulse or compression-type wave (longitudinal wave) confined to a plane. This is the only type of sound wave that travels in fluids (gases and liquids)

Transverse wave affecting atoms initially confined to a plane. This additional type of sound wave (additional type of elastic wave) travels only in solids, and the sideways shearing motion may take place in any direction at right angles to the direction of wave-travel (only one shear direction is shown here, at right angles to the plane). Furthermore, the right-angle shear direction may change over time and distance, resulting in different types of polarization of shear-waves
The speed of sound is the distance travelled during a unit of time by a sound wave propagating through an elastic medium. In dry air at 20 °C (68 °F), the speed of sound is 343.2 metres per second (1,126 ft/s). This is 1,236 kilometres per hour (768 mph), or about one kilometre in three seconds or approximately one mile in five seconds.

In fluid dynamics, the speed of sound in a fluid medium (gas or liquid) is used as a relative measure of speed itself. The speed of an object (in distance per time) divided by the speed of sound in the fluid is called the Mach number. Objects moving at speeds greater than Mach 1 are traveling at supersonic speeds.

The speed of sound in an ideal gas is independent of frequency, but it weakly depends on frequency in all real physical situations. It is a function of the square root of the absolute temperature, but is nearly independent of pressure or density for a given gas. For different gases, the speed of sound is inversely dependent on the square root of the mean molecular weight of the gas, and affected to a lesser extent by the number of ways in which the molecules of the gas can store heat from compression, since sound in gases is a type of compression. Although, in the case of gases only, the speed of sound may be expressed in terms of a ratio of both density and pressure, these quantities are not fully independent of each other, and canceling their common contributions from physical conditions leads to a velocity expression using the independent variables of temperature, composition, and heat capacity noted above.

In common everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance. Sound travels faster in liquids and non-porous solids than it does in air. It travels about 4.3 times faster in water (1,484 m/s), and nearly 15 times as fast in iron (5,120 m/s), than in air at 20 degrees Celsius.

In solids, sound waves propagate as two different types. A longitudinal wave is associated with compression and decompression in the direction of travel, which is the same process as all sound waves in gases and liquids. A transverse wave, often called a shear wave, is due to elastic deformation of the medium perpendicular to the direction of wave travel; the direction of shear deformation is called the "polarization" of this type of wave. In general, transverse waves occur as a pair of orthogonal polarizations. These different waves (compression waves and the different polarizations of shear waves) may have different speeds at the same frequency; therefore, they arrive at an observer at different times, an extreme example being an earthquake, where sharp compression waves arrive first and rocking transverse waves seconds later.

The speed of an elastic wave in any medium is determined by the medium's compressibility and density. The speed of shear waves, which can occur only in solids, is determined by the solid material's stiffness, compressibility, and density.
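The temperature and molecular-weight dependence described above is captured by the ideal-gas relation c = sqrt(γRT/M); a minimal Python sketch with approximate constants for dry air:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def speed_of_sound_gas(gamma, molar_mass_kg, temp_kelvin):
    """Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temp_kelvin / molar_mass_kg)

# Dry air: gamma ~= 1.4, molar mass ~= 0.029 kg/mol
c_air = speed_of_sound_gas(1.4, 0.029, 293.15)
print(f"air at 20 C: {c_air:.0f} m/s")  # ~343 m/s, matching the text

# Mach number of an object moving at 400 m/s through that air
print(f"Mach {400.0 / c_air:.2f}")      # > 1, i.e. supersonic
```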

Noise reduction is the process of removing noise from a signal. All recording devices, whether analogue or digital, have traits which make them susceptible to noise. Noise can be random or white noise with no coherence, or coherent noise introduced by the device's mechanism or processing algorithms. In electronic recording devices, a major form of noise is hiss caused by random electrons that, heavily influenced by heat, stray from their designated path. These stray electrons influence the voltage of the output signal and thus create detectable noise.

In the case of photographic film and magnetic tape, noise (both visible and audible) is introduced due to the grain structure of the medium. In photographic film, the size of the grains in the film determines the film's sensitivity, more sensitive film having larger grains. In magnetic tape, the larger the grains of the magnetic particles (usually ferric oxide or magnetite), the more prone the medium is to noise. To compensate for this, larger areas of film or magnetic tape may be used to lower the noise to an acceptable level.
Contents

1 In audio
  1.1 Dolby and dbx noise reduction system
  1.2 Dynamic Noise Reduction
  1.3 Other approaches
2 In images
  2.1 Types
  2.2 Removal
    2.2.1 Tradeoffs
    2.2.2 Chroma and luminance noise separation
    2.2.3 Linear smoothing filters
    2.2.4 Signal combination
    2.2.5 Anisotropic diffusion
    2.2.6 Non-local means
    2.2.7 Nonlinear filters
  2.3 Software programs
3 See also
  3.1 General noise issues
  3.2 Audio
  3.3 Video
4 References
5 External links

In audio

Noise reduction example

Example of noise reduction using Audacity with 0 dB, 5 dB, 12 dB, and 30 dB reduction, 150 Hz frequency smoothing, and 0.15 seconds attack/decay time.


Analog tape recordings may exhibit a type of noise known as tape hiss. This is related to the particle size and texture used in the magnetic emulsion that is sprayed on the recording media, and also to the relative tape velocity across the tape heads. Four types of noise reduction exist: single-ended pre-recording, single-ended hiss reduction, single-ended surface noise reduction, and codec or dual-ended systems. Single-ended pre-recording systems (such as Dolby HX Pro) work to affect the recording medium at the time of recording. Single-ended hiss reduction systems (such as DNR) work to reduce noise as it occurs, including both before and after the recording process as well as for live broadcast applications. Single-ended surface noise reduction (such as CEDAR and the earlier SAE 5000A and Burwen TNE 7000) is applied to the playback of phonograph records to attenuate the sound of scratches, pops, and surface non-linearities. Dual-ended systems (such as Dolby NR and dbx Type I and II) have a pre-emphasis process applied during recording and then a de-emphasis process applied at playback.

Dolby and dbx noise reduction system

While there are dozens of different kinds of noise reduction, the first widely used audio noise reduction technique was developed by Ray Dolby in 1966. Intended for professional use, Dolby Type A was an encode/decode system in which the amplitude of frequencies in four bands was increased during recording (encoding), then decreased proportionately during playback (decoding). The Dolby B system (developed in conjunction with Henry Kloss) was a single-band system designed for consumer products. In particular, when recording quiet parts of an audio signal, the frequencies above 1 kHz would be boosted. This had the effect of increasing the signal-to-noise ratio on tape by up to 10 dB, depending on the initial signal volume. When it was played back, the decoder reversed the process, in effect reducing the noise level by up to 10 dB. The Dolby B system, while not as effective as Dolby A, had the advantage of remaining listenable on playback systems without a decoder.
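The level-dependent boost-then-cut idea can be sketched as a simple gain law. This is a toy illustration, not Dolby's actual characteristic: the threshold, the linear curve, and the omission of the band-split filter and time constants are all our simplifications.

```python
def dolby_b_gain_db(level_db, max_boost_db=10.0, threshold_db=-40.0):
    """Toy encode boost (dB) for the high band as a function of signal level."""
    if level_db >= 0.0:
        return 0.0                   # loud passages need no boost
    boost = max_boost_db * (level_db / threshold_db)
    return min(max_boost_db, boost)  # quiet passages get up to +10 dB

for level in (0, -20, -40, -60):
    g = dolby_b_gain_db(level)
    print(f"{level:4d} dB signal: encode {g:+.1f} dB, decode {-g:+.1f} dB")
```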

dbx was the competing analog noise reduction system, developed by dbx laboratories. It used a root-mean-square (RMS) encode/decode algorithm, with the noise-prone high frequencies boosted and the entire signal fed through a 2:1 compander. dbx operated across the entire audible bandwidth and, unlike Dolby B, was unusable as an open-ended system; however, it could achieve up to 30 dB of noise reduction. Since analog video recordings use frequency modulation for the luminance part (composite video signal in direct colour systems), which keeps the tape at saturation level, audio-style noise reduction is unnecessary.

Dynamic Noise Reduction

Dynamic Noise Reduction (DNR) is an audio noise reduction system, introduced by National Semiconductor to reduce noise levels on long-distance telephony.[1] First sold in 1981, DNR is frequently confused with the far more common Dolby noise reduction system.[2] However, unlike Dolby and dbx Type I & Type II noise reduction systems, DNR is a playback-only signal processing system that does not require the source material to first be encoded, and it can be used together with other forms of noise reduction.[3] It was a development of the unpatented Philips Dynamic Noise Limiter (DNL) system, introduced in 1971, with the circuitry on a single chip.[4][5] Because DNR is non-complementary, meaning it does not require encoded source material, it can be used to remove background noise from any audio signal, including magnetic tape recordings and FM radio broadcasts, reducing noise by as much as 10 dB.[6] It can be used in conjunction with other noise reduction systems, provided that they are used prior to applying DNR to prevent DNR from causing the other noise reduction system to mistrack. One of DNR's first widespread applications was in the GM Delco car stereo systems in U.S. GM cars introduced in 1984.[7] It was also used in factory car stereos in Jeep vehicles in the 1980s, such as the Cherokee XJ. Today, DNR, DNL, and similar systems are most commonly encountered as a noise reduction system in microphone systems.[8]
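As a rough illustration of the playback-only, non-complementary idea behind DNL/DNR, here is a toy Python sketch: a one-pole low-pass filter whose cutoff follows the signal envelope, so quiet, hiss-dominated passages are filtered harder than loud ones. Every constant here is an illustrative guess, not the actual DNL/DNR design:

```python
import numpy as np

def dynamic_noise_limit(x, rate=44100, f_min=1000.0, f_max=16000.0):
    """Toy playback-only noise limiter: a level-driven low-pass filter."""
    y = np.zeros_like(x)
    env = 0.0
    for n in range(len(x)):
        env = max(abs(x[n]), 0.999 * env)         # fast attack, slow decay
        cutoff = f_min + (f_max - f_min) * min(1.0, 10.0 * env)
        a = np.exp(-2.0 * np.pi * cutoff / rate)  # one-pole coefficient
        y[n] = x[n] if n == 0 else (1.0 - a) * x[n] + a * y[n - 1]
    return y

# Usage: a hiss-only passage followed by a loud tone
t = np.arange(44100) / 44100
hiss = 0.01 * np.random.randn(44100)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
out = dynamic_noise_limit(np.concatenate([hiss, tone]))
```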
