Physics of Sound
Within music and music production, there are many different aspects to take into account as producers or sound engineers. Whether you are recording a band or a solo singer, or working on a live performance, each aspect can affect the final product, and it is up to you as producers, sound engineers and music technologists to make sure that everything is done properly. In this article I will discuss the basics of acoustics and the sound all around us.
What is sound?
Sound is energy caused by vibrations that travel through a medium, are received by a human or animal ear and are interpreted by the brain. Sound is all around us all the time, and the process of your ears receiving vibrations of air and your brain interpreting them as sound is ongoing 24/7. Sound is omnidirectional, meaning it radiates away from its source equally in every direction.
Sound waves & Waveforms
Now that I have discussed what sound actually is, we can talk about how sound travels as sound waves. Sound waves are waves of compressions and rarefactions travelling through a medium such as air; you can picture one as a bubble of pressure pulsating outwards from the source. Without sound waves we would not be able to hear anything, because sound would not be able to travel.
For there to be sound, there needs to be a disturbance of air particles from a sound source, causing a chain reaction of collisions between atoms and molecules. The resulting compressions and rarefactions of the air molecules are the vibrations which, in turn, create sound.
For example, when a tuning fork vibrates it creates regular periods of high and low pressure. These are the compressions and rarefactions of the air molecules, and they produce a frequency. I will discuss frequency in the next section of this article.
Waveforms are visual representations of sound waves: curves showing the shape of a wave at any given time. You will see waveforms in Logic when recording or importing audio.
Please note you will not see a waveform when recording or working with MIDI, as Logic only displays the MIDI notes.
Frequency, Amplitude & Envelopes
Frequency - One complete compression and rarefaction is known as a cycle. The number of cycles within one second is called the frequency, which is measured in Hertz.
The ideal human hearing range is from 20 Hertz (Hz) to 20 Kilohertz (kHz), and it narrows as you get older. This is why some shops use mosquito alarms to deter loitering by younger people: the alarm emits a high-frequency sound which can only be heard by young ears.
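The cycle-per-second definition of frequency can be sketched in a few lines of Python. This is just an illustration, counting the cycles in one second of a generated sine wave; the 440 Hz tone and 44.1 kHz sample rate are example values:

```python
import math

SAMPLE_RATE = 44_100          # samples per second (CD quality)
FREQ_HZ = 440.0               # concert-pitch A, as an example

# Generate exactly one second of a sine wave at FREQ_HZ.
samples = [math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

# Count positive-going zero crossings: each one marks the start of a new cycle.
cycles = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)

print(f"Cycles counted in one second: {cycles}")        # ~440, i.e. 440 Hz
print(f"Period of one cycle: {1 / FREQ_HZ * 1000:.2f} ms")
```

Counting roughly 440 cycles in one second is exactly what "440 Hz" means.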
Envelopes
Envelopes are a major component of many synthesizers, samplers and other electronic musical instruments, especially within Logic.
In music production, we producers use envelopes to change an aspect of an instrument's sound. The change could be subtle, but it can make a big difference to the quality of the sound produced. An envelope is made up of ADSR, which stands for Attack, Decay, Sustain and Release.
Attack
Attack is the amount of time it takes for the sound to reach full volume after the sound is activated (e.g. a key pressed on a piano).
Decay
Decay is how quickly the sound drops to the sustain level after the attack.
Sustain
Sustain is the constant volume the sound holds after the decay, until the note is released. This is a volume parameter rather than a time parameter like the attack, decay and release.
Release - The release is the time it takes for the sound to fade out when a note ends (e.g. when a key on the piano is released). The release tends to be longer for a percussion instrument such as a glockenspiel, which keeps ringing after it is struck.
The screenshot below shows an envelope setting within Massive by Native Instruments, which is a synth plug-in for Logic.
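The four ADSR stages described above can be sketched as a simple envelope generator. This is a minimal illustration, not any synth's actual implementation; the stage times, sustain level and sample rate are made-up example settings:

```python
def adsr_envelope(attack_s, decay_s, sustain_level, release_s,
                  note_length_s, sample_rate=1000):
    """Return a list of gain values (0.0-1.0) for one note.

    Attack, decay and release are times; sustain is a level.
    """
    a = int(attack_s * sample_rate)
    d = int(decay_s * sample_rate)
    held = int(note_length_s * sample_rate)
    r = int(release_s * sample_rate)

    env = []
    # Attack: rise from silence to full volume.
    env += [i / a for i in range(a)]
    # Decay: fall from full volume down to the sustain level.
    env += [1.0 - (1.0 - sustain_level) * i / d for i in range(d)]
    # Sustain: hold a constant level until the note is released.
    env += [sustain_level] * max(held - a - d, 0)
    # Release: fade from the sustain level back to silence.
    env += [sustain_level * (1.0 - i / r) for i in range(r)]
    return env

env = adsr_envelope(0.01, 0.05, 0.6, 0.2, note_length_s=0.5)
print(max(env))   # peaks at full volume (1.0) at the end of the attack
```

Multiplying each audio sample by the matching envelope value is what shapes the note's volume over time.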
Speed of Sound
The speed of sound is measured in metres per second (m/s) and is worked out by multiplying the frequency by the wavelength (v = f × λ). The main speed of sound you really have to know is the speed of sound through air: at 20 degrees Celsius it is 344 metres per second (m/s).
Out of air, petroleum, the human body, iron and aluminium, air is actually the slowest medium: sound travels faster through liquids than through gases, and faster still through solids, with aluminium the fastest of these at roughly 6,300 m/s at 20 degrees Celsius. You can compare differences in the speed of sound using the graph below.
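The v = f × λ relationship above lets you turn frequencies into wavelengths. A quick sketch, using the article's 344 m/s figure for air at 20 °C:

```python
SPEED_IN_AIR_M_S = 344.0      # speed of sound in air at 20 degrees Celsius

def wavelength_m(frequency_hz, speed_m_s=SPEED_IN_AIR_M_S):
    """Since v = f * wavelength, the wavelength is v / f."""
    return speed_m_s / frequency_hz

# Wavelengths across the human hearing range:
for f in (20, 440, 20_000):
    print(f"{f:>6} Hz -> {wavelength_m(f):8.3f} m")
```

Note how a 20 Hz bass tone is over 17 m long while a 20 kHz tone is under 2 cm, which is why low frequencies are so hard to control in small rooms.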
Phase
Whenever two or more waveforms travel to a single location, the relative signal levels of each waveform are added together to create one amplitude level at that point.
Whenever two waveforms have the same frequency, shape and peak amplitude and no relative time difference, they are fully in-phase. This causes the newly-joined waveform to have the same frequency, phase and shape, but with double the amplitude.
If the same two waves are joined completely out-of-phase, meaning they have a phase difference of 180 degrees, they will cancel each other out when added together. This creates a straight line of zero amplitude.
If the second wave is only slightly out-of-phase, by an amount other than 180 degrees, the levels will reinforce at points where the combined amplitudes are positive and cancel at points where the combined product is negative.
Take a look at the screenshot below to find out more, and read through the description below it.
If you were to listen to tracks 1 and 2 while muting track 3, it should result in a summed signal that's 6 decibels (dB) louder, since two identical in-phase signals double in amplitude.
If you were to listen to tracks 1 and 3 while muting track 2, they should cancel out, resulting in no output. If you were to offset track 3 relative to track 1, you should hear differing degrees of cancellation.
This is just one example of how you can experiment with phase within Logic. Please note that phase is universal to the whole of music and not just Logic; it happens everywhere in the world of music too! Remember, it is all to do with frequencies.
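The fully in-phase and fully out-of-phase cases described above can be checked numerically. This is a minimal sketch with pure sine waves standing in for the Logic tracks:

```python
import math

N = 1000                                   # samples in one cycle
wave = [math.sin(2 * math.pi * n / N) for n in range(N)]
inverted = [-s for s in wave]              # same wave, 180 degrees out-of-phase

# Fully in-phase: identical waves sum to double the amplitude.
in_phase_sum = [a + b for a, b in zip(wave, wave)]
# Fully out-of-phase: the waves cancel to a flat line of zero amplitude.
out_of_phase_sum = [a + b for a, b in zip(wave, inverted)]

print(max(in_phase_sum))                   # 2.0: double the original peak of 1.0
print(max(abs(s) for s in out_of_phase_sum))  # 0.0: complete cancellation
```

Doubling the amplitude like this corresponds to a 6 dB level increase, and the flat line of zeros is the total cancellation described above.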
Harmonics
The ear interprets frequencies whose ratios are whole-number multiples of the fundamental as being specifically related.
There are two types of harmonics: odd harmonics and even harmonics. Odd harmonics are frequencies at odd multiples of the fundamental, and even harmonics are frequencies at even multiples of the fundamental.
To the ear, even harmonics tend to be more pleasing, while odd harmonics tend to give a harsher, more dissonant sound.
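For a concrete picture, here is the harmonic series of an example 110 Hz fundamental split into odd and even multiples (a sketch, not tied to any particular instrument):

```python
FUNDAMENTAL_HZ = 110   # example fundamental: the A two octaves below A440

# The first eight harmonics are whole-number multiples of the fundamental.
harmonics = [FUNDAMENTAL_HZ * n for n in range(1, 9)]
odd = [FUNDAMENTAL_HZ * n for n in range(1, 9) if n % 2 == 1]
even = [FUNDAMENTAL_HZ * n for n in range(1, 9) if n % 2 == 0]

print("Harmonic series:", harmonics)   # 110, 220, 330, ... 880
print("Odd harmonics: ", odd)          # 110, 330, 550, 770
print("Even harmonics:", even)         # 220, 440, 660, 880
```

Notice that the even multiples land on octaves and other consonant intervals above the fundamental, which is one way to hear why they tend to sound smoother.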
Decibels
The amplitude is the maximum height (or depth) of the wave from its centre line, and it determines the volume, which is measured in decibels.
Decibels are measured relative to a reference level, and perceived loudness also varies between listeners, so the same 10 dB change could seem like 20 dB to another person.
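Because decibels express a logarithmic ratio against a reference, doubling a signal's amplitude always adds the same number of dB. A quick sketch of the standard amplitude-to-dB conversion:

```python
import math

def amplitude_to_db(amplitude, reference=1.0):
    """Convert an amplitude ratio to decibels: dB = 20 * log10(a / ref)."""
    return 20 * math.log10(amplitude / reference)

print(amplitude_to_db(1.0))   # 0.0 dB: same level as the reference
print(amplitude_to_db(2.0))   # ~+6.02 dB: doubled amplitude
print(amplitude_to_db(0.5))   # ~-6.02 dB: halved amplitude
```

This is the same +6 dB figure that appears when two identical in-phase waveforms are summed.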
The Doppler Effect
The Doppler effect occurs when a sound source moves relative to a listener: when the source moves towards you, the wavelength decreases and the frequency increases; when the source moves away from you, the wavelength increases and the frequency decreases.
An example of this is a fire engine or ambulance driving past with its siren on. The pitch of the siren appears to change to the human ear as it drives closer to you, as it passes you and as it drives away from you.
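The size of that pitch change can be estimated with the standard Doppler formula for a moving source and a stationary listener. The siren frequency and vehicle speed below are made-up example numbers:

```python
SPEED_OF_SOUND = 344.0        # m/s in air at 20 degrees Celsius

def doppler_shift(source_hz, source_speed_m_s, approaching=True):
    """Observed frequency for a moving source and stationary listener:
    f' = f * v / (v - v_source) approaching, f * v / (v + v_source) receding."""
    vs = source_speed_m_s if approaching else -source_speed_m_s
    return source_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - vs)

siren = 700.0                 # Hz, example siren tone
speed = 25.0                  # m/s (90 km/h), example vehicle speed

print(f"Approaching: {doppler_shift(siren, speed, True):.1f} Hz")   # higher pitch
print(f"Receding:    {doppler_shift(siren, speed, False):.1f} Hz")  # lower pitch
```

The jump from roughly 755 Hz down to roughly 653 Hz as the vehicle passes is the pitch drop you hear on the street.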
Equalisation (EQ)
Equalisation is the process of making adjustments to the balance between frequencies within an electronic signal.
EQ allows us to correct specific problems in a recorded sound, restore a sound to its natural tone (fidelity), correct problems in the frequency response of a microphone or in the sound of an instrument, contrast sounds from instruments or recorded tracks so the mix blends together properly, and alter a sound for musical ideas or creativity.
You will use EQ in everything from live music performance to music production in the studio with Logic Pro X, as seen in the screenshot on the right.
There are two main types of EQ: shelving EQ and bell (or peak) EQ.
Shelving EQ produces a rise or drop in the frequency response starting at a selected frequency; the response then levels off at a preset amount and continues at that level to the end of the audio spectrum.
The bell or peak EQ is the most common type of EQ. It is created by a peaking filter and, as its name suggests, a peak-shaped bell curve is either boosted or cut around a selected centre frequency.
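A bell/peak EQ band is commonly implemented in software as a "peaking" biquad filter. The coefficient formulas below follow the widely used Audio EQ Cookbook; the centre frequency, Q and gain are example settings, not Logic's internals:

```python
import cmath
import math

def peaking_eq_coeffs(f0_hz, gain_db, q, sample_rate=44_100):
    """Biquad coefficients (b, a) for a peaking EQ band (Audio EQ Cookbook)."""
    amp = 10 ** (gain_db / 40)             # amplitude factor from the dB gain
    w0 = 2 * math.pi * f0_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = (1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp)
    a = (1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp)
    return b, a

def gain_at(freq_hz, b, a, sample_rate=44_100):
    """Evaluate the filter's gain in dB at one frequency."""
    z1 = cmath.exp(-1j * 2 * math.pi * freq_hz / sample_rate)  # z^-1
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    den = a[0] + a[1] * z1 + a[2] * z1 * z1
    return 20 * math.log10(abs(num / den))

# A +6 dB bell boost centred on 1 kHz:
b, a = peaking_eq_coeffs(1000, 6.0, q=1.0)
print(f"Gain at 1 kHz:  {gain_at(1000, b, a):+.2f} dB")  # ~+6.00 at the centre
print(f"Gain at 100 Hz: {gain_at(100, b, a):+.2f} dB")   # near 0 far from it
```

The boost is exactly the requested 6 dB at the centre frequency and tails off on either side, which is the bell curve you see drawn in an EQ plug-in.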
As music producers, sound engineers or music technologists, it is vital that you know and understand all of these basic concepts within acoustics and the physics of sound. Once you understand the basics, you can start working on producing professional-standard music and professional recordings.