Relative static permittivities of some materials at room temperature under 1 kHz[1] (corresponding to light with a wavelength of 300 km):

Vacuum: 1 (by definition)
Air: 1.00058986 ± 0.00000050 (at STP, for 0.9 MHz)[2]
PTFE/Teflon: 2.1
Polyethylene: 2.25
Polyimide: 3.4
Polypropylene: 2.2–2.36
Polystyrene: 2.4–2.7
Carbon disulfide: 2.6
Paper: 3.85
Electroactive polymers: 2–12
Silicon dioxide: 3.9[3]
Concrete: 4.5
Pyrex (glass): 4.7 (3.7–10)
Rubber: 7
Diamond: 5.5–10
Salt: 3–15
Graphite: 10–15
Silicon: 11.68
Ammonia: 26, 22, 20, 17 (at −80, −40, 0, 20 °C)
Methanol: 30
Ethylene glycol: 37
Furfural: 42.0
Glycerol: 41.2, 47, 42.5 (at 0, 20, 25 °C)
Water: 88, 80.1, 55.3, 34.5 (at 0, 20, 100, 200 °C); for visible light: 1.77
Hydrofluoric acid: 83.6 (at 0 °C)
Formamide: 84.0 (at 20 °C)
Sulfuric acid: 84–100 (at 20–25 °C)
Hydrogen peroxide: 128 aq–60 (at −30–25 °C)
Hydrocyanic acid: 158.0–2.3 (at 0–21 °C)
Titanium dioxide: 86–173
Strontium titanate: 310
Materials with very large ("colossal") relative permittivities include barium strontium titanate, barium titanate, lead zirconate titanate, conjugated polymers, and calcium copper titanate.
Temperature dependence of the relative static permittivity of water

The relative permittivity of a material under given conditions reflects the extent to which it concentrates electrostatic lines of flux. In technical terms, it is the ratio of the amount of electrical energy stored in a material by an applied voltage, relative to that stored in a vacuum. Likewise, it is also the ratio of the capacitance of a capacitor using that material as a dielectric, compared to a similar capacitor that has a vacuum as its dielectric.
Terminology
The relative permittivity of a material for a frequency of zero is known as its static relative permittivity or as its dielectric constant. Other terms used for the zero-frequency relative permittivity include relative dielectric constant and static dielectric constant. While they remain very common, these terms are ambiguous and have been deprecated by some standards organizations.[6][7] The reason for the potential ambiguity is twofold. First, some older authors used "dielectric constant" or "absolute dielectric constant" for the absolute permittivity rather than the relative permittivity.[8] Second, while in most modern usage "dielectric constant" refers to a relative permittivity,[7][9] it may be either the static or the frequency-dependent relative permittivity, depending on context. Relative permittivity is typically denoted as εr(ω) (sometimes ε or K) and is defined as

εr(ω) = ε(ω) / ε0,

where ε(ω) is the complex frequency-dependent absolute permittivity of the material, and ε0 is the vacuum permittivity. Relative permittivity is a dimensionless number that is in general complex. The imaginary portion of the permittivity corresponds to a phase shift of the polarization P relative to E and leads to the attenuation of electromagnetic waves passing through the medium. By definition, the linear relative permittivity of vacuum is equal to 1,[9] that is ε = ε0, although there are theoretical nonlinear quantum effects in vacuum that exist at high field strengths.[10] The relative permittivity of a medium is related to its electric susceptibility, χe, as εr(ω) = 1 + χe.
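The relation εr(ω) = 1 + χe and the role of the imaginary part can be illustrated numerically; the susceptibility and loss values below are assumed purely for illustration, not taken from the table above.

```python
# Hypothetical values: a water-like electric susceptibility and an
# assumed imaginary (lossy) part of the relative permittivity.
chi_e = 77.1              # assumed electric susceptibility
eps_r = 1 + chi_e         # real relative permittivity: eps_r = 1 + chi_e
eps_r_imag = 9.5          # assumed imaginary part (dielectric loss)

# The loss tangent compares dissipation to stored energy.
loss_tangent = eps_r_imag / eps_r
print(eps_r)              # ~78.1
print(loss_tangent)       # ~0.122
```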
Measurement
The relative static permittivity, εr, can be measured for static electric fields as follows: first the capacitance of a test capacitor, C0, is measured with vacuum between its plates. Then, using the same capacitor and distance between its plates, the capacitance Cx with a dielectric between the plates is measured. The relative dielectric constant can then be calculated as

εr = Cx / C0.
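A quick sketch of this capacitance-ratio calculation; the two capacitance values are hypothetical, chosen so the dielectric comes out polyethylene-like.

```python
# Hypothetical measured capacitances, in farads.
C0 = 1.00e-12   # capacitance with vacuum between the plates
Cx = 2.25e-12   # capacitance with the dielectric between the plates

eps_r = Cx / C0   # relative static permittivity
print(eps_r)      # 2.25, matching polyethylene's tabulated value
```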
For time-variant electromagnetic fields, this quantity becomes frequency-dependent and in general is called relative permittivity.
Practical relevance

If a material with a high relative permittivity is placed in an electric field, the magnitude of that field will be measurably reduced within the volume of the dielectric. This fact is commonly used to increase the capacitance of a particular capacitor design. The layers beneath etched conductors in printed circuit boards (PCBs) also act as dielectrics. Dielectrics are used in RF transmission lines. In a coaxial cable, polyethylene can be used between the center conductor and outside shield. It can also be placed inside waveguides to form filters. Optical fibers are examples of dielectric waveguides. They consist of dielectric materials that are purposely doped with impurities so as to control the precise value of εr within the cross-section. This controls the refractive index of the material and therefore also the optical modes of transmission. However, in these cases it is technically the relative permittivity that matters, as they are not operated in the electrostatic limit.
In a lossy medium, the total dissipation is sometimes expressed in terms of a "dielectric conductivity" σ (units S/m, siemens per meter), which "sums over all the dissipative effects of the material; it may represent an actual [electrical] conductivity caused by migrating charge carriers and it may also refer to an energy loss associated with the dispersion of ε′ [the real-valued permittivity]" ([12], p. 8). This conductivity is related to the imaginary part of the relative permittivity by σ = ω ε0 εr″. Expanding the angular frequency ω = 2πc/λ and the electric constant ε0 = 1/(µ0c²), this reduces to

σ = εr″ / (λκ),

where λ is the wavelength, c is the speed of light in vacuum, and κ = µ0c/(2π) ≈ 60.0 Ω is a newly introduced constant (units of ohms, the reciprocal of siemens, such that σλκ = εr″ remains unitless).
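A short numeric sketch of the underlying relation σ = ω ε0 εr″; the frequency and loss value below are assumed for illustration.

```python
import math

# Assumed example values (not from the source): a microwave-band
# frequency and a hypothetical imaginary relative permittivity.
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
f = 2.45e9                # assumed frequency, Hz
eps_r_imag = 9.5          # assumed imaginary relative permittivity

omega = 2 * math.pi * f
sigma = omega * eps0 * eps_r_imag   # dielectric conductivity, S/m
print(sigma)                        # ~1.29 S/m
```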
Metals
Although permittivity is typically associated with dielectric materials, we may still speak of an effective permittivity of a metal, with real relative permittivity equal to one ([13], eq. (4.6), p. 121). In the low-frequency region (which extends from radio frequencies to the far-infrared region), the plasma frequency of the electron gas is much greater than the electromagnetic propagation frequency, so the complex permittivity of a metal is practically a purely imaginary number, expressed in terms of the imaginary unit and a real-valued electrical conductivity ([13], eq. (4.8)-(4.9), p. 122).
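This near-imaginary behavior can be checked numerically. The sketch below uses the form εr ≈ 1 + iσ/(ωε0), the textbook conductivity of copper, and an assumed 1 GHz propagation frequency.

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
sigma = 5.96e7            # conductivity of copper, S/m
f = 1e9                   # assumed frequency: 1 GHz, far below the plasma frequency

omega = 2 * math.pi * f
eps_r = 1 + 1j * sigma / (omega * eps0)   # effective relative permittivity

# The imaginary part dwarfs the real part of 1: practically purely imaginary.
print(eps_r.imag)   # ~1.07e9
```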
Electrostatics
From Wikipedia, the free encyclopedia (redirected from Electrostatic). For a less technical introduction, see Static electricity.
Paper shavings attracted by a charged CD

Electrostatics is the branch of physics that deals with the phenomena and properties of stationary or slow-moving (without acceleration) electric charges. Since classical antiquity, it was known that some materials such as amber attract lightweight particles after rubbing. The Greek word for amber, electron, was
the source of the word 'electricity'. Electrostatic phenomena arise from the forces that electric charges exert on each other. Such forces are described by Coulomb's law. Even though electrostatically induced forces seem to be rather weak, the electrostatic force between, say, an electron and a proton, which together make up a hydrogen atom, is about 40 orders of magnitude stronger than the gravitational force acting between them. Electrostatic phenomena range from examples as simple as the attraction of plastic wrap to your hand after you remove it from a package, to the apparently spontaneous explosion of grain silos, to damage to electronic components during manufacturing, to the operation of photocopiers. Electrostatics involves the buildup of charge on the surface of objects due to contact with other surfaces. Although charge exchange happens whenever any two surfaces contact and separate, the effects of charge exchange are usually noticed only when at least one of the surfaces has a high resistance to electrical flow. This is because the charges that transfer to or from the highly resistive surface are more or less trapped there for long enough for their effects to be observed. These charges then remain on the object until they either bleed off to ground or are quickly neutralized by a discharge: e.g., the familiar phenomenon of a static 'shock' is caused by the neutralization of charge built up in the body from contact with nonconductive surfaces.
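The "about 40 orders of magnitude" claim is easy to check with standard CODATA-style constants; both forces fall off as 1/r², so the ratio is independent of the separation distance.

```python
# Back-of-envelope check of the electron-proton force ratio in hydrogen.
ke = 8.9875517923e9      # Coulomb constant, N*m^2/C^2
G  = 6.67430e-11         # gravitational constant, m^3/(kg*s^2)
e  = 1.602176634e-19     # elementary charge, C
me = 9.1093837015e-31    # electron mass, kg
mp = 1.67262192369e-27   # proton mass, kg

# Both forces scale as 1/r^2, so r cancels out of the ratio.
ratio = (ke * e**2) / (G * me * mp)
print(f"{ratio:.2e}")    # ~2.3e39, i.e. roughly 39-40 orders of magnitude
```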
Coulomb's law states that the magnitude of the electrostatic force between two point charges q1 and q2 separated by a distance r is

F = (1/(4πε0)) · |q1 q2| / r²,

where ε0 is a constant called the vacuum permittivity or permittivity of free space, a defined value:

ε0 ≈ 8.854 187 8 × 10⁻¹² F/m.
Alternatively, a charged object in an electric field feels a force F = qE. From this definition and Coulomb's law, it follows that the magnitude of the electric field E created by a single point charge Q at a distance r is

E = Q / (4πε0 r²).
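A minimal numeric sketch of E = Q/(4πε0r²) and F = qE; the charge and distance values are assumed for illustration.

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q = 1e-6    # assumed source charge: 1 microcoulomb
r = 0.1     # assumed distance: 10 cm
q = 2e-9    # assumed test charge: 2 nanocoulombs

E = Q / (4 * math.pi * eps0 * r**2)   # field magnitude, V/m
F = q * E                             # force on the test charge, N
print(E)    # ~8.99e5 V/m
print(F)    # ~1.80e-3 N
```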
The electric field produced by a distribution of charges given by the volume charge density ρ(r′) is obtained by a triple integral of a vector function:

E(r) = (1/(4πε0)) ∫∫∫ ρ(r′) (r − r′) / |r − r′|³ d³r′.
The value of the electric field at a point gives the force per unit charge that a charged particle would feel if it entered the field. Electric field lines indicate the direction of the force on a positive charge in the electric field.
From Faraday's law, this assumption implies the absence or near-absence of time-varying magnetic fields: ∂B/∂t ≈ 0.
In other words, electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents do exist, they must not change with time, or in the worst-case, they must change with time only very slowly. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored.
The electrostatic field (lines with arrows) of a nearby positive charge (+) causes the mobile charges in conductive objects to separate due to electrostatic induction. Negative charges (blue) are attracted and move to the surface of the object facing the external charge. Positive charges (red) are repelled and move to the surface facing away. These induced surface charges are exactly the right size and shape so their opposing electric field cancels the electric field of the external charge throughout the interior of the metal. Therefore the electrostatic field everywhere inside a conductive object is zero, and the electrostatic potential is constant.

Because the electric field is irrotational, it is possible to express the electric field as the gradient of a scalar function, called the electrostatic potential (also known as the voltage). An electric field E points from regions of high potential φ to regions of low potential, expressed mathematically as

E = −∇φ.
The electrostatic potential at a point can be defined as the amount of work per unit charge required to move a charge from infinity to the given point.
Energy due to a charge distribution is obtained by a triple integral:

U = (1/2) ∫∫∫ ρ(r) φ(r) dV,

in which V represents the volume of the charge distribution and φ is the electrostatic potential.
Before the year 1832, when Michael Faraday published the results of his experiments on the identity of electricities, physicists thought "static electricity" was somehow different from other electrical charges. Faraday proved that the electricity induced from a magnet, voltaic electricity produced by a battery, and static electricity are all the same. Static electricity is usually caused when certain materials are rubbed against each other, like wool on plastic or the soles of shoes on carpet. The process causes electrons to be pulled from the surface of one material and relocated on the surface of the other material. A static shock occurs when the surface of the second material, negatively charged with electrons, touches a positively charged conductor, or vice versa. Static electricity is commonly used in xerography, air filters, and some automotive paints. Static electricity is a buildup of electric charge on two objects that have become separated from each other. Small electrical components can easily be damaged by static electricity, so component manufacturers use a number of antistatic devices to avoid this.
Charge generation increases at higher fluid velocities and larger pipe diameters, becoming quite significant in pipes 8 inches (200 mm) or larger. Static charge generation in these systems is best controlled by limiting fluid velocity. The British standard BS PD CLC/TR 50404:2003 (formerly BS-5958-Part 2) Code of Practice for Control of Undesirable Static Electricity prescribes velocity limits. Because of its large impact on dielectric constant, the recommended velocity for hydrocarbon fluids containing water should be limited to 1 m/s. Bonding and earthing are the usual ways by which charge buildup can be prevented. For fluids with electrical conductivity below 10 pS/m, bonding and earthing are not adequate for charge dissipation, and anti-static additives may be required.

Applicable standards

1. BS PD CLC/TR 50404:2003 Code of Practice for Control of Undesirable Static Electricity
2. NFPA 77 (2007) Recommended Practice on Static Electricity
3. API RP 2003 (1998) Protection Against Ignitions Arising Out of Static, Lightning, and Stray Currents
Flux
This article is about the concept of flux in science and mathematics. For other uses of the word, see Flux (disambiguation). In the various subfields of physics, there exist two common usages of the term flux, both with rigorous mathematical frameworks. In the study of transport phenomena (heat transfer, mass transfer and fluid dynamics), flux is defined as flow per unit area, where flow is the movement of some quantity per unit time.[1] Flux, in this definition, is a vector.
In the fields of electromagnetism and mathematics, flux is usually the integral of a vector quantity, flux density, over a finite surface. It is an integral operator that acts on a vector field similarly to the gradient, divergence and curl operators found in vector analysis. The result of this integration is a scalar quantity called flux.[2] The magnetic flux is thus the integral of the magnetic vector field B over a surface, and the electric flux is defined similarly. Using this definition, the flux of the Poynting vector over a specified surface is the rate at which electromagnetic energy flows through that surface. Confusingly, the Poynting vector is sometimes called the power flux, which is an example of the first usage of flux, above.[3] It has units of watts per square metre (W/m2).
One could argue, based on the work of James Clerk Maxwell,[4] that the transport definition precedes the more recent way the term is used in electromagnetism. The specific quote from Maxwell is "In the case of fluxes, we have to take the integral, over a surface, of the flux through every element of the surface. The result of this operation is called the surface integral of the flux. It represents the quantity which passes through the surface." In addition to these few common mathematical definitions, there are many more looser, but equally valid, usages to describe observations from other fields such as biology, the arts, history, and humanities.
For chemical diffusion, the flux is given by Fick's first law,

J_A = −D_AB ∇c_A,

where D_AB is the diffusion coefficient (m²/s) of component A diffusing through component B, and c_A is the concentration (mol/m³) of species A.[7]

This flux has units of mol·m⁻²·s⁻¹, and fits Maxwell's original definition of flux.[4] Note: ∇ ("nabla") denotes the del operator.
For dilute gases, kinetic molecular theory relates the diffusion coefficient D to the particle density n = N/V, the molecular mass M, the collision cross section σ, and the absolute temperature T by
where the second factor is the mean free path and the square root (with Boltzmann's constant k) is the mean velocity of the particles. In turbulent flows, the transport by eddy motion can be expressed as a grossly increased diffusion coefficient.
In quantum mechanics, the probability of finding a particle of mass m in the quantum state ψ(r, t) within a differential volume element d³r is |ψ|² d³r.

Then the number of particles passing through a perpendicular unit of area per unit time is the probability flux,

j = (ħ / 2mi) (ψ* ∇ψ − ψ ∇ψ*).
Electromagnetism

Flux definition and theorems
An example of the first definition of flux is the magnitude of a river's current, that is, the amount of water that flows through a cross-section of the river each second. The amount of sunlight that lands on a patch of ground each second is also a kind of flux. To better understand the concept of flux in electromagnetism, imagine a butterfly net. The amount of air moving through the net at any given instant in time is the flux. If the wind speed is high, then the flux through the net is large. If the net is made bigger, then the flux would be larger even though the wind speed is the same. For the most air to move through the net, the opening of the net must be facing the direction the wind is blowing. If the net opening is parallel to the wind, then no wind will be moving through the net. Perhaps the best way to think of flux abstractly is "how much stuff goes through your thing", where the stuff is a field and the thing is the virtual surface.
The flux visualized: the rings show the surface boundaries; the red arrows stand for the flow of charges, fluid particles, subatomic particles, photons, etc.; the number of arrows that pass through each ring is the flux.

As a mathematical concept, flux is represented by the surface integral of a vector field,

Φ = ∫∫_A F · dA,

where F is the vector field and dA is the vector area of a differential element of the surface A.
The surface has to be orientable, i.e. two sides can be distinguished: the surface does not fold back onto itself. Also, the surface has to be actually oriented, i.e. we use a convention as to which flow direction is counted positive; flowing backward is then counted negative. The surface normal is directed accordingly, usually by the right-hand rule.

Conversely, one can consider the flux the more fundamental quantity and call the vector field the flux density. Often a vector field is drawn by curves (field lines) following the "flow"; the magnitude of the vector field is then the line density, and the flux through a surface is the number of lines. Lines originate from areas of positive divergence (sources) and end at areas of negative divergence (sinks). See also the image at right: the number of red arrows passing through a unit area is the flux density, the curve encircling the red arrows denotes the boundary of the surface, and the orientation of the arrows with respect to the surface denotes the sign of the inner product of the vector field with the surface normals.

If the surface encloses a 3D region, usually the surface is oriented such that the influx is counted positive; the opposite is the outflux. The divergence theorem states that the net outflux through a closed surface, in other words the net outflux from a 3D region, is found by adding the local net outflow from each point in the region (which is expressed by the divergence). If the surface is not closed, it has an oriented curve as boundary. Stokes' theorem states that the flux of the curl of a vector field is the line integral of the vector field over this boundary. This path integral is also called circulation, especially in fluid dynamics. Thus the curl is the circulation density.

We can apply the flux and these theorems to many disciplines in which we see currents, forces, etc., applied through areas.
The electric flux through a closed surface is given by Gauss's law,

Φ_E = ∮_A E · dA = Q / ε0,

where E is the electric field, dA is the area of a differential square on the surface A with an outward-facing surface normal defining its direction, Q is the charge enclosed by the surface, and ε0 is the permittivity of free space.
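Gauss's law can be sketched numerically: for a point charge, the field over a concentric sphere is uniform and radial, so the flux integral reduces to E · 4πr², and the result is q/ε0 at every radius (the charge value below is assumed).

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q = 3e-9                  # assumed enclosed point charge, C

for r in (0.05, 0.5, 5.0):
    E = q / (4 * math.pi * eps0 * r**2)   # field magnitude on the sphere
    flux = E * 4 * math.pi * r**2         # E is radial and uniform over the sphere
    print(r, flux)                        # flux is the same at every radius

print(q / eps0)   # ~338.8 V*m: the flux equals q/eps0, independent of r
```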
If one considers the flux of the electric field vector, E, for a tube near a point charge in the field of the charge but not containing it, with sides formed by lines tangent to the field, the flux for the sides is zero and there is an equal and opposite flux at both ends of the tube. This is a consequence of Gauss's law applied to an inverse-square field. The flux for any cross-sectional surface of the tube will be the same. The total flux for any surface surrounding a charge q is q/ε0.[9] In free space the electric displacement vector D = ε0 E, so for any bounding surface the flux of D = q, the charge within it. Here the expression "flux of" indicates a mathematical operation and, as can be seen, the result is not necessarily a "flow". Faraday's law of induction in integral form is:
∮_C E · dℓ = −dΦ_B/dt,

where dℓ is an infinitesimal element (differential) of the closed curve C (i.e. a vector with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve C, with the sign determined by the integration direction), and Φ_B is the magnetic flux through the open surface bounded by C.
The magnetic field is denoted by B. Its flux is called the magnetic flux. The time-rate of change of the magnetic flux through a loop of wire is minus the electromotive force created in that wire. The direction is such that if current is allowed to pass through the wire, the electromotive force will cause a current which "opposes" the change in magnetic field by itself producing a magnetic field opposite to the change. This is the basis for inductors and many electric generators.
Biology
In general, flux in biology relates to movement of a substance between compartments. There are several cases where the concept of flux is important.

The movement of molecules across a membrane: in this case, flux is defined by the rate of diffusion or transport of a substance across a permeable membrane. Except in the case of active transport, net flux is directly proportional to the concentration difference across the membrane, the surface area of the membrane, and the membrane permeability constant.

In ecology, flux is often considered at the ecosystem level - for instance, accurate determination of carbon fluxes using techniques like eddy covariance (at a regional and global level) is essential for modeling the causes and consequences of global warming.

Metabolic flux refers to the rate of flow of metabolites along a metabolic pathway, or even through a single enzyme. A calculation may also be made of carbon (or other elements, e.g. nitrogen) flux. It is dependent on a number of factors, including: enzyme concentration; the concentration of precursor, product, and intermediate metabolites; post-translational modification of enzymes; and the presence of metabolic activators or repressors. Metabolic control analysis and flux balance analysis provide frameworks for understanding metabolic fluxes and their constraints.
Frequency
For other uses, see Frequency (disambiguation).
Three cyclically flashing lights, from lowest frequency (top) to highest frequency (bottom). f is the frequency in hertz (Hz), meaning the number of cycles per second. T is the period in seconds (s), meaning the number of seconds per cycle. T and f are reciprocals.

Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency. The period is the duration of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the interval between beats) is half a second.
Measurement
Sinusoidal waves of various frequencies; the bottom waves have higher frequencies than those above. The horizontal axis represents time.
By counting
Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period. For example, if 71 events occur within 15 seconds, the frequency is:

f = 71 / 15 s ≈ 4.7 Hz.
If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time.[2] The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2 Tm), or a fractional error of Δf / f = 1/(2 f Tm), where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is a problem at low frequencies where the number of counts N is small.
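The counting method and its gating-error estimate can be sketched with the 71-events-in-15-seconds example above.

```python
# Frequency by counting, plus the gating-error formulas from the text.
count = 71      # events counted (from the example)
Tm = 15.0       # timing interval, s

f = count / Tm                    # measured frequency, Hz
delta_f = 1 / (2 * Tm)            # average absolute error, Hz
fractional = delta_f / f          # equals 1/(2*f*Tm)

print(f)           # ~4.73 Hz
print(delta_f)     # ~0.033 Hz
print(fractional)  # ~0.7% fractional error
```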
By stroboscope
An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integer multiple of the strobing frequency will also appear stationary.
the reference frequency, which must be determined by some other method. To reach higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection).
In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes:

f = c / λ.
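A small sketch of f = c/λ; note that the 300 km wavelength quoted for the 1 kHz permittivity table earlier indeed works out to about 1 kHz (the 550 nm green-light case is an assumed extra example).

```python
c = 299_792_458            # speed of light in vacuum, m/s

f_300km = c / 3e5          # 300 km wavelength, as in the permittivity table
f_green = c / 550e-9       # assumed example: 550 nm green light

print(f_300km)   # ~999.3 Hz, i.e. about 1 kHz
print(f_green)   # ~5.45e14 Hz, within the visible 4-8e14 Hz band
```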
When waves from a monochromatic source travel from one medium to another, their frequency remains exactly the same; only their wavelength and speed change.
Examples

Physics of light
Complete spectrum of electromagnetic radiation with the visible portion highlighted

Main articles: Light and Electromagnetic radiation

Visible light is an electromagnetic wave, consisting of oscillating electric and magnetic fields traveling through space. The frequency of the wave determines its color: 4×10¹⁴ Hz is red light, 8×10¹⁴ Hz is violet light, and between these (in the range 4-8×10¹⁴ Hz) are all the other colors of the rainbow. An electromagnetic wave can have a frequency less than 4×10¹⁴ Hz, but it will be invisible to the human eye; such waves are called infrared (IR) radiation. At even lower frequency, the wave is called a microwave, and at still lower frequencies it is called a radio wave. Likewise, an electromagnetic wave can have a frequency higher than 8×10¹⁴ Hz, but it will be invisible to the human eye; such waves are called ultraviolet (UV) radiation. Even higher-frequency waves are called X-rays, and higher still are gamma rays. All of these waves, from the lowest-frequency radio waves to the highest-frequency gamma rays, are fundamentally the same, and they are all called electromagnetic radiation. They all travel through a vacuum at the speed of light. Another property of an electromagnetic wave is its wavelength. The wavelength is inversely proportional to the frequency, so an electromagnetic wave with a higher frequency has a shorter wavelength, and vice versa.
Angular frequency is commonly measured in radians per second (rad/s) but, for discrete-time signals, can also be expressed as radians per sample time, which is a dimensionless quantity. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes. E.g.:
Wavenumber, k, sometimes means the spatial frequency analogue of angular temporal frequency. In case of more than one spatial dimension, wavenumber is a vector quantity.
Complex number
A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram called an Argand diagram, representing the complex plane. Re is the real axis, Im is the imaginary axis, and i is the imaginary unit, satisfying i² = −1.

A complex number is a number which can be put in the form a + bi, in which a and b are real numbers and i is called the imaginary unit, where i² = −1.[1] In this expression, a is called the real part and b the imaginary part of the complex number. Complex numbers extend the idea of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number can be identified with the point (a, b). A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way the complex numbers contain the ordinary real numbers while extending them in order to solve problems that cannot be solved with only real numbers. Complex numbers are used in many scientific fields, including engineering, electromagnetism, quantum physics, applied mathematics, and chaos theory. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers. He called them "fictitious", during his attempts to find solutions to cubic equations in the 16th century.[2]
Overview
Complex numbers allow for solutions to certain equations that have no real solution: the equation

(x + 1)² = −9

has no real solution, since the square of a real number is 0 or positive. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i² = −1, so that solutions to equations like the preceding one can be found. In this case the solutions are −1 ± 3i. In fact not only quadratic equations, but all polynomial equations in a single variable can be solved using complex numbers.
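Python's built-in complex type can verify that −1 ± 3i solve an equation of this form, (x + 1)² = −9:

```python
# Check both candidate roots of (x + 1)^2 = -9 with complex arithmetic.
for x in (-1 + 3j, -1 - 3j):
    print(x, (x + 1) ** 2)   # (x + 1)^2 evaluates to (-9+0j) for both
```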
Definition
An illustration of the complex plane. The real part of a complex number z = x + iy is x, and its imaginary part is y.

A complex number is a number that can be expressed in the form

a + bi,

where a and b are real numbers and i is the imaginary unit, satisfying i² = −1. For example, −3.5 + 2i is a complex number. It is common to write a for a + 0i and bi for 0 + bi. Moreover, when the imaginary part is negative, it is common to write a − bi with b > 0 instead of a + (−b)i, for example 3 − 4i instead of 3 + (−4)i. The set of all complex numbers is denoted by ℂ or C.
The real number a of the complex number z = a + bi is called the real part of z, and the real number b is often called the imaginary part. By this convention the imaginary part is a real number not including the imaginary unit: hence b, not bi, is the imaginary part.[3][4] The real part is denoted by Re(z) or ℜ(z), and the imaginary part b is denoted by Im(z) or ℑ(z). For example,

    Re(2 − 3i) = 2 and Im(2 − 3i) = −3.
Some authors write a+ib instead of a+bi (scalar multiplication between b and i is commutative). In some disciplines, in particular electromagnetism and electrical
engineering, j is used instead of i, since i is frequently used for electric current. In these cases complex numbers are written as a + bj or a + jb. A real number a can usually be regarded as a complex number with an imaginary part of zero, that is to say, a + 0i. However the sets are defined differently and have slightly different operations defined, for instance comparison operations are not defined for complex numbers. A pure imaginary number is a complex number whose real part is zero, that is to say, of the form 0 + bi.
Figure 1: A complex number plotted as a point (red) and position vector (blue) on an Argand diagram; z = x + iy is the rectangular expression of the point. A complex number can be viewed as a point or position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (see Pedoe 1988 and Solomentsev 2001), named after Jean-Robert Argand. The numbers are conventionally plotted using the real part as the horizontal component, and imaginary part as vertical (see Figure 1). These two values used to identify a given complex number are therefore called its Cartesian, rectangular, or algebraic form. The defining characteristic of a position vector is that it has magnitude and direction. These are emphasised in a complex number's polar form, and it turns out notably that the operations of addition and multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition, while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make with the x axis). Viewed in this way, the multiplication of a complex number by i corresponds to rotating it counterclockwise through 90° about the origin.
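The rotation-by-i behaviour described above can be illustrated with Python's built-in complex literals (a small sketch; the sample point 3 + 4i is arbitrary):

```python
z = 3 + 4j
# Multiplying by i (written 1j in Python) rotates the vector 90 degrees
# counterclockwise about the origin:
print(z * 1j)       # (-4+3j): the point (3, 4) moves to (-4, 3)
# Four successive quarter-turns return to the start, since i**4 == 1:
print(z * 1j**4)    # (3+4j)
```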
Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root. Many mathematicians contributed to the full development of complex numbers. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli.[5] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.
Geometric representation of z and its conjugate z̄ in the complex plane. The complex conjugate of the complex number z = x + yi is defined to be x − yi. It is denoted by z̄ or z*. Geometrically, z̄ is the "reflection" of z about the real axis. In particular, conjugating twice gives the original complex number: the conjugate of z̄ is z. The real and imaginary parts of a complex number can be extracted using the conjugate:

    Re(z) = (z + z̄)/2,
    Im(z) = (z − z̄)/2i.
Moreover, a complex number is real if and only if it equals its conjugate. Conjugation distributes over the standard arithmetic operations: the conjugate of z + w is z̄ + w̄, the conjugate of zw is z̄·w̄, and the conjugate of z/w is z̄/w̄.
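These conjugation identities can be verified numerically; Python's complex type exposes the conjugate via the conjugate() method (an illustrative check with arbitrary sample values):

```python
z, w = 2 + 3j, 1 - 4j

# conjugation distributes over addition and multiplication:
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()

# real and imaginary parts recovered from the conjugate:
assert (z + z.conjugate()) / 2 == z.real
assert (z - z.conjugate()) / (2j) == z.imag
```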
This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates. Inversive geometry, a branch of geometry studying more general reflections than ones about a line, can also be expressed in terms of complex numbers.
Addition of two complex numbers can be done geometrically by constructing a parallelogram. Complex numbers are added by adding the real and imaginary parts of the summands. That is to say:

    (a + bi) + (c + di) = (a + c) + (b + d)i.

Similarly, subtraction is defined by

    (a + bi) − (c + di) = (a − c) + (b − d)i.
Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram three of whose vertices are O, A and B. Equivalently, X is the point such that the triangles with vertices O, A, B, and X, B, A, are congruent.
Multiplication of two complex numbers is defined by the formula

    (a + bi)(c + di) = (ac − bd) + (bc + ad)i.

The preceding definition of multiplication of general complex numbers follows naturally from the fundamental property i² = −1 of the imaginary unit. Indeed, if i is treated as a number so that di means d times i, the above multiplication rule is identical to the usual rule for multiplying two sums of two terms:

    (a + bi)(c + di) = ac + adi + bic + bidi   (distributive law)
                     = ac + bidi + adi + bic   (commutative law of addition)
                     = ac + bdi² + (ad + bc)i  (commutative law of multiplication)
                     = (ac − bd) + (ad + bc)i  (fundamental property of the imaginary unit).

The division of two complex numbers is defined in terms of complex multiplication, which is described above, and real division:

    (a + bi)/(c + di) = ((ac + bd) + (bc − ad)i)/(c² + d²).
As shown earlier, c − di is the complex conjugate of the denominator c + di. The real part c and the imaginary part d of the denominator must not both be zero for division to be defined.
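The division rule can be sketched in code by multiplying through by the conjugate of the denominator, as described above (an illustrative helper with a name of our choosing; Python's built-in / operator on complex numbers already performs this computation):

```python
def cdiv(num, den):
    """Divide two complex numbers by multiplying through by the conjugate."""
    c, d = den.real, den.imag
    if c == 0 and d == 0:
        raise ZeroDivisionError("complex division by zero")
    # (a+bi)/(c+di) = (a+bi)(c-di) / (c**2 + d**2)
    return num * den.conjugate() / (c*c + d*d)

print(cdiv(1 + 2j, 3 - 4j))   # matches Python's built-in (1+2j)/(3-4j)
```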
The square roots of a + bi (with b ≠ 0) are ±(γ + δi), where

    γ = √((a + √(a² + b²))/2)

and

    δ = sgn(b) √((−a + √(a² + b²))/2).

This can be seen by squaring ±(γ + δi) to obtain a + bi.[6][7] Here √(a² + b²) is called the modulus of a + bi, and the square root with nonnegative real part is called the principal square root.
Figure 2: The argument φ and modulus r locate a point on an Argand diagram; r(cos φ + i sin φ) and re^(iφ) are polar expressions of the point.
The absolute value (or modulus or magnitude) of a complex number z = x + yi is

    r = |z| = √(x² + y²).

If z is a real number (i.e., y = 0), then r = |x|. In general, by Pythagoras' theorem, r is the distance of the point P representing the complex number z to the origin. The argument or phase of z is the angle of the radius OP with the positive real axis, and is written as arg(z). As with the modulus, the argument can be found from the rectangular form x + yi.[8]
The value of φ must always be expressed in radians. It can change by any multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. Normally, as given above, the principal value in the interval (−π, π] is chosen. Values in the range [0, 2π) are obtained by adding 2π if the value is negative. The polar angle for the complex number 0 is undefined, but arbitrary choice of the angle 0 is common. The value of φ equals the result of atan2:

    φ = atan2(y, x).
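The relationship between the principal argument and atan2 can be demonstrated with Python's cmath.phase, which returns the principal value in (−π, π] (an illustrative sketch; the sample point is arbitrary):

```python
import math
import cmath

z = -1 + 1j
# the principal argument lies in (-pi, pi] and equals atan2(y, x):
assert cmath.phase(z) == math.atan2(z.imag, z.real)
print(cmath.phase(z))   # ~2.356, i.e. 3*pi/4
```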
Together, r and φ give another way of representing complex numbers, the polar form, as the combination of modulus and argument fully specify the position of a point on the plane. Recovering the original rectangular co-ordinates from the polar form is done by the formula called the trigonometric form

    z = r(cos φ + i sin φ).
In angle notation, often used in electronics to represent a phasor with amplitude r and phase φ, it is written as[9]

    z = r∠φ.
Multiplication of 2 + i (blue triangle) and 3 + i (red triangle). The red triangle is rotated to match the vertex of the blue one and stretched by √5, the length of the hypotenuse of the blue triangle. The relevance of representing complex numbers in polar form stems from the fact that the formulas for multiplication, division and exponentiation are simpler than the ones using Cartesian coordinates. Given two complex numbers z₁ = r₁(cos φ₁ + i sin φ₁) and z₂ = r₂(cos φ₂ + i sin φ₂), the formula for multiplication is

    z₁z₂ = r₁r₂(cos(φ₁ + φ₂) + i sin(φ₁ + φ₂)).
In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, multiplying by i corresponds to a quarter-rotation counter-clockwise, which gives back i² = −1. The picture at the right illustrates the multiplication of

    (2 + i)(3 + i) = 5 + 5i.
Since the real and imaginary part of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula

    π/4 = arctan(1/2) + arctan(1/3)

holds. As the arctan function can be approximated highly efficiently, formulas like this, known as Machin-like formulas, are used for high-precision approximations of π. Similarly, division is given by

    z₁/z₂ = (r₁/r₂)(cos(φ₁ − φ₂) + i sin(φ₁ − φ₂)).
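The arctan identity for π/4 can be checked numerically (a sketch using Python's math module):

```python
import math

# (2+1j)*(3+1j) = 5+5j, whose argument is pi/4, so the two angles sum to pi/4:
assert (2 + 1j) * (3 + 1j) == 5 + 5j
assert abs(math.atan(1/2) + math.atan(1/3) - math.pi/4) < 1e-12
```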
This also implies de Moivre's formula for exponentiation of complex numbers with integer exponents:

    zⁿ = rⁿ(cos nφ + i sin nφ).
The nth roots of z = r(cos φ + i sin φ) are given by

    ⁿ√r (cos((φ + 2kπ)/n) + i sin((φ + 2kπ)/n))

for any integer k satisfying 0 ≤ k ≤ n − 1. Here ⁿ√r is the usual (positive) nth root of the positive real number r. While the nth root of a positive real number r is chosen to be the positive real number c satisfying cⁿ = r, there is no natural way of distinguishing one particular complex nth root of a complex number. Therefore, the nth root of z is considered as a multivalued function (in z), as opposed to a usual function f, for which f(z) is a uniquely defined number. Formulas such as

    ⁿ√(zw) = ⁿ√z ⁿ√w

(which hold for positive real numbers) do not in general hold for complex numbers.
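The multivalued nature of nth roots can be made concrete by listing all n roots from the polar form (an illustrative helper; nth_roots is our own name, not a standard library function):

```python
import cmath
import math

def nth_roots(z, n):
    """All n complex n-th roots of z, from the polar form of z."""
    r, theta = abs(z), cmath.phase(z)
    return [r**(1/n) * cmath.exp(1j * (theta + 2*math.pi*k) / n)
            for k in range(n)]

roots = nth_roots(1, 4)            # fourth roots of unity: 1, i, -1, -i
for w in roots:
    assert abs(w**4 - 1) < 1e-9    # each one really is a fourth root
```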
[edit] Properties
[edit] Field structure
The set C of complex numbers is a field. Briefly, this means that the following facts hold: first, any two complex numbers can be added and multiplied to yield another complex number. Second, for any complex number z, its negative −z is also a complex number; and third, every nonzero complex number has a reciprocal complex number. Moreover, these operations satisfy a number of laws, for example the law of commutativity of addition and multiplication for any two complex numbers z₁ and z₂:

    z₁ + z₂ = z₂ + z₁,
    z₁z₂ = z₂z₁.
These two laws and the other requirements on a field can be proven by the formulas given above, using the fact that the real numbers themselves form a field. Unlike the reals, C is not an ordered field, that is to say, it is not possible to define a relation z₁ < z₂ that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, so i² = −1 precludes the existence of an ordering on C. When the underlying field for a mathematical topic or construct is the field of complex numbers, the thing's name is usually modified to reflect that fact. For example: complex analysis, complex matrix, complex polynomial, and complex Lie algebra.
Any polynomial equation

    a₀ + a₁z + a₂z² + ⋯ + aₙzⁿ = 0

has at least one complex solution z, provided that at least one of the higher coefficients, a₁, ..., aₙ, is nonzero. This is the statement of the fundamental theorem of algebra. Because of this fact, C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial x² − 2 does not have a rational root, since √2 is not a rational number) nor the real numbers R (the polynomial x² + a does not have a real root for a > 0, since the square of x is positive for any real number x). There are various proofs of this theorem, either by analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one root. Because of this fact, theorems that hold "for any algebraically closed field" apply to C. For example, any complex matrix has at least one (complex) eigenvalue.
The field C contains a subset P of nonzero elements satisfying the following three conditions:

    P is closed under addition, multiplication and taking inverses.
    If x and y are distinct elements of P, then either x − y or y − x is in P.
    If S is any nonempty subset of P, then S + P = x + P for some x in C.

Moreover, C has a nontrivial involutive automorphism x ↦ x* (namely the complex conjugation), such that x·x* is in P for any nonzero x in C.
Any field F with these properties can be endowed with a topology by taking the sets B(x, p) = { y | p − (y − x)(y − x)* ∈ P } as a base, where x ranges over the field and p ranges over P. With this topology F is isomorphic as a topological field to C.
The only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R because the nonzero complex numbers are connected, while the nonzero real numbers are not.
It is then just a matter of notation to express (a, b) as a + bi. Though this low-level construction does accurately describe the structure of the complex numbers, the following equivalent definition reveals the algebraic nature of C more immediately. This characterization relies on the notion of fields and polynomials. A field is a set endowed with addition, subtraction, multiplication and division operations which behave as is familiar from, say, rational numbers. For example, the distributive law

    x·(y + z) = x·y + x·z

must hold for any three elements x, y and z of a field. The set R of real numbers does form a field. A polynomial p(X) with real coefficients is an expression of the form

    aₙXⁿ + ⋯ + a₁X + a₀,
where the a₀, ..., aₙ are real numbers. The usual addition and multiplication of polynomials endows the set R[X] of all such polynomials with a ring structure. This ring is called the polynomial ring. The quotient ring R[X]/(X² + 1) can be shown to be a field. This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X² + 1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers. Moreover, the above formulas for addition etc. correspond to the ones yielded by this abstract algebraic approach; the two definitions of the field C are said to be isomorphic (as fields). Together with the above-mentioned fact that C is algebraically closed, this also shows that C is an algebraic closure of R.
Complex numbers a + bi can also be represented by 2 × 2 matrices that have the form

    ( a  −b )
    ( b   a )

Here the entries a and b are real numbers. The sum and product of two such matrices is again of this form, and the sum and product of complex numbers corresponds to the sum and product of such matrices. The geometric description of the multiplication of complex numbers can also be phrased in terms of rotation matrices by using this correspondence between complex numbers and such matrices. Moreover, the square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix:

    |z|² = a² + b².
The conjugate corresponds to the transpose of the matrix. Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices other than

    ( 0  −1 )
    ( 1   0 )

that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.
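The correspondence between complex arithmetic and these 2 × 2 matrices can be checked directly (a sketch with plain Python lists; as_matrix and mat_mul are illustrative helper names of our own):

```python
def as_matrix(z):
    """Represent a + bi as the 2x2 real matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
# the matrix product corresponds to the complex product:
assert mat_mul(as_matrix(z), as_matrix(w)) == as_matrix(z * w)
# the determinant a**2 + b**2 equals |z|**2:
m = as_matrix(z)
assert abs(m[0][0]*m[1][1] - m[0][1]*m[1][0] - abs(z)**2) < 1e-9
```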
Color wheel graph of sin(1/z). Black parts inside refer to numbers having large absolute values. Main article: Complex analysis
The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.
The complex numbers carry a notion of distance, d(z₁, z₂) = |z₁ − z₂|, for any two complex numbers z₁ and z₂. Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp(z), also written eᶻ, is defined as the infinite series

    exp(z) = 1 + z + z²/2! + z³/3! + ⋯
and the series defining the real trigonometric functions sine and cosine, as well as hyperbolic functions such as sinh, also carry over to complex arguments without change. Euler's identity states:

    e^(iπ) = −1.
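Euler's identity can be verified to floating-point precision with Python's cmath module (an illustrative check):

```python
import cmath
import math

# Euler's identity: exp(i*pi) = -1, up to floating-point rounding
print(cmath.exp(1j * math.pi))   # ~(-1+0j), with a tiny imaginary residue

# more generally, exp(i*x) = cos(x) + i*sin(x):
x = 0.7
assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12
```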
Unlike in the situation of real numbers, there is an infinitude of complex solutions z of the equation

    eᶻ = w

for any complex number w ≠ 0. It can be shown that any such solution z, called a complex logarithm of w, satisfies

    log w = ln|w| + i arg(w),
where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π, π]. Complex exponentiation z^ω is defined as

    z^ω = exp(ω log z).
Consequently, they are in general multi-valued. For ω = 1/n, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy

    a^(bc) = (a^b)^c.
Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
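Both points, the principal value of the complex logarithm and the failure of the power identity for single-valued (principal) powers, can be observed with Python's cmath module (an illustrative sketch; the sample values are arbitrary):

```python
import cmath
import math

# principal-value logarithm: log z = ln|z| + i*arg(z), with arg(z) in (-pi, pi]
print(cmath.log(-1 + 0j))      # ~pi*1j

# with principal values, (a**b)**c need not equal a**(b*c):
a, b, c = -1 + 0j, 2, 0.5
print((a**b)**c)               # ~1, since a**b == 1
print(a**(b*c))                # ~-1, since b*c == 1
```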
Any R-linear map C → C can be written in the form

    f(z) = az + bz̄

with complex coefficients a and b. This map is holomorphic if and only if b = 0. The second summand bz̄ is real-differentiable, but does not satisfy the Cauchy–Riemann equations. Complex analysis shows some features not apparent in real analysis. For example, any two holomorphic functions f and g that agree on an arbitrarily small open subset of C necessarily agree everywhere. Meromorphic functions, functions that can locally be written as f(z)/(z − z₀)ⁿ with a holomorphic function f(z), still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0.
[edit] Applications
Some applications of complex numbers are:
In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's poles and zeros are then analyzed in the complex plane. If a system has poles that are

    in the right half plane, it will be unstable,
    all in the left half plane, it will be stable,
    on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.
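The pole-location rules above can be sketched as a small classifier (an illustrative function for continuous-time systems; classify is our own name, not a library routine):

```python
def classify(poles):
    """Stability of a continuous-time system from its pole locations."""
    if any(p.real > 0 for p in poles):
        return "unstable"            # a pole in the right half plane
    if all(p.real < 0 for p in poles):
        return "stable"              # all poles in the left half plane
    return "marginally stable"       # pole(s) on the imaginary axis

print(classify([-1 + 2j, -1 - 2j]))  # stable
print(classify([1j, -1j]))           # marginally stable
```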
where ω represents the angular frequency and the complex number z encodes the phase and amplitude as explained above. This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals. Another example, relevant to the two side bands of amplitude modulation of AM radio, is:

    cos((ω + α)t) + cos((ω − α)t) = 2 cos(αt) cos(ωt).
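The side-band identity can be checked numerically (a sketch; the frequency values are arbitrary):

```python
import math

# check cos((w+a)*t) + cos((w-a)*t) == 2*cos(a*t)*cos(w*t) at a sample point
w, a, t = 5.0, 0.5, 0.3
lhs = math.cos((w + a) * t) + math.cos((w - a) * t)
rhs = 2 * math.cos(a * t) * math.cos(w * t)
assert abs(lhs - rhs) < 1e-12
```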
that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics, the Schrödinger equation and Heisenberg's matrix mechanics, make use of complex numbers.
[edit] Relativity
In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time variable to be imaginary. (This is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.
[edit] Geometry
[edit] Fractals

Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets.

[edit] Triangles

Every triangle has a unique Steiner inellipse, an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem:[11][12] Denote the triangle's vertices in the complex plane as a = xA + yAi, b = xB + yBi, and c = xC + yCi. Write the cubic equation

    (x − a)(x − b)(x − c) = 0,

take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.
Construction of a regular polygon using straightedge and compass. As mentioned above, any nonconstant polynomial equation (with complex coefficients) has a solution in C. A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers; they are a principal object of study in algebraic number theory. Compared to Q̄, the algebraic closure of Q, which also contains all algebraic numbers, C has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown
that it is not possible to construct a regular nonagon using only compass and straightedge, a purely geometric problem. Another example is the Pythagorean triples (a, b, c), that is to say integers satisfying

    a² + b² = c²
(which implies that the triangle having side lengths a, b, and c is a right triangle). They can be studied by considering Gaussian integers, that is, numbers of the form x + iy, where x and y are integers.
[edit] History
The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Heron of Alexandria in the 1st century AD, where in his Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term √(81 − 144) in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Heron merely replaced it by its positive.[14] The impetus to study complex numbers proper first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolò Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. As an example, Tartaglia's cubic formula gives the solution to the equation x³ − x = 0 as

    x = (1/√3)((√−1)^(1/3) + 1/(√−1)^(1/3)).
At first glance this looks like nonsense. However, formal calculations with complex numbers show that the equation z³ = i has solutions −i, (√3 + i)/2 and (−√3 + i)/2. Substituting these in turn for (√−1)^(1/3) in Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of x³ − x = 0. Of course this particular equation can be solved at sight, but it does illustrate that when general formulas are used to solve cubic equations with real roots then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic trying to resolve these issues. The term "imaginary" for these quantities was coined by René Descartes in 1637, although he was at pains to stress their imaginary nature:[15] [...] quelquefois seulement imaginaires c'est-à-dire que l'on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu'il n'y a quelquefois aucune quantité qui corresponde à celle qu'on imagine. ([...] sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine.) A further source of confusion was that the equation √(−1)² = √−1 √−1 = −1 seemed to be capriciously inconsistent with the algebraic identity √a √b = √(ab), which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity 1/√a = √(1/a)) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of √−1 to guard against this mistake[citation needed]. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today.
In his elementary algebra textbook, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout.
In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the complicated identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be simply re-expressed by the following well-known formula which bears his name, de Moivre's formula:

    (cos θ + i sin θ)ⁿ = cos nθ + i sin nθ.
In 1748 Leonhard Euler went further and obtained Euler's formula of complex analysis:

    e^(iθ) = cos θ + i sin θ
by formally manipulating complex power series and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities. The idea of a complex number as a point in the complex plane (above) was first described by Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's De Algebra tractatus. Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology. The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in 'a really confident and scientific way', although mathematicians such as Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.[16] Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case. The common terms used in the theory are chiefly due to the founders. Argand (1828) called cos φ + i sin φ the direction factor, and r = √(a² + b²) the modulus; Cauchy (1821) called cos φ + i sin φ the reduced form (l'expression réduite) and apparently
introduced the term argument; Gauss used i for √−1, introduced the term complex number for a + bi, and called a² + b² the norm. The expression direction coefficient, often used for cos φ + i sin φ, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.
The process of extending the field R of reals to C is known as the Cayley–Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O which (as a real vector space) are of dimension 4 and 8, respectively. However, with increasing dimension, the algebraic properties familiar from real and complex numbers vanish: the quaternions are only a skew field, i.e. xy ≠ yx for some two quaternions, and the multiplication of octonions fails (in addition to not being commutative) to be associative: (xy)z ≠ x(yz) for some octonions. However, all of these are normed division algebras over R. By Hurwitz's theorem they are the only ones. The next step in the Cayley–Dickson construction, the sedenions, fails to have this structure. The Cayley–Dickson construction is closely related to the regular representation of C, thought of as an R-algebra (an R-vector space with a multiplication), with respect to the basis 1, i. This means the following: the R-linear map

    z ↦ wz
for some fixed complex number w can be represented by a 22 matrix (once a basis has been chosen). With respect to the basis 1, i, this matrix is
i.e., the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of C in the 2 2 real matrices, it is not the only one. Any matrix
    ( p   q )
    ( r  −p ),   with p² + qr = −1,

has the property that its square is the negative of the identity matrix: J² = −I. Then
the set of matrices of the form aI + bJ (with a, b real) is also isomorphic to the field C, and gives an alternative complex structure on R². This is generalized by the notion of a linear complex structure. Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x² − 1) (as opposed to R[x]/(x² + 1)). In this ring, the equation a² = 1 has four solutions. The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q lead to the fields Qp of p-adic numbers (for any prime number p), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and Qp, by Ostrowski's theorem. The algebraic closure of Qp still carries a norm, but (unlike C) is not complete with respect to it. The completion of this algebraic closure turns out to be algebraically closed. This field is called the p-adic complex numbers by analogy.
The fields R and Qp and their finite field extensions, including C, are local fields.
Permittivity
From Wikipedia, the free encyclopedia
A dielectric medium showing orientation of charged particles creating polarization effects. Such a medium can have a higher ratio of electric flux to charge (permittivity) than empty space. In electromagnetism, absolute permittivity is the measure of the resistance that is encountered when forming an electric field in a medium. In other words, permittivity is a measure of how an electric field affects, and is affected by, a dielectric medium. The permittivity of a medium describes how much electric field (more correctly, flux) is 'generated' per unit charge in that medium. Less electric flux exists in a medium with a high permittivity (per unit charge) because of polarization effects. Permittivity is directly related to electric susceptibility, which is a measure of how easily a dielectric polarizes in response to an electric field. Thus, permittivity relates to a material's ability to transmit (or "permit") an electric field. In SI units, permittivity is measured in farads per meter (F/m); electric susceptibility is dimensionless. They are related to each other through

    ε = εr ε₀ = (1 + χ) ε₀,

where εr is the relative permittivity of the material and ε₀ is the vacuum permittivity.
Contents
[hide]
1 Explanation
2 Vacuum permittivity
3 Relative permittivity
4 Dispersion and causality
    4.1 Complex permittivity
    4.2 Classification of materials
    4.3 Lossy medium
9 External links
[edit] Explanation
In electromagnetism, the electric displacement field D represents how an electric field E influences the organization of electrical charges in a given medium, including charge migration and electric dipole reorientation. Its relation to permittivity in the very simple case of linear, homogeneous, isotropic materials with "instantaneous" response to changes in electric field is

    D = εE,
where the permittivity ε is a scalar. If the medium is anisotropic, the permittivity is a second-rank tensor. In general, permittivity is not a constant, as it can vary with the position in the medium, the frequency of the field applied, humidity, temperature, and other parameters. In a nonlinear medium, the permittivity can depend on the strength of the electric field. Permittivity as a function of frequency can take on real or complex values. In SI units, permittivity is measured in farads per meter (F/m or A²·s⁴·kg⁻¹·m⁻³). The displacement field D is measured in units of coulombs per square meter (C/m²), while the electric field E is measured in volts per meter (V/m). D and E describe the interaction between charged objects. D is related to the charge densities associated with this interaction, while E is related to the forces and potential differences.
The vacuum permittivity ε₀ (also called the permittivity of free space) is given by

    ε₀ = 1/(c₀² μ₀) ≈ 8.854 × 10⁻¹² F/m,

where

    c₀ is the speed of light in free space,[2]
    μ₀ is the vacuum permeability.

Constants c₀ and μ₀ are defined in SI units to have exact numerical values, shifting responsibility of experiment to the determination of the meter and the ampere.[3] (The approximation in the numerical value of ε₀ above stems from π being an irrational number.)
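The defining relation for ε₀ can be evaluated directly (a sketch using the pre-2019 SI values, where μ₀ = 4π × 10⁻⁷ H/m and c₀ = 299 792 458 m/s are exact by definition):

```python
import math

c0 = 299_792_458            # speed of light in vacuum, m/s (exact by definition)
mu0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m (exact in the pre-2019 SI)

eps0 = 1 / (mu0 * c0**2)    # vacuum permittivity, F/m
print(eps0)                 # ~8.854e-12 F/m
```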
The permittivity of a material is related to its electric susceptibility χ (frequently written χe), defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarization density P such that

    P = ε₀ χ E.
The susceptibility is also related to the polarizability of individual particles in the medium by the Clausius–Mossotti relation. The electric displacement D is related to the polarization density P by

    D = ε₀E + P = ε₀(1 + χ)E = εE.
The permittivity ε and permeability µ of a medium together determine the phase velocity v = c/n of electromagnetic radiation through that medium:

    εµ = 1/v².
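The phase-velocity relation can be sketched numerically (an illustrative helper; phase_velocity is our own name, and the water value εr ≈ 1.77 at optical frequencies is taken from the table above):

```python
import math

c0 = 299_792_458                      # speed of light in vacuum, m/s

def phase_velocity(eps_r, mu_r=1.0):
    """Phase velocity v = c0 / n, with refractive index n = sqrt(eps_r * mu_r)."""
    return c0 / math.sqrt(eps_r * mu_r)

# water at optical frequencies: eps_r ~ 1.77, mu_r ~ 1
print(phase_velocity(1.77))           # ~2.25e8 m/s
```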
In general, a material cannot polarize instantaneously in response to an applied field. The polarization is a convolution of the electric field at previous times with the time-dependent susceptibility χ(Δt):

    P(t) = ε₀ ∫₋∞ᵗ χ(t − t′) E(t′) dt′.

The upper limit of this integral can be extended to infinity as well if one defines χ(Δt) = 0 for Δt < 0. An instantaneous response corresponds to a Dirac delta function susceptibility χ(Δt) = χ δ(Δt).
It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Because of the convolution theorem, the integral becomes a simple product, P(ω) = ε₀χe(ω)E(ω).
This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material. Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. χe(Δt) = 0 for Δt < 0), a consequence of causality, imposes Kramers-Kronig constraints on the susceptibility χe(ω).
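The convolution-versus-product statement can be verified numerically for an assumed exponential (Debye-type) susceptibility; all parameter values below are illustrative, not tied to any particular material:

```python
import numpy as np

chi0, tau = 2.0, 1.0
N = 4096
t = np.linspace(0.0, 50.0, N)
dt = t[1] - t[0]
chi_t = (chi0 / tau) * np.exp(-t / tau)   # causal susceptibility chi_e(t)

E = np.sin(2 * np.pi * 0.2 * t)           # a sample applied field E(t)

# Time domain: P(t)/eps0 as a causal convolution of chi_e with E
P_time = dt * np.convolve(chi_t, E)[:N]

# Frequency domain: the same quantity from a simple product of transforms
P_freq = dt * np.fft.irfft(np.fft.rfft(chi_t) * np.fft.rfft(E), n=N)
```

The two results agree away from the wrap-around region near t = 0, where the circular FFT convolution differs from the causal linear one.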
A dielectric permittivity spectrum over a wide range of frequencies. ε′ and ε″ denote the real and the imaginary part of the permittivity, respectively. Various processes are labeled on the image: ionic and dipolar relaxation, and atomic and electronic resonances at higher energies.[4] As opposed to the response of a vacuum, the response of normal materials to external fields generally depends on the frequency of the field. This frequency dependence reflects the fact that a material's polarization does not respond instantaneously to an applied field. The response must always be causal (arising after the applied field), which can be represented by a phase difference. For this reason permittivity is often treated as a complex function of the (angular) frequency ω of the applied field, since complex numbers allow specification of magnitude and phase; the permittivity therefore becomes ε̂(ω). The definition of permittivity then takes the form D₀e^(−iωt) = ε̂(ω)E₀e^(−iωt),
where D₀ and E₀ are the amplitudes of the displacement and electric fields, respectively, and i is the imaginary unit, i² = −1. The response of a medium to static electric fields is described by the low-frequency limit of permittivity, also called the static permittivity εs (also εDC):
At the high-frequency limit, the complex permittivity is commonly referred to as ε∞. At the plasma frequency and above, dielectrics behave as ideal metals, with electron gas behavior. The static permittivity is a good approximation for alternating fields of low frequencies, and as the frequency increases a measurable phase difference emerges between D and E. The frequency at which the phase shift becomes noticeable depends on temperature and the details of the medium. For moderate field strengths (E₀), D and E remain proportional, and
Since the response of materials to alternating fields is characterized by a complex permittivity, it is natural to separate its real and imaginary parts, which is done by convention in the following way:
where ε″ is the imaginary part of the permittivity, which is related to the dissipation (or loss) of energy within the medium, and ε′ is the real part, which is related to the stored energy within the medium. It is important to realize that the choice of sign for the time-dependence, e^(−iωt), dictates the sign convention for the imaginary part of permittivity. The signs used here correspond to those commonly used in physics, whereas for the engineering convention one should reverse all imaginary quantities. The complex permittivity is usually a complicated function of frequency ω, since it is a superimposed description of dispersion phenomena occurring at multiple frequencies. The dielectric function ε(ω) must have poles only for frequencies with positive imaginary parts, and therefore satisfies the Kramers-Kronig relations. However, in the narrow frequency ranges that are often studied in practice, the permittivity can be approximated as frequency-independent or by model functions. At a given frequency, the imaginary part of ε leads to absorption loss if it is positive (in the above sign convention) and gain if it is negative. More generally, the imaginary parts of the eigenvalues of the anisotropic dielectric tensor should be considered. In the case of solids, the complex dielectric function is intimately connected to band structure. The primary quantity that characterizes the electronic structure of any crystalline material is the probability of photon absorption, which is directly related to the imaginary part of the optical dielectric function ε(ω). The optical dielectric function is given by the fundamental expression:[5]
In this expression, Wcv(E) represents the product of the Brillouin zone-averaged transition probability at the energy E with the joint density of states,[6][7] Jcv(E); is a broadening function, representing the role of scattering in smearing out the energy levels.[8] In general, the broadening is intermediate between Lorentzian and Gaussian;[9] [10] for an alloy it is somewhat closer to Gaussian because of strong scattering from statistical fluctuations in the local composition on a nanometer scale.
where σ is the conductivity of the medium, ε′ is the real part of the permittivity, and ε̂ is the complex permittivity. The size of the displacement current is dependent on the frequency of the applied field E; there is no displacement current in a constant field. In this formalism, the complex permittivity is defined as:[11]
In general, the absorption of electromagnetic energy by dielectrics is covered by a few different mechanisms that influence the shape of the permittivity as a function of frequency. First are the relaxation effects associated with permanent and induced molecular dipoles. At low frequencies the field changes slowly enough to allow dipoles to reach equilibrium before the field has measurably changed. For frequencies at which dipole orientations cannot follow the applied field because
of the viscosity of the medium, absorption of the field's energy leads to energy dissipation. The mechanism of dipoles relaxing is called dielectric relaxation and for ideal dipoles is described by classic Debye relaxation. Second are the resonance effects, which arise from the rotations or vibrations of atoms, ions, or electrons. These processes are observed in the neighborhood of their characteristic absorption frequencies.
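The ideal Debye relaxation mentioned above has a simple closed form, ε(ω) = ε∞ + (εs − ε∞)/(1 − iωτ) in the physics sign convention used in this article. A minimal sketch with assumed, roughly water-like parameters (not measured values):

```python
import numpy as np

eps_s, eps_inf, tau = 80.0, 5.0, 8.3e-12   # illustrative, water-like values

def debye(omega):
    """Complex permittivity of an ideal Debye relaxator
    (physics convention, time dependence e^{-i w t})."""
    return eps_inf + (eps_s - eps_inf) / (1 - 1j * omega * tau)

omega = np.logspace(9, 13, 801)            # rad/s
eps = debye(omega)

# The dielectric loss eps'' peaks at omega = 1/tau
omega_peak = omega[np.argmax(eps.imag)]
```

The real part steps down from εs to ε∞ across the relaxation, while the loss ε″ forms a peak centered at ω = 1/τ.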
The above effects often combine to cause non-linear effects within capacitors. For example, dielectric absorption refers to the inability of a capacitor that has been charged for a long time to discharge completely when briefly discharged. Although an ideal capacitor would remain at zero volts after being discharged, real capacitors will develop a small voltage, a phenomenon that is also called soakage or battery action. For some dielectrics, such as many polymer films, the resulting voltage may be less than 1-2% of the original voltage. However, it can be as much as 15-25% in the case of electrolytic capacitors or supercapacitors.
[edit] Measurement
Main article: dielectric spectroscopy The dielectric constant of a material can be found by a variety of static electrical measurements. The complex permittivity is evaluated over a wide range of frequencies by using different variants of dielectric spectroscopy, covering nearly 21 orders of magnitude from 10⁻⁶ to 10¹⁵ Hz. Also, by using cryostats and ovens, the dielectric properties of a medium can be characterized over an array of temperatures. In order to study systems for such diverse excitation fields, a number of measurement setups are used, each adequate for a special frequency range. Various microwave measurement techniques are outlined in Chen et al.[12] Typical errors for the Hakki-Coleman method employing a puck of material between conducting planes are about 0.3%.[13]
Low-frequency time domain measurements (10⁻⁶ to 10⁻³ Hz)
Low-frequency frequency domain measurements (10⁻⁵ to 10⁶ Hz)
Reflective coaxial methods (10⁶ to 10¹⁰ Hz)
Transmission coaxial method (10⁸ to 10¹¹ Hz)
Quasi-optical methods (10⁹ to 10¹⁰ Hz)
Terahertz time-domain spectroscopy (10¹¹ to 10¹³ Hz)
Fourier-transform methods (10¹¹ to 10¹⁵ Hz)
At infrared and optical frequencies, a common technique is ellipsometry. Dual polarisation interferometry is also used to measure the complex refractive index for very thin films at optical frequencies.
Density functional theory Electric-field screening Green-Kubo relations Green's function (many-body theory) Linear response function Rotational Brownian motion Electromagnetic permeability
[edit] References
1. ^ electric constant
2. ^ Current practice of standards organizations such as NIST and BIPM is to use c₀, rather than c, to denote the speed of light in vacuum according to ISO 31. In the original Recommendation of 1983, the symbol c was used for this purpose. See NIST Special Publication 330, Appendix 2, p. 45.
3. ^ Latest (2006) values of the constants (NIST)
4. ^ Dielectric Spectroscopy
5. ^ Peter Y. Yu, Manuel Cardona (2001). Fundamentals of Semiconductors: Physics and Materials Properties. Berlin: Springer. p. 261. ISBN 3-540-25470-6.
6. ^ José García Solé, Luisa Bausá (2001). An Introduction to the Optical Spectroscopy of Inorganic Solids. Wiley. Appendix A1, p. 263. ISBN 0-470-86885-6.
7. ^ John H. Moore, Nicholas D. Spencer (2001). Encyclopedia of Chemical Physics and Physical Chemistry. Taylor and Francis. p. 105. ISBN 0-7503-0798-6.
Vacuum permittivity
From Wikipedia, the free encyclopedia. This article is about the electric constant. For the analogous magnetic constant, see vacuum permeability. For the ordinal in mathematics ε₀, see epsilon naught. The physical constant ε₀, commonly called the vacuum permittivity, permittivity of free space or electric constant, is an ideal (baseline) physical constant, which is the value of the absolute (not relative) dielectric permittivity of classical vacuum. Its value is:[1] ε₀ ≈ 8.854187817620... × 10⁻¹² farads per metre (F·m⁻¹). The ellipsis (...) does not represent an experimental inaccuracy (the value is exact) but the error introduced by truncation of a non-terminating decimal value. This constant relates the units for electric charge to mechanical quantities such as length and force.[2] For example, the force between two separated electric charges (in the vacuum of classical electromagnetism) is given by Coulomb's law:
where q₁ and q₂ are the charges, and r is the distance between them. Likewise, ε₀ appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation, and relate them to their sources.
Contents
1 Value o 1.1 Redefinition of the SI units 2 Terminology 3 Historical origin of the parameter o 3.1 Rationalization of units o 3.2 Determination of a value for ε₀ 4 Permittivity of real media 5 See also
6 Notes
[edit] Value
The value of ε₀ is defined by the formula[3] ε₀ = 1/(μ₀c₀²),
where c₀ is the defined value for the speed of light in classical vacuum in SI units,[4] and μ₀ is the parameter that international standards organizations call the "magnetic constant" (commonly called vacuum permeability). Since μ₀ has the defined value 4π × 10⁻⁷ H·m⁻¹,[5] and c₀ has the defined value 299792458 m·s⁻¹,[6] it follows that ε₀ has a defined value given approximately by ε₀ ≈ 8.854187817620... × 10⁻¹² F·m⁻¹ (or A²·s⁴·kg⁻¹·m⁻³ in SI base units, or C²·N⁻¹·m⁻² or C·V⁻¹·m⁻¹ using other SI coherent units).[7][8] The ellipsis (...) does not indicate experimental uncertainty, but the arbitrary termination of a nonrecurring decimal. The historical origins of the electric constant ε₀, and its value, are explained in more detail below.
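Because μ₀ and c₀ were both exact by definition, the numerical value of ε₀ followed exactly; a one-line computation reproduces the figure quoted above:

```python
import math

# Pre-2019 SI defining values: mu0 and c0 are exact, so eps0 follows exactly
mu0 = 4 * math.pi * 1e-7      # magnetic constant, H/m
c0 = 299_792_458              # speed of light in vacuum, m/s

eps0 = 1.0 / (mu0 * c0 ** 2)  # electric constant, F/m, ~8.8542e-12
```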
Under the proposed redefinition of the SI units, ε₀ would instead be a measured quantity, ε₀ = e²/(2αhc₀), with e the exact elementary charge, h the exact Planck constant, and c₀ the exact speed of light in vacuum. Here use is made of the relation for the fine-structure constant: α = e²/(2ε₀hc₀).
The relative uncertainty in the value of ε₀ therefore would be the same as that for the fine-structure constant, currently 6.8 × 10⁻¹⁰.[7]
[edit] Terminology
Historically, the parameter ε₀ has been known by many different names. The terms "vacuum permittivity" or its variants, such as "permittivity in/of vacuum",[10][11] "permittivity of empty space",[12] or "permittivity of free space"[13] are widespread.
Standards organizations worldwide now use "electric constant" as a uniform term for this quantity,[7] and official standards documents have adopted the term (although they continue to list the older terms as synonyms).[14][15] Another historical synonym was "dielectric constant of vacuum", as "dielectric constant" was sometimes used in the past for the absolute permittivity.[16][17] However, in modern usage "dielectric constant" typically refers exclusively to a relative permittivity ε/ε₀, and even this usage is considered "obsolete" by some standards bodies in favor of relative static permittivity.[15][18] Hence, the term "dielectric constant of vacuum" for the electric constant ε₀ is considered obsolete by most modern authors, although occasional examples of continuing usage can be found. As for notation, the constant can be denoted by either of the common glyphs for the letter epsilon, ε or ϵ.
where Q is a quantity that represents the amount of electricity present at each of the two points, and ke is Coulomb's constant. If one is starting with no constraints, then the value of ke may be chosen arbitrarily.[19] For each different choice of ke there is a different "interpretation" of Q: to avoid confusion, each different "interpretation" has to be allocated a distinctive name and symbol. In one of the systems of equations and units agreed in the late 19th century, called the "centimetre-gram-second electrostatic system of units" (the cgs esu system), the constant ke was taken equal to 1, and a quantity now called "gaussian electric charge" qs was defined by the resulting equation
The unit of gaussian charge, the statcoulomb, is such that two units, a distance of 1 centimetre apart, repel each other with a force equal to the cgs unit of force, the dyne. Thus the unit of gaussian charge can also be written 1 dyne^(1/2)·cm. "Gaussian electric charge" is not the same mathematical quantity as modern (rmks) electric charge and is not measured in coulombs. The idea subsequently developed that it would be better, in situations of spherical geometry, to include a factor 4π in equations like Coulomb's law, and write it in the form:
This idea is called "rationalization". The quantities qs′ and ke′ are not the same as those in the older convention. Putting ke′ = 1 generates a unit of electricity of different size, but it still has the same dimensions as in the cgs esu system. The next step was to treat the quantity representing "amount of electricity" as a fundamental quantity in its own right, denoted by the symbol q, and to write Coulomb's law in its modern form:
The system of equations thus generated is known as the rationalized metre-kilogram-second (rmks) equation system, or "metre-kilogram-second-ampere (mksa)" equation system. This is the system used to define the SI units.[20] The new quantity q is given the name "rmks electric charge", or (nowadays) just "electric charge". Clearly, the quantity qs used in the old cgs esu system is related to the new quantity q by qs = q/√(4πε₀).
By convention, the electric constant ε₀ appears in the relationship that defines the electric displacement field D in terms of the electric field E and classical electrical polarization density P of the medium. In general, this relationship has the form D = ε₀E + P. For a linear dielectric, P is assumed to be proportional to E, but a delayed response is permitted, as is a spatially non-local response, so one has:[21]
In the event that nonlocality and delay of response are not important, the result is D = εE = εrε₀E,
where ε is the permittivity and εr the relative static permittivity. In the vacuum of classical electromagnetism, the polarization P = 0, so εr = 1 and ε = ε₀.
Electric susceptibility
In electromagnetism, the electric susceptibility χe (Latin: susceptibilis, "receptive") is a dimensionless proportionality constant that indicates the degree of polarization of a dielectric material in response to an applied electric field. The greater the electric susceptibility, the greater the ability of a material to polarize in response to the field, and thereby reduce the total electric field inside the material (and store energy). It is in this way that the electric susceptibility influences the electric permittivity of the material and thus influences many other phenomena in that medium, from the capacitance of capacitors to the speed of light.[1][2]
Contents
1 Definition of Volume Susceptibility 2 Molecular Polarizability 3 Dispersion and causality 4 See also
5 References
The electric susceptibility is defined as the constant of proportionality relating an electric field E to the induced dielectric polarization density P, such that P = ε₀χeE, where:
P is the polarization density; ε₀ is the electric permittivity of free space; χe is the electric susceptibility; E is the electric field.
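A minimal numerical illustration of the defining relation; the relative permittivity of polyethylene is taken from the table at the top of the article, while the field strength is an arbitrary example value:

```python
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

# P = eps0 * chi_e * E for a linear dielectric; chi_e = eps_r - 1
eps_r = 2.25              # e.g. polyethylene
chi_e = eps_r - 1.0       # electric susceptibility

E = 1.0e5                 # applied field, V/m (arbitrary example value)
P = eps0 * chi_e * E      # induced polarization density, C/m^2
D = eps0 * E + P          # displacement field; equals eps0 * eps_r * E
```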
The susceptibility is also related to the polarizability of individual particles in the medium by the Clausius-Mossotti relation. The susceptibility is related to the relative permittivity by χe = εr − 1.
At the same time, the electric displacement D is related to the polarization density P by D = ε₀E + P = ε₀(1 + χe)E = ε₀εrE.
This introduces a complication however, as locally the field can differ significantly from the overall applied field. We have:
where P is the polarization per unit volume, and N is the number of molecules per unit volume contributing to the polarization. Thus, if the local electric field is parallel to the ambient electric field, we have:
Thus only if the local field equals the ambient field can we write:
That is, the polarization is a convolution of the electric field at previous times with the time-dependent susceptibility χe(Δt). The upper limit of this integral can be extended to infinity as well if one defines χe(Δt) = 0 for Δt < 0. An instantaneous response corresponds to a Dirac delta function susceptibility χe(Δt) = χe δ(Δt).
It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Due to the convolution theorem, the integral becomes a simple product, P(ω) = ε₀χe(ω)E(ω).
This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material. Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. χe(Δt) = 0 for Δt < 0), a consequence of causality, imposes Kramers-Kronig constraints on the susceptibility χe(ω).
Capacitance
In electromagnetism and electronics, capacitance is the ability of a capacitor to store charge in an electric field. Capacitance is also a measure of the amount of electric potential energy stored (or separated) for a given electric potential. A common form of energy storage device is a parallel-plate capacitor. In a parallel plate capacitor, capacitance is directly proportional to the surface area of the conductor plates and
inversely proportional to the separation distance between the plates. If the charges on the plates are +q and −q, and V gives the voltage between the plates, then the capacitance is given by C = q/V.
The SI unit of capacitance is the farad; 1 farad is 1 coulomb per volt. The energy (measured in joules) stored in a capacitor is equal to the work done to charge it. Consider a capacitor of capacitance C, holding a charge +q on one plate and −q on the other. Moving a small element of charge dq from one plate to the other against the potential difference V = q/C requires the work dW = (q/C) dq,
where W is the work measured in joules, q is the charge measured in coulombs and C is the capacitance, measured in farads. The energy stored in a capacitor is found by integrating this equation. Starting with an uncharged capacitance (q = 0) and moving charge from one plate to the other until the plates have charge +Q and −Q requires the work W = Q²/(2C).
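The integral of dW = (q/C) dq can be checked numerically against the closed form W = Q²/(2C); the component values below are arbitrary examples:

```python
C = 100e-6   # a 100 uF capacitor (example value)
V = 12.0     # final voltage, V
Q = C * V    # final charge, C

W_analytic = Q ** 2 / (2 * C)   # equals 0.5 * C * V**2

# Midpoint-rule integration of dW = (q/C) dq from q = 0 to Q
n = 100_000
dq = Q / n
W_numeric = sum(((k + 0.5) * dq / C) * dq for k in range(n))
```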
Contents
1 Capacitors o 1.1 Voltage dependent capacitors o 1.2 Frequency dependent capacitors 2 Capacitance matrix 3 Self-capacitance 4 Elastance 5 Stray capacitance 6 Capacitance of simple systems 7 See also 8 References 9 Further reading
[edit] Capacitors
Main article: Capacitor
The capacitance of the majority of capacitors used in electronic circuits is several orders of magnitude smaller than the farad. The most common subunits of capacitance in use today are the millifarad (mF), microfarad (μF), nanofarad (nF), picofarad (pF), and femtofarad (fF). Capacitance can be calculated if the geometry of the conductors and the dielectric properties of the insulator between the conductors are known. For example, the capacitance of a parallel-plate capacitor constructed of two parallel plates both of area A separated by a distance d is approximately C ≈ εrε₀A/d,
where C is the capacitance; A is the area of overlap of the two plates; εr is the relative static permittivity (sometimes called the dielectric constant) of the material between the plates (for a vacuum, εr = 1); ε₀ is the electric constant (ε₀ ≈ 8.854 × 10⁻¹² F·m⁻¹); and d is the separation between the plates. Capacitance is proportional to the area of overlap and inversely proportional to the separation between conducting sheets. The closer the sheets are to each other, the greater the capacitance. The equation is a good approximation if d is small compared to the other dimensions of the plates, so that the field in the capacitor over most of its area is uniform and the so-called fringing field around the periphery provides only a small contribution. In CGS units the equation has the form:[1]
where C in this case has the units of length. Combining the SI equation for capacitance with the above equation for the energy stored in a capacitance, for a flat-plate capacitor the energy stored is:
W = ½CV² = εrε₀AV²/(2d), where W is the energy, in joules; C is the capacitance, in farads; and V is the voltage, in volts.
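A quick numerical check of the parallel-plate formula and the stored energy, using PTFE's εr from the table at the top of the article (plate dimensions and voltage are arbitrary example values):

```python
eps0 = 8.8541878128e-12   # F/m

def parallel_plate_capacitance(area, separation, eps_r=1.0):
    """C = eps_r * eps0 * A / d; valid when d is small compared with
    the plate dimensions, so fringing fields are negligible."""
    return eps_r * eps0 * area / separation

# 1 cm x 1 cm plates, 0.1 mm apart, PTFE dielectric (eps_r ~ 2.1)
C = parallel_plate_capacitance(1e-4, 1e-4, eps_r=2.1)   # ~18.6 pF

# Energy stored when charged to 12 V
W = 0.5 * C * 12.0 ** 2
```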
where the voltage dependence of capacitance, C(V), stems from the field, which in a large-area parallel-plate device is given by E = V/d. This field polarizes the dielectric; in the case of a ferroelectric, the polarization is a nonlinear, S-shaped function of the field, which for a large-area parallel-plate device translates into a capacitance that is a nonlinear function of the voltage causing the field.[2][3] Corresponding to the voltage-dependent capacitance, to charge the capacitor to voltage V an integral relation is found:
which agrees with Q = CV only when C is voltage independent. By the same token, the energy stored in the capacitor now is given by
Integrating:
where interchange of the order of integration is used. The nonlinear capacitance of a microscope probe scanned along a ferroelectric surface is used to study the domain structure of ferroelectric materials.[4] Another example of voltage-dependent capacitance occurs in semiconductor devices such as semiconductor diodes, where the voltage dependence stems not from a change in dielectric constant but from a voltage dependence of the spacing between the charges on the two sides of the capacitor.[5] This effect is intentionally exploited in diode-like devices known as varicaps.
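For a voltage-dependent capacitance, the stored charge is the integral of C(v) over voltage rather than C(V)·V. A sketch using a hypothetical junction-like model (the model form and parameter values are assumed for illustration only):

```python
import math

# Hypothetical abrupt-junction-like capacitance model (assumed values)
C0, V0 = 100e-12, 0.7

def C_of_V(v):
    return C0 / math.sqrt(1.0 + v / V0)

def stored_charge(V, n=100_000):
    """Q = integral of C(v) dv from 0 to V -- not simply C(V) * V."""
    dv = V / n
    return sum(C_of_V((k + 0.5) * dv) * dv for k in range(n))

Q = stored_charge(5.0)
# Closed form for this model: Q = 2*C0*V0*(sqrt(1 + V/V0) - 1)
Q_closed = 2 * C0 * V0 * (math.sqrt(1.0 + 5.0 / V0) - 1.0)
```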
governed by dielectric relaxation processes, such as Debye relaxation. Under transient conditions, the displacement field can be expressed as (see electric susceptibility):
indicating the lag in response by the time dependence of εr, calculated in principle from an underlying microscopic analysis, for example, of the dipole behavior in the dielectric. See, for example, linear response function.[6][7] The integral extends over the entire past history up to the present time. A Fourier transform in time then results in:
where εr(ω) is now a complex function, with an imaginary part related to absorption of energy from the field by the medium. See permittivity. The capacitance, being proportional to the dielectric constant, also exhibits this frequency behavior. Fourier transforming Gauss's law with this form for displacement field:
where j is the imaginary unit, V(ω) is the voltage component at angular frequency ω, G(ω) is the real part of the current, called the conductance, and C(ω) determines the imaginary part of the current and is the capacitance. Z(ω) is the complex impedance. When a parallel-plate capacitor is filled with a dielectric, the measurement of the dielectric properties of the medium is based upon the relation:
where a single prime denotes the real part and a double prime the imaginary part, Z(ω) is the complex impedance with the dielectric present, C(ω) is the so-called complex capacitance with the dielectric present, and C₀ is the capacitance without the dielectric.[8][9] (Measurement "without the dielectric" in principle means measurement in free space, an unattainable goal inasmuch as even the quantum vacuum is predicted to exhibit nonideal behavior, such as dichroism. For practical purposes, when measurement errors are taken into account, often a measurement in terrestrial vacuum, or simply a calculation of C₀, is sufficiently accurate.[10]) Using this measurement method, the dielectric constant may exhibit a resonance at certain frequencies corresponding to characteristic response frequencies (excitation energies) of contributors to the dielectric constant. These resonances are the basis for a number of experimental techniques for detecting defects. The conductance method
measures absorption as a function of frequency.[11] Alternatively, the time response of the capacitance can be used directly, as in deep-level transient spectroscopy.[12] Another example of frequency dependent capacitance occurs with MOS capacitors, where the slow generation of minority carriers means that at high frequencies the capacitance measures only the majority carrier response, while at low frequencies both types of carrier respond.[13][14] At optical frequencies, in semiconductors the dielectric constant exhibits structure related to the band structure of the solid. Sophisticated modulation spectroscopy measurement methods based upon modulating the crystal structure by pressure or by other stresses and observing the related changes in absorption or reflection of light have advanced our knowledge of these materials.[15]
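The inversion used in such impedance-based measurements can be sketched numerically. All values below are assumed for illustration: a cell of known empty capacitance C₀ filled with a dielectric of complex εr presents the impedance Z(ω) = 1/(jωC₀εr), from which εr is recovered:

```python
# Recovering a complex relative permittivity from the measured impedance
# of a dielectric-filled parallel-plate cell (illustrative, assumed values)
C0 = 10e-12                  # capacitance without the dielectric, F
omega = 2 * 3.141592653589793 * 1e6   # measurement angular frequency, rad/s

eps_r_true = 3.0 - 0.05j     # assumed complex relative permittivity
Z = 1 / (1j * omega * C0 * eps_r_true)   # impedance the filled cell presents

# Inversion: C(w) = 1/(j*w*Z), then eps_r = C(w)/C0
eps_r_measured = 1 / (1j * omega * Z) / C0
```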
From this, the mutual capacitance C can be found by solving for the total charge Q and using C = Q/V.
Since no actual device holds perfectly equal and opposite charges on each of the two "plates", it is the mutual capacitance that is reported on capacitors.
The collection of coefficients is known as the capacitance matrix,[17][18] and is the inverse of the elastance matrix.
[edit] Self-capacitance
In electrical circuits, the term capacitance is usually a shorthand for the mutual capacitance between two adjacent conductors, such as the two plates of a capacitor. However, for an isolated conductor there also exists a property called self-capacitance, which is the amount of electrical charge that must be added to an isolated conductor to raise its electrical potential by one unit (i.e. one volt, in most measurement systems).[19] The reference point for this potential is a theoretical hollow conducting sphere, of infinite radius, centered on the conductor. Using this method, the self-capacitance of a conducting sphere of radius R is given by:[20] C = 4πε₀R.
Example values of self-capacitance are: for the top "plate" of a Van de Graaff generator, typically a sphere 20 cm in radius: 20 pF; for the planet Earth: about 710 μF.[21]
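Both example values follow directly from C = 4πε₀R, as a quick computation shows:

```python
import math

eps0 = 8.8541878128e-12   # F/m

def sphere_self_capacitance(radius_m):
    """Self-capacitance of an isolated conducting sphere, C = 4*pi*eps0*R."""
    return 4 * math.pi * eps0 * radius_m

C_vdg = sphere_self_capacitance(0.20)       # 20 cm Van de Graaff terminal, ~22 pF
C_earth = sphere_self_capacitance(6.371e6)  # mean Earth radius, ~709 microfarads
```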
The capacitive component of a coil, which reduces its impedance at high frequencies and can lead to resonance and self-oscillation, is also called self-capacitance,[22] as well as stray or parasitic capacitance.
[edit] Elastance
The reciprocal of capacitance is called elastance. The unit of elastance is the daraf, which is not recognised by the SI.
By Miller's theorem, an impedance Z linking two nodes between which the voltage gain is K can be replaced by a Z/(1−K) impedance between the first node and ground and a KZ/(K−1) impedance between the second node and ground. Since impedance varies inversely with capacitance, the internode capacitance, C, will be seen to have been replaced by a capacitance of KC from input to ground and a capacitance of (K−1)C/K from output to ground. When the input-to-output gain is very large, the equivalent input-to-ground impedance is very small, while the output-to-ground impedance is essentially equal to the original (input-to-output) impedance.
[Table: closed-form capacitance expressions for simple systems, including a coaxial cable, parallel coplanar strips (strip width w, kᵢ = d/(2wᵢ + d), elliptic integral K),[24] a pair of parallel wires, a wire parallel to a wall, concentric spheres, a sphere in front of a wall,[25] an isolated sphere, a circular disc,[27] and a thin straight wire of finite length;[28][29][30] the formulas involve the radii a, a₁, a₂, the separation d, the length ℓ, and Euler's constant γ.]
Electric field
In physics, an electric field surrounds electrically charged particles and time-varying magnetic fields. The electric field describes the force exerted on other electrically charged objects by the charged particle the field surrounds. The concept of an electric field was introduced by Michael Faraday.
Contents
1 Qualitative description 2 Quantitative definition 3 Superposition o 3.1 Array of discrete point charges o 3.2 Continuum of charges 4 Electrostatic fields 5 Electrodynamic fields 6 Energy in the electric field 7 Further extensions o 7.1 Definitive equation of vector fields o 7.2 Constitutive relation 8 See also 9 References 10 External links
Electric field from a negative Q. Electric fields are generated by charges. Suppose a stationary charge Q (the "source charge") creates an electric field E, and that another separate charge q (a "test charge") is placed in the E-field due to Q. The electric field intensity E is defined as the force F experienced by a stationary positive unit point charge q at position r (relative to Q) in the field:[1][2] E = F/q.
Since the E field can vary from point to point in space, i.e. depends on r, it is a vector field. Using Coulomb's law, the E-field at a point in space due to Q is given by E = Q r̂/(4πε₀r²),
where r = |r| is the magnitude of the position vector, r̂ is the unit vector corresponding to r (pointing from Q to q), and ε₀ is the electric constant. From the definition, the direction of the electric field is the same as the direction of the force it would exert on a positively-charged particle, and opposite the direction of the force on a negatively-charged particle. Since like charges repel and opposites attract, the electric field is directed away from positive charges and towards negative charges. According to Coulomb's law the electric field is dependent on position. The electric field due to any single charge falls off as the square of the distance from that charge, an example of an inverse-square law. Adding or moving another source charge will alter the electric field distribution. Therefore an electric field is defined with respect to a particular configuration of source charges.
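The inverse-square falloff is easy to demonstrate numerically; the charge and distances are arbitrary example values:

```python
import math

eps0 = 8.8541878128e-12
k_e = 1.0 / (4 * math.pi * eps0)    # Coulomb constant, ~8.988e9 N m^2/C^2

def e_field_magnitude(q, r):
    """Field magnitude of a point charge q (C) at distance r (m)."""
    return k_e * q / r ** 2

# Inverse-square behavior: doubling the distance quarters the field
E1 = e_field_magnitude(1e-9, 1.0)   # 1 nC charge, 1 m away: ~9 V/m
E2 = e_field_magnitude(1e-9, 2.0)
```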
[edit] Superposition
The total E-field due to N point charges is simply the superposition of the E-fields due to each point charge:
where ρ is the charge density (the amount of charge per unit volume), and dV is the differential volume element. This integral is a volume integral over the region of the charge distribution.
The electric field at a point is equal to the negative gradient of the electric potential φ there. Coulomb's law is actually a special case of Gauss's law, a more fundamental description of the relationship between the distribution of electric charge in space and
the resulting electric field. While Coulomb's law (as given above) is only true for stationary point charges, Gauss's law is true for all charges, either static or in motion. Gauss's law is one of Maxwell's equations governing electromagnetism. Gauss's law allows the E-field to be calculated in terms of a continuous distribution of charge density: ∇·E = ρ/ε₀,
where ∇· is the divergence operator and ρ is the total charge density, including free and bound charge; in other words, all the charge present in the system (per unit volume).
Illustration of the electric field surrounding a positive (red) and a negative (blue) charge in one dimension, as the right-hand charge changes from positive to negative.
Illustration of the electric field surrounding a positive (red) and a negative (blue) charge. The electric field at a point, E(r), is equal to the negative gradient of the electric potential φ(r), a scalar field at the same point:

E(r) = −∇φ(r)
where ∇ is the gradient. This is equivalent to the force definition above, since the electric potential φ is defined by the electric potential energy U per unit (test) positive charge: φ = U/q.
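The gradient relation can be checked numerically against the Coulomb field. A small sketch, with illustrative values, approximating the derivative of a point-charge potential by a central finite difference:

```python
K = 8.9875517873681764e9    # Coulomb constant 1/(4*pi*eps0)
q = 1e-9                    # source charge, C (illustrative)

def phi(x):
    """Potential of a point charge at distance x metres."""
    return K * q / x

# Central finite difference approximating E_x = -d(phi)/dx at x = 2 m:
x, h = 2.0, 1e-6
E_numeric = -(phi(x + h) - phi(x - h)) / (2 * h)
E_exact = K * q / x**2      # Coulomb field of the same charge
print(E_numeric, E_exact)   # both ~2.247 V/m
```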
If several spatially distributed charges generate such an electric potential, e.g. in a solid, an electric field gradient may also be defined.
E = −ΔV / d

where ΔV is the potential difference between the plates and d is the distance separating the plates. The negative sign arises because positive charges repel: a positive charge will experience a force away from the positively charged plate, in the opposite direction to that in which the voltage increases.
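In magnitude the relation is just |E| = V/d for an ideal pair of plates. A trivial worked example, with illustrative values:

```python
# Magnitude of the uniform field between ideal parallel plates: |E| = V / d
# (edge effects ignored; values are illustrative).
V = 12.0      # potential difference between the plates, volts
d = 0.003     # plate separation, metres
E = V / d
print(E)      # 4000.0 V/m
```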
This suggests similarities between the electric field E and the gravitational field g, so sometimes mass is called "gravitational charge".

Similarities between electrostatic and gravitational forces:
1. Both act in a vacuum.
2. Both are central and conservative.
3. Both obey an inverse-square law (both are inversely proportional to the square of r).
4. Both propagate with finite speed c, the speed of light.
5. Electric charge and relativistic mass are conserved; note, though, that rest mass is not conserved.

Differences between electrostatic and gravitational forces:
1. Electrostatic forces are much greater than gravitational forces (by about 10^36 times for a pair of protons).
2. Gravitational forces are attractive for like "charges" (masses), whereas electrostatic forces are repulsive for like charges.
3. There are no negative gravitational charges (no negative mass), while there are both positive and negative electric charges. This difference, combined with the previous one, implies that gravitational forces are always attractive, while electrostatic forces may be either attractive or repulsive.
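The famous disparity between the two forces can be computed directly from CODATA constants. A quick sketch for two protons (since both forces fall off as 1/r², the ratio is distance-independent):

```python
# Ratio of the electrostatic to the gravitational force between two protons.
# Both obey an inverse-square law, so the ratio is independent of distance r.
K = 8.9875517873681764e9    # Coulomb constant, N*m^2/C^2
G = 6.67430e-11             # gravitational constant, N*m^2/kg^2
e = 1.602176634e-19         # elementary charge, C
m_p = 1.67262192369e-27     # proton mass, kg

ratio = (K * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")       # ~1.24e36
```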
E = −∇φ − ∂A/∂t

in which B satisfies

B = ∇ × A

and ∇× denotes the curl. The vector field B is the magnetic flux density and the vector A is the magnetic vector potential. Taking the curl of the electric field equation (and using ∇ × ∇φ = 0) we obtain

∇ × E = −∂B/∂t
u = ½ ε |E|²

where ε is the permittivity of the medium in which the field exists and E is the electric field vector. The total energy U stored in the electric field in a given volume V is therefore

U = ½ ∫V ε |E|² dV
D = ε0 E + P

where P is the electric polarization, the volume density of electric dipole moments, and D is the electric displacement field. Since E and P are defined separately, this equation can be used to define D. The physical interpretation of D is not as clear as that of E (effectively the field applied to the material) or P (the induced field due to the dipoles in the material), but it still serves as a convenient mathematical simplification, since Maxwell's equations can be simplified in terms of free charges and currents.
For anisotropic materials the E and D fields are not parallel, and so E and D are related by the permittivity tensor (a 2nd-order tensor field), in component form:

Dᵢ = εᵢⱼ Eⱼ
For non-linear media, E and D are not proportional. Materials can have varying extents of linearity, homogeneity and isotropy.
References
1. ^ Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0-471-92712-9
2. ^ Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3
3. ^ Huray, Paul G. (2009), Maxwell's Equations, Wiley-IEEE, Chapter 7, p. 205, ISBN 0-470-54276-4
4. ^ Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3
5. ^ Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0-471-92712-9
6. ^ Electricity and Modern Physics (2nd Edition), G.A.G. Bennet, Edward Arnold (UK), 1974, ISBN 0-7131-2459-8
7. ^ Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 978-0-471-92712-9
Dielectric
A dielectric is an electrical insulator that can be polarized by an applied electric field. When a dielectric is placed in an electric field, electric charges do not flow through the material, as in a conductor, but only slightly shift from their average equilibrium positions, causing dielectric polarization. Because of dielectric polarization, positive charges are displaced toward the field and negative charges shift in the opposite direction. This creates an internal electric field which reduces the overall field within the dielectric itself.[1] If a dielectric is composed of weakly bonded molecules, those molecules not only become polarized, but also reorient so that their symmetry axes align to the field.[1]

Although the term "insulator" implies low electrical conduction, "dielectric" is typically used to describe materials with a high polarizability. The latter is expressed by a number called the dielectric constant. A common, yet notable, example of a dielectric is the electrically insulating material between the metallic plates of a capacitor. The polarization of the dielectric by the applied electric field increases the capacitor's surface charge.[1]

The study of dielectric properties is concerned with the storage and dissipation of electric and magnetic energy in materials.[2] It is important for explaining various phenomena in electronics, optics, and solid-state physics. The term "dielectric" was coined by William Whewell (from "dia-electric") in response to a request from Michael Faraday.[3]
Contents

1 Electric susceptibility
  1.1 Dispersion and causality
2 Dielectric polarization
  2.1 Basic atomic model
  2.2 Dipolar polarization
  2.3 Ionic polarization
3 Dielectric dispersion
4 Dielectric relaxation
  4.1 Debye relaxation
  4.2 Variants of the Debye equation
5 Applications
  5.1 Capacitors
  5.2 Dielectric resonator
6 Some practical dielectrics
7 See also
8 References
9 Further reading
10 External links
P(t) = ε0 ∫₋∞ᵗ χ(t − t′) E(t′) dt′

That is, the polarization is a convolution of the electric field at previous times with the time-dependent susceptibility χ(Δt). The upper limit of this integral can be extended to infinity as well if one defines χ(Δt) = 0 for Δt < 0. An instantaneous response corresponds to a Dirac delta function susceptibility, χ(Δt) = χ δ(Δt).
It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Due to the convolution theorem, the integral becomes a simple product:

P(ω) = ε0 χ(ω) E(ω)
Note the simple frequency dependence of the susceptibility, or equivalently the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material. Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. χ(Δt) = 0 for Δt < 0), a consequence of causality, imposes Kramers–Kronig constraints on the susceptibility χ(ω).
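The causal convolution can be illustrated with a discrete sketch. Assuming an exponential susceptibility kernel (an arbitrary illustrative choice, normalized so the static susceptibility is 1) and a unit step field:

```python
import math

eps0 = 8.8541878128e-12     # permittivity of free space, F/m
dt = 0.01                   # time step, s (illustrative)
tau = 0.1                   # relaxation time, s (illustrative)

# Causal susceptibility chi(t) = (1/tau) * exp(-t/tau) for t >= 0,
# normalized so its time integral (the static susceptibility) is 1.
chi = [math.exp(-n * dt / tau) / tau for n in range(200)]
E = [1.0] * 200             # unit field switched on at t = 0 (step input)

def polarization(n):
    """Discrete form of P(t) = eps0 * integral chi(t - t') E(t') dt'."""
    return eps0 * sum(chi[k] * E[n - k] for k in range(n + 1)) * dt

# The response builds up causally toward eps0 * E (static susceptibility 1):
print(polarization(0) / eps0, polarization(199) / eps0)   # ~0.1 -> ~1.05
```

The slight overshoot past 1 at late times is rectangle-rule discretization error, not physics.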
Electric field interaction with an atom under the classical dielectric model.

In the classical approach to the dielectric model, a material is made up of atoms. Each atom consists of a cloud of negative charge (electrons) bound to and surrounding a positive point charge at its center. In the presence of an electric field the charge cloud is distorted, as shown in the top right of the figure. This can be reduced to a simple dipole using the superposition principle. A dipole is characterized by its dipole moment, a vector quantity shown in the figure as the blue arrow labeled M. It is the relationship between the electric field and the dipole moment that gives rise to the behavior of the dielectric. (Note that the dipole moment is shown pointing in the same direction as the electric field. This is not always correct, and it is a major simplification, but it is suitable for many materials.) When the electric field is removed the atom returns to its original state. The time required to do so is the so-called relaxation time; the decay is exponential.
This is the essence of the model in physics. The behavior of the dielectric now depends on the situation. The more complicated the situation the richer the model has to be in order to accurately describe the behavior. Important questions are:
- Is the electric field constant or does it vary with time?
  - If the electric field does vary, at what rate?
- What are the characteristics of the material?
  - Is the direction of the field important (isotropy)?
  - Is the material the same all the way through (homogeneous)?
  - Are there any boundaries/interfaces that have to be taken into account?
- Is the system linear or do nonlinearities have to be taken into account?
The relationship between the electric field E and the dipole moment M gives rise to the behavior of the dielectric, which, for a given material, can be characterized by the function F defined by the equation M = F(E). When both the type of electric field and the type of material have been defined, one then chooses the simplest function F that correctly predicts the phenomena of interest. Examples of phenomena that can be so modeled include:
When an external electric field is applied in the infrared, a molecule is bent and stretched by the field and the molecular moment changes in response. The molecular vibration frequency is approximately the inverse of the time taken for the molecule to bend, and the distortion polarization disappears above the infrared.
where n is the number of different possible wavelengths of emitted radiation and m is the number of energy levels (including the ground level).
ε̂(ω) = ε∞ + (εs − ε∞) / (1 + iωτ)

where ε∞ is the permittivity at the high-frequency limit, εs is the static, low-frequency permittivity, and τ is the characteristic relaxation time of the medium. This relaxation model was introduced by and named after the chemist Peter Debye (1913).[4]
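The Debye model is easy to evaluate numerically. A minimal sketch; the water-like parameter values below are rough, assumed figures for illustration only:

```python
import math

def debye_permittivity(omega, eps_s, eps_inf, tau):
    """Complex relative permittivity of the Debye model:
    eps(omega) = eps_inf + (eps_s - eps_inf) / (1 + 1j*omega*tau)."""
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

# Illustrative, approximate values for water near room temperature:
eps_s, eps_inf, tau = 80.1, 5.6, 8.3e-12    # tau in seconds

for f in (1e6, 1e9, 1e12):                  # 1 MHz, 1 GHz, 1 THz
    eps = debye_permittivity(2 * math.pi * f, eps_s, eps_inf, tau)
    print(f"{f:.0e} Hz: eps' = {eps.real:.2f}, eps'' = {-eps.imag:.2f}")
```

At low frequency the real part approaches εs, and well above 1/τ it collapses to ε∞, reproducing the dispersion described in the text.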
Applications
Capacitors
Main article: capacitor
Charge separation in a parallel-plate capacitor causes an internal electric field. A dielectric (orange) reduces the field and increases the capacitance.

Commercially manufactured capacitors typically use a solid dielectric material with high permittivity as the intervening medium between the stored positive and negative charges. This material is often referred to in technical contexts as the "capacitor dielectric".[5] The most obvious advantage to using such a dielectric material is that it prevents the conducting plates on which the charges are stored from coming into direct electrical contact. More significantly, however, a high permittivity allows a greater charge to be stored at a given voltage. This can be seen by treating the case of a linear dielectric with permittivity ε and thickness d between two conducting plates with uniform charge density ±σ. In this case the charge density is given by

σ = ε V / d
From this, it can easily be seen that a larger ε leads to a greater charge stored and thus a greater capacitance.
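For a parallel-plate geometry this scaling is C = ε_r ε0 A / d. A small sketch with illustrative dimensions, comparing vacuum against an SiO2-like dielectric (ε_r ≈ 3.9, per the table at the start of this document):

```python
eps0 = 8.8541878128e-12    # permittivity of free space, F/m

def parallel_plate_capacitance(eps_r, area, d):
    """C = eps_r * eps0 * A / d for an ideal parallel-plate capacitor
    (fringing fields ignored)."""
    return eps_r * eps0 * area / d

A, d = 1e-4, 1e-6                                # 1 cm^2 plates, 1 um apart
c_vac = parallel_plate_capacitance(1.0, A, d)
c_sio2 = parallel_plate_capacitance(3.9, A, d)   # SiO2, eps_r ~ 3.9
print(c_vac, c_sio2)   # the dielectric stores 3.9x the charge per volt
```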
Dielectric materials used for capacitors are also chosen such that they are resistant to ionization. This allows the capacitor to operate at higher voltages before the insulating dielectric ionizes and begins to allow undesirable current.
Ferroelectric materials often have very high dielectric constants, making them quite useful for capacitors.
Electromagnetic field
An electromagnetic field (also EMF or EM field) is a physical field produced by moving electrically charged objects. It affects the behavior of charged objects in the vicinity of the field. The electromagnetic field extends indefinitely throughout space and describes the electromagnetic interaction. It is one of the four fundamental forces of nature (the others are gravitation, the weak interaction, and the strong interaction). The field can be viewed as the combination of an electric field and a magnetic field. The electric field is produced by stationary charges, and the magnetic field by moving charges (currents); these two are often described as the sources of the field. The way in which charges and currents interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. From a classical perspective, the electromagnetic field can be regarded as a smooth, continuous field, propagated in a wavelike manner; whereas from the perspective of
quantum field theory, the field is seen as quantized, being composed of individual particles.[citation needed]
Contents

1 Structure of the electromagnetic field
  1.1 Continuous structure
  1.2 Discrete structure
2 Dynamics of the electromagnetic field
3 Electromagnetic field as a feedback loop
4 Mathematical description
5 Properties of the field
  5.1 Reciprocal behavior of electric and magnetic fields
  5.2 Light as an electromagnetic disturbance
6 Relation to and comparison with other physical fields
  6.1 Electromagnetic and gravitational fields
7 Applications
  7.1 Static E and B fields and static EM fields
  7.2 Time-varying EM fields in Maxwell's equations
8 Health and safety
9 See also
10 References
11 External links
E = hν

where h is Planck's constant, named in honor of Max Planck, and ν is the frequency of the photon. Although modern quantum optics tells us that there is also a semi-classical explanation of the photoelectric effect (the emission of electrons from metallic surfaces subjected to electromagnetic radiation), the photon was historically (although not strictly necessarily) used to explain certain observations. It is found that increasing the intensity of the incident radiation (so long as one remains in the linear regime) increases only the number of electrons ejected, and has almost no effect on the energy distribution of their ejection. Only the frequency of the radiation is relevant to the energy of the ejected electrons. This quantum picture of the electromagnetic field (which treats it as analogous to harmonic oscillators) has proved very successful, giving rise to quantum electrodynamics, a quantum field theory describing the interaction of electromagnetic radiation with charged matter. It also gives rise to quantum optics, which is different from quantum electrodynamics in that the matter itself is modelled using quantum mechanics rather than quantum field theory.
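E = hν is a one-line computation. A quick sketch contrasting a visible-light photon with a radio photon (frequencies are illustrative round numbers):

```python
h = 6.62607015e-34          # Planck's constant, J*s (exact SI value)
e_charge = 1.602176634e-19  # elementary charge, C (for J -> eV conversion)

def photon_energy_ev(freq_hz):
    """E = h * nu, converted from joules to electron-volts."""
    return h * freq_hz / e_charge

# Green light at ~540 THz versus an FM-radio photon at ~100 MHz:
print(photon_energy_ev(5.4e14))   # ~2.23 eV
print(photon_energy_ev(1.0e8))    # ~4.1e-7 eV
```

The enormous gap between the two illustrates why frequency, not intensity, sets the energy scale of individual photoelectric events.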
The behavior of the electromagnetic field can be resolved into four different parts of a loop:
- the electric and magnetic fields are generated by electric charges,
- the electric and magnetic fields interact with each other,
- the electric and magnetic fields produce forces on electric charges,
- the electric charges move in space.
A common misunderstanding is that (a) the quanta of the fields act in the same manner as (b) the charged particles that generate the fields. In our everyday world, charged particles, such as electrons, move slowly through matter, typically on the order of a few inches (or centimeters) per second[citation needed], but fields propagate at the speed of light - approximately 300 thousand kilometers (or 186 thousand miles) a second. The mundane speed difference between charged particles and field quanta is on the order of one to a million, more or less. Maxwell's equations relate (a) the presence and movement of charged particles with (b) the generation of fields. Those fields can then affect the force on, and can then move, other slowly moving charged particles. Charged particles can move at relativistic speeds nearing field propagation speeds, but, as Einstein showed[citation needed], this requires enormous field energies, which are not present in our everyday experiences with electricity, magnetism, matter, and time. The feedback loop can be summarized in a list, including phenomena belonging to each part of the loop:
- charged particles generate electric and magnetic fields
- the fields interact with each other
  - changing electric field acts like a current, generating a 'vortex' of magnetic field
  - Faraday induction: changing magnetic field induces a (negative) vortex of electric field
  - Lenz's law: negative feedback loop between electric and magnetic fields
- fields act upon particles
  - Lorentz force: force due to the electromagnetic field
    - electric force: same direction as the electric field
    - magnetic force: perpendicular both to the magnetic field and to the velocity of the charge
- particles move
  - current is movement of particles
- particles generate more electric and magnetic fields; cycle repeats
often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field).
If only the electric field (E) is non-zero and constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.[1] With the advent of special relativity, physical laws became amenable to the formalism of tensors, and Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws. The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed in a vacuum by Maxwell's equations. In the vector field formalism, these are:

∇ · E = ρ/ε0 (Gauss's law)
∇ · B = 0 (Gauss's law for magnetism)
∇ × E = −∂B/∂t (Faraday's law)
∇ × B = μ0 J + μ0 ε0 ∂E/∂t (Ampère–Maxwell law)

where ρ is the charge density, which can (and often does) depend on time and position, ε0 is the permittivity of free space, μ0 is the permeability of free space, and J is the current density vector, also a function of time and position. The units used above are the standard SI units. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors. The Lorentz force law governs the interaction of the electromagnetic field with charged matter. When a field travels across to different media, the properties of the field change according to the various boundary conditions. These equations are derived from Maxwell's equations. The tangential components of the electric and magnetic fields as they relate on the boundary of two media are as follows:[2]
E1t = E2t
H1t = H2t (current-free)
D1n = D2n (charge-free)
B1n = B2n

where the subscripts t and n denote the tangential and normal components at the boundary between media 1 and 2.
The angle of refraction of an electric field between media is related to the permittivity ε of each medium:

tan θ1 / tan θ2 = ε1 / ε2
The angle of refraction of a magnetic field between media is related to the permeability μ of each medium:

tan θ1 / tan θ2 = μ1 / μ2
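The tangent-ratio refraction rule for field lines is easy to apply numerically. A minimal sketch, assuming a charge-free boundary and measuring angles from the normal (the material values are illustrative, taken from the permittivity table at the start of this document):

```python
import math

def refracted_angle_deg(theta1_deg, eps1, eps2):
    """Field-line refraction at a charge-free dielectric boundary,
    using tan(theta1)/tan(theta2) = eps1/eps2 (angles from the normal)."""
    t2 = math.tan(math.radians(theta1_deg)) * eps2 / eps1
    return math.degrees(math.atan(t2))

# E-field line passing from air (eps_r ~ 1) into glass (eps_r ~ 4.7):
print(refracted_angle_deg(30.0, 1.0, 4.7))   # ~69.8 degrees
```

Note that, unlike optical (Snell's-law) refraction, the field lines bend away from the normal when entering the higher-permittivity medium.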
James Clerk Maxwell was the first to obtain this relationship, by completing Maxwell's equations with the addition of a displacement current term to Ampère's circuital law.
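The celebrated consequence of that completion is that the equations predict waves travelling at c = 1/√(μ0 ε0). A one-line check with CODATA values:

```python
import math

eps0 = 8.8541878128e-12    # permittivity of free space, F/m
mu0 = 1.25663706212e-6     # permeability of free space, H/m

# The completed equations predict waves with speed c = 1/sqrt(mu0 * eps0):
c = 1.0 / math.sqrt(mu0 * eps0)
print(c)    # ~2.998e8 m/s, matching the measured speed of light
```

It was this numerical coincidence that led Maxwell to identify light as an electromagnetic disturbance.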
Being one of the four fundamental forces of nature, it is useful to compare the electromagnetic field with the gravitational, strong and weak fields. The word 'force' is sometimes replaced by 'interaction' because the fundamental forces operate by exchanging what are now known to be gauge bosons.
Applications
show that the space now contains an electric field as well, which will be found to produce an additional Lorentz force upon the moving charge. Thus, electrostatics, as well as magnetism and magnetostatics, are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field; and since an EM field with both electric and magnetic components will appear in any other frame, these "simpler" effects are merely a consequence of the observer's frame of reference. The "applications" of all such non-time-varying (static) fields are discussed in the main articles linked in this section.
particles. Far-field effects (EMR) in the quantum picture of radiation, are represented by ordinary photons.
- Static electric fields: see Electric shock
- Static magnetic fields: see MRI#Safety
- Extremely low frequency (ELF): see Power lines#Health concerns
- Radio frequency (RF): see Electromagnetic radiation and health
- Light: see Laser safety
- Ultraviolet (UV): see Sunburn
- Gamma rays: see Gamma ray
- Mobile telephony: see Mobile phone radiation and health