
THE recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist [1] and Hartley [2] on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information.

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.

If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure.

The logarithmic measure is more convenient for various reasons:

1. It is practically more useful. Parameters of engineering importance such as time, bandwidth, number of relays, etc., tend to vary linearly with the logarithm of the number of possibilities. For example, adding one relay to a group doubles the number of possible states of the relays. It adds 1 to the base 2 logarithm of this number. Doubling the time roughly squares the number of possible messages, or doubles the logarithm, etc.

2. It is nearer to our intuitive feeling as to the proper measure. This is closely related to (1) since we intuitively measure entities by linear comparison with common standards. One feels, for example, that two punched cards should have twice the capacity of one for information storage, and two identical channels twice the capacity of one for transmitting information.

3. It is mathematically more suitable. Many of the limiting operations are simple in terms of the logarithm but would require clumsy restatement in terms of the number of possibilities.

The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits, since the total number of possible states is 2^N and log2 2^N = N. If the base 10 is used the units may be called decimal digits. Since log2 M = log10 M / log10 2 = 3.32 log10 M, a decimal digit is about 3 1/3 bits.
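As a small illustration of these unit conversions (a sketch in Python added here for concreteness; the numbers chosen are arbitrary):

    import math

    # N two-state devices (relays or flip-flops) have 2**N possible states,
    # so they store log2(2**N) = N bits.
    N = 8
    print(math.log2(2 ** N))            # 8.0, i.e. 8 bits

    # Converting decimal digits to bits: log2(M) = log10(M) / log10(2),
    # and 1 / log10(2) is about 3.32, so one decimal digit is about 3.32 bits.
    M = 1000                            # a quantity with 3 decimal digits
    print(math.log2(M))                 # about 9.97 bits
    print(3.32 * math.log10(M))         # about 9.96, matching the approximation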

Chaos theory is a field of study in mathematics, with applications in several disciplines including physics, economics, biology, and philosophy. Chaos theory studies the behavior of dynamical systems that are highly sensitive to initial conditions; an effect which is popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible in general.[1] This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved.[2] In other words, the deterministic nature of these systems does not make them predictable.[3][4] This behavior is known as deterministic chaos, or simply chaos.
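The sensitivity described above can be seen in a few lines of code. The following is a minimal sketch (not from the source text) using the logistic map x -> r x (1 - x) at r = 4, a standard chaotic regime; the starting values are arbitrary:

    # Two almost identical initial conditions diverge under a chaotic map.
    r = 4.0
    x, y = 0.200000, 0.200001           # differ by 1e-6
    for step in range(1, 51):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(step, abs(x - y))     # the gap grows from ~1e-6 toward order 1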

A successful application of chaos theory is in ecology, where dynamical systems such as the Ricker model have been used to show how population growth under density dependence can lead to chaotic dynamics.[citation needed] Chaotic behavior has also been observed in the laboratory, for example in mechanical and magneto-mechanical devices, as well as in computer models of chaotic processes. Observations of chaotic behavior in nature include changes in weather,[5] the dynamics of satellites in the solar system, the time evolution of the magnetic field of celestial bodies, population growth in ecology, the dynamics of the action potentials in neurons, and molecular vibrations. There is some controversy over the existence of chaotic dynamics in plate tectonics and in economics.[12][13][14] Chaos theory is also currently being applied to medical studies of epilepsy, specifically to the prediction of seemingly random seizures by observing initial conditions.

In mathematics, algebraic K-theory is an important part of homological algebra concerned with defining and applying a sequence Kn(R) of functors from rings to abelian groups, for all integers n. For historical reasons, the lower K-groups K0 and K1 are thought of in somewhat different terms from the higher algebraic K-groups Kn for n >= 2. Indeed, the lower groups are more accessible, and have more applications, than the higher groups. The theory of the higher K-groups is noticeably deeper, and certainly much harder to compute (even when R is the ring of integers). The group K0(R) generalises the construction of the ideal class group of a ring, using projective modules. Its development in the 1960s and 1970s was linked to attempts to solve a conjecture of Serre on projective modules that now is the Quillen–Suslin theorem; numerous other connections with classical algebraic problems were found in this era. Similarly, K1(R) is a modification of the group of units in a ring, using elementary matrix theory. The group K1(R) is important in topology, especially when R is a group ring, because its quotient, the Whitehead group, contains the Whitehead torsion used to study problems in simple homotopy theory and surgery theory; the group K0(R) also contains other invariants such as the finiteness invariant. Since the 1980s, algebraic K-theory has increasingly had applications to algebraic geometry. For example, motivic cohomology is closely related to algebraic K-theory.

In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. Note that what is meant by best and simpler will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials. One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations. The objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating point arithmetic. This is accomplished by using a polynomial of high degree, and/or narrowing the domain over which the polynomial has to approximate the function. Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated. Modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment.
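A rough sketch of this kind of library-style approximation, in Python with NumPy (the function, degree, and reduced domain are illustrative choices, not taken from the text):

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Approximate exp(x) on the narrowed domain [0, ln 2] by a degree-6
    # Chebyshev interpolant, the kind of range reduction described above.
    a, b = 0.0, np.log(2.0)
    p = C.Chebyshev.interpolate(np.exp, deg=6, domain=[a, b])

    xs = np.linspace(a, b, 1000)
    print(np.max(np.abs(p(xs) - np.exp(xs))))   # maximum error on the reduced domain (very small)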

Asymptotic theory or large sample theory is the branch of mathematics which studies properties of asymptotic expansions. The best-known result of this field is the prime number theorem: let π(x) be the number of prime numbers that are smaller than or equal to x.

Then the limit of π(x) / (x / ln x) as x tends to infinity exists, and is equal to 1. Some results often neglected include the probability distribution of the likelihood ratio statistic and the expected value of the deviance in statistics, results that are used daily by applied statisticians.
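A quick numerical illustration of the theorem (a Python sketch with a basic sieve; the cutoffs are arbitrary):

    import math

    def primes_up_to(n):
        # Sieve of Eratosthenes; returns the number of primes <= n.
        sieve = [True] * (n + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
        return sum(sieve)

    for x in (10 ** 3, 10 ** 5, 10 ** 6):
        pi_x = primes_up_to(x)
        print(x, pi_x / (x / math.log(x)))   # the ratio slowly approaches 1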

In theoretical computer science, automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. A classic example is the finite state machine, one well-known variety of automaton. This automaton consists of states (often drawn as circles) and transitions (drawn as arrows). As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function (which takes the current state and the recent symbol as its inputs). Automata theory is also closely related to formal language theory, as the automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata play a major role in compiler design and parsing.
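A minimal automaton along these lines can be written directly as a transition table. The sketch below (states, alphabet, and the accepted language are invented for illustration) accepts binary strings containing an even number of 1s:

    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }

    def accepts(word, start="even", accepting=("even",)):
        state = start
        for symbol in word:
            state = transitions[(state, symbol)]   # the transition function
        return state in accepting

    print(accepts("1011"))   # False (three 1s)
    print(accepts("1001"))   # True  (two 1s)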

Bifurcation theory is the mathematical study of changes in the qualitative or topological structure of a given family, such as the integral curves of a family of vector fields, and the solutions of a family of differential equations. Most commonly applied to the mathematical study of dynamical systems, a bifurcation occurs when a small smooth change made to the parameter values (the bifurcation parameters) of a system causes a sudden 'qualitative' or topological change in its behaviour.[1] Bifurcations occur in both continuous systems (described by ODEs, DDEs or PDEs), and discrete systems (described by maps).
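As a concrete and standard example, not drawn from the text above: the one-parameter family dx/dt = r + x^2 undergoes a saddle-node bifurcation at r = 0, where its equilibria appear or disappear. A short check in Python:

    import numpy as np

    # Equilibria solve r + x**2 = 0: two for r < 0, one for r = 0, none for r > 0.
    for r in (-1.0, 0.0, 1.0):
        roots = np.roots([1.0, 0.0, r])                  # roots of x**2 + r
        real = roots[np.isclose(roots.imag, 0.0)].real
        print(r, sorted(set(np.round(real, 6))))         # real equilibria at this r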

In topology, a branch of mathematics, braid theory is an abstract geometric theory studying the everyday braid concept, and some generalizations. The idea is that braids can be organized into groups, in which the group operation is 'do the first braid on a set of strings, and then follow it with a second on the twisted strings'. Such groups may be described by explicit presentations, as was shown by Emil Artin (1947). For an elementary treatment along these lines, see the article on braid groups. Braid groups may also be given a deeper mathematical interpretation: as the fundamental group of certain configuration spaces.

Figure: the braid is associated with a planar graph.

In mathematics, catastrophe theory is a branch of bifurcation theory in the study of dynamical systems; it is also a particular special case of more general singularity theory in geometry. Bifurcation theory studies and classifies phenomena characterized by sudden shifts in behavior arising from small changes in circumstances, analysing how the qualitative nature of equation solutions depends on the parameters that appear in the equation. This may lead to sudden and dramatic changes, for example the unpredictable timing and magnitude of a landslide. Catastrophe theory, which originated with the work of the French mathematician René Thom in the 1960s, and became very popular due to the efforts of Christopher Zeeman in the 1970s, considers the special case where the long-run stable equilibrium can be identified with the minimum of a smooth, well-defined potential function (Lyapunov function). Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system. However, examined in a larger parameter space, catastrophe theory reveals that such bifurcation points tend to occur as part of well-defined qualitative geometrical structures.

Category theory is an area of study in mathematics that examines in an abstract way the properties of particular mathematical concepts, by formalising them as collections of objects and arrows (also called morphisms, although this term also has a specific, non category-theoretical sense), where these collections satisfy certain basic conditions. Many significant areas of mathematics can be formalised as categories, and the use of category theory allows many intricate and subtle mathematical results in these fields to be stated, and proved, in a much simpler way than without the use of categories. The most accessible example of a category is the category of sets, where the objects are sets and the arrows are functions from one set to another. However it is important to note that the objects of a category need not be sets nor the arrows functions; any way of formalising a mathematical concept such that it meets the basic conditions on the behaviour of objects and arrows is a valid category, and all the results of category theory will apply to it.

One of the simplest examples of a category is that of a groupoid, defined as a category whose arrows or morphisms are all invertible. The groupoid concept is important in topology. Categories now appear in most branches of mathematics, some areas of theoretical computer science where they correspond to types, and mathematical physics where they can be used to describe vector spaces. Categories were first introduced by Samuel Eilenberg and Saunders Mac Lane in 1942–45, in connection with algebraic topology. Category theory has several faces known not just to specialists, but to other mathematicians. A term dating from the 1940s, "general abstract nonsense", refers to its high level of abstraction, compared to more classical branches of mathematics. Homological algebra is category theory in its aspect of organising and suggesting manipulations in abstract algebra. Diagram chasing is a visual method of arguing with abstract "arrows" joined in diagrams. Note that arrows between categories are called functors, subject to specific defining commutativity conditions; moreover, categorical diagrams and sequences can be defined as functors (viz. Mitchell, 1965). An arrow between two functors is a natural transformation when it is subject to certain naturality or commutativity conditions. Functors and natural transformations ('naturality') are the key concepts in category theory.[1] Topos theory is a form of abstract sheaf theory, with geometric origins, and leads to ideas such as pointless topology. A topos can also be considered as a specific type of category with two additional topos axioms.

In mathematics, more specifically in group theory, the character of a group representation is a function on the group which associates to each group element the trace of the corresponding matrix. The character carries the essential information about the representation in a more condensed form. Georg Frobenius initially developed representation theory of finite groups entirely based on the characters, and without any explicit matrix realization of representations themselves. This is possible because a complex representation of a finite group is determined (up to isomorphism) by its character. The situation with representations over a field of positive characteristic, so-called "modular representations", is more delicate, but Richard Brauer developed a powerful theory of characters in this case as well. Many deep theorems on the structure of finite groups use characters of modular representations.
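A toy example (invented here, not from the text): the cyclic group of order 4 acts on the plane by rotations through multiples of 90 degrees, and the character of this 2-dimensional representation records the trace of each rotation matrix.

    import numpy as np

    def rotation(k):
        # Matrix of rotation by k * 90 degrees.
        theta = k * np.pi / 2
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    character = [round(float(np.trace(rotation(k)))) for k in range(4)]
    print(character)   # [2, 0, -2, 0], one value per group element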

In mathematics, Choquet theory is an area of functional analysis and convex analysis created by Gustave Choquet. It is concerned with measures with support on the extreme points of a convex set C. Roughly speaking, all vectors of C should appear as 'averages' of extreme points, a concept made more precise by the idea of convex combinations replaced by integrals taken over the set E of extreme points. Here C is a subset of a real vector space V, and the main thrust of the theory is to treat the cases where V is an infinite-dimensional (locally convex Hausdorff) topological vector space along lines similar to the finite-dimensional case. The main concerns of Gustave Choquet were in potential theory. Choquet theory has become a general paradigm, particularly for treating convex cones as determined by their extreme rays, and so for many different notions of positivity in mathematics. The two ends of a line segment determine the points in between: in vector terms the segment from v to w consists of the points λv + (1 − λ)w with 0 ≤ λ ≤ 1. The classical result of Hermann Minkowski says that in Euclidean space, a bounded, closed convex set C is the convex hull of its extreme point set E, so that any c in C is a (finite) convex combination of points e of E. Here E may be a finite or an infinite set. In vector terms, by assigning non-negative weights w(e) to the e in E, almost all of them 0, we can represent any c in C as c = Σ w(e) e, with the sum taken over e in E.

In any case the w(e) give a probability measure supported on a finite subset of E. For any affine function f on C, its value at the point c is f(c) = Σ w(e) f(e). In the infinite-dimensional setting, one would like to make a similar statement. Choquet's theorem states that for a compact convex subset C of a normed space V, given c in C there exists a probability measure w supported on the set E of extreme points of C such that, for every affine function f on C, f(c) = ∫ f(e) dw(e). In practice V will be a Banach space. The original Krein–Milman theorem follows from Choquet's result. Another corollary is the Riesz representation theorem for states on the continuous functions on a metrizable compact Hausdorff space. More generally, for V a locally convex topological vector space, the Choquet–Bishop–de Leeuw theorem[1] gives the same formal statement. In addition to the existence of a probability measure supported on the extreme boundary that represents a given point c, one might also consider the uniqueness of such measures. It is easy to see that uniqueness does not hold even in the finite-dimensional setting: one can take, as counterexamples, the convex set to be a cube or a ball in R^3. Uniqueness does hold, however, when the convex set is a finite-dimensional simplex, so that the weights w(e) are unique. A finite-dimensional simplex is a special case of a Choquet simplex. Any point in a Choquet simplex is represented by a unique probability measure on the extreme points.
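In the finite-dimensional simplex case the unique weights are just barycentric coordinates, which the following Python sketch computes for a triangle (the specific points are arbitrary choices for illustration):

    import numpy as np

    E = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # extreme points (vertices)
    c = np.array([0.2, 0.3])                              # a point inside the triangle

    # Solve for weights w with sum(w) = 1 and w @ E = c.
    A = np.vstack([E.T, np.ones(3)])
    w = np.linalg.solve(A, np.append(c, 1.0))
    print(w)        # [0.5 0.2 0.3], the unique representing weights
    print(w @ E)    # [0.2 0.3], recovering c as a convex combination of the vertices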

In mathematics, class field theory is a major branch of algebraic number theory. Most of the central results in this area were proved in the period between 1900 and 1950. The theory takes its name from some of the early ideas, conjectures and results such as those on the Hilbert class field, which took a generation to settle up to 1930. The ideal class group (which is a basic object of study inside a single field of numbers K, such as a quadratic field), is also seen as a Galois group of a field extension L/K: L is a field containing K and all the roots of a polynomial with coefficients in K. These days the term is generally used synonymously with the study of all the abelian extensions of algebraic number fields, or more generally of global fields; an abelian extension being a Galois extension with Galois group that is an abelian group (a finite abelian extension of Q is often simply called an abelian number field). In general terms, the objective is either to construct extensions of this type for a general number field K, or, to predict their arithmetical properties in terms of the arithmetical properties of K itself

Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error-correction and more recently also for network coding. Codes are studied by various scientific disciplines, such as information theory, electrical engineering, mathematics, and computer science, for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data. There are essentially two aspects to coding theory: data compression (or source coding) and error correction (or channel coding). These two aspects may be studied in combination. Source encoding attempts to compress the data from a source in order to transmit it more efficiently. This practice is found every day on the Internet where the common "Zip" data compression is used to reduce the network load and make files smaller. The second, channel encoding, adds extra data bits to make the transmission of data more robust to disturbances present on the transmission channel. The ordinary user may not be aware of many applications using channel coding. A typical music CD uses the Reed-Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones also use coding techniques to correct for the fading and noise of high frequency radio transmission. Data modems, telephone transmissions, and NASA all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes.
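A minimal channel-coding sketch in Python (a 3-fold repetition code, chosen here only for brevity; practical systems use far stronger codes such as Reed-Solomon, turbo, or LDPC codes):

    def encode(bits):
        return [b for b in bits for _ in range(3)]     # repeat each bit three times

    def decode(coded):
        # Majority vote within each block of three corrects any single flip.
        return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

    message = [1, 0, 1, 1]
    sent = encode(message)
    sent[4] ^= 1                        # the channel flips one bit
    print(decode(sent) == message)      # True: the error is corrected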

In mathematics, specifically in algebraic topology, cohomology is a general term for a sequence of abelian groups defined from a cochain complex. That is, cohomology is defined as the abstract study of cochains, cocycles, and coboundaries. Cohomology can be viewed as a method of assigning algebraic invariants to a topological space that has a more refined algebraic structure than does homology. Cohomology arises from the algebraic dualization of the construction of homology. In less abstract language, cochains in the fundamental sense should assign 'quantities' to the chains of homology theory. From its beginning in topology, this idea became a dominant method in the mathematics of the second half of the twentieth century; from the initial idea of homology as a topologically invariant relation on chains, the range of applications of homology and cohomology theories has spread out over geometry and abstract algebra. The terminology tends to mask the fact that in many applications cohomology, a contravariant theory, is more natural than homology. At a basic level this has to do with functions and pullbacks in geometric situations: given spaces X and Y, and some kind of function F on Y, for any mapping φ: X → Y, composition with φ gives rise to a function F ∘ φ on X. Cohomology groups often also have a natural product, the cup product, which gives them a ring structure. Because of this feature, cohomology is a stronger invariant than homology, as it can differentiate between certain algebraic objects that homology cannot.

In theoretical computer science, the theory of computation is the branch that deals with whether and how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory, computability theory and computational complexity theory.[1] In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation.[citation needed] It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a bounded amount of memory.
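A very small Turing-machine sketch in Python (the machine and its rule table are invented for illustration): a single state scans right, inverting each bit, and halts at the first blank.

    def run(tape):
        cells = dict(enumerate(tape))                  # sparse tape, blank = "_"
        state, head = "flip", 0
        rules = {                                      # (state, symbol) -> (write, move, next)
            ("flip", "0"): ("1", 1, "flip"),
            ("flip", "1"): ("0", 1, "flip"),
            ("flip", "_"): ("_", 0, "halt"),
        }
        while state != "halt":
            symbol = cells.get(head, "_")
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

    print(run("1011"))   # prints "0100"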

In mathematics, deformation theory is the study of infinitesimal conditions associated with varying a solution P of a problem to slightly different solutions P_ε, where ε is a small number, or a vector of small quantities. The infinitesimal conditions are therefore the result of applying the approach of differential calculus to solving a problem with constraints. One can think of a structure that is not completely rigid, and that deforms slightly to accommodate forces applied from outside; this explains the name. Some characteristic phenomena are: the derivation of first-order equations by treating the quantities ε as having negligible squares; the possibility of isolated solutions, in that varying a solution may not be possible, or does not bring anything new; and the question of whether the infinitesimal constraints actually 'integrate', so that their solution does provide small variations. In some form these considerations have a history of centuries in mathematics, but also in physics and engineering. For example, in the geometry of numbers a class of results called isolation theorems was recognised, with the topological interpretation of an open orbit (of a group action) around a given solution. Perturbation theory also looks at deformations, in general of operators.
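The idea of 'negligible squares' can be made concrete with dual numbers a + b·ε where ε² = 0, so that products keep only the first-order part. The Python sketch below is an illustration of that device, not of deformation theory itself:

    class Dual:
        # A dual number a + b*eps with eps**2 = 0.
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b
        def __add__(self, other):
            return Dual(self.a + other.a, self.b + other.b)
        def __mul__(self, other):
            # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    x = Dual(3.0, 1.0)            # the value 3 plus an infinitesimal variation
    y = x * x * x                 # f(x) = x**3
    print(y.a, y.b)               # 27.0 27.0, i.e. f(3) and f'(3) = 3 * 3**2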

In mathematical analysis, distributions (or generalized functions) are objects that generalize functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative. Distributions are widely used to formulate generalized solutions of partial differential equations. Where a classical solution may not exist or be very difficult to establish, a distributional solution to a differential equation is often much easier to find. Distributions are also important in physics and engineering, where many problems naturally lead to differential equations whose solutions or initial conditions are distributions, such as the Dirac delta distribution. Generalized functions were introduced by Sergei Sobolev in 1935. They were re-introduced in the late 1940s by Laurent Schwartz, who developed a comprehensive theory of distributions.
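A standard worked example (textbook material, not taken from the paragraph above): the Heaviside step function H has no classical derivative at 0, but its distributional derivative is the Dirac delta. For any smooth test function φ of compact support,

    \langle H', \varphi \rangle
      = -\int_{-\infty}^{\infty} H(x)\,\varphi'(x)\,dx
      = -\int_{0}^{\infty} \varphi'(x)\,dx
      = \varphi(0)
      = \langle \delta, \varphi \rangle,

so H' = δ in the sense of distributions.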

Fields are important objects of study in algebra since they provide a useful generalization of many number systems, such as the rational numbers, real numbers, and complex numbers. In particular, the usual rules of associativity, commutativity and distributivity hold. Fields also appear in many other areas of mathematics; see the examples below. When abstract algebra was first being developed, the definition of a field usually did not include commutativity of multiplication, and what we today call a field would have been called either a commutative field or a rational domain. In contemporary usage, a field is always commutative. A structure which satisfies all the properties of a field except possibly for commutativity is today called a division ring or division algebra or sometimes a skew field. The term noncommutative field is also still widely used. In French, fields are called corps (literally, body), and skew fields are called corps gauches, anneaux à division, or also algèbres à division. The German word for body is Körper, and this word is used to denote fields; hence the use of the blackboard bold K to denote a field. The concept of fields was first (implicitly) used to prove that there is no general formula expressing in terms of radicals the roots of a polynomial with rational coefficients of degree 5 or higher.

In commutative algebra and algebraic geometry, elimination theory is the classical name for algorithmic approaches to eliminating some variables between polynomials of several variables. The linear case would now routinely be handled by Gaussian elimination, rather than the theoretical solution provided by Cramer's rule. In the same way, computational techniques for elimination can in practice be based on Gröbner basis methods. There is however older literature on types of eliminant, including resultants to find common roots of polynomials, discriminants and so on. In particular the discriminant appears in invariant theory, and is often constructed as the invariant of either a curve or an n-ary k-ic form. Whilst discriminants are always constructed from resultants, the variety of constructions and their meaning tends to vary. A modern and systematic version of the theory of the discriminant has been developed by Gelfand and coworkers. Some of the systematic methods have a homological basis that can be made explicit, as in Hilbert's theorem on syzygies. This field is at least as old as Bézout's theorem. The historical development of commutative algebra, which was initially called ideal theory, is closely linked to concepts in elimination theory: ideas of Kronecker, who wrote a major paper on the subject, were adapted by Hilbert and effectively 'linearised' while dropping the explicit constructive content. The process continued over many decades: the work of F. S. Macaulay, who gave his name to Cohen–Macaulay modules, was motivated by elimination.
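A small illustration of the resultant as an eliminant, using SymPy (the library choice and the example polynomials are assumptions made here, not from the text): the resultant of two polynomials vanishes exactly when they have a common root.

    from sympy import symbols, resultant

    x = symbols("x")
    f = x**2 - 3*x + 2      # roots 1 and 2
    g = x**2 - 1            # roots 1 and -1 (shares the root x = 1 with f)
    h = x**2 + 1            # roots +i and -i (no common root with f)

    print(resultant(f, g, x))   # 0: a common root exists
    print(resultant(f, h, x))   # nonzero: no common root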

Extremal graph theory is a branch of the mathematical field of graph theory. Extremal graph theory studies extremal (maximal or minimal) graphs which satisfy a certain property. Extremality can be taken with respect to different graph invariants, such as order, size or girth. More abstractly, it studies how global properties of a graph influence local substructures of the graph.[1] For example, a simple extremal graph theory question is: which acyclic graphs on n vertices have the maximum number of edges? The extremal graphs for this question are trees on n vertices, which have n - 1 edges.[2] More generally, a typical question is the following. Given a graph property P, an invariant u, and a set of graphs H, we wish to find the minimum value of m such that every graph in H which has u larger than m possesses property P. In the example above, H was the set of n-vertex graphs, P was the property of being cyclic, and u was the number of edges in the graph. Thus every graph on n vertices with more than n - 1 edges must contain a cycle. Several foundational results in extremal graph theory are questions of the above-mentioned form. For instance, the question of how many edges an n-vertex graph can have before it must contain as a subgraph a clique of size k is answered by Turán's theorem. Instead of cliques, if the same question is asked for complete multi-partite graphs, the answer is given by the Erdős–Stone theorem.
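The fact that more than n - 1 edges force a cycle is easy to check by computer; a quick Python sketch using union-find (the example graphs are invented):

    def has_cycle(n, edges):
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]          # path compression
                v = parent[v]
            return v
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                return True                            # edge joins already-connected vertices
            parent[ru] = rv
        return False

    tree = [(0, 1), (1, 2), (2, 3)]                    # 4 vertices, n - 1 = 3 edges
    print(has_cycle(4, tree))                          # False
    print(has_cycle(4, tree + [(0, 3)]))               # True: a 4th edge creates a cycle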

