
10: Set Theory and Mathematical Logic

Set theory is the branch of mathematics that studies sets, which are collections of objects. Although any type
of object can be collected into a set, set theory is applied most often to objects that are relevant to
mathematics.

The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. After the
discovery of paradoxes in naive set theory, numerous axiom systems were proposed in the early twentieth
century, of which the Zermelo–Fraenkel axioms, with the axiom of choice, are the best-known.

The language of set theory can be used in the definitions of nearly all mathematical objects, such
as functions, and concepts of set theory are integrated throughout the mathematics curriculum. Elementary
facts about sets and set membership can be introduced in primary school, along with Venn and Euler
diagrams, to study collections of commonplace physical objects. Elementary operations such as set union and
intersection can be studied in this context. More advanced concepts such as cardinality are a standard part of
the undergraduate mathematics curriculum.

Set theory is commonly employed as a foundational system for mathematics, particularly in the form
of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch
of mathematics in its own right, with an active research community.

Mathematical logic (also known as symbolic logic) is a subfield of mathematics with close connections
to computer science and philosophical logic.[1] The field includes both the mathematical study of logic and the
applications of formal logic to other areas of mathematics. The unifying themes in mathematical logic include
the study of the expressive power of formal systems and the deductive power of formal proof systems.

Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof
theory. These areas share basic results on logic, particularly first-order logic, and definability. In computer
science (particularly in the ACM Classification) mathematical logic encompasses additional topics not
detailed in this article; see logic in computer science for those.

Since its inception, mathematical logic has contributed to, and has been motivated by, the study
of foundations of mathematics. This study began in the late 19th century with the development
of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped
by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt
Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and clarified the issues
involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be
formalized in terms of sets, although there are some theorems that cannot be proven in common axiom
systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing
which parts of mathematics can be formalized in particular formal systems.

Some concerns that mathematics had not been built on a proper foundation led to the development of
axiomatic systems for fundamental areas of mathematics such as arithmetic, analysis, and geometry.

In logic, the term arithmetic refers to the theory of the natural numbers. Giuseppe Peano (1888) published a
set of axioms for arithmetic that came to bear his name (Peano axioms), using a variation of the logical system
of Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at the time. Around the
same time Richard Dedekind showed that the natural numbers are uniquely characterized by
their induction properties. Dedekind (1888) proposed a different characterization, which lacked the formal
logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's
system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive
definitions of addition and multiplication from the successor function and mathematical induction.
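
These recursive definitions transcribe almost verbatim into code. A minimal sketch in Haskell syntax (the same notation used for the list example later in these notes); the names Nat, add and mul are ours, chosen for illustration:

data Nat = Zero | Succ Nat

-- addition, defined by recursion on the first argument
add :: Nat -> Nat -> Nat
add Zero     n = n
add (Succ m) n = Succ (add m n)

-- multiplication, defined as repeated addition
mul :: Nat -> Nat -> Nat
mul Zero     _ = Zero
mul (Succ m) n = add n (mul m n)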

In the mid-19th century, flaws in Euclid's axioms for geometry became known (Katz 1998, p. 774). In
addition to the independence of the parallel postulate, established by Nikolai Lobachevsky in 1826
(Lobachevsky 1840), mathematicians discovered that certain theorems taken for granted by Euclid were not
in fact provable from his axioms. Among these are the theorem that a line contains at least two points, and that
circles of the same radius whose centers are separated by that radius must intersect. Hilbert (1899) developed
a complete set of axioms for geometry, building on previous work by Pasch (1882). The success in
axiomatizing geometry motivated Hilbert to seek complete axiomatizations of other areas of mathematics,
such as the natural numbers and the real line. This would prove to be a major area of research in the first
half of the 20th century.

8: Group theory

In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of
a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector
spaces can all be seen as groups endowed with additional operations and axioms. Groups recur throughout
mathematics, and the methods of group theory have strongly influenced many parts of algebra. Linear algebraic
groups and Lie groups are two branches of group theory that have experienced tremendous advances and have
become subject areas in their own right.

Various physical systems, such as crystals and the hydrogen atom, can be modelled by symmetry groups. Thus group
theory and the closely related representation theory have many applications in physics and chemistry.
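
As a rough illustration of what "a set endowed with an operation and axioms" means in practice, the group interface can be sketched as a type class; this is our own illustrative naming, not a standard library:

-- a group: an associative operation with an identity and inverses
-- (the laws themselves are recorded here only as comments)
class Group a where
  op       :: a -> a -> a  -- associative: op x (op y z) == op (op x y) z
  identity :: a            -- op identity x == x == op x identity
  inverse  :: a -> a       -- op (inverse x) x == identity

-- the integers under addition form a group
instance Group Integer where
  op       = (+)
  identity = 0
  inverse  = negate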

Tree (graph theory)


In mathematics, more specifically graph theory, a tree is an undirected graph in which any two vertices are
connected by exactly one simple path. In other words, any connected graph without cycles is a tree. A forest is
a disjoint union of trees.

The various kinds of data structures referred to as trees in computer science are similar to trees in graph theory,
except that computer science trees have directed edges. Although they do not meet the definition given here, these
graphs are referred to in graph theory as ordered directed trees.
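
Such an ordered directed tree is itself naturally a recursive structure; a minimal sketch in Haskell (the names Tree, Node and size are ours):

-- a root label and an ordered list of subtrees; edges are implicitly
-- directed from each node down to its children
data Tree a = Node a [Tree a]

-- counting vertices by structural recursion
size :: Tree a -> Int
size (Node _ children) = 1 + sum (map size children)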

Applications of trees

File compression

Fixed length encodings


• ASCII character set
o about 100 printable characters (95, including space) and 33 unprintable control characters
o requires 7 bits to represent all 128 characters uniquely, since 7 = ceiling(log2(100))
o eighth bit allows parity checks
• Any character set of size C will require ceiling(log2(C)) bits for a standard, fixed-length encoding (see the sketch below)
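
This bit count can be computed directly; a one-line Haskell sketch (bitsNeeded is our own name):

-- bits required for a fixed-length encoding of an alphabet of size c
bitsNeeded :: Int -> Int
bitsNeeded c = ceiling (logBase 2 (fromIntegral c))
-- bitsNeeded 100 == 7, bitsNeeded 128 == 7, bitsNeeded 5 == 3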
Example: Encoding choices
• Given: alphabet of five symbols: A, B, C, D, E
• A fixed-length encoding:
symbol code
A 000
B 001
C 010
D 011
E 111
• Encoding of message BADDEED: 001000011011111111011
• Average code length: 3 bits
• Requires 21 bits: rather inefficient use of space for only 5 characters
Variable length codes
o Additional information is available: the relative frequency of each symbol
o More common symbols are given shorter codes
symbol frequency code
A 0.35 0
B 0.20 1
C 0.20 00
D 0.15 01
E 0.10 10
• Average code length:
o 1*(0.35) + 1*(0.20) + 2*(0.20) + 2*(0.15) + 2*(0.10) = 1.45
o less than half the average code length for the fixed-length code!
o problem: It won't work!
▪ How to interpret the code 0010?
▪ Several possibilities: AABA, AAE, ADA, CBA, or CE
Possible solutions
Symbol sequence as marker
• choose, for example, 11 as the starting marker for all codes
o effectively plays the role of a space between code words
symbol frequency code
A 0.35 110
B 0.20 111
C 0.20 1100
D 0.15 1101
E 0.10 1110
• Average code length:
o 3*(0.35) + 3*(0.20) + 4*(0.20) + 4*(0.15) + 4*(0.10) = 3.45
o Original fixed length was better!
Prefix property
• no code sequence can be the prefix of any other
symbol frequency code
A 0.35 00
B 0.20 10
C 0.20 010
D 0.15 011
E 0.10 111
• Average code length:
o 2*(0.35) + 2*(0.20) + 3*(0.20) + 3*(0.15) + 3*(0.10) = 2.45
o Better than original fixed length
o There is one and only one way to interpret each code
• Example: BADDEED becomes 1000011011111111011
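
Because no code word is a prefix of any other, a decoder can consume the bit string greedily, matching exactly one code word at each step and never backtracking. A minimal sketch in Haskell using the table above (encode and decode are our own names):

import Data.List (stripPrefix)

-- the prefix code from the table above
codes :: [(Char, String)]
codes = [('A', "00"), ('B', "10"), ('C', "010"), ('D', "011"), ('E', "111")]

encode :: String -> String
encode = concatMap (\s -> maybe (error "unknown symbol") id (lookup s codes))

-- greedy decoding: the prefix property guarantees at most one match
decode :: String -> String
decode [] = []
decode bits =
  case [ (sym, rest) | (sym, code) <- codes
                     , Just rest <- [stripPrefix code bits] ] of
    ((sym, rest):_) -> sym : decode rest
    []              -> error "invalid bit string"

Here encode "BADDEED" is exactly the 19-bit string shown above, and decode (encode "BADDEED") returns "BADDEED".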

7: Recursion

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller
instances of the same problem.[1] The approach can be applied to many types of problems, and is one of the central
ideas of computer science.[2]

"The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In
the same manner, an infinite number of computations can be described by a finite recursive program, even if this
program contains no explicit repetitions." [3]

Most high-level computer programming languages support recursion by allowing a function to call itself within the
program text. Some functional programming languages do not define any looping constructs but rely solely on
recursion to repeatedly call code. Computability theory has proven that these recursive-only languages are
mathematically equivalent to the imperative languages, meaning they can solve the same kinds of problems even
without the typical control structures like “while” and “for”.
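
The standard first example of a function that calls itself is the factorial, where the solution for n is defined in terms of the smaller instance n - 1 (a sketch in Haskell, the notation used elsewhere in these notes):

factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)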
Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique
for representing data whose exact size the programmer does not know: the programmer can specify this data with
a self-referential definition. There are two flavors of self-referential definitions: inductive
and coinductive definitions.

Inductively-defined data

An inductively-defined recursive data definition is one that specifies how to construct instances of the data. For
example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings.
The self-reference in the definition permits the construction of lists of any (finite) number of strings.
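
A value of this type is built by nesting the constructors, and functions over it follow the same self-referential shape as the definition itself; for example (our own illustration):

example :: ListOfStrings
example = Cons "to" (Cons "be" (Cons "or" EmptyList))

-- structural recursion mirrors the inductive definition
len :: ListOfStrings -> Int
len EmptyList     = 0
len (Cons _ rest) = 1 + len rest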

Coinductively-defined data and corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically,
self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that


head(s) is a string, and
tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to
access the contents of the data structure (namely, via the accessor functions head and tail) and what those contents
may be, whereas the inductive definition specifies how to create the structure and what it may be created from.
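
In a lazy language such as Haskell this coinductive description can be written down directly, because only the demanded prefix of a stream is ever computed; a sketch (StringStream and the helper names are ours):

-- an infinite stream of strings: note there is no "empty" constructor
data StringStream = SCons String StringStream

headS :: StringStream -> String
headS (SCons s _) = s

tailS :: StringStream -> StringStream
tailS (SCons _ rest) = rest

-- corecursion: the result is defined in terms of itself
repeatS :: String -> StringStream
repeatS s = SCons s (repeatS s)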

Recursive algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original,
solve those problems, and combine the results. This is often referred to as the divide-and-conquer method; when
combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and
incurring extra computation time), it can be referred to as dynamic programming or memoization.
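
The classic small illustration is Fibonacci: the naive divide-and-conquer version re-solves the same sub-problems exponentially often, while storing each sub-result in a lazily built table computes it only once (our own sketch):

-- naive divide-and-conquer: sub-problems are recomputed many times
fibNaive :: Int -> Integer
fibNaive n | n < 2     = fromIntegral n
           | otherwise = fibNaive (n - 1) + fibNaive (n - 2)

-- memoized: fibs acts as a lookup table, each entry computed once
fibs :: [Integer]
fibs = map fib [0..]

fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fibs !! (n - 1) + fibs !! (n - 2)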

Structural versus generative recursion

Some authors classify recursion as either "generative" or "structural". The distinction is related to where a recursive
procedure gets the data that it works on, and how it processes that data: a structurally recursive procedure recurses
only on the immediate substructures of its input, whereas a generatively recursive procedure constructs new data
from its input and recurses on that, as the sketch below illustrates.
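
The contrast in a small Haskell sketch (our own example):

-- structural recursion: consumes the list exactly along its constructors
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs

-- generative recursion: the recursive calls are on newly generated
-- data (the partitions), not on an immediate substructure of the input
quicksort :: [Int] -> [Int]
quicksort []     = []
quicksort (p:xs) = quicksort [x | x <- xs, x < p]
                   ++ [p]
                   ++ quicksort [x | x <- xs, x >= p]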
6: Mathematical Induction

The series of natural numbers can all be defined if we know what we mean by the three terms "0," "number", and
"successor." But we may go a step farther: we can define all the natural numbers if we know what we mean by "0"
and "successor." It will help us to understand the difference between finite and infinite to see how this can be done,
and why the method by which it is done cannot be extended beyond the finite. We will not yet consider how "0" and
"successor" are to be defined: we will for the moment assume that we know what these terms mean, and show how
thence all other natural numbers can be obtained.

It is easy to see that we can reach any assigned number, say 30,000. We first define "1" as "the successor of 0," then
we define "2" as "the successor of 1," and so on. In the case of an assigned number, such as 30,000, the proof that
we can reach it by proceeding step by step in this fashion may be made, if we have the patience, by actual
experiment: we can go on until we actually arrive at 30,000. But although the method of experiment is available for
each particular natural number, it is not available for proving the general proposition that all such numbers can be
reached in this way, i.e. by proceeding from 0 step by step from each number to its successor. Is there any other way
by which this can be proved?

Let us consider the question the other way around. What are the numbers that can be reached, given the terms "0"
and "successor"? Is there any way by which we can define the whole class of such numbers? We reach 1, as the
successor of 0; 2, as the successor of 1; 3, as the successor of 2; and so on. It is this "and so on" that we wish to
replace by something less vague and indefinite. We might be tempted to say that "and so on" means that the process
of proceeding to the successor may be repeated any finite number of times; but the problem upon which we are
engaged is the problem of defining "finite number," and therefore we must not use this notion in our definition. Our
definition must not assume that we know what a finite number is.

The key to our problem lies in mathematical induction. It will be remembered that this was the fifth of the five
primitive propositions which we laid down about the natural numbers. It stated that any property which belongs to 0,
and to the successor of any number which has the property, belongs to all the natural numbers. This was then
presented as a principle, but we shall now adopt it as a definition. It is not difficult to see that the terms obeying it
are the same as the numbers that can be reached from 0 by successive steps from next to next, but as the point is
important we will set forth the matter in some detail.

An implicit proof by mathematical induction for arithmetic sequences was introduced in the al-Fakhri written by al-
Karaji around 1000 AD, who used it to prove the binomial theorem and properties of Pascal's triangle.

None of these ancient mathematicians, however, explicitly stated the inductive hypothesis. Another similar case
(contrary to what Vacca has written, as Freudenthal carefully showed) was that of Francesco Maurolico in
his Arithmeticorum libri duo (1575), who used the technique to prove that the sum of the first n odd integers is n².
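
The shape of Maurolico's argument is exactly the inductive principle described above. In modern notation:

Claim: 1 + 3 + ... + (2n - 1) = n² for every natural number n ≥ 1.
Base case (n = 1): 1 = 1².
Inductive step: assume 1 + 3 + ... + (2n - 1) = n²; then
  1 + 3 + ... + (2n - 1) + (2n + 1) = n² + 2n + 1 = (n + 1)².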
The first explicit formulation of the principle of induction was given by Pascal in his Traité du triangle
arithmétique (1665). Another Frenchman, Fermat, made ample use of a related principle, indirect proof by infinite
descent.
4: Combinational Logic Circuits

Unlike Sequential Logic Circuits, whose outputs depend on both their present inputs and their previous output
state (giving them a form of memory), the outputs of Combinational Logic Circuits are determined only by the
logical function of their current input state, logic "0" or logic "1", at any given instant in time. They have no
feedback, so any change to the signals applied to their inputs immediately has an effect at the output. In other
words, in a Combinational Logic Circuit the output depends at all times on the combination of its inputs; if the
condition of one input changes state, so does the output, because combinational circuits have no "memory",
"timing" or "feedback loops".

Combinational Logic

Combinational Logic Circuits are made up from basic logic NAND, NOR or NOT gates that are "combined" or
connected together to produce more complicated switching circuits. These logic gates are the building blocks of
combinational logic circuits. An example of a combinational circuit is a decoder, which converts the binary code
data present at its input into a number of different output lines, one at a time producing an equivalent decimal code
at its output.

Combinational logic circuits can be very simple or very complicated, and any combinational circuit can be
implemented with only NAND gates or only NOR gates, as these are classed as "universal" gates. The three main ways of
specifying the function of a combinational logic circuit are:

• Truth Table Truth tables provide a concise list that shows the output values in tabular form for each
possible combination of input variables.

• Boolean Algebra Forms an output expression for each input variable that represents a logic "1"

• Logic Diagram Shows the wiring and connections of each individual logic gate that implements the circuit.

All three are equivalent descriptions of the same switching function; a small example follows below.
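
For instance, a half adder has the Boolean-algebra form sum = A XOR B, carry = A AND B, and its truth table can be enumerated mechanically from that expression; a sketch in Haskell (the function names are ours):

-- Boolean-algebra form of a half adder
halfAdder :: Bool -> Bool -> (Bool, Bool)   -- (sum, carry)
halfAdder a b = (a /= b, a && b)            -- (/=) on Bool is XOR

-- the corresponding truth table: one row per input combination
truthTable :: [((Bool, Bool), (Bool, Bool))]
truthTable = [ ((a, b), halfAdder a b) | a <- [False, True]
                                       , b <- [False, True] ]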


As combinational logic circuits are made up from individual logic gates only, they can also be considered as "decision
making circuits" and combinational logic is about combining logic gates together to process two or more signals in
order to produce at least one output signal according to the logical function of each logic gate. Common
combinational circuits made up from individual logic gates that carry out a desired application
include Multiplexers, De-multiplexers, Encoders, Decoders, Full and Half Adders, etc.

Classification of Combinational Logic

One of the most common uses of combination logic is in Multiplexer and De-multiplexer type circuits. Here,
multiple inputs or outputs are connected to a common signal line and logic gates are used to decode an address to
select a single data input or output switch. A multiplexer consists of two separate components, a logic decoder and
some solid state switches, but before we can discuss multiplexers, decoders and de-multiplexers in more detail we
first need to understand how these devices use these "solid state switches" in their design.
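
The selection logic itself is straightforward; behaviourally, a 4-to-1 multiplexer uses two address bits to route one of four data inputs to the single output (our own sketch, ignoring the electrical details):

-- 4-to-1 multiplexer: two select bits choose one of four inputs
mux4 :: (Bool, Bool) -> (a, a, a, a) -> a
mux4 (False, False) (d0, _, _, _) = d0
mux4 (False, True ) (_, d1, _, _) = d1
mux4 (True , False) (_, _, d2, _) = d2
mux4 (True , True ) (_, _, _, d3) = d3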
Analogue Bilateral Switches

Analogue or "Analog" switches are those types that are used to switch data or signal currents when they are in their
"ON" state and block them when they are in their "OFF" state. The rapid switching between the "ON" and the
"OFF" state is usually controlled by a digital signal applied to the control gate of the switch. An ideal analogue
switch has zero resistance when "ON" (or closed) and infinite resistance when "OFF" (or open); switches
with "ON" resistance (RON) values of less than 1Ω are commonly available.

Solid State Analogue Switch

Connecting an N-channel MOSFET in parallel with a P-channel MOSFET allows signals to pass in either
direction, making the pair a bi-directional switch. Whether the N-channel or the P-channel device carries more of
the signal current depends upon the ratio between the input and output voltages. The two MOSFETs are switched
"ON" or "OFF" by two internal non-inverting and inverting amplifiers.

Contact Types

Just like mechanical switches, analogue switches come in a variety of forms or contact types, depending on the
number of "poles" and "throws" they offer. Thus, terms such as "SPST" (single-pole single throw) and "SPDT"
(single-pole double-throw) also apply to solid state analogue switches with "make-before-break" and "break-before-
make" configurations available.
Analogue Switch Types

Individual analogue switches can be grouped together into standard IC packages to form devices with multiple
switching configurations of SPST and SPDT as well as multi channel multiplexers. The most common and simplest
analogue switch IC is the 74HC4066, which has 4 independent bi-directional "ON/OFF" switches within a single
package, but the most widely used variants of the CMOS analogue switch are those described as "Multi-way
Bilateral Switches", otherwise known as the "Multiplexer" and "De-multiplexer" ICs; these are discussed in the
next tutorial.
