
PROGRAM NO: - 6

Unit/Topic: Linux

PROBLEM DEFINITION:
Write a program for generating various intermediate code forms: i) three-address code ii) Polish notation.

OBJECTIVE:
To understand intermediate code format.

ALGORITHM:
Three address code
Three-address code (often abbreviated to TAC or 3AC) is a form of intermediate code used by compilers to aid in the implementation of code-improving transformations. Each instruction in three-address code can be described as a 4-tuple: (operator, operand1, operand2, result). Each statement has the general form of:

x := y op z

such as:

x := y + z

where x, y and z are variables, constants or temporary variables generated by the compiler, and op represents any operator, e.g. an arithmetic operator. Expressions containing more than one fundamental operation, such as:

a := b + c * d

are not representable in three-address code as a single instruction. Instead, they are decomposed into an equivalent series of instructions, such as:

t1 := c * d
a := b + t1
The term three-address code is still used even if some instructions use more or fewer than two operands. The key features of three-address code are that every instruction implements exactly one fundamental operation, and that the source and destination may refer to any available register.

Example
int main(void)
{
    int i;
    int b[10];
    for (i = 0; i < 10; ++i) {
        b[i] = i*i;
    }
}

The preceding C program, translated into three-address code, might look something like the following:
     i := 0                  ; assignment
L1:  if i >= 10 goto L2      ; conditional jump
     t0 := i*i
     t1 := &b                ; address-of operation
     t2 := t1 + i            ; t2 holds the address of b[i]
     *t2 := t0               ; store through pointer
     i := i + 1
     goto L1

L2:
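The problem statement also asks for Polish notation. The following is a minimal illustrative sketch, not a prescribed solution: it assumes a single-character infix input string such as "a+b*c", handles only '+' and '*' with the usual precedence, and prints both three-address quadruples (using generated temporaries t1, t2, ...) and the postfix (reverse Polish) form; a prefix form could be produced analogously. All names (newtemp, factor, term, expr) are invented for this sketch.

#include <stdio.h>
#include <stdlib.h>

static const char *p;       /* cursor into the input string */
static int tcount = 0;      /* temporary-name counter       */
static char postfix[128];   /* accumulated postfix string   */
static int plen = 0;

static int newtemp(void) { return ++tcount; }

static int factor(void)
{
    if (*p >= 'a' && *p <= 'z') {
        postfix[plen++] = *p;
        int t = newtemp();
        printf("t%d := %c\n", t, *p);   /* copy operand into a temp */
        p++;
        return t;
    }
    fprintf(stderr, "unexpected character '%c'\n", *p);
    exit(1);
}

static int term(void)                   /* term -> factor { '*' factor } */
{
    int left = factor();
    while (*p == '*') {
        p++;
        int right = factor();
        postfix[plen++] = '*';
        int t = newtemp();
        printf("t%d := t%d * t%d\n", t, left, right);
        left = t;
    }
    return left;
}

static int expr(void)                   /* expr -> term { '+' term } */
{
    int left = term();
    while (*p == '+') {
        p++;
        int right = term();
        postfix[plen++] = '+';
        int t = newtemp();
        printf("t%d := t%d + t%d\n", t, left, right);
        left = t;
    }
    return left;
}

int main(void)
{
    p = "a+b*c";                        /* example input */
    printf("Three-address code:\n");
    expr();
    postfix[plen] = '\0';
    printf("Postfix (Polish) notation: %s\n", postfix);
    return 0;
}

For the input "a+b*c" this sketch prints the quadruples ending in t5 := t1 + t4 and the postfix string abc*+.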

INPUT SET: OUTPUT SET: NOTES:

NAME OF FACULTY: SIGNATURE: DATE:

PROGRAM NO: - 7
Unit/Topic: Linux

PROBLEM DEFINITION:
Write a program to simulate heap storage allocation strategy.

OBJECTIVE:
To understand the heap storage allocation strategy.

ALGORITHM:
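The algorithm is left blank in the source manual. One common way to simulate heap allocation is to model the heap as a fixed pool of words, keep a table of free and allocated regions, satisfy each request with the first free region large enough (first fit, splitting off any remainder), and mark regions free again on release. The sketch below follows that idea; all names (heap_alloc, heap_free, HEAP_SIZE) and the omission of coalescing are assumptions for illustration only.

#include <stdio.h>

#define HEAP_SIZE 100

/* The "heap" is HEAP_SIZE words; each region is tracked in a block table. */
struct block {
    int start;      /* index of first word in the region */
    int size;       /* number of words in the region     */
    int free;       /* 1 if the region is available      */
};

static struct block table[HEAP_SIZE];
static int nblocks = 1;

static void heap_init(void)
{
    table[0].start = 0;
    table[0].size  = HEAP_SIZE;
    table[0].free  = 1;
}

/* first fit: return the start index of an allocated region, or -1 */
static int heap_alloc(int size)
{
    for (int i = 0; i < nblocks; i++) {
        if (table[i].free && table[i].size >= size) {
            if (table[i].size > size) {
                /* split: insert the remainder as a new free block */
                for (int j = nblocks; j > i + 1; j--)
                    table[j] = table[j - 1];
                table[i + 1].start = table[i].start + size;
                table[i + 1].size  = table[i].size - size;
                table[i + 1].free  = 1;
                nblocks++;
            }
            table[i].size = size;
            table[i].free = 0;
            return table[i].start;
        }
    }
    return -1;      /* no free block is large enough */
}

static void heap_free(int start)
{
    for (int i = 0; i < nblocks; i++)
        if (table[i].start == start)
            table[i].free = 1;   /* coalescing of neighbours is omitted */
}

static void heap_dump(void)
{
    for (int i = 0; i < nblocks; i++)
        printf("block at %3d, size %3d, %s\n",
               table[i].start, table[i].size,
               table[i].free ? "free" : "allocated");
}

int main(void)
{
    heap_init();
    int a = heap_alloc(20);
    int b = heap_alloc(30);
    heap_free(a);
    int c = heap_alloc(10);   /* reuses part of the freed 20-word hole */
    printf("a=%d b=%d c=%d\n", a, b, c);
    heap_dump();
    return 0;
}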

INPUT SET: OUTPUT SET: NOTES:

NAME OF FACULTY: SIGNATURE: DATE:

PROGRAM NO: - 8
Unit/Topic: Linux

PROBLEM DEFINITION:
Generate a lexical analyzer using LEX.

OBJECTIVE:
To understand the lexical analyzer.

ALGORITHM:
Lex
During the first phase the compiler reads the input and converts strings in the source to tokens. With regular expressions we can specify patterns to lex so that it can generate code to scan and match strings in the input. Each pattern specified in the input to lex has an associated action. Typically an action returns a token that represents the matched string for subsequent use by the parser. Initially we will simply print the matched string rather than return a token value. The following is a simple pattern, composed of a regular expression, that scans for identifiers. Lex will read this pattern and produce C code for a lexical analyzer that scans for identifiers.

letter(letter|digit)*

This pattern matches a string of characters that begins with a single letter followed by zero or more letters or digits.
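As an illustration of the pattern just described, a complete Lex specification for such an identifier scanner might look like the following minimal sketch. It simply prints each identifier it finds; the named definitions and the printf action are illustrative and are not part of the calculator example given under "Lex Input" below.

%{
/* sketch: standalone identifier scanner for the pattern letter(letter|digit)* */
#include <stdio.h>
%}
letter  [A-Za-z]
digit   [0-9]
%%
{letter}({letter}|{digit})*   { printf("identifier: %s\n", yytext); }
[ \t\n]+                      ;   /* skip whitespace */
.                             ;   /* ignore anything else */
%%
int yywrap(void) { return 1; }
int main(void) { yylex(); return 0; }

Assuming the specification is saved as, say, ident.l, it can typically be processed with lex (or flex) and the generated lex.yy.c compiled with cc.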

Lex Input
%{
#include <stdlib.h>
#include "calc3.h"
#include "y.tab.h"
void yyerror(char *);
%}
%%
[a-z]           { yylval.sIndex = *yytext - 'a'; return VARIABLE; }
0               { yylval.iValue = atoi(yytext); return INTEGER; }
[1-9][0-9]*     { yylval.iValue = atoi(yytext); return INTEGER; }
[-()<>=+*/;{}.] { return *yytext; }
">="            return GE;
"<="            return LE;
"=="            return EQ;
"!="            return NE;
"while"         return WHILE;
"if"            return IF;
"else"          return ELSE;
"print"         return PRINT;
[ \t\n]+        ;               /* ignore whitespace */
.               yyerror("Unknown character");
%%
int yywrap(void) {
    return 1;
}

INPUT SET: OUTPUT SET: NOTES:

NAME OF FACULTY: SIGNATURE: DATE:

PROGRAM NO: - 9
Unit/Topic: Linux

PROBLEM DEFINITION:
Generate a YACC specification for syntactic categories.

OBJECTIVE:
To understand YACC.

ALGORITHM:
%{
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include "calc3.h"

/* prototypes */
nodeType *opr(int oper, int nops, ...);
nodeType *id(int i);
nodeType *con(int value);
void freeNode(nodeType *p);
int ex(nodeType *p);
int yylex(void);
void yyerror(char *s);

int sym[26];                    /* symbol table */
%}

%union {
    int iValue;                 /* integer value */
    char sIndex;                /* symbol table index */
    nodeType *nPtr;             /* node pointer */
};

%token <iValue> INTEGER
%token <sIndex> VARIABLE
%token WHILE IF PRINT
%nonassoc IFX
%nonassoc ELSE
%left GE LE EQ NE '>' '<'
%left '+' '-'
%left '*' '/'
%nonassoc UMINUS
%type <nPtr> stmt expr stmt_list

%%

program:
          function                      { exit(0); }
        ;

function:
          function stmt                 { ex($2); freeNode($2); }
        | /* NULL */
        ;

stmt:
          ';'                           { $$ = opr(';', 2, NULL, NULL); }
        | expr ';'                      { $$ = $1; }
        | PRINT expr ';'                { $$ = opr(PRINT, 1, $2); }
        | VARIABLE '=' expr ';'         { $$ = opr('=', 2, id($1), $3); }
        | WHILE '(' expr ')' stmt       { $$ = opr(WHILE, 2, $3, $5); }
        | IF '(' expr ')' stmt %prec IFX        { $$ = opr(IF, 2, $3, $5); }
        | IF '(' expr ')' stmt ELSE stmt        { $$ = opr(IF, 3, $3, $5, $7); }
        | '{' stmt_list '}'             { $$ = $2; }
        ;

stmt_list:
          stmt                          { $$ = $1; }
        | stmt_list stmt                { $$ = opr(';', 2, $1, $2); }
        ;

expr:
          INTEGER                       { $$ = con($1); }
        | VARIABLE                      { $$ = id($1); }
        | '-' expr %prec UMINUS         { $$ = opr(UMINUS, 1, $2); }
        | expr '+' expr                 { $$ = opr('+', 2, $1, $3); }
        | expr '-' expr                 { $$ = opr('-', 2, $1, $3); }
        | expr '*' expr                 { $$ = opr('*', 2, $1, $3); }
        | expr '/' expr                 { $$ = opr('/', 2, $1, $3); }
        | expr '<' expr                 { $$ = opr('<', 2, $1, $3); }
        | expr '>' expr                 { $$ = opr('>', 2, $1, $3); }
        | expr GE expr                  { $$ = opr(GE, 2, $1, $3); }
        | expr LE expr                  { $$ = opr(LE, 2, $1, $3); }
        | expr NE expr                  { $$ = opr(NE, 2, $1, $3); }
        | expr EQ expr                  { $$ = opr(EQ, 2, $1, $3); }
        | '(' expr ')'                  { $$ = $2; }
        ;

%%

#define SIZEOF_NODETYPE ((char *)&p->con - (char *)p)

nodeType *con(int value) {
    nodeType *p;
    size_t nodeSize;

    /* allocate node */
    nodeSize = SIZEOF_NODETYPE + sizeof(conNodeType);
    if ((p = malloc(nodeSize)) == NULL)
        yyerror("out of memory");

    /* copy information */
    p->type = typeCon;
    p->con.value = value;
    return p;
}

nodeType *id(int i) {
    nodeType *p;
    size_t nodeSize;

    /* allocate node */
    nodeSize = SIZEOF_NODETYPE + sizeof(idNodeType);
    if ((p = malloc(nodeSize)) == NULL)
        yyerror("out of memory");

    /* copy information */
    p->type = typeId;
    p->id.i = i;
    return p;
}

nodeType *opr(int oper, int nops, ...) {
    va_list ap;
    nodeType *p;
    size_t nodeSize;
    int i;

    /* allocate node */
    nodeSize = SIZEOF_NODETYPE + sizeof(oprNodeType) +
               (nops - 1) * sizeof(nodeType*);
    if ((p = malloc(nodeSize)) == NULL)
        yyerror("out of memory");

    /* copy information */
    p->type = typeOpr;
    p->opr.oper = oper;
    p->opr.nops = nops;
    va_start(ap, nops);
    for (i = 0; i < nops; i++)
        p->opr.op[i] = va_arg(ap, nodeType*);
    va_end(ap);
    return p;
}

void freeNode(nodeType *p) {
    int i;

    if (!p) return;
    if (p->type == typeOpr) {
        for (i = 0; i < p->opr.nops; i++)
            freeNode(p->opr.op[i]);
    }
    free(p);
}

void yyerror(char *s) {
    fprintf(stdout, "%s\n", s);
}

int main(void) {
    yyparse();
    return 0;
}
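The grammar above relies on the node types declared in calc3.h, which is not reproduced in this manual. The sketch below shows one plausible version of that header, reconstructed from the field accesses used in the actions (p->type, p->con.value, p->id.i, p->opr.op[i]) and resembling the well-known calc3 example from Tom Niemann's Lex & Yacc guide; treat it as an assumption rather than the exact original.

/* plausible calc3.h, reconstructed for illustration */

typedef enum { typeCon, typeId, typeOpr } nodeEnum;

typedef struct {
    int value;                  /* value of constant */
} conNodeType;

typedef struct {
    int i;                      /* subscript into sym[] */
} idNodeType;

typedef struct {
    int oper;                   /* operator */
    int nops;                   /* number of operands */
    struct nodeTypeTag *op[1];  /* operands, expandable */
} oprNodeType;

typedef struct nodeTypeTag {
    nodeEnum type;              /* type of node */
    union {                     /* anonymous union: needs C11 or a compiler extension */
        conNodeType con;        /* constants */
        idNodeType id;          /* identifiers */
        oprNodeType opr;        /* operators */
    };
} nodeType;

extern int sym[26];

Assuming the files are named calc3.y and calc3.l, a typical build runs yacc -d on the grammar (producing y.tab.c and y.tab.h), lex on the scanner (producing lex.yy.c), and then compiles these together with the file that defines ex().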

INPUT SET: OUTPUT SET: NOTES:

NAME OF FACULTY: SIGNATURE: DATE:

PROGRAM NO: - 10
Unit/Topic: Linux

PROBLEM DEFINITION:
Implement optimization techniques on any intermediate code form.

OBJECTIVE:
To understand optimization techniques.

ALGORITHM:
Optimization is the process of transforming a piece of code to make it more efficient (either in terms of time or space) without changing its output or side effects. The only difference visible to the code's user should be that it runs faster and/or consumes less memory. The name is really a misnomer: it implies you are finding an "optimal" solution, when in truth optimization aims to improve, not perfect, the result.

Optimization is the field where most compiler research is done today. The tasks of the front end (scanning, parsing, semantic analysis) are well understood, and unoptimized code generation is relatively straightforward. Optimization, on the other hand, still retains a sizable measure of mysticism. High-quality optimization is more of an art than a science. Compilers for mature languages aren't judged by how well they parse or analyze the code (you just expect them to do that right with a minimum of hassle) but by the quality of the object code they produce.

Many optimization problems are NP-complete, and thus most optimization algorithms rely on heuristics and approximations. It may be possible to come up with a case where a particular algorithm fails to produce better code or perhaps even makes it worse. However, the algorithms tend to do rather well overall. It's worth reiterating here that efficient code starts with intelligent decisions by the programmer.

Types of optimizations
Techniques used in optimization can be broken up among various scopes which can affect anything from a single statement to the entire program. Generally speaking, locally scoped techniques are easier to implement than global ones but result in smaller gains. Some examples of scopes include:

Peephole optimizations: Usually performed late in the compilation process, after machine code has been generated. This form of optimization examines a few adjacent instructions (like "looking through a peephole" at the code) to see whether they can be replaced by a single instruction or a shorter sequence of instructions. For instance, a multiplication of a value by 2 might be more efficiently executed by left-shifting the value or by adding the value to itself. (This example is also an instance of strength reduction; see the sketch after this list.)

Local optimizations: These only consider information local to a function definition. This reduces the amount of analysis that needs to be performed (saving time and reducing storage requirements) but means that worst-case assumptions have to be made when function calls occur or global variables are accessed (because little information about them is available).

Interprocedural or whole-program optimization: These analyze all of a program's source code. The greater quantity of information extracted means that optimizations can be more effective compared to when they only have access to local information (i.e., within a single function). This kind of optimization can also allow new techniques to be performed, for instance function inlining, where a call to a function is replaced by a copy of the function body.

Loop optimizations: These act on the statements which make up a loop, such as a for loop (e.g., loop-invariant code motion; see the sketch after this list). Loop optimizations can have a significant impact because many programs spend a large percentage of their time inside loops.
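To make two of the scopes above concrete (the strength-reduction example in the peephole paragraph and the loop-invariant code motion mentioned under loop optimizations), here is a small, purely illustrative C sketch; the function names are invented, and the "after" versions show, written out by hand, transformations a compiler could apply:

#include <stddef.h>

/* Strength reduction: replace a multiplication by 2 with a cheaper operation. */
int double_before(int x) { return x * 2; }
int double_after(int x)  { return x + x; }    /* or x << 1 for non-negative x */

/* Loop-invariant code motion: scale * scale does not change inside the loop,
   so it can be hoisted out of it. */
void scale_before(int *a, size_t n, int scale)
{
    for (size_t i = 0; i < n; i++)
        a[i] = a[i] * (scale * scale);        /* recomputed every iteration */
}

void scale_after(int *a, size_t n, int scale)
{
    int s2 = scale * scale;                   /* hoisted: computed once */
    for (size_t i = 0; i < n; i++)
        a[i] = a[i] * s2;
}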

In addition to scoped optimizations there are two further general categories of optimization:

Programming language-independent vs. language-dependent: Most high-level languages share common programming constructs and abstractions: decision (if, switch, case), looping (for, while, repeat..until, do..while), and encapsulation (structures, objects). Thus similar optimization techniques can be used across languages. However, certain language features make some kinds of optimizations difficult. For instance, the existence of pointers in C and C++ makes it difficult to optimize array accesses (see alias analysis). However, languages such as PL/1 (which also supports pointers) nevertheless have sophisticated optimizing compilers that achieve better performance in various other ways. Conversely, some language features make certain optimizations easier. For example, in some languages functions are not permitted to have "side effects". Therefore, if a program makes several calls to the same function with the same arguments, the compiler can immediately infer that the function's result need be computed only once (a small sketch of this follows below).

Machine-independent vs. machine-dependent: Many optimizations that operate on abstract programming concepts (loops, objects, structures) are independent of the machine targeted by the compiler, but many of the most effective optimizations are those that best exploit special features of the target platform.
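A tiny hedged illustration of the side-effect point above, using an invented pure function square: if the compiler knows square has no side effects, the two identical calls may legally be collapsed into one computation.

/* square has no side effects, so two calls with the same argument
   can be replaced by a single computation. */
static int square(int x) { return x * x; }

int sum_before(int a) { return square(a) + square(a); }

int sum_after(int a)                 /* what the optimizer may produce */
{
    int t = square(a);               /* computed once */
    return t + t;
}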

The following is an instance of a local machine-dependent optimization. To set a register to 0, the obvious way is to use the constant '0' in an instruction that sets a register value to a constant. A less obvious way is to XOR a register with itself. It is up to the compiler to know which instruction variant to use. On many RISC machines, both instructions would be equally appropriate, since they would both be the same length and take the same time. On many other microprocessors, such as the Intel x86 family, it turns out that the XOR variant is shorter and probably faster, as there will be no need to decode an immediate operand, nor use the internal "immediate operand register". (A potential problem with this is that XOR may introduce a data dependency on the previous value of the register, causing a pipeline stall. However, processors often treat XOR of a register with itself as a special case that does not cause stalls.)

Factors affecting optimization

The machine itself
Many of the choices about which optimizations can and should be done depend on the characteristics of the target machine. It is sometimes possible to parameterize some of these machine-dependent factors, so that a single piece of compiler code can be used to optimize different machines just by altering the machine description parameters. GCC is a compiler which exemplifies this approach.

The architecture of the target CPU

Number of CPU registers: To a certain extent, the more registers, the easier it is to optimize for performance. Local variables can be allocated in registers rather than on the stack, and temporary/intermediate results can be left in registers without writing to and reading back from memory.

RISC vs. CISC: CISC instruction sets often have variable instruction lengths, often have a larger number of possible instructions that can be used, and each instruction may take a differing amount of time. RISC instruction sets attempt to limit the variability in each of these: instructions are usually of constant length, with few exceptions; there are usually fewer combinations of registers and memory operations; and the instruction issue rate (the number of instructions completed per time period, usually an integer multiple of the clock cycle) is usually constant in cases where memory latency is not a factor. There may be several ways of carrying out a certain task, with CISC usually offering more alternatives than RISC. Compilers have to know the relative costs of the various instructions and choose the best instruction sequence (see instruction selection).

Pipelines: A pipeline is essentially a CPU broken up into an assembly line. It allows different parts of the CPU to be used for different instructions by breaking the execution of an instruction into stages: instruction decode, address decode, memory fetch, register fetch, compute, register store, and so on. One instruction can be in the register store stage while another is in the register fetch stage. Pipeline conflicts occur when an instruction in one stage of the pipeline depends on the result of another instruction ahead of it in the pipeline but not yet completed. Pipeline conflicts can lead to pipeline stalls, where the CPU wastes cycles waiting for a conflict to resolve. Compilers can schedule, or reorder, instructions so that pipeline stalls occur less frequently.

Number of functional units: Some CPUs have several ALUs and FPUs. This allows them to execute multiple instructions simultaneously. There may be restrictions on which instructions can pair with which other instructions ("pairing" is the simultaneous execution of two or more instructions), and on which functional unit can execute which instruction. They also have issues similar to pipeline conflicts. Here again, instructions have to be scheduled so that the various functional units are fully fed with instructions to execute.

The architecture of the machine

Cache size (256 kB to 12 MB) and type (direct-mapped, 2-/4-/8-/16-way associative, fully associative): Techniques such as inline expansion and loop unrolling may increase the size of the generated code and reduce code locality. The program may slow down drastically if a highly utilized section of code (like inner loops in various algorithms) suddenly cannot fit in the cache. Also, caches which are not fully associative have higher chances of cache collisions even in an unfilled cache.

Cache/memory transfer rates: These give the compiler an indication of the penalty for cache misses. This is used mainly in specialized applications.

Intended use of the generated code

Debugging: When a programmer is still writing an application, he or she will recompile and test often, and so compilation must be fast. This is one reason most optimizations are deliberately avoided during the test/debugging phase. Also, program code is usually "stepped through" (see program animation) using a symbolic debugger, and optimizing transformations, particularly those that reorder code, can make it difficult to relate the output code to the line numbers in the original source code. This can confuse both the debugging tools and the programmers using them.

General-purpose use: Prepackaged software is very often expected to be executed on a variety of machines and CPUs that may share the same instruction set, but have different timing, cache or memory characteristics. So, the code may not be tuned to any particular CPU, or may be tuned to work best on the most popular CPU and yet still work acceptably well on other CPUs.

Special-purpose use: If the software is compiled to be used on one or a few very similar machines, with known characteristics, then the compiler can heavily tune the generated code to those specific machines (if such options are available). Important special cases include code designed for parallel and vector processors, for which special parallelizing compilers are employed.

Embedded systems: These are a common case of special-purpose use. Embedded software can be tightly tuned to an exact CPU and memory size. Also, system cost or reliability may be more important than the code's speed. So, for example, compilers for embedded software usually offer options that reduce code size at the expense of speed, because memory is the main cost of an embedded computer. The code's timing may need to be predictable, rather than "as fast as possible," so code caching might be disabled, along with compiler optimizations that require it.

INPUT SET: OUTPUT SET: NOTES:

NAME OF FACULTY: SIGNATURE: DATE:

PROGRAM NO: - 11
Unit/Topic:

PROBLEM DEFINITION:
Study of an object-oriented compiler.

OBJECTIVE:
To understand object-oriented compiler.

ALGORITHM:
A compiler takes a program in a source language, creates some internal representation while checking the syntax of the program, performs semantic checks, and finally generates something that can be executed to produce the intended effect of the program. The obvious candidate for object technology in a compiler is the symbol table: a mapping from user-defined names to their properties as expressed in the program. It turns out, however, that compiler implementation can benefit from object technology in many more areas. If the internal representation is a tree of objects, semantic checking and generation can be accomplished by sending a message to these objects or by visiting each object. If the result of generation is a set of persistent objects, program execution can consist of sending a message to a distinguished object in this set.

Compilers are usually made with tools such as parser and lexical-analyzer generators. A parser generator takes a grammar, specified in a language such as BNF or EBNF, checks it, and constructs a representation (the parser) which will execute semantic actions as phrases over the grammar are recognized. If the parser consists of a set of persistent objects, checking the grammar is accomplished by sending a message to the objects. Similarly, recognizing a program amounts to a message to the start symbol of the grammar. Goal objects are then created to represent the phrases to be recognized, and the user actions are defined as methods for the goals which are called from the parsing objects.

It turns out that OOP can be applied productively in every phase of a compiler implementation, and it delivers the expected benefits: objects enforce information hiding and state encapsulation, methods help to develop by divide and conquer, and all work is carried out by messages which can be debugged by instrumenting their methods. Most importantly, classes encourage code reuse between projects, and inheritance allows code reuse within a project and modifications from one project to another. As an added benefit, modern class libraries contain many pre-fabricated algorithms and data structures.

INPUT SET: OUTPUT SET: NOTES:

NAME OF FACULTY: SIGNATURE: DATE:
