Historical Perspective
In 1770, the English scientist Joseph Priestley studied the erosive effect of electrical discharges. Furthering Priestley's research, the EDM process was invented by two Russian scientists, Dr. B. R. Lazarenko and Dr. N. I. Lazarenko, in 1943. In their efforts to exploit the destructive effects of an electrical discharge, they developed a controlled process for machining metals. Their initial process used spark machining, named after the succession of sparks (electrical discharges) that took place between two electrical conductors immersed in a dielectric fluid. The discharge generator effect used by this machine, known as the Lazarenko circuit, was used for many years in the construction of generators for electrical discharge. Additional researchers entered the field and contributed many fundamental characteristics of the machining method we know today. In 1952, the manufacturer Charmilles created the first machine using the spark machining process; it was presented for the first time at the European Machine Tool Exhibition in 1955. In 1969, Agie launched the world's first numerically controlled wire-cut EDM machine. Seibu developed the first CNC wire EDM machine in 1972, the first such system manufactured in Japan. There is clearly a need to understand the process closely.

When Japan began its reconstruction efforts after World War II, it faced an acute shortage of good quality raw materials, high quality manufacturing equipment and skilled engineers. The challenge was to produce high quality products, and to continue to improve quality, under those circumstances. The task of developing a methodology to meet this challenge was assigned to Dr. Genichi Taguchi, who at that time was a manager in the Nippon Telephone & Telegraph Company. Through his research in the 1950s and early 1960s, Dr. Taguchi developed the foundations of robust design and validated its basic philosophies by applying them in the development of many products.
In recognition of this contribution, he received the individual Deming Award in 1962, one of the highest recognitions in the quality field. The robust design method can be applied to a wide variety of problems. The application of the method in electronics, automotive products, photography and many other industries has been an important factor in the rapid industrial growth, and the subsequent domination of international markets in these industries, by Japan.
Process:
In EDM, a potential difference is applied between the tool and the work piece. Both the tool and the work material must be electrical conductors, and both are immersed in a dielectric medium, generally kerosene or deionised water. A gap is maintained between the tool and the work piece. Depending upon the applied potential difference and the gap between tool and work piece, an electric field is established. Generally the tool is connected to the negative terminal of the generator and the work piece to the positive terminal. As the electric field is established between the tool and the job, the free electrons on the tool are subjected to electrostatic forces. If the work function or the bonding energy of the electrons is low, electrons are emitted from the tool (assuming it to be connected to the negative terminal); such emission of electrons is termed cold emission. The cold-emitted electrons are then accelerated towards the job through the dielectric medium. As they gain velocity and energy and move towards the job, they collide with dielectric molecules. Such a collision may result in ionisation of the dielectric molecule, depending upon the work function or ionisation energy of the dielectric molecule and the energy of the electron. Thus, as the electrons are accelerated, more positive ions and electrons are generated by collisions. This cyclic process increases the concentration of electrons and ions in the dielectric medium between the tool and the job at the spark gap. The concentration becomes so high that the matter existing in that channel can be characterised as plasma. The electrical resistance of such a plasma channel is very low. Thus, all of a sudden, a large number of electrons flow from the tool to the job, and ions from the job to the tool. This is called the avalanche motion of electrons.
Such movement of electrons and ions can be visually seen as a spark. Thus the electrical energy is dissipated as the thermal energy of the spark.
The waveform is characterised by:
- the open circuit voltage, Vo
- the working voltage, Vw
- the maximum current, Io
- the pulse on time (the duration for which the voltage pulse is applied), ton
- the pulse off time, toff
- the gap between the work piece and the tool (spark gap)
- the polarity (straight polarity: tool negative)
- the dielectric medium
- external flushing through the spark gap.
Dielectric:
In EDM, as has been discussed earlier, material removal mainly occurs due to thermal evaporation and melting. This thermal processing must be carried out in the absence of oxygen, so that the process can be controlled and oxidation avoided; oxidation often leads to poor electrical conductivity of the workpiece surface, hindering further machining. Hence, the dielectric fluid should provide an oxygen-free machining environment. Further, it should have high enough dielectric strength that it does not break down electrically too easily, but at the same time it should ionise when electrons collide with its molecules. Moreover, during sparking it should be thermally resistant as well. Generally kerosene and deionised water are used as dielectric fluids in EDM. Tap water cannot be used, as the salts present in it as impurities make it ionise and break down too early. The dielectric medium is generally flushed around the spark zone; it is also applied through the tool to achieve efficient removal of molten material.
The molten crater formed by a single pulse or spark can be assumed to be hemispherical with radius r. Hence the material removed in a single spark can be expressed as

Gs = (2/3) pi r^3

Now it can be logically assumed that the material removal in a single spark is proportional to the spark energy. Thus

Gs is proportional to Es = Vw x Io x ton

Now the material removal rate is the ratio of the material removed in a single spark to the cycle time. Thus

MRR = Gs / (ton + toff), proportional to (Vw x Io x ton) / (ton + toff)
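The simplified model described above can be sketched numerically. This is only an illustration: the proportionality constant k, the numeric inputs and the units are all assumed here, not taken from the text.

```python
import math

def crater_volume(r):
    """Volume of an assumed hemispherical crater of radius r
    formed by a single spark: (2/3) * pi * r^3."""
    return (2.0 / 3.0) * math.pi * r ** 3

def mrr_estimate(v_w, i_o, t_on, t_off, k=1.0):
    """Simplified MRR model from the text: material removed per spark
    is taken proportional to the spark energy E = Vw * Io * ton, and
    MRR = (removal per spark) / (ton + toff).
    k is an empirical proportionality constant (assumed)."""
    energy = v_w * i_o * t_on
    return k * energy / (t_on + t_off)
```

As the text notes, the real relationship is not linear; k would have to be fitted to measured data for a given tool/work/dielectric combination.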
The model presented above is a very simplified one, and the linear relationship is not observed in practice. But even then, such a simplified model captures the behaviour of EDM in a very efficient manner: MRR in practice does increase with increases in working voltage, current and pulse on time, and decreases with increase in pulse off time. Product quality is a very important characteristic of a manufacturing process, along with MRR. The product quality issues in EDM are: surface finish, overcut and taper cut. No two sparks take place side by side; they occur completely randomly, so that over time one gets uniform average material removal over the whole tool cross section. But for the sake of simplicity, it is assumed that sparks occur side by side as shown in Fig.
Thus it may be noted that surface roughness in EDM increases with increase in spark energy, and the surface finish can be improved by decreasing the working voltage, working current and pulse on time. In EDM, the spark occurs between the two nearest points on the tool and workpiece. Thus machining may occur on the side surface as well, leading to overcut and taper cut as depicted in Fig. 5. Taper cut can be prevented by suitable insulation of the tool. Overcut cannot be prevented, as it is inherent to the EDM process, but the tool can be designed in such a way that the overcut is compensated.
In an RC type generator, the capacitor is charged from a DC source. As long as the voltage in the capacitor has not reached the breakdown voltage of the dielectric medium under the prevailing machining conditions, the capacitor continues to charge. Once the breakdown voltage is reached, the capacitor starts discharging and a spark is established between the tool and work piece, leading to machining. Such discharging continues as long as the spark can be sustained. Once the voltage becomes too low to sustain the spark, charging of the capacitor resumes. Fig. 8 shows the working of an RC type EDM relaxation circuit.
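The charge–discharge cycle described above can be sketched with the standard first-order RC charging law; the component values used below are made up for illustration only.

```python
import math

def capacitor_voltage(t, v_o, r_c, c):
    """Capacitor voltage while charging from a DC source of open
    circuit voltage Vo through charging resistance Rc:
    Vc(t) = Vo * (1 - exp(-t / (Rc * C)))."""
    return v_o * (1.0 - math.exp(-t / (r_c * c)))

def time_to_breakdown(v_b, v_o, r_c, c):
    """Charging time until Vc reaches the dielectric breakdown
    voltage Vb (valid only for Vb < Vo), obtained by inverting the
    charging equation above."""
    return -r_c * c * math.log(1.0 - v_b / v_o)
```

Once Vc reaches the breakdown voltage, the spark discharges the capacitor through the gap and the cycle repeats.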
Solution: During charging, the capacitor voltage rises towards Vo and the charging current is

Ic = (Vo - Vc) / Rc

where,
Ic = charging current
Vo = open circuit voltage
Rc = charging resistance
C = capacitance
Vc = instantaneous capacitor voltage during charging
Department of Mechanical Engineering M.M.M. Engineering College Gorakhpur 9
During discharging, the electrical load presented by the EDM gap may be assumed to be purely resistive, characterised by a machine resistance Rm. The current passing through the EDM machine is then given by

Id = Vc / Rm

where,
Id = discharge current (current flowing through the machine)
Vc = instantaneous capacitor voltage during discharging
Rm = machine resistance
Study of EDM and Optimization of its machining parameters

Study of Taguchi's method of Design of Experiments using orthogonal arrays:
Literature Survey:
Modern quality improvement focuses on the reduction of variability in the manufactured product. It is more costly to control the cause of variations than to make a process insensitive to those variations. Manufacturing variability can never be controlled if it is checked at the manufacturing and inspection stages only. Considerable advantages can be obtained by building product quality in at the design stage, instead of controlling quality at the manufacturing process stage or through inspection of the finished product. The Taguchi method is a powerful tool for the design of high quality systems. It provides a simple, efficient and systematic approach to optimizing designs for performance, quality and cost. The methodology is especially valuable when the design parameters are qualitative or discrete. Taguchi parameter design can optimize the performance characteristics through the settings of the design parameters, and reduce the sensitivity of the system performance to sources of variation. In recent years the rapid growth of interest in the Taguchi method has led to numerous applications of the method in a wide range of industries and nations. In the present work, this methodology is employed to optimize the machining parameters and to find the most important factors and their influence on the quality characteristic, i.e. surface roughness. In order to obtain better surface roughness, the proper setting of cutting parameters is crucial before the process takes place. As a starting point for determining cutting parameters, technologists can use the data tables furnished in machining data handbooks. Lin (1994) suggested that a trial and error approach could be followed in order to obtain the optimal machining conditions for a particular operation.
Consequently, identifying the optimum cutting conditions for a particular operation is a very time consuming process. More recently, Design of Experiments (DOE) has been implemented to select manufacturing process parameters that result in a better quality product; DOE is an effective approach to optimizing throughput in various manufacturing-related processes. A study with three independent variables, each with three levels, requires a total of 3^3 = 27 experimental runs. Often the optimum metal cutting process requires studying more than three factors. For example, if a DOE set-up considered 4 or 5 independent variables, each with at least three levels, then 3^4 = 81 or 3^5 = 243 runs would be required. Considering the total cost of these experimental runs, one can conclude that a full factorial design is very costly for industry.
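The run-count arithmetic above can be checked directly; the contrast with an orthogonal-array plan (the L9 used later in this report covers four 3-level factors in just 9 runs) is what motivates the Taguchi approach.

```python
from itertools import product

def full_factorial_runs(n_factors, n_levels):
    """Run count of a full factorial design: levels ** factors."""
    return n_levels ** n_factors

# Enumerate the 3-factor, 3-level case explicitly:
runs = list(product((1, 2, 3), repeat=3))
```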
Subject overview:
Design of Experiments (DOE) is a powerful statistical technique introduced by R. A. Fisher in England in the 1920s to study the effect of multiple variables simultaneously. In his early applications, Fisher wanted to find out how much rain, water, fertilizer, sunshine, etc. were needed to produce the best crop. Since that time, much development of the technique has taken place in the academic environment, but this did little to generate applications on the production floor. As a researcher in the Electronic Control Laboratory in Japan, Dr. Genichi Taguchi carried out significant research with DOE techniques in the late 1940s. He spent considerable effort making his experimental technique more user friendly (easy to apply) and applied it to improve the quality of manufactured products. Dr. Taguchi's standardized version of DOE, popularly known as the Taguchi approach, was introduced in the USA in the early 1980s.
Static Problems
Generally, a process to be optimized has several control factors which directly decide the target or desired value of the output. Such a problem is called a Static Problem. This is best explained using a P-Diagram, shown below (P stands for process or product). Noise is shown to be present in the process but should have no effect on the output: this is the primary aim of Taguchi experiments, to minimize variations in the output even though noise is present in the process. The process is then said to have become ROBUST.
[P-Diagram for a static problem: NOISE and control factors act on the process P, which produces the OUTPUT]
Dynamic Problems
If the product to be optimized has a signal input that directly decides the output, the optimization involves determining the best control factor levels so that the input signal / output ratio is closest to the desired relationship. Such a problem is called a Dynamic Problem.
[P-Diagram for a dynamic problem: SIGNAL and NOISE act on the process P, which produces the OUTPUT]
Though quality can, in general terms, be defined as conformance to specification, fitness for use, and so on, these definitions do not cover the entire implied meaning of quality. The ideal quality a customer can expect is that the product delivers the target performance each time it is used, under all intended operating conditions and throughout its intended life, with no harmful side effects.
Before Dr. Taguchi, the accepted definition of quality was based on the fraction defective, in which the number of defectives, determined by the step-function principle depicted in Fig. 3.1, was the only concern. As per his theory, the measure of the quality of a product is the total loss to society due to functional variation and harmful side effects. Under ideal quality this loss is zero; the greater the loss, the lower the quality. On this view, the total cost of the product is the sum of the operating cost (including maintenance and inventory), the manufacturing cost, the R & D cost (time, laboratory charges, resources, etc.) and the cost incurred by its breakdown and the losses thereby caused to society.
[Fig. 3.3: step-function view of quality, with Reject regions outside the Target- to Target+ band]
This means that all products that meet the specifications are considered equally good. But in reality it is not so: the product whose response is exactly on target gives the best performance, and as the product's performance deviates from the target, the quality becomes progressively worse. These two quality philosophies are contrasted in Fig. 3.4: in one case the focus is on meeting the target, in the other the focus is on meeting the tolerance. An actual case study of the Sony TV plants of the USA and Japan demonstrates how the Japan-made TVs came to be branded as high quality products by focusing on the target rather than on the tolerance.
From this it can be seen that the true quality measure should not be based on the step function shown in Fig. 3.3 but on a quadratic loss function as shown in Fig. 3.4. Here the quality loss function L(y) is symmetric about the target performance; as the performance deviates from the target, the quality loss correspondingly increases. The constant in the loss function is fixed by the cost of replacement or repair incurred at the acceptable deviation limit.
[Fig. 3.4: quadratic loss function, symmetric about the target m, with limits m- and m+]
L = loss associated with attribute y
m = specification target
k = a constant depending upon the cost and the width of the specification
Example: The cost of scrapping a part is Rs. 100 when it deviates by 0.50 mm from a target nominal of 2 mm. Then

Rs. 100 = k (2.5 - 2)^2, so k = Rs. 400 per mm^2, and L = 400 (y - 2)^2
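The worked example above can be sketched as a short function; the numbers are exactly those given in the text.

```python
def taguchi_loss(y, m, k):
    """Nominal-the-best quadratic loss function L(y) = k * (y - m)^2."""
    return k * (y - m) ** 2

# From the text: scrapping costs Rs. 100 at a deviation of 0.5 mm from
# the 2 mm target, so k = 100 / 0.5**2 = Rs. 400 per mm^2.
k = 100 / 0.5 ** 2
```

Note that the loss is symmetric: a part at 1.5 mm incurs the same loss as one at 2.5 mm.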
[Figure: nominal-the-best loss curve with limits m- and m+]
II. Smaller-the-better type:

L(y) = k y^2
E.g. radiation leakage from a microwave oven, response time of a computer, pollution from an automobile, etc.
III. Larger-the-better type:

L(y) = k (1/y^2)
Two concerns drive robust design: (i) how to reduce economically the variation of a product's function in the customer's environment (note that achieving the product function consistently on target maximizes customer satisfaction); and (ii) how to ensure that decisions found to be optimum during laboratory experiments will prove to be so in manufacturing and in the customer's environment.
In addressing these concerns, robust design uses the mathematical formalism of statistical experimental design. A matrix experiment is a set of experiments in which the settings of the various parameters under study are changed from one experiment to another. After conducting a matrix experiment, the data from all experiments in the set are analyzed together to determine the effects of the various parameters. The analysis of means (ANOM) and the analysis of variance (ANOVA) are used to interpret the data and find the sensitivity to each parameter of interest. Conducting the matrix experiment using special matrices, called orthogonal arrays, allows the effects of several parameters to be determined efficiently, and is an important technique in robust design. The different levels of the parameters span the experimental region, or region of interest. Orthogonality is interpreted in the combinatorial sense: for any pair of columns, all combinations of factor levels occur, and they occur an equal number of times. This is called the balancing property, and it implies orthogonality. So an orthogonal array can be defined as a matrix whose columns represent the parameters to be studied, with their different levels appearing in different combinations across the experiments, and whose number of rows equals the number of experiments. Standard orthogonal arrays have been designed and are readily available. Selection of an orthogonal array for a robust design project is based on the number of degrees of freedom of the experiment.
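The balancing property described above can be checked mechanically. The sketch below uses the standard L9(3^4) array (the same array used later in this report) and verifies that, for every pair of columns, each ordered pair of levels occurs equally often.

```python
from collections import Counter
from itertools import combinations

# The standard L9(3^4) orthogonal array: 9 rows, four 3-level columns.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def has_balancing_property(array):
    """For every pair of columns, all level combinations must occur,
    and each must occur the same number of times."""
    columns = list(zip(*array))
    for a, b in combinations(range(len(columns)), 2):
        counts = Counter(zip(columns[a], columns[b]))
        # every combination must be present ...
        if len(counts) != len(set(columns[a])) * len(set(columns[b])):
            return False
        # ... and equally often
        if len(set(counts.values())) != 1:
            return False
    return True
```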
Table - L9 orthogonal array with observation columns (* = observed value)

Trial No. | Column 1 | Column 2 | Column 3 | Column 4 | y1 | y2 | y3 | y4
    1     |    1     |    1     |    1     |    1     |  * |  * |  * |  *
    2     |    1     |    2     |    2     |    2     |  * |  * |  * |  *
    3     |    1     |    3     |    3     |    3     |  * |  * |  * |  *
    4     |    2     |    1     |    2     |    3     |  * |  * |  * |  *
    5     |    2     |    2     |    3     |    1     |  * |  * |  * |  *
    6     |    2     |    3     |    1     |    2     |  * |  * |  * |  *
    7     |    3     |    1     |    3     |    2     |  * |  * |  * |  *
    8     |    3     |    2     |    1     |    3     |  * |  * |  * |  *
    9     |    3     |    3     |    2     |    1     |  * |  * |  * |  *
L4 (2^3) array:

Expt. No. | Column 1 | Column 2 | Column 3
    1     |    1     |    1     |    1
    2     |    1     |    2     |    2
    3     |    2     |    1     |    2
    4     |    2     |    2     |    1

L8 (2^7) array:

Expt. No. | 1 | 2 | 3 | 4 | 5 | 6 | 7
    1     | 1 | 1 | 1 | 1 | 1 | 1 | 1
    2     | 1 | 1 | 1 | 2 | 2 | 2 | 2
    3     | 1 | 2 | 2 | 1 | 1 | 2 | 2
    4     | 1 | 2 | 2 | 2 | 2 | 1 | 1
    5     | 2 | 1 | 2 | 1 | 2 | 1 | 2
    6     | 2 | 1 | 2 | 2 | 1 | 2 | 1
    7     | 2 | 2 | 1 | 1 | 2 | 2 | 1
    8     | 2 | 2 | 1 | 2 | 1 | 1 | 2
Selection of factor levels:

A minimum of two levels is necessary to estimate a factor's effect. Continuous factors must be discretised, preferably in equal intervals; for example, the levels of a length parameter might be 1 cm, 1.5 cm and 2.0 cm. The more levels, the more experimental runs are necessary, and the number of levels determines the resolution of the effects that can be estimated. The advantage of taking a minimum of three points, to capture second order effects, is demonstrated in Fig. 4.5 and Fig. 4.6: more levels capture more of the non-linearity, but require more experiments and associated effort, so an optimum of three levels is recommended. Fig. 4.5 demonstrates how non-linearity can be missed if only two levels are taken, and Fig. 4.6 shows that three levels give a better representation of the actual effect, which can be nonlinear. Even a three-level combination will not capture the exact relationship, so the entire matrix experiment may have to be repeated several times to get the most robust design.
Factor Assignment:
The selected factors are assigned to the different columns of the specific orthogonal array as shown below. The array in Table 4.4 is a 3-level, four-parameter, 9-experiment orthogonal array. The first column gives the trial number, from 1 to 9; the remaining columns give the levels of the parameters (factors) A, B, C and D. This results in 9 experiments, with the factor combinations given in the rows. For example, the first experiment is conducted with all the factors A, B, C and D at level 1; for the second experiment, factor A is at level 1 and all other factors at level 2; and so on. As far as this project is concerned, this chapter introduces the technique of matrix experiments based on orthogonal arrays for the optimisation of machining parameters affecting surface roughness. The engineering issues involved in planning and conducting matrix experiments, and the technique of constructing orthogonal arrays, are discussed in the following sections.
Trial number | Factor A | Factor B | Factor C | Factor D
      1      |    1     |    1     |    1     |    1
      2      |    1     |    2     |    2     |    2
      3      |    1     |    3     |    3     |    3
      4      |    2     |    1     |    2     |    3
      5      |    2     |    2     |    3     |    1
      6      |    2     |    3     |    1     |    2
      7      |    3     |    1     |    3     |    2
      8      |    3     |    2     |    1     |    3
      9      |    3     |    3     |    2     |    1
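Translating such a coded plan into concrete settings can be sketched as follows. The factor values (8/10/12 and 200/500/750) are taken from the experimenter's log sheet later in this report; the helper names and the restriction to two factors are illustrative assumptions.

```python
# First two columns of the standard L9 array (levels per trial).
L9_COLUMNS = {
    1: [1, 1, 1, 2, 2, 2, 3, 3, 3],
    2: [1, 2, 3, 1, 2, 3, 1, 2, 3],
}

# Level definitions (values from the log sheet in the text).
LEVELS = {
    "A": {1: 8, 2: 10, 3: 12},
    "B": {1: 200, 2: 500, 3: 750},
}

def log_sheet(assignment):
    """Build the experimenter's log sheet.
    assignment maps a factor name to the OA column carrying it."""
    rows = []
    for trial in range(9):
        rows.append({factor: LEVELS[factor][L9_COLUMNS[col][trial]]
                     for factor, col in assignment.items()})
    return rows

plan = log_sheet({"A": 1, "B": 2})
```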
Selecting a standard orthogonal array:

Taguchi has tabulated 18 basic orthogonal arrays, called standard orthogonal arrays. In many case studies, one of the arrays listed in Table 4.5 below can be used directly to plan a matrix experiment. An array's name indicates the number of rows it has and the number of levels in its columns. Thus the array L4(2^3) has four rows and three 2-level columns, while L18(2^1 3^7) has one 2-level column and seven 3-level columns, i.e. eight columns in all. Table 4.5 lists the 18 standard orthogonal arrays along with the number of columns at each level:

Orthogonal | No. of | Max. no.   | No. of columns at level
array      | rows   | of factors |  2  |  3  |  4  |  5
L4         |   4    |     3      |  3  |  -  |  -  |  -
L8         |   8    |     7      |  7  |  -  |  -  |  -
L9         |   9    |     4      |  -  |  4  |  -  |  -
L12        |  12    |    11      | 11  |  -  |  -  |  -
L16        |  16    |    15      | 15  |  -  |  -  |  -
L16'       |  16    |     5      |  -  |  -  |  5  |  -
L18        |  18    |     8      |  1  |  7  |  -  |  -
L25        |  25    |     6      |  -  |  -  |  -  |  6
L27        |  27    |    13      |  -  | 13  |  -  |  -
L32        |  32    |    31      | 31  |  -  |  -  |  -
L32'       |  32    |    10      |  1  |  -  |  9  |  -
L36        |  36    |    23      | 11  | 12  |  -  |  -
L36'       |  36    |    16      |  3  | 13  |  -  |  -
L50        |  50    |    12      |  1  |  -  |  -  | 11
L54        |  54    |    26      |  1  | 25  |  -  |  -
L64        |  64    |    63      | 63  |  -  |  -  |  -
L64'       |  64    |    21      |  -  |  -  | 21  |  -
L81        |  81    |    40      |  -  | 40  |  -  |  -

2-level arrays: L4, L8, L12, L16, L32, L64
3-level arrays: L9, L27, L81
Mixed 2- and 3-level arrays: L18, L36, L36', L54

Table 4.5 - Standard orthogonal arrays
Since the factors in this project have three levels, it is preferable to use an array from the three-level series. Because there are seven degrees of freedom, the array must have 7 or more rows. Looking at Table 4.5, the smallest array with at least 7 rows is L8, but this array has seven 2-level columns, whereas our project needs 3-level columns. The next larger array is L9, which has four 3-level columns. Here we can assign the three 3-level factors to three of the four 3-level columns, keeping one 3-level column empty. Keeping one or more columns of an array empty does not destroy the orthogonality of the matrix experiment, so L9 is a good choice for this experiment. In a situation like this, we would take another look at the control factors to see whether there is an additional control factor, perhaps ignored as less important, that could be studied. If one exists, it should be assigned to the empty column; doing so gives us a chance to gain information about this additional factor without spending any more resources.
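The degrees-of-freedom count used above can be sketched as follows; the small dictionary of arrays is just a subset of Table 4.5 for illustration.

```python
def experiment_dof(factor_levels):
    """Degrees of freedom of a matrix experiment: 1 for the overall
    mean plus (levels - 1) per factor."""
    return 1 + sum(levels - 1 for levels in factor_levels)

# Three 3-level factors, as in this project:
dof = experiment_dof([3, 3, 3])

# Three-level standard arrays: name -> (rows, number of 3-level columns).
STANDARD_3LEVEL = {"L9": (9, 4), "L27": (27, 13), "L81": (81, 40)}

# The chosen array must have at least `dof` rows and enough 3-level
# columns for the three factors; the first match is the smallest.
fits = [name for name, (rows, cols) in STANDARD_3LEVEL.items()
        if rows >= dof and cols >= 3]
```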
In general, a linear graph does not show the interaction between every pair of columns of the orthogonal array; that information is contained in the interaction table.
Table - L9 orthogonal array and factor assignment*

Expt no. | 1 (A) | 2 (B) | 3 (C) | 4 (E)
    1    |   1   |   1   |   1   |   1
    2    |   1   |   2   |   2   |   2
    3    |   1   |   3   |   3   |   3
    4    |   2   |   1   |   2   |   3
    5    |   2   |   2   |   3   |   1
    6    |   2   |   3   |   1   |   2
    7    |   3   |   1   |   3   |   2
    8    |   3   |   2   |   1   |   3
    9    |   3   |   3   |   2   |   1

*Empty columns are identified by E.
The nine rows of the L9 array represent the nine experiments to be conducted, with each factor set at the level indicated by the corresponding column entry. These labels can be read from Table 4.6, which is given in the next chapter. However, to make the experiments convenient to run and to prevent translation errors, the entire matrix of Table 4.6 should be translated using the level definitions in Table 4.7 to create the experimenter's log sheet.
Expt no. | Column number and factor assignment
         | 1 (A) | 2 (B) | 3 (C)
    1    |   8   |  200  |   6
    2    |   8   |  500  |  12
    3    |   8   |  750  |  18
    4    |  10   |  200  |   6
    5    |  10   |  500  |  12
    6    |  10   |  750  |  18
    7    |  12   |  200  |   6
    8    |  12   |  500  |  12
    9    |  12   |  750  |  18
Grey Relational Analysis:

The Grey System Theory, developed by Deng (1982), is mainly used to study the uncertainty of system models, to analyse the relations between systems, to establish models, and to make forecasts and decisions (Deng, 1984). Grey Relational Analysis probes the extent of the connection between two data series by applying the methodology of departing and scattering measurement to the actual measured distances. It is an effective means of analysing the relationship between two series, and this study applies it to measure the similarity between the series.
1. Data Pre-processing
Grey data pre-processing must be performed before the Grey relational coefficients can be calculated. Series in various units must be transformed to be dimensionless; usually, each series is normalized, for example by dividing the data in the original series by their average. Let the original reference sequence and the sequences for comparison be represented as x0(k) and xi(k), i = 1, 2, ..., m; k = 1, 2, ..., n, respectively, where m is the total number of experiments considered and n is the total number of observations. Data pre-processing converts the original sequence to a comparable sequence. Several methodologies of pre-processing can be used in Grey relational analysis, depending on the characteristics of the original sequence.
If the target value of the original sequence is larger-the-better, then the original sequence is normalized as follows:

xi*(k) = (xi(k) - min xi(k)) / (max xi(k) - min xi(k))

For a smaller-the-better target (such as surface roughness), the normalization is xi*(k) = (max xi(k) - xi(k)) / (max xi(k) - min xi(k)).
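These normalizations can be sketched in a few lines; the smaller-the-better variant is included because surface roughness, used later in this report, is a smaller-the-better response.

```python
def normalize_larger_better(seq):
    """Larger-the-better normalization:
    x*(k) = (x(k) - min) / (max - min)."""
    lo, hi = min(seq), max(seq)
    return [(x - lo) / (hi - lo) for x in seq]

def normalize_smaller_better(seq):
    """Smaller-the-better normalization (e.g. surface roughness):
    x*(k) = (max - x(k)) / (max - min)."""
    lo, hi = min(seq), max(seq)
    return [(hi - x) / (hi - lo) for x in seq]
```

Either way, the best observation maps to 1 and the worst to 0, so all responses become comparable.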
2. Grey Relational Coefficients: Following the data pre-processing, a Grey relational coefficient can be calculated using the pre-processed sequences. The Grey relational coefficients express the relationship between the ideal (best = 1) and the actual experimental results. The grey relational coefficient can be expressed as

xi_i(k) = (Dmin + z Dmax) / (D0i(k) + z Dmax),   with D0i(k) = |x0(k) - xi(k)|

where xi*(k) is the ideal normalized result for the ith performance characteristic and z is the distinguishing coefficient, defined in the range 0 <= z <= 1; z is also called the environmental factor.
After the coefficients are obtained, the Grey relational grade (GRG) is computed as their average:

GRG:  gamma_i = (1/n) SUM over k of xi_i(k)

Here, the Grey relational grade represents the level of correlation between the reference and comparability sequences. If the two sequences are identical, then the value of the Grey relational grade is 1.
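A minimal sketch of the coefficient and grade, under the common convention that the reference is the ideal value 1 for normalized data (so Delta_min = 0 and Delta_max = 1); the default z = 0.5 is an assumption, though it reproduces the coefficient values tabulated later in this report.

```python
def grey_relational_coefficient(x_star, zeta=0.5):
    """GRC for one normalized value x* against the ideal value 1.
    With normalized data, delta_min = 0 and delta_max = 1, so
    xi = (0 + zeta * 1) / (delta + zeta * 1), delta = |1 - x*|."""
    delta = abs(1.0 - x_star)
    return zeta / (delta + zeta)

def grey_relational_grade(coefficients):
    """GRG: the average of the grey relational coefficients."""
    return sum(coefficients) / len(coefficients)
```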
EXPERIMENTAL DESIGN, SET-UP AND ANALYSIS:
Principle of operation: Electrical discharge machining
In precision engineering, electrical discharge machining is a very important method of manufacturing. Metal removal is caused by electrical sparks, and the MRR depends on the heat produced. Both the work piece and the tool are conductive materials. A spark is generated between tool and work piece, and the discharge produces intense heat, all within microseconds. The sparking frequency may be thousands of sparks per second, the area over which each spark is effective is very small, and a very high temperature is developed. Material is partly melted and partly vaporized from a localised area on both electrodes, i.e. work piece and tool. The material is removed in the form of craters which spread over the entire surface of the work piece. Finally, the cavity produced in the work piece is approximately a replica of the tool. For the machined cavity to be an exact replica of the tool, the tool wear should be zero; here the tool wear is minimised by Taguchi's method (see Fig.).
The figure above shows the experimental set-up with the basic components of a z-axis controlled electrical discharge machine; the schematic diagram of the machining operation through electrical discharge machining is also shown in the figure. The EDM used for our experiments was manufactured by Electronica Machine Tools Ltd.
Specification of the EDM machine used for experimentation
The value of maximum current density varies with the tool material used:
1. Copper: 5 to 7 amp/cm2
2. Graphite: 7 to 10 amp/cm2
3. Tungsten-copper: 12 to 15 amp/cm2
Pulse on time values: 0.5, 0.75, 1.5, 2, 3, 4, 5, 7.5, 10, 15, 20, 30, 40, 50, 75, 100, 150, 200, 300, 400, 500, 750, 1000, 1500, 2000, 3000, and 4000.
Gap voltage: 50 V
Flushing pressure: 0.1 kg/cm2
Electrode: copper
Electrode polarity: positive
Dielectric: kerosene (EDM oil)
Flushing condition: side pressure flushing
Mains voltage: 3-phase, 415 V
Peak current: range 0 to 50 A, in steps of 0.5 A
Surface roughness measurement: Sutronic 10
PULSE ON TIME: the period during which the voltage is applied across the tool and the work piece.
DUTY CYCLE: the ratio of the pulse on-time to the total cycle time (on-time plus off-time); the duty cycle controls the fraction of time for which the pulse applies energy.
PEAK CURRENT: the maximum current attained during the process.
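The duty cycle arithmetic can be sketched as below; note that defining duty cycle as ton / (ton + toff) is one common convention, and the machine maker's definition may differ.

```python
def duty_cycle(t_on, t_off):
    """Duty cycle as the fraction of each pulse cycle during which
    energy is applied: ton / (ton + toff) (assumed convention)."""
    return t_on / (t_on + t_off)
```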
S. no. | Ton(s) | E (error)
  1    |   8    |    1
  2    |   8    |    2
  3    |   8    |    3
  4    |  10    |    3
  5    |  10    |    1
  6    |  10    |    2
  7    |  12    |    2
  8    |  12    |    3
  9    |  12    |    1
[Table: S. No. 1-9 vs. Difference]
Run | Weight (gm.) | Avg. surface roughness | Weight (gm.) | Avg. surface roughness
 1  |   83.628     |        39.4            |   82.737     |        19.1
 2  |   80.432     |        36.7            |   76.807     |        28.65
 3  |   83.63      |        35.2            |   80.665     |        28.5
 4  |   79.119     |        31.2            |   76.413     |        19.125
 5  |   84.203     |        29.8            |   80.854     |        26.25
 6  |   84.149     |        29.6            |   83.531     |        15.98
 7  |   85.184     |        31.4            |   81.844     |        26.00
 8  |   83.779     |        30.7            |   82.984     |        18.55
 9  |   89.169     |        31.7            |   86.326     |        24.30
S.no. | Duty cycle | Ton | P.I. | E
  1   |     8      | 200 |   6  | 1
  2   |     8      | 500 |  12  | 2
  3   |     8      | 750 |  18  | 3
  4   |    10      | 200 |   6  | 3
  5   |    10      | 500 |  12  | 1
  6   |    10      | 750 |  18  | 2
  7   |    12      | 200 |   6  | 2
  8   |    12      | 500 |  12  | 3
  9   |    12      | 750 |  18  | 1

[The measured MRR, TWR and Avg. S.R. columns of this table are not cleanly recoverable; the Avg. S.R. values appear in the following table.]
Expt | Avg. S.R. | Normalized Avg. S.R. | Normalized MRR
  1  |   19.1    |       0.7537         |     0.09
  2  |   28.65   |       0.0            |     1.00
  3  |   28.5    |       0.0118         |     0.798
  4  |   19.125  |       0.7517         |     0.694
  5  |   26.25   |       0.1894         |     0.908
  6  |   15.8    |       1.00           |     0.00
  7  |   26      |       0.20915        |     0.905
  8  |   18.55   |       0.797          |     0.0588
  9  |   24.3    |       0.3433         |     0.7399

[Raw MRR and TWR columns: only the fragments 4.08, 37.58 and 0.668 survive.]
Step 2: Calculation of Grey relational coefficients (GRC) and grade (GRD).
Formulae used:

xi_i(k) = (Dmin + z Dmax) / (D0i(k) + z Dmax)     (xi_i(k) is the GRC)

GRD:  gamma_i = (1/n) SUM over k of xi_i(k)
Expt | GRC (Avg. S.R.) | GRC (MRR) | GRC (TWR) |   GRD
  1  |     0.6699      |  0.3546   |  0.9608   | 0.6617
  2  |     0.33        |  1.0      |  0.7648   | 0.6994
  3  |     0.3359      |  0.7122   |  0.4461   | 0.4981
  4  |     0.6681      |  0.6203   |  0.6258   | 0.6088
  5  |     0.3815      |  0.8446   |  0.3333   | 0.5198
  6  |     1.00        |  0.3333   |  1.000    | 0.7777
  7  |     0.373       |  0.8403   |  0.3464   | 0.52467
  8  |     0.7112      |  0.3469   |  0.9      | 0.6793
  9  |     0.4322      |  0.6578   |  0.6      | 0.5033
As shown in the table, the greatest value of the grey relational grade is 0.7777, obtained in experiment 6. Therefore the combination of parameters used in that experiment gives the best possible compromise between the responses and represents the optimized machining parameters.
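As a check, the grades can be recomputed as the mean of the three response GRCs for each experiment; the numbers below are copied from the table above.

```python
# experiment -> (GRC surface roughness, GRC MRR, GRC TWR)
grc = {
    1: (0.6699, 0.3546, 0.9608),
    2: (0.3300, 1.0000, 0.7648),
    3: (0.3359, 0.7122, 0.4461),
    4: (0.6681, 0.6203, 0.6258),
    5: (0.3815, 0.8446, 0.3333),
    6: (1.0000, 0.3333, 1.0000),
    7: (0.3730, 0.8403, 0.3464),
    8: (0.7112, 0.3469, 0.9000),
    9: (0.4322, 0.6578, 0.6000),
}

# GRD is the average of the three coefficients; the best run maximizes it.
grades = {run: sum(vals) / 3 for run, vals in grc.items()}
best = max(grades, key=grades.get)
```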
Different values of surface roughness are obtained by keeping two factors constant and varying the third. Three lines are plotted in each graph, showing how the effect of one factor changes as another is varied; this shows the interaction of one factor with another.
[Graph: S.R. vs D.C. for different values of Ton]
Surface roughness decreases with increase in pulse on time, and the higher the duty cycle, the lower the surface roughness. However, variation in pulse on time affects the shape of the duty cycle graph: with increasing pulse on time the curve becomes almost flat, i.e. there is very little change in surface roughness.
[Graph: S.R. vs D.C. for different values of peak current]
All three lines in the graph almost coincide with each other. This shows that peak current only feebly affects the nature of the duty cycle vs surface roughness graph. A higher value of duty cycle should be selected to minimize surface roughness.
[Graph: S.R. vs Ton for different values of peak current]
The nature of the graph changes with varying peak current. This shows that peak current and pulse on time interact with each other in the manner shown in the graph.
[Graph: S.R. vs Ton for different values of duty cycle]
Pulse on time has very little effect on surface roughness at low duty cycle, but the effect becomes significant at higher values of duty cycle: surface roughness starts increasing with pulse on time at higher values of duty cycle.
[Graph: S.R. vs I.P. (peak current) for different values of Ton]
With increase in peak current the surface roughness increases. Since the lines obtained are parallel, increasing the pulse on time raises the surface roughness but leaves the nature of the graph unaffected.
[Graph: S.R. vs I.P. for different values of duty cycle]
Since all three lines almost coincide with each other, the effects of these two factors on surface roughness are almost independent of each other.
As for surface roughness, different values of material removal rate are obtained by keeping two factors constant and varying the third. Three lines are plotted in each graph, showing how the effect of one factor changes as another is varied; this shows the interaction of one factor with another.
1. MRR decreases with increase in duty cycle. With increase in Ton the curve shifts downwards.
2. With increase in duty cycle, MRR decreases, but the nature of the curve remains the same; the changes are not very large.
3. With increase in Ton the MRR decreases. At lower duty cycle the curve sits higher, but as duty cycle increases the MRR decreases. At high peak current, the MRR increases with increase in duty cycle.
4. With increase in pulse on time the MRR decreases. The magnitude of MRR increases, but the nature of the curve remains the same.
5. MRR increases with increasing peak current. The nature of the curve remains the same.
6. MRR increases with increasing peak current. The nature of the curve remains the same. With increasing Ton, MRR decreases.