
Paper/Presentation/Programs

Overview

Genetic Algorithms

- … estimation
- Application to Boundary Inverse Heat Conduction Problem

Overview

Neural Networks

- … estimation
- Discussion of boundary inverse heat conduction problem

MATLAB

- Computation and visualization of results
- Simple programming language
- Optimized algorithms
- Add-in toolbox for Genetic Algorithms

Genetic Algorithms

- … dimensional solution space
- GAs mimic processes in nature that led to the evolution of higher organisms:
  - Crossover
  - Mutation
- … and therefore may be suitable for nonlinear problems

May 28, 2002, 4th Int. Conf. Inv. Probs. Eng.

Genetic Algorithms

- … a specified fitness measure
- The best members of the population are selected for reproduction to form the next generation
- The new population is related to the old one in a particular way
- Random mutations occur to introduce new characteristics into the new generation

Genetic Algorithms

- A random number generator will be called thousands of times during a simulation
- … computationally intensive
- Usually will find the global max/min within the specified search domain

Genetic Algorithms

Basic scheme:

(1) Initialize population
(2) Evaluate fitness of each member
(3) Reproduce with fittest members
(4) Introduce random mutations into the new generation

Continue (2)-(3)-(4) until the prespecified number of generations is complete
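A minimal sketch of this basic scheme in Python (the talk's own code is MATLAB; the function and parameter names here are illustrative, and reproduction/mutation follow the real-encoding operators described on later slides):

```python
import random

def run_ga(fitness, low, high, pop_size=50, n_best=10, n_gen=100, mut_chance=0.5):
    """Minimal real-coded GA; lower fitness is better (e.g., sum of squared errors)."""
    # (1) Initialize population uniformly over the search domain (low, high)
    pop = [random.uniform(low, high) for _ in range(pop_size)]
    for _ in range(n_gen):
        # (2) Evaluate fitness and sort the population best to worst
        pop.sort(key=fitness)
        best = pop[:n_best]
        # (3) Reproduce: each child is a random blend of two of the best members
        children = []
        for _ in range(pop_size - n_best):
            a, b = random.choice(best), random.choice(best)
            w = random.random()
            children.append(w * a + (1 - w) * b)
        # (4) Random mutation: a draw above the threshold replaces the child
        #     with a fresh random member of the search domain
        for i in range(len(children)):
            if random.random() > mut_chance:
                children[i] = random.uniform(low, high)
        pop = best + children  # elitism: the best members survive unchanged
    return min(pop, key=fitness)

# Usage: minimize (x - 3)^2 over [-10, 10]
best = run_ga(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

Because the best members are carried over unchanged, the best fitness found never worsens from one generation to the next.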


- … in the population
- Similar to its role in a conventional inverse problem

Elitism

- The best members are carried unchanged into the new generation to ensure that their characteristics continue to influence subsequent generations

Encoding

Binary Encoding

- Represents data as strings of binary numbers
- Useful for certain GA operations (e.g., crossover)

[Diagram: single-point crossover, with a randomly selected crossover point splitting Parent A and Parent B to form Child AB]
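The operation in the diagram can be sketched in Python (bit strings stand in for the chromosomes; names are illustrative):

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover of two equal-length bit strings."""
    # Crossover point is randomly selected strictly inside the string
    point = random.randint(1, len(parent_a) - 1)
    # Child AB: parent A's bits before the point, parent B's bits after it
    return parent_a[:point] + parent_b[point:]

child = crossover("111111", "000000")
```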


Binary Encoding

Mutation

- A random number is drawn for each bit of the chromosome; if the random number is greater than a mutation threshold selected before the simulation, the bit is flipped

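The bit-flip rule above can be sketched in Python (illustrative names; the draw-per-bit convention follows the slide):

```python
import random

def mutate(bits, mut_chance):
    """Flip each bit whose per-bit random draw exceeds the mutation threshold."""
    out = []
    for b in bits:
        if random.random() > mut_chance:  # threshold chosen before the simulation
            out.append("1" if b == "0" else "0")  # flip the bit
        else:
            out.append(b)
    return "".join(out)
```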

- … numbers
- Parents selected by sorting the population from best to worst and taking the top Nbest for random reproduction

Reproduction

- Ci = w*Ai + (1 - w)*Bi, where w is a random number, 0 ≤ w ≤ 1
- If the sequence of arrays is relevant, use a crossover-like scheme on the children
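For array-valued members, the blend can be sketched as (illustrative Python):

```python
import random

def reproduce(parent_a, parent_b):
    """Child c_i = w*a_i + (1 - w)*b_i with a single random w in [0, 1]."""
    w = random.random()
    return [w * a + (1 - w) * b for a, b in zip(parent_a, parent_b)]

child = reproduce([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```

Each child component lies between the corresponding parent components, so reproduction alone cannot leave the region spanned by the parents; mutation and creep supply the rest.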


Mutation

- Replace the entire array with a randomly generated one
- Introduces large changes into the population

Creep

- Perturb a member of the population with Ci = (1 + w)*Ci, where w is a random number in the range 0 ≤ w ≤ wmax
- Both the creep threshold and wmax must be specified before the simulation begins
- Introduces small-scale changes into the population
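A sketch of the creep operator in Python (illustrative names; a per-component random draw against the creep threshold is assumed):

```python
import random

def creep(member, creep_chance, w_max):
    """Scale components by (1 + w), 0 <= w <= w_max, when the per-component
    random draw exceeds the creep threshold; small-scale changes only."""
    out = []
    for c in member:
        if random.random() > creep_chance:
            w = random.uniform(0.0, w_max)  # w_max is fixed before the simulation
            out.append((1.0 + w) * c)
        else:
            out.append(c)
    return out
```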


Simple GA Example

- Given data points presumed to lie on a line, determine the best values of the intercept b and the slope m:

  y = b + m x

- Measure of fitness (sum of squared errors):

  S = Σ_{i=1}^{N_data} (y_i − ŷ_i)²
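The fitness measure for this example, sketched in Python (names illustrative):

```python
def fitness(member, xdata, ydata):
    """Sum-of-squares misfit for the line-fit example; member = (b, m)."""
    b, m = member
    return sum((y - (b + m * x)) ** 2 for x, y in zip(xdata, ydata))

# Data generated from y = 1 + 2x at x = 1..5
xdata = [1, 2, 3, 4, 5]
ydata = [3, 5, 7, 9, 11]
```

The exact member (b, m) = (1, 2) gives S = 0; any other member gives S > 0, so minimizing S recovers the line.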


>> b = 1; m = 2;
>> xvals = [1 2 3 4 5];
>> yvals = b*ones(1,5) + m*xvals
yvals =
     3     5     7     9    11


Parameters

- … population
- (low, high): real-number pair specifying the domain of the search space
- Nbest: number of the best members to use for reproduction at each new generation

Parameters

- … produce
- Mut_chance: mutation threshold
- Creep_chance: creep threshold
- Creep_amount: the parameter wmax


- … of Nunknown values representing the piecewise-constant heat flux components
- Discrete Duhamel's summation used to compute the response of the 1-D domain

- … 0.001
- Assume classic triangular heat flux:

  q(t) = 0,                 t < 0.24
  q(t) = −(t − 0.84)/0.6,   0.24 ≤ t ≤ 0.84
  q(t) = 0,                 t > 0.84

Data


- Choose every third point from the generated set
- Use all the data from the generated set

GA Program Modifications

- Let Ngen, mut_chance, creep_chance, and creep_amount be vectors:

  Ngen = [100 200]
  mut_chance = [0.7 0.5]

  means mut_chance = 0.7 for the first 100 generations, then mut_chance = 0.5 until 200 generations
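One way to sketch that schedule lookup in Python (hypothetical helper name):

```python
def param_at(gen, ngen, values):
    """Return the scheduled parameter value for generation `gen`:
    values[k] applies until the generation count reaches ngen[k]."""
    for limit, value in zip(ngen, values):
        if gen < limit:
            return value
    return values[-1]

# Ngen = [100, 200], mut_chance = [0.7, 0.5]:
# 0.7 for the first 100 generations, then 0.5 until generation 200
```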


GA Program Modifications

- At the end of each segment of the Ngen array, redefine (low, high) based on the (min, max) of the best member of the population
- Nelite = 5

Easy Problem

[Figures: Easy Problem results]

Nbest = 20
Ngen = [200 350 500 650 750]
mut_chance = [0.9 0.7 0.5 0.3 0.1]
creep_chance = [0.9 0.9 0.9 0.9 0.9]
creep_amount = [0.7 0.5 0.3 0.1 0.05]


Hard Problem

- … has small-time-step data (Δt = 0.06)
- Use same parameters as last:

  Nbest = 20
  Ngen = [200 350 500 650 750]
  mut_chance = [0.9 0.7 0.5 0.3 0.1]
  creep_chance = [0.9 0.9 0.9 0.9 0.9]
  creep_amount = [0.7 0.5 0.3 0.1 0.05]


Hard Problem

[Figures: Hard Problem results]

Hard Problem

- … as Δt becomes small
- Add a regularization term to the objective function:

  S = Σ_{i=1}^{N_data} (y_i − ŷ_i)² + E₁ Σ_{j=1}^{N_data−1} (q_{j+1} − q_j)²
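The regularized objective can be sketched in Python (names illustrative; the penalty is the sum of squared first differences of the flux components q, weighted by E1):

```python
def objective(residuals, q, e1=1.0e-3):
    """Data misfit plus E1 times the sum of squared first differences of the
    heat-flux components q (a first-order regularization penalty)."""
    misfit = sum(r ** 2 for r in residuals)
    penalty = sum((q[j + 1] - q[j]) ** 2 for j in range(len(q) - 1))
    return misfit + e1 * penalty
```

The penalty discourages large jumps between neighboring flux components, trading a small bias for stability as the time step shrinks.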


Hard Problem

With E1 = 1.e-3:

[Figures: regularized Hard Problem results]

- GAs are a random search procedure
- GAs are computationally intensive
- Domain of solution must be known
- GAs can be applied to ill-posed problems but cannot bypass the ill-posedness of the problem
- Selection of solution parameters for GAs is important for successful simulation

Neural Networks

- … with weights
- Intended to mimic the massively parallel operations of the human brain
- Act as interpolative functions for a given set of facts

Neural Networks

- The weights in the network are adjusted until the correct answer is given for all the facts in the training set
- These answers are consistent with the training data

Neural Networks

- Training captures the relationship between the inputs and outputs by adjustment of the weights in the network
- When confronted with facts not in the training set, the weights and activation functions act to compute a result consistent with the training data

Neural Networks

- Static (feedforward) NNs accept all input at once
- Recurrent or dynamic NNs accept input sequentially and may have one or more outputs fed back to the input

- … (input, output) data sets for training

Neural Networks

[Diagram: feedforward network with an input layer, a hidden layer, and an output layer]

Neurons

[Diagram: neuron with inputs p_1 … p_n weighted by w_1 … w_n]

out = Σ_{i=1}^{n} w_i p_i

- Weighted summation
- Activation function
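A single neuron can be sketched in Python, using tanh as the activation (MATLAB's tansig is mathematically equivalent to tanh; names here are illustrative):

```python
import math

def neuron(weights, inputs):
    """Weighted summation of the inputs followed by a tanh activation."""
    s = sum(w * p for w, p in zip(weights, inputs))
    return math.tanh(s)

out = neuron([0.5, 0.5], [1.0, 1.0])
```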


MATLAB Toolbox

- … and use of NNs
- … functions
- Variety of training algorithms

- … determine the slope (m) and intercept (b) of the line

Simple Example

First approach:

- Let the six values of x be fixed at 0, 0.2, 0.4, 0.6, 0.8, and 1.0
- Inputs to the network will be the six values of y corresponding to these x values
- Outputs of the network will be the slope m and intercept b

Simple Example

Training data

Columns 1 through 8

     0    0.2500    0.5000    0.7500    1.0000    1.0000    0.7500    0.5000
0.2000    0.4000    0.6000    0.8000    1.0000    1.0000    0.8000    0.6000
0.4000    0.5500    0.7000    0.8500    1.0000    1.0000    0.8500    0.7000
0.6000    0.7000    0.8000    0.9000    1.0000    1.0000    0.9000    0.8000
0.8000    0.8500    0.9000    0.9500    1.0000    1.0000    0.9500    0.9000
1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000    1.0000

Simple Example

Network1

Use tansig activation function


input_data1 =
    0.9000    0.1000    0.9000    0.3000    0.5000    0.3000    0.7000    0.7000
    1.0400    0.2800    0.9200    0.4000    0.5600    0.4400    0.7600    0.8800
    1.1800    0.4600    0.9400    0.5000    0.6200    0.5800    0.8200    1.0600
    1.3200    0.6400    0.9600    0.6000    0.6800    0.7200    0.8800    1.2400
    1.4600    0.8200    0.9800    0.7000    0.7400    0.8600    0.9400    1.4200
    1.6000    1.0000    1.0000    0.8000    0.8000    1.0000    1.0000    1.6000

output_data1 =
    0.9000    0.1000    0.9000    0.3000    0.5000    0.3000    0.7000    0.7000
    0.7000    0.9000    0.1000    0.5000    0.3000    0.7000    0.3000    0.9000

Simple Example

Network1: Test data1 results

     b        m      bNN      mNN
0.9000   0.7000   0.8973   0.7236
0.1000   0.9000   0.0997   0.9006
0.9000   0.1000   0.9002   0.0998
0.3000   0.5000   0.3296   0.5356
0.5000   0.3000   0.5503   0.3231
0.3000   0.7000   0.3001   0.6999
0.7000   0.3000   0.7000   0.3001
0.7000   0.9000   0.7269   0.8882

Network2

- Increase number of neurons in hidden layer to 24
- Train until SSE < 10^-15

Test data1 results:

     b        m      bNN      mNN
0.9000   0.7000   0.8978   0.7102
0.1000   0.9000   0.0915   0.9314
0.9000   0.1000   0.9017   0.0920
0.3000   0.5000   0.2948   0.5799
0.5000   0.3000   0.4929   0.3575
0.3000   0.7000   0.3017   0.6934
0.7000   0.3000   0.6993   0.3032
0.7000   0.9000   0.7008   0.8976

Network3

- Add a second hidden layer with 24 neurons
- Train until SSE < 10^-18

Test data1 results:

     b        m      bNN      mNN
0.9000   0.7000   0.8882   0.7018
0.1000   0.9000   0.1171   0.9067
0.9000   0.1000   0.8918   0.1002
0.3000   0.5000   0.3080   0.5084
0.5000   0.3000   0.5033   0.3183
0.3000   0.7000   0.2937   0.6999
0.7000   0.3000   0.7039   0.2996
0.7000   0.9000   0.7026   0.9180

Network Design

- First add more neurons in each layer
- Add more hidden layers if necessary

Network4

- Use the six x values as additional inputs (total of 12 inputs)

Network4

Test data1 results:

     b        m      bNN      mNN
0.9000   0.7000   0.8972   0.7171
0.1000   0.9000   0.1016   0.9081
0.9000   0.1000   0.9002   0.0979
0.3000   0.5000   0.2940   0.5059
0.5000   0.3000   0.5115   0.3013
0.3000   0.7000   0.2996   0.6980
0.7000   0.3000   0.7000   0.3009
0.7000   0.9000   0.7108   0.8803

Network4

- Try with x values not in the training data (x = 0.1, 0.3, 0.45, 0.55, 0.7, 0.9):

     b        m      bNN      mNN
0.9000   0.7000   0.9148   0.3622
0.1000   0.9000   0.1676   0.4901
0.9000   0.1000   0.9169  -0.1488
0.3000   0.5000   0.3834   0.1632
0.5000   0.3000   0.5851   0.0109
0.3000   0.7000   0.3705   0.3338
0.7000   0.3000   0.7426   0.0139
0.7000   0.9000   0.7284   0.5163

- … sequential
- … possibility of solving the whole domain problem (Krejsa et al., 1999)

Training data

- … inputs/outputs
- Use forward solver to supply solutions to many standard problems (linear, constant, triangular heat flux inputs)

Ill-posedness?

- … data

- Radial basis function and cascade correlation networks offer a better possibility for solution of the whole domain problem than standard backpropagation networks

- … parameter estimation and inverse problems
- Proper design of the network and training set is essential for successful application

References

- M. Raudensky, K. A. Woodbury, J. Kral, and T. Brezina, "Genetic Algorithm in Solution of Inverse Heat Conduction Problems," Numerical Heat Transfer, Part B: Fundamentals, Vol. 28, No. 3, Oct.-Nov. 1995, pp. 293-306.
- J. Krejsa, K. A. Woodbury, J. D. Ratliff, and M. Raudensky, "Assessment of Strategies and Potential for Neural Networks in the IHCP," Inverse Problems in Engineering, Vol. 7, No. 3, pp. 197-213, 1999.
