
Content Addressable Memory

Scott Blunsden

Motivation
Reproduce the example in the course where parts of a face can be used to recover the whole face.

ADCS 315 Neural Computation 1


System
- Hopfield network with all-to-all connections, synchronously updated.
- Images can be fitted to different sizes of Hopfield network.
- Black and white images are used (converted from greyscale); 40 images in total, with the number of learnt images increased step by step.
- The aim is to reconstruct faces: restoration is attempted for the last image learnt, the original image, parts of the original image, and an unlearnt image.
- Tested at several sizes of network.
- The original (whole) image is compared to the result of restoration; the error is measured as a percentage, where each node can differ by at most 1. The comparisons are graphed (a code sketch of this setup follows the list).
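A minimal sketch of this setup in Python/numpy is given below. It is illustrative only, not the code behind these slides: it assumes patterns are coded as +1/-1 vectors, uses Hebbian (outer-product) learning, and runs a fixed number of synchronous update steps; the names binarize, train_hopfield, recall and percent_error are hypothetical.

    import numpy as np

    def binarize(grey_image, threshold=128):
        # Convert a greyscale image (0-255) into a flattened +1/-1 black-and-white pattern.
        return np.where(np.asarray(grey_image) >= threshold, 1, -1).ravel()

    def train_hopfield(patterns):
        # Hebbian (outer-product) learning for an all-to-all Hopfield network.
        n = patterns[0].size
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)          # no self-connections
        return W / len(patterns)

    def recall(W, state, max_steps=20):
        # Synchronous update: every node is recomputed at once from the previous state.
        for _ in range(max_steps):
            new_state = np.where(W @ state >= 0, 1, -1)
            if np.array_equal(new_state, state):   # reached a fixed point
                break
            state = new_state
        return state

    def percent_error(original, restored):
        # Each node can differ by at most 1, so the error is the fraction of
        # mismatched nodes expressed as a percentage.
        return 100.0 * np.mean(original != restored)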

[Figure: the original image and its reconstructions by 10 x 10, 20 x 20, 30 x 30, and 40 x 40 networks]


Results
As the number of images increases, so does the recognition error, which levels off at around 20%. The other network sizes show a similar pattern (an illustrative sweep is sketched after the chart).
[Chart: Hopfield, all-to-all, 40 x 40; error % against the number of learnt images (1-40) for the original image, part image 1, part image 2, and an unlearnt image]
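As an illustration of how a curve like this could be produced, the loop below learns an increasing number of images and measures the restoration error each time. It reuses the hypothetical train_hopfield, recall and percent_error helpers sketched earlier; images is assumed to be a list of +1/-1 patterns (e.g. 40 x 40 faces), and only one probe type is shown.

    errors = []
    for num_images in range(1, len(images) + 1):
        W = train_hopfield(images[:num_images])
        probe = images[0]                      # e.g. the original (whole) first image
        restored = recall(W, probe.copy())
        errors.append(percent_error(probe, restored))
    # errors can then be plotted against the number of learnt images; repeating the
    # loop with part images or an unlearnt image as the probe gives the other curves.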



Results (2)
With a 14% error the result seems reasonable, but it does not bear much resemblance to the original.
The network forms a composite of all the images (10) learnt so far. Because all the images show similar features the error never increases dramatically: the network has settled into a parasitic fixed point.
[Figure: result of attempting restoration after learning 10 images (original vs. restored)]

This raises the question of how such restorations should be measured.

Learning Noisy Images
An experiment using noise: noise is added (from 0 to 100%) and the network learns the noisy image. At around 60% noise the network learns the inverse of the image instead (a sketch of the noise step follows the chart).

[Chart: error % against noise percentage (0-100%) when learning noisy images]
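A sketch of the noise step, under the same assumptions as the earlier code: a chosen percentage of nodes is flipped before the noisy pattern is learnt (the name add_noise is hypothetical).

    import numpy as np

    def add_noise(pattern, noise_percent, rng=None):
        # Flip a given percentage of the nodes in a +1/-1 pattern.
        rng = rng or np.random.default_rng()
        noisy = pattern.copy()
        num_flips = int(round(noise_percent / 100.0 * pattern.size))
        idx = rng.choice(pattern.size, size=num_flips, replace=False)
        noisy[idx] *= -1
        return noisy

Note that a zero-threshold Hopfield network treats a pattern and its inverse symmetrically (if x is a stable state then so is -x), which is consistent with the inverse being recovered once well over half of the nodes have been flipped.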



Conclusions
- Networks reach capacity quickly: theory tells us a network can store about 0.15 x N patterns, where N is the number of nodes. The capacity is less in this case because the patterns are similar (a back-of-envelope check follows this list).
- A fast machine with lots of memory is needed: a 40 x 40 run took about 20 minutes to learn 40 images.
- The networks are very memory hungry: on a 256 MB machine the maximum network size is 50 x 50 when it is connected all-to-all.
- Other architectures such as topological networks can be used (network sizes greater than 100 x 100 are possible with a connection radius of 5), but topological networks have their own problems.
- Accuracy is hard to measure with a computer.
- Once the network starts to make errors it deteriorates very quickly (parasitic fixed point).
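A back-of-envelope check of these numbers, assuming 8-byte floating-point weights (the slides do not state how the weights were stored, so the memory figures are only indicative):

    # Theoretical capacity (0.15 * N patterns) and all-to-all weight-matrix size.
    for side in (10, 20, 30, 40, 50, 100):
        n = side * side                        # number of nodes
        capacity = 0.15 * n                    # approx. number of storable patterns
        weight_mb = n * n * 8 / (1024 ** 2)    # N x N weight matrix, 8 bytes per weight
        print(f"{side}x{side}: ~{capacity:.0f} patterns, weights ~{weight_mb:.1f} MB")

On this estimate a 50 x 50 network needs roughly 48 MB for its weights, while a fully connected 100 x 100 network would need around 760 MB, which is consistent with the 256 MB limit reported above.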
[Chart: Network Capacity; error % against the number of learnt images for 10 x 10, 20 x 20, 30 x 30, and 40 x 40 networks]

But the approach does enable reconstruction of the original image using only parts of that image.


