

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 4, NO. 5, OCTOBER 1994

Image Compression Using Self-Organization Networks


Oscal T.-C. Chen, Member, IEEE, Bing J. Sheu, Senior Member, IEEE, and Wai-Chi Fang, Senior Member, IEEE
Abstract- A self-organization neural network architecture is used to implement vector quantization for image compression. A modified self-organization algorithm, which is based on a frequency-sensitive cost function and the centroid learning rule, is utilized to construct the codebooks. The performances of this frequency-sensitive self-organization network and of a conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results. Good adaptivity to different statistics of the source data can also be achieved.
Index Terms- Image processing, vector quantization, neural networks.

I. INTRODUCTION
IMAGE compression can be applied in the motion-picture industry and consumer electronics for high-definition TV, advanced multimedia computer systems, remote sensing via satellite, aircraft, radar, or sonar, teleconferencing, and movie-on-a-chip systems. Efficient compression of data significantly decreases both communication and archival costs [1], [2]. According to Shannon's rate-distortion theory, a better performance is always achievable by coding a block of signals instead of coding each signal individually [3], [4]. A vector is a k-dimensional ordered set of real numbers. The components of a vector represent signal samples or numerical values of certain parameters or features that have been extracted from an image. In the most direct application of vector quantization to image compression, a group of contiguous signal samples is blocked into a vector so that each vector describes a small portion of the original image. This leads to efficient exploitation of the correlation between samples within an individual vector. Vector quantization (VQ) is popular in image processing, speech processing, and facsimile transmission [2], [5]-[7]. It is capable of producing good-quality reconstructed images. The efficiency of a data compression scheme is measured by its compression ability, the resulting distortion, and the implementational complexity. Artificial neural network approaches appear to be very promising for intelligent information processing [8]-[15] due to their massively parallel computing structures and the use of learning to adapt the network parameters.
Manuscript received August 5, 1992; revised April 18, 1994. This paper was recommended by Sarah A. Rajala. O. T.-C. Chen and B. J. Sheu are with the Department of Electrical Engineering, Signal & Image Processing Institute, University of Southern California, Los Angeles, CA 90089-0271 USA. W.-C. Fang is with the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109-8099 USA. IEEE Log Number 9404954.

Fig. 1. The structure of the FSO neural network.

In this paper, a self-organization neural network architecture is used to implement the vector quantizer. An improved self-organization algorithm, which is based on a frequency-sensitive cost function and the centroid learning rule, is utilized to construct the VQ codebooks. This algorithm yields near-optimal results with very few iteration paths. A vector quantizer is adaptive if the codebook or the encoding rule is changed in time in order to match the local statistics of the input sequence. The proposed method of using one iteration path can provide a fairly good local vector quantizer with minimized computational complexity. The training source data can be a subset of an image frame, an individual image frame, or multiple image frames. The proposed adaptive vector quantizer based on the self-organization network is a forward adaptation method [16]: before source data with different statistics are encoded, the codebook is updated and transmitted to the decoder so that the decoder can correctly reproduce the current vectors.
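To make the block-coding structure concrete, the following minimal sketch (illustrative code, not from the paper; the random image and codebook merely stand in for real data) encodes an image by mapping each 5 x 5 subimage block to its nearest codevector and rebuilds the image from the resulting indices:

```python
import numpy as np

def encode_decode(image, codebook, block=5):
    """Encode an image by nearest-codevector lookup, then rebuild it.

    Each non-overlapping block x block patch is flattened into a
    k-dimensional vector (k = block * block) and replaced by the index
    of the codevector with minimum squared distortion.
    """
    h = image.shape[0] - image.shape[0] % block   # crop to a whole
    w = image.shape[1] - image.shape[1] % block   # number of blocks
    recon = np.zeros((h, w))
    indices = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            x = image[r:r + block, c:c + block].astype(float).ravel()
            i = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
            indices.append(i)
            recon[r:r + block, c:c + block] = codebook[i].reshape(block, block)
    return np.array(indices), recon

# 64 codevectors (a 6-bit codebook) of 5 x 5 = 25 components each.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(512, 512))
codebook = rng.uniform(0.0, 255.0, size=(64, 25))
indices, recon = encode_decode(image, codebook)
# Each 25-pixel block (200 bits at 8 bits/pixel) is sent as one 6-bit index.
```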
II. THE ALGORITHM

A. Self-Organization Learning

Artificial neural network approaches provide an effective alternative for solving complex information processing problems. The principle of constructing artificial neural networks comes from an understanding of the operation of neurons in biological brains. Neurons are placed in an orderly fashion and reflect some physical characteristics of the external stimulus. Although much of the low-level organization in the brain is genetically predetermined, it is likely that some of the high-level organization is created during learning, which promotes self-organization [11].




Fig. 2. Plots of the mean-squared error against the frequency threshold for the 512 x 512-pixel Couple image using the 1-path FSO method on 5 x 5-pixel subimage blocks. (a) 6-bit codebook; ATF = 163. (b) 7-bit codebook; ATF = 82. (c) 8-bit codebook; ATF = 41. (d) 9-bit codebook; ATF = 21. (e) 10-bit codebook; ATF = 11. (MSE: mean-squared error; ATF: average training frequency.)

A self-organization network consists of an input layer and an output layer, which is also called the competitive layer. The basic theory and operation of self-organizing neural networks were described by Grossberg [8]-[10], Kohonen [11], and other researchers [17]-[19]. One major challenge of using the basic self-organization network is that some of the neural units may be under-utilized. Various modifications have been proposed to solve this problem [8], [17], [20]. The proposed frequency-sensitive self-organization method applies the variable-threshold model from Grossberg [8]-[10] to overcome the under-utilization problem.

B. Frequency-Sensitive Self-Organization (FSO) Method


The neural network architecture based on frequency-sensitive self-organization for image compression is shown in Fig. 1.



Fig. 3. Plots of the mean-squared error against the frequency threshold for the 512 x 512-pixel Creek image using the 1-path FSO method on 5 x 5-pixel subimage blocks. (a) 6-bit codebook; ATF = 163. (b) 7-bit codebook; ATF = 82. (c) 8-bit codebook; ATF = 41. (d) 9-bit codebook; ATF = 21. (e) 10-bit codebook; ATF = 11. (MSE: mean-squared error; ATF: average training frequency.)

The FSO network consists of two layers: an input layer and a competitive layer. The input layer serves as a data buffer and distributes the input data Xp to the competitive layer. In the competitive layer, each node computes the distortion between its codevector Wi and the input vector. The competitive process is performed throughout the whole layer by the winner-take-all operation. The winning neural unit is determined according to the minimum-distortion criterion. The synaptic weights are then updated according to the FSO learning rule.

The FSO method systematically distributes the codevectors in the vector space R^k to approximate the unknown probability density function P(X) of the training vectors. It overcomes the under-utilization problem by including the frequency-sensitive cost function in the learning rule. The neural units are thereby modified with an approximately equal frequency.
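As a minimal sketch of this competitive process (illustrative code, not from the paper), the winner is simply the output unit whose codevector minimizes the squared distortion:

```python
import numpy as np

def winner_take_all(x, W):
    # Each row of W is one output unit's codevector; the winning unit
    # is the one with minimum squared distortion to the input vector x.
    distortions = np.sum((W - x) ** 2, axis=1)
    return int(np.argmin(distortions))

W = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
print(winner_take_all(np.array([0.9, 1.2]), W))   # unit 1 wins
```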


Fig. 4. Index histograms of 64 codevectors using the 1-path FSO method on 5 x 5-pixel subimage blocks. (a) 512 x 512-pixel Couple image with Fthd equal to 1; standard deviation = 83. (b) 512 x 512-pixel Couple image with Fthd equal to 163; standard deviation = 152. (c) 512 x 512-pixel Couple image with Fthd equal to infinity; standard deviation = 235. (d) 512 x 512-pixel Creek image with Fthd equal to 1; standard deviation = 75. (e) 512 x 512-pixel Creek image with Fthd equal to 163; standard deviation = 63. (f) 512 x 512-pixel Creek image with Fthd equal to infinity; standard deviation = 147.

The 1-path FSO scheme for adaptive quantization is described as follows:
1) Initialize the codevectors (synaptic vectors): Wi = Xi or Rand(i), i = 1, ..., M, with Rand being a random number generation function and Wi = [Wi1, Wi2, ..., Wik].
2) For input vector Xp, find the frequency-sensitive distortion FDi = d(Xp, Wi) for all output neural units:

FDi = (1 + Fi/Fthd) * sum_{j=1}^{k} (Xpj - Wij)^2,   (1)

where Fi is the frequency count of the codevector Wi and Fthd is the frequency threshold.


TABLE II
RECONSTRUCTION PERFORMANCE OF IMAGE COMPRESSION USING THE FSO METHOD AND THE LBG METHOD ON 5 x 5-PIXEL SUBIMAGE BLOCKS OF A 512 x 512-PIXEL COUPLE IMAGE
(SNR: signal-to-noise ratio; MSE: mean-squared error; Fthd: frequency threshold.)


TABLE III
RECONSTRUCTION PERFORMANCE OF IMAGE COMPRESSION USING THE FSO METHOD AND THE LBG METHOD ON 5 x 5-PIXEL SUBIMAGE BLOCKS OF A 512 x 512-PIXEL CREEK IMAGE

3) Select the output unit Ni with the smallest frequency-sensitive distortion, label it as the winner, and increase its winning frequency count Fi by one.
4) Update the winning weight vector Wi*(t) with the frequency-sensitive learning rule:

Wi*(t+1) = Wi*(t) + S(t)[X(t) - Wi*(t)],   (2)

where t is the iteration index and 0 <= S(t) <= 1. Notice that the learning rule moves the winning weight vector toward the input vector by the fractional amount S(t) = 1/Fi, which is a centroid learning ratio value.
5) Repeat steps 2) through 4) for all training vectors.

Use of the frequency-sensitive cost function can avoid under-utilization of some codevectors during the learning process when the initial codebook is inadequately chosen. The (1 + Fi/Fthd) term in (1) represents the frequency sensitivity of the cost function. In order to pursue a better performance, the selection of the frequency threshold Fthd is very important. If the Fthd value is equal to one, the distortion calculation in the 1-path FSO method is similar to that of the frequency-sensitive competitive learning method proposed by Krishnamurthy et al. [13]. If the value of Fthd is larger than the total number of source vectors, the cost function is not sensitive to the frequency.
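Taken together, steps 1)-5) translate into a short routine. The following is a minimal sketch, not the authors' implementation: the initialization from randomly chosen training vectors, the NumPy data layout, and the explicit centroid ratio S(t) = 1/Fi are assumptions made for illustration.

```python
import numpy as np

def fso_one_path(X, M, f_thd, seed=0):
    """1-path frequency-sensitive self-organization (sketch).

    X     : (P, k) array of training vectors
    M     : codebook size (number of output neural units)
    f_thd : frequency threshold Fthd of Eq. (1)
    """
    rng = np.random.default_rng(seed)
    # Step 1: initialize the codevectors from the training data (Wi = Xi).
    W = X[rng.choice(len(X), size=M, replace=False)].astype(float)
    F = np.zeros(M)                                   # frequency counts Fi
    for x in X:
        # Step 2: frequency-sensitive distortion of Eq. (1).
        fd = (1.0 + F / f_thd) * np.sum((W - x) ** 2, axis=1)
        # Step 3: winner-take-all under the minimum-distortion criterion.
        i = int(np.argmin(fd))
        F[i] += 1
        # Step 4: centroid learning rule of Eq. (2) with S(t) = 1/Fi.
        W[i] += (x - W[i]) / F[i]
    return W, F
```

Choosing f_thd = len(X) / M, the average training frequency, matches the operating point recommended below; f_thd = 1 and a very large f_thd reproduce the two extreme behaviors discussed in connection with Fig. 4.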


TABLE I
MEAN-SQUARED ERROR BETWEEN ORIGINAL AND RECONSTRUCTED RESULTS FOR THE COUPLE AND CREEK IMAGES BY USING THE n-PATH FSO METHOD

Fig. 5. Comparison for three different frequency thresholds, 163, 163n, and infinity, with the same convergence criterion. (a) 512 x 512-pixel Couple image using a 6-bit codebook. (b) 512 x 512-pixel Creek image using a 6-bit codebook.

Figs. 2 and 3 show the relationship between the reconstruction performance and the frequency threshold Fthd for the 512 x 512-pixel Couple and Creek images, respectively. A good frequency threshold is close to the average training frequency. In an example with 10,402 source vectors and a 6-bit codebook, the average training frequency for each codevector is 10,402/64, which rounds to 163. The selection of the Fthd value is related to the grouping of source vectors. The index histograms of the codevectors for different frequency threshold values are shown in Fig. 4, where the initial 6-bit codebook is sampled from the source data. The standard deviations of the codevector index frequencies for Fthd equal to 1, 163, and infinity in the Couple image are 89, 152, and 235, respectively. In general, if the value of Fthd is smaller than the average training frequency, the index histogram of the codevectors is close to the uniform distribution and the codevector grouping is very sensitive to the frequencies of the codevectors. Some dissimilar source vectors might be inappropriately assigned to one codevector. The reconstruction performance is the worst when the Fthd value is equal to one. On the other hand, if Fthd is much larger than the average training frequency, a diverse distribution will occur and the frequencies of some codevectors could be very small. When Fthd is equal to infinity, the reconstruction performance is also quite poor. Therefore, according to the computer analysis, choosing the frequency threshold Fthd to be the average training frequency yields a good performance.

In order to further improve the performance, a second path is used to adjust the codevectors into better cluster centroids. The codebook produced by the 1-path FSO method is used as a reference codebook. In Fig. 4(b), the frequencies of a few codevectors are very small or zero when the average training frequency is used for the frequency threshold. During the second path, the unused or least frequently used codevectors from the 1-path FSO method are deleted. Each highly used codevector is split into two codevectors. According to the number of unused or least frequently used codevectors, the corresponding number of highly used codevectors is used to generate the new codevectors. Here, the splitting scheme of the LBG method [6] is used: a highly used codevector Wi is split into Wi + d and Wi - d, where d is a small constant vector. The training process of the 2-path FSO method is otherwise the same as that of the 1-path FSO method.

In the LBG method, the initial codebook can come from the splitting algorithm [7]. First, the codevectors are used to partition the source data into subgroups. Then new codevectors are produced by calculating the centroids of these subgroups. The iteration of grouping and calculating centroids in the LBG method is similar to incrementally updating the closest codevector for each incoming datum through the centroid technique in the FSO method. If the frequency-sensitive term is not used in the cost function, and the scheme for generating new codevectors from the highly used codevectors is also not performed, then the result of the iterative FSO method asymptotically approximates that of the LBG method.
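The second-path codebook adjustment described above can be sketched as follows (illustrative code; the split offset delta and the frequency cutoff used to declare a codevector "least frequently used" are assumed parameters, not values given in the paper):

```python
import numpy as np

def refresh_codebook(W, F, delta=1.0, min_freq=1):
    """Delete under-used codevectors and split highly used ones (sketch).

    W, F     : codevectors and their winning-frequency counts
    delta    : magnitude of the small constant vector d in the LBG-style split
    min_freq : codevectors won fewer than this many times are deleted
    """
    dead = np.flatnonzero(F < min_freq)       # unused / least frequently used
    if dead.size == 0:
        return W.copy()
    # The most frequently used codevectors supply the replacements:
    busy = np.argsort(F)[::-1][:dead.size]
    W_new = W.copy()
    d = np.full(W.shape[1], delta)
    W_new[dead] = W[busy] - d                 # Wi - d fills a deleted slot
    W_new[busy] = W[busy] + d                 # Wi + d keeps the original slot
    return W_new
```

The refreshed codebook is then retrained exactly as in the 1-path method.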


Fig. 6. Image compression using the FSO method on 5 x 5-pixel subimage blocks. (a) Original Couple image; 512 x 512 pixels. (b) Reconstructed image using a 10-bit 1-path FSO codebook; MSE = 59.35. (c) Reconstructed image using a 10-bit 2-path FSO codebook; MSE = 51.89. (d) Reconstructed image using a 10-bit n-path FSO codebook; MSE = 46.79.

The learning process in the FSO method is repeated with the same termination criterion as in the LBG method, and the result of the n-path FSO method appears very close to that of the LBG method. The frequency-sensitive term is used to avoid under-utilization of some codevectors; after the first and second iterations, a good resultant codebook has already been generated. If the frequency-sensitive term is used in the n-path FSO method, it may generate a little disturbance in the grouping operation. The total distortion in each iteration then does not decay smoothly, and the method may need more iterations to achieve convergence. Fig. 5 shows the distortion in each iteration for three different frequency thresholds, 163, 163n, and infinity, with the same convergence criterion when a 6-bit codebook is used. The best performance is achieved when the frequency threshold is equal to infinity. This indicates that the frequency-sensitive term is not required in the n-path FSO method in order to optimize the computational complexity and reconstruction performance. Therefore, the n-path FSO scheme for vector quantization can be described as follows:
1) The codevectors after training in the 2-path FSO method are used as the initial codevectors. The initial value of n is set to 3.
Codebook Training:

2) According to the previous index histogram of the codevectors, the unused or least frequently used codevectors are deleted. By using the splitting method, the new codevectors are generated from the highly used codevectors.
3) The frequencies of all codevectors are reset to zero.
4) For input vector Xp, find the distortion Di = d(Xp, Wi) for all output neural units:

Di = sum_{j=1}^{k} (Xpj - Wij)^2.   (3)

5) Select the output unit Ni with the smallest distortion, label it as the winner, and increase its winning frequency count Fi by one.
6) Update the winning weight vector Wi*(t) with the frequency-sensitive learning rule:

Wi*(t+1) = Wi*(t) + S(t)[X(t) - Wi*(t)],   (4)

where t is the iteration index and 0 <= S(t) <= 1. The learning rule moves the winning weight vector toward the input vector by the fractional amount S(t) = 1/Fi, which is a centroid learning ratio value.
7) Repeat steps 4) through 6) for all training vectors.

Estimation of Convergence:
8) The total distortion TD^n is reset to zero.
9) For input vector Xp, find the distortion Di = d(Xp, Wi) for all output neural units.
10) Select the output unit Ni with the smallest distortion and add its distortion to the total distortion:

TD^n = TD^n + Di.   (5)

11) Repeat steps 9) through 10) for all source vectors.
12) The convergence of the n-path FSO method is determined by the following equation:

|TD^n - TD^(n-1)| / TD^n <= e,   (6)

where e is a convergence parameter. If the convergence criterion is matched, the n-path FSO method is finished. Otherwise, n is increased by one and steps 2) through 12) are performed again.
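A compact rendering of steps 2)-12) (again a sketch under the earlier assumptions; it reuses the refresh_codebook routine from the 2-path sketch above) shows how the paths iterate until the relative change of the total distortion satisfies Eq. (6):

```python
import numpy as np

def fso_n_path(X, W2, F2, eps=0.0005, max_paths=50):
    """n-path FSO (sketch): start from the 2-path codebook W2, F2."""
    W, F = W2.astype(float).copy(), F2.copy()
    td_prev = np.inf
    for n in range(3, max_paths + 1):
        W = refresh_codebook(W, F)             # step 2): delete/split using the
        F = np.zeros(len(W))                   # old histogram; step 3): reset Fi
        for x in X:                            # steps 4)-7): plain distortion,
            d = np.sum((W - x) ** 2, axis=1)   # no frequency-sensitive term
            i = int(np.argmin(d))
            F[i] += 1
            W[i] += (x - W[i]) / F[i]          # centroid learning, S(t) = 1/Fi
        td = 0.0                               # steps 8)-11): total distortion
        for x in X:
            td += np.min(np.sum((W - x) ** 2, axis=1))
        if abs(td - td_prev) / td <= eps:      # step 12): Eq. (6)
            break
        td_prev = td
    return W
```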


Fig. 7. Image compression using the FSO method on 5 x 5-pixel subimage blocks. (a) Original Creek image; 512 x 512 pixels. (b) Reconstructed image using a 10-bit 1-path FSO codebook; MSE = 161.76. (c) Reconstructed image using a 10-bit 2-path FSO codebook; MSE = 156.62. (d) Reconstructed image using a 10-bit n-path FSO codebook; MSE = 150.73.

Table I lists the performance of the Couple and Creek images at each path step before reaching the termination criterion of the n-path FSO method. For the 10-bit codebook size, the values of n are 14 and 7 for the Couple and Creek images, respectively. Here, e is chosen to be 0.0005 for the cases listed in Table I.

The large dynamic range of images requires that an effective compression algorithm be adaptive to the local image frame statistics. For the vector quantization approach, edge degradation is very severe if no adaptation is allowed for different scene characteristics. In order to simplify the computational complexity, the 1-path FSO method can be used as an adaptive method. The adaptivity of the 1-path FSO method is a forward adaptation [16] in which the current block information is extracted from the future of the vector sequence. A new codebook can be trained from the next subset of an image frame, an individual image frame, or multiple image frames. The trained codebook completely replaces the old one and must be transmitted to the decoder before the source data with different statistics are reproduced. While an image sequence is encoded, the codebook can be updated periodically or updated by a criterion determined according to the reconstruction performance. Please note that a slightly modified version of the n-path FSO method can be considered, as suggested by an anonymous reviewer of this paper. Although the simulation results in Fig. 5 indicate that increasing the threshold value linearly with the iteration number is inadequate, a faster increase might be suitable. No data on this aspect is presented due to the ad-hoc nature of such a variant of our proposed method.
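The forward-adaptation protocol can be sketched in a few lines (illustrative only; to_blocks is a hypothetical helper that rasterizes a frame into block vectors, as in the loop body of the earlier encode_decode sketch, and fso_one_path and winner_take_all are the sketches given above):

```python
def encode_sequence(frames, M=1024, block=5):
    # Forward adaptation (sketch): for every frame, train a fresh
    # codebook on the frame's own blocks with one FSO path, transmit
    # the codebook first, and then transmit the block indices.
    for frame in frames:
        X = to_blocks(frame, block)        # hypothetical helper: frame -> (P, 25) vectors
        f_thd = len(X) / M                 # average training frequency
        W, _ = fso_one_path(X, M, f_thd)   # 1-path FSO sketch from above
        indices = [winner_take_all(x, W) for x in X]
        yield W, indices                   # decoder receives the codebook before the indices
```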

III. SYSTEM SIMULATION


In the computer analysis, the original and reconstructed Couple images using the 1-path, 2-path, and n-path FSO methods for the 10-bit codebook are shown in Fig. 6. The mean-squared error (MSE) measure is used to evaluate the reconstructed image quality:

MSE = (1 / (N1 * N2)) * sum_{i=1}^{N1} sum_{j=1}^{N2} [I(i, j) - I'(i, j)]^2,

where I is the original image of size N1 x N2 and I' is the reconstructed image. Table II lists the mean-squared errors and computational times of images using the 1-path, 2-path, and n-path FSO methods, as well as the LBG method. Similar simulation results for the Creek image are shown in Fig. 7 and listed in Table III. For the n-path FSO method, the performance of the Couple and Creek images at each path step is illustrated in Table I. The results from the n-path FSO method are very close to those from the LBG method with the same convergence criterion. In some selected cases, the performances of the reconstructed images from the n-path FSO method can be better than those from the LBG method; the 7-bit and 8-bit reconstructed Couple images and the 9-bit and 10-bit reconstructed Creek images are such cases. The reconstructed images using the proposed method on 5 x 5-pixel subimage blocks are reasonably good.

In order to maintain the fidelity, the codebook is to be updated according to the image frame statistics. If the 10-bit codebook generated from the 1-path FSO method for the Couple image is used to encode and decode the Creek image without any modification, the mean-squared error is up to 258, as shown in Fig. 8. The bridge in this reconstructed Creek image is quite blurred. After this poor codebook is adapted by using the 1-path FSO method, a better reconstructed image, as shown in Fig. 7(b), can be achieved with a mean-squared error of 162. This result illustrates that the codebook needs to be adjusted according to the statistics change of the source data. In the 1-path FSO method, the codebook is trained by the source data with one iterative path; each source vector is used to update the corresponding codevector only one time. In the LBG method, many iterative paths for codebook training are required. According to the results of Tables II and III, the computational complexity of the 1-path FSO method is close to that of one iterative path of the LBG method. Therefore, the proposed 1-path FSO method can achieve a very good adaptivity.

Fig. 8. Reconstructed Creek image using a 10-bit FSO codebook from the Couple image; MSE = 258.

IV. CONCLUSION

An improved frequency-sensitive self-organization method is described. The 1-path FSO method has adaptivity and low computational complexity as compared with the classic LBG method. Near-optimal results can also be achieved by using the n-path FSO method. The algorithm and architecture of the artificial neural network approach, and simulation results on image data compression, have been presented. The proposed method can achieve a good reconstruction performance and a fair compression ratio.

ACKNOWLEDGMENT

Valuable comments and suggestions from the reviewers are highly appreciated.

REFERENCES

[1] A. Netravali and F. Mounts, "Ordering techniques for facsimile coding: A review," Proc. IEEE, vol. 68, pp. 796-807, July 1980.
[2] N. M. Nasrabadi and R. A. King, "Image coding using vector quantization: A review," IEEE Trans. Commun., vol. 36, no. 8, pp. 957-971, Aug. 1988.
[3] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379-423 and 623-656, 1948.
[4] R. M. Gray, Source Coding Theory. Boston, MA: Kluwer Academic, 1990.
[5] A. Gersho, "On the structure of vector quantizers," IEEE Trans. Inform. Theory, vol. 28, no. 2, pp. 157-162, Mar. 1982.
[6] R. M. Gray, "Vector quantization," IEEE ASSP Mag., pp. 4-29, Apr. 1984.
[7] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Commun., vol. COM-28, no. 1, pp. 84-95, Jan. 1980.
[8] S. Grossberg, "Competitive learning: From interactive activation to adaptive resonance," Cognitive Sci., vol. 11, pp. 23-63, 1987.
[9] S. Grossberg, "Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors," Biological Cybernetics, vol. 23, pp. 121-134, 1976.
[10] S. Grossberg, "Adaptive pattern classification and universal recoding: II. Feedback, expectation, olfaction, illusions," Biological Cybernetics, vol. 23, pp. 187-202, 1976.
[11] T. Kohonen, Self-Organization and Associative Memory, 2nd ed. New York: Springer-Verlag, 1988.
[12] N. M. Nasrabadi and Y. Feng, "Vector quantization of images based upon the Kohonen self-organizing feature maps," in Proc. 1988 Int. Joint Conf. Neural Networks, vol. I, San Diego, CA, June 1988, pp. 101-108.
[13] A. K. Krishnamurthy, S. C. Ahalt, D. E. Melton, and P. Chen, "Neural networks for vector quantization of speech and images," IEEE J. Select. Areas Commun., vol. 8, no. 8, pp. 1449-1457, Oct. 1990.
[14] C. Lu and Y. Shin, "A neural network based image compression system," IEEE Trans. Consumer Electron., vol. 38, no. 1, pp. 25-29, Feb. 1992.
[15] O. T.-C. Chen, B. J. Sheu, and W.-C. Fang, "Adaptive vector quantizer for image compression using self-organization approach," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 2, Mar. 1992, pp. 385-388.
[16] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Boston, MA: Kluwer Academic, 1992.
[17] R. Hecht-Nielsen, "Application of counterpropagation networks," Neural Networks, vol. 1, no. 2, pp. 131-141, 1988.
[18] B. Kosko, "Stochastic competitive learning," in Proc. 1990 Int. Joint Conf. Neural Networks, vol. II, San Diego, CA, June 1990, pp. 215-226.
[19] D. Rumelhart and D. Zipser, "Feature discovery by competitive learning," Cognitive Sci., vol. 9, pp. 75-112, 1985.
[20] D. DeSieno, "Adding a conscience to competitive learning," in Proc. IEEE Int. Conf. Neural Networks, vol. I, San Diego, CA, June 1988, pp. 117-124.


Oscal T.-C. Chen was born in Taiwan in 1965. He received the B.S. degree in electrical engineering from National Taiwan University, Taipei, in 1987, and the M.S. and Ph.D. degrees in electrical engineering from the University of Southern California in 1990 and 1994, respectively.
Mr. Chen served as the President of the USC Engineering Graduate Student Association from September 1993 to August 1994. At USC, he was a graduate research assistant in the VLSI Signal Processing Laboratory, where he also helped manage the computing facility. He has participated in many research topics, including data compression, image analysis, VLSI and optical interconnects, neural network learning methods, and biologically inspired neural network models. He was a teaching assistant for two graduate-level courses in image processing and data compression in the Summer 1991 and Fall 1992 semesters. He was the recipient of the 1994 Overseas Chinese Outstanding Youth Award and the 1994 USC Leadership Award. He has co-authored 18 papers in international scientific journals and conferences. He serves on the Technical Program Committee of the 1994 IEEE International Conference on Computer Design in the Architectures and Algorithms Track. He is a member of the IEEE.


Bing J. Sheu was born in Taiwan in 1955. He received the B.S.E.E. degree (Honors) in 1978 from the National Taiwan University, and the M.S. and Ph.D. degrees in electrical engineering from the University of California, Berkeley, in 1983 and 1985, respectively.
At National Taiwan University, he was the recipient of the Distinguished Book-Coupon Award seven times. In 1981, he was involved in custom VLSI design for a speech recognition system at Threshold Technology, Inc., Cupertino, CA. From 1981 to 1982, he was a Teaching Assistant in the EECS Department at Berkeley. From 1982 to 1985, he was a Research Assistant in the Electronics Research Laboratory at Berkeley, working on digital and analog VLSI circuits for signal processing. In 1985, he joined the faculty of the Electrical Engineering Department at the University of Southern California, where he is currently an Associate Professor. He has been an active researcher in several research organizations at USC, including the Signal and Image Processing Institute (SIPI), the Center for Neural Engineering (CNE), the Institute for Robotics and Intelligent Systems (IRIS), and the Center for Photonic Technology (CPT). He serves as Director of the VLSI and Signal Processing Laboratory. Since 1983, he has served as a consultant to the microelectronics and information processing industry. His research interests include high-speed VLSI, massively parallel neural networks and image processing, transceivers for portable communication systems, and optoelectronic interconnects and computing. He is an Honorary Consulting Professor at National Chiao Tung University, Hsin-Chu, Taiwan.
Dr. Sheu was a recipient of the 1987 NSF Engineering Initiation Award and, at Berkeley, the Tse-Wei Liu Memorial Fellowship and the Stanley M. Tasheira Scholarship Award. He was also a recipient of the Best Presenter Award at the IEEE International Conference on Computer Design in both 1990 and 1991. He has published more than 140 papers in international scientific and technical journals and conferences, and he is a coauthor of the books Hardware Annealing in Analog VLSI Neurocomputing (1991) and Neural Information Processing and VLSI (1994), both from Kluwer Academic Publishers. He served on the Technical Program Committee of the IEEE Custom Integrated Circuits Conference. He served as a Guest Editor on custom VLSI technologies for the IEEE Journal of Solid-State Circuits for the March 1992 and March 1993 Special Issues, and as a Guest Editor on computer technologies for the IEEE Transactions on VLSI Systems for the June 1993 Special Issue. He is on the Technical Program Committees of the IEEE International Conference on Neural Networks, the International Conference on Computer Design, and the International Symposium on Circuits and Systems. At present, he serves as an Associate Editor of the IEEE Transactions on VLSI Systems, an Associate Editor of the IEEE Transactions on Neural Networks, and an Associate Editor of the IEEE Circuits and Devices Magazine. He also serves on the editorial boards of the Journal of Analog Integrated Circuits and Signal Processing, Kluwer Press, and the Neurocomputing Journal, Elsevier Press. He serves as the Tutorials Chair of the 1995 IEEE International Symposium on Circuits and Systems. He is among the key contributors of the widely used BSIM model in the SPICE circuit simulator. He is a Senior Member of the IEEE, a member of the International Neural Networks Society, Eta Kappa Nu, and the Phi Tau Phi Honorary Scholastic Society.

Wai-Chi Fang was born in Taiwan in 1956. He received the B.S. degree in electronics engineering in 1978 from the National Chiao-Tung University, and the M.S. degree in electrical engineering from the State University of New York at Stony Brook in 1982. He received the Engineer and Ph.D. degrees in electrical engineering from the University of Southern California, Los Angeles, CA, in 1987 and 1992, respectively.
Dr. Fang has an excellent research record, with numerous high-quality research papers in a broad research area ranging from parallel algorithms to the design of VLSI signal processing systems. He has received several NASA Monterey Awards for his novel contributions in the VLSI signal/image processing area. Dr. Fang has been a very active member of the IEEE Signal Processing Society VLSI for Signal Processing Technical Committee. He has also been a technical committee member of the IEEE International ASIC Conference and Exhibit and the IEEE International Conference on Computer Design (ICCD). Dr. Fang has contributed significantly to many key aspects of the VLSI implementation of image and video compression systems, digital neurocomputing, and systolic array-based image processing. His work has extended the state of the art in signal/image processing hardware technology. For example, at Terminal Data Corporation he developed a systolic image processor for a document image management system. During the past nine years at the Jet Propulsion Laboratory, California Institute of Technology, he has served as an engineer and task leader in artificial neural networks for image processing, focal-plane morphological image processors, data compression techniques for radar imaging systems, VLSI communication processors for the space-flight parallel computer, and a simulated area weapon effect system. He has been engaged in the architecture and design of high-performance computing and signal/image processing systems and is currently a key researcher in two main projects on satellite image data compression for the Cassini Titan Radar Mapper and the Mars Environmental Survey Network.
