
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 21, NO. 2, FEBRUARY 2013

AC-Plus Scan Methodology for Small Delay Testing and Characterization


Tsung-Yeh Li, Shi-Yu Huang, Member, IEEE, Hsuan-Jung Hsu, Chao-Wen Tzeng, Chih-Tsun Huang, Member, IEEE, Jing-Jia Liou, Member, IEEE, Hsi-Pin Ma, Member, IEEE, Po-Chiun Huang, Member, IEEE, Jenn-Chyou Bor, Ching-Cheng Tien, Chih-Hu Wang, and Cheng-Wen Wu, Fellow, IEEE

Abstract—Small delay defects escaping traditional delay testing could cause a device to malfunction in the field, and thus detecting these defects is often necessary. To address this issue, we propose three test modes in a new methodology called AC-plus scan, in which versatile test clocks can be generated on the chip by embedding an all-digital phase-locked loop (ADPLL) into the circuit under test (CUT). AC-plus scan can be executed on an in-house wireless test platform called the HOY system. The first test mode of our AC-plus scan provides a more efficient way to measure the longest path delay associated with each test pattern. Experimental results show that our method reduces the test time by 81.8%. The second test mode is designed for volume production test. It can effectively detect small delay defects and provide fast characterization of defective chips for further processing. This mode can be used to help predict which chips are more likely to fall victim to operational failure in the field. The third test mode extracts the waveform of each flip-flop's output in a real chip. This is made possible by taking advantage of the almost unlimited test memory our HOY test platform provides, so that we can easily store a great volume of data and reconstruct the waveform for post-silicon debugging. We have successfully fabricated a Viterbi decoder chip with this AC-plus scan methodology inside to demonstrate its capability.

Index Terms—AC scan, characterization, delay testing, small delay defect.

I. INTRODUCTION

AS PROCESS dimensions continue to shrink, delay testing becomes more and more important for achieving satisfactory product quality. There are many factors in a chip that could produce delay variation. For instance, random dopant fluctuation could cause threshold voltage mismatch between transistors and result in significant delay variation. The effect could worsen with process scaling because the number of dopant atoms in the channel reduces with effective channel
Manuscript received January 26, 2011; revised July 26, 2011; accepted January 30, 2012. Date of publication March 08, 2012; date of current version January 17, 2013. This work was supported in part by the HOY Project sponsored by the Ministry of Economic Affairs of Taiwan (MOEA) under Grant 96-EC-17-A-01-S1-002, and in part by the National Science Council of Taiwan (NSC) under Grant NSC 98-2220-E-007-033. T.-Y. Li, S.-Y. Huang, H.-J. Hsu, C.-W. Tzeng, J.-J. Liou, H.-P. Ma, P.-C. Huang, J.-C. Bor, and C.-W. Wu are with the Electrical Engineering Department, National Tsing Hua University, Hsinchu 30013, Taiwan (e-mail: syhuang@ee.nthu.edu.tw). C.-T. Huang is with the Computer Science Department, National Tsing Hua University, Hsinchu 30013, Taiwan. C.-C. Tien and C.-H. Wang are with the Electrical Engineering Department, Chung Hua University, Hsinchu 30013, Taiwan. Digital Object Identifier 10.1109/TVLSI.2012.2187223

length [1]. Another factor is photolithography. As the feature size shrinks, the limitation of wavelength begins to emerge. Using ultra-violet light with a wavelength longer than the transistor's gate length can cause the shapes of metal lines and gate lengths to vary from die to die and wafer to wafer, thereby causing delay variations [2].

One of the emerging concerns regarding delay testing is the small delay defect. In the past, an additional small delay in a circuit would not cause the circuit to fail when the operating frequency was low. Nevertheless, when the operating frequency is high, a used-to-be trivial delay can become influential and cause timing failures. Besides, research has shown that the occurrence of small delay defects is likely to increase with advancing process technology [3].

There are two major strategies for delay testing. One is the at-speed functional test, which uses functional patterns to test the chips at the target operating frequency. Even though this is an effective method for delay testing and does not have the overkill problem (i.e., the mistake of misclassifying defect-free chips as failing ones), the growing gate count in the circuit under test (CUT) makes it harder to develop high-quality functional tests. For a new microprocessor, it was reported that three man-years might be needed just to complete the functional test set [4]. The other delay testing method is the at-speed scan test, also known as AC scan test.

There are mainly two types of fault models for generating AC scan test patterns. One of them is the path delay fault model [5]. A path delay fault considers the accumulative delay along a structural path. Therefore, if we could achieve high coverage for path delay faults, we could effectively detect small delay defects. However, since the number of paths grows exponentially with circuit size, it is often not feasible to consider every path in the circuit. The other fault model is the transition fault model [6], which has been widely used in industry. It considers every gate in a circuit for a slow-to-rise and a slow-to-fall delay fault. For traditional transition-fault pattern generation, once a fault is propagated to an output, it is considered detectable. Due to the unequal path lengths within a circuit, each path has a different timing slack for the same test clock period. Since a delay fault cannot be detected unless it causes a path delay exceeding the test clock period, a small delay defect located on a short path with a larger slack could escape detection. Therefore, some studies [7]–[9] have tried to solve this problem by choosing better paths to propagate the transition faults. In [8], the authors combined the information from standard delay format (SDF) files into the ATPG tool. Then, they generated the



transition fault patterns by propagating the faults through the longest paths. In [9], Yilmaz et al. proposed a metric called output deviation to guide the test pattern generation process. Even though these approaches can solve the problem of small-delay-defect detection to some extent, they do not provide adequate resolution.

In turn, many other studies [10]–[15], [18], [19], [37], [38] have tried to improve the quality of delay testing from the perspective of how the test is applied. In [10], Xiong et al. proposed to use a test frequency slightly higher than the target operating frequency to increase the defect coverage. They defined the additional performance requirement as the test margin, and proposed a formula to calculate the test margin that maximizes the yield while staying within a shipped product quality loss (SPQL) limit. Liou et al. [11] used two test pattern sets for delay testing. First, they test the chip at tighter frequencies with one test pattern set. If the chip fails, they then test the chip with both test pattern sets at the target operating frequency. Otherwise, the chip is considered to have passed the test. In [12], Mitra et al. proposed to use an on-chip process monitor structure for delay defect screening. Only chips whose embedded ring oscillator produces a frequency higher than a threshold are considered normally functioning chips. With this second test criterion, some marginal chips that might have been test escapes can be screened. However, it introduces the risk of overkill as well. Also, it cannot detect spot defects (i.e., defects that might have affected some part of the circuit but not the ring oscillator). In [13], Putman and Gawde proposed a method to overcome the shortcoming of transition fault testing by dividing test patterns into different bins. Each test pattern is classified according to its longest path length, and patterns with similar lengths are categorized into the same bin. After that, the authors test each bin with a specific test frequency to take into account the different slacks of different transition fault patterns. In [14], Tayade and Sundereswaran focused on interconnect defects and proposed a new path selection algorithm for small-delay-defect detection. By calculating the variance of each path, this method chooses those paths with lower delay variance for better detection of small delay defects.

In addition to the above methods, parametric test [15]–[17] has been touted as another paradigm for dealing with small delay defects. Unlike a marginal pass/fail test, where a certain margin is decided in advance to accommodate the process variation and a device under test is then classified with a binary pass-or-fail decision, a parametric test often resorts to some kind of measurement or characterization technique to collect information from a large number of devices first, which is further analyzed by some statistical approach (such as outlier analysis). Those parts whose characteristics strongly deviate from the rest of the parts are classified as faulty. In general, parametric test can take process variation into account in a more natural way, at the cost of longer test time. The proposed method can be viewed as a general supporting technique for both types of methods, depending on the budget of the test time.

In [18] and [19], the authors applied the test patterns at increasingly higher test frequencies until the chip under test (CUT) fails.
Each test pattern is characterized by the frequency at which it fails, called the failing frequency. The combination of the failing frequencies of all test patterns forms a profile, called the

failing frequency signature (FFS). The small delay test is then simplified to signature comparison. A chip whose signature deviates from the normal region is considered failing. This method can discriminate chips with random delay defects from those with process variation. However, it still has one major drawback: the entire process is too time-consuming. The authors recommended this method be used only for characterization or analysis of No Trouble Found devices (representing devices that fail in the field but for which no trouble can be found in the normal testing session) [20], [21].

In our AC-plus scan methodology, we aim to enhance both the quality and the efficiency of small-delay-defect testing and characterization. To this end, we incorporate an in-house all-digital phase-locked loop (ADPLL) inside the CUT so that it can provide a wide range of test clock frequencies for the AC-scan operation. With such an infrastructure in place, we can perform three test modes, namely: 1) delay measurement; 2) adaptive-frequency test; and 3) waveform extraction. The advantages of this AC-plus scan methodology are multiple. First, with our delay measurement, we can speed up the failing frequency signature (FFS) analysis proposed in [19] significantly. This is mainly for the characterization and/or diagnosis purpose. Second, we can gauge how marginal a chip is with respect to a target frequency via a so-called adaptive-frequency test mode. In this test mode, a test pattern is applied under a number of different test frequencies, e.g., the target frequency, the contour frequency, and the middle frequency, to be defined later. Since each test pattern may have a distinct delay, we adapt the test frequency from one test pattern to another as well. This test mode introduces only a very modest test time overhead over the traditional AC scan, and thus it is efficient enough to be used for volume production test. Third, we can extract the waveform of any selected flip-flop under any given test pattern, for the silicon debugging purpose.

There is also one issue that remains relatively less addressed in previous works; that is, how to deal with those chips that have been identified with small delay defects. These chips may still pass the test at the target frequency. However, they are more likely to fail whenever the operating condition in the field changes, and thus become no-trouble-found (NTF) devices. With the small-delay-defect detection capability in our methodology, we also propose some post-processing heuristics to try to reduce the possibility of NTF devices.

The rest of this paper is organized as follows. In Section II, we first introduce our AC-plus scan test structure and the HOY test platform. In Section III, we describe the first test mode, used for delay measurement. Section IV describes the second test mode, used for volume production test. In Section V, we describe a waveform extraction method for post-silicon debug. In Section VI, we describe a fabricated chip embedded with AC-plus scan. Section VII shows some experimental results, and Section VIII summarizes our contributions.

II. PRELIMINARY

A. AC-Plus Scan

With the increasing operating frequency, providing the high-speed clock from the external interface is becoming more and

more difficult. Therefore, an on-chip PLL is often necessary to provide the high-speed test clock [4]. For example, in our in-house wireless HOY test platform [26], we employ an on-chip ADPLL to generate a wide range of test frequencies for delay testing in our AC-plus scan architecture, as shown in Fig. 1. AC-plus scan also requires a clock pulse controller to generate the required test clock pulses. Several works [22]–[25] have presented different designs for the clock pulse controller. A clock pulse controller can be configurable; it can change the output pulses depending on the test patterns and schemes.

Fig. 1. Architecture of AC-plus scan.

B. HOY Test Platform

The HOY test platform [26] employs a regular PC as the tester, which is much cheaper and easier to configure and program. A HOY tester communicates with the chip under test via a wireless test channel, as depicted in Fig. 2. To enable wireless testing, a communication module and a HOY test wrapper must be integrated into the chip under test to support this type of wireless protocol-based testing [35].

Fig. 2. Architecture of HOY test platform.

III. PER-PATTERN DELAY MEASUREMENT

For measuring the delay of a test pattern, [19] proposed a sweeping frequency method. For each pattern, the test frequency starts from a low value and incrementally increases until the test pattern fails. By that, one can record the failing frequency of each pattern, which also indirectly implies the longest path delay of the respective pattern. In this paper, we call this delay measurement rather than collecting the failing frequency.

It is worth mentioning that the HOY test platform provides a higher degree of controllability over the test session than a traditional ATE, since it incorporates a packet-based communication channel. That is, one can easily use conditional statements (e.g., if-then-else) or iterative statements (e.g., for-loops) in writing the test program. Unlike ATE-based testing, there is no hurry in producing the cycle-based waveform inside the pin electronics every test clock cycle, so one can perform a what-if type of test procedure, in which one can wait to see the response of the CUT before deciding the next test patterns or test conditions (e.g., the test frequencies). This is especially advantageous in our AC-plus scan methodology in that we can adapt the test frequency conditionally at any moment based on the previous test responses of the CUT. There is time overhead in configuring the on-chip ADPLL to produce a desired clock frequency, but it is modest.

A. Overall Flow of Per-Pattern Delay Measurement

The procedure of our proposed delay measurement is described in Algorithm 1. First of all, we use the nominal delay plus 3 to 6 standard deviations as the starting test clock period for delay measurement. We refer to the added number of standard deviations as the measurement confidence. Note that this is a stringent test condition since this test frequency is much higher than the target frequency. The result is either pass or fail. Consider the first condition, when the pattern fails. Then we retest this pattern with a more relaxed clock frequency, as in [19]. On the other hand, if the pattern passes the first test clock period, then we gradually reduce the test clock period until the pattern fails, and eventually we obtain the longest path delay of this pattern.

Before we start the test, we have to decide the nominal delay of each pattern. We can do this by computer simulation or through a characterization process taking a small volume of chips as samples. If we want to obtain the data by characterization, we can start with the measured result of the first chip as the temporary nominal delay. Then we apply Algorithm 1 with a doubled measurement confidence, because this chip could be on either side of the distribution. After we get the measured result from the second chip using the temporary nominal delay, we calculate the average of the measured results as the new temporary nominal delay. We continue this procedure until the change between the new and old temporary nominal delays is smaller than a certain small percentage; that is, the process stops when it converges to a statistical nominal delay. After that, we take the temporary nominal delay as the nominal delay and start to use Algorithm 1. It is notable that all the procedures described above can be written in a high-level test program and performed automatically on our HOY test platform.

If hazard-free patterns are available from the methods proposed in [27] and [28], we can further improve the efficiency of our approach. Because the hazard-free patterns generated by the aforementioned methods may not have adequate fault coverage, it is common that a certain amount of non-hazard-free patterns is needed to further boost the fault coverage. For non-hazard-free patterns, we can perform delay measurement by binary search within a smaller possible range for each pattern. The smaller range for binary search is described in (1), where T_i is the nominal delay of pattern i, C is the measurement confidence, and σ_i is the standard deviation:

UpperBound = T_i + C × σ_i, LowerBound = T_i − C × σ_i. (1)
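To make the measurement loop concrete, the following is a minimal Python sketch of the per-pattern procedure described above; the function ac_scan_test and parameters such as relaxed_period_ns are hypothetical placeholders standing in for the HOY test-program primitives, not the actual API.

```python
# Sketch of per-pattern delay measurement (Algorithm 1), assuming a tester-side
# helper ac_scan_test(pattern, period_ns) that applies one AC-plus scan pattern
# at the given test clock period and returns True on pass.

def measure_pattern_delay(pattern, nominal_ns, sigma_ns, ac_scan_test,
                          confidence=3.0, resolution_ns=0.1,
                          relaxed_period_ns=20.0):
    """Approximate the longest path delay (in ns) sensitized by one pattern."""
    period = nominal_ns + confidence * sigma_ns   # stringent starting period
    if not ac_scan_test(pattern, period):
        # Unexpected failure at the stringent period: fall back to a relaxed
        # (long) clock period and sweep downward, as in the method of [19].
        period = relaxed_period_ns
    # Shorten the test clock period step by step until the pattern fails.
    while ac_scan_test(pattern, period):
        period -= resolution_ns
    # The delay lies between the failing period and the last passing one.
    return period + resolution_ns
```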


Fig. 3. Possible cases in our method for delay measurement.

Fig. 4. Advantage of adaptive-frequency test.

Algorithm 1: Per-Pattern Delay Measurement. Inputs: the nominal delay of each pattern, the measurement confidence (in standard deviations), the standard deviation of each pattern, and the measurement resolution (ns). Sub-function: perform AC-plus scan testing of a pattern with a given test clock period and return pass or fail. For each pattern, the algorithm starts from the stringent test clock period (nominal delay plus the measurement confidence times the standard deviation); if the pattern fails, it is retested with a relaxed clock period, otherwise the test clock period is reduced step by step until the pattern fails, and the last passing period is taken as the longest path delay of the pattern.

B. Potential Loss of Accuracy

In this subsection, we discuss the potential loss of accuracy of our method. We illustrate all possible cases in Fig. 3. There are eight possible cases. Cases 1, 3, and 6 are the most probable, with a combined occurrence probability of 99.865% when the measurement confidence is 3, in our experiments. In these three cases, our method works properly and does not cause any loss of accuracy. For cases 2, 4, 5, 7, and 8, the combined occurrence probability is only 0.135%. When the measurement confidence is increased from the original 3 to 4, it further drops to 0.00317%. Next, we examine cases 4 and 7 more carefully since they are the only two cases that could cause inaccuracy, and only when the following condition is met: all paths other than the most critical one have finished their transitions before the starting test clock period. If the above condition is violated, then the measurement process is restarted with a looser starting test clock period to avoid any accuracy loss.

IV. ADAPTIVE-FREQUENCY TEST

It is known that at-speed scan test cannot detect small delay defects very well. The failing frequency signature (FFS) analysis proposed in [19] can indeed detect small delay defects, but it is too time-consuming for production test. Here, we propose an alternative called the adaptive-frequency test (AF-test) to remedy this problem. The AF-test provides a test result almost as good as the FFS analysis without the long characterization time. In fact, it needs only a very modest test time overhead over the normal AC scan test. In some sense, it is a methodology that combines the quick test time of AC scan with the high test quality of failing-frequency signature analysis, as depicted in Fig. 4. It can be viewed as a fast characterization technique aimed at detecting marginal defects and reducing test escapes. The basic concept of the adaptive-frequency test is to set the test frequencies adaptively for each pattern according to the delay of its most critical path. Note that this is possible since we have incorporated an ADPLL with a wide frequency range (e.g., 40–600 MHz) and a high resolution (e.g., only 5 ps between the clock periods of two adjacent frequencies). Since we can test each pattern with a very stringent test frequency, even a small delay will lead to a failing test result and thus be detected.

Fig. 5. Example for the relation between the three test frequencies and the nominal delay profile of the circuit under test.

Fig. 6. Flow of adaptive-frequency test.

A. Test Frequencies of Adaptive-Frequency Test

In our adaptive-frequency test, every pattern is tested with at most three test clocks, as depicted in Fig. 5, namely the target test clock, the middle test clock, and the contour test clock. It is notable that the horizontal axis of this figure is the index of the test patterns sorted by the longest path delay, as proposed in [19].

1) Target Test Clock Period: This is the inverse of the target operating frequency of the circuit under test.

2) Contour Test Clock Period: The contour test clock period for one pattern is simply its longest path delay plus some margin (e.g., 3 times the estimated process variation). Testing a pattern with the contour test clock can be viewed as a special form of stress test, aimed at exposing any delay larger than the normal delay.

3) Middle Test Clock Period: The middle test clock period of a pattern is simply the middle value between the target test clock period and its contour test clock period. Like the contour test clock period, it also varies from pattern to pattern.
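As a rough illustration of how these per-pattern test clock periods could be derived (using the delay unit of (2) and the contour definition of Section IV-F, both given below), a small Python sketch follows; all names and defaults are illustrative assumptions, not values from the paper.

```python
# Illustrative derivation of the three per-pattern test clock periods.
# The delay unit is the average standard deviation over all patterns; the
# contour period is the normal-variation threshold plus one delay unit; the
# middle period is the midpoint between the contour and target periods.

def delay_unit(sigmas_ns):
    return sum(sigmas_ns) / len(sigmas_ns)

def test_clock_periods(nominal_ns, sigma_ns, unit_ns, target_ns, confidence=3.0):
    threshold = nominal_ns + confidence * sigma_ns   # normal process variation
    contour = threshold + unit_ns                    # contour test clock period
    middle = (contour + target_ns) / 2.0             # middle test clock period
    return contour, middle, target_ns
```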

B. Three Categories of Chips

After performing the adaptive-frequency test, we classify a chip into one of three categories, as described in the following.

1) Passing chips: These chips pass all patterns under the contour test clock period and the target test clock period. They can be viewed as robustly working devices.

2) Failing chips: These chips fail at least one pattern under the target test clock period, which means that they cannot meet the target timing requirement; thus they should be treated as malfunctioning chips and discarded. Normally, these chips also fail the traditional at-speed scan test.

3) Marginal chips: These chips fail at least one pattern at the contour test clock period, but pass all patterns at the target test clock period. In some sense, their test results are in the ambiguous region between good and total failure. We refer to them as marginal chips. In this paper, we propose to grade these chips with a delay score and an unreliability score to represent the levels of their marginality.

C. Flow of Adaptive-Frequency Test

The overall flow of our adaptive-frequency test is described in Fig. 6. First, we test a chip with all patterns at their contour test clock periods. If it passes, we classify it as a passing chip. If it fails, we conditionally retest it with only the failing patterns at the target test clock period. If the chip fails any of those patterns again, we consider it a bad chip. Those chips that pass at the target test clock period but fail at the contour test clock period are classified as marginal chips. We then test only the marginal chips with the middle test clock period in order to calculate their delay scores and unreliability scores.

D. Delay Score

The delay score is defined as the total measured extra abnormal delay over all patterns. Once the delay of a pattern exceeds the contour test clock period, that pattern is assigned a delay score corresponding to its abnormal delay. After we test all patterns, we add up the delay scores of all patterns as the delay score for the chip. A high delay score means there is plenty of abnormal delay in that chip. It is likely that some of these extra delays would propagate to long paths untested by the current test pattern set, thereby causing the chip to fail under some operating conditions in the field (e.g., large supply voltage variation and/or an extreme environment). As a result, a marginal chip with a large delay score is one that is more likely to have a delay fault even though the current test set cannot catch it using the traditional AC scan.

E. Delay Unit

The delay unit is the unit for the calculation of the delay score. We tend to use a delay unit that scales with designs. Hence, we define it as the average of the standard deviations of all patterns, as shown in (2), where U stands for the delay unit, N stands for the total number of test patterns, and σ_i stands for the standard deviation of pattern i:

U = (1/N) Σ_{i=1}^{N} σ_i. (2)

For example, for a design with a target operating clock period of 10 ns, the delay unit could be 0.5 ns. On the other hand, for a design with a target operating clock period of 1 ns, the delay unit could be 0.05 ns.

F. Calculation of Contour Test Clock Period

With the information of the mean and standard deviation of the longest path delay of each pattern, we can derive the delay unit by averaging the standard deviations of all patterns. For calculating the contour test clock period of each pattern, we first add several standard deviations to the nominal delay as the threshold for normal process variation. Second, we add one delay unit to it to obtain the contour test clock period of the pattern. It is worth mentioning again that each pattern has its own path length, so the standard deviation we add differs from one pattern to another. However, the delay unit is the same for every pattern.

Fig. 7. Case 1 of delay score calculation; the pattern is assumed to have failed the contour test clock period but passed the middle test clock period.

Fig. 8. Case 2 of delay score calculation; the pattern is assumed to have failed the middle test clock period but passed the target test clock period.
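A compact Python sketch of the classification flow of Sections IV-B and IV-C (Fig. 6) follows; ac_scan_test and the per-pattern period table are hypothetical placeholders for the tester-side primitives.

```python
# Sketch of the adaptive-frequency test flow (Fig. 6): test every pattern at
# its contour period, retest the failing patterns at the target period, and
# classify the chip as "passing", "failing", or "marginal". Marginal chips are
# later retested at their middle periods to compute the scores.

def adaptive_frequency_test(patterns, contour_ns, target_ns, ac_scan_test):
    contour_fails = [p for p in patterns
                     if not ac_scan_test(p, contour_ns[p])]
    if not contour_fails:
        return "passing", []
    if any(not ac_scan_test(p, target_ns) for p in contour_fails):
        return "failing", contour_fails
    return "marginal", contour_fails   # to be graded with delay/unreliability scores
```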

G. Calculation of Delay Score

We define the delay score of a pattern as shown in (3), where DS_i is the delay score of pattern i, D_i is the longest path delay of pattern i, TH_i is the threshold for normal process variation of pattern i, and U is the delay unit:

DS_i = (D_i − TH_i) / U. (3)

There are two cases when we calculate the delay score. In the first case, a chip may fail the contour test clock period while passing the middle test clock period for some pattern, say pattern A. This implies that the longest path delay of pattern A is located between the contour test clock period and the middle test clock period, as depicted in Fig. 7, but there is no knowing the exact delay of pattern A. Therefore, we use an estimate to represent the delay of pattern A. For the additional delay caused by process variation, a smaller deviation from the nominal delay has a higher probability. For the extra delay caused by spot defects, it was reported that there are more small delay defects than large delay defects [3]. Therefore, combining these two factors, we take the candidate that is closest to the nominal delay as the most probable one; in this case, it is the contour test clock period. The contour test clock period is calculated by adding one delay unit to the threshold of normal process variation, which means the delay of pattern A is assumed to be one delay unit above the threshold of normal process variation; therefore, the delay score of pattern A is 1.

In the second case, a chip may fail the middle test clock period while passing the target test clock period for some pattern, say pattern B. This means that the longest path delay of pattern B is located between the middle test clock period and the target test clock period, as depicted in Fig. 8. Here again, we use the candidate closest to the nominal delay to represent the delay of pattern B; in this case, it is the middle test clock period.

The delay score, by definition, is the distance between the longest path delay of a pattern and the threshold for normal process variation, divided by the delay unit. In the example of Fig. 8, the path delay of the pattern is 7 ns, the threshold for normal process variation is 4.5 ns, and the delay unit is 0.5 ns. Consequently, the distance is estimated as 2.5 ns. Dividing 2.5 ns by the delay unit of 0.5 ns gives 5; therefore, the delay score of this pattern is 5.

H. Unreliability Score

A marginal chip can be caused by spot defects in addition to extreme process variation. Studies have revealed that a spot defect may deteriorate and cause a larger and larger delay over time during field usage [7], [29]. That implies that an originally harmless small delay could become malicious or even catastrophic. In light of this, we define a term called the unreliability score to measure the possibility of such a reliability failure. It is shown in (4), where T_target is the target test clock period and D_i is the longest path delay of pattern i:

US_i = T_target / (T_target − D_i). (4)

It basically reflects the distance between the longest path delay of a pattern and the target clock period. The shorter this distance, the higher the unreliability score, meaning that the involved small delay could become malicious more easily and cause a reliability issue. Fig. 9 is an example of the unreliability score calculation. The chip under test is assumed to fail under the middle test clock period while passing under the target test clock period for some pattern, say pattern C. Again, there is no knowing the exact delay of pattern C, hence we use a probable delay to represent it; here we use the middle test clock period to represent the longest path delay of pattern C. As a result, the unreliability score of pattern C is 9 divided by 2, which equals 4.5.

I. Relation Between Delay Score and Unreliability Score

The delay score and the unreliability score are not perfectly correlated. In the example of Fig. 10, one pattern has a higher unreliability score than the other, but a lower delay score.
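To tie the two scores together, here is a hedged Python sketch of the per-pattern scoring of a marginal chip; the helper name and the concrete contour/middle/target values in the example are assumptions inferred from the worked numbers above, not values stated explicitly in the paper.

```python
# Sketch of per-pattern scoring for a marginal chip (Sections IV-G and IV-H).
# A pattern that fails the contour period but passes the middle period is
# represented by the contour period (delay score 1); one that fails the middle
# period but passes the target period is represented by the middle period.

def pattern_scores(fails_middle, contour_ns, middle_ns, target_ns,
                   threshold_ns, unit_ns):
    delay = middle_ns if fails_middle else contour_ns   # most probable delay
    delay_score = (delay - threshold_ns) / unit_ns       # eq. (3)
    unreliability = target_ns / (target_ns - delay)      # eq. (4), as reconstructed
    return delay_score, unreliability

# Worked example: threshold 4.5 ns and delay unit 0.5 ns are from the text;
# contour 5 ns, middle 7 ns, and target 9 ns are assumed for consistency.
ds, us = pattern_scores(True, 5.0, 7.0, 9.0, 4.5, 0.5)   # ds = 5.0, us = 4.5
```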


As a result, it is better to calculate these two scores separately and use them as different indicators.

Fig. 9. Example for unreliability score calculation.

Fig. 10. Example illustrating the difference between delay score and unreliability score.

J. Mixed Treatments

With the above scores, one can treat a marginal chip in many ways, including the following.

1) One can pick a certain percentage of chips with the highest delay scores to go through a thorough test session for exposing the potential path delay faults.

2) One can pick a certain percentage of chips with the highest unreliability scores to go through more rigorous burn-in to accelerate the deterioration of the dormant small delay defects.

3) One can also pick a certain percentage of marginal chips with either the highest delay scores or the highest unreliability scores to discard, to ensure a lower defect level for the shipped devices.

V. WAVEFORM EXTRACTION

Silicon debug is an important phase in IC product development. It has been reported that this phase could take over 50% of the overall product development time in some cases [30]. In the step of finding the root cause of a failure, one usually uses special probing tools such as the laser voltage probe (LVP) and laser-assisted device alteration (LADA) to take waveforms from the circuit [31], [32]. As the product's operating frequency continues to increase, thermal density becomes a problem when applying those laser-based probing tools, which add extra heat to the chip.

With the AC-plus scan architecture, the waveform of each flip-flop in a chip under test can be extracted without using a special probing tool. The waveform extraction can be conducted as described in Algorithm 2. First, we choose the test pattern we intend to investigate. Then we further choose the time period during which we want to observe, and the measurement resolution. Next, we start to perform the AC-plus scan test with different test clock periods. The first test clock period is the starting point of our observation range. Then we gradually increase the test clock period by the measurement resolution until the test clock period reaches the end point of our observation range. With AC-plus scan, one can scan out and record all the values in the flip-flops for each test round. After that, we can gather the value of each flip-flop at the different time points, and thereby reconstruct the evolution of values during the time period for every flip-flop.

Algorithm 2: Waveform Extraction. Input1: test pattern. Input2: starting point. Input3: end point. Input4: measurement resolution.

1. T ← StartingPoint
2. for T ≤ EndPoint do
3. Perform AC-plus scan for the pattern with clock period T
4. Scan out and store all the values in the flip-flops
5. T ← T + MeasurementResolution
6. end for
7. Reconstruct the waveform from the stored values

For a traditional ATE, the test memory is a precious resource, so we can store only a small volume of test responses, and output compression might be needed. In our HOY test platform, however, the communication between the tester and the device under test goes through a packet-based channel, and the tester is a regular PC that can use its almost unlimited main memory as the stimulus/response memory. Consequently, we can easily store all the values of all flip-flops even though the data volume can be very large.
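The same sweep can be written as a short Python sketch; ac_scan_capture is a hypothetical stand-in for the tester-side routine that applies the pattern at one clock period and returns the scanned-out flip-flop values.

```python
# Sketch of Algorithm 2: apply one pattern with a sweep of capture clock
# periods and rebuild a per-flip-flop waveform from the scanned-out snapshots.

def extract_waveforms(pattern, start_ns, end_ns, resolution_ns, ac_scan_capture):
    periods, snapshots = [], []
    t = start_ns
    while t <= end_ns:
        periods.append(t)
        snapshots.append(ac_scan_capture(pattern, t))  # one value per flip-flop
        t += resolution_ns
    # Transpose: waveforms[ff] is the sampled value sequence of flip-flop ff.
    waveforms = list(zip(*snapshots))
    return periods, waveforms
```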

VI. FABRICATED CHIP WITH AC-PLUS SCAN

We have implemented a design with the AC-plus scan feature using TSMC 0.18-μm technology via the service of the National Chip Implementation Center, Taiwan [33]. In total we received eight packaged chips in return. Fig. 11 shows the architecture of our fabricated chip, and the die photo of the packaged chip is shown in Fig. 12. We use the ADPLL proposed in [34] as the programmable on-chip PLL in our AC-plus scan structure. The clock pulse controller in our chip is based on the design proposed in [22]. We also implemented the data compression circuit proposed in [35] to reduce the test time. Our circuit under test is a 9.5 K Viterbi decoder. The circuits supporting AC-plus scan are mainly the ADPLL and the clock pulse controller, which have a combined gate count of around 2.3 K in our


TABLE I REDUCED MEASUREMENT TIME FOR EIGHT CHIPS

Fig. 11. Architecture of fabricated chip.

Fig. 12. Die photo of our fabricated chip.

design. This is a fixed overhead independent of the size of the CUT.

VII. EXPERIMENTAL RESULTS

The data reported in this section are based on an implementation of the proposed methodology on our HOY test platform with some real chips. It is notable that, although it is easier to perform AC-plus scan on our HOY test platform, the link between them is not absolutely necessary. The entire methodology can also be modified slightly to work with a traditional ATE. In such a modification, for example, we would need to insert some ADPLL-tuning code as the prefix of each test pattern loaded into the ATE test memory so that each test pattern can be tested with its own unique test frequency in the adaptive-test mode.

We performed several sets of experiments to validate our proposed methods. In Section VII-A, we present the results of delay measurement on our fabricated chips and compare the required time of our method with that of the conventional one. In Section VII-B, we first report the results of an experiment justifying that the delay score is a good indicator for reducing the test escapes due to small delay faults. Then, we assess a discarding strategy for marginal chips by analyzing its impact on the number of caught test escapes and the number of possible overkills. In Section VII-C, we present waveform extraction results obtained from our fabricated chip.

A. Per-Pattern Delay Measurement

We performed delay measurement on our eight fabricated chips with both the method proposed in [19] and our proposed method to compare the required time. Before we started, we had to decide

the nominal delay of each pattern for reference. In this experiment, we randomly chose one chip and swept from the lowest possible frequency to find the longest path delay of each of the 500 transition fault patterns. We assumed that the standard deviation of the delay is 10%. Then we multiplied the delay of the longest pattern by 160% (following the six-sigma rule) as the upper bound of possible delays. Here we set the measurement resolution to 0.1 ns. First, we tested the 500 transition fault patterns starting from the same clock period, i.e., the upper bound of possible delays, for each chip and recorded the test times. Second, we performed delay measurement by our proposed method and recorded the test times. We set the measurement confidence to three, which means the test clock period starts from the nominal delay multiplied by 130% for each pattern. The results are summarized in Table I.

B. Adaptive-Frequency Test

We performed an experiment to demonstrate the effectiveness of our adaptive-frequency test. We first use a 9.5 K Viterbi decoder with 608 flip-flops as the CUT.

1) Experimental Setup:

a) Generation of Defective Chips: We modify the SDF file generated by an APR tool (Synopsys Astro [36]) with 0.18-μm technology to mimic a chip with defects. We take three steps to generate a defective sample. First, we randomly implant 15% inter-die process variation into the SDF file. Second, we randomly implant 3% intra-die process variation. Third, we randomly implant 1 to 10 small delay defects into the SDF file. The small delay defects range from 1 to 2 ns. In the following, we call such a generated defective sample a defective chip.

b) Generation of Test Sets: We generated two test sets. One, called the basic test set, is used to mimic a volume production test set that has some test escapes. The other, called the thorough test set, is used to mimic thorough field usage. The basic test set is generated by setting the ATPG tool (Synopsys TetraMax [36]) to treat slow-to-rise and slow-to-fall as the same fault when generating transition fault patterns. The thorough test set is generated by turning on the N-detect function in the ATPG tool and setting N to a fairly large number in our experiment.

2) Flow of Experiment:

Step 1) We generate a defective chip by modifying the SDF file.


Step 2) We test this defective chip by the adaptive-frequency test. If this chip is a failing or passing chip, we discard it and go back to Step 1) to generate a new defective chip. If this chip is a marginal chip, then we go to Step 3).

Step 3) We record the scores of this marginal chip and test it with the thorough test set to see if it passes.

In total we generated 500 marginal chips. In this experiment, we want to see whether a chip with a higher delay score has a higher possibility of failing the thorough test set. If yes, that means a chip with a higher delay score in volume production test would probably have a higher possibility of failing in the field.

3) Higher Delay Score Implies Higher Possibility of Failing the Thorough Test Set: The experimental result is quite in line with what we expected, i.e., a chip with a higher delay score has a higher possibility of failing the thorough test set. The result is depicted in Fig. 13. The chips are sorted by their delay scores; a chip with a higher score has a higher serial number, so the 500th chip has the highest delay score. A diamond in the figure indicates a chip that failed the thorough test set. We can see clearly in Fig. 13 that the diamonds are much denser in the high-score region. That indicates that the delay score can be used as a good assessment of the possibility of thorough test failure. It thus implies that the delay score can provide a good assessment of the possibility of being a test escape in the volume production test.

Fig. 13. Relation between delay scores and thorough test failure for the Viterbi decoder.

4) Test Time Overhead: In our adaptive-frequency test, we define the test time for a test pattern (using the launch-off-capture-based transition-fault AC scan test) as the one-pattern test time (OPTT) in the sequel. There are two types of test time overheads to consider.

a) Test Time Overhead for Setting a Specific Test Frequency for Each Test Pattern: When we perform AC scan test on our HOY platform, the test time for a test pattern includes two parts, i.e., 1) the scan-shifting time and 2) two clock pulses in the functional mode. For the adaptive-frequency test, the test time includes one additional part, i.e., setting the ADPLL to a specific frequency. By comparing the test time difference between testing 500 patterns with the same frequency and testing 500 patterns with changing frequencies, we found that the one-pattern test time (OPTT) of the adaptive-frequency test is merely 1.156 times that of a traditional AC-scan test.

b) Test Time Overhead for More Test Pattern Applications: There are two hundred patterns in the basic test set for the Viterbi decoder, each of which will be applied once in the traditional AC-scan test. For the adaptive-frequency test, a pattern could be applied one to three times. On the average, the total number of test pattern applications is 205.27 for each marginal chip (as compared to 200 in the traditional AC-scan). For a passing chip, we only have to consider the OPTT overhead since each pattern is applied only once; therefore the test time overhead is only 15.6%. For a marginal chip, the overall test time overhead is given by (5), which combines the OPTT of our adaptive-frequency test relative to the OPTT of the traditional AC scan test with the larger number of pattern applications. In our experiment, the resulting test time overhead for a marginal chip is 18.2%. For a failing chip, the test time overhead depends on the index of the pattern at which the test (using the target test clock period) fails. For example, if a chip fails at the 100th pattern, the test time overhead is 16.7%. Since the test time overheads for both the passing chips and the failing chips are smaller than that for the marginal chips, the overall test time overhead of the adaptive-frequency test is smaller than 18.2% in our experiment.

5) Evaluate the Discarding Strategy: We can further use the previous results to evaluate tactic C for marginal chips, i.e., discarding a certain percentage of marginal chips based on scores. Here we consider only delay scores since our experiment cannot evaluate the effect of field aging. Those 500 marginal chips would pass the traditional at-speed scan test, since the definition of marginal chips is passing all patterns at the target test clock period. Therefore, we treat those marginal chips that failed the thorough test set as test escapes of the traditional at-speed scan test. Then we apply tactic C based on delay scores to see how many test escapes we can catch, and what the cost will be.

First, we observe how many test escapes we can catch purely by tactic C. The test escapes of the adaptive-frequency test with tactic C and the test escapes of the at-speed scan test are depicted in Fig. 14. The discarding rate denotes the percentage of marginal chips that we discarded. Because the discarding rate is not a variable for the at-speed scan test, its number of test escapes is a constant in Fig. 14. On the other hand, we can see from Fig. 14 that a higher discarding rate normally leads to fewer test escapes if our AF-test is used.

Second, we evaluate the cost of catching those test escapes by tactic C. We define those marginal chips that do not fail the thorough test set but are discarded anyway as possible overkills. The cost is measured as the number of possible overkills divided by the number of caught test escapes, a metric representing how many robustly functioning chips one may need to sacrifice in order to catch one potential test escape. The result is shown in Fig. 15. We can see that the cost of catching a test escape grows with the discarding rate. For example, at a discarding rate of 10%, the cost is 1.5 (i.e., one may overkill 1.5 good chips to catch one test escape); at a discarding rate of 20%, the cost increases to 2.5. As also shown in Fig. 16, when we increase the discarding rate, we inevitably discard more chips with lower scores.
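The cost metric just described can be sketched in a few lines of Python; the list names and the sort-by-delay-score policy are written out only for illustration.

```python
# Sketch of the discarding-strategy evaluation: sort marginal chips by delay
# score, discard the top fraction, and report possible overkills per caught
# test escape. fails_thorough[i] is True if marginal chip i fails the thorough
# test set (i.e., it would be a test escape of the at-speed scan test).

def discard_cost(delay_scores, fails_thorough, discard_rate):
    order = sorted(range(len(delay_scores)),
                   key=lambda i: delay_scores[i], reverse=True)
    discarded = order[:int(round(discard_rate * len(order)))]
    caught = sum(1 for i in discarded if fails_thorough[i])        # caught escapes
    overkill = sum(1 for i in discarded if not fails_thorough[i])  # good chips lost
    return overkill / caught if caught else float("inf")
```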


Fig. 14. Test escapes of adaptive-frequency test with tactic C and test escapes of the traditional at-speed scan test.

Fig. 15. Possible overkill divided by caught test escapes at different discarding rate.

Fig. 17. Relation between delay score and thorough test failure for (a) s38417, (b) s35932, (c) b15, and (d) b20.

TABLE II CHARACTERISTICS OF TEST CASE CIRCUITS

Fig. 16. Percentage of bad chips we discarded at different discarding rate.

TABLE III COMPARISON OF POSSIBLE OVERKILL DIVIDED BY CAUGHT TEST ESCAPES AT DIFFERENT DISCARDING RATES

6) Repeat the Previous Experiment on Other Circuits: We further chose two circuits from ISCAS89 and two circuits from ITC99 for the same experiment. The detailed characteristics of those circuits are listed in Table II. The correlations between delay score and thorough test failure are depicted in Fig. 17. For evaluating the discarding tactic C, we summarize the cost of catching a test escape in Table III. It is worth noting that, even though the correlation between delay score and thorough test failure cannot be easily recognized from Fig. 17(d), we can clearly see from Table III that the cost of catching a test escape is reduced when we decrease the discarding rate. That means a higher delay score still represents a higher possibility of being a test escape for b20. For a case like b20, discarding marginal chips at any rate would greatly reduce the test escapes at little cost, as shown in Table III. For example, if we discard 10% of the marginal chips, one sacrifices only 0.282 good chips to catch one test escape (since the cost is 0.282).

C. Waveform Extraction

We implemented the waveform extraction on our fabricated chips. On the HOY test platform, we performed the AC-plus scan test for a chosen pattern with the test frequency sweeping from 400 MHz (2.5 ns) down to 100 MHz (10 ns). We set the measurement resolution to 0.1 ns. The results are shown in Fig. 18. After launching the signal, we sampled the real waveform in the chip 76 times in total; the interval between two samples is 0.1 ns. We transmitted all the values from the flip-flops back to the tester for each test frequency and then reconstructed the waveform. The reconstructed waveform is shown in Fig. 19.


Fig. 18. Sampling points in the test time period.

Fig. 19. Extracted waveform from our fabricated chip.

VIII. CONCLUSION

An ADPLL has many advantages over its analog counterpart when used as a clock generator. For example, it can be designed as a cell-based netlist and is thus highly portable across process technologies. Also, it can reliably provide a wide range of frequencies. By embedding an ADPLL into a CUT, the testing capability can be enormously enhanced. In this work, we have exploited such an emerging test infrastructure with a comprehensive AC-plus scan test methodology on our HOY test platform. We proposed three test modes, namely: 1) delay measurement; 2) adaptive-frequency test; and 3) waveform extraction. All three test modes have been validated through a Viterbi decoder fabricated in a 0.18-μm CMOS process technology and tested on the HOY test platform. This methodology, in general, is valuable in fulfilling the needs of several post-manufacturing delay-sensitive product validation flows, including device characterization, volume production test, diagnosis, silicon debugging, and product grading.

ACKNOWLEDGMENT

The authors would like to thank the National Chip Implementation Center (CIC) and Taiwan Semiconductor Manufacturing Company (TSMC) for chip fabrication.

REFERENCES
[1] H. Mahmoodi, S. Mukhopadhyay, and K. Roy, "Estimation of delay variations due to random-dopant fluctuations in nanoscale CMOS circuits," IEEE J. Solid-State Circuits, vol. 40, no. 9, pp. 1787–1796, Sep. 2005.
[2] T. M. Mak, A. Krstic, K. T. Cheng, and L.-C. Wang, "New challenges in delay testing of nanometer, multigigahertz designs," IEEE Design Test Comput., vol. 21, no. 3, pp. 241–247, May–Jun. 2004.
[3] P. Nigh and A. Gattiker, "Test method evaluation experiments & data," in Proc. Int. Test Conf., 2000, pp. 454–463.
[4] X. Lin, R. Press, J. Rajski, P. Reuter, T. Rinderknecht, B. Swanson, and N. Tamarapalli, "High-frequency, at-speed scan testing," IEEE Design Test Comput., vol. 20, no. 5, pp. 17–25, Sep.–Oct. 2003.

[5] G. L. Smith, "Model for delay faults based upon paths," in Proc. Int. Test Conf., 1985, pp. 342–349.
[6] J. A. Waicukauski, E. Lindbloom, B. K. Rosen, and V. S. Iyengar, "Transition fault simulation," IEEE Design Test Comput., vol. 4, no. 2, pp. 32–38, Apr. 1987.
[7] N. Ahmed, M. Tehranipoor, and V. Jayaram, "Timing-based delay test for screening small delay defects," in Proc. Design Autom. Conf., 2006, pp. 320–324.
[8] X. Lin, K. H. Tsai, C. Wang, M. Kassab, J. Rajski, T. Kobayashi, R. Klingenberg, Y. Sato, S. Hamada, and T. Aikyo, "Timing-aware ATPG for high quality at-speed testing of small delay defects," in Proc. Asian Test Symp., 2006, pp. 139–146.
[9] M. Yilmaz, K. Chakrabarty, and M. Tehranipoor, "Test-pattern grading and pattern selection for small-delay defects," in Proc. IEEE VLSI Test Symp., 2008, pp. 233–239.
[10] J. Xiong, V. Zolotov, C. Visweswariah, and P. A. Habitz, "Optimal margin computation for at-speed test," in Proc. Design Autom. Test in Euro., 2008, pp. 622–627.
[11] J.-J. Liou, L.-C. Wang, K.-T. Cheng, J. Dworak, M. R. Mercer, R. Kapur, and T. W. Williams, "Enhancing test efficiency for delay fault testing using multiple-clocked schemes," in Proc. Design Autom. Conf., 2002, pp. 371–374.
[12] S. Mitra, E. Volkerink, E. J. McCluskey, and S. Eichenberger, "Delay defect screening using process monitor structures," in Proc. IEEE VLSI Test Symp., 2004, pp. 43–52.
[13] R. Putman and P. Gawde, "Enhanced timing-based transition delay testing for small delay defects," in Proc. IEEE VLSI Test Symp., 2006, pp. 336–342.
[14] R. Tayade and S. Sundereswaran, "Small-delay defect detection in the presence of process variations," Microelectron. J., vol. 39, no. 8, pp. 1093–1100, Aug. 2008.
[15] H. Yan and A. D. Singh, "Experiments in detecting delay faults using multiple higher frequency clocks and result from neighboring die," in Proc. Int. Test Conf., 2003, pp. 105–111.
[16] S. H. Wu, D. Drmanac, and L.-C. Wang, "A study of outlier analysis techniques for delay testing," in Proc. Int. Test Conf., 2008, pp. 1–10.
[17] D. Drmanac, B. Bolin, L.-C. Wang, and M. S. Abadir, "Minimizing outlier delay test cost in the presence of systematic variability," in Proc. Int. Test Conf., 2009, pp. 1–10.
[18] K. S. Kim, S. Mitra, and P. G. Ryan, "Delay defect characteristics and testing strategies," IEEE Design Test Comput., vol. 20, no. 5, pp. 8–16, Sep.–Oct. 2003.
[19] J. Lee and E. J. McCluskey, "Failing frequency signature analysis," in Proc. Int. Test Conf., 2008, pp. 1–8.
[20] S. Davidson, "Towards an understanding of no trouble found devices," in Proc. IEEE VLSI Test Symp., 2005, pp. 147–152.
[21] S. Davidson, "Understanding NTF components from the field," in Proc. Int. Test Conf., 2005, pp. 333–334.
[22] M. Beck, O. Barondeau, M. Kaibel, F. Poehl, X. Lin, and R. Press, "Logic design for on-chip test clock generation-implementation details and impact on delay test quality," in Proc. Design Autom. Test in Euro., 2005, pp. 56–61.
[23] R. Press and J. Boyer, "Easily implement PLL clock switching for at-speed test," Chip Design Mag., Feb.–Mar. 2006.
[24] D. Wang, X. Fan, X. Fu, H. Liu, K. Wen, R. Li, H. Li, Y. Hu, and X. Li, "The design-for-testability features of a general purpose microprocessor," in Proc. Int. Test Conf., 2007, pp. 1–9.
[25] X. X. Fan, Y. Hu, and L. T. Wang, "An on-chip test clock control scheme for multi-clock at-speed testing," in Proc. Asian Test Symp., 2007, pp. 341–346.
[26] C.-W. Wu, C.-T. Huang, S.-Y. Huang, P.-C. Huang, T.-Y. Chang, and Y.-T. Hsing, "The HOY tester - Can IC testing go wireless?," in Proc. Int. Symp. VLSI Design, Autom., Test, 2006, pp. 183–186.
[27] B. Kruseman, A. K. Majhi, G. Gronthoud, and S. Eichenberger, "On hazard-free patterns for fine-delay fault testing," in Proc. Int. Test Conf., 2004, pp. 213–222.
[28] S. Menon, A. D. Singh, and V. Agrawal, "Output hazard-free transition delay fault test generation," in Proc. IEEE VLSI Test Symp., 2009, pp. 97–102.
[29] K. Baker, G. Gronthoud, M. Lousberg, I. Schanstra, and C. Hawkins, "Defect-based delay testing of resistive vias-contacts, a critical evaluation," in Proc. Int. Test Conf., 1999, pp. 467–476.
[30] J. Gao, Y. Han, and X. Li, "A new post-silicon debug approach based on suspect window," in Proc. IEEE VLSI Test Symp., 2009, pp. 85–90.


[31] D. Josephson and B. Gottlieb, "The crazy mixed up world of silicon debug," in Proc. Custom Integr. Circuits Conf., 2004, pp. 665–670.
[32] D. Josephson, "The good, the bad, and the ugly of silicon debug," in Proc. Design Autom. Conf., 2006, pp. 3–6.
[33] National Chip Implementation Center, Hsinchu, Taiwan, "CIC referenced flow for cell-based IC design, V1.0," Tech. Rep. CIC-DSD-RD-08-01, 2008.
[34] H.-J. Hsu, C.-C. Tu, and S.-Y. Huang, "Built-in speed grading with a process-tolerant ADPLL," in Proc. Asian Test Symp., 2007, pp. 384–389.
[35] C.-W. Tzeng, C.-Y. Lin, S.-Y. Huang, C.-T. Huang, J.-J. Liou, H.-P. Ma, P.-C. Huang, and C.-W. Wu, "iScan: Indirect-access scan test over HOY test platform," in Proc. Int. Symp. VLSI Design, Autom., Test, 2009, pp. 60–63.
[36] Synopsys, Inc., Mountain View, CA, "User manuals for SYNOPSYS toolset version 2007.06," 2007.
[37] H. Li, Z. Li, and Y. Min, "Delay testing with double observations," in Proc. IEEE Asian Test Symp., 1998, pp. 96–100.
[38] W. B. Jone and Y. P. Ho, "Delay fault coverage enhancement using variable observation times," J. Electron. Test.: Theory Appl., vol. 5, pp. 131–146, 1997.

Chao-Wen Tzeng received the B.S. and Ph.D. degrees in electrical engineering from National Tsing Hua University, Hsinchu, Taiwan, in 2004 and 2009, respectively. He is currently a post-doctoral researcher with the Electrical Engineering Department, National Tsing Hua University. His research interests include fault diagnosis, test compression, low power testing and all-digital phase-locked loop (ADPLL) design.

Chih-Tsun Huang (S'98–M'01) received the Ph.D. degree in electrical engineering from the National Tsing Hua University (NTHU), Hsinchu, Taiwan, in 2000. He is currently an Assistant Professor with the Department of Computer Science, NTHU, where he has been since 2004. His research interests include security and error-correction VLSI designs, core-based SOC/IP designs, VLSI/SOC design and test, and embedded memory testing and repair. Prof. Huang was a recipient of the Best Paper Award of the 2003 IEEE Asia and South Pacific Design Automation Conference (ASP-DAC) and the Special Feature Award of the 2003 ASP-DAC University VLSI Design Contest.

Tsung-Yeh Li was born in Kaohsiung, Taiwan, in 1985. He received the B.S. degree in electrical engineering from National Sun Yat-Sen University, Kaohsiung, Taiwan, in 2007, and the M.S. degree in electrical engineering from National Tsing Hua University, Hsinchu, Taiwan, in 2009. He is currently an IC Design Engineer with Weltrend Inc.

Jing-Jia Liou (M'98) received the B.S. and M.S. degrees in electrical engineering from National Tsing Hua University, Hsinchu, Taiwan, in 1993 and 1995, respectively, and the Ph.D. degree in electrical and computer engineering from the University of California, Santa Barbara, in 2002. He is currently an Associate Professor with National Tsing Hua University. His research interests include delay testing and diagnosis, statistical timing modeling and analysis, and yield enhancement design techniques. Prof. Liou was a recipient of Best Paper Awards from the IEEE Conference on Design, Automation, and Test in Europe (DATE 2004) and the Asian Test Symposium (ATS 2009).

Shi-Yu Huang (S'94–M'97) was born in 1965, in Tainan, Taiwan. He received the B.S. and M.S. degrees in electrical engineering from National Taiwan University, Taipei, Taiwan, in 1988 and 1992, respectively, and the Ph.D. degree in electrical and computer engineering from the University of California, Santa Barbara, in 1997. He joined the faculty of the Electrical Engineering Department, National Tsing Hua University, Hsinchu, Taiwan, in 1999, where he is now a Professor. His research interests mainly include VLSI design, automation, and testing, with an emphasis on power estimation, fault diagnosis, all-digital phase-locked loop (ADPLL) design and its application to delay fault testing in VLSI, and nanometer SRAM design.

Hsi-Pin Ma (M'98) was born in Nantou, Taiwan, on January 17, 1973. He received the B.S. and Ph.D. degrees in electrical engineering from National Taiwan University, Taipei, Taiwan, in 1995 and 2002, respectively. Since 2003, he has been with the Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan, where he is currently an Associate Professor. His research interests include communications system design and SoC implementation, power-efficient/energy-efficient signal processing, and biomedical signal processing and system applications.

Hsuan-Jung Hsu received the M.S. degree in electrical engineering from National Tsing Hua University, Hsinchu, Taiwan, in 2009. He is currently an Engineer with MediaTek Inc., Hsinchu, Taiwan. His research interests include all-digital phase-locked loop (ADPLL) design and high-speed digital design.

Po-Chiun Huang (M'01) received the B.S. and Ph.D. degrees in electrical engineering from National Central University, Taiwan, in 1992 and 1998, respectively. During the summers of 1992 and 1996, he held internships with the Computer and Communication Research Laboratory, ITRI, and Lucent Technology, working on optical and RF communication systems. From 2000 to 2002, he was with MediaTek Inc., where he was involved with chipset design for optical storage products. In 2002, he joined National Tsing Hua University, Hsinchu, Taiwan, where he is currently an Associate Professor. His research interests include mixed-signal circuit designs for communication, biomedical, and power-management applications.

Jenn-Chyou Bor was born in Haitian, Taiwan, in 1966. He received the B.S. and Ph.D. degrees from the Department of Electronics Engineering, National Chiao-Tung University, Hsinchu, Taiwan, in 1988 and 1996, respectively. He is currently an Associate Editor with the Chip Implementation Center, Hsinchu, Taiwan. His main research interests include switched-capacitor filters and neural network integrated circuits and systems.

Ching-Cheng Tien was born in Miaoli, Taiwan, on March 19, 1966. He received the B.S. degree in communication engineering and the Ph.D. degree in electronic engineering from National Chiao Tung University, Hsinchu, Taiwan, in 1988 and 1993, respectively. He was an Associate Professor with the Department of Electrical Engineering, Chung Hua University, Hsinchu, Taiwan, from 1995 to 2004, and with its communication engineering department from 2004 to 2010. His current research interests include the theoretical study of microwave field theory, microwave passive circuit and antenna design, radio frequency integrated circuit design, mixed-signal integrated circuit design, ultra-high-frequency radio frequency identification integrated circuit design, and the implementation of wireless sensor networks.

Chih-Hu Wang was born in Hsinchu, Taiwan, in 1958. He received the B.S. degree in electronic engineering technology from National Taiwan University of Science and Technology, Taipei City, Taiwan, in 1987, the M.S. degree from the Department of Computer and Information Science, Nova Southeastern University, Fort Lauderdale, FL, in 1994, and the Ph.D. degree from the Department of Electrical Engineering, National Central University, Taoyuan, Taiwan, in 2009. He was a Senior VLSI Test Engineer with Advantest Co., Ltd., a famous Japanese company. Currently he is an Assistant Professor with the Department of Electrical Engineering, Chung Hua University, Hsinchu, Taiwan. His research interests include mixed-signal VLSI testing and design, power electronics, smart buildings, bio-electronics, WSN, and neural network technologies.

Cheng-Wen Wu (S'86–M'87–SM'95–F'04) received the B.S.E.E. degree from National Taiwan University, Taipei, Taiwan, in 1981, and the M.S. and Ph.D. degrees, both in electrical and computer engineering, from the University of California, Santa Barbara (UCSB), in 1985 and 1987, respectively. Since 1988, he has been with the Department of Electrical Engineering, National Tsing Hua University (NTHU), Hsinchu, Taiwan, where he is currently a Professor. He also served as the Director of the Computer Center, NTHU, from 1996 to 1998, and the Director of the Technology Service Center, NTHU, from 1998 to 1999. From August 1999 to February 2000, he was a Visiting Researcher with the Electrical and Computer Engineering Department, UCSB. He then served as Chair of the Electrical Engineering Department, NTHU, from 2000 to 2003, Director of the IC Design Technology Center, NTHU, from 2000 to 2005, and Dean of the College of Electrical Engineering and Computer Science from 2004 to 2007. From 2007 to 2009, he was on leave from NTHU and served as the General Director of the SOC Technology Center (STC), Industrial Technology Research Institute (ITRI). He is currently back at NTHU, and also serving as the General Director of the Information and Communications Research Laboratories (ICL), ITRI. His current research interests include design and test of VLSI circuits and systems. Prof. Wu is a Fellow of the IEEE Computer Society and the IEEE Circuits and Systems Society. He is the Editor-in-Chief of the International Journal of Electrical Engineering (IJEE), an Editor of the Journal of Electronic Testing: Theory and Applications (JETTA), and an Editor of the IEEE DESIGN AND TEST OF COMPUTERS.
