Katholieke Universiteit Leuven, Electrical Engineering, Kasteelpark Arenberg 10, Leuven, Belgium
Abstract
This paper presents a performance analysis of several well-known partitioning scheduling algorithms in real-time and fault-tolerant multiprocessor systems. Both static and dynamic scheduling algorithms are analyzed. The partitioning scheduling algorithms studied here are heuristic algorithms formed by combining any of the bin-packing algorithms with any of the schedulability conditions for the Rate-Monotonic (RM) and Earliest-Deadline-First (EDF) policies. A tool is developed that enables experimental evaluation of the performance of the algorithms from the task graph. The results show that, among the several partitioning algorithms evaluated, the RM-SmallTask (RMST) algorithm is the best static algorithm and the EDF-Best-Fit (EDF-BF) is the best dynamic algorithm for non-fault-tolerant systems. For fault-tolerant systems, which require about 49% more processors, the results show that the RM-First-Fit-Decreasing-Utilization (RM-FFDU) is the best static algorithm and the EDF-BF is the best dynamic algorithm. To decrease the number of processors in fault-tolerant systems, the RMST is modified. The results show that the modified RMST decreases the number of required processors by between 7% and 78% in comparison with the original RMST, the RM-FFDU and other well-known static partitioning scheduling algorithms.
1. Introduction
Real-time systems are often used in applications where system failures may pose a threat to human lives or cause significant economic loss [2]. Examples of such applications are systems that control cars, trucks, trains, aircraft and satellites, as well as industrial process control systems. Real-time systems may be characterized by three main features: 1) response time, i.e., the correctness of a real-time system depends not only on its logical results, but also on the time at which these results become available [15, 18]; 2) fault-tolerance, i.e., it is essential that every task admitted to the system completes its execution even in the presence of failures [1, 4, 19]; 3) task scheduling, i.e., deciding when and on which processor the given tasks should be executed [9, 23]. There are two schemes for scheduling real-time tasks on multiprocessor systems: the partitioning scheme and the global scheme. In the partitioning scheme, all instances of a particular task in the set are executed on the same processor, whereas in the global scheme each instance of a real-time task in the set may be executed on a different processor. Since an optimal solution to the task partitioning problem is computationally intractable [3, 11], many heuristics for partitioning have been proposed, a majority of which are versions of the bin-packing algorithms [5, 7, 8, 10, 12, 13, 17, 21, 25, 26]. All of the above works address non-fault-tolerant real-time systems, and none of them compares the performance of all well-known partitioning scheduling algorithms. This paper has three main contributions: 1) the performance of all well-known partitioning scheduling algorithms is studied and compared for non-fault-tolerant systems; 2) the study and comparison are extended to fault-tolerant systems; 3) simulation results show that some of the algorithms, such as the RMST, are not suitable for fault-tolerant systems because of the high number of processors they require, so the RMST algorithm is modified to decrease the processor overhead in fault-tolerant systems. To achieve fault-tolerance in the second contribution, multiple versions of each task are executed on different processors simultaneously; with this approach, processor failures or task failures can be tolerated [1]. Performance evaluation is one of the most important stages in designing and analyzing real-time systems [6, 9, 19].
It is necessary to know the capabilities and drawbacks of these algorithms in order to understand their trade-offs and their ability to
12th Pacific Rim International Symposium on Dependable Computing (PRDC'06) 0-7695-2724-8/06 $20.00 2006
tolerate faults in real-time multiprocessor systems. The results of this work help system designers select appropriate scheduling algorithms for a real-time system. To evaluate the three contributions mentioned above, a tool is designed and implemented by which the performance of the algorithms is studied and compared. The rest of the paper is organized as follows. Section 2 describes the system model used. Section 3 discusses the partitioning scheduling algorithms for non-fault-tolerant systems. In Section 4, the algorithms are studied for fault-tolerant systems. The modification of the RMST is given and analytically evaluated in Section 5. Section 6 presents the simulation results and Section 7 concludes the paper.
2. System Model
A set of n tasks τ = {τ_1, τ_2, ..., τ_n} is given, with τ_i = ((c_i1, c_i2, ..., c_iλ_i), r_i, d_i, T_i) for i = 1, 2, ..., n, where c_i1, c_i2, ..., c_iλ_i are the computation times of the λ_i versions of task τ_i, and r_i, d_i and T_i are the release time, deadline and period of task τ_i, respectively. u_i is the utilization of task τ_i, i.e., u_i = c_i / T_i. It is assumed that: a) Each task has λ_i redundancies, where λ_i is a natural number. The versions of a task may have different computation time requirements, and the versions may be merely copies of one implementation or versions of different implementations. b) All versions of each task must be executed on different processors. c) The requests of all tasks are periodic, with constant intervals between requests. The request of a task consists of the requests of all its versions, i.e., all versions of a task are ready for execution when its request arrives. d) Each task must be completed before the next request for it arrives, i.e., all its versions must be completed by the end of each request period. e) The tasks are independent and their computation times are constant. We define α as the load factor, or maximum utilization of any task in the system: α = max_{i=1,2,...,n} (c_i / T_i). We assume that no task misses its deadline and that any task can tolerate a fault.
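The task model above can be sketched in a few lines of Python. This is an illustrative rendering, not the paper's tool; the class and function names are ours, and taking the largest version's computation time as c_i is our assumption where the paper leaves the choice implicit.

```python
# Illustrative sketch of the Section 2 task model (identifiers are ours).
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    versions: List[float]   # computation times c_i1 .. c_i(lambda_i) of the versions
    period: float           # period T_i (deadline d_i coincides with T_i)

    @property
    def utilization(self) -> float:
        # u_i = c_i / T_i; we conservatively use the largest version (an assumption)
        return max(self.versions) / self.period

def load_factor(tasks: List[Task]) -> float:
    # alpha = max_i (c_i / T_i): the maximum utilization of any task in the system
    return max(t.utilization for t in tasks)

tasks = [Task([2.0, 2.5], 10.0), Task([1.0], 4.0)]
print(load_factor(tasks))
```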
The set of RM-based partitioning algorithms can be described as A = {NF, FF, BF, FFD} × {WC, IP, UO, PO}, where Next-Fit (NF), First-Fit (FF), Best-Fit (BF) and First-Fit-Decreasing (FFD) are the bin-packing heuristics, and Worst-Case (WC) [20], Increasing-Period (IP) [17], Utilization-Oriented (UO) [25] and Period-Oriented (PO) [21] are the schedulability conditions for the RM policy. For the EDF policy, this set of algorithms can be described as B = {NF, FF, BF, ...} × {EDF condition}. The partitioning schemes studied in this paper are the following. Static algorithms: Rate-Monotonic Next-Fit with IP condition (RMNF) and Rate-Monotonic First-Fit with IP condition (RMFF) [17], Rate-Monotonic Best-Fit with IP condition (RMBF) [26], First-Fit Decreasing Utilization Factor (FFDUF) [10], Rate-Monotonic First-Fit Decreasing Utilization (RM-FFDU) [25], Rate-Monotonic Small Task (RMST) and Rate-Monotonic Global Task (RMGT) [7]. Dynamic algorithms: Rate-Monotonic Next-Fit with WC condition (RMNF-WC), Rate-Monotonic First-Fit with WC condition (RMFF-WC) and Rate-Monotonic Best-Fit with WC condition (RMBF-WC) [26], Rate-Monotonic Next-Fit with UO condition (RMNF-UO), Rate-Monotonic First-Fit with UO condition (RMFF-UO) and Rate-Monotonic Best-Fit with UO condition (RMBF-UO) [25], Rate-Monotonic Global Tasks/M (RMGT/M) [8], Earliest-Deadline-First Next-Fit (EDF-NF) and Earliest-Deadline-First First-Fit (EDF-FF) [21], and Earliest-Deadline-First Best-Fit (EDF-BF).
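The "heuristic × condition" construction can be sketched as follows. This is our illustration of the idea, not the paper's implementation: a partitioning algorithm is just a bin-packing strategy parameterized by a per-processor schedulability test. The EDF test (total utilization at most 1) and the Liu-Layland RM bound stand in for the WC-style conditions; the function names are ours.

```python
# Sketch: a partitioning algorithm = bin-packing heuristic + schedulability test.
from typing import Callable, List

def edf_ok(utils: List[float]) -> bool:
    # EDF condition on one processor: total utilization <= 1 (Liu & Layland)
    return sum(utils) <= 1.0

def rm_ok(utils: List[float]) -> bool:
    # Liu & Layland RM bound for n tasks: sum u <= n * (2^(1/n) - 1)
    n = len(utils)
    return sum(utils) <= n * (2 ** (1.0 / n) - 1)

def first_fit(utils: List[float], fits: Callable[[List[float]], bool]) -> int:
    """Assign each task (given by its utilization) to the first processor
    that still passes `fits`; return the number of processors used."""
    procs: List[List[float]] = []
    for u in utils:
        for p in procs:
            if fits(p + [u]):
                p.append(u)
                break
        else:
            procs.append([u])   # no processor fits: open a new one
    return len(procs)

utils = [0.6, 0.5, 0.4, 0.3, 0.2]
print(first_fit(utils, edf_ok))  # an EDF-FF-style algorithm
print(first_fit(utils, rm_ok))   # an RM first-fit variant
```

Swapping `first_fit` for a next-fit or best-fit strategy, or `edf_ok` for another condition, yields the other members of the families A and B above.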
This paper develops heuristic partitioning scheduling algorithms that tolerate processor failures with this mechanism and attempts to minimize the number of processors. On each individual processor, the task deadlines are guaranteed by the RM or the EDF policy. For periodic task scheduling, it has been proven that the release times of tasks do not affect their schedulability [20]; therefore, the release time r_i and deadline d_i can be safely omitted when we consider solutions to the problem. In the following, we address the pseudocode of the fault-tolerant real-time scheduling algorithms that we simulated in this paper. Let the processors be indexed as P_1, P_2, ..., P_m, each initially in the idle state, i.e., with zero utilization. The tasks {τ_1, τ_2, ..., τ_n} are scheduled in that order. λ is the maximum number of versions of a task, i.e., λ = max_{1≤i≤n} λ_i. To schedule a version of task τ_i, find the smallest index j of a processor such that the version, together with all the task versions that have already been assigned to processor P_j, can be feasibly scheduled according to the chosen condition (RM condition or EDF condition) for a single processor, and such that no version of τ_i has previously been assigned to P_j; then assign the version to P_j. To facilitate the discussion, two types of systems are defined as follows. Type A: a system whose task sets are without redundancy, i.e., each task is unique in the system. Type B: a system whose task sets are with redundancy, i.e., each task has multiple versions in the system.
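The generic assignment rule above can be sketched in Python. This is our illustration under simplifying assumptions (the EDF utilization test stands in for the per-processor condition, and the identifiers are ours): each version goes to the smallest-index processor that both passes the test and holds no other version of the same task.

```python
# Sketch of the fault-tolerant first-fit version placement (assumed EDF test).
from typing import Dict, List, Set, Tuple

def assign_versions(tasks: List[Tuple[str, int, float]]) -> Dict[int, List[str]]:
    """tasks: (name, redundancy lambda_i, utilization u_i).
    Per-processor test: total utilization <= 1 (EDF condition)."""
    load: List[float] = []          # current utilization U_j of each processor
    names: List[Set[str]] = []      # task names already placed on each processor
    placement: Dict[int, List[str]] = {}
    for name, redundancy, u in tasks:
        for _ in range(redundancy):
            j = 0
            # skip processors that are too full or already hold a version of this task
            while j < len(load) and (load[j] + u > 1.0 or name in names[j]):
                j += 1
            if j == len(load):      # no existing processor works: open a new one
                load.append(0.0)
                names.append(set())
            load[j] += u
            names[j].add(name)
            placement.setdefault(j, []).append(name)
    return placement

plan = assign_versions([("t1", 2, 0.6), ("t2", 2, 0.3)])
print(plan)
```

With two versions each, the two tasks end up mirrored on two processors, so either processor can fail while one version of every task still runs.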
Figure 1. Pseudocode of the RMST-FF algorithm:

1) Set i = 1 and m = 1; /* i denotes the i-th task, m the number of processors allocated */
2) a) S_m = V_i; /* S_j holds the smallest V value on processor P_j */
   b) Set l = 1; /* l denotes the l-th version of task τ_i */
   c) Set j = 1; /* j denotes the processor index */
   d) If (the l-th version of task τ_i, together with the versions already assigned to processor P_j, can be feasibly scheduled according to the RMST condition on a single processor, and no version of task τ_i has previously been assigned to P_j) then go to Step 3;
      Else { j = j + 1; if (U_j == 0) then S_j = V_i; go to Step 2(d); }
3) Assign the l-th version of task τ_i to processor P_j; /* i.e., place it in the dispatch queue of P_j; each processor schedules its own dispatch queue according to the RM policy */
   /* RMST condition: U_j + u_i ≤ max{log_e 2, 1 − β·log_e 2}, where β = V_i − S_j,
      U_j is the total utilization of processor P_j and u_i is the utilization of task τ_i */
4) If (l > λ_i) then /* all versions of task τ_i have been scheduled */ go to Step 5;
   Else { l = l + 1; go to Step 2(c); }
5) If (j > m) then set m = j;
6) If (i > n) then /* all tasks have been assigned */ return;
   Else { i = i + 1; go to Step 2(b); }
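An executable rendering of this pseudocode, restricted to one version per task (type A) for brevity, might look as follows. The function names are ours; `s[j]` plays the role of S_j and the feasibility test is the RMST condition U_j + u_i ≤ max{ln 2, 1 − β·ln 2} with β = V_i − S_j, with First-Fit in place of RMST's Next-Fit scan.

```python
# Illustrative Python sketch of RMST-FF for type A (one version per task).
import math
from typing import List, Tuple

def rmst_ff(tasks: List[Tuple[float, float]]) -> int:
    """tasks: list of (c_i, T_i). Returns the number of processors used."""
    def v(period: float) -> float:
        # V_i = log2 T_i - floor(log2 T_i)
        return math.log2(period) - math.floor(math.log2(period))

    # RMST first sorts the tasks in increasing order of V_i
    tasks = sorted(tasks, key=lambda t: v(t[1]))
    util: List[float] = []   # U_j: total utilization of processor j
    s: List[float] = []      # S_j: smallest V value on processor j
    ln2 = math.log(2)
    for c, T in tasks:
        u, vi = c / T, v(T)
        for j in range(len(util)):           # First-Fit instead of Next-Fit
            beta = vi - s[j]
            if util[j] + u <= max(ln2, 1 - beta * ln2):   # RMST condition
                util[j] += u
                break
        else:
            util.append(u)                   # open a new processor
            s.append(vi)
    return len(util)

print(rmst_ff([(1, 4), (1, 8), (2, 5), (1, 3)]))
```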
However, in systems that require task execution in multiple-version form, the performance of both RMST and RMGT degrades, and they fall among the worst algorithms [6]. RMST is a static algorithm based on the PO condition that targets task sets with small α. The RMST algorithm first sorts the tasks in increasing order of V_i, where V_i = log_2 T_i − ⌊log_2 T_i⌋ [7], and then assigns them to processors almost in the same manner as the Next-Fit bin-packing heuristic. The RMST algorithm works well for scheduling task sets of type A, as the simulation results in the next section indicate. The RMST algorithm is therefore suitable for systems whose task sets have α ≤ 0.5 and whose tasks execute uniquely. However, for systems of type B, the RMST algorithm is not a good choice. Since in these systems the versions of a task are executed on different processors simultaneously, the Next-Fit heuristic ends up allocating roughly two tasks per processor: all versions of a task arrive at the same time and must be executed on different processors, so the number of processors increases linearly. If we change the strategy the RMST algorithm uses to distribute tasks among processors, in other words, if we change the Next-Fit bin-packing
heuristic strategy to the First-Fit bin-packing heuristic strategy, then the RMST algorithm becomes the best algorithm for type B. We name this modified algorithm RMST-FF (Figure 1); in the following, the RMST-FF algorithm is compared with the RMST algorithm. The RMST algorithm has complexity O(n log n) and is composed of a nonlinear part and a linear part: first, the tasks are sorted with respect to V, as mentioned above, which takes O(n log n), and then they are distributed among the processors with the Next-Fit bin-packing heuristic, which takes O(n). The RMST-FF algorithm also has complexity O(n log n), but it is formed of two nonlinear parts: the tasks are first sorted with respect to V and then distributed among the processors with the First-Fit bin-packing heuristic, and each part takes O(n log n). Looking more closely, the time complexity of the RMST algorithm is O(n log n) + O(n), while that of the RMST-FF algorithm is O(n log n) + O(n log n); thus the RMST algorithm is slightly faster than the RMST-FF algorithm. For systems of type A whose task sets have a small load factor, i.e., α ≤ 0.5, the RMST algorithm is preferable to the RMST-FF algorithm due to its time complexity and easier management. For systems in which multiple versions of tasks are executed on different processors simultaneously (type B), the RMST-FF algorithm is the better choice, since it improves the results by between 7% and 78% in comparison with the RMST and other well-known static partitioning scheduling algorithms. An interesting observation is that the RMST-FF algorithm is suitable for all values of α and is almost independent of the utilization of the tasks.
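The effect of replication on the two distribution strategies can be shown with a toy experiment. This is our illustration, not the paper's simulator: it uses a plain utilization test with the ln 2 bound, two copies per task, and the rule that no processor may hold two copies of the same task. Next-Fit never revisits earlier, half-empty processors, so its processor count grows roughly linearly in the number of tasks; First-Fit backfills them.

```python
# Toy comparison: Next-Fit vs First-Fit under 2-way replication
# (assumed per-processor test: total utilization <= ln 2).
import math
from typing import List

def count_procs(utils: List[float], copies: int, first_fit: bool) -> int:
    bound = math.log(2)
    load: List[float] = []
    members: List[set] = []   # task ids on each processor (copies must not co-locate)
    cursor = 0                # Next-Fit scan position
    for tid, u in enumerate(utils):
        for _ in range(copies):
            j = 0 if first_fit else cursor
            while j < len(load) and (load[j] + u > bound or tid in members[j]):
                j += 1
            if j == len(load):          # open a new processor
                load.append(0.0)
                members.append(set())
            load[j] += u
            members[j].add(tid)
            cursor = j
    return len(load)

# 8 tasks of utilization 0.2, each with 2 copies:
print(count_procs([0.2] * 8, 2, first_fit=False))  # Next-Fit style
print(count_procs([0.2] * 8, 2, first_fit=True))   # First-Fit style
```

In this run Next-Fit strands most processors at a load of 0.4, while First-Fit fills them to three copies each, which mirrors the "almost two tasks per processor" behavior described above.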
Let N and N_0 be the number of processors required by the RMST-FF algorithm and the minimum number of processors required to feasibly schedule a given set of tasks, respectively. In the RMST-FF schedule, let τ_{j,1}, τ_{j,2}, ..., τ_{j,s_j} be the s_j tasks assigned to processor P_j, let

U_j = Σ_{k=1}^{s_j} u_{j,k}

be the total utilization of P_j, let V_{j,i} be the V value of task τ_{j,i}, and let

β_j = max_{1≤i≤s_j} V_{j,i} − min_{1≤i≤s_j} V_{j,i},   V_i = log_2 T_i − ⌊log_2 T_i⌋.

Also, let u_a be the utilization of an arriving task τ_a. In the worst case of the RMST-FF algorithm mentioned above, a new processor P_N is opened only when τ_a cannot be assigned to any of the processors P_1, ..., P_{N−1}, i.e., the RMST condition fails on each of them:

Σ_{k=1}^{s_j} u_{j,k} + u_a > max{ln 2, 1 − β_j ln 2} ≥ 1 − β_j ln 2, for j = 1, ..., N−1.  (Eq.5.1)

Summing Eq.5.1 over the first N−1 processors gives

Σ_{j=1}^{N−1} U_j + (N−1) u_a > (N−1) − ln 2 · Σ_{j=1}^{N−1} β_j.  (Eq.5.2)

Since the V values lie in [0, 1) and the tasks are assigned in increasing order of V, we have Σ_{j=1}^{N} β_j ≤ 1 and u_a ≤ α; therefore

Σ_{j=1}^{N−1} U_j + (N−1) α > (N−1) − ln 2.  (Eq.5.3)

Since Σ_{j=1}^{N−1} U_j ≤ Σ_{j=1}^{N} U_j = Σ_{i=1}^{n} u_i ≤ N_0, we have

N_0 + (N−1) α > (N−1) − ln 2,  (Eq.5.4)

N ≤ (N_0 + 1 + ln 2) / (1 − α).  (Eq.5.5)

Dividing by N_0 yields

N / N_0 ≤ 1 / (1 − α) + (1 + ln 2) / ((1 − α) N_0),  (Eq.5.6)

N / N_0 ≤ 1 / (1 − α) + 1.7 / ((1 − α) N_0).  (Eq.5.7)

As Eq.5.7 shows, in the worst case the RMST-FF algorithm behaves in the same manner as the RMST algorithm; in this case the RMST-FF distributes an arriving task among the processors with the Next-Fit heuristic, just like the RMST. Fortunately, this case occurs rarely.
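A quick numeric reading of the bound (a sketch; the function name is ours): the additive term in Eq.5.7 vanishes as N_0 grows, leaving the 1/(1−α) factor over the optimum.

```python
# Evaluate the worst-case ratio bound of Eq.5.7:
# N/N0 <= 1/(1 - alpha) + 1.7/((1 - alpha) * N0)
def rmst_ff_bound(alpha: float, n0: int) -> float:
    return 1.0 / (1.0 - alpha) + 1.7 / ((1.0 - alpha) * n0)

# As N0 grows, the bound tends to 1/(1 - alpha):
for n0 in (10, 100, 1000):
    print(n0, rmst_ff_bound(0.5, n0))
```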
6. Simulation Results
In this section, we present simulation results
evaluating the performance of the scheduling algorithms presented in the previous sections. For each point in every graph, 1000 task sets were generated. In each task set, T_i is generated following a uniform distribution with 10 ≤ T_i ≤ 500, and c_i is generated following a uniform distribution in the range 0 < c_i ≤ α·T_i, where α = max_{i=1,2,...,n} (c_i / T_i). The number of versions of each task in type B is uniformly distributed in the range 2 ≤ λ_i ≤ 5; tasks in type A are unique. In order to make fair comparisons, we run the same data through all algorithms. In Figures 2, 3, 4, 5, 6 and 7, an infinite number of processors is assumed; in this case, the metric used to measure the performance of the algorithms is the number of processors on which the task sets can be feasibly scheduled and processor failures can be tolerated. The performance of the system is measured for four values of α: α = 0.2, α = 0.5, α = 0.8 and α = 1.0. Because of space restrictions, only α = 0.2 and α = 0.8 are shown in the figures. From the figures and tables, we can conclude that some algorithms yield the best performance only for certain values of α and certain types of systems. For example, in Figure 2 we can observe that RMST and RMGT are the best static algorithms in comparison with the other static algorithms when α = 0.2 for type A, but they are among the worst static algorithms when α = 0.8. Another example is that EDF-BF is the best dynamic scheduling algorithm for any value of α. The simulation results show that most of these algorithms require over 49% more processors in type B than in type A; this figure is over 60% for scheduling algorithms based on the Next-Fit bin-packing heuristic. The Best-Fit bin-packing heuristic is almost always the best approach for distributing tasks among processors.
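The task-set generation described above can be sketched as follows. This is our interpretation of the stated distributions (uniform periods in [10, 500], utilizations bounded by α, and 2 to 5 versions per task for type B); the function and parameter names are ours.

```python
# Sketch of the simulation's task-set generator (assumed interpretation).
import random
from typing import List, Tuple

def generate_task_set(n: int, alpha: float, type_b: bool,
                      rng: random.Random) -> List[Tuple[float, float, int]]:
    """Returns a list of (c_i, T_i, lambda_i) triples."""
    tasks = []
    for _ in range(n):
        period = rng.uniform(10, 500)            # 10 <= T_i <= 500
        c = rng.uniform(1e-9, alpha * period)    # 0 < c_i <= alpha * T_i
        versions = rng.randint(2, 5) if type_b else 1
        tasks.append((c, period, versions))
    return tasks

rng = random.Random(42)   # fixed seed so all algorithms see the same data
ts = generate_task_set(5, 0.2, True, rng)
assert all(0 < c <= 0.2 * T and 2 <= v <= 5 for c, T, v in ts)
```

Running every algorithm on the same seeded task sets reproduces the paper's "same data through all algorithms" comparison setup.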
The Next-Fit bin-packing heuristic is a simple approach to distributing tasks among processors and has complexity O(n), but it is the weakest of all the bin-packing heuristics at distributing tasks among processors, especially for type B, as mentioned earlier. Scheduling algorithms based on the Next-Fit heuristic increase the number of required processors by 41% relative to the other scheduling algorithms for type B.
Although the EDF policy is harder for the scheduler to manage than the RM policy, in the dynamic category the EDF policy performs much better than the RM policy: EDF-based algorithms need over 20% and 16% fewer processors than similar RM-based algorithms for type A and type B, respectively. As α grows, the number of required processors increases for both type A and type B as follows:
o When α is changed from 0.2 to 0.5, it increases by 56%.
o When α is changed from 0.5 to 0.8, it increases by 36%.
o When α is changed from 0.8 to 1.0, it increases by 15%.
Tables 1, 2, 3 and 4 show the performance order of the static and dynamic partitioning scheduling algorithms for both type A and type B.
Table 1. Performance order of static partitioning scheduling algorithms for type A

Algorithm   α=0.2  α=0.5  α=0.8  α=1.0
RMST          1      2      6      6
RMGT          1      1      5      5
RM-FFDU       3      2      1      1
FFDUF         4      4      2      2
RMBF          5      5      3      3
RMFF          6      6      4      4
RMNF          7      7      7      7
Table 2. Performance order of static partitioning scheduling algorithms for type B

Algorithm   α=0.2  α=0.5  α=0.8  α=1.0
RMST          5      5      6      6
RMGT          5      6      5      5
RM-FFDU       1      1      1      1
FFDUF         2      2      2      2
RMBF          3      3      3      3
RMFF          4      4      4      4
RMNF          5      7      7      7
Table 3. Performance order of dynamic partitioning scheduling algorithms for type A

Algorithm   α=0.2  α=0.5  α=0.8  α=1.0
EDF-BF        1      1      1      1
EDF-FF        2      2      2      2
EDF-NF        3      3      3      3
RMNF-WC      10     10     10     10
RMNF-UO       9      9      9      9
RMFF-WC       8      8      7      7
RMFF-UO       6      6      5      5
RMBF-WC       7      7      6      6
RMBF-UO       5      5      4      4
RMGT/M        4      4      8      8
[Figure 2. Number of processors required as a function of the number of tasks using static partitioning scheduling algorithms for type A: (a) α=0.2, (b) α=0.8]

[Figure 3. Number of processors required as a function of the number of tasks using static partitioning scheduling algorithms for type B: (a) α=0.2, (b) α=0.8]

[Figure 4. Number of processors required as a function of the number of tasks using dynamic partitioning scheduling algorithms for type A: (a) α=0.2, (b) α=0.8]

[Figure 5. Number of processors required as a function of the number of tasks using dynamic partitioning scheduling algorithms for type B: (a) α=0.2, (b) α=0.8]
Table 4. Performance order of dynamic partitioning scheduling algorithms for type B

Algorithm   α=0.2  α=0.5  α=0.8  α=1.0
EDF-BF        1      1      1      1
EDF-FF        1      1      1      2
EDF-NF        7      7      7      7
RMNF-WC       7      9     10     10
RMNF-UO       7      8      9      9
RMFF-WC       6      6      6      6
RMFF-UO       4      4      5      5
RMBF-WC       5      5      4      4
RMBF-UO       3      3      3      2
RMGT/M        8      8      8      8
After evaluating the performance of seven static and ten dynamic algorithms and analyzing the results, we improved the RMST algorithm for systems of type B (Section 5). As the simulation results show, the modified RMST (RMST-FF) decreases the number of required processors by between 7% and 78% in comparison with the RMST and other well-known static partitioning scheduling algorithms in systems of type B. The simulation results also show that the RMST-FF algorithm is independent of α and suitable for scheduling tasks with any value of α. Table 5 shows the reduction in the number of required processors for the RMST-FF with respect to the RMST and other well-known static partitioning scheduling algorithms for systems of type B. As can be seen, the number of required processors is far less than for the RMST and the other algorithms.
Table 5. Reduction of the number of required processors for the modified RMST with respect to RMST and other algorithms

With respect to   α=0.2  α=0.5  α=0.8  α=1.0
RMST               78%    46%    31%    20%
RMNF               78%    49%    38%    22%
RMFF               25%    19%    14%     9%
RMBF               25%    18%    14%     9%
FFDUF              24%    16%    11%     7%
RM-FFDU            24%    16%    11%     7%
RMGT               78%    47%    32%    20%
[Figure 6. Performance comparison between the modified RMST algorithm and other static partitioning scheduling algorithms for type A: (a) α=0.2, (b) α=0.8]
7. Conclusions
In this paper, the performance of the well-known partitioning scheduling algorithms has been studied for real-time multiprocessor systems, with and without fault-tolerance requirements. Multiple versions of each task have been executed on different processors to tolerate processor failures. The performance of seven static and ten dynamic partitioning scheduling algorithms has been evaluated. The partitioning scheduling algorithms studied here are heuristic
[Figure 7. Performance comparison between the modified RMST algorithm and other static partitioning scheduling algorithms for type B: (a) α=0.2, (b) α=0.8]
algorithms that are formed by combining any of the bin-packing algorithms with any of the schedulability conditions for the RM and EDF policies. To evaluate the performance of the algorithms, a tool has been designed and implemented. The experimental results showed that the RMST and the RMGT are the best static algorithms for scheduling tasks in non-fault-tolerant systems with task sets having α ≤ 0.5. The RM-FFDU is suitable for scheduling tasks in non-fault-tolerant systems with task sets having α ≥ 0.5, and it is also the best static scheduling algorithm in fault-tolerant systems for any value of α. Among the dynamic algorithms, the EDF-BF and the EDF-FF are the best for both fault-tolerant and non-fault-tolerant systems and for any value of α. In fault-tolerant systems, a processor overhead of about 49% was observed. To decrease this overhead, the RMST was modified. The results show that the modified RMST decreases the processor overhead by between 7% and 78% in comparison with the original RMST, the RM-FFDU and other well-known static partitioning scheduling algorithms.
References
[1] R. Al-Omari, A. K. Somani, G. Manimaran, "Efficient overloading techniques for primary-backup scheduling in real-time systems", Journal of Parallel and Distributed Computing, vol. 64, no. 5, pp. 629-648, 2004.
[2] B. Andersson, J. Jonsson, "Preemptive multiprocessor scheduling anomalies", Technical Report 01-9, Dept. of Computer Engineering, Chalmers University of Technology, 2001.
[3] I. Assayad, A. Girault, H. Kalla, "A bi-criteria scheduling heuristic for distributed embedded systems under reliability and real-time constraints", Proc. International Conference on Dependable Systems and Networks (DSN'04), Florence, Italy, June 2004.
[4] A. Avizienis, "The N-version approach to fault-tolerant software", IEEE Trans. Software Engineering, vol. SE-11, no. 12, pp. 1491-1501, Dec. 1985.
[5] A. A. Bertossi, L. V. Mancini, F. Rossini, "Fault-tolerant rate-monotonic first-fit scheduling in hard real-time systems", IEEE Trans. Parallel and Distributed Systems, vol. 10, no. 9, pp. 934-945, 1999.
[6] H. Beitollahi, S. G. Miremadi, "A fault-tolerant static scheduling algorithm for real-time multiprocessor systems", 13th IEEE International Conference on Real-Time Systems (RTS'05), Paris, France, April 2005.
[7] A. Burchard, J. Liebeherr, Y. Oh, S. H. Son, "New strategies for assigning real-time tasks to multiprocessor systems", IEEE Trans. Computers, vol. 44, no. 12, pp. 1429-1442, Dec. 1995.
[8] A. Burchard, Y. Oh, J. Liebeherr, S. H. Son, "A linear-time online task assignment scheme for multiprocessor systems", 11th IEEE Workshop on Real-Time Operating Systems and Software, Seattle, WA, pp. 28-31, May 1994.
[9] J. Carpenter, S. Funk, P. Holman, A. Srinivasan, J. Anderson, S. Baruah, "A categorization of real-time multiprocessor scheduling problems and algorithms", in Handbook of Scheduling: Algorithms, Models, and Performance Analysis, J. Y-T. Leung (ed.), Chapman Hall/CRC Press, 2003.
[10] S. Davari, S. K. Dhall, "On a periodic real-time task allocation problem", Proc. 19th Annual International Conference on System Sciences, pp. 133-141, 1986.
[11] A. Girault, H. Kalla, Y. Sorel, "A scheduling heuristic for distributed real-time embedded systems tolerant to processor and communication media failures", International Journal of Production Research, vol. 42, no. 14, pp. 2877-2898, July 2004.
[12] A. Girault, C. Lavarenne, M. Sighireanu, Y. Sorel, "Fault-tolerant static scheduling for real-time distributed embedded systems", Research Report 4006, INRIA, Sept. 2000.
[13] S. Ghosh, R. G. Melhem, D. Mossé, J. S. Sarma, "Fault-tolerant rate-monotonic scheduling", Journal of Real-Time Systems, vol. 15, no. 2, Sept. 1998.
[14] N. Kandasamy, J. P. Hayes, B. T. Murray, "Scheduling algorithms for fault tolerance in real-time embedded systems", in Dependable Network Computing, D. Avresky (ed.), Kluwer Academic Publishers, Boston, 1999.
[15] H. Kopetz, Real-Time Systems: Design Principles for Distributed Embedded Applications, Kluwer, Boston, 1997.
[16] H. Kopetz et al., "Distributed fault-tolerant systems: the MARS approach", IEEE Micro, vol. 9, no. 1, pp. 25-40, Feb. 1989.
[17] S. K. Dhall, C. L. Liu, "On a real-time scheduling problem", Operations Research, vol. 26, no. 1, pp. 127-140, 1978.
[18] S. Lauzac, R. Melhem, D. Mossé, "An improved rate-monotonic admission control and its applications", IEEE Trans. Computers, vol. 52, no. 3, March 2003.
[19] F. Liberato, R. G. Melhem, D. Mossé, "Tolerance to multiple transient faults for aperiodic tasks in hard real-time systems", IEEE Trans. Computers, vol. 49, no. 9, pp. 906-914, 2000.
[20] C. Liu, J. Layland, "Scheduling algorithms for multiprogramming in a hard-real-time environment", Journal of the ACM, vol. 20, no. 1, pp. 46-61, Jan. 1973.
[21] J. M. López, M. García, J. L. Díaz, D. F. García, "Worst-case utilization bound for EDF scheduling on real-time multiprocessor systems", Proc. 12th Euromicro Conference on Real-Time Systems, pp. 25-33, Stockholm, Sweden, June 2000.
[22] M. Lyu (ed.), Software Fault Tolerance, John Wiley & Sons, New York, 1995.
[23] G. Manimaran, A. Manikutty, C. S. R. Murthy, "A tool for evaluating dynamic scheduling algorithms for real-time multiprocessor systems", The Journal of Systems and Software, vol. 50, no. 2, pp. 131-149, 2000.
[24] G. Manimaran, C. Siva Ram Murthy, "An efficient dynamic scheduling algorithm for multiprocessor real-time systems", IEEE Trans. Parallel and Distributed Systems, vol. 9, no. 3, pp. 312-319, March 1998.
[25] Y. Oh, S. H. Son, "Allocating fixed-priority periodic tasks on multiprocessor systems", Journal of Real-Time Systems, vol. 9, pp. 207-239, 1995.
[26] Y. Oh, S. H. Son, "New strategies for assigning real-time tasks to multiprocessor systems", IEEE Trans. Computers, vol. 44, no. 12, pp. 1429-1442, 1995.