

Copyright IEEE. Proceedings of the 29th Annual Simulation Symposium, New Orleans, Louisiana, April 9-11, 1996 (to appear).

Scheduling Analysis Using Discrete Event Simulation


Edward J. Williams
206-2 Engineering Computer Center, Mail Drop 3
Ford Motor Company
Dearborn, Michigan 48121-2053 USA

Igal Ahitov
Production Modeling Corporation
Three Parklane Boulevard, Suite 910 West
Dearborn, Michigan 48126 USA

Abstract
We describe a production shop which needed to undertake schedule analyses at the macro level of planning and explain the methods of using simulation to obtain valid, credible schedule analyses quickly. After discussion of the simulation model itself, we present actual simulation results and conclusions. These successful simulation results help the production shop attain nimble adaptation to product-mix requirements, optimal in-process buffer sizing, and speedy confirmation of the ability of scheduling proposals to meet throughput targets.


1. Introduction
Simulation has been defined as "the process of designing a mathematical or logical model of a real system and then conducting computer-based experiments with the model to describe, explain, and predict the behavior of the real system" [5]. Scheduling problems, by contrast, deal with deciding the processing times of the jobs comprising a project, given constraints on personnel, equipment, and facilities [8]. Inasmuch as simulation is a powerful tool for the analysis of scheduling problems, algorithms, and policies, simulation and scheduling analyses can work synergistically toward process improvement.

Controlling production operations with economic effectiveness is an ever-present challenge to operations management. This challenge presents itself at two levels: the macro level of long-term work balancing and the micro level of daily or even hourly facility control [1]. Long-standing obstacles to this synergy of simulation and scheduling, especially at the macro level, include differences between scheduling tasks and traditional simulation applications and the absence of scheduling capabilities within typical process-simulation software [6]. Additionally, many of the scheduling packages now in use assume deterministic systems. Use of simulation as a precursor to scheduling permits consideration of probabilistic events as well.

As stressed by [7], one of the most vital steps in a simulation study is the careful statement of project objectives; until precise knowledge of the issues to be addressed by the model is available, it is impossible to decide the appropriate level of model detail. The model complexity then needn't, and shouldn't, exceed the minimum required to accomplish those project objectives [3]. Consistent with the objectives of specific projects, models may be macro models (low level of detail, encompassing a broadly defined system) or micro models (high level of detail, encompassing a narrowly defined system).

First, we describe the production shop whose scheduling concerns were the motivation for the simulation study. We then describe the simulation model itself, stressing its adaptation to these scheduling concerns and how the need for these adaptations influenced the building of the model. Finally, we present the conclusions drawn from the model and indicate promising directions for further work.

2. Overview of the System


The system in question is a manufacturing shop (in design, not yet an existing system) for the production of an automotive component. The components are naturally subdivided into three distinct families, denoted x1, x2, and x3 in this paper. All processing times are fixed within a family. In turn, each of these three families comprises two part types (x11 and x12, x21 and x22, and x31 and x32). In this design, the envisioned part flow is the following (a routing sketch appears after this list):

- Operations 10, 20, and 30 are undertaken at three work centers each. The three work centers for each operation work in parallel, and the three operations are serial. These operations are CNC (computer numerically controlled) machines. Since operation 20 is slower than operation 10, there is a buffer of capacity one between them.
- Operation 40, a gauging operation downstream from these CNC machines, is likewise performed at three parallel work centers.
- Operation 50, a drilling operation, is performed at two parallel work centers downstream from the gauging operation.
- Operation 60, the cup press (used only by the x2 family), is represented by a single work center downstream from the two drilling operations.
- Operation 70, a balancing operation, is represented by two work centers downstream from the single cup press (for x2 parts) or downstream from the two drilling operations (for x1 or x3 parts).
- Operation 80, a grinding operation, is represented by two work centers, each downstream from one of the balancing centers.
- Operation 90, a washing operation, is represented by one machine comprising two independent paths, each fed by one grinding center.
- Operation 100, a second gauging operation, is represented by two work centers, each downstream from one of the washing centers.
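As one way to make these routings concrete, the following minimal sketch (Python; the names and data structures are illustrative assumptions, not taken from the authors' simulation model) lists the operation sequence each family follows; only the x2 family visits the cup press (operation 60):

```python
# Hypothetical routing table inferred from the part-flow description above.
# Operations are identified by their numbers.
COMMON_HEAD = [10, 20, 30, 40, 50]   # CNC machining, gauging, drilling
COMMON_TAIL = [70, 80, 90, 100]      # balancing, grinding, washing, final gauging

ROUTINGS = {
    "x1": COMMON_HEAD + COMMON_TAIL,          # bypasses the cup press
    "x2": COMMON_HEAD + [60] + COMMON_TAIL,   # includes the cup press
    "x3": COMMON_HEAD + COMMON_TAIL,          # bypasses the cup press
}

if __name__ == "__main__":
    for family, route in ROUTINGS.items():
        print(family, "->", " -> ".join(f"OP{op}" for op in route))
```

Expressing the routings as data keeps family-specific flow separate from the machine logic, which is convenient if routings are tuned later in a study.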

The existence of multiple part families with different processing sequences, like those indicated above, requires flexibility of manufacturing similar to that described by [11]. This work flow design is illustrated in Figure 1. The first four operations appear in triplicate; the identical triplets are dedicated to the x1, the x2, and the x3 families respectively. Of these families, the x1 family parts have the highest target production rate. Downstream from operation 40, x1 family parts follow the upper or middle path, bypassing operation 60; x2 family parts follow the middle or lower path, including operation 60; and x3 family parts follow the middle or lower path, bypassing operation 60.

Table 1, below, specifies the gross output capabilities of each operation. Operations 40, 60, 90, and 100 have no changeover time. For all other operations except operation 50, there is a setup time whenever a part from a different family arrives. Additionally, at operation 50, setup time is required whenever a different part type arrives. In conjunction with the dedication of Line #1 (the upper path in Figure 1) to the x1 family, as noted above, these considerations imply that operation 50 is the only operation in Line #1 ever requiring changeover time.

The significant system metrics, and hence the issues specifying the objectives of the project, are:

- the ability of the system to produce the target product mix;
- the ability of Line #2 (the middle path in Figure 1) to support Line #1 in production of the highest-demand x1 family;
- the effects of different schedules and sequences on throughput;
- the effects of the buffer sizes immediately upstream from work centers on throughput.

The last of these metrics is of particular importance relative to the anticipated installation of conveyors between operations 20 and 30, since validating the ability to keep these buffer sizes small implies high cost avoidance in capital expenditure, conveyor installation costs, and floor space requirements. These metrics, and their underlying economic motivations, are similar to those described by [10].

Table 1: Gross Output Capabilities by Operation

Operation          x1 Family   x2 Family   x3 Family
OP10  CNC              85          88         102
OP20  CNC              82          77          95
OP30  CNC              85          78         109
OP40  Gage            240         240         240
OP50  Drill           200         200         200
OP60  Cup Press       N/A         225         N/A
OP70  Balancer        300         300         300
OP80  Grinder         220         220         220
OP90  Washer          700         700         700
OP100 Gage            240         240         240

3. Modeling Approach

The fundamental model-building approach entails representing each of the principal components of the system at the macro level in terms of its processing time for the different product families, the frequency and duration of its maintenance (both scheduled and unscheduled), its changeover times between different part families, and the size of the buffer immediately upstream from it. The reasons for the choice of this macro-level representation are the increased adaptability of the simulation model to change as the system design is refined, the need to avoid including too much weakly understood detail in the model early in its life cycle, the lack of detailed knowledge of the type and capacity of the material-handling equipment to be used, and the ability to build, verify, and validate the model in time for its beneficial recommendations to be fully acted upon by management. This macro approach, adapted here to assess long-range strategic policies for a system not yet implemented, may be contrasted with the micro approach used to guide real-time decisions in an existing system, as described in [4].

In accordance with this macro approach to model building, simplifying assumptions are appropriate. First, especially since transfer times between stations were known only approximately, all details of material-handling systems are omitted from the model. Second, the model assumes that appropriately skilled labor is always available for repair, tooling changes, and changeover. Third, an infinite supply of parts is assumed to exist at operation 10 for each of the three lines. Fourth, taking historical scrap rates from analogous, existing systems into account, the input into the system per two-day period required to achieve the target production rates is assumed to occur as indicated in Table 2, below.
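As a way of picturing this macro-level representation, the sketch below (Python; the field names and all example values are hypothetical, not the authors') gathers the attributes just listed into one record per principal component:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class MacroStation:
    """Macro-level description of one work center, per the attributes listed above."""
    name: str
    cycle_time_min: Dict[str, float]  # processing time (minutes per part) by product family
    changeover_min: float             # setup time when the incoming part family changes
    maint_interval_parts: int         # scheduled maintenance: parts between tooling changes
    maint_duration_min: float         # scheduled maintenance: minutes per tooling change
    mtbf_busy_min: float              # unscheduled maintenance: mean busy minutes between failures
    mttr_min: float                   # unscheduled maintenance: mean minutes to repair
    upstream_buffer: int = 1          # capacity of the buffer immediately upstream

# Example record (values invented purely for illustration):
op20 = MacroStation(
    name="OP20 CNC",
    cycle_time_min={"x1": 0.7, "x2": 0.8, "x3": 0.6},
    changeover_min=5.0,
    maint_interval_parts=400,
    maint_duration_min=12.0,
    mtbf_busy_min=480.0,
    mttr_min=20.0,
    upstream_buffer=1,
)
```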

Table 2: Assumed Input Rates by Family and Part

Family      Part        Input   Target Output
x1 family   Part x11     2622            2586
x1 family   Part x12      796             784
x2 family   Part x21      894             880
x2 family   Part x22      636             624
x3 family   Part x31     1724            1706
x3 family   Part x32      430             422
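The inputs in Table 2 exceed their corresponding target outputs by roughly one to two percent; the short sketch below (Python, for illustration only) recovers the implied scrap allowance for each part directly from the table:

```python
# Input and target-output quantities per two-day period, copied from Table 2.
TABLE_2 = {
    "x11": (2622, 2586),
    "x12": (796, 784),
    "x21": (894, 880),
    "x22": (636, 624),
    "x31": (1724, 1706),
    "x32": (430, 422),
}

for part, (inp, target) in TABLE_2.items():
    implied_scrap = (inp - target) / inp  # fraction of the input assumed lost to scrap
    print(f"{part}: input {inp}, target {target}, implied scrap {implied_scrap:.2%}")
```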

More specifically, this model is constructed using the SIMAN/ARENA software, but not the Advanced Manufacturing Template. ARENA animation capabilities, such as entity color-coding by part type, color changes of resource icons (busy, idle, down), and on-screen counters showing operation throughput to date and current upstream queue size, provide generous help to both the modeler and the user in verifying model behavior and visualizing system performance. Operation downtimes are modeled both as count-based (for tooling changes) and as busy-time-based (for random malfunctions); under these conditions, neither type of downtime can begin while the other type is in progress. This subdivision of downtime by cause, and the attention to the possibility of overlapping downtimes, concur with modeling considerations discussed in [12].

The details of buffering afforded by the Advanced Manufacturing Template are extraneous given the avoidance of material-handling detail indicated above. Hence, dummy resources between the operations simulate the buffers. Additionally, since changeover is modeled by a dummy part seizing the machine, changeover intervals appear as operation utilization time in the statistical output reports. This modeling approach simplifies model construction and verification at the acceptable expense of overestimated operation utilizations. Since absolute operation utilization is not a metric of importance in this study (although equality of utilizations is), this simplification serves as an excellent example of tailoring the modeling approach to the user's validation expectations as specified by performance-prediction requirements [9].

In accordance with the originally specified project goals, this model can show the effects of operating the system with different schedules, the effects of buffer size on throughput, the effects of various tool changeover times and/or changeovers between different parts on throughput, or the effect of adding new machinery on throughput. However, in accordance with the truism that "simulation is not linear programming," the model cannot specify an optimal scheduling scheme, although it allows its user to experiment quickly and efficiently with different hypotheses concerning the extent to which the middle path (Line #2) can help the upper path (Line #1) produce x1 family parts. With a verified and validated model, the users can assess different allocation levels of Line #2 as backup for x1 production on behalf of Line #1 and observe their effects on throughput and machine utilization. Indeed, by allowing such experimentation, the model allows the meaning of "optimal" to vary between users, or between successive "what-if" studies undertaken by the same user. For example, "optimal" may mean "using the smallest total buffer size possible" to one user or manager, "having the highest probability of meeting demand" to another, and "achieving the most nearly balanced utilization of a particular work cell possible" to yet another. These capabilities reflect well-accepted reasons for performing a simulation analysis before scheduling implementation: evaluation of alternative scheduling logic rules, establishment of performance criteria, and identification of problem areas (severe capacity constraints).
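The downtime and changeover logic just described can be pictured with a small self-contained sketch (plain Python rather than SIMAN/ARENA; every parameter value here is invented for illustration). A single machine accumulates a parts count toward a tooling change (count-based downtime), accumulates busy time toward a random malfunction (busy-time-based downtime), and pays a triangular setup whenever the part family changes; because the two downtime checks are handled one at a time, they never overlap:

```python
import random

# Illustrative parameters only; none of these values come from the study.
CYCLE_MIN = {"x1": 0.70, "x2": 0.75, "x3": 0.60}   # minutes per part, by family
TOOL_CHANGE_EVERY = 400      # parts between tooling changes (count-based downtime)
TOOL_CHANGE_MIN = 12.0       # minutes per tooling change
MTBF_BUSY_MIN = 480.0        # mean busy minutes between random malfunctions
MTTR_MIN = 20.0              # mean minutes to repair a malfunction

def changeover_min():
    """Family-change setup time, triangular T(3, 5, 7) minutes."""
    return random.triangular(3.0, 7.0, 5.0)

def simulate(job_sequence, seed=1):
    """Total minutes for one machine to work through `job_sequence` (family labels)."""
    random.seed(seed)
    clock = 0.0
    parts_since_tooling = 0
    busy_since_failure = 0.0
    next_failure_after = random.expovariate(1.0 / MTBF_BUSY_MIN)
    last_family = None
    for family in job_sequence:
        # Changeover is charged as machine time, mirroring the dummy-part trick:
        # the "setup part" seizes the machine, so utilization includes setups.
        if last_family is not None and family != last_family:
            clock += changeover_min()
        last_family = family

        cycle = CYCLE_MIN[family]
        clock += cycle
        busy_since_failure += cycle
        parts_since_tooling += 1

        # Busy-time-based downtime (random malfunction), handled one event at a time.
        if busy_since_failure >= next_failure_after:
            clock += random.expovariate(1.0 / MTTR_MIN)
            busy_since_failure = 0.0
            next_failure_after = random.expovariate(1.0 / MTBF_BUSY_MIN)

        # Count-based downtime (tooling change).
        if parts_since_tooling >= TOOL_CHANGE_EVERY:
            clock += TOOL_CHANGE_MIN
            parts_since_tooling = 0
    return clock

if __name__ == "__main__":
    jobs = ["x1"] * 600 + ["x2"] * 300 + ["x1"] * 600 + ["x3"] * 200
    print(f"{simulate(jobs) / 60.0:.1f} hours of machine time")
```

Buffers modeled as dummy resources are not shown in this single-machine sketch; here the job list itself plays the role of the upstream queue.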

4. Major Findings During Experimentation


Preliminary statistical analyses reveal that long runs are required to overcome high system variability; hence, production runs represent one hundred days, with no warm-up period. Experimental runs under the model-simplifying assumptions described above verify that the system reaches steady state within five minutes. Furthermore, since all issues investigated with the help of the model are closely related to the evaluation of proposed production schedules driven by specific production-mix demands, the runs of the model are conceptually terminating, not steady-state. Extensive experimentation with this model produces the following results, all of immediate value to process-planning engineers and managers:

- The system can produce the targeted amount of product mix, on average, in seventeen hours per day.
- The system cannot produce the target in one day for all parts, due to excessive time lost to changeovers, but can produce the target over two days using an operational policy which reduces the average number of changeovers per day.
- The buffer size immediately upstream from Operation 30 can be reduced from fifteen to five with no significant effect on throughput; specifically, achieving the targeted throughput requires only eight minutes per day longer with this two-thirds reduction in buffer size.
- Increasing any other buffers affects throughput in no significant way; as a conceptual extreme, infinitely large buffers accelerate production of the targeted quota by only five minutes per day.

Details of these results are presented in Tables 3 and 4, below, both of which are based on simulated run times of 100 work days (1900 simulated hours). Furthermore, in order to assess the effect of tool changeover times on production rate, the "Operation 30 with capacity 5" scenario was run with two different sets of tool changeover times. If the average tool changeover time for Operations 10, 20, and 30 doubles from five minutes to ten minutes (i.e., in these tables, changes from a triangular density with minimum three, mode five, and maximum seven, T(3,5,7), to T(6,10,14)), the time necessary to produce the target amount of product mix increases by 35 minutes per day, from 17 hours to 17.6 hours per day. This is a relative increase of less than four percent.

Table 3: Production Hours Necessary to Meet Daily Demands

Scenario                                       Average (hrs/day)   Average (hrs/day)   Average (hrs/day)
Base - Operation 30 with Capacity 15                 16.84               17.88               16.49
1 - No Buffer Constraints                            16.76               17.77               16.45
2 - Operation 30 with Capacity 10                    16.89               17.80               16.57
3 - Operation 30 with Capacity 5, T(3,5,7)           16.99               18.41               16.61
4 - Operation 30 with Capacity 5, T(6,10,14)         17.56               19.12               17.12

Table 4: Average Number of Hours to Finish, Line 1 vs. Lines 2 & 3, to Meet Daily Demands

Scenario                                       Line 1 (hrs/day)   Lines 2 & 3 (hrs/day)   Difference (min/day)
Base - Operation 30 with Capacity 15                 16.63                16.74                   6.47
1 - No Buffer Constraints                            16.58                16.59                   0.60
2 - Operation 30 with Capacity 10                    16.70                16.78                   4.97
3 - Operation 30 with Capacity 5, T(3,5,7)           16.74                16.90                   9.96
4 - Operation 30 with Capacity 5, T(6,10,14)         17.26                17.48                  13.02
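The scenario comparisons summarized in Tables 3 and 4 can be reproduced in spirit with a small replication harness such as the sketch below (Python; `run_one_day` is a hypothetical stand-in for the real model, and only the scenario names, buffer capacities, and changeover densities come from the tables; everything else is invented):

```python
import random
import statistics

# Scenario definitions named in Tables 3 and 4: buffer capacity upstream of
# Operation 30 and the triangular changeover density (low, mode, high) in minutes.
SCENARIOS = {
    "Base: OP30 buffer 15":         {"op30_buffer": 15,   "changeover": (3, 5, 7)},
    "1: No buffer constraints":     {"op30_buffer": None, "changeover": (3, 5, 7)},
    "2: OP30 buffer 10":            {"op30_buffer": 10,   "changeover": (3, 5, 7)},
    "3: OP30 buffer 5, T(3,5,7)":   {"op30_buffer": 5,    "changeover": (3, 5, 7)},
    "4: OP30 buffer 5, T(6,10,14)": {"op30_buffer": 5,    "changeover": (6, 10, 14)},
}

def run_one_day(params, rng):
    """Hypothetical stand-in for the real model: hours needed to meet one day's
    demand under `params`.  Replace this with a call to the actual model; the
    placeholder below ignores `op30_buffer` and exists only so the harness runs."""
    low, mode, high = params["changeover"]
    return 16.5 + 0.05 * rng.triangular(low, high, mode)

def compare(days=100, seed=42):
    """Run each scenario for `days` replications and report summary statistics."""
    for name, params in SCENARIOS.items():
        rng = random.Random(seed)  # common random numbers across scenarios
        hours = [run_one_day(params, rng) for _ in range(days)]
        print(f"{name:30s} mean {statistics.mean(hours):5.2f} hrs/day "
              f"(min {min(hours):5.2f}, max {max(hours):5.2f})")

if __name__ == "__main__":
    compare()
```

Reusing the same seed across scenarios is a common-random-numbers device that sharpens the comparison between scenarios.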

5. Conclusions and Directions for Further Work


Most importantly, the model demonstrates the impossibility of meeting target production using changeovers scheduled on a daily basis, but it also demonstrates the possibility of meeting that target using a two-day changeover schedule. Also, the model verifies the ability of the system under design to meet this target using in-process buffers (between operations 20 and 30) having only a third of the capacity originally estimated intuitively, and without expansion of other buffers within the system. This reduction of buffer sizes provides significant savings on conveyor purchase and installation costs, operational expense, and space requirements.

With respect to modeling methodology, this project confirms the value of using simulation-specific software with carefully justified simplifying assumptions to meet tight target dates. Customer interaction was critical at this point: the simplifying assumptions required the customers' confirmation. In the context of scheduling issues, the customers provided valuable input about the varieties of routing algorithms most common in their industry's practice. Hence, during the simulation study, these routing algorithms were fine-tuned to meet the target production rates and balance the lines. The results reached the model user in time to support managerial investment decisions. Such an approach also permits rapid experimentation via changes to numeric quantities such as buffer sizes, the frequency of tooling changes and random breakdowns, and the duration of these tooling changes and breakdowns. Furthermore, the project confirms the value of animation in making valid results credible to the managers responsible for investment, facility configuration, and operational decisions.

Future work includes ongoing use of the model throughout the production-system life cycle to analyze the ability of the system to meet market-driven demand changes (for example, the x1 family, currently in highest demand, may cede that role to the x2 or the x3 family subsequently). This assessment of the tradeoff between the system's speed of adaptation to a change in market demand and its ability to balance workload relative to current demand is similar to an assessment described in [2]. This future work is anticipated by thorough internal model documentation, which will help a potential incoming analyst implement such changes quickly. Also, micro-level models can be built as needed to assess the effects of various material-handling proposals or staffing-level proposals when the availability of additional detail justifies removal of the original simplifying assumptions. This study has successfully identified the system bottleneck; after it is improved as much as is economically feasible, further micro-level scheduling studies will be undertaken relative to the bottleneck resource. Currently, the model users receive recommendations provided by the model via printed reports, rather than by running the model themselves to experiment with various scenarios. This situation, viewed as a transient expedient to meet tight timing constraints on system implementation, will be superseded by implementation of a user-friendly runtime interface allowing the users to import key model data directly from spreadsheet software.
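As a minimal sketch of the kind of spreadsheet-driven runtime interface mentioned above (Python; the file name and column names are hypothetical), key model data might be imported as follows:

```python
import csv

def load_scenario(path="scenario.csv"):
    """Read per-operation model data from a spreadsheet export (CSV).
    Expected columns (hypothetical): operation, buffer_capacity,
    changeover_min, tool_change_every, tool_change_min."""
    scenario = {}
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            scenario[row["operation"]] = {
                "buffer_capacity": int(row["buffer_capacity"]),
                "changeover_min": float(row["changeover_min"]),
                "tool_change_every": int(row["tool_change_every"]),
                "tool_change_min": float(row["tool_change_min"]),
            }
    return scenario

if __name__ == "__main__":
    for op, params in load_scenario().items():
        print(op, params)
```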

Acknowledgments
Dr. Onur M. Ulgen, professor, Industrial and Manufacturing Systems Engineering Department, University of Michigan - Dearborn, and president of Production Modeling Corporation; Celia Ortiz, Senior Consultant, Production Modeling Corporation; and Steven Weiss, Systems Analyst, Systems Development, Production Modeling Corporation, made valuable suggestions to improve the clarity and organization of this paper. Cogent criticisms of three anonymous referees likewise helped markedly in these regards.

Appendix: Trademark
SIMAN/ARENA is a trademark of Systems Modeling Corporation.

References
1. Abbott, Rebecca A., and Timothy J. Greene. 1982. Determination of Appropriate Dynamic Slack Sequencing Rules for an Industrial Flow Shop via Discrete Simulation. In Proceedings of the 1982 Winter Simulation Conference, eds. Harold J. Highland, Yen W. Chao, and Orlando Madrigal, 223-232.
2. Ayadi et al. 1995. A Simulation Approach for the Scheduling of a Complex Manufacturing System. In Proceedings of the 1995 Summer Computer Simulation Conference, eds. Tuncer I. Oren and Louis G. Birta, 385-389.
3. Banks, Jerry, and John S. Carson, II. 1984. Discrete Event System Simulation. Englewood Cliffs, New Jersey: Prentice-Hall, Incorporated.
4. Drake, Glenn R., Jeffery S. Smith, and Brett A. Peters. 1995. Simulation as a Planning and Scheduling Tool for Flexible Manufacturing Systems. In Proceedings of the 1995 Winter Simulation Conference, eds. Christos Alexopoulos, Keebom Kang, William R. Lilegdon, and David Goldsman, 805-812.
5. Hoover, Stewart V., and Ronald F. Perry. 1989. Simulation: A Problem-Solving Approach. Reading, Massachusetts: Addison-Wesley Publishing Company.
6. Larsen, Niels Erik, and Leo Alting. 1990. Requirements to Scheduling Simulation Systems. In Proceedings of the 1990 Summer Computer Simulation Conference, ed. William Y. Svrcek, 231-236.
7. Law, Averill M., and Michael G. McComas. 1991. Secrets of Successful Simulation Studies. In Proceedings of the 1991 Winter Simulation Conference, eds. Barry L. Nelson, W. David Kelton, and Gordon M. Clark, 21-27.
8. Pritsker, A. Alan B., Lawrence J. Watters, and Philip M. Wolfe. Multiproject Scheduling with Limited Resources: A Zero-One Programming Approach. In Papers, Experiences, Perspectives, ed. A. Alan B. Pritsker, 116-133. West Lafayette, Indiana: Systems Publishing Corporation.
9. Robinson, Stewart, and Vinod Bhatia. 1995. Secrets of Successful Simulation Projects. In Proceedings of the 1995 Winter Simulation Conference, eds. Christos Alexopoulos, Keebom Kang, William R. Lilegdon, and David Goldsman, 61-67.
10. Rosenwinkel, Maureen T., and Paul Rogers. 1993. Simulation-Based Finite Capacity Scheduling: A Case Study. In Proceedings of the 1993 Winter Simulation Conference, eds. Gerald W. Evans, Mansooreh Mollaghasemi, Edward C. Russell, and William E. Biles, 939-946.
11. Savory, Paul A., Gerald T. Mackulak, and Jeffery K. Cochran. 1991. Material Handling in a Flexible Manufacturing System Processing Part Families. In Proceedings of the 1991 Winter Simulation Conference, eds. Barry L. Nelson, W. David Kelton, and Gordon M. Clark, 375-381.
12. Williams, Edward J. 1994. Downtime Data - Its Collection, Analysis, and Importance. In Proceedings of the 1994 Winter Simulation Conference, eds. Jeffrey D. Tew, Mani S. Manivannan, Deborah A. Sadowski, and Andrew F. Seila, 1040-1043.

Author Biographies
Edward J. Williams holds bachelor's and master's degrees in mathematics (Michigan State University, 1967; University of Wisconsin, 1968). From 1969 to 1971, he did statistical programming and analysis of biomedical data at Walter Reed Army Hospital, Washington, D.C. He joined Ford in 1972, where he works as a computer software analyst supporting statistical and simulation software. Since 1980, he has taught evening classes at the University of Michigan, including undergraduate and graduate statistics classes and undergraduate and graduate simulation classes using GPSS/H, SLAM II, or SIMAN. He is a member of the Association for Computing Machinery [ACM] and its Special Interest Group in Simulation [SIGSIM].

Igal Ahitov holds a bachelor's degree in mechanical engineering from Stevens Institute of Technology (1993) and a master's degree in industrial and systems engineering from Georgia Institute of Technology (1995). In 1995, he joined Production Modeling Corporation in Dearborn, Michigan, where he works as an applications engineer in the simulation field. He is a member of the Society of Manufacturing Engineers [SME].
