
Computers & Chemical Engineering, Vol. 7, No. 4, pp. 483-491, 1983
Printed in Great Britain.
0098-1354/83 $3.00 + .00
© 1983 Pergamon Press Ltd.

COMPUTER TECHNOLOGY IN PROCESS SYSTEMS ENGINEERING

R. L. Motard, Department of Chemical Engineering, Washington University, St. Louis, Missouri 63130, USA

Abstract. This paper looks at the future of computer aided process systems engineering. Productivity issues are discussed with respect to software, computer design technology, engineering relational data base management systems, and data independent programming. Very large scale integration technology will have a major impact on the structure of process systems engineering software. Standards are proposed for host languages and host operating systems. Expert systems will find an increasing role.

Keywords. Software, very large scale integration, data base management systems, computer aided design, expert systems.

INTRODUCTION

In the past 15 years chemical engineers have experienced a minor revolution in the practice of process systems engineering due to the expanding role of computer technology. It is difficult to discuss the past successes of computer applications to our field when one is so aware of the tremendous promises of the future. This paper will emphasize primarily the future as an evolution from the past and present.

We have all read the projections of potential increases in packing density of computer and memory silicon chips, in anticipation of hardware cost reduction. As of 1982 we are told that memory chips may ultimately reach a density of 1-10 megabytes per chip versus 256K today. Microprocessor chips are now being developed with a half million transistors and may reach 2-10 million transistors before 1990. At some point silicon technology will reach its limit and the historical trends in computer prices will bottom out. Currently, the semiconductor industry is still projecting a 15% per year decrease in cost/performance of large scale general purpose machines and 25% per year for small-scale general purpose machines, expressed in $/million-instructions-per-second (Branscomb 1982). The limit of silicon technology will probably be reached within the next few years. Where it bottoms out depends on the ability of silicon substrate manufacturers to produce ultrapure silicon economically. This is a chemical engineering problem in its own right. The ability to shrink electronic elements and their interconnections onto silicon is critically dependent on the

density of imperfections, which in turn affects the yield of acceptable chips in mass production. The packing density of circuit elements on a chip is only one of the factors affecting cost. There are engineering problems associated with the integration of these devices into working computers. The energy dissipated must be removed by suitable packaging. The design of very large scale integrated (VLSI) elements is so complex that progress is keyed to the development of computer aided design tools. Typically (LaBrecque 1982), a microelectronic system design begins with a system architect who produces a block diagram meeting the functional requirements. A logic designer translates the block diagram into logic and gate specifications. A circuit designer then produces circuit diagrams from these specifications. A layout designer places the circuit diagrams onto a silicon chip and a draftsman prepares the masks from which the chip will be manufactured. The process of digital system design certainly has its analog in the process systems engineering enterprise.

Obviously, the cost of design must be considered. In today's economy, with state-of-the-art design technology, the design of a new chip will cost $1 million for simple devices and up to $20 million for complex microprocessor chips. The semiconductor industry today is forced to manufacture in very large volumes to recover the high cost of design and the high cost of equipment to manufacture high-density devices. New technologies such as optical electronic digital systems promise further reductions in hardware size, and biocomputers, now only a gleam in the biotechnologist's eye, may eventually reduce


"electronic" elements to molecular scale. However, any displacement of silicon technology will have to challenge the inertia that an entrenched, mature and ultra-refined technology acquires in the industry. All of this progress in miniaturization serves to aggravate the two main problems impeding further progress, namely software and design technology.

THE SOFTWARE PROBLEM

The development of information systems of interest to process systems engineers is critically dependent on the availability of software. While hardware cost will have decreased a million-fold between 1955 and 1985, programmer productivity will have increased only by a factor of 4 (Birnbaum 1982). Another view of the software problem is offered by the observation that in 1985 the cost of leasing 1 million instructions per second of hardware capability will fall below the salary of a professional programmer for the same period of time (Branscomb 1982). Programming costs now approach 85% of total user costs for a functioning computer facility. The key to future progress in the growth of computer use is programmer productivity. This can happen in several ways. For instance, the availability of personal microcomputers has spawned a cottage programming industry, with new software products capturing surprising market shares in unexpected ways. Two examples of the latter are VISICALC, a financial spread sheet planning tool which has become a standard personal computer package from very modest beginnings, and CP/M, the standard operating system for Zilog's Z80 microprocessor, which has had a similar history.

Further improvements in programming productivity depend on three developments (Bacon 1982): 1) software building blocks for constructing system software, 2) software development tools and methods, and 3) higher levels of abstraction in program-forming operations with strong algebraic properties.

Engineers will accept as fundamental the philosophy that an inventory of standard parts with suitable interface disciplines between them can be the basis for composing more complex programs. Because the parts would be rigorously verified and tested, high quality programs might be quickly assembled. This has been successful in application programming but not in systems software. There are two problems, namely determining the right set of standard blocks and providing a proper environment for their interconnection. Nevertheless, microprocessor operating systems are now being provided in read only memory (ROM) chip sets (Lettieri 1982). Some are hardware dependent but others are processor-independent, requiring only a few external parameters such as a user-supplied configuration table to establish the interface with the application environment. They supply well-established functions which don't require modification from one system to another. Typically, a silicon-based operating system adds 20 or more high-level instructions to the basic instruction set of the microcomputer. Peripheral software routines will also allow operating system calls from higher level languages like PASCAL. Software-in-silicon is currently a marketing strategy, allowing hardware manufacturers to enter the very large value-added market in software. Nevertheless, interest will grow, quality will improve, and it is possible to speculate that the range of such products will grow to include telecommunication functions, networking functions, high-level language compilers, graphics functions and other application programs.

Software development tools and methodology are another productivity factor. These improvements in software production result in part from a decomposition of the development process and in part from the availability of management procedures. Invariably the development sequence involves a layering of languages and a collection of utility programs. For example, the portability of the UNIX operating system (Kernighan and Morgan 1982) to diverse hardware systems is greatly enhanced because UNIX was written in the C language (Kernighan and Ritchie 1978), a lower-level system development language. C is a machine-dependent language with machine dependencies carefully tailored to be adaptable to a universe of machine architectures, and restricted to about 5% of the compiler code. The UNIX part is invariant, and all one needs is a C compiler for the target machine, itself written in C. The portability of PASCAL depends on the availability of a p-code interpreter for the target machine, since PASCAL is compiled to p-code. Again the PASCAL part is invariant, and the interpreter to implement p-code interpretation or translation is far easier to do quickly and correctly in a lower level machine-dependent language. Among the other tools that complete the development process we include special journalizing editors, source-language-level debuggers, documentation facilities, special linkers, and compilation and configuration databases. The management of large software projects is further enhanced by such automated tools as formal specification and design languages, text managers (filing systems), configuration managers, test data generators, flow analyzers, etc. (Howden 1982).

Program forming operations are inhibited today by our inability to construct programs in terms which fit the problem rather than at the detail required for the machine to implement the functions associated with the problem (Bacon 1982).



There is need for further research on the development of programming languages with strong algebraic properties relating the primitive functions of the language. With strong program-forming operations a rigorous approach might be found for building new programs from old ones. The principal barrier to these developments is the architecture of existing computers, which forces each program to be concerned with the detailed assignment and manipulation of storage. This leads to inconsistent views of data. There may be some relief in sight due to the increasing interest in relational data base management systems, which offer data independent programming environments, a subject to be discussed later.

COMPUTER DESIGN TECHNOLOGY

We have already discussed the shrinking size of circuit elements on silicon chips. There are two advantages of shrinking transistors on chips (LaBrecque 1982): they switch faster and consume less energy. Reducing the linear dimension of a transistor element by 2 increases the packing density 4-fold, since density scales with the inverse square of the linear dimension, and switching speed is doubled, a substantial increase in computing power. But the increasing complexity of very large scale integrated (VLSI) circuits poses another problem in productivity whose solution has substantial implications for process systems engineering. As circuit elements become smaller and faster, cross-chip communication delay becomes a bottleneck. Most of the energy consumed in chips is in communication, and the wires take up 95% of the chip area as well. The basic von Neumann architecture of computers, consisting of one memory, one input channel, one output channel and one processor, makes computation a single, sequential process. This poses serious difficulties in harnessing the technology of VLSI and perhaps offers an opportunity to redefine the architecture of computing systems.

Architectural issues involve two approaches to decomposition in computation. One approach is to break up the process into independent or parallel parts which can be performed concurrently in ensembles of small processors. Another approach is to use systolic arrays, the so-called data flow machines, in which the data flow rhythmically through several simple computational cells before they return to memory. Arrays, which are either linear or two-dimensional, can achieve higher degrees of parallelism. Data may flow in an array at multiple speeds and in multiple directions; a software sketch of a systolic computation is given at the end of this section. The key to parallel decomposition or array decomposition of computational tasks is to find the appropriate regularity in the application problems to be solved. Each problem would have its own special-purpose machine, with VLSI chips designed to do exactly what people want done. It is the only way to truly harness the power of VLSI and avoid the cross-chip communication dilemma. The dawn of powerful special-purpose machines will require computer aids to assist the computer architect to divide problem-oriented computation into subunits.

What better opportunity for process systems engineers to become engaged in the design of computer architectures specifically implemented to solve chemical process design problems? There are a host of questions to be addressed, all with the objective of casting process analysis, design and simulation into silicon. The use of chemical engineers in this fashion is not as far-fetched as it may seem. Carver Mead of Caltech and Lynn Conway of Xerox have designed a one-semester graduate course which teaches graduate students with backgrounds typical of computer science rather than electrical engineering how to design VLSI chips. Such backgrounds would not be foreign to engineers from disciplines other than electrical engineering. The entire process can be automated like a computer aided design project and produces chip designs only slightly less optimal than those produced by experts, in a very short time compared to the many man-months required when done by traditional methods. The entire approach is bound to be synergistic as new special-purpose chip designs are produced which assist in the design of new VLSI systems. One such development is a geometry engine containing a half million transistors which performs VLSI circuit layout. Indeed it is thought that the entire VLSI process can be automated, between a behavioral description of the computational problem and a set of circuit masks. Prototype software and hardware systems now produce chips with 20 to 30% more area than optimal designs. Once the circuit masks are available there are any number of resource centers in the US which can deliver the actual VLSI circuit in four to six weeks. The key to such design productivity is computer aided design and a well-structured design technology. The economic conditions which dictated high volume production of chips disappear when the cost of design is radically deflated. Short run production of special purpose computers designed for process engineering tasks becomes economically feasible.
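To make the systolic notion concrete, consider a minimal sketch in PASCAL, the host language advocated later in this paper: a software emulation of a linear array of cells computing the matrix-vector product y = Ax, with each component y[k] resident in cell k while the vector x marches one cell per beat. The program structure, names and data are illustrative only; in a real systolic device each cell would be realized in hardware, all cells working concurrently.

    program Systolic;
    { Emulates a linear systolic array computing y = A*x.     }
    { Component y[k] stays in cell k; the x values enter at   }
    { cell 1 and march one cell to the right on every beat.   }
    const
      N = 3;
    var
      A: array[1..N, 1..N] of real;
      x, y: array[1..N] of real;
      cellX: array[1..N] of real;     { datum now held in cell k }
      cellJ: array[1..N] of integer;  { its index; 0 means empty }
      beat, j, k: integer;
    begin
      { illustrative data: A[k,j] = k + j, x[j] = j }
      for k := 1 to N do
      begin
        x[k] := k; y[k] := 0.0; cellX[k] := 0.0; cellJ[k] := 0;
        for j := 1 to N do A[k, j] := k + j
      end;
      { 2N - 1 beats fill and then drain the pipeline }
      for beat := 1 to 2 * N - 1 do
      begin
        { shift the x stream one cell to the right }
        for k := N downto 2 do
        begin
          cellX[k] := cellX[k - 1];
          cellJ[k] := cellJ[k - 1]
        end;
        if beat <= N then
        begin cellX[1] := x[beat]; cellJ[1] := beat end
        else cellJ[1] := 0;
        { every cell works at once on whatever it holds }
        for k := 1 to N do
          if cellJ[k] <> 0 then
            y[k] := y[k] + A[k, cellJ[k]] * cellX[k]
      end;
      for k := 1 to N do writeln('y[', k:1, '] = ', y[k]:8:2)
    end.

The point of the exercise is the regularity: each cell repeats one multiply-add on the data passing its doorstep, which is exactly the kind of pattern that maps onto silicon.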

DATA MANAGEMENT

Having briefly reviewed some of the hardware issues and related problems, we have raised some questions about data organization forced on the software developer by a universal dependence on von Neumann computer architecture. There are two solutions to this problem, one involving software and the other, hardware. The first is easily perceived but the latter only dimly. Let

us first approach the problem of data from a software solution.

The history of computer aided design in process systems engineering has been one of proliferation of individual engineering computer programs. Communication among these stand-alone aids for the engineering process has not been solved satisfactorily. Most chemical engineers involved in process engineering are familiar with programs for process synthesis, process design, process simulation, process optimization, reactor and fractionator design, physical property estimation, heat exchanger and vessel design, piping design, inventory and project control, bill of materials take-off and automated drafting. The entire mix takes on the aspect of an unmanageable complex of activities and resources to the average project manager, who must meet ever more critical deadlines on project completion with rapidly escalating costs of execution and delays.

These problems are created by a program-centered approach to large-scale computer applications. In the past, the program and its developer occupied the core of the application process. The data were given a secondary role in the development and management of such engineering applications of computers. Most of the development effort was devoted to the program and its elegance. No great attention was paid to the value of the data in itself. Indeed most of the data ended up as a pile of computer printouts. The solution to this complex problem is to adopt a data-centered development approach. The data are put at the core of the application process, hence the emphasis on data base management. After all, it is the data that become information, and decision-making depends on information, only secondarily on computer programs. The business community discovered long ago that their daily activities could not depend on program-centered management. The engineering community must now seriously consider the movement to data base management system (DBMS) technology to survive the complexity problem.

The first goal of data base management software is to decouple programs from data (Codd 1982). As we have said earlier, the barrier to strong program forming operations is a concern for the detailed assignment and manipulation of storage in individual programs. The decoupling of data makes the development of data independent programming conceivable. Today's problem with software is the high maintenance cost of application programs, much of it due to the close coupling of data and programs. Every small change in data structure, as each application grows in sophistication and complexity, triggers a chain reaction of programming changes to maintain the viability of existing programs. With data decoupled, not only are large reprogramming projects avoided and maintenance reduced, but programming itself should be much more productive once the techniques of data independence are mastered.

The first step in data decoupling is to isolate the programmer from the data storage model. This is achieved by replacing positional addressing with associative addressing. The programmer need never be concerned how or where data are stored, or indeed whether storage has been reorganized. His principal concerns are the associative relations among data, whether data are conceptually related on a one-to-many or many-to-many level. He retrieves or stores data via a relation name (or entity name), attribute name and attribute value (or key). Typically,

    Value → Attribute (Entity)

such as

    100°C → Temperature (Stream 5)
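As a concrete illustration of what associative storage might look like to the programmer, the following PASCAL fragment is a minimal sketch. The procedures PutValue and GetValue are invented for the example (no particular DBMS interface is implied), and a built-in string type is assumed, as in the UCSD PASCAL dialect. The point is that the program names what it wants, never where it lives.

    program AssocDemo;
    { Associative addressing in miniature: values are reached  }
    { by (entity, attribute) names, never by position.  The    }
    { linear table below stands in for the DBMS storage model, }
    { which the programmer would never see.                    }
    const
      MaxFacts = 100;
    type
      Fact = record
        entity, attribute: string;   { assumes a string type }
        value: real
      end;
    var
      store: array[1..MaxFacts] of Fact;
      nFacts: integer;

    procedure PutValue(e, a: string; v: real);
    begin
      nFacts := nFacts + 1;
      store[nFacts].entity := e;
      store[nFacts].attribute := a;
      store[nFacts].value := v
    end;

    function GetValue(e, a: string): real;
    var i: integer;
    begin
      GetValue := 0.0;               { default if absent }
      for i := 1 to nFacts do
        if (store[i].entity = e) and (store[i].attribute = a) then
          GetValue := store[i].value
    end;

    begin
      nFacts := 0;
      { 100 C -> Temperature (Stream 5) }
      PutValue('Stream 5', 'Temperature', 100.0);
      writeln('T(Stream 5) = ', GetValue('Stream 5', 'Temperature'):6:1)
    end.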

In a truly relational data environment further data decoupling takes place between the user (logical level) and the program (conceptual level). There is thus a three-level description of data, the storage level description being hidden from the programmer, the storage and conceptual levels being hidden from the user. The user interface is a set of commands which can be used for entering data, executing application programs and generating reports. The relational DBMS is an interface program between the applications and the data. It has its own data sublanguage permitting the insertion, deletion, retrieval and update of data along with data definition facilities. It must permit algebraic set operations without resorting to iteration or recursion. A complete relational algebra is derivable from SELECT, PROJECT and JOIN operators. Relational processing treats whole relations as operands, avoiding loops as we have said. In analyzing the operations it is convenient to think of relations as tables with tuples (collections of attribute values) as rows and the attributes themselves as columns; tuples are not position sensitive in the tabular concept. The SELECT operator takes one relation as operand and produces a new relation (table) consisting of selected tuples (rows) of the first. The PROJECT operator also transforms one relation into a new one, this time consisting of selected attributes (columns) of the first. The JOIN operator takes two relations (tables) as operands and produces a third relation consisting of the rows of the first concatenated with the rows of the second, but only where speci-



fied columns (attributes) of the first have matching values with specified columns of the second. The JOIN operator may or may not remove redundancy in columns (attributes). The DBMS must support tables (relations) without user-visible navigation links between them; the system provides the automatic navigation. In order to be useful to the process systems engineering program developer, the data sublanguage must be usable in two modes: (1) interactively at a terminal, and (2) embedded in an application program written in a host language such as FORTRAN or PASCAL. Thus, application programmers can separately debug at a terminal the database operations that they wish to incorporate in their application programs, then embed the same statements in the host language to complete the application program.
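The flavor of these operators can be suggested with a toy PASCAL sketch, relations being arrays of records (tuples). The relations, names and selection predicate are invented for the illustration; in a genuine relational DBMS these operations would be issued through the data sublanguage, not coded by hand.

    program RelOps;
    { Toy SELECT and JOIN over two small relations:            }
    {   STREAMS(id, temp)  and  LINKS(id, unitNo).             }
    { PROJECT would keep chosen columns (and, strictly,        }
    { remove duplicate rows); it is omitted for brevity.       }
    const
      MaxT = 20;
    type
      StreamRel = record
        n: integer;
        id: array[1..MaxT] of integer;
        temp: array[1..MaxT] of real
      end;
      LinkRel = record
        n: integer;
        id: array[1..MaxT] of integer;
        unitNo: array[1..MaxT] of integer
      end;
    var
      s, hot: StreamRel;
      l: LinkRel;
      i, j: integer;
    begin
      { two tuples in each relation }
      s.n := 2; s.id[1] := 5; s.temp[1] := 100.0;
                s.id[2] := 7; s.temp[2] := 40.0;
      l.n := 2; l.id[1] := 5; l.unitNo[1] := 12;
                l.id[2] := 7; l.unitNo[2] := 14;

      { SELECT: the tuples of STREAMS with temp > 50 }
      hot.n := 0;
      for i := 1 to s.n do
        if s.temp[i] > 50.0 then
        begin
          hot.n := hot.n + 1;
          hot.id[hot.n] := s.id[i];
          hot.temp[hot.n] := s.temp[i]
        end;

      { JOIN: concatenate rows of STREAMS and LINKS where }
      { the id columns match                              }
      for i := 1 to s.n do
        for j := 1 to l.n do
          if s.id[i] = l.id[j] then
            writeln(s.id[i]:4, s.temp[i]:8:1, l.unitNo[j]:6)
    end.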


Beyond these simple ideas the complete DBMS must provide the following services (Codd 1982):

1. Data storage, retrieval and update
2. A user-accessible catalog of data descriptions (schemas of relations and attributes)
3. Transaction support to monitor changes in databases
4. Recovery services in case of failure at the system, program, or hardware level
5. Concurrency control
6. Authorization services to ensure that access and manipulation of data are controlled at both the user and program level
7. Integration services with data communication
8. Integrity services to ensure that database states and changes of state conform to specified rules

Obviously, such extensive services make the operation of the DBMS somewhat dependent on a host operating system. Whether one can offer such services irrespective of the operating system, thereby making the DBMS much more portable, is a question for the future.

We have stated that DBMS-based process systems engineering separates data from programs. This leads to the survivability of programs, data-independent programming, much lower maintenance costs for application software, and much higher application programming productivity. What else does it promise? Does it enhance the productivity of the chemical engineer? The answer is that it does, in several ways. In the first place, DBMS technology eliminates the need for coding and recoding of data from one phase of process engineering to the next. It promotes efficient project management. When coupled to non-procedural command languages it provides an electronic filing cabinet and an electronic scratchpad. Process problems can be solved incrementally, bit by bit, working on one part of the project at a time without sacrificing the coherence required for the overall project. Discipline specialists can be alerted to design decision changes as they occur, without the delay inherent in bureaucratic organizations. Reports can be generated in timely fashion. There is no need to stockpile massive computer printouts, since a new report can be tailored and produced as needed. Such a system can really become a computer aid to the designer, supporting rather than inhibiting experimentation and case studies, continuously recording the data traffic in short or long computer terminal sessions, filing multiple examples, and supporting a framework for transmitting large volumes of system documentation when coupled to a text processing capability. Data independent programs allow the construction of ever more powerful and complex process systems engineering resources.

With regard to hardware solutions, relational DBMS offer great decomposition flexibility when planning a distributed network of computer systems and great recomposition power for dynamic recombination of decentralized information. Workstation-style approaches to large scale design projects become the natural mode of execution. As we have implied earlier, very powerful local computing power will become available at the engineer's desk. Network communication and DBMS will provide the underpinning which makes the synergism between the engineer and the machine complete.

There are unresolved problems in process systems DBMS. Very large data collections lead to slow storage and retrieval. In a typical chemical process capital project, it is estimated that there are one to two gigabytes (10^9 bytes) of data per billion dollars of plant investment (Perris 1981). Such estimates have led some to speculate that a complete process engineering data base would be composed of separate data bases for each discipline (Cherry, Grogan, Knapp and Perris 1982). Some manufacturers are now offering data base machines which accelerate the searching of disc files for data retrieval. What shape future hardware will take in the face of large data managed projects is a matter of speculation.

Host Language and Operating System

Since we have alluded to host language and host operating systems in DBMS technology it is time that we say a few words about the future of both. In our opinion, supported by our perception of widespread

interest in the matter, a language like PASCAL and its offspring ADA will become the process systems engineering computer language of the future. FORTRAN has served us well over two decades, and successive generations of the language have enhanced FORTRAN with features to be found in a more integrated environment in PASCAL. PASCAL is a well-structured language, with a simplicity which enhances its ability to detect programming errors. It encourages the creation of portable and modular programs. PASCAL as a language is easy to read and write; therefore programs are easy to maintain. It is a language in which restrictions have been introduced intentionally in order to reduce the number of decisions, hence the number of errors, which a programmer makes. ADA, which has its roots in PASCAL, is the result of a 5-year effort on the part of the United States Department of Defense to define a universal system development language. While there have been some reservations about various features of ADA which lack clear and precise definition or implementability, the international resources behind its development will ultimately insure that a viable version of the ADA language will survive. PASCAL and ADA incorporate the work of early research on the properties of programming languages that help eliminate common coding errors. These allow the language to enforce assertions about the range of variables and other properties. These features provide a formal approach to verifying that the programs match the intent of the designer.

We are not proposing that all old programs be discarded. Certainly, it is possible to link PASCAL and FORTRAN routines. In a DBMS environment, old FORTRAN programs with data allocation and conventional input-output statements stripped out and replaced by DBMS access statements can easily be integrated into new computing environments. All new developments, however, should be based on PASCAL/ADA.
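A few lines suggest what is meant by an enforced assertion, assuming an implementation with range checking enabled; the type and variable names are ours.

    program RangeDemo;
    { A PASCAL subrange type is an enforced assertion    }
    { about the range of a variable.                     }
    type
      Percent = 0..100;
    var
      conversion: Percent;
      raw: integer;
    begin
      read(raw);
      conversion := raw;   { traps at run time if raw < 0 or raw > 100 }
      writeln('conversion = ', conversion:4)
    end.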
Operating systems and DBMS are difficult to disentangle. Nevertheless, one should examine the evolving area of operating systems to identify any movements toward standardization. We have said earlier that this is important from the point of view of acquiring well-tested building blocks. It is also important for the future movement into ROM-based, software-in-silicon operating systems and programs. If there is one candidate which is emerging as the industry standard in this area it is the UNIX operating system, developed in the early 1970's at Bell Laboratories. Perhaps with somewhat less conviction, we anticipate that all future operating systems will be similar to UNIX, especially in the type of workstations that will be used by process systems engineers.

UNIX

UNIX (Kernighan and Morgan 1982) is fundamentally a single user operating system, although it is available on multiuser systems. Without marketing impetus, it has grown to 3000 systems world-wide excluding microcomputer sites. It is now available on most major computing machines. It is basically an electronic filing system. All files are treated merely as a stream of characters (or bytes) with no reference to hardware device characteristics such as tracks, cylinders or blocks that typify other commercial operating systems. Associated with files is a hierarchy of directories which contain information about other directories or about files, and which helps to organize large collections of files. Input or output devices are handled in the same way as ordinary files, with the application program being unaware of either the source or destination of the data. UNIX is a multitasking system, which means that a user may have several processes (or tasks) executing concurrently. Task scheduling is handled by the system kernel, which also manages data storage. In addition to a repository of utility programs available to the kernel, the system is controlled by a command interpreter called the shell. The shell accepts commands and interprets them as requests to run programs. Commands may be pipelined to connect programs. For instance, the command

    program < data | plot

will tell the system to run "program" using a data file called "data" for input and to connect the output to a program called "plot". The command

    program < data | lpr

would direct "program" output to a line printer. If data smoothing is required before plotting,

    program < data | spline | plot

would achieve the desired result. Provided a family of development tools is available, it is possible to write complex systems without ever using a programming language. Even the shell is a program, and the command

    sh < cmds

causes the shell to take its commands from the file "cmds".

The value of UNIX as a very high level system building environment results from the decoupling of data and programs. The uniform file interface promotes this. In addition, the use of shell scripts and pipelines promotes extensive modularity. The modularity in turn promotes great



flexibility and evolutionary changes. UNIX can be tailored to a wide diversity of environments. Another virtue of UNIX is its small size and clean structure which makes it very popular in university computer science departments. It is spawning a whole cottage industry of UNIX-based programs and systems. Since the source code is distributed with the system it continues to gain in popularity. It has been adapted to the new generation of 16-bit and 32-bit computer chips. UNIX has found broad application in text processing, software development, laboratory automation, information systems involving small databases and computer science education. A commercial relational database system called INGRES is now available with a UNIX interface (Weiss 1982). UNIX is as yet not suitable for real-time systems, database systems handling large volumes of on-line transactions, or for non-programmers (although a PASCAL/UNIX interface is now available).


HARDWARE FOR PROCESS ENGINEERING SYSTEMS

From the foregoing, it is obvious that we expect the computer technology of the future to be distributed in architecture with a great many opportunities for customizing to the application. The late 1980's will signal the dawn of the VLSI era, supported by computer aided design. Instead of merely replicating the von Neumann structures of the past in cheaper and faster circuit elements, we will see new boundaries between hardware and software. Current hardware technology is simply the lowest common denominator of machines with general purpose application. New special purpose hardware will be plug-compatible with traditional computers, software-in-silicon operating systems, intelligent terminals, microcomputers, graphics systems, printers and peripheral memory devices.

Design reports will be prepared from engineering data bases, edited on text- or word-processing machines, typeset if necessary and transmitted over broadband telecommunication systems. The communication systems of the world will consist of local and global connections, combining voice, data, message and video modes. Local networks will support a great array of printing modules, distributed computing power, data storage modules, etc. New and cheaper large scale data storage media such as videodiscs will quickly assimilate whole libraries of instructional materials, reports, and design data. Access to the network will take place at home, factory, or office. Local networks will be ported to national and international networks via television cable and space satellites.

Not only will the computing power be distributed but the notion of an industrial or engineering enterprise being localized in Yokohama, London or Los Angeles will no longer be relevant. Task forces of people will be brought together through telecommunication on a global basis. The VLSI age will provide affordable graphics processors; color raster scan devices with 1024 x 1024 pixel resolution will grace every engineering office. These will be tied to very powerful virtual microcomputers allowing the engineer to communicate with process engineering software via an electronic sketchpad. Super computers capable of 100 million instruction executions per second will be available at distributed service centers. Nevertheless, a great deal of the computation load will be local, using parallel processing and data flow hardware.

SOFTWARE FOR PROCESS SYSTEMS ENGINEERING

We have discussed the host language for process systems engineering (PASCAL/ADA) and the host operating system (UNIX prototype). We have made a case for data-independent programming through the use of the engineering DBMS. It now remains to anticipate the future of application software in process systems engineering. Process design software of the future will be highly modular, data-independent, strongly structured for maintainability in the modern software engineering sense, and supported on both personal workstations and larger host machines through networking. This software will be integrated into a hardware-software complex making liberal use of building-block architecture and containing some VLSI subsystems in silicon (ROM) for such activities as the operating system kernel, graphics engines, compiler engines, relational database engines, and possibly intermediate level operations of chemical process design in ROM. As examples of the latter we propose the possibility that physical property operations and two- and three-phase determination routines would be available in ROM. Another silicon engine might handle all scalar-vector, vector-vector, and vector-matrix operations using pipelined (systolic) array processor technology.

At this point it is useful to consider whether complex process design procedures would be coded in a conventional procedural language or in an interpretive command language like the UNIX shell language. One can at least surmise that a command language layer will be the standard user interface. Graphical interfaces will be used interchangeably with command languages. Beneath the user interface one could have a hybrid of procedural routines and interpreted, non-procedural, problem-oriented languages with the interpreter residing in ROM. With data-independent programming any mix of old and new process design modules can be assembled into a complex program using shell language pipelines and scripts. Alternatively, as is the option with DBMS, the user might wish to carry on a piecemeal interaction with process models making use of the high modularity of the environment.

The architecture of process flowsheeting systems will need to be redefined to fit the computer technology of the future. In one respect at least, flowsheeting packages will be more simple to construct. They will no longer be built with internal data storage manipulation. MIT-ASPEN, which will become the U.S. industrial standard in the near future, will be the last generation of stand-alone flowsheeting systems. The final generation will be data-based. In another respect one will be considering new program architectures, divorced from data management, which lend themselves to different forms of partitioning to harness the power of innovative or non-von Neumann hardware structures. Partitions which support parallel processing and data flow processing will evolve.

We have traditionally been burdened with the dichotomy between equation-oriented and sequential modular flowsheeting systems, with a spectrum of alternatives in between. Such distinctions will begin to blur in the new technology. A hierarchy of approaches will be executable on the same process models. At one level, the user will be able to use the simplest kind of sequential models for process synthesis and screening of alternative flowsheets; a minimal sketch of this sequential modular style is given at the end of this section. At another level the model might be used as input to a hybrid equation-solving approach, containing both equations and conventional discrete process modules. At a third level, simultaneous modular architectures might be used interchangeably on the same process model to optimize the process according to infeasible path algorithms now under development (Biegler and Hughes 1981).

Until data management is fully developed the DBMS environment will have to accommodate hybrid environments where substantial data banks of evaluated experimental thermophysical data in traditional file and record form coexist with relational data bases. Data preprocessing will provide for a rational preparation of the process model with respect to physical data in a natural manner, generating VLE correlations, etc. The same can be said of other forms of tabular data normally extracted from handbooks and design manuals, ultimately residing on laser optical videodiscs, with anticipated recording densities of 3 gigabytes per side, or 750,000 pages of text on one side of a randomly accessible optical disc (Goldstein 1982).
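The sequential modular style mentioned above can be caricatured in a few lines of PASCAL: two invented unit modules, a mixer and a splitter, sit in a recycle loop, and the torn recycle stream is converged by successive substitution. No flowsheeting package's actual interfaces are implied; the flows and fractions are arbitrary.

    program SeqModular;
    { Sequential modular flowsheeting in miniature: guess the  }
    { torn recycle stream, execute the modules in sequence,    }
    { and iterate until the guess reproduces itself.           }
    const
      Feed = 100.0;      { fresh feed flow }
      Cut  = 0.4;        { fraction of mixer outlet recycled }
      Tol  = 1.0e-6;
    var
      guess, calc, mixed, err: real;
      iter: integer;
    begin
      guess := 0.0;                  { initial tear value }
      iter := 0;
      repeat
        iter := iter + 1;
        mixed := Feed + guess;       { MIXER module }
        calc := Cut * mixed;         { SPLITTER module: recycle cut }
        err := abs(calc - guess);
        guess := calc                { successive substitution }
      until err <= Tol;
      writeln('converged in ', iter:3, ' iterations, recycle = ', calc:10:4)
    end.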


EXPERT SYSTEMS

Having established a viewpoint about computer technology developments as an aid to process engineers, it now remains to stretch our conception to the limit. Where does the interface between man and machine fall in the future? We are not so concerned about ergonomics and user friendliness in this instance but about the computer system as a "knowledge" base. At what point does the software transcend the analytical or algorithmic boundary and become a synthetic problem solver, a true tool for the human mind?

Those of us who have studied process synthesis in its various parts have some experience with knowledge bases. However, our most important successes, as Allen Newell (1981) has emphasized, have been in problem areas that yield to a uniformity of representation, such as heat exchanger networks and unintegrated distillation sequences for sharp separations. Substantial progress has also been made in certain applications of computer search methods to problems in organic synthesis. Again a uniform representation is already available in the classical structural model of organic chemical molecules. Other areas of chemical process knowledge may ultimately yield to uniform representation and thereby uniform procedures but, by and large, the most important knowledge, namely problem-specific knowledge, is non-uniform. One alternative (Stephanopoulos, Linnhoff and Sophos 1982) is to decompose the problem into uniform domains, as has recently been proposed for energy integrated sharp distillation sequences. The best unintegrated sequences turn out to be the best candidates for integration; a happy result.

As process engineers we see the benefits of artificial intelligence and expert systems in assisting us to invent chemical processing structures. Once a complete structure is provided by the initial structuring heuristics we are confident of our ability to analyze the design, to optimize it, and to identify its weaker characteristics. From this knowledge one can then improve the structure, using evolutionary rules. One school of thought prefers to keep the rules simple for training purposes and for easy application using pencil and paper. However, there is ample scope for improving the simple logic of expert procedures using computer-based expert systems. In a computer environment one can provide access to a broader knowledge base of physical and chemical behavior of chemical species and their mixtures. The challenge for chemical engineers is to encode chemical process knowledge in a form that is suitable for decision making. The difficulty is that in integrated networks of chemical processing units decisions have both a local and


Computer technology in process systems engineering


a global impact. So, as the application of expert programming matures we will see a growing emphasis on adaptive learning procedures drawn from the field of artificial intelligence. Much has been made of the ability of symbolic manipulation systems in artificial intelligence for interactive problem solving. In our field I see this as only a superficial veneer, since so much of our knowledge base requires intensive computation to extract meaningful information. Every sensitive variable must be evaluated in the context of mixture-dependent properties; every reaction phenomenon is environmentally dependent. The expert systems of chemical engineering will have to be supported by substantial computation, and pure list processing languages alone will not do. Nevertheless, the evolution of relational data management systems, of VLSI engines tuned to the relational and computational algebra of chemical processing environments, and of multi-mode flowsheeting systems will raise exciting opportunities for configuring new processing complexes with computer aids.

Progress in problem solving by computer may come from an unexpected source. The proliferation of recreational software in video and computer games (Birnbaum 1982) may spill over into the computer aided design sector. These AI-based procedures already incorporate limited natural language understanding, common-sense knowledge bases, and non-trivial simulations of human reasoning.
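To suggest the flavor of such structuring heuristics in code, here is a deliberately tiny PASCAL sketch of one classical rule for separation sequences, "do the easiest split first," ranking candidate splits by the relative volatility of their key pair. The candidate list and numbers are invented, and a built-in string type is again assumed; a genuine expert system would weigh many such rules against a broad knowledge base.

    program RuleDemo;
    { One structuring heuristic as a rule: among candidate     }
    { splits of a mixture, fire on the one whose key pair has  }
    { the largest relative volatility (the easiest split).     }
    const
      NSplits = 3;
    var
      name: array[1..NSplits] of string;   { assumes a string type }
      alpha: array[1..NSplits] of real;    { key-pair volatility }
      i, best: integer;
    begin
      name[1] := 'A / BC'; alpha[1] := 2.4;
      name[2] := 'AB / C'; alpha[2] := 1.2;
      name[3] := 'B / C';  alpha[3] := 1.9;
      best := 1;
      for i := 2 to NSplits do
        if alpha[i] > alpha[best] then best := i;
      writeln('rule fires: perform split ', name[best],
              ' first (alpha = ', alpha[best]:4:1, ')')
    end.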

REFERENCES

Bacon, Glenn (1982). Software. Science, 215, 775-779.

Biegler, L. T. and R. R. Hughes (1981). Infeasible Path Optimization with Sequential Modular Simulators. American Institute of Chemical Engineers Meeting, New Orleans, Louisiana, November, Paper 50a.

Birnbaum, Joel S. (1982). Computers: A Survey of Trends and Limitations. Science, 215, 760-765.

Branscomb, Lewis M. (1982). Electronics and Computers: An Overview. Science, 215, 755-760.

Cherry, D. H., J. C. Grogan, G. L. Knapp, and F. A. Perris (1982). Use of Data Bases in Engineering Design. Chem. Eng. Progr., 78, 59-67.

Codd, E. F. (1982). Relational Database: A Practical Foundation for Productivity. Comm. ACM, 25, 109-117.

Goldstein, Charles M. (1982). Optical Disk Technology and Information. Science, 215, 862-868.

Howden, William E. (1982). Contemporary Software Development Environments. Comm. ACM, 25, 318-329.

Kernighan, Brian W. and Dennis M. Ritchie (1978). The C Programming Language. Prentice-Hall, Englewood Cliffs, New Jersey.

Kernighan, Brian W. and Samuel P. Morgan (1982). The UNIX Operating System: A Model for Software Design. Science, 215, 779-783.

LaBrecque, Mort (1982). Faster Switches, Smaller Wires, Larger Chips. MOSAIC, Jan/Feb, National Science Foundation, Washington, D.C., pp. 26-32.

Lettieri, Larry (1982). Software-in-Silicon Boosts System Performance, Cuts Programming Time. Mini-Micro Systems, March, 93-95.

Newell, Allen (1981). How to View the Computer. In R. S. H. Mah and W. D. Seider (Eds.), Foundations of Computer Aided Process Design, Vol. 1, American Institute of Chemical Engineers, New York, pp. 1-25.

Perris, F. A. (1981). Imperial Chemical Industries, Ltd. Personal communication.

Stephanopoulos, George, B. Linnhoff and A. Sophos (1982). Synthesis of Heat Integrated Distillation Sequences. Understanding Process Integration, The Institution of Chemical Engineers Symp. Ser. No. 74, London, pp. 111-130.

Weiss, Harvey M. (1982). INGRES: A Data Management System for Minis. Mini-Micro Systems, January, 231-237.
