ESDEP WG 5

COMPUTER AIDED DESIGN AND MANUFACTURE

Lecture 5.1: Introduction to Computer Aided Design & Manufacture

OBJECTIVE/SCOPE

To review briefly the developments in computing generally and to describe the various ways in which
computers can be used in the context of steel construction, with particular emphasis on design, drafting
and modelling.

PREREQUISITES

None.

RELATED LECTURES

Lecture 5.2: The Future Development of Information Systems for Steel Construction

SUMMARY

The reduced cost of relatively powerful computing facilities has led to many activities traditionally
performed by hand being performed with the aid of a computer. The improvements in computing which
have largely enabled this development are reviewed.

The potential for using computers within the whole process associated with steelwork construction, from
client brief through to construction on site, is described. General applications such as the use of
wordprocessing, spreadsheets and databases are included, but the emphasis is on analytical and design
calculations, and computer aided design (CAD). The distinction between 2-D drafting systems and solid
modelling is discussed and the potential for transferring the data from a solid modelling system onto
numerically controlled fabrication machinery is considered.

1. INTRODUCTION

The ways in which computers have affected the various activities involved in steel construction have been
led by developments in computing hardware, user environments, software and systems for data exchange.
These developments in themselves have been interlinked, typically by advances in hardware allowing
new possibilities for software development. However, not all advances for the end-user have followed this
sequence; to a very large extent the development of user-friendly interfaces has gone on in anticipation of
suitable computing facilities becoming available.

The computerised processes involved in computer aided design and manufacture (CAD/CAM) have to be
integrated within the normal sequence of events involved in the inception, design and construction of
structures (Figure 1). The process may be handled by a group of individual consultants on various aspects,
together with a fabricator and contractor(s). Alternatively, it may be a "design-and-build" process in
which one large organisation takes responsibility for the whole operation, even if specialist aspects are
contracted-out of the parent company. In either case, problems of communication exist, and the degree of
success in overcoming them is crucial to the success of the project.

Information Technology (IT) is largely concerned with efficient exchange of data and can be used to
maximise the efficiency of all stages of the project. Although structural aspects are of primary concern
here, it is assumed that all the specialist groups associated with a project consider themselves to be part of
an integrated team. In this case, the facilities afforded by computerised systems for sharing data will be
used, for example, to ensure that services can be fitted into the structure without any problems arising at a
later stage in the contract, particularly on site. From the architectural point of view, it is also important that structural members do not obscure natural light from windows or obstruct the free flow of occupants within the building. Even in the structural steelwork context, there are areas where problems commonly arise; a
typical one is where a consulting engineer has selected individual optimum-sized members throughout a
building, giving the fabricator the problem of having to order small quantities of a large number of
different sections and to design and fabricate different connections. Alternatively, consultants may
themselves design connections which, although efficient in their use of material, cause extra fabrication
cost which could have been saved by standardisation on a system which suits the fabricator's capabilities.
These problems should, of course, never arise in any case, and the fact that they commonly do is
essentially the result of inefficient communication between members of the design team.

During the initial tendering phase, the structural designers have to:

 Interact with the client, architect and other specialists, possibly including a fabricator.
 Conceive, agree and rationalise a structural form.
 Perform rapid structural design calculations.
 Produce a limited range of drawings.
 Decide on material requirements and construction processes.
 Use these for estimating a tender price and producing tender documents.

This stage clearly involves a great deal of work which may, after the contract is awarded, have been
fruitless. From this point of view, therefore, there is a need to minimise the effort expended in a very risky
endeavour. On the other hand, in the event of winning the contract, it is essential to reduce the amount of
eventual variation from the tender specification, so this process must be carried out in a conscientious
fashion. There is obvious scope at this stage for a relatively crude computerised approach to save a large
amount of employee-time in preliminary sizing of members, in production of tender drawings and in cost-
estimating.

When the contract has been awarded, the successful design team is then faced with the need to:

 Produce detailed design calculations.
 Produce a range of drawings for fabrication, construction and building control and for interaction
between the structural, architectural and services specialists.
 Produce a detailed bill of quantities and contract documents.
 Identify an efficient fabrication and construction sequence, which ensures that components arrive
on site shortly before they are needed and that the unexpected does not happen.

In each of these processes, the use of computers, both to run software directly and to share data, is an important aspect of ensuring that the building is constructed efficiently and works well.

Although it is natural in a lecture such as this to concentrate on the technical input of computerisation to
the design and fabrication processes, it must be borne in mind that a significant part of the potential gain
in efficiency in any complex multi-stage process can come from a suitable integration of normal office-
automation software such as wordprocessors, spreadsheets and databases. Decisions about how data is
shared and communicated, and how the total process is organised, can also make significant differences to
its efficiency.

In this lecture it is assumed that the reader has only a general awareness of computers and their uses, and
of the applications of automatic control to fabrication and manufacturing operations. The lecture gives,
therefore, a general review of current computing and the routes by which computing has developed over
the past 40 years or so. It is necessary to introduce and use some computer jargon, which is initially
printed in italics.

Computing developments are subject to rapid advancement and, therefore, all such descriptions are valid
only for a short period of time after they are written.

2. COMPUTER HARDWARE

Mechanically-based digital 'computers' were first developed by mathematicians in the 19th Century. They
were developed further only as far as the 'adding machines' and electro-mechanical calculators
(sometimes analogue rather than digital) used in commercial, industrial and military applications until the
mid-20th Century. They performed numerical computations much faster than could be done manually, but
were limited by their large numbers of precision-made moving parts to fairly simple general arithmetic, or
to unique tasks such as range finding for artillery.

The first electronic computers began to be developed in the mid-20th century, using radio valves as their
basic processing components. These components were accommodated on racks and the computers thus
acquired the title of mainframes. They generated large amounts of heat and efficient cooling and air-
conditioning systems were always required. Early computers were unreliable because of the limited life of
the thermionic valves and as the size of installations grew so did the probability of failure. The natural
limit to the size of such computers arrived when a design was considered which employed so many valves
that it was estimated by normal probability theory that it would average 57 minutes of 'down-time' out of
every hour. Maintenance and operation of a computer required a large number of specialised personnel.
Compared with the previous generation of mechanical devices, these computers were extremely powerful.
Within industry they tended to be installed mainly for payroll and financial management, but in the
research environment their development allowed the field of numerical analysis to begin to grow.

The development in the 1950s of transistors and in the 1960s and 70s of miniaturised integrated circuits
(microchips) led to progressive improvements in the size, energy consumption, computing power,
reliability and cost of computer hardware. This enabled a great diversification in the applications of
computing and the machines which do it. The first of these developments was of mini-computers -
relatively portable computers with sufficient processing power to perform tasks which had previously
only been possible on mainframes. The central processor unit was typically accommodated within a
cabinet which could be mounted on a trolley with the required peripherals and used within a normal office
or laboratory environment. In comparison with mainframes, mini-computers had only modest technical
support requirements. Their size reduced dramatically during the 1980s to the extent that their current
descendants, usually known as workstations, are very similar visually to personal computers. Mainframes
themselves developed into supercomputers, with the emphasis being on massive memory and data storage
together with extremely fast processing. Supercomputers are now used to run huge database applications
and numerical simulations of complex systems.

By the mid-1970's microchip technology had developed to the extent that significant computing power
could be fitted within very small units - variously referred to as micro, desktop, personal or home
computers. Initially, they had very low on-board memory, but were directly programmable from the
keyboard in BASIC and could load programs from audio cassettes. The early microcomputer
manufacturers each had their own operating system (or control program) and there was no possibility of
transferring programs or data directly from one type of machine to another. There were also several types
of processor chip in use, each with its own instruction set, so that even programming language compilers
had to be rewritten for each type. A considerable step forward came when a common operating system
(CP/M) was written for one family of processors. This system spurred the production of a large range of
microcomputers between which programs were interchangeable. This process of standardisation has
continued to the extent that at the time of writing there are only two major groups of personal computers
used in business and professional environments: the IBM PC-compatibles and the Apple Macintosh. In the
case of PC-compatibles, little more than the basic specifications are set by IBM itself and a huge
worldwide industry exists to produce the hardware and software. No such 'compatible' manufacturing
industry exists in the case of the Macintosh which, however, has a very strong software base in some
areas, especially in graphic design and publishing.

3. PRINTERS AND PLOTTERS

Despite the current multiplicity of ways for presenting and storing information, a facility for obtaining
hard (paper) copy of input data, program listing, results of analyses, graphics and documents is still very
important. For alphanumeric output hard copy is most conveniently obtained using a printer. In this area
also, there is now a considerable range of options, but the principal change in recent years has been from
hard-formed character printers to raster (or matrix) printers of various types. The great majority of
modern printers belong to the latter group, in which the output is formed from a matrix of dots which
covers the print area in similar fashion to the pixels which form screen images. In black-and-white
printing each of these dots is simply turned on or off to form the character shapes or graphical images,
and the fineness of the printed output depends on how densely the dots are spaced. The method by which
the dots are printed on the paper constitutes the main technical difference between one printer type and
another.

The original mainframe lineprinters were based on similar principles to the typewriter, with hard-formed
characters being struck via an inked ribbon onto the paper. These line printers can achieve high-volume
text output at high speed, but are very limited in their ability to print graphics. Their smaller derivatives
include daisywheel and thimble printers which suffer from the same limitation, and also from rather slow
printing, although their text output is generally of a high quality.

Impact dot-matrix printers have been in use for many years and provide a relatively cheap system for
producing output of reasonable quality for both text and graphics. A moving print head contains one or
more vertical rows of pins each of which can be fired at the paper producing a single dot. Typical systems
offer 9 pins in a single column or 24 pins in three offset columns. Draft output is produced rapidly by
printing dots which do not overlap at all, while near letter quality (NLQ) is produced by simulating
publishers' character fonts with arrays of overlapped dots. In simple 9-pin printers this is achieved by the
print head making two passes over a line with a slight shift in position to give a denser, more precise
image. Various fonts may be provided and a wide range of characters incorporated. Given the ability to
control each pin of the print-head as it passes across the paper, it is also possible to print graphical
images. These may be defined as bitmaps in which the image is stored as a continuous array of dots
covering the whole print area and which may be sent to the printer as a simple screen dump which
converts a screen pixel directly to one or more printer dots. Alternatively, vector images (such as engineering drawings) may be converted to bitmaps by software running either on the computer or in the printer itself.
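
As an illustration of this dot-by-dot principle, the short sketch below (in Python, with an arbitrarily small grid chosen so the result fits on a screen) rasterises a single vector line into the matrix of dots a print head would fire. Bresenham's classic line algorithm is used, though any rasterising routine would serve.

```python
# Illustrative sketch: rasterising one vector line into a dot matrix, as a
# printer driver must do before a matrix printer can reproduce a drawing.
# The 20 x 10 grid and the end points are arbitrary assumptions.

def rasterise_line(x0, y0, x1, y1):
    """Bresenham's algorithm: return the grid dots approximating a line."""
    dots = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        dots.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return dots

# A crude 'screen dump' of the resulting bitmap: one character per dot.
bitmap = [[' '] * 20 for _ in range(10)]
for x, y in rasterise_line(0, 0, 19, 7):
    bitmap[y][x] = '*'
print('\n'.join(''.join(row) for row in bitmap))
```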

Much denser bitmaps can be achieved with laser printers, which deposit their dots electrostatically, in similar fashion to photocopiers. Although expensive, they offer excellent print quality, speed and flexibility (in terms of range of characters, fonts and print sizes). The high density of the matrix makes laser printers capable of printing high-quality graphical images as well as text. The cheaper inkjet printers, which project tiny individual droplets of ink at the paper from a moving print head, produce output of almost comparable quality, but are less flexible and are much slower.

Most engineering drawings produced by CAD systems are stored as vector data (or drawing instructions).
The pen-plotters which have been in use for many years essentially use pens to obey these instructions, acting very much as a mechanised draughtsman. The manufacturing technology of these
plotters has developed to the extent that at the time of writing they still represent an economical way of
producing large drawings at a reasonable speed, in multiple colours and with a variety of pen thicknesses.
Since they are based on servo-motors there is no great penalty to be paid for increasing the physical size
of the drawing space and the amount of plotting data sent and stored is merely proportional to the number
of vector instructions on the plot. However, a dependence on moving parts limits their speed and
precision of plotting. These plotters cover the complete range of paper sizes in use, from A4 to A0. Since
their whole method of working is to move the pen in vectors across the paper (sometimes by moving the
paper as well as the pen) their most economical use of text is to draw "simplex" characters rather than to
attempt to simulate character fonts. For the same reasons they do not perform well when used to produce
blocks of solid colour, for which they simply have to "shade" the area with huge numbers of strokes.
Continuous, or automatic, paper feed is usually available on higher-priced models.
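
The sketch below suggests the kind of vector instruction stream involved. It follows the conventions of HP-GL, a pen-plotter language in wide use (SP = select pen, PU = pen up, PD = pen down); the pen number and rectangle coordinates are arbitrary assumptions for illustration.

```python
# Illustrative sketch of a plot file for a pen-plotter: a handful of
# vector instructions, so file size grows with the number of strokes,
# not with the drawing area. Coordinates are in plotter units.

def rectangle_plot(x, y, width, height, pen=1):
    """Return HP-GL-style instructions drawing one rectangle."""
    return [
        f"SP{pen};",                    # pick up a pen
        f"PU{x},{y};",                  # move to the start, pen raised
        f"PD{x + width},{y},"           # lower the pen and draw the
        f"{x + width},{y + height},"    # four sides in one stroke
        f"{x},{y + height},{x},{y};",
        "PU0,0;",                       # park
    ]

print("\n".join(rectangle_plot(1000, 1000, 4000, 2500)))
```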

Electrostatic plotters, which derive from laser printers, are increasing in use at the expense of pen-plotters.
Since a high-quality dot-matrix image requires massive amounts of memory at the plotter to hold it, the
penalties for requiring large paper size are at present considerable and these plotters can be very
expensive. They are, however, very fast and accurate. It has already been mentioned that laser printers
produce very high-quality plotted output and these represent a very much cheaper solution for a large
amount of technical material for which the smaller paper sizes (A4-A3) are considered suitable. Inkjet
plotters are also available at much cheaper prices than electrostatic and provide an economical way
towards accurate colour plotting.

4. INPUT/OUTPUT AND STORAGE

In batch-processing systems all information, including program code and input data, is supplied by the
user before any processing begins. It can be done in a number of different ways. Early mainframe systems
used punched paper tape or cards, which were cumbersome to edit and conducive to errors. They were
superseded during the 1970s by magnetic tape and disk storage. In the case of early microcomputers the
tape often took the form of audio cassette tapes which have now largely been replaced by the much more
controllable floppy disks. They provide portable storage for a relatively large amount of data and, having
been through several phases of development, have now settled for the present in the 3.5 inch format
which is robust enough to be almost self-protecting against reasonable physical abuse. The so-called hard
disks found on many current personal computers provide both quicker access and very much greater
storage capacity than floppy disks, but are usually not portable between machines. Tape cassette systems
(often known as streamers) are now largely used for making compressed backup copies of material
normally stored on hard disks.

A form of data storage rather different from the magnetic systems mentioned above is compact disk (CD-
ROM) storage. This is very much the same product as the CDs used for sound or video reproduction, and
allows huge amounts of data to be held and rapidly retrieved, compared with the magnetic systems. CD-
ROM is often included with personal computers used for training and information retrieval, because it
provides a facility for mixing software, large information bases and video-quality graphics interactively.
In some cases it is possible to write to CD as portable storage, but it is not possible to re-use the space on
the disk once it has been written to, so that CD is considered as a write once read many (WORM) storage
medium. However, where there is a need to produce, store and retrieve huge amounts of data, it is the
obvious choice.

It is now fairly common to use scanners to enter text and pictures directly into a computer from paper
copy. The key to this technology is not so much in the ability of the scanner device to input a picture of
the sheet placed upon it, but in the character-recognition software which resolves individual character bit-
images into normal printer font characters. For graphics, the production of a bitmap of a photograph or a
line-drawing is fairly straightforward. Software which produces vector plot files from bitmaps of
engineering drawings exists, although at the time of writing it is still under development. In either case,
scanned input can still be fairly unreliable, given the problems which can be encountered with the original
paper documents.

5. INTERACTION

Direct interactive use of computers was not possible on the early mainframes, but it has progressively
become the most effective method of use in most cases. Initially, dumb terminals were used so that users
could type and send to the computer directly the kind of batch programming commands which had
previously been read from punched cards. However, with mainframes two-way communication was slow
since a large number of users might be sharing time on the central processor and data transmission rates
were rather low in any case. It was only when communication and processing speeds had increased that
interactive programs became possible. At this point, an executing program could be made to pause and
request additional data or decisions from the user at the remote terminal, and to resume execution when
this data had been entered. Results could be shown on the terminal or printed as a hard copy.

The use of dumb terminals has now largely been superseded by distributed computing. The personal
computer itself has enough processing power and memory for most applications, so that communication
with the central processor is not subject to time-sharing and truly interactive software is possible. Where
access to software or data needs to be shared between numbers of users, computers tend to be attached to
a network. In a network a number of computers, each of which uses its own processing power, are linked together (Figure 2) so that each has access to the others and, more importantly, each has access to a very
large central filestore on which data and software is stored. This filestore is controlled by a "slave"
computer known as the file server which generally runs the network. When a computer in the ring needs
to use a particular program it loads the program from the filestore and runs it locally. Data produced by
one computer can be held in a common database on the central filestore and accessed by others. Such
networks are often provided with gateways to larger, national or international networks so that
information can be shared by a large group of people. Even with a home computer the use of
a modem allows a user to access the network via an ordinary telephone connection, thus providing a dial-
in facility. This possibility obviously carries the implication that data needs protection against being
corrupted by unauthorised users and, in some cases, confidentiality must be maintained. Various systems
of password protection are used to attempt to ensure that network users do not have access beyond the
areas in which they have a legitimate interest.

Computers are not the only devices which can be attached to a network. Most of the common types of
peripheral (such as printers, plotters, scanners and other input/output devices) can also be attached. In the
case, say, of a plotter the file server will control access to the device by queuing the output to it so that
control is maintained. This queuing system can be applied to any peripheral device which can be attached
to the network; in the context of a fabrication plant, it can be applied to a numerically controlled
workshop machine for which a number of jobs may be waiting at any one time.
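
The queuing principle can be illustrated with a minimal first-come-first-served sketch in Python. The device and job names are hypothetical, and a real network operating system would add priorities, disk spooling and error recovery.

```python
# Minimal sketch of the queue a file server keeps for any shared
# peripheral - a plotter or an NC workshop machine alike.
from collections import deque

class PeripheralQueue:
    def __init__(self, device_name):
        self.device = device_name
        self.jobs = deque()

    def submit(self, user, job):
        self.jobs.append((user, job))

    def run_next(self):
        if self.jobs:
            user, job = self.jobs.popleft()   # strictly first come, first served
            print(f"{self.device}: processing '{job}' for {user}")

plotter = PeripheralQueue("A0 pen-plotter")
plotter.submit("design office", "GA drawing sheet 3")
plotter.submit("detailing", "fitting templates")
plotter.run_next()
plotter.run_next()
```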

6. THE USER INTERFACE

The term user interface refers to the way in which the user and the computer exchange information. In the
most basic sense it might refer to how the user gives instructions when the computer is first accessed or
switched on, and to how the computer responds.

It is controlled by the computer's operating system, which is loaded from its hard disk when it is started,
and includes a series of utility functions which can be initiated by appropriate (shorthand) commands
issued by the user. As many of these functions are concerned with file operations on a disk (deleting,
running, renaming, etc.), the operating system is usually referred to as a disk operating system, or DOS.

In the days of dumb terminals the only two functions of a user interface were:

 To show on the VDU screen the line of characters which was being typed at the keyboard and
eventually to send them to the remote computer (typically when the "Enter" key was pressed).
 To show on the screen any characters sent to the terminal from the computer.

The nature of this interaction was very sequential. Lines of text would progress from top to bottom of the
screen and thenceforward the display would progressively scroll up the screen as more lines were added
to the bottom.

With the very fast data transfer rates which are now possible, and because a screen is controlled by just a
single computer, communication between computer and screen is virtually instantaneous as far as the user
is concerned. This has enabled a very rapid development of the user interface to take place, with the
objective of making the use of computers a more "natural" and less specialised human activity. A
recognition has grown that normal thought processes are largely based on pictorial images rather than
verbalised logic. Opening up the use of computers to the majority of people depends on removing the
necessity to learn even high-level programming languages, including the specialised commands of an
operating system or of a piece of software.

The current generation of windowing user-interfaces (Figure 3) has attempted to minimise the amount of
specialist knowledge needed by users and to address the non-verbal nature of human decision-making.
Their basic context is a computer screen, considered as a desktop on which a number of ledgers
(windows) are placed. These ledgers contain collections of tools (programs) and documents (data files).
The ledgers may be put into the background or brought forward and their contents displayed, and one
ledger may be partially overlaid by another. The tools are each represented by an icon - a small picture -
and a title. A pointer directly controlled by a mouse is used to select a program simply by pointing at it
and clicking a button on the mouse. Once a piece of software is running it obeys the common standards of
the windows interface, so that there is no new working method to be learned by the user on coming to a
new software tool. The working principle is usually to minimise the use of the keyboard for decision-
making (it is obviously the best tool for direct text or data entry) by using the pointer to select options
using a large but standard range of visual devices on the screen. These options include pull-down menus
and dialogue boxes, both of which are small screen overlays on which selections can be made, which
remove themselves after the action has been taken. It is currently fashionable to make major selections by
"pushing buttons" with the pointer. It is possible, while running one program in a window, to pause
operation and use another application in another window. This is not true multi-tasking, since there is
only one program running actively at a time, but it is possible to mix a range of tasks in a given period
without completely closing down any one of them. For example, in writing a technical report it might be
appropriate to keep a word-processor, a spreadsheet, a specific design or analysis program and a CAD
program all open simultaneously, so that the final document can be produced as new figures, calculation
results and tabular information or graphs are generated or modified. Real multi-tasking, in which a large
finite element analysis, for example, could be running while more routine interactive tasks are being
performed, is only available in practice on the most powerful types of workstation.

Although window interfaces make computers accessible to a very wide range of potential users, they
present some difficulties for developers of software. The requirement for on-board memory is high, as is
that for hard disk storage. Development of original software for windows environments is usually rather
slow and time-consuming and, therefore, the economics of writing original technical programs for a
restricted market is not always favourable. Conversion of well-established software running in the normal
operating system environment, in such a way that it keeps its full functionality and retains the working
methods which have made it popular whilst taking advantage of the common user-interface, is an even
more difficult task. It is, therefore, often necessary to work within the normal keyboard-based operating
system environment. On PCs this is usually MS-DOS and on workstations Unix. Using a computer in these
environments requires much more understanding of the functions of the operating system and how data is
stored on disk. Visually the user sees a blank screen, or part of a screen, with a flashing cursor to the right
of a brief prompt. In order to make the computer perform any useful task it is necessary to type in a
command in the operating system's high-level language. This is less daunting than it sounds - with only a
few commands in one's vocabulary and a working knowledge of the directory structuring of hard disks it
is possible to work very effectively with either a personal computer or a workstation.

7. PROGRAMMING COMPUTERS

At the level of the processor chip very large numbers of very simple instructions are executed in order to
perform even the simplest of computing tasks. The task of programming a computer in such terms is a
very tedious process and is only attempted when execution speed is the very highest priority for an item
of software. High-level programming languages provide an alternative means of presenting a sequence of
more advanced instructions to a computer in a form reasonably comparable with ordinary language. The
set of instructions (the computer program) is then translated (compiled) into machine code form
comprehensible to the processor.

Any programming language has a vocabulary of functional commands and a syntax of rules. In addition,
there are numerous arithmetic operators, including many of those used in conventional mathematics and
the ability to use variables of many different types. The programmer prepares a list of such instructions
which represents the flow of control within the program. There are numerous programming languages,
nearly all of which are capable of performing most programming tasks, but each of which has a unique
basic philosophy which makes it efficient in a specific field. For engineering
applications FORTRAN (originally used on mainframes for batch processing) is still very widely used on
account of its mathematical efficiency and its huge library of mathematical subroutines. The world's most
popular language for general programming is BASIC which exists in many different forms, from the
almost unstructured interpreted versions generally bundled with any type of personal computer to very
advanced compiled languages with very large libraries of functions. Perhaps the most versatile and
powerful general-purpose language used by professional programmers is C which includes operators
which allow very easy direct access to computer memory. Other languages are used mainly in specific
types of application with their own functional requirements, and it is not necessary to go into their detail
here. At this time computer users do not normally need to write programs in any case, but will use
software produced by professional developers over many man-years. A particular exception to this is in
the context of spreadsheets, and occasionally databases, in which it may be convenient to
write applications in the high-level languages which are included in these types of software.
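
As a small illustration of these ideas - a vocabulary of commands, arithmetic operators, typed variables and a flow of control - the sketch below expresses Newton's method for a square root in a modern high-level language (Python is used here purely for readability; the algorithm itself is standard numerical analysis).

```python
# Illustrative use of a high-level language: the algorithm is written in a
# form close to ordinary mathematical language, and the compiler or
# interpreter turns it into the very simple machine instructions the
# processor actually executes.

def newton_sqrt(a: float, tolerance: float = 1e-12) -> float:
    """Solve x*x = a (a > 0) by repeated refinement: x <- (x + a/x) / 2."""
    x = a if a > 1.0 else 1.0            # starting guess
    while abs(x * x - a) > tolerance * a:
        x = 0.5 * (x + a / x)            # loop: one form of flow control
    return x

print(newton_sqrt(2.0))   # 1.4142135623...
```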

8. STRUCTURAL ANALYSIS AND DESIGN SOFTWARE

Largely because of its direct links with computational research in universities, structural analysis software
has been available for a long time, initially on mainframe computers but more recently on all types of
hardware. Except in the most complex analytical processes the power of modern personal computers is
adequate for even the more specialised tasks needed for structural engineering. In the case of statically
determinate analysis of structural components, the analysis is normally contained within the detail design
software. Elastic analysis of plane frames or grillages is probably the most useful general tool for the
structural designer. It now exists on personal computers in a multiplicity of different forms. The important
differences between these programs tend to be more in their ease of use than in their technical
capabilities; all tend to offer graphical visualisation (Figure 4), so that geometry and results
can be viewed conveniently, but the processes for editing geometry and loads vary widely, as do their
capabilities of interacting with design and CAD software. Nonlinear, elasto-plastic and three-dimensional
frame analyses are now routinely available on personal computers, usually within general-purpose finite-
element packages which derive from mainframe software developed in academic research. These
packages, although useful for checking stresses, deflections and dynamic motions in very complex cases,
tend to be over-specified for most structural design problems, require very large amounts of data to be
defined and often produce far more output than is necessary. Their use is more appropriate as a final
validation of a design than in the earlier stages when the analysis is being used often as part of the
member selection process.
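
Underlying all such programs is the direct stiffness method: assemble the element stiffness matrices into a global matrix, apply the support conditions and solve for the nodal displacements. The sketch below shows the principle for the simplest possible case, two collinear axial elements; the section properties and loading are assumed values for illustration only.

```python
# Minimal sketch of the direct stiffness method behind frame-analysis
# software: assemble element stiffnesses, apply supports, solve K d = F.
import numpy as np

E, A = 210e9, 3.0e-3          # steel modulus (Pa) and area (m^2), assumed
lengths = [4.0, 4.0]          # two collinear bar elements, nodes 0-1-2
K = np.zeros((3, 3))
for e, L in enumerate(lengths):
    k = E * A / L * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += k  # scatter element matrix into the global matrix

F = np.array([0.0, 0.0, 50e3])   # 50 kN pulling on the free end node
free = [1, 2]                    # node 0 is fully restrained
d = np.zeros(3)
d[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print("nodal displacements (mm):", 1e3 * d)
```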

Structural design software is a much more recent phenomenon, since it relies very heavily on interaction
with the design engineer and only started to become widespread when microcomputers began to flourish
in the early 1980s. Much structural design involves relatively simple calculations - standard loading
calculations, analysis and element sizing based on rules embodied in codes of practice. These calculations
have traditionally been performed by hand, but interactive computing now enables designers to take
advantage of the power of the computer without relinquishing control over design decisions. Design
software relieves the designer of the tedium of laborious manual calculations - in many cases a degree of
'optimisation' is incorporated within the program, but decisions about selecting the most appropriate
individual member sizes remain with the designer. Design software now reaches into nearly all areas, but
is very variable in its nature, style and quality. The best allows considerable flexibility in use, making
revisions to existing designs easy and allowing data to be exchanged with software for analysis, CAD and
modelling and for estimating quantities.

In the context of steel structure design, the material available starts with "free disks" provided by
manufacturers of cold-formed products such as sheeting, composite decking and purlins, which
effectively provide quick look-up tables for safe working loads and spans against key dimensions.
Element design to various codes includes beams (both steel and composite), columns and beam-columns,
and connections of various kinds. Whilst element design usually takes the form of free-standing
executable programs, the power of present-day spreadsheet software is such that applications for standard
spreadsheets can provide a very flexible way of automating these fairly straightforward design processes,
with good links to other standard software. Plastic design of steel frames, particularly low-rise frames
such as portals, is available in different degrees of sophistication in terms of its convenience in use, links
to downstream software and CAD, and in the order of analysis it offers. Plastic design is one area where
different degrees of analytical capability provide different orders of realism in results; the more non-linear
analysis, which allows development of plastic zones, can produce distinctly lower load resistances than
the rigid-plastic and elastic-plastic versions.
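
A typical element check of the kind mentioned above reduces to a few lines of code, whether embedded in a free-standing program or a spreadsheet. The sketch below performs a plastic bending resistance check of the familiar form M_Ed <= W_pl x f_y / gamma_M0; the section modulus and design moment used are assumed values, not taken from any section table or particular code clause.

```python
# Sketch of a code-based element check that design software automates.
# Section properties and loading below are illustrative assumptions.

def bending_ok(m_ed_knm, w_pl_cm3, f_y_mpa=275.0, gamma_m0=1.0):
    """Compare a design moment with the plastic moment of resistance."""
    m_rd_knm = (w_pl_cm3 * 1e3 * f_y_mpa) / gamma_m0 / 1e6   # N.mm -> kNm
    return m_ed_knm <= m_rd_knm, m_rd_knm

ok, m_rd = bending_ok(m_ed_knm=240.0, w_pl_cm3=1000.0)
print(f"M_Rd = {m_rd:.0f} kNm -> {'OK' if ok else 'FAILS'}")
```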

Perhaps the most important thing to appreciate about design software is that different ways of working
will be convenient for different design environments. A steel fabricator with a large commitment to
design-and-build will really need an integrated system, preferably based on a 3-D modeller, in which it is
easy to handle large numbers of members, to standardise sizes and connections, to make rapid revisions,
and to produce accurate costing and fabrication data. A small firm of general consulting engineers, on the
other hand, may find it more convenient to keep a fairly extensive library of free-standing design
programs with an easily understood user-interface, so that basic member sizing and presentation
calculations for building control approval can be done reliably and without a significant re-learning
process when the software is occasionally used.

9. COMPUTER-AIDED DESIGN: TWO-DIMENSIONAL DRAUGHTING

The development of interactive graphics at about the beginning of the 1970s provided the opportunity for
using computers for draughting. These early systems used mainframe computers with graphics terminals
("green" screens) and provided three-dimensional draughting capabilities. Initially this was limited in use
to heavy manufacturing industry, particularly in the production of aircraft, ships and motor cars, where
the benefits of 'mass' production justified the enormous investment then required for CAD. Even in those
pioneering days, the output from the CAD systems was providing automatic bills of quantities and also
being linked into numerically controlled (NC) machines, thus improving manufacturing efficiency.

In the late 1970s the development of 'super mini' computers was a significant factor in a very large growth
in the use of CAD. They provided a single-user facility and can be referred to as 'personal designers'.
Application was still concentrated in the production-based industries, but with increasing use of relatively
cheap, unsophisticated, two-dimensional systems in the construction industry. These personal designers
were difficult to learn and use, largely because they were not developed with the end-user in mind. User
interfaces, which were not standardised, generally took the form of a command line with complex syntax.
The capabilities typically replicated those of conventional draughting processes and often provided little
additional intelligence. For instance, it was often possible to change the numerical value of a dimension
without the drawn length changing, and without appropriate warning messages. Some simple systems still
allow this. The advantages of this type of CAD are very limited - essentially the ease of revising a
drawing and replotting. The time taken to produce the original drawing might often be as much as, or more than, that needed at a conventional drawing-board.

More sophisticated features have rapidly been introduced, offering greater advantages. The advantages
start with improved geometrical constructions such as:

 Snapping, for instance onto the end or mid-points of lines, grid points, tangents, etc.
 Automatic grid generation.
 Rubber-band shapes, including lines, rectangles and circles, which can be placed, dragged, stretched and distorted.
 Associative dimensioning, in which a dimension arrow is tied to two points on an object and changes its printed value if the object is stretched or distorted (see the sketch after this list).
 Layering, enabling different groups of information, for instance those relating to architectural
detail, foundations, structural details, and various building services, to be superimposed on a
basic common plan, see Figure 5.
 Objects which can be defined so that they can be scaled and placed anywhere within the drawing.
Thus, changing information relating to a particular dimension of a defined object influences other
dimensions dependent upon it.
 Symbol libraries of standard geometric forms, e.g. architectural or structural details. These
libraries not only facilitate the drawing process, but can also provide data for use elsewhere, for
example, in the production of bills of quantities. Additional symbol libraries for specialised
purposes can be created or purchased.
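
The associative dimensioning idea in the list above can be sketched in a few lines: the dimension stores references to two points of the object rather than a number, so the printed value follows the geometry automatically. The class and attribute names here are hypothetical, not those of any particular CAD system.

```python
# Illustrative sketch of associative dimensioning: stretch the object
# and the dimension's value changes with it.
import math

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Dimension:
    def __init__(self, p1: Point, p2: Point):
        self.p1, self.p2 = p1, p2        # tied to the points, not a number

    @property
    def value(self):
        return math.hypot(self.p2.x - self.p1.x, self.p2.y - self.p1.y)

a, b = Point(0.0, 0.0), Point(6000.0, 0.0)
dim = Dimension(a, b)
print(dim.value)      # 6000.0
b.x = 7500.0          # drag one end of the object
print(dim.value)      # 7500.0 - the dimension follows the geometry
```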

These facilities are now fairly typical of professional personal computer CAD tools. Increased intelligence has been introduced into the way elements are represented, for instance by according specific
relationships between drawn elements. There is, however, a penalty to be paid for storing data in an
intelligent form, since:
 Additional data must often be specified by the user.
 It requires a sophisticated database system and increased computer memory.
 In order to take advantage of the intelligence, a significant amount of processing is needed.

Two-dimensional draughting systems still have a role in the production of general arrangement drawings,
traditionally the responsibility of the consulting engineer. Unless the system is to be used subsequently to
produce detail drawings which are normally the fabricator's responsibility, there is no real advantage to be
gained for this kind of user by using the three-dimensional structural modelling approach. A standard 2D
system also allows easy interaction with architects and building services engineers. It also enables the
integration of different parts of the civil and structural engineering design work via simple layering.
Drawings, or parts of drawings, are easily copied directly into word processing packages for report
writing. It may also be possible, in future, for the 2D system to act as a partial pre-processor for full
structural modelling.

10. THREE-DIMENSIONAL STRUCTURAL MODELLING

Three-dimensional CAD systems can vary from a simple wire-frame model which operates on lines only,
through surface modelling to complete solid modelling which requires comprehensive data definition and
relationships but offers enormous potential.

Simple three-dimensional systems offer little advantage over 2D CAD for the construction industry.
However, the development of specialised forms of modelling system provides enormous power with
direct relevance to steelwork fabrication (including detail design). In this context, the 3D solid model is a
means of representing the complete structure, as distinct from conventional CAD where individual
elements are merely drawn as flat shapes. This provides a complete description of the steelwork,
including connections, from which all necessary fabrication and erection information can be extracted
automatically. The model is typically created in a manner similar to the design sequence itself, coarsely
defined at the start, with progressively more detail added as appropriate.

Initially the structural layout is defined using a wire frame model (Figure 6a). This can be done with the
aid of a 3D framework of grid lines and datum levels and corresponds to the general arrangement
produced by the architect or consulting engineer. With 3D modelling, it is also possible at this stage to
generate more detailed engineering drawings, including isometric views (Figure 6b). Information
regarding section sizes, geometric offsets and additional data such as end reactions from design
calculations can all be entered very easily. The fabricator's next responsibility is to design connection
details. Detail design is facilitated by using a library of standard connection types (which can be tailored
to suit the needs of individual companies or clients) which will scale automatically to account for the
members to be connected (Figure 7). Appropriate detailed calculations can also be performed according
to accepted design rules and based on the end reactions prescribed when setting up the wire frame model.
Where non-standard connections are required, interactive modelling facilities exist for constructing the
appropriate details, either from first principles or by modifying standard forms. These can be added
subsequently to the library for future use.
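
The behaviour of such a parametric connection library can be sketched as follows. The sizing rules shown (plate depth proportional to beam depth, bolt rows at a fixed pitch) are invented purely for illustration and do not represent any company standard.

```python
# Sketch of a standard connection detail that scales itself from the
# members to be connected. All sizing rules are assumed values.

def end_plate_detail(beam_depth_mm, bolt_pitch_mm=70.0, edge_mm=40.0):
    """Generate a flexible end-plate layout for a beam of given depth."""
    plate_depth = 0.8 * beam_depth_mm                 # assumed proportion
    rows = int((plate_depth - 2 * edge_mm) // bolt_pitch_mm) + 1
    return {"plate_depth_mm": round(plate_depth),
            "bolt_rows": rows,
            "bolt_pitch_mm": bolt_pitch_mm}

for depth in (300.0, 450.0, 600.0):
    print(depth, "->", end_plate_detail(depth))
```
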
The definition of a 3D model in this way contains a complete geometrical and topological description of
the structure, including all vertices, edges and surfaces of each physical piece of steel. As a result all
element dimensions are automatically tested for compatibility, and clashes which can easily carry through
in the traditional processes are removed. The model allows the efficient generation of conventional
drawing information, including general arrangement drawings (plans, elevations, sections, foundations,
isometric views - Figure 8), full shop fabrication details for all members, assemblies and fittings (Figures
9a and 9b), and calculation of surface areas and volumes for all steelwork. Further benefits of such
systems are related to the links which can be established with other parts of the production process. Full
size templates can be drawn, e.g. for gusset plates, and wrap-around templates for tubes. Erection
drawings can be output and material lists (including details of cutting, assembly, parts, bolts, etc.) produced automatically. An interface to a management information system can also facilitate stock
control, estimating, accounting, etc. Potentially of greatest importance is the possibility of downloading
data directly to Numerically Controlled (NC) fabrication machinery, automating much of the fabricating
work itself. At this level, 3D modelling is the central controlling tool for an integrated steel fabrication
works in which the total design-and-build package is offered.

In more general terms, surface modelling provides additional information about a 3D model. At its
simplest, but probably most cumbersome, this can take the form of defining boundaries within which
there is a surface with specified characteristics. More sophisticated surface modelling techniques, such as
rubber surfacing which allows a surface to be stretched and squeezed into shape, are not directly relevant
to most construction work, but are particularly valuable where shell forms are being developed, e.g. for
motor car body design and manufacture. It may be that developments in steelwork modelling of the type
described above will allow a convenient way of integrating the skeletal models with surface models of the
building envelope and architectural visualisation models, but at the time of writing this is not yet a reality.

The general arrangement drawings have typically provided the basis for a Bill of Quantities used for
tendering. Preparation of a Bill requires the weight of steelwork in different parts of the structure to be
calculated, including an allowance for attachments and connections, and a brief description of the
operations required for fabrication and erection. The specification, which may be in a largely standardised
form, provides additional information, e.g. regarding the corrosion protection system to be applied. The
Bill of Quantities is traditionally prepared by hand. However, if a suitable 3D modeller is used, the output
can form the basis of the Bill, with quantities called off automatically. This technique not only avoids
time spent on tedious calculation, but also minimises the potential for errors in the quantities. As part of
the steelwork detail drawings, each item is given a unique reference number. This number is used to
identify each workpiece in the subsequent fabrication and erection operations and also serves as the basis
for a materials list which is issued for ordering stock and planning production.
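
The automatic call-off of quantities amounts to little more than walking the model's member list, as the sketch below suggests. The kg/m values are the standard masses of the serial sizes named; the piece marks and lengths are assumptions for illustration.

```python
# Sketch of quantities 'called off' a model automatically: each member
# record carries its section and length, so the steel weight follows.

MASS_KG_PER_M = {"IPE 300": 42.2, "IPE 400": 66.3, "HEB 200": 61.3}

members = [                      # (piece mark, section, length in m)
    ("B1", "IPE 400", 7.5),
    ("B2", "IPE 300", 6.0),
    ("C1", "HEB 200", 4.2),
]

total = 0.0
for mark, section, length in members:
    w = MASS_KG_PER_M[section] * length
    total += w
    print(f"{mark}: {section:8s} {length:5.2f} m  {w:7.1f} kg")
print(f"total steel weight: {total / 1000:.2f} t")
```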

In a design-and-build contract, a formal Bill of Quantities is not used. Instead the steelwork contractor
must estimate a lump sum on the basis of experience and preliminary calculations. When the contract is
awarded, the fabricator produces the design calculations and general arrangement drawings.
Preliminary buying lists for purchasing stock from the steel mills or stockholders are then required and
the sequence of operations follows a similar route to the more traditional method of procurement. In this
environment also, it is clear that use of a suitable 3D modeller can enhance the accuracy of estimation of
quantities, even before a complete detailed solid model exists.

11. NUMERICAL CONTROL IN FABRICATION

Traditional methods of preparing steelwork elements for construction - cutting to length, drilling, making attachments (cleats, brackets, etc.) and assembling sub-frames (e.g. trusses) - were labour-intensive and
based on precise information on the steelwork detail drawings. Measurements and marking were
performed manually using templates, typically of timber construction, for repetitive or complicated
details. Appropriate machine tools (saws, drills, etc.) would be aligned visually and each operation
performed in sequence, with the workpiece being transported between individual items of equipment.
Subframes were typically put together on a laying-out floor on which the form of the geometry had been
marked using traditional setting-out methods.

The introduction of NC machines has enabled preparation details such as overall length and position of
holes to be defined numerically via a computer console. Handling equipment automatically positions the
workpiece in relation to the machine tool, which performs the necessary operations. In this way, the
labour-intensive operations of marking, positioning and preparation are integrated into a single process
which leads to major improvements in fabrication efficiency, especially where fairly standard or repetitive
operations are concerned. Even greater efficiency can be achieved by transferring the necessary
information on machining directly from the steelwork modeller into the NC machines rather than by
transcribing it manually from drawings or paper specifications. This process requires a computer modeller
which is capable of providing the machining operations data in a suitable form. The data can then be
transferred either by writing to a floppy disk which can then be read by the NC machine, or via a direct
network connection between the machine and the CAD workstation. At the time of writing only a
minority of fabrication plants have complete computer-integration in this way because of
incompatibilities between computing hardware and machine tools, but this integration is clearly capable
of providing much greater efficiency and higher quality than the present semi-manual process.

12. THE FUTURE

Predicting future developments in computing is notoriously hazardous. However, the trend of increasing
power of computers with little or no increase in cost shows no sign of slowing down, suggesting that the
application of computing is likely to spread even further. Applications, which currently require excessive
amounts of processing making them impractical, will become feasible. The evolution of graphical user
interfaces appears to have reached a plateau, but the application of graphics may well become much
wider, with "virtual reality" applications, for instance, allowing the structural designer as well as the
architect a realistic visualisation of new developments. This application has already been used in
demonstration form for a small number of new constructions.

Routine design calculations may become more sophisticated, possibly allowing more adventurous design
solutions, but there is a danger that the designer may become over-reliant on the processing power of the
computer. A simple understanding of general structural behaviour is still essential. There may be a
temptation to use over-elaborate methods of analysis and design, and the engineer should always consider
whether these are appropriate, particularly bearing in mind unavoidable uncertainties regarding design
loads, material strengths, etc. There is also a danger of refining designs to an excessive degree in an
attempt to optimise structural efficiency. For example, a structure in which every steelwork element has
been designed for minimum weight will result in the lowest overall tonnage, but almost certainly at the
expense of increased fabrication and erection costs.

Some aspects of steel design, for instance fire resistance, have traditionally been treated in an over-
simplified fashion and increased computer usage will rightly allow more rational approaches to become
more commonly considered as part of the design calculations. Other aspects of structural behaviour have
often simply been ignored. Dynamic analysis, for instance, is a specialist topic which the designer may be
called on increasingly to look at in detail, and again the integrated computer model could enable this to be
done painlessly as far as the design is concerned. Increasingly, the designer will create an intelligent model
of the structure and expose it to a number of design scenarios, observing and interpreting the responses. In
this respect graphics is again likely to become prominent, with visualisation of behaviour rather than the
presentation of lists of numerical results requiring careful interpretation.

In building forms where complex geometry is involved, such as the International Terminal at Waterloo,
the use of conventional draughting methods would have been almost impossible. In this case a 3D
modeller was used to set up the geometry of a single bay of the three-pinned trussed-arch system. This
acted as the starting point for the whole roof and also facilitated the setting-out on site, with a number of
targets attached to each arch which could then be positioned on site using precise three-dimensional co-
ordinates and conventional electronic distance measuring equipment.

Integration of computers into each of the different stages of design and construction will not only lead to
improved efficiency with data automatically carried forward, but it will also extend computing into areas
which might be regarded as trivial. If the scheme design involves the creation of a simple 3D wire-frame
model of the structure, then loading calculations become almost automatic. Whilst this is not a difficult
part of engineering design calculations, it is somewhat tedious and automatic load assessment would
result in valuable time-savings. It is possible that eventually expert systems, which have so far had limited
success in structural engineering, may be of use at the concept stage and in integrating the structural form
with services and building-use requirements.
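
The sketch below indicates how mechanical such a load assessment becomes once beam spans and spacings exist in a model. The characteristic floor loads and member schedule are assumed values for illustration only.

```python
# Sketch of 'almost automatic' load assessment from a wire-frame model:
# tributary-area arithmetic applied to each beam record in turn.

dead_kn_m2, imposed_kn_m2 = 3.5, 5.0      # assumed characteristic loads
beams = [                                  # (mark, span m, tributary width m)
    ("B1", 7.5, 3.0),
    ("B2", 7.5, 3.0),
    ("B3", 6.0, 2.5),
]

for mark, span, width in beams:
    w = (dead_kn_m2 + imposed_kn_m2) * width     # UDL on the beam, kN/m
    r = w * span / 2.0                           # each end reaction, kN
    print(f"{mark}: w = {w:.1f} kN/m, end reactions {r:.1f} kN")
```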

These developments, which all depend on the establishment of a common, universal database structure,
will allow information about a structure to be shared between different applications, so that a change in
data as a result of one process automatically feeds through to other dependent processes to ensure
consistency. Object-orientated programming concepts and relational databases provide the vehicles for
these developments. It has been seen that the 3D modeller is already being linked to fabrication
machinery and to other aspects of the whole building. This linking is likely to become more common as
standard data structures are established and fabricators exploit the improvements in efficiency which
integration offers. The linking can be extended through to site planning, allowing more precise control
over component delivery and operations, where even greater improvements in efficiency could be
realised. Integration is also likely to be extended to non-structural areas with, for instance, analysis of
energy requirements, day lighting, etc. all being integrated and making use of a central database.

13. CONCLUDING SUMMARY

 Computing facilities continue to improve dramatically and their use is now highly cost effective
for a wide range of activities within steel construction.
 Interactive graphical user interfaces have become standard, making it easier for non-specialists to
use computers.
 Different facilities are required by different organisations within the design and construction
process.
 The greater the degree of automatic data transfer between different applications, the more
efficient the overall process will be.


ESDEP WG 5

COMPUTER AIDED DESIGN AND MANUFACTURE

Lecture 5.2: The Future Development of Information Systems for Steel Construction

OBJECTIVE/SCOPE

To discuss possible future developments in data transfer between different stages in the steel construction
process, through a product model approach. To indicate the benefits that might be realised as a result of
this and how such changes can be achieved.

PREREQUISITES

Lecture 5.1: Introduction to Computer Aided Design and Manufacture

RELATED LECTURES

None.

SUMMARY

The processes of exchanging information at various stages of a steel construction project are reviewed
briefly. The potential advantages of enabling this transfer to be made directly between computers rather
than, as at present, on paper are outlined. The basic requirements that must be met before such a system
can be implemented are discussed in principle, and the practical ways in which it might be achieved are
considered. The role of the management information system is explained, and a realistic approach to
implementation throughout the industry is outlined.
1. INTRODUCTION

The progress of a building project from client's brief to completion of execution entails the generation and
transfer of large quantities of information, much of it in the form of paper documents. Many people
contribute to the project as it progresses through its various phases.

Inefficiency and disruption result from the need to translate information from one format to another - as
occurs, for example, in the creation of workshop drawings for the steel fabrication - as well as from the
transfer of inadequate or erroneous information, and from late changes which may entail laborious
reworking.

The aim of this lecture is to extend ideas relating to information exchange standards, developed for
manufacturing industry, to the processes of information transfer between the various stages of
construction, in order to render these more efficient and economical.

2. INFORMATION EXCHANGE IN THE CONSTRUCTION PROCESS

2.1 Information Exchange: The Present

Figures 1 and 2 give an indication of the information generated and exchanged within the construction
process, and the various parties which may need to be involved in such exchanges of information. Figure
3 represents a portion of this information schematically, using the terminology of the Product Model - the
product being in this case the steelwork aspect of the building project. This figure marks the stages in the
life of the product, and illustrates the accumulation of product data as product life progresses. Information
exchanged between phases often has particular legal significance. A particular example is the set of
information exchanges which take place at the end of the design phase and which are marked by the
signing of a contract. There is a particular onus on the participants to ensure the completeness,
correctness, clarity and finality of such information exchanges, as errors can waste time and money and
variations lead to contractual claims.
Common sense would suggest that the quantity of information exchanged should be confined to the
essential - consistent, of course, with conveying an adequate understanding of requirements.

At present this information is exchanged between participants as hard copy, i.e. as reports, calculations,
drawings, etc. Interpreting this data at each stage of information exchange can be a time-consuming
process, particularly if there are ambiguities, or if some aspects are incomplete. Modifications to the
information generally result in changes within all subsequent stages of the product model route. Changes
in the client brief, for instance, cause reworking of design calculations, drawings, details, etc. and if made
late in the programme can result in significant delays. Substitution of alternative section sizes at the
detailing stage to take account of material availability, for example, may have less consequence, but even
so the required changes to details may be time consuming. There is always a danger, too, that isolated changes
have implications for other aspects of construction which are not identified in the rush to make the
corrections. Something as simple as a change in beam depth, for instance, may have considerable
significance for the accommodation of services.
The present system does, however, provide useful opportunities for checking information since at each
stage of information transfer the data is examined afresh. It also allows for considerable flexibility in the
system, with some information being passed on in a partially complete form, and in a variety of formats.
Feedback between later stages in the product model route and those earlier in the process is also
relatively straightforward.

2.2 Information Exchange: The Future

More use is being made of computers within each stage of the product model route, with a view to
increasing efficiency. One of the most time consuming aspects of using computers is entering data, and
significant savings can therefore be achieved if the effort required for data input is minimised. This can be
achieved by transferring data between successive stages in the product model route electronically rather
than as hard copy.

Future information exchange will therefore involve wider use of computers to reduce manual input of
data and provide a better flow of information relating to the steelwork 'product'. For instance, information
derived from design calculations could be transferred directly to a computer-aided design (CAD) system
to avoid duplicating definitions of basic data such as beam spans, column heights, etc. and to enable the
output from the calculations (section sizes, beam reactions, etc.) to be taken directly into the next stage.
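
One deliberately simplified form such a transfer might take is sketched below: the design calculations write a structured file which the CAD system reads back, so that spans and section sizes are never re-typed. The file name and layout are invented; this is not a published exchange format.

```python
# Sketch: design results written once, read directly by the next stage.
# The file layout is illustrative, not a standard exchange format.

import json

design_output = {
    "members": [
        {"name": "B1", "span_mm": 7500, "section": "IPE 400", "end_reaction_kN": 112.5},
        {"name": "C1", "height_mm": 4000, "section": "HE 240 B"},
    ]
}

with open("design_results.json", "w") as f:
    json.dump(design_output, f, indent=2)

# ...later, in the CAD system's import routine:
with open("design_results.json") as f:
    for m in json.load(f)["members"]:
        print(f"placing {m['section']} for member {m['name']}")
```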

Some developments have already taken place in this direction: the integration of general arrangement
drafting and detailing systems, and the output of 3-D modelling systems leading directly into numerically
controlled machines for fabrication. Going further means replacing the present limited conventions and protocols
for information exchange, both manual and computer based, with a more rigorous unified information
exchange system which can apply across the entire construction industry, and which is capable of
operation in all phases of the project's life.

Such a system requires:

i. the establishment of a unified computer based product description.

This requires data in a sufficiently comprehensive form to describe all relevant aspects of the product at
all stages.

ii. the creation of standards for the transfer of information between different computer systems and
organisations.

iii. the creation of information management systems - to control information changes, access rights and
quality assurance.

These requirements can be illustrated by the following simple examples:

i. A computer program for the elastic analysis of frames requires member cross-sections to be described
in such terms as area and moment of inertia. It does not require a description of how the area is distributed
throughout the section, i.e. what its shape is. Such a description would, however, be inadequate to
generate connection details. It is, therefore, desirable to have a single format which would accommodate
both needs, permitting an efficient transition from analysis to drafting.
This is a somewhat trivial example; most frame analysis programs now allow definition of cross-sections
by reference to a standard library. However, it does illustrate the point that data which is
sufficient to describe the product at one particular stage in the process may be inadequate for other stages.
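
The idea of a single format serving both needs can be sketched as follows. The detailing system reads the shape dimensions directly, while the analysis program reads only properties derived from them. The dimensions shown are indicative only (root radii are ignored, so the derived properties are approximate), and the section name is invented.

```python
# Sketch: one section description serving both analysis and detailing.
# Dimensions are indicative only, not from any published section table.

from dataclasses import dataclass

@dataclass
class ISection:
    name: str
    depth: float     # mm
    width: float     # mm
    t_flange: float  # mm
    t_web: float     # mm

    # Derived properties: all the frame analysis program needs.
    @property
    def area(self) -> float:
        return (2 * self.width * self.t_flange
                + (self.depth - 2 * self.t_flange) * self.t_web)

    @property
    def inertia(self) -> float:
        # Second moment of area about the major axis (mm^4),
        # treating the section as three rectangles.
        web_d = self.depth - 2 * self.t_flange
        i_web = self.t_web * web_d**3 / 12
        a_fl = self.width * self.t_flange
        y_fl = (self.depth - self.t_flange) / 2
        i_fl = 2 * (self.width * self.t_flange**3 / 12 + a_fl * y_fl**2)
        return i_web + i_fl

s = ISection("IPE400-like", depth=400, width=180, t_flange=13.5, t_web=8.6)
print(f"A = {s.area:.0f} mm^2, I = {s.inertia/1e6:.1f} x10^6 mm^4")
# The detailing system, by contrast, reads depth/width/thicknesses directly.
```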

ii. Working on a range of products, a steelwork fabricator has to produce shop drawings from engineering
design information originating from a variety of software and hardware systems, some of which are
mutually incompatible. It would seem advantageous if the fabricator could access directly the graphical
information base created by the designer in each case. This would necessitate a CAD system capable of
information exchange with all others on the market. CAD developers have tended to concentrate on
transfer of information between computers running the same software, i.e. theirs, rather than facilitating
exchange of information with machines running software produced by a competing CAD developer. An
information exchange format which is particular to a CAD package is termed the 'native' data exchange
format of that system.

Considerable progress has been made in this direction with regard to alphanumeric data. The ASCII
format provides a basic standard so that text produced using one wordprocessor system, for instance, can
be output in this form enabling it to be read directly by other systems or application programs. Dealing
with text is a relatively simple matter because it involves a limited number of unique characters. Even so,
the ASCII standard provides for the basic characters only with no formatting signals to indicate different
text styles, subscripting, etc. Data for presenting information graphically is even more complicated, but
some standards have been established: IGES and DXF files serve a similar function, providing a standard
representation of drawing instructions and enabling the output from one CAD system to be interpreted by
another. However, it should be clear that this is not in itself sufficient to provide a full description of what
is being drawn. The full product model description requires much more complete information.
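
The limitation can be made tangible with the sketch below, which writes a minimal DXF fragment for a single line using the standard DXF group codes. A receiving CAD system can redraw the line, but nothing in the file says whether it represents a beam flange, a gridline or a dimension mark; that meaning is exactly what a product model must carry. Real DXF files also contain header and table sections omitted here.

```python
# Sketch: a minimal DXF fragment describing one line entity.
# DXF pairs a numeric group code with a value on the following line:
# 0 = entity type, 8 = layer, 10/20 = start x/y, 11/21 = end x/y.

def line_to_dxf(x1, y1, x2, y2, layer="0"):
    pairs = [
        (0, "SECTION"), (2, "ENTITIES"),
        (0, "LINE"), (8, layer),
        (10, x1), (20, y1), (11, x2), (21, y2),
        (0, "ENDSEC"), (0, "EOF"),
    ]
    return "\n".join(f"{code}\n{value}" for code, value in pairs)

# Geometrically this might be the top flange line of some beam, but the
# file records only coordinates; the engineering meaning is lost in transfer.
print(line_to_dxf(0.0, 0.0, 7500.0, 0.0))
```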

3. A FRAMEWORK FOR CHANGE

3.1 The Product Model

An essential first step towards an integrated approach is the development of a standard specification for
the organisation of technical information on structural steelwork. This specification is referred to as the
'logical product model' and provides a standard basis for the production of interfaces between structural
steelwork software products. When the technical information on a particular structural steelwork contract
is arranged according to the 'logical product model' specification, it is then simply referred to as a product
model. The product model approach can be used to transfer information between all sorts of software
products by using product model files (computer files) to transfer the information automatically.
Consistent versions of existing paper documents can then be generated, as required, from this unified
digital description, or product model.

In broad terms, the system would work in the following way:

- Each software product concerned with structural steelwork would have its own product model interface.
- Product model files would be used to transfer information between the various software products.
- The product model interfaces would read information from, and write information to, the product model files as and when required.
Figure 4 compares the traditional approach with the product model approach for information exchange.
The main advantage of the product model approach is that it offers users the flexibility to configure
and develop systems from the software products they prefer (provided each product they wish to use has a
product model interface).
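
In outline, each product model interface is no more than a reader/writer pair against the neutral file, as in the sketch below. The class names and toy file layout are invented; a real interface would follow the published 'logical product model' specification rather than this simplified structure.

```python
# Sketch: each software product supplies its own product model interface,
# i.e. code that reads from and writes to neutral product model files.
# Class names and the file layout are invented for illustration.

import json

class ProductModelInterface:
    """Base interface: every participating product implements these two calls."""
    def read(self, path):
        with open(path) as f:
            return json.load(f)

    def write(self, path, model):
        with open(path, "w") as f:
            json.dump(model, f, indent=2)

class AnalysisInterface(ProductModelInterface):
    def export_results(self, path, sections):
        self.write(path, {"stage": "analysis", "sections": sections})

class DetailingInterface(ProductModelInterface):
    def import_sections(self, path):
        return self.read(path)["sections"]

AnalysisInterface().export_results("pm_file.json", [{"name": "B1", "section": "IPE 400"}])
print(DetailingInterface().import_sections("pm_file.json"))
```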

A product model for steelwork construction is currently being developed within the Eureka EU130
CIMSTEEL project.

In the long term, the approach is capable of being developed to achieve full database integration of
software products. A 'target system architecture' activity aims to map out this future path of development.

3.2 Information Exchange Between Software Products

3.2.1 Introduction
The native exchange file formats are 'de facto' standards established by particular software vendors and
remain under their control. In contrast, the concept of a neutral file format implies a universal standard
independent of any particular vendor. Such standards originate typically from research projects but are
now increasingly coming under the control of international standards bodies.

One of the principal goals of current research projects, e.g. EUREKA, ESPRIT, is to make it possible to
transfer information easily and inexpensively between the many different software products already
available or being developed for the structural steelwork industry. This implies direct digital transfer of
information obviating the need for manual interpretation of drawings, etc.

Examples of software products involved are:

- Structural analysis programs.
- Computer aided design and detailing systems.
- Software for programming of NC (numerically controlled) machine tools, e.g. sawing, drilling, flame cutting and profiling machines.
- Software for programming of welding cells.
- Company MIS (management information systems) and software for cost estimating.

The main benefits of linking software are that time and effort can be saved, and transcription errors can be
eliminated.

Traditionally, wherever a company requires an efficient means of information exchange between specific
software products, a new piece of purpose-written software, 'an interface', has to be produced.
Unfortunately, an interface will only work with the particular pieces of software for which it was
specifically written in the first place. Thus, every time a new software product is introduced, new
interfaces have to be produced to link with each and every other piece of software with which it needs to
exchange information. A simple interface between two pieces of software solves only a local problem and
creates a localised increase in efficiency (Figure 5). To achieve a solution which meets the requirements of the
whole industry, a wider perspective is required.
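
The scale of the problem is easily quantified: with n software products linked point-to-point, the number of distinct pairwise interfaces grows as n(n-1)/2, whereas a shared neutral format requires only one interface per product. The short sketch below simply evaluates both counts.

```python
# Point-to-point interfacing versus a shared neutral format.
for n in (5, 10, 20):
    pairwise = n * (n - 1) // 2   # one purpose-written interface per pair
    neutral = n                   # one interface per product
    print(f"n = {n:2d}: pairwise = {pairwise:3d}, neutral format = {neutral}")
# n = 20 already needs 190 purpose-written interfaces point-to-point.
```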
3.2.2 'Neutral' graphical exchange file formats

The IGES Standard

The most widely supported of the current generation of neutral exchange file formats is the Initial
Graphical Exchange Standard (IGES). It originated in 1980 with the then United States National Bureau
of Standards. By 1988 Version 4.0 had been published, and at the time of writing this lecture, a final
version - to be Version 5.0 - is awaited. While IGES has extended its ability to represent information and
addressed problems of efficiency, the standard has grown increasingly complex. It is similar in principle
to the DXF system, which is a proprietary product of Autodesk.

The neutral file concept established by IGES led to the evolution of several other data exchange
standards, each targeted on the needs of a specific group of CAD/CAM users. In each case, the objective
was to make the exchange process more efficient and reliable, and to maximise the ability of the
developed data exchange file format to represent particular classes of engineering information.

While considerable technical progress was made by these various standards projects, the result was a
proliferation of data-exchange formats. It was recognised that the solution lay in a single second-
generation neutral file standard which would provide a unified framework for data exchange by all sectors
of engineering. The result was the new emerging International Standards Organisation (ISO) STEP
Standard.

The ISO STEP Standard

STEP, the Standard for the Exchange of Product model data, seeks to provide consistent data models
across a broad range of engineering applications, applicable to the whole life-cycle of
engineering products. Thus the STEP data models will (eventually) enable all aspects of a construction
project to be represented, from conception through to the structure's ultimate demolition.

So in some respects, STEP is just another neutral data exchange file format. However, the true
significance of STEP is that it is a definitive second-generation engineering data exchange
standard based on the concept of a product model. It is interesting to note that during the early
development of IGES it was 'Product Data' that was to be exchanged. The switch to 'Product Model' in
STEP reflects a recognition that it is information (i.e. meaning), not data, that has to be transferred (see
Figure 6).
Currently, STEP is little more than a powerful enabling technology and, while it may be a long-term task
to compile the necessary component product models, the technology for the implementation of STEP will
soon be available [1, 2].

3.3 Management Information Systems (MIS)

To make real progress in the area of future management information systems, it is necessary to have a
clear and common view of how the MIS relates to the product model. The main point to recognise is that the
product model is limited to technical information. Management information must be dealt with separately
by the MIS.

Figure 7 presents a simplistic view of the structural steelwork design and manufacture process with boxes
representing the functions of scheme design, detail design, fabrication and erection. Types of software
products which may be used are shown under each function. At the top of the diagram is the Management
Information System, which monitors and controls the functions. At the bottom of the diagram is the
Product Model which provides the technical information needed by the software products in the form of
product model files.

Although a clear division between technical and management information can be defined theoretically, in
reality the MIS will need to:

- Know where all technical information is located and how it is organised.
- Monitor and control all modifications to Product Model information.
- Monitor and control all Product Model file transfers to and from the software products.

Thus, in addition to a 'management information controller', the MIS should also include a 'product model
information controller', whose function will be to manage the flow of product model information in the
form of product model files. Figure 8 illustrates the way in which this could be arranged. In essence, the
MIS controls both the management functions and the transfer of product model information. Product
model files are stored in the product model file store and are used to transfer technical information to
enable the various pieces of technical software to perform their required operations for any particular
contract going through the factory.
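
The role of the product model information controller can be sketched as follows: every transfer to or from the file store passes through it, so that access rights are checked and all modifications are logged. The rights table, method names and file name are invented for illustration.

```python
# Sketch: the MIS's product model information controller mediating
# all transfers to and from the product model file store.
# Access rights and all names are invented for illustration.

class ProductModelController:
    def __init__(self):
        self.store = {}   # file name -> content (stands in for the file store)
        self.log = []     # audit trail of all transfers and changes
        self.write_rights = {"detail_design": True, "erection": False}

    def check_out(self, user, name):
        self.log.append((user, "read", name))
        return self.store.get(name)

    def check_in(self, user, name, content):
        if not self.write_rights.get(user, False):
            raise PermissionError(f"{user} may not modify {name}")
        self.store[name] = content
        self.log.append((user, "write", name))

mis = ProductModelController()
mis.check_in("detail_design", "contract42.pm", {"beams": ["B1", "B2"]})
print(mis.check_out("erection", "contract42.pm"))  # reading is permitted
# mis.check_in("erection", "contract42.pm", {})    # would raise PermissionError
```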
It should be noted that Figure 8 represents product model file interfacing, and does not include database
integration of product model information. As such, it can only represent a step towards the development
of fully integrated systems.

4. IMPLEMENTATION

Incremental Implementation

It has to be fully recognised that many of these long-term aims have only a theoretical meaning today. As
a result, practical incremental implementation is essential so that the industry can start to reap the benefits
in the shorter term. The shorter-term goals of common information exchange standards allow the
interfacing of systems, enabling the industry to take the first vital steps towards the implementation of
computer integrated manufacture (CIM).

Recognising the different ways in which steelwork companies are managed, and will continue to be
managed, it is evident that an all-encompassing standard for management information is going to be very
difficult to achieve.

However, if the finance, sales and marketing, personnel, and administration functions are excluded for the
time being, then a common approach is feasible for:

- Contract planning.
- Capacity planning.
- Process planning.
- Design control.
- Materials control.
- Fabrication control.
- Despatch/transport control.
- Erection control.

It is in these areas that an industry-wide approach could be developed. An industry MIS could be
produced covering these functions which would comprise a number of modules operating in conjunction
with a management information database and a product model file store.

5. CONCLUDING SUMMARY

- Computer aided transfer of standard product information between designer and fabricator will reduce the time for information production, detailed design and the production of fabrication drawings, as all required information can be transferred automatically.
- The net result will be significant increases in efficiency due to the reduction in contract variation claims and hence a less contentious contractual relationship. Controlled early access to relevant information, and to changes to information, has great advantages in reducing lead times and errors.
- This future development will result in a dramatic change in the nature of estimating with respect to current practice. The fabricator will receive standard items of product information together with files of historical manufacturing (material and workmanship) cost data for each item. The 'scientific' part of estimating can thus be automated, and commercial judgments on an estimated contract value can then be applied in the more certain knowledge that the estimates are correct.
- There are three key requirements for the structural steelwork industry to enable the effective and efficient transfer of product information. These are computer-based product descriptions, international information exchange standards for structural steelwork (neutral exchange file formats), and information control (management information systems).
- These developments represent a fundamental change in current methods of working. Acceptance by the industry can only be achieved by the introduction of short term solutions which lead towards the ultimate goal.

6. REFERENCES

[1] National Economic Development Council (NEDC), Information Transfer in Building, NEDO,
London, 1990.

[2] Watson, A. S., CAD Data Exchange, Proc. Institution of Civil Engineers, Part 1, Vol. 88, December
1990, pp. 955-969.
