
Output Process- ensures that system output is not lost, misdirected, or corrupted and that privacy is not violated. Exposures of this sort can cause serious disruptions to operations and may result in financial losses.

Stages in the Output Process
[Figure: output flows from the output run (spooling) to the print run, then through bursting, waste disposal, data control review, and report distribution before the output report reaches the end user; aborted output is diverted to waste.]

Output Spooling- applications are often designed to direct their output to a magnetic disk file rather than to the printer directly.
Print Programs- print programs are often complex systems that require operator intervention.
Four Common Types of Operator Intervention:
Pausing the print program to load the correct type of output documents (checks, stocks, invoices, or other special forms).
Entering parameters needed by the print run, such as the number of copies to be printed.
Restarting the print run at a prescribed checkpoint after a printer malfunction.
Removing printed output from the printer for review and distribution.

Bursting- when output reports are removed from the printer, they go to the bursting stage to have their pages separated and collated.
Waste- computer output waste represents a potential exposure. It is important to properly dispose of aborted reports and the carbon copies from multipart paper removed during bursting.
Data Control- in some organizations, the data control group is responsible for verifying the accuracy of computer output before it is distributed to the user.
Report Distribution- the primary risks associated with report distribution include reports being lost, stolen, or misdirected in transit to the user.

For highly sensitive reports, the following distribution techniques can be used:
The reports may be placed in a secure mailbox to which only the user has the key.
The user may be required to appear in person at the distribution center and sign for the report.
A security officer or special courier may deliver the report to the user.
End User Controls- once in the hands of the user, output reports should be reexamined for any errors that may have evaded the data control clerk's review.

Controlling Real-Time System Output- real-time systems direct their output to the user's computer screen, terminal, or printer. This method of distribution eliminates the various intermediaries in the journey from the computer center to the user and thus reduces many of the exposures. The primary threat to real-time output is the interception, disruption, destruction, or corruption of the output message as it passes along the communication link. This threat comes from two types of exposures:
1. Exposures from equipment failure
2. Exposures from subversive acts, whereby a computer criminal intercepts the output message transmitted between the sender and the receiver

Testing Computer Application Controls

Two general approaches:
Black-box approach- auditors testing with the black-box approach do not rely on a detailed knowledge of the application's internal logic. Instead, they seek to understand the functional characteristics of the application by analyzing flowcharts and interviewing knowledgeable personnel in the client's organization.
White-box approach- relies on an in-depth understanding of the internal logic of the application being tested.

Auditing Around the Computer- The Black-Box Approach
[Figure: input transactions and master files are processed by the application under review, which produces output; the auditor reconciles the input transactions with the output produced by the application.]

Common types of tests of controls

Authenticity tests- verify that an individual, a programmed procedure, or a message (such as an EDI transmission) attempting to access a system is authentic.
Accuracy tests- ensure that the system processes only data values that conform to specified tolerances.
Completeness tests- identify missing data within a single record and entire records missing from a batch.
Redundancy tests- determine that an application processes each record only once.
Access tests- ensure that the application prevents authorized users from unauthorized access to data.
Audit trail tests- ensure that the application creates an adequate audit trail.
Rounding error tests- verify the correctness of rounding procedures. Rounding errors occur in accounting information when the level of precision used in calculations exceeds the level of precision used in reporting. A few of these tests are sketched in code after this list.
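The completeness, redundancy, and rounding checks above reduce to simple batch assertions. Below is a minimal, hypothetical Python sketch; the field names (record_id, amount, raw_amount) and the sample batch are invented for illustration and are not from the source.

from decimal import Decimal, ROUND_HALF_UP

def completeness_test(batch, expected_ids):
    """Identify records missing from the batch."""
    present = {rec["record_id"] for rec in batch}
    return sorted(expected_ids - present)

def redundancy_test(batch):
    """Identify records that appear more than once in the batch."""
    seen, duplicates = set(), []
    for rec in batch:
        if rec["record_id"] in seen:
            duplicates.append(rec["record_id"])
        seen.add(rec["record_id"])
    return duplicates

def rounding_error_test(batch, places=2):
    """Flag records whose stored amount differs from the properly rounded raw amount."""
    quantum = Decimal(10) ** -places
    return [rec["record_id"] for rec in batch
            if Decimal(str(rec["amount"]))
            != Decimal(str(rec["raw_amount"])).quantize(quantum, rounding=ROUND_HALF_UP)]

batch = [
    {"record_id": 1, "amount": 10.35, "raw_amount": 10.3456},
    {"record_id": 1, "amount": 10.35, "raw_amount": 10.3456},  # duplicate record
    {"record_id": 3, "amount": 7.11, "raw_amount": 7.104},     # rounding mismatch
]
print(completeness_test(batch, expected_ids={1, 2, 3}))  # [2]
print(redundancy_test(batch))                            # [1]
print(rounding_error_test(batch))                        # [3]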

Computer-Aided Audit Tools and Techniques for Testing Controls
1. Test Data Method- used to establish application integrity by processing specially prepared sets of input data through production applications that are under review.

Base Case System Evaluation (BCSE)- a variant of the test data method used when the set of test data is comprehensive. BCSE tests are conducted with a set of test transactions containing all possible transaction types.

Tracing- another type of test data technique that performs an electronic walk-through of the application's internal logic.
2. Integrated Test Facility (ITF)- an automated technique that enables the auditor to test an application's logic and controls during its normal operation. The ITF consists of one or more audit modules designed into the application during the systems development process.

Advantages of Test Data Techniques

They employ through-the-computer testing, thus providing the auditor with explicit evidence concerning application functions.
If properly planned, test data runs can be employed with only minimal disruption to the organization's operations.
They require only minimal computer expertise on the part of auditors.
Disadvantages of Test Data Techniques
The primary disadvantage of this technique is that auditors must rely on computer services personnel to obtain a copy of the application for test purposes.
The second disadvantage is that the technique provides a static picture of application integrity at a single point in time.
The third disadvantage is its relatively high cost of implementation, which results in audit inefficiency.

The Test Data Technique
[Figure: the auditor prepares test transactions, test master files, and predetermined results; the test data and test master files are processed by the application under review; after the test run, the auditor compares the test results with the predetermined results.]

Parallel Simulation
[Figure: the auditor uses Generalized Audit Software (GAS) and the application specifications to produce a simulation program of the application under review; production transaction files and production master files are processed by both the actual production application and the simulation program; the auditor reconciles the simulation output with the production output.]

Chapter 8
Data Structures and CAATTs for Data Extraction

Data Structures
Two fundamental components:
o Organization- refers to the way records are physically arranged on the secondary storage device; it may be sequential or random.
o Access method- the technique used to locate records and to navigate through the database or file.

Types of Data Structures

1. Flat-file Model- data files that contain records with no structured relationship; often associated with legacy systems.
2. Data files- structured, formatted, and arranged to suit the specific needs of the primary user.
3. Sequential Structure- typically called the sequential access method.

4. Indexed Structure- records in an indexed random file are dispersed throughout a disk without regard for their physical proximity to other related records.
- The Virtual Storage Access Method (VSAM) structure is used for very large files that require routine batch processing and a moderate degree of individual record processing.
- A VSAM file has three physical components: the indexes, the prime data storage area, and the overflow area. (A lookup sketch follows the figure below.)

VSAM (Virtual Storage Access Method)
[Figure: looking for key 2546, the search first reads the cylinder index to find the cylinder whose key range contains the key (Cylinder 99), then reads the surface index of Cylinder 99 to find the surface, and finally searches Track 99 on Surface 3 of Cylinder 99 sequentially, because the specific address of record (key) 2546 is not stored.]
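The multi-level index search in the figure can be mimicked with ordinary sorted lookups. The following Python sketch is illustrative only; the index contents are invented for the example rather than taken from the figure. It narrows a key to a cylinder, then to a surface, and finally scans the track sequentially, mirroring the VSAM access path described above.

import bisect

# Hypothetical two-level index: upper bound of each key range -> location.
cylinder_index = [(1100, 97), (2200, 98), (3300, 99), (4400, 100)]
surface_index = {99: [(2400, 1), (2500, 2), (2600, 3), (2700, 4)]}   # cylinder 99 only
tracks = {(99, 3): [(2501, "rec A"), (2546, "rec B"), (2599, "rec C")]}  # one track's records

def vsam_style_lookup(key):
    # 1. Cylinder index: first range whose upper bound covers the key.
    bounds = [upper for upper, _ in cylinder_index]
    cyl = cylinder_index[bisect.bisect_left(bounds, key)][1]
    # 2. Surface index of that cylinder.
    surf_bounds = [upper for upper, _ in surface_index[cyl]]
    surf = surface_index[cyl][bisect.bisect_left(surf_bounds, key)][1]
    # 3. Sequential search of the track: no direct record address is stored.
    for rec_key, rec in tracks[(cyl, surf)]:
        if rec_key == key:
            return rec
    return None

print(vsam_style_lookup(2546))   # "rec B"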

File Processing Operations

1. Retrieve a record from the file based on its primary key
2. Insert a record into a file
3. Update a record in the file
4. Read a complete file of records
5. Find the next record in the file
6. Scan a file for records with common secondary keys
7. Delete a record from a file

5. Hashing Structure- employs an algorithm that converts the primary key of a record directly into a storage address.
- It usually eliminates the need for a separate index. By calculating the address, rather than reading it from an index, records can be retrieved more quickly.

Hashing structures use a random file organization because the process of calculating residuals and converting them into storage locations produces widely dispersed record addresses.
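As a concrete illustration of the division-remainder idea behind hashing, here is a small Python sketch. The table size, key values, and overflow handling are assumptions made for the example; real hashing structures use dedicated overflow areas rather than the simple probing shown here.

TABLE_SIZE = 97   # a prime number of available storage locations (illustrative)

def hash_address(primary_key: int) -> int:
    """Division-remainder hashing: the residual becomes the storage location."""
    return primary_key % TABLE_SIZE

storage = {}   # a random-organization "file": slot number -> record

def insert(record):
    slot = hash_address(record["key"])
    # Linear probing as a simple overflow technique when two keys collide.
    while slot in storage:
        slot = (slot + 1) % TABLE_SIZE
    storage[slot] = record

def retrieve(primary_key):
    slot = hash_address(primary_key)
    while slot in storage:
        if storage[slot]["key"] == primary_key:
            return storage[slot]
        slot = (slot + 1) % TABLE_SIZE
    return None   # record not found

insert({"key": 2546, "desc": "inventory item"})
print(hash_address(2546))       # 2546 % 97 = 24
print(retrieve(2546)["desc"])   # "inventory item"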

6. Pointer Structures- this approach stores in a field of one record the address (pointer) of a related record.
- Pointers provide connections between the records.

Types of Pointers
Physical address pointer- contains the actual disk storage location (cylinder, surface, and record number) needed by the disk controller.
Relative address pointer- contains the relative position of a record in the file.
Logical key pointer- contains the primary key of the related record.
- This key value is then converted into the record's physical address by a hashing algorithm.
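A minimal sketch of the pointer idea, assuming a list-of-records file in which each record stores the relative address of the next related record; the record contents and field names are invented for illustration.

# A "file" as a list of records; each record's "next" field is a relative
# address pointer to the next related record, or None at the end of the chain.
file_records = [
    {"key": "AR-100", "amount": 500, "next": 2},     # relative address 0
    {"key": "AP-200", "amount": 120, "next": None},  # relative address 1
    {"key": "AR-101", "amount": 250, "next": 3},     # relative address 2
    {"key": "AR-102", "amount": 75,  "next": None},  # relative address 3
]

def follow_chain(start_address):
    """Traverse related records by following relative address pointers."""
    address = start_address
    while address is not None:
        record = file_records[address]
        yield record
        address = record["next"]

# Walk the chain of related records starting at relative address 0.
for rec in follow_chain(0):
    print(rec["key"], rec["amount"])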

Hierarchical and Network Database Structures
The main difference between the two approaches is the degree of process integration and data sharing that can be achieved.
- Two-dimensional flat files exist as independent data structures that are not linked logically or physically to other files.
Database models were designed to support flat-file systems already in place, while allowing the organization to move to new levels of data integration.
A common scenario is a many-to-many association between an inventory file and a vendor file: each vendor supplies many inventory items, and each inventory item may be supplied by many vendors.

Relational Database Structure, Concepts, and Terminology
Relational databases are based on the indexed sequential file structure.
- An inverted list allows multiple indexes to be used to create cross-references.
Relational Database Theory
These three algebra functions are explained below:
Restrict: extracts specified rows from a specified table.
Project: extracts specified attributes (columns) from a table to create a virtual table.
Join: builds a new physical table from two tables consisting of all concatenated pairs of rows from each table.
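The three functions can be sketched over plain Python lists of dictionaries. The table contents and column names below are invented for illustration, and the join shown pairs rows on a matching key value; a relational DBMS would express the same ideas in SQL.

inventory = [
    {"part_number": "P1", "description": "Bolt", "unit_cost": 2.50},
    {"part_number": "P2", "description": "Nut",  "unit_cost": 1.25},
]
po_item_detail = [
    {"po_number": "PO9", "part_number": "P1", "order_quantity": 100},
    {"po_number": "PO9", "part_number": "P2", "order_quantity": 40},
]

def restrict(table, predicate):
    """Restrict: extract the rows that satisfy a condition."""
    return [row for row in table if predicate(row)]

def project(table, attributes):
    """Project: extract the named columns into a (virtual) table."""
    return [{a: row[a] for a in attributes} for row in table]

def join(left, right, key):
    """Join: concatenate pairs of rows whose key values match."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

print(restrict(inventory, lambda r: r["unit_cost"] > 2))     # rows costing more than 2
print(project(inventory, ["part_number", "unit_cost"]))      # two columns only
print(join(po_item_detail, inventory, "part_number"))        # order lines with costs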

Relational Database Concepts

Data Model- the blueprint for ultimately creating the physical database.
- The graphical representation used to depict the model is called an entity relationship (ER) diagram.
Entity- anything about which the organization wishes to capture data.
Occurrence- used to describe the number of instances or records that pertain to a specific entity.
Attributes- the data elements that define an entity.
Illustration
[ER diagram: Customer buys Product; Customer sends Payments.]

Anomalies, Structural Dependencies, and Data Normalization

Database Anomalies
- Anomalies are negative operational symptoms.
1. Update Anomaly- results from data redundancy in an unnormalized table.
2. Insertion Anomaly- prevents a record for one entity from being added to a table until data about a related entity exist.
3. Deletion Anomaly- involves the unintentional deletion of data from a table.

Normalizing Tables
- Dependencies are symptoms of structural problems within tables.
- The normalization process involves identifying and removing structural dependencies from the table.

Database normalization is a technical matter that is usually the responsibility of systems professionals.
- The auditor needs to know how the data are structured before he or she can extract data from tables to perform audit procedures.

Designing Relational Databases

Database design is a component of a much larger system development process that involves extensive analysis of user needs.
Six phases of database design:
1. Identify entities
1.1 The purchasing agent reviews the inventory status report.
1.2 The agent selects a supplier and prepares an online purchase order.
1.3 The agent prints a copy of the purchase order and sends it to the supplier.
1.4 The supplier ships inventory to the company.

2. Construct a Data Model Showing Entity Associations

3. Add Primary Keys: the next step in the process is to assign primary keys to the entities in the model.
The analyst should select a primary key that logically defines the nonkey attributes and uniquely identifies each occurrence in the entity. Sometimes this can be accomplished using a simple sequential code such as Invoice Number, Check Number, or Purchase Order Number. Sequential codes, however, are not always efficient or effective keys. Through careful design of block codes, alphabetic codes, and mnemonic codes, primary keys can also impart useful information about the nature of the entity.

4. Normalize Data Model and Add Foreign Keys
1. Repeating Group Data in Purchase Order.

-The attributes Part Number, Description, Order Quantity, and Unit Cost are
repeating group data. This means that when a particular purchase order contains
more than one item (most of the time), then multiple values will need to be captured
for these attributes. To resolve this, these repeating group data were removed to a
new PO Item Detail entity. The new entity was assigned a primary key that is a
composite of Part Number and PO Number. The creation of the new entity also
resolved the M:M association between the Purchase Order and Inventory entities by
providing a link.
2. Repeating Group Data in Receiving Report
-The attributes Part Number, Quantity Received, and Condition Code are
repeating groups in the Receiving Report entity and were removed to a new entity called Rec Report Item Detail. A composite key composed of PART NUMBER and REC REPT NUMBER was assigned. As in the previous example, creating this new
entity also resolved the M:M association between Receiving Report and Inventory.

3. Transitive Dependencies
-The Purchase Order and Receiving Report entities contain attributes that are
redundant with data in the Inventory and Supplier entities. These redundancies
occur because of transitive dependencies in the Purchase Order and Receiving Report entities, and the redundant attributes are therefore dropped.

Construct the Physical Database


Each record in the Rec Report Item Detail table represents an individual
item on the receiving report. The table has a combined key comprising REC
REPT NUMBER and PART NUMBER. This composite key is needed to uniquely
identify the Quantity Received and Condition attributes of each item-detail
record. The REC REPT NUMBER portion of the key provides the link to the
Receiving Report table that contains data about the receiving event. The PART
NUMBER portion of the key is used to access the Inventory table to facilitate
updating the Quantity on Hand field from the Quantity Received field of the
Item-Detail record.
The PO Item Detail table uses a composite primary key of PO NUMBER and PART NUMBER to uniquely identify the Order Quantity attribute. The PO NUMBER component of the composite key provides a link to the Purchase Order table.
The PART NUMBER element of the key is a link to the Inventory table where
Description and Unit Cost data reside.
The next step is to create the physical tables and populate them with data.
This is an involved step that must be carefully planned and executed and may
take many months in a large installation. Programs will need to be written to transfer organization data from existing files into the new tables, and data currently stored on paper documents may need to be entered into the database tables manually. Once this is done, the physical user
views can be produced.
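Following the composite-key layout described above, the sketch below keys the item-detail rows on (REC REPT NUMBER, PART NUMBER) and shows how the PART NUMBER half of the key links back to the Inventory table when Quantity on Hand is updated. The data values are invented for the example; this is a Python illustration, not the physical database itself.

# Inventory table keyed on PART NUMBER.
inventory = {
    "P1": {"description": "Bolt", "unit_cost": 2.50, "qty_on_hand": 500},
    "P2": {"description": "Nut",  "unit_cost": 1.25, "qty_on_hand": 300},
}

# Rec Report Item Detail table keyed on the composite (REC REPT NUMBER, PART NUMBER).
rec_report_item_detail = {
    ("RR-75", "P1"): {"qty_received": 100, "condition": "OK"},
    ("RR-75", "P2"): {"qty_received": 40,  "condition": "OK"},
}

def post_receipts(rec_rept_number):
    """Use the PART NUMBER portion of the composite key to update Quantity on Hand."""
    for (rr_num, part_number), detail in rec_report_item_detail.items():
        if rr_num == rec_rept_number:
            inventory[part_number]["qty_on_hand"] += detail["qty_received"]

post_receipts("RR-75")
print(inventory["P1"]["qty_on_hand"])   # 600
print(inventory["P2"]["qty_on_hand"])   # 340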

Global View Integration


The view modeling process described previously
pertained to only one business function-the purchases
system-and the resulting tables and views constitute only
a subschema of the overall database schema. A modern
company, however, would need hundreds or thousands of
views and associated tables. Combining the data needs of
all users into a single schema or enterprise-wide view is
called view integration. This is a daunting undertaking when creating the entire database from scratch. To facilitate this
task, modern Enterprise Resource Planning (ERP) systems
come equipped with a core schema, normalized tables,
and view templates. These best-practices databases are
derived from economic models that identify commonalities
among the data needs of different organizations.

Embedded Audit Module


The objective of the embedded audit module (EAM), also
known as continuous auditing, is to identify important
transactions while they are being processed and extract copies
of them in real time. An EAM is a specially programmed module
embedded in a host application to capture predetermined
transaction types for subsequent analysis.
As the selected transaction is being processed by the host
application, a copy of the transaction is stored in an audit file
for subsequent review. The EAM approach allows selected
transactions to be captured throughout the audit period.
Captured transactions are made available to the auditor in real
time, at period end, or at any time during the period, thus
significantly reducing the amount of work the auditor must do
to identify significant transactions for substantive testing.
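A minimal sketch of the EAM idea, assuming the auditor wants to capture transactions above a materiality threshold. The host application stand-in, threshold, field names, and audit file name are hypothetical; a production EAM would be built into the application itself during systems development rather than bolted on like this.

import json

MATERIALITY_THRESHOLD = 10_000          # capture rule chosen by the auditor (illustrative)
AUDIT_FILE = "eam_audit_file.jsonl"     # hypothetical audit file

def embedded_audit_module(transaction):
    """Copy predetermined transaction types to the audit file as they are processed."""
    if transaction["amount"] >= MATERIALITY_THRESHOLD:
        with open(AUDIT_FILE, "a") as audit_file:
            audit_file.write(json.dumps(transaction) + "\n")

def process_transaction(transaction):
    """Stand-in for the host application's normal processing."""
    embedded_audit_module(transaction)   # EAM hook inside the host application
    # ... normal posting logic would continue here ...

process_transaction({"txn_id": 1, "amount": 2_500})    # not captured
process_transaction({"txn_id": 2, "amount": 45_000})   # copied to the audit file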

Disadvantages of EAMs
The EAM approach has two significant disadvantages. The first pertains to operational efficiency, and the second is concerned with EAM integrity.

Operational Efficiency
From the user's point of view, EAMs decrease operational performance.
The presence of an audit module within the host application may create
significant overhead, especially when the amount of testing is extensive.
One approach for relieving this burden from the system is to design
modules that may be turned on and off by the auditor. Doing so will, of
course, reduce the effectiveness of the EAM as an ongoing audit tool.

Verifying EAM Integrity


The EAM approach may not be a viable audit technique in environments
with a high level of program maintenance. When host applications
undergo frequent changes, the EAMs embedded within the hosts will also
require frequent modifications. The integrity concerns raised earlier
regarding application maintenance apply equally to EAMs. The integrity of
the EAM directly affects the quality of the audit process. Auditors must
therefore evaluate the EAM's integrity. This evaluation is accomplished in

Generalized Audit Software (GAS) is the most widely used CAATT for IS auditing. GAS allows auditors to access electronically coded data files and perform various operations on their contents. Some of the more common uses for GAS include:
Footing and balancing entire files or selected data items
Selecting and reporting detailed data contained in files
Selecting stratified statistical samples from data files
Formatting results of tests into reports
Printing confirmations in either standardized or special wording
Screening data and selectively including or excluding items
Comparing multiple files and identifying any differences
Recalculating data fields
The widespread popularity of GAS is due to four factors: (1) GAS languages are easy to use and require little computer background on the part of the auditor; (2) many GAS products can be used on both mainframe and PC systems; (3) auditors can perform their tests independent of the client's computer services staff; and (4) GAS can be used to audit the data stored in most file structures and formats. A sketch of a few of these operations follows.
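The first few GAS uses listed above (footing a file, screening items, recalculating a field) reduce to simple file operations. The Python sketch below is illustrative only; the row contents and the materiality threshold are assumptions, and an actual GAS product such as ACL provides these operations as built-in commands.

from decimal import Decimal

# In practice these rows would come from a client data file (e.g. read with csv.DictReader).
rows = [
    {"item": "A-100", "quantity": "40", "unit_cost": "12.50",  "extended_cost": "500.00"},
    {"item": "B-200", "quantity": "10", "unit_cost": "99.99",  "extended_cost": "999.90"},
    {"item": "C-300", "quantity": "25", "unit_cost": "200.00", "extended_cost": "5200.00"},  # misstated
]

def foot_field(rows, field):
    """Foot (total) a numeric field across the whole file."""
    return sum(Decimal(row[field]) for row in rows)

def screen(rows, field, threshold):
    """Screen data: include only items at or above a threshold (e.g. materiality)."""
    return [row for row in rows if Decimal(row[field]) >= threshold]

def recalculate_extended_cost(rows):
    """Recalculate a derived field and report rows where the stored value disagrees."""
    return [row["item"] for row in rows
            if Decimal(row["extended_cost"]) != Decimal(row["quantity"]) * Decimal(row["unit_cost"])]

print(foot_field(rows, "extended_cost"))                     # 6699.90
print(len(screen(rows, "extended_cost", Decimal("1000"))))   # 1
print(recalculate_extended_cost(rows))                       # ['C-300']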

Using GAS to Access Complex Structures
[Figure: (1) the auditor specifies which database records to copy into a flat file; (2) a DBMS utility program produces a flat file containing a portion of the database; (3) the auditor determines the selection criteria to be used by GAS; (4) GAS retrieves the selected records from the flat file and produces a transactions list.]

Using GAS to Access Simple Structures (Flat File)
[Figure: the auditor determines the selection criteria (materiality threshold) and the key fields to be retrieved by GAS; GAS extracts the data selected by the auditor from the production inventory file and produces a list of inventory items to be counted as part of substantive testing.]

Audit Issues Pertaining to the Creation of Flat Files

The auditor must sometimes rely on computer services personnel to produce a flat file from the complex file structures. There is a risk that data integrity will be compromised by the procedure used to create the flat file. For example, if the auditor's objective is to confirm accounts receivable, certain fraudulent accounts in the complex structure may be intentionally omitted from the flat file, which may therefore be unreliable. Auditors skilled in programming languages may avoid this potential pitfall by writing their own data extraction routines. In the past, public accounting firms developed proprietary versions of GAS, which they used in the audits of their clients. More recently, software companies have serviced this market. Among them, ACL (Audit Command Language) is the leader in the industry. ACL was designed as a meta-language for auditors to access data stored in various digital formats and to test them comprehensively. In fact, many of the problems associated with accessing complex data structures have been solved by ACL's Open Database Connectivity (ODBC) interface.

Data Definition
One of ACL's strengths is the ability to read data stored in most
formats. ACL uses the data definition feature for this purpose.
To create a data definition, the auditor needs to know both where
the source file physically resides and its field structure layout.
Small files can be imported via text files or spreadsheets. Very
large files may need to be accessed directly from the mainframe
computer. When this is the case, the auditor must obtain access
privileges to the directory in which the file resides. Where
possible, however, a copy of the file should be stored in a
separate test directory or downloaded to the auditor's PC. This
setup usually requires the assistance of systems professionals.
The auditor should ensure that he or she secures the correct
version of the file, that it is complete, and that the file structure
documentation is intact. At this point, the auditor is ready to
define the file to ACL. Figure 8.31 illustrates ACL's data definition
screen.

Customizing a View
A view is simply a way of looking at data in a file; auditors
seldom need to use all the data contained in a file. ACL allows
the auditor to customize the original view created during data
definition to one that better meets his or her audit needs. The
auditor can create and reformat new views without changing or
deleting the data in the underlying file. Only the presentation of
the data is affected.

Filtering Data
ACL provides powerful options for filtering data that support various audit tests. Filters are expressions that search for records that meet the filter criteria. ACL's expression builder allows the auditor to use logical operators such as AND, OR, NOT, and others to define and test conditions of any complexity and to process only those records that match specific conditions.
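Conceptually, a filter is just a boolean expression evaluated against each record. The sketch below is not ACL syntax; it is a hypothetical Python illustration of combining conditions with AND, OR, and NOT to select matching records.

records = [
    {"customer": "A", "balance": 12_000, "days_past_due": 95, "on_hold": False},
    {"customer": "B", "balance": 800,    "days_past_due": 10, "on_hold": False},
    {"customer": "C", "balance": 6_500,  "days_past_due": 45, "on_hold": True},
]

# Filter expression: (balance over 5,000 AND more than 90 days past due) OR on credit hold,
# but NOT customers with trivial balances.
def audit_filter(rec):
    return ((rec["balance"] > 5_000 and rec["days_past_due"] > 90) or rec["on_hold"]) \
        and not rec["balance"] < 100

exceptions = [rec for rec in records if audit_filter(rec)]
print([rec["customer"] for rec in exceptions])   # ['A', 'C']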

Stratifying Data
ACL's stratification feature allows the auditor to view the distribution of records that fall into specified strata. Data can be stratified on any numeric field such as sales price, unit cost, quantity sold, and so on. The data are summarized and classified by strata, which can be equal in size (called intervals) or vary in size (called free).
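A stratification over equal-size intervals can be sketched as a simple bucketing pass. The interval width, field name, and data values below are assumptions for illustration; ACL computes counts and totals per stratum in a similar spirit.

from collections import defaultdict

def stratify(rows, field, interval):
    """Classify records into equal-size strata and summarize count and total per stratum."""
    strata = defaultdict(lambda: {"count": 0, "total": 0})
    for row in rows:
        value = row[field]
        lower = (value // interval) * interval        # lower bound of the stratum
        key = (lower, lower + interval)
        strata[key]["count"] += 1
        strata[key]["total"] += value
    return dict(sorted(strata.items()))

sales = [{"sales_price": p} for p in (120, 340, 560, 580, 910, 1_450)]
for (low, high), summary in stratify(sales, "sales_price", 500).items():
    print(f"{low:>6} - {high:<6} count={summary['count']} total={summary['total']}")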

Statistical Analysis
ACL offers many sampling methods for statistical analysis. Two of the most frequently used are record sampling and monetary unit sampling (MUS). Each method allows random and interval sampling. The choice of method will depend on the auditor's strategy and the composition of the file being audited. When the records in the file are fairly evenly distributed across strata, the auditor may want an unbiased sample and will thus choose the record sampling approach.
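The difference between record sampling and monetary unit sampling can be shown in a few lines: record sampling gives every record an equal chance of selection, while MUS weights the chance by the record's monetary amount. The Python sketch below is a simplified illustration with invented invoice data, not ACL's actual sampling procedure.

import random

population = [{"invoice": i, "amount": amt}
              for i, amt in enumerate([100, 250, 50, 9_000, 300, 75, 4_500], start=1)]

def record_sample(rows, n, seed=1):
    """Record sampling: each record has an equal probability of selection."""
    rng = random.Random(seed)
    return rng.sample(rows, n)

def monetary_unit_sample(rows, n, seed=1):
    """MUS: selection probability is proportional to the record's monetary amount."""
    rng = random.Random(seed)
    weights = [row["amount"] for row in rows]
    return rng.choices(rows, weights=weights, k=n)   # large invoices dominate the sample

print([r["invoice"] for r in record_sample(population, 3)])
print([r["invoice"] for r in monetary_unit_sample(population, 3)])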

Normalizing Tables in a Relational Database

The database anomalies are symptoms of structural problems within tables called dependencies. Specifically, these are known as repeating groups, partial dependencies, and transitive dependencies. The normalization process involves systematically identifying and removing these dependencies from the table(s) under review. Figure 8.37 graphically illustrates the unnormalized table's progression toward 3NF as each type of dependency is resolved. Tables in 3NF will be free of anomalies and will meet two conditions:

1. All nonkey attributes will be wholly and uniquely dependent on (defined by) the primary key.
2. None of the nonkey attributes will be dependent on (defined by) other nonkey attributes.
