
Systems and Infrastructure Life-Cycle Management

Projects are unique, temporary and progressively elaborated.

Business case – shows the benefits to be achieved for the business; it must be kept for the life
cycle of the project.

Three major forms of organizational alignment for project management:


Influence – the Project Manager has no formal authority.
Pure project – the Project Manager has formal authority over those taking part in the project.
Matrix project – the Project Manager shares authority with the functional managers.

Project objectives must be SMART:


Specific
Measurable
Achievable
Relevant
Time bound

Project roles and responsibilities – purpose is to show accountability


Senior Mgmt - approves the resources for the project
User Mgmt – assumes ownership of project and resulting system
Project steering committee – overall direction and ensures stakeholders represented.

Responsible for deliverables, costs and schedules


Project sponsor – provides funding and works with Project Manager to define critical success
factors and metrics. Data and application ownership assigned to sponsor
System dev mgmt – provides tech support
Project manager – provides day to day mgmt of project.

Three critical elements to projects:


Time/duration – how long will it take?
Cost/resources – how much will it cost?
Deliverables/scope – what is to be done?

Software size estimation:


Lines of code – SLOC (number of lines of source code), KLOC (kilo lines of source code), KDSI
(thousand delivered source instructions) – better suited to languages such as BASIC or COBOL.
Function Point analysis – used to estimate complexity in developing large apps.

Size measured based on number and complexity of inputs, outputs, files, interfaces and queries.
Software Cost estimates directly related to software size estimates.
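A rough sketch of how an unadjusted function point count combines those five element types. The weights below are the commonly cited "average complexity" values; a real count rates each input, output, file, interface and query low/average/high before weighting, and the example counts are hypothetical.

```python
# Unadjusted function point (UFP) sketch using typical
# "average complexity" weights for the five element types.
WEIGHTS = {"inputs": 4, "outputs": 5, "queries": 4,
           "files": 10, "interfaces": 7}

def unadjusted_function_points(counts):
    """counts: dict such as {"inputs": 12, "outputs": 8, ...}."""
    return sum(WEIGHTS[kind] * counts.get(kind, 0) for kind in WEIGHTS)

app = {"inputs": 12, "outputs": 8, "queries": 5, "files": 3, "interfaces": 2}
print(unadjusted_function_points(app))  # 48 + 40 + 20 + 30 + 14 = 152
```

The UFP total then feeds the cost estimate, which is why software cost estimates track software size estimates so directly.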

Critical path –
The longest path through the network, which determines the shortest possible completion time.
No activity on the critical path has slack time, and any activity with no slack time is on the
critical path.
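A minimal sketch of finding the critical path length with a forward pass over an activity network (the activities, durations and dependencies below are hypothetical):

```python
from functools import lru_cache

# Hypothetical activity network: durations and predecessor lists.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # Earliest finish = latest predecessor finish + own duration.
    start = max((earliest_finish(p) for p in preds[task]), default=0)
    return start + durations[task]

# The project length equals the longest (critical) path: A -> B -> D.
project_length = max(earliest_finish(t) for t in durations)
print(project_length)  # 12
```

Activity C finishes at 5 but the project still takes 12, which is exactly the slack the critical-path activities lack.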
GANTT charts: aid in scheduling of activities/tasks. Charts show when activities start and end
and dependencies. Used for checkpoints/milestones too.

PERT – a network management technique that shows the relationships between tasks and gives
estimates/scenarios for completing them – three estimates per task: optimistic, most likely and
pessimistic. It does not address costs.
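The three PERT estimates combine into one expected duration via the standard weighted average E = (O + 4M + P) / 6, with standard deviation (P − O) / 6. The durations below are hypothetical:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Return (expected duration, standard deviation) per PERT."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

expected, std_dev = pert_estimate(4, 6, 14)
print(expected)  # (4 + 24 + 14) / 6 = 7.0
```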

Time box – project management technique for defining and deploying software deliverables
within a short, fixed period of time with predetermined resources. A software baseline is
required.

Traditional SDLC (aka waterfall)


1. Feasibility

2. Requirements – ERDs (entity relationship diagrams) can be used here as a requirements
analysis tool to understand the data the system needs to capture and manage (the logical data
model). Security requirements should be defined at this stage. Test plans are designed here.
The build-or-buy decision is made here.

3. Design (or selection if purchasing a system) – the auditor is concerned that an adequate level
of security controls has been considered before the purchase agreement/contract is signed, and
the RFP should be based on the requirements. The design phase is the best place for software
baselining to occur – requirements are frozen and the software configuration management
process starts. The auditor is again concerned that sufficient controls will be built in, and looks
at the effectiveness of the design process itself.

4. Development (or configuration if purchase a system) – testing done here.

5. Implementation – certification (indicates compliance with requirements, policies, etc.) and
accreditation (indicates the system is ready to use) are done here. Set up a SPOC (single point
of contact) and appropriate support structures.

6. Post-implementation – were the requirements met? Are users satisfied? Includes a post-
mortem – lessons learned: were the right techniques applied and the right tools used? How
could we have done it better?

Data conversion: the risk is that you will not convert all the data – some will be missed.
Compare control totals before and after conversion to detect this.

Control totals can be used to compare batches too.
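The before/after comparison can be sketched with a record count plus a monetary control total (record layout and field names are hypothetical):

```python
def control_totals(records):
    # A record count plus a summed monetary field; if either differs
    # after conversion, records were dropped or altered.
    return len(records), round(sum(r["amount"] for r in records), 2)

legacy = [{"id": 1, "amount": 10.50}, {"id": 2, "amount": 4.25}]
converted = [{"id": 1, "amount": 10.50}, {"id": 2, "amount": 4.25}]

assert control_totals(legacy) == control_totals(converted), "conversion loss"
print(control_totals(converted))  # (2, 14.75)
```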


If purchasing a system, need to make sure decision makers are involved at all steps. Need to
consider many things as part of acquisition including turnaround time (time to fix an issue from
when it is first logged) and response time (the time a system takes to respond to a query by a
user).

Asset mgmt – assets stand by themselves


Configuration management – interrelationships between assets.

Quality assurance is responsible for ensuring that programs and program changes and
documentation adhere to established standards.

Early engagement of key users will help ensure business requirements will be met in software
development process.

Testing – make sure that what is being tested, is actually what was implemented.
Project steering committee approves the RFPs for software acquisitions. It is responsible for all
costs and timetables.

Two test approaches:


Bottom up – begin by testing each module and work your way up until the whole system is
tested. Finds critical errors earlier because testing can start before the system is complete –
akin to white box testing.
Top down – start at the interfaces of the entire system and work down to each
function/component – akin to black box (functional) testing.

Total Quality Management (TQM) purpose is end user satisfaction

Testing Classification:
Unit testing – testing of individual programs or modules – usually white box testing.
System testing – making sure that all modules function together properly.
Integration testing – evaluates connection of components that pass info to each other.
Final acceptance testing – done during implementation phase by QA and then UAT.
Other types of testing:
Alpha and beta
Pilot
White box – assess effectiveness of software program logic.
Black box – testing of interfaces and general function – doesn’t care about internal structure.
Function/validation – similar to system testing, but often used to test the functionality of the
system against requirements.
Regression testing – rerunning a portion of a test scenario to make sure that changes have not
introduced new errors in other parts of app
Parallel – feed test data into two systems (new and old) and compare results
Sociability – confirm that the new system can operate in its target environment without
affecting other systems.
Risks associated with software development:
New system does not meet users’ needs
Exceeded cost/time estimates
Auditor should review success of project and management discipline over project.
Alternative Development approaches:
Agile development – used when requirements are sparse and changing frequently. Designed to
flexibly handle changes to the system being developed. Uses small timeboxed subprojects and
relies heavily on tacit knowledge – knowledge in people’s heads. No real requirements baseline,
little documentation, less testing. The Project Manager becomes more of an advocate and
facilitator than a manager. Can help detect risks early on. Lots of face-to-face work.
Prototyping – creating system through controlled trial and error. Can lead to poor controls in
finished system because focused on what user wants and what user sees. Change control
complicated also – changes happen so quickly, they are rarely documented or approved. Also
called evolutionary development. Reduces risk associated with not understanding user
requirements.
Prototypes typically include only screens, interactive edits and reports (no real processing programs).
Rapid Application Development –RAD – methodology to develop important systems quickly,
while reducing costs but maintaining quality. – small dev teams, evolutionary prototypes,
Automates large portions of the SDLC via CASE and imposes rigid time frames. Prototyping is
core to this. Skip documentation, less emphasis on requirements
Object Oriented – data and software together to form object – sort of a blackbox – other
objects talk to the object’s interface and don’t care what’s inside. Encapsulation provides high
degree of security over the data.
Component based – outgrowth of object oriented – assembling applications from cooperating
packages of executable software that make services available through defined interfaces.

In timeboxed development, having a baseline of requirements is important since the work is so
timebound.

Web Based App Dev.


Components of Web Services
First key component: SOAP, an XML-based language, is used to define APIs. It works with any
operating system and programming language that understands XML. Easier than the RPC
approach because modules are loosely coupled, so a change to one component does not
normally require changes to others.
WSDL – web services description language – also based on XML. Used to identify the SOAP
specification to be used for the API and the formats of the SOAP messages used for input and
output to the code modules. Also used to identify the particular web service accessible via a
corporate intranet or across the Internet by being published to a relevant intranet or internet
web server.
UDDI – universal description, discovery and integration – acts as an electronic directory
accessible via corporate intranet or internet and allows interested parties to learn of the
existence of web services.

Reengineering – process of updating an existing system by extracting and reusing design and
program components.

Reverse engineering – process of taking apart an app to see how it functions. Can be done by
decompiling code.

Configuration management – version control software and a check-out process. Used for
software development and other artifacts – programs, documentation, data. Change control
works off of configuration management.
Logical path monitor – reports on the sequence of steps executed by a programmer.
Program maintenance is facilitated by programs that are more cohesive (each performs a single,
dedicated function) and more loosely coupled (units are independent of one another).
Structured walk through is a management tool – it involves peer reviews to detect software
errors during a program development activity.

First concern of an auditor is does the application meet business requirements; close second is
are there adequate controls in place.

Computer Aided Software Engineering (CASE) -


Automated tools to aid in the software development process. Their use may include the
application of software tools for requirements analysis, software design, code generation,
testing, documentation generation. Can enforce uniform approach to software dev, reduces
manual effort. Don’t guarantee that software will meet user requirements or be correct.
Upper CASE – requirements
Middle CASE – designs
Lower CASE – code generation
Fourth Generation Languages
Non procedural languages
Portability – cross platform
Lack lower level detailed commands – lose granularity.

Business Process Re-engineering


This is the process of responding to competitive and economic pressures and customer demands
in order to survive in a business environment. It is usually done by automating system processes
so that there are fewer manual interventions and manual controls. Steps: identify the area under
review and then the processes involved; decompose the process down to elementary processes
(a unit of work with an input and an output); identify the people responsible for the process;
document. It is important for the auditor to understand the flowcharts showing the before and
after processes to make sure appropriate controls are in place.

Benchmarking is a technique all about improving business process – BPR technique (PROAAI):
Plan – identify processes
Research – identify benchmarking partners
Observe – visit partners
Analyze
Adapt
Improve – continuous improvement

Capability Maturity Model


IRDMO - Framework to help organizations improve their software lifecycle processes (less
directly aligned to SDLC than to newer dev processes)
Initial – ad hoc
Repeatable – processes can be repeated on projects of similar size and scope – basic project
management
Defined – institutionalized software process applicable to all dev projects, documented project
management, standard software development
Managed – application of quantitative managed control to improve productivity
Optimized – continuous improvement
Application Controls
These refer to the transactions and data relating to each computer based system – the objectives
of app controls which can be manual or programmed, are to ensure the completeness and
accuracy of the records and the validity of the entries made.

Three types:
Input
Processing
Output

The IS auditor needs to identify the application components and the flow of transactions
through the system; identify the controls and their relative strengths and weaknesses and their
impact; identify the control objectives; then test the controls and evaluate the overall control
environment.

Input Controls
1. Input authorization verifies all transactions have been authorized and approved by mgmt.
Signatures on batch forms
Online access controls
Unique passwords
Terminal or client workstation identification
Source documents

2. Batch Controls and balancing


Total Monetary amount – total monetary amount of items processed = total monetary value of
batch docs
Total items – total number of items on each doc in the batch = total number of items
processed
Total documents
Hash totals – verification that the total (meaningless in itself) for a predetermined numeric
field (like employee number) that exists for all docs in the batch = same total calculated by
system.
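The batch totals above can be sketched like this. The document fields are hypothetical; note the hash total is a sum that is meaningless in itself (employee numbers) but still catches dropped or altered documents when rechecked by the system:

```python
batch = [
    {"emp_no": 1001, "items": 3, "amount": 250.00},
    {"emp_no": 1002, "items": 1, "amount": 75.50},
    {"emp_no": 1003, "items": 2, "amount": 120.00},
]

def batch_totals(docs):
    return {
        "monetary": round(sum(d["amount"] for d in docs), 2),  # total $ amount
        "items": sum(d["items"] for d in docs),                # total items
        "documents": len(docs),                                # total documents
        "hash": sum(d["emp_no"] for d in docs),  # meaningless, but comparable
    }

# The same totals recomputed by the system after processing must match.
print(batch_totals(batch))
```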

3. Error Reporting and Handling


Input error handling:
Reject only transactions with errors
Reject the whole batch
Hold batch in suspense
Accept the batch and flag the errors
Input control
Transaction log
Reconciliation of data
Documentation
Error correction procedures
Anticipation – user anticipates receipt of data
Transmittal log
Cancellation of source document
Processing Controls
Data Validation identifies data errors, incomplete or missing data or inconsistencies among
related items and edit controls are preventive controls used before data is processed. Input data
should be evaluated as close to the time and point of origination as possible
Sequence check – is everything in sequence
Limit check – data should not exceed a certain predetermined limit
Range check – data should be within the range of predetermined values
Validity check – record should be rejected if anything but a valid entry is made – like marital
status should not be entered into employee number field.
Reasonableness check – input data matched to predetermined reasonable limits or occurrence
rates – normally receive 20 orders, if receive 25 then that’s a problem
Table lookups – input data compared to predetermined criteria maintained in a lookup table.

Existence check – data entered correctly and meet predetermined criteria – valid transaction
code must be entered in the transaction code field.
Key verification – keying in process repeated by two different people
Check digit – a numeric value that has been calculated mathematically is added to data to
ensure that the original data have not been altered or an incorrect value submitted. Detects
transposition and transcription errors. Verifies data accuracy/integrity. (checksum)
Completeness check – a field should always contain data and not zeros or nulls
Duplicate check – new transactions matched to those previously input to make sure they were
not entered previously.
Logical relationship check – if this condition is true, then one or more additional conditions or
relationships may be required to be true.
Domain integrity test – verify that the edit and validation routines are working satisfactorily,
all data items are in the correct domain.
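A few of the edit checks above written as simple predicates. The field names and limits are made up for illustration, and the check digit uses the well-known Luhn (mod-10) algorithm as one concrete scheme for detecting transcription and transposition errors:

```python
def limit_check(quantity, limit=100):
    return quantity <= limit            # must not exceed a preset limit

def range_check(month):
    return 1 <= month <= 12             # must fall within a preset range

def completeness_check(value):
    return value not in (None, "")      # field must actually contain data

def check_digit_ok(number):
    # Luhn mod-10 check digit: from the right, double every second
    # digit (subtracting 9 if it exceeds 9); the digit sum of a valid
    # number is divisible by 10.
    digits = [int(d) for d in str(number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

assert limit_check(42) and not limit_check(150)
assert range_check(7) and not range_check(13)
assert check_digit_ok(79927398713)      # classic Luhn test number
assert not check_digit_ok(79927398714)  # one digit off: rejected
```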

Processing Controls ensure the completeness and accuracy of accumulated data. These are
processing control techniques:
Manual recalculations – of transactions samples
Editing – edit check is a program instruction that tests the accuracy, completeness and validity
of the data
Run-to-run totals – can verify the data through the stages of application processing.
Programmed controls – software can be used to detect and initiate corrective action for errors
in data and processing.
Reasonableness verification of calculated amounts
Limit check on calculated amounts
Reconciliation of file totals
Exception reports.
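Run-to-run totals in miniature: each processing stage re-derives the control total and balances it against the total carried forward from the previous stage (the amounts are hypothetical):

```python
transactions = [120.00, 35.50, 410.25]
input_total = round(sum(transactions), 2)  # control total from data entry

# Stage 1: validation passes all records through.
validated = list(transactions)
assert round(sum(validated), 2) == input_total, "stage 1 out of balance"

# Stage 2: posting; the carried-forward total must still agree.
posted = list(validated)
assert round(sum(posted), 2) == input_total, "stage 2 out of balance"

print(input_total)  # 565.75
```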

Data File Control procedures


Data files/database tables fall into four categories: system control parameters, standing data,
master data/balance data, transaction files.
File controls should ensure that only authorized processing occurs to stored data.
Before and after image reporting – record data before and after processing so can trace the
impact transactions have on computer records
Maintenance error reporting and handling
Source documentation retention – so can reconstruct data if need be
Internal and external labeling – of removable storage media
Version usage – verify proper version of the file used
Data file security – access controls so only authorized users get to it.
One for one checking – individual documents agree with a detailed listing of documents
processed
Prerecorded input – certain information fields are preprinted on blank input forms to reduce
initial input errors
Transaction logs – all transaction input activity is recorded by the computer.
File updating and maintenance – proper authorization required to change, move etc data files.
Parity checking – checks for completeness of data transmissions/transmission errors.
check bits – used in telecom for this
Redundancy check – appends calculated bits to the end of each segment of data to detect
transmission errors; also used to check whether a transmission is a redundant (duplicate) one.
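A sketch of even parity on single bytes: one extra bit per byte lets the receiver detect any single-bit transmission error (it cannot say which bit flipped, and it misses double-bit flips):

```python
def parity_bit(byte):
    return bin(byte).count("1") % 2     # 1 if the byte has an odd 1-count

def with_parity(data):
    # Sender appends a parity bit so byte + bit has an even 1-count.
    return [(b, parity_bit(b)) for b in data]

def parity_ok(received):
    # Receiver recomputes the parity for every (byte, bit) pair.
    return all(parity_bit(b) == p for b, p in received)

frame = with_parity(b"EDI")
assert parity_ok(frame)

# Flip one bit in transit: the check now fails.
corrupted = [(frame[0][0] ^ 0b1, frame[0][1])] + frame[1:]
assert not parity_ok(corrupted)
```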

Output Controls
Output controls provide assurance that the data delivered to users will be presented, formatted
and delivered in a consistent and secure way.
Logging and storage of negotiable, sensitive and critical forms in a secure place
Computer generation of negotiable instruments, forms and signatures – needs to be controlled
Report Distribution
Balancing and reconciling – data processing app program output should be balanced routinely
to the control totals. Timeliness important in balancing. If do balancing in a timely way can be a
preventive control – find and correct the error before it posts.
Output error handling
Output report retention
Verification of receipt of reports
To detect lost transactions – automated systems balancing could be used.

Auditing Application Controls


Observation and testing of users – observe them performing separation of duties,
authorizations, balancing, error controls, distribution of reports

Data integrity testing


Relational integrity tests – performed at the data element and record level – enforced through
data validation routines or by defining input condition constraints and data characteristics or
both. Is the data ok?
Referential integrity tests- these define existence relationships between entities in a database
that need to be maintained by the DBMS. These relationships maintained through referential
constraints (primary and foreign key). It is necessary that references be kept consistent in the
event of insertions, deletions, updates to these relationships.
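Referential integrity in miniature with SQLite. The table and column names are hypothetical; note that SQLite only enforces foreign keys when the `foreign_keys` pragma is turned on:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # enforcement is off by default
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE invoice (
                   id INTEGER PRIMARY KEY,
                   customer_id INTEGER REFERENCES customer(id))""")

con.execute("INSERT INTO customer VALUES (1)")
con.execute("INSERT INTO invoice VALUES (10, 1)")       # parent exists: OK

rejected = False
try:
    con.execute("INSERT INTO invoice VALUES (11, 99)")  # no such customer
except sqlite3.IntegrityError:
    rejected = True                      # DBMS keeps references consistent
print("orphan row rejected:", rejected)
```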
Data Integrity in Online Transaction Processing Systems.
ACID test
Atomicity – transaction either completed in its entirety or not at all
Consistency – all integrity conditions (consistent state) with each transaction – so database
moves from one consistent state to another
Isolation – each transaction isolated from other transactions so each transaction only accesses
data that are part of a consistent database state
Durability – once a transaction has been reported back to the user as complete, the resulting
changes will persist even if the database subsequently fails.
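Atomicity in miniature with SQLite: both legs of a transfer commit together, or a failure midway rolls the database back to its prior consistent state (the schema is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO account VALUES (?, ?)",
                [("alice", 100), ("bob", 0)])
con.commit()

def transfer(amount, fail_midway=False):
    try:
        con.execute("UPDATE account SET balance = balance - ? "
                    "WHERE name = 'alice'", (amount,))
        if fail_midway:
            raise RuntimeError("crash between the two legs")
        con.execute("UPDATE account SET balance = balance + ? "
                    "WHERE name = 'bob'", (amount,))
        con.commit()
    except RuntimeError:
        con.rollback()                  # undo the half-done transfer

transfer(60, fail_midway=True)
balances = dict(con.execute("SELECT name, balance FROM account"))
print(balances)  # nothing half-applied: alice 100, bob 0
```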

Testing Application Systems


Testing effectiveness of application controls
Snapshot – take snapshots of data as flows through the app. Very useful as an audit trail.
Mapping – identifies unused code and helps identify potential exposures
Tracing/Tagging – shows exact picture of sequence of events – shows trail of instructions
executed during application processing. Tagging involves placing a flag on selected transactions
at input and using tracing to track them.
Test data/deck – simulates transactions through real programs.
Base case system evaluation – uses test data sets developed as part of comprehensive testing
programs. Used to verify correct system operation before acceptance.
Parallel operation – put prod data through existing and new system and compare
Integrated test facility – creates test file in prod system and those test transactions get
processed along with the live data.
Parallel simulation – processes prod data using software that simulates the app.
Transaction selection programs – uses audit hooks or generalized audit software to screen and
select transactions input to the regular prod cycle
Embedded audit data collection – software embedded in host computer application screen –
selects input transactions. Usually developed as part of system development. Types include
(SCARF – auditor determines reasonableness of tests and provides info for further review) and
(SARF – randomly selects transaction for analysis)
Extended records – gathers all the data that have been affected by a particular transaction or program.
Generalized audit software – can be used for this – includes mathematical computations,
stratifications, statistical analysis, sequence and duplicate checking and recompilations.
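Two of those generalized audit software tests — sequence and duplicate checking — in miniature, over a hypothetical transaction-number file:

```python
txns = [1001, 1002, 1002, 1004]
present = set(txns)

# Duplicate check: any transaction number occurring more than once.
duplicates = {t for t in present if txns.count(t) > 1}

# Sequence check: numbers missing from the expected numeric run.
gaps = [n for n in range(min(txns), max(txns)) if n not in present]

print(duplicates, gaps)  # {1002} [1003]
```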

This provides direct access to the data – it can review an entire inventory and look for criteria
you specify. Very flexible. Five types of automated evaluation techniques are applicable to
continuous online auditing:
SCARF/EAM
Snapshots
Audit hooks – embed hooks in app systems to function as red flags and to induce IS auditors
to act before an error or irregularity gets out of hand. Useful when only select transactions need
to be examined.
ITF
Continuous and intermittent simulation – as each transaction is entered, simulator decides
whether transaction meets certain criteria and if so audits it.
Electronic Commerce
Originally two tier (browser and web server) or three tiered (browser, web server, database
server) architectures.
EDI – in use for more than 20 years, one of the first ecommerce apps in use between business
partners for transmitting business transactions between organizations with dissimilar computer
systems. It involves the exchange and transmittal of business documents such as invoices,
purchase orders, in a standard, machine processible way.

Translate data from business app then transmit data then retranslate on the other side. There is
traditional EDI and web based EDI.
Traditional EDI systems require:

Communications software/handler – process for transmitting and receiving electronic


documents between trading partners.
EDI interface – consists of an EDI translator (translates data between the standard format and
the trading partner’s format) and an application interface (moves electronic transactions to or
from the application systems and performs data mapping). Data mapping is the process by
which data are extracted from the EDI translation process and integrated with the data of the
receiving company. The EDI interface may generate and send functional acknowledgements
(used to validate data mapping as well as receipt), verify the identity of partners and check the
validity of transactions by checking transmission info against a trading partner master file.
Functional acknowledgements are standard EDI transactions that tell the trading partners that
their electronic documents were received.
Application system

EDI risks
Transaction authorization and authentication – since the transaction is electronic, no inherent
authentication occurs.
Identity of trading partners
Loss of business continuity
The critical nature of EDI transactions requires assurance that transmissions were completed –
methods to gain this assurance include internal batch total checking, run-to-run and
transmission record count balancing, and use of functional acknowledgements. Higher levels of
logging are needed for these too.
Need to make sure message format and content are valid, no unauthorized changes,
transmission channels protected, appropriate levels of logging – log all transactions, segregation
of duties (segregate initiation and transmission), limit who can initiate transactions, things are
converted properly, messages are reasonable when received.

Receipt of inbound transactions


Controls should ensure that all inbound EDI transactions are accurately and completely
received, translated and passed into an application, as well as processed only once.

Outbound transactions
Controls should ensure that only properly authorized outbound transactions are processed. This
includes objectives that outbound EDI messages are initiated upon authorization, that they
contain only pre-approved transaction types and that they are only sent to valid trading
partners.

Email systems
The ultimate control is at the workstation. Digital signatures are a good way of cutting down
on spam in an email system.
Payment systems
Two parties involved in these – issuers (operates payment service) and the users (send and
receive payments).

There are three types:


EMM – electronic money model – emulates physical cash – the payer does not have to be
online at the time of purchase and can have unconditional untraceability.
Electronic checks – emulate real-world checks – easy to understand and implement.
Electronic transfer model – the payer creates a payment transfer instruction, signs it digitally
and sends it to the issuer. Simplest of the three, but the payer has to be online.


Electronic Funds Transfer:
EFT is the exchange of money via telecommunications without currency actually changing
hands. It is the electronic transfer of funds between a buyer and a seller and their respective
financial institutions.
EFT refers to any financial transaction that transfers a sum of money from one account to
another electronically. In the settlement between parties, EFT transactions usually function via
an internal bank transfer from one party’s account to another via a clearinghouse network.
Usually, transactions originate from a computer at one institution and are transmitted to
another computer at another institution with the monetary amount recorded in the respective
organization’s accounts. Very high risk systems.

Auditor concerns are:


EFT switch involved in the network is of concern since it is the facility that provides the
communication linkage for all equipment in the network.
Auditor also concerned about the interface between the EFT system and the applications that
process the accounts from which funds are transferred.
Concerns similar to EDI

Integrated customer file – where all the info about a given customer combined together into one
file. ATMs are point of sale devices.

Image processing – scanning - Computer manipulation of images

Artificial intelligence and expert systems


Artificial intelligence is the study and application of the principles by which knowledge is
acquired and shared and information is communicated.

Expert systems are artificial intelligence systems; the auditor cares about the soundness of the
expert knowledge. An expert system allows the user to specify certain basic assumptions or
formulas and then uses these assumptions to analyze arbitrary events and produce a conclusion.
Good for capturing the knowledge and experience of individuals in the organization
Good for knowledge sharing
Helps create consistency in decision making
Comprised of database, inference engine, knowledge base, explanation module. KB is most
critical.
Knowledge base info collected as decision tree (questionnaires), rules (if then) or
semantic nets (graphs with nodes and relationships between nodes)
Useful as audit tools also

Business intelligence
Broad field of IT that encompasses the collection and dissemination of information to assist in
decision making and assess organizational performance. These are subject oriented. There are
risks if it is a global system and data has to be synchronized between regions – this can be
problematic.
Data warehouse – once data in warehouse, should not be modified
Data mart
Metadata – Quality of the metadata is critical to these.

Decision support system


Interactive system that provides the user with easy access to decision models and data from a
wide range of sources – supports managers in decision making tasks for business purposes.
Concentrates less on efficiency than on effectiveness (performing the right task). Usually based
on 4GL languages.

Improves managers’ decision-making ability, but this is hard to measure. The implementation
risk is the inability to specify purpose and usage.

Supply Chain Management - SCM is about linking the business processes between the related
entities (buyer and seller).

Important for just-in-time inventory – the store does not keep inventory; stock arrives as
needed. You should have multiple suppliers in case one fails, or you could be in trouble.
