GC18-9155-02
IBM DB2 Information Integrator
Before using this information and the product it supports, be sure to read the general information under “Notices” on page 101.
This document contains proprietary information of IBM. It is provided under a license agreement and is protected by copyright law. The information contained in this publication does not include any product warranties, and any statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative:
• To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
• To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 2003, 2004. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
© CrossAccess Corporation 1993, 2003.
Contents
Chapter 1. Introduction . . . . . . . . . . . . . . . . . 1
  Product overview . . . . . . . . . . . . . . . . . . . 1
  Operational components . . . . . . . . . . . . . . . . 2
    Data server . . . . . . . . . . . . . . . . . . . . . 2
    Clients . . . . . . . . . . . . . . . . . . . . . . . 5
    Connectors . . . . . . . . . . . . . . . . . . . . . 6
    Enterprise server . . . . . . . . . . . . . . . . . . 7
  Application components . . . . . . . . . . . . . . . . 7
  Administrative components . . . . . . . . . . . . . . . 7
    Data Mapper . . . . . . . . . . . . . . . . . . . . . 7
    Mainframe utilities . . . . . . . . . . . . . . . . . 11
Chapter 2. Concepts . . . . . . . . . . . . . . . . . . . 13
  Relational access to data . . . . . . . . . . . . . . . 13
  Client/server architecture . . . . . . . . . . . . . . 14
  Data sources . . . . . . . . . . . . . . . . . . . . . 15
  Nonrelational data mapping . . . . . . . . . . . . . . 15
  Data server and client components . . . . . . . . . . . 16
    Data server components . . . . . . . . . . . . . . . 17
    Data server system exits . . . . . . . . . . . . . . 18
    Clients . . . . . . . . . . . . . . . . . . . . . . . 20
  Configuration methodology . . . . . . . . . . . . . . . 20
    Data server configuration (CACDSCF) . . . . . . . . . 21
    Query processor configuration (CACQPCF) . . . . . . . 21
    Administrator configuration (CACADMIN) . . . . . . . 21
    Methods for updating configuration members . . . . . 21
  Validating the setup . . . . . . . . . . . . . . . . . 48
    Accessing CA-IDMS data with SQL locally . . . . . . . 48
    Accessing CA-IDMS data with SQL remotely . . . . . . 49
Chapter 6. Setting up DB2 Universal Database for z/OS . . 51
  Overview . . . . . . . . . . . . . . . . . . . . . . . 51
  Setting up the interface to DB2 Universal Database for z/OS . . 51
  Mapping the DB2 Universal Database table definitions to logical tables . . 52
  Loading the metadata catalog . . . . . . . . . . . . . 52
  Validating the setup . . . . . . . . . . . . . . . . . 53
    Accessing DB2 Universal Database data with SQL locally . . 54
    Accessing DB2 Universal Database data with SQL remotely . . 55
Chapter 7. Setting up IMS . . . . . . . . . . . . . . . . 57
  Overview . . . . . . . . . . . . . . . . . . . . . . . 57
  Mapping the sample IMS DBD to logical tables . . . . . 57
  Loading the metadata catalog . . . . . . . . . . . . . 65
  Validating the setup . . . . . . . . . . . . . . . . . 67
    Establishing the interface to DBCTL/DRA . . . . . . . 67
    Accessing IMS data with SQL locally . . . . . . . . . 68
    Accessing IMS data with SQL remotely . . . . . . . . 69
Product overview
DB2 Information Integrator Classic Federation for z/OS is a powerful, efficient,
and easy-to-implement mainframe data integration solution. It provides Windows®
and UNIX® tools and applications with direct, real-time SQL access to mainframe
databases and files. Tools and applications issue JDBC or ODBC SQL statements to
read and write data that is stored in VSAM and Sequential files, as well as IMS™,
CA-IDMS, CA-Datacom, Adabas, and DB2 Universal Database™ for z/OS
databases.
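For example, once a VSAM file has been mapped to a logical table, a desktop tool can read and update it with ordinary SQL. The table and column names below are illustrative only, not part of the product:

```sql
-- Hypothetical logical table CAC.CUSTOMER mapped to a VSAM KSDS.
-- Equivalent statements work against IMS, CA-IDMS, CA-Datacom,
-- Adabas, and DB2 Universal Database for z/OS logical tables.
SELECT CUST_ID, CUST_NAME, BALANCE
  FROM CAC.CUSTOMER
 WHERE BALANCE > 1000.00;

UPDATE CAC.CUSTOMER
   SET BALANCE = BALANCE - 250.00
 WHERE CUST_ID = '000142';
```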
The following figure demonstrates how IBM DB2 Information Integrator Classic
Federation for z/OS (DB2 II Classic Federation) accesses data.
Figure 1. Accessing data with DB2 Information Integrator Classic Federation for z/OS (the figure shows user tools reaching nonrelational data, relational data, and a warehouse/datamart through DB2 Information Integrator)
Operational components
Operational components provide the processing required to connect tools and
applications with data. They are responsible for:
• Accepting and validating SQL statements from a server, client, desktop tool, or desktop application
• Communicating SQL and result sets between distributed tools and applications and mainframe data sources
• Accessing the appropriate data using native file and database access aids such as indexes and keys
• Translating results into a consistent relational format regardless of source data type
Data server
The core of the operational environment is the data server. The data server
processes SQL statements that are sent from tools and applications through the
Java Database Connectivity (JDBC), Microsoft Open Database Connectivity
(ODBC), and Call Level Interface (CLI) clients.
The data server also can invoke stored procedures for mainframe algorithm reuse.
Stored procedures are defined to run within the data server address space. The
data server also can use the APPC bridge to access programs running in other
regions such as CICS®.
Finally, the data server can integrate IMS DC transactions by using DB2
Information Integrator Classic Federation transaction services and the data server’s
stored procedure mechanisms.
There are five types of tasks (services) that run in the data server:
• Region controller, which includes an MTO Operator Interface
• Connection handlers
• Query processors
• Logger
• Initialization services
Region controller
The data server has multiple tasks running within it. The main task is the region
controller. The region controller is responsible for starting, stopping, and
monitoring the other tasks running within the data server. The region controller
determines which tasks to start based on configuration parameter settings. See IBM
DB2 Information Integrator Administration Guide and Reference for Classic Federation
and Classic Event Publishing for more information about configuration parameters.
The region controller also supplies a z/OS MTO (Master Terminal Operator)
interface that can be used to monitor and control a data server address space.
Connection handlers
A connection handler (CH) task is responsible for listening for connection requests
from client applications and routing them to the appropriate query processor task.
DB2 Information Integrator Classic Federation for z/OS contains modules for
standard transport layers. These modules can be loaded by the connection handler
task:
• TCP/IP
• Cross memory services
• WebSphere MQ
A local z/OS client application can connect to a data server using any of these
methods. (The recommended approach is to use z/OS cross memory services).
Remote client applications (running under Windows or a UNIX platform) use
TCP/IP or WebSphere MQ to communicate with a remote data server.
Query processor
Embedded in the data server is a query processor that acts as a relational engine.
The query processor has no knowledge of the physical databases or files being
referenced in a SELECT, INSERT, UPDATE or DELETE statement. For each table
referenced in an SQL statement, the query processor invokes a connector that is
specific to the database or file type of the source data. The query processor
presents the different databases and file systems as a single data source and can
process SQL statements that access a single type of database or file system or
that reference multiple types of databases or file systems.
To process SQL data access requests, data definitions must be mapped to logical
tables. This information is stored in generated metadata catalogs, which emulate
DB2 Universal Database system catalogs. (The Data Mapper tool is used in
conjunction with the metadata utility to perform this mapping. See “Data Mapper”
on page 7, for more information.)
Logger
A single logger task can be running within a data server. The logger reports on
data server activities and is also used in error diagnosis situations.
Initialization services
Initialization services are special tasks used to prepare the data server execution
environment to access relational and nonrelational data, initialize high level
language environments for use by exits, or allow the data server to use the z/OS
Workload Manager (WLM) services to process queries in WLM goal mode. For
more information about initialization services, see IBM DB2 Information Integrator
Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Clients
Desktop tools and applications can issue SQL data access requests to a DB2
Information Integrator Classic Federation for z/OS data server through a DB2
Information Integrator Classic Federation for z/OS ODBC, JDBC, or Call Level
Interface (CLI) client.
The DB2 Information Integrator Classic Federation for z/OS ODBC, JDBC, and CLI
clients provide a single interface between end-user tools, applications (Java and C),
and other DB2 Information Integrator Classic Federation for z/OS operational
components. High-speed performance and application integrity are provided by
the 32-bit thread safe ODBC, JDBC, and CLI clients. A single client can access all
data sources on all platforms.
The DB2 Information Integrator Classic Federation for z/OS client serves both as
an ODBC, JDBC, or CLI driver and as a connection handler to other platforms. All
clients can leverage the underlying TCP/IP communications backbone; ODBC and
JDBC clients also can leverage the WebSphere MQ communications backbone.
For more information on the DB2 Information Integrator Classic Federation for
z/OS clients, see IBM DB2 Information Integrator Client Guide for Classic Federation
and Classic Event Publishing.
Connectors
The data server uses platform and database specific data connectors for data
access. For each table referenced in an SQL statement, the data server invokes a
database or file type specific data connector.
The data connectors are reentrant, so only a single copy is loaded even though
multiple load requests may be issued based on the number of tables referenced in
a statement and the number of concurrent users. This maximizes throughput while
minimizing the operational footprint.
The connectors use native database I/O commands. DB2 II Classic Federation data
connectors have been developed to use the most efficient, yet standard (supplied
with the database) multi-user environments available. DB2 II Classic Federation
does not rely on internal database control block organization. This ensures the
integrity of the result set while leveraging the performance profile of the
underlying database.
Enterprise server
Like a data server, the enterprise server’s connection handler is responsible for
listening for client connection requests. However, when a connection request is
received, the enterprise server does not forward the request to a query processor
task for processing. Instead, the connection request is forwarded to a data source
handler (DSH) and then to a data server for processing. The enterprise server
maintains the end-to-end connection between the client application and the target
data server. It is responsible for sending messages to and receiving messages from
the client application and the data server.
The enterprise server is also used to perform load balancing. Using configuration
parameters, the enterprise server determines the locations of the data servers that it
will be communicating with and whether those data servers are running on the
same platform as the enterprise server.
The enterprise server can automatically start a local data server if there are no
instances active. It can also start additional instances of a local data server when
the currently active instances have reached the maximum number of concurrent
users they can service, or the currently active instances are all busy.
Application components
Application-enabling components provide developers with a means of using the
DB2 Information Integrator Classic Federation for z/OS data delivery capabilities
within 3GL applications. The DB2 Information Integrator Classic Federation for
z/OS clients provide standard interfaces to C and Java programs to access
heterogeneous data sources through a single API.
Administrative components
Administrative components are tools and utilities used to perform the
housekeeping and data administration required to define an installation’s
environment and to define the data to be accessed by DB2 Information Integrator
Classic Federation for z/OS. This section discusses the Data Mapper, which is used
to create mappings between nonrelational data and logical relational tables, and
the mainframe utilities, which are used to manage the metadata catalog.
Data Mapper
The Data Mapper is a Microsoft Windows-based application that automates many
of the tasks required to create logical table definitions for nonrelational data
structures. The objective is to view a single file or portion of a file as one or more
relational tables. The mapping must be accomplished while maintaining the
structural integrity of the underlying database or file.
The Data Mapper interprets existing physical data definitions that define both the
content and the structure of nonrelational data. The tool is designed to minimize
administrative work, using a definition-by-default approach.
The Data Mapper accomplishes the creation of logical table definitions for
nonrelational data structures by creating metadata grammar from existing
nonrelational data definitions (such as COBOL copybooks, IMS DBDs, and
CA-IDMS schemas/subschemas). The metadata grammar is used as input to the
metadata utility to create a metadata catalog that defines how the nonrelational
data structure is mapped to an equivalent logical table. The metadata catalogs are
used by query processor tasks to facilitate both the access and translation of the
data from the nonrelational data structure into relational result sets.
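The general shape of this metadata grammar is sketched below. This is an illustrative sketch only: the data set, table, and field names are invented, and the exact keywords and options are documented in the administration guide referenced above.

```
-- Illustrative sketch of USE grammar mapping part of a VSAM record
-- to a logical table; consult the metadata utility documentation
-- for the exact syntax supported at your release level.
USE TABLE CAC.EMPLOYEE
  DBTYPE VSAM DS 'HLQ.EMPLOYEE.KSDS'
  (
    EMP_ID    SOURCE DEFINITION ENTRY EMP-ID    USE AS CHAR(6),
    LAST_NAME SOURCE DEFINITION ENTRY LAST-NAME USE AS CHAR(20),
    SALARY    SOURCE DEFINITION ENTRY SALARY    USE AS DECIMAL(9,2)
  );
```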
The Data Mapper import utilities create initial logical tables from COBOL
copybooks. You refine these initial logical tables in a graphical environment to
match site- and user-specific requirements. You can utilize the initial table
definitions automatically created by Data Mapper, or customize those definitions as
needed.
Multiple logical tables can be created that map to a single physical file or database.
For example, a site may choose to create multiple table definitions that all map to
an employee VSAM file:
• One table is used by department managers who need access to information about the employees in their departments.
• Another table is used by HR managers who have access to all employee information.
• Another table is used by HR clerks who have access to information that is not considered confidential.
• Another table is used by the employees themselves, who can query information about their own benefits structure.
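In such a mapping, each logical table exposes only the columns its audience needs. Queries like the following, with illustrative table and column names, would then run against the same underlying VSAM file:

```sql
-- Department managers: employees in their own department only
SELECT EMP_ID, LAST_NAME, JOB_TITLE
  FROM HR.EMP_BY_DEPT
 WHERE DEPT_ID = 'D42';

-- Employees: only their own benefits information
SELECT PLAN_CODE, COVERAGE, VESTED_PCT
  FROM HR.EMP_BENEFITS
 WHERE EMP_ID = '001234';
```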
Customizing these table definitions to the needs of the user is not only beneficial
to the end-user, but recommended.
Note: The Data Mapper contains embedded FTP support to facilitate file transfer
to and from the mainframe.
To create a relational model of your data using the Data Mapper, you perform the
following steps:
1. Import existing descriptions of your nonrelational data into Data Mapper.
COBOL copybooks, IMS-DL/I Database Definitions (DBDs), and CA-IDMS
schema/subschema can all be imported into the Data Mapper.
The Data Mapper creates default logical table definitions from the COBOL
copybook information.
2. If these default table definitions are suitable for end users, go to the next step.
If not, refine or customize the default table definitions as needed. For example,
importing the record layout for the VSAM customer master file creates the
default Customer_Table. Two additional tables can also be created from the
original:
• Marketing_Customer_Table, which contains only those data items required by the marketing department.
• Service_Customer_Table, which contains only those data items required by support representatives.
3. Export the logical table definitions to the mainframe where the database/file
resides. These definitions are then used as input to the metadata utility, which
creates the metadata catalogs.
After completing these steps, you are ready to use the operational components
with your tools and applications to access your nonrelational data.
Mainframe utilities
A variety of utilities are provided on the mainframe. Below is a list of some of the
key utilities:
1. CACCATLG, which contains sample JCL to allocate the metadata catalog
2. CACMETAU (the metadata utility), which contains sample JCL to load the metadata catalog
3. CACGRANT, which grants access rights to users
12 DB2 II Getting Started with Classic Federation
Chapter 2. Concepts
This chapter describes the components that make up the core DB2 Information
Integrator Classic Federation for z/OS system and key concepts as they pertain to
these components. It describes the data server’s capabilities and also defines many
of the concepts and terminology that DB2 Information Integrator Classic
Federation for z/OS employs. Information about how other platforms interface
with the DB2 Information Integrator Classic Federation for z/OS components is
also included.
This chapter introduces the key concepts and components so that you have the
context that is required to perform the setup steps in the chapters that follow. For
more information about the concepts discussed in this chapter, see IBM DB2 Information
Integrator Administration Guide and Reference for Classic Federation and Classic Event
Publishing.
DB2 Information Integrator Classic Federation for z/OS uses a metadata catalog.
You use the Data Mapper, a Windows-based graphical tool, to generate the input
to the metadata utility, which populates that catalog.
DB2 Information Integrator Classic Federation for z/OS supports the DB2
Universal Database version 4 dialect of SQL. This is also called the SQL-92
standard. DB2 Information Integrator Classic Federation for z/OS supports a fairly
complete SQL implementation, including inner joins, outer joins, subselects,
GROUP BY, HAVING, and scalar functions.
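For instance, all of the following constructs are accepted; the table and column names are illustrative:

```sql
-- Inner join with grouping and a HAVING filter
SELECT d.DEPT_NAME, COUNT(*) AS HEADCOUNT, AVG(e.SALARY) AS AVG_SAL
  FROM CAC.EMPLOYEE e
       INNER JOIN CAC.DEPARTMENT d ON e.DEPT_ID = d.DEPT_ID
 GROUP BY d.DEPT_NAME
HAVING COUNT(*) > 10;

-- Subselect combined with a scalar function
SELECT EMP_ID, UCASE(LAST_NAME)
  FROM CAC.EMPLOYEE
 WHERE SALARY > (SELECT AVG(SALARY) FROM CAC.EMPLOYEE);
```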
Client/server architecture
DB2 Information Integrator Classic Federation for z/OS uses a client/server
architecture. In order to communicate with a data server, your applications need to
interface with a client using one of the clients provided with DB2 Information
Integrator Classic Federation for z/OS. ODBC and JDBC clients are provided for
Windows applications. JDBC and CLI clients are provided for UNIX applications.
Before running your application, you must configure the client. Depending upon
the platform, configuration is performed in one of several ways. For the ODBC
client, configuration is performed using the ODBC Administrator. For JDBC and
CLI clients, configuration is performed using a text file.
Regardless of the configuration method used, you must identify one or more data
sources that your application will be accessing. For each data source you must
identify the communications protocol (TCP/IP or WebSphere MQ) that will be
used to communicate with a DB2 Information Integrator Classic Federation for
z/OS data server. TCP/IP is supported by all clients. WebSphere MQ is supported
by the ODBC and JDBC clients. Additionally, for local (z/OS) applications, a cross
memory connection handler service is also supported that uses data spaces.
These are automatically created by the data server and require no definitions to
any other subsystems.
If your application supports a large number of concurrent users (more than can be
handled by a single data server), then the enterprise server is available to manage
these situations. The enterprise server is installed between your application and the
DB2 Information Integrator Classic Federation for z/OS data server and appears to
the client as the data server. The enterprise server is responsible for starting
additional data servers as the number of concurrent users increases. For
information about the enterprise server, see IBM DB2 Information Integrator
Administration Guide and Reference for Classic Federation and Classic Event Publishing.
The query processor is neutral with respect to database and file system. You can
map a data source to a particular type of database or file system or to multiple
databases or file systems. You should define data sources in organizational terms
based on the data that your end users need to access and not based on the
underlying database or file system.
For example, if you have a Credit department that requires access to IMS and
VSAM data, you can create a single data source, CREDIT. In this example, you
would define the logical tables that reference the desired IMS and VSAM credit
data in a set of metadata catalogs that are accessible to the data server with a
service defined for the CREDIT data source. Logical tables are discussed in the
following section.
In the credit example, when your application connects to the CREDIT data source,
it has access to all the credit-related data as if the data were contained in a single
relational database. This allows your application to perform heterogeneous joins
for any of these tables even though the data is physically stored in IMS databases
and VSAM files.
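A query against the CREDIT data source could then join the IMS-backed and VSAM-backed logical tables directly. The table names below are illustrative:

```sql
-- CREDIT.ACCOUNT is mapped to an IMS database; CREDIT.PAYMENT is
-- mapped to a VSAM file. The query processor resolves the join as
-- if both tables lived in one relational database.
SELECT a.ACCT_NO, a.CREDIT_LIMIT, p.PAY_DATE, p.AMOUNT
  FROM CREDIT.ACCOUNT a
       INNER JOIN CREDIT.PAYMENT p ON a.ACCT_NO = p.ACCT_NO
 WHERE a.CREDIT_LIMIT > 5000;
```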
A DB2 Information Integrator Classic Federation for z/OS data server can support
multiple data sources running different types of query processors. For example, in
addition to your Credit application, you could have another application that needs
to access accounting information. In this case, you would create the logical tables
that contain the desired accounting information in the same metadata catalogs that
contain your credit tables. You would then define another data source called
ACCOUNTING for the data server.
You can map multiple logical tables to a single physical database or file system. A
typical situation in which you would perform this operation is when a VSAM file
contains 10 different types of records. In this case, you would define 10 different
logical tables, one for each record type, using "views" that each select a single
record type.
When accessing IMS data, DB2 Information Integrator Classic Federation for z/OS
only accesses a single hierarchical path in the database as a logical table. If your
IMS database contains multiple child segments in different hierarchical paths,
then you must define logical tables for each hierarchical path that you want to
access. After this is done, you can use JOINs to retrieve all of the data you need
from different hierarchical paths in a single query.
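For example, with one logical table defined per hierarchical path (the names below are illustrative), a single query can recombine the paths:

```sql
-- IMS.ORDER_PATH and IMS.SHIPMENT_PATH are logical tables mapped to
-- two different hierarchical paths of the same IMS database, joined
-- here on the shared root-segment key.
SELECT o.ORDER_NO, o.ORDER_DATE, s.SHIP_DATE, s.CARRIER
  FROM IMS.ORDER_PATH o
       INNER JOIN IMS.SHIPMENT_PATH s ON o.ORDER_NO = s.ORDER_NO;
```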
Logical table definitions are stored in the metadata catalog. The logical tables, their
associated column definitions, and index information are defined by metadata
(USE) grammar. The index information is used to optimize access to the physical
database or file system.
The metadata grammar is a text file processed by the metadata utility, which
updates the metadata catalog. When the metadata utility is executed, it verifies the
syntax and content of the metadata grammar and verifies that the physical
databases or files that are referenced exist. During this verification process, the
metadata utility also collects additional physical information that is used to further
optimize access to the physical database or file.
The Data Mapper generates the proper metadata grammar for each of the logical
tables you have defined for a data source. The Data Mapper also contains
embedded FTP support so that you can easily download the COBOL copybooks
for import into the Data Mapper. Additionally, you can use the embedded FTP
support to move the generated metadata grammar back to the mainframe so it can
be run through the metadata utility.
In relational databases, the format and structure of the tables are strictly enforced.
For example, a column is defined as one of the data types that is supported by the
database. Additionally, relational databases do not support repeating items.
Instead, you define a separate table that contains multiple rows of data for each
repeating data item. These restrictions are typically not enforced by a nonrelational
database or file system.
DB2 Information Integrator Classic Federation for z/OS supports the definition of
repeating data items in the data mapping process. DB2 Information Integrator
Classic Federation for z/OS logically joins the repeating and non-repeating data
and returns a separate row for each instance of the repeating data.
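As an illustration, if an employee record carries a repeating group of dependents, a logical table mapped over that group returns one row per dependent. The names below are illustrative:

```sql
-- HR.EMP_DEPENDENTS flattens a repeating DEPENDENT group in the
-- employee record: the non-repeating employee fields are repeated
-- on each row, once per occupied group instance.
SELECT EMP_ID, LAST_NAME, DEP_NAME, DEP_BIRTH_DATE
  FROM HR.EMP_DEPENDENTS
 WHERE EMP_ID = '001234';
```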
[Figure: a client application connects through a driver manager (ODBC, JDBC, or CLI) and an optional client logger, then over a transport layer (TCP/IP or WebSphere MQ) to the data server's connection handler, which routes requests to a query processor. The region controller, logger, system exits, system catalogs, and target data sources complete the data server.]
The region controller is responsible for starting, stopping, and monitoring the
different services that are running within the data server. The services are
implemented as individual load modules running as separate z/OS tasks within
the data server address space. Most of the services can have multiple instances and
most can support multiple users.
A description of the different types of services that the region controller manages
follows.
Initialization services
Initialization services are special purpose services (tasks) that are used to initialize
and terminate different types of interfaces to underlying database management
systems or z/OS system components. For example, an initialization service is
provided to activate the DRA interface used by the IMS DRA connector in order to
access IMS data. An example of a z/OS system component initialization service is
the Workload Manager (WLM) service.
Query processor services
The query processor is the DB2 Information Integrator Classic Federation for z/OS
relational engine that services user SQL requests.
The query processor can service SELECT statements and stored procedure
invocations. The query processor invokes one or more connectors to access the
target database or file system that is referenced in an SQL request. The following
connectors are supported:
• IMS BMP/DBB interface: Allows IMS data to be accessed through an IMS region controller. A region controller is restricted to a single PSB for the data server, limiting the number of concurrent users the query processor can handle.
• IMS DRA interface: Allows IMS data to be accessed using the IMS DRA interface. The DRA interface supports multiple PSBs and is the only way to support a large number of concurrent users. This is the recommended interface.
• Sequential interface: Allows access to Sequential files or members.
• Stored procedure interface: Allows a z/OS Assembler, C, COBOL, or PL/I application program to be invoked.
• VSAM interface: Allows access to VSAM ESDS, KSDS, or RRDS files. This interface also supports use of alternate indexes.
• CA-IDMS interface: Allows access to CA-IDMS files.
• Adabas interface: Allows access to Adabas files.
• CA-Datacom interface: Allows access to CA-Datacom files.
• DB2 interface: Allows access to DB2 Universal Database tables.
MTO interface
The MTO interface is a z/OS Master Terminal Operator interface that allows you
to display and control the services and users that are being serviced by a data
server. Using the MTO interface you can also dynamically configure the data
server. The MTO interface is contained within the region controller service.
Logger service
The logger service is a task that is used for system monitoring and
troubleshooting. During normal operations you will not need to be concerned with
the logger service.
All system exits are written in Assembler language and are designed to run in a
multi-user environment. Source code is provided for all exits so that you can
customize the supplied exits to meet your site standards. Complete descriptions
about activating the system exits and their APIs can be found in IBM DB2 Information
Integrator Administration Guide and Reference for Classic Federation and Classic Event
Publishing. The following system exits are provided:
• SAF security exit
• SMF accounting exit
• CPU Resource Governor exit
• Workload Manager exit
• DB2 Thread Management exit
• Record Processing exit
When the CPU Resource Governor exit is activated, it is passed the available CPU
time for that user. Periodically, the CPU Resource Governor exit is called to check
how much CPU time has been used. After the allotted time is exceeded, the exit
returns a return code that stops the query. The frequency with which the exit is
called is controlled by DB2 Information Integrator Classic Federation for z/OS.
The CPU Resource Governor exit is activated using the CPU GOVERNOR
configuration parameter.
The Workload Manager exit uses the same unit-of-work concept that the CPU
Resource Governor exit uses. A unit-of-work is a single query unless your
application opens multiple simultaneous cursors, in which case the unit-of-work is
from first cursor open to last cursor close.
When the Workload Manager exit is active, it joins a Workload Manager enclave
when the unit-of-work starts. The enclave is left when the unit-of-work is
completed. While in the unit-of-work, the query processor is under Workload
Manager control, provided that WLM goal mode is active.
Clients
The client is responsible for loading the appropriate transport layer to establish a
connection with the target data servers. When your application connects to a data
source, the connection handler activates the appropriate transport layer service
based on configuration parameters. The client is responsible for shipping all
requests to the appropriate transport layer service for the duration of the session
(until your application disconnects from a data source).
DB2 Information Integrator Classic Federation for z/OS provides the following
clients:
• ODBC
• JDBC
• Call Level Interface (CLI)
Configuration methodology
DB2 Information Integrator Classic Federation for z/OS configuration varies based
on the type of client used and data server types. For example, when you use the
ODBC client with an application, configuration is performed with the ODBC
Administrator. When you configure a CLI client without an ODBC driver manager,
configuration is performed manually, using a text configuration file.
Client configuration is simple and straightforward. You must define the data
sources that your application uses. You can define additional tuning and
debugging parameters. This configuration, however, usually is only performed
once per new application deployment.
The DB2 Information Integrator Classic Federation for z/OS data server is
designed for continuous operation. As your use of DB2 Information Integrator
Classic Federation for z/OS expands, the data servers are designed so that you
can add and reconfigure services without interrupting normal operations.
The data server configuration files are text files that contain the various
configuration parameters that define services and other operational and tuning
parameters. These configuration files are stored as members in a configuration
PDS. Data servers have three classes of configuration members stored in the
SCACCONF data set:
• Data server configuration (CACDSCF)
• Query processor configuration (CACQPCF)
• Administrator configuration (CACADMIN)
When user configuration overrides are activated, the following occurs when a user
connects to the data server: after a query processor service task is selected to
service that user, the configuration PDS is searched for a member whose name
matches that user's ID. If such a member exists, the configuration definitions
found in that member override the applicable definitions in the query processor
configuration member.
Note: Typically, you use the user configuration override feature only when you are
developing an application, tuning, or troubleshooting. Use this feature
with caution. For normal production operations, the configuration
parameters that control the query processor should be defined at the query
processor configuration member level.
When you update query processor configuration members, the associated service
must be stopped and then restarted for the updates to take effect. Manual updates
to an Administrator configuration member take effect when a user connects to the
data server and a query processor is activated.
Overview
The following sections describe how to enable SQL access to Software AG Adabas
data. They explain how to set up a sample file called Employees, which is usually
created during installation of Adabas. The Employees file contains 1107 records of
employee information. If the Employees sample is not available at your site, you
can use one of your own Adabas files.
Although this chapter uses a sample database, you use the same general steps to
enable SQL access to your own Adabas databases:
v Set up the Adabas environment
v Run the USE grammar generator (USG) to extract information about the Adabas
file and create logical tables
v Load the metadata catalog with the logical tables
v Access the Adabas data with SQL
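Once the last step succeeds, a spot check of the mapping can be as simple as the
following queries, issued through any of the clients or the local batch client. The
table name CAC.EMPLOYEE is hypothetical; substitute the owner and table name that
your generated USE grammar defines.

```sql
-- Hypothetical logical table name; substitute the name from your USE grammar.
SELECT COUNT(*) FROM CAC.EMPLOYEE;
-- The Adabas Employees sample should report 1107 rows.
SELECT * FROM CAC.EMPLOYEE;
```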
These steps are described in more detail in the following sections. For additional
information about developing and deploying applications with DB2 Information
Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator
Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: In all of the jobs that are described in this section, you must customize the
JCL as appropriate for your site. For example, you may need to concatenate
libraries specific to Adabas that are provided by the vendor. Templates for
these libraries are included in the JCL. You must uncomment them and
provide the appropriate high-level qualifiers.
The SCACSAMP data set contains a member named CACADAL. Edit and submit
the CACADAL job. This job creates and populates the members CACADLN and
CACADLN2 in the SCACLOAD data set with the CACADABS and ADALNK
modules, which are needed for Adabas access.
Note: If you are testing your own Adabas tables, then you can create your
own SQL in the same member (CACSQL) or create another member
for your SQL statements.
c. Configure the client.
See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic
Event Publishing for detailed information about configuring and using the various
DB2 Information Integrator Classic Federation for z/OS clients.
Overview
The following sections describe how to enable SQL access to CA-Datacom data.
They explain how to set up a sample CA-Datacom database called CUST. The
CUST sample is included in the CA-Datacom installation. It contains 116 records of
customer information. If the CUST sample is not available at your site, you can use
one of your own CA-Datacom databases.
Although this chapter uses a sample database, you use the same general steps to
enable SQL access to your own CA-Datacom databases:
v Export the CA-Datacom file definition into COBOL copybook format.
v Map the source language member to logical tables and export those definitions
as metadata (USE) grammar.
v Load the USE grammar into the metadata catalog.
v Verify SQL access to the CA-Datacom data.
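As a sketch of the verification step, after the CUST mapping is loaded into the
metadata catalog (the sections below map it as CAC.CUSTDCOM), queries like the
following can be issued through the batch client:

```sql
-- CAC.CUSTDCOM is the logical table mapped later in this chapter.
SELECT COUNT(*) FROM CAC.CUSTDCOM;   -- the CUST sample holds 116 records
SELECT * FROM CAC.CUSTDCOM;
```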
These steps are described in more detail in the following sections. For additional
information about developing and deploying applications with DB2 Information
Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator
Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: In all of the jobs that are described in this section, you must customize the
JCL as appropriate for your site. For example, you may need to concatenate
libraries specific to CA-Datacom that are provided by the vendor. Templates
for these libraries are included in the JCL. You must uncomment them and
provide the appropriate high-level qualifiers.
The SCACSAMP data set on the mainframe contains a member called CACDCSLG.
This member contains sample JCL that you can use to run the CA-Datacom source
language generation utility, which creates a COBOL copybook for all fields in the
CUST table.
To use the CACDCSLG JCL:
1. Customize the JCL to run in your environment:
For more detailed information on data mapping, see IBM DB2 Information Integrator
Data Mapper Guide for Classic Federation and Classic Event Publishing.
To map the sample CA-Datacom copybook:
1. (Optional) Transfer the sample COBOL copybook to the workstation on which
the Data Mapper is installed:
a. Transfer the CACCUSFD member from the COPYBOOK data set to the
workstation where the Data Mapper is installed. The following steps
assume that the files are located in the Samples subdirectory of the Data
Mapper installation directory. By default, the location is C:\Program
Files\IBM\DB2IIClassic82\Data Mapper\Samples.
b. Change the file name from CACCUSFD to customer.fd.
2. Start the Data Mapper.
On the Windows Start menu, open IBM DB2 Information Integrator Classic
Tools and click Data Mapper.
3. Open the sample repository Sample.mdb.
a. On the Data Mapper File menu, click Open Repository.
b. In the Open Repository dialog, select the Sample.mdb repository file and
click Open. (If the Sample.mdb repository isn’t listed in the working
directory, browse to the Data Mapper installation directory and look in
the Xadata subdirectory.)
c. Click OK.
5. List the tables in the CUSTOMER SAMPLE - DATACOM data catalog.
a. In the Sample.mdb repository window, click in the first column of the
CUSTOMER SAMPLE - DATACOM row to select that data catalog.
c. Select the copybook file that you created on the mainframe for the CUST
table.
If you transferred the CACCUSFD member from the mainframe to a
customer.fd file on the workstation in step 1 on page 30, then simply select
the customer.fd file.
If you did not transfer the CACCUSFD member from the mainframe, then
you can retrieve the CACCUSFD member using the Remote FTP feature:
1) Click Remote.
2) In the FTP Connect dialog box, enter the appropriate connection
information for the mainframe on which the CACCUSFD member
resides:
v Host Address: The host address of the mainframe. This can be a host
name or IP address.
v Port ID: Typically 21 for FTP connections.
v User ID: The user ID that is required by the FTP server on the
mainframe.
v User Password: The user password that is required by the FTP server
on the mainframe.
3) Click Connect.
Wait while the connection is completed and the data set list is built and
transferred to the workstation.
4) When the data set list appears:
a) Scroll the list or change the working directory to locate the source
language data set that you generated on the mainframe. For this
example, go to COPYBOOK.
The COBOL definitions are imported from the copybook into the table
CAC.CUSTDCOM and mapped to SQL data types. The resulting table columns
are shown in the Columns for DATACOM Table CUSTDCOM window.
8. Close all windows within the Data Mapper except the Sample.mdb repository
window.
9. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog CUSTOMER
SAMPLE - DATACOM.
b. On the File menu, click Generate USE Statements.
d. After the Data Mapper generates the USE statements, it offers you a chance
to see the USE statements (metadata grammar) that were generated. Click
Yes to open the USE statements in Microsoft Notepad.
Note: If you plan to use tools that require access to the metadata catalog
information, you must run CACGRANT using CACGRSYS as the input.
Note: If you are testing your own CA-Datacom tables, you can either create
your own SQL in the same member (CACSQL) or create another
member for your SQL statements.
c. Configure the client.
The client configuration file is used to communicate to the data server using
the communication protocol defined in the data server.
In the SCACCONF data set there is a member called CACUSCF. Configure
the DATASOURCE parameter based on the communications protocol set up
in the data server, as described in IBM DB2 Information Integrator Installation
Guide for Classic Federation and Classic Event Publishing.
d. Customize and submit the local client job CACCLNT.
In the SCACSAMP data set there is a member called CACCLNT. This job
executes the client batch job to issue SQL to the data server using CACSQL
as the input SQL. Customize the JCL to run in your environment and
submit.
See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic
Event Publishing for detailed information about configuring and using the various
DB2 Information Integrator Classic Federation for z/OS clients.
Overview
The following sections describe how to enable SQL access to CA-IDMS. They
explain how to set up a sample CA-IDMS database called Employee Demo
Database. The sample database is part of the CA-IDMS installation and is
identified by a schema named EMPSCHM and subschema named EMPSS01.
Although this chapter uses a sample database, you use the same general steps to
enable SQL access to your own CA-IDMS databases:
v On the mainframe, punch the schema and subschema.
v On Windows, use the Data Mapper to create logical tables based on the
CA-IDMS schema and subschema.
v On the mainframe, use the metadata utility to load the logical tables into the
data server’s metadata catalog.
v Verify SQL access to the CA-IDMS data.
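As a sketch of the final verification step, after the Employee Demo Database
mapping is loaded (the sections below map it as CAC.EMPLIDMS), a query like the
following can be issued through the batch client:

```sql
-- CAC.EMPLIDMS is the logical table mapped later in this chapter.
SELECT COUNT(*) FROM CAC.EMPLIDMS;
SELECT * FROM CAC.EMPLIDMS;
```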
These steps are described in more detail in the following sections. For additional
information about developing and deploying applications with DB2 Information
Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator
Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: In all of the jobs that are described in this section, you must customize the
JCL as appropriate for your site. For example, you may need to concatenate
libraries specific to CA-IDMS that are provided by the vendor. Templates for
these libraries are commented out in the JCL. You must uncomment them
and provide the appropriate high-level qualifiers.
For more information about data mapping, see IBM DB2 Information Integrator Data
Mapper Guide for Classic Federation and Classic Event Publishing.
To map the CA-IDMS schema and subschema:
1. Prepare the sample CA-IDMS schema and subschema:
a. On the mainframe, ensure that the schema (CACIDSCH) and subschema
(CACIDSUB) are in library members so that they can be transferred to
your workstation.
b. Transfer (via FTP) the CACIDSCH and CACIDSUB members from the
SCACSAMP data set to the workstation where the Data Mapper is
installed. The following steps assume that the files are located in the
Samples subdirectory of the Data Mapper installation directory. By default,
the location is C:\Program Files\IBM\DB2IIClassic82\Data Mapper\Samples.
c. Rename the files CACIDSCH and CACIDSUB to follow Windows naming
conventions:
v cacidsch.sch
v cacidsub.sub
2. Start the Data Mapper.
On the Windows Start menu, open IBM DB2 Information Integrator Classic
Tools and click Data Mapper.
3. Open the sample repository Sample.mdb.
a. On the Data Mapper File menu, click Open Repository.
b. In the Open Repository dialog, select the Sample.mdb repository file and
click Open. (If the Sample.mdb repository isn’t listed in the working
directory, browse to the Data Mapper installation directory C:\Program
Files\IBM\DB2IIClassic82\Data Mapper and look in the Xadata
subdirectory.)
c. Click Open.
4. Create a new data catalog in the repository.
a. On the Edit menu, click Create a New Data Catalog.
The Create Data Catalog dialog box appears.
b. Enter the following information:
v Name: CUSTOMER SAMPLE - IDMS
v Type: IDMS
c. After loading the schema, the Data Mapper prompts you to load a
subschema. Click OK.
d. In the Load CA-IDMS Schema File dialog box, select the cacidsub.sub
subschema file that you transferred from the mainframe and click OK.
e. The Data Mapper confirms that the load operation was successful. Click
OK.
6. List the tables in the CUSTOMER SAMPLE - IDMS data catalog.
a. In the Sample.mdb repository window, click in the first cell of the
CUSTOMER SAMPLE - IDMS row to select that data catalog.
c. Click Continue.
The Import Copybook dialog box appears.
d. Click Import.
The COBOL definitions are imported from the loaded schema into the
table CAC.EMPLIDMS, as shown in the Columns for IDMS Table
EMPLIDMS window.
9. Close all windows within the Data Mapper except the Sample.mdb repository
window.
10. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog CUSTOMER
SAMPLE - IDMS.
b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as idms.use) for the generated USE statements and
click OK.
d. After the Data Mapper generates the USE statements, it offers you a
chance to see the USE statements (metadata grammar) that were
generated. Click Yes to open the USE statements in Microsoft Notepad.
See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic
Event Publishing for detailed information about configuring and using the various
DB2 Information Integrator Classic Federation for z/OS clients.
Overview
The following sections describe how to enable SQL access to DB2 Universal
Database for z/OS (DB2 UDB). They explain how to set up a sample DB2 UDB
database table called EMP, which is usually created during installation of DB2 UDB
under a sample Employees database. The EMP table contains 42 records of
employee information. If the Employees sample is not available at your site, you
can use one of your own DB2 UDB databases.
Although this chapter uses a sample database, you use the same general steps to
enable SQL access to your own DB2 UDB databases:
v Set up the interface to DB2 UDB.
v Map the DB2 UDB tables to DB2 Information Integrator Classic Federation for
z/OS logical tables.
v Load the metadata catalog with the logical tables.
v Verify SQL access to the DB2 UDB data.
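As a sketch of the verification step, assuming the sample EMP table is mapped to
a logical table named CAC.EMP (a hypothetical name; use whatever owner and table
name you assign in your mapping), a query like the following can be issued
through the batch client:

```sql
-- CAC.EMP is a hypothetical logical table name for the mapped EMP table.
SELECT COUNT(*) FROM CAC.EMP;   -- the sample EMP table holds 42 records
```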
These steps are described in more detail in the following sections. For additional
information about developing and deploying applications with DB2 Information
Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator
Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: In all of the jobs that are described in this section, you must customize the
JCL as appropriate for your site. For example, you may need to concatenate
libraries specific to DB2 UDB that are provided by IBM. Templates for these
libraries are included in the JCL. You must uncomment them and provide
the appropriate high-level qualifiers.
Note: If you are testing your own DB2 Universal Database tables, then you
can create your own SQL in the same member (CACSQL) or create
another member for your own SQL statements.
c. Configure the client.
The client configuration file is used to communicate to the data server using
the communication protocol defined in the data server.
In the SCACCONF data set is a member called CACUSCF. Configure the
DATASOURCE parameter based on the communications protocol set up in
the data server, as described in IBM DB2 Information Integrator Installation
Guide for Classic Federation and Classic Event Publishing.
d. Customize and submit the local client job CACCLNT.
In the SCACSAMP data set there is a member called CACCLNT. This job
executes the client batch job to issue SQL to the data server using CACSQL
as the input SQL. Customize the JCL to run in your environment and
submit.
e. View the output. The output should contain the SQL statement that is being
issued and the corresponding result sets.
See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic
Event Publishing for detailed information about configuring and using the various
DB2 Information Integrator Classic Federation for z/OS clients.
Overview
The following sections describe how to enable SQL access to IMS data. They
explain how to set up access to a sample IMS database called DI21PART. Your IMS
system should have the DI21PART database installed by default.
The SCACSAMP data set on the mainframe contains DBD and COBOL copybooks
that describe the DI21PART database. The DBD is contained in a member called
CACIMPAR and the two COBOL copybooks are in members CACIMROT
(PARTROOT segment) and CACIMSTO (STOKSTAT segment). You will need these
files to complete the following sections. The following sections create a mapping
for the recommended data capture options for root and child segments.
Although this chapter uses a sample database, you use the same general steps to
enable SQL access to your own IMS databases:
v Identify the DBD and COBOL copybooks that describe the database.
v Map the DBD to logical tables and export those definitions as metadata (USE)
grammar.
v Load the USE grammar into the metadata catalog.
v Verify SQL access to the IMS data.
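As a sketch of the verification step, after the mappings that the sections below
create (CAC.PARTROOT for the root segment and CAC.STOKSTAT for the root plus
child segment) are loaded into the metadata catalog, queries like the following
can be issued through the batch client:

```sql
SELECT COUNT(*) FROM CAC.PARTROOT;
-- CAC.STOKSTAT includes the appended root-segment columns,
-- so root and child data can be retrieved in a single query.
SELECT * FROM CAC.STOKSTAT;
```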
These steps are described in more detail in the following sections. For additional
information about developing and deploying applications with DB2 Information
Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator
Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: In all of the jobs that are described in this section, you must customize the
JCL as appropriate for your site. For example, you may need to concatenate
libraries specific to IMS that are provided by the vendor. Templates for these
libraries are included in the JCL. You must uncomment them and provide
the appropriate high-level qualifiers.
c. Click OK.
5. Load the IMS DL/I DBD, so that you can use it for reference when creating
logical tables.
a. On the File menu, click Load DL/I DBD for Reference.
7. Create a new table in the Parts Catalog - IMS data catalog for the PARTROOT
segment.
c. Click OK.
8. Import the field (column) definitions from the CACIMROT copybook that you
transferred from the SCACSAMP data set into the PARTROOT table.
a. In the Tables for Data Catalog Parts Catalog - IMS window, select the
PARTROOT table by clicking in the first cell of its row.
b. On the File menu, click Import External File.
c. Select the cacimrot.fd copybook file that you transferred to your
workstation.
d. Click OK.
e. In the Import Copybook dialog box, confirm the information.
f. Click Import.
The COBOL definitions are imported from the cacimrot.fd copybook into
the table CAC.PARTROOT and mapped to SQL data types. The resulting
table columns are shown in the Columns for IMS Table PARTROOT
window.
You have now created a logical table mapping that matches the data access
options that you specified for the PARTROOT segment. The following steps
show you how to create the logical table for the STOKSTAT segment.
9. Create a new table in the Parts Catalog - IMS data catalog for the STOKSTAT
segment.
a. Go to the IMS Tables for Data Catalog Parts Catalog - IMS window.
b. On the Edit menu, click Create a New Table.
c. In the Create IMS Table dialog box, enter the following table properties:
v Name: automatically populated from the Leaf Seg field
v Owner: CAC
v Index Root: PARTROOT
v Leaf Seg: STOKSTAT
STOKSTAT is referred to as the leaf segment because it is the
lowest-level segment that this logical table maps.
v PSB Name: DFSSAM03
d. Click OK.
10. Import the field (column) definitions from the CACIMROT copybook that you
transferred from the SCACSAMP data set into the STOKSTAT table.
a. In the Tables for Data Catalog Parts Catalog - IMS window, select the
STOKSTAT table by clicking in the first cell of its row.
b. On the File menu, click Import External File.
c. Select the cacimrot.fd copybook file that you transferred to your
workstation.
d. Click OK.
e. In the Import Copybook dialog box, confirm the information.
f. Click Import.
The COBOL definitions are imported from the cacimrot.fd copybook into
the table CAC.STOKSTAT and mapped to SQL data types. The resulting
table columns are shown in the Columns for IMS Table STOKSTAT
window.
11. Import the field (column) definitions from the CACIMSTO copybook that you
transferred from the SCACSAMP data set into the STOKSTAT table.
a. In the Tables for Data Catalog Parts Catalog - IMS window, select the
STOKSTAT table by clicking in the first cell of its row.
b. On the File menu, click Import External File.
c. Select the cacimsto.fd copybook file that you transferred to your
workstation.
d. Click OK.
e. In the Import Copybook dialog box make sure that the following fields are
set correctly:
v Append to Existing Columns is checked
v Seg Name is STOKSTAT
You have now defined a logical table that includes a root segment and a
child segment.
12. Close all windows within the Data Mapper except the Sample.mdb repository
window.
13. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog Parts
Catalog - IMS.
b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as parts.use) for the generated USE statements
and click OK.
d. After the Data Mapper generates the USE statements, it offers you a
chance to see the USE statements (metadata grammar) that were
generated. Click Yes to open the USE statements in Microsoft Notepad.
Note: If you plan to use tools that require access to the metadata catalog
information, you must run CACGRANT using CACGRSYS as the input.
Note: If you are testing your own IMS tables, you can either create your
own SQL in the same member (CACSQL) or create another member
for your SQL statements.
4. Configure the client.
The client configuration file is used to communicate to the data server using
the communication protocol defined in the data server.
In the SCACCONF data set there is a member called CACUSCF. Configure the
DATASOURCE parameter based on the communications protocol set up in the
See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic
Event Publishing for detailed information about configuring and using the various
DB2 Information Integrator Classic Federation for z/OS clients.
Overview
The following sections describe how to enable SQL access to Sequential data. They
describe this process using a sample Sequential file that was created during the
installation process of the data server. The sample Sequential file contains 34
records of employee information. A sample COBOL copybook describing this file is
also provided in the SCACSAMP data set.
Although this chapter uses a sample database, you use the same general steps to
enable SQL access to your own Sequential databases:
v On Windows, use the Data Mapper to create logical tables based on the
Sequential copybook and to export those table definitions as metadata (USE)
grammar.
v On the mainframe, use the metadata utility to load the logical tables into the
data server’s metadata catalog.
v Verify SQL access to the Sequential data.
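As a sketch of the verification step, after the mapping that the sections below
create (CAC.EMPLSEQ) is loaded into the metadata catalog, a query like the
following can be issued through the batch client:

```sql
-- Sequential files have no indexes, so any query scans the entire file.
SELECT COUNT(*) FROM CAC.EMPLSEQ;   -- the sample file holds 34 records
```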
These steps are described in more detail in the following sections. For additional
information about developing and deploying applications with DB2 Information
Integrator Classic Federation for z/OS, see IBM DB2 Information Integrator
Administration Guide and Reference for Classic Federation and Classic Event Publishing.
Note: Because Sequential files do not have indexes, any SQL statement issued to
this data source results in a full scan of the Sequential file.
Note: In all of the jobs that are described in this section, you must customize the
JCL as appropriate for your site.
c. Click OK.
7. Import the field (column) definitions from the cacemp.fd copybook that you
transferred from the mainframe.
a. On the File menu, click Import External File.
b. Select the cacemp.fd copybook.
c. Click OK.
e. Click Import.
The COBOL definitions are imported from the cacemp.fd copybook into the
table CAC.EMPLSEQ and mapped to SQL data types. The resulting table
columns are shown in the Columns for SEQUENTIAL Table EMPLSEQ window.
8. Close all windows within the Data Mapper except the Sample.mdb repository
window.
9. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog Employee
Sample - Sequential.
b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as emplseq.use) for the generated USE statements
and click OK.
d. After the Data Mapper generates the USE statements, it offers you a chance
to see the USE statements (metadata grammar) that were generated. Click
Yes to open the USE statements in Microsoft Notepad.
Note: If you are testing your own Sequential files, then you can create
your own SQL in the same member (CACSQL) or create another
member for your own SQL statements.
b. Configure the client:
The client configuration file is used to communicate to the data server using
the communication protocol defined in the data server.
In the SCACCONF data set there is a member called CACUSCF. Configure
the appropriate DATASOURCE parameter based on the communications
protocol set up in the data server, as described in IBM DB2 Information
Integrator Installation Guide for Classic Federation and Classic Event Publishing.
c. Execute the local client.
In the SCACSAMP data set there is a member called CACCLNT. This job
executes the client batch job to issue SQL to the data server using CACSQL
as the input SQL. Customize the JCL to run in your environment and
submit.
d. View the output. The output should contain the SQL statement that is being
issued and the corresponding result sets.
See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic
Event Publishing for detailed information about configuring and using the various
DB2 Information Integrator Classic Federation for z/OS clients.
Overview
The following sections describe how to enable access to VSAM data through the
native VSAM interfaces and through CICS.
Your VSAM installation should have a sample VSAM cluster. This VSAM cluster
contains 34 records of employee information. The same process described below
can be used to bring your own VSAM database online.
Although this chapter uses a sample database, you use the same general steps to
enable access to your own VSAM databases:
v On Windows, use the Data Mapper to create logical tables based on the VSAM
copybook and to export those table definitions as metadata (USE) grammar.
v On the mainframe, use the metadata utility to load the logical tables into the
data server’s metadata catalog.
v Verify access to the VSAM data.
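As a sketch of the verification step, after a VSAM mapping is loaded into the
metadata catalog, a query like the following can be issued through the batch
client. CAC.EMPCICS is the CICS VSAM logical table that the sections below map;
substitute the name of your own mapping if you use the native VSAM path.

```sql
-- CAC.EMPCICS is mapped later in this chapter for CICS access.
SELECT COUNT(*) FROM CAC.EMPCICS;   -- the sample cluster holds 34 records
```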
Note: In all the jobs that follow, you must customize the JCL as appropriate for
your site.
Note: All references to VSAM files throughout the DB2 Information Integrator
Classic Federation for z/OS documentation also apply to IAM files. IAM
(Innovation Access Method) is supplied by Innovation Data Processing and
is a reliable, high-performance disk file manager that can be used in place of
VSAM KSDS or ESDS data sets. Innovation Data Processing also provides an
optional Alternate Index feature that is fully supported by DB2 Information
Integrator Classic Federation for z/OS. The only exceptions are references to
VSAM RRDS files, which currently are not supported by IAM.
The COBOL definitions are imported from the cacemp.fd copybook into the
table CAC.EMPCICS and mapped to SQL data types. The resulting table
columns are shown in the Columns for VSAM Table EMPCICS window.
8. Close all windows within the Data Mapper except the Sample.mdb repository
window.
9. Generate USE statements for the mappings that you created.
a. In the Sample.mdb repository window, select the data catalog Employee
Sample - CICS VSAM.
b. On the File menu, click Generate USE Statements.
c. Enter a file name (such as empcics.use) for the generated USE statements
and click OK.
d. After the Data Mapper generates the USE statements, it offers you a chance
to see the USE statements (metadata grammar) that were generated. Click
Yes to open the USE statements in Microsoft Notepad.
To access VSAM data through CICS, the DB2 Information Integrator Classic
Federation for z/OS data server establishes a VTAM® LU 6.2 connection to CICS
to initiate a transaction when a query begins and uses this transaction to
communicate with CICS for the duration of the query. To establish this
environment, VTAM and CICS definitions are required. An additional DB2
Information Integrator Classic Federation for z/OS table mapping is also
required to define a table that uses CICS to access VSAM data.
Note: The sample is set to allow 100 concurrent users. If you need to support
additional users, adjust the EAS and DSESLIM values.
The load modules CACCICAT and CACCIVS must be copied from the DB2
Information Integrator Classic Federation for z/OS load library to the CICS user
load library.
The CACCDEF member in the SCACSAMP data set contains a sample job that adds the
CICS transaction, program, connection, session, and file definitions that
DB2 Information Integrator Classic Federation for z/OS requires. Follow these
steps to run the job:
1. Update the job card for your site specifications.
2. Update the STEPLIB for the correct CICS library.
3. Update the DFHCSD DD for the correct CSD file.
4. Update the DSNAME in the DEFINE FILE to the name of the sample VSAM
file that was installed.
5. Update the ATTACHSEC parameter in the DEFINE CONNECTION entries to
VERIFY if you want CICS to validate the userid and password.
Note: The connection used by the metadata utility (EXC2) should not be set to
VERIFY, because this transaction only inquires on file attributes and the
metadata utility does not send a userid or password.
6. Update the MAXIMUM parameter in the DEFINE SESSION entries to increase
the number of concurrent users. This should be the same as specified on the
DSESLIM and EAS values in the APPL definition.
Note: When adding your own files to CICS, the file operation BROWSE must be
specified to allow SELECT queries, ESDS UPDATE queries, and UPDATE,
INSERT, or DELETE queries to an RRDS file to process. READ must be
specified to allow UPDATE, INSERT, or DELETE queries to process.
UPDATE must be specified to allow UPDATE queries to process. ADD
must be specified to allow INSERT queries to process. DELETE must be
specified to allow DELETE queries to process.
After successful completion of the job, the new definitions must be installed. This
is accomplished with the following CICS transaction:
CEDA INSTALL GR(CACVSAM)
The CACVSAM group should then be added to your start-up group. This is
accomplished with the following CICS transaction:
CEDA ADD GR(CACVSAM) LIST(xxxxxxxx)
In the example above, xxxxxxxx is the name of the start-up group from your SIT
table.
Note: If you are testing your own VSAM files, then you can create your
own SQL in the same member (CACSQL) or create another member
for your own SQL statements.
b. Configure the client.
The client configuration file is used to communicate to the data server using
the communication protocol defined in the data server.
In the SCACCONF data set is a member called CACUSCF. Configure the
DATASOURCE parameter based on the communications protocol set up in
the DB2 Information Integrator Classic Federation for z/OS data server, as
described in the IBM DB2 Information Integrator Installation Guide for Classic
Federation and Classic Event Publishing.
c. Run the local client.
See IBM DB2 Information Integrator Client Guide for Classic Federation and Classic
Event Publishing for detailed information about configuring and using the various
DB2 Information Integrator Classic Federation for z/OS clients.
To access the latest DB2 Information Integrator product documentation, go to the DB2 Information Integrator Support Web site and click the Product Information link, as shown in Figure 7 on page 92.
The Product Information link provides access to the latest DB2 Information Integrator documentation, in all supported languages:
v DB2 Information Integrator product documentation in PDF files
v Fix pack product documentation, including release notes
v Instructions for downloading and installing the DB2 Information Center for
Linux, UNIX, and Windows
v Links to the DB2 Information Center online
Scroll through the list to find the product documentation for the version of DB2 Information Integrator that you are using.
You can also view and print the DB2 Information Integrator PDF books from the
DB2 PDF Documentation CD.
To view the installation requirements and release notes that are on the product CD:
v On Windows operating systems, enter:
x:\doc\%L
where x is the Windows CD drive letter and %L is the locale of the documentation that you want to use, for example, en_US.
v On UNIX operating systems, enter:
/cdrom/doc/%L/
where cdrom refers to the UNIX mount point of the CD and %L is the locale of the documentation that you want to use, for example, en_US.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country/region or send inquiries, in
writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan
The following paragraph does not apply to the United Kingdom or any other
country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product, and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious, and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.
Trademarks
The following terms are trademarks of International Business Machines
Corporation in the United States, other countries, or both:
IBM
CICS
DB2
DB2 Universal Database
IMS
Language Environment
RMF
VTAM
WebSphere
z/OS
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Intel, Intel Inside (logos), MMX and Pentium are trademarks of Intel Corporation
in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Index

A
Adabas 23
  accessing data 26, 27
  creating logical tables 24
  interface 18
  loading metadata catalog with metadata grammar 24
  sample database 23
  setting up the environment 23
applications
  binding 51

B
binding applications 51

C
CA-Datacom 29
  accessing data with SQL 37, 39
  loading metadata catalog 35
  mapping source language member 30
  punching source language member 29
CA-Datacom/DB
  interface 18
CA-IDMS 41
  accessing data with SQL 48, 49
  interface 18
  loading metadata catalog 46
  mapping schema/subschema 42
  punching schema/subschema 41
  setting up 41
catalog access rights, granting 37, 53, 66
catalogs
  See also metadata catalog
  loading
    Adabas 24
    DB2 Universal Database 52
    Sequential 75
child segments, mapping 15
CICS VSAM 79
client
  communications 2
clients 5, 20
communications
  between server and application 16
  supported protocols 14
configuration members, updating 21
connection handler services 17
connection handlers 4
connectors 18
CPU Resource Governor exit 19
cross memory 14

D
Data Mapper 5, 7, 16
data mapping
  See Mapping data
data servers 2
  automatically starting 7
  configuring 21
  data sources 15
data spaces 14
DB2
  interface 18
  Thread Management exit 20
DB2 UDB 51
DB2 Universal Database
  accessing data 54, 55
  creating logical tables 52
  sample database 51
DBCTL subsystem 67
DELETE command 5

E
enterprise servers 7

F
FTP support
  Data Mapper 10

I
IMS
  accessing data with SQL 68, 69
  BMP/DBB interface 18
  DRA interface 18
  establishing DBCTL/DRA 67
  loading metadata catalog 65
  mapping child segments 15
  mapping data 57
initialization services 17
INSERT statement 5

J
JOINs 3

L
load balancing 7
logger service 18
logical tables
  creating for Adabas 24

M
mapping data 7, 15, 16
  data sources 15
  DB2 Universal Database 52
  IMS 57
  Sequential 71
metadata catalog
  loading for CA-Datacom 35
  loading for CA-IDMS 46
  loading for IMS 65
  loading with Adabas metadata grammar 24
metadata grammar 8
  loading metadata catalog 24
MTO interface 18

N
nonrelational data
  mapping 15, 16
  translating to relational data 8

O
ODBC client 6
operational components 2

P
punching schema/subschema 41

Q
queries
  optimizing 3
query processor services 18
query processors 4, 5
  configuring 21

R
Record Processing exit 20
region controller 17
region controllers 4
result sets
  converting to consistent relational form 2
retrieving child segments from IMS databases 15

S
SAF exit 19
schema
  mapping CA-IDMS 42
security
  SAF exit 19
Sequential 71
  accessing data with SQL 76, 77

T
tables
  creating logical for Adabas 24
TCP/IP 14

U
UPDATE statement 5
user configuration overrides 21

V
VSAM 79
  accessing data with SQL 89, 90
  interface 18

W
WebSphere MQ 14
Workload Manager exit 19
To learn about available service options, call one of the following numbers:
v In the United States: 1-888-426-4343
v In Canada: 1-800-465-9600
To locate an IBM office in your country or region, see the IBM Directory of
Worldwide Contacts on the Web at www.ibm.com/planetwide.
Product information
Information about DB2 Information Integrator is available by telephone or on the
Web.
If you live in the United States, you can call one of the following numbers:
v To order products or to obtain general information: 1-800-IBM-CALL
(1-800-426-2255)
v To order publications: 1-800-879-2755
Printed in USA
GC18-9155-02