

Glossary

Ab Initio Environment
The Ab Initio Environment is a collection of one or more interconnected projects that share certain basic definitions through a shared common environment project. There are three levels of Ab Initio Environment: Basic, Standard, and Enterprise. In the EME, all the projects in the Ab Initio Environment are located in the same area of the datastore.

absolute path
An absolute path specifies the exact location of a file in a hierarchical file system starting from the root, for example, /c/mysandbox/mp/mygraph.mp. Also called full path. Compare with relative path.

ad hoc multifile
An ad hoc multifile is a parallel file effectively created by naming a set of serial files as its partitions. You name these partitions by explicitly listing the serial files on the component's Description tab or by using a shell expression that expands at runtime to a list of serial files. Compare with multifile.

all-to-all flow
All-to-all flows typically connect components with different numbers of partitions. Data from any of the upstream partitions can be sent to any of the downstream partitions. The most common use of all-to-all flows is to repartition data.

base data mount
The base data mount is the file system directory under which all serial files, MFS control files, and log areas in the Ab Initio Environment are arranged.

Basic Environment
The Basic Environment is the simplest level of the Ab Initio Environment. It consists of a single private project and a required environment project.

built-in component
A built-in component is a component that is defined by the Co>Operating System. You can use both built-in components and custom components to build graphs and subgraphs. Compare with custom component.

built-in function
A built-in function is a DML function defined by the Co>Operating System, which allows you to manipulate strings, dates, and numbers, and to access system properties.
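For illustration, built-in functions are typically called from rules in a transform. The sketch below uses function names such as string_concat and today that are common DML built-ins; treat the exact names, signatures, and field names as assumptions to check against the DML reference, not as authoritative:

```
/* Hypothetical transform rules calling DML built-in functions.
   Field names are invented for this example. */
out.full_name :: string_concat(in.first_name, " ", in.last_name);
out.load_date :: today();
```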
business rule
A business rule is an instruction in a transform function that directs the construction of one field in an output record. Also called rule.

catalog
A catalog is a file on the run host that stores information about lookup files. One way to share lookup files among multiple graphs is to share a catalog among them.

checkin
In the EME, checkin is the process by which projects, graphs, or files are copied from your sandbox to a project in an EME datastore. Checkin places the new versions under source control and makes them available to other users. You can check in through the GDE's Checkin Wizard or at the command line.

checkout
In the EME, checkout is the process by which a copy of a project, graph, or file is transferred from a project in an EME datastore to your sandbox. You can check out through the GDE's Checkout Wizard or at the command line.

checkpoint
A checkpoint is an intermediate stopping point in a graph. A checkpoint saves status information, which allows you to recover completed stages of a graph that occurred prior to a failure. In the GDE, a checkpoint is denoted as a white number inside a blue box. In a continuous flow graph, a checkpoint is an extra package of information that appears between records in a continuous data flow. When a component receives a checkpoint, the component saves intermediate state. If processing is interrupted, you can restart the graph from the last checkpoint.

client project
In the EME, a client project is a project that includes one or more common projects.

Co>Operating System
The Co>Operating System is Ab Initio core software that unites a network of computing resources (CPUs, storage disks, programs, datasets) into a production-quality data-processing system. It provides a distributed model for process execution, file management, process monitoring, checkpointing, and debugging. You can interact with the Co>Operating System through its graphical user interface, the Graphical Development Environment (GDE).

common project
In the EME, a common project is a project that is included in another project. The parameter values of the common project are accessible by the including project, called the client project.

component
An Ab Initio component can represent either a dataset or a program that operates on data records in specified ways. Use components connected with flows to construct a graph that represents an application. There are three kinds of components: datasets, subgraphs, and program components.

component parallelism
Component parallelism occurs when program components execute simultaneously on different branches of a graph. The more branches a graph has, the greater the possibilities for component parallelism.

computepoint
In a continuous flow graph, a computepoint is an extra packet of information sent between records on a flow. A computepoint marks a block of records that you want to process as a group. When you have multiple input flows to a continuously enabled component, a computepoint indicates which blocks of data on one flow correspond to which blocks of data on another flow.

configuration variable
A configuration variable is a name-value pair that controls the behavior of Ab Initio software, typically with regard to debugging, resource allocation, and remote connection methods. You can define it in three places: (1) in $HOME/.abinitiorc, (2) in $AB_HOME/config/abinitiorc, and (3) in the environment. The names of configuration variables begin with the prefix AB_. Compare with environment variable.

continuous component
A continuous component is a component that is designed specifically for use in a continuous flow graph. Most continuous components are located in the Continuous folder of the Component Organizer. Examples of continuous components are Subscribe and Publish. Many other components are continuously enabled.

continuous flow graph
A continuous flow graph is a graph that is intended to run indefinitely, continually accepting new input and producing new, usable output while the graph keeps running. It consists of one or more subscribers as the data source, a publisher at the end of every flow, and any continuous or continuously enabled components between a subscriber and a publisher.

continuously enabled component
A continuously enabled component is a component that can (1) save and restore intermediate state (known as checkpointing) and (2) pass computepoints and checkpoints to and from the data flows in the graph. Most of the built-in components are continuously enabled.

control partition
A control partition is the file in a multifile that contains the locations (URLs) of the multifile's data partitions. A control partition can reside on a different computer from those containing the data partitions.

custom component
A custom component is a component that you build from a template. You can use both custom and built-in components to build graphs and subgraphs. Compare with built-in component.
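As a sketch, an abinitiorc file holds configuration variables as name-value pairs. The fragment below is purely illustrative; the variable names and the exact separator syntax are assumptions that should be checked against the Co>Operating System documentation:

```
# Illustrative $HOME/.abinitiorc fragment (names and syntax are assumptions)
AB_WORK_DIR : /var/abinitio/work
AB_CONNECTION : ssh
```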
custom layout
A custom layout is a type of layout that provides a list of URLs that specify serial files on different hosts and in different directory locations.

custom sequence modifier
A custom sequence modifier is the sort order you specify for strings. It is part of a key specifier.

data parallelism
Data parallelism occurs when you separate data into multiple segments called partitions, allowing copies of a program component to operate on the data in all the partitions simultaneously. To support this form of parallelism, Ab Initio software provides partitioning components to segment data, and departitioning components to merge segmented data.

data partition
A data partition is a file in a multifile that contains data. Collectively, all the data partitions in the same multifile are a single virtual file managed by the control partition.

data type
See type.

database configuration file
A database configuration file provides the GDE with the information it needs to connect to a database. The filename has a .dbc extension. A database configuration file contains, at a minimum: (1) the name and version number of the database software; (2) the name of the computer on which the database instance or server runs; and (3) the name of the database instance, server, or provider.

Data Manipulation Language
See DML.

Database Package
The Database Package is an interface for integrating specific database management systems (like Oracle and DB2) into Ab Initio graphs. The Database Package, which is part of the Co>Operating System, includes components for building Ab Initio graphs that interface with standard databases.

dataset
A dataset is a logical collection of data, such as "the customer file on the mainframe system." Datasets can be either flat files or database tables.

dataset component
A dataset component represents a dataset. It can serve as the input or output of a graph. Dataset components are either file components or table components.

dataset definition
In Data Profiler, a dataset definition is an EME object created by Data Profiler to identify a logical dataset. A dataset definition identifies the type of dataset and has a reference to the dataset's record format. If the dataset is a table, the dataset definition also references the database configuration file as well as the table name.

datastore
A datastore is an individual installation of the EME. It is a permanent named and versioned collection of files and metadata. You can access the datastore through the GDE, the command line, and the EME Web Interface.

default rule
A default rule is a transform function rule that copies values from fields in the input to fields with the same name in the output.

departition
To departition means to combine or merge multiple flow partitions of data records into a single flow. Compare with partition (v.).
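Based on the minimum contents listed above, a .dbc file might look roughly like the fragment below. The field names here are hypothetical placeholders standing in for the actual .dbc keywords, and the values are invented; consult the Database Package documentation for the real format:

```
dbms: oracle
db_version: 10.2
db_nodes: dbserver1.example.com
db_name: SALESDB
```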
dependency analysis
In the EME, dependency analysis is a process by which the EME examines an entire project and traces how data is transformed and transferred, field by field, from component to component, within and between graphs. The results allow you to see how operations that occur later in a graph are affected by components earlier in the graph, and which data is operated on by which component.

deploy
In the GDE, to deploy a graph means to move the graph file and all supporting files to the host system without running the graph. In Data Profiler, to deploy a profile job means to create a script of a Profile Setup and its related Dataset Setups, without running the script.

depth
Depth refers to the number of partitions in a particular layout. Also called the degree of parallelism. The number representing the depth appears on the component in the GDE when the graph is run.

DML
DML is short for Ab Initio's Data Manipulation Language. Ab Initio products use DML internally to represent types, record formats, expressions, transform functions, and key specifiers. Users of Ab Initio products can enter DML directly, either as an alternative to, or in combination with, the GDE's graphical tools. To enter or edit DML, use the text view of the Record Format Editor, Transform Editor, and Package Editor.

.dml file
A .dml file has a .dml extension and contains record format definitions.

DML statement
See statement.

DML type
See type.
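For example, a .dml file describing a simple delimited record might contain a record format like the following. This is a sketch of a standard DML record definition; the field names and delimiters are invented for illustration:

```
record
  decimal(",") id;
  string(",") name;
  date("YYYY-MM-DD") dob;
  string("\n") city;
end
```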
${} substitution
Dollar curly substitution (denoted as ${} substitution) is similar to dollar substitution, except that to obtain the value of a variable you must enclose its name in curly braces after the $, for example, ${foo}. This notation allows the $ symbol to appear explicitly in variable names.

$ substitution
Dollar substitution (denoted as $ substitution) is the process by which dollar-sign references (for example, $DB) are converted to values. When the GDE reads a dollar-sign reference, it searches for a parameter with the same name, such as DB, and replaces the dollar-sign reference with the value of that parameter.

EME
EME is short for Enterprise Meta>Environment. The EME is an object-oriented storage system that manages Ab Initio applications (including data formats and business rules) and related information. It provides an integrated and consolidated view of your business.

Enterprise Environment
The Enterprise Environment is the most complex level of the Ab Initio Environment. It contains multiple private and public projects, organized hierarchically in a tree structure, and the required environment project.

environment project
An environment project is a special public project that exists in every Ab Initio Environment. Its parameter values give the other projects in the environment their "environment identity". Through the environment project's parameters, the projects share a common data area, a common area in an EME datastore, a common multifile system, and numerous other static and runtime values.

environment variable
An environment variable is a variable that is bound in the current environment. When an expression is evaluated in a particular environment, the evaluation of a variable consists of looking up its name in the environment and substituting its value. In a Korn shell, an environment variable is defined with the export statement. Compare with configuration variable.

export
Export means to pass on properties (parameters, layouts, or ports) from components to enclosing graphs or subgraphs. For example, if you export a component parameter to its containing graph, the component parameter receives its value from the corresponding graph parameter.

expression
A DML expression describes a simple computation of a single value from other values in constants, data record fields, or the results of other expressions.

fan-in flow
The fan-in flow pattern connects components with some number of partitions to components with a smaller number of partitions, thereby merging data in multiple segments into fewer segments or a single segment. The most common use of fan-in is to connect flows to departitioning components like Gather. Compare with fan-out flow, all-to-all flow, and straight flow.

fan-out flow
A fan-out flow connects components with some number of partitions to components with a larger number of partitions, thereby dividing data into many segments for performance improvement. The most common use of fan-out is to connect flows from partition components.

file lock
In the EME, a file lock makes checked-out files, graphs, and projects unavailable to other users and prevents them from changing the object as you work on it in the GDE. The lock button on the GDE tool bar identifies the object's lock status: unlocked, locked by you, or locked by another user.

flat file

A flat file is a file that contains unstructured data. Typically, flat files are not connected to one another either through links or internal pointers. Common flat file types include ASCII or EBCDIC files, comma-separated lists, Excel spreadsheets, and transform files. Flat files can be parallel or nonparallel.

flow
A flow carries a stream of data between components in a graph. Flows connect components at their ports. Ab Initio software supplies four kinds of flows (straight, fan-in, fan-out, and all-to-all), which are denoted by different flow patterns in the GDE.

GDE
GDE is short for Graphical Development Environment. It is the user interface to the Co>Operating System. It allows you to create and edit graphs and monitor their execution.

global variable
A global variable is a variable declared outside a transform function; it persists for the life of the component process. Compare with local variable.

graph
A graph is a diagram that defines the various processing stages of a task and the streams of data as they move from one stage to another. In the GDE, stages are represented by components, and streams are represented by flows. Also called dataflow graph.

graph component
A graph component, also called a subgraph, is a group of dataset or program components.

host directory
The host directory is the directory in which your graphs execute on the run host.

host settings file
The host settings file contains the information that the GDE, Data Profiler, and Shop for Data need to log in to the run host, such as the name of the host, login or username, password, and type of shell with which you log in. Specify this information in the Host Settings dialog. The default host settings file is named Normal.aih.

include file
See package.

job
A job is a given execution of a graph. For example, running a graph three times produces three jobs.

key
The key is the field or fields of a record upon which an operation (ordering, partitioning, or grouping of records) is based. For example, to group records based on a field named age, specify age as the key.

key specifier

A key specifier is the DML representation of a key. It describes the way input and output data records are ordered, partitioned, or grouped. A key specifier consists of one or more fields in a record, and optionally includes an order specification, character ordering sequence, and other modifiers.

layout
A layout is a list of host and directory locations, usually specified as the URL of a file or multifile. If a layout has multiple locations but is not a multifile, the layout is a list of URLs called a custom layout. A program component's layout is the list of hosts and directories in which the component runs. A dataset component's layout is the list of hosts and directories in which the data resides.

local variable
A local variable is a variable declared within a transform function. It persists for a single evaluation of the transform function. It is reinitialized each time the transform function is called. Compare with global variable.

location parameter
In the EME, the location parameter is the name of the parameter that represents the top-level directory of an EME project, for example, /Projects/warehouse. In a sandbox, the location parameter represents the absolute path of the sandbox, for example, /u/dev/tom/mysandbox. The default value of the location parameter is PROJECT_DIR.

lookup expression
A lookup expression in a transform calls a built-in function that retrieves data from a lookup file. The first argument of the expression is the label of the lookup file; the second is the index expression.

lookup file
A lookup file is a dataset component that represents a file of data records that is small enough to fit in main memory, letting a transform function retrieve records much more quickly than it could if they were stored on disk. Lookup files associate key values with corresponding data values to index records and retrieve them. Unlike other datasets, a lookup file is not connected with flows to other components. You use the built-in functions lookup and lookup_local to access lookup files.

metadata
Metadata is a description of data. Several kinds of metadata are: record formats (usually in .dml files), key specifiers (for grouping and ordering), computations (transform functions in .xfr files), graphs, datasets, high-level data descriptions, tracking information, job history, categories, and versioning. The metadata associated with applications and related data is known as technical metadata. User-defined documentation of job functions and roles is known as enterprise metadata.

.mdc file
A .mdc file has a .mdc extension and represents a dataset or custom dataset component.

.mp file
A .mp file has a .mp extension and stores an Ab Initio graph or graph component.

.mpc file
A .mpc file has a .mpc extension and stores a program or custom program component.

multidirectory

A multidirectory is a parallel directory that is composed of individual directories, typically on different disks or computers. The individual directories are partitions of the multidirectory. Each multidirectory contains one control directory and one or more data directories. Multifiles are stored in multidirectories.

multifile
A multifile is a parallel file composed of individual files, typically on different disks or computers. The individual files are partitions of the multifile. A multifile has one control partition and a number of data partitions. You create multifiles by specifying a URL on a component's Description tab. The URL starts with mfile:, for example, mfile://r5.abinitio.com/dat/input.dat.

multifile system
A multifile system (MFS) is a specially created set of directories, usually on different machines, that have identical substructures. Each directory is a partition of the multifile system. When a multifile is placed in a multifile system, its partitions are files within the directories that are partitions of the multifile system. Compare with multifile.

multistage transform
A multistage transform is a transform component that modifies records in up to five stages: input selection, temporary initialization, processing, finalization, and output selection. A different DML transform function performs each stage. The multistage components are Denormalize Sorted, Normalize, Rollup, and Scan.

named type
A named type is a user-defined type with a name applied for shorthand. It is used to locally group related fields.

natural key
A natural key is a field or set of fields that uniquely identifies a record in a file or table. A natural key is a key that is meaningful in some business or real-world sense. For example, a Social Security number for a person, or a serial number for a piece of equipment, is a natural key. Compare with surrogate key.

NULL
NULL represents the absence of a value. When the Co>Operating System cannot evaluate an expression, it produces NULL.

package
A DML package is a group of types, variables, statements, and transform functions in a collection that you can refer to by name, and reuse.

parallel file
A parallel file is a general term for an Ab Initio multifile.

parallelism
Parallelism relates to the simultaneous performance of multiple operations. Ab Initio software uses three kinds of parallelism: component parallelism, data parallelism, and pipeline parallelism.

parameter

A parameter specifies some aspect of the behavior of a graph, component, project, or sandbox. Specifically, a parameter consists of a name and a value, with a number of additional attributes that describe how and when to interpret or resolve the parameter's value. An example of a parameter is select_expr of the Filter by Expression component.

partition (n.)
A partition is either a portion of a multifile or a segment of a parallel computation.

partition (v.)
To partition data is to divide it into segments, so the data can run in parallel. Some components partition data. For example, the Partition by Round-robin component can divide its input into equal segments, one record at a time, resulting in partitioned data. Partitioned data can be stored in multifiles.

phase
A phase is a stage of a graph that must run to completion before the start of the next stage. By dividing a graph into phases, you can control the number of components that run simultaneously. Also, phases save status information, so you can use them to recover from graph failures. A phase is denoted by a blue number in a white box. See also checkpoint.

pipeline parallelism
Pipeline parallelism occurs when a connected sequence of program components on the same branch of a graph execute simultaneously. Each component in the pipeline continuously reads from upstream components, processes records, and writes to downstream components.

port
A port is a connection point that allows data to flow into and out of a component. Flows connect to ports. Every port has a record format.

port binding
A port binding associates ports of components in a subgraph with ports of the subgraph itself. You can create port bindings by dragging a flow from a port in a subgraph to the outer edge of the subgraph. Port bindings are represented by dotted gray lines. The subgraph port always has the same properties as the inner port.
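To make the name-value idea concrete, a graph parameter's value can itself contain dollar-sign references that are resolved by $ substitution. The parameter names below are invented for illustration, not taken from any real project:

```
# Hypothetical graph parameters (name = value)
DB        = dev_db
TABLE_DML = $AB_DATA_DIR/dml/${DB}_customers.dml
```

Here $AB_DATA_DIR would resolve to that parameter's value, and ${DB} resolves to dev_db, so the braces let the reference sit directly against the following text.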
priority
The priority is the order of evaluation that you assign to rules assigned to the same output field in a transform function. The rule with the lowest-numbered priority is evaluated before rules with higher-numbered priorities. A rule without an assigned priority is evaluated last.

private project
A private project is a project in the Ab Initio Environment that is not supposed to be accessible to other projects. Private projects are typically where ongoing development work is done.

process lock
In the EME, a process lock is an internal data structure that keeps multiple users from overwriting each other's data. Process locks are not user visible. Compare with file lock.

processing host

A processing host is a computer on which the Co>Operating System is installed and where individual instances of graph component processes execute. The processing host is managed by the run host. In a multiple-host environment, there may be many processing hosts, each processing a small piece of the job. Compare with run host.

Profile Output
Profile Output is an EME object summarizing a Data Profiler job. The Profile Output object references Dataset Profile objects, which, in turn, reference the related Physical Element Profile objects.

profile results
In Data Profiler, profile results are the EME objects that result from running a Data Profiler job. Profile results include Profile Outputs, Dataset Profiles, and Physical Element Profiles.

program component
Program components represent the processing stages of a graph. Sort is an example of a program component.

project
In the EME, a project is a collection of graphs and related files that are stored in a single directory tree in an EME datastore and are treated as a group for purposes of version control, navigation, and migration. Logically, a project is a self-contained and largely independent content area that accomplishes a single business goal. Projects are checked out to sandboxes.

project data directories
In the Ab Initio Environment, project data directories are the project-specific file system areas for serial and multifile data.

project parameters
In the EME, project parameters specify various aspects of project behavior. Each parameter has a name and a string value. When a project is created, it comes with a set of default project parameters that set up a correspondence between file system URLs and locations within a project in a datastore.

project template
In the Ab Initio Environment, a project template is a "blank sandbox" used to create new sandboxes (and their default characteristics) for new public and private projects. The template is stored in the Ab Initio Environment installation area.

propagation
Propagation in the GDE is the automatic assignment of the component layout and record format properties of one component to the neighboring components. Propagation occurs when you connect components with flows. When the GDE propagates a layout, it puts an asterisk (*) next to the layout marker (L1*). When it propagates a record format, it puts an asterisk (*) next to the port name (out*).

proxy file
A proxy file is a temporary file that is written by a deployed script. A proxy file contains the text embedded in the graph, such as transform functions and record formats.

public project

In the Ab Initio Environment, a public project is a common project, accessible to other projects. Projects must include a public project in order to gain access to its project parameters. A public project is the Ab Initio Environment's mechanism for sharing data among projects; access to the data is "exported" in the form of parameters in the public project.

publisher
A publisher is a component in a continuous flow graph that (1) writes data to user-specified destinations and (2) consumes computepoints and checkpoints. When a publisher receives a computepoint or checkpoint, it sends the output to the queue or file specified as the destination. Publisher components include Publish, Continuous Multi Update Table, and Gather Logs.

queue
In a continuous flow graph, a queue is a data structure that supports a single publisher writing data to it, and one or more subscribers reading data from it. Queues support persistence, commit and rollback, record-based first-in, first-out handling, and subscribers and publishers. There are different types of queues, including Ab Initio queues, MQ Series queues, and Java Message Service queues.

record format
A record format describes the data in a record. It may include data length and interpretation information. A record format is stored in either a .dml file or a DML string.

recovery
Recovery means restarting a job from the last completed checkpoint. When your graph fails (for example, one phase of a graph encounters an error), you do not need to rerun checkpointed phases that completed prior to the failure. The graph recovers from the last checkpoint. For the failed phases, all processes are terminated, all temporary files are deleted, and all nodes and their respective files are rolled back to their initial state by default.

relative path
A relative path is the location of a file relative to the current working directory or to an implied or explicitly indicated default directory. For example, the project-relative path of mygraph.mp in the mp directory of the /Projects directory is mp/mygraph.mp. A relative path does not begin with a slash. Compare with absolute path.

repartition
To repartition data is to change the degree of parallelism or the grouping of partitioned data.

reserved word
A reserved word is a DML term reserved for special situations. You cannot use a reserved word for the DML objects you name. There are currently about six dozen reserved words, for example: and, char, date, if, and for.

rule
A rule is an instruction in a transform function that directs the construction of one field in an output record. Also called business rule.

run host

The run host is the computer that starts and controls the execution of an Ab Initio graph in a multiple-host environment. The GDE connects to the run host to run graphs. The Co>Operating System must be installed on the run host. Compare with processing host.

sandbox
A sandbox is a single directory tree in which graphs and related files are stored and are treated as a group for purposes of version control, navigation, and migration. In the EME, a sandbox is a checked-out copy of a datastore project.

sequence modifier
A sequence modifier describes the sort order for strings in a key specifier. By default, DML sequences strings according to their character codes, but you can use sequence modifiers when you need to alter the sequence. The built-in alternatives to the default are phone book order, index order, and machine order. You can also define your own sequence modifier, called a custom sequence modifier.

serial file
A serial file is a flat, non-parallel file. You specify the name of a serial file as a URL on a component's Description tab. The URL of a serial file starts with file:, for example, file://r5.abinitio.com/dat/input.dat.

SFD parameter language
The Shop for Data (SFD) parameter language is a metaprogramming language. It specifies how an SFD graph derives parameters and creates parameter prompts in the Ab Initio Web user interface. Use it to enhance the SFD components' parameters or to create your own components, parameters, and reusable functions.

skew
Skew is the measure of unbalanced parallel behavior with regard to data storage or program execution. Skew varies from -100 to 100; 0 is balanced storage or execution.

SQL statement
An SQL statement is a complete command written in the SQL language for creating, updating, and querying relational database management systems.
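For instance, the following is a complete SQL statement of the querying kind (the table and column names are invented for illustration):

```sql
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id;
```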
Standard Environment
A Standard Environment is an Ab Initio Environment that has more than one private project and perhaps other public projects, in addition to the required environment project.

statement
A statement is a DML construct that can assign a value to a local variable, a global variable, or an output field. A statement can also define processing logic or control the number of iterations of another statement. DML statements include these types: if, while, for, block, switch, expression, and assignment. Statements are used in DML transform functions.

straight flow
Straight flows connect components that have the same number of partitions. Partitions of components connected with straight flows have a one-to-one correspondence. Straight flows are the most common flow pattern.
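As an illustrative sketch, a transform function might combine a local variable, an if statement, and assignment rules like this. The transform name, field names, and declared types are invented for the example; the exact DML syntax should be checked against the DML reference:

```
/* Hypothetical transform using a local variable and an if statement. */
out :: classify(in) =
begin
  let decimal("") bonus = 0;
  if (in.sales > 10000)
    bonus = in.sales * 0.05;
  out.name  :: in.name;
  out.bonus :: bonus;
end;
```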

subgraph A subgraph is a graph fragment. Just like graphs, subgraphs can contain components and flows. Subgraphs are useful for organizing a graph into reusable subtasks.

subscriber A subscriber is a continuous flow component that (1) writes data from various sources into a continuous flow graph and (2) generates computepoints and checkpoints. Components that are subscribers include Batch Subscribe, Generate Records, JMS Subscribe, MQ Subscribe, Subscribe, and Universal Subscribe.

surrogate key A surrogate key is a field that is added to a record, either to replace the natural key or in addition to it, and has no business meaning. For example, a customer number is a typical surrogate key. Surrogate keys are frequently added to records when a data warehouse is populated, in order to help isolate the records in the warehouse from changes to the natural keys by outside processes. Compare with natural key.

to-do cue A to-do cue is a yellow highlighted area that lets you know that required information is missing.

transform component A transform component modifies data, typically using a transform function. Modifications include reformatting data, filtering data, and merging separate data streams.

transform function A transform function is the logic that describes how to compute output values from input values. A transform function consists mainly of one or more business rules that express the computation logic. Within a transform function, you can prioritize rules, use local variables, and include statements. A transform function can be stored in a file or in an embedded string.

two-stage routing Two-stage routing is a Co>Operating System mechanism for saving networking resources when an all-to-all flow connects components with layouts containing large numbers of partitions. Two-stage routing is available only if the number of partitions in the source and destination is the same. When you set two-stage routing, the flow symbol changes from a pattern with one X in the middle to a pattern with two Xs in the middle.

type A DML type describes the structure of data. DML provides built-in types as well as the capability for graph developers to create their own user-defined types. The built-in DML base types include voids, integers, reals, decimals, dates, datetimes, and strings. Built-in compound types are vectors, records, unions, and user-defined types.

URL URL is short for Uniform Resource Locator. Ab Initio software uses URLs to locate files. The format of a URL is protocol://hostname/directory1/directory2.../filename, where protocol is either file for a serial file or mfile for a multifile.
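The URL format described above follows the standard scheme://host/path shape, so a generic URL parser can split it into its parts. A minimal sketch (the URL and hostname are made up for illustration):

```python
from urllib.parse import urlparse

# Split an Ab Initio-style file URL into protocol, host, and path.
# "r5.abinitio.com" here is just an example hostname, as in the glossary.
url = "file://r5.abinitio.com/dat/input.dat"
parts = urlparse(url)

print(parts.scheme)  # file  -> a serial file (mfile would indicate a multifile)
print(parts.netloc)  # r5.abinitio.com
print(parts.path)    # /dat/input.dat
```

The same parse works for an mfile: URL, since urlparse accepts arbitrary scheme names; only the scheme value distinguishes a serial file from a multifile.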

value census In Data Profiler, a value census is an intermediate file containing an entry for each unique value that occurs for each Physical Element in a profiled dataset and a corresponding count of how many times that value occurs.

variable A variable is a named storage location in an expression or transform function containing data that can be modified during program execution. There are three types of variables: local variables, global variables, and variables associated with ports.

watcher A watcher is a debugging component that lets you view the data that has passed through a flow. There are two types of watchers: non-phased (the default) and phased.

wrapper object In Data Profiler, a wrapper object is an object in an EME datastore that provides a container for other objects. For example, a Dataset Setup is a wrapper object that identifies an EME dataset definition object and some profiling options.

.xfr file A file with a .xfr extension contains transform function definitions or packages.

Copyright 2004, Ab Initio Software Corporation, Confidential and Proprietary. All rights reserved.
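The value census entry above describes, in essence, a distinct-value/count table per element. A toy sketch of that structure, using an invented "state" column and records purely for illustration:

```python
from collections import Counter

# Toy stand-in for one Physical Element ("state") of a profiled dataset.
records = [
    {"state": "MA"}, {"state": "NY"}, {"state": "MA"},
    {"state": "CA"}, {"state": "MA"}, {"state": "NY"},
]

# The census idea: one entry per distinct value, with its occurrence count.
census = Counter(rec["state"] for rec in records)
print(census)  # Counter({'MA': 3, 'NY': 2, 'CA': 1})
```

Data Profiler builds such censuses as intermediate files per Physical Element; this in-memory Counter only mirrors the entry-plus-count shape of the data.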
